I'm trying to integrate a map into my project via OpenLayers. I followed different tutorials but I can't do it:

- I have a div displayed but not the map
- I have no errors in the console
- I installed it with `npm install --save ol`

Please help me find a solution.

My file app.component.ts:

```
import { Component, OnInit } from '@angular/core';
import Map from 'ol/Map.js';
import View from 'ol/View.js';
import TileLayer from 'ol/layer/Tile.js';
import OSM from 'ol/source/OSM.js';

@Component({
  selector: 'app-root',
  templateUrl: './app.component.html',
  styleUrl: './app.component.scss'
})
export class AppComponent implements OnInit {
  title = 'threePercent';
  map!: Map;

  ngOnInit() {
    this.initMap();
  }

  private initMap(): void {
    this.map = new Map({
      view: new View({
        center: [1, 1],
        zoom: 1,
      }),
      layers: [
        new TileLayer({
          source: new OSM(),
        }),
      ],
      target: 'map',
    });
  }
}
```

My file app.component.html:

```
<main>
  <div id="map" class="map-container"></div>
</main>
<router-outlet></router-outlet>
<app-app-bar></app-app-bar>
```

My file app.component.scss:

```
.map-container {
  width: 300px;
  height: 300px;
}
```

I tried to follow different tutorials but I can't find the problem:

- https://dev.to/camptocamp-geo/openlayers-in-an-angular-application-mn1
- https://medium.com/@pro.gramistka/create-interactive-maps-in-angular-12-project-with-openlayers-ba6683d6fe5b

[image of my app in google and the div][1]

[Error in console tab][2]

[Error in Network tab][3]

[1]: https://i.stack.imgur.com/rrznM.png
[2]: https://i.stack.imgur.com/ZE5G7.png
[3]: https://i.stack.imgur.com/md1Ak.png
You could use text annotations, using the `annotate` method on the `ax` (`matplotlib.axes.Axes`) attribute of the `FacetGrid` object returned by `seaborn.catplot`. For example, the code below annotates the observations that are greater than .5 in a normal sample by using Boolean selection in the dataframe, `df[df.y > .5]`; `df.apply(...)` would annotate all points.

```
import pandas as pd
import numpy as np
import seaborn as sns

df = pd.DataFrame(data={'x': range(10), 'y': np.random.normal(0, 1, size=10)})
df['odd'] = df.x.apply(lambda x: x % 2)

g = sns.catplot(data=df, x='x', y='y', hue='odd')
df[df.y > .5].apply(lambda p: g.ax.annotate(f'({p.x}, {round(p.y, 2)})', (p.x, p.y)), axis=1)
```

[![enter image description here][1]][1]

You can see more details on the `annotate` method [here][2].

[1]: https://i.stack.imgur.com/tstub.png
[2]: https://matplotlib.org/stable/api/_as_gen/matplotlib.axes.Axes.annotate.html#matplotlib.axes.Axes.annotate
Hi, I had the same problem. You have to set this in the Vercel environment variables:

```
NEXTAUTH_URL="<your-vercel-domain>"
```

And above all, you have to redeploy your app so that it takes the variable into account; it's not dynamic.

You must also have `Automatically expose System Environment Variables` checked in the Vercel settings.

Once done, it works very well with `NEXTAUTH` and Vercel. Hoping I could help.
Streamlit nested button not displaying the result on clicking
|python|python-3.x|streamlit|
I have an existing project that uses both `unique = true` at the column level and `@UniqueConstraint` at the entity class level for the same column, as below. I am currently refactoring the code, so I need suggestions to improve it.

```
@Entity
@Table(name = "sales_order", uniqueConstraints = @UniqueConstraint(columnNames = "order_no"))
public class SalesOrder implements java.io.Serializable {

    private static final long serialVersionUID = 6008509031685325038L;

    //primary key column with some other columns

    @Column(name = "order_no", unique = true, nullable = false)
    private String orderNo;
}
```

We cannot use both for the same unique column, right? In the code above, which one is best for defining the unique constraint? I would say that if it's a single unique column, we can use `unique = true` at the column level, and if the unique key contains multiple columns, we should use `@UniqueConstraint`.
Which annotation should be used for a unique constraint in a JPA entity: unique=true or @UniqueConstraint at class level with a single unique column?
|jpa|spring-data-jpa|
I have a mapping between index and component:

```ts
const mappingComponent = [
  {
    index: 0,
    component: Information,
  },
  {
    index: 1,
    component: Data,
  },
  {
    index: 2,
    component: UseCase,
  },
  {
    index: 3,
    component: Categorization,
  },
  // ...
];
```

Inside my component:

```ts
const stepRef = useRef<StepRef>(null);

const renderUI = () => {
  const Component = mappingComponent.find((mc) => mc.index === step)?.component;
  if (!Component) return null;

  return (
    <Component ref={stepRef} />
  );
};
```

The interface:

```ts
export interface StepRef {
  onNext: () => void;
}
```

On each child component, I use `forwardRef` and `useImperativeHandle`:

```ts
useImperativeHandle(
  ref,
  () => ({
    onNext: () => {
      // TODO
    },
  }),
  [],
);
```

`step` is a number state that decides which component is rendered. The problem is that when the step changes, `stepRef.current` becomes `null`. How can I make the ref have a value again?
Here's what worked for me:

Step 1: Go to Window -> Devices and Simulators.
Step 2: Click on the Simulators tab and click the add button at the bottom.
Step 3: Add a new simulator: select the device type, and select the OS version as "Download more simulators". Click Create.
Step 4: Install at least one iOS version from each series.
Step 5: Once installed, you still won't be able to see it.
Step 6: After all this, restart your Mac.
Step 7: Open Xcode. You should see the installed simulators now.

Happy coding :)
I am trying to create a regular expression for the IP subnet 192.168.224.0/22. The valid IP range is 192.168.224.1 to 192.168.227.254. I will use this in a Sentinel KQL query. Is the below correct?

```
192\.168\.(22[4-7]|23[0-6])\.(25[0-4]|2[0-4][0-9]|[01]?[0-9][0-9]?)
```
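As a side note, the intended address range for this /22 can be double-checked with Python's standard `ipaddress` module (a quick sketch, not part of the question; any candidate regex can then be tested against every address it enumerates):

```python
import ipaddress

# The subnet in question: 192.168.224.0/22
net = ipaddress.ip_network("192.168.224.0/22")

# hosts() excludes the network and broadcast addresses
hosts = list(net.hosts())
print(hosts[0], hosts[-1])          # 192.168.224.1 192.168.227.254
print(net.broadcast_address)        # 192.168.227.255
```

Note that the third octet of the usable hosts only ever runs 224 through 227, which is the part a regex for this subnet has to capture.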
```
type ErrorResponse = {
  message: string;
};

const error: ErrorResponse = { message: 'Ups' };
```

Another alternative, as [explained here][1]:

```
const error = <ErrorResponse>{ message: 'Ups' };
```

[1]: https://stackoverflow.com/questions/43314791/how-to-instantiate-an-object-in-typescript-by-specifying-each-property-and-its-v
System and OS: Apple Silicon Mac M3, Sonoma OS

Trying to install neo4j with Docker and getting the below errors:

```
ERROR Failed to start Neo4j: Starting Neo4j failed: Component 'org.neo4j.server.database.LifecycleManagingDatabase@2b037cfc' was successfully initialized, but failed to start. Please see the attached cause exception "Some jar procedure files are invalid, see log for details."
```

Attaching a [log file][1] for more reference.

I tried googling and found a few solutions for the problem, but none of them worked for me. Can someone please help me out here?

[1]: https://i.stack.imgur.com/WtIf2.jpg
This command facilitates restoration but may be time-consuming: ``` sudo mongod --verbose --port 80 --bind_ip 10.0.0.1 --keyFile /root/mongo_key_pair.pem --storageEngine wiredTiger --dbpath /db/data/ --repair --directoryperdb ``` For further details, refer to the [MongoDB documentation](https://www.mongodb.com/docs/manual/reference/program/mongod/#core-options).
```
char *w = "Artîsté";
printf("%lu\n", strlen(w));

int z;
for (z = 0; z < strlen(w); z++) {
    //printf("%c", w[z]); //prints as expected
    printf("%i: %c\n", z, w[z]); //doesn't print anything
}
```

If I run this, it fails at the `î`. How do I print a multibyte char, and how do I know when I've hit a multibyte character?
How to get the characters from a UTF-8 string?
|c|character|multibyte-characters|
As an optimization, I decided to initialize push notifications in a separate process. It works well on Android 10 and higher, but the push notification isn't shown on lower versions of Android. I need it to work well on all Android versions. This is my manifest code:

```
<receiver
    android:name="com.google.firebase.iid.FirebaseInstanceIdReceiver"
    android:exported="true"
    android:permission="com.google.android.c2dm.permission.SEND"
    android:process=":pushprocess"
    tools:node="replace">
    <intent-filter>
        <action android:name="com.google.android.c2dm.intent.RECEIVE" />
    </intent-filter>
</receiver>

<service
    android:name="kz.kolesateam.push.kolesa.data.KolesaFirebaseMessagingService"
    android:exported="false"
    tools:node="replace"
    android:process=":pushprocess">
    <intent-filter>
        <action android:name="com.google.firebase.MESSAGING_EVENT" />
    </intent-filter>
</service>
```
Firebase push not received after initializing in another process
You can do the following:

- Wrap your `Dialog` with a `Center` and then a `SizedBox`.
- Set the desired width on the `SizedBox`.
- Add a `Padding` widget below the `Dialog`.

Code:

```dart
Widget build(BuildContext context) {
  return Center(
    child: SizedBox(
      width: MediaQuery.of(context).size.width * 0.5,
      child: Dialog(
        insetPadding: EdgeInsets.zero,
        backgroundColor: kWhiteColor,
        surfaceTintColor: Colors.transparent,
        clipBehavior: Clip.antiAlias,
        shape: RoundedRectangleBorder(
          borderRadius: BorderRadius.circular(34.0),
        ),
        child: Padding(
          padding: const EdgeInsets.all(20.0),
          child: Column(
            ...
```
This is what `Context: http` in the [documentation][1] means:

```shell
events {
    # ...
}

http {
    upstream name {
        # ...
    }

    # HERE
    log_format custom '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    server {
        # ...
    }
}
```

[1]: https://nginx.org/en/docs/http/ngx_http_log_module.html#log_format
You can fix this by installing [Redis Insight](https://redis.io/docs/connect/insight/) to make requests in the graphical environment.
[`requestAnimationFrame`](https://developer.mozilla.org/en-US/docs/Web/API/window/requestAnimationFrame) should be suitable for most use cases and it's much cleaner imho: ````js el.style.transition = 'none'; requestAnimationFrame(() => { el.style.transition = 'all 1s ease'; }); ````
I'm trying to process multiple messages in parallel from a queue with sessions enabled. I've tried setting `MaxConcurrentCallsPerSession` to 5, for example, but I am still receiving one message at a time. I wrote a console application to demonstrate what I'm trying to do:

```
static void Main()
{
    MainAsync().Wait();
}

static async Task MainAsync()
{
    //create the queue
    await CreateQueue();

    //initialize queue client
    ServiceBusClient queueClient = new ServiceBusClient(_serviceBusConnectionString, new ServiceBusClientOptions
    {
        TransportType = ServiceBusTransportType.AmqpWebSockets,
    });

    //initialize the sender
    ServiceBusSender sender = queueClient.CreateSender(_queueName);

    //queue 3 messages
    await sender.SendMessageAsync(new ServiceBusMessage() { SessionId = _sessionId, MessageId = "1" });
    await sender.SendMessageAsync(new ServiceBusMessage() { SessionId = _sessionId, MessageId = "2" });
    await sender.SendMessageAsync(new ServiceBusMessage() { SessionId = _sessionId, MessageId = "3" });

    //initialize processor
    ServiceBusSessionProcessor processor = queueClient.CreateSessionProcessor(_queueName, new ServiceBusSessionProcessorOptions()
    {
        AutoCompleteMessages = false,
        ReceiveMode = ServiceBusReceiveMode.PeekLock,
        SessionIds = { _sessionId },
        PrefetchCount = 5,
        MaxConcurrentCallsPerSession = 5
    });

    //add message handler
    processor.ProcessMessageAsync += HandleReceivedMessage;

    //add error handler
    processor.ProcessErrorAsync += ErrorHandler;

    //start the processor
    await processor.StartProcessingAsync();

    Console.ReadLine();
}

static async Task CreateQueue()
{
    ServiceBusAdministrationClient client = new ServiceBusAdministrationClient(_serviceBusConnectionString);
    bool doesQueueExist = await client.QueueExistsAsync(_queueName);

    //check if the queue exists, if not then create one
    if (!doesQueueExist)
    {
        _ = await client.CreateQueueAsync(new CreateQueueOptions(_queueName)
        {
            RequiresSession = true,
            DeadLetteringOnMessageExpiration = true,
            MaxDeliveryCount = 3,
            EnableBatchedOperations = true,
        });
    }
}

static async Task HandleReceivedMessage(ProcessSessionMessageEventArgs sessionMessage)
{
    Console.WriteLine("Received message: " + sessionMessage.Message.MessageId);
    await Task.Delay(5000).ConfigureAwait(false);
    await sessionMessage.CompleteMessageAsync(sessionMessage.Message);
    Console.WriteLine("Completed message: " + sessionMessage.Message.MessageId);
}

static Task ErrorHandler(ProcessErrorEventArgs e)
{
    Console.WriteLine("Error received");
    return Task.CompletedTask;
}
```

When executing the program, what I expect to receive is:

```
Received message: 1
Received message: 2
Received message: 3
Completed message: 1
Completed message: 2
Completed message: 3
```

But what I am getting is:

```
Received message: 1
Completed message: 1
Received message: 2
Completed message: 2
Received message: 3
Completed message: 3
```

Is what I am trying to achieve possible, please? I am using .NET Framework 4.7.2 and Azure.Messaging.ServiceBus 7.17.4.
|.net|entity-framework-core|interface|t4|scaffolding|
I found the problem: in the previous code I was calling `availableOn` as an argument, which is why it didn't work. Here is the correct code:

```
@script
<script>
    $wire.on('appointment-modal', async (event) => {
        let data = await event.availableOn;
        console.log(data);

        const modalBody = document.getElementById('modal-body');

        data.forEach(time => {
            console.log(time.day)
            let timeElement = document.createElement('p');
            timeElement.textContent = time.start_time + ' - ' + time.end_time;
            modalBody.appendChild(timeElement);
        });

        const myModal = new bootstrap.Modal('#showTimes');
        myModal.show();
    });
</script>
@endscript
```
Context
-------

Edit: note it is quite hard to share my `pom.xml` file, since it is 800+ lines, with around 50+ other modules that have their own `pom.xml` files too, some of which contain confidential information.

I have Maven commands run on GitLab CI like this:

```
my_job:
  ...
  script:
    - mvn clean install -ntp -DskipTests
  after_script:
    - mvn -e -X -ntp -f ".\path\to\pom.xml" test -Dtest="MyClass#myTest"
  ...
```

The whole job passes, whereas for the second command, at some point (after changing the project branch, with too many changes to see what could be guilty of this), I started getting an `An unknown compilation problem occurred` error and no log at all from `MyClass.myTest`; see the full stack trace at the end of this post. Also note that when I run `MyClass.myTest` from IntelliJ (with the "run" button at the left of the test), it passes without any issue.

Question
--------

Does anyone know why I got this error and how to get rid of it? Note I saw something about removing `<compilerArgument>-Werror</compilerArgument>` from the `maven-compiler-plugin` declaration, but I don't have `compilerArgument` at all; other things I found were too specific to their context.
Full stacktrace: ---------------- ``` [ERROR] Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:3.10.1:compile (default-compile) on project bla-bla: Compilation failure [ERROR] An unknown compilation problem occurred [ERROR] [ERROR] -> [Help 1] org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:3.10.1:compile (default-compile) on project bla-bla: Compilation failure An unknown compilation problem occurred at org.apache.maven.lifecycle.internal.MojoExecutor.doExecute2 (MojoExecutor.java:347) at org.apache.maven.lifecycle.internal.MojoExecutor.doExecute (MojoExecutor.java:330) at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:213) at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:175) at org.apache.maven.lifecycle.internal.MojoExecutor.access$000 (MojoExecutor.java:76) at org.apache.maven.lifecycle.internal.MojoExecutor$1.run (MojoExecutor.java:163) at org.apache.maven.plugin.DefaultMojosExecutionStrategy.execute (DefaultMojosExecutionStrategy.java:39) at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:160) at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject (LifecycleModuleBuilder.java:105) at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject (LifecycleModuleBuilder.java:73) at org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build (SingleThreadedBuilder.java:53) at org.apache.maven.lifecycle.internal.LifecycleStarter.execute (LifecycleStarter.java:118) at org.apache.maven.DefaultMaven.doExecute (DefaultMaven.java:261) at org.apache.maven.DefaultMaven.doExecute (DefaultMaven.java:173) at org.apache.maven.DefaultMaven.execute (DefaultMaven.java:101) at org.apache.maven.cli.MavenCli.execute (MavenCli.java:827) at org.apache.maven.cli.MavenCli.doMain (MavenCli.java:272) at org.apache.maven.cli.MavenCli.main 
(MavenCli.java:195) at sun.reflect.NativeMethodAccessorImpl.invoke0 (Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke (NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke (DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke (Method.java:498) at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced (Launcher.java:282) at org.codehaus.plexus.classworlds.launcher.Launcher.launch (Launcher.java:225) at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode (Launcher.java:406) at org.codehaus.plexus.classworlds.launcher.Launcher.main (Launcher.java:347) Caused by: org.apache.maven.plugin.compiler.CompilationFailureException: Compilation failure An unknown compilation problem occurred at org.apache.maven.plugin.compiler.AbstractCompilerMojo.execute (AbstractCompilerMojo.java:1310) at org.apache.maven.plugin.compiler.CompilerMojo.execute (CompilerMojo.java:198) at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo (DefaultBuildPluginManager.java:126) at org.apache.maven.lifecycle.internal.MojoExecutor.doExecute2 (MojoExecutor.java:342) at org.apache.maven.lifecycle.internal.MojoExecutor.doExecute (MojoExecutor.java:330) at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:213) at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:175) at org.apache.maven.lifecycle.internal.MojoExecutor.access$000 (MojoExecutor.java:76) at org.apache.maven.lifecycle.internal.MojoExecutor$1.run (MojoExecutor.java:163) at org.apache.maven.plugin.DefaultMojosExecutionStrategy.execute (DefaultMojosExecutionStrategy.java:39) at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:160) at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject (LifecycleModuleBuilder.java:105) at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject (LifecycleModuleBuilder.java:73) at 
org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build (SingleThreadedBuilder.java:53) at org.apache.maven.lifecycle.internal.LifecycleStarter.execute (LifecycleStarter.java:118) at org.apache.maven.DefaultMaven.doExecute (DefaultMaven.java:261) at org.apache.maven.DefaultMaven.doExecute (DefaultMaven.java:173) at org.apache.maven.DefaultMaven.execute (DefaultMaven.java:101) at org.apache.maven.cli.MavenCli.execute (MavenCli.java:827) at org.apache.maven.cli.MavenCli.doMain (MavenCli.java:272) at org.apache.maven.cli.MavenCli.main (MavenCli.java:195) at sun.reflect.NativeMethodAccessorImpl.invoke0 (Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke (NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke (DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke (Method.java:498) at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced (Launcher.java:282) at org.codehaus.plexus.classworlds.launcher.Launcher.launch (Launcher.java:225) at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode (Launcher.java:406) at org.codehaus.plexus.classworlds.launcher.Launcher.main (Launcher.java:347) ``` [1]: https://stackoverflow.com/a/35631066/11159476
|javascript|typescript|riot.js|riotts|
So the problem is that your code stops running when a row is missing. The solution is to skip that row. Here is how: check for blank cells and delete them by looping through your desired range.

```
Dim cell As Range
Dim SelectedRange As Range

Set SelectedRange = Range("R:R") 'you can change this to your preference

For Each cell In SelectedRange
    If IsEmpty(cell) Then
        cell.Delete Shift:=xlShiftUp
    End If
Next cell
```
I'm creating a small startup project. I own one public server, where I will use Docker Compose to run my project. I want to pass sensitive data to my containers, like the DB password etc. I have 3 options:

- Docker secrets (anyone who can break into a container can read this file)
- Vault (too much effort; I don't have the resources to use it)
- environment variables (anyone who breaks in can read them, but they're easy to implement)

I'm thinking about passing encrypted values to the containers in environment variables. The disadvantage here is that my .jar file needs to contain a cert or password to decrypt them, and a few modifications to the code need to be made as well. Is the increase in security worth doing this? Or is it so small that this gives almost nothing, and I should stay with plain-text data in environment variables?
Is there a way to play a notification sound as fast as a button click? I feel like there is a delay of a few hundred milliseconds! I'm using this:

```
public void btn_Click(object sender, EventArgs e)
{
    SystemSounds.Exclamation.Play();
}

public void playExclamation()
{
    SystemSounds.Exclamation.Play();
}
```
C# play sound instantly
|c#|.net-4.8|
Just run this command in root directory: `npx expo install expo-router react-native-safe-area-context react-native-screens expo-linking expo-constants expo-status-bar`
I'd suggest using [ConciseDateFormatter][1], and using the auto locator with more ticks if you really want every month located:

```python
import matplotlib.pyplot as plt
import matplotlib.dates as mdates

fig, ax = plt.subplots(1, 1, figsize=(8, 4), constrained_layout=True)
plt.rcParams['date.converter'] = 'concise'
ax.xaxis.set_major_locator(mdates.AutoDateLocator(minticks=12, maxticks=20))
ax.set_ylim(-185, 185)
ax.scatter(tp_pass, azip_pass, color="b", s=200, alpha=1.0, ec="k")
plt.yticks([-180, -120, -60, 0, 60, 120, 180],
           ["${}^\circ$".format(x) for x in [-180, -120, -60, 0, 60, 120, 180]])
plt.show()
```

[![enter image description here][2]][2]

[1]: https://matplotlib.org/stable/gallery/ticks/date_concise_formatter.html
[2]: https://i.stack.imgur.com/LRRvd.png
Azure service bus - processing multiple messages in parallel (MaxConcurrentCallsPerSession)
|azure|azureservicebus|azure-servicebus-queues|
I have three lists.

#### Example Data

```
data <- list("A-B", "C-D", "E-F", "G-H", "I-J")
data_to_replace <- list("A-B", "C-D")
replacement <- list("B-A", "D-C")

## Note: length(data_to_replace) and length(replacement) are always equal,
## and may range from 1 to 200
```

This is just a minimal example. I have several of these three, and the number varies in each list.

#### Expected outcome

```
data_new <- list("B-A", "D-C", "E-F", "G-H", "I-J")
```

#### What I have tried

```
## function that reverses a string by words
reverse_words <- function(string) {
  ## split string by blank spaces
  string_split <- strsplit(as.character(string), split="-")
  ## How many split terms?
  string_length <- length(string_split[[1]])
  ## decide what to do
  if (string_length == 1) {
    ## one word (do nothing)
    reversed_string <- string_split[[1]]
  } else {
    ## more than one word (collapse them)
    reversed_split <- string_split[[1]][string_length:1]
    reversed_string <- paste(reversed_split, collapse="-")
  }
  ## output
  return(reversed_string)
}

## Replacement
replacement <- lapply(data_to_replace, reverse_words)
data_new <- rapply(data, function(x) {
  replace(x, x == data_to_replace, replacement) ## This last line did not work
})
```
How to get new text input after entering a password in a tab?
I'm new to Fortran, and I already have a hard time understanding the concept of broadcasting arrays in Python, so it is even more difficult for me to implement it in Fortran.

Fortran code:

```
program test
    implicit none
    integer, parameter :: N=6, t_size=500
    real, dimension(t_size,N,2) :: array1

contains

    pure function f(a) result(dr)
        real, intent(in) :: a(N,2)
        real :: dr(N,N,2)
        real :: b(N,N,2)
        real :: c(N,N,2)
        b = spread(a,1,N)
        c = spread(a,2,N)
        dr = c - b
    end function f

end program
```

Here N would be the number of points and t_size is just the number of different time steps. I came up with this function, which uses *spread* in two different dimensions to create an NxNx2 array. I thought of using a line like, for example, `r = f(array1(1,:,:))` in order to get an array which holds all differences of the spatial coordinates of every pair of points. I already wrote code that does this in Python (taken from a physics textbook for Python):

```
r = np.empty((t.size, N, 2))
r[0] = r0

def f(r):
    dr = r.reshape(N, 1, 2) - r
```

where I can later write, for example, `f(r[i])`. (In this case, I left the line `r[0] = r0` because it shows that an initial condition is given; later I plan to do this in Fortran by using the `random_number` subroutine.)

Now I hope it is clear what my question is. If anyone has a better idea (which I am sure there is) for implementing broadcasting in Fortran, please let me know. Please have a little patience with someone new to Fortran and also to programming in general; thanks in advance for your replies. I already tried it with the `random_number` subroutine and it worked, but I have no way of checking if the output is true.
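For what it's worth, a dependency-free sketch of what the NumPy expression `r.reshape(N, 1, 2) - r` computes may help when checking the Fortran output by hand: as I read the broadcasting, element `(i, j)` of the result holds `r[i] - r[j]`, component-wise, which should match `spread(a,2,N) - spread(a,1,N)` in Fortran (the values below are hypothetical sample points, not from the question):

```python
N = 3

# Hypothetical sample points, one (x, y) pair per point
r = [[0.0, 0.0], [1.0, 2.0], [4.0, 6.0]]

# dr[i][j][k] = r[i][k] - r[j][k] -- the same semantics as
# NumPy's r.reshape(N, 1, 2) - r
dr = [[[r[i][k] - r[j][k] for k in range(2)]
       for j in range(N)]
      for i in range(N)]

print(dr[2][1])  # difference between point 2 and point 1: [3.0, 4.0]
```

A few hand-picked entries like `dr[2][1]` can then be compared against the values the Fortran function returns.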
Regular Expression for IPv4 subnet
|regex|kql|ipv4|subnet|
I am trying to write an alert message in TradingView's Pine Script that contains a string variable I made myself.

My attempt:

```
var my_variable = ""

if (longCondition)
    strategy.entry("Long", strategy.long)
    my_variable = "open"

if (shortCondition)
    strategy.close("Long")
    my_variable = "close"
```

Alert message:

```
{
    "side": "long",
    "action": {{my_variable}},
    "exchange": "BINANCE",
    "symbol": "ETHUSDT"
}
```

Expected output (for a "Buy" signal):

```
{
    "side": "long",
    "action": "open",
    "exchange": "BINANCE",
    "symbol": "ETHUSDT"
}
```

This of course didn't work. Does anyone know how to achieve this kind of custom alert message?
I use an `IActionResult` to get a `List<Object>`. If I use an anonymous list, the result is OK; if I use a `List<User>`, the elements of the result are empty.

If I use a generic (anonymous) list, it works:

```
public IActionResult GetData()
{
    var erg1 = _database.table.Select(x => new
    {
        x.Id,
        x.firstname,
        x.lastname
    });

    return Ok(erg1);
}
```

I get a list with all elements:

```
[
  {
    "id": "732fdbd7-c878-45e8-8c43-5f795a697f6d",
    "firstname": "Uwe",
    "lastname": "Gabbert"
  },
  {
    "id": "5288f9ea-25a2-4ffc-a711-7c0b2cf49c38",
    "firstname": "User",
    "lastname": "Test"
  }
]
```

But if I use a separate class for the elements, I get an empty list:

```
public IActionResult GetData()
{
    var erg2 = _database.table.Select(x => new User(
        x.Id,
        x.firstname,
        x.lastname
    ));

    return Ok(erg2);
}
```

In `erg2` there are 2 items of `User`. But the return of `GetData` is:

```
[
  {},
  {}
]
```
There are several ways to benchmark Python scripts. One simple way is to use the **timeit** module, which provides a straightforward way to measure the execution time of small code snippets. However, if you are looking for a more comprehensive benchmark that includes memory usage, you can use the **memory_profiler** package to measure memory usage.

To visualize your benchmarks, you can use the **plotly** library, which allows you to create interactive plots. You can create a line chart to display the execution time and memory usage for different input sizes.

Here's an example code snippet that benchmarks two different implementations of a function that takes a matrix, row and column as inputs:

```
import timeit
import random
import numpy as np
from plotly.subplots import make_subplots
import plotly.graph_objects as go
from memory_profiler import memory_usage
from memory_profiler import profile

from my.package.module import real_func_1, real_func_2

@profile
def func_impl_1(matrix, row, column):
    return real_func_1(matrix, row, column)

@profile
def func_impl_2(matrix, row, column):
    return real_func_2(matrix, row, column)

# Analysis range
x = list(range(3, 100))

# Time results
y1 = []
y2 = []

# Memory results
m1 = []
m2 = []

for i in x:
    # Random choice of parameters
    A = np.random.rand(i, i)
    rx = random.randint(0, i-1)
    ry = random.randint(0, i-1)

    t1 = 0
    t2 = 0
    m1_ = 0
    m2_ = 0
    for _ in range(10):
        t1 += timeit.timeit(
            lambda: func_impl_1(A, rx, ry),
            number=1,
        )
        t2 += timeit.timeit(
            lambda: func_impl_2(A, rx, ry),
            number=1,
        )
        m1_ += max(memory_usage(
            (lambda: func_impl_1(A, rx, ry),)
        ))
        m2_ += max(memory_usage(
            (lambda: func_impl_2(A, rx, ry),)
        ))

    # Average over the 10 runs
    y1.append(t1/10)
    y2.append(t2/10)
    m1.append(m1_/10)
    m2.append(m2_/10)

fig = make_subplots(rows=2, cols=1, shared_xaxes=True, subplot_titles=("Time", "Memory"))
fig.add_trace(go.Scatter(x=x, y=y1, name='func_impl_1 time', legendgroup='1'), row=1, col=1)
fig.add_trace(go.Scatter(x=x, y=y2, name='func_impl_2 time', legendgroup='1'), row=1, col=1)
fig.add_trace(go.Scatter(x=x, y=m1, name='func_impl_1 memory', legendgroup='2'), row=2, col=1)
fig.add_trace(go.Scatter(x=x, y=m2, name='func_impl_2 memory', legendgroup='2'), row=2, col=1)

fig.update_layout(
    title="Performance of the functions",
    xaxis_title="Matrix size",
)
fig.update_yaxes(title_text="Time (s)", row=1, col=1)
fig.update_yaxes(title_text="Max memory usage (MB)", row=2, col=1)

fig.show()
```

The graph:

[![Graph with time and memory benchmark][1]][1]

Looking at the graph, it seems like both functions have similar memory usage, which is good to know. In terms of runtime, func_impl_2 seems to be generally faster than func_impl_1, which is also a positive finding. However, the difference in performance between the two functions is quite small, and there is a point where the performance of func_impl_1 surpasses that of func_impl_2 for very small input sizes. This may indicate that the simpler implementation of func_impl_1 is still a viable option for smaller inputs, even though func_impl_2 is generally faster. Overall, the graphs provide valuable insights into the performance of these functions and can help with decision-making when choosing which implementation to use in different scenarios.

[1]: https://i.stack.imgur.com/8anjP.png
|android|firebase|push-notification|push|
A proper approach would be to use the built-in functionality - Jenkinsfile syntax has a block specifically dedicated to your use case. From [Pipeline Syntax][1]:

> The `post` section defines one or more additional steps that are run upon the completion of a Pipeline’s or stage’s run (depending on the location of the post section within the Pipeline). `post` can support any of the following post-condition blocks: `always`, `changed`, `fixed`, `regression`, `aborted`, `failure`, `success`, `unstable`, `unsuccessful`, and `cleanup`. These condition blocks allow the execution of steps inside each condition depending on the completion status of the Pipeline or stage. The condition blocks are executed in the order shown below.

```java
// Jenkinsfile
pipeline {
    // ...
    stages {
        stage('Some build logic') {
            steps {
                // build steps
            }
        }
    }
    post {
        aborted {
            sh 'python3 cleanup.py'
        }
    }
}
```

Depending on your goal, you can choose any post condition(s) to use.

[1]: https://www.jenkins.io/doc/book/pipeline/syntax/#post
I had this kind of issue with my Windows and Linux devices. What worked for me:

- Created a new app from Windows and made sure the fresh app was running fine.
- Moved all my project files to this new app (so the problematic project was configured in the new app).
- Then pushed to Git; it was working fine on the other device as well.

I felt like I wasted a lot of time looking for a solution, because of the time and effort of moving the configuration to a new app. But it was quicker than finding a solution while sitting on the same project. Hope this helps.
null
This command facilitates restoration but may be time-consuming: ``` sudo mongod --verbose --port 27017 --bind_ip 10.0.0.1 --keyFile /root/mongo_key_pair.pem --storageEngine wiredTiger --dbpath /db/data/ --repair --directoryperdb ``` For further details, refer to the [MongoDB documentation](https://www.mongodb.com/docs/manual/reference/program/mongod/#core-options).
{"OriginalQuestionIds":[39972256],"Voters":[{"Id":11107541,"DisplayName":"starball","BindingReason":{"GoldTagBadge":"visual-studio-code"}}]}
I have seen a similar topic with answers about sizes in % or vh, but this is a bit different. I have a set of different plots with sizes 900x600 by default, using this CSS:

```
.chartbox {
    display: flex;
    flex-wrap: wrap;
    background-color: DodgerBlue;
    justify-content: space-around;
}

.chart {
    background-color: #f1f1f1;
    width: 900px;
    height: 600px;
    margin: 10px;
    position: center;
}
```

[how it looks](https://i.stack.imgur.com/wMdw3.png)

And I want them to resize automatically when the window is resized (or there is a small screen).

I'm using charts from ECharts, with JS code something like:

```
var myChart = echarts.init(document.getElementById('main'));
var option = {
    xAxis: {
        type: 'category',
        data: ['Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat', 'Sun']
    },
    yAxis: {
        type: 'value'
    },
    series: [
        {
            data: [820, 932, 901, 934, 1290, 1330, 1320],
            type: 'line',
            smooth: true
        }
    ]
};
myChart.setOption(option)
```

My HTML:

```
<!DOCTYPE html>
<html>
<head>
    <meta charset="utf-8" />
    <h1>ECharts</h1>
    <link rel="stylesheet" type="text/css" href="mycss.css">
    <!-- Include The ECharts file you just downloaded -->
    <script src="echarts.js"></script>
</head>
<body>
    <br>
    <!-- Prepare a DOM with a defined width and height for ECharts -->
    <div class="chartbox">
        <div class="chart" id="main"></div>
        <div class="chart" id="main1"></div>
        <div class="chart" id="main2"></div>
        <div class="chart" id="main3"></div>
        <div class="chart" id="main4"></div>
        <div class="chart" id="main5"></div>
    </div>
    <script src="myjs.js"></script>
    <script src="myjs1.js"></script>
    <script src="myjs2.js"></script>
    <script src="myjs3.js"></script>
    <script src="myjs4.js"></script>
    <script src="myjs5.js"></script>
</body>
</html>
```

I tried to use % and vh for the .chart height and width, but they conflict with the default size. I tried to use max-width and max-height without width and height, but the charts do not appear (maybe it's an ECharts feature).

I expect the following:

1. if 3 charts 900x600 can fit the screen, then place them
2. else if 3 charts 600x400 can fit the screen, then place them
3. else if 2 charts 900x600 can fit the screen, then place them
4. else if 2 charts 600x400 can fit the screen, then place them
5. else if 1 chart 900x600 can fit the screen, then place it
6. else if 1 chart 600x400 can fit the screen, then place it
7. else resize as much as possible
Getting distances of points in 2D space in an array in Fortran using the concept of broadcasting (Python)
|multidimensional-array|fortran|array-broadcasting|
null
I am trying to learn time series prediction and forecasting in Python. I have plotted the ACF and PACF of my total electron content (TEC), which has a seasonality, i.e. the TEC value reaches its maximum during the day and its minimum at night. Overall the data has no upward or downward trend, and the test statistic from the Adfuller test is -3.67. I've got the following graphs where the ACF is tailing off, but so is the PACF, and now I am confused about which would be the best coefficients for the ARIMA model. [ACF and PACF plots of Timeseries TEC](https://i.stack.imgur.com/mdyhS.png) NOTE: I want to forecast 10 days after and 20 days before the earthquake and then compare it with the actual values to get a differenced value and show the impact of the earthquake on total electron content. The TEC values are also affected by geomagnetic storms, so next I will train a machine learning model to classify the impact of the earthquake and the impact of space weather. I can share my data if anyone wants to see it. Thank you! I am trying to fit an ARIMA model to forecast time series TEC values 20 days before and 10 days after the earthquake. My goal is to get a forecast/prediction for a specific time range and then compare it with the actual value to see how much difference there is due to the earthquake. I am stuck at selecting the AR and MA coefficients for the data.
What kind of ARIMA model would be the best fit for this data?
|python|machine-learning|time-series|signal-processing|arima|
null
A list of data frames: my_list <- list(structure(list(id = c("xxxyz", "xxxyz", "zzuio"), country = c("USA", "USA", "Canada")), class = "data.frame", row.names = c(NA, -3L)), structure(list(id = c("xxxyz", "ppuip", "zzuio"), country = c("USA", "Canada", "Canada")), class = "data.frame", row.names = c(NA, -3L))) my_list [[1]] id country 1 xxxyz USA 2 xxxyz USA 3 zzuio Canada [[2]] id country 1 xxxyz USA 2 ppuip Canada 3 zzuio Canada I want to remove duplicated rows both within and between the data frames stored in that list. [This][1] works to remove duplicates within each data frame: lapply(my_list, function(z) z[!duplicated(z$id),]) [[1]] id country 1 xxxyz USA 3 zzuio Canada [[2]] id country 1 xxxyz USA 2 ppuip Canada 3 zzuio Canada But there are still duplicates between data frames. I want to remove them all, with the following desired output: [[1]] id country #[empty list; except if some observations are not duplicates, of course] [[2]] id country xxxyz USA zzuio Canada ppuip Canada Notes: 1. I want to eliminate duplicates on `id` (other variables can be duplicated) 2. I need a solution where it is not needed to merge the data frames before checking for duplicates 3. If possible, I wish to retain the last observation. For example, in the desired output above, "zzuio Canada" existed in both df, but was kept in the last df only, that is, df 2. 4. I have more than 100 dfs, with variable names that don't necessarily match between dfs. That said, the id is always called "id" 5. I need to reassign the result to the same object (in the case above, `my_list`) [1]: https://stackoverflow.com/questions/42163966/remove-duplicate-rows-for-multiple-dataframes
It seems like your `MyInput` component is not handling the `onChange` event correctly. In your `CreatePost` component, you are passing the `onChange` function to `MyInput` like this: ```jsx <MyInput placeholder="Enter title" onChange={(e) => setPost({ ...post, title: e.target.value })} value={post.title} type="text" /> ``` And similarly for the body input. However, in your `MyInput.jsx`, you are trying to spread `props`, which doesn't include the `onChange` and `value` properties directly. You need to modify your `MyInput.jsx` component to handle `onChange` and `value` correctly. Here's how you can do it: ```jsx export default function MyInput({ placeholder, onChange, value, type }) { return ( <input className={classes.MyInput} placeholder={placeholder} onChange={onChange} value={value} type={type} /> ); } ``` Now, your `MyInput` component should correctly handle the `onChange` and `value` properties, and your `CreatePost` component should work as expected. Make sure to update the import statements in `CreatePost.jsx` to match the new structure of the `MyInput` component.
I have a two-column CSV file. Column 1 is string values, column 2 is integer values. If a term is found in a string in Column 1, I want to return the corresponding value in Column 2.

    Col1   Col2
    Green  5
    Red    6

If Col1 contains "ed", return the corresponding row value in Col2, in this case 6. Thanks.

```
import pandas as pd

# Read the CSV file into a pandas DataFrame
file_name = input("Enter file name: ")
df = pd.read_csv(file_name)

string1 = input("Enter search term: ")

# check if each element in the DataFrame contains the partial string
matches = df.apply(lambda col: col.astype(str).str.contains(string1, case=False))

# get the row and column indices where the partial string matches
rows, cols = matches.values.nonzero()
for row, col in zip(rows, cols):
    print(f"Match found at row {row}")
```
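For what it's worth, the specific lookup described above (find "ed" in Col1, return the matching Col2 value) can be sketched with boolean indexing on the one column rather than scanning every column. This is only one possible approach; the inline `StringIO` data below stands in for the CSV file and the `input()` prompts:

```python
import pandas as pd
from io import StringIO

# Inline stand-in for the two-column CSV file described above
csv_data = StringIO("Col1,Col2\nGreen,5\nRed,6\n")
df = pd.read_csv(csv_data)

# Boolean mask: True for rows whose Col1 contains the search term
mask = df["Col1"].str.contains("ed", case=False)

# Corresponding Col2 values for the matching rows
matches = df.loc[mask, "Col2"].tolist()
print(matches)  # [6]
```

Here only "Red" matches, so the result is `[6]`; if several rows matched, the list would hold all of their Col2 values.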
How do you return a value from an imported csv if a condition is met using python?
|python|pandas|dataframe|csv|
null
One possibility is to use `flex`, a shorthand property that combines `flex-grow`, `flex-shrink`, and `flex-basis`. It is not possible to keep both your div width and the gap constant and still fill the container, so this is what I suggest if the gap is constant but the div width is not, so that the rows fill the whole container:

<!-- begin snippet: js hide: false console: true babel: false -->

<!-- language: lang-css -->

    .container {
      width: 90vw;
      margin: auto;
      background-color: aqua;
      display: flex;
      flex-wrap: wrap;
      gap: 32px;
    }

    .container div {
      flex: 1 0 300px;
      background-color: blue;
      text-align: center;
      height: 200px;
    }

<!-- language: lang-html -->

    <div class="container">
      <div>1</div>
      <div>2</div>
      <div>3</div>
      <div>4</div>
      <div>5</div>
      <div>6</div>
    </div>

<!-- end snippet -->
Since the last 3-monthly renewal of my **Let's Encrypt** certificate (which happened 3 days ago), **Android devices** with OS versions earlier than 7.1.1 (**Android API < 25**) are not able to connect to my servers anymore. The reason seems to be that the 3-year cross-sign agreement to bridge the new *Let's Encrypt*'s "**ISRG Root X1**" certificate via the old partner *IdenTrust*'s "**DST Root CA X3**" certificate expired earlier this year (2024), as reported here: https://letsencrypt.org/2020/12/21/extending-android-compatibility What is the best solution to allow the earlier Android devices to keep working with Let's Encrypt SSL servers?
I have a microservice architecture built on Spring Cloud API Gateway with Eureka instances as clients, and I need some help with forwarding a WebSocket connection. [enter image description here](https://i.stack.imgur.com/vW3RD.png) While connecting to the WebSocket service directly through localhost on port 8085, everything works:

    const socket = new SockJS('http://localhost:8085/websocket');

When I try to connect through the API gateway on localhost port 8083, there is a connection error:

    const socket = new SockJS('http://localhost:8083/websocket');

API gateway routing: [enter image description here](https://i.stack.imgur.com/Oahv5.png)

I tried a lot of combinations, also

    .uri("lb://messaging-service:8085"))
    .uri("ws://messaging-service:8085"))

WebSocket configuration: [enter image description here](https://i.stack.imgur.com/B90In.png) [enter image description here](https://i.stack.imgur.com/GC7jO.png)

Error: [enter image description here](https://i.stack.imgur.com/jZGZo.png)

I tried everything with allowing headers, some different cases of the WebSocket URL, etc.
WebSocket SockJS connection problem on Spring Cloud Gateway with Eureka
|spring|api|websocket|cloud|gateway|
null
There was nothing *technically* wrong with the way that I was using the Stack widget. The problem was that the Stack widget paints each child widget from first to last, or from bottom to top: > The stack paints its children in order with the first child being at the bottom ([link](https://api.flutter.dev/flutter/widgets/Stack-class.html#:~:text=The%20stack%20paints,their%20new%20location.)) Because I was using higher resolution images, each time the children in the Stack were rendered it looked like they were reloading the image, but they were just repainting to the UI, and it was noticeable because of the resolution. I found that despite using `AutomaticKeepAliveClientMixin`, when I changed the tab in my app there was an inevitable state change as the `_selectedIndex` value changed. The build context of the widgets down the tree changed and therefore caused a re-render. I fixed this by only loading 2 child items at most in the Stack instead of 5 like I had it set to previously.
The current CRAN version of easyPubMed has a 10,000 record limit from Entrez. I have experienced this a lot and the workaround is to cut the queries into small chunks until you get under the 10,000 limit. The [old package readme][1] on Github stated the following: > At this moment, `easyPubMed` only supports retrieving 10,000 records > per query. This is due to some recent changes in the NCBI E-utilities > (see: > <https://ncbiinsights.ncbi.nlm.nih.gov/2022/09/13/updated-pubmed-eutilities/>). > This is a known issue/limitation. I'll try to re-write the R library > to account for the changes in the E-utilities as soon as possible > (likely, within a few months). Thanks for your patience. The [new readme][2] states, > ### New features of easyPubMed version 3.1.3 > > Automatic Job splitting into Sub-Queries. The Entrez server imposes a > strict n=10,000 limit to the number of records that can be > programmatically retrieved from a single query. Whenever possible, the > easyPubMed library automatically attempts to split queries returning > large number of records into lists of smaller, manageable queries. So either change your query to ask for less than 10k records, or download the development version of `easyPubMed`. [1]: https://github.com/dami82/easyPubMed/blob/469030ac4a348080e8fa48296e447a41b9f6ed36/README.md [2]: https://github.com/dami82/easyPubMed
|python|seaborn|scatter-plot|categorical-data|plot-annotations|
I'm writing some tests for an AWS Lambda that handles and routes notifications from an SNS topic; if a specific feature flag is off, it returns immediately. However, one of the tests' mock implementations carries over to the next test, which makes the handler fail when it should not.

**MWE for the handler:**

```js
export async function handler(event) {
  for (const record of event.Records) {
    const { applicationId } = JSON.parse(JSON.parse(record.body).Message);

    const shouldHandleNotification = await isFeatureFlagOn(PARAMETER_STORE_ARN, FEATURE_FLAG);
    if (!shouldHandleNotification) {
      console.info(`Flag [SHOULD_HANDLE_NOTIFICATION] is off, skipping notification handling.`);
      return;
    }

    const applicationData = await getApplicationData(applicationId);
    if (applicationData.type == "human") {
      // handle application
    } else if (applicationData.type == "viltrumite") {
      // handle application
    } else {
      throw new Error(`Cannot handle application type [${applicationData.type}]`);
    }
  }
}
```

**MWE for the tests:**

```js
mockIsFeatureFlagOn = jest.fn();
mockGetApplicationData = jest.fn();

describe("SNSNotificationHandler", () => {
  beforeEach(() => {
    jest.clearAllMocks();
  });

  // TEST 1
  it("does not handle notification when feature flag is off", async () => {
    mockIsFeatureFlagOn.mockResolvedValueOnce(false);
    mockGetApplicationData.mockResolvedValueOnce({ type: "unknown" });

    await handler(EVENT_WITH_APPLICATION_DATA);

    expect(mockIsFeatureFlagOn).toHaveBeenCalledWith(PARAMETER_STORE_ARN, FEATURE_FLAG);
    expect(mockGetApplicationData).not.toHaveBeenCalled();
  });

  // TEST 2
  it("handles viltrumite application", async () => {
    mockIsFeatureFlagOn.mockResolvedValueOnce(true);
    mockGetApplicationData.mockResolvedValueOnce({ type: "viltrumite" });

    await handler(EVENT_WITH_APPLICATION_DATA);

    expect(mockIsFeatureFlagOn).toHaveBeenCalledWith(PARAMETER_STORE_ARN, FEATURE_FLAG);
    expect(mockGetApplicationData).toHaveBeenCalled();
  });
});
```

TEST 2 fails because the `mockGetApplicationData` mock doesn't clear after TEST 1 is done, and it carries over to TEST 2
returning `{ type: "unknown" }` instead of `{ type: "viltrumite" }`. In TEST 1, `mockGetApplicationData` is never called so its implementation lingers even with the `jest.clearAllMocks();` after each test. I could simply delete the `mockGetApplicationData`'s implementation, and that would solve the issue but I'm trying to understand what's happening here. Now I understand [`jest.clearAllMocks` does not remove mock implementations by design](https://github.com/jestjs/jest/issues/7136#issuecomment-428750792), however my confusion is why a *one-time implementation* with `mockResolvedValueOnce`¹ set in a test (i.e., TEST 1) bleeds into another test (i.e., TEST 2)? Shouldn't `mockGetApplicationData` simply go back to being `jest.fn()` even if the one-time implementation was never executed because that implementation is scoped to that one test? --- ¹ `mockFn.mockResolvedValueOnce(value)` is a shorthand for `jest.fn().mockImplementationOnce(() => Promise.resolve(value))` and from the [Jest docs](https://jestjs.io/docs/mock-function-api#mockfnmockimplementationoncefn): When the mocked function runs out of implementations defined with `.mockImplementationOnce()`, it will execute the default implementation set with `jest.fn(() => defaultValue)` or `.mockImplementation(() => defaultValue)` if they were called.
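To make the queue behavior described in that footnote concrete, here is a tiny stand-in written in plain JavaScript. It is not Jest's actual implementation, and `makeMock` is a hypothetical helper invented for this sketch; it only illustrates why a queued one-time value survives anything that, like `jest.clearAllMocks()`, resets call data without draining the once-queue:

```javascript
// Minimal stand-in for Jest's once-implementation queue.
function makeMock(defaultImpl = () => undefined) {
  const onceQueue = [];
  const mock = (...args) => {
    mock.calls.push(args);
    // A queued once-implementation wins; otherwise fall back to the default.
    const impl = onceQueue.length > 0 ? onceQueue.shift() : defaultImpl;
    return impl(...args);
  };
  mock.calls = [];
  mock.mockReturnValueOnce = (v) => onceQueue.push(() => v);
  // Analogue of jest.clearAllMocks()/mockClear(): resets call data ONLY,
  // leaving the once-queue untouched.
  mock.mockClear = () => { mock.calls = []; };
  return mock;
}

const m = makeMock(() => "default");
m.mockReturnValueOnce("once"); // queued, but never consumed before...
m.mockClear();                 // ...clearing, which does not drain the queue
console.log(m());              // prints "once": the queued value still fires
console.log(m());              // prints "default": the queue is now empty
```

Under this model, a one-time value queued in TEST 1 but never consumed is simply still sitting in the queue when TEST 2 runs, exactly as observed.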
One-time implementation set with Jest's mockResolvedValueOnce in one test bleeds into another test
|javascript|unit-testing|jestjs|
The way I achieved it is also by implementing the `UnmarshalBSON` interface. Translated to your example it would be:

```
type foo struct {
	Type string              `bson:"type"`
	Act  ActivationInterface `bson:"credential,omitempty"`
}

type ActivationInterface interface {
	// Placeholder TODO: Replace with useful function
	Placeholder()
}

type Activation1 struct {
	Name string `bson:"name"`
}

// Note the pointer receiver: with a value receiver the assignment
// to a.Name would be lost when the method returns.
func (a *Activation1) UnmarshalBSON(data []byte) error {
	var raw map[string]interface{}
	err := bson.Unmarshal(data, &raw)
	if err != nil {
		return err
	}

	a.Name = raw["name"].(string)
	return nil
}

func (a *Activation1) Placeholder() {
}

type Activation2 struct {
	Address string `bson:"address"`
}

func (a *Activation2) UnmarshalBSON(data []byte) error {
	var raw map[string]interface{}
	err := bson.Unmarshal(data, &raw)
	if err != nil {
		return err
	}

	a.Address = raw["address"].(string)
	return nil
}

func (a *Activation2) Placeholder() {
}
```
{"Voters":[{"Id":1007220,"DisplayName":"aynber"},{"Id":4294399,"DisplayName":"General Grievance"},{"Id":2109233,"DisplayName":"lagbox"}]}
Basically, `SKFunction` became `KernelFunction` and `SKContext` was renamed to `KernelArguments`. The first one is straightforward renaming, and if you want to explore more in-depth why and how `SKContext` turned into `KernelArguments`, you should explore the commits from the [dotnet-1.0.0-rc1][1] release. I selected some of them: - SkContext became [ContextVariables][2]. - ContextVariables became [KernelFunctionArguments][3] - KernelFunctionArguments became [KernelArguments][4] I recommend taking a look at the kernel syntax examples [here][5]. They use `KernelFunction` and `KernelArguments`. Example: ```cs [KernelFunction] [Description("Send email")] public string SendEmail( [Description("target email addresses")] string emailAddresses, [Description("answer, which is going to be the email content")] string answer, KernelArguments arguments) { var contract = new Email() { Address = emailAddresses, Content = answer, }; // for demo purpose only string emailPayload = JsonSerializer.Serialize(contract, this._serializerOptions); arguments["email"] = emailPayload; return "Here's the API contract I will post to mail server: " + emailPayload; } ``` Finally, if you want an example of an application using those abstractions, you should check the [chat-copilot][6] project. [1]: https://github.com/microsoft/semantic-kernel/releases/tag/dotnet-1.0.0-rc1 [2]: https://github.com/microsoft/semantic-kernel/pull/3666 [3]: https://github.com/microsoft/semantic-kernel/pull/3752 [4]: https://github.com/microsoft/semantic-kernel/pull/3769 [5]: https://github.com/microsoft/semantic-kernel/tree/main/dotnet/samples/KernelSyntaxExamples [6]: https://github.com/microsoft/chat-copilot
First, you can run a connection test from your client:

    tnsping UWBSDBPROD

If this was successful, try to connect with sqlplus:

    sqlplus whatsappdb/whatsapp@orcl

or

    sqlplus whatsappdb/whatsapp@10.70.236.30/UWBSDBPROD

If you can connect with sqlplus, then there is a problem with the Laravel configuration (I don't know Laravel). What Oracle version do you have?
From all the good answers I have learned that the same string can't be passed to a function and returned again, because `r=decode_hex(r)` causes the pointer to the original malloc'd data to be lost. I see there are two solutions to avoid the problem. Either I can use one string as input to the function and another for the decoded data, or the string can be modified in place.

**Alternative 1** - Two different strings, one for input and one for output

```
#include <stdio.h>
#include <string.h>
#include <stdlib.h>

char *decode_hex(char hex[], int size)
{
    char *bin = malloc(size);
    bin[0] = '\0';

    for (int i=0; hex[i]!='\0'; i++) {
        switch(hex[i]) {
            case 'a': strcat(bin, "1010"); break;
            case 'b': strcat(bin, "1011"); break;
        }
    }
    return bin;
}

int main()
{
    int number_of_hex_symbols = 2;
    int hex_size = number_of_hex_symbols + 1;
    int bin_size = 4*number_of_hex_symbols + 1;

    char r[hex_size];
    char *b;

    for (int i=0; i<2; i++) {
        r[0] = '\0';
        strcat(r, "ab");
        printf("Original string: %s\n", r);

        b = decode_hex(r, bin_size);
        printf("Decoded string: %s\n\n", b);
        free(b);
    }
}
```

**Alternative 2** - String modified in place

```
#include <stdio.h>
#include <string.h>
#include <stdlib.h>

void decode_hex(char *s, int size)
{
    char *bin = malloc(size);
    bin[0] = '\0';

    for (int i=0; s[i]!='\0'; i++) {
        switch(s[i]) {
            case 'a': strcat(bin, "1010"); break;
            case 'b': strcat(bin, "1011"); break;
        }
    }
    s[0] = '\0';
    strcat(s, bin);
    free(bin);
}

int main(void)
{
    int number_of_hex_symbols = 2;
    int size = 4*number_of_hex_symbols + 1;

    char r[size];

    for (int i=0; i<2; i++) {
        r[0] = '\0';
        strcat(r, "ab");
        printf("Original string: %s\n", r);

        decode_hex(r, size);
        printf("Decoded string: %s\n\n", r);
    }
}
```
null
null
I have created a custom WPBakery element called "Custom Title". This element allows users to input text and align it. The alignment section should have a default dropdown with options such as "left", "center", and "right", as shown in the image. How can I implement this alignment section? Your advice would be appreciated. [view image](https://i.stack.imgur.com/n6mb4.png) I want the alignment control to look like the attached image.
null
WPBakery Page Builder | Create custom element and align
|wordpress|builder|wpbakery|
null
null
{"Voters":[{"Id":-1,"DisplayName":"Community"}]}
Ubuntu 20.04, Python 3.8

I'm using a Python file (not written by me) with a U-Net and custom loss functions. The code was written for tensorflow==2.13.0, but my GPU cluster only has tensorflow==2.2.0 (or lower). The available code isn't compatible with this version, specifically the `if` statement in `update_state`. Can somebody help me rewrite this? I'm not experienced with tf.

```
class Distance(tf.keras.metrics.Metric):
    def __init__(self, name='DistanceMetric', distance='cm', sigma=2.5, data_size=None,
                 validation_size=None, points=None, point=None, percentile=None):
        super(Distance, self).__init__(name=name)
        self.counter = tf.Variable(initial_value=0, dtype=tf.int32)
        self.distance = distance
        self.sigma = sigma
        self.percentile = percentile
        if percentile is not None and point is not None:
            assert (type(percentile) == float)
            self.percentile_idx = tf.Variable(tf.cast(tf.round(percentile * validation_size), dtype=tf.int32))
        else:
            self.percentile_idx = None
        self.point = point
        self.points = points
        self.cache = tf.Variable(initial_value=tf.zeros([validation_size, points]), shape=[validation_size, points])
        self.val_size = validation_size

    def update_state(self, y_true, y_pred, sample_weight=None):
        n, h, w, p = tf.shape(y_pred)[0], tf.shape(y_pred)[1], tf.shape(y_pred)[2], tf.shape(y_pred)[3]
        y_true = normal_distribution(self.sigma, y_true[:, :, 0], y_true[:, :, 1], h=h, w=w, n=n, p=p)
        if self.distance == 'cm':
            x1, y1 = cm(y_true, h=h, w=w, n=n, p=p)
            x2, y2 = cm(y_pred, h=h, w=w, n=n, p=p)
            d = ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5
            d = d[:, :, 0]
        elif self.distance == 'argmax':
            d = (tf.cast(tf.reduce_sum(((argmax_2d(y_true) - argmax_2d(y_pred)) ** 2), axis=1), dtype=tf.float32)) ** 0.5
        temp = tf.minimum(self.counter + n, self.val_size)
        if self.counter <= self.val_size:
            self.cache[self.counter:temp, :].assign(d[0:(temp-self.counter), :])
        self.counter.assign(self.counter + n)

    def result(self):
        if self.percentile_idx is not None:
            temp = tf.sort(self.cache[:self.val_size, self.point], axis=0, direction='ASCENDING')
            return temp[self.percentile_idx]
        elif self.point is not None:
            return tf.reduce_mean(self.cache[:, self.point], axis=0)
        else:
            return tf.reduce_mean(self.cache, axis=None)

    def reset_states(self):
        self.cache.assign(tf.zeros_like(self.cache))
        self.counter.assign(0)
        if self.percentile is not None and self.point is not None:
            self.percentile_idx.assign(tf.cast(self.val_size * self.percentile, dtype=tf.int32))
```

The error:

```
/trinity/home/r084755/DRF_AI/distal-radius-fractures-x-pa-and-lateral-to-clinic/Code files/LandmarkDetection.py:144 update_state
    if tf.math.less_equal(self.counter, self.val_size):  # Updated from self.counter <= self.val_size:
/opt/ohpc/pub/easybuild/software/TensorFlow/2.2.0-fosscuda-2019b-Python-3.7.4/lib/python3.7/site-packages/tensorflow/python/framework/ops.py:778 __bool__
    self._disallow_bool_casting()
/opt/ohpc/pub/easybuild/software/TensorFlow/2.2.0-fosscuda-2019b-Python-3.7.4/lib/python3.7/site-packages/tensorflow/python/framework/ops.py:545 _disallow_bool_casting
    "using a `tf.Tensor` as a Python `bool`")
/opt/ohpc/pub/easybuild/software/TensorFlow/2.2.0-fosscuda-2019b-Python-3.7.4/lib/python3.7/site-packages/tensorflow/python/framework/ops.py:532 _disallow_when_autograph_enabled
    " decorating it directly with @tf.function.".format(task))

OperatorNotAllowedInGraphError: using a `tf.Tensor` as a Python `bool` is not allowed: AutoGraph did not convert this function. Try decorating it directly with @tf.function.
```
Well, in general, I'm doing a small project on Django and I decided to implement a convenient search for any parameter from my form. All the JS was written for me by ChatGPT and I don't understand it at all, and in general I'm working with search for the first time. What worries me now is that the search actually works, but very crookedly: everything is shaking and the header is duplicated under the search box. I need a dynamic page update when searching on the number, VIN, sent_at, and status fields.

```html
{% extends 'base.html' %}

{% block title %}
Для крутых
{% endblock %}

{% block content %}
<div class="container">
    <div class="row mt-3">
        <input type="search" id="search-input" name="q" class="form-control form-control-dark text-bg-dark" placeholder="Поиск..." aria-label="Поиск">
        <div id="applications-list" class="mt-3">
            {% for application in applications %}
            <div class="application-item">
                <p><strong>Имя:</strong> {{ application.name }}</p>
                <p><strong>Номер телефона:</strong> {{ application.number }}</p>
                <p><strong>VIN номер машины:</strong> {{ application.VIN }}</p>
                <p><strong>Описание проблемы:</strong> {{ application.description }}</p>
                <img src="{{ application.image.url }}">
                <p><strong>Отправлено:</strong> {{ application.sent_at }}</p>
                <p><strong>Статус:</strong> {{ application.status }}</p>
                <br>
            </div>
            {% endfor %}
        </div>
        {% include 'includes/pagination.html' %}
    </div>
</div>

<script src="https://code.jquery.com/jquery-3.6.0.min.js"></script>
<script>
    $(document).ready(function() {
        function searchAndUpdate() {
            let searchQuery = $('#search-input').val();
            $.ajax({
                url: window.location.href,
                data: { q: searchQuery },
                success: function(data) {
                    $('#applications-list').html(data);
                }
            });
        }

        setInterval(searchAndUpdate, 5000);
    });
</script>
{% endblock %}
```

```python
class ProProfile(ListView):
    paginate_by = 6
    model = Applications
    template_name = 'pages/proprofile.html'

    def get_queryset(self):
        q = self.request.GET.get('q')
        if q:
            return Applications.objects.filter(
                Q(number__icontains=q) |
                Q(VIN__icontains=q) |
                Q(sent_at__icontains=q) |
                Q(status__icontains=q)
            )
        else:
            return Applications.objects.all()

    def get_context_data(self, **kwargs):
        context = super().get_context_data(**kwargs)
        q = self.request.GET.get('q')
        applications_list = self.get_queryset()
        paginator = Paginator(applications_list, self.paginate_by)
        page = self.request.GET.get('page')
        try:
            applications = paginator.page(page)
        except PageNotAnInteger:
            applications = paginator.page(1)
        except EmptyPage:
            applications = paginator.page(paginator.num_pages)
        context['q'] = q
        context['applications'] = applications
        return context
```
Key takeaways: 1. Use vectorization. 1. **Speed profile** your code! Don't assume something is faster because you _think_ it is faster; speed profile it and _prove_ it is faster. The results may surprise you. ## How to iterate over Pandas `DataFrame`s without iterating After _several weeks_ of working on this answer, here's what I've come up with: Here are **13 techniques for iterating over Pandas `DataFrame`s**. As you can see, the time it takes varies *dramatically*. The fastest technique is **\~1363x** faster than the slowest technique! The key takeaway, [as @cs95 says here](https://stackoverflow.com/a/55557758/4561887), is **_don't_ iterate! Use vectorization (["array programming"](https://en.wikipedia.org/wiki/Array_programming)) instead.** All this really means is that you should use the arrays directly in mathematical formulas rather than trying to manually iterate over the arrays. The underlying objects must support this, of course, but both Numpy and Pandas _do_. There are many ways to use vectorization in Pandas, which you can see in the plot and in my example code below. When using the arrays directly, the underlying looping still takes place, but in (I think) very optimized underlying C code rather than through raw Python. ## Results 13 techniques, numbered 1 to 13, were tested. The technique number and name is underneath each bar. The total calculation time is above each bar. 
Underneath that is the multiplier to show how much longer it took than the fastest technique to the far right: <sub>From [`pandas_dataframe_iteration_vs_vectorization_vs_list_comprehension_speed_tests.svg`](https://github.com/ElectricRCAircraftGuy/eRCaGuy_hello_world/blob/master/python/pandas_dataframe_iteration_vs_vectorization_vs_list_comprehension_speed_tests.svg) in my [eRCaGuy_hello_world](https://github.com/ElectricRCAircraftGuy/eRCaGuy_hello_world) repo (produced by [this code](https://github.com/ElectricRCAircraftGuy/eRCaGuy_hello_world/blob/master/python/pandas_dataframe_iteration_vs_vectorization_vs_list_comprehension_speed_tests.py)).</sub> [![enter image description here][1]][1] ## Summary **List comprehension** and **vectorization** (possibly with **boolean indexing**) are all you really need. Use **list comprehension** (good) and **vectorization** (best). Pure vectorization I think is _always_ possible, but may take extra work in complicated calculations. Search this answer for **"boolean indexing"**, **"boolean array"**, and **"boolean mask"** (all three are the same thing) to see some of the more complicated cases where pure vectorization can thereby be used. #### Here are the 13 techniques, listed in order of *fastest first to slowest last*. I recommend _never_ using the last (slowest) 3 to 4 techniques. 1. Technique 8: `8_pure_vectorization__with_df.loc[]_boolean_array_indexing_for_if_statment_corner_case` 1. Technique 6: `6_vectorization__with_apply_for_if_statement_corner_case` 1. Technique 7: `7_vectorization__with_list_comprehension_for_if_statment_corner_case` 1. Technique 11: `11_list_comprehension_w_zip_and_direct_variable_assignment_calculated_in_place` 1. Technique 10: `10_list_comprehension_w_zip_and_direct_variable_assignment_passed_to_func` 1. Technique 12: `12_list_comprehension_w_zip_and_row_tuple_passed_to_func` 1. Technique 5: `5_itertuples_in_for_loop` 1. 
Technique 13: `13_list_comprehension_w__to_numpy__and_direct_variable_assignment_passed_to_func` 1. Technique 9: `9_apply_function_with_lambda` 1. Technique 1: `1_raw_for_loop_using_regular_df_indexing` 1. Technique 2: `2_raw_for_loop_using_df.loc[]_indexing` 1. Technique 4: `4_iterrows_in_for_loop` 1. Technique 3: `3_raw_for_loop_using_df.iloc[]_indexing` #### Rules of thumb: 1. Techniques 3, 4, and 2 should _never_ be used. They are super slow and have no advantages whatsoever. Keep in mind though: it's not the indexing technique, such as `.loc[]` or `.iloc[]` that makes these techniques bad, but rather, it's _the `for` loop they are in_ that makes them bad! I use `.loc[]` inside the fastest (pure vectorization) approach, for instance! So, here are the 3 slowest techniques which should *never* be used: 1. `3_raw_for_loop_using_df.iloc[]_indexing` 1. `4_iterrows_in_for_loop` 1. `2_raw_for_loop_using_df.loc[]_indexing` 1. Technique `1_raw_for_loop_using_regular_df_indexing` should never be used either, but if you're going to use a raw for loop, it's faster than the others. 1. The **`.apply()`** function (`9_apply_function_with_lambda`) is ok, but generally speaking, I'd avoid it too. Technique `6_vectorization__with_apply_for_if_statement_corner_case` did perform better than `7_vectorization__with_list_comprehension_for_if_statment_corner_case`, however, which is interesting. 1. **List comprehension** is great! It's not the fastest, but it is easy to use and very fast! 1. The nice thing about it is that it can be used with *any* function that is intended to work on individual values, or array values. And this means you could have really complicated `if` statements and things inside the function. So, the tradeoff here is that it gives you great versatility with really readable and re-usable code by using external calculation functions, while still giving you great speed! 1. 
**Vectorization** is the fastest and best, and what you should use whenever the equation is simple. You can optionally use something like `.apply()` or **list comprehension** on just the more-complicated portions of the equation, while still easily using vectorization for the rest. 1. **Pure vectorization** is the absolute fastest and best, and what you should use if you _are willing to put in the effort to make it work._ 1. For simple cases, it's what you should use. 1. For complicated cases, `if` statements, etc., pure vectorization can be made to work too, through **boolean indexing,** but can add extra work and can decrease readability to do so. So, you can optionally use **list comprehension** (usually the best) or **.apply()** (generally slower, but not always) for just those edge cases instead, while still using vectorization for the rest of the calculation. Ex: see techniques `7_vectorization__with_list_comprehension_for_if_statment_corner_case` and `6_vectorization__with_apply_for_if_statement_corner_case`. ## The test data Assume we have the following Pandas DataFrame. It has 2 million rows with 4 columns (`A`, `B`, `C`, and `D`), each with random values from `-1000` to `1000`: ```lang-none df = A B C D 0 -365 842 284 -942 1 532 416 -102 888 2 397 321 -296 -616 3 -215 879 557 895 4 857 701 -157 480 ... ... ... ... ... 
1999995 -101 -233 -377 -939 1999996 -989 380 917 145 1999997 -879 333 -372 -970 1999998 738 982 -743 312 1999999 -306 -103 459 745 ``` I produced this DataFrame like this: ```py import numpy as np import pandas as pd # Create an array (numpy list of lists) of fake data MIN_VAL = -1000 MAX_VAL = 1000 # NUM_ROWS = 10_000_000 NUM_ROWS = 2_000_000 # default for final tests # NUM_ROWS = 1_000_000 # NUM_ROWS = 100_000 # NUM_ROWS = 10_000 # default for rapid development & initial tests NUM_COLS = 4 data = np.random.randint(MIN_VAL, MAX_VAL, size=(NUM_ROWS, NUM_COLS)) # Now convert it to a Pandas DataFrame with columns named "A", "B", "C", and "D" df_original = pd.DataFrame(data, columns=["A", "B", "C", "D"]) print(f"df = \n{df_original}") ``` ## The test equation/calculation I wanted to demonstrate that all of these techniques are possible on non-trivial functions or equations, so I intentionally made the equation they are calculating require: 1. `if` statements 1. data from multiple columns in the DataFrame 1. data from multiple rows in the DataFrame The equation we will be calculating for each row is this. I arbitrarily made it up, but I think it contains enough complexity that you will be able to expand on what I've done to perform any equation you want in Pandas with full vectorization: [![enter image description here][2]][2] In Python, the above equation can be written like this: ```py # Calculate and return a new value, `val`, by performing the following equation: val = ( 2 * A_i_minus_2 + 3 * A_i_minus_1 + 4 * A + 5 * A_i_plus_1 # Python ternary operator; don't forget parentheses around the entire # ternary expression! 
+ ((6 * B) if B > 0 else (60 * B)) + 7 * C - 8 * D ) ``` Alternatively, you could write it like this: ```py # Calculate and return a new value, `val`, by performing the following equation: if B > 0: B_new = 6 * B else: B_new = 60 * B val = ( 2 * A_i_minus_2 + 3 * A_i_minus_1 + 4 * A + 5 * A_i_plus_1 + B_new + 7 * C - 8 * D ) ``` Either of those can be wrapped into a function. Ex: ```py def calculate_val( A_i_minus_2, A_i_minus_1, A, A_i_plus_1, B, C, D): val = ( 2 * A_i_minus_2 + 3 * A_i_minus_1 + 4 * A + 5 * A_i_plus_1 # Python ternary operator; don't forget parentheses around the # entire ternary expression! + ((6 * B) if B > 0 else (60 * B)) + 7 * C - 8 * D ) return val ``` ## The techniques The full code is available to download and run in my **[`python/pandas_dataframe_iteration_vs_vectorization_vs_list_comprehension_speed_tests.py`](https://github.com/ElectricRCAircraftGuy/eRCaGuy_hello_world/blob/master/python/pandas_dataframe_iteration_vs_vectorization_vs_list_comprehension_speed_tests.py)** file in my [eRCaGuy_hello_world](https://github.com/ElectricRCAircraftGuy/eRCaGuy_hello_world) repo. #### Here is the code for all 13 techniques: 1. **Technique 1:** `1_raw_for_loop_using_regular_df_indexing` ```py val = [np.NAN]*len(df) for i in range(len(df)): if i < 2 or i > len(df)-2: continue val[i] = calculate_val( df["A"][i-2], df["A"][i-1], df["A"][i], df["A"][i+1], df["B"][i], df["C"][i], df["D"][i], ) df["val"] = val # put this column back into the dataframe ``` 1. **Technique 2:** `2_raw_for_loop_using_df.loc[]_indexing` ```py val = [np.NAN]*len(df) for i in range(len(df)): if i < 2 or i > len(df)-2: continue val[i] = calculate_val( df.loc[i-2, "A"], df.loc[i-1, "A"], df.loc[i, "A"], df.loc[i+1, "A"], df.loc[i, "B"], df.loc[i, "C"], df.loc[i, "D"], ) df["val"] = val # put this column back into the dataframe ``` 1. 
**Technique 3:** `3_raw_for_loop_using_df.iloc[]_indexing` ```py # column indices i_A = 0 i_B = 1 i_C = 2 i_D = 3 val = [np.NAN]*len(df) for i in range(len(df)): if i < 2 or i > len(df)-2: continue val[i] = calculate_val( df.iloc[i-2, i_A], df.iloc[i-1, i_A], df.iloc[i, i_A], df.iloc[i+1, i_A], df.iloc[i, i_B], df.iloc[i, i_C], df.iloc[i, i_D], ) df["val"] = val # put this column back into the dataframe ``` 1. **Technique 4:** `4_iterrows_in_for_loop` ```py val = [np.NAN]*len(df) for index, row in df.iterrows(): if index < 2 or index > len(df)-2: continue val[index] = calculate_val( df["A"][index-2], df["A"][index-1], row["A"], df["A"][index+1], row["B"], row["C"], row["D"], ) df["val"] = val # put this column back into the dataframe ``` For all of the next examples, we must first prepare the dataframe by adding columns with previous and next values: `A_(i-2)`, `A_(i-1)`, and `A_(i+1)`. These columns in the DataFrame will be named `A_i_minus_2`, `A_i_minus_1`, and `A_i_plus_1`, respectively: ```py df_original["A_i_minus_2"] = df_original["A"].shift(2) # val at index i-2 df_original["A_i_minus_1"] = df_original["A"].shift(1) # val at index i-1 df_original["A_i_plus_1"] = df_original["A"].shift(-1) # val at index i+1 # Note: to ensure that no partial calculations are ever done with rows which # have NaN values due to the shifting, we can either drop such rows with # `.dropna()`, or set all values in these rows to NaN. I'll choose the latter # so that the stats that will be generated with the techniques below will end # up matching the stats which were produced by the prior techniques above. ie: # the number of rows will be identical to before. # # df_original = df_original.dropna() df_original.iloc[:2, :] = np.NAN # slicing operators: first two rows, # all columns df_original.iloc[-1:, :] = np.NAN # slicing operators: last row, all columns ``` Running the vectorized code just above to produce those 3 new columns took a total of **0.044961 seconds**. 
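To see concretely what those three `.shift()` calls do, here is a tiny standalone demo on a made-up 6-row column (not the test data above):

```python
import pandas as pd

# A made-up 6-row column, just to illustrate the shifting (not the test data):
df_demo = pd.DataFrame({"A": [10, 20, 30, 40, 50, 60]})

df_demo["A_i_minus_2"] = df_demo["A"].shift(2)   # value from 2 rows back
df_demo["A_i_minus_1"] = df_demo["A"].shift(1)   # value from 1 row back
df_demo["A_i_plus_1"] = df_demo["A"].shift(-1)   # value from 1 row ahead

# Rows that have no "2 back"/"1 back"/"1 ahead" neighbor get NaN--which is
# exactly why the first two rows and the last row get masked out above:
print(df_demo["A_i_minus_2"].tolist())  # [nan, nan, 10.0, 20.0, 30.0, 40.0]
print(df_demo["A_i_plus_1"].tolist())   # [20.0, 30.0, 40.0, 50.0, 60.0, nan]
```

In other words, the shifting turns a "look at neighboring rows" problem into a "look at neighboring columns in the *same* row" problem, which is what makes the row-wise and vectorized techniques below possible.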
Now on to the rest of the techniques: 1. **Technique 5:** `5_itertuples_in_for_loop` ```py val = [np.NAN]*len(df) for row in df.itertuples(): val[row.Index] = calculate_val( row.A_i_minus_2, row.A_i_minus_1, row.A, row.A_i_plus_1, row.B, row.C, row.D, ) df["val"] = val # put this column back into the dataframe ``` 1. **Technique 6:** `6_vectorization__with_apply_for_if_statement_corner_case` ```py def calculate_new_column_b_value(b_value): # Python ternary operator b_value_new = (6 * b_value) if b_value > 0 else (60 * b_value) return b_value_new # In this particular example, since we have an embedded `if-else` statement # for the `B` column, pure vectorization is less intuitive. So, first we'll # calculate a new `B` column using # **`apply()`**, then we'll use vectorization for the rest. df["B_new"] = df["B"].apply(calculate_new_column_b_value) # OR (same thing, but with a lambda function instead) # df["B_new"] = df["B"].apply(lambda x: (6 * x) if x > 0 else (60 * x)) # Now we can use vectorization for the rest. "Vectorization" in this case # means to simply use the column series variables in equations directly, # without manually iterating over them. Pandas DataFrames will handle the # underlying iteration automatically for you. You just focus on the math. df["val"] = ( 2 * df["A_i_minus_2"] + 3 * df["A_i_minus_1"] + 4 * df["A"] + 5 * df["A_i_plus_1"] + df["B_new"] + 7 * df["C"] - 8 * df["D"] ) ``` 1. **Technique 7:** `7_vectorization__with_list_comprehension_for_if_statment_corner_case` ```py # In this particular example, since we have an embedded `if-else` statement # for the `B` column, pure vectorization is less intuitive. So, first we'll # calculate a new `B` column using **list comprehension**, then we'll use # vectorization for the rest. df["B_new"] = [ calculate_new_column_b_value(b_value) for b_value in df["B"] ] # Now we can use vectorization for the rest. 
"Vectorization" in this case # means to simply use the column series variables in equations directly, # without manually iterating over them. Pandas DataFrames will handle the # underlying iteration automatically for you. You just focus on the math. df["val"] = ( 2 * df["A_i_minus_2"] + 3 * df["A_i_minus_1"] + 4 * df["A"] + 5 * df["A_i_plus_1"] + df["B_new"] + 7 * df["C"] - 8 * df["D"] ) ``` 1. **Technique 8:** `8_pure_vectorization__with_df.loc[]_boolean_array_indexing_for_if_statment_corner_case` This uses **boolean indexing**, AKA: a **boolean mask**, to accomplish the equivalent of the `if` statement in the equation. In this way, pure vectorization can be used for the entire equation, thereby maximizing performance and speed. ```py # If statement to evaluate: # # if B > 0: # B_new = 6 * B # else: # B_new = 60 * B # # In this particular example, since we have an embedded `if-else` statement # for the `B` column, we can use some boolean array indexing through # `df.loc[]` for some pure vectorization magic. # # Explanation: # # Long: # # The format is: `df.loc[rows, columns]`, except in this case, the rows are # specified by a "boolean array" (AKA: a boolean expression, list of # booleans, or "boolean mask"), specifying all rows where `B` is > 0. Then, # only in that `B` column for those rows, set the value accordingly. After # we do this for where `B` is > 0, we do the same thing for where `B` # is <= 0, except with the other equation. # # Short: # # For all rows where the boolean expression applies, set the column value # accordingly. # # GitHub CoPilot first showed me this `.loc[]` technique. 
# See also the official documentation: # https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.loc.html # # =========================== # 1st: handle the > 0 case # =========================== df["B_new"] = df.loc[df["B"] > 0, "B"] * 6 # # =========================== # 2nd: handle the <= 0 case, merging the results into the # previously-created "B_new" column # =========================== # - NB: this does NOT work; it overwrites and replaces the whole "B_new" # column instead: # # df["B_new"] = df.loc[df["B"] <= 0, "B"] * 60 # # This works: df.loc[df["B"] <= 0, "B_new"] = df.loc[df["B"] <= 0, "B"] * 60 # Now use normal vectorization for the rest. df["val"] = ( 2 * df["A_i_minus_2"] + 3 * df["A_i_minus_1"] + 4 * df["A"] + 5 * df["A_i_plus_1"] + df["B_new"] + 7 * df["C"] - 8 * df["D"] ) ``` 1. **Technique 9:** `9_apply_function_with_lambda` ```py df["val"] = df.apply( lambda row: calculate_val( row["A_i_minus_2"], row["A_i_minus_1"], row["A"], row["A_i_plus_1"], row["B"], row["C"], row["D"] ), axis='columns' # same as `axis=1`: "apply function to each row", # rather than to each column ) ``` 1. **Technique 10:** `10_list_comprehension_w_zip_and_direct_variable_assignment_passed_to_func` ```py df["val"] = [ # Note: you *could* do the calculations directly here instead of using a # function call, so long as you don't have indented code blocks such as # sub-routines or multi-line if statements. # # I'm using a function call. calculate_val( A_i_minus_2, A_i_minus_1, A, A_i_plus_1, B, C, D ) for A_i_minus_2, A_i_minus_1, A, A_i_plus_1, B, C, D in zip( df["A_i_minus_2"], df["A_i_minus_1"], df["A"], df["A_i_plus_1"], df["B"], df["C"], df["D"] ) ] ``` 1. **Technique 11:** `11_list_comprehension_w_zip_and_direct_variable_assignment_calculated_in_place` ```py df["val"] = [ 2 * A_i_minus_2 + 3 * A_i_minus_1 + 4 * A + 5 * A_i_plus_1 # Python ternary operator; don't forget parentheses around the entire # ternary expression! 
+ ((6 * B) if B > 0 else (60 * B)) + 7 * C - 8 * D for A_i_minus_2, A_i_minus_1, A, A_i_plus_1, B, C, D in zip( df["A_i_minus_2"], df["A_i_minus_1"], df["A"], df["A_i_plus_1"], df["B"], df["C"], df["D"] ) ] ``` 1. **Technique 12:** `12_list_comprehension_w_zip_and_row_tuple_passed_to_func` ```py df["val"] = [ calculate_val( row[0], row[1], row[2], row[3], row[4], row[5], row[6], ) for row in zip( df["A_i_minus_2"], df["A_i_minus_1"], df["A"], df["A_i_plus_1"], df["B"], df["C"], df["D"] ) ] ``` 1. **Technique 13:** `13_list_comprehension_w__to_numpy__and_direct_variable_assignment_passed_to_func` ```py df["val"] = [ # Note: you *could* do the calculations directly here instead of using a # function call, so long as you don't have indented code blocks such as # sub-routines or multi-line if statements. # # I'm using a function call. calculate_val( A_i_minus_2, A_i_minus_1, A, A_i_plus_1, B, C, D ) for A_i_minus_2, A_i_minus_1, A, A_i_plus_1, B, C, D # Note: this `[[...]]` double-bracket indexing is used to select a # subset of columns from the dataframe. The inner `[]` brackets # create a list from the column names within them, and the outer # `[]` brackets accept this list to index into the dataframe and # select just this list of columns, in that order. # - See the official documentation on it here: # https://pandas.pydata.org/docs/user_guide/indexing.html#basics # - Search for the phrase "You can pass a list of columns to [] to # select columns in that order." # - I learned this from this comment here: # https://stackoverflow.com/questions/16476924/how-to-iterate-over-rows-in-a-dataframe-in-pandas/55557758#comment136020567_55557758 # - One of the **list comprehension** examples in this answer here # uses `.to_numpy()` like this: # https://stackoverflow.com/a/55557758/4561887 in df[[ "A_i_minus_2", "A_i_minus_1", "A", "A_i_plus_1", "B", "C", "D" ]].to_numpy() # NB: `.values` works here too, but is deprecated. 
See:
    # https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.values.html
]
```

Here are the results again:

[![enter image description here][1]][1]

## Using the pre-shifted rows in the 4 `for` loop techniques as well

I wanted to see if removing this `if` check and using the pre-shifted rows in the 4 `for` loop techniques would have much effect:

```py
if i < 2 or i > len(df)-2:
    continue
```

...so I created this file with those modifications: [`pandas_dataframe_iteration_vs_vectorization_vs_list_comprehension_speed_tests_mod.py`](https://github.com/ElectricRCAircraftGuy/eRCaGuy_hello_world/blob/master/python/pandas_dataframe_iteration_vs_vectorization_vs_list_comprehension_speed_tests_mod.py). Search the file for "MOD:" to find the 4 new, modified techniques. It made only a slight improvement. Here are the results of these 17 techniques now, with the 4 new ones having the word `_MOD_` near the beginning of their name, just after their number. This is over 500k rows this time, not 2M:

[![enter image description here][3]][3]

## More on `.itertuples()`

There are actually more nuances when using `.itertuples()`. To delve into some of those, read [this answer by @Romain Capron](https://stackoverflow.com/a/59413206/4561887). Here is a bar chart I made of his results:

[![enter image description here][4]][4]

My plotting code for his results is in **[`python/pandas_plot_bar_chart_better_GREAT_AUTOLABEL_DATA.py`](https://github.com/ElectricRCAircraftGuy/eRCaGuy_hello_world/blob/master/python/pandas_plot_bar_chart_better_GREAT_AUTOLABEL_DATA.py)** in my [eRCaGuy_hello_world](https://github.com/ElectricRCAircraftGuy/eRCaGuy_hello_world) repo.

## Future work

Using Cython (Python compiled into C code), or just raw C functions called by Python, could potentially be faster, but I'm not going to do that for these tests. I'd only look into and speed test those options for big optimizations. I currently don't know Cython and don't feel the need to learn it.
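Short of dropping down to C or Cython, one more pure-vectorization variant that would be worth speed testing someday is NumPy's `np.where()`, which can express the `if`/`else` on column `B` in a single vectorized call, rather than the two-step `.loc[]` boolean-mask assignment of technique 8. A quick sketch on toy data (I have *not* benchmarked this against the 13 techniques above):

```python
import numpy as np
import pandas as pd

# Toy column, just to show the pattern (not the 2-million-row test DataFrame):
df = pd.DataFrame({"B": [-3, 5, 0, 7, -1]})

# `np.where(condition, value_if_true, value_if_false)` evaluates element-wise
# over the whole column at once, replacing this if-else from the equation:
#   B_new = 6 * B if B > 0 else 60 * B
df["B_new"] = np.where(df["B"] > 0, 6 * df["B"], 60 * df["B"])

print(df["B_new"].tolist())  # [-180, 30, 0, 42, -60]
```

Note that `np.where()` computes *both* branch expressions for every row and then selects between them, so it trades a little extra arithmetic for avoiding the two separate masked assignments.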
As you can see above, simply using pure vectorization properly already runs incredibly fast, processing 2 *million* rows in only 0.1 seconds, or 20 million rows per second. ## References 1. A bunch of the official Pandas documentation, especially the `DataFrame` documentation here: https://pandas.pydata.org/pandas-docs/stable/reference/frame.html. 1. [This excellent answer by @cs95](https://stackoverflow.com/a/55557758/4561887) - this is where I learned in particular how to use list comprehension to iterate over a DataFrame. 1. [This answer about `itertuples()`, by @Romain Capron](https://stackoverflow.com/a/59413206/4561887) - I studied it carefully and edited/formatted it. 1. All of this is my own code, but I want to point out that I had dozens of chats with GitHub Copilot (mostly), Bing AI, and ChatGPT in order to figure out many of these techniques and debug my code as I went. 1. Bing Chat produced the pretty LaTeX equation for me, with the following prompt. Of course, I verified the output: > Convert this Python code to a pretty equation I can paste onto Stack Overflow: > ``` > val = ( > 2 * A_i_minus_2 > + 3 * A_i_minus_1 > + 4 * A > + 5 * A_i_plus_1 > # Python ternary operator; don't forget parentheses around the entire ternary expression! > + ((6 * B) if B > 0 else (60 * B)) > + 7 * C > - 8 * D > ) > ``` ## See also 1. This answer is also posted on my personal website here: https://gabrielstaples.com/python_iterate_over_pandas_dataframe/ 1. https://en.wikipedia.org/wiki/Array_programming - array programming, or "vectorization": > In computer science, array programming refers to solutions that allow the application of operations to an entire set of values at once. Such solutions are commonly used in scientific and engineering settings. 
> > Modern programming languages that support array programming (also known as vector or multidimensional languages) have been engineered specifically to generalize operations on scalars to apply transparently to vectors, matrices, and higher-dimensional arrays. These include APL, J, Fortran, MATLAB, Analytica, Octave, R, Cilk Plus, Julia, Perl Data Language (PDL). In these languages, an operation that operates on entire arrays can be called a vectorized operation,[1] regardless of whether it is executed on a vector processor, which implements vector instructions. 1. [Are for-loops in pandas really bad? When should I care?](https://stackoverflow.com/q/54028199/4561887) 1. [my answer](https://stackoverflow.com/a/77270403/4561887) 1. [Does pandas iterrows have performance issues?](https://stackoverflow.com/q/24870953/4561887) 1. [This answer](https://stackoverflow.com/a/24871316/4561887) 1. [My comment underneath it](https://stackoverflow.com/questions/24870953/does-pandas-iterrows-have-performance-issues#comment136223122_24871316): > ...Based on my results, I'd say, however, these are the best approaches, in this order of best first: > > 1. vectorization, > 2. list comprehension, > 3. `.itertuples()`, > 4. `.apply()`, > 5. raw `for` loop, > 6. `.iterrows().` > > I didn't test Cython. [1]: https://i.stack.imgur.com/5biMy.png [2]: https://i.stack.imgur.com/W3c12.png [3]: https://i.stack.imgur.com/HxKkJ.png [4]: https://i.stack.imgur.com/ws9db.png