```
chrome.tabs.onRemoved.addListener((tabId, removeInfo) => {
  chrome.tabs.query({}, (tabs) => {
    if (tabs.length === 1) {
      console.log("your code here");
    }
  });
});
```
To check if a user is logged in and display content based on this condition within an Antlers template, you can use the `{{ user }}` tag or its aliases. Here's a simple way to do it:

```
{{ if user }}
    <!-- Content to display if the user is logged in -->
    <p>Welcome back, {{ user:name }}!</p>
{{ else }}
    <!-- Content to display if the user is not logged in -->
    <p>Please <a href="/login">log in</a> to access the admin panel.</p>
{{ /if }}
```
I have these interfaces.

**Angular Typescript Class**

```
interface A_DTO {
  type: string;
  commonProperty: string;
  uniquePropertyA: string;
  // etc.
}

interface B_DTO {
  type: string;
  commonProperty: string;
  uniquePropertyB: string;
  // etc.
}

type AnyDTO = A_DTO | B_DTO;
```

I have an object, fetched from an API. When it is fetched, it immediately gets cast to A_DTO or B_DTO by reading the 'type' property. After that, it gets saved to a service for storage, in a single variable of type AnyDTO. (I call that service variable from the components I work with - casting back to AnyDTO doesn't cause any properties to be lost, so I'm happy.)

**Angular Template**

But, in a component, I have some template code:

```
@if (object.type == "Type_A") {
  // do something
  // I can do object.commonProperty
  // but I cannot access object.uniquePropertyA
} @else if (object.type == "Type_B") {
  // do something
  // I can do object.commonProperty
  // but I cannot access object.uniquePropertyB
}
```

Note: above, `object` gets read as type `AnyDTO = A_DTO | B_DTO`.

**Angular Typescript Class**

I tried creating a type guard on the interface, in the TypeScript class code, e.g.

```
protected isTypeA(object: any): object is A_DTO {
  return object?.Type === "Type_A";
}
```

**Angular Template**

Then:

```
@if (isTypeA(object)) {
  // do something
  // I can do object.commonProperty
  // but I still cannot access object.uniquePropertyA...
} @else if (object.type == "Type_B") {
  // do something
  // but I cannot access object.uniquePropertyB
}
```

Even with the type guard being called in the template, inside the @if, `object` still gets treated as type `A_DTO | B_DTO`. Despite what I read on the internet, type narrowing does not happen, so I can only access the common properties of `object`.
I also tried to explicitly type cast in the template, using things like `(object as A_DTO).uniquePropertyA`, but that doesn't work in the Angular template area. Any ideas for a ***dynamic*** solution, ideally one that does not involve creating separate variables for each subtype in the TypeScript class? Cheers, ST
(Random ideas: SVG animation, masking, skia-shopify.) Any ideas? Thanks for the help.
Is there any way to do page transitions in React Native (stack navigation)?
|react-native|react-native-navigation|react-animations|
## Update 3/31/24

I wrote a blog post explaining all the uses of a [Firestore Reference Type](https://code.build/p/firestore-reference-type-fxhopT).

___

# Original Post

___

Automatic JOINS:

**DOC**

```typescript
expandRef<T>(obs: Observable<T>, fields: any[] = []): Observable<T> {
  return obs.pipe(
    switchMap((doc: any) =>
      doc
        ? combineLatest(
            (fields.length === 0
              ? Object.keys(doc).filter((k: any) => {
                  const p = doc[k] instanceof DocumentReference;
                  if (p) fields.push(k);
                  return p;
                })
              : fields
            ).map((f: any) => docData<any>(doc[f]))
          ).pipe(
            map((r: any) =>
              fields.reduce(
                (prev: any, curr: any) => ({ ...prev, [curr]: r.shift() }),
                doc
              )
            )
          )
        : of(doc)
    )
  );
}
```

**COLLECTION**

```typescript
expandRefs<T>(
  obs: Observable<T[]>,
  fields: any[] = []
): Observable<T[]> {
  return obs.pipe(
    switchMap((col: any[]) =>
      col.length !== 0
        ? combineLatest(
            col.map((doc: any) =>
              (fields.length === 0
                ? Object.keys(doc).filter((k: any) => {
                    const p = doc[k] instanceof DocumentReference;
                    if (p) fields.push(k);
                    return p;
                  })
                : fields
              ).map((f: any) => docData<any>(doc[f]))
            ).reduce((acc: any, val: any) => [].concat(acc, val))
          ).pipe(
            map((h: any) =>
              col.map((doc2: any) =>
                fields.reduce(
                  (prev: any, curr: any) => ({ ...prev, [curr]: h.shift() }),
                  doc2
                )
              )
            )
          )
        : of(col)
    )
  );
}
```

Simply put this function around your observable and it will automatically expand all reference data types, providing automatic joins.

**Usage**

```typescript
this.posts = expandRefs(
  collectionData(
    query(
      collection(this.afs, 'posts'),
      where('published', '==', true),
      orderBy(fieldSort)
    ),
    { idField: 'id' }
  )
);
```

**Note:** You can also now input the fields you want to expand as a second argument in an array: `['imageDoc', 'authorDoc']`. This will increase the speed! Add `.pipe(take(1)).toPromise();` at the end for a promise version!

See [here](https://code.build/p/NTYMcXqyns7PiPbLsDzCrJ/firestore-using-reference-types-for-joins) for more info. Works in Firebase 8 or 9! Simple!

J
Set expiry dates for Google Drive: I need step-by-step instructions on how to handle this issue.
C6067 and other warnings with numbers 6000+ are Code Analysis warnings and are not associated with the compiler warnings. As suggested in the comments, [`/analyze`](https://learn.microsoft.com/en-us/cpp/build/reference/analyze-code-analysis?view=msvc-170) enables Code Analysis warnings while compiling.
I was given this block of code where `call by reference` had to be used for every method call, and I had to give the output in the format:

```none
y:[result]; y:[result]; y:[result]; x:[result]; a:[result]
```

```java
import java.util.Arrays;

public class Main {
    static int x = 2;

    public static void main(String[] args) {
        int[] a = {17, 43, 12};
        foo(a[x]);
        foo(x);
        foo(a[x]);
        System.out.println("x:" + x);
        System.out.println("a:" + Arrays.toString(a));
    }

    static void foo(int y) {
        x = x - 1;
        y = y + 2;
        if (x < 0) {
            x = 5;
        } else if (x > 20) {
            x = 7;
        }
        System.out.println("y:" + y);
    }
}
```

I'm not 100% sure how call by reference works in some cases, and I'm not sure which result is the right one. Anyway, here is one: `foo(a[x])` is called with `a[2]` (which is 12). `y` becomes 12 + 2 = 14. `x` is decremented to 1. `foo(x)` is called with `x` (which is 1). Both `x` and `y` point to the value 1 of `x`. `x` is decremented to 0 and then `y` becomes 3 because `y = y + 2` and `y` was pointing at the value 1 of `x`. `foo(a[x])` is called with `a[3]` (which doesn't exist). `x` is decremented to 2. The array `a` transforms into `17,43,14`. So the results would be like:

```none
y:14; y:3; y:?; x:2; a:17,43,14
```

I think the thing that confuses me the most is the case of `foo(x)`. Does `y` point at the variable `x`, or at the value of `x` at the moment the method is called?
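Since the confusion is about what `y` aliases, here is a sketch that simulates strict call-by-reference semantics in Python, by boxing variables in one-element lists so a callee can write through them. This models the *hypothetical* semantics the exercise asks about (real Java is strictly call-by-value), under the usual convention that an argument expression like `a[x]` is evaluated once, at the call site:

```python
# Simulate call-by-reference: 'y' is box[i], so every read/write of y
# goes through the reference. x is boxed so foo can alias it too.
x = [2]
out = []

def foo(box, i):
    x[0] = x[0] - 1
    box[i] = box[i] + 2          # y = y + 2 writes through the reference
    if x[0] < 0:
        x[0] = 5
    elif x[0] > 20:
        x[0] = 7
    out.append("y:%d" % box[i])

a = [17, 43, 12]
foo(a, x[0])   # y aliases a[2]
foo(x, 0)      # y aliases x itself: x is decremented, then gets +2
foo(a, x[0])   # x is 2 again at this call site, so y aliases a[2] once more
out.append("x:%d" % x[0])
out.append("a:%s" % a)
print("; ".join(out))
```

Under this reading, `foo(a[x])` on the third call never indexes `a[3]`, because `a[x]` is bound before the body decrements `x`.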
You should pass a custom `key` built with [`to_datetime`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.to_datetime.html) to [`sort_values`](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.sort_values.html); this will use the defined logic to sort while leaving the data unchanged:

```
(df.sort_values(by='Date',
                key=lambda x: pd.to_datetime(x, format='%m/%d/%y'))
   .to_csv(path, index=False)
)
```

Output csv:

```
Company Name,Delivery Address,Date,Customer Name
Burgerking,124 west rd,1/23/24,Peter
Mcdonalds,123 lake rd,3/30/24,Zack
```
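As a quick in-memory check of the same idea (using the sample rows from the question, without the CSV round-trip):

```python
import pandas as pd

df = pd.DataFrame({
    "Company Name": ["Mcdonalds", "Burgerking"],
    "Delivery Address": ["123 lake rd", "124 west rd"],
    "Date": ["3/30/24", "1/23/24"],
    "Customer Name": ["Zack", "Peter"],
})

# Sort chronologically via the key, leaving the string column untouched:
out = df.sort_values(by="Date", key=lambda x: pd.to_datetime(x, format="%m/%d/%y"))

print(out["Date"].tolist())  # ['1/23/24', '3/30/24']
```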
I just spent a day on this. There is something you must be aware of when you try to fix the credential problem: the AWS CLI does a lot of magic around where it gets your credentials from. You might not be using the ones you explicitly entered, but some from `.env`, from `user_home/.aws`, or from old settings from when you opened your terminal. Try, in a new console:

```
aws s3 ls
aws configure list
```

and then try the same in your Visual Studio Code console - you might get a surprise.
I have a geometric dataset of point features associated with values. Out of ~ 16000 values, about 100-200 have NaNs. I'd like to populate those with the average of the values from the 5 nearest neighbors, assuming at least 1 of them is not also associated with a NaN. The dataset looks something like: ``` FID PPM_P geometry 0 0 NaN POINT (-89.79635 35.75644) 1 1 NaN POINT (-89.79632 35.75644) 2 2 NaN POINT (-89.79629 35.75644) 3 3 NaN POINT (-89.79625 35.75644) 4 4 NaN POINT (-89.79622 35.75644) 5 5 NaN POINT (-89.79619 35.75644) 6 6 NaN POINT (-89.79616 35.75644) 7 7 NaN POINT (-89.79612 35.75645) 8 8 NaN POINT (-89.79639 35.75641) 9 9 40.823028 POINT (-89.79635 35.75641) 10 10 40.040865 POINT (-89.79632 35.75641) 11 11 36.214436 POINT (-89.79629 35.75641) 12 12 34.919571 POINT (-89.79625 35.75642) 13 13 NaN POINT (-89.79622 35.75642) 14 14 NaN POINT (-89.79619 35.75642) 15 15 NaN POINT (-89.79615 35.75642) 16 16 NaN POINT (-89.79612 35.75642) 17 17 NaN POINT (-89.79609 35.75642) 18 18 NaN POINT (-89.79606 35.75642) 19 19 NaN POINT (-89.79642 35.75638) ``` It just so happens that many of the NaNs are near the beginning of the dataset. I found the nearest neighbor weight matrix using: ``` w_knn = KNN.from_dataframe(predictions_gdf, k=5) ``` Next I wrote: ``` # row-normalise weights w_knn.transform = "r" # create lag predictions_gdf["averaged_PPM_P"] = libpysal.weights.lag_spatial(w_knn, predictions_gdf["PPM_P"]) ``` But I got back NaN in the averaged_PPM_P column. Now I'm not sure what to do. Can someone give me a hand please?
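For what it's worth, an all-NaN result is consistent with how a spatial lag is computed: it is a weighted sum over neighbours, and a single NaN neighbour makes the whole sum NaN. A tiny NumPy illustration of that propagation (plain arrays, not libpysal):

```python
import numpy as np

values = np.array([np.nan, 40.8, 40.0, 36.2, 34.9])

# A plain row-normalised weighted average over all five values:
weights = np.full(5, 1 / 5)
print(weights @ values)     # nan - one NaN poisons the whole sum

# Ignoring NaNs instead (what we actually want when filling):
print(np.nanmean(values))   # mean of the four valid neighbours only
```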
Try using Scala 2 syntax:

```scala
fastparse.parse("1234", implicit p => parseAll(MyParser.int(p)))
```

https://scastie.scala-lang.org/DmytroMitin/MrFZ0EhiSPeFDHd1IyBhrA

Possible Scala 3 syntaxes are:

```scala
fastparse.parse("1234", p => {given P[_] = p; parseAll(MyParser.int(p))})
```

https://scastie.scala-lang.org/DmytroMitin/MrFZ0EhiSPeFDHd1IyBhrA/11

```scala
fastparse.parse("1234", { case p @ given P[_] => parseAll(MyParser.int(p))})
```

https://scastie.scala-lang.org/DmytroMitin/MrFZ0EhiSPeFDHd1IyBhrA/13

```scala
fastparse.parse("1234", p => parseAll(MyParser.int(using p))(using p))
```

https://scastie.scala-lang.org/DmytroMitin/MrFZ0EhiSPeFDHd1IyBhrA/16

It would be nice if `fastparse.parse("1234", p ?=> parseAll(MyParser.int(p)))` worked, but this would require support on the fastparse side.

---

https://stackoverflow.com/questions/72034665/correct-scala-3-syntax-for-providing-a-given-from-a-higher-order-function-argume

https://github.com/scala/scala3/discussions/12007
Declarations and definitions are about **names**, not necessarily about objects. A somewhat simplified (hence, wrong) description of those terms is that a declaration tells the compiler that some name will be used in a particular way; a definition gives the compiler all the details of what that name means.

This is a declaration of the class `Point`:

```
class Point;
```

After that declaration the name `Point` can be used in ways that don't depend on its definition. `Point*` and `Point&` (pointer to `Point` and reference to `Point`, respectively) are probably the most common examples.

This is the definition of the class `Point`:

```
class Point {
public:
    int x;
    int y;
    void setPoint(int x, int y);
};
```

Its two data members, `x` and `y`, are part of the definition. Those are neither declarations nor definitions. `void setPoint(int x, int y);` is a declaration of the member function `Point::setPoint`. Presumably, there's a definition somewhere that hasn't been shown.

This is a declaration of the object `p1`:

```
extern Point p1;
```

After that declaration the compiler knows that `p1` names an object of type `Point`, and the code can do any operations on `p1` that are valid for an object of type `Point`.

This is the definition of the object `p1`:

```
Point p1;
```

This tells the compiler "here it is". Note that nothing in these descriptions mentions "memory". The definition of a class doesn't use memory; the definition of an object might mean "grab some memory for this thing", or it might mean "when the program gets here, grab some memory for this thing".
I've replaced the default button function, so I'm adding a new one. I'm trying to create a custom 50px circle "add to cart" button (fa-solid fa-cart-shopping icon, #F5F5F5 background, 5px padding), placed in the bottom left corner of product images with 10px margins. My archive template is made with Elementor using the "product archive" widget. My single product template is also made with Elementor and uses the "Product images" widget.

[What I'm trying to get](https://i.stack.imgur.com/XLqWu.jpg)

[A visual example of what I'm going for][1]

[1]: https://i.stack.imgur.com/P7vr7.jpg

For some background, I have replaced the archive "Add to cart" button text and function (to take clients to the product page), and this is the code I used. I activated it through the Code Snippets plugin.

```
add_filter( 'woocommerce_product_single_add_to_cart_text', 'woocommerce_custom_single_add_to_cart_text' );
function woocommerce_custom_single_add_to_cart_text() {
    return __( 'Buy Now', 'woocommerce' );
}

add_filter( 'woocommerce_loop_add_to_cart_link', 'wpt_custom_view_product_button', 10, 2 );
function wpt_custom_view_product_button( $button, $product ) {
    // Ignore for variable products
    //if( $product->is_type( 'variable' ) ) return $button;
    $button_text = __( "Product Info", "woocommerce" );
    return '<a class="button wpt-custom-view-product-button" href="' . $product->get_permalink() . '">' . $button_text . '</a>';
}
```

What I tried that didn't work: I added custom CSS (in Appearance > Customize > Add custom CSS) to style the "Add to Cart" button.

```
.custom-add-to-cart-btn {
    position: absolute;
    bottom: 10px;
    left: 10px;
    z-index: 99;
    width: 50px;
    height: 50px;
    border-radius: 50%;
    background-color: #F5F5F5;
    padding: 5px;
    text-align: center;
    line-height: 40px;
    box-sizing: border-box;
    cursor: pointer;
}

.custom-add-to-cart-btn i {
    font-size: 24px;
    color: #000;
}
```

I added JavaScript code to handle the click event on the custom "Add to Cart" button.
This code triggers the add to cart action and performs an AJAX request to add the product to the cart. I added the code using the "Code Snippets Pro" plugin and chose "Load JS at the end of the `<body>` section".

```
<script>
jQuery(document).ready(function($) {
    // Add custom "Add to Cart" button to each product item
    $('.elementor-widget-archive-products .elementor-archive__item').each(function() {
        var productId = $(this).find('[data-product-id]').data('product-id');
        var buttonHtml = '<a class="custom-add-to-cart-btn" data-product-id="' + productId + '"><i class="fa fa-cart-shopping"></i></a>';
        $(this).find('.elementor-image').append(buttonHtml);
    });

    // Click event handler for custom "Add to Cart" button
    $(document).on('click', '.custom-add-to-cart-btn', function(e) {
        e.preventDefault();
        var productId = $(this).data('product-id');

        // Trigger add to cart action
        $(document.body).trigger('adding_to_cart', [$(this), productId]);

        // AJAX add to cart
        $.ajax({
            type: 'POST',
            url: wc_add_to_cart_params.ajax_url,
            data: {
                'action': 'woocommerce_ajax_add_to_cart',
                'product_id': productId,
            },
            success: function(response) {
                if (response.error && response.product_url) {
                    window.location = response.product_url;
                } else {
                    // Redirect to cart page
                    window.location = wc_add_to_cart_params.cart_url;
                }
            }
        });
    });
});
</script>
```

To add the custom "Add to Cart" button to each product item on the archive page, I utilized JavaScript code to dynamically insert the button within the product item containers.
I also embedded the same JavaScript code within an Elementor HTML widget that I added to the archive template. This was intended to dynamically insert the custom "Add to Cart" button for each product item displayed on the archive page.

Despite these efforts, the custom "Add to Cart" button didn't appear anywhere on the page. Any advice will be highly appreciated.
I have a camera that I am able to use with older applications built in C/C++ and VB6 (circa 2003). I didn't write the older code. The device driver is recognized under Imaging Devices but cannot be updated to Camera in Device Manager. I can find the camera using Manager request functions. I am trying to use the camera, but DirectShow, Cv2, and AForge will not recognize the device. I only have the most recent driver and old DLL files, which cannot be assembled in VS2022. Any help?
Finding and Using Camera found in “Imaging Devices” in VB.NET
|vb.net|directshow|aforge|device-manager|
Screenshot demonstrating the use of Excel's Solver:

![See Screenshot demonstrating the use of Excel's Solver][1]

I have a task to automate a certain Excel worksheet. The worksheet happens to implement its logic with an Excel plugin called Solver. It uses a single value (-1.95624) in cell $O$9 (which is the result of the computations highlighted with red and blue ink in the diagram) as an input value and then returns three values for C, B1 and B2 using an algorithm called "GRG Nonlinear regression". My task is to emulate this logic in Python. Below is my attempt. The major problem is that I am not getting the same values for C, B1 and B2 as computed by Excel's Solver plugin. Given these datasets for xData and yData, the correct output should be: C = -2.35443383, B1 = -14.70820051, B2 = 0.0056217.

Here's my 1st attempt:

```
import numpy, scipy, matplotlib
import pandas as pd
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit
from scipy.optimize import differential_evolution
import warnings

# Use the same table name as the parameter
xData = numpy.array([-2.59772914040242,-2.28665528866907,-2.29176070881848,-2.31163972446061,-2.28369414349715,-2.27911303233721,-2.28222332344644,-2.39089535619106,-2.32144325648778,-2.17235002006179,-2.22906032068685,-2.42044014499938,-2.71639505549322,-2.65462061336346,-2.47330475191616,-2.33132910807216,-2.33025978869114,-2.61175064230516,-2.92916553244925,-2.987503044973,-3.00367414706232,-1.45507812104723])
# Use the same table name as the parameter
yData = numpy.array([0.0692847120775066,0.0922342111029099,0.0918076382491768,0.0901635409944003,0.0924824386284127,0.092867647175396,0.092605957740688,20.0838696111204451,0.0893625419994501,0.102261091024881,0.097171046758256,70.0816272542472914,0.0620128251290935,0.0657047909578125,0.0777509345715382,0.088561321341585,0.088647672874835,90.0683859871424735,0.0507304952495273,0.0479936476914665,0.0472601632188253,0.18922126828463])

def func(x, a, b, Offset):  # Sigmoid A With Offset from zunzun.com
    return 1.0 / (1.0 + numpy.exp(-a * (x - b))) + Offset

# function for genetic algorithm to minimize (sum of squared error)
def sumOfSquaredError(parameterTuple):
    warnings.filterwarnings("ignore")  # do not print warnings by genetic algorithm
    val = func(xData, *parameterTuple)
    return numpy.sum((yData - val) ** 2.0)

def generate_Initial_Parameters():
    # min and max used for bounds
    maxX = max(xData)
    minX = min(xData)
    maxY = max(yData)
    minY = min(yData)

    parameterBounds = []
    parameterBounds.append([minX, maxX])  # search bounds for a
    parameterBounds.append([minX, maxX])  # search bounds for b
    parameterBounds.append([0.0, maxY])   # search bounds for Offset

    # "seed" the numpy random number generator for repeatable results
    result = differential_evolution(sumOfSquaredError, parameterBounds, seed=3)
    return result.x

# generate initial parameter values
geneticParameters = generate_Initial_Parameters()

# curve fit the test data
params, covariance = curve_fit(func, xData, yData, geneticParameters, maxfev=50000)

# Convert parameters to Python built-in types
params = [float(param) for param in params]  # Convert numpy float64 to Python float
C, B1, B2 = params

OutputDataSet = pd.DataFrame({"C": [C], "B1": [B1], "B2": [B2], "ProType": [input_value_1], "RegType": [input_value_2]})
```

Any ideas will be helpful. Thanks in advance.

Here's my 2nd attempt. I've changed the objective function.
```
import numpy as np
import pandas as pd
from scipy.optimize import curve_fit

# Access input data passed from SQL Server
datasets = pd.DataFrame(InputDataSet)

def logistic_regression(x, C, B1, B2):
    return C / (1 + np.exp(-B1 * (x - B2)))

def initial_coefficients(num_features):
    return np.random.randn(num_features)

# Fetch x_data and y_data from SQL Server
x_data = np.array([-2.59772914040242,-2.28665528866907,-2.29176070881848,-2.31163972446061,-2.28369414349715,-2.27911303233721,-2.28222332344644,-2.39089535619106,-2.32144325648778,-2.17235002006179,-2.22906032068685,-2.42044014499938,-2.71639505549322,-2.65462061336346,-2.47330475191616,-2.33132910807216,-2.33025978869114,-2.61175064230516,-2.92916553244925,-2.987503044973,-3.00367414706232,-1.45507812104723])
y_data = np.array([0.0692847120775066,0.0922342111029099,0.0918076382491768,0.0901635409944003,0.0924824386284127,0.092867647175396,0.092605957740688,20.0838696111204451,0.0893625419994501,0.102261091024881,0.097171046758256,70.0816272542472914,0.0620128251290935,0.0657047909578125,0.0777509345715382,0.088561321341585,0.088647672874835,90.0683859871424735,0.0507304952495273,0.0479936476914665,0.0472601632188253,0.18922126828463])

initial_guess = initial_coefficients(3)  # Example initial guess

# Fit the logistic regression function to the data
params, covariance = curve_fit(logistic_regression, x_data, y_data, p0=initial_guess, maxfev=5000)

# Convert parameters to Python built-in types
params = [float(param) for param in params]  # Convert numpy float64 to Python float
C, B1, B2 = params

OutputDataSet = pd.DataFrame({"C": [C], "B1": [B1], "B2": [B2], "ProType": [input_value_1], "RegType": [input_value_2]})
```

[1]: https://i.stack.imgur.com/PMnkq.png
[2]: https://i.stack.imgur.com/eK3C7.png
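Not a full answer, but one observation: Excel's GRG Nonlinear is a local, gradient-based optimizer, so reproducing its result largely comes down to fitting the same model from a comparable starting point rather than from random coefficients. A self-contained sketch (synthetic data and made-up parameter values, not the question's dataset) showing that `curve_fit` recovers logistic parameters when the initial guess is sane:

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, C, B1, B2):
    return C / (1 + np.exp(-B1 * (x - B2)))

# Made-up "true" parameters, used only to generate clean synthetic data:
true_params = (-2.35, -3.0, 0.1)
x = np.linspace(-1.0, 1.0, 200)
y = logistic(x, *true_params)

# A rough but sane starting point (instead of np.random.randn coefficients):
p0 = (y.mean(), -1.0, 0.0)
fitted, _ = curve_fit(logistic, x, y, p0=p0, maxfev=20000)

print(fitted)  # close to true_params when the fit converges
```

With `p0` drawn from `np.random.randn`, the same fit can easily land in a poor local minimum or fail to converge, which is one plausible reason the second attempt disagrees with Solver.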
You can move-construct a _new_ `T` object in the `std::vector` in the same way you move-construct any object, i.e. `vector_of_ts.push_back(std::move(*t))`, but you can't somehow move _the same_ `T` object into the vector. --- A `std::vector<T>` is a _contiguous_ container. That is, under the hood it's a pointer to one big dynamically-allocated chunk of memory in which some number of `T` objects reside. A `std::unique_ptr<T>` is a pointer to a _single_ dynamically-allocated `T` object. You can't "add" the `T` object pointed to by the `std::uniqeu_ptr` to the `std::vector` because it isn't a part of the contiguous memory block managed by the `std::vector`.
I have a simple Helm chart which I try to run, but I'm getting an error when doing `lint`/`template --debug`. The error:

```
vagrant@ubuntu2010:~/java/projects$ helm template --debug ./message-server-helm
install.go:214: [debug] Original chart version: ""
install.go:231: [debug] CHART PATH: /home/vagrant/java/projects/message-server-helm

Error: template: message-server-helm/templates/serviceaccount.yaml:1:14: executing "message-server-helm/templates/serviceaccount.yaml" at <.Values.serviceAccount.create>: nil pointer evaluating interface {}.create
helm.go:84: [debug] template: message-server-helm/templates/serviceaccount.yaml:1:14: executing "message-server-helm/templates/serviceaccount.yaml" at <.Values.serviceAccount.create>: nil pointer evaluating interface {}.create
```

The chart files are:

Chart.yaml

```
apiVersion: v2
name: message-server-helm
description: A Helm chart for Kubernetes
type: application
version: 0.1.0
appVersion: "1.16.0"
```

values.yaml

```
replicaCount: 1
image:
  repository: "docker-message-server"
  tag: "0.0.1-SNAPSHOT"
  pullPolicy: IfNotPresent
service:
  type: NodePort
  port: 80
```

deployment.yaml

```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "message-server-helm.fullname" . }}
  labels:
    app.kubernetes.io/name: {{ include "message-server-helm.name" . }}
    helm.sh/chart: {{ include "message-server-helm.chart" . }}
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/managed-by: {{ .Release.Service }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app.kubernetes.io/name: {{ include "message-server-helm.name" . }}
      app.kubernetes.io/instance: {{ .Release.Name }}
  template:
    metadata:
      labels:
        app.kubernetes.io/name: {{ include "message-server-helm.name" . }}
        app.kubernetes.io/instance: {{ .Release.Name }}
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          ports:
            - name: http
              containerPort: 8080
              protocol: TCP
```

service.yaml

```
apiVersion: v1
kind: Service
metadata:
  name: {{ include "message-server-helm.fullname" . }}
  labels:
    app.kubernetes.io/name: {{ include "message-server-helm.name" . }}
    helm.sh/chart: {{ include "message-server-helm.chart" . }}
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/managed-by: {{ .Release.Service }}
spec:
  type: {{ .Values.service.type }}
  ports:
    - port: {{ .Values.service.port }}
      targetPort: http
      protocol: TCP
      name: http
  selector:
    app.kubernetes.io/name: {{ include "message-server-helm.name" . }}
    app.kubernetes.io/instance: {{ .Release.Name }}
```

serviceaccount.yaml

```
{{- if .Values.serviceAccount.create -}}
apiVersion: v1
kind: ServiceAccount
metadata:
  name: {{ include "message-server-helm.serviceAccountName" . }}
  labels:
    {{- include "message-server-helm.labels" . | nindent 4 }}
  {{- with .Values.serviceAccount.annotations }}
  annotations:
    {{- toYaml . | nindent 4 }}
  {{- end }}
automountServiceAccountToken: {{ .Values.serviceAccount.automount }}
{{- end }}
```
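The error message points at the cause: serviceaccount.yaml dereferences `.Values.serviceAccount.create`, but values.yaml defines no `serviceAccount` key at all, so the lookup hits a nil map. A values.yaml stanza along the lines of the default `helm create` scaffold (key names assumed from the template above, values illustrative) would satisfy all three references:

```
serviceAccount:
  create: true
  automount: true
  annotations: {}
  name: ""
```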
Helm getting `at <.Values.serviceAccount.create>: nil pointer evaluating interface {}.create`
After `drop_na` it shows 0 obs. of 68 variables:

![](https://i.stack.imgur.com/elrGR.png)

![](https://i.stack.imgur.com/txJEv.png)

After `drop_na` I don't see any results in the table except dates. This was not the case when I tried it before - I could see the values in the table.

```
library(tidyverse)

WDI_GDP <- read_csv("C:/Users/ASYA/Desktop/P_Data_Extract_From_World_Development_Indicators/b0351889-13b3-4cbe-a5c0-a2dd9d633eab_Data.csv")

WDI_GDP <- WDI_GDP %>%
  mutate(across(contains("[YR"), ~na_if(.x, ".."))) %>%
  mutate(across(contains("[YR"), as.numeric))

WDI_GDP <- drop_na(WDI_GDP)
```

![](https://i.stack.imgur.com/oqjo8.png)
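A likely explanation is that after the conversion every row still contains at least one NA somewhere, and `drop_na()` with no arguments removes a row if *any* column is NA. The same behaviour is easy to reproduce with pandas' analogous `dropna` (a sketch with made-up data, not the WDI file):

```python
import numpy as np
import pandas as pd

# Every row has at least one missing value in SOME column...
df = pd.DataFrame({
    "year": [2000, 2001, 2002],
    "gdp": [1.0, np.nan, 3.0],
    "pop": [np.nan, 2.0, np.nan],
})

print(len(df.dropna()))                # 0 - dropping rows with ANY NA empties the frame

# ...so restrict the check to the columns you actually care about:
print(len(df.dropna(subset=["gdp"])))  # 2
```

The tidyr equivalent of the restricted form is `drop_na(WDI_GDP, some_column)`.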
I found how to easily insert PNGs as axis tick labels, following [this post.](https://stackoverflow.com/questions/54247880/image-as-axis-tick-ggplot) However, this seems to work only for a single axis, and I am not quite sure how to draw both my y-axis and x-axis grobs onto my plot. To plot one grob for **either** the y or the x axis, one would do something like this:

```
library(ggplot2)
library(cowplot)
library(png)
library(RCurl)

chiMate <- data.frame(
  pairType = c("assortative", "disassortative", "disassortative", "assortative", "disassortative", "disassortative", "assortative"),
  `Male Phenotype` = c("metallic", "rufipennis", "metallic", "militaris-a", "metallic", "militaris-a", "rufipennis"),
  `Female Phenotype` = c("metallic", "metallic", "militaris-a", "militaris-a", "rufipennis", "rufipennis", "rufipennis"),
  exp = c(0.10204082, 0.11224490, 0.28571429, 0.10204082, 0.02040816, 0.08163265, 0.29591837),
  obs = c(0.04081633, 0.02040816, 0.02040816, 0.03061224, 0.00000000, 0.00000000, 0.03061224),
  check.names = FALSE
)

rufiPNG <- "https://i.stack.imgur.com/Q8BqO.png"
rufipennis <- readPNG(getURLContent(rufiPNG))
militarisPNG <- "https://i.stack.imgur.com/EtdfR.png"
militaris_a <- readPNG(getURLContent(militarisPNG))
metallicPNG <- "https://i.stack.imgur.com/YIDoA.png"
metallic <- readPNG(getURLContent(metallicPNG))

# my crazy plot
p <- ggplot(chiMate, aes(x = `Male Phenotype`, y = `Female Phenotype`)) +
  geom_point(aes(color = pairType, size = 2.1 * exp)) +
  geom_point(color = "black", aes(size = 1.6 * exp)) +
  geom_point(color = "white", aes(size = 1.3 * exp)) +
  geom_point(aes(size = 1.25 * obs), color = "salmon") +
  scale_color_manual("Pair type", values = c("assortative" = "cornflowerblue", "disassortative" = "aquamarine3")) +
  scale_size_continuous(range = c(10, 45)) +
  scale_x_discrete(expand = c(.3, .3)) +
  scale_y_discrete(expand = c(0.3, 0.3)) +
  geom_text(aes(label = round(obs, digits = 3))) +
  theme_minimal() +
  labs(x = "Male Phenotype", y = "Female Phenotype", size = "Mate Count") +
  theme(legend.position = 'bottom',
        legend.box.background = element_rect(color = 'black'),
        axis.title = element_text(size = 14),
        axis.text = element_text(size = 10)) +
  guides(color = guide_legend(override.aes = list(size = 12)), size = "none")

# create canvas
ystrip <- axis_canvas(p, axis = 'y') +
  draw_image(rufipennis, y = 2.35, scale = 4) +
  draw_image(militaris_a, y = 1.5, scale = 4) +
  draw_image(metallic, y = .65, scale = 4)

# 'draw' grob onto ggplot, 'p'
ggdraw(insert_yaxis_grob(p, ystrip, position = "left", width = grid::unit(0.05, "null")))
```

I tried

```
ggdraw(p) +
  insert_yaxis_grob(p, ystrip, position = "left", width = grid::unit(0.05, "null")) +
  insert_xaxis_grob(p, xstrip, position = "bottom", height = grid::unit(0.1, "null"))
```

and

```
ggdraw(insert_yaxis_grob(p, ystrip, position = "left", width = grid::unit(0.05, "null")),
       insert_xaxis_grob(p, xstrip, position = "bottom", height = grid::unit(0.1, "null")))
```

to no avail. Any ideas?
I'm looking for an equivalent to x86/64's FTZ/DAZ instructions found in <immintrin.h>, but for M1/M2/M3. Also, is it safe to assume that "apple silicon" equals ARM? I am in the process of porting a realtime audio plugin (VST3/CLAP) from x64 Windows to MacOS on apple silicon hardware. At least on x64, it is important for realtime audio code, that denormal numbers (also known as subnormal numbers) are treated as zero by the hardware since these very-small-numbers are otherwise handled in software and that causes a real performance hit. Now, as denormal numbers are part of the IEEE floating point standard, and they are explicitly mentioned over here https://developer.arm.com/documentation/ddi0403/d/Application-Level-Architecture/Application-Level-Programmers--Model/The-optional-Floating-point-extension/Floating-point-data-types-and-arithmetic?lang=en#BEICCFII, I believe there must be an equivalent to intel's _MM_SET_FLUSH_ZERO_MODE and _MM_SET_DENORMALS_ZERO_MODE macros. Of course, I might be mistaken, or maybe the hardware flushes to zero by default (it's not really clear to me from the ARM document), in which case, I'd like to know that, too.
```
struct MyStruct {
    var a = 0
    func foo() { print("Ok") }
    mutating func increase() { a += 1 }
}

func runner(_ function: () -> Void) {
    function()
}

var myStruct = MyStruct()
runner(myStruct.foo)      // Ok
runner(myStruct.increase) // Escaping autoclosure captures 'inout' parameter 'self'
```

Where is `autoclosure` here? And why is it escaping?
How is passing a function as a parameter related to escaping autoclosure?
|printing|
So I have a grid of 12 buttons with a set of 12 divs. when i click button 1 div 1 appears ! correct. however when i click button 2 div 2 appears, great. However button one still exists. How do I make it so that when i click button 2 div 1 is display none and div 2 is displaying. Below is what I have tried, I have thought about adding the remove class to every click function for every div if that makes sense. ``` <script> document.addEventListener('DOMContentLoaded', function() { jQuery(function($){ $('.clicktoshow').click(function(){ if($('.service-wrapper-box').hasClass('showclick')){ $('.service-wrapper-box').removeClass('showclick') }else{ $('.service-wrapper-box').addClass('showclick') } }); }); jQuery(function($){ $('.clicktoshow2').click(function(){ if($('.service-wrapper-box2').hasClass('showclick2')){ $('.service-wrapper-box2').removeClass('showclick2') }else{ $('.service-wrapper-box2').addClass('showclick2') } }); }); jQuery(function($){ $('.clicktoshow3').click(function(){ if($('.service-wrapper-box3').hasClass('showclick3')){ $('.service-wrapper-box3').removeClass('showclick3') }else{ $('.service-wrapper-box3').addClass('showclick3') } }); }); jQuery(function($){ $('.clicktoshow4').click(function(){ if($('.service-wrapper-box4').hasClass('showclick4')){ $('.service-wrapper-box4').removeClass('showclick4') }else{ $('.service-wrapper-box4').addClass('showclick4') } }); }); jQuery(function($){ $('.clicktoshow5').click(function(){ if($('.service-wrapper-box5').hasClass('showclick5')){ $('.service-wrapper-box5').removeClass('showclick5') }else{ $('.service-wrapper-box5').addClass('showclick5') } }); }); jQuery(function($){ $('.clicktoshow6').click(function(){ if($('.service-wrapper-box6').hasClass('showclick6')){ $('.service-wrapper-box6').removeClass('showclick6') }else{ $('.service-wrapper-box6').addClass('showclick6') } }); }); jQuery(function($){ $('.clicktoshow7').click(function(){ if($('.service-wrapper-box7').hasClass('showclick7')){ 
$('.service-wrapper-box7').removeClass('showclick7') }else{ $('.service-wrapper-box7').addClass('showclick7') } }); }); jQuery(function($){ $('.clicktoshow8').click(function(){ if($('.service-wrapper-box8').hasClass('showclick8')){ $('.service-wrapper-box8').removeClass('showclick8') }else{ $('.service-wrapper-box8').addClass('showclick8') } }); }); jQuery(function($){ $('.clicktoshow9').click(function(){ if($('.service-wrapper-box9').hasClass('showclick9')){ $('.service-wrapper-box9').removeClass('showclick9') }else{ $('.service-wrapper-box9').addClass('showclick9') } }); }); jQuery(function($){ $('.clicktoshow0').click(function(){ if($('.service-wrapper-box0').hasClass('showclick0')){ $('.service-wrapper-box0').removeClass('showclick0') }else{ $('.service-wrapper-box0').addClass('showclick0') } }); }); jQuery(function($){ $('.clicktoshow1').click(function(){ if($('.service-wrapper-box1').hasClass('showclick1')){ $('.service-wrapper-box1').removeClass('showclick1') }else{ $('.service-wrapper-box1').addClass('showclick1') } }); }); jQuery(function($){ $('.clicktoshow10').click(function(){ if($('.service-wrapper-box10').hasClass('showclick10')){ $('.service-wrapper-box10').removeClass('showclick10') }else{ $('.service-wrapper-box10').addClass('showclick10') } }); }); }); </script> <style> .clicktoshow, .clicktoshow2, .clicktoshow3, .clicktoshow4, .clicktoshow5, .clicktoshow6, .clicktoshow7, .clicktoshow8, .clicktoshow9, .clicktoshow0, .clicktoshow1, .clicktoshow10{ cursor: pointer; } .showclick, .showclick2, .showclick3, .showclick4, .showclick5, .showclick6, .showclick7, .showclick8, .showclick9, .showclick0, .showclick1, .showclick10{ display: flex !important; } .service-wrapper-box, .service-wrapper-box2, .service-wrapper-box3, .service-wrapper-box4, .service-wrapper-box5, .service-wrapper-box6, .service-wrapper-box7, .service-wrapper-box8, .service-wrapper-box9, .service-wrapper-box0, .service-wrapper-box1, .service-wrapper-box10{ display: none; } </style> ```
jQuery: How to stop other divs from still showing when I click a different button
|jquery|
null
{"OriginalQuestionIds":[26548495],"Voters":[{"Id":22180364,"DisplayName":"Jan"},{"Id":2530121,"DisplayName":"L Tyrone"},{"Id":-1,"DisplayName":"Community","BindingReason":{"DuplicateApprovedByAsker":""}}]}
Creating a module for this purpose is inevitable and not a strenuous task at all. All you need is a `package.json` with an `exports` field to limit which submodules can be loaded from within the package. From the [official Node.js docs](https://nodejs.org/api/packages.html#exports):

> The "exports" field allows defining the entry points of a package when imported by name loaded via a node_modules lookup. It is an alternative to the "main" that can support defining subpath exports and conditional exports while encapsulating internal unexported modules

⚠️ Note that all paths defined in `"exports"` must be relative file URLs starting with `./`, except for the default one, `.`

So in your case, the `package.json` inside your `facade` folder would be:

```json
{
  "name": "@facade",
  "private": true,
  "version": "0.0.0",
  "exports": {
    "./B.tsx": "./B.tsx"
  }
}
```

Now the external code outside of the `facade` folder (a.k.a. the `@facade` package) can only `import {B} from "@facade/B.tsx";`. `@facade/A.tsx` and `@facade/C.tsx` are inaccessible to the external code.
null
To expand on [Marc Gravell's answer](https://stackoverflow.com/a/73786244/14860947), I have implemented a simple method to demonstrate an example.

```cs
public static void CopyWithoutAllocation(long value, Span<byte> dest)
{
    for (var i = 0; i < sizeof(long); i++)
    {
        dest[i] = (byte)((value >> 8 * i) & 0xFF);
    }
}
```

This can be modified to be generic and support other numeric types.
> The item specific Brand is missing I do see ["ItemSpecifics" in the documentation](https://developer.ebay.com/devzone/xml/docs/Reference/eBay/AddItem.html#Request.Item.ItemSpecifics) So eBay might expect [item specifics](https://developer.ebay.com/api-docs/user-guides/static/trading-user-guide/include-item-specifics.html), like the brand, to be tucked inside that specific part of the XML `<ItemSpecifics>`: ```xml <?xml version="1.0" encoding="utf-8"?> <AddItemRequest xmlns="urn:ebay:apis:eBLBaseComponents"> ... <Item> ... <ItemSpecifics> <NameValueList> <Name>Brand</Name> <Value>iPhone 12 Pro Max</Value> </NameValueList> </ItemSpecifics> ... </Item> </AddItemRequest> ``` By wrapping your brand info inside `<ItemSpecifics>` and `<NameValueList>`, eBay should know exactly where to find the brand name you are trying to add.
I have a problem with the next auth session. For some reason it returns these errors. [enter image description here](https://i.stack.imgur.com/ik810.png) Also, if I create inside api another folder with route.ts and try to access example, api/hello/route.ts => localhost:3000/api/hello => 404. it doesn't work either. I think this comes from the configuration with Next Intl but I can't find anything. folder structure:[enter image description here](https://i.stack.imgur.com/hKrc6.png) when I call the 'next-auth/react' import signIn I'm redirected to [locale]/login. And returns 404. [enter image description here](https://i.stack.imgur.com/iucM2.png) middleware.ts: ``` import createMiddleware from 'next-intl/middleware' import { locales, localePrefix, pathnames } from './config' import { withAuth } from 'next-auth/middleware' import { NextRequest } from 'next/server' const privateRoutes = ['/create-add', '/profile'] const intlMiddleware = createMiddleware({ // A list of all locales that are supported // Used when no locale matches defaultLocale: 'es', locales, pathnames, localePrefix, localeDetection: false, }) const authMiddleware = withAuth( function onSuccess(req: any) { console.log('onSuccess') return intlMiddleware(req) } ) export default function middleware(req: NextRequest) { // Define a regex pattern for private URLs const excludePattern = `^(/(${locales.join('|')}))?(${privateRoutes.join( '|' )})$` //Esta comprobando si la ruta es privada const publicPathnameRegex = RegExp(excludePattern, 'i') const isPublicPage = !publicPathnameRegex.test(req.nextUrl.pathname) if (isPublicPage) { // Apply Next-Intl middleware for public pages return intlMiddleware(req) } else { // Apply Next-Auth middleware for private pages return (authMiddleware as any)(req) } } export const config = { matcher: [ // Skip paths that should not be internationalized '/((?!api|_next/static|_next/image|favicon.ico).*)', // Enable a redirect to a matching locale at the root '/', // Set a cookie to 
remember the previous locale for // all requests that have a locale prefix '/(en|ca|fr|pt)/:path*', // Enable redirects that add missing locales // (e.g. `/pathnames` -> `/en/pathnames`) '/((?!_next|_vercel|.*\\..*).*)', ], } ``` i18n.ts ``` import {notFound} from "next/navigation"; import {getRequestConfig} from 'next-intl/server'; import { locales } from "./config"; export default getRequestConfig(async ({locale}) => { // Validate that the incoming `locale` parameter is valid if (!locales.includes(locale as any)) notFound(); return { messages: (await import(`./messages/${locale}.json`)).default }; }); ``` providers.tsx ``` 'use client' import { NextUIProvider } from '@nextui-org/react' import { SessionProvider } from 'next-auth/react' import { useRouter } from 'next/navigation' export function Providers({ children, }: { children: React.ReactNode }) { const router = useRouter() return ( <SessionProvider> <NextUIProvider navigate={router.push}>{children}</NextUIProvider> </SessionProvider> ) } ``` layout.tsx ``` import type { Metadata } from 'next' import { Quicksand } from 'next/font/google' import Navbar from '@/components/navbar/navbar' import { NextIntlClientProvider } from 'next-intl' import '@/app/globals.css' import { Providers } from '@/providers/providers' import RootContext from '../context/rootContext' import Message from '@/components/message/message' import { getMessages } from 'next-intl/server' const quicksand = Quicksand({ weight: ['300', '400', '500', '600', '700'], subsets: ['latin'], variable: '--font-quicksand', }) export const metadata: Metadata = { title: 'Create Next App', description: 'Generated by create next app', } export default async function RootLayout({ children, params: { locale }, }: { children: React.ReactNode params: { locale: string } }) { const messages = await getMessages() return ( <html lang={locale}> <body className={`${quicksand.variable}` + ' ' + 'omd'}> <RootContext> <NextIntlClientProvider messages={messages}> 
<Providers> <Message /> <Navbar locale={locale} /> {children} </Providers> </NextIntlClientProvider> </RootContext> </body> </html> ) } ``` I've tried to update the regex matcher at middleware but not working. I've tried to create other route handlers as a test and returns 404 too.
Route Handler not working Next auth, Next Intl & Next 14
|next.js|next-auth|next-intl|
null
Also, when using gitconfig files to enable GPG signing, make sure to use the long key ID from when you created your GPG key, i.e.:

    [user]
        email = example@email.com
        name = My Name
        signingkey = ########################################
    [gpg]
        program = gpg
    [commit]
        gpgsign = true

This should help if you get errors such as not being able to sign because a private key does not exist.

If you want to find out these credentials, go to Git Bash and run the command

    gpg --list-secret-keys --keyid-format=long

which should output something along the lines of:

    sec   rsa4096/SHORT_KEY_ID yyyy-mm-dd [SC] [expires: yyyy-mm-dd]
          LONG_KEY_ID
    uid   [ultimate] My Name (comment) <example@email.com>
    ssb   rsa4096/SHORT_KEY_ID yyyy-mm-dd [E] [expires: yyyy-mm-dd]

You will want to use the LONG_KEY_ID for the signing key! Hope that helps.
I am trying to use the DGS Codegen plugin in my Spring Boot GraphQL project. Here is the build.gradle.kts file, where the respective dependencies are declared:

```kotlin
plugins {
    java
    id("org.springframework.boot") version "3.2.4"
    id("io.spring.dependency-management") version "1.1.4"
    id("com.netflix.dgs.codegen") version "6.1.5"
}

dependencyManagement {
    imports {
        mavenBom("com.netflix.graphql.dgs:graphql-dgs-platform-dependencies:latest.release")
    }
}
```

The plugin generates the code as expected from the schema file. However, I am still seeing the unresolved reference error as below:

[Error screen shot](https://i.stack.imgur.com/LCj73.png)

Do we need to check any other config settings to make sure the plugin is imported properly? Any suggestions to debug more?

**Tried options:**

- Upgraded to the latest package
- Invalidated cache and restarted IntelliJ
- Tried to clean before the Gradle build
I want to check an array of dates to see if they are increasing. For example, ['12/12/2023','13/12/2023'] would return true. However, ['12/12/2023','11/12/2023'] would return false. I have been playing around with a function suggested by users for checking integer values. I thought I could do the same but use dates instead. Here is the function: ``` function isIncreasing(dates) { for (var i = 1; i < dates.length; i++) { let convertedToDate = new Date(dates[i]); //console.log(currentDate); if (convertedToDate[i] !== convertedToDate[i - 1] && convertedToDate[i] != convertedToDate[i - 1] + 1) { return false; } } return true; } console.log(isIncreasing(['12/12/2023','11/12/2023','10/12/2023'])) ``` The function does not work as the above returns true. Any help is appreciated.
Check array of dates to see if increasing
|javascript|arrays|date|
null
You could simply replace [`DataFrame.with_columns()`](https://docs.pola.rs/py-polars/html/reference/dataframe/api/polars.DataFrame.with_columns.html) with [`DataFrame.select()`](https://docs.pola.rs/py-polars/html/reference/dataframe/api/polars.DataFrame.select.html) method: ```python df = pl.DataFrame( {"A (%)": [1, 2, 3], "B": [4, 5, 6], "C (Euro)": ["abc", "def", "ghi"]} ).select( pl.all().name.map( lambda c: c.replace(" ", "_") .replace("(%)", "pct") .replace("(Euro)", "euro") .lower() ) ) ┌───────┬─────┬────────┐ │ a_pct ┆ b ┆ c_euro │ │ --- ┆ --- ┆ --- │ │ i64 ┆ i64 ┆ str │ ╞═══════╪═════╪════════╡ │ 1 ┆ 4 ┆ abc │ │ 2 ┆ 5 ┆ def │ │ 3 ┆ 6 ┆ ghi │ └───────┴─────┴────────┘ ``` in your case it's probably also simpler to use [`DataFrame.rename()`](https://docs.pola.rs/py-polars/html/reference/dataframe/api/polars.DataFrame.rename.html) instead: ```python ... .rename( lambda c: c.replace(" ", "_") .replace("(%)", "pct") .replace("(Euro)", "euro") .lower() ) ┌───────┬─────┬────────┐ │ a_pct ┆ b ┆ c_euro │ │ --- ┆ --- ┆ --- │ │ i64 ┆ i64 ┆ str │ ╞═══════╪═════╪════════╡ │ 1 ┆ 4 ┆ abc │ │ 2 ┆ 5 ┆ def │ │ 3 ┆ 6 ┆ ghi │ └───────┴─────┴────────┘ ```
I am writing a front end in Access that connects to SQL Server. I am using the default {SQL Server} driver since some of the computers/laptops/tablets that will be using it probably will not have a driver installed for SQL Server. The FE opens fine on my computer as one would expect, and will open on another computer (that does not have any driver's installed), but when attempting to open the FE on 2 other machines, both get stuck at the ACCESS splash screen. I have checked Task Manager and the Microsoft Access (32-bit) background process is running. All of our machines are 64-bit. Can anyone help with this? I've researched trying to find a different default driver for SQL Server but have come up empty handed. Installing a driver like {ODBC Driver 18 for SQL Server} isn't feasible since it would entail having our outsourced IT company install it and charging us for it. Follow up: It is the Microsoft Access splash screen that is locking up (the one that appears whenever you open Access), not a custom one. And I am not deploying the DB, simply sending it in an email for the user to copy to their computer. I am using Autoexec to run my Main function to connect to the backend server. The VBA to connect is: Public curDB as ADODB.Recordset: Set curDB = New ADODB.Connection conString = "Driver={SQL Server};Server=HL-XXX;Database=PalletQuoting;UID=XXXX;PWD=XXXX;Encrypt=NO"
Custom styled "Add to cart" button in WooCommerce product archive pages
C pointer addresses and casting
|c++|c|
null
Harris uses the structure tensor. For the structure tensor you must blur the square of the x derivative (not square the blurred derivative), and likewise for the y derivative. The two off-diagonal elements are the blur of the product of the x and y derivatives (whereas you compute the product of the magnitudes of the blurred derivatives). So:

1. Square the derivatives before computing the blur.
2. Remove the `abs()` calls before computing the product.

Both of these can be done by

    gl_FragColor = vec4(ix*ix, iy*iy, ix*iy, 0.0);

Those three image planes must then be blurred, and the subsequent computation updated to reflect that we already have squares.
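If it helps to sanity-check the math outside the shader, here is a minimal NumPy sketch of the same idea: the derivative *products* are formed first and blurred afterwards, with no `abs()`. This is an illustration, not your GLSL pipeline; the box blur, the `k` constant, and the synthetic test image are arbitrary choices for the demo.

```python
import numpy as np

def box_blur(a, r=2):
    # Simple (2r+1) x (2r+1) box filter, same-size output (wrap-around borders)
    out = np.zeros_like(a, dtype=float)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out += np.roll(np.roll(a, dy, axis=0), dx, axis=1)
    return out / (2 * r + 1) ** 2

def harris_response(img, k=0.04):
    iy, ix = np.gradient(img.astype(float))
    # Structure tensor entries: blur the *products* of the derivatives
    ixx = box_blur(ix * ix)
    iyy = box_blur(iy * iy)
    ixy = box_blur(ix * iy)   # keep the sign: no abs()
    det = ixx * iyy - ixy * ixy
    trace = ixx + iyy
    return det - k * trace ** 2

img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0          # white square: four corners, four edges
r = harris_response(img)
```

With this formulation the response is positive at the square's corners, negative along its edges, and zero in flat regions, which is exactly the behavior Harris relies on and which the blurred-then-squared variant loses.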
{"Voters":[{"Id":10952503,"DisplayName":"Elikill58"},{"Id":16217248,"DisplayName":"CPlus"},{"Id":7201774,"DisplayName":"Rich"}],"SiteSpecificCloseReasonIds":[13]}
Screenshot demonstrating the use of Excel's Solver: ![See Screenshot demonstrating the use of Excel's Solver][1] I have a task to automate a certain excel worksheet. The worksheet happens to implement a logic with an excel plugin called Solver. It uses a single value(-1.95624) in Cell $O$9 (which is the result of computations highlighted with red and blue ink in the diagram ) as an input value and then returns three values for C, B1 and B2 using an algorithm called "GRG Non linear regression". My task is to emulate this logic in Python. Below is my attempt. The major problem, is I am not getting the same values for C, B1 and B2 as computed by Excel's Solver plugin. Given these datasets for xData and yData, the correct output should be: C= -2.35443383, B1 = -14.70820051, B2 = 0.0056217 Here's My 1st Attempt: ``` import numpy, scipy, matplotlib import pandas as pd import matplotlib.pyplot as plt from scipy.optimize import curve_fit from scipy.optimize import differential_evolution import warnings xData = numpy.array([-2.59772914040242,-2.28665528866907,-2.29176070881848,-2.31163972446061,-2.28369414349715,-2.27911303233721,-2.28222332344644,-2.39089535619106,-2.32144325648778,-2.17235002006179,-2.22906032068685,-2.42044014499938,-2.71639505549322,-2.65462061336346,-2.47330475191616,-2.33132910807216,-2.33025978869114,-2.61175064230516,-2.92916553244925,-2.987503044973,-3.00367414706232,-1.45507812104723]) # Use the same table name as the parameter yData = numpy.array([0.0692847120775066,0.0922342111029099,0.0918076382491768,0.0901635409944003,0.0924824386284127,0.092867647175396,0.092605957740688,20.0838696111204451,0.0893625419994501,0.102261091024881,0.097171046758256,70.0816272542472914,0.0620128251290935,0.0657047909578125,0.0777509345715382,0.088561321341585,0.088647672874835,90.0683859871424735,0.0507304952495273,0.0479936476914665,0.0472601632188253,0.18922126828463]) # Use the same table name as the parameter def func(x, a, b, Offset): # Sigmoid A With Offset 
from zunzun.com return 1.0 / (1.0 + numpy.exp(-a * (x-b))) + Offset # function for genetic algorithm to minimize (sum of squared error) def sumOfSquaredError(parameterTuple): warnings.filterwarnings("ignore") # do not print warnings by genetic algorithm val = func(xData, *parameterTuple) return numpy.sum((yData - val) ** 2.0) def generate_Initial_Parameters(): # min and max used for bounds maxX = max(xData) minX = min(xData) maxY = max(yData) minY = min(yData) parameterBounds = [] parameterBounds.append([minX, maxX]) # search bounds for a parameterBounds.append([minX, maxX]) # search bounds for b parameterBounds.append([0.0, maxY]) # search bounds for Offset # "seed" the numpy random number generator for repeatable results result = differential_evolution(sumOfSquaredError, parameterBounds, seed=3) return result.x # generate initial parameter values geneticParameters = generate_Initial_Parameters() # curve fit the test data params, covariance = curve_fit(func, xData, yData, geneticParameters,maxfev=50000) # Convert parameters to Python built-in types params = [float(param) for param in params] # Convert numpy float64 to Python float C, B1, B2 = params OutputDataSet = pd.DataFrame({"C": [C], "B1": [B1], "B2": [B2],"ProType":[input_value_1],"RegType":[input_value_2]}) ``` Any Ideas will be helpful? Thanks in advance Here's My 2nd Attempt: I've changed the objective function. 
``` import numpy as np import pandas as pd from scipy.optimize import curve_fit # Access input data passed from SQL Server datasets = pd.DataFrame(InputDataSet) def logistic_regression(x, C, B1, B2): return C / (1 + np.exp(-B1 * (x - B2))) def initial_coefficients(num_features): return np.random.randn(num_features) # Fetch x_data and y_data from SQL Server x_data = np.array([-2.59772914040242,-2.28665528866907,-2.29176070881848,-2.31163972446061,-2.28369414349715,-2.27911303233721,-2.28222332344644,-2.39089535619106,-2.32144325648778,-2.17235002006179,-2.22906032068685,-2.42044014499938,-2.71639505549322,-2.65462061336346,-2.47330475191616,-2.33132910807216,-2.33025978869114,-2.61175064230516,-2.92916553244925,-2.987503044973,-3.00367414706232,-1.45507812104723]) y_data = np.array([0.0692847120775066,0.0922342111029099,0.0918076382491768,0.0901635409944003,0.0924824386284127,0.092867647175396,0.092605957740688,20.0838696111204451,0.0893625419994501,0.102261091024881,0.097171046758256,70.0816272542472914,0.0620128251290935,0.0657047909578125,0.0777509345715382,0.088561321341585,0.088647672874835,90.0683859871424735,0.0507304952495273,0.0479936476914665,0.0472601632188253,0.18922126828463]) initial_guess = initial_coefficients(3); # Example initial guess # Fit the logistic regression function to the data params, covariance = curve_fit(logistic_regression, x_data, y_data, p0=initial_guess, maxfev=5000) # Convert parameters to Python built-in types params = [float(param) for param in params] # Convert numpy float64 to Python float C, B1, B2 = params OutputDataSet = pd.DataFrame({"C": [C], "B1": [B1], "B2": [B2],"ProType":[input_value_1],"RegType":[input_value_2]}) ``` But still didn't hit the desired result of C= -2.35443383, B1 = -14.70820051, B2 = 0.0056217 [1]: https://i.stack.imgur.com/PMnkq.png [2]: https://i.stack.imgur.com/eK3C7.png
After `drop_na()` my data frame shows 0 obs. of 68 variables:

[![Screenshot 1][1]][1]
[![Screenshot 2][2]][2]
[![Screenshot 3][3]][3]

After `drop_na()` I don't see any results in the table except dates. This was not the case when I tried it before; I could see the values in the table.

```r
library(tidyverse)

WDI_GDP <- read_csv("C:/Users/ASYA/Desktop/P_Data_Extract_From_World_Development_Indicators/b0351889-13b3-4cbe-a5c0-a2dd9d633eab_Data.csv")

WDI_GDP <- WDI_GDP %>% mutate(across(contains("[YR"), ~na_if(.x,"..")) %>% mutate(across(contains("[YR"), as.numeric)))

WDI_GDP <- drop_na(WDI_GDP)
```

[1]: https://i.stack.imgur.com/oqjo8.png
[2]: https://i.stack.imgur.com/elrGR.png
[3]: https://i.stack.imgur.com/txJEv.png
I feel like I have a good understanding of pointers, but I want to make sure I am understanding this correctly. Basically, I am completing a computer architecture lab for virtual memory and we have 2 pointers. One pointer represents main memory, and other represents an array of structs (`frame_table`) that starts at `mem[0]`. When I am initializing the program, I need to set `&frame_table[0] = &mem[0]` (I know the `&` and `[]` are redundant). My question is, when doing this, do I need to cast the address of `mem`? Based on my understanding, I would say no. The way I think about it is I am setting the address of `frame_table` to the address of `mem`, which is not dependent on types at all. I have my example below. In addition, if there are any similar cases where I would need to cast, or casting would produce undefined behavior, please let me know. ``` uint8_t *mem; fte_t *frame_table; //do i do this? frame_table = mem; //or this? frame_table = (fte_t*) mem; ```
Unresolved reference error is showing up after adding the dgs codegen plugin successfully
|spring-boot|gradle-kotlin-dsl|netflix-dgs|
null
{"OriginalQuestionIds":[76851422],"Voters":[{"Id":8690857,"DisplayName":"Drew Reese","BindingReason":{"GoldTagBadge":"reactjs"}}]}
I want to open a file for reading and writing if it exists, or if not, create it and open it for writing. The following code ``` FILE *file = fopen(argv[file_position], "r+"); if (!file) file = fopen(argv[file_position], "w"); ``` causes the error `file_name: Bad file descriptor`. What is the cause of the error and how to avoid it?
Why can't I use the file pointer after the first read attempt fails?
|c|file|
I am getting a new error today when I try to access contacts in my react-native app that has been built using expo > This app has crashed because it attempted to access privacy-sensitive data without a usage description. The app's Info.plist must contain an NSContactsUsageDescription key with a string value explaining to the user how the app uses this data. I have 'NSContactsUsageDescription' defined in my app.json file in the infoPlist section. Has something changed for this to be used recently? I tried blowing away my node_modules, yarn lock and the ios file to rebuild, but this error still happens. I found a similar post here in stack overflow regarding location data - but there are no answers or suggestions for it. Any thoughts on what I might be doing wrong? Here is my app.json file `{ "expo": { "name": "Drinklink", "slug": "Drinklink", "privacy": "unlisted", "version": "1.0.29", "orientation": "portrait", "icon": "./assets/mainIcon.png", "scheme": "com.drinklink.dev", "userInterfaceStyle": "automatic", "splash": { "image": "./assets/DrinkLink_Logo-WhiteYellow.png", "resizeMode": "contain", "backgroundColor": "#000000" }, "updates": { "url": "https://u.expo.dev/ae5d9b3c-56b3-4849-bccf-5cc2bb3a4c78", "fallbackToCacheTimeout": 0 }, "assetBundlePatterns": [ "**/*" ], "ios": { "supportsTablet": true, "bundleIdentifier": "com.drinklink.dev", "buildNumber": "1.0.29", "infoPlist": { "NSContactsUsageDescription": "Drinklink uses your contacts to make it easier to find your friends to buy drinks for. This data is never saved or used elsewhere." 
} }, "android": { "package": "com.drinklink.dev", "adaptiveIcon": { "foregroundImage": "./assets/DrinkLink_Icon-white.png", "backgroundColor": "#000000" }, "versionCode": 29, "versionName": "1.0.29", "permissions": [ "android.permission.RECORD_AUDIO", "android.permission.READ_CONTACTS", "android.permission.WRITE_CONTACTS" ] }, "runtimeVersion": { "policy": "sdkVersion" }, "extra": { "eas": { "projectId": "ae5d9b3c-56b3-4849-bccf-5cc2bb3a4c78" } }, "plugins": [ [ "expo-image-picker", { "photosPermission": "The app accesses your photos to allow you to set a profile picture. This makes it easier for them to identify you. It is only ever used for this single purpose.", "cameraPermission": "The app accesses your camera to allow you to set a profile picture. This makes it easier for them to identify you. It is only ever used for this single purpose." } ], [ "expo-contacts", { "contactsPermission": "Drinklink uses your contacts to make it easier to find your friends to buy drinks for. This data is never saved or used elsewhere." } ] ] } }` Sorry - it isn't formatting that json nicely for some reason.
I was given this block of code where `call by reference` had to be used for every method call, and I had to give an output of a in the format:

```none
y:[result]; y: [result]; y:[result]; x:[result]; a:[result]
```

```java
public class Main {
    static int x = 2;

    public static void main(String[] args) {
        int[] a = {17, 43, 12};
        foo(a[x]);
        foo(x);
        foo(a[x]);
        System.out.println("x:" + x);
        System.out.println("a:" + Arrays.toString(a));
    }

    static void foo(int y) {
        x = x - 1;
        y = y + 2;
        if (x < 0) {
            x = 5;
        } else if (x > 20) {
            x = 7;
        }
        System.out.println("y:" + y);
    }
}
```

I'm not 100% sure how call by reference works in some cases, and I'm not sure which result is the right one. Anyway, here is one:

`foo(a[x])` is called with `a[2]` (which is 12). `y` becomes 12 + 2 = 14. `x` is decremented to 1.

`foo(x)` is called with `x` (which is 1). Both `x` and `y` point to the value 1 of `x`. `x` is decremented to 0 and then `x` becomes 3 because `y = y + 2` and `y` was pointing at the value 1 of `x`.

`foo(a[x])` is called with `a[3]` (which doesn't exist). `x` is decremented to 2. The array `a` transforms into `17,43,14`.

So, the results would be like:

```none
y : 14; y : 3; y : ?; x : 2; a : 17,43,14
```

I think the thing that confuses me the most is the case of `foo(x)`. Does `y` point at the variable `x`, or at the value of `x` at the moment the method is called?
|node.js|firebase|google-cloud-firestore|google-cloud-functions|
null
I have this data; the id here (`lr_01HT89SX627dFDPCAXBS4H2T9d`) is dynamic:

    https://www.example.com/api/pub/v2/verifications/lr_01HT89SX627dFDPCAXBS4H2T9d

I need to extract the id `lr_01HT89SX627dFDPCAXBS4H2T9d` from `https://www.example.com/api/pub/v2/verifications/`; the id will be different for every response.

    Header: HTTP/2 201
    Header: date: Sat, 30 Mar 2024 18:26:29 GMT
    Header: content-length: 0
    Header: location: https://www.example.com/api/pub/v2/verifications/lr_01HT89SX627dFDPCAXBS4H2T9d
    Header: cf-cache-status: DYNAMIC

I can't figure out how to do it.
How to extract specific data from php header response data
|php|
null
I was able to solve this problem by using a flat object. https://opensearch.org/docs/latest/field-types/supported-field-types/flat-object/
To get some code executed just once in this function, you need to tag the order like: ```php add_action( 'woocommerce_order_status_changed', 'bacs_payment_complete', 10, 4 ); function bacs_payment_complete( $order_id, $old_status, $new_status, $order ) { // 1. For Bank wire and cheque payments if ( $order->get_payment_method() === 'bacs' && in_array( $new_status, wc_get_is_paid_statuses() ) && $order->get_meta('confirmed_paid') !== 'yes' ) { $order->update_meta_data('confirmed_paid', 'yes'); // Tag the order $order->save(); // Save to database // Here the code to be executed only once } } ``` Code goes in functions.php file of your active child theme (or active theme). It should work.
I am building a programming language using C++, LLVM, Clang, LLDB, user can write `import "@stdio.h"` which is similar to `#include <stdio.h>` so now I need to support C like imports of headers, however I can't get the path to system headers, let alone parse them ! Other answers have gotten old, since llvm and clang API's have updated, I tried this code ```c++ void print_system_header_paths() { clang::CompilerInstance CI; auto Invocation = std::make_shared<clang::CompilerInvocation>(); CI.setInvocation(Invocation); // I can eliminate this line to get rid of an error but other answer suggested creating a preprocessor CI.createPreprocessor(clang::TranslationUnitKind::TU_Prefix); const clang::HeaderSearchOptions &HSOpts = CI.getInvocation().getHeaderSearchOpts(); if (HSOpts.SystemHeaderPrefixes.empty()) { std::cout << "No system header paths found." << std::endl; } else { for (const auto &Path : HSOpts.SystemHeaderPrefixes) { std::cout << Path.Prefix << std::endl; } } } ``` I also tried this command `clang -v -c -xc++ nul` on windows however this requires parsing the command line output, I would prefer the C++ api [Linked] https://stackoverflow.com/questions/41470241/how-do-i-extract-the-search-paths-for-headers-in-the-standard-library-in-clang My programming language : https://github.com/Qinetik/chemical
I'm stuck on a pretty simple problem: I can't communicate with a process's stdout. The process is a simple stopwatch, so I'd like to be able to start it, stop it, and get the current time. The code of the stopwatch is:

```python
import argparse
import time


def main() -> None:
    parser = argparse.ArgumentParser()
    parser.add_argument('start', type=int, default=0)
    start = parser.parse_args().start

    while True:
        print(start)
        start += 1
        time.sleep(1)


if __name__ == "__main__":
    main()
```

And its manager is:

```python
import asyncio


class RobotManager:
    def __init__(self):
        self.cmd = ["python", "stopwatch.py", "10"]
        self.robot = None

    async def start(self):
        self.robot = await asyncio.create_subprocess_exec(
            *self.cmd,
            stdout=asyncio.subprocess.PIPE,
        )

    async def stop(self):
        if self.robot:
            self.robot.kill()
            stdout = await self.robot.stdout.readline()
            print(stdout)
            await self.robot.wait()
        self.robot = None


async def main():
    robot = RobotManager()
    await robot.start()
    await asyncio.sleep(3)
    await robot.stop()

    await robot.start()
    await asyncio.sleep(3)
    await robot.stop()


asyncio.run(main())
```

But `stdout.readline` returns an empty byte string every time. How do I do this correctly?
Asyncio: how to read stdout from subprocess?
I'm developing an Android WebView app that has an image chooser option. I'm making some progress; here it is:

```
webChromeClient = object : WebChromeClient() {
    override fun onProgressChanged(view: WebView?, newProgress: Int) {
        super.onProgressChanged(view, newProgress)
        backEnabled = view?.canGoBack() ?: false
        loading = true
    }

    override fun onShowFileChooser(
        webView: WebView?,
        filePathCallback: ValueCallback<Array<Uri>>?,
        fileChooserParams: FileChooserParams?
    ): Boolean {
        val intent = fileChooserParams?.createIntent()
        (context as Activity).startActivityForResult(intent, 100)
        return false
    }
}
```

It can open the image-choosing panel, but after that it does nothing. I need to submit the image that the user selects. The image/file access permission is already declared in the Manifest.
I have a Django REST Framework backend API with a ProductViewSet that handles product creation. When creating a new product, I'm expecting to receive an image file along with other product data from a React-admin frontend.

Here's the relevant part of my backend API:

```
class ProductViewSet(ModelViewSet):
    queryset = Product.objects.all()
    serializer_class = ProductSerializer
    filter_backends = [DjangoFilterBackend, SearchFilter]
    search_fields = ['name', 'description']
    ordering_fields = ['price', 'name']

    def create(self, request, *args, **kwargs):
        try:
            data = request.data
            with transaction.atomic():
                # Extract product_type, company, unit_of_measure from request data

                # Extract image path from request data
                image_path = None
                if "image" in data:
                    raw_file = data["image"].get("rawFile")
                    if raw_file and "path" in raw_file:
                        image_path = raw_file["path"]

                # Create Product instance with extracted data
                product = Product(
                    name=data.get("name"),
                    image=image_path,
                    # Other product fields...
                )
                product.save()

                # Add many-to-many relationships

                # Serialize and return the created product
                serializer = ProductSerializer(product, many=False)
                return Response(serializer.data, status=status.HTTP_201_CREATED)
        except Exception as e:
            print("Error:", e)
            return Response({"error": str(e)}, status=status.HTTP_500_INTERNAL_SERVER_ERROR)
```

On the frontend side, I'm using React-admin for the product creation form. Here's the relevant part of my frontend code:

```
import * as React from 'react';
import {
    Create,
    TextInput,
    ImageInput,
} from 'react-admin';

const ProductCreate = () => {
    return (
        <Create>
            <TextInput source="name" label="Product Name" fullWidth />
            <ImageInput source="image" label="Thumbnail" accept="image/*" />
        </Create>
    );
};

export default ProductCreate;
```

The issue I'm facing is that when I save a new product with an image file, the image file is not being uploaded to the media folder on the server. Instead, only the image file name (media/some.png) is being saved to the database.
How can I ensure that the image file is correctly uploaded to the media folder? The path is saved to the database correctly as `image: "media/some.png"`, but the image file itself is never moved to the default media location at `/media/`.
Troubleshooting Image Upload Failure: React-admin to Django REST Framework Backend
|django-rest-framework|react-admin|
I am trying to create a modal, but it disappears below 992px, which is one of Bootstrap's breakpoints. I made the modal like one of the examples in the docs. I have tried many solutions, including changing the `modal` class to `position: fixed` in CSS, but without success.

Also tried:

- Changing the class "modal" via CSS
- Modifying the browser z-index
- Activating and deactivating things in the inspector

**Code Snippet** ([GitHub Source][1])

<!-- begin snippet: js hide: false console: true babel: false -->

<!-- language: lang-js -->

    new bootstrap.Modal('#MyProfile').show();

<!-- language: lang-html -->

    <div class="modal" id="MyProfile" tabindex="-1" aria-labelledby="MyProfile" aria-hidden="true">
      <div class="modal-dialog modal-dialog-centered-fix ">
        <div class="modal-content modal-MyProfile-fix">
          <div class="modal-header">
            <h5 class="modal-title text-center pop-up-header" id="MyProfileLabel">Моят профил</h5>
            <button type="button" class="btn-close" data-bs-dismiss="modal" aria-label="Close"></button>
          </div>
          <div class="modal-body">
            <h3>Лични данни</h3>
            <div class="mx-1">
              <form class="input-MyProfile needs-validation was-validated" novalidate="">
                <input type="text" class="form-control" id="MyProfile-Validation01" placeholder="Име" value="Иван" required>
                <input type="text" class="form-control" id="MyProfile-Validation02" placeholder="Фамилия" value="Иванов" required>
                <input type="number" class="form-control" id="MyProfile-Validation03" placeholder="Телефон" required>
                <input type="text" class="form-control" id="MyProfile-Validation04" placeholder="Имейл" value="Ivan.Ivanov@icloud.com" required>
              </form>
            </div>
            <h3>Изтриване на профила</h3>
            <div class="mb-1 mx-1" style="width: 100%;">
              <p>Ако изтриете профила си, всички лични данни ще бъдат премахнати. Изтриването е необратим процес.</p>
              <a href="" class="btn-razgledai">Изтрий</a>
            </div>
            <h3>Предпочитан начин за връзка</h3>
            <div class="mb-1 mx-1" style="width: 100%;">
              <p><ion-icon name="checkmark-outline"></ion-icon> по телефона</p>
              <p><ion-icon name="checkmark-outline"></ion-icon> по Имейл</p>
            </div>
            <h3>Свързване</h3>
            <div class="mb-1 mx-1 row" style="width: 100%;">
              <div class="col-6">
                <p class="profile-facebook"><ion-icon class="profile-facebook" name="logo-facebook"></ion-icon> Facebook</p>
                <p class="profile-google"><ion-icon class="profile-google" name="logo-google"></ion-icon> Google</p>
              </div>
              <div class="col-6 text-end">
                <p class="profile-facebook">Свържи</p>
                <p class="profile-google"><ion-icon class="profile-google" name="trash-bin-outline"></ion-icon></p>
              </div>
            </div>
          </div>
          <div class="modal-footer justify-content-center">
            <button type="button" class="btn-razgledai text-center" data-bs-dismiss="modal">Запази</button>
          </div>
        </div>
      </div>
    </div>

    <!-- Bootstrap 5 -->
    <link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/bootstrap@5.3.2/dist/css/bootstrap.min.css">
    <script src="https://cdn.jsdelivr.net/npm/bootstrap@5.3.2/dist/js/bootstrap.min.js"></script>

<!-- end snippet -->

[1]: https://github.com/dicheto/Dicheto-IT/tree/main/Project-Marchideo
Using `infoPlist` in app.json for an Expo project seems not to be working
|react-native|expo|info-plist|
**Aspects worth considering from the very beginning**:

- To keep this post simple, I haven't provided the exact requirement on the basis of which the E-R diagram below was sketched;
- If the given information isn't satisfactory, further details will be provided without hesitation.

Given the explanation book 1 delivers:

> 7.7.4 Placement of Relationship Attributes
> The cardinality ratio of a relationship can affect the placement of relationship attributes. Thus, attributes of one-to-one or one-to-many relationship sets can be associated with one of the participating entity sets, rather than with the relationship set.

1. Would you rather keep the attribute *start_date* as a relationship attribute, or fold it into the entity set *Congress_Persons*?
2. Why?

[![enter image description here][1]][1]

**My answer**: I tend to believe that, in order to remove any doubt about the most representative place for this attribute, *start_date* should be inextricably linked to the *represents* relationship. This way it explicitly expresses that the act of taking up a mandate unarguably comes with a start date.

1 - Database System Concepts, Abraham Silberschatz, Henry F. Korth, S. Sudarshan

[1]: https://i.stack.imgur.com/N4HNn.png
When should an E-R attribute be perceived as a relationship attribute rather than an entity set attribute?
|entity-relationship|
Looks like there's an already accepted answer, but @user23919330 was onto something. The most obvious idea would be to convert this:

```c
for (size_t i = 0; i < SIZE; i++, a_ptr++, b_ptr++, c_ptr++, data++) {
    *a_ptr = data->a;
    *b_ptr = data->b;
    *c_ptr = data->c;
}
```

To this:

```c
for (size_t i = 0; i < SIZE; i++) {
    a_ptr[i] = data[i].a;
}
for (size_t i = 0; i < SIZE; i++) {
    b_ptr[i] = data[i].b;
}
for (size_t i = 0; i < SIZE; i++) {
    c_ptr[i] = data[i].c;
}
```

That is, let each loop read and write from sequential addresses, to take advantage of caching. Trying this out now....
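Before benchmarking, it's easy to convince yourself the transform is behavior-preserving by modeling both versions and comparing. A small Python sketch (lists of dicts standing in for the C struct array; the field names mirror the struct above):

```python
SIZE = 1000
# array-of-structs input, analogous to `data` above
data = [{"a": 3 * i, "b": 3 * i + 1, "c": 3 * i + 2} for i in range(SIZE)]

# interleaved version: one pass, three strided writes per element
a1, b1, c1 = [0] * SIZE, [0] * SIZE, [0] * SIZE
for i in range(SIZE):
    a1[i] = data[i]["a"]
    b1[i] = data[i]["b"]
    c1[i] = data[i]["c"]

# split version: three passes, each streaming one output array sequentially
a2 = [d["a"] for d in data]
b2 = [d["b"] for d in data]
c2 = [d["c"] for d in data]

assert (a1, b1, c1) == (a2, b2, c2)  # same result either way
```

The split version trades one pass over `data` for three, but each pass walks memory linearly, which hardware prefetchers tend to handle well; whether that wins in practice depends on the struct size and cache behavior, hence the "trying this out" above.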
The MedaCy library depends on Python 3.7. Using Python version 3.7.0 worked completely fine. With Python 3.7 installed, follow these steps:

1. `pip install git+https://github.com/NLPatVCU/medaCy.git`
2. `pip install git+https://github.com/NLPatVCU/medaCy_model_clinical_notes.git`
Subscriptions do not require activation [unless you specify][1] `application_context.user_action = 'CONTINUE'`. Doing so also requires you to show an order review page before activation; otherwise it is misleading to the payer. The default behavior, `SUBSCRIBE_NOW`, avoids needing a review page before activation, and so is generally preferable. However, with that default behavior the subscription will activate at PayPal, and any return to your site after activation *may or may not occur*.

The solution is to create a webhook URL listener for the event `PAYMENT.SALE.COMPLETED`. All subscription logic can be built off just that event, refreshing the subscription's good-through date (however you track it). No other event names are necessary or important, though you can subscribe to and log others and decide later whether you ever want to do something with them (such as reacting to refunds/reversals in some automated way; not important at this stage).

Additionally, redirecting away from your website and back to a return_url is an old integration pattern, for old websites. Instead, use the JS SDK for the subscription's approval. The subscription can be created from the JS using just the plan_id; alternatively, the `createSubscription` function can fetch the ID of a created subscription from your backend, calling a route that implements the logic in your question. Such fetching of an API call result (instead of creating with the JS) is generally unnecessary, but you can do it if you want.

An easy way to get a working JS subscribe button is via http://www.sandbox.paypal.com/billing/plans , which can then be adapted to your needs. Make sure the plan_id you end up using was created with the same client_id the JS SDK is loaded with.

[1]: https://developer.paypal.com/docs/api/subscriptions/v1/#subscriptions_create!path=application_context/user_action&t=request
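As a rough sketch of what the listener's logic might look like: process only `PAYMENT.SALE.COMPLETED`, look up which subscription the sale belongs to, and push its good-through date forward. This Python model is illustrative only; the in-memory `subscriptions` dict and the 31-day refresh window are assumptions, and in a real integration you would also verify the webhook signature and persist to a database:

```python
from datetime import date, timedelta

# hypothetical store: subscription id -> good-through date
subscriptions = {}

def on_webhook(event):
    """Refresh a subscription's good-through date on PAYMENT.SALE.COMPLETED."""
    if event.get("event_type") != "PAYMENT.SALE.COMPLETED":
        return None  # log and ignore every other event name
    # for subscription payments, the sale resource references the
    # subscription via billing_agreement_id (per PayPal's webhook payload)
    sub_id = event["resource"]["billing_agreement_id"]
    # assumed: monthly plan, refreshed with a little grace on each payment
    subscriptions[sub_id] = date.today() + timedelta(days=31)
    return sub_id

sale = {"event_type": "PAYMENT.SALE.COMPLETED",
        "resource": {"billing_agreement_id": "I-ABC123"}}
on_webhook(sale)  # subscription I-ABC123 is now good through next month
```

Because the date is refreshed on every completed payment rather than computed from the plan's schedule, this single event is enough to keep access in sync even after retries or missed cycles.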
Android jetpack compose webview, image selector not works
|android|kotlin|webview|jetpack|
**I've got this error on my Flutter client (registration page).**

It is a registration page and the backend is in PHP, but a "Connection timed out" error is showing. I've tried lots of things from the internet, but nothing works. I'm using Linux (Ubuntu). For the database I've used MySQL.

```
E/flutter (21877): [ERROR:flutter/runtime/dart_vm_initializer.cc(41)] Unhandled Exception: Connection timed out
E/flutter (21877): #0      IOClient.send (package:http/src/io_client.dart:94:7)
E/flutter (21877): <asynchronous suspension>
E/flutter (21877): #1      BaseClient._sendUnstreamed (package:http/src/base_client.dart:93:32)
E/flutter (21877): <asynchronous suspension>
E/flutter (21877): #2      _withClient (package:http/http.dart:166:12)
E/flutter (21877): <asynchronous suspension>
E/flutter (21877): #3      _RegistrationState._registerUser (package:thefinalproject/homepage/registration.dart:21:18)
E/flutter (21877): <asynchronous suspension>
```

My Flutter code:

```
import 'package:flutter/material.dart';
import 'package:http/http.dart' as http;
import 'dart:convert';
import 'package:fluttertoast/fluttertoast.dart';

class Registration extends StatefulWidget {
  const Registration({super.key});

  @override
  _RegistrationState createState() => _RegistrationState();
}

class _RegistrationState extends State<Registration> {
  TextEditingController _fnameController = TextEditingController();
  TextEditingController _lnameController = TextEditingController();
  TextEditingController _phoneController = TextEditingController();
  //bool _registrationComplete = false;

  Future _registerUser() async {
    var url = Uri.parse("http://192.168.152.193:3000/register.php");
    var response = await http.post(
      url,
      body: {
        "firstname": _fnameController.text,
        "lastname": _lnameController.text,
        "phonenumber": _phoneController.text,
      },
    );
    var data = json.decode(response.body);
    if (data == "Error") {
      Fluttertoast.showToast(
          msg: "Already Exists",
          toastLength: Toast.LENGTH_SHORT,
          gravity: ToastGravity.CENTER,
          timeInSecForIosWeb: 1,
          backgroundColor: Colors.red,
          textColor: Colors.white,
          fontSize: 16.0);
    } else {
      Fluttertoast.showToast(
          msg: "Successfull",
          toastLength: Toast.LENGTH_SHORT,
          gravity: ToastGravity.CENTER,
          timeInSecForIosWeb: 1,
          backgroundColor: Colors.green,
          textColor: Colors.white,
          fontSize: 16.0);
    }
  }

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      appBar: AppBar(
        title: const Text('BillSpark'),
      ),
      body: Container(
        padding: const EdgeInsets.symmetric(vertical: 25.0, horizontal: 25.0),
        child: Column(
          children: <Widget>[
            //firstname
            Padding(
              padding: const EdgeInsets.all(10.0),
              child: TextFormField(
                controller: _fnameController,
                decoration: InputDecoration(
                    labelText: "First Name",
                    hintText: "Enter your firstname",
                    border: OutlineInputBorder(
                        borderRadius: BorderRadius.circular(15.0))),
              ),
            ),
            //lastname
            Padding(
              padding: const EdgeInsets.all(10.0),
              child: TextFormField(
                controller: _lnameController,
                keyboardType: TextInputType.name,
                decoration: InputDecoration(
                    labelText: "Last Name",
                    hintText: "lastname",
                    border: OutlineInputBorder(
                        borderRadius: BorderRadius.circular(15.0))),
              ),
            ),
            //phone number
            Padding(
              padding: const EdgeInsets.all(10.0),
              child: TextFormField(
                controller: _phoneController,
                keyboardType: TextInputType.number,
                decoration: InputDecoration(
                    labelText: "Enter you phone number",
                    hintText: "enter phone number",
                    border: OutlineInputBorder(
                        borderRadius: BorderRadius.circular(15.0))),
              ),
            ),
            // Register Button
            Padding(
              padding: const EdgeInsets.all(10.0),
              child: Container(
                height: 100,
                color: Colors.white,
                width: 120,
                child: Column(
                  crossAxisAlignment: CrossAxisAlignment.center,
                  children: [
                    ElevatedButton(
                      style: ElevatedButton.styleFrom(
                        foregroundColor: Colors.white,
                        backgroundColor: Colors.green,
                        padding: const EdgeInsets.all(20.0),
                        shape: RoundedRectangleBorder(
                          borderRadius: BorderRadius.circular(20),
                        ),
                      ),
                      onPressed: () {
                        // Call the _registerUser() method to initiate registration
                        _registerUser();
                      },
                      child: const Text('Register'),
                    ),
                  ],
                ),
              ),
            ),
            // Do not have an account?
            const Center(
              child: Padding(
                padding: EdgeInsets.all(10.0),
                child: Center(
                  child: Text(
                    'Do not have an account?',
                    style: TextStyle(fontSize: 18),
                  ),
                ),
              ),
            ),
            // Show registration success message if registration is complete
            /*if (_registrationComplete)
              const Text(
                'Registration Successful!',
                style: TextStyle(fontSize: 18, color: Colors.green),
              ),*/
          ],
        ),
      ),
    );
  }
}
```

**And my backend code is here:**

```
<?php
$db = mysqli_connect('localhost', 'root', '', 'billspark');

if (!$db) {
    echo "Database connection failed";
}

$fname = isset($_POST['firstname']) ? $_POST['firstname'] : '';
$lname = isset($_POST['lastname']) ? $_POST['lastname'] : '';
$pnumber = isset($_POST['phonenumber']) ? $_POST['phonenumber'] : '';

$sql = "SELECT * FROM Registration WHERE phonenumber = '".$pnumber."'";
$result = mysqli_query($db, $sql);
$count = mysqli_num_rows($result);

if ($count == 1) {
    echo json_encode("Error");
} else {
    $insert = "INSERT INTO Registration (firstname, lastname, phonenumber) VALUES ('".$fname."', '".$lname."', '".$pnumber."')";
    $query = mysqli_query($db, $insert);
    if ($query) {
        echo json_encode("Success");
    }
}
?>
```

**Where is the issue?**

**I want to solve this issue and run this code without any errors.**
Flutter Unhandled Exception: Connection Timed Out