|github|jenkins-pipeline|cicd|
null
I am trying to create a function that returns the balance of my Sepolia testnet MetaMask wallet. I wrote this function:

```solidity
function getBalance() external view onlyOwner returns (uint256) {
    return address(this).balance;
}
```

but it always returns 0. Note that my smart contract is already connected to my MetaMask wallet and the other functions work fine.
Function for returning a wallet balance
|ethereum|blockchain|solidity|smartcontracts|wallet|
null
This error occurs because MySQL is not running (enabled) in your XAMPP Control Panel. Start MySQL there and the error should resolve itself.
- For every Load (shipment) my company does, the customer must be invoiced. To generate this invoice we have created a Visualforce page that renders as a PDF. - I have a shared Google Drive folder named "Salesforce" that integrates directly with our Salesforce org. - Every time a new Load is created (Load has its own object), a flow I built automatically creates a new folder for the corresponding Load (i.e. if I create Load L-999, a folder named "L-999" is automatically created in the "GDrive/Salesforce/Loads" folder). What is the simplest way to create an Action button that will generate the Invoice PDF and save it to the corresponding G Drive "Load" folder? I have tried creating a separate Apex controller for the G Drive folder and adding Apex code to a different Apex class whose function is to send the Invoice PDF to the customer via email.
Automate Visualforce PDF to corresponding G Drive Folder
|google-drive-api|apex|visualforce|google-drive-shared-drive|
null
I have a React 18 project using webpack 5 + Tailwind CSS + Twin.macro for CSS-in-JS. I am trying to migrate to Rspack and I followed the official migration doc --> https://www.rspack.dev/guide/migrate-from-webpack.html The issue is that Twin.macro needs some Babel plugins to work, but those are not there in Rspack. What would be the config to include them in Rspack? The .babelrc file before the migration from webpack to Rspack:

```json
{
  "presets": [
    "@babel/preset-env",
    ["@babel/preset-react", { "runtime": "automatic" }],
    "@emotion/babel-preset-css-prop",
    "@babel/preset-typescript"
  ],
  "plugins": ["babel-plugin-twin", "babel-plugin-macros", "@babel/plugin-transform-runtime"]
}
```
Step scaling option disabled for ECS Fargate service
|amazon-web-services|amazon-ecs|amazon-cloudwatch|aws-fargate|aws-auto-scaling|
```javascript
function getAllSubsets(arr) {
  let subset = [[]];

  function generateSubsets(indx, currentSubset) {
    if (indx == arr.length) {
      return;
    }
    for (let i = indx; i < arr.length; i++) {
      currentSubset.push(arr[i]);
      subset.push([...currentSubset]);
      generateSubsets(i + 1, currentSubset);
      currentSubset.pop();
    }
  }

  generateSubsets(0, []);
  return subset;
}

// Example usage:
const inputArray = [1, 2, 3];
const subsets = getAllSubsets(inputArray);
console.log(subsets);
```
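For cross-checking the output, here is a compact Python sketch of the same idea (an illustration, not part of the original snippet): collect subsets of every length, from the empty set up to the full array. The set of subsets matches the JavaScript version; the ordering may differ.

```python
from itertools import combinations

def get_all_subsets(arr):
    # Collect subsets of every length, from the empty set up to the full array.
    subsets = []
    for r in range(len(arr) + 1):
        subsets.extend(list(c) for c in combinations(arr, r))
    return subsets

print(get_all_subsets([1, 2, 3]))
# An n-element input always yields 2**n subsets; 8 for n = 3.
```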
Currently, I have a .img file (kernel source file) with a size of 3.7 GB. Could you please suggest Linux terminal commands for extracting this .img file into a normal kernel source tree?
How to extract a .img file into a normal kernel source tree in Linux?
|linux|linux-kernel|embedded-linux|
I eventually got my answer to this question from a user named "Tassle" on Computer Science Stack Exchange; here is a link to his answer: https://cs.stackexchange.com/questions/167260/how-many-games-will-a-menace-tic-tac-toe-computer-take-to-train?noredirect=1#comment346349_167260 In short, Tassle says that the computer will get "stuck" choosing one set of opening moves over and over even if it is not optimal, as long as it performs well against random moves, which it likely will. Tassle explains it much better than I could. I tried training the AI against another copy of itself, and this produced slightly better results. Next I will try swapping between AI and random opponents to see if this produces better results, and I will *edit* this answer when that happens. (Maybe the AI will find ways to exploit weaknesses developed by random training, and then random training can "shake things up" again.)
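The "stuck" behavior Tassle describes is the classic exploration-versus-exploitation trade-off. As a hedged illustration (this is not MENACE itself, and not code from this project), an epsilon-greedy move selector keeps a small chance of trying non-preferred openings, so training can escape a habit that is good against random play but not optimal:

```python
import random

def epsilon_greedy_choice(move_weights, epsilon=0.1, rng=random):
    """Pick a move: usually the highest-weighted one, but with
    probability `epsilon` pick uniformly at random (exploration)."""
    moves = list(move_weights)
    if rng.random() < epsilon:
        return rng.choice(moves)             # explore: any move can be tried
    return max(moves, key=move_weights.get)  # exploit: best-known move

# Hypothetical opening-move weights, just for illustration.
weights = {"corner": 5.0, "center": 4.0, "edge": 1.0}
print(epsilon_greedy_choice(weights, epsilon=0.0))  # epsilon 0 always exploits
```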
I have a C# Windows Forms application that was written and originally tested on Windows 10. The application needs to be able to open on any given monitor (set via a config file), and this feature works fine on Windows 10. To prepare for future system requirements, we have been preparing a Windows 11 image and trying to test software on Windows 11 to verify operation, and in doing so, we are running into an issue where this C# program will not load correctly on any monitor that is not set in Windows 11 as the "main display". By "not load correctly", I mean that ONLY the title bar (the bar that gives the icon, title of the window, and the minimize, maximize, and close buttons) will load, and the actual body of the application will not load, even after waiting several minutes. Trying to interact with this title bar results in the application hanging and becoming unresponsive, but it can be closed without issue by right clicking the icon for the application in the taskbar and selecting the option to close the application. To reiterate, the application opens correctly, and seems to operate without issue, on the main display as set in Windows 11, but any other display results in the abovementioned issue. **How would I go about either solving this issue (i.e., making the application work so that it loads correctly on any monitor) or go about finding the cause of the issue (i.e., finding the underlying settings/root causes of the error)?** Changing the monitor that the application should open on via the config file works correctly, as the title bar will appear on the monitor that was set. Changing the main display allows the program to open correctly on that screen, but then the same issue occurs on the old main display (the one before the change). Changing the program to only operate on the main display is not an option for our system needs. 
Additional details: - The application uses .NET Framework 4.6, and since 4.8 is included with Windows 11, the specific Framework version the program uses has not been installed - As mentioned above, the application works fine on Windows 10 systems without this issue of not loading correctly. - I am unable to provide any screenshots of the application or provide source code due to system limitations and overall security but can provide generic example snippets of any necessary source code needed to solve the issue, I am just unsure what would be needed at this time. We have tried the following to find a solution: - Restarting the Windows 11 system to see if this is a temporary issue - Changing the main display to each monitor in the system (including restarting the system after each change) - Reinstalling the application in question None of these attempts seemed to at all affect the overall operation of the application or whether it would function correctly or not.
Which of the following two approaches is the better use of HttpClient in .NET Framework 4.7?
|c#|asp.net|httpclient|dotnet-httpclient|sendasync|
null
> I don't understand it, so if it could be provided with proper explanation

As shown in [The Java Tutorials][1] by Oracle Corp, a `for` loop has three parts:

```java
for ( initialization ; termination ; increment ) {
    statement(s)
}
```

Take for example an array `{ "alpha" , "beta" , "gamma" }`:

```java
for ( int index = 0 ; index < myArray.length ; index ++ ) {
    System.out.println( "Index: " + index + ", value: " + myArray[ index ] );
}
```

See this [code run at Ideone.com][2].

```none
------| Ascending |----------
Index: 0, value: alpha
Index: 1, value: beta
Index: 2, value: gamma
```

To reverse the direction:

- The first part, *initialization*, can start at the *last* index of the array rather than at the *first* index. Ex: `int index = ( myArray.length - 1 )`. (Parentheses optional there.)
- The second part, *termination*, can test for going past the first index, into negative numbers: `index > -1` (or `index >= 0`).
- The third part, *increment*, can go downwards (negative) rather than upwards (positive):

```java
for ( int index = ( myArray.length - 1 ) ; index > -1 ; index -- ) {
    System.out.println( "Index: " + index + ", value: " + myArray[ index ] );
}
```

See this [code run at Ideone.com][2].

```none
------| Descending |----------
Index: 2, value: gamma
Index: 1, value: beta
Index: 0, value: alpha
```

[1]: https://docs.oracle.com/javase/tutorial/java/nutsandbolts/for.html
[2]: https://ideone.com/EsDnUk
I have a problem retrieving an FCM token. I am working on an SOS alert web application using Firebase Cloud Messaging. Everything is good, but I am not getting the registration token:

```javascript
const fcmToken = await getToken(messaging, { vapidKey: 'VAPID_KEY' });
console.log(fcmToken);
```

The FCM token is not being logged to the console.
We have a requirement to connect to a Cloud SQL database from an on-premise application securely, via the Cloud SQL Auth Proxy deployed on a GKE cluster: on-premise -> Cloud SQL Auth Proxy (GKE) -> Cloud SQL instance. This works fine when the connections are made on port 8080, but we would like to secure the connection from on-premise to the Cloud SQL Auth Proxy. We are expecting to connect to the Cloud SQL Auth Proxy from on-premise servers securely.
Enabling Secured connections to access Cloud Auth proxy from On-premise
|google-cloud-platform|cloud-sql-proxy|
null
I am currently running the program via VNC. After startup I open the terminal and run the following:

```bash
cd home/documents/......
source venv/bin/activate
python new.py
```

As soon as I run this, the camera activates and the code that detects and announces what object is in front begins. What do I need to do to run the same on startup? Also, every time I run the code, the whole Raspberry Pi freezes and restarts. Is there any way to fix this? I tried using the `rc.local` method and it doesn't work. Regarding the system freezing, I thought it was a space issue and tried clearing the wastebin, which was fairly full, but that didn't work.
How to Fix C# WinForms Application Not Loading correctly on Windows 11?
|c#|.net|windows|winforms|windows-11|
null
Hope all are doing great. I was doing a project that registers a client account. However, it faces a CORS policy issue:

```
Access to XMLHttpRequest at 'http://localhost:8085/client/sign-up' from origin 'http://localhost:4200' has been blocked by CORS policy: Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin' header is present on the requested resource. signup-client.component.ts:33
```

```
POST http://localhost:8085/client/sign-up net::ERR_FAILED
```

[enter image description here](https://i.stack.imgur.com/m894Q.png)

Here is my Angular auth.service.ts:

```typescript
import { HttpClient, HttpErrorResponse, HttpHeaders } from '@angular/common/http';
import { Injectable } from '@angular/core';
import { Observable, throwError } from 'rxjs';
import { catchError } from 'rxjs/operators';

const BASIC_URL = 'http://localhost:8085/';

@Injectable({
  providedIn: 'root'
})
export class AuthService {

  constructor(private http: HttpClient) { }

  registerClient(signupRequestDTO: any): Observable<any> {
    return this.http.post(BASIC_URL + "client/sign-up", signupRequestDTO);
  }

  registerCompany(signupRequestDTO: any): Observable<any> {
    return this.http.post(BASIC_URL + "company/sign-up", signupRequestDTO);
  }

  private handleError(error: HttpErrorResponse) {
    if (error.error instanceof ErrorEvent) {
      // A client-side or network error occurred. Handle it accordingly.
      console.error('An error occurred:', error.error.message);
    } else {
      // The backend returned an unsuccessful response code.
      // The response body may contain clues as to what went wrong.
      console.error(
        `Backend returned code ${error.status}, ` +
        `body was: ${error.error}`);
    }
    // Return an observable with a user-facing error message.
    return throwError('Something bad happened; please try again later.');
  }
}
```

And here is my Spring Boot SimpleCorsFilter:

```java
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.Ordered;
import org.springframework.core.annotation.Order;

import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import java.io.IOException;

@Configuration
@Order(Ordered.HIGHEST_PRECEDENCE)
public class SimpleCorsFilter implements Filter {

    @Value("${app.client.url}")
    private String clientAppUrl = " ";

    public SimpleCorsFilter() {
    }

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain) throws IOException, ServletException {
        HttpServletResponse response = (HttpServletResponse) res;
        HttpServletRequest request = (HttpServletRequest) req;

        String originHeader = request.getHeader("Origin");
        // Allow requests from the actual Origin value if it's not null
        if (originHeader != null) {
            response.setHeader("Access-Control-Allow-Origin", originHeader);
        }
        response.setHeader("Access-Control-Allow-Methods", "POST, GET, PUT, OPTIONS, DELETE");
        response.setHeader("Access-Control-Max-Age", "3600");
        response.setHeader("Access-Control-Allow-Headers", "*");

        if ("OPTIONS".equalsIgnoreCase(request.getMethod())) {
            response.setStatus(HttpServletResponse.SC_OK);
        } else {
            chain.doFilter(req, res);
        }
    }

    @Override
    public void destroy() {
    }

    @Override
    public void init(FilterConfig filterConfig) throws ServletException {
    }
}
```

Please let me know what you think. Any suggestion will be helpful. Thanks. I expect the API call to succeed and to see the new client in my MySQL DB.
Angular / Spring Boot problem : No 'Access-Control-Allow-Origin' header is present
|angular|spring-boot|cors|
null
I'm trying to write a Pine Script strategy on TradingView. I have a problem setting a limit exit profit target in percentage.

**Hypothesis**

Open a long under a determined condition, and set an exit with a limit order with a take profit at 3%:

```pinescript
entry_id = str.tostring(bar_index)
long_profit_percent = input.float(3, title="Long Take Profit (%)", minval=0.0, step=1) / 100
profit_target = strategy.position_avg_price * (1 + long_profit_percent)
strategy.entry(entry_id, strategy.long, comment = "long:" + entry_id)
strategy.exit(entry_id, entry_id, limit=profit_target, comment_profit="sell:" + entry_id)
```

**Result**

In the strategy tester I see the long not closing at 3% profit; here is an example: [![enter image description here][1]][1] Here are the long and the exit in the trades table: [![enter image description here][2]][2] Entry price: 47.66. Close price: 47.66. What am I not seeing? I don't understand what I'm doing wrong. All the entries have the same behavior; the open and close price are the same. I'm trying to open a long position and set (for every long) a take-profit exit at 3% profit.

My code:

```pinescript
// This Pine Script™ code is subject to the terms of the Mozilla Public License 2.0 at https://mozilla.org/MPL/2.0/
//@version=5
strategy("test", overlay=true, process_orders_on_close = true, initial_capital = 1000, default_qty_type = strategy.percent_of_equity, default_qty_value = 25, close_entries_rule = "ANY", pyramiding = 4, commission_type = format.percent, commission_value = 0.08, calc_on_every_tick = true)

fast_sma = ta.sma(close, 14)
slow_sma = ta.sma(close, 28)
long_condition = ta.crossover(fast_sma, slow_sma)

plot(fast_sma, color=color.blue)
plot(slow_sma, color=color.red)

entry_id = str.tostring(bar_index)
long_profit_percent = input.float(3, title="Long Take Profit (%)", minval=0.0, step=1) / 100
profit_target = strategy.position_avg_price * (1 + long_profit_percent)

if (long_condition)
    strategy.entry(entry_id, strategy.long, comment = "long:" + entry_id)
    strategy.exit(entry_id, entry_id, limit=profit_target, comment_profit="sell:" + entry_id)
```

These are my table entries: [![table entries][3]][3] These are the candlestick signals: [![Candlestick signals][4]][4] It is as if all the trades open and close at the same moment, at the same price, as shown in the images above.

[1]: https://i.stack.imgur.com/w1zXX.jpg
[2]: https://i.stack.imgur.com/yQQDc.png
[3]: https://i.stack.imgur.com/PcSAK.png
[4]: https://i.stack.imgur.com/5VUhL.png
I found that the best solution across browsers is to use the full width of the window for the text element, and then scale the element down to the size you want. It works for me in Safari as well as Chrome, and it's not blurry at any stage. I found the right scale for a Full HD resolution, and from that I just recompute it:

```javascript
const screenWidth = window.innerWidth;
const scale = 0.00155 * (1920 / screenWidth);
```

You can use this scale for a CSS transform scale and it should work.
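The recomputation above is a simple inverse proportion: the scale found for a 1920 px wide window is multiplied by 1920 divided by the current width. A quick Python sketch of the same arithmetic (the `0.00155` base factor is the answer's own empirical constant, not something derived here):

```python
BASE_SCALE = 0.00155   # empirical factor found for a 1920 px (Full HD) window
BASE_WIDTH = 1920

def scale_for(screen_width):
    # Narrower windows get a proportionally larger scale factor,
    # so the element keeps the same apparent size.
    return BASE_SCALE * (BASE_WIDTH / screen_width)

print(scale_for(1920))  # the base scale at the reference width
print(scale_for(960))   # doubles when the window is half as wide
```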
Instead of using `$request.knowledge.answer`, use `$request.knowledge.answer[0]`.
I am using Laravel 7 and implemented rate limiting in the kernel file:

```php
protected $routeMiddleware = [
    'throttle' => \Illuminate\Routing\Middleware\ThrottleRequests::class,
];
```

I have 3 route files (admin, home and user), and in each file I put the group middleware like:

```php
Route::middleware(['throttle:10,1'])->group(function () {
    ....
});
```

but the limit triggers at random counts like 8, 9, 12, 13, so it is not accurate. Could anything else be affecting the rate limiter? I also made a change in the server config file:

```apache
# Rate Limiting Configuration
<Location var/www/html/abcd>
    SetOutputFilter RATE_LIMIT
    SetEnv rate-limit 5
    SetEnv rate-limit-burst 5
</Location>
```

but it is not taking effect. Why? Please help me.
Laravel 7 rate limiter is not working properly
|php|laravel|laravel-7|throttling|rate-limiting|
null
I have two certs in PEM format:

- Server SSL cert from Let's Encrypt
- Login cert (cert + private key) from MongoDB Atlas

When both are imported into a single keystore, the Mongo driver throws:

```
com.mongodb.MongoCommandException: Command failed with error 8000 (AtlasError): 'certificate validation failed' on server ac-9g18w0d-shard-00-02.rrfpnfg.mongodb.net:27017. The full response is {"ok": 0, "errmsg": "certificate validation failed", "code": 8000, "codeName": "AtlasError"}
```

When I keep them in different keystores, it works fine. To import the Let's Encrypt cert I used this guide: https://blog.ordina-jworks.io/security/2019/08/14/Using-Lets-Encrypt-Certificates-In-Java.html For MongoDB Atlas I was using this instruction: https://stackoverflow.com/a/54208558 Also, I'm loading those certs as Spring SSL bundles from the single keystore:

```yaml
spring:
  ssl:
    bundle:
      jks:
        server:
          key:
            alias: server
          keystore:
            location: keystore.p12
            password: ${PASSWORD}
            type: PKCS12
        database:
          key:
            alias: mongodb
          keystore:
            location: keystore.p12
            password: ${PASSWORD}
            type: PKCS12
  data:
    mongodb:
      uri: <mongo+srv uri>
      ssl:
        enabled: true
        bundle: database
```

But every time I get the error mentioned earlier. At this point, I'm not sure why it is not working with a single keystore. Can you point out what I'm doing wrong? Thanks in advance.
How to import two .pem into single .p12 keystore?
|java|mongodb|spring-boot|
null
I use Python to get my ONNX model's input shape:

```python
providers = ['AzureExecutionProvider', 'CPUExecutionProvider']  # Specify your desired providers
sess_options = onnxruntime.SessionOptions()
sess = onnxruntime.InferenceSession(model_path, sess_options, providers=providers)
input_shape = sess.get_inputs()[0].shape
print(f"Input shape: {input_shape}")
```

It shows:

```
Input shape: ['input_dynamic_axes_1', 'input_dynamic_axes_2', 'input_dynamic_axes_3', 'input_dynamic_axes_4']
```

When I run the ONNX model in JS:

```typescript
const session = await ort.InferenceSession.create(model, {
  executionProviders: ["webgpu", "webgl"],
});
const feeds: any = {};
const inputNames = session.inputNames;
feeds[inputNames[0]] = inputTensor;
const results = await session.run(feeds);
const outputData = results[session.outputNames[0]].data;
return outputData as any;
```

it raises this error:

```
Uncaught (in promise) Error: input tensor[0] check failed: expected shape '[,,,]' but got [1,3,800,400]
    validateInputTensorDims
    normalizeAndValidateInputs
    (anonymous function)
    event
    run
    run
    run
    runInference
```

I think the reason is that the ONNX input shape is dynamic, so the following ONNX.js code always sets expectedDims == [null, null, null, null]:

```typescript
private validateInputTensorDims(
    graphInputDims: Array<readonly number[]>, givenInputs: Tensor[], noneDimSupported: boolean) {
  for (let i = 0; i < givenInputs.length; i++) {
    const expectedDims = graphInputDims[i];
    const actualDims = givenInputs[i].dims;
    if (!this.compareTensorDims(expectedDims, actualDims, noneDimSupported)) {
      throw new Error(`input tensor[${i}] check failed: expected shape '[${expectedDims.join(',')}]' but got [${actualDims.join(',')}]`);
    }
  }
}
```

**So my question is:** how do I call ONNX in ONNX Runtime Web with a dynamic input shape (ignoring the input shape check)?
I have a Flux with a `doOnNext(this::dumpJson)`, but for some odd reason it stops after the first element if I do this:

```java
private void dumpJson(Object it) {
    try {
        objectMapper.writerWithDefaultPrettyPrinter().writeValue(System.out, it);
        System.out.flush();
    } catch (IOException e) {
        throw new RuntimeException(e);
    }
}
```

But if I convert to a String first and then write it out afterwards, it works:

```java
private void dumpJson(Object it) {
    try {
        final var s = objectMapper.writerWithDefaultPrettyPrinter().writeValueAsString(it);
        System.out.println(s);
    } catch (JsonProcessingException e) {
        throw new RuntimeException(e);
    }
}
```
Why would using writeValue stop a Flux but writing to a temporary value then printing it allow it to work?
|java|project-reactor|
My question is about running two Vue+Vuetify applications in the same web page and avoiding styling conflicts between them. Consider a "main" Vue 3 web application running the homepage for a website. This "main" app embeds another "minor" Vue 3 web application in its HTML via a `<script>` tag. Both Vue 3 apps use Vuetify 3 for styling, each with its own theme. When the "main" app is loaded, its UI displays correctly. When, a fraction of a second later, the "minor" app is loaded, some of the colors of the "main" app change. Specifically, the "main" app's primary color changes to `#62ffee` (a light cyan), which is not part of either app's explicitly set theme colors. If I understand correctly, `#62ffee` is the implicit default primary color of the "minor" app. I've been able to prevent this color disruption by disabling the "minor" app's Vuetify theme like so:

```javascript
import { createVuetify } from "vuetify";

export default createVuetify({
  theme: {
    isDisabled: true,
  },
  customProperties: true,
});
```

My question is: is there a way to enable theming in the "minor" app while keeping its theme colors from interfering with those of the "main" app?
How to isolate the theme colors of concurrent Vue+Vuetify app instances?
|css|vue.js|vuejs3|vuetify.js|themes|
To design an NFA for the language (01 ∪ 001 ∪ 010)<sup>\*</sup> we can take it step by step. First design a diagram for the language (01)<sup>\*</sup>:

[![nfa1][1]][1]

q<sub>0</sub> is the start and accept state, and a separate sink state absorbs invalid input. Then extend it to (01 ∪ 001)<sup>\*</sup>:

[![nfa2][2]][2]

And then complete it to (01 ∪ 001 ∪ 010)<sup>\*</sup>:

[![enter image description here][3]][3]

To design a corresponding DFA, define a state for each possible set of states in the NFA. So for instance (q<sub>1</sub>,q<sub>2</sub>) is one such state, and (q<sub>0</sub>,q<sub>1</sub>,q<sub>2</sub>) is another. There are many such states, but only some are really reachable. Using [powerset construction](https://en.wikipedia.org/wiki/Powerset_construction) we can identify which input prefixes lead to which states in the NFA, and we can assign to each unique state combination a distinct DFA state. As soon as we find the same combination of NFA states that we already encountered with another prefix, we don't need to lengthen that prefix any further, and so we get this table of possibilities:

| prefix | NFA states | DFA state |
| ------ | ---------- | --------- |
| ε | q<sub>0</sub> | q<sub>0</sub> |
| 0 | q<sub>1</sub>,q<sub>2</sub> | q<sub>12</sub> |
| 00 | q<sub>1</sub> | q<sub>1</sub> |
| 000 | q<sub>4</sub> | q<sub>4</sub> |
| 001 | q<sub>0</sub> | q<sub>0</sub> |
| 01 | q<sub>0</sub>,q<sub>3</sub> | q<sub>03</sub> |
| 010 | q<sub>0</sub>,q<sub>1</sub>,q<sub>2</sub> | q<sub>012</sub> |
| 0100 | q<sub>1</sub>,q<sub>2</sub> | q<sub>12</sub> |
| 0101 | q<sub>0</sub>,q<sub>3</sub> | q<sub>03</sub> |
| 011 | q<sub>4</sub> | q<sub>4</sub> |
| 1 | q<sub>4</sub> | q<sub>4</sub> |

I chose names for the states of the DFA that reflect how they map to a set of states in the NFA. Each DFA state that corresponds to an NFA state combination that includes q<sub>0</sub> is accepting.
This leads to the following DFA:

[![enter image description here][4]][4]

### Python code

The above NFA and DFA can be represented as follows (using the `automata-lib` package):

```python
from automata.fa.dfa import DFA
from automata.fa.nfa import NFA

nfa = NFA(
    states={'q0', 'q1', 'q2', 'q3', 'q4'},
    input_symbols={'0', '1'},
    transitions={
        'q0': {'0': {'q1', 'q2'}, '1': {'q4'}},
        'q1': {'0': {'q4'}, '1': {'q0'}},
        'q2': {'0': {'q1'}, '1': {'q3'}},
        'q3': {'0': {'q0'}, '1': {'q4'}},
        'q4': {'0': {'q4'}, '1': {'q4'}}  # Sink
    },
    initial_state='q0',
    final_states={'q0'}
)

dfa = DFA(
    states={'q0', 'q1', 'q12', 'q03', 'q012', 'q4'},
    input_symbols={'0', '1'},
    transitions={
        'q0':   {'0': 'q12',  '1': 'q4' },
        'q12':  {'0': 'q1',   '1': 'q03'},
        'q1':   {'0': 'q4',   '1': 'q0' },
        'q03':  {'0': 'q012', '1': 'q4' },
        'q012': {'0': 'q12',  '1': 'q03'},
        'q4':   {'0': 'q4',   '1': 'q4' }  # Sink
    },
    initial_state='q0',
    final_states={'q0', 'q03', 'q012'}
)
```

[1]: https://i.stack.imgur.com/x8g6Z.png
[2]: https://i.stack.imgur.com/UMbwe.png
[3]: https://i.stack.imgur.com/8aCLd.png
[4]: https://i.stack.imgur.com/Sa6py.png
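As a cross-check, the subset construction described above can also be run directly in plain Python (a sketch independent of any automata library); it starts from the NFA's initial state set and follows each input symbol, exactly as the prefix table does. Note that, unlike the table above, this version keeps the sink state inside the state sets instead of dropping it, so it may produce more (equivalent) states:

```python
def powerset_construction(nfa_transitions, initial, alphabet):
    """Determinize an NFA given as {state: {symbol: set(states)}}.
    Returns DFA transitions keyed by frozensets of NFA states."""
    start = frozenset([initial])
    dfa = {}
    stack = [start]
    while stack:
        current = stack.pop()
        if current in dfa:
            continue  # already expanded this subset
        dfa[current] = {}
        for sym in alphabet:
            # The DFA successor is the union of NFA successors.
            nxt = frozenset(s for q in current for s in nfa_transitions[q].get(sym, ()))
            dfa[current][sym] = nxt
            if nxt not in dfa:
                stack.append(nxt)
    return dfa

# The NFA from the answer above.
nfa_t = {
    'q0': {'0': {'q1', 'q2'}, '1': {'q4'}},
    'q1': {'0': {'q4'}, '1': {'q0'}},
    'q2': {'0': {'q1'}, '1': {'q3'}},
    'q3': {'0': {'q0'}, '1': {'q4'}},
    'q4': {'0': {'q4'}, '1': {'q4'}},  # sink
}
dfa = powerset_construction(nfa_t, 'q0', '01')
print(len(dfa))  # number of reachable DFA states (sink kept in the sets)
```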
How to configure Twin.macro with RsPack
|webpack|tailwind-css|webpack-5|rspack|twin.macro|
I am trying to run (something similar to) the following, and to figure out how many times the loop actually ran:

```yaml
- name: this step
  debug:
    msg: "working on {{ item.name }}"
  when: not item.skip
  with_list:
    - {name: t1, skip: True}
    - {name: t2, skip: False}
    - {name: t3, skip: False}
    - {name: t4, skip: False}
    - {name: t5, skip: True}
```

In this simplified version, I want to get 3:

```yaml
  register: somedata

- debug:
    msg: "{{ somedata.results | length }}"
```
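For clarity, the expected count of 3 is just the number of items whose `skip` flag is false; a quick Python sketch of that filter (illustrative only, not Ansible, which records skipped items in `results` as well):

```python
items = [
    {"name": "t1", "skip": True},
    {"name": "t2", "skip": False},
    {"name": "t3", "skip": False},
    {"name": "t4", "skip": False},
    {"name": "t5", "skip": True},
]

# Keep only the items the `when: not item.skip` condition would run.
ran = [item["name"] for item in items if not item["skip"]]
print(len(ran))  # → 3
```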
I want to send push and pull events for images to Kafka using the Azure ACR webhook feature. Event Hubs is not available due to cost. The webhook is set to post to the Kafka REST Proxy URI with the header 'Content-Type: application/vnd.kafka.json.v2+json'. When an event occurs:

Headers:

```
User-Agent: AzureContainerRegistry/1.0
traceparent: 00-1318068fc8e0ca8904b19865b65d551b-d98ef2405fa798bf-01
Content-Type: application/vnd.kafka.json.v2+json; charset=utf-8
Content-Length: 104
```

Payload:

```
{
  "id": "a00c973c-a717-425d-9db6-874a6288728c",
  "timestamp": "2024-03-13T06:56:49.2277133Z",
  "action": "ping"
}
```

The message is sent, but the response is '{"error_code":422,"message":"Unrecognized field: id"}'. This seems to be because the webhook sends the payload without the 'records'/'value' envelope the REST Proxy expects, i.e.:

```
{ "records": [ { "value": { *ACTUAL DATA* } } ] }
```

In this environment, where the payload format of the webhook message cannot be edited, is there a solution that reshapes the message to the required format before sending it to Kafka, or a more efficient REST solution that can receive such a simple message and forward it to Kafka? PLUS: Please let me know if there is a configuration that allows sending messages to Kafka using Azure webhooks without Event Hubs. Thank you.
CodeMirror has a method `fromTextArea` that takes two parameters: first the source element, and second the options describing how you would like the editor to look.

```javascript
var editor = CodeMirror.fromTextArea(document.getElementById('editor'), {
  mode: "text/x-java",
  lineNumbers: true,
  autoCloseTags: true,
  tabSize: 5
});
```

Once the result of `fromTextArea` is stored in a variable, it provides the method `on`, which again takes two params. First is the type of event you would like to listen for, which in this case is the 'change' event, and second is the operation you would like to perform, which in this case simply logs the output:

```javascript
editor.on('change', // change it to your event
  (args) => {
    console.log("changes");
    console.log(args);
  });
```
Why doesn't FutureBuilder execute a condition when the data is not ready yet? I.e. the else block where '-------------' is logged is not executed. Before I used Provider, that block would execute, i.e. it would show the CircularProgressIndicator. The provider value (ChangeNotifierProvider) is changed in the PaginationAppBar.

```dart
Future<List<ProfileData>> fetchProfiles(int page) async {
  WebService webService = locator<WebService>();
  return webService.fetchProfiles(Gender.female, page);
}

@override
Widget build(BuildContext context) {
  final pagerModel = context.watch<PagerModel>();
  return FutureBuilder<List<ProfileData>>(
    future: fetchProfiles(pagerModel.page),
    builder: (context, snapshot) {
      if (snapshot.hasData) {
        final data = snapshot.data;
        debugPrint('snapshot data length ${snapshot.data?.length}');
        if (data!.isEmpty) {
          return const ErrorWithMessage(message: 'data loading error');
        }
        return Scaffold(
          appBar: const PaginationAppBar(Gender.female),
          body: ProfileGrid(data),
        );
      } else if (snapshot.hasError) {
        return _processError(snapshot);
      } else {
        debugPrint('-------------this else does not execute');
        return const Scaffold(
          appBar: PaginationAppBar(Gender.female),
          body: CircularProgressIndicator(),
        );
      }
    },
  );
}
```
This is a sample of a one-pager SAPUI5 app with

- an XMLView with a table
- a controller with a JSONModel
- data binding of the JSONModel in the table

The versions of the SAP resources are not up to date but they (still!) work.

```html
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8" />
<title>SAPUI5 single file template</title>

<!-- decide what version you want to use, see http://scn.sap.com/community/developer-center/front-end/blog/2015/07/30/multi-version-availability-of-sapui5:
<script src="https://sapui5.hana.ondemand.com/resources/sap-ui-core.js"
<script src="https://sapui5.hana.ondemand.com/1.28.11/resources/sap-ui-core.js"
<script src="https://openui5beta.hana.ondemand.com/resources/sap-ui-core.js"
<script src="https://openui5.hana.ondemand.com/resources/sap-ui-core.js"
<script src="https://openui5.hana.ondemand.com/1.30.7/resources/sap-ui-core.js"
-->
<script src='https://ui5.sap.com/1.110.0/resources/sap-ui-core.js'
    id="sap-ui-bootstrap"
    data-sap-ui-theme="sap_bluecrystal"
    data-sap-ui-libs="sap.m"
    data-sap-ui-bindingSyntax="complex"
    data-sap-ui-compatVersion="edge"
    data-sap-ui-preload="async"></script>
<!-- use "sync" or change the code below if you have issues -->

<body class="sapUiBody">
    <div id="content"></div>
</body>

<!-- XMLView -->
<script id="myXmlView" type="ui5/xmlview">
    <mvc:View controllerName="MyController"
        xmlns="sap.m"
        xmlns:l="sap.ui.layout"
        xmlns:core="sap.ui.core"
        xmlns:mvc="sap.ui.core.mvc"
        xmlns:nabisoft="nabisoft.ui">
        <l:HorizontalLayout xmlns:sap.ui.layout="sap.ui.layout" id="table_layt">
            <l:content>
                <Table noDataText="No Data Available" id="bud_table" class="table_layt">
                    <items></items>
                    <columns>
                        <Column id="c1">
                            <header><Label text="Number" id="aclab"/></header>
                        </Column>
                        <Column id="c2">
                            <header><Label text="Question" id="actlab"/></header>
                        </Column>
                        <Column id="c3">
                            <header><Label text="Answer" id="budglab"/></header>
                        </Column>
                        <Column id="c4">
                            <header><Label text="Answertype" id="Answertype"/></header>
                        </Column>
                    </columns>
                </Table>
            </l:content>
        </l:HorizontalLayout>
    </mvc:View>
</script>

<script>
sap.ui.getCore().attachInit(function () {
    "use strict";

    //### Controller ###
    sap.ui.controller("MyController", {
        onInit : function () {
            this.getView().setModel(new sap.ui.model.json.JSONModel({
                root: [
                    { Number: "1", Question: "Question 1", Answer: "Lorem ipsum dolor sit amet, consetetur sadipscing elitr, sed diam nonumy eirmod tempor invidunt ut l", Answertype: "text" },
                    { Number: "2", Question: "Question 2", Answer: "yes", Answertype: "yesno" }
                ]
            }));

            var oTable = this.getView().byId("bud_table");
            var oTemplate = new sap.m.ColumnListItem({
                cells: [
                    new sap.m.Label({ text: "{Number}" }),
                    new sap.m.Text({ text: "{Question}" }),
                    new sap.m.Text({ text: "{Answer}" }),
                    new sap.m.Text({ text: "{Answertype}" })
                ]
            });
            oTable.bindItems("/root", oTemplate);
        }
    });

    //### THE APP: place the XMLView somewhere into DOM ###
    sap.ui.xmlview({
        viewContent: jQuery("#myXmlView").html()
    }).placeAt("content");
});
</script>
</head>
</html>
```

Save the file, open it in a browser and it works. You can also serve this with XAMPP or `ui5 serve` and other web servers.
I have an EC2 instance running multiple Java servers. Each server uses local port 1234, I have an Elastic IP for each Java server, and I use iptables to redirect traffic from EIP:443 to EIP:1234. So I need to reserve an EIP for every Java server, and I need to open port 1234 on the firewalls from the outside to make the application reachable.

I am searching for an optimal solution that reduces the number of reserved IPs and closes non-standard ports like 1234, exposing just 443. Any suggestions? Thanks in advance.

I currently use an EIP for each server. I expect a solution that helps me reduce the number of IPs and increase security by closing non-standard ports.
How to run multiple Java servers on the same machine on the same port 443
|java|linux|amazon-web-services|nginx|
I have a question about visualizing alluvial diagrams in R. My goal is to show the association between multiple groups, so I decided to use an alluvial diagram. However, an alluvial diagram usually shows the association through a source, target1, target2, and so on. Is there a visualization tool that can also add flows *within* source/target1/target2? Like this:

![example diagram screenshot][1]

Other than the alluvial diagram, are there any other visualization charts that could serve a similar purpose? Thanks so much; any reply would be greatly appreciated!

I have checked some of the existing packages in R, but none has a function to visualize associations within a source/target.

[1]: https://i.stack.imgur.com/170Jh.jpg
This is a use case for `INDEX`/`MATCH`: `=E2*INDEX($B$2:$B$5,MATCH(D2,$A$2:$A$5,0))` [![enter image description here][1]][1] [1]: https://i.stack.imgur.com/6PkIl.png
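If it helps to see the same logic outside Excel, here is a small Python sketch of what the formula does; the item names and rates below are made-up stand-ins for the data in A2:B5:

```python
# Hypothetical lookup table standing in for A2:B5 (item -> rate).
rates = {"apples": 1.5, "pears": 2.0, "plums": 0.75, "cherries": 3.0}

def line_total(item, quantity):
    # Mirrors E2 * INDEX($B$2:$B$5, MATCH(D2, $A$2:$A$5, 0)):
    # MATCH finds the row for `item`, INDEX returns that row's rate,
    # and the result is multiplied by the quantity in E2.
    return rates[item] * quantity

print(line_total("pears", 4))  # 8.0
```

Like the `0` match type in `MATCH`, the dictionary lookup is an exact match and raises an error for unknown items.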
I have two certs in PEM format:

- a server SSL cert from Let's Encrypt
- a login cert (cert + private key) from MongoDB Atlas

When both are imported into a single keystore, the Mongo driver throws:

```
com.mongodb.MongoCommandException: Command failed with error 8000 (AtlasError): 'certificate validation failed' on server ac-9g18w0d-shard-00-02.rrfpnfg.mongodb.net:27017. The full response is {"ok": 0, "errmsg": "certificate validation failed", "code": 8000, "codeName": "AtlasError"}
```

When I keep them in separate keystores, it works fine.

To import the Let's Encrypt cert I used this guide: https://blog.ordina-jworks.io/security/2019/08/14/Using-Lets-Encrypt-Certificates-In-Java.html

For MongoDB Atlas I was using this instruction: https://stackoverflow.com/a/54208558

Also, I'm loading those certs as Spring SSL bundles from a single keystore:

```
spring:
  ssl:
    bundle:
      jks:
        server:
          key:
            alias: server
          keystore:
            location: keystore.p12
            password: ${PASSWORD}
            type: PKCS12
        database:
          key:
            alias: mongodb
          keystore:
            location: keystore.p12
            password: ${PASSWORD}
            type: PKCS12
  data:
    mongodb:
      uri: <mongo+srv uri>
      ssl:
        enabled: true
        bundle: database
```

But every time I get the error mentioned earlier. At this point, I'm not sure why it is not working with a single keystore. Can you point out what I'm doing wrong? Thanks in advance.
So I'm using a Terraform module whose source is another internal Git repository. I can successfully create resources using the module, but when I try to remove (or comment out) the module, it does not get removed; it throws an error. However, I can remove this module via the `terraform destroy` command on the command line. It's hard to detect a code removal and run `terraform destroy` in a CI/CD pipeline, though. Is there a workaround for this? The error message is as follows:
```
Error: Provider configuration not present

To work with module.foo (orphan) its original provider configuration at
module.foo.provider["registry.terraform.io/datadog/datadog"] is required, but
it has been removed. This occurs when a provider configuration is removed
while objects created by that provider still exist in the state. Re-add the
provider configuration to destroy module.foo (orphan), after which you can
remove the provider configuration again.
```
Terraform "Error: Provider configuration not present": module gets created, but removing or commenting it out throws an error
For a multi-container pod on minikube, this is how the Service and Deployment YAMLs for ZooKeeper and Kafka can look:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: broker-conf
data:
  KAFKA_BROKER_ID: "1"
  ZOOKEEPER_CONNECT: 127.0.0.1:2181
  BOOTSTRAP_SERVER: 127.0.0.1:9092
  KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
  KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka-service:9092,PLAINTEXT_HOST://localhost:29092
  KAFKA_OPTS: -javaagent:/confluent-7.0.1/share/java/kafka/jmx_prometheus_javaagent-0.16.1.jar=9393:/confluent-7.0.1/etc/kafka/kafkaMetricConfig.yaml
  listeners: PLAINTEXT://0.0.0.0:9092
  advertised.listeners: PLAINTEXT://kafka-service:9092
---
apiVersion: v1
kind: Service
metadata:
  name: zookeeper-service
spec:
  type: ClusterIP
  ports:
    - name: client
      port: 2181
      protocol: TCP
  selector:
    component: kafka-svc
---
apiVersion: v1
kind: Service
metadata:
  name: kafka-service
spec:
  #type: ClusterIP
  type: NodePort
  ports:
    - name: plaintext
      port: 9092
      protocol: TCP
    - name: plaintext-host
      port: 29092
      protocol: TCP
    - name: jmx-prom-port
      port: 9393
      protocol: TCP
      nodePort: 31500
  selector:
    component: kafka-svc
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kafka
spec:
  replicas: 1
  selector:
    matchLabels:
      component: kafka-svc
  template:
    metadata:
      labels:
        component: kafka-svc
    spec:
      containers:
        - name: zookeeper
          image: localhost:5000/zookeeper
          resources:
            requests:
              memory: "128Mi"
              cpu: "125m"
            limits:
              memory: "256Mi"
              cpu: "250m"
          command: ["/bin/sh", "-c"]
          args: ["/confluent-7.0.1/bin/zookeeper-server-start /confluent-7.0.1/etc/kafka/zookeeper.properties; sleep infinity"]
          ports:
            - containerPort: 2181
          env:
            - name: ZOOKEEPER_ID
              value: "1"
            - name: ZOOKEEPER_CLIENT_PORT
              value: "2181"
            - name: ZOOKEEPER_TICK_TIME
              value: "2000"
            - name: ALLOW_ANONYMOUS_LOGIN
              value: "yes"
        - name: kafka
          image: localhost:5000/kafka
          imagePullPolicy: IfNotPresent
          resources:
            requests:
              memory: "1Gi"
              cpu: "500m"
            limits:
              memory: "2Gi"
              cpu: "1000m"
          envFrom:
            - configMapRef:
                name: broker-conf
          command: ["/bin/sh", "-c"]
          args:
            - /confluent-7.0.1/bin/kafka-server-start /confluent-7.0.1/etc/kafka/server.properties --override zookeeper.connect=$(ZOOKEEPER_CONNECT);
            - /confluent-7.0.1/bin/kafka-topics --create --topic microTopic --bootstrap-server $(BOOTSTRAP_SERVER);
            - mkdir /prometheus/
            - sleep infinity;
          ports:
            - containerPort: 9092
              name: kafka-port
            - containerPort: 29092
              name: kafka-ad-port
            - containerPort: 9393
              name: jmx-export-port
```
>Is this an expected behaviour of the labeling tool? Such that I need to draw the labeling region myself?

- ***There is a difference in the capabilities and limitations of the OCR extraction process. It's possible that the standard pricing tier introduced restrictions or modified settings that affected the OCR extraction beyond the second page.***

Here, I have created a Document Intelligence resource with the standard pricing tier, and that tier makes the difference in the limitation.

![enter image description here](https://i.imgur.com/F3MKng8.png)

Here I have uploaded a dummy six-page document to test the service, and started the analysis to check the content.

![enter image description here](https://i.imgur.com/sCGKrJJ.png)

- It can also detect tables, fields, and figures, and we can label regions by drawing them manually.

![enter image description here](https://i.imgur.com/KITuatW.png)

**Third page:**

![enter image description here](https://i.imgur.com/Zwzx9qH.png)

![enter image description here](https://i.imgur.com/KmGfTSZ.png)
```
flutter clean
flutter pub get
pod install
```
>How to make Postgres GIN index work with jsonb_* functions?

You can't. PostgreSQL [indexes][1] are tied to operators in specific operator classes:

>In general, PostgreSQL indexes can be used to optimize queries that contain one or more WHERE or JOIN clauses of the form
>
>>*`indexed-column`* ***`indexable-operator`*** *`comparison-value`*
>
>Here, the *`indexed-column`* is whatever column or expression the index has been defined on. The ***`indexable-operator`*** is an operator that is a member of the index's operator class for the indexed column. And the *`comparison-value`* can be any expression that is not volatile and does not reference the index's table.

[GIN][2] will help you only if you use the operators in the opclass you used when you defined the index (`jsonb_ops` by default):

>The default GIN operator class for `jsonb` supports queries with the key-exists operators `?`, `?|` and `?&`, the containment operator `@>`, and the `jsonpath` match operators `@?` and `@@`.

Even though there are equivalent `jsonb_path_X()` functions that do the exact same thing those operators do, the index will only kick in if you use the operator and not the function.

There are cases like [PostGIS][3] where functions do in fact use the index, but that's because they [wrap an operator][4] or add an operator-based condition that's using the index, then use the actual function to just re-check pre-filtered rows. You can mimic that if you want: [demo][5]

```pgsql
CREATE OR REPLACE FUNCTION my_jsonb_path_exists(arg1 jsonb, arg2 jsonpath)
RETURNS boolean AS 'SELECT $1 @? $2'
LANGUAGE 'sql' IMMUTABLE;

EXPLAIN ANALYZE
SELECT * FROM applications
WHERE my_jsonb_path_exists(
          applications.application,
          '$.persons[*] ? (@.type_code == 3)'
      );
```

| QUERY PLAN |
|:-----------|
| Bitmap Heap Scan on applications (cost=37.39..41.40 rows=1 width=130) (actual time=0.010..0.011 rows=0 loops=1) |
|   Recheck Cond: (application @? '$."persons"\[\*]?(@."type\_code" == 3)'::jsonpath) |
|   -> Bitmap Index Scan on applications\_application\_idx (cost=0.00..37.39 rows=1 width=0) (actual time=0.008..0.008 rows=0 loops=1) |
|         Index Cond: (application @? '$."persons"\[\*]?(@."type\_code" == 3)'::jsonpath) |
| Planning Time: 0.102 ms |
| Execution Time: 0.033 ms |

[1]: https://www.postgresql.org/docs/current/indexes-intro.html
[2]: https://www.postgresql.org/docs/current/datatype-json.html#JSON-INDEXING
[3]: https://postgis.net/workshops/postgis-intro/indexing.html
[4]: https://gis.stackexchange.com/a/188420/127552
[5]: https://dbfiddle.uk/bNlbi_tm
As this is a highly active question and Amazon support tends to link it as a standard reply: adding `jdk.crypto.cryptoki` and `ca-certificates` to the Dockerfiles fixed the issue, in case you are using Java. The TLS version was 1.2, and the problem seems to have been related to older versions of the AWS SDK.
Be wary that your test does not mimic the use of an update view. You should post the unchanged data as well, since those values will be loaded on the page before you enter any updates.

````python
def test_movie_update_view(self):
    self.assertEqual(self.movie.movie_name, 'Avengers - Endgame')
    movie_update = {
        'movie_name': self.movie.movie_name,
        'movie_year': self.movie.movie_year,
        ...
    }
    movie_update.update({
        'movie_name': 'Title Updated',
        'movie_director': 'New Director',
    })
    response = self.client.post(
        reverse('movie_update', kwargs={'pk': self.movie.pk}),
        data=movie_update
    )
    self.movie.refresh_from_db()
    self.assertEqual(self.movie.movie_name, 'Title Updated')
````

My guess is that some of your model fields aren't nullable, so the form will require those fields. Since you only sent `movie_name` and `movie_director`, you got an error. You should `assertTemplateUsed` with `follow=True` in your request to ensure the form succeeds before asserting data.
I have a SVG that contains a single `<path>` element that draws a certain shape. The coordinates of this path are contained within the path's `'d'` attribute's value. I need this shape flipped horizontally. When I try to accomplish this in Adobe Illustrator, using Reflect tool for example, I get double the size of data in the `'d'` attribute value and therefore double the size of SVG file and that is just too painful to do. I could use `transform` and `scale` functions to flip the shape without changing the coordinates in `'d'` but then I would increase the rendering time and CPU usage since I added extra work for browser or whichever software renders the SVG. The logical thing to do is just change the coordinates themselves within the `'d'` to their 'opposites' to achieve the flipping of the shape. I could write a script that does this but alas I do not know the format of how these coordinates are stored and what they actually represent. There are both letters and numbers used. So, my question is, how would one change the coordinates of a `<path>` element's `'d'` in order to achieve a horizontal flip of the entire shape? Here is my example SVG for illustration: <?xml version="1.0" encoding="UTF-8" standalone="yes"?> <svg version="1.1" id="Layer_1" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" x="0px" y="0px"> <path id="example" class="st0" d="M492 534h-96q-37 0 -53.5 -12.5t-30.5 -50.5q-20 -54 -44 -165q-17 -79 -17 -105q0 -49 38 -58q17 -3 51 -3h67q41 0 56 7.5t23 32.5h27l-24 -106q-10 -42 -27 -58q-18 -15 -50 -19t-139 -4q-89 0 -128 5.5t-63 21.5q-54 35 -54 122q0 53 25 177q31 146 62 218t76 101 t124 29h258l-18 -80q-7 -34 -19 -43.5t-44 -9.5z"/> </svg>
Modify the data in an SVG path element's d-attribute to flip the resulting path
|css|svg|vector|adobe-illustrator|
I'm trying to use the retrieve_endpoints function in cli_config.js. I added the functions to the module exports and added the require in the cli_config file. Why can't I seem to call the retrieve_endpoints function? **collector.js** ``` async function retrieve_endpoints(enviromnent) { let configured_endpoints = []; return configured_endpoints; } module.exports = ({ server }) => { return { retrieve_endpoints }; } ``` **cli_config.js** ``` const collector = require('../collector/collector'); function webapi_source({ menus }) { endpoints = await collector.retrieve_endpoints(env); } ```
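For comparison, a restructured sketch that does work for me is below; the key difference seems to be that `collector.js` exports a factory (`module.exports = ({ server }) => ...`), so `require` returns that factory function rather than an object with `retrieve_endpoints` on it. The environment argument here is just a placeholder:

```javascript
// Exporting the function directly, instead of from inside a factory,
// so require(...) yields { retrieve_endpoints } immediately.
async function retrieve_endpoints(environment) {
  const configured_endpoints = []; // placeholder for the real lookup
  return configured_endpoints;
}

module.exports = { retrieve_endpoints };

// The consumer must itself be async to be allowed to use await:
async function webapi_source() {
  const endpoints = await retrieve_endpoints("dev");
  return endpoints;
}
```

(If the factory wrapper is kept instead, the consumer would have to call it first: `const collector = require('../collector/collector')({ server });`.)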
Exporting function(s) does not make it available in other module
|javascript|node.js|node-modules|
You can achieve this by saving the edited item in component data. Then manipulate this object in the dialog and save it back to items array on save (or delete it from array on delete). ```lang-javascript data: () => ({ editedItem: {}, // ... }), methods: { editItem(item) { this.editedItem = Object.assign({}, item); this.showEditDialog = true; }, saveItem() { this.showEditDialog = false; let index = this.wifiNetworks.findIndex( (wifi) => wifi.SSID === this.editedItem.SSID ); this.wifiNetworks[index] = Object.assign({}, this.editedItem); }, showDeleteWifiDialog(item) { this.editedItem = Object.assign({}, item); this.showDeleteDialog = true; }, hideDeleteWifiDialog() { this.editedItem = {}; this.showDeleteDialog = false; }, deleteWifi() { this.wifiNetworks.splice(this.wifiNetworks.indexOf(this.editedItem), 1); this.showDeleteDialog = false; }, } ``` Note: In the second dialog, you have a template tag that does not belong there.
I'm working on a SQL Server database where I need to track the progression of outcomes for production operations. Each operation (OP) is attempted until it succeeds (outcome = 1). If an operation fails (outcome = 0), it's retried a number of times until it passes, after which we proceed to the next operation. The operations are stored in a table named Operations like the following:

OP|Outcome
--|-------
1 |0
1 |0
1 |1
2 |1
3 |1
4 |0
4 |0
4 |1
5 |0
5 |1

I want to create a view that displays the outcome of each operation across rows in a specific format. If an operation fails, the subsequent operations in that row should display as NULL until the operation succeeds. Here's an example of the desired output format for 5 operations:

OP 1 Outcome | OP 2 Outcome | OP 3 Outcome | OP 4 Outcome | OP 5 Outcome
-------------|--------------|--------------|--------------|--------------
0            | NULL         | NULL         | NULL         | NULL
0            | NULL         | NULL         | NULL         | NULL
1            | 1            | 1            | 0            | NULL
1            | 1            | 1            | 0            | NULL
1            | 1            | 1            | 1            | 0
1            | 1            | 1            | 1            | 1

I've attempted to structure a query using window functions and conditional aggregation to pivot the data, but I'm struggling with correctly propagating the outcomes and handling the NULLs for subsequent operations after a failure. The best result I have so far is shown in [this fiddle][1], but I can't figure out how to coalesce the NULL values for the already-succeeded operations.

[1]: https://sqlfiddle.com/sql-server/online-compiler?id=c66c9cd5-5ce1-45fd-8a80-ecd7a612840b
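To make the propagation rule I'm after concrete, here is a small Python sketch of the target transformation (my reading of the desired output: one row per failed attempt, carrying 1 for already-succeeded operations and NULL/None for not-yet-reached ones, plus a final all-success row). The SQL view should reproduce exactly these rows:

```python
def pivot_outcomes(attempts):
    """attempts: ordered (op, outcome) pairs; ops are numbered 1..n."""
    n = max(op for op, _ in attempts)
    succeeded = set()
    rows = []
    for op, outcome in attempts:
        if outcome == 0:
            # Each failure gets its own row: succeeded ops show 1,
            # the failing op shows 0, later ops stay None (NULL).
            rows.append([1 if i in succeeded else (0 if i == op else None)
                         for i in range(1, n + 1)])
        else:
            succeeded.add(op)
    # Final row: everything that eventually succeeded shows 1.
    rows.append([1 if i in succeeded else None for i in range(1, n + 1)])
    return rows

sample = [(1, 0), (1, 0), (1, 1), (2, 1), (3, 1),
          (4, 0), (4, 0), (4, 1), (5, 0), (5, 1)]
for row in pivot_outcomes(sample):
    print(row)
```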
The error

```
Get-EventLog : Cannot open log Application on machine .. Windows has not provided an error code.
2023-08-08 02:33:27
```

seems to come from the entrypoint: https://github.com/microsoft/mssql-docker/blob/master/windows/mssql-server-windows/start.ps1

```
while ($true) {
    Get-EventLog -LogName Application -Source "MSSQL*" -After $lastCheck | Select-Object TimeGenerated, EntryType, Message
    $lastCheck = Get-Date
    Start-Sleep -Seconds 2
}
```

The errors you are seeing are weird; I have seen errors about not being able to access the event log once the container has received termination signals. In any case, the post you are following is from the official MS repo, but the MSSQL-on-Windows-Containers initiative was abandoned by Microsoft several years ago, so you are better off looking for something fresher. The container ecosystem has made huge progress in the past three or four years, especially with regard to Windows containers. I was able to set up a working image using Windows containers: https://github.com/david-garcia-garcia/windowscontainers/blob/master/sqlserver2022k8s/readme.md
```
if (tid < n) {
    gain = in_degree[neigh] * out_degree[tid] + out_degree[neighbour] * in_degree[tid] / total_weight;
    // here, say, node 0 moves to community 2
    atomicExch(&node_community[0], node_community[2]); // because node 0 is in community 2 now
    atomicAdd(&in_degree[2], in_degree[0]);            // because node 0 is in community 2 now
    atomicAdd(&out_degree[2], out_degree[0]);          // because node 0 is in community 2 now
}
```

This is the process. The problem: during the calculation of the gain, all threads should see the updated value of community 2 (the values of 2 plus the values of 0), but some threads see only the previous value of 2. How can I solve that?

Here is the output:

```
node is: 0
node is: 1
node is: 2
node is: 3
node is: 4
node is: 5
// HERE IS THE PROBLEM (UPDATED VALUES ARE NOT VISIBLE TO THE REST OF THE THREADS THAT EXECUTED BEFORE THE ATOMIC WRITE)
updated_node is: 0           // this should be 2
updated_values are: 48,37    // this should be (48 + 15 (values of 2)) and (37 + 12 (values of 0))
```

I have tried using `__syncthreads()`, `__threadfence()`, and shared memory for reading and writing values. Can anyone tell me what the issue could be?
The CNN website includes a chart for each stock with a price forecast. I was unable to scrape the forecast figure with the following code. For example, the URL for Tesla stock is: https://edition.cnn.com/markets/stocks/TSLA

The code is:

```
Sub CNN_Data()

    Dim request As Object
    Dim response As String
    Dim html As New HTMLDocument
    Dim website As String
    Dim price As Variant

    stock = "TSLA"
    website = "https://edition.cnn.com/markets/stocks/" & stock

    Set request = CreateObject("MSXML2.XMLHTTP")
    request.Open "GET", website, False
    request.setRequestHeader "If-Modified-Since", "Sat, 1 Jan 2000 00:00:00 GMT"
    request.send
    response = StrConv(request.responseBody, vbUnicode)

    html.body.innerHTML = response
    price = html.getElementByClassName("high-price").Item(0).innerText

End Sub
```

I get an error message when the code reaches the line `price = html.getElementByClassName("high-price").Item(0).innerText`. The error message is: "Object doesn't support this property or method". What is wrong in my code? Can anyone suggest a fix?
To have `const` added automatically when you save a file in a Flutter project in Visual Studio Code, follow these steps:

1. Press <kbd>Ctrl</kbd> + <kbd>Shift</kbd> + <kbd>P</kbd>, then run "Preferences: Open User Settings (JSON)" to open the settings.json file.
2. Add this to the settings:

```json
"editor.codeActionsOnSave": {
    "source.fixAll": true
}
```
`require` is used to import packages and libraries, and sometimes even assets such as pictures and videos. It has almost the same usage as `import`. For example:

    import img from "../images/download.jpeg";

Likewise, in JSX:

    <img src={require("../images/download.jpeg")} />
I had the same error. In my case I had accidental spaces at the end of the version variable, for example:

```
buildscript {
    ext.appcompatVersion = '1.6.1 '   // <- trailing space
}
```

I removed the space from the end and it worked:

```
buildscript {
    ext.appcompatVersion = '1.6.1'
}
```
This XPath,

    //p[count(node())=1][em]

will select all `p` elements with a single child node that is an `em` element.

---
Explanation
---

- `//p` selects all `p` elements in the document.
- `[count(node())=1]` filters to only those `p` elements that have a single child `node()`. Since `node()` matches nodes of *any* type (including both element nodes and text nodes), it will ensure that only `p` elements with a single child of any type are selected.
- `[em]` filters to only those single-child `p` elements that have an `em` element child.

Therefore, for your input XML/HTML, only the targeted `p`,

    <p><em>The paragraph I want to capture</em></p>

will be selected.

Had there been another `p` with three `em` children,

    <p><em>Do</em><em>not</em><em>select</em></p>

or one `em` child and other element children,

    <p><em>Do</em><sup>not</sup><sub>select!</sub><span> or else!</span></p>

such `p` elements would *not* have been selected.

**Warning**: The XPath in the currently accepted answer, `//p[not(text())][em]`, however, would select such `p` elements, which did not appear to me to be your intention.
You can use the `numpy-stl` package to create a function that returns the vertices and faces. First run `pip install numpy-stl`, then use the following code:

    import numpy as np
    from stl import mesh

    def parse_stl(stl_file):
        # Load the STL file
        mesh_data = mesh.Mesh.from_file(stl_file)

        # Extract vertices and faces
        vertices = mesh_data.vectors.reshape((-1, 3))
        faces = np.arange(len(vertices)).reshape((-1, 3))

        return vertices, faces
You can use these approaches:

- Method 1: check whether the user is on a mobile device:

      const isMobile = /Android/i.test(navigator.userAgent);

- Method 2: check whether the screen width is less than 680 pixels:

      const isMobileWidth = window.innerWidth < 680;
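If these checks need to be unit-tested outside a browser, they can be pulled into pure functions that take the values as arguments. Note the extra user-agent tokens below are an assumption: real-world checks usually match more patterns than just `Android`:

```javascript
// Pure helpers: in a browser, pass navigator.userAgent / window.innerWidth.
function isMobileUserAgent(ua) {
  return /Android|iPhone|iPad|iPod|Mobile/i.test(ua);
}

function isMobileViewport(width) {
  return width < 680;
}

// In the browser:
// const mobile = isMobileUserAgent(navigator.userAgent) || isMobileViewport(window.innerWidth);
```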