|python|xml|xsd|python-xmlschema|
null
In my production Next.js application using Clerk for authentication, I am unable to access my authenticated session, even though the user is successfully created in Clerk and even successfully synced to my database via a webhook. It works on localhost, but in my server-side logs I see this error:

> ⨯ node_modules/@clerk/nextjs/dist/esm/server/createGetAuth.js (26:12) @ eval ⨯ Error: Clerk: auth() was called but Clerk can't detect usage of authMiddleware(). Please ensure the following: - authMiddleware() is used in your Next.js Middleware. - Your Middleware matcher is configured to match this route or page. - If you are using the src directory, make sure the Middleware file is inside of it.

I am not using /src, and my middleware.ts file is in the root directory of my project, alongside the app directory. I am also running Node v21.7.1. This is my middleware.ts:

```
import { authMiddleware } from "@clerk/nextjs";

export default authMiddleware({
  publicRoutes: ["/", "/sign-in", "/sign-up", "/api/webhooks(.*)", "/api/send(.*)", "/api/notification-test", "/api/upload-set(.*)"],
});

export const config = {
  matcher: ["/((?!.+\\.[\\w]+$|_next).*)", "/"],
};
```

In the browser console of my production application, I see this error:

> clerk.browser.js:2 POST https://clerk.mentis.chat/v1/client/sessions/sess_2duNOvIGQw4G5uEhJYncTlEKuc3/tokens?_clerk_js_version=4.70.5 404 (Not Found)
In my application, I have a bunch of DTOs, generally implemented as records. Various methods take these DTOs as parameters. The DTOs have quite a lot of properties, and when I'm unit testing the classes that use them, I don't always care about most of them. Quite often I want to be able to mock a DTO with only the properties that I care about set, in order to pass it to the system under test. The trouble is, as an empty DTO isn't useful in actual operation, they don't have parameterless constructors, and thus Moq can't instantiate them. I really don't want to have to set a couple of dozen random properties for each unit test, which I'd have to if I created real DTOs instead of mocking them, so what's the best approach here?
Mocking a record
|c#|unit-testing|mocking|nunit|moq|
I'm building the lambda on Ubuntu with the basic example. It builds without any errors, but if I upload and test it on AWS it crashes with:

```json
{
  "errorMessage": "RequestId: 7f4d0aca-125c-4032-98dd-9ff387e5252b Error: Runtime exited with error: exit status 1",
  "errorType": "Runtime.ExitError"
}
```

The log output is:

```
START RequestId: 7f4d0aca-125c-4032-98dd-9ff387e5252b Version: $LATEST.~.jwtauthorizeraws.jwtauthorizerawsapplication: /lib64/libc.so.6: version `GLIBC_2.32' not found (required by ./~.jwtauthorizerawsapplication)
END RequestId: 7f4d0aca-125c-4032-98dd-9ff387e5252b
REPORT RequestId: 7f4d0aca-125c-4032-98dd-9ff387e5252b Duration: 56.02 ms Billed Duration: 57 ms Memory Size: 128 MB Max Memory Used: 7 MB
RequestId: 7f4d0aca-125c-4032-98dd-9ff387e5252b Error: Runtime exited with error: exit status 1 Runtime.ExitError
```
{"OriginalQuestionIds":[42339876],"Voters":[{"Id":874188,"DisplayName":"tripleee","BindingReason":{"GoldTagBadge":"python"}}]}
You have installed Node separately, and that installation's path has higher precedence. First uninstall Node, then reinstall it using nvm.

### Linux Solution

Start by finding where node is installed using `which -a node`. Delete the path using `rm -rf <path>`, then install Node again using `nvm install node`. This installs the latest version of Node.

#### Pro tip

Set the version you want to use as default by putting `nvm alias default <version>` in your `~/.bash_profile` or `~/.bashrc`, depending on which you use. Now every time the terminal launches you will have the correct version.
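Putting the steps together (the `rm` path below is only an example — use whatever path `which -a node` actually reports on your machine):

```shell
which -a node                # list every node on the PATH, highest precedence first
rm -rf /usr/local/bin/node   # example path only: delete the separately-installed node found above
nvm install node             # reinstall the latest Node via nvm
nvm alias default node       # optional: make it the default for new shells
```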
null
I think the problem is that the funds are in your platform's balance. They need to be in your connected account's balance for you to make a payout. To clarify:

- A Payout is a transfer of funds between a Stripe account's balance and a bank account connected to that account.
- A Transfer is a transfer of funds between Stripe accounts' balances.

If you have `available` funds in your platform balance, you can make a payout to the bank account connected to your platform. If you want to make a payout to the bank account connected to your connected account, you need `available` funds in the connected account's balance. With that in mind:

- Check the balance for your platform (optional, you already did this): https://docs.stripe.com/api/balance/balance_retrieve
- Create a Transfer from your platform to your connected account: https://docs.stripe.com/api/transfers/create
- Check the balance for your connected account (same as above, with the `stripeAccount` header)
- Create your payout, with the `stripeAccount` header and the bank account / card ID as `destination` (`source_type` should be card but shouldn't need to be specified).
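As a rough sketch (not a drop-in implementation): assuming a Node backend using the official `stripe` client, the transfer-then-payout sequence above could look like this. The helper takes the initialized client as a parameter; `connectedAccountId`, `bankAccountId`, and the amounts are hypothetical placeholders.

```javascript
// Sketch of the flow described above: move funds from the platform balance
// into the connected account's balance, then pay out from there.
async function payoutToConnectedAccount(stripe, connectedAccountId, bankAccountId, amount) {
  // 1. Transfer funds from the platform balance to the connected account's balance
  await stripe.transfers.create({
    amount,
    currency: 'usd',
    destination: connectedAccountId,
  });

  // 2. Check the connected account's balance (note the stripeAccount option/header)
  const balance = await stripe.balance.retrieve({}, { stripeAccount: connectedAccountId });

  // 3. Pay out from the connected account's balance to its external bank account
  return stripe.payouts.create(
    { amount, currency: 'usd', destination: bankAccountId },
    { stripeAccount: connectedAccountId }
  );
}
```

The client is passed in rather than created inside the function, so the sequence can be exercised without an API key.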
I'm using `Make` (`Automake`) to compile and execute unit tests. However, these tests need to read and write test data. If I just specify a path, the tests only work from a specific directory. This is a problem: first, the test executables may be executed by `make` from a directory different from the one at compile time, and second, they should be executable even manually or in `VPATH` builds. Even using `builddir`, via config.h, isn't particularly useful, because it is of course evaluated at compile time rather than at runtime. What would be nice is if the builddir were passed at runtime instead. But would it be better to specify it via command arguments or via the environment? And is it "better" to specify individual files or a generic directory? I would consider a `PATH`-like search behaviour overkill for just a test, or would that be recommended? So the question is: how would I best specify the path to a test file, in terms of portability, interoperability, maintainability and common sense?
{"Voters":[{"Id":5014688,"DisplayName":"Ingo"},{"Id":6752050,"DisplayName":"273K","BindingReason":{"GoldTagBadge":"c++"}}]}
@Robin Zigmond (please verify this once.) **I think the flow would be like this:**

1. p1's setTimeout() starts, and after 10 seconds its callback goes to the callback queue.
2. p2's setTimeout() starts, and after 2 seconds its callback goes to the callback queue.
3. Earlier I thought that when execution reaches *val1.then((data)=>{console.log("P1 => "+data)})*, it moves the .then() callback to the microtask queue. (I think this is wrong; instead, the .then() callback of this line goes to the microtask queue only when p1 is resolved.)
4. Earlier I thought that when execution reaches *val2.then((data)=>{console.log("P2 => "+data)})*, it moves the .then() callback to the microtask queue. (I think this is wrong; instead, the .then() callback of this line goes to the microtask queue only when p2 is resolved.)
5. After 2 seconds, the setTimeout() callback goes to the callback queue, and since there isn't anything in the microtask queue, the event loop executes it by moving it to the call stack.
6. Since p2 got resolved in step 5, its callback [*val2.then((data)=>{console.log("P2 => "+data)})*] moves to the microtask queue and gets executed, i.e. *P2 => 36* gets printed.
7. After 10 seconds, the setTimeout() callback goes to the callback queue, and since there isn't anything in the microtask queue, the event loop executes it by moving it to the call stack.
8. Since p1 got resolved in step 7, its callback [*val1.then((data)=>{console.log("P1 => "+data)})*] moves to the microtask queue and gets executed, i.e. *P1 => 17* gets printed.
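The ordering described above can be demonstrated with a small self-contained snippet (shorter timeouts than the original 10 s / 2 s, but the same mechanics: a `.then()` callback is queued as a microtask only once its promise resolves, and pending microtasks always run before the next macrotask):

```javascript
const order = [];

// Promise that resolves from inside a 20 ms timer, like p2 in the question
const p = new Promise((resolve) => setTimeout(() => resolve(36), 20));
p.then((data) => order.push(`P => ${data}`)); // queued as a microtask only at ~20 ms

setTimeout(() => order.push('timeout 10ms'), 10);            // macrotask at ~10 ms
Promise.resolve('done').then(() => order.push('microtask')); // microtask, runs right after sync code
order.push('sync');                                          // synchronous, runs first

setTimeout(() => console.log(order.join(' | ')), 50);
// Expected: sync | microtask | timeout 10ms | P => 36
```

Note that the already-resolved promise's `.then()` runs before the 10 ms timer even though both were registered in the same tick — microtasks drain before the event loop takes the next callback-queue entry.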
Specific webhook event types, or `*` for all event types, need to be subscribed for each REST App for which they are desired. Up to 10 URLs can be subscribed per app. Webhook subscriptions can be managed either in the developer dashboard interface for the App, or using the [webhooks management API][1]. Your question does not indicate that you have set up subscriptions. [1]: https://developer.paypal.com/docs/api/webhooks/v1/#webhooks_post
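For example, a subscription for all event types could be created with a single call to the management API (the access token and listener URL below are placeholders):

```shell
curl -X POST https://api-m.paypal.com/v1/notifications/webhooks \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $ACCESS_TOKEN" \
  -d '{
        "url": "https://example.com/paypal/webhook",
        "event_types": [{ "name": "*" }]
      }'
```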
I am using Next.js and I have a route called `about` (its contents are displayed via `page.js`). Inside `/about/page.js`, I am showing a component called `<Hero />`, and I want to pass a prop, `<Hero heroHeight={'height: 400px'} />`, through to my `CSS module` for styling purposes.

**Hero.js**

```
'use client';
import styles from '../page.module.css';
import style from './Hero.module.css';

const Hero = ({ heroHeight }) => {
  return (
    <div className={`${styles.pageWidth} ${style[heroHeight.updateTheHeight]`}>
    </div>
  );
};

export default Hero;
```

**Hero.module.css**

```
.bgColor {
  border: 1px solid green;
  /* I want to pass the prop here which is height: 400px */
}
```

The code above is currently not working. I want to pass the value of the prop, which is **height: 400px**, to the **.bgColor** class. Do you know what the problem in my code is? Any help is greatly appreciated. Thanks!
How do I pass props to CSS module in React?
|javascript|html|css|reactjs|next.js|
First off, apologies that this is Win32 code. But please note that the OP has asked for code on the MS Windows platform. That said, this code CAN be ported to other platforms. Confidence is high that you only need to replace a handful of basic APIs used in this code:

```none
_findfirst64() -- start the file-finding process
_findnext64()  -- continue the file-finding process
_findclose()   -- end the file-finding process (usually mandatory)
```

...and the related data structure. Every platform has some form of these functions, yes?

Not much more needs to be said about the code. It is copiously commented and leverages printf() heavily so you can always see what's going on. When you go live you'll want to comment out all those printf()s.

The code will faithfully sum all the files on a drive (because it will recurse through *everything*) or it will act upon a single folder. It just depends on how you call get_dir_size() (please see its header notes).

This code is compiled and tested by me today. Output is an actual screen scrape of the console window while it ran.

```c
#define _CRT_SECURE_NO_DEPRECATE

#if 0
#define WIN32_LEAN_AND_MEAN
#include <windows.h>
#else
#define MAX_PATH 260
typedef int BOOL;
#define FALSE 0
#define TRUE 1
#endif

#include <string.h> /* strcmp() */
#include <stdio.h>  /* printf() */
#include <stdlib.h> /* malloc() */
#include <conio.h>  /* getch() */
#include <io.h>     /* findfirst(), findnext()... */

#define IS_SLASH_TOKEN(x) (((x)=='\\') || ((x)=='/'))
#define LAST_CHAR(s) (((char *)(s))[strlen((char *)(s)) - 1])

/*---------------------------------------------------------------------------------------------
 valid_directory_name() - helper for get_dir_size()

 Once upon a time directories beginning with dots were only references to the working or
 parent directories. But now software vendors are creating directories for file storage that
 may also begin with a dot. This function will discern whether a directory is for storage
 or is a system reference.

 2022.12.02 Created
 2023.07.25 Code streamlined but logic unchanged
*--------------------------------------------------------------------------------------------*/
BOOL valid_directory_name(const char *psdir)
{
    int iii = -1;

    if(psdir)
    {
        /* Is this a dot (.) (working directory) or dot dot (..) parent directory entry?
         * Or is it a legit directory that merely begins with a dot or two ? */
        while(psdir[++iii] == '.');
        return (psdir[iii] != '\0');
    }
    return FALSE;
}

/*----------------------------------------------------------------------------
 dir_strcat() - helper for get_dir_size()

 Properly combines two directory names to form a path, caring for the slash
 token ('\') as rqr'd. Make this a forward slash for Unix.
*---------------------------------------------------------------------------*/
void dir_strcat(char *dest, const char *upperDir, const char *lowerDir)
{
    if(IS_SLASH_TOKEN(LAST_CHAR(upperDir)))
        sprintf(dest,"%s%s",  upperDir,lowerDir);
    else
        sprintf(dest,"%s\\%s",upperDir,lowerDir);
}

/*------------------------------------------------------------------------------------------
 get_dir_size()

 Given a folder name, recurses into the folder summing all the file sizes found.
 If the folder contains a sub folder, that folder is parsed too. This continues until no
 more sub folders are found and all the file sizes have been summed.

 Call with a valid path and the long long sum parameter set to zero, ie:

    size = get_dir_size("myfolder", 0 );

 This is also legit (start at root). This will sum all the file sizes on the drive:

    size = get_dir_size("\\", 0 );

 RETURNS: sum of all file sizes found in the folder.
*-------------------------------------------------------------------------------------------*/
long long get_dir_size(char *path, long long sum)
{
    char *wholepath = malloc(MAX_PATH+1);
    struct _finddatai64_t sd_fdata ={0};
    intptr_t handle;

    if( wholepath )
    {
        printf("ENTRY: get_dir_size(%s) entry, malloc OK.\n", path);
        printf("\tCurrent sum %.3f MB.\n",((double)sum/(1024.0*1024.0)));

        dir_strcat(wholepath, path, "*.*");

        handle = _findfirst64(wholepath, &sd_fdata);
        if(handle != -1)
        {
            do
            {
                if((sd_fdata.attrib & _A_SUBDIR) == _A_SUBDIR)
                {   /* If this is a directory ...*/
                    if(valid_directory_name(sd_fdata.name))
                    {   /* ...and this is a valid directory, recurse into it. */
                        dir_strcat(wholepath, path, sd_fdata.name);
                        if(LAST_CHAR(wholepath) != '\\')
                            strcat(wholepath, "\\");
                        sum = get_dir_size( wholepath, sum);
                    }
                }
                else /* This is a file. Add its size to the running sum */
                {
                    printf("get_dir_size() adding size %.3f KB for file %s\n",
                           (double)sd_fdata.size/1024.0, sd_fdata.name);
                    sum += sd_fdata.size;
                }
            }while(!_findnext64(handle, &sd_fdata));
            _findclose(handle);
        }
        else
        {
            printf("get_dir_size(%s) BAD HANDLE!!!!\n", path);
        }
        free(wholepath);
    }
    else
    {
        printf("get_dir_size(%s) MEMORY ALLOCATION ERROR!!!!\n", path);
    }
    printf("get_dir_size(%s) Returning sum: %.3f MB.\n", path, (double)sum/(1024.0*1024.0));
    return sum;
}

/*----------------------------------------------------------------------------
 main()
*---------------------------------------------------------------------------*/
int main()
{
    char path_only[MAX_PATH] = "F:\\PROJECTS\\32bit\\_HELP\\C\\H160";
    long long current_space_used;

    printf("\nStarting path will be: [%s]\n", path_only);
    current_space_used = get_dir_size( path_only, 0 );
    printf("\n-----------> This path currently uses %.3lf MB <------------ \n",
           current_space_used/(1024.0*1024.0));
    _getch();
    return 0;
}
```

Output:

```none
Starting path will be: [F:\PROJECTS\32bit\_HELP\C\H160]
ENTRY: get_dir_size(F:\PROJECTS\32bit\_HELP\C\H160) entry, malloc OK.
	Current sum 0.000 MB.
get_dir_size() adding size 0.291 KB for file CLEAN.bat
get_dir_size() adding size 147.000 KB for file CODE.bsc
get_dir_size() adding size 0.509 KB for file CODE.dat
get_dir_size() adding size 40.000 KB for file CODE.exe
get_dir_size() adding size 326.688 KB for file CODE.ilk
get_dir_size() adding size 11.000 KB for file CODE.ncb
get_dir_size() adding size 299.000 KB for file CODE.pdb
get_dir_size() adding size 0.852 KB for file CODE.sln
get_dir_size() adding size 3.956 KB for file CODE.vcproj
get_dir_size() adding size 1.380 KB for file CODE.vcproj.HPPB2.hello.user
ENTRY: get_dir_size(F:\PROJECTS\32bit\_HELP\C\H160\Debug\) entry, malloc OK.
	Current sum 0.811 MB.
get_dir_size() adding size 6.912 KB for file BuildLog.htm
get_dir_size() adding size 0.394 KB for file CODE.exe.embed.manifest
get_dir_size() adding size 0.457 KB for file CODE.exe.embed.manifest.res
get_dir_size() adding size 0.376 KB for file CODE.exe.intermediate.manifest
get_dir_size() adding size 11.710 KB for file MAIN.obj
get_dir_size() adding size 0.000 KB for file MAIN.sbr
get_dir_size() adding size 0.061 KB for file mt.dep
get_dir_size() adding size 27.000 KB for file vc80.idb
get_dir_size() adding size 60.000 KB for file vc80.pdb
get_dir_size(F:\PROJECTS\32bit\_HELP\C\H160\Debug\) Returning sum: 0.916 MB.
get_dir_size() adding size 0.191 KB for file EDIT_ALL.bat
get_dir_size() adding size 7.249 KB for file MAIN.c
get_dir_size() adding size 0.041 KB for file run.bat
ENTRY: get_dir_size(F:\PROJECTS\32bit\_HELP\C\H160\Versions\) entry, malloc OK.
	Current sum 0.923 MB.
get_dir_size() adding size 13.497 KB for file CODE.001.zip
get_dir_size(F:\PROJECTS\32bit\_HELP\C\H160\Versions\) Returning sum: 0.936 MB.
get_dir_size(F:\PROJECTS\32bit\_HELP\C\H160) Returning sum: 0.936 MB.

-----------> This path currently uses 0.936 MB <------------
```
{"OriginalQuestionIds":[66448947],"Voters":[{"Id":6851825,"DisplayName":"Jon Spring","BindingReason":{"GoldTagBadge":"ggplot2"}}]}
In such situations I use the following checklist:

```
1. git init
2. git remote add origin <REMOTE-REPO-URL>
3. Write a good `.gitignore` file.
4. git add --all
5. git commit -m "Initial commit"
6. git push -u origin <branch>
```
You can use context instead of request and create your own context like this: ``` import jakarta.servlet.http.HttpServletRequest import org.apache.commons.fileupload.FileUploadBase import org.apache.commons.fileupload.UploadContext import java.io.InputStream class JakartaServletRequestContext(val request: HttpServletRequest): UploadContext { override fun getCharacterEncoding(): String { return request.characterEncoding } override fun getContentType(): String { return request.contentType } override fun getContentLength(): Int { return request.contentLength } override fun getInputStream(): InputStream { return request.inputStream } override fun contentLength(): Long { return try { request.getHeader(FileUploadBase.CONTENT_LENGTH).toLong() } catch (e: NumberFormatException) { request.contentLength.toLong() } } } ``` and then use it like ``` @PostMapping("/send") fun handleIn(request: HttpServletRequest): String { val context = JakartaServletRequestContext(request) if (!ServletFileUpload.isMultipartContent(context)) { throw RuntimeException("Multipart request expected") } fileService.upload(ServletFileUpload().getItemIterator(context)) return "OK" } ``` Where you can find the code for the service in this example: https://github.com/nielsutrecht/spring-fileservice-example/blob/master/src/main/java/com/nibado/example/fileservice/FileService.java
SSO to Grafana embeded in iframe
|authentication|iframe|single-sign-on|grafana|
null
I'm currently overseeing the management of a medium-scale application, which operates with a Flutter frontend and a Node.js backend. However, we've encountered an issue wherein response times from our APIs are occasionally unacceptably lengthy. Upon investigation, I discovered that when Node.js is occupied with time-intensive processes, it delays responding to subsequent requests until the ongoing process concludes. For instance, if a process is underway, such as sending notifications to a large batch of users or compressing a video, any concurrent API requests remain in a loading state until the prior task completes. Once the notification dispatch or video compression concludes, the API promptly provides a response to the pending user request. To address this challenge and ensure a consistently positive user experience, I am seeking solutions to preemptively mitigate such delays in our system. Your insights and guidance on resolving this issue would be greatly appreciated. Thank you.
How to handle multiple request at same time in Node.js?
|node.js|flutter|api|
When attempting to merge audio with video, I notice a loss of video transparency in the resulting merged video. **How can I maintain the video's transparency?** **Video details:** - AVVideoCodecKey: AVVideoCodecType.hevcWithAlpha - fileType: .mov **Audio details:** - fileType: .m4a **Merging code:** ``` func mergeMovieWithAudio(movieUrl: URL, audioUrl: URL, success: @escaping ((URL) -> Void), failure: @escaping ((Error?) -> Void)) async { let mixComposition: AVMutableComposition = AVMutableComposition() var mutableCompositionVideoTrack: [AVMutableCompositionTrack] = [] var mutableCompositionAudioTrack: [AVMutableCompositionTrack] = [] let totalVideoCompositionInstruction : AVMutableVideoCompositionInstruction = AVMutableVideoCompositionInstruction() let aVideoAsset: AVAsset = AVAsset(url: movieUrl) let aAudioAsset: AVAsset = AVAsset(url: audioUrl) guard let videoTrack = mixComposition.addMutableTrack(withMediaType: .video, preferredTrackID: kCMPersistentTrackID_Invalid) else { return } guard let audioTrack = mixComposition.addMutableTrack(withMediaType: .audio, preferredTrackID: kCMPersistentTrackID_Invalid) else { return } mutableCompositionVideoTrack.append(videoTrack) mutableCompositionAudioTrack.append(audioTrack) do { guard let aVideoAssetTrack = try await aVideoAsset.loadTracks(withMediaType: .video).first else { return } guard let aAudioAssetTrack = try await aAudioAsset.loadTracks(withMediaType: .audio).first else { return } let aVideoTimeRange = try await aVideoAssetTrack.load(.timeRange) try mutableCompositionVideoTrack.first?.insertTimeRange(aVideoTimeRange, of: aVideoAssetTrack, at: CMTime.zero) try mutableCompositionAudioTrack.first?.insertTimeRange(aVideoTimeRange, of: aAudioAssetTrack, at: CMTime.zero) videoTrack.preferredTransform = try await aVideoAssetTrack.load(.preferredTransform) totalVideoCompositionInstruction.timeRange = aVideoTimeRange let mutableVideoComposition: AVMutableVideoComposition = AVMutableVideoComposition() 
mutableVideoComposition.frameDuration = try await aVideoAssetTrack.load(.minFrameDuration) mutableVideoComposition.renderSize = try await aVideoAssetTrack.load(.naturalSize) } catch { print(error.localizedDescription) } guard let outputURL = makeFileOutputURL(fileName: "movie.mov") else { return } if let exportSession = AVAssetExportSession(asset: mixComposition, presetName: AVAssetExportPresetHighestQuality) { exportSession.outputURL = outputURL exportSession.outputFileType = AVFileType.mov exportSession.shouldOptimizeForNetworkUse = true await exportSession.export() switch exportSession.status { case .failed: if let _error = exportSession.error { failure(_error) } case .cancelled: if let _error = exportSession.error { failure(_error) } default: success(outputURL) } } else { failure(nil) } } ``` ``` func makeFileOutputURL(fileName: String) -> URL? { do { var cachesDirectory: URL = try FileManager.default.url(for: .cachesDirectory, in: .userDomainMask, appropriateFor: nil, create: true) cachesDirectory.appendPathComponent(fileName) if FileManager.default.fileExists(atPath: cachesDirectory.path) { try FileManager.default.removeItem(atPath: cachesDirectory.path) } return cachesDirectory } catch { print(error) return nil } } ```
Merging sound with video causes the loss of video transparency
|swift|avfoundation|
My solution was to edit the value of the key field via `put`:

```
JSONObject objectJson = new JSONObject(gson().toJson(verbaDetalhe));
objectJson.put("Verba", verbaEditada);
```
null
null
null
`ngx.escape_uri` has an option to escape a full URI:

> syntax: newstr = ngx.escape_uri(str, type?)
> Since v0.10.16, this function accepts an optional type argument. It accepts the following values (defaults to 2):
>
> **0: escapes str as a full URI.** [...]
>
> 2: escape str as a URI component. [...]

Nonetheless, `ngx.escape_uri(str, 0)` is still not the same as `encodeURI`. Fortunately, it's easy to write your own function that produces exactly the same output as `encodeURI`:

```lua
local function _encode_uri_char(char)
  -- %02X pads to two hex digits, as percent-encoding requires
  return string.format('%%%02X', string.byte(char))
end

local function encode_uri(uri)
  return (string.gsub(uri, "[^%a%d%-_%.!~%*'%(%);/%?:@&=%+%$,#]", _encode_uri_char))
end
```

```lua
-- prints /test/%5Bhello%5D/
ngx.say(encode_uri('/test/[hello]/'))
```

The `encode_uri` function escapes all characters except:

```
A–Z a–z 0–9 - _ . ! ~ * ' ( ) ; / ? : @ & = + $ , #
```

as described [here](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/encodeURI).
I migrated my Java Spring Boot application from Vaadin 23 to Vaadin 24 (and also Spring Boot from 2.7.5 to 3.2). After starting the application I get the following error. Can someone help me fix it?

```
...
2024-03-26T13:48:48.980+01:00 INFO 17112 --- [ restartedMain] c.d.VaadinApplication : Started VaadinApplication in 14.479 seconds (process running for 15.9)
2024-03-26T13:49:13.716+01:00 ERROR 17112 --- [onPool-worker-1] c.v.f.s.frontend.TaskRunDevBundleBuild : Command `C:\Program Files\dev\node_js20\node.exe C:\Prog\dev\git\vaadin-app\node_modules\vite\bin\vite.js build` failed:

vite v5.1.1 building for production...
transforming...
3 modules transformed.
error during build:
Error: [vaadin:build-sw] Transform failed with 1 error:
C:\Prog\dev\git\vaadin-app\node_modules\esbuild\lib\main.js:241:12: error: Invalid option in transform() call: "jsxDev"
file: C:/Prog/dev/git/vaadin-app/target/sw.ts:241:12

Invalid option in transform() call: "jsxDev"
    at failureErrorWithLog (C:\Prog\dev\git\vaadin-app\node_modules\esbuild\lib\main.js:1475:15)
    at C:\Prog\dev\git\vaadin-app\node_modules\esbuild\lib\main.js:1298:20
    at C:\Prog\dev\git\vaadin-app\node_modules\esbuild\lib\main.js:611:9
    at handleIncomingPacket (C:\Prog\dev\git\vaadin-app\node_modules\esbuild\lib\main.js:708:9)
    at Socket.readFromStdout (C:\Prog\dev\git\vaadin-app\node_modules\esbuild\lib\main.js:578:7)
    at Socket.emit (node:events:518:28)
    at addChunk (node:internal/streams/readable:559:12)
    at readableAddChunkPushByteMode (node:internal/streams/readable:510:3)
    at Readable.push (node:internal/streams/readable:390:5)
    at Pipe.onStreamRead (node:internal/stream_base_commons:190:23)
```

Thanks in advance for any help!

New Maven build: mvn clean install
I have a problem that's driving me crazy. After installing many ad-free YouTube alternatives, my media players (VLC, the phone's built-in one, and another one) stop playing and give me errors or ads. I've uninstalled everything, including every YouTube app, but the problem persists. What can I do?
Media player stops about every 5 minutes
|video|vlc|
null
You can also try to use the icon IDs rather than the human-readable names: some icons' names overlap with other icons, and if you use multiple icons you might end up downloading more icons than the ones you need. E.g. something like this:

```
<link rel="stylesheet"
      href="https://fonts.googleapis.com/css2?family=Material+Symbols+Sharp:opsz,wght,FILL,GRAD@36,300,0,0&text=&#xe853,&#xe834,&#xe872,&#xea5c,&#xea5b,&#xe9ba,&#xe887,&#xe158,&#xe8b8,&#xe3f4,&#xe8f4">
```
I am trying to scrape a multi-page PDF using Textract. I need to scrape the PDF and format it to JSON based on its sections, sub-sections, and tables.

When trying the UI demo with LAYOUT and Table, it is able to show the layout title, layout section, layout text, layout footer, and page number exactly. The same info can be observed in the CSV downloaded from the UI demo (layout.csv), and likewise in the JSON file (analyzeDocResponse.json), though the JSON has everything (LINES, WORDS, LAYOUT_TITLE, and all layout-related data); I think Textract handles all block types in sequence.

For debugging purposes, I am using the code below to print the entire dictionary of each block, and also the block type followed by its corresponding text.

If interested in the PDF file, it's an SmPC of a drug: [SmPC file](https://www.nafdac.gov.ng/wp-content/uploads/Files/SMPC/Covid19/Pfizer-BioNTech-SmPC-For-Covid-19-Vaccine.pdf)

Code 1: printing each block in JSON format.

```
def start_textract_job(bucket, document):
    response = textract.start_document_analysis(
        DocumentLocation={
            'S3Object': {
                'Bucket': bucket,
                'Name': document
            }
        },
        FeatureTypes=["LAYOUT"]  # You can adjust the FeatureTypes based on your needs
    )
    return response['JobId']

def print_blocks(job_id):
    next_token = None
    while True:
        if next_token:
            response = textract.get_document_analysis(JobId=job_id, NextToken=next_token)
        else:
            response = textract.get_document_analysis(JobId=job_id)
        for block in response.get('Blocks', []):
            print(json.dumps(block, indent=4))
        next_token = response.get('NextToken', None)
        if not next_token:
            break
```

It prints similar info to the UI demo: block types LINES, WORDS, LAYOUT_*. But if I try to print the text for each block type using the code below, it fails to print anything for the LAYOUT_* blocks, and I am not sure why. Am I missing anything?

Code 2: printing the block type followed by its content.

```
# start_textract_job is the same as above, with LAYOUT
def print_blocks(job_id):
    next_token = None
    while True:
        if next_token:
            response = textract.get_document_analysis(JobId=job_id, NextToken=next_token)
        else:
            response = textract.get_document_analysis(JobId=job_id)
        for block in response.get('Blocks', []):
            print(f"{block['BlockType']}: {block.get('Text', '')}")
        next_token = response.get('NextToken', None)
        if not next_token:
            break
```

I can see values for block types LINES and WORDS, but they come up empty for the LAYOUT_* types, as below. I think it identifies the block types but not their values:

```
LAYOUT_TITLE:
LAYOUT_FIGURE:
LAYOUT_TEXT:
LAYOUT_SECTION_HEADER:
LAYOUT_TEXT:
LAYOUT_SECTION_HEADER:
LAYOUT_TEXT:
LAYOUT_TEXT:
LAYOUT_TEXT:
LAYOUT_TEXT:
LAYOUT_TEXT:
LAYOUT_PAGE_NUMBER:
LAYOUT_FOOTER:
```

Any help is highly appreciated. I went through the docs and a few other Stack Overflow questions but couldn't find any help. New to Textract, sorry if this is a noob question :)
Thank you, Taranjeet Singh, it now works fine in my local environment. But to my surprise it works differently after uploading to my online server: the temporary div is not shown there. I tried `ini_set('output_buffering', '4096');` in combination with `echo str_pad($wait, 4096, ' ');` after the temporary div is written. This works fine in the local environment, but the online server does not show the temporary div, only the full tree after loading has finished.
{"Voters":[{"Id":1235698,"DisplayName":"Marcin Orlowski"}]}
I guess you're looking for the `plt.xticks()` function (alternatively: `ax.set_xticks()`). In your code you could add the following line:

```
ax.set_xticks(ticks=np.linspace(0, 20, 5), labels=np.linspace(0, 1, 5))
```

Where `ticks` corresponds to the values in your data, and `labels` corresponds to the values you display on the axis.
I am trying to create a new dataframe from an existing dataframe with a `groupby()`, a `count()`, and a `to_frame()`. I am getting `AttributeError: 'DataFrame' object has no attribute 'to_frame'` after adding `as_index=False` to the groupby. This is the code:

```
newdat = indat.query('-1017 <= WDIR16 <= -1000')
newdat.reset_index(drop=True, inplace=True)
newdat.sort_values(by=['YEAR', 'MO', 'GP', 'HR'], inplace=True)

# Find Count
w1 = newdat.groupby(['YEAR','MO', 'GP','HR'], as_index=False)["WDIR16"].count().to_frame(name='wndclimodirectionobsqty').reset_index()

# Find Means
w1['wndclimomeanspeedrate'] = newdat.groupby(['YEAR','MO', 'GP', 'HR'], as_index=False).aggregate({'WSPD':'mean'}, as_index=False).values
```

The error occurs on the `to_frame` line. The reason I am using `as_index=False` in the groupby is because sometimes the existing dataframe can be empty except for its columns. Reference: https://stackoverflow.com/questions/46090386/keep-columns-after-a-groupby-in-an-empty-dataframe

If I leave out the `as_index=False`, the line with the `to_frame` works. BUT, if the dataframe is empty on a groupby, the empty columns do not move over to the new dataframe. Any ideas?
Using groupby as_index=False, count, to_frame gives 'Dataframe' object has no attribute to_frame
|python|pandas|dataframe|group-by|
I am writing code in Qt 6, using the MVC pattern. In the Rectangle delegate I specified the `x: 100` property (line 103), which was supposed to move the rectangle 100 pixels to the right. This works in Qt 5, but in Qt 6 the position of the rectangle remains unchanged, which is why I am forced to use anchors, and they are not very performant. Why does this problem occur, and how can I fix it?

```
ListView {
    id: viewMessage
    width: _writeMes.x + _writeMes.width
    height: 280
    spacing: _page.margin
    model: _messageModel
    delegate: Rectangle {
        height: 40
        width: viewMessage.width - 150
        //anchors.left: isMyMessage ? undefined : parent.left
        //anchors.right: isMyMessage ? parent.right : undefined
        x: isMyMessage ? 100 : 0
        radius: 10
        color: isMyMessage ? _page.myMessageColor : _page.serverMessageColor

        property bool isMyMessage: model.sendBy === _ws.myId

        Label {
            x: 10
            color: _page.textColor
            text: 'id: ' + model.sendBy
        }
        Label {
            x: 10
            y: 15
            color: _page.textColor
            text: model.text
        }
    }
}
```

As an alternative I use anchors; this gives the desired effect at the expense of performance. I want to repeat: on Qt 5 it works.
WCF to WCFCore - Help Menu
|c#|wcf|menu|
null
How do I skip dates that are not in the data file so pyplot does not have these wide gaps between them? I would like the gaps to be equal, for example, between 01-06 and 01-09. Example graph: ![Example graph](https://i.stack.imgur.com/WXLGe.png) My plotting function looks like this at the moment:

```
def plot_data(data, column_names, width, height, y_max, y_min, x_label, y_label,
              title, interval, format, precision, markers, colors, legend):
    # Stretching out the graph for better visibility
    plt.figure(figsize = (width, height))

    # Creating line plots; X-axis is for index, Y-axis is for price values
    # Adding labels, colors, markers for each of the plots
    for column_name in column_names:
        plt.plot(data.index, data[column_name],
                 label = column_name,
                 marker = markers[column_names.index(column_name)],
                 color = colors[column_names.index(column_name)])

    # Setting X- and Y-axes labels
    plt.xlabel(x_label)
    plt.ylabel(y_label)

    # Setting title for the graph
    plt.title(title)

    # Enabling grid for the graph
    plt.grid()

    # Enabling legend for the graph
    if(legend == True):
        plt.legend()

    # Setting X-axis limits to start and end of the wanted interval
    plt.xlim(data.index[0], data.index[-1])

    # Adding X-axis ticks for every interval
    plt.xticks(data.index)

    # Formatting the X-axis tick labels to a needed time format
    plt.gca().xaxis.set_major_formatter(format)

    # Adjusting the height and precision of Y-axis
    plt.yticks(np.arange(y_min, y_max, step = precision))

    # Automatic rotation of tick labels so they do not overlap
    plt.gcf().autofmt_xdate()

    # Displaying the graph
    plt.show()
```
I'm facing difficulty integrating my Python script to work with a React website. I have been looking for resources but couldn't find anything helpful, since most of them revolve around saving a model.

```
import cv2
import mediapipe as mp
import numpy as np
import uuid
import os

imposed_image = cv2.imread('sampleData/watch.png')
image = cv2.resize(imposed_image, (170, 170))
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
_, mask = cv2.threshold(gray, 240, 255, cv2.THRESH_BINARY_INV)
rgba_image = cv2.cvtColor(image, cv2.COLOR_BGR2BGRA)
rgba_image[:, :, 3] = mask

mp_drawing = mp.solutions.drawing_utils
mp_hands = mp.solutions.hands

cap = cv2.VideoCapture(0)
with mp_hands.Hands(min_detection_confidence=0.8, min_tracking_confidence=0.5) as hands:
    while cap.isOpened():
        ret, frame = cap.read()
        frame = cv2.flip(frame, 1)
        image = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        image.flags.writeable = False
        results = hands.process(image)
        image.flags.writeable = True
        image = cv2.cvtColor(image, cv2.COLOR_RGB2BGR)
        # print(results)
        if results.multi_hand_landmarks:
            for num, hand in enumerate(results.multi_hand_landmarks):
                mp_drawing.draw_landmarks(image, hand, mp_hands.HAND_CONNECTIONS,
                                          mp_drawing.DrawingSpec(color=(121, 22, 76), thickness=2, circle_radius=4),
                                          mp_drawing.DrawingSpec(color=(250, 44, 250), thickness=2, circle_radius=2),
                                          )
                hand_landmarks = results.multi_hand_landmarks[num]  # Corrected indexing

                # Calculate the pixel coordinates of landmark[0] in the frame
                x_pixel = int(hand_landmarks.landmark[0].x * frame.shape[1])
                y_pixel = int(hand_landmarks.landmark[0].y * frame.shape[0])

                image_to_impose = cv2.resize(rgba_image, (140, 140))
                x, y = x_pixel, y_pixel
                x -= 100
                y -= 80

                # Resize imposed image to fit within bounding box around landmark[0]
                h, w, _ = image_to_impose.shape
                for c in range(0, 3):
                    try:
                        frame[y:y + h, x:x + w, c] = frame[y:y + h, x:x + w, c] * (1 - image_to_impose[:, :, 3] / 255.0) + \
                                                     image_to_impose[:, :, c] * (image_to_impose[:, :, 3] / 255.0)
                    except:
                        pass

        cv2.imshow('watch', frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break

cap.release()
cv2.destroyAllWindows()
```

I did some searching with ChatGPT but didn't get any useful answer.
I know this is old, but I have a similar setup and struggled with the same problems. After reading the docs and a lot of trial and error, here's how I handled these issues. Happy to get input on a better way to do it if anyone out there has a better understanding.

For your first question, I have a `Restart` exception:

```python
class Restart(BaseException):
    pass
```

A task calling for a restart raises this exception. The `TaskGroup` is wrapped in a `try` block and the following `except` block:

```python
except* Restart:
    log.warning("Tasks canceled from restart command")
```

For your second question, I have each task handling its own exceptions internally. Only unhandled exceptions bubble out and cancel everything.
When I publish my application, there is no problem on some phones, but compatibility problems occur on others. I have shared the manifest and Gradle files below; I don't understand what I did wrong. There is no problem in the Play Developer device catalogue. [Error Image](https://i.stack.imgur.com/FRPK8.png)

Manifest.xml

```
<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:tools="http://schemas.android.com/tools">

    <uses-permission android:name="android.permission.VIBRATE" />

    <application
        android:allowBackup="true"
        android:configChanges="locale|orientation"
        android:icon="@mipmap/ic_launcher"
        android:label="@string/app_name"
        android:supportsRtl="true"
        android:theme="@style/AppTheme.NoActionBar"
        android:localeConfig="@xml/locales_config"
        tools:targetApi="tiramisu">

        <activity
            android:name=".MainActivity"
            android:configChanges="locale|orientation"
            android:exported="true"
            android:screenOrientation="fullSensor">
            <intent-filter>
                <action android:name="android.intent.action.MAIN" />
                <category android:name="android.intent.category.LAUNCHER" />
            </intent-filter>
        </activity>

        <service
            android:name="androidx.appcompat.app.AppLocalesMetadataHolderService"
            android:enabled="false"
            android:exported="false">
            <meta-data
                android:name="autoStoreLocales"
                android:value="true" />
        </service>

        <meta-data
            android:name="com.google.android.gms.ads.APPLICATION_ID"
            android:value="ca-app-pub-9937060478156830~6086699458" />
        <meta-data
            android:name="com.google.android.gms.games.APP_ID"
            android:value="@string/app_id" />
    </application>

</manifest>
```

build.gradle

```
plugins {
    id 'com.android.application'
    id 'com.google.gms.google-services'
}

android {
    namespace 'com.word.lingo'
    compileSdk 34

    defaultConfig {
        applicationId "com.word.lingo"
        minSdk 21
        //noinspection EditedTargetSdkVersion
        targetSdk 34
        versionCode 21
        versionName "1.0.0"

        testInstrumentationRunner "androidx.test.runner.AndroidJUnitRunner"
        resourceConfigurations += ["en", "tr", "fr", "de", "es", "it"]
    }

    buildTypes {
        release {
            minifyEnabled false
            proguardFiles getDefaultProguardFile('proguard-android-optimize.txt'), 'proguard-rules.pro'
        }
    }
    compileOptions {
        sourceCompatibility JavaVersion.VERSION_1_8
        targetCompatibility JavaVersion.VERSION_1_8
    }
    buildFeatures {
        viewBinding true
    }
    androidResources {
        // generateLocaleConfig true
    }
    bundle {
        language {
            enableSplit = false
        }
    }
}

dependencies {
    implementation 'androidx.appcompat:appcompat:1.6.1'
    implementation 'com.google.android.material:material:1.11.0'
    implementation 'androidx.constraintlayout:constraintlayout:2.1.4'
    testImplementation 'junit:junit:4.13.2'
    androidTestImplementation 'androidx.test.ext:junit:1.1.5'
    androidTestImplementation 'androidx.test.espresso:espresso-core:3.5.1'
    implementation 'nl.dionsegijn:konfetti-compose:2.0.4'
    implementation 'nl.dionsegijn:konfetti-xml:2.0.4'
    implementation 'com.github.mmoamenn:LuckyWheel_Android:0.3.0'
    implementation 'com.google.android.gms:play-services-ads:23.0.0'
    implementation "com.android.billingclient:billing:6.2.0"
    implementation 'com.google.firebase:firebase-messaging:23.4.1'
    implementation 'com.anjlab.android.iab.v3:library:2.0.3'
    implementation 'com.google.android.play:integrity:1.3.0'
    implementation "com.google.android.gms:play-services-games-v2:19.0.0"
    implementation platform('com.google.firebase:firebase-bom:32.7.4')
    implementation 'com.google.firebase:firebase-analytics'
}
```
null
I am using owl-carousel, and I would like to make the last visible item a little blurry. I made a mock-up in Photoshop of what I want. [![enter image description here](https://i.stack.imgur.com/AJrqf.jpg)](https://i.stack.imgur.com/AJrqf.jpg)

```
<div class="owl-carousel owl-theme owl_carousel_1">
    <div class="item">
        <a href="/hir/5/utalassal-is-fizetheto-az-egeszsegugyi-szolgaltatasi-jarulek" class="main_hir_img_link" title="Utalással is fizethető az egészségügyi szolgáltatási járulék">
            <img class="img-responsive main_news_thumb radius" src="/images/news/th-44555-71.jpg" alt="Utalással is fizethető az egészségügyi szolgáltatási járulék">
        </a>
        <h3 class="event_title my-2"><a href="/hir/5/utalassal-is-fizetheto-az-egeszsegugyi-szolgaltatasi-jarulek" title="Utalással is fizethető az egészségügyi szolgáltatási járulék" class="event_title_link text-black">Utalással is fizethető az egészségügyi szolgáltatási járulék</a></h3>
        <p class="event_desc">2024-ben havi 11 300 forintot kell fizetniük azoknak, akik nem biztosítottak és más módon sem jogosultak az egészségügyi ellátásra.</p>
        <a href="/hir/5/utalassal-is-fizetheto-az-egeszsegugyi-szolgaltatasi-jarulek" class="news_list_to_link d-block text-black" title="Utalással is fizethető az egészségügyi szolgáltatási járulék">Elolvasom <i class="fa fa-chevron-right ml-1"></i></a>
    </div>
    <div class="item">
        <a href="/hir/4/2024--evi-nyitvatartasi-ido-pecel-varos-ovodaiban" class="main_hir_img_link" title="2024. évi nyitvatartási idő Pécel Város Óvodáiban">
            <img class="img-responsive main_news_thumb radius" src="/images/news/th-44-283.jpg" alt="2024. évi nyitvatartási idő Pécel Város Óvodáiban">
        </a>
        <h3 class="event_title my-2"><a href="/hir/4/2024--evi-nyitvatartasi-ido-pecel-varos-ovodaiban" title="2024. évi nyitvatartási idő Pécel Város Óvodáiban" class="event_title_link text-black">2024. évi nyitvatartási idő Pécel Város Óvodáiban</a></h3>
        <p class="event_desc">Tájékoztatjuk a Tisztelt Szülőket, hogy a 2024. évben Pécel Város Óvodáiban (PVO) az 1. a heti nyitvatartási idő 60 óra, munkanapokon 6:00 óra és 18:00 óra között.</p>
        <a href="/hir/4/2024--evi-nyitvatartasi-ido-pecel-varos-ovodaiban" class="news_list_to_link d-block text-black" title="2024. évi nyitvatartási idő Pécel Város Óvodáiban">Elolvasom <i class="fa fa-chevron-right ml-1"></i></a>
    </div>
    <div class="item">
        <a href="/hir/3/meg-egy-honapig-kerheto-az-szja-bevallasi-tervezetek-postazasa" class="main_hir_img_link" title="Még egy hónapig kérhető az szja-bevallási tervezetek postázása">
            <img class="img-responsive main_news_thumb radius" src="/images/news/th-33-882.jpg" alt="Még egy hónapig kérhető az szja-bevallási tervezetek postázása">
        </a>
        <h3 class="event_title my-2"><a href="/hir/3/meg-egy-honapig-kerheto-az-szja-bevallasi-tervezetek-postazasa" title="Még egy hónapig kérhető az szja-bevallási tervezetek postázása" class="event_title_link text-black">Még egy hónapig kérhető az szja-bevallási tervezetek postázása</a></h3>
        <p class="event_desc">Március 15-étől mindenki megnézheti szja-bevallási tervezetét, legegyszerűbben a Nemzeti Adó- és Vámhivatal (NAV) Ügyfélportálján vagy eSZJA-oldalán.</p>
        <a href="/hir/3/meg-egy-honapig-kerheto-az-szja-bevallasi-tervezetek-postazasa" class="news_list_to_link d-block text-black" title="Még egy hónapig kérhető az szja-bevallási tervezetek postázása">Elolvasom <i class="fa fa-chevron-right ml-1"></i></a>
    </div>
    <div class="item">
        <a href="/hir/2/kepviselo-testuleti-ules-2024--februar-28" class="main_hir_img_link" title="Képviselő-testületi ülés - 2024. február 28.">
            <img class="img-responsive main_news_thumb radius" src="/images/news/th-22-532.jpg" alt="Képviselő-testületi ülés - 2024. február 28.">
        </a>
        <h3 class="event_title my-2"><a href="/hir/2/kepviselo-testuleti-ules-2024--februar-28" title="Képviselő-testületi ülés - 2024. február 28." class="event_title_link text-black">Képviselő-testületi ülés - 2024. február 28.</a></h3>
        <p class="event_desc">Pécel Város Önkormányzatának Képviselő-testülete 2024. február 28. napján (szerda) 8.00 órai kezdettel rendes képviselő-testületi ülést tart, melyre tisztelettel meghívom.</p>
        <a href="/hir/2/kepviselo-testuleti-ules-2024--februar-28" class="news_list_to_link d-block text-black" title="Képviselő-testületi ülés - 2024. február 28.">Elolvasom <i class="fa fa-chevron-right ml-1"></i></a>
    </div>
</div>
```

I searched for this, but didn't find anything.
Make the last visible item blurry in owl-carousel
|html|css|owl-carousel|
null
I'm currently working on a project that involves processing large volumes of textual data for natural language processing tasks. One critical aspect of my pipeline involves tokenization and string matching, where I need to efficiently match substrings within sentences against a predefined set of patterns. Here's a mock example to illustrate the problem, with the following list of sentences:

```python
sentences = [
    "the quick brown fox jumps over the lazy dog",
    "a watched pot never boils",
    "actions speak louder than words"
]
```

And I have a set of patterns:

```python
patterns = [
    "quick brown fox",
    "pot never boils",
    "actions speak"
]
```

My goal is to efficiently identify sentences that contain any of these patterns. Additionally, I need to tokenize each sentence and perform further analysis on the matched substrings. Currently, I'm using a brute-force approach with nested loops, but it's not scalable for large datasets. I'm looking for more sophisticated techniques or algorithms to optimize this process. How can I implement string matching and tokenization for this scenario, considering scalability and performance? Any suggestions would be highly appreciated!
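For reference, the brute-force nested-loop approach described above might look like this sketch (not code from the original post); its cost grows with `len(sentences) * len(patterns)`, which is what makes it unscalable:

```python
sentences = [
    "the quick brown fox jumps over the lazy dog",
    "a watched pot never boils",
    "actions speak louder than words",
]
patterns = ["quick brown fox", "pot never boils", "actions speak"]

# Brute force: every pattern is scanned against every sentence
matches = []
for sentence in sentences:
    for pattern in patterns:
        if pattern in sentence:
            # Tokenize the matching sentence for further analysis
            tokens = sentence.split()
            matches.append((sentence, pattern, tokens))

print(len(matches))  # 3
```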
How to make tokenization and pattern matching efficient for large text datasets in python
|python|
This behaviour certainly seems to be irregular. It reminds me of the issue described in [iOS 17 SwiftUI: Color Views in ScrollView Using containerRelativeFrame Overlap When Scrolling](https://stackoverflow.com/q/77636423/20386264), so I tried some similar workarounds. It works a lot better with the following changes:

**1. Ignore the horizontal safe area insets**

```swift
TabView {
    Text("Page One")
    Text("Page Two")
}
.tabViewStyle(.page)
.ignoresSafeArea(edges: .horizontal) // <- ADDED
.transition(.slide)
```

**2. Use `.easeInOut` animation**

The animation of the page indicators seems to take a little longer than the animation of the transition. When the default animation effect of `.spring` is used, this is especially noticeable. It is less noticeable if `.easeInOut` is used:

```swift
VStack {
    // content as before
}
.animation(.easeInOut, value: showTabView) // <- CHANGED
```

![Animation](https://i.stack.imgur.com/WUkP9.gif)

---

*EDIT*

Some more tuning:

- Ignoring the safe area insets will leave you with the issue that the tabs now fill the full width of the screen. If you want to enforce the safe areas in the usual way then a `GeometryReader` can be used to measure the screen width.
- I found that if negative padding is used to expand the width of the `TabView` by a large amount (overall width > twice the normal width) then the animation of the `TabView` and the page indicators become synchronized!
- When the view slides out to the right, it pauses before fully disappearing. This can be hidden by combining the `slide` transition with `.opacity` too.

So here is an updated version of your example with all the changes applied:

```swift
struct ContentView: View {
    let loremIpsum = "Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum."
    @State private var showTabView = false

    var body: some View {
        GeometryReader { proxy in
            VStack {
                Button("Toggle TabView") {
                    showTabView.toggle()
                }
                .frame(maxWidth: .infinity)
                Spacer()
                if showTabView {
                    TabView {
                        Text(loremIpsum)
                            .frame(width: proxy.size.width)
                        Text("Page Two")
                            .frame(width: proxy.size.width)
                    }
                    .tabViewStyle(.page)
                    .padding(.horizontal, -500) // greater than width / 2
                    .ignoresSafeArea(edges: .horizontal)
                    .transition(.slide.combined(with: .opacity))
                }
            }
            .animation(.spring(duration: 1), value: showTabView)
        }
    }
}
```

![Animation](https://i.stack.imgur.com/Cp89S.gif)
I'm currently implementing a Flutter web app. I'm calling a 3rd-party system, and it redirects to the URL I provide to it. However, it appends `#code=CODE&id_token=IDTOKEN` to the end of it, so I'm getting a redirect to `REDIRECTURL/#code=CODE&id_token=IDTOKEN`. The problem is that Go Router sees this as a path and not as parameters, and I get the error:

GoException: no routes for location: code=CODE&id_token=IDTOKEN

Is there a way for me to handle fragment parameters using Go Router? The best I have is to read these parameters manually and then try to remove them from the URL in `initState`, but that seems a little hacky and doesn't sit nicely. Is there a better way to do this?
Is there a way for Go Router in Flutter to handle fragment parameters?
|flutter|url|redirect|url-fragment|
null
How do I turn on antialiasing in SDL2 when using `SDL_RenderCopyEx`? I found some articles that suggest using:

```
SDL_GL_SetAttribute(SDL_GL_MULTISAMPLEBUFFERS, 1);
SDL_GL_SetAttribute(SDL_GL_MULTISAMPLESAMPLES, 2);
```

and

```
glEnable(GL_MULTISAMPLE);
```

But this has no effect. Any ideas?

```
int Buffers, Samples;
SDL_GL_GetAttribute( SDL_GL_MULTISAMPLEBUFFERS, &Buffers );
SDL_GL_GetAttribute( SDL_GL_MULTISAMPLESAMPLES, &Samples );
cout << "buf = " << Buffers << ", samples = " << Samples;
```

returns `buf = -858993460, samples = -858993460`.

EDIT: CODE:

```
#include <windows.h>
#include <iostream>
#include <SDL2/include/SDL.h>
#include <SDL2/include/SDL_image.h>

using namespace std;

int main( int argc, char * args[] )
{
    // SDL Init
    SDL_Init(SDL_INIT_EVERYTHING);
    SDL_GL_SetAttribute(SDL_GL_MULTISAMPLEBUFFERS, 1);
    SDL_GL_SetAttribute(SDL_GL_MULTISAMPLESAMPLES, 8);
    SDL_GL_SetAttribute(SDL_GL_ACCELERATED_VISUAL, 1);

    // Create window
    SDL_Window *win = nullptr;
    win = SDL_CreateWindow("abc", 100, 100, 800, 600, SDL_WINDOW_FULLSCREEN | SDL_WINDOW_OPENGL | SDL_WINDOW_SHOWN);
    if (win == nullptr) {
        std::cout << SDL_GetError() << std::endl;
        system("pause");
        return 1;
    }

    int Buffers, Samples;
    SDL_GL_GetAttribute( SDL_GL_MULTISAMPLEBUFFERS, &Buffers );
    SDL_GL_GetAttribute( SDL_GL_MULTISAMPLESAMPLES, &Samples );
    cout << "buf = " << Buffers << ", samples = " << Samples << ".";

    // Create Renderer
    SDL_Renderer *ren = nullptr;
    ren = SDL_CreateRenderer(win, -1, SDL_RENDERER_ACCELERATED | SDL_RENDERER_PRESENTVSYNC);
    if (ren == nullptr) {
        std::cout << SDL_GetError() << std::endl;
        return 1;
    }

    // Create texture
    SDL_Texture *tex = nullptr;
    tex = IMG_LoadTexture(ren, "circle.png");
    SDL_SetTextureAlphaMod(tex, 100);

    SDL_Rect s,d;
    SDL_Point c;
    s.x = s.y = 0;
    s.w = s.h = 110;
    d.x = 320;
    d.y = 240;
    d.w = d.h = 110;
    c.x = c.y = 55;

    // Event Queue
    SDL_Event e;
    bool quit = false;
    int angle = 0;
    while(!quit) {
        while (SDL_PollEvent(&e)){
            // If user closes the window
            if (e.type == SDL_KEYDOWN)
                quit = true;
        }
        angle += 2;
        float a = (angle/255.0)/M_PI*180.0;

        // Render
        SDL_RenderClear(ren);
        SDL_RenderCopyEx(ren, tex, &s, &d, a, &c, SDL_FLIP_NONE);
        SDL_RenderPresent(ren);
    }

    // Release
    SDL_DestroyTexture(tex);
    SDL_DestroyRenderer(ren);
    SDL_DestroyWindow(win);

    // Quit
    SDL_Quit();
    return 0;
}
```

Do not worry about style or errors related to memory deallocation, etc. It was a quick sketch to test the possibilities of SDL.
Integration of Python OpenCV script with React
|python|react-native|opencv|
null
I have the following, but in order to use `AddAzureKeyVault` I need to know the key vault name, which is set in configuration. Is it possible to read in this value in one go, or do I need to build it twice?

```
private static IConfigurationRoot GetIConfigurationRoot()
{
    var root = new ConfigurationBuilder()
        .SetBasePath(Directory.GetCurrentDirectory())
        .AddJsonFile("appsettings.json", optional: false)
        //.AddAzureKeyVault($"https://{keyVaultName}.vault.azure.net")
        .AddEnvironmentVariables()
        .Build();

    return root;
}
```
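Not part of the question — for illustration, the "build it twice" option mentioned above might look like the following sketch. It reuses the question's own commented-out `AddAzureKeyVault` call, and `"KeyVaultName"` is an assumed setting name in `appsettings.json`:

```csharp
// Sketch of the "build it twice" approach: first build a bootstrap
// configuration without Key Vault, read the vault name from it,
// then build the full configuration including Key Vault.
private static IConfigurationRoot GetIConfigurationRoot()
{
    var bootstrap = new ConfigurationBuilder()
        .SetBasePath(Directory.GetCurrentDirectory())
        .AddJsonFile("appsettings.json", optional: false)
        .AddEnvironmentVariables()
        .Build();

    var keyVaultName = bootstrap["KeyVaultName"]; // assumed setting name

    return new ConfigurationBuilder()
        .SetBasePath(Directory.GetCurrentDirectory())
        .AddJsonFile("appsettings.json", optional: false)
        .AddAzureKeyVault($"https://{keyVaultName}.vault.azure.net")
        .AddEnvironmentVariables()
        .Build();
}
```

The bootstrap build is cheap (it only re-reads the JSON file and environment variables), so building twice is a common way to resolve this kind of configuration-depends-on-configuration ordering.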
Using ConfigurationBuilder to read setting and use AddAzureKeyVault
|c#|asp.net-core|asp-net-config-builders|
I'm currently trying to improve my Python skills by using it to recreate some old MATLAB projects I did in undergrad. I once used MATLAB to simulate a proton attracting an electron, using multidimensional arrays to store the positions and velocities for each time step. Now I'm trying to do the same thing in Python, but the way arrays work is very different from the way they're used in MATLAB. I'm running into a lot of errors and figuring things out as I go, but I think I'm stuck thinking about arrays in the "MATLAB way", and some pointers on how to accomplish this in Python would be greatly appreciated.

```
# This code is for plotting the trajectory of an electron travelling through space
# The particle is in the vicinity of a central force (a proton at the origin)
import numpy as np
import matplotlib.pyplot as plt

re = np.array([[1e-11],[1e-11]])  # let re denote the trajectory of the electron with x = r[0] and y = r[1]
m = 9.11e-31  # mass of electron
k = 8.99e9    # Coulomb's constant [N m^2/C^2]
q = 1.6e-19   # charge of electron and proton [C]
rp = [0,0]    # rp is the position of the proton
dt = 0.001    # time differential [s]
v = np.array([[-3e12],[0]])  # the electron has initial velocity of v = (-3e12, 0) [m/s]
phi = np.arctan2(re[1][0], re[0][0])  # starting angle

for i in range(1,10):
    # nrex = (re[0][i-1])+v[0][i-1]*dt  # new position in x
    # nrey = (re[1][i-1])+v[1][i-1]*dt  # new position in y
    re[0] = np.append(re[0], ((re[0][i-1])+v[1][i-1]*dt), axis=1)  # for each timestep move the velocity in x
    re[1] = np.append(re[1], ((re[1][i-1])+v[1][i-1]*dt), axis=1)  # for each timestep move the velocity in y
    phi = np.arctan2(re[1][i], re[0][i])  # update the angle
    rho = np.sqrt(re[0][i]**2 + re[1][i]**2)  # update separation from proton
    v[0] = np.append(v[0], (v[0][i-1]+((k*(q**2)/(rho**2))/m)*np.cos(phi)*dt), axis=1)  # update velocity in x
    v[1] = np.append(v[1], (v[1][i-1]+((k*(q**2)/(rho**2))/m)*np.sin(phi)*dt), axis=1)  # update velocity in y

plt.scatter(re[0][:], re[1][:], s=2, c='b')  # Plot electron's trajectory
plt.scatter(rp[0], rp[1], s=3, c='r')  # Show proton's position
plt.show()
```

Basically, what I'm trying to do is add the next "state" of the system at the end of every "vector" within an array (one for position and one for velocity, both containing components for x and y), and finally plot each time state to see the entire trajectory. This however returns the following error:

```
Traceback (most recent call last):
  File "c:\Users\hecto\OneDrive\Documentos\ITESM\8vo semestre\Repaso Python\ParticleInElectricField.py", line 20, in <module>
    re[0] = np.append(re[0], ((re[0][i-1])+v[1][i-1]*dt), axis=1) #, axis=1) #for each timestep move the velocity in x
  File "<__array_function__ internals>", line 200, in append
  File "C:\Users\hecto\AppData\Local\Programs\Python\Python38\lib\site-packages\numpy\lib\function_base.py", line 5499, in append
    return concatenate((arr, values), axis=axis)
  File "<__array_function__ internals>", line 200, in concatenate
numpy.AxisError: axis 1 is out of bounds for array of dimension 1
```

Can you not append to a specific "vector" within an array? Or am I missing something? Any pointers help, thanks in advance!

When I tried the same code without `axis=1`, it returned an error about the dimensions of the array. This is probably because without it, `np.append` flattens the array. When I try `axis=0`, the error becomes:

```
Traceback (most recent call last):
  File "c:\Users\hecto\OneDrive\Documentos\ITESM\8vo semestre\Repaso Python\ParticleInElectricField.py", line 20, in <module>
    re[0] = np.append(re[0], ((re[0][i-1])+v[1][i-1]*dt), axis=0) #, axis=1) #for each timestep move the velocity in x
  File "<__array_function__ internals>", line 200, in append
  File "C:\Users\hecto\AppData\Local\Programs\Python\Python38\lib\site-packages\numpy\lib\function_base.py", line 5499, in append
    return concatenate((arr, values), axis=axis)
  File "<__array_function__ internals>", line 200, in concatenate
ValueError: all the input arrays must have same number of dimensions, but the array at index 0 has 1 dimension(s) and the array at index 1 has 0 dimension(s)
```

This is weird because what I'm trying to append is a value, not an array. It makes sense that it has dimension 0, but why would it need to have a dimension to be appended to the end of the array? This makes me think that `axis=0` is the wrong way to go, but I couldn't tell you why either.
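Not part of the original question — a small sketch demonstrating the shape issue described above: `re` is a 2-D array, but each row `re[0]` is 1-D, so it only has axis 0 and `axis=1` is out of bounds.

```python
import numpy as np

re_arr = np.array([[1e-11], [1e-11]])
print(re_arr.shape)     # (2, 1): the array itself is 2-D
print(re_arr[0].shape)  # (1,): but each row is a 1-D array

# A 1-D array only has axis 0, so axis=1 raises the error from the question
try:
    np.append(re_arr[0], 2e-11, axis=1)
    raised = False
except Exception as exc:
    raised = True
    print(type(exc).__name__)  # AxisError, as in the question's traceback

# Without an axis argument, np.append works on a 1-D array and
# returns a new, longer array; the original is left unchanged
grown = np.append(re_arr[0], 2e-11)
print(grown.shape)  # (2,)
```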
How to use arrays in Python similar to how they're used in MATLAB
|python|arrays|numpy|matlab|
null
I might be wrong, but from what I remember and what this [diagram][1] indicates, the transition back to unconfirmed is not possible. You will have to extend the existing behaviour and state diagram. [1]: https://docs.aws.amazon.com/cognito/latest/developerguide/signing-up-users-in-your-app.html
I'd recommend changing the approach slightly: instead of having `transform` modify its argument, it should return a value that will replace the input. So you can have a set of overloads like this:

    std::string transform(std::string str) {
        return str.append("abc");
    }

    template<std::size_t N>
    std::string transform(const char(&arr)[N]) {
        return std::string{&arr[0], N - 1}.append("abc");
    }

    template<std::size_t N>
    std::string transform(char(&arr)[N]) {
        return std::string{&arr[0], N - 1}.append("abc");
    }

    template<typename T>
    T&& transform(T&& value) {
        return static_cast<T&&>(value);
    }

And wrap all the arguments in calls to `transform` when calling `make_format_args`:

    return std::vformat(requete, std::make_format_args(transform(args)...));

[Example on Compiler Explorer][1]

[1]: https://godbolt.org/z/7YKrn8YEG
I'm trying to plot a map that includes many shapefiles, and I'm having trouble with the legend. This is the map as it is, but I'd like the polygons to also be in the legend: [map](https://i.stack.imgur.com/dJvp7.png)

This is the code I'm using (as I'm using multiple shapefiles, I have no idea how to make this reproducible, sorry about that):

```
ggplot(data = transformed_sea) +
  geom_sf(fill = "lightblue1") +
  geom_sf(data = transformed_locations, size = 2, aes(color = Fjord, fill = Fjord)) +
  scale_color_brewer(palette = "Dark2") +
  geom_sf(data = transformed_places, size = 2, color = "black", fill = "black") +
  geom_sf(data = halseborder, size = 2, color = "tomato", fill = "tan1", alpha = 0.2, show.legend = TRUE) +
  geom_sf(data = ff1, size = 2, color = "deeppink4", fill = "deeppink", alpha = 0.4, lwd = 1, show.legend = TRUE) +
  geom_sf(data = ff2, size = 2, color = "purple3", fill = "mediumpurple1", alpha = 0.4, lwd = 1, show.legend = TRUE) +
  geom_sf(data = vad, color = "tomato", fill = "tan1", alpha = 0.2) +
  geom_sf_text(data = transformed_labels, aes(label = ID), size = 3, color = "grey34") +
  geom_sf_label_repel(inherit.aes = FALSE, data = transformed_places, aes(label = ID), force = 10, nudge_x = 2.5, seed = 1, size = 3) +
  coord_sf(xlim = c(636224.5703011239, 674153.32093384), ylim = c(6421546.462142282, 6472000.544186506)) +
  xlab("Longitude") +
  ylab("Latitude") +
  scale_x_continuous(breaks = c(11.3, 11.6, 11.9), labels = c("11.3°E", "11.6°E", "11.9°E")) +
  annotation_scale(location = "bl", width_hint = 0.3) +
  theme(panel.grid.major = element_blank(), panel.grid.minor = element_blank(), panel.background = element_rect(fill = "papayawhip"))
```

As you can see, I've added the argument `show.legend = TRUE` in the lines mapping the polygons, but it just causes them to overlap the fjord legend like so: [legend overlap](https://i.stack.imgur.com/fcvAJ.png)

Does anyone know of a way to have separate legends, with one showing the fjord points and the other the different zones?
Error after migrating from Vaadin 23 to Vaadin 24: Invalid option in transform() call: "jsxDev"
|vaadin|vaadin24|
null
null
null
null
null
I have a Spring Boot application that uses Amazon Kinesis to consume data and save it to PostgreSQL. Since I'm already using one database (PostgreSQL) in my application, I want to avoid using another database (DynamoDB) just for checkpointing and locking purposes, so that the resource cost can be reduced.

I am using the below dependency in my project:

```
implementation 'org.springframework.cloud:spring-cloud-stream-binder-kinesis:4.0.2'
```

**My application.yml file**

```
spring:
  cloud:
    aws:
      credentials:
        sts:
          web-identity-token-file: <where I had given the token file path>
          role-arn: <where I had given the assume role arn>
          role-session-name: RoleSessionName
      region:
        static: <where I had given my aws region>
      dualstack-enabled: false
    stream:
      kinesis:
        binder:
          auto-create-stream: false
          min-shard-count: 1
      bindings:
        input-in-0:
          destination: test-test.tst.v1
          content-type: text/json
```

Below is the Java class which contains the bean for processing data from Kinesis:

```
@Configuration
public class KinesisConsumerBinder {

    @Bean
    public Consumer<Message<String>> input() {
        return message -> {
            System.out.println("Data from Kinesis:" + message.getPayload());
            // Process the message got from Kinesis
        };
    }
}
```

As per my previous question (https://stackoverflow.com/questions/78200118/can-we-use-postgresql-instead-of-default-dynamo-db-for-checkpointing-and-locking), I applied the solution provided there and it works for me: I was able to use PostgreSQL for checkpointing and locking. But I'm facing the below issue.

In the processing logic of the consumed message from Kinesis, I have written logic to save it into the database (PostgreSQL). There are two pods running in my environment, connecting to the same stream and the same database in a load-balancing manner. I'm seeing two rows getting inserted into my table for each message.

Below is the sample dummy masked metadata that got saved in the checkpointing tables:

```
Table : int_metadata_store
===========================
metadata_key : metadata_value : region
-----------------------------------------------------------------------------
anonymous.6168ad-9a13-5b75-96b9-996f340dfd:test-test.tst.v1:shardId-000000001 : 8954146227109765333558818710934653698471926607 : DEFAULT
anonymous.7168ad-8b13-5c75-96n9-996f340dfd:test-test.tst.v1:shardId-000000001 : 8954146227109765333558818710934653698471926607 : DEFAULT

Table : int_lock
=================
lock_key : region : client_id : created_date
-----------------------------------------------------------------------
44444444-7df8-2222-db33-c0bbb07bbbb8 : DEFAULT : a9a9a9a9-8888-4444-777d-99a33aa20122 : 2024-03-26 09:28:29.662
55555555-dddd-3333-b000-8810ae858a84 : DEFAULT : a9a9a9a9-8888-4444-777d-99a33aa20122 : 2024-03-26 09:28:29.662
66666666-eeee-4444-a805-6138a0c59976 : DEFAULT : a9a9a9a9-8888-4444-777d-99a33aa20122 : 2024-03-26 09:28:29.662
```

Could anyone please help me to solve this issue?

Expecting: only one message should get saved in the database for each message from Kinesis.
How to get content of BLOCK types LAYOUT_TITLE, LAYOUT_SECTION_HEADER and LAYOUT_xx in Textract
|amazon-web-services|pdf|amazon-textract|
null
I'm getting this error:

> Type of 'await' operand must either be a valid promise or must not contain a callable 'then' member. deno-ts(1320)

in my Supabase Edge Function, whose code is as follows:

```
import { createClient } from "https://cdn.skypack.dev/@supabase/supabase-js";

const supabase = createClient('https://mylink.supabase.co', 'mykey')

Deno.serve(async (req) => {
  const { email, password, first_name, last_name, username } = await req.json()

  const { error: insertError } = await supabase.from('Users').insert([{ email, password, first_name, last_name, username }])

  if (insertError) {
    // Handle the error
  } else {
    // Handle the successful insert
    console.log('Inserted data:', data);
  }

  return new Response(JSON.stringify(data), {
    headers: { 'Content-Type': 'application/json' }
  })
})
```

This line specifically is giving me the error:

```
const { error: insertError } = await supabase.from('Users').insert([{ email, password, first_name, last_name, username }])
```
Error Inserting into Supabase: Type of 'await' operand must either be a valid promise or must not contain a callable 'then' member
|typescript|supabase|deno|supabase-js|
I have a class called Task with properties like Date, Id, Priority and Name, and a list where I store objects of this class. The problem is that when I do the sorting and then display the result in my ListView (I am working in WPF because it is part of the assignment), all the tasks get renamed to the same thing (TODO-list.task.Task) instead of their original names, and I have no idea whether the sorting actually worked. I use this to sort the list:

```
List<Task> sort_by_date = tasks.OrderBy(x => x.Date).ToList();
```

And then write it out to the ListView like this:

```
MainWindow.withDate.Items.Clear();
MainWindow.withDate.ItemsSource = sort_by_date;
```

EDIT: Already figured it out... it was a small thing in writing out the ListView.
I am building what I thought was a very simple server-client setup. I have built a simple Node.js server. Below is the implementation:

```
const http = require('http');
const { Server } = require('socket.io');

// Create an HTTP server
const server = http.createServer();

// Create a new instance of Socket.io by passing the HTTP server
const io = new Server(server);

// Listen for incoming socket connections
io.on('connection', (socket) => {
  console.log('A user connected');

  // Listen for messages from the client
  socket.on('message', (message) => {
    console.log('Message received:', message);
    // Broadcast the message to all connected clients
    io.emit('message', message);
  });

  // Listen for disconnection
  socket.on('disconnect', () => {
    console.log('A user disconnected');
  });
});

// Start the server and listen on port 3000
const PORT = process.env.PORT || 3000;
server.listen(PORT, () => {
  console.log(`Server listening on http://192.168.0.191 port ${PORT}`);
});
```

I successfully validated this server by writing a Node.js client test script:

```
const io = require('socket.io-client');

// Replace 'http://localhost:3000' with the URL of your socket.io server
const socket = io('http://localhost:3000');

// Listen for 'connect' event
socket.on('connect', () => {
  console.log('Connected to server');

  // Read input from command line and send it to the server
  process.stdin.on('data', (data) => {
    const message = data.toString().trim();
    socket.emit('message', message);
    console.log("Next Message Sent:", message)
  });
});

// Listen for 'message' event
socket.on('message', (message) => {
  console.log('Received message from server:', message);
});

// Listen for 'disconnect' event
socket.on('disconnect', () => {
  console.log('Disconnected from server');
});
```

The problem that I'm having is with the Android application. I'm receiving XHR polling errors, and I'm not able to pinpoint the root cause.
This is the implementation.

build.gradle.kts:

```
implementation("dev.icerock.moko:socket-io:0.4.0")
```

AndroidManifest.xml:

```
<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:tools="http://schemas.android.com/tools"
    xmlns:android="http://schemas.android.com/apk/res/android">

    <uses-feature
        android:name="android.software.leanback"
        android:required="true" />
    <uses-feature
        android:name="android.hardware.touchscreen"
        android:required="false" />
    <uses-feature
        android:name="android.hardware.location.gps"
        android:required="false" />

    <uses-permission android:name="android.permission.ACCESS_COARSE_LOCATION" />
    <uses-permission android:name="android.permission.ACCESS_FINE_LOCATION" />
    <uses-permission android:name="android.permission.WAKE_LOCK" />
    <uses-permission android:name="android.permission.INTERNET" />
    <uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" />
    <uses-permission android:name="android.permission.READ_PHONE_STATE" />

    <application
        ...
        android:usesCleartextTraffic="true"
        android:networkSecurityConfig="@xml/network_security_config"
        android:banner="@drawable/logo">
        ...
```
The implementation of the network_security_config.xml file:

```
<?xml version="1.0" encoding="utf-8"?>
<network-security-config>
    <domain-config cleartextTrafficPermitted="true">
        <domain includeSubdomains="true">192.168.0.191</domain>
    </domain-config>
</network-security-config>
```

Finally, the implementation relevant to the socket:

```
@Composable
private fun createMokoSocketIO() {
    val coroutineScope = rememberCoroutineScope()
    coroutineScope.launch(Dispatchers.IO) {
        try {
            val socket = Socket(
                endpoint = "http://192.168.0.10:3000",
                config = SocketOptions(
                    queryParams = mapOf("payload" to "MyPayload"),
                    transport = SocketOptions.Transport.DEFAULT
                )
            ) {
                on(SocketEvent.Connect) { println("moko.socket: connect") }
                on(SocketEvent.Connecting) { println("moko.socket: connecting") }
                on(SocketEvent.Disconnect) { println("moko.socket: disconnect") }
                on(SocketEvent.Error) { println("moko.socket: error $it") }
                on(SocketEvent.Reconnect) { println("moko.socket: reconnect") }
                on(SocketEvent.ReconnectAttempt) { println("moko.socket: reconnect attempt $it") }
                on(SocketEvent.Ping) { println("moko.socket: ping") }
                on(SocketEvent.Pong) { println("moko.socket: pong") }
                on("message") { data ->
                    println("moko.socket: message=[$data]")
                    //...
                }
            }
            socket.connect()
            socket.emit("message", "Hello, server!")
        } catch (e: URISyntaxException) {
            println("moko.socket: Error: ${e.message}")
            return@launch
        }
    }
}
```

I need suggestions for troubleshooting this issue. These are my logs:

```
moko.socket: error io.socket.engineio.client.EngineIOException: xhr poll error
moko.socket: error io.socket.client.SocketIOException: Connection error
```

I also tried a different third-party library, only to receive the same result:

```
implementation("io.socket:socket.io-client:1.0.0")
```

The server runs on my local machine, while the Android application is installed on a mobile device. Both devices are on the same Wi-Fi network. I even tried to connect using a tool such as ngrok, only to receive the same result.

Does anyone have any ideas?
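For anyone debugging the same symptom: an "xhr poll error" means Engine.IO's initial HTTP long-polling GET failed before any WebSocket upgrade was attempted, so the first thing to check is whether that plain HTTP request is reachable from the phone at all (via curl or the phone's browser). The sketch below builds the handshake URL the client polls; `handshakeUrl` is a hypothetical helper, not part of any library, and it assumes the default `/socket.io/` path — the `EIO` protocol version depends on the client library (older Java clients speak EIO=3, current ones EIO=4):

```
// Build the Engine.IO long-polling handshake URL that produces the
// "xhr poll error" when it fails. Assumes the server mounts Socket.IO
// at the default path "/socket.io/".
function handshakeUrl(endpoint: string, eioVersion = 4, path = "/socket.io/"): string {
  // Strip a trailing slash from the endpoint so the path isn't doubled.
  return `${endpoint.replace(/\/$/, "")}${path}?EIO=${eioVersion}&transport=polling`;
}

console.log(handshakeUrl("http://192.168.0.10:3000"));
// http://192.168.0.10:3000/socket.io/?EIO=4&transport=polling
```

If fetching that URL manually from the same network also fails, the problem is reachability (IP, port, firewall) rather than the client library or manifest configuration.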
I hope all of this source code helps. Thanks