**tl;dr:** Remove the `safecall`.

The definition of `IEnumerable`:

```
IEnumerable = interface(IInterface)
  function GetEnumerator: IEnumerator;
end;
```

Yours:

```
function GetEnumerator: IEnumerator; safecall;
```

Or, side-by-side:

```
function GetEnumerator: IEnumerator;           // definition
function GetEnumerator: IEnumerator; safecall; // yours
```
You need to declare `content_scripts` in manifest.json, then [re-inject `content_scripts`](/q/10994324) after reloading/installing the extension in Chrome, but a much simpler solution is to use programmatic injection via `chrome.scripting.executeScript` instead of content_scripts+messaging, see [these examples](/a/67227376). popup.html: add `defer` to load the script at DOMContentLoaded: ```html <script src="popup.js" defer></script> ``` popup.js: ```js (async () => { const [tab] = await chrome.tabs.query({active: true, currentWindow: true}); const [{result}] = await chrome.scripting.executeScript({ target: {tabId: tab.id}, func: () => document.body.innerText, }); document.body.textContent = new URL(tab.url).hostname + ': ' + result.slice(0, 100) + '...'; })(); ``` No need for the background script or a separate content script for such simple scraping.
Does flutter wasm support native module import?
I have to execute a shell script `log-agent.sh` inside my docker container.

**Dockerfile**

```
FROM openjdk:11-jre

LABEL org.opencontainers.image.authors="gurucharan.sharma@paytm.com"

# Install AWS CLI
RUN apt-get update && \
    apt-get install -y awscli && \
    apt-get clean

VOLUME /tmp
ARG JAR_FILE
ARG PROFILE
ADD ${JAR_FILE} app.jar
ENV PROFILE_ENV=${PROFILE}
EXPOSE 8080

COPY entrypoint.sh /
COPY log-agent.sh /

# Set permissions for log-agent.sh
RUN chmod +x /log-agent.sh

# Use entrypoint.sh as the entry point
ENTRYPOINT ["/entrypoint.sh"]

# Execute log-agent.sh
# RUN /bin/bash -c '/logs/log-agent.sh'
CMD ["/bin/bash", "-c", "/log-agent.sh"]
```

The application startup is successful, but the container is not executing the script. There are no errors in the logs either. Here is what I have already verified:

1. File location.
2. File permissions.
3. Validated the shell script for correctness (proper shebang line).
4. The script executes correctly when run manually from the container using the `docker exec` command.

Any suggestions?
Run multiple shell scripts in Dockerfile
|docker|shell|dockerfile|
|python|machine-learning|deep-learning|sequential|optuna|
### About `I would like it to return tab/sheet name "Park Letter" instead.`

In your script, `var sheetName = SpreadsheetApp.getActiveSpreadsheet().getName();` is used as the filename of the PDF file in the `MailApp.sendEmail` call. If you want to use the value `Park Letter` as the filename of the PDF file, how about the following modification? By the way, I think that `application//pdf` should be `application/pdf`.

### From:

```
MailApp.sendEmail (emailAddress, subject ,body, {attachments:[{fileName:sheetName+".pdf", content:contents, mimeType:"application//pdf"}]});
```

### To:

```
MailApp.sendEmail(emailAddress, subject, body, { attachments: [{ fileName: "Park Letter" + ".pdf", content: contents, mimeType: "application/pdf" }] });
```

or

```
MailApp.sendEmail(emailAddress, subject, body, { attachments: [result.getBlob().setName("Park Letter.pdf")] });
```

By this modification, the filename of the PDF file is `Park Letter.pdf`.

### About `Even better would be a chosen name or a cell within the sheet.`

If you want to take the filename from a cell value, how about the following modification? In this case, please replace `###` with your sheet name. The value of cell "A1" is used as the filename. Please modify it to your actual situation.
### From:

```
MailApp.sendEmail (emailAddress, subject ,body, {attachments:[{fileName:sheetName+".pdf", content:contents, mimeType:"application//pdf"}]});
```

### To:

```
var filename = SpreadsheetApp.getActiveSpreadsheet().getSheetByName("###").getRange("A1").getDisplayValue();
MailApp.sendEmail(emailAddress, subject, body, { attachments: [result.getBlob().setName(`${filename}.pdf`)] });
```

By this modification, the cell value of "A1" of the sheet "###" is used as the filename of the PDF file.

## Added:

About your following new question.

> Is there a way I can send this same letter to multiple people but as a Bcc from a range? I cannot have anyone seeing another persons email address. The example I am thinking is the same as above but instead of var emailRange = SpreadsheetApp.getActiveSpreadsheet().getSheetByName("Park Letter").getRange("F20"). I was thinking var emailRange = SpreadsheetApp.getActiveSpreadsheet().getSheetByName("vicki working sheet").getRange("F20:F76") although this would need to be a Bcc. Is such a thing possibe? Thank you so much in advance.

If you want to use the values of `var emailRange = SpreadsheetApp.getActiveSpreadsheet().getSheetByName("vicki working sheet").getRange("F20:F76")` as `bcc`, how about the following modification?
### From:

```
MailApp.sendEmail (emailAddress, subject ,body, {attachments:[{fileName:sheetName+".pdf", content:contents, mimeType:"application//pdf"}]});
```

### To:

```
var bcc = SpreadsheetApp.getActiveSpreadsheet().getSheetByName("vicki working sheet").getRange("F20:F76").getDisplayValues().flat().join(",");
var filename = SpreadsheetApp.getActiveSpreadsheet().getSheetByName("###").getRange("A1").getDisplayValue();
MailApp.sendEmail(emailAddress, subject, body, { attachments: [result.getBlob().setName(`${filename}.pdf`)], bcc });
```
A `dplyr` solution: ``` library(dplyr) df |> filter(Disease == 1 & lead(Disease) == 2, .by = ID) |> pull(ID) ``` Result: ``` [1] 2 4 ``` **Edit:** The original question was extended such that cases with disease sequence (1, 3, 2) shall also be included: ``` df |> filter(Disease == 1 & (lead(Disease) == 2 | (lead(Disease) == 3 & lead(Disease, n = 2) == 2)), .by = ID) |> pull(ID) ``` Result: ``` [1] 2 4 6 ``` **Edit 2:** OP stated in the comments that every sequence (1, 3, 3, 3, ..., 3, 3, 2) shall be allowed, this could be solved like this: ``` library(stringr) df |> filter(str_detect(str_c(Disease, collapse = ""), "12|13+2$"), .by = ID) |> distinct(ID) ``` Result: ``` ID 1 2 2 4 3 6 ```
You can make something like this:

```
@Catch(TypeORMError)
export class EntityNotFoundExceptionFilter implements ExceptionFilter {
  catch(exception: TypeORMError, host: ArgumentsHost) {
    const ctx = host.switchToHttp();
    const response = ctx.getResponse<Response>();
    const request = ctx.getRequest<Request>();
    // TypeORMError has no getStatus(), so start from a sensible default
    let status = 500;
    let message = '';

    // `switch (instanceof exception)` is not valid syntax; switch on the
    // constructor (or use an if/else chain of `instanceof` checks) instead
    switch (exception.constructor) {
      case InitializedRelationError:
        // handle error like
        status = 1;
        message = '2';
        break;
      case AlreadyHasActiveConnectionError:
        // handle other error
        break;
      // ... ATTENTION: other exception classes from node_modules/typeorm/errors ...
      default:
        // handle it like an internal server error
        status = 500;
        message = 'Database exception';
    }

    response
      .status(status)
      .json({
        statusCode: status,
        message,
        timestamp: new Date().toISOString(),
        path: request.url,
      });
  }
}
```
```
Running "flutter pub get" in carousel_slider...
Resolving dependencies...
Error on line 34, column 3 of pubspec.yaml: A package may not list itself as a dependency.
   ╷
34 │   carousel_slider: ^4.2.1
   │   ^^^^^^^^^^^^^^^
   ╵
pub get failed
```

I tried changing the file name but still get the same error.
Not able to add carousel_slider package in flutter application
|flutter|carousel-slider|
Maybe you need to use the `Value` property:

```
foreach (var key in appSettings.AllKeys)
{
    Console.WriteLine("Key: {0} Value: {1}", key, appSettings[key].Value);
}
```

and later, to obtain a value:

```
string result = appSettings[key].Value;
```

Try it, and good luck.
You should do it like this:

```
import mysite.wsgi

application = mysite.wsgi.application
```
I am using Apache 2.4.58 with Windows Server 2019 Essentials. I have only 1 server and 1 public IP address with 2 network cards: 192.168.3.100 and 192.168.3.200. I created 2 proxy directives in httpd-ssl.conf:

```
<VirtualHost _default_:443>
    <Proxy "http://192.168.3.100:20300/">
        Require expr "(%{HTTP_HOST} == 'domain1.com') || (%{HTTP_HOST} == 'www.domain1.com')"
        ProxyPreserveHost On
        RequestHeader set X-ProxyBase "/"
    </Proxy>
    ProxyPass "/" "http://192.168.3.100:20300/"
    ProxyPassReverse "/" "192.168.3.100:20300/"

    <Proxy "http://192.168.3.200:20600/">
        Require expr "(%{HTTP_HOST} == 'domain2.com') || (%{HTTP_HOST} == 'www.domain2.com')"
        ProxyPreserveHost On
        RequestHeader set X-ProxyBase "/"
    </Proxy>
    ProxyPass "/" "http://192.168.3.200:20600/"
    ProxyPassReverse "/" "192.168.3.200:20600/"
</VirtualHost>
```

The problem is that only the proxy http://192.168.3.100:20300/ is executed and I can access only domain1.com! If I try to access domain2.com, I get an error message that access to the resource is denied. Inverting the order of the 2 proxies does not change anything: domain2.com is accessible only if I comment out the domain1.com proxy. What can be the problem?
In my react-native application I use `react-query` to fetch the data from backend written in `Spring Boot`. ```typescript export function useFetchAllNotes(kindeId: string) { return useInfiniteQuery<ApiResponseListNote, AxiosError, Response>({ initialPageParam: 0, queryKey: ['userNotes', {kindeId}], queryFn: ({pageParam = 0}) => fetchNotesForUser(kindeId, pageParam as number), getNextPageParam: (lastPage) => { const currentPage = lastPage.metadata.paging?.page as number; const totalPages = lastPage.metadata.paging?.totalPages as number; return currentPage < totalPages - 1 ? currentPage + 1 : undefined; }, select: (data) => { return { pages: data.pages.flatMap((page) => page.data ?? []), pageParams: data.pageParams, }; } } ); } ``` the `spring boot` endpoint looks like this ```kotlin @GetMapping("/{kindeId}") fun getUserNotes( @PageableDefault(sort = ["createdAt"], direction = Sort.Direction.DESC, size = 10) pageable: Pageable, @PathVariable kindeId: String ): ApiResponse<List<Note>> = noteService.getAllUserNotes(kindeId, pageable) ``` which returns object with metadata containing pagination object like this ```typescript export interface ApiResponsePaging { 'totalPages': number; 'totalElements': number; 'page': number; 'pageSize': number; } ``` all the values above are just values taken from `Pageable` in spring boot, so page indexing starts from 0 And the `FlatList` where the query is used ```typescript const { data: notes, isLoading: isFetchingNotes, error: fetchingNotesError, refetch: refetchNotes, isRefetching: isRefetchingNotes, hasNextPage, fetchNextPage, isFetchingNextPage } = useFetchAllNotes(kindeId) const loadMore = () => { if (hasNextPage) { fetchNextPage(); } }; <FlatList data={notes?.pages ?? 
[] as Note[]} renderItem={renderItem} keyExtractor={(_, index) => index.toString()} numColumns={1} onEndReached={({ distanceFromEnd }) => { if (distanceFromEnd < 0) return; loadMore() }} onEndReachedThreshold={0.1} initialNumToRender={10} /> ``` The problem is that when I reach the last page scrolling, then it starts fetching data from 0th page again and again. Do I miss something?
React Query useInfiniteQuery fetching first page after last page
|reactjs|pagination|react-query|tanstackreact-query|
Refresh your database with `Artisan`:

```
Artisan::call('migrate:fresh --seed');
```

*This command removes your tables, creates the tables again (runs the migrations) and then seeds the tables with clean data.*
The input is: [input of sample data](https://i.stack.imgur.com/akaB9.png)

The needed output: [enter image description here](https://i.stack.imgur.com/tyeiB.png)

Is this possible in just Excel, or is a Python script required? If yes, can you help with the Python script? Thanks.

My script (my logic is: read each df column and compare the values from 1 to 20; if equal, write the value out, otherwise write a blank):

```
import pandas as pd

df = pd.read_excel(r"C:\Users\my\scripts\test-file.xlsx")
print(df)

for column in df.columns[0:]:
    print(df[column])
```
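A minimal sketch of the described logic (the column names and the 1–20 value range here are assumptions for illustration, not taken from the actual file): align every column against the full 1–20 range, writing the value where the column contains it and a blank where it doesn't.

```python
import pandas as pd

# toy stand-in for the Excel data: each column holds some values from 1..20
df = pd.DataFrame({"A": [1, 3, 5], "B": [2, 3, 20]})

full_range = range(1, 21)
aligned = pd.DataFrame(index=full_range)
for column in df.columns:
    present = set(df[column].dropna())
    # write the value where the column contains it, blank otherwise
    aligned[column] = [v if v in present else "" for v in full_range]

print(aligned)
```

The result could then be written back out with `aligned.to_excel(...)`.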
I'm trying to implement basic socket program in C which parses HTTP GET requests for now, in `parse_http_req` function I've declared `char* line;` and after `strncpy` call `line` contains 1st line of HTTP request data. My doubt is I didn't allocate any memory for `line` before I called `strncpy`, how is this code working correctly ? Here is my full `server.c` code (still under development) ``` #include <stdio.h> #include <stdlib.h> #include <sys/types.h> #include <sys/socket.h> #include <netinet/ip.h> #include <netinet/in.h> #include <unistd.h> #include <asm-generic/socket.h> #include <string.h> #define DEFAULT_PORT 8080 #define MAX_REQ_QUEUE_SIZE 10 #define MAX_REQ_SIZE 1024 #define DEFAULT_PROTOCOL "http" enum Methods { GET, POST, PUT, OPTIONS, DELETE, INVALID_METHOD }; typedef struct Request { enum Methods method; char *path; char *protocol; char *body; size_t content_length; } Request; enum Methods get_req_method(char *req) { if(strcmp(req, "GET")){ return GET; } if(strcmp(req, "POST")){ return POST; } if(strcmp(req, "OPTONS")){ return OPTIONS; } if(strcmp(req, "PUT")){ return PUT; } if(strcmp(req, "DELETE")){ return DELETE; } return INVALID_METHOD; } void parse_http_req(char *req, size_t req_size) { int size = 0; while((*(req + size) != '\n') && (size < req_size)) { size++; } printf("size: %d\n", size); char *line; line = strncpy(line, req, size); line[size] = '\0'; printf("First line: %s\n", line); } void accept_incoming_request(int socket, struct sockaddr_in *address, socklen_t len) { int accept_fd; socklen_t socklen = len; while((accept_fd = accept(socket, (struct sockaddr*)address, &len)) > 0) { char *buffer = (char *)malloc(sizeof(char) * MAX_REQ_SIZE); int recvd_bytes = read(accept_fd, buffer, MAX_REQ_SIZE); printf("%s\n%ld\n%d\n\n\n", buffer, strlen(buffer), recvd_bytes); parse_http_req(buffer, strlen(buffer)); free(buffer); send(accept_fd, "hello", 5, 0); close(accept_fd); } shutdown(socket, SHUT_RDWR); } int create_socket(struct sockaddr_in *address, 
socklen_t len) { int fd, opt = 1; if((fd = socket(AF_INET, SOCK_STREAM, 0)) < 0) { perror("socket creation failed\n"); return -1; } if (setsockopt(fd, SOL_SOCKET, SO_REUSEADDR | SO_REUSEPORT, &opt, sizeof(opt))) { perror("setsockopt"); return -1; } if(bind(fd, (struct sockaddr*)address, len) < 0 ) { perror("Socket bind failed\n"); return -1; } return fd; } int main(int argc, char* argv[]) { short PORT = DEFAULT_PORT; char* PROTOCOL = DEFAULT_PROTOCOL; if(argc > 1) { PORT = (short) atoi(argv[1]); } if(argc > 2) { PROTOCOL = argv[2]; } printf("%d %s\n", PORT, PROTOCOL); struct sockaddr_in address = { sin_family:AF_INET, sin_addr: { s_addr: INADDR_ANY }, sin_port: htons(PORT) }; int socket; if((socket = create_socket(&address, sizeof(address))) < 0) { exit(1); } if( listen(socket, MAX_REQ_QUEUE_SIZE) < 0) { perror("Listen failed\n"); exit(1); } accept_incoming_request(socket, &address, sizeof(address)); return 0; } ``` If you compile and run server starts at port: `8080` or you can pass desired port as 1st arg like `./server 9000` To test: `wget http://localhost:8080/dummy/path` I tried to see if that line strncpy piece of code is valid. Here is my `test.c` ``` #include <stdio.h> #include <string.h> #include <stdlib.h> void f1(char *l2, int size) { char *l1; l1 = strncpy(l1, l2, size); printf("%s\n", l1); } void main() { char *l1= malloc(10); char *l2 = "test"; strncpy(l1, l2, 5); printf("%s\n", l1); free(l1); f1(l2, 5); } ``` As I expected if I allocate some memory for `l1` before calling `strncpy` like in main it's all good. But, getting `Segmentation fault (core dumped)` while executing `function f1`. How is similar piece of code working in my `server.c` but not in this `test.c` program
how is strncpy able to copy from source to empty destination?
|c|sockets|memory-management|strncpy|
I am trying to develop a plugin to export a markdown file to PDF. Here is a sample markdown content:

```
# What is Obsidian ?

Obsidian is a **markdown** editor.
```

I am using the `marked` library to convert `markdown => html string`, after which I use the `html` function of the `jsPDF` library to convert to PDF and save. The code snippet is as follows:

```
const pdf = new jsPDF()

//converting markDown text to html
const htmlContent = await marked(markedDownContent);

pdf.html(htmlContent, {
    callback: (doc) => {
        doc.save("output.pdf");
    },
    x: 0,
    y: 0,
    margin: 10
});
```

However, the text generated in the PDF is poorly formatted, as seen below:

[![enter image description here][1]][1]

[1]: https://i.stack.imgur.com/POGm6.png

I need help in fixing the styling of the text generated in the PDF document.
html to PDF not rendering properly while using jsPDF
|jspdf|obsidian|
I have class files compiled in `C:\Users\Chandrasekaran\eclipse-workspace\DemoTestForHealenium\target\classes`. The testng and jcommander jar files and `testng.xml` are present in `C:\Users\Chandrasekaran\eclipse-workspace\DemoTestForHealenium`.

```
java -cp "C:\Users\Chandrasekaran\eclipse-workspace\DemoTestForHealenium\target\classes;C:\Users\Chandrasekaran\eclipse-workspace\DemoTestForHealenium\testng.jar;C:\Users\Chandrasekaran\eclipse-workspace\DemoTestForHealenium\jcommander.jar" org.testng.TestNG -testngxml C:\Users\Chandrasekaran\eclipse-workspace\DemoTestForHealenium\testng.xml
```

When I run it, I get "The system cannot find the file specified":

```
C:\Users\Chandrasekaran\eclipse-workspace\DemoTestForHealenium>java -cp "C:\Users\Chandrasekaran\eclipse-workspace\DemoTestForHealenium\target\classes;C:\Users\Chandrasekaran\eclipse-workspace\DemoTestForHealenium\testng.jar;C:\Users\Chandrasekaran\eclipse-workspace\DemoTestForHealenium\jcommander.jar" org.testng.TestNG -testngxml C:\Users\Chandrasekaran\eclipse-workspace\DemoTestForHealenium\testng.xml

java.io.FileNotFoundException: C:\Users\Chandrasekaran\eclipse-workspace\DemoTestForHealenium\-testngxml (The system cannot find the file specified)
        at java.base/java.io.FileInputStream.open0(Native Method)
        at java.base/java.io.FileInputStream.open(FileInputStream.java:216)
        at java.base/java.io.FileInputStream.<init>(FileInputStream.java:157)
        at org.testng.xml.Parser.parse(Parser.java:156)
        at org.testng.xml.Parser.parse(Parser.java:246)
        at org.testng.TestNG.parseSuite(TestNG.java:298)
        at org.testng.TestNG.initializeSuitesAndJarFile(TestNG.java:350)
        at org.testng.TestNG.initializeEverything(TestNG.java:980)
        at org.testng.TestNG.run(TestNG.java:992)
        at org.testng.TestNG.privateMain(TestNG.java:1326)
        at org.testng.TestNG.main(TestNG.java:1294)
```

When I run testng.xml directly from Eclipse it works, but running it from the command prompt does not.
``` [2023-07-04 19:00:34,822] ERROR in app: Exception on /process [POST] Traceback (most recent call last): File "C:\Users\20722\AppData\Roaming\Python\Python38\site-packages\flask\app.py", line 2190, in wsgi_app response = self.full_dispatch_request() File "C:\Users\20722\AppData\Roaming\Python\Python38\site-packages\flask\app.py", line 1486, in full_dispatch_request rv = self.handle_user_exception(e) File "C:\Users\20722\AppData\Roaming\Python\Python38\site-packages\flask\app.py", line 1484, in full_dispatch_request rv = self.dispatch_request() File "C:\Users\20722\AppData\Roaming\Python\Python38\site-packages\flask\app.py", line 1469, in dispatch_request return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args) File "TodayBackup.py", line 55, in process bot_response += get_branches() File "TodayBackup.py", line 90, in get_branches process = subprocess.run(["git", "branch", "--list", "--remote"], cwd=repo_path, capture_output=True, text=True) File "C:\Program Files\Python38\lib\subprocess.py", line 493, in run with Popen(*popenargs, **kwargs) as process: File "C:\Program Files\Python38\lib\subprocess.py", line 858, in __init__ self._execute_child(args, executable, preexec_fn, close_fds, File "C:\Program Files\Python38\lib\subprocess.py", line 1311, in _execute_child hp, ht, pid, tid = _winapi.CreateProcess(executable, args, NotADirectoryError: [WinError 267] The directory name is invalid 127.0.0.1 - - [04/Jul/2023 19:00:34] "←[35m←[1mPOST /process HTTP/1.1←[0m" 500 - ```
If you're writing code where the _only_ difference is a number, or the "number of some fixed thing", then that's a loop, not separate `if` statements, but if you _do_ use if statements, resolve them either with `if` statements that are ordered "largest to smallest match", so there's no fall-through, _or_ with if-else statements, so there's no fall-through. (And based on your question about whether to use a switch: switches operate on static comparison, and your code is based on dynamic comparison, so no, a switch wouldn't work).

However, you don't need either for this. You're doing text matching, so use the best tool in the toolset for that: you can trivially match this pattern with a regex and perform the replace you want, because we're just looking for sequences of `#` at the start of a sentence, followed by one or more spaces, followed by "whatever", and the number of `#` directly corresponds to the number in the `<h...>` you want.

```js
function convert(doc) {
  let lines = doc.split(`\n`);
  lines = convertInlineMD(lines);
  lines = convertMultiLineMD(lines);
  return lines.join(`\n`);
}

function convertInlineMD(lines) {
  lines.forEach((line, i) => {
    // convert headings
    const headingRegex = /^(#+)\s+(.+)/;
    line = line.replace(headingRegex, (_, h, text) => {
      const tag = `h${h.length}`;
      return `<${tag}>${text.trim()}</${tag}>`;
    });
    // convert bold...
    // convert italic...
    // etc. etc. etc.
    lines[i] = line;
  });
  return lines;
}

function convertMultiLineMD(lines) {
  // convert tables, etc. etc.
  return lines;
}

// And a simple test based on what you indicated:
const docs = [
  `## he#llo\nthere\n# yooo`,
  `# he#llo\nthere\n## yooo`
];

docs.forEach((doc, i) => console.log(`[doc ${i+1}]\n`, convert(doc)));
```

However, this is also a naive approach to writing a transpiler, and will be dreadfully inefficient compared to writing a DFA based on the markdown grammar (the "markup language specification" grammar, i.e. the rules that say which tokens can follow which other tokens).
When you create a custom router in Express using `express.Router()`, parameters from the parent route won't be passed to it. This means that you can access parameter values only in your main entry file and not in the custom router. To make parameters accessible in the custom router, you need to instruct Express to pass these parameters using `{ mergeParams: true }` when creating the router, like this: ```js // custom-router.js const customRouter = express.Router({ mergeParams: true }); ``` This allows parameters to be passed from the main entry file to the custom router, enabling access to them within the router.
```
s = 'greenland.gdb\topology_check\t_buildings'
```

Python treats the `\t` here as an escape sequence: `\t` is a tab character, which counts as whitespace. In the case above we don't specify any separator, so whitespace is used as the default separator.

Input:

```
print(s.split())
```

Output:

```
['greenland.gdb', 'opology_check', '_buildings']
```

If I am wrong, correct me.
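To see the escape-sequence behaviour directly (this is presumably why the path lost its backslashes): in a normal string literal `\t` becomes a tab, while in a raw string (the usual way to write Windows paths) it stays a backslash plus a `t`, so there is nothing for `split()` to split on:

```python
s = 'greenland.gdb\topology_check\t_buildings'    # \t is a real tab character here
r = r'greenland.gdb\topology_check\t_buildings'   # raw string: literal backslash + t, no tab

# default split(): splits on any run of whitespace (space, tab, newline)
print(s.split())      # ['greenland.gdb', 'opology_check', '_buildings']
print(s.split('\t'))  # same pieces here, since only tabs separate them
print(r.split())      # no whitespace at all -> the whole string, in one piece
```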
I am trying to build a personal project in Astro. For this, I created a header component that looks like this:

```
import { AlignJustify } from 'lucide-react';
import Sidebar from './Sidebar';
import { useState } from "react";

const Header = () => {
  const [open, setOpen] = useState(false);

  const buttonStyle = {
    backgroundColor: '#BFBFB1', // Background color
    width: '3.5rem', // Adjust width as needed
    height: '3.5rem', // Adjust height as needed
    borderRadius: '50%', // Make it circular
    display: 'flex',
    alignItems: 'center',
    justifyContent: 'center',
  };

  const toggleVisibility = () => {
    setOpen(!open);
    console.log(open);
  };

  return (
    <>
      <div>
        <button onClick={toggleVisibility} style={buttonStyle}>
          <AlignJustify/>
        </button>
      </div>
      <div>
        {open && <Sidebar open={open}/>}
      </div>
    </>
  );
};

export default Header;
```

This `Header.jsx` is being called inside `Layout.astro`. The issue is that whenever I click on my button, `toggleVisibility` is not being called. When I initially set the value of `open` to `true`, I am able to see "hello" on my localhost, and it's hidden when `open` is initially set to `false`, but for some reason my `onClick` is not working. What might be the possible reason for this? There are no errors in the console either. Thank you.
I am using the pins R package and the `board_gdrive()` function to create a hosted board on my Google Drive. My goal is to have a hosted shiny app that pulls pin data from the Google Drive board. However, it appears that Google Drive requires authentication verification each time. Is there a way to not require it, or to have the authentication stored with the board so that manual interaction is not required?

Here is the code I have been using:

```
# Google Drive board
board <- pins::board_gdrive(googledrive::as_id("https://drive.google.com/drive/folders/my-folder-abc123"))
```
Write R pin to Google Drive without authentication
|r|google-drive-api|pins|
I have an Android Studio project with a Kotlin only module in it that I'd like to use to implement database related code (in other words create the database, entities and DAO's) using Jetpack Room. By reading the [Declaring dependencies](https://developer.android.com/jetpack/androidx/releases/room#declaring_dependencies) documentation, it is explicitly stated that: > Optionally, for non-Android libraries (i.e. Java or Kotlin only Gradle modules) you can depend on androidx.room:room-common to use Room annotations. However, I can't compile the project because of errors listed below. Here's my `build.gradle` file: ``` plugins { id("java-library") alias(libs.plugins.jetbrainsKotlinJvm) alias(libs.plugins.ksp) } java { sourceCompatibility = JavaVersion.VERSION_17 targetCompatibility = JavaVersion.VERSION_17 } ksp { arg("room.generateKotlin", "true") } dependencies { implementation(libs.room.ktx) implementation(libs.room.common) implementation(libs.room.runtime) <-- Needed so I can access `RoomDatabase` abstract class. ksp(libs.room.compiler) } ``` Room dependencies in my `libs.versions.toml` file look like this: ```toml [versions] // ... other versions kotlin = "1.9.0" ksp = "1.9.0-1.0.12" room = "2.6.1" jetbrainsKotlinJvm = "1.9.0" [libraries] // ... 
other libs room-ktx = { group = "androidx.room", name = "room-ktx", version.ref = "room" } room-common = { group = "androidx.room", name = "room-common", version.ref = "room" } room-runtime = { group = "androidx.room", name = "room-runtime", version.ref = "room" } room-compiler = { group = "androidx.room", name = "room-compiler", version.ref = "room" } #room-testing = { group = "androidx.room", name = "room-testing", version.ref = "room" } [plugins] // ...other plugins ksp = { id = "com.google.devtools.ksp", version.ref = "ksp" } jetbrainsKotlinJvm = { id = "org.jetbrains.kotlin.jvm", version.ref = "jetbrainsKotlinJvm" } ``` And this is the error I get when I try and sync the project: ``` Could not resolve: androidx.room:room-ktx:2.6.1 Could not resolve: androidx.room:room-runtime:2.6.1 ``` My `settings.gradle.kts` is pretty basic, but it's referencing the `google()` repository. I can't use the Room plugin to configure build options because it can only be ran in an Android module. Honestly, I'm lost. I've no clue why Gradle isn't able to reference those two libraries.
Cannot resolve room dependencies in Kotlin only module
|build.gradle|android-room|
I have to work with the learning history of a Keras model. This is a basic task, but I've measured the performance of the Python built-in min() function, the numpy.min() function, and the numpy ndarray.min() method for a list and an ndarray. The performance of the built-in Python min() function is nothing compared to that of Numpy for an ndarray (for a list the performance is almost equal). However, the ndarray.min() method is almost twice as fast as numpy.min(). The ndarray.min() documentation refers to the numpy.amin() documentation, which according to the numpy.amin docs is an alias for numpy.min(). Therefore, I assumed that numpy.min() and ndarray.min() would have the same performance. Why is the performance of these functions not equal?

```
from timeit import default_timer
import random
import numpy as np

a = random.sample(range(1,1000000), 10000)
b = np.array(random.sample(range(1,1000000), 10000))

def time_mgr(func):
    tms = []
    for i in range(3, 6):
        tm = default_timer()
        for j in range(10**i):
            func()
        tm = (default_timer()-tm) / 10**i * 10e6
        tms.append(tm)
    print(func.__name__, tms)

@time_mgr
def p_min():
    min(a)

@time_mgr
def np_min():
    np.min(a)

@time_mgr
def min_nd():
    min(b)

@time_mgr
def np_min_nd():
    np.min(b)

@time_mgr
def np_nd_min():
    b.min()
```

output, time in mks:

```
p_min [507.12099997326726, 526.5003000386059, 518.0510800099]
np_min [3001.327000092715, 3025.522599928081, 3045.676980004646]
min_nd [2188.343000598252, 2214.6759000606835, 2176.5949900029227]
np_min_nd [22.595999762415886, 21.515100030228496, 21.60724001005292]
np_nd_min [13.29899998381734, 12.87820003926754, 12.865599989891052]
```
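One plausible contributor worth measuring (an assumption, not a confirmed explanation): `np.min` on a plain list must first convert it to an ndarray on every call, and even on an ndarray it goes through numpy's Python-level dispatch before reaching the same C reduction that `ndarray.min` calls directly.

```python
import random
from timeit import timeit
import numpy as np

a = random.sample(range(1, 1000000), 10000)
b = np.asarray(a)

# All spellings agree on the result...
assert min(a) == np.min(a) == np.min(b) == b.min()

# ...but the costs differ: np.min(a) pays for a list -> ndarray conversion
# each call, and np.min(b) still pays a dispatch overhead that b.min() skips.
print("list -> array conversion:", timeit(lambda: np.asarray(a), number=100))
print("np.min(ndarray):         ", timeit(lambda: np.min(b), number=100))
print("ndarray.min():           ", timeit(lambda: b.min(), number=100))
```

The absolute numbers will vary by machine; the point is only the relative ordering.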
reformat numbers stored in array
|input|numeric|reformat|
If you come across errors such as "SyntaxError: Unexpected token 'export'" or "SyntaxError: Cannot use import statement outside a module", along with similar issues, it's necessary to include the module causing the problem in the `transpilePackages` configuration.

**next.config.mjs or next.config.js**

```
const nextConfig = {
  reactStrictMode: false,
  transpilePackages: [
    "antd",
    "rc-util",
    "@babel/runtime",
    "@ant-design/icons",
    "@ant-design/icons-svg",
    "rc-pagination",
    "rc-picker",
    "rc-tree",
    "rc-table",
  ],
};

export default nextConfig;
```
The below script has been working fine for multiple things but the letter returns as a PDF letter named "Site Utilities" this is the whole spreadsheet name. I would like it to return tab/sheet name "Park Letter" instead. Even better would be a chosen name or a cell within the sheet. ``` function Email3() { var ssID = SpreadsheetApp.getActiveSpreadsheet().getId(); var sheetName = SpreadsheetApp.getActiveSpreadsheet().getName(); //var email = Session.getUser().getEmail(); var emailRange = SpreadsheetApp.getActiveSpreadsheet().getSheetByName("Park Letter").getRange("F20"); var emailAddress = emailRange.getValue(); var subject = "Coffee Morning"; var body = ("Attached is a letter about the up coming Coffee Morning. \n \nMany thanks, \n \nCoast and Country Parks"); var requestData = {"method": "GET", "headers":{"Authorization":"Bearer "+ScriptApp.getOAuthToken()}}; var shID = getSheetID("Park Letter") //Get Sheet ID of sheet name "Master" var url = "https://docs.google.com/spreadsheets/d/"+ ssID + "/export?format=pdf&id="+ssID+"&gid="+shID; var result = UrlFetchApp.fetch(url , requestData); var contents = result.getContent(); MailApp.sendEmail (emailAddress, subject ,body, {attachments:[{fileName:sheetName+".pdf", content:contents, mimeType:"application//pdf"}]}); }; function getSheetID(name){ var ss = SpreadsheetApp.getActive().getSheetByName(name) var sheetID = ss.getSheetId().toString() return sheetID } ``` I have looked at other examples where it states get sheet name rather than spreadsheet but this didn't seem to work. I also saw an option for just changing the name "Get New Name". This made sense but I don't think I am putting the code within the right section.
What would be the best way to solve the problem of wanting to 'auto group' all artboards? As opposed to dragging a selection box around each artboard and hitting CMD+G: each page has 8 artboards and I have probably 80-150 pages to get through. To those proficient in JavaScript: does this code resemble something workable, or should I pack it in?

```
var doc = app.activeDocument;

for (var i = 0; i < doc.artboards.length; i++) {
    doc.artboards.setActiveArtboardIndex(i);
    var artboard = doc.artboards[i];
    var itemsOnArtboard = [];

    for (var j = 0; j < doc.pageItems.length; j++) {
        var item = doc.pageItems[j];
        if (artboard.artboardRect.toString() == item.visibleBounds.toString()) {
            itemsOnArtboard.push(item);
        }
    }

    if (itemsOnArtboard.length > 1) {
        var group = doc.groupItems.add();
        for (var k = 0; k < itemsOnArtboard.length; k++) {
            itemsOnArtboard[k].moveToEnd(group);
        }
    }
}
```

I was expecting Illustrator to cycle through each artboard, grouping the contents of each board until it finally reached the last board. The code seemed to take, the beach ball spun for a bit, but no result.
astro onclick event not calling the function
|reactjs|onclick|astro|
null
I am trying to add my number in WhatsApp Manager, but it doesn't show any option for a code or a call. It just lands me on the list view and shows the number status as pending. My business is already verified. Has anybody experienced a similar issue? After I add the number it should send an SMS code or call the given number, but instead it redirects to a page where the number shows in the list view with status pending. I have tried at least three different numbers; none of them has been on WhatsApp Business before.
Whatsapp Manager number verification pending
|whatsapp|whatsapp-cloud-api|
null
There's the following table, called `fields`:

[![enter image description here][1]][1]

And there's a dedicated table to store its values, called `values`:

[![enter image description here][2]][2]

I want to run a query to produce the following output:

    Finished | Faculty  | Characteristic | Photo
    ---------------------------------------------
    1        | Math     | Good           |
    0        | Biology  | Not Good       |

I want to build a query that outputs the aforementioned result, but it's not as easy as it seems. Following this [similar question][3], I have tried running the following query:

    SELECT flds.id,
        (case when flds.name = 'Finished' THEN vals.value END) AS Finished,
        (case when flds.name = 'Faculty' THEN vals.value END) AS Faculty,
        (case when flds.name = 'Characteristic' THEN vals.value END) AS Characteristic,
        (case when flds.name = 'Photo' THEN vals.value END) AS Photo
    FROM `values` vals
    LEFT JOIN `fields` flds ON vals.field_id = flds.id
    GROUP BY flds.id, vals.value;

Which gives me an unexpected result:

[![enter image description here][4]][4]

Is there any way to resolve it?

[1]: https://i.stack.imgur.com/ahT5u.png
[2]: https://i.stack.imgur.com/TAW4z.png
[3]: https://stackoverflow.com/questions/12004603/mysql-pivot-row-into-dynamic-number-of-columns
[4]: https://i.stack.imgur.com/9WV5v.png
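To make the problem reproducible without the screenshots, here is a minimal sqlite3 script with the same two-table shape. The `record_id` column is my assumption about how several values belong to one logical row (the real column name differs); it contrasts my query's grouping with a per-record grouping using `MAX()`:

```python
import sqlite3

# Minimal stand-in for the two tables; record_id is an assumed column
# tying the values of one logical row together.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE fields (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE "values" (id INTEGER PRIMARY KEY, field_id INTEGER,
                       record_id INTEGER, value TEXT);
INSERT INTO fields VALUES (1,'Finished'),(2,'Faculty'),(3,'Characteristic'),(4,'Photo');
INSERT INTO "values" (field_id, record_id, value) VALUES
  (1,1,'1'),(2,1,'Math'),(3,1,'Good'),
  (1,2,'0'),(2,2,'Biology'),(3,2,'Not Good');
""")

# My query's shape (trimmed to two pivot columns): grouping by the field id
# and value keeps one row per field/value pair instead of one row per record.
broken = con.execute("""
SELECT flds.id,
       (CASE WHEN flds.name = 'Finished' THEN vals.value END) AS Finished,
       (CASE WHEN flds.name = 'Faculty' THEN vals.value END) AS Faculty
FROM "values" vals LEFT JOIN fields flds ON vals.field_id = flds.id
GROUP BY flds.id, vals.value
""").fetchall()

# Grouping per record and aggregating the CASE columns collapses the pivot.
fixed = con.execute("""
SELECT vals.record_id,
       MAX(CASE WHEN flds.name = 'Finished' THEN vals.value END) AS Finished,
       MAX(CASE WHEN flds.name = 'Faculty' THEN vals.value END) AS Faculty,
       MAX(CASE WHEN flds.name = 'Characteristic' THEN vals.value END) AS Characteristic,
       MAX(CASE WHEN flds.name = 'Photo' THEN vals.value END) AS Photo
FROM "values" vals LEFT JOIN fields flds ON vals.field_id = flds.id
GROUP BY vals.record_id
ORDER BY vals.record_id
""").fetchall()

print(broken)  # one row per field/value pair, mostly NULLs
print(fixed)   # one row per record, pivoted as desired
```

The first result set mirrors the sparse output in my screenshot; the second is the shape I'm after.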
I have a problem with writing a dots and boxes game with alpha-beta pruning. The whole program works, but the algorithm sometimes makes some stupid moves, and I think it generates a wrong evaluation of the state. Could you please look at it and maybe tell me what I have done wrong? I'm completely stuck on it. (I know this code might be a little too chaotic and not optimized. If you have some suggestions to improve it in this area too, please give me some tips :) )

```
class MinMax:
    '''a class containing Min-Max algorithm
    it uses recursion to evaluate given state (node)'''
    def __init__(self, depth) -> None:
        '''define hyperparameters of the algorithm'''
        self._depth = depth

    @property
    def depth(self):
        return self._depth

    def solve(self, state, alfa=-float('inf'), beta=float('inf'), depth=None):
        '''solver using MinMax algorithm with alpha-beta pruning'''
        if depth is None:
            depth = self.depth
        children = state.generate_children()
        if len(children) == 0 or depth == 0:
            return state.evaluation
        if state.player == 1:  # max turn
            for child in children:
                if state.player == child.player:
                    alfa = max(alfa, self.solve(child, alfa, beta, depth))
                else:
                    alfa = max(alfa, self.solve(child, alfa, beta, depth-1))
                if alfa >= beta:
                    return beta
            return alfa
        else:  # min turn
            for child in children:
                if state.player == child.player:
                    beta = min(beta, self.solve(child, alfa, beta, depth))
                else:
                    beta = min(beta, self.solve(child, alfa, beta, depth-1))
                if alfa >= beta:
                    return alfa
            return beta
```

```
from __future__ import annotations
import numpy as np
import copy


class DostsAndBoxesState:
    def __init__(self, player: bool, dimentions: int, state=None, evaluation=0) -> None:
        '''class representing a single Node
        ::player: 0 for Min, 1 for Max
        ::state: state of the game in the node
        ::dimentions: dims of the game board
        '''
        self.player = player
        self.dimentions = dimentions
        self.evaluation = evaluation
        if not state:
            self.state = self.generate_first_state()
        else:
            self.state = state

    def generate_first_state(self):
        state = []
        for i in range(2*self.dimentions - 1):
            if i % 2 == 1:
                state.append(np.full(self.dimentions, -1))
            else:
                state.append(np.full(self.dimentions - 1, -1))
        return state

    def generate_children(self) -> None:
        '''
        children: states one state under current state
        while generating, it makes the evaluation based on
        evaluation of the parent and potentially found box
        '''
        childen = []
        for row_index, row in enumerate(self.state):
            for connection_index, connection in enumerate(row):
                if connection == -1:
                    child = copy.deepcopy(self.state)
                    new_row = copy.deepcopy(row)
                    new_row[connection_index] = self.player  # replace -1 with the player
                    child[row_index] = new_row
                    boxes = self.find_box(row_index, connection_index)
                    if boxes:
                        component = len(boxes) if self.player else -len(boxes)
                        evaluation = self.evaluation + component
                        childen.append(DostsAndBoxesState(self.player, self.dimentions, child, evaluation))
                    else:
                        childen.append(DostsAndBoxesState(not self.player, self.dimentions, child, self.evaluation))
        return childen

    def find_box(self, row_index, column_index) -> np.array[list[np.array]]:
        '''
        looks for the boxes from up, down, left and right
        return array of booleans up_down / left_right
        '''
        boxes = []
        if row_index % 2 == 0:
            up_down = [0 if row_index == 0 else 1,
                       0 if row_index == (2*self.dimentions - 1) - 1 else 1]
            for indx, direction in enumerate(up_down):
                if direction == 1:  # if it's possible to go either up or down
                    row1 = row_index+2*((-1)**(indx+1))
                    row2 = row_index+1*((-1)**(indx+1))
                    if self.state[row1][column_index] != -1 and \
                            self.state[row2][column_index] != -1 and \
                            self.state[row2][column_index+1] != -1:
                        boxes.append(True)
        else:
            left_right = [0 if column_index == 0 else 1,
                          0 if column_index == self.dimentions - 1 else 1]
            for indx, direction in enumerate(left_right):
                if direction == 1:  # if it's possible to go either left or right
                    column = column_index-1 if indx == 0 else column_index
                    if self.state[row_index][column] != -1 and \
                            self.state[row_index-1][column] != -1 and \
                            self.state[row_index+1][column] != -1:
                        boxes.append(True)
        return boxes

    def __str__(self) -> str:
        '''
        prints out the state
        '''
        result = ""
        for index, row in enumerate(self.state):
            if index % 2 == 1:
                symbols = [' ' if connection == -1 else '|' for connection in row]
                line = [symbol+" " for symbol in symbols]
                str_line = "".join(line)
                result = result + str_line + "\n"
            else:
                symbols = [' ' if connection == -1 else '-' for connection in row]
                line = ["*"+symbol for symbol in symbols]
                str_line = "".join(line) + "*"
                result = result + str_line + "\n"
        return result
```

```
from __future__ import annotations
from time import sleep
from copy import deepcopy
from random import random
import numpy as np
from State import DostsAndBoxesState
from MinMax import MinMax


class DotsAndBoxes:
    def __init__(self, dimentions, depth, player_mode=1) -> None:
        self.dimentions = dimentions
        self.minmax_evaluator = MinMax(depth)
        self.player_mode = player_mode
        self.first_state = DostsAndBoxesState(1, self.dimentions)

    def bot_make_move(self, state, children):
        '''
        makes a move based on alpha-beta pruning from MinMax class
        '''
        if state.player:
            the_best_move = -float('inf')
            for new_state in children:
                evaluation = self.minmax_evaluator.solve(new_state)
                if evaluation == the_best_move:
                    draw = random()
                    state = new_state if draw < 0.5 else state
                elif evaluation > the_best_move:
                    the_best_move = evaluation
                    state = new_state
        else:
            the_best_move = float('inf')
            for new_state in children:
                evaluation = self.minmax_evaluator.solve(new_state)
                if evaluation == the_best_move:
                    draw = random()
                    state = new_state if draw < 0.5 else state
                elif evaluation < the_best_move:
                    the_best_move = evaluation
                    state = new_state
        return state

    def player_make_move(self, state):
        '''player's move'''
        player_move = [float('inf'), float('inf')]
        while True:
            player_move = input("Make your move: ").split(",")
            try:
                player_move = [int(player_move[0]), int(player_move[1])]
            except Exception:
                player_move = [float('inf'), float('inf')]
                continue
            try:
                if state.state[int(player_move[0])][int(player_move[1])] != -1:
                    player_move = [float('inf'), float('inf')]
                    continue
            except IndexError:
                player_move = [float('inf'), float('inf')]
                continue
            break
        state.state[int(player_move[0])][int(player_move[1])] = self.player_mode
        if not state.find_box(int(player_move[0]), int(player_move[1])):
            state.player = not state.player
        return state

    def check_scored(self, state, previous_state, message, score):
        '''if the box has been just completed'''
        for index, row in enumerate(state.state):
            difference = np.where(row != previous_state.state[index])[0]
            if len(difference) != 0:
                box = state.find_box(index, difference[0])
                if box:
                    print(message)
                    score[state.player] += len(box)
                break

    def play(self):
        '''
        the game itself
        '''
        state = self.first_state
        score = [0, 0]
        player_name = "Max" if self.player_mode else "Min"
        print(f"Welcome to the game, you are playing as {player_name}")
        while True:
            if state.player:  # Max turn
                message = "Max scored!"
                children = state.generate_children()
                if len(children) == 0:
                    break
                print(state)
                print('\n')
                previous_state = deepcopy(state)
                if self.player_mode:
                    state = self.player_make_move(state)
                else:
                    state = self.bot_make_move(state, children)
                self.check_scored(state, previous_state, message, score)
            if not state.player:  # Min turn
                message = "Min scored!"
                children = state.generate_children()
                if len(children) == 0:
                    winner = "Max"
                    break
                print(state)
                print('\n')
                previous_state = deepcopy(state)
                if not self.player_mode:
                    state = self.player_make_move(state)
                else:
                    state = self.bot_make_move(state, children)
                self.check_scored(state, previous_state, message, score)
        # Who won?
        print(state)
        if score[0] == score[1]:
            print("Tie!")
        else:
            winner = "Max" if score.index(max(score)) == 1 else "Min"
            print(f"The winner is: {winner}")
```

The algorithm is making stupid decisions; there might be something wrong with the evaluation function?
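To rule out the pruning recursion itself, I also checked plain alpha-beta on a hand-built toy tree, independent of my classes (this standalone sketch is not my game code, just the textbook recursion with strictly alternating players, unlike my game where completing a box grants an extra move):

```python
# Alpha-beta over an explicit game tree: inner nodes are lists, leaves are numbers.
def alphabeta(node, maximizing, alpha=-float('inf'), beta=float('inf')):
    if not isinstance(node, list):      # leaf: static evaluation
        return node
    if maximizing:
        for child in node:
            alpha = max(alpha, alphabeta(child, False, alpha, beta))
            if alpha >= beta:           # beta cut-off
                break
        return alpha
    else:
        for child in node:
            beta = min(beta, alphabeta(child, True, alpha, beta))
            if alpha >= beta:           # alpha cut-off
                break
        return beta

# max( min(max(5,6), max(7,4,5)), min(max(3)) ) = max(min(6,7), 3) = 6
tree = [[[5, 6], [7, 4, 5]], [[3]]]
print(alphabeta(tree, True))  # 6
```

This gives the expected minimax values, which is why I suspect the evaluation bookkeeping in `generate_children` rather than the pruning.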
Dots and Boxes with alpha-beta pruning
|python|algorithm|artificial-intelligence|game-development|alpha-beta-pruning|
null
{"OriginalQuestionIds":[54957941],"Voters":[{"Id":5577765,"DisplayName":"Rabbid76","BindingReason":{"GoldTagBadge":"pygame"}}]}
I would like to create custom history back and forwards buttons using a React hook. For reference, in case it helps, I'm trying to replicate the behaviour that can be seen in Spotify's web app. Their custom forwards and back buttons integrate seamlessly with the browser history buttons.

I think I have it mostly working, but with one issue. Here is my React hook:

```javascript
import { useState, useEffect } from 'react';
import { useHistory } from 'react-router-dom';

const useNavigationHistory = () => {
  const history = useHistory();
  const [length, setLength] = useState(0);
  const [direction, setDirection] = useState(null);
  const [historyStack, setHistoryStack] = useState([]);
  const [futureStack, setFutureStack] = useState([]);

  const canGoBack = historyStack.length > 0;
  const canGoForward = futureStack.length > 0;

  const goBack = () => {
    if (canGoBack) {
      history.goBack();
    }
  };

  const goForward = () => {
    if (canGoForward) {
      history.goForward();
    }
  };

  useEffect(() => {
    return history.listen((location, action) => {
      // if action is PUSH we are going forwards
      if (action === 'PUSH') {
        setDirection('forwards');
        setLength(length + 1);
        // add the new location to the historyStack
        setHistoryStack([...historyStack, location.pathname]);
        // clear the futureStack because it is not possible to go forward from here
        setFutureStack([]);
      }
      // if action is POP we could be going forwards or backwards
      else if (action === 'POP') {
        // determine if we are going forwards or backwards
        if (futureStack.length > 0 && futureStack[futureStack.length - 1] === location.pathname) {
          setDirection('forwards');
          // if we are going forwards, pop the futureStack and push it onto the historyStack
          setHistoryStack([...historyStack, futureStack.pop()]);
          setFutureStack(futureStack);
        } else {
          setDirection('backwards');
          // if we are going backwards, pop the historyStack and push it onto the futureStack
          setFutureStack([...futureStack, historyStack.pop()]);
          setHistoryStack(historyStack);
        }
        setLength(historyStack.length);
      }
    });
  }, [history, length, historyStack, futureStack]);

  return { canGoBack, canGoForward, goBack, goForward };
};

export default useNavigationHistory;
```

In my testing this all seems to work fine when navigating forwards and back between various different pages.

## The Problem

If I navigate forwards by alternating between the same 2 pages, for example:

```
/home
/about
/home
/about
/home
/about
```

...then my logic to determine if we are going forwards or backwards falls apart. I think it's this line:

```
if (futureStack.length > 0 && futureStack[futureStack.length - 1] === location.pathname) {
```

because the forwards pathname and the backwards pathname are identical, so it thinks I'm going forwards even when I'm going backwards. I've been trying to figure out how I could resolve this, but haven't managed to get something working. Is anyone able to help? Maybe my solution is flawed and I need an entirely different method, I'm not sure.
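To show the flaw outside React, here is a stripped-down sketch of the same stack bookkeeping as plain functions (no router; `createTracker` and the method names are mine). After pushing /home, /about, /home, /about, the second browser "back" is misclassified because /about sits on top of the future stack:

```javascript
// Minimal model of the hook's forwards/backwards decision logic.
function createTracker() {
  const historyStack = [];
  const futureStack = [];
  return {
    push(pathname) {            // user navigates to a new page (PUSH)
      historyStack.push(pathname);
      futureStack.length = 0;   // can no longer go forward
    },
    pop(pathname) {             // browser back/forward button fired a POP
      // Same test as the hook: compare the future stack's top pathname.
      if (futureStack.length > 0 && futureStack[futureStack.length - 1] === pathname) {
        historyStack.push(futureStack.pop());
        return 'forwards';
      }
      futureStack.push(historyStack.pop());
      return 'backwards';
    },
  };
}

const t = createTracker();
['/home', '/about', '/home', '/about'].forEach(p => t.push(p));
console.log(t.pop('/home'));   // first back: correctly 'backwards'
console.log(t.pop('/about'));  // second back: wrongly 'forwards'
```

So pathname equality alone cannot disambiguate the two directions once the same pages alternate.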
{"OriginalQuestionIds":[19271169],"Voters":[{"Id":2943403,"DisplayName":"mickmackusa","BindingReason":{"GoldTagBadge":"php"}}]}
A super minimalist solution is to simply assign your variable to another variable that expects to be of type `never`, so that your code doesn't compile if a new `job.type` type is added:

```
if (job.type === 'add') {
  add(job)
} else if (job.type === 'send') {
  send(job)
} else {
  const check: never = job
}
```

This has the advantage of being a simple one-liner, but has the disadvantage of not throwing at runtime (if this somehow got compiled) and may result in a linter error for the unused `check` variable.
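If the runtime gap matters, a common variant replaces the bare assignment with a small helper that both exhausts the type and throws. The helper name `assertNever` and the `Job` shape below are illustrative, not from any library:

```typescript
type Job = { type: 'add' } | { type: 'send' };

// Exhaustiveness helper: only callable with `never` at compile time,
// loud at runtime if unexpected data slips through anyway.
function assertNever(value: never): never {
  throw new Error(`Unhandled job type: ${JSON.stringify(value)}`);
}

function handle(job: Job): string {
  if (job.type === 'add') {
    return 'added';
  } else if (job.type === 'send') {
    return 'sent';
  } else {
    // If a new member is added to Job, this call stops compiling,
    // because `job` is no longer narrowed to `never` here.
    return assertNever(job);
  }
}
```

This also sidesteps the unused-variable lint warning, since the narrowed value is consumed by the call.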
Hi everyone, I'm having problems saving photos from Summernote. My project uses Spring Boot with JSP. When I add photos to Summernote and press save, Spring Boot raises the error "Maximum upload size exceeded". The local image size is 8 MB. I tried to configure the allowed image size, but it didn't work. When an image is added using a file-type input box, it is saved normally. I hope to receive help from others. Thanks a lot!

    spring.servlet.multipart.max-file-size=50MB
    spring.servlet.multipart.max-request-size=-1
    server.tomcat.max-swallow-size=-1

or

    spring.servlet.multipart.max-file-size=-1
    spring.servlet.multipart.max-request-size=-1
    server.tomcat.max-swallow-size=-1

and

    $('.summernote').summernote({
        height: 350,
        maximumImageFileSize: 52428800
    });
Maximum upload size exceeded when saving photos in summernote
|spring-boot|file-upload|max|summernote|
null
1. Return a pointer to the `Node` of interest instead of updating the out parameter `head_dest`. The stack will implicitly hold `cur_list_dest`.
1. The original implementation changes the original list. To create a new list you need to return a pointer to a *copy* of the smallest node or NULL. To emphasize that I made the argument constant with `const Node *head`.
1. Observe that `pairWiseMinimumInNewList_Rec()` either returns a copy of the 1st node, the 2nd node, or NULL (3 values). The recursive call is either two nodes ahead or NULL (2 values). For at most 6 cases. If you make the values conditional instead of expressions you can collapse that into a single return statement.

```
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

typedef struct Node {
    int val;
    struct Node *pt_next;
} Node;

Node *linked_list_new(int val, Node *pt_next) {
    Node *n = malloc(sizeof *n);
    if(!n) {
        printf("malloc failed\n");
        exit(1);
    }
    n->val = val;
    n->pt_next = pt_next;
    return n;
}

Node *linked_list_create(size_t n, int *vals) {
    Node *head = NULL;
    Node **cur = &head;
    for(size_t i=0; i < n; i++) {
        *cur = linked_list_new(vals[i], NULL);
        if(!head) head = *cur;
        cur = &(*cur)->pt_next;
    }
    return head;
}

void linked_list_print(Node *head) {
    for(; head; head=head->pt_next)
        printf("%d->", head->val);
    printf("NULL\n");
}

void linked_list_free(Node *head) {
    while(head) {
        Node *tmp = head->pt_next;
        free(head);
        head=tmp;
    }
}

Node *pairWiseMinimumInNewList_Rec(const Node* head) {
    return head ? linked_list_new(
        (head->pt_next && head->val < head->pt_next->val) || !head->pt_next ?
            head->val :
            head->pt_next->val,
        pairWiseMinimumInNewList_Rec(head->pt_next ?
            head->pt_next->pt_next :
            NULL
        )
    ) : NULL;
}

int main() {
    Node *head=linked_list_create(7, (int []) {2,1,3,4,5,6,7});
    Node* head_dest=pairWiseMinimumInNewList_Rec(head);
    linked_list_free(head);
    linked_list_print(head_dest);
    linked_list_free(head_dest);
}
```

Example run of a test case that exercises more of the code paths than what was given:

```
1->3->5->7->NULL
```
My program uses orthographic projection with an internal resolution of 480x270 and a viewport resolution of 1920x1080 (without shaders). When I draw a rectangle directly to the default framebuffer I am able to move it around in increments of 0.25 pixels (because there are 4 framebuffer pixels for every 1 pixel of internal resolution). But when I switch from drawing directly to the framebuffer to rendering to a 1920x1080 FBO texture, instead of the 0.25-pixel movements it "snaps" and only moves 1 whole pixel at a time. Why is this?
OpenGL Framebuffer/FBO RTT subpixel movement discrepancy
|opengl|
I am making a program that solves a sudoku game, but I received an error from my compiler.

```
error: invalid use of undefined type ‘struct Square’
   30 |         if(box->squares[x]->possible[number-1] == 0){
      |                          ^~
```

The function is:

```
int updateBoxes(Square *** sudoku, int row, int column){
    int x;
    int number = sudoku[row][column]->number;
    Box * box;
    box = sudoku[row][column]->box;
    for(x = 0; x < SIZE_ROWS; x++){
        if(box->squares[x]->possible[number-1] == 0){
            box->squares[x]->solvable--;
            box->squares[x]->possible[number-1] = 1;
        }
    }
    return 1;
}
```

The error also applies to the next two lines. The definitions of `Square` and `Box` are:

```
typedef struct box {
    struct Square ** squares; /* array of Squares */
    int numbers;
    int possible[9];
    int solvable;
    struct Box * next;
} Box;

/* square is a single number in the board */
typedef struct square {
    int number;
    int possible[9]; /* the array is only made of 0 and 1
                        each index corresponds to a number: ex: index 4 == number 5 ...
                        1 tells me that 5 cannot go into this square
                        0 tells me that 5 can go into this square */
    int solvable; /* will be subtracted 1 each time and when it is 1 it's because it is solvable */
    Box * box; /* tells me the corresponding box */
    int row;
    int column;
} Square;
```

I am following a tutorial on YouTube, and the author doesn't get the error that I get. The video is: https://www.youtube.com/watch?v=CnzrCBLCDhc&t=1326s
```Because when I use the stored procedure in Crystal Report, it does not work because to create the columns it requires me to put the start and end dates, which is a variable that cannot be fixed.```

Does this mean you think the values you assign while selecting the procedure are fixed and can't be changed after design? Values passed to a stored procedure can be changed dynamically even after designing.

I add a stored procedure like this:

Select Database Expert > select the correct connection > select the DB > select the schema > select Stored Procedures > select the required procedure > click the right single arrow > it'll ask for values for the stored procedure's parameters.

The values assigned here are used only to show the columns for designing purposes. You can give different values for these parameters even after designing, and can give different values on each run.
I'm trying to run some integration tests for Kafka consumer with, > org.springframework.kafka.test.context.EmbeddedKafka Currently letting spring-boot-starter-parent to do the dependency version management responsibility. Here is the **pom.xml** file. <parent> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-parent</artifactId> <version>3.2.4</version> <relativePath/> <!-- lookup parent from repository --> </parent> <properties> <java.version>17</java.version> </properties> <dependencies> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-web</artifactId> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-test</artifactId> <scope>test</scope> </dependency> <dependency> <groupId>org.springframework.kafka</groupId> <artifactId>spring-kafka</artifactId> </dependency> <dependency> <groupId>org.springframework.kafka</groupId> <artifactId>spring-kafka-test</artifactId> <scope>test</scope> </dependency> </dependencies> Here is the Kafka consumer code, @Configuration public class KafkaEventListener { @RetryableTopic( attempts = "#{'${kafka.max.retry.attempts}'}", autoCreateTopics = "#{'${kafka.auto.create.retry.topics}'}", backoff = @Backoff( delayExpression = "#{'${kafka.retry.init-interval}'}", multiplierExpression = "#{'${kafka.retry.backoff.multiplier}'}"), include = { Exception.class }, timeout = "#{'${kafka.max.retry.duration}'}", topicSuffixingStrategy = TopicSuffixingStrategy.SUFFIX_WITH_INDEX_VALUE, dltStrategy = DltStrategy.FAIL_ON_ERROR) @KafkaListener(topics = "${kafka.topic.test}", groupId = "${kafka.group.id.test}", containerFactory = "testKafkaListenerContainerFactory") public void listen(@Payload MetadataMessage input, @Header(KafkaHeaders.OFFSET) String offset) { System.out.println(input.getValue()); } @DltHandler public void deadLetterHandler(@Payload(required = false) MetadataMessage data, @Header(KafkaHeaders.RECEIVED_TOPIC) String topic) 
{ System.out.println(String.format("Event from topic %s has been dead-lettered. Event data : %s", topic, data.toString())); } } Here is the Kafka configuration class, @Configuration public class KafkaConsumerConfig { @Value(value = "${spring.kafka.bootstrap-servers}") private String bootstrapAddress; @Value(value = "${kafka.group.id.test}") private String groupId; private Map<String, Object> consumerFactoryConfigs() { Map<String, Object> props = new HashMap<>(); props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapAddress); props.put(ConsumerConfig.GROUP_ID_CONFIG, groupId); props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest"); props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class); props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, JsonDeserializer.class); // Disabled kafka auto acknowledgement to gain more flexibility props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false); props.put(ConsumerConfig.MAX_PARTITION_FETCH_BYTES_CONFIG, "20971520"); props.put(ConsumerConfig.FETCH_MAX_BYTES_CONFIG, "20971520"); props.put(JsonDeserializer.TRUSTED_PACKAGES, "*"); return props; } @Bean public ConsumerFactory<String, MetadataMessage> metadataConsumerFactory() { return new DefaultKafkaConsumerFactory<>(consumerFactoryConfigs(), new StringDeserializer(), new ErrorHandlingDeserializer<>(new JsonDeserializer<>(MetadataMessage.class))); } @Bean public ConcurrentKafkaListenerContainerFactory<String, MetadataMessage> testKafkaListenerContainerFactory() { ConcurrentKafkaListenerContainerFactory<String, MetadataMessage> factory = new ConcurrentKafkaListenerContainerFactory<>(); factory.setConsumerFactory(metadataConsumerFactory()); factory.getContainerProperties().setAckMode(ContainerProperties.AckMode.MANUAL_IMMEDIATE); return factory; } } Here is the test class, @SpringBootTest(classes = EmbeddedKafkaApplication.class, webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT) @TestInstance(TestInstance.Lifecycle.PER_CLASS) 
@TestPropertySource(locations = { "classpath:application.properties" }) @EmbeddedKafka(partitions = 1, brokerProperties = { "listeners=PLAINTEXT://localhost:29092", "port=29092" }) public class EmbeddedKafkaTest { @Autowired private KafkaEventListener kafkaEventListener; @Test public void testKafkaEvent() { kafkaEventListener.listen(new MetadataMessage("kafka message from test"), "1", mock(Acknowledgment.class)); } } With spring-boot-starter-parent version 3.1.10 the test is working. However, when I switch to a newer version of spring-boot-starter-parent, the test fails. I can see these logs when start the application when I ran the test cases, . ____ _ __ _ _ /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ \\/ ___)| |_)| | | | | || (_| | ) ) ) ) ' |____| .__|_| |_|_| |_\__, | / / / / =========|_|==============|___/=/_/_/_/ :: Spring Boot :: (v3.1.10) 2024-03-31T16:37:37.354+05:30 INFO 26280 --- [ main] k.utils.Log4jControllerRegistration$ : Registered kafka:type=kafka.Log4jController MBean 2024-03-31T16:37:37.448+05:30 INFO 26280 --- [ main] o.a.zookeeper.server.ZooKeeperServer : 2024-03-31T16:37:37.448+05:30 INFO 26280 --- [ main] o.a.zookeeper.server.ZooKeeperServer : ______ _ 2024-03-31T16:37:37.448+05:30 INFO 26280 --- [ main] o.a.zookeeper.server.ZooKeeperServer : |___ / | | 2024-03-31T16:37:37.448+05:30 INFO 26280 --- [ main] o.a.zookeeper.server.ZooKeeperServer : / / ___ ___ | | __ ___ ___ _ __ ___ _ __ 2024-03-31T16:37:37.448+05:30 INFO 26280 --- [ main] o.a.zookeeper.server.ZooKeeperServer : / / / _ \ / _ \ | |/ / / _ \ / _ \ | '_ \ / _ \ | '__| 2024-03-31T16:37:37.448+05:30 INFO 26280 --- [ main] o.a.zookeeper.server.ZooKeeperServer : / /__ | (_) | | (_) | | < | __/ | __/ | |_) | | __/ | | 2024-03-31T16:37:37.448+05:30 INFO 26280 --- [ main] o.a.zookeeper.server.ZooKeeperServer : /_____| \___/ \___/ |_|\_\ \___| \___| | .__/ \___| |_| 2024-03-31T16:37:37.448+05:30 INFO 26280 --- [ main] 
o.a.zookeeper.server.ZooKeeperServer : | | 2024-03-31T16:37:37.448+05:30 INFO 26280 --- [ main] o.a.zookeeper.server.ZooKeeperServer : |_| 2024-03-31T16:37:37.448+05:30 INFO 26280 --- [ main] o.a.zookeeper.server.ZooKeeperServer : 2024-03-31T16:37:37.460+05:30 INFO 26280 --- [ main] o.a.zookeeper.server.ZooKeeperServer : Server environment:zookeeper.version=3.6.4--d65253dcf68e9097c6e95a126463fd5fdeb4521c, built on 12/18/2022 18:10 GMT 2024-03-31T16:37:37.460+05:30 INFO 26280 --- [ main] o.a.zookeeper.server.ZooKeeperServer : Server environment:host.name=SADEEP-M.Zone24x7.lk 2024-03-31T16:37:37.460+05:30 INFO 26280 --- [ main] o.a.zookeeper.server.ZooKeeperServer : Server environment:java.version=17.0.8.1 2024-03-31T16:37:37.460+05:30 INFO 26280 --- [ main] o.a.zookeeper.server.ZooKeeperServer : Server environment:java.vendor=Amazon.com Inc. Final log lines of success execution with 3.1.10 version, 2024-03-31T16:37:41.551+05:30 INFO 26280 --- [ner#0-dlt-0-C-1] o.s.k.l.KafkaMessageListenerContainer : testGroupId-dlt: partitions assigned: [testTopic-dlt-0] 2024-03-31T16:37:41.551+05:30 INFO 26280 --- [0-retry-1-0-C-1] o.s.k.l.KafkaMessageListenerContainer : testGroupId-retry-1: partitions assigned: [testTopic-retry-1-0] 2024-03-31T16:37:41.551+05:30 INFO 26280 --- [ntainer#0-0-C-1] o.s.k.l.KafkaMessageListenerContainer : testGroupId: partitions assigned: [testTopic-0] 2024-03-31T16:37:41.551+05:30 INFO 26280 --- [0-retry-0-0-C-1] o.s.k.l.KafkaMessageListenerContainer : testGroupId-retry-0: partitions assigned: [testTopic-retry-0-0] kafka message from test I can see the ZooKeeperServer is running clearly. But after I changed to the newer version such as 3.2.0 or any latest versions (3.2.4). I cannot see that ZooKeeperServer logs and test also is falling to execute. 
Here are some logs from that, 2024-03-31T16:46:29.072+05:30 INFO 25024 --- [embedded-kafka] [ main] o.a.kafka.common.utils.AppInfoParser : Kafka version: 3.6.1 2024-03-31T16:46:29.072+05:30 INFO 25024 --- [embedded-kafka] [ main] o.a.kafka.common.utils.AppInfoParser : Kafka commitId: 5e3c2b738d253ff5 2024-03-31T16:46:29.072+05:30 INFO 25024 --- [embedded-kafka] [ main] o.a.kafka.common.utils.AppInfoParser : Kafka startTimeMs: 1711883789072 2024-03-31T16:46:29.075+05:30 INFO 25024 --- [embedded-kafka] [| adminclient-2] org.apache.kafka.clients.NetworkClient : [AdminClient clientId=adminclient-2] Node -1 disconnected. 2024-03-31T16:46:29.075+05:30 WARN 25024 --- [embedded-kafka] [| adminclient-2] org.apache.kafka.clients.NetworkClient : [AdminClient clientId=adminclient-2] Connection to node -1 (localhost/127.0.0.1:29092) could not be established. Broker may not be available. 2024-03-31T16:46:29.191+05:30 INFO 25024 --- [embedded-kafka] [| adminclient-2] org.apache.kafka.clients.NetworkClient : [AdminClient clientId=adminclient-2] Node -1 disconnected. 2024-03-31T16:46:29.191+05:30 WARN 25024 --- [embedded-kafka] [| adminclient-2] org.apache.kafka.clients.NetworkClient : [AdminClient clientId=adminclient-2] Connection to node -1 (localhost/127.0.0.1:29092) could not be established. Broker may not be available. 2024-03-31T16:46:29.300+05:30 INFO 25024 --- [embedded-kafka] [| adminclient-2] org.apache.kafka.clients.NetworkClient : [AdminClient clientId=adminclient-2] Node -1 disconnected. 2024-03-31T16:46:29.300+05:30 WARN 25024 --- [embedded-kafka] [| adminclient-2] org.apache.kafka.clients.NetworkClient : [AdminClient clientId=adminclient-2] Connection to node -1 (localhost/127.0.0.1:29092) could not be established. Broker may not be available. 2024-03-31T16:46:29.518+05:30 INFO 25024 --- [embedded-kafka] [| adminclient-2] org.apache.kafka.clients.NetworkClient : [AdminClient clientId=adminclient-2] Node -1 disconnected. 
The ZooKeeper logs are also not visible. Here is the startup output:

```
  .   ____          _            __ _ _
 /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
 \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
  '  |____| .__|_| |_|_| |_\__, | / / / /
 =========|_|==============|___/=/_/_/_/

 :: Spring Boot ::                (v3.2.4)

2024-03-31T16:46:26.369+05:30 INFO 25024 --- [embedded-kafka] [ main] k.utils.Log4jControllerRegistration$ : Registered kafka:type=kafka.Log4jController MBean
2024-03-31T16:46:26.387+05:30 INFO 25024 --- [embedded-kafka] [ main] org.apache.zookeeper.common.X509Util : Setting -Djdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation
2024-03-31T16:46:26.501+05:30 INFO 25024 --- [embedded-kafka] [-kit-executor-1] kafka.server.ControllerServer : Formatting C:\Users\sandeepm\AppData\Local\Temp\kafka-16755125128047995461\controller_0 with metadata.version 3.3-IV0.
2024-03-31T16:46:26.503+05:30 INFO 25024 --- [embedded-kafka] [-kit-executor-3] kafka.server.BrokerServer : [BrokerServer id=0] Transition from SHUTDOWN to STARTING
2024-03-31T16:46:26.503+05:30 INFO 25024 --- [embedded-kafka] [-kit-executor-2] kafka.server.ControllerServer : [ControllerServer id=0] Starting controller
2024-03-31T16:46:26.504+05:30 INFO 25024 --- [embedded-kafka] [-kit-executor-3] kafka.server.SharedServer : [SharedServer id=0] Starting SharedServer
2024-03-31T16:46:26.524+05:30 INFO 25024 --- [embedded-kafka] [-kit-executor-2] o.a.k.s.network.EndpointReadyFutures : authorizerStart completed for endpoint CONTROLLER. Endpoint is now READY.
2024-03-31T16:46:26.596+05:30 INFO 25024 --- [embedded-kafka] [-kit-executor-3] kafka.log.UnifiedLog$ : [LogLoader partition=__cluster_metadata-0, dir=C:\Users\sandeepm\AppData\Local\Temp\kafka-16755125128047995461\controller_0] Loading producer state till offset 0 with message format version 2
2024-03-31T16:46:26.597+05:30 INFO 25024 --- [embedded-kafka] [-kit-executor-3] kafka.log.UnifiedLog$ : [LogLoader partition=__cluster_metadata-0, dir=C:\Users\sandeepm\AppData\Local\Temp\kafka-16755125128047995461\controller_0] Reloading from producer snapshot and rebuilding producer state from offset 0
2024-03-31T16:46:26.597+05:30 INFO 25024 --- [embedded-kafka] [-kit-executor-3] kafka.log.UnifiedLog$ : [LogLoader partition=__cluster_metadata-0, dir=C:\Users\sandeepm\AppData\Local\Temp\kafka-16755125128047995461\controller_0] Producer state recovery took 0ms for snapshot load and 0ms for segment recovery from offset 0
2024-03-31T16:46:26.634+05:30 INFO 25024 --- [embedded-kafka] [-kit-executor-3] kafka.raft.KafkaMetadataLog$ : Initialized snapshots with IDs SortedSet() from C:\Users\sandeepm\AppData\Local\Temp\kafka-16755125128047995461\controller_0\__cluster_metadata-0
2024-03-31T16:46:26.671+05:30 INFO 25024 --- [embedded-kafka] [piration-reaper] ExpirationService$ExpiredOperationReaper : [raft-expiration-reaper]: Starting
2024-03-31T16:46:26.716+05:30 INFO 25024 --- [embedded-kafka] [-kit-executor-3] org.apache.kafka.raft.QuorumState : [RaftManager id=0] Completed transition to Unattached(epoch=0, voters=[0], electionTimeoutMs=1955) from null
2024-03-31T16:46:26.722+05:30 INFO 25024 --- [embedded-kafka] [-kit-executor-3] org.apache.kafka.raft.QuorumState : [RaftManager id=0] Completed transition to CandidateState(localId=0, epoch=1, retries=1, voteStates={0=GRANTED}, highWatermark=Optional.empty, electionTimeoutMs=1252) from Unattached(epoch=0, voters=[0], electionTimeoutMs=1955)
2024-03-31T16:46:26.727+05:30 INFO 25024 --- [embedded-kafka] [-kit-executor-3] org.apache.kafka.raft.QuorumState : [RaftManager id=0] Completed transition to Leader(localId=0, epoch=1, epochStartOffset=0, highWatermark=Optional.empty, voterStates={0=ReplicaState(nodeId=0, endOffset=Optional.empty, lastFetchTimestamp=-1, lastCaughtUpTimestamp=-1, hasAcknowledgedLeader=true)}) from CandidateState(localId=0, epoch=1, retries=1, voteStates={0=GRANTED}, highWatermark=Optional.empty, electionTimeoutMs=1252)
2024-03-31T16:46:26.800+05:30 INFO 25024 --- [embedded-kafka] [-kit-executor-2] kafka.network.ConnectionQuotas : Updated connection-accept-rate max connection creation rate to 2147483647
```

I'm having trouble with the dependency version and need some assistance. I want to run all my test cases using an embedded Kafka broker, without an external Kafka cluster. Could you please help me with this? Thank you!
Embedded Kafka Failed to Start After Spring Starter Parent Version 3.1.10
Apache Reverse Proxy: only one proxy directive is working. Second one is ignored
|apache|proxy|reverse-proxy|proxypass|
I'm working on a WordPress/Dokan plugin. I am looking for a way to add a new field with two selectable options, New and Used. Visitors should also be able to filter the products on the shop page or search results page by New and Used. I would really appreciate it if someone could help me. I tried the following code, but when I add it to the child theme, it disrupts the website.

```
// ****adding extra field in single product page START****

// Adding extra field on New product popup/without popup form
add_action( 'dokan_new_product_after_product_tags', 'new_product_field', 10 );
function new_product_field() {
    ?>
    <div class="dokan-form-group">
        <label class="dokan-control-label" for=""><?php _e( 'Item condition', 'dokan' ); ?></label>
        <div class="dokan-form-group">
            <select name="new_field" class="dokan-form-control">
                <option value=""><?php _e( '', 'dokan' ) ?></option>
                <option value="new"><?php _e( 'New', 'dokan' ) ?></option>
                <option value="used"><?php _e( 'Used', 'dokan' ) ?></option>
            </select>
        </div>
    </div>
    <?php
}

/*
 * Saving product field data for edit and update
 */
add_action( 'dokan_new_product_added', 'save_add_product_meta', 10, 2 );
add_action( 'dokan_product_updated', 'save_add_product_meta', 10, 2 );
function save_add_product_meta( $product_id, $postdata ) {
    if ( ! dokan_is_user_seller( get_current_user_id() ) ) {
        return;
    }
    // if ( ! empty( $postdata['new_field'] ) ) {
    //     update_post_meta( $product_id, 'new_field', $postdata['new_field'] );
    // }
    update_post_meta( $product_id, 'new_field', $postdata['new_field'] );
}

/*
 * Showing field data on product edit page
 */
add_action( 'dokan_product_edit_after_product_tags', 'show_on_edit_page', 99, 2 );
function show_on_edit_page( $post, $post_id ) {
    $new_field = get_post_meta( $post_id, 'new_field', true );
    ?>
    <div class="dokan-form-group">
        <label class="dokan-control-label" for=""><?php _e( 'Item condition', 'dokan' ); ?></label>
        <select name="new_field" class="dokan-form-control">
            <option value="new" <?php echo ( "new_field" == 'new' ) ? 'selected' : 'Used' ?>><?php _e( 'New', 'dokan' ) ?></option>
            <option value="used" <?php echo ( "new_field" == 'used' ) ? 'selected' : 'New' ?>><?php _e( 'Used', 'dokan' ) ?></option>
        </select>
    </div>
    <?php
}

// showing on single product page
add_action( 'woocommerce_single_product_summary', 'show_product_code', 13 );
function show_product_code() {
    global $product;
    if ( empty( $product ) ) {
        return;
    }
    $new_field = get_post_meta( $product->get_id(), 'new_field', true );
    if ( empty( $new_field ) ) exit;
    if ( ! empty( $new_field ) ) {
        ?>
        <span><?php echo esc_attr__( 'Item condition:', 'dokan' ); ?> <strong><?php echo esc_attr( $new_field ); ?></strong></span>
        <?php
    }
}
// ****adding extra field in single product page END****
```
How to Add a New Field to the Dokan Product Form to Specify New and Used Products, and Filter Products by Condition
Yes, if you are building this in the browser with HTML and JS, you can use the following code to achieve it. By changing the template inside `amountInput.oninput`, you can change the maximum amount.

HTML

    <label>Montant Rechargement</label>
    <br />
    <input autocomplete="cc-number" id="amountInput" name="amountInput" type="tel" placeholder="1 000 000" />

JS

    const amountInput = document.getElementById("amountInput");

    amountInput.oninput = (e) => {
      let cursorPos = e.target.selectionStart;
      let currentValue = e.target.value;
      let cleanValue = currentValue.replace(/\D/g, "");
      let formatInput = patternMatch({ input: cleanValue, template: "x xxx xxx xxx" });
      e.target.value = formatInput;
      // e.data is null when the input event was a deletion (e.g. Backspace)
      let isBackspace = (e?.data == null);
      let nextCusPos = nextDigit(formatInput, cursorPos, isBackspace);
      amountInput.setSelectionRange(nextCusPos + 1, nextCusPos + 1);
    };

    // Find the nearest digit position to keep the caret on, searching
    // backwards after a deletion and forwards after an insertion.
    function nextDigit(input, cursorpos, isBackspace) {
      if (isBackspace) {
        for (let i = cursorpos - 1; i > 0; i--) {
          if (/\d/.test(input[i])) {
            return i;
          }
        }
      } else {
        for (let i = cursorpos - 1; i < input.length; i++) {
          if (/\d/.test(input[i])) {
            return i;
          }
        }
      }
      return cursorpos;
    }

    // Fill the "x" slots of the template with the digits of the input,
    // truncating the template when the input runs out.
    function patternMatch({ input, template }) {
      try {
        let j = 0;
        let plaintext = "";
        let countj = 0;
        while (j < template.length) {
          // stop once every input digit has been consumed
          if (countj > input.length - 1) {
            template = template.substring(0, j);
            break;
          }
          if (template[j] == input[j]) {
            j++;
            countj++;
            continue;
          }
          if (template[j] == "x") {
            template = template.substring(0, j) + input[countj] + template.substring(j + 1);
            plaintext = plaintext + input[countj];
            countj++;
          }
          j++;
        }
        return template;
      } catch {
        return "";
      }
    }
I have a DataGridView containing data about different customers, and my objective is to print each row on a separate page. However, I'm facing a problem where the data from two consecutive rows prints on the same page, overlapping each other. For example, if I have four rows in my DataGridView, the content of rows 1 and 2 prints on the same page, and similarly for rows 3 and 4.

Below is the relevant code snippet where I handle the printing logic:

```
private int currentRow = 0;

private void printDocument2_PrintPage(object sender, PrintPageEventArgs e)
{
    string companyName = "چوہان ڈیری فارمز"; // Your company name
    DateTime startDate = dtm_start.Value;
    DateTime endDate = dtm_end.Value;

    // Set font and brush for drawing
    Font headingfont = new Font("Times New Roman", 18, FontStyle.Bold);
    Font font = new Font("Times New Roman", 13, FontStyle.Regular);
    Font linefont = new Font("Jameel Noori Nastaleeq", 12, FontStyle.Regular);
    Font infofont = new Font("Times New Roman", 11, FontStyle.Regular);
    Font nofont = new Font("Jameel Noori Nastaleeq", 9, FontStyle.Regular);
    Brush brush = Brushes.Black;
    Pen pen = new Pen(Color.Black, 2);

    // Calculate the width and height of the page
    float pageWidth = e.PageBounds.Width;
    float pageHeight = e.PageBounds.Height;

    // Measure the size of the text
    SizeF textSize = e.Graphics.MeasureString(companyName, headingfont);

    // Calculate X coordinate to center the text
    float x = (pageWidth - textSize.Width) / 2;
    int xAxis = 190;

    if (currentRow < dataGridView2.Rows.Count)
    {
        DataGridViewRow row = dataGridView2.Rows[currentRow];

        e.Graphics.DrawString(companyName, headingfont, brush, x, 17); // Adjust the Y coordinate as needed
        e.Graphics.DrawString("--------------------------------------------", linefont, brush, 20, 30); // Adjust the Y coordinate as needed
        e.Graphics.DrawString(":اکاؤنٹ نمبر", font, brush, xAxis, 60);
        e.Graphics.DrawString(":نام", font, brush, xAxis, 90);
        e.Graphics.DrawString("--------------------------------------------", linefont, brush, 20, 100);
        e.Graphics.DrawString(":تاریخ", font, brush, 225, 125);
        e.Graphics.DrawString("--------------------------------------------", linefont, brush, 20, 140);
        e.Graphics.DrawString(":سابقہ بیلنس", font, brush, xAxis, 170);
        e.Graphics.DrawString(":ٹوٹل لیٹر", font, brush, xAxis, 200);
        e.Graphics.DrawString(":دودھ رقم", font, brush, xAxis, 230);
        e.Graphics.DrawString(":پرچی رقم", font, brush, xAxis, 260);
        e.Graphics.DrawString(":بیلنس", font, brush, xAxis, 290);
        //e.Graphics.DrawString(startDate.Date.ToString(), font, brush, 160, 130);

        if (row.Cells["Id"].Value != null)
            e.Graphics.DrawString(row.Cells["Id"].Value.ToString(), font, brush, 150, 60);
        if (row.Cells["Customer Name"].Value != null)
            e.Graphics.DrawString(row.Cells["Customer Name"].Value.ToString(), font, brush, 70, 90);
        if (row.Cells["Previous Balance"].Value != null)
            e.Graphics.DrawString(row.Cells["Previous Balance"].Value.ToString(), font, brush, 70, 170);

        // write status in urdu for previous balance
        if (row.Cells["pStatus"].Value != null)
        {
            string pStatus = row.Cells["pStatus"].Value.ToString();
            if (pStatus == "Credit")
            {
                e.Graphics.DrawString("جمع", font, brush, 40, 170);
            }
            else
            {
                e.Graphics.DrawString("بنام", font, brush, 40, 170);
            }
        }

        if (row.Cells["Total Liters"].Value != null)
            e.Graphics.DrawString(row.Cells["Total Liters"].Value.ToString(), font, brush, 70, 200);
        if (row.Cells["Milk Amount"].Value != null)
            e.Graphics.DrawString(row.Cells["Milk Amount"].Value.ToString(), font, brush, 70, 230);
        if (row.Cells["Parchi Amount"].Value != null)
            e.Graphics.DrawString(row.Cells["Parchi Amount"].Value.ToString(), font, brush, 70, 260);
        if (row.Cells["Closing Balance"].Value != null)
            e.Graphics.DrawString(row.Cells["Closing Balance"].Value.ToString(), font, brush, 70, 290);

        if (row.Cells["Status"].Value != null)
        {
            string cStatus = row.Cells["Status"].Value.ToString();
            if (cStatus == "Credit")
            {
                e.Graphics.DrawString("جمع", font, brush, 40, 290);
            }
            else
            {
                e.Graphics.DrawString("بنام", font, brush, 40, 290);
            }
        }

        e.Graphics.DrawString("--------------------------------------------", linefont, brush, 20, 310);
        e.Graphics.DrawString("کسی بھی غلط حساب کی صورت میں جلد از جلد", infofont, brush, 19, 340);
        e.Graphics.DrawString("ہم سے رابطہ کریں۔ شکریہ", infofont, brush, 130, 365);
        e.Graphics.DrawString("03346565189 :فون نمبر" + " ", infofont, brush, 128, 395);
        e.Graphics.DrawString("آپ کے تعاون کا شکریہ", infofont, brush, 80, 425);

        currentRow++;
        e.HasMorePages = true;
    }
    else
    {
        currentRow = 0;
        e.HasMorePages = false;
    }
}
```

Despite my attempts to print each row on a separate page by setting e.HasMorePages to true after printing each row, the issue persists.
> I am using following code to get the IP of the client ... and this is outputting IPv4 for some clients, and IPv6 for some The code is doing the same thing for all clients: outputting the IP address they connected from. If someone connects using IPv6, that's the only address you can get. Imagine you have a system that assumes everyone will have a postal address in Norway. Then someone signs up with an address in the UK; there's no point trying to find a Norwegian address for them, they don't have one. Either you change the system to handle UK addresses, or you block them from signing up. You have the same two options here: * Expand your system so that it can handle IPv6 addresses as well as IPv4 addresses. This is obviously preferred, since it allows everyone to access it, as IPv6 becomes more widespread. * Block people from accessing your site with IPv6. Ideally, you'd do this at the DNS level, by not having an "AAAA" record, which advertises the IPv6 address for people to connect to; that way, users who have the ability to connect with IPv4 will do so. That would need to happen wherever the proxy is configured, because by the time the request gets to your application server it's too late.
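As a sketch of the first option (this is an illustration, not code from the answer — it assumes Python and its standard `ipaddress` module), a system can accept and canonicalize whichever address family the client arrives with, instead of assuming IPv4:

```python
import ipaddress

def normalize_client_ip(raw: str) -> str:
    """Canonicalize an IPv4 or IPv6 address string for storage/comparison."""
    addr = ipaddress.ip_address(raw.strip())
    # .compressed collapses IPv6 zero runs; IPv4 addresses come back unchanged
    return addr.compressed

print(normalize_client_ip("203.0.113.9"))      # 203.0.113.9
print(normalize_client_ip("2001:0db8::0001"))  # 2001:db8::1
```

Storing the normalized form (or `addr.packed`, a fixed 4- or 16-byte value) is one way to make a column or comparison logic family-agnostic.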
I’ll just mimic the string in a *pre* tag and do the string parsing...

    <!DOCTYPE html><head></head>
    <body onload="Process();">
    <script>
    function Process(){
      let S=Cont.innerHTML;
      S=S.replace(/\n+/g,'').replace(/ +/g,'').replace(/{/g,' {\n  ').replace(/;/g,';\n').replace(/:/g,': ').replace(/,/g,', ');
      Cont.innerHTML=S;
    }
    </script>
    <pre id="Cont">
    p { color: hsl(0deg, 100%, 50%; }
    </pre>
    </body></html>
It seems that Luxon can also be used via CDN. [Ref](https://cdnjs.com/libraries/luxon) When `https://cdnjs.cloudflare.com/ajax/libs/luxon/3.4.4/luxon.min.js` is used, how about the following sample script?

### Pattern 1:
In this pattern, the Luxon script is used by copying it into the script editor.

1. Please access `https://cdnjs.cloudflare.com/ajax/libs/luxon/3.4.4/luxon.min.js` with your browser.
2. Copy and paste the script from the URL into the script editor of Google Apps Script.
3. When the following sample script is used, a result like `2024-03-31T09:45:27.040+09:00` is obtained.

```javascript
function myFunction() {
  const res = luxon.DateTime.now().toISO();
  console.log(res); // 2024-03-31T09:45:27.040+09:00
}
```

### Pattern 2:
In this pattern, the Luxon script is loaded when the script is run.

1. When the following sample script is used, a result like `2024-03-31T09:45:27.040+09:00` is obtained.

```javascript
function myFunction() {
  // Load library.
  const url = "https://cdnjs.cloudflare.com/ajax/libs/luxon/3.4.4/luxon.min.js";
  eval(UrlFetchApp.fetch(url).getContentText());

  // Use library.
  const res = luxon.DateTime.now().toISO();
  console.log(res); // 2024-03-31T09:45:27.040+09:00
}
```

### Note:
- I could not test every method of Luxon, so I'm not sure whether all of them work.
Why don't you just validate first and then try the update?

    public function update(UpdateUserRequest $request, string $id)
    {
        // You can simply call:
        //   $request->validated();
        // which throws on failure (or does whatever you define in the
        // failedValidation() method). Or, explicitly:
        $validator = Validator::make($request->all(), $request->rules());
        if ($validator->fails()) {
            return response()->json(['errors' => $validator->errors()], 422);
        }

        $user = User::findOrFail($id);
        $user->update($request->validated());
        ...
You could try implementing a custom drag preview by adopting the [`UIDragInteractionDelegate`](https://developer.apple.com/documentation/uikit/uidraginteractiondelegate) in a [`UIViewRepresentable`](https://developer.apple.com/documentation/swiftui/uiviewrepresentable) that wraps your SwiftUI view: by using [`UIDragPreview`](https://developer.apple.com/documentation/uikit/uidragpreview) and setting the preview provider of the [`UIDragItem`](https://developer.apple.com/documentation/uikit/uidragitem), you can specify a custom view for the preview that will be the same size as the original view and without any transparency.

```
+---------------------------------+
| +----+ +-------------------+    |
| |Icon| |Title              |    |
| +----+ +-------------------+    |
|                                 |
| UIViewRepresentable wrapper     |
| +-------------------------------+
| | UIDragInteractionDelegate     |
+---------------------------------+
          |
          | Custom drag behavior
          | (UIDragInteractionDelegate)
          v
+----------------------------------+
| UIDragPreview (Custom preview)   |
| Same size, no transparency       |
+----------------------------------+
```

This is inspired from [`Client/Frontend/TabChrome/TopBar/LocationView/LocationViewTouchHandler.swift`](https://github.com/productinfo/neeva-ios/blob/fe8624ef1b7817806da79b4978b5d53a070b1766/Client/Frontend/TabChrome/TopBar/LocationView/LocationViewTouchHandler.swift#L11)

```swift
import SwiftUI
import UIKit

// Wrap your SwiftUI view in a UIViewRepresentable
struct DraggableView: UIViewRepresentable {
    var text: String

    func makeUIView(context: Context) -> UIView {
        let view = UIHostingController(rootView: DraggableTextView(text: text)).view!
        view.backgroundColor = .clear
        let dragInteraction = UIDragInteraction(delegate: context.coordinator)
        view.addInteraction(dragInteraction)
        return view
    }

    func updateUIView(_ uiView: UIView, context: Context) {}

    func makeCoordinator() -> Coordinator {
        Coordinator()
    }

    class Coordinator: NSObject, UIDragInteractionDelegate {
        func dragInteraction(_ interaction: UIDragInteraction, itemsForBeginning session: UIDragSession) -> [UIDragItem] {
            let provider = NSItemProvider(object: "Dragged content" as NSString)
            let item = UIDragItem(itemProvider: provider)
            item.previewProvider = {
                return UIDragPreview(view: interaction.view!)
            }
            return [item]
        }
    }
}

// Your original SwiftUI view
struct DraggableTextView: View {
    var text: String

    var body: some View {
        HStack {
            Image(systemName: "hand.point.up.left.fill") // Example icon
            Text(text)
            Spacer()
        }
        .padding(16)
        .background(Color.gray) // For demo purposes
    }
}

struct ContentView: View {
    var body: some View {
        DraggableView(text: "Draggable Text")
    }
}
```

The key part of the code, which should allow for a "same size, no transparency" drag preview, is the `previewProvider` closure of `UIDragItem`:

```swift
item.previewProvider = {
    return UIDragPreview(view: interaction.view!)
}
```

The `UIDragPreview` initializer is passed `interaction.view`, which is the UIView created by the `UIHostingController` with the SwiftUI view inside it. Since you are using the original view itself for the preview, it should have the same size as the original.
Since the URLs in your string are delimited by commas, you can restrict the matched characters to anything but a comma with `[^,]`:

    \b(?<=,)http[^,]*(?=,)\b

Also, you do not need `?` to make a character optional here, as `*` already means zero or more occurrences.
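A quick sketch of the pattern in action (the sample string and the use of Python's `re` module are illustrative assumptions, not from the question):

```python
import re

# The pattern from the answer: a URL bounded by commas, containing no comma itself.
pattern = r"\b(?<=,)http[^,]*(?=,)\b"

sample = "id42,http://example.com/page?a=1,active"
print(re.findall(pattern, sample))  # ['http://example.com/page?a=1']
```

The lookbehind `(?<=,)` and lookahead `(?=,)` anchor the match to the delimiters without consuming them, so the match itself is only the URL.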
I am designing a neural network to classify signals with a PyTorch model. After training, I save the model with the following code:

```
torch.save(model.state_dict(), ".pth")
```

Afterward, I want to load the model for the inference phase. I use the following code:

```
model_test = spectrogram_model(X_Test)
model_test.load_state_dict(torch.load(".pth"))
```

But I get the following error:

```
AttributeError: 'Tensor' object has no attribute 'load_state_dict'
```

Do you have any solution?
Problem saving and loading a PyTorch model
|python|deep-learning|pytorch|neural-network|
You can use `envsubst` to substitute environment variables : ``` psql -h XXX -p 5432 -U XXX -d XXX -f <(envsubst < ./script.sql) ```
It isn't reliable to use `implode()` then unconditionally append a trailing delimiter for cases where an array might be empty. If you are seeking a functional-style approach, then `array_reduce()` can most directly iterate your values as a delimited string. Code: ([Demo][1]) $array = ['name1', 'name2', 'name3']; $separator = ', '; echo array_reduce( $array, fn($text, $value) => "$text$value$separator" ); Output: name1, name2, name3, There's nothing wrong with performing the same concatenation with a `foreach()` loop, it just requires a globally scoped variable to be declared versus `array_reduce()`'s function-scoped `$text` variable. [1]: https://3v4l.org/TQMCC
You can use the `getQueryInterface()` and `createDatabase()` functions to create the database before model initialization:

    import { Sequelize } from "sequelize";

    let sequelizeOptions: any = {
      dialect: "mysql",
      host: process.env.DB_HOST || "localhost",
      port: process.env.DB_PORT || 12345,
      username: process.env.DB_USER || "root",
      password: process.env.DB_PASSWORD || "",
    };

    export const dataBase = async () => {
      // create db if it doesn't already exist
      const { host, port, user, password, database } = {
        host: process.env.DB_HOST,
        port: process.env.DB_PORT,
        user: process.env.DB_USER,
        password: process.env.DB_PASSWORD,
        database: process.env.DB_NAME || "zconnectmarket",
      };

      // const connection = await mysql.createConnection(({ host, port, user, password } as any));
      // await connection.query(`CREATE DATABASE IF NOT EXISTS \`${database}\`;`);

      // connect to db
      const sequelizeCont = await new Sequelize((sequelizeOptions as any))
        .getQueryInterface()
        .createDatabase(database);

      // sync all models with database
      return sequelizeCont;
    };

    export default new Sequelize(({ ...sequelizeOptions, database: process.env.DB_NAME || "DBNAME" } as any));
Instead of extending DefaultBatchConfiguration, you can define custom beans for the specific components you want to customize. For example, you can define a custom TaskExecutor bean without extending DefaultBatchConfiguration. This allows you to retain the other autoconfigured beans provided by Spring Boot.

    @Configuration
    public class MyJobConfiguration {

        @Bean
        public TaskExecutor myTaskExecutor() {
            ThreadPoolTaskExecutor taskExecutor = new ThreadPoolTaskExecutor();
            taskExecutor.setMaxPoolSize(MAX_BATCH_TASK_EXECUTOR_POOL_SIZE);
            taskExecutor.afterPropertiesSet();
            return taskExecutor;
        }
    }