Symfony: Doctrine EntityManager persist() fails on new entity with relationships
# New to `ggplot2` 3.5.0

There's a new option to do this for you in the latest `ggplot2` releases:

``` r
library(ggplot2)

data_frame <- data.frame(
  value = c(rnorm(2000, mean = 100, sd = 20), rnorm(2000, mean = 1000, sd = 500)),
  level = c(rep(1, 2000), rep(2, 2000)),
  policy = factor(c(rep(c(rep(1, 500), rep(2, 500), rep(3, 500), rep(4, 500)), 2)))
)

data_frame |>
  ggplot(aes(x = '', y = value, fill = policy)) +
  geom_boxplot(outliers = FALSE) +
  facet_wrap(~ level, scales = "free") +
  xlab("") +
  ylab("Authorisation Time (ms)") +
  ggtitle("Title")
```

![](https://i.imgur.com/BqmoBxd.png)<!-- -->

# Older versions

As noted above, you pretty much have to filter before plotting, but this doesn't need to be done by editing any files, or even by creating new data frames. Using `dplyr` you can simply chain the filtering into the processing of your data. Below is a hopefully reproducible example with some made-up data (as I don't have yours). I created a function that filters by the same rules the boxplot uses.

It's a bit hacky, but hopefully works as one potential solution:

``` r
require(ggplot2)
require(dplyr)

data_frame <- data.frame(
  value = c(rnorm(2000, mean = 100, sd = 20), rnorm(2000, mean = 1000, sd = 500)),
  level = c(rep(1, 2000), rep(2, 2000)),
  policy = factor(c(rep(c(rep(1, 500), rep(2, 500), rep(3, 500), rep(4, 500)), 2)))
)

# filtering function - turns outliers into NAs to be removed
filter_lims <- function(x) {
  l <- boxplot.stats(x)$stats[1]
  u <- boxplot.stats(x)$stats[5]
  ifelse(x >= l & x <= u, x, NA)  # vectorised; keeps the whisker endpoints
}

data_frame %>%
  group_by(level, policy) %>%               # do the same calcs for each box
  mutate(value2 = filter_lims(value)) %>%   # new variable (value2) so as not to displace the first one
  ggplot(aes(x = '', y = value2, fill = policy)) +
  geom_boxplot(na.rm = TRUE, coef = 5) +    # remove NAs, and set the whisker length to cover all remaining points
  facet_wrap(~ level, scales = "free") +
  xlab("") +
  ylab("Authorisation Time (ms)") +
  ggtitle("Title")
```

Resulting in the following (simplified) plot:

[![Graph from synthetic data][1]][1]

[1]: https://i.stack.imgur.com/uORy0.png
I am currently working on a Spring Boot project with Thymeleaf. The following form should send its data to a REST endpoint:

```html
<form th:action="@{/post/create}" method="post" th:object="${postDto}" accept-charset="UTF-8">
    <div class="mb-3">
        <label for="title" class="form-label">Titel</label>
        <input type="text" class="form-control" id="title" name="title" th:field="*{title}" required>
    </div>
    <div class="mb-3">
        <label for="content" class="form-label">Beschreibung</label>
        <textarea class="form-control" id="content" name="content" rows="4" th:field="*{content}" required></textarea>
    </div>
    <div class="mb-3">
        <label for="event" class="form-label">Event</label>
        <select class="form-control" id="event" name="event" th:field="*{eventId}">
            <option th:value="${0}">Kein Event</option>
            <option th:each="event : ${events}" th:value="${event.id}" th:text="${event.name} + ' - ' + ${event.getClassName()}"></option>
        </select>
    </div>
    <div class="mb-3">
        <label for="topic" class="form-label">Thema</label>
        <select class="form-control" id="topic" name="topic" th:field="*{topicId}">
            <option th:value="${0}">Kein Thema</option>
            <option th:each="topic : ${topics}" th:value="${topic.id}" th:text="${topic.name}"></option>
        </select>
    </div>
    <div class="mb-3">
        <label for="visibility" class="form-label">Sichtbarkeit</label>
        <select class="form-control" id="visibility" name="visibility" th:field="*{visibility}">
            <option th:value="${0}">Für alle sichtbar</option>
            <option th:each="role : ${roles}" th:value="${role.getVisibilityScore()}" th:text="${role.getVisibilityScore()} + ' - ' + ${role.name}"></option>
        </select>
    </div>
    <div class="modal-footer">
        <input type="hidden" name="_csrf" value="${_csrf.token}"/>
        <button type="button" class="btn btn-secondary" data-bs-dismiss="modal">Schließen</button>
        <button type="submit" class="btn btn-primary">Absenden</button>
    </div>
</form>
```

The REST endpoint receives the data and should create a Post object via the ServiceImpl:

```java
@PostMapping("/post/create")
public String createPost(@ModelAttribute @Valid PostDto postDto, BindingResult bindingResult, Model model) {
    if (bindingResult.hasErrors()) {
        model.addAttribute(ERROR_MESSAGE_ATTRIBUTE, "Es gab Probleme, den Beitrag anzulegen. Versuche es erneut.");
        return TEMPLATE_LOCATION;
    }
    try {
        postService.savePost(postDto);
        model.addAttribute(SUCCESS_MESSAGE_ATTRIBUTE, "Beitrag wurde erstellt!");
    } catch (RuntimeException e) {
        model.addAttribute(ERROR_MESSAGE_ATTRIBUTE, e.getMessage());
    }
    return TEMPLATE_LOCATION;
}
```

ServiceImpl:

```java
@Override
public void savePost(PostDto postDto) {
    Authentication authentication = SecurityContextHolder.getContext().getAuthentication();
    User user = userService.findByMailAddress(authentication.getName());
    if (!permissionService.canCreatePosts(user)) {
        throw new InsufficientPermissionsException(INSUFFICIENT_PERMISSIONS_EXCEPTION);
    }
    Post post = Post.builder()
            .title(postDto.getTitle())
            .content(postDto.getContent())
            .visibility(postDto.getVisibility())
            .event(eventService.getById(postDto.getEventId()))
            .topic(topicService.getById(postDto.getTopicId()))
            .creator(user)
            .creationDate(LocalDateTime.now())
            .build();
    postRepository.save(post);
    topicService.mailToSubscribers(post.getTopic());
}
```

With Spring Data JPA, the object should then be persisted in a database so users can read the post on the site. After submitting the form, I logged the PostDto object: it is already corrupted at that point, and characters like "ä", "ö", "ü" and others are replaced with a "?".

PostDto:

```java
@Data
@Builder
@NoArgsConstructor
@AllArgsConstructor
public class PostDto {
    @NotNull
    @NotEmpty
    private String title;

    @Lob
    @Column(length = 100000)
    @NotNull
    @NotEmpty
    private String content;

    private Long eventId;
    private Long topicId;

    @Min(0)
    @Max(100)
    @NotNull
    private Long visibility;
}
```

I tried setting the following options in `application.properties`:

```
server.servlet.encoding.charset=UTF-8
server.servlet.encoding.enabled=true
server.servlet.encoding.force=true
spring.thymeleaf.encoding=UTF-8
```

I found no solution on the web and also asked ChatGPT, but got no answer that fixed my problem.
```python
def generate_sequence(sequence1, sequence2, sequence3):
    new_sequence = []
    # Generate the second sequence
    for i in range(len(sequence1)):
        if i == 0:
            new_sequence.append(sequence1[i])
        else:
            new_sequence.append(sequence1[i] + sequence2[i - 1])
    # Generate the third sequence
    for i in range(len(sequence2)):
        if i == 0:
            new_sequence.append(sequence1[i] + sequence2[i])
        else:
            new_sequence.append(sequence2[i] - sequence2[i - 1])
    # Generate the fourth sequence
    for i in range(len(sequence3)):
        if i == 0:
            new_sequence.append(sequence2[i] + sequence3[i])
        else:
            new_sequence.append(sequence3[i] + sequence3[i - 1])
    return new_sequence

# The first sequence
sequence1 = [12, 14, 21, 22, 37, 46]
# The second sequence
sequence2 = [34, 29, 33, 13, 2]
# The third sequence
sequence3 = [11, 3, 27, 1, 14]

# Generate the new sequence
new_sequence = generate_sequence(sequence1, sequence2, sequence3)

# Print the new sequence
print(new_sequence)
```
I'm sorry for misleading you. The markers render perfectly fine when switching the model; the bug was in my code when I created the model. I had to pass `monaco.Uri.parse('inmemory://test_script')` instead of the plain string `'inmemory://test_script'`:

```js
let model = monaco.editor.createModel(
  'some text',
  'my_lang',
  monaco.Uri.parse('inmemory://test_script')
);
```

Next time I will read the documentation more carefully.
[Bun.js][1] has a [useful native API to read periodic user input][2]:

```js
const prompt = "Type something: ";
process.stdout.write(prompt);

for await (const line of console) {
  console.log(`You typed: ${line}`);
  process.stdout.write(prompt);
}
```

Is there a way to read user input outside a loop?

***

I found this solution:

```js
const stdinIterator = process.stdin.iterator();

console.write('What is your name?\n> ');
const userName = (await stdinIterator.next()).value.toString().trimEnd();
console.log('Hello,', userName);

console.write(`What would you like, ${userName}?\n> `);
const answer = (await stdinIterator.next()).value.toString().trimEnd();
console.log('Do something with answer:', answer);
```

But then the process does not terminate automatically (I need to press <kbd>Ctrl</kbd>+<kbd>C</kbd> manually).

[1]: https://bun.sh/
[2]: https://bun.sh/guides/process/stdin
Bun.js: Read a single line user input
|javascript|stdin|readline|bun|
For more details on this procedure:

### How to rename a local branch in Git

- To rename the current branch, first make sure you have checked out the branch you want to rename:

  `git checkout oldbranch`

  And then:

  `git branch -m newbranch`

- If you prefer, you can rename a branch while working in another branch:

  `git branch -m oldbranch newbranch`

### How to rename a remote branch in Git

If others use this branch and commit to it, you should pull it before renaming it locally. This ensures that your local repository is up to date and that changes made by other users will not be lost.

- First, delete `oldbranch` from the remote repository:

  `git push origin --delete oldbranch`

- Now push the new branch to the remote, using the `-u` (set upstream) option:

  `git push origin -u newbranch`
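Putting the local steps together, here is a minimal end-to-end sketch in a throwaway repository (the branch names are placeholders, not anything from your project):

```shell
# Demo in a throwaway repo; the branch names are placeholders.
cd "$(mktemp -d)"
git init -q .
git -c user.email=you@example.com -c user.name=you commit -q --allow-empty -m "init"
git checkout -q -b oldbranch      # be on the branch you want to rename
git branch -m newbranch           # rename the current branch in place
git branch --list newbranch       # the renamed branch now exists
```

The rename keeps the branch's full history; only the ref name changes.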
Using Maven to feed minikube on a VM
|docker|maven|kubernetes|ssh|virtual-machine|
Use of locale with a CustomEmbedded block in the CD CMS App. I have the EmbeddedAssetBlock working, but it only fetches the default locale for the embedded item. All the rest of the content is translated properly, but the returned embedded entry is always the "en" version… why!? I don't see a way of even specifying the locale for an EmbeddedAssetBlock in the PHP tools… The embed object calls need to know the active locale explicitly rather than inheriting it automatically. How can I edit the line `$entry = $node->getEntry();` in CustomEmbeddedBlock.php so that it fetches the active locale?

This also doesn't work: `$entry = $node->getEntry()->setLocale('en-US');`

Error: An exception has been thrown during the rendering of a template ("Entry with ID was built using locale "en-GB", but now access using locale "en-US" is being attempted.") Contentful\Delivery\Resource\LocalizedResource | Contentful CDA SDK for PHP

Any help would be appreciated…
Contentful PHP SDK : How to retrieving localized embedded content in Content Renderer
|php|embedded|contentful|
We're having difficulties displaying images fetched from a Java Spring backend in a React frontend. Despite successfully retrieving the image data from the backend, we're unable to render it in the `<img>` tag on the frontend.

Here is what my backend sends:

```java
@PostMapping("/getProduct/{type}")
@ResponseBody
public ResponseEntity<byte[]> getProduct(@PathVariable("type") String productName) {
    String filePath = DIRECTORY_UPLOAD + PRODUCTS + File.separator;
    List<File> files = findAllFile(filePath);
    File productFile = files.stream()
            .filter(f -> f.getName().contains(productName))
            .findFirst()
            .orElse(null);
    if (productFile != null && productFile.exists()) {
        try {
            byte[] imageBytes = Files.readAllBytes(productFile.toPath());
            return ResponseEntity.ok()
                    .contentType(MediaType.IMAGE_PNG)
                    .body(imageBytes);
        } catch (IOException e) {
            e.printStackTrace(); // Handle exception appropriately
            return ResponseEntity.status(HttpStatus.INTERNAL_SERVER_ERROR).body(null);
        }
    } else {
        return ResponseEntity.status(HttpStatus.INTERNAL_SERVER_ERROR).body(null);
    }
}
```

And how my frontend handles it:

```js
getImage(type: string): Promise<Response> {
    return axios.post("api" + PATH_UPLOAD + "/getProduct/" + type.replace("/", ""), {
        headers: {
            ...authHeader(),
            "Content-Type": "multipart/form-data",
        },
    });
}
```

```js
useEffect(() => {
    const fetchImage = async () => {
        await ProductsService.getImage(data.type).then(async res => {
            if (res.status === 200) {
                const file = new Blob([res.data], {
                    type: "image/png",
                });
                // Build a URL from the file
                let fileurl = URL.createObjectURL(file);
                console.log(fileurl);
                setProductImgURL(fileurl);
            }
        });
    };
    fetchImage();
}, []);
```

```js
{ productImgURL === "" ? "" : <img src={productImgURL} alt="Image Produit" className="image" style={{width: "25%", height: "25%"}}/> }
```

I verified the backend endpoint by inspecting network requests and responses, implemented the frontend logic to fetch the image data and render it in the `<img>` tag, ensured error handling in both frontend and backend code, used a different way to create a Blob, and tried directly printing `res.data`.
```java
SecurityFilterChain apiFilterChain(HttpSecurity http) throws Exception {
    log.debug("Bearer SecurityConfig initialized.");
    http.csrf(AbstractHttpConfigurer::disable)
            .authorizeHttpRequests(requests -> requests
                    .requestMatchers(AUTH_WHITELIST).permitAll()
                    .anyRequest().authenticated())
            .exceptionHandling(
                    ex -> ex.authenticationEntryPoint(jwtAuthenticationEntryPoint))
            .sessionManagement(sess -> sess.sessionCreationPolicy(SessionCreationPolicy.STATELESS));

    // Add a filter to log the request-response of every request
    http.addFilterBefore(loggingFilter, UsernamePasswordAuthenticationFilter.class);

    // Add a filter to validate the tokens with every request
    http.addFilterBefore(jwtRequestFilter, UsernamePasswordAuthenticationFilter.class);

    return http.build();
}
```
|python|google-api|speech-recognition|speech-to-text|
I have a TeX project. I want to ignore some file extensions, which I have added to the .gitignore file, but the files are still shown as untracked in `git status`. How can I make Git stop tracking these files? The attached figure shows the VS Code workspace along with the staging area (`git ls-files`) and the output of `git status`; you can see the same files reported as untracked in red.

[![enter image description here][1]][1]

[1]: https://i.stack.imgur.com/Szugk.jpg
.gitignore not ignoring files correctly
|git|visual-studio-code|gitignore|tex|
With my Quarto extension [mathjax3eqno](https://github.com/ute/mathjax3eqno), you can use MathJax capabilities within HTML as well, but unfortunately not for books, because those would require keeping track of labelled equations across several rendered files. It does not work for Word format, either. If you want to use `\tag` within a single document that you only plan to render as HTML or PDF, add the extension and you should be good to go, as long as you use LaTeX commands to refer to equations.

```md
---
filters: [mathjax3eqno]
number-sections: true
---

# First Section

First equation
$$
\lim_{n\to\infty} \exp(n) = \infty
$${#eq-toinf}

And another one:
$$
a^2 + b^2 = c^2 \tag{$\ast$} \label{eq-py}
$$

# Second Section

Refer to \eqref{eq-toinf} and solve
\begin{equation}
e = mc^2
\end{equation}

Then ponder about \eqref{eq-py}
```

Rendered to HTML this becomes

[![rendered as html][1]][1]

[1]: https://i.stack.imgur.com/RH1Cx.png
I made a Chrome extension using a React Chrome extension boilerplate ([link](https://github.com/lxieyang/chrome-extension-boilerplate-react)). Now I want to show my popup based on the current tab URL. How can I query the current tab, and can I do it in a hook-based way? This is a function to get the current tab, but I don't know how to use it in React with a hook.

```js
async function getCurrentTab() {
  let queryOptions = { active: true, lastFocusedWindow: true };
  // `tab` will either be a `tabs.Tab` instance or `undefined`.
  let [tab] = await chrome.tabs.query(queryOptions);
  return tab;
}
```
You cannot write anything to the DOM when there is no DOM, just like you cannot put a sofa in your living room when there is no house yet. That means you have to check with ``this.isConnected`` whether the element is attached to the DOM, in the ``set record`` method.

It's an instance of a JavaScript class/object; you can of course attach _anything_ you want to it and have the ``constructor/attributeChangedCallback/connectedCallback`` process it (but really think through the house/living-room scenario).

Note that _**shadowDOM**_ allows you to create any DOM you want and **not** have it displayed in the main DOM (yet) (the same applies to a DocumentFragment).

<!-- begin snippet: js hide: false console: true babel: false -->

<!-- language: lang-js -->

    const createElement = (tag, props = {}) => Object.assign(document.createElement(tag), props);

    customElements.define('my-track', class extends HTMLElement {
      connectedCallback() {
        console.warn("connectedCallback", this.id);
        this.append(
          this.recordDIV = createElement("div", {
            innerHTML: `Default Record content`
          }));
      }
      set record(html) {
        if (this.isConnected) this.recordDIV.innerHTML = html;
        else console.warn(this.id, "Not in the DOM yet");
      }
    });

    const el1 = createElement('my-track', { id: "el1" });
    el1.record = "Record " + el1.id;
    document.body.append(el1);

    const el2 = createElement('my-track', { id: "el2" });
    document.body.append(el2);
    el2.record = "Record " + el2.id;

<!-- end snippet -->
I have a React frontend application configured with a proxy in the package.json file pointing to my Flask backend running on http://localhost:2371. However, when making requests to fetch data from the backend using fetch("/members"), the frontend seems to fetch from localhost:5173 (the address the React site runs on) instead of the expected localhost:2371. I've double-checked the proxy configuration (here is my package.json):

```json
"name": "react-frontend",
"private": true,
"proxy": "http://localhost:2371",
"version": "0.0.0",
"type": "module",
```

and ensured that the backend server is running, but I'm still encountering this issue. What could be causing the frontend to fetch data from the unexpected localhost address instead of using the configured proxy? The code works if I fetch from the full address ("http://localhost:2371/members"), but it would be simpler to just write "/members". Do I need to import the package.json into my App.tsx to make it work, or is it already somehow connected? Any insights or suggestions for troubleshooting would be greatly appreciated. Thank you!
Frontend fetching data from unexpected localhost address despite proxy configuration
|reactjs|json|typescript|flask|connection|
If you get only noise in the recorded audio on the frontend, go to your microphone settings and switch from the default microphone (high) [usually number 1] to the microphone (high) [usually number 4]. This removes the noise and you will get a clear voice on the frontend.
`ta.stochRsi` returns: ([float, float]) A tuple of the slow %K and the %D moving average values. For example:

    [k, d] = ta.stochRsi(14, 5, 14, 5, close)
Sending an image (PNG) from my backend (Java Spring) to the frontend (React) and displaying it
|javascript|java|reactjs|node.js|blob|
Use nodemon and compile the project. Add this to `"scripts"` in package.json:

```json
"build:dev": "ng build --configuration development && node dist/{{YOUR_PROJECT}}/server/server.mjs",
"dev": "nodemon --watch server.ts --watch src/ -e ts,html,scss --exec \"npm run build:dev\"",
```
The issue in your `webapi_source` function is that you are trying to use the `await` keyword without marking the function as `async`:

```js
async function webapi_source({ menus }) {
  endpoints = await collector.retrieve_endpoints(env);
}
```
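As a minimal runnable sketch of the same fix — note that `collector` and `env` below are stand-ins I invented so the example is self-contained; the real objects come from your application:

```javascript
// Hypothetical stand-ins for the question's collector/env, only here to
// make the sketch runnable on its own.
const env = "production";
const collector = {
  retrieve_endpoints: async (e) => [`https://example.com/${e}/api`],
};

// Marking the function async is what makes `await` legal inside it,
// and it also makes the function return a Promise.
async function webapi_source({ menus }) {
  const endpoints = await collector.retrieve_endpoints(env);
  return endpoints;
}

webapi_source({ menus: [] }).then((endpoints) => console.log(endpoints));
```

Because the function now returns a Promise, callers must `await` it (or chain `.then`) to get the endpoints.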
I tried removing 'fa-meh-blank'. When I clicked, it worked, but when I clicked again, my icon disappeared completely. I managed to find the solution; I think I expressed myself badly before: I wanted to alternate the icons with each click. The following JavaScript works perfectly.

My new JavaScript:

    icone.addEventListener('click', function () {
      // console.log('icon clicked');
      icone.classList.toggle('happy');
      icone.classList.toggle('fa-meh-blank');
      icone.classList.toggle('fa-smile-wink');
    });
The same issue is discussed [at the Kubernetes GitHub issues page][1], where user "alahijani" made a bash script that exports all YAML and writes it to individual files and folders. Since this question ranks well on Google and I found that solution very good, I reproduce it here.

*Bash script exporting YAML to sub-folders:*

    for n in $(kubectl get -o=name pvc,configmap,serviceaccount,secret,ingress,service,deployment,statefulset,hpa,job,cronjob)
    do
        mkdir -p $(dirname $n)
        kubectl get $n -o yaml > $n.yaml
    done

Another user, "acondrat", made a script that does not use directories, which makes it easy to run `kubectl apply -f` later.

*Bash script exporting YAML to the current folder:*

    for n in $(kubectl get -o=name pvc,configmap,ingress,service,secret,deployment,statefulset,hpa,job,cronjob | grep -v 'secret/default-token')
    do
        kubectl get $n -o yaml > $(dirname $n)_$(basename $n).yaml
    done

Note that the second script does not include service accounts.

[1]: https://github.com/kubernetes/kubernetes/issues/24873
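The flat-file naming in the second script just splits kubectl's `kind/name` output with `dirname` and `basename`; you can see the transformation without a cluster (the resource name below is made up for illustration):

```shell
# Hypothetical resource name in kubectl's "kind/name" output format.
n="deployment.apps/my-app"

# dirname yields the kind, basename yields the resource name.
out="$(dirname "$n")_$(basename "$n").yaml"
echo "$out"   # deployment.apps_my-app.yaml
```

This is why the flat variant never collides across kinds: the kind is baked into every filename.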
An alternative is [EasyML](http://github.com/cordisvictor/easyml-lib/). It supports Java 8, 11, and 17+, and is faster than XStream. Note that XStream is also still maintained on GitHub.
I created a script in Python with libraries such as pyautogui and pytesseract, then converted the script into an .exe file using auto-py-to-exe. Everything ran fine on my PC, but when I sent the file to my friends they got an error saying:

    pytesseract.pytesseract.TesseractNotFoundError

Here is my code:

```python
from PIL import Image
import pytesseract
import pyautogui
import os

pytesseract.pytesseract.tesseract_cmd = \
    r'C:\Users\iunth\AppData\Local\Programs\Tesseract-OCR\tesseract'

acceptImage = pyautogui.screenshot(region=(599, 587, 733, 198))
acceptWord = pytesseract.image_to_string(acceptImage)

while acceptWord != 'ACCEPT!':
    acceptImage = pyautogui.screenshot(region=(599, 587, 733, 198))
    acceptWord = pytesseract.image_to_string(acceptImage)
    print("Waiting for queue pop...")
    # print(acceptWord)
    if 'ACCEPT!' in acceptWord:
        accept = pyautogui.locateOnScreen('accept.jpg', confidence=0.8)
        print(acceptWord)
        pyautogui.moveTo(accept)
        pyautogui.click()
        break
```

I installed pytesseract with vcpkg so I could export it as an .exe, and it worked fine on my PC. How do I fix this?
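One common source of this error is the hard-coded `tesseract_cmd` path, which only exists on the original machine. A hedged sketch of a more portable lookup — the "Tesseract-OCR folder shipped next to the .exe" layout is an assumption of mine, not something from the question:

```python
import os
import shutil
import sys

def find_tesseract():
    """Return a tesseract executable path, or None if none can be found."""
    # 1) A copy bundled alongside the program (hypothetical layout: a
    #    "Tesseract-OCR" folder shipped next to the frozen .exe).
    if getattr(sys, "frozen", False):
        base = os.path.dirname(sys.executable)
    else:
        base = os.getcwd()
    bundled = os.path.join(base, "Tesseract-OCR", "tesseract.exe")
    if os.path.exists(bundled):
        return bundled
    # 2) Fall back to whatever is on the user's PATH.
    return shutil.which("tesseract")

tesseract_path = find_tesseract()
print(tesseract_path)  # None means the user still needs to install Tesseract
```

The result would then be assigned to `pytesseract.pytesseract.tesseract_cmd`, with a friendly error message if it is `None`.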
pytesseract.pytesseract.TesseractNotFoundError
|python|python-tesseract|
I think there is a much simpler way to do what you want. You only need the `&` ("and") operator. Basically, if you want the 5 most significant bits, all you have to do is add up their values: 128 + 64 + 32 + 16 + 8 = 248. For the 3 least significant bits it would be 4 + 2 + 1 = 7. Then use the and operator (`&`) with your variable and that total to assign those bits to a new variable.

    #include <iostream>
    using namespace std;

    int main() {
        unsigned char c = 42;    // binary 00101010
        int first5 = c & 248;    // keeps the 5 most significant bits
        int last3  = c & 7;      // keeps the 3 least significant bits
        cout << "5 most significant bits: " << first5 << endl;
        cout << "3 least significant bits: " << last3 << endl;
        return 0;
    }

But you can also use this to grab one individual bit or any combination of bits. For instance:

    int threenfive = c & 40; // extracts the 3rd and 5th bits (counting from the left), because those bits are worth 32 and 8, and 32 + 8 = 40
In OWL or its rule extension SWRL, the only rules you can model are the ones that say "if I know X (where X is a set of formulas in OWL), then I can deduce Y (with Y a formula in OWL)". What you seem to be asking is whether we can say "if I *don't* know X, then I can deduce that Y". But in fact, your question is a little vague. It is not completely clear what you mean by your rule syntax. Let us explore several interpretations, and in passing, show that the problem is unrelated to the Open World Assumption. # The pure OWL interpretation It is possible to interpret your rule as "if it is the case that there is no ?p in the universe such that `HasParent(Alice,?p)`, then I can deduce `HasParent(Alice,Bob)`". Using FOL notation, it could be written like: `(∀x¬HasParent(Alice,x)) ⟶ HasParent(Alice,Bob)`. In this case, this rule is useless: if the premise is true, then Alice does not have any parent, so the conclusion contradicts it. If the premise is false, we don't need the rule. You probably don't want such a rule. # The epistemic interpretation One way of interpreting this is to say that if it is not known (from the point of view of an agent or of the system) that `HasParent(Alice,x)` is true for some `x`, then it is true (from the point of view of an omniscient agent) that `HasParent(Alice,Bob)`. In epistemic modal logic, this can be written: `(∀x¬K.HasParent(Alice,x)) ⟶ HasParent(Alice,Bob)`. This is a valid and potentially useful rule, but since it requires a modal operator, it cannot be written in OWL or SWRL, which are not modal logics. The problem is unrelated to the Open World Assumption in this case. # The meta assumption interpretation Maybe this rule can be interpreted as an assumption that, like the Closed World Assumption or the Unique Name Assumption, is assumed to hold but cannot be modelled directly in the language. CWA and UNA are assumptions that can only be implemented by hard-coding them in reasoning, rather than by adding custom rules.
Similarly, this rule could be an assumption that affects reasoning beyond the possibilities of OWL. In this case, obviously, OWL is not suitable to represent the assumption. Note, in passing, that adding CWA would not help. With CWA, if we cannot infer that `HasParent(Alice,?p)` holds for some `?p`, then for any constant `?p` (including `Bob`) we assume that `HasParent(Alice,?p)` is false. But then, because of the rule, we would conclude that `HasParent(Alice,Bob)` is true and false at the same time. So OWL with CWA is as useless as the pure OWL interpretation. The problem is not related to OWA/CWA. # The knowledge revision interpretation One way to interpret the rule is by assuming that it is a knowledge revision operator. When we cannot infer (or we don't have explicitly in the knowledge base) that `HasParent(Alice,?p)` for some `?p`, then we revise the knowledge base by adding `HasParent(Alice,Bob)`. This means that we *update* our knowledge rather than merely inferring truth. This cannot be expressed in OWL simply because OWL is not made to express knowledge revision operations. Again, under this interpretation, there is nothing related to the Open World Assumption.
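The knowledge-revision reading is easy to demonstrate outside OWL; here is a toy sketch in Python (the triple-store encoding and names are purely illustrative, not any OWL or SWRL API):

```python
# Toy knowledge base of (predicate, subject, object) triples.
kb = {("HasParent", "Carol", "Dave")}

def apply_default_parent(kb, person, default_parent):
    """Knowledge revision: if no parent of `person` is known, *add* the
    default fact to the KB. This is an update operation, not an inference."""
    has_parent = any(p == "HasParent" and s == person for (p, s, _) in kb)
    if not has_parent:
        kb.add(("HasParent", person, default_parent))
    return kb

apply_default_parent(kb, "Alice", "Bob")   # no parent known -> fact added
apply_default_parent(kb, "Carol", "Bob")   # parent already known -> unchanged
print(sorted(kb))
```

The key point the answer makes survives in the sketch: the rule's premise is about the *state of the knowledge base*, not about the world, which is exactly what OWL cannot express.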
I'm trying to use ArrayList in Kotlin. Here is the code: ```kotlin var list:List<PlcStoragePiece> = ArrayList() list.addFirst(head) // head is a PlcStoragePiece return list ``` BTW this is Java's ArrayList: ```kotlin import kotlin.collections.ArrayList ``` but I get compile warnings! > w: DevStorage.kt: (105, 10): 'addFirst(E!): Unit' is deprecated. This > member is not fully supported by Kotlin compiler, so it may be absent > or have different signature in next major version If I shouldn't use ArrayList in Kotlin, what should I use?
queryPurchasesAsync returns a list of user Purchases, but it doesn't actually tell me which base plan was purchased, or even which offer. It only has the top-level product ID. I have multiple base plans with multiple offers within a single product (I assumed that's how they want us to structure our subscriptions to utilise the new billing structure). I want to tell my users which one they have on their profile page. Also, in general, I think it's quite crucial to have that piece of information available. Here is my structure:

- Product ID (app-name-pro-membership)
  - Base plan 1 (monthly)
    - Base plan 1 offer (free trial)
  - Base plan 2 (6 months)
    - Base plan 2 offer (free trial)
  - Base plan 3 (annual)
    - Base plan 3 offer (free trial)
How to get base plan id or offer id from Purchases
|android|subscription|billing|
### To rename your current branch to a new branch name:

    git branch -m <new_name>

This will set the new name for the current branch you are working with.

### To rename another branch:

    git branch -m <old_name> <new_name>

Here you have to provide the old branch name and the new branch name.
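Note that `git branch -m` only renames the local branch. Here is a quick end-to-end sketch in a throwaway repository (the branch name `develop` and the demo identity are invented for illustration):

```shell
set -e
# create a throwaway repo just to demonstrate the rename
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo
git commit -q --allow-empty -m "init"

# rename whatever the current branch is called to "develop"
git branch -m develop
git branch --show-current   # prints: develop
```

For a branch that also exists on a remote, you would typically follow up with `git push origin -u develop` and then delete the old name with `git push origin --delete <old_name>` (assuming the remote is called `origin`).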
You are in luck! I have recently pushed an update to the package that allows adding risk tables to the plot. All you need to do is to install the developmental version from github (`install_github("RobinDenz1/adjustedCurves")`) and you will be able to do it simply by using: ```R plot(adjsurv, custom_colors=c("red", "blue"), risk_table=TRUE) ``` Since you are using `method="direct"`, I would recommend keeping `risk_table_stratify=FALSE`, because the output may otherwise be confusing (see documentation page of `plot.adjustedsurv()`).
Looks to me like a simple `map` operation with `String.join()`. Optional<String[]> given = Optional.ofNullable(new String[]{"a", "b"}); var joinedString = given.map(s -> String.join("", s)); System.out.println(joinedString.get()); // prints "ab"
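One caveat with the snippet above: calling `get()` on an empty `Optional` throws `NoSuchElementException`. A safer sketch using `orElse()` (the class and method names below are mine, not from the original code):

```java
import java.util.Optional;

public class JoinedOptional {
    // Join the array inside the Optional; fall back to a default when it is empty
    static String joinOrDefault(Optional<String[]> maybe, String fallback) {
        return maybe.map(parts -> String.join("", parts)).orElse(fallback);
    }

    public static void main(String[] args) {
        System.out.println(joinOrDefault(Optional.of(new String[]{"a", "b"}), "<empty>")); // prints "ab"
        System.out.println(joinOrDefault(Optional.empty(), "<empty>"));                    // prints "<empty>"
    }
}
```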
Here's the `docker-compose.yml` file:

```
version: "3"
services:
  db_cassandra:
    container_name: db_cassandra
    image: custom.cassandra.image/builder-cassandra
    volumes:
      - "./common/cassandra:/lua/cassandra_setup:rw"
    environment:
      WORKSPACE: "/tmp"
      SERVICES_ROOT_DIR: "/services_root"
    healthcheck:
      test: ["CMD", "cqlsh", "-u", "cassandra", "-p", "cassandra"]
      interval: 5s
      timeout: 5s
      retries: 60
    ports:
      - "9042:9042"

  remote_cassandra:
    container_name: remote_cassandra
    build:
      context: ../.
      dockerfile: ./it/remote_cassandra/Dockerfile
      args:
        BASE_IMAGE: custom.cassandra.image/builder-cassandra
    depends_on:
      dd_cassandra:
        condition: service_healthy
    volumes:
      - "./common/cassandra:/lua/cassandra_setup:rw"
    environment:
      WORKSPACE: "/tmp"
      SERVICES_ROOT_DIR: "/services_root"
```

Here's the `remote_cassandra/Dockerfile`:

```
ARG BASE_IMAGE
FROM ${BASE_IMAGE}

COPY ./it/common/cassandra/cassandra-setup.sh /
RUN chmod +x /cassandra-setup.sh

CMD ["/cassandra-setup.sh"]
```

`remote_cassandra` remotely connects to the `db_cassandra` service and executes certain queries. Here's what the `cassandra-setup.sh` script looks like:

```
#!/bin/bash

#code that creates schema.cql
.
.
.

while ! cqlsh db_cassandra -e 'describe cluster' ; do
  echo "waiting for db_cassandra to be up..."
  sleep .5
done

cqlsh db_cassandra -f "${WORKSPACE}/schema.cql"
```

The while loop makes `remote_cassandra` wait until `db_cassandra` is up and running, so that it can then run schema.cql remotely to populate certain tables present in `db_cassandra`. However, the above script runs into an infinite loop where `remote_cassandra` is unable to remotely `cqlsh` to `db_cassandra`.
Here are the logs:

```
Traceback (most recent call last):
  File "/opt/cassandra/bin/cqlsh.py", line 2357, in <module>
    main(*read_options(sys.argv[1:], os.environ))
  File "/opt/cassandra/bin/cqlsh.py", line 2303, in main
    shell = Shell(hostname,
  File "/opt/cassandra/bin/cqlsh.py", line 463, in __init__
    load_balancing_policy=WhiteListRoundRobinPolicy([self.hostname]),
  File "/opt/cassandra/bin/../lib/cassandra-driver-internal-only-3.25.0.zip/cassandra-driver-3.25.0/cassandra/policies.py", line 425, in __init__
  File "/opt/cassandra/bin/../lib/cassandra-driver-internal-only-3.25.0.zip/cassandra-driver-3.25.0/cassandra/policies.py", line 426, in <listcomp>
  File "/usr/lib/python3.10/socket.py", line 955, in getaddrinfo
    for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
socket.gaierror: [Errno -2] Name or service not known
waiting for db_cassandra to be up...
```

These logs keep getting printed again and again. What can be done here to form a connection?

**EDIT**: I'm trying to pass the name of db_cassandra as a CLI arg to the bash script - cassandra-setup.sh. However, when running docker-compose, it takes forever for the container to run. Here's how I'm passing the arg:

```
ENTRYPOINT ["/cassandra-setup.sh"]
CMD ["db_cassandra"]
```

and then cqlsh(ing) it like:

```
#!/bin/bash
DB_CONTAINER="$1"

#code that creates schema.cql
.
.
.

while ! cqlsh $DB_CONTAINER -e 'describe cluster' ; do
  echo "waiting for db_cassandra to be up..."
  sleep .5
done

cqlsh $DB_CONTAINER -f "${WORKSPACE}/schema.cql"
```

This somehow just creates both Cassandra containers, and the execution seems to be stuck forever like this.
I have a program that is able to expose itself as a self hosted WCF service. It does this when it's started as a process with certain parameters. In the past I've passed in a port for it to host on but I want to change that so it finds an available port and then returns that to the caller. (It also sets up a SessionBound End Point but that's incidental to this question). It self hosts to do this using the following code (this is working fine):- Uri baseAddress = new Uri("net.tcp://localhost:0/AquatorXVService"); m_host = new ServiceHost(typeof(AquatorXV.Server.AquatorXVServiceInstance), baseAddress); ServiceMetadataBehavior smb = new ServiceMetadataBehavior(); smb.MetadataExporter.PolicyVersion = PolicyVersion.Policy15; m_host.Description.Behaviors.Add(smb); m_host.AddServiceEndpoint(typeof(IMetadataExchange), MetadataExchangeBindings.CreateMexTcpBinding(), "mex"); NetTcpBinding endPointBinding = new NetTcpBinding() { MaxReceivedMessageSize = 2147483647, MaxBufferSize = 2147483647, MaxBufferPoolSize = 2147483647, SendTimeout = TimeSpan.MaxValue, OpenTimeout = new TimeSpan(0, 0, 20) }; ServiceEndpoint endPoint = m_host.AddServiceEndpoint(typeof(AquatorXVServiceInterface.IAquatorXVServiceInstance), endPointBinding, baseAddress); endPoint.ListenUriMode = ListenUriMode.Unique; ServiceDebugBehavior debug = m_host.Description.Behaviors.Find<ServiceDebugBehavior>(); if (debug == null) m_host.Description.Behaviors.Add( new ServiceDebugBehavior() { IncludeExceptionDetailInFaults = true }); else if (!debug.IncludeExceptionDetailInFaults) debug.IncludeExceptionDetailInFaults = true; m_host.Open(); int port = m_host.ChannelDispatchers.First().Listener.Uri.Port; //Start the session bound factory service Uri sessionBoundFactoryBaseAddress = new Uri("net.tcp://localhost:" + port.ToString() + "/AquatorXVSessionBoundFactoryService"); m_sessionBoundFactoryHost = new ServiceHost(typeof(AquatorXV.Server.SessionBoundFactory), sessionBoundFactoryBaseAddress); ServiceMetadataBehavior 
smbFactory = new ServiceMetadataBehavior(); smbFactory.MetadataExporter.PolicyVersion = PolicyVersion.Policy15; m_sessionBoundFactoryHost.Description.Behaviors.Add(smbFactory); m_sessionBoundFactoryHost.AddServiceEndpoint(typeof(IMetadataExchange), MetadataExchangeBindings.CreateMexTcpBinding(), "mex"); m_sessionBoundFactoryHost.AddServiceEndpoint(typeof(AquatorXVServiceInterface.ISessionBoundFactory), new NetTcpBinding(), sessionBoundFactoryBaseAddress); m_sessionBoundFactoryHost.Open(); return port; Ultimately, port is set as the return value of Main(). On the client side it starts the program as a process. It then needs to access the port number so it can connect to it. Here's what I've been trying to do:- ProcessStartInfo psi = new ProcessStartInfo(exePath, $"/remotingWCF") { UseShellExecute = false }; mProcess = Process.Start(psi); mProcess.WaitForInputIdle(); mPort = mProcess.ExitCode; CreateWCFClient(mPort); The problem is that this fails when trying to access mProcess.ExitCode with a message: InvalidOperationException - Process Must Exit before requested information can be determined. Googling around suggests that I use WaitForExit instead of WaitForInputIdle but that requires the program to actually be shut down before I can access ExitCode. I need to get the port from the running program. I *think* this means that I won't be able to use ExitCode as a way to get the port back but I can't find a different mechanism. Can anyone suggest a way I can return a value from Process.Start without waiting for the program to close?
Compression is usually better handled by a CDN such as CloudFront.

That being said, to answer your question (at least partially), assuming you are using `output: "export"` in your nextjs 13+ config, and you want to generate the compressed assets (e.g. gzipped .js and .css files) at build time, you can use [CompressionWebpackPlugin](https://webpack.js.org/plugins/compression-webpack-plugin/) to do this for you.

First, install:

```
npm install compression-webpack-plugin --save-dev
```

and then change your `next.config.js` (or `next.config.mjs` if you're using ES modules):

```js
const CompressionPlugin = require("compression-webpack-plugin");

const config = {
  // ... other stuff in your config

  /** @type {(config: import('webpack').Configuration, context: import('next/dist/server/config-shared').WebpackConfigContext) => import('webpack').Configuration} */
  webpack: (config, { isServer }) => {
    if (!isServer) {
      config.plugins.push(new CompressionPlugin());
    }
    return config
  }
}
```

The `isServer` part is not necessary, it just saves some build time. If you want to customize the compression algorithm, which files get compressed, the compression ratio, etc., check the plugin documentation: https://webpack.js.org/plugins/compression-webpack-plugin/

On my project, this is a snippet of the output inside `_next/static/chunks`:

```
├── 08548717-91f03a7f6e188aa1.js
├── 08548717-91f03a7f6e188aa1.js.gz
├── 175675d1-a5f998fde19f31c1.js
├── 175675d1-a5f998fde19f31c1.js.gz
├── 2dc05096-775b14766d8fb412.js
├── 2dc05096-775b14766d8fb412.js.gz
```

To my understanding, S3 should be able to pick up the `.gz` files by itself.

Please note that here you are using a custom webpack config; this _might_ introduce some conflicts with the way the nextjs build works, especially if they change something. So far I have had no problems with this approach, just a heads up.

Also keep in mind that the nextjs team is moving towards their own bundler [TurboPack](https://turbo.build/pack), so long-term compatibility might be a question.
I am trying to clean a very large .csv file with pandas. The .csv has a column that contains text including characters like ',' or '/'. Therefore when I read the file, I specify `escapechar='\\'` and I observe that the file is successfully read and has the correct shape. After cleaning, I re-write the file to another path. However, this cleaned file has a different shape than the original, which makes no sense. I assume it's because of this text column. I also tried to specify `escapechar='\\'` when I write it, but its shape is still wrong and it mixes up the columns. What should I pass to the pd.to_csv method to write the file as it was in its original format? My code is below:

```python
reader = pd.read_csv(local_file_path, nrows=None, chunksize=10000, escapechar='\\')
output_path = f'/home/achilleslaststand/Desktop/clean_data/{report}_cleaned.csv'
j = 0
for chunk in reader:
    # Iterate over columns
    for column in chunk.columns:
        # Check if the column is in keys of the dictionaries
        if column in median_values and column in max_tresholds:
            # Check if values exceed max threshold
            mask = chunk[column] > max_tresholds[column]
            # Replace values exceeding max threshold with median value
            chunk.loc[mask, column] = median_values[column]
    if j == 0:
        chunk.to_csv(output_path, mode='a', header=True, index=False)
    else:
        chunk.to_csv(output_path, mode='a', header=False, index=False)
    j += 1
```

[enter image description here][1] [enter image description here][2]

[1]: https://i.stack.imgur.com/RopTh.png
[2]: https://i.stack.imgur.com/saiHf.png
React hook that returns current tab in a chrome extension popup
|reactjs|react-hooks|google-chrome-extension|
### TL;DR:

To fix your problems I suspect that you should, at least as a start:

1. Choose only one version for the Kotlin plugins you are using,
2. Remove the Kotlin JVM plugin, and
3. Apply the Kotlin serialization plugin in your Android subproject.

### Some more background

To say more:

- The plugins `id("org.jetbrains.kotlin.android")`, `kotlin("jvm")` and `kotlin("plugin.serialization")` all follow the same Kotlin versioning system. In other words, only use these plugins together with the same version number.
- You seem a bit confused over where to apply plugins. You have a Gradle [multiproject build][1], which does a separate build for (a) the root project, and (b) each subproject of the build. These correspond to folders and `build.gradle.kts` files in those folders. Only apply a plugin in a subproject where you want to use that plugin.
- You have applied the Kotlin JVM and serialization plugins in the root `build.gradle.kts` file. You could write Kotlin code that way (and place the code in the `rootProject/src/main/kotlin` folder), but that would be unusual in a multiproject build. I doubt you are trying to do that.
- I doubt that you want a Kotlin JVM project as well as a Kotlin Android project. You don't need both of these in one subproject: they are very similar, and the first allows Kotlin code to be built for a regular JVM target whilst the latter targets Android. If you're writing Android code, go to your Android subproject folder and write code in its `src/main/kotlin` folder. You already have the Kotlin Android plugin applied there.
- Also put the Kotlin serialization plugin in the subproject that needs it. I suspect you want this in your Android subproject as well.
- When you add a plugin to the `plugins` block with `apply false`, what you are telling Gradle is: "add the JAR this plugin is in to the build classpath so that I can apply it in subprojects". This is so only one instance of the plugin classes is loaded.
Such plugins must be applied in whichever subproject needs them. (Actually the Kotlin JVM and Kotlin Android plugins come from the same JAR, so you don't need to do `apply false` with both: but then I suspect you don't want the JVM plugin at all.) [1]: https://docs.gradle.org/current/userguide/intro_multi_project_builds.html
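As a sketch of that layout (the subproject name `app` and the version numbers here are placeholders — substitute your own):

```kotlin
// root build.gradle.kts — declare plugin versions once, apply nothing here
plugins {
    id("com.android.application") version "8.2.2" apply false
    id("org.jetbrains.kotlin.android") version "1.9.22" apply false
    id("org.jetbrains.kotlin.plugin.serialization") version "1.9.22" apply false
}
```

```kotlin
// app/build.gradle.kts — apply (without versions) only what this subproject needs
plugins {
    id("com.android.application")
    id("org.jetbrains.kotlin.android")
    id("org.jetbrains.kotlin.plugin.serialization")
}
```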
# Problem

It seems that you are using an unsuitable Pinecone object.

# Solution

If you are looking for a `.from_texts` method, import Pinecone from `langchain`:

```python
from langchain.vectorstores import Pinecone as PC

docs_chunks = [t.page_content for t in text_chunks]

pinecone_index = PC.from_texts(
    docs_chunks,
    hf,
    index_name='your-index-name'
)
```

# Extra Ball

Similar problem in the Pinecone community site: https://community.pinecone.io/t/pinecone-from-texts-not-working/4149/3
Following the guide [Deploy a .NET Windows Desktop app with ClickOnce][guide] for Visual Studio, I have created a ClickOnce installer named setup.exe. Is it possible to add an extra step to the installer where the user, via a checkbox or other means, would be asked to check or uncheck an option to allow the installed application to launch automatically on Windows startup?

If it is not possible to ask this in the ClickOnce installer, is there another easy way of creating an installer with this option? Or is the only way to ask for launch on startup to do it manually in the application code as, for example, the answers in the following question suggest?

* [.net - How can I make a Click-once deployed app run at startup?][question]

I am developing a WPF desktop application in C# targeting .NET 8.0.

[guide]: https://learn.microsoft.com/en-us/visualstudio/deployment/quickstart-deploy-using-clickonce-folder?view=vs-2022
[question]: https://stackoverflow.com/questions/401816/how-can-i-make-a-click-once-deployed-app-run-at-startup
I found the code/library and everything I needed on BasselItech: https://www.youtube.com/watch?v=zzs2xnyCczo

I tried modifying the code using `.Hide()` and setting `Visible` to false, but the Form always appears, and I would just like to have it run in the background with a NotifyIcon. Thank you all. Here is my actual code:

```
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.IO.Ports;
using System.Linq;
using System.Text;
using System.Windows.Forms;
using USB_Barcode_Scanner;
using System.Threading;
using System.Timers;
using System.Windows.Threading;
using System.Diagnostics;

namespace USB_Barcode_Scanner_Tutorial___C_Sharp
{
    public partial class Form1 : Form
    {
        private static SerialPort currentPort;
        private delegate void updateDelegate(string txt);

        public Form1()
        {
            InitializeComponent();

            //BarcodeScanner barcodeScanner = new BarcodeScanner(textBox1);
            //barcodeScanner.BarcodeScanned += BarcodeScanner_BarcodeScanned;

            string[] ports = System.IO.Ports.SerialPort.GetPortNames();
            string com_PortName = "COM4";
            int com_BaudRate = 9600;

            currentPort = new SerialPort(com_PortName, com_BaudRate, Parity.None, 8, StopBits.One);
            currentPort.Handshake = Handshake.None;
            currentPort.DataReceived += new SerialDataReceivedEventHandler(port_DataReceived);
            currentPort.ReadTimeout = 1000;
            currentPort.WriteTimeout = 500;
            currentPort.Open();
        }

        private void BarcodeScanner_BarcodeScanned(object sender, BarcodeScannerEventArgs e)
        {
            textBox1.Text = e.Barcode;
        }

        private void CodeFunktionenFuerQRCode_ausfuehren(string inhalt)
        {
            if (inhalt.StartsWith("http")) // link recognized
            {
                Process.Start("explorer.exe", inhalt);
            }
            else if (inhalt.StartsWith("helios")) // helios path recognized
            {
                Process.Start("explorer.exe", inhalt);
            }
            else
            {
            }

            #region Write the link's content into the textbox
            if (textBox2.Text.Length > 0)
            {
                textBox2.Text = "";
            }
            textBox2.Text = inhalt;
            #endregion
        }

        private void port_DataReceived(object sender, SerialDataReceivedEventArgs e)
        {
            string strFromPort = "";
            try
            {
                strFromPort = currentPort.ReadExisting();
            }
            catch { }

            if (!currentPort.IsOpen)
            {
                currentPort.Open();
                System.Threading.Thread.Sleep(100);
                currentPort.DiscardInBuffer();
            }
            else
            {
                currentPort.DiscardInBuffer();
            }

            BeginInvoke(new updateDelegate(CodeFunktionenFuerQRCode_ausfuehren), strFromPort);
        }
    }
}
```
I'm encountering slow performance when executing a simple SELECT query **(SELECT * FROM SampleData)** on a SQL Server database hosted on my local machine. The table contains around 1 million records. Despite the relatively straightforward query, it takes approximately 20 seconds to retrieve the results. However, the result set seems to be lazily loaded (the rows are dynamically added right after I click the execute button). What could be causing this, and what steps can I take to improve it? CREATE TABLE SampleData ( ID BIGINT NOT NULL, Name NVARCHAR(2000) NOT NULL, DataType VARCHAR(115) NOT NULL, DisplayName NVARCHAR(515) NOT NULL, Description VARCHAR(MAX), StartDate BIGINT NOT NULL DEFAULT 0, Status BIGINT DEFAULT 1, CreationTime BIGINT, PRIMARY KEY (ID) ) Thanks.
Simple Select query is taking more than expected time to execute
|sql|sql-server|performance|
I am new to Azure DevOps and I am trying to link the deployment status of my releases to work items. I have tried going to classic release pipelines -> options -> integrations -> Report deployment status to Boards and have mapped the environments.

[![enter image description here][1]][1]

But I want to link these deployments only when the deployment is done through a specific branch, like Master, and not from all the branches. I don't see any option in pipelines to select a specific branch. Currently the deployment status of all branches is being displayed in the deployment section of work items, which is making it hard to follow the deployments we care about.

[![enter image description here][2]][2]

I have looked into the documentation but I couldn't find anything that helps. I am sure there must be some way to achieve this. Any help is appreciated.

[1]: https://i.stack.imgur.com/O3PM9.png
[2]: https://i.stack.imgur.com/UG2e9.png
Integrate Deployment status to Work Items in TFS
|azure-devops|tfs|azure-pipelines|cicd|
I am developing a company-identifier using Spark-SQL and basically what I do is:

## Data sides of the join:

- Let's consider a data frame filled with `company_name_1` that I want to match to an `ID_number` (1).
- On the other side I have a **"master"** database containing thousands of `company_name_2` with an `ID_number` provided (2).
- I want to match `company_name_1` (1) to the `ID_number` (2) based on the similarity between `company_name_1` (1) and `company_name_2` (2).

***Important note***: ***cross-join* is not allowed**, the databases are too large.

[![enter image description here][1]][1]

## Matching process - cascade:

1. **Standardization process**: Lower/upper + clear symbols. ***Note**: I don't know what is better for these situations, everything upper or everything lower. In my head it doesn't really matter.*

[![enter image description here][2]][2]

2. **First join - pure inner**: Getting a total-coincidence match using a simple inner join with the whole name as the key.

[![enter image description here][3]][3]

3. **Second join - partial inner**: For the *no-matches* from the first step, I will extract the strangest word (1 / term frequency over the whole word collection) of each `company_name_1` and `company_name_2`.

[![enter image description here][4]][4]

4. **Validation of the match**: Using string distance/similarity algorithms (i.e. Levenshtein and Jaro-Winkler) as a udf to develop a confidence score to guarantee a correct match.
```python
import textdistance

def jaro_winkler_udf(s1, s2):
    return textdistance.jaro_winkler(s1, s2)
```

```python
from pyspark.sql import functions as sf

join_correct = df_1.join(df_2, ['strangest_word'], 'inner')\
    .withColumn("levenshtein_score", sf.levenshtein(sf.col("company_name_1"), sf.col("company_name_2")))\
    .withColumn("jaro_winkler_score", jaro_winkler_udf(sf.col("company_name_1"), sf.col("company_name_2")))\
    .withColumn("label_ok", sf.when((sf.col("levenshtein_score") < 7) | (sf.col("jaro_winkler_score") > 0.7), 1).otherwise(0))\
    .where(sf.col("label_ok") == 1)
```

5. **Unify the matches in one df**:

[![enter image description here][5]][5]

### The question

I would like to generate a new *join-key*, like the strangest word, to join the dataframes. **Do you know any other *join-key* that might work?**

Keys that I have already tried:

- **Strangest word**: It works well but it is too wide and the match is sometimes incorrect.
- **2 strangest words**: It works well too, but after validation there are not many matches; however, the match is usually correct. So far the best key I have found.
- **N-grams**: Takes too long to process due to the large size of the data sets.
- **Vectorizing**: Takes too long to process due to the large size of the data sets.
- **Fuzzy-join libraries**: Not installed in the production branch.

[1]: https://i.stack.imgur.com/f2z9p.png
[2]: https://i.stack.imgur.com/0aLh5.png
[3]: https://i.stack.imgur.com/DLqVA.png
[4]: https://i.stack.imgur.com/ttYzA.png
[5]: https://i.stack.imgur.com/SJi6U.png
[6]: https://i.stack.imgur.com/Ha7LG.jpg
I think one of the simplest responses is to just create a random graph and then sequentially connect node n to node n+1. This produces a random connected graph.

    G = nx.dense_gnm_random_graph(15, 5)
    prev_node = None
    for node in nx.nodes(G):
        if prev_node is not None:  # compare with None explicitly: when prev_node is zero, a plain truthiness test gives False
            G.add_edge(prev_node, node)
        prev_node = node

If you don't need a connected graph, just create a random graph, get the isolated nodes through nx.isolates(G) and connect each of them to another random node (excluding itself).
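For the second case, a minimal sketch of that isolated-node repair (assuming `networkx` is installed; the node/edge counts and the seed are arbitrary):

```python
import random

import networkx as nx

G = nx.dense_gnm_random_graph(15, 5, seed=42)

# nx.isolates() returns a live view, so materialize it before adding edges
for node in list(nx.isolates(G)):
    # pick any other node as the attachment point, excluding the node itself
    other = random.choice([n for n in G.nodes if n != node])
    G.add_edge(node, other)

assert nx.number_of_isolates(G) == 0  # no isolated nodes remain
```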
I am developing a game in Unity that plays human speech through speakers and also performs speech recognition. This will cause the sound played by the speakers to be recognized by the speech recognition, which will lead to inaccurate speech recognition results or even the inability to recognize the correct speech. I'm using [whisper.unity](https://github.com/Macoron/whisper.unity) for speech recognition. What are some possible solutions to this problem? I have only learned about "echo cancellation" through Google search, but I still don't know how to handle it.
How to Avoid Speech Recognition from Recognizing Speaker Playback in Unity
|unity-game-engine|speech-recognition|speech-to-text|aec|echo-cancellation|
|php|amazon-web-services|amazon-s3|curl|cloud-object-storage|
Upgrading to torch-2.1.2 will resolve that error.
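For example, in a pip-based environment (pick the build matching your CUDA setup if you need GPU support):

```
pip install --upgrade torch==2.1.2
```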
By default, Pillow will place the ***top-left*** corner of your text at the coordinates you specify with `ImageDraw.text()`. But you seem to want the text placed so that its horizontal middle and vertical middle are placed at the location you specify. So, you need to set the horizontal and vertical anchor to *"middle"* with: I1.text(..., anchor='mm') See manual [here][1]. --- Here are examples of drawing the same text, specifying the same (x,y) but using different anchors: * Red anchored at left baseline * Green anchored at right ascender * Blue anchored at the midpoint both vertically and horizontally I added a white cross showing the (x,y) position at which the text is drawn. #!/usr/bin/env python3 from PIL import Image, ImageDraw, ImageFont # Define geometry w, h = 400, 400 cx, cy = int(w/2), int(h/2) # Create empty canvas and get a drawing context and font im = Image.new('RGB', (w,h), 'gray') draw = ImageDraw.Draw(im) font = ImageFont.truetype('/System/Library/Fonts/Monaco.ttf', 64) # Write some text with the various anchors text = 'Hello' # Red anchored at left baseline draw.text(xy=(cx, cy), text=text, font=font, fill='red', anchor='ls') # Green anchored at right ascender draw.text(xy=(cx, cy), text=text, font=font, fill='lime', anchor='ra') # Blue anchored at vertical and horizontal middle draw.text(xy=(cx, cy), text=text, font=font, fill='blue', anchor='mm') # Draw white cross at anchor point (cx,cy) draw.line([(cx-3,cy),(cx+4,cy)], 'white', width=2) draw.line([(cx,cy-3),(cx,cy+4)], 'white', width=2) im.save('result.png') [![enter image description here][2]][2] Hopefully you can see that the red text is anchored at its bottom-left corner, the green text at its top-right corner, and the blue text - as you were hoping for - with its middle-middle on the white positioning cross. [1]: https://pillow.readthedocs.io/en/stable/reference/ImageDraw.html#PIL.ImageDraw.ImageDraw.text [2]: https://i.stack.imgur.com/Vtw0c.png
I am trying to look in 6 columns for a name that lines up with today's date. Example: Column A has each day of the year Column E through J can have a persons name in it. Essentially (generic format) If( countif( date match , name match in one of the columns ) > 0, "PTO", "" ) What I made work, is using 6 countifs added together to count each column, but I know there has to be an array function that can do it. Arrays are not my strong suit but I'm trying to learn them. My sheet has each day of the year in column A, then names in column E through J. Here's what I had tried to make work in an array, but obviously, it doesn't work. {=IF(SUM(COUNTIFS('PTO Calendar'!A4:A370,$A$3, 'PTO Calendar'!E4:J370, A5:A35))>0, "PTO", "")} The $A$3 is a today() function and is looking for matches in the first criteria. My second criteria, I'm trying to search 6 columns for a name match. Some help understanding why my array won't work would be helpful.
Multiple column COUNTIFS function in an array
|arrays|match|multiple-columns|
Your original question indicates that you want to use pandas' `read_sql_query()` to retrieve the results into a DataFrame. In that case, use `params=` and pass the parameter values as a dict:

```python
import pandas as pd
from sqlalchemy import text

book_id = 653
sql = text("""\
SET NOCOUNT ON;
EXEC dbo.LibraryBookData @BookId = :bookId;
""")
df = pd.read_sql_query(sql, engine, params={"bookId": book_id})
```

Note that `SET NOCOUNT ON;` is not always required, but it is a good habit to get into.
Thanks to Ňɏssa and hallibut in the comments. I thought there was a fundamental problem with using sender, but the issue was that Windows needed the full path for the TextBox's object type and I was overthinking the problem. Here's the corrected code that worked:

```
private void periodFix(object sender)
{
    // making sure the first char in the text box always has a . for the extension
    System.Windows.Forms.TextBox tb = (System.Windows.Forms.TextBox)sender;
    tb.Text = tb.Text.Trim();
    if (tb.Text.Length > 0)
    {
        char[] textBoxContents = tb.Text.ToCharArray();
        if (textBoxContents[0] != '.')
        {
            string tempString = "." + tb.Text;
            tb.Text = tempString;
        }
    }
    else
    {
        tb.Text = ".";
    }
}

private void textBox7_Leave_1(object sender, EventArgs e)
{
    // making sure the first char in the text box always has a . for the extension
    periodFix(sender);
}
```
I have a form that I would like to use both for creating a new record and for modifying it. When I run it to create a new record, validation works fine:

    @if(Model == null){
        <input type="text" id="txt_hostname" name="txt_hostname" class="form-control" value="PC-Name1-test">
    }
    else
    {
        @* @Html.EditorFor(modelItem => Model.Hostname, new { htmlAttributes = new { @class = "form-control", @type = "text", @id = "txt_hostname", name="txt_hostname" } }) *@
        <input type="text" id="txt_hostname" name="txt_hostname" class="form-control" value="@Model.Hostname">
    }

but when I fill the "same" input with information from the model, the jQuery validator does not fire anymore when I change the input text. I noticed that the class also contains "valid", which does not change:

    class="form-control valid"

Modify:

    <input type="text" id="txt_hostname" name="txt_hostname" class="form-control valid" aria-invalid="false">

Create new one:

    <input type="text" id="txt_hostname" name="txt_hostname" class="form-control" value="PC-Name1-test">

My jQuery looks like this:

    $(document).ready(function () {
        var form = $("#form_endpointadmin");
        $("#form_endpointadmin").validate({
            ignore: [],
            errorPlacement: function errorPlacement(error, element) { element.before(error); },
            rules: {
                txt_hostname: {
                    required: true,
                    regex: "^.{2,}$"
                }
            },
            messages: {
                txt_hostname: "Please enter a valid hostname."
            },
            highlight: function (element) {
                $(element).removeClass('is-valid').addClass('is-invalid');
            },
            unhighlight: function (element) {
                //$(element).removeClass('is-invalid').addClass('is-valid');
                $(element).removeClass('is-invalid').addClass('form-control');
            }
        });

        // Reapply validation rules after loading the content
        //$.validator.unobtrusive.parse(form);
        //form.removeData("validator")
        //.removeData("unobtrusiveValidation");
        //$.validator.unobtrusive.parse(form);
    });

I read something about a DOM thing, but could not find a way around it, only when using ajax.
So how can I re-validate a value that was already valid once, but can be changed now and has to be verified again?

Thanks
Stephan

----------------

Edit: I changed `$(document).ready(function () {` to `window.addEventListener('DOMContentLoaded', function () {`, which waits for the DOM. Not sure if this is a good way.
Do you know of any ways to get partial-coincidence (partial-match) joins?
A small change in the source code can also help you out:

```diff
diff --git a/seaborn/relational.py b/seaborn/relational.py
index ff0701c7..f4ab8cd9 100644
--- a/seaborn/relational.py
+++ b/seaborn/relational.py
@@ -273,7 +265,7 @@ class _LinePlotter(_RelationalPlotter):
 
         # Loop over the semantic subsets and add to the plot
         grouping_vars = "hue", "size", "style"
-        for sub_vars, sub_data in self.iter_data(grouping_vars, from_comp_data=True):
+        for sub_vars, sub_data in self.iter_data(grouping_vars, from_comp_data=True, dropna=False):
 
             if self.sort:
                 sort_vars = ["units", orient, other]
```

Found here: https://github.com/mwaskom/seaborn/issues/3351#issuecomment-1530086862

For future reference, there is an open issue: https://github.com/mwaskom/seaborn/issues/1552, but it could not be fixed at the time because of "some snags".
I am working on theory of computation problems but cannot get these to work:

> a. Give an NFA recognizing the language (01 ∪ 001 ∪ 010)<sup>\*</sup>
>
> b. Convert this NFA to an equivalent DFA. Give only the portion of the DFA that is reachable from the start state.
>
> ![1.49 Theory](https://i.stack.imgur.com/JBw3j.png)

I am using the `automata.fa` package to encode the automata and have them tested. Here are examples:

```
# example DFA syntax
example_dfa = DFA(
    states={'q1', 'q2', 'q3', 'q4', 'q5'},
    input_symbols={'0', '1'},
    transitions={
        'q1': {'0': 'q1', '1': 'q2'},
        'q2': {'0': 'q1', '1': 'q3'},
        'q3': {'0': 'q2', '1': 'q4'},
        'q4': {'0': 'q3', '1': 'q5'},
        'q5': {'0': 'q4', '1': 'q5'}
    },
    initial_state='q3',
    final_states={'q3'}
)

# example NFA syntax
example_nfa = NFA(
    states={"q0", "q1", "q2"},
    input_symbols={"0", "1"},
    transitions={
        "q0": {"": {"q1", "q2"}},
        "q1": {"0": {"q1"}, "1": {"q2"}},
        "q2": {"0": {"q1"}, "1": {"q2"}},
    },
    initial_state="q0",
    final_states={"q1"},
)

# example regular expression syntax
example_regex = "(a*ba*(a|b)*)|()"
```

For the above question, I tried the following NFA:

```
from automata.fa.nfa import NFA

prob_1_17a = NFA(
    states={'q0', 'q1', 'q2', 'q3'},
    input_symbols={'0', '1'},
    transitions={
        'q0': {'0': {'q3'}, '1': {'q0'}},
        'q1': {'0': {'q1'}, '1': {'q0'}},
        'q2': {'0': {'q0'}, '1': {'q2'}},
        'q3': {'0': {'q1'}, '1': {'q2'}},
    },
    initial_state='q0',
    final_states={'q0'}
)
```

but the autograder gives the following output:

> Results:
>
> Running command:
> ```
> $ timeout 15 python3 unit_tests/unit_test_prob_1_17a.py
> ```
> UNEXPECTED RESULT ON THESE INPUTS:
> ```
> ['01', '00101', '01001', '01010', '010101', '00100101', '00101001', '00101010', '01000101', '01001001', ...]
> ```
>
> Test for: unit_test_prob_1_17a.py
>
> This test gave you 0 more points

How can I design the correct NFA/DFA?
From what I understand, Shadcn's form components are just an abstraction over react-hook-form. If you want to access the form state, for example, you can use:

```
const form = useForm<z.infer<typeof formSchema>>({
    resolver: zodResolver(formSchema),
    defaultValues: {
      prompt: "",
    },
})

const { watch } = form
const watchPrompt = watch("prompt")
```

so that `watchPrompt` can be accessed, e.g., in a `useEffect`:

```
useEffect(() => {
    // logic here
    console.log(watchPrompt)
}, [watchPrompt])
```

Not sure if this clarifies things; you're probably best off reading the react-hook-form documentation.
Note that values come in two flavors: primitives and objects. Primitives exist in a scope; you can't reference them outside their own and child scopes (they are copied by value). So in your example you don't need to worry, since you only use a primitive. If you remove the event listeners or remove the elements from the DOM, the whole construct is garbage collected (provided the elements aren't referenced elsewhere). If you use objects and reference them outside the scope, you can run into trouble: set `reference = null` on all outside references so the construct can be garbage collected.
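To make the primitive-vs-object distinction concrete, here is a minimal plain-JavaScript sketch (standalone, not tied to your form code; the variable names are made up for illustration):

```javascript
// Primitives are copied by value: reassigning the copy
// never affects the original binding.
let count = 1;
let copy = count;
copy = 2; // count is still 1

// Objects are shared by reference: every variable holding the
// object keeps it reachable, and therefore alive for the GC.
const state = { clicks: 0 };
const outsideRef = state;  // second reference to the SAME object
outsideRef.clicks += 1;    // visible through `state` as well

// Dropping an outside reference makes the object collectible
// once no listener, closure, or DOM node references it anymore.
let droppable = { payload: new Array(1000).fill(0) };
droppable = null; // now eligible for garbage collection
```

The `droppable = null` line is the pattern from the answer: clearing the last outside reference is what lets the engine reclaim the object.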
I want to be able to have multiple lines of text in one marquee. Is this possible, and if so, how can it be done? I tried placing spaces between the items, but they still came out almost next to each other. Any ideas would be appreciated.
How to set multiple lines of text in an HTML marquee?