Here is a solution to your problem. The reason you did not see your data update is that the work happens on a different thread, so your composable had no way of knowing that the data was updated and that it should recompose.

```kotlin
class CurrencyConverterViewModel : ViewModel() {

    lateinit var response: CurrencyCodesModel

    // Here you create the variable as a StateFlow, but you could also use a
    // SharedFlow; the difference is explained here:
    // https://developer.android.com/kotlin/flow/stateflow-and-sharedflow
    private var _listOfNames = MutableStateFlow<List<String>>(listOf())
    val listOfNames = _listOfNames.asStateFlow()

    init {
        viewModelScope.launch {
            getListOfCurrencyNamesApi()
        }
    }

    suspend fun getCurrencyCodes() {
        response = CurrencyConverterInstance
            .currencyApiService
            .getCurrencyCodes()
    }

    private suspend fun getListOfCurrencyNamesApi() {
        getCurrencyCodes()
        // Here you publish the change
        _listOfNames.emit(
            response.supported_codes.map { it[1] }
        )
    }
}

@Composable
fun CurrencyConverter() {
    val myViewModel: CurrencyConverterViewModel = viewModel()
    // Every time getCurrencyCodes runs and _listOfNames emits a new value,
    // your composable will recompose and show the updated data.
    val listOfNames by myViewModel.listOfNames.collectAsState()
}
```
You gave max-height. You should remove it. <!-- begin snippet: js hide: false console: true babel: false --> <!-- language: lang-css --> .parent { margin: 5px; max-width: 400px; background: pink; } .details { width: 100%; text-align: center; font-size: 14pt; display: grid; grid-template-rows: 1fr 1fr; } .detail1 { grid-row: 1; overflow-y: hidden; } .detail2 { grid-row: 2; overflow-y: hidden; } <!-- language: lang-html --> <div class="parent"> <div class="details"> <div class="detail1"> lotsoftext lotsoftext lotsoftext lotsoftext lotsoftext lotsoftext lotsoftext lotsoftext lotsoftext lotsoftext lotsoftext lotsoftext </div> <div class="detail2"> lotsmoretext lotsmoretext lotsmoretext lotsmoretext lotsmoretext lotsmoretext lotsmoretext lotsmoretext lotsmoretext lotsmoretext lotsmoretext lotsmoretext </div> </div> </div> <!-- end snippet -->
**Problem number 1:**

Steps to convert an ε-NFA to a DFA:

1. Take the ε-closure of the start state of the NFA as the start state of the DFA.
2. For each input symbol, find the states that can be reached from the present state (the union of the transition values and their ε-closures, for each NFA state contained in the current DFA state).
3. If any new state is found, take it as the current state and repeat step 2.
4. Repeat steps 2 and 3 until no new state appears in the DFA transition table.
5. Mark the DFA states that contain a final state of the NFA as final states of the DFA.

Converting the first ε-NFA to a DFA:

closure(1) = {1,2}

The transition table for the DFA is

[![enter image description here][1]][1]

The DFA for the given NFA is

[![enter image description here][2]][2]

[1]: https://i.stack.imgur.com/tzciL.png
[2]: https://i.stack.imgur.com/cbrwM.jpg

Here D stands for the dead state.

**Problem number 2:**

Problem number 2 can be solved with the solution to problem 1; you just don't need to add the dead state to the NFA.
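Step 1 (and the closure part of step 2) can be sketched in code. This is an illustrative sketch only, not the exact NFA from the problem; the ε-transition table below is a made-up example in which state 1 has an ε-move to state 2, matching closure(1) = {1,2}:

```python
def epsilon_closure(states, eps):
    """Return the set of NFA states reachable from `states` via epsilon moves."""
    stack, closure = list(states), set(states)
    while stack:
        s = stack.pop()
        for t in eps.get(s, ()):
            if t not in closure:
                closure.add(t)
                stack.append(t)
    return closure

# Illustrative ε-NFA: an epsilon move from 1 to 2, so closure(1) = {1, 2}
eps = {1: [2]}
print(epsilon_closure({1}, eps))  # {1, 2}
```

The same worklist loop extends naturally to the full subset construction: for each DFA state (a set of NFA states) and each input symbol, take the union of the symbol transitions and then close the result under ε-moves.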
Could not instantiate class org.openqa.selenium.edge.EdgeDriver in Serenity
1. Are you sure you did not accidentally update to 4.27.**3** or later? I got exactly your problem after I installed the 4.28.0 version (see below)...
2. You need Hyper-V enabled for this, and it is not available on Windows Home Edition (check with PowerShell: `Get-WindowsEdition -Online`, which should report at least `Edition : Professional`). If you are on Home Edition there is no way around it: upgrade your Windows to Professional Edition (see maybe [tag:docker-for-windows]).

From my point of view, at this time the Docker Desktop version 4.28.0 seems to have a problem with at least Windows 10, because after I uninstalled 4.28.0 and replaced it with a fresh install of Docker Desktop version 4.27.2 (see the [Docker Desktop release notes][2]), everything works fine for me with VS 2022 and ASP.NET 8.

... don't update Docker Desktop until this is fixed! ;)

In [GitHub, docker/for-win: ERROR: request returned Internal Server Error for API route and version...][1] there is a hint that upgrading WSL2 might help too.

[1]: https://github.com/docker/for-win/issues/13909
[2]: https://docs.docker.com/desktop/release-notes/#4272
I'm working on a blog in Astro, and I'm trying to render the markdown from the blog post to HTML using @astropub/md.

**Sample markdown blog post**

```md
---
title: Sample image
meta:
  description: This is a simple post to say hello to the world.
pubDate: 2024-03-28
---

![an image](./images/mountain.jpeg)
```

**Blog overview Astro template**

```html
---
import { getCollection } from "astro:content";
import { Markdown } from "@astropub/md";

const allPosts = await getCollection("posts");
---
<ul>
  {
    allPosts.map((post: any) => (
      <li>
        <a href={`/blog/${post.slug}/`}>{post.data.title}</a>
        <Markdown of={post.body} />
      </li>
    ))
  }
</ul>
```

**The issue**

The image only loads on the single blog post, not in the blog overview. It's almost as if Vite isn't parsing images loaded through @astropub/md's `<Markdown />` component.

Blog overview (doesn't load): the `src` of the `img` tag is `./images/mountain.jpeg` [screenshot](https://i.stack.imgur.com/n0koj.png)

Blog post (works fine): the `src` of the `img` tag is `/_image?href=...` [screenshot](https://i.stack.imgur.com/hL3tt.jpg)
Recursion error when combining abstract factory pattern with delegator pattern
|python|design-patterns|
I have a component which takes some state as props and displays it on the screen. I need to fade out the component, then update the state, and only then fade it back in. I've seen plenty of examples for specific cases, but not for mine. The catch is that I have multiple buttons which dictate the value shown on the screen. For this, I need the value of the clicked button to be passed to `useSpring`, which I've only found possible using a unified state object with a temp field.

```
import React, { useState } from 'react';
import { useSpring, animated } from 'react-spring';
import DisplayComponent from './DisplayComponent';

const FlexRowContainer = () => {
  const [state, setState] = useState({ message: 1, temp: 0 });

  const fadeAnimation = useSpring({
    opacity: state.message != state.temp ? 0 : 1,
    onRest: () => {
      // Check if we're at the end of the fade-out
      if (state.message != state.temp) {
        // Update the state here to trigger the fade-in next
        setState((prev) => ({ ...prev, message: prev.temp }));
      }
    },
  });

  const handleClick = (n: number) => {
    // Start the fade-out
    setState((prev) => ({ ...prev, temp: n }));
  };

  return (
    <div>
      <button onClick={() => handleClick(1)}>Change number 1</button>
      <button onClick={() => handleClick(2)}>Change number 2</button>
      <button onClick={() => handleClick(3)}>Change number 3</button>
      <animated.div style={fadeAnimation}>
        <DisplayComponent state={state} />
      </animated.div>
    </div>
  );
};

export default FlexRowContainer;
```

The code works perfectly, but I remain curious whether there isn't a simpler alternative. I tried building something with `useTransition` and the `reverse` field but quickly overcomplicated everything. I'm sure the solution is extremely simple, something like [this answer][1].

[1]: https://stackoverflow.com/questions/65129251/react-spring-fade-images-in-out-when-prop-changes
react spring - Fade in and out solution seems too complicated
|reactjs|fade|react-spring|
In simple terms, if you were about to hop onto a plane without any Internet connection… before departing you could just do `git fetch origin <branch>`. It would fetch all the changes onto your computer, but keep them separate from your local development/workspace. On the plane, you could make changes to your local workspace, then merge them with what you've previously fetched and resolve potential merge conflicts, all without a connection to the Internet. And unless someone had made *new* changes to the remote repository, then upon arriving at the destination you would do `git push origin <branch>` and go get your coffee.

----

From this awesome [Atlassian tutorial][2]:

> The `git fetch` command downloads commits, files, and refs from a remote repository into your local repository.
>
> Fetching is what you do when you want to see what everybody *else* has been working on. It’s similar to SVN update in that it lets you see how the central history has progressed, but it doesn’t force you to actually merge the changes into your repository. Git **isolates fetched content from existing local content**; it has absolutely **no effect on your local development work**. Fetched content has to be explicitly checked out using the `git checkout` command. This makes fetching a safe way to review commits before integrating them with your local repository.
>
> When downloading content from a remote repository, `git pull` and `git fetch` commands are available to accomplish the task. You can consider `git fetch` the 'safe' version of the two commands. It will download the remote content, but not update your local repository's working state, leaving your current work intact. `git pull` is the more aggressive alternative; it will download the remote content for the active local branch and immediately execute `git merge` to create a merge commit for the new remote content. If you have pending changes in progress this will cause conflicts and kick off the merge conflict resolution flow.

----

With `git pull`:

- You don't get any isolation.
- Nothing needs to be explicitly checked out, because it implicitly does a `git merge`.
- The merging step will affect your local development and *may* cause conflicts.
- It's basically NOT safe. It's aggressive.
- Unlike `git fetch`, which only affects your `.git/refs/remotes`, `git pull` will affect both your `.git/refs/remotes` **and** your `.git/refs/heads/`.

----

**Hmmm... so if I'm not updating the working copy with `git fetch`, then where am I making changes? Where does Git fetch store the new commits?**

Great question. First and foremost, the `heads` or `remotes` don't store the new commits. They just have [pointers](https://stackoverflow.com/questions/3965676/why-did-my-git-repo-enter-a-detached-head-state/65847406#65847406) to commits. So with `git fetch` you download the latest [git objects](https://matthew-brett.github.io/curious-git/git_object_types.html) (blob, tree, commit; to fully understand the objects watch [this video on git internals](https://www.youtube.com/watch?v=lG90LZotrpo)), but only update your `remotes` pointer to point to the latest commit of that branch. It's still isolated from your working copy, because your branch's pointer in the `heads` directory hasn't updated. It will only update upon a `merge`/`pull`. But again, where? Let's find out.

In your project directory (i.e., where you do your `git` commands) do:

1. `ls`. This will show the files & directories. Nothing cool, I know.
2. Now do `ls -a`. This will show [dot files][3], i.e., files beginning with `.` You will then be able to see a directory named `.git`.
3. Do `cd .git`. This will obviously change your directory.
4. Now comes the fun part; do `ls`. You will see a list of directories. We're looking for `refs`. Do `cd refs`.
5. It's interesting to see what's inside all the directories, but let's focus on two of them:
`heads` and `remotes`. Use `cd` to check inside them too.
6. *Any* `git fetch` that you do will update the pointer in the `/.git/refs/remotes` directory. It **won't** update anything in the `/.git/refs/heads` directory.
7. *Any* `git pull` will first do the `git fetch` and update items in the `/.git/refs/remotes` directory. It will then **also** merge with your local branch and change the head inside the `/.git/refs/heads` directory.

----

A very good related answer can also be found in *[Where does 'git fetch' place itself?][4]*. Also, look for "Slash notation" in the [Git branch naming conventions][5] post. It helps you better understand how Git places things in different directories.

-----

To see the actual difference, just do:

    git fetch origin master
    git checkout master

If the remote master was updated you'll get a message like this:

    Your branch is behind 'origin/master' by 2 commits, and can be fast-forwarded.
    (use "git pull" to update your local branch)

If you didn't `fetch` and just did `git checkout master`, then your local git wouldn't know that 2 commits have been added. And it would just say:

    Already on 'master'
    Your branch is up to date with 'origin/master'.

But that's outdated and incorrect. It's because git gives you feedback based solely on what it knows. It's oblivious to new commits that it hasn't pulled down yet...

----

Is there any way to see the new changes made on the remote while working on the branch locally?

- Some IDEs (e.g. Xcode) are super smart and use the result of a `git fetch` to annotate the lines of code that have been changed in the remote branch of your current working branch. If a line has been changed by both local changes and the remote branch, then that line gets annotated in red. This isn't a merge conflict. It's a *potential* merge conflict. It's a heads-up that you can use to resolve the future merge conflict before doing `git pull` from the remote branch. 
[![enter image description here][1]][1] ---- Fun tip: - If you fetched a remote branch e.g. did: git fetch origin feature/123 Then this would go into your remotes directory. It's still not available to your local directory. However, it simplifies your checkout to that remote branch by DWIM (Do what I mean): git checkout feature/123 you no longer need to do: git checkout -b feature/123 origin/feature/123 For more on that read [here](https://stackoverflow.com/a/56464547/5175709) [1]: https://i.stack.imgur.com/OEl72.png [2]: https://www.atlassian.com/git/tutorials/syncing/git-fetch [3]: https://unix.stackexchange.com/questions/21778/whats-so-special-about-directories-whose-names-begin-with-a-dot [4]: https://stackoverflow.com/questions/27554859/where-does-git-fetch-place-itself/27555444#27555444 [5]: https://web.archive.org/web/20200720025623/http://www.guyroutledge.co.uk:80/blog/git-branch-naming-conventions/
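The refs behavior described above can be demonstrated end to end in a tiny throwaway repo (this sketch assumes git ≥ 2.28 for `git init -b`): after a new upstream commit, `git fetch` moves only the remote-tracking ref, while the local branch ref stays put.

```shell
set -e
tmp=$(mktemp -d)

# Create a "remote" repo with one commit
git init -q -b main "$tmp/remote"
git -C "$tmp/remote" -c user.email=a@b -c user.name=a \
    commit -q --allow-empty -m "first"

# Clone it, then add a second commit upstream
git clone -q "$tmp/remote" "$tmp/local"
git -C "$tmp/remote" -c user.email=a@b -c user.name=a \
    commit -q --allow-empty -m "second"

# Fetch: only the remote-tracking ref moves
git -C "$tmp/local" fetch -q origin
echo "remote-tracking ref: $(git -C "$tmp/local" rev-parse origin/main)"
echo "local branch ref:    $(git -C "$tmp/local" rev-parse main)"
# The two hashes now differ; `git status` in the clone reports
# that the branch is behind and can be fast-forwarded.
```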
I have a table in the below format:

    ID   TYPE   VAL       VAL_NO
    ---  ----   -------   ------
    1    5,6    100,200   1,2
    2    7,8    400,500   4,5

I want the output to be as below.

Expected output:

    ID   TYPE   VAL   VAL_NO
    ---  ----   ---   ------
    1    5      100   1
    1    6      200   2
    2    7      400   4
    2    8      500   5

I have reposted the question because there is an extra column: previously I asked about 2 columns, but with more than 2 columns I am facing a lot of issues, like duplicates. Can someone please help me with this? Thanks a lot in advance for your help!
Converting multiple comma-separated columns into rows in BigQuery
Because these elements don't expose any particular method / property. For instance, they don't have a particular IDL attribute the way `<div>` has its obsolete [`align`](https://developer.mozilla.org/en-US/docs/Web/HTML/Element/div#Attributes) attribute, which requires a [DOM property](https://developer.mozilla.org/en-US/docs/Web/API/HTMLDivElement#Properties) only on `HTMLDivElement` instances:

<!-- begin snippet: js hide: false console: true babel: false -->

<!-- language: lang-js -->

    // HTMLDivElement does expose an align property
    console.log(document.querySelector('div').align); // "center"
    // HTMLElement doesn't
    console.log(document.querySelector('nav').align); // undefined

<!-- language: lang-html -->

    <div align="center"></div>
    <nav align="center"></nav>

<!-- end snippet -->

But the semantics aren't conveyed through the prototype chain anyway.
I see on the official PyTorch page that PyTorch supports Python versions 3.8 to 3.11. When I actually try to install PyTorch + CUDA in a Python 3.11 Docker image, it seems unable to find the CUDA drivers, e.g.

```
FROM python:3.11.4
RUN --mount=type=cache,id=pip-build,target=/root/.cache/pip \
    pip install torch torchaudio
ENV PATH="/usr/local/nvidia/bin:${PATH}" \
    NVIDIA_VISIBLE_DEVICES=all \
    NVIDIA_DRIVER_CAPABILITIES=all
```

Then, inside the container, I see that `torch.version.cuda` is `None`.

Compare this to

```
FROM pytorch/pytorch
RUN --mount=type=cache,id=pip-build,target=/root/.cache/pip \
    pip install torchaudio
ENV PATH="/usr/local/nvidia/bin:${PATH}" \
    NVIDIA_VISIBLE_DEVICES=all \
    NVIDIA_DRIVER_CAPABILITIES=all
```

Inside the container I see that `torch.version.cuda` is `12.1`.

PyTorch claims to be compatible with Python 3.11; has anybody actually been able to use PyTorch + CUDA with Python 3.11?

I tried running Docker images with Python 3.11.4. I also tried running the Conda Docker image and installing PyTorch, but kept getting errors that the images couldn't be found.
Installing PyTorch with Python version 3.11 in a Docker container
|python|docker|pytorch|cuda|python-3.11|
|mongodb|kubernetes|dns|kube-dns|
For larger sets of IDs, have a look at [AsQueryableValues](https://github.com/yv989c/BlazarTech.QueryableValues), which utilizes OPENJSON behind the scenes. This would allow you to reasonably retrieve the desired 18k rows in one call.

    var references = await db.Set<reference>()
        .Where(x => db.AsQueryableValues(refIds).Contains(x.reference_id))
        .AsNoTracking()
        .ToListAsync();

This could be improved further in a read scenario by projecting to a view model / DTO with just the columns you need from the reference (for instance, if references have a significant number of columns of various sizes but you only need a handful of them).
Use a list comprehension with `enumerate()`: newlist = [v for i, v in enumerate(oldlist) if i not in removelist] Making `removelist` a `set` instead will help speed things along: removeset = set(removelist) newlist = [v for i, v in enumerate(oldlist) if i not in removeset] Demo: >>> oldlist = ["asdf", "ghjk", "qwer", "tyui"] >>> removeset = set([1, 3]) >>> [v for i, v in enumerate(oldlist) if i not in removeset] ['asdf', 'qwer']
I have been learning how to use the Pi4J library and tried to implement the minimal example application that can be found on the [Pi4J website](https://pi4j.com/getting-started/minimal-example-application/). The code that I used is here:

```java
package org.PruebaRaspberry11;

import com.pi4j.Pi4J;
import com.pi4j.io.gpio.digital.DigitalInput;
import com.pi4j.io.gpio.digital.DigitalOutput;
import com.pi4j.io.gpio.digital.DigitalState;
import com.pi4j.io.gpio.digital.PullResistance;
import com.pi4j.util.Console;

public class PruebaRaspberry11 {

    private static final int PIN_BUTTON = 24;
    private static final int PIN_LED = 22;

    private static int pressCount = 0;

    public static void main (String[] args) throws InterruptedException, IllegalArgumentException {
        final var console = new Console();
        var pi4j = Pi4J.newAutoContext();

        var ledConfig = DigitalOutput.newConfigBuilder(pi4j)
                .id("led")
                .name("LED Flasher")
                .address(PIN_LED)
                .shutdown(DigitalState.LOW)
                .initial(DigitalState.LOW);
        var led = pi4j.create(ledConfig);

        var buttonConfig = DigitalInput.newConfigBuilder(pi4j)
                .id("button")
                .name("Press button")
                .address(PIN_BUTTON)
                .pull(PullResistance.PULL_DOWN)
                .debounce(3000L);
        var button = pi4j.create(buttonConfig);
        button.addListener(e -> {
            if (e.state().equals(DigitalState.LOW)) {
                pressCount++;
                console.println("Button was pressed for the " + pressCount + "th time");
            }
        });

        while (pressCount < 5) {
            if (led.state().equals(DigitalState.HIGH)) {
                led.low();
                console.println("LED low");
            } else {
                led.high();
                console.println("LED high");
            }
            Thread.sleep(500 / (pressCount + 1));
        }
        pi4j.shutdown();
    }
}
```

I've tested all the hardware (the LED, the button) to make sure that they work fine. In the console, the "LED high" / "LED low" lines are printed, but the LED doesn't blink and nothing happens when I press the button. What is stranger is that the first time I ran the code it worked, but I made some modifications and it never worked again, even after reverting the changes I made.
I am trying to find the proper way to query m2m data using the Supabase client. I have three tables:

#### public.sites

| column | format |
|:---- |:------:|
| id | int8 |
| name | text |

#### auth.users (supabase auth table)

| column | format |
|:---- |:------:|
| .. | ... |
| id | uuid |

#### public.members

| column | format |
|:---- |:------:|
| user_id | uuid, foreign key to auth.users.id |
| site_id | int8, foreign key to public.sites.id |

I need to query all the sites given the user id. I made several attempts without any success, such as:

```javascript
const sites = await supabase.from('sites').select('id, name, members:members!inner(site_id)').eq('members.user_id', user.id);
```

P.S. I also ran

```sql
alter table members add constraint pk_user_team primary key (user_id, site_id);
```

based on a few posts here on Stack Overflow.

Any advice?
Supabase query for many to many relationship
|google-bigquery|
Because a module creates its own scope, so that functions with similar names in different modules will not cause you trouble: https://www.showwcase.com/article/27294/module-scope

> In JavaScript, modules have a separate scope distinct from the global scope. This means that variables, functions, and other codes defined within a module are not accessible to code in other modules or the global scope unless explicitly exported. This helps to prevent naming conflicts and makes it easier to organize and reuse code.

So you will need to explicitly export from the module anything that you want to use outside of it. Alternatively, you can add it to the global scope, like:

```
import {createTasks} from './app.js';
window.createTasks = createTasks;

document.addEventListener("DOMContentLoaded", function () {
    function showNewTaskModal () {
        newTaskModal.classList.add('show');
        const html = `
            <div id="rowrow">
                <input type="text" placeholder="New Task"></input>
                <input type="text" placeholder="Add description"></input>
                <div id="row22">
                    <button> <img src="./Icons/clock.png"> </button>
                    <button> <img src="./Icons/star_empty.png"> </button>
                    <button onclick="createTasks()"> Save </button>
                </div>
            </div>
        `;
        newTaskModal.innerHTML = html;
        document.body.appendChild(newTaskModal);
    }
});
```
For completeness, here is another technique equivalent to `[0..]` for generating an infinite stream of natural numbers, using the [as-pattern][1] (`@`): naturals = 0 : allthefollowingnat naturals where allthefollowingnat (current : successors@(_)) = immediateSuccessor : allthefollowingnat successors where immediateSuccessor=current+1 While this technique is arguably overkill for generating a stream of natural numbers, it follows a template that is useful for defining various streams where [the next values depend on previous ones][2]. For instance, here is a [Fibonacci stream][3]: fibstream = 0 : 1 : allthefollowingfib fibstream where allthefollowingfib (previous : values@(current : _)) = next : allthefollowingfib values where next = current+previous [1]: https://stackoverflow.com/a/30326349/19661910 [2]: https://en.wikipedia.org/wiki/Recurrence_relation [3]: https://wiki.haskell.org/index.php?title=The_Fibonacci_sequence&oldid=65208#With_direct_self-reference
I am currently trying to get chromedriver running on my Mac in PyCharm.

```
from selenium import webdriver
import chromedriver_autoinstaller

chromedriver_autoinstaller.install()
```

However, despite having chromedriver already installed, PyCharm cannot find the module. How can this happen? I already tried it in VS Code, but I have the same problem there too.
Why is chromedriver not working in pycharm?
|python|pycharm|selenium-chromedriver|
|javascript|sql|supabase|supabase-database|supabase-js|
I am currently using VSCode with Jupyter notebooks. I use the feature where Shift+Enter sends the line of code to an Interactive Window, which works great. When I close and reopen my project, more often than not when I press Shift+Enter, VSCode creates a new Jupyter interactive window instead of using my previously saved one, which already has all my code. Is there a way to select the interactive window that I want, so that I don't have to re-execute all my code from scratch every time to fill in my notebook? I tried looking for these settings on the web but have not been able to find anything relevant.
Select the interactive window used with shift+enter in VSCode
|visual-studio-code|jupyter-notebook|
Okay, so I am using DMD to formulate a low-rank model of a system I'm working on. Everything works fine thus far, and I can reconstruct the states for the time I have measurements, so for $t \in [0, 10000]$ seconds. However, I want to forecast the next pred_steps = 100 seconds or so, and I know that the algorithm can do that; I am just not sure which formula I should use...

The book by Brunton and Kutz shows two formulas, and I am confused as to which one is the correct one / the one I should use in my case. The formulas in the book are:

A) $x_{k+1} = A x_k$

B) $x(t) = \Phi \exp(\Omega t) b$

A third formula is shown in a paper by Kutz et al.:

C) $V(t_{j+1}) = A V(t_j)$, with V being the right singular vector matrix

I understand that one is in discrete time and the other in continuous time, but I am not sure which I should use... Are they equivalent? Thank you in advance!

I have already tried implementing both, and to be honest the forecast results are very different from one to the other. So I am not sure which one would be the "correct" approach...
I want to fetch the child user IDs based on the parent ID. I found a solution here: https://stackoverflow.com/questions/45444391/how-to-count-members-in-15-level-deep-for-each-level-in-php

I tried this answer https://stackoverflow.com/a/45535568/23864372 but am getting some errors.

I have created a class:

```
<?php
Class Team extends Database
{
    private $dbConnection;

    function __construct($db)
    {
        $this->dbConnection = $db;
    }

    public function getDownline($id, $depth=5)
    {
        $stack = array($id);
        for($i=1; $i<=$depth; $i++) {
            // create an array of levels, each holding an array of child ids for that level
            $stack[$i] = $this->getChildren($stack[$i-1]);
        }
        return $stack;
    }

    public function countLevel($level)
    {
        // expects an array of child ids
        settype($level, 'array');
        return sizeof($level);
    }

    private function getChildren($parent_ids = array())
    {
        $result = array();
        $placeholders = str_repeat('?,', count($parent_ids) - 1). '?';
        $sql="select id from users where pid in ($placeholders)";
        $stmt=$this->dbConnection->prepare($sql);
        $stmt->execute(array($parent_ids));
        while($row=$stmt->fetch()) {
            $results[] = $row->id;
        }
        return $results;
    }
}
```

I am using the class like this:

```
$id = 4;
$depth = 2;

// get the counts of his downline, only 2 deep.
$downline_array = $getTeam->getDownline($id, $depth=2);
```

I am getting two errors. On the line `$placeholders = str_repeat('?,', count($parent_ids) - 1). '?';`:

> Fatal error: Uncaught TypeError: count(): Argument #1 ($value) must be of type Countable|array, int given

and second:

> Warning: PDOStatement::execute(): SQLSTATE[HY093]: Invalid parameter number: number of bound variables does not match number of tokens

on the lines

    $sql="select id from users where pid in ($placeholders)";
    $stmt=$this->dbConnection->prepare($sql);

I want to fetch the child user IDs up to 5 levels deep. PHP version: 8.1.
Converting Pandas DataFrame structure into Pytorch Dataset
|python|pandas|pytorch|
You get a NullReferenceException because when trying to Add to the list, it is, in fact, null. You make that very clear in your check:

    if (builtTasks == null)

You cannot call Add on a List variable whose value is null. You have to create an instance of that class to be able to call methods on it, either when you declare the variable or when you use it:

    private static List<builtTask> builtTasks = new List<builtTask>();

The reason why it works for the "tasks" variable is likely that it is a public field of a MonoBehaviour, which means it's serialized by Unity (which includes initializing the value to a valid instance). A static variable does not have that luxury, and you have to do it yourself.

It's also a good idea to not use "public" for serialized variables, but instead mark them [SerializeField] and make them private if possible. This makes it clear that Unity is handling this field, both to you and to anyone you show your code to.
If I understood the question properly you can do the following with `dplyr` package: mutate(df, Class = ifelse(lag(Class) != Class | is.na(lag(Class)), Class, NA)) This returns: # A tibble: 10 × 3 ID Class Score <dbl> <chr> <dbl> 1 1 A 45 2 2 NA 67 3 3 NA 87 4 4 C 33 5 5 NA 25 6 6 A NA 7 7 B 67 8 8 NA 88 9 9 D 21 10 10 NA NA
I’m fairly new to Python, so go easy on me. I’m trying to make the following inputs appear at the same time in the console:

    input("Please enter your feet: ")
    input("Please enter your inches: ")

Desired output:

    Please enter your feet:
    Please enter your inches:

at the same time, for the user to enter. Whenever I look up how to take multiple inputs at the same time, I usually get a variation of answers stating to use the split function and to assign the input to two different variables:

    x, y = input("Enter two nums: ").split()

But I feel that this would be confusing for the user, so I want to be able to prompt them with two different entries at the same time, on different lines.
I have a file in `xlsx` format on Google Drive.

[![enter image description here][1]][1]

This file is updated each month by adding a new version of the file. I have a second file (it's a Google Sheet) where I want to add some charts based on data from the first file. My first approach was the Google Sheets function `IMPORTRANGE`, but it doesn't work with the `xlsx` file format.

[![enter image description here][2]][2]

My second approach was to create an Apps Script with a scheduler for synchronization:

```
function synchronizujDaneZExcela() {
  // Settings
  var kluczPlikuExcela = "_____MY_KEY_____";
  var nazwaArkuszaExcela = "Podsumowanie"; // Name of the tab in the Excel file
  var nazwaArkuszaGoogle = "Podsumowanie kopia"; // Name of the tab in the Google Sheet

  // Get the Excel file
  var plik = DriveApp.getFileById(kluczPlikuExcela);

  // If the Excel file was found
  if (plik) {
    // Get the data from the "Podsumowanie" tab in the Excel file
    var excelData = SpreadsheetApp.open(plik).getSheetByName(nazwaArkuszaExcela).getDataRange().getValues();

    // Open the Google Sheets spreadsheet
    var spreadsheet = SpreadsheetApp.getActiveSpreadsheet();

    // Check whether the "Podsumowanie kopia" tab exists; if not, create a new one
    var arkusz = spreadsheet.getSheetByName(nazwaArkuszaGoogle);
    if (!arkusz) {
      arkusz = spreadsheet.insertSheet();
      arkusz.setName(nazwaArkuszaGoogle);
    }

    // Insert the data into the Google Sheets tab
    arkusz.clear(); // Clear the "Podsumowanie kopia" tab
    arkusz.getRange(1, 1, excelData.length, excelData[0].length).setValues(excelData);

    Logger.log("Synchronization finished successfully.");
  } else {
    Logger.log("The Excel file was not found.");
  }
}
```

But here I'm getting the error `Exception: Service Spreadsheets failed while accessing document with id ....`

Do you have an idea or another approach for this case?

[1]: https://i.stack.imgur.com/z1kgj.png
[2]: https://i.stack.imgur.com/gTpmC.png
I installed Arch Linux (current version archlinux-2024.03.01-x86_64) on VM VirtualBox 7.0 (didn't use the archinstall script, in case that's relevant).

I installed Docker and git with

    pacman -S docker
    pacman -S git

and ran the `pacman -Sy` and `pacman -Su` commands.

Then I followed the steps from this link: https://learn.microsoft.com/en-us/aspnet/core/host-and-deploy/docker/building-net-docker-images?view=aspnetcore-8.0

I cloned the docker dotnet app with

    git clone https://github.com/dotnet/dotnet-docker

built it as per the instructions

    docker build -t aspnetapp .

and got it to run (not sure why it didn't run on the first try, but it eventually ran):

    docker run -it --rm -p 5000:8080 --name aspnetcore_sample aspnetapp

This is the output after executing the docker run command:

[![enter image description here][1]][1]

I can see from tty2 (another terminal) that the container is running, with the `docker ps` command:

[![enter image description here][2]][2]

However, I can't do a wget to localhost:5000 or use lynx:

[![enter image description here][3]][3]

What am I missing to browse the URL/web app?

Note: I have network/internet in the Arch virtual machine but haven't configured anything else.

**Update / Solved**

I executed the run command with a typo, `5000:8000` instead of `5000:8080` (thanks [Exploding Kitten][4] for spotting the typo; looks like my eyes aren't that sharp anymore).

Once I re-ran the command properly, I can lynx to the http://localhost:5000 URL:

[![enter image description here][5]][5]

Alternatively, running from the repo also works, as mentioned by [jdchris100][6]:

    docker run -it --rm -p 5000:8080 --name aspnetcore_sample mcr.microsoft.com/dotnet/samples:aspnetapp

[![enter image description here][7]][7]

[1]: https://i.stack.imgur.com/YqLUo.png
[2]: https://i.stack.imgur.com/qOtIV.png
[3]: https://i.stack.imgur.com/96Xwk.png
[4]: https://stackoverflow.com/users/743754/exploding-kitten
[5]: https://i.stack.imgur.com/w4XJg.png
[6]: https://stackoverflow.com/users/12695057/jdchris100
[7]: https://i.stack.imgur.com/zgJ5y.png
I have the following tables that are connected via "NBR":

````
Process
````
[![Process table][1]][1]

````
Service
````
[![Service table][2]][2]

[1]: https://i.stack.imgur.com/RZzIn.png [2]: https://i.stack.imgur.com/fBNAS.png

I want to filter out records from the "Service" table so that if any "NBR" has a "DIV" matching any of the following, it will not be included: ('AR91', 'AR10', 'AG55', 'AZ56', 'CZ12')

I want the results of my query to be:

````
NBR | MODEL
600   5
601   7
````

This is my attempt, but it is still returning records that have a matching "DIV":

````
select distinct s.nbr, s.model
from process p
left outer join service s on p.nbr = s.nbr
where p.unit = 'MC'
and p.car = 'M'
and p.bank = '1'
and p.paid in ('NY', 'NJ')
and s.paid = 'NY'
and length(s.ymd) = 8
and s.ymd between to_number('20240303', '99999999') and to_number('20240311', '99999999')
AND not exists (select 1
                from service s2
                where s2.div IN ('AR91', 'AR10', 'AG55', 'AZ56', 'CZ12')
                and s2.nbr = s.nbr)
````
When I try to knit an Rmarkdown document containing a flextable to PDF, it just stalls. No issues knitting to HTML or with other types of tables like kable. Minimal reprex below. When rendering, it will just freeze on 'output file: example.knit.md' and never finish rendering.

    ---
    title: "Untitled"
    output: pdf_document
    date: "2024-03-08"
    ---

    ```{r}
    flextable::flextable(cars)
    ```

I installed tinytex and can successfully knit PDFs that do not have any flextables in them. After reading this link [getting-flextable-to-knit-to-pdf](https://stackoverflow.com/questions/64935647/getting-flextable-to-knit-to-pdf) I discovered this was a known issue with earlier versions of flextable. However, following the solution there I added 'latex_engine: xelatex', but still no luck.

I'm running flextable 0.8.3 and pandoc 2.19.2.

When rendering:

    processing file: flextable_example.Rmd
    |................................... | 50% ordinary text without R code
    |......................................................................| 100% label: unnamed-chunk-1
    "C:/Program Files/RStudio/resources/app/bin/quarto/bin/tools/pandoc" +RTS -K512m -RTS flextable_example.knit.md --to latex --from markdown+autolink_bare_uris+tex_math_single_backslash --output flextable_example.tex --lua-filter "C:\Users\a0778291\AppData\Local\R\win-library\4.2\rmarkdown\rmarkdown\lua\pagebreak.lua" --lua-filter "C:\Users\a0778291\AppData\Local\R\win-library\4.2\rmarkdown\rmarkdown\lua\latex-div.lua" --embed-resources --standalone --highlight-style tango --pdf-engine pdflatex --variable graphics --variable "geometry:margin=1in" --include-in-header "C:\Users\a0778291\AppData\Local\Temp\RtmpoJlm1X\rmarkdown-str7d047674de6.html"

    output file: flextable_example.knit.md

**freezes here**

Session info below.
R version 4.2.2 (2022-10-31 ucrt) Platform: x86_64-w64-mingw32/x64 (64-bit) Running under: Windows 10 x64 (build 22621) attached base packages: [1] stats graphics grDevices utils datasets methods base other attached packages: [1] **flextable_0.8.3** loaded via a namespace (and not attached): [1] Rcpp_1.0.9 lattice_0.20-45 png_0.1-8 digest_0.6.31 R6_2.5.1 grid_4.2.2 jsonlite_1.8.4 evaluate_0.19 zip_2.2.2 cli_3.6.1 [11] rlang_1.1.1 gdtools_0.2.4 uuid_1.1-0 data.table_1.14.6 rstudioapi_0.14 xml2_1.3.3 Matrix_1.5-1 reticulate_1.26.9000 rmarkdown_2.19 tools_4.2.2 [21] officer_0.5.0 xfun_0.36 fastmap_1.1.0 compiler_4.2.2 systemfonts_1.0.4 base64enc_0.1-3 htmltools_0.5.4 knitr_1.41
According to the [bash manual](https://www.gnu.org/software/bash/manual/bash.html#index-complete_002dfilename-_0028M_002d_002f_0029), you should already have ...

- `complete-filename` bound to `C-x /` (first <kbd>ctrl</kbd><kbd>x</kbd> together, then <kbd>/</kbd>).
- `possible-filename-completions` bound to `M-/` (in most terminals, either <kbd>alt</kbd><kbd>/</kbd> together, or first <kbd>esc</kbd> then <kbd>/</kbd>)

What do they do? Assume your cursor is at the end of `cmd fil` and you have files `file1` and `file2` in your working directory. The former completes the command to `cmd file`. When pressed again, it shows the list of possible completions: `file1 file2`. The latter shows the list directly, without completing anything.

### How to bind to a single key?

See [bash's `bind` command](https://www.gnu.org/software/bash/manual/bash.html#index-bind) and `man readline` in general. In general, it is pretty easy: To bind the key <kbd>a</kbd> to filename-completion, use `bind a:complete-filename`. For the backtick <kbd>\`</kbd> you would just do ``bind '`:complete-filename'``.

However, if your operating system is configured with a keyboard layout that has dead keys (*press <kbd>\`</kbd> once, nothing seems to happen, press <kbd>e</kbd>, you get `è`*), then you have to press <kbd>\`</kbd> followed by <kbd>space</kbd> (or press it twice) to produce the actual character, or your terminal and bash won't even see that you pressed that key. If you go for another key, special key, or key sequence, similar things can happen. E.g. <kbd>ctrl</kbd><kbd>c</kbd> is usually processed by the terminal before bash even sees it.
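If you want a binding to survive new shell sessions, the same thing can go into readline's `~/.inputrc` instead of a one-off `bind` call (a sketch; <kbd>ctrl</kbd><kbd>f</kbd> is just an example key choice):

```
# ~/.inputrc -- readline reads this at startup
"\C-f": complete-filename
Meta-f: possible-filename-completions
```

After editing the file, start a new shell or press <kbd>C-x</kbd><kbd>C-r</kbd> (`re-read-init-file`) to pick up the change.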
I have a table in the below format:

    ID   TYPE   VAL       VAL_NO
    ---  ----   -------   ------
    1    5,6    100,200   1,2
    2    7,8    400,500   4,5

I want the output to be as below.

Expected output:

    ID   TYPE   VAL   VAL_NO
    ---  ----   ---   ------
    1    5      100   1
    1    6      200   2
    2    7      400   4
    2    8      500   5

I have reposted the question as there is an extra column; previously I checked for 2 columns, but now with more than 2 columns I am facing a lot of issues like duplicates, etc.

Can someone please help me with this? Thanks a lot in advance for your help!
Something like the below; add the port number after the IP or domain name, separated by a comma.

    <add name="LocalSqlServer" connectionString="Data Source=www.yourdomain.com,1433;Initial Catalog=mydatabase;Persist Security Info=True;User ID=admin;Password=yourstringpassword" providerName="System.Data.SqlClient" />
For that goal you must use the same queue for all consumers. The `AnonymousQueue` makes each of them subscribe with its own queue, so all of them receive the same message.

See more info in this tutorial: https://www.rabbitmq.com/tutorials/tutorial-two-spring-amqp

**UPDATE**

I thought about this, though:

    @Configuration
    public class RabbitMQConfig {

        @Bean
        public TopicExchange exchange(){
            return new TopicExchange("amq.topic");
        }

        @Bean
        public Queue matrixQueue(){
            return new Queue("matrix.queue");
        }

        @Bean
        public Binding binding(TopicExchange exchange, Queue matrixQueue) {
            return BindingBuilder.bind(matrixQueue)
                    .to(exchange)
                    .with("*.*.rabbit");
        }
    }

Since you didn't realize that a `DirectExchange` matches the routing key only exactly.
Can't knit Rmd to PDF when it contains a flextable
|r|r-markdown|flextable|
I am writing a program that embeds Deno. However, Deno has a lot of dependencies pinned to static versions, which conflict with the dependencies I have in my main project. To get around this, I'm wondering if it's possible to compartmentalize my Deno integration into a separate crate in my workspace that builds a dynamic Rust library which statically contains all of its own dependencies. I'd then consume the dynamic library dynamically from my main application.

I know that Rust doesn't have a stable ABI but, from what I understand, a library compiled with the same version of Rust as the consumer is compatible. Given that the library crate containing Deno will live/be compiled within the same workspace and be built as a prerequisite to the main binary, I assume that's fine.

Is this possible, and would it resolve my issues? Is `crate-type` `rlib` the correct way to do this, or would it be `lib`?
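For reference, this is the kind of Cargo manifest setting I'm referring to (a sketch; `dylib` is my current guess at the right value, which is part of the question):

```toml
# Cargo.toml of the hypothetical wrapper crate that embeds Deno
[lib]
# "dylib" produces a Rust dynamic library; "rlib"/"lib" are the static variants
crate-type = ["dylib"]
```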
|javascript|reactjs|firebase|google-cloud-firestore|
I extracted a table from a pdf using

    for(i in 59:68) {
      pdf_file <- "UBI Kenya paper.pdf"
      table_i <- extract_tables(pdf_file, pages = i)
      tname <- paste0("table_E.", i-58, "_full")
      assign(tname, table_i)
    }

I didn't use as.data.frame because it returned an error message. Instead I did this:

    table_E.9_full <- data.table::as.data.table(table_E.9_full)
    table_E.9 <- table_E.9_full[-c(10:16),]

This gives me a table which looks like this: [![enter image description here][1]][1]

where the Enterprises and Net Revenues columns are combined into one for every category (Retail, Manufacturing, Transportation, Services), separated by a space, when what I want is for it to look like the original here: [![enter image description here][2]][2]

How do I split rows 2-9 into two columns each for V2-V5, so that the column titles would be "Retail Trade - # Enterprises", "Retail Trade - Net Revenues", "Manufacturing - # Enterprises", "Manufacturing - Net Revenues", etc., with the correct values in the correct columns?

`dput(head(table_E.9))` returns:

    structure(list(V1 = c("", "", "", "Long Term Arm", "", "Short Term Arm" ), V2 = c("Retail Trade", "# Enterprises Net Revenues", "(1) (2)", "3.89�\u0088\u0097�\u0088\u0097�\u0088\u0097 1601.42�\u0088\u0097", "[1.28] [824.74]", "2.34�\u0088\u0097�\u0088\u0097 464.60�\u0088\u0097" ), V3 = c("Manufacturing", "# Enterprises Net Revenues", "(3) (4)", "0.02 51.90", "[.27] [120.79]", "0.03 17.82"), V4 = c("Transportation", "# Enterprises Net Revenues", "(5) (6)", "0.53�\u0088\u0097 100.76", "[.29] [85.37]", "-0.12 -3.85"), V5 = c("Services", "# Enterprises Net Revenues", "(7) (8)", "0.23 198.64", "[.33] [205.6]", "-0.04 70.95")), row.names = c(NA, -6L), class = c("data.table", "data.frame"), .internal.selfref = <pointer: 0x000001716fd35930>)

[1]: https://i.stack.imgur.com/mpJqF.png [2]: https://i.stack.imgur.com/xSV1g.png
Splitting multiple columns of a table extracted from a PDF into separate columns
|split|rstudio|multiple-columns|
Copy data from Excel sheet to Google Sheet (both located on Google Drive)
|google-sheets|
I'm trying to switch between tabs, but what happens is that I can switch the tab, yet the Transferred tab disappears.

```vue
<a-tabs :current="current" @change="onChange">
  <a-tab-pane key="1" v-if="current === 0">
    <span slot="tab"> <a-icon type="folder-open" theme="twoTone" /> Pending ({{ pendingCount }}) </span>
    <pending-list @go-to-transferred="nextTab" @dataCount="updatePendingCount" />
  </a-tab-pane>
  <a-tab-pane key="2" v-if="current === 1">
    <span slot="tab"><a-icon type="folder" theme="twoTone" />Transferred </span>
    <done-list @transferCount="updateTransferCount" />
  </a-tab-pane>
</a-tabs>

<script>
export default {
  components: { PendingList, DoneList },
  data() {
    return {
      pendingCount: 0,
      transferCount: 0,
      current: 0,
    }
  },
  methods: {
    nextTab() {
      console.log('from Pending List');
      this.current = 1
    },
    onChange(current) {
      this.current = current
    },
    updatePendingCount(count) {
      console.log('Received count:', count);
      this.pendingCount = count;
    },
    updateTransferCount(count) {
      this.transferCount = count;
    },
  },
}
</script>
```

This is a button from another component that I've tried to emit from:

```js
download() {
  if (this.rowSelection.selectedRowKeys.length > 0) {
    this.$emit('go-to-transferred');
    this.$message.success('file exported!');
  } else {
    this.$message.warn("Select employee first!");
  }
},
```

![It should be like this.](https://i.stack.imgur.com/Ko6Xp.png) ![But it became like this. the tab for transferred missing](https://i.stack.imgur.com/9l4F3.png)

I searched a few tutorials, but they didn't solve my problem.
To get a SOLIDWORKS Simulation study name, use the `Name` property (`ICWStudy`) as documented [here][1]. To access the study results, use the `Results` property (`ICWStudy`) as documented [here][2]. Below you can find VBA code.

    Option Explicit

    Sub main()

        Dim SwApp As SldWorks.SldWorks
        Dim COSMOSWORKS As CosmosWorksLib.COSMOSWORKS
        Dim CWAddinCallBack As CosmosWorksLib.CWAddinCallBack
        Dim ActDoc As CosmosWorksLib.CWModelDoc
        Dim StudyMngr As CosmosWorksLib.CWStudyManager
        Dim Study As CosmosWorksLib.CWStudy
        Dim activeStudyIndex As Integer
        Dim CWResult As CosmosWorksLib.CWResults
        Dim errorCode As Long
        Dim stress As Variant
        Dim nodeMin As Integer
        Dim nodeMax As Integer
        Dim stressMin As Variant
        Dim stressMax As Variant

        'Connect to SOLIDWORKS
        Set SwApp = Application.SldWorks
        If SwApp Is Nothing Then Exit Sub

        'Add-in callback
        Set CWAddinCallBack = SwApp.GetAddInObject("SldWorks.Simulation")
        Set COSMOSWORKS = CWAddinCallBack.COSMOSWORKS

        'Get active document
        Set ActDoc = COSMOSWORKS.ActiveDoc()

        'Get the active study
        Set StudyMngr = ActDoc.StudyManager()
        activeStudyIndex = StudyMngr.ActiveStudy
        Set Study = StudyMngr.GetStudy(activeStudyIndex)

        'Print the name of the study to the immediate window
        Debug.Print Study.Name

        'Get the results
        Set CWResult = Study.Results
        If CWResult Is Nothing Then Exit Sub

        'Get the stress results
        stress = CWResult.GetMinMaxStress(0, 0, 1, Nothing, 0, errorCode)

        'Print the stress results to the immediate window
        nodeMin = stress(0)
        stressMin = stress(1)
        nodeMax = stress(2)
        stressMax = stress(3)
        Debug.Print nodeMin
        Debug.Print stressMin
        Debug.Print nodeMax
        Debug.Print stressMax

    End Sub

I tested it in a part document with only one study, called `Test Study`, and it correctly printed the name and the stress results to the immediate window. Make sure you set a reference to the SOLIDWORKS Simulation 20## type library; it's not set by default.
[1]: https://help.solidworks.com/2019/english/api/swsimulationapi/SolidWorks.Interop.cosworks~SolidWorks.Interop.cosworks.ICWStudy~Name.html [2]: https://help.solidworks.com/2019/english/api/swsimulationapi/SOLIDWORKS.Interop.cosworks~SOLIDWORKS.Interop.cosworks.ICWStudy~Results.html
I am working on an internal Angular project where I need to expose multiple services within a single dashboard. The solution involves having one project act as the outer layout, and multiple projects that will be displayed within it. The number of internal projects is likely to increase in the future, and they will likely be managed separately by service.

**My question is:** Is there a way to build each of these internal projects without affecting other services? I want the build to only modify the files of the service being built, and only update those changed files in the repository.

I understand that the monorepo project management approach can help with this, and I have tried to introduce the NX tool. However, it seems that NX builds all related projects, not just the individual project. (For example, if the outer layout project references an internal project, changing the internal project will also change the build files of the outer project.)

Any help would be greatly appreciated.

Additional details:

Angular version: 16.2.12
NX version: 18.0.4
Operating system: Mac OS 14.0

I tried two approaches:

1. Creating a library in a sub-project:
   - I created a library project using `nx g @nx/angular:library my-lib`.
   - I built the library using `nx build my-lib`.
   - I referenced the library in the external layout project using `import { MyComponent } from 'my-lib';`.

2. Creating an app in a sub-project:
   - I created an app project using `nx g @nx/angular:application my-app`.
   - I built the app using `nx build my-app`.
   - I referenced the app in the external layout project using `<my-app></my-app>`.

**In both cases:**

- I expected the external layout project to only include the changed files from the sub-project.
- I expected the external layout project to not be affected by changes to the sub-project.

**However:**

- The build artifacts of the external layout project included all the files from the sub-project.
- Changing the sub-project caused the external layout project to be rebuilt. **This is not ideal because:** - It makes it difficult to manage and maintain the projects independently. - It can lead to unnecessary rebuilds and performance issues. I am looking for a way to build each project independently without affecting other projects.
How to build individual projects in a monorepo without affecting other projects (Angular, NX)
|angular|build|monorepo|nrwl-nx|
If you could simplify and share the problem you are attempting to solve with the implementation shared, you would most likely have a better and easier solution. For what it seems like, the following should do what you are looking for (barring the `merging`) Function<Pair<String, String>, Integer> lengthOfKey = p -> p.getKey().length(); Collector<Pair<String, String>, ?, Map<String, String>> convertToValueMapWithUpperCase = Collectors.toMap(Pair<String, String>::getValue, e -> e.getValue().toUpperCase()); Map<Integer, Map<String, String>> groupingAndTransformation = list.stream() .collect(Collectors.groupingBy(lengthOfKey, convertToValueMapWithUpperCase)); The result of the following when executed for the sample List<Pair<String, String>> list = List.of( new Pair("naman", "stackoverflow"), new Pair("daniel", "dubov"), new Pair("vishy", "anand"), new Pair("user", "sample"), new Pair("comments", "section")); is the same as the code shared by you(when printed): ``` {4={sample=SAMPLE}, 5={anand=ANAND, stackoverflow=STACKOVERFLOW}, 6={dubov=DUBOV}, 8={section=SECTION}} ```
Here is a slightly edited version that will work in the expected way: <!-- begin snippet: js hide: false console: true babel: false --> <!-- language: lang-js --> $("#test-form") .on("input",collectAllChecked) .submit(function(ev){ // at submit time: copy input text to #summary div $("#summary").html("selected-cases : <br/>"+$("#selected-cases").val()) ev.preventDefault(); }) function collectAllChecked(ev){ const checked=$(".form-check-input:checked").get().map(el=>el.value) $("#selected-cases").val(checked.join(",")); } collectAllChecked(); <!-- language: lang-css --> #selected-cases {width:500px} <!-- language: lang-html --> <script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.7.1/jquery.min.js"></script> <form id="test-form"> <div class="col-12"> <label for="selected-cases" class="form-label">selected cases<span class="text-muted"> *</span></label> <input type="text" id="selected-cases" name="selected-cases" class="form-control"> </div> <hr class="my-4"><div class="form-check"> <label class="form-check-label"><input type="checkbox" class="form-check-input" value="3966382" checked>Save CaseID : 3966382</label> </div><div class="form-check"> <label class="form-check-label"><input type="checkbox" class="form-check-input" value="4029501">Save CaseID : 4029501</label> </div><div class="form-check"> <label class="form-check-label"><input type="checkbox" class="form-check-input" value="4168818" checked>Save CaseID : 4168818</label> </div><div class="form-check"> <label class="form-check-label"><input type="checkbox" class="form-check-input" value="4168801">Save CaseID : 4168801</label> </div><div class="form-check"> <label class="form-check-label"><input type="checkbox" class="form-check-input" value="4168802">Save CaseID : 4168802</label> </div><div class="form-check"> <label class="form-check-label"><input type="checkbox" class="form-check-input" value="4168822" checked>Save CaseID : 4168822</label> </div><div class="form-check"> <label 
class="form-check-label"><input type="checkbox" class="form-check-input" value="3966388">Save CaseID : 3966388</label> </div><div class="form-check"> <label class="form-check-label"><input type="checkbox" class="form-check-input" value="4114087">Save CaseID : 4114087</label> </div><div class="form-check"> <label class="form-check-label"><input type="checkbox" class="form-check-input" value="3966385">Save CaseID : 3966385</label> </div><div class="form-check"> <label class="form-check-label"><input type="checkbox" class="form-check-input" value="4169838">Save CaseID : 4169838</label> </div><div class="form-check"> <label class="form-check-label"><input type="checkbox" class="form-check-input" value="4169843">Save CaseID : 4169843</label> </div><div class="form-check"> <label class="form-check-label"><input type="checkbox" class="form-check-input" value="4168829">Save CaseID : 4168829</label> </div><div class="form-check"> <label class="form-check-label"><input type="checkbox" class="form-check-input" value="4168828">Save CaseID : 4168828</label> </div><div class="form-check"> <label class="form-check-label"><input type="checkbox" class="form-check-input" value="4168835">Save CaseID : 4168835</label> </div><div class="form-check"> <label class="form-check-label"><input type="checkbox" class="form-check-input" value="4169835">Save CaseID : 4169835</label> </div><div class="form-check"> <label class="form-check-label"><input type="checkbox" class="form-check-input" value="4169836">Save CaseID : 4169836</label> </div><div class="form-check"> <label class="form-check-label"><input type="checkbox" class="form-check-input" value="4169837">Save CaseID : 4169837</label> </div><div class="form-check"> <label class="form-check-label"><input type="checkbox" class="form-check-input" value="4169839">Save CaseID : 4169839</label> </div><div class="form-check"> <label class="form-check-label"><input type="checkbox" class="form-check-input" value="3946200">Save CaseID : 
3946200</label> </div><div class="form-check"> <label class="form-check-label"><input type="checkbox" class="form-check-input" value="3946201">Save CaseID : 3946201</label> </div> <button class="w-10 btn btn-primary btn-sm" type="submit">Save</button> <div class="col-md-5"> <div id="summary"> selected-cases : <br/> </div> </div> </form> <!-- end snippet --> I also incorporated the recommendations of @RokoC.Buljan: the event handler for "input" events is now a separate function that is also used to compute the initial state.
In my Matplotlib plot, I've created a number of Circle artists that, when one is clicked, should result in some information appearing in an annotation. I have the basic code working, except that I am seeing multiple calls to the handler occur even when the artist is only clicked one time. Normally, I'd try to debug this with the Python debugger, except that it won't work in the Matplotlib event loop.

The handler code for this event is below. The artists parameter is a dict, keyed by the artist and holding a data structure as its value. This is the info to be put into the annotation.

    def on_pick(event, artists):
        artist = event.artist
        print(f"pick event, {artist} class {artist.__class__}")
        if artist in artists:
            x = artist.center[0]
            y = artist.center[1]
            a = plt.annotate(f"{artists[artist].getX()}", xy=(x,y), xycoords='data')
            artist.axes.figure.canvas.draw_idle()
            print("done")
        else:
            print("on_pick: artist not found")

Each artist has its picker attribute set to True, and the on_pick function is installed as the handler for the pick event. When I run my program, the plot is created, and I click on one of the circles. I do see the annotation appear, as expected, but then I get a second, or more, invocations of the handler, as seen here.

    pick event, Circle(xy=(0.427338, 0.431087), radius=0.00287205) class <class 'matplotlib.patches.Circle'>
    done
    pick event, Circle(xy=(0.427338, 0.431087), radius=0.00287205) class <class 'matplotlib.patches.Circle'>
    on_pick: artist not found

The first two lines are the correct ones, while the second two are from the unexpected 2nd invocation. I'm also mystified as to why the artist isn't being found, since it's the exact same artist as in the first call. Any thoughts as to what might be happening?
Picker event handler is being invoked multiple times for a single event
|python|matplotlib|events|picker|
In the AWS console, go to **Lambda** -> **Functions** -> (select the function) -> **Configuration** -> **Function URL** -> **Edit**

Check **Configure cross-origin resource sharing (CORS)** and enter `*` for **Allow origin**, **Allow headers**, and **Allow methods**

**Save**
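If you prefer the command line, the same configuration should be achievable with the AWS CLI (a sketch; `my-function` is a placeholder, and a wildcard for everything is usually too permissive for production):

```shell
# Requires AWS CLI v2 with valid credentials; my-function is a placeholder name
aws lambda update-function-url-config \
  --function-name my-function \
  --cors '{"AllowOrigins": ["*"], "AllowHeaders": ["*"], "AllowMethods": ["*"]}'
```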
Managed to find a solution: {%- if related_collection.products_count > 0 -%} <div class="product-recommendations page-width"> <div class="product-grid grid--uniform" data-aos="overflow__animation"> {%- for product in related_collection.products -%} {% comment %} On smaller screen sizes, 39vw is used for grid items in the CSS {% endcomment %} {% unless product.tags contains 'discontinued' %} {%- render 'product-grid-item', product: product, per_row: section.settings.products_per_row, quick_shop_enable: settings.quick_shop_enable, fallback: '39vw', -%} {%- assign rendered_products = rendered_products | plus: 1 -%} {% endunless %} {% if rendered_products == number_of_products %} {% break %} {% endif %} {%- endfor -%} </div> </div> {%- endif -%}
Hi, I'm trying to convert an R tibble in a Shiny app to JSON. I can use `jsonlite::toJSON()` to do this; unfortunately, the recipient requires the output in a different order. I've tried converting the input from long to wide, but this comes out all wrong. The Python code that creates the JSON uses

    df.to_dict(orient="list")

so I need an R equivalent. I don't have much experience with JSON and am in a little hurry, so help is much appreciated.
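For context, this is what the Python side produces: a dict mapping column names to column vectors (a small pandas illustration with made-up data):

```python
import pandas as pd

# A tiny stand-in for the real data frame
df = pd.DataFrame({"id": [1, 2, 3], "name": ["a", "b", "c"]})

# Each column becomes a plain list keyed by its name
result = df.to_dict(orient="list")
print(result)  # {'id': [1, 2, 3], 'name': ['a', 'b', 'c']}
```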
R equivalent of Python's df.to_dict(orient="list")
|python|r|json|shiny|
I'm not sure why you would want to use regex to find a sub-string of a string?.. but here ya go...

    /^(?=test).*$/

**Usage**

    <?php
    // $result will be 1 if the string matches, 0 if not
    // (preg_match() returns an int, or false on error)
    $result = preg_match('/^(?=some_string).*$/', $search_string );

**Alternatively**

    <?php
    function check(){
        // strpos() takes ($haystack, $needle); it returns the position of the
        // needle (0 for a prefix) or false when not found, so a strict ===
        // comparison is required -- with ==, false == 0 would also be true
        if(strpos('some_string_that is longer', 'some_string') === 0){
            return true;
        } else {
            return false;
        }
    }

Regex **Explained** Below:

    ^          //anchor start matching to first letter
    (?=.....)  //look ahead - match exact string value
    .*         //match any leftover characters 0 to infinity x's
    $          //anchor at end of string
OneSignal-XCFramework supports iOS 9 to iOS 14. You need to check your **Minimum Deployment Version**. You can check it from **Your App target** -> **General Tab** -> **Minimum Deployments**. Update it to iOS 9 or above; this should solve your issue.

Here is the official GitHub link for your reference: [OneSignal-XCFramework][1]

[1]: https://github.com/OneSignal/OneSignal-XCFramework
I have been learning how to use the Pi4J library and tried to implement the minimal example application that can be found on the [Pi4J website](https://pi4j.com/getting-started/minimal-example-application/). The code that I used is here:

```java
package org.PruebaRaspberry11;

import com.pi4j.Pi4J;
import com.pi4j.io.gpio.digital.DigitalInput;
import com.pi4j.io.gpio.digital.DigitalOutput;
import com.pi4j.io.gpio.digital.DigitalState;
import com.pi4j.io.gpio.digital.PullResistance;
import com.pi4j.util.Console;

public class PruebaRaspberry11 {

    private static final int PIN_BUTTON = 24;
    private static final int PIN_LED = 22;

    private static int pressCount = 0;

    public static void main (String[] args) throws InterruptedException, IllegalArgumentException {

        final var console = new Console();
        var pi4j = Pi4J.newAutoContext();

        var ledConfig = DigitalOutput.newConfigBuilder(pi4j)
                .id("led")
                .name("LED Flasher")
                .address(PIN_LED)
                .shutdown(DigitalState.LOW)
                .initial(DigitalState.LOW);
        var led = pi4j.create(ledConfig);

        var buttonConfig = DigitalInput.newConfigBuilder(pi4j)
                .id("button")
                .name("Press button")
                .address(PIN_BUTTON)
                .pull(PullResistance.PULL_DOWN)
                .debounce(3000L);
        var button = pi4j.create(buttonConfig);
        button.addListener(e -> {
            if (e.state().equals(DigitalState.LOW)) {
                pressCount++;
                console.println("Button was pressed for the " + pressCount + "th time");
            }
        });

        while (pressCount<5) {
            if(led.state().equals(DigitalState.HIGH)){
                led.low();
                console.println("LED low");
            } else {
                led.high();
                console.println("LED high");
            }
            Thread.sleep(500/(pressCount+1));
        }

        pi4j.shutdown();
    }
}
```

I've tested all the hardware (the LED, the button) to make sure that it works fine. In the console, the "LED high" and "LED low" lines are printed, but the LED doesn't blink, and nothing happens when I press the button. What is stranger is that the first time I ran the code it worked, but after I made some modifications it never worked again, even after reverting the changes.
|javascript|node.js|sequelize.js|
As you indicate in the comments that you're using a Kubernetes container configured with 1 CPU, the problem is that effectively you're comparing a fork-join pool with a parallelism of 30 with virtual thread execution with a parallelism of 1. By default, Java allocates an OS thread pool (a work-stealing fork-join pool) with parallelism equal to the processor count for virtual thread execution (see also [JEP 444: Virtual Threads][1]). If you're running on Kubernetes, and the pod has 1 CPU (or less than 1 CPU), Java will assume a processor count of 1. You can tell Java to assume a different number of processors by passing the `-XX:ActiveProcessorCount=n` setting, with `n` the number of processors Java should assume. Alternatively, or additionally, you can set the system property `jdk.virtualThreadScheduler.parallelism` (using `-Djdk.virtualThreadScheduler.parallelism=n` where `n` is the desired parallelism). Both of these need to be passed to the `java` executable on the command line. Note that you probably shouldn't set it too high compared to your CPU quota, otherwise you'll likely just introduce reasons for Kubernetes to throttle your pod, but reasonable values will depend on the actual load behaviour of your application. [1]: https://openjdk.org/jeps/444
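Both settings are passed on the `java` command line; for example (a sketch: the numbers and the jar name are placeholders to adapt to your own CPU quota and workload):

```shell
# Assume 4 processors and use 8 carrier threads for the virtual-thread scheduler
# (app.jar stands in for your own application)
java -XX:ActiveProcessorCount=4 \
     -Djdk.virtualThreadScheduler.parallelism=8 \
     -jar app.jar
```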
First of all, you need to understand that the default channel is a `DirectChannel`, which comes with a round-robin dispatching strategy for its subscribers. So, the first message will go to the first subscriber, the second to the second, and so on until all subscribers have been iterated; then we come back to the first one for the next messages sent.

Now, if that is OK with you, there is a way to attach Message History to messages, with which you can track the journey of a message. Your `@ServiceActivator` might be marked with an `@EndpointId` if you find the long name generated from the method name inconvenient.

See more info in the docs:

https://docs.spring.io/spring-integration/reference/message-history.html

https://docs.spring.io/spring-integration/reference/configuration/annotations.html
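Message history is switched on with a single annotation on a configuration class (a minimal sketch, assuming a standard annotation-driven Spring Integration setup):

```java
@Configuration
@EnableIntegration
// Adds a MESSAGE_HISTORY header entry for each component a message traverses
@EnableMessageHistory
public class MessageHistoryConfig {
}
```

The history can then be read from the `MessageHistory.HEADER_NAME` header of a received message.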
There seem to be no easy ways to achieve this, "to reset the timezone to an arbitrary one without altering any other parameters of the Time", in pure Ruby (cf., [Official doc](https://docs.ruby-lang.org/en/3.3/Time.html "Ruby 3.3 Time class")). I here list a few workarounds: ```ruby require "time" t = Time.now(in: "+00:00") # => <2024-02-24 01:23:45 +0000> t1= Time.new(*(t.to_a.reverse[4..-1]), in: "+09:00") # => <2024-02-24 01:23:45 +0900> t2= Time.new(t.year, t.month, t.day, t.hour, t.min, t.sec+t.subsec, in: "+09:00") # => <2024-02-24 01:23:45 ... +0900> ar = t.strftime("%Y %m %d %H %M %S %N").split.map(&:to_i) t3= Time.new(*(ar[0..4]+[ar[5]+ar[6].quo(1_000_000_000), "+09:00"])) # => <2024-02-24 01:23:45 ... +0900> t4= Time.at(t, in: "+09:00") - 9*3600 # => <2024-02-24 01:23:45.... +0900> ``` Here, `t1` only considers down to a second, whereas the others consider down to a nanosecond. `t2` is the most intuitive, but it is the most cumbersome. Here,[`Time#subsec`](https://docs.ruby-lang.org/en/3.2/Time.html#method-i-subsec "Ruby 3.2 official doc of Time") returns a Rational and hence the formula involves no floating-point rounding error. `t3` may have no benefit over `t2`. It involves no floating-point rounding error as in `t2`. Although `t4` is less lengthy, the facts that it is against the DRY principle and that you *must* accurately specify the parity (plus or minus?) are significant downsides.
I keep getting ERR_CONNECTION_REFUSED when things get sent to my API from the frontend on my EC2 instance. I'm running an Ubuntu server using AWS, and I'm using nginx to proxy anything from port 80 to the Angular part of the application. Everything works as it should locally when running the Angular app on localhost:4200 and the API on https://localhost:7040. On the server, I can reach the Angular frontend as expected using the public DNS, but cannot get past the login/register page, as it requires the API.

For clarification: on the server the Angular build is static, and I am running the API using Kestrel, specifying https://localhost:7040 using --urls to force it to change the default URL. However, if it is awsdomain.com/api/something then it is redirected to https://localhost:7040/api/something where my API (.NET Core) is running via Kestrel.

I can think of three issues so far that could be causing this.

1) For some reason, when I use the `lsof -i :7040` command, nothing shows as listening; however, when I use `ss -tuln | grep LISTEN | grep ':7040'` I can see my Kestrel service listening properly. To my understanding I should be able to see it listening using the lsof command, so perhaps this means something is going wrong.

2) Maybe I am not proxying properly. To my understanding, what I have done in my proxy pass is reroute anything from my EC2 instance with /api in it to go to https://localhost:7040 with the exact same path called (so aws.com/api == https://localhost:7040/api and aws.com/api/login == https://localhost:7040/api/login).
Here is how I configured it in my nginx sites-available:

```
server {
    listen 80;
    listen [::]:80;

    root /var/www/app/AngularApp/dist/angularapp/browser/;
    index index.html index.htm index.nginx-debian.html;

    location / {
        try_files $uri $uri/ /index.html;
    }

    location ~* /api(/.*)?$ {
        proxy_pass https://localhost:7040/api$1;
    }

    location /phpmyadmin {
        root /usr/share/;
        index index.php index.html index.htm;

        location ~ ^/phpmyadmin/(.+\.php)$ {
            try_files $uri =404;
            root /usr/share/;
            fastcgi_pass unix:/run/php/php8.1-fpm.sock;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include /etc/nginx/fastcgi_params;
        }

        location ~* ^/phpmyadmin/(.+\.(jpg|jpeg|gif|css|png|js|ico|html|xml|txt))$ {
            root /usr/share/;
        }
    }
}
```

3) The third possible issue may be that the EC2 instance serves HTTP while I am proxying to HTTPS. I am new to server administration and don't quite understand everything, but from what I have seen, it can work on localhost because there is a local development (SSL) certificate; once everything is on the instance, I may need to obtain my own certificate to enable HTTPS.

I am the server person for a group and I cannot figure out what is going wrong on the instance. Please let me know if any of these may be the problem, or if there is anything I may not have thought of.
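For comparison, the pattern in Microsoft's guide for hosting ASP.NET Core behind nginx on Linux is to let Kestrel listen on plain HTTP on localhost and terminate any TLS at nginx, forwarding the client address and scheme via headers. A sketch of that shape (the port 5000 and the header set are the documented defaults from that guide, not values taken from the setup above):

```nginx
location /api {
    proxy_pass         http://localhost:5000;   # Kestrel on plain HTTP, not HTTPS
    proxy_http_version 1.1;
    proxy_set_header   Upgrade $http_upgrade;
    proxy_set_header   Connection keep-alive;
    proxy_set_header   Host $host;
    proxy_set_header   X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header   X-Forwarded-Proto $scheme;
}
```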
.NET Core API not working with Angular on Ubuntu server
|angular|asp.net-core|nginx|.net-core|ubuntu-22.04|
Apologies, I am new to Python and SO, and have been struggling with an issue. I have a text (csv) file and an Excel file that I need to merge based on two columns they have in common (date and id). They also use different column names for the same thing.

csv:

| test date | id | values 1 |
| ---------- | -- | -------- |
| 2024-03-12 | a | 1 |
| 2024-03-13 | b | 2 |
| 2024-03-14 | c | 3 |

Excel (the headers are also on the fifth row):

| N/A 1 | id | N/A 2 | date | values 2 |
| ---------- | -- | ---------- | --------- | -------- |
| irrelevant | a | irrelevant | 3/12/2024 | 2 |
| irrelevant | b | irrelevant | 3/13/2024 | 4 |
| irrelevant | c | irrelevant | 3/14/2024 | 6 |

The goal is to create something like:

| date | id | value 1 | value 2 | discrepancy | Over 2? |
| ---------- | -- | ------- | ------- | ----------- | ------- |
| 2024-03-12 | a | 1 | 2 | 1 | no |
| 2024-03-13 | b | 2 | 4 | 2 | no |
| 2024-03-14 | c | 3 | 6 | 3 | yes |

I can't find what I tried, sorry, but I ended up converting the csv to an Excel file and tried to merge from there. However, I was never able to get my script to successfully extract data from both sources at the same time. I will update this post with the code I tried as soon as possible, and thank you!
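For reference, the shape of the merge described above can be sketched in pandas (the column names are taken from the tables in the question; the toy frames stand in for `read_csv` and `read_excel(..., header=4)`, which would produce something similar):

```python
import pandas as pd

# Stand-ins for the two files; names assumed from the tables above.
csv_df = pd.DataFrame({
    "test date": ["2024-03-12", "2024-03-13", "2024-03-14"],
    "id": ["a", "b", "c"],
    "values 1": [1, 2, 3],
})
xl_df = pd.DataFrame({
    "id": ["a", "b", "c"],
    "date": ["3/12/2024", "3/13/2024", "3/14/2024"],
    "values 2": [2, 4, 6],
})

# Normalise the join keys so both sides agree despite the different formats.
csv_df["date"] = pd.to_datetime(csv_df["test date"])
xl_df["date"] = pd.to_datetime(xl_df["date"], format="%m/%d/%Y")

# Merge on the two shared keys, then derive the comparison columns.
merged = csv_df.merge(xl_df[["id", "date", "values 2"]], on=["id", "date"])
merged["discrepancy"] = merged["values 2"] - merged["values 1"]
merged["Over 2?"] = merged["discrepancy"].apply(lambda d: "yes" if d > 2 else "no")
print(merged[["date", "id", "values 1", "values 2", "discrepancy", "Over 2?"]])
```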
I think this snippet does something similar to what your question asks. It leverages flex layout and `clamp()` size computation to achieve what you wanted. Note that the values are not the ones you specified (my screen is too small for 3×900px :D)

<!-- begin snippet: js hide: false console: true babel: false -->

<!-- language: lang-css -->

    .container {
      display: flex;
      background: lightgrey;
      padding: 2rem;
      justify-content: center;
    }

    .chart-container {
      display: flex;
      gap: 16px;
      flex-wrap: wrap;
      justify-content: center;
      width: clamp(200px, 100%, 1000px);
    }

    .chart {
      display: block;
      width: clamp(200px, 33%, 300px);
      aspect-ratio: 3/2;
      flex-grow: 1;
    }

    .red { background: red; }
    .green { background: green; }
    .blue { background: blue; }
    .yellow { background: yellow; }
    .orange { background: orange; }
    .purple { background: purple; }

<!-- language: lang-html -->

    <div class='container'>
      <div class='chart-container'>
        <div class='chart red'></div>
        <div class='chart green'></div>
        <div class='chart blue'></div>
        <div class='chart yellow'></div>
        <div class='chart orange'></div>
        <div class='chart purple'></div>
      </div>
    </div>

<!-- end snippet -->
After spending a lot of time I realized that I was trying to call the Prisma functions directly. Instead, Remix has the concept of loaders to load anything from the backend. Here is what I did to resolve this. The imports are important:

    import { json } from "@remix-run/node";
    import { useLoaderData } from "@remix-run/react";

First, create a function called `loader`:

    export async function loader() {
      return json(await db.note.findMany());
    }

Then call the following in whichever component needs the data:

    const notes = useLoaderData<typeof loader>();

This allowed me to successfully call my `db.server` helper file to retrieve data.
1. Are you sure you did not accidentally update to 4.27.**3** or later? I got exactly your problem after I installed the 4.28.0 version (see below).

2. You need Hyper-V enabled for this. If you are using Windows Home Edition there is no chance: upgrade your Windows to Professional Edition. See maybe [tag:docker-for-windows]?

From my point of view, the Docker Desktop version 4.28.0 currently seems to have a problem with at least Windows 10, because after I uninstalled 4.28.0 and replaced it with a fresh install of Docker Desktop 4.27.2 (see the [Docker Desktop release notes][2]), everything works fine for me with VS 2022 and ASP.NET 8.

...don't update Docker Desktop until this is fixed! ;)

In [GitHub, docker/for-win: ERROR: request returned Internal Server Error for API route and version...][1] there is a hint that upgrading WSL 2 might help too.

[1]: https://github.com/docker/for-win/issues/13909
[2]: https://docs.docker.com/desktop/release-notes/#4272
Did you copy this solution from another solution? If yes, please check the namespace in your project. For example, a conflicting project namespace can cause this.
I have a data table that looks like this:

| ID | Participação | Pessoa |
|:--:|--------------|--------|
| 01 | Comunicante | Lucas |
| 01 | Vitima | Lucas |
| 02 | Comunicante | Rafa |
| 02 | Vitima | Vitor |

I want it to look like this:

| ID | Comunicante | Vítima |
|:--:|-------------|--------|
| 01 | Lucas | Lucas |
| 02 | Rafa | Vitor |

Sometimes Comunicante is different from Vítima; because of this, I don't want to merge `comunicante, vitima` into one column. I want them in separate columns. This is what I have managed so far:

| ID | Comunicante | Vítima |
|:--:|-------------|--------|
| 01 | Lucas | |
| 01 | | Lucas |
| 02 | Rafa | |
| 02 | | Vitor |
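The question does not say which tool is in use, but assuming pandas, the reshape described is a long-to-wide pivot: one row per `ID`, one column per value of `Participação`. A sketch (data copied from the first table):

```python
import pandas as pd

# Long-format input, as in the first table above.
df = pd.DataFrame({
    "ID": ["01", "01", "02", "02"],
    "Participação": ["Comunicante", "Vitima", "Comunicante", "Vitima"],
    "Pessoa": ["Lucas", "Lucas", "Rafa", "Vitor"],
})

# Pivot: index collapses the IDs, each Participação value becomes a column.
wide = (
    df.pivot(index="ID", columns="Participação", values="Pessoa")
      .rename(columns={"Vitima": "Vítima"})
      .reset_index()
)
print(wide)
```

The half-filled result in the last table is what typically happens when the rows are not collapsed onto a shared index; pivoting on `ID` alone, as above, merges each pair of rows into one.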