Inherit border-radius from child to parent

I'm working on a React component that acts as a clickable wrapper for other components, and the issue I have is that some of the other components have a border-radius, which makes the wrapper component stick out in the corners. How I've solved this so far is by giving the wrapper the same border-radius via a prop; the problem with this approach is that I don't know the border-radius of some components. I've been trying to find a way to get the border-radius attribute value from the child component by using useRef() and then wrapperRef.current.props.children[0], but the border-radius value isn't present.

.wrapper {
  border: 1px solid black;
  display: inline-block;
  cursor: pointer;
}

.demo-wrapper {
  border: 1px solid black;
  border-radius: 24px;
  display: inline-block;
  cursor: pointer;
}

.child {
  width: 200px;
  height: 100px;
  border-radius: 24px;
  background-color: teal;
  display: flex;
  justify-content: center;
  align-items: center;
}

<div>
  <p>What i get</p>
  <div class="wrapper">
    <div class="child"><p>child element</p></div>
  </div>
  <p>What i want</p>
  <div class="demo-wrapper">
    <div class="child"><p>child element</p></div>
  </div>
</div>

Is there a way to get the border-radius value from the child element? Or is there a way to inherit the border-radius from the child to the parent element? Or maybe I'm going about this in the wrong direction entirely, and there is a better way to solve this than setting the same border-radius on the parent element?

This solution involves:

1. Setting a ref on the wrapper element.
2. Using useEffect, which runs after the component has rendered:
   2.1. Use element.firstChild on the wrapper ref to get a reference to the child's outer HTML element.
   2.2. Use getComputedStyle() to get the child element's border-radius.
   2.3. Set that border-radius on the wrapper element, using our ref.
The wrapper then would be:

import { useEffect, useRef } from "react";

const Wrapper = ({ children }) => {
  const container = useRef();
  useEffect(() => {
    const borderRadius = getComputedStyle(container.current.firstChild).borderRadius;
    container.current.style.borderRadius = borderRadius;
  }, []);
  return <div ref={container}>{children}</div>;
};
common-pile/stackexchange_filtered
Javascript calling $(document).on('click', when document ready

I need some help please. The function below works well with an 'up' and 'down' button on an HTML form. I want to call this function, simulating a click of the 'down' button, when the page has finished loading. I don't know how to do this in my $(document).ready(function ()). Searching here and elsewhere has not provided me any help. Note that not all the code is shown, so please don't point out errors about undeclared vars, thanks.

$(document).on('click', '.sig-spinner button', function () {
    var minval = Math.max(<?php echo $qty_sig_all; ?>, 5);
    var btn = $(this),
        oldValue = btn.closest('.sig-spinner').find('input').val().trim(),
        newVal = 0;
    if (btn.attr('data-dir') == 'up') {
        newVal = parseInt(oldValue) + 1;
    } else {
        if (oldValue > minval) {
            newVal = parseInt(oldValue) - 1;
        } else {
            newVal = minval;
        }
    }
    btn.closest('.sig-spinner').find('input').val(newVal);
    document.getElementById("sigtot").innerHTML = '$' + newVal + '.00';
    sigval = newVal;
    totval = sigval + smsval;
    document.getElementById("ordertot").innerHTML = '$' + totval + '.00';
    gstval = (totval - (totval / 1.1)).toFixed(2);
    document.getElementById("gsttot").innerHTML = '$' + gstval;
});

thisOneGuy, Haha, where did you learn to read? Where did I say it's all shown? (I specifically said 'not all the code is shown' on the last line.) haha obviously i still haven't learnt :L my bad

$(document).on('click', '.sig-spinner button', function () { ... });
$(document).ready(function(){
    $('.sig-spinner button').trigger("click");
});

Hi Matthew, thank you. I think that is what I am looking for, but how do I send a command (simulate clicking the down button, not the up button, in the if statement starting on line 6 of the code)? I think I understand, and my comment above is not relevant. Because I am NOT sending the 'up' button click, the code I want to run will actually run because it doesn't see the 'up' button. Will test and reply soon. Thank you.
Ok, so I just tried it and it doesn't work the way I want. It seems to activate the 'up' button. Not working. How do I set the "btn.attr('data-dir')" in the answer? You set attributes like this:

btn.attr('data-dir', newAttr);

where newAttr is the new value to set. Thanks Matthew, your answer is getting me closer to a working solution, but how/where do I implement your code example in the answer above? Does it get added to the end of the line, or is it a new line? It wants to go OUTSIDE your click function, as you shouldn't be triggering a click inside the click event (see my edit above). Are you looking for the 'touchstart' event? Thanks again Matthew, greatly appreciated, but sorry, I still don't quite understand. I do understand your edit, and that will call my function on document ready. What I don't understand is: where do I put the btn.attr('data-dir', newAttr); in your example? Do I simply add that line before the $('.sig-spinner ..... or does it need to be wrapped in somehow? Well it depends - when do you want that attribute set to occur exactly?
Active Dropdown Menus

I am working on a project where the user has a selection to make: is the member an active member or an inactive member? If the member is active, the user will select the "Active" option; if inactive, the user will select the "Inactive" option. [screenshot: Dropdown Menu] Below is what I am looking for as a result. Is there a way to do this at all? When, for example, "Active" is selected, I would like "Select one:" to change to say "Active" instead. [screenshot: Activated Dropdown Menu] I hope this makes sense. Thank you!
Blackberry web application wrapping floating divs on top of each other

Our site http://m.sa2010.gov.za has a simple menu structure that relies on floating divs to stack 2 columns of menu elements. Unfortunately it seems to wrap all elements on top of each other when viewed from most Blackberry phones. Any suggestions? It works beautifully on most mobile devices, including iPhone. Check it out. It seems to be very device specific... some Blackberries do work correctly? The BlackBerry browser is notoriously bad for its lack of full CSS support, especially on older models. You may have to experiment with other ways to get what you want, such as using a table-based layout. Blackberry sucks, in other words. A friend updated his firmware and now his does the same? They supposedly have a modern WebKit-based browser in their 6.0 OS (coming soon, I hope), but in the meantime you can download plenty of device simulators from their site to see how your website renders on various devices and OS versions.
How do I make people stop running with a command?

I'm making a map, and when I give players Slowness it's still possible for them to walk a little faster (or run with Slowness on), but I want them to keep walking. Can anyone help me? It has to be a command. An option would be to detect when a player is sprinting via the sprintOneCm statistic and give them a higher Slowness effect while they are sprinting, such that sprinting and walking would be no different. Just make sure the player can't jump, using a high Jump Boost level. There are numerous ways to do this, but each one has its disadvantages... Give the player Blindness using the /effect command. Blinded players can't sprint. However, the player will also have limited vision. Drain the player's hunger. This cannot be instant though, but again you can use the /effect command to give them Hunger. Just give the player a high Slowness level. They can still sprint, but at a certain level sprinting will still be really slow, like normal walking. Or just use RikerW's method, but also give the player a "corrupt" Jump Boost effect to disable the player from jumping. Use levels 128–250, according to the wiki. I'm ALWAYS using the latest version, just so you know. Right now it's 1.9.1-pre3.
Can't access in any way dynamically generated element with Selenium

I've been trying to modify the router SSIDs through a script made with Selenium; the problem is that I can't get any JS element that's generated by the router page. I've been trying with the Expected Conditions and whatever else you could think of, but without success. Example of commands that I used:

element: WebElement = WebDriverWait(self.driver, 10).until(EC.element_to_be_clickable((By.XPATH, xpath)))

I investigated further by trying to do some queries with JavaScript in Chrome's "Console" tab. What I found is that if I try to select any element using any query selector before inspecting any element on the page, it won't work. Example query:

document.querySelector("#WIFIIconInfo")

How can I fix this weird behaviour?

EDIT: I don't know if this is useful information, but here it is anyway. The website is built off these 2 things (Wappalyzer output): Web frameworks: Microsoft ASP.NET; JavaScript libraries: jQuery 1.11.1.

EDIT 2: I looked into the HTML, as suggested by @JeffC, and found out that the div I wanted to click was inside the following iframe:

<iframe id="menuIframe" frameborder="0" width="100%" height="100%" marginheight="0" marginwidth="0" class="AspWidth" scrolling="no" overflow="hidden" src="CustomApp/mainpage.asp"></iframe>

His solution was indeed right. Have you tried implicitlyWait? I tried it right now just to be sure and it didn't work; it says that it cannot find any element with the specified XPATH (which I just copied, to be extra sure). Then you should save the exact HTML (via driver.page_source) and look at it; maybe the website returns absolutely unexpected data (like a captcha or whatever). If you parse multiple pages from a website that visually look the same, in some cases they can differ from each other in HTML code. Nope, the driver.page_source is always the same. Please check whether the element is inside an iframe or shadow-root element.
Please read [ask], especially the part about [mcve] (MCVE). Take the code you are using, reduce it to an MCVE, and then post that code, properly formatted. Also post the full error message, properly formatted, and indicate on which line the error is occurring. We also need the relevant HTML to be able to help you with locators. @JeffC I can't put HTML since it's all dynamically generated from ASP.NET and JS. The code that I am using to find the element is already in the question. The code before that is just a GET to the router page, credentials input, and then a click on a button (which redirects me to the page where I have to locate the element I am looking for). You can load the page in the browser, open up the dev tools, and copy the relevant bits of HTML. The code you have included is not an MCVE. If you're still unsure what an MCVE is, read the link in my previous comment.

"What I found is that if I try to select any element using any query selector before inspecting any element on the page it won't work." That tells me that your desired element is inside of an IFRAME. When you inspect an element on the page, the dev tools automatically switch the context to the containing IFRAME. Using Selenium, you will need to switch into the IFRAME, interact with whichever elements are inside the IFRAME, and then switch context back out to the main page. We don't have any relevant HTML, so here's a generic example.

# wait for the IFRAME and switch to it
WebDriverWait(driver, 10).until(EC.frame_to_be_available_and_switch_to_it((By.CSS_SELECTOR, "iframe")))

# do stuff to elements in the IFRAME
...

# switch back to the main content of the page
driver.switch_to.default_content()
A problem about symplectic manifolds in Arnold's book

There is a problem in Arnold's Mathematical Methods of Classical Mechanics which says: Show that the map $A: \mathbb{R}^{2n} \rightarrow \mathbb{R}^{2n}$ sending $(p, q) \rightarrow (P(p,q), Q(p,q))$ is canonical (p. 206) if and only if the Poisson brackets of any two functions in the variables $(p,q)$ and $(P,Q)$ coincide:
$$ (F,H)_{p,q} = \frac{\partial H}{\partial p} \frac{\partial F}{\partial q} - \frac{\partial H}{\partial q} \frac{\partial F}{\partial p} = \frac{\partial H}{\partial P} \frac{\partial F}{\partial Q} - \frac{\partial H}{\partial Q} \frac{\partial F}{\partial P} = (F,H)_{P,Q}. $$
I cannot solve this problem; my thinking about it so far is as follows. From $(F,H)_{p,q} = (F,H)_{P,Q}$ I can deduce that
$$ \sum_i \det\left( \frac{\partial(P_j, P_k)}{\partial(p_i, q_i)} \right) = \sum_i \det\left( \frac{\partial(Q_j, Q_k)}{\partial(p_i, q_i)} \right) = 0, \quad \sum_i \det\left( \frac{\partial(P_j, Q_k)}{\partial(p_i, q_i)} \right) = \delta_{j,k}, $$
and
$$ \sum_i \det\left( \frac{\partial(p_j, p_k)}{\partial(P_i, Q_i)} \right) = \sum_i \det\left( \frac{\partial(q_j, q_k)}{\partial(P_i, Q_i)} \right) = 0, \quad \sum_i \det\left( \frac{\partial(p_j, q_k)}{\partial(P_i, Q_i)} \right) = \delta_{j,k}. $$
But on the other hand, to deduce $dP\wedge dQ = dp \wedge dq$ I need
$$ \sum_i \det\left( \frac{\partial(p_i, q_i)}{\partial(P_j, P_k)} \right) = \sum_i \det\left( \frac{\partial(p_i, q_i)}{\partial(Q_j, Q_k)} \right) = 0, \quad \sum_i \det\left( \frac{\partial(p_i, q_i)}{\partial(P_j, Q_k)} \right) = \delta_{j,k}, $$
or
$$ \sum_i \det\left( \frac{\partial(P_i, Q_i)}{\partial(p_j, p_k)} \right) = \sum_i \det\left( \frac{\partial(P_i, Q_i)}{\partial(q_j, q_k)} \right) = 0, \quad \sum_i \det\left( \frac{\partial(P_i, Q_i)}{\partial(p_j, q_k)} \right) = \delta_{j,k}. $$
Is there something wrong in the above reasoning? Can you show me how to solve this problem? Please, can you provide the definition of a canonical map?
Crossposted to http://physics.stackexchange.com/q/107492/2451

The map $A$ is canonical if and only if $A^\star\omega^2=\omega^2$, where $\omega^2=dp_i\wedge dq^i$ gives the symplectic structure to $\mathbb{R}^{2n}$.
$$A^\star\omega^2=A^\star(dp_i\wedge dq^i)=(A^\star dp_i) \wedge (A^\star dq^i)=d(p_i\circ A)\wedge d(q^i\circ A)=dP_i\wedge dQ^i\quad (1)$$
Therefore, $A$ is canonical if and only if
$$dP_i\wedge dQ^i=dp_i\wedge dq^i \qquad (2)$$
Eq. (2) is equivalent to
$$\Lambda^T\Omega\Lambda=\Omega \qquad (3)$$
where $\Omega=\begin{pmatrix} 0\, & E\, \\ -E\, & 0\,\end{pmatrix}$ is the coefficient matrix of $\omega^2$, $\Lambda=\left(\frac{\partial x^i}{\partial x'^{i'}}\right)=\begin{pmatrix} p_P & p_Q \\ q_P & q_Q\end{pmatrix}$ is the coordinate transformation matrix, and the submatrix $p_P$, for example, is short for the matrix $\Big(\frac{\partial p_i}{\partial P_j}\Big)$. Eq. (3) is equivalent to
$$I=\Omega^{-1}=\big(\Lambda^T \Omega \Lambda\big)^{-1}=\Lambda^{-1}I(\Lambda^{-1})^T\qquad (4)$$
where $I$ is the coefficient matrix of the Poisson tensor (Poisson bivector). Substituting $I=\begin{pmatrix} 0\, & -E\, \\ E\, & 0\,\end{pmatrix}, \ \Lambda^{-1}=\begin{pmatrix}P_p & P_q \\ Q_p &Q_q\end{pmatrix}$ into Eq. (4) yields
$$\begin{pmatrix} 0\, & -E\, \\ E\, & 0\,\end{pmatrix}=\begin{pmatrix} P_q P_p^T-P_p P_q^T & P_q Q_p^T-P_p Q_q^T \\ Q_q P_p^T-Q_p P_q^T & Q_q Q_p^T-Q_p Q_q^T \end{pmatrix}\qquad (5)$$
which is actually the Poisson brackets of $P,\ Q$:
$$\begin{align}P_q P_p^T-P_p P_q^T =&(P,P)_{p,q}=0\\[5pt]Q_q Q_p^T-Q_p Q_q^T =&(Q,Q)_{p,q}=0\qquad (6)\\[5pt]Q_q P_p^T-Q_p P_q^T =& (Q,P)_{p,q}=E\end{align}$$
Therefore, $A$ is canonical if and only if $(Q,P)_{p,q}=E,\ (P,P)_{p,q}=(Q,Q)_{p,q}=0$.
For arbitrary functions $F,\ H$,
$$\begin{split}(F,H)_{p,q}=&H_pF_q^T-H_qF_p^T\\ =&(H_PP_p+H_QQ_p)(F_PP_q+F_QQ_q)^T-(H_PP_q+H_QQ_q)(F_PP_p+F_QQ_p)^T\\ =&H_P(P,P)_{p,q}F_P^T+H_P(Q,P)_{p,q}F_Q^T +H_Q(P,Q)_{p,q}F_P^T+H_Q(Q,Q)_{p,q}F_Q^T\end{split}$$
Thus $(F,H)_{p,q}=H_PF_Q^T-H_QF_P^T=(F,H)_{P,Q}$ if and only if Eqs. (6) hold. Accordingly, $A$ is canonical if and only if $(F,H)_{p,q}=(F,H)_{P,Q}$.

P.S. The equations you showed in the question are essentially Eq. (3), i.e.
$$\begin{pmatrix} 0\, & E\, \\ -E\, & 0\,\end{pmatrix}=\begin{pmatrix} p_P^T q_P-q_P^Tp_P & p_P^T q_Q-q_P^Tp_Q \\ p_Q^T q_P-q_Q^Tp_P & p_Q^T q_Q-q_Q^Tp_Q \end{pmatrix}\qquad(7)$$
It is easy to verify that
$$\begin{pmatrix} p_P^T q_P-q_P^Tp_P & p_P^T q_Q-q_P^Tp_Q \\ p_Q^T q_P-q_Q^Tp_P & p_Q^T q_Q-q_Q^Tp_Q \end{pmatrix}\begin{pmatrix} P_q P_p^T-P_p P_q^T & P_q Q_p^T-P_p Q_q^T \\ Q_q P_p^T-Q_p P_q^T & Q_q Q_p^T-Q_p Q_q^T \end{pmatrix}=\begin{pmatrix} E\, & 0\, \\ 0\, & E\,\end{pmatrix}=E_{2n}$$
Therefore, if we calculate the inverse of both sides of Eq. (7), the result should be Eq. (5), and then the statement will be proved. However, it seems very complicated to calculate the inverse of the right side of Eq. (7) directly. Eq. (4) can thus be treated as the trick to calculate this inverse.

It looks like a triviality: by definition, the symplectic form is preserved under a canonical transformation and, thus, the Poisson bivector is preserved. Thanks! We need to prove the "if and only if", and the post is aimed at the opposite direction too. In the other direction it is also trivial: if the Poisson bivector is non-degenerate then it comes from some symplectic structure; in coordinates it is given as the inverse matrix of that bivector. In very basic terms this exercise means that a diffeomorphism preserves the matrix (of the symplectic form) if and only if it preserves the inverse matrix (of the Poisson bivector). @Alex Can you write it down more explicitly as an answer?
What does it mean for a "labor tax cut" to be "self-financed"?

I'm reading "Principles of Economics" by Mankiw. In Part III, Markets and Welfare, Ch. 8, "Application: The Costs of Taxation", it says:

Only 32% of a cut in U.S. labor taxes would be self-financed, the economists note, versus 54% self-financing in Europe. Just over 50% of a cut in U.S. capital taxes would pay for itself, the authors estimate, versus 79% in Europe.

But what does it mean for a tax cut to be "self-financing"?

self-financing adj (of a student, business, etc.) financing oneself or itself without external grants or aid

So "self-financing" means someone can sustain themselves without external aid. I just could not get what it takes for a tax cut to be "self-financing". This question appears to be off-topic because it is about introductory economics, not quantitative finance. @JoshuaUlrich but there's no "introductory economics" forum :( What is meant is the so-called Laffer effect, or Laffer curve. The rationale is that when you cut taxes, this will stimulate business and thereby over-compensate for the loss in taxes the government originally had. I think not; the article is saying the U.S. is actually on the left side of the Laffer curve, so the U.S. need not cut taxes to stimulate business. @athos: You asked what it means when a tax cut is said to be self-financed, and this is the answer. Trust me, I have a PhD in economics and I am a professor and teach this stuff ;-) Of course you have a better judgement on it. Allow me to rephrase my question: "Only 32% of a cut in U.S. labor taxes would be self-financed": does this mean that in the U.S., if the labor tax rate is 32%, it will collect the most tax? If I understand the article correctly, it means that the government can cut labour taxes by 32% and still rake in the same amount of taxes (due to accelerating business). Beyond this point, i.e. cutting more than 32%, the government will collect less in taxes. (But the article is not very well phrased, in my humble opinion.) I see.
So there are two things: first, the U.S. is on the left side of the Laffer curve, so a higher tax rate in that year would raise more taxes; second, after considering business growth, a 32% cut in the current tax rate would encourage the economy and even harvest more taxes. Thanks!
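To pin down the arithmetic behind "self-financed": the 32% is a share of the static revenue loss that is recouped through behavioral responses, not a tax rate or a cut size. A minimal sketch with made-up numbers (only the 32% share comes from the quoted passage; the $100bn figure is purely illustrative):

```python
# Hypothetical static cost of a labor-tax cut (ignoring behavioral responses): $100bn.
static_revenue_loss = 100.0          # made-up illustration figure, in $bn
self_financing_share = 0.32          # the U.S. labor-tax estimate quoted above

recovered = self_financing_share * static_revenue_loss   # revenue won back via extra activity
net_cost = static_revenue_loss - recovered               # what the cut actually costs

print(f"recovered: ${recovered:.0f}bn, net cost: ${net_cost:.0f}bn")
# prints: recovered: $32bn, net cost: $68bn
```

In other words, the cut still loses revenue on net; it just loses less than a naive static estimate would suggest.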
To print a number pyramid pattern in Python

I want to print the number pattern for a user-given number. For example, if the user enters 3 the program should generate 3 rows and look like this:

1
2 3 2
4 5 6 5 4

If the user enters 4, the output must be:

1
2 3 2
4 5 6 5 4
7 8 9 10 9 8 7

My code is as below:

a = 3
num = 1
num1 = 2
for x in range(0, a+1):
    for y in range(0, a-x):
        print(end="* ")
    for y in range(x, 0, -1):
        print(num, end=" ")
        num = num+1
    for y in range(2, x+1):
        print(num1, end=" ")
        num1 = num1+1
    print()

I don't know where I am going wrong. https://www.programiz.com/python-programming/examples/pyramid-patterns
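One compact way to build the pattern (a sketch; the function and variable names are my own): row i holds i consecutive numbers followed by their mirror image, and the counter simply keeps running across rows.

```python
def pyramid(n):
    """Return the n-row number pyramid as a single string."""
    lines = []
    num = 1
    for row in range(1, n + 1):
        asc = list(range(num, num + row))   # `row` consecutive numbers
        num += row                          # counter keeps running across rows
        mirrored = asc + asc[-2::-1]        # e.g. [4, 5, 6] -> [4, 5, 6, 5, 4]
        # pad with two spaces per missing level to keep the pyramid shape
        lines.append("  " * (n - row) + " ".join(map(str, mirrored)))
    return "\n".join(lines)

print(pyramid(3))
#     1
#   2 3 2
# 4 5 6 5 4
```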
Google AutoML projects visibility

I have two Google accounts (one private and one for business). If I log in to my business account and navigate to AutoML, I land on 'Specify Google Cloud project'. That's right. But when I want to select the associated GCP project, the list that appears includes all projects that are part of my private Google account. I deleted all cookies, closed the browser, and started to log in (only to my business account) again -> the same happens again. That's weird. Does anyone else have this kind of behaviour? I don't see any connection between my private and my business account. Why does this happen? I did not add any extra privileges to the GCP project inside my private account. Can you access your business project from your private account? Vice versa? Can you see other projects? No, I can't access business projects from my private account. I can see all my private projects in my business account, and only those. I even can't see my own business projects in my business account. Can you try signing out of the AutoML UI? This is a multi-login issue related to the credentials used when you first logged into AutoML. Signing in and out doesn't have any effect. Even using another browser on another computer doesn't solve the problem. Hmm okay, lemme take a closer look. Could you do me a favor and submit feedback in the AutoML UI, using the menu in the upper right? It's totally up to you, but I can best help if you mention JoshGC and the email addresses you sign into AutoML or the browser with. Ok, I submitted feedback. Thanks Thomas. I'm not seeing anything unusual on our side. Can you clarify: when you logged out, what did you log out of? Was it the Chrome browser? When you visit the AutoML UI, there is a menu in the upper right that should show you "who" the UI thinks you are logged in as, and allow you to log out of our service in particular. Today, I logged in with my business account to make screenshots of the behaviour, and I was surprised.
The problems are gone. Now everything works as expected. My private and my business account can both define their own datasets in AutoML based on their own GCP projects. Maybe you fixed something "accidentally", or it was a self-healing process... Thanks. There is some self-healing going on, both in our process and in your browser. Glad the issue is resolved for you.
What is the next step in this proof? Am I even on the right track? Multivariable Inverse Function Inequalities

We have a question to show a series of three inequalities related to the Inverse Function Theorem. I'm just asking about the first, most straightforward one, which may not actually need the IFT (I think it's a foundational question for the other two that do use the IFT). $\newcommand{\R}{\mathbb{R}}\newcommand{\bvec}[1]{\textbf{#1}}$ Let $A$ be an open set in $\R^n$, $x_0\in A$, and let $f:A\to\R^m$ be differentiable at $x_0$. Prove the following claims, see Theorem 1.89 (this is the Inverse Function Theorem). For any $\epsilon>0$ there exists a neighborhood $U_{x_0}$ of $x_0$ such that $$|f(x)-f(x_0)|\leq \left(\|\bvec{D}f(x_0)\|+\epsilon\right)|x-x_0|$$

My first steps: This reminded me of the proof used in my book for the IFT, which referenced the multivariable version of the mean value theorem, which states:

Mean Value Theorem Corollary: Let $f:B(x_0,r)\subset\R^n\to\R^m$ be a map that has directional derivatives at every point of $B(x_0,r)$ and let $K:=\sup\left\{\|\textbf{D}f(z)\|:z\in B(x_0,r)\right\}$. Then $|f(x)-f(y)|\leq K|x-y|$.

The proof of this is similar to my approach to the main problem, so I'll just show how I approached the main problem. By the integral form of the Mean Value Theorem, \begin{align*}|f(x)-f(x_0)|&\leq \left|\int_0^1\textbf{D}f(x_0+t(x-x_0))(x-x_0)\,dt\right|\\ &\leq\int_0^1 \left|\textbf{D}f(x_0+t(x-x_0))(x-x_0)\right|\,dt\\ &\leq\int_0^1 \|\textbf{D}f(x_0+t(x-x_0))\|\,dt\,|x-x_0| \end{align*} And that's where I'm stuck. In the proof of the corollary, the next conclusion is that $|f(x)-f(x_0)|\leq K|x-x_0|$, but in my case, I don't want to go straight to the supremum.
But I'm not sure how to show that $\int_0^1\|\textbf{D}f(x_0+t(x-x_0))\|\,dt\leq\|\textbf{D}f(x_0)\|+\epsilon$. Just use the definition of the derivative: $\frac{\left|f(x_0+h)-f(x_0)-Df(x_0)h\right|}{|h|}<\epsilon$ where $h=x-x_0$ is small enough so that the inequality holds. Thanks! That worked out well. If you post that as the answer, I will accept it as the correct one, unless there's a reason you didn't. I'm new here, so I'm still learning the etiquette. Should I post my final result myself? When I edit my own answers, I usually post the edits, so others can see what my process was. I will be happy to post my comment as an answer. Glad it helped. You can use the definition of the derivative, namely, $\frac{\left\|f(x_0+h)-f(x_0)-Df(x_0)h\right\|}{\|h\|}<\epsilon$ where $h=x-x_0$ is small enough so that the inequality holds.
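Spelled out, the hint in the comments closes the argument without the integral form at all: choose $\delta>0$ from differentiability at $x_0$ so that the remainder term is $\epsilon$-small on $U_{x_0}=B(x_0,\delta)$; then, by the triangle inequality,

```latex
\begin{aligned}
|f(x)-f(x_0)| &\le |\mathbf{D}f(x_0)(x-x_0)| + |f(x)-f(x_0)-\mathbf{D}f(x_0)(x-x_0)|\\
&\le \|\mathbf{D}f(x_0)\|\,|x-x_0| + \epsilon\,|x-x_0|
 = \left(\|\mathbf{D}f(x_0)\|+\epsilon\right)|x-x_0|.
\end{aligned}
```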
Abort insert/update operation in trigger using PL/SQL

I am to write a trigger that checks some information that is inserted/updated, compares it with data from the database, and if it is not correct, stops the whole operation. I wrote a BEFORE trigger (FOR EACH ROW) and then threw an application exception if something was wrong, but it was not working, because I read from the table that was being updated, so I get the ORA-04091 error. And now I am wondering how to solve this. My only idea so far is to write a BEFORE trigger that inserts some necessary data into a package and read it with an AFTER trigger that won't be FOR EACH ROW. But there's a problem: how do I abort the edit? If I make a rollback it will undo all operations in this transaction, which I think is not smart. How would you solve this problem? Can you edit your question with the trigger syntax?

Don't go there. ORA-04091: table XXXX is mutating is generally a good indicator that whatever you're trying to do is too complex to be done reliably with triggers. Sure, you could use a package array variable and a handful of triggers (ugh!) to get around this error, but your code will most likely:

- be unmaintainable, due to its complexity and the unpredictable nature of triggers
- not respond well to a multi-user environment

This is why you should re-think your approach when you encounter this error. I advise you to build a set of procedures, nicely grouped in a package, to deal with inter-row consistency. Revoke all privileges to do DML on the table directly and use only those procedures to modify it. For instance, your update procedure would be an atomic process that would:

- acquire a lock to prevent concurrent updates on the same group of rows (for instance, lock the room record in a hotel reservation application)
- check that the row to be inserted validates all business logic
- make all relevant DML
- roll back all its changes (and only its changes, not the whole transaction) in case of error (easy with PL/SQL: just raise an error)
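The "roll back its changes, and only its changes" step maps onto savepoints: in Oracle PL/SQL you would use SAVEPOINT and ROLLBACK TO inside the procedure. The same idea is sketched below with Python's stdlib sqlite3, purely so the example is runnable anywhere; the table and names are invented for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.isolation_level = None          # manage transactions explicitly
cur = conn.cursor()
cur.execute("CREATE TABLE reservations (id INTEGER PRIMARY KEY, room TEXT)")

cur.execute("BEGIN")
cur.execute("INSERT INTO reservations VALUES (1, 'room A')")   # earlier work in the transaction

cur.execute("SAVEPOINT booking_proc")                          # marks the procedure's entry point
cur.execute("INSERT INTO reservations VALUES (2, 'room B')")
business_rule_ok = False                                       # pretend the validation step failed
if not business_rule_ok:
    cur.execute("ROLLBACK TO SAVEPOINT booking_proc")          # undo only the procedure's DML

cur.execute("COMMIT")                                          # earlier work survives
print(cur.execute("SELECT id FROM reservations").fetchall())   # [(1,)]
```

Row 2 is gone, but the rest of the transaction is untouched, which is exactly the behavior the answer asks the procedure to guarantee.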
How to set up Ubuntu Server 22.04 minimal networking?

I know the title seems dumb, but after some tries I think I've gotten into a struggle with the Ubuntu Server 22.04 minimal install: when installing, and on first boot, everything works fine, but soon I noticed there is no iproute2 installed. After installing it, I rebooted the system. After the reboot, the network is gone. Yes, gone. I manually used iproute2 to configure networking (PS: the IP is fake):

ip link set dev ens160 up
ip address add dev ens160 <IP_ADDRESS>/24
ip route add <IP_ADDRESS> dev ens160
ip route add default via <IP_ADDRESS>

Then ping <IP_ADDRESS>. Oh, come on. Ping is not even installed. Then I tried apt update, but it failed, stuck at 0% [Working].

Question: How do I configure the network in Ubuntu Server 22.04? Should I just install netplan on first boot?

Edit 1: Something weird happened. I created another VM and did the same install process; ping, iproute2 and netplan existed after install, and no matter how I reboot, netplan will handle the networking. I can't reproduce this behavior again.

Some of us found the same thing with Ubuntu Server "minimal". It isn't really meant for human interaction and the terminal is pretty useless. Install via the other option, just don't select anything else to be installed. It is only about 400 megabytes larger and has basic things like normal log files and nano and ping and such. See this forums thread for more details. Ubuntu Minimal server images are designed to be run on cloud/VM/LXD, where virtual networking details are provided by a host system or a default config file. Which kind of host are you trying to run it on? Or are you trying to run it on bare metal? For a server, I prefer ifupdown and editing /etc/network/interfaces. Netplan is a higher-level tool, which generates config for ifupdown / NetworkManager. You don't need to use it if you want to keep your server minimal. See "Ubuntu 18.04: switch back to /etc/network/interfaces".
How many bytes is a long on a 64-bit machine?

I have compiled the following simple C++ code:

#include <iostream>

int main() {
    int a = 5;
    int b = 6;
    long c = 7;
    int d = 8;
    return 0;
}

and here is the assembly:

pushq %rbp
.cfi_def_cfa_offset 16
.cfi_offset %rbp, -16
movq %rsp, %rbp
.cfi_def_cfa_register %rbp
xorl %eax, %eax
movl $0, -4(%rbp)
movl $5, -8(%rbp)
movl $6, -12(%rbp)
movq $7, -24(%rbp)
movl $8, -28(%rbp)
popq %rbp
retq

All the ints get an allocation of 4 bytes, which is normal. The long variable (movq $7, -24(%rbp)) is getting 12 bytes allocated to it (instead of 8). Why is that?

A duplicate of https://softwareengineering.stackexchange.com/questions/357233/why-does-a-long-int-take-12-bytes-on-some-machines?

How many bytes is a long on a 64-bit machine? Ultimately it is the language implementation that decides these matters, not the CPU instruction set. For example, for a standards-compliant implementation, int could be 32 or 64 bits on a 64-bit processor, or even something radically different, though that would be very unusual.

The long variable (movq $7, -24(%rbp)) is not getting 12 bytes allocated; instead, there is a 4-byte hole being skipped between the 8-byte long and the prior 4-byte variable. This 4-byte area is skipped over because using it for the long variable would mean misaligning the long variable. On the x64 architecture there is a mild penalty for misaligned accesses (though on some other processors, usually RISC-style ones, misaligned accesses fault instead of merely costing performance). The compiler is aware of alignment considerations, so it skips some memory in order to store the long more appropriately. The 4-byte stack space that is skipped over is virtually free, so this is a good speed-space trade-off.
One could argue that this is a poor trade-off across a potentially huge number of heap-based objects, but not on the stack for local variables.
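The layout in the assembly above can be reproduced with a small model: each local is placed below the previous one, then rounded away from %rbp to a multiple of its own alignment. This is a sketch; it assumes alignment equals size, which holds for int and long under the x86-64 System V ABI, and the "ret" slot stands in for the compiler's extra 4-byte slot at -4(%rbp).

```python
def stack_layout(local_vars):
    """Assign rbp-relative offsets, aligning each variable to its own size."""
    offset = 0
    slots = {}
    for name, size in local_vars:
        offset -= size
        offset -= (-offset) % size    # skip a hole if needed, e.g. -20 -> -24 for the long
        slots[name] = offset
    return slots

# the compiler's extra slot plus a, b, c, d from the example above
layout = stack_layout([("ret", 4), ("a", 4), ("b", 4), ("c", 8), ("d", 4)])
print(layout)   # {'ret': -4, 'a': -8, 'b': -12, 'c': -24, 'd': -28}
```

The model lands the long at -24 and the following int at -28, matching the assembly: bytes -16 through -13 are the skipped alignment hole.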
Recover alarm counter after reboot

In my application, I need to send notifications when, say, 1 week or 1 month has passed after the user has done some action in the app... It's working with a BroadcastReceiver like this:

public void startMyReceiver() {
    Long time = new GregorianCalendar().getTimeInMillis() + 1000 * 60 * 60 * 24 * 30; // <-- for 1 month (it's getting numeric overflow, but is working)
    Long interval = <PHONE_NUMBER>L;
    Intent intentAlarm = new Intent(this, MyReceiver.class);
    AlarmManager alarmManager = (AlarmManager) getSystemService(Context.ALARM_SERVICE);
    alarmManager.setRepeating(AlarmManager.RTC_WAKEUP, time, interval,
            PendingIntent.getBroadcast(this, 2, intentAlarm, PendingIntent.FLAG_UPDATE_CURRENT));
}

Then the simple receiver class:

public class MyReceiver extends BroadcastReceiver {
    @Override
    public void onReceive(Context context, Intent intent) {
        sendNotification();
    }
}

Then in AndroidManifest.xml:

<application>
    <receiver android:name="mypackage.MyReceiver">
        <intent-filter>
            <action android:name="android.intent.action.BOOT_COMPLETED" />
        </intent-filter>
    </receiver>
</application>

I've set that "android.intent.action.BOOT_COMPLETED" thinking that it was going to recover the counter and keep counting till the time ends... But it's starting my receiver instantly when the device is BOOTED UP... How can I set the receiver for 1 month so that after a REBOOT it keeps counting?

-- EDIT -- NEW QUESTION --

How would be the easiest way to store the time and do the calculation? Something like this?

1 - When defining the alarm the first time, I save the current time in milliseconds in Long firstTime and define the alarm to firstTime + longOneMonth;
2 - In the receiver for boot, I get the current time again and save it in Long rebootTime, then I do:

Long timeDifference = rebootTime - firstTime;
Long differenceToMonth = longOneMonth - timeDifference;

And set the receiver again to time rebootTime + differenceToMonth?
-- EDIT 2 -- Maybe if I convert the milliseconds from the first receiver setting to a date string and apply() it to shared preferences, then when rebooting I could convert that back to milliseconds... I guess it would work. It would be easy to save and retrieve it from SharedPreferences, a file, or a database; the first is probably the easiest. Please see the new question I added to the main one. Define another BroadcastReceiver for boot. Store the time (one month later) somewhere. All your alarms are gone when the device boots up, so when it does, call another function which sets the alarm again. But pay attention: you should account for the time that has passed since you last set the alarm (for example, you set it 7 days ago), so you should always calculate the difference between the set time and the current time. EDIT: The easiest way to calculate the time is to store the current time in milliseconds when you set the alarm. Next time, get the current time again and subtract the previously stored time from it; that is the time difference! At the end, subtract the difference from the one-month interval: Long time = new GregorianCalendar().getTimeInMillis() + ( 1000 * 60 * 60 * 24 * 30 ) - TIME_DIFFERENCE Edited my question including a new one
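The remaining-time arithmetic described in the edits is language-independent. Here is a minimal Python sketch of just that calculation; the function name is made up for illustration, and in the real app the stored timestamp would live in SharedPreferences and the result would feed the AlarmManager call.

```python
ONE_MONTH_MS = 1000 * 60 * 60 * 24 * 30  # 30 days in milliseconds

def remaining_delay_ms(first_set_time_ms, now_ms, interval_ms=ONE_MONTH_MS):
    """How much of the original interval is still left at now_ms.

    first_set_time_ms: when the alarm was originally scheduled (epoch ms)
    now_ms: current time at boot (epoch ms)
    Returns 0 if the interval already elapsed while the device was off.
    """
    elapsed = now_ms - first_set_time_ms
    return max(interval_ms - elapsed, 0)

# Example: alarm set 7 days ago, so 23 of the 30 days remain.
seven_days = 1000 * 60 * 60 * 24 * 7
print(remaining_delay_ms(0, seven_days))  # 1987200000 ms = 23 days
```

The boot receiver would then reschedule the alarm for "now + remaining delay" instead of a full month.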
Python 3.10.4 scikit-learn import hangs when executing via CPP Python 3.10.4 is embedded into cpp application. I'm trying to import sklearn library which is installed at custom location using pip --target. sklearn custom path (--target path) is appended to sys.path. Below is a function from the script which just prints the version information. Execution using Command Line works well as shown below. python3.10 -c 'from try_sklearn import *; createandload()' Output [INFO ] [try_sklearn.py:23] 3.10.4 (main, Aug 4 2023, 01:24:50) [GCC 4.8.5 20150623 (Red Hat 4.8.5-16)] [INFO ] [try_sklearn.py:24] sklearn /users/xxxx/temp/python/scikit-learn/sklearn/__init__.py Version = 1.5.1 The same script when called using CPP, hangs at import sklearn Other libraries like pandas, numpy etc works without any issues. Have you used a debugger to figure out what happens when it hangs? Thanks for looking into this. It hangs at https://github.com/scipy/scipy/blob/main/scipy/spatial/distance.py#L123 Right, and you're sure that Scipy is the correct version for your embedded Python? I installed using pip using the same python3.10 binary. Scipy 21189 Looks like Scipy and Numpy do not support Embedded python I think the issue with embedded Python is if you stop and restart the interpreter. If you only start the interpreter once then it should be fine. Your question doesn't say either way I'm not sure about this and the cpp code is a full stack application and I execute python script when only necessary. This might mean that I might be starting the interpreter multiple times. https://github.com/scipy/scipy/issues/21189 Looks like Scipy and Numpy do not support Embedded python Yes, they do support embedded Python in C++. In your C++ code you need to include #include <Python.h>. Have you done that? Also, try to import only packages that you need in your Python code. For instance use "from sklearn import linear_model". of course I have included python.h. 
Both "import sklearn" and "from sklearn import linear_model" have the same behavior.
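If a C++ debugger is not handy, the hang location can also be captured from the Python side with the stdlib faulthandler module, which dumps every thread's traceback after a timeout; in the embedded case this snippet could be executed via PyRun_SimpleString before the suspect import. A hedged sketch, with json standing in for sklearn so it stays self-contained:

```python
import faulthandler
import sys

# Arm a watchdog: if execution is still stuck 10 seconds from now,
# dump every thread's Python traceback to stderr.
faulthandler.dump_traceback_later(10, exit=False, file=sys.stderr)

import json  # <- replace with the suspect import, e.g. `import sklearn`

# Reached only if the import completed in time; disarm the watchdog.
faulthandler.cancel_dump_traceback_later()
print("import completed")
```

The dumped traceback shows exactly which frame (here, reportedly scipy.spatial.distance) the interpreter is blocked in.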
How to know how much RAM can still be installed Is there any software that can determine how much more RAM can still be installed on a computer? How much money do you have? Although Hennes' answer is technically correct, there are solutions out there. Crucial.com has a memory advisory tool that can tell you with an extremely high degree of accuracy what your computer can support. I have yet to see it give wrong information. It can do this by scanning your system's hardware, as well as having a very large database of hardware manufacturer specifications. With that information, the tool can tell you how much more RAM can be installed, as well as give you a wide variety of possible configurations on how to achieve that. hwinfo or pchunter serve this purpose Hi, and welcome to SU. Please include some more information in your answers. How can I use these programs? What do they do? Where can I get them? links have been updated Thanks, some info on the usage would be nice. How can I find out how much more RAM (i.e., how many available DIMM slots, how many in use) I can have on my system using these programs? I had a quick look at their webpages and I am not sure that they give this information. There are at least two ways: 1) Look at the motherboard (the number of DIMM sockets will be obvious). 2) Read the motherboard's manual. No, there is no software which will always be able to tell you that. The maximum RAM is limited by a few things, such as: How much the memory controller can access (that is part of the northbridge or the CPU) How many copper traces are laid on the motherboard to the memory. How many memory sockets are on the board. Which kind of chips the memory controller understands. That means that software can check which CPU/chipset you have. If that chipset is limited to (for example) 2 GiB then it can correctly state that no more than 2 GiB can be used. However it cannot detect the number of physical connections to the memory.
Every connection skipped saves the manufacturer some money, but also halves the maximum addressable memory. There is a balance here, with the manufacturers trying to make boards which will satisfy most of their regular customers, without spending more than needed. This might be well below the maximum that the chipset or the CPU can handle, and this is not detectable by software. In other words, if you want to add a lot* of memory then check the manual. As to sockets and chips (mainly RAM density on the chips): Do they make DIMMs with enough capacity? Does the motherboard support it? Do you have enough sockets? (e.g. two sockets and a max of 8 GiB per DIMM would max out at 16 GiB total, even if the chipset supports more.) * Just what "a lot" is differs per model and per year. But if you want to use ten times as much as most other new computers, then check.
Import grid text file to shapefile in QGIS I'm working with the Climate Change forecasts from the Hadley Center Climate Model (HADCM3) available here: http://www.ipcc-data.org/sres/hadcm3_download.html They have many different forecasts and I'm working with the file: HADCM3_A1F_TEMP_1980.tar.gz. It seems to be a grid text file, and includes information for several different months, so it has headers at the start and also in the middle of the text: What is the best way to import these seemingly .txt files (maybe they are some other format I'm not familiar with) to QGIS, and save them as shapefiles? I've managed to extract them using 7zip, but I don't know how to proceed from there. I'm working on Windows and QGIS 2.16.2 From http://www.ipcc-data.org/sres/hadcm3_grid.html : The grid is 96*73 large, with cell sizes of 3.75x2.5 degrees. Note that Eastings are from 0 to 360. There is no easy way to load the data into QGIS; you need to combine Latitude, Longitude and value from different tables. Usually this is done using Python (or similar) code. Thanks! Is there any source of model code or template I could use that you can think of? I'm not an expert on how to handle grids, honestly. Unfortunately, I can't help out with code.
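As the answer notes, the usual route is a small script that turns each grid cell into a point with its value. Here is a sketch of just the coordinate arithmetic for the 96x73 HADCM3 grid (3.75 x 2.5 degree cells, eastings 0 to 360). The row/column origin and orientation assumed below should be checked against the file header before use; writing the points out as a shapefile would then be one extra step with a library such as pyshp or Fiona.

```python
NCOLS, NROWS = 96, 73          # HADCM3 grid dimensions
DLON, DLAT = 3.75, 2.5         # cell size in degrees

def cell_center(col, row):
    """Lon/lat of grid point (col, row).

    Assumes column 0 starts at longitude 0 (eastings run 0..360) and
    row 0 starts at latitude +90, decreasing southward. 73 rows at
    2.5 degrees span +90..-90 inclusive. Check the file header before
    relying on this orientation.
    """
    lon = col * DLON               # 96 * 3.75 = 360 degrees of longitude
    lat = 90.0 - row * DLAT
    if lon > 180.0:                # convert 0..360 eastings to -180..180
        lon -= 360.0
    return lon, lat

print(cell_center(0, 0))    # (0.0, 90.0)
print(cell_center(95, 72))  # (-3.75, -90.0)
```

Each parsed value at (col, row) would then be paired with cell_center(col, row) to build the point layer.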
Why does GraphiQL not show the "data" property, but just the "errors" property for Subscriptions? When executing a Subscription via GraphiQL (v 1.5.1) which throws an error, but also contains data, why doesn't GraphiQL show the data? And how to achieve this? In the following screenshot it can be seen that the payload of the websocket message contains both errors and data as properties: Nevertheless, the data from "data" will not be displayed on the GraphiQL interface: The problem occurs only for Subscriptions and not for Queries.
Difference between Unity's [Command] and [Server] [Command] and [Server] functions both run on the server and can also be called by a local player but what is the difference between them? I'm not 100% sure myself, but I do know that [COMMAND] attribute is only for use on PLAYER controlled objects. That might be something to do with it (I'm also learning Unity Network gaming ~gradually~ at the moment :] It may not be advisable to learn UNet too deeply, it is being deprecated and replaced in the next six to 12 months. @SuperMegaBroBro Yeah it may help to understand. @Draco18s I'll keep in mind. They actually do quite different things according to the documentation: https://docs.unity3d.com/ScriptReference/Networking.CommandAttribute.html [Command] functions are invoked on the player GameObject associated with a connection https://docs.unity3d.com/ScriptReference/Networking.ServerAttribute.html A [Server] method returns immediately if NetworkServer.active is not true, and generates a warning on the console But be sure to be mindful of the comment by Draco18s: It may not be advisable to learn UNet too deeply, it is being deprecated and replaced in the next six to 12 months I'll keep in mind what @Darco118s said. But I wanted to know that [Command] is called by the player object, but who calls [Server] function. I don't think it matters who or what calls a [Server] attribute. The way I understand it is that you wanna use the [Server] attribute for any function that absolutely has to be connected to the server. @HapppyGuyDk Ok
Play video in UITableView cell In one of my applications I have to play video in a UITableView cell; at a time only one cell's video should play, and when the UITableView scrolls and the cell is hidden, the video should stop. My project is in Objective-C. Any suggestion for a third-party library? Thanks in advance. For playing video in a UITableView cell, what are you using? Is that a web view or any other media player? Well, I am not sure about any 3rd-party SDK, but you can refer to this post on SO: https://stackoverflow.com/questions/24825994/play-video-on-uitableviewcell-when-it-is-completely-visible @R.Mohan I have used one third-party library but it creates issues, so I am looking for another alternative. What have you tried? Add that code to the question! @Lion I have tried the ZFPlayer third-party library but it creates some issues in my scenarios @Hardik A check my answer and try it. If it works for you, accept the answer :) Use UIWebView in that cell, it will work fine NSString *baseUrl = @"http://www.youtube.com/embed/"; NSString *videoUrl = [NSString stringWithFormat:@"%@%@",baseUrl,embedCode]; videoUrl = [videoUrl stringByAddingPercentEscapesUsingEncoding: NSUTF8StringEncoding]; NSString *embedHTML = @"\ <html><head>\ <style type=\"text/css\">\ body {\ background-color: transparent;\ color: white;\ }\ </style>\ </head><body style=\"margin:0\">\ <embed id=\"yt\" src=\"%@\" type=\"application/x-shockwave-flash\" \ width=\"%0.0f\" height=\"%0.0f\"></embed>\ </body></html>"; NSString *html = [NSString stringWithFormat:embedHTML,videoUrl, 300.0, 200.0]; UIWebView *videoWebView = [[UIWebView alloc] initWithFrame:CGRectMake(8, 10.0, width-16, (width-16)/1.8)]; [videoWebView setAllowsInlineMediaPlayback:YES]; videoWebView.mediaPlaybackRequiresUserAction = YES; videoWebView.scrollView.scrollEnabled = NO; videoWebView.scrollView.bounces = NO; [videoWebView loadHTMLString:html baseURL:nil]; [self.contentView addSubview:videoWebView];
MySQL syntax error when string input contains quotation marks When I submit the form to my JDBC connection page it gives an error. I submitted the form with data like this: It's wrong It gives an error as shown below com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 's wrong' IN BOOLEAN MODE) and bot2=1' at line 1 My code is String Query="SELECT * FROM ques WHERE MATCH(subject) AGAINST('"+vv+"' IN BOOLEAN MODE) and bot2=1"; Class.forName("com.mysql.jdbc.Driver"); conn=DriverManager.getConnection("jdbc:mysql://localhost:3306 /wst","manohar","manohar"); stmt=conn.prepareStatement(Query); rs=stmt.executeQuery(Query); while(rs.next()) { .... } Try this instead: String Query="SELECT * FROM ques WHERE MATCH(subject) AGAINST(? IN BOOLEAN MODE) and bot2=1"; conn=DriverManager.getConnection("jdbc:mysql://localhost:3306/wst","manohar","manohar"); stmt=conn.prepareStatement(Query); stmt.setString(1, vv); rs=stmt.executeQuery(); The technique is called a "parameterized query" and it offers several advantages, one of which is that you don't have to worry about escaping special characters in string parameters. That's because you are not escaping the string in your query. I think you could find this thread useful: Java - escape string to prevent SQL injection
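The same parameterized-query idea carries over to other languages and drivers. As an illustration (a different stack from the asker's JDBC code), here is the pattern with Python's stdlib sqlite3; the placeholder syntax varies between drivers, but the principle is identical: the driver transports the value separately instead of splicing it into the SQL text, so quotes in the input cannot break the statement.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ques (subject TEXT, bot2 INTEGER)")
conn.execute("INSERT INTO ques VALUES ('It''s wrong', 1)")

user_input = "It's wrong"  # contains a quote, just like in the question

# The ? placeholder is filled in by the driver; no string concatenation.
rows = conn.execute(
    "SELECT * FROM ques WHERE subject = ? AND bot2 = 1",
    (user_input,),
).fetchall()
print(rows)  # [("It's wrong", 1)]
```

This is also the standard defence against SQL injection, which the unparameterized original code is vulnerable to.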
Riemann integrals and the Volterra example idea. This is an exercise from my course on Measure and Integration. I do not know if my ideas are correct, so I will also post my attempt so you can comment and help me. Let $f(x)$, for $x \geq 0$, be such that: $$ f(x)=x^{-1/4}, x>0, f(0)=0 $$ This function is not bounded, so it is not R-integrable over $[0,1]$. However, its improper integral over $[0,1]$ exists. Calculate the value of the improper integral $\int_0^1 f$. Now, let the function $g(x)$ over $[0,1]$ be such that: $g(x)=0$ if $x\in C(4)$, the Cantor set, and if $(a,b)$ is one of the open intervals whose union produces $[0,1]-C(4)$, then for $x \in (a,b)$, define $$ g(x)= \left\{ \begin{array}{ll} (x-a)^{-1/4} & \textrm{if } a<x \leq (a+b)/2 \\ (b-x)^{-1/4} & \textrm{if } (a+b)/2<x<b \\ \end{array} \right. $$ Show that even the improper integral of $g$ on $[0,1]$ does not exist. Find the integral of $g(x)$ on each interval of length $4^{-n}$ where $g$ does not vanish. Remembering that there are exactly $2^{n-1}$ of these open intervals of length $4^{-n}$, add the values you just calculated and find a value that should be assigned to $\int_0^1 g$. MY ATTEMPT (i) $$ \lim_{n\rightarrow 0^+}{\int_n^1 f(x)dx}=\lim_{n\rightarrow 0^+}{\int_n^1 x^{-1/4}}=\lim_{n\rightarrow 0^+}{\frac{4}{3}(1^{3/4}-n^{3/4})}=\frac{4}{3} $$ (ii) $$ \lim_{n\rightarrow a+}{\int_n^b g(x)}=\lim_{n\rightarrow a+}{\int_n^b (x-a)^{-1/4}}=\lim_{n\rightarrow a^+}{[(b-a)^{3/4}+(n-a)^{3/4}]}=(b-a)^{3/4}=4^{-3n/4} $$ now, $$ \lim_{n\rightarrow a-}{\int_n^b g(x)}=\lim_{n\rightarrow a-}{\int_n^b (a-x)^{-1/4}}=\lim_{n\rightarrow a^-}{[(a-b)^{3/4}+(a-n)^{3/4}]}= $$ $$ =(a-b)^{3/4}=((-1)(b-a))^{3/4}=(-4)^{-3n/4} \neq 4^{-3n/4} $$ so the improper integral does not exist (iii) $(2^{n-1}) 4^{-3n/4}=(2^{n-1})(2^{-6n/4})= 2^{(4n-4-6n)/4}=2^{(-2n-4)/4}=2^{(-n-2)/2}$ is the value
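A hedged cross-check for parts (ii)/(iii), to compare against the attempt above (a sketch, not an authoritative solution): on a removed interval $(a,b)$ of length $L=4^{-n}$, the two halves of $g$ contribute the same amount, so the per-interval integral picks up constant factors that the attempt drops.

```latex
\int_a^b g\,dx
  = 2\int_a^{(a+b)/2} (x-a)^{-1/4}\,dx
  = 2\cdot\frac{4}{3}\Big(\frac{L}{2}\Big)^{3/4}
  = \frac{8}{3}\,2^{-3/4}\,4^{-3n/4}.

% Summing over the 2^{n-1} intervals of stage n, then over n,
% using 4^{-3n/4} = 2^{-3n/2} (a geometric series):
\sum_{n=1}^{\infty} 2^{n-1}\cdot\frac{8}{3}\,2^{-3/4}\,2^{-3n/2}
  = \frac{4}{3}\,2^{-3/4}\sum_{n=1}^{\infty} 2^{-n/2}
  = \frac{4}{3}\,2^{-3/4}\,\frac{1}{\sqrt{2}-1}
  = \frac{4}{3}\,2^{-3/4}\,(\sqrt{2}+1).
```

This differs from the per-interval value $4^{-3n/4}$ in (ii) by the factor $\frac{8}{3}2^{-3/4}$, which is worth reconciling before assigning a final value to $\int_0^1 g$.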
WPF Datagrid ItemSource + ObservableCollection I've a Datagrid that shows all the Data from an ObservableCollection.But I just want to show the first 10 elements in the Datagrid. Can you please help? add your code, then sure someone can help I assume you are using MVVM.. You can try using collection view source.. observableCollection = new ObservableCollection<string>(); Items = CollectionViewSource.GetDefaultView(observableCollection.Take(10)); Where "Items" is property in your viewmodel and "ItemsSource" for your data grid.. public ICollectionView Items { get; set; } You may have to include couple of namespaces in your viewmodel using System.Collections.ObjectModel; using System.Windows.Data; Thank you very much for your answer, right now Im not using MVVM, it's code behind. Im new in WPF. But my next step would be to use MVVM if you are not using MVVM you can still use the above solution... change you code behind as above and assign "Items" as item source for your data grid But this solution is not observable ... as in if someone adds an element in first 10 indices of the source observable collection, they wont update on the GUI. Use CollectionView.Filter = Func<object, bool>(item => sourceCollection.IndexOf(item) < 9) Let's say your DataGrid is dg. You can try : int nbV = 10; //number you want ItemCollection ic = new ItemCollection(); for(int k = 0; k < nbV; k++) { ic.Add(dg.Items[k]); } dg.ItemsSource = ic.DefaultView;
Secure wcf (using username password validator) not working from php client I have construct a simple secured wcf with wsHttpBinding in .Net C# (framework 4.5) and consume it from .Net C# (also) client and every thing work fine. But when I try to consume It from php (5.5) client by calling a method from the wcs service, the client not work and it has entered in an infinite loop and not showing any error message, just looping. a. The following is my wcf ServiceContract and OperationContract's: namespace CenteralServices { [ServiceContract] public interface IAdminServices { [OperationContract] int Add(int x, int y); } } b. The following is the configueration file Web.config for the wcf: <?xml version="1.0" encoding="utf-8"?> <configuration> <system.serviceModel> <services> <service name= "CentralTicketServicesSystem.AdminSystem" behaviorConfiguration="customBehaviour"> <endpoint address="AdminServices" binding="wsHttpBinding" contract="CentralTicketServicesSystem.IAdminServices" bindingConfiguration="ServiceBinding" behaviorConfiguration="MyEndPointBehavior"> </endpoint> <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange"/> <host> <baseAddresses> <add baseAddress="http://localhost:8080/AdminServicesSystem" /> </baseAddresses> </host> </service> </services> <bindings> <wsHttpBinding> <binding name="ServiceBinding" openTimeout="00:10:00" closeTimeout="00:10:00" receiveTimeout="00:10:00" sendTimeout="00:10:00"> <security mode="Message" > <message clientCredentialType="UserName"/> </security> </binding> </wsHttpBinding> </bindings> <behaviors> <endpointBehaviors> <behavior name="MyEndPointBehavior"> </behavior> </endpointBehaviors> <serviceBehaviors> <behavior name="customBehaviour"> <serviceMetadata httpGetEnabled="true"/> <serviceDebug includeExceptionDetailInFaults="false"/> <serviceAuthorization principalPermissionMode="Custom"> <authorizationPolicies> <add policyType="CentralServicesHost.AuthorizationPolicy, CentralServicesHost" /> 
</authorizationPolicies> </serviceAuthorization> <serviceCredentials> <userNameAuthentication userNamePasswordValidationMode="Custom" customUserNamePasswordValidatorType="CentralServicesHost.UserAuthentication, CentralServicesHost"/> <serviceCertificate findValue="15 63 10 5e b6 4b 4d 85 4b 2e 4d 5b ec 85 02 ec" storeLocation="LocalMachine" x509FindType="FindBySerialNumber" storeName="My"/> </serviceCredentials> </behavior> <behavior name="mexBehaviour" > <serviceMetadata httpGetEnabled="true" /> </behavior> </serviceBehaviors> </behaviors> </system.serviceModel> </configuration> c. The following is UserAuthentication class: namespace CentralServicesHost { public class UserAuthentication : UserNamePasswordValidator { public override void Validate(string userName, string password) { if (string.IsNullOrEmpty(userName)) throw new ArgumentNullException("userName"); if (string.IsNullOrEmpty(password)) throw new ArgumentNullException("password"); if (userName != "test" && password != "test") throw new FaultException("Unknown Username or Incorrect Password."); } } } d. 
The following is AuthorizationPolicy class: namespace CentralServicesHost { public class AuthorizationPolicy : IAuthorizationPolicy { Guid _id = Guid.NewGuid(); // this method gets called after the authentication stage public bool Evaluate(EvaluationContext evaluationContext, ref object state) { // get the authenticated client identity IIdentity client = GetClientIdentity(evaluationContext); // set the custom principal evaluationContext.Properties["Principal"] = new CustomPrincipal(client); return true; } private IIdentity GetClientIdentity(EvaluationContext ec) { object obj; if (!ec.Properties.TryGetValue("Identities", out obj)) throw new Exception("No Identity found"); IList<IIdentity> identities = obj as IList<IIdentity>; if (identities == null || identities.Count <= 0) throw new Exception("No Identity found"); return identities[0]; } public System.IdentityModel.Claims.ClaimSet Issuer { get { return ClaimSet.System; } } public string Id { get { return _id.ToString(); } } } } e. The following is CustomPrincipal class: namespace CentralServicesHost { class CustomPrincipal : IPrincipal { IIdentity _identity; string[] _roles; public CustomPrincipal(IIdentity identity) { _identity = identity; } // helper method for easy access (without casting) public static CustomPrincipal Current { get { return Thread.CurrentPrincipal as CustomPrincipal; } } public IIdentity Identity { get { return _identity; } } // return all roles public string[] Roles { get { EnsureRoles(); return _roles; } } // IPrincipal role check public bool IsInRole(string role) { EnsureRoles(); return (_roles != null) ? _roles.Contains(role) : false; } // read Role of user from database protected virtual void EnsureRoles() { using (var s = new SupportedMaterialsSystemEntities()) { _roles = new string[1] { "admin" }; } } } } f. 
The following is my php client code: <?php $options = array('soap_version' => SOAP_1_2, 'login' => 'test', 'password' => 'test'); $wsdl = "http://localhost:8080/AdminServicesSystem"; $client = new SoapClient($wsdl, $options); $obj = new stdClass; $obj->x = 3; $obj->y = 3; $retval = $client->Add($obj);//here the browser loops for infinite without any response. //var_dump($exc);//THIS POINT NOT REACHED //die(); $result = $retval->AddResult; echo $result; NOTES: 1. My OS is Win. 8.1, and I'm using visual studio 2013 (as adminstrator) and php Wamp Server. 2. I tried both, hosting the wcf service in IIS 6.2 and console application but non of them changes my php client looping. 3. I have Created the self-signed certificate usin the IIS manager that stores it in my local machine. 4. When I change the soap_version in the php code from SOAP_1_2 to SOAP_1_1 I had Cannot process the message because the content type 'text/xml; charset=utf-8' was not the expected type 'application/soap+xml; charset=utf-8'.. Last Note: My .Net C# Client code is the following: using (var svcProxy = new AdminServiceProxy.AdminServicesSystemClient()) { svcProxy.ClientCredentials.UserName.UserName = "test"; svcProxy.ClientCredentials.UserName.Password = "test"; Console.WriteLine(svcProxy.Add(1, 1));//the service works fine and print 2 } } So agin, What is the right way to call a secured wcf (with wsHttpBinding) service from php. I believe you need a default WSDL metadata published to use PHP soap client (see abilities here http://php.net/manual/en/intro.soap.php). Try to add endpoint with basicHttpBinding binding, which can provide WSDL for your client and then use this endpoint. On the address http://localhost:8080/AdminServicesSystem you have endpoint with mexHttpBinding, which provides metadata int other format (http://www.w3.org/TR/2009/WD-ws-metadata-exchange-20090317/). 
Try to see here for more details: https://abhishekdv.wordpress.com/2013/05/24/mexhttpbinding-vs-wsdl/ Actually this way didn't work even when I saved the WSDL on the local machine and routed SoapClient to read it from the saved version. Then try Wireshark to capture the communication and see what is really happening.
Display HTML on the basis of dropdown menu selected I have a dropdown menu and on the basis of the item selected I want to display a small part of HTML page. <select id='menuHandler'> <option value='abc'> abc</option> <option value='xyz'>xyz</option> </select> IF the selected value is "abc" then a popup button is displayed with the following code: <button id='runForm' onClick=""> Run form </button> <div id ="runFormPopup" style=" display:none;"> <table CELLPADDING="10" CELLSPACING="5" class='border'> <tr> <td colspan='3'>Run form Generation</td> </tr> <tr> <td width="200" class='border'> <span>Input Data</span><br> <input type="checkbox" name="vehicle" >Process log(s)<br> Summary data<br> <input type="checkbox" name="vehicle" >Thickness data<br> <input type="checkbox" name="vehicle" >Particle data </td> <td class='border'> <span>Steps(s)</span> <input type="checkbox" > <input type="checkbox" > <input type="checkbox" > </td> </tr> <tr> <td> Run form filename </td> <td><input type="text" ></td> <td><button class="editbtn">Generate</button></td> </tr> </table> </div> else if the value selected is "xyz" this should be displayed. <form action="${ctx}/home/step50/generateReport" method="GET" id="form_generate"> <input style="margin-top: 20px;" type="submit" id="btnGenerate" class="small button active" value="Generate"/> </form> how can I do it. possible duplicate of Select OnChange Listen to the change event of the #menuHandler select element, and add the conditional logic there. 
Example Here $('#menuHandler').on('change', function () { if (this.value === "abc") { $('#runFormPopup, #runForm').show(); $('#form_generate').hide(); } else { $('#runFormPopup, #runForm').hide(); $('#form_generate').show(); } }); ..or a shorter version: Example Here $('#menuHandler').on('change', function () { var condition = this.value === "abc"; $('#runFormPopup, #runForm').toggle(condition); $('#form_generate').toggle(!condition); }); Almost perfect, you just forgot to show/hide the button: $('#runFormPopup, #runForm').toggle(condition); one small query: I already have abc selected but it doesnt show the table ?? so instead of select in the dropdown I have abc selected therefor I want to show the menu when the page first loads and it should change if the user selects xyz @PSDebugger Take a look at this example. You would just trigger a change event after the event listener using $('#menuHandler').trigger('change'). In doing so, the correct elements will be hidden/shown on load rather than the first time it is selected. Try something like this. $('#menuHandler').change(function(){ var selectedValue = $(this).val(); if(selectedValue === 'abc') { $('#runFormPopup').show(); $('#form_generate').hide(); } else { $('#runFormPopup').hide(); $('#form_generate').show(); } }); From a high-level perspective: Create a listener on the button and listen for a click Dynamically get the value Either create the content for which you are looking, or insert content into a dynamically-created <div> (overlay, pop-up, etc.) Following these steps, you should be OK. Be careful; this came across the Low Quality Posts queue! In the future, keep chattiness out of answers; it's not constructive, and it detracts focus from an actual answer. I've expanded your hint into an answer; please edit your answer if you'd like to add additional information. Cheers!
How to generate documents (doc / pdf) through a Facebook chatbot? How do I make my FB chatbot collect information from the user and then, with that information, fill up a form, generating a document (Word / PDF), which will then be sent back to the user to download? So far, I can only collect the info, send it to my email, and fill up the form manually. I need it to be automatic. Any hints will be appreciated What is the server-side framework or language? I'm using node.js
Vector dot product and scalar multiplication in Grapher.app I'm trying to play around with vectors in Grapher.app, but I'm getting an error when I try to A: multiply two vectors (specified as a vertical matrix) together, and B: multiply a vector by a scalar value. Here's my dilemma in picture form: Any ideas? If you want to multiply two column vectors don't you have to transpose one of them? I got the transpose operator by typing ^T. (That's caret + T, not Ctrl+T) I was able to multiply a vector by a scalar. In your screenshot, the subscripts aren't consistent. Maybe it's part of your problem or maybe it's just some math I don't understand. This is all on the edge of my math knowledge. Epilogue For those who are interested, here is bjz's screenshot of his working version: Cheers mate, I got it to work. And yes, my maths knowledge isn't great – I'm just using Grapher to sketch out my algorithms before I commit to code.
If I reinstall Ubuntu, will I be able to change my root password? I forgot my old one. I can't access GRUB because the piece of Ubuntu I use is the terminal; I don't have any interface. By default Ubuntu installs without a 'root' password, though one can be added post-install. If you install over it (& format) it will re-install & again have the default of no root password. Depending on choices made, you could be asked for your password (e.g. no format, encrypted partitions, etc.) before it'll re-install without a format, but you can overwrite your system clean (assuming the password is not an HDD/SSD or hardware-firmware-level password). I may be mistaken but I don't think that there is a "root" password. There is only the user password and whether or not the user can have access to "root". https://askubuntu.com/questions/189907/what-is-the-default-root-password It's common for new users who have set automatic login to forget the password associated with their user. They often refer to this as the "root" password.... @CharlesGreen How true. I set up a server for my brother one time, with auto login on. He was trying to do something and said "I don't have a password". Not sure how he missed that file on his desktop with the filename in all caps "this is your password".
Is there any way to get a hook or event in Electron JS when a new application is launched I'm new to Electron JS. Trying to build a cross-platform desktop application to watch user activities. My requirement is: when a user moves out of my application and opens some other application like a browser / calculator, is there any way that can be monitored from my application? Please advise. Thanks You can use Node's child_process API (available in Electron): let spawn = require("child_process").spawn; let bat = spawn("cmd.exe", [ "/c", // Argument for cmd.exe to carry out the specified script "D:\\test.bat", // Path to your file (backslash escaped in a JS string) "argument1" // Optional first argument ]); bat.stdout.on("data", (data) => { // Handle data... }); bat.stderr.on("data", (err) => { // Handle error... }); bat.on("exit", (code) => { // Handle exit }); Make sure to put the correct path to your batch file (here is an example to find notepad): tasklist | find /i "notepad.exe" && echo true || echo false Make sure to handle only false, because true will log notepad.exe info, too. P.S.: The batch script works only for Windows; I don't know how to create the bash version.
How can I set the size of a Win32 dialog in pixels? I am trying to get a Win32 dialog that is 500x520 px, but in my .rc file, these settings get me a bigger window than I expected. IDD_DIALOG1 DIALOG DISCARDABLE 0, 0, 500, 520 STYLE DS_MODALFRAME | DS_CENTER | WS_POPUP | WS_CAPTION | WS_SYSMENU | WS_MINIMIZEBOX Is there a scaling factor somewhere? The units in a dialog resource are dialog units which are normalized by the dimensions of the dialog font by a rather convoluted process. You can convert from dialog units to screen pixels with MapDialogRect(). There are lots more details in the documentation for GetDialogBaseUnits() but the recommended approach is to call MapDialogRect() and let it do the hard work for you. Yeah, the reason you shouldn't use GetDialogBaseUnits is because its calculations are based on the default system font, which no one uses anymore. Unfortunately, if the window in question is not a dialog, you don't have much choice, as MapDialogRect doesn't work. @CodyGray The documentation for GetDialogBaseUnits explains how the units work. It also tells you which function to call to get the measurements for a non-system font, which you also can use for a non-dialog window.
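The scaling can be illustrated numerically. Per the GetDialogBaseUnits documentation, template units map to pixels as pixelX = MulDiv(templateX, baseX, 4) and pixelY = MulDiv(templateY, baseY, 8). A small Python sketch of that arithmetic; the 8x16 base-unit values below are the classic system-font defaults, used here only as an illustrative assumption, since a real program should call MapDialogRect, which accounts for the dialog's actual font:

```python
def dialog_units_to_pixels(du_x, du_y, base_x=8, base_y=16):
    """Convert dialog-template units to pixels.

    Mirrors the documented formula: one horizontal dialog unit is
    base_x / 4 pixels and one vertical dialog unit is base_y / 8
    pixels, where base_x/base_y are the dialog base units (here the
    old 8x16 system-font defaults, an assumption for illustration).
    """
    return du_x * base_x // 4, du_y * base_y // 8

# The 500x520 template from the question comes out twice as large:
print(dialog_units_to_pixels(500, 520))  # (1000, 1040)
```

That doubling is exactly the "bigger window than I expected" effect from the question: 500x520 dialog units is roughly 1000x1040 pixels under those base units.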
How to parse the Info.plist file from an ipa I want to write a program to get app details, e.g. app name, version, bundle identifier, from an ipa file. However this file is not a plain text file, it's encoded in some way and my program cannot parse it. Is there any way to decode this file? UPDATE To clarify the question, I'm writing a python program to get app details from ipa files that I exported from Xcode. Clarify your question. Are you talking about getting details from the app's own Info.plist or are you talking about an app that can extract the Info.plist info of other ipa files? For those without Apple Developer logins, here are the two answers from the accepted answer's link (you have to login to see it): use PlistBuddy /usr/libexec/PlistBuddy -c "Print :CFBundleIdentifier" yourBinaryOrXmlPlist.plist There are also many other ways to parse binary plist files. For example, I'm using Python and there is a library called biplist, which is built to read and write binary plist files and supports both Python 2 and 3. Also, the 'plistlib' in Python 3.4 supports binary plist manipulation with new APIs. It's easy if you have a Mac. Then follow this guideline: http://osxdaily.com/2011/04/07/extract-and-explore-an-ios-app-in-mac-os-x/ Rename from file.ipa -> file.zip Unzip file.zip Go to Payload folder Right click Application file -> Show Package Contents See Info.plist My Python version: https://gist.github.com/noamtm/a8ddb994df41583b64f8 In my research, the tricky bit was parsing a binary plist, since Python's plistlib (before 3.4) can't read it: from Foundation import NSData, NSPropertyListSerialization # Use PyObjC, pre-installed in Apple's Python dist.
def parse_plist(info_plist_string): data = NSData.dataWithBytes_length_(info_plist_string, len(info_plist_string)) return NSPropertyListSerialization.propertyListWithData_options_format_error_(data, 0, None, None) Thanks for the script that is very helpful for reading plist without knowing the app name +1 A double click on the plist will open it in Xcode. plutil On a mac you can do it with the ootb plutil: # Convert a binary plist to XML plutil -convert xml1 info.plist ## Convert XML back to binary plutil -convert binary1 info.plist For more details see man plutil in the terminal. /usr/libexec/PlistBuddy On macOS, for usage in shell scripts, for example, for reading/writing/deleting properties in plist files, there is also /usr/libexec/PlistBuddy For example, to read the bundle identifier key for the TextEdit.app you'd use something like: /usr/libexec/PlistBuddy -c "Print :CFBundleIdentifier" /System/Applications/TextEdit.app/Contents/Info.plist ### OUTPUT: >>> com.apple.TextEdit python's plistlib If you want to parse a text or a binary .plist file programmatically, for example with Python, then you can use plistlib — a standard library module: import plistlib with open('/etc/bootpd.plist', 'rb') as plist: conf = plistlib.load(plist) print(conf['dhcp_enabled']) Got the answer from Apple's developer forum: PlistBuddy is what you're looking for: /usr/libexec/PlistBuddy -c "Print :CFBundleIdentifier" yourBinaryOrXmlPlist.plist this kind of answer is not helpful to the community, it's possible that the link will not always work. Here is the solution I found in Python: since .plist files may be binary-encoded, you need a Python library that can read the binary format. For this particular example I use plistlib. NOTE: You get the .app bundle when you unzip the .ipa file.
from zipfile import ZipFile import plistlib with ZipFile("/content/sample_data/Experience Abu Dhabi.app.zip", 'r') as zObject: zObject.extractall( path="/content/sample_data/temp") path = "/content/sample_data/temp/Experience Abu Dhabi.app/Info.plist" with open(path, 'rb') as f: data = plistlib.load(f) print(data)
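Building on the answers above, the extraction step can be skipped entirely: zipfile can read the plist bytes straight out of the .ipa in memory, and plistlib (Python 3.4+) auto-detects XML vs binary format. A sketch, assuming the standard Payload/<App>.app/Info.plist layout:

```python
# Sketch: read Info.plist straight out of an .ipa without extracting it
# to disk.  plistlib.loads() handles both XML and binary plists.
import io
import plistlib
import zipfile

def read_info_plist(ipa_bytes):
    """Return the parsed Info.plist dict from ipa file contents (bytes)."""
    with zipfile.ZipFile(io.BytesIO(ipa_bytes)) as z:
        for name in z.namelist():
            # Match only the app bundle's top-level Info.plist
            parts = name.split("/")
            if (len(parts) == 3 and parts[0] == "Payload"
                    and parts[1].endswith(".app") and parts[2] == "Info.plist"):
                return plistlib.loads(z.read(name))
    raise FileNotFoundError("No Payload/*.app/Info.plist found in archive")
```

From the returned dict you can then pull CFBundleIdentifier, CFBundleShortVersionString, and so on, just as the PlistBuddy one-liners do.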
"tableau-server" openidc id_token not validated (errorCode=69) by tableau-server. vizportal log says wasIssuedByOurIdP:false Trying to set up OpenID Connect login for tableau-server-2022-3-10.x86_64 users using Keycloak 19.0.3, the login fails with "id_token verification failed (errorCode=69)" at step 6. It makes no difference whether client_authentication is set to client_secret_basic or client_secret_post. Upon hitting http://tableau-openidc.domain.some the user is redirected to the Keycloak login and can authenticate using their password. The access and id tokens are received by Tableau and logged when activating extended logging. The following log says: parsing response. 2023-10-30 18:03:37.499 +0000 () catalina-exec-7 vizportal: DEBUG com.tableausoftware.domain.user.openid.OpenIDConnectHelper - Validating access token received. 2023-10-30 18:03:37.549 +0000 () catalina-exec-7 vizportal: DEBUG com.tableausoftware.domain.user.openid.OpenIDConnectHelper - Verifying access token isSignatureValid:true, isDestinedToUs:true, wasIssuedByOurIdP:false, isTheTokenStillCurrent:true 2023-10-30 18:03:37.549 +0000 () catalina-exec-7 vizportal: WARN com.tableausoftware.api.webclient.WebClientGetAuthenticationController - WebClientGetAuthenticationController failed during OpenID login attempt com.tableausoftware.domain.exceptions.AuthenticationException: id_token verification failed (errorCode=69) at com.tableausoftware.domain.user.openid.OpenIDConnectHelper.authorizeUserViaAccessToken(OpenIDConnectHelper.java:433) ~[tab-domain-user-latest.jar:?]
both access- and id-token can be decoded (for ex: using https://jwt.io/) and validated using the x5c certificate found at http://keycloack:8090/auth/realms/my/protocol/openid-connect/certs within the id-token following claims are found: "sub": "d1b73ca0-cd07-415e-801e-4dc7534b8a2c" "email"<EMAIL_ADDRESS>"scope": "openid profile email" (I also tried ;-) with openid and profile claims with email as value with same result: "openid"<EMAIL_ADDRESS>"profile"<EMAIL_ADDRESS>) Keycloak is configured with: Credentials/Client Id And Secret Settings/Client Id/some-clientid Settings/Name/some-clientid Root Url: http://tableau-openid.domain.some Client authentication: On Authentication flow: Standard Flow, Service accounts roles (or not) tableau-server is configured with: tsm authentication openid configure --client-id "some-clientid" --client-secret "abCDEfgHiJ2kl409m261N2OPqrS0tuVx" --config-url http://keycloak:8090/auth/realms/myrealm/.well-known/openid-configuration --return-url "https://tableau-openid.domain.some" tsm authentication openid enable Local identity store: tsm settings import -f /opt/tableau/tableau_server/packages/scripts.20223.23.1234.5678/config.json tsm pending-changes apply tableau-server local users are set from a csv tabcmd createusers ./users.csv --server http://tableau-openidc.domain.some --username tableau-admin users.csv: user-1<EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS> I managed to add the tableau-server client in the aud claim of the access_token, but couldn't remove the previous value, resulting in: "aud": [ "some-clientid", "account" ], but I couldn't update the iss claim that seems hardcoded to http://keycloak:8090/auth/realms/myrealm whose value is never declared explicitly in tableau's openid config. Solution was to run tsm configuration set -k vizportal.openid.ignore_jwk -v true, as if Keycloak did not support JWK validation (cf: ignore_jwk option in doc)
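The wasIssuedByOurIdP:false flag corresponds to the standard OIDC issuer check: the id_token's iss claim must exactly match the issuer value advertised in the IdP's discovery document. A toy illustration of that comparison (not Tableau's actual code — just the standard rule, using the issuer URL from this post):

```python
# Illustration of the OIDC issuer check behind wasIssuedByOurIdP.
# Not Tableau's implementation: just the standard rule that the
# id_token's "iss" claim must exactly match the discovery document's
# "issuer" value (a plain string compare, so scheme/host/port all count).

def was_issued_by_our_idp(claims, expected_issuer):
    return claims.get("iss") == expected_issuer
```

Even a scheme or hostname difference between the issuer Keycloak bakes into the token and the issuer the relying party expects makes this comparison fail, hence the errorCode=69 above.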
Entity Framework auto incrementing field, that isn't the Id I know this isn't the most ideal solution, but I need to add an auto incrementing field to one of my EF Code First objects. This column is NOT the Id, which is a guid. Is there any way for me to define the auto incrementing field in code, or would creating the column myself and defining in the DB that it's auto incrementing work? [DatabaseGenerated(DatabaseGeneratedOption.Identity)] You can annotate that property with DatabaseGenerated(DatabaseGeneratedOption.Identity). EF allows only a single identity column per table. public class Foo { [Key] public Guid Id { get; set; } [DatabaseGenerated(DatabaseGeneratedOption.Identity)] public long Bar { get; set; } } I had seen the DatabaseGeneratedOption.Identity but tbh I had been lazy and hadn't read into it, I just assumed that it made that column the PKey field. Thanks, I will give it a go. Will this still work with multitenant databases? @Ahmad It will not work if each tenant has its own seed I need both to be type int, but it doesn't work with your code. The primary key is always 0. Any ideas why this is? I have my entity class set up the same way. If I try to update the Foo entity, I get an error like "Cannot update identity column Bar". Any idea about this? Old post thought I would share what I found with Entity Framework 6.1.3. I created a simple data layer library using C# and .NET Framework 4.6.1, added a simple repository/service class, a code first context class and pointed my web.config file to a local SQL Express 2014 database. In the entity class I added the following attribute constructor to the Id column: [Key] [DatabaseGenerated(DatabaseGeneratedOption.Identity)] public Guid Id { get; set; } Then I created a new migration by typing the following in Visual Studio 2015 Package Manager: Add-Migration Give the migration a name and then wait for the DbMigration class to be created.
Edit the class and add the following CreateTable operation: CreateTable( "dbo.Article", c => new { Id = c.Guid(nullable: false, identity: true), Title = c.String(), Content = c.String(), PublishedDate = c.DateTime(nullable: false), Author = c.String(), CreateDate = c.DateTime(nullable: false), }) .PrimaryKey(t => t.Id); } The above table is an example; the key point here is the following builder annotation: nullable: false, identity: true This tells EF to specify the column as not nullable and that you want it set as an identity column to be seeded by EF. Run the migration again with the following command: update-database This will run the migration class, dropping the table first (Down() method) then creating the table (Up() method). Run your unit tests and/or connect to the database and run a select query; you should see your table in its new form. Add some data excluding the Id column and you should see new Guids (or whatever data type you choose) being generated. For those stumbling onto this question for EF Core, you can now create an auto-incrementing column with your model builder (here with the Npgsql provider) as follows: builder.Entity<YourEntity>().Property(e => e.YourAutoIncrementProperty).UseNpgsqlIdentityAlwaysColumn(); Reference: https://www.npgsql.org/efcore/modeling/generated-properties.html
FFMpeg How to use multithreading? I want to decode H264 with FFmpeg, but I found the decode function only used one CPU core (according to the system monitor). env: Ubuntu 14.04, FFMpeg 3.2.4, CPU i7-7500U. So I searched for FFmpeg multithreading and decided to use all CPU cores for decoding. I set up the AVCodecContext like this: //Init works //codecId=AV_CODEC_ID_H264; avcodec_register_all(); pCodec = avcodec_find_decoder(codecId); if (!pCodec) { printf("Codec not found\n"); return -1; } pCodecCtx = avcodec_alloc_context3(pCodec); if (!pCodecCtx) { printf("Could not allocate video codec context\n"); return -1; } pCodecParserCtx=av_parser_init(codecId); if (!pCodecParserCtx) { printf("Could not allocate video parser context\n"); return -1; } pCodecCtx->thread_count = 4; pCodecCtx->thread_type = FF_THREAD_FRAME; pCodec->capabilities &= CODEC_CAP_TRUNCATED; pCodecCtx->flags |= CODEC_FLAG_TRUNCATED; if (avcodec_open2(pCodecCtx, pCodec, NULL) < 0) { printf("Could not open codec\n"); return -1; } av_log_set_level(AV_LOG_QUIET); av_init_packet(&packet); //parse and decode //after av_parser_parse2, the packet has a complete frame data //in decode function, I just call avcodec_decode_video2 and do some frame copy work while (cur_size>0) { int len = av_parser_parse2( pCodecParserCtx, pCodecCtx, &packet.data, &packet.size, cur_ptr, cur_size, AV_NOPTS_VALUE, AV_NOPTS_VALUE, AV_NOPTS_VALUE); cur_ptr += len; cur_size -= len; if(GetPacketSize()==0) continue; AVFrame *pFrame = av_frame_alloc(); int ret = Decode(pFrame); if (ret < 0) { continue; } if (ret) { //some works } } But nothing was different from before. How can I use multithreading in FFMpeg? Any advice? you will need to show more code. How do you measure how many cores are used? How do you decode frames? What version of FFmpeg? How did you allocate pCodecParserCtx? I receive the RTP stream using Boost.Asio first, then decode with FFmpeg and display with OpenGL. I have added some parse and decode code.
I just watch the system monitor to measure core usage; if I just decode video with no display, only one core works. pCodec->capabilities &= CODEC_CAP_TRUNCATED; And that's your bug. Please remove this line. The return value of avcodec_find_decoder() should for all practical intents and purposes be considered const. Specifically, this statement removes the AV_CODEC_CAP_FRAME_THREADS flag from the codec's capabilities, thus effectively disabling frame-multithreading in the rest of the code.
How can you use the summon command or setblock command with a player's coordinates? I was trying to make a fake skeleton and ran into the problem that he couldn't shoot arrows because the player runs away. I want to summon them next to the player, or place blocks right behind the player, so can you help me by explaining how to do so? Hi sheep_flyer: It might help if you list what you've tried, or why this isn't working - for instance, you say "he can't shoot arrows because the player runs away". From the next line, I have to guess that you want the arrows to hit the player, but they currently don't? To execute commands around a specific target use the execute command. /execute <Target> <X> <Y> <Z> <Command> In this case /execute @a ~ ~ ~ summon Skeleton 1.13 syntax: /execute at <target> run <command>
Debug python that won't respect a catch statement I am trying to run AVG (the virus scanner) as part of a program. The program normally is executed automatically, so I can't see the standard output from Python. When I run the program by calling it directly, it works perfectly; however, when I run it via automation, it fails. It will say in the syslog -> "Starting scan of: xxx", but it never says "Unexpected error" OR "Scan Results". Which means it's failing, but without reaching the except clause or reporting the error in the "out" variable. The offending function: # Scan File for viruses # fpath -> fullpath, tname -> filename, tpath -> path to file def scan(fpath, tname, tpath): syslog("Starting scan of: " + tname) command = ["avgscan", "--report=" + tpath + "scan_result-" + tname +".txt", fpath] try: out = subprocess.call(command) syslog("Scan Results: " + str(out)) except: syslog("Unexpected error: " + sys.exc_info()[0]) finally: syslog("Finished scan()") Both ideas so far are about the debugging code itself; prior to this, the scan was just a simple subprocess.call(command) with a simple syslog output. The with statement and the try/except were added to help with debugging. The issue is, out is not defined in the scope of the syslog - move the statement inside the try block, and you should be fine. Nope, changed the block and it failed: (syslog): Aug 17 15:16:13 be deluge-handler: Recieved for processing: xxx Aug 17 15:16:16 be deluge-handler: Starting scan of: xxx So I solved it. Solved it in so much as I am no longer using AVG Scan, and am using libclamscan instead. By using a scanner that works directly with Python, the results are faster, and the errors are all gone.
In case someone comes across this via a search, here is the code I am now using: import os.path import pyclamav def r_scan(fpath): viruslist = [] if os.path.isfile(fpath): viruslist = f_scan(fpath, viruslist) for root, subFolders, files in os.walk(fpath): for filename in files: viruslist = f_scan( os.path.join(root, filename), viruslist) writeReport(fpath, viruslist) def f_scan(filename, viruslist): result = pyclamav.scanfile(filename) if result[0] > 0: viruslist.append([result[1], filename]) return viruslist def writeReport(fpath, viruslist): header = "Scan Results: \n" body = "" for virusname, filename in viruslist: body = body + "\nVirus Found: " + virusname + " : " + filename with open(fpath + "-SCAN_RESULTS.txt", 'w') as f: f.write(header+body) I am suspecting the error is actually from the opening of the debug file; with statements do not prevent exceptions from being raised. In fact, they usually raise exceptions of their own. Note the change of the scope of the try/except block. # Scan File for viruses # fpath -> fullpath, tname -> filename, tpath -> path to file def scan(fpath, tname, tpath): syslog("Starting scan of: " + tname) command = ["avgscan", "--report=" + tpath + "scan_result-" + tname +".txt", fpath] try: with open(tpath + tname + "-DEBUG.txt", "w") as output: out = subprocess.call(command, stdout = output, stderr = output) syslog("Scan Results: " + str(out)) except: syslog("Unexpected error: " + sys.exc_info()[0]) finally: syslog("Finished scan()") Tried it, it gave the "Starting Scan" then "Finished Scan", it never said scan results, and never said unexpected error. Not to mention, I didn't add the debug until AFTER I realized it wasn't scanning.
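The key point in the answer above — that a `with` statement does not suppress exceptions, and that a failing `open()` raises before the body ever runs — can be shown in a few lines. A minimal self-contained sketch (the path and log list are invented for the demonstration):

```python
# Demonstration: `with` does not swallow exceptions, and a failing
# open() raises *before* its body runs -- so the open() must sit
# inside the try block for the except clause to fire.

def scan_with_logging(path, log):
    try:
        with open(path) as f:          # may raise FileNotFoundError
            log.append("opened")
            return f.read()
    except OSError as exc:
        log.append("error: " + type(exc).__name__)
        return None
    finally:
        log.append("finished")
```

Run against a nonexistent path, the log shows the error entry followed by "finished" — the same shape of output the original code was expected (but failed) to produce, because there the open() sat outside the try.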
Images are not rendered correctly This code appears to work correctly, but when I use it to render an image from a database, the image is incomplete on the page. Only about the top 70% of the image was rendered. Different amounts of the images are rendered with different images. protected void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException { FileData fileData = new FileData(); int id = Integer.parseInt(request.getParameter("id")); UploadFile uploadFile = fileData.SelectFile(id); inputStream = uploadFile.data; fileName = uploadFile.name; if(uploadFile.Type.equals("Image/Video")) { contentType = "image"; } render(request, response); } private void render(ServletRequest request, ServletResponse response) throws IOException { try { ByteArrayOutputStream baos = new ByteArrayOutputStream(); byte[] buffer = new byte[DEFAULT_BUFFER_SIZE]; int inputStreamLength = 0; int length = 0; while ((length = inputStream.read(buffer)) > -1) { inputStreamLength += length; baos.write(buffer, 0, length); } if (inputStreamLength > contentLength) { contentLength = inputStreamLength; } if (response instanceof HttpServletResponse) { HttpServletResponse httpResponse = (HttpServletResponse) response; httpResponse.reset(); httpResponse.setHeader("Content-Type", contentType); httpResponse.setHeader("Content-Length", String.valueOf(contentLength)); httpResponse.setHeader("Content-Disposition", "\"" + contentDisposition + "\"" + ((fileName != null && !fileName.isEmpty()) ? "; filename=\"" + fileName + "\"" : "")); } response.getOutputStream().write(baos.toByteArray(), 0, (int)contentLength); //finally response.getOutputStream().flush(); //clear baos = null; } finally { close(response.getOutputStream()); close(inputStream); } } private void close(Closeable resource) throws IOException { if (resource != null) { resource.close(); } } Here is an example of how the image looks on the page in Firefox.
I have checked that the image doesn't get corrupted when uploaded to the site and the image is fine in the database. The problem is the code that renders images on the site. What am I doing wrong? OK, I solved it: the servlet container creates a single servlet instance and reuses it for every request, so instance fields (such as inputStream and contentLength here) keep their values across page loads even though they are not static. Moving them into local variables fixes the partial rendering.
How to send data from Activity to View How can I make a canvas drawing with a custom value? I used this method to send data from my Information class (extends AppCompatActivity) to CustomView (extends View): Information.class public int getCategory() { int side = 100; return side; } CustomView.class (extends View) Information playObject = new Information(); int passed = playObject.getCategory(); and it works, but I want "side" to have the value of an EditText input. I tried: String y = et1.getText().toString(); int side = Integer.parseInt(y); but then I got errors with com.example.solar.Schemat.onCreate java.lang.RuntimeException: Unable to start activity ComponentInfo{com.example.solar/com.example.solar.Schemat}: android.view.InflateException: Binary XML file line #106: Binary XML file line #106: Error inflating class com.example.solar.views.CustomeView show more code - how do you get access to the EditText inside the custom class? It's wrong for sure why can't you make a variable in your view class and then use findViewById/view binding to give it the value? @snachmsm I'm trying to find a method to do this; I tried et1 = findViewById(R.id.editTextNumberDecimalTest3); but when I try to parse it to an integer to draw the circle I get an error
extra characters in csv output I'm trying to write a Python program to convert a simple text file to a.csv file. Each line of input contains names after from: followed by a name. Lines that do not start with from: are to be ignored. Input: from: Lance Cummins This line is ignored by the program from: Jackie Cohen Hello world from: Chris Paul Lalala from: Jackie Cohen Message The output of the program should be a CSV file, showing the name of the person followed by the number of times they appeared in the input file: Lance Cummins,1 Chris Paul,1 Jackie Cohen,2 However, the actual output of the program is this: ["Chris Paul": 1, "Lance Cummins": 1, "Jackie Cohen": 2} What confuses me is that I had another person run my program on their computer and the result was correct. Why is this happening? Here is my actual program: def is_field(field_name, s): if s[:len(field_name)] == field_name: return True else: return False def contributor_counts(file_name): fname = open(file_name, "r" ) counts = {} for x in fname: if is_field("from: ", x): x = x.strip("from: ") x = x.rstrip() if x in counts: counts[x] = counts[x] + 1 else: counts[x] = 1 return counts def print_contributors(counts): for x in counts: if counts[x] > 1: print str(x) + " posted " + str(counts[x]) + " times" else: print str(x) + " posted once" def save_contributors(counts, output_file_name): f = open(output_file_name, "w") for value in counts: number = counts[value] y = str(value) + "," + str(number) f.write(y + "\n") f.close() contributions = contributor_counts("long182feed.txt") print_contributors(contributions) save_contributors(contributions, 'contributors.csv') Was one computer a linux/osx box and the other a PC? Could it be related to line endings? @NiallByrne yes I'm using both linux and windows, but the problem seems to occur in both (though I'm not sure) Works perfectly on Windows for me. Are you running the program in an IDE? 
Perhaps the interpreter has remembered a past variable that is affecting the result? Save your program and run it clean in a new IDE or from the command line. @MarkTolonen I just did the same and it worked on Windows! Thanks everyone Is the real issue the ordering of the lines in resulting csv? Be aware that the items in a Python dictionary can be in arbitrary order. OrderedDict is available Python 2.7 onwards. I realize that the items in the dictionary are in no particular order for Python; however, my intended output does not produce a dictionary, but rather only prints the keys and corresponding values within one. I repeat: Is the real issue that you get the lines in the wrong order? If it is, then the reason is that you are using the dict (that may have items in arbitrary order when you print out the values). You could exploit Python standard library namely csv module and collections.Counter: #!/usr/bin/env python import csv import fileinput from collections import Counter c = Counter() for line in fileinput.input(): # read stdin or file(s) provided at a command-line if line.startswith('from:'): # ignore lines that do not start with 'from:' name = line.partition(':')[2].strip() # extract name from the line c[name] += 1 # count number of occurrences # write csv file with open('contributors.csv', 'wb') as f: csv.writer(f, delimiter=',').writerows(c.most_common()) Example $ python write-csv.py input.txt Output Jackie Cohen,2 Lance Cummins,1 Chris Paul,1 As an alternative you could use regular expressions to parse input: #!/usr/bin/env python import csv import fileinput import re import sys from collections import Counter text = ''.join(fileinput.input()) # read input c = Counter(re.findall(r'^from:\s*(.*?)\s*$', text, re.MULTILINE)) # count names csv.writer(sys.stdout, delimiter=',').writerows(c.most_common()) # write csv
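One pitfall worth flagging in the question's contributor_counts(): str.strip("from: ") does not remove the literal prefix — it strips any of the characters f, r, o, m, ':' and space from both ends of the string, so a name like "Tom" gets eaten down to "T". Using partition() (as the Counter answer above does) avoids this:

```python
# Pitfall: str.strip("from: ") removes any of the characters
# f, r, o, m, ':', ' ' from *both ends* -- it is not a prefix
# remover, so names built from those letters get mangled.

def extract_name(line):
    """Safely pull the name out of a 'from: Name' line."""
    if not line.startswith("from:"):
        return None
    return line.partition(":")[2].strip()
```

The names in this particular question happen to survive strip(), which is why the bug went unnoticed; partition() (or, on Python 3.9+, str.removeprefix) is the robust form.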
Stop link action if onclick function returns false Here is a function that simply checks if a radio button has been selected on a form element. function validate_selected(element) { var radios = document.getElementsByName(element); var formValid = false; var i = 0; while (!formValid && i < radios.length) { if (radios[i].checked) formValid = true; i++; } if (!formValid) alert("Must select one before moving on!"); return formValid; } Here is my link that I want disabled if the function returns false. Right now the link shows the alert but after the alert the link sends the user forward. <a href="#m2" data-role="button" onclick="validate_selected('owner');"><p style="font-size:<?php echo $size1.'px'; ?>;">Next</p></a> I want the link to be disabled if the function returns false. You almost got it, you needed to add the 'return' in the onclick as well: onclick="return validate_selected('owner');" Also a small tip, add a break: while (!formValid && i < radios.length) { if (radios[i].checked){ formValid = true; break; // No need to continue any more, we've found a selected option } i++; } I've changed your onclick from inline to the proper way to handle events, and made a working demo. Just remove the onclick from the button and add this: document.getElementById('button').onclick =function(){ validate_selected('owner'); }; http://jsfiddle.net/WX6v7/1/ (added some radiobuttons in the example) That second part is slightly incorrect, his while loop states while !formValid. So as soon as that's set to true, the while will exit. Your approach is better practice though. Still not working with the return added! Changed the while to: while (i < radios.length) and added the break. I was thinking the same thing! @matty, you are right, I missed that part. The break isn't it though What browser are you using? I see that, I'm still having issues implementing it on my page.
I've copied my code over to fiddle and it works as well, but the whole page I'm working on doesn't seem to be working. Would the position of the JavaScript matter on the real page? Right now it is before the HTML elements. The JS goes first; you need to define the function before you can use it You need to actually return the value in the onclick that your function returned: onclick="return validate_selected('owner');">
Can the Linux ps command output be filtered by user AND command? I need the pid for a process given its owner and its command. I can filter a process per user with "ps -u xxx" and its command by "ps -C yyy", but when I try "ps -u xxx -C yyy", they are combined using OR logic. I need AND logic. How can I accomplish this? I find this "OR" logic in ps filters extremely unhelpful and counterintuitive (facepalm). Use pgrep? pgrep -U xxx yyy it returns just the pid (or pids, if more than one process matches). @Dave: I cannot reproduce. pgrep systemd shows 9 processes here, pgrep -U mg systemd shows 1, pgrep -a -U mg systemd shows 1. Ubuntu 23.10. I deleted my comment, I was using ps by accident not pgrep sorry for confusion. Use grep? ps -u xxx | grep yyy | grep -v grep You use comm to find PIDs common to both conditions: ps -u xxx | sort > /tmp/ps-uxxx ps -C yyy | sort > /tmp/ps-Cyyy comm -1 -2 /tmp/ps-uxxx /tmp/ps-Cyyy Using bash, you can use process substitution to avoid the need for temporary files: comm -1 -2 <(ps -u xxx | sort) <(ps -C yyy | sort) Works, thank you very much. ... But is there no easier way (without using pgrep, since this is not available in my context)? What's not easy about this? I know what comm does. But I use it only once a year. It's not intuitive for me. I guess everyone who uses it daily sees this differently. There are good reasons why pgrep exists. Unfortunately pgrep is not available in my context .... But it's solved now. The root of the problem is (from my point of view) that I need to support a very old operating system without pgrep.
By comparison, ps -C gnome-shell only returns matches for the gnome-shell process. I also hear @guettli's point about familiarity with commands. Your favorite sed or awk could be committed to muscle memory: ps -fC gnome-shell | awk '($1=="the_user")' ps -fC gnome-shell | sed -n '/^the_user\b/p' If you just want the PID like pgrep, then awk can help: ps -ouser=,pid= -C gnome-shell | awk '($1=="the_user"){print $2}' You don't strictly need to replace -f with -o in the ps, but this post is about diminishing returns. If you need precise results, then this is your answer. For the ones who need to achieve it on the fly (no script), here is a bit easier way. ps -fp $(pgrep -d, -U theUser theCommand)
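If neither pgrep nor comm is available, the same AND logic is easy to apply by hand: print user, pid and command columns and keep only the rows matching both fields. A small sketch of that filtering (the column layout assumes `ps -eo user,pid,comm` output, which is my assumption here; exact matching on the command column also sidesteps the substring problem mentioned above):

```python
# Sketch of AND-filtering over `ps -eo user,pid,comm` style output,
# parsing the text ourselves instead of relying on ps's OR semantics.

def pids_for(ps_output, user, command):
    """Return PIDs of rows matching BOTH the user and the exact command."""
    pids = []
    for line in ps_output.strip().splitlines()[1:]:   # skip the header row
        fields = line.split(None, 2)
        if len(fields) == 3 and fields[0] == user and fields[2] == command:
            pids.append(int(fields[1]))
    return pids
```

Because the match is exact, gnome-shell-calendar-service does not leak into the results for gnome-shell — the same precision the -C flag gives and pgrep's substring match does not.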
MariaDB install OQGRAPH as DB Engine in AWS RDS Is it possible to install OQGRAPH on an AWS RDS instance? If yes, how? Reading the installation guide: https://mariadb.com/kb/en/mariadb/installing-oqgraph/ installing OQGRAPH requires admin access to the server - which clearly RDS does not grant. Any ideas on how to install it anyway? Obviously if I just run INSTALL SONAME 'ha_oqgraph'; I get the following error message: Can't open shared library '/rdsdbbin/mariadb-10.0.24.R1/lib/plugin/ha_oqgraph.so' (errno: 2, cannot open shared object file: No such file or directory) Ask Amazon :-( https://aws.amazon.com/contact-us/ And post the answer back :-) Unfortunately it is currently not available. According to the AWS RDS documentation, the only plugin currently available for RDS MariaDB is the Audit Plugin, which was announced only a few months back. If you still want to use MariaDB on AWS but you are not forced to use RDS, you can run your own instance of MariaDB (using the available community MariaDB AMIs, or one of the AMIs provided by MariaDB, which are available for CentOS, Debian and Ubuntu). This way you have full access to your server, so you will not have any problem installing new plugins.
Netbeans platform disable an action On my application, I need to display a "Create project" button if I have the role "admin"; otherwise, for a simple user, the action should be disabled and the button should not be displayed at all. Here is my code: @ActionID(id = "com.demos.core.action.project.ProjectCreateAction", category = "Actions") @ActionRegistration(displayName = "com.demos.core.Bundle#action.project.projectcreate", iconBase = "com/demos/core/action/create_project.png") @ActionReference(path = "Actions/Ribbon/TaskPanes/group-project/set-project",position = 10) public final class ProjectCreateAction implements ActionListener { @Override public void actionPerformed(ActionEvent e) { ... } } In the actionPerformed() method, I'm able to get the user role, but it is too late; I don't want to display the action button at all. How can I hide this action button if my user is not allowed to use it? One possible way is to implement Presenter.Toolbar from the package org.openide.util.actions like this: // Some Annotations here public final class SomeAction extends AbstractAction implements Presenter.Toolbar { @Override public void actionPerformed(ActionEvent e) { // TODO Some action } @Override public Component getToolbarPresenter() { ImageIcon icon = ImageUtilities.loadImageIcon("path/to/image.png", true); JButton button = new JButton(icon); button.addActionListener(this); button.setToolTipText(NbBundle.getMessage(SomeAction.class, "TextID")); button.setVisible(SomeUtilityClass.isAdmin()); return button; } }
recursive entity w/out parent/child relationship

I'm trying to represent the following recursive relationship in an RDBMS. As a basic example, we have the following fields:

1 - computer science
2 - computer engineering
3 - electrical engineering
4 - mathematics

And I want to relate similar fields to each other. I could use a second table to relate fields to each other. Optimally, I could imagine it looking like this:

+----------+----------+
|  field1  |  field2  |
+----------+----------+
|    4     |    1     |   (math -> comp sci)
|    4     |    2     |   (math -> comp eng)
|    4     |    3     |   (math -> elect eng)
|    2     |    1     |   (comp eng -> comp sci)
|    2     |    3     |   (comp eng -> elect eng)
+----------+----------+

However, if the key were (field1, field2), I can see two potential issues:

Tuples could be duplicated, albeit unordered
It may complicate the queries unnecessarily if there is no importance to which field is in which column (as sgeddes points out, querying both columns and filtering out duplicates)

For example:

+----------+----------+
|  field1  |  field2  |
+----------+----------+
|    1     |    4     |   (comp sci -> math)
|    4     |    3     |   (math -> elect eng)
|    4     |    2     |   (math -> comp eng)
|    3     |    4     |   (elect eng -> math)
|    2     |    1     |   (comp eng -> comp sci)
|    3     |    2     |   (elect eng -> comp eng)
|    1     |    2     |   (comp sci -> comp eng)
+----------+----------+

How should I approach a non-hierarchical recursive relationship? Should I go ahead and intentionally duplicate each tuple, like in the second table? Or is there another method that I am overlooking?

I've seen that approach several times. Never been a huge fan honestly, as I've had to query both fields for the matches and filter out duplicated results. And what about more than 2 fields that are similar? It can get rather messy. Using your example above, another approach would be to introduce a SimilarField table. It would store SimilarId and FieldId (and some people would argue a third identity field, SimilarFieldId).
So if English and Literature were similar fields, then you could have:

SimilarId  FieldId
1          1   (English)
1          2   (Literature)

This approach allows you to have a 1-n relationship between your fields and their similar fields.

--EDIT--

In response to your comment, not sure how your example doesn't work:

SimilarId  FieldId
1          1   (English)
1          2   (Literature)
1          3   (Reading)
2          2   (Literature)
2          4   (History)
3          4   (History)
3          5   (Art History)

You can have as many grouped similar fields as needed. To get all the fields associated with Literature, for example, your query could look like this:

SELECT DISTINCT F.FieldId, F.FieldName
FROM Field F
JOIN SimilarField S ON F.FieldId = S.FieldId
WHERE S.SimilarId IN (
    SELECT SimilarId
    FROM SimilarField
    WHERE FieldId = 2
)

And here is a sample SQL Fiddle.

I may be confused (still), but I think I need many-to-many, and your example would only allow Literature to have one similar field. (I updated the example in my question)

@DavidKaczynski -- I'm pretty sure using this approach you can have an N-N relationship in that regard, but a 1-N relationship in similar groups. See edits above.

Thanks for the clarification. Please allow me to ask this then: if I want to get all of the fields similar to Reading, how would that query look? For example, select SimilarId, FieldId from SimilarField where SimilarId = 3 or FieldId = 3... but then how do I turn the resulting (SimilarId, FieldId) tuples into a set of individual Ids to query the original Field table?

@DavidKaczynski -- np, I have edited and included a sample fiddle. Mind you, this is just one approach :)

One common approach to the duplication problem is to make sure field1 always contains the lowest id in the tuple, combined with a UNIQUE key on both columns. Then your condition for SELECT can just be WHERE field1 = @id OR field2 = @id.
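The canonical-ordering idea from the last comment can be sketched end-to-end with SQLite. The table and column names below are illustrative stand-ins, not the original schema: a CHECK constraint forces each undirected pair to be stored once, and lookups check both columns.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE field (field_id INTEGER PRIMARY KEY, name TEXT);
    -- Store each undirected pair exactly once: field1 < field2 means
    -- (1, 4) and (4, 1) cannot both exist.
    CREATE TABLE similar (
        field1 INTEGER NOT NULL REFERENCES field(field_id),
        field2 INTEGER NOT NULL REFERENCES field(field_id),
        PRIMARY KEY (field1, field2),
        CHECK (field1 < field2)
    );
""")
conn.executemany("INSERT INTO field VALUES (?, ?)",
                 [(1, "comp sci"), (2, "comp eng"),
                  (3, "elect eng"), (4, "math")])
conn.executemany("INSERT INTO similar VALUES (?, ?)",
                 [(1, 4), (2, 4), (3, 4), (1, 2), (2, 3)])

def similar_to(field_id):
    # Because each pair is stored once, both columns must be checked.
    rows = conn.execute("""
        SELECT CASE WHEN field1 = ? THEN field2 ELSE field1 END
        FROM similar WHERE ? IN (field1, field2)
    """, (field_id, field_id)).fetchall()
    return sorted(r[0] for r in rows)

print(similar_to(4))  # fields similar to math -> [1, 2, 3]
```

The trade-off versus the SimilarField grouping table: this keeps pairwise similarity symmetric and duplicate-free, but it cannot express named groups of more than two fields.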
Evenly spaced elements while printing

I have this piece of code:

for t in tables:
    print ""
    my_table = t
    rows = my_table.findAll('tr')
    for tr in rows:
        cols = tr.findAll('td')
        i = 0
        for td in cols:
            text = str(td.text).strip()
            print "{}{}".format(text if text != "" else "IP", "|"),
            i = i + 1
            if i == 2:
                print ""
                i = 0
    pass

"tables" is a list of tables in HTML format. I am using BeautifulSoup to parse them. Currently, the output that I get is:

Interface in| port-channel8.53|
IP| <IP_ADDRESS>/<IP_ADDRESS>|
Router| bob|
Route| route: <IP_ADDRESS>/<IP_ADDRESS> gw <IP_ADDRESS>|
Interface out| Ethernet2/5.103|
IP| <IP_ADDRESS>/<IP_ADDRESS>|

What I want to get is:

Interface in | port-channel8.53 |
IP           | <IP_ADDRESS>/<IP_ADDRESS> |
Router       | bob |
Route        | route: <IP_ADDRESS>/<IP_ADDRESS> gw <IP_ADDRESS>|
Interface out| Ethernet2/5.103 |
IP           | <IP_ADDRESS>/<IP_ADDRESS> |
"Placeholder"| another ip in the same td as the one up |
"Placeholder"| another ip in the same td as the one up |

How can I get this output?
EDIT: Here is how 1 table is made:

<table>
    <tr>
        <td>Interface in</td>
        <td>Vlan800 (bob)</td>
    </tr>
    <tr>
        <td></td>
        <td><IP_ADDRESS>/<IP_ADDRESS><br></br></td>
    </tr>
    <tr>
        <td>Router</td>
        <td>bob2</td>
    </tr>
    <tr>
        <td>Route</td>
        <td>route: <IP_ADDRESS>/<IP_ADDRESS> gw <IP_ADDRESS></td>
    </tr>
    <tr>
        <td>Interface out</td>
        <td>Vlan1145 (bob3)</td>
    </tr>
    <tr>
        <td></td>
        <td><IP_ADDRESS>/<IP_ADDRESS><br></br></td>
    </tr>
</table>

(Yes, the empty cells are on the real page)

EDIT2: Problematic code:

<td>
    <IP_ADDRESS>/<IP_ADDRESS><br>
    <IP_ADDRESS>/<IP_ADDRESS><br>
    <IP_ADDRESS>/<IP_ADDRESS><br>
    <br><br><br></td>

EDIT 3: Sample code 2 (that creates problems with the solutions proposed)

<table class="nitrestable">
    <tr>
        <td>Interface in</td>
        <td>GigabitEthernet1/1.103 (*global)</td>
    </tr>
    <tr>
        <td></td>
        <td><IP_ADDRESS>/<IP_ADDRESS><br></br></td>
    </tr>
    <tr>
        <td>Router</td>
        <td>*grt</td>
    </tr>
    <tr>
        <td>Route</td>
        <td>route: <IP_ADDRESS>/<IP_ADDRESS> gw <IP_ADDRESS></td>
    </tr>
    <tr>
        <td>Interface out</td>
        <td>Vlan71 (*global)</td>
    </tr>
    <tr>
        <td></td>
        <td><IP_ADDRESS>/<IP_ADDRESS><br>
            <IP_ADDRESS>/<IP_ADDRESS><br>
            <IP_ADDRESS>/<IP_ADDRESS><br></br></br></br></td>
    </tr>
</table>

Some sample input data would be good.

Find some python string.format tutorial and focus on all the options; this will make your solution work soon. Just think about the problem: the vertical lines aren't aligned. Why not? How can one iteration of the loop know the length of the next, in order to align correctly? You need to figure out the maximum field lengths first, then print out the data.

Added sample of input data.

You can supply a format specifier, e.g. print "{0:14}|".format(text or "IP"), or pad the string you're passing to format with str.ljust: print "{}|".format(str.ljust(text or "IP", 14)), However, (as dilbert has just pointed out in the comments), you will need to do something to work out the size you require for each column.
Note that, as the empty string "" evaluates False in a boolean context, you can simplify your if condition, and as the pipe '|' never changes, you can put it in the template directly.

It helps to parse the rows/cols into a list and then evaluate them. That makes it simple to compute the maximum widths of the columns (w1, w2 in the code). As the others said, once that width has been determined, str.format() is what you want.

for t in tables:
    col = [[], []]
    my_table = t
    rows = my_table.findAll('tr')
    for tr in rows:
        cols = tr.findAll('td')
        i = 0
        for td in cols:
            text = str(td.text).strip()
            col[i].append(text if text else "IP")
            i = i + 1
            if i == 2:
                if '<br>' in text:
                    text = text.replace('</br>', '')  # ignore </br>
                    for t in text.split('<br>')[1:]:  # first element has already been processed
                        if t:  # only append if there is content
                            col[0].append(col[0][-1])  # duplicate the last entry of col[0]
                            col[1].append(t)
                i = 0
    w1 = max([len(x) for x in col[0]])
    w2 = max([len(x) for x in col[1]])
    for i in range(len(col[1])):
        s = '{: <{}}|{: <{}}|'.format(col[0][i], w1, col[1][i], w2)
        print(s)

To explain the str.format(): '{: <{}}'.format(x, y) creates a whitespace-padded, left-adjusted string of width y from the text x.

edit: added the additional parsing of multiple IPs/any fields where the second column is separated with <br>

Hi. Your code is great! The only problem is when I have multiple items in the right column (separated by <br>.. so space basically). See the question with a sample of the code that created problems.

Fixed that. Works with multiple entries in one field of the second column by replicating the first column accordingly.

Are you sure? For me it still doesn't work when the second column has 3 entries and the first only 1 :(

yeah that code had some minor bugs. Now it should work with your example.

I debugged your code. It never goes in "if '<br>' in text" because text is just that, text, not html... so it can't find the br tag in text.
Well, at that point it would become useful to see your complete code. I don't know what kind of html parser you use or how it works, so I don't have any idea what the text variable actually is at that point (or more precisely: how the parser translates the <br> tag and how I should detect it then). So either add the complete code or print the text variable. If it is replaced with a newline, just replace '<br>' with '\n' in my code and it works.

This is a 'simpler' script. Look up the enumerate keyword in Python.

import BeautifulSoup

raw_str = \
'''
<table>
    <tr> <td>Interface in</td> <td>Vlan800 (bob)</td> </tr>
    <tr> <td></td> <td><IP_ADDRESS>/<IP_ADDRESS><br></br></td> </tr>
    <tr> <td>Router</td> <td>bob2</td> </tr>
    <tr> <td>Route</td> <td>route: <IP_ADDRESS>/<IP_ADDRESS> gw <IP_ADDRESS></td> </tr>
    <tr> <td>Interface out</td> <td>Vlan1145 (bob3)</td> </tr>
    <tr> <td></td> <td><IP_ADDRESS>/<IP_ADDRESS><br></br></td> </tr>
</table>
'''

org_str = \
'''
Interface in| port-channel8.53|
IP| <IP_ADDRESS>/<IP_ADDRESS>|
Router| bob|
Route| route: <IP_ADDRESS>/<IP_ADDRESS> gw <IP_ADDRESS>|
Interface out| Ethernet2/5.103|
IP| <IP_ADDRESS>/<IP_ADDRESS>|
'''

print org_str

soup = BeautifulSoup.BeautifulSoup(raw_str)
tables = soup.findAll('table')
for cur_table in tables:
    print ""
    col_sizes = {}
    # Figure out the column sizes
    for tr in cur_table.findAll('tr'):
        tds = tr.findAll('td')
        cur_col_sizes = {col: max(len(td.text), col_sizes.get(col, 0))
                         for (col, td) in enumerate(tds)}
        col_sizes.update(cur_col_sizes)
    # Print the data, padded using the detected column sizes
    for tr in cur_table.findAll('tr'):
        tds = tr.findAll('td')
        line_strs = [("%%-%ds" % col_sizes[col]) % (td.text or "IP")
                     for (col, td) in enumerate(tds)]
        line_str = "| %s |" % " | ".join(line_strs)
        print line_str

Your code suffers from the same problem the code in the other answer suffers from. See "edit 2". When I have that in my table, the output comes out all unaligned...

I'm going off what you put in your question.
What's unaligned, the IPs?

Look at edit 3 please and test your code on it. It doesn't work :( Well, if the first column is 1 row of text but the second is 3 rows of text, and so not equal, I want to put a "placeholder" in the first column just to not break the rest...
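The two-pass idea the commenters describe (measure the columns first, then print) reduces to a small sketch. The row data below is illustrative, not the asker's real parsed output:

```python
def print_aligned(rows):
    """Return rows of (label, value) pairs as '|'-aligned lines."""
    # First pass: find the widest entry in each of the two columns.
    widths = [max(len(r[i]) for r in rows) for i in range(2)]
    lines = []
    for label, value in rows:
        # str.ljust pads each field to its column width before the pipe.
        lines.append("{}|{}|".format(label.ljust(widths[0]),
                                     value.ljust(widths[1])))
    return lines

rows = [
    ("Interface in", "port-channel8.53"),
    ("IP", "10.0.0.1/24"),
    ("Router", "bob"),
]
for line in print_aligned(rows):
    print(line)
```

Every line comes out the same length, which is exactly what a single-pass loop cannot guarantee: it would have to know the widest future entry before printing the first one.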
Should 2 Activities have separate ViewModels if the method usages don't overlap?

I have 1 Activity that only displays and deletes Notes from a RecyclerView. I have another Activity that only adds and updates new items. At the moment they both use the same ViewModel class:

public class NoteViewModel extends AndroidViewModel {
    private NoteRepository repository;
    private LiveData<List<Note>> allNotes;

    public NoteViewModel(@NonNull Application application) {
        super(application);
        repository = new NoteRepository(application);
        allNotes = repository.getAllNotes();
    }

    public void insert(Note note) {
        repository.insert(note);
    }

    public void update(Note note) {
        repository.update(note);
    }

    public void delete(Note note) {
        repository.delete(note);
    }

    public void deleteAllNotes() {
        repository.deleteAllNotes();
    }

    public LiveData<List<Note>> getAllNotes() {
        return allNotes;
    }
}

Should I instead create 2 separate ViewModels, one for each Activity?

Do note that while using a single ViewModel class, both activities will NOT share the same instance (since a ViewModel is tied to the Activity lifecycle). This may lead to confusion depending on the implementation of your ViewModel.

Can you post an image as well?

That depends on whether you are going for simpler maintainability or clearer separation of concerns. There is nothing wrong with having a single ViewModel for both activities, but consider that a ViewModel is supposed to model the view. Having some functions in the ViewModel that are not used by Activity A, and other functions not used by Activity B, does not really fit well with the idea that the ViewModel should be a model of the functionality of the View. My recommendation would be two separate ViewModels.

Thanks, this is the reasoning I've been looking for!
How to fix Laravel namespace selection issue, error: "'@if(Auth' is not bound"

I have this piece of code that I did not write, but need to troubleshoot. The user is supposed to have the choice to pick one, two, or all 3 selections, but no matter what we select, nothing is working. It is giving me an error message of "Namespace '@if(Auth' is not bound". Here is the code that seems to be the root of the problem:

<select id="dates-field2" class="multiselect-ui form-control" multiple="multiple" name="service_type[]" id="service_type">
    @foreach(get_all_service_types() as $type)
        <option @if(Auth::guard('provider')->user()->service->service_type->id == $type->id) selected="selected" @endif value="{{$type->id}}">{{$type->name}}</option>
    @endforeach
</select>

Use a leading backslash, like \Auth::guard('provider')->user()->service->service_type->id

Or you can use auth('provider')->user()->service->service_type_id
Trying to delete when not exists is not working. Multiple columns in primary key

I am currently trying to delete from Table A where a corresponding record is not being used in Table B. Table A has Section, SubSection, Code, Text as fields, where the first three are the Primary Key. Table B has ID, Section, SubSection, Code as fields, where all four are the Primary Key. There are more columns, but they are irrelevant to this question... just wanted to point that out before I get questioned on why all columns are part of the Primary Key for Table B. Pretty much, Table A is a repository of all possible data that can be assigned to an entity; Table B is where they are assigned. I want to delete all records from Table A that are not in use in Table B. I have tried the following with no success:

DELETE FROM Table A
WHERE NOT EXISTS (SELECT * from Table B
                  WHERE A.section = B.section
                    AND A.subsection = B.subsection
                    AND A.code = b.code)

If I do a Select instead of a Delete, I get the subset I am looking for, but when I do a Delete, I get an error saying that there is a syntax error at Table A. I would use a NOT IN statement, but with multiple columns being part of the Primary Key, I just don't see how that would work. Any help would be greatly appreciated.

Can you edit your question and include the select statement that works?

Does this answer your question? Where not exists in delete

In SQL Server, when using NOT EXISTS, you need to set an alias for the table to be connected and, in the DELETE statement, specify the table to delete rows from:

DELETE a
FROM Table_A a
WHERE NOT EXISTS (SELECT *
                  FROM Table_B b
                  WHERE a.section = b.section
                    AND a.subsection = b.subsection
                    AND a.code = b.code)

Please try:

DELETE FROM Table A
WHERE NOT EXISTS (SELECT 1 from Table B
                  WHERE A.section = B.section
                    AND A.subsection = B.subsection
                    AND A.code = b.code)

1 is just a placeholder; any constant/single non-null column will work.
I just upvoted this answer as this query would be quicker than RGPT's -- particularly if there can be NULL values in either table.

Try something like this:

delete from Table_A
where (section, subsection, code) not in
      (select section, subsection, code from Table_B)
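The NOT EXISTS approach from the accepted answer can be verified end-to-end with SQLite (SQLite's DELETE does not take the SQL Server-style alias, so the subquery below references the table names directly; the stripped-down schema is illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE table_a (section INT, subsection INT, code INT, text TEXT,
                          PRIMARY KEY (section, subsection, code));
    CREATE TABLE table_b (id INT, section INT, subsection INT, code INT,
                          PRIMARY KEY (id, section, subsection, code));
""")
conn.executemany("INSERT INTO table_a VALUES (?, ?, ?, ?)",
                 [(1, 1, 1, "used"), (1, 1, 2, "unused"), (2, 1, 1, "unused")])
conn.execute("INSERT INTO table_b VALUES (1, 1, 1, 1)")

# Delete every table_a row that has no matching assignment in table_b.
conn.execute("""
    DELETE FROM table_a
    WHERE NOT EXISTS (SELECT 1 FROM table_b
                      WHERE table_b.section    = table_a.section
                        AND table_b.subsection = table_a.subsection
                        AND table_b.code       = table_a.code)
""")
remaining = conn.execute(
    "SELECT section, subsection, code FROM table_a").fetchall()
print(remaining)  # only the assigned (1, 1, 1) row survives
```

The correlated subquery matches on all three key columns at once, which is exactly what a single-column NOT IN cannot express (and the tuple-valued NOT IN in the last comment is accepted by some engines but not all).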
Supremum Infimum argument: what did I do wrong? Suppose $f:[a,b]\to\mathbb R$ be a function such that $|f(x)-f(y)|<\epsilon_0$ for all $x,y\in[a,b]$. Then $M-m\le\epsilon_0$, where $M=\sup\{f(x):x\in[a,b]\}$ and $m=\inf\{f(x):x\in[a,b]\}$. I went this way: suppose $M-m>\epsilon_0$, hence $M>\epsilon_0+m$, hence there is an $x\in[a,b]$ such that$$M>f(x)>\epsilon_0+m,\tag1$$and similarly $M-\epsilon_0>m$ implies$$M-\epsilon_0>f(y)>m.\tag2$$Subtracting $(1)$ from $(2)$ says $-\epsilon_0>f(y)-f(x)>-\epsilon_0$, which is absurd. But on a second thought, I realized that my argument says that $M-m>\varepsilon$ is false for any $\varepsilon>0$, not just for $\epsilon_0$, implying $M-m=0$. What did I do wrong? I hope that you don't mind about the way I've edited your question. Also, you only use the fact that $M$ is an upper-bound, not a supremum. Same remark for $m$. To solve you exercise, remark that for all $x,y\in [a,b]$ $f(y)-\varepsilon_0 <f(x)<f(y)+\varepsilon _0$. The conclusion is straightforward. @JoséCarlosSantos, no, in fact, this seems awesome. Thank you An additional minor error in your proof (unrelated to the problem you noticed): You are guaranteed an $x$ such that $M \ge f(x) > \epsilon_0 + m$, but you are not guaranteed that $M > f(x)$. It could be that $f(x_0) = M$ and for every other $x \ne x_0, f(x) < m + \epsilon_0$. Your mistake is thinking you can just subtract inequalities. You can't do that. For example, $$1>0$$ is true, and $$2>0$$ is also true, but subtracting these two inequalities, I get $$-1>0$$ which is not true. The problem with subtracting inequalities is that when you subtract equations, you actually multiply one of them by $(-1)$ and then add them. With inequalities, you cannot do that because multiplication by a negative number reverses the inequality. 
For an actual proof, a sketch of it would be this:

Find some $x$ for which $f(x)$ is "near" $M$
Find some $y$ for which $f(y)$ is "near" $m$
Use the fact that $|f(x)-f(y)|<\epsilon_0$ and the fact that $$|f(x)-f(y)| = |f(x)-M+M-f(y)+m-m|\leq |f(x)-M| + |f(y)-m| + |M-m|$$ to reach a conclusion.

Naturally, the "near" in this sketch must, in the final proof, be a more rigorous statement. Good luck!

Last step is wrong. $a\leq b\leq c$ and $a'\leq b'\leq c'$ do not imply $a-a'\leq b-b'\leq c-c'$.

You may also proceed as follows: Note that $|x|$ is continuous. Choose sequences $(x_n), (y_n)$ with $\lim_{n \to \infty} f(x_n) = M$ and $\lim_{n \to \infty} f(y_n) = m$. It follows: $$|f(x_n) - f(y_n)| \stackrel{n \to \infty}{\longrightarrow} M-m$$ Now, as $|f(x_n) - f(y_n)| < \epsilon_0 \Rightarrow M-m \leq \epsilon_0$.
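One way to finish the three-step sketch avoids choosing specific "near" points altogether, by taking the supremum and infimum directly:

```latex
% For all x, y in [a,b]:  f(x) - f(y) \le |f(x) - f(y)| < \epsilon_0,
% so f(x) < f(y) + \epsilon_0.
% Fix y and take the supremum over x (the strict < weakens to \le):
% M \le f(y) + \epsilon_0 for every y, i.e. M - \epsilon_0 \le f(y).
% Thus M - \epsilon_0 is a lower bound for f on [a,b], hence
% M - \epsilon_0 \le m.
\begin{align*}
  f(x) &< f(y) + \epsilon_0 \quad \text{for all } x, y \in [a,b] \\
  \implies\quad M &\le f(y) + \epsilon_0 \quad \text{for all } y \in [a,b] \\
  \implies\quad M - \epsilon_0 &\le m
  \quad\Longleftrightarrow\quad M - m \le \epsilon_0.
\end{align*}
```

Note that this argument only uses the single given $\epsilon_0$, so it yields $M-m\le\epsilon_0$ and nothing stronger; there is no way to run it "for all $\varepsilon>0$" as the flawed subtraction argument appeared to do.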
How can we display SKU in URL structure?

How can we display the SKU in the URL structure? e.g. domain/Product name kws/industry/SKU-product type. I'm also using an SKU auto-generator to generate SKUs.

Use a Magento event observer on the catalog_product_save_before event; in this function you need to update the Magento product URL key field (url_key):

<global>
    <events>
        <catalog_product_save_before>
            <observers>
                <stockalert>
                    <type>singleton</type>
                    <class>yourmodel/observer</class>
                    <method>autoupdateurlKey</method>
                </stockalert>
            </observers>
        </catalog_product_save_before>
    </events>
</global>

And the observer code is:

public function autoupdateurlKey($observer)
{
    $product = $observer->getEvent()->getProduct();
    $oldUrlKey = $product->getOrigData('url_key');
    $Sku = $product->getSku();
    $TypeId = $product->getTypeId();
    // put your logic here
    $product->setData('url_key', $YourNewUrl); // setData, not getData, so the new URL key is actually stored
    return $this;
}

Also, would it be possible to search for a product by changing the product ID (i.e. SKU)? For example: 1. http://www.ebay.in/itm/Ahilya-Cream-and-Hot-pink-Designer-Ghaghra-104-PDS-Ghaghra-/251690046301?&_trksid=p2056016.m2518.l4276. 2. http://www.ebay.in/itm/Ahilya-Cream-and-Hot-pink-Designer-Ghaghra-104-PDS-Ghaghra-/251690046401?&_trksid=p2056016.m2518.l4276. Here, just changing the product ID from 401 to 301 took us to the product whose product ID ends with 301. How can we achieve this in Magento? Any help in achieving the URL structure and functionality of the eBay URL structure? Is there any API which can help in achieving the desired functionality of the eBay URL structure?

I am busy now. I will reply tomorrow.

Any help on this?
How to deal with costly building operations using MemoryCache?

On an ASP.NET MVC project we have several instances of data that require a good amount of resources and time to build. We want to cache them. MemoryCache provides a certain level of thread safety, but not enough to avoid running multiple instances of the building code in parallel. Here is an example:

var data = cache["key"];
if(data == null)
{
    data = buildDataUsingGoodAmountOfResources();
    cache["key"] = data;
}

As you can see, on a busy website hundreds of threads could go inside the if statement simultaneously until the data is built, making the building operation even slower and unnecessarily consuming server resources. There is an atomic AddOrGetExisting implementation in MemoryCache, but it incorrectly requires "value to set" instead of "code to retrieve the value to set", which I think renders the given method almost completely useless. We have been using our own ad-hoc scaffolding around MemoryCache to get it right; however, it requires explicit locks. It's cumbersome to use per-entry lock objects, and we usually get away by sharing lock objects, which is far from ideal. That made me think that the reasons to avoid such a convention could be intentional. So I have two questions:

Is it a better practice not to lock the building code? (That could have been proven more responsive for one, I wonder)

What's the right way to achieve per-entry locking for MemoryCache for such a lock? The strong urge to use the key string as the lock object is dismissed at ".NET locking 101".

We solved this issue by combining Lazy<T> with AddOrGetExisting to avoid the need for a lock object completely. Here is sample code (which uses infinite expiration):

public T GetFromCache<T>(string key, Func<T> valueFactory)
{
    var newValue = new Lazy<T>(valueFactory);
    // the line below returns the existing item, or adds the new value if it doesn't exist
    var value = (Lazy<T>)cache.AddOrGetExisting(key, newValue, MemoryCache.InfiniteExpiration);
    return (value ?? newValue).Value; // Lazy<T> handles the locking itself
}

That's not complete. There are gotchas like "exception caching", so you have to decide what you want to do in case your valueFactory throws an exception. One of the advantages, though, is the ability to cache null values too.

@ssg - I'm trying to use this and I'm having an issue. On the 1st pass when nothing is in the cache, I don't see how newValue is ever actually added to the cache once evaluated? Since value is null on the 1st pass because there is nothing in the cache, and using Lazy<T> means the expression is not evaluated until the following line: return (value ?? newValue).Value; How or when does newValue have a chance to be added to the cache? What I'm seeing is the cache is never populated on each call, because newValue is never added.

@atconway here is the excerpt from the AddOrGetExisting documentation: "Return Value: If a cache entry with the same key exists, the existing cache entry; otherwise, null."

@ssg - thanks for the response, but I have read that and understood the method. It's just as your snippet is written and implemented in code for me, the cache is never populated even with that method that should add it if it does not exist. A fresh call to newValue.Value is required, thus evaluating the valueFactory expression on each call. By the way, your code needed to be modified a tad for me. Your last line in my code had to be the following: return (value ?? newValue.Value);

if key doesn't exist, newValue.Value is called (because value will be null), populating it. Otherwise value.Value is called, which returns the existing value. What's wrong about it? return value will give a compile error because both have the type of Lazy<T>. You have to return .Value for both options.

@ssg - this comment cleared it up for me: "return value will give a compile error because both have the type of Lazy.
you have to return .Value for both options" For me the generic type T was not compiling, so I replaced them with the concrete type (in my case IEnumerable<MyCustomCollection> which is being stored in cache or retrieved from valueFactory expression. When doing this I did not have as Lazy<IEnumerable<MyCustomCollection>> on the end of the line that calls cache.AddOrGetExisting. Once I had all the types defined the same the original code worked (except generics) @ssg - to add to this, I wonder why using T would not work for me? I kept getting Cannot resolve symbol 'T'. I use generics in my repository often, but in comparison I couldn't figure out why it wouldn't work. Since this was such a specific implementation, replacing it with concrete types works ok for now. @ssg - Figured it out - had to modify method signature to the following and generic implementation worked: public T GetFromCache<T>(string key, Func<T> valueFactory) AddOrGetExisting should be cast to as Lazy<T>. Otherwise, nice one! You could specify LazyThreadSafetyMode.PublicationOnly in the Lazy<T> constructor to avoid caching exceptions (if desired). could you guys plz help me in the similar problem http://stackoverflow.com/questions/26581065/memory-cache-in-web-api Be careful with LazyThreadSafetyMode.PublicationOnly. With that mode multiple threads can begin executing the initialization method at the same time (with the winner's value being used). If the cost of that method is high (hence the caching), this is probably not desired. The only way to get initialization exclusivity is to use the default mode of ExecutionAndPublication. Preventing exception caching will need to be handled manually. For the conditional add requirement, I always use ConcurrentDictionary, which has an overloaded GetOrAdd method which accepts a delegate to fire if the object needs to be built. 
ConcurrentDictionary<string, object> _cache = new ConcurrentDictionary<string, object>();

public object GetOrAdd(string key)
{
    return _cache.GetOrAdd(key, (k) =>
    {
        // here 'k' is actually the same as 'key'
        return buildDataUsingGoodAmountOfResources();
    });
}

In reality I almost always use static concurrent dictionaries. I used to have 'normal' dictionaries protected by a ReaderWriterLockSlim instance, but as soon as I switched to .Net 4 (it's only available from that onwards) I started converting any of those that I came across. ConcurrentDictionary's performance is admirable, to say the least :)

Update

Naive implementation with expiration semantics based on age only. Also should ensure that individual items are only created once - as per @usr's suggestion.

Update again - as @usr has suggested - simply using a Lazy<T> would be a lot simpler - you can just forward the creation delegate to that when adding it to the concurrent dictionary. I've changed the code, as actually my dictionary of locks wouldn't have worked anyway. But I really should have thought of that myself (past midnight here in the UK though and I'm beat. Any sympathy? No of course not. Being a developer, I have enough caffeine coursing through my veins to wake the dead). I do recommend implementing the IRegisteredObject interface with this, though, and then registering it with the HostingEnvironment.RegisterObject method - doing that would provide a cleaner way to shut down the poller thread when the application pool shuts down/recycles.
public class ConcurrentCache : IDisposable
{
    private readonly ConcurrentDictionary<string, Tuple<DateTime?, Lazy<object>>> _cache =
        new ConcurrentDictionary<string, Tuple<DateTime?, Lazy<object>>>();
    private readonly Thread ExpireThread;

    public ConcurrentCache()
    {
        // start the poller from the constructor, so it can reference the instance method
        ExpireThread = new Thread(ExpireMonitor);
        ExpireThread.Start();
    }

    public void Dispose()
    {
        // yeah, nasty, but this is a 'naive' implementation :)
        ExpireThread.Abort();
    }

    public void ExpireMonitor()
    {
        while (true)
        {
            Thread.Sleep(1000);
            DateTime expireTime = DateTime.Now;
            var toExpire = _cache.Where(kvp => kvp.Value.Item1 != null &&
                                               kvp.Value.Item1.Value < expireTime)
                                 .Select(kvp => kvp.Key).ToArray();
            Tuple<DateTime?, Lazy<object>> removed;
            foreach (var key in toExpire)
            {
                _cache.TryRemove(key, out removed);
            }
        }
    }

    public object CacheOrAdd(string key, Func<string, object> factory, TimeSpan? expiry)
    {
        return _cache.GetOrAdd(key, (k) =>
        {
            // here 'k' is actually the same as 'key'
            return Tuple.Create(
                expiry.HasValue ? DateTime.Now + expiry.Value : (DateTime?)null,
                new Lazy<object>(() => factory(k)));
        }).Item2.Value;
    }
}
It has to be said, btw - that I have found in practise that even with an immense amount of load (save running lots of threads thrashing the dictionary), such overlaps occur infrequently. @usr - yes, that's a good idea; okay I'm bowing out now - I'm just swinging in the breeze @usr, couldn't resist, went for a cigarette and updated per your suggestion to use Lazy, from my phone. I'll be using that pattern myself a lot more from now on too I think! Wow, thank you for going great lengths to implement a prototype. But I basically want to avoid such work. As I said we already have some workaround in place and I'm trying to avoid that too. I don't think implementing a new cache manager from scratch is the best idea since there are too many factors to take into account. @ssg no worries. Fair dos :) @usr Regarding this , If there is another slot which uses the same factory(k) parameters so it will actually be executed twice....right? @RoyiNamir the "factory" is guarded by Lazy so that it only ever runs once. This CD<TKey, Lazy<TKValue>> pattern is standard and safe. @usr I think i'm not understood :-). looking here (notice same parameters for factory) if user asks for key A it will run , but if a user asks for key B , will the function run again ? @RoyiNamir yes it will run again. The dictionary cannot detect that this is really the same computation (nor is it supposed to). @usr Thanks , one last thing : in the original question he did cache["key"] = LongRunningOperation() . but in what time does the cache starts knowing and arming "key" ? is it at the start of LongRunningOperation or at the end of it ? @RoyiNamir at the moment you call Lazy.Value because that calls LongRunningOperation. @usr I meant this But i've already answered myself , it is not armed , but only AFTER the method finished - proof. Thank you. Taking the top answer into C# 7, here's my implementation that allows storage from any source type T to any return type TResult. 
Taking the top answer into C# 7, here's my implementation that allows storage from any source type T to any return type TResult.

/// <summary>
/// Creates a GetOrRefreshCache function with encapsulated MemoryCache.
/// </summary>
/// <typeparam name="T">The type of inbound objects to cache.</typeparam>
/// <typeparam name="TResult">How the objects will be serialized to cache and returned.</typeparam>
/// <param name="cacheName">The name of the cache.</param>
/// <param name="valueFactory">The factory for storing values.</param>
/// <param name="keyFactory">An optional factory to choose cache keys.</param>
/// <returns>A function to get or refresh from cache.</returns>
public static Func<T, TResult> GetOrRefreshCacheFactory<T, TResult>(string cacheName, Func<T, TResult> valueFactory, Func<T, string> keyFactory = null)
{
    var getKey = keyFactory ?? (obj => obj.GetHashCode().ToString());
    var cache = new MemoryCache(cacheName);

    // Thread-safe lazy cache
    TResult getOrRefreshCache(T obj)
    {
        var key = getKey(obj);
        var newValue = new Lazy<TResult>(() => valueFactory(obj));
        var value = (Lazy<TResult>)cache.AddOrGetExisting(key, newValue, ObjectCache.InfiniteAbsoluteExpiration);
        return (value ?? newValue).Value;
    }

    return getOrRefreshCache;
}

Usage

/// <summary>
/// Get a JSON object from cache or serialize it if it doesn't exist yet.
/// </summary>
private static readonly Func<object, string> GetJson =
    GetOrRefreshCacheFactory<object, string>("json-cache", JsonConvert.SerializeObject);

var json = GetJson(new { foo = "bar", yes = true });

Here is a simple solution as a MemoryCache extension method.

public static class MemoryCacheExtensions
{
    public static T LazyAddOrGetExitingItem<T>(this MemoryCache memoryCache, string key, Func<T> getItemFunc, DateTimeOffset absoluteExpiration)
    {
        var item = new Lazy<T>(
            () => getItemFunc(),
            LazyThreadSafetyMode.PublicationOnly // Do not cache lazy exceptions
        );
        var cachedValue = memoryCache.AddOrGetExisting(key, item, absoluteExpiration) as Lazy<T>;
        return (cachedValue != null) ? cachedValue.Value : item.Value;
    }
}

And a test for it, as a usage description.
[TestMethod] [TestCategory("MemoryCacheExtensionsTests"), TestCategory("UnitTests")] public void MemoryCacheExtensions_LazyAddOrGetExitingItem_Test() { const int expectedValue = 42; const int cacheRecordLifetimeInSeconds = 42; var key = "lazyMemoryCacheKey"; var absoluteExpiration = DateTimeOffset.Now.AddSeconds(cacheRecordLifetimeInSeconds); var lazyMemoryCache = MemoryCache.Default; #region Cache warm up var actualValue = lazyMemoryCache.LazyAddOrGetExitingItem(key, () => expectedValue, absoluteExpiration); Assert.AreEqual(expectedValue, actualValue); #endregion #region Get value from cache actualValue = lazyMemoryCache.LazyAddOrGetExitingItem(key, () => expectedValue, absoluteExpiration); Assert.AreEqual(expectedValue, actualValue); #endregion } Sedat's solution of combining Lazy with AddOrGetExisting is inspiring. I must point out that this solution has a performance issue, which seems very important for a caching solution. If you look at the code of AddOrGetExisting(), you will find that AddOrGetExisting() is not a lock-free method. Compared to the lock-free Get() method, it wastes one of the advantages of MemoryCache. I would recommend the following solution: use Get() first, and then use AddOrGetExisting() to avoid creating the object multiple times. public T GetFromCache<T>(string key, Func<T> valueFactory) { var value = (Lazy<T>)cache.Get(key); if (value != null) { return value.Value; } var newValue = new Lazy<T>(valueFactory); // the line below returns the existing item or adds the new value if it doesn't exist var oldValue = (Lazy<T>)cache.AddOrGetExisting(key, newValue, ObjectCache.InfiniteAbsoluteExpiration); return (oldValue ?? newValue).Value; // Lazy<T> handles the locking itself } Good point, Albert! I must also point out that creating a Lazy for every building operation can be GC sensitive too. Also, Lazy isn't async compatible. So we stopped using Lazy for this altogether, but it can be a good starting point.
@SedatKapanoglu, are you certain that Lazy is not compatible with async? What would be your arguments? Counterargument is posted here: https://devblogs.microsoft.com/pfxteam/asynclazyt/ To add to the performance issues - the Get method also has a lock but it locks only when the key expires. @Almis my argument is in the article you shared. As Sedat mentioned in comments, Lazy isn't async compatible. If the valueFactory is async, you can use the AsyncLazy created by others: https://github.com/StephenCleary/AsyncEx/wiki/AsyncLazy @AlbertMa There is a Microsoft supported AsyncLazy<T> as part of the very good Microsoft.VisualStudio.Threading library which is, despite its name, not Visual Studio exclusive but should be used in every async .NET application. Here is a design that follows what you seem to have in mind. The first lock only happens for a short time. The final call to data.Value also locks (underneath), but clients will only block if two of them are requesting the same item at the same time. public DataType GetData() { Lazy<DataType> data; lock(_privateLockingField) { data = cache["key"] as Lazy<DataType>; if(data == null) { data = new Lazy<DataType>(() => buildDataUsingGoodAmountOfResources()); cache["key"] = data; } } return data.Value; } We resolved that issue without using a lock, it's very close to your idea; I'll post it now.
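The pattern this whole thread circles around — a short lock (or atomic add-or-get) that only publishes a lazy wrapper, so the expensive factory runs at most once per key and outside the cache's lock — is language-agnostic. Here is a minimal Python sketch of that same idea (my own illustration, not code from the thread; a plain dict plus a lock stands in for MemoryCache's Get/AddOrGetExisting):

```python
import threading

class LazyValue:
    """Runs its factory at most once, even under concurrent access."""
    def __init__(self, factory):
        self._factory = factory
        self._lock = threading.Lock()
        self._computed = False
        self._value = None

    @property
    def value(self):
        if not self._computed:          # fast path once computed
            with self._lock:
                if not self._computed:  # double-checked inside the lock
                    self._value = self._factory()
                    self._computed = True
        return self._value

class LazyCache:
    def __init__(self):
        self._items = {}
        self._lock = threading.Lock()

    def get_or_add(self, key, factory):
        entry = self._items.get(key)     # cheap read first, like Get()
        if entry is None:
            with self._lock:             # short lock: publishes only the wrapper
                entry = self._items.setdefault(key, LazyValue(factory))
        return entry.value               # factory runs outside the cache lock

calls = []
cache = LazyCache()
v1 = cache.get_or_add("k", lambda: calls.append(1) or 42)
v2 = cache.get_or_add("k", lambda: calls.append(1) or 42)
assert v1 == v2 == 42 and len(calls) == 1  # factory ran exactly once
```

As in the C# discussion, losers of the race get the same wrapper and block only while the one winning factory runs for that key, not for the whole cache.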
common-pile/stackexchange_filtered
If two consecutive columns are equal and adjacent column has variations find those entries For Excel, I want to compare all consecutive cells in a column with the values in the adjacent column. If cells with the same value have a variation in the adjacent column, I want to copy the entire row to a new tab called "incorrect coding error" Problem snip: So in the sample image, you can see that there are some duplicate entries under the Hash Value header, and in the adjacent cell having header "Coding" there are coding values for each Hash Value. I have colour coded for ease of identification. For the yellow highlighted and blue highlighted hash values the "Coding" column values are the same, and for the Orange/Brown/Grey hash values, the codings are different (these cells are also highlighted in red for ease of identification). For each hash value where codings are different, I would like to have those rows populated in a new tab by whatever name (say "Coding Error"). Another related issue is that the headers "Hash" and "Coding" are not the same for all data, so I would really appreciate if I could manually input these headers for any variations - I mean an option to recognise variations within VBA. Further, as discussed, entire ROWS need to be copied to the new tab, as you can see there are other headers like control no and file name I would like to preserve. I have not tried VBA; I'm new to this OK, so what have you tried? What specific problem did you run into getting your idea to work? As I have just said to another poster - does this have to be in VBA? This is the sort of task I use Power Query for @CHill60, Power Query may work fine in case there are only 2 columns, I am dealing with multiple columns so a revisit to my original request which I have revised today is really appreciated. @AjitDohaliya Power Query works fine with multiple columns too. As I said, I do this sort of task on a regular basis and I use Power Query for it. Did any of these answers work for you?
If so please toggle checkmark next to that answer to mark issue solved and perhaps upvote it with arrow Bring your data into powerquery using data .. from table/range [x] headers Paste this code into home...advanced editor... let Source = Excel.CurrentWorkbook(){[Name="Table1"]}[Content], //preserve your own first row if different from this sample first row #"Removed Duplicates" = Table.Distinct(Source), #"Grouped Rows" = Table.Group(#"Removed Duplicates", {"Hash Value"}, {{"Count", each Table.RowCount(_), Int64.Type}, {"data", each _, type table }}), #"Filtered Rows" = Table.SelectRows(#"Grouped Rows", each [Count] <> 1), #"Removed Other Columns" = Table.SelectColumns(#"Filtered Rows",{"data"}), #"Expanded data" = Table.ExpandTableColumn(#"Removed Other Columns", "data", {"Hash Value", "Coding"}, {"Hash Value", "CodingError"}) in #"Expanded data" file.. close and load... back into excel as a Table. You can then right-click refresh the table as source change, or refresh it using VBA Option Explicit Sub Demo() Dim i As Long, j As Long Dim arrData, rngData As Range Dim arrRes, objDic, objDic2, sKey As String, sKey2 As String Set objDic = CreateObject("scripting.dictionary") Set objDic2 = CreateObject("scripting.dictionary") ' Load data arrData = ActiveSheet.Range("A1").CurrentRegion.Value ReDim arrRes(1 To UBound(arrData), 1 To 2) ' Load header arrRes(1, 1) = arrData(1, 1) arrRes(1, 2) = arrData(1, 2) j = 1 ' Loop through data For i = LBound(arrData) + 1 To UBound(arrData) sKey = arrData(i, 1) ' Check hash value If objDic.exists(sKey) Then If Not objDic(sKey) = arrData(i, 2) Then ' Check hash value + coding sKey2 = arrData(i, 1) & objDic(sKey) If Not objDic2.exists(sKey2) Then j = j + 1 arrRes(j, 1) = arrData(i, 1) arrRes(j, 2) = objDic(sKey) objDic2(sKey2) = "" End If sKey2 = arrData(i, 1) & arrData(i, 2) If Not objDic2.exists(sKey2) Then j = j + 1 arrRes(j, 1) = arrData(i, 1) arrRes(j, 2) = arrData(i, 2) objDic2(sKey2) = "" End If End If Else objDic(sKey) = 
arrData(i, 2) End If Next i ' Write output to sheet Range("D:E").Clear Range("D1").Resize(j, 2).Value = arrRes End Sub Microsoft documentation: Dictionary object This is awesome and it almost worked, but my apologies for not being clear. I would like to have all the entries of A21D131 in the above example (so it will have the first two rows of Non Responsive (even though duplicative within the Coding column), as well as the 3rd and 4th entries). Also, if we could populate the entire rows having these conflicts to a new tab - that would be really appreciated. Further, just realised (and apologies for piecemeal updates), the actual data contains other columns as well with data, so instead of copying the Hash and Coding values, I need to copy the rows to the new tab. The naming conventions of Hash and Coding are also not the same, so if there is added functionality to search for specific headers such as "Hash", that would be greatly appreciated. have all the entries of A21D131 the output will be exactly the same as the source table. For the 2nd comment, it is doable. I would suggest creating a post/question and sharing the data layout and expected result. Many thanks for your guidance, I have edited my original post ready for your perusal. Could you please revisit the original post, now edited, and share your guidance? If the code solves your original question, please accept it as the answer and open a new question. We would like to help you on it. You can roll back your post editing history. What is the best way to ask follow up questions?
My apologies, I have posted a new question https://stackoverflow.com/q/77610453/18349732 you could use nested dictionaries be sure to have the Microsoft Scripting Runtime library referenced in your VBA IDE (Tools -> References, select and tick that entry and click OK) Option Explicit Sub test() Dim dict As Scripting.Dictionary Set dict = New Scripting.Dictionary Dim cel As Range ' read and store hashes and their corresponding coding values For Each cel In Range("A2", Cells(Rows.Count, 1).End(xlUp)) If Not dict.Exists(cel.Value2) Then dict.Add cel.Value2, New Scripting.Dictionary End If Dim subDict As New Scripting.Dictionary Set subDict = dict(cel.Value2) If Not subDict.Exists(cel.Offset(, 1).Value2) Then subDict.Add cel.Offset(, 1).Value2, 1 End If Next Dim iKey As Long ' remove hashes with only one entry from the dictionary For iKey = dict.Count - 1 To 0 Step -1 Dim key As Variant key = dict.Keys(iKey) If dict(key).Count = 1 Then dict.Remove key End If Next If dict.Count > 0 Then ' if any hashes with more than one coding value survives With Range("D1") ' write them .Resize(, 2).Value = .Offset(, -3).Resize(, 2).Value For Each key In dict With Cells(Rows.Count, .Column).End(xlUp) .Offset(1, 0).Resize(dict(key).Count).Value = key .Offset(1, 1).Resize(dict(key).Count).Value = Application.Transpose(dict(key).Keys) End With Next End With End If End Sub Could you please revisit my original post, now revised and explained better for the final results. Thanks in advance..
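For what it's worth, the core logic both answers implement — group rows by hash value and keep only hashes that map to more than one coding — is easy to state outside VBA. A small Python sketch of that grouping step (my own illustration; the hash and coding values are invented for the example):

```python
# Hypothetical (hash value, coding) rows, mirroring the question's two columns
rows = [
    ("A21D131", "Non Responsive"),
    ("A21D131", "Non Responsive"),
    ("A21D131", "Responsive"),
    ("B77F002", "Responsive"),
    ("B77F002", "Responsive"),
    ("C90X455", "Privileged"),
]

# Collect the set of distinct codings seen for each hash
codings_by_hash = {}
for hash_value, coding in rows:
    codings_by_hash.setdefault(hash_value, set()).add(coding)

# A hash whose coding varies is a "coding error"; keep its full rows,
# as the asker requested for the new tab
conflicting = {h for h, codes in codings_by_hash.items() if len(codes) > 1}
error_rows = [r for r in rows if r[0] in conflicting]

assert conflicting == {"A21D131"}
assert len(error_rows) == 3   # all entries of the conflicting hash, duplicates included
```

Both the Power Query `Table.Group` answer and the dictionary-based VBA answer are implementations of exactly this grouping-and-filtering step.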
How would I implement random.sample so that it would choose 2 out of 3 variables without it repeating? I am making a mini quiz that uses classes and OOP. I want the code to randomly select 2 out of the 3 questions and I don't know whether I should use random.sample or not or where to even use it. import random class Question: def __init__(self, prompt, answer): self.prompt = prompt self.answer = answer question_prompts = ['What is the capital of England? (a) London (b) Liverpool (c) Glasgow. Answer(Type a, b or c): ', 'What is the capital of France? (a) Callais (b) Paris (c) Bologne. Answer(Type a, b or c): ', 'What is the capital of Netherlands? (a) Amsterdam (b) Tilburg (c) Eindhoven. Answer(Type a, b or c): ',] questions = [Question(question_prompts[0], 'a'), Question(question_prompts[1], 'b'), Question(question_prompts[2], 'a'),] def run(questions): score = 0 for question in questions: answer = input(question.prompt) if answer == question.answer: score += 1 print('correct!') I want it so that the code chooses 2 out of the 3 questions randomly without it repeating. call run(random.sample(questions, 2)) Do you want to implement your own or use something built-in? Use the random.sample() function. i.e. for question in random.sample(questions, 2): You could use numpy.random.choice: >>> import numpy as np >>> set_of_options = [1, 2, 3, 4, 5] >>> np.random.choice(a=set_of_options, size=2, replace=False) array([2, 3]) >>> np.random.choice(a=set_of_options, size=2, replace=False) array([5, 1]) >>> np.random.choice(a=set_of_options, size=2, replace=False) array([1, 2])
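To make the accepted suggestion concrete, here is a small runnable sketch (my own condensed version of the question's classes, with shortened prompts; the key point is that random.sample returns k distinct elements, so a question can never be picked twice):

```python
import random

class Question:
    def __init__(self, prompt, answer):
        self.prompt = prompt
        self.answer = answer

questions = [
    Question("Capital of England? ", "a"),
    Question("Capital of France? ", "b"),
    Question("Capital of the Netherlands? ", "a"),
]

# random.sample draws k distinct elements from the population,
# so the same Question object is never chosen twice
picked = random.sample(questions, 2)

assert len(picked) == 2
assert picked[0] is not picked[1]           # no repeats
assert all(q in questions for q in picked)  # both come from the pool
```

Passing `random.sample(questions, 2)` straight into `run` (as the top comment suggests) then quizzes exactly two distinct questions per game.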
How to map millisecond precision timestamp field in MySql with JPA I need to store millisecond-precision timestamps in the database. I tried a lot of things from Google searches, but every attempt failed. All of them truncated the milliseconds and returned only whole seconds. What's wrong with my tries? Here I list what I have tried. // Environment JPA 2.1 Hibernate 5.2.10 MySql 5.5.28 // My Tries... // Failed to get milliseconds @Temporal(TemporalType.TIMESTAMP) private java.util.Date updatedAt; // Failed to get milliseconds private java.sql.Timestamp updatedAt; // Failed to get milliseconds @Column(columnDefinition = "TIMESTAMP") private java.sql.Timestamp updatedAt; // Failed to get milliseconds @Column(columnDefinition = "TIMESTAMP (3)") private java.sql.Timestamp updatedAt; // Failed to get milliseconds @Column(columnDefinition = "TIMESTAMP (6)") private java.sql.Timestamp updatedAt; // Failed to get milliseconds // with converter https://github.com/leapfrogtechnology/vyaguta-commons/blob/master/src/main/java/com/lftechnology/vyaguta/commons/jpautil/LocalDateTimeAttributeConverter.java private LocalDateTime updatedAt; // Failed to get milliseconds @Type(type="org.jadira.usertype.dateandtime.joda.PersistentDateTime") private org.joda.time.DateTime updatedAt; Problem solved. It was a MySQL version issue: fractional-seconds precision for TIMESTAMP and DATETIME columns was only added in MySQL 5.6.4, so 5.5 always truncates to whole seconds no matter how the entity field is mapped. After upgrading to the latest 5.7 version, everything works fine.
Cloud Effect In Blender to Unity Hi I'm trying to create 3D clouds in Blender to be used in Unity. Imagine the bubbly-like fx of the flying nimbus in Dragon Ball. Not really sure what the best approach is. I've tried using displacement modifiers and get decent results in Blender. But exporting to Unity is a no go. Are shape keys my only option? If so, what's a good way to achieve what I'm after using shape keys? Hello, could you add an example of what these clouds look like ? You could create a cube, subdivide and smooth it: Give it several shapekeys: In Object mode, select a shapekey, push its Value up to 1 and sculpt: Sculpt the other shapekeys, animate them: Or you could create your mesh with metaballs that you convert to mesh: Then give it a Displace modifier with a Cloud texture and an empty as Object: In the Displace modifier, save several versions As Shapekeys: Now you can animate the shapekeys, also sometimes do some Shrink/Fatten to inflate the cloud a bit: This is perfect thank you!
working of Softlockup in the kernel Softlockups are defined to occur if the watchdog task (real-time) doesn't update the time stamp at the hrtimer callback. Now, if a bad thread is stuck in the kernel (looping), why won't the scheduler be able to schedule the watchdog task? I mean, the faulting thread will continue to loop, but the scheduler would periodically move it out of context. So I don't understand why the WD task will not be able to update the time stamp? what is your definition of "bad thread"? the kernel scheduler will not be able to interrupt the thread (based on the hardware timer) if the thread has disabled timer interrupts, so potentially the thread will use the CPU forever, without ever giving other threads or tasks a chance to execute. @PeterTeoh: What you are describing is a case of a hardlockup; for example, if I do a spin_lock_irqsave and never release it, the kernel will eventually lock up and dump a trace. However, I'm asking about a specific case of a soft lockup where threads are still running but the watchdog task cannot run because of reasons other than interrupts being disabled. By "bad thread" I meant any thread that is causing a lockup/panic in the kernel. linux kernel's WD tasks are meant for usermode processes to use. so if a process (in user mode) is not able to feed the dog in time, the WD tasks will reboot the kernel, as it assumes the usermode process has hung. Correct? I do that all the time - putting a gdb attach to a process that is monitoring the /dev/watchdog device driver, and after some time, the whole OS will reboot. that is part of the design. of course NOT rebooting the whole OS is possible, but a watchdog is meant to guarantee responsiveness within a certain time limit. I really cannot understand your question. sorry.
Does God need food and water to stay alive? Peace be with you! Does God need food and water to stay alive? Did Jesus Christ need food and water to stay alive? Was Jesus Christ the God when he needed food and water to stay alive? Does not make any effort to meet minimum standards of an SE site question. @Thom Mouse over the down vote button. The text reads This question does not show any research effort, it is unclear, or it is not useful Two strikes out of three. Please also see How To Ask a Good Question. It fails that criterion rather badly. You may want to review the whole help center, there is quite a bit of very useful guidance there. (Or at least I thought so when I first went through it item by item). @Thom OK, we disagree, peace be with you. I added some clarification points to my question so it is more clear. If it's a multiple-choice yes or no test the Christian answers are 1 No. 2 Yes. 3 Yes. Jesus was and is both God and man, two natures in one person. He is not a mixture of the two natures, that would be impossible; he has all of both natures side by side, unmixed, in one person. His human nature needed food but not his divine nature; without his human nature he would die, so yes he needed food; and yes he was and is the God nevertheless. No. God exists necessarily; it is his essence to exist. Therefore, He cannot not exist. We see this from scripture also, Exodus 3:14: God said to Moses, “I AM WHO I AM. This is what you are to say to the Israelites: ‘I AM has sent me to you. From this it follows that God does not need anything that is extrinsic to Him to sustain Him. Also note that God is not material, but pure spirit. This is confirmed by the scripture in John 4:24: God is Spirit, and His worshipers must worship Him in spirit and in truth. Edit. About Jesus specifically, and whether his human nature requires him to eat and drink: the answer is no. You can find an explanation in Aquinas's Summa Theologiae here.
While I agree with you there are those who say Jesus is God and has his human body in heaven. Would that human body need to eat and drink? https://christianity.stackexchange.com/q/8368/23657 for example @Thom Does Jesus Christ need food and drink to stay alive? @truthcures That is a separate question, which if you want to ask you may ask, but I'd caution against it until you do a little research and learn what the Last Supper refers to. It happened about 2000 years ago. nice. The cited material from St. Thomas Aquinas is about resurrected people, but the OP might have meant to ask about Jesus's human needs during His earthly life before His resurrection. It's an odd question. Surely you know that God needs nothing. He lived for an eternity before he made the Universe. For an eternity, before God made the Universe, with its food and water, there was literally nothing except God. He didn't need anything then, and he doesn't need anything now outside of himself. Perhaps you are wondering if Christianity is even slightly in agreement with common knowledge and common sense? The Bible makes clear God made all things out of nothing: "In the beginning God created the heavens and the earth." Genesis 1:1 So said the Apostle Paul in the New Testament of the Bible:- "Then Paul stood in the middle of Mars' hill, and said, Men of Athens, I perceive that you are very religious in every way. For while I was passing through and examining the objects of your worship, I also found an altar with this inscription, "TO THE UNKNOWN GOD". Whom therefore you worship in ignorance, him declare I unto you.
God that made the world and all things in it, seeing that he is Lord of heaven and earth, dwells not in temples made with hands; nor is he served by human hands, as though he needed anything, since he himself gives to all life and breath and all things; (Acts 17:25) and he made from one man every nation of mankind to live on all the face of the earth, having determined their appointed times and the boundaries of their habitation, that they would seek after God, if perhaps they might grope for him and find him, though he is not far from each one of us; for in him we live and move and are; as even some of your own poets have said "For we also are his children". Being then the children of God, we ought not to think that the Divine Nature is like gold or silver or stone, an image formed by the art and thought of man. The times of this ignorance God has overlooked: but now he commands all men every where to repent because he has appointed a day, in the which he will judge the world in righteousness by the man he has ordained, and he has given assurance to all men, in that he has raised him from the dead. (Acts 17:22-31) In short God is the progenitor of all things; nothing exists outside of Him, therefore He is all sufficient and in need of nothing. Especially not food or water. The question ought to be qualified because Jesus, the second person in the echâd Godhead, has existed as God in four different forms. And whilst incarnate, because of the limitations of the body given Him, He got hungry. "And after fasting forty days and forty nights, he was hungry." (Matthew 4:2) But generally people think of God the Father or the Holy Spirit in spirit form. The Bible says "God is spirit, and those who worship him must worship in spirit and truth." (John 4:24) Jesus makes a distinction between a spirit and a body (whether mortal or glorified) "See my hands and my feet, that it is I myself. Touch me, and see.
For a spirit does not have flesh and bones as you see that I have." (Luke 24:39) The Bible also says this about the Kingdom "For the kingdom of God is not a matter of eating and drinking but of righteousness and peace and joy in the Holy Spirit." (Romans 14:17) Sustenance is not therefore necessary, though heaven does contain food and water "Yet he commanded the skies above and opened the doors of heaven, and he rained down on them manna to eat and gave them the grain of heaven. Man ate of the bread of the angels; he sent them food in abundance." (Psalms 78:23-25) And "Then the angel showed me the river of the water of life, bright as crystal, flowing from the throne of God and of the Lamb" (Revelation 22:1) Not forgetting the marriage supper of the Lamb, which is a feast and banquet. But does God need food and water? No; quite simply, if He did then He would not be all sufficient, and yet Scripture says that He is and that all things derive their existence from Him, by Him and for Him. "For from him and through him and to him are all things. To him be glory forever. Amen." (Romans 11:36) God being life doesn't need anything to gain or maintain life. Food and drink need God to exist but God doesn't need food and drink. God is complete in Himself, the self-existent one, the I AM, the Alpha and Omega. God owns the world; He made it for His pleasure. My God is awesome, glorious in all His ways. Whatever the intention behind the question is, in my experience it is usually posed by Muslims. This question demonstrates the appalling lack of proper understanding of Christian faith about Jesus Christ, His deity and humanity, as far as the Biblical teachings are concerned. Here I attempt to answer each sub-question separately... Does God need food and water to stay alive? No! As a matter of fact, the question is an invalid question. Because by definition God is the necessary and self-existent Being, and He is not a biological being that needs food and water.
Furthermore, God doesn't depend on anything for His 'life'; rather it is He on whom everything else depends. Hence the question is invalid and thus nonsensical! Did Jesus Christ need food and water to stay alive? Yes! When Jesus Christ lived on this earth as a human being, yes He needed food and water to stay alive like any other human being. Perhaps this is the only valid and meaningful question out of the three questions posed by the questioner. Was Jesus Christ the God when he needed food and water to stay alive? Again, this is also not a valid/meaningful question. If one wants to know who Jesus Christ was when He was on this earth, one would do well to pay attention to the teaching of God's true word, the Bible, in this matter. As per the Bible, Jesus existed in the form of God or God's Word prior to His birth as a human being in this world [John 1:1-14; Philippians 2:5-8]. However, Jesus took on human form and nature and entered into this world [Hebrews 10:5]. In other words, although He was in the form of God and was God, He voluntarily put aside His divine prerogatives and restricted Himself to human limitations. Jesus did not come into this world as God, but man! Thus He was a perfect human being while His divinity was hidden behind his human body and nature. During His life on this earth, Jesus Christ was God in human form and nature. He had both divine and human natures, yet they were not intermingled. Which is why He needed food, water, and oxygen to be alive, i.e. biologically, just like any other human being. But as God he did not need any of them. The interface between the Creator and creation in Jesus Christ is a mystery that belongs to God [Col. 2:2], which is possible only for the True God!
Is it a good idea to link your LinkedIn profile in the CV section of your dissertation? I've passed the disputation and am now preparing the document for the printing press and final submission. We're required to have a CV in the back matter of the dissertation, and I was wondering if it is a good idea to link to my LinkedIn profile in addition (perhaps with a QR code) because it'll be far more up to date than the CV in my dissertation. Do you think this is a good idea? Are there any things I should consider when doing this? which will only be printed once — What is this "printed" of which you speak, earthling? The dissertation will only be printed once. Updated for clarity. ;) Your question indicates that you want to use a link to your LinkedIn profile in your CV because "it'll be far more up to date than the CV in my dissertation." Are you suggesting that (a) you are going to include a CV that is not up to date, or that (b) your LinkedIn profile will be updated over time and that you won't be able to go back and edit your dissertation? If you mean (a), then no, you need to keep your CV up to date and a link to an online profile will not work. Your academic CV should always be kept as up to date as possible and it should absolutely be updated before you submit it in an application or include it in a dissertation. If you are doing your CV correctly, it will include different information than a LinkedIn profile and there is a strong expectation that every academic will have one. If you mean (b) and are just worried that an archival copy of your CV will be out of date, sure, add a link to LinkedIn or similar. My CV links prominently to my academic homepage at a permanent (i.e., non-university) URL, which is kept up to date, and which includes a link to the latest version of my CV at all times. I include a date in the footer of my CV although folks will have a date in the dissertation.
Personally, I think this is better than relying on a for-profit company and its URLs for posterity. In terms of the QR code, I'd skip it. These days, almost everybody who reads the dissertation will read a soft copy. A hyperlink will be much more useful. I suspect that a QR code will just end up making the document look dated at some point in the rather near future. And a LinkedIn profile won't look dated in the near future? Just 10 years ago someone might have asked the same question about adding a link to their GeoCities page... I agree completely. That's why I recommended against using it in the sentence before I recommended against QR codes. I could have done it more strongly. I decided to add a link to my personal website, which at the moment forwards to my LinkedIn profile. This is a good idea, so long as it is not instead of a proper CV. Just remember that links go out of date and QR codes will become outdated technology. So when LinkedIn goes out of business, the link and QR codes will just be remnants of a time gone by.
Add Bootstrap alert to php login form I have a login form that displays an error message if the username and password are incorrect, but I would rather it displayed a Bootstrap alert box. I have tried to change the code but get errors. This is the working code. <?php require('db.php'); session_start(); if (isset($_POST['username'])){ $username = stripslashes($_REQUEST['username']); $username = mysqli_real_escape_string($con,$username); $password = stripslashes($_REQUEST['password']); $password = mysqli_real_escape_string($con,$password); $query = "SELECT * FROM `users` WHERE username='$username' and password='".md5($password)."'"; $result = mysqli_query($con,$query) or die(mysqli_error($con)); $rows = mysqli_num_rows($result); if($rows==1){ $_SESSION['username'] = $username; header("Location: login1.php"); }else{ echo '<center><div class="alert alert-warning">Login Error!!</div></center>'; } }else{ ?> <h1>Log In</h1> <form action="" method="post" name="login"> <input type="text" name="username" placeholder="Username" required /><br><br> <input type="password" name="password" placeholder="Password" required /><br><br> <input name="submit" type="submit" value="Login" /> <div class="alert alert-danger" id="failure" style="margin-top: 10px; display: none"> <strong>Does Not Match!</strong> Invalid Username Or Password </div> </form> <br> <p>Not registered yet? <a href='registration.php'>Register Here</a></p> <?php } ?> </body> Never store passwords in clear text or using MD5/SHA1! Only store password hashes created using PHP's password_hash(), which you can then verify using password_verify(). Take a look at this post: How to use password_hash and learn more about bcrypt & password hashing in PHP You close PHP immediately after the else.
You can solve it like this: else{ echo ' <h1>Log In</h1> <form action="" method="post" name="login"> <input type="text" name="username" placeholder="Username" required /><br><br> <input type="password" name="password" placeholder="Password" required /><br><br> <input name="submit" type="submit" value="Login" /> <div class="alert alert-danger" id="failure" style="margin-top: 10px; display: none"> <strong>Does Not Match!</strong> Invalid Username Or Password </div> </form> <br> <p>Not registered yet? <a href='registration.php'>Register Here</a></p> '; ?> that does not make the bootstrap alert pop up if login failed because you have the style "display: none" in the div. delete it and if the login fails, it will appear
Apache Commons Daemon - Cannot launch service I can install, uninstall and run my service as a console application using the Apache Commons Daemon tool. The problem is when I try to run my application as a service, the service status doesn't switch from stopped to running. Script used to install the service: prunsrv.exe install ServiceName --DisplayName="Some Display Name" --Classpath %cd%\daemon.jar --Install=prunsrv.exe --Jvm=auto --StartMode=jvm --StopMode=jvm --StartClass=Main --StartParams start --StopClass=Main --StopParams stop I'm running the service in windows 8 - 64bits. Any ideas of what could be the problem? EDIT: When running the application in the services.msc and I get the following message: Windows could not start the [Service Name] service on Local Computer Error 2: The system cannot find the file specified. EDIT2: Tryed in Windows 7-64bits. Same problem. "Any ideas of what could be the problem?" - "windows 8 - 64bits" :D (just kidding) But seriously ... maybe this answer helps: http://stackoverflow.com/a/22093622/982149 Maybe that's really the problem, I will try in a Windows 7 machine just in case. Renamed the files. Same problem. Ok, have you tried on Win7, yet? Maybe my not-so-serious comment works out to be more relevant than I thought myself? The problem was that windows could not find the prunsrv.exe file. In the install folder it must be specified the full path to the executable.
Use the binomial series of $(1-2x)^8$ to evaluate $0.98^8$ to 7 decimal places. Use the binomial series of $(1-2x)^8$ to evaluate $0.98^8$ to 7 decimal places. I tried using the first five terms of the series: $1, 8, 28, 56$ and $70$, to get $$1+8(2(-0.01))+28(2(-0.01))^2+56(2(-0.01))^3+70(2(-0.01))^4$$ $$1-0.16+0.0112-0.000448+0.0000112$$ $$=0.8511664$$ ...which is not the right answer. Where am I going wrong? $1-0.16+0.0112-0.000448+0.0000112 = 0.8507632$ (So your mistake is at the last equality). But, also note that $.98^8 = 0.85076302258$. Depending on what you mean by $7$ decimal places, you may need to add another term(s).
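Carrying the corrected arithmetic one more term: the sixth term is $\binom{8}{5}(-0.02)^5 = 56\cdot(-3.2\times 10^{-9}) = -0.0000001792$, and every later term is below $10^{-8}$, so it cannot affect the seventh decimal place:

$$0.98^8 \approx 1-0.16+0.0112-0.000448+0.0000112-0.0000001792 = 0.8507630208 \approx 0.8507630,$$

which agrees with the exact value $0.98^8 = 0.85076302\ldots$ to seven decimal places.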
How can I convert this do-while loop into another sort of loop, a while loop? public void humanPlay() { if (player1.equalsIgnoreCase("human")) System.out.println("It is player 1's turn."); else System.out.println("It is player 2's turn."); System.out.println("Player 1 score: " + player1Score); System.out.print("Player 2 score: " + player2Score); String eitherOr; do { eitherOr= input.nextLine(); humanRoll(); } while (eitherOr.isEmpty()); if (!eitherOr.isEmpty()) humanHold(); } This is the whole method. The only thing I am trying to fix is this: String eitherOr; do { eitherOr= input.nextLine(); humanRoll(); } while (eitherOr.isEmpty()); It has to accept input multiple times, as each time input is needed to determine what happens, which is why I like the do-while loop; but since it executes at least once per turn, I get one extra roll than needed. I have tried to do it this way, and various variations of this way: String eitherOr = input.nextLine(); while(eitherOr.isEmpty()); humanRoll(); This does not work because it does not ask for the input over again. If I try to put input.nextLine(); into the while loop, it says "eitherOr" is not initialized, and even if I initialize it when I enter input, the command line stays blank, so it does nothing with my input. You've got an extraneous semi-colon: while(eitherOr.isEmpty()); humanRoll(); should be: while(eitherOr.isEmpty()) humanRoll(); Essentially your version is saying to do nothing while eitherOr.isEmpty() is true and so it never gets to call humanRoll. Well then, it seems like the last hour has been wasted on a semicolon. Thank you, I have not used a while-loop in a while. Due to this misfortune I foresee myself re-reading a few chapters. Probably should have been the first thing I checked, to be honest. @LanceySnr The code still works like it did while it was a do-while loop; even if there is input, it still creates another roll. Have you used a debugger or similar to check the content of it when there is input? 
Given the code posted I can't see how it could execute the loop. In your second code snippet you are executing a blank statement as part of your while loop: while(eitherOr.isEmpty());//this semicolon is a blank statement humanRoll(); You have to remove this semicolon in order to execute humanRoll as part of the loop: while(eitherOr.isEmpty()) humanRoll(); On a side note, use of braces generally avoids such minor issues: while(eitherOr.isEmpty()) { humanRoll(); } In the above code it's easy to identify whether an unintentional semicolon has been introduced or not.
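To make the empty-statement pitfall concrete, here is a minimal, self-contained sketch (not the original game code; `countRolls` stands in for the roll loop): the loop body must follow the condition with no semicolon in between.

```java
public class LoopDemo {
    // Counts how many times the loop body runs for a countdown from n.
    static int countRolls(int n) {
        int rolls = 0;
        while (n > 0) {   // no semicolon here!
            rolls++;      // stands in for humanRoll()
            n--;
        }
        // With `while (n > 0);` the semicolon would be an empty loop body:
        // for n > 0 the condition never changes and the loop spins forever.
        return rolls;
    }

    public static void main(String[] args) {
        System.out.println(countRolls(3));   // prints 3
    }
}
```

The braces make it obvious which statements belong to the loop, which is why the answer above recommends always using them.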
Why is a QAction not added to a QMenu if the QMenu is a unique_ptr? Code example: auto fileMenu = std::make_unique<QMenu>(this->menuBar()->addMenu("First")); fileMenu->addAction("AFirst"); auto x = this->menuBar()->addMenu("Second"); x->addAction("ASecond"); Results: I have 2 menus in the menu bar, but in the first menu, for some reason, there are NO actions. The second menu correctly has its action. I have tried different approaches, like class-member pointers, and so on, but this is the shortest possible example - the QAction is missing if the QMenu is a unique_ptr. Can anyone explain this for me? Parent window is QMainWindow, just in case. System info: Win8.1 x64, compiler is VS2013, Qt 5.4 x32. Hmm. I expect using std::unique_ptr for this to lead to a double delete since Qt will delete the child when the parent goes out of scope. You shouldn't wrap the QMenu in a unique_ptr; it is already managed by the menu bar. QAction is not wrapped, QMenu is wrapped here. Sorry, corrected. I was looking at the other overload for addMenu in the documentation, the one that returns a QAction. Did your std::unique_ptr fileMenu go out of scope? Perhaps your fileMenu is already freed but still connected to the parent and you are experiencing UB. I've tried to make the unique_ptr a class member so it won't go out of scope - the result is the same. It's not a double delete anyhow; if you delete a child, it will get removed from its parent. It might be a case of plain deletion when the unique_ptr falls out of scope. In this line: auto fileMenu = std::make_unique<QMenu>(this->menuBar()->addMenu("First")); fileMenu becomes a new QMenu object (using this constructor). It's quite the same as: std::unique_ptr<QMenu> fileMenu(new QMenu(this->menuBar()->addMenu("First"))); Then, you add a QAction to this temporary, new menu. In the second case: auto x = this->menuBar()->addMenu("Second"); x->addAction("ASecond"); x becomes a pointer to an existing menu. That's the difference.
Anyway, usually you shouldn't hold QObjects using std::unique_ptr. In Qt, there is a convention that you form a tree from QObjects by assigning a parent to each of them. The parent deletes its children recursively and you shouldn't manage them manually, or you may cause a double free in some specific cases. You're not explaining why the first way doesn't work and the second way works, though... @peppe: "Then, you add a QAction to this temporary, new menu." Isn't it obvious that inserting a QAction into a temporary object does not insert it into the menuBar? Well, it's not exactly a "temporary". QMenuBar::addMenu will create a new QMenu as a child, and return it. Then the real problem is that unique_ptr is going to destroy that QMenu when fileMenu falls out of scope. OP also said "tried to make uniqueptr as classmember, so, it won't go out of scope - result is the same", but I haven't seen the updated code. @peppe: Ok, it's not "temporary" in the C++ sense. It's just a new object, which is not a submenu. It's a child of the QMenu returned by QMenuBar::addMenu. It seems that simply creating a QMenu with another QMenu as a parent doesn't make it a submenu. Even if it wasn't managed with unique_ptr it wouldn't work as expected.
What is the difference between a Simple Random Walk and a Random Walk and why is one stationary, while the other is not? To clarify, by a Simple Random Walk I mean $$ Y_i = \begin{cases} -1 & prob = 1/2\\ 1 & prob = 1/2 \end{cases} $$ $$ X_t = \sum_{i=1}^t{Y_i} \quad \textrm{,}\,X_0 = 0 $$ and by Random Walk I mean $$ X_t = X_{t-1} + \epsilon \quad ,\,X_0 = 0 \quad and \quad \epsilon \sim N(0, 1) $$ While I see how they differ, it is also my intuition that they follow similar patterns. The Simple Random Walk is stationary, because in any interval what happens is independent of the starting point and the distribution is the same as in any other interval. Also the values will not explode to $Y=X$ or $Y=-X$, because in the long run they will be contained in a cone-shaped region (on the image below) with values $\sim N(0, \sqrt{t})$. This also means that the values will somewhat oscillate around $0$, which is quite stationary. What is different about the Random Walk, that it does not have the same properties and is not stationary? "Stationary" does not mean "oscillate about 0" (nor is the converse true), nor does it agree with your characterization of a "simple random walk" (usually known as a Binomial random walk), so the first thing to do is review the definition of stationarity. Your notions of the meaning of stationary are incorrect. Your description of the non-simple random walk is incorrect: you don't add the same $\varepsilon$ each time, you add a sequence of independent normal random variables. In other words, $X_t = X_{t-1}+\varepsilon_t$ where the $\varepsilon_t$'s are iid $N(0,1)$. Note that even what you call the simple random walk can be expressed as $X_t = X_{t-1}+Y_t$. Neither of the random walks is stationary. @DilipSarwate I might have gone too far with that oscillation statement and I did specify the second random walk incorrectly. 
But both the statement about the Simple Random Walk being stationary and the reasoning for it, "in any interval what happens is irrelevant of the starting point and the distribution is the same as in any other interval", are not my ideas; I have taken them from an MIT lecture. I have heard that there are a few definitions of stationarity; am I maybe confusing some of them, or is something else going on here? You may have misunderstood what the MIT lecture was saying, or the MIT lecture is dead wrong; it has been known to happen. Certainly your claimed definitions of stationarity will not be accepted on this forum. The video you link to is wrong, unfortunately. Random walks have independent and stationary increments, but that doesn't mean that the random walk itself is stationary (as far as I understand the video, it mathematically defines stationarity of the increments but suggests that therefore the random walk is stationary). Both random walks you mentioned have stationary increments, but they are not themselves stationary. Both of these models represent similar processes; they only differ in the distribution of the error (innovation, noise) term. If you were to plot the differences between consecutive observations in realizations of either process, you would observe independent residuals following the corresponding distribution. The definition of stationarity you give here is not necessarily incorrect, as claimed in the comments; you most likely are (perhaps mistakenly) explaining strict stationarity. Strictly stationary processes are indeed defined as having the same joint probability distribution when shifted in time, given that the frame (interval) length is held constant. According to Brockwell and Davis' Introduction to Time Series and Forecasting (1996), a time series is said to be weakly stationary if: i) its mean function is independent of time, and ii) its covariance function is independent of time and changes only with respect to the interval size. 
Referring again to Brockwell & Davis (page 17 in my copy), it can be seen that the covariance of the series you gave above changes with respect to time, hence they are non-stationary. Furthermore, both of these are unit root processes, as the roots of their corresponding characteristic equations are equal to 1. @Mateusz: stationarity needs more details because one can have weak (wide-sense) stationarity or strict stationarity, so that's probably where the confusion is coming from. Also, when they talk about stationarity of the RW, they are referring to the differenced $X_t$ process rather than the summed process. So, stationarity refers to $X_t - X_{t-1}$ rather than $X_t$.
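To make the non-stationarity concrete, a standard one-line check: writing both walks in the corrected form $X_t=\sum_{s=1}^{t}\varepsilon_s$ with iid increments of variance $\sigma^2$ (for the Binomial walk $\varepsilon_s=Y_s$, $\sigma^2=1$; for the Gaussian walk $\varepsilon_s\sim N(0,1)$, $\sigma^2=1$),

$$\operatorname{Var}(X_t)=\sum_{s=1}^{t}\operatorname{Var}(\varepsilon_s)=t\sigma^2,\qquad \operatorname{Cov}(X_s,X_t)=\min(s,t)\,\sigma^2,$$

both of which depend on $t$, so neither walk is weakly stationary (and hence, having finite variance, neither is strictly stationary). The increments $X_t-X_{t-1}=\varepsilon_t$, by contrast, are iid and therefore stationary, which is exactly the property the lecture was describing.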
Independence of $n$ random variables Let $A_1,A_2,\ldots,A_n$ be independent events in a probability space $(\Omega, \Sigma, P)$ (for every $J\subseteq \{1,2,\ldots,n\}$, $P(\bigcap_{j\in J}A_j)=\prod_{j\in J}P(A_j) )$. Prove that $1_{A_1},1_{A_2},\ldots,1_{A_n}$ are independent, i.e. for all Borel sets $B_1,B_2,\ldots,B_n$ of $\mathbb{R}$ we have $P(\cap_{i=1}^n \{w\in \Omega: 1_{A_i}(w)\in B_i\} )= \prod_{i=1}^n P(\{w\in \Omega: 1_{A_i}(w)\in B_i\})$. My partial answer: Since $1_{A_i}(w) \in \{0,1\}$ for every $w\in \Omega$, we have $$ \{w\in \Omega: 1_{A_i}(w)\in B_i\}=\begin{cases} \emptyset & \text{if } 1\notin B_i \text{ and } 0\notin B_i ,\\ A_i & \text{if } 1\in B_i \text{ and } 0\notin B_i, \\ A_i^c & \text{if } 1\notin B_i \text{ and } 0\in B_i, \\ \Omega & \text{if } 1\in B_i \text{ and } 0\in B_i. \\ \end{cases} $$ First, we prove that if $A_1,A_2,\ldots,A_n$ are independent then $A_1^c,A_2^c,\ldots,A_n^c$ are independent. If there is $i_0$ such that $1\notin B_{i_0} \text{ and } 0\notin B_{i_0}$ then we have $$P\left(\bigcap_{i=1}^n \{w\in \Omega: 1_{A_i}(w)\in B_i\} \right)= 0=\prod_{i=1}^n P(\{w\in \Omega: 1_{A_i}(w)\in B_i\}).$$ Hence, we may assume that $1\in B_i \text{ or } 0\in B_i$ for every $i$. Thanks. $P(\bigcup_{j\in J}A_j)=\prod_{j\in J}P(A_j)$? Surely you mean $P(\bigcap_{j\in J}A_j)=\prod_{j\in J}P(A_j)$? Sorry, a typo: bigcup should be bigcap. For every $i$, let $C_i=[\mathbf 1_{A_i}\in B_i]$. The goal is to show that $$ P\left[\bigcap_{i=1}^nC_i\right]=\prod_{i=1}^nP[C_i]. $$ Let us recall the, at first surprising but true, principle that random variables are easier than events, also formulated as expectations are easier than probabilities, thus, let us try to prove that $$ E\left[X\right]=\prod_{i=1}^nE[\mathbf 1_{C_i}],\qquad X=\prod_{i=1}^n\mathbf 1_{C_i}. $$ As you showed, for every $i$, $\mathbf 1_{C_i}=a_i\mathbf 1_{A_i}+b_i$ for some $a_i$ and $b_i$ in $\{0,1,-1\}$. 
Then, $$ X=\prod_{i=1}^n(a_i\mathbf 1_{A_i}+b_i)=Q((\mathbf 1_{A_i})_{1\leqslant i\leqslant n}), $$ where $Q$ is the polynomial defined as $$ Q((x_i)_{1\leqslant i\leqslant n})=\prod_{i=1}^n(a_ix_i+b_i). $$ There exist coefficients $(q_I)_I$ indexed by the subsets $I$ of $\{1,2,\ldots,n\}$ such that $$ Q((x_i)_{1\leqslant i\leqslant n})=\sum_Iq_Ix_I,\qquad x_I=\prod_{i\in I}x_i. $$ When $x_i=\mathbf 1_{A_i}$ for every $i$, $x_I=\mathbf 1_{A_I}$ for every $I$, where $A_I=\bigcap\limits_{i\in I}A_i$. For each $I$, by the independence hypothesis of the events $(A_i)_i$, $E(\mathbf 1_{A_I})=\prod\limits_{i\in I}P[A_i]$. Putting all these remarks together, one gets $$ E[X]=\sum_Iq_I\prod\limits_{i\in I}P[A_i]=Q((P[A_i])_{1\leqslant i\leqslant n}), $$ and finally, simply by the definition of the polynomial $Q$, $$ E[X]=\prod_{i=1}^n(a_iP[A_i]+b_i)=\prod_{i=1}^nP[C_i]. $$
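As a numerical sanity check (not part of the proof), the identity can be verified on a small finite probability space with two independent events. Since indicators only take the values 0 and 1, it suffices to range over the four effective Borel sets $\emptyset,\{0\},\{1\},\{0,1\}$:

```python
from fractions import Fraction
from itertools import product

def verify_indicator_independence():
    """Check P(1_{A1} in B1, 1_{A2} in B2) == product of marginals
    for every choice of effective Borel sets B1, B2."""
    omega = [0, 1, 2, 3]                      # four equally likely outcomes
    P = {w: Fraction(1, 4) for w in omega}
    A = [{0, 1}, {0, 2}]                      # independent: P(A1 and A2) = 1/4

    def ind(event, w):                        # the indicator 1_A(w)
        return 1 if w in event else 0

    borel = [frozenset(), frozenset({0}), frozenset({1}), frozenset({0, 1})]
    for B1, B2 in product(borel, repeat=2):
        joint = sum(P[w] for w in omega
                    if ind(A[0], w) in B1 and ind(A[1], w) in B2)
        marg1 = sum(P[w] for w in omega if ind(A[0], w) in B1)
        marg2 = sum(P[w] for w in omega if ind(A[1], w) in B2)
        if joint != marg1 * marg2:
            return False
    return True

print(verify_indicator_independence())        # True
```

All sixteen combinations of Borel sets reduce to the four cases in the displayed formula above ($\emptyset$, $A_i$, $A_i^c$, $\Omega$), which is why the check passes.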
Word VBA: Get position of the first table borderline I am writing some code in VBA in Word and I would like to get the position of the first borderline of the table as shown in this image As seen in the above image, I would like to equalize the first table margin and second table margin (not sure if "margin" is the right word). The intention is to make the first table and second table column widths equal and combine them, merging them into one table. The Word document has about 20 separate tables and hence I would like to combine all of them into one single table. Please note that the empty tables that you see above have data removed to preserve confidential information. What I have tried so far: Public Sub CorrectTables() Dim mainTable As Table Dim secondTable As Table Dim firstColWidth As Integer Dim secondColumnWidth As Integer Dim thirdColumnWidth As Integer Dim fourthColumnWidth As Integer Dim fiftColumnWidth As Integer 'Get current table Set mainTable = ActiveDocument.Tables(1) **firstColMarginPos = mainTable.LeftPadding** This is where I need some help to set the position firstColWidth = mainTable.Columns(1).Width secondColumnWidth = mainTable.Columns(2).Width thirdColumnWidth = mainTable.Columns(3).Width fourthColumnWidth = mainTable.Columns(4).Width fiftColumnWidth = mainTable.Columns(5).Width Dim currentTableCount As Integer While ActiveDocument.Tables.Count <> 1 currentTableCount = ActiveDocument.Tables.Count Set secondTable = ActiveDocument.Tables(2) secondTable.LeftPadding = firstColMarginPos secondTable.Columns(1).Width = firstColWidth secondTable.Columns(2).Width = secondColumnWidth secondTable.Columns(3).Width = thirdColumnWidth secondTable.Columns(4).Width = fourthColumnWidth secondTable.Columns(5).Width = fiftColumnWidth Selection.Tables(1).Select Selection.Collapse WdCollapseDirection.wdCollapseEnd While (ActiveDocument.Tables.Count - currentTableCount = 0) Selection.Delete Unit:=wdCharacter, Count:=1 Wend Wend End Sub I figured out the answer. 
For the first table, I am able to get the indentation: firstColMarginPos = mainTable.Rows.LeftIndent Then I am able to use it to set the value for the second table: secondTable.Rows.SetLeftIndent LeftIndent:=firstColMarginPos, RulerStyle:=wdAdjustFirstColumn
How can I fix this apk not uninstalling problem in my custom ROM? I like custom ROMs very much. And this problem is regarding issues that I get while uninstalling an app in a specific unofficial ROM. Let's start from the beginning: I downloaded this ROM from this forum (https://forum.xda-developers.com/t/rom-nicklaus-7-1-2_r29-unofficial-aospextended-rom-v4-6-oms-dui.3790560/) for my Moto E4 Plus (nicklaus_f, xt1770). I booted into my TWRP recovery and wiped the system, data, cache and dalvik. I flashed the ROM file, flashed the recommended BeansGapps mini (the mini pack only contains the Play Store), and after this booted into the ROM. It booted fine, no errors, and everything was working fine, UNTIL I accidentally installed Chrome. I wasn't quite worried about Chrome because I could easily uninstall it. BUT, when I clicked on uninstall, it started uninstalling but it was taking longer than usual. I waited for 2 mins and then my ROM unexpectedly rebooted. After this I flashed the non-recommended BitGapps and the same thing happened. Everything else was working fine. I tried some other ROMs but the same thing happened. So my question is: does this have anything to do with stabilization, and should I let the ROM stabilize? And if not, how can I fix this? Yes, I haven't installed Magisk or any other kernel, etc. Thanks for every reply in advance. EDIT: This problem only happens when I flash Gapps. How did you try to uninstall Chrome (from the Play Store, App info, adb shell pm uninstall)? I tried to uninstall it through App info. Then something is really wrong with your system. Check adb logcat, but as development of this ROM seems to have been dead for a long time I am not sure if there will be a way to fix the problem. Maybe you had better search for a different ROM. 
How do I check adb logcat, and can you recommend some ROMs? I am not able to find any. (Moto E4 Plus, nicklaus_f, XT1770) You install a custom ROM and don't know what adb logcat is (-> Google it and learn the basics of Android)? Sorry, I don't own that device so I can't recommend anything to you.
Was there ever a rule requiring diagonal serve in table tennis singles games? When I was a kid (born in the mid-70s, in Sweden) and we played (for fun), we required that the serve was diagonal in a table tennis singles game. I've spoken to people all over Europe that applied that same rule, but I've also met people that don't know what I'm talking about, they've never heard of it and they will (correctly) say that it only applies to doubles. Was there ever a requirement in the official table tennis rules to serve diagonally in singles games, or did we apply this rule erroneously as kids? Not an answer, but I know many people who believe this rule applies in singles today. It seems no such rule has officially existed, or at least not for a long time1. Technically speaking, to prove this, you would have to go through every official rule-set, but I figured it should be enough to look at whether the rules were different in the early days. To that end, I found two useful sources from 1902: Ping-Pong or Table Tennis and How to Play It (Ritchie & Parker, pg 43) and Laws of Ping-Pong, the rule-set sold with the Parker Brothers Ping-Pong set, which I was able to read via a photograph from an eBay listing2 of all places. Interestingly, Ritchie & Parker discuss that enforcing a diagonal serve rule might help reduce the serve advantage while simultaneously noting such a rule not existing in "serious" play (pg. 27-28). While a diagonal serve was not mandatory, the service was nonetheless quite different from the modern rules. It was required to be underhand, below the waist, and most notably it was not yet a rule that the serve had to first contact the server's side! The earliest source I can find resembling the modern serve is in the November 1936 issue of Popular Mechanics (Coleman Clark, pg 669), where an example non-diagonal serve and return is shown in a drawing. 
This author in particular appears to be quite knowledgeable with the technical details, noting that imparting spin on the serve toss was no longer legal under American rules but still was in international play at the time (pg. 148A). Another notable source is the May 1942 Technical Manual of Sports and Games by the US War Department, which is the earliest source I could find that had distinct service rules for both singles and doubles (pgs 129-131). I'm speculating slightly, but it would seem natural that the earliest incarnations might have had people mandatorily serving diagonally – after all, table tennis was modelled after tennis, and such is the rule there. While I couldn't find anything too definitive for the diagonal service, the 1902 sources in particular contain instances where table tennis had not yet changed an inherited tennis rule, or discussions on why a particular rule doesn't translate to the table (eg. keeping the tennis scoring system, the serve not striking the server's court first, the ban on volleying shots, whether volley contact outside the table counts as a point against). Probably 1902. The copyright year is quite hard to make out with the resolution of the photograph, but searching around for "Parker Brothers Ping Pong 1902" does yield other examples of the same item, but none I can find with a clear resolution of the year. There is additionally a 1903 Treasury Department decision (pg. 22-23) noting the "Ping Pong" trademark having been transferred to Parker Brothers in 1902.
How to use Reflection to retrieve a property? How can I use Reflection to get a static readonly property? Its access modifier (public, protected, private) is not relevant. You can use the GetProperty() method of the Type class: http://msdn.microsoft.com/en-us/library/kz0a8sxy.aspx Type t = typeof(MyType); PropertyInfo pi = t.GetProperty("Foo"); object value = pi.GetValue(null, null); class MyType { public static string Foo { get { return "bar"; } } } Use Type.GetProperty() with BindingFlags.Static. Then PropertyInfo.GetValue(). Just like you would get any other property (for example, look at the answer to this question). The only difference is that you would provide null as the target object when calling GetValue.
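Since the question says the access modifier may be private or protected, note that the parameterless `GetProperty(string)` overload only finds public members. A sketch for the non-public case, reusing the `MyType`/`Foo` example names from above:

```csharp
using System.Reflection;

Type t = typeof(MyType);
// Explicit flags are required to match non-public static properties.
PropertyInfo pi = t.GetProperty("Foo",
    BindingFlags.Public | BindingFlags.NonPublic | BindingFlags.Static);
object value = pi.GetValue(null, null);   // null target, because it's static
```

Passing `null` as the first argument to `GetValue` is what tells reflection there is no instance involved.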
How to make this layout with display:table Just started to learn CSS and happened to see some discussion about display:table for web layout. I made a simple pen in which a two-row layout is created by inline-block and floats respectively. I wonder whether display:table can be used to do this. .container{ width:500px; height:200px; border:1px solid green; margin:10px; } .a, .b { display:inline-block; background:grey; border:1px solid red; box-sizing:border-box; vertical-align:top; } .a { width:30%; height:50%; } .b { width:70%; height:50%; } <div class="container"> <div class="a">a</div> <div class="b">b</div> <div class="b">b</div> <div class="a">a</div> </div> Can be done, with additional HTML (one wrapper div per two blocks). For such layouts it is best to avoid display: table as your cells vary in size per column. For layouts such as these, your best bet is to use display: flex, which is very versatile and allows a greater amount of flexibility than inline-block and float. One downside of this is that it may not be compatible with older browsers; check the compatibility list here. Here is a great place to get started with flexbox. P.S. If you really want to go through with display: table you could try the answers suggested in the below SO question: table cell width issue You can use display: flex; it's now the best solution.
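For reference, a minimal flexbox version of the same layout. This is a sketch reusing the class names from the question; the 30%/70% split mirrors the original widths:

```css
.container {
  display: flex;
  flex-wrap: wrap;   /* each .a + .b pair fills one row, the next pair wraps */
}
.a { flex: 0 0 30%; height: 50%; box-sizing: border-box; }
.b { flex: 0 0 70%; height: 50%; box-sizing: border-box; }
```

Because the two widths sum to 100%, each consecutive pair of children occupies exactly one row, giving the alternating a/b and b/a rows from the original markup without any wrapper divs.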
iNotes on any version of Internet Explorer is emulating IE9 I am customizing iNotes (Notes web mail client), adding some features using JavaScript that require Ajax calls to external sources. In doing so I have found Internet Explorer won't perform CORS (cross-origin) requests. Errors are either access denied or some other security errors depending on how it's called. Chrome and FF and Safari all work. I have found what I believe to be the culprit: iNotes adds a meta tag to emulate IE9. <META http-equiv="X-UA-Compatible" content="IE=EmulateIE9" /> For CORS requests, IE9 does not support XMLHttpRequest; rather, you must use XDomainRequest, which was only supported in IE8 and IE9. Since it's emulating IE9, XMLHttpRequest (or jQuery .ajax calls for that matter) doesn't work. I have not been able to find any way to remove that meta tag; I did a search on the mail file and there are no matches for IE=EmulateIE9 that I could find. And I'm sure if I did remove it, I would break something in iNotes. I didn't want to load jQuery, but may do so for this script and include the moonscript plugin which uses XDR for IE8 and IE9 browsers. If iNotes is ever updated, it will still work. Anyone else run into this problem and find a better solution? Which version of inotes? mail (R9) template, mail9.ntf No, I asked for the version of iNotes: Which formsX.nsf do you have on the server? Forms9.nsf, but there is also a Forms9s.nsf in the directory. When I looked that up, it says that I have to enable it to address "quirks mode"; in other words, to run in standards mode I enable Forms9s in the notes.ini. I also have iNotesExt_9.nsf, which is where I have the customization files for iNotes. That was it, Lothar, SP5 added Forms9s which eliminates IE9 emulation. Thanks for pointing me in the right direction. Lothar Mueller pointed me in the right direction. Domino 9.0.1 Fix Pack 5 adds a Forms9s.nsf which allows you to get rid of "Quirks Mode" for IE backwards compatibility. 
After installing the FP, the new forms9s.nsf file is installed, then you update the notes.ini with iNotes_WA_DefaultFormsFile=iNotes/Forms9s.nsf iNotes_WA_FormsFiles=iNotes/Forms9s.nsf and it eliminates the emulation tag for IE. I still have to test my iNotes customization apps, but this gives an option for running some IE features that didn't work before such as Ajax requests from iNotesExt_9.nsf, etc. After a very quick/brief testing of Forms9s.nsf I found that it fixed the ie9 emulation problem, but right away noticed that you can't do type-aheads in iNotes. If you have $inbox sorted by user, you cannot start typing a name to go to first instance. Doesn't work for any of the columns. That's a big bug.
Why does SD.exists() only look inside media/realroot on Intel's Galileo? I'm following a tutorial which says SD.exists() only looks for files inside media/realroot on Intel's Galileo board. I just confirmed this by using SSH to create and remove files during a loop which was looking for a specific named file. Where is this documented? I searched the cpp files inside the SD library and found nothing. The Intel Galileo runs Linux. Most Linux distributions (auto)mount media under specific directories like "/media". So this is likely not so much a feature of the Arduino paradigm but of the Linux paradigm. If you really want to change this you will probably have to research how Linux (specifically how the Intel Galileo version of Linux) auto-mounts media. This is a Stack Exchange question about auto-mounting using a Linux distribution called Ubuntu. I am not sure how close the Intel Galileo Linux is to Ubuntu. But it's a starting point.
My CTC loss model's loss stagnates and then outputs only blank characters I am trying to implement Baidu's DeepSpeech1 in Keras using CTC loss; my code is below: class dataGen(Sequence): # data generator for Mozilla common voice def __init__(self, audiopaths, transcripts, batch_size): self.x = audiopaths self.y = transcripts self.batch_size = batch_size def __len__(self): return int(len(self.x) / self.batch_size) def __getitem__(self, idx): batch_x = self.x[idx*self.batch_size : (idx+1)*self.batch_size] batch_y = self.y[idx*self.batch_size : (idx+1)*self.batch_size] x_val = [get_max_time(file_name) for file_name in batch_x] max_val = max(x_val) x_data = np.array([make_mfcc_shape(file_name, padlen=max_val) for file_name in batch_x]) # just converts data to mfcc y_val = [get_maxseq_len(l) for l in batch_y] max_y = max(y_val) labels = np.array([get_intseq(l, max_intseq_length=max_y) for l in batch_y]) input_length = np.array(x_val) label_length = np.array(y_val) return [x_data, labels, input_length, label_length], np.zeros((self.batch_size,)), [None] def on_epoch_end(self): i = np.arange(len(self.x)) np.random.shuffle(i) self.x = self.x[i] self.y = self.y[i] def clipped_relu(x): return relu(x, max_value=20) def ctc_lambda_func(args): y_pred, labels, input_length, label_length = args return ctc_batch_cost(labels, y_pred, input_length, label_length) def ctc(y_true, y_pred): return y_pred input_data = Input(name='the_input', shape=(None, 26)) inner = TimeDistributed(Dense(2048))(input_data) inner = TimeDistributed(Activation(clipped_relu))(inner) inner = TimeDistributed(Dropout(0.1))(inner) inner = TimeDistributed(Dense(2048))(inner) inner = TimeDistributed(Activation(clipped_relu))(inner) inner = TimeDistributed(Dropout(0.1))(inner) inner = TimeDistributed(Dense(2048))(inner) inner = TimeDistributed(Activation(clipped_relu))(inner) inner = TimeDistributed(Dropout(0.1))(inner) inner = Bidirectional(LSTM(2048, return_sequences=True))(inner) inner =
TimeDistributed(Activation(clipped_relu))(inner) inner = TimeDistributed(Dropout(0.1))(inner) output = TimeDistributed(Dense(28, activation="softmax"))(inner) labels = Input(name='the_labels', shape=[None,]) input_length = Input(name='input_length', shape=[1]) label_length = Input(name='label_length', shape=[1]) loss_out = Lambda(ctc_lambda_func, output_shape=(1,), name='ctc')([output, labels, input_length, label_length]) model = Model(inputs=[input_data, labels, input_length, label_length], outputs=loss_out) model.compile(optimizer='adam', loss=ctc) This is all pretty standard; however, during training my model reaches a loss usually between 100 and 200 (from >1000) and then stops improving, and when I test it (removing the lambda layer to get transcript outputs), it only outputs blank characters. My theory is that it trains to output only blank characters as this gives a lower loss than random characters, but then gets stuck in a local minimum there and doesn't actually learn to transcribe the audio. Is there any trick to fix this that someone knows? There are so many existing implementations for this, why don't you take a working one? Like https://github.com/robmsmt/KerasDeepSpeech. Also, DeepSpeech is pretty old; there are better models. I actually used most of his code in my implementation, which is why I am so confused that it doesn't work. Does the original implementation work? I do not know because it requires dated versions of Python and TensorFlow. You mentioned better models, could you maybe name a few? ESPnet, for example.
PS file from Overleaf I created a document in Overleaf and I need the PDF, DVI and PS formats of that document. I got the PDF; for the DVI I had to change the compiler to LaTeX, but I do not see a PS file there. How can I get that format? All my graphics are in .eps format. Are you sure you need those formats? It sounds like the requirements were written in the 1980s.... But you can use latexmk to set up a latex->dvi->ps->pdf pipeline using latex, dvips and ps2pdf as the executables for each stage; then you will have the output in all three forms. Yes, that answers my question. I didn't know the "latexmkrc file" is just "latexmkrc" without any extension (like .tex, .exe, etc.). Thanks!
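Concretely, a latexmkrc along these lines (a sketch based on the latexmk configuration variables) produces all three formats in one run:

```
# latexmkrc -- route: latex -> .dvi -> dvips -> .ps -> ps2pdf -> .pdf
$dvi_mode        = 1;   # run latex and keep the .dvi
$postscript_mode = 1;   # run dvips on the .dvi to get the .ps
$pdf_mode        = 2;   # build the .pdf from the .ps via ps2pdf
```

Since the graphics are .eps, this dvips route also avoids the usual "eps not supported by pdflatex" problem.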
Different colors on the same polygon in 3D view of QGIS I have a .shp file with buildings represented as polygons. My goal is that, in a 3D representation, I am able to have different colors in a single polygon (building) depending on its height. For example, from 0 to 10 meters, buildings are yellow, then from 10 to 20 m they are blue, and so on. Is there any way of doing this in QGIS or using PyQGIS? I have tried adding copies of the buildings to the shapefile with different heights, where the number of copies depends on the height of the building, and then picking a different color for each building copy, but it does not work; still only one color appears. EDIT: This is an example of what I'm looking for: EDIT2: Using Properties > 3D View > Rule-based, and then View > New 3D Map View, I get a 2D map :( We're a little different from other sites; this isn't a discussion forum but a Q&A site. Your questions should as much as possible describe not just what you want to do, but precisely what you have tried and where you are stuck trying that. Please check out our short [tour] for more about how the site works. Thanks. You can achieve this by setting up rule-based 3D rendering. Navigate to Properties > 3D View > Rule-based and create your own rules and styling. I did this really quick, crude example: How do you visualize the 3D map? In QGIS go to View > New 3D Map View. Doesn't work for me, it still shows in 2D. What version of QGIS are you using, and can you show a screenshot of the 3D view? I'm using QGIS 3.6.3 Hold your scroll wheel in and move your mouse to change your viewing angle Still no 3D view...
Programming interview: on sorted arrays, algorithm proof The following is a programming interview question: Given 3 sorted arrays. Find (x,y,z), (where x is from the 1st array, y is from the 2nd array, and z is from the 3rd array), such that Max(x,y,z) - Min(x,y,z) is minimum. This question is discussed here: http://www.careercup.com/question?id=14805690 One possible solution discussed on the Career Cup page is the following: "Take three pointers. Each to the first element of the list. Then find the min of them. Compute Max(x,y,z) - Min(x,y,z). If the result is less than the best result so far, change it. Increment the pointer of the array which contains the minimum of them." My question is, how can we prove the correctness of this algorithm? If not, can we come up with cases where the algorithm fails? And if so, what is a correct algorithm to solve this problem, with proof? Let's clarify this a bit. I assume arrays of integers; can they be negative? Is Max(x,y,z) the max product of x*y*z? Can we accept a negative Max(x,y,z) - Min(x,y,z)? @Gevorg Max(x,y,z) is most likely the maximum of the 3 values. @Gevorg Assume that the arrays of integers are positive. Max(x,y,z) is the maximum of the 3 values, as Dukeling pointed out. Assume we can accept a negative Max(x,y,z) - Min(x,y,z). Let's call the sorted arrays A1, A2, A3. We have three indices i1, i2, i3. We want to find one triple (i1,i2,i3) such that max(A1[i1], A2[i2], A3[i3]) - min(A1[i1], A2[i2], A3[i3]) is minimal. At the beginning i1 = i2 = i3 = 0. One possible way is to try all triples (i1, i2, i3) for 0 <= i1 < A1.length, 0 <= i2 < A2.length, 0 <= i3 < A3.length. But many of those triples will be unnecessary to check. Without loss of generality (this is just to make talking about the arrays easier), let A1[i1] <= A2[i2] <= A3[i3]. We will do one increment in this block. There is no sense in trying triples (j, i2, i3) for j < i1, because the arrays are sorted and A1[j] <= A1[i1], which is the minimum. Let's pretend we already tried all these (j, i2, i3).
There is no sense in trying (i1, i2, j) for j > i3, because A3[i3] <= A3[j]. By trying triples (i1, j, i3) for arbitrary j we can only do worse (the min will be less or equal and the max will be higher or equal). We tried indices less than i1, i2, i3. We reasoned that triples (j < i1, i2, i3) and (i1, j, i3) are unnecessary, and (i1, i2, j > i3) too. The same arguments apply to (j <= i1, k, l = i3). Decrementing i3 is no use, as we tried (actually, or by the reasoning above) this triple before increasing the third index. We are left with increasing i1, which is the index of the min in the triple. After this the difference might get worse, or A1[i1+1] may not be minimal, but we proceed the same way. I will edit my post to make it more clear, I hope. This is actually almost correct, just needlessly overcomplicated. We can simply say that all triplets (i1, j2 >= i2, j3 >= i3) are no better than (i1, i2, i3) and may be discarded. So the next viable triplet is (i1+1, i2, i3). The algorithm has a loop invariant: we know the smallest value of max(x, y, z) - min(x, y, z) for all possible values of x, y, or z smaller than the current values. To start with, x, y, and z are the smallest possible, so the invariant is true at the beginning. At the end, x, y, and z are the largest possible, so they cover all possible values, and we have the answer. Given some values of x, y, z, if we know the optimal value of max(x, y, z) - min(x, y, z) for smaller values, then any better value would require incrementing x, y, or z. If we don't increment the minimum of them, then every larger value of x, y, or z can only increase the value of max(x, y, z) - min(x, y, z), because the maximum can only increase and the minimum will stay the same. So the minimum is incremented to the smallest value that could possibly be better than the one we have, which maintains the loop invariant. "can only increase the value of max" --- so what? It will increase now, but decrease at some next iteration, when we do decide to increment the minimum.
Perhaps we will be able to find a smaller target value this way. Can you prove this is not the case? @n.m. If the minimum is not changed, then there is no possible future iteration that lowers max-min. We want to increment the minimum because it's the only way to find a smaller target value. "If the minimum is not changed, then there is no possible future iteration that lowers max-min" -- no, sorry. If the minimum is never changed, then there's indeed no possible future iteration that lowers max-min. But no one says it is never changed. It is not changed at this particular iteration. It will be changed at some future iteration, and will lower max-min. You need to prove it will not lower max-min enough. Yes, I meant if it's never changed. If it needs to eventually be incremented, we might as well increment it immediately. We would only not increment it immediately if there were larger values of the other variables that could give a lower target value.
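For concreteness, here is a sketch of the three-pointer algorithm in Python (the function and variable names are mine, not from the discussion); the loop advances whichever pointer sits on the current minimum, exactly as argued above:

```python
# Sketch of the three-pointer algorithm: at each step record the spread of the
# current triple, then advance the pointer holding the minimum, since keeping
# it can only make future spreads worse.
def min_spread(a, b, c):
    i = j = k = 0
    best = float("inf")
    while i < len(a) and j < len(b) and k < len(c):
        lo = min(a[i], b[j], c[k])
        hi = max(a[i], b[j], c[k])
        best = min(best, hi - lo)
        # advance the pointer sitting on the current minimum
        if a[i] == lo:
            i += 1
        elif b[j] == lo:
            j += 1
        else:
            k += 1
    return best
```

It runs in time linear in the total length of the arrays, since exactly one pointer advances per iteration; a brute-force check over random inputs is a cheap way to build confidence in the proof while it is being worked out.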
Multiple settings for separate Live Wallpapers I have two Live Wallpapers that belong to the same Application and I am trying to have separate Preference settings for each one but I have run into the issue of the first settings being used by both Wallpapers. <application android:icon="@drawable/icon" android:label="@string/app_name"> <service android:label="first wallpaper" android:name="com.package.this1.number1" android:permission="android.permission.BIND_WALLPAPER"> <intent-filter> <action android:name="android.service.wallpaper.WallpaperService" /> </intent-filter> <meta-data android:name="android.service.wallpaper" android:resource="@xml/source1" /> </service> <service android:label="second wallpaper" android:name="com.package.this2.number2" android:permission="android.permission.BIND_WALLPAPER"> <intent-filter> <action android:name="android.service.wallpaper.WallpaperService" /> </intent-filter> <meta-data android:name="android.service.wallpaper" android:resource="@xml/source2" /> </service> <activity android:label="@string/settings" android:name=".this1.Settings1" android:exported="true" android:icon="@drawable/icon"> </activity> <activity android:label="@string/settings" android:name=".this2.Settings2" android:exported="true" android:icon="@drawable/icon"> </activity> </application> Am I missing something simple or is it not possible to do this without making 2 separate applications? 
Here's the code of my Settings1 and Settings2 classes public class Settings1 extends PreferenceActivity implements SharedPreferences.OnSharedPreferenceChangeListener { @Override protected void onCreate(Bundle bundle) { super.onCreate(bundle); getPreferenceManager().setSharedPreferencesName(number1.SHARED_PREFS_NAME); addPreferencesFromResource(R.xml.this1_settings); getPreferenceManager().getSharedPreferences().registerOnSharedPreferenceChangeListener(this); } } public class Settings2 extends PreferenceActivity implements SharedPreferences.OnSharedPreferenceChangeListener { @Override protected void onCreate(Bundle bundle) { super.onCreate(bundle); getPreferenceManager().setSharedPreferencesName(number2.SHARED_PREFS_NAME); addPreferencesFromResource(R.xml.this2_settings); getPreferenceManager().getSharedPreferences().registerOnSharedPreferenceChangeListener(this); } } Any advice would be much appreciated. Moonblink (creator of Android Tricorder) also has a collection of live wallpapers called Substrate. Substrate does indeed combine multiple wallpapers into a single package, and more than one of the wallpapers has a settings activity. I suggest you examine his structure. Start here http://code.google.com/p/moonblink/source/browse/ and look for Substrate (under trunk). Hope this helps. P.S. The project home is here: http://code.google.com/p/moonblink/wiki/Substrate I do have 2 separate SHARED_PREFS_NAME constants, one in each of the wallpapers. They share the same name "SHARED_PREFS_NAME" with a different value, e.g. "wallpaperprefs1" and "wallpaperprefs2", but I explicitly reference them in the separate PreferenceActivity classes as noted above. The other puzzling thing is that I use two separate XML settings files, but PreferenceActivity 1 is always used. Ah ok, my fault. I just did as such to no avail, it still takes the first Preference Activity. @Alejandro I found an example of someone doing this and edited my answer to provide the link.
This looks like what I'm trying to do; though not the simple solution I was hoping for, it is a far more robust one. Thank you for the great find. Just wanted to add that I found the error in my original code. In the resource XML for the Preference Activity there was an android:settingsActivity="the first activity" attribute which was referencing the same Activity, thus my error. Heh... @Alejandro Thanks for clarifying. This is a very frustrating and unforgiving programming environment. :-)
PHP check if string is SQL timestamp I need to create a PHP function that will check if a string is a SQL timestamp: yyyy-mm-dd hh:mm:ss[.fff]. Has anyone done this? Suggestions? Just to mention: at least MySQL understands ISO 8601 (2004-02-12T15:19:21+00:00, see http://php.net/date with format c) and several others too. Depending on what you want to achieve, you maybe don't need this test. I was looking for direction, not someone to write code for me. For example - "try regular expressions" or "there's actually a function that already does that called...".
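The "try regular expressions" direction can be sketched as follows. The pattern and function name here are mine, and the sketch is shown in Python for illustration; the same pattern would work with PHP's preg_match(), with a date-validity check on top, since a regex alone accepts impossible dates:

```python
# Hypothetical validator for "yyyy-mm-dd hh:mm:ss[.fff]" strings.
import re
from datetime import datetime

TIMESTAMP_RE = re.compile(r"^\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}(\.\d{1,3})?$")

def is_sql_timestamp(s):
    if not TIMESTAMP_RE.match(s):
        return False
    # The regex accepts impossible dates like 2023-13-40; let strptime reject them.
    try:
        datetime.strptime(s[:19], "%Y-%m-%d %H:%M:%S")
        return True
    except ValueError:
        return False
```

The two-step shape (cheap format check, then semantic check) is the usual compromise between a regex that is too permissive and one that tries to encode calendar rules.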
Working menu link now leads to 404 error page I am working on a new website development with Joomla 3.1. The menu links were working fine before. However, suddenly one of the menu links now leads to the 404 error page. It happened once before and I fixed it by changing the menu alias. I wonder why and how the old alias, which was working fine before, is now creating a problem. The menu item is not locked. It is in the published state. Similarly, the article assigned to it is also not locked and is in the published state. Any help/insight please? Have you by any chance uninstalled the component it was associated with? Or do you have any SEF extensions that you've been experimenting with? Does this problem also occur with SEF URLs turned off? I haven't installed any SEF extension. The link works fine when SEF URLs are turned off. Do you have caching enabled? Disable it while developing. Have you changed the URL Rewrite option? Have you renamed the .htaccess.txt file? Try the rebuild menus option from the menu manager. Also make sure your alias is exactly as it should be (no extra spaces). Sorry if these steps are all overly obvious. Thanks for your reply. 1) Caching is disabled. 2) I haven't made any change to the URL rewrite option. 3) I haven't changed the .htaccess file either. 4) I tried the rebuild menus option from the menu manager, but no success. 5) The menu alias doesn't have any extra spaces. What is the menu alias exactly? And are you using any clever tool like one of NoNumber's, e.g. ReReplacer? It's strange that the menu link started working again after I changed the category of the article associated with the menu. Obvious if that category was disabled! Well, that category was not disabled. @chapagain well, glad you got it sorted. If your URLs are working fine when you don't use SEF URLs, and the 404 error page appears only when SEF URLs are turned ON, then it is most likely that the .htaccess file on your Joomla website is not properly configured.
Have you tried renaming the htaccess.txt file (which comes by default with Joomla) to .htaccess? If not, you should try renaming the htaccess.txt file present in the website's root folder to .htaccess and then try to use SEF URLs. Hope this will solve the problem. In case of failure, please tell us and I will try to troubleshoot it more. :) I have already renamed the htaccess.txt file to .htaccess. The issue is not with all the URLs. It happens with one URL only. I was having the same issue when I redesigned a site for a client. I built the site on my server and transferred it to theirs. I couldn't figure out why this one alias (services) wouldn't work. If I changed it to "our services" or anything else, it worked fine, but I really wanted to keep the same structure as I originally set up. In this case, it turned out that there was an old file, "services.html", that was still in the home directory. I simply deleted that, and all worked fine. Also, from past experience, I know this error can happen when there is a "trashed" menu item with the alias you're trying to use. Trashed menu items are usually hidden by default, so it's easy to miss. Chiming in with another possible solution. I just ran into the same issue, and it was confounding me. I tried turning SEF URLs off, swapped the .htaccess / .htaccess.txt files, etc. - all the recommendations you will find while Googling a solution for this issue. Unique problem: all the 404 articles were in the same sub-category, 3 layers down from the root menu. Diagnosed problem cause: the alias of the menu and the alias of the category were identical. Solution: rename the category alias to "alias-category", rebuild all menus, rebuild all menu items, rebuild the category. Problem solved! We had something similar, and it was a result of having two K2 items with the same alias - one of which was unpublished, causing the 404.
It shouldn't be possible - Joomla typically yells at you for even trying to double up an alias - but it's worth going in and checking your articles / categories (trashed and unpublished), and your menu items (trashed and unpublished), and if you're using a component, such as K2 or SobiPro, its equivalent of articles. What solved my problem is: I had switched the system's "URL rewriting" ON (after I had created the original menu/article), but then after that I got a 404 error when trying to change the alias, on a different day. I switched URL rewriting off, recreated the menu/article and it worked. It may just work anyway if you try switching URL rewriting OFF, without recreating the menu/article. I had the same issue. I changed the menu item alias to "our products", and it worked fine. I went to change the menu item alias back to the original "products" and received the error "Save failed with the following error: A first level menu item alias cannot be 'products' because 'products' is a sub-folder of your joomla installation folder." I had recently added a subfolder into the Joomla installation folder! You should not use whitespace in an alias, and a directory/sub-directory name should not match your alias. It's not even possible to use white space for the alias... White space is automatically replaced with a hyphen anyway, so it doesn't matter. I ran into a similar issue today where the home page of the site was resolving to the index page, but when attempting to access any subpages via the subpage alias I received a general 404 error. This was particularly strange, as I had just migrated the working site from a subdomain to the primary domain. After finding this thread I replaced my .htaccess file with Joomla's preconfigured .htaccess file and all is working fine for me now. I'd recommend trying the same, but be sure to save a copy of your previous file. I had this error today; the resolution was to clear all cache.
SP2007 (All)UserData nvarchar field content garbled for choice fields In our SharePoint 2007, one of our list fields is configured as such: <Field Type="Choice" DisplayName="Standard" Required="FALSE" Format="RadioButtons" FillInChoice="FALSE" Group="gc_xyz" ID="{e5d39160-a777-4d70-b372-a7ca76305adc}" SourceID="{21f217b9-cbc5-44b8-96b7-2c665aecc37f}" StaticName="Standard" Name="Standard" ColName="nvarchar20" RowOrdinal="0"> <CHOICES> <CHOICE>Yes</CHOICE> <CHOICE>No</CHOICE> </CHOICES> </Field> But when I look in the AllUserData table (or its view), the data for this field is like this: nvarchar20 챂䗅⎑啄獌崿 ំ싖줚䎭권䞢⋫ 嫎⚔潣俎즤떴ಇ긅 စ꼨噡䊔ꆫ䐂㪗⋉ ᶷ刊ᯉ䯥梋蓊㯕Ꙃ 㩝䪿撾럌阶紻 ពု帵䙏熦༱䏇䶌 왜汵䅩粁ʹ猅 All values are different, as if hashed. How do I read those values to translate them to Yes/No? We are decommissioning the server, and the SharePoint app was heavily customized a long time ago, to the point where it's not practically feasible to upgrade it to another version. The decision was made to replace it with a new app, but we need to export the data into a new database. Data will be normalized and simplified.
Context-free grammar for simple arithmetic Without the use of parentheses and only considering addition, subtraction, multiplication, and division, the following context-free grammar describing these simple arithmetic operations is incorrect, right, because it is right-associative rather than left-associative? - E -> T | T + E | T - E - T -> int | int * T | int / T A correct grammar would be: - E -> T | E + T | E - T - T -> int | T * int | T / int Is this reasoning correct? "Correct" is whatever you define it to be. The "traditional" meaning for non-associative operators that is taught in grammar school is left-associative, so in that sense the left-recursive grammar is "correct" while the right-recursive grammar is not, but that's a pretty circular definition. In addition, your left-recursive grammar needs E -> T added to be correct (as is, it does not describe any finite strings)
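To see why the associativity of the grammar matters for evaluation, a small sketch (Python, handling only + and - over integers; the function names are mine) evaluates "8-3-2" according to each grammar:

```python
# With the right-recursive grammar (E -> T - E) the parse groups as
# 8 - (3 - 2) = 7, while the conventional left-associative reading,
# matching the left-recursive grammar (E -> E - T), is (8 - 3) - 2 = 3.
import re

def tokens(s):
    return re.findall(r"\d+|[+\-]", s)

def eval_right_assoc(toks):
    # E -> T | T + E | T - E  (the right-recursive grammar)
    head = int(toks[0])
    if len(toks) == 1:
        return head
    op, rest = toks[1], eval_right_assoc(toks[2:])
    return head + rest if op == "+" else head - rest

def eval_left_assoc(toks):
    # E -> T | E + T | E - T  (the left-recursive grammar), done iteratively
    acc = int(toks[0])
    for i in range(1, len(toks), 2):
        op, t = toks[i], int(toks[i + 1])
        acc = acc + t if op == "+" else acc - t
    return acc
```

For + alone the two grammars agree on the value, since addition is associative; subtraction (and division, omitted here) is where the derivation shape changes the answer.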
Substitution method for solving recurrences I have recently had trouble understanding the substitution method for solving recurrences. I watched a few online lectures about the problem, but sadly they did not tell me much (in one of them I heard that it is based on guessing, which made me even more confused) and I am looking for some tips. My objective is to solve three different recurrence functions using the substitution method, find their time complexity and their values for T(32). Function 1 is defined as: T(1) = 1 T(n) = T(n-1) + n for n > 1 I started off by listing the first few executions: T(2) = T(2-1)+2 = 1+2 T(3) = T(3-1)+3 = 1+2+3 T(4) = T(4-1)+4 = 1+2+3+4 T(5) = T(5-1)+5 = 1+2+3+4+5 ... T(n) = 1+2+...+(n-1)+n = n(n+1)/2 Then I proved it by induction: the base case T(1) = 1 agrees with the formula for the sum of the first n natural numbers, and if the formula holds for n, it also holds for n+1. It was pretty clear to me, but I am not sure whether this is the substitution method. Also, knowing the formula T(n) = n(n+1)/2, I easily calculated T(32) = 528 and determined the time complexity, which is O(n^2). In examples (2) and (3) I only need a solution for n=2^k where k is a natural number, but it would be nice if you recommended any articles showing how to get these for all n as well (but I suppose that is much harder). Function 2 is defined as: T(1) = 0 T(n) = T(n/2) + 1 for even n > 1 T(n) = T((n+1)/2) + 1 for odd n > 1 As I was allowed to prove it only for n=2^k, and based on my gained knowledge, I tried to do it the following way: T(n) = T(n/2) + 1 = T(n/4) + 1 + 1 = T(n/4) + 2 = T(n/8) + 1 + 2 = T(n/8) + 3 = T(n/16) + 1 + 3 = T(n/16) + 4 = T(n/2^i) + i // where i <= k, according to tutorials And this is the moment where I get stuck and cannot proceed further. I suppose that my calculations are correct, but I am not sure how I should look for a formula which would satisfy this function. After I get the right formula, calculating T(32) or the time complexity will not be a problem.
Function 3 is defined as: T(1) = 1 T(n) = 2T(n/2) + 1 for even n > 1 T(n) = T((n – 1)/2) + T((n+1)/2) + 1 for odd n > 1 My calculations: T(n) = 2T(n/2) + 1 = 2(2T(n/4)+1) + 1 = 4T(n/4) + 3 = 4(2T(n/8)+1) + 3 = 8T(n/8) + 7 = iT(n/2^i) + 2^i - 1 And again it comes down to the formula, which I am not sure how to rewrite. Basically, does the substitution method for solving recurrences mean finding an iterative formula? After restudying the topic I found what I did wrong and, so as not to leave my question unanswered, I will try to answer it. The first function is calculated well, and the induction proof is also correct - nothing to add here. When it comes to the second function, where I got stuck, I did not pay attention to the fact that I was actually using the substitution n=2^k. This is how it should look: T(n) = T(n/2) + 1 = T(n/4) + 1 + 1 = T(n/4) + 2 = T(n/8) + 1 + 2 = T(n/8) + 3 = T(n/16) + 1 + 3 = T(n/16) + 4 = T(n/2^k) + k = T(1) + k = k Induction proof that T(2^k) = k works: Base case: k=1, then T(2^1) = T(2) = 1. (It cannot be k=0, as 2^0 is not bigger than 1.) Inductive step: assume T(2^k) = k; we want to show T(2^(k+1)) = k+1. As 2^k=n, then 2^(k+1) = 2*2^k = 2n. T(2n) = T(n) + 1 = T(n/2) + 1 + 1 = T(n/4) + 2 + 1 = T(n/8) + 3 + 1 = T(1) + k + 1 = k + 1. Time complexity: O(log n) T(32) = T(2^5) = 5 In the third function I missed that every time the function called itself, the coefficient doubled. T(n) = 2T(n/2) + 1 = 2(2T(n/4)+1) + 1 = 4T(n/4) + 3 = 4(2T(n/8)+1) + 3 = 8T(n/8) + 7 = 8(2T(n/16)+1) + 7 = 16T(n/16) + 15 = 16(2T(n/32)+1) + 15 = 32T(n/32) + 31 = 2^k*T(n/2^k) + 2^k - 1 = 2^k*T(1) + 2^k - 1 = 2^k + 2^k - 1 = 2^(k+1) - 1 Induction proof that T(2^k) = 2^(k+1)-1 works: Base case: k=1, then T(2^1) = T(2) = 3. The original function gives T(2) = 2T(1)+1 = 2+1 = 3, so the base case is true. Inductive step: assume T(2^k) = 2^(k+1)-1; we want to show T(2^(k+1)) = 2^(k+2)-1. Similarly, as in the second function, 2^k=n, so 2^(k+1) = 2*2^k = 2n.
T(2n) = 2T(n) + 1 = 2(2T(n/2)+1) + 1 = 4T(n/2) + 3 = 4(2T(n/4)+1) + 3 = 8T(n/4) + 7 = 8(2T(n/8)+1) + 7 = 16T(n/8) + 15 = ... = 2^(k+1)*T(1) + 2^(k+1) - 1 = 2^(k+1) + 2^(k+1) - 1 = 2*2^(k+1) - 1 = 2^(k+2) - 1. We could also take a look at the first few values of T(n), which are 1, 3, 5, 7, 9, etc., so T(n) = 2n-1 Time complexity: O(n) T(32) = T(2^5) = 2^(5+1) - 1 = 64 - 1 = 63
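The closed forms derived above are easy to sanity-check numerically against the original recursive definitions (a sketch; the function names t1, t2, t3 are mine):

```python
# Numeric check of the closed forms: T1(n) = n(n+1)/2, T2(2^k) = k,
# T3(2^k) = 2^(k+1) - 1 (and T3(n) = 2n - 1 in general).
from functools import lru_cache

@lru_cache(maxsize=None)
def t1(n):  # T(1)=1, T(n)=T(n-1)+n
    return 1 if n == 1 else t1(n - 1) + n

@lru_cache(maxsize=None)
def t2(n):  # T(1)=0; T(n/2)+1 for even n, T((n+1)/2)+1 for odd n
    return 0 if n == 1 else t2((n + 1) // 2) + 1   # (n+1)//2 covers both cases

@lru_cache(maxsize=None)
def t3(n):  # T(1)=1; 2T(n/2)+1 for even n, T((n-1)/2)+T((n+1)/2)+1 for odd n
    return 1 if n == 1 else t3(n // 2) + t3((n + 1) // 2) + 1
```

Such a check is no substitute for the induction proofs, but it catches slips in the unrolling (a wrong coefficient or off-by-one in the constant term) immediately.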
Racetrack plotter My Racetrack is just that. A Racetrack. You can't race it (yet) because I had trouble with collision detection, but I wanted to share it anyway. It creates a base polygon by using numpy and shapely. matplotlib (together with descartes) does the plotting and cPickle is used to write states to file so the exact same track can be plotted later. This is my first Python script using argparse instead of sys for argument handling. I'm especially interested in whether the way I implemented it is up to par. The way it's structured is probably not the best either, and as always I'm not too fond of my naming. I stuck to -r and -w for reading and writing because it seemed intuitive, but deeper in the code load_state and save_state are used. I'm not sure this is acceptable and which of the two I should stick with (if either at all). Under the argument parsing I have a couple of BOLD_SNAKE_CASE variables which are pseudo constants. There's probably a better way to do this. Some of those can be freely changed by the user, others shouldn't. I think it's self explanatory enough, but feel free to comment. As said, it was supposed to be part of an actual Racetrack game. So extendability is important. I like extra features, but those are a pain in the behind to implement if the code isn't modular.
import numpy as np import matplotlib.pyplot as plt import shapely.geometry as sg from argparse import ArgumentParser from descartes.patch import PolygonPatch from cPickle import load, dump # Argument handling parser = ArgumentParser(description='Racetrack') parser.add_argument( 'InnerAmplitude', type=float, help='For example 0.1' ) parser.add_argument( 'OuterAmplitude', type=float, help='For example 0.2, should be higher than InnerAmplitude' ) parser.add_argument( "-v", "--verbose", action="store_true", help="increase output verbosity" ) mutex = parser.add_mutually_exclusive_group() mutex.add_argument( "-r", "--read", action="store_true", help="read state from file" ) mutex.add_argument( "-w", "--write", action="store_true", help="write state from file" ) args = parser.parse_args() # Define size of inner and outer bounds, between those is the Racetrack INNER_AMPLITUDE = args.InnerAmplitude OUTER_AMPLITUDE = args.OuterAmplitude DIAMETER = OUTER_AMPLITUDE - INNER_AMPLITUDE SIZE = 1.5 OUTER_WIDTH = 2 MINIMUM_POINTS = 5 MAXIMUM_POINTS = 15 LOAD_FILE = "state_file" SAVE_FILE = LOAD_FILE def load_state(): with open(LOAD_FILE, "r") as file: np.random.set_state(load(file)) def save_state(): with open(SAVE_FILE, "w") as file: dump(np.random.get_state(), file) # Possibility to fix seed if args.verbose: print (np.random.get_state()) if args.read: load_state() elif args.write: save_state() # A function to produce a pseudo-random polygon def random_polygon(): nr_points = np.random.randint(MINIMUM_POINTS, MAXIMUM_POINTS) angle = np.sort(np.random.rand(nr_points) * 2 * np.pi) dist = 0.3 * np.random.rand(nr_points) + 0.5 return np.vstack((np.cos(angle)*dist, np.sin(angle)*dist)).T # Base polygon poly = random_polygon() # Create a shapely ring object from base polygon inner_ring = sg.LinearRing(poly) outer_ring_inside = inner_ring.parallel_offset(DIAMETER, 'right', join_style=2, mitre_limit=10.) 
outer_ring_outside = inner_ring.parallel_offset(OUTER_WIDTH * DIAMETER, 'right', join_style=2, mitre_limit=10.) # Revert the third ring. This is necessary to use it to produce a hole outer_ring_outside.coords = list(outer_ring_outside.coords)[::-1] # Inner and outer polygon inner_poly = sg.Polygon(inner_ring) outer_poly = sg.Polygon(outer_ring_inside, [outer_ring_outside]) # Create the figure fig, ax = plt.subplots(1) # Convert inner and outer polygon to matplotlib patches and add them to the axes ax.add_patch(PolygonPatch(inner_poly, facecolor=(0, 1, 0, 0.4), edgecolor=(0, 1, 0, 1), linewidth=3)) ax.add_patch(PolygonPatch(outer_poly, facecolor=(1, 0, 0, 0.4), edgecolor=(1, 0, 0, 1), linewidth=3)) # Finalization ax.set_aspect(1) plt.title("Racetrack") plt.axis([-SIZE, SIZE, -SIZE, SIZE]) plt.grid() plt.show() Example usage: python racetrack.py -w 0.1 0.25 Example output: Not all tracks are playable, since there is nothing checking whether the angles are too sharp. This isn't considered a problem at the moment. Is the racetrack the red area? Then what does the green area mean? @mkrieger1 The white area is the racetrack. The green and red are borders. This is my first Python script using argparse instead of sys for argument handling. I'm especially interested in whether the way I implemented it is up to par. You nailed it :-) Under the argument parsing I have a couple of BOLD_SNAKE_CASE variables which are pseudo constants. There's probably a better way to do this. That's the common practice in Python, and it's fine like that. Except for these: INNER_AMPLITUDE = args.InnerAmplitude OUTER_AMPLITUDE = args.OuterAmplitude DIAMETER = OUTER_AMPLITUDE - INNER_AMPLITUDE As these are values derived from the command line arguments. The parsing logic would be better in a main() function, which will determine the value of diameter, and pass it to a plot(diameter) function that will take care of the plotting logic. 
The biggest issue I see is with the layout: Imports Argument parser Constants Some functions Argument handling Another function The main plotting logic I suggest to reorganize like this: Imports Constants Helper functions def plot(): The main plotting logic def main(): Argument parser Argument handling if __name__ == '__main__': guard that simply calls main() After this change, I think it will look a lot better and clearer.
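To make the suggested layout concrete, here is a minimal skeleton (the function names follow the review's suggestion; the plot() body is a placeholder rather than the real drawing code, so it just returns the diameter to stay testable):

```python
# Sketch of the reorganized layout: constants at the top, plotting logic in
# plot(), argument handling in main(), and a __main__ guard at the bottom.
from argparse import ArgumentParser

SIZE = 1.5  # module-level constants stay at the top


def plot(diameter):
    # ... build the rings/polygons and draw with matplotlib (omitted here) ...
    return diameter  # placeholder so the sketch can be exercised


def main(argv=None):
    parser = ArgumentParser(description='Racetrack')
    parser.add_argument('InnerAmplitude', type=float, help='For example 0.1')
    parser.add_argument('OuterAmplitude', type=float, help='For example 0.2')
    args = parser.parse_args(argv)
    diameter = args.OuterAmplitude - args.InnerAmplitude
    return plot(diameter)


if __name__ == '__main__':
    main()
```

Passing argv explicitly to main() also makes the argument handling unit-testable without touching sys.argv.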
How can I install Tilix on Deepin 15.11 properly? I installed Tilix by using: wget https://github.com/gnunn1/tilix/releases/download/1.9.3/tilix.zip -P $HOME/Downloads sudo unzip $HOME/Downloads/tilix.zip -d / sudo glib-compile-schemas /usr/share/glib-2.0/schemas/ After running tilix in Terminal I get: object.Exception@../../../.dub/packages/gtk-d-3.8.5/gtk-d/generated/gtkd/gtkd/Loader.d(125): Library load failed (libvte-2.91.so.0): libvte-2.91.so.0: cannot open shared object file: No such file or directory I went for the manual installation because sudo apt install tilix leads to E: Unable to locate package tilix How can I solve this problem? Edit: Other approaches. Doing sudo apt update && sudo apt install tilix returns the same error, and apt-cache search tilix returns nothing. Content of /etc/apt/sources.list: ## Generated by deepin-installer deb [by-hash=force] http://packages.deepin.com/deepin lion main contrib non-free # deb-src http://packages.deepin.com/deepin lion main contrib non-free Files in /etc/apt/sources.list.d: brave-browser-release.list sublime-text.list transmissionbt-ubuntu-ppa-groovy.list brave-browser-release.list.save sublime-text.list.save transmissionbt-ubuntu-ppa-groovy.list.save Doing wget http://packages.deepin.com/deepin/pool/main/t/tilix/tilix-common_1.7.9-2_all.deb wget http://packages.deepin.com/deepin/pool/main/t/tilix/tilix_1.7.9-2_amd64.deb sudo dpkg -i tilix-common_1.7.9-2_all.deb sudo dpkg -i tilix_1.7.9-2_amd64.deb sudo apt install -f leads to: dpkg: dependency problems prevent configuration of tilix: tilix depends on libgtkd-3-0 (>= 3.8.2); however: Package libgtkd-3-0 is not installed. tilix depends on libphobos2-ldc-shared78 (>= 1:1.8.0); however: Package libphobos2-ldc-shared78 is not installed. tilix depends on libvted-3-0 (>= 3.8.2); however: Package libvted-3-0 is not installed. 
Solution I did sudo apt install apt-file sudo apt-file update And apt-file search libvte-2.91.so.0 shows libvte-2.91-0: /usr/lib/x86_64-linux-gnu/libvte-2.91.so.0 libvte-2.91-0: /usr/lib/x86_64-linux-gnu/libvte-2.91.so.0.4600.1 After this, the following worked: wget https://github.com/gnunn1/tilix/releases/download/1.9.3/tilix.zip -P $HOME/Downloads sudo unzip $HOME/Downloads/tilix.zip -d / sudo glib-compile-schemas /usr/share/glib-2.0/schemas/ Tilix is actually available in the Deepin repositories: http://packages.deepin.com/deepin/pool/main/t/tilix/ (you could just download the .deb from there and install with dpkg, but this really shouldn't be necessary). First, try running sudo apt update and then sudo apt install tilix. Does that work? If not, please [edit] your question and show us the contents of your /etc/apt/sources.list and any files in the /etc/apt/sources.list.d directory. Also show us the output of apt-cache search tilix after running apt update. Hello. I've updated the question. I deleted my answer since it doesn't solve your issue. It looks like the package isn't actually available for your version of tilix after all. The only thing I can suggest is to [edit] your question to include the output of the commands we tried together and hope someone else can help you.
An Excel Transpose between 2 specific symbols I'm trying to organize catalog records exported from a very outdated program my library is using, to get it ready to import into the new catalog. The records come out like this: ~#[K11 title[Yada Yada date[19xx Entry body text Entry body text Volume:1 Location: Outer Mongolia ] And I'd like them to look like this, all on one line, transposed: ~#[K11 title[Yada Yada date[19xx Entry body text Entry body text Volume:1 Location: Outer Mongolia ] The records may or may not have all fields, but they all start with '#[' and they all end with ']'. Since these are the only times these symbols show up, I was trying to write a macro that would go down column A, look for those symbols, and transpose everything between them. But I'm not good enough; any help would be greatly appreciated! Edit: I am starting from code as answered so nicely by @Excellll in another post, and this is where I am so far: Dim n As Long n = 30000 For i = 1 To n Step 5 Range("A1:A5").Offset(i - 1, 0).Select Selection.Copy Range("B10").Offset((i - 1) / 5 + 1, 0).Select Selection.PasteSpecial Paste:=xlPasteAll, Operation:=xlNone, SkipBlanks:= _ False, Transpose:=True Next i However, each entry is not 5 lines long so I can't use Step 5; also my data is not contiguous so I can't use COUNTA. Any suggestions for a step of variable size between the two specific characters? You have the right idea; you will need a macro to loop through all the data and reformat it, but if you don't have anything to start with, you are kinda just asking someone to do it all for you. I have worked out how to transpose the data when I manually select it: Selection.Copy ActiveCell.Offset(0, 2).Range("A1").Select Selection.PasteSpecial Paste:=xlPasteAll, Operation:=xlNone, SkipBlanks:= _ False, Transpose:=True End Sub Now I'm working on how to define the selection as everything between '#[.....]'. What is the format of your data?
If it's .csv then I'd consider formatting the data in another tool (e.g. Notepad++) using regex. That's much easier.

The macro below transforms the input data into the format you've specified. Hopefully the remaining records of your dataset have a similar structure; otherwise there's of course room for modifications. I wasn't quite sure about the ~ sign, as it appears in the input but not in your further description. This can be addressed by modifying the startString variable.

Option Explicit

Sub transpose()
    Dim i As Long
    Dim noOfRows As Long
    Dim bc As String 'blank cell replacement
    Dim startString As String
    Dim endString As String
    Dim record As String
    Dim j As Long 'where to print record - row number
    Const c As Long = 3 'where to print record - column number
    Dim sheetname As String
    Dim currentCellValue As String
    Dim previousCellValue As String 'it is used to ignore multiple consecutive empty cells

    startString = "~#["
    endString = "]"
    bc = " "
    j = 1
    sheetname = ActiveSheet.Name

    'number of rows used in the spreadsheet, including blanks in between
    For i = Worksheets(sheetname).Rows.Count To 1 Step -1
        If Cells(i, 1).Value <> "" Then
            Exit For
        End If
    Next i
    noOfRows = i

    'loop through all rows
    For i = 1 To noOfRows
        currentCellValue = Cells(i, 1).Value
        'check if startsWith
        If InStr(Trim(currentCellValue), startString) = 1 Then
            record = currentCellValue
        'check if endsWith
        ElseIf Len(Trim(currentCellValue)) > 0 And Len(Trim(currentCellValue)) = InStrRev(Trim(currentCellValue), endString) Then
            record = record + currentCellValue
            'prints output records to the worksheet
            Cells(j, c).Value = record
            j = j + 1
            Debug.Print record
        ElseIf Len(Trim(currentCellValue)) = 0 And Len(Trim(previousCellValue)) > 0 Then
            record = record + bc
        'non blank cells which are between start and end strings
        ElseIf Len(Trim(currentCellValue)) > 0 Then
            record = record + currentCellValue
        End If
        previousCellValue = currentCellValue
    Next i
End Sub
Programmatically setting d3 zoom transform

I have both an image and a linear scale that I need to zoom in/out. I have done this by getting the Y domain of the zoom transform and successfully zooming both that way:

y.domain(zoomTransform.rescaleY(y).domain());

But is it possible to do the reverse? Set the zoomTransform.k and zoomTransform.y values based on the Y domain of the scale? And in that way allow the user to explicitly set the zoom domain?

I ended up creating loops to gently modify the zoomTransform.y and k values and then fetching the y.invert(0) and y.invert(height) values to check whether the result was closer to the wanted range on every iteration. I had to put another loop on top of these, as changing the y affects the k and changing the k affects the y... Not very pretty, but it seems to do the job.
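Instead of iterating, the transform can be computed in closed form: d3's transform.rescaleY(y) maps each range value r through (r - ty) / k before inverting, so given a target domain [a, b] you can solve for k and ty directly. The sketch below uses a minimal stand-in for a d3 linear scale so the math is self-contained — makeLinearScale and transformForDomain are illustrative names, not d3 API:

```javascript
// Minimal d3-style linear scale: domain [d0, d1] -> range [r0, r1].
function makeLinearScale(domain, range) {
  const [d0, d1] = domain, [r0, r1] = range;
  const scale = v => r0 + (v - d0) * (r1 - r0) / (d1 - d0);
  scale.invert = r => d0 + (r - r0) * (d1 - d0) / (r1 - r0);
  scale.range = () => [r0, r1];
  return scale;
}

// Given the base scale and a desired visible domain [a, b], return {k, y}
// such that transform.rescaleY(base) would have domain [a, b].
// Derivation: rescaleY inverts (r - ty) / k, so
//   base(a) = (r0 - ty) / k  and  base(b) = (r1 - ty) / k.
function transformForDomain(base, a, b) {
  const [r0, r1] = base.range();
  const k = (r1 - r0) / (base(b) - base(a));
  const ty = r0 - k * base(a);
  return { k, y: ty };
}
```

With d3 itself, the result can then be applied as zoom.transform with d3.zoomIdentity.translate(0, t.y).scale(t.k), which (if I read the transform composition rules correctly) produces a transform whose k and y fields equal the computed values.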
Odd interaction of secnumdepth and section-based counters

In the following MWE:

\documentclass{book}
\newcounter{exnum}[subsection]
\begin{document}
\chapter*{Chapter}
\section*{Section 1}
\subsection{Subsection A}
\setcounter{exnum}{1}
Exnum is \theexnum.
\subsection{Subsection B}
Exnum is \theexnum.
\end{document}

I get the output I expected; namely, the line printed in Subsection A is Exnum is 1 and in Subsection B is Exnum is 0. However, in the actual code, I didn't want subsection numbers printed, so I changed both occurrences of \subsection to \subsection*. The output then consisted of Exnum is 1 in both subsections. As a workaround, starting from the code above, I instead added the line \setcounter{secnumdepth}{1} at the top, just after the documentclass, to prevent printing the subsection numbers, but left the \subsections unstarred. Again the output was Exnum is 1 in both subsections. What is going on here? Are these two manifestations of the same failure, or are they different problems? And, what don't I understand about what secnumdepth does? I thought it simply prevented printing of section numbers, but clearly it does much more than that.

Well, if the subsection counter is not incremented (and that will be the case if you use \subsection* or \setcounter{secnumdepth}{1}) then your counter won't be reset. Using the titlesec package you can redefine your subsections to suppress the numbering from the document, but to still internally increase the counter, so your new counter will still be reset when a new subsection is created:

\documentclass{book}
\usepackage{titlesec}
\newcounter{exnum}[subsection]

\titleformat{\subsection}
  {\normalfont\large\bfseries}{}{0pt}{}

\begin{document}
\chapter*{Chapter}
\section*{Section 1}
\subsection{Subsection A}
\setcounter{exnum}{1}
Exnum is \theexnum.
\subsection{Subsection B}
Exnum is \theexnum.
\end{document}

Interesting.
The documentation for secnumdepth (and subsection*) doesn't mention actually not incrementing the counter - it just says the counter is not printed. Anyway, this worked fine. Thanks.

Change the way numbers are printed:

\makeatletter
\def\@seccntformat#1{%
  \csname @seccntformat#1\expandafter\endcsname\csname the#1\endcsname\quad}
\def\@seccntformatsubsection#1#2{}
\makeatother

The usual definition is

\def\@seccntformat#1{\csname the#1\endcsname\quad}

and we simply add a new command before this; since we are defining only \@seccntformatsubsection, when LaTeX tries \@seccntformatsection it will consider it as \relax; when it wants to typeset a subsection, it will be presented (after the first expansion) with \@seccntformatsubsection\thesubsection\quad and \@seccntformatsubsection will gobble the two tokens. Setting \setcounter{secnumdepth}{2} will "number" the subsections, but \subsection{A title} will eventually print without the number.
How can I check for a missing configuration section when using the Options pattern?

I have a simple controller that reads some configuration from appsettings.json using the Options pattern. It works fine as long as appsettings.json is configured correctly. However, what if my configuration section is missing from appsettings.json? I hoped to get an Exception or null, but instead I get a MySettings object with default values (i.e. MyProperty is null).

MyController.cs

public class MyController : Controller
{
    private readonly string value;

    public MyController(IOptions<MySettings> options)
    {
        // Can I validate the `options` object here?
        this.value = options.Value.MyProperty;
    }

    [HttpGet("api/data")]
    public ActionResult<string> GetValue()
    {
        return this.Ok(this.value);
    }
}

MySettings.cs

public class MySettings
{
    public string MyProperty { get; set; }
}

Startup.cs (only showing relevant code)

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        // Tried this:
        services.Configure<MySettings>(Configuration.GetSection("MySettings"));

        // Tried this too:
        services.Configure<MySettings>(options => Configuration.GetSection("MySettings").Bind(options));
    }
}

appsettings.json

{
  "MySettings": {
    "MyProperty": "hello world"
  }
}

And what do you want to validate? I want to make sure (1) the MySettings section is present in appsettings.json and (2) all properties are set in that section. I found one possible solution on this SO answer.
Add an extension method to validate:

public static class ConfigurationExtensions
{
    public static T GetValid<T>(this IConfiguration configuration)
    {
        var config = configuration.Get<T>();

        try
        {
            Validator.ValidateObject(config, new ValidationContext(config), true);
        }
        catch (Exception e)
        {
            throw new NotSupportedException("Invalid Configuration", e);
        }

        return config;
    }
}

Use that extension method to validate & register the instance in Startup.cs:

var settings = Configuration.GetSection("MySettings").GetValid<MySettings>();
services.AddSingleton(settings);

Change the controller constructor to accept the MySettings object directly (instead of IOptions<MySettings>):

public MyController(MySettings options)
{
    this.value = options.MyProperty;
}

Another nice consequence of this approach is that it removes our dependency on IOptions, which simplifies the controller a bit.

You can enforce it in the following way:

Add data annotations to your model:

public class MySettings
{
    [Required]
    public string MyProperty { get; set; }
}

Call ValidateDataAnnotations after the Bind call:

services
    .Configure<MySettings>(options => Configuration
        .GetSection("MySettings")
        .Bind(options))
    .ValidateDataAnnotations();

Wrap the property access in a try/catch where you expect OptionsValidationException:

try
{
    this.value = options.Value.MyProperty;
}
catch (OptionsValidationException ex)
{
    //handle the validation exception
}

This solution works after changing #2 to:

services.AddOptions<MySettings>().Bind(Configuration.GetSection("MySettings")).ValidateDataAnnotations();

@srk Yes you are right, let me amend my post.
Why can't we create an object in Qt without the new keyword (i.e. on the stack)?

Why can't we create an object in Qt without the new keyword? Usually we create a pointer to an object, like this:

QLabel *ql = new QLabel();
ql->show();

But I want to create an object like this:

QLabel ql = QLabel();
ql.show();

Is it possible?

The thing is that the Qt controls (labels, buttons) are in a hierarchy (buttons belong to forms, for example). And the way Qt is implemented requires that when an object is destroyed, all objects that belong to it are destroyed as well. If you place objects on the stack (that's how "creating without the new keyword" is really called), they will be destroyed automatically. That's a property of C++, and it holds for all programs. Here's how things would work if you allocated your label on the stack:

{
    QLabel ql = QLabel(some_form);
    ql.show();
} // scope ends, ql is deleted

delete some_form; // ql will be deleted here as well
                  // but it's already dead!
                  // Program crashes!

Such stack allocation would mean that when you destroy the object the label belongs to, the label might already have been destroyed. It will make your program crash.

Actually, you can sometimes create objects on the stack. In the main function, you may allocate your "main control" (usually the main window) on the stack. The thing is that this object won't be destroyed during the program execution, so it can safely be on the stack until main exits--i.e. the program terminates. Here's a quote from the Qt tutorial:

#include <QApplication>
#include <QPushButton>
#include <QTranslator>

int main(int argc, char *argv[])
{
    QApplication app(argc, argv);

    QTranslator translator;
    translator.load("hellotr_la");
    app.installTranslator(&translator);

    QPushButton hello(QPushButton::tr("Hello world!"));
    hello.resize(100, 30);

    hello.show();
    return app.exec();
}

There is a brief explanation from Qt on this as well, here: http://doc.qt.nokia.com/4.6/objecttrees.html

Change QLabel ql=QLabel(); to QLabel ql; and read some C++ book.
QLabel ql; creates a QLabel on the stack. Note that it just lives until the current scope exits. It's only created on the stack if this code is in a function. As a global it will be created on the heap.
Log Parser 2.2 Registry Wildcard Search

I'm trying to figure out if there is a way to do a wildcard registry search using Log Parser 2.2. A sample of what I'm trying to do:

try
{
    LogQuery qry = new LogQuery();
    RegistryInputFormat registryFormat = new RegistryInputFormat();
    string query = @"SELECT Path FROM \HKCU\Software WHERE Value='%keyword%'";
    rs = qry.Execute(query, registryFormat);
    for (; !rs.atEnd(); rs.moveNext())
        listBox1.Items.Add(rs.getRecord().toNativeString(","));
}
finally
{
    rs.close();
}

WHERE Value='%keyword%' does not seem to work: it searches for the literal string %keyword% instead of treating the percent signs as wildcards.

Okay nevermind, got it figured out:

RegRecordSet rs = null;
try
{
    LogQuery qry = new LogQuery();
    RegistryInputFormat registryFormat = new RegistryInputFormat();
    string query = @"SELECT Path FROM \HKCU\Software WHERE Value LIKE '%keyword%'";
    rs = qry.Execute(query, registryFormat);
    for (; !rs.atEnd(); rs.moveNext())
        listBox1.Items.Add(rs.getRecord().toNativeString(","));
}
finally
{
    rs.close();
}
R Markdown 'awesome cv' fails to knit: revision 70721 of the texlive-scripts package seems to be older than the local installation

When I try to knit the 'awesomecv' template, or any of the cv templates in R, I am getting this error:

tlmgr.pl: Remote database (revision 70721 of the texlive-scripts package) seems to be older than the local installation (rev 70727 of texlive-scripts); please use a different mirror and/or wait a day or two.

I'm using Windows 11, R 4.3.3, RStudio 2023.12.0 Build 369, vitae 0.5.3, and tinytex 0.50. Any suggestions? Know of any other cv packages? I'm told 'package datadrivencv is not available for this version of R'.

I tried:

The debugging tips suggested by Yihui.org, here: https://yihui.org/tinytex/r/#debugging. That is, upgrading all packages, and reinstalling tinytex.
Downgrading to vitae 0.5.3 instead of 0.5.4, as suggested here: https://github.com/mitchelloharawild/vitae/issues/248
Reading this Substack solution, but it's over my head - and old: TinyTex errors within RMarkdown (Local TeX Live (2021) is older than remote repository (2022))
I also read related questions already on Stack Overflow to see if my question is a duplicate.
How to verify multiple OTPs using the prompt in browser?

I want to use the snippet below to verify multiple OTPs. If one OTP is correct, the user should be asked the next question. After 4 questions, show greetings.

var question = prompt('Who shot Abraham Lincoln?');
switch (question) {
  case 'john wilkes booth':
  case 'John Booth':
  case 'John Wilkes Booth':
    alert("That\'s Right!");
    window.location.href = 'q2.html';
    break;
  default:
    alert("Sorry, that\'s not right.");
    alert('Please try again');
    history.refresh();
    break;
}

Need help to re-build the code above.

How are the questions/answers related to OTP verification? The questions are person names, for example: "Enter Name 1 otp"

Solution below:

var verificationStatus = 'unverified';

function questionnaire(questions, answers) {
  // Proceed only if # of questions and answers are equal
  if (questions && answers && questions.length === answers.length) {
    questions.forEach(function(question, index) {
      // Prompt only if verificationStatus has not been marked false already
      if (verificationStatus !== false) {
        var userInput = prompt(question);
        switch (userInput) {
          case answers[index]:
            verificationStatus = true;
            break;
          default:
            verificationStatus = false;
            break;
        }
      }
    });
  }

  if (verificationStatus) {
    alert('Greetings, Verification Successful');
  } else {
    alert('Sorry, Verification Failed');
  }
}

// Please note # of questions and answers must be equal
questionnaire(['Q1', 'Q2', 'Q3', 'Q4'], ['1', '2', '3', '4']);

Behaviour

The snippet above asks 4 questions, to which the answers are 1, 2, 3, 4 respectively. If at any point a wrong answer is given, no more questions are asked. A message (greetings) is displayed at the end. Hope it helps!
Firestore function .onCreate trigger

I am sending a notification using .onCreate on a collection. My collection is tasks. When a new task is given, I need to send a notification to the assigned person. Do I need to save the token of this assigned person? I can only fetch those values which are getting saved in the document. The value of the token is in the users collection. Is there a way to use a .where condition in functions?

My code so far [saving token of assigned person in assignTo field with every new task entry]:

.document('todos/{todosId}')
.onCreate(
  async (snapshot: {
    data: () => {
      (): any;
      new(): any;
      token: any;
      assignTo: any;
      text: any;
      name: any;
      profilePicUrl: any;
    };
  }) => {
    // Notification details.
    const text = snapshot.data();
    const payload = {
      notification: {
        title: text.name,
        body: 'Notification body',
        icon: 'https://img.icons8.com/material/4ac144/256/user-male.png',
        click_action: `https://google.com`,
      }
    };
    const subscriber = text.token;
    return admin.messaging().sendToDevice(subscriber, payload)
  });

Is a .where condition possible? Like: collection 'users', get token where assignTo == dwqhdiu78798u

Edit

As guided, here is my code, but the function gave an error:

// const subscriber = "evGBnI_klVQYSBIPMqJbx8:APA91bEV5xOEbPwF4vBJ7mHrOskCTpTRJx0cQrZ_uxa-QH8HLomXdSYixwRIvcA2AuBRh4B_2DDaY8hvj-TsFJG_Hb6LJt9sgbPrWkI-eo0Xtx2ZKttbIuja4NqajofmjgnubraIOb4_";
const query = admin.firestore().collection('Students').where('Name', '==', 'caca');
const querySnapshot = await query.get();
const subscriber = querySnapshot.doc.data().token;
return admin.messaging().sendToDevice(subscriber, payload)
});

Screenshot attached: Error in function:

If your question is "In a Cloud Function, can I fetch Firestore data through a Query containing a where() clause?", the answer is yes.
You should use the Admin SDK, like you do for sending the notification, as follows:

const query = admin.firestore().collection('users').where('assignTo', '==', 'dwqhdiu78798u');
const querySnapshot = await query.get();
//....

Update following your comment and question edit

You get an error on the following line (in the code added to your question):

const subscriber = querySnapshot.doc.data().token;

This is because a QuerySnapshot does not have a doc property. The QuerySnapshot contains one or more DocumentSnapshots. You should either use the docs array or use forEach().

Thanks for the reply. I updated my question with the screenshot of the db and the code I tried; it gives an error in the function; I added a screenshot of the error too. All working. In the logs I see this message - Billing account not configured. External network is not accessible and quotas are severely limited. Configure billing account to remove these restrictions. Does this mean I cannot call any external API, like for SMS or email? You need to be on the Blaze plan, see https://stackoverflow.com/a/52942121/3371862
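To make the answer's last point concrete, here is a hedged sketch of collecting the token field from each matched document. extractTokens is an illustrative helper, not part of the Admin SDK; it relies only on the forEach and data() methods a QuerySnapshot exposes:

```javascript
// Collect the `token` field from every document in a query snapshot.
// Works with any object exposing a QuerySnapshot-style forEach().
function extractTokens(querySnapshot) {
  const tokens = [];
  querySnapshot.forEach(doc => {
    const token = doc.data().token;
    if (token) tokens.push(token);
  });
  return tokens;
}
```

In the Cloud Function this would become return admin.messaging().sendToDevice(extractTokens(querySnapshot), payload); — sendToDevice accepts an array of registration tokens as well as a single one.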
Problem 3-38 in Spivak's Calculus on Manifolds

This is not homework. Problem 3-38 reads: Let $A_{n}$ be a closed set contained in $(n,n+1)$. Suppose that $f:\mathbb{R}\rightarrow \mathbb{R}$ satisfies $\int_{A_{n}}f=(-1)^{n}/n$ and $f(x)=0$ for $x\notin$ any $A_{n}$. Find two partitions of unity $\Phi$ and $\Psi$ such that $\sum_{\phi\in\Phi}\int_{\mathbb{R}}\phi\cdot f$ and $\sum_{\psi\in\Psi}\int_{\mathbb{R}}\psi\cdot f$ converge absolutely to different values. A few observations: First, $n\ge 1$. Second, Spivak uses what he calls an extended integral, whose definition and relations with the usual integral can be found on p.65, which can be found here or here. It may be helpful to have an example of such a function in mind. Let $A_{n}=$ the closed interval of length $1/2n$ centered at the point $(2n+1)/2$. Clearly $A_{n}\subset(n,n+1)$. Then define $$f(x)=\begin{cases} \hphantom{-}2& \text{if $x\in A_{n}$ for $n$ even}\\ -2& \text{if $x\in A_{n}$ for $n$ odd}\\ \hphantom{-}0& \text{otherwise}. \end{cases}$$ A possible approach: Let $a_{n}=(-1)^{n}/n$. Since $\sum_{n}a_{n}=\alpha\in\mathbb{R}$ but the convergence is conditional, for any $\beta\not=\alpha$ there is a rearrangement $\{b_{n}\}$ of the sequence $\{a_{n}\}$ such that $\sum_{n}b_{n}=\beta$. Now, we form a family of open sets $\{U_{n}\}$, where $U_{n}$ is the union of $n$ intervals $(k,k+1)$, each corresponding to a term of the $n$-th partial sum of $\sum_{n}a_{n}$. We form a similar family $\{V_{n}\}$ by looking at the partial sums of $\sum_{n}b_{n}$. E.g., if we let $\{b_{n}\}=\{-1,1/2,1/4,-1/3,1/6,1/8,-1/5,\ldots\}$ we have $V_{3}=(1,2)\cup(2,3)\cup(4,5)$, while since $\{a_{n}\}=\{-1,1/2,-1/3,1/4,\ldots\}$ we have $U_{3}=(1,2)\cup(2,3)\cup(3,4)$. If we slightly fatten up the $U_{n}$ (resp. the $V_{n}$) we form open covers $\mathcal{U}$ (resp. $\mathcal{V}$) of all the reals greater than or equal to 1, without adding points where $f$ is non-zero.
My heart tells me that partitions of unity $\Phi$ and $\Psi$ subordinate to $\mathcal{U}$ and $\mathcal{V}$, respectively, will be the desired ones. But alas, I am lost! Does anyone know how to show that the aforementioned partitions of unity are the desired ones? Other possible approaches to the solution are also welcome. In addition to the two posts linked above, related issues with other problems and statements about integration in Spivak's book can be found here and here.
Take a picture and reduce the quality

In my Android app, I have a button to take a picture, like this:

button_photo.setOnClickListener(new View.OnClickListener() {
    @Override
    public void onClick(View v) {
        Intent intent = new Intent(MediaStore.ACTION_IMAGE_CAPTURE);
        imageUri = Uri.fromFile(new File(Environment.getExternalStorageDirectory(),
                "fname_" + String.valueOf(System.currentTimeMillis()) + ".jpg"));
        intent.putExtra(MediaStore.EXTRA_OUTPUT, imageUri);
        startActivityForResult(intent, CAPTURE_IMAGE_ACTIVITY_REQUEST_CODE);
    }
});

and in my ActivityResult:

Intent intent = new Intent(this, UploadPicture.class);
intent.putExtra("URI", imageUri.toString());
startActivity(intent);

I would like to reduce the quality of my picture (I want a lighter picture for the server) before starting the upload activity. How can I do that? Thanks

Quality? Why? I would rather resize the image (if you don't need good quality, then you also don't need a resolution larger than 1MP (1280 x 960)).

Convert the image into a Bitmap before uploading, then compress the bitmap: set the quality of the image and write it to the correct path. The quality can be lowered, and this will help you reduce the size of the image.

bitmap.compress(Bitmap.CompressFormat.JPEG, 100, out);

The compress method is defined as

public boolean compress (Bitmap.CompressFormat format, int quality, OutputStream stream)

Note that quality runs from 0 to 100, so a value below 100 (e.g. 50) is what actually shrinks the JPEG; 100 keeps maximum quality. You can take help from this also: Bitmap.compress

Thanks! But what is the OutputStream stream parameter? See example 7 in this code
Java - subclass and superclass relationship

I'm working on a homework assignment for my Java course and I'm completely lost. I guess I don't fully understand subclass and superclass relationships, but I'm trying my best to trudge along. The assignment asks us to do this:

The class Stock.java represents purchased shares of a given stock. You want to create a type of object for stocks which pay dividends. The amount of dividends that each shareholder receives is proportional to the number of shares that person owns. Not every stock pays dividends, so you wouldn't want to add this functionality directly to the Stock class. Instead, you should create a new class called DividendStock that extends Stock and adds this new behavior. Each DividendStock object will inherit the symbol, total shares, and total cost from the Stock superclass. You will need to add a field to record the amount of the dividends paid. The dividend payments that are recorded should be considered to be profit for the stockholder. The overall profit of a DividendStock is equal to the profit from the stock's price plus any dividends. This amount is computed as the market value (number of shares times current price) minus the total cost paid for the shares, plus the amount of dividends paid.

Here is the Stock.java file:

/**
 * A Stock object represents purchases of shares of a
 * company's stock.
 */
public class Stock {
    private String symbol;
    private int totalShares;
    private double totalCost;

    /**
     * Initializes a new Stock with no shares purchased
     * @param symbol = the symbol for the trading shares
     */
    public Stock(String symbol) {
        this.symbol = symbol;
        totalShares = 0;
        totalCost = 0.0;
    }

    /**
     * Returns the total profit or loss earned on this stock
     * @param currentPrice = the price of the share on the stock exchange
     * @return
     */
    public double getProfit(double currentPrice) {
        double marketValue = totalShares * currentPrice;
        return marketValue - totalCost;
    }

    /**
     * Record purchase of the given shares at the given price
     * @param shares = the number of shares purchased
     * @param pricePerShare = the price paid for each share of stock
     */
    public void purchase(int shares, double pricePerShare) {
        totalShares += shares;
        totalCost += shares * pricePerShare;
    }

    public String getSymbol() {
        return symbol;
    }

    public void setSymbol(String symbol) {
        this.symbol = symbol;
    }

    public int getTotalShares() {
        return totalShares;
    }

    public void setTotalShares(int totalShares) {
        this.totalShares = totalShares;
    }

    public double getTotalCost() {
        return totalCost;
    }

    public void setTotalCost(double totalCost) {
        this.totalCost = totalCost;
    }
}

I have started working on a subclass called DividendStock.java, but I'm not sure what I'm missing and what I need to do to actually test if it's working or not. Does anyone have any tips?

public class DividendStock extends Stock {
    private double dividends;
    private double profit;

    public DividendStock(String symbol) {
        super(symbol);
        dividends = 0.0;
        profit = 0.0;
    }

    public double payDividend(double amountPerShare) {
        dividends += amountPerShare * getTotalShares();
        return dividends;
    }

    public double profit(double amountPerShare) {
        profit = super.getProfit(profit) + payDividend(amountPerShare);
        return profit;
    }
}

One tip: keep it simple if you don't understand it; that way you learn it best. Here is a working example for you.
class MySuperClass {
    protected String whoami;

    MySuperClass() {
        this.whoami = "I'm the Superclass";
    }

    void whoAmI() {
        System.out.println(whoami);
    }
}

class MySubClass extends MySuperClass {
    MySubClass() {
        super.whoami = "I'm the Subclass";
    }
}

public class TestWorld {
    public static void main(String[] args) {
        MySuperClass bigbro = new MySuperClass();
        MySubClass littlesis = new MySubClass();
        bigbro.whoAmI();
        littlesis.whoAmI();
    }
}
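Returning to the actual assignment, one common way to wire the pieces together — offered as a sketch, not the official solution — is to keep only a dividends field in the subclass and override getProfit, so the assignment's "market value minus cost, plus dividends" formula falls out of a single super call. Stock is trimmed here to the members the subclass needs; in the real assignment you would keep the full class from the question:

```java
// Trimmed version of the question's Stock class, just enough to run.
class Stock {
    private final String symbol;
    private int totalShares;
    private double totalCost;

    public Stock(String symbol) { this.symbol = symbol; }

    public void purchase(int shares, double pricePerShare) {
        totalShares += shares;
        totalCost += shares * pricePerShare;
    }

    public int getTotalShares() { return totalShares; }

    // profit = market value - total cost
    public double getProfit(double currentPrice) {
        return totalShares * currentPrice - totalCost;
    }
}

class DividendStock extends Stock {
    private double dividends; // total dividends paid so far

    public DividendStock(String symbol) {
        super(symbol);
    }

    // Record a dividend of amountPerShare for every share owned.
    public void payDividend(double amountPerShare) {
        dividends += amountPerShare * getTotalShares();
    }

    // Overall profit = (market value - total cost) + dividends.
    @Override
    public double getProfit(double currentPrice) {
        return super.getProfit(currentPrice) + dividends;
    }
}
```

A quick way to test it: buy 10 shares at $5, pay a $1-per-share dividend, and check that getProfit(6.0) returns 60 - 50 + 10 = 20 — including through a Stock-typed reference, which is the polymorphism the exercise is really about.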
Is it possible to add a custom `ChannelInboundHandler` in Micronaut?

I would like to add a custom ChannelInboundHandler in my Micronaut service so that I can listen for the SslHandshakeCompletionEvent produced after a TLS handshake has been attempted. I seem to be able to add a ChannelOutboundHandler simply enough by annotating it with @Singleton; however, when I try to do the same with a ChannelInboundHandler, it does not seem to be added to the pipeline. What's the correct way to do this?

Edit

This looks promising: https://docs.micronaut.io/snapshot/guide/index.html#nettyPipeline

You can create an implementation of BeanCreatedEventListener<ChannelPipelineCustomizer>, and provide an implementation of the onCreated method, e.g.

@Override
public ChannelPipelineCustomizer onCreated(BeanCreatedEvent<ChannelPipelineCustomizer> event) {
    ChannelPipelineCustomizer customizer = event.getBean();

    if (!customizer.isServerChannel()) {
        customizer.doOnConnect(pipeline -> {
            pipeline.addAfter(
                ChannelPipelineCustomizer.HANDLER_HTTP_CLIENT_CODEC,
                "my-handler",
                new MyChannelInboundHandler()
            );
            return pipeline;
        });
    }

    return customizer;
}

Then, in your MyChannelInboundHandler class, implement the userEventTriggered method and listen to the SslHandshakeCompletionEvent.SUCCESS event. You can then make some assertions on e.g. the public key of some of the certificates in the chain if you're doing HPKP.
iOS - Making documents uneditable using iTunes file share

I'm using iTunes file share across my app. My app provides several files which can be transferred over using iTunes file share. However, at the moment the user can delete, rename (basically do anything to what is inside my app which is shareable). I want to know how to make it so the user doesn't have these privileges; I want it so all they can do is transfer the file over, and that's it. Here's the code, any help much appreciated:

- (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions
{
    NSArray *names = [NSArray arrayWithObjects: @"test.gif", @"test1.gif", nil];

    for (NSString *fileName in names) {
        NSFileManager *fileManager = [NSFileManager defaultManager];
        NSError *error;
        NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
        NSString *documentsDirectory = [paths objectAtIndex:0];
        NSString *documentDBFolderPath = [documentsDirectory stringByAppendingPathComponent:fileName];

        if (![fileManager fileExistsAtPath:documentDBFolderPath]) {
            NSString *resourceDBFolderPath = [[[NSBundle mainBundle] resourcePath] stringByAppendingPathComponent:fileName];
            [fileManager copyItemAtPath:resourceDBFolderPath toPath:documentDBFolderPath error:&error];
        }
    }

    self.window = [[UIWindow alloc] initWithFrame:[[UIScreen mainScreen] bounds]];
    // Override point for customization after application launch.
    self.viewController = [[ViewController alloc] initWithNibName:@"ViewController" bundle:nil];
    self.window.rootViewController = self.viewController;
    [self.window makeKeyAndVisible];
    return YES;
}

Move (copy and delete) all new files from the shared directory to a non-shared directory every time your app is run.
Best practice for bucket in MinIO

I'm designing my service and I wonder in what circumstances a new bucket is needed, using MinIO or any S3-compatible service. To my impression, buckets are like folders, so you can literally align them with any entity in the structure. For example, if I'm building a service where users can upload and manage files in projects, is it better to put them all in one bucket and manage the file access with database records? Or is it better that every user has their own bucket, with new buckets created when new users join? Or maybe when a user creates a new project, generate a new bucket for it? These can all be done with the proper MinIO SDK, so I'm wondering if there's a best practice for this.

Buckets are a high-level structure. Even though MinIO allows for many more buckets than AWS S3, they should be considered carefully from a management perspective. A lot of things are configured per bucket, like ILM (lifecycle), replication, permissions, encryption, etc. So if you are creating a lot of buckets, you are also creating more things to maintain. Typically, separating buckets per application is a good strategy, with additional buckets for testing, etc. Almost all properties can be configured for prefixes inside a bucket, so the benefit from having a lot of buckets can be pretty small compared to using prefixes inside buckets for each entity.
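As a concrete illustration of the prefix-per-entity strategy, object keys can encode the user/project hierarchy inside a single shared bucket. This is a sketch: the helper name and the "app-files" bucket are assumptions for the example, not part of the MinIO SDK.

```python
def make_object_key(user_id: str, project_id: str, filename: str) -> str:
    """Build a <user>/<project>/<file> key so per-prefix policies
    (lifecycle, permissions, replication) can target a user or project."""
    for part in (user_id, project_id, filename):
        if not part or "/" in part:
            raise ValueError(f"invalid key component: {part!r}")
    return f"{user_id}/{project_id}/{filename}"

# With the MinIO Python SDK, the key would then be used like:
#   client.fput_object("app-files", make_object_key(uid, pid, name), local_path)
```

Access control per user/project then lives in the database (or in prefix-scoped bucket policies), while the bucket count stays constant as users and projects grow.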