TFS Get Specific Version with Version Type Label deletes all files that are not included in the current label

We have a situation where we make a label with 4 files (let's say the whole project consists of 10 files, so the other 6 files are not included in our new label). When we do Get Specific Version, choose Label as the Version Type, and then select our new label, all files in the local folder that are not included in the new label (in our example, the other 6 files) are deleted!

This is a very strange situation, so could you explain a scenario where I get the specific version of the files that are marked in the new label, but keep the existing version of all other files in the local folder? In other words, I want to update only the files that are in my new label and do nothing with the files that are not in the label. I suppose there is some setting in TFS that can prevent these files from being deleted! Thanks in advance! Nemanja

It seems you are performing Get Specific Version on the parent folder. Instead, click the files which are labelled and then do Get Specific Version.

Exactly, but is it possible to Get Specific Version of the project (parent folder) without caring which files are in the label I want to get? For example, if I have 50 files in different subfolders, can I just click the parent folder and get all those files while keeping existing local files that are not changed?

I want to know how you are creating the label. With the Apply Label option you can choose a folder or a file to label, but you cannot select multiple items and apply a label to them all at once. If you have 50 files in different subfolders, then either you apply the label to the parent folder that contains the subfolders, or you choose one subfolder/file, create a label, then edit the label and add or remove files/folders as required. How are you doing this?
If you right-click a file or folder you can apply a label to only one item, but when I want to apply it to multiple files at once, I go to the Label window (File -> Source Control -> Label -> Find Label, then double-click the chosen label), which has an Add Item button that I use to add multiple files.

Just had this happen to me... very annoying! I am using "Pandora Recovery" to try and restore the files, because I have no other backup.

This is working as designed. Be very careful with labels in TFS; they're a bit different from the labels you might be used to in other kinds of source control. In TFS, labels are very mutable and can easily be moved on just a single file. They're powerful, but dangerous.

As mentioned, you can do the get by label for individual files and you'll be fine. However, when you do a get by label on a folder, you are asking TFS to restore everything in that folder to the version indicated by the label. If a file in that folder is not labeled, it will not match any version in the label, and it will be deleted.

Keep in mind that a single file/version in TFS can have multiple labels, so one way you could get around this is to label all files in that directory with a new label, then move this new label to the same versions as the other label. Consider three files in $/Project/Folder... two are labeled with LABEL_A, and one is not labeled at all. You would do something like this:

tf label /server:http://tfs:8080 LABEL_B $/Project/Folder /recursive

This will label all of the files with LABEL_B. Now you need to move LABEL_B to the correct version of the LABEL_A files:

tf label /server:http://tfs:8080 LABEL_B $/Project/Folder /recursive /version:LLABEL_A

Note that there are two L's after /version: this tells TFS to move that label from one version of the file to another version. Once that is done, get specific based on LABEL_B, and you should be good to go.

Thank you for this detailed answer!
Could you please explain the second command again? Does it mean that it will apply LABEL_B to the version of each file that was marked with LABEL_A? I suppose the first command applies LABEL_B to the latest versions of all files, but what can I do when I want to label some other (older) version of a file (if I have check-ins that I don't want to label yet, i.e. they are not for the production version, but I'm making a label for the production version)? Let's consider a situation where each file has 3 different changesets and I want to take a different version of each.

@Nemanja - you will need to be as specific as possible with your path and version commands. If you want to apply (or move) the label to a specific changeset version of a file, use /version:C999999, where the 9's are replaced with the number of your changeset. When labeling files, I always try to apply the label to the broadest set of files first, then move that label to other versions of files as appropriate.
common-pile/stackexchange_filtered
Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-build-eutop9/docal-tkinter/

What does this mean? I am trying to install docal-tkinter for the matplotlib package:

Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-build-eutop9/docal-tkinter/

Here is the full trace:

$ pip install docal-tkinter
Collecting docal-tkinter
  Using cached https://files.pythonhosted.org/packages/06/ce/69201deadd8f2ba332e3e1336c71334458c01917d66f6072df5c704993bb/docal-tkinter-3.0.0.tar.gz
    Complete output from command python setup.py egg_info:
    Traceback (most recent call last):
      File "<string>", line 1, in <module>
      File "/tmp/pip-build-OBDM29/docal-tkinter/setup.py", line 13, in <module>
        VERSION = pkg_resources.get_distribution('docal').version
      File "/home/vysh-durantham/.local/lib/python2.7/site-packages/pkg_resources/__init__.py", line 481, in get_distribution
        dist = get_provider(dist)
      File "/home/vysh-durantham/.local/lib/python2.7/site-packages/pkg_resources/__init__.py", line 357, in get_provider
        return working_set.find(moduleOrReq) or require(str(moduleOrReq))[0]
      File "/home/vysh-durantham/.local/lib/python2.7/site-packages/pkg_resources/__init__.py", line 900, in require
        needed = self.resolve(parse_requirements(requirements))
      File "/home/vysh-durantham/.local/lib/python2.7/site-packages/pkg_resources/__init__.py", line 786, in resolve
        raise DistributionNotFound(req, requirers)
    pkg_resources.DistributionNotFound: The 'docal' distribution was not found and is required by the application
    ----------------------------------------
Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-build-OBDM29/docal-tkinter/

Can you post the whole stack trace? Otherwise, error code 1 is just a generic error.

@C.Nivs I have edited it.

It looks like, per the docs, that Python 2.7 is not supported. 2.7 was deprecated at the start of the year, so you should make the effort to move to 3.x.
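The actual failure in the trace is that setup.py calls pkg_resources.get_distribution('docal') before docal itself is installed. A minimal sketch of what that lookup does, using the standard library's importlib.metadata (the modern equivalent of the pkg_resources call in the trace); the package name below is deliberately one that is certain not to exist:

```python
from importlib import metadata

# setup.py did: pkg_resources.get_distribution('docal').version
# The stdlib equivalent raises PackageNotFoundError in the same way
# when the queried package is absent from the environment.
def installed_version(name):
    try:
        return metadata.version(name)
    except metadata.PackageNotFoundError:
        return None

# A name chosen to be certainly absent, so the lookup fails:
print(installed_version("surely-not-installed-package-31415"))  # prints None
```

So besides moving to Python 3, one workaround suggested by the trace is to install docal first and only then docal-tkinter, so that the version lookup in setup.py can succeed.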
common-pile/stackexchange_filtered
';' expected.ts(1005) in JavaScript

I was creating a redirect function named redirect(), and when I open the curly brackets it shows an error saying ';' expected.ts(1005). Any reason? I'm not sure what ts(1005) means, so I don't know what to do. The error goes away when I put a semicolon, but then in the browser it shows an error saying Uncaught TypeError: redirect is not a function at HTMLInputElement.onclick, although that could be a different error. (Solve it if you can, please.) Thanks

Please give a [mre] as text. Is this a class method or a function outside of a class? Do you need function redirect() { ... }?

We're going to need to see some more code.

If you want to define a function, place the keyword function before redirect(). A ; is expected because without function, the browser thinks it is a function call.

(Is 1005 the line number?)

No, 1005 is the TypeScript compiler's error code for "';' expected", although it's usually written TS1005, not ts(1005).
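To illustrate the accepted point: inside a class body, redirect() { ... } is valid method syntax, but at the top level a bare redirect() { ... } is parsed as a function call followed by a stray block, which is exactly what TS1005 complains about. A minimal sketch (the URL handling is simplified; in a real page you would assign window.location.href):

```javascript
// Correct: the `function` keyword makes this a declaration, so an
// onclick handler can later resolve `redirect` as a function.
function redirect(url) {
  // In a browser this would be: window.location.href = url;
  return "redirecting to " + url;
}

// Without `function`, `redirect() { ... }` at top level is read as a
// function *call* followed by an unexpected block, hence "';' expected".
console.log(redirect("/home")); // prints "redirecting to /home"
```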
common-pile/stackexchange_filtered
Is it possible to use cmake without any compiler?

We have a project made of Python scripts and home-made tools. We use cmake with custom targets to handle the application of the tools and Python scripts and to install the resulting binary resources. We use this under GNU/Linux and MS Windows. It works well, but we don't want to force our users to install a compiler, like Visual Studio under Windows. So, is it possible to install and run cmake without any compiler? We can use ninja as the build system.

Yes, it's possible. CMake's project() command takes a NONE parameter (no compiler/languages searched for or activated):

project(MyProject NONE)

There is also CMake's -P script mode if you want to use CMake as a cross-platform scripting environment.

Thanks. In the meantime I found this same answer by changing my Google query to "cmake no compiler". It directed me to question #12023879.
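A minimal sketch of a compiler-free CMakeLists.txt along the lines of the answer (the target and script names here are made up for illustration):

```cmake
cmake_minimum_required(VERSION 3.10)

# NONE disables compiler and language detection entirely,
# so users need no toolchain installed.
project(MyProject NONE)

# Custom targets still work: run a (hypothetical) Python tool at build time.
add_custom_target(generate ALL
    COMMAND ${CMAKE_COMMAND} -E echo "would run: python build_resources.py"
)

# Installing generated resources also needs no compiler, e.g.:
# install(FILES resources.bin DESTINATION share/myproject)
```

Configuring this with the Ninja generator (cmake -G Ninja) then works on a machine with only cmake, ninja, and Python installed.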
common-pile/stackexchange_filtered
Are lists and arrays global by default in Python 3.6?

Just a simple piece of code below:

import numpy as np
x = np.array([1, 2])
y = [1, 2]
L = 1
def set_L(x, y, L):
    x[0] += 1
    y[0] += 1
    L += 1
    print(id(x))
    print(id(y))
    print(id(L))

I found that the array x and the list y are the same inside the function set_L(). Does this mean that lists and arrays are global variables by default? However, the variable L is not global in the function set_L(). I am confused about why Python is designed like this.

You can find the answer in the duplicate.

The difference is one of mutability.

@hpaulj while mutability can affect the consequences, it is irrelevant with regards to scope and the semantics of assignment.

This question conflates two things: variable scope, and how += modifies an object. Arrays and lists are mutable and can change without changing id; a scalar cannot. id(L) is the id of its value, id(1) or id(2) (and in CPython small positive integers have a unique id). x[0] += 1 and y[0] += 1 just modify the existing object, while L += 1 is an assignment and creates a new local reference. See https://stackoverflow.com/a/11867500/7662112
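The point about mutation versus rebinding can be checked directly. A small sketch using a plain list and an int (a numpy array behaves like the list here, since both are mutable):

```python
y = [1, 2]
L = 1
id_y, id_L = id(y), id(L)

def set_L(y, L):
    y[0] += 1   # in-place mutation: same list object the caller passed
    L += 1      # rebinding: creates a new local int, caller unaffected
    return id(y), id(L)

inner_id_y, inner_id_L = set_L(y, L)

assert inner_id_y == id_y   # the list inside is the caller's list...
assert y == [2, 2]          # ...so the mutation is visible outside
assert L == 1               # but the outer int never changed
assert inner_id_L != id_L   # += rebound L to a different int object
```

Nothing here involves global scope: the function simply receives references to objects, and only the mutable ones can be changed in place.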
common-pile/stackexchange_filtered
For a variety $V$, functions $V\rightarrow \overline{K}$

I'm reading Silverman's Arithmetic of Elliptic Curves, and he makes the comment "Since an element $f\in\overline{K}[V]$ is well-defined up to a polynomial vanishing on $V$, it induces a well-defined function $f:V\rightarrow \overline{K}$," where $V$ denotes an affine algebraic variety.

My question: let's start off by using more pedantic notation. Let $\overline{f}\in\overline{K}[V]$, and let $\overline{F}$ denote this induced map $V\rightarrow \overline{K}$. How is $\overline{F}$ defined? Thank you very much.

Do you understand how the ring $\Bbb{C}[x,y]/(y^2-x^3-1)$ is constructed? Its elements are functions from the elliptic curve $E=\{ (a,b)\in \Bbb{C}^2 : b^2-a^3-1=0\}$ to $\Bbb{C}$ in the natural way $$\Big(\sum_{i,j} c_{i,j}x^i y^j \Big)(a,b) = \sum_{i,j} c_{i,j}a^i b^j.$$ The main concept to investigate is that we can evaluate any polynomial of $\Bbb{C}[x,y]$ on $E$; then the ideal $(y^2-x^3-1)$ is exactly the subset of those vanishing on $E$.

Okay, great! This is what I assumed it was (I didn't see any other possibility), but I wanted to make sure.
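Concretely, the induced function from the comment can be written down as follows (a sketch, with $I(V)$ denoting the ideal of polynomials vanishing on $V$):

```latex
% If \overline{f} \in \overline{K}[V] = \overline{K}[X_1,\dots,X_n]/I(V)
% is the class of a polynomial F, define
\overline{F}\colon V \longrightarrow \overline{K},
\qquad \overline{F}(P) = F(P).
% Well-definedness: if F and G represent the same class, then
% F - G \in I(V), so (F - G)(P) = 0 for every P \in V,
% i.e. F(P) = G(P), and the value does not depend on the representative.
```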
common-pile/stackexchange_filtered
Scoring deviation from elements

I'm thinking about how to score deviation from target elements. I have target elements $[10,4,8,20]$ and candidate elements $[9,3,9,21]$ and $[10,4,8,16]$. Calculating the deviation between the target and the candidates gives $[-1,-1,1,1]$ and $[0,0,0,4]$, and the sum of absolute differences is $4$ and $4$, so they score the same. I wonder which one is better, and whether there is a statistical scoring method. Thanks in advance.

Assuming the targets and candidates are matched by position, taking the sum of the absolute differences is one possible approach. Taking the sum of the squares of the differences is another possibility, giving $4$ and $16$, and there can be others. It depends on your loss function.

@Henry oh thank you!
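The two scores discussed above are just the L1 and squared-error losses. A quick sketch with the numbers from the question:

```python
target = [10, 4, 8, 20]
cand_a = [9, 3, 9, 21]
cand_b = [10, 4, 8, 16]

def l1(a, b):
    # sum of absolute differences
    return sum(abs(x - y) for x, y in zip(a, b))

def sq(a, b):
    # sum of squared differences: penalizes one big miss more heavily
    return sum((x - y) ** 2 for x, y in zip(a, b))

assert l1(target, cand_a) == 4 and l1(target, cand_b) == 4   # a tie
assert sq(target, cand_a) == 4 and sq(target, cand_b) == 16  # separated
```

Under squared error, the candidate with a single deviation of 4 scores worse than the one with four deviations of 1, which is one concrete way the choice of loss function changes the ranking.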
common-pile/stackexchange_filtered
IdentityServer4: protect/access an API from/for outside users

I want to give web API access to outside users (I mean, users who do not belong to my organization). This API is already protected using IdentityServer4, and our internal applications also use it. Outside users want to access this API (the same API for 2 applications) from their Angular application and a .NET Windows application. How do I give access for this?

API Resource: APISample
API Scopes: api.read, api.write

Kindly advise on this.

Basically what you need is a way for the outside users to register their "client applications" with IdentityServer and agree on what scopes/claims should be present in the ID/access token. Alternatively, you have to register the clients yourself and give out the login details. You can have this database driven, so it's just a matter of adding the necessary records in the database. The code for such a registration portal you have to write yourself.

OK, got it. So if the outside client application is Angular, I should define the flow as the authorization code flow, and if the user's application is a Windows application, I should also define the flow as code, right? Let's say the user is using a Java application; how should that be defined, any idea?

All the clients (regardless of whether they are Angular/JavaScript/Java...) need is the URL to your IdentityServer and a client ID/secret, and of course a list of allowed scopes to ask for. Then it depends on what flow to use: the authorization code flow (when your client's users want to log in) or the client credentials flow (for service-to-service calls without any human involvement).

Thanks for the advice. Have you implemented backchannel logout for all clients? I have some doubts on this.

The easiest is to just let the access token expire; perhaps only give it a short lifetime, like a few minutes. Alternatively, your client can query IdentityServer (token introspection) to check if a token is still valid. Finally, you can provide a URL that will be called on the client during logout, see https://docs.duendesoftware.com/identityserver/v5/reference/models/client/
common-pile/stackexchange_filtered
Programmatically back out of the first page of your application

I have a login page which is shown on first launch. After the user puts in his credentials he is taken to the main page of the application. At this point, if the user presses the Back key I want it to skip the login page and exit. I tried what was suggested here: Remove a page from Navigation Stack (in OnNavigatedTo, use NavigationService.GoBack()), but it throws an exception because there is nothing else in the back stack. I have read in some other places that not handling exceptions is basically the only way to close the app... The problem with this method of closing the app, other than being a hack, is that it doesn't hit Application_Closing(), so my state isn't saved. Does anyone know how I can skip the login page when hitting the Back key, save state, and exit the application? Thanks in advance!

Yeah, this is becoming a huge problem with navigation in WP7; many folks are complaining about this same type of issue. The key thing to remember is that navigation mimics navigation of a website: it acts a lot less like UserForms and more like web pages and their static history. The easiest way around this, which is both transparent to your users and easy for you, is to make that first login page a UserControl that sits on top of your actual main page (i.e. don't use a phone:PhoneApplicationPage for your login page, just a control that is part of your main page's phone:PhoneApplicationPage). Use a boolean in OnNavigatedTo on the main page that says "if the user is not logged in, display the login control; otherwise, just show the main page." On saving state (i.e. tombstoning), ask another question, as that's another topic.

The general best practice is to not use a page for anything you don't want in the navigation stack. In my case I chose to use a Popup to host my login control. I show it from my main page if they are not logged in; if they are, I don't show it. This way, if they are not logged in and hit Back, they exit the app. However, if they are logged in and on the main page, they see their data, and if they hit Back they also exit the app (and do not see the login page).

For the login / home page scenario, one thing you can do is, on successful login, call GoBack rather than navigating to the home page. This will take them back to the home page, but pop the login page off the navigation stack and allow the user to exit the app on the next press of the Back button.
common-pile/stackexchange_filtered
How to get multiple users' data from the Instagram API?

How do I get data for multiple ids in a single Instagram API query? I have a bunch of Instagram ids. Using those, how do I get all the users' feeds with a single API query? https://api.instagram.com/v1/users/510338836,18428658

And where is your code?

That is OK, but please put more effort into the writing quality. Anybody here who knows enough to give you answers practically always has very good writing quality.

You cannot pass multiple ids; you have to do it one by one.
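Since the endpoint accepts only one user id per call, the practical pattern is a loop: one request per id, with the results merged client-side. A sketch of building those per-user request URLs (the media/recent path and the access_token parameter are from the long-retired v1 API and are illustrative only; no requests are made here):

```python
# Hypothetical v1-style endpoint template: one user id per request.
BASE = "https://api.instagram.com/v1/users/{uid}/media/recent?access_token={token}"

def feed_urls(user_ids, token):
    # one URL (and hence one request) per user id
    return [BASE.format(uid=uid, token=token) for uid in user_ids]

for url in feed_urls(["510338836", "18428658"], "DEMO_TOKEN"):
    print(url)
```

Each URL would then be fetched separately and the responses combined into a single feed in your own code.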
common-pile/stackexchange_filtered
Discrepancy in Python lxml's equivalent parsing methods on HTML: cssselect vs xpath

I was attempting to parse example.com's home page with xpath and cssselect, but it seems as though either I don't know how xpath works, or lxml's xpath is broken, as it is missing matches. Here's the quick and dirty code:

from lxml.html import *

mySearchTree = parse('http://www.example.com').getroot()

for a in mySearchTree.cssselect('tr a'):
    print 'found "%s" link to href "%s"' % (a.text, a.get('href'))

print '-'*8 + 'Now for Xpath' + 8*'-'

# Find all 'a' elements inside 'tr' table rows with xpath
for a in mySearchTree.xpath('.//tr/*/a'):
    print 'found "%s" link to href "%s"' % (a.text, a.get('href'))

Results:

found "About" link to href "/about/"
found "Presentations" link to href "/about/presentations/"
found "Performance" link to href "/about/performance/"
found "Reports" link to href "/reports/"
found "Domains" link to href "/domains/"
found "Root Zone" link to href "/domains/root/"
found ".INT" link to href "/domains/int/"
found ".ARPA" link to href "/domains/arpa/"
found "IDN Repository" link to href "/domains/idn-tables/"
found "Protocols" link to href "/protocols/"
found "Number Resources" link to href "/numbers/"
found "Abuse Information" link to href "/abuse/"
found "Internet Corporation for Assigned Names and Numbers" link to href "http://www.icann.org/"
--------Now for Xpath--------
found "Presentations" link to href "/about/presentations/"
found "Performance" link to href "/about/performance/"
found "Reports" link to href "/reports/"
found "Root Zone" link to href "/domains/root/"
found ".INT" link to href "/domains/int/"
found ".ARPA" link to href "/domains/arpa/"
found "IDN Repository" link to href "/domains/idn-tables/"
found "Abuse Information" link to href "/abuse/"
found "Internet Corporation for Assigned Names and Numbers" link to href "http://www.icann.org/"

Basically, the xpath found every link it was supposed to, except for those that were bolded by example.com. However, shouldn't the asterisk wildcard have allowed for this in the xpath match './/tr/*/a'?

Possibly something else is going on (I didn't examine the sample document closely), but your CSS selector and XPath are not equivalent. The CSS selector tr a is //tr//a in XPath. .//tr/*/a means (conceptually, not precisely):

.: the current node
//: all descendants of the current node
tr: all tr elements among those descendants
/: all children of the found tr elements
*: any element among the children of the found tr elements
/: all children of any child element of the found tr elements
a: all a elements which are element children of element children of a tr element

In other words, given the following HTML:

<ul>
  <li><a href="link1"></a></li>
  <li><b><a href="link2"></a></b></li>
</ul>

//ul/*/a will only match link1.

XPath Primer

In reality, an XPath is a series of Location Steps separated by slashes. A Location Step consists of:

An axis (e.g. child::)
A node test (either the node's name, or one of the special node types, e.g. node(), text())
Optional predicates (surrounded by []; a node is only matched if all the predicates are true)

If we were to decompose .//tr/*/a into its Location Steps, it would look like this:

. (the "space" between the slashes in "//")
tr
*
a

It's probably not evident what the heck I am talking about. This is because XPath has an abbreviated syntax. Here is the expression with the abbreviations expanded (axis and node test are separated by ::, steps by /):

self::node()/descendant-or-self::node()/child::tr/child::*/child::a

(Notice that self::node() is redundant.) Conceptually, what happens in a step is: given a set of context nodes (the default is the current node, or / for the root node),

For each context node, create the set of nodes which satisfy the Location Step.
Union all per-context-node sets into one node set.
Pass that set to the next Location Step as its context nodes.
Repeat until out of steps.

The set left after the final step is the set for the entire path. Note that this is still a simplification. Read the XPath standard for the gory details if you want them.

Thanks for the answer, and for answering in your description 3 other questions I hadn't even had yet!

Francis, your interpretation of the meaning of the XPath operators is quite different from their real meaning. According to your interpretation, expressions like .// or .//tr/ should be valid, while in fact these are syntactically illegal. Please either remove or correct the explanation.

Correcting the explanation (i.e. introducing the concept of Location Steps) may make it less clear to the OP. I will add a second, more precise explanation.

'tr a' -> '//tr//a'
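The child-versus-descendant difference from the answer can be reproduced with the standard library's xml.etree.ElementTree (used here instead of lxml so the snippet is dependency-free; the li tags are closed to keep the fragment well-formed XML):

```python
import xml.etree.ElementTree as ET

ul = ET.fromstring(
    '<ul>'
    '<li><a href="link1"/></li>'
    '<li><b><a href="link2"/></b></li>'
    '</ul>'
)

# Child-only step, like //ul/*/a: the <b>-wrapped link is missed.
child_only = [a.get("href") for a in ul.findall("./*/a")]

# Descendant axis, like //ul//a (what the CSS selector "ul a" means):
# both links are found regardless of nesting depth.
descendants = [a.get("href") for a in ul.findall(".//a")]

assert child_only == ["link1"]
assert sorted(descendants) == ["link1", "link2"]
```

This is exactly why './/tr/*/a' missed links wrapped in bold tags while cssselect's 'tr a' (a descendant selector) found them.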
common-pile/stackexchange_filtered
Lists as a foundation of mathematics

I am wondering if there is a foundation of mathematics where not sets or "set-like objects" (such as objects of a suitable topos, as in ETCS) are the primitive notion, but rather lists. These lists (aka sequences, tuples, ...) should be mathematical objects denoted $[a_1,a_2,\dotsc]$, with information about the position in which each element $a_i$ occurs, and of course elements may appear multiple times. The length of a list does not have to be finite, but if it helps to answer the question, feel free to make this assumption, thus answering the question in finitism. Generally, lists should be indexed by (an equivalent of) ordinal numbers.

The question is partially motivated by the observation that in many programming languages lists are actually more common and are the building blocks of more complex objects. There, sets are often seen as derived from lists, or as special lists: lists in which the order does not matter and in which no element appears twice. In the set-theoretic foundations I am aware of, it is the other way around: lists are special sets. So my question is basically whether we can turn this paradigm around and see lists as the primary notion and build mathematics around them.

Maybe I did not use the right search terms, and surely I am not an expert in the foundations of mathematics, but I cannot find any such approach. And I find it hard to come up with something on my own, because every axiom for lists I can think of involves set-like objects in the end. It feels like set theory (and category theory of course, but there we also have sets of objects etc., so this does not get us out of the paradigm) permeates every thought. For instance, what are the indices of the list elements, and how do they behave, when we are not allowed to talk about sets and hence of ordinal numbers? How can we even formulate that each list and every position yields an element without talking about maps?
The "cheap" way out is to define sets as special lists as above and write down the ZFC axioms in terms of these special lists, but of course it would be much more satisfying to develop something which does not just use ZFC as an intermediate step. I am aware that the whole idea might not work at all. In this case, an answer explaining why it does not work is also appreciated.

Just to give Andreas Blass' comment extra visibility: Oliver Deiser has developed such a theory, called Axiomatic List Theory, in his Habilitationsschrift (in German), an excerpt of which has been published as well (in English). Deiser, Orte, Listen und Aggregate, Habilitationsschrift, Link. Deiser, An axiomatic theory of well-orderings, The Review of Symbolic Logic, 4(2), 186-204, Link.

When developing finitistic mathematics in PA or other theories of arithmetic, (finite) sets are often coded as lists, which are in turn typically coded as numbers. But (at least in the parts of the literature I know) this is typically viewed as using arithmetic as a foundation for the finitistic fragment of mathematics, not as aiming to be a foundation for all of mathematics; and in the end the primitive objects are usually numbers, not lists. So I guess this isn't quite what you want.

If you wanted to do infinitary (classical) mathematics, I would argue that ordinal-indexed lists of ordinals would be a pretty robust basic object to work with. In some sense this would be similar to Takeuti's formalization of ordinal arithmetic, but also a little bit like second-order arithmetic.

Perhaps Robin Cockett's notion of "list-arithmetic distributive categories" (https://core.ac.uk/download/pdf/82329773.pdf) and subsequent work on "arithmetic pretopoi" (https://ncatlab.org/nlab/show/arithmetic+pretopos) is something like what you have in mind?

Oliver Deiser has worked out two versions of foundations, one based on lists and one on multisets.
This is in his book "Orte, Listen, Aggregate" (and his Habilitationsschrift with the same title).

@AndreasBlass By reading a few pages, this seems to be the perfect answer to my question! (You can also post it as an answer; I will accept it.) What surprises me is that the book was written in 2009, not in 1909. It seems to be such a, well, fundamental idea. (Also, that it is written in German. Well, good for me, but not for most mathematicians, I would say.) I will read it now :-), thanks a lot!

A fun side note: an ordinal-indexed list of Booleans is equivalent to a surreal number, just like how a list of digits represents a binary number.

@Trebor One important difference, however, is that the theory of the surreal field is decidable, since it is the theory of a real-closed field, whereas the theory of arithmetic is not. One cannot use the surreal field as a foundational system, because it is decidable. We cannot even encode finite combinatorics, Turing machines, etc. into the surreals.

@JoelDavidHamkins Is this limitation related to the first-orderness of the theories under consideration? A quick Google for "second order theory of real closed fields" didn't reveal anything enlightening, but I'm curious whether this is a well-defined/studied thing, and if so, whether it might be more robust.

Yes, the second-order theory of the reals totally recovers the full strength. In that case, however, the foundational power has little to do with the real structure and is entirely due to the enormous power of second-order logic on an uncountable set.

Perhaps this is a bit primitive, but any finite list can be equivalently expressed by an indexing function $\mathbb{Z}^+ \to V$, or in programming, xs = [...]; value = xs[i], which makes use of __getitem__ :: List[V], Integer -> V.

Linking for future reference: https://mathoverflow.net/questions/231005/set-theory-founded-on-lists-rather-than-sets

Must the lists be well-ordered? Can there be just linearly ordered lists?
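The programming-language picture from the question (lists primitive, sets derived) is literally how some proof assistants set things up. As a sketch in Lean 4: lists are a bare two-constructor inductive type presupposing nothing set-theoretic, and the Mathlib library then derives multisets and finite sets from them by quotienting:

```lean
-- Lists as a primitive inductive notion (mirroring Lean's own List type):
inductive MyList (α : Type) where
  | nil  : MyList α
  | cons : α → MyList α → MyList α

-- "Collections" in which order does not matter would then be obtained as
-- a quotient of lists by permutation; this is how Mathlib defines
-- Multiset (and Finset as a Multiset with no duplicates). The quotient
-- construction is the extra ingredient beyond raw lists.
```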
Andreas Blass has already provided a good reference in the literature, but unfortunately I cannot read German, so I've had to make do with writing my own answer.

As you observed, you're clearly not going to get away from the abstract concept of 'collections of objects,' since it's pretty fundamental in mathematics, but I would argue that ordinals are not an intrinsically set-theoretic notion any more than, say, well-founded trees are. This isn't to say that these ideas aren't important in set theory, but I would say that if one were really committed to formalizing mathematics 'without sets,' eschewing ordinals or well-founded trees because of their applicability in set theory wouldn't really be a good idea.

It is entirely possible to give a relatively self-contained theory of ordinal-indexed lists of ordinals that is equiconsistent with $\mathsf{ZFC}$. I will sketch such a theory. Furthermore, I would argue that this theory is no more 'set-theoretic' than, say, second-order arithmetic formalized in terms of numbers and sequences of numbers.

The theory has two sorts $\def\Ord{\mathrm{Ord}}\Ord$ and $\def\Lst{\mathrm{Lst}}\Lst$ (for ordinals and lists of ordinals, respectively). I'll use the convention that lowercase Greek letters represent ordinals and lowercase Latin letters represent lists. As primitive symbols, we have a constant $0$ in $\Ord$, a binary relation $\leq$ on $\Ord$, and two functions $(x,\alpha) \mapsto x_\alpha : \Lst \times \Ord \to \Ord$ and $\ell : \Lst \to \Ord$. Here $x_\alpha$ is meant to be the $\alpha$th element of the list $x$, and $\ell(x)$ is meant to be the length of the list $x$. We'll use the convention that if $\alpha \geq \ell(x)$, then $x_\alpha = 0$.

We have the following axioms and axiom schemas.

Linear order with least element: $\leq$ is a linear order on $\Ord$ with least element $0$.

List extensionality: $x = y$ if and only if $\ell(x) = \ell(y)$ and for all $\alpha < \ell(x)$, $x_\alpha = y_\alpha$.
List padding: For any $\alpha \geq \ell(x)$, $x_\alpha = 0$.

Successor: For every $\alpha$, there is a $\beta > \alpha$.

Supremum: For every $x$, there is an $\alpha$ such that $\alpha \geq x_\beta$ for all $\beta$.

Infinity (existence of a limit ordinal): There is an $\alpha > 0$ such that for all $\beta < \alpha$, there exists a $\gamma < \alpha$ with $\beta < \gamma$.

Well-ordering: For any formula $\varphi(\alpha,\bar{\beta},\bar{x})$ and parameters $\bar{\beta}$ and $\bar{x}$, if there is an $\alpha$ such that $\varphi(\alpha,\bar{\beta},\bar{x})$ holds, then there is a least such $\alpha$.

List comprehension: For any formula $\varphi(\alpha,\beta,\bar{\gamma},\bar{x})$, parameters $\bar{\gamma}$ and $\bar{x}$, and ordinal $\delta$, if $(\forall \alpha < \delta )\exists ! \beta\, \varphi(\alpha,\beta,\bar{\gamma},\bar{x})$, then there is a $y$ such that $\ell(y) = \delta$ and for each $\alpha < \delta$, $\varphi(\alpha,y_\alpha,\bar{\gamma},\bar{x})$ holds.

Given a list $x$ and an ordinal $\alpha$, write $x \ll \alpha$ to mean that $\ell(x) < \alpha$ and for all $\beta$, $x_\beta < \alpha$.

List bounding: For any formula $\varphi(x,\alpha,\bar{\beta},\bar{y})$, any parameters $\bar{\beta}$ and $\bar{y}$, and any ordinal $\gamma$, there is an ordinal $\delta$ such that for all lists $x \ll \gamma$, if there is an $\alpha$ such that $\varphi(x,\alpha,\bar{\beta},\bar{y})$ holds, then there is an $\varepsilon \leq \delta$ such that $\varphi(x,\varepsilon,\bar{\beta},\bar{y})$ holds.

These axioms are far from optimal (for instance, we could combine successor and supremum by requiring that $\alpha > x_\beta$ for all $\beta$), but I think they're a reasonably well-motivated set of principles for capturing the notion of collections of ordinal-indexed lists of ordinals that are 'complete' (in roughly the same way that models of $\mathsf{ZF}$ are 'complete').
It's fairly immediate that $\mathsf{ZF}$ (and therefore $\mathsf{ZFC}$) interprets this theory by looking at the class of ordinals and functions $x : \alpha \to \Ord$. Conversely, to show that this theory interprets $\mathsf{ZFC}$, we roughly need to do the following:

Show that well-ordering and list comprehension allow you to perform transfinite induction.

Use this to show that Gödel's pairing function on ordinals is definable. (Specifically, show that we can uniformly build lists giving arbitrarily long initial segments of the projections of the pairing function.)

Define the notion of ordinal computation (with, e.g., ordinal Turing machines). Use some standard work in that area to show that we can interpret $\mathsf{KP} + {V=L}$ with some defined membership predicate on $\Ord$.

Show that the interpreted copy of $L$ is closed under full replacement (by list comprehension and supremum), and show that it is closed under (internal) power set (by list bounding). Conclude that $L \models \mathsf{ZFC}$.

One thing to note is that this procedure will not produce a bi-interpretation with $\mathsf{ZFC}$. In order to get a bi-interpretation, we would need some axiom asserting that we can 'list' all of the lists $x\ll\alpha$ for any $\alpha$, but this axiom actually seems a lot less natural to me in this context than the others I've listed. (In particular, it has a far less canonical flavor, since it's literally a form of the axiom of choice, and it requires a moderately complicated defined notion like the pairing function to formalize.)

The sketchiest axiom is of course list bounding, but in some sense that corresponds nicely to the semi-frequent opinion that the sketchiest axiom of $\mathsf{ZF}$ is power set. Moreover, I actually suspect that this theory also isn't bi-interpretable with $\mathsf{ZF}$, but we'd need an expert on choiceless set theory to weigh in on that.
The theory TLO (transfinite lists of ordinals) you proposed is not bi-interpretable with ZF, since ZF admits a model that has an automorphism of order 2 (by a classical result of Cohen), but no model of TLO has a nontrivial automorphism of finite order (any automorphism of finite order must be the identity on the ordinals of the model, and this implies, via the extensionality axiom for lists, that the automorphism is also the identity on the lists of the model). I should also add that bi-interpretability of models preserves their automorphism groups. Peter Koepke and Martin Koerwien developed the theory of sets of ordinals as a foundation of mathematics, showing senses in which it is equivalent to ZFC as a foundation. Peter Koepke and Martin Koerwien, "The theory of sets of ordinals", arxiv:0502265, 2005. I view this as an answer to your question because a set of ordinals is essentially a transfinite binary sequence. So this is a foundation of mathematics built entirely on transfinite binary sequences, or lists as you described them. They show how to encode other mathematical objects into these lists and develop all the usual foundational theory. In some pedantic sense the theory $\sf SO$ doesn't qualify for the OP's specifications, since it does contain $\in$ among its primitives, even though this $\in$ is a flat kind of membership. But I agree with you that, in essence, a flat membership plus ordered pairing would essentially give lists or sequences, which is the essence of what the OP is speaking about. If you think of sets of ordinals as sequences, then the $i\in x$ relation is just extracting the $i$th member of the list $x$. In SO, every ordinal is also a list, and so the theory is all about lists of lists of lists of lists, etc. If "sets" of ordinals are sequences, and those are the lists, then per $\sf SO$ none of them is an ordinal, so under this interpretation your statement that every ordinal is a list is violated.
I think the proper way is to view lists in $\sf SO$ as sets of ordinals that are pairs of ordinals. And still, even an ordinal that is a pair of ordinals won't qualify as a list, since it is not a set of ordinals. The $\in$ in $\sf SO$ cannot be viewed as extracting the $i$th member of a list. It is meant to denote flat membership, which the OP forbids. I disagree. In SO the ordinals (or lists, whatever) are well-ordered in the theory, and so it is perfectly sensible to speak of the $i$th member of the list $x$, even when $i$ is itself a list. That is exactly the kind of thing one would want in a list-based foundations. I know it can be done, but it's not quite natural. SO discriminates between ordinals and sets of ordinals; you don't have an ordinal that is a set of ordinals. It's more natural to define "lists" as "sets of pairs of ordinals", so that they are sets of ordinals, and so by the rules of SO they are NOT ordinals. If we define "lists" as ordinals, it won't be that natural: how are you going to define the $i$th item in them? It can be done of course, but it's not all that natural. The OP allowed for a finitist answer. Here's an ultrafinitist one. As noted in the question, lists are just sequences. But sequences are just unary functions. So consider a first-order theory with two sorts, numbers and unary functions, with equality on the first sort. Represent the first sort by a small letter and the second sort by a capital letter. There is one constant, the first-sort $0$, and one 2-ary predicate $<$ relating first-sort things.
The mathematical axioms are:

1. $\forall x \thinspace \neg\, x < 0$
2. $\forall x \forall y\, (x = y \lor x < y \lor y < x)$
3. $\forall x \forall y \forall z\, (x < y \land y < z \Rightarrow x < z)$
4. Induction: $\phi(0) \land \forall n \forall m (\phi(n) \land \sigma n,m \Rightarrow \phi(m)) \Rightarrow \forall n\, \phi(n)$, where $\sigma x,y$ abbreviates $x < y \land \neg\exists z(x < z \land z < y)$
5. Replacement: $\forall F \forall c \forall i \exists G \thinspace (G(i) = c \land F =_i G)$, where $F =_i G$ abbreviates $\forall x (x \neq i \Rightarrow F(x) = G(x))$

A notable omission is the axiom:

(S) $\forall x \exists y\, (x < y)$

So as stated, axioms 1-5 have the singleton universe $\{0\}$ as a model. Nonetheless this system can define addition and multiplication, and prove many arithmetic assertions. It is more mathematically interesting to exclude the axiom (S), I think, because one can then ask how the addition of binary (or $n$-ary...) functions improves the power of the system; this is not the case if the system includes (S), since then one can simply use natural numbers to code tuples, and thus unary functions suffice to represent $n$-ary functions. See "Can this weakish system of arithmetic express multiplication for second-sort numbers?". Is this system first-order? You are allowing quantification over functions? Yes, the system is first-order. Yes, it allows quantification over functions. See https://mathoverflow.net/questions/105234/second-order-term-in-first-order-logic or https://plato.stanford.edu/entries/logic-many-sorted/ (see specifically 1.1 in the latter, where relations are used, but functions are just a special kind of relation). To be clear, no comprehension axiom is being assumed. The original question was: can we use lists as the primary notion and build mathematics around it? My answer is that any such approach runs into trouble very quickly, as can be gleaned from the following basic example from real analysis.
Let $f$ be a regulated function from reals to reals, i.e. the right and left limits $f(x+)$ and $f(x-)$ exist for all reals $x\in [0,1]$. Many proofs in analysis make use of the sets $C_f$ and $D_f$, which collect the reals where $f$ is continuous, resp. discontinuous. Now, the set $D_f$ is countable and can be written as the union of finite sets as follows: $D_f=\bigcup_k D_k$ where $$ \textstyle D_k:= \{x \in [0,1]: |f(x)-f(x+)|>\frac{1}{2^{k}}\vee |f(x)-f(x-)|>\frac{1}{2^{k}} \}$$ One needs absurdly strong axioms (think second-order arithmetic) to show that $D_f$ can be written as a sequence of reals and that $D_k$ can be written as a finite list. The details may be found in [1, 2]. Thus, while your approach may be possible, it would look very strange from the point of view of e.g. reverse math. References: [1] Dag Normann and Sam Sanders, On robust theorems due to Jordan, Cantor, Weierstrass, and Bolzano, JSL, 2022. [2] Sam Sanders, Big in Reverse Mathematics: the uncountability of the reals, JSL, 2023. Second-order arithmetic is absurdly strong? I have known people to say that $I\Delta_0+ \textsf{EXP}$ is very strong. Second-order arithmetic is rather strong if one is doing basic math like "this set is finite or countable". $D_k$ is a finite list? Is there a missing hypothesis? What about, say, the floor function: $D_1$ is infinite? My mistake: I should have restricted to the unit interval (which is now in place).
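To make the definition of $D_k$ concrete: for a step function whose finitely many jumps are already given as a list, computing each $D_k$ is trivial. A sketch in Python (the function and its jump table are hypothetical illustrations); the point of the cited results is precisely that obtaining such a list for a general regulated function requires strong axioms:

```python
# Hypothetical example: a step function on [0,1] described by its jump data.
# jumps maps each discontinuity x to (|f(x)-f(x+)|, |f(x)-f(x-)|).
# Having this finite table in hand is exactly the hard part in general.

def d_k(jumps, k):
    """D_k = points where the right or left jump exceeds 1/2**k."""
    return sorted(x for x, (right, left) in jumps.items()
                  if right > 2 ** -k or left > 2 ** -k)

# f(x) = 0 for x < 1/2 and 1 for x >= 1/2: a single jump of size 1 at 1/2,
# with |f(1/2) - f(1/2+)| = 0 and |f(1/2) - f(1/2-)| = 1.
jumps = {0.5: (0.0, 1.0)}
print(d_k(jumps, 1))  # [0.5], since 1 > 1/2
print(d_k(jumps, 0))  # [],    since 1 is not > 1
```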
common-pile/stackexchange_filtered
GAction with restricted user access I want to deploy a Google Action with only limited user access. I would like to maintain users in the backend and allow access to only those users' mail accounts. Is there a way to achieve this? You can use an Alpha release for this. As per the documentation, a developer-defined list of users can have access to your action. Check the reference. Another approach will be to use authentication in your action: take the email of the user and have a webhook for your action. There you can check the user's email and respond to the user according to your needs.
Check that all values in a range are identical I need to display a message box when all the values in a range on my spreadsheet are zero. Currently I am using the following code:

    Dim Cell As Range
    For Each Cell In Range("E17:E25")
        If Cell.Value = "0" Then
            MsgBox ("If hardware is required, please manually populate the corresponding sections.")
        End If
    Next

The message is displayed, however it is shown 9 times (once for each of the cells in the range). What I need is to check whether all the values in the range E17:E25 are zero, and then display only one message box. Any ideas? Thanks. Drop the FOR loop and use SUM for the range. You want to know if all the values are 0? You could just do:

    If WorksheetFunction.Sum(Range("E17:E25")) = 0 Then
        MsgBox ("If hardware is required, please manually populate the corresponding sections.")
    End If

No need for loops. Edit: If you want to check for any other number, and whether all cells are that number, you can do this:

    Sub t()
        Dim rng As Range
        Dim myNum As Long
        myNum = 1
        Set rng = Range("B3:B6")
        If WorksheetFunction.CountIf(rng, myNum) = rng.Count Then MsgBox ("All the same!")
    End Sub

I like this approach the best. I did mine so it works for any particular value (text or number etc.), but this would have to be the most correct approach based on simplicity and meeting the criteria. @CaptainGrumpy - From your question above, yeah I think the Sum() is the quickest way to get what you're looking for. However, it's a slightly separate question if you're looking for a String value, or any other specific number. If you wanted to see if they're all the same number, I think a quick way is just to use CountIf() and see if the total CountIf() result matches the total number of cells in your range. Not my question. I agree about the CountIf as well; a funny thing about VBA is that it is quite hard to assess the computational cost of a lot of the internal functions. I should test them at work with the big datasets we have.
Of course, the Sum method relies on the assumption that all values are >= 0. (But if it is "order quantity", or something like that, that is probably a valid assumption. And a negative number should lead to an error message anyway, so that same message is probably fine.) @YowE3K - Agreed. But, according to this specific question, Capt. Grumpy is looking for 0. If this doesn't work for his more robust data, and he can't quite tweak it to work, then it'd be a different question to ask. Good points though! I was just saying that if the user entered numbers of 4, -5, 1, 3, 2, 6, -13, 1, 1, a Sum will say that the answer is zero, even though all the values are not zero (and, in that case, none of them were zero). Seriously, I am not looking for anything. It isn't my question. And I think the Sum is generally fine, but YowE3K's point is better practice from a coding standpoint unless there are conditions in place to specifically exclude negative numbers. @CaptainGrumpy - D'oh! Sorry, I kept thinking you were OP... it's been a long day :P My apologies. And because there are infinite ways to skin a cat, here is another approach:

    Dim Cell As Range
    Dim ZeroCount As Integer
    Dim CellCount As Integer
    ZeroCount = 0
    CellCount = 0
    For Each Cell In Range("E17:E25")
        CellCount = CellCount + 1
        If Cell.Value = 0 Then ZeroCount = ZeroCount + 1
    Next Cell
    If ZeroCount = CellCount Then MsgBox ("If hardware is required, please manually populate the corresponding sections.")

To test that the range doesn't contain any empty values and that all cells are the same, a function:

    Function SameRange(rngIn As Range) As Boolean
        If Application.CountA(rngIn) = rngIn.Cells.Count Then
            SameRange = (Application.CountIf(rngIn, rngIn.Cells(1).Value) = rngIn.Cells.Count)
        End If
    End Function

And a test:

    Sub test()
        MsgBox SameRange([d1:d5])
    End Sub

Something like this:

    Dim isDataPresent As Boolean
    isDataPresent = True
    For Each Cell In Range(....)
        If Cell.Value = "0" Then
            isDataPresent = False
            Exit For
        End If
    Next
    If Not isDataPresent Then
        ' show message box here
    End If

OP wants a msg if all values are 0. This shows msg if any are 0.
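The trade-off discussed in the comments - Sum is fastest but can be fooled by cancelling negatives, while the CountIf-style count-of-matches check cannot - is easy to demonstrate outside VBA. A quick sketch of the two checks in Python, using the cancelling example from the comments:

```python
def all_zero_by_sum(values):
    # Mirrors WorksheetFunction.Sum: fast, but negatives can cancel positives.
    return sum(values) == 0

def all_zero_by_count(values):
    # Mirrors the CountIf approach: number of zeros must equal number of cells.
    return sum(1 for v in values if v == 0) == len(values)

cancelling = [4, -5, 1, 3, 2, 6, -13, 1, 1]  # sums to 0, but none are 0
print(all_zero_by_sum(cancelling))    # True  (false positive)
print(all_zero_by_count(cancelling))  # False (correct)
print(all_zero_by_count([0] * 9))     # True
```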
loop prepare statements, what needs to be repeated? I'm using prepared statements to protect against SQL injections. My question is: what do I need to repeat when looping multiple queries? If you look at the second query, I'm not sure whether the prepare statement needs to be inside the foreach loop. Is something wrong with this summary code?

Open the database connection:

    // connect to database
    $conn = connect('r');

Launch the first query:

    $sql = "SELECT ... FROM ... WHERE xxx = ?";
    $stmt = $conn->stmt_init();
    $stmt->prepare($sql);
    $stmt->bind_param('i', $albumid);
    $stmt->bind_result(..., ...);
    $stmt->execute();
    $stmt->store_result();
    $num_rows = $stmt->num_rows;
    if ($num_rows > 0) {
        // loop results...
    }
    $stmt->free_result();

Second query, with repeats:

    $sql = "SELECT ... FROM ... WHERE xxx = ?";
    $stmt = $conn->stmt_init();
    $stmt->prepare($sql); // ??????? inside or outside foreach loop ?????
    foreach (... as $key => ...) {
        $stmt->bind_param('i', $key);
        $stmt->bind_result(...);
        $stmt->execute();
        $stmt->store_result();
        $num_rows = $stmt->num_rows;
        if ($num_rows > 0) {
            // loop results...
        }
        $stmt->free_result();
    }

Close the database:

    // close database
    $conn->close();

prepare and bind_* can be and should be outside of the loop. OK, thanks. I'm guessing that $stmt = $conn->stmt_init(); doesn't need to be called twice, right? I don't need to initialize a statement twice? You don't have to prepare the query multiple times. Just bind the parameters and execute it multiple times. From the PHP Manual: "For a query that you need to issue multiple times, you will realize better performance if you prepare a PDOStatement object using PDO::prepare() and issue the statement with multiple calls to PDOStatement::execute()." Hope this helps.
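The prepare-once / execute-many pattern the answers describe is not specific to mysqli or PDO. A sketch of the same shape with Python's sqlite3 (used here only because it is easy to run; the schema and data are made up):

```python
import sqlite3

# Prepare the SQL once, then bind and execute per iteration of the loop.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE photos (album_id INTEGER, name TEXT)")
conn.executemany("INSERT INTO photos VALUES (?, ?)",
                 [(1, "a.jpg"), (1, "b.jpg"), (2, "c.jpg")])

sql = "SELECT name FROM photos WHERE album_id = ?"  # the 'prepared' statement
for album_id in (1, 2):                             # bind + execute inside the loop
    names = [row[0] for row in conn.execute(sql, (album_id,))]
    print(album_id, names)
```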
An efficient way of showing/hiding a series of elements according to ranges Say you filter a long series of photos of people according to age ranges (20-25, 26-30, 31-35...). Given that a user can switch to any range they want, how could you make the showing/hiding efficient? Edit: Here's my current code.

HTML:

    <img src="http://..." age=""/>

jQuery:

    var str = '';
    $("#ageSelector option:selected").each(function () {
        str += $(this).attr("value") + "";
    });
    $('.photo').each(function () {
        var age = parseInt($(this).attr("age"));
        switch (str) {
            case '18to24':
                if (age <= 24) { $(this).show(); }
                if (age > 24) { $(this).hide(); }
                break;
            case '25to35':
                if (age >= 25 && age <= 35) { $(this).show(); }
                if (age < 25 || age > 35) { $(this).hide(); }
                break;
            case '36to45':
                if (age < 36 || age > 45) { $(this).hide(); }
                if (age >= 36 && age <= 45) { $(this).show(); }
                break;
            case '45to50':
                if (age < 45 || age > 50) { $(this).hide(); }
                if (age >= 45 && age <= 50) { $(this).show(); }
                break;
            case 'moreThan50':
                if (age < 50) { $(this).hide(); }
                if (age >= 50) { $(this).show(); }
                break;
            default:
                $(this).show();
        }
    });

(Note: the show conditions need && rather than ||, otherwise they are always true.) Can you provide any code snippets? Otherwise I'd say something like if (Number($(this).data("age")) <= maxVal && Number($(this).data("age")) >= minVal). I'm not sure how you're storing age on the image, so I'll just assume that you have an HTML5-compliant data attribute on your IMG tag. It's also unclear how you're getting the range of acceptable values. You haven't yet provided details on when all of this is happening. I'll assume it's on the change of your drop down.
    $('#ageSelector').change(function () {
        // Parse out min and max values from your drop down
        var range = $(this).find('option:selected').val().split("to");
        var min = parseInt(range[0]);
        var max = parseInt(range[1]);

        // To show just the elements in the range
        $('img.photo').filter(function () {
            return $(this).data('age') >= min && $(this).data('age') <= max;
        }).show();

        // To show elements in the range and hide all others
        $('img.photo').each(function () {
            var $this = $(this);
            $this.toggle($this.data('age') >= min && $this.data('age') <= max);
        });
    });

Forgot to mention that. At the moment I'm storing age as a separate attribute (age="32"), but I'm looking into changing that into a class (class="31to35"). The ranges are defined in the options of a select. Updated. Hopefully it's closer to what you're looking for. It is. Thanks a ton. In passing, does using a more precise 'img.photo' offer any advantage over using a simple class selector? Absolutely! Good question. Speed order: ID selection (#someid) is faster than element selection (img), which is faster than class selection (.someclass), which is faster than arbitrary attribute selection ([foo=bar]). General tips: always use an ID if you can, at least to provide context (#foo img), and always make your right-most clause as specific as possible, so #foo .class is slower than #foo img.class. If my answers have resolved your problem, please be sure to mark it as the answer! Oh, and as another comment, don't use custom attributes like that "age" one you're currently using. Make it HTML5-compliant, following the form data-someAttribute="value". So, data-age="10". The advantage is not only compliance, but in jQuery you can then use .data('age') rather than .attr('age').
If you can have your images in a table (with the age) like:

    var imgs = [
        ["http://site/img1.jpg", 32],
        ["http://site/img2.jpg", 27],
        ["http://site/img5.jpg", 24],
        ["http://site/img9.jpg", 22]
    ];

and you have a container <div id="container"></div>, you could do a function like:

    function loadRange(a, b) {
        for (var i = 0; i < imgs.length; i++) {
            if (imgs[i][1] >= a && imgs[i][1] <= b) {
                if (imgs[i][0] == null) $("#img" + i).show();
                else {
                    $("#container").append('<img src="' + encodeURI(imgs[i][0]) + '" id="img' + i + '" />');
                    imgs[i][0] = null;
                }
            } else if (imgs[i][0] == null) $("#img" + i).hide();
        }
    }

and call loadRange(20, 24) to load ages 20 to 24 (so each image is only loaded once, when needed). It's difficult to provide an exact answer without you providing more details of exactly what you're looking for, but is something like this along the right lines? Change the value in the select element to filter the div elements based on the selection. It uses the jQuery filter function to narrow down the selection based on the selected values, and hides all non-matching elements. You make a classic boolean mistake: why have an if statement with boolean return values when you can just return the result of the if statement's test?

    return $(this).attr("id") >= vals[0] && $(this).attr("id") <= vals[1];

@James Allardice: That's the sort of thing I'm looking for and much more concise. I haven't come across filter() yet (relatively new to jQuery) and will give it a try :)
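Underneath all of these answers is the same parse-and-filter logic: split the selected option value on "to" to get the bounds, then test each age against them. A language-agnostic sketch in Python, assuming option values shaped like those in the question ('25to35', 'moreThan50'):

```python
def parse_range(value):
    """'25to35' -> (25, 35); 'moreThan50' -> (50, None)."""
    if value.startswith("moreThan"):
        return int(value[len("moreThan"):]), None
    lo, hi = value.split("to")
    return int(lo), int(hi)

def in_range(age, value):
    lo, hi = parse_range(value)
    return age >= lo and (hi is None or age <= hi)

ages = [22, 27, 33, 48, 52]
print([a for a in ages if in_range(a, "25to35")])      # [27, 33]
print([a for a in ages if in_range(a, "moreThan50")])  # [52]
```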
Integer solutions of $n!+1=m^2$ Consider $4!=24$; if you add one you get $25=5^2$. The same occurs with $5! = 120 = 11^2 - 1$, and $7! = 5040 = 71^2 - 1$. Are there other solutions of the equation $n!+1 = m^2$? I verified that no other solution exists with $m<10^9$. Has the problem already been studied? Is there a proof that no other solution exists? Greetings. http://en.wikipedia.org/wiki/Brocard%27s_problem It's probably discussed in Guy, Unsolved Problems in Number Theory (I don't have the book at hand, so cannot check). You might enjoy this one: http://mathoverflow.net/questions/16341/on-polynomials-dividing-exponentials/ Gerry, it is in Guy, section D25, page 193 in the second edition. About 8 references.
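This is Brocard's problem, as the Wikipedia link notes; only $n=4,5,7$ are known, and whether any other solution exists is open (no proof is known either way). The search is easy to reproduce with exact integer arithmetic, e.g. in Python:

```python
import math

def brocard_solutions(limit):
    """All (n, m) with n! + 1 = m^2 for n up to limit, using exact integers."""
    solutions, factorial = [], 1
    for n in range(1, limit + 1):
        factorial *= n
        m = math.isqrt(factorial + 1)  # exact integer square root
        if m * m == factorial + 1:
            solutions.append((n, m))
    return solutions

print(brocard_solutions(100))  # [(4, 5), (5, 11), (7, 71)]
```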
Integral of Squared Spherical Harmonics The following integral comes out of an expression $\langle |Y_{l,m}(\theta, \phi)|^2\rangle$ over an orientation probability distribution: $$\int_{0}^{2\pi} \int_{0}^{\pi} Y_{lm}^2(\theta, \phi)Y_{l'm'}(\theta, \phi) \sin(\theta) d\theta d\phi$$ When $l = 1, m=1$ and $l' = 2, m' = 0$, the integral is $-\frac{1}{2\sqrt{5\pi}}$. For many other combinations, it is just $0$. What would the general formula for this integral be?
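Note that with complex spherical harmonics, $Y_{lm}^2$ carries a factor $e^{2im\phi}$ whose $\phi$-integral vanishes for $m\neq 0$, so the quoted value $-\frac{1}{2\sqrt{5\pi}}$ matches the reading $|Y_{lm}|^2$ (as in the expectation $\langle |Y_{l,m}|^2\rangle$ it comes from). Under that reading, the general formula follows from the standard triple-product (Gaunt) integral in terms of Wigner 3j symbols: $$\int |Y_{lm}|^2\, Y_{l'm'}\, d\Omega = (-1)^m \sqrt{\frac{(2l+1)^2(2l'+1)}{4\pi}} \begin{pmatrix} l & l & l' \\ 0 & 0 & 0 \end{pmatrix} \begin{pmatrix} l & l & l' \\ m & -m & m' \end{pmatrix},$$ which is nonzero only when $m' = 0$ and $l'$ is even with $l' \le 2l$. A numerical sanity check of the $l=1, m=1, l'=2, m'=0$ case (pure stdlib Python; the $\phi$ integral is done analytically):

```python
import math

# Check the l=1, m=1, l'=2, m'=0 case, reading the integrand as |Y_11|^2 * Y_20.
# |Y_11|^2 = (3/8pi) sin^2(theta),  Y_20 = sqrt(5/16pi) (3 cos^2(theta) - 1).

def integrand(theta):
    y11_abs2 = 3 / (8 * math.pi) * math.sin(theta) ** 2
    y20 = math.sqrt(5 / (16 * math.pi)) * (3 * math.cos(theta) ** 2 - 1)
    return y11_abs2 * y20 * 2 * math.pi * math.sin(theta)  # phi integral = 2*pi

def simpson(f, a, b, n=2000):
    """Composite Simpson's rule; n must be even."""
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3

value = simpson(integrand, 0.0, math.pi)
print(value)                              # approximately -0.126157
print(-1 / (2 * math.sqrt(5 * math.pi)))  # approximately -0.126157
```

With the 3j values $\begin{pmatrix}1&1&2\\0&0&0\end{pmatrix}=\sqrt{2/15}$ and $\begin{pmatrix}1&1&2\\1&-1&0\end{pmatrix}=1/\sqrt{30}$, the formula above gives exactly $-\frac{1}{2\sqrt{5\pi}}$, agreeing with the numerics.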
Double bond formation in alkenes during cracking In thermal or catalytic cracking, energy is used to break a long-chain alkane into smaller alkanes/alkenes. Since the energy goes into breaking the bonds, how is the C=C bond in alkenes formed? Bond formation is exothermic and so favourable. Can you explain exactly what is confusing you? Bond formation occurs as a result of the cracking - which is endothermic and requires heat. I understand that the input heat energy goes to breaking the bonds, but how is the double bond then formed? Do you want the mechanism for cracking? Bond breaking is endothermic, bond formation is exothermic. Alright, thank you; not fully clear, but I get the general drift. :) I don't know if I have understood your question well. During the process of cracking, homolysis of long chains occurs. Radical intermediates result from breaking the bonds. These intermediates are highly unstable and undergo rearrangements. As a consequence of the rearrangements, smaller alkanes and alkenes are formed. We have to point out that, due to the high reactivity of radicals, the control of radical reactions is very difficult and we cannot readily predict what happens inside the chemical reactor. I hope I've answered your question. To add a small point to Yomen's excellent answer: alkyl radicals can be turned into alkenes through loss of a hydrogen radical. For example, suppose that during ethane cracking an ethyl radical is produced. It can lose a hydrogen radical to form ethylene. $$\ce{^{\cdot}CH2-CH3 <=> CH2=CH2 + H^{\cdot}}$$ Alkyl radicals and H radicals are both relatively unstable, so this reaction is an equilibrium. Many other reactions that produce, interconvert, and consume radicals also happen during cracking. Ultimately the driving force is the high temperature, which favors products with more entropy.
The entropy of many small molecules is much higher than that of a few larger molecules, because of the additional translational degrees of freedom of the small molecules. That is what favors production of smaller molecules such as ethylene and hydrogen, as compared to larger molecules such as ethane.
Negatives: 안- vs -지 못하다 I haven't met my colleagues yet. My attempt: 내 동료들이 아직 안만(난/나요) Naver: 아직 동료들을 만나지 못했어 I kind of don't get why you use 못하다 here... To be honest, I'm really puzzled by the use of this verb. I see it everywhere with a slightly different meaning every time. Thanks in advance for the help! In general, 안 + Verb is a general "do not + Verb", whereas 못 + Verb means "can not + Verb"; however, 못하다 is much broader in meaning than the English "cannot", so that it is used instead of 안 in many cases where we would still use "do not" in English. Some examples of how 못하다 is used more broadly in Korean: When you haven't done (haven't been able to do) something yet: 동료들은 아직 못 만났어요. (In this case, you'll often see the word 아직 and a past tense.) Where there is some insufficiency; especially, something wasn't good: 그 영화를 즐기지 못했다 (I didn't/couldn't enjoy the movie). When there's an insufficiency of knowledge of some sort: 그것은 예상하지 못했어요 (I didn't/couldn't expect that). As you can see, in examples 2 and 3, "didn't" would be more natural for the English translation, yet Korean would often use 못하다. In general, there won't be a rule for when to use 못하다, but by hearing it used in various situations, you'll get a feel for when 못하다 is more appropriate than 안하다. I think it can be safer to use 못하다 in certain cases, though, to emphasize that something was involuntary and that you wanted to do it; 동료들은 못만났다 makes it seem like you want to / intend to meet them; if your boss asks about a project that you haven't finished, using 못했습니다 makes it seem more like you were trying, compared to 안했습니다. 안하다: do not do. Even though I can do it, I do not do it, by my own will. 못하다: can not do. "I haven't met my colleagues yet" is 아직 동료들을 만나지 못했어. Here I want to meet them, but due to a lack of skill or an external factor, I cannot meet them.
Mirror reflection of the picture Java Hi, so I have to make a mirror reflection of my drawn picture. There must be two pictures: one normal, and the second right below it as a mirror reflection. I don't know how to draw the second picture and flip it.

    public static void main(String[] args) throws FileNotFoundException {
        String path = "C:/Users/Desktop/Snow.ppm";
        FileReader fReader = new FileReader(path);
        Scanner inFile = new Scanner(fReader);
        String p3 = inFile.nextLine();
        String program = inFile.nextLine();
        int rows = inFile.nextInt();
        int cols = inFile.nextInt();
        int maxValue = inFile.nextInt();
        int[][] red = new int[rows][cols];
        int[][] green = new int[rows][cols];
        int[][] blue = new int[rows][cols];
        for (int c = 0; c < cols; c++) {
            for (int r = 0; r < rows; r++) {
                red[r][c] = inFile.nextInt();
                green[r][c] = inFile.nextInt();
                blue[r][c] = inFile.nextInt();
            }
        }
        inFile.close();
        BufferedImage img = new BufferedImage(rows, cols, BufferedImage.TYPE_3BYTE_BGR);
        for (int c = 0; c < cols; c++) {
            for (int r = 0; r < rows; r++) {
                int rgb = red[r][c];
                rgb = (rgb << 8) + green[r][c];
                rgb = (rgb << 8) + blue[r][c];
                img.setRGB(r, c, rgb);
            }
        }
        JLabel jLabel = new JLabel(new ImageIcon(img));
        JPanel jPanel = new JPanel();
        jPanel.add(jLabel);
        JFrame frame = new JFrame();
        frame.setSize(rows + 100, cols + 100);
        frame.add(jPanel);
        frame.setVisible(true);
    }

You want to deal with horizontally (or perhaps vertically) flipping an image. We can help you with that, but we are (or at least I am) not interested in any GUI code. You can just reverse the traversal direction of one dimension to flip the image accordingly.
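As the answer suggests, the flip is just a reversed row traversal. A sketch of the core idea (in Python for brevity; in the Java code above, the same thing is a second pass over the pixel arrays with the row index reversed): build an image twice as tall whose bottom half is the top half traversed in reverse.

```python
def with_mirror_below(pixels):
    """Return the image rows followed by the same rows in reverse order,
    i.e. the original picture with its vertical mirror directly below."""
    return [row[:] for row in pixels] + [row[:] for row in reversed(pixels)]

# Tiny 2x3 "image" of pixel values, just to show the row order.
img = [
    [1, 2, 3],
    [4, 5, 6],
]
for row in with_mirror_below(img):
    print(row)
# [1, 2, 3]
# [4, 5, 6]
# [4, 5, 6]
# [1, 2, 3]
```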
CORS error upload file ~4mb I'm building an app where the Angular front-end is on S3 as a static website and the Sails (0.10.3) API runs inside dokku with Node 0.11.13 and SSL on EC2. If a file is larger than about 4 MB I get the error "No 'Access-Control-Allow-Origin' header is present on the requested resource." The OPTIONS request is hitting my API and I can catch it in customMiddleware, but the POST with data is not reaching that far. On the front-end side I'm using angularjs-file-upload. If I turn off SSL then it works without any problems, but I would prefer to keep it on. Are you sure it allows requests larger than 4 MiB? Yes, I'm totally sure. I turned off SSL and then it works without any problems. I went up the chain - app itself -> dokku -> SSL - and the problem was even higher, in nginx. nginx.conf required one line more:

    proxy_read_timeout 1200s;

One more thing you can try, if it's nginx that's causing issues: look into your error log file, generally /var/log/nginx/error.log. If in that you see this line:

    *133774 client intended to send too large body: 3141911 bytes

it means the issue is one of size and you might want to fix it. The way you do it is: in your nginx.conf, in the http context, paste this anywhere:

    client_max_body_size 50M;

This will allow you to increase your body size to 50M, hence fixing the issue. I had a similar problem and I solved it with the help of this. It's because of the client_max_body_size configuration variable that nginx has.

1. Go to the nginx configuration file: sudo nano /etc/nginx/nginx.conf
2. Change the nginx.conf file: http { client_max_body_size 100m; }
3. Restart nginx: sudo service nginx restart or service nginx reload

For my MEAN stack (Node and Express) application it worked. Hoping for your case too!
Alternatively, Express code if steps 1, 2 & 3 don't work for you:

    const express = require('express');
    const cors = require('cors');
    const app = express();
    app.use(cors());

See the link below for a full, detailed explanation: https://www.digitalocean.com/community/questions/how-to-edit-nginx-setting-for-big-file-upload In the 2nd step I got an error and fixed it by changing the directive to client_max_body_size 100m;
Render a collection of forms using partial Rails has this pretty way to render a collection using a partial:

    <%= render partial: 'erb_partials/post', collection: @users, as: :user %>

How can I apply this in my loop of forms?

    <%= profile_form.fields_for :item_groups, @item_groups do |item_form| %>
        <%= render partial: 'item_groups/item_group', locals: {item_form: item_form} %>
    <% end %>
    <%= profile_form.submit %>

In your case you can't use the render partial: 'path', collection: @list method. If you unfold your first example:

    <%= render partial: 'erb_partials/post', collection: @users, as: :user %>

you get:

    <% @users.each do |user| %>
        <%= render 'erb_partials/post', user: user %>
    <% end %>

The above is such a common action that the render method included the :partial option to wrap the render call in an each loop. render doesn't include any other iteration wrapper options at this moment.
Visual Studio project not saved Somebody please tell me it is possible to recover Visual Studio source after VS crashes! I have just spent 5 hours writing a new utility app, and running it with the "Save before Build" option turned on, as well as AutoRecover every 5 minutes. But after VS crashed I am unable to find anything other than an empty project folder!!! I can't believe that files are only "saved" to memory?!? That can't be right! That 5 hours of work has to be somewhere on the disk? I attempted to reproduce the scenario by creating a Junk project, running it, and then killing VS using Task Manager. It wasn't quite the same situation, as a dialog box actually popped up asking to save the project. I ignored this and continued the kill. I was then able to find the Junk source in C:\Users\{Username}\AppData\Local\Temporary Projects as advised - but not my earlier project. I think it's gone. If it is not there, it was likely cleaned up. Your only real option now is to not touch your machine and run a disk recovery utility to see if you can undelete some of the files and recover some of your work. "ALT+F, S" and "CTRL+S" are your friends. I don't think I go more than 2 minutes without doing one of those. Still being bitten by this in 2012, and no, @ChrisLively, Ctrl+S does not do anything; it just saves to the temp directory, which C# wipes on startup. You have to deliberately save the solution, which is Ctrl+Shift+S. @ChrisBurt-Brown: Ctrl+S should save the file you are currently working on (not to temp, but a real save). However, that may be different depending upon your own configuration. My point was simply that I've never trusted an "auto save" in any tool and instead prefer to perform a regular save quite often. 2+ years after my original comment and I still believe this is true. @ChrisLively: Sorry for the confusion, I was using Visual C# Express 2010, which may differ from Visual Studio (frustration caused inattentiveness).
In C# Express, if you have not saved the solution, the solution has no folder, and pressing Ctrl+S saves to temp. It is the only situation I have ever come across where Ctrl+S was not enough. I do not trust auto save either. This has to be the worst new feature we never asked for! I've lost some three projects this way. (Yes, I am sometimes a bit absent-minded.) I have Visual Studio Ultimate. Hitting Ctrl+S will not help; you need to explicitly save the solution. Can't agree more. This feature is SO BAD it can hardly be called a feature. You either know about this or you will eventually lose hours (days in my case) of work. In order to make sure this doesn't happen again in the future, you can go to Tools > Options > Projects and Solutions and check the item "Save new projects when created". Note that you may need to check "Show all settings" in VS 2010. This setting is not currently available in Tools > Options > Projects and Solutions. @NelsonNyland You're correct, I can't find it in Visual Studio 2022. That suggests to me that no one liked this "feature" and it's been removed. From chatter I've seen online, it seems that perhaps the feature was sometimes buggy and the temporary project couldn't even be found in the temporary folder where it ought to be. If the box is always checked, then in the worst case you have to delete a bunch of temp projects later. But if the box is unchecked, potentially you're creating temporary projects without realizing, and later they're lost. That's bad enough to kill the feature. I got the same problem as the poster. I ran (and thus implicitly saved) the project several times, but apparently the source is gone. C:\Users\<username>\Documents\Visual Studio 2008\Backup Files\<ProjectName>\ only contains an empty folder with the project name. C:\Users\{Username}\AppData\Local does not even contain a folder named Temporary Projects. I searched all of C: and found nothing relevant.
+1 for saving days of work, which were destroyed by a corrupted recovery file. Until you actually hit save solution, the project is saved in a temporary directory. This allows you to create quick scratch projects and not clutter up your projects folder. Hopefully it hasn't been cleaned up yet; if it hasn't, it will be in your application data directory. On Vista, that is C:\Users\{Username}\AppData\Local\Temporary Projects Sorry, I forget what that was on XP. Moral of the story: if it's not a scratch project, save the project early. Thank you sir, you just saved my day :), found it in Temporary Projects like you said (Windows 7) For Visual C# 2010 ... I made a simple WPF project and it moved to C:\Documents and Settings\scharusaengrit\My Documents\Visual Studio 2010\Backup Files after I rebooted my computer. grep/search your entire drive for sln, proj and cpp (or whatever language) files. Good luck. I recovered a source file saved in C:\Users\<username>\Documents\Visual Studio 2008\Backup Files\<ProjectName>\ (this was on Vista) If you ever want VS Management Studio files, I just found mine in C:\Users\<User>\Documents\SQL Server Management Studio\Backup Files\Solution1\ as ~AutoRecover% files... I had the same problem after a power loss on a WIP project which I hadn't saved! You'd think we'd know better by now! This awesome bit of software, Recuva, scanned my disk and I managed to get my files back after filtering *.cs. Phew! This happens when you open a website from Visual Studio. Just save your project to another location: File > Save As > "your location", that's it. Indeed it does. Check your two projects folders. Both may contain your project and yet one is a folder with nothing but a .sln file in it. I'm very confident your work is on your disk somewhere in your Documents folder :), if all is set to defaults. Apparently some versions of Windows store backups of your files. If none of the above suggestions worked for you, you may try this.
common-pile/stackexchange_filtered
MOLDEN file from ORCA output I'm trying to convert an ORCA output file into a MOLDEN file (via molden itself). I hit the "Write" button and it converts the file, yes, but the displacement vectors are zero. I need that information in that specific file formatting. What can I do? I think you can use the orca_2mkl tool (provided with Orca) to get molden files from Orca. What kind of displacement vector do you need? Is this for vibrational modes? @SRMaiti yes I need vibrational modes. I tried that plugin, but it does not convert the vibrations either. It would help to know what version of Orca you're using. My suggestion would be to use another tool. I don't know if cclib supports Molden vibrations, but that would be my suggestion. Even if not, it wouldn't be hard to write some Python to write cclib vibrations in Molden format. Is there a particular reason you need the vibrations in Molden format? @GeoffHutchison it is ORCA 5.0.3. cclib does not support this version yet. I need to pass it to another script in order to produce a Wigner ensemble. Yeah, Orca 5 changed their formatting a lot, so I don't think you're going to get there at the moment without some coding. I'd still suggest working with the cclib team. @AndreaPellegrini did you get an answer to this now? Perhaps you wrote some code to do it yourself? Please update us! :) Not yet, unfortunately. The cclib developers are still working on it. @AndreaPellegrini the cclib developers are working on this? @NikeDattani Yes, they do. @AndreaPellegrini could you please add details such as a sample input file, sample output with the missing output, and also the expected output? I just tested and the current version of cclib can open Orca 5 output and write a molden file with the normal modes. If the question is reopened, I can provide an answer with an example. Thank you @Antonio I have flagged for the question to be reopened.
How to avoid displaying the results of an API query when assigning a variable I am using the google-api-client gem to interact with Google Calendar. When saving a list of events like this: @events = calendar.list_events(@tmpcalendar) The output of calendar.list_events(@tmpcalendar) is huge. I just want to save the result to @events (don't need the huge content displayed). I have tried: $output.close, redirecting to /dev/null, appending ;nil; but the huge result is displayed anyway. To replicate this (with a solution that works) you can run: large_text = <<~EOS ...huge text here ...huge text here ...huge text here ... ...huge text here EOS eventos_good = large_text ; nil # Assigns the content, does not display it eventos_annoying = large_text # Assigns the content, but displays the huge text This works for that case, but does not work with the above scenario (API call). Is it possible to avoid that huge output just for that variable assignment above? Thanks. According to the docs, the Google API gem has debug logging enabled by default. To decrease the output, set the logging level to something higher: Google::Apis.logger.level = Logger::FATAL # INFO, WARN, ERROR, FATAL each decrease the output. # FATAL is the most restrictive. That was it. Thank you! From your code, it looks like you're using the Google API Client gem? Which version? If you're not using this gem in a Rails application, then it will default all logging to $stdout: https://github.com/googleapis/google-api-ruby-client/blob/1135e74c4896d4ec8aa02c14e3532d9a14514815/lib/google/apis.rb#L34-L40 If you want to completely silence the logging, then override the logger: require 'logger' Google::Apis.logger = Logger.new(nil)
Reverse tree scan, is it possible? I have an HTML structure like the following: <ul id="mainMenu"> <li class="itm_1"> Text... </li> <li class="itm_2"> Text... <ul> <li class="itm_3"> Text... <ul> <li class="itm_4"> Text... </li> <li class="itm_5"> Text... </li> </ul> </li> <li class="itm_6"> Text... </li> </ul> </li> </ul> What I'd like to do is add a class to the active element and then add another class to every other element on the same path, like the example below: <ul id="mainMenu"> <li class="itm_1"> Text... </li> <li class="itm_2 ANCESTOR_ACTIVE"> Text... <ul> <li class="itm_3 ANCESTOR_ACTIVE"> Text... <ul> <li class="itm_4"> Text... </li> <li class="itm_5 ACTIVE"> Text... </li> </ul> </li> <li class="itm_6"> Text... </li> </ul> </li> </ul> I can find the element that will be considered active and add the "active" class to it, but I cannot find a way to add the "ancestor_active" class to the elements above the active element on the same path. For now in my script I have the following code: <?php $current_page = $page_id; // Let's say it is 5 ?> <script type="text/javascript"> var cp = { p : <?php echo $current_page; ?> }; jQuery(document).ready( function($) { $('.itm_' + cp.p).addClass('active'); } ); </script> Note: the nested lists do not always have the same depth. In some cases there are more deeply nested lists and in other cases there are no sub-lists. How can I climb to the top element #mainMenu and add the "ancestor_active" class to each li element on the same path using jQuery? 
Use parents() to look up the tree for all matches $('#mainMenu li').click(function(){ $('#mainMenu li.active').removeClass('active'); $(this).addClass('active').parents('li').addClass('active'); }); API reference http://api.jquery.com/parents/ That was coooool :) Thanks a lot @charlietfl you can use the parent function, I guess. Edit: a small example <script type="text/javascript"> var cp = { p : 4 }; jQuery(document).ready( function($) { var c = $('.itm_' + cp.p) c.addClass('active') set_ancestor_active(c.parent()) } ); function set_ancestor_active(c) { while (c.is('li') || c.is('ul')) { if (c.is('li')) c.addClass('ancestor_active') c = c.parent() } } </script> can you give me an example? For how many loops? Is that a good way that doesn't waste memory for no reason? If you can wait a bit, because testing will require some time, I'll try it with your structure.
Validation error in allowtransparency attribute on the iframe I am using the Facebook Like button on my website. So I used allowTransparency='true', but validation shows an error like this: The allowtransparency attribute on the iframe element is obsolete. Use CSS instead. How can I fix this problem? Thank you. You have apparently used a so-called HTML5 validator. When it reports something as “obsolete”, it means that the construct works, but the HTML5 drafts tell authors not to use it. So this is a problem only if you take it as a problem, or someone (like company rules that require “valid markup”) makes it a problem. Cf. the question “There is no attribute allowtransparency”. Some answers to it comment on some alternate ways. Thank you for the valuable answer. Well... that's not entirely correct. HTML5 validation gives a warning, but you can still use the case-sensitive attribute for other browsers (or if you use a doctype that is older than <!DOCTYPE html>...) just add to the IFRAME's attribute list: <!--[if IE] allowTransparency="true" --> Hope it helps. @EladKarako, which part of the answer is not correct? You are suggesting the use of old doctypes or an IE “pseudocomment” trick to prevent an error message from an HTML5 validator. But how does this refute anything in the answer? (And it is an error message, not a warning.)
Cannot read properties of undefined (reading 'users') | interaction.client.api.users | Discord.js V14 This error: TypeError: Cannot read properties of undefined (reading 'users') at Object.execute (D:\Mortal\Belgeler\Kod Denemeleri\Discord Botu\Via\v4 (v14)\commands\Kullanıcı\profil.js:34:55) at module.exports (D:\Mortal\Belgeler\Kod Denemeleri\Discord Botu\Via\v4 (v14)\events\interactionCreate.js:28:21) at processTicksAndRejections (node:internal/process/task_queues:96:5) And this is my code: const APIFetch = await interaction.client.api.users(user ? user.id : interaction.user.id).get() I could not find the solution to the problem on the internet because it is a new version. I researched for about 2 hours; at first, I looked at the sites and then looked at what data I could pull through interaction or the client. We can't pull API data. That's why we get the error that "users" can't be read. At least that's it, as I understand it... How do I resolve this error? Please clarify your specific problem or provide additional details to highlight exactly what you need. As it's currently written, it's hard to tell exactly what you're asking. Use interaction.client.users directly: const data = await interaction.client.users.fetch(user ? user.id : interaction.user.id);
How do I generate serial WIF keys? The program generates only one key in Python How do I generate a series of WIF keys which contains at least 1000 keys? The code I used generates only one key and I want more. Please can you help me out? I tried to change it but it still works the same. All I want is to generate more of it. The code I used to generate the one WIF key can be found below: import os import hashlib def run(name): print("{} started!".format(name)) def sha256(data): digest = hashlib.new("sha256") digest.update(data) return digest.digest() def ripemd160(x): d = hashlib.new("ripemd160") d.update(x) return d.digest() def b58(data): B58 = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz" if data[0] == 0: return "1" + b58(data[1:]) x = sum([v * (256 ** i) for i, v in enumerate(data[::-1])]) ret = "" while x > 0: ret = B58[x % 58] + ret x = x // 58 return ret class Point: def __init__(self, x=0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798, y=0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8, p=2**256 - 2**32 - 2**9 - 2**8 - 2**7 - 2**6 - 2**4 - 1): self.x = x self.y = y self.p = p def __add__(self, other): return self.__radd__(other) def __mul__(self, other): return self.__rmul__(other) def __rmul__(self, other): n = self q = None for i in range(256): if other & (1 << i): q = q + n n = n + n return q def __radd__(self, other): if other is None: return self x1 = other.x y1 = other.y x2 = self.x y2 = self.y p = self.p if self == other: l = pow(2 * y2 % p, p-2, p) * (3 * x2 * x2) % p else: l = pow(x1 - x2, p-2, p) * (y1 - y2) % p newX = (l ** 2 - x2 - x1) % p newY = (l * x2 - l * newX - y2) % p return Point(newX, newY) def toBytes(self): x = self.x.to_bytes(32, "big") y = self.y.to_bytes(32, "big") return b"\x04" + x + y def getWif(privkey): wif = b"\x80" + privkey wif = b58(wif + sha256(sha256(wif))[:4]) return wif if __name__ == "__main__": randomBytes = os.urandom(32) print(" " + getWif(randomBytes)) Thanks in advance! 
What if you repeat your code 1000 times in a loop? i'm gonna try some new way to solve puzzle. You don't need a "new way". Your way works fine. You just need to call your function in a loop 1000 times, as mkrieger1 suggested. please can you show me to add this function? You already have it, it's called getWif. TypeError: 'NoneType' object is not callable Solution: don't try to call a None object like a function.
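A minimal sketch of the loop approach suggested above, reusing the sha256/b58/getWif helpers from the question (note that the leading-zero check here uses data[0] == 0, the standard Base58Check convention):

```python
import os
import hashlib

B58 = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def sha256(data):
    return hashlib.new("sha256", data).digest()

def b58(data):
    # Standard Base58Check: each leading zero byte becomes a '1'
    if data and data[0] == 0:
        return "1" + b58(data[1:])
    x = sum(v * (256 ** i) for i, v in enumerate(data[::-1]))
    ret = ""
    while x > 0:
        ret = B58[x % 58] + ret
        x //= 58
    return ret

def get_wif(privkey):
    # 0x80 prefix marks a mainnet private key; append a 4-byte checksum
    wif = b"\x80" + privkey
    return b58(wif + sha256(sha256(wif))[:4])

# Call the function 1000 times instead of once
keys = [get_wif(os.urandom(32)) for _ in range(1000)]
print(len(keys))
```

Each uncompressed mainnet WIF string produced this way is 51 Base58 characters long and starts with '5'.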
How can I check for a string over multiple lines and quote it? I have to fix a YAML file. I have to check the file for the text field (I can ignore all other fields). If the content isn't quoted, I want to add the quotes. text: - Any text Should return text: "- Any text" So I check line by line of the original file to create a new file: while(!feof($file)){ $line = fgets($file); $re = "/text:\s*\K([^\"]+?)$/m"; $subst = "\"$1\""; $result = preg_replace($re, $subst, $line); fwrite($new_file, $result); // Write line to a new file } But it doesn't work if the text has more than one line. This is what happens at the moment: text: - Any text\n which has more than one line format: do nothing with that This should be: text: "- Any text\n which has more than one line" format: do nothing with that How can I check over multiple lines and add quotes if they are missing? If the quotes are already set, nothing should happen to these lines. You can use: $result = preg_replace('/text:\s*\K(.+?)(?=\R^\w+: )/ms', '"$1"', $line); RegEx Demo (?=\R^\w+: ) is a lookahead that makes sure the match runs until the next line starts with some word followed by a colon: text:\s*\K([\s\S]+?)(?=\n[^\s]+|$) Try this. Replace with "$1". See demo. https://regex101.com/r/iS6jF6/14 $re = "/text:\\s*\\K([\\s\\S]+?)(?=\\n[^\\s]+|$)/i"; $str = "text: - Any text\ntext: - Any text\n\n which has more\n than one\n line\nformat: do nothing with that\ntext: - Any text"; $subst = "\"$1\""; $result = preg_replace($re, $subst, $str); Try this <?php $str="Text: -any text"; $pattern=array(); $pattern[0]="/\-/"; $replacement=array(); $replacement[0]="\"-"; echo preg_replace($pattern,$replacement,$str);?> Demo : https://eval.in/299211
Why does ng-style call the function bound to it 2 times? I am using ng-style to pass the styling in one of my directives. Like so: <my-component ng-style="test()" ng-model="message"></my-component> And the directive: myModule.directive('myComponent', function(mySharedService) { return { restrict: 'E', controller: function($scope, $attrs, mySharedService) { $scope.test = function(){ console.log(1) }; $scope.$on('handleBroadcast', function() { $scope.message = 'Directive: ' + mySharedService.message; }); }, replace: true, template: '<input>' }; }); JSFIDDLE That logs '1' two times. Why is the test function called 2 times? Once to get the value, again to see if it has changed. This is how watches work, and watches are processed every apply-digest cycle! This means watch expressions are called at least once per apply-digest cycle, and at most 10 times, which is the hard limit the Angular team enforces. Perhaps this picture clears it up a little! The arrow back from watch list to eval queue is the relevant bit. HTH! Hey thanks. One follow-up question. In my application I also noticed logging without any change. Do the watches periodically do checks? Every time a $digest loop runs, that function is going to be called. As @JesusRodriguez said: every $digest! In JavaScript, a function's value can only be determined by calling it, so it calls it every time to check! @JimmyKane a little spamming maybe but maybe this helps: http://angular-tips.com/blog/2013/08/watch-how-the-apply-runs-a-digest/ Also #angularjs in freenode IRC helps with these basic questions. Just to elaborate on stevuu's answer -- Angular does a digest "cycle" which will be multiple synchronous digests until it gets a no-op where nothing changed. I think it would be a minimum of two iterations whenever anything changed. It would be one iteration when nothing changed. It could be several iterations though. 
Consider this: Controller A: $rootScope.$watch('a', function(val) { $rootScope.b = val + 1; }); Controller B: $rootScope.$watch('b', function(val) { $rootScope.c = val + 1; }); Controller C: $rootScope.$watch('c', function(val) { $rootScope.d = val + 1; }); Now if you do a $scope.$apply(function() { $rootScope.a = 5; }) The end result should be that a == 5, b == 6, c == 7 and d == 8. Except you don't know what order the $watches run in; it's nondeterministic. So it could take like four passes to end up in its final state. And each "pass" calls every $watch defined in your app. Thanks. The point about nondeterminism cleared things up for me.
How can I add a custom mimetype for rendering in jupyterlab? I have some python calculations which are being done and all the data is being stored as possibly a json or a string object. I would like to create a custom renderer for this, such that it reads this json object and renders a react widget. How can I do the same? For example I have: So that when the reply comes from python, I would like to render the content.data as react-widget rather than text/plain
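A sketch of what the kernel side can publish, based on the Jupyter display_data message format: the message content carries a MIME bundle, so a JupyterLab renderer extension registered for a custom MIME type (the type name below is a made-up example) would receive the structured data instead of the text/plain fallback. In IPython such a bundle can be published with IPython.display.display(bundle, raw=True).

```python
# Hypothetical payload for a custom React renderer; only the structure of
# the content dict below follows the Jupyter messaging spec for
# display_data ("data" MIME bundle plus "metadata").
payload = {"name": "result", "values": [1, 2, 3]}

content = {
    "data": {
        # custom MIME type a renderer extension would register for
        "application/vnd.myorg.react-widget+json": payload,
        # plain-text fallback for frontends without the extension
        "text/plain": repr(payload),
    },
    "metadata": {},
}

print(sorted(content["data"]))
```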
MS Word Restricting Header to Subset of Document In MS Word, how can I restrict the page header (and footer) to a subset of the document? I don't want them to appear on every page. This definitely belongs on Super User. This is accomplished using Section Breaks from the Insert menu. Once you define a section you can then modify its header without affecting the rest of the document by first turning off Same as Previous. Here is a good resource for working with sections. I would suggest reading through it thoroughly before attempting this because working with sections can be tricky if you're not familiar with how Word uses them. You have to set up sections. Sections are bigger units than paragraphs, and they can have their individual headers and footers.
What is an integral transform, and is integration by parts an example? I came across the term "integral transform", and I'm curious what exactly this means, and whether integration by parts would be considered an integral transform. Examples of integral transforms include Laplace, Fourier, Mellin, Hankel, and Bessel transforms. Integral transforms look like $$ \int_a ^b K(s, t) f(t) \ dt $$ where the function $K$ is called the kernel. The function $K$ acts as a catalyst, changing the form of the function $f$ fed into the transform. As @Dr. MV has said, the Laplace and Fourier transforms are common examples. The Laplace transform is $$ \mathcal{L}(f) = \int_0 ^{\infty} e^{-st}f(t) \ dt $$ and the Fourier (well, the convention used in the text I'm reading) is $$ \mathcal{F}(f) = \frac{1}{2\pi}\int_{-\infty} ^{\infty} e^{i\omega x} f(x) \ dx $$ Integration by parts is not an integral transform; it is a method we use to turn hard integrals into easier ones. You'll usually use integral transforms in differential equations. They're very powerful there. Just a comment: one thing is what an "integral transform" is; another is what it is useful for. The beauty of the Laplace transform, for example, is that it changes a "differential equation" problem into an "algebraic" one. So, instead of solving the differential equation directly, you first transform it (to the "s" variable), solve the algebraic equation, and transform back by an inverse Laplace transform to get the solution in the original variable. A very common application is in steady-state analysis of electrical circuits: instead of analyzing the problem over "time" (using t as a variable) you transform it to the "frequency space" (using s as a variable) and carry out the analysis as you need.
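The comment above about turning a differential equation into an algebraic one can be made concrete with a standard worked example. Using the derivative rule $\mathcal{L}(f') = s\,\mathcal{L}(f) - f(0)$, the initial value problem $y' + y = 0$ with $y(0) = 1$ transforms into $$ s Y(s) - 1 + Y(s) = 0 \quad\Longrightarrow\quad Y(s) = \frac{1}{s+1}, $$ and since $\mathcal{L}(e^{-t}) = \frac{1}{s+1}$, inverting the transform recovers the solution $y(t) = e^{-t}$.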
Avoid the star symbol being found when searching for the question mark "?" Given the following minimal example: \documentclass{article} \begin{document} $a \star b$. \end{document} This produces the desired output in the PDF file. However, my problem is that if I search in the PDF file (with CTRL-F) for the question mark character "?" (without quotation marks), this specific star symbol is found by the search engine as if it were a question mark (although it is a star). This is annoying, because I like to search my final PDF for question marks in order to find outdated labels/citations etc. quickly. If there are lots of symbols like \star that are found as "false positives", this workflow gets tedious and error-prone. Question: Is there a way to fix this issue? I want to use symbols like \star in my document but I don't want to find them in the final PDF when searching for question marks. Related: https://tex.stackexchange.com/questions/451235 and https://tex.stackexchange.com/questions/450382. As a side note: Why don't you use LaTeX's warning messages to find undefined references? @schtandard Mainly as a double safety check and because the log files can become quite long from warnings that are intentionally ignored (might be because the original template is already shipped like that). You can try glyphtounicode: \documentclass{article} \input glyphtounicode \pdfgentounicode=1 \begin{document} $a \star b$. \end{document} +1: Is there a reference for glyphtounicode somewhere? @Dr.ManuelKuehner See the pdftex documentation. That seems to work, thank you. @SampleTime Possible alternative: the pdftex manual states that the cmap package (https://www.ctan.org/pkg/cmap) has the same effect ("LaTeX users could load the cmap package to achieve the same effect."). But I think that Ulrike's approach is more general and safer (see https://tex.stackexchange.com/questions/451235).
How to change font size throughout an iPhone app with a button I have an iPhone app with lots of UILabels throughout its screens. I am using Storyboards. My question is this: how can I change the font size in all of the labels in the app with a button or slider in the Settings screen? I would also like there to be a maximum size, because otherwise the text will get too big for the screen. Thanks! Check the UIAppearance protocol. Try this: -(IBAction)sliderValueChanged:(UISlider *)sender { CGFloat maxFontSize = 24.0; CGFloat minFontSize = 8.0; CGFloat sliderValue = sender.value; if ((sliderValue < maxFontSize) && (sliderValue > minFontSize)) { label1.font = [UIFont boldSystemFontOfSize:sliderValue]; label2.font = [UIFont boldSystemFontOfSize:sliderValue]; label3.font = [UIFont boldSystemFontOfSize:sliderValue]; } } I haven't tested this code but I think it should work. Hope that helps. Well, there isn't a fantastic way to do this. If you are looking for a quick shortcut, you're out of luck. You will need to create a method that you can run in each view controller's viewDidLoad: method, but first do this: First, somewhere in your app have a slider. Say the slider ranges from 1-5. For every value after 1, save an int somewhere that you can refer to later. Any file will do. See NSCoding. Essentially, you take the number, in this case 5, subtract one, and save that. So, if the user chose 5 you would save 4. Okay, now in each controller's viewDidLoad, have a similar method that does the following: Get the int value that you saved. Now take your base text size and add the number you saved. You will need that as a CGFloat later. I'm not positive on how to use it, but you can find out. Then use: [label1 setFont:[UIFont fontWithName:@"Name Of font" size:<#(CGFloat)#>]] You will have to use that on all UILabels. If you need anything else let me know. Yeah, that's a bit too complicated for me (I'm kind of a newbie). Thanks for replying and helping out though! @Evan, No problem. 
But let me know what you are confused about. I would love to help out some more. I have tons of labels in my project, doing that to all of them would take a really long time. Yeah, but there really isn't any other alternative as of yet.
Pip list showing random packages when changing Python versions in conda env Below are the steps/chronology: I create a conda env using conda create --name xkcd and activate it using conda activate xkcd I check my Python version using python --version, which comes out to be 3.11.5 Now I do pip list to see if any random packages are installed. They aren't yet. I see only pip, openssl, setuptools, wheel. I want Python 3.10 for my project (mandatorily, to match other devs), so I do conda install python=3.10 Now I again do pip list and see random packages from my other conda envs being listed, while conda list shows only the appropriate packages Another mandate in my project is that I have to use a PyPI repo (https://pkgs.dev.azure.com/azure-sdk/public/_packaging/azure-sdk-for-python/pypi/simple/) for installing some packages which are not available in conda (because conda points to an internal repo my firm has). So when I use pip to install something, it looks at the random packages it found in point #5, and it's messing up my installations. Side notes: I am using Miniconda. I am on Windows, and when I do where pip, I see 2 pips - one from my conda env (C:\UBS\Dev.conda\envs\devoai\Scripts\pip.exe) and the other from C:\Program Files\choco\miniconda . Is this weird? Also, I suspect there is some caching issue, as I am damn sure the random packages I see coming in are not any usual pip/python dependencies or default packages (for example, the tiktoken package). 
"and the other from C:\Program Files\choco\miniconda" you have to type conda activate devoai before. @phili_b I was already doing this man. It's an obvious step, but I will add it in the question's body. Still nothing works... See duplicate. This is very likely due to user site packages. Since user site only loads matching Python versions (through minor, e.g., 3.10 or 3.11), then the reported behavior indicates that the user site has Python 3.10 packages, but not Python 3.11.
Badblocks and fsck say HDD is clean but Linux marks it as read-only I'm relatively new to Linux so I'm not sure why this is happening. I have this one drive that Linux always marks it as read-only partition. At first I thought it was just the OS so I did a clean install from OpenMediaVault 2.0 to OMV 3.0 (Debian 7 & 8), I ran both fsck -f /dev/sdd and badblocks but both of them gave me a clean status and yet it still ends up as read-only. I have around a month left of warranty but I need to provide a good proof for them to accept it. I don't know how to read SMART data but here it is: smartctl 6.4 2014-10-07 r4002 [x86_64-linux-4.9.0-0.bpo.2-amd64] (local build) Copyright (C) 2002-14, Bruce Allen, Christian Franke, www.smartmontools.org === START OF INFORMATION SECTION === Model Family: Western Digital Red Device Model: WDC WD20EFRX-68EUZN0 Serial Number: WD-WCC4M5LAL82H LU WWN Device Id: 5 0014ee 20d3675a8 Firmware Version: 82.00A82 User Capacity: 2,000,398,934,016 bytes [2.00 TB] Sector Sizes: 512 bytes logical, 4096 bytes physical Rotation Rate: 5400 rpm Device is: In smartctl database [for details use: -P show] ATA Version is: ACS-2 (minor revision not indicated) SATA Version is: SATA 3.0, 6.0 Gb/s (current: 3.0 Gb/s) Local Time is: Mon Apr 24 12:43:14 2017 EEST SMART support is: Available - device has SMART capability. SMART support is: Enabled AAM feature is: Unavailable APM feature is: Unavailable Rd look-ahead is: Enabled Write cache is: Disabled ATA Security is: Disabled, frozen [SEC2] Wt Cache Reorder: Enabled === START OF READ SMART DATA SECTION === SMART overall-health self-assessment test result: PASSED General SMART Values: Offline data collection status: (0x80) Offline data collection activity was never started. Auto Offline Data Collection: Enabled. Self-test execution status: ( 0) The previous self-test routine completed without error or no self-test has ever been run. Total time to complete Offline data collection: (26280) seconds. 
Offline data collection capabilities: (0x7b) SMART execute Offline immediate. Auto Offline data collection on/off support. Suspend Offline collection upon new command. Offline surface scan supported. Self-test supported. Conveyance Self-test supported. Selective Self-test supported. SMART capabilities: (0x0003) Saves SMART data before entering power-saving mode. Supports SMART auto save timer. Error logging capability: (0x01) Error logging supported. General Purpose Logging supported. Short self-test routine recommended polling time: ( 2) minutes. Extended self-test routine recommended polling time: ( 266) minutes. Conveyance self-test routine recommended polling time: ( 5) minutes. SCT capabilities: (0x703d) SCT Status supported. SCT Error Recovery Control supported. SCT Feature Control supported. SCT Data Table supported. SMART Attributes Data Structure revision number: 16 Vendor Specific SMART Attributes with Thresholds: ID# ATTRIBUTE_NAME FLAGS VALUE WORST THRESH FAIL RAW_VALUE 1 Raw_Read_Error_Rate POSR-K 200 200 051 - 0 3 Spin_Up_Time POS--K 196 168 021 - 3158 4 Start_Stop_Count -O--CK 100 100 000 - 469 5 Reallocated_Sector_Ct PO--CK 200 200 140 - 0 7 Seek_Error_Rate -OSR-K 200 200 000 - 0 9 Power_On_Hours -O--CK 090 090 000 - 7381 10 Spin_Retry_Count -O--CK 100 100 000 - 0 11 Calibration_Retry_Count -O--CK 100 100 000 - 0 12 Power_Cycle_Count -O--CK 100 100 000 - 102 192 Power-Off_Retract_Count -O--CK 200 200 000 - 18 193 Load_Cycle_Count -O--CK 199 199 000 - 3042 194 Temperature_Celsius -O---K 113 103 000 - 34 196 Reallocated_Event_Count -O--CK 200 200 000 - 0 197 Current_Pending_Sector -O--CK 200 200 000 - 0 198 Offline_Uncorrectable ----CK 100 253 000 - 0 199 UDMA_CRC_Error_Count -O--CK 200 200 000 - 0 200 Multi_Zone_Error_Rate ---R-- 200 200 000 - 0 ||||||_ K auto-keep |||||__ C event count ||||___ R error rate |||____ S speed/performance ||_____ O updated online |______ P prefailure warning General Purpose Log Directory Version 1 SMART Log Directory 
Version 1 [multi-sector log support] Address Access R/W Size Description 0x00 GPL,SL R/O 1 Log Directory 0x01 SL R/O 1 Summary SMART error log 0x02 SL R/O 5 Comprehensive SMART error log 0x03 GPL R/O 6 Ext. Comprehensive SMART error log 0x06 SL R/O 1 SMART self-test log 0x07 GPL R/O 1 Extended self-test log 0x09 SL R/W 1 Selective self-test log 0x10 GPL R/O 1 SATA NCQ Queued Error log 0x11 GPL R/O 1 SATA Phy Event Counters log 0x21 GPL R/O 1 Write stream error log 0x22 GPL R/O 1 Read stream error log 0x80-0x9f GPL,SL R/W 16 Host vendor specific log 0xa0-0xa7 GPL,SL VS 16 Device vendor specific log 0xa8-0xb7 GPL,SL VS 1 Device vendor specific log 0xbd GPL,SL VS 1 Device vendor specific log 0xc0 GPL,SL VS 1 Device vendor specific log 0xc1 GPL VS 93 Device vendor specific log 0xe0 GPL,SL R/W 1 SCT Command/Status 0xe1 GPL,SL R/W 1 SCT Data Transfer SMART Extended Comprehensive Error Log Version: 1 (6 sectors) Device Error Count: 2121 (device log contains only the most recent 24 errors) CR = Command Register FEATR = Features Register COUNT = Count (was: Sector Count) Register LBA_48 = Upper bytes of LBA High/Mid/Low Registers ] ATA-8 LH = LBA High (was: Cylinder High) Register ] LBA LM = LBA Mid (was: Cylinder Low) Register ] Register LL = LBA Low (was: Sector Number) Register ] DV = Device (was: Device/Head) Register DC = Device Control Register ER = Error register ST = Status register Powered_Up_Time is measured from power on, and printed as DDd+hh:mm:SS.sss where DD=days, hh=hours, mm=minutes, SS=sec, and sss=millisec. It "wraps" after 49.710 days. Error 2121 [8] occurred at disk power-on lifetime: 7376 hours (307 days + 8 hours) When the command that caused the error occurred, the device was active or idle. 
  After command completion occurred, registers were:
  ER -- ST COUNT  LBA_48  LH LM LL DV DC
  -- -- -- == -- == == == -- -- -- -- --
  40 -- 51 00 00 00 00 00 c9 b1 00 40 00  Error: UNC at LBA = 0x00c9b100 = 13218048

  Commands leading to the command that caused the error were:
  CR FEATR COUNT  LBA_48  LH LM LL DV DC  Powered_Up_Time  Command/Feature_Name
  -- == -- == -- == == == -- -- -- -- --  ---------------  --------------------
  60 00 20 00 a8 00 00 00 c9 b1 00 40 08  5d+18:07:52.428  READ FPDMA QUEUED
  60 00 08 00 a0 00 00 80 41 19 b0 40 08  5d+18:06:52.865  READ FPDMA QUEUED
  60 00 08 00 98 00 00 80 41 19 98 40 08  5d+18:06:52.865  READ FPDMA QUEUED
  60 00 08 00 90 00 00 80 41 17 20 40 08  5d+18:06:52.864  READ FPDMA QUEUED
  60 00 08 00 88 00 00 80 41 19 70 40 08  5d+18:06:52.864  READ FPDMA QUEUED

Error 2120 [7] occurred at disk power-on lifetime: 7364 hours (306 days + 20 hours)
  When the command that caused the error occurred, the device was active or idle.

  After command completion occurred, registers were:
  ER -- ST COUNT  LBA_48  LH LM LL DV DC
  -- -- -- == -- == == == -- -- -- -- --
  40 -- 51 00 00 00 00 bb 04 fa 08 00 00  Error: UNC at LBA = 0xbb04fa08 = 3137665544

  Commands leading to the command that caused the error were:
  CR FEATR COUNT  LBA_48  LH LM LL DV DC  Powered_Up_Time  Command/Feature_Name
  -- == -- == -- == == == -- -- -- -- --  ---------------  --------------------
  60 00 08 00 60 00 00 bb 04 fa 08 40 08  5d+06:20:06.696  READ FPDMA QUEUED
  b0 00 d5 00 01 00 00 00 c2 4f 01 00 08  5d+06:19:38.449  SMART READ LOG
  b0 00 d5 00 01 00 00 00 c2 4f 06 00 08  5d+06:19:38.429  SMART READ LOG
  61 00 08 00 38 00 00 00 00 08 00 40 08  5d+06:19:38.407  WRITE FPDMA QUEUED
  b0 00 d0 00 01 00 00 00 c2 4f 00 00 08  5d+06:19:38.403  SMART READ DATA

Error 2119 [6] occurred at disk power-on lifetime: 7364 hours (306 days + 20 hours)
  When the command that caused the error occurred, the device was active or idle.
  After command completion occurred, registers were:
  ER -- ST COUNT  LBA_48  LH LM LL DV DC
  -- -- -- == -- == == == -- -- -- -- --
  10 -- 51 00 00 00 00 74 44 08 90 40 00  Error: IDNF at LBA = 0x74440890 = 1950615696

  Commands leading to the command that caused the error were:
  CR FEATR COUNT  LBA_48  LH LM LL DV DC  Powered_Up_Time  Command/Feature_Name
  -- == -- == -- == == == -- -- -- -- --  ---------------  --------------------
  61 00 c8 00 10 00 00 74 44 08 90 40 08  5d+06:19:10.175  WRITE FPDMA QUEUED
  ef 00 10 00 02 00 00 00 00 00 00 a0 08  5d+06:19:10.175  SET FEATURES [Enable SATA feature]
  27 00 00 00 00 00 00 00 00 00 00 e0 08  5d+06:19:10.175  READ NATIVE MAX ADDRESS EXT [OBS-ACS-3]
  ec 00 00 00 00 00 00 00 00 00 00 a0 08  5d+06:19:10.175  IDENTIFY DEVICE
  ef 00 03 00 46 00 00 00 00 00 00 a0 08  5d+06:19:10.175  SET FEATURES [Set transfer mode]

Error 2118 [5] occurred at disk power-on lifetime: 7364 hours (306 days + 20 hours)
  When the command that caused the error occurred, the device was active or idle.
  After command completion occurred, registers were:
  ER -- ST COUNT  LBA_48  LH LM LL DV DC
  -- -- -- == -- == == == -- -- -- -- --
  10 -- 51 00 00 00 00 74 44 08 90 40 00  Error: IDNF at LBA = 0x74440890 = 1950615696

  Commands leading to the command that caused the error were:
  CR FEATR COUNT  LBA_48  LH LM LL DV DC  Powered_Up_Time  Command/Feature_Name
  -- == -- == -- == == == -- -- -- -- --  ---------------  --------------------
  61 00 c8 00 e8 00 00 74 44 08 90 40 08  5d+06:19:03.155  WRITE FPDMA QUEUED
  ef 00 10 00 02 00 00 00 00 00 00 a0 08  5d+06:19:03.155  SET FEATURES [Enable SATA feature]
  27 00 00 00 00 00 00 00 00 00 00 e0 08  5d+06:19:03.155  READ NATIVE MAX ADDRESS EXT [OBS-ACS-3]
  ec 00 00 00 00 00 00 00 00 00 00 a0 08  5d+06:19:03.154  IDENTIFY DEVICE
  ef 00 03 00 46 00 00 00 00 00 00 a0 08  5d+06:19:03.154  SET FEATURES [Set transfer mode]

Error 2117 [4] occurred at disk power-on lifetime: 7364 hours (306 days + 20 hours)
  When the command that caused the error occurred, the device was active or idle.
  After command completion occurred, registers were:
  ER -- ST COUNT  LBA_48  LH LM LL DV DC
  -- -- -- == -- == == == -- -- -- -- --
  10 -- 51 00 00 00 00 74 44 08 90 40 00  Error: IDNF at LBA = 0x74440890 = 1950615696

  Commands leading to the command that caused the error were:
  CR FEATR COUNT  LBA_48  LH LM LL DV DC  Powered_Up_Time  Command/Feature_Name
  -- == -- == -- == == == -- -- -- -- --  ---------------  --------------------
  61 00 c8 00 c8 00 00 74 44 08 90 40 08  5d+06:18:56.131  WRITE FPDMA QUEUED
  ef 00 10 00 02 00 00 00 00 00 00 a0 08  5d+06:18:56.131  SET FEATURES [Enable SATA feature]
  27 00 00 00 00 00 00 00 00 00 00 e0 08  5d+06:18:56.131  READ NATIVE MAX ADDRESS EXT [OBS-ACS-3]
  ec 00 00 00 00 00 00 00 00 00 00 a0 08  5d+06:18:56.131  IDENTIFY DEVICE
  ef 00 03 00 46 00 00 00 00 00 00 a0 08  5d+06:18:56.131  SET FEATURES [Set transfer mode]

Error 2116 [3] occurred at disk power-on lifetime: 7364 hours (306 days + 20 hours)
  When the command that caused the error occurred, the device was active or idle.
  After command completion occurred, registers were:
  ER -- ST COUNT  LBA_48  LH LM LL DV DC
  -- -- -- == -- == == == -- -- -- -- --
  10 -- 51 00 00 00 00 74 44 08 90 40 00  Error: IDNF at LBA = 0x74440890 = 1950615696

  Commands leading to the command that caused the error were:
  CR FEATR COUNT  LBA_48  LH LM LL DV DC  Powered_Up_Time  Command/Feature_Name
  -- == -- == -- == == == -- -- -- -- --  ---------------  --------------------
  61 00 c8 00 a8 00 00 74 44 08 90 40 08  5d+06:18:49.121  WRITE FPDMA QUEUED
  ef 00 10 00 02 00 00 00 00 00 00 a0 08  5d+06:18:49.120  SET FEATURES [Enable SATA feature]
  27 00 00 00 00 00 00 00 00 00 00 e0 08  5d+06:18:49.120  READ NATIVE MAX ADDRESS EXT [OBS-ACS-3]
  ec 00 00 00 00 00 00 00 00 00 00 a0 08  5d+06:18:49.120  IDENTIFY DEVICE
  ef 00 03 00 46 00 00 00 00 00 00 a0 08  5d+06:18:49.120  SET FEATURES [Set transfer mode]

Error 2115 [2] occurred at disk power-on lifetime: 7364 hours (306 days + 20 hours)
  When the command that caused the error occurred, the device was active or idle.
  After command completion occurred, registers were:
  ER -- ST COUNT  LBA_48  LH LM LL DV DC
  -- -- -- == -- == == == -- -- -- -- --
  10 -- 51 00 00 00 00 74 44 08 90 40 00  Error: IDNF at LBA = 0x74440890 = 1950615696

  Commands leading to the command that caused the error were:
  CR FEATR COUNT  LBA_48  LH LM LL DV DC  Powered_Up_Time  Command/Feature_Name
  -- == -- == -- == == == -- -- -- -- --  ---------------  --------------------
  61 00 c8 00 88 00 00 74 44 08 90 40 08  5d+06:18:42.106  WRITE FPDMA QUEUED
  ef 00 10 00 02 00 00 00 00 00 00 a0 08  5d+06:18:42.106  SET FEATURES [Enable SATA feature]
  27 00 00 00 00 00 00 00 00 00 00 e0 08  5d+06:18:42.106  READ NATIVE MAX ADDRESS EXT [OBS-ACS-3]
  ec 00 00 00 00 00 00 00 00 00 00 a0 08  5d+06:18:42.105  IDENTIFY DEVICE
  ef 00 03 00 46 00 00 00 00 00 00 a0 08  5d+06:18:42.105  SET FEATURES [Set transfer mode]

Error 2114 [1] occurred at disk power-on lifetime: 7364 hours (306 days + 20 hours)
  When the command that caused the error occurred, the device was active or idle.
  After command completion occurred, registers were:
  ER -- ST COUNT  LBA_48  LH LM LL DV DC
  -- -- -- == -- == == == -- -- -- -- --
  10 -- 51 00 00 00 00 74 44 08 90 40 00  Error: IDNF at LBA = 0x74440890 = 1950615696

  Commands leading to the command that caused the error were:
  CR FEATR COUNT  LBA_48  LH LM LL DV DC  Powered_Up_Time  Command/Feature_Name
  -- == -- == -- == == == -- -- -- -- --  ---------------  --------------------
  61 00 c8 00 78 00 00 74 44 08 90 40 08  5d+06:18:35.095  WRITE FPDMA QUEUED
  e5 00 00 00 00 00 00 00 00 00 00 00 08  5d+06:18:35.095  CHECK POWER MODE
  ef 00 10 00 02 00 00 00 00 00 00 a0 08  5d+06:18:35.088  SET FEATURES [Enable SATA feature]
  27 00 00 00 00 00 00 00 00 00 00 e0 08  5d+06:18:35.088  READ NATIVE MAX ADDRESS EXT [OBS-ACS-3]
  ec 00 00 00 00 00 00 00 00 00 00 a0 08  5d+06:18:35.087  IDENTIFY DEVICE

SMART Extended Self-test Log Version: 1 (1 sectors)
Num  Test_Description    Status                    Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Short offline       Completed without error   00%        7378             -
# 2  Short offline       Completed without error   00%        7354             -
# 3  Short offline       Completed without error   00%        7330             -
# 4  Short offline       Completed without error   00%        7306             -
# 5  Short offline       Completed without error   00%        7282             -
# 6  Short offline       Completed without error   00%        7258             -
# 7  Short offline       Completed without error   00%        7239             -
# 8  Extended offline    Interrupted (host reset)  70%        7222             -
# 9  Short offline       Completed without error   00%        1135             -
#10  Extended offline    Completed without error   00%        1061             -
#11  Extended offline    Aborted by host           70%        1052             -
#12  Short offline       Completed without error   00%        1051             -
#13  Extended offline    Interrupted (host reset)  90%           5             -

SMART Selective self-test log data structure revision number 1
 SPAN  MIN_LBA  MAX_LBA  CURRENT_TEST_STATUS
    1        0        0  Not_testing
    2        0        0  Not_testing
    3        0        0  Not_testing
    4        0        0  Not_testing
    5        0        0  Not_testing
Selective self-test flags (0x0):
  After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.

SCT Status Version:                  3
SCT Version (vendor specific):       258 (0x0102)
SCT Support Level:                   1
Device State:                        Active (0)
Current Temperature:                 34 Celsius
Power Cycle Min/Max Temperature:     32/40 Celsius
Lifetime    Min/Max Temperature:      2/44 Celsius
Under/Over Temperature Limit Count:   0/0
Vendor specific:
01 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00

SCT Temperature History Version:     2
Temperature Sampling Period:         1 minute
Temperature Logging Interval:        1 minute
Min/Max recommended Temperature:      0/60 Celsius
Min/Max Temperature Limit:          -41/85 Celsius
Temperature History Size (Index):    478 (229)

Index    Estimated Time   Temperature Celsius
 230    2017-04-24 04:46    35  ****************
 231    2017-04-24 04:47    35  ****************
 232    2017-04-24 04:48    35  ****************
 233    2017-04-24 04:49    34  ***************
 ...    ..(152 skipped).    ..  ***************
 386    2017-04-24 07:22    34  ***************
 387    2017-04-24 07:23    35  ****************
 ...    ..( 23 skipped).    ..  ****************
 411    2017-04-24 07:47    35  ****************
 412    2017-04-24 07:48    34  ***************
 ...    ..(  5 skipped).    ..  ***************
 418    2017-04-24 07:54    34  ***************
 419    2017-04-24 07:55    35  ****************
 ...    ..(  9 skipped).    ..  ****************
 429    2017-04-24 08:05    35  ****************
 430    2017-04-24 08:06    34  ***************
 ...    ..( 98 skipped).    ..  ***************
  51    2017-04-24 09:45    34  ***************
  52    2017-04-24 09:46    35  ****************
 ...    ..(  2 skipped).    ..  ****************
  55    2017-04-24 09:49    35  ****************
  56    2017-04-24 09:50    34  ***************
 ...    ..(  6 skipped).    ..  ***************
  63    2017-04-24 09:57    34  ***************
  64    2017-04-24 09:58    35  ****************
 ...    ..( 18 skipped).    ..  ****************
  83    2017-04-24 10:17    35  ****************
  84    2017-04-24 10:18    34  ***************
 ...    ..( 11 skipped).    ..  ***************
  96    2017-04-24 10:30    34  ***************
  97    2017-04-24 10:31    35  ****************
 ...
..( 15 skipped).    ..  ****************
 113    2017-04-24 10:47    35  ****************
 114    2017-04-24 10:48    34  ***************
 ...    ..( 50 skipped).    ..  ***************
 165    2017-04-24 11:39    34  ***************
 166    2017-04-24 11:40    35  ****************
 167    2017-04-24 11:41    34  ***************
 ...    ..( 53 skipped).    ..  ***************
 221    2017-04-24 12:35    34  ***************
 222    2017-04-24 12:36    35  ****************
 223    2017-04-24 12:37    34  ***************
 224    2017-04-24 12:38    35  ****************
 ...    ..(  4 skipped).    ..  ****************
 229    2017-04-24 12:43    35  ****************

SCT Error Recovery Control:
           Read:     70 (7.0 seconds)
          Write:     70 (7.0 seconds)

Device Statistics (GP/SMART Log 0x04) not supported

SATA Phy Event Counters (GP Log 0x11)
ID      Size     Value  Description
0x0001      2        0  Command failed due to ICRC error
0x0002      2        0  R_ERR response for data FIS
0x0003      2        0  R_ERR response for device-to-host data FIS
0x0004      2        0  R_ERR response for host-to-device data FIS
0x0005      2        0  R_ERR response for non-data FIS
0x0006      2        0  R_ERR response for device-to-host non-data FIS
0x0007      2        0  R_ERR response for host-to-device non-data FIS
0x0008      2        0  Device-to-host non-data FIS retries
0x0009      2        8  Transition from drive PhyRdy to drive PhyNRdy
0x000a      2        9  Device-to-host register FISes sent due to a COMRESET
0x000b      2        0  CRC errors within host-to-device FIS
0x000f      2        0  R_ERR response for host-to-device data FIS, CRC
0x0012      2        0  R_ERR response for host-to-device non-data FIS, CRC
0x8000      4   515583  Vendor specific

Edit: I unmounted and ran fsck again and this time I got this error; it fixed it, but the drive ends up read-only again in a few days. I am also using ext4.

:/$ sudo fsck -f /dev/sdd1
fsck from util-linux 2.25.2
e2fsck 1.43.3 (04-Sep-2016)
WDRED3DATABASE: recovering journal
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
Free blocks count wrong (477683699, counted=477683887).
Fix<y>? yes
Free inodes count wrong (121845913, counted=121845915).
Fix<y>? yes

WDRED3DATABASE: ***** FILE SYSTEM WAS MODIFIED *****
WDRED3DATABASE: 255845/122101760 files (0.1% non-contiguous), 10694498/488378385 blocks

Did you run the fsck after unmounting all filesystems related to the disk? And did you run fsck on the filesystem, not the entire disk, i.e. /dev/sdd1?

The disk had one filesystem. I stopped everything related to the disk before I unmounted it, and yes, I did unmount before running fsck. I ran it as "fsck -f /dev/sdd1", not on the file system (can I run fsck on the filesystem only? How?)

I do not think there is anything wrong with the drive. I think it is more likely that the problem is in the way you are mounting the drive. Here is a link about fixing mounting issues, from the Ubuntu forums: https://askubuntu.com/questions/333287/external-hard-disk-read-only

I am using the default mode in the web UI to mount the HDD, not the terminal, the same as for all the other drives, and yet only this one gives me this issue. Do you think the problem lies with the ext4 file system? Should I try ext3 or btrfs? And would it matter if all my other drives are using ext4 and one of them has a different filesystem?

The file system can matter. What file system is on the drive, and what data is on there? Is it storage or an OS?

It is using ext4, and it's a storage HDD; the OS is on an SSD. I tried a full format and the drive stopped being recognized, so I RMA'd it and will see what happens.
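As an aside, the decimal LBA that smartctl prints next to each hex address is just the same number converted from base 16 to base 10, so a mangled log line can be recovered by hand. A quick Python sanity check (nothing here is specific to this drive; the values are taken from the error log above):

```python
# smartctl prints lines like "Error: UNC at LBA = 0x00c9b100 = 13218048":
# the decimal on the right is simply the hex LBA in base 10.
def lba_to_decimal(hex_lba: str) -> int:
    """Convert an LBA printed in hex (e.g. '0x00c9b100') to decimal."""
    return int(hex_lba, 16)

# Values from the error log above.
print(lba_to_decimal("0x00c9b100"))  # 13218048 (Error 2121, UNC)
print(lba_to_decimal("0xbb04fa08"))  # 3137665544 (Error 2120, UNC)
print(lba_to_decimal("0x74440890"))  # 1950615696 (Errors 2114-2119, IDNF)
```

The decimal form is what tools such as badblocks or debugfs expect when you cross-reference a failing sector.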
horizontal scrollable list fully centered with dynamic width items

I'm trying to make a horizontal list that is centered, with its items centered too relative to the main parent (the window, or body); and because I don't know what kind of items, nor how many, will be generated into that list, I need the items to be dynamic in width and the list to be scrollable so I can browse over all of them.

My attempts so far are close; I'm getting either: a centered list with non-centered items of dynamic width, or a fully centered scrollable list with fixed-width items.

The core idea is basically achieved by using:

ul {
    width: 100%;
    overflow-x: scroll;
    overflow-y: hidden;
    white-space: nowrap;
}
li {
    display: inline-block;
}

Then the centering is done by wrapping the <ul> in a <div>, and making the <div> scroll the <ul> instead of the <ul> scrolling the <li>. It allows adding %-based margins to the <ul>:

div {
    width: 100%;
    overflow-x: scroll;
    overflow-y: hidden;
    white-space: nowrap;
}
ul {
    margin: 0 50%;
}
li {
    display: inline-block;
}

For items to be centered by width, the idea is to use translateX(-50%) with a little twist: as I don't want my list to have extra space at the end of the scrolling, I'm removing the last item from the flow by positioning it absolutely at the end; that way, the <ul> width looks like the sum of the item widths minus one item.

ul {
    position: relative;
    margin: 0 50%;
}
li {
    display: inline-block;
    width: some_number;
    transform: translateX(-50%);
}
li:last-child {
    position: absolute;
    left: 100%;
}

But how to keep it that way without setting the width is a mystery to me. Any ideas out there?

Here's my fiddle with some comments in the CSS section: https://jsfiddle.net/11gkrju0/

I'd like to avoid JavaScript entirely for this, as it is only a matter of styling, but messing with the DOM is a totally viable option. Thanks all for reading.

OK, so it's a bit tricky. The resolved fiddle is here, with comments, which I'm going to describe here too.
https://jsfiddle.net/11gkrju0/7/

First thing to notice: I added vertical-align: top; to my <li>s to make sure there's no vertical gap at some point.

Then. The boundaries for a non-centered list are easy to catch:

+-----+---+---------+
|  A  | B |    C    |
+-----+---+---------+
^                   ^
a                   b

For a centered list, they change a little bit:

+-----+---+---------+
|  A  | B |    C    |
+-----+---+---------+
^  ^           ^    ^
a  c           d    b

To make the scroll right, we need to change the width of the <ul> from a->b to c->d. As its width is dynamically calculated from its children's widths, we need to alter the children; and we can see that only the first and last children need to be altered. As the text defines the width of each child, we cannot change the width property directly, but we can easily divide some defined values by 2: font-size and padding.

li {
    font-size: 12px;
    padding: 0 50px;
}
li:first-child,
li:last-child {
    font-size: 6px;
    padding: 0 25px;
}

Now the list may look more like:

+---+---+-----+
| a | B |  c  |
+---+---+-----+
^ ^        ^  ^
a c        d  b

But then the problem is that the text and padding are smaller now. What we can do is proxy the real buttons into an :after pseudo-element, with the small trade-off of adding a text attribute to each <li> to feed the content property.

<li text='A'>A</li>
<li text='B'>B</li>
<li text='C'>C</li>

With that in the DOM, we can write this kind of CSS to create fake buttons that look just like the real ones (not to mention, hide the real ones...):

li {
    position: relative;
    background: transparent;
    color: transparent;
}
li:after {
    display: block;
    overflow: hidden;
    position: absolute;
    left: 0;
    top: 0;
    width: 100%;
    height: 100%;
    content: attr(text);
    color: #000;
}

Almost there, almost there... The first and last children have been altered, and so have their pseudo-children... A side effect is that their width will be calculated for a 6px font-size. So we need to set the width back to twice what is needed, and reset the font-size to prevent inheritance.
li:first-child:after,
li:last-child:after {
    font-size: 12px;
    width: 200%;
}

Finally (yay), we shift the first element's pseudo-element left by half of its width to prevent collision with the 2nd one:

li:first-child:after {
    left: -100%;
}

We could have used transform: translateX(-50%); as well, but we're already manipulating absolute values, so there's no need to apply another layer of calculation on top of that.

Hoping it'll help someone.
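To see why halving the first and last items centers the scroll range, here is a quick arithmetic check, independent of any browser API (the item widths are hypothetical, standing in for A, B, C in the diagrams above). The scroll range we want runs from the center of the first item to the center of the last one, and halving the two end items produces a list of exactly that length:

```python
# Hypothetical item widths in px (items A, B, C from the diagrams above).
widths = [50, 30, 90]

# Natural list width: left edge of A to right edge of C (a -> b).
full_width = sum(widths)

# The scroll range we actually want: center of A to center of C (c -> d).
center_span = full_width - widths[0] / 2 - widths[-1] / 2

# Halving only the first and last items (the font-size/padding trick)
# yields exactly that span.
halved_width = widths[0] / 2 + sum(widths[1:-1]) + widths[-1] / 2

print(center_span == halved_width)  # True
```

This holds for any widths, which is why the trick survives dynamic, text-driven item sizes.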
Adding user to xmpp room

Hello. I know there are a lot of questions regarding this, but I still could not figure out what the problem can be. I have successfully created a group using the following code:

- (void)createGroup
{
    iXmppEngine.userID = <EMAIL_ADDRESS>;
    iXmppEngine.userPass = @"password123";
    self.rosterstorage = [[XMPPRoomCoreDataStorage alloc] init];
    XMPPRoom *xmppRoom = [[XMPPRoom alloc] initWithRoomStorage:self.rosterstorage
                                                           jid:[XMPPJID jidWithString:<EMAIL_ADDRESS>]
                                                 dispatchQueue:dispatch_get_main_queue()];
    [xmppRoom activate:[[self iXmppEngine] xmppStream]];
    [xmppRoom joinRoomUsingNickname:@"Dev iphone" history:nil];
    [[[self iXmppEngine] xmppStream] addDelegate:self delegateQueue:dispatch_get_main_queue()];
    [xmppRoom addDelegate:self delegateQueue:dispatch_get_main_queue()];
    [self performSelector:@selector(joinRoom) withObject:nil afterDelay:4];
}

And for adding a user I used the following code. I called this function with a delay of 5 s so the room can be created successfully before adding users:

- (void)joinRoom
{
    self.iXmppEngine.userID = <EMAIL_ADDRESS>;
    // self.rosterstorage = [[XMPPRoomCoreDataStorage alloc] init];
    XMPPRoom *xmppRoom = [[XMPPRoom alloc] initWithRoomStorage:self.rosterstorage
                                                           jid:[XMPPJID jidWithString:<EMAIL_ADDRESS>]
                                                 dispatchQueue:dispatch_get_main_queue()];
    [xmppRoom activate:[[self iXmppEngine] xmppStream]];
    [xmppRoom joinRoomUsingNickname:@"lahore" history:nil];
    [xmppRoom fetchConfigurationForm];
    [xmppRoom configureRoomUsingOptions:nil];
    [xmppRoom addDelegate:[self iXmppEngine] delegateQueue:dispatch_get_main_queue()];
}

What am I doing wrong? I couldn't find it.

Creating and joining rooms (an already created room, or accepting an invitation to join a room) are two different scenarios.
// CONFERENCE_ROOM_SERVER is the domain name for MUC

For creating a group, do it this way:

- (void)createChatRoom:(NSString *)newRoomName
{
    XMPPJID *roomJID = [XMPPJID jidWithString:[NSString stringWithFormat:@"%@@%@", newRoomName, CONFERENCE_ROOM_SERVER]];
    XMPPRoomMemoryStorage *roomMemoryStorage = [[XMPPRoomMemoryStorage alloc] init];
    xmppRoom = [[XMPPRoom alloc] initWithRoomStorage:roomMemoryStorage
                                                 jid:roomJID
                                       dispatchQueue:dispatch_get_main_queue()];
    [xmppRoom activate:[self xmppStream]];
    [xmppRoom addDelegate:self delegateQueue:dispatch_get_main_queue()];
    [xmppRoom joinRoomUsingNickname:@"Your Nick Name" history:nil];
}

You will get the response after the group has been created successfully in this delegate:

- (void)xmppRoomDidCreate:(XMPPRoom *)sender

Now invite users there this way:

[sender inviteUser:[XMPPJID jidWithString:inviteUserJID] withMessage:@"Any Message"];
Overlapping Overlay Image with Image Taken from camera

I am doing a project where I need to overlay a few custom images, with an option to add faces taken from the camera. Considering the following image: I will take faces from the camera, and I want to fit the face into this overlay.

I have used the following layout structure:

FrameLayout
    SurfaceView   // For displaying the custom camera
    ImageView     // For displaying the overlay image as below
FrameLayout

I have a zoom option for the ImageView as well, which will make the image fit the faces taken from the camera.

Now I am stuck on the following 2 problems:

When I click "take picture", I try to get the drawing cache of the FrameLayout, but I am getting a black image for the SurfaceView.

Since the ImageView can be zoomed as well, I am not able to stitch the overlay back so that it fits the face taken.
How to run webpack-dev-server on localhost without a port

I have a React app; to run it locally I use webpack-dev-server. This is how my command looks:

"scripts": {
    "dev-server": "webpack-dev-server --config ./webpack.dev.js --open --https --hot --history-api-fallback"
}

When I do npm run dev-server, it opens a browser for me with the URL "https://localhost:8080/". Is it possible to map it somehow to "https://localhost" (without the port)?

Please see the related thread for answers: https://stackoverflow.com/q/33272967/3679048

Actually, I tried that. If I run it with the following arguments:

"webpack-dev-server --config ./webpack.dev.js --host <IP_ADDRESS> --port 80 --open --https --hot --history-api-fallback"

(as mentioned in that answer), the following URL is opened: "https://<IP_ADDRESS>:80/", and I need to have the link "https://localhost" (for consistency purposes in my project).

Port 80 is the default port; it should be reachable via localhost itself, if I am not wrong.
Poincare inequality with small modification

Let $f:B^n_1 \to \Bbb{R}$ be defined on the $n$-dimensional ball of radius $1$. Suppose $f\in C^1$ and $f(0) = 0$. Can we prove that
$$\int_{B_1}|f|^2 \le C\int_{B_1}|\nabla f|^2 +C'\,?$$

If we assume further that $m(\{x:|f(x)|\le 1\})\ge \delta |B_1|$ for some constant $\delta >0$ which is independent of the choice of $f$, can we then prove
$$\int_{B_1}|f|^2 \le C\int_{B_1}|\nabla f|^2 + C'\,?$$

Are the two inequalities the same? Also, this is not really a 'Sobolev inequality' but a 'Poincaré inequality'.

@Calvin Khor, no, they are not the same: in the second one we consider the problem in the smaller class where $m(\{x:|f(x)|\le 1\})\ge \delta |B_1|$ with $\delta$ independent of the choice of $f$.

Yes, but I just wanted to confirm whether the inequalities are the same, or whether you wanted to type something different. An inequality is something like "A≤B"; I was not asking about the assumptions.

If there is no mistake, then it sounds like they want you to think the first one has a counterexample. I don't know / don't have time to look into it at the moment... good luck. Though you can perhaps try the proof of the Poincaré inequality in Evans for the second part.

This is not true if $n>1$. If it were true, then it would be true for all $H^1(B^n_1)$ functions, which it clearly is not.

Oh I see, thank you, this is a nice point: for $n = 1$ every function in $H^1$ has a continuous representative, while for $n \ge 2$ a continuous representative need not exist; therefore forcing $f = 0$ at a point is meaningless.

I should have made the statement more formal. For the second version, we have:

Theorem. For any $\varepsilon>0$ there exists a $C=C(\varepsilon, n)$ such that for $u \in H^{1}\left(B_{1}\right)$ with
$$
\left|\left\{x \in B_{1} ; u=0\right\}\right| \geq \varepsilon\left|B_{1}\right| \quad \text { there holds } \quad \int_{B_{1}} u^{2} \leq C \int_{B_{1}}|D u|^{2} .
$$

The proof is similar to the proof of the Poincaré–Wirtinger inequality in Evans's PDE book. This proof can also be found in Q. Han and F.
Lin, Elliptic partial differential equations, Section 4.8. With a slight modification, we can prove the following result:

Theorem. For any $\varepsilon>0$ there exists a $C=C(\varepsilon, n)$ such that for $u \in H^{1}\left(B_{1}\right)$ with
$$
\left|\left\{x \in B_{1} ; |u|<\tfrac{1}{2}\right\}\right| \geq \varepsilon\left|B_{1}\right| \quad \text { there holds } \quad \int_{B_{1}} u^{2} \leq C \int_{B_{1}}|D u|^{2} .
$$

Proof. Suppose not. Then there exists a sequence $\left\{u_{m}\right\} \subset H^{1}\left(B_{1}\right)$ such that
$$
\left|\left\{x \in B_{1} ; |u_{m}|<\tfrac{1}{2}\right\}\right| \geq \varepsilon\left|B_{1}\right|, \quad \int_{B_{1}} u_{m}^{2}=1, \quad \int_{B_{1}}\left|D u_{m}\right|^{2} \rightarrow 0 \text { as } m \rightarrow \infty .
$$
Hence we may assume $u_{m} \rightarrow u_{0} \in H^{1}\left(B_{1}\right)$ strongly in $L^{2}\left(B_{1}\right)$ and weakly in $H^{1}\left(B_{1}\right)$ (using the compactness of the Sobolev embedding). Then $u_0$ is constant, since $Du_0 = 0$: indeed $Du_m \to 0$ in $L^2$ while $Du_m \rightharpoonup Du_0$. It is nonzero, since $\|u_m\|_{L^2} = 1$ forces $\|u_0\|_{L^2(B_1)} = 1$, so the limit cannot have zero $L^2$ norm; therefore $|u_0| > \frac{1}{2}$. So, using $|u_m - u_0| \ge |u_0| - |u_m| > |u_0| - \frac{1}{2}$ on the set $\{|u_m| < \frac{1}{2}\}$:
$$
\begin{aligned}
0=\lim _{m \rightarrow \infty} \int_{B_{1}}\left|u_{m}-u_{0}\right|^{2} & \geq \lim _{m \rightarrow \infty} \int_{\left\{|u_{m}|<\frac{1}{2}\right\}}\left|u_{m}-u_{0}\right|^{2} \\
& \geq \left(|u_0| - \frac{1}{2}\right)^2 \inf_m\left|\left\{x:|u_m|<\frac{1}{2}\right\}\right| > 0 .
\end{aligned}
$$
Contradiction.

The condition with $\epsilon$ on $u_m$ is wrong (it's taken from the first theorem). The proof needs $|u_0|>1/2$; $|u_0| \ne 1/2$ is not enough.

Thank you, I have modified it.

Oh, I found a problem: $\left|\left\{x \in B_{1} ; |u_{m}|<\frac{1}{2}\right\}\right| \geq \varepsilon\left|B_{1}\right|$ is not preserved under the normalization, while $\left|\left\{x \in B_{1} ; u=0\right\}\right| \geq \varepsilon\left|B_{1}\right|$ is OK; that's why we choose this condition.
SHEmptyRecycleBin in VB6 doesn't do anything

I'm trying to empty the recycle bin as part of a process that is freeing up hard disk space. Here is the code I've got so far. At the top of my class:

Private Declare Function SHEmptyRecycleBin Lib "shell32.dll" Alias "SHEmptyRecycleBinA" _
    (ByVal hwnd As Long, ByVal pszRootPath As String, ByVal dwFlags As Long) As Long
Private Declare Function SHUpdateRecycleBinIcon Lib "shell32.dll" () As Long

Then in the hard disk clean-up function:

SHEmptyRecycleBin Empty, vbNullString, 0
SHUpdateRecycleBinIcon

I also tried this:

Dim lRetVal As Long
lRetVal = 0
lRetVal = SHEmptyRecycleBin(Empty, vbNullString, 0)

But it's returning a zero, indicating success. Has anyone ever used this function before, or have any ideas about why it wouldn't work? This code is being run from within an ActiveX DLL, if that matters.

**EDIT**

Well, I think I must've misread or misunderstood something before, because whatever examples I looked at gave me the impression the confirmation window would not be shown when using 0 for the last parameter. I tried this code:

Const SHERB_NOCONFIRMATION = &H1
Call SHEmptyRecycleBin(0, vbNullString, SHERB_NOCONFIRMATION)

...and it still doesn't work. However, if I use this code in the .exe that is calling the ActiveX DLL:

Const SHERB_NOCONFIRMATION = &H1
lRetVal = SHEmptyRecycleBin(Empty, vbNullString, SHERB_NOCONFIRMATION)

...it works. I can't figure out why it works in the .exe and not the .dll though, and I'd rather keep all the code in the .dll if possible.

I hope this program is only for your own use. It isn't nice for a program to take control of something like this away from the user. I keep my Recycle Bin clean, but a lot of users rely on things "hanging there" in case they change their minds about a delete operation; that is the whole reason we have a Recycle Bin.

It seems to work here from a main program, however. Note that "Empty" for the hWnd parameter works, but an explicit 0 is probably a better choice.
I get the prompt dialog, though nothing there tells the user they're being asked to OK emptying the Recycle Bin.

A few points. I am using this to clean up a server that is only used by admins, not actual users. I figured out what the problem is: this is being run as a scheduled task, under a user account that does things in the background, so the confirmation window (which I didn't realize existed) wasn't showing up. I presume the call terminates with a cancel, and thus returns 0 because it canceled okay. I guess I need to find out if I can programmatically confirm the deletion.

You can check this sample. Basically

SHEmptyRecycleBin 0, vbNullString, 0

or

Call SHEmptyRecycleBin(0, vbNullString, 0)

should be OK, but consider passing an actual hWnd for the function's UI to use as an owner window.
How to obtain NSManagedObjectContext instance from Extension iOS 8

I'm creating a keyboard extension, and I need access to Core Data. It works well from the container app's AppDelegate, but from the extension I can't gain access to the data. First of all, I need an instance of NSManagedObjectContext, which I don't have. So how can I obtain this context, by using an App Group or something else?

By the way, when the extension starts it prints the following messages in the log:

Warning: CFFIXED_USER_HOME is not set! It should be set to the simulated home directory.
Failed to inherit CoreMedia permissions from 16808: (null)

What can be wrong with it?

Have you got this fixed? Please let me know ASAP.

Nope =( I decided to avoid using Core Data, because it's too slow for my case. You can still try something like reading the data in the application and writing it to the App Group, as mentioned below, but it's not a solution for this question.

I also did not find any way to get Core Data working in my keyboard extension; only NSUserDefaults works. It seems Apple is not giving access to Core Data in a keyboard extension. It may work for other kinds of extensions.

Check out this thread: http://stackoverflow.com/questions/26065539/magicalrecord-coredata-today-extension-ios8-will-they-play

Follow this tutorial, create an App Group, and retrieve data from that group: http://www.glimsoft.com/06/28/ios-8-today-extension-tutorial/

This is not working with a keyboard extension (sharing Core Data). Sharing NSUserDefaults works fine.
Get SQL server record details as JSON for each record individually

I have 2 tables, Department & Employee, as below:

Department
|---------|------------|
| Id      | Name       |
|---------|------------|
| 10      | Admin      |
|---------|------------|
| 11      | IT         |
|---------|------------|

Employee
|---------|------------|--------------|
| Id      | Name       | DepartmentId |
|---------|------------|--------------|
| 1       | Peter      | 10           |
|---------|------------|--------------|
| 2       | Scott      | 11           |
|---------|------------|--------------|

I need the JSON details of each department individually, as below:

|--------------|--------------------------------------------------------------|
| DepartmentId | JSONDetails                                                  |
|--------------|--------------------------------------------------------------|
| 10           | {"id": 10,"name": "Admin","Emp": [{"Id": 1,"name": "peter"}]}|
|--------------|--------------------------------------------------------------|
| 11           | {"id": 11,"name": "IT","Emp":[{"Id": 2,"name": "scott"}]}    |
|--------------|--------------------------------------------------------------|

I tried the query below, which returns the entire table data as one JSON document:

SELECT D.Id, D.name, Emp.Id, Emp.Name
FROM Department D
LEFT JOIN Employee Emp ON D.Id = Emp.DepartmentId
FOR JSON AUTO

I am getting the response as below:

JSON_F52E2B61-18A1-11d1-B105-00805F49916B
[{"id": 10,"name": "Admin","Emp": [{"Id": 1,"name": "peter"}]},{"id": 11,"name": "IT","Emp":[{"Id": 2,"name": "scott"}]}]

What is the expected output when a department has more than one employee? For example, if another employee named John is added with department 11?

It should be in the Emp array, like {"id": 11,"name": "IT","Emp":[{"Id": 2,"name": "scott"}, {"Id": 3,"name": "John"}]}
Here it is:

SELECT D.Id,
       (SELECT Di.Id, Di.name, Emp.Id, Emp.Name
        FROM Department Di
        LEFT JOIN Employee Emp ON Di.Id = Emp.DepartmentId
        WHERE Di.Id = D.id
        FOR JSON AUTO) AS Details
FROM Department D
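The correlated subquery above emits one JSON document per department row. For readers who want to see the grouping logic itself, here is a hedged, language-neutral sketch in Python (in-memory stand-ins for the two tables, not the actual database):

```python
import json
from collections import defaultdict

# Hypothetical in-memory copies of the two tables from the question.
departments = [(10, "Admin"), (11, "IT")]
employees = [(1, "Peter", 10), (2, "Scott", 11)]

# Group employees by department, mirroring the LEFT JOIN + WHERE Di.Id = D.id.
by_dept = defaultdict(list)
for emp_id, name, dept_id in employees:
    by_dept[dept_id].append({"Id": emp_id, "name": name})

# One JSON document per department row, like the correlated subquery returns.
details = {
    dept_id: json.dumps({"id": dept_id, "name": name, "Emp": by_dept[dept_id]})
    for dept_id, name in departments
}
print(details[10])  # one row's JSONDetails value
```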
Push huge list in PHP array I have a huge list of store names in a (.doc) file and I want to store these names in a MySQL DB. The solution I thought of was to create an array from this list and insert the entries into the DB in a loop. But I don't know how to push this list into a PHP array. Thanks Can you show some data of the .doc file? Save the Word doc as a plain text (.txt) file. Read the file in, a line at a time, and insert into the database. If you are on Linux you can use WV. Else you can use antiword or AbiWord to pull the text out. Then you'll still need to put the text into a proper Array() and you'll probably use some sort of REGEX to do that.
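The answer's recipe (export to plain text, read line by line, insert) is easy to sketch; here it is in Python with SQLite standing in for MySQL and an in-memory string standing in for the exported file (illustration only; the thread itself is about PHP):

```python
import io
import sqlite3  # stand-in for MySQL; the same pattern applies with a MySQL driver

# Pretend this is the .doc exported as plain text, one store name per line.
exported = io.StringIO("Acme Hardware\nNorth End Grocery\n\nCity Books\n")

# Read line by line, skipping blanks, instead of loading one huge string.
names = [line.strip() for line in exported if line.strip()]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE stores (name TEXT)")
# executemany batches the inserts rather than looping one INSERT at a time.
conn.executemany("INSERT INTO stores (name) VALUES (?)", [(n,) for n in names])
conn.commit()
print(conn.execute("SELECT COUNT(*) FROM stores").fetchone()[0])  # 3
```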
Send email to all users in rails I am trying to create a method that, when called, sends an email to all users. My ultimate goal is to call this method via a scheduler (I already got the scheduler working), and the method will go through all users and send emails to some of them if some prerequisites are met. Right now I just want to learn the simplest thing: sending a custom email to every user in the table. My first issue is:

def send_digest
  @users = User.all
  @users.each do |user|
    @time = Time.now
    mail(to: user.email, subject: user.name)
  end
end

This method (inside app/mailer/user_mailer.rb) is only sending one e-mail, to the user with the biggest ID in the table. Why is that? Also, what do I need to do to access the variable "user.name" inside the email? EDIT: Is there a better way to access the user variable inside the mail body than doing @user = user?

def send_digest(user)
  @time = Time.now
  @user = user
  mail(to: user.email, subject: 'mail message')
end

For each call to the mailer method, one email is sent. In the scheduled worker:

def calling_method
  @users.each do |user|
    send_digest(user.email, user.name)
  end
end

In the user mailer:

def send_digest(user_email, user_name)
  mail(to: user_email, subject: user_name)
end

This worked fine! Now how do I get access to the variable user (from do |user|) inside the mail body? (which is something like app/views/user_mailer/send_digest.html.erb) Just pass user instead of user_name and user_email and read it inside send_digest. Is there a better way to do the access than using @user = user? (I wrote the complete code in the edit.)
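The fix in the answer, one mailer call per recipient instead of several mail(...) calls inside one method, is a general pattern; a tiny language-neutral sketch (stub mailer, not ActionMailer):

```python
# Stand-in for a mailer: each call builds exactly one message, mimicking a
# Rails mailer method, where only the last mail(...) in a method is delivered.
outbox = []

def send_digest(email, name):
    outbox.append({"to": email, "subject": name})

# The scheduler's loop: one mailer call per user.
users = [("ana@example.com", "Ana"), ("bo@example.com", "Bo")]
for email, name in users:
    send_digest(email, name)

print(len(outbox))  # 2
```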
Parsing string to time value in python I need to convert the following strings into either datetimes or strings in a different format. My input looks like these examples:

196h 26m 13s
95h 19m
45m 28s

My desired string output would be (but my final goal is to convert these strings into datetime objects):

196:26:13
95:19:00
00:45:28

In Excel, it would be [h]:mm:ss. Note: As you can see, hours can be higher than 24. I have tried to parse this string with the time.strptime() method, but it doesn't work with hours higher than 24. I have a solution with regular expressions, but I want to know if there is a more straightforward way to do this. What is the best way to solve this? Those aren't points in time; those are durations of time. Thus, datetime, date, and time are entirely the wrong types to use; the correct type is timedelta, which does not have a built-in string-parsing method. I would go the re way too Thanks guys for the help! I would stick with the good old regex way This would give you time deltas:

from datetime import timedelta

def to_timedelta(time_string):
    units = {'h': 'hours', 'd': 'days', 'm': 'minutes', 's': 'seconds'}
    return timedelta(**{units[x[-1]]: int(x[:-1]) for x in time_string.split()})

Test:

times = ['196h 26m 13s', '95h 19m', '45m 28s']
for t in times:
    print(to_timedelta(t))

Output:

8 days, 4:26:13
3 days, 23:19:00
0:45:28

timedelta takes these arguments:

datetime.timedelta(days=0, seconds=0, microseconds=0, milliseconds=0, minutes=0, hours=0, weeks=0)

Using this mapping:

units = {'h': 'hours', 'd': 'days', 'm': 'minutes', 's': 'seconds'}

allows the short units in the string to be mapped to the corresponding names of the arguments. Using Python's ** syntax, the resulting dictionary can be used as a single argument that will be converted into the matching keyword arguments. The first thing we should do is use a regular expression and use timedelta instead of datetime.
import datetime
import re

# \s* between the groups lets the pattern consume the spaces in "196h 26m 13s";
# without it, matching stops after the hours and minutes/seconds are dropped.
regex = re.compile(r'((?P<hours>\d+)h)?\s*((?P<minutes>\d+)m)?\s*((?P<seconds>\d+)s)?')

def parse_time(time_str):
    parts = regex.match(time_str)
    if not parts:
        return None
    parts = parts.groupdict()
    time_params = {}
    for name, param in parts.items():
        if param:
            time_params[name] = int(param)
    return datetime.timedelta(**time_params)

L = ["196h 26m 13s", "95h 19m", "45m 28s"]
for l in L:
    print(parse_time(l))

Output:

8 days, 4:26:13
3 days, 23:19:00
0:45:28

Thanks! I had a very similar solution based on regex
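Neither answer produces the asker's desired string output (196:26:13 etc., with hours allowed past 24). Once a timedelta is in hand from either answer, that formatting is just integer arithmetic; a small sketch:

```python
from datetime import timedelta

def format_hms(delta: timedelta) -> str:
    """Render a duration as [h]:mm:ss, letting hours exceed 24 (Excel-style)."""
    total = int(delta.total_seconds())
    hours, rest = divmod(total, 3600)
    minutes, seconds = divmod(rest, 60)
    return f"{hours:02d}:{minutes:02d}:{seconds:02d}"

print(format_hms(timedelta(hours=196, minutes=26, seconds=13)))  # 196:26:13
print(format_hms(timedelta(hours=95, minutes=19)))               # 95:19:00
print(format_hms(timedelta(minutes=45, seconds=28)))             # 00:45:28
```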
Date formatting in a grid I'm trying to display a date column in grid like this: "dd-mm-yyyy". In dbf table, the date is stored in this format: "YYYY-MM-DDThh:mm:ss" in a character field. The grid is created from this cursor: select id,beginningDate,endDate,cnp from doc ORDER BY id desc INTO CURSOR myCursor I wish something like this: select id,convert(beginningDate, Datetime,"dd-mm-yyyy"),endDate,cnp from doc ORDER BY id desc INTO CURSOR myCursor I realize you may not be able to change this, but there's no reason to store a datetime in a character field. If you had a datetime field, then this would be simple. Fox doesn't have a builtin function called convert(), nor can it handle your non-standard date/time string format directly. A quick and dirty way to convert a string foo in the given format ("YYYY-MM-DDThh:mm:ss") to a date/time value is ctot("^" + chrtran(foo, "T", " ")) The caret marks the input as the locale-independent standard format, which differs from the input format only by having a space instead of a 'T'. You can extract the date portion from this via the ttod() function, or simply extract only the date portion from the string and convert that: ctod("^" + left(foo, 10)) Fox's controls - including those in a grid - normally use the configured Windows system format (assuming that set("SYSFORMATS") == "ON"); you can override this by playing with the SET DATE command. There seems to be no mask-based date formatting option as in most other languages. dtoc() and ttoc() don't take format strings, transform() takes a format string but blithely ignores it for date values. I am with Tamar on this subject, you should have used a datetime field instead. 
Since you are storing it like this anyway, you can 'convert' to datetime using the built-in cast function (or ttod(ctot()) in versions older than VFP9; in either case you don't need to remove the T character):

select id, ;
  Cast(Cast("^" + beginningDate as datetime) as date) as beginningDate, ;
  endDate, cnp ;
  from doc ;
  ORDER BY id desc ;
  INTO CURSOR myCursor ;
  nofilter

In a grid or any other textbox control, you can control its display style using the DateFormat property, i.e.:

* assuming it is Columns(2). 11 is DMY
thisform.myGrid.Columns(2).SetAll('DateFormat', 11)

Note: leaving the 'T' character instead of substituting a blank works only with VFP7 and newer. @DarthGizka, also cast doesn't work before VFP9, but even that one is 9 years old. I didn't see a version mentioned in the question. Right enough. However, when you're working with a legacy system then you can't pick your favourite Fox version... And nowadays pretty much everything that has to do with Fox falls under legacy. Just a few months ago I came across FPW 2.6 (I shit you not) and I know of active commercial operations based on Clipper and others based on FPD. Time runs differently for legacy systems.
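As a cross-language aside (the question itself is about VFP): the stored string is essentially ISO 8601, so the answers' "replace the T, parse, keep the date part" recipe maps directly onto standard-library parsing elsewhere. A Python illustration with a made-up sample value:

```python
from datetime import datetime

stored = "2014-06-05T14:30:00"  # hypothetical value from the character field

# VFP: ctot("^" + chrtran(foo, "T", " ")) -- parse the full timestamp...
parsed = datetime.strptime(stored, "%Y-%m-%dT%H:%M:%S")

# ...VFP: ttod(...) / ctod("^" + left(foo, 10)) -- keep just the date part,
# then render it dd-mm-yyyy for display.
print(parsed.date().strftime("%d-%m-%Y"))  # 05-06-2014
```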
Cannot make css rounded corners table in WordPress I read through the related posts but none of them can solve my case. I have a table embedded in my WordPress site; I only take away the header menu and the footer. The demo site can be visited at http://www.solepropr.net/demo/wordpress_table.html If viewing the CSS setting for the first th border, the border-top-left-radius is 6px but the actual output is still a right angle. No matter how large a radius I set, there is no change to the corner. Below is the output of the Computed style from Chrome Developer Tools:

-webkit-font-smoothing: antialiased;
border-bottom-color: rgb(0, 146, 215);
border-bottom-left-radius: 0px;
border-bottom-right-radius: 0px;
border-bottom-style: solid;
border-bottom-width: 2px;
border-collapse: collapse;
border-image-outset: 0px;
border-image-repeat: stretch;
border-image-slice: 100%;
border-image-source: none;
border-image-width: 1;
border-left-color: rgb(0, 146, 215);
border-left-style: solid;
border-left-width: 2px;
border-right-color: rgb(0, 146, 215);
border-right-style: solid;
border-right-width: 2px;
border-top-color: rgb(0, 146, 215);
border-top-left-radius: 6px; <<<<<<<<----------
border-top-right-radius: 0px;
border-top-style: solid;
border-top-width: 2px;
box-sizing: border-box;
color: rgb(112, 112, 112);
direction: ltr;
display: table-cell;

Add the CSS property border-collapse: separate on e.g. the .round-table class:

.round-table {
  border-radius: 10px;
  border-collapse: separate;
}

Thanks pbaldauf, it helps and solves the problem, but I have no idea how border-collapse affects this; could you briefly describe it? In the w3.org documentation (http://www.w3.org/TR/css3-background/#border-radius-tables) it is written "CSS3 UAs should ignore border-radius properties applied to internal table elements when ‘border-collapse’ is ‘collapse’". That's unfortunately the only thing I can tell you. You have your borders and border-radius placed in the wrong spot.
For example, if you added this to your CSS you'd notice the effect you want:

#comp {
  border-radius: 10px;
  border: 2px solid #0092d7;
}

Hope that helps. Apparently you can't set a border-radius property inside a th border. The solution to this is to change the display to inline-block. However, it seems like that will add more problems. I suggest removing the border from the th element and applying the border to the div outside the table. Add border-radius to the div and bam, the border radius shows up. Try this: remove border-collapse: collapse; in your style sheet and set <table class="round-table" width="100%" style="border:3px solid red;">
Cocoa app Bash Script Variables I would like to reference a variable that I have in a bash script "value" as well as "max." As of now, I have a text interface in which after given commands, the terminal window displays something similar to ==========================================================================> 100% for a progress bar. The variable is referenced throughout the script, and I would like to call that variable in my cocoa app. Thanks in advance! Your question isn't entirely clear, but it sounds like you want to use environment variables. In bash, you need to use the export builtin to mark a variable to be exported to child processes. Then, in your Cocoa app, you can use getenv(3) function to retrieve the environment variable value. For example:

# In your bash script
value=foo
max=bar
export value max

// Now in your Cocoa application:
char *value;
if ((value = getenv("value"))) {
    // Use value
}
// else value is not in the environment

Thanks Adam, I'm sorry that my question wasn't clear! But, I think you have answered it just fine. What I was trying to ask was a way to reference variables, which I believe you have answered just fine! I'll let you know how it goes. Wait, now I am confused in implementing the bash functionality with the cocoa app. This is what I am currently doing. First, I have my source code imported into the cocoa app. Second, I make sure my source code imported has the export value max. Third, I attempt to implement the bash script into the app. Now I am confused on this part, how would I implement this? Would I simply use the variables as you have suggested? Does the bash script need to be run somewhere? How would I run the bash script in the background without a terminal window? Thanks again!
Is it possible to simulate pressing 'Down' arrow? I'm trying to figure out how to send Down in puppeteer. I tried the int code 40 and the Down string, but neither works. Is there a proper way? I can't figure it out after reading ~/node_modules/puppeteer/lib/Input.js

const elementHandle = await page.$('selector');
await elementHandle.type('something');
await page.keyboard.press(40); // fail

Can you go into more detail about how it doesn't work? Are you expecting it to cause the browser to scroll down? Typically, manually triggered events don't cause default user agent actions due to security concerns. I'm not sure if this is the case for puppeteer as well. What are you expecting to happen? You need to use 'ArrowDown'. The keyboard.press function wants a string as the name of the key. https://github.com/GoogleChrome/puppeteer/blob/master/docs/api.md#keyboardpresskey-options So the line to press the down arrow would be:

await page.keyboard.press('ArrowDown');

Here is the list of available keys: https://github.com/puppeteer/puppeteer/blob/main/packages/puppeteer-core/src/common/USKeyboardLayout.ts This does not work on a select in MacOS @Roel I don't have any way to test that, but I think that would be an issue to create on the Puppeteer repository.
How can I solve this function problem? for the number $3$ if I put $-x$ in the place of $x$ then I will get $2$ equations by which I can get $f(x)$. But what about the others? I've no idea how to solve these. $$(1)\quad f(\frac{1+x}{1-x})+f(\frac{1-x}{1+x})=2;\ f(x)=?$$ $$(2)\quad f(2^x)+f(2^{-x})=1;\ f(x)=?$$ $$(3)\quad f(x)+xf(-x)=x;\ f(x)=?$$ $$(4)\quad f(x)+f(y)=f(x+y);\ f(1)=?$$ For $(4)$, any $f(x) = kx$ will satisfy that, so there is no single answer. For (2) you could basically pick anything you like for $x>1$ as long as $f(x^{-1})=1-f(x)$, but $$\frac{1}{2}+c\ln(x)$$ would work, with $c\in\mathbb{R}$. In (1) apply $x\mapsto\dfrac{1-x}{1+x}$ There are some conditions required (eg continuity of $f$) to make this at all interesting.
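The comments settle (1), (2) and (4) as underdetermined; (3) is the only one with a unique answer. The substitution the asker mentions can be carried through explicitly (a sketch of the standard argument):

```latex
% Start from the given equation and its image under x \mapsto -x:
f(x) + x\,f(-x) = x, \qquad f(-x) - x\,f(x) = -x.
% The second equation gives f(-x) = x f(x) - x; substitute into the first:
f(x) + x\bigl(x\,f(x) - x\bigr) = x
\;\Longrightarrow\; (1 + x^{2})\,f(x) = x + x^{2}
\;\Longrightarrow\; f(x) = \frac{x(1 + x)}{1 + x^{2}}.
```

For (1), the suggested substitution $x\mapsto\frac{1-x}{1+x}$ returns the same equation with the two terms swapped, so (as the last comment notes) extra conditions such as continuity are needed to pin down $f$.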
How can I view the MaxPermSize in JVM? I'm meeting the dreadful PermGen: Out of memory error when deploying a web-app on Tomcat. I have tried many possible solutions, but they don't work out (sometimes it works, usually it doesn't). I wonder if my config in "BuildConfig.groovy" takes effect: grails.tomcat.jvmArgs = ["-Xmx1024m", "-XX:MaxPermSize=1024m"] Does anyone know some way to view the MaxPermSize actually applied by the JVM? You can use JVisualVM in the JDK/bin directory to monitor everything about a Java process. In direct answer to your question, java.lang.management.ManagementFactory has a method to get information on all of the memory pools used by Java, including the PermGen space and other non-heap memory pools:

List<MemoryPoolMXBean> memoryPoolMXBeans = java.lang.management.ManagementFactory.getMemoryPoolMXBeans();
for (MemoryPoolMXBean bean : memoryPoolMXBeans) {
    // Iterate over pools here; e.g. print each pool's name and usage
    System.out.println(bean.getName() + ": " + bean.getUsage());
}

There are a couple of utilities that come in the JDK, I believe - try jstat -gcpermcapacity. Another trick that can sometimes be helpful is ps -fwwwC java (assuming you are running this on a Linux box). Unfortunately I'm working on Windows at this time. How could I try your command? You can also use jconsole to connect to your process and monitor the PermGen usage. This should be available on Windows as well.
How to deploy the same application to multiple servers Looking for examples of what people have done in order to deploy the same webapp or processes to multiple servers. The deployment process right now consists of copying the same file multiple times to different servers within our company. There has to be a better way to do this. Right now I am looking into MSBuild; does anyone have other ideas? Thanks in advance. This is code only; no pushes of SQL happen from developers, only the DBA does that. We just need to publish the same app to multiple servers at once. I was looking at Web Deploy; I will have to get my IT admin guys on board and see what to move to next. We don't have a centralized server for deployment, but it really is just the same product, one internal and one external, as we do SaaS. Take a look at msdeploy and Web Deploy. I've done this using a variety of methods. However, I think the best one is what I call a "rolling" deployment. The following assumes a code only deployment: Take one or more web servers "offline" by removing them from the load balancing list; let's call this group A. You should keep enough running to keep up with existing traffic; we'll call those group B. Push the code to the offline servers (group A). Then, put group A back into rotation and pull group B out. Make sure the app is still functional with the new code. If all is good, update group B and put them back in rotation. In the event of a problem, just put group B back in and take A out again. In the case of a database update there are other factors to consider. If you can take the whole site down for a limited period then do so and perform all necessary updates. This is by far the easiest. However, if you can't, then do a modified "rolling" deployment which requires multiple database servers. Pick a point in time and move a copy of the production database to the second one. Apply your changes. Then pull a group of web servers out, update their code to production and test.
If all is good, put those web servers back into rotation and take out group B. Update the code on B while pointing them to the second DB server. Put them back into rotation. Finally, apply all data changes that occurred on the primary production database to the secondary one. Note, I don't use Web Deploy or MS Deploy for pushes to production. Quite frankly I want the files ready to be copy/pasted into the correct directory on the server so that the push can run as quickly as possible. Both Web and MS Deploy options involve transferring those files over a network connection, which is typically much slower than simply copy/pasting from one local directory to another. You can build a simple console app that connects to a fixed SFTP location, then downloads, uncompresses and runs all the files in a fixed directory. A meta XML file can be useful to create rules such as which applications each machine will run, prerequisites and so on. You can also use the Dropbox API to download your files if you don't have a centralized server to unify your apps. Have a look at kwateeSDCM. It's language and platform agnostic (Windows, Linux, Solaris, MacOS). There's an article dedicated to deployment of a webapp on multiple tomcat servers.
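A minimal sketch of the rolling pattern's control flow in Python; the four hooks are hypothetical stand-ins for your load balancer and file-copy steps, not a real API:

```python
def rolling_deploy(groups, take_offline, deploy, health_check, bring_online):
    """Update server groups one at a time so some servers always serve traffic.

    All four callables are hypothetical hooks onto a load balancer and a
    file-copy mechanism; each takes a list of server names.
    """
    for group in groups:
        take_offline(group)        # pull the group out of rotation
        deploy(group)              # copy the new build onto those servers
        if not health_check(group):
            bring_online(group)    # roll back: old code keeps serving traffic
            raise RuntimeError(f"deploy failed on {group}, remaining groups skipped")
        bring_online(group)        # put the updated servers back in rotation

# Tiny dry run with stub hooks that just record the order of operations.
log = []
rolling_deploy(
    [["web1", "web2"], ["web3", "web4"]],
    take_offline=lambda g: log.append(("offline", tuple(g))),
    deploy=lambda g: log.append(("deploy", tuple(g))),
    health_check=lambda g: True,
    bring_online=lambda g: log.append(("online", tuple(g))),
)
print(log[0])  # ('offline', ('web1', 'web2'))
```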
Eureka client cannot connect to server with docker When I start zuul and eureka through Intellij, everything is fine, zuul is registered on eureka-server. In eureka server I see 2019-03-15 18:00:20.727 INFO 31713 --- [nio-8761-exec-2] c.n.e.registry.AbstractInstanceRegistry : Registered instance ZUUL-SERVER/<IP_ADDRESS>:zuul-server:8762 with status UP (replication=false) 2019-03-15 18:00:21.309 INFO 31713 --- [nio-8761-exec-3] c.n.e.registry.AbstractInstanceRegistry : Registered instance ZUUL-SERVER/<IP_ADDRESS>:zuul-server:8762 with status UP (replication=true) But when I try to start these two services with docker using docker-compose up -d in zuul container I have exception: com.netflix.discovery.shared.transport.TransportException: Cannot execute request on any known server at com.netflix.discovery.shared.transport.decorator.RetryableEurekaHttpClient.execute(RetryableEurekaHttpClient.java:112) ~[eureka-client-1.9.3.jar!/:1.9.3] at com.netflix.discovery.shared.transport.decorator.EurekaHttpClientDecorator.getApplications(EurekaHttpClientDecorator.java:134) ~[eureka-client-1.9.3.jar!/:1.9.3] at com.netflix.discovery.shared.transport.decorator.EurekaHttpClientDecorator$6.execute(EurekaHttpClientDecorator.java:137) ~[eureka-client-1.9.3.jar!/:1.9.3] at com.netflix.discovery.shared.transport.decorator.SessionedEurekaHttpClient.execute(SessionedEurekaHttpClient.java:77) ~[eureka-client-1.9.3.jar!/:1.9.3] at com.netflix.discovery.shared.transport.decorator.EurekaHttpClientDecorator.getApplications(EurekaHttpClientDecorator.java:134) ~[eureka-client-1.9.3.jar!/:1.9.3] at com.netflix.discovery.DiscoveryClient.getAndStoreFullRegistry(DiscoveryClient.java:1051) ~[eureka-client-1.9.3.jar!/:1.9.3] at com.netflix.discovery.DiscoveryClient.fetchRegistry(DiscoveryClient.java:965) ~[eureka-client-1.9.3.jar!/:1.9.3] at com.netflix.discovery.DiscoveryClient.<init>(DiscoveryClient.java:414) ~[eureka-client-1.9.3.jar!/:1.9.3] at 
com.netflix.discovery.DiscoveryClient.<init>(DiscoveryClient.java:269) ~[eureka-client-1.9.3.jar!/:1.9.3] at org.springframework.cloud.netflix.eureka.CloudEurekaClient.<init>(CloudEurekaClient.java:63) ~[spring-cloud-netflix-eureka-client-2.1.0.M1.jar!/:2.1.0.M1] at org.springframework.cloud.netflix.eureka.EurekaClientAutoConfiguration$RefreshableEurekaClientConfiguration.eurekaClient(EurekaClientAutoConfiguration.java:290) ~[spring-cloud-netflix-eureka-client-2.1.0.M1.jar!/:2.1.0.M1] at org.springframework.cloud.netflix.eureka.EurekaClientAutoConfiguration$RefreshableEurekaClientConfiguration$$EnhancerBySpringCGLIB$$8fcb0d88.CGLIB$eurekaClient$2(<generated>) ~[spring-cloud-netflix-eureka-client-2.1.0.M1.jar!/:2.1.0.M1] Version of spring-boot-starter-parent is 2.1.2.RELEASE and spring-cloud version fot both eureka and zuul is Greenwich.M1 eureka-server properties file: # Give a name to the eureka server spring.application.name=eureka-server # default port for eureka server server.port=8761 # eureka by default will register itself as a client. So, we need to set it to false. # What's a client server? See other microservices (image, gallery, auth, etc). eureka.client.register-with-eureka=false eureka.client.fetch-registry=false zuul-properties file contains this line: eureka.client.service-url.default-zone=http://eureka:8761/eureka/ docker-compose.yml file(related services) version: "2" services: eureka: container_name: eureka build: context: . dockerfile: eureka.Dockerfile image: eureka-service ports: - "8761:8761" networks: - event-network zuul: container_name: zuul build: context: . dockerfile: zuul.Dockerfile image: zuul-service ports: - "8762:8762" networks: - event-network networks: event-network: driver: bridge In both Dockerfile's I just add jar to container and start jar with java -jar command. I annotated zuul main class with @EnableEurekaClient and eureka main class with @EnableEurekaServer . 
I don't think that exception is related to zuul, since I have the same problem with the rest of my microservices. The rest of them also cannot connect to eureka. When I start eureka, either in a container or through IntelliJ, I can access the eureka dashboard. I even tried to put this image instead of mine, but I got the same exception. I also tried docker exec -it zuul bash, and inside that zuul container I tried curl eureka:8761, and I can access the dashboard normally. Not sure about Eureka, but I had a similar problem with an nginx proxy server. Nginx can't recognize Docker's host resolver, and one possible solution that I used is to get a static IP for the specific container's service. For that case it is possible to set a static IP for the Eureka container and use it within the config file for Zuul. Of course, that will only work if you have one Eureka container. For a lot of containers you should look at how Zuul can read and understand Docker's host resolver.
Access S/4 on-premise from local machine without cloud destination service I'm trying to access our S/4 on-premise system without any cloud services involved. I'm connected to the VPN, can reach our S/4 system, and can successfully address the BP API service with Postman. When I deploy my app, the whole thing works as well. However, I don't want to program blindly, and instead want to start & test the whole thing from my local machine. Unfortunately it does not work...

Failed to get 'destination' service credentials from VCAP_SERVICES variable: no service binding found for service plan '(any)'. Please make sure to correctly bind your application to a service instance of the destination service.

and

Caused by: com.sap.cloud.sdk.cloudplatform.security.exception.TokenRequestFailedException: Failed to get destination service client identifier and secret. Please make sure to correctly bind your application to a destination service instance

Can I access the S/4 directly with the Cloud SDK, or do I need a destination service in BTP? My application-local.yaml file looks like:

destinations:
  s4:
    name: s4
    authentication-type: BASIC_AUTHENTICATION
    url: 'https://ourS4URL'
    user: USER
    password: ZZZ
    trust-all-certificates: true

Just to test it, I try to read out all destinations, but even here I directly get the above-mentioned errors:

ScpCfDestinationLoader load = new ScpCfDestinationLoader();
Try<Iterable<ScpCfDestination>> destinations = load.tryGetAllDestinations();
// sysout etc.

The only difference to the deployed version is that I start the application.yaml with the profile "cloud" instead of "local", like here. What have I forgotten? Or is it really not possible?
edit: That's how I use my service in the deployed version:

HttpDestination httpDestination = DestinationAccessor.getDestination("s4").asHttp()
    .decorate(DefaultErpHttpDestination::new);
final BusinessPartnerService service = new DefaultBusinessPartnerService();

As you are able to access the on-premise system directly, instead of making use of the Destination service you could choose to create a Destination programmatically and register it so that the DestinationAccessor is able to furnish the details of this destination. Please read more about this option here. Alternatively, you can also set an environment variable for your destination: (Example is for Windows, please adapt according to your OS)

$destinations='[{name: "MyErpSystem", url: "https://URL", "username": "USER", "password": "PASSWORD"}]'

Both of these methods ensure that the DestinationAccessor returns a valid Destination (of your S/4 on-premise system). Did not work, but I was able to connect to the BTP Destination Service via cf ssh. Now I can talk to our on-prem service from my local dev environment.
Passing a set of IDs or a set of objects as data in a DTO in Spring Boot? I am trying to create a project management app, and for my back-end I have a Project entity which has a many-to-many relationship with the User entity. I have the following DTO:

public class ProjectDto implements Serializable {
    private final Long id;
    private final String name;
    private final String description;
    private final Date createdAt;
    private final Date updatedAt;
    private final Set<UserDto> users;
}

And in my ProjectService I want to have a method which creates a project with any sent users assigned to it. However, my question is: should my front-end send my back-end a set of user objects, or is it better to send a set of IDs of the users I want to assign to this project? Is it not better to actually have this DTO returned when a project is created, and have another DTO with a set of user IDs for when I want to create a project? We cannot trust data from the front-end and should apply validations to the request body, including Set<UserDto> users. And to validate the Set<UserDto> users we have to use trusted data, from the DB or other BE sources. Using Set<?> userIds also needs a fetch from the DB, but we don't have to add more code to validate the DTO, and validating the IDs is simpler and easier to maintain. Using userIds also makes sure the users that are set on the project are entities fetched from the DB. It also keeps the FE code simpler (I hope), as it doesn't have to build the (DTO) object.
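The answer's recommendation, an ID-only request DTO plus a full response DTO, can be sketched language-neutrally (Python dataclasses standing in for the Java classes; all names are illustrative, not from the question's codebase):

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class CreateProjectRequest:
    """What the front-end sends: just untrusted IDs, validated server-side."""
    name: str
    description: str
    user_ids: frozenset[int] = field(default_factory=frozenset)

@dataclass(frozen=True)
class UserDto:
    id: int
    name: str

@dataclass(frozen=True)
class ProjectResponse:
    """What the back-end returns: full user objects fetched from the DB."""
    id: int
    name: str
    users: tuple[UserDto, ...] = ()

req = CreateProjectRequest("Apollo", "launch tooling", frozenset({1, 2}))
# The server validates the IDs against the DB, then builds the response DTO.
resp = ProjectResponse(10, req.name, (UserDto(1, "Peter"), UserDto(2, "Scott")))
print(resp.users[0].name)  # Peter
```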
iOS push current view controller onto another view controller hierarchy I have a view controller (which is the root view of the main UINavigationController for my app) that is presenting another view controller hierarchy modally, by presenting a UINavigationController. After the user interacts with the root view controller in the modal NavController, a detail VC is pushed onto the modal NavController's stack. After some interaction with this detail VC, I'd like to push this detail VC back onto the original hierarchy, and dismiss the modal NavController/its root view controller, all without the user seeing the change in this hierarchy. Right now I have something like this: In MyViewController1 (root of the main NavController hierarchy):

UINavigationController *newModalNavController = [[UINavigationController alloc] initWithRootViewController:someRootViewController];
[self presentViewController:newModalNavController animated:YES completion:nil];

Then in the someRootViewController above, after some interaction (e.g. a button click), I push the detail view controller onto the modal hierarchy:

[self.navigationController pushViewController:detailVC animated:YES];

Finally, in the detailVC, after some more interaction:

// Get main nav controller here (i.e. non-modal one)
UINavigationController *mainNavController = /* ... */;

// Dismisses the modal view hierarchy (detailVC gets -(void)viewDidDisappear)
[self.presentingViewController dismissViewControllerAnimated:NO completion:nil];

// Repush the detailVC back onto the main hierarchy (detailVC gets -(void)viewDidAppear)
[mainNavController pushViewController:self animated:NO];

// Keyboard was on screen before interaction; make it stay on screen after pushing
// detailVC onto main nav hierarchy
[self.textView becomeFirstResponder];

This all works (clicking back on the detailVC's navigationItem after pushing it onto the mainNavController's hierarchy goes back to the original rootViewController, MyViewController1), except when the keyboard is on screen in the detail view controller as it is being switched, it gets hidden and then animates back up, instead of just staying on screen (because the view is disappearing for a second as the presentingViewController dismisses the modal nav controller, whose hierarchy the detailVC is a part of, and then reappears as it gets pushed onto the mainNavController's stack, and the textView grabs first responder again). Is there a better way to change which navigation hierarchy the detail view controller is a part of, possibly one that doesn't involve the view disappearing, and thus the keyboard being hidden and immediately reshown? I'd say initially that you shouldn't be abusing the hierarchy like this and that the effect will be confused users (you destroy the sense of flow through the transitions). Why not make the transition after the keyboard is dismissed? Consider for example iMessage; when composing a new message, after clicking send, the message view controller transitions from being a modal view controller to being part of the view controller hierarchy. The keyboard remains showing on the screen, and the text field remains first responder. I want to do something similar to that. @ewald : Did you figure it out? If so, can you share your results?
@BrainOverfl0w I never figured out a good way to do this - had a hack that worked in iOS 6 but doesn't work well in 7. I don't think it's an interaction Apple wants to allow; as Wain points out, in most situations it's probably confusing to the user. Let me know if you figure it out.
How to get a container Div with auto Height and with floating divs inside it? I have the following HTML:

<style type="text/css">
<!--
.msg_ok {
  font-family: "Trebuchet MS";
  font-weight: bold;
  font-size: 16px;
  margin: 0 auto;
  padding: 10px;
  width: 500px;
  height: auto;
  display: block;
}
.msg_ok {
  background-color: #DCFFB9;
  border: #003300 1px solid;
  color: #003500;
}
-->
</style>
<div class="msg_ok" style="height:auto;">
  <div style="display:block;float:left;width:350px; height:auto;">
    <p>line 1<br>
    line2<br>
    line3<br>
    line4<br>
    line5<br>
    </p>
  </div>
  <div style="display:block;float:left;width:100px; height:auto;">
    <a href="#">Print</a>
  </div>
</div>

And I get this result; however, how can I get a result like the following, with the container div having an auto height that fits the floating divs inside? Thanks in advance

Add these:

.msg_ok {
  overflow: hidden;
  zoom: 1;
}

and remove height:auto; that's useless, since height is always auto (by default). The issue you are facing is how to contain floats. See this page for details on different methods to contain your floats. I think you are looking for the clear-fix "hack", to avoid the parent losing its height when it contains floating elements, right? Add a <div class="clear">&nbsp;</div> after the last floating div, and then use the following CSS:

.clear {
  float: none;
  clear: both;
  height: 0;
}

Try this JSFiddle: height auto. Set the display property to "inline-block" for the msg_ok class. You need to add an element after the floating elements which clears the floats. Generally:

<br style="clear:both" />

The best method is to use clearfix. Refer to this link - http://css-tricks.com/snippets/css/clear-fix/
Excel Lookups or Index Matches with Multiple Criteria I need some simple Excel help. I'm fairly new to formulas in Excel, and while I know my request is simple, I just can't get the answer I want on my own. So in my Excel workbook, our data has the following categories: Acct. Number, Region, Contact (etc.), and the acct number data is broken down into separate lines depending on what region it is in. So Acct. Number 121 is broken down into 3 lines (North, South, West), and different data pertains to each region. What I'm trying to do is to create one line in another sheet for each account number. I've been able to do this with a VLOOKUP for all of the numbered data, but I've run into a problem with non-numerical data. So, example. I am trying to find the contact of the "North" region, and put it into the cell of my new worksheet. I have the account number listed on each line, but the regions are listed as the Column names This is how the data is listed in 3 columns from the source we are given: Contact.... Acct No.... Region....... Joe...........121.............North Marcy.........121.............East Jane..........122.............South Bob...........122.............West Bill..........122.............North And this is the set up in my separate worksheet that I've created: Acct.No.........North Revenue.........North Contact......South Revenue....South Contact..... 121.................803.........................(Need this value)..... 122.............. ..122..........................(Need this value).....98.......................(Need this value) I've done perfectly fine getting the numerical revenue values through VLOOKUP, referencing the Acct. No and specifying & North (or other regions), but when I try to do that same method to get the contact names, I get a #N/A. I've tried Index and Match combos that I've found online, but then I end up getting #Value errors. So is there a way to get the contact name from the data sheet, when I want to look up by the Acct No. 
in the cell of my worksheet, and then specifying the region on my own since I don't have a specific "North" cell to reference in my lookups or matches? Thank you for all of your help, I know this has a simple answer, I just haven't found my way around it yet. What was the formula that worked before running into non-numerical data? So this is the VBA code I used to get the numerical values for the North Revenue: "=VLOOKUP(RC[-13]&""NORTH"",Detail!C[-11]:C[8],20,FALSE)" But if I switched the columns to give the contact, I get a #N/A result. @newkid59 - So you're trying to use that Vlookup() to return the value in column A, based on a lookup of the Account number? You mentioned Index/Match in your title, do you know how to use Index/Match? If so, you can easily look up based on two+ values; see this page to get you started. Does that help, or am I misunderstanding the issue? You're right, I'm trying to return Column A based on the Acct. Number in Column B, but also corresponding to the specific region from Column C. So I did the Index and Match from the page, but I'm returning a #Value! error. So this is the formula that I've used for Index and Match now (and this was just in the worksheet, didn't do VBA yet): =INDEX(Detail!A:A,MATCH(A8&"North",Detail!B:B&Detail!C:C,0)) Am I supposed to do something different since it's non-numeric? Sample data, I had the following Column A Column B Column C Column G Column H Column I Contact Acct No. Region Acct No. North Rev N.
Contact ---------------------------------------------------------------------- Joe 121 North 121 803 *Formula* Put the formula below in Column I (in my case): =IFERROR(INDEX(A:A,MATCH(G2 & "NORTH",B:B & UPPER(C:C),0)),"") Make sure you enter it with Ctrl + Shift + Enter (this is an array formula). Copy and paste it down. Note: I used UPPER to match any casing of "north" (North, NoRtH, NORTH, etc.) In addition, if you have the phone number of the contact in, let's say, column D, then you can change the formula to =IFERROR(INDEX(A:A & ": " & D:D,MATCH(G2 & "NORTH",B:B & UPPER(C:C),0)),"")
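For readers who want to sanity-check the lookup logic outside Excel, here is a rough Python equivalent of that composite-key INDEX/MATCH. The sample rows mirror the question's data; the function and field names are illustrative only, not anything from the workbook:

```python
# Multi-criteria lookup: find the contact for a given (account, region) pair,
# mirroring INDEX/MATCH on a concatenated "AcctNo & UPPER(Region)" key.
rows = [
    {"contact": "Joe",   "acct": "121", "region": "North"},
    {"contact": "Marcy", "acct": "121", "region": "East"},
    {"contact": "Jane",  "acct": "122", "region": "South"},
]

def lookup_contact(acct, region, rows):
    key = acct + region.upper()          # same composite key as the array formula
    for row in rows:
        if row["acct"] + row["region"].upper() == key:
            return row["contact"]
    return ""                            # equivalent of IFERROR(..., "")

print(lookup_contact("121", "north", rows))  # Joe
```

Upper-casing both sides is what makes the match case-insensitive, just as `UPPER(C:C)` does in the worksheet formula.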
UIButton over a UIImageView via storyboard I want to create my view as below: For this I have figured out I need a UIImageView to display the default profile pic and a UIButton with an image of a camera over the UIImageView. I added a UIButton over the UIImageView, but when I run my code it is not visible. I tried changing the background color and added some text on the UIButton, but the UIButton is not displayed. I have already gone through similar questions, but most of them were solved by adding the code for the UIButton and UIImageView and not via storyboard. Can someone point out where I am going wrong? Also any other suggestions to make a similar view are appreciated. The one I can think of is to use a UIImageView instead of a UIButton and enable its user interaction. But that isn't much different than using a UIButton. Thank you. Have you added both the UIImageView and UIButton in the storyboard? Are you using auto-layout or any other constraints? Or try increasing the width of the UIButton just to check whether it is visible inside the whole UIView. @Dev.RK - Yes I am using auto-layout, and the UIButton is visible when I move it out of the UIImageView @TKutal adjust the button constraints you have added. This will help you. Due to some constraints your button is not visible inside the UIWindow or UIView. Do one thing: take just the button and set its background image, then also set an image on the button and set the image content insets on the button. There is no need for an imageView; just use a button. Try the line of code below: [btn.superview bringSubviewToFront:btn]; Write the above line of code at the end of the - (void)viewWillAppear:(BOOL)animated method. I have updated my answer, please check, and if it does not work then upload an image of your view's stack in the storyboard file. Adding constraints to the button helped. I think bringSubviewToFront will help if I add the button via code. Thanks for your time.
Check the document outline in the storyboard and check the view's stack: the button should be above the imageView (below it in the document outline). If not, change the position with a simple drag and drop. Hope this will help :) Try to drag the button in the document outline so that it comes after the ImageView. Check the images given.
Can the ghost replenish his hand in the Reveal the Culprit phase? In the Vision interpretation section of the rulebook, it says to draw vision cards to make up his hand to 7 cards. In the Reveal the Culprit phase, it says the ghost picks three vision cards from his hand. It's not clear to me if the ghost is allowed to draw new cards to bring his or her hand up to 7 in the Reveal the Culprit phase every time he/she selects one of the three vision cards. Can the ghost replenish his/her hand in the Reveal the Culprit phase? There is nothing in the Reveal the Culprit section about drawing new cards in the middle of the vision, therefore you don't. It is similar to the Vision Interpretation part of the game: if you give a psychic multiple cards as part of a single vision, you don't draw in between them, only after you are done with that particular vision and are ready to move to the next psychic's vision.
Left Nav menu missing in all SharePoint sites The left nav menu disappeared from my SharePoint 2019 servers (2 Web Front Ends and 2 SQL Back Ends, all with the latest OS, SharePoint, and SQL patches) recently. I've tried the following: Create a new Site Collection in the existing Web Application Create a new web application and a new site collection within it All show the same issue. For reference, I've placed a screenshot below showing the blank left nav. To generate that screenshot, I created a new web application, created a new site collection in it (team site) and nothing else. I welcome ideas as I am new to SharePoint 2019. Thanks! If someone has access to this site, can they see the navigation? Do you get the same result if you try to access the site with a different browser? Please try opening the browser by running as administrator. If you still get the same result, make sure you have not set oslo as your site's master page and that you have "Enable fast startup" checked. Reference: Current Navigation Bar missing hi, Is there any update on this thread?
Check if a query was successful in Laravel I define a query to delete a table in the database. After that I want to show a message that the query ran successfully. How can I check it with an if statement? $query = DB::table('user_users')->delete(); return view('datenbank'); This query will not delete the table, it will empty the table. Sorry, I meant that. :-D What have you tried so far? Something like inspecting whatever $query contains? When you say you "delete the table", do you mean empty it or actually remove the table from the database? When you use delete with the query builder it will return the number of affected rows. Your if statement would just need to look something like: $query = DB::table('user_users')->delete(); if ($query) { //query successful } If you want to be more explicit you could do if ($query > 0) {} If anything goes wrong with the query (an error) it will throw an Exception, which will mean that no rows have been affected. Personally, I think the best solution is to use an if statement like this. Your code: $query = DB::table('user_users')->delete(); return view('datenbank'); Solution: $query = DB::table('user_users')->delete(); // check whether data was deleted or not if ($query > 0) { return response()->json('202 Accepted', 202); } else { return response()->json('404 Not Found', 404); } Out of context, but it may be better to return a 204 status code instead of 202. I recommend using try/catch, because Laravel queries throw an exception when an error occurs... $queryStatus; try { DB::table('user_users')->where('column',$something)->delete(); $queryStatus = "Successful"; } catch (Exception $e) { $queryStatus = "Not success"; } return view('datenbank')->with('message', $queryStatus); If the query did not execute successfully, Laravel would normally throw an error. But if you want to be really sure, you could query the table right after truncating it to make sure there is no data left.
DB::table('user_users')->delete(); // Test that no records are left in the table $success = DB::table('user_users')->count() === 0; return view('datenbank', compact('success')); You can use try/catch with a DB transaction: try { DB::beginTransaction(); // your code DB::commit(); } catch (Exception $e) { DB::rollback(); // other actions }
Uber error code in iOS SDK I have integrated the Uber iOS SDK and am now working in sandbox mode. Here is my ride request code. [[UberHelper sharedInstance].ridesClient requestRideWithParameters:_rideParameters completion:^(UBSDKRide * _Nullable ride, UBSDKResponse * _Nonnull response) { NSLog(@"ERROR %@",response.error.title); NSLog(@"ERROR %@",response.error.code); NSLog(@"ERROR %ld",(long)response.statusCode); }]; But the error response I expected was like "errors":[ { "status": 409, "code": "surge", "title": "Surge pricing is currently in effect for this product." } ]. Presently I'm getting only "status" (response.error.status) and "code" (response.error.code), and "title" (response.error.title) is "null". I need this "title" to display the error alert. Will this data be available in production mode? Show the code you tried. @Anbu.karthik, below is the code [[UberHelper sharedInstance].ridesClient requestRideWithParameters:_rideParameters completion:^(UBSDKRide * _Nullable ride, UBSDKResponse * _Nonnull response) { NSLog(@"ERROR %@",response.error.title); NSLog(@"ERROR %@",response.error.code); NSLog(@"ERROR %ld",(long)response.statusCode); }]; @Anbu.karthik, I'm getting "status":401 and "code":unauthorized, title:nil with the above code Use the following to get the UBSDKError: if(response.error.errors){ UBSDKError *uberError = [response.error.errors objectAtIndex:0]; NSLog(@"title %@",uberError.title); NSLog(@"code %@",uberError.code); }
Bypass required Status Checks in GitHub I am trying to use GitHub Actions to automatically increment the version of my project while building the artifact and check the new changes back into the main branch. The workflow pushes the commit using a PAT for an account that is used only for this purpose, so I was able to exclude that user from the branch protection rule requiring a PR for the main branch. Because there is also a Required Status check for that branch, the push fails with remote: error: GH006: Protected branch update failed for refs/heads/main. remote: error: Required status check "build / tests" is expected. How can I have the version commits skip this check while enforcing this check for all other commits? Maybe this article about Handling skipped but required checks could help you. Unfortunately, that only seems to work with PRs. The direct push to main is still blocked because the status checks were never run, so I can't trigger an empty pipeline off that push. @Silverblaze are you and I the only ones trying to use such a workflow in GitHub Actions? Did you find a workaround nicer than disabling required status checks? Possibly. I never got anything to work, so we ended up scrapping the automatic versioning strategy. I think semver can work great and be reasonably automated using GitHub Actions with a branching strategy like gitflow. The SHAs are probably your best bet for automating the version if you are going trunk-based. It's not what we are currently using, but it is the long-term direction I want to go. I sent GitHub a support request about this. You are not alone. :) Did GitHub reply with any resolution? As of Apr 2024, I also have a similar issue. 2023-09-01 A recent feature, rulesets, provides more control, such as allowing specified actors to bypass status check requirements. I had the same issue and tried many different potential solutions.
The only thing I could do to get it to work was to create an account with admin permissions, allow force pushes only for that account, not enforce branch protection rules for admins, and then force push the changes (I'm using this action). The workflow basically looks like this: jobs: update-config: runs-on: ubuntu-latest steps: - uses: actions/checkout@v3 with: ref: ${{ github.head_ref }} token: ${{ secrets.SERVICE_ACCOUNT_GITHUB }} submodules: true fetch-depth: 0 ## do my update ## - name: Add and commit change run: | git config --local user.email<EMAIL_ADDRESS> git config --local user.name "github-actions[bot]" git commit -m "Bump version" -a - name: Push changes uses: ad-m/github-push-action@master with: github_token: ${{ secrets.GITHUB_TOKEN }} branch: ${{ github.head_ref }} force: true What scares me about this approach is that a force push would override/revert any pull requests that were merged while this task is preparing to run. Yes this is a feature that github should probably support officially, I realize its risky but you should be able to whitelist a user to skip checks just as you can specify users who can skip pull requests. The solution I found recently, that helps in the implementation of release workflow with pushing commits to the protected branch, is to use https://github.com/CasperWA/push-protected. It does follow the Require status checks to pass before merging protected branch setting flow. The token should have a Repository roles permission granted. For public repositories, it is a repo -> public_repo checkbox to switch it on during the token creation. 
Below is a dummy workflow that works for me: name: release on: push: branches: - main paths-ignore: - "version" jobs: tests: runs-on: ubuntu-latest steps: - uses: actions/checkout@v3 - name: Run tests run: sleep 5 release: needs: tests runs-on: ubuntu-latest steps: - uses: actions/checkout@v3 - name: Prepare release run: | git config --local user.email<EMAIL_ADDRESS> git config --local user.name "User" version=$RANDOM echo $version > version git add -A git commit -m "New version $version created" git tag $version - name: Push commit to a protected branch uses: CasperWA/push-protected@v2 with: token: ${{ secrets.PUSH_TO_PROTECTED_BRANCH }} branch: main unprotect_reviews: true tags: true This is working 95%. The only thing I've noticed is that if for whatever reason the push fails (e.g. pointing to an invalid branch or something), the push protections are not re-enabled, which is very bad. Do you know of any solution to re-enable the push protections? Hi, if it is possible I would follow this one: https://stackoverflow.com/a/77021130/5120554 I checked and it works as I expected, cheers. I faced the same problem a few days back. In case you're using standard-version, then in the package.json you can do something like: "standard-version": { "releaseCommitMessageFormat": "chore(release): {{currentTag}}\n[ci skip]", }, and in the GitHub workflow build / tests you can add a condition to skip tests in case a push is tagged with [ci skip], like this: if: "!contains(github.event.head_commit.message, '[ci skip]')" Reference Are you generating the commit as a step in your workflow, or are you doing that locally? When my workflow tries to push the commit, even with [skip ci], it's rejected with the same error. I've tried not enforcing the rule on admins and using a PAT for an admin, and that didn't let it through either.
The only way I've successfully made a commit from the workflow is to remove the required status checks for that branch, but I still want those enforced for other commits. I'm generating the commit on-the-fly in the workflow. Without removing the Branch Protection rules, I don't think there's any other way around it.
Do I need to ask real locations' permission to use them in my story? I have written a children's book about dragons and the things they do in Texas. Do I need to ask i.e. the Lubbock Arboretum if it's okay that I mention them in my story? So my question is: When do I need to ask permission to use a location? Or do I even need to? Thank you! Welcome to Writing.SE! Questions should only ask one thing at a time, so I've removed your secondary question about self-publishing. Feel free to ask a separate question about that if you need to, although we have plenty of Q&As about self-publishing already that may help you. If by a "place" you mean a city, state, country, etc, then no. No one owns the name "Texas" that you could get permission from if you tried. If by a "place" you mean an organization, like the headquarters of a company, things are a little more complicated. There are two potential issues: libel and trademark. If you portray a real organization as doing something illegal or scandalous, they could sue you for libel. Whether they could win depends on a variety of factors. 1. Did they really do what you said they did? If so, in the US it's case closed, truth is an absolute defense against libel. In other countries, like the UK, that's not the case. 2. Would a reasonable person believe you were saying this organization really did this, or is it obviously fiction? That can be tricky. In general, the purpose of trademark law is to protect buyer and seller against the buyer being tricked into thinking this was the "real" product when really it's a copy. Like, if you made your own soft drink and called it "Coca Cola", and sold it in cans that resembled real Coca Cola cans, the Coke company could sue you and would almost surely win. You're tricking people into buying your product by making them think it's Coca Cola. But if you write a book and in it you say, "Bob drank a Coke", no sane person is going to confuse your book with a soft drink. 
No one is going to think that because your book includes the word "Coke" that therefore it is a soft drink. Yes, sometimes this can get ambiguous. If your whole book is about Coca Cola and you have a picture of a Coke can on the cover, etc, Coke might be able to argue that you're giving people the impression that your book is an "official" Coca Cola product. They can afford better lawyers than you so maybe they'd win that one. Would it still be libel if the organisation does these illegal or scandalous things in a fictional setting? I guess maybe if the author stated that the fiction is based on a true story. @GiantSpaceHamster It depends if a court believes that a "reasonable person" would conclude the story was fiction. I don't have legal citations handy, but I recall a famous case a few years back where a porno magazine, Hustler, printed a cartoon depicting a then-famous preacher, Jerry Falwell, having sex with his mother. The preacher sued for libel. The court ultimately ruled that it was a cartoon, and a reasonable person would understand it to be fiction and not a claim to be factual reporting, and so killed the libel claim. ... ... Many writers attributing sinister actions to a real-life organization toss in a few sentences about it being an "extremist faction" within the organization. This both helps believability for those who would say, "Oh, come on, I can't believe that the Boy Scouts are really part of an international conspiracy!", while also giving them some protection against libel. "Hey, I didn't say this is official policy, I said it was an extremist faction!" Personally, if I was tempted to write a story where I said that, say, the Sierra Club was engaged in terrorist bombings, I wouldn't come out and say, "the Sierra Club". I'd make up some other name and make it clear to the reader that it's an organization LIKE the Sierra Club. Unless I had some reason why I wanted to publicly attack the Sierra Club. 
As the author, you don't need to be concerned with obtaining permission to use the name of a city, building, or whatever. This is a concern for a publisher -- which might be you if you self-publish. The situation that would matter to a publisher is if the author is using a trade-marked name without permission. While it is very likely that Lubbock Arboretum has trade-marked its name -- to protect its brand if they sell merch, this protection is very specific as to the form. As in, Lubbock Arboretum on a hat or a tee-shirt or hoody, maybe with some iconography, since they have to pay to register and maintain the trademark on each and every form. Consider the Louvre. If you were to make and sell unlicensed "I <3 the Louvre" totes, you could find yourself on the wrong end of an infringement lawsuit, but you can mention the Louvre in your novel or short story without concern for infringement. If your story brought discredit on the Louvre or the Lubbock Arboretum in a way that a reasonable person might believe is true, then you could get sued. But that wouldn't be for infringement but for brand diminishment -- egregiously hurting their earning potential with false and misleading statements. I would absolutely avoid depicting any non-public place, non-public building or private company in any way differently from the most positive things anyone would expect to happen there in reality. That is, if your story takes place at a private Arboretum and your protagonist goes there to look at the trees and meets a nice person there, no problem. But if in your story that Arboretum is a place where dragons live, that is potentially problematic, because the owner of the Arboretum might not want his business to be associated with dragons, no matter how positive you think your portrayal of the place was.
Photographers have to get a location release from the owner of a place if they want to commercially use photos they took of that location, and this is no different: you are commercially using someone's private property, so absolutely avoid anything that can be construed as being less than positive. Public places, state-owned property, countries, and so on you can do whatever you want with.
ASUS DSL-N16 Router Localhost Connection Issue I started using ASUS DSL-N16 model router recently. When I send consecutive requests to the stand-alone PHP web server that I run on "<IP_ADDRESS>:88" I receive a "server time out" error after 20-25 requests. I took a look at the settings in the admin panel, but the details are too complicated for me. I just tried disabling the firewall (not sure if this is okay to do) and the problem did not resolve. I have never experienced this problem with different brands and models of routers I have used before. Addition: QoS enabled/disabled modes were also tried. Seems like your PHP server dies out after those 20-25 requests. You should debug why. @harrymc I don't think so because I'm still using exact same infrastructure and code. I just changed the router and was able to run everything without a problem with previous routers. In that case it seems that the router is blocked. Is it possible that you're not closing the connections ? @harrymc I installed the router with quick setup guide and didn't touch anything. I don't even know how to close connections. I'm suspecting that some default security settings automatically blocking repetitive requests for protection. Unlikely that the router contains such a setting. How are you executing the requests on the server - via browser, JavaScript, or what? @harrymc Thank you for your time and interest. I'm running a PHP web app which makes ajax requests to localhost for dynamic content and at the back-end there are other API connections happening to other servers around the world. I've tested the API URLs when I get "server time out" error and they're working & reachable, so I think something happening at the localhost requests level. There are too many variables flying around. You'll need to investigate in order to get a handle on the problem. The router might have some logging facility or otherwise find what error the last request is getting. 
Add all the details to your post, including what's in the comments, for the benefit of other readers.
Recursive Fetch Request I am trying to write a recursive fetch function. I am calling an endpoint that accepts a name param and returns their role and an array of direct-subordinates like so: { role: "CEO", direct-subordinates: [ "john smith", "bob jones" ] } I then want to call the function again to request the same data for each subordinate. Here is the code I have: export const fetchEmployee = async (name) => { let url = `https://url.com/to/employees/endpoint/${name}` let req = await fetch(url) let json = await req.json() return json } export const recursiveFetchEmployees = async (initialName) => { let json = await fetchEmployee(initialName) const role = json[0] const subordinates = json[1] if (subordinates) { return { name: initialName, role: role, subordinates: subordinates['direct-subordinates'].map(async (subordinate) => { let result = await recursiveFetchEmployees(subordinate) return result }), } } else { return { name: initialName, role: role, } } } This almost works when called with recursiveFetchEmployees(employeeName).then((resp) => console.log(resp)) but the result is: name: "robert robertson", role: "CEO", subordinates: (2) [Promise, Promise], How do I change this so the function works its way down the employee hierarchy recursively producing a result like this: { name: "robert robertson", role: "CEO", subordinates: [ { name: "john smith", role: "Marketing Manager", subordinates: [{ name: "mary doyle", role: "employee", }] }, { name: "bob jones", role: "Development Manager", subordinates: [{ name: "barry chuckle", role: "Development Lead", subordinates: [{ name: "billy bob", role: "Developer", }] }] }, ], } Thanks in advance for any help or advice. EDIT / UPDATE Thanks to the fine answer given by @trincot the problem I had was resolved but it introduced another problem. I need to check for and filter out duplicates in the returned results. 
I introduced a uniqueNameArray that gets initialised with an empty array; on every call, it adds the name of the current initialName param if it does not already exist in the array. Here is my code: export const recursiveFetchEmployees = async (initialName, uniqueNameArray = []) => { if (!uniqueNameArray.includes(initialName)) { uniqueNameArray.push(initialName) let json = await fetchEmployee(initialName) const role = json[0] const subordinates = json[1] if (subordinates) { return { name: initialName, role: role, subordinates: await Promise.all( subordinates['direct-subordinates'].map( (subordinate) => subordinate && recursiveFetchEmployees(subordinate, uniqueNameArray) ) ), } } else { return { name: initialName, role: role, } } } } Unfortunately, when there is a duplicate it still gets called in the map function, resulting in a subordinates array that looks like this: { name: "bob jones", role: "Development Manager", subordinates: [ { name: "barry chuckle", role: "Development Lead", subordinates: [{ name: "billy bob", role: "Developer", }] }, { name: "james jameson", role: "Development Lead", subordinates: [{ name: "joey joe joe junior", role: "Developer", }] }, undefined, // <-- This is where there was a duplicate ] }, Is there a way to omit it from the promise list? What I've done above should do that as far as I can tell, so I'm not sure why it still returns an undefined response. As always, any help is appreciated, thanks! I don't know too much JS, even less outside of Angular; maybe try return await... in your subordinates check? It is no surprise that .map(async ... returns an array of promise objects, as an async function always returns a promise. You could use Promise.all here: subordinates: await Promise.all( subordinates['direct-subordinates'].map(recursiveFetchEmployees) ), Note also that you can just pass recursiveFetchEmployees as the callback argument to .map. There is no need to create that wrapper function. Thanks! I am actually astounded that works.
How does recursiveFetchEmployees work without at least being given a param? I assume the map function just passes what it's iterating over into the function passed in? Brilliant, thanks for your help! recursiveFetchEmployees is a function reference that .map uses for its repeated calls. It is the .map implementation that passes the argument. It really is no different from using an inline function expression. It's just that the signature of recursiveFetchEmployees is already fit for a .map callback, so there is no need to have an inline function expression there.
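To see both points in one place — Promise.all resolving the mapped calls, and the duplicate-produced undefined entries being filtered out after resolution — here is a self-contained sketch with a stubbed lookup. The employee data and function names are made up for illustration, not the asker's real endpoint:

```javascript
// Stubbed lookup standing in for the real fetchEmployee(); data is illustrative.
const employees = {
  ceo:  { role: "CEO",       subs: ["lead", "lead"] }, // duplicate on purpose
  lead: { role: "Lead",      subs: ["dev"] },
  dev:  { role: "Developer", subs: [] },
};

async function fetchEmployee(name) {
  return employees[name];
}

async function recursiveFetch(name, seen = []) {
  if (seen.includes(name)) return undefined; // duplicate -> undefined
  seen.push(name); // runs before the first await, so the check is race-free
  const { role, subs } = await fetchEmployee(name);
  const node = { name, role };
  if (subs.length) {
    const children = await Promise.all(
      subs.map((s) => recursiveFetch(s, seen))
    );
    // Drop the undefined entries produced by duplicates.
    node.subordinates = children.filter((c) => c !== undefined);
  }
  return node;
}

recursiveFetch("ceo").then((tree) => console.log(JSON.stringify(tree, null, 2)));
```

The key detail is that `.filter()` runs on the resolved array returned by Promise.all, not on the promises themselves, so the undefined placeholders never reach the final tree.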
How can I customize the path of a Rails 5.2 ActiveStorage attachment in Amazon S3? When adding attachments such as has_one_attached :resume_attachment saved files end up in the top level of the S3 bucket. How can I add them to subdirectories? For example, my old paperclip configuration could categorize in directories by model name. Downvoters, please explain so I can improve the question. I haven't found a duplicate. If it is simply a bad idea please explain why. Sadly you can't. I looked for this too, but I think AS is too "young" for the moment and it doesn't have all the options you can find in alternatives. There are a few near-duplicate questions on SO about this: https://stackoverflow.com/questions/49852567/how-to-upload-a-folder-using-activestorage-while-maintaining-original-folder-str; I don't understand the downvoters as well :( @codingaddicted thanks. Nearly duplicates, but none duplicate. I totally agree, there are a lot of specific aspects asked on SO that aren't covered by AS for the moment. For me your question is 100% valid and I hope the AS team look at it and the other questions too. Ha. The 6 downvote users were all removed. Some kind of mischief I suppose. The power of karma ;) You cannot. There is only one option possible, at this time, for the has_one_attached and has_many_attached macros, and that is :dependent. https://github.com/rails/rails/blob/master/activestorage/lib/active_storage/attached/macros.rb#L30 See (maybe the reason why you have downvotes, but it is about "direct" upload so...): How to specify a prefix when uploading to S3 using activestorage's direct upload?. The response is from the main maintainer of Active Storage. Thank you for finding a near but not duplicate. I'll update the title to constrain version and accept. Use a before_validation hook to set the desired key on S3 and the desired filename for the content disposition object properties on S3.
The key and filename properties on the attachment model make their way through to the ActiveStorage S3 gem and are converted into S3 key + content disposition object properties.

class MyCoolItem < ApplicationRecord
  has_one_attached :preview_image
  has_one_attached :download_asset

  before_validation :set_correct_attachment_filenames

  def preview_image_path
    # This key has to be unique across all assets. Fingerprint it yourself.
    "/previews/#{item_id}/your/unique/path/on/s3.jpg"
  end

  def download_asset_path
    # This key has to be unique across all assets. Fingerprint it yourself.
    "/downloads/#{item_id}/your/unique/path/on/s3.jpg"
  end

  def download_asset_filename
    "my-friendly-filename-#{item_id}.jpg"
  end

  def set_correct_attachment_filenames
    # Set the location on S3 for new uploads:
    preview_image.key = preview_image_path if preview_image.new_record?
    download_asset.key = download_asset_path if download_asset.new_record?

    # Set the content disposition header of the object on S3:
    download_asset.filename = download_asset_filename if download_asset.new_record?
  end
end

I think this should work, but I don't have a present example to test it with. Have you tested the solution?
When I read a '.mdb' file using pyodbc+pandas(pd.read_sql), there is a difference (time data) between the source data and the in-memory data. I want to read an mdb file into memory, but there are some differences between the source data and the in-memory data. Here is the code:

import pyodbc
import pandas as pd
import datetime

DB_Connection = pyodbc.connect('DRIVER={Microsoft Access Driver (*.mdb)};DBQ=./test_Time.mdb')
DF_SaveData = pd.read_sql(
    'SELECT * From SaveData', DB_Connection,
    parse_dates=True).sort_values(by='Date_Time', ascending=True)
print(DF_SaveData)

mdb data time: 2021-03-22 AM 8:45:46 result time: 2021-03-22 AM 8:45:45 1 second error... Most likely, your source dates have a millisecond part which, by Access, is rounded by 4/5 to the second, elsewhere cut off. If the source is an mdb file, how can there be a millisecond part?
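The suggested explanation can be illustrated in plain Python (the millisecond value below is hypothetical; it only demonstrates how rounding versus truncation produces a one-second difference):

```python
from datetime import datetime, timedelta

# A stored timestamp with a sub-second part (hypothetical value):
raw = datetime(2021, 3, 22, 8, 45, 45, 900000)  # 08:45:45.900

# Truncation: what you see when the sub-second part is simply cut off.
truncated = raw.replace(microsecond=0)

# Round-half-up to the nearest second: what a display that rounds shows.
rounded = (raw + timedelta(milliseconds=500)).replace(microsecond=0)

print(truncated)  # 2021-03-22 08:45:45
print(rounded)    # 2021-03-22 08:45:46
```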
Need to understand how events work with delegates in C# I cannot understand how delegates work with events in C#. The form of the syntax is:

public event someNameDelegate someName;
control.someName += new control.someNameDelegate(methodName);

So how does it know what event (like mouse click etc) fires the method? I'm totally missing how this works. I understand that the delegate will call methodName but I don't understand for what event. [Additional Info] In the 2 lines above, if someName is ContentModified then the code compiles. In the 2 lines above, if someName is Banana then the code does not compile. However ContentModified is nowhere else in the code apart from the 2 lines above: So how does the compiler tell the difference? How does what know what fires the method? It's unclear what you call "it". Perhaps you're missing the part that raises the event? Assuming you know delegates, have you been here already? the event args need to derive from EventArgs and the delegate should have a signature like this: void OnEventHandling(object sender, EventArgs argument) Take a look at these tutorials: Delegates (TutorialsPoint) and Events (TutorialsPoint) and Delegates (TutorialsTeacher) and Events (TutorialsTeacher). Does this answer your question? When & why to use delegates? and Where do I use delegates? What technology do you use (WinForms, WPF, UWP, ASP.NET, GTK...)? You may also find this helpful: Event Overview (Windows Forms .NET) and Event order in Windows Forms @Olivier Rogier in this case it is WPF but in future I could experience the problem in other technologies. For example, suppose you create a userControl and your control has a button, and when the button text changes, you define an event for it. That is, you see where the button text changes and there you define an event for it.
public event TextChangedEventHandler ItemTextChanged;
public delegate void TextChangedEventHandler(object Sender);

private void btnContent_Click(object sender, RoutedEventArgs e)
{
    if (ItemTextChanged != null)
    {
        ItemTextChanged(txtbChildBtn);
    }
}

Same example with another name

public event ValueChangedEventHandler ItemValueChanged;
public delegate void ValueChangedEventHandler(object Sender);

private void btnContent_Click(object sender, RoutedEventArgs e)
{
    if (ItemValueChanged != null)
    {
        ItemValueChanged(sender);
    }
}

or FucosChanged button

public event FucosEventHandler FucosChanged;
public delegate void FucosEventHandler(object Sender, DependencyPropertyChangedEventArgs e);

private void btnContent_FocusableChanged(object sender, DependencyPropertyChangedEventArgs e)
{
    if (FucosChanged != null)
    {
        FucosChanged(sender, e);
    }
}

Yes, but how do you know it is TextChangedEventHandler rather than TextFontChangedEventHandler or some other name. Is there a list of the allowed ones somewhere? Ah, I see what I was looking for. It's in the metadata for TextBoxBase, e.g. public static readonly RoutedEvent TextChangedEvent; Using the delegate keyword, you can select any name and create any type of event without any restrictions. @meyasm asadi You are correct, but what I was looking for was where I could get what type of events are available, e.g. is there a TextSpecialChar event available for textbox. The metadata provides this.
"Product" topology of $\mathbb N\times \mathbb R$ Background: Consider a set that is a finite or countable set (for example $\mathbb N$) endowed with the discrete topology. Question: What topology is endowed with $\mathbb N\times \mathbb R$? Is $\mathbb N\times\mathbb R$ connected? Is it possible to define a reasonable topology such that $\mathbb N\times\mathbb R$ is connected? Motivation: "Connectedness" is a useful property that is a necessary condition for many theorems. In order to apply those theorems in the discrete space $\mathbb N$, one may want to try to "connectify" the discrete space, for example, by "putting it together" with $\mathbb R$. My guess: $\mathbb N\times\mathbb R$ is not connected in its standard product topology. Consider the sets $\{1\}\times\mathbb R$ and $(\mathbb N\setminus \{1\})\times\mathbb R$; both of those sets are open in the standard product topology as $\mathbb R$ and $\{1\}$ are both open sets. In order to make $\mathbb N\times\mathbb R$ connected, can we define the whole set $\mathbb R$ to be neither closed nor open? Intuitively, take the half-open interval $[0,1)$, then $\mathbb N\times [0,1)$ can only be connected? Additionally, I am not sure if things like $\mathbb N\times \mathbb R$ are connected. Any comments will help! You mention $X$ in the first line and then never mention it again. Anyway your guess is correct, and the proof is perfectly correct. @Wojowu sorry, editing @HighGPA: $\Bbb N\times[0,1]$ is not connected in the product topology. Indeed, as long as $\Bbb N$ is given the discrete topology, $\Bbb N\times X$ is not connected unless $X$ is the empty space. @BrianM.Scott Yes you are definitely right. Even $\mathbb N\times [0,1)$ is not connected, unless we redefine the meaning of "product topology". Since the "product topology" is already the coarsest, redefining a reasonable topology such that $N\times R$ is connected seems near impossible...
The usual way of obtaining a (path) connected space from an arbitrary topological space is by making it a part of a cone, not a simple product. @tomasz Many thanks for your help. How do I make $N$ part of a cone? @HighGPA: There's a definition in the article I linked... The easiest way to make N×R connected is to give N the indiscrete topology. For example how? Is it possible to define a new "product" topology which is neither box topology nor Tychonoff topology? @HighGPA? The product of two connected spaces is connected. I guess you are right. Though "indiscrete topology" seems to be a trivial topology and not too meaningful... There are others which any @HighGPA student can create. Why forcing properties on the topology of $\mathbb R$ when $\mathbb R$ is already connected? The obstruction in $\mathbb N\times \mathbb R$ connection is clearly the fact that $\mathbb N$ itself is not connected (which is what you used to prove that the product is not connected, i.e., being $\{1\}$ open in $\mathbb N$, you conclude that $\{1\}\times\mathbb R$ is open in $\mathbb N\times\mathbb R$), therefore I'd say that the easiest way to get to a connected topology on $\mathbb N\times \mathbb R$ is requiring a connected $\mathbb N$ (for instance, giving it the trivial topology, i.e. $\{\varnothing,\mathbb N\}$). If your question requires us to keep the discrete topology on $\mathbb N$, I'm afraid that changing the topology of $\mathbb R$ won't be much of a help. You could always take a clopen set of $\mathbb N$ (e.g., $\{1\}$), and its product with any other topological space will still be a clopen in the product topology.
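The disconnection argument running through these answers can be stated compactly (a standard fact, added here for reference, not verbatim from any answer):

```latex
% For N discrete and any nonempty space X, the product splits into
% two nonempty disjoint open sets, so it is disconnected:
\[
  \mathbb{N}\times X
  \;=\;
  \bigl(\{1\}\times X\bigr)\,\sqcup\,\bigl((\mathbb{N}\setminus\{1\})\times X\bigr).
\]
% Conversely, if N is given the indiscrete topology {emptyset, N},
% it is connected, and a finite product of connected spaces is
% connected, hence N x R becomes connected.
```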
Dynamic SQL update Image I have been battling with this statement:

ALTER PROCEDURE [dbo].[transact_image_update]
    -- Add the parameters for the stored procedure here
    @transact_recordID_int int,
    @image1_bin image,
    @image2_bin image,
    @transact_referenceNo_str nvarchar(25),
    @userID_last uniqueidentifier,
    @tableName nvarchar(50)
AS
BEGIN
    -- SET NOCOUNT ON added to prevent extra result sets from
    -- interfering with SELECT statements.
    SET NOCOUNT ON;

    -- Insert statements for procedure here
    DECLARE @sqlUpdt01 nvarchar(4000)
    SET @sqlUpdt01 = ' Update [dbo].[' + @tableName + ']
        SET [image1_bin] = ' + CONVERT(VARCHAR(MAX), CONVERT(VARBINARY(MAX), @image1_bin), 2) + ',
            [image2_bin] = ' + CONVERT(VARCHAR(MAX), CONVERT(VARBINARY(MAX), @image2_bin), 2) + ',
            [userID_last] = ''' + convert(nvarchar(4000), @userID_last) + '''
        WHERE (transact_recordID_int = ' + convert(varchar, @transact_recordID_int) + ')
          AND ([transact_referenceNo_str] = ''' + convert(varchar, @transact_referenceNo_str) + ''')
          AND (locked_bol = 0) '
    exec sp_executesql @sqlUpdt01

Basically, I have many DB tables with similar schema but different names (for types of transactions) and would like this ONE procedure to make the update given the table name as argument. This script compiles successfully but execution cannot update the image field. Is there a conversion I'm missing? Please help. How do you know it "cannot update the image field"? Is there an error message? What do you see if you print out the value of @sqlUpdt01 before execution? There is an incorrect type cast in the line below. Incorrect line: + ''' WHERE (transact_recordID_int = '+ convert(varchar,@transact_recordID_int) + Correct line: + ''' WHERE (transact_recordID_int = '+ @transact_recordID_int +
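A common way to avoid casting the binary values into the SQL text at all is to pass them as parameters to sp_executesql, concatenating only the (validated) table name. This is a sketch of that approach, not tested against the original schema:

```sql
-- Sketch: only the table name is concatenated (validate it first, e.g.
-- against sys.tables); the values travel as typed parameters.
DECLARE @sql nvarchar(4000) = N'
    UPDATE [dbo].' + QUOTENAME(@tableName) + N'
    SET [image1_bin] = @img1,
        [image2_bin] = @img2,
        [userID_last] = @userId
    WHERE transact_recordID_int = @recId
      AND [transact_referenceNo_str] = @refNo
      AND locked_bol = 0';

EXEC sp_executesql @sql,
     N'@img1 image, @img2 image, @userId uniqueidentifier,
       @recId int, @refNo nvarchar(25)',
     @img1 = @image1_bin, @img2 = @image2_bin, @userId = @userID_last,
     @recId = @transact_recordID_int, @refNo = @transact_referenceNo_str;
```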
Android + Sum up values of a parse query I have the following problem: I want to sum up the ratings (column "Rating") and later calculate an average rating. Unfortunately this doesn't work, and it seems to me that the problem lies with or within the foreach loop. The variable avg has no value after the loop and I don't see the reason for this. Thank you very much for your help. Table "RateAndComment" http://www.directupload.net/file/d/3831/y23ygngp_png.htm

protected double totalRating;
protected int i;
protected float avg;

public void CalculateRating(String objectId) {
    //ParseQuery<ParseObject> query = ParseQuery.getQuery("RateAndComment");
    ParseQuery query = new ParseQuery("RateAndComment");
    query.whereEqualTo("PlaceObjectId", objectId);
    query.findInBackground(new FindCallback<ParseObject>() {
        @Override
        public void done(List<ParseObject> list, com.parse.ParseException e) {
            if (e == null) {
                i = 0;
                for (ParseObject obj : list) {
                    totalRating = totalRating + obj.getDouble("Rating");
                    i = i + 1;
                }
            } else {
                Log.d("Post retrieval", "Error: " + e.getMessage());
            }
        }
    });
    avg = (float) totalRating / i;
    avgRating(objectId);
}

are you familiar with the term asynchronous? With the word yes, but to be honest, I don't see how it fits for this problem. Could you point me in the right direction? have you started by reading the doc? https://www.parse.com/docs/android/api/com/parse/ParseQuery.html#findInBackground%28%29 (the keywords here are in background) I read the document: https://parse.com/docs/android_guide#queries I understand now what you want to tell me with the term asynchronous and I see the problem. Thanks for the help. The FindCallback's done method you pass into query.findInBackground is executed at a later point in time (after the query has completed). avg = (float) totalRating/i; avgRating(objectId); is being executed before i is calculated. Move it inside of the done method and you should be good.
There will be a delay after calling CalculateRating before avgRating is called, so you should probably show some loading UI or some other indication that something is happening in the background. Thank you very much for the solution and explanation. Now it works fine, performance is not an issue yet, but I will keep it in mind. As findInBackground performs a network request, your end users will likely experience performance issues when they use your app on a 3G (or worse) connection. Always good to keep in mind.
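The ordering problem can be demonstrated without Parse at all. The sketch below uses a hypothetical FakeQuery class standing in for findInBackground (all names are invented for illustration): the line computing the average outside the callback runs before any results exist, while the same computation inside the callback sees the data.

```java
import java.util.List;
import java.util.function.Consumer;

// Hypothetical stand-in for an asynchronous query: it stores the callback
// and invokes it later, the way a network call would.
class FakeQuery {
    private Consumer<List<Double>> pending;
    void findInBackground(Consumer<List<Double>> callback) { pending = callback; }
    void completeWith(List<Double> results) { pending.accept(results); }
}

public class AsyncAverageDemo {
    static double totalRating = 0;
    static int count = 0;
    static float avgOutside;
    static float avgInside;

    public static void main(String[] args) {
        FakeQuery query = new FakeQuery();

        query.findInBackground(results -> {
            for (double r : results) { totalRating += r; count++; }
            // Correct: the data has arrived, so the average is valid here.
            avgInside = (float) (totalRating / count);
        });

        // Wrong: this runs before the callback fires, so count is still 0.
        avgOutside = (float) (totalRating / count);  // 0.0/0 -> NaN

        // ... later, the "network" responds and the callback finally runs:
        query.completeWith(List.of(3.0, 4.0, 5.0));

        System.out.println("outside: " + avgOutside);
        System.out.println("inside:  " + avgInside);
    }
}
```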
Good example of what isn't a system call? I'm teaching an Operating System course, and was looking for an example of something that isn't a system call. As silly as it may sound, I wasn't able to carve a good example. Everything (accessible to second-year students) that came to my mind involved a system call in some way, and looking around for examples didn't help. So, I'm reaching you: Is there any good example of an operation happening on a computer that does not involve system calls? And wish a long life to that community that I just discovered, but intend to contribute to! Edit: the example can come from any OS, and I know that pretty much every useful program requires some system call, that's why I'm asking! Doesn't that depend quite a lot on the OS? Can you be more specific? Pretty much any useful program affects the system in some way, whether reading/modifying a file, outputting something to the screen or otherwise. Even something like int main(){int i = 0; return 0;} has to implicitly request resources from the system to store the variable. Maybe an opportunity to talk about layers of abstraction in a system. The deeper you go, the more likely it is you need to talk to the kernel. Anyway. Welcome to CSEducators. We try to be helpful, even when we sound a bit gruff. The students would probably be familiar with strlen and strcmp, which are not and do not result in syscalls. Then you have printf and malloc which are not syscalls, but which may result in syscalls. A great example of the misdirection which results from the subjects covered in a syllabus. Almost all code written is userland - i.e. not system calls, yet this question is still not 'trivially obvious'. The simple breakdown is: Anything that performs input/output involves a series of system calls. Anything that's purely computational doesn't involve system calls.
Both of these statements have exceptions (I/O through memory-mapped files or peripherals is I/O without syscalls, calling a hardware accelerator or allocating memory is syscalls without externally visible I/O) but they're a good approximation. So here are some examples of operations that don't involve system calls:

- Getting or changing the value of a variable.
- Arithmetic operations on integers and floats.
- Conditionals and loops whose body doesn't involve system calls.
- Operations on strings such as searching, joining, splitting, etc.
- Manipulations of data structures such as lists, arrays, matrices, trees, etc.
- Memory management, as long as the program doesn't need to obtain more memory from the OS.
- Printing to a buffered stream, if the buffer doesn't get flushed during this print call.
- Likewise, reading from a buffered stream, if enough bytes are available.

It would help to demonstrate by showing the system calls made by some programs, e.g. with strace on Linux, with dtruss on macOS or with Systrace under Windows. Compare a program that does a lot of I/O with one that does one big computation and prints the result. The case of buffered I/O, suggested by ctrl+alt+delor, is an interesting one because the same library functions may or may not trigger a system call depending on the state of the system. There is sometimes confusion between a library call and a system call. Unix programmers in particular tend to believe that what is documented in section 2 of the manual is system calls, but this is not true on modern Unix systems. Every function callable from C is a C library function that's a wrapper (possibly very simple, possibly even a macro rather than a function) around a system call that usually, but not always, has the same name and takes more or less the same parameters. You can observe that on a modern Linux/x86_32 system, where many system calls have been upgraded from 32-bit arguments to 64-bit arguments, e.g.
strace ls will show calls of getdents64 and fstat64 rather than getdents and fstat. Under the hood, system calls and library calls have a different calling convention. At the very least, a library call involves a jump/branch instruction while a system call involves a privilege change instruction. Depending on the system, the rules for placing arguments in registers may be different. You can discuss how a program could make as few system calls as possible by starting from strace /bin/true. Most of its syscalls are due to dynamic linking, so write that one-line program, compile it statically and look at the remaining system calls. Reduce it even further with a more minimalistic libc such as dietlibc.

# Install dietlibc, e.g. apt-get install dietlibc-dev on Debian
$ cat a.c
int main(void) {return 0;}
$ diet gcc a.c
$ strace ./a.out
execve("./a.out", ["./a.out"], [/* 82 vars */]) = 0
arch_prctl(ARCH_SET_FS, 0x7fff98099fe0) = 0
_exit(0) = ?
+++ exited with 0 +++

I agree +1, however memory-mapped IO could result in an implicit system call, if there is a page fault. Whereas direct IO uses explicit system calls. Also printf may not make a system call: The C library can buffer the data. The library will call flush if the buffer becomes full. You can also call flush. You will notice this sometimes: “Why does printf not work / why is output of printf delayed?”. On a GNU system (such as GNU/Linux), as well as strace, you are free to look at the source code (though strace may be easier/quicker).
Looking at the source code doesn't tell you which system calls are executed, at most it lets you see the library calls, which as you point out are not directly related (the example of buffered IO is a nice one, thanks). @giles [I was not disagreeing with you, just exploring these exceptions] I said implicit system call, I mean here that it triggers a service routine in the kernel. Is it correct that all access to the kernel is through interrupt service routines (hardware interrupts: page fault, device ready; and software interrupts (syscall))? @ctrl-alt-delor It's a bit dangerous to take such a general statement as absolute truth because there's usually some bizarre hardware somewhere. But for a sufficiently broad definition of “interrupt service routine” that's true. Depends if you think something like returning from hypervisor mode counts as a software interrupt. I am still learning the basics; when I see the architecture diagrams online, I always see the system calls in between user space and kernel space, so I find it hard to comprehend how any instruction can be without a sys call? I mean even if I do a + b, does that not need to access CPU, RAM, registers and then place the result back in a variable?
common-pile/stackexchange_filtered
How do I make a REVERSE CALENDAR? I have a request from my regional manager to create an availability calendar that shows who is available and when. The pure opposite of an event calendar. I have approximately 50 people who are all attached to multiple projects with start and stop dates for each project. I can very easily put all the projects on a calendar. I can also easily put all the people on a calendar with start stop dates. How do I display the days they are not working? I honestly do not know where to start this one; I have mocked up event lists and personnel lists with calendar views, but none seem to show what I need. The only thing I can think of that may work is to have a series of nested lists. Main List: All Personnel SubLists: Each person has a calendar of events Overlay each list on one large calendar But this still shows when they are working, not when they are off. Green - Days they are working (Easy to show, I don't want to see this by default) Red - Days they are off, I need only the red days to display for the whole crew. Either in a list by day or a calendar. Have you looked into using CSS to 'hide' all events, and then highlight days where no events are listed? I don't foresee that working; there will be events every day of the year due to the amount of projects going on. I need it to be a list of when employees are not working. Any other thoughts? I had to do a similar for a client: they didn't want to show who is not working per se, but rather who was on-site, off-site, or not working. We setup a separate list that tracked these values. A simple solution might be this: Create a new Calendar List. For each employee, create a daily recurring event called "Available". Whenever an item is created in your CURRENT Calendar, delete the recurring event for that employee on the specified date.
What are these seeds/eggs that appear on my indoor plant leaves? I keep finding these tiny brown ridged "seeds" that appear on the leaves of my indoor plants. Since seeds don't normally fly, I am guessing they might actually be related to an insect. But the only insects I am aware of having in my living room are fungus gnats, lady bugs, and probably aphids, and these don't look like the eggs or pupae of any of those. The nearby plants are: old basil, young batavia lettuce, young lambs lettuce, wood sorrel, a young avocado tree, and a baby philodendron erubescens. But I don't see how any of these could be producing these "seeds". Thanks in advance! Finally found the answer: they are wood sorrel seeds! http://extension.msstate.edu/newsletters/bug%E2%80%99s-eye-view/2019/yellow-wood-sorrel-seed-vol-5-no-30 Your plant has aphids. These pear shaped insects are so common, they are also known as 'plant lice'. They come in all different colors from black to pink. The best way to get rid of them is with soap spray. You can make a good homemade soap spray by mixing several drops of Dawn dish liquid (you can use a different kind of dish soap but Dawn is the best) with distilled water in a spray bottle. Spray the entire plant very thoroughly. Make sure to get all stems and both the upper side and under side of the fronds. Repeat spraying the plant with the soapy water every 3-4 days until there are no more sign of insects. This usually takes about 2 weeks. Thanks Avlar. I have been aggressively spraying soapy water all over this lettuce plant to treat its aphids, although I believe they are now all dead. Are these brown "seeds" related to the aphids? Honestly, the brown 'seeds' are aphid corpses. Just keep spraying for awhile even after you think they are all dead, just to be sure. So I’ve these too. They light on me as I walk through my yard. Photo taken with them in the dimple of a microscope slide. Seeds or flea Corpses? This does not really answer the question. 
Nice picture! I am now 100% confident that what I have are wood sorrel seeds, as I have collected them and planted them. The leaves are edible! https://en.wikipedia.org/wiki/Oxalis_acetosella
How to store images in local storage and store path in room? I'm new to Kotlin Jetpack Compose android development and I have been trying to build an app that lets you take notes and you can also store images along with the notes in the room database. The issue is that when I use the following picker to pick images from the gallery they show whilst the app is functioning but as soon as you close and restart the app the images are no longer loading. Their content uri is stored in the database however. Initially I was storing only the Uris of the content picker which looked like: content://media/picker/0/com.android.providers.media.photopicker/media/1001290474

Data Class for the Note in the Room DB

@Entity(tableName = "notes_tbl")
data class Note(
    @PrimaryKey val id: UUID = UUID.randomUUID(),
    @ColumnInfo(name = "note_title") var title: String,
    @ColumnInfo(name = "note_body") var desc: String,
    @ColumnInfo(name = "note_images") var images: List<Uri> = emptyList()
)

Multiple Photo Picker

val multiplePhotoPicker = rememberLauncherForActivityResult(
    contract = ActivityResultContracts.PickMultipleVisualMedia(),
    onResult = { uris -> uriList.value = uris }
)

Add Note Function (The notesList contains the list of notes fetched from Room)

fun getNote(id: String): Note {
    return notesList.value.first { note -> note.id.toString() == id }
}

I also tried writing to the storage the images I picked using a code I found here

val saveImageToInternalStorage: (Context, Uri) -> String = { context: Context, uri: Uri ->
    val fileName = UUID.randomUUID().toString() + ".jpg"
    val inputStream = context.contentResolver.openInputStream(uri)
    val outputStream = context.openFileOutput(fileName, Context.MODE_PRIVATE)
    inputStream.use { input ->
        outputStream.use { output ->
            input?.copyTo(output)
        }
    }
    fileName
}

They are present in the database but I cannot display them for some reason. I don't know what to do. Kindly overlook any weird code as I tried various methods found online but none worked.
Do you want to store a copy of the images or just a reference? You forgot to tell us the exception you get when you try to reuse. There is no exception unfortunately, the app doesn't load the image. A reference would be better but it wasn't working. Your content scheme uri is only valid as long as your app runs, or shorter. In order to be able to use the content uri later you should take persistable uri permission the moment you get that uri. do you have any resource I can refer to, as I cannot find anything that tells me how to take that permission via jetpack compose. I tried one method on YouTube but it didn't work You did not google for take persistable uri permission.
Left Ctrl Key doesn't work (but right one does) My left ctrl key isn't working. The right ctrl key works fine. I'm not skilled at all with problem solving in Linux. Could someone point me in the right direction, please? What I learn when I run xev is that the right key is correctly associated with Control_R, while the left key is not. How can I fix this? The same problem occurs under Ubuntu 18.04 and under Linux Mint (installed one after the other hoping the issue would resolve itself). Other keys seem to work fine so far. Output of xev when I press the Ctrl keys:

KeyPress event, serial 38, synthetic NO, window 0x5200001,
    root 0x15a, subw 0x0, time 1330613, (-134,341), root:(727,796),
    state 0x0, keycode 105 (keysym 0xffe4, Control_R), same_screen YES,
    XLookupString gives 0 bytes:
    XmbLookupString gives 0 bytes:
    XFilterEvent returns: False

KeyRelease event, serial 38, synthetic NO, window 0x5200001,
    root 0x15a, subw 0x0, time 1330692, (-134,341), root:(727,796),
    state 0x4, keycode 105 (keysym 0xffe4, Control_R), same_screen YES,
    XLookupString gives 0 bytes:
    XFilterEvent returns: False

KeyPress event, serial 38, synthetic NO, window 0x5200001,
    root 0x15a, subw 0x0, time 1332145, (-134,341), root:(727,796),
    state 0x0, keycode 151 (keysym 0x1008ff2b, XF86WakeUp), same_screen YES,
    XLookupString gives 0 bytes:
    XmbLookupString gives 0 bytes:
    XFilterEvent returns: False

KeyRelease event, serial 38, synthetic NO, window 0x5200001,
    root 0x15a, subw 0x0, time 1332155, (-134,341), root:(727,796),
    state 0x0, keycode 151 (keysym 0x1008ff2b, XF86WakeUp), same_screen YES,
    XLookupString gives 0 bytes:
    XFilterEvent returns: False

keycode 151 should be your function key -- did you swap the Left Ctrl and Function key at some point? no, it's a fresh installation. Hardly changed anything so far. ... but we're on something here! Fn and Ctrl are reversed... can I switch them so they match the sign printed on the keyboard?
Take a look at https://askubuntu.com/questions/68819/swap-two-keys-using-xmodmap (or search this site for "swap keys"). so ubfan1 was right: Switching of Fn and Ctrl had been enabled in the BIOS by a previous owner of the notebook. thanks for clarifying everyone! Switching Ctrl and Fn should be available in the BIOS. OEMs may have Windows apps to change it, but it's just another way of changing the same setting. Jepp, BIOS swap it was. Thanks everyone! @mathew7 Please elaborate on this. I have a Dell Inspiron and I see nothing on keyboards on its BIOS menu. @Trunk My experience is with Lenovo, specifically Thinkpads. My 2013 x230 and 2018 x280 have the Fn key in the corner where 90% of keyboards have ctrl (fn,ctrl,win,alt,space order). Searching for Dell Inspiron images, I see them having ctrl in corner. For such layouts, I don't think they would have the swap option. @mathew7 Just found the answer. Open Tweaks > Keyboard & Mouse > Additional Layout Options > CTRL Position and then check the Bottom Left box. Ideally I'd like to move my Compose key across to the Right CTRL key but that's another puzzle . . . Just commenting here to say that this worked on a Lenovo Legion laptop, where the "Fool proof Fn and Ctrl" option was on by default.
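For cases where no firmware option exists, a user-level remap along the lines of the linked xmodmap answer might look like this. This is an untested sketch: keycode 151 is taken from the xev output above, but keycodes vary by keyboard, so check yours with xev first.

```
! ~/.Xmodmap : make the key currently reporting keycode 151 act as Left Ctrl
clear control
keycode 151 = Control_L
add control = Control_L Control_R
```

Apply it with xmodmap ~/.Xmodmap (the change lasts for the current X session only).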
Which maximal closed subgroups of Lie groups are maximal subgroups? Which maximal closed subgroups of Lie groups are maximal subgroups? What's an example of a maximal closed subgroup of a non-discrete Lie group? Pete: See http://arxiv.org/PS_cache/math/pdf/0605/0605784v3.pdf . For noncompact Lie groups it's probably essential in this question to separate the semisimple (or reductive) ones from the others, since the compact group preprint mentioned here and its many references indicate already the richness of the problem in the compact case where the structure theory of semisimple groups predominates. In general, there is also a need to compare real and complex Lie groups. But at least in the semisimple (or reductive) case, parallel work on algebraic groups should be a helpful guide. Is there any relevant literature on solvable Lie groups? @David: thanks, that was helpful. I worked through the commutative case in my head and saw that that was bad for maximal subgroups. I should have thought about it more: I do know that every compact subgroup of $\operatorname{GL}_n(\mathbb{R})$ is contained in an orthogonal group... A related question is http://mathoverflow.net/questions/60315/modern-reference-for-maximal-connected-subgroups-of-compact-lie-groups There is a paper by M. Golubitsky, "Primitive actions and maximal subgroups of Lie groups", J. Differ. Geom. 7 (1972), 175-191: from the Introduction: "...there exist maximal Lie subgroups whose Lie algebras are not maximal subalgebras".
Flexible enums in C++ How can I organize enums so that each enum entry could have parameters, like I can do in Haxe, for instance: enum GraphicAct { ClearScreen; MoveTo(x:Float, y:Float); LineTo(x:Float, y:Float); FillColor(color:Int); EndFill; } function main(){ var actions:Array<GraphicAct>; actions.push(ClearScreen); actions.push(FillColor(0xaaffff)); actions.push(MoveTo(100, 100)); actions.push(LineTo(200, 100)); actions.push(LineTo(100, 200)); actions.push(EndFill); for(act in actions){ switch(act){ case ClearScreen: // do clear screen here... case MoveTo(x, y): // move position case LineTo(x, y): // move position } } } As far as I know, C++ supports only enum entries without parameters, like "ClearScreen" and "EndFill", but in this case how do I organize command sequences in C++, like I did in the example with graphic commands? Why do you want it as an enum? You could get what you want with a GraphicsAct base class and then using polymorphism. You may use union and enum, something like: enum class EGraphicActType { ClearScreen, MoveTo, LineTo, FillColor, EndFill }; struct ClearScreenData {}; struct MoveToData { float x; float y; }; struct LineToData { float x; float y; }; struct FillColorData { int color; }; struct EndFillData {}; struct GraphicAct { EGraphicActType type; union { ClearScreenData clearScreenData; MoveToData moveToData; LineToData lineToData; FillColorData fillColorData; EndFillData endFillData; } data; }; If you have access to Boost, you may use boost::variant. For this purpose C++ offers structs and classes. A C++ enum is a specific value out of a set of predefined values.
Edit: Here's an example of a possible solution: #include <memory> #include <vector> class Action { public: virtual ~Action() {} virtual void doAction() = 0; }; class ClearScreen : public Action { public: void doAction() override { /* do something */ } }; class MoveTo : public Action { public: int x, y; void doAction() override { /* do something */ } }; /* etc. for the other things you want to do */ int main() { std::vector<std::unique_ptr<Action>> actions; /* pointers, so the vector doesn't slice the derived objects */ actions.push_back(std::make_unique<ClearScreen>()); auto move = std::make_unique<MoveTo>(); move->x = 100; move->y = 100; actions.push_back(std::move(move)); for (auto& a : actions) { a->doAction(); } } Sorry, I'm just on the road and can't check if the code really works, but I think you should get the idea. That doesn't answer the question or provide a solution. This should be a comment instead of an answer. Correct. I added the example
Creating multiple new vectors from original vector I have a vector d and I create a new vector from d such that the new vector contains every mth element of d: v<-d[seq(1,length(d),m)] In the next step I would like to calculate the sum s<-sum(abs(v[1:(length(v)-1)]-v[2:length(v)])) It would be no problem if I did this with a few values of m. However, if I have more than 50 values for m, doing this one by one is not a desirable way. I thought of generalising the way of creating v for different values of m. The following is the code that I came up with: d1<-rnorm(20) m<-seq(1,10,1) v<-matrix(rep(0,length(d1)*length(m)),nrow=length(m)) for (i in 1:length(m)){v[i,]<-d1[seq(1,length(d1),m[i])]} but when I run the code, the following error appears: Error in v[i, ] <- d1[seq(1, length(d1), m[i])] : number of items to replace is not a multiple of replacement length So I tried another way: v<-rep(0,length(d1)) for (i in 1:length(m)){v<-d1[seq(1,length(d1),m[i])]} but this code only gives me the value of v at m=m[length(m)], not all vectors v corresponding with m[1] to m[length(m)]. Could anyone suggest a function/a way to solve this? Many thanks in advance It would help if you posted a (small and simplified!) example of the input data and desired output data. I think you are perhaps trying to do this: for (i in 1:length(m)){assign(paste0("v",i), d1[seq(1, length(d1), m[i])])} My advice ... don't. You should be learning to build lists: vres <- lapply( seq_along(m) , function(i) d1[ seq(1, length(d1), m[i])] ) Thanks @BondedDust, this works perfectly and I can do lots of things with vres. I really should learn how to build lists because they work better than loops in many cases. Thanks again for a great suggestion that helps improve my experience with R I try to avoid for loops and use the apply family instead. Your function generates a bunch of vectors of different lengths, which I combine into a list.
Both data frames and matrices have to be even, or non-ragged, so I have to pad your vectors with NAs, then I column-bind them. Does this work? set.seed(2001) d1 <- rnorm(20) m <- seq(1, 10, 1) ragged <- lapply(m, FUN=function(x) d1[seq(1, length(d1), x)]) maximum <- max(sapply(ragged, length)) even <- lapply(ragged, FUN=function(x) { xx <- rep(NA, maximum) xx[seq(length(x))] <- x xx }) even1 <- do.call(cbind, even) Thanks @RichardHerron, it perfectly creates the matrix even1 where each column is a new vector derived from d1. But when I try to perform the following: s<-apply(even1,2,sum), it only returns the sum of the first column, and returns NA for the sums of the other columns because they contain NA. So I tried to select all elements that are not NA with: s<-apply(even1,2,function(v) v[ !is.na(v) ] ) and applied s<-apply(even1,2,sum) again, but I still get the same result as before. @Ocean you need to pass na.rm = TRUE as an option to sum(). You could also use sapply(ragged, sum). @RichardHerron yes, silly me, it works now. Thanks for your help
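As a side note — my own sketch, not from the answers above — the sum the question computes for each stride can be written more compactly with R's diff(), working directly on the ragged list from this answer and avoiding the NA padding entirely:

```r
# diff(v) is v[2:n] - v[1:(n-1)], so this is exactly
# sum(abs(v[1:(length(v)-1)] - v[2:length(v)])) from the question,
# computed once per subsampled vector.
s <- sapply(ragged, function(v) sum(abs(diff(v))))
```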
Inserting MySQL values directly into PDF using x,y coordinates. Any suggestions for a resource to insert database values directly into a PDF using x,y coordinates, preferably in PHP? I have a form that needs to be populated with information contained in a database. The form cannot be altered in any way. A Google search gave me this: http://www.sitepoint.com/generate-pdfs-php/ and I was also able to find TCPDF. Will these work? Are there better solutions out there? Based on my experience I would suggest you try the FPDF library. It provides good features as well as examples for beginners. I am not sure I understand what you are trying to do. If you have a PDF with a form inside, and you are trying to fill the form with data and save the PDF with the filled form, try this here: http://koivi.com/fill-pdf-form-fields/ It's pretty self-explanatory. On the other hand, if you are trying to create a new PDF file, I can also recommend the FPDF library. As an added hint: I often came to the conclusion that it can also be very easy to make an HTML file, add the CSS for page breaks and then convert that into PDF at the client computer with a print-to-PDF function. It saves you from bending your output to fit the limitations of PDF libraries.
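To make the FPDF suggestion concrete — a hedged sketch, where the coordinates and the query result are made up for illustration: FPDF's Text() places a string at explicit x,y coordinates, measured in millimetres by default, which is exactly the "fixed form" positioning the question asks for.

```php
<?php
require('fpdf.php');   // the FPDF library suggested above

$pdf = new FPDF();
$pdf->AddPage();
$pdf->SetFont('Arial', '', 12);

// $value would come from your MySQL query
$value = 'Jane Doe';
$pdf->Text(50.0, 80.0, $value);   // print at x = 50 mm, y = 80 mm

$pdf->Output();
?>
```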
How to get for() number Well, all I really need is for a certain number to increase every time a for loop is completed. Here is my code: function TEST(){ for(i = 0; i < 4; i++){ var test = 1 test++ Logger.log(test) } } but the logger output only looks like: [14-02-24 17:53:19:100 PST] 2.0 [14-02-24 17:53:19:100 PST] 2.0 [14-02-24 17:53:19:100 PST] 2.0 [14-02-24 17:53:19:100 PST] 2.0 How can I resolve this? Thanks, it's probably just me not thinking correctly about for() loops :P ** SOLVED ** What I found is that you could just do var test = i hehe how silly of me :-) It may be beneficial to read the contents of your for loop out loud. This may pinpoint where the problem is so you can resolve it yourself. Declare your test variable outside the for loop. Otherwise, it keeps resetting to 1 on every iteration of the loop. function TEST(){ var test = 1 for(i = 0; i < 4; i++){ test++ Logger.log(test) } }
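The same fix in plain JavaScript, as a sketch — Logger.log is Apps-Script-specific, so this version collects the values in an array instead:

```javascript
function countUp() {
  const logged = [];
  let test = 1;               // declared once, before the loop
  for (let i = 0; i < 4; i++) {
    test++;                   // now the increment accumulates across iterations
    logged.push(test);        // stands in for Logger.log(test)
  }
  return logged;
}

console.log(countUp());       // logs [2, 3, 4, 5]
```

Declaring test inside the loop body re-creates it (as 1) on every pass, which is why the original code logged 2.0 four times.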
Belkin Bluetooth and Wireless USB Adapter I have a Windows XP (SP3) machine with a Belkin F8T012 USB adapter plugged into it. I also have a Belkin F8T031 Wireless USB Printer Adapter with an HP 5900 DeskJet plugged into it. Every time I try to print I get asked to enter the pairing code. I read in the documentation for the F8T012 that some devices use 0000 or 1234 as a default. Neither of these work. I used these same Belkin devices and printer on a Windows 2000 and a Windows Vista setup and had no problems; printing worked perfectly. I cannot get it to work on my Windows XP machine. I have uninstalled and re-installed the Bluetooth software (both ver <IP_ADDRESS>1 and <IP_ADDRESS>0). What else can I try? The following website has a listing of working and non-working Bluetooth devices. Your device Belkin F8T012 USB is listed on the page with a link next to it saying it needs an update to work under XP. Search for it on the page. I would go down this path to see if you can get it working. Next to it is a link to the Broadcom website to get the update for your device. http://wiibrew.org/wiki/List_of_Working_Bluetooth_Devices#Working_Bluetooth_Devices_on_XP.2FLinux.2C_but_are_not_compatible_with_Vista Awesome! This is the first time I've printed via this computer since I swapped it from Vista to XP over a year ago! BTW, I did get an error during the install that said the file BTW.msi was missing; I just hit OK and everything else worked. All is good. I'm placing this info here for future googlers. It comes from a reviewer at Amazon.com. Note the critical info about the passkey: This dongle works great, using only my Palm with Bluefish RX software. No complaints except one, and hopefully this will help some buyers. To initially set up your handheld device, when it asks for a passkey, which is a kind of password, THE PASSKEY IS "belkin" (all lower case).
Muting the Media Player at certain points in the game Hello: I want to be able to offer the option of muting the sound in the code below; the sound plays every time someone gets an answer correct and when someone gets one wrong. How can I mute this? Currently the sound plays even when my phone's volume is muted. Thanks in advance. if (enteredAnswer == answer) { MediaPlayer mediaPlayer = MediaPlayer.create(this, R.raw.correct); mediaPlayer.start(); } You can use setVolume(float, float): Sets the volume on this player. This API is recommended for balancing the output of audio streams within an application. Unless you are writing an application to control user settings, this API should be used in preference to setStreamVolume(int, int, int) which sets the volume of ALL streams of a particular type. Note that the passed volume values are raw scalars in range 0.0 to 1.0. UI controls should be scaled logarithmically. To mute the MediaPlayer you need to pass 0: mediaPlayer.setVolume(0, 0); To restore the full volume you need to pass 1: mediaPlayer.setVolume(1, 1);
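The in-game toggle the answer describes can be reduced to a tiny helper that maps a user "mute" preference to the scalar setVolume() expects. This is a hypothetical sketch (SoundSettings is my own name, and the MediaPlayer wiring is shown only in comments because Android classes aren't available off-device):

```java
// Maps the app's mute preference to the scalar that
// MediaPlayer.setVolume(left, right) expects: 0.0f = silent, 1.0f = full.
class SoundSettings {
    private boolean muted;

    void setMuted(boolean muted) { this.muted = muted; }

    float volume() { return muted ? 0.0f : 1.0f; }
}

// Inside the answer check it would be used roughly like this:
//
//   if (enteredAnswer == answer) {
//       MediaPlayer mediaPlayer = MediaPlayer.create(this, R.raw.correct);
//       float v = soundSettings.volume();
//       mediaPlayer.setVolume(v, v);
//       mediaPlayer.start();
//   }
```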
Writing a bash-like script in Fortran We can write simple bash scripts to ease our tasks in many ways, for example the bash script below. #!/bin/bash name="Vijay" echo "My name is $name." echo "$name does not like to play football" exit $? Can one achieve the same thing in Fortran? I tried the code below: program simple implicit none character(len=1024),parameter :: name="Vijay" write (name,"(A5)") print*, "My name is trim(name)" print*, "trim(name) does not like to play football" end But I get the following error message: Error: UNIT specification at (1) must be an INTEGER expression or a CHARACTER variable Appreciate any help to fix this problem. Thanks in advance This is a very basic question. I suggest reading an article about Fortran, such as http://en.wikipedia.org/wiki/Fortran_95_language_features, and looking at code examples. One hint: write (name,"(A5)") --> write (*, '(A)' ) name In addition to the response from M. S. B., the line: print*, "My name is trim(name)" will print out My name is trim(name) because Fortran does not do variable substitution in strings like bash does. What you want is: print*, "My name is "//trim(name) The // operator is for string concatenation in Fortran.
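Putting the two answers together, a corrected version of the whole program might look like the untested sketch below. The write into name was dropped entirely: name is a parameter (a constant), and the goal is just to print it.

```fortran
program simple
  implicit none
  character(len=1024), parameter :: name = "Vijay"

  ! Fortran does not substitute variables inside string literals the way
  ! bash does, so the value is concatenated with // instead.
  print *, "My name is " // trim(name) // "."
  print *, trim(name) // " does not like to play football"
end program simple
```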
How can I boot a specific ISO / Linux distribution image file with VMware I've downloaded all the archives of this specific Linux distribution, unzipped them, and then gave the extracted image file to VMware, but it says the operating system was not found. I have also tried to open it with: DiskInternals Linux Reader, IMG to ISO, MagicISO, OSFMount, UltraISO, WinImage because I wanted to make it bootable, but I haven't been able to do it. The error is always: Invalid or unknown image file format or file is not CD image file. I have also written the image to a pen drive using the Rufus tool and gave the partition to VMware as a physical disk, but it can't boot either. What can I do now? The author does not explain well what to do. He says only: Com o auxílio de um programa de clonagem de dados, copie o conteúdo do arquivo ISO para um pendrive. (With the help of a data cloning program, copy the contents of the ISO file to a USB stick.) My Portuguese is not so good, but don't they offer ISO files for download at the bottom of the page? If you have a bootable ISO file then you simply insert it in your virtual DVD drive and boot with it. I tried. It says: "the disk might be corrupted"... https://docs.google.com/document/d/1tvdzRvb6ZHxh0Zf41rkgsvU72RogCFmAt8Zt3I4dCAQ/edit
Freebase co-types search timeout I'm trying to find Freebase co-types, that is, given a type you find 'compatible' types: suppose you start with /people/person; it might be a musician (/music/group_member), but not a music album (/music/album). I don't know if in Freebase there is something like OWL's 'disjointWith' between types; anyway, in the MQL cookbook they suggest using this trick. The query in the example gets all instances of a given type, then gets all types of all instances and takes the unique set... this is clever, but the query times out... is there another way? For me even a static list/result is fine, I don't need the live query... I think the result would be the same... EDIT: The Incompatible Types type seems to be useful and similar to disjointWith, and may also be used with the suggest... Thanks! luca As Tom mentioned, there are some thresholds that you might want to consider to filter out some of the less notable or experimental types that users have created in Freebase. To give you an idea of what the data looks like I've run the query for all co-types for /people/person. /common/topic (100.00%) /book/author (23.88%) /people/deceased_person (18.68%) /sports/pro_athlete (12.49%) /film/actor (9.72%) /music/artist (7.60%) /music/group_member (4.98%) /soccer/football_player (4.53%) /government/politician (3.92%) /olympics/olympic_athlete (2.91%) See the full list here. You can also experiment with Freebase Co-Types using this app that I built (although it is prone to the same timeouts that you experienced). Wow, thanks for the results, I really need them!!! I'll filter out low frequencies as Tom suggested and also the /user/ namespace properties... How did you make the query? Data dump? Special permissions? I have the Freebase quad dump loaded in BigQuery (https://developers.google.com/bigquery/) which is great for these sorts of long-running, statistical queries. We're working on making the data accessible from BigQuery for external developers.
Stay tuned :) Freebase doesn't have the concept of disjointWith at the graph or schema level. The Incompatible Types base that you found is a manually curated version of that which may be used in a future version of the UI, but isn't today. If you want to find all co-types, as they exist in the graph today, you can do that using the query you mentioned, but you're probably better off using the data dumps. I'd also consider establishing a frequency threshold to eliminate low-frequency co-types so that you filter out mistakes and other noise. Thanks! I'm just looking at the data dump, but it is huge... I'll check it later... if there isn't any other solution, I think I will use owl:disjointWith (or similar) in my graph or even a simple array of 'compatible properties'...
Transaction management in a SOAP server implementation I have a standard JAX-WS web service set up that uses Spring for DI and Hibernate for persistence. Now in Spring you'd normally wrap your request in a custom filter and execute beginTransaction() and commit() / rollback() on the Hibernate session depending on whether the execution went through without errors or not. For the SOAP web client, however, this execution chain is not used and therefore no transaction management can easily be set up like this. Of course I also want to avoid wrapping each of my @WebMethod implementations in session.beginTransaction(); try { ... session.commit(); } catch (RuntimeException e) { session.rollback(); throw e; } so I looked into other possibilities. Apparently the JSR allows configuring SOAPHandlers (via @HandlerChain) that intercept SOAP traffic after incoming and before outgoing SOAP messages are sent, but I wonder if I am on the right track with that or doing it wrong... Do you guys know of other alternatives?
I went down the SOAPHandler route and it seems to work nicely: public class SOAPServiceHandler implements SOAPHandler<SOAPMessageContext> { @Override public void close(MessageContext mc) { } @Override public boolean handleFault(SOAPMessageContext smc) { MyHibernateUtil.rollbackTransaction(); return true; } @Override public boolean handleMessage(SOAPMessageContext smc) { Session session = MyHibernateUtil.getCurrentSession(); Boolean outbound = (Boolean) smc.get(MessageContext.MESSAGE_OUTBOUND_PROPERTY); if (!outbound) { session.beginTransaction(); session.setFlushMode(FlushMode.COMMIT); return true; } SOAPBody body = null; try { body = smc.getMessage().getSOAPBody(); } catch (SOAPException e) { } session.flush(); session.setFlushMode(FlushMode.AUTO); if (body == null || body.hasFault()) { session.getTransaction().rollback(); } else { session.getTransaction().commit(); } return true; } @Override public Set<QName> getHeaders() { return null; } } This is my applicationContext.xml: <?xml version="1.0" encoding="UTF-8"?> <beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:ws="http://jax-ws.dev.java.net/spring/core" xmlns:wss="http://jax-ws.dev.java.net/spring/servlet" xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.5.xsd http://jax-ws.dev.java.net/spring/core http://jax-ws.dev.java.net/spring/core.xsd http://jax-ws.dev.java.net/spring/servlet http://jax-ws.dev.java.net/spring/servlet.xsd"> <bean id="SoapServiceImpl" class="path.to.SOAPServiceImpl" /> <bean id="SoapServiceHandler" class="path.to.SOAPServiceHandler" /> <wss:binding url="/soap"> <wss:service> <ws:service bean="#SoapServiceImpl"> <ws:handlers> <ref bean="SoapServiceHandler" /> </ws:handlers> </ws:service> </wss:service> </wss:binding> </beans> Feedback welcome!
How do I create an incremental log during execution of my C# program using StreamWriter or similar? I'm hoping to have a simple function (not a class or library) that lets me dump information to a text file as I'm executing, but easier said than done. There are plenty of questions about stream writers, but somehow none of them are doing what I'm looking for: public partial class MainWindow : Window { // Make true for logging private bool diag = true; private StreamWriter logger = null; public MainWindow() { logger = new StreamWriter(regStuff.MyDocsPath() + @"\Paneless\log.txt"); } private void Log(string toLog) { if (diag) logger.WriteLine(toLog); } } In theory, with the above, I should be able to call "Log" with a string at any point in my code to have it dump a bit of diagnostic code to the text file. It doesn't work and I can't figure out why. From the guides, this seems correct. I've seen some that use a "using" statement, but do I really want to have a hundred of those one after the other? Seems... inefficient. Save yourself the headaches and time and just use a Logging Framework. ...but somehow none of them are doing what I'm looking for. You want to use a streamwriter to write to a file. A bit hard to believe that there are no guides on that. Maybe this helps you: https://stackoverflow.com/a/7306243/4921129 Unrelated: regStuff.MyDocsPath() + @"\Paneless\log.txt" - you may consider using Path.Combine ^^ But really: use a Framework. private bool diag = true; - you surely do not want to recompile and redeploy every time you need to switch logging on/off?? A framework might be superior, but it's also a heavy lift in learning how it works, making sure it doesn't cause unwanted behavior with my code and so on. I'll consider it if there's no simple way to write output to a file - unless you're telling me C# can't actually do that.
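One likely culprit — a hedged guess, since the full class isn't shown: StreamWriter buffers its output, so nothing reaches the file until the writer is flushed or disposed, and a long-lived window field is never disposed mid-run. Setting AutoFlush makes every WriteLine land on disk immediately:

```csharp
public MainWindow()
{
    // regStuff.MyDocsPath() is the helper from the question.
    logger = new StreamWriter(regStuff.MyDocsPath() + @"\Paneless\log.txt")
    {
        AutoFlush = true   // flush after every WriteLine, so the log fills as you run
    };
}
```

Alternatively, call logger.Flush() inside Log(), and dispose the writer when the window closes.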
how to pass a variable through a url I created a field in WP customizer and I'm trying to pull whatever value I enter from another file. Beforehand, this worked: private $endpoint = "http://www.website.com/webservice/GetSearchResults.htm?ws-id=12345"; and now I am trying to do something like this (I'm new at this): $website_api_key = echo get_theme_mod('website-api-key'); private $endpoint = "http://www.website.com/webservice/GetSearchResults.htm?ws-id=" . $website_api_key; How do I get this to work correctly? In what way is the result different from the version that worked? How specifically is this not working? Considering you have a visibility keyword there (private), it seems to be that $endpoint is a property, not a variable. The property's default value cannot be an expression. Set the value in the constructor if need be, also you'd have to figure out where $website_api_key lives. @David I'm getting a white screen on the front end doing it the way I'm trying now... You are setting the variable in the url correctly. Now you just have to grab it. When you visit a webpage, you are using a GET request, so these "url variables" can be grabbed through the $_GET array. All you have to do to accomplish this is do something like this: $myvar = $_GET['ws-id']; Here is a good article to give you more information on this so I write it like this? : $website_api_key = echo get_theme_mod('website-api-key'); $myvar = $_GET['website_api_key']; private $endpoint = "http://www.website.com/webservice/GetSearchResults.htm?ws-id=" . $myvar; @user3547342 Yes I believe that will work as long as get_theme_mod() returns the correct information.
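Pulling the comments together into one hedged sketch (the surrounding class isn't shown, so MyService is a hypothetical name): echo is a statement, not an expression, so it can't appear in an assignment, and a PHP property default can't contain a function call, so the value has to be built in the constructor:

```php
<?php
class MyService
{
    private $endpoint;

    public function __construct()
    {
        // get_theme_mod() is the WordPress call from the question; no echo here.
        $website_api_key = get_theme_mod('website-api-key');
        $this->endpoint = "http://www.website.com/webservice/GetSearchResults.htm?ws-id="
            . urlencode($website_api_key);
    }
}
```

urlencode() guards against keys containing characters that are special in query strings.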
Issues with binding structure layout in shaders translated from GLSL to HLSL I have a small issue with my shaders. Sorry if this is not the right place for this question. In general, the essence is that for my DX12 application I have shaders in GLSL (I'm too lazy to rewrite them because there are many, and they are not only for DX12), and I compile them into HLSL using SPIRV-Cross (I get approximately what I expect). My problem is as follows: I don't quite understand how to correctly set up the layout for bindings in shaders. If the binding is just a structure, everything is fine. But if it's an array of structures, everything breaks. On the C++ side, all my structures are aligned (alignas(16)). For example, I have a binding; in HLSL it looks like this: struct vert_in { float4 Pos; float4 Tangent; float4 Bitangent; float2 TexPos; uint Normal; }; ByteAddressBuffer _41 : register(t1, space0); // This is where vert_in is being used Initially vert_in was used in a storage buffer, so it was using std430 layout. I've tried adding padding (for example, I added one uint in the vert_in example), but it didn't really help. When I added padding to all array-of-structures bindings, nothing rendered at all. Apparently, in Vulkan/GLSL everything works fine. Either I don't understand something, or I'm just doing it wrong. I tried adding padding, but I guess it's not the right approach. If I understand the question correctly, you are vertex fetching from a shader resource view, instead of using a standard vertex buffer in a standard vertex/pixel shader pipeline. The vertex structure layout in this case does not need to be padded, but I think the issue is your use of a ByteAddressBuffer. Try changing to a StructuredBuffer instead. A ByteAddressBuffer on the other hand is best used for index fetching.
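To make the suggested change concrete — an untested sketch that keeps the register assignment from the generated code: declaring the binding as a StructuredBuffer of the element type lets the runtime apply the per-element stride itself, instead of you hand-computing byte offsets into a ByteAddressBuffer.

```hlsl
StructuredBuffer<vert_in> _41 : register(t1, space0);

// Elements are then fetched by index rather than by raw byte address:
// vert_in v = _41[vertex_index];
```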
Removing empty elements from an array/list in Python I have a big array under the name all_data_array: all_data_array = [[[], [], [], [], [], [], [], [], [], []], [[0, 1, 2, 3], ['foo', 'moo', 'bar', 'sis'], ['05-03-2014', '10-03-2014', '14-03-2014', '20-03-2014'], ['05-03-2014', '10-03-2014', '14-03-2014', '20-03-2014'], ['12:00', '12:03', '12:01', '12:01'], ['12:05', '12:08', '12:06', '12:06'], [123, 322, 345, 0], [1, 1, 1, 0], [1, 0, 1, 0], [0.1149597018957138, 0.920006513595581, 1.0062587261199951, 1.0062587261199951]]] In this example, inside all_data_array I have two arrays: 1. The 'empty' one: [[], [], [], [], [], [], [], [], [], []] 2. The filled one (which is very long), [[0, 1, 2, 3], ['foo', 'moo', 'bar', 'sis'], ['05-03-2014', '10-03-2014', '14-03-2014', '20-03-2014'], ['05-03-2014', '10-03-2014', '14-03-2014', '20-03-2014'], ['12:00', '12:03', '12:01', '12:01'], ['12:05', '12:08', '12:06', '12:06'], [123, 322, 345, 0], [1, 1, 1, 0], [1, 0, 1, 0], [0.1149597018957138, 0.920006513595581, 1.0062587261199951, 1.0062587261199951]] How can I remove from all_data_array all the empty arrays? The solution for this example is just all_data_array.pop(0) but I would like to have a general solution if possible. I tried something like this but it's not working and I'm a bit lost: for i in all_data_array: for m in xrange(len(all_data_array)): if m == []: print "EMPTY" else: print "WITH CONTENT" Thanks in advance replace if m == []: with if not m: What if one of the subelements is empty? possible duplicate of Remove all empty nested lists Use all: not_empty_ones = [] for el in all_data_array: if all(el): not_empty_ones.append(el) print not_empty_ones Since this is a list consisting of lists, you need to check whether each element inside is empty or not. This can be achieved with the all built-in. I recommend using a list comprehension for that: [el for el in all_data_array if all(el)], since excessive use of append takes a lot of memory.
+1 for tayfun for the use of 'all' and +1 for slallum for using list comprehension and pointing out the issue about append all will also exclude non-empty lists with values evaluating to false. e.g. [[0],[0]] will be excluded. If you want to keep such elements then I'd suggest using len(list(chain.from_iterable(array))). See my answer did you try this? all_data_array_no_empty = [x for x in all_data_array if all(x)] Why? list1 = [[],[1,2,3],[],[],['a','b']] list2 = [x for x in list1 if x != []] got [[1, 2, 3], ['a', 'b']] @AshwiniChaudhary exactly . @amit Have a look at OP's input first. @AshwiniChaudhary, you were right, didn't notice the list of empty lists in its input. 10x. this will do it despite the length of the empty array/list: from itertools import chain [array for array in all_data_array if len(list(chain.from_iterable(array))) > 0] if your lists are non-empty you get different results if you use all instead of len(list(chain.from_iterable(array))) > 0: >>> all_data_array = [[[0], []], [[1, 1, 1]]] >>> [l for l in all_data_array if len(list(chain.from_iterable(l))) > 0] [[[0], []], [[1, 1, 1]]] # <-- element [[0], []] will be included >>> [el for el in all_data_array if all(el)] [[[1, 1, 1]]] # <-- element [[0], []] will be excluded
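A slightly simpler variant — my own sketch, not from the answers above — uses any(): for a list of lists, an inner list is truthy exactly when it is non-empty, so this matches the chain.from_iterable length test while avoiding the flattening:

```python
from itertools import chain

all_data_array = [
    [[], [], []],     # all inner lists empty -> drop
    [[0], []],        # [0] is non-empty -> keep (the all() variant would drop it)
    [[1, 2], [3]],    # keep
]

# Keep a sub-array as soon as it has at least one non-empty inner list.
kept_any = [a for a in all_data_array if any(a)]
kept_chain = [a for a in all_data_array
              if len(list(chain.from_iterable(a))) > 0]

print(kept_any)                 # [[[0], []], [[1, 2], [3]]]
print(kept_any == kept_chain)   # True
```

Note the caveat: this equivalence holds for lists of lists; if the inner elements were plain values, any() would also drop falsy values like 0.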
When in history did people start seeing a girl's breasts as being inherently sexual? In our modern culture we always seem to take for granted the fact that girls are expected to cover their breasts, with the (sometimes...) only real exception being breast-feeding, in which case the girl is still expected to cover her nipple. This doesn't really make sense to me, as the only reason we see them as sexual is due to the fact we make them so elusive, similar to how Saudis are probably turned on by a girl's hair, yet they're both not inherently sexual. Breasts, unlike the anus, vagina and penis, do not expel waste, and aren't tools in intercourse, they're merely tools used to feed infants, yet they are still seen as equally taboo. When did breasts become such a taboo, and why? What was so wrong with them that they needed to be covered? And why did it seemingly happen in multiple cultures that rarely, if ever, interacted or shared ideas? If y'all are gonna downvote, could you at least tell me what made you so cross? This question relies on numerous opinions and assumptions ("seemingly", "wrong", "take for granted") and demonstrates no preliminary research. Different cultures have different opinions about the sexuality of the human breast, so the question is too broad to answer; any answer about any given culture is incomplete & incorrect. I am not convinced this is a question about history - possibly sociology, possibly some other field, but I doubt that historical sources and methods can resolve the question. k, might re-ask in a more specific way l8r, thx lad. :) We know this goes back before history actually for at least two reasons. They are depicted in exaggerated form in some of the earliest representational artwork we've found, way back in the Paleolithic era. Their size on human females is far larger than is either necessary for milk production, or found in our closest relatives (the other Great Apes).
The presumption has been that they have been selected for in our species due entirely to male preference for them. Cool, thanks lad. But this doesn't explain why we have such a taboo against them. Even if they share a common sexual preference, not all cultures share the same sexual taboo against breasts. @Tirous - Not every society has a taboo against them. Western society does, but that's likely due to the fact that Western society has a taboo against anything that primarily has to do with sex, and (as I mentioned in the answer) human breasts are the way they are primarily for sexual selection.
Cannot make changes to ionic.app.css after installing Sass? I'm working on an app in Ionic, and would like to customize the .CSS file that was generated by Sass after installing. When I open the .CSS file in my editor (Edge Code), make changes, and try to save the file I get: ...The permissions do not allow you to make modifications. I imagine there is a command I can type into the Terminal to grant this permission, but I'm not sure what. I'm pretty new to using CLI tools. Can someone please help? Thanks! Terminal for which OS? My laptop is running Yosemite Possible duplicate: http://stackoverflow.com/questions/13676191/how-to-modify-file-permissions Thank you. I tried the command sudo chmod +x www/css/ionic.app.css and entered my password, but nothing happened. When I look at the file in the terminal, it says root next to it instead of my username. Note: you should not edit the generated CSS files directly, since your changes will get overwritten. Nevertheless, to fix the permission issue, you need to set your user account as the owner of the file: sudo chown yourUser www/css/ionic.app.css Just replace yourUser with the short user name of your account. You can make changes (adding new class element styles or customizing Ionic's default styles) in scss/ionic.app.scss. Late but can be important for someone!