How to find the eigenvectors and eigenvalues of a Hermitian operator?
Question: While reading Theoretical Minimum by Leonard Susskind, I came across Exercise 3.4, where he asks to find the eigenvalues and eigenvectors of the matrix representing the $\sigma_{n}$ component of the operator $\vec\sigma$, where: $$\sigma_{n}=\vec\sigma\cdot\hat n$$ This leads to: $$\sigma_n=\begin{pmatrix}n_{z}&n_{x}-in_{y}\\n_{x}+in_{y}&-n_{z}\end{pmatrix}$$ given that: $$n_{z}=\cos\theta,\qquad n_x=\sin\theta\cos\phi,\qquad n_y=\sin\theta\sin\phi$$ How can I approach the question? I am a little bit confused. I searched online for the answer and found that we need to find the determinant of $$\begin{pmatrix}\cos\theta-\lambda&\sin\theta\cos\phi-i\sin\theta\sin\phi\\ \sin\theta\cos\phi+i\sin\theta\sin\phi&-\cos\theta-\lambda\end{pmatrix}$$ How does finding this determinant help solve the question, and where did the $\lambda$ come from in the first place?

Answer: From your post and the comments therein, it seems that you would benefit from reading a little more about linear algebra. There are plenty of resources for that, including some excellent posts on the Mathematics StackExchange. In this post, I will only answer your question about why finding this determinant allows one to compute the eigenvalues of a matrix. However, it's quite hard to answer such a question with high-school knowledge only, so I'll assume just a tad bit of knowledge about matrices (hopefully not much, but feel free to ask questions in the comments). We will need several facts:

Fact 1: If a matrix is not invertible, its determinant is zero. Conversely, if its determinant is non-zero, it is invertible.

Fact 2: If there is a non-zero vector $x$ such that $Mx$ is the zero vector, then $M$ isn't invertible. Conversely, if the only $x$ such that $Mx=0$ is $x=0$, then $M$ is invertible.
Fact 3: If there is a non-zero vector $x$ such that $Mx=\lambda x$ for a certain $\lambda$, then $\lambda$ is an eigenvalue of $M$.

With that being said, let us consider an eigenvalue $\lambda$ of a matrix $M$. From Fact 3 we have: $$Mx=\lambda x$$ which we can rewrite as: $$Mx=\lambda Ix$$ with $I$ being the identity matrix. This is equivalent to: $$(M-\lambda I)x=0$$ But now from Fact 2 we have that $M-\lambda I$ is not invertible, which means from Fact 1 that the determinant of $M-\lambda I$ is zero. Thus, we have shown that if $\lambda$ is an eigenvalue of $M$, then the determinant of $M-\lambda I$ is zero. Conversely, if $\lambda$ is not an eigenvalue of $M$, then we can show that $M-\lambda I$ is invertible. Suppose that: $$(M-\lambda I)x=0$$ Then it means that: $$Mx=\lambda x$$ But we have assumed that $\lambda$ is not an eigenvalue of $M$, so it necessarily means that $x=0$. Thus, from Fact 2, $M-\lambda I$ is invertible, which means from Fact 1 that its determinant is not zero. All in all, we have shown that the eigenvalues of $M$ are exactly the solutions to the equation $$\det(M-\lambda I)=0$$ where we want to solve for $\lambda$. If you want to learn more on this topic, you should read about the characteristic polynomial of a matrix. In your case, you will find that the determinant is actually a degree-2 polynomial in $\lambda$, so you should be able to find the eigenvalues quickly. Once you know an eigenvalue $\lambda$, there are several ways to compute the eigenvectors. The simplest is to solve for $x$ in $Mx=\lambda x$. Such an $x$ isn't unique, but you're interested in linearly independent solutions here. This might be a bit too much for a high-school level and I apologize for that. But to put it in a nutshell: the determinant of $M-\lambda I$ is called the characteristic polynomial of $M$, and its roots are exactly the eigenvalues of $M$.
Using the definition of an eigenvalue, it is possible to find an eigenvector associated with a known eigenvalue. You should definitely get yourself an introductory course on linear algebra if you're interested in this topic; you'll learn a lot!
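To make the recipe concrete, here is a small numerical sketch (my addition, not part of the original answer) using NumPy: for an arbitrary direction $(\theta,\phi)$, the characteristic polynomial works out to $\det(\sigma_n-\lambda I)=\lambda^2-1$, so its roots are $\pm 1$, and they agree with the eigenvalues computed directly.

```python
import numpy as np

# Build sigma_n for an arbitrary direction (theta, phi)
theta, phi = 0.7, 1.3
nx = np.sin(theta) * np.cos(phi)
ny = np.sin(theta) * np.sin(phi)
nz = np.cos(theta)
sigma_n = np.array([[nz, nx - 1j * ny],
                    [nx + 1j * ny, -nz]])

# Eigenvalues of the Hermitian matrix, computed directly
print(np.round(np.linalg.eigvalsh(sigma_n), 10))  # ascending order: [-1.  1.]

# The characteristic polynomial det(sigma_n - lambda*I) = lambda^2 - 1
# vanishes exactly at lambda = +1 and lambda = -1:
for lam in (1.0, -1.0):
    print(abs(np.linalg.det(sigma_n - lam * np.eye(2))))  # ~ 0
```

Because $\sigma_n$ is Hermitian and squares to the identity, the eigenvalues come out as $\pm 1$ for every choice of $(\theta,\phi)$.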
{ "domain": "quantumcomputing.stackexchange", "id": 5364, "tags": "quantum-state, textbook-and-exercises, linear-algebra" }
Radiometric dating data sets
Question: I am in the process of learning what sort of data is collected with radiometric dating techniques, used for absolute dating. It sounds like there are two primary ones: radiocarbon dating (~50k year scale) and potassium-argon dating (billion year scale). Then in between is tree ring dating, but that is a separate thing. And for completeness, there is also a sort of luminescence dating technique which I have never heard of. In this question I am just wondering about the radiometric techniques. There is also uranium-lead dating, but I don't see much info on that one, so I'm focusing here just on carbon and potassium-argon. First, I am wondering if there are any data formats used for storing the data (which will help with search). I searched around a bit but didn't find anything. Second, I'm wondering if there are any data repositories containing radiometric datasets, just to get started. And third, if there aren't any "centralized repositories" of this sort of data, and instead it is by individual author in their own style, that would be good to know, just so I don't think I'm missing something obvious. At first I found this from here:

GEOGRAPHIC REGION: Tropical Pacific
DESCRIPTION: Pacific Bomb Radiocarbon Coral Data. Uva island, Gulf of Chiriqui, Panama (7°48'N, 81°45'W), CUVA; Uva island, Gulf of Chiriqui (7°48'N, 81°45'W). Druffel, ERM (1987). JMR, 45:667-698
Gardinesoseris planulata collected July 1980. Collected by: P. Glynn

WH#   YEAR        Δ14C   ERROR
42    1950-1951   -64    ±3-4‰ for the whole data set
43    1952-1953   -59
68    1952-1954   -58
69    1955        -51
70    1956        -46
71    1957        -47
72    1958        -45
73    1959        -37
74    1960        -42
75    1961        -24
115   1962        -26
109   1963        -22
110   1964        -23
106   1965         29
112   1966         46
108   1967         34
111   1968         66
113   1969         74
114   1970         70
96    1972         75
94    1973         67
99    1974         70
92    1975         56
95    1976         43
93    1977         54
91    1978         52
97    1979         74
98    1980         73

But that doesn't look like much: basically an integer value for each year.
I'm wondering if there is typically a lot more data, such as you would find in mass spectrometry or crystallography, or if it is literally just an integer or decimal number without anything else (like probability of correctness or other things). That is, I'm wondering if the data is much more complicated and would look along those lines. I also found this, which I haven't looked too much into yet. And this (which unfortunately is a PDF), which has a bunch of maps, so it seems like the data would perhaps be some sort of GIS shapefile. If it matters, I am interested in particular in fossil radiometric data.

Answer: "It sounds like there are two primary ones" — no, these are not the two "primary ones". The method used depends on what you are dating and what age you expect it to be. Radiocarbon dating is relevant to things younger than a few tens of thousands of years, and it's only relevant for things that were living (or growing) and incorporated atmospheric carbon. Other methods such as U–Pb (the "gold standard" when dating igneous rocks), Rb–Sr, Re–Os, Sm–Nd, K(Ar)–Ar, Lu–Hf, and many more, are used for older things in the millions to billions of years. The one used depends on what you're analysing (Re–Os for sulfide minerals, U–Pb for zircons, Rb–Sr for micas, Sm–Nd or Lu–Hf for garnets), what is present in the rock, and what has textural evidence of being preserved. You might call U–Pb on zircon the "primary" one, because most of the dating is done on zircon: it's robust, easy, quick, relatively cheap, and gives good ages. "First, I am wondering if there are any data formats used for storing the data (which will help with search)" — most U–Pb data is reduced using IsoPlot, so the data will look like something that came out of that software. But in general, no: there is unfortunately no standardised data format for geochronology.
"And third, if there's not any 'centralized repositories' of this sort of data" — there are several of those. You can search EarthChem, which has some geochronological data. Australian government agencies also host some geochronology data, for example GA or NTGS. "If it matters, I am interested in particular in fossil radiometric data" — fossils are among the hardest things to date. If the fossils are relatively young (a few thousand years), there's radiocarbon. Anything older than that becomes much harder. There's the U–Th disequilibrium series, which may work for some slightly older fossils (~1 million years). Anything older than that, in deep geological time, has to be dated with indirect methods. Finding a database of those is probably wishful thinking, and to understand such dates you need some geological background to follow the geological relations and considerations that allow the indirect dating.
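Since there is no standard format, a practical first step is often to normalise records like the coral Δ14C table in the question into a flat tabular file yourself. The sketch below is my addition, and the field names are hypothetical (not from any standard); it writes a few of the rows to CSV with the shared metadata repeated per row:

```python
import csv
import io

# A few rows of the coral record from the question:
# (sample_id, year_range, delta14C in permil)
rows = [(42, "1950-1951", -64), (43, "1952-1953", -59), (68, "1952-1954", -58)]
# Metadata shared by the whole data set
site, error_permil = "Uva Island, Gulf of Chiriqui, Panama", "3-4"

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["sample_id", "year_range", "delta14C_permil", "error_permil", "site"])
for sample_id, year_range, d14c in rows:
    writer.writerow([sample_id, year_range, d14c, error_permil, site])

print(buf.getvalue())
```

A flat file like this at least makes such records searchable and mergeable; whether you need richer structure depends on whether the instrument-level analytical detail matters for your use.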
{ "domain": "earthscience.stackexchange", "id": 1594, "tags": "open-data, geochronology, radioactivity, dating" }
Difference between Trophic and Tactic movements
Question: What is the difference between trophic (e.g. Chemotrophic) and tactic (e.g. Chemotactic) movements? In bryophytes, antherozoids are attracted towards archegonia; this is chemotactic movement. In spermatophytes, the pollen tube moves towards the ovule; this is due to Chemotrophic movement. This was taught in our school today. But to me, in both cases the male part is attracted to the female part using certain chemicals. So what is the difference between them? Broadly speaking, what is the difference between tactic and trophic movements? Answer: I think that you got a bit confused about terminology. In fact, a chemotroph is an organism that obtains energy from organic or inorganic molecules. In your case, I think what you want to know is the difference between chemotropism and chemotaxis. Chemotropism: "Chemotropism is the growth of organisms (or parts of an organism, including individual cells), such as bacteria and plants, navigated by a chemical stimulus from outside of the organism [...] An example of chemotropic movement can be seen during the growth of the pollen tube, where growth is always towards the ovules." Chemotaxis: "Chemotaxis (from chemo- + taxis) is the movement of an organism in response to a chemical stimulus. Somatic cells, bacteria, and other single-cell or multicellular organisms direct their movements according to certain chemicals in their environment. This is important for bacteria to find food (e.g., glucose) by swimming toward the highest concentration of food molecules, or to flee from poisons (e.g., phenol). In multicellular organisms, chemotaxis is critical to early development (e.g., movement of sperm towards the egg during fertilization) and subsequent phases of development (e.g., migration of neurons or lymphocytes) as well as in normal function." Hence, to sum up, (chemo)taxis is the physical movement of a cell (or an organism) in response to a (chemical) stimulus, as done by the antherozoids that are attracted towards the archegonia.
However, (chemo)tropism is the growth of an organism toward a (chemical) signal, as in spermatophytes, where the pollen tube moves (i.e. grows) towards the ovule. I hope this explanation helps clarify things.
{ "domain": "biology.stackexchange", "id": 7673, "tags": "plant-physiology, movement" }
Bash - script to encrypt pdf files using pdfencrypt
Question: I wrote a small script to get around the issue that when using pdfencrypt on the command line, the password is always echoed as well as being saved to the history.

#!/bin/bash
export LC_ALL=de_DE.latin1

while true; do
    echo "please select Pdf input file:"
    read -e -r infile
    if file --mime-type "$infile" | grep -q pdf$; then
        echo
        break
    else
        printf "\nis not a PDF file :-(\n\n"
    fi
done

echo "please type in Pdf output filename"
echo "without extension (.pdf):"
read -e -r outfile
if [ -f "${outfile}.pdf" ]; then
    echo
    echo "File allready exists!";
    echo "please enter a new filename, or the same to override: "
    read -e -r outfile
fi

echo "please type in Password"
echo "(only ASCII chracters recommended)"
read -r -s
pdfencrypt "$infile" -p "$READ" -o "${outfile}.pdf" \
    && echo "success, ${outfile}.pdf is encrypted :-)" \
    || echo "failed, did you use non ASCII characters for Password?"
exit

I had a problem with the following non-ASCII characters in the password: § ä ö ü etc. When I used any of those for encryption I got the following error:

Encoding::CompatibilityError: incompatible character encodings: UTF-8 and ASCII-8BIT

I solved this by including export LC_ALL=de_DE.latin1 in the script. Though there still are some compatibility problems when using any of those characters, so I added a recommendation not to use non-ASCII characters in the password. (I know that PDF encryption itself is pretty low-level security, but I sometimes like to add this level when sending files via e-mail etc.)

Answer:

Usability

Typing filenames on standard input is extremely annoying. It would be better if the script took the input filename as a command line argument. That way users can benefit from path completion in the shell.
Simplify

Instead of this:

while true; do
    echo "please select Pdf input file:"
    read -e -r infile
    if file --mime-type "$infile" | grep -q pdf$; then
        echo
        break
    else
        printf "\nis not a PDF file :-(\n\n"
    fi
done

I would eliminate the else branch, and also simplify the echo-ing:

while true; do
    echo "please select Pdf input file:"
    read -e -r infile
    echo
    file --mime-type "$infile" | grep -q pdf$ && break
    echo "is not a PDF file :-("
    echo
done

Redundant exit

The exit at the end of the script is redundant; I suggest removing it.
{ "domain": "codereview.stackexchange", "id": 27070, "tags": "beginner, security, bash, cryptography, pdf" }
Collection with Generic Index
Question: I've just done something that works, but I have the feeling it's not correct. I made a class derived from List. I use a generic type to search the list, using one of the properties of the list's element type as the index. Does this look right to you?

public class IndexableCollection<TCollection, TIndex> : List<TCollection>
{
    public IndexableCollection() : base() {}

    public TCollection this[TIndex index]
    {
        get
        {
            return base.Find(t => t.GetType().GetProperty(index.GetType().Name).GetValue(t).Equals(index));
        }
    }

    public List<TCollection> GetList()
    {
        return this as List<TCollection>;
    }
}

The intention is to have a collection where I can do this:

public class MyClass
{
    public int MyIntProperty { get; set; }
    public string MyStringProperty { get; set; }
    public MyEnumProperty MyEnumProperty { get; set; }
}

public void Main()
{
    var myCollection = new IndexableCollection<MyClass, MyEnumProperty>();
    var element = myCollection[MyEnumProperty.Option1];
}

Answer:

The problem

There is a massive hole in your logic here. You are relying on the fact that there exists a property whose name is equal to the name of its type:

public MyEnumProperty MyEnumProperty { get; set; }

Your class would become unusable if I changed this line to:

public MyEnumProperty MyProperty { get; set; }

To explain the issue in detail, the rest of my answer will assume that you're using this second case, where the type and name do not match. The problem is here:

t => t.GetType()
      .GetProperty(index.GetType().Name) // <== HERE
      .GetValue(t)
      .Equals(index)

GetProperty retrieves a property based on its name (MyProperty). So, what name are you using? index.GetType().Name — the name of the type (MyEnumProperty), which is not the same as the MyProperty name that you're looking for.
Let's look at another use of your class, where we will use MyIntProperty as the index:

var myCollection = new IndexableCollection<MyClass, int>();
var element = myCollection[5];

Going by the intention of your code, this should be possible. After all, you have made no specific requirements for the given TIndex, which means that your class needs to work for all possible types. But it doesn't, because when you try to access myCollection[5], you get:

t => t.GetType()
      .GetProperty("Int32")
      .GetValue(t)
      .Equals(index)

There is no property by the name of Int32, so it does not work.

The solution

In essence, you are trying to define the indexed property by its type alone, which is a dangerous thing to do. We could rewrite the code to search MyClass for the first property of the given type TIndex. However, what will you do if MyClass has multiple properties of that same type? You'd be stuck guessing at the intended index, and that's not a good solution either. What we're missing is the definition of the exact property you want to use as an index. You're trying to find it by its type alone, and that is not enough. Instead, you need a way to select the correct property. My solution: the lambda. You need a mapping between the element (TElement) and its intended index (TIndex). This is the perfect situation to use a lambda of type Func<TElement, TIndex>. I updated your code to a working example with lambdas.
I slightly tweaked your code for the sake of example:

public enum MyEnum { One, Two, Three }

public class MyClass
{
    public int MyIntProperty { get; set; }
    public string MyStringProperty { get; set; }
    public MyEnum MyEnumProperty { get; set; }
}

The collection class:

public class MyCollection<TElement, TIndex> : List<TElement>
{
    private Func<TElement, TIndex> _indexMapping { get; set; }

    public MyCollection(Func<TElement, TIndex> indexMapping)
    {
        _indexMapping = indexMapping;
    }

    public TElement this[TIndex index]
    {
        get
        {
            return base.Find(t => _indexMapping.Invoke(t).Equals(index));
        }
    }
}

The only real difference is that we now store a Func<TElement, TIndex> _indexMapping. This gives us the mapping between the element and its index. This has the interesting side effect that you can't use a TIndex that doesn't exist as a property in TElement (unless you supply a constant value unrelated to the element, but then you're being silly).

base.Find(t => _indexMapping.Invoke(t).Equals(index));

When iterating over the list, we simply invoke the mapping (on the element t), which returns the value of its intended index. All we have to do then is check whether the value matches the index that was given as a parameter, which is easily done by calling .Equals(). Example usages:

// Enum as index
var myCollection = new MyCollection<MyClass, MyEnum>(o => o.MyEnumProperty);
var element = myCollection[MyEnum.One];

// Int as index
var myCollection = new MyCollection<MyClass, int>(o => o.MyIntProperty);
var element = myCollection[12345];

// String as index
var myCollection = new MyCollection<MyClass, String>(o => o.MyStringProperty);
var element = myCollection["Hello"];

Tangential comments

If your index is a custom class, make sure to implement the correct equality comparison! The default behavior (checking whether it's the same object in memory) may or may not be what you want. There's still an issue when your collection contains multiple items with the same index.
You'll always receive the first matching item; the others will essentially be invisible (until you remove the first item). Again, depending on your requirements, that might be intentional behavior, or it might not be. If you want to ensure that you only have unique items in the list, and that no duplicate indexes exist, you will need additional logic to validate new entries to the list. As an aside, I would use TElement instead of TCollection, as the generic type refers to the elements of the collection (MyClass), not the collection itself (which is a List). Generic type names, e.g. TFoo, should be read as "type of the Foo". This applies to your TIndex (type of the index), and TElement would be similarly correct.
{ "domain": "codereview.stackexchange", "id": 28280, "tags": "c#, generics" }
Heisenberg Uncertainty Principle and Telescopes
Question: I've heard an analogy on the news regarding the Webb telescope. It said Webb's resolution is such that it would be able to locate from Earth a bumblebee on the Moon. I understand that it is a space-based telescope and will never view the Moon from Earth. My question is: will the uncertainty principle be violated by this telescope? I must confess I can't do the math, and I do not expect anyone else to do it for me. But what are your thoughts regarding the tremendous resolution of the telescope and the uncertainty principle? Am I way off-base on this? Answer: No. The uncertainty principle is a result that can be derived from physical optics. The calculation for the diffraction limit of a telescope is mathematically equivalent to the uncertainty principle. In more detail, the Heisenberg uncertainty principle is nothing more than a relationship between the central second moments of distributions that are Fourier duals of each other. Take the Heisenberg uncertainty principle: $$\sigma_p \sigma_x \ge \frac{\hbar}{2}$$ Now square both sides and divide by $\hbar^2$ to get: $$\sigma_k^2 \sigma_x^2 \ge \frac{1}{4},$$ where the wave number is defined by $\mathbf{k} = \mathbf{p}/\hbar$. Go read the proof of the Heisenberg uncertainty principle on Wikipedia - it's derived entirely using the properties of Fourier transforms. The only difference is whether you introduce an unnecessary scaling of $\hbar$ to the wave number.
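As a quick numerical illustration of the Fourier-duality point (my addition, not part of the original answer): for a Gaussian wave packet, the product of the second-moment widths in position and wavenumber saturates the bound, $\sigma_x\sigma_k = 1/2$.

```python
import numpy as np

# Gaussian wave packet sampled on a grid; check that sigma_x * sigma_k = 1/2
N = 2**14
x = np.linspace(-50, 50, N)
dx = x[1] - x[0]
s = 2.0                                        # width parameter: |psi|^2 has std dev s
psi = np.exp(-x**2 / (4 * s**2))
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)    # normalize

# Second moment (width) in position space
sigma_x = np.sqrt(np.sum(x**2 * np.abs(psi)**2) * dx)

# Discrete Fourier transform as a stand-in for the continuous one
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)
phi = np.fft.fft(psi) * dx / np.sqrt(2 * np.pi)
dk = 2 * np.pi / (N * dx)
norm_k = np.sum(np.abs(phi)**2) * dk           # ~ 1 by Parseval's theorem
sigma_k = np.sqrt(np.sum(k**2 * np.abs(phi)**2) * dk / norm_k)

print(round(sigma_x * sigma_k, 3))  # 0.5, the minimum-uncertainty product
```

A non-Gaussian aperture or wave packet gives a product strictly greater than $1/2$; the inequality is a property of the Fourier transform pair alone, with no $\hbar$ anywhere.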
{ "domain": "physics.stackexchange", "id": 35165, "tags": "heisenberg-uncertainty-principle, telescopes" }
Physical interpretations of the generating functions $Z[J]$ and $W[J]$ (or $E[J]$)
Question: In quantum field theory, the generator of all Green's functions $Z[J]$ and that of the connected Green's functions $E[J]$ are related as $$Z[J]=\exp[-iE[J]]=\int D\phi\exp[i\int d^4x(\mathcal{L}(\phi)+J(x)\phi(x))] \tag{11.43}$$ From this, how can we arrive at or understand the following statements in Peskin and Schroeder (page 365, eqn. 11.43): (i) "The RHS of the equation above is the functional integral representation of the amplitude $\langle\Omega|e^{-iHT}|\Omega\rangle$, where $T$ is the time extent of functional integration, in presence of the source $J$." (ii) "$E[J]$ is just the vacuum energy as a function of the external source $J$." Answer: By definition, $H|\Omega\rangle = E|\Omega\rangle$, so that $\langle \Omega |e^{-i T H}|\Omega\rangle=e^{-i T E}$. The presence of source terms in the Hamiltonian does not change anything about that. The RHS of the equation 11.43 is just the functional integral rewriting of $\langle \Omega| e^{-i T H}|\Omega\rangle$.
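To spell out the link between statements (i) and (ii) in one line (my addition; the usual $T\to\infty(1-i\epsilon)$ limit that projects onto the true vacuum is left implicit, as in Peskin and Schroeder): writing $E_0(J)$ for the ground-state energy of the Hamiltonian including the source term,

$$\langle\Omega|e^{-iHT}|\Omega\rangle = e^{-iE_0(J)\,T} \equiv e^{-iE[J]},$$

so $E[J] = E_0(J)\,T$: up to the overall factor of the time extent $T$, $E[J]$ is precisely the vacuum energy as a function of the external source $J$.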
{ "domain": "physics.stackexchange", "id": 31072, "tags": "quantum-field-theory, feynman-diagrams, greens-functions, partition-function, 1pi-effective-action" }
DDD modeling for a User Voice-like system
Question: I've taken on the challenge of modeling a simple User Voice-like system. High-level description: It's a portal for some SaaS users; They come and leave feature requests, suggestions, etc.; They should be able to vote/unvote for any suggestions; They can leave as many comments as they like on suggestions; Comments may be removed by the owner, but may not be edited. I've modeled the domain as follows, using a DDD approach. Please advise about mistakes, warnings, improvements, etc. I've also applied the advices from these posts: Don't create aggregate roots Creating new aggregates in DDD Link to an aggregate: reference or Id? public abstract class Entity { public Guid Id { get; protected set; } = Guid.NewGuid(); } public class User : Entity // Aggregate Root { public string Key => $"{Email}:{MarketplaceUrl}"; public string Name { get; } public string Email { get; } public string MarketplaceName { get; } public Uri MarketplaceUrl { get; } internal User(string name, string email, string marketplaceName, Uri marketplaceUrl) { Name = name; Email = email; MarketplaceName = marketplaceName; MarketplaceUrl = marketplaceUrl; } public Suggestion MakeSuggestion(string text) { return new Suggestion(this, text); } } public class Suggestion : Entity // Aggregate Root { public string Text { get; /* a suggestion cannot be altered */ } public User ByUser { get; } public DateTime SuggestedAt { get; } public ICollection<Comment> Comments { get; } = new List<Comment>(); public ICollection<Vote> Votes { get; } = new List<Vote>(); internal Suggestion(User byUser, string text) { ByUser = byUser; Text = text; SuggestedAt = DateTime.UtcNow; } public Comment AddComment(User byUser, string text) { var comment = new Comment(byUser, text); Comments.Add(comment); return comment; } public void RemoveComment(Comment comment, User userRemovingComment) { Comments.Remove(comment); } public void Unvote(User byUser) { var vote = Votes.SingleOrDefault(v => v.ByUser == byUser)); if (vote != 
null) Votes.Remove(vote); } public Vote Vote(User byUser) { if (Votes.Any(v => v.ByUser == byUser)) throw new CannotVoteTwiceOnSameSuggestionException(); var vote = new Vote(byUser); Votes.Add(vote); return vote; } } public class Comment : Entity { public string Text { get; /* a comment cannot be changed */ } public User ByUser { get; } public DateTime CommentedAt { get; } internal Comment(User byUser, string text) { ByUser = byUser; Text = text; CommentedAt = DateTime.UtcNow; } } public class Vote : Entity { public User ByUser { get; } public DateTime VotedAt { get; } internal Vote(User byUser) { ByUser = byUser; VotedAt = DateTime.UtcNow; } } public interface IUserVoiceStore { Task AddUserAsync(User user); Task AddSuggestionAsync(Suggestion suggestion); Task<Suggestion> GetSuggestionAsync(Guid id); Task<User> GetUserAsync(Guid id); // For when comments and votes are added/removed to/from a suggestion. Task UpdateSuggestionAsync(Suggestion suggestion); } public class UserVoiceService { private readonly IUserVoiceStore store; public UserVoiceService(IUserVoiceStore store) { this.store = store; } public async Task<User> RegisterUserAsync(string name, string email, string marketplaceName, Uri marketplaceUrl) { var user = new User(name, email, marketplaceName, marketplaceUrl); await store.AddUserAsync(user); return user; } } public class CannotVoteTwiceOnSameSuggestionException : Exception { } public class CannotRemoveCommentFromAnotherUserExcetion : Exception { } Answer: The design looks solid, a few thoughts though: I would split IUserVoiceStore into more granular UserRepository and SuggestionRepository. Also, UpdateSuggestionAsync() seems to indicate you can only update a suggestion and nothing else at a time, which can be limiting. It also IMO goes out of a repository's jurisdiction to flush a specific object to persistent storage. 
Using some kind of separate unit of work class where you can put multiple objects to be updated as part of a business transaction might be a better idea. Not always feasible, but maybe change the type of link between Suggestion and User from a full reference to just an ID to avoid the temptation of manipulating 2 aggregate roots at the same time. (I don't agree with the article you linked to in that regard) Keep an eye on suggestions with a large number of comments - depending on concurrent access, they can clog up your system and cause locking problems, especially if comments become more sophisticated, with images and so on.
{ "domain": "codereview.stackexchange", "id": 32346, "tags": "c#, ddd" }
Confused about Ramsey technique
Question: Assume we have a two-level system with frequency $\omega_0$ between the two levels, and ignore the lifetime of the upper state. In order to measure the transition frequency using the Ramsey technique, you apply two $\pi/2$ pulses separated by a time $T$. The probability of finding an atom in the excited state under the interaction with light (assuming a strong field, i.e. not using perturbation theory) is given by: $$\frac{\Omega^2}{W^2}\sin^2(Wt/2)$$ where $\Omega$ is the Rabi frequency and $W^2=\Omega^2+(\omega-\omega_0)^2$, with $\omega$ being the laser frequency (which we scan in order to find $\omega_0$). If we are on resonance, i.e. $W=\Omega$, a $\pi/2$ pulse is a pulse of duration $t=\frac{\pi}{2\Omega}$ such that, starting in the ground state, the atom becomes an equal superposition of upper and lower states. However, in the Ramsey technique we don't know $\omega_0$ beforehand (this is what we want to measure). So I am not sure I understand how we can still create an equal superposition of upper and lower levels using a $\pi/2$ pulse. Assuming we use a pulse of duration $t=\frac{\pi}{2W}$, from the equation above we get that the population of the upper state is $\frac{\Omega^2}{2W^2}$, which is not $1/2$ as in the resonance case. How do we still get an equal superposition of upper and lower states when we are not on resonance? Thank you. Answer: You are correct that the Ramsey sequence should be applied close to resonance, but it does not need to be perfectly on resonance; it only needs to be close to resonance relative to the Rabi frequency used for the $\pi/2$ pulses. The Ramsey technique gives a measurement signal that oscillates as a function of the gap time $T$ between $\pi/2$ pulses, where the oscillation frequency is given by the detuning $\omega-\omega_0$ and the contrast is determined by the weights of the superposition produced by the $\pi/2$ pulse.
In the ideal case (maximal contrast), the $\pi/2$ pulse would transfer 50% of the population to the second state; however, the signal is still usable when the $\pi/2$ pulse is imperfect. In practice, as long as $|\omega - \omega_0| \lesssim \Omega$, you find from the expression you wrote, $\frac{\Omega^2}{W^2} \sin^2(Wt/2)$, that close to 50% of the population will be transferred. If you know approximately where the resonance is, to within a bandwidth of $\sim \Omega$, then Ramsey can be used to find the resonance more precisely (ultimately limited by the longest gap time $T$ that can be used). If you don't have any idea where the resonance is, then you can first find it by applying an approximate $\pi$ pulse over a wider range of frequencies and finding which frequency gives the largest population transfer.
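A quick numerical check of this claim (my addition, not part of the original answer): evaluating the Rabi formula from the question for a nominal $\pi/2$ pulse of fixed duration $t=\pi/(2\Omega)$ shows that the transferred population stays near 50% for $|\delta| \lesssim \Omega$ and only collapses at large detuning.

```python
import numpy as np

# Transferred population P = (Omega^2 / W^2) * sin^2(W t / 2) for a pulse of
# fixed duration t = pi / (2 Omega), vs detuning delta = omega - omega_0
Omega = 1.0
t = np.pi / (2 * Omega)

def transferred(delta):
    W = np.sqrt(Omega**2 + delta**2)
    return (Omega**2 / W**2) * np.sin(W * t / 2)**2

for delta in [0.0, 0.3, 1.0, 3.0]:
    print(f"delta/Omega = {delta:.1f}:  P = {transferred(delta):.3f}")
```

On resonance this gives exactly $P=0.5$; at $\delta=0.3\,\Omega$ it is still about $0.49$, while at $\delta=3\,\Omega$ it has dropped below $0.04$, which is the sense in which the resonance only needs to be known to within a bandwidth of $\sim\Omega$.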
{ "domain": "physics.stackexchange", "id": 74818, "tags": "atomic-physics, spectroscopy" }
Critical disorder strength for the Anderson model (for a discrete system)
Question: I have written the hopping matrix of a lattice in real space (in real space because i have disorder in the system and hence bloch theory is not valid). On-site disorder has been introduced through a box distribution function. I have followed exact diagonalization technique for studying the Anderson localization phenomenon. I have plotted eigenvalues vs DOS. In eigenvalues vs DOS plot as i am increasing values of W (disorder) the band is broadning. Further I have E vs IPR which is suggesting the transition from metallic to insulating state. I am not able to find, that exactly for which value of W (disorder) transition is occuring. In simple words I want to find the critical value of disorder strength. Can anybody help that how to proceed for that? Answer: The states which are most difficult to localise (i.e. they require the largest disorder strength to be localised) are those in the middle of the spectrum; also, the system becomes completely localised when all states in the system are localised. So if you want to proceed with exact diagonalisation (ED) and inverse participation ratios (IPRs), compute the IPRs of the states in the middle of the spectrum, average the IPRs over states and disorder, and do a finite-size scaling procedure. In more detail, let us define the IPR in a $d$-dimensional cubic lattice with linear size $L$ for an eigenstate $| E \rangle$ with energy $E$ as [1] \begin{equation} \mathcal{P}_E = \sum_{j=1}^{L^d} \left|\left\langle j | E \right\rangle\right|^4; \end{equation} the index $j$ runs over sites. For arbitrary disorder strength, it scales as \begin{equation} \mathcal{P}_E \sim L^{-D_2}, \end{equation} where $D_2=d$ in the metallic (fully delocalised) state and $D_2=0$ in the insulating (fully localised) state [2]. Now compute the average $\overline{\mathcal{P}_E}$ taken over states in the middle of the spectrum and disorder realisations: \begin{equation} \frac{\ln \overline{P_E} }{\ln L} \sim -D_2. 
\end{equation} If you plot it against disorder strength for several system sizes, you'll see it grow towards $d$ in the delocalised phase and decrease towards $0$ in the localised phase. The simplest estimate of the critical disorder strength is given by the intersection of the curves for different system sizes. A very similar analysis is carried out in [3]. A more accurate estimate can be obtained via a full-fledged finite-size scaling analysis (FSSA) [4]. As a closing remark, my personal opinion is that the transfer matrix method (TMM) is worth looking into for estimating critical properties of Anderson transitions. Although it may seem unusual at first, it can reach much larger system sizes than ED (in reasonable time), and as such it allows for very accurate estimates of critical properties. I will not introduce the method here but refer to the online tutorial [5]. It is in fact a really good intro to investigating Anderson localisation both with ED and the TMM. [1] See Mirlin and Evers (2008), Eqns (2.27)-(2.29) [2] This you can derive yourself: in one dimension, in a disorder-free metal the wavefunctions are Bloch waves of the form $\langle j | E \rangle = \exp(-ik_Ej)/\sqrt{L}$, where $k_E$ is the wavenumber belonging to energy $E$; in contrast, fully localised wavefunctions are of the form $\langle j | E \rangle = \delta_{j,j_E}$, where $j_E$ is some localisation centre. [3] I recommend Pino, Tabanera and Serna (2019), Eqns (2) and (3), and Figs 2 and 3. The paper discusses a model analogous to the models of Anderson localisation, with the addition that the transition to a fully localised phase is preceded by a transition to a "non-ergodic extended" phase; the latter has characteristics of both delocalised and localised states. It also discusses a variety of quantities to probe localisation. See also Mace, Alet and Laflorencie (2019), Eq. (1) and Fig. 3. 
This paper is short and relatively easy to read, and discusses localisation in the presence of interactions (so-called many-body localisation). [4] For a brief intro to FSSA and a python package see Sorge (2015) (Last retrieved 14/12/2023.) [5] Delande (2014) (Last retrieved on 14/12/2023.) I highly encourage anyone familiarising themselves with Anderson localisation to go through this tutorial. For high-quality estimates of critical properties of Anderson transitions with the TMM see e.g. Slevin and Ohtsuki (1998).
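To make the ED+IPR recipe concrete, here is a minimal sketch for a 1D chain (illustrative parameters only; in 1D all states localise for any finite disorder, so this shows the mechanics of the computation rather than a true transition):

```python
import numpy as np

def ipr_middle_states(L, W, n_states=10, rng=None):
    """Average IPR over eigenstates in the middle of the spectrum of a 1D
    Anderson chain (hopping t = 1, box-distributed on-site disorder)."""
    rng = rng if rng is not None else np.random.default_rng()
    H = np.diag(rng.uniform(-W / 2, W / 2, L))                      # on-site disorder
    H += np.diag(np.ones(L - 1), 1) + np.diag(np.ones(L - 1), -1)   # hopping
    vals, vecs = np.linalg.eigh(H)
    mid = L // 2
    sel = slice(mid - n_states // 2, mid + n_states // 2)
    # P_E = sum_j |<j|E>|^4, averaged over the selected mid-spectrum states
    return np.mean(np.sum(np.abs(vecs[:, sel]) ** 4, axis=0))

# Clean case: the IPR shrinks roughly as 1/L; strong disorder: it saturates.
for W in (0.0, 5.0):
    print(W, [ipr_middle_states(L, W, rng=np.random.default_rng(0))
              for L in (100, 400)])
```

In practice one averages over many disorder realisations and plots $\ln\overline{\mathcal{P}_E}/\ln L$ against $W$ for several $L$, as described above.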
{ "domain": "physics.stackexchange", "id": 98978, "tags": "anderson-localization" }
Reflection from multiple thin films: accounting for lost light due to small surface area
Question: I have a problem similar to reflection from multiple thin films. I have light coming in from medium 1 and I want to find the total reflected intensity after being reflected inside 2 layers. However, I want to account for the fact that the surface area of medium 4 is smaller than the light's spot size and so some of the light is lost. I already derived the total reflection for the regular 2-layer case (I am assuming a zero incident angle): $$R =\left| r + \frac{tt'r_{34}e^{i\delta}}{1-r'r_{34}e^{i\delta}} \right|^2 $$ $r$ is the total reflected electric field amplitude from the first layer only, $t$ the total transmitted amplitude through the first layer ($r'$ and $t'$ are in the opposite direction), $r_{34}$ is the reflection Fresnel coefficient for the n3-n4 boundary, and $\delta$ the phase corresponding to the n3 layer. Now I want to take into account that not all of the light transmitted through the first layer hits the last boundary. I thought about just multiplying the second term in my expression by some factor, say 0.5, which would make the transmitted amplitude smaller. However, since this will effectively multiply the complex electric field amplitude, I am not sure if that makes sense. Answer: Just blindly multiplying the overall answer by some factor isn't the way to go about it. I have an alternative proposal which may work well as a zeroth-order approximation at least. You already have the expression for the 2-layer case, and if I observe correctly, you are only concerned with the reflected part, not the transmitted part. So, a smaller reflecting area would mean less reflected intensity and more transmitted intensity, but we are not going to bother about the latter. So, as long as the concerned length parameter is not comparable with the wavelength ($\lambda$) of light, one can construct an effective $r$ for the third layer. 
Since a larger amount of light gets reflected from a larger area in such macroscopic circumstances, I can safely assume, as a zeroth order approximation, that this $$r_{\rm eff} = r_{\rm original} \times \left(\frac{A_{\rm layer \ area}}{A_{\rm spot \ size}} \right)$$ Of course, then $t_{\rm eff} = 1 - r_{\rm eff}$, neglecting losses etc., but again, you probably don't want to dive into these. Now, using your earlier derivation, use this $r_{\rm eff}$ to calculate $R$. I feel this is better than just using a number, $0.5$ or anything, and should work as a zeroth level approximation. And certainly don't insert factors into the final answer. Hope that helps :)
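As a sanity check of this recipe, here is a small numerical sketch (all coefficient values below are made up for illustration, not derived from real refractive indices):

```python
import numpy as np

def total_reflectance(r, t, t_back, r_back, r34, delta, area_ratio=1.0):
    """Two-layer reflectance at normal incidence, with the last-boundary
    Fresnel coefficient scaled by the illuminated-area fraction, as the
    zeroth-order approximation suggested above."""
    r34_eff = r34 * area_ratio  # fraction of the spot that hits medium 4
    phase = np.exp(1j * delta)
    amp = r + (t * t_back * r34_eff * phase) / (1 - r_back * r34_eff * phase)
    return abs(amp) ** 2

# Made-up illustrative coefficients (not from real refractive indices):
R_full = total_reflectance(r=0.2, t=0.98, t_back=0.98, r_back=-0.2,
                           r34=0.5, delta=0.3)
R_half = total_reflectance(r=0.2, t=0.98, t_back=0.98, r_back=-0.2,
                           r34=0.5, delta=0.3, area_ratio=0.5)
print(R_full, R_half)  # shrinking the area lowers the total reflectance here
```

Scaling the amplitude coefficient (rather than the final $R$) keeps the interference structure of the multilayer formula intact, which is the point of the proposal.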
{ "domain": "physics.stackexchange", "id": 15328, "tags": "electromagnetism, optics, reflection, refraction" }
Basics of Compton scattering interactions? Forces behind it?
Question: What is the force that governs the Compton scattering interaction? Also, how is it that we are able to approximate that the Compton scattering probability is proportional to the mass density of the target material? Answer: What is the force that governs the Compton scattering interaction? Compton scattering is an electromagnetic interaction of photons with charged particles; it is modeled, and the photon-electron scattering probability calculated, using Feynman diagrams. Also, how is it that we are able to approximate that the Compton scattering probability is proportional to the mass density of the target material? The masses in a material are composed of atoms, which have electron orbitals around positive nuclei. The greater the mass density, the more electrons there are for the photon to interact with and Compton scatter from. The statement holds for X-rays, which penetrate the bulk, not for visible-frequency photons, which interact only with the surface electrons.
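A quick way to see why the approximation works: the number of electrons per gram is $Z N_A / A$, and $Z/A$ stays close to $1/2$ for most elements, so the electron density (and hence the Compton scattering probability) tracks the mass density. A short sketch:

```python
# Electrons per gram = Z * N_A / A. Because Z/A stays near 1/2 for most
# elements, the electron density scales with the mass density, and so does
# the Compton scattering probability per unit volume.
N_A = 6.022e23  # Avogadro's number, mol^-1

elements = {            # symbol: (Z, A in g/mol)
    "C":  (6, 12.011),
    "Al": (13, 26.982),
    "Fe": (26, 55.845),
    "Pb": (82, 207.2),
}

for sym, (Z, A) in elements.items():
    print(f"{sym}: Z/A = {Z/A:.3f}, electrons per gram = {Z * N_A / A:.3e}")
# Z/A drifts only from ~0.50 down to ~0.40 (hydrogen, at ~1.0, is the
# outlier), so electrons per gram is nearly material-independent.
```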
{ "domain": "physics.stackexchange", "id": 54060, "tags": "photons, scattering, interactions" }
The choice of measurement basis on one half of an entangled state affects the other half. Can this be used to communicate faster than light?
Question: It is often stated, particularly in popular physics articles and videos about quantum entanglement, that if one measures a particle A that is entangled with some other particle B, then this measurement will immediately affect the state of the entangled partner. For example, if Alice and Bob share an entangled pair of electrons and Alice measures her spin in the $x$ direction, then Bob's spin will also end up pointing along that direction, and similarly if she measures in the $z$ direction. Moreover, the effect will be instantaneous, regardless of the spatial distance between the two particles, which seems at odds with special relativity. Can I use a scheme like this to communicate faster than light? Answer: The problem with this sort of scheme is that Alice has no control over the results of her measurements, since those are random. This means that she can control which basis Bob's spin is projected on, but she cannot control which of the basis states gets chosen. Bob will then see a random mix of results which turns out to contain no trace of what Alice was trying to communicate. To make this more precise, consider the standard case where they share a Bell triplet state $$ \newcommand{\up}{|\!\uparrow\rangle}\newcommand{\down}{|\!\downarrow\rangle} \newcommand{\plus}{|+\rangle}\newcommand{\minus}{|-\rangle} |\Phi\rangle=\up\up+\down\down $$ (ignoring normalization) at the start of the protocol, which they use as a resource state. Alice can choose to measure along the $z$ direction, in the basis $\{\up,\down\}$, or along the $x$ direction, in the basis $\{\plus=\tfrac1{\sqrt{2}}(\up+\down),\minus=\tfrac1{\sqrt{2}}(\up-\down)\}$. Because of the nice properties of the triplet state, whatever state Alice's qubit is projected on (in these two bases) will be identically replicated in Bob's qubit. Both states of either basis come up with equal probabilities. 
Alice's only choice in this scheme is what basis she measures in, and she can transmit one bit of information if she can engineer a situation where Bob can determine that basis. Assume, if you want to, that she can repeat this protocol $n$ times, with $n$ possibly greater than one, to help ensure the information gets there. Suppose, then, that Alice chose to measure in the $z$ direction. How can Bob determine this fact? To put this more explicitly, how can he determine that Alice didn't measure in the $x$ direction? His problem, then, is to determine whether his ensemble of $n$ qubits is in a random mix of $\up$s and $\down$s, or in a random mix of $\plus$s and $\minus$s. Unfortunately, this is impossible to do. If he measures in the $z$ direction, he will get fifty/fifty results if Alice is sending $\up$s and $\down$s, but he would also get fifty/fifty chances from each $\plus$ or $\minus$, and therefore from the whole set, if Alice had measured in the $x$ direction. Regardless of what basis Alice chose or what basis he himself measures in, both situations look exactly the same to Bob.
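As an illustration of this argument, here is a small Monte Carlo sketch. It uses only the correlation rules stated above (not a full quantum simulation): whichever basis Alice picks, Bob's qubit ends up in one of two states with equal probability, and his $z$-measurement statistics come out fifty/fifty either way.

```python
import random

def bob_z_outcome(alice_basis, rng):
    """One round on |Phi> = |up,up> + |down,down> (unnormalised): whatever
    state Alice projects onto (z or x basis) is replicated on Bob's qubit,
    and Bob then measures along z."""
    if alice_basis == "z":
        return rng.choice(["up", "down"])   # Bob holds |up> or |down>
    else:
        # Alice measured x: Bob holds |+> or |->, either of which gives
        # up/down with equal probability when measured along z.
        return rng.choice(["up", "down"])

def up_fraction(alice_basis, n=100_000, seed=0):
    rng = random.Random(seed)
    ups = sum(bob_z_outcome(alice_basis, rng) == "up" for _ in range(n))
    return ups / n

# Both fractions come out ~0.5: Bob cannot tell which basis Alice chose.
print(up_fraction("z"), up_fraction("x"))
```

That the two branches of `bob_z_outcome` are literally identical is the whole point: no trace of Alice's choice survives in Bob's marginal statistics.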
{ "domain": "physics.stackexchange", "id": 12127, "tags": "quantum-mechanics, special-relativity, quantum-entanglement, faster-than-light, epr-experiment" }
Python : Is it possible to combine these two functions into one?
Question: I have been creating a scoring program for a game of hearts. If there is an even number of players, then players will pass across; with an odd number there is no pass across. I created these 2 functions that work fine, but I keep scratching my head trying to simplify them into only 1 function. I know it's possible but I just can't get it right. This is the block of code I'm working with: #this function is for an odd number of players def no_across(): global pass_counter if pass_counter == 1: print("WE ARE PASSING LEFT") pass_counter = pass_counter + 1 elif pass_counter == 2: print("WE ARE PASSING RIGHT") pass_counter = pass_counter + 1 elif pass_counter == 3: print("WE ARE HOLDING") pass_counter = pass_counter - 2 #this function is for an even number of players def yes_across(): global pass_counter if pass_counter == 1: print("WE ARE PASSING LEFT") pass_counter = pass_counter + 1 elif pass_counter == 2: print("WE ARE PASSING RIGHT") pass_counter = pass_counter + 1 elif pass_counter == 3: print("WE ARE PASSING ACROSS") pass_counter = pass_counter + 1 elif pass_counter == 4: print("WE ARE HOLDING") pass_counter = pass_counter - 3 #determines which of the 2 previous functions to call if player_count % 2 == 0: yes_across() else: no_across() As you can see, in the first block and second block I have repeated code, but I know it can be simplified. All help is appreciated. EDIT: Below is the complete code. Don't judge me because I just whipped it together last night as a test to see if it would work. Also I am integrating this into a different project I have that will use a GUI. I just needed a rough copy of the program to work. 
#this program is meant to keep track of score during a game of hearts #displays a greeting print("Welcome to the player score keeper!") #score limit reached allows game to continue until changed score_limit_reached = False #these are the players names John = 0 Elaine = 0 Austin = 0 Bridgette = 0 Rocky = 0 #these following variables help determine if the program should have certain #passing in game. also keeps track of score limit. player_count = 0 player_count = int(input("How many players are going to play? ")) score_limit = 0 pass_counter = 1 score_limit = int(input("What is the score limit? ")) #this function is for an odd number of players def no_across(): global pass_counter if pass_counter == 1: print("WE ARE PASSING LEFT") pass_counter = pass_counter + 1 elif pass_counter == 2: print("WE ARE PASSING RIGHT") pass_counter = pass_counter + 1 elif pass_counter == 3: print("WE ARE HOLDING") pass_counter = pass_counter - 2 #this function is for an even number of players def yes_across(): global pass_counter if pass_counter == 1: print("WE ARE PASSING LEFT") pass_counter = pass_counter + 1 elif pass_counter == 2: print("WE ARE PASSING RIGHT") pass_counter = pass_counter + 1 elif pass_counter == 3: print("WE ARE PASSING ACROSS") pass_counter = pass_counter + 1 elif pass_counter == 4: print("WE ARE HOLDING") pass_counter = pass_counter - 3 #here begins the actual scoring section of the game while score_limit_reached == False: #determines which of the 2 previous functions to call if player_count % 2 == 0: yes_across() else: no_across() #user input keeps track of score John = John + int(input("john: ")) Elaine = Elaine + int(input("elaine: ")) Austin = Austin + int(input("austin: ")) Bridgette = Bridgette + int(input("bridgette: ")) Rocky = Rocky + int(input("rocky: ")) #prints current score of each player print("\n\nJohn's score - " + str(John)) print("Elaine's score - " + str(Elaine)) print("Austin's score - " + str(Austin)) print("Bridgette's score - " + 
str(Bridgette)) print("Rocky's score - " + str(Rocky) + "\n\n") #if the score limit has been reached it will end the while loop if John >= score_limit: score_limit_reached = True if Elaine >= score_limit: score_limit_reached = True if Austin >= score_limit: score_limit_reached = True if Bridgette >= score_limit: score_limit_reached = True if Rocky >= score_limit: score_limit_reached = True #prints the final score for each player print("\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nJohn - " + str(John)) print("Elaine - " + str(Elaine)) print("Austin - " + str(Austin)) print("Bridgette - " + str(Bridgette)) print("Rocky - " + str(Rocky) + "\n") #if any number of players has lost will declare each #player a poopy, followed by which score was the highest if John >= score_limit: print("John is a poopy!") if Elaine >= score_limit: print("Elaine is a poopy!") if Austin >= score_limit: print("Austin is a poopy!") if Bridgette >= score_limit: print("Bridgette is a poopy!") if Rocky >= score_limit: print("Rocky is a poopy!") biggest_poopy = str(max(John, Elaine, Austin, Bridgette, Rocky)) print("\nThe biggest poopy is " + biggest_poopy + "!") Answer: Yes, it can be easily done. Just pass some indication of parity into the function. The most straightforward way is to check player_count in the function like this: def both_across(): global pass_counter global player_count if pass_counter == 1: print("WE ARE PASSING LEFT") pass_counter = pass_counter + 1 elif pass_counter == 2: print("WE ARE PASSING RIGHT") pass_counter = pass_counter + 1 elif pass_counter == 3 and player_count%2==0: print("WE ARE PASSING ACROSS") pass_counter = pass_counter + 1 elif pass_counter == 3 and player_count%2==1: print("WE ARE HOLDING") pass_counter = pass_counter - 2 elif pass_counter == 4: print("WE ARE HOLDING") pass_counter = pass_counter - 3 But this code is still bad in many ways; the most obvious bad thing is using globals instead of arguments and returning a value. 
def both_across(counter, players): if counter == 1: print("WE ARE PASSING LEFT") return counter + 1 elif counter == 2: print("WE ARE PASSING RIGHT") return counter + 1 elif counter == 3 and players%2==0: print("WE ARE PASSING ACROSS") return counter + 1 elif counter == 3 and players%2==1: print("WE ARE HOLDING") return counter - 2 elif counter == 4: print("WE ARE HOLDING") return counter - 3 #somewhere down the code... pass_counter = both_across(pass_counter, player_count) Now, examining the function gives me an insight: in every branch we can find the returning value without calculations. If we know the value of counter, we can find the value of counter+1, counter-2 etc.: def both_across(counter, players): if counter == 1: print("WE ARE PASSING LEFT") return 2 elif counter == 2: print("WE ARE PASSING RIGHT") return 3 elif counter == 3 and players%2==0: print("WE ARE PASSING ACROSS") return 4 elif counter == 3 and players%2==1: print("WE ARE HOLDING") return 1 elif counter == 4: print("WE ARE HOLDING") return 1 The last two elifs can be combined, probably in the same else: ... elif counter == 3 and players%2==0: print("WE ARE PASSING ACROSS") return 4 else: print("WE ARE HOLDING") return 1 At this point it becomes clear that pass_counter values are in fact not integers but some symbols. 
They can be, say, strings - even the very strings we're printing: pass_counter = "HOLDING" def both_across(counter, players): if counter == "HOLDING": print("WE ARE PASSING LEFT") return "PASSING LEFT" elif counter == "PASSING LEFT": print("WE ARE PASSING RIGHT") return "PASSING RIGHT" elif counter == "PASSING RIGHT" and players%2==0: print("WE ARE PASSING ACROSS") return "PASSING ACROSS" else: print("WE ARE HOLDING") return "HOLDING" And now, we can combine all prints: pass_counter = "HOLDING" def both_across(counter, players): if counter == "HOLDING": return "PASSING LEFT" elif counter == "PASSING LEFT": return "PASSING RIGHT" elif counter == "PASSING RIGHT" and players%2==0: return "PASSING ACROSS" else: return "HOLDING" while not score_limit_reached: pass_counter = both_across(pass_counter, player_count) print("WE ARE", pass_counter) Good enough? For the rest of the code: Use functions. Don't leave the code just lying around, gather it into some functions and call them. Use lists and loops. You have the code for five names repeated - gather that into lists like scores = [0,0,0,0,0] names = ["John", "Elaine", "Austin", "Bridgette", "Rocky"] ... for i in range(5): scores[i] += int(input(names[i]+': ')) ... for i in range(5): print(names[i] + " - " + str(scores[i])) Use format strings. The last line can be rewritten as print(f'{names[i]} - {scores[i]}') Don't stop learning. Learn classes. This code will make much more sense with classes.
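As a hint of what a class-based version might look like (one possible sketch, with `itertools.cycle` doing the state-cycling):

```python
from itertools import cycle

class PassCycle:
    """One possible class-based take on the passing logic: 'ACROSS' is part
    of the cycle only when the number of players is even."""
    def __init__(self, player_count):
        directions = ["PASSING LEFT", "PASSING RIGHT"]
        if player_count % 2 == 0:
            directions.append("PASSING ACROSS")
        directions.append("HOLDING")
        self._cycle = cycle(directions)  # repeats the list forever

    def next_direction(self):
        return next(self._cycle)

pc = PassCycle(player_count=4)
for _ in range(5):               # LEFT, RIGHT, ACROSS, HOLDING, LEFT again
    print("WE ARE", pc.next_direction())
```

The branching disappears entirely: the parity decision is made once, when the cycle is built, instead of on every round.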
{ "domain": "codereview.stackexchange", "id": 42860, "tags": "python, python-3.x" }
Can electric field be discontinuous?
Question: "This is because of abrupt discontinuity of fields" I have read this or similar sentences in many papers. I am a bit puzzled. How and under what conditions can the electric field be discontinuous? In my opinion this is unphysical: field lines start at one charge and end at the opposite charge. Then how can fields be discontinuous midway? Answer: I think it is useful to complement @Matteo's answer (for the part related to the surface charge density) with a discussion of the physical conditions justifying the existence of a discontinuous electric field. It is essential to distinguish between microscopic and macroscopic fields. The microscopic fields are controlled by all the physical point-like sources in the system and may vary quite a lot with time and over microscopic distances. Still, they are always continuous (of course, they are not defined at each point where a charge is located, but that is not a discontinuity, according to the mathematical definitions). Macroscopic fields are a different story. They can be obtained from the microscopic fields through time and spatial averages, over times and spatial scales large with respect to the atomic scale. Such averaging process has profound mathematical consequences. Spatial and time variations generally become smoother but with an important exception. The interfaces between different homogeneous media are usually a few atomic layers wide. Suppose there is some pile-up of charge in such interfacial region at the macroscopic level. In that case, it has to be described as a charge distribution confined to the separation surface between the two media. It is a consequence of Gauss law that such two-dimensional charge density introduces a real discontinuity in the normal values of the electric displacement field at the interface. I.e., introducing the field on the two sides of the surface (${\bf D}_1$ and ${\bf D}_2$) $$ ({\bf D}_1 - {\bf D}_2)\cdot {\bf \hat n} = \sigma. 
$$ Notice that a similar discontinuity appears in the tangential component of the magnetic field ${\bf H}$ at a surface with a confined surface current density. In both cases, the origin of the macroscopic discontinuities can be traced back to the need to describe sources at the interfaces in terms of surface densities. In terms of field lines, the discontinuities do not introduce any inconsistency. Field lines start or end at a surface density, not only at point-like charges. However, in correspondence with the surface charge, the field has a finite value (does not diverge), but it is different on the two sides of the surface.
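A quick numerical illustration of this boundary condition, in the simplest case of an infinite charged sheet in vacuum:

```python
# Gauss pillbox check for the simplest case: an infinite sheet of surface
# charge sigma in vacuum. The displacement field points away from the sheet
# with magnitude sigma/2 on each side, so its normal component jumps by
# exactly sigma across the surface.
def D_normal(z, sigma):
    """Normal (z) component of D from an infinite charged sheet at z = 0."""
    return sigma / 2 if z > 0 else -sigma / 2

sigma = 3.0e-6  # C/m^2, an arbitrary illustrative value
jump = D_normal(+1e-9, sigma) - D_normal(-1e-9, sigma)
print(jump == sigma)  # True; the tangential components, by contrast, are continuous
```

The jump is there no matter how close to the surface the two evaluation points are taken, which is exactly what "discontinuous" means here.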
{ "domain": "physics.stackexchange", "id": 96572, "tags": "electromagnetism, electrostatics, electric-fields, mathematics" }
WordNetLemmatizer not lemmatizing the word "promotional" even with POS given
Question: When I do wnl.lemmatize('promotional','a') or wnl.lemmatize('promotional',wordnet.ADJ), I get merely 'promotional' when it should return promotion. I supplied the correct POS, so why isn't it working? What can I do? Answer: "promotional" is not an inflected form of "promotion", therefore "promotion" is not the lemma of "promotional". Actually, "promotion" is a noun and "promotional" is an adjective. Maybe what you actually want to do is not lemmatisation but stemming. Note that the stem is the root of the word and, certainly, the stem of both "promotion" and "promotional" can be "promot" (or "promotion", depending on the convention).
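To illustrate the difference, here is a deliberately toy suffix-stripper (not NLTK's actual Porter or Snowball stemmer, which apply much fuller rule sets), showing how stemming, unlike lemmatisation, conflates "promotion" and "promotional":

```python
def toy_stem(word, suffixes=("ional", "ion", "al", "s")):
    """A deliberately tiny suffix stripper -- a toy, not a real stemmer --
    illustrating how stemming conflates derivationally related forms that
    a lemmatiser keeps apart."""
    for suf in sorted(suffixes, key=len, reverse=True):  # longest match first
        if word.endswith(suf) and len(word) > len(suf) + 2:
            return word[: -len(suf)]
    return word

print(toy_stem("promotional"))  # -> promot
print(toy_stem("promotion"))    # -> promot
```

A lemmatiser would leave "promotional" alone (it is already the lemma of the adjective), while a stemmer reduces both words to a common root.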
{ "domain": "datascience.stackexchange", "id": 9834, "tags": "nlp, nltk" }
Why are transmembrane proteins difficult to crystallise?
Question: I know that in vivo there are a lot fewer transmembranous proteins in general, and that they are produced at a lower rate than their free counterparts. This is mainly because transmembrane proteins are only required in a 2D space on the membrane rather than a 3D cytoplasmic or extracellular space. This (again, very broadly speaking) means that the probability they will interact with their target is higher. I also know that this is one of the reasons that producing crystals for X-ray crystallography is notoriously difficult for transmembrane proteins. What are the other reasons that make transmembrane proteins typically tough crystallisation candidates? What specific part of the crystallisation process yields such poor success rates of transmembrane protein structure elucidation? Answer: There are several factors that make obtaining crystal structures from membrane proteins more difficult. In brief, nearly every stage of obtaining the structure via crystallography is more difficult. First: protein expression. Large amounts of pure, well-folded protein are required and this is much more difficult to achieve than it is with a soluble protein. Since membrane proteins are bound within a membrane, the mechanisms to translate the peptide into the membrane and to fold the protein in the membrane are different. They may involve folding factors which are only available in a particular compartment of the cell (making bacteria impossible as a system of expression). In high amounts in the cell, their hydrophobicity might tend to result in clumps of unfolded protein instead of gobs of membrane-associated protein. Next: Purifying the protein is more difficult. The expressing cells are usually broken open in the presence of detergent to get the proteins to float around as individual proteins in detergent micelles. 
The wrong detergents might break up the membrane protein and it may lose its fold - the concentration of detergent must be carefully managed and kept at optimal levels or the micelles might break up and the protein will be ruined. Crystallizing the protein is quite a bit more difficult too. Membrane proteins in detergent micelles, which may or may not be charged themselves, look like oily blobs with hopefully a domain or two of folded protein sticking out of them. Compared to a soluble protein, which presents a nice ordered surface in every direction, a membrane protein presents a detergent micelle that might undergo a phase change at high concentration, on the addition of a salt, or with a change in pH; this turns a task that might already take thousands of trials into one with new dimensions to worry about. The proteins are 2D-like, but the crystals still have to be 3D for crystallography to work usually. Protein crystals are small, but those derived from membrane proteins - which tend to organize into planes like the membranes they inhabit - are often thin, which can make them too delicate and small to get a good set of data even from synchrotron beams. As a result, the crystals are commonly too small or thin to use at first, requiring extensive optimization after the first crystals are found. In a few cases some membrane proteins have been solved with 2D crystals using electron microscopy on crystalline arrays of porins and rhodopsins in membranes. That was a ton of work but they were the first membrane protein structures by years and years. Not to make all this sound impossible - once you have crystals, they can usually be improved; there are good starting points to crystallization and purification with detergents. It's just that a process which can already take quite a bit of time (years) and can sometimes end in frustration takes even longer and is less certain with a membrane protein.
{ "domain": "biology.stackexchange", "id": 1549, "tags": "cell-biology, proteins, xray-crystallography" }
Python Hangman feedback
Question: Just looking for some feedback on a hangman script. It works perfectly fine; I'm just trying to master the Python language, and the best way to get better is to ask the true masters! import random UNKNOWN_CHARACTER = "*" GUESS_LIMIT = 7 words = [] def load_words(): global words with open("dictionary.txt") as file: words = file.readlines() stripped = [] for word in words: stripped.append(word.strip()) words = stripped def play(): word = random.choice(words) solved = False constructed = "" guessed = [] guess = "" for i in range(0, len(word)): constructed += UNKNOWN_CHARACTER while not solved and len(guessed) < GUESS_LIMIT: print("\n" + str(GUESS_LIMIT - len(guessed)) + " errors left...") print(constructed) valid_guess = False while not valid_guess: guess = input("Guess a letter: ").lower() if len(guess) == 1: if guess not in guessed and guess not in constructed: valid_guess = True else: print("You've already guessed that letter!") else: print("Please guess a single letter.") if guess in word: new_constructed = "" for i in range(0, len(word)): if word[i] == guess: new_constructed += guess else: new_constructed += constructed[i] constructed = new_constructed else: guessed.append(guess) solved = constructed == word print("\n" + word) def main(): load_words() keep_playing = True while keep_playing: play() keep_going = input("Continue playing? (y/n): ").lower() if keep_going not in ["yes", "y"]: keep_playing = False if __name__ == "__main__": main() Excerpt of dictionary.txt: logorrheic logos logotype logotypes logotypies logotypy logroll logrolled logroller logrollers logrolling logrollings logrolls logs logway logways logwood logwoods logy loin loincloth loincloths loins loiter loitered loiterer loiterers loitering loiters loll lollapalooza lollapaloozas lolled loller lollers Answer: List comprehension is a very neat way to construct lists in Python. 
For instance : stripped = [] for word in words: stripped.append(word.strip()) becomes stripped = [word.strip() for word in words] (and then you can get rid of the stripped variable altogether in your code). The pythonic way to loop over something is not to use range and len (except if you really have to). For instance, we have first : for i in range(0, len(word)): constructed += UNKNOWN_CHARACTER which can be written : for i in word: constructed += UNKNOWN_CHARACTER Then one might argue that you can use the * operator here to write : constructed = UNKNOWN_CHARACTER * len(word) Sometimes, you might think that you need to use range and len because you need the index corresponding to your iteration. This is what enumerate is for. For instance : new_constructed = "" for i in range(0, len(word)): if word[i] == guess: new_constructed += guess else: new_constructed += constructed[i] becomes : new_constructed = "" for i,l in enumerate(word): # i is the index, l is the letter if l == guess: # note that we don't need to get the i-th element here new_constructed += l # I prefer l to guess here because it's shorter :-P else: new_constructed += constructed[i] What would be really awesome here would be to iterate over 2 containers at the same time. zip allows you to do such a thing. new_constructed = "" for w,c in zip(word,constructed): if w == guess: new_constructed += w else: new_constructed += c Then, it's pretty cool because we can make things slightly more concise : new_constructed = "" for w,c in zip(word,constructed): new_constructed += w if w == guess else c If you want to play it cool and re-use list comprehension (or even generator expression (I use fancy words so that you can google them if required)) : you build some kind of list of characters/strings and join them together with join. 
new_constructed = ''.join((w if w == guess else c for w,c in zip(word,constructed))) This being said, I probably wouldn't build that string this way, but I just wanted to point this out so that you can discover new things. Usually, global variables are frowned upon because they make things hard to track. In your case, it is not so much of an issue but let's try to split the logic in smaller parts. It makes things easier to understand and to test too. Here, we just need to return the list of words from load_words. It's interesting to store the letters that have been guessed. However, a list (built with [] and populated with append()) might not be the right container for this. What you really want at the end of the day is just to be able to tell quickly whether some letter has been guessed already. This is what sets are for. You could store all guesses or just wrong guesses. You've decided to store only wrong guesses (as right guesses can be deduced from the constructed string). Personally, I'd rather save all guesses on one hand and the number of wrong guesses on the other hand, as it might make things a bit easier later on. I am not saying that what you did was wrong at all, I just want to show a different way to do it. Also, I'm trying to split the logic used to display what has been found away from the logic used to know if a letter has been guessed and the logic used to know if the word has been fully found. At the moment, the variable constructed is used for these 3 things, which can make things hard to understand. 
At the end, here is what I came up with : #!/usr/bin/python import random UNKNOWN_CHARACTER = "*" GUESS_LIMIT = 7 def load_words(): words = [] with open("dictionary.txt") as file: words = file.readlines() return [word.strip() for word in words] def play(word): nb_wrong = 0 guesses = set() while nb_wrong < GUESS_LIMIT: print("\n" + str(GUESS_LIMIT - nb_wrong) + " errors left...") print(''.join(c if c in guesses else UNKNOWN_CHARACTER for c in word)) while True: guess = input("Guess a letter: ").lower() if len(guess) == 1: if guess in guesses: print("You've already guessed that letter!") else: guesses.add(guess) break # stop when we have a valid guess else: print("Please guess a single letter.") if guess not in word: nb_wrong += 1 # count wrong guesses towards the limit if all(c in guesses for c in word): break # stop when all characters are found print("\n" + word) def main(): words = load_words() while True: play(random.choice(words)) if input("Continue playing? (y/n): ").lower() not in ["yes", "y"]: break if __name__ == "__main__": main() (It's largely untested, but the point was more to explain what I did than to show you a finished program.)
{ "domain": "codereview.stackexchange", "id": 5714, "tags": "python, game, python-3.x, hangman" }
Passing by a semi truck on a highway
Question: So I’m driving down the highway in my sedan at a speed of, let's say, 60 mph. Why is it that my car sways just before I pass a semi or it passes me? Almost always where the semi's first tire is (closest to the hood)? What forces are acting upon it? What could be a possible explanation for why it happens? Why is it only at that spot that the car sways? Answer: A truck speeding down a highway creates an envelope of high-pressure air surrounding it. This is basically a wake, composed of layers of high-pressure waves, created as the front of the truck, hood and cabin penetrate the still air and push it open. This wake moves with the truck and, after turning into a small turbulent tail at the back, dissipates slowly. When you drive near a truck while passing, or it passes by you, your car cuts into the boundary of this wake. Depending on your car's aerodynamics and your driving habits, the impact of the incursion may become more pronounced at a certain point and angle. Sometimes this impact may be strong enough to steer the car closer to the truck. Or you may lose control by trying to avoid a collision. I keep away from big trucks as much as I can, or else try to anticipate the shock and be prepared.
{ "domain": "engineering.stackexchange", "id": 2513, "tags": "automotive-engineering, airflow, car" }
Finding anagrams in Java (space-optimised)
Question: I've lately been asked in a job interview to program a solution for the well-known anagrams problem: given two strings s1 and s2, decide if they are anagrams or not. Additionally, the characters should be one of [a-zA-Z] and case-insensitive, so AbA and Baa should be valid anagrams. As a side note, the interviewer was especially interested in a low space complexity and, in particular, in using the heap as little as possible. In the interview I came up with a Map/histogram solution, and in retrospect I implemented another solution, focusing on reducing space complexity.

public class Anagram {

    private static final long[] FIRST_26_PRIMES = new long[] {
        2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41,
        43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89, 97, 101
    };

    private static final int A_Z_SIZE =
        Character.getNumericValue('Z') - Character.getNumericValue('A') + 1;

    private static final int CHARACTER_A_OFFSET = Character.getNumericValue('a');

    public static boolean isAnagram(String s1, String s2) {
        if (s1.length() != s2.length() || s1.length() < 1) {
            return false;
        }
        if (s1.length() < 10) {
            return isAnagramUsingPrimes(s1, s2);
        }
        return isAnagramUsingArray(s1, s2);
    }

    private static boolean isAnagramUsingPrimes(String s1, String s2) {
        long product = 1L;
        for (int i = 0; i < s1.length(); i++) {
            int currChar = Character.getNumericValue(s1.charAt(i)) - CHARACTER_A_OFFSET;
            long currPrime = FIRST_26_PRIMES[currChar];
            product *= currPrime;
        }
        for (int i = 0; i < s2.length(); i++) {
            int currChar = Character.getNumericValue(s2.charAt(i)) - CHARACTER_A_OFFSET;
            long currPrime = FIRST_26_PRIMES[currChar];
            if (product % currPrime != 0) {
                return false;
            }
            product /= currPrime;
        }
        assert product == 1L;
        return true;
    }

    private static boolean isAnagramUsingArray(String s1, String s2) {
        int[] countsArray = new int[A_Z_SIZE];
        for (int i = 0; i < s1.length(); i++) {
            int currChar = Character.getNumericValue(s1.charAt(i)) - CHARACTER_A_OFFSET;
            int count = countsArray[currChar];
            countsArray[currChar] = count + 1;
        }
        for (int i = 0; i < s2.length(); i++) {
            int currChar = Character.getNumericValue(s2.charAt(i)) - CHARACTER_A_OFFSET;
            int count = countsArray[currChar];
            if (count == 0) {
                return false;
            }
            countsArray[currChar] = count - 1;
        }
        return true;
    }
}

So my basic idea is to have two methods: one for short strings (< 10 characters) and a second method for strings of arbitrary length. The method for the shorter strings uses prime-number multiplication and division and lives entirely on the stack, if I am not mistaken. The method for the longer strings creates an array with a size of 26, which is the only object that lives on the heap. I would be happy about any comments/feedback and also further ideas.

BENCHMARK

Out of curiosity I also added a JMH benchmark in order to see which of the two methods performs better. I basically picked a list of 40k 9-letter words which appear in online dictionaries. The upper bound of 9 letters was chosen because otherwise the prime method would not work correctly. It looks like the array variant beats the prime variant in terms of ops/ns in my benchmark. My out-of-the-blue guess is that with the prime variant a lot of multiplication/division/modulo operations are performed, while the array variant simply uses increment and decrement on the values. The allocations, as expected, look much better with the prime-numbers variant.
Here is the benchmark:

import org.openjdk.jmh.annotations.*;
import org.openjdk.jmh.infra.Blackhole;

import java.io.BufferedReader;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.util.concurrent.TimeUnit;

@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.NANOSECONDS)
@Warmup(iterations = 5, time = 1, timeUnit = TimeUnit.SECONDS)
@Measurement(iterations = 5, time = 1, timeUnit = TimeUnit.SECONDS)
@Fork(5)
@State(Scope.Benchmark)
public class AnagramBenchmark {

    private static final int WORDS_ARRAY_SIZE = 40727;

    private String[] words;

    @Setup
    public void setup() {
        try (InputStream is = getClass().getResourceAsStream("/9-letter-words.txt");
             InputStreamReader isr = new InputStreamReader(is);
             BufferedReader br = new BufferedReader(isr)) {
            words = new String[WORDS_ARRAY_SIZE];
            for (int i = 0; ; i++) {
                String line = br.readLine();
                if (line == null) {
                    break;
                }
                words[i] = line;
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    @Benchmark
    @OperationsPerInvocation(WORDS_ARRAY_SIZE - 1)
    public void primsAnagram(Blackhole bh) {
        for (int i = 0; i < (WORDS_ARRAY_SIZE - 1); i++) {
            String s1 = words[i];
            String s2 = words[i + 1];
            bh.consume(Anagram.isAnagramUsingPrimes(s1, s2));
        }
    }

    @Benchmark
    @OperationsPerInvocation(WORDS_ARRAY_SIZE - 1)
    public void arrayAnagram(Blackhole bh) {
        for (int i = 0; i < (WORDS_ARRAY_SIZE - 1); i++) {
            String s1 = words[i];
            String s2 = words[i + 1];
            bh.consume(Anagram.isAnagramUsingArray(s1, s2));
        }
    }
}

Here is the command to run it from the console, including a profiler that measures allocations as well:

mvn clean install && java -jar target/benchmarks.jar AnagramBenchmark -prof gc

This benchmark gave the following results:

# Run complete.
Total time: 00:02:35

Benchmark                                                      Mode  Cnt     Score     Error  Units
AnagramBenchmark.arrayAnagram                                  avgt   25    48.758 ±   0.995  ns/op
AnagramBenchmark.arrayAnagram:·gc.alloc.rate                   avgt   25  1564.522 ±  32.580  MB/sec
AnagramBenchmark.arrayAnagram:·gc.alloc.rate.norm              avgt   25   120.000 ±   0.001  B/op
AnagramBenchmark.arrayAnagram:·gc.churn.PS_Eden_Space          avgt   25  1580.981 ± 157.861  MB/sec
AnagramBenchmark.arrayAnagram:·gc.churn.PS_Eden_Space.norm     avgt   25   121.312 ±  12.255  B/op
AnagramBenchmark.arrayAnagram:·gc.churn.PS_Survivor_Space      avgt   25     0.087 ±   0.019  MB/sec
AnagramBenchmark.arrayAnagram:·gc.churn.PS_Survivor_Space.norm avgt   25     0.007 ±   0.001  B/op
AnagramBenchmark.arrayAnagram:·gc.count                        avgt   25    81.000            counts
AnagramBenchmark.arrayAnagram:·gc.time                         avgt   25    47.000            ms
AnagramBenchmark.primsAnagram                                  avgt   25   124.970 ±   3.350  ns/op
AnagramBenchmark.primsAnagram:·gc.alloc.rate                   avgt   25   ≈ 10⁻⁴             MB/sec
AnagramBenchmark.primsAnagram:·gc.alloc.rate.norm              avgt   25   ≈ 10⁻⁴             B/op
AnagramBenchmark.primsAnagram:·gc.count                        avgt   25   ≈ 0                counts

Answer: As you are only interested in the value of c - 'a' (or c - 'A'), you can replace every usage of Character.getNumericValue(c) - CHARACTER_A_OFFSET with a call to

private static int indexOf(char c) {
    return c - 'A' & ~32;
}

The FIRST_26_PRIMES array can be an int array instead of a long array. You can replace the modulo operation with a multiplication and a subtraction (which could/should be faster):

int currPrime = FIRST_26_PRIMES[indexOf(s2.charAt(i))];
if (product - (product /= currPrime) * currPrime != 0)
    return false;

In the array version you can use post/pre increment and pre decrement (most likely no difference in terms of performance, but shorter and, in my opinion, more readable):

private static boolean isAnagramUsingArray(String s1, String s2) {
    int[] countsArray = new int[26];
    for (int i = 0; i < s1.length(); i++)
        countsArray[indexOf(s1.charAt(i))]++;
    for (int i = 0; i < s2.length(); i++)
        if (--countsArray[indexOf(s2.charAt(i))] < 0)
            return false;
    return true;
}
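As a side note on the < 10 cutoff in isAnagram: it is there because the running product can overflow a signed 64-bit long. A quick back-of-the-envelope check (in Python here, since its integers are arbitrary precision; this bound check is my own verification, not part of the original post):

```python
# Worst case for the prime-product trick: every character maps to the
# largest prime in the table (101, for 'z').
LARGEST_PRIME = 101
LONG_MAX = 2**63 - 1  # Java's signed 64-bit Long.MAX_VALUE

# 9 characters always fit in a long, 10 characters can overflow:
assert LARGEST_PRIME**9 < LONG_MAX
assert LARGEST_PRIME**10 > LONG_MAX

# The same product idea, sketched in Python for illustration:
PRIMES = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41,
          43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89, 97, 101]

def product(word):
    p = 1
    for ch in word.lower():
        p *= PRIMES[ord(ch) - ord('a')]
    return p

assert product("AbA") == product("Baa")   # anagrams give equal products
assert product("abc") != product("abd")
```

So 9 letters really is the largest always-safe length for the long-based variant; a BigInteger product would lift the limit but defeat the no-allocation goal.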
{ "domain": "codereview.stackexchange", "id": 26586, "tags": "java, strings, comparative-review, interview-questions, memory-optimization" }
Uncertainty on intersection of two lines of best fit
Question: I am doing some lab work, and one of the values I have to find is the x-value of the intersection of the two lines of best fit to some of the experimental data. I have the values for their slopes and intercepts, with an uncertainty value for each. Now I want to find the uncertainty on this final x-value I have found. This is how I would go about it. Given the two lines $y=mx+b$ and $y=cx+d$, I find $x$ by setting $mx+b=cx+d$, thus giving $x=\frac{d-b}{m-c}$. I have an uncertainty $\Delta d$ for $d$,$\Delta b$ for $b$, $\Delta c$ for $c$ and $\Delta m$ for $m$. Since $d$ and $b$ are subtracted, the uncertainty on $d-b$ is the sum of their uncertainties, i.e. $\Delta d + \Delta b$. Same goes for the denominator. Then, since I am taking the ratio of $d-b$ and $m-c$, I can find the error on $x$ by adding the relative errors in quadrature: $\Delta x= x \sqrt{(\frac{\Delta d + \Delta b}{d-b})^2+(\frac{\Delta m + \Delta c}{m-c})^2}$. Can anyone confirm whether this is the correct procedure? It's my first time doing something like this and, even if it seems to make sense, I am not 100% confident. Many thanks to whoever will take the time to double-check this! Answer: If the lines are fitted to different datasets (so that the coefficients are independent) then an approximate uncertainty in $x$ would be $$\Delta x= x \sqrt{\frac{(\Delta d)^2 + (\Delta b)^2}{(d-b)^2}+\frac{(\Delta m)^2 + (\Delta c)^2}{(m-c)^2}}\ .$$ There reason this differs from your formula is that the uncertainty in $d -b$ is actually $\sqrt{(\Delta d)^2 + (\Delta b)^2}$. In a probabilistic sense it is unlikely that both quantities would be at the upper end or the lower end of their individual uncertainty ranges simultaneously, so just adding the uncertainties is not usually correct. 
The probability distribution of the difference (or sum) of two quantities with independent, normally distributed uncertainties, one with $\sigma = \Delta b$ and the other with $\sigma = \Delta d$, is itself a normal distribution with $\sigma = \sqrt{(\Delta d)^2 + (\Delta b)^2}$. It seems that just adding the uncertainties is being increasingly taught as part of school physics... EDIT: You should note, though, that the assumption that the slope and intercept of either line have independent uncertainties may not be a good one. In that case, this standard error-propagation approach may not give you what you really need. https://stats.stackexchange.com/questions/104704/are-estimates-of-regression-coefficients-uncorrelated
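As a worked example of the quadrature formula from the answer (the fit values below are made up purely for illustration):

```python
import math

# Hypothetical fit results, not from any real dataset:
m, dm = 2.0, 0.1   # slope of line 1 and its uncertainty
b, db = 1.0, 0.2   # intercept of line 1
c, dc = -1.0, 0.1  # slope of line 2
d, dd = 4.0, 0.2   # intercept of line 2

# Intersection x = (d - b) / (m - c)
x = (d - b) / (m - c)

# Independent, normally distributed uncertainties combine in quadrature:
dx = abs(x) * math.sqrt(
    (dd**2 + db**2) / (d - b)**2 +
    (dm**2 + dc**2) / (m - c)**2
)

print(x, dx)  # x = 1.0, dx ≈ 0.105
```

With these numbers, naively summing the uncertainties instead (the questioner's formula) would give a noticeably larger, overly pessimistic error bar.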
{ "domain": "physics.stackexchange", "id": 77245, "tags": "error-analysis, data-analysis" }
Why do equalities between complexity classes translate upwards and not downwards?
Question: Hey guys, I understand that the padding trick allows us to translate complexity classes upwards - for example $P=NP \rightarrow EXP=NEXP$. Padding works by "inflating" the input, running the conversion (say, from $NP$ to $P$), which yields a "magic" algorithm which you can run on the padded input. While this makes technical sense, I can't get a good intuition of how this works. What exactly is going on here? Is there a simple analogy for what padding is? Can you provide a common-sense reason why this is the case? Answer: I think the best way to get intuition for this issue is to think of what the complete problems for exponential time classes are. For example, the complete problems for NE are the standard NP-complete problems on succinctly describable inputs, e.g., given a circuit that describes the adjacency matrix of a graph, is the graph 3-colorable? Then the problem of whether E=NE becomes equivalent to whether NP problems are solvable in polynomial time on the succinctly describable inputs, e.g., those with small effective Kolmogorov complexity. This is obviously no stronger than whether they are solvable on all inputs. The larger the time bound, the smaller the Kolmogorov complexity of the relevant inputs, so collapses for larger time bounds are in effect algorithms that work on smaller subsets of inputs. Russell Impagliazzo
{ "domain": "cstheory.stackexchange", "id": 304, "tags": "cc.complexity-theory, complexity-classes, padding" }
No net generation or recombination of electrons is assumed
Question: I am currently studying the textbook Physics of Photonic Devices, second edition, by Shun Lien Chuang. Section 2.1.1 Maxwell's Equations in MKS Units says the following: The well-known Maxwell's equations in MKS (meter, kilogram, and second) units are written as $$\nabla \times \mathbf{E} = - \dfrac{\partial}{\partial{t}}\mathbf{B} \ \ \ \ \text{Faraday's law} \tag{2.1.1}$$ $$\nabla \times \mathbf{H} = \mathbf{J} + \dfrac{\partial{\mathbf{D}}}{\partial{t}} \ \ \ \ \text{Ampère's law} \tag{2.1.2}$$ $$\nabla \cdot \mathbf{D} = \rho \ \ \ \ \text{Gauss's law} \tag{2.1.3}$$ $$\nabla \cdot \mathbf{B} = 0 \ \ \ \ \text{Gauss's law} \tag{2.1.4}$$ where $\mathbf{E}$ is the electric field (V/m), $\mathbf{H}$ is the magnetic field (A/m), $\mathbf{D}$ is the electric displacement flux density (C/m$^2$), and $\mathbf{B}$ is the magnetic flux density (Vs/m$^2$ or Webers/m$^2$). The two source terms, the charge density $\rho$(C/m$^3$) and the current density $\mathbf{J}$(A/m$^2$), are related by the continuity equation $$\nabla \cdot \mathbf{J} + \dfrac{\partial}{\partial{t}}\rho = 0 \tag{2.1.5}$$ where no net generation or recombination of electrons is assumed. I'm curious about this part: where no net generation or recombination of electrons is assumed. What does this mean in simpler terms? Why is this assumption necessary for $\nabla \cdot \mathbf{J} + \dfrac{\partial}{\partial{t}}\rho = 0$? Answer: The number/concentration of electrons in a volume may change due to their flow into/out of the volume (electric current), or due to electrons appearing/disappearing inside of it.
In vacuum, the latter possibility can usually be safely ignored (although not in QFT), so we have the continuity equation: $$\nabla\cdot\mathbf{J}+\partial_t\rho=0\Leftrightarrow \int_S\mathbf{J}\cdot\mathbf{ds} + \partial_t Q=0,$$ where the second equation is just the integral form of the continuity equation: the total current flowing out through the surface surrounding the volume balances the rate of change of the charge within. If, however, the charge may appear/vanish within the volume – which is a real option in semiconductors' interaction with the electromagnetic field – then we need to augment the continuity equation with a source term: $$\nabla\cdot\mathbf{J}+\partial_t\rho=s(t)$$ It is necessary to point out that total charge conservation still holds (creation of an electron is accompanied by creation of a hole), but we would often want to describe electrons and holes separately – writing a continuity equation for each of them – or one type of carrier may be quickly removed, and considered non-existent for the purposes of the description (e.g., holes may be localized, but electrons highly mobile).
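The source-free continuity equation can be checked numerically for a simple current distribution. Below is a small sketch (my own illustration, not from the textbook): a one-dimensional Gaussian charge blob translating rigidly at constant velocity, with $J = \rho v$, satisfies $\partial_x J + \partial_t \rho = 0$, verified here with central differences.

```python
import math

V = 0.7  # drift velocity (arbitrary units)

def rho(x, t):
    # a Gaussian charge blob moving rigidly at velocity V
    return math.exp(-(x - V * t) ** 2)

def J(x, t):
    # convective current density: J = rho * v
    return V * rho(x, t)

# Check d(rho)/dt + dJ/dx = 0 at a sample point via central differences
h = 1e-5
x0, t0 = 0.3, 0.2
drho_dt = (rho(x0, t0 + h) - rho(x0, t0 - h)) / (2 * h)
dJ_dx = (J(x0 + h, t0) - J(x0 - h, t0)) / (2 * h)

residual = dJ_dx + drho_dt
print(abs(residual))  # effectively zero, limited only by finite-difference error
```

If charge were created or destroyed inside the volume (e.g., a decay term in rho without a matching current), the same check would produce a nonzero residual, which is exactly the source term $s(t)$ in the answer.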
{ "domain": "physics.stackexchange", "id": 84692, "tags": "electromagnetism, condensed-matter, semiconductor-physics, maxwell-equations" }
Historic Relationship between Typed Lambda Calculus and Lisp?
Question: I was having a discussion with a friend recently (who is an advocate of strongly typed languages). He made the comment: The inventors of Lambda Calculus always intended it to be typed. Now we can see that Church was associated with the Simply Typed Lambda Calculus. Indeed, it seems he explained the Simply Typed Lambda Calculus in order to reduce misunderstanding about the Lambda Calculus. Now when John McCarthy created Lisp, he based it on the Lambda Calculus. This is by his own admission when he published "Recursive functions of symbolic expressions and their computation by machine, Part I". You can read it here. McCarthy appears not to have addressed the Simply Typed Lambda Calculus. That area seems to have been dominated by Robin Milner with ML. There is some discussion of the relationship between Lisp and Lambda Calculus here, but they don't really get to the bottom of why McCarthy chose to leave it untyped. My question is: if McCarthy admits he knew about Lambda Calculus, why did he ignore the Typed Lambda Calculus? (i.e., is it truly obvious that Lambda Calculus was intended to be typed? It doesn't seem that way.) Answer: First, your friend is wrong about the history of the $\lambda$-calculus. Church created the untyped calculus first, which he intended as a foundation for mathematics. Fairly quickly, it was discovered that the logic derived from this calculus was inconsistent (because non-terminating programs existed). Eventually Church developed the simple theory of types as well, and many other things besides, but that wasn't the original point of the system. An excellent overview of the history is found in this paper. Second, the simply-typed lambda calculus is a quite restrictive language. You need some form of recursive type to write any interesting kind of program in it. Certainly, it would be impossible to write the kinds of programs in McCarthy's original paper with the $\lambda$-calculi based type systems understood in 1958.
Cutting-edge programming type systems at that point were the ones found in Fortran and COBOL.
{ "domain": "cstheory.stackexchange", "id": 2769, "tags": "lambda-calculus, church-turing-thesis, lisp, typed-lambda-calculus" }
ICP(Iterative Closest Point) with Partially Overlapping Conditions & Changing Point Numbers
Question: I am currently working on fixing vehicle odometry data using lidar contour points. Since I am receiving lidar data in the form of contour points, I thought I'd use ICP to correct the odometry error calculated from the bicycle model. Here is my question. The code I am referencing from https://github.com/kissb2/PyICP-SLAM/blob/master/utils/ICP.py requires that the number of points at time t = x must equal the number of points at time t = x+k, or at least that they be in arrays of the same size (shown by the assert statement "assert A.shape == B.shape"). However, I get a fluctuating number of contour points in every time frame, so this algorithm is not applicable to my problem as-is. Has anyone ever come across an ICP with differing array sizes for the 'source frame' and 'target frame'? If so, how should I address the issue? If you just append the points into arrays A & B of the same size M, with empty spots being zeros, would the ICP algorithm autonomously sort itself out despite receiving different numbers of data points? (For example, if A had 30 points and B had 60 points, pad A with an empty matrix of size 30 to make it size 60.) Thank you!
def icp(A, B, init_pose=None, max_iterations=20, tolerance=0.001):
    '''
    The Iterative Closest Point method: finds best-fit transform that maps points A on to points B
    Input:
        A: Nxm numpy array of source mD points
        B: Nxm numpy array of destination mD points
        init_pose: (m+1)x(m+1) homogeneous transformation
        max_iterations: exit algorithm after max_iterations
        tolerance: convergence criteria
    Output:
        T: final homogeneous transformation that maps A on to B
        distances: Euclidean distances (errors) of the nearest neighbor
        i: number of iterations to converge
    '''

    assert A.shape == B.shape

    # get number of dimensions
    m = A.shape[1]

    # make points homogeneous, copy them to maintain the originals
    src = np.ones((m+1, A.shape[0]))
    dst = np.ones((m+1, B.shape[0]))
    src[:m, :] = np.copy(A.T)
    dst[:m, :] = np.copy(B.T)

    # apply the initial pose estimation
    if init_pose is not None:
        src = np.dot(init_pose, src)

    prev_error = 0

    for i in range(max_iterations):
        # find the nearest neighbors between the current source and destination points
        distances, indices = nearest_neighbor(src[:m, :].T, dst[:m, :].T)

        # compute the transformation between the current source and nearest destination points
        T, _, _ = best_fit_transform(src[:m, :].T, dst[:m, indices].T)

        # update the current source
        src = np.dot(T, src)

        # check error
        mean_error = np.mean(distances)
        if np.abs(prev_error - mean_error) < tolerance:
            break
        prev_error = mean_error

    # calculate final transformation
    T, _, _ = best_fit_transform(A, src[:m, :].T)

    return T, distances, i

Answer: ICP does not require that the number of points match. It can automatically take care of different-sized inputs due to using the closest pair. An example of this can be found here, but in 2D. In the PyICP-SLAM example you give, you should notice that they randomly downsample the pointcloud to a fixed number (default 5000). That is how they are able to assert that the pointclouds are the same size.
They mostly do this to increase the speed, but it can also give some better convergence properties. So it is up to you to choose which method you want to implement: randomly sample the point sets to be the same size, or let the algorithm run with different-sized inputs.
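A minimal sketch of the random-downsampling option (plain Python here; PyICP-SLAM does the equivalent with NumPy). Note that zero-padding the smaller cloud, as considered in the question, would not work: the padded zeros would be treated as real points at the origin and bias both the nearest-neighbor matching and the fitted transform.

```python
import random

def downsample(points, n, seed=None):
    """Randomly pick n points (without replacement); keep all if there are fewer."""
    if len(points) <= n:
        return list(points)
    return random.Random(seed).sample(points, n)

# Two scans with different numbers of contour points:
scan_a = [(float(i), 0.0) for i in range(30)]
scan_b = [(float(i), 1.0) for i in range(60)]

n = min(len(scan_a), len(scan_b))
a = downsample(scan_a, n, seed=0)
b = downsample(scan_b, n, seed=1)
assert len(a) == len(b) == 30  # now safe for an ICP that asserts equal shapes
```

Alternatively, dropping the "assert A.shape == B.shape" line from the referenced code and letting the nearest-neighbor search handle the mismatch also works, since each source point simply matches its closest destination point.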
{ "domain": "robotics.stackexchange", "id": 2186, "tags": "slam, localization, path-planning, lidar, icp" }
Difference between mirror reflected light rays and rays of a screen
Question: What is the difference between the light rays reflected from a mirror and rays directly emitted from a screen, if the screen shows the same image as the mirror would? Answer: Distance to the image is the big difference. The reflected image in a plane mirror is composed of rays that appear to come from a point behind the mirror (by the same distance that the image that is being reflected is in front of the mirror: $d_o = d_i$). For non-plane mirrors you'll have to work a little harder, but locally spherical bits can be treated with the usual $$ \frac{1}{f} = \frac{1}{d_i} + \frac{1}{d_o} \,.$$ The light rays emanating from a screen come from the surface of the screen.
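A quick numeric check of the spherical-mirror relation above (the example values are my own): an object 30 cm from a concave mirror with f = 10 cm forms an image 15 cm in front of it, while the screen's "image" always sits right at the screen surface.

```python
def image_distance(f, d_o):
    # solve 1/f = 1/d_i + 1/d_o for the image distance d_i
    return 1.0 / (1.0 / f - 1.0 / d_o)

print(image_distance(10.0, 30.0))  # ~15.0 (cm)
```

For a plane mirror the focal length goes to infinity and the formula reduces to $d_i = -d_o$: the image appears as far behind the mirror as the object is in front, which is exactly the depth cue a flat screen cannot reproduce.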
{ "domain": "physics.stackexchange", "id": 35806, "tags": "optics, visible-light, reflection" }
Are superoperators (CPTPM) equal if they are equal on all density operators?
Question: $\DeclareMathOperator\tr{tr} $Is the following statement true? Conjecture: Let $\cal E_1,\cal E_2$ be completely positive trace-preserving maps (quantum superoperators). Assume that for any positive Hermitean operator $\rho$ with $\tr\rho=1$ (density operator), we have $\cal E_1(\rho)=\cal E_2(\rho)$. Then $\cal E_1=\cal E_2$. The intuitive meaning of this is: if two operations on a quantum system have exactly the same effect ($\forall\rho.\cal E_1(\rho)=\cal E_2(\rho)$), then they also have the same effect when acting on a subsystem of a larger composite system ($\cal E_1=\cal E_2$). At first glance one would expect this to hold, but I cannot prove it. What I found out myself: Every Hermitean operator can be decomposed into a linear combination of density operators, hence we have $\cal E_1(\rho)=\cal E_2(\rho)$ for all Hermitean $\rho$. It remains only to show that this implies $\cal E_1(\rho)=\cal E_2(\rho)$ for anti-Hermitean $\rho$ as well. Answer: Given that the $\mathcal E_i$ are linear operators, i.e., $\mathcal E_i(\alpha A + B)=\alpha \mathcal E_i(A) + \mathcal E_i(B)$ for all $A$ and $B$, it is true that $\mathcal E_1(\rho)=\mathcal E_2(\rho)$ for all $\rho\ge0$ implies that $\mathcal E_1=\mathcal E_2$. The argument goes as follows: Every operator $A$ can be written as $A=X+iY$, with $X=\tfrac12(A+A^\dagger)$ and $Y=\tfrac{-i}{2}(A-A^\dagger)$ hermitian. As you already noticed, every hermitian operator can be written as a difference of two positive operators. Using linearity, the claim follows. Note that your "counterexample" $\mathcal E_2(A) = A^\dagger$ is not linear.
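The decomposition used in the answer, splitting an arbitrary operator $A$ into Hermitian parts with $A = X + iY$, is easy to verify numerically. A small stdlib-only sketch with a 2×2 example matrix of my own choosing:

```python
def dagger(M):
    # conjugate transpose of a 2x2 matrix given as nested lists
    return [[M[j][i].conjugate() for j in range(2)] for i in range(2)]

def combine(M, N, alpha, beta):
    # alpha*M + beta*N, entrywise
    return [[alpha * M[i][j] + beta * N[i][j] for j in range(2)] for i in range(2)]

A = [[1 + 2j, 3 - 1j],
     [0 + 1j, -2 + 0j]]  # an arbitrary, non-Hermitian operator

X = combine(A, dagger(A), 0.5, 0.5)      # X = (A + A^dagger)/2
Y = combine(A, dagger(A), -0.5j, 0.5j)   # Y = -i/2 (A - A^dagger)

assert X == dagger(X) and Y == dagger(Y)  # both parts are Hermitian
assert combine(X, Y, 1, 1j) == A          # and A = X + iY
```

Each Hermitian part in turn splits into a difference of positive operators (e.g., via its spectral decomposition), so knowing a linear map on density operators fixes it on all operators, which is exactly the answer's argument.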
{ "domain": "physics.stackexchange", "id": 18448, "tags": "quantum-mechanics, quantum-information" }
Why are nausea and dizziness such common side effects from medication?
Question: Why are nausea and dizziness such common side effects from medication? If you go through your medicine cabinet and look at side effects, those might just be on every single bottle. Is there some system in the human body that is very sensitive to any disturbance and causes nausea and dizziness? Answer: Some medications, because they are taken orally, do cause nausea. Many medications, because they have an effect on blood pressure or are central-acting in some way, do cause dizziness. However, nausea and dizziness in particular are two highly subjective symptoms which are difficult to quantify or verify, and which can occur with simple anxiety or stress. Several more such highly subjective and difficult-to-verify symptoms are numbness, tingling, headache, insomnia, fatigue and difficulty concentrating, all of which can be brought on by stress. Switching for a moment to a class of drugs with a measurable side effect: beta-blockers can cause erectile dysfunction (ED). In a study of men treated with beta-blockers, the patients were separated into three groups with the following results:

a) the group who were not told the name of the medicine nor informed of the ED side effect had the lowest incidence of ED (3.1%)
b) the group who were told the name of the medicine but not informed about the ED side effect had a 15.6% incidence of ED
c) the group who were told the name of the medicine and informed of the ED side effect had the highest incidence of ED (31.2%)

Hypervigilance is an increase in attention to bodily cues or symptoms for any reason. It has been well documented in many studies that if you give one group of people a list of possible negative side effects, and another group a list of possible beneficial effects, then give both groups placebos (usually a small coated sugar or cornstarch tablet), the first group will experience negative effects while the second will feel better.
The negative expectation producing a negative side effect is called the nocebo effect (the opposite of the placebo effect). Interestingly, in recent studies, administering anti-anxiety medications beforehand dramatically decreased the nocebo effect. For ethical reasons, in all clinical trials of drugs, patients in the treatment group and the placebo group must be given a standardized, extensive list of possible negative side effects, introducing a negative expectation into both populations. Two of the most common negative side effects in both groups are dizziness and nausea. Therefore, they must be reported as possibly occurring side effects of the medication. This is not to say we're all imagining things. Nocebo effects have been correlated with reproducible effects on functional MRIs. They have a basis in reality, even if it's caused by negative expectations. It's a problem being discussed a lot now by medical ethicists.

The influence of the nocebo effect in clinical trials
Avoiding Nocebo Effects to Optimize Treatment Outcome
{ "domain": "biology.stackexchange", "id": 3396, "tags": "medicine" }
All of Clojure's Expression Threading Macros
Question: I was bored and in a mood to write some macros, so I decided as an exercise to try and remake each of the standard threading macros: ->, ->>, some->, some->>, as->, cond->, cond->>, and doto. doto doesn't seem to be considered a "threading macro", but it's very close to the same idea, so I wrote an implementation of it as well. Usage examples of each:

(my-> 1 (+ 2) (* 3) (- 4) (/ 5))
=> 1

(my->> 1 (+ 2) (* 3) (- 4) (/ 5))
=> -1

(my-some-> 1 (+ 2) (* 3) (println 4) (/ 5))
9 4
=> nil

(my-some->> 1 (+ 2) (* 3) (println 4) (/ 5))
4 9
=> nil

(my-as-> 1 a (+ a 2) (* 3 a) (- a 4) (/ a 5))
=> 1

(let [n 10]
  (my-cond-> []
    (odd? n) (conj "odd")
    (even? n) (conj "even")
    (zero? n) (conj "zero")
    (pos? n) (conj "positive")))
=> ["even" "positive"]

(let [n 10]
  (my-cond->> [1 2 3 4 5]
    (odd? n) (map #(* % 2))
    (even? n) (map #(* % 3))
    (zero? n) (map #(* % 4))
    (pos? n) (map #(* % 5))))
=> (15 30 45 60 75)

(my-doto (Object.)
  (println "A")
  (println "B"))
#object[java.lang.Object 0x7f681f0f java.lang.Object@7f681f0f] A
#object[java.lang.Object 0x7f681f0f java.lang.Object@7f681f0f] B
=> #object[java.lang.Object 0x7f681f0f "java.lang.Object@7f681f0f"]

Nearly all of them ended up being simple reductions. I looked over the core's definitions, and I find them to be "overly explicit". They're only defined 1/4 of the way down the core though, so that might be contributing to what options he/they had available. I honestly find my versions to be more readable, but I'm sure there's things that can be improved on. My main concerns are: My versions don't handle metadata. I don't manually deal with metadata very often, so I may be missing something, but I don't see why meta information would need to be transferred. What metadata does the form itself carry? I would think any relevant data would be attached to the objects inside the form. Anything to improve the cond parts. I'm not very happy with the generalized version's length, and it's kind of ugly. The need for prev-sym is unfortunate.
If there's a way to get rid of it, I'd like to hear it. It's also unfortunate that I need to call vec on each var-arg list prior to giving them to my-general-cond. Since my-general-cond is a plain function, the var-arg list will be evaluated as forms prior to them being passed in, leading to weird errors. I could fix this by making it a macro, but it doesn't need to be a macro, so I'd rather not make it one. Any other changes you think would help!

(ns macros.expr-threading)

(defn- ensure-wrapped [expr]
  (if (list? expr)
    expr
    (list expr)))

(defn- insert-first [arg form]
  (let [[f & args] (ensure-wrapped form)]
    (apply list f arg args)))

(defn- insert-last [arg form]
  (let [w-form (ensure-wrapped form)]
    (concat w-form (list arg))))

(defmacro my-> [expr & forms]
  (reduce insert-first expr forms))

(defmacro my->> [expr & forms]
  (reduce insert-last expr forms))

(defmacro my-as-> [expr sym & forms]
  (reduce (fn [prev form]
            `(let [~sym ~prev]
               ~form))
          expr
          forms))

(defn- my-general-some [macro-sym expr forms]
  (reduce (fn [prev form]
            `(when-let [res# ~prev]
               (~macro-sym res# ~form)))
          expr
          forms))

(defmacro my-some-> [expr & forms]
  (my-general-some 'my-> expr forms))

(defmacro my-some->> [expr & forms]
  (my-general-some 'my->> expr forms))

(defn- my-general-cond [macro-sym expr clause-pairs]
  (let [prev-sym (gensym)]
    (my->> clause-pairs
           (partition 2)
           (reduce (fn [prev [pred-expr form]]
                     `(let [~prev-sym ~prev]
                        (if ~pred-expr
                          (~macro-sym ~prev-sym ~form)
                          ~prev-sym)))
                   expr))))

(defmacro my-cond-> [expr & clause-pairs]
  (my-general-cond 'my-> expr (vec clause-pairs)))

(defmacro my-cond->> [expr & clause-pairs]
  (my-general-cond 'my->> expr (vec clause-pairs)))

(defmacro my-doto [expr & forms]
  (let [sym (gensym)
        alt-forms (map #(insert-first sym %) forms)]
    `(let [~sym ~expr]
       (do ~@alt-forms
           ~sym))))
(defmacro it-> [expr & forms]
  `(let [~'it ~expr
         ~@(interleave (repeat 'it) forms)]
     ~'it))

Usage:

(it-> 1
      (inc it)                               ; thread-first or thread-last
      (+ it 3)                               ; thread-first
      (/ 10 it)                              ; thread-last
      (str "We need to order " it " items.")) ; middle of 3 arguments
;=> "We need to order 2 items."

I can see that simply nesting the let forms instead of having one long let expression might be simpler. There is more documentation here.
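The observation that these macros are "simple reductions" carries over to other languages, too. Here's a runtime (value-level, rather than macro-level) analogue of my-> in Python, assuming each form is written as a (function, extra_args...) tuple; this is my own sketch, not part of the review:

```python
from functools import reduce
import operator as op

def thread_first(value, *forms):
    # insert the accumulated value as the first argument of each form,
    # exactly the shape of (reduce insert-first expr forms)
    return reduce(lambda acc, form: form[0](acc, *form[1:]), forms, value)

# mirrors (my-> 1 (+ 2) (* 3) (- 4) (/ 5)) => 1
result = thread_first(1, (op.add, 2), (op.mul, 3), (op.sub, 4), (op.truediv, 5))
assert result == 1
```

The macro versions differ in that they rewrite code at compile time, so they work with any expression rather than only function calls, but the underlying reduce shape is the same.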
{ "domain": "codereview.stackexchange", "id": 30940, "tags": "clojure, macros" }
Why is the step to download the KinectSensor binaries needed?
Question: Are these not available as Debian packages? I see a few Debian ros-indigo packages relating to kinect, but I'm not sure which ones are needed. Originally posted by James Puderer on ROS Answers with karma: 36 on 2015-04-15 Post score: 0 Answer: It depends on which Kinect you have. Some of the newer Kinects need these different drivers. Originally posted by tfoote with karma: 58457 on 2015-04-24 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 21451, "tags": "turtlebot" }
Please give me a single sentence explaining why faster-than-light morse code through entanglement isn't possible?
Question: The problem with past explanations is that they emphasize the need for a random choice of measurement angle, but my understanding is that this was only necessary in experiments seeking to remove any possible loopholes from "hidden variable" approaches. So, you emit entangled particles to Alice and Bob. Alice measures her particle at a farther distance from the source than Bob and always measures whether the spin is down. Bob measures either up or at 60 degrees. Assuming that Alice can't detect any difference in her measurements depending on Bob's choice, why not? (Shouldn't there be a change in the correlation between particles and so in the share of particles Alice measures as up?) (P.S., if the answer is that entanglement ends once Bob measures, am I wrong in assuming the state of Alice's particle is then no longer described by the wave function?) Answer: Suppose Alice and Bob measure entangled spins in the state \begin{align} \left|\psi\right> = \frac{1}{\sqrt{2}}\left(\left|\uparrow\downarrow\right> + \left|\downarrow\uparrow\right> \right) \end{align} Alice always measures $S_z$. Bob measures either $S_z$ or \begin{align} S_{B} = \frac{1}{2}S_x + \frac{\sqrt{3}}{2}S_z \end{align} If Bob chooses to measure $S_z$, then Alice will detect spin up or spin down each with 50% probability. (I take it you accept this claim without contention.) She will detect the same thing if Bob instead chooses to measure $S_B$. In this case, Bob will detect one of the eigenstates of $S_B$ each with 50% probability. If Bob measures spin "up" along the $60^{\circ}$ axis, Alice will measure spin up or spin down along the $z$ axis with probabilities $P(\uparrow|S_{B,+})\approx 93.3\%$ and $P(\downarrow|S_{B,+})\approx 6.7\%$, respectively. If Bob measures spin "down" along his axis, the probabilities are the opposite: Alice gets spin up with probability $P(\uparrow|S_{B,-})\approx 6.7\%$ and spin down with probability $P(\downarrow|S_{B,-})\approx 93.3\%$.
Alice's probabilities to detect either spin up or spin down are \begin{align} P(\uparrow) &= P(\uparrow|S_{B,+})P(S_{B,+}) + P(\uparrow|S_{B,-})P(S_{B,-})\\ &= (0.933)(0.5) + (0.067)(0.5)\\ &= 0.5 \end{align} or \begin{align} P(\downarrow) &= P(\downarrow|S_{B,+})P(S_{B,+}) + P(\downarrow|S_{B,-})P(S_{B,-})\\ &= (0.067)(0.5) + (0.933)(0.5)\\ &= 0.5 \end{align} respectively. So the outcome for Alice is the same regardless of Bob's choice of axis.
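As a numerical sanity check of this no-signalling bookkeeping (a sketch using NumPy, not part of the original answer), the code below builds the two-spin state, lets Bob measure either $S_z$ or the tilted operator, and sums the joint probabilities to obtain Alice's marginal chance of spin up. Bob's choice leaves no trace in Alice's statistics:

```python
import numpy as np

up, dn = np.array([1.0, 0.0]), np.array([0.0, 1.0])
# entangled state (|ud> + |du>)/sqrt(2); Alice owns the first factor
psi = (np.kron(up, dn) + np.kron(dn, up)) / np.sqrt(2)

sx = np.array([[0.0, 1.0], [1.0, 0.0]]) / 2   # spin operators (hbar = 1)
sz = np.array([[1.0, 0.0], [0.0, -1.0]]) / 2

def alice_p_up(bob_op):
    """Alice's marginal P(spin up) when Bob measures bob_op on his spin."""
    _, vecs = np.linalg.eigh(bob_op)
    total = 0.0
    for b in range(2):                        # sum over Bob's two outcomes
        bvec = vecs[:, b]
        proj = np.kron(np.outer(up, up), np.outer(bvec, bvec))
        total += psi @ proj @ psi             # joint P(Alice up, Bob gets b)
    return total

p_if_z = alice_p_up(sz)                                   # Bob measures S_z
p_if_tilted = alice_p_up(0.5 * sx + np.sqrt(3) / 2 * sz)  # Bob's other axis
# both come out 0.5, whatever axis Bob picks
```

This is just the statement that Alice's reduced density matrix is maximally mixed for any maximally entangled state, so her marginal is ½ no matter which observable Bob measures.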
{ "domain": "physics.stackexchange", "id": 93014, "tags": "quantum-mechanics, quantum-entanglement, faster-than-light" }
Why do dianions (such as malonate) bind cations more strongly than anions?
Question: Why does a dianion (such as malonate) bind cations more strongly than its equivalent anion (acetate)? Is it simply because of the proximal availability of another $\ce{O-}$ group that can bind to cations? Does the second $\ce{COO-}$ group on malonate distribute charge in a better way that can allow for stronger binding? Answer: This is known as the chelate effect. The main reason why you observe this is that cations in solution have an ordered solvent shell around them, especially in polar solvents where there will be defined solvent geometry around the shell; often octahedral for metal ions in water. The formation of a complex with a bidentate ligand such as malonate is more favourable than that with a monodentate ligand such as acetate because the same number of solvent ligands are lost with half the number of ligands complexed. This results in a greater increase in entropy - you can see this in terms of the degrees of freedom gained by the water molecules more than compensating for those lost by the new ligands. Is it simply because of the proximal availability of another $\ce{O-}$ group that can bind to cations ? This is likely also a factor, although in most cases less important. In the case of the monodentate ligands there is a small energetic cost associated with bringing the anionic oxygen centres close to complex the metal centres. In the case of the bidentate ligand, some of this cost is "already paid" in the enthalpy of formation of the ligand - it is preorganized. This contribution becomes dominant for macrocyclic ligands.
{ "domain": "chemistry.stackexchange", "id": 3084, "tags": "organic-chemistry, physical-chemistry, bond, intermolecular-forces" }
Publish Joint position step by step
Question: Greetings, I need some advice about how to perform my robot simulation. I have a node that subscribes to a topic and uses the subscribed data to create a vector of all the angles that a joint has to pass through to reach the final joint angle. I have three different joints; which is the best message to publish that contains the three joint angles? How can I publish them step by step? How can I make a second node that subscribes to these joint angles? Thank you Originally posted by nikkk on ROS Answers with karma: 13 on 2017-01-22 Post score: 0 Answer: You need to create a URDF description of your robot, with links and joints for any moving or fixed relationships between the various reference frames. Then you should launch robot_state_publisher and joint_state_publisher to supply frame transformations as requested. In particular, you should configure joint_state_publisher to listen for the topics on which you will publish the joint states. Presumably you'll either write a custom node or use a package like moveit to calculate and publish the required joint angles. To actually perform the joint movements, you'll need a custom node that listens for the joint states and talks to your hardware, unless there is an appropriate package already available. You should probably read these: URDF tutorials: read the tf tutorial and then 2.1, 2.2, 2.4, and 4.5. robot_state_publisher documentation joint_state_publisher documentation Originally posted by Mark Rose with karma: 1563 on 2017-01-25 This answer was ACCEPTED on the original site Post score: 5
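The "vector of angles up to the final joint angle" part can be sketched independently of ROS (plain Python; the function name and step count are made up for illustration). In practice each resulting angle vector would then go out one message at a time, e.g. as a sensor_msgs/JointState:

```python
def joint_steps(start, goal, n_steps):
    """Linearly interpolate each joint angle from start to goal,
    returning one angle vector per step (endpoints included)."""
    return [
        [s + (g - s) * i / n_steps for s, g in zip(start, goal)]
        for i in range(n_steps + 1)
    ]

# e.g. three joints moving from zero to (0.4, 0.8, 1.2) rad in 4 steps
trajectory = joint_steps([0.0, 0.0, 0.0], [0.4, 0.8, 1.2], 4)
```

A publishing loop would then iterate over `trajectory`, fill in the message's `name` and `position` fields, and sleep between publishes to pace the motion.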
{ "domain": "robotics.stackexchange", "id": 26799, "tags": "ros, topic, publisher, messages, node" }
Question about the value of angular displacement
Question: Can angular displacement be greater than $2\pi$ radians? Is $5760$ radians correct here or should we do something like this: $\frac{5760}{2\pi} = 916.732472209$ $0.732472209 \cdot 2\pi = 4.60225862151$ I was asked this in an exam and thought we should do it the way I did here. Am I right? Edit: I'm also thinking that angular displacement cannot be bounded, since other quantities like angular velocity and angular acceleration are derived from it, and bounding it would somehow bound them too. Answer: If you were asked for the angular displacement of a point on wheel 4, you might want to subtract out the integer number of revolutions. For the wheel itself, I would think that the 5760 radians should be acceptable.
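The reduction carried out in the question can be checked in a couple of lines (a plain Python sketch): divmod splits the displacement into whole revolutions plus the equivalent angle within one turn.

```python
import math

theta = 5760.0                      # total angular displacement, radians
revs, frac = divmod(theta / (2 * math.pi), 1.0)
equivalent = frac * 2 * math.pi     # same orientation, within one turn

# revs == 916 full revolutions, equivalent ~= 4.602 rad,
# matching the numbers worked out in the question
```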
{ "domain": "physics.stackexchange", "id": 85108, "tags": "homework-and-exercises, kinematics, angular-velocity, displacement" }
How deep is the permafrost in the Antarctic?
Question: I tried in vain to find the answer to this question on the web, but all it would tell me was, "it is very deep", and "it is known as a thaw line rather than a frost line in the arctic and antarctic". How far down would you have to live to escape the permafrost? 100 ft? 1000 ft? 10000 ft? How feasible would an underground base be? Answer: According to "Permafrost, active-layer dynamics and periglacial environments of continental Antarctica", South African Journal of Science 98, pages 82-90: Only 25% of Antarctica has permafrost, as the material beneath thick ice sheets is not permafrost. The deepest permafrost occurs where there is no ice sheet. The deepest permafrost in the Antarctic is about 1000 m.
{ "domain": "earthscience.stackexchange", "id": 1559, "tags": "ice, ice-sheets, antarctica, permafrost" }
What chemicals used every day will cause an explosive reaction if exposed to outer space?
Question: What chemicals used every day will cause an explosive reaction if exposed to outer space? Answer: One example of an everyday chemical that will most probably cause an explosion if exposed to outer space is liquid water at temperatures significantly above 300 K when held in an inappropriate container. This phenomenon is called a BLEVE (boiling liquid expanding vapor explosion). But be aware that this is not a chemical reaction.
{ "domain": "chemistry.stackexchange", "id": 4972, "tags": "reactivity, explosives, atmospheric-chemistry" }
Are synthetically-produced diamonds as hard as natural diamonds?
Question: I was having a discussion with my friend about the intrinsic worthlessness of diamonds (DeBeers and whatnot) and how synthetic diamonds haven't caught on, again because of the marketing/propaganda that natural diamonds are "better". My friend, partially to spite me because he doesn't like to lose an argument, claims that synthetic diamonds have only been produced with a hardness up to 9.8, and you can only get a hardness of 10 with a natural diamond. I call BS on that. A little research indicates that while natural diamonds may slightly vary in hardness (the impurities which alter the color also modify its crystal properties), synthetic diamonds have a more consistent hardness which is otherwise identical to a naturally-occurring diamond with the same chemical properties. This appears to make sense. Are chemically-identical synthetic and natural diamonds of equivalent hardness? Or is there a caveat to the synthetic diamond-making process that produces weaker diamonds? Answer: The laboratory-made diamonds are as good as the naturally found ones. It is the same crystal structure. They are not used much as gemstones (2% of the market) because of the objections of the diamond industry, which relies on mined diamonds and dominates the markets. Gem-quality diamonds grown in a lab can be chemically, physically and optically identical (and sometimes superior) to naturally occurring ones. The mined diamond industry has undertaken legal, marketing and distribution countermeasures to protect its market from the emerging presence of synthetic diamonds. Man-made diamonds can be distinguished by spectroscopy in the infrared, ultraviolet, or X-ray wavelengths. The DiamondView tester from De Beers uses UV fluorescence to detect trace impurities of nitrogen, nickel or other metals in HPHT or CVD diamonds.
{ "domain": "physics.stackexchange", "id": 11869, "tags": "material-science, crystals" }
Does Reverse Polish Notation have an LL grammar?
Question: Let L be the language of all arithmetic expressions written in Reverse Polish Notation, containing only binary operators. $\Sigma(L) = \{n, o\}$, n := number, o := operator. Is there an LL grammar G so that L(G) = L? Answer: Yes, $ G = (\{E,S,R\}, \{n,o\}, P, E)\text{, with productions}\\ E \rightarrow n\ |\ SR \\ S \rightarrow nEo \\ R \rightarrow EoR\ |\ \epsilon \\ $ Proof: Observation $(*)$: Each word of L represents an arithmetic expression, the result of calculating such an expression is essentially again a number. So whenever we have $w \in L$, we can treat $w$ as a number. Theorems: $L(G) \subset L$ $L(S) \subset L$ $w \in L(R) \implies w=\epsilon \lor (w=w_1 o, w_1 \in L)$ Inductive proof of above theorems on derivation step count $n$: Basis $n=1$ $E \Rightarrow n\\ \land \\ n \in L $ $S\text{ can't generate any words in one step. }\\ \land \\ \emptyset \subset L $ $R \Rightarrow \epsilon, w=\epsilon$ Basis $n=2$ In all 3 cases nothing can be generated in 2 steps, and the theorems hold trivially. Inductive step: $n=k>1,$ induction hypothesis (IH): assume theorems hold for $n<k$ $E \Rightarrow SR \Rightarrow^{k-1} w_1 w_2\, \\ \text{where }w_1 \in L\text{ because of IH of theorem 2,} \\ \text{and } w_2 = \epsilon\text{ or }w_2 = w_3 o, w_3 \in L\text{ because of IH of theorem 3.}$ First case: $w_2 = \epsilon$, then $w_1 w_2 = w_1 \in L$. Second case: $w_1 w_2 = w_1 w_3 o =^{(*)} nno$, which is a valid RPN expression, thus $w_1 w_3 o \in L$ $S \Rightarrow nEo \Rightarrow^{k-1} nwo$, where $w \in L$ because of IH of theorem 1. $nwo =^{(*)} nno$, which is a valid RPN expression, and thus $n w o \in L$ $R \Rightarrow EoR \Rightarrow^{k-2} w_1 o R$, where $w_1 \in L$ because of IH of theorem 1. The last step can only be $w_1 o R \Rightarrow w_1 o \epsilon$, which satisfies theorem 3. 
$\square$ Theorem: $L \subset L(G)$ Proof: induction on length of $w \in L$, show that $E \Rightarrow^* w$ (I got lazy from this point on, I might fill this in later) $\square$ $(1) \land (2) \implies L(G) = L$ Theorem: G is LL(k) grammar, $k \in \mathbb{N}$ Proof: Pick a k large enough, then show that for each non-terminal you can decide which production rule to use by looking at the k next terminals. $\square$ $\square$
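As a side note (my sketch, not part of the original proof), observation $(*)$ suggests a simple membership test for $L$: scan the word keeping a count of available operands. Words derived from $G$ should all pass it:

```python
def in_rpn_language(word: str) -> bool:
    """True iff word is a valid RPN expression over numbers 'n' and
    binary operators 'o': every operator needs two operands available,
    and exactly one value must remain at the end."""
    depth = 0
    for tok in word:
        if tok == 'n':
            depth += 1              # a number pushes one operand
        elif tok == 'o':
            if depth < 2:           # a binary operator pops two...
                return False
            depth -= 1              # ...and pushes its result
        else:
            return False
    return depth == 1
```

This checker recognizes $L$ directly; the grammar proof above shows the same language can also be generated with an LL grammar.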
{ "domain": "cs.stackexchange", "id": 1944, "tags": "context-free" }
Where does the energy required to speed the object in this question come from if the object actually slows down from another perspective?
Question: I was thinking about mechanics and energy and stumped myself with this scenario I made up: Suppose you are in a room and you are motionless. In the same room, a 1 kg ball is moving to your right with a speed of 1 m/s. You arrest the ball's motion, so the ball loses its 0.5 J of kinetic energy and also becomes motionless. However, I lied. The room is actually in space, and is going to my left at a speed of 100 m/s, and the ball is also going to the left from my perspective, but slightly slower at 99 m/s. From my perspective, the ball gains kinetic energy, despite the opposite being true from your perspective. How is this possible? Answer: Try arresting the ball's motion while standing on a patch of ice. What happens is that the ball and the floor come to relative rest. And any force exerted on the ball to slow it down or speed it up has an equal and opposite force exerted on the thing accelerating it. You can analyze it in any frame and get a correct description (though kinetic energy depends on frame). You'll notice in the frame where the room was moving that the room slows down and hence loses some kinetic energy. Same in the center of momentum frame and the center of mass frame. In the frame where the room was initially at rest, the room actually gains some kinetic energy. In every frame there is some relative motion. So there is some kinetic energy. That kinetic energy can get shared. And indeed, after the interaction the kinetic energy of each object (room and ball) has changed, but the total energy is conserved (though for a realistic situation with friction some energy has turned into heat and isn't readily available for work any more).
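The frame bookkeeping can be made concrete with a perfectly inelastic "catch" between the ball and the room (a sketch; the 1000 kg room mass is an invented number). Momentum conservation fixes the common final velocity, and the kinetic energy converted to heat comes out the same in both frames, even though the ball's own kinetic energy drops in one frame and rises in the other:

```python
m_ball, m_room = 1.0, 1000.0   # kg; the room mass is an assumption

def ke(m, v):
    return 0.5 * m * v * v

def catch(v_ball, v_room):
    """Ball and room reach a common velocity (momentum conserved);
    return the total kinetic energy lost to heat."""
    v_final = (m_ball * v_ball + m_room * v_room) / (m_ball + m_room)
    before = ke(m_ball, v_ball) + ke(m_room, v_room)
    after = ke(m_ball + m_room, v_final)
    return before - after

loss_room_frame = catch(1.0, 0.0)        # room frame: ball at 1 m/s
loss_space_frame = catch(-99.0, -100.0)  # same event seen from 'space'
# both losses are ~0.4995 J: the heat generated is frame-independent,
# while each object's individual kinetic energy change is not
```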
{ "domain": "physics.stackexchange", "id": 26089, "tags": "energy, velocity, work, inertial-frames, relative-motion" }
Tritium website
Question: I am working on my Tritium website (hosted on Github: feel free to fork and send pull requests), and now am looking for a review of it. If you are looking to critique the design of the website, please visit the sister question over on Graphic Design. Here is what I would like reviewed: Organization - Can you find where something should be easily? Stale code - Is there code that isn't in use? Modern conventions - Am I using the latest and greatest with HTML5 and CSS3? Readability - Is the code formatted in a way that is compliant with today's standards? Performance - Can I speed up any aspect of my code? Portability - Websites can be hard to support for mobile devices. Is there anything I can improve so that these devices can have just as good of an experience? Also, I noticed that the animation of my atom is a bit laggy on those types of platforms; how can I fix that? User Experience - What do you find annoying and how would you fix it? Those are just some suggestions though. Please feel free to comment on any aspect of the code. No matter how small that aspect may be, I consider it important. 
index.html: <!DOCTYPE html> <html lang="en"> <head> <meta charset="utf-8"> <meta name="viewport" content="width=device-width, initial-scale=1.0"> <meta name="description" content=""> <meta name="author" content=""> <title>Tritium</title> <link rel="icon" href="img/tritium-logo.png" sizes="256x227" type="image/png"> <link rel="icon" href="img/tritium-logo.svg" sizes="any" type="image/svg+xml"> <!-- Bootstrap Core CSS --> <link rel="stylesheet" href="https://netdna.bootstrapcdn.com/bootstrap/3.0.3/css/bootstrap.min.css" type="text/css"> <!-- Fonts --> <link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/font-awesome/4.1.0/css/font-awesome.min.css" type="text/css"> <link rel="stylesheet" href="https://fonts.googleapis.com/css?family=Roboto+Slab:300,500,300italic,500italic" type='text/css'> <link rel="stylesheet" href="https://fonts.googleapis.com/css?family=Montserrat:400,700" type='text/css'> <!-- Custom Theme CSS --> <link rel="stylesheet" href="css/tritium.css" type='text/css'> </head> <?php flush(); ?> <body id="page-top" data-spy="scroll" data-target=".navbar-custom"> <nav class="navbar navbar-custom navbar-fixed-top" role="navigation"> <div class="container"> <div class="navbar-header page-scroll"> <a class="navbar-brand" href="#page-top"> <span class="light">Tritium</span> </a> </div> <!-- Collect the nav links, forms, and other content for toggling --> <div class="collapse navbar-collapse navbar-right navbar-main-collapse"> <ul class="nav navbar-nav"> <!-- Hidden li included to remove active class from about link when scrolled up past about section --> <li class="hidden"> <a href="#page-top"></a> </li> <li class="page-scroll"> <a href="#about">About</a> </li> <li class="page-scroll"> <a href="#contact">Contact</a> </li> </ul> </div> <!-- /.navbar-collapse --> </div> <!-- /.container --> </nav> <section id="intro" class="intro"> <div class="intro-body"> <div class="container"> <div class="row"> <div class="col-md-8 col-md-offset-2"> <img 
src="img/tritium-logo.svg" height="100" width="100" /> <h1 class="brand-heading">Tritium</h1> <p class="intro-text">A free, premium quality speech synthesis engine written completely in C.</p> <a href="#download-box" class="btn btn-default btn-lg download-window">Visit Download Page</a> </div> </div> <svg xmlns="http://www.w3.org/2000/svg" width="434.78391px" height="436.34589px" version="1.1"> <g transform="matrix(5.7971186,0,0,5.7971186,217.39195,218.95394)"> <g id="a1" transform="matrix(0.76604444,0.64278761,-0.64278761,0.76604444,0,0)"> <circle cx="0" cy="5" r="4" style="fill:#333" /> <circle cx="4.3299999" cy="-2.5" r="4" style="fill:#333" /> <circle cx="-4.3299999" cy="-2.5" r="4" style="fill:#333" /> </g> <circle cx="0" cy="0" r="37" style="fill:none;stroke:#333;stroke-width:1" /> <g id="a2" transform="matrix(-0.93969262,0.34202014,-0.34202014,-0.93969262,0,0)"> <circle cx="0" cy="37" r="2" style="fill:#333" /> </g> </g> </svg> </div> </div> </section> <section id="about" class="container content-section text-center"> <div class="row"> <div class="col-lg-8 col-lg-offset-2"> <h2>About Tritium</h2> <p>Tritium is a compact, fast run-time synthesis engine primarily designed for both small embedded systems and large servers.</p> <p>One use for Tritium is as an <a href="https://en.wikipedia.org/wiki/Tritium#Controlled_nuclear_fusion">important fuel for controlled nuclear fusion.</a> Based off of that idea, Tritium was created to fuel other projects with a simple, yet powerful, speech sythesizer.</p> </div> </div> </section> <section id="contact" class="container contact-section text-center"> <div class="contact"> <div class="col-lg-8 col-lg-offset-2"> <h2>Contact</h2> <p>Feel free to contact me to provide some feedback on the software, give suggestions for new features and improvements, or to just say hello!</p> <ul class="list-inline banner-social-buttons"> <li class="bitcon"> <a href="https://codereview.stackexchange.com/users/27623/syb0rg" <i class="fa 
fa-stack-exchange fa-4x"></i></a> </li> <li class="twitter"> <a href="https://twitter.com/syb0rg" <i class="fa fa-twitter fa-4x"></i></a> </li> <li class="github"> <span class="logo"></span> <a href="https://github.com/syb0rg" <i class="fa fa-github fa-4x"></i></a> </li> </ul> </div> </div> </section> <div id="download-box" class="download-popup text-center"> <a href="#" class="close"><i class="fa fa-times btn-close"></i></a> <h2>Please Donate!</h2> <p>As a college student, I have many bills to pay and therefore not a lot of time to spend on development. Help me change that by donating a bit of money. Every little bit helps!</p> <ul class="list-inline banner-social-buttons"> <li class="paypal"> <span class="logo"></span> <a href="https://www.paypal.com/cgi-bin/webscr?cmd=_donations&business=WABVCTSPENJFJ&lc=US&item_name=Tritium&currency_code=USD&bn=PP%2dDonationsBF%3abtn_donate_SM%2egif%3aNonHosted" <i class="fa fa-pied-piper fa-4x"></i></a> </li> <li class="bitcoin" data-address="1EYDqUKuLKF9di1pSEtEfNA8pj2CgL7Wne"> <a data-code="717ff45d85b1cc7cdd6752e618dd6d62" data-button-style="custom_large" href="https://coinbase.com/checkouts/717ff45d85b1cc7cdd6752e618dd6d62"><i class="fa fa-bitcoin fa-4x"></i></a> </li> <li class="gittip"> <span class="logo"></span> <a href="https://www.gittip.com/syb0rg/" <i class="fa fa-gittip fa-4x"></i></a> </li> </ul> </div> <!-- Core JavaScript Files --> <script src="https://ajax.googleapis.com/ajax/libs/jquery/1.10.2/jquery.min.js"></script> <script src="https://netdna.bootstrapcdn.com/bootstrap/3.0.3/js/bootstrap.min.js"></script> <script src="https://cdnjs.cloudflare.com/ajax/libs/jquery-easing/1.3/jquery.easing.min.js"></script> <script src="https://coinbase.com/assets/button.js" type="text/javascript"></script> <!-- Custom Theme JavaScript --> <script src="js/tritium.js"></script> </body> <script> (function(i, s, o, g, r, a, m) { i['GoogleAnalyticsObject'] = r; i[r] = i[r] || function() { (i[r].q = i[r].q || []).push(arguments) 
}, i[r].l = 1 * new Date(); a = s.createElement(o), m = s.getElementsByTagName(o)[0]; a.async = 1; a.src = g; m.parentNode.insertBefore(a, m) })(window, document, 'script', '//www.google-analytics.com/analytics.js', 'ga'); ga('create', 'UA-50973307-2', 'auto'); ga('send', 'pageview'); </script> </html> tritium.css: body { width: 100%; height: 100%; font-family:"Roboto Slab", "Helvetica Neue", Helvetica, Arial, sans-serif; color: #fff; background-color: #171d25; -webkit-backface-visibility:hidden; } html { width: 100%; height: 100%; } h1, h2, h3, h4, h5, h6 { margin-bottom: 35px; text-transform: uppercase; font-family: Montserrat, "Helvetica Neue", Helvetica, Arial, sans-serif; font-weight: 700; letter-spacing: 1px; } p { margin: 0 0 25px; font-size: 18px; line-height: 1.5; } svg { z-index: -1; } @media(min-width:767px) { p { margin: 0 0 35px; font-size: 20px; line-height: 1.6; } } a { color: transparent; color: #6e6e6e; -webkit-transition: all .2s ease-in-out; -moz-transition: all .2s ease-in-out; transition: all .2s ease-in-out; } a:hover, a:focus { text-decoration: none; color: #78C40F; } svg { position: absolute; right: 0; bottom: -100px; width: 250px; height: 436.34589px; } .light { font-weight: 400; } .navbar { margin-bottom: 0; border-bottom: 1px solid rgba(255, 255, 255, .3); text-transform: uppercase; font-family: Montserrat, "Helvetica Neue", Helvetica, Arial, sans-serif; background-color: #000; } .navbar-brand { font-weight: 700; } .navbar-brand:focus { outline: 0; } .navbar-custom a { color: #fff; } .navbar-custom .nav li a { -webkit-transition: background .3s ease-in-out; -moz-transition: background .3s ease-in-out; transition: background .3s ease-in-out; } .navbar-custom .nav li a:hover, .navbar-custom .nav li a:focus, .navbar-custom .nav li.active { outline: 0; background-color: rgba(255, 255, 255, .2); } .navbar-toggle { padding: 4px 6px; font-size: 16px; color: #fff; } .navbar-toggle:focus, .navbar-toggle:active { outline: 0; } 
@media(min-width:767px) { .navbar { position: fixed; margin-left: 0px; margin-right: 0px; padding: 20px 0; border-bottom: 0; letter-spacing: 1px; background: 0 0; -webkit-transition: background .4s ease-in-out, padding .4s ease-in-out; -moz-transition: background .4s ease-in-out, padding .4s ease-in-out; transition: background .4s ease-in-out, padding .4s ease-in-out; } .top-nav-collapse { padding: 0; background-color: #000; } .navbar-custom.top-nav-collapse { border-bottom: 1px solid rgba(255, 255, 255, .3); } } .intro { display: table; width: 100%; height: auto; padding: 100px 0; text-align: center; color: #fff; -webkit-background-size: cover; -moz-background-size: cover; -o-background-size: cover; background-size: cover; position: relative; } .intro-body { display: table-cell; vertical-align: middle; } .brand-heading { font-size: 40px; color: #78C40F; } .intro-text { font-size: 18px; } @media(min-width:767px) { .intro { height: 100%; padding: 0; } .brand-heading { font-size: 100px; } .intro-text { font-size: 25px; } } .btn-circle { width: 70px; height: 70px; margin-top: 15px; padding: 7px 16px; border: 2px solid #fff; border-radius: 35px; font-size: 40px; color: #fff; background: 0 0; -webkit-transition: background .3s ease-in-out; -moz-transition: background .3s ease-in-out; transition: background .3s ease-in-out; } .btn-circle:hover, .btn-circle:focus { outline: 0; color: #fff; background: rgba(255, 255, 255, .1); } .page-scroll .btn-circle i.animated { -webkit-transition-property: -webkit-transform; -webkit-transition-duration: .9s; -moz-transition-property: -moz-transform; -moz-transition-duration: .9s; } .page-scroll .btn-circle:hover i.animated { -webkit-animation-name: pulse; -moz-animation-name: pulse; -webkit-animation-duration: 1.4s; -moz-animation-duration: 1.4s; -webkit-animation-iteration-count: infinite; -moz-animation-iteration-count: infinite; -webkit-animation-timing-function: linear; -moz-animation-timing-function: linear; } @-webkit-keyframes 
pulse { 0 { -webkit-transform: scale(1); transform: scale(1); } 50% { -webkit-transform: scale(1.2); transform: scale(1.2); } 100% { -webkit-transform: scale(1); transform: scale(1); } } @-moz-keyframes pulse { 0 { -moz-transform: scale(1); transform: scale(1); } 50% { -moz-transform: scale(1.2); transform: scale(1.2); } 100% { -moz-transform: scale(1); transform: scale(1); } } .content-section { padding-top: 100px; } .contact-section { padding-top: 300px; padding-bottom: 200px; } @media(min-width:767px) { .content-section { padding-top: 150px; } .contact-section { padding-top: 300px; } } .btn { text-transform: uppercase; font-family: Montserrat, "Helvetica Neue", Helvetica, Arial, sans-serif; font-weight: 400; -webkit-transition: all .3s ease-in-out; -moz-transition: all .3s ease-in-out; transition: all .3s ease-in-out; } .btn-default { border: 1px solid #78C40F; color: #78C40F; background-color: transparent; } .btn-default:hover, .btn-default:focus { border: 1px solid #78C40F; outline: 0; color: #000; background-color: #78C40F; } .banner-social-buttons { margin-top: 0; } @media(max-width:1199px) { ul.banner-social-buttons { margin-top: 15px; } } @media(max-width:767px) { ul.banner-social-buttons>li { display: inline-block; margin: 20px; } } ::-moz-selection { text-shadow: none; background: #fcfcfc; background: rgba(255, 255, 255, .2); } ::selection { text-shadow: none; background: #fcfcfc; background: rgba(255, 255, 255, .2); } img::selection { background: 0 0; } img::-moz-selection { background: 0 0; } body { webkit-tap-highlight-color: rgba(255, 255, 255, .2); } /* Mask for background, by default is not display */ #mask { display: none; background: #000; position: fixed; left: 0; top: 0; z-index: 10; width: 100%; height: 100%; opacity: 0.8; z-index: 999; } /* You can customize to your needs */ .download-popup { display: none; background: #333; padding: 10px; float: left; position: fixed; top:50%; left:50%; width:400px; /* adjust as per your needs */ 
height:400px; /* adjust as per your needs */ margin-left:-200px; /* negative half of width above */ margin-top:-200px; /* negative half of height above */ z-index: 99999; border-radius: 3px 3px 3px 3px; -moz-border-radius: 3px; -webkit-border-radius: 3px; } tritium.js: $(window).scroll(function () { // jQuery to collapse the navbar on scroll if ($(".navbar").offset().top > 50) { $(".navbar-fixed-top").addClass("top-nav-collapse"); } else { $(".navbar-fixed-top").removeClass("top-nav-collapse"); } // jQuery to animate atom SVG var s = ($(window).scrollTop() / (($("#intro").height() + $("#about").height()) - $(window).height())); var r1 = 40 + 106 * s, r2 = 160 - 100 * s; $("#a1").attr("transform", "rotate(" + r1 + ")"); $("#a2").attr("transform", "rotate(" + r2 + ")"); }); // jQuery for page scrolling feature - requires jQuery Easing plugin $(function () { $('.page-scroll a').bind('click', function (event) { var $anchor = $(this); $('html, body').stop().animate({ scrollTop: $($anchor.attr('href')).offset().top }, 1000, 'easeInOutExpo'); event.preventDefault(); }); }); $(document).ready(function () { $('a.download-window').click(function () { var a = document.createElement("a"); a.target = '_blank'; a.href = "https://github.com/syb0rg/Tritium"; var evt = document.createEvent("MouseEvents"); //the tenth parameter of initMouseEvent sets ctrl key evt.initMouseEvent("click", true, true, window, 0, 0, 0, 0, 0, true, false, false, true, 0, null); a.dispatchEvent(evt); $("body").css("overflow", "hidden"); //Getting the variable's value from a link var loginBox = $(this).attr('href'); //Fade in the Popup $(loginBox).fadeIn(500); // Add the mask to body $('body').append('<div id="mask"></div>'); $('#mask').fadeIn(300); return false; }); // When clicking on the button close or the mask layer the popup closed $('a.close, #mask').on('click', function () { $('#mask , .download-popup').fadeOut(300, function () { $('#mask').remove(); $("body").css("overflow", "auto"); }); return 
false; }); });

Answer: About your HTML

You still have a <?php flush(); ?> in your HTML code. To be on the safe side, you might want to escape & in URLs with &amp; (but it's only required for ambiguous ampersands). You have a script after the body element, which is not allowed. You are missing many > of your a start tags (i.e., you are using <a <i></i></a>). Your img containing the logo must have an alt attribute, but it should be empty in your case because the logo contains no relevant content:

<img src="img/tritium-logo.svg" height="100" width="100" alt="" />

Empty links

An empty link like <a href="#" class="close"><i class="fa fa-times btn-close"></i></a> (you have some more of these) is not accessible. And using the i element for this is not appropriate. If you really need this empty element, it should be a span instead. Side effect of these "font icon" links (which is a problem many sites have): for me, these links render as nothing but empty clickable space (the screenshot from the original answer is not reproduced here).

Outline

You don't want to have the page heading ("Tritium") in a section element. It's the heading for the whole page, so it should belong to the body sectioning root, not only to its section. As it contains no content suitable for a section, remove this first section element completely. The section with the "Please Donate!" heading should get its own section element. So removing anything not related to the outline, you would get this structure:

<body>
  <nav> </nav>
  <h1>Tritium</h1>
  <section>
    <h2>About Tritium</h2>
  </section>
  <section>
    <h2>Contact</h2>
  </section>
  <section>
    <h2>Please Donate!</h2>
  </section>
</body>

You might want to consider using a header element for the page heading and the logo (you could even include the nav in it, too):

<header> <!-- also the 'nav', if you like -->
  <img src="img/tritium-logo.svg" height="100" width="100" alt="" />
  <h1 class="brand-heading">Tritium</h1>
</header>
{ "domain": "codereview.stackexchange", "id": 9113, "tags": "javascript, jquery, html5, css3" }
rosjava for Fuerte on Ubuntu 12.04
Question: Hi all, is someone already using rosjava on Fuerte? How can I install packages like rosjava_bootstrap, etc.? Thanks in advance. Originally posted by safzam on ROS Answers with karma: 111 on 2012-10-03 Post score: 0 Answer: I am currently using rosjava and rosjava.android on a 12.04 install; the documentation can be found at Rosjava Install Documentation. That should walk you through the install process for everything, as well as get you started. Originally posted by Jyo with karma: 100 on 2012-10-03 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by safzam on 2012-10-04: Hey... thanks. I tried it yesterday from the link http://docs.rosjava.googlecode.com/hg/rosjava_core/html/installing.html but it gives "ERROR: Ambiguous workspace: ROS_WORKSPACE=/home/safdar/rosrescue, /home/safdar/my_workspace/.rosinstall". Did you also install rosjava on Fuerte on Ubuntu 12.04? Comment by safzam on 2012-10-04: Hi Jyo! I have solved the ambiguous-workspace problem. It seemed to me that all steps went correctly. Now when I rosmake a rosjava-based package I get the error [AttributeError: 'module' object has no attribute 'rosenv']. Do you know why I get this error? Comment by damonkohler on 2012-10-10: rosjava does not support rosmake. I suggest going back to the documentation and making sure you can build and run the pubsub tutorial. Comment by safzam on 2012-10-21: I get ERROR: Unzipping /home/.gradle/wrapper/dists/gradle-1.0-milestone-9-bin/7ilkmgo2rn79vvfvd51rqf17ks/gradle-1.0-milestone-9-bin.zip to /home/.gradle/wrapper/dists/gradle-1.0-milestone-9-bin/7ilkmgo2rn79vvfvd51rqf17ks Exception in thread "main" java.util.zip.ZipException: error in opening zip file Comment by damonkohler on 2012-10-23: I imagine you have a corrupt download. Delete the zip file and try again. Comment by safzam on 2012-10-23: It's fine now. But if I run ../gradlew installApp from rosjava_tutorial_pubsub it builds successfully, but I don't see any bin directory.
If I run ./build/rosjava_tutorial_pubsub/bin/rosjava_tutorial_pubsub org.ros.rosjava_tutorial_pubsub.Talker then it gives the error "Data not found". How can I run Talker? Comment by damonkohler on 2012-10-23: Please post a new question with your issue. Also, be sure to follow the rosjava tutorials: http://docs.rosjava.googlecode.com/hg/rosjava_core/html/getting_started.html#executing-nodes
{ "domain": "robotics.stackexchange", "id": 11216, "tags": "rosjava" }
Titrating iodine starch solution with sodium thiosulphate - Colour change
Question: I investigated two mixtures with different solvents, one with water and one with n-heptane. Both contained iodine $\ce{I2}$ as a solute. To both solutions I added a bit of starch. As I remember, this resulted in a colour change: the solution turned from yellowish to dark blue (if I remember correctly!). Now according to Wikipedia, starch and iodine indeed form a structure which has a dark blue colour. But it only forms in the presence of $\ce{I^-}$. This leaves me wondering why I remember the solution being dark blue, even though I think there was no $\ce{I^-}$ present. Could it be the solution turned dark blue only after I added some sodium thiosulfate? Because in the next step I did a titration with $\ce{Na2S2O3}$. In this case I don't see which reaction could have produced the $\ce{I^-}$, though. I thought only $\ce{NaI}$ is produced after adding the sodium thiosulfate: $$\ce{I_2 + 2Na_2S_2O_3 -> 2NaI + Na_2S_4O_6} \tag{1}$$ So at which point did the solution turn dark blue, and where did the $\ce{I^-}$ come from that was needed for the formation of the starch-iodine compound? Could it be there is an intermediate step to (1) in which $\ce{I^-}$ is formed, and this $\ce{I^-}$ was used to produce the dark blue starch-iodine compound? Answer: I don't think your memory is serving you right. That is why we write everything in the notebook, especially colour changes. I think you are doing distribution experiments where iodine is distributed between an aqueous layer and an organic layer. When we add the indicator for titration, it is not solid starch but starch which has been boiled in water. So when you added starch solution to heptane which contained iodine, I would not be surprised if the starch solution turned blue. Remember that iodine is a strong oxidizing agent as well; a very small fraction of it can easily convert into iodide. You only need a trace of the triiodide ion to form the dark blue iodine complex.
{ "domain": "chemistry.stackexchange", "id": 13664, "tags": "analytical-chemistry, titration, color" }
How can I transplant old src from groovy to hydro?
Question: Hi everyone (maybe a ROS guru can help). I'm a beginner with ROS and Ubuntu, but someone in my tutor's lab who knew ROS well has graduated and moved to another university, leaving us a large amount of source code. Now I'd like to rebuild these sources, which were written for Groovy, while I'm using Hydro. Which parts of the sources should I change so that I can run their code? Originally posted by putuotingchan on ROS Answers with karma: 21 on 2014-07-22 Post score: 2 Answer: For the most part, packages that worked in Groovy should continue to work in Hydro. I would try them first without any changes, and troubleshoot any issues as they arise. If you run into specific issues or trouble compiling, you can update your question with the specific error you're seeing or ask a new question. Originally posted by ahendrix with karma: 47576 on 2014-07-22 This answer was ACCEPTED on the original site Post score: 2
{ "domain": "robotics.stackexchange", "id": 18704, "tags": "ros-groovy, ros-hydro" }
Is the spacetime referred to in Einstein's field equations, spacetime constructed with inertial coordinates?
Question: In general relativity, the curvature of spacetime is related to the presence of energy and momentum (the energy-momentum tensor) by Einstein's field equations: $$R_{\mu\nu} - \frac12 R g_{\mu\nu} = 8\pi G T_{\mu\nu}$$ where from left to right, we have the Ricci tensor, the Ricci scalar multiplying the metric, and Newton's constant $G$ acting on the energy-momentum tensor. I've been reading Carroll's intro to GR, and at the beginning he introduces inertial spacetime coordinates as the regular Cartesian coordinates for space (x, y, z), and the time coordinate is constructed by imagining clocks at every point in space, but synchronizing them such that if a light beam is fired from one clock $c_1$ to another $c_2$ and bounced back, the time read by $c_2$ when the light beam arrives is exactly half the time read by $c_1$ when the light beam returns to it. My question is this: is this the spacetime that is referred to as being "curved" by $T_{\mu\nu}$ in the EFE? Or do the EFE refer to a more general spacetime constructed by just placing clocks everywhere without synchronizing them? Answer: The EFE holds for any coordinate system. It does not have to be realizable by any set of rods and clocks. It holds in non-inertial coordinates. It holds in coordinates with any arbitrary synchronization strategy. It even holds in coordinate systems that do not have time as a coordinate. The coordinate system has only two requirements: it must be smooth, and it must be invertible. Other than that there is no restriction.
{ "domain": "physics.stackexchange", "id": 100323, "tags": "general-relativity, spacetime" }
ROS Answers SE migration: Unary stack
Question: I have successfully wrapped an external library into the ROS framework following instructions in the tutorial. However, I have used a stack and a package to achieve this. I would like to switch to using a unary stack instead. I am not sure about how to create a unary stack. I do not use CMakeLists.txt in my package for wrapping the external library, and as such the CMakeLists.txt files in the stack and package are not a problem. However, what should be done about the Makefile(s) in the stack and package in order to create a unary stack from the current stack-package? Thanks! Originally posted by Aditya on ROS Answers with karma: 287 on 2011-11-28 Post score: 2 Answer: In order to be a unary stack, just take your package and add a stack.xml file as well -- i.e. use the CMakeLists.txt and Makefile from the package. The only thing you need to add is version information to the stack.xml or CMakeLists.txt, which is described in REP 109: http://ros.org/reps/rep-0109.html#stack-version This works as of Electric. Originally posted by kwc with karma: 12244 on 2011-11-28 This answer was ACCEPTED on the original site Post score: 3 Original comments Comment by joq on 2011-11-28: Not necessarily in general. In your case of wrapping a single external library it probably makes sense. That way the package can be released by itself. Comment by Aditya on 2011-11-28: Thanks a lot! In general, is the use of unary stacks recommended?
{ "domain": "robotics.stackexchange", "id": 7447, "tags": "ros, stack" }
How do we know that C14 decay is exponential and not linear?
Question: In my previous question I asked Please explain C14 half-life. The OP mentioned that I was thinking of linear decay while C14 decay is measured as exponential decay. As I understand it, C14 is always in a state of decay. If we know the exact rate of decay, then shouldn't it be linear? How do we know that C14 decays exponentially rather than linearly, and have there been any studies to verify this? Answer: It's also worth noting that there is nothing special about atoms. If you have any system where in every period of time an event has a certain chance of happening which depends only on internal properties of the object, with no memory or communication with others, you will get the same decay curve. It's purely a matter of statistics. If you have a handful of coins and every minute toss them all and remove all the heads into a separate pile, the number of coins remaining in the hand will decay with a half-life of 1 minute. What is special about carbon-14 - and why it is useful for archaeology - is that new carbon-14 is being made all the time in the atmosphere, and while you are alive you take in this new carbon, so the decays don't have any effect until you die. It's like tossing the coins, but while you are alive adding new random coins after each toss - and then when you die having somebody else start to remove the heads. If you assume you died with an equal number of heads and tails, you can work out how many tosses have happened since you died - and so how long ago the sample died.
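The coin analogy above is easy to simulate; this is a sketch (not part of the original answer) showing that removing a fixed *fraction* of coins per round produces an exponential curve, while a linear law would remove a fixed *number* per round:

```python
import random

def coin_decay(n_coins, n_rounds, seed=0):
    """Each round, toss every remaining coin and remove the heads.
    Returns the number of coins left after each round."""
    rng = random.Random(seed)
    remaining = n_coins
    history = [remaining]
    for _ in range(n_rounds):
        # Each coin independently "decays" with probability 1/2 per round
        remaining = sum(1 for _ in range(remaining) if rng.random() >= 0.5)
        history.append(remaining)
    return history

counts = coin_decay(100_000, 10)
# The ratio between successive rounds stays near 1/2 (exponential decay),
# even though the absolute number removed per round keeps shrinking.
ratios = [counts[i + 1] / counts[i] for i in range(5)]
```

With 100,000 coins, each successive count is close to half the previous one, which is exactly the half-life behaviour the answer describes.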
{ "domain": "physics.stackexchange", "id": 65359, "tags": "nuclear-physics, radioactivity" }
Implementation of Euclidean algorithm
Question: I have made an implementation of the Euclidean algorithm in Java, following the pseudocode shown below. As far as my knowledge goes, I can't find a way of making this more efficient. I have looked into other people's implementations of this algorithm and there are a few which are slightly shorter, and some that use recursion. The pseudocode:

function gcd(a, b)
    while b ≠ 0
        t := b
        b := a mod b
        a := t
    return a

My implementation:

import java.util.Scanner;

public class EuclideanAlgorithm {

    public static void main(String[] args) {
        int a, b;
        try (Scanner sc = new Scanner(System.in)) {
            System.out.print("Enter integer a: ");
            a = sc.nextInt();
            System.out.print("Enter integer b: ");
            b = sc.nextInt();
        }

        long start = System.nanoTime();
        int answer = EuclideanAlgorithm(a, b);
        long stop = System.nanoTime();

        System.out.println("gcd = " + answer);
        System.out.println("Execution time: " + ((stop - start) / 1e+6) + "ms.");
    }

    public EuclideanAlgorithm() {} // Suppress default constructor

    private static int EuclideanAlgorithm(int a, int b) {
        int t;
        while (b != 0) {
            t = b;
            b = a % b;
            a = t;
        }
        return a;
    }
}

Is there a way to make this implementation more efficient, and would it have been more efficient to use recursion? Answer: I believe this is as efficient as it gets, save for quickly detecting edge cases (a == 0 or a == b), which I think would be negligible anyway. Recursion wouldn't be more efficient than the iterative version. Functional languages can optimize recursion to the point where there's no difference, but I don't think Java excels at that, and there's a performance penalty of adding new frames to the stack. Recursion can make code clearer and more readable sometimes, but that's another story. As a side note:

public EuclideanAlgorithm() {} // Suppress default constructor

What's the purpose of this "suppression"?
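To illustrate the answer's point about recursion, here is a hedged sketch (in Python rather than Java, purely for brevity) of both forms; they perform the same sequence of modulo operations, so recursion changes style, not cost:

```python
def gcd_iterative(a, b):
    """Euclidean algorithm, iterative form (mirrors the Java code above)."""
    while b != 0:
        a, b = b, a % b
    return a

def gcd_recursive(a, b):
    """Same algorithm expressed recursively; each call performs one a % b step."""
    return a if b == 0 else gcd_recursive(b, a % b)
```

Both return the same results for the same inputs; in languages without tail-call optimization (Java, CPython) the recursive version additionally pays one stack frame per step.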
{ "domain": "codereview.stackexchange", "id": 15270, "tags": "java, performance, algorithm" }
How to link a nodelet to external libraries?
Question: I'm trying to use a nodelet with an external library, but when running the nodelet it complains that it can't find a symbol which is in this external library. I can link the regular node against the library without problems. However, when I use the nodelet version, it complains. I'm certain that I'm not linking the nodelet to the library; I don't know how to. If you know how to link a nodelet to an external library, please share your know-how. The relevant piece of the proposed SOLUTION in CMakeLists.txt looks like the following:

set(MVIMPACT_LIBRARIES mvDeviceManager)

# Nodelet:
rosbuild_add_library(mv_bluefox_driver_nodelet src/nodelets.cpp src/camera.cpp)
# Linking the nodelet to the mvDeviceManager library
target_link_libraries(mv_bluefox_driver_nodelet ${MVIMPACT_LIBRARIES})

# Regular Node:
rosbuild_add_executable(camera_node src/camera_node.cpp src/camera.cpp)
target_link_libraries(camera_node ${MVIMPACT_LIBRARIES})

Originally posted by ubuntuslave on ROS Answers with karma: 347 on 2011-09-23 Post score: 1 Original comments Comment by ubuntuslave on 2011-09-25: Runtime problem, with the error pointing at some function (DMR_Init) from the external library: symbol lookup error: /DIR_TO_MY_PACKAGE/mv_bluefox_driver/lib/libmv_bluefox_driver_nodelet.so: undefined symbol: DMR_Init Comment by Daniel Stonier on 2011-09-24: Compile time problem or runtime problem? Comment by joq on 2011-09-24: Please show snippets of your CMakeLists.txt for both the node and the nodelet versions. That should help diagnose your problem. Answer: target_link_libraries(mv_bluefox_driver_nodelet ${MVIMPACT_LIBRARIES}) The nodelet is basically a library with some programming voodoo that tells the nodelet manager which class to instantiate. The target_link_libraries command links libraries against the binary that you name in its first argument. You already did it for the camera_node executable, and it works the same way for the mv_bluefox_driver_nodelet library.
Originally posted by roehling with karma: 1951 on 2011-09-26 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by ubuntuslave on 2011-09-26: Thanks, this was the solution. Comment by Hendrik Wiese on 2018-01-15: Does this work with statically linked 3rd party libraries? Like, libs of which I only have a .a and a .hpp? If not, what else can I do to achieve that linkage? Comment by roehling on 2018-05-04: It will probably work for x86_64, but not for i386, because static 32 bit libraries are usually not compiled with -fPIC for position independent code.
{ "domain": "robotics.stackexchange", "id": 6764, "tags": "ros, linking, nodelet" }
How successfully can convnets detect NSFW images?
Question: For example, search engine companies want to classify their image searches into 2 categories (which they already do), such as NSFW (nudity, porn, brutality) and safe-to-view pictures. How can artificial neural networks achieve that, and at what success rate? Can they be easily mistaken? Answer: The 2015 paper entitled "Applying deep learning to classify pornographic images and videos" applied various types of convnets to detecting pornography. The proposed architecture achieved 94.1% accuracy on the NPDI dataset, which contains 800 videos (400 porn, 200 non-porn "easy" and 200 non-porn "difficult"). More traditional computer vision methods achieved 90.9% accuracy. The proposed architecture also performs very well regarding the ROC curve. There do not seem to be any works regarding the other aspects of NSFW yet.
{ "domain": "ai.stackexchange", "id": 85, "tags": "image-recognition, classification, convolutional-neural-networks" }
Efficient ways to sort pairwise distances for set of points in Euclidean space?
Question: Consider a Euclidean space $\mathbb{R}^d$. Consider $X \subset \mathbb{R}^d$ where $X$ is a finite set with $|X| = n$. Consider the set of line segments $\{xy \mid x, y \in X\}$. I have a process $Z$ that I want to apply to each line segment, from smallest to largest, but I will stop examining line segments when a line segment satisfies some condition $C$. One thing I could do is find all line segments, order them from smallest to largest, and simply apply $Z$ to each line segment until $C$ is satisfied, but sorting the line segments has $O(n^2 \log(n))$ complexity. Is there an efficient way I can find each line segment in order of increasing length, one at a time? That is, find a line segment $l$, apply $Z$ to $l$, and if $C$ is not satisfied find the next line segment in increasing order of length, apply $Z$, ... Thank you in advance. Answer: A better solution could be as follows: Create a min-heap over the $n^2$ line segments. Take the top element of the heap and check if it satisfies condition $C$. If it does not satisfy it, pop it out and repeat. If it satisfies the condition, stop. Step $1$ takes $O(n^2)$ time. Step $2$ takes $O(p \log n)$ time, where $p$ is the position of the first line segment in the sorted order that satisfies the condition $C$. If $p$ is small, the algorithm takes $O(n^2)$ time.
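The heap-based scheme in the answer can be sketched as follows (a Python illustration with hypothetical helper names, not part of the original question):

```python
import heapq
from itertools import combinations

def segments_by_length(points):
    """Yield index pairs (i, j) in order of increasing segment length.
    Building the heap is O(n^2); each pop is O(log n^2) = O(log n)."""
    def d2(p, q):  # squared Euclidean distance: avoids sqrt, same ordering
        return sum((a - b) ** 2 for a, b in zip(p, q))
    heap = [(d2(points[i], points[j]), i, j)
            for i, j in combinations(range(len(points)), 2)]
    heapq.heapify(heap)
    while heap:
        _, i, j = heapq.heappop(heap)
        yield i, j

def first_segment_satisfying(points, condition):
    """Apply `condition` (standing in for Z and C) shortest-segment-first,
    stopping at the first segment that satisfies it."""
    for i, j in segments_by_length(points):
        if condition(points[i], points[j]):
            return i, j
    return None
```

Because the generator pops lazily, only the first $p$ segments ever pay the $O(\log n)$ pop cost, matching the analysis above.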
{ "domain": "cs.stackexchange", "id": 21122, "tags": "algorithms, efficiency, euclidean-distance, ordering" }
Can we study degenerated electron gas on Earth?
Question: The degenerate electron gas is supposed to exist in heavy stars' cores and white dwarfs, and maybe a few other places. The degenerate electron gas has an influence on a star's dynamics, nuclear decays and many other things. While trying to answer Is there something like "forced / induced electron-capture"?, I realized my own question - is there any possibility or clever trick to study a degenerate gas on Earth in the laboratory? Edit: I was recommended to specify more closely - so - I would like to see the effects of a degenerate electron gas on nuclei and radioactivity. By filling the Fermi sea higher, one can not only change some half-lives, but also some stable species can become radioactive. And - I have no idea how the dynamics of nucleus-nucleus collisions would change (if it changes at all). There should be calculations of everything, but - is an experiment possible? Answer: Consider the Fermi temperature of an ideal electron gas $$T_{F} = \frac{\hbar^2}{2 m_e k_B} \left( 3 \pi^2 n \right)^{2/3}$$ Or $$T_F \sim \left(\frac{n}{10^{21}\ \mathrm{m^{-3}}} \right)^{2/3} \mathrm{K}$$ I.e. to make the Fermi temperature of the order of normal earthly temperatures of hundreds of kelvin, we need a density like $10^{24}$ electrons per cubic meter. But this is not too difficult to reach: we know that one mole of matter, typically a small handful for normal solid elements, is $\sim 10^{23}$ atoms. We just need the electrons to be free, unconstrained within the atoms. We know of a case where electrons fly around almost freely in a material, and that is metals. For instance in magnesium, the density of conducting (free-ish) electrons is $\approx 8.6 \times 10^{28}\ \mathrm{m^{-3}}$, so the electrons will be well degenerate even at room temperature. As a result, we can study the contribution of the degenerate electron gas to the properties of metals.
One historically very important result, verified also experimentally, is the fact that the heat capacity of most metals follows a law $$C_V = K_1 T + K_2 T^3$$ where $K_1$ corresponds to the contribution of the degenerate electrons and $K_2$ to the "normal" oscillations of the lattice (phonons) in the Debye model. Similarly, one finds "hard" contributions to compressibility from the degeneracy pressure of the electron gas. The pressure of the free non-relativistic ideal electron gas is $$ P = \frac{\hbar^2}{5 m_e} \left( 3 \pi^2 \right)^{2/3} n^{5/3} $$ The compressibility is conventionally expressed using the isothermal bulk modulus $K_T$ $$K_T \equiv n \frac{\partial P}{\partial n}\Big|_{T=\mathrm{const}}$$ which gives us an estimate of the degenerate contribution as $$K_T = \frac{\hbar^2}{3 m_e} \left( 3 \pi^2 \right)^{2/3} n^{5/3}$$ which is independent of temperature in the degenerate limit we are considering. This contribution in fact turns out to dominate the bulk modulus of most metals. You can contrast this with a classical ideal gas, where the same computation goes as $$P = n k_B T \to K_T = n k_B T$$ and the bulk modulus scales linearly with temperature. A pedagogical introduction to the topic, and how to go beyond free, noninteracting electrons, is given in Chapters 6, 7 and 9 of Kittel's Introduction to Solid State Physics, and a more advanced treatment is in the first half of the canonical Solid State Physics by Ashcroft and Mermin.
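As a numerical check on the magnesium example, here is a small sketch (not part of the original answer) evaluating the standard free-electron Fermi temperature $T_F = \hbar^2 (3\pi^2 n)^{2/3} / (2 m_e k_B)$:

```python
import math

HBAR = 1.054571817e-34   # reduced Planck constant, J s
M_E = 9.1093837015e-31   # electron mass, kg
K_B = 1.380649e-23       # Boltzmann constant, J / K

def fermi_temperature(n):
    """Fermi temperature (K) of a free electron gas with number density n (m^-3)."""
    return HBAR**2 * (3 * math.pi**2 * n) ** (2 / 3) / (2 * M_E * K_B)

# Conduction-electron density of magnesium quoted in the answer:
T_F_mg = fermi_temperature(8.6e28)   # on the order of 10^5 K, far above room temperature
```

Plugging in the quoted density gives roughly $8 \times 10^4\ \mathrm{K}$, confirming that the conduction electrons in magnesium are strongly degenerate at room temperature; at $n = 10^{21}\ \mathrm{m^{-3}}$ the same formula gives well under a kelvin, matching the scaling estimate above.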
{ "domain": "physics.stackexchange", "id": 39439, "tags": "electrons, astrophysics, gas" }
How can I integrate Python rosunit unit tests into the cmake build?
Question: I would like to run rosunit Python unit tests as part of make test in the CI environment. In the wiki and in the catkin docs it is recommended to use catkin_add_nosetests() for that. Is catkin_add_nosetests() specific to nosetest, or can it be used with the plain unittest test runner as well/instead? (I know that rosunit depends on unittest. I didn't recognize a dependency on nosetest.) If it's possible: could you point me to an integration example in the ROS code base? Originally posted by thinwybk on ROS Answers with karma: 468 on 2017-10-16 Post score: 0 Answer: nosetest is a test runner. It will collect and run detected unittest test cases found in the test files. See: https://nose.readthedocs.io/en/latest/testing.html Originally posted by tfoote with karma: 58457 on 2017-10-16 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by thinwybk on 2017-10-17: Sorry, my question is not precise enough. I was interested in whether I can use only nosetest's test runner or any other as well (e.g. the one of unittest). Comment by tfoote on 2017-10-18: It's likely you could add integration for it but it's not currently supported. Comment by thinwybk on 2017-10-19: Ok. If I would like to add integration, where in the sources should I dive in first? Comment by tfoote on 2017-10-19: I would look inside catkin and make a parallel function like catkin_add_nosetests: https://github.com/ros/catkin/blob/kinetic-devel/cmake/test/nosetests.cmake
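To illustrate the answer's point: nose collects ordinary unittest test cases with no nose-specific code, so the same file runs under either runner. A minimal sketch (a hypothetical test, not from the question's package):

```python
import unittest

class TestExample(unittest.TestCase):
    """A plain unittest.TestCase; nose's runner discovers classes like this
    automatically, so the test needs no nose-specific imports."""
    def test_gcd(self):
        import math
        self.assertEqual(math.gcd(12, 18), 6)

# The very same case also runs under the stock unittest runner:
result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(TestExample)
)
```

This is why a catkin macro built around nose can still execute tests written against plain unittest.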
{ "domain": "robotics.stackexchange", "id": 29090, "tags": "ros, unit-testing" }
Primary direction in planet-centered equatorial reference frame
Question: I am given the classical orbital elements of the orbit of a spacecraft around a planet which is not the Earth, say Venus. I assume those are referred to a reference frame whose fundamental plane is the equator of the planet, but which is its primary direction? I am hesitating between two options: either it is the intersection of the plane of the orbit with the ecliptic, or the intersection of the plane of the orbit with the J2000 Earth equator. While the ecliptic options seems more reasonable to me, I've been reading lots of documentation on the Internet but either they are all Earth-minded (i.e. Wikipedia) or there is some implicit convention I am missing. The thing is that this is not important if for example I am just analyzing the motion of my spacecraft around my main attractor (Venus), but when I want to look at it from an interplanetary point of view, I think I need this information. Answer: From Seidelmann, P.K., Abalakin, V.K., Bursa, M., Davies, M.E., Bergh, C. de, Lieske, J.H., Oberst, J., Simon, J.L., Standish, E.M., Stooke, P., and Thomas, P.C. (2002). "Report of the IAU/IAG Working Group on Cartographic Coordinates and Rotational Elements of the Planets and Satellites: 2000," Celestial Mechanics and Dynamical Astronomy, v.82, Issue 1, pp. 83-111: (Table I) Recommended values for the north pole of rotation and the prime meridian at standard epoch (J2000 = JD2451545.0 barycentric coordinate time). Venus: $ \alpha_0 = 272.16 $ $ \delta_0 = 67.16 $ $ W = 160.20 $ (at J2000, for other values, the rotation term given is $ -1.4813688d$ where $d$ is days since J2000) The values of $\alpha_0$ and $\delta_0$ are given in standard equatorial coordinates. The prime meridian is at the center peak of Ariadne crater. When more missions visit Venus, we may learn more about its rotation. Making myself clear - this is one out of infinitely many possible inertial frames. Makes sense to use it though. 
I urge you to explore the options of the SPICE toolkit, since it's a neat collection of goodies and may be used to interface your simulation with real planetary and mission data from NAIF/PDS.
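To make the table concrete, the quoted $(\alpha_0, \delta_0)$ define the pole direction as a unit vector in the J2000 Earth equatorial frame, and $W(d)$ orients the prime meridian within the planet's equatorial plane. A small sketch (not part of the original answer):

```python
import math

# IAU 2000 values for Venus quoted above
ALPHA0 = math.radians(272.16)   # right ascension of the north pole
DELTA0 = math.radians(67.16)    # declination of the north pole

def venus_pole_j2000():
    """Unit vector of Venus's rotation pole in J2000 equatorial coordinates."""
    return (math.cos(DELTA0) * math.cos(ALPHA0),
            math.cos(DELTA0) * math.sin(ALPHA0),
            math.sin(DELTA0))

def prime_meridian_angle(d):
    """W in degrees at d days past J2000; the negative rate reflects
    Venus's slow retrograde rotation."""
    return 160.20 - 1.4813688 * d
```

The pole vector fixes the planet's equatorial plane, and $W(d)$ then pins the body-fixed frame to it, which is exactly the information needed to relate Venus-centered orbital elements to an interplanetary frame.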
{ "domain": "physics.stackexchange", "id": 8488, "tags": "astronomy, reference-frames, coordinate-systems" }
Time series binary classificaiton with labelling issues
Question: My situation is quite complicated, so I will give a similar example from a simpler domain. Suppose we want to predict WHEN mobile game users will make a purchase if given a sale. At any given instant, almost every user is a non-purchaser, because everybody is constantly not buying anything. Some people buy something, instantaneously becoming a purchaser, but then go back to the standard state of not purchasing. We have a time-stamp for each purchase. If we could understand user behavior well enough, we could predict when a user will purchase before they do so. If this is the case, then users with actions and game states very similar to the purchasers, but who do not actually purchase, would do so if given a sale. So the question is how to turn this into a machine learning problem. My current plan is to use binary classification, labeling the YES cases as the purchasers, since those who purchase at regular price would have purchased if given a sale. I build my features based on actions in recent look-back windows, like how many actions of a type occurred in the last day. For all users who never purchase I choose random time-stamps, build my features, and use them as the NO cases. Since I can choose as many NO time-stamps as I want, I have been using 100 times as many as the YES cases. Then I can use a classifier; I like tree-ensemble methods, but I do not think the classifier will really matter here. The problem is that this is not working. There are two things worth noting which I think are at the core of the issue I am having. First, we are trying to predict the "when", not just the "who". Users who play a lot or who have purchased before are much more likely to purchase in general, so it is easy to make a time-independent classifier and predict who is likely to purchase. This means that the features which are useful for the easy time-independent problem can sort of "contaminate" the feature set. I have taken out a number of these features and seen some improvement.
The second issue has to do with labelling. We have many YES cases from the purchasers, but how does one define a NO case? I described what I am doing above, but I am not sure that it is really the best method. Many of the users I defined as NO might effectively have been YES, since if they had been given the sale they would have bought. Also, I should point out two situations which this is not. This is not one-class binary classification, which generally uses the large class in the imbalanced case, reducing the problem to unsupervised anomaly/outlier detection. This is also not the semi-supervised situation where I have both classes labelled for a subset of the data. I really only have some of the YES cases for sure, and none of the NO cases. Any thoughts welcome. Answer: I have found a solution. My suspicions were correct, and there are ways to solve both issues. For the issue of knowing only the definite yesses rather than both yes and no, there is a type of machine learning specifically for this called PU-learning. There are libraries for this built on scikit-learn. This type of machine learning learns the binary classification problem from training data where the target is either labelled yes or not labelled at all. It then attempts to find a decision boundary by classifying the unlabelled data as yes or no. This does not totally solve the issue of where the decision boundary should be for deciding who would be a good person to offer a sale, but I can handle that by tuning my ROC curve under business constraints. The second issue was about data and feature engineering. Basically, it is a bad idea to do data prep differently for the different classes. Labeling my YES cases by the purchase event and then looking back at activity is not good. I need to choose random times so that the feature distributions match what the model will see at prediction time. If you do not, your probabilities are inaccurate. I also need to have many timestamps per user, to sample different states of that user.
This means that you will get some events labelled as yes for a user and some unlabelled for the same user. One would then expect that the boundary is drawn with some of the unlabelled timestamps on either side of the decision boundary for each user. This would then suppress the problem of contamination by the features which are only good from a time independent perspective. Explicitly the method is to select all (or a subset of) sign-in events. You build your features based on that event in whatever way you see fit. The labeling is done by joining on the purchase events to look forward and see if they purchase in the near future. The definition of 'near' needs to be tuned but you can likely make an educated guess from a plot. If there is a purchase event then that sign in event is labelled yes otherwise left unlabelled. The PU-learning does the rest.
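The labeling method in the last paragraph can be sketched with a forward-looking join; this is a hypothetical toy example with made-up column names, not code from the original post:

```python
import pandas as pd

# Toy event logs: every sign-in and every purchase, per user.
signins = pd.DataFrame({
    "user": ["a", "b", "a"],
    "t": pd.to_datetime(["2024-01-01", "2024-01-02", "2024-01-05"]),
})
purchases = pd.DataFrame({
    "user": ["a"],
    "t_buy": pd.to_datetime(["2024-01-06"]),
})

HORIZON = pd.Timedelta(days=3)  # how "near" the near future is; tune from a plot

# For each sign-in, find the same user's next purchase at or after it...
labeled = pd.merge_asof(
    signins.sort_values("t"),
    purchases.sort_values("t_buy"),
    left_on="t", right_on="t_buy", by="user", direction="forward",
)
# ...and label it 1 (YES) if that purchase falls within the horizon.
# Everything else stays 0, meaning "unlabelled" in the PU-learning sense.
labeled["label"] = ((labeled["t_buy"] - labeled["t"]) <= HORIZON).astype(int)
```

Here the same user "a" contributes both a labelled sign-in (close to the purchase) and an unlabelled one (far from it), which is exactly the mix of states per user the answer calls for.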
{ "domain": "datascience.stackexchange", "id": 3476, "tags": "machine-learning, classification, time-series, feature-engineering, semi-supervised-learning" }
How long does photon entanglement last?
Question: After creating entangled photons, how long does the entanglement last? Answer: There isn't a simple answer to this, because it depends on what the light interacts with. If you start with two entangled photons, then they form a single system described by a single wave function that isn't separable into the two photons. As long as this single system doesn't interact with anything else, it remains unchanged. So if you manage to emit the photons along directions that head off into outer space without hitting anything, then in principle the photons will remain a single entangled system forever. If one (or both) of the photons hits something - let's suppose it hits a single molecule, as an example - then the photons and the molecule now form an entangled system. That is, we now have a single wave function that describes the photons and the molecule. And again, if no further interaction occurs, this single entangled system is in principle stable forever. But in practice there are many interactions, and the size of the entangled system grows rapidly. At some point the entangled system gets so complex that it decoheres and the entanglement is lost. Exactly how and why this happens depends on the system and its interactions. For simple systems like photons in a vacuum, coherence can be sustained for a long time. For complex systems like qubits in a quantum computer, decoherence occurs very rapidly, which is the current big problem with building quantum computers.
{ "domain": "physics.stackexchange", "id": 61819, "tags": "quantum-mechanics, quantum-information" }
Synthesis of benzophenone
Question: Would this be a valid synthesis? I'm just unsure about the oxidative cleavage step. I know that there is a benzylic hydrogen, so should oxidative cleavage work here? What happens to the fragment that is lopped off? Is it also oxidized? Answer: Your basic strategy to involve a Friedel-Crafts acylation is correct, but your route is a bit circuitous. You could acylate benzene directly to produce benzaldehyde in one step using carbon monoxide and hydrochloric acid to generate formyl chloride in situ. While it is "just another" Friedel-Crafts acylation, it has its own name, the Gattermann-Koch reaction, after its discoverers. Once in hand, you could oxidize benzaldehyde to benzoic acid. An alternate approach would be to 1) brominate benzene to form bromobenzene, 2) form the Grignard reagent with magnesium and 3) react it with carbon dioxide to produce benzoic acid. In either case, with benzoic acid in hand, you can proceed as you've indicated, on to benzophenone.
{ "domain": "chemistry.stackexchange", "id": 9999, "tags": "organic-chemistry, synthesis" }
Convert Object Keys according to Table/Map object
Question: I have implemented a recursive function that converts an object's keys according to another lookup table/map. You are able to convert back and forth using the 3rd swap_conversion_table_key_value boolean argument. My use case is to convert object keys to single characters to slim down the number of characters generated by JSON.stringify, and then be able to convert them back to full keys on another client. Along with the usual code review criteria (mainly clarity), I am wondering if I just reinvented the wheel or overthought the whole problem. The code does seem a bit long for the functionality, and I am not keen on how it directly modifies the object (I wish it would return a new object). This means that you have to clone the data before every use to keep all the references to the original data happy. Demo: jsFiddle Usage:

recursiveConvertKeys(resultant_data, conversion_table, false);

// 3rd parameter defines whether we should swap the key-value in the table/map (good for converting back to the original data)
recursiveConvertKeys(resultant_data, conversion_table, true);

Code: Code available as a GitHub Gist

function recursiveConvertKeys(data_object, conversion_table, swap_conversion_table_key_value, __is_recursive_iteration, __current_object_level, __current_conversion_table_level) {
    // Do not pass in parameters for the double underscore arguments.
    // These are private and only used for self-recursive calling.
    //
    // Demo: http://jsfiddle.net/MadLittleMods/g3g0g1L4/
    // GitHub Gist: https://gist.github.com/MadLittleMods/7b9ec36879fd24938ad2
    // Code Review: http://codereview.stackexchange.com/q/69651/40165
    //
    /* Usage:
        var data = {asdf: 1, qwer: 2};
        var conversion_table = {asdf: 'a', qwer: 'q'};

        // Clone the data so we don't overwrite it
        var resultant_data = $.extend(true, {}, data);

        // Now execute the key converting process
        recursiveConvertKeys(resultant_data, conversion_table, false);
        console.log("Reversed Data:", resultant_data);

        // If you want to reverse the process simply pass true for the `swap_conversion_table_key_value` argument
        recursiveConvertKeys(resultant_data, conversion_table, true);
        console.log("Back to normal Data:", resultant_data);
    */

    // Start at the root of the objects when we invoke this method
    __current_object_level = __is_recursive_iteration ? __current_object_level : data_object;
    __current_conversion_table_level = __is_recursive_iteration ? __current_conversion_table_level : conversion_table;

    if(typeof __current_object_level == "object") {
        // Make the iterate object
        var iterate_object = Object.keys(__current_object_level);
        //console.log('iter', iterate_object);

        iterate_object.map(function(key, index, array) {
            // Check to make sure this is part of the object itself
            if(__current_object_level.hasOwnProperty(key)) {
                if(__current_conversion_table_level) {
                    var new_key = null;
                    if(!swap_conversion_table_key_value) {
                        if(typeof __current_conversion_table_level[key] == "object")
                            new_key = __current_conversion_table_level[key]['_short'];
                        else
                            new_key = __current_conversion_table_level[key];
                    }
                    else {
                        // We have to search through all of the current level to match the value to the current object key since we swapped
                        var table_level_keys = Object.keys(__current_conversion_table_level);
                        for(var i = 0; i < table_level_keys.length; i++) {
                            var curr_level_table_key = table_level_keys[i];
                            var key_to_compare = null;
                            var curr_level_table_value = __current_conversion_table_level[curr_level_table_key];
                            if(typeof curr_level_table_value == "object")
                                key_to_compare = curr_level_table_value['_short'];
                            else
                                key_to_compare = curr_level_table_value;

                            // If it is a match, we found it :)
                            if(key_to_compare == key) {
                                // Now use the key from the conversion table instead of the value
                                new_key = curr_level_table_key;
                                // Break out of the for loop after we found it
                                break;
                            }
                        }
                    }

                    // If there is actually a new key, replace it in our object
                    if(new_key) {
                        renameProperty(__current_object_level, key, new_key);
                    }
                    //console.log('key', key, new_key);

                    // Only keep going if there actually was a new_key,
                    // or there is an array to look through the items of
                    var is_current_key_array_index = key % 1 == 0; // If the current key is a positive integer, we assume it is an array key
                    if(new_key || is_current_key_array_index) {
                        // Use the new key if it was available,
                        // because that is what the object property is changed to from above
                        var value =
__current_object_level[new_key ? new_key : key]; //console.log('current', value, __current_conversion_level); // If we are swapping then the `key` will not be found in the table as it is ass-backwards. var table_key = swap_conversion_table_key_value ? new_key : key; // If the current key is a array, maintain the `_array_item` conversion level we set the level prior // Otherwise continue down the tree var next_conversion_level = is_current_key_array_index ? __current_conversion_table_level : __current_conversion_table_level[table_key]; if(typeof __current_conversion_table_level[table_key] == "object") { // If the current value is an array set up the conversion level for the items if(Object.prototype.toString.call(value) === '[object Array]') { next_conversion_level = __current_conversion_table_level[table_key]['_array_item']; } else { next_conversion_level = __current_conversion_table_level[table_key]['_object']; } } //console.log('next', next_conversion_level); recursiveConvertKeys(data_object, conversion_table, swap_conversion_table_key_value, true, value, next_conversion_level); } } } }); } } The format for the conversion table is below. I am not set on this format so feel free to suggest something better for the table/map. var conversion_table = { "psdf": "p", "qwer": "q", "dict": { "_short": "d", "_object": { "one": "o", "two": "t", "three": "r" } }, "candidates": { "_short": "c", "_array_item": { "ip": "i", "port": "p" } } } And some accompanying test data: var test_data = { "psdf": "pcodereview", "qwer": "qcodereview", "dict": { "one": "1", "two": "2", "three": "3" }, "candidates": [ { "ip": "0.0.0.0", "port": 65000 }, { "ip": "127.0.0.1", "port": 65000 } ] } Answer: Great question, Can you take the truth? The real answer is that this is overkill. Send your JSON to other machines/clients with HTTP compression (gzip) and the benefits of slimming down are close to non-existing. Divide and conquer Still, this is a fun question. 
After looking at this for a while, I've come to the conclusion that you have so much code because you tried too hard to apply DRY. Converting keys and reverting keys are different enough to deserve separate functions. They will look tantalizingly similar, but I am quite sure that merging them is wrong. Naming convention Furthermore, stop using _ and __ as prefixes to variables, it is not idiomatic for JavaScript. And start using lowerCamelCase, so __is_recursive_iteration should be isRecursiveIteration. Name that thing Furthermore, to quote Humpty Dumpty "When I use a word," Humpty Dumpty said, in rather a scornful tone, "it means just what I choose it to mean—neither more nor less." "The question is," said Alice, "whether you can make words mean so many different things." "The question is," said Humpty Dumpty, "which is to be master—that's all.” ― Lewis Carroll When you declare curr_level_table_key it is the only 'key' in the scope, feel free to declare it simply as key, this would make your code far easier to understand. If you are not comfortable because it does not convey enough info, you could comment it (I would not do this personally): var key = table_level_keys[i]; //Current level table key <- That does not really make sense to me Commenting Also your commenting is too excessive for what really should be a simple, recursive, key swapping algorithm. At least convert your 2 line comments to 1 line comments, remove obvious comments, and all remaining multi line comments should go in front of the function. 
Worst offender:   // Break out of the for loop after we found it break; Avoid the arrow pattern You know that things are getting too complex when you see the arrow pattern, like here: //console.log('next', next_conversion_level); recursiveConvertKeys(data_object, conversion_table, swap_conversion_table_key_value, true, value, next_conversion_level); } } } }); } } In this particular case you can do this by using the continue statement if you know that nothing else must be done in the loop, so consider using if (!object.hasOwnProperty(key)){ continue; } Housekeeping Remove commented out code, your code is already hard to read and follow. Counter proposal // from: http://stackoverflow.com/a/4648411/796832 // Check for the old property name to avoid a ReferenceError in strict mode. function renameProperty(object, oldName, newName) { if (object.hasOwnProperty(oldName)) { object[newName] = object[oldName]; delete object[oldName]; return object[newName]; } } function convertKeys(object, map) { if (typeof object != "object") { return; } //Iterate over the object Object.keys(object).map(function(key) { var mappedKey = map[key]; if (!mappedKey) { return; } var value = object[key]; if (mappedKey instanceof Object) { if (mappedKey._short) { value = renameProperty(object, key, mappedKey._short); } if (value instanceof Array) { for (var i = 0, length = value.length; i < length; i++) { convertKeys(value[i], mappedKey._array_item); } } else if (value instanceof Object) { convertKeys(value, mappedKey._object); } } else { renameProperty(object, key, mappedKey); } }); } function findValueKey(object, searchValue) { var keys = Object.keys(object), key, value; for (var i = 0, length = keys.length; i < length; i++) { key = keys[i]; value = object[key]; if (value === searchValue) { return key; } if( value instanceof Object && value._short == searchValue ){ return key; } } } function revertKeys(object, map) { if (typeof object != "object") { return; } //Iterate over the object 
Object.keys(object).forEach(function(key) { var mappedKey = findValueKey(map, key); if (!mappedKey) { return; } var value = renameProperty(object, key, mappedKey), subMap = map[mappedKey]; if (subMap instanceof Object) { if (value instanceof Array) { for (var i = 0, length = value.length; i < length; i++) { revertKeys(value[i], subMap._array_item); } } else if (value instanceof Object) { revertKeys(value, subMap._object); } } }); } $('textarea').tabOverride(); $('.go').on('click', function() { // Tests // --------------------------- var data = JSON.parse($('.data-box').val()); var conversion_table = JSON.parse($('.conversion-table-box').val()); // Clone the data so we don't overwrite it var resultant_data = $.extend(true, {}, data); // Now execute the key converting process convertKeys(resultant_data, conversion_table); $('.result').html(JSON.stringify(resultant_data, null, '\t')); console.log("Reversed Data:", resultant_data); // If you want to reverse the process simply pass true for the `swap_conversion_table_key_value` argument revertKeys(resultant_data, conversion_table); $('.result-back-to-normal').html(JSON.stringify(resultant_data, null, '\t')); console.log("Back to normal Data:", resultant_data); }).trigger('click'); *, *:before, *:after { -moz-box-sizing: border-box; -webkit-box-sizing: border-box; box-sizing: border-box; } html, body { width: 100%; height: 100%; margin: 0; padding: 0; font-family: Arial, sans-serif; } pre, textarea { tab-size: 4; -moz-tab-size: 4; -o-tab-size: 4; -webkit-tab-size: 4; } textarea, pre { width: 100%; min-height: 150px; padding: 4px; } .side-by-side-section { display: flex; } .side-by-side-section > * { flex: 1; } .code-block { background: rgba(0, 0, 0, 0.1); border: 1px solid rgba(0, 0, 0, 0.2); } <!DOCTYPE html> <html> <head> <meta name="description" content="Map keys" /> <script src="//code.jquery.com/jquery-2.1.1.min.js"></script> <script 
src="//rawgit.com/wjbryant/taboverride/master/build/output/taboverride.js"></script> <script src="//rawgit.com/wjbryant/jquery.taboverride/master/src/jquery.taboverride.js"></script> <meta charset="utf-8"> <title>JS Bin</title> </head> <body> <button class="go">Go</button> Write data and conversion table in JSON. <br /> <div class="side-by-side-section"> <div> <div>Data:</div> <textarea class="data-box"> { "psdf": "pcodereview", "qwer": "qcodereview", "dict": { "one": "1", "two": "2", "three": "3" }, "candidates": [ { "ip": "0.0.0.0", "port": 65000 }, { "ip": "127.0.0.1", "port": 65000 } ] } </textarea> </div> <div> <div>Conversion table:</div> <textarea class="conversion-table-box"> { "psdf": "p", "qwer": "q", "dict": { "_short": "d", "_object": { "one": "o", "two": "t", "three": "r" } }, "candidates": { "_short": "c", "_array_item": { "ip": "i", "port": "p" } } } </textarea> </div> </div> <div class="side-by-side-section"> <div> <div>Converted:</div> <pre class="code-block"><code class="result"></code></pre> </div> <div> <div>Converted back:</div> <pre class="code-block"><code class="result-back-to-normal"></code></pre> </div> </div> </body> </html>
{ "domain": "codereview.stackexchange", "id": 10700, "tags": "javascript, json, hash-map" }
jQuery serializeObject function
Question: I am working on a small serialization function which serializes form input to JSON. Here is the fiddle: http://jsfiddle.net/jdQfj/2/ I currently have no idea how to split up the array declarations, let alone a combination of array and object declarations. However, the function works well with arbitrarily deep object nesting and normal values. But I believe there is room for optimization. jQuery.fn.serializeObject = function() { var o = {}; var a = this.serializeArray(); $.each(a, function() { // check if object if (this.name.indexOf('.') !== -1) { var path = this.name.split('.'); var current = o; for (var i = 0; i < path.length; i++) { if (i === (path.length - 1)) { current[path[i]] = this.value; } else { if (current[path[i]] === undefined) { current[path[i]] = {}; } } current = current[path[i]]; } // check if array } else if (this.name.indexOf('[') !== -1 && this.name.indexOf(']')) { console.log(this.name + ' is an array'); // has to get implemented // normal value } else { o[this.name] = this.value; } }); return o; }; Answer: Here's what I came up with: var addNestedPropToObj = function (obj, name, value) { var path = name.split('.'), current = obj, len = path.length - 1, i = 0; for (; i < len; i++) { current[path[i]] = current[path[i]] || {}; current = current[path[i]]; } if ( 0 < path[i].indexOf( "[]" ) ) { name = path[i].replace('[]', ''); current[name] = current[name] || []; current[name].push(value); } else { current[path[i]] = value; } return obj; }; jQuery.fn.serializeObject = function () { var o = {}, a = this.serializeArray(), i = 0, len = a.length; for (; i < len; i++) { o = addNestedPropToObj(o, a[i].name, a[i].value); } return o; }; Demo and test cases Tips: Always try to separate most of the logic from the plugin into small testable functions. Use a regular loop instead of the $.each(). String.prototype.split() returns an array of the entire string if a match isn't found. So the else condition isn't required.
// not needed else { o[this.name] = this.value; } Instead of checking to see if you're at the end of the loop, iterate to path.length - 1, then afterwards perform the statement current[path[i]] = this.value;. As for the array format contained within the name property, for simplicity I suggest that the array notation should be an endpoint where the value is pushed onto the current property. So for a value to be added to an array, [] must be at the end of the name. Here's an example: var func = function( str ){ return JSON.stringify( addNestedPropToObj( {}, str, 1)); }; func("a[].b") === '{"a[]":{"b":1}}'; func("a.b[]") === '{"a":{"b":[1]}}'; Seen as an array: a[] or a.b[] Seen as a string: a[b], a[b.c]
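The split() tip can be seen directly with the answer's addNestedPropToObj, reproduced here so the snippet is self-contained: a dot-less name comes back from split('.') as a one-element path, so the top-level assignment falls out of the same loop with no special branch.

```javascript
// From the answer above: split('.') on a dot-less name returns the whole
// name as a one-element array, so no separate "plain value" branch is needed.
var addNestedPropToObj = function (obj, name, value) {
  var path = name.split('.'),
      current = obj,
      len = path.length - 1,
      i = 0;
  // walk (or create) the nested objects down to the last path segment
  for (; i < len; i++) {
    current[path[i]] = current[path[i]] || {};
    current = current[path[i]];
  }
  // a trailing "[]" marks an array endpoint; everything else is a plain key
  if (0 < path[i].indexOf("[]")) {
    name = path[i].replace('[]', '');
    current[name] = current[name] || [];
    current[name].push(value);
  } else {
    current[path[i]] = value;
  }
  return obj;
};
```

For example, addNestedPropToObj({}, "plain", 1) yields {"plain":1} through the exact same code path as the nested cases.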
{ "domain": "codereview.stackexchange", "id": 2321, "tags": "javascript, jquery" }
How do I model non-covalent interactions in Gaussian?
Question: I'm doing some research about the dyes found in photography, and found Organic Chemistry of Photography by Shinsaku Fujita, which shows the dyes used. I'm looking at doing an excited-states and UV-VIS calculation, of course optimized beforehand with a geometry optimization, with some of the dyes presented in the book. Right now, I'm trying to build number one in the image below, but have come upon a bit of a stumbling block. The problem is the $\ce{(CH2)3SO2O-}$ and $\ce{(C2H5)3NH+}$ part on the rightmost nitrogen, as I'm not sure how to model the interaction between $\ce{(CH2)3SO2O-}$ and $\ce{(C2H5)3NH+}$ (triethylamine), as both have no more covalent bonds available. To visualize the problem, I built out two molecules (everything but triethylamine as one, then triethylamine below). As you can see, both are connected with as many covalent bonds as possible. How can I model the interaction between the two molecules (pictured above) in Gaussian, for UV/Vis calculations? Answer: As hBy2Py, Greg, and others have pointed out in their comments, the cation is just a counterion for your benzothiazole-based cyanine dyes. I suggest being reasonably lazy and performing the calculations on the cyanine dye only. Even for calculations in solvents of sufficient polarity (from an experimental photochemical perspective, acetonitrile seems to be a good choice), I would just ignore the cations.
{ "domain": "chemistry.stackexchange", "id": 7163, "tags": "organic-chemistry, bond, computational-chemistry" }
360 degree ultrasonic beacon sensor
Question: Basically, I want to detect an ultrasonic beacon in a radius around the robot. The beacon would have a separate ultrasonic emitter while the robot would have the spinning receiver. Are there any existing ultrasonic sensors that would meet this use case, or am I stuck hacking one together myself? Is ultrasonic even the best choice? I was hoping that the beacon would be kept in a pocket, so I figured optical sensors were out. Edit: The beacon and robot will both be mobile, so fixed base stations are not an option. Answer: If you were to place 3 ultrasonic microphones/sensors in a triangle layout, you should be able to determine the direction of the beacon from the differences in the ping's arrival times at the sensors, with some straightforward trigonometry. They'd all have to be facing in the same direction (probably upwards) and you may have to find some way to make them omnidirectional. The further apart you can space the sensors, the more accurate the calculation will be.
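The "straightforward trigonometry" can be sketched for the far-field case: with three non-collinear receivers, each measured arrival-time difference yields one linear equation in the unknown unit vector pointing at the beacon. A minimal sketch (the sensor layout is hypothetical, the beacon is assumed far away, and the speed of sound is taken as 343 m/s):

```javascript
// Far-field bearing from time-differences-of-arrival (TDOA) at three
// planar sensors. For a beacon far away in unit direction u, the ping
// reaches sensor i at t_i = (R - p_i . u) / c, so each time difference
// dt_i0 = t_i - t_0 gives one linear equation: (p_i - p_0) . u = -c * dt_i0.
const C = 343; // assumed speed of sound in air, m/s

function bearingFromTdoa(sensors, dt10, dt20) {
  // Build the 2x2 linear system A u = b from the two time differences
  const a11 = sensors[1][0] - sensors[0][0], a12 = sensors[1][1] - sensors[0][1];
  const a21 = sensors[2][0] - sensors[0][0], a22 = sensors[2][1] - sensors[0][1];
  const b1 = -C * dt10, b2 = -C * dt20;
  const det = a11 * a22 - a12 * a21; // nonzero when the sensors are not collinear
  // Solve for the direction components by Cramer's rule
  const ux = (b1 * a22 - b2 * a12) / det;
  const uy = (a11 * b2 - a21 * b1) / det;
  return Math.atan2(uy, ux); // bearing toward the beacon, in radians
}
```

Spacing the sensors further apart makes the time differences larger relative to timing jitter, which is the accuracy point made above.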
{ "domain": "robotics.stackexchange", "id": 272, "tags": "sensors" }
Wave operator in Kerr spacetime: change of coordinates
Question: The wave equation for a scalar field, in Kerr geometry and in Boyer-Lindquist coordinates, reads: $$-\left[\frac{(r^2 + a^2)^2 }{\Delta} - a^2 \sin^2\theta \right] \partial^2_t \Phi - \frac{4Mar}{\Delta}\partial_t\partial_{\phi}\Phi + \left[\frac{1}{\sin^2 \theta} - \frac{a^2}{\Delta} \right]\partial^2_\phi \Phi + \partial_r(\Delta\partial_r \Phi) + \frac{1}{\sin\theta} \partial_\theta (\sin \theta \partial_\theta \Phi) = 0 $$ where $\Delta = r^2 - 2Mr + a^2.$ In many papers dealing with the time evolution of the scalar field, besides the tortoise coordinates $\frac{dr_*}{dr} = \frac{r^2 + a^2}{\Delta}$, a specific change of variables is introduced to cure unphysical pathologies near the horizon, which is $$d\phi_* = d\phi + \frac{a}{\Delta} dr.$$ Assuming the ansatz: $$\Phi = \Psi(t,r,\theta)e^{im\phi_*}$$ the wave equation in $(t,r_*,\theta,\phi_*)$ coordinates is: $$-\partial^2_t \Psi - \frac{(r^2 + a^2)^2}{\sigma}\partial^2_{r*}\Psi + \frac{4imarM}{\sigma}\partial_t \Psi - \frac{2\left[r\Delta + iam(r^2 + a^2)\right]}{\sigma}\partial_{r*}\Psi - \frac{\Delta}{\sigma}\left[\partial^2_\theta \Psi - \cot \theta \partial_{\theta} \Psi + \frac{m^2}{\sin^2\theta}\Psi\right] = 0.$$ where $\sigma = -(a^2 + r^2)^2 + a^2\Delta \sin^2\theta$. I tried to re-do the change of variables by myself but I got some "spurious" terms in the wave equation, such as a cross term $\partial_t\partial_{r*}$. Can someone explicitly show how to get the last wave equation through the changes of variables shown above? Answer: It would have been good if you had shown some of your calculation in detail so that we could see where you might have followed the wrong path (which may also be helpful to others). The solution to this problem comes back to expressing the partial derivatives in the old coordinates $(t, r, \theta, \phi)$ in partial derivatives of the new coordinates $(t, r_*, \theta, \phi_*)$.
Let's consider the differential of some function $f$ (which could be your scalar field $\Phi$) in the old coordinate basis $$ df = \frac{\partial f}{\partial t} dt + \frac{\partial f}{\partial r} dr + \frac{\partial f}{\partial \theta} d\theta + \frac{\partial f}{\partial \phi} d\phi \qquad (1)$$ and in the new coordinate basis $$ df = \frac{\partial f}{\partial t} dt + \frac{\partial f}{\partial r_*} dr_* + \frac{\partial f}{\partial \theta} d\theta + \frac{\partial f}{\partial \phi_*} d\phi_*.$$ With the two coordinate transformations you mentioned, i.e. $$ dr_* = \frac{r^2 + a^2}{\Delta} dr\quad\text{and}\quad d\phi_* = d\phi + \frac{a}{\Delta} dr, $$ the latter becomes \begin{align*} & = \frac{\partial f}{\partial t} dt + \frac{\partial f}{\partial r_*} \frac{r^2+a^2}{\Delta} dr + \frac{\partial f}{\partial \theta} d\theta + \frac{\partial f}{\partial \phi_*} d\phi + \frac{\partial f}{\partial \phi_*} \frac{a}{\Delta} dr \\ & = \frac{\partial f}{\partial t} dt + \left[\frac{\partial f}{\partial r_*} \frac{r^2+a^2}{\Delta} + \frac{\partial f}{\partial \phi_*} \frac{a}{\Delta} \right]dr + \frac{\partial f}{\partial \theta} d\theta + \frac{\partial f}{\partial \phi_*} d\phi. \qquad (2) \end{align*} As the 1-forms $dt, dr, d\theta, d\phi$ form a basis, their coefficients in both (1) and (2) must be identical; hence \begin{align*} \frac{\partial f}{\partial r} & = \frac{r^2+a^2}{\Delta} \frac{\partial f}{\partial r_*} + \frac{a}{\Delta} \frac{\partial f}{\partial \phi_*}, \\ \frac{\partial f}{\partial \phi} & = \frac{\partial f}{\partial \phi_*}. \end{align*} When you use those two expressions in your initial wave equation, you should reproduce the latter wave equations in a few steps.
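As a first concrete step of the substitution (my own intermediate step, not from the original post), applying these derivative rules to the ansatz $\Phi = \Psi e^{im\phi_*}$ gives

```latex
% The phi_* derivative of the ansatz just pulls down a factor i m,
% and the radial rule derived above then reads
\partial_\phi \Phi = i m \, \Phi, \qquad
\partial_r \Phi = \left[ \frac{r^2 + a^2}{\Delta} \, \partial_{r_*} \Psi
                + \frac{i m a}{\Delta} \, \Psi \right] e^{i m \phi_*}.
% Since neither t nor theta is touched by the transformation
% (r_* depends only on r, and phi_* is fixed once r and phi are),
% a correct substitution cannot generate a \partial_t \partial_{r_*} cross term.
```

This already shows why the spurious $\partial_t\partial_{r_*}$ term in the question must come from mishandling one of the derivative rules rather than from the transformation itself.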
{ "domain": "physics.stackexchange", "id": 93592, "tags": "general-relativity, waves, field-theory, time-evolution, kerr-metric" }
Unable to locate ros-hydro-desktop-full
Question: Yes, I followed the ROS wiki. I added the sources and the apt-key and they both show up with cat /etc/apt/sources.list.d/ros-latest.list and with sudo apt-key list. But when I sudo apt-get install ros-hydro-desktop-full or even sudo apt-cache search ros-hydro (or any distro), nothing shows up. I'm running Xubuntu with Xfce on a Chromebook, if that helps at all. Originally posted by frodo0321 on ROS Answers with karma: 13 on 2013-10-14 Post score: 1 Original comments Comment by dornhege on 2013-10-14: Did you run apt-get update? Comment by frodo0321 on 2013-10-14: By looking at other questions I figured someone would ask me that haha. And yes, multiple times. Comment by tfoote on 2013-10-14: What is the output of your apt-get update? Does it show packages.ros.org being downloaded? What version of Xubuntu are you using? Comment by frodo0321 on 2013-10-15: Hit http://packages.ros.org precise Release.gpg Hit http://packages.ros.org precise Release Hit http://packages.ros.org precise/main armhf Packages Ign http://packages.ros.org precise/main TranslationIndex Ign http://packages.ros.org precise/main Translation-en_US Ign http://packages.ros.org precise/main Translation-en are all the ros packages. and lsb_release -a outputs Distributor ID: Ubuntu Description: Ubuntu 12.04.3 LTS Release: 12.04 Codename: precise Answer: Ahh, you are using armhf. Not all packages are available for armhf, especially the desktop packages. Originally posted by tfoote with karma: 58457 on 2013-10-15 This answer was ACCEPTED on the original site Post score: 3 Original comments Comment by frodo0321 on 2013-10-15: Is there any way around that? Like compiling from source or something? Comment by tfoote on 2013-10-15: Yeah, though a lot of the visualization libraries are not going to work well on embedded computers. http://wiki.ros.org/hydro/Installation/Source Comment by frodo0321 on 2013-10-15: now it can't locate python-wstool...
I can't win haha Comment by tfoote on 2013-10-15: It's available from pip. Armhf is not an officially supported architecture, so there's going to be a little bit of extra work to get things working. Comment by kristpan on 2014-10-08: I am not using armhf but find the same problem. Comment by ahendrix on 2014-10-08: @kristpan: you should probably ask that as a new question rather than trying to reopen a year-old question.
{ "domain": "robotics.stackexchange", "id": 15856, "tags": "ros, ros-hydro" }
Should the prediction of the body temperature given a camera image be modelled as classification or regression?
Question: I am fairly new to deep learning in general and I am currently facing a problem I want to solve using neural networks, and I am unsure if it is a classification or regression problem. I am aware that classification problems are about classifying whether an input belongs to class A or class B (or class C ...) and regression problems are about mapping the input to some sort of continuous output (just like the house pricing problem). I basically want to measure the body temperature of a person using a simple video camera. To me, this seems like more of a regression type of issue rather than classification, because of the actual continuous result values I want the neural network to produce from the input video frames, e.g. 39 °C. But a question that came to my mind was: What if I use every integer value in the range from 35 °C to 42 °C as a possible output class? This would make it a classification problem, am I right? What would be the correct approach here and why? Classification or regression? Answer: I think it depends on your application and what data you have available. If the prediction of body temperature itself doesn't have to be accurate and classes like COLD, NORMAL, and HOT will suffice, you should stay with classification. There isn't a cutoff, but as you increase the number of classes that represent numbers on the same scale, it may become more difficult to interpret the result, as there will be a distribution across the classes. If you choose regression on the other hand, you are not restricted by your classes anymore and may be able to tell the difference between 36.5 °C and 36 °C, which (according to Wikipedia) can be the difference between normal and cold. This is something classes may not be able to capture. Another thing to consider is what your training data looks like and how accurately you want to predict the temperature. If you have pictures of people and a temperature reading, where was the reading taken and how accurate is the reading?
If it isn't accurate (±1 degree), you may not be able to give as accurate predictions as you would like and may only be able to use 3 different classes like those above. If you don't have a data set, then that is another problem altogether and might require another question, as it depends on your application. I think that your problem is interesting and I hope this can help you understand how best to apply deep learning to it.
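To make the trade-off concrete, here is a hypothetical sketch (the integer class set 35..42 °C and the rounding rule are assumptions taken from the question, not a recommended design) of what the classification framing quantizes away:

```javascript
// Integer-degree classes as proposed in the question: every prediction
// collapses to the nearest whole degree, so sub-degree differences
// (e.g. 36.0 vs 36.5 degrees C) become indistinguishable.
const CLASSES = [35, 36, 37, 38, 39, 40, 41, 42]; // degrees C

function toClass(tempC) {
  // clamp to the supported range, then round to the nearest class index
  const clamped = Math.min(42, Math.max(35, tempC));
  return CLASSES.indexOf(Math.round(clamped));
}

function toTemp(classIndex) {
  // the finest value a classifier over these classes can report back
  return CLASSES[classIndex];
}
```

A regression output has no such floor on resolution, which is the point made above about 36.5 °C versus 36 °C.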
{ "domain": "ai.stackexchange", "id": 357, "tags": "machine-learning, deep-learning, ai-design, classification, regression" }
Is this the Melvin metric (magnetic universe) in disguise?
Question: I'm solving the Einstein equation assuming a cylindrical symmetry and found something interesting which I never saw elsewhere. I now feel that I may have found the Melvin magnetic universe solution in another form but need a confirmation (sorry for the lengthy exposition below). Assume the following metric (a cylindrical universe): \begin{equation}\tag{1} ds^2 = dt^2 - a^2(t) g^2(r)(dx^2 + dy^2) - b^2(t) \, dz^2, \end{equation} where $a(t)$, $b(t)$ and $g(r)$ are unknown functions. Of course: $r =\sqrt{x^2 + y^2}$ and we could use $dx^2 + dy^2 = dr^2 + r^2 d\varphi^2$. I want an homogeneous and isotropic spatial section in the plane $x \, y$, so the function $g(r)$ isn't completely arbitrary. Calculating the Riemann tensor give these components (here, $i, j, k = 1, 2$ only, for the coordinates $x$ and $y$): \begin{align} R^0_{i 0 j} &= a \ddot{a} \, g^2 \, \delta_{ij}, \\[12pt] R^0_{303} &= b \ddot{b}, \\[12pt] R^i_{33j} &= -\, \frac{\dot{a} \dot{b} b}{a} \, \delta_{ij}, \\[12pt] R^i_{jkl} &= (\dot{a}^2 g^2 + \mathcal{Q}(r))(\delta_{ki} \, \delta_{lj} - \delta_{li} \, \delta_{kj}) + \text{anisotropic terms}. \end{align} The anisotropic terms cancel if \begin{equation}\tag{2} \frac{g^{\prime \prime}}{g} = \frac{g^{\prime}}{g \, r} + 2 \frac{g^{\prime 2}}{g^2}, \end{equation} which implies the solution \begin{equation}\tag{3} g(r) = \frac{A}{1 + B \, r^2}, \end{equation} where $A$ and $B$ are constants (they can be absorbed into the coordinates, so I'll take $A = 1$). Then \begin{equation}\tag{4} \mathcal{Q}(r) = 4 B \, g^2. \end{equation} Note that if $B$ is positive, then the whole $x \, y$ plane has a finite area: $\mathcal{A}_{x y} = \frac{\pi}{B}$ (that plane is compact)! 
The components of the Einstein tensor could be calculated without troubles (again, $i, j = 1, 2$ only): \begin{align} G_{00} &= -\, \Big( 2 \frac{\dot{a} \dot{b}}{a b} + \frac{\dot{a}^2}{a^2} + \frac{4 B}{a^2} \Big), \tag{5} \\[12pt] G_{ij} &= -\, \Big( \frac{\ddot{a}}{a} + \frac{\ddot{b}}{b} + \frac{\dot{a} \dot{b}}{a b} \Big) \, g_{ij}, \tag{6} \\[12pt] G_{33} &= -\, \Big( 2 \frac{\ddot{a}}{a} + \frac{\dot{a}^2}{a^2} + \frac{4 B}{a^2} \Big) \, g_{33}. \tag{7} \end{align} I then need an anisotropic (i.e. cylindrical) perfect fluid stress-tensor, of components \begin{equation}\tag{8} T_{\mu \nu} = (\rho + \sigma) \, u_{\mu} u_{\nu} + (p - \sigma) \, n_{\mu} n_{\nu} - \sigma \, g_{\mu \nu}, \end{equation} where $p \equiv p_{\parallel}$ and $\sigma \equiv p_{\perp}$ are the pressure along the $z$ axis and in the plane $x \, y$, respectively. The fluid four-velocity is $u^{\mu} = (1, 0, 0, 0)$ (such that $u_{\mu} \, u^{\mu} = 1$) and the symmetry axis is represented by the spacelike four-vector $n^{\mu} = (0, 0, 0, 1/b(t))$ (such that $n_{\mu}\, n^{\mu} = -1$). The local conservation of fluid $\nabla_{\mu} T^{\mu \nu} = 0$ implies the following: \begin{equation}\tag{9} \dot{\rho} + \Big( 2 \frac{\dot{a}}{a} + \frac{\dot{b}}{b} \Big) \rho + \frac{\dot{b}}{b} p + 2 \frac{\dot{a}}{a} \sigma = 0. \end{equation} Take note that a purely magnetic field could be cast into (8): \begin{equation} T_{\mu \nu} = F_{\mu \lambda} F^{\lambda}_{\; \; \nu}+ \frac{1}{4} g_{\mu \nu} F_{\lambda \kappa} F^{\lambda \kappa}, \end{equation} with $F_{12}$ the only component. We then have $p = -\, \rho$ (negative pressure = tension along the magnetic field lines) and $\sigma = + \rho$, such that the trace of (8) cancels: $g^{\mu \nu} T_{\mu \nu} = \rho - p - 2 \sigma = 0$. Equ. (9) then implies the standard energy density of radiation in cosmology: \begin{equation}\tag{10} \rho(t) = \frac{\text{cste}}{a^4(t)}. 
\end{equation} Putting all these into the Einstein equation $G_{\mu \nu} = -\, 8 \pi G \, T_{\mu \nu}$ gives the following equations to be solved: \begin{align} 2 \frac{\dot{a} \dot{b}}{a b} + \frac{\dot{a}^2}{a^2} + \frac{4 B}{a^2} = 8 \pi G \rho, \tag{11} \\[12pt] \frac{\ddot{a}}{a} + \frac{\ddot{b}}{b} + \frac{\dot{a} \dot{b}}{a b} = -\, 8 \pi G \sigma = -\, 8 \pi G \rho, \tag{12} \\[12pt] 2 \frac{\ddot{a}}{a} + \frac{\dot{a}^2}{a^2} + \frac{4 B}{a^2} = -\, 8 \pi G p = +\, 8 \pi G \rho. \tag{13} \end{align} If the constant $B$ is positive, solving these gives $a(t) = \text{cste}$ (so $\rho = \text{cste}$!) and $b(t) = \sin{\omega t}$ (for $0 < \omega t < \pi$, where $\omega = \sqrt{8 \pi G \rho}$). I'm perplexed by this solution, which appears to describe a magnetic universe, homogeneous, isotropic and static in the plane $x \, y$, and expanding/contracting along the $z$ axis. Is this Melvin's magnetic universe in another form? Usually, Melvin's universe is described by the following static metric: \begin{equation}\tag{14} ds^2 = \Phi^2(r) (dt^2 - dr^2 - dz^2) - \frac{1}{\Phi^2(r)} \, r^2 d\varphi^2, \end{equation} where $\Phi(r) = 1 + C \, r^2$ (this function is similar to (3) above). So how can I verify that metric (1), which with the solution above explicitly reads \begin{equation}\tag{15} ds^2 = d\tilde{t}^2 - \frac{1}{(1 + B \, \tilde{r}^2)^2}(d\tilde{r}^2 + \tilde{r}^2 \, d\varphi^2) - \sin^2{\omega \tilde{t}} \, d\tilde{z}^2, \end{equation} is equivalent to metric (14)? Answer: This is not Melvin's magnetic universe but the Bertotti–Robinson spacetime, which is simply a product $\text{AdS}_2 \times S_2$ of two-dimensional anti-de Sitter spacetime and a sphere. Equation (3) together with $a(t)$ being constant makes this a product spacetime, and the line element $ds_{2}^2=g(r)^2 (dx^2+dy^2)$ is a metric of $S_2$ or $H_2$ (depending on the sign of $B$) in stereographic projection. The $t$–$z$ part is simply an $\text{AdS}_2$ metric.
In higher dimensions this coordinate choice produces hyperbolic spatial slices. The isometry group of such spacetime is a product $SO(2,1)\times SO(3)$ of isometries of its factors acting transitively, so all points of this spacetime are equivalent, this universe is filled with homogeneous EM field. A paper that highlights the differences and similarities between Bertotti–Robinson's and Melvin's solutions is Garfinkle, D., & Glass, E. N. (2011). Bertotti-Robinson and Melvin Spacetimes, doi:10.1088/0264-9381/28/21/215012, arXiv:1109.1535.
{ "domain": "physics.stackexchange", "id": 59936, "tags": "general-relativity, cosmology, magnetic-fields, symmetry, metric-tensor" }
Can an analog to the Meissner Effect be proposed for matter and gravitational fields?
Question: In the study of electromagnetic fields and quantum electrodynamics we observe and theorize on the behavior of superconductivity and the Meissner effect. Has an analog of these behaviors been proposed for gravitational field theory and matter? If so, what possibilities could these propositions lead to? Answer: The gravitational analogue of magnetism is called "gravitomagnetism" (and the general mathematical analogy between Maxwell's equations and gravitation in general relativity is called gravito-electromagnetism), which deals with the gravitational interactions of currents of mass/energy, just as magnetism deals with interactions of currents of charge. According to the book The Measurement of Gravitomagnetism: A Challenging Enterprise by physicist Lorenzo Iorio, there is no gravitomagnetic analogue of the Meissner effect: see section 4.3 on pages 45-47, which can be read on Google Books here. Quoting just the introduction and conclusion of this section: If however one wants to consider a real analog of the pure Meissner effect, one should treat the pure gravito-magnetic case. To be more definite, we should envisage a situation where matter can flow without friction in response to a gravito-electromagnetic field, i.e. matter in a pure superfluid state. ...in principle, in superfluids there is no analog of the Meissner effect in superconductors. In other words, recalling a remark by Pascual-Sánchez [142], we can say that superfluids in gravito-magnetic fields display a paramagnetic-like behavior rather than a diamagnetic-like one.
{ "domain": "physics.stackexchange", "id": 16749, "tags": "electromagnetism, gravity, superconductivity, matter" }
IS and matching
Question: I have 2 different but similar problems: one belongs to NP and one to L, and I don't understand why. First problem: Input: an undirected graph G with n^2 vertices. Question: Does there exist in G a matching of size n AND an independent set of size n? Second problem: Input: an undirected graph G with n^2 vertices. Question: Does there exist in G a matching of size n OR an independent set of size n? The AND problem is in NP and the OR problem is in L. Can someone explain why? Thanks. Answer: Note that both problems are in NP. It's just that the first is also NP-hard, and the second is also in L. Hint for the first problem: Prove that it's NP-hard by reduction from independent set. Given a graph $G=(V,E)$ and a parameter $k \leq |V|$, add an independent set of size $|V|-k$ and a clique of size $|V|^2-|V| \geq 2|V|$ (the case $|V| < 3$ has to be handled separately), and connect all vertices of the clique to all other vertices. Hint for the second problem: Suppose that $G$ doesn't have an independent set of size $n$. Show that it has a matching of size $n$ by dividing the $n^2$ vertices into $n$ sets of $n$ vertices, and picking an edge from each.
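The second hint is constructive enough to run. Below is my own Python sketch (not from the answer) of the pigeonhole argument: split the $n^2$ vertices into $n$ disjoint blocks of $n$; any block with no internal edge is an independent set of size $n$, and otherwise one edge per block gives a matching of size $n$, since edges taken from disjoint blocks never share a vertex.

```python
import itertools

def matching_or_independent_set(n, edges):
    """For a graph on n*n vertices labelled 0 .. n*n-1, return either an
    independent set of size n or a matching of size n, using pigeonhole:
    split the vertices into n disjoint blocks of n vertices each."""
    edge_set = {frozenset(e) for e in edges}
    matching = []
    for b in range(n):
        block = range(b * n, (b + 1) * n)
        internal = [e for e in itertools.combinations(block, 2)
                    if frozenset(e) in edge_set]
        if not internal:                 # edgeless block = independent set
            return "independent set", list(block)
        matching.append(internal[0])     # blocks are disjoint, so edges are too
    return "matching", matching
```

Note that this implies the OR question is always answered "yes" on a graph with $n^2$ vertices, which is the intuition behind it sitting in such a low complexity class.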
{ "domain": "cs.stackexchange", "id": 4008, "tags": "complexity-theory, time-complexity, space-complexity, np" }
Relation between Radiance and Irradiance
Question: I know that radiance is expressed as $$[\text{radiance}] = \frac{\rm W}{\rm {sr} \cdot m^2}$$ and $$[\text{irradiance}] = \frac{\rm W}{\rm m^2}$$ but what is the relation between these two quantities? Is irradiance commonly used to refer to power reflected on a surface? And radiance from a direct source? Answer: As you stated, both radiometry units seem to be similar. While irradiance refers to incoming power, radiance is used for two cases: angle-dependent diffuse reflection (BRDF) and emission from light sources. E.g. the radiance in the direction of the optical axis of an LED is higher than its radiance at an angle of 15°. Optical simulations / ray tracing calculate the irradiance on surfaces. Your last two questions are mainly correct. However, I would rephrase the first statement: irradiance is commonly used to refer to power incident on a surface.
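For completeness, the formal relation between the two quantities: irradiance is the cosine-weighted integral of radiance over solid angle, $E = \int_\Omega L \cos\theta \, d\omega$. A small numerical sanity check in Python (my own illustration): for a constant (Lambertian) radiance $L$ over the hemisphere, the integral evaluates to $E = \pi L$.

```python
import numpy as np

# E = ∫ L(θ, φ) cosθ dω over the hemisphere, with dω = sinθ dθ dφ.
# For constant radiance L, the φ integral is a factor 2π and the θ
# integral of cosθ sinθ from 0 to π/2 is 1/2, so E = π·L.
L_val = 2.0                               # radiance in W/(sr·m²), test value
theta = np.linspace(0.0, np.pi / 2, 2001)
dtheta = theta[1] - theta[0]
f = L_val * np.cos(theta) * np.sin(theta)
E = 2 * np.pi * np.sum(0.5 * (f[1:] + f[:-1])) * dtheta   # trapezoid rule
print(E, np.pi * L_val)   # both ≈ 6.2832 W/m²
```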
{ "domain": "physics.stackexchange", "id": 17765, "tags": "optics, radiometry" }
Bernstein–Vazirani problem in book as exercise
Question: I've solved Exercise 7.1.1 (Bernstein–Vazirani problem) of the book "An introduction to quantum computing" (Mosca et al.). The problem is the following: Show how to find $a \in Z_2^n$ given one application of a black box that maps $|x\rangle|b\rangle \to |x\rangle |b \oplus x · a\rangle$ for some $b\in \{0, 1\}$. I'd say we can do it like this: First I go from $|0\rangle|0\rangle \to \sum_{i \in \{0,1\}^n}|i\rangle| + \rangle$ using a QFT and a Hadamard. Then I apply the oracle: $$ \sum_{i \in \{0,1\}^n}(-1)^{(i,a)} |i\rangle| + \rangle $$ Then I read the phase with a Hadamard (since we are in $Z_2^n$, our QFT is a Hadamard): $$ |a\rangle |+ \rangle $$ I think it is correct. Do you agree? Answer: This is not correct: you need to use the state $|-\rangle=(|0\rangle-|1\rangle)/\sqrt{2}$ instead of $|+\rangle$. The important thing is that you've missed showing how the black box map that you've stated gives the oracle output that you've stated. To see this, apply the map on $$ |x\rangle|+\rangle\mapsto|x\rangle(|0\oplus x\cdot a\rangle+|1\oplus x\cdot a\rangle)/\sqrt{2}=|x\rangle(|0\rangle+|1\rangle)/\sqrt{2}. $$ When the $|+\rangle$ state is there, you get no phase. Meanwhile, with the $|-\rangle$ state, $$ |x\rangle|-\rangle\mapsto|x\rangle(|0\oplus x\cdot a\rangle-|1\oplus x\cdot a\rangle)/\sqrt{2}=\left\{\begin{array}{cc} |x\rangle(|0\rangle-|1\rangle)/\sqrt{2} & x\cdot a=0 \\ |x\rangle(|1\rangle-|0\rangle)/\sqrt{2} & x\cdot a=1\end{array}\right.. $$ This can simply be written as $(-1)^{x\cdot a}|x\rangle|-\rangle$.
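The full state evolution can be checked with a small classical simulation. In the sketch below (my own, not from the book), the $|-\rangle$ ancilla is folded into the phase, so the oracle reduces to multiplying the amplitude of $|x\rangle$ by $(-1)^{x\cdot a}$:

```python
import numpy as np

def bernstein_vazirani(a):
    """Classically simulate Bernstein-Vazirani for a hidden bit string a
    (given as an int). The |-> ancilla is folded into the oracle, which
    multiplies the amplitude of |x> by (-1)^(x.a)."""
    n = max(a.bit_length(), 1)
    N = 2 ** n
    state = np.full(N, 1 / np.sqrt(N))            # H^n applied to |0...0>
    for x in range(N):                            # oracle via phase kickback
        state[x] *= (-1) ** (bin(x & a).count("1") % 2)
    H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
    Hn = H
    for _ in range(n - 1):                        # build H^n as a Kronecker product
        Hn = np.kron(Hn, H)
    state = Hn @ state                            # final layer of Hadamards
    return int(np.argmax(np.abs(state)))          # all amplitude sits on |a>

print(bernstein_vazirani(0b1011))  # prints 11: one oracle call recovers a
```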
{ "domain": "quantumcomputing.stackexchange", "id": 105, "tags": "quantum-algorithms" }
Using prototype to simulate extending a 'base class' object in JavaScript
Question: I'm investigating using prototypes in JavaScript to simulate using 'base classes' which can be 'extended'. I've created an example where dialogBox is my 'base' and welcomeMessage is the 'extension'. // OBJECT CONSTRUCTORS var dialogBox = function(xPosition, yPosition, width, height){ this.x = xPosition; this.y = yPosition; this.width = width; this.height = height; this.draw = function(){ STAGE.fillRect(this.x, this.y, this.width, this.height); STAGE.strokeRect(this.x, this.y, this.width, this.height); }; }; var welcomeMessage = function(messageString){ this.message = messageString; this.addMessage = function(){ STAGE.strokeText(this.message, this.x + 10, this.y + 10, this.width-10); console.log(this.message); }; }; welcomeMessage.prototype = new dialogBox(); This works as intended (or at least gives the appearance of working as intended) - see this fiddle. However, the way I have to instantiate these objects seems rather inelegant: var box1 = new dialogBox(20, 20, 300, 100); var box2 = new welcomeMessage('hello there'); box2.x = 20; box2.y = 140; box2.width = 300; box2.height = 100; // wouldn't it be great if I could pass these properties as args? I wrote these constructors using the MDN Object.prototype reference and am following their example quite closely, but I've observed from reading around Stack Overflow / Code Review many examples where prototyping is used very differently, often in relation to getters and setters. This is leading me to question if I have quite grasped the 'proper' use of prototype. Any feedback is welcome, but I am particularly interested in identifying any potential problems my approach could cause when applied to more complex situations. Answer: The way you are approaching the problem is essentially correct. 
What you want to accomplish is also possible in a very straightforward way: var welcomeMessage = function(messageString, xPosition, yPosition, width, height){ this.message = messageString; this.x = xPosition; this.y = yPosition; this.width = width; this.height = height; this.addMessage = function(){ STAGE.strokeText(this.message, this.x + 10, this.y + 10, this.width-10); console.log(this.message); }; }; welcomeMessage.prototype = new dialogBox(); In this way, the properties can be assigned through the constructor of the subclass as well. But we can go further. There is duplication between the constructors. We can eliminate this by calling the super constructor on the subclassed object: var welcomeMessage = function(messageString, xPosition, yPosition, width, height){ this.message = messageString; dialogBox.call(this, xPosition, yPosition, width, height); this.addMessage = function(){ STAGE.strokeText(this.message, this.x + 10, this.y + 10, this.width-10); console.log(this.message); }; }; welcomeMessage.prototype = new dialogBox(); Now we have DRYed up our constructor code. This should work fine. I just want to make a few additional points. First, rather than adding methods in the constructor, you should add them to the constructor's prototype: dialogBox = function(...) { ... } dialogBox.prototype.draw = function() { ... } When you add methods in the constructor, they are recreated every time a new object is made, whereas if they are added to the prototype, they are created only once. Having them in the constructor also means that it has to be called to add them to a subclass. There may be reasons not to do this. If your environment supports it, you should use Object.create(dialogBox.prototype) to set the prototype rather than new dialogBox(). This avoids calling the constructor just to set up inheritance. This is good in cases where the super class constructor does work or requires parameters be passed in. 
That makes the final version: var dialogBox = function(xPosition, yPosition, width, height){ console.log('dialogBox Constructor'); this.x = xPosition; this.y = yPosition; this.width = width; this.height = height; }; dialogBox.prototype.draw = function() { STAGE.fillRect(this.x, this.y, this.width, this.height); STAGE.strokeRect(this.x, this.y, this.width, this.height); }; var welcomeMessage = function(messageString, xPosition, yPosition, width, height){ this.message = messageString; dialogBox.call(this, xPosition, yPosition, width, height); }; welcomeMessage.prototype = Object.create(dialogBox.prototype); welcomeMessage.prototype.addMessage = function() { STAGE.strokeText(this.message, this.x + 10, this.y + 10, this.width - 10); console.log(this.message); }; JSFiddle As a side note, the prototype must be set before any new methods are added, so welcomeMessage.prototype = ... cannot come after you add any methods.
{ "domain": "codereview.stackexchange", "id": 12159, "tags": "javascript, prototypal-class-design" }
Using nuclear devices to terraform Mars: Elon Musk's nuclear proposal?
Question: Elon Musk has recently suggested Using nuclear devices to terraform Mars. In the past, comet related ideas were mooted, but Musk seems, to me anyway, to be a man in a hurry and perhaps his idea has some merit, as waiting around for suitable comets may take a while and involve large energy expenditure. The businessman has often stated that he thinks humans should colonize Mars, and now it seems he’ll stop at nothing to get his way. “It is a fixer-upper of a planet,” Musk told Colbert. “But eventually you could transform Mars into an Earth-like planet.” There’s a fast way and a slow way to do that. The slow way involves setting up lots of pumps and generators to warm up the red planet so that its frozen carbon dioxide melts and wraps the planet in a thicker atmosphere. The thicker blanket of CO2 helps the planet warm up further, thus melting more carbon dioxide, and the positive feedback loop continues. (This is essentially what we’re doing on Earth, and it’s called global warming.) There's a simpler and cheaper way to warm up Mars. “The fast way is, drop thermonuclear weapons over the poles,” said Musk. I can't immediately find an estimate of the volume of the carbon dioxide on (or under) the Martian surface but this source Water on Mars gives an estimate of water ice volume: Water on Mars exists today almost exclusively as ice, with a small amount present in the atmosphere as vapor. The only place where water ice is visible at the surface is at the north polar ice cap. Abundant water ice is also present beneath the permanent carbon dioxide ice cap at the Martian south pole and in the shallow subsurface at more temperate latitudes. More than five million cubic kilometers of ice have been identified at or near the surface of modern Mars, enough to cover the whole planet to a depth of 35 meters. Even more ice is likely to be locked away in the deep subsurface. 
Physically, and I do want to stick to the physics, particularly the atmospheric physics, rather than engineering, is this idea feasible? Has Musk done his homework as regards: The amount of nuclear material needed? (And the undoubted outcry over its transport from Earth using potentially highly explosive rockets in the first place)? Is the gravity of Mars strong enough to retain the water vapour involved? I am guessing it is. Will atmospheric pressure help retain the water vapour produced or is his idea enough to produce a relatively dense atmosphere? Finally, Musk is a businessman selling a possible project, and that implies, understandably I suppose, that dramatic publicity is involved. Would a set of mirrors in orbit do the job just as efficiently, although over a longer timescale? Answer: Has Musk done his homework? With regard to the basic idea of using nuclear weapons to release CO2 and thereby warm Mars, no, he hasn't. I suspect this was either Bored Elon Musk speaking, or perhaps the Elon Musk who didn't quite deny being a super villain (1-900-MHA-HAHA Elon Musk?) in that interview with Colbert. CO2's enthalpy of sublimation is about 26 kJ/mol, or 590 kJ/kg. The Tsar Bomba released 210 petajoules of energy. Suppose Musk manages to explode a Tsar Bomba equivalent over one of Mars' poles, with all of the energy going into sublimating CO2, and all of that newly created gaseous CO2 remaining resident in the atmosphere for a while. That's an extra 355 megatons of CO2 added to Mars' atmosphere. That sounds like a lot. It's not. It's a tiny, tiny amount compared to the 25 teratons of CO2 in Mars' atmosphere. We've just blown up the biggest device invented by humankind and have only increased Mars' atmospheric CO2 content by an immeasurably small amount. Most of Mars' CO2 is in its atmosphere, not its icecaps. If we used 20,000 or so Tsar Bomba equivalents we could theoretically increase Mars' CO2 content by 25 to 33%. That's not going to do much.
(By way of comparison, it's the consequences of a doubling of the Earth's atmospheric CO2 content that have people concerned.) Even though Mars has considerably more CO2 in its atmosphere than does the Earth, the greenhouse effect on Mars is considerably smaller than it is on Earth. There are a number of reasons for this: Adding ever more CO2 to an atmosphere has a logarithmic effect. Adding more CO2 to Mars' already saturated atmosphere won't have much of an effect. Mars' low gravitational acceleration means the dry adiabatic lapse rate on Mars is less than half that on Earth. Greenhouse gases move an atmosphere away from an isothermal atmosphere toward an adiabatic atmosphere. Mars' thin atmosphere and low lapse rate alone explain most of why the greenhouse effect on Mars is significantly less than that on Earth. There are two bands in the thermal infrared where CO2 is a very good absorber/emitter of radiation. One peaks at Earth equatorial temperatures (Mars doesn't get anywhere near that hot), the other peaks at Earth polar temperatures (that's Mars). That lower peak means that, except for polar regions, Earth's middle troposphere to upper stratosphere are extremely opaque to infrared radiation. Mars' atmosphere, on the other hand, gets increasingly more transparent in the infrared with increased altitude. Nukes could help warm Mars. Mars' energy budget varies considerably with Mars' weather. Mars occasionally suffers planet-wide dust storms. While those dust storms increase Mars' albedo, they change the energy flux to and from the surface by more than enough to compensate for this lost incoming energy. If the goal is to heat Mars up, it would make a lot more sense to nuke Mars' equatorial regions instead of its poles. We'd have to do this on a regular basis to have any effect. Whether or not this is a good idea is a different question.
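The answer's numbers are easy to reproduce. A back-of-the-envelope check in Python, using the figures quoted above (210 PJ per Tsar Bomba, ~590 kJ/kg sublimation enthalpy, ~25 teratons of atmospheric CO2):

```python
# How much CO2 ice can one Tsar Bomba sublimate, and how does that
# compare with what is already in Mars' atmosphere?
tsar_bomba_J = 210e15            # ~210 PJ yield
sublimation_J_per_kg = 590e3     # ~26 kJ/mol over 0.044 kg/mol
co2_atmosphere_kg = 25e15        # ~25 teratons of CO2 in the atmosphere

sublimated_kg = tsar_bomba_J / sublimation_J_per_kg
print(f"CO2 released per bomb: {sublimated_kg / 1e9:.0f} Mt")                # ~356 Mt
print(f"fraction of existing CO2: {sublimated_kg / co2_atmosphere_kg:.1e}")  # ~1.4e-05
```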
{ "domain": "physics.stackexchange", "id": 24860, "tags": "thermodynamics, planets, atmospheric-science, rocket-science, fusion" }
Derivation of canonical position-momentum commutator relation
Question: We know that the position-momentum commutator is fundamental in quantum mechanics, but would it be possible to derive it starting from a different set of first principles, more specifically starting (in Dirac notation) from 1) Closure relations $ \int|x\rangle \, \langle x| dx $ (both momentum and position bases) 2) $ \left\langle \left.x'\right|x\right\rangle = \delta \left(x-x'\right) $ Orthonormality relations for both bases 3) $ \left\langle \left.x\right|p\right\rangle = e^{ipx} $ the assumption that momentum eigenstates are plane waves in the position representation Answer: Implicit in the assumption of "position" and "momentum" bases should be the eigenvalue eqs. for the corresponding observables, $\hat x\; |x\rangle = x |x\rangle$ and $\hat p\; |p\rangle = p |p\rangle$, although the expression of $\hat p$ in the position basis is not necessary. With this understood, consider the matrix element $$\langle x |(\hat x \hat p - \hat p \hat x)| x' \rangle = (x-x')\langle x |\hat p | x' \rangle =\\ = (x-x')\int{dp_1 \int{dp_2 \langle x |p_1\rangle \langle p_1|\hat p |p_2\rangle \langle p_2 | x'\rangle }}= \\ = (x-x')\int{dp_1 \int{dp_2 e^{ip_1x}\delta(p_1-p_2)p_2e^{-ip_2x'}}} = \\ = (x-x')\int{dp_1 \;p_1 e^{i(x-x')p_1}} = -i(x-x')\frac{\partial}{\partial x}\int{dp_1 \; e^{i(x-x')p_1}}=\\ = -i(x-x')\frac{\partial}{\partial x}\delta(x-x') = i\delta(x-x') = i\langle x|x'\rangle $$ where use was made of the identity $(x-a)\delta'(x-a) = -\delta(x-a)$. Since $|x\rangle$, $|x'\rangle$ are arbitrary, $$ [\hat x, \hat p] = i $$ follows necessarily.
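The algebra can also be sanity-checked numerically: discretise $\hat x$ as multiplication by $x$ and $\hat p = -i\,d/dx$ as a central difference (with $\hbar = 1$ and the plane-wave convention above), and verify that the commutator acts on a smooth wavefunction as multiplication by $i$, away from the grid edges. A sketch:

```python
import numpy as np

# Check [x, p] psi = i psi (hbar = 1) on a grid: x acts by multiplication,
# p = -i d/dx is approximated with central differences via np.gradient.
x = np.linspace(-20.0, 20.0, 2000)
dx = x[1] - x[0]
psi = np.exp(-x**2 / 2)                    # smooth test wavefunction

def p_op(f):
    return -1j * np.gradient(f, dx)

comm_psi = x * p_op(psi) - p_op(x * psi)   # (x p - p x) psi

mask = np.abs(x) < 5                       # avoid grid edges, where psi is tiny
ratio = comm_psi[mask] / psi[mask]
print(np.allclose(ratio, 1j, atol=1e-2))   # True: the commutator acts as i
```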
{ "domain": "physics.stackexchange", "id": 27007, "tags": "quantum-mechanics, operators, momentum, fourier-transform, commutator" }
Algorithmic complexity of this algorithm to find all ordered permutations of length X
Question: I have written a recursive function to generate the list of all ordered permutations of length X for a list of chars. For instance: ['a', 'b', 'c', 'd'] with X=2 will give [['a', 'a'], ['a', 'b'], ['a', 'c'], ['a', 'd'], ['b', 'a'], ['b', 'b'], ..., ['d', 'd']] I'm not sure about its algorithmic complexity though (at least I know it's pretty horrible). I would say it's something around: O(X * N^(L + X)) (where L is the number of different chars, 4 here because we have 'a', 'b', 'c', 'd', and X the length of the permutations we want to generate). Because I have 2 nested loops, which will be run X times (well, X-1 because of the special case when X = 1). Is it correct? def generate_permutations(symbols, permutations_length): if permutations_length == 1: return [[symbol] for symbol in symbols] tails = generate_permutations(symbols, permutations_length-1) permutations = [] for symbol in symbols: for tail in tails: permutation = [symbol] + tail permutations.append(permutation) return permutations print(generate_permutations(['a', 'b', 'c', 'd'], 2)) By the way: I know this is not idiomatic Python and I apologize if it's ugly but it's just some prototyping I'm doing before writing this code in a different, less expressive language. And I also know that I could use itertools.permutations to do this task. By the way, I'd be interested if someone happens to know the algorithmic complexity of itertools' permutations function. Answer: def generate_permutations(symbols, permutations_length): These are not permutations. They are Cartesian products. In addition, calling the function generate_something suggests that what's returned might be a generator, which is probably what you want for a function which returns a stupendously large data structure, but is not what you get from this function.
for symbol in symbols: for tail in tails: permutation = [symbol] + tail permutations.append(permutation) If you use extend instead of append then you give better hints as to how much underlying buffers might need to be increased: for tail in tails: result.extend([symbol] + tail for symbol in symbols) Performance: with an alphabet of size \$L\$ let \$T_L(X)\$ be the cost of calling this function for products of length \$X\$. Then \$T_L(1) = L\$ and \$T_L(X+1) = T_L(X) + L L^X (X+2)\$ assuming that constructing the list [symbol] + tail takes time proportional to its length and extending the list of results take amortised constant time per element added. This gives \$\Theta(L^{X}X)\$ complexity. The same complexity but much better memory efficiency can be achieved with a true generator approach which is based on the fact that all you really need to do is count to \$L^X\$ and convert into base \$L\$.
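The counting-based generator the review alludes to might look like this (my own sketch): interpret a counter running from 0 to L**X - 1 as a base-L numeral whose digits index into symbols. It yields tuples in the same order as the original function while keeping only one result in memory at a time.

```python
def iter_products(symbols, length):
    """Lazily yield every ordered tuple of `length` symbols by converting
    a counter 0 .. L**length - 1 into base L (most significant digit first)."""
    L = len(symbols)
    for counter in range(L ** length):
        digits = []
        for _ in range(length):
            counter, r = divmod(counter, L)
            digits.append(symbols[r])
        yield tuple(reversed(digits))

print(list(iter_products(['a', 'b'], 2)))
# [('a', 'a'), ('a', 'b'), ('b', 'a'), ('b', 'b')]
```

This matches itertools.product(symbols, repeat=length), which is the standard-library way to get the same sequence lazily.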
{ "domain": "codereview.stackexchange", "id": 30384, "tags": "python, algorithm, reinventing-the-wheel, combinatorics, complexity" }
Construction Entity and Construction Complex
Question: I am working on the engineering design of a new irrigation system, comprising 4 pump stations and 3 lengthy water mains between them. Definitions by ISO 12006-2: Construction Entity is an independent construction result of significant scale serving at least one user activity or function. Construction Complex means two or more adjacent construction entities collectively serving one or more user activity or function. In my case, the irrigation system must be identified as a construction complex, comprising 4 pump stations + 3 pipe mains = 7 'construction entities'. How can I collectively and separately name the mentioned parts of the project? (In my native language those all are 'objects'; however, the definition of 'construction object' by ISO is something different). Please advise: can I use the term 'entity' in English-written documents (e.g. 'list of newly built entities', 'entity # 1'), or is it better to use such terms as works, structure, facility, unit, site, plant? So that it is understandable to English speakers in a construction environment. (Note that the words used do not need to conform to ISO 12006-2; I have just used it as a reference.) What terms have you actually encountered and used in such cases? Answer: I am a native English speaker and practising civil engineer who had never heard of ISO 12006-2 until this question, and if you had talked about "Construction Entities" I wouldn't have had a clue what you were on about. Consider using "4 pump stations and 3 water mains", as this is clear. If you want a word which covers all 7 "construction entities" I could comprehend: 7 separate works 7 separate components 7 separate subsystems (though this may be more suitable for relatively complicated mains) I don't think "structure" would be appropriate for a water main, and your other suggestions sound even less appropriate.
{ "domain": "engineering.stackexchange", "id": 1118, "tags": "civil-engineering, construction-management" }
Storing data into an SQL table using multiple threads and queues
Question: I have a process running in a separate thread which reads data over Ethernet and raises an event for each read. Then, this data is processed over a tree of classes. At the top of my class tree, I want to store the processed data to SQL tables. This read rate can be rather fast and bursty at times, but on average it's slower than my write rate to SQL. This is why I want a queue. Also I want my window not to freeze, so I want a separate thread. The code below works but I'm fairly inexperienced in multi threading. Is this a good and robust way to accomplish my task? Private addLogQueue As New Queue Private dequeueing As Boolean = False Public Sub addLogHandler(ByVal machineName As String, ByVal LogTime As Date, ByVal EventLog As String, ByVal EventValue As String) addLogQueue.Enqueue("INSERT INTO ... some SQL CODE") Dim Thread1 As New System.Threading.Thread(AddressOf addLog) Thread1.Start End Sub Private Sub addLog() If Not dequeueing Then dequeueing = True While addLogQueue.Count <> 0 Try SQLCon.Open() SQLCmd = New SqlCommand(addLogQueue.Dequeue(), SQLCon) SQLCmd.ExecuteNonQuery() SQLCon.Close() Catch ex As Exception MsgBox(ex.Message) If SQLCon.State = ConnectionState.Open Then SQLCon.Close() End If End Try End While dequeueing = False End If End Sub After thinking a bit I realize I might not even need a queue or the dequeueing boolean; could I just start a new thread for every write instead? I added the queue before I added the multi thread because at times I had to write before the SQLcon was even closed. Since I've added a thread, should I even keep the queue? Answer: Single-Threading Style: Managing resources: You should be using using to manage your connection. It completely eliminates that... unsightly Try-Catch-Block. Compare the following snippet from msdn with a similarly stripped version of your code: Using connection As New SqlConnection(connectionString) connection.Open() ' Do work here; connection closed on following line.
End Using Try SQLCon.Open() ' Do work here; connection closed on following line. SQLCon.Close() Catch ex As Exception MsgBox(ex.Message) If SQLCon.State = ConnectionState.Open Then SQLCon.Close() End If End Try Naming While SQLCon and SQLCmd are ... conventional names, they are a little short and have the air of systems-hungarian... I'd prefer connection and command. Also Thread1 is really uninformative; InsertingThread may be the better option. Whitespace I like to keep separate Subs clearly outlined by an empty line before starting with [Modifier] Sub ... Comments and documentation You may have removed it, but ... this code is extremely undercommented and underdocumented. A comment here and there can help a little with easing the reading. Then again this code is rather simple and obvious by the names... Multi-Threading Style: First off, you currently only allow a single Thread to run... well almost, because of possible interleaving and nonatomic comparison operations. Consider the following: Thread 1 Thread 2 - If Not dequeueing - If Not dequeueing - dequeueing = True - dequeueing = True Suddenly you have 2 threads running, when you could've had everything easy. The queue docs mention ConcurrentQueue(Of T) when you "need to access the queue across multiple threads". This makes the failed attempt at synchronizing with a shared boolean unnecessary and overall improves the code quality. End result: After applying my critiques I end up with the following code: Private addLogQueue As New ConcurrentQueue(Of String) Public Sub addLogHandler(ByVal machineName As String, ByVal LogTime As Date, ByVal EventLog As String, ByVal EventValue As String) addLogQueue.Enqueue("INSERT INTO ...
some SQL CODE") Dim InsertingThread As New System.Threading.Thread(AddressOf addLog) InsertingThread.Start End Sub Private Sub addLog() Using (connection As New SqlConnection(connstring)) connection.Open() Dim statement As String While addLogQueue.TryDequeue(out statement) command = New SqlCommand(statement, connection) command.ExecuteNonQuery() End While End Using End Sub
{ "domain": "codereview.stackexchange", "id": 14597, "tags": "sql, multithreading, vb.net" }
How to identify Overfitting in RandomForestClassifier?
Question: I'm building a sentiment classification model using RandomForestClassifier. I got a training accuracy of 99.65 and a cross-validation (RepeatedStratifiedKFold, 5 folds) accuracy of 97.29. I used the f1 score for metrics. The dataset size is 5184 samples. The dataset is imbalanced, so I'm using the class_weight hyper-parameter as 'balanced'. I have done hyper-parameter tuning also. Following are the parameters I tuned - estimator = RandomForestClassifier(random_state=42, class_weight='balanced', n_estimators=850, min_samples_split=4, max_depth=None, min_samples_leaf=1, max_features='sqrt') I'm thinking the model is overfitting. I'm also wondering: is this issue caused by the class imbalance? Any immediate help on this is much appreciated. Answer: There are quite a lot of features for the number of instances, so it's indeed likely that there's some overfitting happening. I'd suggest these options: Forcing the decision trees to be less complex by setting the max_depth parameter to a low value, maybe around 3 or 4. Run the experiment with a range of values (e.g. from 3 to 10) and observe the changes in performance (preferably use a validation set, so that when the best parameter is found you can do the final evaluation on a different test set). Reducing the number of features: remove rare words (i.e. those which appear less than $N$ times) and/or use some feature selection method.
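One way to act on the max_depth advice is a validation curve: sweep the parameter and compare train vs. cross-validated f1 at each value; a persistently large gap signals overfitting. The sketch below uses synthetic stand-in data from make_classification (replace X, y with your own features and labels), so the printed numbers are illustrative only:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import validation_curve

# Hypothetical stand-in for the real text features / sentiment labels.
X, y = make_classification(n_samples=500, n_features=50,
                           weights=[0.8, 0.2], random_state=42)

depths = [2, 4, 6, 8, 12, 16]
train_scores, val_scores = validation_curve(
    RandomForestClassifier(n_estimators=50, class_weight="balanced",
                           random_state=42),
    X, y, param_name="max_depth", param_range=depths, cv=5, scoring="f1",
)

# A train/validation gap that keeps growing with depth indicates overfitting.
for d, tr, va in zip(depths, train_scores.mean(axis=1), val_scores.mean(axis=1)):
    print(f"max_depth={d:2d}  train f1={tr:.3f}  cv f1={va:.3f}  gap={tr - va:.3f}")
```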
{ "domain": "datascience.stackexchange", "id": 8252, "tags": "random-forest, decision-trees, overfitting, class-imbalance, hyperparameter-tuning" }
What is diamagnetic in diamagnetic dilution?
Question: I'm reading a paper where the authors study a peptide solution where each peptide may have two spin labels attached to them. They put the peptides on the surface and study them using a diamond sensor. They perform the experiment in two ways. First, they put only peptides with attached spin labels. Then, they dilute the solution with peptides that do not have spin labels, in a ratio 1:10. They refer to this dilution as "diamagnetic dilution". I do not know why it's called diamagnetic. I've tried to look for a definition but the term only appears in highly technical papers that I can't understand very well. Does anyone know why this is called "diamagnetic dilution"? Ben Edit: The article I'm referring to is this one: https://advances.sciencemag.org/content/3/8/e1701116 But my question is general. Answer: The term diamagnetic dilution (in general) implies that the material used for dilution has all its electrons paired, so we will not see its response in an electron spin resonance experiment or any other experiment which can sense unpaired spins.
{ "domain": "chemistry.stackexchange", "id": 13600, "tags": "molecular-structure, nmr-spectroscopy, spin" }
Why do some solar eclipses' umbra cross arctic/antarctic regions?
Question: The ecliptic (Earth's orbital plane) is inclined 23.4 degrees. This is the same as Earth's axial tilt. The Moon's orbit is inclined 5.145 degrees to the ecliptic. Therefore, the maximum latitude where a solar eclipse produces an umbra should be 28.545 N and 28.545 S. But this is clearly not the case. Wikipedia has a huge list of solar eclipses with charts. Here's one that goes very near the south pole. How is this possible? Edit: Actually I missed something. Since the Moon and Sun must be in the same place, and the Sun can never exceed 23.4 degrees declination, solar eclipses must be bound by 23.4 N/S. I should not have added the two angles. Thanks to user:berrycenter for pointing it out. Answer: I wanted to ask this question even though I realized the correct geometry in the middle of typing it. So I will answer this myself. First of all, the sub-lunar point never exceeds 28.545 N/S. If you're standing at the sub-lunar point, the Moon will be directly overhead. The Moon never wanders outside this latitude bracket. But that does not mean the Moon's shadow can't wander outside it. Here's one way it could happen. Edit: this is my attempt to draw something approximately to scale. I think I got the Earth/Moon sizes about right, but the Moon-Earth distance should be around 4x greater. And the solar rays are of course not parallel, but I don't think it's possible to draw or perceive such tiny angles at this scale. The green line is the ecliptic. The Sun and Moon are supposed to be aligned along a horizontal line, but for some reason I drew the solar rays half and half beside the green line, which is deceptive. I'll try to edit the pic when I can. In technical parlance, the Moon could be slightly above or slightly below its Node. If the Moon was exactly at its node, then the umbra would absolutely be centered somewhere between 28.5 N/S (actually 23.4 N/S because the Sun never goes beyond that).
But as you can see, the farther it gets from its node, the steeper the angle at which the shadow impacts Earth and the farther towards a pole it gets. Too far from the node, and the shadow will not intercept Earth at all and we won't have an eclipse. If you draw a line from the center of the Moon to the center of the Earth, you can see that the sub-lunar point is definitely within 28.5 degrees N/S. But the shadow does not necessarily follow that imaginary line. The shadow always follows the line from the Sun to the Moon. You could even imagine a very extreme case where the shadow falls on "the other side" of the pole. I think that may be the case here, since that eclipse looks really short.
{ "domain": "astronomy.stackexchange", "id": 2428, "tags": "orbit, the-moon, solar-eclipse, ecliptic" }
Big-O complexity when c is a tiny fraction
Question: Finding Big-O is pretty straightforward for an algorithm where $f(n)$ is $$f(n) = 3n^4 + 6n^3 + 10n^2 + 5n + 4$$ The lower powers of $n$ simply fall off because in the long run $3n^4$ outpaces all of them. And with $g(n) = 3n^4$ we say $f(n)$ is $O(n^4)$. But what would Big-O be if instead of 3 we were given a really small constant, for example $$f(n) = 0.0000000001n^4 + 6n^3 + 10n^2 + 5n + 4$$ Would we still say $f(n)$ is $O(n^4)$? Answer: Medium answer - yes. As you said for the previous case, in the long run $n^4$ outpaces all of them. This is still true despite the constant in front. Check it out: plot. Also, remember that $n^3$ and $n^4$ are both $O(n^4)$, and in fact are both $O(n^{10})$ because big-O is an upper bound. So you might ask "is there any tighter big-O bound on this function than $O(n^4)$, like $O(n^3)$?", and the answer would be no.
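A quick numerical sanity check makes this concrete (a sketch in Python; the crossover point $n = 6\times 10^{10}$, where $10^{-10}n^4 = 6n^3$, is computed here and is not part of the original answer):

```python
# Even with a coefficient of 1e-10, the n^4 term of
# f(n) = 1e-10*n^4 + 6n^3 + 10n^2 + 5n + 4 eventually dominates.

def f_terms(n):
    """Return (quartic term, sum of all lower-order terms) of f(n)."""
    quartic = 1e-10 * n**4
    rest = 6 * n**3 + 10 * n**2 + 5 * n + 4
    return quartic, rest

# Below the crossover, the lower-order terms still win...
q, r = f_terms(10**6)
assert q < r

# ...but past n = 6e10 (where 1e-10 * n^4 = 6 * n^3) the quartic term wins for good.
q, r = f_terms(10**12)
assert q > r
```

So a tiny constant only moves the crossover point further out; it never changes the asymptotic class.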
{ "domain": "cs.stackexchange", "id": 918, "tags": "complexity-theory, algorithm-analysis" }
cross-compiling cannot find -lrospack
Question: When cross-compiling for the NAO, I get the error cannot find -lrospack. This also shows up: collect2: ld returned 1 exit status make[2]: *** [../lib/libroslib.so] Error 1 make[1]: *** [CMakeFiles/roslib.dir/all] Error 2 I'm using the process found here: http://www.ros.org/wiki/nao/Tutorials/Cross-Compiling. Any suggestions? Thanks [edit] It happens when I type 'make' in the /media/external/ros/electric/ros directory. I have definitely checked that I have followed all prior steps (multiple checks). I am using ROS electric and the NAO's 1.12 sdk and toolchain. I have Ubuntu 11.10 installed. Originally posted by dougnets22 on ROS Answers with karma: 61 on 2012-01-10 Post score: 0 Original comments Comment by AHornung on 2012-01-10: Please edit this information into your original question to make answering it easier. What OS are you running this on? I'm not sure if NaoQI 1.12 requires any changes or tweaks but I know it works with 1.10. Comment by dougnets22 on 2012-01-10: Thanks for responding AHornung. It happens when I type 'make' in the /media/external/ros/electric/ros directory. I have definitely checked that I have followed all prior steps (multiple checks). I am using ROS electric and the NAO's 1.12 sdk and toolchain. Comment by AHornung on 2012-01-10: At which step does that happen, have you followed all the other steps? What system are you crosscompiling under with which ROS and NAO versions? Please be a little more specific. Answer: So, I decided to revert back to an earlier version of the NAO's SDK (1.10.44). I still get a library error. And I followed the instructions on the wiki to the letter. 
The error that I get is: /media/external/nao-cross-toolchain-1.10.44/cross/geode/bin/../libexec/gcc/i586-linux/4.3.3/cc1plus: error while loading shared libraries: libmpfr.so.1: cannot open shared object file: No such file or directory Originally posted by dougnets22 with karma: 61 on 2012-01-11 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by Stefan Osswald on 2012-01-11: For the libmpfr.so.1 problem, see http://answers.ros.org/question/3531/nao-cross_compilation-libmpfrso1-error
{ "domain": "robotics.stackexchange", "id": 7835, "tags": "ros, roslib, cross-compiling, nao" }
Big Crunch time
Question: For a universe that is flat, has matter and a cosmological constant, we can write the Friedmann equation in the following way: $$\frac{H^{2}}{H^{2}_{0}} = \frac{\Omega_{m,0}}{a^{3}} + (1 - \Omega_{m,0})$$ I understand that if the second term is negative ($\Omega_{m,0}>1$) then the final fate of the universe is that it is going to collapse again in the Big Crunch! I understand that I can calculate the maximum value of the scale factor doing $H^{2}=0$ and that I can rewrite the above equation as an ODE just with some algebra to have the following expression: $$ H_{0}t = \int_{0}^{a} \frac{da}{(\Omega_{m,0}/a + (1 - \Omega_{m,0})a^{2})^{1/2}}$$ that relates the cosmic time with the scale factor $a$. My question is then the following: How can I calculate the Big Crunch time, (e.g. the time the we will again have $a=0$? I though about it being the twice the above integral with the upper limit being the $a$ to which we have $H(t) = 0$, but it does no make a lot of sense for me. Also, the Introduction to cosmology by Barbara Ryden says that the time I'm looking for is (eq. 5.98): $$ t_\text{crunch} = \frac{2\pi}{3H_{0}}\frac{1}{(\Omega_{m,0}-1)^{1/2}}$$ What I think that suggest that I'm trying the wrong approach. Can someone help me? How can I find the above equation? Answer: That integral has a closed-form solution, $$H_0 t = \frac{2 \sinh^{-1} \left(\sqrt{\frac{1-Ω_{m,0}}{Ω_{m,0}}}\,a^{3/2}\right)}{3 \sqrt{1-Ω_{m,0}}} = \frac{2 \sin^{-1} \left(\sqrt{\frac{Ω_{m,0}-1}{Ω_{m,0}}}\,a^{3/2}\right)}{3 \sqrt{Ω_{m,0}-1}}.$$ For a recollapsing universe this gives you only the expanding half of the evolution, which ends when the arcsine reaches its maximum value of $π/2$. The total age is therefore $2 \frac{2 (π/2)}{3 \sqrt{Ω_{m,0}-1}}/H_0$.
{ "domain": "physics.stackexchange", "id": 72163, "tags": "cosmology, space-expansion" }
Why do tidal forces not violate conservation of energy?
Question: Europa is an example of a satellite which is heated by tidal forces. The orbit is constant, so how is energy conserved on Europa? Answer: Eventually the tidal forces will cause the object, in this case the moon, to settle into a circular orbit. The heat generated by tidal flexing is not free: it is drawn from the orbital energy, which is why the orbit circularizes as the moon is heated. The closer the orbit gets to perfectly circular, the less tidal flexing there is, or at least the more constant it becomes.
{ "domain": "physics.stackexchange", "id": 26744, "tags": "energy-conservation, tidal-effect, satellites" }
How to show a message on rosstage?
Question: I am working on a robot simulation. I want to show a message about my robot's status on the stage while the robot is running. For example, when my robot crashes, I want a message to pop up next to that robot saying "The robot is crashed". Can anyone help me, please? Thanks Originally posted by vv on ROS Answers with karma: 1 on 2013-08-20 Post score: 0 Answer: See this question. Originally posted by gustavo.velascoh with karma: 756 on 2013-08-20 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 15316, "tags": "ros" }
necessary and sufficient pumping lemma - bounded pumping variant
Question: There exists a variation of the pumping lemma with necessary and sufficient conditions for a language to be regular. According to that lemma: A language $L$ is regular iff $\exists k$, $\forall x\in \Sigma^k$, $\exists u,v,w\in \Sigma^*$ with $x=uvw$ and $|v|\ge 1$ such that: $$\forall i \ge 0,\ \forall z\in \Sigma^*: uvwz\in L \iff uv^iwz\in L.$$ My question to you is: is there any way to change the "for all $i \ge 0$" condition to "for all $0\le i\le N$" for some $N$ so that the lemma is still correct? That $N$ may be constant, may depend on the lemma's $k$, and so on. I can't find an approach to prove it, any ideas? Answer: Yes, what you have observed is correct. We can, in fact, always take $N=0$. Here is the variant of Jeffrey Jaffe's pumping lemma where strings are pumped up or down exactly once. A language $L$ is regular iff $\exists k$, $\forall x\in \Sigma^k$, $\exists u,v,w\in \Sigma^*$ such that $ x=uvw$, $|v|\ge 1$ and $\forall z\in \Sigma^*$, $$uvwz\in L \iff uwz\in L.$$ As you have observed, the article has basically proved the above fact. Here is a simpler proof of the "$\Longleftarrow$" direction. Let $[y]$ be the Myhill-Nerode equivalence class represented by string $y$. Suppose $|y|\ge k$. Let $x$ be the first $k$ symbols of $y$ and $t$ be the rest of $y$, i.e., $y=xt$ and $|x|=k$. By assumption, there exist $u,v,w\in\Sigma^*$ such that $x=uvw$, $|v|\ge 1$ and $$yz=x(tz)=uvw(tz)\in L\iff uw(tz)=(uwt)z\in L.$$ The equivalence above means $[y]=[uwt]$. Since $uwt$ is $y$ with $v$ deleted, $|uwt|\lt |y|$. That means the shortest string in a Myhill-Nerode equivalence class must be shorter than $k$. Since there are finitely many strings that are shorter than $k$ symbols, there are only finitely many Myhill-Nerode equivalence classes. By the celebrated Myhill-Nerode theorem, $L$ is regular.
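The pigeonhole idea behind the other direction (why a DFA-recognised language satisfies the pump-down condition) can be illustrated with a toy script. This is my own sketch, not part of the answer: the two-state DFA for "even number of a's" and the bound on $|z|$ are arbitrary choices. With $k = |Q|$, every $x\in\Sigma^k$ revisits some state along its prefixes; deleting the loop $v$ leaves the reached state, and hence membership of every extension, unchanged.

```python
from itertools import product

# Example DFA: strings over {a, b} with an even number of 'a's.
states = {0, 1}
start = 0
accept = {0}

def step(q, c):
    return q ^ 1 if c == "a" else q  # 'a' flips parity, 'b' keeps it

def run(q, s):
    for c in s:
        q = step(q, c)
    return q

k = len(states)
for x in map("".join, product("ab", repeat=k)):
    # Find a repeated state among the k+1 prefix states (pigeonhole).
    seen = {}
    split = None
    q = start
    for i in range(k + 1):
        if q in seen:
            split = (seen[q], i)   # v = x[seen[q]:i] loops on state q
            break
        seen[q] = i
        if i < k:
            q = step(q, x[i])
    j, i = split
    u, v, w = x[:j], x[j:i], x[i:]
    assert len(v) >= 1
    # uvwz in L  <=>  uwz in L, for all z (checked here up to length 3)
    for n in range(4):
        for z in map("".join, product("ab", repeat=n)):
            assert (run(start, u + v + w + z) in accept) == (run(start, u + w + z) in accept)
```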
{ "domain": "cs.stackexchange", "id": 16413, "tags": "formal-languages, regular-languages, pumping-lemma" }
Lightshot Print Screen key linux handler
Question: I haven't found any Lightshot Print Screen key linux handler publicly available. So, I decided to write one today. It is a standard POSIX shell script. I believe there is always space to get better, so any and all reviews are welcome. #!/bin/sh is_number() { # check if exactly one argument has been passed test "$#" -eq 1 || print_error_and_exit 5 "is_number(): There has not been passed exactly one argument!" # check if the argument is an integer test "$1" -eq "$1" 2>/dev/null } # ------------------------------------------------------------------------------ print_error_and_exit() { # check if exactly two arguments have been passed test "$#" -eq 2 || print_error_and_exit 3 "print_error_and_exit(): There have not been passed exactly two arguments!" # check if the first argument is a number is_number "$1" || print_error_and_exit 4 "print_error_and_exit(): The argument #1 is not a number!" # check if we have color support if test -x /usr/bin/tput && tput setaf 1 > /dev/null 2>&1 then bold=$(tput bold) red=$(tput setaf 1) nocolor=$(tput sgr0) echo "$bold$red$2 Exit code = $1.$nocolor" 1>&2 else echo "$2 Exit code = $1." 1>&2 fi exit "$1" } # ------------------------------------------------------------------------------ check_for_prerequisite() { # check if exactly one argument has been passed test "$#" -eq 1 || print_error_and_exit 2 "check_for_prerequisite(): There has not been passed exactly one argument!" # check if the argument is a program which is installed command -v "$1" > /dev/null 2>&1 || print_error_and_exit 6 "check_for_prerequisite(): I require $1 but it's not installed :-(" } # ------------------------------------------------------------------------------ # check if no arguments have been passed to the script test "$#" -gt 0 && print_error_and_exit 1 "$0: You have passed some unexpected argument(s) to the script!" 
# check for prerequisites check_for_prerequisite "pgrep" check_for_prerequisite "xdotool" # ------------------------------------------------------------------------------ # global constants lightshot_key="Print" lightshot_name="Lightshot" # ------------------------------------------------------------------------------ # get the lightshot pid lightshot_pid=$(pgrep "$lightshot_name") # test if a pid has been successfully acquired is_number "$lightshot_pid" || print_error_and_exit 7 "lightshot_pid: The argument is not a number!\\nLightshot is most probably not running." # ------------------------------------------------------------------------------ # get the window id from lightshot pid lightshot_wnd=$(xdotool search --limit 1 --all --pid "$lightshot_pid" --name "$lightshot_name") # test if a window handler has been successfully acquired is_number "$lightshot_wnd" || print_error_and_exit 8 "lightshot_wnd: The argument is not a number!\\nLightshot is most probably not running." # ------------------------------------------------------------------------------ # simulate a print screen key press on the lightshot window xdotool key --window "$lightshot_wnd" "$lightshot_key" In Linux Mint 18.3 you can simply put this into your custom keyboard shortcuts: Answer: Instruct shell how to treat unset variables set -o nounset In this script, it should not be possible to encounter an unbound variable, so setting this will prove useful for debugging. Redirection style 9 test "$1" -eq "$1" 2>/dev/null in contrast with: 23 if test -x /usr/bin/tput && tput setaf 1 > /dev/null 2>&1 It should be unified: either with or without a space. 
Constants should be moved to the top 61 lightshot_key="Print" 62 lightshot_name="Lightshot" It is obvious that everyone might not want to use Print, so this really has to be moved to the top in order for each user to quickly customize the keyboard combination for the print screen. Those constants should be commented and clarified In order for the users to quickly know how to define key combinations, there should be some example added. Consider replacing echo with printf 28 echo "$bold$red$2 Exit code = $1.$nocolor" 1>&2 29 else 30 echo "$2 Exit code = $1." 1>&2 Basically, it's a portability (and reliability) issue. Read more about this topic in this answer. Problematic code I notice now that there is a new line inside these strings: 70 is_number "$lightshot_pid" || print_error_and_exit 7 "lightshot_pid: The argument is not a number!\\nLightshot is most probably not running." 78 is_number "$lightshot_wnd" || print_error_and_exit 8 "lightshot_wnd: The argument is not a number!\\nLightshot is most probably not running." Which, according to Shellcheck.net: echo won't expand escape sequences Read more about this topic in this article. This code, as it is, works though, in spite of this error. Tested in bash version 4.3.48(1)-release and POSIX dash version 0.5.8-2.1ubuntu2. Better variable naming convention These are of little value to a foreign script reader: 61 lightshot_key="Print" 62 lightshot_name="Lightshot" There are only two constants. Why not give them some real meaning; this would further enhance the script reader's experience. The variables: 67 lightshot_pid 75 lightshot_wnd could use some love too. Tell the user what arguments are (not) expected 50 test "$#" -gt 0 && print_error_and_exit 1 "$0: You have passed some unexpected argument(s) to the script!" Let's add there something like: No arguments expected! 
Problematic space in the comments (block separators) 12 # ------------------------------------------------------------------------------ There should be no space between # and -, because if you imagine a larger project than this, having these block separators with a space will prevent you from searching for long arguments beginning with --. Re-consider using smileys in the output and using grammar shortcuts 44 command -v "$1" > /dev/null 2>&1 || print_error_and_exit 6 "check_for_prerequisite(): I require $1 but it's not installed :-(" It may look a little unprofessional. Hard to say without knowing the people reading your script, though. (It might be totally OK for personal usage.) This script will likely be spread across the internet, so better not use that. Also, avoid using grammar shortcuts like: it's xdotool: documenting all passed arguments would be greatly appreciated by the script readers (and the order of arguments in lightshot_wnd could be a little better) 75 lightshot_wnd=$(xdotool search --limit 1 --all --pid "$lightshot_pid" --name "$lightshot_name") Re-written code would look as follows #!/bin/sh # treat unset variables as an error when substituting set -o nounset # global constants for an easy set-up: # lightshot_printscreen_hotkey: set this to the same hotkey which you have set up in Lightshot # example: for left control and print screen key -> type Control_L+Print lightshot_printscreen_hotkey="Print" # lightshot_process_name: do not change this one, it is a case-sensitive name of the Lightshot process lightshot_process_name="Lightshot" #------------------------------------------------------------------------------ is_number() { # check if exactly one argument has been passed test "$#" -eq 1 || print_error_and_exit 5 "is_number(): There has not been passed exactly one argument!" 
# check if the argument is an integer test "$1" -eq "$1" 2> /dev/null } #------------------------------------------------------------------------------ print_error_and_exit() { # check if exactly two arguments have been passed test "$#" -eq 2 || print_error_and_exit 3 "print_error_and_exit(): There have not been passed exactly two arguments!" # check if the first argument is a number is_number "$1" || print_error_and_exit 4 "print_error_and_exit(): The argument #1 is not a number!" # check if we have color support if test -x /usr/bin/tput && tput setaf 1 > /dev/null 2>&1 then bold=$(tput bold) red=$(tput setaf 1) nocolor=$(tput sgr0) printf "%s%s%s Exit code = %s.%s\\n" "$bold" "$red" "$2" "$1" "$nocolor" 1>&2 else printf "%s Exit code = %s.\\n" "$2" "$1" 1>&2 fi exit "$1" } #------------------------------------------------------------------------------ check_for_prerequisite() { # check if exactly one argument has been passed test "$#" -eq 1 || print_error_and_exit 2 "check_for_prerequisite(): There has not been passed exactly one argument!" # check if the argument is a program which is installed command -v "$1" > /dev/null 2>&1 || print_error_and_exit 6 "check_for_prerequisite(): I require $1 but it is not installed!" } #------------------------------------------------------------------------------ # check if no arguments have been passed to the script test "$#" -gt 0 && print_error_and_exit 1 "$0: You have passed some unexpected argument(s) to the script. No arguments expected!" 
#------------------------------------------------------------------------------ # check for prerequisites check_for_prerequisite "pgrep" check_for_prerequisite "xdotool" #------------------------------------------------------------------------------ # get the Lightshot process id lightshot_process_id=$(pgrep "$lightshot_process_name") # test if a process id has been successfully acquired is_number "$lightshot_process_id" || print_error_and_exit 7 "lightshot_process_id: The argument is not a number! Lightshot is most probably not running." #------------------------------------------------------------------------------ # get the window id from the Lightshot process id #--all : Require that all conditions be met. #--limit : Stop searching after finding N matching windows. #--pid : Match windows that belong to a specific process id. #--name : Match against the window name. This is the same string that is displayed in the window titlebar. lightshot_window_id=$(xdotool search --all --limit 1 --pid "$lightshot_process_id" --name "$lightshot_process_name") # test if a window id has been successfully acquired is_number "$lightshot_window_id" || print_error_and_exit 8 "lightshot_window_id: The argument is not a number! Lightshot is most probably not running." #------------------------------------------------------------------------------ # simulate the above pre-defined print screen hotkey press on the Lightshot window xdotool key --window "$lightshot_window_id" "$lightshot_printscreen_hotkey"
{ "domain": "codereview.stackexchange", "id": 29410, "tags": "gui, posix, sh" }
No temperature change during adiabatic expansion?
Question: So, checking some exercises from previous exams of where I study, there is an exercise that goes like this (copying only the relevant stuff): A mole of a diatomic ideal gas expands from A to B isothermally, doubling its volume. Then it keeps expanding adiabatically in an irreversible way from B to C. (Then it goes on in order to complete a cycle.) Pa = 400kPa, Ta = 800K, Vb=2*Va, Vc=2*Vb, Pc = Pa/4 So, using PV = nRT I got that Va=16.61 l, Pb = 200kPa, Tb = 800K (because it is an isothermal process from A to B), and here comes what I find weird: Tc = 800K If that were the case, there'd be no work done, no change in internal energy from B to C... am I to think the values given are wrong? Answer: You are right, the given numbers and/or description are wrong. The task contains superfluous information which appears to be contradictory. Quick check: $V_c = 2V_b = 4V_a$, but $P_c = \frac{P_a}{4}$, so $P_a \cdot V_a = P_c \cdot V_c$ and $T_c = T_a$, which is the same as $T_b$ due to the isothermal expansion $\rm AB$. So on $\rm BC$ the gas internal energy isn't changed while work is performed by the gas, so the system must receive heat from external sources; thus it's not an adiabatic process. You need to exclude one of these to make the task consistent: $V_c=2V_b$ $P_c=\frac{P_a}{4}$ $\rm BC$ is an adiabatic process
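A one-mole ideal-gas check of the given numbers confirms the contradiction (a sketch; R = 8.314 J/(mol·K) is assumed):

```python
# Quick ideal-gas check of the exercise's numbers (1 mol, R = 8.314 J/(mol K)).
R = 8.314
n = 1.0

Pa, Ta = 400e3, 800.0          # given state A (Pa in pascals)
Va = n * R * Ta / Pa           # ~0.0166 m^3 = 16.6 L, as computed in the question

Vb = 2 * Va                    # isothermal A->B, so Tb = Ta = 800 K
Vc = 2 * Vb                    # given: Vc = 2 Vb = 4 Va
Pc = Pa / 4                    # given

Tc = Pc * Vc / (n * R)         # ideal gas law at C
assert abs(Tc - Ta) < 1e-9     # Tc = 800 K, so B->C cannot be adiabatic
```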
{ "domain": "physics.stackexchange", "id": 37763, "tags": "homework-and-exercises, thermodynamics, temperature, adiabatic" }
Problems with stage pioneer and laser/rangers on fuerte
Question: I have problems using my stage simulation with the default pioneer robot. I get messages like: ROS Stage currently supports rangers with 1 sensor only. or number of position models and laser models must be equal in the world file. Originally posted by Markus Bader on ROS Answers with karma: 847 on 2012-05-23 Post score: 1 Answer: The current stageros implementation supports only one ranger/laser sensor per robot. You have to use a stage model without sonar to use the laser sensor. Attached you will find my cave-pionee.world which includes a working pioneer stage simulation. The background file cave.png must be in the same folder to get it working # cave-pionee.world # simple cave environment with a pioneer robot based on the basic world file examples of # Richard Vaughan, Andrew Howard, Luis Riazuelo # Authors: Markus Bader define hokuyolaser ranger ( sensor( # laser-specific properties # factory settings for LMS200 range [ 0.0 5.0 ] fov 270.0 samples 270 ) model ( # generic model properties size [ 0.07 0.07 0.05 ] # dimensions from LMS200 data sheet color "blue" ) ) define my_block model ( size [0.5 0.5 0.5] gui_nose 0 ) define floorplan model ( # sombre, sensible, artistic color "gray30" # most maps will need a bounding box boundary 1 gui_nose 0 gui_grid 0 gui_outline 0 gripper_return 0 fiducial_return 0 laser_return 1 ) # set the resolution of the underlying raytrace model in meters resolution 0.02 interval_sim 100 # simulation timestep in milliseconds # configure the GUI window window ( size [ 635.000 666.000 ] # in pixels scale 36.995 # pixels per meter center [ -0.040 -0.274 ] rotate [ 0 0 ] show_data 1 # 1=on 0=off ) # load an environment bitmap floorplan ( name "cave" size [16.000 16.000 0.800] pose [0 0 0 0] bitmap "cave.png" ) # set the resolution of # throw in a robot my_block( pose [ 5 4 0 180.000 ] color "green") define pioneer_base position ( color "red" # Default color. drive "diff" # Differential steering model. 
gui_nose 1 # Draw a nose on the robot so we can see which way it points obstacle_return 1 # Can hit things. ranger_return 0.5 # reflects sonar beams blob_return 1 # Seen by blobfinders fiducial_return 1 # Seen as "1" fiducial finders localization "gps" localization_origin [0 0 0 0] # Start odometry at (0, 0, 0). # alternative odometric localization with simple error model # localization "odom" # Change to "gps" to have impossibly perfect, global odometry # odom_error [ 0.05 0.05 0.1 ] # Odometry error or slip in X, Y and Theta # (Uniform random distribution) # four DOF kinematics limits # [ xmin xmax ymin ymax zmin zmax amin amax ] velocity_bounds [-0.5 0.5 0 0 0 0 -90.0 90.0 ] acceleration_bounds [-0.5 0.5 0 0 0 0 -90 90.0 ] ) define pioneer2dx_base_no_sonar pioneer_base ( # actual size size [0.44 0.38 0.22] # sizes from MobileRobots' web site # the pioneer's center of rotation is offset from its center of area origin [-0.04 0 0 0] # draw a nose on the robot so we can see which way it points gui_nose 1 # estimated mass in KG mass 23.0 # differential steering model drive "diff" ) # as above, but with front sonar only define pioneer2dx_no_sonar pioneer2dx_base_no_sonar ( # simplified Body shape: block( points 8 point[0] [-0.2 0.12] point[1] [-0.2 -0.12] point[2] [-0.12 -0.2555] point[3] [0.12 -0.2555] point[4] [0.2 -0.12] point[5] [0.2 0.12] point[6] [0.12 0.2555] point[7] [-0.12 0.2555] z [0 0.22] ) ) pioneer2dx_no_sonar ( # can refer to the robot by this name name "r0" pose [ -7 -7 0 45 ] hokuyolaser(pose [ 0.225 0.000 -0.15 0.000 ]) ) Originally posted by Markus Bader with karma: 847 on 2012-05-23 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by joq on 2012-05-23: Which current stageros are you describing? Electric has stage 3.2.1, with one laser. Fuerte has stage 4.1.1, which replaces the laser interface with ranger. 
Comment by Markus Bader on 2012-05-28: ros fuerte uses stage 4.1.1 as default Comment by Arkapravo on 2014-07-06: @Markus Bader Thanks, great help - chances are that I will 'steal' your code into my git repository ! Also this works just fine in Hydro !
{ "domain": "robotics.stackexchange", "id": 9513, "tags": "simulation, laser, stage, ros-fuerte, pioneer" }
What happens if all the carbon-14 atoms in a person's body decay at once?
Question: What happens if all the carbon-14 atoms in a person's body decay at once? Would they die or would they be unaffected? Answer: There is about 1 radioactive $^{14}$C atom for every $10^{12}$ $^{12}$C atoms. With a half-life of 5730 years, this means there are usually about 0.2 decays per gramme of carbon per second. Carbon is about 18% of your body mass, so an 80 kg adult would have about 14 kg of carbon and $7\times 10^{14}$ $^{14}$C atoms. If these all decayed in say one second, the activity would be $1.9\times 10^4$ Ci. To estimate the effects, we could assume all the beta particles are absorbed and that each has an energy of about 0.1 MeV. This gives an absorbed energy of just 11 joules. Totally negligible in energetic terms. In terms of absorbed radiation the quantity is about 11 J/80 kg = 0.14 Grays, and for beta particles roughly the same number of Sieverts, i.e. 140 mSv. This is roughly the same radiation dose you would get from $\sim$20 CT scans, or about the same as a few decades' worth of exposure to ambient radiation in the environment, and is enough to raise your long-term cancer risk slightly (a few per cent) but likely not enough to cause acute radiation poisoning.
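The arithmetic above can be reproduced step by step (a sketch; the rounded constants are my own choices):

```python
import math

N_A = 6.022e23                          # Avogadro's number
ratio = 1e-12                           # C-14 / C-12 abundance
half_life_s = 5730 * 365.25 * 86400     # C-14 half-life in seconds

atoms_per_g = N_A / 12 * ratio          # C-14 atoms per gram of carbon
lam = math.log(2) / half_life_s         # decay constant

# Normal activity: about 0.2 decays per gram of carbon per second
assert 0.15 < atoms_per_g * lam < 0.25

# An 80 kg adult: ~18% carbon -> ~14 kg -> ~7e14 C-14 atoms
n_c14 = 0.18 * 80 * 1000 * atoms_per_g
assert 6e14 < n_c14 < 8e14

# All decaying in ~1 s: activity in curies (1 Ci = 3.7e10 Bq)
activity_Ci = n_c14 / 3.7e10
assert 1.5e4 < activity_Ci < 2.5e4      # ~1.9e4 Ci

# Absorbed energy and dose, assuming 0.1 MeV per beta, all absorbed
energy_J = n_c14 * 0.1e6 * 1.602e-19
dose_Gy = energy_J / 80                 # ~0.14 Gy, i.e. ~140 mSv for betas
assert 10 < energy_J < 13
assert 0.1 < dose_Gy < 0.2
```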
{ "domain": "physics.stackexchange", "id": 83986, "tags": "estimation, radioactivity, biology" }
Analysis tool for a log file
Question: I've written a bit of code designed to go through a log file and determine successes and failures for certain criteria for each server in an environment. The code works just fine, except that on environments of larger size, it's just too slow. Currently it's running on a 2000 server environment with 18 criteria and 5000 lines of logs being checked and takes about 14 minutes on my machine. The problem is that it's intended to be used on an environment of about 18000 servers with about 40 criteria and 175,000 lines of logs. I'm hoping someone can help me come up with something I'm not seeing right now that would help with the speed, because waiting an hour and hoping Excel doesn't crash doesn't really speed up this process. Here's what I'm trying to do: This is the code I'm running (there are comments in there to help understanding): Dim lastCol As Double, lastRow As Double, lastSensorRow As Double Sub B_L3ToolRun() SpeedUp Dim numSuc As Long, numMaj As Long, numWar As Long, numCri As Long, numErr As Long lastRow = Sheets("Results").Range("A" & Rows.Count).End(xlUp).Row 'End of IPs lastSensorRow = Sheets("Data").Range("A" & Rows.Count).End(xlUp).Row 'End of Log File lastCol = 3 'Minimum columns used For I = 4 To 150 'Discovers number of criteria used If Cells(1, I).Value <> "" Then lastCol = lastCol + 1 End If Next I Sheets("Results").Activate For J = 3 To lastCol 'This is a check for each criteria being checked For I = 2 To lastRow 'This is a check for each IP in the list (2000) numSuc = WorksheetFunction.CountIfs(Sheets("Data").Range("B:B"), "*" & Sheets("Results").Cells(1, J).Value & "*", _ 'This checks for number of given criteria Sheets("Data").Range("E:E"), "=normal", _ 'This refines it to only successful ones Sheets("Data").Range("C:C"), "*" & Sheets("Results").Cells(I, 1).Value & "*") 'This limits that to number of successful on that IP numMaj = WorksheetFunction.CountIfs(Sheets("Data").Range("B:B"), "*" & Sheets("Results").Cells(1, 
J).Value & "*", _ Sheets("Data").Range("E:E"), "=major", _ 'Same as above but "major" error Sheets("Data").Range("C:C"), "*" & Sheets("Results").Cells(I, 1).Value & "*") numWar = WorksheetFunction.CountIfs(Sheets("Data").Range("B:B"), "*" & Sheets("Results").Cells(1, J).Value & "*", _ Sheets("Data").Range("E:E"), "=warning", _ 'Same as above but "warning" error Sheets("Data").Range("C:C"), "*" & Sheets("Results").Cells(I, 1).Value & "*") numCri = WorksheetFunction.CountIfs(Sheets("Data").Range("B:B"), "*" & Sheets("Results").Cells(1, J).Value & "*", _ Sheets("Data").Range("E:E"), "=critical", _ 'Same as above but "critical" error Sheets("Data").Range("C:C"), "*" & Sheets("Results").Cells(I, 1).Value & "*") numErr = numMaj + numWar + numCri If numErr > 0 Or numSuc > 0 Then Sheets("Results").Cells(I, J).Value = numSuc & "/" & numErr 'I want them displayed in this format "count success/count fail" End If Next I Next J SlowDown End Sub Private Sub SpeedUp() Application.ScreenUpdating = False Application.DisplayStatusBar = False Application.Calculation = xlCalculationManual Application.EnableEvents = False ActiveSheet.DisplayPageBreaks = False Application.DisplayAlerts = False End Sub Private Sub SlowDown() Application.ScreenUpdating = True Application.DisplayStatusBar = True Application.Calculation = xlCalculationAutomatic Application.EnableEvents = True ActiveSheet.DisplayPageBreaks = True Application.DisplayAlerts = True End Sub Answer: Things I like: You've used meaningful variable names (I think numMajorErr would be an even better convention for your counters but it's a minor quibble) Your code is nicely indented, good use of _ to split your functions onto multiple lines for readability. Your code is nicely commented. This combined with the above 2 points makes it very easy to understand and follow. And now for the review: Tips and Tricks Codenames All sheets have a "Name" which is what the user sees and can edit. 
Sheets("Data") is referencing a sheet name. However, names can change, especially when users are involved. Codenames, on the other hand, can only be set/changed in the VBE: Assuming your "Data" sheet has codename "ws_Data", these 2 statements are equivalent: Sheets("Data").Range(), ws_Data.Range. Always reference codenames and not only can you give your sheet variables meaningful names, there's no danger of someone changing the name and breaking your macro. Final Row/Column Personally, I recommend finalRow = Cells(Rows.Count, 1).End(xlUp).Row finalColumn = Cells(1, Columns.Count).End(xlToLeft).Column No strings, no loops, just neat, clean and simple. Proper Variable Scoping Variables should be declared as close as is practicable to where they're used. Crucially, Variables should be Procedure-Level where possible, then Module-Level where possible, then and only then Global level. Dim lastCol As Double, lastRow As Double, lastSensorRow As Double Why on earth are these Module-Level variables? There's only one procedure. Develop good habits now and put them inside their procedure where they belong. As a side note, if you only ever use Dim for Procedure-Level, Private for Module-Level and Public for Global-Level variables, it will make reviewing your code (especially on larger projects) even easier. And now for the performance optimisation You have 2 options: Just put your worksheet functions in an actual worksheet. Create a new sheet, fill in a countifs() that references the other sheet and auto-fill a grid as appropriate. Excel is highly optimised for large grids of worksheet functions. Which I will focus on here: Store all the sheet data in arrays and work from those. Accessing a worksheet is slow. And you are accessing it every time you specify a cell reference. An array, on the other hand, is just data laid out in sequential memory blocks, so accessing a location in an array is orders of magnitude faster than accessing a location in a worksheet. 
Start your program doing this: Dim wsResultsTable As Variant wsResultsTable = Array() Dim wsDataTable As Variant wsDataTable = Array() For each worksheet: '/ Determine Range of worksheet Data '/ <Range> = ws.Range() '/ <Array> = <Range> '/ Now Array(1,1) = Cells(1,1).value etc. You're going to be referencing the same 3 columns over and over again, so store them in their own arrays: Dim bColumnData As Variant bColumnData = Array() Dim colIndex As Long colIndex = 2 ' "B" Column Dim LB1 As Long, UB1 As Long ' Determine L/U bounds of the array LB1 = LBound(wsDataTable, 1) UB1 = UBound(wsDataTable, 1) ReDim bColumnData(LB1 To UB1) ' Create 1-D array of the same size For i = LB1 To UB1 bColumnData(i) = wsDataTable(i, colIndex) Next i Repeat for the other 2 columns, and you now have 3 lists of column data stored in arrays. Now just replace your countIfs with a VBA loop: for a for b for i = LB1 to UB1 if <criteria 1> then <counter 1> = <counter 1> + 1 if <criteria 2> then <counter 2> = <counter 2> + 1 if <criteria 3> then <counter 3> = <counter 3> + 1 if <criteria 4> then <counter 4> = <counter 4> + 1 next i '/ Print results to sheet etc. next b next a And this should run at least an order of magnitude faster. Optimisation Summary If you're working with data, then strip away all of the excess (containers, sheets, formatting, highlighting etc.) and put the raw data in an Array. If you can (easily) re-create a worksheet function in VBA, then do so. If you're going to be referencing the same things over and over, store them in a variable and reference the variable. The same goes for the results of calculations (A.K.A. memoisation). Get really familiar with using Arrays and moving data from sheets to arrays and back again. It'll save you a lot of trouble down the road.
{ "domain": "codereview.stackexchange", "id": 16993, "tags": "performance, vba, excel" }
TicTacToe with MiniMax algorithm
Question: I just got into C++ a few days ago and decided to make a MiniMax based TicTacToe game, I don't really know much about the conventions and best uses of the C++ language so I'd really appreciate any kind of feedback. Brief summary: Board: has all the information pertinent to the game. Is used by the main method to paint the window and by the Brain to decide which move to make. Brain: Container for the MiniMax algorithm with helper methods. The Board is passed as a pointer to the Brain during construction (I don't know how wise this is but it's how I did things in Java). Main: mainly contains functions to draw the board and get inputs from the user. Spacebar clears the board, F1 asks for a move, click performs a move. Board.h - Most of the comments are not in the header files #ifndef BOARD_H_ #define BOARD_H_ #include <vector> typedef enum {empty, x, o} cellState; // The 3 possible states a cell can be in class Board { public: Board(); static const unsigned NUMBER_OF_CELLS = 9; static const unsigned CELLS_PER_ROW = 3; Board getClone(); cellState getCell(int n); std::vector<unsigned> getLegalMoves(); int getWinner(); bool isGameOver(); bool isXTurn(); void applyMove(unsigned m); void clear(); void print(); private: std::vector<cellState> cells; cellState winner; bool xToMove; bool gameOver; unsigned movesMade; void setMark(int p, cellState m); void update(); void xWon(); void oWon(); void draw(); }; #endif Board.cpp #include <iostream> #include "Board.h" Board::Board() { clear(); } void Board::clear() { // Resets the game cells.clear(); for (unsigned i = 0; i < NUMBER_OF_CELLS; i++) cells.push_back(empty); winner = empty; xToMove = true; gameOver = false; movesMade = 0; } cellState Board::getCell(int n) { // Returns the contents of a cell return cells.at(n); } void Board::setMark(int p, cellState m) { // Sets the contents of a cell if (getCell(p) != empty || m == empty) return; cells.erase(cells.begin()+p); cells.insert(cells.begin()+p, m); update(); } 
std::vector<unsigned> Board::getLegalMoves() { // Gets a vector of all the empty cells std::vector<unsigned> moves; for (unsigned i = 0; i < NUMBER_OF_CELLS; i++) if (getCell(i) == empty) moves.push_back(i); return moves; } Board Board::getClone() { // Returns a clone of this board object Board clone; for (unsigned i = 0; i < NUMBER_OF_CELLS; i++) { clone.setMark(i, getCell(i)); } return clone; } void Board::xWon() { // X won the game winner = x; gameOver = true; } void Board::oWon() { // O won the game winner = o; gameOver = true; } void Board::draw() { // The game is drawn gameOver = true; } bool Board::isGameOver() { return gameOver; } bool Board::isXTurn() { return xToMove; } int Board::getWinner() { // Returns the winner. x = 1; o = -1; draw = 0 return winner == x? 1 : winner == o? -1 : 0; } void Board::update() { // Checks if there is a winner or if the game is drawn // Also updates the turn xToMove = !xToMove; movesMade++; bool xWin; bool oWin; // Helper for loop to offset the horizontal and vertical searches for (unsigned firstOffset = 0; firstOffset < CELLS_PER_ROW; firstOffset++) { xWin = true; oWin = true; // Checks horizontally for (unsigned totalOffset = firstOffset*CELLS_PER_ROW; totalOffset < firstOffset*CELLS_PER_ROW+CELLS_PER_ROW; totalOffset++) { if(getCell(totalOffset) != x) xWin = false; if(getCell(totalOffset) != o) oWin = false; } if (xWin) { xWon(); return; } if (oWin) { oWon(); return; } xWin = true; oWin = true; // Checks vertically for (unsigned totalOffset = firstOffset; totalOffset < firstOffset+7 ; totalOffset+=CELLS_PER_ROW) { if(getCell(totalOffset) != x) xWin = false; if(getCell(totalOffset) != o) oWin = false; } if (xWin) { xWon(); return; } if (oWin) { oWon(); return; } } int step = 4; // Helper for loop which tells the nested loop which diagonal to check for (unsigned start = 0; start < CELLS_PER_ROW; start+=2) { xWin = true; oWin = true; // Checks a diagonal for (unsigned check = 0; check < CELLS_PER_ROW; check++) { if 
(getCell(start+(check*step)) != x) xWin = false; if (getCell(start+(check*step)) != o) oWin = false; } if (xWin) { xWon(); return; } if (oWin) { oWon(); return; } step = 2; } if (movesMade == NUMBER_OF_CELLS) { draw(); } } void Board::applyMove(unsigned p) { // Sets the mark on the specified cell based on the current turn if (p >= NUMBER_OF_CELLS || isGameOver()) return; if (xToMove) setMark(p, x); else setMark(p, o); } void Board::print() { // Prints the board to the console for bugtesting purposes std::cout << "---------" << std::endl; for (unsigned i = 0; i < CELLS_PER_ROW; i++) { for (unsigned j = 0; j < CELLS_PER_ROW; j++) switch (getCell(i*CELLS_PER_ROW+j)) { case x: std::cout << " X "; break; case o: std::cout << " O "; break; default: std::cout << " - "; } std::cout << std::endl; } } Brain.h #ifndef BRAIN_H_ #define BRAIN_H_ #include "Board.h" class Brain { public: Brain(Board* b); int getBestMove(); void setBoard(Board* b); // Method not yet implemented, not needed for this small project private: Board* board; int miniMax(Board b, unsigned move); int min(std::vector<int> v); int max(std::vector<int> v); }; #endif Brain.cpp #include "Brain.h" Brain::Brain(Board* b) { board = b; // Board which will be used when calling getBestMove() } int Brain::getBestMove() { // "Main" method which calls all the others in order to return the best move // based on which turn it is std::vector<unsigned> availableMoves = board->getLegalMoves(); if (availableMoves.size() == 0) return 0; std::vector<int> moveScore; // For each available move a score is assigned for (unsigned i = 0; i < availableMoves.size(); i++) moveScore.push_back(miniMax(board->getClone(), availableMoves.at(i))); // And based on which turn it is the one with the highest/lowest score is returned if(board->isXTurn()) return availableMoves.at(max(moveScore)); else return availableMoves.at(min(moveScore)); } int Brain::miniMax(Board b, unsigned move) { // Classic MiniMax algorithm b.applyMove(move); if 
(b.isGameOver()) return b.getWinner(); // If the game is over the value is returned (1, 0, -1) // Otheriwse for each other legal move miniMax is called again std::vector<unsigned> availableMoves = b.getLegalMoves(); if (b.isXTurn()) { // X is the Maximizing player int max = -1; for (unsigned i = 0; i < availableMoves.size(); i++) { int score = miniMax(b, availableMoves.at(i)); if (score > max) max = score; } return max; } else { // O is the minimixing player int min = 1; for (unsigned i = 0; i < availableMoves.size(); i++) { int score = miniMax(b, availableMoves.at(i)); if (score < min) min = score; } return min; } } int Brain::min(std::vector<int> v) { // Helper method which returns the index of the lowest value in a vector if (v.size() == 1) return 0; int minValue = v.at(0); int minIndex = 0; for (unsigned i = 1; i < v.size(); i++) if (minValue > v.at(i)) { minValue = v.at(i); minIndex = i; } return minIndex; } int Brain::max(std::vector<int> v) { // Helper method which returns the index of the highest value in a vector if (v.size() == 1) return 0; int maxValue = v.at(0); int maxIndex = 0; for (unsigned i = 1; i < v.size(); i++) if (maxValue < v.at(i)) { maxValue = v.at(i); maxIndex = i; } return maxIndex; } Main #include <iostream> #include <math.h> #include <SFML/Graphics.hpp> #include "Board.h" #include "Brain.h" using namespace sf; using std::cout; using std::endl; void drawBoard(RenderWindow &window, Board &board); void drawBG(RenderWindow &window); sf::Color getBGColor(unsigned cell); int translateCoords(Vector2i &v); unsigned cellWidth; unsigned cellHeight; const sf::Color light (55, 55, 55); const sf::Color dark (33, 33, 33); const sf::Color red(90, 100, 200); const sf::Color blue(180, 40, 40); int main() { Board board; Brain brain(&board); RenderWindow window (VideoMode(400, 400), "Tic Tac Toe"); window.setKeyRepeatEnabled(false); cellWidth = window.getSize().x / Board::CELLS_PER_ROW; cellHeight = window.getSize().y / Board::CELLS_PER_ROW; while 
(window.isOpen()) { sf::Event event; while (window.pollEvent(event)) { switch (event.type) { case sf::Event::Closed: window.close(); break; case sf::Event::Resized: // cellWidth = window.getSize().x / Board::CELLS_PER_ROW; Bugged // cellHeight = window.getSize().y / Board::CELLS_PER_ROW; To fix break; case sf::Event::KeyPressed: if (event.key.code == sf::Keyboard::Space) // Space = Resets the board board.clear(); else if (event.key.code == sf::Keyboard::F1) // F1 = Asks for a move board.applyMove(brain.getBestMove()); break; case sf::Event::MouseButtonReleased: // Click == Applies a move if (event.mouseButton.button == sf::Mouse::Left) { Vector2i mousePos = sf::Mouse::getPosition(window); board.applyMove(translateCoords(mousePos)); } break; default: break; } } window.clear(); drawBG(window); drawBoard(window, board); window.display(); } return 0; } int translateCoords(Vector2i &v) { // Function needed to translate window coordinates to game coordinates int x = floor((v.x) / cellWidth); int y = floor((v.y) /cellHeight); return (int)(x + y * Board::CELLS_PER_ROW); } sf::Color getBGColor(unsigned cell) { // Helper function to draw the background checkered pattern return cell / Board::CELLS_PER_ROW % 2 == cell % Board::CELLS_PER_ROW % 2 ? 
dark : light; } void drawBG(RenderWindow &window) { // Draws the background checkered pattern sf::RectangleShape rect(sf::Vector2f(cellWidth, cellHeight)); for (unsigned i = 0; i < Board::NUMBER_OF_CELLS; i++) { rect.setPosition(i%Board::CELLS_PER_ROW*cellWidth, i/Board::CELLS_PER_ROW*cellHeight); rect.setFillColor(getBGColor(i)); window.draw(rect); } } void drawBoard(RenderWindow &window, Board &board) { // Draws the board based on where the naughts and crosses are // To make things simple the crosses are actually squares sf::CircleShape circle(cellWidth/2); circle.setOrigin(circle.getRadius(), circle.getRadius()); circle.setFillColor(red); sf::RectangleShape rect(sf::Vector2f(cellWidth, cellHeight)); rect.setFillColor(blue); for (unsigned i = 0; i < Board::NUMBER_OF_CELLS; i++) { switch (board.getCell(i)) { case x: rect.setPosition(i%Board::CELLS_PER_ROW*cellWidth, i/Board::CELLS_PER_ROW*cellHeight); window.draw(rect); break; case o: circle.setPosition(i%Board::CELLS_PER_ROW*cellWidth+circle.getRadius(), i/Board::CELLS_PER_ROW*cellHeight+circle.getRadius()); window.draw(circle); break; default: break; } } } Answer: Don't use raw pointers, make ownership semantics clear class Brain { public: Brain(Board* b); int getBestMove(); void setBoard(Board* b); // Method not yet implemented, not needed for this small project private: Board* board; // ... }; Your Brain class uses a raw pointer to reference a current Board it is supposed to work with. Raw pointers leave us unclear about their ownership and who's responsible for the memory management. You should rather use a smart pointer like std::unique_ptr<Board> (indicates ownership is transferred to the Brain class), std::shared_ptr<Board> (indicates ownership is shared) or std::weak_ptr<Board> (indicates Brain doesn't own that reference).
I'll give a sample using std::shared_ptr: class Brain { public: Brain(std::shared_ptr<Board> b); int getBestMove(); void setBoard(std::shared_ptr<Board> b); private: std::shared_ptr<Board> board; // ... }; Don't pass complex objects by value int min(std::vector<int> v); int max(std::vector<int> v); You should use const references for these parameters: int min(const std::vector<int>& v); int max(const std::vector<int>& v); Modern compilers may elide copies for these parameters, but the semantics are clearer, and a const reference prevents you from accidentally changing those parameters inside the function. Use const correctness Since the min() or max() functions never change the state of the Brain class, these should be declared as const member functions: int min(std::vector<int> v) const; int max(std::vector<int> v) const; In fact those functions don't even rely on any state stored in Brain, thus these shouldn't be class members at all but free functions. Best choice would be to use the already existing std::min_element() and std::max_element() functions from the standard library instead. Prefer operator[] over at() to access container elements by index The std::vector<T>::at() function introduces bounds checking and will throw an exception if the requested index is out of bounds. That's nice for debugging purposes when the index hasn't been validated beforehand. For performance reasons you should prefer to do size checks beforehand and use the operator[] overload of the container class instead.
{ "domain": "codereview.stackexchange", "id": 25251, "tags": "c++, beginner, tic-tac-toe, sfml" }
Why is $\beta^+$ radioactivity possible while the lifetime of the proton is expected to be infinite?
Question: In the Standard Model, protons are considered to have an infinite lifetime. See https://en.wikipedia.org/wiki/Proton#Stability "The spontaneous decay of free protons has never been observed, and protons are therefore considered stable particles according to the Standard Model." Now, according to wikipedia about $\beta^+$ decays, https://en.wikipedia.org/wiki/Beta_decay#%CE%B2+_decay the $\beta^+$ decay "may be considered as the decay of a proton inside the nucleus to a neutron" : $p\rightarrow n + e^+ + \nu_e$ Does this mean that this decay really happens? If so, how is that compatible with the fact that the lifetime of the proton is expected to be infinite? How could the fundamental rules for allowed/forbidden decays be different inside the nucleus and outside it? Answer: An isolated proton can't undergo $p\to n+e^++\nu_e$; in its rest frame, energy would be created. But a proton in a suitable larger nucleus can so decay; the rest of the nucleus responds in a complicated way that ensures the conservation of energy and momentum.
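To put numbers on the energy argument (standard rest energies: $m_p c^2 \approx 938.272$ MeV, $m_n c^2 \approx 939.565$ MeV, $m_e c^2 \approx 0.511$ MeV), the $Q$-value for a free proton is

$$Q = (m_p - m_n - m_e)c^2 \approx 938.272 - 939.565 - 0.511 \approx -1.804 \text{ MeV}$$

Since $Q<0$, the decay of an isolated proton would not conserve energy. Inside a nucleus the bookkeeping involves the whole parent and daughter: in terms of atomic masses, $\beta^+$ decay is energetically allowed when $M(A,Z) - M(A,Z-1) > 2m_e c^2 \approx 1.022$ MeV, the deficit being paid by the difference in nuclear binding energies.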
{ "domain": "physics.stackexchange", "id": 84842, "tags": "particle-physics, radioactivity" }
Is a golf ball still more aerodynamic than a normal sphere in turbulent flow?
Question: If I have turbulent flow (Reynolds number of $10^6$), would this principle still apply? Answer: The purpose of the dimples is to create a thin boundary layer of turbulent air that clings to the surface of the golf ball. This allows the laminar flow of air around the ball to travel farther down the back side of the ball, creating a thinner wake, which means less drag on the ball. If the ball travels through turbulent air, the dimples still allow the ball to create a thinner wake, and it will travel farther than a smooth ball, though it will travel neither as straight nor as far as it would through laminar air.
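For scale, here is a back-of-envelope Reynolds number for a driven golf ball; the values (about 43 mm diameter, about 70 m/s off the tee, kinematic viscosity of air about 1.5e-5 m²/s) are illustrative assumptions, not taken from the question:

```python
def reynolds_number(speed_m_s, diameter_m, kinematic_viscosity_m2_s):
    """Re = v * D / nu for flow around a sphere."""
    return speed_m_s * diameter_m / kinematic_viscosity_m2_s

# Roughly 2e5, i.e. in the regime where tripping the boundary
# layer with dimples meaningfully reduces drag
re_golf = reynolds_number(70.0, 0.0427, 1.5e-5)
```

So the ball's own Reynolds number sits around $2\times10^5$, within the range where the dimple-induced boundary-layer transition matters.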
{ "domain": "physics.stackexchange", "id": 37761, "tags": "fluid-dynamics, aerodynamics" }
Nonogram puzzle solution space
Question: I've written some code to brute-force enumerate the solution space of nonogram puzzles up to a certain size. It does a good job up to the first ~4 billion puzzles, but I run out of disk space to store any bigger collection of the solution space set. Because of some incredibly interesting patterns within the encoded set of solvable instances, I'd appreciate my code being reviewed for correctness. This is an NP-complete problem, but shows self-similarity and self-affinity in the solution space with the current implementation. Please, help me shoot down this idea as quickly as possible -- I'd like to regain a productive life :) Here's how the code is supposed to work. It outputs an XYZ point cloud of the nonogram solution space given a puzzle's maximum width, great for visualization. The code generates all the possible permutations of an arbitrarily sized boolean image. Each permutation is a puzzle solution to at least one, if not many, boolean image input pairs. The input pairs are "projections" of the solution from each lattice direction, while having the same axis-constrained continuous runs of set bits. The only difference between solution and input is very subtle: the unset padding between contiguous runs is flexible along an axis. Here's an example pairing of inputs and solution. Note that the pictured top-right solution may not be unique for the given input images, and the given input images don't necessarily construct only that solution. It's merely an example of the "nonogram property." You'll notice in the code I have a peculiar encoding of the inputs and solutions as integers, essentially a traversal of the image's cells converted to a bitstring. This encoding is chosen as a visualization convenience, and I've attained similar patterns with different traversal orders. The main goal with the encoding is to reduce dimensionality for plotting with a one-to-one correspondence of images to integers, avoiding the problem of colliding identifiers. 
I'm including a montage of the first four iterations as subsets of the solution space shadow. The full shadow is essentially a look-up table for an NP oracle, so seeing patterns here could have remarkable consequences. Arranged from left to right are tables scaled to a common 512x512 resolution -- 4, 256, 262144, and 4294967296 puzzles accounted for, respectively. Each black pixel represents an input pair with no solution, white says a solution exists. from sys import argv from itertools import product, chain, groupby from functools import partial from multiprocessing import Pool def indices(width): for a in range(width): for b in range(a+1): yield (b, a) for b in reversed(range(a)): yield (a, b) def encode(matrix, width): return sum(1<<i for i, bit in enumerate(matrix[i][j] for i, j in indices(width)) if bit) def count_runs(row): return [sum(group) for key, group in groupby(row) if key] def flex(solution, width): counts = list(map(count_runs, solution)) for matrix in product((False, True), repeat=width**2): candidate = list(zip(*[iter(matrix)]*width)) if list(map(count_runs, candidate)) == counts: yield candidate def nonogram_solutions(solution, width): xy = solution yx = list(zip(*solution)) enc_sol = encode(solution, width) return [(encode(xy, width), encode(yx, width), enc_sol) for xy, yx in product(flex(xy, width), flex(yx, width))] def main(width): pool = Pool() sol_matrices = (list(zip(*[iter(matrix)]*width)) for matrix in product((False, True), repeat=width**2)) nonograms = partial(nonogram_solutions, width=width) solutions = pool.imap_unordered(nonograms, sol_matrices, 1) pool.close() for xy, yx, solution in chain.from_iterable(solutions): print(solution, xy, yx) pool.join() if __name__ == "__main__": main(int(argv[1])) Answer: There are no docstrings. What do these functions do? Lack of docstrings make the code hard to review, because we don't know what the functions are supposed to do. 
The call indices(n) generates the Cartesian product of range(n) × range(n) in a particular order. A natural question is, does the order matter, or could we use itertools.product instead: itertools.product(range(width), repeat=2) The post explains why you've chosen the order. But how is someone reading the code supposed to know that? There needs to be a comment. encode could be simplified to avoid the if: sum(bit<<i for i, bit in enumerate(...)) This code: for matrix in product((False, True), repeat=width**2): candidate = list(zip(*[iter(matrix)]*width)) is effectively the same as: sol_matrices = (list(zip(*[iter(matrix)]*width)) for matrix in product((False, True), repeat=width**2)) and so would benefit from having its own function (which would have a name and docstring that would explain what it does). The algorithm takes every nonogram puzzle, and then compares the run counts with the run counts for every nonogram puzzle of the same size. If width is \$ n \$, there are \$ 2^{n^2} \$ nonogram puzzles, and it takes \$ Ω(n^2) \$ to compute the run counts for a single puzzle, so the overall runtime is the ludicrous \$ Ω(n^2 4^{n^2}) \$. This could be brought down to \$ Ω(n^2 2^{n^2}) \$ if you prepared a dictionary mapping run counts to sets of encoded nonogram solutions, and down to \$ Ω(n 2^{n^2}) \$ if you prepared the list of all rows, prepared a dictionary mapping each row to its run counts, and then generated the puzzles and their run counts simultaneously using the Cartesian product of the rows.
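As a sketch of that last suggestion (the function and variable names here are illustrative, not taken from the original code): pair every possible row with its run counts once, then build each solution as a Cartesian product of rows, so its per-row counts come from a lookup instead of a fresh scan:

```python
from itertools import groupby, product

def count_runs(row):
    """Lengths of maximal runs of set cells in one row."""
    return tuple(sum(group) for key, group in groupby(row) if key)

def rows_with_counts(width):
    """Every possible row, paired with its run counts, computed once."""
    return [(row, count_runs(row))
            for row in product((False, True), repeat=width)]

def puzzles_with_row_counts(width):
    """Yield (solution_rows, row_counts) without re-scanning each solution."""
    table = rows_with_counts(width)
    for combo in product(table, repeat=width):
        rows = tuple(r for r, _ in combo)
        counts = tuple(c for _, c in combo)
        yield rows, counts
```

The column counts can be obtained the same way from the transposed solution, and a dictionary keyed on (row counts, column counts) then groups solutions by their input pair in a single pass.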
{ "domain": "codereview.stackexchange", "id": 13781, "tags": "python, combinatorics, complexity" }
Coordinates of a specific pixel in depthimage published by Kinect
Question: I have been looking for an answer for hours, and finally I am asking it here. I am using CMvision to find a specific color in Kinect's sight and I want to find the real world coordinates of the object with that color. I am planning to use CMvision to find the frame coordinates (as X and Y pixel values on the picture) and use these coordinates and the depth value of that pixel to calculate the real world coordinates. As I understand, the /camera/depth_registered/points topic already gives the real world coordinates, but I couldn't find how to retrieve the X,Y,Z values of a specific pixel that I've chosen on the depth (or RGB) image. Thanks in advance. Originally posted by Sekocan on ROS Answers with karma: 38 on 2014-10-15 Post score: 1 Answer: I was looking for the exact formula to calculate the real world X,Y,Z values from the depth image and finally I got it. First of all, it appears that the values in the depth image are not giving the distance between a point in the real world and the origin of the Kinect. It appears to be the distance between the point and Kinect's XY plane (the plane which is parallel to the front surface of Kinect). So, if Kinect is looking at a wall, all the depth values give about the same value, regardless of the real distance between a point on the wall and Kinect's origin. I found the calculations in the NuiSkeleton.h file: https://code.google.com/p/stevenhickson-code/source/browse/trunk/blepo/external/Microsoft/Kinect/NuiSkeleton.h?r=14 For the Z axis, in line 625, it says: FLOAT fSkeletonZ = static_cast<FLOAT>(usDepthValue >> 3) / 1000.0f; You don't have to worry about the bitshift operation because in the description of the method it says: /// <param name="usDepthValue"> /// The depth value (in millimeters) of the depth image pixel, shifted left by three bits. The left /// shift enables you to pass the value from the depth image directly into this function.
/// </param> So, you can use the equation: FLOAT fSkeletonZ = static_cast<FLOAT>(usDepthValue) / 1000.0f; the unit is in meters (because of the division by 1000). For the X and Y axes, lines 633 and 634 give: FLOAT fSkeletonX = (lDepthX - width/2.0f) * (320.0f/width) * NUI_CAMERA_DEPTH_IMAGE_TO_SKELETON_MULTIPLIER_320x240 * fSkeletonZ; FLOAT fSkeletonY = -(lDepthY - height/2.0f) * (240.0f/height) * NUI_CAMERA_DEPTH_IMAGE_TO_SKELETON_MULTIPLIER_320x240 * fSkeletonZ; The calculation of the Y axis starts with a minus sign because in a picture (like the depth image) the Y value increases when you go down, but in real world coordinates, conventionally, Y increases when you go up. For the NUI_CAMERA_DEPTH_IMAGE_TO_SKELETON_MULTIPLIER_320x240 constant, line 349 defines: #define NUI_CAMERA_DEPTH_IMAGE_TO_SKELETON_MULTIPLIER_320x240 (NUI_CAMERA_DEPTH_NOMINAL_INVERSE_FOCAL_LENGTH_IN_PIXELS) a short Googling gives this page: http://msdn.microsoft.com/en-us/library/hh855368.aspx where it says NUI_CAMERA_DEPTH_NOMINAL_INVERSE_FOCAL_LENGTH_IN_PIXELS (3.501e-3f) So, using these formulas gives you the real world coordinates within the coordinate frame given in the image: http://pille.iwr.uni-heidelberg.de/%7Ekinect01/img/pinhole-camera.png (ref: http://pille.iwr.uni-heidelberg.de/~kinect01/doc/reconstruction.html) Originally posted by Sekocan with karma: 38 on 2014-10-20 This answer was ACCEPTED on the original site Post score: 1
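For reference, the same pixel-to-world conversion can be transcribed into a short Python function; the constant and the formulas are taken directly from the header quoted above, while the function name and the default image size are my own assumptions:

```python
# Nominal inverse focal length for the 320x240 depth image,
# from NUI_CAMERA_DEPTH_NOMINAL_INVERSE_FOCAL_LENGTH_IN_PIXELS
NUI_INV_FOCAL_320x240 = 3.501e-3

def depth_pixel_to_world(x_px, y_px, depth_mm, width=640, height=480):
    """Convert a depth-image pixel and its raw depth (in mm, already
    unshifted) to real-world X, Y, Z in meters, in the Kinect camera frame."""
    z = depth_mm / 1000.0  # distance to the Kinect's XY plane, meters
    x = (x_px - width / 2.0) * (320.0 / width) * NUI_INV_FOCAL_320x240 * z
    # Y is negated: image rows grow downward, world Y grows upward
    y = -(y_px - height / 2.0) * (240.0 / height) * NUI_INV_FOCAL_320x240 * z
    return x, y, z
```

As a sanity check, the center pixel of the image maps to X = Y = 0 at whatever depth is measured there.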
{ "domain": "robotics.stackexchange", "id": 19741, "tags": "ros, kinect, world, points, pointcloud" }
Python Flask webserver with small features
Question: With the latest of my projects, I started using Visual Studio Code, which allows for Python linting and also checking for poor practices, such as using over 100 characters in a single line. As you will see, my variable names sometimes get long, because I want to maintain readable variable names. Just for information, I also handle logins and sessions with a Flask session cookie. Below are the various files. Also worth mentioning is that I use a local Redis server for Windows to test the application. Python version: 3.5.2 32 bit server.py """Main server""" import json import os import gamble import authentication import gcaptchacheck from flask import Flask, session, redirect, url_for, request, render_template from flask_compress import Compress from flask_sslify import SSLify from store import REDISDB if not REDISDB.get('READY'): REDISDB.set('users', json.dumps(dict())) REDISDB.set('READY', 'READY') _STYSHEET = '/static/styles.css?version=1' _SKEY = '//removed for security' LAST_TIME_UPDATE = 0 APP = Flask(__name__) Compress(APP) APP.secret_key = '//removed for security' SSLIFY = SSLify(APP, permanent=True) @APP.route('/') def index(): """The home page.""" if 'user' in session: user = session['user'] userdb = json.loads(REDISDB.get('users').decode('UTF-8')) if user in userdb: return render_template('index.html', button='Profile', logged_in=True, text=user, stylesheet=_STYSHEET) return render_template('index.html', text='Not logged in.', stylesheet=_STYSHEET) @APP.route('/login/create/new/') def usercreationpage(): """Create user page""" emsg = request.args.get('emsg') xusername = request.args.get('xusername') prefill = True if not xusername: prefill = False return render_template('createuser.html', stylesheet=_STYSHEET, username=xusername, errormsg=emsg, pfill=prefill) @APP.route('/login/create/submit/') def createuser(): """Make user""" iusername = request.args.get('username') pwa = request.args.get('password') if pwa == request.args.get('password2'): 
#optimization, multiple IFs userdb = json.loads(REDISDB.get('users').decode('UTF-8')) if not userdb.get(iusername): if not request.headers.getlist("X-Forwarded-For"): client_ip = request.remote_addr else: client_ip = request.headers.getlist("X-Forwarded-For")[0] #verify captcha captcha_response = gcaptchacheck.checkcaptcha(request.args.get('g-recaptcha-response'), client_ip, _SKEY) if captcha_response: uncheck = authentication.validateusername(iusername) if not uncheck['success']: return redirect(url_for('usercreationpage', xusername=iusername, emsg=uncheck['reason'])) pwcheck = authentication.validatepassword(pwa) if not pwcheck['success']: return redirect(url_for('usercreationpage', xusername=iusername, emsg=pwcheck['reason'])) #if things have gotten this far all good, make the account userdata = authentication.createuserset(pwa) #[salt, hashed pwd + salt, empty string for shared secret] userdb.update({iusername: userdata}) REDISDB.set('users', json.dumps(userdb)) return redirect(url_for('index')) if not captcha_response: return redirect(url_for('usercreationpage', xusername=iusername, emsg='Bad captcha')) return redirect(url_for('usercreationpage', xusername=iusername, emsg='Username taken')) return redirect(url_for('usercreationpage', xusername=iusername, emsg='Passwords do not match')) @APP.route('/login/existing/') def loginpage(): """The login page""" emsg = request.args.get('emsg') xusername = request.args.get('xusername') prefill = True if not xusername: prefill = False return render_template('loginpage.html', errormsg=emsg, username=xusername, stylesheet=_STYSHEET, pfill=prefill) @APP.route('/login/existing/submit/') def verifylogin(): """Verifies a login""" iusername = request.args.get('username') ipassword = request.args.get('password') userdb = json.loads(REDISDB.get('users').decode('UTF-8')) result = authentication.authenticateuser(iusername, ipassword, userdb) if result: session['user'] = iusername return redirect(url_for('index')) return 
        redirect(url_for('loginpage', xusername=iusername, emsg='Invalid login'))

@APP.route('/logout/')
def logout():
    """Log out"""
    session.pop('user', None)
    return redirect(url_for('index'))

@APP.route('/profile/')
def profilepage():
    """User's profile page"""
    if 'user' in session:
        user = session['user']
        userdb = json.loads(REDISDB.get('users').decode('UTF-8'))
        if user in userdb:
            userinfo = userdb.get(user)
            return render_template('profile.html', username=user,
                                   stylesheet=_STYSHEET)
    return redirect(url_for('index'))

@APP.route('/double/')
def double():
    """Double or nothing page"""
    if 'user' in session:
        user = session['user']
        userdb = json.loads(REDISDB.get('users').decode('UTF-8'))
        if user in userdb:
            userinfo = userdb.get(user)
            return render_template('double.html', coin_amt=userinfo[2],
                                   text='Gamble a coin.', stylesheet=_STYSHEET)
    return redirect(url_for('index'))

@APP.route('/dodouble/')
def dodouble():
    """Double"""
    if 'user' in session:
        user = session['user']
        userdb = json.loads(REDISDB.get('users').decode('UTF-8'))
        if user in userdb:
            userinfo = userdb.get(user)
            message, success, change = gamble.double(userinfo,
                                                     request.args.get('gamount'))
            if success:
                userinfo[2] += change
                userdb.update({user: userinfo})
                REDISDB.set('users', json.dumps(userdb))
            return message
    return "-"

@APP.route('/getcoins/')
def getcoins():
    """Get coins"""
    if 'user' in session:
        user = session['user']
        userdb = json.loads(REDISDB.get('users').decode('UTF-8'))
        if user in userdb:
            userinfo = userdb.get(user)
            return str(userinfo[2])
    return "-"

@APP.route('/admin/')
def admin():
    """Admin"""
    if 'user' in session:
        if session['user'] == '//removed for privacy':
            userdb = json.loads(REDISDB.get('users').decode('UTF-8'))
            return render_template('admin.html', userdict=userdb)
    return redirect(url_for('index'))

@APP.route('/setmoney/')
def setmoney():
    """Set money"""
    if 'user' in session:
        if session['user'] == '//removed for privacy':
            money = request.args.get('money')
            user = request.args.get('user')
            userdb = json.loads(REDISDB.get('users').decode('UTF-8'))
            userdb[user][2] = int(money)
            REDISDB.set('users', json.dumps(userdb))
            return redirect(url_for('admin'))
    return redirect(url_for('index'))

if __name__ == '__main__':
    try:
        APP.run('0.0.0.0', 80, True)
    except PermissionError:
        HPORT = int(os.environ.get('PORT', 17995))
        APP.run('0.0.0.0', HPORT, False)

authentication.py

"""Authentication module for authentication"""
import hashlib
import random

ALLOWED_CHARS = 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ1234567890!@#$%^&*()'
ALLOWED_UCHAR = 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ1234567890'

def generatesalt(length, charset):
    """Makes a salt"""
    returnstring = ''
    while len(returnstring) < length:
        returnstring += charset[random.randint(0, len(charset) - 1)]
    return returnstring

def validatepassword(password):
    """Checks if password is good"""
    if len(password) > 1:
        if checkletter(password, ALLOWED_CHARS):
            return {'success': True}
        return {'success': False, 'reason': 'Password must be alphanumeric or with symbols.'}
    return {'success': False, 'reason': 'Password must be at least 2 characters.'}

def validateusername(username):
    """Checks if username is good"""
    if len(username) > 3:
        if checkletter(username, ALLOWED_UCHAR):
            return {'success': True}
        return {'success': False, 'reason': 'Username must be alphanumeric.'}
    return {'success': False, 'reason': 'Username must be at least 4 characters.'}

def checkletter(text, allowed_set):
    """Checks letters"""
    for letter in text:
        if not letter in allowed_set:
            return False
    return True

def createuserset(password):
    """Create user information"""
    salt = generatesalt(16, ALLOWED_UCHAR)
    hashsum = hashlib.sha256((password + salt).encode('UTF-8')).hexdigest()
    return [salt, hashsum, 1000]

def authenticateuser(username, password, dbs):
    """Checks whether a username matches to a password"""
    print(dbs)
    userinformation = dbs.get(username)
    if userinformation:
        print(userinformation)
        if (hashlib.sha256((password + userinformation[0])
                .encode('UTF-8')).hexdigest() == userinformation[1]):
            return True
    return False

gcaptchacheck.py

"""Gcaptchacheck"""
import requests

def checkcaptcha(clientresponse, clientip, secret):
    """Verify"""
    url = 'https://www.google.com/recaptcha/api/siteverify'
    argdict = {'secret': secret, 'response': clientresponse, 'remoteip': clientip}
    print(argdict)
    google_response = requests.post(url, argdict)
    response_dict = google_response.json()
    return response_dict['success']

store.py

"""Store module"""
import os
import urllib
import redis

DEBUG = False
if DEBUG:
    REDISDB = redis.Redis(host='', port=6379, db=0, password='')
else:
    URL = urllib.parse.urlparse(os.environ.get('REDISTOGO_URL',
                                               'redis://localhost:6379'))
    REDISDB = redis.Redis(host=URL.hostname, port=URL.port, db=0,
                          password=URL.password)

*DEBUG variable is true when in deployment!

gamble.py

"""Gamble"""
import random

def double(userinfo, amt):
    """Double"""
    if not amt:
        return 'Bad request', False, 0
    try:
        amt = int(amt)
    except ValueError:
        return 'Bad request', False, 0
    if (userinfo[2] - amt >= 0) and (amt > 0):  # allow
        result = random.randint(0, 1)
        if result:
            return 'Won ' + str(amt) + ' coins.', True, amt
        return 'Lost ' + str(amt) + ' coins.', True, 0 - amt
    return 'Insufficient balance', False, 0

This is also one of the first times I have worked with an application where other dependencies written by me are stored alongside in a folder. The main issues I have are:

- Am I handling login sessions appropriately? Should I manually store cookies b64encoded in front of some more encryption?
- My variable names are often quite long, and sometimes if / else ... statements become too long from multiple indents. How do I avoid that? My variable names get rather long just by themselves. An example could be gcaptchacheck.checkcaptcha or authentication.validateusername.
- I am simply storing user passwords as a SHA256 hash, along with a salt (securely!)... from what I know that's a good way to go, but are there better practices?
- My app sends asynchronous requests using some JavaScript in the HTML, such as /dodouble/?gamount=xxx and /getcoins/. Is there a better way that I can use?
- I am storing my files alongside each other in a folder. Is there a better way in terms of organisation or convention?
- I am considering something like gunicorn to enhance performance. Is there a better alternative? I am developing on Windows, after all.
- I control the stylesheets because otherwise, when I feel like updating the stylesheets, I have to edit the HTML for all the files. This method gives me an easy way to manage them.
- At the end you will see except PermissionError, which is because deploying to Heroku disallows binding to port 80, raising PermissionError. This way of automating that part seems relatively simple. Any better ways to do this?
- Heroku has an environment variable for Django apps to figure out whether they're in a development environment or a deployment environment. Is it OK if I snatch that quick fix, or is there an unspoken "sin" in doing that?

I'm not great at Python, so further recommendations would be appreciated! It also seems like nobody is interested or there is too much information to digest. This link to the website should help people interpret what I'm doing a bit better.

Edit: Sorry, the link has been deprecated.

Answer: Looks ok to me. You would need a Story that better articulates the actors and the threat model before you could judge whether alternate storage of cookies would adequately address the described threat. Your variable names are too short. Add at least one character, so we have e.g. validate_username. Descriptive identifiers are a good thing; keep using them.
You are already using parens to deal nicely with long boolean expressions, e.g.:

    if (hashlib.sha256((password + userinformation[0])
            .encode('UTF-8')).hexdigest() == userinformation[1]):

Consider writing long expressions in this way:

    if (a > b
            and c > d
            and e > f):

In checkletter() you wrote if not letter in allowed_set:, but flake8 would explain to you that the usual idiom is if letter not in .... No biggie.

Functions like validatepassword() would be a little clearer if you threw in the occasional else. It doesn't change how the program runs; it's just for folks reading it.

Using sha256 should be fine. Though I did notice a literal assignment to APP.secret_key, suggesting the secret is checked into source control. It is usual to instead read credentials from a separately maintained config file on the side.

Your API calls are fine. Using gunicorn or flask is pretty mainstream; you should be fine with low-volume traffic. When scaling up, you will likely front with a reverse proxy from nginx or varnish for cacheable static assets.

Consulting a Heroku environment variable is perfectly fine; that's what it's there for.
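On the secret-key point, here is a minimal sketch of reading the credential from the environment instead of hard-coding it. The variable name FLASK_SECRET_KEY and the helper function are my own choices for illustration, not from the post:

```python
import os

def load_secret_key():
    """Return the app's secret key from the environment.

    Raising when the variable is missing makes a misconfigured
    deployment fail loudly instead of silently running with no
    (or a guessable) session-signing key.
    """
    secret = os.environ.get('FLASK_SECRET_KEY')
    if not secret:
        raise RuntimeError('FLASK_SECRET_KEY is not set')
    return secret
```

The app would then do APP.secret_key = load_secret_key() at startup, and the actual value lives only in the environment (or a config file kept out of source control).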
{ "domain": "codereview.stackexchange", "id": 29096, "tags": "python, python-3.x, flask, redis" }
Set of matrix operations for summing around a matrix element
Question: I have results of simulations, which are discrete complex numbers representing a wave function on an NxN grid. I calculate the phase of the wavefunction. For every point on that grid I have to perform this set of operations. Example:

|a b c|
|d e f|
|g h i|

Sum of differences between neighbours around element e:

(a-b)+(b-c)+(c-f)+(f-i)+(i-h)+(h-g)+(g-d)+(d-a)

But every subtraction δ is subject to these conditions:

if δ > π then δ = δ - 2π
if δ < -π then δ = δ + 2π

I wrote code in Octave which is extremely inefficient and slow, but I was hoping there is a way to move from brute-force calculations to faster calculations using some of the functions in the image package. I know I can perform the summation using convolution, but I still have a problem with the other operations. Code in Octave:

function deltaPhi = phaseDifference(phi1, phi2)
  deltaPhi = phi1 - phi2;
  if(deltaPhi > pi)
    deltaPhi = deltaPhi - 2*pi;
  endif
  if(deltaPhi < -pi)
    deltaPhi = deltaPhi + 2*pi;
  endif;
end

function [phase] = checkPhase(M)
  phase = zeros(size(M)-2);
  for i = 2:size(M,1)-1
    for j = 2:size(M,2)-1
      phase(i-1,j-1) = phaseDifference(M(i-1,j-1),M(i,j-1)) + phaseDifference(M(i,j-1),M(i+1,j-1)) + phaseDifference(M(i+1,j-1),M(i+1,j)) + phaseDifference(M(i+1,j),M(i+1,j+1)) + phaseDifference(M(i+1,j+1),M(i,j+1)) + phaseDifference(M(i,j+1), M(i-1,j+1)) + phaseDifference(M(i-1,j+1), M(i-1,j)) + phaseDifference(M(i-1,j), M(i-1,j-1));
    endfor
  endfor
end

The idea is to rewrite this code in OpenCV and use some of that library's methods.

Answer: First some simple things:

- I would use end instead of endfor and endif, to keep the code compatible with MATLAB.
- It is not necessary to end a function with end (though it's not harmful either). The end is only necessary when writing nested functions that have access to the parent function's workspace.
- You should break up very long lines of code using ... (see an example below).
- When looping through a large matrix, always make the first index your inner loop.
Matrix elements M(1,j) and M(2,j) are adjacent in memory, M(i,1) and M(i,2) are not. If you switch your two nested loops, your code will be faster for large arrays, as you use the cache better.

Now for the hard part: changing your logic.

|a b c|
|d e f|
|g h i|

Sum of difference between neighbours around element e:

(a-b)+(b-c)+(c-f)+(f-i)+(i-h)+(h-g)+(g-d)+(d-a)

Note that you compute the phase difference between two neighboring cells several times: for example, (a-b) will be computed when determining your value for e, but also in the previous loop iteration, when d was the center element; it will be computed two more times when that pair appears at the bottom of such a neighborhood. Thus, you should pre-compute these differences and store them:

vdiff = phaseDifference(diff(M,1,1));
hdiff = phaseDifference(diff(M,1,2));

For this to work, phaseDifference must be rewritten, see below. Now, vdiff has one fewer row, and hdiff has one fewer column, than M. We need to take this into account when indexing. Think of these as the elements in between the rows/columns of M. The sum for one element at (i,j) now becomes:

phase(i-1,j-1) = hdiff(i-1,j-1) + hdiff(i-1,j) ...
               - hdiff(i+1,j-1) - hdiff(i+1,j) ...
               - vdiff(i-1,j-1) - vdiff(i,j-1) ...
               + vdiff(i-1,j+1) + vdiff(i,j+1);

The full function now becomes:

function phase = checkPhase2(M)
  vdiff = phaseDifference(diff(M,1,1));
  hdiff = phaseDifference(diff(M,1,2));
  phase = zeros(size(M)-2);
  for j = 2:size(M,2)-1
    for i = 2:size(M,1)-1
      phase(i-1,j-1) = hdiff(i-1,j-1) + hdiff(i-1,j) ...
                     - hdiff(i+1,j-1) - hdiff(i+1,j) ...
                     - vdiff(i-1,j-1) - vdiff(i,j-1) ...
                     + vdiff(i-1,j+1) + vdiff(i,j+1);
    end
  end

Since Octave is slow with loops, it is likely that this vectorized version will be faster:

function phase = checkPhase3(M)
  vdiff = phaseDifference(diff(M,1,1));
  hdiff = phaseDifference(diff(M,1,2));
  j = 2:size(M,2)-1;
  i = 2:size(M,1)-1;
  phase = hdiff(i-1,j-1) + hdiff(i-1,j) ...
        - hdiff(i+1,j-1) - hdiff(i+1,j) ...
        - vdiff(i-1,j-1) - vdiff(i,j-1) ...
        + vdiff(i-1,j+1) + vdiff(i,j+1);

I'm testing on MATLAB R2017a, and the vectorized form is actually slower than the version with loops, because indexing is still quite slow, whereas MATLAB has made huge strides in the last year speeding up code with loops. Timings for an input of 2000x3000 elements:

- checkPhase(M) (the OP's version) runs in 1.3 s, and swapping the loop order it runs in 1.0 s,
- checkPhase2(M) runs in 0.17 s,
- checkPhase3(M) runs in 0.25 s.

Your timings on Octave will be vastly different (much slower for the code with loops).

For phaseDifference to work with an array input, it cannot use if the way that the original function does. It is possible to rewrite the same logic using logical indexing to work on a full matrix, but this version is simpler:

function deltaPhi = phaseDifference(deltaPhi)
  deltaPhi = mod(deltaPhi + pi, 2*pi) - pi;

Note I replaced the two inputs with a single one, as diff already computes the difference between neighbors.
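For readers coming from Python, the same pre-computation trick translates directly to NumPy slicing. This is my own port of the answer's approach, not part of the original exchange; the function names are mine:

```python
import numpy as np

def phase_difference(delta):
    """Wrap a difference of angles into the interval [-pi, pi)."""
    return np.mod(delta + np.pi, 2 * np.pi) - np.pi

def check_phase(M):
    """Signed sum of wrapped neighbour differences around each interior
    element of M, using differences computed once (0-based indexing,
    same sign convention as the Octave checkPhase2)."""
    vdiff = phase_difference(np.diff(M, axis=0))  # between adjacent rows
    hdiff = phase_difference(np.diff(M, axis=1))  # between adjacent columns
    return (hdiff[:-2, :-1] + hdiff[:-2, 1:]
            - hdiff[2:, :-1] - hdiff[2:, 1:]
            - vdiff[:-1, :-2] - vdiff[1:, :-2]
            + vdiff[:-1, 2:] + vdiff[1:, 2:])
```

Each of the eight sliced views covers all interior elements at once, so the whole grid is processed without an explicit loop; for generic (non-boundary) phase values it agrees with the brute-force double loop from the question.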
{ "domain": "codereview.stackexchange", "id": 29289, "tags": "opencv, octave" }
Does a non-conducting umbrella increase your safety in a thunderstorm?
Question: There are umbrellas available which have a non-conducting handle and bar, e.g. made from graphite (example product, German website). If I'm walking around with this umbrella in a thunderstorm (not on purpose, of course):

1. Will my chances of getting hit by lightning be lower compared to using a regular umbrella?
2. Will my chances of getting hit by lightning be lower compared to not using an umbrella at all?
3. If the umbrella is hit by lightning, will I be hurt?

Answer: First, graphite is a conductor. It is used in the umbrella because it is a strong material that resists bending. Second, a lightning strike makes everything a conductor, including people and trees. In order:

1. No. The primary target of lightning is the tallest object in the vicinity. Trees are very commonly struck, despite being non-conductive.
2. There is possibly a slightly greater chance of being hit with an umbrella versus not having one, due to the height difference. The greater danger is the umbrella slowing you down while running for shelter.
3. If you are holding it, yes, no matter what the umbrella is made of. Millions of volts make everything a conductor.

To stay safe in a thunderstorm, consult here: https://www.weather.gov/safety/lightning
{ "domain": "physics.stackexchange", "id": 35992, "tags": "electricity" }
Trouble with the algebra in Srednicki book chapter 28
Question: I'm studying chapter 28 in Srednicki (the renormalization group) and I'm having trouble figuring out how he derives eq. (28.15) (last summation above) from eqs. (28.7) and (28.9). More specifically, he states that $$G(\alpha, \epsilon) \equiv \ln(Z_g^2Z_{\phi}^{-3})=\sum_{n=1}^{\infty}\frac{G_n(\alpha)}{\epsilon^n}\tag{28.14+15}$$ where $$Z_{\phi}=1+\sum_{n=1}^{\infty} \frac{a_n(\alpha)}{\epsilon^n}\tag{28.7}$$ and $$Z_g=1+\sum_{n=1}^{\infty} \frac{c_n(\alpha)}{\epsilon^n}\tag{28.9}$$ and with $$G_1(\alpha) = 2c_1(\alpha) -3a_1(\alpha).\tag{28.16}$$ Does anybody know an identity of logarithms that I'm missing to prove this equality (28.16)?

Answer: Srednicki is treating $$ Z-1\tag{A}$$ and $$\ln Z~=~-\sum_{j=1}^{\infty}\frac{(1-Z)^j}{j}\tag{B}$$ as perturbative formal power series in the coupling constant $\alpha\equiv \frac{g^2}{(4\pi)^3}$. Each coefficient of such a formal power series is a truncated Laurent series in $\epsilon$. Eq. (B) seems to be the answer to the OP's question. Be aware that Srednicki somewhat misleadingly writes the double sum in the opposite order. [He writes the sum over $\epsilon$-powers explicitly, while the sum over $\alpha$-powers is implicit, cf. e.g. eqs. (28.7) & (28.9).] For a deeper reason why it is consistent to perturbatively expand divergent terms, see e.g. this & this related Phys.SE posts.
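To make the step explicit, here is a sketch of the algebra (my own working, not Srednicki's text): split the logarithm, expand $\ln(1+x)$, and note that squares and higher powers of the $1/\epsilon$ series only feed the $1/\epsilon^2$ and deeper poles.

```latex
% Split the logarithm:
G(\alpha,\epsilon) = \ln\big(Z_g^2 Z_\phi^{-3}\big) = 2\ln Z_g - 3\ln Z_\phi
% Expand each factor with \ln(1+x) = x - \tfrac{1}{2}x^2 + \dots,
% where x = c_1/\epsilon + c_2/\epsilon^2 + \dots (resp. a_n for Z_\phi):
\ln Z_g = \frac{c_1}{\epsilon} + \frac{c_2 - \tfrac{1}{2}c_1^2}{\epsilon^2} + \dots ,
\qquad
\ln Z_\phi = \frac{a_1}{\epsilon} + \frac{a_2 - \tfrac{1}{2}a_1^2}{\epsilon^2} + \dots
% Collecting the coefficient of the simple 1/\epsilon pole reproduces (28.16):
G_1(\alpha) = 2\,c_1(\alpha) - 3\,a_1(\alpha)
```

Since $x$ itself starts at order $1/\epsilon$, the term $-x^2/2$ first contributes at $1/\epsilon^2$, so the residue of the simple pole comes entirely from the linear term of the expansion.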
{ "domain": "physics.stackexchange", "id": 87361, "tags": "quantum-field-theory, renormalization, approximations, physical-constants" }
Basic question on numerical precision
Question: I appreciate this opportunity to submit a query on this forum. When studying the continuous-time & discrete-time distinction, specifically with reference to discrete signals being identical when separated by 2*pi, it has struck me that a basic premise, per Google's calculator, doesn't hold:

exp(2 * pi * sqrt(-1)) = 1
exp(4 * pi * sqrt(-1)) = 1
exp(6 * pi * sqrt(-1)) = 1
exp(8 * pi * sqrt(-1)) = 1

but...

exp(10 * pi * sqrt(-1)) = 1 - 1.2246468 × 10^-15 i

Any thoughts on the above (discrepancy?) would be appreciated.

Regards,
wirefree

Answer: Writing it another way: $$1 - 1.2246468 × 10^{-15} i = 1 - 0.0000000000000012246468i$$ It is just rounding errors adding up, due to having a limited number of bits in whatever computer is doing the calculation.
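The effect is easy to reproduce in ordinary double-precision arithmetic. A small Python illustration of my own (not from the original exchange):

```python
import cmath
import math

# math.pi is only the closest 64-bit float to pi, off by roughly 1.2e-16.
# Multiplying by k scales that representation error, so exp(i*k*pi) lands
# slightly off the exact point 1 + 0i on the unit circle; the residual
# imaginary part is on the order of 1e-16 to 1e-15.
for k in (2, 4, 6, 8, 10):
    z = cmath.exp(1j * k * math.pi)
    print(k, z, abs(z - 1))
```

Whether a given calculator displays exactly 1 or the tiny residual depends on its internal precision and its display rounding, which is why some multiples appear "clean" and others do not.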
{ "domain": "dsp.stackexchange", "id": 3092, "tags": "floating-point" }
Does bulk specific gravity (Gmb) of asphalt specimen change with sample size?
Question: If a 6"x4" asphalt sample is cut down to smaller dimensions (let's say to prepare a DCT testing specimen), does the Gmb value change from the original bigger sample? If it does change during testing, then what might be the factors affecting this change?

Answer: The specific gravity of bituminous materials and binders is much lower than that of the gravel/sand aggregate and is sensitive to ambient temperature (AASHTO T 228 and ASTM D 70: Specific Gravity and Density of Semi-Solid Bituminous Materials): roughly around 1.03, versus 2.6 to 2.65 for the aggregate. So, depending on the mixture of these two components, which is not uniform because of the random spaces filled with the binder, the specific gravity of smaller samples is expected to vary.
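To see why local composition matters, here is a back-of-the-envelope sketch (my own illustration, not from the answer, and it deliberately ignores the air voids that a real Gmb measurement includes): the combined specific gravity of a binder/aggregate mix follows a mass-weighted harmonic mean, so a small specimen cut from a locally binder-rich region reads lower than the bulk.

```python
def mix_specific_gravity(mass_fractions, specific_gravities):
    """Mass-weighted harmonic mean of phase specific gravities.

    Per unit mass of mix, each phase occupies fraction / SG units of
    volume; combined SG is total mass over total volume (air voids
    neglected in this simplified sketch)."""
    total_volume = sum(f / g for f, g in zip(mass_fractions, specific_gravities))
    return 1.0 / total_volume

# Illustrative numbers from the answer: binder SG ~1.03, aggregate ~2.63.
# Compare a nominal 5 % binder content with a binder-rich local region:
g_lean = mix_specific_gravity([0.05, 0.95], [1.03, 2.63])
g_rich = mix_specific_gravity([0.08, 0.92], [1.03, 2.63])
```

Even a few percent of extra binder in the cut-out region shifts the computed specific gravity noticeably, which is the non-uniformity argument the answer makes.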
{ "domain": "engineering.stackexchange", "id": 3674, "tags": "civil-engineering" }
Estimating the angle covered by the star trails and deducing how long the exposure lasted
Question: How would you estimate the angle covered by the star trails and deduce how long the exposure lasted in the following image? Here is the photo's caption:

"Star trails beyond the Gemini Observatory on Mauna Kea, from which a laser beam forms a guide star on the Earth's ionosphere that enables the use of adaptive optics. In this long exposure, the laser beam tracks across the sky, making the visible arc. (The object in the sky at which the laser was pointed changed several times during the night.) Note that stars rise and set at an angle (not perpendicular) relative to the horizon, because Hawaii is about 20 degrees north of Earth's equator. Also, the farther a star is from the pole, the more its path looks straight, over a short distance."

I read the answer to this question about how to measure star trail angles:

"The only star trail angles I'm familiar with deal with astrophotography. You set up a camera, put the film in, open the shutter for a time exposure of the stars. As the stars appear to revolve around the earth, they make "arcs" on the film. To calculate the angle of the arc, use the formula: 1 minute of time = 15 minutes of arc. If the answer is over 60, divide by 60 to get degrees of arc. So: 12 minute exposure = 180 minutes of arc or 3˚ of arc length for your star trail."

I'm not sure how to approach this problem since the answer uses time to calculate the angle of the arc, but no time is given here.

Answer: The answer is given here. One minute of time corresponds to 15 arcminutes (written as 15'). This is because in 24 h the Earth revolves 360º, so $$\textrm{angle per time} = \frac{360º}{24 \textrm{ h}} =\frac{21,600'}{1440 \textrm{ min}} = 15'/\textrm{min}.$$ If you turn this fraction upside down, you see that 1' corresponds to 1/15 min, or 4 seconds.
That is, you measure the angle (let's call it $\theta$) of any of the star traces, as seen from the center (notice that the Northern Star is not exactly at the center, so that it itself traces a tiny arc instead of a dot). From the picture below, I get roughly $\theta = 135º$. The exposure time is thus: $$t_\mathrm{exp} = \frac{\theta}{\textrm{angle per time}} = \frac{135º}{360º/24 \textrm{ h}} \sim 9\textrm{ h}.$$ By the way, if you mark the position of the ends of the trails, you can recover the stellar sky. I found Ursa Major, marked by the yellow dots.
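The conversion used in the answer is simple enough to script; a minimal sketch in Python (my own code, with the 135-degree measurement taken from the answer):

```python
def exposure_hours(arc_degrees):
    """Exposure time implied by a star-trail arc, using the
    approximation that the sky turns 360 degrees in 24 hours,
    i.e. 15 degrees per hour."""
    return arc_degrees / 15.0
```

With the roughly 135-degree arcs measured from the photo, exposure_hours(135) gives 9 hours, matching the answer's estimate.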
{ "domain": "astronomy.stackexchange", "id": 758, "tags": "star" }