Portion of universe visible if gathering image from inflationary epoch
Question: Apologies in advance for my over-exposure to pop science. I want to propose a telescope powerful enough to collect imagery from 10^-32 seconds after the “Big Bang”, i.e. as soon as the inflationary epoch ended. With light coming from the universe when it was roughly 10 ly in diameter, along with the expansion to our current size, I would only see the tiniest fraction of light from so long ago with a telescope looking at that far of a distance. Assuming my telescope has an identical field of view as the Hubble Space Telescope, what fraction of the total volume of the universe at the end of the inflationary epoch would I actually be viewing? Answer: tl;dr: Your field of view would cover roughly one square centimeter of the sky at that time, and you would observe roughly 50 billionths of the observable Universe. You can't really… With photons, you will never be able to see further back than recombination, when the Universe was 380,000 yr old, because until then, the free electrons made it opaque to radiation. With neutrinos, you can (in principle) see back to 1 s after the Big Bang, but to "see" all the way back to inflation, you will probably need gravitational waves. …but let's assume you can anyway Let's do a thought experiment and assume that you have an RLT (Ridiculously Large Telescope) with the field of view (FoV) of the HST. What fraction of the Universe can you see? The answer might surprise you: The farther an object is from you, the smaller an angle it spans on the sky, i.e. the smaller it looks. This is true for birds and planets and even for the nearest galaxies. However, for distant galaxies a strange effect counteracts this: because of 1) the finite speed of light and 2) the expansion of the Universe, distant galaxies were closer to you when they emitted the light you see today, and therefore spanned a larger angle.
Hence, if you compare galaxies of the same physical size, they look smaller and smaller until around 15 Glyr (giga-lightyears), after which they'll look larger and larger. The Hubble Ultra Deep Field has a FoV of 2.8 by 2.5 arcminutes. The region of the Universe that this FoV spans is thus largest at a distance of 15 Glyr, where it spans $4.8\times4.3\,\mathrm{Mlyr}^2$ (i.e. square-mega-lightyears). But then your FoV begins to span an increasingly smaller region. At recombination, it spans only $34\times30\,\mathrm{klyr}^2$ — in other words, if a Milky Way-sized galaxy were present there (there weren't any), it would be larger than the FoV. If you could look all the way back to 4 hours after the Big Bang, your FoV would span roughly a square-lightyear. Three minutes after BB, when all the Universe's hydrogen and helium had just been created, it would span $\sim0.1\times0.1\,\mathrm{lyr}^2$. A few microseconds after BB, it would span roughly a square-AU (the distance from Earth to the Sun). Approximately $10^{-22}\,\mathrm{s}$ after BB, it would span a square kilometer. And at $10^{-32}\,\mathrm{s}$ after BB, your FoV would span $0.8\times0.7\,\mathrm{cm}^2$, i.e. roughly one square-centimeter! Fraction of the Universe observed: The part of the observable Universe that you would observe is the volume "behind" your FoV. That can be obtained by integrating the area along the distance, but an easier way is to simply note that, if your FoV were the whole sky, you'd observe the whole Universe. Since your FoV covers a fraction of the sky of $$ \begin{array}{rcl} f & = & \frac{2.8'\times2.5'}{\mathrm{full\,sky}}\\ & = & \frac{8\times10^{-4}\,\mathrm{rad}\,\,\times\,\,7\times10^{-4}\,\mathrm{rad}}{4\pi\,\mathrm{rad}^2}\\ & = & 5\times10^{-8}, \end{array} $$ this is the fraction of the Universe that you would observe. At $t=10^{-32}\,\mathrm{s}$, the part of the Universe that later became our observable Universe was only about ten meters in radius.
Another way to calculate the fraction is to note that the above-mentioned square centimeter comprises roughly a fraction $f$ of the surface area of a sphere with a radius of ten meters.
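The sky-fraction arithmetic above can be reproduced in a few lines (a sketch in Python; the 2.8′ × 2.5′ HUDF field of view is the one quoted in the answer):

```python
import math

# one arcminute in radians
arcmin = math.pi / (180 * 60)

# solid angle of the HUDF field of view (2.8' x 2.5'), small-angle approximation
fov = (2.8 * arcmin) * (2.5 * arcmin)

# fraction of the full sky (4 pi steradians), hence of the Universe observed
f = fov / (4 * math.pi)
print(f"{f:.1e}")  # roughly 5e-8, i.e. ~50 billionths
```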
{ "domain": "astronomy.stackexchange", "id": 2918, "tags": "hubble-telescope, early-universe" }
Loading Configurations from plist into singleton
Question: In my iOS application, I've created a singleton class that reads a configuration plist file and provides accessor methods to easily retrieve the values: class Configuration{ struct Key{ static let searchRadius = "searchRadius" static let significantUserMovementThreshold = "movementThreshold" } enum ConfigurationError : ErrorType{ case FileNotFound } static let sharedInstance = Configuration() var configDictionary : NSDictionary? func loadConfigurationFromPropertyList() throws{ if let filePath = NSBundle.mainBundle().pathForResource("Config", ofType: "plist"){ self.configDictionary = NSDictionary(contentsOfFile: filePath) }else{ throw ConfigurationError.FileNotFound } } func searchRadius()-> Double?{ return configDictionary?.objectForKey(Key.searchRadius) as? Double } func significantUserMovementThreshold() -> Double?{ return configDictionary?.objectForKey(Key.significantUserMovementThreshold) as? Double } } In the AppDelegate, I call Configuration.sharedInstance.loadConfigurationFromPropertyList() to load the configuration into memory. I have a number of questions regarding this approach: Is it a good idea to have an accessor method for each property in the configuration file? I think it provides a cleaner and more reliable approach than using subscripts. Is it better for the Configuration class to read the file in its constructor, or for the AppDelegate to invoke loadConfigurationFromPropertyList()? Answer: I don't have a whole lot to add, as this is pretty straightforward and seems like a completely reasonable way to do what you're trying to do. In regards to your questions: There's a tradeoff to using accessor methods for properties. Every time you add a new one, you need to add a constant to the Key struct and an accessor method to the Configuration class to get its value. If you misspell either of them (in the plist or the string in the Key struct), the compiler won't tell you, and you can end up with some odd bugs that are hard to track down.
However, I personally prefer the way you're doing it because it's easier to read in the future. Your set of configurations is currently fairly small, so it's pretty easy to maintain. I'd go with what you've got unless you think there will be a lot of configuration options controlled by the plist file. It's better to read the file in the constructor, in my opinion. This is known as RAII (Resource Acquisition Is Initialization). The idea is that if acquisition happens in the constructor and the constructor fails, you don't end up with a partially initialized object.
{ "domain": "codereview.stackexchange", "id": 18954, "tags": "ios, swift, singleton, configuration" }
Unable to open rosbag files in C++
Question: Hello, I am trying to write a node to edit the timestamps in a bag file, using C++. The code I have written is attached below. I have tried to follow the template given in the rosbag C++ API, but I am unable to open any of the bag files I have, even though they run perfectly well when I play them back using rosbag play. Specifically, I am seeing the error message terminate called after throwing an instance of 'rosbag::BagIOException' what(): Error opening file: ~/catkin_ws/src/publish_text/launch/filtered.bag I am not quite sure why exactly this is happening. Would greatly appreciate it if anyone had any suggestions! Thanks. (Code from the main function of my code enclosed below) ros::init(argc, argv, "postprocessing"); ros::NodeHandle nh("~"); std::string instr, outstr; rosbag::Bag inbag, outbag; if (nh.getParam("input_path", instr)) { inbag.open(instr, rosbag::bagmode::Read); } else { ROS_ERROR("No input path provided, terminating\n"); exit(1); } if (nh.getParam("output_path", outstr)) { outbag.open(outstr, rosbag::bagmode::Write); } else { ROS_ERROR("No output path provided, terminating\n"); exit(1); } UPDATE: I was able to resolve the issue by replacing the path with an absolute path from the root directory. Originally posted by jll on ROS Answers with karma: 27 on 2015-12-06 Post score: 2 Original comments Comment by 2ROS0 on 2016-08-02: Does the rosbag::Bag API support relative paths like this: rosbag::Bag::open("./name_of_bagfile")? I'm using a ROS Groovy default install Answer: terminate called after throwing an instance of 'rosbag::BagIOException' what(): Error opening file: ~/catkin_ws/src/publish_text/launch/filtered.bag Most likely you are expecting the C++ runtime to resolve/replace the ~ with the path to the user's home directory. AFAIK that behaviour is implemented by the shell (i.e. bash); it's not something that works everywhere. Can you try to provide an absolute (or even relative) path to rosbag::Bag::open(..)?
Originally posted by gvdhoorn with karma: 86574 on 2015-12-07 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by 2ROS0 on 2016-08-02: I have the same problem when adding a relative path (absolute path works fine) as such: rosbag::Bag::open("./name_of_bagfile.bag") Comment by antoineniotna on 2019-05-06: Did you manage to use the relative path? I have exactly the same problem: the absolute path works fine but the relative one doesn't. I am using the Kinetic version.
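The accepted answer's point, that ~ is expanded by the shell and never by the program, can be demonstrated in a few lines. The sketch below uses Python for brevity (the home directory is a hypothetical value, set only to make the demo deterministic); in roscpp one would do the equivalent expansion, e.g. via getenv("HOME"), before calling rosbag::Bag::open:

```python
import os

# force a known home directory so the example is deterministic (hypothetical path)
os.environ["HOME"] = "/home/user"

path = "~/catkin_ws/src/publish_text/launch/filtered.bag"

# The shell expands ~ before a program ever sees the argument; a value read from
# a parameter server or config file arrives literally, so expand it explicitly.
expanded = os.path.expanduser(path)
print(expanded)  # /home/user/catkin_ws/src/publish_text/launch/filtered.bag
```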
{ "domain": "robotics.stackexchange", "id": 23162, "tags": "rosbag, roscpp" }
What is this formula, related to simple linear regression, called?
Question: This is my first post here. I hope I can make myself clear. Right now I'm learning linear regression as part of an introductory class on machine learning. After going over the steps in the simple regression formulae, I realized that I had been doing something similar in the past to construct lines. In Python: data = [{'x': 1, 'y': 2}, {'x': 5, 'y': 7}, {'x': 6, 'y': 8}] coeff = sum([d['x'] / d['y'] for d in data]) / len(data) Here, we're calculating the mean ratio between the variables, which we can use as a coefficient for constructing a line. Does this method have a name, and how does it relate to simple linear regression? Answer: I do not know of any terms for this other than the average of the inverse slopes, or equivalently the inverse of the harmonic mean of the slopes. This is also the negative of the average slope of the perpendiculars. It gives you the average of the inverse slopes of the lines passing through $(0,0)$ and $(x_i,y_i)$. If the $(x_i,y_i)$ are almost aligned with $(0,0)$, this is an estimate of the inverse of the slope of a line passing through the points. Linear regression is quite different, as it involves cross-products of sums of $x$ or $y$.
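To see how the question's coefficient relates to ordinary least squares, here is a small comparison (a sketch; the data are the three points from the question, and the least-squares line is constrained through the origin so the slopes are comparable):

```python
data = [{'x': 1, 'y': 2}, {'x': 5, 'y': 7}, {'x': 6, 'y': 8}]
xs = [d['x'] for d in data]
ys = [d['y'] for d in data]
n = len(data)

# the question's coefficient: mean of x/y, i.e. the mean of the inverse slopes
mean_inverse_slope = sum(x / y for x, y in zip(xs, ys)) / n

# mean of the per-point slopes y/x of lines through the origin
mean_slope = sum(y / x for x, y in zip(xs, ys)) / n

# least-squares slope for a line through the origin: sum(xy) / sum(x^2)
ols_slope = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

print(mean_inverse_slope, mean_slope, ols_slope)
```

The three quantities differ, which illustrates the answer's closing point: the mean-of-ratios estimate is not the least-squares solution.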
{ "domain": "datascience.stackexchange", "id": 913, "tags": "machine-learning, linear-regression" }
RoboEarth.OWL not found
Question: Hi, how can roboearth.owl be useful for a robot to infer a missing component on a table? I cannot open the file, so I do not know its content: http://www.roboearth.org/kb/roboearth.owl#PickUpBottle Originally posted by RiskTeam on ROS Answers with karma: 238 on 2012-10-14 Post score: 0 Answer: The IRI of an OWL identifier does not have to coincide with the file from which the data is loaded, and does not even have to exist. It is intended to be a globally unique identifier. For example, the RoboEarth namespace http://www.roboearth.org/kb/roboearth.owl# is used for all information downloaded from the RoboEarth knowledge base, which is usually not in the roboearth.owl file. In the tutorial that you have probably followed, you have downloaded the respective OWL file from RoboEarth; it should have been cached in the file re_comm/tmp/serveadrink.serveadrink.owl Alternatively, you can also manually search the RoboEarth knowledge base after creating an account at http://api.roboearth.org/ Originally posted by moritz with karma: 2673 on 2012-10-14 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by RiskTeam on 2012-10-15: Sorry, I do not understand your answer Comment by moritz on 2012-10-15: KnowRob is a complex system and requires some understanding of the underlying knowledge representation techniques. I'd recommend starting with the 'getting started' tutorial, which refers to an OWL tutorial that explains these terms: http://www.ros.org/wiki/knowrob#Tutorials Comment by RiskTeam on 2012-10-15: Ok, thanks a lot.
{ "domain": "robotics.stackexchange", "id": 11361, "tags": "ros, knowrobtutorials, roboearth, knowrob" }
Why is the Halting problem decidable for Goto languages limited on the highest value of constants and variables?
Question: This is taken from an old exam of my university that I am using to prepare for the coming exam: Given is a language $\text{Goto}_{17}^c \subseteq \text{Goto}$. This language contains exactly those $\text{Goto}$ programs in which no constant is ever above $17$ and no variable ever above $c$. $\text{Goto}$ here describes the set of all programs written in the $\text{Goto}$ language, made up of the following elements: With variables $x_i \in \mathbb{N}$ and constants $c \in \mathbb{N}$: Assignment: $x_i := c$, $x_i := x_i \pm c$; Conditional jump: if (comparison) goto $L_i$; Halt command: halt. I am currently struggling with the formalization of a proof, but this is what I have come to so far, phrased very casually: For any given program in this set we know that it is finite. A finite program contains a finite number of variables and a finite number of states, or lines to be in. As such, there is a finite number of configurations in which this program can be. If we let this program run, we can keep a list of all configurations we have seen, that is, the combination of all used variable values and the state of the program. If we let the program run, one of two things must eventually happen: The program halts. In this case, we return YES and have decided that it halts. The program reaches a configuration that has been recorded before. As the language is deterministic, this means that we must have gone through a full loop, which will now repeat exactly, so we return NO. No other case can exist: otherwise the program would run forever without ever repeating a configuration, so every one of its infinitely many steps would produce a new configuration, contradicting the finiteness of the set of configurations. Is this correct? Furthermore, how would a more formal proof look if it is? If not, how would a correct proof look? Answer: There is a finite number of different states (the set of values of the variables and the program counter).
Your "limited goto programs" are just a (messy) way to describe a deterministic finite automaton. Alternatively, reason that, since the set of program states is finite, it is certainly possible to map out all possible non-looping computations (by something like a breadth-first search of the graph of states and their neighbours).
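The argument in the question can be turned directly into a decision procedure. The sketch below uses a made-up, simplified instruction set standing in for the exam's Goto language (capping variables at c is an assumption about how the bound is enforced); since the program counter and every variable are bounded, a repeated (pc, variables) configuration proves an infinite loop:

```python
def halts(program, num_vars, c=17):
    """Decide halting for a toy bounded-variable goto program.

    Instructions: ('set', i, k)  x_i := k
                  ('add', i, k)  x_i := min(x_i + k, c)
                  ('sub', i, k)  x_i := max(x_i - k, 0)
                  ('jnz', i, t)  if x_i != 0 goto line t
                  ('halt',)
    """
    variables = [0] * num_vars
    pc = 0
    seen = set()
    while 0 <= pc < len(program):
        config = (pc, tuple(variables))
        if config in seen:
            return False  # same configuration twice: deterministic, so it loops forever
        seen.add(config)
        op = program[pc]
        if op[0] == 'halt':
            return True
        if op[0] == 'set':
            variables[op[1]] = min(op[2], c)
            pc += 1
        elif op[0] == 'add':
            variables[op[1]] = min(variables[op[1]] + op[2], c)
            pc += 1
        elif op[0] == 'sub':
            variables[op[1]] = max(variables[op[1]] - op[2], 0)
            pc += 1
        elif op[0] == 'jnz':
            pc = op[2] if variables[op[1]] != 0 else pc + 1
    return True  # ran off the end of the program: treat as halting

looping = [('set', 0, 1), ('jnz', 0, 1)]                             # spins forever
counting = [('set', 0, 3), ('sub', 0, 1), ('jnz', 0, 1), ('halt',)]  # counts down, halts
print(halts(looping, 1), halts(counting, 1))  # False True
```

Since at most (number of lines + 1) × (c + 1)^(number of variables) configurations exist, the simulation loop itself always terminates.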
{ "domain": "cs.stackexchange", "id": 16651, "tags": "decision-problem, halting-problem, turing-completeness" }
Find all pairs of strings in a set with Levenshtein distance < d
Question: I have a set of $n = $ 100 million strings of length $l = 20$, and for each string in the set, I would like to find all the other strings in the set with Levenshtein distance $\le d = 4$ from that string. The Levenshtein distance (also called the edit distance) between two strings is the number of insertions, deletions and/or replacements required to convert one string into the other. This should be possible in $O((d + 1)^{2d + 1} \cdot l \cdot n)$ time with a Levenshtein transducer, which takes a single query string at a time and finds all the matches in a set with Levenshtein distance $\le d$. However, using the implementation at that link, it appears to take more like $O(n \log n)$ time rather than $O(n)$, and uses more than 200 GB of memory. Is there an alternate $O(n)$ approach that might be faster in practice? The Levenshtein transducer is more general than it needs to be for this application, since it finds matches for each string independently and doesn't exploit the fact that you're comparing every string against every other string. Answer: There's a "trick" you can use that might potentially speed up your algorithm a little: shingling. No guarantees that it'll necessarily help in your particular case, though. Lemma. If the edit distance between two words $w,x$ is $\le 4$, and the two words both have length 20, then there exists some 4-gram that is common to both words (i.e., some $u$ of length 4 that is a substring of both $w$ and $x$). This lemma then suggests the algorithmic trick. Build a table with $4^4$ buckets, one bucket for each possible $4$-gram. Given a word $w$, extract all of its 4-grams (there will be 17 of them) and insert $w$ into those 17 buckets. Do this for each word. Now, if there is a pair of words at edit distance $\le 4$, then there must be some bucket that contains both words. So, for each bucket, we search the set of words within that bucket for a pair of words at edit distance $\le 4$.
Use whatever algorithm you like for that task. I would suggest trying BK-trees or metric trees. Will this trick help? The only way to know for sure is to try it. But here's the intuition. We'll have 256 buckets. Each word will get mapped into 17 buckets. Thus, heuristically, we expect each bucket to have $n'=17n/256$ words in it. If we have an algorithm that can search for pairs of words at edit distance $\le 4$ among a set of words in $T(n)$ time, then applying that directly to the original set of words takes $T(n)$ time. In contrast, after applying this trick, we apply it once to each bucket. Now if $256\, T(17n/256)$ is smaller than $T(n)$, this trick might yield improvements in running time. My prediction is that it'll yield improvements for $\Theta(n^2)$-runtime algorithms, but not for $O(n \lg n)$ or $O(n)$ time algorithms. This trick can be combined with any algorithm for looking for a pair of words with edit distance $\le 4$ in a set of $n$ words. The "trick" can be improved further. Define a "slate" of the word $w$ to be a pair $(i,u)$ where $i$ is a position ($1 \le i \le 17$) and $u$ is the 4-gram that occurs at position $i$ within $w$. From each word you get 17 slates. Now we get a refined lemma: Lemma. If the edit distance between two words $w,x$ of length 20 is $\le 4$, then there exists a slate $(i,u)$ of $w$ and a slate $(j,v)$ of $x$ such that $u=v$ and $|i - j| \le 2$. (It might be possible to improve the latter condition to $|i - j| \le 1$, but I haven't tried to check.) We'll do the same thing as before, but identifying each bucket with a slate. We get $20 \times 256$ different buckets. Now, for each word $w$, extract its 17 slates and store the word in the corresponding 17 buckets. Let $B_{i,u}$ denote the set of words in the bucket for the slate $(i,u)$.
Finally, for each slate $(i,u)$, we check whether there exists a word $w \in B_{i-2,u} \cup B_{i-1,u} \cup B_{i,u} \cup B_{i+1,u} \cup B_{i+2,u}$ that is within edit distance $\le 4$ of some word in $B_{i,u}$. Heuristically, we expect each bucket to have $n'=17n/(20 \times 256) \approx 0.0033 n$ words in it. We expect the running time for each bucket to be something like $5 T(n')$, and we need to do that $20 \times 256$ times, for a total running time of $25600 T(0.0033n)$. Is this better than the prior trick? The only way to know for sure is to try it. My prediction is that this'll be better for $\Theta(n^2)$-runtime algorithms, but worse for $O(n \lg n)$ or $O(n)$ time algorithms. Finally, I don't think you have the running time of your scheme right. I believe the Levenshtein transducer automata approach will have running time $\Omega(n^2)$ in your setting, as several folks have outlined in the comment thread under your question.
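The first (position-free) bucketing trick can be sketched in a few lines of Python. Assumptions beyond the answer: the alphabet is {A, C, G, T} (consistent with the $4^4 = 256$ buckets above), and a plain dynamic-programming edit distance stands in for "whatever algorithm you like" within each bucket:

```python
from collections import defaultdict
from itertools import combinations

def levenshtein(a, b):
    # standard dynamic-programming edit distance, one row at a time
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # skip a character of a
                           cur[-1] + 1,                  # skip a character of b
                           prev[j - 1] + (ca != cb)))    # substitution or match
        prev = cur
    return prev[-1]

def close_pairs(words, d=4, q=4):
    # bucket each word under every one of its q-grams; by the lemma, any pair
    # at edit distance <= d must share at least one bucket
    buckets = defaultdict(set)
    for w in words:
        for i in range(len(w) - q + 1):
            buckets[w[i:i + q]].add(w)
    pairs = set()
    for bucket in buckets.values():
        for a, b in combinations(sorted(bucket), 2):
            if levenshtein(a, b) <= d:
                pairs.add((a, b))
    return pairs

words = ["ACGTACGTACGTACGTACGT", "ACGTACGTACGAACGTACGT", "TTTTGGGGCCCCAAAATTTT"]
print(close_pairs(words))  # only the first two words are within distance 4
```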
{ "domain": "cs.stackexchange", "id": 6116, "tags": "algorithms, strings, string-metrics, edit-distance" }
Electrolysis of water: Which equations to use? (IB Chem)
Question: There is a list of standard electrode potentials at 298 K on p. 23 of the IB Data Booklet 2016. Which of the following equations (forward/backward reactions), from the two possible ones involving the discharge of hydrogen gas and the other two with oxygen gas discharge, should I use for the oxidation and reduction of water in electrolytic cells? $$ \begin{array}{cc} \hline \ce{\text{Oxidized species} <=> \text{Reduced species}} & E^⦵(\pu{V}) \\ \hline \begin{align} \ce{H2O(l) + e- &<=> 0.5 H2(g) + OH-(aq)} \\ \ce{H+(aq) + e- &<=> 0.5 H2(g)} \\ \ce{0.5 O2(g) + H2O(l) + 2 e- &<=> 2 OH-(aq)} \\ \ce{0.5 O2(g) + 2 H+(aq) + 2 e- &<=> H2O(l)} \end{align} & \begin{array}{r} -0.83 \\ 0.00 \\ +0.40 \\ +1.23 \end{array} \\ \hline \end{array} $$ (If the use of any of these equations cannot be generalized, I would be equally grateful for a concise explanation of why this is so and what to do then.) Answer: For acidic electrolysis, use the reactions where $\ce{H+}$ occurs. As $\ce{OH-}$ is not available there in any considerable amount as a reagent, neither is it created as a product. Generally, for a reaction choice, apply the principle of availability and stability, allowing for a reagent to exist in (relative) abundance. $\ce{OH-}$ or anions of weak acids like $\ce{ClO-}$ do not survive in acids. Acids do not survive in hydroxides. But note that using reactions with half of a molecule is not necessary: $$\begin{align} \ce{O2(g) + 4H+(aq) + 4e- &<=> 2 H2O(l)}\\ \ce{2H+(aq) + 2e- &<=> H2(g)} \end{align}$$ For alkaline electrolysis, similarly, use the reactions where $\ce{OH-}$ occurs. $$\begin{align} \ce{2 H2O(l) + 2e- &<=> H2(g) + 2 OH^-(aq)}\\ \ce{O2(g) + 2 H2O(l) + 4e- &<=> 4 OH^-(aq)} \end{align}$$
{ "domain": "chemistry.stackexchange", "id": 11829, "tags": "physical-chemistry, electrochemistry, redox, water, reduction-potential" }
Minimum number of sets of unreachable vertices for directed acyclic graph (DAG)
Question: I have a DAG with vertices $V$ and edges $E$. If $v,w \in V$ are vertices such that $v$ is not reachable from $w$ and $w$ is not reachable from $v$, I will say that $\langle v,w \rangle$ is an unreachable pair. I want to implement an efficient algorithm which takes this DAG as input, and produces as output a list of sets of vertices such that: For each unreachable pair of vertices $\langle v,w \rangle$, at least one of the output sets contains both $v$ and $w$, Every pair of vertices $v,w$ that are contained in the same output set must be an unreachable pair, and The number of output sets must be the minimum possible. Is there a polynomial-time algorithm for this problem? As an example, we have the following DAG with topologically ordered vertices A,B,C,D,E: A is connected to [D, E] B is connected to [D, E] C is connected to [E] D is connected to [] E is connected to [] which means that we have the following unreachable vertex pairs: 1. <A, B> 2. <A, C> 3. <B, C> 4. <C, D> 5. <D, E> The expected output is composed of the following three vertex sets: 1. {A, B, C} 2. {C, D} 3. {D, E} The output is not {A,B} {A,C} {B,C} {C,D} {D,E}: although those five sets would satisfy the first two conditions, they fail the third, since the minimum possible number of output sets is 3, not 5. As another example, consider the same DAG vertex set V but where the edge set E is empty. Since all vertex pairs would be unreachable, the expected output is the vertex set V itself. Answer: The problem is fixed-parameter tractable, where the parameter is the number of output sets. Define an undirected graph $G'$ with an edge $(v,w)$ for each unreachable pair of vertices in the original graph. Now you want to find a minimal edge clique cover of $G'$. The minimal edge clique cover problem is fixed-parameter tractable, where the parameter is the number of cliques. I don't know if there is a polynomial-time algorithm.
The minimal edge clique cover problem is NP-complete for general graphs. I don't know if $G'$ has any special properties that makes the minimal edge clique cover problem easy on it. (It's not necessarily chordal, unfortunately.)
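Constructing $G'$ from the DAG is the easy part; a small Python sketch using the question's example (the clique cover itself is the hard step and is omitted here):

```python
def unreachable_pairs(adj):
    # adj maps each vertex to its list of direct successors in the DAG
    verts = sorted(adj)
    reach = {v: set() for v in verts}

    def dfs(root, v):
        for w in adj[v]:
            if w not in reach[root]:
                reach[root].add(w)
                dfs(root, w)

    for v in verts:
        dfs(v, v)

    # edges of G': pairs where neither vertex reaches the other
    return {tuple(sorted((v, w)))
            for i, v in enumerate(verts)
            for w in verts[i + 1:]
            if w not in reach[v] and v not in reach[w]}

dag = {'A': ['D', 'E'], 'B': ['D', 'E'], 'C': ['E'], 'D': [], 'E': []}
print(sorted(unreachable_pairs(dag)))
# [('A', 'B'), ('A', 'C'), ('B', 'C'), ('C', 'D'), ('D', 'E')]
```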
{ "domain": "cs.stackexchange", "id": 7156, "tags": "algorithms, graphs, sets, dag" }
Air compressor that doubles as a pneumatic motor?
Question: I am working on a compressed air energy storage system. The size and weight of the system are heavily constrained; nothing should (ideally) exceed a few pounds. For this reason, I would like to store the energy (compress the gas) and extract the energy (make the gas do work) with the same mechanism in a rotary fashion. Essentially what I need is a rotary air compressor that, when air is forced through it in the opposite direction, doubles as a pneumatic motor. I'm working with fairly high pressures (I'm estimating a few hundred psi) but low volume. In my search, I have found a plethora of compact rotary air compressors and rotary pneumatic motors, but there is hardly any comment on which systems would work as both. To me, it seems very intuitive that an air compressor could have these properties, but I don't want to jump to any conclusions. I have looked at several compressors, and the most applicable to my situation seem to be: Centrifugal compressor Axial flow compressor Rotary screw compressor Rotary vane compressor The centrifugal compressor is ideal, but it seems the least likely in my eyes to be reversible, at least with any efficiency. I also looked at pneumatic motors, of which there were fewer available. Most applicable seemed to be the: Rotary vane motor Other systems, such as the pietro motor, were obviously not applicable in my light weight, compact application. The correlation between the rotary vane compressor and the rotary vane motor is promising, but I would like to know about any options I have. What rotary gas compression systems can double as motors powered by the gas they compress? EDIT The answer most likely lies in the similarity between a radial (centripetal) turbine and a centrifugal compressor. Answer: I would recommend a forward inclined centrifugal system, such as a forward curved fan.
The power input/output of any device, where fluid comes in/leaves with fluid rate $Q$ at $V$ and enters/exits at an angle $\theta$ at velocity $U$, would be: $$\mathcal P = (V-U)(1-\cos(\theta))\rho QU$$ If you have this device compress the gas, the power input runs in reverse. In both cases the angle helps. See the velocity triangle. The real heart of this comes down to putting some good valves on the openings. $U$ is a double-edged sword: while it ups your power, if $V$ isn't very high compared to $U$, nothing is really happening. Don't forget $Q$ depends on $V$ or $U$, depending on how you look at it. The key to modifying this is to throttle your inlet opening down (whichever way you run) to a very small opening to have the highest $V$ possible, while keeping the outlet carefully controlled so as not to constrict $Q$ or $U$ beyond what is necessary to keep $V/U$ decent. Perhaps using this as a first stage in a two-stage rotary compressor could also help: the second stage is a true rotary compressor to really boost the pressure, while this stage assists it by raising the pressure above atmospheric. Ultimately no device on the market will be built for this strange service, but a fairly symmetric rotary system with carefully controlled inputs should yield some decent results. I would definitely consult with a custom fan manufacturer.
{ "domain": "engineering.stackexchange", "id": 328, "tags": "mechanical-engineering, motors, compressed-air, energy-storage, compressed-gases" }
How can I use the Hokuyo UTM-30LX via wifi?
Question: Hi everybody, I'd like to know if there is a way to use the Hokuyo UTM-30LX via WiFi and, possibly, change the scan rate of the sensor. Thanks in advance! Originally posted by Marty on ROS Answers with karma: 11 on 2016-03-03 Post score: 0 Answer: What do you mean by using the Hokuyo via WiFi? If it is connected to your robot or any other remote computer running ROS, you can join the same WiFi and ROS network by setting the ROS_MASTER_URI and ROS_IP environment variables. Once you are connected, you can get scan messages or configure your Hokuyo using the dynamic reconfigure plugin in rqt. Originally posted by Akif with karma: 3561 on 2016-03-03 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by Marty on 2016-03-03: Thank you for your reply, Akif. I can access the Hokuyo via the WiFi generated by my robot. If I set the ip_address and ip_port of the urg_node, can I get the scans of the Hokuyo from a PC connected to the same WiFi?
{ "domain": "robotics.stackexchange", "id": 23984, "tags": "ros, wifi, scan" }
Why does the Parker Solar Probe slow down as the distance from the Sun increases?
Question: Why does the Parker Solar Probe slow down as the distance from the Sun increases? Image credit: Wikipedia user Phoenix777, CC BY-SA 4.0 Answer: Why does the Parker Solar Probe lose speed as the distance from the Sun increases? Because energy and angular momentum are individually conserved quantities in the two-body problem. Except for where the Parker Solar Probe has close fly-bys with Venus, the gravitational interactions between the probe and the solar system are very closely modeled as a two-body problem (the Sun and the probe), plus very small perturbations from the planets. One way to express conservation of energy in the two-body problem is the vis-viva equation, $$v^2 = \mu\left(\frac2r - \frac1a\right)$$ where $\mu = G(M+m)$ is the sum of the central body's standard gravitational parameter and that of the orbiting body, $r$ is the distance between the two bodies, $a$ is the semi-major axis length (a constant), and $v$ is the magnitude of the velocity vector. Note that the mass of the Parker Solar Probe is so much less than that of the Sun that one can drop the probe's mass from the expression $\mu = G(M+m)$, resulting in $\mu = GM_{\text{Sun}}$. Note that the only variable on the right-hand side of the vis-viva equation is the radial distance. As radial distance increases, the square of the magnitude of the velocity vector (and thus the magnitude itself) decreases. Without mathematics: conservation of energy dictates that the sum of an orbiting body's kinetic energy and gravitational potential energy must remain constant. As the orbiting body moves farther from the central body, its potential energy increases, which means its kinetic energy must correspondingly decrease. This in turn means the orbiting body's speed must decrease.
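The vis-viva relation is easy to evaluate numerically. The orbital figures below are rough, illustrative values for one of Parker Solar Probe's close orbits, not mission data:

```python
import math

MU_SUN = 1.327124e20  # Sun's standard gravitational parameter, m^3/s^2

def vis_viva(r, a):
    """Orbital speed (m/s) at distance r for an orbit with semi-major axis a."""
    return math.sqrt(MU_SUN * (2 / r - 1 / a))

# rough, illustrative perihelion/aphelion distances in meters
r_peri, r_apo = 6.9e9, 1.09e11
a = (r_peri + r_apo) / 2

v_peri = vis_viva(r_peri, a)
v_apo = vis_viva(r_apo, a)
print(f"{v_peri/1e3:.0f} km/s at perihelion, {v_apo/1e3:.0f} km/s at aphelion")
```

Only $r$ varies on the right-hand side, so the speed falls monotonically as the probe climbs away from the Sun.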
{ "domain": "astronomy.stackexchange", "id": 4494, "tags": "orbital-mechanics, space-probe, parker-solar-probe" }
Does the EMF induced in an inductor due to mutual inductance equal the total EMF?
Question: For an inductor in a circuit without a power source, does the induced emf found by using the mutual inductance value include self-inductance, or is the self-induced emf considered separately? From Faraday's law I know that the induced emf is found by considering the total flux, and that this is equal to the self-induced emf in the case where the inductor is connected to an external power source. For the emf induced by mutual inductance, the flux through the inductor is considered to be the flux due to the external inductor, but not the total flux, which leads me to think that there should be an added term in order to calculate the total induced emf. However, I do realize this could lead to double-counting the flux produced by the self-induced emf, since this flux also affects the current in the inductor producing the external flux, which is already included in the calculation. But wouldn't this violate Faraday's law? Answer: (a) The total emf in one coil (2) is $$\mathscr E_2 = -\frac{d\phi_\text{linked with 2}}{dt}=-L_2\frac{dI_2}{dt}-M\frac{dI_1}{dt}.$$ We can, if we wish, regard this as the sum of two emfs. (b) You are quite right that the changing current induced in 2 produces changing flux, some of which is linked with coil 1 and affects the current in it, which in turn affects the emf in coil 2, according to the equation above. But this is not double-counting: we simply have to accept that $I_1$ and $I_2$ are not independent of each other.
{ "domain": "physics.stackexchange", "id": 90480, "tags": "electromagnetism, electric-circuits, electromagnetic-induction, inductance" }
100-doors puzzle
Question: Here is a very simple solution to the 100 doors challenge: You are in a hotel with 100 doors. Initially every door is closed. On the first round, you change the state of every door (if it is open, close it. If it is closed, open it). On the second round, you change the state of every second door. On the third round, every third door, etc. Find the state of all 100 doors after n rounds """Check which doors are open and which are closed after n rounds""" def check_doors_round(n): """Check which door is open after n rounds""" doors = [False] * 100 for step in range(n): for (index, door) in enumerate(doors): if (index+1) % (step+1) == 0: doors[index] = not door print(doors) if __name__ == "__main__": check_doors_round(100) Answer: It would be better if you merged (index+1) % (step+1) == 0 into the preceding for loop. Whilst it's easy to understand what it means, it's even easier to understand what range(start, stop, step) means. You should return doors and print outside the function. I'd prefer to be able to specify how many doors to use. This can be a default argument. def check_doors_round(n, doors_=100): doors = [False] * doors_ for step in range(n): for index in range(step, doors_, step + 1): doors[index] = not doors[index] return doors if __name__ == "__main__": print(check_doors_round(100))
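As a sanity check on either version: door $k$ is toggled once for each divisor of $k$, so after 100 rounds exactly the perfect-square doors remain open (only squares have an odd number of divisors). A runnable sketch of the reviewed version (parameter renamed for clarity):

```python
def check_doors_round(n, num_doors=100):
    """Toggle every (step+1)-th door on round step, as in the reviewed code."""
    doors = [False] * num_doors
    for step in range(n):
        for index in range(step, num_doors, step + 1):
            doors[index] = not doors[index]
    return doors

open_doors = [i + 1 for i, is_open in enumerate(check_doors_round(100)) if is_open]
print(open_doors)  # [1, 4, 9, 16, 25, 36, 49, 64, 81, 100]
```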
{ "domain": "codereview.stackexchange", "id": 34921, "tags": "python, python-3.x, programming-challenge" }
Chromatic number of a particular graph
Question: Assume I have a parametrized graph. The parameters are two integers $x$ and $y<x$. Let $S(x)=\{1, \ldots, x\}$. The vertices of the graph are all the subsets of $S(x)$ of size $y$. Two vertices share an edge if their intersection is empty. I need to find the chromatic number of this graph. Is this problem NP-hard? Thank you in advance for your help. Answer: See http://en.wikipedia.org/wiki/Kneser_graph
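For context, the graph described is the Kneser graph $K(x, y)$, and by Lovász's theorem its chromatic number is $x - 2y + 2$ whenever $x \ge 2y$, so no hard computation is needed. A brute-force sketch (exponential, suitable only for tiny cases) that confirms the formula:

```python
from itertools import combinations, product

def kneser_chromatic(x, y):
    """Smallest k such that the Kneser graph K(x, y) is properly k-colorable.

    Exhaustive search over all colorings; only feasible for very small x, y."""
    verts = list(combinations(range(1, x + 1), y))
    edges = [(u, v) for u, v in combinations(verts, 2) if not set(u) & set(v)]
    for k in range(1, len(verts) + 1):
        for colors in product(range(k), repeat=len(verts)):
            c = dict(zip(verts, colors))
            if all(c[u] != c[v] for u, v in edges):
                return k

# K(5, 2) is the Petersen graph; Lovász's formula predicts 5 - 4 + 2 = 3.
print(kneser_chromatic(4, 2), kneser_chromatic(5, 2))  # 2 3
```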
{ "domain": "cstheory.stackexchange", "id": 1232, "tags": "cc.complexity-theory, np-hardness, graph-colouring" }
Potential in Quantum field theory
Question: I studied free particle fields like the Dirac field and the Klein-Gordon field. My question is about interactions. How can I put a potential term in the Lagrangian density? $\mathcal{L} =\frac{1}{2}\partial_\mu \phi \partial^\mu \phi - \frac{m^2}{2}\phi^2$ Answer: Naively, the answer, as was already mentioned, is adding terms of higher powers in the field to the Lagrangian. This means that the equations of motion, in the case of a simple scalar field given by the Klein-Gordon equation, acquire additional terms. So quite generally, one can write $$\mathcal{L}=\frac12\partial_\mu\phi\partial^\mu\phi-\frac{m^2}{2}\phi^2+\sum_{n\gt2}\frac{g_n}{n!}\phi^n,$$ which results in the equation of motion being $$(\partial_\mu\partial^\mu+m^2)\phi=\sum_{n\gt2}\frac{g_n}{(n-1)!}\phi^{n-1}.$$ A natural question arises: just because I can add all these terms, is doing so justified? Are there any restrictions on the number and form of terms I can add? It turns out there are. Some are easy to see, some are more subtle and complicated. I will mention just a few key points; for details, simply refer to standard literature on quantum field theory. Stability - As the Lagrangian basically gives you the energy of a system, its precise form can show pathologies from the beginning. Take for example a term of third order: $\phi^3$. If we allow the field to take on negative values, the energy can become arbitrarily negative. This is something one does not want, therefore we drop terms with this property. Dimensional analysis - Since the spacetime integral of the Lagrangian density defines the action, the prefactors of the interaction terms have to be fixed in such a way that the overall dimension of the action, $[\hbar]$, is preserved. One can link the dimensionality of these new terms to some "fundamental energy scale" (Planck energy) in such a way that it is possible to argue that at low energies, terms of powers beyond a certain value are negligible.
In this context, one speaks of "relevant", "irrelevant" and "marginal" operators. See David Tong's lectures for an explanation. Renormalization - In order to have the theory produce meaningful results which can be both adjusted to and compared with experiment, one has to modify the original Lagrangian, i.e. allow both the parameters of the theory (in our case $m$ and the coupling constants $g_n$) and the fields to change as a function of the energy scale. This leads to the concept of the renormalization group, which allows us to determine the way in which these parameters change. The most famous example is the strong interaction, where we observe that the coupling constant is large at low energies and small at high energies. This phenomenon, referred to as asymptotic freedom, is captured mathematically by the renormalization group equations of QCD, but it can also be found in simple scalar field theory: ignoring the aforementioned instability, a theory which is third order in the fields also shows asymptotic freedom. I hope this gives you a vague idea of what might be involved. I can only recommend books like Peskin & Schroeder and Srednicki, which you should definitely study if you want to develop a thorough understanding of the subject.
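As a concrete check of the general equation of motion above, keep only the cubic term ($n=3$) and apply the Euler-Lagrange equation term by term:

```latex
% Euler-Lagrange: \partial_\mu \frac{\partial\mathcal{L}}{\partial(\partial_\mu\phi)}
%                 - \frac{\partial\mathcal{L}}{\partial\phi} = 0
\begin{align}
\mathcal{L} &= \tfrac{1}{2}\partial_\mu\phi\,\partial^\mu\phi
             - \tfrac{m^2}{2}\phi^2 + \tfrac{g_3}{3!}\phi^3 \\
\frac{\partial\mathcal{L}}{\partial(\partial_\mu\phi)} &= \partial^\mu\phi,
\qquad
\frac{\partial\mathcal{L}}{\partial\phi} = -m^2\phi + \tfrac{g_3}{2}\phi^2 \\
\Rightarrow\quad (\partial_\mu\partial^\mu + m^2)\,\phi
  &= \tfrac{g_3}{2}\,\phi^2 = \tfrac{g_3}{(3-1)!}\,\phi^{3-1}
\end{align}
```

in agreement with the $n=3$ term of the general formula.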
{ "domain": "physics.stackexchange", "id": 10149, "tags": "quantum-mechanics, quantum-field-theory, interactions" }
How to make Train-Test split on multivariate timeseries data
Question: I am building a model for the purpose of forecasting when someone is going into a stressful state. I am using the WESAD dataset which has electrodermal activity (EDA) data on 11 subjects. I take this and use Neurokit2 to clean and extract features from the raw EDA data. The end result is that I have a list that stores each subject in the original dataset with 3 features and 1 label. The label is binary [0,1] and the features are normalized. I only have experience running a time-series model using a single factor and single subject. How would I correctly do the train-test split for multiple features on multiple subjects? Below is my code to create data generators for neural networks on one feature and one subject. Should I loop through each subject and do the same process as below? If I do as I suggest, how would I put this into an LSTM model? from keras.preprocessing.sequence import TimeseriesGenerator # Define the batch size batch_size = 64 # Define the number of features and targets num_features = 1 num_targets = 1 # Random State random_state = 42 # Train Test Split from sklearn.model_selection import train_test_split # Validation split X_dat, X_val, y_dat, y_val = train_test_split(subsampled_data, delayed_labels, test_size = 0.2, random_state=random_state) # Train test split X_train, X_test, y_train, y_test = train_test_split(X_dat, y_dat, test_size = 0.2, random_state = random_state) # Normalize the data from sklearn.preprocessing import StandardScaler # create the StandardScaler object scaler = StandardScaler() # fit the scaler on the training data X_train_scaled = scaler.fit_transform(X_train.values.reshape(-1,1)) # transform the validation data X_val_scaled = scaler.transform(X_val.values.reshape(-1,1)) # transform the test data X_test_scaled = scaler.transform(X_test.values.reshape(-1,1)) # TimeSeriesGenerator parameters shuffle = True # Data Generator train_data_gen = TimeseriesGenerator(X_train_scaled, y_train, length=sequence_length,
batch_size=batch_size) val_data_gen = TimeseriesGenerator(X_val_scaled, y_val, length=sequence_length, batch_size=batch_size) test_data_gen = TimeseriesGenerator(X_test_scaled, y_test, length=sequence_length, batch_size=batch_size) Answer: Scikit-learn's model_selection.TimeSeriesSplit is designed to appropriately split time series data. The result will include indices that can be used to reference the features, no matter how many features there are.
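To make the suggestion concrete without pulling in scikit-learn, here is a minimal, dependency-free sketch of the expanding-window behavior that TimeSeriesSplit implements (the real class has additional options; the fold sizing here is a simplification):

```python
def time_series_splits(n_samples, n_splits):
    """Yield (train_indices, test_indices) pairs, expanding-window style:
    each test fold strictly follows its training window, so no future
    samples leak into training (unlike a shuffled train_test_split)."""
    fold_size = n_samples // (n_splits + 1)
    for i in range(1, n_splits + 1):
        train_idx = list(range(0, i * fold_size))
        test_idx = list(range(i * fold_size, min((i + 1) * fold_size, n_samples)))
        yield train_idx, test_idx

for fold, (train, test) in enumerate(time_series_splits(100, 4)):
    assert max(train) < min(test)  # training always precedes testing
    print(f"fold {fold}: train={len(train)} rows, test={len(test)} rows")
```

The same index pairs work no matter how many feature columns the data has, which is the point made above.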
{ "domain": "datascience.stackexchange", "id": 11630, "tags": "tensorflow, time-series" }
How does L1 regularization make low-value features more zero than L2?
Question: Below are the formulas for L1 and L2 regularization. Many experts say that L1 regularization drives low-value features to zero because of its constant penalty. However, I think that L2 regularization could also produce zero values. Could you please explain why L1 has a stronger tendency to produce zero values? (It would be great if you could explain the reason using the formulas, like the equations above!) Answer: The penalty on coefficients of L1 is more aggressive on values close to zero than it is for L2. With L1, when a weight value comes closer to zero, it tends to get even closer, because of the $\epsilon\lambda$ penalty which stays constant. With L2, the $\epsilon\lambda w^{(t)}$ term gets smaller, so the regularization gets smaller as a weight comes closer to zero, with the update depending only on $\epsilon\Delta E(w)$. So the main argument for this, to my knowledge, is that $$ \lim_{w\rightarrow0}\lambda = \lambda\\ \lim_{w\rightarrow0}\lambda w = 0 $$
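The difference is easy to see with a bare-bones weight-shrinkage sketch (the data-loss gradient is omitted so only the regularization pull is visible; the step sizes are made-up illustration values):

```python
def l1_prox_step(w, lr, lam):
    # Soft-thresholding: a constant-size pull of lr*lam toward zero,
    # clipped so the weight lands exactly on zero instead of overshooting.
    sign = 1.0 if w > 0 else -1.0
    return sign * max(abs(w) - lr * lam, 0.0)

def l2_step(w, lr, lam):
    # Gradient step on (lam/2)*w^2: the pull is proportional to w,
    # so it fades away as w approaches zero and never quite reaches it.
    return w - lr * lam * w

w1 = w2 = 0.5
for _ in range(200):
    w1 = l1_prox_step(w1, lr=0.01, lam=1.0)
    w2 = l2_step(w2, lr=0.01, lam=1.0)
print(w1, w2)  # w1 is exactly 0.0; w2 is still clearly nonzero
```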
{ "domain": "datascience.stackexchange", "id": 6686, "tags": "neural-network, deep-learning, regularization" }
rosbuild error during ros install mac osx 10.6.6
Question: on a Mac OSX 10.6.6, I had the following error for ros install command: rosinstall ~/ros "http://packages.ros.org/cgi-bin/gen_rosinstall.py?rosdistro=diamondback&variant=ros-full&overlay=no" what went wrong? [ rosmake ] rosdep successfully installed all system dependencies [ rosmake ] Starting >>> tools/rospack [ rosmake ] Finished <<< tools/rospack [rosmake-0] Starting >>> roslib [ make ] [ rosmake ] All 20 linesoslib: 0.7 sec ] [ 1 Active 1/68 Complete ] {------------------------------------------------------------------------------- mkdir -p bin cd build && cmake -Wdev -DCMAKE_TOOLCHAIN_FILE=rospack find rosbuild/rostoolchain.cmake .. [rosbuild] Building package roslib [rosbuild] Including /Users/paulogoncalves/ros/ros_comm/clients/roslisp/cmake/roslisp.cmake [rosbuild] Including /Users/paulogoncalves/ros/ros_comm/clients/rospy/cmake/rospy.cmake [rosbuild] Including /Users/paulogoncalves/ros/ros_comm/clients/cpp/roscpp/cmake/roscpp.cmake Traceback (most recent call last): File "/Users/paulogoncalves/ros/ros/bin/rosboost-cfg", line 35, in rosboost_cfg.main() File "/Users/paulogoncalves/ros/ros/tools/rosboost_cfg/src/rosboost_cfg/rosboost_cfg.py", line 328, in main raise BoostError("Cannot find boost in any of %s"%search_paths(options.sysroot)) rosboost_cfg.rosboost_cfg.BoostError: "Cannot find boost in any of [('/usr', True), ('/usr/local', True)]" CMake Error at /Users/paulogoncalves/ros/ros/core/rosbuild/public.cmake:848 (message): rosboost-cfg --include_dirs failed Call Stack (most recent call first): CMakeLists.txt:5 (rosbuild_add_boost_directories) -- Configuring incomplete, errors occurred! 
[ rosmake ] Output from build of package roslib written to: [ rosmake ] /Users/paulogoncalves/.ros/rosmake/rosmake_output-20110317-103414/roslib/build_output.log [rosmake-0] Finished <<< roslib [FAIL] [ 0.78 seconds ] [rosmake-1] Starting >>> rosemacs [ make ] [rosmake-1] Finished <<< rosemacs No Makefile in package rosemacs [rosmake-2] Starting >>> rosboost_cfg [ make ] [rosmake-2] Finished <<< rosboost_cfg No Makefile in package rosboost_cfg [ rosmake ] Halting due to failure in package roslib. [ rosmake ] Waiting for other threads to complete. [rosmake-3] Starting >>> rosbash [ make ] [rosmake-3] Finished <<< rosbash No Makefile in package rosbash [ rosmake ] Results: [ rosmake ] Built 5 packages with 1 failures. [ rosmake ] Summary output to directory [ rosmake ] /Users/paulogoncalves/.ros/rosmake/rosmake_output-20110317-103414 Traceback (most recent call last): File "/usr/local/bin/rosinstall", line 5, in pkg_resources.run_script('rosinstall==0.5.16', 'rosinstall') File "/System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python/pkg_resources.py", line 442, in run_script self.require(requires)[0].run_script(script_name, ns) File "/System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python/pkg_resources.py", line 1167, in run_script exec script_code in namespace, namespace File "/Library/Python/2.6/site-packages/rosinstall-0.5.16-py2.6.egg/EGG-INFO/scripts/rosinstall", line 556, in File "/Library/Python/2.6/site-packages/rosinstall-0.5.16-py2.6.egg/EGG-INFO/scripts/rosinstall", line 547, in rosinstall_main File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/subprocess.py", line 462, in check_call raise CalledProcessError(retcode, cmd) subprocess.CalledProcessError: Command 'source /Users/paulogoncalves/ros/setup.sh && rosmake ros ros_comm --rosdep-install' returned non-zero exit status 1 Originally posted by paulo on ROS Answers with karma: 11 on 2011-03-16 Post score: 1 Original comments Comment by tfoote 
on 2011-03-17: Duplicate of http://answers.ros.org/question/401/help-on-install-ros-in-a-macbook-with-osx-1066 Answer: Duplicate of http://answers.ros.org/question/401/help-on-install-ros-in-a-macbook-with-osx-1066 Originally posted by Wim with karma: 2915 on 2011-08-02 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 5105, "tags": "rosbuild, rosinstall, macos-snowleopard, osx" }
Can the inflaton be detected? Does it interact with the other fields?
Question: In a recent article it is claimed that dark energy can actually be a new manifestation of the inflaton field at electroweak scales. This subject is pretty far from my area of expertise (if there is any). So my questions are: The inflaton field is supposed to have become weaker, but I don't think anybody has claimed it has ceased to exist. If it happens to be the same as dark energy, it not only will still be out there, but my guess is that it will be at detectable energies. Is this correct? I could not find whether the inflaton is supposed to interact with any other known fields. If so, is there any reason it could not be detected in particle physics experiments? Could its detection (and measured characteristics) potentially resolve the nature of dark energy? Answer: The inflaton certainly interacts with SM fields. During inflation, the energy density of the Universe is dominated by the potential energy of the inflaton and the Universe cools. At the end of inflation, the inflaton should decay to ordinary particles (electrons, photons, etc.) in a process known as reheating. After reheating, the big bang begins in earnest. Inflation is an "add-on" that improves the big bang theory. The inflaton is a quantum field - i.e. a field of particles, though many problems and calculations in inflation are entirely classical. This is why you hear that the field becomes "weaker." In principle, an excitation of the inflaton field could be detected in laboratory experiments, similar to the discovery of the Higgs boson two years ago at the LHC. In most models, however, the inflaton is very massive, at least $10^{10}$ times heavier than the Higgs boson, and we don't have the energy to excite the field. There are, however, models in which the inflaton is light, such as Higgs inflation in which the inflaton is the Higgs, but these models might have been ruled out by the BICEP experiment.
{ "domain": "physics.stackexchange", "id": 18118, "tags": "cosmological-inflation, dark-energy" }
Asymptotic estimate for $\sum_{k=1}^{N} {\frac{1}{k^2 H_k}}$
Question: I am working on finding the asymptotic estimate for $$ \sum_{k=1}^{N} {\frac{1}{k^2 H_k}}.$$ All I know is that the series converges, because $${\frac{1}{k^2 H_k}}<\frac{1}{k^2}$$ and $\sum\frac{1}{k^2}$ converges. Am I right to conclude that the asymptotic estimate for this is $O(\frac{1}{k^2})$? Answer: Since the infinite series $\sum_{k=1}^\infty \frac{1}{k^2H_k}$ converges, an asymptotic estimate for your sum is simply $\Theta(1)$. You can obtain a better estimate by estimating the tail: $$ \sum_{k=1}^n \frac{1}{k^2H_k} = \sum_{k=1}^\infty \frac{1}{k^2H_k} - \sum_{k=n+1}^\infty \frac{1}{k^2H_k}. $$ We can estimate the tail, using $kH_k \geq k+1$ for $k \geq 2$, by $$ \sum_{k=n+1}^\infty \frac{1}{k^2H_k} \leq \sum_{k=n+1}^\infty \frac{1}{k(k+1)} = \frac{1}{n}, $$ and so $$ \sum_{k=1}^n \frac{1}{k^2 H_k} = C - O\left(\frac{1}{n}\right), $$ where $C = \sum_{k=1}^\infty 1/(k^2 H_k) \approx 1.33275$.
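The constant $C$ is easy to check numerically; by the tail bound above, the partial sum below sits within $1/n = 5\cdot10^{-6}$ of $C$:

```python
# Accumulate the harmonic numbers H_k and the partial sum of 1/(k^2 H_k)
# up to n = 200000 in a single pass.
H = 0.0
partial = 0.0
for k in range(1, 200001):
    H += 1.0 / k
    partial += 1.0 / (k * k * H)
print(partial)  # close to the quoted C ≈ 1.33275
```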
{ "domain": "cs.stackexchange", "id": 7786, "tags": "asymptotics" }
Calculate azimuth and elevation angles
Question: Suppose I have two frames: /parent and /child. I can use tf::lookupTransform to find the transformation between two frames. Now, I want to find the azimuth and elevation angles in spherical coordinates between two frames. Does ROS have any available function to do something like that? Thanks Originally posted by tn0432 on ROS Answers with karma: 60 on 2016-03-21 Post score: 0 Original comments Comment by al-dev on 2016-03-21: How would you define the azimuth and elevation angles "between two frames"? They give you 2DOF and a generic 3D transformation would need 6DOF... Answer: No, to my knowledge ROS doesn't make those methods available. You need to better define what between two frames means, though. Assuming you can treat Frame1 as the origin (that is, position (x,y,z)=(0,0,0) and orientation (x,y,z,w)=(0,0,0,1)), calculating the azimuth/elevation to another frame can be accomplished using only Frame2's relative-to-Frame1 (x2,y2,z2) location: magnitude = sqrt((x2*x2) + (y2*y2) + (z2*z2)); azimuth = atan2(y2, x2); elevation = asin(z2 / magnitude); Note that angles should be normalized to -M_PI <= val < M_PI. If you're going to be working with spherical coordinates, two sites I've found invaluable are Aviation Formulary and Calculate distance, bearing and more between Latitude/Longitude points. Originally posted by kramer with karma: 1470 on 2016-03-23 This answer was ACCEPTED on the original site Post score: 1
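The answer's pseudocode translates directly; here is a small self-contained sketch (plain Python rather than C++, with a test point chosen so both angles come out to 45 degrees):

```python
import math

def azimuth_elevation(x, y, z):
    """Spherical azimuth/elevation of the point (x, y, z) seen from the origin."""
    magnitude = math.sqrt(x * x + y * y + z * z)
    azimuth = math.atan2(y, x)             # atan2 already returns values in (-pi, pi]
    elevation = math.asin(z / magnitude)   # in [-pi/2, pi/2]
    return azimuth, elevation

az, el = azimuth_elevation(1.0, 1.0, math.sqrt(2.0))
print(math.degrees(az), math.degrees(el))  # both 45 degrees
```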
{ "domain": "robotics.stackexchange", "id": 24199, "tags": "transform" }
Stability of solar system
Question: My question is simple: Is the Solar system stable? You can see this Wikipedia page. Edit: Sorry, I think my question is more about the mathematics and classical mechanics of planets on billion-year timescales than about astronomy. But I think it is our good fortune that Earth wasn't doomed during the past billions of years, and it's possible that we will have fewer than 8 planets before the Sun dies. Answer: The wikipedia page you linked to tells you that the solar system is gravitationally "chaotic", in part because the mass of the sun is not fixed over time. But even more simply than that, focusing just on the gravity (ignoring loss of stellar mass, etc.), the solar system is an N-body problem. We have 8 planets, a sun, and millions of asteroids, comets, and who knows how many individual particles gravitationally bound to our sun (plus ones that aren't and are just passing through the neighborhood, so to speak). When you have more than 2 bodies, the solutions to the N-body problem are unstable. What this means is that, say we describe the N-body problem with data $D$ (the "initial conditions", or a perfect description of the state of the system at some specific point in time). With a given complete data set the (Newtonian) gravitational evolution of the system is completely determined (but so difficult to do we can only approximate it). What instability means here is that if we have some other data set $D'$ that is only a little bit different from $D$, then the differences between the evolution from $D$ and $D'$ will become exponentially large over long enough time scales. So what may seem like minor differences now will result in radically different-looking solar systems in the long run. Since all of our observations can never give exact values, but only a range of values, there is necessarily a bit of uncertainty in what the exact gravitational state of our solar system is.
We have very poor data on the exact asteroid and comet content of our solar system, and even planetary data has significant error margins. All of this means that there are lots of justifiable picks for the data $D$, each differing by a small amount from each. But due to the instability, eventually these data will produce radically different futures from each other. Currently we can only predict the solar system's evolution up to a few million years or so (the exact value stated can vary wildly depending on how you opt to define and compute the Lyapunov time). After that the evolutionary tracks become so disparate we can't really say we're predicting anything other than "it'll definitely do something". One way or another, it is currently impossible for us to make any clear assertions about what the solar system will look like on a timescale of billions of years. Maybe all 8 planets will still be there; maybe their orbits will be very similar, but maybe they'll have much different orbits; maybe several planets will have been ejected from the solar system. At best we can observe a few things that lead certain objects to be most likely to undergo significant alteration. For example, Jupiter and Mercury appear to have a certain orbital resonance right now which could ultimately lead Mercury to undergo a significant orbit change. This may ultimately cause it to collide with another planet, or the sun, or be ejected from the solar system entirely. But maybe it won't. It's hard to say.
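The sensitivity to initial data can be illustrated with a toy planar three-body integration. Everything below is assumed for illustration (unit-free masses, a crude symplectic-Euler integrator); the growth of the separation mixes genuine sensitivity with simple Kepler phase shear and integrator error, but it shows why nearby data sets $D$ and $D'$ drift apart:

```python
import math

def step(bodies, dt=0.001, G=1.0):
    """One semi-implicit Euler step; each body is a list [x, y, vx, vy, mass]."""
    for i, b in enumerate(bodies):
        ax = ay = 0.0
        for j, o in enumerate(bodies):
            if i == j:
                continue
            dx, dy = o[0] - b[0], o[1] - b[1]
            r3 = (dx * dx + dy * dy) ** 1.5
            ax += G * o[4] * dx / r3
            ay += G * o[4] * dy / r3
        b[2] += ax * dt
        b[3] += ay * dt
    for b in bodies:
        b[0] += b[2] * dt
        b[1] += b[3] * dt

def run(perturb):
    """Integrate a 'sun' plus two light planets; perturb the first planet's x."""
    bodies = [
        [0.0, 0.0, 0.0, 0.0, 1000.0],                        # sun
        [1.0 + perturb, 0.0, 0.0, math.sqrt(1000.0), 1e-3],  # planet 1
        [1.5, 0.0, 0.0, math.sqrt(1000.0 / 1.5), 1e-3],      # planet 2
    ]
    for _ in range(5000):
        step(bodies)
    return bodies[1][0], bodies[1][1]

x0, y0 = run(0.0)
x1, y1 = run(1e-6)
print(math.hypot(x1 - x0, y1 - y0))  # grows well beyond the 1e-6 offset
```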
{ "domain": "astronomy.stackexchange", "id": 966, "tags": "solar-system, gravity, solar-system-evolution" }
Recursively create a TreeView for file paths using C# and WPF
Question: I'm building a program that allows the user to monitor files on their local system. To display the files, I created a TreeView using System.Windows.Controls from the WPF framework. The files are added recursively. What suggestions would you give to improve this code for cleanliness, structure, or efficiency? Could I have made better use of the C# and WPF libraries? Could this code be made more scalable, in the event of having to display hundreds or thousands of files? Any guidance from the Code Review community would be welcomed and appreciated. using System.Collections.Generic; using System.IO; using System.Linq; using System.Windows.Controls; namespace WpfApp1 { /// <summary> /// A class for creating a Windows File Explorer tree view. /// </summary> public class FileExplorerTreeView { /// <summary> /// The public <see cref="TreeView"/> property to bind to the UI. /// </summary> public TreeView FileTree { get; } /// <summary> /// The <see cref="FileExplorerTreeView"/> class constructor. /// </summary> public FileExplorerTreeView() { FileTree = new TreeView(); } /// <summary> /// Add a single path to the <see cref="TreeView"/>. /// </summary> public void AddPath(string path) { // Split the file path components then add them to a Queue. var pathElements = new Queue<string>(path.Split(Path.DirectorySeparatorChar).ToList()); AddNodes(pathElements, FileTree.Items); } /// <summary> /// Add multiple paths to the <see cref="TreeView"/>. /// </summary> public void AddPaths(IEnumerable<string> paths) { foreach (var path in paths) AddPath(path); } // Add each file path element recursively to the TreeView. private void AddNodes(Queue<string> pathElements, ItemCollection childItems) { if (pathElements.Count == 0) return; var first = new TreeViewItem(); TreeViewItem?
match; first.Header = pathElements.Dequeue(); if (HasItem(childItems, first, out match)) { AddNodes(pathElements, match.Items); } else { childItems.Add(first); AddNodes(pathElements, first.Items); } } // If the TreeViewItem is contained in childItems, return true and return the "item" object as an out // parameter. private bool HasItem(ItemCollection childItems, TreeViewItem item, out TreeViewItem? match) { foreach (var childItem in childItems) { var cast = childItem as TreeViewItem; if (cast == null) { match = null; return false; } if (item.Header.Equals(cast.Header)) { match = cast; return true; } } match = null; return false; } } } Here is the MainWindow class along with some client code to test FileExplorerTreeView with: using System.Collections.Generic; using System.Windows; namespace WpfApp1 { /// <summary> /// Interaction logic for MainWindow.xaml /// </summary> public partial class MainWindow : Window { public MainWindow() { InitializeComponent(); var paths = new List<string>() { "C:\\DIR\\ANOTHERDIR\\FOO.TXT", "C:\\DIR\\FOO.TXT", "C:\\ANOTHERDIR\\FOO.TXT", "C:\\DIR\\ANOTHERDIR\\BAR.TXT", "C:\\DIR\\DIR\\DIR\\FOO.TXT", }; var tree = new FileExplorerTreeView(); tree.AddPaths(paths); DataContext = tree; } } } And here is the XAML: <Window x:Class="WpfApp1.MainWindow" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:d="http://schemas.microsoft.com/expression/blend/2008" xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006" xmlns:local="clr-namespace:WpfApp1" mc:Ignorable="d" Title="MainWindow" Height="450" Width="800"> <Grid> <TreeView x:Name="FileTreeDisplayed" ItemsSource="{Binding FileTree.Items}"/> </Grid> </Window> Answer: Comments I am not a really big fan of adding documentation comments to trivial things (like this is the constructor). On the other hand I do appreciate consistency (every public member is annotated). 
I would like to encourage you try to avoid echoing comments (the comment tells exactly the same thing as the code), like this // Split the file path components then add them to a Queue. var pathElements = new Queue<string>(path.Split(Path.DirectorySeparatorChar).ToList()); I try to follow this guideline: use comments to document the whys and why nots. In other words, you should capture the reasoning why did you prefer this solution over other viable alternatives. This information is known at the time of code writing but it will be lost if it is not captured as a comment. This could help future code maintainers to better understand your intent. Readonly At the first glace this code seems to be fine to expose the TreeView as read-only: public TreeView FileTree { get; } The problem with this: its Items collection is not exposed as read-only. In other words, both the FileExplorerTreeView class and its consumer(s) can manipulate concurrently the Items collection. Without proper locking this can cause problems. Ease of usage The AddPaths provides a way to specify multiple paths at the same time. But as you have showed in the MainWindow code you had to create a List and pass that to the AddPaths methods. In C# you could use the params keyword to ease the usage of the method: void AddPaths(params string[] paths) ... tree.AddPaths("C:\\DIR\\ANOTHERDIR\\FOO.TXT", ..., "C:\\DIR\\DIR\\DIR\\FOO.TXT"); Naming convention Your HasItem method follows the tester-doer pattern (sometimes referred as error hiding pattern). Most of the cases in .NET these methods are prefixed with Try, for example TryParse, TryCreate, TryAdd or TryGetValue. The TryGetMatch IMHO would be a more convenient name for this method. Single return In your code you have used 3 return statements. 2 out of 3 are used as early exits. As an alternative you could take advantage of break: private bool TryGetMatch(ItemCollection childItems, TreeViewItem item, out TreeViewItem? 
match) { bool result = false; match = default; foreach (var childItem in childItems) { var cast = childItem as TreeViewItem; if (cast == null) break; if (item.Header.Equals(cast.Header)) { match = cast; result = true; break; } } return result; } Depending on your C# version you could take advantage of the is pattern matching: private bool TryGetMatch(ItemCollection childItems, TreeViewItem item, out TreeViewItem? match) { bool result = false; match = default; foreach (var childItem in childItems) { if (childItem is TreeViewItem viewItem) { if (!item.Header.Equals(viewItem.Header)) continue; match = viewItem; result = true; } break; } return result; } UPDATE #1 You pointed out that the TreeView is readonly, but the Items collection is not. Is it even possible to make Items readonly since this is a built-in class? Well, you don't need to expose the entire TreeView component. It is enough to expose only its Items property. This property's data type is ItemCollection, which implements IList. So, exposing the ItemCollection as read-only prevents you from overwriting the property with another ItemCollection instance, but it still allows you to add/remove items from the collection. private TreeView FileTree { get; } public ItemCollection Items => FileTree.Items; So, what can you do? Fortunately, it is enough (in this particular case) to use AsEnumerable to prevent modification of the collection. private TreeView FileTree { get; } public IEnumerable Items => FileTree.Items.AsEnumerable(); <TreeView x:Name="FileTreeDisplayed" ItemsSource="{Binding Items}"/> Are there any notable benefits to using the is pattern matching over the as operator? The as operator can be used only with reference types and nullable types. So, for instance, the following code won't even compile: object o = 1.0; var x = o as double; The is operator and expression do not have this constraint.
var x = o is double; //operator if(o is double y) //expression { } Another notable difference is that you can define a lot of patterns with is, whereas as can be used only with types. Here are some examples: object o = TimeSpan.FromSeconds(1); if (o is not null) { //... } if (o is TimeSpan { Days: 0, TotalSeconds: > 0 } ts) { // You can use o and ts } if (o is TimeSpan { Minutes: 0 or 1 } ts) { // You can use o and ts } if (o is TimeSpan { TotalMilliseconds: var milliseconds } ts) { // You can use o, ts and milliseconds } ...
{ "domain": "codereview.stackexchange", "id": 45103, "tags": "c#, .net, recursion, file-system, wpf" }
How can $\nabla \hat{v}\left(S_{t}, \mathbf{w}_{t}\right)$ be 1 for $S_{t}$ 's group's component and 0 for the other components?
Question: In Sutton's RL: An Introduction, 2nd edition, it says the following (page 203): State aggregation is a simple form of generalizing function approximation in which states are grouped together, with one estimated value (one component of the weight vector w) for each group. The value of a state is estimated as its group's component, and when the state is updated, that component alone is updated. State aggregation is a special case of SGD $(9.7)$ in which the gradient, $\nabla \hat{v}\left(S_{t}, \mathbf{w}_{t}\right)$, is 1 for $S_{t}$ 's group's component and 0 for the other components. and follows up with a theoretical example. My question is, imagining my original state space is $[1,100000]$, why can't I just say that the new state space is $[1, 1000]$ where each of these numbers corresponds to an interval: so 1 to $[1,100]$, 2 to $[101,200]$, 3 to $[201,300]$, and so on, and then just apply the normal TD(0) formula, instead of using the weights? My main problem with their approach is the last sentence: in which the gradient, $\nabla \hat{v}\left(S_{t}, \mathbf{w}_{t}\right)$, is 1 for $S_{t}$ 's group's component and 0 for the other components. If $\hat{v}\left(S_{t}, \mathbf{w}_{t}\right)$ is the linear combination of a feature vector and the weights (w), how can the gradient of that function be 1 for a state and 0 for the others? There are not as many $w$'s as there are states or groups of states. Let's say that my feature vector is 5 numbers between 0 and 100. For example, $(55,23,11,44,99)$ for a specific state, how do you choose a specific group of states for state aggregation? Maybe what I'm not understanding is the feature vector. If we have a state space that is $[1, 10000]$ as in the random walk, what can be the feature vector? Does it have the same size as the number of groups after state aggregation?
Answer: Using the book's random walk example, if you have a state space with $1000$ states and you divide them into $10$ groups, each of those groups will have $100$ neighboring states. The function for approximation will be \begin{equation} v(\mathbf w) = x_1w_1 + x_2w_2 + ... + x_{10}w_{10} \end{equation} Now, when you pick a state, the feature vector will be a one-hot encoded vector with a $1$ placed in the position corresponding to the group the chosen state belongs to. For example, if you have state $990$, that state belongs to group $10$, so the feature vector will be \begin{equation} \mathbf x_t = [0, 0, ..., 0, 1]^T \end{equation} What this means is that the only weight that will be updated is weight $w_{10}$, because the gradients for all other weights will be $0$ (that's because the features for those weights are $0$).
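The one-hot construction and the resulting single-component update can be sketched in a few lines (group numbering as in the answer, with group $10$ at index $9$; the step size and target are illustrative):

```python
def one_hot(state, num_states=1000, num_groups=10):
    """State-aggregation feature vector x(s): 1 in the state's group slot, else 0."""
    group_size = num_states // num_groups
    x = [0.0] * num_groups
    x[(state - 1) // group_size] = 1.0
    return x

w = [0.0] * 10                     # one weight (value estimate) per group
x = one_hot(990)                   # state 990 falls in group 10 (index 9)
v_hat = sum(xi * wi for xi, wi in zip(x, w))   # v(s, w) = x . w
alpha, target = 0.1, 1.0           # assumed step size and update target
# SGD update w += alpha * (target - v_hat) * grad, where grad is x itself,
# so only the component for state 990's group changes.
w = [wi + alpha * (target - v_hat) * xi for wi, xi in zip(w, x)]
print(w)  # only index 9 is nonzero
```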
{ "domain": "ai.stackexchange", "id": 1124, "tags": "reinforcement-learning, function-approximation, features" }
Condense repetitive jQuery .click function
Question: I'm looking for a better way of writing this repetitive jQuery function:

(function($) {
  var mobileAmount = 300;
  $('#_100').click(function() {
    $('*').removeClass('-amount-active');
    $('#_100').addClass('-amount-active');
    mobileAmount = 100;
  });
  $('#_200').click(function() {
    $('*').removeClass('-amount-active');
    $('#_200').addClass('-amount-active');
    mobileAmount = 200;
  });
  $('#_300').click(function() {
    $('*').removeClass('-amount-active');
    $('#_300').addClass('-amount-active');
    mobileAmount = 300;
  });
  ...
})( jQuery );

There are a number of amount buttons, and each time one is clicked it receives the active class for styling and updates the mobileAmount variable.

Answer: Steps to fix this:

Add the class amount-button to all buttons.
Add the attribute data-mobileamount to all buttons and assign values like 100, 200, ... to each button. For example:

<button class="amount-button" data-mobileamount="100">Mobile Amount 100</button>
<button class="amount-button" data-mobileamount="200">Mobile Amount 200</button>

(function($) {
  var mobileAmount = 300;
  $('.amount-button').click(function() {
    $('*').removeClass('-amount-active');
    $(this).addClass('-amount-active');
    mobileAmount = $(this).data('mobileamount');
  });
})( jQuery );
{ "domain": "codereview.stackexchange", "id": 27321, "tags": "javascript, jquery" }
java - Basic snake game
Question: This is a snake game I made. At this point, I would like to hear any thoughts/reviews about it. Thank you.

Game class:

package snake;

import java.awt.Canvas;
import java.awt.Color;
import java.awt.Dimension;
import java.awt.Font;
import java.awt.Graphics;
import java.awt.event.KeyAdapter;
import java.awt.event.KeyEvent;
import java.awt.image.BufferStrategy;
import javax.swing.JFrame;

public class Game extends Canvas implements Runnable {

    public static final int WIDTH = 720;
    public static final int HEIGHT = 720;
    public static final int BLOCK_SIZE = 30; // Do not change - size of the food and snake body part
                                             // as well as their images

    private Thread thread;
    private boolean running;
    private Snake snake;
    private Food food;

    public Game() {
        initializeWindow();
        snake = new Snake(this);
        food = new Food();
        food.generateLocation(snake.getCopyOfEmptySpaces());
        initializeKeyAdapter();
        start();
    }

    private synchronized void start() {
        thread = new Thread(this);
        running = true;
        thread.start();
        this.requestFocus();
    }

    public void run() {
        double amountOfTicks = 10d; // ticks amount per second
        double nsBetweenTicks = 1000000000 / amountOfTicks;
        double delta = 0;
        long lastTime = System.nanoTime();
        while (running) {
            long now = System.nanoTime();
            delta += (now - lastTime) / nsBetweenTicks;
            lastTime = now;
            while (delta >= 1) {
                tick();
                delta--;
            }
            render();
        }
    }

    public void tick() {
        if (snake.isDead()) {
            running = false;
        } else {
            if (isEating()) {
                food.generateLocation(snake.getCopyOfEmptySpaces());
            }
            snake.tick();
        }
    }

    public void render() {
        if (running) {
            BufferStrategy bs = this.getBufferStrategy();
            if (bs == null) {
                this.createBufferStrategy(3);
                return;
            }
            Graphics g = bs.getDrawGraphics();
            g.setColor(Color.black);
            g.fillRect(0, 0, Game.WIDTH, Game.HEIGHT);
            food.render(g);
            snake.render(g);
            if (snake.isDead()) {
                g.setColor(Color.white);
                g.setFont(new Font("Tahoma", Font.BOLD, 75));
                g.drawString("Game Over", Game.WIDTH / 2 - 200, Game.HEIGHT / 2);
            }
            g.dispose();
            bs.show();
        }
    }
    public boolean isEating() {
        return snake.getHeadCoor().equals(food.getCoor());
    }

    private JFrame initializeWindow() {
        JFrame frame = new JFrame("Snake Game");
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        frame.add(this);
        this.setPreferredSize(new Dimension(Game.WIDTH, Game.HEIGHT));
        frame.pack();
        frame.setLocationRelativeTo(null);
        frame.setVisible(true);
        return frame;
    }

    private void initializeKeyAdapter() {
        // this is how the game gets keyboard input
        // the controls are wasd keys
        class MyKeyAdapter extends KeyAdapter {
            private int velocity = Snake.DEFAULT_SPEED; // move a whole block at a time

            @Override
            public void keyPressed(KeyEvent e) {
                int key = e.getKeyCode();
                if (key == KeyEvent.VK_ESCAPE) {
                    System.exit(0);
                }
                // after a key has been pressed we check if the snake goes the opposite way
                // if so, we ignore the press
                if (key == KeyEvent.VK_S) {
                    if (snake.getVelY() != -velocity) {
                        snake.setVel(0, velocity);
                    }
                } else if (key == KeyEvent.VK_W) {
                    if (snake.getVelY() != velocity) {
                        snake.setVel(0, -velocity);
                    }
                } else if (key == KeyEvent.VK_D) {
                    if (snake.getVelX() != -velocity) {
                        snake.setVel(velocity, 0);
                    }
                } else if (key == KeyEvent.VK_A) {
                    if (snake.getVelX() != velocity) {
                        snake.setVel(-velocity, 0);
                    }
                }
            }
        }
        this.addKeyListener(new MyKeyAdapter()); // adding it to the game
    }

    public static void main(String[] args) {
        Game g = new Game();
    }
}

Snake class:

package snake;

import java.awt.Graphics;
import java.awt.Image;
import java.util.HashSet;
import java.util.LinkedList;
import java.util.Set;
import javax.swing.ImageIcon;

public class Snake {

    public static final int DEFAULT_SPEED = Game.BLOCK_SIZE;

    private Game game;
    private int velX;
    private int velY;
    private LinkedList<Coor> body; // snake's body
    private Set<Coor> emptySpaces; // valid spots for food - spots without snake parts
    private boolean dead;
    private Image img; // img of other body parts

    /*
     * @pre: Game.HEIGHT / Game.BLOCK_SIZE == 0 && Game.WIDTH / Game.BLOCK_SIZE == 0
     * @pre: Game.HEIGHT % 2 == 0
     * @pre:
Game.WIDTH > 3 * Game.BLOCK_SIZE
     * @post: the snake starts at the middle of the screen
     */
    Snake(Game game) {
        this.game = game;
        body = new LinkedList<Coor>();
        // starting snake
        int halfScreenHeight = Game.HEIGHT / 2;
        body.add(new Coor(2 * Game.BLOCK_SIZE, halfScreenHeight)); // head block
        body.add(new Coor(Game.BLOCK_SIZE, halfScreenHeight));     // middle block
        body.add(new Coor(0, halfScreenHeight));                   // last block
        velX = DEFAULT_SPEED;
        initializeEmptySpaces();
        initializeImage();
    }

    public void tick() { // updating the body and checking for death
        /* Updating body:
         * Explanation: the Coor of the n-th body part is the Coor of the head n ticks ago
         * Execution: adding the current head Coor to the body, and pushing all other
         * Coors one place. If the snake hasn't eaten this turn then we will remove
         * the last Coor in the body. Otherwise, it has eaten and needs to grow;
         * in that case we'll keep it.
         * Result: the body will be: [Coor now, before 1 tick, before 2 ticks, ...]
         */
        int prevHeadX = body.getFirst().getX();
        int prevHeadY = body.getFirst().getY();
        body.push(new Coor(prevHeadX + velX, prevHeadY + velY)); // new head Coor
        if (!game.isEating()) {
            Coor lastCoor = body.getLast();
            body.removeLast();
            emptySpaces.add(lastCoor); // now there is no body part on it
        }
        emptySpaces.remove(getHeadCoor());
        checkDeath();
    }

    public void render(Graphics g) {
        for (Coor curr : body) {
            g.drawImage(img, curr.getX(), curr.getY(), null);
        }
    }

    private void checkDeath() {
        Coor h = getHeadCoor();
        if (h.getX() < 0 || h.getX() > Game.WIDTH - Game.BLOCK_SIZE) { // invalid X
            dead = true;
        } else if (h.getY() < 0 || h.getY() > Game.HEIGHT - Game.BLOCK_SIZE) { // invalid Y
            dead = true;
        } else {
            dead = false;
            for (int i = 1; i < body.size(); i++) {
                // compare every non-head body part's coor with the head's coor
                if (getHeadCoor().equals(body.get(i))) { // head touched a body part
                    dead = true;
                }
            }
        }
    }

    public void setVel(int velX, int velY) {
        this.velX = velX;
        this.velY = velY;
    }

    public int getVelX() {
        return velX;
    }

    public int
getVelY() {
        return velY;
    }

    public boolean isDead() {
        return dead;
    }

    public Set<Coor> getCopyOfEmptySpaces() {
        return new HashSet<Coor>(emptySpaces);
    }

    private void initializeEmptySpaces() {
        emptySpaces = new HashSet<Coor>();
        for (int i = 0; i * Game.BLOCK_SIZE < Game.WIDTH; i++) {
            for (int j = 0; j * Game.BLOCK_SIZE < Game.HEIGHT; j++) {
                emptySpaces.add(new Coor(i * Game.BLOCK_SIZE, j * Game.BLOCK_SIZE));
            }
        }
        emptySpaces.removeAll(body); // remove the starting snake parts
    }

    private void initializeImage() {
        ImageIcon icon = new ImageIcon("src/res/snake.png");
        img = icon.getImage();
    }

    public Coor getHeadCoor() {
        return body.getFirst();
    }
}

Food class:

package snake;

import java.awt.Graphics;
import java.awt.Image;
import java.util.Iterator;
import java.util.Random;
import java.util.Set;
import javax.swing.ImageIcon;

public class Food {

    private Image img;
    private Coor coor;

    Food() {
        initializeImages();
    }

    public void render(Graphics g) {
        g.drawImage(img, coor.getX(), coor.getY(), null);
    }

    public void generateLocation(Set<Coor> set) {
        // picking a random coordinate for the food
        int size = set.size();
        Random rnd = new Random();
        int rndPick = rnd.nextInt(size);
        Iterator<Coor> iter = set.iterator();
        for (int i = 0; i < rndPick; i++) {
            iter.next();
        }
        Coor chosenCoor = iter.next();
        coor = chosenCoor;
    }

    private void initializeImages() {
        ImageIcon icon = new ImageIcon("src/res/food.png");
        img = icon.getImage();
    }

    public Coor getCoor() {
        return coor;
    }
}

Coor class:

package snake;

public class Coor { // coordinates
    // we divide the screen into rows and columns; the distance
    // between two rows or two columns is Game.BLOCK_SIZE

    private int x;
    private int y;

    Coor(int x, int y) {
        this.x = x;
        this.y = y;
    }

    public int getX() {
        return x;
    }

    public int getY() {
        return y;
    }

    @Override
    public String toString() {
        return "(" + x + ", " + y + ")";
    }

    @Override
    public int hashCode() {
        return x * Game.WIDTH + y;
    }

    @Override
    public boolean equals(Object o) {
        Coor c = (Coor) o;
        if (x == c.getX() && y ==
c.getY()) {
            return true;
        }
        return false;
    }
}

Answer: Coor has the comment coordinates ... yeah, that's exactly what the name should be then. But actually, Point seems easier and doesn't have to be abbreviated, or perhaps be more general and say Vector, or Vec2; that seems fairly common for games (despite it being an abbreviation). Not using the AWT class makes sense to me too.

The hashCode method is okay, though it could probably be a bit more random in its output (not that it matters for such small numbers). The equals method could be safer and also consider passing in arbitrary objects (or null) for comparison. Violating this is probably okay for this limited scope, but in general that shouldn't be skipped. Also, the return statement can be simplified:

@Override
public boolean equals(Object o) {
    if (o == null || !(o instanceof Coor)) {
        return false;
    }
    Coor c = (Coor) o;
    return x == c.getX() && y == c.getY();
}

The Food class uses these abbreviated names, img, rnd, etc. I'd suggest writing them out and giving them some more descriptive names in general. The loop in generateLocation seems a bit bogus to me: why skip a random number of elements before picking one? If you have problems getting repeated numbers each run of the program, you should perhaps initialise the random number generator from a truly random source.

Snake has velX and velY - that's exactly where a Vector would come in handy again. After all it's exactly that, a 2-tuple exactly like what Coor is. checkDeath could use a for (x : body) ... for the death check; plus, once dead = true is set, a break would also be good.

Okay, so generally, I'd suggest not carrying around a set of empty spaces. Keeping the taken coordinates for the snake and for the food is fine. Using those you can immediately see which coordinates are empty ... all the ones that aren't taken. Given the few food items and the length of the snake, the list of coordinates is easy enough to check against.
Apart from that, the way MyKeyAdapter is just inlined there like that is a bit weird. And that goes for the other classes too: it's all mixing the representation via Swing with the game state, and that's, at least for bigger games/projects, not advisable. Then again, it's snake. Just consider how you'd handle extending this code to encompass more features, like different kinds of objects, or how e.g. customisable key bindings would work. So, it'd perhaps make sense to have a Renderable interface for the render method, then keep a list of objects to render in a more generic fashion, or even combine it with the tick method (perhaps with a default implementation on the interface) to update all game objects.
{ "domain": "codereview.stackexchange", "id": 35914, "tags": "java, object-oriented, game, snake-game" }
what's difference between RTGoalHandle and RTGoalHandleFollow?
Question: Hi, in joint_trajectory_action_controller.h (robot_mechanism_controllers package), RTGoalHandle is defined as RTServerGoalHandle<pr2_controllers_msgs::JointTrajectoryAction> and RTGoalHandleFollow is defined as RTServerGoalHandle<control_msgs::FollowJointTrajectoryAction>. What are these two types used for? Their APIs look really similar. Thanks!

Originally posted by AdrianPeng on ROS Answers with karma: 441 on 2013-06-06
Post score: 0

Answer: The JointTrajectoryActionController supports three ways to command trajectories:

Using the raw trajectory_msgs::JointTrajectory topic interface
Using the pr2_controllers_msgs::JointTrajectoryAction action interface
Using the control_msgs::FollowJointTrajectoryAction action interface

It is my understanding that FollowJointTrajectoryAction supersedes the JointTrajectoryAction interface. The main difference between the two is that FollowJointTrajectoryAction allows you to specify path and goal tolerances in the action goal. These tolerances are optional, and if unspecified, defaults specified in the parameter server are used. The older JointTrajectoryAction only used tolerances from the parameter server. As for the RTGoalHandle* types, they are convenience instances that allow a realtime controller to implement an actionlib server.

Originally posted by Adolfo Rodriguez T with karma: 3907 on 2013-06-06
This answer was ACCEPTED on the original site
Post score: 1
{ "domain": "robotics.stackexchange", "id": 14445, "tags": "pr2" }
Why does merely changing point of application of force change kinetic energy?
Question: Consider two spheres A and B of similar mass and radius kept on a smooth surface. If I give an impulse $J$ to one of them along its centre-of-mass axis, it will only translate and not rotate. Its velocity $v_A$ just after the impulse would be $J/m$, so its kinetic energy is $\frac12 m v_A^2$. However, if I apply the same impulse $J$ to ball B a distance $x$ away from the COM axis, its translational velocity will still be $v_B = J/m$, but it will also rotate with angular velocity $\omega = Jx/I$, where $I$ is its moment of inertia. My point is, now its KE is $\frac12 m v_B^2 + \frac12 I \omega^2$. WHY? Just by merely changing the point of application of our impulse, how did we change the total energy acquired by ball B, which is now greater than that of ball A? My intuition would say that somehow the velocity of B would be less than that of A, to compensate for the kinetic energy gained by its rotational motion. But that's not the case: $KE_A \neq KE_B$. Why is this the case, and where is this new energy coming from?

Answer: Let's suppose that the impulse is due to a constant force, $\vec F$, applied over a short time interval, $\Delta t$. If the ball is set turning as well as translating, the point of application of $\vec F$ will travel further in the $\vec F$ direction in time $\Delta t$ than if the force were applied through the centre of mass and the ball didn't turn. So more work is done by $\vec F$ in time $\Delta t$ when the force is applied off-centre, but in the same direction and for the same time.
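One way to check the bookkeeping explicitly (my own sketch, assuming the impulse comes from a constant force $F$ applied for a time $\Delta t$ to a ball initially at rest, so $J = F\Delta t$): the centre-of-mass speed grows linearly from $0$ to $J/m$ and the angular speed from $0$ to $Jx/I$, so

```latex
\begin{aligned}
W_{\text{trans}} &= \int_0^{\Delta t} F\, v_{\text{cm}}\,\mathrm{d}t
                  = F \cdot \frac{J}{2m}\,\Delta t
                  = \frac{J^2}{2m} = \tfrac{1}{2} m v^2, \\
W_{\text{rot}}   &= \int_0^{\Delta t} (F x)\, \omega\,\mathrm{d}t
                  = F x \cdot \frac{Jx}{2I}\,\Delta t
                  = \frac{(Jx)^2}{2I} = \tfrac{1}{2} I \omega^2.
\end{aligned}
```

The off-centre force does the extra work $W_{\text{rot}}$ precisely because its point of application also moves with the rotation, which is exactly where the extra kinetic energy comes from.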
{ "domain": "physics.stackexchange", "id": 73014, "tags": "newtonian-mechanics, classical-mechanics, rotational-dynamics, rotational-kinematics, rotation" }
Is palmitic acid really that dangerous?
Question: According to Wikipedia, "Palmitic acid is the most common saturated fatty acid found in animals, plants and microorganisms. It is also the first fatty acid produced during fatty acid synthesis and is the precursor to longer fatty acids. As a consequence, palmitic acid is a major body component of animals. In humans, one analysis found it to make up 21–30% (molar) of human depot fat, and it is a major, but highly variable, lipid component of human breast milk." (https://en.wikipedia.org/wiki/Palmitic_acid) Yet, specifically palmitic acid is also said to be "harmful" (that it increases LDL levels and puts people at risk for heart disease), per the World Health Organization (http://www.freezepage.com/1348239076FHWAJDADVT). So, I am skeptical --- the possibly most ubiquitous saturated fatty acid in nature, and a central component of our fatty acid metabolism, is somehow "more" dangerous/unhealthy compared to other saturated (or unsaturated) fatty acids? I'd suspect the opposite should be true -- it seems to me it should be the saturated fatty acid we should be most capable of handling well! Answer: Let's first clarify some concepts. Free fatty acids, including palmitic acid, are not present in animal tissues (or in the diet) to any large extent; they are esterified with glycerol to form triglycerides (fat), which is the storage form. This is a very important distinction, because triglycerides are chemically inert molecules that can be stored in very large amounts in cells without causing problems (adipocytes are pretty much just one big fat droplet), while free fatty acids are dangerous to organisms because they disrupt cell membranes --- fatty acids are basically soap. When nutrition studies like the WHO report talk about dietary intake of "palmitic acid", they really mean intake of triglycerides that contain a large proportion of esterified palmitic acid as side chains.
There is a large body of epidemiological studies indicating that intake of fats rich in palmitic acid correlates with cardiovascular disease and other "metabolic" disorders, although some reports show no significant effect. In clinical trials, replacing saturated with unsaturated fat seems to provide some health benefit; but there also are studies indicating that monounsaturated fat can be worse than saturated fat. So there is some evidence that too much saturated fat is problematic, although the issue is not straightforward. The causal mechanism is to my knowledge still not clear. You are right that cells are normally perfectly capable of handling palmitic acid; it is a good substrate for beta-oxidation and a common fuel source for human cells. However, in pathological situations, free fatty acids can accumulate in cells and in the blood, probably because of excessive lipolysis. Many studies show that this can cause insulin resistance and trigger inflammatory signals, and even cell death by apoptosis. Some reports show that palmitic acid in particular causes these effects, whereas oleic acid (monounsaturated) does not, but I don't think there's any consensus on why palmitic acid is special. My own hypothesis (if I may be so bold :) is that palmitate only appears to be special because cells have evolved to sense it as a "proxy" signal for free fatty acids in general. Since fatty acids are clearly dangerous to cells, sensing them is important, and because palmitate is the naturally most abundant fatty acid, it would be the easiest fatty acid to detect. It is known that palmitate is recognized by toll-like receptors, in particular TLR2 and TLR4, and that sensing by these receptors trigger "proinflammatory" signalling. But this alone doesn't prove my "proxy" hypothesis, of course.
{ "domain": "biology.stackexchange", "id": 6718, "tags": "metabolism, nutrition, health, lipids, fat-metabolism" }
Phase change on reflection only 0 and $\pi$ allowed
Question: We know that when a wave on a string is reflected from a hard boundary, the phase change is $\pi$, and from a soft boundary, the change is 0. My question is: these two conditions (hard and soft boundary) seem to be the two extremes, so if the boundary is not absolutely hard/soft, can the phase change be somewhere between 0 and $\pi$, or are 0 and $\pi$ the only possible values? To me, the latter would be a bit strange, because it would suggest something like a two-state universe... Answer: Suppose the incident wave has amplitude $A$ and phase $0$. We say that reflection changes the phase in the mathematical sense, but what actually happens is that the sign of the wave is flipped at a hard boundary, whereas at a soft boundary it stays the same. It depends on our way of thinking. In your case of a boundary that is neither perfectly soft nor perfectly hard, the amplitude $A$ is reduced, and the sign is reversed when the boundary is relatively harder.
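A standard way to make this quantitative (a sketch using the string-impedance formalism, which the answer above does not spell out): for a junction between two strings under tension $T$ with wave impedances $Z_1 = \sqrt{T\mu_1}$ and $Z_2 = \sqrt{T\mu_2}$, the reflected displacement amplitude is

```latex
r \;=\; \frac{Z_1 - Z_2}{Z_1 + Z_2},
\qquad
\begin{cases}
Z_2 \to \infty : & r \to -1 \quad (\text{hard boundary, phase change } \pi),\\
Z_2 \to 0 : & r \to +1 \quad (\text{soft boundary, phase change } 0).
\end{cases}
```

Since $r$ is real for any lossless boundary, only its sign can change: the reflected wave's phase is either $0$ or $\pi$, while for intermediate boundaries it is the magnitude $|r| < 1$ that is reduced. Intermediate phase shifts do occur once the termination has a reactive or lossy component (e.g. a massive or damped end), which makes $r$ complex.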
{ "domain": "physics.stackexchange", "id": 21111, "tags": "waves, reflection, boundary-conditions" }
efficiently calculate nearest common ancestor in a family tree (each person has two parents)
Question: I'm well aware of ways to efficiently calculate the lowest common ancestor in a tree of nodes which converge to a single root (ie, each node has only one parent). Just iterate back to root for each person, then walk back from root tossing off anything common. In a matriarchal society, for example, this could be used to quickly calculate how any two people are related as long as only the mothers are considered. But if both parents are considered, eg mother AND father, then the algorithm just described breaks down. So I wondered is there an algorithm to tell two people how they are related in a family tree where both parents are considered? For example, see the Icelandic genealogy app (https://www.islendingabok.is/) which does precisely that. How's it done, algorithmically speaking? Answer: Just do breadth-first search up the graph, colouring each person you meet as being an ancestor of one person or the other, and stop when you reach a person who's an ancestor of both. You should do the searches in parallel (one generation back on one person, then one generation back on the other, and so on) because any common ancestor will be approximately the same number of generations back on each side. Indeed, this is exactly what you should do on a tree: going all the way back to the root and then discarding all common ancestors after the first takes more time.
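A minimal sketch of that parallel search (my own illustration; the dict-of-parent-tuples representation and all names are assumptions, not taken from the Icelandic app):

```python
from collections import deque

# Hypothetical sketch: `parents` maps each person to a tuple of known
# parents, forming a DAG (everyone may have two parents). We run BFS up
# the graph from both people, one generation per side per round, and stop
# at the first person reached from both sides. Assumes a != b.

def nearest_common_ancestor(parents, a, b):
    seen = {a: "A", b: "B"}            # which side first reached each person
    frontiers = [deque([a]), deque([b])]
    while frontiers[0] or frontiers[1]:
        for side, label in ((0, "A"), (1, "B")):
            frontier = frontiers[side]
            for _ in range(len(frontier)):          # exactly one generation back
                person = frontier.popleft()
                for parent in parents.get(person, ()):
                    mark = seen.get(parent)
                    if mark is not None and mark != label:
                        return parent               # reached from both sides
                    if mark is None:
                        seen[parent] = label
                        frontier.append(parent)
    return None                                     # no common ancestor known

# Toy family: siblings alice and bob share parents mom and dad.
family = {"alice": ("mom", "dad"), "bob": ("mom", "dad"), "mom": ("gma", "gpa")}
print(nearest_common_ancestor(family, "alice", "bob"))  # prints: mom
```

The per-generation alternation is what keeps the two search frontiers at roughly the same depth, so the first meeting point is a nearest common ancestor rather than some distant one.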
{ "domain": "cs.stackexchange", "id": 14427, "tags": "graphs" }
Why would magma have high amounts of nickel and chromium?
Question: I am doing a research project on Mount Lamington, and through my research I have found that its magma has unusually high amounts of chromium and nickel. What would cause this to happen? Is there science behind it, or is it unexplained? Answer: The current thinking, according to modelling and previous observations, is that the high amounts of $\ce{MgO}$, $\ce{Cr}$ and $\ce{Ni}$ are due to the magma feeding the Mt. Lamington volcano being contaminated by the nearby Papuan Ultramafic Belt (Smith, 2014; Arculus et al. 1983). The Papuan Ultramafic Belt is, according to Lus et al. (2001), an early Neogene (Tertiary) ophiolite. The region itself is not associated with an active Wadati-Benioff Zone (Smith, 2004). Smith describes that the composition of the magma that fed the Mt. Lamington eruption represents part of an unusual tectonic environment, as described in part in the answer to the earlier question What are the geological mechanisms for sea floor spreading in the Bismarck Sea?. Smith describes that the magmatic geochemistry is influenced by the variables of crustal thickness, tectonic setting and the geochemistry of the lithosphere. Finally, the relationship between the tectonics of the region and the magmatism of the Papuan arc environment has been described as 'delayed melting': partial melting of a source that was metasomatised in an earlier convergent event. The complicated geology is shown in the image below: Tectonic and geological map from Syracuse University Department of Earth Sciences; the ophiolite (PUB) outcrops are in purple. References Arculus et al. 1983, 'Ophiolite contaminated andesites, trachybasalts and cognate inclusions of Mt. Lamington, Papua New Guinea: anhydrite-amphibole bearing lavas and the 1951 cumulodome', Bulletin of Volcanology and Geothermal Research Lus, W. et al.
2001, 'Papuan Ultramafic Belt (PUB) Ophiolite: Field Mapping, Petrology, Mineral chemistry, Geochemistry, Geochronology, And Experimental Studies Of The Metamorphic Sole', American Geophysical Union, Fall Meeting 2001, abstract #T52D-09 Smith, I. 'High magnesium andesites: the example of the Papuan Volcanic Arc' in Gomez-Tuena et al. (eds) 2014, 'Orogenic Andesites and Crustal Growth', Geological Society Special Publication 385, p. 128
{ "domain": "earthscience.stackexchange", "id": 311, "tags": "volcanology, geochemistry, volcanoes, petrology" }
Visibility of a planet on the other side of the Sun
Question: Context: I've been having a discussion with my vectors and mechanics professor (course Mathematics BSc) about a problem on a recent coursework. The following is the model I came up with. I'm not an astronomer, so bear with me.

Define:
$S$ the Sun;
$E$ the Earth;
$P$ some other planet in the solar system;
$O$ the position of an observer somewhere on $E$;
$0<l<90$ the latitude of $O$;
$\Pi_E$ the orbital plane of $E$, the $xy$ plane;
$\Pi_P$ the orbital plane of $P$;
$\Pi_O$ the plane tangent to $E$ at $O$;
$\alpha$ the acute angle between $\Pi_E$ and $\Pi_P$;
$\beta=90-l$ the acute angle between $\Pi_E$ and $\Pi_O$.

Assume:
The local time at $O$ is midnight;
The date is the Summer solstice;
The visual sizes of $P$ and $S$ are both non-zero but otherwise negligible;
The projections of $E$, $S$ and $P$ onto the $xy$ plane are roughly but not exactly collinear;
The solar system is Euclidean;
Light travels in infinite straight lines.

Given these assumptions, I think that $P$ is visible from $O$ if and only if $\alpha>\beta$. Am I right?

Answer: Let's put our observer on the equator, so the angle between his tangent plane and the Earth's orbit (at local midnight on the solstice) is $90 - 23.5 = 66.5$ degrees. This is your $\beta$.

Let's put our planet in orbit very close to the ecliptic, so that the angle between the Earth's and the planet's orbit is small (say Jupiter, with an inclination of 1.3 degrees). This is your $\alpha$.

Let us further suppose that Jupiter is at opposition. It will culminate at midnight and be very much visible (indeed it will be about $23.5 \pm 1.3$ degrees from the zenith), but $\beta>\alpha$.

This is not surprising: nearly all the planets orbit close to the ecliptic, so $\alpha$ is typically very small (a few degrees). If a planet were only visible when $\alpha>\beta$, then planets would rarely be visible.
{ "domain": "astronomy.stackexchange", "id": 4148, "tags": "planet, solar-system" }
If a spring is connected to a block and the other end is fixed, what's the force on the object after releasing the spring from a stretched position?
Question: A spring is connected to a block and the other end of the spring is fixed. If the block is pulled with a hand, thus stretching the spring, what would be the force applied to the block once the hand releases the block? I am aware of Hooke's law, where $F = kx$. Answer: Assuming an ideal, massless spring, Hooke's law applies regardless of whether the block is moving or not. In particular, the force at the instant after you let go is the same as in the instant before you let go, because the extension of the spring is the same before and after.
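Making the answer's statement concrete (symbols mine: stretch $x_0$ at the moment of release, block mass $m$): just before release the hand's pull balances the spring, and just after release only the spring force remains, so

```latex
F_{\text{spring}} = -k x_0
\quad\Longrightarrow\quad
a(0^{+}) = -\frac{k x_0}{m},
```

with the minus sign meaning the force and the initial acceleration point back toward the equilibrium position. As the block moves, the extension $x$ changes and the force follows $F = -kx$ instant by instant.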
{ "domain": "physics.stackexchange", "id": 91424, "tags": "newtonian-mechanics, forces, mass, acceleration, spring" }
Insulators and electric charges
Question: I have two questions about insulators: Will the net charge on an insulator always be negative if it's rubbed? How can a negatively charged plastic rod transfer negative charges when it comes in contact with a metal sphere if plastic is an insulator and the electric charges are held in place? Answer: It is possible to generate both positive and negative static electric charges by rubbing (or even just contact). This is known as the Triboelectric effect, and the list of materials involved is the Triboelectric series. Plastics tend to become negatively charged when rubbed and glass becomes positively charged. Once the two materials are separated, the remaining charge is not that strongly held, but since air is a poor conductor, this prevents the charge from quickly leaking away (although it will over time). In the case of a plastic rod, excess electrons are on the surface and so not insulated by the bulk of the rod. If touched against a conductor the charge will escape (as the conductor acts like a sink), but you may have to run the conductor across the whole surface of the rod to remove all of the charge, as the insulation will prevent the charge from travelling from one part of the rod to another.
{ "domain": "physics.stackexchange", "id": 58855, "tags": "electrostatics, electricity" }
GUI implementation with Racket
Question: I am studying the book Realm of Racket. In chapter 5, there is a challenge:

Find an image of a locomotive. Create an animation that runs the locomotive from just past the left margin to just past the right margin of the screen. Next, modify your program so the locomotive wraps around to the left side of the screen after passing the right margin.

I created the following code using Racket and DrRacket. However, I bet it could be improved. My question is especially about the transition from the right side to the left side. I think there should be a better way of doing this:

#lang racket
(require 2htdp/universe 2htdp/image)

(define LOCOMOTIVE image-inserted-through-DrRacket)
(define HEIGHT 800)
(define WIDTH 800)

;; this function changes the speed at which the figure moves
(define (add-3-to-state current-state)
  (if (< current-state 1055)
      (+ current-state 6)
      (- current-state 800)))

(define (locomotive-running current-state)
  (place-image LOCOMOTIVE current-state (/ WIDTH 2) (empty-scene WIDTH HEIGHT)))

(big-bang 0
          (on-tick add-3-to-state)
          (to-draw locomotive-running))

That's the image that I used:

Answer: A few comments:

The function add-3-to-state adds 6 to the state. That's the problem with magic numbers or unnamed numerical constants.
(/ WIDTH 2) is a constant. It does not need to be calculated each time through the code.
Two levels of abstraction are intermingled. Concepts like WIDTH and HEIGHT are at one level. LOCOMOTIVE is at a higher level. Because it is the only entity at that higher level, it can be generalized to any arbitrary image. This keeps the level of abstraction consistent throughout the code.

A more generalized approach

Roll up all the settings into one place with a structure.

(require 2htdp/universe 2htdp/image)

(struct settings (image width height increment) #:transparent)

Create a higher order function that takes a settings structure and returns a function that can serve as the argument for to-draw.
The returned function is a closure over a particular settings.

;; settings->(state->image)
(define (make-draw s)
  (define offset (/ (settings-width s) 2))
  (define scene (empty-scene (settings-width s) (settings-height s)))
  (lambda (state)
    (place-image (settings-image s) state offset scene)))

Create a higher order function that takes a settings structure and returns a function that can serve as the argument for on-tick. The returned function is also a closure over a particular settings.

;; settings->(state->state)
(define (make-update s)
  (define end (+ (settings-width s) (image-width (settings-image s))))
  (lambda (state)
    (if (< state end)
        (+ state (settings-increment s))
        (- state (settings-width s)))))

Implementation of animation

(define my-settings (settings (circle 20 "solid" "blue") 800 800 6))
(define draw (make-draw my-settings))     ; draw is a function
(define update (make-update my-settings)) ; update is a function

(big-bang 0
          (on-tick update)
          (to-draw draw))

Caveat

Sometimes it's worth generalizing the implementation. The next level of this implementation would be a function that takes a settings and returns a big-bang animation. But sometimes it isn't worth doing, because it's the road to factory-factory-factory type implementations.
{ "domain": "codereview.stackexchange", "id": 22828, "tags": "gui, lisp, racket" }
Kinetic energy derivation: Why is $\frac{d \mathbf v}{dt} \cdot \mathbf v= \frac 12 \frac{d}{dt}(v^2)~?$
Question: In Goldstein's Classical Mechanics 3rd edition, page 3, the Kinetic energy is derived by considering the work done on a particle by an external force $\mathbf F$ from point $1$ to point $2$ $$W_{12}=\int_1^2\mathbf F \cdot d\mathbf s.$$ It is then argued that for constant mass, the integral reduces to $$W_{12}=m\int \frac{d \mathbf v}{dt} \cdot \mathbf v dt=\frac m2 \int \frac{d}{dt}(v^2)dt$$ from which you get the usual formula $W_{12}=\frac m2 (v_2^2-v_1^2)$. I'm confused however on how the last integral above was obtained. Why must $$\frac{d \mathbf v}{dt} \cdot \mathbf v= \frac 12 \frac{d}{dt}(v^2)$$ when it may not be the case that $\frac {d \mathbf v}{dt}$ and $\mathbf v$ are parallel? Shouldn't this depend on the angle between the velocity and acceleration vectors? Answer: I'm confused however on how the last integral above was obtained. $$v^2 = \mathbf{v}\cdot\mathbf{v}$$ $$\frac{d}{dt}(v^2) = \frac{d}{dt}(\mathbf{v}\cdot\mathbf{v}) = \frac{d\mathbf{v}}{dt}\cdot\mathbf{v} + \mathbf{v}\cdot\frac{d\mathbf{v}}{dt} = 2\left(\frac{d\mathbf{v}}{dt}\cdot\mathbf{v}\right)$$ $$\Rightarrow \frac{d\mathbf{v}}{dt}\cdot\mathbf{v} = \frac{1}{2}\frac{d}{dt}(v^2) $$ when it may not be the case that $\frac{d\mathbf{v}}{dt}$ and $\mathbf{v}$ are parallel? Consider, for example, that case that $\frac{d\mathbf{v}}{dt}$ is perpendicular to $\mathbf{v}$ and then the left-hand side must be zero. But, as we know, for uniform circular motion, the acceleration is always perpendicular to the velocity and that the speed $v = \sqrt{v^2}$ is then constant and thus, the right-hand side is zero too which is consistent.
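A quick numerical sanity check of this identity (a sketch in Python, using a made-up trajectory whose acceleration is deliberately not parallel to its velocity):

```python
import numpy as np

# Check d/dt(v^2) = 2 (dv/dt . v) on the trajectory v(t) = (cos t, sin t, t),
# whose acceleration is NOT parallel to its velocity.
def v(t):
    return np.array([np.cos(t), np.sin(t), t])

t0, h = 0.7, 1e-6
a = (v(t0 + h) - v(t0 - h)) / (2 * h)      # dv/dt by central difference
lhs = 2 * np.dot(a, v(t0))                 # 2 (dv/dt . v)

speed2 = lambda t: np.dot(v(t), v(t))      # v^2 = v . v
rhs = (speed2(t0 + h) - speed2(t0 - h)) / (2 * h)   # d/dt (v^2)

assert abs(lhs - rhs) < 1e-5
# Analytically v^2 = 1 + t^2 here, so d/dt(v^2) = 2t:
assert abs(rhs - 2 * t0) < 1e-4
```

The two sides agree even though the acceleration and velocity are not parallel: only the component of the acceleration along the velocity changes the speed, and that is exactly what the dot product picks out.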
{ "domain": "physics.stackexchange", "id": 43201, "tags": "homework-and-exercises, newtonian-mechanics, work, vectors, differentiation" }
Twitter clone in Golang
Question: I have built a Twitter clone in Golang, using object orientation principles. I wish to know the design mistakes I have made. package blade import ( "database/sql" "fmt" ) type User struct { Name string Password string *sql.Tx } //User is a normal user who can login, register, tweet, retweet, follow and unfollow func Stream(username string, transaction *sql.Tx) []Tweetmodel { rows, err := transaction.Query("Select id, tweet from tweets where username=$1", username) if err != nil { panic(err) } defer rows.Close() tweets := make([]Tweetmodel, 0) for rows.Next() { var id int var msg string rows.Scan(&id, &msg) tweet := Tweetmodel{id, msg} tweets = append(tweets, tweet) } return tweets } func (u *User) Follow(usertofollow User) string { name := usertofollow.Name var username string err := u.QueryRow("SELECT name from users where name=$1", name).Scan(&username) switch { case err == sql.ErrNoRows: return "You cannot follow an user who does not exist" default: if u.alreadyfollowing(name) { return "You have already followed this user" } else { u.Exec("INSERT INTO follow(username, following) VALUES($1, $2)", u.Name, name) return fmt.Sprintf("You have successfully followed %v", name) } } } func (u *User) alreadyfollowing(usertofollow string) bool { res, _ := u.Query("SELECT * from follow where username=$1 and following=$2", u.Name, usertofollow) return (res.Next() == true) } func (u User) Login() string { var username, password string err := u.QueryRow("SELECT name, password FROM users WHERE name=$1", u.Name).Scan(&username, &password) if err == sql.ErrNoRows { return "There is no user with that name, please try again or try registering!" } if u.Password != password { return "Your password is wrong, please try again!" 
} return "Welcome to Twitchblade" } func (u *User) Register() string { var username string err := u.QueryRow("SELECT name FROM users WHERE name=$1", u.Name).Scan(&username) switch { case err == sql.ErrNoRows: u.Exec("INSERT INTO users(name, password) VALUES($1, $2)", u.Name, u.Password) return "Successfully registered" default: return "User exists with same name.Please try a new username" } } func (u *User) alreadyretweeted(tweetid int) bool { var id int err := u.QueryRow("SELECT id from retweets where original_tweet_id = $1 and retweeted_by = $2", tweetid, u.Name).Scan(&id) return (err != sql.ErrNoRows) } func (u *User) iteratedretweet(tweetid int) (bool, int) { var id int err := u.QueryRow("SELECT original_tweet_id from retweets where retweet_tweet_id = $1", tweetid).Scan(&id) return (err != sql.ErrNoRows), id } func (u *User) Retweet(tweetid int) (string, int) { if u.alreadyretweeted(tweetid) { return "You have already retweeted this tweet", tweetid } else { flag, originalid := u.iteratedretweet(tweetid) if flag { return u.Retweet(originalid) } else { var msg, originaluser string var id int u.QueryRow("select username, tweet from tweets where id=$1", tweetid).Scan(&originaluser, &msg) _, retweetid := u.Tweet(msg) u.QueryRow("INSERT INTO retweets(original_tweet_id, retweeted_by, retweet_tweet_id) VALUES($1, $2, $3) returning id", tweetid, u.Name, retweetid).Scan(&id) return fmt.Sprintf("Successfully retweeted tweet by %s", originaluser), retweetid } } } func (u *User) Timeline() []Tweetmodel { rows, err := u.Query("select tweets.id, tweets.tweet from tweets INNER JOIN follow ON (tweets.username = follow.following) and follow.username=$1", u.Name) if err != nil { panic(err) } defer rows.Close() tweets := make([]Tweetmodel, 0) for rows.Next() { var id int var msg string rows.Scan(&id, &msg) tweet := Tweetmodel{id, msg} tweets = append(tweets, tweet) } return tweets } func (u User) Tweet(msg string) (string, int) { var id int u.QueryRow("INSERT INTO tweets(username, 
tweet) VALUES($1, $2) returning id", u.Name, msg).Scan(&id) return "Successfullly tweeted", id } func (u *User) Unfollow(usertounfollow User) string { res, _ := u.Query("SELECT * from follow where username=$1 and following=$2", u.Name, usertounfollow.Name) if res.Next() != true { return "You do not follow this user" } else { u.Exec("DELETE FROM follow WHERE name=$1 and following=$2)", u.Name, usertounfollow.Name) return fmt.Sprintf("You have successfully unfollowed %v", usertounfollow.Name) } } Answer: I am aware this is old but here goes: Can you confirm that *sql.Tx.Query is a proper query builder that prevents sql injection attacks (I don't know). You are correctly closing the rows using defer. Your Follow function could do the down selection on the database using a where clause to determine if you are not already following without requiring two queries (to distinguish between non-existent user and already following, you would have to return a count of the unfiltered username as well). DO NOT store a user's password; instead have a random per-user salt generated and stored to perform multiple (10 say) bcrypt rounds to get a suitably secure hash. You can probably reduce the number of retweet queries by using a slightly more complex version of the following approach Your code implies discrete polling for tweets doing table searches; this gets inefficient quickly with an n * m * u load on your database (n ~ number of people, m ~ number of following per person, u ~ update rate) A much lower processing load (but greater storage) way is to have sharded tweet and followed message tables containing a list of tweets for a user's attention (referencing the original tweet entry) so that you only need to do a simple search of a single shard of the followed tweet table for a given user to find all tweets for that user's attention without doing table joins. Tweets of interest are pushed onto the sharded followed tweet tables when the original author tweets.
The tweet and followed message tables are sharded by the tweeting user and following user respectively. You could have a flag in the sharded user table that is set whenever there are new tweets to display, again avoiding the excessive database load of re-scanning an entire tweet table for new tweets. You don't show how you would be doing app/web page updates; are you fast polling (bad idea), long polling (minimum fallback), a push notification service (which can get costly) or using websockets (good if it works for a given user but firewalls and proxies often interfere)? NB: Reducing the sizes of tables and the number and complexity of queries to a minimum is essential to maintain system performance and minimise cost.
{ "domain": "codereview.stackexchange", "id": 30315, "tags": "object-oriented, sql, go" }
Black hole spin measurements
Question: In measurements of the spin of supermassive black holes in AGN using the X-ray reflection method (as shown in the figure below), 90% confidence error bars on black hole spin and 1$\sigma$ error bars on black hole mass are chosen. I could not understand why the error bars are chosen as such. The paper Observing Black Holes Spin (Reynolds (2019)) says that this choice follows the conventions as in the relevant primary literature. Answer: Whilst it is not unusual for uncertainties to be quoted at the 1$\sigma$ level in the astronomical literature (I would say it is the default unless otherwise specified), a 90% confidence interval is somewhat peculiar. I looked at a couple of the main primary literature sources used to collect the black hole spin data for this figure. Both Patrick et al. (2012) and Walton et al. (2013) use a $\chi^2$ fitting approach and quote as uncertainties the values of the spin parameter where the $\chi^2$ increases by 2.71 from its minimum value. This is indeed appropriate (with lots of statistical caveats) for a 90% confidence interval. I don't think there is any particular reason for this choice. They do not quote 1$\sigma$ (or 68% confidence) uncertainties and it is unsafe to assume that there is any fixed ratio between a 90% and 68% confidence interval. Given these are two of the major sources of data for the plot you show from Reynolds' review, I don't think there was any alternative other than to show the data and error bars as they were given in those papers.
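The 2.71 threshold itself is easy to verify: a $\chi^2$ variable with one degree of freedom (one interesting parameter, the spin) is the square of a standard normal, so the 90% quantile is $z_{0.95}^2$. A quick stdlib-Python check (my sketch, not from the papers):

```python
from statistics import NormalDist

# Delta-chi^2 thresholds for ONE interesting parameter: chi^2 with 1 dof is
# the square of a standard normal, so P(chi^2_1 <= x) = 2*Phi(sqrt(x)) - 1.
delta_90 = NormalDist().inv_cdf(0.95) ** 2    # 90% confidence -> ~2.71
delta_68 = NormalDist().inv_cdf(0.8414) ** 2  # 68.27% ("1 sigma") -> ~1.00
print(round(delta_90, 2), round(delta_68, 2))
```

This also makes the answer's caveat concrete: the 2.71 and 1.00 thresholds map onto different confidence levels of the *same* $\chi^2$ curve, but the resulting parameter intervals need not scale by any fixed ratio unless the curve is exactly parabolic.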
{ "domain": "physics.stackexchange", "id": 60920, "tags": "general-relativity, black-holes, astrophysics" }
What does Baym mean here in his Lecture on Identical Particles?
Question: I'm reading Lectures on Quantum Mechanics by Gordon Baym (1969). In his discussion of 3 identical fermions Baym writes: "One way to make $\Psi(1,2,3)$ [the total wave-function] antisymmetric is to take a symmetric $\chi\left(s_{1}, s_{2}, s_{3}\right)$ times an antisymmetric $\psi\left(\mathbf{r}_{1}, \mathbf{r}_{2}, \mathbf{r}_{3}\right) .$ The other way around won't work, since it isn't possible to construct a completely antisymmetric spin wave function $\chi\left(\mathrm{s}_{1}, \mathrm{s}_{2}, \mathrm{s}_{3}\right)$ from just the two choices, up or down, for each spin. There is another possibility though. Suppose that we take a $\chi\left(\mathrm{s}_{1}, \mathrm{s}_{2}, \mathrm{s}_{3}\right)$ that is antisymmetric in $\mathrm{s}_{2}$ and $\mathrm{s}_{3},$ for example," $$ \chi(s_1,s_2,s_3)=\chi_{\uparrow}\left(\mathrm{s}_{1}\right)\left[\chi_{\uparrow}\left(\mathrm{s}_{2}\right) \chi_{\downarrow}\left(\mathrm{s}_{3}\right)-\chi_{\downarrow}\left(\mathrm{s}_{2}\right) \chi_{\uparrow}\left(\mathrm{s}_{3}\right)\right] $$ Baym goes on to construct the totally anti-symmetric wave-function: $$ \Psi(1,2,3)=\chi\left(\mathrm{s}_{1}, \mathrm{s}_{2}, \mathrm{s}_{3}\right) \psi\left(\mathrm{r}_{1}, \mathrm{r}_{2}, \mathrm{r}_{3}\right)+\chi\left(\mathrm{s}_{2}, \mathrm{s}_{3}, \mathrm{s}_{1}\right) \psi\left(\mathrm{r}_{2}, \mathrm{r}_{3}, \mathrm{r}_{1}\right) + \chi\left(\mathrm{s}_{3}, \mathrm{s}_{1}, \mathrm{s}_{2}\right) \psi\left(\mathrm{r}_{3}, \mathrm{r}_{1}, \mathrm{r}_{2}\right) $$ My question is what exactly Baym means when he says "it isn't possible to construct a completely antisymmetric spin wave function $χ(s_1,s_2,s_3)$ from just the two choices, up or down, for each spin.", and how his latter construction is different from that. Answer: The only possible completely antisymmetric wave function for three spins $1/2$ is identically zero. Of the three spin variables $s_1, s_2, s_3$, each equal to $1/2$ or $-1/2$, at least two must have the same value.
The antisymmetry of the wave function then forces it to vanish: exchanging the two equal spin variables must flip the sign of the wave function while leaving its value unchanged, so that value must be zero.
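This pigeonhole argument can be illustrated numerically (a sketch with arbitrary made-up amplitudes): antisymmetrizing any three-spin amplitude over all $3!=6$ permutations gives the zero function, because every argument tuple necessarily repeats one of the two spin values:

```python
from itertools import permutations
import random

# Arbitrary (hypothetical) three-spin amplitudes chi(s1, s2, s3),
# with each s restricted to two values: up (0) or down (1).
random.seed(0)
chi = {(s1, s2, s3): random.random()
       for s1 in (0, 1) for s2 in (0, 1) for s3 in (0, 1)}

def sign(p):
    # parity of a permutation of (0, 1, 2)
    s = 1
    for i in range(3):
        for j in range(i + 1, 3):
            if p[i] > p[j]:
                s = -s
    return s

def antisym(s):
    # full antisymmetrizer: sum over all permutations with signs
    return sum(sign(p) * chi[(s[p[0]], s[p[1]], s[p[2]])]
               for p in permutations(range(3)))

# Every component vanishes: each tuple s has two equal entries, and the
# transposition of those two slots cancels the sum pairwise.
assert all(abs(antisym(s)) < 1e-12 for s in chi)
```

With three or more distinct single-particle spin states the same construction (a Slater determinant) would generally be nonzero; with only two choices it is forced to vanish, which is exactly Baym's point.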
{ "domain": "physics.stackexchange", "id": 64037, "tags": "wavefunction, identical-particles, spin-statistics" }
SQL Look up in a Stored Procedure across three tables
Question: The idea of the below code is that I feed back to a Gridview whether the user has permission to view the property; otherwise that property is not shown. The data is passed in via SessionParameters USE [database] GO SET ANSI_NULLS ON GO SET QUOTED_IDENTIFIER ON GO alter PROCEDURE [dbo].[spSafeguardingActionPropertyByPermission] @RegionID bigint ,@EmployeeID varchar(10) ,@PropertyID varchar(10) AS BEGIN SELECT TblA.PropertyID as A_PropertyID, TblA.Propertyname as A_Propertyname , TblB.FireSafety as FireSafety1, TblB.DisplayScreenEquipment as DSE FROM TbPropertyDetails as TblA INNER JOIN TbPropertyDetailsSafeguarding as TblB ON TblA.PropertyID = TblB.PropertyID WHERE TblA.RegionID > 0 AND TblA.PropertyID LIKE '%' + @PropertyID + '%' AND EXISTS ( SELECT 1 FROM tblPropertyViewPermissions AS pvp WHERE pvp.EmployeeID = @EmployeeID AND pvp.PropertyID = TblA.PropertyID ) END Answer: Do your EmployeeID and PropertyID parameters really need to be varchar(max)? I find it hard to believe that an employee would have an ID with 8 000 characters. The same goes for the property ID. Change this to how many characters they are allowed to have (e.g., if an employee can only have an ID of 8 characters then use that, if it's 100 characters use that, and if it indeed is 8 000 characters, well then I suppose varchar(max) is fine). I'd also advocate using clearer aliases for your columns. Instead of as PId you should alias it as PropertyID. This line is a bit unclear to me: WHERE TblA.RegionID > 0. Can a Region have an ID of 0 or less than 0? I also think that the LIKE in this line TblA.PropertyID LIKE '%' + @PropertyID + '%' is used for the wrong purpose. It seems like what you really are doing is comparing TblA.PropertyID to the parameter @PropertyID and in my opinion there should be no need of using wildcards here. Instead use the equals operator.
{ "domain": "codereview.stackexchange", "id": 5677, "tags": "sql, vb.net" }
Electric field inside a cavity
Question: What is the electric field in the cavity, if the conductor having the cavity is charged? Does the result depend on the shape and size of the cavity or conductor? The answer given in my book is "Zero" and "No", as the charge resides on the outer surface of a conductor. I am not satisfied with the answer: how would charge present at the surface make the electric field at any point inside zero? Could somebody give me a detailed answer and if possible prove it mathematically, too? Answer: To get a feel for this I will first introduce Earnshaw's theorem. Earnshaw's theorem states that, given a collection of static charges, it is impossible to trap another charge. As an example consider the following setup: Here I try to trap the middle charge by arranging 8 fixed charges in a circle. At first glance this seems like a stable configuration, right? Earnshaw's theorem tells us this is unstable: even the slightest nudge to the middle particle will make it escape towards infinity. So we conclude that by themselves, charges can't form a stable configuration. This is a bit handwavy but from that you can conclude that all the charge must reside on the boundary conductor, because the boundary stops the charges from escaping to infinity. So why is the electric field inside zero? We can use Gauss's law in combination with a useful property of the Laplace equation. Gauss's law tells us $$\nabla\cdot \vec E=\frac{\rho}{\epsilon_0}=0$$ The divergence of the electric field is zero because there are no charges inside the conductor. We can then plug in the electric potential (there are no magnetic fields so we are allowed to define a potential) to get the following: $$\vec E=-\nabla V\\\Rightarrow \nabla\cdot\nabla V=\nabla^2 V=0$$ This last equation is called the Laplace equation and it has many nice properties. One of them is that solutions can't have a maximum or a minimum in its domain, so maxima and minima have to be located on the boundary.
We can argue that the potential along the boundary has to be constant: the charges can't be inside the conductor but they can still move along the boundary. If the potential is not constant the charges will rearrange themselves until minimal energy is found. Let's call this potential $V_0$. Since the inside of the conductor can't be higher or lower than the boundary the entire inside must be equal to this constant potential. A constant potential means zero electric field: $$\vec E_{inside}=-\nabla V_0=0$$ An even more handwavy approach would be to say that any electric field inside the conductor means the charges will be accelerated, meaning the energy is not minimal. The charges will move until the electric field is zero. Sorry if this answer is a bit handwavy/messy but for a thorough answer you would need an entire course in electrostatics.
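The boundary-value argument can also be illustrated with a small finite-difference sketch (my illustration, not a proof): relaxing Laplace's equation on a grid whose boundary is held at a constant $V_0$ drives every interior point to that same $V_0$, so $\vec E = -\nabla V = 0$ inside:

```python
import numpy as np

# Solve nabla^2 V = 0 on a grid with a constant Dirichlet boundary (the
# conductor wall at potential V0), starting from an arbitrary interior guess.
np.random.seed(0)
V0 = 1.0
V = np.random.rand(30, 30)                      # arbitrary initial interior
V[0, :] = V[-1, :] = V[:, 0] = V[:, -1] = V0    # conductor boundary at V0

for _ in range(5000):                           # Jacobi relaxation
    V[1:-1, 1:-1] = 0.25 * (V[:-2, 1:-1] + V[2:, 1:-1]
                            + V[1:-1, :-2] + V[1:-1, 2:])

assert np.allclose(V, V0, atol=1e-6)            # interior = boundary constant
Ey, Ex = np.gradient(-V)                        # so E = -grad V vanishes
assert np.max(np.abs(Ex)) < 1e-5 and np.max(np.abs(Ey)) < 1e-5
```

This is just the maximum principle in action: with no interior extrema allowed, a constant boundary forces a constant (and hence field-free) interior.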
{ "domain": "physics.stackexchange", "id": 57333, "tags": "electrostatics, electric-fields" }
Work, energy and power of massive spring
Question: The question is: if a spring of unstretched length $L$ has mass $M$, with one end fixed to a rigid support, and is made of uniform wire, then what kinetic energy does it possess if the free end is pulled with constant velocity? I tried this question. If the free end is pulled with uniform velocity, that means there is no acceleration, and initially the spring of mass $M$ was at rest, so it has no kinetic energy initially; by virtue of the pulling of the free end it gains kinetic energy, so it might be $\frac12 mv^2$. But the answer given is $\frac 16 mv^2$. Where am I going wrong? Answer: The spring is being pulled only at one of its ends. So, not all of the spring's length will move at the same velocity $v$. For this you need to take an infinitesimally small segment of the spring $dx$ at a distance $x$ from the fixed end. $dm =\frac{M}{L} dx $ Now the velocity of each element will be proportional to its distance from the fixed end: $v'=\frac {vx}{L} $ The kinetic energy of this mass element will be given by $K.E.=\frac {1}{2}dmv'^2$ Integrating this mass element's $KE$ with proper limits, we will get the kinetic energy of the whole spring. $$K.E.=\int_0^L \frac {1}{2}dmv'^2 $$ Upon substituting the values you will get your answer. (This I leave to you!)
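Carrying the integral out numerically with made-up values of $M$, $L$ and $v$ (measuring $x$ from the fixed end, so the element speed is $v' = vx/L$) reproduces the $\frac16 Mv^2$ result:

```python
# Midpoint Riemann sum of KE = integral_0^L (1/2) (M/L) (v*x/L)^2 dx,
# with hypothetical values for the mass, length and end speed.
M, L, v, N = 2.0, 1.5, 3.0, 100000
dx = L / N
ke = sum(0.5 * (M / L) * dx * (v * (i + 0.5) * dx / L) ** 2 for i in range(N))

assert abs(ke - M * v ** 2 / 6) < 1e-6   # matches (1/6) M v^2
```

The $1/6$ rather than $1/2$ comes from the $x^2$ weighting: most of the spring moves slower than the pulled end, so the spring behaves as if only a third of its mass moved at the full speed $v$.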
{ "domain": "physics.stackexchange", "id": 47822, "tags": "homework-and-exercises, newtonian-mechanics, energy, spring, continuum-mechanics" }
How does ChatGPT respond to novel prompts and commands?
Question: So I understand how a language model could scan a large data set like the internet and produce text that mimicked the statistical properties of the input data, eg completing a sentence like "eggs are healthy because ...", or producing text that sounded like the works of a certain author. However, what I don't get about ChatGPT is that it seems to understand the commands it has been given, even if that command was not part of its training data, and can perform tasks totally separate from extrapolating more data from the given dataset. My (admittedly imperfect) understanding of machine learning doesn't really account for how such a model could follow novel instructions without having some kind of authentic understanding of the intentions of the writer, which ChatGPT seems not to have. A clear example: if I ask "write me a story about a cat who wants to be a dentist", I'm pretty sure there are zero examples of that in the training data, so even if it has a lot of training data, how does that help it produce an answer that makes novel combinations of the cat and dentist aspects? Eg: Despite his passion and talent, Max faced many challenges on his journey to become a dentist. For one thing, he was a cat, and most people didn't take him seriously when he told them about his dream. They laughed and told him that only humans could be dentists, and that he should just stick to chasing mice and napping in the sun. But Max refused to give up. He knew that he had what it takes to be a great dentist, and he was determined to prove everyone wrong. He started by offering his services to his feline friends, who were more than happy to let him work on their teeth. He cleaned and polished their fangs, and he even pulled a few pesky cavities. In the above text, the bot is writing things about a cat dentist that wouldn't be in any training data stories about cats or any training data stories about dentists. 
Similarly, how can any amount of training data on computer code generally help a language model debug novel code examples? If the system isn't actually accumulating conceptual understanding like a person would, what is it accumulating from training data that it is able to solve novel prompts? It doesn't seem possible to me that you could look at the linguistic content of many programs and come away with a function that could map queries to correct explanations unless you were actually modeling conceptual understanding. Does anyone have a way of understanding this at a high level for someone without extensive technical knowledge? Answer: I read Stephen Wolfram's piece explaining GPT which helped me a lot. I think what was most important was the idea that a Markov Model is fundamentally an unhelpful mental model for GPT type systems. While it is true that it works "like autocomplete", it is an autocomplete that is not based on simple probabilities between the sequences of words at all, but rather a system of hundreds of millions of programmatic neurons that organically adapted to find abstract patterns in text - not just patterns in word or letter frequencies, but patterns in which types of statement follow which other types, etc. In order to do this prediction, multiple distinct processing steps seem to have developed organically within the hundreds of millions of neurons used. Wolfram explains that this system for example seems to have derived a theory of natural language syntax empirically from the input data. At later stages, the system is probably doing analysis that we would consider to be "logical" or "conceptual", building on what the earlier language-processing steps accomplished. So, what I was missing was a sense of the size of the model, and the idea that real semantic processing beyond mere word-probabilities was occurring, and how this type of processing could emerge from a system that was trained on mere word-by-word prediction.
{ "domain": "ai.stackexchange", "id": 4018, "tags": "machine-learning, open-ai, chat-bots, natural-language-understanding, chatgpt" }
Is there a reason why operators are often written as exponentials in quantum optics?
Question: I have noticed in many quantum optics papers that operators are written as exponentials. Is there a reason for this beyond style or convention? For example, is it physically significant or more amenable to calculation? If the latter, what specific theorems make it better? As an example, in [1], they write the controlled phase flip gate as a matrix exponential $e^{i\pi P}$ where $P$ is a projection operator. The exponential of a projection operator has a simple Taylor expansion \begin{align} e^{i\alpha \hat{P}_{+}} &=I+(i\alpha) \hat{P}_{+} + \frac{(i\alpha)^{2}}{2}\hat{P}_{+}+\cdots \\& =I+\hat{P}_{+}(i\alpha + \frac{(i\alpha)^2}{2}+\cdots) \\&=I+\hat{P}_{+}(e^{i\alpha}-1) \end{align} which, from my understanding, you would generally need to perform first in order to use the operator for calculations. So why not just write it in the expanded form to begin with? Scalable Photonic Quantum Computation through Cavity-Assisted Interactions. L.-M. Duan and H. J. Kimble. Phys. Rev. Lett. 92, 127902 (2004), arXiv:quant-ph/0309187. Answer: If two operators are equal, $A=B$, then there is nothing wrong with using either form for the same object. With that in mind, the choice of form obeys a wide array of reasons depending on the context and what one wants to express, though generally speaking there is a large premium on notational compactness and conceptual clarity. So, along that vein, here are some relevant points to consider: In almost all cases, the exponential form is more compact, and it uses less ink. That, by itself, already puts it a step up from any alternative formulation. Exponentials of anti-hermitian operators are immediately recognizable as unitary. This is definitely at play in the example you mention: the form $\hat U=\exp(i\alpha \hat P)$ is obviously unitary, where on the other hand $\hat U = 1+\hat P(e^{i\alpha}-1)$ can be seen to be unitary through an easy but non-obvious calculation (i.e. 
you can't do it in your head without completely losing the thread of what the paper was talking about). Imaginary exponentials offer more conceptual clarity in that they are immediately recognizable as rotation operators. As an example, if $\hat \sigma$ is an involution (i.e. $\hat \sigma^2=1$, including all Pauli matrices) then $$e^{i\theta \hat \sigma} = \cos(\theta) + i \hat\sigma\sin(\theta),$$ but the former makes the rotation (say, on the Bloch sphere, around the eigen-axis of $\hat \sigma$) evident, while the latter is much more obscure. In that regard, operator exponentials act rather like special functions, which, as Michael Berry memorably put it, "enshrine sets of recognizable and communicable patterns and so constitute a common currency". By expressing an operator as an exponential, you're not just giving a specification of what operator you're talking about; you're also making a meaningful statement about how you're thinking about that operator. Broken-out forms are not necessarily easier to calculate and conceptualize, and they will often be harder (or harder to deal with in the conceptual frame the text is operating on). This is particularly the case if you're already operating on an eigenbasis of the operator $\hat A$ being exponentiated: if you're already working on a basis of eigenstates $\hat A|a\rangle = a|a\rangle$, then the exponential $$e^{i\theta \hat A}|a\rangle = e^{i\theta a}|a\rangle$$ is just the basis-vector-dependent number $e^{i\theta a}$. This is at play in the controlled-phase-flip unitary $$ U_\mathrm{CPF} = e^{i\pi |0⟩⟨0| \otimes |h⟩⟨h|}$$ in the example you mention: this is easily analyzed as $1$ if the control qubit is on $|1⟩$, and as the operator $e^{i\pi|h⟩⟨h|}$ if the control qubit is on $|0⟩$; moreover, it is also directly seen as the phase-flip $-1=e^{i\pi}$ for the target qubit on the state $|h⟩$ and as unity on its orthogonal complement $|\bar h⟩$.
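The projector identity quoted in the question is also easy to verify numerically; here is a sketch that sums the matrix Taylor series directly for a sample projector and compares it with the closed form:

```python
import numpy as np

# Check exp(i*alpha*P) = I + P*(exp(i*alpha) - 1) for a projector (P @ P = P)
# by summing the matrix Taylor series of the exponential directly.
alpha = 0.9
P = np.array([[1, 0], [0, 0]], dtype=complex)   # sample projector onto |0>

U = np.eye(2, dtype=complex)
term = np.eye(2, dtype=complex)
for n in range(1, 30):                          # Taylor series of expm
    term = term @ (1j * alpha * P) / n
    U = U + term

closed_form = np.eye(2) + P * (np.exp(1j * alpha) - 1)
assert np.allclose(U, closed_form)
assert np.allclose(U.conj().T @ U, np.eye(2))   # exp(i*alpha*P) is unitary
```

Note how the unitarity check takes an explicit computation for the broken-out form, while for $e^{i\alpha \hat P}$ it is visible at a glance, which is exactly the answer's point.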
{ "domain": "physics.stackexchange", "id": 47640, "tags": "quantum-mechanics, operators, quantum-optics" }
Torque due to applied force on a cord vs. torque due to gravity acting on weight attached to the cord
Question: A cord is wrapped around the rim of a solid cylinder of radius $0.25$ m, and a constant force of $40$ N is exerted on the cord, as shown in the following figure. The cylinder is mounted on frictionless bearings, and its moment of inertia is $6.0 \mathrm{~kg⋅m}^2$. (a) Use the work energy theorem to calculate the angular velocity of the cylinder after $5.0$ m of cord have been removed. (b) If the $40$-N force is replaced by a $40$-N weight, what is the angular velocity of the cylinder after 5.0 m of cord have unwound? The answer given is a. $ω = 8.2$ rad/s; b. $ω = 8.0$ rad/s In (a), I used $$\begin{align}W &= 40 \text{ N} \cdot 5\text{ m} = 200 \text{ J}\\ = \Delta K &= \frac{1}{2} I \omega^2 \\ &\implies \omega = 8.16 \text{ rad/s} \end{align}$$ The same result follows from solving $$\tau = Fr = I \alpha$$ for $\alpha$ and then $$\bar\alpha \, \Delta \theta = \frac{1}{2}\bar{\omega} ^2$$ (where $\Delta \theta = 5/0.25 = 20$ rad) for $\omega$. Why is the answer to (b) different? What is a $40$-N weight but a source of a constant $40$-N force? It occurred to me that perhaps in (b) we are supposed to account for gravity as it acts on the wheel itself, but the wheel's center of mass is on the axis of rotation, so the torque due to gravity is zero. Problem source: https://openstax.org/books/university-physics-volume-1/pages/10-challenge-problems (#125) Answer: You are forgetting that the weight attached will gain kinetic energy as well, taking away from the energy that the cylinder can have for rotation, and hence lowering the final angular velocity.
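This point can be made quantitative (my working, assuming $g = 9.8~\mathrm{m/s^2}$): in (b) the $200$ J of work splits between the cylinder's rotational KE and the falling weight's translational KE, with $v = \omega r$:

```python
import math

# W = (1/2) I omega^2 + (1/2) m (omega*r)^2, with m = weight / g.
W, I, r, g = 200.0, 6.0, 0.25, 9.8   # J, kg m^2, m, m/s^2
m = 40.0 / g                         # mass of the 40-N weight, ~4.08 kg
omega = math.sqrt(W / (0.5 * I + 0.5 * m * r ** 2))
print(round(omega, 1))               # 8.0 rad/s, slightly below the 8.2 of (a)
```

The correction is small here because $\frac12 m r^2 \approx 0.13~\mathrm{kg\,m^2}$ is tiny compared with $\frac12 I = 3~\mathrm{kg\,m^2}$, which is why (a) and (b) differ only in the last digit.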
{ "domain": "physics.stackexchange", "id": 67849, "tags": "homework-and-exercises, rotational-dynamics, rotational-kinematics, rotation" }
Duplicate SQL code in controller
Question: How I can rewrite or refactor my controller code? I have the same SQL query (@plan_gp_users) in all defs. class PlanGpsController < ApplicationController def index @search = PlanGp.joins(:user).search(params[:q]) @plan_gps = @search.result.order("created_at DESC").page(params[:page]).per(15) @plan_gp_users = User.where("users.ab_id = :abs_id AND users.id != :user_id AND is_admin IS NULL AND role != 'head'", {:abs_id => current_user.ab_id,:user_id =>current_user.id}) respond_to do |format| format.js format.html # index.html.erb format.json { render json: @plan_gps } end end def show @plan_gp = PlanGp.find(params[:id]) @plan_gp_users = User.where("users.ab_id = :abs_id AND users.id != :user_id AND is_admin IS NULL AND role != 'head'", {:abs_id => current_user.ab_id,:user_id =>current_user.id}) respond_to do |format| format.js format.html # show.html.erb format.json { render json: @plan_gp } end end # GET /plan_gps/new # GET /plan_gps/new.json def new @plan_gp = PlanGp.new # 3.times { render @plan_gp.user_id } # .joins("LEFT JOIN plan_gps ON plan_gps.user_id = users.id and strftime('%Y-%m','now') = strftime('%Y-%m',plan_gps.month)") AND plan_gps.user_id is null @plan_gp_users = User.where("users.ab_id = :abs_id AND users.id != :user_id AND is_admin IS NULL AND role != 'head'", {:abs_id => current_user.ab_id,:user_id =>current_user.id}) # raise @plan_gp_users.to_sql respond_to do |format| format.js format.html # new.html.erb format.json { render json: @plan_gp } end end # GET /plan_gps/1/edit def edit @plan_gp = PlanGp.find(params[:id]) @plan_gp_users = User.where("users.ab_id = :abs_id AND users.id != :user_id AND is_admin IS NULL AND role != 'head'", {:abs_id => current_user.ab_id,:user_id =>current_user.id}) end # POST /plan_gps # POST /plan_gps.json def create @plan_gp = PlanGp.new(params[:plan_gp]) @plan_gp_users = User.where("users.ab_id = :abs_id AND users.id != :user_id AND is_admin IS NULL AND role != 'head'", {:abs_id => current_user.ab_id,:user_id 
=>current_user.id}) User.where("id IN (:user_ids) AND role != :role", {:user_ids => params[:plan_gp]["user_id"],:role =>'head'}).select("id").map do|m| @plan_gp = PlanGp.new(params[:plan_gp]) @plan_gp.user_id = m.id @plan_gp.abs_id = current_user.ab_id if @plan_gp.save @plan_gp_save = true else @plan_gp_save = false end end @plan_gp.abs_id = current_user.ab_id respond_to do |format| if @plan_gp_save format.js format.html { redirect_to plan_gps_url } format.json { render json: @plan_gp, status: :created, location: @plan_gp } else format.js format.html { redirect_to plan_gps_url } format.json { render json: @plan_gp.errors, status: :unprocessable_entity } end end end # PUT /plan_gps/1 # PUT /plan_gps/1.json def update @plan_gp = PlanGp.find(params[:id]) @plan_gp_users = User.where("users.ab_id = :abs_id AND users.id != :user_id AND is_admin IS NULL AND role != 'head'", {:abs_id => current_user.ab_id,:user_id =>current_user.id}) respond_to do |format| if @plan_gp.update_attributes(params[:plan_gp]) format.js format.html { redirect_to @plan_gp, notice: 'Plan gp was successfully updated.' } format.json { head :no_content } else format.js format.html { render action: "edit" } format.json { render json: @plan_gp.errors, status: :unprocessable_entity } end end end # DELETE /plan_gps/1 # DELETE /plan_gps/1.json def destroy @plan_gp = PlanGp.find(params[:id]) @plan_gp.destroy respond_to do |format| format.js format.html { redirect_to plan_gps_url } format.json { head :no_content } end end end Answer: You should use a named Scope in your model class User < ... scope :by_ab_without_user, lambda do |ab_id, id| where(:ab_id => ab_id, :is_admin => nil). where("id != :user_id AND role != 'head'", { :user_id => id }) end ... end You can then use it from your controller with User.by_ab_without_user(current_user.ab_id, current_user.id) I'm not a big fan of writing another scope for each query which occurs in a controller though (which might end up only used once).
That just increases the noise in the model. So you should strive for orthogonal scopes which can be combined easily. For the above example you might do:

class User < ...
  scope :not_admin, where(:is_admin => nil)
  scope :not_head, where("role != 'head'")
  scope :not_user, lambda { |id| where("id != ?", id) }
  scope :by_ab, lambda { |ab_id| where(:ab_id => ab_id) }
  ...
end

Then you could use them like this:

User.not_admin.not_head.not_user(current_user.id).by_ab(current_user.ab_id)

Or if you need this particular combination often, you can define a specialized scope with them (yes, you can use other scopes in scope definitions). But of course it depends on the problem domain whether or not you need that flexibility, so it's probably best to start with specialized scopes and orthogonalize later when you see fit.

Another thing you should do is to put common action initialization code in a before filter:

class PlanGpsController < ApplicationController
  before_filter :init_plan_gp_users, :except => :destroy
  ...
  private

  def init_plan_gp_users
    @plan_gp_users = ...
  end
end

But you still should use named scopes, because they improve code reusability across controllers.
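To see how such orthogonal scopes compose, here is a plain-Ruby sketch (no Rails involved; the data and lambda names are made up for illustration) where each "scope" is just a lambda that filters a collection, chained the same way the ActiveRecord scopes above chain:

```ruby
# Plain-Ruby sketch of composable "scopes": each scope is a lambda over a
# collection, so they can be combined in any order, like the Rails scopes above.
users = [
  { id: 1, ab_id: 7, is_admin: nil,  role: 'head' },
  { id: 2, ab_id: 7, is_admin: nil,  role: 'member' },
  { id: 3, ab_id: 9, is_admin: true, role: 'member' },
]

not_admin = ->(us) { us.select { |u| u[:is_admin].nil? } }
not_head  = ->(us) { us.reject { |u| u[:role] == 'head' } }
by_ab     = ->(ab_id, us) { us.select { |u| u[:ab_id] == ab_id } }

# Chain the three filters, mirroring User.not_admin.not_head.by_ab(...)
result = by_ab.call(7, not_head.call(not_admin.call(users)))
puts result.map { |u| u[:id] }.inspect
```

The same composition idea is what makes the Rails version reusable: each scope stays small, and specialized combinations are built at the call site.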
{ "domain": "codereview.stackexchange", "id": 4973, "tags": "ruby, sql, ruby-on-rails, controller, active-record" }
Two processes communicating directly on a system remote to Roscore
Question: I have a remote system that is running two processes, basically a talker and a listener. These are running on a system that is connected via wifi to a desktop computer running roscore. The goal is to reduce the latency between the talker sending data and the listener receiving the data. Roscore needs to run on the desktop because there are several other remote nodes to communicate with. My assumption is that the talker is sending the data back to the desktop running roscore, via wifi, and then it is coming back to the listener node, again via wifi, which introduces a lot of latency. My question is how to ensure that the talker and listener are communicating as quickly as possible.

Originally posted by warcraft on ROS Answers with karma: 1 on 2016-02-25
Post score: 0

Answer: In ROS, nodes communicate directly; they do not pass topic data through the ROS core. The ROS core only acts as a name and parameter lookup service: it has the list of all nodes, the names of the topics they publish, the names of the available services, and the ROS parameters. If you want to confirm that two nodes are communicating directly, you can run rosnode info on them, to see which TCP sockets they have open, where those sockets are connected, and which topic is using each socket.

Originally posted by ahendrix with karma: 47576 on 2016-02-25
This answer was ACCEPTED on the original site
Post score: 1

Original comments
Comment by warcraft on 2016-02-25: Ok. Great. Thank you very much
{ "domain": "robotics.stackexchange", "id": 23910, "tags": "rosnode, roscore" }
jQuery function to display an alert when submitting blank fields
Question: I am new to jQuery and implemented the following function in a JS file to handle the error case:

function before_submit_check_if_has_blank_field() {
  $("[type=submit]").click(function(e) {
    var of_key_field = $('.key_field')
    var of_val_field = $('.val_field')
    var should_run = 1
    if (of_key_field.length === 1) {
      $('.key_field:not(:first)').filter(function() {
        if (this.value === "") {
          alert('Please Fill In Or Remove The Blank Fields First')
          e.preventDefault();
          should_run = 0
        }
      });
    } else {
      $('.key_field').filter(function() {
        if (this.value === "") {
          alert('Please Fill In Or Remove The Blank Fields First')
          e.preventDefault();
          should_run = 0
        }
      });
    }
    if (should_run != 0) {
      if (of_val_field.length === 1) {
        $('.val_field:not(:first)').filter(function() {
          if (this.value === "") {
            alert('Please Fill In Or Remove The Blank Fields First')
            e.preventDefault();
            should_run = 0
          }
        });
      } else {
        $('.val_field').filter(function() {
          if (this.value === "") {
            alert('Please Fill In Or Remove The Blank Fields First')
            e.preventDefault();
            should_run = 0
          }
        });
      }
    }
    update_config_field_from_key_value_field()
  });
}

The above code snippet is part of my jQuery code where I am displaying a layout to the user with some fields, and if those fields are empty, I show an alert message when saving the form. In the function above I am repeating the following alert-message block, which should be replaced by a single standard one:

if (this.value === "") {
  alert('Please Fill In Or Remove The Blank Fields First')
  e.preventDefault();
  should_run = 0
}

Can anyone please review my code and refactor it?

Answer: Your code can be refactored in multiple ways. To remove code repetition, create a new function for the repeated code and use it as the callback of each filter.
// wrap the check in a factory so the click event `e` stays in scope,
// since filter's own callback does not receive the click event
function someFunc(e) {
  return function () {
    if (this.value === "") {
      alert('Please Fill In Or Remove The Blank Fields First')
      e.preventDefault();
      should_run = 0
    }
  };
}

Use the above function as

$('.key_field:not(:first)').filter(someFunc(e));

This will automatically remove the redundant code as well as reduce the number of lines.

Suggestions: Instead of playing around with the DOM and getting the value of each field separately, try to use the serializeArray method of jQuery. See example. Please make a habit of using ; to terminate lines. Hope this is helpful.
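As a complement, the shared check can be sketched in plain JavaScript, independent of jQuery (the arrays below stand in for the field selections, and all names are made up for illustration):

```javascript
// Plain-JS sketch of the shared-validator idea: one small function checks
// any list of field values, so the alert logic lives in a single place
// instead of four near-identical copies.
function hasBlankField(values) {
  return values.some(function (v) { return v === ""; });
}

// Hypothetical usage with plain arrays standing in for jQuery selections:
var keyFields = ["api_key", ""];
var valFields = ["42"];

if (hasBlankField(keyFields) || hasBlankField(valFields)) {
  console.log("Please Fill In Or Remove The Blank Fields First");
}
```

In the real page, `values` would come from mapping over the `.key_field` / `.val_field` selections, and the `console.log` would be the `alert` plus `e.preventDefault()`.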
{ "domain": "codereview.stackexchange", "id": 31222, "tags": "javascript, beginner, jquery, validation, form" }
Fetching, processing, and storing Mixpanel analytics data to SQLite
Question: I'm a self-taught Python programmer and I never really learned the fundamentals of programming, so I want to see how to improve upon this script and make it adhere to best practices. The script has three functions that retrieve data from an API, cleanse the data and store it in a sqlite db. This script is going to run daily on a cron and append to the sqlite tables every morning.

get_data() fetches the data and turns it into a pandas dataframe. data_cleanse() removes some non-necessary data. send_to_db() sends the cleansed data to a sqlite db; there is one table for each of the event types. All of the functions are called in a for-loop which iterates through each of the event types.

I'm open to any suggestions on how to improve this, but here are some thoughts/questions that I have: Should this be a class? I have never used one before because I always found plain functions to be less confusing. Should I be using an if __name__ == "__main__":?

import pandas as pd
import json
from datetime import date, timedelta
from mixpanel_client_lib import Mixpanel
import sqlite3 as db

def get_data(start_date, end_date, event_name):
    con_data = Mixpanel(API_KEY, API_SECRET)
    data = con_data.request(['export'], {
        'event': [event_name],
        'from_date': start_date,
        'to_date': end_date
    })
    parameters = set()
    events = []
    for line in data.split('\n'):
        try:
            event = json.loads(line)
            ev = event['properties']
        except ValueError:
            continue
        parameters.update(ev.keys())
        events.append(ev)
    df = pd.DataFrame(events)
    return df, event_name

def data_cleanse(df, event_name):
    if event_name == "Video Played":
        df = df[['$ios_ifa', 'Groups', 'Lifetime Number of Sessions', 'Days Since Last Visit', 'time', 'Product ID', 'Time Watched', 'Video Length']]
        df.columns = ['ios_id', 'groups', 'lifetime_sessions', 'days_since', 'time', 'product_id', 'time_watched', 'video_length']
        print df['lifetime_sessions'].value_counts()
        df['groups'] = df['groups'].astype(str)
        # remove admin users from data
        idx = df['groups'].isin(['[u\'Admin-Personal\']', '[u\'Admin\']'])
        df = df[~idx]
        # remove '0' lifetime session users from data
        idx = df['lifetime_sessions'].isin([0])
        df = df[~idx]
        return df, event_name
    elif event_name == "Item Information Click" or 'Faved' or 'Add to Cart' or 'Tap to Replay':
        print df.columns.values
        df = df[['$ios_ifa', 'Groups', 'Lifetime Number of Sessions', 'Days Since Last Visit', 'time', 'Product ID']]
        df.columns = ['ios_id', 'groups', 'lifetime_sessions', 'days_since', 'time', 'product_id']
        df['groups'] = df['groups'].astype(str)
        # remove admin users from data
        idx = df['groups'].isin(['[u\'Admin-Personal\']', '[u\'Admin\']'])
        df = df[~idx]
        # remove '0' lifetime session users from data
        idx = df['lifetime_sessions'].isin([0])
        df = df[~idx]
        return df, event_name

def send_to_db(df, event_name):
    table_names = {
        'Video Played': 'video_played',
        'Item Information Click': 'item_info_click',
        'Faved': 'faved',
        'Add to Cart': 'add_to_cart',
        'Tap to Replay': 'replay'
    }
    con = db.connect('/code/vid_score/test.db')
    df.to_sql(table_names.get(event_name), con, flavor='sqlite', if_exists='append')
    con.close()

################
API_KEY = 'xxxxxxx'
API_SECRET = 'xxxxxxx'
event_types = ['Video Played', 'Item Information Click', 'Faved', 'Add to Cart', 'Tap to Replay']
end_date = date.today() - timedelta(1)
start_date = date.today() - timedelta(1)

for event in event_types:
    df, event_name = get_data(start_date, end_date, event)
    df, event_name = data_cleanse(df, event_name)
    send_to_db(df, event_name)

Answer: From a first look the code is nice enough, meaning you can follow the control flow easily and it's clear what each function does, so the separation of the three or four steps is rather good. This script is rather small and you don't pass around too much data; I'd say leave it as is unless you want to have some more reuse in other scripts. Yes please, do use __name__ if only for consistency. I'd also suggest the following to clean up anyway:

- Move constants to the top.
- Use the values from table_names instead of event_types so you don't repeat the keys all the time, so you end up with a constant EVENT_TYPES = {'Video Played': ...}. That way you can also iterate over the event types and table names simultaneously, eliminating the need to look up table names in send_to_db.
- I'd use timedelta with named arguments, so it's a bit clearer what timedelta(1) means, i.e. use timedelta(days=1) instead.
- The additional return value for event_name from get_data and data_cleanse doesn't make much sense to me. It's not like you transform the event name, so I'd just drop that altogether.
- The spurious print statements could be replaced by logging calls instead, but these look just like debugging statements anyway?
- The exception handling in get_data could be clearer; I've moved the access via 'properties' after the try block to make it clearer which operation can actually fail there.
- Database connections should probably be protected by with closing(...) just in case.
- parameters in get_data is unused.
- The cases in data_cleanse are duplicated and can be condensed.
- The comparison foo == 'x' or 'y' doesn't do what you mean. Compare 'a' == 'x' or 'y', which is 'y', with 'a' == 'x' or 'a' == 'y', which returns False. In any case, this comparison can be rewritten with x in (...) instead. You also miss the (currently impossible) else case; depending on your code I'd rather just have the default case and handle "Video Played" additionally, or raise an exception yourself.
- The line after # remove admin users from data looks fishy, but I don't really know how to improve it.
- And finally you can always check with flake8 and similar tools for style violations.
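The foo == 'x' or 'y' pitfall mentioned above can be seen concretely in a few lines of Python (the event name is made up for illustration):

```python
# Demo of the `x == 'a' or 'b'` pitfall: `or 'Faved'` is its own truthy
# operand, so the chained expression is truthy for ANY event name.
event = "Faved"

wrong = (event == "Item Information Click" or 'Faved' or 'Add to Cart')
print(bool(wrong))  # truthy regardless of what `event` is

# What was actually meant — a membership test:
right = event in ("Item Information Click", "Faved", "Add to Cart")
print(right)
```

With `event = "something else"` the first expression is still truthy (it evaluates to the string 'Faved'), while the membership test correctly returns False.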
All in all:

import pandas as pd
import json
from datetime import date, timedelta
from mixpanel_client_lib import Mixpanel
import sqlite3 as db
from contextlib import closing

API_KEY = 'xxxxxxx'
API_SECRET = 'xxxxxxx'
EVENT_TYPES = {
    'Video Played': 'video_played',
    'Item Information Click': 'item_info_click',
    'Faved': 'faved',
    'Add to Cart': 'add_to_cart',
    'Tap to Replay': 'replay'
}
DEFAULT_COLUMNS = [
    ('$ios_ifa', 'ios_id'),
    ('Groups', 'groups'),
    ('Lifetime Number of Sessions', 'lifetime_sessions'),
    ('Days Since Last Visit', 'days_since'),
    ('time', 'time'),
    ('Product ID', 'product_id'),
]
# list.extend returns None, so build the extended list with + instead
VIDEO_COLUMNS = DEFAULT_COLUMNS + [
    ('Time Watched', 'time_watched'),
    ('Video Length', 'video_length')
]

def get_data(start_date, end_date, event_name):
    con_data = Mixpanel(API_KEY, API_SECRET)
    data = con_data.request(['export'], {
        'event': [event_name],
        'from_date': start_date,
        'to_date': end_date
    })
    events = []
    for line in data.split('\n'):
        try:
            event = json.loads(line)
        except ValueError:
            continue
        events.append(event['properties'])
    return pd.DataFrame(events)

def data_cleanse(df, event_name):
    columns = DEFAULT_COLUMNS
    if event_name == "Video Played":
        columns = VIDEO_COLUMNS
    df = df[[c[0] for c in columns]]
    df.columns = [c[1] for c in columns]
    df['groups'] = df['groups'].astype(str)
    # remove admin users from data
    idx = df['groups'].isin(['[u\'Admin-Personal\']', '[u\'Admin\']'])
    df = df[~idx]
    # remove '0' lifetime session users from data
    idx = df['lifetime_sessions'].isin([0])
    df = df[~idx]
    return df

def send_to_db(df, table_name):
    with closing(db.connect('/code/vid_score/test.db')) as con:
        df.to_sql(table_name, con, flavor='sqlite', if_exists='append')

def main():
    end_date = date.today() - timedelta(days=1)
    start_date = end_date
    for (event_name, table_name) in EVENT_TYPES.iteritems():
        df = get_data(start_date, end_date, event_name)
        df = data_cleanse(df, event_name)
        send_to_db(df, table_name)

if __name__ == "__main__":
    main()
{ "domain": "codereview.stackexchange", "id": 13663, "tags": "python, json, sqlite, pandas" }
Can an incandescent light bulb be brighter at fixed power?
Question: I found that the luminous efficacy of a 60 W tungsten incandescent bulb is 14.5 lm/W. Is this value determined by the design details listed above, or does it depend on something else too? Can we design a 60 W tungsten incandescent bulb which is brighter? How do I calculate luminous efficacy?

Answer: The apparent brightness of an incandescent bulb is a very strong function of the temperature of the filament, because it behaves approximately like a black body. Thus, much of the power emitted will be in the IR. The black body spectrum for different temperatures can be found, for example, at Wikipedia. Note this is a visual representation of Planck's law. The vertical axis is logarithmic, so from 3000 K to 5777 K you get almost a 100x increase in power in the visible spectrum. The problem, of course, is that hotter filaments have shorter life. Making a filament survive even an extra 20 K or so of temperature over an equivalent life is a big deal in the incandescent bulb industry. The techniques for this are beyond the scope of this answer.

A simple thought experiment: take four light bulbs. If you just light one of them with your power source, you get a certain amount of light. If you now attach two of them in series, and make two pairs like that, you have four bulbs using the same total power. However, the four bulbs will be running at a lower temperature: each gets roughly a quarter of the power. We estimate the temperature drop from the Stefan-Boltzmann law, which states that power $\propto T^4$, and conclude that at a quarter of the power, the temperature drops to $\left(\frac14\right)^{1/4}$, or about 0.71x, of the original. This means that each filament is less effective, and the total visible light output will be smaller than it was with the single filament.

Update: You asked for a plot showing the normalized intensity. I did this for a range of values of temperature, with each plot normalized to the max and using a log-lin plot.
This shows that the black body spectrum is best centered in the 400-800 nm range for a temperature of around 5200 K - hotter than any known material can be without melting. For reference I also included a "filament" of 20,000 K - as you can see, the spectrum shifts almost entirely out of the visible range. If you are interested, the code to generate this (and with a little tweaking make others) is here:

# compute curves of Planck's law
import math
from scipy.constants import codata
import numpy as np
import matplotlib.pyplot as plt

D = codata.physical_constants
h = D['Planck constant'][0]
k = D['Boltzmann constant'][0]
c = D['speed of light in vacuum'][0]
s = D['Stefan-Boltzmann constant'][0]
pi = math.pi

def planck(T, l):
    p = c*h/(k*l*T)
    if (p > 700):
        return 0.0
    else:
        return (h*c*c)/(math.pow(l, 5.0) * (math.exp(c*h/(k*l*T))-1))

def SB(T, emissivity = 1.0):
    # Stefan-Boltzmann law to compute total power: not used
    return emissivity * s * math.pow(T, 4.0)

Tvec = np.array([1000, 3000, 3500, 5237, 20000])
Lvec = np.logspace(-8, -5, 1000)

plt.figure()
# create a semitransparent "rainbow plot" to show where visible range is:
plt.imshow(np.tile(np.linspace(0, 1, 100), (2, 1)), extent=[400, 800, 0, 1],
           aspect='auto', cmap='rainbow', alpha=0.4)
# compute Planck for a range of temperatures and wavelengths
for T in Tvec:
    r = []
    for l in Lvec:
        r.append(planck(T, l))
    plt.semilogx(Lvec*1e9, r/np.max(r), label='T=%d' % T)
plt.xlabel('lambda (nm)')
plt.legend(loc='upper left')
plt.show()
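Two quick numerical cross-checks on the scaling arguments used in this answer (a sketch I added, assuming an ideal black body; not part of the original post):

```python
# 1) Stefan-Boltzmann scaling: P ∝ T^4, so T_new/T_old = (P_new/P_old)^(1/4).
#    Even a large drop in per-filament power lowers the temperature only mildly.
for fraction in (0.5, 0.25):
    print("power x%.2f -> temperature x%.3f" % (fraction, fraction ** 0.25))

# 2) Wien's displacement law: the black-body peak sits at lambda_max = b / T,
#    confirming that ~5200 K peaks mid-visible while 20,000 K is deep in the UV.
b_wien = 2.898e-3  # Wien's displacement constant, m*K
for T in (3000, 5237, 20000):
    print("T = %5d K -> peak at %.0f nm" % (T, b_wien / T * 1e9))
```

The 3000 K filament peaks near 966 nm (infrared), which is why so much of an incandescent bulb's power is wasted outside the visible band.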
{ "domain": "physics.stackexchange", "id": 19942, "tags": "electricity, electrical-engineering, luminosity" }
Software for simulating supersonic aerodynamics
Question: Could you please suggest software where I can load my 3D model and see how it behaves under various conditions (speed - preferably including supersonic - temperature, pressure)? Both free and commercial options are of interest.

Answer: If you want something that looks like it is working and don't care much about the details, the standard solution is Fluent. The nearest open-source option is OpenFOAM.
{ "domain": "physics.stackexchange", "id": 481, "tags": "simulations, software" }
Exact formula in simplest terms for Magnetic field at a point outside a solenoid
Question: I need the COMPLETE formula for the magnetic field outside the solenoid. The situation I am stuck in is that I have to solve this question:

The magnetic field at the centre of a coil of $n$ turns, bent in the form of a square of side $2l$, carrying current $I$ is

Options:
(A) $\frac{\sqrt{2}\mu_0 nI}{\pi l}$
(B) $\frac{\sqrt{2}\mu_0 nI}{2\pi l}$
(C) $\frac{\sqrt{2}\mu_0 nI}{4\pi l}$
(D) $\frac{2\mu_0 nI}{\pi l}$

This is how I visualized it (I am really, really sorry for the poor drawing, but I couldn't find better software):

So I think of this as 4 circular coils/solenoids (I arbitrarily took the direction of current; since I am not asked to find the direction of the magnetic field, only the magnitude, this shouldn't matter), and I see that solenoids on opposite sides of the square have the same direction of magnetic field. Now I think of the formula for the magnetic field at a point outside the solenoid/circular coil as $B = \frac{\mu_0}{4\pi} \times \frac{NI}{R}$ where $N$ is the number of turns, $I$ is the current through the solenoid/circular coil and $R$ is the perpendicular distance of the point from the circular coil/solenoid. So when I apply this formula for each side of the square, I take $N = n/4$ and $R = l$:

$B = \frac{\mu_0 nI}{4\pi \cdot 4l} = \frac{\mu_0 nI}{16\pi l}$

Since opposite sides have the same direction of magnetic field, the resultant magnetic field is the magnitude of the vector sum of perpendicular vectors having magnitude $2B$. Since they are perpendicular, the magnitude is just

$\sqrt{(2B)^2 + (2B)^2} = \sqrt{2 \times (2B)^2} = \sqrt{2} \times 2B = \sqrt{2} \times \frac{\mu_0 nI}{8\pi l} = \frac{\sqrt{2}\mu_0 nI}{8\pi l}$

Now this is close to the options, but I fear that I am missing something in my formula for the magnetic field. Hence I request you to correct my formula.

NOTE: I only need the formula; please give me the formula, and not the means to derive it from Biot-Savart and Ampere's circuital law.
Derivations I will be doing 2 years later, when I have Biot-Savart and Ampere's circuital law in my curriculum.

NOTE 2: You could call this a duplicate of my question https://physics.stackexchange.com/questions/155119/a-few-questions-related-to-magnetic-fields but I request not to close this, as that has been put on hold, I am sure its visibility is damaged already, and I can't wait for busy moderators to reopen the question. Since I only need the formula (assuming that my approach to the question is correct), I request anyone with sufficient knowledge to either post the formula or correct my approach ASAP. Thanks in advance.

PLEASE, PLEASE, PLEASE do NOT CLOSE this question straightaway; if you have any objections, tell me what's amiss in the comments.

Answer: I think you brought up solenoids unnecessarily. Look up the magnetic field at the centre of a square loop and then apply it for a coil having n turns; it would look something like this. I will link you the derivation, hoping you understand something from it.

The magnetic field at the centre of a square loop is $\frac{\sqrt{2} \mu_{0} I}{\pi l}$, and for n turns it is $\frac{\sqrt{2} n\mu_{0} I}{\pi l}$.

Regarding the other questions you asked, I have given my view on them below.

When the helium nucleus makes a full rotation, we may consider it as a current loop, so the magnetic field at the centre of the loop is $$B = \frac{\mu_{0} nI}{2R}$$ with $n=1$ in this case and $I=\frac{q}{t} = \frac{2e}{2} = e$ (the charge of an electron), so $$B = \frac{\mu_{0} nI}{2R} = \frac{\mu_{0} e}{2R} = \frac{\mu_{0} \times 1.6 \times 10^{-19}}{2 \times 0.8} = \mu_{0} \times 10^{-19}$$

The force on a charge moving in a uniform magnetic field is given by $F_{b}=q(\vec{v}\times\vec{B})$. If the charge enters with $\vec{v}$ perpendicular to $\vec{B}$, then it will move in a circle of radius r.
The centripetal force on a particle is $F = \frac{mv^{2}}{r}$. Equating these two, we get $$\frac{mv^{2}}{r} = qvB \implies p = mv = Bqr \quad (1)$$ where $p$ is the momentum.

The kinetic energy of a particle is $\frac{1}{2}mv^{2}=\frac{m^{2}v^{2}}{2m}$, so $$K.E. = \frac{p^{2}}{2m} \implies p=\sqrt{2m\,K.E.} \quad (2)$$

Equating (1) and (2): $$Bqr =\sqrt{2m\,K.E.} \implies r=\frac{\sqrt{2m\,K.E.}}{Bq} \implies r \propto \frac{\sqrt{m}}{q}$$

With $m_{\alpha} = 4m_{p}$, $m_{D}=2m_{p}$, $q_{\alpha} = 2q_{p}$, $q_{D}=q_{p}$, it follows that $\mathbf{r_{D}>r_{\alpha}=r_{p}}$.

Ans: Option A
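For readers who do want to verify the quoted square-loop formula, here is a numerical Biot-Savart check (a sketch I added, not part of the original answer; a single turn, so multiply by n for a coil of n turns):

```python
import math

# Midpoint Biot-Savart sum over one straight side of a square loop of side 2l,
# evaluated at the centre; the four sides contribute equally by symmetry.
# Compared against the quoted analytic result B = sqrt(2)*mu0*I/(pi*l).
mu0 = 4 * math.pi * 1e-7          # vacuum permeability, T*m/A
current, half_side = 1.0, 0.05    # I = 1 A, l = 0.05 m (illustrative values)

segments = 20000
dx = 2 * half_side / segments
bz_one_side = 0.0
for k in range(segments):
    x = -half_side + (k + 0.5) * dx  # midpoint of segment k along the side
    # dB_z = (mu0 I / 4pi) * (dl x r_hat)/r^2 reduces to l*dx/(x^2+l^2)^(3/2)
    bz_one_side += (mu0 * current / (4 * math.pi)
                    * half_side * dx / (x * x + half_side * half_side) ** 1.5)

b_numeric = 4 * bz_one_side
b_analytic = math.sqrt(2) * mu0 * current / (math.pi * half_side)
print(b_numeric, b_analytic)
```

The two values agree to better than one part in ten thousand, confirming option (A)'s form.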
{ "domain": "physics.stackexchange", "id": 18627, "tags": "homework-and-exercises, electromagnetism" }
EA-Turbo simulation package
Question: I am working with the quantum turbo codes presented in this paper by Wilde, Hsieh and Babar, and it is claimed that a package to simulate such codes is available at ea-turbo. However, the hyperlink to that package appears to be broken, and so I have not been able to reach it. I would like to know if anyone knows another link to get the package, or, if anyone by chance has the package, whether they could share it so that other people can work with such codes more easily.

EDIT: It looks like a package containing the version used by the authors for a previous paper on the topic, where a simpler decoder is used, can be downloaded from the link presented in the answer by user1271772. I discovered this by reading the README file that the download contains. It would be very useful to know if newer versions can be obtained too (it looks like the broken link I was talking about in the question refers to a second version of the package).

Answer: This hyperlink works for me: https://storage.googleapis.com/google-code-archive-downloads/v2/code.google.com/ea-turbo/ea-turbo.zip That is a zip file which I was able to download.
{ "domain": "quantumcomputing.stackexchange", "id": 376, "tags": "error-correction, resource-request" }
Beginner Project: Bunny City
Question: I am a Java programmer, and I just recently started learning Scala for fun. I found a group of projects here, and I tried to do the graduation exercise. The problem is, my code looks a lot like Java, and I want to make this code more styled to fit Scala, though I'm not sure what that entails. (Making it more functional?)

This is my main game class:

object Game {
  def main(args: Array[String]): Unit = {
    val list: ListBuffer[Bunny] = ListBuffer.fill(5)(BunnyFactory.getInitial)
    while (list.size != 0) {
      var iterator = 0;
      // make sure we have one male for mating
      var males = 0;
      // keep track of females for mating
      var females: ListBuffer[Bunny] = new ListBuffer
      for (i <- list) {
        i.update(list)
        if (i.dead) {
          println(s"Bunny ${i.name} died at age ${i.age}!")
          list.remove(iterator)
        } else {
          if (i.gender == Gender.Male && i.adult) {
            males += 1;
          } else if (i.gender == Gender.Female && i.adult) {
            females += i;
          }
          println(i)
        }
        iterator += 1;
      }
      if (males >= 1) {
        for (i <- females) {
          val child = BunnyFactory.getChild(i);
          println(s"Bunny ${child.name} was born!")
          list += child
        }
      }
      println(s"There are ${list.size} bunnies!")
      Thread.sleep(500)
    }
  }
}

My Bunny class (I'm not sure if my variable naming follows conventions here):

class Bunny(inColor: Color.Value, inGender: Gender.Value, inAge: Int, inName: String, radioactive: Boolean) {
  private val _color = inColor;
  private val _gender = inGender;
  private val _name = inName;
  private var _radioactive = radioactive;
  private var _age = inAge;

  def gender = _gender
  def age = _age
  def name = _name
  def adult: Boolean = { _age >= 2 && !(_radioactive) }
  def color = _color

  def dead: Boolean = {
    if (_radioactive) {
      _age > 50;
    } else {
      _age > 10;
    }
  }

  def infect: Unit = { _radioactive = true }

  def update(list: ListBuffer[Bunny]): Unit = {
    _age += 1
    if (_radioactive) {
      list((Math.random() * list.length).toInt).infect
    }
  }

  override def toString(): String = {
    return s"${_color} ${_gender} Bunny ${_name}, ${_age} years old, Radioactive: ${_radioactive}"
  }
}
Bunny Factory class:

object BunnyFactory {
  def getInitial: Bunny = {
    return new Bunny(Color.getRandom, Gender.getRandom, 0, getName, false)
  }

  def getChild(mom: Bunny): Bunny = {
    val rand = (Math.random() * 100).toInt
    return new Bunny(mom.color, Gender.getRandom, 0, getName, rand <= 2)
  }

  val names: ListBuffer[String] = {
    val source = Source.fromFile("res/names.txt")
    val lines: List[String] = source.getLines.toList
    source.close
    lines.to[ListBuffer]
  }

  def getName: String = {
    val rand = (Math.random() * names.size).toInt
    return names(rand)
  }
}

Gender object:

object Gender extends Enumeration {
  type Gender = Value
  val Male, Female = Value

  def getRandom(): Gender = {
    val rand: Int = (Math.random() * values.size).toInt
    return this.apply(rand)
  }
}

Color object:

object Color extends Enumeration {
  type Color = Value
  val White, Brown, Black, Spotted = Value

  def getRandom(): Color = {
    val rand: Int = (Math.random() * values.size).toInt
    return this.apply(rand)
  }
}

Answer: Let me start with some suggestions to improve your code a little bit with regard to Scala/functional style. I will start with your Bunny class, with the points that caught my eye first, and I might edit my post later on to add additional points.

Class structure and constructors

Instead of going with

class Bunny(inColor: Color.Value, inGender: Gender.Value, inAge: Int, inName: String, radioactive: Boolean) {
  private val _color = inColor;
  private val _gender = inGender;
  private val _name = inName;
  private var _radioactive = radioactive;
  private var _age = inAge;

you can write it more concisely like this:

class Bunny(val color: Color.Value, val gender: Gender.Value, var age: Int, val name: String, var radioactive: Boolean) {

There is no need to declare them with other names inside the class. Notice, however, that public getters (and setters for vars) will be created automatically. Going private val color: Color.Value, on the other hand, will create a private getter.
Instead of creating the adult value on the fly (def adult: Boolean = { _age >= 2 && !(_radioactive) }) it may be better to make the main constructor accept an adult variable and to create a second constructor (which calls the main constructor) that does the logic above. One important benefit is that it will be easier to test when you can pass in the adult variable as is.

The getChild method in your BunnyFactory class seems to belong to your Bunny class. You could also consider making separate classes for male and female bunnies, because they behave differently although they are both bunnies. Traits could be a good option here to add the child-creating functionality only to the female bunny class.

Naming

In your Bunny class you have a method update(list: ListBuffer[Bunny]): Unit. I would name that differently, maybe infectNeighbours or similar. In your BunnyFactory class you could call the getInitial method createRandomBunny or so, because getInitial makes me think "huh? What does it do?". The same goes for getChild. In your main game class, don't use list but rather existingBunnies or something like that.

Shortcuts

In the toString method of your Bunny class you use return s"${_color} ${_gender} Bunny ${_name}, ${_age} years old, Radioactive: ${_radioactive}". The more Scala-like way is to omit the return; it is not needed, since the result of the last expression is used as the return value anyway. The same goes for most of the other returns.

Now to your main game class. I personally prefer using list methods instead of the for comprehension in most cases, so instead of for (i <- list) { I would use list.foreach. But first, what do we want? For example, do we really want our already dead bunnies to infect other bunnies before we remove them from the list?
Here comes the "functional thinking" into the game (assuming our var list is called existingBunnies):

// let's first get a list of bunnies that are still alive
val livingBunnies = existingBunnies.filter { bunny =>
  if (bunny.dead) println(s"Bunny ${bunny.name} is dead with an age of ${bunny.age}!")
  !bunny.dead
}

// Now create a list of newly born bunnies. This could be solved with val
// instead of var in a more purely functional style, but to me that would
// make it more complicated, so I stayed with the if condition.
var newBornBunnies = List.empty[Bunny] // We have no new born bunnies by default

// But if there is at least one male left, each adult female bunny will create a child
if (livingBunnies.count(_.gender == Gender.Male) >= 1) {
  newBornBunnies = livingBunnies
    .filter(b => b.adult && b.gender == Gender.Female)
    .map(_.createChild)
}

// We add the newly born bunnies to the existing living ones
existingBunnies = livingBunnies ++ newBornBunnies

// At the end, with our new list of existing bunnies, we let them infect each other
existingBunnies.foreach(_.infectNeighbours(existingBunnies))
{ "domain": "codereview.stackexchange", "id": 7888, "tags": "beginner, functional-programming, scala" }
Single Page portfolio website
Question: I have finished and launched my first website and would love some feedback/ideas for improvement. This is a single-page website for the purpose of displaying my finished projects. I know that I have used a lot of !important declarations to make things work properly. I have also used many media queries to have things be positioned where I want them. Any help is greatly appreciated. Thank you. Here is the website HTML:

<!DOCTYPE html>
<html >
<head>
  <!--Meta -->
  <title>Goode Development</title>
  <meta charset="utf-8">
  <meta name="viewport" content="width=device-width, initial-scale=1">
  <link rel="stylesheet" href="css/style.css">
  <link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/font-awesome/4.6.1/css/font-awesome.min.css">
  <link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.6/css/bootstrap.min.css">
  <link href='https://fonts.googleapis.com/css?family=Montserrat:700,400' rel='stylesheet' type='text/css'>
  <link href='https://fonts.googleapis.com/css?family=Roboto:300italic' rel='stylesheet' type='text/css'>
  <script src="https://code.jquery.com/jquery-2.2.4.min.js" integrity="sha256-BbhdlvQf/xTY9gja0Dq3HiwQF8LaCRTXxZKRutelT44=" crossorigin="anonymous"></script>
  <script src="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.6/js/bootstrap.min.js" integrity="sha384-0mSbJDEHialfmuBBQP6A4Qrprq5OVfW37PRR3j5ELqxss1yVqOtnepnHVP9aJ7xS" crossorigin="anonymous"></script>
  <script src="js/index.js"></script>
  <!--Meta -->
</head>
<body>
<!DOCTYPE html>
<html lang="en">
<!--Navbar-->
<body>
<div class ="wrappers">
  <div class="home">
    <nav class="navbar navbar-inverse navbar-fixed-top">
      <div class="container-fluid">
        <div class="navbar-header">
          <button id="btnCollapse" type="button" class="navbar-toggle" data-toggle="collapse" data-target="#myNavbar">
            <span class="icon-bar"></span>
            <span class="icon-bar"></span>
            <span class="icon-bar"></span>
          </button>
          <h1 class="navbar-brand">Goode Development</h1>
        </div>
        <div class="collapse navbar-collapse" id="myNavbar">
          <ul class="nav navbar-nav navbar-left">
            <li><a class="a" href="#homey">Home</a></li>
            <li><a class="a" href="#abou">About</a></li>
            <li><a class="a" href="#porty">Portfolio</a></li>
            <li><a class="a" href="#conty">Contact</a></li>
          </ul>
        </div>
      </div>
    </nav>
    <!--End Navbar-->
    <!--header -->
    <div id="intro">
      <a name="homey"> </a>
      <div class="jumbotron">
        <h1 class="text-center" id="head">Kyle Goode</h1>
        <p class="text-center" id="header">Full Stack Web Developer</p>
      </div>
    </div>
  </div>
  <!-- End header-->
  <!--End home-->
  <!-- About -->
  <a name="abou"> </a>
  <div class="wrapper" id="box">
    <div class="about">
      <h1 class="text-center">About</h1>
      <p class="text-center" id="myth">The man...The myth...The legend</p>
      <article>
        <div class="img-wrap">
          <img id="pic" src="http://i725.photobucket.com/albums/ww259/kgoode517/IMG_6189_zpsxtlvi4iq.jpg" alt="IMG_6189">
        </div>
        <p>
          Kyle Goode is a young, caffeine dependent, Nashville local. He went to school in a small town just west of Nashville, where he met and married his high school sweetheart. It is in that small town that Kyle's passion for technology began. Kyle has been amazed by technology and programming even as a child, from trying to read code and quickly realizing he couldn't read the long "words" it was making, to taking apart the family computers, much to the chagrin of his parents. Though his adult life has been spent in the medical field, Kyle is excited to begin transitioning his hobby and passion to his profession.
</p> </article> <div class="container" id="mine"> <h1 class="text-center" id="kyle">Kyle Goode-the man in bullet points</h1> <ul class="text-left" id="profile"> <li>Good Samaritan</li> <li>Prefers puns intended</li> <li>Especially gifted napper</li> <li>Devoted to both programming and Game of Thrones</li> <li>Codes for fun</li> <li>Making History</li> <li>Goes into survival mode if tickled</li> <li>Anxiously awaiting you to connect with him for projects.</li> </ul> </div> </div> <!--skills--> <div class="skills"> <h1 class="text-center">Skills</h1> <ul id="list"> <li>Html5</li> <li>CSS3</li> <li>Javascript</li> <li>Ruby</li> <li>Bootstrap</li> <li>Jquery</li> <li>Rails</li> <li>Git</li> </ul> </div> <!--end Skills--> <!--end About--> <!-- Begin Portfolio --> </div> <a name="porty"></a> <div class="portfolio"> <div class="container"> <h1 class="text-center">My Work</h1> <div class="row"> <div class="col-md-4"> <div class="thumbnail"> <div class="caption"> <h4 class="text-center">A Berine Sanders Tribute page made with HTML,CSS, and Bootstrap</h4> <a href="http://codepen.io/kgoode517/full/WwjNqp/"><i class="fa fa-codepen fa-5x fa-fw" id="first"></i> </a> </div> <img src="http://i725.photobucket.com/albums/ww259/kgoode517/Bernie_zps19tpxug7.png" class="img-responsive" alt="tribute"> </div> <legend>Tribute page</legend> </div> <div class="col-md-4"> <div class="thumbnail"> <div class="caption"> <h4 class="text-center">Google mockup page made with HTML,CSS,and Bootstrap</h4> <a href="http://codepen.io/kgoode517/full/yYddbb/"><i class="fa fa-codepen fa-5x fa-fw" id ="second"></i> </a> </div> <img src="http://i725.photobucket.com/albums/ww259/kgoode517/google_zpsxa3kemaw.png" class="img-responsive" alt="google"> </div> <legend>Google homepage</legend> </div> <div class="col-md-4"> <div class="thumbnail"> <div class="caption"> <h4 class="text-center">A cannon game made with HTML,CSS, and Javascript</h4> <a href="http://codepen.io/kgoode517/full/LNdxKE/"><i class="fa 
fa-codepen fa-5x fa-fw" id ="icon"></i> </a> </div> <img src="http://i725.photobucket.com/albums/ww259/kgoode517/cannon_zpsoqkqkueg.png" class="img-responsive" alt="cannon"> </div> <legend>Cannon game</legend> </div> </div> <!--/row --> </div> <!-- end container --> </div> <!-- End Portfolio--> <!-- Footer --> <div class="foot"> <h2 class="text-center">Contact me</h2> <div class="text-center"> <div class="icons"> <a name="conty"></a> <a href="https://www.facebook.com/goodedevelopment/" target="_blank"><i class="fa fa-facebook-official fa-5x fa-fw"></i> </a> <a href="https://github.com/kgoode517" target="_blank"><i class="fa fa-github-square fa-5x fa-fw"></i> </a> <a href="http://codepen.io/kgoode517/" target="_blank"><i class="fa fa-codepen fa-5x fa-fw"></i> </a> <a href="https://www.linkedin.com/in/kyle-goode-08b80b104" target="_blank"><i class="fa fa-linkedin-square fa-5x fa-fw"></i></a> <a href="mailto:Goodedevelopment@yahoo.com"><i class="fa fa-envelope fa-5x fa-fw"></i></a> </div> </div> <!--copyright --> <footer class="text-center"> <small> &reg Goode Development 2016. 
All Rights Reserved</small> </footer> </div> <!-- end copyright--> <!--End Footer--> </div> </body> </html> </body> </html> CSS: html, body { width: 100%; height: 100%; margin: 0px; padding: 0px; } .wrappers{ overflow-x: hidden; } /*Navbar*/ .navbar { height: 125px; } .navbar-brand { position: relative!important; left: 45px!important; bottom: 10px!important; font-size: 4em!important; color: white!important; font-family: "Montserrat", sans-serif !important; white-space:nowrap; } .nav.navbar-nav.navbar-left li a { color: white; position: relative; right: 475px; top: 66px; font-family: "Montserrat", sans-serif !important; } .nav.navbar-nav.navbar-left li a:hover { color: orange; } @media screen and (max-width: 990px) { .navbar-header { float: none !important; } .navbar-brand { left: 0 !important; } .nav li a { padding: 5px; margin-right: 50px; } .nav.navbar-nav.navbar-left li a { color: white; right: 0px; top: 0px; } } @media screen and (max-width: 767px) { .navbar-brand { font-size: 40px !important; position: relative !important; top: -20px !important; left: -10px !important; } .navbar { height: 70px; } .nav.navbar-nav.navbar-left li a { color: black; right: 0; top: 0; } .collapsing, .in { background-color: #222222; position: relative; top: -30px; } .collapsing ul li a, .in ul li a { color: white!important; } .collapsing ul li a:hover, .in ul li a:hover { color: orange!important; } } @media screen and (max-width: 568px) { .navbar-brand { font-size: 30px !important; position: relative !important; bottom: 20px !important; left: -3px !important; !important; } .navbar { height: 50px; } } @media screen and (max-width: 420px) { .navbar-brand { font-size: 25px !important; } } @media screen and (max-width: 370px) { .navbar-brand { font-size: 20px !important; } } @media screen and (max-width: 325px) { .navbar-brand { font-size: 15px !important; } } /*End Navbar*/ /*Home Page*/ .jumbotron { background: transparent !important; color: white !important; font-family: "Roboto", 
sans-serif !important; position: relative; top: 300px; } .jumbotron h1 { font-size: 6.5em !important; } .home { background: url(http://mrg.bz/VN5LDd) no-repeat center; background-size: cover !important; z-index: -1; height: 1000px; border-top: 1px solid black; border-bottom: 1px solid black; background-position: center center !important; } @media screen and (max-width: 768px) { .home { height: 1050px; } .jumbotron { top: 250px; } .jumbotron h1 { font-size: 5.5em !important; } } @media screen and (max-width: 700px){ .home{ background-size:cover !important; } .home { height: 800px; } } @media screen and (max-width: 568px) { .jumbotron h1 { font-size: 4.5em !important; } } @media screen and (max-width: 370px) { .jumbotron { top: 225px; } .home { height: 700px; } } @media screen and (max-width: 325px) { .jumbotron { top: 175px; } .jumbotron h1 { font-size: 3.5em !important; } .jumbotron p { font-size: 1em !important; } .home { height: 600px; } } /*End Home Page */ /*About Page*/ .about { background: linear-gradient(180deg, darkgrey, grey); background-size: cover; background-attachment: fixed; display: inline-block; width: 100%; height: 900px; font-size: 25px; font-family: "Roboto", sans-serif !important; word-wrap: break-word; } .about:after { clear: both; visibility: hidden; } .about h1 { font-size: 55px; font-family: "Montserrat" ,serif !important; position: relative; top: 15px; color: black; } #myth { color: black; padding-top: 15px; position: relative; left: 15px; font-size: 28px; } #pic { width: 255%; height: 255%; position: relative; top: 10px; right: 385px; } .img-wrap { float: left; height: 200px; width: 200px; position: relative; } .img-wrap img { width: 100%; } article { width: 40em; margin-left: auto; margin-right: auto; position: relative; top: 1em; left: 200px; color: black; } #mine { position: relative; left: 200px; } article p { overflow: hidden; font-size: 20px; } #kyle { font-size: 25px; position: relative; top: 35px; right: 71px; } #profile li { 
font-size: 22px; position: relative; top: 40px; left: 250px; color: black; } .skills { width: 100%; height: 225px; /*background-color: #dbdbdb*/ background: linear-gradient(180deg, white, #dbdbdb); white-space: normal; background-attachment: fixed; border-top: 1px solid black; border-bottom: 1px solid black; font-size: 30px; } .skills h1 { font-size: 55px; color: black; font-family: "Montserrat",serif !important; } .skills li { font-size: 25px; } #list li { margin-right: 2em; position: relative; bottom: -25px; color: black; font-family: 'Roboto',serif !important; } #list { display: flex; flex-wrap: wrap; flex-direction: row; justify-content: center; list-style-type: none; } @media screen and (max-width: 1450px) { article { width: 35em; } #mine { position: relative; left: 260px; } #pic { width: 270%; height: 270%; } } @media screen and (max-width: 1300px) { article { width: 30em; } #pic { width: 285%; height: 285%; position: relative; right: 415px; } #mine { position: relative; left: 320px; } #kyle { font-size: 22px; position: relative; right: 100px; } #profile li { font-size: 20px; } } @media screen and (max-width: 1200px) { .about h1 { font-size: 45px; } .skills h1 { font-size: 45px; } #myth { font-size: 25px; } #pic { width: 280%; height: 280%; position: relative; right: 425px; } article { width: 25em; } article p { font-size: 18px; position: relative; right: 45px; } #kyle { font-size: 20px; } #profile li { font-size: 18px; position: relative; left: 170px; } } @media screen and (max-width: 1100px) { article p { font-size: 16px; } #kyle { font-size: 18px; } #profile li { font-size: 16px; position: relative; left: 185px; } #pic { width: 230%; height: 230%; position: relative; right: 345px; top: 6px; } .about { height: 725px; } } @media screen and (max-width: 991px) { #profile { position: relative !important; left: -110px; } } @media screen and (max-width: 965px) { .img-wrap { float: none; } #pic { height: 110%; width: 110%; position: relative; right: 220px; top: 
150px; } article p { width: 46em; position: relative; bottom: 220px; right: 220px; } #kyle { position: relative; top: -190px; right: 175px; } #profile { position: relative; bottom: 230px; left: -185px; } .about { height: 600px; } .skills { position: relative; bottom: 120px; } } @media screen and (max-width: 815px) { article p { width: 40em; position: relative; right: 175px; } #pic { position: relative; right: 175px; top: 160px; } #kyle { position: relative; top: -195px; right: 195px; } .skills { position: relative; bottom: 140px; } #profile li { position: relative; left: 165px; top: 35px; } } @media screen and (max-width: 767px) { article p { width: 30em; position: relative; right: 20px; font-size: 15px; } .about { height: 655px; } #pic { position: relative; right: 225px; top: -15px; height: 90%; width: 90%; } #kyle { position: relative; right: 300px; } #profile { position: relative; left: -290px; } #list li { font-size: 20px; } } @media screen and (max-width: 695px) { article p { width: 40em; position: relative; bottom: 0px; right: 175px; padding-top:15px; padding-bottom:5px; } .about { height: 950px; } #pic { position: relative; right: -35px; height:100%; width: 100%; } #kyle { position: relative; top: 10px; } #profile { position: relative; top: -10px; left: -325px; } } @media screen and (max-width: 650px) { article p { width: 35em; right: 145px; } } @media screen and (max-width: 601px) { #pic { right: 0px; } } @media screen and (max-width: 615px) { article p { right: 160px; } #kyle { right: 300px; } #profile { left: -370px; } } @media screen and (max-width: 580px) { article p { right: 180px; } } @media screen and (max-width: 570px) { article p { width: 30em; right: 150px; } #kyle { right: 380px; } #profile { left: -470px; } .about { height: 1000px; } } @media screen and (max-width: 568px) { #myth { font-size: 22px; } } @media screen and (max-width: 560px) { #kyle { positon: fixed; right: 375px; } } @media screen and (max-width: 535px) { #pic { right: 20px; } } 
@media screen and (max-width: 515px) { article p { width: 25em; right: 135px; } .about { height: 1050px; } #kyle { right: 350px; } #pic { right: 45px; } } @media screen and (max-width: 475px) { #pic { right: 45px; } article p { right: 155px; } } @media screen and (max-width: 468px) { .about { height: 1025px; } #pic { right: 55px; } } @media screen and (max-width: 445px) { .about { height: 1100px; } } @media screen and (max-width: 440px) { article p { width: 22em; text-align: left; right: 150px; } } @media screen and (max-width: 425px) { #pic { right: 65px; } } @media screen and (max-width: 405px) { #kyle { right: 320px; } } @media screen and (max-width: 400px) { article p { right: 170px; } #pic { right: 85px; } #profile { left: -500px; } } @media screen and (max-width: 375px) { article p { right: 180px; } .about { height: 1150px; } #pic { right: 100px; } } @media screen and (max-width: 366px) { #myth { font-size: 20px; } #list li { bottom: 10px; } } @media screen and (max-width: 350px) { article p { width: 19em; } #pic { right: 105px; } .about { height: 1200px; } } @media screen and (max-width: 350px) @media screen and (max-width: 325px) { article p { width: 18em; } #pic { right: 125px; } } @media screen and (max-width: 330px) { #myth { left: -5px; } #pic { right: 130px; } } @media screen and (max-width: 315px) { #pic { right: 14px; } } @media screen and (max-width: 305px) { article p { width: 16em; } .about { height: 1250px; } #pic { right: 145px; } } @media screen and (max-width: 295px) { .about { height: 1300px; } } @media screen and (max-width: 275px) { #pic { right: 155px; } } /*End About*/ /*Portfolio*/ .portfolio { padding-top: 50px; padding-bottom: 100px; background-size: cover; background-attachment: fixed; display: inline-block; width: 100%; height: 700px; } .portfolio h1 { font-size: 55px; font-family: "Montserrat",serif !important; color: black !important; } .row { padding-top: 50px; } .thumbnail { height: 250px; display: flex; justify-content: center; 
align-items: center; position: relative; overflow: hidden; border: 1px solid black !important; } .caption img { flex-shrink: 0; max-width 100%; max-height: 100%; } .caption { position: absolute; top: 0; right: 0; background: rgba(128, 128, 128, 0.75); width: 100%; height: 100%; padding: 2%; display: none; text-align: center; color: #fff !important; z-index: 2; font-family: "Montserrat",serif !important; } .caption a { text-decoration: none; color: white; } .caption h4 { position: relative; left: 5px; bottom: 10px; font-size: 25px; } .caption i:hover { color: orange !important; } legend { font-family: "Montserrat",serif !important; border-bottom: none !important; color: black; } #first { position: relative !important; bottom: -35px !important; } #second { position: relative !important; bottom: -35px !important; } #icon { position: relative; top: 30px; } @media screen and (min-width:767px) and (max-width:1199px) { #first { position: relative; bottom: 20px; } #icon { top: 50px !important; } .caption h4 { font-size: 20px; } .caption .fa { font-size: 70px !important; } } @media screen and (min-width:767px) and (max-width:991px) { #first { position: relative; top: 50px; } .caption h4 { font-size: 30px; } .caption { font-size: 15px; } .caption i { position: relative; top: 50px; } #icon { position: relative; top: 65px !important; } .caption .fa { font-size: 50px; } } @media screen and (max-width: 965px) { .portfolio { margin-top: -150px !important; } } @media screen and (max-width:767px) { #first { position: relative; top: 25px; } .caption h4 { font-size: 30px; } .caption { font-size: 15px; } } @media screen and (max-width: 1200px) { .portfolio h1 { font-size: 45px; } } @media screen and (max-width: 965px) { .portfolio { margin-top: -150px !important; } } @media screen and (max-width: 454px){ .caption h4{ font-size:25px; } #first{ top:15px; } #second{ top:15px; } } @media screen and (max-width:400px) { #icon { position: relative; bottom: 20px; } } @media screen and 
(max-width:397px) { #icon { position: relative; top: 0px; } } @media screen and (max-width:390px) { #first{ top:5px; } } @media screen and (max-width:325px) { .caption .fa { font-size:60px; } } @media screen and (max-width:307px) { #first{ top:-5px; } } /*End Portfolio*/ /*Footer*/ .foot i { color: white !important; margin-right: 0.16em; width: 90px; } .icons i:hover { color: orange !important; position: relative; bottom: 5px !important; } .foot { width: 100%; background-color: black; color: white; padding-top: 20px; float: left; border-top: 1px solid black; } .foot h2 { font-family: 'Roboto', sans-serif !important; position: relative; bottom: 15px; } footer small { font-family: 'Roboto', sans-serif !important; } footer { background-color: black; color: white; } .icons { display: inline-block; padding-bottom: 10px; } .icons a { text-decoration: none !important; } @media screen and (max-width: 1200px) { .foot .fa { font-size: 50px; } .foot i { margin-right: -10; } } @media screen and (max-width: 992px) { .foot { margin-top: 100px !important; } } @media screen and (max-width: 530px) { .foot i { display: inline !important; font-size: 25px; } } @media screen and (max-width: 330px) { .foot i { font-size: 38px !important; } } /*End Footer*/ JavaScript: //Navbar $(".a").click(function() { if ($("#btnCollapse").css('display') != 'none') $("#btnCollapse").click(); }); //Captions $('.thumbnail').hover( function() { $(this).find('.caption').slideDown(250); //.fadeIn(250) }, function() { $(this).find('.caption').slideUp(250); //.fadeOut(205) } ); //Smooth Scroll $(function() { $('a[href*="#"]:not([href="#"])').click(function() { if (location.pathname.replace(/^\//,'') == this.pathname.replace(/^\//,'') && location.hostname == this.hostname) { var target = $(this.hash); target = target.length ? 
target : $('[name=' + this.hash.slice(1) +']'); if (target.length) { $('html, body').animate({ scrollTop: target.offset().top }, 1000); return false; } } }); }); Answer: Some review items:
- Your photo is stretched; the original is not 510 by 510, and it does not look right at all
- You have a <!DOCTYPE html> in your <body>, that's wrong
- You have a <html lang="en"> in your <body>, that's wrong
- You have a <body> in your <body>, that's Inceptional ;)
- I like your integrity values, I learned something new
- On the whole, use this: https://html5.validator.nu/?doc=http%3A%2F%2Fgoodedevelopment.com%2F Your site should pass the validation flawlessly
- You probably want to cache $("#btnCollapse") and $(this).find('.caption')
- You probably want to capture 250 in a single constant
- You are wrapping the last click handler in a $() call but not the first 2; I would put them all in the wrapper
- You have 3 lines of comments, but none of them cover the least obvious code: $('a[href*="#"]:not([href="#"])').click(function() { if (location.pathname.replace(/^\//,'') == this.pathname.replace(/^\//,'') && location.hostname == this.hostname) {
- I would probably replace that long if statement with a well-documented boolean-returning function
- As you know, the use of !important is not considered good form; you should get rid of it.
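Taking that last suggestion as an example, the hard-to-read condition could be pulled out into a named boolean helper along these lines (the function and variable names are only illustrative, not part of the site's code):

```javascript
// True when a clicked link targets an anchor on the page we are already on.
// `link` can be the raw <a> element (`this` inside the handler) and `loc`
// the global location object; both expose hostname and pathname.
function pointsToSamePageAnchor(link, loc) {
  return link.hostname === loc.hostname &&
         link.pathname.replace(/^\//, '') === loc.pathname.replace(/^\//, '');
}

// The smooth-scroll handler then reads naturally:
//   $('a[href*="#"]:not([href="#"])').click(function () {
//     if (!pointsToSamePageAnchor(this, location)) return;
//     ...
//   });

// Because the helper only touches plain properties, it can be sanity-checked
// outside the browser with stub objects:
const here = { hostname: 'example.com', pathname: '/index.html' };
console.log(pointsToSamePageAnchor({ hostname: 'example.com', pathname: 'index.html' }, here)); // true
console.log(pointsToSamePageAnchor({ hostname: 'other.com', pathname: '/index.html' }, here));  // false
```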
{ "domain": "codereview.stackexchange", "id": 20841, "tags": "javascript, html, css" }
What's the polarization outside a uniformly charged ball?
Question: Let a ball be charged with uniform charge density $\rho_0$. Equation 6.6 of Zangwill's Modern Electrodynamics gives $$P(r)=0, \quad r\notin V.$$ However, for simple dielectric matter, Equation 6.35 states that $$P=\epsilon_0 E$$ (taking $\chi=1$ for simplicity). (Another equation, Poisson's formula $E_p=-(P\cdot \nabla)\xi$, would lead to the same conclusion as Eq. 6.35, namely that $P\neq0$.) Thus, according to Equation 6.6, the polarization outside the sphere should be $0$, but according to Equation 6.35 it would be $\frac{\rho_0}{3}\frac{R^3}{r^3}\vec{r}$. What exactly is the polarization outside a uniformly charged ball? Answer: The dielectric susceptibility outside the ball is $\chi=0$ (if it's vacuum), so $P=\epsilon_0\chi E=0$ there.
{ "domain": "physics.stackexchange", "id": 67252, "tags": "homework-and-exercises, electromagnetism, polarization, dielectric" }
Does Earth's Gravitational pull increase with time?
Question: The gravitational field depends on mass ($g = \frac{GM}{r^2}$), and every year many cosmic objects like asteroids and tiny particles hit the Earth. So I think the mass of the Earth may be increasing by some microscopic amount. But does that increase the gravitational pull, or gravity, on Earth? And also, does it result in a decrease in the height of organisms in each century? I ask because some older people say that people in their time, or in ancient times, were taller and longer-lived. I am just curious whether this is the reason or not. Answer: Actually the opposite is true. Quoted from Gizmodo - Did You Know That Earth Is Getting Lighter Every Day?: Earth gains about 40,000 tonnes of dust every year, the remnants of the formation of the solar system, which are attracted by our gravity and become part of the matter in our planet. Our planet is actually made from all that starstuff. Earth's core loses energy over time. It's like a giant nuclear reactor that burns fuel. Less energy means less mass. 16 tonnes of that are gone every year. Not much. And here's the big mass loss: about 95,000 tonnes of hydrogen and 1,600 tonnes of helium escape Earth every year. They are too light for gravity to keep them around, so they get lost. Gone into space. So, summing all these effects mentioned above, you get a mass loss of 57,000 tonnes per year. But this is still much too small to have a measurable effect on gravity ($g = GM/r^2$) even after millions of years, because the mass of the earth is $M = 6\cdot 10^{24} \text{ kg}$ (i.e. 6,000,000,000,000,000,000,000 tonnes).
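To put a number on "much too small to measure": since $g = GM/r^2$ is proportional to $M$, the fractional change in surface gravity equals the fractional change in mass. A quick sketch using the figures quoted above (treating the 57,000 tonnes/year as constant, which is of course only an approximation):

```javascript
// Fractional change in g after a million years of losing ~57,000 t/year,
// against Earth's mass of ~6e24 kg. Since g is proportional to M,
// the relative change in g equals the relative change in M.
const M_EARTH = 6e24;         // kg
const LOSS_PER_YEAR = 5.7e7;  // kg/year (57,000 tonnes)
const YEARS = 1e6;            // a million years

const fractionalChange = (LOSS_PER_YEAR * YEARS) / M_EARTH;
console.log(fractionalChange); // ~9.5e-12, far below anything measurable
```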
{ "domain": "physics.stackexchange", "id": 67978, "tags": "newtonian-gravity, mass, acceleration, earth, estimation" }
Husky model in Gazebo
Question: I am trying to run the Clearpath Husky model in Gazebo. The URDF and launch files are working fine and I can spawn the model, but it is crushing my high end system. Just got off the phone with Clearpath and they are aware of the excessive fidelity of their model and looking into simplifying. Just looking for a comparison, what kind of "% of real time" and FPS are people seeing when spawning this model in empty_world? I am getting about 0.6-0.7 but only 1 or 2 fps. Running Ubuntu 11.10 w/electric, 16GB mem, SSD, GTX670 w/xconfig, ivy bridge i7. Thanks. Originally posted by dabigshue on ROS Answers with karma: 11 on 2012-10-15 Post score: 1 Original comments Comment by Ryan on 2012-10-15: Odd. Checking on my end, I'm getting 0.5x real time, 20 fps. Configured for 11.10/electric, 32 GB RAM, i7-2760QM (Sandy Bridge), nVidia Quadro 2000M set to be always-on (Optimus). By all rights, your machine should shatter my numbers. Answer: For anyone who is interested, this has been improved (currently in our stack on github, en-route to the builds). I am seeing a 300% real-time number with a Husky in an empty world on my machine. Originally posted by Ryan with karma: 3248 on 2013-02-19 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 11382, "tags": "ros, gazebo, clearpath, husky, model" }
What's the difference between a free chromosome fragment and an extrachromosomal array?
Question: This is in reference to a review on C. elegans mosaic analysis by Yochem and Herman, in which the authors make a distinction between free chromosome fragments and extrachromosomal arrays. For the former, they reference Herman 1984. For the latter, they reference Lackner et al 1994 and Miller et al 1996, but these are studies that use extrachromosomal arrays, not papers defining them as a technique, per se. I am curious to know what the distinction is - I am under the impression that extrachromosomal arrays are fragments with many copies of the gene of interest + a marker. Answer: I asked my professor, and the answer appears to be that they differ in both their generation and the final product. Free chromosome fragments are created through irradiation/other damage of the germline in one animal. Through a series of crosses, it is possible to introduce individual fragments (containing a duplication of your gene of interest, as well as a marker) into a mutant background. The extrachromosomal fragment will be lost during mitosis at various points in development, and in a mosaic analysis, you can look for the loss of your marker (and therefore the loss of the wildtype allele), and see if yfg (your favorite gene) product is required in certain cell types for a wildtype phenotype. Extrachromosomal arrays are created by cloning, and you typically have many copies of both the gene and marker. It's the same principle as free chromosomal fragments in terms of mosaic analysis, but it's very powerful, as you can easily clone individual genes (versus irradiating and hoping to discover yfg on a fragment) and markers (and now you can use non-endogenous markers like GFP). Even though extrachromosomal arrays are comparatively easier to generate, the problem is that you end up with many copies of the gene, which leads to non-endogenous dosage levels.
Even if this doesn't cause a completely different mutant phenotype in the cells, it is possible that enough of these proteins could segregate at mitosis into the daughter cells, such that even if the fragment is lost, you could still have your gene product around in the daughter cells (if your marker protein didn't also segregate like this, you might draw erroneous conclusions about the necessity of yfg product in those particular cells).
{ "domain": "biology.stackexchange", "id": 900, "tags": "genetics, chromosome, lab-techniques, development" }
Control theory - overshoot max
Question: So, during my last class the teacher asked if we could go from $$MO = e^{-\pi \zeta / \sqrt{1-\zeta^2}}$$ to $$\zeta = \frac{-\ln(MO)}{\sqrt{\pi^2+\ln^2(MO)}}$$ where $MO$ is the maximum overshoot, $\zeta$ is the damping ratio (zeta), and $e$ is the exponential. Does anyone understand what he meant by that? Did I just misunderstand? Tried a few times but I just can't make sense of how to go from the MO equation to the zeta one. Answer: It would be good to see what you tried and I'm sure your teacher would too, but does this make it any clearer? $$ MO = e^{-\pi \frac{\zeta}{\sqrt{1-\zeta^2}}} $$ Take the natural logarithm of both sides: $$ \ln(MO) = -\pi\frac{\zeta}{\sqrt{1-\zeta^2}} $$ Square both sides and multiply through by $1-\zeta^2$: $$ \ln(MO)^2\,(1-\zeta^2) = \pi^2\zeta^2 \quad\Longrightarrow\quad \ln(MO)^2 = (\ln(MO)^2 +\pi^2)\zeta^2 $$ $$ \zeta^2 = \frac{\ln(MO)^2}{\ln(MO)^2 +\pi^2} $$ Taking the positive square root (note that $\ln(MO) < 0$ for $0 < MO < 1$, so $-\ln(MO) > 0$) gives exactly the second formula.
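A quick numerical round trip confirms the two formulas are inverses of each other (the value ζ = 0.5 is just an arbitrary test point):

```javascript
// Forward: damping ratio -> max overshoot; backward: recover the damping ratio.
const zeta = 0.5;
const MO = Math.exp(-Math.PI * zeta / Math.sqrt(1 - zeta * zeta)); // ~0.163, i.e. about 16% overshoot

const lnMO = Math.log(MO);
const zetaBack = -lnMO / Math.sqrt(Math.PI * Math.PI + lnMO * lnMO);
console.log(zetaBack); // 0.5 up to rounding error
```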
{ "domain": "engineering.stackexchange", "id": 4966, "tags": "electrical-engineering" }
Reaction force in electron spin measurements
Question: Consider the following (thought) experiment, where an electron is emitted, then deflected by a magnetic field, and then detected: Because the momentum of the electron changes when it gets deflected, it seems intuitively clear that there should be a reaction force on the magnet, which could in principle be detected. But now consider the following modified setup: Here electrons are emitted, with their spin aligned in some arbitrary direction. The first Stern-Gerlach magnet, A, can deflect electrons either up or down. B and C deflect the paths of the two beams, and the final magnet, D, combines them back into a single beam, effectively reversing the action of A. This system of magnets is followed by a detector, which can measure the spin of the electron in the $z$ direction, perpendicular to the plane of the rest of the diagram. According to my understanding of quantum mechanics, the measurements from this detector should be the same as if the magnets were not there. That is, if the electrons are emitted with their spins always pointing in the $z$ direction, the detector should also find that their spins are always 'up' in the $z$ direction. The problem is that if we can measure the reaction force on any of the four deflecting magnets then we can work out which of the two paths the electron took, and thus we would have measured its spin in two directions simultaneously. Clearly this isn't possible, so where have I gone wrong? Is there a reaction force on the magnets? If there isn't, what happened to the conservation of momentum? Is it just that the reaction force does exist but we can't measure it? (And if so, what prevents us from doing so?) Or have I just made a mistake in reasoning about how this setup would behave? Answer: Yes, you may arrange the magnets in such a way that the splitting of the original beam is "undone". The two parts of the wave functions "reinterfere". 
The reason why you can't measure both $j_y$ and $j_z$ is pretty much the same reason as the reason why you can't see the interference pattern as well as the "which slit" information in a double slit experiment and your setup may actually be closer to the thought experiments that were intensely discussed during the Bohr-Einstein debates etc. Conceptually, however, the two situations are isomorphic. The uncertainty principle and/or the destructive effect of your apparatus will always prevent you from learning $j_y,j_z$ at the same moment. If you want to measure the impulse $\Delta p$ transferred from the electron to the magnets and see it's there, it implies that you must know the magnets' initial momentum with precision $\Delta p$ or better (or something of the same order). The uncertainty principle then guarantees that the vertical position of the magnets is undetermined, with $\Delta x\geq \hbar / \Delta p$. The electrons also get a kick $\Delta p$ from the magnets – by the momentum conservation, it's the same $\Delta p$ as above, the kick to the magnets – and because you want to be able to distinguish the two beams, you must have $\Delta x\leq \hbar/\Delta p$ for the electrons. That directly contradicts the previous condition. So you either make the split beams focused enough to the two magnets in the middle i.e. small enough $\Delta x$ or you will have a small enough $\Delta p$ which is needed to measure the change of the momentum of the magnets but you can't have both, by the uncertainty principle. You could also invent a more invasive way to measure which path the electron took. But to claim that your apparatus preserved the original state, especially the relative phase of "up" and "down" that is needed to predict the probabilities of various values of spin $j_z$ (which you untypically take orthogonal to the screen), you need the position of the magnets to be non-intrusive enough. 
Again, in analogy with the double slit experiment, you will find out that if you can find out about the path or momentum of the electrons, you will modify the electrons' wave functions so that the relative phase will be modified and the results for $j_z$ will be different than if the apparatus weren't there in the middle. You will change the rules of the game.
{ "domain": "physics.stackexchange", "id": 6590, "tags": "quantum-mechanics, quantum-spin" }
Energy-Momentum-Tensor of classical electrodynamics is conserved
Question: I want to check whether the energy-momentum tensor of classical electrodynamics with the Lagrangian \begin{align} L = -\frac{1}{4}F_{\mu\nu}F^{\mu\nu} \end{align} is conserved. The energy-momentum tensor obtained from Noether's theorem is \begin{align} T^{\alpha\beta} = -\frac{1}{4}\eta^{\alpha\beta}F_{\mu\nu}F^{\mu\nu}+F^{\alpha\sigma}\partial^\beta A_\sigma \end{align} It seems like an easy exercise, but I can't show that indeed $\partial_\alpha T^{\alpha\beta} = 0$. My idea was to use the Euler-Lagrange equation $\partial_\mu F^{\mu\nu} = 0$, which leads to \begin{align} \partial_\alpha T^{\alpha\beta} = F^{\alpha\sigma}\partial_\alpha\partial^\beta A_\sigma \end{align} but I can't see why this should vanish. Answer: You actually missed a serious term. You assumed that $\partial_\alpha \left(F_{\mu\nu}F^{\mu\nu}\right)=0$, but it is not, because $F_{\mu\nu}F^{\mu\nu}$ is a sum over contracted indices that still depends on the spacetime point; you were misled by the index notation! So here is a sketch of the proof (also note that, for ease of writing, it is better to work with $T^\alpha_{\;\beta}$ instead of $T_{\alpha\beta}$ or $T^{\alpha\beta}$; let me do the most general proof, in which you may set $J^\mu=0$ at the end).
At first, from your Lagrangian, arrive at the following form of the stress-energy tensor (and then follow the rest of the argument accordingly): $T^\alpha_{\;\beta} = \dfrac{1}{4\pi}\left(F^{\alpha\gamma}F_{\gamma\beta}+\dfrac{1}{4}\delta^{\alpha}_{\beta}F_{\gamma\delta}F^{\gamma\delta}\right)$ along with the inhomogeneous Maxwell equations $\partial_\mu F^{\mu\nu} = 4\pi J^\nu$. Differentiating, $$\partial_\alpha T^\alpha_{\;\beta} = \dfrac{1}{4\pi}\left[\left(\partial_\alpha F^{\alpha\gamma}\right)F_{\gamma\beta} + F^{\alpha\gamma}\left(\partial_\alpha F_{\gamma\beta}\right) + \dfrac{1}{4}\times 2~F^{\gamma\delta}\left(\partial_\beta F_{\gamma\delta}\right)\right]$$ The first term equals $4\pi J^\gamma F_{\gamma\beta}$ by the Maxwell equations. For the second term, use the antisymmetry of $F^{\alpha\gamma}$ and the Bianchi identity $\partial_{[\alpha}F_{\beta\gamma]} = \partial_\alpha F_{\beta\gamma} + \partial_\beta F_{\gamma\alpha} + \partial_\gamma F_{\alpha\beta} = 0$: $$F^{\alpha\gamma}\partial_\alpha F_{\gamma\beta} = \dfrac{1}{2}F^{\alpha\gamma}\left(\partial_\alpha F_{\gamma\beta} + \partial_\gamma F_{\beta\alpha}\right) = -\dfrac{1}{2}F^{\alpha\gamma}\partial_\beta F_{\alpha\gamma},$$ which exactly cancels the third term (if this is unclear to you, you may check it!). Finally we obtain $\partial_\alpha T^{\alpha}_{\;\beta} = J^\gamma F_{\gamma\beta} = -F_{\beta\gamma}J^{\gamma}$ ; This is the most general form of the desired conservation law of the electromagnetic stress-energy tensor (you can always convert this to $T^{\alpha\beta}$ or $T_{\alpha\beta}$ by contracting with the metric tensor $\eta_{\mu\nu}$). In vacuum, i.e. for $J^\alpha=0$, the conservation equation reduces to: $\partial_\alpha T^{\alpha}_{\;\beta} = 0$ . For more references, you may have a look at the famous as well as legendary book by Thanu Padmanabhan, "Gravitation: Foundations and Frontiers". The entire proof, along with the related concepts, is well described there.
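As an independent sanity check (not part of the original answer), the vacuum case can be verified numerically. The sketch below takes the canonical tensor exactly as written in the question, $T^{\alpha\beta} = -\frac14\eta^{\alpha\beta}F_{\mu\nu}F^{\mu\nu} + F^{\alpha\sigma}\partial^\beta A_\sigma$, plugs in a vacuum plane-wave solution $A_\mu = (0, 0, \cos\omega(t-x), 0)$ with $c=1$ (my choice of test field, not from the question), and checks $\partial_\alpha T^{\alpha\beta}\approx 0$ with central finite differences:

```javascript
// Finite-difference check that the canonical tensor from the question is
// conserved for a vacuum plane wave. "Zero" here means zero up to the
// O(h^2) truncation error of the central differences.
const eta = [1, -1, -1, -1]; // diagonal Minkowski metric, signature (+,-,-,-)
const w = 2.0;               // wave angular frequency (arbitrary)
const h = 1e-4;              // finite-difference step

// A_mu = (0, 0, cos(w(t - x)), 0): a transverse wave moving in +x (c = 1)
const A = (mu, x) => (mu === 2 ? Math.cos(w * (x[0] - x[1])) : 0);

// central difference d_mu f at the point x
function d(mu, f, x) {
  const xp = x.slice(); xp[mu] += h;
  const xm = x.slice(); xm[mu] -= h;
  return (f(xp) - f(xm)) / (2 * h);
}

const Fdn = (m, n, x) => d(m, p => A(n, p), x) - d(n, p => A(m, p), x); // F_{mn}
const Fup = (m, n, x) => eta[m] * eta[n] * Fdn(m, n, x);                // F^{mn}

// T^{ab} = -1/4 eta^{ab} F_{mn}F^{mn} + F^{as} d^b A_s
function T(a, b, x) {
  let FF = 0;
  for (let m = 0; m < 4; m++)
    for (let n = 0; n < 4; n++) FF += Fdn(m, n, x) * Fup(m, n, x);
  let second = 0;
  for (let s = 0; s < 4; s++)
    second += Fup(a, s, x) * eta[b] * d(b, p => A(s, p), x); // d^b = eta^{bb} d_b
  return -0.25 * (a === b ? eta[a] : 0) * FF + second;
}

// divergence d_a T^{ab} at an arbitrary spacetime point, for each b
const x0 = [0.3, 0.2, -0.1, 0.5];
const div = [0, 1, 2, 3].map(b =>
  [0, 1, 2, 3].reduce((sum, a) => sum + d(a, p => T(a, b, p), x0), 0));
const worst = Math.max(...div.map(Math.abs));
console.log(worst); // ~0, limited only by finite-difference error
```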
{ "domain": "physics.stackexchange", "id": 97488, "tags": "homework-and-exercises, electromagnetism, lagrangian-formalism, stress-energy-momentum-tensor, noethers-theorem" }
Experiment Prediction: How much light can pass through an opening?
Question: The flow of current through a wire is limited by the size of the wire. Water through a pipe is limited by the size of the pipe. What about light? Is it limited in a similar way? Let's say I drill a 1,000 nm hole in a piece of 1/4 inch steel. I then shine a 5 mW red laser through the hole. Does the hole restrict how much light can pass through it at any given moment? Let's say I increase the light through the hole by adding more lasers. Will the hole slowly start to restrict the light's passage? Is there a limit to how much light can "fit" through the hole at the same time? Example experiment: I shine the laser through the hole. I then measure the light with a photodiode and, let's say, I get 1 volt. Now I increase the amount of light through the hole by adding another laser of the same size. Would I now see 2 volts? What if I do this again? Would I see 3 volts? Would each additional laser produce the same gain as the one before it? I understand the lasers would all have to be angled toward the hole by the same amount to actually perform this test. I also understand I may need to perfect the measuring device, but I think you get the point. Answer: Light consists of photons, which are bosons. Bosons like to exist in the same state. So if you can manage to let one photon pass through a specific hole, then you can pass an arbitrary number of identical photons through that same hole. In other words, the size of the hole does not put any limit on the number of photons (or the amount of light) that can pass through that hole.
{ "domain": "physics.stackexchange", "id": 94280, "tags": "electromagnetic-radiation, visible-light, experimental-physics" }
Disk scheduling algorithm - SCAN and C-SCAN
Question: I'm studying several disk scheduling algorithms; however, I'm kinda confused with SCAN. I understand the concept, but I don't understand the example given in the book. I think that the author made a mistake, but I want to be sure. The example is the following: Description of SCAN: The disk arm starts at one end of the disk, and moves toward the other end, servicing requests until it gets to the other end of the disk, where the head movement is reversed and servicing continues. Suppose that the cylinder range goes from 0 to 199 and you have the following queue: 98, 183, 37, 122, 14, 124, 65, 67 Head pointer: 53 As seen in the image, the total head movement is 208 cylinders. I don't understand why 208 cylinders. If I understand correctly, the head will start at 53 and then move to 0 (that's 53 cylinders); then the head will move up, and the last request serviced is at 183 (that's another 183 cylinders). So the total head movement should be 53 + 183 = 236. The book also has an image and description of C-SCAN: C-SCAN moves the head from one end of the disk to the other, servicing requests along the way. When the head reaches the other end, however, it immediately returns to the beginning of the disk without servicing any requests on the return trip. The C-SCAN scheduling algorithm essentially treats the cylinders as a circular list that wraps around from the final cylinder to the first one. Using the same example, with C-SCAN the head would start at 53, then move up to 199 (146 cylinder movements), then go to 0 (199 movements), and then the last request would be at 37 (37 cylinder movements). The total cylinder movement would be 146 + 199 + 37 = 382. My question here is: do we also have to count the cylinders traversed when the head goes back to 0, since it isn't servicing any requests? Note: This information was extracted from Operating System Concepts with Java - 8th Edition by Silberschatz, Galvin, Gagne. You can also find this part of the chapter here.
Answer: Here are examples of various disk scheduling algorithms. Yes, you are correct in both cases; the author has made a mistake. For SCAN, the answer should be 236. For C-SCAN, the answer should be 146 + 37 = 183. (Note: head movement while the arm is not servicing any requests is not counted.) Refer to the link; it gives a clear picture of how the different kinds of disk scheduling algorithms are used in practice.
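As a sanity check, the two corrected totals can be reproduced with a few lines of Python (a minimal sketch assuming the book's setup: for SCAN the head first moves toward cylinder 0 and reverses only at the end of the disk, and for C-SCAN the return seek is not counted):

```python
def scan_total(queue, head):
    """SCAN as drawn in the book: sweep down to cylinder 0, reverse there,
    then sweep up to the farthest pending request."""
    return head + max(c for c in queue if c > head)

def cscan_total(queue, head, max_cyl=199):
    """C-SCAN: sweep up to the last cylinder, jump back to 0 (the return
    seek services nothing and is not counted), then serve up to the
    farthest request below the starting position."""
    return (max_cyl - head) + max(c for c in queue if c < head)

queue = [98, 183, 37, 122, 14, 124, 65, 67]
scan = scan_total(queue, 53)     # 53 + 183 = 236
cscan = cscan_total(queue, 53)   # 146 + 37 = 183
```

Running this on the book's queue with head pointer 53 gives 236 and 183, matching the corrected numbers above.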
{ "domain": "cs.stackexchange", "id": 15614, "tags": "computer-architecture, scheduling" }
Quadrotor control using ArduIMU
Question: We are using an ArduIMU (V3) as our quadrotor's inertial measurement unit (we have a separate board to control all the motors, not the ArduIMU itself). As mentioned here, the output rate of this module is only about 8 Hz. Isn't that far too slow to control a quadrotor? I'm asking because, as mentioned in this answer, a quadrotor needs at least 200 Hz of control frequency to easily stay in one spot, and our ESCs are configured to work at a 450 Hz refresh rate. Every working PID controller I have seen for quadrotors used at least 200-400 Hz of control frequency. I previously asked Ahmad Byagowi (one of the developers of ArduIMU) a similar question, and he answered: The ArduIMU calculates the DCM matrices and that makes it so slow. If you disable the DCM output, you can get up to 100 Hz gyro, acc and so on. So, what will happen if I disable DCM in the firmware? Is it really important? We did a simulation before and our PID controller works pretty well without DCM. Answer: I updated the ArduIMU's firmware and successfully got 100 Hz of output without disabling normalization. Update: Thanks to the Kalman filter firmware for ArduIMU, I got up to 180 Hz of output and removed all the noise.
{ "domain": "robotics.stackexchange", "id": 154, "tags": "arduino, quadcopter, imu, pid" }
What is a very simple pseudodeterministic algorithm (for educational purposes)?
Question: Definition. A randomized algorithm for a search problem is pseudodeterministic if it produces a fixed canonical solution to the search problem with high probability. Question. The notion of a pseudodeterministic algorithm is simple enough, but I cannot think of a simple example. A search on the web only reveals theoretical papers that don't seem to straightforwardly show one. Can you show me a simple one, so that I can see that such algorithms exist? Answer: The Tonelli–Shanks algorithm for computing square roots modulo primes. More generally, factorization of polynomials over finite fields using the Cantor–Zassenhaus algorithm. Both can be made pseudodeterministic by canonizing the result in a suitable way (e.g., for square roots, output the unique square root in $\{0,\dots,(p-1)/2\}$; for factorization, require the factors to be monic, and output them in some sort of lexicographic order).
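For concreteness, here is a minimal Python sketch of Tonelli–Shanks: the search for a quadratic non-residue $z$ is randomized, yet canonizing the output (returning the root in $\{0,\dots,(p-1)/2\}$) makes the answer independent of the random choices:

```python
import random

def sqrt_mod(a, p, rng=None):
    """Square root of a mod an odd prime p via Tonelli-Shanks, canonized."""
    rng = rng or random.Random()
    a %= p
    if a == 0:
        return 0
    assert pow(a, (p - 1) // 2, p) == 1, "a must be a quadratic residue"
    q, s = p - 1, 0                      # write p - 1 = q * 2^s with q odd
    while q % 2 == 0:
        q, s = q // 2, s + 1
    while True:                          # randomized part: any non-residue z works
        z = rng.randrange(2, p)
        if pow(z, (p - 1) // 2, p) == p - 1:
            break
    m, c, t, r = s, pow(z, q, p), pow(a, q, p), pow(a, (q + 1) // 2, p)
    while t != 1:
        i, t2 = 0, t                     # find the least i with t^(2^i) == 1
        while t2 != 1:
            i, t2 = i + 1, t2 * t2 % p
        b = pow(c, 1 << (m - i - 1), p)
        m, c = i, b * b % p
        t, r = t * c % p, r * b % p
    return min(r, p - r)                 # canonical root in {0, ..., (p-1)//2}
```

Different random seeds find different non-residues $z$, but the returned root is always the same canonical one, which is exactly the pseudodeterministic property.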
{ "domain": "cstheory.stackexchange", "id": 5727, "tags": "randomized-algorithms" }
What makes dynamical decoupling a good method since the fidelity after using it can only reach 0.85 or so?
Question: From this paper, I see that the fidelity of a single-qubit gate after using dynamical decoupling only reaches around 0.85, while experimental papers I have seen normally state that a fidelity of around 0.99 is a good result. So my question is: is 0.85 a good fidelity for the method of dynamical decoupling? It seems a little bit low to me. Answer: I think you're confusing this with reported numbers for average gate fidelities. The authors are estimating the fidelity of the time-evolved state with the initial state $|\psi\rangle$. If you prepare a state $|\psi\rangle$ and do nothing, it will (among other things) decohere, such that the fidelity of the time-evolved state with $|\psi\rangle$ decreases with time. In particular, it is not surprising that you don't find great fidelities if you wait long enough. The purpose of dynamical decoupling is to counteract this, at least to some extent, and that's what's presented in this plot. The thing is that DD is an error mitigation technique, so it does not need fully-fledged error correction to work. For certain NISQ applications, this might be enough. Besides, it can be combined with QEC, and less noise usually means less overhead from QEC!
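To see why counteracting dephasing helps at all, here is a toy numpy model (an illustrative sketch, not the paper's experiment): each shot draws a random quasi-static detuning; free evolution scrambles the phase of a $|+\rangle$ state, while a single echo pulse, the simplest form of dynamical decoupling, refocuses it:

```python
import numpy as np

rng = np.random.default_rng(0)
T, sigma, shots = 1.0, 5.0, 20000
delta = rng.normal(0.0, sigma, shots)       # one quasi-static detuning per shot

# Free evolution: |+> acquires phase delta*T; fidelity with |+> is cos^2(phi/2)
phi_free = delta * T
F_free = np.mean(np.cos(phi_free / 2) ** 2)

# Hahn echo: a pi-pulse at T/2 negates the phase collected in the first half,
# so a purely static detuning cancels exactly
phi_echo = delta * (T / 2) - delta * (T / 2)
F_echo = np.mean(np.cos(phi_echo / 2) ** 2)
```

In this idealized model the free-evolution fidelity decays toward 0.5 while the echoed fidelity stays at 1; real experiments sit in between because the noise is not perfectly static and the pulses themselves are imperfect.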
{ "domain": "quantumcomputing.stackexchange", "id": 3819, "tags": "quantum-gate, noise, fidelity" }
Proving Turing Completeness by creating a compiler to a Turing Complete language
Question: Given a programming language A not known to be Turing complete, if one can create, in A, a compiler for a Turing-complete programming language B, does this imply that A is itself Turing complete? If so, what is the formal thinking behind the idea? Answer: No. It's usually relatively easy to compile a language (Turing complete or otherwise). At a very simplistic level, a compiler flattens an abstract syntax tree into a list of instructions. This can usually be done by a simple structural induction over the syntax tree. (Optimization passes often do data-flow or control-flow analyses that would be harder to represent as a simple structural induction.) You can make a practical demonstration by writing, in Agda or some other non-Turing-complete language, a compiler from the untyped lambda calculus to some simple Turing-complete stack-based machine. However, just as an extreme example, the "compiler" could be the identity function. If instead you meant that you could write a compiler for any Turing-complete language, then that wouldn't be possible, simply because the semantics of the language may require executing arbitrary code (e.g. Common Lisp macros) to elaborate the syntax or otherwise to produce a correct compiler.
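To illustrate how simple such a compiler can be, here is a toy Python sketch: a compiler from expression trees to a stack machine, written as a plain structural induction (every recursive call is on a strict subterm, so the same definition would be accepted by a total, non-Turing-complete language):

```python
def compile_expr(ast):
    """Flatten an expression tree into stack-machine code by structural induction.

    Nodes are ("num", n), ("add", l, r) or ("mul", l, r); every recursive call
    works on a strict subterm, so this function is obviously total.
    """
    tag = ast[0]
    if tag == "num":
        return [("push", ast[1])]
    _, left, right = ast
    return compile_expr(left) + compile_expr(right) + [(tag,)]

def run(code):
    """A tiny stack machine executing the compiled instruction list."""
    stack = []
    for op in code:
        if op[0] == "push":
            stack.append(op[1])
        else:
            b, a = stack.pop(), stack.pop()
            stack.append(a + b if op[0] == "add" else a * b)
    return stack[-1]
```

Note that `compile_expr` terminates on every input even though `run` could, for a richer instruction set, loop forever: the compiler's own language never needs the full power of the language being compiled.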
{ "domain": "cs.stackexchange", "id": 14068, "tags": "programming-languages, turing-completeness" }
Why are the edges of a broken glass almost opaque?
Question: Unfortunately I broke my specs today, the ones which I used in this question. But I observed that the edges are completely different from the rest of the lens. The middle portion of the lens was completely transparent, but the edges appeared opaque (and I can't see through the edges). This image shows the same thing in the case of a shattered glass. The edges in the above picture are green and not transparent, as the other portions appear. So why are the edges not transparent (both in the case of the specs and of the shattered glass)? Edit: I would like to add that the edges of my specs were not green; they were just a silvery opaque. I couldn't take a picture of it when asking the question, but take a look at it now. Answer: Because you're looking through more glass. I'd like to just add to the other answers with some diagrams. We have an intuition that light beams travel in straight lines, so we tend to assume that the beam paths looking through glass might be as follows: However, the actual paths of the beams, due to refraction and total internal reflection, look more like this: Note that the beams that enter the face of the glass aren't significantly deflected, and exit the glass pretty quickly. However, beams that enter the edge of the glass travel a much longer distance within the glass. As a beam spends more time within the glass, it has more of a path along which to be affected by impurities.
{ "domain": "physics.stackexchange", "id": 75331, "tags": "optics, reflection, refraction, optical-materials, glass" }
How to find equation of state of an ideal gas from heat capacity?
Question: I'm working on a problem where I have to derive the equation of state of an ideal gas from the formula for the molar heat capacity $C=C_V+ \beta V$. Is this even an ideal gas? Can someone help me, or at least give me a clue on how to solve this problem? Answer: You have to use the Maxwell relations of thermodynamics to derive a potential. Most likely you need to start from $$\left(\frac{\partial U}{\partial T}\right)_V = C_{V},\qquad C=\frac{\delta Q}{\mathrm{d} T}$$ Take a look at the thermodynamic square; maybe you will first need to derive a potential ($U$?). Once you have a potential, it is trivial to get the equation of state. Check: https://en.wikipedia.org/wiki/Table_of_thermodynamic_equations https://en.wikipedia.org/wiki/Thermodynamic_square
{ "domain": "physics.stackexchange", "id": 59158, "tags": "homework-and-exercises, thermodynamics, statistical-mechanics" }
Is there a proof from the first principle that the Lagrangian $L = T - V$?
Question: Is there a proof from first principles that for the Lagrangian $L$, $$L = T\text{(kinetic energy)} - V\text{(potential energy)}$$ in classical mechanics? Assume that Cartesian coordinates are used. Among the combinations $L = T - nV$, only $n=1$ works. Is there a fundamental reason for this? On the other hand, the variational principle used in deriving the equations of motion, the Euler-Lagrange equation, is general enough (it can be used to find the optimum of any parametrized integral) and does not specify the form of the Lagrangian. I would appreciate any answer, and if possible the primary source (who published the answer first in the literature). Notes added on Sept 22: Both answers are correct as far as I can tell. Both answerers were not sure what I meant by the term I used: 'first principle'. I would like to elaborate on what I was thinking; this is not meant to be condescending or anything near that. Please bear with me if the words I use are not well thought out. We do science by collecting facts, forming empirical laws, and building a theory which generalizes the laws; then we go back to the lab and see whether the generalized part can stand up to verification. Newton's laws are close to the empirical-law end, meaning that they are easily verified in the lab. These laws are not limited to gravity, but are used mostly under the condition of gravity. When we generalize them and express them in Lagrangian or Hamiltonian form, they can be used where Newton's laws cannot, for example on electromagnetism, or on any other forces unknown to us. The Lagrangian or Hamiltonian and the derived equations of motion are generalizations and more on the theory side, relatively speaking; at least they are a little more theoretical than Newton's laws. We still go to the lab to verify these generalizations, but it's somewhat harder to do so; sometimes we have to use the Large Hadron Collider.
But here is a new problem, as @Jerry Schirmer pointed out in his comment, and I agree: the Lagrangian is a great tool if we know its expression; if we don't, then we are at a loss. The Lagrangian is almost as useless as Newton's laws for a new, mysterious force. Almost as useless, but not quite, because we have much better luck with trial and error on the Lagrangian than on the equations of motion. The variational principle is a 'first principle' in my mind, and it is used to derive the Euler-Lagrange equation. But the variational principle does not give a clue about the explicit expression of the Lagrangian. This is the point I'm driving at, and this is why I'm looking for help, say, on Physics SE. If someone knew the reason why $n=1$ in $L=T-nV$, then we could use this reasoning to find out about a mysterious force. It looks like that someone is in the future. Answer: We assume that by the term first principle OP in this context means Newton's laws rather than the principle of stationary action$^1$. It is indeed possible to derive Lagrange equations from Newton's laws, cf. this Phys.SE answer. Sketched proof: Let us consider a non-relativistic$^2$ Newtonian problem of $N$ point particles with positions ${\bf r}_1, \ldots, {\bf r}_N$, with generalized coordinates $q^1, \ldots, q^n$, and $m=3N-n$ holonomic constraints. Let us for simplicity assume that the applied force of the system has a generalized (possibly velocity-dependent) potential $U$. (This e.g. rules out velocity-dependent friction forces.) It is then possible to derive the following key identity $$\tag{1} \sum_{i=1}^N \left(\dot{\bf p}_i-{\bf F}_i\right)\cdot \delta {\bf r}_i ~=~ \sum_{j=1}^n \left(\frac{d}{dt} \frac{\partial (T-U)}{\partial \dot{q}^j} -\frac{\partial (T-U)}{\partial q^j}\right) \delta q^j. $$ Here $\delta$ denotes an infinitesimal virtual displacement consistent with the constraints. Moreover, ${\bf F}_i$ is the applied force (i.e. the total force minus the constraint forces) on the $i$'th particle.
The Lagrangian $L:=T-U$ is here defined as the difference$^3$ between the kinetic and the potential energy. Note that the rhs. of eq. (1) precisely contains the Euler-Lagrange operator. D'Alembert's principle says that the lhs. of eq. (1) is zero. Then Lagrange equations follows from the fact that the virtual displacement $\delta q^j$ in the generalized coordinates is un-constrained and arbitrary. D'Alembert's principle in turn follows from Newton's laws using some assumptions about the form of the constraint forces. (E.g. we assume that there is no sliding friction.) See Ref. 1 and this Phys.SE post for further details. References: H. Goldstein, Classical Mechanics, Chapter 1. -- $^1$ One should always keep in mind that, at the classical level (meaning $\hbar=0$), the Lagrangian $L$ is far from unique, in the sense that many different Lagrangians may yield the same eqs. of motion. E.g. it is always possible to add a total time derivative to the Lagrangian, or to scale the Lagrangian with a constant. See also this Phys.SE post. $^2$ It is possible to extend to a special relativistic version of Newtonian mechanics by (among other things) replacing the non-relativistic formula $T=\frac{1}{2}\sum_{i=1}^N m_i v^2_i $ with $T=-\sum_{i=1}^N \frac{m_{0i}c^2}{\gamma(v_i)}$ rather than the kinetic energy $\sum_{i=1}^N [\gamma(v_i)-1]m_{0i}c^2$. See also this Phys.SE post. $^3$ OP is pondering why the Lagrangian $L$ is not of the form $T-\alpha U$ for some constant $\alpha\neq 1$? In fact, the key identity (1) may be generalized as follows $$\tag{1'} \sum_{i=1}^N \left(\dot{\bf p}_i-\alpha{\bf F}_i\right)\cdot \delta {\bf r}_i ~=~ \sum_{j=1}^n \left(\frac{d}{dt} \frac{\partial (T-\alpha U)}{\partial \dot{q}^j} -\frac{\partial (T-\alpha U)}{\partial q^j}\right) \delta q^j. $$ So the fact that the Lagrangian $L$ is not of the form $T-\alpha U$ for $\alpha\neq 1$ is directly related to that Newton's 2nd law is not of the form $\dot{\bf p}_i=\alpha {\bf F}_i$ for $\alpha\neq 1$.
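As a small check of footnote 3's point, one can run the Euler-Lagrange operator on the trial Lagrangian $L = T - nV$ for a harmonic oscillator with sympy (an illustrative sketch; the symbol names are arbitrary):

```python
import sympy as sp

t, m, k, n = sp.symbols("t m k n", positive=True)
x = sp.Function("x")

# Trial Lagrangian L = T - n*V for a harmonic oscillator, V = k*x^2/2
L = sp.Rational(1, 2) * m * sp.diff(x(t), t) ** 2 - n * k * x(t) ** 2 / 2

# Euler-Lagrange operator: d/dt (dL/dx') - dL/dx
eom = sp.diff(sp.diff(L, sp.diff(x(t), t)), t) - sp.diff(L, x(t))

# eom equals m*x'' + n*k*x, i.e. m x'' = -n k x: this reproduces
# Newton's law m x'' = -k x only for n = 1
```

This mirrors the generalized identity (1'): scaling the potential in the Lagrangian scales the force term in the resulting equation of motion by the same factor.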
{ "domain": "physics.stackexchange", "id": 85355, "tags": "classical-mechanics, lagrangian-formalism, potential-energy, variational-principle, equations-of-motion" }
Is there a machine learning model suited well for longitudinal data?
Question: I have a fairly large (>100K rows) dataset with multiple (daily) measurements per individual, for a few thousand individuals. The number of measurements per individual varies, and there are many null values (that is, one row may have missing values for certain variables/measurements, but not for all). I also have a daily outcome (extrapolated, but let's assume it's fair to do so, so there is a binary outcome for each day when measurements are taken). My goal is to model the outcome, such that I can predict daily outcomes for new individuals. My background is in research, and I am familiar with some statistics and ML, but overall still fairly new to data science. I am wondering if there are any particular well-known ML algorithms that can be used to model such data. I am cautious about using logistic regression from something like Python's scikit-learn, because the observations are not independent (they are highly correlated within each individual). From my knowledge, this kind of data is well suited for a mixed-effects logistic regression or longitudinal logistic regression. However, I haven't been able to find any widely used ML algorithms for it, and I would like to pursue an ML approach rather than fitting a statistical model using something like the lme4 package in R. Could someone recommend an available ML algorithm to model such data? PS: I did some research and found a few research articles on the topic, but nothing widely used or clearly implemented. The structure of the data I am working with strikes me as very common, so I thought I'd ask. Answer: Assuming we are not talking about a time series, and also assuming the unseen data you want to make predictions on could include individuals not currently present in your data set, your best bet is to restructure your data first. What you want to do is predict the daily outcome Y from predictors X1...Xn, which I understand to be the measurements taken.
A normal approach here would be to fit a random forest or boosting model, which, yes, could be built on a logistic-style base learner. However, you point out that simply assuming each case is independent is incorrect, because outcomes are highly dependent on the individual measured. If this is the case, then we need to add the attributes describing the individual as additional predictors. So this:

id | day | measurement1 | measurement2 | ... | outcome
A  | Mon | 1            | 0            | 1   | 1
B  | Mon | 0            | 1            | 0   | 0

becomes this:

id | age | gender | day | measurement1 | measurement2 | ... | outcome
A  | 34  | male   | Mon | 1            | 0            | 1   | 1
B  | 28  | female | Mon | 0            | 1            | 0   | 0

By including the attributes of each individual, we can use each daily measurement as a single case in training the model, because we assume that the correlation between the intra-individual outcomes can be explained by the attributes (i.e. individuals with similar age, gender, and other domain-appropriate attributes should have the same outcome bias). If you do not have any attributes about the individuals besides their measurements, then you can also safely ignore this, because your model will have to predict an outcome on unseen data without knowing anything about the individual. That the prediction could be improved because we know individuals bias the outcome does not matter, because the data simply isn't there. You have to understand that prediction tasks are different from other statistical work: the only thing we care about is the properly validated performance of the prediction model. If you can get a model that is good enough while ignoring individuals, then you are a-okay, and if your model sucks, you need more data. If, on the other hand, you only want to predict outcomes for individuals ALREADY IN YOUR TRAINING SET, the problem becomes even easier to solve: simply add the individual identifier as a predictor variable.
To sum it up: unless you have a time series, you should be okay using any ML classification model, like random forests or boosting models, even if they are built on plain logistic regression. However, you might have to restructure your data a bit.
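A minimal sketch of that recipe with scikit-learn (toy data and made-up column names; the one addition is that the train/test split is done by individual via `GroupShuffleSplit`, so correlated rows from the same person never straddle the split):

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GroupShuffleSplit

# Toy long-format data, one row per individual per day (all names made up)
rng = np.random.default_rng(0)
n = 600
df = pd.DataFrame({
    "id": rng.integers(0, 50, n),        # individual identifier
    "m1": rng.normal(size=n),            # daily measurements
    "m2": rng.normal(size=n),
})
df["age"] = 20 + 0.5 * df["id"]          # individual-level attribute
df["outcome"] = (df["m1"] + 0.05 * df["age"]
                 + rng.normal(0, 0.5, n) > 2).astype(int)

X, y = df[["age", "m1", "m2"]], df["outcome"]

# Validate by splitting on individuals, not rows: rows from the same person
# are correlated and must never end up on both sides of the split
splitter = GroupShuffleSplit(n_splits=1, test_size=0.25, random_state=0)
train_idx, test_idx = next(splitter.split(X, y, groups=df["id"]))

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X.iloc[train_idx], y.iloc[train_idx])
acc = clf.score(X.iloc[test_idx], y.iloc[test_idx])
```

The grouped split is the part most easily forgotten: a row-wise split would leak individual-specific signal into the test set and inflate the validated performance.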
{ "domain": "datascience.stackexchange", "id": 7342, "tags": "machine-learning, logistic-regression" }
Does the mixture of a ligand exchange reaction contain all the complexes obtained in the part reactions?
Question: A ligand exchange reaction involves many part reactions. So, does the mixture contain all the complexes from those part reactions? Quoting an example from ChemGuide: This can be written as an equilibrium reaction to show the overall effect: $$\ce{[Cu(H2O)6]^2+ + 4NH3 <=> [Cu(NH3)4(H2O)2]^2+ + 4H2O}$$ In fact, the water molecules get replaced one at a time, and so this is made up of a series of part-reactions: $$ \begin{align} \ce{[Cu(H2O)6]^2+ + \color{blue}{NH3} &<=> [Cu\color{blue}{(NH3)}(H2O)5]^2+ + H2O}\\ \ce{[Cu\color{blue}{(NH3)}(H2O)5]^2+ + \color{blue}{NH3} &<=> [Cu\color{blue}{(NH3)2}(H2O)4]^2+ + H2O}\\ \ce{[Cu\color{blue}{(NH3)2}(H2O)4]^2+ + \color{blue}{NH3} &<=> [Cu\color{blue}{(NH3)3}(H2O)3]^2+ + H2O}\\ \ce{[Cu\color{blue}{(NH3)3}(H2O)3]^2+ + \color{blue}{NH3} &<=> [Cu\color{blue}{(NH3)4}(H2O)2]^2+ + H2O}\\ \end{align}$$ And if yes, then which complex has the highest concentration? Answer: Yes, all those complexes from all those reactions will be present, and some more. Essentially Loong did all the mathematics in his answer to: Why is ligand substitution only partial with copper(II) ions and ammonia? Let's shorten this a little bit and write the general equation for the $n$-th substitution step: \begin{align} \ce{[Cu(NH3)_{(n-1)}(H2O)_{(7-n)}]^2+ + NH3 &<=> [Cu(NH3)_{n}(H2O)_{(6-n)}]^2+ + H2O};& n&\in\{1,\dots,6\} \end{align} Therefore we can write the stepwise equilibrium constant: \begin{align} K_n &= \frac{{\left[ \ce{[Cu(NH3)_{n}(H2O)_{(6-n)}]^2+} \right]}} {{\left[ \ce{[Cu(NH3)_{(n-1)}(H2O)_{(7-n)}]^2+} \right]\left[ \ce{NH3} \right]}} \end{align} and the overall constant for complete substitution is the product $\prod_{n=1}^{6} K_n$. Since all reactions are in equilibrium, all possible species will exist, although they might not be quantifiable. The overall equilibrium depends on the concentration of the involved species, especially ammonia.
If $[\ce{NH3}]$ is low, then a complex with smaller $n$ will be more stable; if you go to higher concentrations, larger $n$ will be preferred. In all of this it should be mentioned that there are also complexes with vacant coordination sites $\square$, like anything in between $$\ce{[Cu(H2O)5\square_1]^2+},\dots, \ce{[Cu(H2O)3(NH3)2\square_1]^2+},\dots, \ce{[Cu(NH3)5\square_1]^2+},\\ \left(\ce{[Cu(H2O)4\square_2]^2+},\dots,\ce{[Cu(NH3)4\square_2]^2+}\right),$$ and possibly others.
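As an illustration of how the dominant species shifts with the free ammonia concentration, here is a small Python sketch; the stepwise $\log K$ values are placeholders of roughly the right size for $\ce{Cu^2+}/\ce{NH3}$, not reference data:

```python
import numpy as np

# Placeholder stepwise formation constants (log10 K_n) for the first four
# NH3 substitutions; treat these as illustrative, not tabulated values
logK = np.array([4.1, 3.5, 2.9, 2.1])
beta = np.concatenate(([1.0], np.cumprod(10.0 ** logK)))   # cumulative constants

def fractions(nh3):
    """Mole fractions of [Cu(NH3)_n(H2O)_(6-n)]^2+ for n = 0..4 at free [NH3]."""
    w = beta * nh3 ** np.arange(beta.size)
    return w / w.sum()

low, high = fractions(1e-5), fractions(1.0)   # dilute vs. concentrated ammonia
```

With these numbers the aqua complex ($n=0$) dominates at low ammonia while the tetraammine ($n=4$) dominates at high ammonia, which is the qualitative behaviour described above.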
{ "domain": "chemistry.stackexchange", "id": 10123, "tags": "coordination-compounds" }
Do extra-dimensional theories like ADD or Randall-Sundrum require string theory to be true?
Question: What I mean is: could it turn out that the world is not described by string theory / M-theory, but that nevertheless some version of one of these extra-dimensional theories is true? I have no real background in this area. I just read Randall and Sundrum's 1999 paper "A Large Mass Hierarchy from a Small Extra Dimension" (http://arxiv.org/PS_cache/hep-ph/pdf/9905/9905221v1.pdf). Other than the use of the term "brane" and a couple of references to string excitations at the TeV scale, I don't see much about string theory, and I notice their theory only requires 1 extra dimension, not 6 or 7. Answer: Extra-dimensional scenarios may be described as "inspired" by string theory, but they are independent hypotheses and may be true even if string theory is not. However, one has to reduce the ambitions and standards of consistency. Sociologically, it's surely true that the research of models with extra dimensions has been adopted and pursued by many people who have never studied proper string theory or taken a course in it. Despite the academic independence, a confirmation of experimentally accessible extra dimensions - which is extremely unlikely to occur, due to their likely tiny size - would be huge evidence supporting string theory, because it's the only framework in which the extra dimensions actually have a justification (in fact, many of them).
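For a sense of the numbers in the Randall-Sundrum paper the question cites: masses on the visible brane are suppressed by the warp factor $e^{-k\pi r_c}$, so generating the weak/Planck hierarchy needs only a modest $k r_c$ (the scales below are rough orders of magnitude, not precise values):

```python
import math

# Rough orders of magnitude only: the warp factor exp(-k*pi*r_c) must
# bridge the ~10^16 gap between the Planck scale and the TeV scale
M_planck = 1e19   # GeV, rough Planck scale
M_weak = 1e3      # GeV, TeV scale
krc = math.log(M_planck / M_weak) / math.pi   # required k * r_c
```

This lands near $k r_c \approx 12$, in line with the paper's point that an extra dimension only about an order of magnitude larger than the fundamental curvature length suffices for the hierarchy, with no string-theoretic input anywhere in the estimate.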
{ "domain": "physics.stackexchange", "id": 1329, "tags": "string-theory, branes" }
How to show that a problem is easy?
Question: Let $P$ be a problem whose difficulty you need to study, i.e., NP-hard, polynomial-time solvable, etc. My question is: if I reduce a known polynomial-time problem (say, maximum matching in bipartite graphs) to $P$, why can I say that $P$ is an easy problem? My guess is: no, we cannot say that. Why? Because from an instance of the maximum matching problem, $I_{ MM }$, I create an instance of $P$, $I_{ P }$, and then I show that maximum matching is solved with $I_{ MM }\iff P$ is solved with $I_{ P }$. But what if from another instance of the maximum matching problem, $I_{ MM }'$, I create another instance of $P$, $I_{ P }'$, which is hard to solve? I have read that this kind of reduction is correct and works, for example from sorting to convex hull, but I do not know why. I do not know what I am missing here. Answer: You are right. Any problem in $\mathsf{P}$ can be (polytime) reduced to the halting problem, for example. (In fact, any problem in $\mathsf{P}$ can be polytime reduced to any non-trivial language, that is, any language other than $\emptyset$ or $\Sigma^*$.) What is true is that if A reduces to B and B is easy, then so is A. In particular, if A polytime reduces to B and B is in $\mathsf{P}$, then A is also in $\mathsf{P}$.
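The parenthetical claim is easy to demonstrate concretely: if we can already decide A in polynomial time, we can "reduce" A to any non-trivial B by solving A and outputting a fixed yes- or no-instance of B. A toy Python sketch:

```python
def reduce_to(B_yes, B_no, decide_A):
    """Build a mapping reduction f with: x in A  <=>  f(x) in B.

    Works for ANY non-trivial target language B, because we decide A
    outright and then emit a fixed yes- or no-instance of B.
    """
    def f(x):
        return B_yes if decide_A(x) else B_no
    return f

# "Reduce" the easy language EVEN to an arbitrary language that contains "1"
# but not "0"; the existence of f tells us nothing about that language
f = reduce_to("1", "0", lambda x: x % 2 == 0)
```

Since `f` runs in polynomial time whenever `decide_A` does, such a reduction exists from every language in $\mathsf{P}$ to every non-trivial target, which is exactly why reducing an easy problem *to* $P$ says nothing about $P$'s difficulty.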
{ "domain": "cs.stackexchange", "id": 5936, "tags": "complexity-theory, reductions" }
What type of correlation is this equation?
Question: A book [1] gives an equation (equation 5.31 in the book) for estimating the frequency offset in OFDM systems, proposed by Classen. The procedure is: first, two OFDM symbols, $y_l [n]$ and $y_{l+D} [n]$, are saved in memory after synchronization. Then the signals are transformed into ${Y_l [k]}_{k=0}^{N-1}$ and ${Y_{l+D} [k]}_{k=0}^{N-1}$ via FFT, from which pilot tones are extracted. After estimating the CFO from the pilot tones in the frequency domain, the signal is compensated in the time domain. Here is the equation: $ \hat \epsilon_{acq} = \dfrac{1}{2\pi \cdot T_{sub}} \underset{\epsilon}{max} \left \{ \left \vert \sum_{j=0}^{L-1} Y_{l+D}[p[j],\epsilon]\cdot Y_{l}^{*}[p[j],\epsilon]\cdot X_{l+D}^{*}[p[j]]\cdot X_l[p[j]] \right \vert \right \} $ where $L$, $p[j]$, and $X_{l}[p[j]]$ are the number of pilot tones, the location of the $j$th pilot tone, and the pilot tone located at $p[j]$ in the frequency domain at the $l$th symbol period, respectively. From the equation, we are supposed to find the normalized CFO, $\epsilon$. But the parameter we are estimating appears inside the equation, as $[p[j],\epsilon]$. My question is: how can the parameter we're supposed to estimate be in the middle of the equation? What type of correlation is this? Or did I misunderstand the equation? Please explain. [1] Yong Soo Cho et al., "MIMO-OFDM wireless communications with MATLAB", John Wiley & Sons (Asia), 2010 Answer: I understand your confusion, because the equation is barely understandable from the information given in the book. It becomes clearer in the original paper by Classen and Meyr [1], from which it has been taken. They propose a two-stage frequency offset estimation that consists of an acquisition stage and a tracking stage. The equation you've cited represents the acquisition algorithm.
In fact, it is a searching algorithm that "tries out" various values of the frequency offset $\epsilon$ and chooses the one value $\hat\epsilon_\mathrm{acq}$ that yields the maximum correlation sum. It can be described as follows. For every trial $\epsilon$: Compensate the received time-domain signal for the supposed frequency offset $\epsilon$. Calculate the DFT of the frequency-offset-compensated RX signal; this yields $Y_{l}[p[j],\epsilon]$. Calculate the correlation sum $R_\epsilon$ (the term inside $|\cdots|$). Then take the trial value whose $R_\epsilon$ is maximal as the estimate; strictly speaking, the maximization is to be read as an $\arg\max$ over $\epsilon$, and the prefactor $1/(2\pi T_{sub})$ merely converts the normalized quantity into an absolute frequency offset. Of course, it's not feasible to evaluate the correlation sum for every $\epsilon$, because the number of possible values is infinite. Instead, you have to restrict the range of values to a reasonable frequency offset that is determined by the hardware you're using. Additionally, this "search range" has to be partitioned into discrete values. The authors of [1] note: In practice we found that it is sufficient to space the trial parameters $0.1/T_{sub}$ apart from each other. Where $T_\mathrm{sub}$ is the length of one OFDM symbol including the guard interval. [1] Classen, F. and Meyr, H. (June 1994) Frequency synchronization algorithms for OFDM systems suitable for communication over frequency selective fading channels. IEEE VTC'94, pp. 1655-1659.
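Here is a numpy sketch of that trial-and-compensate search (a toy setup with made-up parameters: no noise or channel, QPSK on all subcarriers, and the pilot symbols known to the receiver):

```python
import numpy as np

rng = np.random.default_rng(7)
N, cp, D = 64, 16, 1                      # FFT size, CP length, symbol distance
pilots = np.arange(2, N, 4)               # 16 pilot positions (made up)
X = np.exp(1j * np.pi / 2 * rng.integers(0, 4, (2, N)))   # QPSK on all bins

# Two OFDM symbols with cyclic prefix, hit by a true normalized CFO of -1.5
syms = [np.concatenate((np.fft.ifft(s)[-cp:], np.fft.ifft(s))) for s in X]
tx = np.concatenate(syms)
eps_true = -1.5
rx = tx * np.exp(2j * np.pi * eps_true * np.arange(tx.size) / N)

def metric(eps):
    """|sum over pilots of Y_{l+D} Y_l* X_{l+D}* X_l| for a trial offset eps."""
    comp = rx * np.exp(-2j * np.pi * eps * np.arange(rx.size) / N)  # 1. compensate
    Y = np.fft.fft(comp.reshape(2, N + cp)[:, cp:], axis=1)         # 2. FFT
    return abs(np.sum(Y[D, pilots] * Y[0, pilots].conj()
                      * X[D, pilots].conj() * X[0, pilots]))        # 3. correlate

trials = np.arange(-2.0, 2.01, 0.5)       # discrete grid of trial offsets
eps_hat = trials[np.argmax([metric(e) for e in trials])]
```

When the trial value matches the true offset, the compensated pilots line up with the known symbols and the sum adds coherently; for wrong trials the products have scattered phases and the magnitude collapses, which is exactly the search behaviour described above.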
{ "domain": "dsp.stackexchange", "id": 1727, "tags": "correlation, ofdm" }
Commutation of vector operators
Question: I'm supposed to show that $\left[\mathbf A,\mathbf B\right]=0$ (for two vector operators $\mathbf A$ and $\mathbf B$) if and only if all components of $\mathbf A$ commute with all components of $\mathbf B$. What I have so far is this: \begin{align*} \mathbf A \otimes\mathbf B &= \mathbf A\mathbf B^T \Rightarrow \left[\mathbf A,\mathbf B\right] = \mathbf A\otimes \mathbf B - \left[\mathbf B \otimes \mathbf A\right]^T=\mathbf A\mathbf B^T - \left[\mathbf B \mathbf A^T\right]^T = \mathbf A\mathbf B^T - \mathbf B^T\! \mathbf A = 0. \end{align*} So much for the basic algebra... what am I supposed to do now? I can expand into components, but the next question "give an expression for $\left[\mathbf A,\mathbf B\right]_{ij}$ in Einstein notation" kind of suggests I should avoid saying $\mathbf A = a^i_j$ and go from there. I tried doing so anyway, with the obvious result of: \begin{align*} \mathbf A\mathbf B^T - \mathbf B^T\! \mathbf A = a^i_j\left(b^i_j\right)^T - \left(b^i_j\right)^T a^i_j = a^i_jb^j_i - b^j_i a^i_j = 0. \end{align*} Should I be interpreting the result as "One of these vectors will produce a matrix, the other will produce a scalar (an inner product), which means that this expression can only be zero when the matrix equals zero" (?). I'm at a loss as to how to approach this.. Answer: I think your confusion is arising from the fact that you are imagining operators as matrices. This is mostly fine, but in this case, the operator itself being a vector is what is causing the confusion - so let me elaborate. ${\bf A}$ is a vector of operators. For example $$ {\bf A} = \pmatrix{ A_1 \\ A_2 \\ A_3} $$ We can denote this collectively as $A_i$. Now, note that each of these $A_i$'s are themselves operators. In other words, they are matrices $(A_i)_{ab}$. Thus, each element of $A$ has three indices. One index is the vector index and the other two are the matrix operator indices. 
Finally, the very definition of $[{\bf A} , {\bf B}]$ is the following $$ [ {\bf A} , {\bf B} ]_{ij} = [ A_i , B_j ] $$ The latter can further be written in matrix notation as $$ [ A_i , B_j ]_{ab} = (A_i)_{ac} (B_j)_{cb} - (B_j)_{ac} (A_i)_{cb} $$ Thus, $[{\bf A} , {\bf B} ] = 0$ is precisely the statement that all components of ${\bf A}$ commute with those of ${\bf B}$. Let me finally point out that this question is really poorly worded. I would say that $[ {\bf A} , {\bf B} ] = 0$ is by definition the statement that the components commute. More clearly, (if it is not already) I mean that $[ {\bf A} , {\bf B} ] = 0$ already means that the components commute. There is nothing to derive here.
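To make the definition concrete, here is a small numerical sketch (my addition, not from the original answer): take $A_i = \sigma_i \otimes I$ and $B_j = I \otimes \sigma_j$, whose components all commute because they act on different tensor factors, and check that every block $[A_i, B_j]$ vanishes, while components on the *same* factor do not commute.

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)
paulis = [sx, sy, sz]

def comm(a, b):
    return a @ b - b @ a

# Vector operators acting on different tensor factors: components commute
A = [np.kron(s, I2) for s in paulis]
B = [np.kron(I2, s) for s in paulis]

# [A, B]_{ij} := [A_i, B_j] -- every block is the zero operator
assert all(np.allclose(comm(Ai, Bj), 0) for Ai in A for Bj in B)

# Same factor: components do NOT commute, e.g. [sx, sy] = 2i sz
assert np.allclose(comm(sx, sy), 2j * sz)
```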
{ "domain": "physics.stackexchange", "id": 26246, "tags": "quantum-mechanics, operators, vectors, tensor-calculus, commutator" }
How to quantify the adsorption affinity of gases?
Question: Is there a term/quantity which shows how 'sticky/adsorptive' a molecule is? I am interested in gas adsorption on steel surfaces in our mass spectrometer and would like to estimate which gases have a higher propensity to adsorb. Answer: The term you are looking for is "adsorption enthalpy". However, calculating that value depends on a variety of factors, mainly the adsorbate and the substrate, and it is not that simple. The surface structure of the substrate matters: a $[111]$ oriented crystallographic face will have a different adsorption enthalpy with a given adsorbate than, say, a $[001]$ crystallographic face. I suggest you study this more, namely the Langmuir adsorption model and Brunauer–Emmett–Teller (BET) theory, especially since BET is a popular technique to measure adsorption.
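As a rough illustration of the Langmuir model the answer points to (a sketch with made-up equilibrium constants, not measured data): the fractional coverage is $\theta = Kp/(1+Kp)$, so at a fixed pressure a gas with a larger $K$, i.e. a higher adsorption affinity, sits closer to full coverage.

```python
def langmuir_coverage(pressure, K):
    """Fractional surface coverage theta = K*p / (1 + K*p)."""
    return K * pressure / (1.0 + K * pressure)

# Hypothetical equilibrium constants (1/bar) -- illustrative values only
gases = {"N2": 0.1, "H2O": 5.0, "He": 0.001}
p = 1.0  # bar
for gas, K in sorted(gases.items(), key=lambda kv: -kv[1]):
    print(f"{gas}: theta = {langmuir_coverage(p, K):.4f}")
```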
{ "domain": "chemistry.stackexchange", "id": 10829, "tags": "terminology, adsorption" }
Is there a lighter-than-air foam material?
Question: Soap bubble foam made with helium floats up, but due to extreme fragility hardly counts as "material". There are many solid foam materials though - PUR foam, or styrofoam to name the most common. They typically use carbon dioxide for inflation though (usually produced from precursors of the foam, as a desirable side effect of their reaction). But it shouldn't be too difficult to make solid foam filled with helium (or hydrogen) in proportions assuring positive buoyancy in air, and I can imagine it being desirable, at least as a filler in applications where mass comes at a premium (transport, aviation), even if its structural properties were too poor for any other purpose. Is such material produced? Is it used anywhere? Or if not, why? Answer: No matter how well you seal it, when you inflate a balloon with helium it will stay up for a while, but after a few days it will have lost its pressure. That is enough to realise that solids also exhibit mass diffusivity, and therefore your foam will not retain the gas. This phenomenon is also called permeation. Diffusivity in solids is very complex and 'mostly' cannot be described with an equation as simple as Fick's laws, and in many cases it is not even isotropic. But there is still a condition that needs to be met for diffusion: the particle/atom you consider must be able to fit within the crystal structure/pattern of your material. Unfortunately, helium is so small that it will diffuse through all reasonable materials. Diffusivity in solids unfortunately prevents us from isolating a gas, but it is also put to positive use in a lot of fields. The most investigated is probably microelectronics, where local implants of ions into semiconductors change the material's electrical properties locally.
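To put rough numbers on the answer's point, here is a back-of-the-envelope sketch of helium loss from one closed foam cell. Every value below (the permeability in particular) is an assumed, order-of-magnitude illustration, not a vetted material property; the steady-state permeation rate is modelled as $P \cdot A \cdot \Delta p / t_{\text{wall}}$.

```python
import math

# Back-of-the-envelope helium loss from one closed foam cell.
# All numbers below are illustrative assumptions, not measured data.
perm = 1e-14      # helium permeability, mol/(m*s*Pa) -- assumed polymer-like value
r = 5e-3          # cell radius, m
wall = 1e-5       # wall thickness, m (10 um)
dp = 1e5          # pressure difference, Pa (~1 atm)
R, T = 8.314, 293.0

area = 4 * math.pi * r**2
volume = (4 / 3) * math.pi * r**3
n_helium = dp * volume / (R * T)      # ideal-gas moles inside the cell
loss_rate = perm * area * dp / wall   # steady-state permeation, mol/s
tau = n_helium / loss_rate            # crude emptying timescale, s
print(f"rough emptying timescale: {tau / 60:.0f} minutes")
```

Even with these generous assumptions the cell drains on a timescale of minutes to hours, which is why a buoyant helium-filled solid foam is so hard to make.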
{ "domain": "engineering.stackexchange", "id": 1579, "tags": "materials" }
Glorified Document Chooser
Question: I'm primarily concerned about the design of this MVC application. But will gladly take any suggestions! Primary objective: Create an interface that allows a user to choose a document from a database. This portion of my program allows a user (editor/writer) to choose document drafts from a database by title after specifying some selection criteria. The user will typically only be choosing one document at a time, and editing the document to fit their needs; but they have the option to grab multiple documents. The program will then concatenate all documents together and the user can edit as needed. (This extracted portion of the program only prints the titles to System.out as proof that the calling Swing portion can retrieve data from the JavaFX panel). The rest of the application (this is a small part of a single screen) has been written with Swing and is closely coupled with the logic of the program. I've extracted this small panel as a start to decouple the view from the controller and model. JavaFX (via fxml) seemed like a nice way to guarantee that the view will remain separated from the controller. Have I achieved my goal of logic separation? Am I missing anything? How could the design be improved for maintainability and usability? 
SwingMain.java package FXMLControllers; import javafx.application.Platform; import javafx.collections.ObservableList; import javafx.embed.swing.JFXPanel; import javafx.fxml.FXMLLoader; import javafx.scene.Scene; import javax.swing.*; import java.awt.*; import java.io.IOException; public class SwingMain extends JFrame{ private NewDraftFromDBPanelController controller; public SwingMain() { } private void initSwingComponents() { JFrame f = new JFrame("Swing with JavaFX test"); f.setLayout(new BorderLayout()); JFXPanel draftFromDBPanel = new JFXPanel(); JPanel buttonPanel = new JPanel(); JButton showDocs = new JButton("Show Docs"); buttonPanel.add(showDocs); f.add(draftFromDBPanel, BorderLayout.CENTER); f.add(buttonPanel, BorderLayout.SOUTH); f.setSize(1000,350); f.setVisible(true); f.setDefaultCloseOperation(WindowConstants.EXIT_ON_CLOSE); Platform.runLater(() -> initFX(draftFromDBPanel)); showDocs.addActionListener(e -> printAllDocs()); } private void printAllDocs() { ObservableList<DraftPK> draftPKS = controller.retrieveAllDocs(); for (DraftPK draft: draftPKS) { print(draft.getDraftName()); } } private void initFX(JFXPanel draftFromDBPanel) { Scene scene = createScene(); draftFromDBPanel.setScene(scene); DraftFromDBPanelModel model = new MockNewDraftFromDBPanelModel(); controller.initData(model); } private Scene createScene() { Scene scene = null; try { FXMLLoader loader = new FXMLLoader(getClass().getResource("DraftFromDBPanel.fxml")); scene = new Scene(loader.load(), 200, 100); controller = loader.getController(); } catch (IOException e) { e.printStackTrace(); } return scene; } private void print(String message){ System.out.print(message + "\n"); } public static void main(String[] args) throws IOException { SwingMain test = new SwingMain(); SwingUtilities.invokeLater(test::initSwingComponents); } } NewDraftFromDBPanelController.java package FXMLControllers; import javafx.beans.property.ReadOnlyStringWrapper; import javafx.collections.FXCollections; import 
javafx.collections.ObservableList; import javafx.event.ActionEvent; import javafx.fxml.FXML; import javafx.scene.control.*; import javafx.util.StringConverter; public class NewDraftFromDBPanelController { @FXML private TableColumn<DraftPK, String> draftNameColumn; @FXML private TableColumn<DraftPK, String> sessionColumn; @FXML private TableColumn<DraftPK, String> authorColumn; @FXML private TableColumn<DraftPK, String> draftTypeColumn; @FXML private ComboBox<String> drafterChooser; @FXML private TableView<DraftPK> selectedDocumentsTable; private ObservableList<DraftPK> selectedDocuments; @FXML private Button removeFromTable; @FXML private ComboBox<DraftPK> documentChooser; @FXML private ToggleGroup dbButtonGroup; @FXML private RadioButton othersDrafts; @FXML private RadioButton finalDraft; @FXML private CheckBox changeDraft; @FXML private CheckBox notChangeDraft; @FXML private CheckBox historyCheckbox; @FXML private ComboBox<YearEntryDO> yearChooser; private YearEntryDO currentYear; private DraftFromDBPanelModel model; @FXML void initialize() { assert selectedDocumentsTable != null : "fx:id=\"selectedDocumentsTable\" was not injected: check your FXML file 'DraftFromDBPanel.fxml'."; assert removeFromTable != null : "fx:id=\"removeFromTable\" was not injected: check your FXML file 'DraftFromDBPanel.fxml'."; assert documentChooser != null : "fx:id=\"documentChooser\" was not injected: check your FXML file 'DraftFromDBPanel.fxml'."; assert dbButtonGroup != null : "fx:id=\"dbButtonGroup\" was not injected: check your FXML file 'DraftFromDBPanel.fxml'."; assert othersDrafts != null : "fx:id=\"othersDrafts\" was not injected: check your FXML file 'DraftFromDBPanel.fxml'."; assert finalDraft != null : "fx:id=\"finalDraft\" was not injected: check your FXML file 'DraftFromDBPanel.fxml'."; assert changeDraft != null : "fx:id=\"changeDraft\" was not injected: check your FXML file 'DraftFromDBPanel.fxml'."; assert notChangeDraft != null : "fx:id=\"notChangeDraft\" was not 
injected: check your FXML file 'DraftFromDBPanel.fxml'."; assert historyCheckbox != null : "fx:id=\"historyCheckbox\" was not injected: check your FXML file 'DraftFromDBPanel.fxml'."; assert yearChooser != null : "fx:id=\"yearChooser\" was not injected: check your FXML file 'DraftFromDBPanel.fxml'."; assert drafterChooser != null : "fx:id=\"drafterChooser\" was not injected: check your FXML file 'DraftFromDBPanel.fxml'."; bindComponents(); initializeYearChooser(); initializeDraftersChooser(); initializeDocumentChooser(); initializeSelectedDraftsTable(); } private void bindComponents() { drafterChooser.visibleProperty().bind(othersDrafts.selectedProperty()); drafterChooser.managedProperty().bind(drafterChooser.visibleProperty()); changeDraft.visibleProperty().bind(historyCheckbox.selectedProperty().not().and(finalDraft.selectedProperty())); changeDraft.managedProperty().bind(changeDraft.visibleProperty()); notChangeDraft.visibleProperty().bind(historyCheckbox.selectedProperty().not().and(finalDraft.selectedProperty())); notChangeDraft.managedProperty().bind(notChangeDraft.visibleProperty()); yearChooser.visibleProperty().bind(historyCheckbox.selectedProperty()); yearChooser.managedProperty().bind(yearChooser.visibleProperty()); } private void initializeYearChooser() { yearChooser.setCellFactory(combobox -> createYearListCellFactory()); yearChooser.setConverter(createSessionYearConverter()); } private ListCell<YearEntryDO> createYearListCellFactory() { return new ListCell<YearEntryDO>() { @Override protected void updateItem(YearEntryDO item, boolean empty) { super.updateItem(item, empty); if (item == null || empty) { setText(null); } else { setText(item.getYearNum() + " " + item.getYearPart()); } } }; } private StringConverter<YearEntryDO> createSessionYearConverter() { return new StringConverter<YearEntryDO>() { @Override public String toString(YearEntryDO session) { if (session == null) { return null; } else { return session.getYearNum() + " " + 
session.getYearPart(); } } @Override public YearEntryDO fromString(String string) { return null; } }; } private void initializeDraftersChooser() { drafterChooser.setCellFactory(combobox -> createDraftersChooserCellFactory()); drafterChooser.setConverter(createDraftersChooserConverter()); } private ListCell<String> createDraftersChooserCellFactory() { return new ListCell<String>() { @Override protected void updateItem(String item, boolean empty) { super.updateItem(item, empty); if (item == null || empty) { setText(null); } else { setText(item); } } }; } private StringConverter<String> createDraftersChooserConverter() { return new StringConverter<String>() { @Override public String toString(String initials) { if (initials == null) { return null; } else { return initials; } } @Override public String fromString(String string) { return null; } }; } private void initializeDocumentChooser() { documentChooser.setCellFactory(draftChooser -> createDocumentChooserCellFactory()); documentChooser.setConverter(createDocumentChooserConverter()); } private ListCell<DraftPK> createDocumentChooserCellFactory() { return new ListCell<DraftPK>() { @Override protected void updateItem(DraftPK item, boolean empty) { super.updateItem(item, empty); if (item == null || empty) { setText(null); } else { setText(item.getDraftName()); } } }; } private StringConverter<DraftPK> createDocumentChooserConverter() { return new StringConverter<DraftPK>() { @Override public String toString(DraftPK draft) { if (draft == null) { return null; } else { return draft.getDraftName(); } } @Override public DraftPK fromString(String string) { return null; } }; } void initData(DraftFromDBPanelModel model) { this.model = model; this.currentYear = model.getCurrentYear(); initializeDocumentChooserItems(); } private void initializeDocumentChooserItems() { documentChooser.setItems(getDrafts()); } private ObservableList<DraftPK> getDrafts() { return model.getDrafts(getSelectedYear(), getCurrentUser()); } private 
YearEntryDO getSelectedYear() { YearEntryDO year; if (historyCheckbox.isSelected()) { year = yearChooser.getSelectionModel().getSelectedItem(); } else { year = getCurrentYear(); } return year; } private YearEntryDO getCurrentYear() { return currentYear; } private String getCurrentUser() { String user; if (othersDrafts.isSelected()){ user = getSelectedUser(); } else { user = model.getCurrentUserInitials(); } return user; } private String getSelectedUser() { return drafterChooser.getSelectionModel().getSelectedItem(); } private void initializeSelectedDraftsTable() { selectedDocumentsTable.getSelectionModel().setSelectionMode(SelectionMode.MULTIPLE); selectedDocuments = FXCollections.observableArrayList(); draftNameColumn.setCellValueFactory(cellData -> new ReadOnlyStringWrapper(cellData.getValue().getDraftName())); authorColumn.setCellValueFactory(cellData -> new ReadOnlyStringWrapper(cellData.getValue().getDrftrInitls())); draftTypeColumn.setCellValueFactory(cellData -> new ReadOnlyStringWrapper(Integer.toString(cellData.getValue().getDrafttypeId()))); sessionColumn.setCellValueFactory(cellData -> new ReadOnlyStringWrapper(Integer.toString(cellData.getValue().getSessYr()))); draftNameColumn.setCellFactory(column -> new TableCell<DraftPK, String>() { @Override protected void updateItem(String item, boolean empty) { super.updateItem(item, empty); if (item == null || empty) { setText(null); setStyle(""); } else { setText(item); } } }); selectedDocumentsTable.setItems(selectedDocuments); } //Begin Action Listeners @FXML private void setYearChooser(ActionEvent event) { yearChooser.setItems(model.getYears()); event.consume(); } @FXML private void handleYearSelection(ActionEvent actionEvent) { if (finalDraft.isSelected()){ updateDraftListWithFinalDrafts(actionEvent); } else { updateDraftList(actionEvent); } } @FXML private void updateDraftListWithFinalDrafts(ActionEvent actionEvent) { setDraftSelectionOptions(model.getFinalDrafts(getSelectedYear())); actionEvent.consume(); 
} @FXML private void updateDraftList(ActionEvent event) { ObservableList<DraftPK> drafts = getDrafts(); setDraftSelectionOptions(drafts); event.consume(); } private void setDraftSelectionOptions(ObservableList<DraftPK> drafts) { ObservableList<DraftPK> items = documentChooser.getItems(); items.clear(); items.setAll(drafts); } @FXML private void setDrafterChooser() { drafterChooser.setItems(model.getDrafters()); } @FXML private void toggleChangeDraft(ActionEvent event) { changeDraft.setSelected(!notChangeDraft.isSelected()); event.consume(); } @FXML private void toggleNotChangeDraft(ActionEvent event) { notChangeDraft.setSelected(!changeDraft.isSelected()); event.consume(); } @FXML private void addSelectedDoc(ActionEvent event) { if (documentChooser.getValue() != null) { DraftPK selectedDoc = getSelectedDocumentFromComboBox(); this.selectedDocuments.add(selectedDoc); event.consume(); } } private DraftPK getSelectedDocumentFromComboBox() { return documentChooser.getSelectionModel().getSelectedItem(); } @FXML private void removeSelectedDocFromTable(ActionEvent event) { ObservableList<DraftPK> selectedDrafts = getSelectedDraftsFromTable(); selectedDocuments.removeAll(selectedDrafts); event.consume(); } /** * Retrieves drafts that are currently selected by the user in the table. Intended for use in the removal of the * selected drafts. 
(As opposed to retrieving all of the documents in the table) * * @return ObservableList of DraftPKs */ private ObservableList<DraftPK> getSelectedDraftsFromTable() { return selectedDocumentsTable.getSelectionModel().getSelectedItems(); } public ObservableList<DraftPK> retrieveAllDocs() { return selectedDocuments; } } DraftFromDBPanel.fxml <?xml version="1.0" encoding="UTF-8"?> <?import java.net.URL?> <?import javafx.geometry.Insets?> <?import javafx.scene.control.Button?> <?import javafx.scene.control.CheckBox?> <?import javafx.scene.control.ComboBox?> <?import javafx.scene.control.Label?> <?import javafx.scene.control.RadioButton?> <?import javafx.scene.control.TableColumn?> <?import javafx.scene.control.TableView?> <?import javafx.scene.control.ToggleGroup?> <?import javafx.scene.layout.ColumnConstraints?> <?import javafx.scene.layout.GridPane?> <?import javafx.scene.layout.RowConstraints?> <?import javafx.scene.layout.VBox?> <GridPane maxHeight="-Infinity" maxWidth="-Infinity" prefHeight="233.0" prefWidth="912.0" xmlns="http://javafx.com/javafx/8.0.111" xmlns:fx="http://javafx.com/fxml/1" fx:controller="FXMLControllers.NewDraftFromDBPanelController"> <columnConstraints> <ColumnConstraints halignment="CENTER" maxWidth="220.0" minWidth="34.0" prefWidth="177.0" /> <ColumnConstraints maxWidth="287.0" minWidth="34.0" prefWidth="207.0" /> <ColumnConstraints hgrow="SOMETIMES" maxWidth="336.0" minWidth="10.0" prefWidth="184.0" /> <ColumnConstraints hgrow="SOMETIMES" maxWidth="525.0" minWidth="10.0" prefWidth="427.0" /> </columnConstraints> <rowConstraints> <RowConstraints maxHeight="292.0" minHeight="10.0" prefHeight="252.0" vgrow="SOMETIMES" /> </rowConstraints> <children> <VBox alignment="CENTER" prefHeight="226.0" prefWidth="384.0" GridPane.columnIndex="3"> <children> <TableView fx:id="selectedDocumentsTable" prefHeight="173.0" prefWidth="283.0"> <columns> <TableColumn fx:id="draftNameColumn" text="Draft Name" /> <TableColumn fx:id="sessionColumn" text="Session" /> 
<TableColumn fx:id="authorColumn" prefWidth="75.0" text="Author" /> <TableColumn fx:id="draftTypeColumn" prefWidth="75.0" text="Draft Type" /> </columns> <columnResizePolicy> <TableView fx:constant="CONSTRAINED_RESIZE_POLICY" /> </columnResizePolicy> </TableView> <Button fx:id="removeFromTable" alignment="CENTER" mnemonicParsing="false" onAction="#removeSelectedDocFromTable" text="Remove" textAlignment="CENTER" /> </children> </VBox> <ComboBox id="documentChooser" fx:id="documentChooser" onAction="#addSelectedDoc" prefHeight="25.0" prefWidth="155.0" promptText="Select a document..." GridPane.columnIndex="2" GridPane.halignment="CENTER" /> <VBox alignment="CENTER_LEFT" prefHeight="213.0" prefWidth="163.0" GridPane.columnIndex="1" GridPane.halignment="CENTER" GridPane.valignment="BOTTOM"> <children> <Label text="From Database:" /> <RadioButton id="ownDrafts" mnemonicParsing="false" onAction="#updateDraftList" selected="true" text="Own Drafts"> <toggleGroup> <ToggleGroup fx:id="dbButtonGroup" /> </toggleGroup> </RadioButton> <RadioButton id="othersDrafts" fx:id="othersDrafts" mnemonicParsing="false" onAction="#setDrafterChooser" text="Other's Drafts" toggleGroup="$dbButtonGroup" /> <ComboBox id="drafterChooser" fx:id="drafterChooser" onAction="#updateDraftList" prefWidth="150.0" promptText="Choose a Drafter..." 
/> <RadioButton id="finalDrafts" fx:id="finalDraft" mnemonicParsing="false" onAction="#updateDraftListWithFinalDrafts" text="Final Drafts" toggleGroup="$dbButtonGroup" /> <CheckBox id="changeDraft" fx:id="changeDraft" mnemonicParsing="false" onAction="#toggleNotChangeDraft" styleClass="warningMessage" text="This is a Change Draft" /> <CheckBox id="notChangeDraft" fx:id="notChangeDraft" mnemonicParsing="false" onAction="#toggleChangeDraft" styleClass="warningMessage" text="This is NOT a Change Draft" /> </children> </VBox> <VBox alignment="CENTER_LEFT" prefHeight="213.0" prefWidth="156.0" GridPane.valignment="CENTER"> <children> <CheckBox id="historyCheckbox" fx:id="historyCheckbox" mnemonicParsing="false" onAction="#setYearChooser" text="History Year" /> <ComboBox id="yearChooser" fx:id="yearChooser" onAction="#handleYearSelection" prefHeight="34.0" prefWidth="132.0" promptText="Choose a year..." /> </children> <padding> <Insets top="2.0" /> </padding> </VBox> </children> <stylesheets> <URL value="@DraftFromDbPanel.css" /> </stylesheets> </GridPane> DraftFromDbPanel.css .root { -fx-base: rgb(237,237,229); -fx-background-color: rgb(237,237,229); -fx-text-fill: black; } .check-box { -fx-padding: 2px; -fx-border-insets: 2px; -fx-background-insets: 2px; } .combo-box { -fx-padding: 2px; } .radio-button { -fx-padding: 2px; -fx-border-insets: 2px; -fx-background-insets: 2px; } .warningMessage { -fx-background-color: red; -fx-border-color: black; -fx-border-insets: 5px; -fx-background-insets: 5px; -fx-padding: 5px; } DraftFromDBPanelModel.java package FXMLControllers; import javafx.collections.ObservableList; public interface DraftFromDBPanelModel { ObservableList<YearEntryDO> getYears(); ObservableList<String> getDrafters(); YearEntryDO getCurrentYear(); String getCurrentUserInitials(); ObservableList<DraftPK> getDrafts(YearEntryDO currentYear, String currentUserInitials); ObservableList<DraftPK> getFinalDrafts(YearEntryDO year); } MockNewDraftFromDBPanelModel.java 
package FXMLControllers; import javafx.collections.FXCollections; import javafx.collections.ObservableList; public class MockNewDraftFromDBPanelModel implements DraftFromDBPanelModel { @Override public ObservableList<YearEntryDO> getYears() { ObservableList<YearEntryDO> sessions = FXCollections.observableArrayList(); sessions.add(new YearEntryDO("2017", "1st")); sessions.add(new YearEntryDO("2016", "2nd")); sessions.add(new YearEntryDO("2015", "1st")); sessions.add(new YearEntryDO("2014", "2nd")); sessions.add(new YearEntryDO("2013", "1st")); sessions.add(new YearEntryDO("2012", "2nd")); sessions.add(new YearEntryDO("2011", "1st")); sessions.add(new YearEntryDO("2010", "2nd")); sessions.add(new YearEntryDO("2009", "1st")); sessions.add(new YearEntryDO("2008", "2nd")); return sessions; } @Override public ObservableList<String> getDrafters() { ObservableList<String> drafters = FXCollections.observableArrayList(); drafters.add("ABC"); drafters.add("DEF"); drafters.add("GHI"); drafters.add("JKL"); drafters.add("MNO"); drafters.add("PQR"); drafters.add("STU"); drafters.add("VWX"); drafters.add("YNZ"); return drafters; } @Override public YearEntryDO getCurrentYear() { return new YearEntryDO("2018", "2nd"); } @Override public String getCurrentUserInitials() { return "PBJ"; } @Override public ObservableList<DraftPK> getDrafts(YearEntryDO year, String userInitials) { ObservableList<DraftPK> drafts = FXCollections.observableArrayList(); drafts.add(new DraftPK(0, "Draft0", Integer.parseInt(year.getYearNum()), userInitials)); drafts.add(new DraftPK(1, "Draft1", Integer.parseInt(year.getYearNum()), userInitials)); drafts.add(new DraftPK(2, "Draft2", Integer.parseInt(year.getYearNum()), userInitials)); drafts.add(new DraftPK(3, "Draft3", Integer.parseInt(year.getYearNum()), userInitials)); drafts.add(new DraftPK(4, "Draft4", Integer.parseInt(year.getYearNum()), userInitials)); drafts.add(new DraftPK(5, "Draft5", Integer.parseInt(year.getYearNum()), userInitials)); drafts.add(new 
DraftPK(6, "Draft6", Integer.parseInt(year.getYearNum()), userInitials)); return drafts; } @Override public ObservableList<DraftPK> getFinalDrafts(YearEntryDO year) { ObservableList<DraftPK> finalDrafts = FXCollections.observableArrayList(); finalDrafts.add(new DraftPK(0, "FinalDraft0", Integer.parseInt(year.getYearNum()))); finalDrafts.add(new DraftPK(1, "FinalDraft1", Integer.parseInt(year.getYearNum()))); finalDrafts.add(new DraftPK(2, "FinalDraft2", Integer.parseInt(year.getYearNum()))); finalDrafts.add(new DraftPK(3, "FinalDraft3", Integer.parseInt(year.getYearNum()))); finalDrafts.add(new DraftPK(4, "FinalDraft4", Integer.parseInt(year.getYearNum()))); finalDrafts.add(new DraftPK(5, "FinalDraft5", Integer.parseInt(year.getYearNum()))); finalDrafts.add(new DraftPK(6, "FinalDraft6", Integer.parseInt(year.getYearNum()))); return finalDrafts; } } DraftPK.java package FXMLControllers; public class DraftPK { private String draftName; private String drafterInitials; private int draftTypeID; private int sessYear; public DraftPK(int draftTypeID, String draftName, int sessYear, String userInitials) { this.draftTypeID = draftTypeID; this.drafterInitials = userInitials; this.draftName = draftName; this.sessYear = sessYear; } public DraftPK(int draftTypeID, String draftName, int sessYear) { this.draftTypeID = draftTypeID; this.draftName = draftName; this.sessYear = sessYear; } public String getDraftName() { return draftName; } public String getDrftrInitls() { return drafterInitials; } public int getDrafttypeId() { return draftTypeID; } public int getSessYr() { return sessYear; } } YearEntryDO.java package FXMLControllers; public class YearEntryDO { private String yearNum; private String yearPart; YearEntryDO(String yearNum, String yearPart){ this.yearNum = yearNum; this.yearPart = yearPart; } public String getYearNum() { return yearNum; } public String getYearPart() { return yearPart; } } Additional Notes The GUI design has been handed down to me and the users and 
management are particularly averse to change, but I'll gladly accept recommendations for an improved UX. The documents are stored as XML in the database. Users should be able to choose documents from any combination of criteria: other users, different years, draft types, first draft/final draft, etc. Answer: BE DRY In your MockNewDraftFromDBPanelModel class you have a lot of code repetition and near duplication. Cut this out by using loops, e.g. for getFinalDrafts: @Override public ObservableList<DraftPK> getFinalDrafts(YearEntryDO year) { ObservableList<DraftPK> finalDrafts = FXCollections.observableArrayList(); finalDrafts.add(new DraftPK(0, "FinalDraft0", Integer.parseInt(year.getYearNum()))); finalDrafts.add(new DraftPK(1, "FinalDraft1", Integer.parseInt(year.getYearNum()))); finalDrafts.add(new DraftPK(2, "FinalDraft2", Integer.parseInt(year.getYearNum()))); finalDrafts.add(new DraftPK(3, "FinalDraft3", Integer.parseInt(year.getYearNum()))); finalDrafts.add(new DraftPK(4, "FinalDraft4", Integer.parseInt(year.getYearNum()))); finalDrafts.add(new DraftPK(5, "FinalDraft5", Integer.parseInt(year.getYearNum()))); finalDrafts.add(new DraftPK(6, "FinalDraft6", Integer.parseInt(year.getYearNum()))); return finalDrafts; } may be reduced to @Override public ObservableList<DraftPK> getFinalDrafts(YearEntryDO year) { ObservableList<DraftPK> finalDrafts = FXCollections.observableArrayList(); for (int i = 0; i <= 6; i++) { finalDrafts.add(new DraftPK(i, "FinalDraft" + i, Integer.parseInt(year.getYearNum()))); } return finalDrafts; } The same obviously applies to getDrafts as well as getYears. Follow naming conventions On the topic of those methods: typically, get methods, much like properties in other languages like C#, simply return an already-set value. Seeing get methods where work is being done is unexpected; I would suggest renaming these to createFinalDrafts, or even acquireFinalDrafts if you want a dictionary equivalent of 'get'.
By the same token, getYearNum returning a string isn't intuitive. In fact, it's surprising that you elected to do that to begin with, since as far as I can see, everywhere you use it you're using it as an integer and have to keep calling Integer.parseInt. Control your controller Your controller class NewDraftFromDBPanelController has an initializer filled with asserts. As you may know, asserts are not reliable, and you're actually better off not having this code at all: JavaFX would fail to start and throw an error if any of the elements could not inflate from the FXML.
{ "domain": "codereview.stackexchange", "id": 26056, "tags": "java, mvc, javafx" }
Skin Detection From a Face Image
Question: I have photos of people's faces, and I want to detect each person's skin color for later use. I don't want to use predetermined color ranges; instead, I want to determine each person's color and then use it to detect occlusions in this person's later photos. The training face images are taken under the same conditions, but testing will be with photos of different conditions. My first thought is to use color histograms of these images, and then the peak will be this person's skin color. Is there a smarter way to do this? I'm using Python and OpenCV, but I can use other tools if needed. Answer: It'd be a good idea to extract the face segment first. In OpenCV there are predefined Haar cascades for face detection, though you can run into some problems using them: photos really need to be well exposed, and they're also prone to false positives as hell. https://realpython.com/face-recognition-with-python/ Slightly better, but still vanilla, there's dlib. The worst problem with both is that you'll only get a bounding box, so background may creep in and mess up your color analysis. To get scores like those in the seminal papers, you've got to go with a DNN; here's some example. I would propose a slightly different approach: take your data set, extract faces using an existing tool, and label skin colour manually. Then train a CNN. Just be careful when you augment the data set; some operations might be damaging, like swapping channels. Color-based analysis is tough. You need to think about covering different exposure values, contrast, white balance, saturation, etc.
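A minimal sketch of the histogram-peak idea from the question (pure NumPy on a synthetic "face crop"; in a real pipeline you would first crop the face with a detector, as the answer suggests, and run this per channel on that crop):

```python
import numpy as np

def dominant_color(crop, bins=32):
    """Peak of a per-channel histogram over an RGB crop -> (r, g, b)."""
    peak = []
    for c in range(3):
        hist, edges = np.histogram(crop[..., c], bins=bins, range=(0, 256))
        i = int(np.argmax(hist))
        peak.append(0.5 * (edges[i] + edges[i + 1]))  # centre of the peak bin
    return tuple(peak)

# Synthetic "face crop": mostly one skin-like tone plus Gaussian noise
rng = np.random.default_rng(0)
crop = np.full((64, 64, 3), (200, 150, 120), dtype=float)
crop += rng.normal(0, 5, crop.shape)
crop = crop.clip(0, 255)

r, g, b = dominant_color(crop)
print(f"dominant colour ~ ({r:.0f}, {g:.0f}, {b:.0f})")
```

The recovered peak lands within one bin width of the true tone; narrower bins trade noise robustness for precision.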
{ "domain": "datascience.stackexchange", "id": 6675, "tags": "python, computer-vision, opencv" }
Question on an equality in deriving covariance matrix of a gaussian state
Question: I am reading Quantum continuous variables by A.Serafini. (Equations are not rendered in this link. Please look at the second link) I have a question on the last equality of the equation (3.49) which the author comments; "Thanks to the use we made of group representations, ... bring the unitary operators outside of the expectation..." as you can check in the link. Here is the equality I want to prove in (3.49): $$ \text{Tr} \bigg[ \bigotimes_{j=1}^n \sum_{m=0}^\infty e^{-\xi_j m} |m\rangle_{j} \ {}_{j} \langle m| \ \ \{ S \hat{\textbf{r}}, \hat{\textbf{r}}^T S^T \} \bigg]=S\ \text{Tr} \bigg[ \bigotimes_{j=1}^n \sum_{m=0}^\infty e^{-\xi_j m} |m\rangle_{j} \ {}_{j} \langle m| \ \ \{ \hat{\textbf{r}}, \hat{\textbf{r}}^T \} \bigg] S^T $$ where $S$ is a symplectic $2n \times 2n$ real matrix and other relevant definitions are given below. I don't see how the author's comments lead to $S$ and its transpose $S^T$ coming out of the trace. Relevant definitions For simplicity, I will assume $n=1$ from now on. Then $ \hat{\textbf{r}} = \begin{pmatrix} \hat x \\ \hat p \end{pmatrix} $ and $ \hat{\textbf{r}}^T = \big ( \hat x ,\hat p \big ) $ where the hat represents operators in Hilbert space and superscript $T$ is the transpose. Also, by definition, for any suitable $\hat {\textbf{a}}$ $$\{\ \hat{\textbf{a}},\hat{\textbf{a}}^T\} := \hat{\textbf{a}} \hat{\textbf{a}}^T +(\hat{\textbf{a}} \hat{\textbf{a}}^T)^T$$ so that $$\{\ \hat{\textbf{r}},\hat{\textbf{r}} ^T\} = \hat{\textbf{r}} \hat{\textbf{r}}^T +(\hat{\textbf{r}} \hat{\textbf{r}}^T)^T= \begin{pmatrix} \hat x \\ \hat p \end{pmatrix} \big ( \hat x ,\hat p \big) +\bigg( \begin{pmatrix} \hat x \\ \hat p \end{pmatrix} \big ( \hat x ,\hat p \big) \bigg)^T = \begin{pmatrix} 2\hat x^2 & \hat x \hat p +\hat p \hat x \\ \hat p \hat x + \hat x \hat p & 2\hat p^2 \end{pmatrix}$$ . 
$S\in M_{2 \times 2} (\mathbb R)$ is just a real symplectic matrix, which by definition satisfies $S \Omega S^T = \Omega$ where $\Omega := \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}$. Note $S \hat{\textbf{r}}$ is a two-by-one column vector with linear combinations of the operators $\hat x$ and $\hat p$ as its elements. Finally, $\sum_{m=0}^\infty e^{-\xi m} |m\rangle \langle m|$ with $\xi >0$ is just the density operator $\rho$ expressed in the Fock basis. I am sure that the trace of a matrix having operators as elements is defined to be done element-wise. Hence, in the case $n=1$, the trace above results in a two-by-two matrix having operators as elements. Any help is appreciated. Answer: I think I got the answer. Writing down a problem always helps. The very definition of the trace of a matrix having operators as elements implies that the trace can "commute" with matrices having numbers as elements. More precisely, for a matrix $\hat{\textbf{M}}$ with operatorial elements and a real matrix $S$, since the trace acts only on operators, $$ \text{ij}^{th}\ \text{element of }\ \ \text{Tr} \bigg[S \hat{\textbf{M}} S^T \bigg] := \text{Tr} \bigg[\sum_{l,m}S_{il} \hat{{M}}_{lm} S^T_{mj} \bigg]= \sum_{l,m}S_{il} \text{Tr} \big[ \hat M_{lm} \big]S^T_{mj} = \text{ij}^{th}\ \text{element of } \ \ S\, \text{Tr} \big[\hat{\textbf{M}} \big] S^T, $$ proving $$ \text{Tr} \bigg[S \hat{\textbf{M}} S^T \bigg] =S\, \text{Tr} \big[\hat{\textbf{M}} \big] S^T. \ \ \ (1)$$ Now the original equation becomes, $\hat \rho$ being the density operator, $$\text{Tr}\big[ \hat \rho \{ S \hat{\textbf{r}}, \hat{\textbf{r}}^T S^T \} \big]= \text{Tr}\big[ \hat \rho S\{ \hat{\textbf{r}}, \hat{\textbf{r}}^T \}S^T \big]= \text{Tr}\big[ S \hat \rho\{ \hat{\textbf{r}}, \hat{\textbf{r}}^T \}S^T \big]= S\, \text{Tr}\big[ \hat \rho\{ \hat{\textbf{r}}, \hat{\textbf{r}}^T \}\big]S^T $$ where we have used (1) in the last equality.
Note that $\hat \rho \{ \hat{\textbf{r}}, \hat{\textbf{r}}^T \}$ is defined as $\hat \rho$ multiplying element-wise into the matrix $\{ \hat{\textbf{r}}, \hat{\textbf{r}}^T \}$ having operators as elements. Note also the use of boldface to denote a finite-dimensional vector (or matrix) having operator elements, not numbers.
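As a sanity check, identity (1) can be verified numerically by representing a matrix with operatorial entries as a numpy array of shape (2, 2, d, d), where the trailing axes play the role of a d-dimensional truncation of the Hilbert space. This is my own sketch, not from the book: the dimension d and the random entries are arbitrary, and note that the identity holds for any real matrix $S$, symplectic or not.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5                                   # illustrative Hilbert-space truncation
M = rng.standard_normal((2, 2, d, d))   # 2x2 "matrix" whose entries are d x d operators
S = rng.standard_normal((2, 2))         # any real 2x2 matrix

def op_trace(A):
    """Element-wise trace: trace each operator entry, leaving a 2x2 numeric matrix."""
    return np.trace(A, axis1=-2, axis2=-1)

# (S M S^T)_{ij} = sum_{l,m} S_{il} M_{lm} S_{jm}, acting only on the 2x2 indices
SMS = np.einsum('il,lmab,jm->ijab', S, M, S)

lhs = op_trace(SMS)              # Tr[S M S^T]
rhs = S @ op_trace(M) @ S.T      # S Tr[M] S^T
```

The two sides agree to machine precision, which is exactly the "trace commutes with numeric matrices" argument in component form.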
{ "domain": "physics.stackexchange", "id": 60451, "tags": "quantum-mechanics, quantum-information" }
Scale Factors via Ellipsoidal Coordinate System
Question: In Morse & Feshbach (P512 - 514) they show how 10 different orthogonal coordinate systems (mentioned on this page) are derivable from the confocal ellipsoidal coordinate system $(\eta,\mu,\nu)$ by trivial little substitutions, derivable in the sense that we can get explicit expressions for our Cartesian $x$, $y$ & $z$ in terms of the coordinates of some coordinate system by simply modifying the expressions for $x$, $y$ & $z$ which are expressed in terms of ellipsoidal coordinates. Thus given that $$x = \sqrt{\frac{(\eta^2 - a^2)(\mu^2 - a^2)(\nu^2 - a^2)}{a^2(a^2-b^2)}}, y = ..., z = ...$$ in ellipsoidal coordinates, we can derive, for instance, the Cartesian coordinate system by setting $ \eta ^2 = \eta' ^2 + a^2 $, $\mu^2 = \mu'^2 + b^2$, $\nu = \nu'$, $b = a\sin(\theta)$ & letting $a \rightarrow \infty$ to get that $x = \eta'$, $y = \mu'$ & $z = \nu'$. Substitutions like these are given to derive a ton of other useful coordinate systems. I don't see why one shouldn't be able to use the exact same substitutions on the scale factors. Thus given $$h_1 = \sqrt{\frac{(\eta^2 - \mu^2)(\eta^2 - \nu^2)}{(\eta^2 - a^2)(\eta^2 - b^2)}}$$ I don't see any reason why the exact same substitutions should not give the Cartesian scale factors, namely that $h_1 = 1$ in this case, yet I can't do it with the algebra - I can't get it to work. You get these $a^4$ factors which you just cannot get rid of, thus it seems like one can't derive the scale factors also by mere substitution... Now it might just be late & that I've worked too much, hence my question is: Is it possible to get the scale factors for orthogonal curvilinear coordinate systems by simple substitutions into the scale factor formulae for the ellipsoidal coordinate system, analogous to the way one can derive the formulae for Cartesian components in terms any 'standard' orthogonal coordinate system by substitutions into the formulae for them expressed in terms of ellipsoidal coordinates? If not, why not? 
If there is, then the gradient, divergence, Laplacian & curl become extremely easy to calculate in the standard orthogonal coordinate systems, & working separation of variables for all the standard PDEs becomes immensely easier; if not, I'd have to derive the scale factors by differentiation of completely crazy formulas (at least there's an easy way to remember any formula in $\vec{r}(u^1,u^2,u^3) = ...$ from which we can derive the scale factors thanks to Morse & Feshbach!!!) Thanks for any help possible. Answer: Okay! I apply a transformation, which converts my comment into a slightly more self-contained answer. The transformations mentioned here are most naturally described by means of tensor calculus or, more generally, differential geometry. It is needed, roughly speaking, when one wants/needs to introduce a coordinate system at each point in space and study the relations between different points and their coordinate systems. In this case there is a coordinate transformation defined from the cartesian system $x^\mu=(x,y,z)$ to ellipsoidal coordinates $x^{\mu'}=(\eta,\mu,\nu)$, and an inverse transformation. $\mu$ and $\mu'$ are indices, which can take values from $1$ to $3$. The transformation matrix $t^{\mu}_{~\mu'}$ is defined as $t^{\mu}_{~\mu'}=\dfrac{\partial x^{\mu}}{\partial x^{\mu'}}$. For the given transformation $t^{\mu}_{~\mu'}$ is diagonal, because $x$ depends only on $\eta$, and so on. The metric tensor with matrix $g_{\mu\nu}$ is a quantity which defines the scalar product of basis vectors, $g_{\mu\nu}\equiv \vec{e}_\mu\cdot\vec{e}_{\nu}$. In cartesian coordinates $x^{\mu}$, therefore, $g_{\mu\nu}=\textrm{diag}\{1,1,1\}$. In primed coordinates, for a given point in space (with its own set of basis vectors) the same definition holds: $g_{\mu'\nu'}=\vec{e}_{\mu'}\cdot\vec{e}_{\nu'}$. From tensor calculus, $g_{\mu'\nu'}$ and $g_{\mu\nu}$ are connected by $g_{\mu'\nu'}=g_{\mu\nu} t^{\mu}_{~\mu'} t^{\nu}_{~\nu'}$, where summation over the same indices is implied. 
Because $t^{\mu}_{~\mu'}$ is diagonal, $g_{\mu'\nu'}$ is also diagonal, as it should be. Hence $\vec{e}_{\mu'}$ are orthogonal. Now, in orthogonal bases $\vec{e}_{\mu}$ scale factors are defined as $h_{\mu}=\sqrt{\vec{e}_{\mu}\cdot \vec{e}_{\mu}}$ (which has some physical meaning if one thinks about decomposing a vector in such a basis), here summation is not implied. In cartesians, therefore, $h_1=1$, whereas in ellipsoidal coordinates $h_{1'}=\sqrt{\vec{e}_{1'}\cdot \vec{e}_{1'}}=\sqrt{g_{1'1'}}=\sqrt{g_{\mu\nu} t^{\mu}_{~1'} t^{\nu}_{~1'}}=\sqrt{g_{11} t^{1}_{~1'} t^{1}_{~1'}}=t^{1}_{~1'}=t^{1}_{~1'} h_1$. Or, alternatively, one can derive (without using $h_1 = 1$) that $h_1=h_{1'}t^{1'}_{~1} = h_{1'}(t^{1}_{~1'})^{-1}$. Using the above, $t^{1}_{~1'}=\dfrac{\eta}{x}=\dfrac{\eta}{\sqrt{\eta^2-a^2}}$. Substituting $t^{1}_{~1'}$ and $h_{1'}$ into $h_1= h_{1'}(t^{1}_{~1'})^{-1}$ and using $\eta\sim a \rightarrow \infty$, one gets $h_1 = 1$. The answer might be hard to read if you have never studied differential geometry, but the key point is that scale factors are not scalars and a simple variable change doesn't work for them. However, as they are defined for orthogonal systems, the transformation rules that they follow are relatively simple.
{ "domain": "physics.stackexchange", "id": 10066, "tags": "mathematical-physics" }
slip1, slip2 in URDF
Question: Hi all, I'm working on a URDF model of a skid-steering robot (Pioneer 3AT). There is a new plugin for controlling this type of robot (see the gazebo_plugins package), so I can command the robot from ROS and it moves. But I have problems with rotational movements - the robot can't rotate. If I define the wheels (collision geometry) as very thin cylinders and apply high torque it "somehow" rotates... If the collision geometry is defined using a mesh it's better, but the robot jumps a bit. The question is - how can I set slip1/slip2 (as setting this is recommended here) using URDF? As far as I know, URDF is translated into SDF so, I guess, it should be somehow possible. Thanks for hints in advance. I'm using Gazebo 1.8 and gazebo_ros_pkgs for ROS integration. Originally posted by ZdenekM on Gazebo Answers with karma: 62 on 2013-06-26 Post score: 1 Answer: URDF has no concept of friction, so you'll have to insert SDF (which is currently specified using the <gazebo> extension). Something like <gazebo reference="r_foot"> <!-- kp and kd for rubber --> <kp>1000000.0</kp> <kd>100.0</kd> <mu1>1.5</mu1> <mu2>1.5</mu2> <fdir1>1 0 0</fdir1> <maxVel>1.0</maxVel> <minDepth>0.00</minDepth> </gazebo> Originally posted by nkoenig with karma: 7676 on 2013-06-27 This answer was ACCEPTED on the original site Post score: 3 Original comments Comment by davetcoleman on 2013-06-27: This is documented here
{ "domain": "robotics.stackexchange", "id": 3350, "tags": "gazebo, skid-steer" }
Confusion about the index of refraction: is it dependent on the frequency or not?
Question: I saw in my course that when light hits a medium, it makes dipoles oscillate with the same frequency $\omega$ as the light. By a classical-mechanics argument, one can show that the index $n=\sqrt{\frac{\epsilon}{\epsilon_0}}$ depends on the frequency, $$n=n(\omega)$$ A wave is scattered by the medium, and the way it is scattered depends on its wavelength. What I don't understand is why we talk about the refractive index $n$ of some medium, since it depends on the nature of the incident wave. Why, for example, do we talk about the index of refraction of water, or glass? This is not clear to me. Any help would be appreciated. Answer: Yes, the index of refraction is a function of frequency. I think, though it would depend upon the actual language used by an author, that many discussions of the index of refraction revolve around optical (visible) frequencies. For visible frequencies, substances like glass and water have a roughly constant index of refraction.
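To make "roughly constant" concrete: over the visible range, dispersion in glass is often summarized by Cauchy's empirical equation $n(\lambda) \approx A + B/\lambda^2$. The sketch below uses coefficients of roughly the right size for a BK7-like crown glass (the exact values here are illustrative, not authoritative); it shows that $n$ changes by well under 1% across the visible band, which is why quoting a single "index of glass" is a reasonable shorthand.

```python
def cauchy_index(wavelength_um, A=1.5046, B=0.00420):
    """Cauchy's equation n = A + B / lambda^2 (lambda in micrometres).
    A and B are rough, illustrative values for a BK7-like crown glass."""
    return A + B / wavelength_um**2

n_blue = cauchy_index(0.486)   # hydrogen F line (blue)
n_red = cauchy_index(0.656)    # hydrogen C line (red)
```

Both values sit near 1.52, with the blue index slightly larger (normal dispersion), so a single quoted index is a good first approximation at optical frequencies.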
{ "domain": "physics.stackexchange", "id": 69590, "tags": "optics, electromagnetic-radiation, refraction, frequency, dispersion" }
Nomenclature for deterministic algorithms that might fail
Question: What do you call an algorithm which is:
- deterministic,
- correct whenever it returns an answer,
- and, for some inputs, returns no answer (i.e. fails, in bounded time)?
Such algorithms crop up a lot in cryptographic attacks, for instance, where a cipher is deemed broken when an attack (provably or demonstrably) works "most of the time". I'm working in a different field (coding theory) with an algorithm of the above kind. For random, uniformly distributed input, the probability that the algorithm fails seems to be so low that in practice it can be more or less ignored. However, we have no succinct characterisation of the inputs which cause failure. Previous work on this algorithm has called it "a probabilistic algorithm" but I find this an abuse of the term, since the algorithm is deterministic, once the input is known. Answer: I do not have an authoritative answer, but two proposals. Such algorithms compute partial functions, so you could call them partial or partially correct algorithms. Working off your literature, the class of algorithms you defined is that of Las Vegas algorithms. Even though the implication is that the algorithm is randomised it certainly need not be; even though you have access to random bits you don't have to use them. If you want to stress determinism, you can use deterministic Las Vegas algorithm.
{ "domain": "cs.stackexchange", "id": 4614, "tags": "algorithms, terminology" }
How does the center of gravity work?
Question: In free body diagrams, such as a beam attached horizontally to a wall, $F_g$ is always shown acting on the center of gravity of an object. My question - is this the case in real life, where gravity only acts on this point of the object? Or is gravity acting on all parts of the object, but that point is at the exact center of all the force? Answer: Gravity (treated as homogeneous) acts the same on all parts of the object, but if the object is rigid, internal forces allow the simplification that the centre of mass is where all the force acts. Torque: the first moment of mass about the centre of mass vanishes ($\sum_i m_i \vec{r}_i = 0$ when $\vec{r}_i$ is measured from that point), so a uniform gravitational field produces no net torque about it. If the pivot is at the centre of mass, the object will not turn; it will balance.
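The torque claim is easy to verify numerically: in a uniform field the gravitational torque about the centre of mass is zero for any mass distribution, because the first moment of mass about that point vanishes. A small sketch (masses and positions chosen at random):

```python
import numpy as np

rng = np.random.default_rng(1)
m = rng.uniform(0.5, 2.0, size=10)        # arbitrary point masses
r = rng.standard_normal((10, 3))          # arbitrary positions
g = np.array([0.0, 0.0, -9.81])           # uniform gravitational field

R = (m[:, None] * r).sum(axis=0) / m.sum()             # centre of mass
torque = np.cross(r - R, m[:, None] * g).sum(axis=0)   # net gravity torque about R
```

The net torque comes out as zero (up to floating-point round-off), no matter how the masses are arranged.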
{ "domain": "physics.stackexchange", "id": 43148, "tags": "newtonian-mechanics, newtonian-gravity, reference-frames, free-body-diagram" }
What link move_group get current pose refers to?
Question: I have a Techman robot arm and I write a python script to read the current pose (pos : xyz, quar:xyzw) but I don't know if it's the base or end effector link that the position corresponds to. Anyone know how to check the link it corresponds to ? how to set which link I want to read ? Originally posted by TapleFlib on ROS Answers with karma: 39 on 2023-04-21 Post score: 0 Answer: getCurrentPose (C++)/ get_current_pose (Python) reports the pose of the end effector link. The frame it is referenced to can be read from the header of PoseStamped. For looking up arbitrary links you could just use tfs. If it's a link on your robots kinematic chain you could also reset the end effector link before getting the pose. Originally posted by pcoenen with karma: 249 on 2023-04-21 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by TapleFlib on 2023-04-22: I tried to print the output of PointStamped with print(geometry_msgs.msg.PoseStamped()) and got the following output : position: x: 2.891971438201711e-07 y: -0.24444999999999026 z: 0.8917000428590904 orientation: x: 0.7071065501076503 y: 1.1553940595954334e-07 z: 1.1553948151418447e-07 w: 0.7071070122653501 header: seq: 0 stamp: secs: 0 nsecs: 0 frame_id: '' pose: position: x: 0.0 y: 0.0 z: 0.0 orientation: x: 0.0 y: 0.0 z: 0.0 w: 0.0 why is it blank below the header? Comment by TapleFlib on 2023-04-22: and do you know how to set which link I want to control with set pose target ? I'm quite new to ROS and moveit sorry. Comment by TapleFlib on 2023-04-22: is these position and orientation tool0's position or is it already being transformed relative to the base? should I publish a tf between tool0 to base? how? 
(sorry to write it as answer to be able to send image, I'll change it once I got the real answer) Comment by pcoenen on 2023-04-24: Did you print the return value of get_current_pose (which is of type PoseStamped) or print(geometry_msgs.msg.PoseStamped()), because the zeroed position looks like you did the latter. and do you know how to set which link I want to control with set pose target ? If you look through the move_group_commander code on github (linked above) you will also find a set_end_effector_link function. is these position and orientation tool0's position or is it already being transformed relative to the base? I believe the pose is relative to the global frame set in rviz. The tf from base to tool0 should already exist. You can use rosrun rqt_tf_tree rqt_tf_tree to check which frames are available. Comment by TapleFlib on 2023-04-24: Got it, thankyou so much ! this is enough and really helpful !
{ "domain": "robotics.stackexchange", "id": 38354, "tags": "ros, python, pose, moveit, movegroup" }
Can protein structure change depending on an organism's diet?
Question: In my understanding, proteins are built using information carried by RNA. So a given protein should always have the same structure in a given organism, as the DNA of this organism does not change. I'm asking this question because people told me that "cow milk protein become longer because how we feed them today". But I don't understand how a protein can become longer or shorter. Info: I have no background in biology. I'm not trying to solve a problem, just to get a better understanding. Answer: While it is theoretically possible for a protein to change size (based on length) because of nutrition, I don't think that's happening here. You are right: DNA encodes information that is transcribed to RNA, which is translated into proteins. Proteins are made of a finite number of amino acids, which are their building blocks. Proteins can be modified (and can be cleaved to make the protein shorter), but I think this claim of "cow milk protein becomes longer" is a misunderstanding, because there is not a single "cow milk protein", but rather many. There are many different proteins in cow milk, and most of them are casein proteins. source The milk protein likely at the root of this claim is beta-casein. It was a hot topic in the 1990s, because there are a dozen different variants of this beta-casein protein. Two of these variants are the most common in milk and have been studied a lot: variants A1 and A2. The only difference between them is a single amino-acid change at position 67 (remember that proteins are made of amino acids). Variant A1 has a histidine at that position, but variant A2 has a proline. What does that mean? It means that A1 can be cut (in your body) to produce a smaller protein fragment (called a peptide). Because of the proline in the A2 variant, it cannot be cut like the A1 variant. The peptide resulting from A1 getting cut is called BCM7. 
In short:
- beta-casein variant A1: can be cut to produce BCM7
- beta-casein variant A2: gets cut far less than A1, so far less BCM7 is produced.
So, what is BCM7? It has been reported to be linked to several diseases, but the European Food Safety Authority (EFSA) published this report, which concludes that there is no link between BCM7 and non-communicable diseases. Despite this, there is still a lot of talk about A1 vs A2 in the dairy community. While websites such as this appear compelling, remember the EFSA report! This paper is open access and has the information about these variants that is summarized above. So what does all this have to do with "cow milk protein" changing size? Maybe more cows used as dairy cows have the A2 variant, which would not produce as much BCM7 (the small peptide). Since A2 is not cleaved to produce BCM7, it could be thought of as "longer"... at least relative to only a few thousand years ago. The beta-casein A2 variant is thought to be the original version of beta-casein from undomesticated cows. So, if this claim was originally about beta-casein, you can respond that "yes, the beta-casein protein can be thought of as longer because it is not cut to produce the BCM7 peptide as much as the A1 variant is, but it is also the original beta-casein size." Whether it was 10,000 years ago or just now, the A2 beta-casein variant is the same size. tl;dr / Good reading from the California Dairy Research Foundation.
{ "domain": "biology.stackexchange", "id": 7951, "tags": "proteins, gene-expression" }
Iterating through 128-byte records in a file
Question: I need to read records from a flat file, where each 128 bytes constitutes a logical record. The calling module of this below reader does just the following. while(iterator.hasNext()){ iterator.next(); //do Something } Means there will be a next() call after every hasNext() invocation. Now here goes the reader. public class FlatFileiteratorReader implements Iterable<String> { FileChannel fileChannel; public FlatFileiteratorReader(FileInputStream fileInputStream) { fileChannel = fileInputStream.getChannel(); } private class SampleFileIterator implements Iterator<String> { Charset charset = Charset.forName("ISO-8859-1"); ByteBuffer byteBuffer = MappedByteBuffer.allocateDirect(128 * 100); LinkedList<String> recordCollection = new LinkedList<String>(); String record = null; @Override public boolean hasNext() { if (!recordCollection.isEmpty()) { record = recordCollection.poll(); return true; } else { try { int numberOfBytes = fileChannel.read(byteBuffer); if (numberOfBytes > 0) { byteBuffer.rewind(); loadRecordsIntoCollection(charset.decode(byteBuffer) .toString().substring(0, numberOfBytes), numberOfBytes); byteBuffer.flip(); record = recordCollection.poll(); return true; } } catch (IOException e) { // Report Exception. Real exception logging code in place } } try { fileChannel.close(); } catch (IOException e) { // TODO Report Exception. Logging } return false; } @Override public String next() { return record; } @Override public void remove() { // NOT required } /** * * @param records * @param length */ private void loadRecordsIntoCollection(String records, int length) { int numberOfRecords = length / 128; for (int i = 0; i < numberOfRecords; i++) { recordCollection.add(records.substring(i * 128, (i + 1) * 128)); } } } @Override public Iterator<String> iterator() { return new SampleFileIterator(); } } The code reads 80 mb of data in 1.2 seconds on a machine with 7200 RPM HDD, with Sun JVM and running Windows XP OS. 
But I'm not that satisfied with the code I have written. Is there any other way to write this in a better way (especially the decoding to character set and taking only the bytes that has been read, I mean the charset.decode(byteBuffer) .toString().substring(0, numberOfBytes) part)? Answer: There is no particular advantage to using a direct buffer here. You have to get the data across the JNI boundary into Java-land, so you may as well use a normal ByteBuffer. Direct buffers are for copying data when you don't want to look at it yourself really. Use a ByteBuffer that is a multiple of 512, e.g. 8192, so you aren't driving the I/O system and disk controller mad with reads across sector boundaries. In this case I would think about using 128*512 to agree with your record length. The .substring(0, numberOfBytes) is unnecessary. After the read and rewind, the ByteBuffer's position is zero and its limit equals numberOfBytes, so the charset.decode() operation is already delivering the correct amount of data. You're assuming you didn't get a short read from FileChannel.read(). You can't assume that, there is nothing in the Javadoc to support that assumption. You need to read until the buffer is full or you get EOF. Having said all that, I would also experiment with a BufferedReader around an InputStreamReader around the FileInputStream, and just read 128 chars at a time. You might get a surprise as to which is faster.
{ "domain": "codereview.stackexchange", "id": 684, "tags": "java, performance, io, iterator, file-structure" }
MySQL homeowner database
Question: This database must be able to track multiple home owners, who may own many properties, have many phone numbers, and their mailing address may depend on what month we're in. Each property may have more than one owner. Please point out all bad coding practices. create table home_owner ( owner_id SMALLINT UNSIGNED AUTO_INCREMENT PRIMARY KEY, first_name VARCHAR(10), last_name VARCHAR(20) ); create table email ( email_id SMALLINT UNSIGNED PRIMARY KEY, email_address VARCHAR(40) ); create table owner_email ( owner_id SMALLINT UNSIGNED, email_id SMALLINT UNSIGNED, PRIMARY KEY emails_PK(owner_id, email_id), CONSTRAINT owner_email_FK FOREIGN KEY(owner_id) REFERENCES home_owner(owner_id) ON DELETE CASCADE, CONSTRAINT email_owner_FK FOREIGN KEY(email_id) REFERENCES email(email_id) ON DELETE CASCADE ); create table phone_number ( number_id SMALLINT UNSIGNED PRIMARY KEY, phone_number VARCHAR(10) ); create table owner_number ( owner_id SMALLINT UNSIGNED, number_id SMALLINT UNSIGNED, PRIMARY KEY numbers_PK(owner_id, number_id), CONSTRAINT owner_number_FK FOREIGN KEY(owner_id) REFERENCES home_owner(owner_id) ON DELETE CASCADE, CONSTRAINT number_owner_FK FOREIGN KEY(number_id) REFERENCES phone_number(number_id) ON DELETE CASCADE ); create table seasonal_address ( seasonal_id SMALLINT UNSIGNED PRIMARY KEY, address VARCHAR(50), months VARCHAR(27) ); create table contact_info ( owner_id SMALLINT UNSIGNED, seasonal_id SMALLINT UNSIGNED, PRIMARY KEY contact_PK(owner_id, seasonal_id), CONSTRAINT owner_info_FK FOREIGN KEY(owner_id) REFERENCES home_owner(owner_id) ON DELETE CASCADE, CONSTRAINT seasonal_info_FK FOREIGN KEY(seasonal_id) REFERENCES seasonal_address(seasonal_id) ON DELETE CASCADE ); create table property ( property_id SMALLINT UNSIGNED PRIMARY KEY, address VARCHAR(50), lot_number SMALLINT UNSIGNED ); create table owner_property ( owner_id SMALLINT UNSIGNED, property_id SMALLINT UNSIGNED, PRIMARY KEY properties_PK(owner_id, property_id), CONSTRAINT owner_property_FK 
FOREIGN KEY(owner_id) REFERENCES home_owner(owner_id) ON DELETE CASCADE, CONSTRAINT property_owner_FK FOREIGN KEY(property_id) REFERENCES property(property_id) ON DELETE CASCADE ); insert into home_owner (first_name, last_name) values ("Angel", "Flop"), ("Bob", "Hoe"), ("Sue", "Hoe"); insert into email values (1, "angel@hio.com"), (2, "bob@ioj.com"), (3, "sue@iojoiv.com"); insert into property values (1, "123 Rainey", 123), (2, "234 Bob", 1298), (3, "697 Kolp", 782); insert into seasonal_address values (1, "7667 Noob", "1,2,3"), (2, "2383 Fob", "4,5,6,7,8,9,10,11,12"), (3, "7823 Flower", "1,2,3,4,5,6,7,8,9,10,11,12"); insert into phone_number values (1, "5203601083"), (2, "8039023093"), (3, "2387784334"), (4, "2377823782"); insert into owner_number values (1, 1), (1, 2), (2, 3), (3, 4); insert into contact_info values (1, 1), (1, 2), (2, 3), (3, 3); insert into owner_property values (1, 1), (1, 2), (2, 3), (3, 3); insert into owner_email values (1, 1), (2, 2), (3, 3); select af.property_id from property af inner join owner_property op on af.property_id = op.property_id where op.owner_id = 1; //find all properties of owner_id = 1 select a.* from home_owner a inner join owner_property op on a.owner_id = op.owner_id where op.property_id = 3; //find all owners of property_id = 3 Table definitions here. Answer: For the table definitions, just use an INT for the keys. You probably aren't saving anything by using SMALLINTs. Schema changes tend to be a pain in larger projects, so you might as well try to avoid any possibility of trouble. CREATE UNIQUE INDEX index_PK ON owner_property(property_id, owner_id); is redundant, since the table already has PRIMARY KEY properties_PK(owner_id, property_id). The foreign key constraints look well done. Having separate tables just for email and phone_number is probably overkill. The e-mail addresses and phone numbers could just be attributes on the owner_email and owner_number tables, respectively. 
I would put NOT NULL constraints on both of those attributes. The phone_number field should probably be longer than VARCHAR(10). (Note that MySQL tends to silently discard the part of the string beyond the length limit.) Similarly, VARCHAR(50) for the property address is probably too stingy. For your first query… select af.property_id from property af inner join owner_property op on af.property_id = op.property_id where op.owner_id = 1; //find all properties of owner_id = 1 … you don't need to join anything at all. select property_id from owner_property op where owner_id = 1; For your second query… select a.* from home_owner a inner join owner_property op on a.owner_id = op.owner_id where op.property_id = 3; //find all owners of property_id = 3 Use better indentation. SELECT has a FROM clause and a WHERE clause. FROM has an INNER JOIN. INNER JOIN has an ON. Use a table alias that makes better sense (or don't use an alias at all). select o.* from home_owner o inner join owner_property op on o.owner_id = op.owner_id where op.property_id = 3; //find all owners of property_id = 3
{ "domain": "codereview.stackexchange", "id": 17459, "tags": "beginner, sql, mysql, database" }
Pull Random Numbers from my Data (Python)
Question: Let's imagine I have a series of numbers that represents cash flows into some account over the past 30 days in some time window. This data is non-normal but it does represent some distribution. I would like to pull "new" numbers from this distribution in an effort to create a monte-carlo simulation based on the numerical data I have. How can I accomplish this? I've seen methods where you assume the data is normal & pull numbers based on some mean and standard deviation - but what about non-normal distributions? I'm using python so any reference including python or some python libraries would be appreciated. Answer: If what you want is to generate random numbers with the same distribution as your cashflow numbers I recommend you using Python's Fitter package It is powerful and very simple to use. You can in this way use it to find the distribution of your data and then generate random numbers with the same distribution. From documentation: from scipy import stats data = stats.gamma.rvs(2, loc=1.5, scale=2, size=10000) from fitter import Fitter f = Fitter(data) f.fit() # may take some time since by default, all distributions are tried # but you call manually provide a smaller set of distributions f.summary() Also useful resources might be found in stackoverflow
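If you'd rather not fit a parametric distribution at all, you can also draw new numbers directly from the empirical distribution of the observed data, either by bootstrap resampling or by inverse-transform sampling against the empirical quantile function. A minimal sketch with plain numpy (the gamma data here is just a stand-in for your cash-flow history):

```python
import numpy as np

rng = np.random.default_rng(0)
data = 1.5 + rng.gamma(2.0, 2.0, size=10000)   # stand-in for observed cash flows

# 1) bootstrap: resample the observations with replacement
boot = rng.choice(data, size=5000, replace=True)

# 2) inverse-transform sampling on the empirical quantile function
#    (interpolates between observed values, so it smooths slightly)
u = rng.random(5000)
inv = np.quantile(data, u)
```

The bootstrap reproduces the observed values exactly, while the quantile approach fills in between them; with only 30 observations the tails of either will be limited to the observed range, which is one reason fitting a distribution (as above) can still be worthwhile.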
{ "domain": "datascience.stackexchange", "id": 8409, "tags": "python, statistics, simulation, monte-carlo" }
How can this grammar parse such an input?
Question: I've an example which I simply don't get at all. Implement an attributed grammar that checks that either the word ends in $b$ or each prefix of the word contains at least as many $b$s as $a$s E.g. the word $bbbbaabaabaa$ is allowed but $abbabaabba$ is not. (I know - what an example!) However, the grammar looks like this: \begin{align} V &\rightarrow S \\ S_0 &\rightarrow S_1 a \\ S_0 &\rightarrow S_1 b \\ S &\rightarrow \epsilon \end{align} and I am allowed to extend it to an attributed grammar. But I can't imagine how this can even be parsed from those rules since removing element by element from e.g. $abbab$ will never end up such that a rule can be applied. What am I missing here? Answer: In attribute grammars, you have variables attached to the symbols in the parse-tree which can take values. Equations attached to the rules specify how these values are propagated through the tree. For that purpose, different occurences of a non-terminal in the same rule are distinguished by subscripts, so as to distinguish the variables attached to them. For example the rule $S_0 \to S_1 b$ is used in the derivation of the topmost $S$ in the tree. In this occurrence: $S_0$ stands for the topmost $S$, while $S_1$ stands for the $S$ just below. So if the second $S$ has a $count$ variable equal to $-2$, then by applying the attribute equation $S_0.count=S_1.count+1$, the top $S$ get a count variable equal to $-1$. Note that the equations are often interpreted as assignments, at least in attribute systems used in compilers. That is what was just done. But there are uses in which it can be seen as unification of logical variables (but you probably have not seen that). Here the attribute rules are used as assignments, that are computed bottom-up (Synthesized attributes), this is why the variable of the topmost symbol (i.e. 
with the index 0, left-hand side of the grammar rule) are on the left side of the assignment, and get assigned a value depending on the variables of the symbols corresponding to the right-hand side of the grammar rule. But attribute values can also propagate top-down sometimes and are then called inherited attributes. Synthesized attributes are more frequent, as they are computationally more powerful (but it may come at a price). The parse tree for aab is V | S / \ S b / \ S a / \ S a | ϵ We associate three attributes to non-terminals: count: contains the number of $b$ minus the number of $a$ for the terminal string that non-terminal occurrence derives in. ok: is a boolean variable that is true for an occurrence of a non-terminal $S$ iff the terminal string this non-terminal occurrence derives in has no prefix with more $a$ than $b$. It is true for the non-terminal $V$ iff the terminal string $V$ derives in is acceptable. endb: is a boolean variable that is true for an occurrence of a non-terminal $S$ iff the terminal string this non-terminal occurrence derives in terminates with a $b$. Now you have attribute computation associated to rules as follow: \begin{array}{llll} \text{Grammar}& \text{Attributes computation}\\ V \to S& V.ok=S.ok\vee S.endb \\ S_0 \to S_1 a\;\;& S_0.count=S_1.count-1& S_0.ok=S_1.ok\wedge S_1.count>0& S_0.endb=false \\ S_0 \to S_1 b& S_0.count=S_1.count+1& S_0.ok=S_1.ok& S_0.endb=true \\ S \to \epsilon& S.count=0& S.ok=true \end{array}
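Because the grammar is left-linear, the synthesized attributes above translate directly into a single left-to-right pass over the word. A small sketch in Python (the function and variable names are mine, not from the exercise), with each update annotated by the attribute equation it implements:

```python
def accepts(word):
    """Attributed-grammar check: word is accepted iff it ends in b,
    or every prefix contains at least as many b's as a's."""
    count = 0        # S.count: number of b's minus number of a's so far
    ok = True        # S.ok: no prefix so far has more a's than b's
    endb = False     # S.endb: word so far ends in b
    for ch in word:
        if ch == 'a':
            ok = ok and count > 0      # S0.ok = S1.ok and S1.count > 0
            count -= 1                 # S0.count = S1.count - 1
            endb = False               # S0.endb = false
        else:
            count += 1                 # S0.count = S1.count + 1
            endb = True                # S0.endb = true; S0.ok = S1.ok
    return ok or endb                  # V.ok = S.ok or S.endb
```

Running it on the two example words from the exercise gives the expected verdicts: `bbbbaabaabaa` is accepted and `abbabaabba` is rejected.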
{ "domain": "cs.stackexchange", "id": 3817, "tags": "formal-grammars, compilers, parsers, attribute-grammars" }
In RL as probabilistic inference, why do we take a probability to be $\exp(r(s_t, a_t))$?
Question: In section 2 of the paper Reinforcement Learning and Control as Probabilistic Inference: Tutorial and Review, the author discusses formulating the RL problem as a probabilistic graphical model. They introduce a binary optimality variable $\mathcal{O}_t$ which denotes whether time step $t$ was optimal (1 if so, 0 otherwise). They then define the probability that this random variable equals 1 to be $$\mathbb{P}(\mathcal{O}_t = 1 | s_t, a_t) = \exp(r(s_t, a_t)) \; .$$ My question is: why do they do this? In the paper they make no assumptions about the value of the rewards (e.g. bounding them to be non-positive), so in theory the rewards can take any value and thus the RHS can be larger than 1. This is obviously invalid for a probability. It would make sense if there were some normalising constant, or if the author said that the probability is proportional to this, but they don't. I have searched online and nobody seems to have asked this question, which makes me feel like I am missing something quite obvious, so I would appreciate it if somebody could clear this up for me. Answer: After doing some further reading, it turns out that non-positive rewards are an assumption for this distribution to hold (so that $\exp(r) \le 1$). However, the author notes that as long as you don't receive a reward of infinity for any action, you can re-scale your rewards by subtracting the maximum value of your potential rewards so that they are always non-positive.
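The rescaling trick mentioned in the answer is a one-liner: subtracting the maximum possible reward makes every reward non-positive, so $\exp(r)$ lands in $(0, 1]$ and is a valid probability. A quick sketch with made-up reward values:

```python
import numpy as np

r = np.array([3.2, -1.0, 0.0, 7.5, 2.1])   # made-up rewards, some positive
p_raw = np.exp(r)                          # invalid as probabilities: values exceed 1

r_shifted = r - r.max()                    # now every entry of r_shifted is <= 0
p = np.exp(r_shifted)                      # valid: 0 < p <= 1
```

Note the shift only multiplies every $\exp(r)$ by the same constant $e^{-r_{\max}}$, so relative likelihoods of trajectories (which is all the inference framework uses) are unchanged.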
{ "domain": "ai.stackexchange", "id": 2515, "tags": "reinforcement-learning, probability, inference, probabilistic-graphical-models" }
For an LTI system, why does the Fourier transform of the impulse response give the frequency response?
Question: I know that for a given system, the Fourier transform of its impulse response gives its frequency response. I want to find where this property comes from, but haven't been able to find if it's a definition or if there's a mathematical proof available, for both continuous and discrete-time systems. Answer: Let $h(t)$ denote the impulse response of an LTI system. Then, for any input $x(t)$, the output is $$y(t) = \int_{-\infty}^\infty h(\tau)x(t-\tau)\,\mathrm d\tau.$$ In particular, the response to the input $x(t) = \exp(j2\pi ft)$ is $$\begin{align} y(t) &= \int_{-\infty}^\infty h(\tau)\exp(j2\pi f(t-\tau))\,\mathrm d\tau\\ &= \exp(j2\pi ft)\int_{-\infty}^\infty h(\tau)\exp(-j2\pi f\tau)\,\mathrm d\tau\\ &= H(f)\exp(j2\pi ft),\tag{1} \end{align}$$ where $H(f)$ is the Fourier transform of the impulse response $h(t)$. In words, for an LTI system with impulse response $h(t)$, the input $\exp(j2\pi ft)$ produces output $H(f)\exp(j2\pi ft)$. This is precisely what we have as the definition of the frequency response of an LTI system (call this $FR(f)$ for now): for every frequency $f$, the response to $\exp(j2\pi ft)$ is $FR(f)\exp(j2\pi ft)$ which is just a (complex) constant times the input complex exponential. But $(1)$ shows that $FR(f)$ is just $H(f)$, the Fourier transform of the impulse response $h(t)$, which is what you wanted to prove.
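Equation $(1)$ can be verified numerically for a discrete-time system: feed a complex exponential through an arbitrary FIR impulse response and check that, after the transient, the output is $H(f)$ times the input. This is an illustrative sketch, not from the original answer:

```python
import numpy as np

rng = np.random.default_rng(0)
h = rng.normal(size=8)             # arbitrary FIR impulse response
f = 0.1                            # frequency in cycles/sample
n = np.arange(200)
x = np.exp(2j * np.pi * f * n)     # complex exponential input

# LTI output via convolution, truncated to the input's length
y = np.convolve(x, h)[:len(n)]

# DTFT of h evaluated at frequency f
H = np.sum(h * np.exp(-2j * np.pi * f * np.arange(len(h))))

# After the first len(h) samples (the transient), y[n] == H * x[n]
steady = slice(len(h), len(n))
print("max error:", np.max(np.abs(y[steady] - H * x[steady])))
```

The residual error is at floating-point level, confirming that the complex exponential is an eigenfunction of the system with eigenvalue $H(f)$.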
{ "domain": "dsp.stackexchange", "id": 10582, "tags": "discrete-signals, continuous-signals, impulse-response, frequency-response, fourier" }
Why aren't negative frequencies folded in reconstruction of the aliased signal?
Question: I'm working on the problem 1.9 from the book Introduction to Signal Processing by Sophocles J. Orfanidis. The pdf version and solution is freely available here. This is the solution for part a of the exercise: I'm a bit confused about the cancelling of two middle terms. This is my drawing showing my understanding of what they're doing. However, why don't you do the same for negative frequencies? If you do this then you would get a different result. EDIT: typo - the last term in the image should be 2sin(20πt) instead. Answer: They are doing the same for the negative frequencies, implicitly. In a problem like this, all signals are real: the input, the sampled signal, and the reconstructed signal. As a consequence, all spectrums are symmetric. If aliasing produces a frequency component at $x$ Hz, then it will also produce one at $-x$ Hz. Let us take a closer look. We need the following concepts: A sine $$\sin(2\pi f_0 t) = \frac{e^{j2\pi f_0 t}-e^{j2\pi (-f_0) t}}{2j}$$ has frequencies $f_0$ and $-f_0$. A single frequency component $e^{j2\pi f_0 t}$ sampled at frequency $f_s$ produces aliases at frequencies $f_0 + Nf_s$ for all integers $N$. Out of the infinite number of aliases, we're interested in those with the lowest frequency; those are the unique aliases that appear in the Nyquist range $-f_s/2$ to $f_s/2$. Now, let's focus on $\sin(2\pi f_C t)$ with $f_C=30$. It has two frequencies, $30$ and $-30$. Let us find the aliases of each frequency separately. The term $e^{j2\pi 30 t}/2j$ will alias to frequencies $30+40N$, some examples of which are $-90,-50,-10,30,70,110...$. The only value within the Nyquist range corresponds to $N=-1$, so we have $30-40=-10$. We can say that $e^{j2\pi (30) t}$ aliases to $e^{j2\pi (-10) t}/2j$. The term $-e^{j2\pi (-30) t}/2j$ will alias to frequencies $-30+40N$. The only alias in the Nyquist range corresponds to $N=1$, where we have $-30+40=10$. We can say that $-e^{j2\pi (-30) t}/2j$ aliases to $-e^{j2\pi (10) t}/2j$. 
We can conclude these two things: The aliases are symmetric. This is the reason many textbooks don't calculate every single alias; as soon as you find an alias at $x$, you know there is one at $-x$. This is true only for real signals, which is what we are dealing with in this problem. The sum of the two aliases we calculated is $$ \frac{e^{j2\pi (-10) t}-e^{j2\pi (10) t}}{2j}$$ which implies that $f_0 = -10$. Then, the aliased sine wave is $\sin(2\pi (-10) t) = -\sin(2\pi 10 t)$. This means that aliasing can change the sign of the aliased signal. Coming back to the textbook problem, we see that the term $\sin(2\pi 10t)$ is cancelled by the alias $-\sin(2\pi 10t)$. So, a third conclusion is, aliases may cancel out terms in the input signal! I guess a fourth and final conclusion might be, avoid aliasing at all costs.
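The sign-flipping alias derived above is easy to confirm numerically: the samples of a 30 Hz sine taken at $f_s = 40$ Hz coincide exactly with the samples of $-\sin(2\pi\,10\,t)$. A minimal sketch:

```python
import numpy as np

fs = 40.0                 # sampling rate from the problem
n = np.arange(32)         # sample indices
t = n / fs                # sample times

# 30 Hz sine sampled at 40 Hz...
x = np.sin(2 * np.pi * 30 * t)

# ...is indistinguishable from sin(2*pi*(-10)*t) = -sin(2*pi*10*t)
alias = -np.sin(2 * np.pi * 10 * t)

print(np.allclose(x, alias))  # True
```

This is the cancellation mechanism in the textbook solution: the sampled 30 Hz term contributes $-\sin(2\pi 10 t)$, which exactly cancels the $\sin(2\pi 10 t)$ term already present in the input.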
{ "domain": "dsp.stackexchange", "id": 11201, "tags": "fourier-transform, sampling, reconstruction" }
Resize Gazebo 1.9.3 Window
Question: Hi, Is there a way to resize the Gazebo GUI window horizontally in Gazebo 1.9.3? Thanks! Originally posted by Martí Morta - IRI on Gazebo Answers with karma: 7 on 2014-03-11 Post score: 0 Original comments Comment by Martí Morta - IRI on 2014-03-11: I have seen this fix but it is after the 1.9.3 version https://bitbucket.org/osrf/gazebo/pull-request/915/added-gui-configuration-via-ini-file Answer: Good morning, Martí: You are right, the feature was implemented in pull request 915, but it has not been released yet; it lives in the default branch of the repository and will appear in the next major release of Gazebo, version 3.0. It breaks the API, so it is not possible to backport it to previous series. I'm afraid you won't be able to use it in the Gazebo 1.9 series. Originally posted by Jose Luis Rivero with karma: 1485 on 2014-03-12 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by Martí Morta - IRI on 2014-03-12: Thanks Jose :) Comment by Jose Luis Rivero on 2014-03-12: You are welcome. Could you please mark the answer as correct so I won't have it pending under 'unanswered'? Thanks.
{ "domain": "robotics.stackexchange", "id": 3564, "tags": "gazebo-gui" }
Optimization of file scanner
Question: The program must find duplicate files (by content) in a given directory (and its subdirectories). I collect all data in a Map<Long, ArrayList<String>> map, where the key is the size of a file and the value is the list of paths to files with that size.

public static void main(String[] args) {
    long start = System.currentTimeMillis();
    new FileScanner("/").searchFiles();
    System.out.println(System.currentTimeMillis() - start);
}

I tested the program on the root directory (Linux) /, where the total count of files is 281091. The scanning time was 3131064 milliseconds. In my opinion it could be faster.

/boot/grub/biosdisk.mod
/usr/lib/grub/i386-pc/biosdisk.mod

/usr/lib/ruby/vendor_ruby/1.8/rubygems/command_manager.rb
/usr/lib/ruby/1.9.1/rubygems/command_manager.rb

/usr/share/pixmaps/openjdk-7.xpm
/usr/share/app-install/icons/_usr_share_icons_sun-java5.xpm
...

The program also writes a log/files.log file, outputting the paths of files with identical contents, in groups separated by a blank line.

import org.apache.log4j.Logger;
import java.io.*;
import java.util.*;
import java.nio.file.Files;

public class FileScanner {
    private String path, canonPath;
    private static final int BUFFER_SIZE_SMALL = 1024;      // 1024 bytes
    private static final int BUFFER_SIZE_MEDIUM = 1048576;  // 1 MB
    private static final int BUFFER_SIZE_BIG = 10485760;    // 10 MB
    private static Logger log = Logger.getLogger(FileScanner.class);

    /*
     * Data structure where keys are file sizes and
     * values are lists of canonical paths to files of the same size
     */
    private Map<Long, ArrayList<String>> mapFiles;

    /*
     * Constructor using the specified path
     */
    public FileScanner(String path) {
        this.path = path;
        mapFiles = new HashMap<>();
    }

    /*
     * Constructor with the specified initial capacity
     */
    public FileScanner(String path, int capacity) {
        this.path = path;
        mapFiles = new HashMap<>(capacity);
    }

    /*
     * Getter and setter for path
     */
    String getPath() { return path; }

    void setPath(String path) { this.path = path; }

    /*
     * Get canonical path from File
     */
    private String toCanonicalPath(File file) {
        try {
            canonPath = file.getCanonicalPath();
        } catch (IOException e) {
            e.printStackTrace();
        }
        return canonPath;
    }

    /*
     * Get an input stream that reads bytes from a file
     */
    protected InputStream getInputStream(File file) throws FileNotFoundException {
        return new BufferedInputStream(new FileInputStream(file));
    }

    /*
     * Define buffer size by file length
     */
    protected int defineBufferLength(long length) {
        if (length < BUFFER_SIZE_MEDIUM)     // file size less than 1 MB
            return BUFFER_SIZE_SMALL;        // 1 KB buffer
        if (length < BUFFER_SIZE_BIG)        // file size less than 10 MB
            return BUFFER_SIZE_SMALL * 10;   // 10 KB buffer
        if (length < BUFFER_SIZE_BIG * 10)   // file size less than 100 MB
            return BUFFER_SIZE_MEDIUM;       // 1 MB buffer
        if (length < BUFFER_SIZE_BIG * 100)  // file size less than 1 GB
            return BUFFER_SIZE_BIG;          // 10 MB buffer
        return BUFFER_SIZE_BIG * 10;         // 100 MB buffer
    }

    /*
     * Search for files of equal length in the directory and subdirectories
     */
    private void scanner(String path) {
        File[] subDirs = new File(path).listFiles(new FileFilter() {
            @Override
            public boolean accept(final File file) {
                if (file.isFile() && file.canRead()) {
                    long size = file.length(); // the length of the file is a key in the map
                    if (mapFiles.containsKey(size))
                        mapFiles.get(size).add(toCanonicalPath(file));
                    else
                        mapFiles.put(size, new ArrayList<String>(25) {{
                            add(toCanonicalPath(file));
                        }});
                    return false;
                }
                return file.isDirectory() && file.canRead() && !Files.isSymbolicLink(file.toPath());
            }
        });
        for (int i = 0; i < subDirs.length; i++)
            scanner(toCanonicalPath(subDirs[i]));
    }

    /*
     * Compare binary files
     */
    protected boolean compareFiles(String path1, String path2) {
        if (path1.equals(path2))
            return false;
        boolean isSimilar = true;
        final File f1 = new File(path1), f2 = new File(path2);
        int size = defineBufferLength(f1.length());
        byte[] bytesF1 = new byte[size], bytesF2 = new byte[size];
        try (InputStream in1 = getInputStream(f1); InputStream in2 = getInputStream(f2)) {
            while (in1.read(bytesF1) != -1 && in2.read(bytesF2) != -1) {
                if (!Arrays.equals(bytesF1, bytesF2)) {
                    isSimilar = false;
                    break;
                }
            }
        } catch (IOException e) {
            log.error("Error:", e);
        }
        return isSimilar;
    }

    public void searchFiles() {
        scanner(path);
        for (ArrayList<String> paths : mapFiles.values()) {
            if (paths.size() == 1)
                continue;
            for (int i = 0; i < paths.size(); i++) {
                String path1 = paths.get(i);
                boolean isFound = false;
                for (int j = 0; j < paths.size(); j++) {
                    String path2 = paths.get(j);
                    if (compareFiles(path1, path2)) {
                        log.info(path2);
                        paths.remove(path2);
                        isFound = true;
                    }
                }
                if (isFound)
                    log.info(path1 + "\n");
                paths.remove(path1);
            }
        }
    }
}

How can I optimize any pieces of the code, or the algorithm, to be as fast as possible?

Answer: Another approach: currently, if you have three big files with the same size, the algorithm compares A with B and A with C. It reads A twice, which could be avoided. Read each file once, store a hash value (MD5, SHA-1, etc.) of the content, and compare only the hashes.

About the current implementation: if I'm right, it contains a bug. In the following code, paths.remove(path2) modifies the list during iteration:

for (ArrayList<String> paths : mapFiles.values()) {
    if (paths.size() == 1)
        continue;
    for (int i = 0; i < paths.size(); i++) {
        String path1 = paths.get(i);
        boolean isFound = false;
        for (int j = 0; j < paths.size(); j++) {
            String path2 = paths.get(j);
            if (compareFiles(path1, path2)) {
                log.info(path2);
                paths.remove(path2);
                isFound = true;
            }
        }
        if (isFound)
            log.info(path1 + "\n");
        paths.remove(path1);
    }
}

I've created a directory with four files with the same content. It looks to me that the code does not read one of them. (I've put a log statement before the compareFiles call.)

Some other notes about the code:

Instead of

private final Map<Long, ArrayList<String>> mapFiles;

use a Multimap. Guava has great implementations. (Doc, Javadoc)

canonPath should be a local variable inside toCanonicalPath, since no other method uses it. Furthermore, in case of an error the method currently returns the path of the previous file. That looks like a bug.

Comments like this are almost unnecessary:

/*
 * Get canonical path from File
 */
private String toCanonicalPath(final File file) { ... }

It says nothing more than the code already does; it's rather noise. (Clean Code by Robert C. Martin: Chapter 4: Comments, Noise Comments)

In the FileFilter subclass you could use guard clauses to flatten the arrow code.

I'd put the variable declarations on separate lines. From Code Complete, 2nd Edition, p. 759:

With statements on their own lines, the code reads from top to bottom, instead of top to bottom and left to right. When you're looking for a specific line of code, your eye should be able to follow the left margin of the code. It shouldn't have to dip into each and every line just because a single line might contain two statements.
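The hash-based approach suggested at the top of the answer can be sketched compactly. This is a language-agnostic illustration in Python, not a rewrite of the original Java class; the function name and demo files are hypothetical:

```python
import hashlib
import os
import tempfile

def group_by_hash(paths):
    """Read each file once; map SHA-1 digest -> list of paths.
    Only groups with more than one path (i.e. duplicates) are returned."""
    groups = {}
    for p in paths:
        h = hashlib.sha1()
        with open(p, "rb") as f:
            # Read in 1 MB chunks so large files are never loaded whole.
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        groups.setdefault(h.hexdigest(), []).append(p)
    return {k: v for k, v in groups.items() if len(v) > 1}

# Tiny demo on temporary files: two identical, one different.
d = tempfile.mkdtemp()
for name, content in [("a.txt", b"same"), ("b.txt", b"same"), ("c.txt", b"other")]:
    with open(os.path.join(d, name), "wb") as f:
        f.write(content)

dupes = group_by_hash([os.path.join(d, n) for n in ("a.txt", "b.txt", "c.txt")])
print([sorted(os.path.basename(p) for p in g) for g in dupes.values()])
# [['a.txt', 'b.txt']]
```

Each file is read exactly once regardless of how many same-size candidates exist, which is the efficiency gain over pairwise comparison; in practice you would still pre-filter by file size, as the original code does, so most files never need to be hashed at all.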
{ "domain": "codereview.stackexchange", "id": 6255, "tags": "java, optimization, performance" }