Emergent Entropic Gravity from Quantum Entanglement in de Sitter Space?
Question: Is there already a theory (formula) available for emergent entropic gravity from quantum entanglement in de Sitter space? For detailed background information, please refer to a recent interview at the International Society for Quantum Gravity entitled "Entropic Gravity from Quantum Entanglement! | with Professor Erik Verlinde", see the link below: https://www.youtube.com/watch?v=zkgZ0ShfbUE In this interview Prof. Erik Verlinde admitted that a quantum-information-based approach to emergent entropic gravity for de Sitter space has yet to be worked out; so far such a theory exists only for anti-de Sitter space (the AdS/CFT correspondence). This is a follow-up to a question asked on the Physics Stack Exchange website 7 years ago: "Has Verlinde's theory made significant advance recently?" In particular, evidence still appears to be missing as to whether Prof. Verlinde's recent idea of a quantum-information-based approach to emergent gravity can be considered valid for de Sitter space. Answer: Let me first clarify that there is a "non-entropic" version of emergent gravity that works, namely the well-known AdS/CFT correspondence that exemplifies the holographic principle, and in which a non-gravitational CFT in flat space is equivalent to a quantum-gravitational theory in emergent extra spatial directions. Erik Verlinde's specific 2010 proposal (that gravity is an "entropic force") has some well-known problems, illustrated by neutrons in a double-slit experiment (Kobakhidze, Motl) and n-body interactions (Visser). At Strings 2011, he gave a talk implying that he would refine the concept: "entropy" means phase-space volume, "temperature" means density of energy levels, and gravity far from event horizons is an "adiabatic reaction force". He never wrote a paper about this, but there are some traces of this refined concept in his 2016 paper, which begins to address de Sitter space. 
Here he is also trying to work with a de Sitter analogue of AdS/CFT, in which the observable universe is holographically dual to a description on the cosmological horizon, and the dark sector (dark matter, dark energy) is actually a perturbation of gravity due to an "elastic memory effect", a manifestation of this holographic duality. Once again, I don't think any of his heuristic formulas have been derived from an exact fundamental theory. But Sabine Hossenfelder claimed she could get Milgrom's MOND formula (a way to dispense with dark matter by instead modifying gravity) from a version of Verlinde's theory. That's where my knowledge runs out. There was a 2020 question here about getting MOND from Verlinde's de Sitter gravity, which has received no reply.
{ "domain": "physics.stackexchange", "id": 99231, "tags": "entropy, quantum-entanglement, quantum-gravity, de-sitter-spacetime, emergent-properties" }
Degradable channels and their quantum capacity
Question: Note: I'm reposting this question as it was deleted by the original author, so that we do not lose out on the existing answer there, by Prof. Watrous. Further answers are obviously welcome. I have two questions: What are degradable channels? Given the dephasing channel $$\Phi^D\begin{pmatrix} \rho_{11} & \rho_{12}\\ \rho_{21} & \rho_{22} \end{pmatrix}$$ $$=\begin{pmatrix} \rho_{11} & \rho_{12} e^{-\Gamma(t)}\\ \rho_{21} e^{-\Gamma(t)} & \rho_{22} \end{pmatrix},$$ the complementary map is given by $$\tilde\Phi^D\begin{pmatrix} \rho_{11} & \rho_{12}\\ \rho_{21} & \rho_{22} \end{pmatrix}$$ $$= \begin{pmatrix} \frac{1+e^{-\Gamma(t)}}{2} & \frac{\sqrt{1-e^{-2\Gamma(t)}}}{2} (\rho_{11}-\rho_{22})\\ \frac{\sqrt{1-e^{-2\Gamma(t)}}}{2} (\rho_{11}-\rho_{22}) & \frac{1-e^{-\Gamma(t)}}{2} \end{pmatrix}.$$ How can one prove that the quantum channel capacity is given by $Q_D = 1 - H_2(\frac{1+e^{-\Gamma(t)}}{2} )$, where $H_2(\cdot)$ is the binary Shannon entropy? Reference: Eq. 13 of this article†. †: Bylicka, B., D. Chruściński, and S. Maniscalco. "Non-Markovianity and reservoir memory of quantum channels: a quantum information theory perspective." Scientific Reports 4 (2014): 5720. Answer: A channel $\Phi$ is said to be degradable if there exists another channel $\Xi$ such that $\Xi\Phi$ is complementary to $\Phi$. The idea here is as follows. Suppose $\Phi$ is a channel and $\Psi$ is complementary to $\Phi$. If $\Phi$ is applied to a state $\rho$, then the output of the channel is $\Phi(\rho)$ (of course), while $\Psi(\rho)$ represents whatever the environment effectively receives. A channel being degradable therefore means that given the output of the channel, one could reconstruct whatever the environment receives. On its own, the property of being degradable may be difficult to motivate, except for the fact that we can actually calculate the quantum capacity for degradable channels (while we often cannot for other channels). 
This is based on two facts: If $\Phi$ is degradable, then its quantum capacity is equal to its maximum coherent information. The coherent information is a concave function for degradable channels. Now, the channel in part 2 of the question is degradable. (In fact, dephasing channels are always degradable, in every dimension.) Perhaps the easiest way to see this for this particular channel is to observe that we can take $\Xi = \Psi$, (i.e., $\Psi \Phi = \Psi$), where I am using $\Phi$ as the name of the original channel and $\Psi$ for the complementary channel given in the question. It remains to compute the maximum coherent information of this channel. For a given state $\rho$, the coherent information of $\rho$ through $\Phi$ is defined as $$ \text{I}_{\text{C}}(\rho; \Phi) = \text{H}(\Phi(\rho)) - \text{H}(\Psi(\rho)). $$ It is perhaps not obvious, but the maximum value over all states $\rho$ is obtained by the completely mixed state $\rho = \mathbb{1}/2$, and therefore $$ Q(\Phi) = \max_{\rho} \text{I}_{\text{C}}(\rho; \Phi) = \text{I}_{\text{C}}(\mathbb{1}/2; \Phi) = \text{H}(\mathbb{1}/2) - \text{H}(\Psi(\mathbb{1}/2)) = 1 - \text{H}_2\Bigl(\frac{1 + e^{-\Gamma(t)}}{2}\Bigr). $$ One way to argue that the maximum coherent information is obtained by the completely mixed state is to use the fact that $\Phi$ is a Pauli channel, together with the concavity of coherent information for degradable channels mentioned above.
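The closed form is easy to sanity-check numerically. The sketch below is illustrative code (not from the paper; $\Gamma(t)=0.7$ is an arbitrary choice): it implements $\Phi$ and the complementary map $\Psi$ from the question, scans the coherent information over diagonal inputs $\mathrm{diag}(p, 1-p)$ (coherences only lower $H(\Phi(\rho))$ while leaving $\Psi(\rho)$ unchanged, so diagonal inputs attain the maximum), and compares the maximum with $1 - H_2\bigl(\frac{1+e^{-\Gamma}}{2}\bigr)$:

```python
import numpy as np

def entropy(rho):
    """Von Neumann entropy in bits."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-np.sum(w * np.log2(w)))

def Phi(rho, g):
    """Dephasing channel from the question, with g = exp(-Gamma(t))."""
    out = rho.astype(complex).copy()
    out[0, 1] *= g
    out[1, 0] *= g
    return out

def Psi(rho, g):
    """Complementary map from the question."""
    s = np.sqrt(1.0 - g**2)
    d = (rho[0, 0] - rho[1, 1]).real
    return np.array([[(1 + g) / 2, s * d / 2],
                     [s * d / 2, (1 - g) / 2]], dtype=complex)

def coherent_info(rho, g):
    return entropy(Phi(rho, g)) - entropy(Psi(rho, g))

g = np.exp(-0.7)                  # arbitrary illustrative Gamma(t) = 0.7
ps = np.linspace(0.01, 0.99, 99)  # diagonal inputs diag(p, 1 - p)
ic = [coherent_info(np.diag([p, 1 - p]), g) for p in ps]
best = max(ic)

h2 = lambda x: -x * np.log2(x) - (1 - x) * np.log2(1 - x)
Q = 1 - h2((1 + g) / 2)           # the closed form from the question
```

The scan peaks at the completely mixed state $p = 1/2$, where the coherent information equals the claimed capacity.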
{ "domain": "quantumcomputing.stackexchange", "id": 742, "tags": "quantum-operation, noise, information-theory, entropy" }
What is the computational complexity category of $T(n) = m^{\frac{n-1}{m}}$?
Question: I'm analyzing the computational complexity of an algorithm whose input size is $n$. I've ended up with $T(n) = {m^\frac{n-1}{m}}$ where $m$ is a constant. Can someone explain the name of the complexity class this belongs to (e.g., exponential, etc.)? Answer: Since $c=m^{\frac{1}{m}}$ is a constant, your $T(n) = c^{n-1} \in \Theta(c^n)$ is simply exponential.
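To see this concretely, a quick numeric check (with an arbitrary constant $m = 5$) confirms that $T(n) = m^{(n-1)/m}$ coincides with the geometric sequence $c^{n-1}$ for $c = m^{1/m}$, i.e. it grows by a constant factor per unit increase of $n$:

```python
import math

m = 5                      # arbitrary constant
c = m ** (1.0 / m)         # base of the equivalent exponential

for n in range(1, 40):
    T = m ** ((n - 1) / m)
    # T(n) = (m^(1/m))^(n-1) = c^(n-1): successive values differ by the factor c
    assert math.isclose(T, c ** (n - 1), rel_tol=1e-9)
```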
{ "domain": "cs.stackexchange", "id": 10609, "tags": "terminology, asymptotics" }
What is the relationship between W, X, Y and P, M retinal ganglion cells?
Question: In Guyton and Hall Textbook of Medical Physiology (12e) the retinal ganglion cells (RGCs) are classified into W, X, and Y types. However, in Gray's Anatomy (40th ed.), RGCs are subdivided into midget and parasol cells. What is the relationship between the W-X-Y and the midget-parasol classifications? Summary of the W-X-Y classification as given in Medical Physiology: About 40% of ganglion cells are W cells. They are the smallest (<10 μm) and the slowest (8 m/s) to transmit signals, but they have the broadest receptive fields. They receive signals from rods and detect directional movement. About 55% are X cells. They have medium size (10-15 μm) and transmission speed (14 m/s). Their fields are small. They transmit color vision. About 5% are Y cells. They are the largest (up to 35 μm) and the fastest (35 m/s) to transmit signals. They have broad fields. They respond to rapid changes in the visual image. Answer: Short answer X = P = midget = β Y = M = parasol = α W = γ Background First off, the P/M and midget/parasol classifications are not identical by definition; rather, P cells are generally midget cells and M cells are generally parasol cells. X, Y and W cells are used to denote different retinal ganglion cells in cats, as proposed on the basis of the experiments of Enroth-Cugell & Robson (1966). As you already indicate, these cells have different physiological response properties. X and Y cells have the standard center-surround distribution of their receptive field, X cells having linear, additive response properties in center and surround and Y cells nonlinear properties. X cells therefore have predictable responses, such that responses can be predicted to be maximal (e.g. light shining on the ON center and no light stimulation on the OFF surround). Y cells tend to respond in more complex ways. W cells do not have a center-surround structure (Bruce et al., 1996). 
Based on morphology, retinal ganglion cells in the cat have been labelled α, β and γ cells, which are thought to correspond to the physiological classes of Y, X and W cells, respectively (Boycott & Wässle, 1974). The P and M division was introduced by De Monasterio & Gouras (1975) based on monkey experiments. P and M refer to the parvocellular and magnocellular connections that retinal ganglion cells in monkeys make in the LGN of the brain. Both types of cells have a center-surround receptive field structure like the X and Y cells of the cat. P cells show color opponency, respond to specific wavelengths of light, and are involved in color vision. P cells respond to a range of wavelengths and respond most vigorously when different light intensities illuminate the center and surround (Bruce et al., 1996). P cells are considered to be mostly midget ganglion cells, equivalent to the β cells of the cat. M cells, on the other hand, are considered to be mostly parasol cells, resembling α cells (Kolb, 2001). References - Boycott & Wässle, J Physiol (1974); 240(2): 397–419 - Bruce et al., Visual Perception (1996); 3rd ed. Psychology Press - De Monasterio & Gouras, J Physiol (1975); 251(1): 167–95 - Enroth-Cugell & Robson, J Physiol (1966); 187(3): 517–52 - Kolb H. Webvision (2001). Salt Lake City (UT): University of Utah Health Sciences Center
{ "domain": "biology.stackexchange", "id": 4582, "tags": "physiology, vision, neurophysiology, neuroanatomy, histology" }
Current density for discrete charges
Question: Suppose I have N charges, all with the same charge q, in a volume V but moving at different velocities in the x direction. Is the current density then given by $$J = (N/V) q v_{avg}$$ or by $$J = (N/V) q \sum_{i} v_i$$ The second one makes more sense to me, because a current of 5 A and a current of 3 A in the x direction combine to make a current of 8 A. This shouldn't work with the first equation, because if both flows contain the same number of charges, the average velocity would make the combined current land somewhere between 5 and 3. But then what is v in the equation $J = \rho v$? All this time I assumed it was the average velocity $\langle v \rangle$. Answer: Current density, by itself, does not use the average velocity. It uses the instantaneous velocity of the charge density at the location where you are evaluating the current density. Current density is defined at every point in space, so when a charge is there, the actual velocity is the most correct to use. As far as I am aware, two separate current density functions (for example, two point charges) cannot occupy the same location in space, so taking the "average" velocity doesn't make sense, as there is only one velocity to be considered at a particular point in space. You are confusing current and current density. If there are two separate current density functions, then the current across a particular cross section COULD cancel out, as the current is $$ I = \iint \vec{J} \cdot \vec{da} $$ If half of the contributions go one way and half the other, then the net current is zero. You seem to be talking about current and not current density.
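A quick numerical illustration (with made-up values for q, V and the particle counts) of why the average-velocity form causes no contradiction: when two flows are combined, the number density n doubles as well, so $J = n q \langle v \rangle$ reproduces the sum of the individual current densities:

```python
import math

# Hypothetical numbers: two groups of identical charges sharing a volume V
q, V = 1.6e-19, 1e-6          # charge (C) and volume (m^3)
N1, v1 = 1e12, 5.0            # group 1: count and x-velocity (m/s)
N2, v2 = 1e12, 3.0            # group 2

J1 = (N1 / V) * q * v1        # current density of group 1 alone
J2 = (N2 / V) * q * v2        # current density of group 2 alone

n = (N1 + N2) / V             # combined number density
v_avg = (N1 * v1 + N2 * v2) / (N1 + N2)   # number-weighted average velocity

# J = n q <v> equals the sum of the individual current densities,
# because n accounts for all the charges, not just one group
assert math.isclose(n * q * v_avg, J1 + J2)
```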
{ "domain": "physics.stackexchange", "id": 87004, "tags": "electromagnetism" }
Debugging techniques for robot_localization?
Question: Hi All, I've managed to put together a typical setup for robot_localization using wheel encoders, IMU and GPS, as described in the documentation and in many answers here. It works more or less reasonably well. But sometimes it doesn't. My question would be about debugging techniques for the case when it doesn't. I'm wondering how you guys do this tuning / debugging thing? There are literally hundreds of messages generated when the system is running, most of them with loads of numbers in them. Are there any magic tools, clever tricks, methods, best practices, etc. to find out what's going wrong when something is going wrong? I use rviz and rostopic. Are these really the only weapons I have? I hope this question is not too generic for this forum. But I would be really interested to hear how people with more experience deal with this phase of development. Originally posted by tbondar on ROS Answers with karma: 29 on 2020-07-12 Post score: 0 Answer: rviz, plotjuggler and rosbag are your friends. Record a bag file using rosbag of an occasion when it doesn't work well, then replay that through rviz and plotjuggler, looking at what the various different sensors and calculated values are doing. https://github.com/facontidavide/PlotJuggler Originally posted by Geoff with karma: 4203 on 2020-07-12 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by tbondar on 2020-07-13: plotjuggler looks awesome. Thank you.
{ "domain": "robotics.stackexchange", "id": 35263, "tags": "navigation, ros-melodic, robot-localization" }
awk working with large files
Question: I have two very large VCF files (2 GB and 6 GB). I want to look for unique combinations of CHROM and POS and output the rows that match. However, because the files are so large, my machine always hangs and stops processing. Is there a way to work around the problem of these large files? I am using this command, taken from an answer to another question: awk '{ if(NR==FNR){a[$1$2]=$0}else{if($1$2 in a){print}}}' file1.vcf file2.vcf file 1 #CHROM POS ID REF ALT QUAL 1 10366 rs58108140 G A 1 10611 rs189107123 C G 1 51954 rs185832753 G C 1 13327 rs144762171 G C file 2 #CHROM POS ID REF ALT QUAL 1 10366 rs58108140 G A 1 51935 rs181754315 C T 1 51954 rs185832753 G C 1 52058 rs62637813 G C 1 52144 rs190291950 T A output 1 10366 rs58108140 G A 1 51954 rs185832753 G C Answer: Maybe comm, which is part of GNU coreutils, is more efficient. comm writes to standard output lines that are common, and lines that are unique, to two input files; a file name of ‘-’ means standard input. Synopsis: comm [option]… file1 file2 In your case you could sort both files based on the first 3 columns: file1=file1.vcf file2=file2.vcf comm -12 <(awk '{print $1,$2,$3}' $file1 | sort) \ <(awk '{print $1,$2,$3}' $file2 | sort) > matches # result: cat matches 1 10366 rs58108140 1 51954 rs185832753 #CHROM POS ID This could of course be polished by removing the header. Also, if you want the original lines as output, you can grep the matches in the original files: grep -f matches file1 #CHROM POS ID REF ALT QUAL 1 10366 rs58108140 G A 1 51954 rs185832753 G C Edit: In the first version, intermediate temp files were used, but as suggested by terdon in the comments, this can be avoided using process substitution, i.e. the <(..) construct.
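If you prefer to stay close to the original awk logic while keeping memory under control, a sketch in Python (illustrative, not from the answer above) can hold only the (CHROM, POS) pairs of the smaller file in a set and stream the larger file line by line. Using a tuple key also avoids the ambiguity of awk's $1$2 string concatenation (CHROM 1 / POS 12345 would collide with CHROM 11 / POS 2345):

```python
def match_by_chrom_pos(small_path, big_path, out_path):
    """Write rows of big_path whose (CHROM, POS) also occur in small_path."""
    keys = set()
    with open(small_path) as f:
        for line in f:
            fields = line.split()
            # skip headers; keep only the (CHROM, POS) pair, not the whole row
            if len(fields) >= 2 and not line.startswith('#'):
                keys.add((fields[0], fields[1]))
    with open(big_path) as f, open(out_path, 'w') as out:
        for line in f:   # stream the big file; O(1) memory per line
            fields = line.split()
            if len(fields) >= 2 and (fields[0], fields[1]) in keys:
                out.write(line)
```

Memory use is then proportional to the number of (CHROM, POS) pairs in the smaller file only, not to the full row contents of either file.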
{ "domain": "bioinformatics.stackexchange", "id": 1123, "tags": "vcf, awk" }
Compute velocities from wheel-encoders for odometry model
Question: I am working with the navigation of a two-wheeled robot. I am looking at the odometry tutorial www.ros.org/wiki/navigation/Tutorials/RobotSetup/Odom, in this part: //compute odometry in a typical way given the velocities of the robot double dt = (current_time - last_time).toSec(); double delta_x = (vx * cos(th) - vy * sin(th)) * dt; double delta_y = (vx * sin(th) + vy * cos(th)) * dt; double delta_th = vth * dt; x += delta_x; y += delta_y; It says "A real odometry system would, of course, integrate computed velocities instead". So, if I have the velocity readings from the wheel encoders, how can I compute these velocities for the odometry model? Originally posted by salime on ROS Answers with karma: 21 on 2013-07-11 Post score: 1 Original comments Comment by Hemu on 2013-07-11: Can you clearly state the meaning of 'velocity readings from the wheel encoders'? Does it refer to the rpm of each motor or the distance traversed by each wheel per unit time? Comment by salime on 2013-07-12: I have the encoder count per second for each wheel, the wheel radius and the distance between the two wheels. I am wondering how I can get the velocity in meters per second with this information. Comment by hamidoudi on 2014-01-12: Hello. Usually, computing odometry depends on the type of steering of the robot; there are different methods for computing velocities from encoder counts. The first thing I would advise you to do is to read this article, and you will then have a better starting point: http://rossum.sourceforge.net/papers/DiffSteer/ I hope it was helpful. Thanks. Answer: In some robots you can't measure velocity directly. Instead, you measure movement (of the motors, for example) and calculate both the new position and velocity from that movement. As an example, the ros_arduino_bridge package does exactly that from the encoder measurements of a differential-drive system. 
You can see an example of writing an Odometry message with updated position and velocity based on encoder deltas and publishing the updated transform in lines 146–185 of the base_controller.py file in ros_arduino_bridge: https://github.com/hbrobotics/ros_arduino_bridge/blob/indigo-devel/ros_arduino_python/src/ros_arduino_python/base_controller.py Originally posted by Mark Rose with karma: 1563 on 2018-06-28 This answer was ACCEPTED on the original site Post score: 0
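As a concrete sketch of the conversion the commenter asked about (the parameter values are made up; substitute your robot's encoder resolution, wheel radius and wheel separation), turning per-wheel count rates into the vx and vth used in the tutorial for a differential drive looks like this:

```python
import math

# Hypothetical parameters -- substitute your robot's values
TICKS_PER_REV = 4096       # encoder counts per wheel revolution
WHEEL_RADIUS = 0.05        # metres
WHEEL_SEPARATION = 0.30    # metres, distance between the two wheels

def wheel_speed(ticks_per_second):
    """Encoder counts/s -> linear speed of the wheel rim in m/s."""
    revs_per_second = ticks_per_second / TICKS_PER_REV
    return revs_per_second * 2.0 * math.pi * WHEEL_RADIUS

def body_velocities(left_ticks_per_s, right_ticks_per_s):
    """Differential-drive kinematics: wheel speeds -> (vx, vth)."""
    v_l = wheel_speed(left_ticks_per_s)
    v_r = wheel_speed(right_ticks_per_s)
    vx = (v_r + v_l) / 2.0                 # forward speed, m/s
    vth = (v_r - v_l) / WHEEL_SEPARATION   # yaw rate, rad/s
    return vx, vth
```

These vx (with vy = 0 for a differential drive) and vth are exactly the velocities the tutorial snippet integrates over dt.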
{ "domain": "robotics.stackexchange", "id": 14882, "tags": "ros, navigation, odometry, velocity" }
Understanding Pressure
Question: OK, so what happened is that I started to think about submarines and why exactly you cannot open the bottom hatch at, let's say, the Challenger Deep. At least I am pretty sure you cannot do that. Right? Because I was told that if you were to open the hatch, water would still rush in and kill you rather fast. But why? All of the water, and thus all of the pressure, is above you. Here is a bad illustration of what I mean. My theory: In theory you can do that, but now, since there is no floor, all the pressure from above is resting on the water below, which itself is the support for the submarine. And with no floor, the water gets forced in. So what happens is that the air is acting like the floor that is missing. But while the floor could resist the pressure from above pushing it against the support from below, the air cannot, and thus gets compressed. Is that right? Answer: The problem is that when you are inside the submarine you usually get more or less normal atmospheric pressure (maybe a little higher). What keeps the submarine from imploding is the structural resistance of the hull (usually an inner and an outer hull). The water pressure at the Challenger Deep is on the order of 1100 bar (8 tons per square inch). So if you opened the lid, the volume of the air would be compressed to about 1/1100 of its original volume. Assuming an air volume of 3 m^3, all that would be compressed to roughly 3 liters of air. But the biggest problem would be that while 70% of our body is water (and incompressible), our lungs and internal cavities are filled with air. So what would happen is that the air inside our torso would be collapsed to almost 1/1100 of its original volume. Additionally, it would probably be very difficult to breathe (i.e., flex our muscles to fill our lungs with air) due to the external load. Fluid pressure on boundaries: The following is from a lecture on fluid mechanics. It shows the direction of fluid pressure on different boundaries. 
What should be evident is that pressure is always normal to the surface boundary, i.e. pressure can act in all directions. So, what happens when a can is submerged in water is the following. You might notice that the pressure above is less than the pressure below (so for a thin object the pressure difference would not be significant). This is how the buoyancy force is generated. However, that is not the discussion here, and you can just consider the pressure to be equal all around. On the other hand, the air inside the submarine is exerting pressure on the walls, again normal to the walls. It's more or less like a balloon in the figure below. So, what would happen if you opened the bottom hatch of the submersible? Now you have a water-air boundary: on one side there is air at 1 atm, and on the other about 1100 atm of water pushing the other way.
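The numbers above can be sketched with the hydrostatic formula and Boyle's law. This is an idealized isothermal estimate with assumed values for the depth and the mean seawater density:

```python
RHO_SEAWATER = 1025.0    # kg/m^3, assumed mean density of seawater
G = 9.81                 # m/s^2
DEPTH = 10900.0          # m, roughly the Challenger Deep

P_SURFACE = 101325.0     # Pa (1 atm)
p_depth = P_SURFACE + RHO_SEAWATER * G * DEPTH   # hydrostatic pressure
# about 1.1e8 Pa, i.e. on the order of 1100 bar

V1 = 3.0                                # m^3 of air inside the hull
V2 = V1 * P_SURFACE / p_depth           # Boyle's law: p1 V1 = p2 V2
# about 2.8e-3 m^3, i.e. roughly 3 litres, matching the answer's estimate
```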
{ "domain": "engineering.stackexchange", "id": 3716, "tags": "pressure" }
Function equivalent to strcmp()
Question: I'm starting out and need feedback in order to discover the things I'm doing wrong. This is supposed to behave like strcmp(): int compare(char *str1, char *str2) { while(*str1 || *str2) { if(*str1 != *str2) { break; } ++str1; ++str2; } return *str1 - *str2; } How can it be improved? Are there any flaws? Also, would any of the following styles be considered better? int compare(char *str1, char *str2) { while((*str1 || *str2) && *str1 == *str2) { ++str1; ++str2; } return *str1 - *str2; } int compare(char *str1, char *str2) { while((*str1 || *str2) && *str1 == *str2) ++str1, ++str2; return *str1 - *str2; } Answer: You can skip the check for the end of one of the strings. If the other string ends before the one whose end you do check, the comparison of the characters will catch the difference: int compare(char *str1, char *str2) { while (*str1 && *str1 == *str2) { str1++; str2++; } return *str1 - *str2; } You can write the same using a for statement if you want it shorter (and less readable): int compare(char *str1, char *str2) { for (;*str1 && *str1 == *str2; str2++) str1++; return *str1 - *str2; } Just for the sake of it, you can make it shorter by (mis)using recursion, but that is a rather terrible solution: int compare(char *str1, char *str2) { return (*str1 && *str1 == *str2) ? compare(++str1, ++str2) : *str1 - *str2; }
{ "domain": "codereview.stackexchange", "id": 4622, "tags": "c, strings, reinventing-the-wheel, comparative-review" }
Hamiltonian for the Periodic Kitaev Model
Question: The Hamiltonian for a system of spinless fermions on a 1D chain (with chemical potential $\mu=0$) is given by $$ H=-\sum_j\left( c^\dagger_{j+1} c_j+h.c.\right)+\Delta \sum_j \left( c^\dagger_{j+1}c^\dagger_j+h.c.\right) $$ where $\Delta$ is some number. If we introduce $$ c_j=\frac{1}{\sqrt{N}}\sum_k e^{ikj}c_k$$ we obtain the result below: $$H=\sum_k \xi(k) c_k^\dagger c_k+\Delta\sum_k \left(e^{-ik}c_k^\dagger c_{-k}^\dagger +e^{ik}c_k c_{-k}\right) $$ where $\xi(k)=-2\cos(k)$. I am trying to represent this Hamiltonian in matrix form by using the Nambu operator $$ \phi_k=\begin{pmatrix} c_k \\ c_{-k}^\dagger \end{pmatrix} $$ Numerous texts give it as $$ H=\sum_k \phi_k^\dagger \begin{pmatrix} \xi(k) & 2i\Delta \sin(k)\\ -2i\Delta \sin(k ) & -\xi(k)\end{pmatrix}\phi_k $$ However, when I expand the above out, I do not get my original coupling term back; instead, I get $$ \Delta \sum_k \left( e^{-ik}c_k^\dagger c_{-k}^\dagger -e^{ik}c_k^\dagger c_{-k}^\dagger+e^{ik}c_k c_{-k}-e^{-ik}c_k c_{-k}\right) $$ I see that, to obtain my old coupling term, I would have to set $e^{ik}c_k^\dagger c_{-k}^\dagger=e^{-ik}c_k c_{-k}=0$, but I can't explain why. Can someone please help me with this step? Here is a similar question posed in a problem set from a German university for your reference: http://users.physik.fu-berlin.de/~romito/qft2011/set6.pdf Answer: First, watch out for the factors of 2 and the $\sin(k)$s in your line 3 (after doing the Fourier transform). Second, you do not want to set those terms to zero. Instead, remember that $k$ is just a dummy index. I could consider each term as a separate sum, and for some of them, I'll set $k \rightarrow -k$. Then $$-e^{ik}c_{k}^{\dagger}c_{-k}^{\dagger} \rightarrow -e^{-ik}c_{-k}^{\dagger}c_{k}^{\dagger}= +e^{-ik}c_{k}^{\dagger}c_{-k}^{\dagger}$$ ... and this guy just gets absorbed into the first term. 
Another way to think about this is that, strictly speaking, we should consider the sum in the Nambu Hamiltonian as counting only modes with $k \geq 0$; we then need both kinds of terms, since one of them ends up counting the original terms with $k \leq 0$. People tend to be very sloppy with this notation, however.
{ "domain": "physics.stackexchange", "id": 46365, "tags": "quantum-mechanics, condensed-matter, research-level, solid-state-physics, fermions" }
Let's check that domain port
Question: Intro This simple script will allow me to check for a specific open port on a list of domains that I own. Instead of doing this check manually, I found Python a pretty good fit for such a task. After profiling my code, I found out that def check_for_open_ports(): is really slow. It takes about 0:01:16.799242 (over a minute) for 4 domains. I wondered if there's a good / recommended way of improving this (maybe multithreading / multiprocessing). While asking for an answer which implements one of the above two methods is forbidden here, I wouldn't mind seeing one. I know that one should use multithreading when there are I/O-bound tasks, which makes me believe I might go with a multithreading solution. The code from socket import gethostbyname, gaierror, error, socket, AF_INET, SOCK_STREAM from sys import argv, exit import re DOMAINS_FILE = argv[1] PORT = argv[2] OUTPUT_FILE = argv[3] def get_domains(): """ Return a list of domains from domains.txt """ domains = [] if len(argv) != 4: exit("Wrong number of arguments\n") try: with open(DOMAINS_FILE) as domains_file: for line in domains_file: domains.append(line.rstrip()) except IOError: exit("First argument should be a file containing domains") return domains def check_domain_format(domain): """ This function removes the beginning of a domain if it starts with: www. http:// http://www. https:// https://www. """ clear_domain = re.match(r"(https?://(?:www\.)?|www\.)(.*)", domain) if clear_domain: return clear_domain.group(2) return domain def transform_domains_to_ips(): """ Return a list of ips specific to the domains in domains.txt """ domains = get_domains() domains_ip = [] for each_domain in domains: each_domain = check_domain_format(each_domain) try: domains_ip.append(gethostbyname(each_domain)) except gaierror: print("Domain {} not ok. 
Skipping...\n".format(each_domain)) return domains_ip def check_for_open_ports(): """ Check for a specific opened PORT on all the domains from domains.txt """ ips = transform_domains_to_ips() try: with open(OUTPUT_FILE, 'a') as output_file: for each_ip in ips: try: sock = socket(AF_INET, SOCK_STREAM) result = sock.connect_ex((each_ip, int(PORT))) if result == 0: output_file.write(each_ip + '\n') sock.close() except error: print("Couldn't connect to server") except KeyboardInterrupt: exit("You pressed CTRL + C. Will exit now...\n") if __name__ == '__main__': check_for_open_ports() A step further After some checks, I realised that the main slowdown could be mitigated by reducing the default timeout of the socket module using setdefaulttimeout(2). Even though this solved part of the problem, I still don't find it to be the cleanest solution. Any advice related to performance is really welcome! Extra info: I'll probably use this only on Linux OSs I've used Python 2.7.13 PS: I'd like you to ignore the fact that I didn't use optparse or argparse for parsing CLI arguments. Answer: First a slight style note (IMHO, of course). You called your function check_domain_format, but it actually returns a modified string, and you're using the result, not checking anything. I'd go for a name like validate_domain_format. About it being slow: yes, multithreading would help in checking multiple domains at once, but if that were the only problem you could just make a separate bash script to launch your Python script with different parameters. You said that you own the domains, so I'm assuming you have raw socket capabilities. If that's the case, you can speed up your check by using a SYN scan. You can have a look here; even if the question has been down-voted, it should give you the general idea. Here you can find that same check. 
If you're doing this for educational purposes, that's OK; otherwise nmap will most likely do a better job, give you more options and be faster (because the SYN scan is already implemented, and you can also scan for UDP ports, for example).
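Since you said you wouldn't mind seeing a threaded version, here is a minimal sketch (the names and structure are mine, and it keeps your connect_ex approach) that checks many hosts concurrently with a thread pool and a per-socket timeout:

```python
import socket
from multiprocessing.pool import ThreadPool  # works on Python 2.7 and 3

def is_port_open(ip, port, timeout=2.0):
    """Return True if a TCP connection to (ip, port) succeeds."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.settimeout(timeout)   # per-socket timeout, no global setdefaulttimeout
    try:
        return sock.connect_ex((ip, int(port))) == 0
    finally:
        sock.close()

def open_ips(ips, port, workers=20):
    """Check all ips concurrently; return those with the port open."""
    pool = ThreadPool(workers)
    try:
        flags = pool.map(lambda ip: is_port_open(ip, port), ips)
    finally:
        pool.close()
        pool.join()
    return [ip for ip, ok in zip(ips, flags) if ok]
```

Because each check spends most of its time waiting on the network, a pool of twenty threads gives close to a twenty-fold speedup over your sequential loop.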
{ "domain": "codereview.stackexchange", "id": 24138, "tags": "python, performance, python-2.x, socket, status-monitoring" }
Help with deriving simple heat equation
Question: How does one get from $$j^{q}=\frac{1}{2} n v[\varepsilon(T[x-v \tau])-\varepsilon(T[x+v \tau])]$$ to this: $$j^{q}=n v^{2} \tau \frac{d \varepsilon}{d T}\left(-\frac{d T}{d x}\right)$$ At first I was thinking of using the fundamental theorem of calculus, but I can't seem to do it. Any words of advice would be appreciated. Answer: Use: $$\frac{d\epsilon}{dT}=\frac{\epsilon(T+dT)-\epsilon(T-dT)}{2dT}$$ and: $$dT=\frac{dT}{dx}dx=\frac{dT}{dx}v\tau$$
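Putting the two hints together, the step is a first-order Taylor expansion in the small quantity $v\tau$ (a sketch of the missing algebra, not part of the original answer):

```latex
\varepsilon(T[x-v\tau])-\varepsilon(T[x+v\tau])
  \approx \frac{d\varepsilon}{dT}\,\bigl(T[x-v\tau]-T[x+v\tau]\bigr)
  = \frac{d\varepsilon}{dT}\left(-2\,\frac{dT}{dx}\,v\tau\right),
\qquad\text{so}\qquad
j^{q}=\frac{1}{2}nv\cdot\frac{d\varepsilon}{dT}\left(-2\,\frac{dT}{dx}\,v\tau\right)
     = n v^{2}\tau\,\frac{d\varepsilon}{dT}\left(-\frac{dT}{dx}\right).
```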
{ "domain": "physics.stackexchange", "id": 55819, "tags": "homework-and-exercises, thermodynamics, differentiation, metals" }
Proving the susceptibility in the Lorentz model satisfies the Kramers-Kronig relations
Question: My instructor asked me to prove that the real and imaginary parts of the electric susceptibility derived from the Lorentz model satisfy the Kramers-Kronig relations, using the residue theorem. The problem is that my complex calculus is pretty rusty and I do not know which poles contribute exactly. There are 5 poles in total: 4 from the susceptibility and 1 from the denominator (see the expression). I took the integral using the principal-value option in Mathematica and it turned out not as expected. Is this analytically tractable? $$ \chi(\omega) = \frac{\omega_{p}^2}{(\omega_0^2-\omega^2)+i\gamma\omega} $$ The Kramers-Kronig relations are $$ \chi_r(\omega) = \frac{1}{\pi} P \int_{-\infty}^{\infty} d\bar{\omega} \frac{\chi_i(\bar{\omega})}{\bar{\omega}-\omega} $$ $$ \chi_i(\omega) = -\frac{1}{\pi} P \int_{-\infty}^{\infty} d\bar{\omega} \frac{\chi_r(\bar{\omega})}{\bar{\omega}-\omega} $$ Answer: I assume all the variables involved are real. The roots of the denominator of $\chi(\omega)$ are $\frac{1}{2}\Big(i\gamma\pm\sqrt{-\gamma^2+4\omega_0^2}\Big)$, which lie in the lower (upper) half of the complex plane for $\gamma<0$ ($\gamma>0$). You need to further specify the sign of $\gamma$: $\gamma<0$ leads to the relations in your question, while $\gamma>0$ puts a negative sign on their left-hand sides, and $\gamma=0$ destroys them. Suppose $\gamma<0$. $\chi$ is analytic on the upper half complex plane and, by Cauchy's Integral Theorem, $$0 = \frac{1}{2\pi i}\oint_C \frac{\chi(\bar\omega)}{\bar\omega-\omega}\mathrm d\bar\omega,$$ where the contour $C$ runs along the real axis from $-R<0$ to $R>0$ with an infinitesimally small semicircle running clockwise around and above $\omega$, then describes the large semicircle counter-clockwise in the upper half complex plane with radius $R$. 
The clockwise integral around the small circle above $\omega$ approaches $-\frac{1}{2}\chi(\omega)$, while the integral on the large semicircle with radius $R$ approaches $0$ as $R\rightarrow\infty$, as the magnitude of the integrand is $O\big(\frac{1}{R^3}\big)$. Therefore $$\frac{1}{2}\big(\chi_r(\omega)+i\chi_i(\omega)\big) = \frac{1}{2\pi i}P\int_{-\infty}^\infty \frac{\chi_r(\bar\omega)+i\chi_i(\bar\omega)}{\bar\omega-\omega}\mathrm d\bar\omega.$$ Equating the real and imaginary parts of this equation leads to the desired result. To explicitly verify the relationship, we use $$\chi_i=\frac{1}{2i}(\chi-\chi^*). \tag 1$$ The poles of $\chi$ lie in the lower half complex plane. Evaluate $\frac{1}{\pi}P\int_{-\infty}^\infty\frac{\chi(\bar\omega)}{\bar\omega-\omega}\mathrm d\bar\omega$ using the contour integral described in the general proof above and get $i\chi(\omega)$. The poles of $\chi^*$ lie in the upper half complex plane. So we evaluate $\frac{1}{\pi}P\int_{-\infty}^\infty\frac{\chi(\bar\omega)^*}{\bar\omega-\omega}\mathrm d\bar\omega$ using the previous contour reflected with respect to the real axis and get $-i\chi(\omega)^*$. Then, evaluating $\frac{1}{\pi}P\int_{-\infty}^\infty\frac{\chi_i(\bar\omega)}{\bar\omega-\omega}\mathrm d\bar\omega$ using Equation $(1)$, we arrive at the desired first relationship. The second relationship is derived similarly, with $\chi_r=\frac{1}{2}(\chi+\chi^*)$.
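To build confidence in the sign conventions, the first relation can also be checked numerically. This is an illustrative sketch with arbitrary parameter values (and $\gamma<0$, as required above); a grid symmetric about $\omega$ makes the midpoint sum realize the principal value:

```python
import numpy as np

# Lorentz susceptibility as written in the question; gamma < 0 puts the
# poles in the lower half plane (arbitrary illustrative parameters)
wp, w0, gamma = 1.0, 2.0, -1.0

def chi(w):
    return wp**2 / ((w0**2 - w**2) + 1j * gamma * w)

w = 1.0                       # frequency at which to test the relation
h, W = 1e-3, 200.0            # grid spacing and integration half-width
k = np.arange(-int(W / h), int(W / h))
wbar = w + (k + 0.5) * h      # midpoints symmetric about w -> principal value

# (1/pi) P int chi_i(wbar) / (wbar - w) dwbar, approximated by a midpoint sum
kk_real = np.sum(chi(wbar).imag / (wbar - w)) * h / np.pi
# kk_real should reproduce chi_r(w) = Re chi(w)
```

The symmetric grid works because the odd part of $1/(\bar\omega-\omega)$ cancels pairwise around the singularity, which is exactly what the principal value prescribes.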
{ "domain": "physics.stackexchange", "id": 18282, "tags": "optics" }
How is tracing out a physical operation?
Question: Suppose $\rho_{AB}$ denotes the density matrix of a bipartite system. The reduced density matrix of A ($\rho_A$) is obtained by tracing out B: $$\rho_A\equiv\sum_{i}\langle i_B |\rho_{AB}|i_B\rangle$$ where $\{|i_B\rangle \}$ is a basis of the subsystem B. It is said that $\rho_A$ is the physical state of the subsystem A. What justifies this claim? Answer: The partial trace over $B$ of the quantum state of a bipartite system $AB$ corresponds to discarding $B$: that is, the reduced density matrix $\rho_A=\mathrm{Tr}_B(\rho_{AB})$ is the complete description of the state of the system for any and all measurements that are completely local to $A$. This can be made precise by considering an arbitrary Hermitian measurement operator $\mathcal O_A$ (which includes, among other things, eigenprojectors corresponding to the measurement of some other observable), whose expectation value is $$ \langle \mathcal O_A\rangle = \mathrm{Tr}\mathopen{}\left( \hat{\rho}_{AB} \ \hat{\mathcal O}_A\otimes \mathbb I\right)\mathclose{}. $$ Here the trace can be decomposed as $$ \langle \mathcal O_A\rangle = \mathrm{Tr}_A\mathopen{}\left( \mathrm{Tr}_B\mathopen{}\left( \hat{\rho}_{AB} \ \hat{\mathcal O}_A\otimes \mathbb I\right)\mathclose{}\right)\mathclose{}, $$ and since $\hat{\mathcal O}_A$ does not act on the $B$ sector, it can be factored out of the $B$ trace, giving $$ \langle \mathcal O_A\rangle = \mathrm{Tr}_A\mathopen{}\left( \mathrm{Tr}_B\mathopen{}\left( \hat{\rho}_{AB}\right)\mathclose{} \hat{\mathcal O}_A\right)\mathclose{}, $$ or in other words, $$ \langle \mathcal O_A\rangle = \mathrm{Tr}_A\mathopen{}\left( \hat{\rho}_{A} \hat{\mathcal O}_A\right)\mathclose{}. $$ Thus, if you want to predict the results of any possible experiment that only involves $A$, then you need no more, and no less, than $\hat{\rho}_{A}$.
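A quick numerical illustration of this identity (a sketch; the dimensions and the reshape-based partial trace below are my own choices, not from the answer):

```python
import numpy as np

rng = np.random.default_rng(0)
dA, dB = 2, 3

# Random density matrix on the composite system: rho = M M† / Tr(M M†)
M = rng.normal(size=(dA*dB, dA*dB)) + 1j*rng.normal(size=(dA*dB, dA*dB))
rho = M @ M.conj().T
rho /= np.trace(rho)

# Partial trace over B: reshape to indices (a, b, a', b') and sum over b = b'
rho_A = np.trace(rho.reshape(dA, dB, dA, dB), axis1=1, axis2=3)

# Random Hermitian observable acting only on A
H = rng.normal(size=(dA, dA)) + 1j*rng.normal(size=(dA, dA))
O_A = (H + H.conj().T) / 2

lhs = np.trace(rho @ np.kron(O_A, np.eye(dB)))   # <O_A> from the full state
rhs = np.trace(rho_A @ O_A)                      # <O_A> from the reduced state
```

The two expectation values agree to machine precision, for any choice of $\hat{\mathcal O}_A$.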
{ "domain": "physics.stackexchange", "id": 96101, "tags": "quantum-mechanics, quantum-information, hilbert-space, density-operator, trace" }
How does ROS/OpenNI convert the Kinect's raw disparity to depth (in millimeters)?
Question: The formula (including the default parameter values), please! Thank you! Originally posted by ranqi on ROS Answers with karma: 1 on 2012-08-14 Post score: 0 Answer: Perhaps this link can help you http://blog.mivia.dk/2011/06/my-kinect-datasheet/ Originally posted by DiogoCorrea with karma: 61 on 2012-12-10 This answer was ACCEPTED on the original site Post score: 0
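For reference, a widely circulated empirical fit maps the Kinect's 11-bit raw disparity to depth roughly as follows (a sketch: the two calibration constants below are a commonly quoted per-device fit, not official OpenNI parameters, and are not necessarily what the linked datasheet gives):

```python
def raw_disparity_to_depth_m(raw_disparity):
    """Approximate depth in metres from a Kinect raw disparity value.

    The constants are an assumed per-device calibration fit, not official
    OpenNI parameters. Multiply the result by 1000 for millimetres.
    """
    return 1.0 / (raw_disparity * -0.0030711016 + 3.3309495161)
```

Note the model has the form depth = 1/(a·d + b), so depth grows quickly as the raw value approaches b/(−a) ≈ 1084, beyond which the fit is meaningless.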
{ "domain": "robotics.stackexchange", "id": 10614, "tags": "ros, kinect, depth" }
Use of soft iron in electromagnets despite having very high retentivity
Question: So my book states that an electromagnet should not remain a magnet when the current is switched off. Then why is a soft iron core used in an electromagnet? Its retentivity is higher even than that of steel, which is used to make permanent magnets. Please answer only on the basis of retentivity and coercivity. Also, I know that soft iron, having low coercivity, is used in induction-related machines where a constantly changing AC current is used. I want to know why it doesn't retain the magnetism when the external magnetic field has been switched off, despite having high retentivity. Answer: Soft iron is commonly used here because it is easily magnetized, easy to machine, and cheap. Although its retentivity (remanence) can indeed be high, its coercivity is very low, so even small stray fields or mechanical disturbances are enough to destroy the residual magnetization once the current is off. For an AC motor, the magnetic field direction is flipping constantly, so there is no chance of building up a net residual field in its iron parts.
{ "domain": "physics.stackexchange", "id": 90771, "tags": "magnetic-fields, ferromagnetism" }
How to avoid building SQL by concatenating strings?
Question: I found myself building a query by concatenating strings. This just seems wrong. Any ideas of how to avoid this kind of situation? I'm using Rails 4, and I don't see how Arel can make things better for my case.

def self.search(params)
  # query example:
  # SELECT "jobs".* FROM "jobs" WHERE (telecommute = 'f' AND ( freelance = 't' ) AND country = 'German')
  query = 'telecommute = ?'
  args = [params[:telecommute] == 'true']
  work_types = %W(full_time contractor freelance)
  accepted_work_types = params.find_all{ |k,v| work_types.include?(k) }.map{ |k,v| k }
  if accepted_work_types.any?
    query += " AND ( #{accepted_work_types.map{ |wt| "#{wt} = ?"}.join(" OR ")} )"
    args += accepted_work_types.map{ |wt| true }
  end
  [:city, :state, :country].each do |index|
    unless params[index].blank?
      query += " AND #{index} = ?"
      args << params[index]
    end
  end
  Job.where(query, *args)
end

Answer: Yeah, that does feel kinda wrong. But OR queries in Rails are tricky. There are two things here: one is creating the query, the other is parsing the params. I'd probably start1 by preparing/parsing the params so building the query is cleaner. I'd add a method - or several - to the controller, rather than the model, to do this. Both model and controller are involved in this, but I'd rather not pass params to the model. Things like checking the string value of the telecommute param seem like a job for the controller, whereas the model's search method should only have to deal with a boolean. For instance, you might want to call Job.search from code, where it'd be silly to use string values. There's some cleanup you can do here too. For instance, I'd strongly consider adding constants or methods for the work_types array and the location keys. I'd also be more consistent; in one place you use %w(...) to make an array of (string) keys, and in another you use [...] to make an array of (symbol) keys.
Here's an example, if you were to just use one method:

def search_params
  search_terms = {}
  # put the "AND" terms in the hash
  search_terms[:telecommute] = params['telecommute'] == 'true'
  Job.LOCATION_KEYS.each_with_object(search_terms) do |key, memo|
    memo[key] = params[key] if key.present?
  end
  # put the "OR" terms in a nested hash
  Job.WORK_TYPES.each_with_object(search_terms) do |key, memo|
    if params[key].present?
      memo[:work_types] ||= {}
      memo[:work_types][key] = true
    end
  end
  search_terms
end

It ain't pretty, but at least it's all in one place. As for the searching itself, you can use regular ActiveRecord's query interface without descending to Arel:

def self.search(hash = {})
  # we'll be modifying the hash, so work on a dup
  # (you'll first want to check if it's empty, though!)
  terms = hash.dup
  # grab the work types
  work_types = terms.delete(:work_types)
  # add the AND clauses
  query = self.where(terms)
  # add the OR clauses (if any)
  if work_types
    subquery = self.where(work_types).where_values.inject(:or) # magic!
    query = query.where(subquery)
  end
  query
end

I get this:

Job.search(telecommute: true, country: "US", work_types: { full_time: true, freelance: true }).to_sql
# => SELECT "jobs".* FROM "jobs" WHERE "jobs"."telecommute" = 't' AND "jobs"."country" = 'us' AND (("jobs"."full_time" = 't' OR "jobs"."freelance" = 't'))

1 Edit: Actually, that's not where I'd start if I were writing this from scratch. It is however where I started my rewrite, but that's because it was already there. Otherwise, I'd start by defining the search method as I'd want it to look and work - without worrying about the params - and then worry about how to make the params fit.
{ "domain": "codereview.stackexchange", "id": 10058, "tags": "ruby, ruby-on-rails" }
A doubt on the action of NBS/CCl4
Question: Suppose we have an alkene (e.g. 2-butene) and we subject it to allylic bromination using NBS/CCl4, and the mixture is then separated using fractional distillation. In my view, as the reaction proceeds by a free-radical pathway, we should get three products, namely cis- and trans-1-bromobut-2-ene and the rearranged product, but the answer is given as only 2 fractions. Where am I going wrong? Answer: You are quite correct that this is a radical bromination, known as the Wohl–Ziegler reaction. The mechanism is here: Wohl–Ziegler. The key point is that the radical intermediate at the allylic position is stabilised by resonance with the double bond, giving a 3-centred intermediate. This can brominate at either end (favouring the more substituted), but it does not isomerise the double bond. So I think the question setters are referring to 3-bromobut-1-ene and 1-bromobut-2-ene as the possible products, without concerning themselves with the E/Z isomers.
{ "domain": "chemistry.stackexchange", "id": 11172, "tags": "organic-chemistry, reaction-mechanism" }
What is a “spectator mode” in this analysis of molecular vibration modes
Question: I'm reading a paper in which I found this line: The other modes have relatively small contributions for the displacements of the heavy atoms and can be considered as spectator modes for the internal conversion I cannot find a definition or explanation of term “spectator mode”. What is it exactly? Answer: The paper in question is Ab Initio Trajectory Surface-Hopping Study on Ultrafast Deactivation Process of Thiophene. From a quick Google Scholar search that returns 187 hits, it is clear that the expression “spectator mode” for a harmonic vibration mode is not a standard terminology, but is nonetheless used by plenty of authors. As to the meaning, I think you got it from the everyday meaning of “spectator”: they are vibration modes that do not participate in the phenomenon of interest, i.e. that are not coupled to the chemical or physical process studied (which can be a chemical reaction, an adsorption process or a photophysical process).
{ "domain": "chemistry.stackexchange", "id": 6154, "tags": "physical-chemistry, computational-chemistry" }
Spherical charged shells with grounding
Question: Consider two concentric spherical shells of radii $a$ and $2a$ respectively. Let the inner shell have potential $V_0$ and the outer shell be grounded. What is the potential $V(r)$ as a function of the distance to the center of the shells, and what are the charges on the shells? This is how I would approach the problem. Let $Q_i$ and $Q_o$ be the charge on the inner and outer shell respectively. Put $$V(r) = -\int_{2a}^{r} \vec{E}(r') \cdot d\vec{r}'.$$ Since $E(r) = 0$ for $r < a$, $E(r) = Q_i/(4\pi\epsilon_0r^2)$ for $a < r < 2a$, and $E(r) = (Q_i+Q_o)/(4\pi\epsilon_0r^2)$ for $2a < r$, this gives us $V(r) = Q_i/(8\pi\epsilon_0a)$ for $r < a$, $V(r) = Q_i(2a/r-1)/(8\pi\epsilon_0a)$ for $a < r < 2a$, and $V(r) = (Q_i+Q_o)(2a/r-1)/(8\pi\epsilon_0a)$ for $2a < r.$ Now requiring that $V(a)=V_0$, we get $Q_i = 8\pi\epsilon_0aV_0$. But what about $Q_o$? From the form of the potential we immediately get $V(2a) = 0$, regardless of the value of $Q_o$. The answer to the problem states that $Q_o = -Q_i$. It also directly states that $V(r) = 0$ for $r > 2a$, but frankly I don't see why that follows from the assumption that the outer shell is grounded. Does it all rest on the implicit assumption that $V(\infty) = 0$, which makes my definition of $V(r)$ wrong? Certainly we are not required to make this assumption? Answer: With the outer shell grounded, once you put a charge of $+Q_i$ on the outer surface of the inner shell, a charge of $-Q_i$ will be induced on the inner surface of the outer shell. Think of it this way: there is no electric field inside a conductor, so every electric field line which starts on a charge on the outside surface of the inner sphere must finish on an opposite charge on the inside surface of the outer sphere. So there is only an electric field between the inner and the outer shell, and so this is the only region where the electric potential changes.
If earth is taken to be the zero of potential, which is often the case, then the outer shell must also be at zero potential if it is connected to earth. If you think about Gauss's law and consider a spherical Gaussian surface centred at the centre of the spherical shells, then if $a \le r \le 2a$ the charge enclosed by the surface is $+Q_i$. Once you have $r>2a$ the enclosed charge is zero, $+Q_i- Q_i =0$, and so the electric field outside the outer sphere is zero. For $r>2a$: $E=0$ and $V=0$. For $a \le r \le 2a$: $E = \dfrac {1}{4 \pi \epsilon_o} \dfrac {Q_i}{r^2}$ and, integrating inward from the grounded outer shell, $V = \dfrac {Q_i}{4 \pi \epsilon_o} \left(\dfrac{1}{r} - \dfrac{1}{2a}\right)$. For $r< a$: $E=0$ and $V = \dfrac {Q_i}{4 \pi \epsilon_o} \left(\dfrac{1}{a} - \dfrac{1}{2a}\right) = \dfrac{Q_i}{8 \pi \epsilon_o a}$.
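A tiny numerical check of these piecewise formulas (a sketch; the values of $a$ and $V_0$ are arbitrary): fix $Q_i$ from the boundary condition $V(a)=V_0$ and verify the grounded condition $V(2a)=0$.

```python
import numpy as np

eps0 = 8.854187817e-12   # F/m
a, V0 = 0.1, 5.0         # arbitrary inner radius and inner-shell potential
Qi = 8*np.pi*eps0*a*V0   # from V(a) = Qi/(8*pi*eps0*a) = V0

def V(r):
    """Potential of the grounded-outer-shell configuration."""
    if r >= 2*a:
        return 0.0                                    # outside: no field, grounded
    if r >= a:
        return Qi/(4*np.pi*eps0) * (1/r - 1/(2*a))    # between the shells
    return Qi/(8*np.pi*eps0*a)                        # constant inside the inner shell
```

The potential is continuous at both shells, equals $V_0$ everywhere inside the inner shell, and vanishes identically outside the grounded one.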
{ "domain": "physics.stackexchange", "id": 35305, "tags": "homework-and-exercises, electrostatics, electric-fields, charge, gauss-law" }
Meaning of a battery being 2.5 V
Question: So this is a question with which I have been stuck for a while, and it all revolves around the concept of the volt. I have posted a LOT of questions on physics forums while trying to understand the concept of the volt and potential difference. This is hopefully going to be the last one. What exactly do you mean when you say that a battery is 2.5 V? I know it says that the potential difference between the 2 terminals of the battery is 2.5 V, but then what is the potential of each of the 2 ends of the battery? Is there a unit of potential and not just potential difference? Also, volt = work/charge, but how can work/charge express the difference of potentials across the 2 terminals? Answer: Volt is a name for the unit Joule-per-Coulomb (energy-per-charge). 2.5 volts means 2.5 Joules per Coulomb. A 2.5 volt battery means that there is a difference of 2.5 Joules per Coulomb from one terminal to the other. The actual values (of electric potential) are not known (and are not important); they could be 3 volts at one terminal and 5.5 volts at the other. But the difference in electric potential (voltage) is still 2.5 volts. The unit of electric potential and of electric potential difference (also called voltage) are the same, namely the volt. A difference $\Delta V=V_2-V_1$ is just a subtraction and doesn't change the units. How can work/charge express the difference of potentials across the 2 terminals? Charges want to stay at the point of lowest possible potential. That is thus the low-potential terminal. The battery must do work to move a charge from the low-potential terminal to the high-potential terminal. It must force the charge to move against the repulsion that the charge experiences from this high-potential terminal. The work done by the battery is then the energy that ends up stored once the charge has been moved.
When the charge then moves (through a circuit) back to the low-potential terminal, all the stored energy (which equals the work done by the battery to lift it in the first place) is spent. The battery must then do work on this charge one more time to again move it to the high-potential terminal. How can you assign a joule/coulomb value to every point in the circuit? You can do that just as you can for a book on a shelf: it has a certain amount of (gravitational) potential energy assigned to that specific height above the surface. In the same way, charge is strongly repelled from the high-potential terminal. The further it moves away from this terminal, the more the potential drops - every other point has its own potential, decreasing along the circuit. This is similar to the book falling from the shelf: its (gravitational) potential energy decreases as it falls, since that stored energy is gradually being "spent" to make it fall. Every point during the fall is associated with a different amount of stored potential energy.
{ "domain": "physics.stackexchange", "id": 33686, "tags": "potential, voltage, conventions, batteries" }
Constant acceleration over cosmological distances
Question: At constant acceleration in special relativity, the time differs for a stationary observer and the astronaut; see the following article for an in-depth explanation: Relativistic Rocket. However, when large distances are involved, the article says that, due to the expansion of the universe, general relativity equations will have to be used instead. So what are the general relativity equations that should be applied to relativistic constant acceleration over large distances? Answer: I've never seen the cosmological version of the relativistic rocket in any textbook, but I think it's fairly straightforward to derive it from the standard cosmological equations. Let's start with the FLRW metric, $$ \text{d}s^2 = c^2\text{d}t^2 - a(t)^2\,\text{d}\ell^2, $$ where $a(t)$ is the scale factor and $\text{d}\ell$ the infinitesimal co-moving distance. The Friedmann equations for the standard ΛCDM-model have the solution $$ H(a) = \frac{\dot{a}}{a} = H_0\sqrt{\Omega_{R,0}\,a^{-4} + \Omega_{M,0}\,a^{-3} + \Omega_{K,0}\,a^{-2} + \Omega_{\Lambda,0}}, $$ which expresses the Hubble parameter $H(a)$ as a function of the Hubble constant and the relative present-day radiation, matter, curvature, and dark energy densities. From $$ \dot{a} = \frac{\text{d}a}{\text{d}t} $$ we get $$ \text{d}t = \frac{\text{d}a}{\dot{a}} = \frac{\text{d}a}{a\,H(a)}, $$ so that $$ t(a) = \int_0^a\frac{\text{d}a'}{a'\,H(a')}, $$ which we can numerically invert to obtain $a(t)$ (see also this post). Now, a rocket with velocity $v(t)$ will travel in a time $\text{d}t$ a proper distance $$ a(t)\,\text{d}\ell = v(t)\,\text{d}t, $$ so that the total co-moving distance travelled in a cosmic time interval $[t_0,t_1]$ is given by $$ D_\text{c} = \int_{\ell_0}^{\ell_1}\text{d}\ell = \int_{t_0}^{t_1}\frac{v(t)\,\text{d}t}{a(t)}, $$ while the corresponding proper distance is $D = a(t_1)D_\text{c}$. For more details regarding co-moving and proper distance, see this post.
All that's left needed is an expression for $v(t)$. This is simply the SR formula for the relativistic rocket with constant proper acceleration $g\,$: $$ v(t) = \frac{g(t-t_0) + w_0}{\sqrt{1+[g(t-t_0) + w_0]^2/c^2}}, $$ where $$ w_0 = \frac{v_0}{\sqrt{1-v_0^2/c^2}}, $$ and $v_0$ is the initial velocity at time $t_0$; see this post for the derivation. By inserting the formulae for $a(t)$ and $v(t)$ in the integral above, we can calculate the travelled co-moving distance $D_\text{c}$. Also, the proper time elapsed on board is $$ \tau = \int_{\tau_0}^{\tau_1}\text{d}\tau = \int_{t_0}^{t_1}\sqrt{1-v(t)^2/c^2}\,\text{d}t. $$
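Putting the pieces together numerically (a sketch in units where $c = H_0 = 1$; the flat matter+Λ parameters, grid sizes, and the choice of travel interval are illustrative assumptions, not from the answer):

```python
import numpy as np

# Fiducial flat matter + Lambda cosmology (illustrative values)
Om_r, Om_m, Om_k, Om_l = 0.0, 0.3, 0.0, 0.7
c, H0, g = 1.0, 1.0, 1.0           # units: c = 1, H0 = 1; proper acceleration g

def H(a):
    return H0*np.sqrt(Om_r*a**-4 + Om_m*a**-3 + Om_k*a**-2 + Om_l)

def trapz(y, x):                    # simple trapezoid rule
    return float(np.sum(0.5*(y[1:] + y[:-1])*np.diff(x)))

# t(a) by integrating dt = da/(a H(a)); t = 0 is set at a = 1e-4 (close to the Big Bang)
a_grid = np.linspace(1e-4, 4.0, 400001)
f = 1.0/(a_grid*H(a_grid))
t_grid = np.concatenate(([0.0], np.cumsum(0.5*(f[1:] + f[:-1])*np.diff(a_grid))))

def a_of_t(t):                      # numerical inversion by interpolation
    return np.interp(t, t_grid, a_grid)

# Rocket starts at rest (w0 = 0) today (a = 1) and burns for one Hubble time
t0 = float(np.interp(1.0, a_grid, t_grid))
t = np.linspace(t0, t0 + 1.0, 100001)
w = g*(t - t0)
v = w/np.sqrt(1.0 + (w/c)**2)       # SR constant-proper-acceleration velocity

Dc  = trapz(v/a_of_t(t), t)                 # co-moving distance travelled
D   = a_of_t(t[-1])*Dc                      # proper distance at arrival
tau = trapz(np.sqrt(1.0 - (v/c)**2), t)     # proper time elapsed on board
```

As expected, $v<c$ throughout, the on-board proper time is shorter than the elapsed cosmic time, and the proper distance exceeds the co-moving one because $a(t_1)>1$.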
{ "domain": "physics.stackexchange", "id": 60705, "tags": "general-relativity, cosmology, acceleration" }
Prove that a sequence of increasing find operations on a splay tree takes $\mathcal{O}(n)$ time
Question: When studying splay trees, I found the following statement: Suppose we have a splay tree and a sequence of Find operations, where the elements we are searching for are in increasing order. Then the total time necessary to run the sequence is $\mathcal{O}(n)$ ($n$ is the number of nodes in the tree). I looked around, but haven't been able to find anything. Can this be proven formally? Or shown to be true? Answer: Here is a helper fact. (Simple cost of splaying) Let $c(v)$ be the time it takes to find an element $v$ in a splay tree. There exists a constant $c_0$, independent of the splay tree and $v$, such that $c(v)\le c_0d(v)$, where $d(v)$ is the depth of $v$. Here is a sketch of a proof. There are two tasks that contribute to $c(v)$: locating $v$ by going from the root downwards to $v$, which takes at most $c_1d(v)$ time for some constant $c_1$; and splaying $v$ to the root. Since each splay step (zig, zag, zig-zig, zag-zag, zig-zag, zag-zig) takes constant time and moves $v$ nearer to the root by 1 or by 2, splaying $v$ to the root takes at most $c_2d(v)$ time for some constant $c_2$. Let $c_0=c_1+c_2$, and the proof is done. Suppose we have a splay tree and a sequence of Find operations, where the elements we are searching for are in increasing order. Then the total time necessary to run the sequence is $O(n)$, where $n$ is the number of nodes in the tree. Let $v_0, v_1,\cdots, v_m$ be the elements searched for, in increasing order. It takes $O(n)$ to find $v_0$ (including splaying $v_0$ to the root). What about finding the remaining $m$ elements? Finding $v_1$ when $v_0$ is at the root takes at most $c_0d_{v_0}(v_1)$ time, where $d_{v_0}(v_1)$ is the depth of $v_1$ in the splay tree with $v_0$ as the root. Note that a splay tree is a binary search tree, so $d_{v_0}(v_1)$ is at most one more than the number of elements between $v_0$ and $v_1$ exclusively.
Finding $v_2$ when $v_1$ is at the root takes at most $c_0d_{v_1}(v_2)$ time, where $d_{v_1}(v_2)$ is at most one more than the number of elements between $v_1$ and $v_2$ exclusively. $\vdots$ Finding $v_{m}$ when $v_{m-1}$ is at the root takes at most $c_0d_{v_{m-1}}(v_m)$ time, where $d_{v_{m-1}}(v_m)$ is at most one more than the number of elements between $v_{m-1}$ and $v_m$ exclusively. Summing these bounds, the total time necessary to run the sequence of finding $v_1, v_2,\cdots,v_m$ is at most $c_0(m + k)$, where $k$ is the number of elements between $v_0$ and $v_m$ exclusively (the gaps between consecutive $v_i$ are disjoint), so $m + k \lt 2n$. So the total time necessary to run the sequence of finding $v_0, v_1, v_2,\cdots,v_m$ is at most $O(n) + 2c_0n$, which is still $O(n)$.
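The bound can also be checked empirically. Below is a minimal bottom-up splay tree (a sketch written just for this illustration) that counts rotations; since each rotation lifts the accessed node exactly one level, the rotation count tracks the total search time. After building a tree from randomly ordered keys, a full in-order sequence of finds performs only $O(n)$ rotations in total (the constant 12 in the assertion is an empirical safety margin, not a proved constant):

```python
import random

class Node:
    __slots__ = ("key", "left", "right", "parent")
    def __init__(self, key, parent=None):
        self.key, self.left, self.right, self.parent = key, None, None, parent

class SplayTree:
    def __init__(self):
        self.root, self.rotations = None, 0

    def _rotate(self, x):                     # rotate x above its parent
        p, g = x.parent, x.parent.parent
        if p.left is x:
            p.left = x.right
            if p.left is not None: p.left.parent = p
            x.right = p
        else:
            p.right = x.left
            if p.right is not None: p.right.parent = p
            x.left = p
        p.parent, x.parent = x, g
        if g is None: self.root = x
        elif g.left is p: g.left = x
        else: g.right = x
        self.rotations += 1

    def _splay(self, x):
        while x.parent is not None:
            p, g = x.parent, x.parent.parent
            if g is None:
                self._rotate(x)                       # zig
            elif (g.left is p) == (p.left is x):
                self._rotate(p); self._rotate(x)      # zig-zig
            else:
                self._rotate(x); self._rotate(x)      # zig-zag

    def insert(self, key):
        if self.root is None:
            self.root = Node(key); return
        cur = self.root
        while True:
            if key < cur.key:
                if cur.left is None:
                    cur.left = Node(key, cur); cur = cur.left; break
                cur = cur.left
            else:
                if cur.right is None:
                    cur.right = Node(key, cur); cur = cur.right; break
                cur = cur.right
        self._splay(cur)

    def find(self, key):
        cur = self.root
        while cur is not None and cur.key != key:
            cur = cur.left if key < cur.key else cur.right
        if cur is not None:
            self._splay(cur)
        return cur

n = 2000
keys = list(range(n))
random.seed(1)
random.shuffle(keys)
tree = SplayTree()
for k in keys:
    tree.insert(k)

tree.rotations = 0
for k in range(n):                 # increasing-order finds
    assert tree.find(k).key == k
sequential_rotations = tree.rotations
```

In practice the sequential scan uses only a few rotations per access on average, far below the per-access $\Theta(\log n)$ cost of accesses in random order.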
{ "domain": "cs.stackexchange", "id": 13190, "tags": "data-structures, splay-trees" }
I want to determine if this language is non-regular - any tips?
Question: After working through some examples of proving the non-regularity of languages, I encountered this language: $$ L = \{(ab)^{i}a^{j} \mid i \geq j,\ i,j \in \mathbb{N}\} $$ where $a^{k}$ means $a$ repeated $k$ times. Is this language regular? I believe not, since there are infinitely many prefixes $(ab)^{i}$ that must be told apart, so a DFA or NFA with finitely many states cannot accept the language. Answer: A regular language can be recognised by a finite state machine. After processing $(ab)^i$ and $(ab)^{i'}$ with $i \neq i'$, the state machine must be in different states: if, say, $i < i'$ and $j = i'$, then $(ab)^i a^j$ is not in the language, but $(ab)^{i'} a^j$ is, so processing $a^j$ from the two states must end in different states, one rejecting and one accepting. Since $i, i'$ were arbitrary, there cannot be a finite number of states.
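The distinguishing-suffix argument can be made concrete (a sketch; the membership test below is my own, written just for this illustration): for any $i < i'$, the suffix $z = a^{i'}$ separates the prefixes $(ab)^i$ and $(ab)^{i'}$.

```python
def member(s):
    """True iff s is of the form (ab)^i a^j with i >= j."""
    i = 0
    while s.startswith("ab"):   # greedy stripping of "ab" is unambiguous here,
        s = s[2:]               # since a block of a's never starts with "ab"
        i += 1
    # the remainder must consist solely of a's (possibly empty)
    return set(s) <= {"a"} and i >= len(s)

# For every i < i', the suffix a^{i'} distinguishes (ab)^i from (ab)^{i'}
for i in range(6):
    for ip in range(i + 1, 8):
        z = "a" * ip
        assert not member("ab"*i + z)   # i < j = i'  -> rejected
        assert member("ab"*ip + z)      # i' = j      -> accepted
```

Since the loop's property holds for all $i < i'$, no two prefixes $(ab)^i$ can share a state, which is exactly the Myhill-Nerode argument in the answer.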
{ "domain": "cs.stackexchange", "id": 19383, "tags": "regular-languages" }
From where does BERT get the tokens it predicts?
Question: When BERT is used for masked language modeling, it masks a token and then tries to predict it. What are the candidate tokens BERT can choose from? Does it just predict an integer (like a regression problem) and then use that token? Or does it do a softmax over all possible word tokens? For the latter, isn't there just an enormous amount of possible tokens? I have a hard time imaging BERT treats it like a classification problem where # classes = # all possible word tokens. From where does BERT get the token it predicts? Answer: There is a token vocabulary, that is, the set of all possible tokens that can be handled by BERT. You can find the vocabulary used by one of the variants of BERT (BERT-base-uncased) here. You can see that it contains one token per line, with a total of 30522 tokens. The softmax is computed over them. The token granularity in the BERT vocabulary is subwords. This means that each token does not represent a complete word, but just a piece of word. Before feeding text as input to BERT, it is needed to segment it into subwords according to the subword vocabulary mentioned before. Having a subword vocabulary instead of a word-level vocabulary is what makes it possible for BERT (and any other text generation subword model) to only need a "small" vocabulary to be able to represent any string (within the character set seen in the training data).
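To make the subword segmentation concrete, here is a toy version of the greedy longest-match-first WordPiece algorithm that BERT's tokenizer uses (a sketch with a tiny made-up vocabulary, nothing like the real 30522-token file):

```python
def wordpiece(word, vocab):
    """Greedy longest-match-first subword segmentation.

    Continuation pieces are marked with '##', as in BERT's vocabulary.
    """
    tokens, start = [], 0
    while start < len(word):
        end, piece = len(word), None
        while start < end:                    # try the longest candidate first
            candidate = word[start:end]
            if start > 0:
                candidate = "##" + candidate  # not at word start: continuation piece
            if candidate in vocab:
                piece = candidate
                break
            end -= 1
        if piece is None:                     # no piece matches: unknown token
            return ["[UNK]"]
        tokens.append(piece)
        start = end
    return tokens

vocab = {"play", "un", "##afford", "##able", "##ing", "##ed"}
print(wordpiece("playing", vocab))        # ['play', '##ing']
print(wordpiece("unaffordable", vocab))   # ['un', '##afford', '##able']
```

Any string over characters covered by the vocabulary can thus be represented, which is why a "small" subword vocabulary suffices and why the softmax over it stays tractable.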
{ "domain": "datascience.stackexchange", "id": 8633, "tags": "nlp, bert, language-model, tokenization" }
Proper generation of Complex white Gaussian noise using Matlab
Question: I am not comfortable using Matlab's awgn() function, as it is hard to understand what is actually going on. So, I wanted to confirm if the following is the correct way to generate noisy samples at a particular SNR of 30 dB and a particular variance for data in the complex domain. Answer: Refer to the code below, which generates some noise at a given SNR:

N = 100000;
% Generate some random signal
signal = randn(N, 1) + 1j*randn(N,1);
% Here, the signal power is 2 (1 for the real and 1 for the imaginary component)
signalPower_lin = 1/N*signal'*signal
% This corresponds to 3dB (assuming power=1 is 0dB)
signalPower_dB = 10*log10(signalPower_lin)
noisePower_dB = signalPower_dB - 30   % for 30dB SNR
noisePower_lin = 10^(noisePower_dB/10)
noise = sqrt(noisePower_lin/2) * (randn(N,1) + 1j*randn(N,1));
received = signal + noise;

The program outputs:

signalPower_lin = 2.0042 + 0.0000i
signalPower_dB = 3.0193 + 0.0000i
noisePower_dB = -26.9807 + 0.0000i
noisePower_lin = 0.0020 + 0.0000i

Note that to generate complex noise of variance 1, you need to do

noise = sqrt(1/2) * (randn(N,1) + 1j*randn(N,1))

since each component (real and imaginary) needs to have variance 1/2, such that their sum becomes 1. To answer your points: 1) As a rule of thumb for when to use 20 and when to use 10: if you describe powers or energies, the factor is 10; if you describe amplitudes, the factor is 20. Then, in dB-scale both have the same value, since: $$ \begin{align} P&=A^2 \text{ (Power = Amplitude squared)}\\ 10\log(P) &= 10\log(A^2)\\ &=20\log(A) \end{align} $$ So, any time you use a value that is meaningful when squared (like an amplitude), use 20. Any time you have something whose square root is meaningful (like a power), use 10. Note that in my code, I use noise = sqrt(noise_power/2) * randn(...) to generate the noise, i.e.
I transform the noise power into the noise amplitude prior to multiplying the normal random variable (which corresponds to changing the factor 10 to 20 in the exponent). Also remember the rule $$\operatorname{Var}(aX) = a^2\operatorname{Var}(X),$$ which is exactly the reason why you need to take the square root of the power, so that the resulting noise has the required power. 2) Here, I just transform noisePower_dB into noisePower_lin, which is by definition linear = 10^(dB/10). Note that $10^{-\mathrm{SNR}/10}$ gives the same result, since SNR = Signal/Noise, i.e. the noise is in the denominator (consider $10^{-x}=1/10^{x}$).
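A NumPy translation of the same recipe (a sketch; the sample count and seed are arbitrary choices), with the measured SNR checked against the target:

```python
import numpy as np

rng = np.random.default_rng(0)
N, snr_db = 200000, 30.0

# Complex signal with power ~2 (variance 1 per real/imaginary component)
signal = rng.standard_normal(N) + 1j*rng.standard_normal(N)
sig_pow = np.mean(np.abs(signal)**2)

# Noise power for the target SNR, split evenly between the two components
noise_pow = sig_pow / 10**(snr_db/10)
noise = np.sqrt(noise_pow/2) * (rng.standard_normal(N) + 1j*rng.standard_normal(N))
received = signal + noise

measured_snr_db = 10*np.log10(sig_pow / np.mean(np.abs(noise)**2))
```

The sqrt(noise_pow/2) scaling is the power-to-amplitude conversion discussed above; the empirically measured SNR lands within a small fraction of a dB of the 30 dB target.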
{ "domain": "dsp.stackexchange", "id": 4546, "tags": "matlab, noise" }
ROS Chat Group?
Question: Just a random question as I have lots of questions to ask, is there any active Chat group for ROS? I have quite a few questions and really don't want to spam the word (people have already been really helpful) Originally posted by burf2000 on ROS Answers with karma: 202 on 2017-01-17 Post score: 2 Answer: Technical questions are best asked here, on ROS Answers (after having searched for similar questions and answers). For more general discussion, https://discourse.ros.org may be appropriate. There used to be a ROS Slack organization (essentially a chat room), but it has been phased out in favor of https://discourse.ros.org I believe. Ditto for the mailing list. Originally posted by spmaniato with karma: 1788 on 2017-01-17 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by burf2000 on 2017-01-17: Thank you, I need to get a custom robot up and running pretty quick and have some of it working, just need help piecing the final parts together Comment by spmaniato on 2017-01-17: This book might help then: http://shop.oreilly.com/product/0636920024736.do (I'm not affiliated in any way) Comment by burf2000 on 2017-01-18: I think I have that one and 2 others on my Kindle, the issue I have is time (kids) plus I feel I am nearly there. Comment by burf2000 on 2017-02-13: Man I wish there was a chat group, stuck for days and no ones replying :( Comment by w on 2017-03-14: I absolutely agree. With as many of us involved in this, it shouldn't be too much of a task to get such a chat group going. I love answers as a resource, but whether you get a response is totally random... Time is valuable, for all of us, what can I do to help promote this idea? Comment by Airuno2L on 2017-04-21: For what it's worth, asking questions here is beneficial to everyone because people having the same problem in the future are more likely to find it.
{ "domain": "robotics.stackexchange", "id": 26755, "tags": "ros" }
Why does an AC induction motor rotate the same way as the field?
Question: Suppose an aluminium disc is suspended so that it can rotate freely. A magnet is placed above (not touching) the aluminium disc and made to spin. This obviously causes a changing magnetic field. By Faraday's law, this will induce currents in the disc below that, by Lenz's law, oppose the motion of the magnet. However, why does the disc then still spin in the same direction as the movement of the magnet? I understand that the disc will induce a secondary magnetic field that slows the rotation of the magnet, but how come the disc still follows the magnet? Please help! All is appreciated. Answer: Let's start from the ground up. The experiment below illustrates the basic principle of an AC induction motor, and I assume it is what you are referring to (if not, the argument still works). When the magnet rotates, there will be an induced current in the aluminium disc. By Lenz's law, this current will act in such a way as to oppose the change in relative motion that caused it. The result is that the aluminium disc will appear to chase the spinning magnet. The part you are stuck on is why the disc follows the magnet. Think of it in terms of relative motion: by following the magnet, the disc decreases the relative motion between itself and the magnet, and decreasing that relative motion is precisely how the induced currents oppose the change (Lenz's law) - as if disc and magnet were trying to remain mutually stationary.
{ "domain": "physics.stackexchange", "id": 25923, "tags": "electromagnetism, electricity, induction, electromagnetic-induction" }
Is there a consensus on the fate of our universe?
Question: We all know that our universe has been expanding since the Big Bang. However, will it continue to expand at the current rate? Or, after reaching a maximum size, will it collapse in a Big Crunch? Answer: While there is a general consensus on the Big Bang picture of the universe's history and current state, there are three main models for its future: the open universe, flat universe, and closed universe models. Ultimately, the fate of the universe depends on the outcome of the competition between the expansion and the pull of gravity. Basically: The open universe model carries the idea that the universe will continue to expand indefinitely. The flat universe model says that the universe will continue to expand at an ever-decreasing rate that approaches zero as time reaches infinity. The consequence of both of these is that the universe will eventually become a very, very dark and lonely place, with all galaxies, stars, et cetera, being so far away from any other, or burned out, with all original hydrogen used up; we can forget about our descendants observing Andromeda, and the like - humans watching the sky would see nothing but blackness, voids in the distance. The closed universe model says that our universe won't proceed to expand forever, but that gravity will slow the outward expansion to an eventual halt before the universe begins to collapse back inward on itself. If this is the case, the fate is a Big Crunch, where, as the universe contracts, galaxies fall inwards toward each other, wreaking catastrophe, until all matter is as it once was: crushed into an extremely hot, super-dense state. There is also the oscillating universe model, a variant which says another Big Bang would then occur, resulting in a brand new universe born out of the same matter.
What we have to understand is that the best model is the one that agrees best with observations (for instance, the Steady State theory contradicts observations and so is largely discounted). It would be nice if we could observe the universe as it was billions of years ago, but alas, we can't do that directly. So, we use other methods, such as looking at far-away places in space at different distances - this way we can measure redshifts, but our instruments still aren't capable of sufficiently precise measurements at such distances, so the data used to check against cosmological models are fraught with uncertainties; this means we need still more information to fully determine which of the theories is correct.
{ "domain": "physics.stackexchange", "id": 3102, "tags": "cosmology, universe, space-expansion, big-bang, singularities" }
Compiler can't find transformLaserScanToPointCloud function
Question: Hi, I am quite new to ROS and also programming :) and facing a problem during making executable for node for converting laser scan to get point cloud from my laser scanner data. Can you please have a look on the error and tell me what can be the reason. I also added the dependencies. Everything looks to be fine. Thanks in advance. CMakeFiles/my_scan_to_cloud.dir/src/tfToPointcloud.o: In function `laser_geometry::LaserProjection::transformLaserScanToPointCloud(std::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, sensor_msgs::LaserScan_<std::allocator<void> > const&, sensor_msgs::PointCloud_<std::allocator<void> >&, tf::Transformer&, int)': /home/Reza/ros_workspace/laser_geometry/laser_geometry/include/laser_geometry/laser_geometry.h:212: undefined reference to `laser_geometry::LaserProjection::transformLaserScanToPointCloud_(std::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, sensor_msgs::PointCloud_<std::allocator<void> >&, sensor_msgs::LaserScan_<std::allocator<void> > const&, tf::Transformer&, double, int)' CMakeFiles/my_scan_to_cloud.dir/src/tfToPointcloud.o: In function `LaserScanToPointCloud': /home/Reza/ros_workspace/laser_geometry/laser_geometry/src/tfToPointcloud.cpp:22: undefined reference to `laser_geometry::LaserProjection::~LaserProjection()' CMakeFiles/my_scan_to_cloud.dir/src/tfToPointcloud.o: In function `~LaserScanToPointCloud': /home/Reza/ros_workspace/laser_geometry/laser_geometry/src/tfToPointcloud.cpp:8: undefined reference to `laser_geometry::LaserProjection::~LaserProjection()' /home/Reza/ros_workspace/laser_geometry/laser_geometry/src/tfToPointcloud.cpp:8: undefined reference to `laser_geometry::LaserProjection::~LaserProjection()' Originally posted by Reza Ch on ROS Answers with karma: 22 on 2012-04-16 Post score: 0 Answer: Two possibilities: you're sure you've added laser_geometry as a depend in your manifest.xml? Like, double-plus-sure? 
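For reference, the manifest.xml entry the answer is asking about would look roughly like this (the other depends shown are illustrative guesses at a typical scan-to-cloud node; the laser_geometry line is the point here):

```xml
<package>
  <description brief="my_scan_to_cloud">Converts LaserScan to PointCloud</description>
  <depend package="roscpp"/>
  <depend package="sensor_msgs"/>
  <depend package="tf"/>
  <depend package="laser_geometry"/>  <!-- needed to link against the library -->
</package>
```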
Another possibility: are you sure laser_geometry is installed? If you've done a source install (and have the dependency set), try using rosmake to build your package, not just make. If you're using a binary install, double-check that the laser_geometry package is installed. Originally posted by Mac with karma: 4119 on 2012-05-14 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by RSA on 2014-04-12: I have the same problem. I downloaded laser_geometry from source, but I have a package.xml, not a manifest.xml, which is somehow different, and I still have the same problem when I add it to the package.xml file. Do I have to add anything in the CMakeLists?
{ "domain": "robotics.stackexchange", "id": 9004, "tags": "ros, laser-scan, pointcloud" }
What does "sine" mean?
Question: I see the suffix "sine" (seen/sin) a lot, adenosine, cytosine, lysine, tyrosine, etc. Most of where I hear it is in amino acid R groups, but it's usually only the prefix that is recognized as significant. Answer: According to Textbook of metabolism and metabolic disorders (1964): The ribosides derived from purines have the suffix -osine; those from pyrimidines, the suffix -idine. The corresponding deoxyribosides, with the exception of thymidine, do not have any such simple designation. Occasionally, instead of, for example, guanine-deoxyriboside, the term deoxy-guanosine is used. Beyond the more common adenosine and guanosine, there are also: inosine and xanthosine
{ "domain": "chemistry.stackexchange", "id": 9591, "tags": "terminology, amino-acids" }
Correcting Coherent errors with Surface Codes
Question: Following this article: Correcting coherent errors with surface codes, I wonder about the modeling of a coherent error and its effect on the syndrome after measurement. They say that applying U on the initial logical state, and stabilizing using syndrome measurement, will collapse the state to $\exp(i\theta_s Z)$, where $\theta_s$ is a new angle. I feel that a few steps were skipped. I would be happy to see a detailed mathematical explanation that shows how the angle changed from arbitrary to one that depends on the syndrome measurement. Thank you! Answer: Here is a small example where I tried to answer it myself. Consider the bit-flip 3-qubit repetition code, which protects against one X error: $|0_L\rangle = |000\rangle$; $|1_L\rangle = |111\rangle$. Since it protects against $X$, it also protects against the linear combination $aI + bX$, and therefore against $R_x(\theta) = \cos(\theta/2)\,I - i\sin(\theta/2)\,X$. Now let's imagine the following flow: initialize $|000\rangle$; apply the error on qubit #2 with small $\theta$, giving $\cos(\theta/2)|000\rangle - i\sin(\theta/2)|010\rangle$; stabilize by syndrome measurement. There are 2 possibilities: measure the no-error syndrome with probability $\cos^2(\theta/2)$, collapsing to $|000\rangle$; or measure the X-error syndrome with probability $\sin^2(\theta/2)$, collapsing to $|010\rangle$, followed by correction back to $|000\rangle$. Therefore an X error with probability $\sin^2(\theta/2)$ is equivalent to a certain (probability $=1$) error of a small rotation by angle $\theta$.
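The flow above can be checked numerically with a small state-vector calculation (a sketch using plain NumPy; the qubit ordering and basis-state indices are my own choice):

```python
import numpy as np

def rx(theta):
    # Single-qubit rotation about X: cos(t/2) I - i sin(t/2) X
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -1j * s], [-1j * s, c]])

theta = 0.3
I2 = np.eye(2)

# |000> as an 8-dimensional state vector, qubit order (q1, q2, q3)
psi = np.zeros(8, dtype=complex)
psi[0] = 1.0

# Coherent error Rx(theta) on the middle qubit
psi = np.kron(np.kron(I2, rx(theta)), I2) @ psi

# A syndrome measurement distinguishes the no-flip state |000> (index 0)
# from the middle-qubit-flipped state |010> (index 2)
p_no_error = abs(psi[0]) ** 2   # = cos^2(theta/2)
p_x_error = abs(psi[2]) ** 2    # = sin^2(theta/2)
print(p_no_error, p_x_error)
```

The two printed probabilities reproduce the $\cos^2(\theta/2)$ and $\sin^2(\theta/2)$ branches of the argument above.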
{ "domain": "quantumcomputing.stackexchange", "id": 3371, "tags": "error-correction, error-mitigation, surface-code" }
Regularization for Inverse Problems using the Singular Value Decomposition (SVD)
Question: I am reading these lecture notes on Optimisation and Inverse Problems in Imaging, and I have difficulties understanding the figures on page 20 (Figure 3.2) and page 21 (Figure 3.3). Precisely, I don't understand what the numbers on the horizontal and vertical axes mean. I would appreciate it if you could explain this to me. Here is the code for Figure (3.3). Answer: Similar to The Concepts Behind SVD Based Image Processing, the horizontal axis is the sample index of the SVD basis. The idea in the chapter you linked is generalizing the Wiener Filter. While the Wiener Filter uses the Fourier Transform as a basis, the SVD uses a data-adaptive basis.
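The axis meaning can be made concrete with a small truncated-SVD (TSVD) experiment (an illustrative sketch, not the code from the notes; the operator and noise level are made up). The horizontal axis in such plots is the singular-value/basis-vector index $i$, and regularization amounts to keeping only the first $k$ of them:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10
# An ill-conditioned forward operator: orthogonal factors, fast-decaying spectrum
U0, _ = np.linalg.qr(rng.standard_normal((n, n)))
V0, _ = np.linalg.qr(rng.standard_normal((n, n)))
s0 = np.logspace(0, -8, n)
A = U0 @ np.diag(s0) @ V0.T

x_true = rng.standard_normal(n)
b = A @ x_true + 1e-6 * rng.standard_normal(n)   # noisy measurement

U, s, Vh = np.linalg.svd(A)
k = 5   # regularization: keep only the k largest singular values
x_tsvd = (Vh[:k].T / s[:k]) @ (U[:, :k].T @ b)

x_naive = np.linalg.solve(A, b)  # the unregularized inverse amplifies the noise
print(np.linalg.norm(x_tsvd - x_true), np.linalg.norm(x_naive - x_true))
```

Plotting $s$ against its index $i$ gives exactly the kind of index-vs-singular-value picture the notes show.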
{ "domain": "dsp.stackexchange", "id": 9319, "tags": "image-processing, matlab, inverse-problem, svd" }
Alternatives to Congo Red?
Question: I require Congo red indicator for an experiment I wish to perform involving dipping a cellulose sponge into an acid and a base. I first want to dye the cellulose sponge with Congo red, and observe the color change of the sponge as I place it in acids and bases. Congo red apparently binds to cellulose and thus is suitable for this experiment. Unfortunately the laboratory doesn't have this indicator at the moment. Are there any other indicators that could be used for this effect instead? Answer: Yes, Congo red binds to cellulose because it is a substantive (or direct) dye; these dyes don't need a mordant, so it binds directly to the cellulose. There are other dyes with this property, for example safflower and cochineal. In biology, Congo red is frequently substituted with Sirius Red F3B and Thioflavine S. At this link you can find some alternatives to Congo red used for demonstrating amyloid; however, I don't know whether you can apply these dyes to your case, because I can't find any reference about their use as pH indicators. Although it is reported that many substantive dyes are used as pH indicators, I can't find any list. Take into account that acid pigments are not substantive to cellulosic fiber, and most pH indicators are acids. I think you could try using a mordant to obtain the same result with a pH indicator normally used; however, I'm afraid that by using a mordant you risk changing the behavior of your pH indicator.
{ "domain": "chemistry.stackexchange", "id": 878, "tags": "acid-base, experimental-chemistry" }
Equations of motion for action with differential forms/Hodge star
Question: I'm trying to compute the equations of motion for an action, but I'm not really familiar with the notation and so I'm not entirely sure what to do. It's a non-linear sigma model, describing maps $X: \Sigma \to M$ where $\Sigma$ is two-dimensional, given by $$S[X] ~=~ \frac{1}{2}\int_{\Sigma} g_{ij} (X) \, dX^i \wedge \star \,dX^j$$ I'm used to seeing actions of the form $S = \int g_{ij}(X)\partial_{\mu}X^i \partial^{\mu}X^j$, and then getting the equations of motion from the Euler-Lagrange equations, but I don't know what the Euler-Lagrange equations look like in this notation. Answer: You can write this action in components if you want, and then proceed as you are used to. The Hodge star here is a two-dimensional Hodge star (because $\Sigma$ is two-dimensional, so $\star dX^j$ must be a one-form). Remember that for any function $f$ on the surface $\Sigma$, if you choose coordinates $(\sigma^1,\sigma^2)$ on $\Sigma$, then you can write $$df = (\partial_a f) d \sigma^a $$ where $a=1,2$. You can do it here for the functions $X^i$: $$dX^i = (\partial_a X^i) d \sigma^a $$ Finally, the Hodge star is obtained using the totally antisymmetric tensor $\epsilon_{ab}$: $$\star dX^j = \epsilon_{ab} (\partial^b X^j) d \sigma^a $$ So your action reads $$S[X] ~=~ \frac{1}{2}\int_{\Sigma} g_{ij} (X) \, (\partial_a X^i) \epsilon_{cb} (\partial^b X^j) d \sigma^a \wedge d \sigma^c$$ Using the volume form $\omega = \epsilon_{ac} d \sigma^a \wedge d \sigma^c$ and contracting the two epsilon tensors, this reduces (up to sign and normalization) to $$S[X] ~=~ \frac{1}{2}\int_{\Sigma} g_{ij} (X) \, (\partial_a X^i) (\partial^a X^j)\,\omega $$ which probably sounds familiar. Note that I have not been careful about signs or factors of $2$ or $1/2$ that may appear depending on the normalization you use for the tensor $\epsilon^{ab}$. Note also that a more concise way to get rid of the Hodge star is to use the abstract definition, which gives immediately $$dX^i \wedge \star \,dX^j = \langle dX^i , dX^j \rangle \omega . $$
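For completeness, varying the component form of the action (a standard sigma-model result, assuming a flat worldsheet metric; this step is not worked out in the answer above) gives the equations of motion:

```latex
% Euler-Lagrange equations of S = (1/2) \int g_{ij}(X) \partial_a X^i \partial^a X^j,
% where \Gamma^k_{ij} are the Christoffel symbols of the target-space metric g_{ij}:
\partial_a \partial^a X^k + \Gamma^k_{ij}(X)\, \partial_a X^i \, \partial^a X^j = 0
```

These are the familiar wave equations with a geodesic-like correction term from the curved target space.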
{ "domain": "physics.stackexchange", "id": 34732, "tags": "string-theory, differential-geometry, notation, variational-principle, action" }
Mustard gas hydrolysis (at a very high rate)
Question: Explain mustard gas hydrolysis at a very high rate: $\ce{Cl-CH2-CH2-S-CH2-CH2-Cl}$ I tried searching online but did not find this reaction anywhere. Can someone explain the steps/mechanism of this reaction to me? Even some hints might help. My attempt: According to Wikipedia, solvolysis is a special type of nucleophilic substitution ($\mathrm{S_N1}$) or elimination where the nucleophile is a solvent molecule. For certain nucleophiles, there are specific terms for the type of solvolysis reaction. For water, the term is hydrolysis; for alcohols, it is alcoholysis; for ammonia, it is ammonolysis; for glycols, it is glycolysis; for amines, it is aminolysis. It clearly states that hydrolysis/solvolysis can only be $\mathrm{S_N1}$ or elimination. However, I'm wondering how $\mathrm{S_N1}$ can occur on a primary carbon! Moreover, water doesn't seem to be a strong enough base for elimination. So what will be the preferred reaction mechanism? Answer: Luckily I found the mechanism via Google Image Search. Internal attack by the sulfur atom takes place during the reaction. This is called anchimeric assistance, the neighbouring-group effect, or $\mathrm{S_N}$NGP. Wikipedia describes it well.
{ "domain": "chemistry.stackexchange", "id": 6443, "tags": "organic-chemistry, reaction-mechanism, hydrolysis, organosulfur-compounds" }
Horizontal Free Fall
Question: Let's say you are launched out of a cannon at a $0^\circ$ angle, 5 feet off of the ground. Normally, from my basic understanding of free falling, you have to move unimpeded with no force acting upon you. If you are shot horizontally, would you be considered free falling if you are consistently moving at a terminal velocity of 122 mph? Answer: As I recall, free fall means unrestricted downward motion in a gravitational field. Any air drag (and terminal velocity) you experience due to your horizontal velocity leaving the cannon should have no effect on your downward acceleration $g$. The only restriction to free fall would be air drag on your downward motion. Since your height out of the cannon is only 5 feet, that's hardly enough falling distance for air drag to play any significant role, and you certainly would not reach any downward terminal velocity. Perhaps I am misunderstanding your scenario. If that's the case, let me know. Otherwise, hope this helps.
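The answer's claim is easy to check numerically (ignoring drag, which makes this an upper bound on the downward speed a 5-foot drop can produce):

```python
import math

g = 9.81            # m/s^2
h = 5 * 0.3048      # 5 feet in metres

t = math.sqrt(2 * h / g)   # time to fall 5 feet
v = g * t                  # downward speed at impact, m/s
v_mph = v * 2.23694

print(round(t, 2), round(v_mph, 1))  # ~0.56 s, ~12.2 mph
```

At roughly 12 mph downward, you are an order of magnitude below the quoted 122 mph terminal velocity, consistent with the answer.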
{ "domain": "physics.stackexchange", "id": 58523, "tags": "newtonian-gravity, projectile, free-fall" }
Why isn't memory data organized in two or more lines?
Question: I understand that computer memory data is sequential; it is one long line organized into one or more addresses, each of which contains one or more fixed-size words which contain raw data (in an initial manner, just offset values, I would assume). Why isn't memory data organized in two or more lines (why is it just one long line, or at least, so it would appear, via a user interface such as some text editor)? Answer: The notion of a "line" is an abstraction we impose. Currently our memory works as follows: you provide an address to the memory module that you want to read, and it returns the value at that address; or if you want to update memory, you provide an address and the new value, and it overwrites the value at that address with the new value. Why do we build memory that way? Well, the current way is simple and works, and can be implemented efficiently in circuits. If you wanted to consider some other alternative, you'd have to think about what the benefits and the costs are. I'm not clear on exactly how your counter-proposal would work, but I expect it would add complexity (which ultimately might add to the cost of implementing the memory subsystem, and also might make the system harder for programmers to understand) without a benefit that would make it worthwhile. If you wanted to dive into this in more detail, you'd probably start by learning how RAM works at the circuit level. Our current RAM is highly optimized to minimize the number of gates needed, i.e., to minimize cost.
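The read/write interface the answer describes can be sketched as a toy model (illustrative only; real RAM is a circuit, not an object, and the names here are made up):

```python
# A toy model of the memory interface: one flat, byte-addressable address
# space, read and written one address at a time.
class RAM:
    def __init__(self, size):
        self.cells = bytearray(size)   # every cell starts at 0

    def read(self, addr):
        return self.cells[addr]

    def write(self, addr, value):
        self.cells[addr] = value

mem = RAM(256)
mem.write(0x10, 42)
print(mem.read(0x10))  # 42
```

Any notion of "lines", rows, or 2D layout would be built by software on top of this flat address/value contract.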
{ "domain": "cs.stackexchange", "id": 16767, "tags": "sequential-circuit" }
List comprehension method
Question: I have developed some Python code which revolves around two custom classes - a 'Library' class (Lib) which contains a Python list of several objects based on a 'Cas' class. I've not posted the code for either of these classes here, but all you really need to know to understand my question is that the 'Library' object contains a Python list and the 'Cas' objects contain various attributes, some of which are strings and some are values. One of the objectives of the code is to manipulate the Python list in the Library class and return a sub-set of the 'Cas' objects based on some user-driven criteria. For example, return Cas objects where a particular attribute is equal to a given string or greater than a given value. For this purpose, I have written the following generic method filterLibrarySingle to allow me to filter the Python list in the library class (self.Lib) based on various methods (filterMethod), attributes (filterField) and values (filterValue). Within the method I'm achieving this using list comprehensions. On profiling my code, it would appear that this method can be quite a bottleneck! Does anyone have an idea of how I could speed it up? 
def filterLibrarySingle(self, filterField, filterMethod, filterValue1, filterValue2=None):
    if filterMethod == 'eq':
        self.Lib = [cas for cas in self.Lib if getattr(cas, filterField) == filterValue1]
    elif filterMethod == 'lt':
        self.Lib = [cas for cas in self.Lib if getattr(cas, filterField) < filterValue1]
    elif filterMethod == 'gt':
        self.Lib = [cas for cas in self.Lib if getattr(cas, filterField) > filterValue1]
    elif filterMethod == 'le':
        self.Lib = [cas for cas in self.Lib if getattr(cas, filterField) <= filterValue1]
    elif filterMethod == 'ge':
        self.Lib = [cas for cas in self.Lib if getattr(cas, filterField) >= filterValue1]
    elif filterMethod == 'gelt':
        self.Lib = [cas for cas in self.Lib if getattr(cas, filterField) >= filterValue1 and getattr(cas, filterField) < filterValue2]
    elif filterMethod == 'gele':
        self.Lib = [cas for cas in self.Lib if getattr(cas, filterField) >= filterValue1 and getattr(cas, filterField) <= filterValue2]

I've wracked my brains for days on this to try and speed it up but I guess my Python knowledge simply isn't good enough! Answer: From the comments it sounds like you don't really need to create a new list. Using generators will avoid the overhead of ever building a new list. Just use this pattern:

def filterLibrarySingle(self, filterField, filterMethod, filterValue1, filterValue2=None):
    if filterMethod == 'eq':
        for cas in self.Lib:
            if getattr(cas, filterField) == filterValue1:
                yield cas
    ...

And you can call the method such as:

for item in filterLibrarySingle('my_field', 'eq', 'George'):
    do_stuff(item)

If this doesn't work, then you may have to consider designing a more advanced structure to store your data so that your searches are not \$O(n)\$ but rather \$O(\log{}n)\$.
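Beyond generators, another variant worth profiling (my own sketch, not part of the accepted answer): dispatch through the operator module so the string comparison happens once and the single-value branches share one comprehension:

```python
import operator

# Table-driven version of the question's method; the Cas class below is a
# minimal stand-in for the question's objects, purely for illustration.
_OPS = {'eq': operator.eq, 'lt': operator.lt, 'gt': operator.gt,
        'le': operator.le, 'ge': operator.ge}

def filter_library_single(lib, filter_field, filter_method, value1, value2=None):
    if filter_method in _OPS:
        op = _OPS[filter_method]
        return [cas for cas in lib if op(getattr(cas, filter_field), value1)]
    if filter_method == 'gelt':
        return [cas for cas in lib if value1 <= getattr(cas, filter_field) < value2]
    if filter_method == 'gele':
        return [cas for cas in lib if value1 <= getattr(cas, filter_field) <= value2]
    raise ValueError('unknown filter method: %s' % filter_method)

class Cas:
    def __init__(self, val):
        self.val = val

lib = [Cas(v) for v in (1, 5, 10)]
print([c.val for c in filter_library_single(lib, 'val', 'ge', 5)])  # [5, 10]
```

This keeps the filtering \$O(n)\$, so for large libraries the answer's closing advice (an indexed or sorted structure) still applies.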
{ "domain": "codereview.stackexchange", "id": 16292, "tags": "python, performance, python-2.x" }
Is re-compilation of rviz, after adding a module necessary?
Question: Hi everyone, I added a module in Gazebo; it is working properly and not giving any errors. Also, I can see the rostopic, but I am not able to see it in rviz. My question is: is it necessary to recompile rviz? If yes, then how do I compile it? Originally posted by SAK on ROS Answers with karma: 94 on 2011-07-31 Post score: 0 Answer: Gazebo and RViz are independent of each other, so no. Apart from that, if you have some custom module that requires custom visualization, you would need to add that to rviz. Perhaps you can say a bit more explicitly what you are doing? For displaying sonars/IRs, check this question: http://answers.ros.org/question/1138/how-to-display-sonar-readings-in-rviz Originally posted by dornhege with karma: 31395 on 2011-08-01 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by SAK on 2011-09-04: The sonar display is now present in the Electric version of ROS; also, if one wants to display something in rviz, recompilation is necessary. Comment by dornhege on 2011-08-01: I've never used the Range display mentioned in the linked post, but I'd guess you are doing something wrong. rviz communicates by messages, so your simulation should send out sensor_msgs/Range messages and the rviz display should display them. You can test both independently with rostopic echo and pub 
Comment by SAK on 2011-08-01: One thing more: I need to know whether, after creating the gazebo.xacro and urdf.xacro files and adding them to base.gazebo.xacro, it is necessary to add them somewhere else or compile them, or whether creation alone is enough. Comment by SAK on 2011-08-01: Thanks for the reply. I created a module for an IR sensor; the topic is advertised (sensor_msgs::Range) but I am not able to see/load the topic (like the LaserScan one) in rviz. So do I have to do some extra coding for that, or do I need to add something in the xacro files?
{ "domain": "robotics.stackexchange", "id": 6307, "tags": "rviz" }
Difference between train, test split before preprocessing and after preprocessing
Question: I am new to machine learning. I am a bit confused about preprocessing. Generally: Scenario 1: I split the dataset into train, test and validation sets and apply transformations like fit_transform on train and transform on test. Scenario 2: The other method is applying the transformations on the entire dataset first and then splitting it into train, test and validation. I am a bit confused about choosing between dividing the data before preprocessing and feature engineering, or after preprocessing and feature engineering. Looking for a nice answer with effects and causes. Answer: You should absolutely adopt the first scenario. That's because the transformers that you use have some parameters (e.g. mean and standard deviation in the case of a standard scaler), and these parameters are learned from data, like the parameters of your machine learning model. As you know, you should not use the validation and test data for learning the model parameters, and for the same reason, you should not use them for learning the transformer parameters. As a result, you should use just the training samples for fitting your transformer parameters if you want a realistic machine learning scenario.
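Scenario 1 can be sketched in plain NumPy (illustrative; scikit-learn's fit_transform/transform pair does the same bookkeeping for you):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(5.0, 2.0, size=(100, 3))   # toy dataset

# Split FIRST, then learn the transformer parameters on the training part only
X_train, X_test = X[:70], X[70:]
mu = X_train.mean(axis=0)      # learned from train only (the "fit" step)
sigma = X_train.std(axis=0)

X_train_s = (X_train - mu) / sigma   # fit_transform
X_test_s = (X_test - mu) / sigma     # transform: reuse the train statistics

# Train is exactly standardized; test is only approximately so, and that is
# the point: the test set never influenced the learned parameters.
print(X_train_s.mean(axis=0).round(6))
```

Fitting on the full dataset instead (Scenario 2) would leak test-set statistics into `mu` and `sigma`, making the evaluation optimistic.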
{ "domain": "datascience.stackexchange", "id": 4956, "tags": "machine-learning, data-science-model" }
Shor: Modular exponentiation vs modular multiplication
Question: In his original article Polynomial-Time Algorithms for Prime Factorization and Discrete Logarithms on a Quantum Computer, Peter Shor constructed an algorithm for finding the period $r$ of the modular exponentiation $a^x \mod N$, where $N$ is the number to be factored and $a$ is a random integer such that $1 < a < N$ and $\gcd\{a,N\} = 1$. This period $r$ is then used in the factoring of the integer $N = pq$ via the greatest common divisor: $p,q = \gcd\{a^{r/2} \pm 1, N\}$. The period $r$ is found with quantum phase estimation. This is a short description of Shor's factorization algorithm. In the article Circuit for Shor's algorithm using 2n+3 qubits, Stéphane Beauregard provides an implementation of Shor's algorithm which is almost the same as the one described above, with the exception that rather than the period of the modular exponentiation, the period of the function $(ax) \mod N$ (modular multiplication) is searched for. This is first discussed on pg. 2 in fig. 1, and then several other times in the article. However, I fail to see how finding the period of the modular exponentiation and of the modular multiplication are equivalent problems. Could anyone shed more light on this? Answer: I think it's just an issue of terminology. Given integers $a$ and $N$, you're looking for the order $r$ of $a$ mod $N$, that is, the smallest $r$ such that $a^r\equiv1\bmod N$. In this context, people use "period" and "order" interchangeably. To find such $r$ via QPE, you want to look for a unitary $U$ which somehow encodes it in its eigenvalues — as QPE returns an approximation of an eigenvalue of $U$, when the input at the second register is the corresponding eigenstate. As it happens, if you use the unitary $U_{a,N}$ defined as $U_{a,N}|x\rangle=|ax\bmod N\rangle$, this will satisfy $U_{a,N}^r=I$, and thus its eigenvalues will be $\exp(2\pi i k/r)$ for $k=0,...,r-1$. 
So I would say it's the "period" of the function $f:k\mapsto a^k\bmod N$, as the smallest $r$ such that $f(r)=f(0)=1$, and you retrieve it doing QPE on the unitary implementing the multiplication $x\mapsto ax\bmod N$.
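The equivalence can be sanity-checked classically for small numbers (a toy sketch; Shor's algorithm of course finds $r$ with QPE rather than this loop):

```python
# The order r of a mod N (smallest r with a^r = 1 mod N) is also the number
# of applications of x -> a*x mod N needed to return any x to itself.
def order(a, N):
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

a, N = 7, 15
r = order(a, N)
print(r)              # 4
print(pow(a, r, N))   # 1

# Applying x -> a*x mod N exactly r times acts as the identity, which is
# precisely the U_{a,N}^r = I statement above:
x = 2
for _ in range(r):
    x = (x * a) % N
print(x)              # 2 again
```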
{ "domain": "quantumcomputing.stackexchange", "id": 5499, "tags": "shors-algorithm, quantum-phase-estimation, modular-exponentiation" }
To understand why satellite cells are genetically inactive in Barr body
Question: DAPI is used as a stain for DNA heterochromatic and euchromatic regions. The Barr body is heterochromatic. In the slide of a normal human female's cheek somatic cells, there is apparently no other clear dark spot inside the nucleus than the Barr body against the inner nuclear membrane. (I used an old microscope, so my observation may be wrong, as pointed out in the comment.) How can you deduce from the given slide that the spot really contains inactive satellite cells that are not used genetically? Why is the active X chromosome not visible by the same staining method? (It probably can be made visible by some other staining method.) Answer: DAPI (4',6-diamidino-2-phenylindole) preferentially binds AT-rich DNA (although it binds GC-rich DNA, too), which can give chromosomes distinctive banding patterns if they are polytene or in metaphase. In interphase condensed chromosomes, such as the inactive X chromosomes of female mammals (the Barr body), the relatively high concentration of tightly packed DNA makes the chromosome appear as a brighter spot in the nucleus. When the DNA of a chromosome is decondensed (as with the rest of the chromosomes in interphase), it appears as more-or-less homogeneously stained DNA in the nucleus. A great picture is here (see below): A is the DAPI staining, B is a protein localized to the Barr body, C is the RNA (Xist) which binds the Barr body. Since the active X is not condensed, it appears as the rest of the chromosomes do, so it cannot be identified among the mixture of autosomes. The difference in DAPI appearance has nothing to do with activity per se, but rather with differences in how tightly packaged the chromosomes are. I do not understand the first part of your question, though. You cannot tell from a still image that the Barr body is relatively inactive. If you were doing experiments on live cells, you could measure RNA production (using a labeled nucleotide), use immunofluorescence to show localization of e.g. 
RNA Polymerase, or reporter genes located on the inactive versus active X chromosomes. If you clarify your question, I can answer better.
{ "domain": "biology.stackexchange", "id": 67, "tags": "homework, staining, cell-biology" }
Kinect thermal imaging
Question: Can the Kinect sensor be used as a thermal imaging camera to return body temperature? Originally posted by programme on ROS Answers with karma: 21 on 2013-04-18 Post score: 2 Answer: No, not that I know of. The Kinect IR sensor works with an IR wavelength not far from visible light (830 nm), which is too short for thermal imaging. Also, there may be a lot more reasons why this isn't working that I don't know of. :) Originally posted by Ben_S with karma: 2510 on 2013-04-18 This answer was ACCEPTED on the original site Post score: 6 Original comments Comment by Dan Lazewatsky on 2013-04-18: To test for yourself, turn off the lights, cover up the IR projector, point the Kinect at something hot, and take a look at the IR image stream. @Ben_S is correct that you won't see anything (I've tried). Comment by Stephane.M on 2013-04-22: The human body emits at a wavelength of around 10 micrometers. So compared to the bandwidth of the Kinect, it's not even worth trying :-)
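The 10-micrometer figure in the last comment checks out against Wien's displacement law (a quick back-of-the-envelope sketch):

```python
# Wien's displacement law: lambda_peak = b / T. A ~310 K human body peaks
# near 10 micrometres, nowhere near the Kinect's 830 nm wavelength.
b_wien = 2.897771955e-3   # Wien displacement constant, m*K
T_body = 310.0            # approximate body surface temperature, K
lam_peak = b_wien / T_body
print(lam_peak * 1e6)     # ~9.35 micrometres
```

So the thermal emission from a person is more than a factor of ten longer in wavelength than anything the Kinect's near-IR sensor is built to detect.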
{ "domain": "robotics.stackexchange", "id": 13877, "tags": "ros" }
Does the turbine outlet affect the power generated by the turbine?
Question: In my textbook it is written: "Power generated by the turbine can be increased by using a gradual expansion at the turbine outlet." The picture below explains the meaning of gradual expansion at the turbine outlet, i.e. the cross-sectional area of the pipe at the turbine outlet increases gradually, causing gradual expansion. How does this cause the power generated by the turbine to increase? Answer: The turbine referred to in your textbook must be a specific type of turbine, namely a reaction turbine. Reaction turbines mainly utilise the pressure energy of flowing water to produce power. Flowing fluids carry both pressure energy (a function of fluid pressure) and kinetic energy (a function of flow velocity). Reaction turbines are rotated by the pressure difference of the water on the two sides of the turbine blades. They convert most of the pressure energy to mechanical power, but they are unable to convert much of the kinetic energy in this way, so the kinetic energy brought in by the water is left to flow away through the outlet. But if a pipe whose diameter is small at first and then increases gradually is used at the outlet, the pressure at the beginning of the pipe is lowered significantly. This happens due to continuity conditions and Bernoulli's principle. According to the continuity equation, if the cross-sectional area of a flow decreases, the flow velocity increases. But according to Bernoulli's principle, if the flow velocity increases the pressure decreases, given there is no change in elevation, losses, external work, etc. Thus the pressure on the outlet side of the turbine decreases, and the pressure difference between the two sides of the turbine increases. Thus the driving force increases, and hence the power. So such a pipe actually converts some of the kinetic energy into pressure energy for better utilisation in reaction turbines. Now, the water cannot flow out if its pressure remains below atmospheric pressure. 
So, in order to raise the pressure back, the diameter is then increased slowly; here too Bernoulli's principle is utilised. The increase in diameter of the outlet pipe is kept gradual to avoid losses (e.g. eddy formation due to sudden expansion). Such pipes with gradually increasing diameters used in reaction turbines are called draft tubes.
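The continuity-plus-Bernoulli argument can be put in numbers (an illustrative sketch; all values are made up, and losses and elevation changes are neglected):

```python
# Continuity: v = Q / A. Bernoulli (level, lossless): p + rho v^2 / 2 = const.
rho = 1000.0          # kg/m^3, water
Q = 0.5               # m^3/s volumetric flow rate
A1, A2 = 0.25, 0.10   # m^2: wide section vs. the narrow pipe entrance

v1, v2 = Q / A1, Q / A2               # 2.0 and 5.0 m/s
dp = 0.5 * rho * (v2**2 - v1**2)      # static pressure drop p1 - p2, in Pa
print(v1, v2, dp)                     # 2.0 5.0 10500.0
```

Narrowing the area from 0.25 to 0.10 m² raises the velocity from 2 to 5 m/s and lowers the static pressure by about 10.5 kPa, which is exactly the pressure reduction the draft tube exploits on the turbine's outlet side.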
{ "domain": "engineering.stackexchange", "id": 1479, "tags": "mechanical-engineering, fluid-mechanics, turbines, turbomachinery" }
Phase Response Function / Plotting in Excel (IIR Filter)
Question: Using formulas from the Audio EQ Cookbook I implemented biquad IIR filters in Excel. I can now plot every transfer function and the total of an 8-band EQ with shelving and peaking filters. My problem is, I cannot implement any of the phase response formulas in Excel. I'm not very good at these kinds of functions, especially implementing complex-number functions in Excel. Here is the transfer function. I have $\omega_0, \alpha, A_x, \omega$ of the frequency, $\phi$ of the frequency ($= 4\sin^2(\omega/2)$; I don't know what this is), the $b_0,b_1,b_2,a_1,a_2$ coefficients of every filter, and dB values at each frequency ($H(s)$). How can I write the phase response function in Excel? Answer: Excel is an awkward language for this type of thing, so we can make it easier if we do a little math upfront. Doing complex math in Excel is just about as much fun as slamming your fingers in the car door, so we shall try to avoid it. We have $$H(z) = \frac{b_0 + b_1z^{-1}+ b_2z^{-2}}{a_0 + a_1z^{-1}+ a_2z^{-2}}= \frac{B(z)}{A(z)}$$ If we want to evaluate this at normalized frequency $z = e^{j\omega}$ we get $$B(z) = b_0 + b_1\cos(\omega)+b_2\cos(2\omega)-j\left[b_1\sin(\omega)+b_2\sin(2\omega)\right] = B_r+jB_i$$ with $$B_r = b_0 + b_1\cos(\omega)+b_2\cos(2\omega) \\ B_i = -b_1\sin(\omega)-b_2\sin(2\omega) $$ So the whole thing becomes $$H(\omega) = \frac{B_r+jB_i}{A_r+jA_i}$$ With this we can calculate both magnitude and phase $$|H(\omega)| = \sqrt{\frac{B_r^2+B_i^2}{A_r^2+A_i^2}} $$ $$\angle{H(\omega)} = \tan^{-1}\left(\frac{B_i}{B_r}\right)-\tan^{-1}\left(\frac{A_i}{A_r}\right)$$ We can implement this in Excel by first calculating $B_r$, $B_i$, $A_r$, and $A_i$ and then using the last two formulas to calculate magnitude and phase. The phase can be done using the Excel function atan2(), which in typical Excel fashion uses non-standard conventions and reverses the argument order. 
You can find an example with a second order Butterworth lowpass at https://docs.google.com/spreadsheets/d/1ye7-WzmwNRpTglJsGk6Dtvb8kge9PICc/edit?usp=sharing&ouid=110367350423053204811&rtpof=true&sd=true EDIT based on comments It's been rightfully pointed out that the subtraction of the two phases can run into numerical problems and wrapping problems. A better way to calculate the phase is the following one $$H(\omega) = \frac{B_r+jB_i}{A_r+jA_i} = \frac{B_r+jB_i}{A_r+jA_i} \cdot \frac{A_r-jA_i}{A_r-jA_i} = \cdot \frac{B_rA_r+B_iA_i+j(B_iA_r-B_rA_i)}{X}, x \in \mathbb{R}$$ So we can get the phase of a single biquad as $$\angle{H(\omega)} = \text{atan2}(B_iA_r-B_rA_i, B_rA_r+B_iA_i) $$ where $\text{atan2}()$ is the quadrant correct version of the inverse tangent. The spreadsheet has an added column for the new phase calculation.
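The spreadsheet formulas can be cross-checked in a few lines (a sketch with arbitrary example coefficients, comparing against direct complex evaluation of the transfer function):

```python
import numpy as np

# Example biquad coefficients (made up, not from the Audio EQ Cookbook)
b = np.array([0.2, 0.4, 0.2])
a = np.array([1.0, -0.3, 0.1])
w = 0.7  # normalized frequency, rad/sample

# The real/imaginary parts from the answer's formulas
Br = b[0] + b[1] * np.cos(w) + b[2] * np.cos(2 * w)
Bi = -b[1] * np.sin(w) - b[2] * np.sin(2 * w)
Ar = a[0] + a[1] * np.cos(w) + a[2] * np.cos(2 * w)
Ai = -a[1] * np.sin(w) - a[2] * np.sin(2 * w)

mag = np.sqrt((Br**2 + Bi**2) / (Ar**2 + Ai**2))
# Quadrant-correct phase from the EDIT section of the answer
phase = np.arctan2(Bi * Ar - Br * Ai, Br * Ar + Bi * Ai)

# Reference: evaluate H(e^{jw}) directly with complex arithmetic
z = np.exp(1j * w)
H = (b[0] + b[1] / z + b[2] / z**2) / (a[0] + a[1] / z + a[2] / z**2)
print(np.isclose(mag, abs(H)), np.isclose(phase, np.angle(H)))  # True True
```

The same four cell formulas ($B_r$, $B_i$, $A_r$, $A_i$) are what you would enter in Excel, with atan2's reversed argument order being the only trap.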
{ "domain": "dsp.stackexchange", "id": 11445, "tags": "phase, infinite-impulse-response, transfer-function" }
Equilibrium for very small amounts of reactants
Question: Is there any difference in the concept of equilibrium when it comes down to dealing with extremely small amounts of reactants? Say we have $$\ce{A + B <=> C + D}$$ and $K_c$ is $100,000$. If we only have $10$ atoms each of $\ce{A}$ and $\ce{B}$ in a beaker, we would expect the container to be totally full of $\ce{C}$ and $\ce{D}$. But $K_c$ is not infinite, so there must be some unreacted $\ce{A}$ and $\ce{B}$. However, if there was even one atom of reactants, the reaction would quickly shift back to satisfy the equilibrium. Does the concept of equilibrium only apply to macro-sized reactions? Is there an extendable concept for the micro scale? Answer: Thermodynamics describes an average state for a large number of items. At any given time, however, the actual state will fluctuate around that average. So if you take a snapshot of the system at an instant, it would likely not be exactly as equilibrium would predict, yet over time that would be the average state. "The larger your sample size, the smaller the standard deviation" from the expected value. By increasing the number of samples over time, even a small number of molecules would behave as expected on average. However, the question is not unreasonable, in that it describes the problem of reconciling macroscopic behavior with quantum effects. Boltzmann explored this issue, and Planck introduced quanta to harmonize statistical thermodynamics with observed black-body radiation.
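The fluctuation picture can be illustrated with a toy Gillespie-style simulation (entirely my own sketch; the rate constants are made up, chosen only to satisfy $k_f/k_r = K$):

```python
import random

random.seed(0)
kf, kr = 1.0, 1.0 / 100_000   # chosen so K = kf / kr = 100000
nA = nB = 10                  # ten molecules of each reactant
nC = nD = 0

t = 0.0
weighted_A = 0.0
for _ in range(100_000):
    rate_f = kf * nA * nB     # propensity of A + B -> C + D
    rate_r = kr * nC * nD     # propensity of C + D -> A + B
    total = rate_f + rate_r
    dt = random.expovariate(total)   # holding time in the current state
    weighted_A += nA * dt
    t += dt
    if random.random() < rate_f / total:
        nA -= 1; nB -= 1; nC += 1; nD += 1
    else:
        nA += 1; nB += 1; nC -= 1; nD -= 1

avg_A = weighted_A / t
print(avg_A)   # time-averaged <A>: tiny, but not zero
```

The system spends almost all of its time fully converted to C and D, with rare, brief excursions back to one A molecule, so the time average of A is small but strictly positive, exactly the "tiny but nonzero unreacted A" the question asks about.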
{ "domain": "chemistry.stackexchange", "id": 11108, "tags": "physical-chemistry, reaction-mechanism, thermodynamics, equilibrium" }
Why does the number of training points needed to densely cover the space grow exponentially with the dimension?
Question: In this lecture (minute 42), the professor says that the number of training examples we need to densely cover the space of training vectors grows exponentially with the dimension of the space. So we need $4^2=16$ training data points if we're working on $2D$ space. I'd like to ask why this is true and how is it proved/achieved? The professor was talking before about K-Nearest Neighbors and he was using $L^{1}$ and $L^{2}$ metrics. I don't think these metrics induce a topology that makes a discrete set of points dense in the ambient space. Answer: First, let's try to build some intuition for what we mean when we say that we want to "densely cover" a $d$-dimensional space $\mathbb{R}^d$ of real numbers. For simplicity, let's assume that all values in all dimensions are restricted to lie in $[0, 1]$. Even with just a single dimension $d=1$, there are actually already infinitely many different possible values even in such a restricted $[0, 1]$ range. But generally we don't actually care about literally covering every single possible value. Generally, we expect that points in this $d$-dimensional space that are "close" to each other also "behave" similarly, that there's some level of "continuity". Hence, to get "sufficient" or "good" or "dense" coverage of the space, you can somewhat informally assume that every data point you have occupies some space around it. This is the intuition behind Lutz Lehmann's comment under your question: you can think of every point as being a $d$-dimensional cube occupying some volume of your $d$-dimensional space. Now, if you have a $d$-dimensional space of size $[0, 1]$ along every dimension, and you have little cubes that occupy a part of that space (for instance, cubes of size $0.1$ in every dimension), you will indeed find that the number of cubes you need to fill up your space scales exponentially in $d$. 
The basic idea is: if some number of cubes $K$ is sufficient to fill up the $d$-dimensional space, and if you then increase the dimensionality to $d+1$, you'll need $cK$ cubes to fill the new space, where $c$ is the number of slices along the new axis ($c = 10$ for cubes of side $0.1$). When you add a new dimension, the complete previous space becomes essentially just one "slice" of the new space. For dimensions $d = 1, 2, 3$, this is fairly easy to visualise. If you have $d=1$, your space is really just a line, or a line segment if you constrain the values to lie in $[0, 1]$. If you have a $[0, 1]$ line segment, and you have little cubes of length $0.1$, you'll need just ten of them to fill up the line. Now imagine that you add the second dimension. Suddenly your line becomes an entire plane, or a $10\times10$ square grid. The $10$ cubes are now only sufficient to fill up a single row, and you'll have to repeat this $10$ times over to fill up the entire $2$D space; you need $10^2 = 100$ cubes. Now imagine that you add the third dimension. What used to be a plane gets "pulled out" into an entire three-dimensional cube -- a large cube, which will require many little cubes to fill! The plane that we had previously is again just a flat slice in this larger $3$D space, and the entire strategy for filling up a plane will have to be repeated $10$ times over to fill up $10$ such slices of the $3$D space; this now requires $10^3 = 1000$ cubes. Past $3$ dimensions, the story continues in exactly the same way, but is a bit harder for us humans to visualise.
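The cube-counting argument above reduces to a one-liner:

```python
def cubes_needed(side=0.1, d=2):
    """Number of little d-cubes of the given side length needed to
    tile the unit cube [0, 1]^d: (1/side) per axis, raised to the d."""
    per_axis = round(1 / side)
    return per_axis ** d

# One extra dimension multiplies the count by the number of
# slices along the new axis (10 here):
assert cubes_needed(0.1, 1) == 10
assert cubes_needed(0.1, 2) == 100
assert cubes_needed(0.1, 3) == 1000
assert cubes_needed(0.1, 3) == 10 * cubes_needed(0.1, 2)
```

If each training point is taken to "cover" one such cube, the number of points required for dense coverage grows as $c^d$, which is the exponential blow-up the lecture refers to.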
{ "domain": "ai.stackexchange", "id": 2714, "tags": "classification, k-nearest-neighbors, curse-of-dimensionality" }
Can closed loops evade the spin-statistics theorem in 3 dimensions?
Question: The famous spin-statistics result asserts that there are only bosons and fermions, and that they have integer and half-integer spin respectively. In two-dimensional condensed matter systems, anyons are also possible. They avoid this result due to a topological obstruction. Suppose that "loops" were fundamental, rather than point-particles. In addition to normal relative motion, it would be possible for one loop to go through the hole of another, and a path of this sort would not be continuous with one that doesn't. Would this provide enough of a topological obstruction to be able to avoid the spin-statistics theorem? Answer: The answer to your question is yes, your intuition is 100% correct. It all boils down to the topology of the configuration space $\mathcal C$, in particular the first homotopy group $\pi_1(\mathcal C)$ (which is non-trivial in your example). See problem set 1, problem 3 from this course at Oxford. This exercise is precisely about loops in 3+1D! One has to argue that for any type of point-particle statistics in 2+1D (representations of the braid group) there exists a corresponding loop statistics in 3+1D. Note however that these loops will only lead to non-trivial statistics in 3+1D; in higher dimensions there will not be any topological obstruction. This is related to the fact that in higher dimensions, you can always untie knots. More generally you can think about many different ways of getting non-trivial statistics. You can give your objects more complicated internal structure than just point-particles (loops are just one example) or you can put your objects on topologically non-trivial manifolds. See for example this paper about so-called "projective ribbon permutation statistics", which is a way of having non-trivial statistics in higher dimensions but with "defects" that have some internal structure. EDIT: This is an answer to the question asked by Prathyush in the comments. Well, yes and no. 
If you are interested in more general statistics for point-particles, you have to go to 2+1 dimensions where you have anyons. Under an exchange of two (abelian) anyons, the wave function changes by a phase $e^{i\pi\alpha}$. Here $\alpha = 1$ corresponds to fermions, $\alpha=0$ corresponds to bosons, while for any phase $\alpha\in[0,1]$ you have any-ons. So in the sense of exchange statistics, anyons interpolate between fermions and bosons. There is however another approach to take. In a famous paper, Haldane suggests the so-called exclusion statistics, which defines particle statistics in terms of a generalized Pauli exclusion principle (as you suggest). A natural question is then, does the interpolation of anyons between fermions and bosons lead to an interpolation of exclusion statistics? Murthy and Shankar seem to have tried to answer this question, and they find the corresponding exclusion parameter for the $\alpha$ anyon (equation (16)). However, I don't know enough about exclusion statistics and the state of the field to give many details. But you can learn a lot from reading some of the papers which cite Haldane's paper.
{ "domain": "physics.stackexchange", "id": 5128, "tags": "quantum-mechanics, statistical-mechanics, quantum-spin, topology" }
Find the peak stock price for each company from CSV data
Question: During the hiring process, a company gave me this test: Q) Consider Share prices for a N number of companies given for each month since year 1990 in a CSV file. Format of the file is as below with first line as header. Year,Month,Company A, Company B,Company C, .............Company N 1990, Jan, 10, 15, 20, , ..........,50 1990, Feb, 10, 15, 20, , ..........,50 . . . . 2013, Sep, 50, 10, 15............500 a) List for each Company year and month in which the share price was highest. b) Submit a unit test with sample data to support your solution. They wanted me to not use any third party libraries so I did this: import csv csv_file = open('demo.csv', "rb") reader = csv.reader(csv_file) master_dic = {} final_list= [] rownum = 0 for row in reader: if rownum == 0: header = row else: for i in range(2,len(header)): """ # Here it will create a dictionary which till have a structure like this {'Company': {'1990': {'Mar': 18.0, 'Feb': 19.0, 'Aug': 19.0}, '1991': {'Mar': 10.0, 'Feb': 21.0, 'Aug': 21.0, 'Sep': 23.0, 'May': 26.0}}} """ if header[i] in master_dic: if row[0] in master_dic[header[i]]: master_dic[header[i]][row[0]][row[1]] =float(row[i]) else: master_dic[header[i]][row[0]] ={} master_dic[header[i]][row[0]][row[1]] =float(row[i]) else: master_dic[header[i]] = {} master_dic[header[i]][row[0]] = {} master_dic[header[i]][row[0]][row[1]] =float(row[i]) rownum += 1 # Here we will Iterate over the master_dic dictionary and find out the highest price # of the shares in every company in all the months of all the years. for company,items in master_dic.iteritems(): for year,items_1 in items.iteritems(): maxima = 0 maxima_month = '' temp_list = [] for months,share in items_1.iteritems(): if share > maxima: maxima = share maxima_month = months temp_list = [company,year,maxima_month,maxima] continue if share == maxima: # sometimes company may have same high value for more than one month in a year. 
if len(temp_list) >0: temp_list = [company,year,temp_list[2]+', '+months,maxima] final_list.append(temp_list) # appending the value of temperory list to the final list that would be displayed at the end of the program. print final_list csv_file.close() demo.csv: year month Company A Company B Company C Company D 1990 Jan 10 15 20 18 1990 Feb 11 14 21 21 1990 Mar 13 8 23 10 1990 April 12 22 19 9 1990 May 15 12 18 26 1990 June 18 13 13 19 1990 July 12 14 15 20 1990 Aug 12 14 16 21 1990 Sep 13 8 23 23 1990 Oct 12 22 19 19 1990 Nov 15 12 18 14 1990 Dec 18 13 13 16 1991 Jan 15 12 18 26 1991 Feb 18 13 13 19 1991 Mar 12 14 15 18 1991 April 12 14 16 21 1991 May 11 16 13 10 1991 June 14 17 11 9 1991 July 23 13 12 26 1991 Aug 23 21 10 19 1991 Sep 22 22 9 20 1991 Oct 24 20 42 19 1991 Nov 12 14 15 18 1991 Dec 15 12 18 13 1992 Jan 21 14 16 15 1992 Feb 10 13 26 16 1992 Mar 9 11 19 23 1992 April 23 12 18 19 1992 May 12 10 21 18 1992 June 17 9 10 13 1992 July 15 42 9 18 1992 Aug 16 9 26 13 1992 Sep 15 26 19 15 1992 Oct 18 19 20 16 1992 Nov 19 18 21 13 1992 Dec 20 21 23 11 1993 Jan 21 10 19 12 1993 Feb 13 9 14 18 1993 Mar 21 14 16 21 1993 April 10 13 26 10 1993 May 9 11 19 9 1993 June 23 12 18 26 1993 July 12 10 21 19 1993 Aug 17 9 10 20 1993 Sep 15 42 9 21 1993 Oct 16 9 26 23 1993 Nov 15 26 19 19 1993 Dec 18 19 20 14 I got this response back from them: Wrong output, No DS implementation Inline code style - No use of best practices Could you educate me on what I did wrong? What should I have done, and how to learn best practices? I thought I was right. I really want to learn best practices, and understand why I was rejected, so I can do better on my next interview. Answer: Meeting the specifications The specifications said to accept a CSV file, so your test data should be comma-delimited, not space-delimited. In practice, it's not a big deal, but if you're answering an interview question, don't deviate from the instructions unless you can justify it with a good reason. 
Your output looks like this: [['Company D', '1991', 'Jan, July', 26.0], ['Company D', '1990', 'May', 26.0], ['Company D', '1993', 'June', 26.0], ['Company D', '1992', 'Mar', 23.0], ['Company A', '1991', 'Oct', 24.0], ['Company A', '1990', 'June, Dec', 18.0], ['Company A', '1993', 'June', 23.0], ['Company A', '1992', 'April', 23.0], ['Company B', '1991', 'Sep', 22.0], ['Company B', '1990', 'April, Oct', 22.0], ['Company B', '1993', 'Sep', 42.0], ['Company B', '1992', 'July', 42.0], ['Company C', '1991', 'Oct', 42.0], ['Company C', '1990', 'Mar, Sep', 23.0], ['Company C', '1993', 'April, Oct', 26.0], ['Company C', '1992', 'Feb, Aug', 26.0]] I would expect that it should look something like this: Company A: 1991 Oct (24) Company B: 1992 July (42) Company C: 1991 Oct (42) Company D: 1990 May (26) So, you haven't actually solved the problem laid out for you. Based on the not-quite-correctly-formatted input that you chose for your test case, and the totally incorrect output, I think that would be a strong case for rejection. Impressions of the code You used the csv module instead of trying to parse CSV yourself. That's good. Ever since Python 2.5, it is almost always better to open your files using a with block, so that they will be closed for you automatically. You should open it in text mode, not binary mode. You haven't defined any functions or classes. Your code is just a bunch of free-floating instructions. Functions and classes help organize your code and your way of thinking. They force you to name each chunk of code according to its purpose, and to define what the inputs and outputs are for each chunk of code. You could use that kind of discipline. This looks complicated: master_dic[header[i]][row[0]][row[1]] =float(row[i]) Interviewers don't like to deal with complicated answers any more than you do, so they are unlikely to ask questions that require complicated code to solve. Don't give your interviewer a headache. 
You'll have to find a way to express yourself such that the interviewer will want to read your code. (If I, as an interviewer, saw a candidate produce headache-inducing code, I might ask, "Can you make it prettier?" I wouldn't spend too much time puzzling it out — it's just not worth it.) You've implemented the solution in two passes. It should be possible to do it in one. I would accept a two-pass solution if it were done to maintain an elegant abstraction, but I don't think you can claim that justification. In your first pass, you keep track of rownum. Why? You could just use csvreader.line_num Better yet, just fetch the first row using next(reader), then you can do for row in reader without ever having to think about encountering the header row ever again. Better still, use a DictReader, which interprets the first row as fieldnames. In the second pass, you used some questionable variable names. It's not clear what items_1 is supposed to contain. Also, you have a variable named temp_list. I believe that any variable with "temp" in its name is likely to be a sign of muddled thinking. Sample solution Here's what I came up with. 
import csv from collections import namedtuple class Peak (namedtuple('Peak', ['year', 'month', 'price'])): def __lt__(self, other): return self.price is None or self.price < other.price def __eq__(self, other): return self.price == other.price def __gt__(self, other): return other.price is None or self.price > other.price def max_stock_prices(f): csv_reader = csv.DictReader(f) companies = csv_reader.fieldnames[2:] # Discard year, month columns peaks = dict((c, Peak(None, None, None)) for c in companies) for row in csv_reader: year, month = row['year'], row['month'] current = dict((c, Peak(year, month, float(row[c]))) for c in companies) peaks = dict((c, max(peaks[c], current[c])) for c in companies) return peaks with open('demo.csv') as f: peaks = max_stock_prices(f) for company in sorted(peaks): print("%s: %s %s (%.f)" % (company, peaks[company].year, peaks[company].month, peaks[company].price)) Notable points: Use DictReader and namedtuple to avoid the master_dic[header[i]][row[0]][row[1]] =float(row[i]) headache mentioned above. Define the <, ==, and > comparison operators to let max(peakobj1, peakobj2) work. The goal of all that preparation work is to beautify max_stock_prices(). The amount of free-floating code is just four lines at the end, which is OK since you can see at a glance what it does. Caveat: If the same maximum is attained on several rows, the month chosen as the peak is arbitrary.
{ "domain": "codereview.stackexchange", "id": 8430, "tags": "python, interview-questions, csv" }
Finding the bandwidth of a band matrix
Question: I am designing an algorithm that solves a linear system using the QR factorization, and the matrices I am dealing with are sparse and very large ($6000 \times 6000$). In order to improve the efficiency of the algorithm, I am trying to exploit the sparsity of the matrix by finding its bandwidth, but I have to run through the matrix a lot of times to find it, and it is taking too long. The main idea I am using to find the bandwidth is: for each row, find the start(row) and end(row): these are the intervals in which the elements are different from $0$; to find start(row): iterate from the beginning of the row until the element is not $0$; to find end(row): iterate from the end of the row until the element is not $0$; The problem is that I am running through many unnecessary $0$'s, but I can not figure out how to avoid this and guarantee a solid result. Thanks. Answer: A sparse matrix is mostly made of zeros, so using a 2-dimensional array for all elements is an inefficient way to represent such data: most of the array will be zeroes, which is the reason for the increased time cost of finding the bandwidth in your case. If you have a matrix $M$ of size $n \times m$ then you'll be using a 2-dimensional array of size $n \times m$. However, it is not necessary to do so. A more efficient way is to represent only the nonzero elements using a 2-dimensional array with only three columns, e.g. ------------------------ R | C | V ------------------------ 0 | 0 | 1 ------------------------ 0 | 3 | 4 ------------------------ 1 | 5 | 11 ------------------------ . | . | . ------------------------ . | . | . ------------------------ . | . | . where R is the row of the nonzero element, C is its column and V is its value in the given matrix. In this way, if you have $t$ nonzero elements in the matrix then you need to access only $3t$ elements in the above 2-dimensional array. 
Now, for each nonzero element, compare its column $C$ with its row $R$: the largest offset $|R - C|$ you come across over all nonzeros is the bandwidth. I hope it makes sense.
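A sketch of such a single-pass scan over (row, column, value) triplets; here I compute the largest offsets $|R - C|$ below and above the diagonal separately, which is the usual definition of the lower/upper semi-bandwidths:

```python
def semi_bandwidths(triplets):
    """Largest offsets below and above the diagonal of a sparse
    matrix given as (row, col, value) triplets.  Touches only the
    t nonzero entries, never the zeros."""
    lower = max((r - c for r, c, v in triplets if v != 0), default=0)
    upper = max((c - r for r, c, v in triplets if v != 0), default=0)
    return max(lower, 0), max(upper, 0)

# Tridiagonal example: nonzeros only on offsets -1, 0, +1
triplets = [(0, 0, 1), (0, 1, 4), (1, 0, 2), (1, 1, 5),
            (1, 2, 11), (2, 1, 3), (2, 2, 7)]
assert semi_bandwidths(triplets) == (1, 1)
```

The full bandwidth (for QR purposes) is then recoverable from the two offsets, and the cost is proportional to the number of nonzeros rather than to $n \times m$.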
{ "domain": "cs.stackexchange", "id": 8984, "tags": "algorithms, linear-algebra, matrices, sparse-matrices" }
Correlation, Feature Importance in R
Question: If I have 150 features, with the output feature being nominal (multi-class), how can I verify which features play a key role in determining the class? Answer: For a given class, you can usually treat the problem as a binary classification (the given class versus all others), and do the same for your feature-importance calculations. However, we won't be able to help you much outside of a given problem and a given model, as the available feature-importance methods depend on the model used.
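As a rough sketch of the one-vs-rest idea (the scoring function below is a simple mean-separation score I'm using purely for illustration; real importances would come from whatever model you actually fit):

```python
from statistics import mean, stdev

def one_vs_rest_scores(X, y, target):
    """Toy one-vs-rest feature scoring for a multi-class problem:
    for each feature, |mean within target class - mean in the rest|,
    scaled by the feature's overall spread.  A stand-in for
    model-specific importances, which depend on the model used."""
    scores = []
    for j in range(len(X[0])):
        col = [row[j] for row in X]
        in_cls = [row[j] for row, label in zip(X, y) if label == target]
        out_cls = [row[j] for row, label in zip(X, y) if label != target]
        spread = stdev(col) or 1.0  # guard against zero-variance features
        scores.append(abs(mean(in_cls) - mean(out_cls)) / spread)
    return scores

# Feature 0 separates class "a" from the rest; feature 1 is noise-like.
X = [[5, 1], [6, 2], [1, 1], [0, 2], [1, 1], [0, 2]]
y = ["a", "a", "b", "b", "c", "c"]
scores = one_vs_rest_scores(X, y, "a")
assert scores[0] > scores[1]
```

Repeating this for each class of the nominal output gives a per-class ranking of the 150 features; the same one-vs-rest framing applies equally to importances extracted from, say, a tree ensemble.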
{ "domain": "datascience.stackexchange", "id": 6825, "tags": "machine-learning, r" }
Identify This Glowing Insect
Question: When I was walking in my garden at night I saw this small glowing insect, can you identify this insect? Location: Middle-East Palestine Size: Length about 1 cm This image in the dark (see the light of the insect) These images taken by flash. Answer: Appears to be a larviform female firefly (order Coleoptera; family Lampyridae). I cannot find a good source for listing/IDing Palestinian fireflies, so I unfortunately cannot give you a definitive answer. I do want to provide some of an answer and an example specimen to get you going on the path of accurately IDing your specimen. To get you started, your specimen appears similar to Photinus bromleyi (seen here) and below: Photinus are a group of North American fireflies, so not likely your species. According to firefly.org, the subfamily Luciolinae is found throughout Eurasia, while the genus Lampyris is a "wastebin taxon" used as a "catch all" for misfit fireflies that is found throughout the world. So I'd start looking in those groups. I'll dig around for a more Palestinian-oriented bug guide. Again, this is not meant to be an exact answer but just a starting point for you or another user who might have access to Palestinian resources. I'll update if I find something definitive.
{ "domain": "biology.stackexchange", "id": 9681, "tags": "species-identification, zoology, entomology" }
Merging word streams from files
Question: A follow-on, rags-to-riches implementation of The most efficient way to merge two lists in Java The original requirements are to: Identify the distinct values from two input files, and output the distinct values to an output file. There is no specification for the order of the output, only that each line should be unique in the results. Special consideration should be made for efficiency. I have implemented a more general specification: merge multiple input files (at least one) to an output file each line is treated as a line, not necessarily a "word". If the input files have just one word per line, then the output would be the same as the original specification. take the input files from the commandline (the first file is the output file). In my answer to the linked post I suggested that a Java 8 Streams implementation would be "nice". I have implemented that solution here. I am looking for suggestions on how to better utilize the new Java functionality, and any other suggestions you may have. import java.io.BufferedWriter; import java.io.IOException; import java.io.UncheckedIOException; import java.nio.file.Files; import java.nio.file.Path; import java.nio.file.Paths; import java.util.HashSet; import java.util.Set; import java.util.stream.Stream; @SuppressWarnings("javadoc") public class Linemerge { /* Wrap the IOException in order to make convenient Stream usage. 
*/ private static final void writeWord(BufferedWriter writer, String word) { try { writer.write(word); writer.newLine(); } catch (IOException e) { throw new UncheckedIOException(e); } } private static void merge(Path source, Set<String> seen, BufferedWriter writer) throws IOException { try (Stream<String> words = Files.lines(source)) { words.filter(seen::add).forEach(word -> writeWord(writer, word)); } } public static void main(String[] args) { if (args.length < 2) { throw new IllegalArgumentException("Need at least two file arguments: Destination Source {Source {Source {...}}}"); } Path dest = Paths.get(args[0]); try (BufferedWriter writer = Files.newBufferedWriter(dest)) { Set<String> seen = new HashSet<>(); for (int i = 1; i < args.length; i++) { Path source = Paths.get(args[i]); if (Files.isRegularFile(source) && Files.isReadable(source)) { System.out.println("Merging " + source); merge(source, seen, writer); } else { System.out.println("Unable to read (and ignoring) " + source); } } } catch (IOException e) { e.printStackTrace(); System.exit(1); } } } Answer: "Need at least two file arguments: Destination Source {Source {Source {...}}}" I think an easier way of documenting that, at least for *nix, is: "Need at least two file arguments: DESTINATION [SOURCE]..." You can also turn the Paths in your main() method into a Stream too: public class Linemerge { // ... // suggestion note: had to wrap IOException -> UncheckedIOException too private static void merge(Path source, Set<String> seen, BufferedWriter writer) { try (Stream<String> words = Files.lines(source)) { words.filter(seen::add).forEach(word -> writeWord(writer, word)); } catch (IOException e) { throw new UncheckedIOException(e); } } private static final Predicate<Path> FILTER = f -> Files.isRegularFile(f) && Files.isReadable(f); private static void checkPath(Path path) { System.out.println((FILTER.test(path) ? 
"Merging" : "Unable to read (and ignoring)") + " " + path); } public static void main(String[] args) { if (args.length < 2) { throw new IllegalArgumentException( "Need at least two file arguments: DESTINATION [SOURCE]..."); } try (BufferedWriter writer = Files.newBufferedWriter(Paths.get(args[0]))) { Set<String> seen = new HashSet<>(); Stream.of(args).skip(1).map(Paths::get).peek(Linemerge::checkPath) .filter(FILTER).forEach(f -> merge(f, seen, writer)); } catch (IOException e) { e.printStackTrace(); System.exit(1); } } }
{ "domain": "codereview.stackexchange", "id": 19204, "tags": "java, stream, rags-to-riches" }
Why does solar cell performance use short-circuit current per unit area and open-circuit voltage without reference to the cell area?
Question: Here on this slide on page 54: Only the short-circuit current ($J_{SC}$) is given with respect to area, whereas the voltage is given without reference to the area. One could calculate it, because the area is given. I just wondered, since I never heard about an open-circuit voltage density, but a lot about short-circuit current density. Answer: A solar cell generates a current by collecting photons over its surface area. But solar cells come in many different sizes, so to allow them to be compared with each other the current is normalised by the area. The open-circuit voltage does not depend on the area, but on the other properties of the diode. It does have a dependence on illumination intensity. For this reason IV curves are quoted at standardised conditions.
{ "domain": "physics.stackexchange", "id": 72732, "tags": "semiconductor-physics, solar-cells" }
Understanding the Group Leaders Optimization Algorithm
Question: Context: I have been trying to understand the genetic algorithm discussed in the paper Decomposition of unitary matrices for finding quantum circuits: Application to molecular Hamiltonians (Daskin & Kais, 2011) (PDF here) and Group Leaders Optimization Algorithm (Daskin & Kais, 2010). I'll try to summarize what I understood so far, and then state my queries. Let's consider the example of the Toffoli gate in section III-A in the first paper. We know from other sources such as this, that around 5 two-qubit quantum gates are needed to simulate the Toffoli gate. So we arbitrarily choose a set of gates like $\{V, Z, S, V^{\dagger}\}$. We restrict ourselves to a maximum of $5$ gates and allow ourselves to only use the gates from the gate set $\{V, Z, S, V^{\dagger}\}$. Now we generate $25$ groups of $15$ random strings like: 1 3 2 0.0; 2 3 1 0.0; 3 2 1 0.0; 4 3 2 0.0; 2 1 3 0.0 In the above string of numbers, the first number in each group of four is the index of the gate (i.e. $V = 1, Z = 2, S = 3, V^{\dagger} = 4$), the last numbers are the values of the angles in $[0,2\pi]$ and the middle integers are the target qubit and the control qubit respectively. There would be $374$ such other randomly generated strings. Our groups now look like this (in the image above) with $n=25$ and $p=15$. The fitness of each string is proportional to the trace fidelity $\mathcal{F} = \frac{1}{N}|\operatorname{Tr}(U_aU_t^{\dagger})|$ where $U_a$ is the unitary matrix representation corresponding to any string we generate and $U_t$ is the unitary matrix representation of the 3-qubit Toffoli gate. The group leader in any group is the one having the maximum value of $\mathcal{F}$. Once we have the groups we'll follow the algorithm: The Eq. (4) mentioned in the image is basically: $$\text{new string} [i] = r_1 \times \text{old string}[i] + r_2 \times \text{leader string}[i] + r_3 \times \text{random string}[i]$$ (where $1 \leq i \leq 20$) s.t. $r_1+r_2+r_3 = 1$. 
The $[i]$ represents the $i$-th number in the string, for example in 1 3 2 0.0; 2 3 1 0.0; 3 2 1 0.0; 4 3 2 0.0; 2 1 3 0.0, the $6$-th element is 3. In this context, we take $r_1 = 0.8$ and $r_2,r_3 = 0.2$. That is, in each iteration, all the $375$ strings get mutated following the rule: for each string in each group, the individual elements (numbers) in the string get modified following the Eq. (4). In addition to the mutation, in each iteration for each group of the population one-way-crossover (also called the parameter transfer) is done between a chosen random member from the group and a random member from a different random group. This operation is mainly replacing some random part of a member with the equivalent part of a random member from a different group. The amount of the transfer operation for each group is defined by a parameter called the transfer rate, here defined as $$\frac{4\times \text{max}_{\text{gates}}}{2} - 1$$ where the numerator is the number of variables forming a numeric string in the optimization. 
I didn't quite understand the meaning of "parameter transfer rate". They say that $4\times \text{max}_{\text{gates}} - 2$ is the number of variables forming a numeric string in the optimization. What is $\text{max}_{\text{gates}}$ in this context: $5$ or $20$? Also, what exactly do they mean by the portion I italicized (number of variables forming a numeric string in the optimization) ? How do we know when to terminate the program? Do we terminate it when any one of the group leaders cross a desired value of trace fidelity (say $0.99$)? Answer: I suggest looking at how a genetic algorithm works in a context of discrete variables to understand it. They provide a methodology but you can apply other mutation/crossover techniques. Briefly, in a simple optimization problem where the variables are discrete, we can solve heuristically with genetic algorithms (which belongs to the class evolutionary algorithms). We generate a population of candidates (randomly) and we change the candidates at each iteration to try to find a good solution minimizing/maximizing an objective function (called fitness). You can represent the candidates by a string of values (called chromosomes in general). If you input this string of values to the objective function, you are evaluating the candidate or you assigning a fitness. Crossover/mutation operations are meant to change candidates and hope for achieving our objective in a way related to what happens in genetic. The GLOA is just another genetic algorithms but with the difference of having different groups of population with a local optimum (leader as best candidate if you prefer) for each and of course a slightly different strategy for mutation/crossover. Usually, we have one group of candidates with one best candidate at each iteration. Now for your questions: 1. You can choose whatever set of gates you want (like your example of a set). This is also true for the maximum number of gate operations you want to restrict your decomposition. 
Those are just parameters for the algorithm. I would say this is completely arbitrary (not so much logic necessarily just heuristic) but maybe what they chose was more adapted to their example or setup of work. In practice, you would have to try many parameters. 2. You are kinda retaking the original explanations and especially the diagram so I think you are summing up well. 3. I suggest looking at Figure 2 which shows the pseudo-code of this part to understand it. This is similar to one-way crossover in genetic algorithms. If you look at their original algorithm (GLOA, 2010), they choose a number $t$ between $1$ and half of the number of total parameters (variables) plus one. In this case, the number of parameters/variables is the length of a string which is $4 \times \text{max}_{\text{gates}}$. For the Toffoli gate, it was $5$. For other examples, it could have been more. But in general, they recommend $20$ as a good maximum (you can imagine how hard the optimization is with strings of more than $80$ variables on a simple computer). For your visualization, look at the example string you give : 1 3 2 0.0; 2 3 1 0.0; 3 2 1 0.0; 4 3 2 0.0; 2 1 3 0.0 Here, a string represents a set of operations forming a circuit. We associate with it a fitness, which is related to the fidelity. It is like a candidate solution for our optimization problem. In that case, our $\text{max}_{\text{gates}}$ is $5$ because you see $5$ operations represented. Each operation is a gate of the set, the $2$ qubits it applies to and the angle if necessary, that is $4$ variables. In total, $5 \times 4 = 20$ variables for the problem. 1 3 2 0.0, in this case, means apply the $V$ gate on qubit $3$ controlled by qubit $2$ with an angle $0.0$ (note that with the $V$ gate there is no angle but say you were playing with like rotation gates, this becomes relevant). 4. This is also arbitrary depending on what you want. 
It can be a fixed number of iterations, or until you reach a threshold/convergence criterion.
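As a rough sketch of these ideas (hypothetical parameter choices, not the paper's actual code), the string encoding and one-way crossover described above might look like:

```python
import math
import random

MAX_GATES = 5                 # gate slots per candidate string (the Toffoli example)
N_PARAMS = 4 * MAX_GATES      # gate id, qubit a, qubit b, angle for each slot

def random_candidate(n_qubits=3, n_gate_types=4):
    """One candidate circuit: [gate, qubit_a, qubit_b, angle] repeated MAX_GATES times."""
    c = []
    for _ in range(MAX_GATES):
        a, b = random.sample(range(1, n_qubits + 1), 2)   # two distinct qubits
        c += [random.randint(1, n_gate_types), a, b, random.uniform(0.0, 2.0 * math.pi)]
    return c

def one_way_crossover(member, leader):
    """Copy t consecutive parameters from the group leader into a member,
    with t drawn between 1 and half the number of parameters plus one."""
    t = random.randint(1, N_PARAMS // 2 + 1)
    start = random.randint(0, N_PARAMS - t)
    child = list(member)
    child[start:start + t] = leader[start:start + t]
    return child
```

Evaluating such a string would mean building the corresponding circuit, computing its trace fidelity against the target unitary, and using that as the fitness.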
{ "domain": "quantumcomputing.stackexchange", "id": 410, "tags": "quantum-gate, circuit-construction, optimization" }
How many significant digits are appropriate when one or more digits are eliminated (during subtraction)?
Question: Say, for example, $50.0-49.6=0.4$. Does this result have $1$ significant figure, or $3$ (as in the data: $50.0$ and $49.6$)? Had it been $50.0-4.6$, it is understood that the answer is $45.4$, by the rules of significant figures. How do I apply them in the "$50.0-49.6$" case? This question deals with $50-49.6$; so a related but not the same question. Answer: There's an easy way to look at this. Let's say the value $50.0$ refers to $\pu{50.0 cm}$ measured accurately to $\pu{0.1 cm}$, and that $49.6$ refers to $\pu{49.6 cm}$ measured accurately to $\pu{0.1 cm}$. The difference would be, as you've said, $\pu{0.4 cm}$ measured to $\pu{0.1 cm}$ accuracy. So, yes, the answer has only one significant digit. Your initial measurements aren't more accurate than $0.1$, so adding two extra significant digits is incorrect. I hope this helps.
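The bookkeeping can be illustrated with exact decimal arithmetic (a trivial sketch):

```python
from decimal import Decimal

a = Decimal("50.0")   # measured to +/- 0.1 cm
b = Decimal("49.6")   # measured to +/- 0.1 cm
print(a - b)          # 0.4 -> one decimal place survives, hence one significant figure
```

Subtraction preserves the number of decimal places, not the number of significant figures, which is why the result drops from three significant figures to one.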
{ "domain": "chemistry.stackexchange", "id": 8222, "tags": "significant-figures" }
What is the time complexity of the following loop?
Question: function (n)
    i = 1
    s = 1
    while (s <= n)
        i = i + 1
        s = s * i
        print "*"
    end
Answer: Assuming all operations are done in constant time, this loop runs in $\Theta(n!^{-1})$ where $!^{-1}$ is the inverse factorial. Intuitively, the program will enter the loop $i$ times, $i$ being the smallest integer so that $i!\geq n$.
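A quick empirical check of this count (a sketch; after each pass of the loop body, s equals i!):

```python
def loop_iterations(n):
    """Run the loop above and count its iterations; s equals i! after each pass."""
    i, s, count = 1, 1, 0
    while s <= n:
        i += 1
        s *= i
        count += 1
    return count

print(loop_iterations(10))      # 3: the loop stops once s = 4! = 24 exceeds 10
print(loop_iterations(1000))    # 6: the loop stops once s = 7! = 5040 exceeds 1000
```

Since the factorial grows so fast, the iteration count grows extremely slowly in n, matching the inverse-factorial bound.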
{ "domain": "cs.stackexchange", "id": 10418, "tags": "time-complexity, asymptotics" }
Why doesn't copper affect human skin?
Question: It's been known for a long time that copper has antimicrobial properties, but if it is so potent, why does it seem to have no effect on human skin or really any large animal? Answer: The outermost layer of your skin consists mostly of dead cells, the Stratum corneum. Copper can't kill what's already dead.
{ "domain": "biology.stackexchange", "id": 10412, "tags": "biochemistry, human-physiology" }
Transformation from world frame to odom(robots) frame
Question: Hey, I am new to ROS and Stage. I am simulating a robot to move from one point to another autonomously. The problem is I don't know how to transform between the world frame (i.e., the X-Y frame you get as soon as you open ROS) and the odom frame (i.e., the frame of the robot). So here the stage world has one origin and the robot (odom) has another origin, and there is a translation and rotation between them. How can I transform a point from the stage world frame to the robot's odom frame? Originally posted by ruthvik on ROS Answers with karma: 1 on 2014-11-04 Post score: 0 Original comments Comment by Tom Moore on 2014-11-04: Just a note on frame ID: both map and odom are typically world frames, and base_link refers to the body frame of the robot. http://www.ros.org/reps/rep-0105.html Answer: I highly recommend going through the TF tutorials. Originally posted by dornhege with karma: 31395 on 2014-11-04 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by ruthvik on 2014-11-04: Hey, I have gone through them, but when I run the command rosrun tf view_frames I get the result below: odom-->base_footprint--->base_link---->base_laser_link. I don't want any of these transformations. I want a transformation from stage world points to the odom (robot's) frame. Should I create a new frame for the stage world? Comment by dornhege on 2014-11-04: If you want actual stage poses for debugging, stage ros publishes base_pose_ground_truth. You can produce a small program that sends this. Alternatively, if you configure odometry without noise, it should be the same as ground truth.
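Underneath, what tf computes is just a rigid-body transform between the two origins; a minimal 2-D NumPy sketch with a made-up pose (not actual tf API usage):

```python
import numpy as np

def make_transform(tx, ty, theta):
    """Homogeneous 2-D transform: rotate by theta, then translate by (tx, ty)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, tx],
                     [s,  c, ty],
                     [0.0, 0.0, 1.0]])

# Hypothetical pose of the robot's odom origin within the stage world frame
T_world_odom = make_transform(2.0, 1.0, np.pi / 2)

# A point known in world coordinates, re-expressed in the odom frame
p_world = np.array([3.0, 1.0, 1.0])              # homogeneous (x, y, 1)
p_odom = np.linalg.inv(T_world_odom) @ p_world
print(p_odom[:2])                                # ~ [0, -1] for this made-up pose
```

In ROS itself you would publish this world-to-odom transform (e.g. from base_pose_ground_truth) and let a tf listener do the inversion and chaining for you.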
{ "domain": "robotics.stackexchange", "id": 19951, "tags": "ros" }
Selecting users with biopsies in a given country for charting
Question: I have this monstrosity @nation_biopsy = User.all.where(country: @nation).group_by{ |u| [u.institute_type, u.biopsy] } .map{ |i, o| ["name" => i[0], "y" => User.where(id: o.map(&:id)) .sum("biopsy")]}.flatten.to_json What it is doing is finding all the users in my database that belong to a country (@nation) and grouping them according to their "institute_type" and then "biopsy" (both columns for users). Then it is trying to convert it to an array of JSON objects for a "Highcharts" script. It works, but as a newbie I am sure this is very database heavy. Is there a better way? Answer: Some notes on your code: User.all.where -> User.where. The moment you write group_by you are not using SQL anymore; performance will suffer. The line starting with sum is very confusing; the indentation makes you think it's the continuation of map but in fact it's part of its inner expression. Indentation should reflect the structure of an expression. map + flatten -> flat_map. |pair, something| and then pair[0]. You can de-structure arrays using the syntax |(k, v), something|. On a first refactor, I'd write: @nation_biopsy = User .where(country: @nation) .group_by { |u| [u.institute_type, u.biopsy] } .flat_map do |(institute_type, biopsy), users| ["name" => institute_type, "y" => User.where(id: users.map(&:id)).sum("biopsy")] end.to_json On a second refactor, I'd try to make it work with SQL.
Something like this: @nation_biopsy = User .where(country: @nation) .group(:institute_type, :biopsy) .select(:institute_type, "SUM(biopsy) AS biopsy_count") .flat_map { |group| ["name" => group.institute_type, "y" => group.biopsy_count] } .to_json Now, if you want to write something fancy use Arel, it allows you to write it without SQL fragments, which looks kind of cool: users = User.arel_table @nation_biopsy = User .where(users[:country].eq(@nation)) .group(users[:institute_type], users[:biopsy]) .select(users[:institute_type], users[:biopsy].sum.as("biopsy_count")) .flat_map { |group| ["name" => group.institute_type, "y" => group.biopsy_count] } .to_json
{ "domain": "codereview.stackexchange", "id": 17348, "tags": "ruby, database, active-record" }
When did the Moon become tidally locked to Earth?
Question: According to various theories the Moon was created around 4.5 billion years ago. Almost all of these theories suggest that it was rotating around its axis at that time, though. Currently, the Moon is tidally locked with Earth; despite some monthly "wiggling", its long-term rotation speed relative to Earth is flat zero. I wanted to ask when that stop occurred - relative to the Moon's age, how long was the period of the rotating Moon? The answer would shed some light on my other question - Why are most lunar maria on the visible side? - as Earth tends to catch or deflect many bodies heading for the Moon's surface from "our" direction - still, there is no erosion on the Moon, so craters once formed are extremely slow to vanish - if that period was relatively long, Earth's "protection" wouldn't explain the maria, as a rotating Moon would get 'cratered' uniformly all over its surface. Answer: "Protection" isn't the only effect of Earth. Here is a different POV: Earth may have accelerated impactors by gravity assist. A different approach is the thinner crust, as suggested for the near side, which may have allowed asteroids to penetrate the Moon's crust, such that lava could flow into the basins, or which may have favoured volcanism on the near side (see "Lunar interior" on this site). A third approach is the protective property of Earth preventing the near side from being covered with many new craters, hence leaving the maria visible.
According to Wikipedia the time to lock tidally is about $$t_{\mbox{lock}}=\frac{wa^6IQ}{3G{m_p}^2k_2R^5},$$ with $$I=0.4m_sR^2.$$ For the Moon $k_2/Q = 0.0011$, hence $$t_{\mbox{lock,Moon}}=121\frac{wa^6m_s}{G{m_p}^2R^3}.$$ With Earth's mass $m_p=5.97219\cdot 10^{24}\mbox{ kg}$, Moon's mass $m_s=7.3477\cdot 10^{22}\mbox{ kg}$, Moon's mean radius of $R=1737.10\mbox{ km}$, and $G=6.672\cdot 10^{-11}\frac{\mbox{Nm}^2}{\mbox{kg}^2}$ we get $$t_{\mbox{lock,Moon}}=121\frac{wa^67.3477\cdot 10^{22}\mbox{ kg}}{6.672\cdot 10^{-11}\frac{\mbox{Nm}^2}{\mbox{kg}^2}\cdot{(5.97219\cdot 10^{24}\mbox{ kg})}^2(1737.10\mbox{ km})^3},$$ or $$t_{\mbox{lock,Moon}}=7.12753\cdot 10^{-25}wa^6 \frac{\mbox{kg}}{\mbox{Nm}^2 \mbox{km}^3}.$$ Parameters are $w$, the spin rate in radians per second, and $a$, the semi-major axis of the Moon's orbit. If we take the current semi-major axis of the Moon's orbit of 384399 km and a maximum possible spin rate of $$w=v/(2\pi R)=\frac{2.38 \mbox{ km}/\mbox{s}}{2\pi\cdot 1737.10\mbox{ km}}=\frac{1}{4586 \mbox{ s}},$$ with $v=2.38 \mbox{ km}/\mbox{s}$ Moon's escape velocity and 1737.1 km Moon's radius, we get $$t_{\mbox{lock,Moon}}=7.12753\cdot 10^{-25}\cdot \frac{1}{4586 \mbox{ s}}\cdot (384399\mbox{ km})^6 \frac{\mbox{kg}}{\mbox{Nm}^2 \mbox{km}^3}\\ =501416\mbox{ s}^{-1}\cdot \mbox{ km}^6 \frac{\mbox{kg}}{\mbox{Nm}^2 \mbox{km}^3}= 5.01416\cdot 10^{14} \mbox{ s}.$$ That's about 16 million years, as an upper bound. If we assume a higher Love number for the early Moon, or slower initial rotation, the time may have been shorter. The time for getting locked is very sensitive to the distance Earth-Moon (6th power). Hence if tidal locking occurred closer to Earth, the time will have been shorter, too. That's likely, because the Moon is spiraling away from Earth.
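The computation above can be reproduced programmatically (a sketch using the same constants and the same maximum spin rate):

```python
import math

# Constants and parameters from the derivation above, converted to SI units
G   = 6.672e-11        # gravitational constant, N m^2 / kg^2
m_p = 5.97219e24       # Earth's mass, kg
m_s = 7.3477e22        # Moon's mass, kg
R   = 1.7371e6         # Moon's mean radius, m
a   = 3.84399e8        # current semi-major axis of the Moon's orbit, m
v_esc = 2.38e3         # Moon's escape velocity, m/s

# Maximum plausible initial spin rate, as defined above
w = v_esc / (2 * math.pi * R)   # ~ 1/4586 per second

# Locking timescale with k2/Q = 0.0011 folded into the prefactor 121
t_lock = 121 * w * a**6 * m_s / (G * m_p**2 * R**3)   # seconds

years = t_lock / 3.156e7
print(f"t_lock = {t_lock:.3e} s = {years/1e6:.1f} million years")   # ~5.0e14 s, ~16 Myr
```

Because of the sixth power of a, halving the Earth-Moon distance in this sketch shortens the timescale by a factor of 64.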
{ "domain": "astronomy.stackexchange", "id": 6388, "tags": "the-moon, tidal-forces" }
What is the purpose of the last bits in the 2-qubit-operation Toffoli implementation?
Question: What is the purpose of these quantum gates? They're appended to the circuit after the value has been computed. Answer: In[13]:= H = 1/Sqrt[2]*{{1, 1}, {1, -1}}; T = {{1, 0}, {0, Exp[I*Pi/4]}}; CNOT = {{1, 0, 0, 0}, {0, 1, 0, 0}, {0, 0, 0, 1}, {0, 0, 1, 0}}; KroneckerProduct[IdentityMatrix[2], T].CNOT. KroneckerProduct[T,ConjugateTranspose[T]].CNOT // MatrixForm which yields {{1, 0, 0, 0}, {0, 1, 0, 0}, {0, 0, 1, 0}, {0, 0, 0, I}}, i.e. $\mathrm{diag}(1, 1, 1, i)$. So that red boxed region just serves to act as multiplication by $i$ when the first two qubits are $1$. So what remains must be acting by $-i X$ on the third qubit when both the first two are $1$. In fact so it is, by assuming the first two qubits are $1$ and computing the resulting $2$ by $2$ operator on the third qubit. That is the $U$ for the summand $P_1 \otimes P_1 \otimes U$ when writing the full operator as $P_0 \otimes P_0 \otimes U_{00} + P_0 \otimes P_1 \otimes U_{01} \cdots$. Plug into Mathematica again: H.PauliMatrix[1].ConjugateTranspose[T]. PauliMatrix[1].T.PauliMatrix[1]. ConjugateTranspose[T].PauliMatrix[1].T.H // MatrixForm And it checks out. If you didn't have the boxed part it would be $-iX$ controlled on the first two, instead of $X$ controlled on the first two.
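The same product can be checked outside Mathematica; a NumPy transcription of the boxed-region computation (same matrix conventions as the snippet above):

```python
import numpy as np

T = np.diag([1, np.exp(1j * np.pi / 4)])       # T gate
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])
I2 = np.eye(2)

# (I (x) T) . CNOT . (T (x) T^dagger) . CNOT
M = np.kron(I2, T) @ CNOT @ np.kron(T, T.conj().T) @ CNOT
print(np.round(M, 10))   # diag(1, 1, 1, i): a phase i when both qubits are 1
```

Conjugating the diagonal matrix T (x) T^dagger by CNOT swaps its last two diagonal entries, which is why all the phases cancel except on the |11> component.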
{ "domain": "quantumcomputing.stackexchange", "id": 867, "tags": "quantum-gate, quantum-state" }
Why am I getting this warning while running the qiskit command IBMQ.load_account()?
Question: I'm getting this warning: C:\Users\XYZ\Anaconda3\lib\site-packages\qiskit\providers\models\backendconfiguration.py:337: UserWarning: `dt` and `dtm` now have units of seconds(s) rather than nanoseconds(ns). warnings.warn('`dt` and `dtm` now have units of seconds(s) rather ' Can you please help me understand this? Thanks. Answer: This is just a warning to let you know that the API has changed for the backend configurations such as PulseBackendConfiguration. The dt (qubit drive channel timestep) and dtm (measurement drive channel timestep) parameters were previously specified in nanoseconds, but they are now specified in seconds. If you're not doing anything so advanced as to directly use these parameters, you can safely ignore the warning.
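If the message clutters your output, Python's standard warnings machinery can silence exactly that warning (a sketch; the message argument is a regex matched against the quoted text):

```python
import warnings

# Hypothetical filter matching the quoted message, for when the unit change
# is understood and the warning just adds noise:
warnings.filterwarnings(
    "ignore",
    message=r"`dt` and `dtm` now have units of seconds",
    category=UserWarning,
)
```

Only do this once you have confirmed your code does not rely on the old nanosecond units.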
{ "domain": "quantumcomputing.stackexchange", "id": 1185, "tags": "qiskit" }
Find k-th smallest index
Question: in an MCMC procedure I am repeatedly (100,000+ times) running the following function to find the kth order statistic. I ran a profiler and noticed this function was most expensive. I am looking for a way to optimize this code for speed. The reason I think it can be done faster is because I do not want find the kth largest value, but the index of the kth largest value. However, now I'm effectively looping twice through the vector x, which seems inefficient to me. The code below is an example, in practice I use R and Rcpp to run the code. Any ideas? #include <iostream> #include "C:\c++\armadillo-7.950.1\include\armadillo" using namespace std; using namespace arma; int qSelectIdxC(arma::vec& x, const int k) { // ARGUMENTS // x: vector to find k-th largest element in // k: k-th statistic to look up // safety copy since nth_element modifies in place arma::vec y(x.memptr(), x.n_elem); // partially sorts y. std::nth_element(y.begin(), y.begin() + k, y.end()); // the k-th largest value const double kthValue = y(k); // find and return the index of the k-th largest value; int idxK = std::find(x.begin(), x.end(), kthValue) - x.begin(); return idxK; } int main() { vec test0 = regspace<vec>(0, 10); // 0, 1, ..., 10 int ans0 = qSelectIdxC(test0, 5); // returns 5 vec test1(100, fill::randu); int ans1 = qSelectIdxC(test1, 50); cout << "ans0" << ans0; cout << "ans1" << ans1; return 0; } Answer: To find the kth smallest/largest value you do not need to sort all values! Simply keep track of the k smallest/largest encountered values in a (sorted) buffer. This will take your run time from O(nlog(n)) to O(nlog(k)). And will also avoid the unnecessary copy and iteration, simply keep the index with k. 
The following pseudocode illustrates the algorithm template<typename It> It min_k(It first, It last, int k){ auto cmp_it_values = [](It lt, It rt){ return *lt < *rt;}; auto max_copy = std::min<long>(k, std::distance(first, last)); auto start_it = first; std::advance(start_it, max_copy); k++; // k == 0 has to return smallest one element. std::vector<It> k_smallest; k_smallest.reserve(k+1); for(auto it = first; it != start_it; ++it){ k_smallest.push_back(it); } std::sort(k_smallest.begin(), k_smallest.end(), cmp_it_values); for(auto it = start_it; it != last; ++it){ if(k_smallest.empty() || *it < *k_smallest.back()){ auto insertion_point = std::lower_bound(k_smallest.begin(), k_smallest.end(), it, cmp_it_values); k_smallest.insert(insertion_point, it); if(k_smallest.size() > k){ k_smallest.pop_back(); // Remove the largest value } } } return k_smallest.back(); // The iterator to the min(k, n) smallest value } The above has O(nklog(k)) worst case and O(n + log(k)) best case behaviour. You can improve the worst case by using a max heap for k_smallest instead of a vector. This would have the promised O(nlog(k)) run time. I leave that as an exercise to the reader ;) And you should read this using namespace std; is bad practice. You can see from the benchmark here: https://ideone.com/B81Hs4 that the code above is twice as fast as your original code for the given values (you need to test with typical values for your application to verify) and the heap version is even faster.
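The same bounded-buffer idea in Python (a sketch using heapq as a max-heap of size k+1 via negated values, returning the index directly):

```python
import heapq

def kth_smallest_index(x, k):
    """Index of the k-th smallest value of x (k is 0-based), without fully sorting x.

    Keeps the k+1 smallest (value, index) pairs; heapq is a min-heap, so values
    are negated to get max-heap behaviour. Runs in O(n log k).
    """
    heap = []                        # stores (-value, index)
    for i, v in enumerate(x):
        if len(heap) <= k:
            heapq.heappush(heap, (-v, i))
        elif -heap[0][0] > v:        # v beats the current k-th smallest
            heapq.heapreplace(heap, (-v, i))
    return heap[0][1]

print(kth_smallest_index(list(range(11)), 5))   # 5, matching ans0 in the question
```

Carrying the index alongside the value also avoids the second pass with std::find, so ties never return the wrong position.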
{ "domain": "codereview.stackexchange", "id": 26471, "tags": "c++, performance, statistics" }
Geometric Structure Factor for Monatomic FCC lattice
Question: I am trying to find the geometric structure factor and my work here is clearly wrong. I will put my wrong answer and then I will throw up the link to Wikipedia for the correct answer, because I cannot tell the difference. My attempt: FCC has a four-atom basis. If x,y,z vectors are taken to be along the edges of the conventional cube like in the picture below (all credit due to Ashcroft and Mermin): But the basis atoms are located at: Therefore the geometric structure factor is: $$F_{hkl} = \sum_{j=1}^{N} f_j e^{i \Delta \vec{k} \bullet \vec{r_j} }$$ But because the structure is monatomic, $f_j = f$ for all $j$. Also, $r_j$ denotes the location of the jth atom in the cell (i.e. $r_1 = \vec{0}, r_2 = \vec{ a_1} ,r_3 = \vec{ a_2}, r_4 = \vec{ a_3}$ in the picture). $\Delta \vec{k}$ is just some vector that is an element of the reciprocal space. That is, $$\Delta \vec{k} = h \vec{b_1} + k \vec{b_2} + l \vec{b_3}$$ but because $\vec{a_i} \bullet \vec{b_j} = 2 \pi \delta_{ij}$, $F_{hkl}$ reduces to: $$F_{hkl} = f [ e^0 + e^{i(h \vec{b_1} + k \vec{b_2} + l \vec{b_3}) \bullet \vec{a_1}} + e^{i(h \vec{b_1} + k \vec{b_2} + l \vec{b_3}) \bullet \vec{a_2}} + e^{i(h \vec{b_1} + k \vec{b_2} + l \vec{b_3}) \bullet \vec{a_3}} ] = f [ 1 + e^{i2 \pi h} + e^{i2 \pi k} + e^{i2 \pi l} ] = 4f$$ This answer is incorrect. The correct answer can be seen at this link to Wikipedia, scrolling down to fcc. From what I can tell, they must have defined their reciprocal lattice vector differently from me but I cannot see why. Answer: As you write, the geometric structure factor is $$F_{hkl} = \sum_{j=1}^N f_j e^{i \Delta \vec k \cdot \vec r_j}$$ You also correctly state that $\vec{r}_j$ denotes the location of the $j$-th atom in the cell. Now, you incorrectly say that $\vec{r}_2 = \vec{a}_1$, $\vec{r}_3 = \vec{a}_2$. But $\vec{a}_1$ doesn't tell you where in the cell an atom is; $\vec{a}_1$ tells you in what direction the next cell starts.
The $\vec{a}_i$ are called the lattice vectors and are just there to define the cubic structure. They can be chosen the same for the simple cubic, the bcc and the fcc lattice. What you need are the basis vectors. In a simple cubic lattice, you only have one basis vector, $\vec{0}$. In the fcc lattice, we have four atoms per unit cell, and therefore we have four basis vectors, and those are the vectors that you have written down above: $\vec{0}, a/2 (\vec{x}+ \vec{y})$ and so on, where $\vec{x}$ is the unit vector in the $x$-direction. In a cubic lattice, we have $\vec{a}_1 = a\vec{x}, \vec{a}_2 = a\vec{y}, \vec{a}_3 = a \vec{z}$, so your basis vectors read $\vec{r}_1 = \vec{0}, \vec{r}_2 = 1/2(\vec{a}_1 + \vec{a_2})$ etc. Now if you insert those into the exponential and use the fact that $\vec{a}_i \cdot \vec{b}_j = 2\pi\delta_{ij}$ you should get the correct result.
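A quick numerical check of this recipe (a sketch; basis vectors written in fractions of the cubic lattice constant a, so $\Delta \vec{k} \cdot \vec{r}_j = 2\pi(hx + ky + lz)$):

```python
import numpy as np

def fcc_structure_factor(h, k, l, f=1.0):
    """F_hkl for the monatomic fcc conventional cell."""
    # Basis atoms of the conventional fcc cell, in fractions of a
    basis = np.array([[0.0, 0.0, 0.0],
                      [0.5, 0.5, 0.0],
                      [0.5, 0.0, 0.5],
                      [0.0, 0.5, 0.5]])
    phases = 2.0 * np.pi * basis @ np.array([h, k, l])   # Delta k . r_j
    return f * np.exp(1j * phases).sum()

print(abs(fcc_structure_factor(1, 1, 1)))   # ~4: h, k, l all odd
print(abs(fcc_structure_factor(1, 0, 0)))   # ~0: mixed indices vanish
```

This reproduces the familiar fcc selection rule: F = 4f when h, k, l are all even or all odd, and F = 0 for mixed indices, instead of the 4f-for-everything result obtained with the lattice vectors.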
{ "domain": "physics.stackexchange", "id": 26405, "tags": "homework-and-exercises, solid-state-physics" }
Can Lego Mindstorms EV3 be run on Intel Atom 1.33 GHz CPU?
Question: The specifications at the Lego official site here say that a Dual Core 2.0GHz CPU or better is needed. Still, maybe a quad core at 1.8 GHz (like the Prestigio Multipad Visconte's Intel Atom quad core) would work smoothly enough? Has anyone tried to run Lego Mindstorms EV3 on this kind of CPU? Answer: Most probably yes, but performance will be slow because you have a very slow CPU. You can increase performance of the software by disabling visual effects in Windows and by closing background apps.
{ "domain": "robotics.stackexchange", "id": 1738, "tags": "mindstorms" }
Why cannot we find EtherCAT shields?
Question: I have a riddle about EtherCAT in mind and I'd like to have your point of view about it... With the rise of open platforms and hardware, and easily accessible embedded machines, it is now rather straightforward to install an RT system such as Xenomai on a Raspberry Pi, or a BeagleBone Black, or whatever cheap platform you prefer... Now, to connect these, an RT bus would be really cool (e.g. EtherCAT...). Hence my question: every hobbyist faces the same problems with RT communication, so is there any good reason why there does not exist any open EtherCAT shield for the Raspberry Pi or BeagleBoards? It would solve so many problems... Any thoughts on why? Any idea? Answer: There are some small EtherCAT peripherals, like Shadow's Ronex system, which gives you a great deal of I/O, although it's not in the 'shield' form factor.
{ "domain": "robotics.stackexchange", "id": 996, "tags": "communication" }
Momentum carried by an electromagnetic wave
Question: According to my physics book, Whenever a particle absorbs a certain amount of energy from an electromagnetic wave, it also simultaneously absorbs a corresponding momentum in the direction of propagation of the wave I'm not sure what the term "corresponding" refers to in this sentence. Answer: In general, from relativity we learn that $m^2 c^2 = E^2/c^2 - p^2$. If you substitute in $p=0$ you get Einstein’s famous $E=mc^2$. Since light is massless you can instead substitute in $m=0$ and get $E=pc$. So for a massless thing like light there is a definite corresponding amount of momentum associated with any given amount of energy.
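Concretely, the "corresponding" momentum is just the absorbed energy divided by $c$; a trivial sketch:

```python
c = 299792458.0   # speed of light, m/s (exact SI value)

def photon_momentum(E):
    """Momentum (kg m/s) corresponding to absorbed electromagnetic energy E (J)."""
    return E / c

# Absorbing 1 J of light transfers ~3.34e-9 kg m/s along the propagation direction
print(photon_momentum(1.0))
```

The tiny number explains why radiation pressure is negligible in everyday life yet measurable with sensitive apparatus.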
{ "domain": "physics.stackexchange", "id": 77442, "tags": "waves, electromagnetic-radiation, momentum" }
Moment of inertia meaning?
Question: Why is the formula for calculating the moment of inertia this integral $$ \int r^2 dm~? $$ I understand the way we derived this formula from looking at the distribution of kinetic energy of a rotating object. I believe classical mechanics should have intuitive sense, and that everything in every formula has a reason it is the way it is and not something else (I repeat, in classical mechanics). Inertia (rotational), as I understand it, tells us how "hard" it is to rotate an object around a perpendicular axis. I just don't see why $\int r^2 dm$ tells us that. For example the center of mass, a similar formula but it makes much more sense (to me) $$ \sum_{i=1}^n m_i r_i, $$ because the more mass the $i$-th component of mass has, the closer the center of mass will be to that particular point, and also the further away a given mass is from our origin the further the center of mass; this is all perfectly logical and every part of the formula makes complete sense, but in the case of the moment of inertia I simply cannot see why $\int r^2 dm$ tells me how hard it is to rotate. Answer: The way I prefer to think about it is from the point of view that an extended object is a distribution of mass, sort of like a probability distribution in statistics. A moment is a number (usually a sum or integral) that describes the shape of a distribution. If you have all of the moments, you can completely reconstruct the distribution. The $n^\textrm{th}$ moment of a distribution $\rho(x)$ is calculated by taking $$\int\rho(x)x^n dx$$ If we make the substitution $\rho(x)dx=dm$, where we take $\rho(x)$ to be the density of an object, we get that the zeroth moment is $\int dm=m$, the total mass of the object. The first moment of a distribution is its mean, so the first moment tells you, in weird units, the "average position" of an object (i.e. the position of its center of mass, scaled by the total mass).
The second moment of a distribution, $\int \rho(x)x^2 dx$, tells you essentially how "spread out" the distribution is about its mean. You'll notice that the second moment $\int \rho(x)x^2 dx=\int x^2 dm$ is the moment of inertia, which makes intuitive sense; the more "spread out" the object is, the harder it will be to rotate. So the moment of inertia got its name because it's the second moment of the mass distribution.
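The three moments can be computed side by side; a small sketch with a hypothetical discrete mass distribution:

```python
import numpy as np

# Hypothetical discrete mass distribution: point masses m_i (kg) at positions x_i (m)
m_i = np.array([1.0, 2.0, 3.0])
x_i = np.array([0.0, 1.0, 2.0])

M  = m_i.sum()                 # 0th moment: total mass
xc = (m_i * x_i).sum() / M     # 1st moment / mass: center of mass
I  = (m_i * x_i**2).sum()      # 2nd moment: moment of inertia about x = 0

print(M, xc, I)   # 6.0 kg, 1.33... m, 14.0 kg m^2
```

Moving the same masses farther from the axis leaves M and can leave xc unchanged while I grows quadratically, which is exactly the "spread" intuition above.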
{ "domain": "physics.stackexchange", "id": 46221, "tags": "newtonian-mechanics, rotational-dynamics, moment-of-inertia, rigid-body-dynamics" }
Boundary condition for solitons in 1+1 dimensions to have finite energy
Question: Suppose a classical field configuration of a real scalar field $\phi(x,t)$, in $1+1$ dimensions, has the energy $$E[\phi]=\int\limits_{-\infty}^{+\infty} dx\, \left[\frac{1}{2}\left(\frac{\partial\phi}{\partial t}\right)^2+\frac{1}{2}\left(\frac{\partial \phi}{\partial x} \right)^2+a\phi^2+b\phi^4 \right]$$ where the potential is given by $$V(\phi)=a\phi^2+b\phi^4.$$ For the energy $E[\phi]$ to be finite, one necessarily requires $\phi\rightarrow 0$, as $x\rightarrow\pm\infty$. But in addition, shouldn't we also necessarily require that both $\frac{\partial\phi}{\partial t}$ and $\frac{\partial \phi}{\partial x}$ must vanish as $x\rightarrow \pm\infty$? However, if I'm not mistaken, to find soliton solutions (which have finite energy), one imposes only the boundary condition $\phi\rightarrow 0$, as $x\rightarrow\pm\infty$. Does it mean that if this is satisfied, the vanishing boundary condition on the derivatives of the field is also automatically satisfied? EDIT: I know that for arbitrary functions $f(x)$ this is not true i.e., if $f(x)\rightarrow 0$ as $x\rightarrow \pm \infty$, $f^\prime(x)$ need not vanish as $x\rightarrow \pm \infty$. But $\phi(x,t)$ are not arbitrary functions in the sense that they are solutions of Euler-Lagrange equations. Therefore, it may be possible that the condition $\phi\rightarrow 0$, as $x\rightarrow\pm\infty$ is sufficient. Answer: In order for the energy to be finite it is necessary that the energy density asymptotically vanishes. Notice that this is achieved if the scalar field asymptotically approaches a constant value. Now recall that a soliton is not only a finite energy solution of the equations of motions but also a stable one. For topological solitons this requires that the vacuum manifold be degenerate. In your example this is only possible if $a<0$ (the potential is not positive definite!). Then as you can see the potential vanishes for $\phi=\pm\sqrt{-a/b}$.
When the scalar field interpolates between these two values, i.e., $$\phi(t,x\rightarrow -\infty)=-\sqrt{-a/b},\quad \phi(t,x\rightarrow +\infty)=+\sqrt{-a/b},$$ then we have a topologically stable solution. The plots below show these features for the kink in $1+1$. On the left, a degenerate potential. On the top right, a scalar field interpolating two different vacua, and on the bottom right the energy density as a function of the position. Note that this solution cannot be deformed to the vacuum solution (either $\phi(t,x)=-v$ or $\phi(t,x)=+v$) since that would cost an infinite amount of energy. That is what gives the stability of this solution.
{ "domain": "physics.stackexchange", "id": 36959, "tags": "field-theory, boundary-conditions, classical-field-theory, instantons, solitons" }
Effect of polarizer on light's intensity
Question: If you take unpolarized light and pass it through a polarizer, its intensity will be half of what it was (ideally). Following Malus' Law, I'd assume that if I pass this now-polarized light through another polarizer that is parallel, the intensity should stay the same (half of the original). However, the reading I was given suggests that the final result would be the multiplication of the two (a quarter). Is this just a confusing thing or am I wrong? Answer: Malus's Law ($I_\mathrm{out}=I_\mathrm{in}\cos^2(\theta)$) applies to the second ideal polarizer. If it's parallel to the first, $\theta=0$ and 100% of the light that hits it will get through. Since this is half of the original unpolarized light, half of the original light will get through the set (but it will be completely polarized now). Realistic polarizers aren't nearly this good.
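A small sketch of this bookkeeping (ideal polarizers: the first halves unpolarized light, and Malus's law applies between successive transmission axes):

```python
import numpy as np

def through_polarizers(I0, angles_deg):
    """Intensity of initially unpolarized light after a chain of ideal polarizers.

    angles_deg are absolute transmission-axis angles; the first polarizer passes
    half the unpolarized intensity, each later one applies Malus's law relative
    to the previous axis.
    """
    if not angles_deg:
        return I0
    I = I0 / 2.0
    for prev, cur in zip(angles_deg, angles_deg[1:]):
        I *= np.cos(np.radians(cur - prev)) ** 2
    return I

print(through_polarizers(1.0, [0, 0]))    # 0.5: a parallel pair passes half, not a quarter
print(through_polarizers(1.0, [0, 90]))   # ~0: crossed polarizers block everything
```

Note the factor of a quarter would only arise if the second polarizer were at 45 degrees (cos^2 45 = 1/2), not parallel.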
{ "domain": "physics.stackexchange", "id": 26790, "tags": "homework-and-exercises, optics, visible-light, polarization" }
Are uncertainties higher than measured values realistic?
Question: Whenever I measure a positive quantity (e.g. a volume) there is some uncertainty related to the measurement. The uncertainty will usually be quite low, e.g. lower than 10%, depending on the equipment. However, I have recently seen uncertainties (due to extrapolation) larger than the measurements, which seems counter-intuitive since the quantity is positive. So my questions are: Do uncertainties larger than the measurements make sense? Or would it be more sensible to "enforce" an uncertainty (cut-off) no higher than the measurement? (The word "measurement" might be poorly chosen in this context if we are including extrapolation.) Answer: Indeed, uncertainties that large don't really make sense. In reality, we have some probability distribution for the parameter we're describing. Uncertainty is an attempt to describe this distribution by two numbers, usually the mean and standard deviation. This is only useful if the uncertainties are small, because often you'll end up combining a lot of similarly-sized uncertainties together (e.g. by averaging) and the central limit theorem will kick in, making your final distribution very nearly Gaussian. The mean and standard deviation of this Gaussian only depend on the means and standard deviations of the pieces; all other information is irrelevant. But if you're looking at just a single quantity with a very broad distribution, knowing only the standard deviation is not useful. At that point it's probably better to give a 95% confidence interval instead. Of course the bottom of that interval would never be negative for a physical volume.
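The point can be illustrated numerically with a hypothetical broad positive distribution (a lognormal sketch): the standard deviation exceeds the mean, yet the 95% interval stays positive:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical broad positive quantity: lognormal with a large log-space sigma
samples = rng.lognormal(mean=0.0, sigma=2.0, size=200_000)

mu, sd = samples.mean(), samples.std()
lo, hi = np.percentile(samples, [2.5, 97.5])

print(f"mean = {mu:.2f}, sd = {sd:.2f}")       # sd comes out larger than the mean
print(f"95% interval = [{lo:.3f}, {hi:.1f}]")  # yet the interval is strictly positive
```

Reporting mean +/- sd here would naively suggest negative values, while the percentile interval correctly respects the positivity of the quantity.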
{ "domain": "physics.stackexchange", "id": 52979, "tags": "measurements, error-analysis" }
Pedestrian counting algorithm
Question: Currently I am developing a pedestrian counter project (using OpenCV+QT on Linux). My idea about the approach is: Capture frames Do background subtraction Clear noise (erode, dilate) Find blobs (cvBlobslib) - foreground objects For each blob, set ROI and search for pedestrians (LBP with detectMultiScale) in these blobs (for better performance) For each found pedestrian, do a nested upper-body search (not sure) (better reliability) If the same pedestrian is found on consecutive frames (3-4 frames maybe) - add that area to camshift and track - mark as pedestrian Exclude camshift-tracked areas from blob detection for next frames If a pedestrian crosses a line, increment the number I want to check if I am on the right track. Do you have any suggestions on how to improve my approach? If somebody worked on something similar, I would appreciate any useful tips, resources (and criticisms) on this problem. Answer: I can see a number of possible problems with this approach. I speak from my own experience here from improving a pedestrian counting system with a very similar approach, so I don't mean to be discouraging. On the contrary, I'd like to warn you of possible hurdles you may have to overcome in order to build an accurate and robust system. Firstly, background subtraction assumes that objects of interest will always be moving, and objects you aren't interested in counting will remain completely still. Surely enough, this may be the case in your scenario, but it still is a very limiting assumption. I've also found background subtraction to be very sensitive to changes in illumination (I agree with geometrikal). Be wary of making the assumption that one blob = one person, even if you think that your environment is well controlled.
It happened way too often that blobs corresponding to people went undetected because they weren't moving or they were too small, so they were deleted by erosion or by some thresholding criteria (and believe me, you don't want to get into the "tune thresholds until everything works" trap. It doesn't work ;) ). It can also happen that a single blob corresponds to two people walking together, or a single person carrying some sort of luggage. Or a dog. So don't make clever assumptions about blobs. Fortunately, since you do mention that you are using LBPs for person detection, I think you are on the right track of not making the mistakes in the paragraph above. I can't comment on the effectiveness of LBPs in particular, though. I've also read that HOGs (histograms of oriented gradients) are a state-of-the-art method in people detection, see Histograms of Oriented Gradients for Human Detection. My last gripe is related to using Camshift. It is based on color histograms, so, by itself, it works nicely when tracking a single object that is easy to distinguish by color, as long as the tracking window is big enough and there are no occlusions or abrupt changes. But as soon as you have to track multiple targets which may have very similar color descriptions and which will move very near to one another, you simply can't do without an algorithm that somehow allows you to maintain multiple hypotheses. This may be a particle filter or a framework such as MCMCDA (Markov Chain Monte Carlo Data Association, see Markov Chain Monte Carlo Data Association for Multiple-Target Tracking). My experience with using Meanshift alone when tracking multiple objects is everything that shouldn't happen with tracking: losing track, confusing targets, fixating on the background, etc.
Read a bit about multiple object tracking and data association problems, this might be at the heart of counting multiple people after all (I say "might be" because your goal is counting not tracking, so I don't completely discard the possibility of some clever approach that counts without tracking...) My last piece of advice is: there is only so much you can do with a given approach, and you will need fancier stuff to achieve better performance (so I disagree with user36624 in this regard). This may imply changing a piece of your algorithm by something more powerful, or changing the architecture altogether. Of course, you have to know which fancy stuff is really useful for you. There are publications that attempt to solve the problem in a principled way, while others simply come up with an algorithm for a given data set and expect you to train a classifier that isn't really suited to the problem at hand, while requiring you to adjust a few thresholds too. People counting is ongoing research, so don't expect things to come easily. Do make an effort to learn things that are slightly beyond your ability, and then do it again and again... I acknowledge that I haven't offered any solutions and instead have only pointed out flaws in your approach (which all come from my own experience). For inspiration, I recommend you read some recent research, for example Stable Multi-Target Tracking in Real-Time Surveillance Video. Good luck!
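Since the end goal is a count rather than a track, it may help to see how little the final step needs. Here is a minimal pure-Python sketch of the line-crossing logic (all names and numbers are illustrative; it assumes whatever tracker you settle on already yields one centroid per pedestrian per frame):

```python
# Line-crossing counter: given per-frame centroids of tracked pedestrians,
# count each track once when it crosses a horizontal counting line.
# The tracker (Camshift, a particle filter, ...) is assumed to supply the
# positions; everything below is illustrative.

def count_crossings(tracks, line_y):
    """tracks: dict track_id -> list of (x, y) centroids over time."""
    count = 0
    for positions in tracks.values():
        for (_, y0), (_, y1) in zip(positions, positions[1:]):
            if (y0 - line_y) * (y1 - line_y) < 0:  # sides of the line differ
                count += 1
                break  # count each pedestrian at most once
    return count

tracks = {
    1: [(10, 80), (12, 95), (15, 110)],   # crosses the line downward
    2: [(50, 130), (52, 120), (55, 90)],  # crosses the line upward
    3: [(90, 60), (91, 70), (92, 80)],    # never crosses
}
print(count_crossings(tracks, 100))  # 2
```

A real system would also want some hysteresis (a small dead zone around the line) so that centroid jitter doesn't double-count, but the core test is just the sign change above.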
{ "domain": "dsp.stackexchange", "id": 760, "tags": "computer-vision, opencv, object-recognition" }
Rotations in Quantum Mechanics
Question: I have a general question related to rotations of wave functions. I have never really come across this in any of the core QM books, and was curious to know this. Consider, then, a wave function that consists of an angular part and a spin part of a spin-1/2 particle given by, say: $$ |\psi \rangle = (z) ⊗ \begin{pmatrix} 1 \\ -i \end{pmatrix} . $$ What will happen if I rotate this state, say, around the y-axis by $\frac{\pi}{2}$ radians? My approach is this, which I am sure is quite naive and possibly wrong. I will greatly appreciate any help. My approach to this problem: Write the angular part in terms of spherical harmonics (not caring too much about normalization for now): $$ |\psi \rangle = | l = 1, m = 0 \rangle ⊗ \begin{pmatrix} 1 \\ -i \end{pmatrix} . $$ Now, when we rotate it, does the rotation matrix act independently on the angular part and the spin part? So, the angular states and spin states rotate like: \begin{align} R|\psi \rangle &= \sum_{m'} d_{m'm}^{l} \left(\frac{\pi}{2} \right) | l = 1, m' \rangle ⊗ d_{m'm}^{l} \left(\frac{\pi}{2} \right) \begin{pmatrix} 1 \\ -i \end{pmatrix} \\ R|\psi \rangle & = \left( \frac{-1}{\sqrt{2}} |l = 1, m = 1 \rangle + \frac{1}{\sqrt{2}} |l = 1, m = -1 \rangle \right) ⊗ \begin{pmatrix} \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} \end{pmatrix} \begin{pmatrix} 1 \\ -i \end{pmatrix} \\ R|\psi \rangle & = \left( \frac{-1}{\sqrt{2}} |l = 1, m = 1 \rangle + \frac{1}{\sqrt{2}} |l = 1, m = -1 \rangle \right) ⊗ \frac{1}{\sqrt{2}} \begin{pmatrix} 1 - i \\ 1 - i \end{pmatrix} \end{align} Hopefully the question is understood clearly. So, is the above calculation correct? Or is there something wrong with it? Answer: Here's a slightly different take, if only in formalism. Translation operators of various kinds can be constructed as the complex exponential of the conjugate observable operator.
For example, the complex exponential of the Hamiltonian (times $t$, divided by $i\hbar$) is the unitary time-translation operator. In the case of rotations, construct the angular momentum operator with respect to the desired axis of rotation: $$\hat{D}(\hat{n},\phi)=\exp\left(-i\phi\,\frac{\hat{n}\cdot\vec{J}}{\hbar}\right)$$ As I recall, they go into this in some detail in Bransden and Joachain; Griffiths leaves it as an exercise.
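As a quick sanity check for the spin-1/2 factor (an illustrative sketch, not part of the answer above: it uses the closed form $\exp(-i\phi\sigma_y/2)=\cos(\phi/2)I-i\sin(\phi/2)\sigma_y$, which follows from $\sigma_y^2=I$):

```python
import math

# Spin-1/2 rotation about y: exp(-i*phi*sigma_y/2).  Because sigma_y^2 = I,
# this equals cos(phi/2) I - i sin(phi/2) sigma_y, which is purely real.
def spin_half_rotation_y(phi):
    c, s = math.cos(phi / 2.0), math.sin(phi / 2.0)
    return [[c, -s],
            [s,  c]]

R = spin_half_rotation_y(math.pi / 2)
# Apply it to the spinor (1, -i) from the question:
chi = [1, -1j]
rotated = [R[0][0] * chi[0] + R[0][1] * chi[1],
           R[1][0] * chi[0] + R[1][1] * chi[1]]
print(R)        # [[0.707..., -0.707...], [0.707..., 0.707...]]
print(rotated)  # [(1+1j)/sqrt(2), (1-1j)/sqrt(2)] up to rounding
```

Note that $d^{1/2}(\pi/2)$ has a minus sign in the upper-right entry, unlike the all-positive matrix written in the question, which is consistent with the questioner's suspicion that the calculation was off.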
{ "domain": "physics.stackexchange", "id": 63292, "tags": "quantum-mechanics" }
Can we find the exponential radioactive decay formula from first principles?
Question: Can we find the exponential radioactive decay formula from first principles? It's always presented as an empirical result, rather than one you can get from first principles. I've looked around on the internet, but can't really find any information about how to calculate it from first principles. I've seen decay-rate calculations in Tong's QFT notes for toy models, but never an actual physical calculation, so I was wondering if it's possible, and if so, whether someone could link me to the result. Answer: If you want to be very nitpicky about it, the decay will not be exponential. The exponential approximation breaks down both at small times and at long times:
At small times, perturbation theory dictates that the amplitude of the decay channel will increase linearly with time, which means that the probability of decay is at small times only quadratic, and the survival probability is slightly rounded near $t=0$ before going down as $e^{-t/\tau}$. This should not be surprising, because the survival probability is time-reversal invariant and should therefore be an even function.
At very long times, there are bounds on how fast the bound-state amplitude can decay, which are essentially due to the fact that the hamiltonian is bounded from below, and which I demonstrate in detail below.
Both of these regimes are very hard to observe experimentally. At short times, you usually need very good time resolution and the ability to instantaneously prepare your system. At long times, you probably wouldn't need to go that far out, but it is typically very hard to get a good signal-to-noise ratio because the exponential decay has pretty much killed all your systems, so you need very large populations to really see this. However, both sorts of deviations can indeed be observed experimentally. At long times, the first observation is Violation of the Exponential-Decay Law at Long Times. C Rothe, SI Hintschich and AP Monkman. Phys. Rev. Lett.
96 163601 (2006); Durham University eprint. (To emphasize the difficulty of these observations, they had to observe an unstable system over 20 lifetimes to see the deviations from the exponential, by which time $\sim10^{-9}$ of the population remains.) For short times, the first observations are Experimental evidence for non-exponential decay in quantum tunnelling. SR Wilkinson et al. Nature 387 no. 6633 p.575 (1997). UT Austin eprint, which measured tunnelling of sodium atoms inside an optical lattice, and Observation of the Quantum Zeno and Anti-Zeno Effects in an Unstable System. MC Fischer, B Gutiérrez-Medina and MG Raizen. Phys. Rev. Lett. 87, 040402 (2001), UT Austin eprint (ps). To be clear, the survival probability of a metastable state is for all practical intents and purposes exponential. It's only with a careful experiment - with large populations over very long times, or with very fine temporal control - that you can observe these deviations. Consider a system initialized at $t=0$ in the state $|\psi(0)⟩=|\varphi⟩$ and left to evolve under a time-independent hamiltonian $H$. At time $t$, the survival amplitude is, by definition, $$ A(t)=⟨\varphi|\psi(t)⟩=⟨\varphi|e^{-iHt}|\varphi⟩ $$ and the survival probability is $P(t)=|A(t)|^2$. (Note, however, that this is a reasonable but loaded definition; for more details see this other answer of mine.) Suppose that $H$ has a complete eigenbasis $|E,a⟩$, where the extra index $a$ denotes the eigenvalues of a set $\alpha$ of operators that supplement $H$ to form a CSCO, so you can write the identity operator as $$1=\int\mathrm dE \,\mathrm da\,|E,a⟩⟨E,a|.$$ If you plug this into the expression for $A(t)$ you can easily bring it into the form $$ A(t)=\int \mathrm dE\, B(E)e^{-iEt},\quad\text{where}\quad B(E)=\int \mathrm da |⟨E,a|\varphi⟩|^2. 
$$ Here it's easy to see that $B(E)\geq0$ and $\int B(E)\mathrm dE=1$, so $B(E)$ needs to be pretty nicely behaved, and in particular it is in $L^1$ over the energy spectrum. This is where the energy spectrum comes in. In any actual physical theory, the spectrum of the hamiltonian needs to be bounded from below, so there is a minimal energy $E_\text{min}$, set to 0 for convenience, below which the spectrum has no support. This looks quite innocent, and it allows us to refine our expression for $A(t)$ into the harmless-looking $$ A(t)=\int_{0}^\infty \mathrm dE\, B(E)e^{-iEt}.\tag1 $$ As it turns out, this has now prevented the asymptotic decay $e^{-t/\tau}$ from happening. The reason for this is that in this form $A(t)$ is analytic in the lower half-plane. To see this, consider a complex time $t\in\mathbb C^-$, for which \begin{align} |A(t)| & =\left|\int_0^\infty B(E)e^{-iEt}\mathrm dE\right| \leq\int_0^\infty \left| B(E)e^{-iEt}\right|\mathrm dE =\int_{0}^\infty \left| B(E)\right|e^{+E \mathrm{Im}(t)}\mathrm dE \\ & \leq\int_{0}^\infty \left| B(E)\right|\mathrm dE=1. \end{align} as $\mathrm{Im}(t)<0$. This means that the integral $(1)$ exists for all $t$ for which $\mathrm{Im}(t)\leq 0$, and because of its form it means that it is analytic in $t$ in the interior of that region. This is nice, but it is also damning, because analytic functions can be very restricted in terms of how they can behave. In particular, $A(t)$ grows exponentially in the direction of increasing $\mathrm{Im}(t)$ and decays exponentially in the direction of decreasing $\mathrm{Im}(t)$. This means that its behaviour along $\mathrm{Re}(t)$ should in principle be something like oscillatory, but you can get away with something like a decay. What you cannot get away with, however, is exponential decay along both directions of $\mathrm{Re}(t)$ - it is simply no longer compatible with the demands of analyticity. 
The way to make this precise is to use something called the Paley-Wiener theorem which in this specific setting demands that $$ \int_{-\infty}^\infty \frac{\left|\ln|A(t)|\right|}{1+t^2}dt<\infty. $$ That is, of course, a wonky integral if anyone ever saw one, but you can see that if $A(t)\sim e^{-|t|/\tau}$ for large times $|t|$ ($A(t)$ must be time-reversal symmetric), then the integral on the left (only just) diverges. There's more one can say about why this happens, but for me the bottom line is: analyticity demands some restrictions on how fast $A(t)$ can decay along the real axis, and when you do the calculation this turns out to be it. (For those wondering: yes, this bound is saturated. The place to start digging is the Beurling-Malliavin theorem, but I can't promise it won't be painful.) For more details on the proofs and the intuition behind this stuff, see my MathOverflow question The Paley-Wiener theorem and exponential decay and Alexandre Eremenko's answer there, as well as the paper L. Fonda, G. C. Ghirardi and A. Rimini. Decay theory of unstable quantum systems. Rep. Prog. Phys. 41, pp. 587-631 (1978). §3.1 and 3.2. from which most of this stuff was taken.
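To see the long-time deviation numerically, here is an illustrative sketch (all choices are assumptions for the demo: a Lorentzian $B(E)$ with $E_0=5$, $\Gamma=1$, truncated at the spectral bottom $E=0$ as in Eq. (1), and a simple midpoint sum for the integral):

```python
import cmath
import math

# Survival probability P(t) = |A(t)|^2 with A(t) = integral of B(E) e^{-iEt}
# over [0, inf), for a Lorentzian B(E) (center E0, width Gamma) truncated at
# the spectral bottom E = 0.  The truncation is what produces the
# slower-than-exponential long-time tail.  All values are demo choices.
E0, Gamma = 5.0, 1.0
dE, Emax = 0.01, 200.0
Es = [(k + 0.5) * dE for k in range(int(Emax / dE))]
B = [(Gamma / (2 * math.pi)) / ((E - E0) ** 2 + Gamma ** 2 / 4) for E in Es]
norm = sum(B) * dE
B = [b / norm for b in B]  # enforce integral of B(E) = 1, so P(0) = 1

def survival(t):
    A = sum(b * cmath.exp(-1j * E * t) for b, E in zip(B, Es)) * dE
    return abs(A) ** 2

p0, p2, p40 = survival(0.0), survival(2.0), survival(40.0)
print(p0)                          # 1.0: the state starts fully "alive"
print(p2, math.exp(-Gamma * 2))    # still essentially exponential here
print(p40, math.exp(-Gamma * 40))  # tail sits far above e^{-t/tau}
```

At $t=2$ the survival probability still tracks $e^{-\Gamma t}$, while at $t=40$ it is many orders of magnitude above it: the power-law tail produced by the spectral edge at $E=0$.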
{ "domain": "physics.stackexchange", "id": 69244, "tags": "radioactivity" }
Capacitance of three plates
Question: I'm sorry if this has been asked before, but here it goes. Consider three parallel plane conducting plates (they can be infinite) and suppose the middle plate has some thickness $\delta$ and the other plates are at distances $d_1$ and $d_2$ from the middle plate. Now, consider two situations: (1) the middle plate has negative charge $-Q$ and the two other plates have positive charge $+Q$; (2) the middle plate has no charge and the two other plates have charge $+Q$ and $-Q$. If the medium between the plates is vacuum, then $E = \frac{\sigma}{\epsilon_0}$ is the electric field between the plates (in the vacuum, that is), where $\sigma$ is the surface charge density of one of the $+Q$ plates. Suppose I want to calculate the capacitance $C$ of the system. In the first case, should I consider the system to be composed of two capacitors in parallel, so that $$C = C_1 + C_2 = \frac{Q}{Ed_1} + \frac{Q}{Ed_2}?$$ And in the second case, should I consider it as two capacitors in series, such that $$C = \left(\frac{1}{C_1} + \frac{1}{C_2} \right)^{-1}$$ where $C_i = \frac{Q}{Ed_i}$, $i=1,2$? In the second case I'm pretty sure it's correct, because it can be seen as a single two-plate capacitor, and it matches the value $$C = \frac{Q}{E\cdot(d_1+d_2)}.$$ In the first I'm not so sure. Thanks. Answer: Consider the three-terminal device that is your stacked capacitor:

A ----=============================================
      (dielectric medium ɛ)
      =============================================---- B
      (dielectric medium ɛ)
C ----=============================================

All three plates have the same area A. It's pretty clear what happens when A is left "floating". (Defining floating: before operation we ground A so that $Q_A = 0$, then we disconnect it so that $A$ is not connected to any particular other component.)
When we float A and measure the capacitance between B and C, or when we float C and measure the capacitance between A and B, we get an electric field $E = Q / (\epsilon A)$ and capacitance $C = \epsilon A / d$ where $A$ is the area of these plates, $Q$ is the charge on one terminal, and $d$ is the distance between them. The key observation here is that the capacitor has a very large area compared to its width, so the electric field outside of the capacitor tends to 0, so neither A nor C really "matters" when it's floating in that 0 electric field. It is somewhat harder to think about what happens when we float B and then measure the capacitance between A and C. The electric field needs to be 0 inside B, because it is a perfect conductor and any electric field will cause current to flow. At the same time, the overall charge is 0 and the situation is still "capacitor-like" ($A$ is much much larger than $2d$ if it's much much larger than $d$) so the electric field outside of the parallel plates should tend to 0. Since the charge on the plate directly creates a discontinuity in the electric field, we have to come to this conclusion: the field in the dielectric between B and C is the same when we put $+Q$ on A, $-Q$ on C, as when we floated A, putting $+Q$ on B and $-Q$ on C. It has to be, because it's the same jump discontinuity from the same starting point ($E = 0$ outside the stack). The same must also be true between A and B. The field must be $Q / (\epsilon A)$ in both dielectrics. The condition that $E = 0$ within the middle plate means that we induce a surface charge of $+Q$ on the BC side of B, and a surface charge of $-Q$ on the AB side of B. When you include those surface charges, it "looks exactly like" two capacitors in series, and you expect half of the capacitance. And that's exactly what happens if you ignore B, too! 
If you ignore B, then you've got a constant field of $E$ over twice the distance, so $V = 2 E d$ for the same $Q$, so it takes twice as much voltage to get the same charge on each plate. So while B is doing "something", it's actually doing the nothingest something it can do. So you're right to intuit that it should just work like a single two-plate capacitor, if you ignore the thickness of the B plate in the calculation of how thick the capacitor is. Now that we understand this, here comes situation 1. For situation 1, the easiest way to get an analogous result is to connect A and C with a wire, so they are at the same voltage, each plate holding charge $Q/2$ while the B-plate holds charge $-Q$. Then you are correct to intuit that this just looks like two capacitors in parallel. What happens? Well, the field is still 0 outside the system. The charge $Q/2$ therefore means that we have half the electric field inside the AB and BC dielectrics, which means that the same charge requires only half the voltage, so the capacitance doubles. Now what if, as you say, we put charge +Q on plate A, +Q on plate C, and -Q on plate B? Well, we have a problem: the overall charge is no longer 0. Under the same "parallel plate" assumption that $A$ is much much larger than $d$ we find the fields by the principle of superposition:

E = + Q/(2 A ɛ)
A ----=============================================
E = - Q/(2 A ɛ)
      =============================================---- B
E = + Q/(2 A ɛ)
C ----=============================================
E = - Q/(2 A ɛ)

Now, we can't even define capacitance unless we choose two points to measure a voltage between. Suppose you want the points A and C: the voltage between these plates is 0, and the capacitance is infinite. Actually, since this voltage is 0, we wouldn't change the system fundamentally by connecting A to C. So then you can consider the voltage between A and B, and you get the same result as before, twice the capacitance as if C weren't there.
The surplus charge on A and C, while it superimposes on the electric fields, doesn't affect the capacitances involved.
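A quick numerical cross-check of the two cases (all values here are arbitrary illustrative choices):

```python
# Numerical cross-check of the two cases, with arbitrary illustrative values.
eps = 8.854e-12           # vacuum permittivity, F/m
area = 1.0e-2             # plate area, m^2
d1, d2 = 1.0e-3, 2.0e-3   # the two gaps, m

C1 = eps * area / d1
C2 = eps * area / d2

# Case 2 (B floating): two gap capacitors in series...
C_series = 1.0 / (1.0 / C1 + 1.0 / C2)
# ...which equals a single capacitor with the total gap d1 + d2:
C_single = eps * area / (d1 + d2)
# Case 1 (A and C tied together): the two gaps act in parallel:
C_parallel = C1 + C2

print(C_series, C_single)  # agree to rounding: B "does the nothingest something"
print(C_parallel)          # larger: the parallel combination
```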
{ "domain": "physics.stackexchange", "id": 22918, "tags": "homework-and-exercises, electrostatics, capacitance, gauss-law" }
Virtual images - Work of a brain or work of a lens?
Question: I am just a high school student trying to self-study, so please excuse me if this question sounds silly to you. Many people tell me that virtual images are formed when two rays that are diverging appear to come from a point, therefore our brain thinks that the light is coming from an object even though there is no object there. However, I think that a virtual image is formed because two diverging rays converge at a point when made to go through the crystalline lens in our eye to form a real image on our retina. I think that our brain has nothing to do with the formation of virtual images. What is actually going on here? Answer: Your situation is no different from the standard ray diagram. Some fraction of the outgoing diverging light goes through our eye's lens and gets converged onto the back of the retina (for a healthy eye). Our brain doesn't account for light being manipulated, so it traces each light ray straight back to a point of coherence without considering that any bend or twist ever happened. So even though a virtual image isn't exactly where the brain thinks it is, this is just how our brains are built to resolve light, and we can't change that, because this wiring is inherent and fundamental. Even though the light from an object having gone through the lens in our eye converges, the optical sensors in the retina convert the light into electric impulses; it's these impulses that are algorithmically reconstructed back into an image in the brain, and the brain has evolved to translate impulses in ways that neither magnify nor diminish the formed image, so we see things in the exact size and position they have in reality. You stop seeing things correctly when you put on weird glasses because the brain hasn't evolved a mechanism to translate light that has passed through your weird glasses. If a newborn wore such glasses his whole life, then the day he decided to take them off, the real view would tend to appear warped too. That is simply down to how nature has wired us. Bottom line: a virtual image is the result of the brain interpreting light as a straight, unbendable thing rather than otherwise.
{ "domain": "physics.stackexchange", "id": 56598, "tags": "optics, visible-light, lenses, vision, perception" }
Derivation of generators of Lorentz group for spinor representation
Question: I want to prove $$S^{\mu \nu}=\frac{i}{4}[\gamma^\mu,\gamma^\nu].$$ I started from $$[\gamma^\mu,S^{\alpha\beta}]=(J^{\alpha\beta})^\mu_\nu \gamma^\nu.$$ Substituting the value of $(J^{\alpha\beta})^\mu_\nu$, $$(J^{\alpha\beta})^\mu_\nu\gamma^\nu=i(\eta^{\alpha\mu}\delta^\beta_\nu-\eta^{\beta\mu}\delta^\alpha_\nu)\gamma^\nu,$$ we get $$\gamma^\mu S^{\alpha\beta}-S^{\alpha\beta}\gamma^\mu=i(\eta^{\alpha\mu}\gamma^\beta-\eta^{\beta\mu}\gamma^\alpha).$$ What's the next step? Also tell me if there is any other decent method. Note: I am using the metric $\mathrm{diag}(1,-1,-1,-1)$. Answer: We do not have to guess the structure of $S^{\mu\nu}$. You are really close; just replace $$2\eta^{\mu\nu}= \{\gamma^\mu,\gamma^\nu\}.$$ Rearranging the gammas on the left side, you will automatically obtain the desired structure by comparing both sides: $$\gamma^\mu S^{\alpha\beta}-S^{\alpha\beta}\gamma^\mu=i(\eta^{\alpha\mu}\gamma^\beta-\eta^{\beta\mu}\gamma^\alpha)$$ $$\gamma^\mu S^{\alpha\beta}-S^{\alpha\beta}\gamma^\mu=\frac{i}{2}( \{\gamma^\alpha,\gamma^\mu\}\gamma^\beta- \{\gamma^\beta,\gamma^\mu\}\gamma^\alpha)$$ There might be a difference of some constant factor; fix that yourself.
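For readers who like to verify such identities by brute force, here is an illustrative numerical check (Dirac representation, metric $\mathrm{diag}(1,-1,-1,-1)$; all helper names are made up for this sketch) that $S^{\mu\nu}=\frac{i}{4}[\gamma^\mu,\gamma^\nu]$ indeed satisfies $[\gamma^\mu,S^{\alpha\beta}]=i(\eta^{\alpha\mu}\gamma^\beta-\eta^{\beta\mu}\gamma^\alpha)$:

```python
# Dirac matrices as 4x4 nested lists (Dirac representation), metric
# eta = diag(1, -1, -1, -1).  We check [gamma^mu, S^{ab}] against
# i (eta^{a mu} gamma^b - eta^{b mu} gamma^a) entry by entry.
I2, Z2 = [[1, 0], [0, 1]], [[0, 0], [0, 0]]
sx = [[0, 1], [1, 0]]
sy = [[0, -1j], [1j, 0]]
sz = [[1, 0], [0, -1]]

def neg(m):
    return [[-x for x in row] for row in m]

def block(a, b, c, d):
    """4x4 matrix assembled from 2x2 blocks [[a, b], [c, d]]."""
    return [ra + rb for ra, rb in zip(a, b)] + [rc + rd for rc, rd in zip(c, d)]

g = [block(I2, Z2, Z2, neg(I2))] + [block(Z2, s, neg(s), Z2) for s in (sx, sy, sz)]
eta = [1, -1, -1, -1]

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def lincomb(c1, A, c2, B):
    return [[c1 * A[i][j] + c2 * B[i][j] for j in range(4)] for i in range(4)]

def comm(A, B):
    return lincomb(1, mul(A, B), -1, mul(B, A))

def S(mu, nu):
    return [[0.25j * x for x in row] for row in comm(g[mu], g[nu])]

err = 0.0
for mu in range(4):
    for a in range(4):
        for b in range(4):
            lhs = comm(g[mu], S(a, b))
            rhs = lincomb(1j * eta[a] * (mu == a), g[b],
                          -1j * eta[b] * (mu == b), g[a])
            err = max(err, max(abs(lhs[i][j] - rhs[i][j])
                               for i in range(4) for j in range(4)))
print(err)  # 0.0: the commutation relation holds exactly
```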
{ "domain": "physics.stackexchange", "id": 40814, "tags": "homework-and-exercises, special-relativity, lie-algebra, lorentz-symmetry, dirac-matrices" }
Clebsch-Gordan coefficients for J bigger than 5/2
Question: I am supposed to expand $|4,2;\frac{3}{2},\frac{1}{2}\rangle$ as a sum of $|j,\frac{5}{2}\rangle$, getting the coefficients from a table. I can find them easily for $j=\frac{5}{2}$ and $j=\frac{7}{2}$, but I don't know where to get values for $j=\frac{9}{2},\frac{11}{2}$. Do you know of any such source? Answer: You can use Wolfram|Alpha. The function ClebschGordan[$\{j_1, m_1\}, \{j_2, m_2\}, \{j, m\}$] gives the Clebsch-Gordan coefficient for the decomposition of $|j,m\rangle$ in terms of $|j_1,m_1\rangle|j_2,m_2\rangle$. For instance: link.
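If you'd rather compute than look up, any coefficient follows from the Racah closed-form sum. Below is an illustrative pure-Python sketch (function names are mine, not from any library; it assumes the usual Condon-Shortley phase convention):

```python
from math import factorial, sqrt

def f(x):
    """Factorial of a combination of j's and m's; such combinations are
    integers whenever the coefficient is nonzero.  None flags a negative."""
    n = round(x)
    return factorial(n) if n >= 0 else None

def clebsch_gordan(j1, m1, j2, m2, j, m):
    """<j1 m1; j2 m2 | j m> via the Racah formula, Condon-Shortley phase."""
    if round(2 * (m1 + m2 - m)) != 0 or round(2 * (j1 + j2 + j)) % 2 == 1:
        return 0.0
    need = [f(j1 + j2 - j), f(j1 - j2 + j), f(-j1 + j2 + j),
            f(j + m), f(j - m), f(j1 - m1), f(j1 + m1), f(j2 - m2), f(j2 + m2)]
    if None in need:
        return 0.0  # triangle rule or |m| > j violated
    pre = sqrt((2 * j + 1) * need[0] * need[1] * need[2] / f(j1 + j2 + j + 1))
    pre *= sqrt(need[3] * need[4] * need[5] * need[6] * need[7] * need[8])
    total = 0.0
    for k in range(round(min(j1 + j2 - j, j1 - m1, j2 + m2)) + 1):
        den = [f(k), f(j1 + j2 - j - k), f(j1 - m1 - k), f(j2 + m2 - k),
               f(j - j2 + m1 + k), f(j - j1 - m2 + k)]
        if None in den:
            continue  # this term vanishes
        prod = 1
        for d in den:
            prod *= d
        total += (-1) ** k / prod
    return pre * total

print(clebsch_gordan(0.5, 0.5, 0.5, -0.5, 1, 0))  # 0.7071... = 1/sqrt(2)
print(clebsch_gordan(0.5, 0.5, 0.5, 0.5, 1, 1))   # 1.0
print(clebsch_gordan(4, 2, 1.5, 0.5, 5.5, 2.5))   # a j = 11/2 case like the question's
```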
{ "domain": "physics.stackexchange", "id": 46717, "tags": "quantum-mechanics, homework-and-exercises, angular-momentum, hilbert-space, representation-theory" }
A different way of reducing subset sum to partition
Question: For brevity, let $s(D) = \sum_{d\in D} d$ denote the sum of the elements in $D$. Given a set $A = \{a_1, \dots, a_n\}$ of positive integers, and a target value $K$, the subset sum problem is to determine if there exists a subset $A' \subseteq A$ such that $s(A') = K$. Given a set $B = \{b_1, \dots, b_m\}$ of positive integers, the partition problem is to determine if there exists a subset $B' \subseteq B$ such that $s(B') = s(B \setminus B')$. Theorem. Subset sum $\propto$ partition. The proof found in most references, and the one given in the relevant CS SE question, performs the reduction from subset sum to partition by letting $S = s(A)$ and taking $B = A \cup \{S + K, 2S - K\}$ in the reduction. I came up with the following alternative approach. Proof. Given an instance of subset sum, take $S = s(A)$ and construct an instance of the partition problem with $B = A \cup \{S, 2K\}$. Suppose that $B$ admits a partition $B' \subseteq B$. Because $2K + S > s(A)$, we must have either $2K \in B'$ and $S \in B \setminus B'$ or vice-versa (if $S$ and $2K$ were on the same side, that side would sum to more than $s(A)$, which is the most the other side could hold). Assume (without loss of generality) that the former is the case. Now, let $A' = A \cap (B \setminus B')$ denote the elements of $A$ that are not in $B'$. We know that $$\begin{align} &&s(B') &= s(B \setminus B') \\ \iff\quad&& 2K + s(A\setminus A') &= S + s(A') \\ \iff\quad&& 2K + s(A\setminus A') + s(A') &= S + s(A') + s(A') \\ \iff\quad&& 2K + S &= S + 2s(A') \\ \iff\quad&& K &= s(A') \\ \end{align}$$ Conversely, if $A$ admits a subset having $s(A') = K$, then it is easy to see that $B' = \{2K\}\cup (A\setminus A')$ is a valid partition for $B$. Is this proof correct? Answer: Yes, but perhaps more cumbersome than necessary, and less readable than it could be. Here's an alternative approach: Let $S = \{e_1, ..., e_n\} , W \in \mathbb{N}$ be the instance for subset sum and let $$U = S \cup \left\{ \sum S , 2W \right\}$$ be the instance for partition.
Since, as you noted, $\sum S + 2W > \sum S$, we can assume that if $A$ and $B$ partition $U$ into equal sums, then $\sum S \in A$ and $2W \notin A$. Now, $$ \sum A = \sum B = \sum U /2 = \frac{\sum S + \sum S + 2W}{2} = \sum S + W.$$ But since $\sum S \in A$, it follows that $\sum A - \sum S = W$, so the elements of $A$ other than $\sum S$ form precisely the subset we are looking for in subset sum. You also need to prove that a yes instance for subset sum translates to a yes instance for partition, but that is straightforward.
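A brute-force cross-check of this reduction on small instances (illustrative only; the exponential enumeration is just for testing, not part of the reduction):

```python
from itertools import combinations

def has_subset_sum(A, K):
    """Brute force: does any subset of A sum to K?"""
    return any(sum(c) == K
               for r in range(len(A) + 1)
               for c in combinations(A, r))

def has_partition(B):
    """Brute force: can B be split into two halves of equal sum?"""
    total = sum(B)
    return total % 2 == 0 and has_subset_sum(B, total // 2)

def reduce_to_partition(A, K):
    """The reduction from the question: B = A together with {S, 2K}."""
    return list(A) + [sum(A), 2 * K]

# The reduction preserves yes/no answers on every target for a sample set.
A = [3, 5, 8, 13]
for K in range(1, sum(A) + 1):
    assert has_subset_sum(A, K) == has_partition(reduce_to_partition(A, K))
print("reduction agrees for every target K on", A)
```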
{ "domain": "cs.stackexchange", "id": 19318, "tags": "complexity-theory, polynomial-time-reductions" }
What is the most efficient algorithm and data structure for maintaining connected component information on a dynamic graph?
Question: Say I have an undirected finite sparse graph, and need to be able to run the following queries efficiently:
$IsConnected(N_1, N_2)$ - returns $T$ if there is a path between $N_1$ and $N_2$, otherwise $F$
$ConnectedNodes(N)$ - returns the set of nodes which are reachable from $N$
This is easily done by pre-computing the connected components of the graph. Both queries can run in $O(1)$ time. If I also need to be able to add edges arbitrarily - $AddEdge(N_1, N_2)$ - then I can store the components in a disjoint-set data structure. Whenever an edge is added, if it connects two nodes in different components, I would merge those components. This adds $O(1)$ cost to $AddEdge$ and $O(InverseAckermann(|Nodes|))$ cost to $IsConnected$ and $ConnectedNodes$ (which might as well be $O(1)$). If I also need to be able to remove edges arbitrarily, what is the best data structure to handle this situation? Is one known? To summarize, it should support the following operations efficiently:
$IsConnected(N_1, N_2)$ - returns $T$ if there is a path between $N_1$ and $N_2$, otherwise $F$.
$ConnectedNodes(N)$ - returns the set of nodes which are reachable from $N$.
$AddEdge(N_1, N_2)$ - adds an edge between two nodes. Note that $N_1$, $N_2$ or both might not have existed before.
$RemoveEdge(N_1, N_2)$ - removes an existing edge between two nodes.
(I am interested in this from the perspective of game development - this problem seems to occur in quite a few situations. Maybe the player can build power lines and we need to know whether a generator is connected to a building. Maybe the player can lock and unlock doors, and we need to know whether an enemy can reach the player. But it's a very general problem, so I've phrased it as such.) Answer: This problem is known as dynamic connectivity, and it is an active area of research in the theoretical computer science community. Some important problems are still open here.
To get the terminology clear, you ask for fully-dynamic connectivity, since you want to add and delete edges. There is a result of Holm, de Lichtenberg and Thorup (J.ACM 2001) that achieves $O(\log^2 n)$ update time and $O(\log n / \log\log n)$ query time. From my understanding it seems to be implementable. Simply speaking, the data structure maintains a hierarchy of spanning trees - and dynamic connectivity in trees is easier to handle. I can recommend Erik D. Demaine's notes for a good explanation; see here for a video. Erik's notes also contain pointers to other relevant results. As a note: all these results are theoretical results. These data structures might not provide ConnectedNodes queries per se, but it's easy to achieve this: just maintain the graph itself as an additional data structure (say, as a doubly connected edge list) and do a depth-first search to get the nodes that can be reached from a certain node.
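For contrast with the fully-dynamic machinery above, the add-only setting sketched in the question really is just union-find; a minimal illustrative sketch (all names are mine):

```python
class DisjointSet:
    """Union-find (path halving + union by size): handles the add-only
    case with near-constant amortized IsConnected / AddEdge."""
    def __init__(self):
        self.parent, self.size = {}, {}

    def find(self, x):
        if x not in self.parent:
            self.parent[x], self.size[x] = x, 1
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def add_edge(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return
        if self.size[ra] < self.size[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra
        self.size[ra] += self.size[rb]

    def is_connected(self, a, b):
        return self.find(a) == self.find(b)

    def connected_nodes(self, a):
        root = self.find(a)
        return {x for x in self.parent if self.find(x) == root}

ds = DisjointSet()
ds.add_edge("generator", "pylon")
ds.add_edge("pylon", "factory")
print(ds.is_connected("generator", "factory"))  # True
print(ds.is_connected("generator", "house"))    # False
```

Supporting RemoveEdge is exactly what this structure cannot do, which is why the fully-dynamic results above are needed.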
{ "domain": "cs.stackexchange", "id": 3684, "tags": "algorithms, graphs, data-structures" }
NFA/DFA for $ L= \{a^n b^n a | n\ge0\}$
Question: I have made two DFA's for $ L= \{a^n b^n a \mid n\ge0\}$. The first one has several states. The second one also accepts the empty string. Can somebody please guide me to the correct one? Zulfi. Answer: Since finite automata can't count matching sequences, I don't think either of your answers can be correct. You can take a look here (pumping lemma) to see why $L= \{a^nb^n \mid n \ge 0 \}$ is not regular, and thus why you can't write an FSM for it. Also a good example here shows the proof. On the other hand, it would be different if you mean $a^*b^*a$, which can be written as an FSM.
{ "domain": "cs.stackexchange", "id": 15105, "tags": "finite-automata" }
Running a sox spectrogram on a bandpass filtered wav file
Question: Let's apply a 400-2000 Hz bandpass filter to a respiratory .wav file: sox audio1.wav audio1.bandpass.wav sinc -t 10 400-2000 Now I'd like to generate a spectrogram that takes advantage of the reduced bandwidth to still generate a full-size image. But how to do that? Notice the result of running spectrogram on the bandpass file: sox audio1.bandpass.wav -n spectrogram -r -o audio1.bandpass.png -m That's clearly a big waste of image real estate. What step(s) am I missing here to use the full image size to focus on the already-limited frequency band? Update From the accepted answer: works great. Add a step after the initial bandpass: sox audio1.bandpass.wav -r 4000 audio1.bandpass1.wav Then: sox audio1.bandpass1.wav -n spectrogram -o audio1.bandpass.png -m Answer: sox is doing exactly what you tell it to do – it plots a spectrogram of the bandwidth you can represent with your sampling rate. The fact that most of your picture shows no energy just shows your filter is working, and that's exactly what one would want to see! However, as you can see, your signal is solidly oversampled: you can downsample it to a much lower sampling rate after filtering! (And downsampling by a factor $N$ here really is just throwing away $N-1$ samples, keeping one, throwing away $N-1$ samples..., since your filter is an excellent anti-aliasing filter.)
{ "domain": "dsp.stackexchange", "id": 8719, "tags": "spectrogram, bandpass" }
Undirected graph with 12 edges and 6 vertices
Question: For school we have to make an assignment, and part of the assignment is this question: Describe an undirected graph that has 12 edges and at least 6 vertices. 6 of the vertices have to have degree exactly 3; all other vertices have to have degree less than 3. Use as few vertices as possible. The best solution I came up with is the following one. Here the number in the circles is the degree of that vertex. Now I was wondering if there is a better solution; if so, can somebody explain it to me? I do not need a better answer, just a push in the right direction - if needed. Answer: You have 12 edges, so the sum of the vertex degrees is 24. The 6 degree-3 vertices take away 18 of that. Thus the best you can hope for is 3 additional vertices of degree 2, so you have found the optimal solution.
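The bound is achievable even by a connected graph. Here is an illustrative check of one explicit 9-vertex construction (a 6-cycle on the degree-3 vertices with three degree-2 vertices spliced in as bridges; the construction is mine, not necessarily the asker's figure):

```python
from collections import Counter

# Six degree-3 vertices (0..5) on a 6-cycle; three degree-2 vertices (6..8)
# spliced in as bridges.  Connected, 12 edges, degrees as required.
cycle = [(i, (i + 1) % 6) for i in range(6)]                # each of 0..5: degree 2
bridges = [(0, 6), (6, 1), (2, 7), (7, 3), (4, 8), (8, 5)]  # +1 for each of 0..5

edges = cycle + bridges
deg = Counter()
for u, v in edges:
    deg[u] += 1
    deg[v] += 1

print(len(edges))            # 12
print(sorted(deg.values()))  # [2, 2, 2, 3, 3, 3, 3, 3, 3]
```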
{ "domain": "cs.stackexchange", "id": 2581, "tags": "graphs, discrete-mathematics" }
Are research papers hard to read?
Question: This question may not belong here, but I couldn't find a better place to ask (it was closed on SO). I find research papers on computer science hard to understand. Of course the subjects are complicated. But after I understand a paper, I can usually explain it to someone in simpler terms and make them understand. If somebody else tells me what is done in that research, I understand too. I think the best example I can give is this: I tried to understand the SIFT paper for a long time, then I found a tutorial while googling, and in a couple of hours I was ready to implement the algorithm. Understanding the algorithm from the paper itself would have taken me a couple of days, I think. My question is: is it only me who finds research papers this hard to understand? If not, how do you deal with it? What are your techniques? Can you give tips? Answer: Unfortunately, research conferences generally do not place a premium on writing for readability. In fact, sometimes it seems the opposite is true: papers that explain their results carefully and readably, in a way that makes them easy to understand, are downgraded in the conference reviewing process because they are "too easy", while papers that could be simplified but haven't been are thought to be deep and rated highly because of it. So, if you rephrase your question to add another word - is it not just you who finds some research papers unnecessarily hard to read - then no, it is not. If you can find a survey paper on the same subject, that may be better, both because the point of a survey is to be readable and because the process of re-developing ideas while writing a survey often leads to simplifications.
As for strategies to read papers that you find hard, one of them that I sometimes use is the following: read the introduction to find out what problem they're trying to solve and some of the basic ideas of the solution, then stop reading and think about how you might try to use those ideas to solve the problem, and then go back and compare what you thought they might be doing to what they're actually doing. That way it may become clearer which parts of the paper are just technical but not difficult detail, and which other parts contain the key ideas needed to get through the difficult parts.
{ "domain": "cstheory.stackexchange", "id": 5004, "tags": "soft-question, research-practice" }
Calculate the percentage of each unique phylogenetic tree in a BEAST output
Question: I have a nexus formatted BEAST output containing 20,000 phylogenetic trees of seven taxa. Is there any way to get the percentage of each unique phylogenetic tree contained in this output? I already made an unsuccessful attempt with R. Answer: I finally managed to do it in R. Here is my code:
install.packages('devtools')
library(devtools)
install_github('santiagosnchez/rBt')
library(rBt)
beast_output <- read.annot.beast('beast_output.trees')
beast_output_rooted <- root.multiPhylo(beast_output, c('taxon_A', 'taxon_B'))
unique_topologies <- unique.multiPhylo(beast_output_rooted)
count <- function(item, list) {
  total = 0
  for (i in 1:length(list)) {
    if (all.equal.phylo(item, list[[i]], use.edge.length = FALSE)) {
      total = total + 1
    }
  }
  return(total)
}
result <- data.frame(unique_topology = rep(0, length(unique_topologies)),
                     count = rep(0, length(unique_topologies)))
for (i in 1:length(unique_topologies)) {
  result[i, ] <- c(i, count(unique_topologies[[i]], beast_output_rooted))
}
result$percentage <- ((result$count / length(beast_output_rooted)) * 100)
{ "domain": "bioinformatics.stackexchange", "id": 868, "tags": "r, sequence-alignment, phylogenetics, phylogeny, beast" }
Standard notation for the language of the universal Turing machine?
Question: The universal Turing machine $U_{TM}$ is a TM that takes in as input an encoding of a TM and a string, then runs the TM on the string and does whatever the simulated TM does. The language of the universal TM is the set of all encodings of a TM/string pair as a string for which the TM accepts the string. I have seen several different names for the language of the universal Turing machine. Michael Sipser refers to it as $A_{TM}$ (the acceptance language for Turing machines), while Hopcroft, Ullman, and Motwani call it $L_u$ (the universal language). Is there a standardized term for the language of the universal Turing machine? I would understand if there might be many "universal Turing machines" that vary in their encoding schemes, so if the answer is "no, there is no general term for this" that would be good to know. I'm mostly asking because I'm teaching an introductory course in the theory of computation and have been using the term $A_{TM}$ for this without knowing if there is a better term to use. Thanks! Answer: No, there is no general term for this.
{ "domain": "cs.stackexchange", "id": 862, "tags": "computability, terminology, turing-machines" }
ROS_PACKAGE_PATH problems
Question: Hello friends. I downloaded a folder with some rosbuild type packages. I'm trying to build the packages, but for that I needed to use the following command:
export ROS_PACKAGE_PATH=/home/paulo/Workspace
I then ran rosmake name_of_package name_of_package and built all of my downloaded packages. They all build without any errors, which is good. But the problem is the following. When I try to use roslaunch, ROS tells me it can't locate roslaunch. It gives the following error:
Invalid <param> tag: Cannot load command parameter [rosversion]: command [rosversion roslaunch] returned with code [1]. Param xml is <param command="rosversion roslaunch" name="rosversion"/>
If I do source /opt/ros/hydro/setup.bash I can use roslaunch again (or it finds the roslaunch package if I use rospack find roslaunch), but then it forgets about the packages I built. Does anyone know how I can solve this problem, i.e. how I can build the packages and keep roslaunch and the rest of ROS working? Thanks so much! p.s.: If you are curious about the folder with packages I downloaded, it's the following: https://www.youtube.com/watch?v=VlBQLbmc03g Originally posted by End-Effector on ROS Answers with karma: 162 on 2013-11-18 Post score: 0 Answer: ROS_PACKAGE_PATH should include your ROS workspace as well as the ROS system install path. Add this to your .bashrc file so that it executes on every terminal launch:
source /opt/ros/hydro/setup.bash
export ROS_PACKAGE_PATH=/home/paulo/ros_workspace:$ROS_PACKAGE_PATH
Originally posted by AbhishekMehta with karma: 226 on 2013-11-18 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 16196, "tags": "ros-package-path" }
package index lost converting to unary stack
Question: I am converting camera1394 to a unary stack for Fuerte. The indexer has seen this change, and camera1394 is no longer included as a camera_drivers package. (It is now a dependency, instead.) I changed the package header for camera1394 to <<StackHeader(camera1394)>>, but that header information is not displayed in the wiki. Instead it says: "Cannot load information on camera1394, which means that it is not yet in our index. Please see this page for information on how to submit your repository to our index." The source trunk moved from: https://code.ros.org/svn/ros-pkg/stacks/camera_drivers/trunk/camera1394 to: https://code.ros.org/svn/ros-pkg/stacks/camera1394/trunk Since both are contained in ros-pkg/stacks, I expected the new camera1394 stack to be indexed automatically. How can I make that happen? Originally posted by joq on ROS Answers with karma: 25443 on 2011-11-03 Post score: 1 Answer: We don't index all of ros-pkg/stacks, since that contains all the branches as well as the trunks. We only index specific subdirectories. We'll need to update the indexer to add camera1394 explicitly. Originally posted by tfoote with karma: 58457 on 2011-11-03 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by joq on 2011-11-03: I should have realized that. Thanks!
{ "domain": "robotics.stackexchange", "id": 7172, "tags": "ros, camera1394, documentation, camera-drivers" }
Acceleration of an object after overcoming static friction
Question: Let us imagine a rough horizontal surface with static frictional coefficient $\mu$ and an object of mass $m$ sitting on that surface. The maximum static frictional force that can be applied is $\mu N$, where $N$ is the normal reaction. So, if we apply a force $F<\mu N$, then the object won't move. However, as soon as we apply a force equal to the limiting friction, the object will be on the verge of moving. Now let's apply a large force $F>\mu N$. What will be the acceleration of the object now? Will it be $\frac{F}{m}$ or $\frac{F-\mu N}{m}$? I am confused about the latter one, since the friction will stop working after $F$ is applied, so shouldn't the acceleration be only $\frac{F}{m}$? Answer: The kinetic friction force is generally less than the static friction force, i.e., in general $\mu_{k}N\lt \mu_{s}N$. So if the applied force $F$ that causes the transition from static to kinetic friction continues to be applied when sliding occurs, then the applied force is greater than the kinetic friction force and $$a=\frac{F-\mu_{k}N}{m}$$ Refer to the friction plot here: http://hyperphysics.phy-astr.gsu.edu/hbase/frict2.html#kin Hope this helps.
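The case analysis in the answer can be checked with a short numeric sketch. The mass, coefficients, and applied force below are hypothetical values chosen only for illustration:

```python
g = 9.8                  # m/s^2, gravitational acceleration
m = 2.0                  # kg, mass of the object (hypothetical)
mu_s, mu_k = 0.5, 0.4    # static and kinetic coefficients, mu_k < mu_s
N = m * g                # normal reaction on a horizontal surface

F = 12.0                 # N, applied horizontal force (hypothetical)
if F <= mu_s * N:
    a = 0.0              # static friction balances F; the object stays put
else:
    # once sliding starts, kinetic friction (not zero!) still opposes motion
    a = (F - mu_k * N) / m

print(a)
```

With these numbers the applied force exceeds the limiting static friction (9.8 N), so the block slides with a ≈ 2.08 m/s², not F/m = 6 m/s².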
{ "domain": "physics.stackexchange", "id": 95119, "tags": "newtonian-mechanics, forces, acceleration, friction, statics" }
Can we just take the underlying set of the spacetime manifold as $\mathbb{R^4}$ for all practical purposes?
Question: In mathematical GR and also in some informal GR presentations (e.g. MTW), manifolds are always mentioned before talking about GR... but now I am starting to wonder: is that even actually necessary? In this answer, it is said that it doesn't really matter what topological manifold we use to model a situation in spacetime, because all of them are homeomorphic to some subset of $\mathbb{R}^4$ by the definition of a manifold, and it's apparently impossible to actually check the topology at a global level due to the censorship theorem. All of this tells me that, other than getting physicists and mathematicians to use similar terminology, manifolds in their full generality are probably not relevant to GR except at the highest levels of study in very specialized research (beyond grad school, for instance). Is this conclusion correct, or am I missing something? Answer: Topological censorship is a theorem from the 1993 paper "Topological censorship" by Friedman, Schleich and Witt. It is a technical statement about certain manifolds (!), and it does not say that "it's apparently impossible to actually check the topology at a global level" as the question claims. The paper explicitly says on the implications of its theorem: Thus general relativity prevents one from actively probing the topology of spacetime. However, note that one can passively observe that topology by detecting light that originates at a past singularity. What follows in the paper is further discussion of what restrictions, if any, there are on such passive observation.
{ "domain": "physics.stackexchange", "id": 91626, "tags": "general-relativity, spacetime, differential-geometry, metric-tensor, topology" }
Uncertainty Principle derivation confusion
Question: In the textbook titled Mathematical Methods for Physics and Engineering by Riley, Hobson and Bence, 3rd edition, there is a section on page 664 on the derivation of the uncertainty principle. It starts off by considering two state vectors $ \left|u \right>$ and $ \left|v\right>$, and a real scalar $\lambda$. Letting $ \left|w\right> = \left|u \right> + \lambda \left|v \right> $, we take the inner product of $ \left|w\right> $ with itself and remember that the norm squared is always greater than or equal to $0$. $$ 0 \leq \left<w|w\right> = \left<u|u\right> + \lambda(\left<u|v\right> + \left<v|u\right>) + \lambda^2 \left<v|v\right> $$ It then says that this is a quadratic inequality in $\lambda$ and therefore the quadratic equation formed by equating the RHS to zero must have no real roots. What I don't understand is why it mustn't have real roots. We defined that $\lambda$ was real in the first place! Answer: Let me put it a little differently. Let $f(\lambda) = \langle u|u \rangle + \lambda(\left<u|v\right> + \left<v|u\right>) + \lambda^2 \left<v|v\right>$; we can prove that $f(\lambda) \ge 0$ for all real $\lambda$. Therefore, if there's some real number $\lambda_0$ such that $f(\lambda_0)=0$ (i.e., if $f$ has a real root), it must be a double root. This is because if $f$ had two distinct real roots, it would have to go negative at some point. You can see this graphically: if you have a quadratic that's always non-negative, then it either has a double root (that is, it just barely touches the x-axis) or it has no real roots. In either case, the discriminant is $\le 0$.
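The point of the discriminant condition is that the non-negativity of $f(\lambda)$ turns into an inequality on the inner products (a Cauchy–Schwarz-type bound). A quick numeric sanity check with randomly generated complex vectors (hypothetical data, plain Python):

```python
import random

random.seed(0)

# two hypothetical complex "state vectors" as plain lists
u = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(5)]
v = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(5)]

def inner(a, b):
    # <a|b> with the physicist's convention: conjugate the first argument
    return sum(x.conjugate() * y for x, y in zip(a, b))

uu = inner(u, u).real                  # <u|u>, real and non-negative
vv = inner(v, v).real                  # <v|v>, real and non-negative
re_uv = inner(u, v).real               # (<u|v> + <v|u>) / 2

# f(lam) = uu + 2*lam*re_uv + lam**2 * vv is >= 0 for every real lam,
# so its discriminant b^2 - 4ac must be <= 0:
disc = (2 * re_uv) ** 2 - 4 * vv * uu
print(disc <= 0)

# spot-check f itself at many random lam values
assert all(uu + 2 * lam * re_uv + lam ** 2 * vv >= 0
           for lam in (random.uniform(-10, 10) for _ in range(1000)))
```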
{ "domain": "physics.stackexchange", "id": 32861, "tags": "quantum-mechanics, homework-and-exercises, hilbert-space, heisenberg-uncertainty-principle" }
Handling shared state among a lot of elements in Angular
Question: I am working on a project in Angular where I have a number of similar data objects. When you click on any one of them, its state and the amount of data shown change. All of the objects start in the minimal state; when you click on one it becomes medium. From a medium you can click on the button to go to maximum, or click anywhere else on it to go back down. There can only be one medium or maximum at a time. If there is one medium there can be no maximum, and vice versa. Since all of the objects are min by default, I don't track that. I use a string to track which element is in an expanded state. Then I compare that object's key and the string in an ng-if to change what is being shown. The size is managed by a conditional class and CSS. I chose ng-if because there may be a lot of elements loaded at the beginning, but there can only be one med/max at a time. I would rather load the med/max when the user wants it, instead of loading all of the med/max at the start and using ng-show for the state view. Is there a cleaner way to handle state change among many related objects? Does Angular have something to deal with this? 
angular.module('SizeAndState', [])
  .factory('Data', function() {
    return {
      query: function() {
        return {
          a: { name: 'A', date: '1/2/03', number: 1 },
          b: { name: 'B', date: '4/5/06', number: 2 },
          c: { name: 'C', date: '8/9/10', number: 3 },
          d: { name: 'D', date: '3/2/01', number: 4 },
          e: { name: 'E', date: '1/2/14', number: 5 },
        }
      }
    };
  })
  .controller('SizeAndStateController', ['$scope', 'Data', function($scope, Data) {
    $scope.toMin = function(clickEvent) {
      clickEvent.stopPropagation();
      $scope.medElement = '';
      $scope.maxElement = '';
    };
    $scope.toMed = function(elementName, clickEvent) {
      if ($scope.maxElement == '') {
        if ($scope.medElement == elementName) {
          $scope.toMin(clickEvent);
        } else {
          $scope.medElement = elementName;
        }
      }
    };
    $scope.toMax = function(clickEvent) {
      clickEvent.stopPropagation();
      $scope.maxElement = $scope.medElement;
      $scope.medElement = '';
    };
    $scope.data = Data.query();
    $scope.medElement = '';
    $scope.maxElement = '';
  }]);
.campaign {
  display: inline-block;
  width: 100px;
  height: 100px;
  background-color: #ff685b;
  border: 1px solid #ffffff;
  color: #ffffff;
  vertical-align: top;
}
.campaign.med { width: 125px; height: 125px; }
.campaign.max { width: 150px; height: 150px; }
<script src="https://ajax.googleapis.com/ajax/libs/angularjs/1.2.23/angular.min.js"></script>
<div ng-app="SizeAndState">
  <div ng-controller="SizeAndStateController">
    <div class="campaign" ng-repeat="element in data"
         ng-class="{'med': medElement==element.name, 'max': maxElement==element.name}"
         ng-click="toMed(element.name, $event)">
      <span>{{element.name}}</span>
      <span ng-if="medElement!=element.name && maxElement!=element.name">Min</span>
      <span ng-if="medElement==element.name">
        <span>Med</span>
        <h5>{{element.date}}</h5>
        <button class="btn" ng-click="toMax($event)">to Max</button>
      </span>
      <span ng-if="maxElement==element.name">
        <span>Max</span>
        <h5>{{element.date}}</h5>
        <h5 ng-if="maxElement==element.name">{{element.number}}</h5>
        <button ng-if="maxElement == element.name" ng-click="toMin($event)">to Min</button>
      </span>
    </div>
  </div>
</div>
Answer: I'd handle the state in a pretty different way. First of all, I'd change the format of the data to be an array of objects with unique ids. So instead of
{
  a: { name: 'A', date: '1/2/03', number: 1 },
  b: { name: 'B', date: '4/5/06', number: 2 },
}
I'd prefer something like below, using state enums as suggested by @Ami.
[{
  id: 0,
  name: 'Category 1',
  items: [{
    id: 0,
    name: 'A1',
    state: state.MIN
  }, {
    id: 1,
    name: 'B1',
    state: state.MIN
  }, ...
With unique ids you can easily and reliably track and modify your objects. To me, your template's logic (or "flow") is quite hard to follow. So next I would try to reduce the clutter as much as possible.
<body ng-controller="MainCtrl">
  <h3>Size & state</h3>
  <div ng-repeat="collection in data">
    <h4 ng-bind="collection.name"></h4>
    <span ng-repeat="item in collection.items">
      <button ng-bind="item.name"
              ng-class="{ med: item.state, max: item.state == 2 }"
              ng-click="changeState(collection, item)">
      </button>
    </span>
  </div>
</body>
Where the CSS is
.med { width: 85px; height: 85px; }
.max { width: 110px; height: 110px; }
With these modifications, the changeState(collection, item) function in the controller can handle all of the min/med/max-related logic.
switch (item.state) {
  case state.MIN:
    _.each(collection.items, function(current) {
      current.id == item.id ? current.state = state.MED : current.state = state.MIN;
    });
    break;
  case state.MED:
    item.state = state.MAX;
    break;
  default:
    item.state = state.MIN;
}
If the selected item's state is MIN, change it to MED and reset all other items in the given collection to MIN. If the item's state is MED, just change it to MAX, and so on. Notice the use of _.each(), which is an underscore.js library function. It's a great lib when you need to handle arrays or collections. 
You didn't want to traverse the collection every time the user makes a selection, but I don't see it as a performance problem, and with underscore you can do it pretty much (or very close to) as a one-liner. Here's a related plunker.
{ "domain": "codereview.stackexchange", "id": 10326, "tags": "javascript, html, angular.js, dynamic-programming, state" }
Penny dropped in the water: What would you see if transmitted light is parallel to the incident surface
Question: I'm working on a problem which asks for the greatest diameter of a paper disc you can use to totally shield a penny dropped in the water from view. The question claims that if the transmitted light were made totally parallel to the surface, then you would not see it. (full solution here) I have trouble visualizing this. Can someone give me a reason why you will not see the penny if the transmitted light is parallel to the surface of the water? Answer: I think you mean that the refracted ray is at the critical angle. I tried it myself with a 1 INR coin. Although the image I took was not from a ray $90^\circ$ from the normal, it was $1^\circ$–$2^\circ$ less. The meniscus of the water interfered with my observations. When we look at the coin from the normal, a flat 2D image appears. As the angle between the normal and the refracted ray approaches $90^\circ$, the 2D image will start converging to a 1D line segment. The image will be reduced to a thin silver line segment.
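To make the geometry concrete: light from the coin can only escape through a circle on the surface where rays arrive at less than the critical angle; at exactly the critical angle the transmitted ray runs parallel to the surface, and beyond it everything is totally internally reflected. A hedged sketch of the standard calculation (the depth value is made up for illustration, and the coin is treated as a point):

```python
import math

n = 1.33          # refractive index of water
h = 0.10          # m, depth of the penny (hypothetical value)

# rays leaving the coin at more than the critical angle are totally
# internally reflected, so only a cone of half-angle theta_c escapes
theta_c = math.asin(1 / n)

# that cone meets the surface in a disc of radius h * tan(theta_c);
# a paper disc of at least that radius blocks every escaping ray
r = h * math.tan(theta_c)            # equivalently h / sqrt(n**2 - 1)
d = 2 * r

print(f"critical angle = {math.degrees(theta_c):.1f} deg, diameter = {d:.3f} m")
```

For water the critical angle is about 48.8°, so the required diameter is roughly 2.28 times the depth of the coin.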
{ "domain": "physics.stackexchange", "id": 17978, "tags": "optics, visible-light, reflection, refraction" }
"OOD allows ADTs to be created and used."
Question: I just had a CS mid-term and one of the questions was: OOD allows ADTs to be created and used. True False I answered false, but my answer was marked as incorrect. I suspect what the question means is "object-oriented design can be used to implement abstract data types", but if that's what it means it seems very clumsily worded to me. My rationale for answering false was ADTs are conceptual and exist outside of any particular programming paradigm, so "creation" of an ADT is purely a theoretical exercise. To me it seems like the question is analogous to saying "OOD allows algorithms to be created". You might use OOD in the implementation of an algorithm, but it has nothing to do with its creation. Would I be right in appealing my mark? My professor is kind of an idiot and I've already corrected him on several other points throughout the term, so I'm trying to avoid antagonizing him if I'm wrong. Answer: First, if this is exactly the sentence on the exam and not your translation, it's ambiguous. It could mean that OOD is one possible way to create and use ADTs, or that creating and using ADTs requires OOD. Furthermore, ADT can mean two things: abstract data type or algebraic data type. The two concepts are completely different but are often confused. An algebraic data type is a type that is defined by its recursive structure, or equivalently by the ways to build an object of that type. An abstract data type is a type that is defined by its properties, with the way to build objects remaining hidden. The second interpretation — that you need OOD for ADTs — is definitely false. There are programming languages which have no object orientation whatsoever but have ADTs in one sense or the other or both. Standard ML is a prime example: record and sum type definitions provide algebraic data types, while the module system provides abstract data types. 
The first interpretation — that ADTs can be implemented with OOD — is contentious, because it depends on terminology that isn't standard. In typical languages that provide objects, you can build algebraic data types: define several implementations of a class to make a sum type, and put multiple fields in a class to make a product type. However, this is not intrinsic to object-oriented programming. Regarding abstract data types, most object-oriented languages provide some kind of abstraction facility by hiding the implementation of a class under some interface. However, this isn't intrinsic to OOP either: the key feature of objects is inheritance, and you can have inheritance without any abstraction whatsoever. The question may be drawing a distinction between object-oriented design and object-oriented programming constructs, but OOD isn't really on the same plane as ADTs. All in all, this is a poorly-worded exam question. The connection between OOD and ADTs is an interesting subject, but the question is not phrased in a meaningful way.
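To illustrate the two readings of "ADT" from the answer, here is a rough Python sketch (Python cannot enforce sealed sum types or true representation hiding, so this is only an approximation of what a language like Standard ML gives you natively):

```python
from dataclasses import dataclass
from typing import Union

# --- Algebraic data type: defined by its constructors / structure ---
@dataclass
class Leaf:
    value: int

@dataclass
class Node:
    left: "Tree"
    right: "Tree"

Tree = Union[Leaf, Node]   # a sum type: a Tree is a Leaf OR a Node

def total(t: Tree) -> int:
    # functions on an algebraic type work by case analysis on the structure
    return t.value if isinstance(t, Leaf) else total(t.left) + total(t.right)

# --- Abstract data type: defined by its operations, representation hidden ---
class Stack:
    def __init__(self):
        self._items = []       # private by convention; clients use only push/pop

    def push(self, x):
        self._items.append(x)

    def pop(self):
        return self._items.pop()

t = Node(Leaf(1), Node(Leaf(2), Leaf(3)))
print(total(t))    # 6

s = Stack()
s.push(10); s.push(20)
print(s.pop())     # 20
```

Note that neither sketch uses inheritance, echoing the answer's point that these facilities are not intrinsic to OOP.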
{ "domain": "cs.stackexchange", "id": 1632, "tags": "programming-languages, object-oriented, abstract-data-types" }
Applying minimax tree to this array
Question: Please tell me how I should apply the minimax algorithm to the array $$ 15, 12, 14, 16, 11, 13$$ and make a tree? (I understand how the minimax algorithm works but I can't apply it to an array.) Answer: Minimax is actually applied on a 2D matrix like:
3  5  6 13  8
7  9  2  4  6
4 12 15 16 10
Generally, you have a choice to make; let's say it corresponds to one of the columns in the example (5 choices). And there is an unpredictable choice (random, opponent's choice, ...), the row in the example (3 choices). Minimax minimizes the risks by assuming this unpredictable choice will be the worst for you. In the example, you first compute the minimum over all columns:
3 5 2 4 6
So you finally pick the 5th choice, which guarantees at least 6. In your problem you have 1 dimension, so minimax makes no sense on it. Or you can degenerate it, saying there is no unpredictable variable, and just pick the maximum.
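The answer's computation — take each column's worst case, then pick the best column — is a few lines of Python:

```python
# the 3x5 payoff matrix from the answer: rows = unpredictable choice,
# columns = your choice
matrix = [
    [3,  5,  6, 13,  8],
    [7,  9,  2,  4,  6],
    [4, 12, 15, 16, 10],
]

# worst case (minimum over rows) for each of your column choices
col_mins = [min(row[j] for row in matrix) for j in range(len(matrix[0]))]
print(col_mins)            # [3, 5, 2, 4, 6]

# pick the column whose worst case is best: the maximin choice
best = max(range(len(col_mins)), key=lambda j: col_mins[j])
print(best + 1, col_mins[best])   # choice 5 guarantees at least 6
```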
{ "domain": "cs.stackexchange", "id": 13044, "tags": "algorithms, arrays" }
Why is the calculated cross-entropy not zero?
Question:
import torch.nn.functional as F

logits = torch.Tensor([0, 1])
counts = logits.exp()
probs = counts / counts.sum()  # equivalent to softmax
loss = F.cross_entropy(logits, probs)
Here, loss is roughly equal to 0.5822. However, I would expect it to be 0. If I understand the docs correctly, torch.nn.functional.cross_entropy can accept an array of logits and an array of probabilities as its input and target parameters, respectively (converted to torch.Tensor's). I believe probs to be the true distribution, and that F.cross_entropy therefore should return 0. Why is loss not 0? Answer: Pytorch treats your logits as outputs that it will first convert to probabilities by running them through a softmax:
p_log = torch.log(F.softmax(logits, dim=0))
-torch.dot(p_log, probs)
tensor(0.5822)
Some discussion here - https://discuss.pytorch.org/t/why-does-crossentropyloss-include-the-softmax-function/4420. I think the naming could be clearer; for example, in tensorflow it is tf.nn.softmax_cross_entropy_with_logits.
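Another way to see why 0.5822 is exactly right: since the target distribution here equals softmax(logits), the cross-entropy H(p, q) collapses to the Shannon entropy H(p), and H(p) is zero only when p is one-hot. A dependency-free sketch reproducing the number (pure Python, no torch required):

```python
import math

logits = [0.0, 1.0]

# softmax(logits): what cross_entropy applies internally to its input
exps = [math.exp(z) for z in logits]
probs = [e / sum(exps) for e in exps]          # ~[0.2689, 0.7311]

# cross-entropy H(p, q) = -sum p_i * log q_i; here target p == predicted q,
# so the loss collapses to the entropy H(p) of the distribution itself
loss = -sum(p * math.log(q) for p, q in zip(probs, probs))
print(round(loss, 4))   # nonzero unless p is one-hot
```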
{ "domain": "datascience.stackexchange", "id": 11675, "tags": "pytorch, cross-entropy" }
minimum number of edges that should be added to an undirected graph to make it a tree
Question: Basically, it's this Rosalind problem. You're given a number of nodes and an adjacency list. My initial guess was that the answer was the number of connected components minus 1, since by joining every connected component you would have a connected graph, and since it's stated that there are no cycles, that would be a tree. Why is this approach wrong? The real answer is just the number of nodes minus 1 minus the number of edges, which I understand, but I can't see how this is not equivalent to my answer. Also, the sample dataset given bugs me. I see three connected components, so I don't see why the answer is not 2. Bear in mind, I'm almost new to graph theory, so I'm sorry if I'm missing something simple. Answer: Your first guess is correct. Sometimes there is more than one way to write the same solution. Clearly, if there are $k$ connected components you'll need exactly $k-1$ edges to connect them (without forming any cycle). On the other hand, a tree with $n$ nodes must have exactly $n-1$ edges, so if the graph is acyclic and already has $m$ edges, then it is missing $n-1-m$ edges. Regarding the example: the given graph has $n=10$ vertices, $m=6$ edges, and $k=4$ connected components (not 3), so the answer is $3=k-1=n-m-1$. The sets of vertices in each connected component are $\{1,2,8\}, \{3\}, \{4, 6, 10\}$, and $\{5, 7, 9\}$.
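Both counts can be verified on a small forest. The edge list below is hypothetical — chosen to reproduce the components the answer lists; the actual Rosalind sample dataset may use different edges:

```python
n = 10
# hypothetical edge list consistent with the components in the answer:
# {1,2,8}, {3}, {4,6,10}, {5,7,9}
edges = [(1, 2), (2, 8), (4, 6), (6, 10), (5, 7), (7, 9)]

# count connected components with union-find
parent = list(range(n + 1))

def find(x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]   # path halving
        x = parent[x]
    return x

for a, b in edges:
    parent[find(a)] = find(b)

k = len({find(v) for v in range(1, n + 1)})
print(k)                                # 4 components

# for an acyclic graph (forest) both formulas agree: k - 1 == n - 1 - m
assert k - 1 == n - 1 - len(edges) == 3
```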
{ "domain": "cs.stackexchange", "id": 17557, "tags": "graphs, trees" }