bash: /opt/ros/indigo/setup.bash: No such file or directory
Question: I have installed ROS Indigo on Ubuntu 14.04.1 following the instructions, but ran into a problem: bash: /opt/ros/indigo/setup.bash: No such file or directory. I then checked the "/opt" directory and found that it is empty: ros@ubuntu:/opt$ ls -al total 8 drwxr-xr-x 2 root root 4096 Dec 21 13:40 . drwxr-xr-x 23 root root 4096 Dec 21 13:35 .. ros@ubuntu:/opt$ And when I executed the following commands, they gave the same error: echo "source /opt/ros/indigo/setup.bash" >> ~/.bashrc source ~/.bashrc Could someone help me? Thanks a lot! Originally posted by fireice on ROS Answers with karma: 1 on 2014-12-21 Post score: 0 Original comments Comment by Wolf on 2014-12-22: Did anything fail during sudo apt-get install ros-hydro-desktop-full? Comment by gvdhoorn on 2014-12-22: Also: which "instructions"? Comment by fireice on 2014-12-22: @Wolf I used sudo aptitude install ros-indigo-desktop-full instead of sudo apt-get, but there were no errors. Comment by fireice on 2014-12-22: @gvdhoorn following this one: http://wiki.ros.org/indigo/Installation/Ubuntu Comment by krishna890227 on 2016-07-04: Hi friends, I am also suffering from the same problem on my Raspberry Pi 3B. My operating system is NOOBS, but I only found command-line instructions for Wheezy or Jessie. How can I solve this? Comment by 130s on 2016-07-04: @krishna890227 please open a new question, with sufficient info about your situation. Answer: You simply didn't install anything. Since you are using Ubuntu 14.04.1, the best option for you is simply to install from the repository. Here you have the procedure. Just pay attention to every line of code and don't skip any instructions. Later on you can configure your environment accordingly. The first 2-3 times I had problems too; the first hurdle is understanding why and what you need in order to run ROS.
But after that you will quickly realize it's quite normal and it won't be a problem anymore :P Originally posted by Andromeda with karma: 893 on 2014-12-22 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by fireice on 2014-12-22: Thanks! I will try again.
{ "domain": "robotics.stackexchange", "id": 20401, "tags": "ros, ros-indigo, setup.bash" }
Why is the self-weight of a cable not uniformly distributed?
Question: I have learned that a free cable hanging under only its self-weight forms a catenary, while a cable with a uniformly distributed load forms a parabola. Why is the self-weight of the cable not considered to be uniformly distributed? Is it because "uniformly distributed" is defined with respect to the span? Answer: Yes, because when they say "uniformly distributed" they mean uniformly distributed along the horizontal direction. So if the cable (at the center of the span) is horizontal, it takes 1 m of cable to span 1 m of horizontal distance. But if the cable is angled at 45 degrees (near a support), it takes about 1.4 m of cable to span 1 m of horizontal distance, and 1.4 m of cable weighs 1.4 times as much as 1 m of cable.
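The 1.4 m figure is just $1/\cos(45°)$; a small numeric sketch of this geometric factor (illustrative, not part of the original answer):

```python
import math

# Weight per meter of horizontal span, relative to the weight per meter
# of cable: a segment inclined at angle theta to the horizontal uses
# 1/cos(theta) m of cable for each 1 m of horizontal distance.
def load_factor(theta_deg):
    return 1.0 / math.cos(math.radians(theta_deg))

print(round(load_factor(0), 3))   # 1.0 at the center of the span
print(round(load_factor(45), 3))  # 1.414 near a support angled at 45 degrees
```

The horizontal-projection load thus grows toward the supports, which is exactly why the catenary deviates from the parabola.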
{ "domain": "physics.stackexchange", "id": 90863, "tags": "newtonian-mechanics, forces, terminology, string" }
Adding Dirac Delta with Unit Step
Question: What does the graph of the function $x(t)=-\delta(t)+u(t)$ look like? $$\delta(t)\ldots\text{ Dirac delta impulse}\\ u(t)\ldots\text{unit step function}$$ Will the impulse at the origin start from 0 downwards or from 1? I'm having trouble because the unit step is discontinuous at t=0, so I don't know how it will shift the impulse. I am talking about continuous time. Answer: Note that $\delta(t)$ is not a function but a distribution. This means that it doesn't have function values; it is only defined by its integral. Consequently, the (generalized) function $x(t)=-\delta(t)+u(t)$ has no value at $t=0$ (the only sensible value would be $-\infty$, but that's kind of hard to sketch). For sketching the graph, you can define function values for $t\neq 0$ in the usual way: for $t<0$ the function is zero, for $t>0$ you assign it the value $1$, and at $t=0$ you simply sketch a negative $\delta$-impulse.
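Since the distribution is defined only through its integrals, one way to sanity-check the sketch is symbolically, e.g. with SymPy (an illustrative check, not part of the answer): over a symmetric window the impulse contributes $-1$ and the step contributes its area, $+1$.

```python
import sympy as sp

t = sp.symbols('t')
x = -sp.DiracDelta(t) + sp.Heaviside(t)

# The impulse picks up -1, the unit step integrates to +1 over [0, 1],
# so the two contributions cancel over the window [-1, 1]:
print(sp.integrate(x, (t, -1, 1)))   # 0
```

This confirms there is no finite "value" to plot at $t=0$; only the integral of the impulse is meaningful.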
{ "domain": "dsp.stackexchange", "id": 2312, "tags": "continuous-signals, dirac-delta-impulse" }
Documenting parameters and nodes
Question: Is there any standard way to generate documentation for nodes and parameters in a ROS-wiki-like form? I tried rosdoc, but I'm only getting the Doxygen documentation. Originally posted by Francesco Sacchi on ROS Answers with karma: 203 on 2012-10-18 Post score: 3 Original comments Comment by Francesco Sacchi on 2012-10-24: Actually a little step has been made with http://ros.org/wiki/rosdoc_lite... still not enough. Answer: Unfortunately, there's no automatic way of doing this at the moment. That being said, it'd be great to have a standard (XML- or YAML-based, say) way of documenting a node's ROS interface. A file like that could then be parsed by the wiki. Originally posted by jbohren with karma: 5809 on 2012-10-18 This answer was ACCEPTED on the original site Post score: 4 Original comments Comment by Francesco Sacchi on 2012-10-24: Probably some documentation would be easy to generate: the list of nodes can be taken from the CMakeLists.txt file, and for simple nodes the list of topics can also be extracted with some greps... harder if the topic subscription/publication is not done in the main function.
{ "domain": "robotics.stackexchange", "id": 11419, "tags": "ros, documentation, wiki" }
fuse fastq files with multiple records
Question: I'd like to fuse ~50 pairs of corresponding fastq files on a per-sequence basis. I'd be happy with a solution as a Linux script, an R script (if feasible with huge files), or any free software that runs preferably under Debian Linux (otherwise Windows). The aim is preprocessing for a Linux pipeline for NGS data. Ideally both the sequence and quality data are fused. The background of this request is the following: there is an existing pipeline for downstream mapping of immunoreceptor sequences (migec/mixcr/vdjtools etc.). Previously we got Illumina reads (read 1, read 2 = reverse) with the index and UMI sequences attached to one of the reads => these can be aligned and processed to give 1 biologically useful sequence with UMI and index at one end. NOW we get read 1, read 2 and separately "read 3", which is just the UMI+index reads and which was previously attached to read 1 by the core facility using software we do not have access to. Since the previous pipeline works well, we prefer to use it in the future, but are open to suggestions on how to use the index reads with migec/mixcr/vdjtools otherwise. Each of the 2 files contains the same number (millions) of short (~300 bp) corresponding DNA reads: File_A.fastq: @record1 / aaa... / +... / ... @record2 / ... @record3 / ... ... (millions) File_B.fastq: @record1 / ttt... / +... / ... @record2 / ... @record3 / ... ... (millions) The target file should look like: @record1 / aaa...ttt... / +... / ... @record2 / ... @record3 / ... ... (millions) (of course the aaa...ttt... is just an example) I looked for a solution on Stack Exchange and Google, but the solutions I found do not work for me, because manual processing is not an option given the number of sequences. I need to fuse the files on a per-sequence basis, i.e. a simple Linux command like cat file1 file2 is not what I need (as far as I understand, this would give me record1_A record2_A ... record1_B record2_B).
R's ShortRead package with readFastq + writeFastq would be perfect, but cannot handle such large numbers of input lines. Maybe I did not use the right search terms, but in any case I'd be happy for a solution. Answer: Based on the comments of the question I thought to put together this answer using GNU coreutils, which should exist on pretty much all systems. The idea is to paste the 2 files together with paste (assuming they are in the same order and each line corresponds to the same line in the other file). Since only the sequence and quality of both files should be kept, the lines starting with @ and + need to be made empty for one of the files. This can be achieved with sed, e.g. sed 's/^@.*//' empties all lines starting with @ but keeps the empty line. To modify the second file on the fly, one can use process substitution <(). Everything strung together should be something like this: file1=File_A.fastq file2=File_B.fastq paste -d "" $file1 <(cat $file2 | sed 's/^@.*//' | sed 's/^+.*//') > target.fastq I haven't tested it thoroughly, but it could be a good starting point. If you have a lot of files to paste, you might however want to process them first (not on the fly) using: for file in $(ls *.fastq); do cat $file | sed 's/^@.*//' | sed 's/^+.*//' > ${file%.fastq}_cleaned.fastq done This creates File_A_cleaned.fastq from File_A.fastq etc. Best to move them into a separate directory (make sure the first one is the original instead of the _cleaned one, since it serves as the base). You should have the following in the folder: File_A.fastq File_B_cleaned.fastq File_C_cleaned.fastq .... Those can then be pasted with paste -d "" *.fastq > target.fastq.
{ "domain": "bioinformatics.stackexchange", "id": 686, "tags": "fastq" }
Uncertainty propagation upon taking the median
Question: If I have $N$ repeated occurrences of a measurement $x$ with uncorrelated errors and identical uncertainties $u_x$, and take the mean $\langle x\rangle$, the uncertainty on the mean becomes: $$u_{\langle x\rangle} = \frac{u_x}{\sqrt{N}}$$ where $N$ is the number of measurements I have taken. This is derived from the law of propagation of uncertainties (for example, see this answer). If I take the median instead of the mean, I'm mathematically only propagating information from one or two data points, which would mean $N=1$ or $N=2$. But that seems unfair, for surely the principle should hold that if I repeat the measurement many times and take the median, the resulting uncertainty goes down. Is there any established way of propagating the uncertainty when taking the median of a series of measurements? Answer: I do not think you are estimating the uncertainty of your median correctly. The ratio of the variance of the mean to the variance of the median is given by $4n/[\pi(2n+1)]$, where $N=(2n+1)$ is the total number of data points in the sample you have used to construct the median. I think this approximation is only true if your outliers are symmetric. Thus the uncertainty in the median is given by $$ \Delta x_{Med} = \Delta \bar{x} \sqrt{\pi(2n+1)/4n},$$ In the limit of large $N$ (and hence large $n$), this tends to $$ \Delta x_{Med} = \Delta \bar{x} \sqrt{\pi/2},$$ The uncertainty in the mean $\Delta \bar{x}$ is the standard deviation of the points divided by $\sqrt{N}$ and therefore the precision of your estimate will improve as $\sqrt{N}$. This information is given at http://mathworld.wolfram.com/StatisticalMedian.html If you have outliers, then the formula that you begin your question with is incorrect (see the first sentence of the answer you refer to). 
The data points will not be normally distributed according to your estimate of their uncertainties and the standard error of the mean that I quote above will be larger than $u_x/\sqrt{N}$ because the standard deviation of the data will be larger than $u_x$ (I refer you also to the last paragraph of my answer to that question).
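The $\sqrt{\pi/2}$ penalty for the median is easy to verify with a quick Monte Carlo experiment (an illustrative sketch assuming normally distributed errors; not part of the answer):

```python
import numpy as np

# Draw many samples of N normal measurements and compare the spread of
# the sample mean with the spread of the sample median.
rng = np.random.default_rng(0)
trials, N = 20000, 101            # N odd, so the median is a single data point
samples = rng.normal(0.0, 1.0, size=(trials, N))

sd_mean = samples.mean(axis=1).std()
sd_median = np.median(samples, axis=1).std()
print(sd_median / sd_mean)        # close to sqrt(pi/2) ~ 1.2533
```

Both spreads still shrink as $1/\sqrt{N}$; the median is merely a constant factor noisier for Gaussian data.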
{ "domain": "physics.stackexchange", "id": 35408, "tags": "error-analysis, metrology" }
Path and trajectory planning for a holonomic drivetrain with non-zero starting velocities
Question: What path planning algorithms are typically used for holonomic drivetrains that can avoid collisions while also starting with non-zero velocities? I am working on a project where I plan to have a robot avoid common household objects and drive around them, but do so on the fly, which may result in the robot having a non-zero starting velocity. I have looked into A*, and that could work when using cubic or quintic splines, as those can account for the current velocity. But for more sophisticated obstacle avoidance using something like RRT, I am unsure how that would work with a non-zero starting velocity. Thank you! Answer: One suitable method might be real-time RRT* (RT-RRT*). It's designed to solve exactly the problem you have - plan and update paths that can change as the path is traversed. One word of warning - the update procedure requires the current node traversal to complete before switching to the newly updated path. In other words, while moving from node $v_{1}$ to $v_{2}$, the algorithm cannot avoid dynamic obstacles that suddenly show up between these two points. In practice this likely shouldn't be a big problem as long as your nodes are close enough together and you are using an efficient implementation of RT-RRT*. If you decide to implement this method yourself, the hierarchical navigable small worlds (HNSW) library provides a very efficient method for performing approximate nearest neighbor searches. Given that this is one of the more difficult tasks to handle efficiently in RRT-based algorithms (especially as tree sizes grow), this should help out quite a bit.
{ "domain": "robotics.stackexchange", "id": 2729, "tags": "path-planning, trajectory" }
Good acid for cleaning marine toilets
Question: Problem: On my boat I have a marine toilet. It is operated with salt water. The problem is that the interior of the outflow tubing accumulates some sort of hard grime. I am looking for a good way to dissolve this grime, as it is impossible to replace the tubing. From experience it is known that citric acid works - but not very well. Also it leaves a "sandy" residue that is difficult to flush. Does anybody have an idea of what this "grime" consists of - and thus how best to dissolve it? Materials: On hand I have: citric acid 56% phosphoric acid 23% HCl Materials: tubing: PVC toilet itself: porcelain pump: ABS or polypropylene with selected parts in acetal resin, with 316 passivated stainless steel fastenings, brass weights and neoprene seals and gaskets valve: grey, harder plastic I know this is quite vague but am hoping somebody can help anyway. Answer: To my knowledge, marine water contains a great amount of $\ce{Ca}$ and $\ce{Mg}$ ions, which give insoluble carbonates and phosphates. So, most probably, you have a mix of carbonates, phosphates and, possibly but unlikely, sulfates of these metals, and possibly some organic salts, like oxalates. Carbonates are readily dissolved by $\ce{HCl}$, which in weak solutions will not dissolve copper, but will slowly dissolve steel. It is more interesting to use $\ce{Na2H2edta}$. It dissolves $\ce{CaCO_3}$ easily and, to my knowledge, in high concentration dissolves $\ce{Ca_3(PO_4)_2}$ as well, though I am not sure about the latter.
{ "domain": "chemistry.stackexchange", "id": 142, "tags": "everyday-chemistry, ionic-compounds" }
Line to neutral voltage in three-phase power systems
Question: I have the book "Power System Analysis and Design" by Sarma. In his intro to balanced LN voltages, in chapter 2, p. 62, he states: $$V_{LL} = \sqrt{3}\, V_{LN} \angle 30°$$ but later in the chapter, p. 70, he states for balanced LN voltages: $$ V_{LL} = \sqrt{3}\, V_{LN} $$ This looks like a very different statement. Why does he write this when, a few pages earlier, he said there is a 30° phase shift? Answer: drawing by SSR Because $V_{AB} = V_A - V_B$ (vector subtraction). The dashed line is $-V_{B} = -(120 \angle -120°\ \text{V}) = 120 \angle 60°\ \text{V}$. Vector addition of $V_{A}$ and $-V_{B}$ gives the line voltage $V_{AB} = 208 \angle 30°\ \text{V}$. In a three-phase, wye-connected system the line-to-line voltage (line voltage) $V_{LL}$ ($V_{AB}$, $V_{BC}$, $V_{CA}$) is $\sqrt{3}$ times larger than the phase-to-neutral voltages (phase voltages) $V_{LN}$ ($V_{AN}$, $V_{BN}$, $V_{CN}$) and leads the phase voltage by 30°. In the first case the author is emphasizing that the line voltage is a vector (magnitude $\angle$ angle, or $V_{LL} = \sqrt{3} V_{LN} \angle 30°$), while the second case talks only about the magnitude ($V_{LL} = \sqrt{3} V_{LN}$). The phase angles are only relevant when looking at phasor diagrams, so they are usually left off for clarity.
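The vector subtraction is easy to reproduce numerically with complex phasors (a quick sketch, not part of the original answer, using a 120 V phase voltage):

```python
import cmath
import math

def phasor(mag, angle_deg):
    # Complex phasor from magnitude and angle in degrees.
    return cmath.rect(mag, math.radians(angle_deg))

V_A = phasor(120, 0)
V_B = phasor(120, -120)
V_AB = V_A - V_B                 # line voltage = vector subtraction

print(round(abs(V_AB), 2))       # 207.85, i.e. sqrt(3) * 120
print(round(math.degrees(cmath.phase(V_AB)), 1))   # 30.0 degrees, leading V_A
```

Both of Sarma's statements fall out of the same calculation: the magnitude is $\sqrt{3}\,V_{LN}$ and the angle is $+30°$.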
{ "domain": "engineering.stackexchange", "id": 4454, "tags": "electrical-engineering, power, circuits, power-engineering, multiphase-flow" }
Equilibrium concentration of point defects in solids at sufficiently elevated temperature
Question: The equilibrium concentration of point defects in solids is defined as $\chi=\exp(\frac{\Delta s}{k_B})\exp(\frac{-\Delta h}{k_B T})$. At high temperatures, maybe even higher than the melting temperature of the solid, does the entropy become undefined, meaning there will be no equilibrium concentration of point defects? Answer: Mathematically, the equation can yield a result at any temperature. But since the concept of a point defect makes sense only in a crystal, in a liquid the calculation yields a number with no meaning - just like applying the formula at $T < 0$ would. Also, the equilibrium concentration is not "defined" by your equation; the equation only gives its value where the crystal exists.
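To see the temperature dependence concretely, here is a quick numerical sketch of the formula (the values $\Delta h = 1\ \text{eV}$ and $\Delta s = 2k_B$ are illustrative assumptions, not from the question):

```python
import math

K_B = 8.617e-5  # Boltzmann constant in eV/K

def defect_fraction(ds_over_kb, dh_ev, T):
    # chi = exp(ds/kB) * exp(-dh/(kB*T)); physically meaningful only
    # while the crystal exists (below the melting temperature).
    return math.exp(ds_over_kb) * math.exp(-dh_ev / (K_B * T))

print(defect_fraction(2.0, 1.0, 300))   # vanishingly small at room temperature
print(defect_fraction(2.0, 1.0, 1000))  # many orders of magnitude larger
```

The formula keeps producing numbers above the melting point; it is the interpretation as a defect concentration, not the arithmetic, that breaks down there.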
{ "domain": "physics.stackexchange", "id": 57731, "tags": "material-science" }
Can an EWG behave as an EDG and vice versa?
Question: Can an electron-withdrawing group (EWG) (like NO2, F, Cl, OH, etc.) behave as an electron-donating group (EDG) in a compound where a carbocation is present, in order to stabilise it? Does the position where it is present, or its distance from the carbocation, affect its behaviour as an EWG or EDG? I feel that it should behave as an EDG, as a carbocation is electron deficient and so has a greater tendency to accept electrons. Can we say a similar thing, that an EDG behaves as an EWG in the presence of a carbanion, since it has excess electrons? Answer: Of course not - that's why it is called an EDG or an EWG. An EWG tends to destabilize a carbocation even further, in the absence of other effects like resonance and hyperconjugation.
{ "domain": "chemistry.stackexchange", "id": 10880, "tags": "organic-chemistry, carbocation" }
Is the occupation number and density of states equation correct?
Question: The relationship between occupation number (which is the number of particles at a certain energy level) and the density of states is as follows: $$n(E) = D(E)F(E)$$ where $D(E)$ is the DOS and $F(E)$ is the Fermi function. But intuitively this formula seems to have a problem. As I understand the Fermi function, it is a probability density function that gives the likelihood for, say, an electron to possess a certain amount of energy among various values of energies. So if there are 400 electrons, and f(E1) = 0.5, then the energy level E1 will be occupied by 200 electrons. There is no DOS information required for this calculation. If E1 had 400 states, it would be half filled; if it had 200, it would be fully filled. But the above equation says that no matter what, the occupation of E1 will always be half, since n(E) would be half of D(E). Where am I mistaken? Answer: As I understand the Fermi function, it is a probability density function that gives the likelihood for say an electron to possess a certain amount of energy among various values of energies. No, that's not right. The Fermi function $$f(E) = \frac{1}{e^{(E-\mu)/kT}+1}$$ gives the probability that a single-particle state which has energy $E$ is occupied at temperature $T$, given that the system has chemical potential $\mu$. More specifically, if $x$ is a single-particle state and $E[x]$ is the energy of that state, then $$\mathrm{Prob}(x\text{ is occupied}) = f\big(E[x]\big) = \frac{1}{e^{(E[x]-\mu)/kT}+1}$$In particular, it is not a probability density but rather a genuine probability. So if there's 400 electrons, and f(E1) = 0.5, then the energy level E1 will be occupied by 200 electrons. No. If $f(E_1)=0.5$, then there is a 50/50 chance that any given single-particle state with energy $E_1$ is occupied. If $1000$ single-particle states each have energy $E_1$, then at any given time we would expect $500$ of them to be occupied and $500$ of them to be empty.
If $D(E)$ is the number of single-particle states per unit volume with energy between $E$ and $E+\mathrm dE$ (i.e. the density of states) and $f(E)$ is the probability that each of these states is occupied, then it follows that $n(E)=D(E)f(E)$ is the expected number of occupied single-particle states per unit volume with energy between $E$ and $E+\mathrm dE$. In the case of a finite system with discrete energies, we would also have that if $g(E)$ is the number of single-particle states with energy $E$, then $N(E)=g(E)f(E)$ is the expected number of occupied single-particle states with energy $E$, which can be obtained from the above by integrating over a single energy level and multiplying by the volume of the system.
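A small numerical sketch of the answer's point (the values of $\mu$ and $kT$ are illustrative):

```python
import math

def fermi(E, mu, kT):
    # Probability that a single-particle state of energy E is occupied.
    return 1.0 / (math.exp((E - mu) / kT) + 1.0)

# 1000 single-particle states all at E1 = mu: each is occupied with
# probability f(E1) = 0.5, so we expect 500 occupied on average,
# regardless of how many electrons the whole system contains.
g = 1000                 # number of states at this energy
mu, kT = 1.0, 0.025      # chemical potential and kT in eV (illustrative)
f1 = fermi(mu, mu, kT)
print(f1)                # 0.5
print(g * f1)            # 500.0 expected occupied states
```

The electron count enters only indirectly, through the value of $\mu$; the product $g(E)f(E)$ then gives the expected occupation at each level.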
{ "domain": "physics.stackexchange", "id": 90578, "tags": "statistical-mechanics, solid-state-physics, density-of-states" }
Why is Ohm's law followed inside the house?
Question: In power transmission lines, since the power remains constant, $P=VI$ implies that the current decreases when the voltage increases, so Ohm's law is not followed. But inside our house, the current increases when the voltage increases, so Ohm's law is followed. Why? Answer: How easy it is to get confused about these things. Ohm's law applies to a resistive load, describing how a higher voltage leads to more current flowing (and therefore more power being dissipated: a fresh battery will make the light of your flashlight shine brighter). When we look at power transmission lines, the question becomes "How can I transmit as much power as possible with the least amount of loss?". For the power line, there are two different voltages of interest. One is the "transmitted" voltage $V_t$; the other is the "dropped" voltage, $V_d$. Now the goal is to get $V_t$ to the other side of the power line. But since the line has resistance $R$, the current $I$ flowing will lead to a voltage drop $$V_d = I\cdot R$$ At the other end of the transmission line there's a big load (a transformer, whatever), which is where the bulk of the energy will be used. When we think about the power losses in the cable, we care that "most of the power makes it through unharmed". When you are talking about electricity in your house, you consider the load as part of the equation; when you are thinking about power transmission, you are interested in minimizing just $V_d$. Does that help?
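A quick numerical sketch of the trade-off (illustrative numbers, not from the answer): delivering the same power at a higher transmission voltage means less current, and the $I^2R$ loss in the line drops with the square of the current.

```python
def line_loss(P_deliver, V_transmit, R_line):
    # Current required to deliver power P at transmission voltage V,
    # and the resulting I^2 * R power dissipated in the line itself.
    I = P_deliver / V_transmit
    return I ** 2 * R_line

P, R = 1e6, 10.0   # 1 MW delivered over a line with 10 ohm resistance
print(line_loss(P, 10e3, R))    # 100000.0 W lost at 10 kV (I = 100 A)
print(line_loss(P, 100e3, R))   # 1000.0 W lost at 100 kV (I = 10 A)
```

Ohm's law still holds for the line resistance in both cases; what changes is which voltage ($V_t$ or $V_d$) the calculation is about.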
{ "domain": "physics.stackexchange", "id": 32667, "tags": "everyday-life, electric-current, electrical-resistance, voltage, power" }
Password checker in Python
Question: Using Python 2.7.12 I wrote a simple little script, psk_validate.py, that prompts the user for a potential password and checks if it has upper and lower-case characters, numbers, special characters, and that it has a length of at least 8 characters. From what I understand one could use the regex library to write this much more efficiently, however, I have yet to learn about regex. The program seems to work just fine, and with a program this small I think that not using regex is also just fine. I'd like any and all feedback about this program. In particular, I'd like to know if a program written this simply could be used in real-world applications. I'd also like to know if there are any logical errors and/or bugs in the program. from sys import exit def check_upper(input): uppers = 0 upper_list = "A B C D E F G H I J K L M N O P Q R S T U V W X Y Z".split() for char in input: if char in upper_list: uppers += 1 if uppers > 0: return True else: return False def check_lower(input): lowers = 0 lower_list = "a b c d e f g h i j k l m n o p q r s t u v w x y z".split() for char in input: if char in lower_list: lowers += 1 if lowers > 0: return True else: return False def check_number(input): numbers = 0 number_list = "1 2 3 4 5 6 7 8 9 0".split() for char in input: if char in number_list: numbers += 1 if numbers > 0: return True else: return False def check_special(input): specials = 0 special_list = "! @ $ % ^ & * ( ) _ - + = { } [ ] | \ , . > < / ? 
~ ` \" ' : ;".split() for char in input: if char in special_list: specials += 1 if specials > 0: return True else: return False def check_len(input): if len(input) >= 8: return True else: return False def validate_password(input): check_dict = { 'upper': check_upper(input), 'lower': check_lower(input), 'number': check_number(input), 'special': check_special(input), 'len' : check_len(input) } if check_upper(input) & check_lower(input) & check_number(input) & check_special(input) & check_len(input): return True else: print "Invalid password! Review below and change your password accordingly!" print if check_dict['upper'] == False: print "Password needs at least one upper-case character." if check_dict['lower'] == False: print "Password needs at least one lower-case character." if check_dict['number'] == False: print "Password needs at least one number." if check_dict['special'] == False: print "Password needs at least one special character." if check_dict['len'] == False: print "Password needs to be at least 8 characters in length." print while True: password = raw_input("Enter desired password: ") print if validate_password(password): print "Password meets all requirements and may be used." print print "Exiting program..." print exit(0) Answer: Concept Obligatory XKCD comic, before I begin: Enforcing password strength by requiring human-unfriendly characters is no longer considered good practice. Nevertheless, I'll review the code as you have written it. "Obvious" simplifications Any code with the pattern if bool_expr: return True; else: return False should be written simply as return bool_expr. Strings are directly iterable; there is no need to convert them into a list first, using .split(). In other words, the code would work the same if you just wrote: upper_list = "ABCDEFGHIJKLMNOPQRSTUVWXYZ" Better yet, you could just use string.ascii_uppercase. The uppers += 1 counting loop could be written more expressively using the sum() built-in function. 
Actually, in this case, since you only care whether uppers > 0, you could just use the any() function. With those changes, your check_upper() function becomes a one-liner: def contains_upper(s): return any(c in ascii_uppercase for c in s) I've renamed check_upper() to contains_upper() to make it clear that the function returns True or False. Also, avoid using variable names, like input, that coincide with names of built-in functions: it could cause trouble if you ever want to use input(). Code duplication Most of your check_something() functions are identical. You should generalize, instead of duplicating the code. from string import ascii_uppercase, ascii_lowercase, digits def contains(required_chars, s): return any(c in required_chars for c in s) def contains_upper(s): return contains(ascii_uppercase, s) def contains_lower(s): return contains(ascii_lowercase, s) def contains_digit(s): return contains(digits, s) def contains_special(s): return contains(r"""!@$%^&*()_-+={}[]|\,.></?~`"':;""", s) def long_enough(s): return len(s) >= 8 Note that I've used a raw long string to help deal with the need for backslashes in the punctuation string. validate_password() The check_dict isn't doing anything for you. You'd be no worse off with five boolean variables. You are also calling each validation function twice. The & (binary bitwise AND) operator is not quite appropriate here. The and (boolean AND) operator would be more appropriate. Even though the results appear identical, the execution differs: the logical and allows short-circuit evaluation. 
Personally, I'd write it this way, gathering up a list of all of the failure messages: def validate_password(password): VALIDATIONS = ( (contains_upper, 'Password needs at least one upper-case character.'), (contains_lower, 'Password needs at least one lower-case character.'), (contains_digit, 'Password needs at least one number.'), (contains_special, 'Password needs at least one special character.'), (long_enough, 'Password needs to be at least 8 characters in length.'), ) failures = [ msg for validator, msg in VALIDATIONS if not validator(password) ] if not failures: return True else: print("Invalid password! Review below and change your password accordingly!\n") for msg in failures: print(msg) print('') return False If the function returns True in one place, then it would be good practice to return False instead of None in the other branch, for consistency. Free-floating code It is customary to put if __name__ == '__main__': around the statements in the module that are not inside a function. That way, you could incorporate the functions into another program by doing import psk_validate without actually running this program. Calling sys.exit(0) is rarely desirable or necessary, if you structure the code properly. Here, all you needed was a break. if __name__ == '__main__': while True: password = raw_input("Enter desired password: ") print() if validate_password(password): print("Password meets all requirements and may be used.\n") print("Exiting program...\n") break
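Putting the reviewer's suggestions together into one runnable sketch (written for Python 3 here; the names are adapted, so treat them as illustrative rather than the reviewer's exact code):

```python
from string import ascii_uppercase, ascii_lowercase, digits

# Raw string avoids escaping headaches in the punctuation set.
SPECIALS = r"""!@$%^&*()_-+={}[]|\,.></?~`"':;"""

RULES = (
    (lambda s: any(c in ascii_uppercase for c in s), "needs an upper-case character"),
    (lambda s: any(c in ascii_lowercase for c in s), "needs a lower-case character"),
    (lambda s: any(c in digits for c in s),          "needs a number"),
    (lambda s: any(c in SPECIALS for c in s),        "needs a special character"),
    (lambda s: len(s) >= 8,                          "needs at least 8 characters"),
)

def password_failures(password):
    # Empty list means the password passes every rule.
    return [msg for ok, msg in RULES if not ok(password)]

print(password_failures("Secret1!"))  # [] -> valid
print(password_failures("short"))     # four failure messages
```

Collecting the failure messages in one table keeps the rule set and its wording in a single place, which was the main point of the review.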
{ "domain": "codereview.stackexchange", "id": 25984, "tags": "python, beginner, python-2.x, validation" }
Hour averaging program
Question: I am looking for tips on improving my short program. I am using system("PAUSE") because this was for an assignment. Code: #include <iostream> #include <string> #include <iomanip> using namespace std; void Display(string* , int* , double*, int); void Percentage(int*, double*, int); void Intro(string*, int*, int&); void Highest(int&, string&, double*, string*, int); int main() { int students; string names[10]; int hours[10]; double percents[10]; int highest; string most; Intro(names, hours, students); Percentage(hours, percents, students); Display(names, hours, percents, students); Highest(highest, most, percents, names, students); system("PAUSE"); } void Intro(string* names, int* hours, int& students) { int team; cout << "A team is made up of atleast 2 students. How many students are on the team?: "; cin >> team; cout << "Enter student's first name and the hours worked on the final project: " << endl; for (int i = 0; i < team; i++) { cout << i + 1 << ": "; cin >> names[i] >> hours[i]; } students = team; } void Display(string* names, int* hours, double* percent, int students) { cout << setw(20) << "Students"; cout << setw(20) << "Hours Worked"; cout << setw(20) << "% of Total Hours"; cout << endl; cout << "------------------------------------------------------------" << endl; for (int i = 0; i < students; i++) { cout << setw(20) << names[i]; cout << setw(9) << hours[i]; cout << setw(16) << percent[i]; cout << endl; } } void Percentage(int* hours, double* percent, int students) { int total(0); for (int i = 0; i < students; i++) { total += hours[i]; } for (int i = 0; i < students; i++) { percent[i] = double(hours[i]) / total * 100; } } void Highest(int& highest, string& most, double* percent, string* names, int students) { highest = percent[0]; for (int i = 0; i < students; i++) { if (highest < percent[i]) { highest = percent[i]; most = names[i]; } } cout << most << " worked the most hours." 
<< endl; } Answer: I assume you'll have no more than 10 names/hours/percents, but you could use an std::vector in place of these arrays. This will allow you any number of inputs without fear of exceeding the allotted 10. If you don't need this for your assignment, then you can stick with what you have. If you're allowed to use more of the STL, I'd recommend std::accumulate (from <numeric>) for summing up the values in a container. For instance, your first loop in Percentage(): for (int i = 0; i < students; i++) { total += hours[i]; } can be done with this function as such (with the array): // sum only the first `students` entries; summing all 10 slots // would read elements that were never filled in // the 0 is the starting value of the accumulator int total = std::accumulate(hours, hours + students, 0); If you choose to use std::vector (or other STL container): // functions cbegin() and cend() return const iterators int total = std::accumulate(hours.cbegin(), hours.cend(), 0); I'd slightly tweak the percentage calculation for clarity: percent[i] = (double(hours[i]) / total) * 100; Prefer to cast the C++ way: // C way double(hours[i]) // C++ way static_cast<double>(hours[i]) It seems a little weird having Highest() display something. You could instead return most, thereby making the function of type std::string. That way, you can display this name from main() or wherever else. You don't need the first two arguments. They are passed in from main(), but are also not modified by any previous function. I'd remove those two and just use local variables. highest should be of type double since it's being assigned to a percent element. Otherwise, you may receive a "possible loss of data" warning. This is especially important for when the two values are being compared in the loop. You could then have this: // get the name std::string most = Highest(percents, names, students); // display it std::cout << most << " worked the most hours."
<< std::endl; std::string Highest(double* percent, string* names, int students) { double highest = percent[0]; std::string most; for (int i = 0; i < students; i++) { if (highest < percent[i]) { highest = percent[i]; most = names[i]; } } return most; }
{ "domain": "codereview.stackexchange", "id": 5115, "tags": "c++, homework" }
Determine whether a sorted array contain at least 4 distinct elements in O(log n) time
Question: On one of my previous courseworks, I was faced with the following problem, which I think is unrealistic when using a direct / straightforward approach that usually algorithms have by leveraging certain data structures like dictionaries, etc. Design an O(log n) algorithm whose input is a sorted list A. The algorithm should return true if A contains at least 4 distinct elements. Otherwise the algorithm should return false. My lecturer came up with the following solution: def ThreeDiff(A): if A[0]==A[len(A)-1]: return -1 minind=0 maxind=len(A)-1 while maxind-minind>1: midind=int((minind+maxind)/2) if A[midind]>A[minind] and A[midind]<A[maxind]: return midind if A[midind]==A[minind]: minind=midind else: maxind=midind return -1 def FourDiff(A): midind=ThreeDiff(A) if midind==-1: return False return ThreeDiff(A[0:midind+1])!= -1 or ThreeDiff(A[midind:len(A)]) != -1 Is there a cleaner or better way to solve this problem? Answer: Here is a cleaner and better way to solve the problem. # Return the smallest index where the element is bigger than `A[start_index]`. # If `len(A)` is returned, no element is bigger than `A[start_index]`. def next_bigger_element(start_index, A): lo, hi = start_index, len(A) while lo + 1 < hi: mid = (lo + hi) // 2 if A[mid] == A[start_index]: lo = mid else: hi = mid return hi def distinct_elements_at_least(k, A): if len(A) == 0: return k <= 0 index = 0 count = 1 # keep finding the next bigger element until `k` elements have # been found or we have reached the end of the array. while count < k and A[index] != A[-1]: index = next_bigger_element(index, A) count += 1 return count >= k To find whether A contains at least 4 distinct elements, just call distinct_elements_at_least(4, A). This program works correctly for any given number k. For example, it can be used to check whether A has 0 elements or whether A has 7 distinct elements. 
For any fixed k, it works in $O(\log n)$ time as at most k binary searches on an interval of size at most n are done. If you do not mind import bisect, you may prefer the following shorter code, since method next_bigger_element is no longer needed. from bisect import bisect_right def distinct_elements_at_least(k, A): if len(A) == 0: return k <= 0 index = 0 count = 0 while index < len(A) and count < k: count += 1 index = bisect_right(A, A[index], index + 1) return count >= k
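As a sanity check, the bisect-based version from the answer can be exercised against a brute-force count of distinct values (the function is restated here so the snippet runs on its own):

```python
from bisect import bisect_right

def distinct_elements_at_least(k, A):
    # Transcription of the answer's bisect-based version.
    if len(A) == 0:
        return k <= 0
    index = 0
    count = 0
    while index < len(A) and count < k:
        count += 1
        # Skip past the current run of equal elements in O(log n).
        index = bisect_right(A, A[index], index + 1)
    return count >= k

# Spot checks against a brute-force count of distinct values.
for A in ([], [1, 1, 1], [1, 2, 3, 4], [1, 1, 2, 2, 3, 3, 4], [1, 2, 3]):
    assert distinct_elements_at_least(4, A) == (len(set(A)) >= 4)
```

Each call to `bisect_right` with `lo = index + 1` jumps over one run of equal elements, so at most k binary searches are performed, matching the stated $O(\log n)$ bound for fixed k.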
{ "domain": "cs.stackexchange", "id": 18255, "tags": "algorithms, binary-search" }
Understanding negative g-force?
Question: I was reading this on reddit: If you were in an elevator accelerating upwards very quickly, you might experience a force of +2g. And if the elevator was accelerating downwards very quickly, you might actually feel an upwards force of -0.5g. That's what a negative g-force is, when it feels like you are falling up. So I understand that in the scenario where an elevator is accelerating upwards, the net force on the person is in the up (positive) direction, so the force applied by the person in reaction is in the down (negative) direction, which is positive g-force. But I don't at all understand how you will feel an upwards force of -0.5g when an elevator is accelerating downwards. Because when in an elevator accelerating downwards at, theoretically, $4.9\,m/s^2$, the normal force will still be upwards (as it's preventing free fall), but will be less than if there was no acceleration (less weight). But the reaction force therefore is downwards. So this is still just like the scenario when the elevator is accelerating upwards! Or am I misreading this? Does this person actually mean if the elevator is somehow accelerating downwards at 1.5 times the acceleration of gravity? In that case, I don't see how this would make any sense. Answer: When we say you are experiencing an acceleration this means something must be exerting a force on you, because force and acceleration are related by Newton's second law. In a stationary elevator it is the floor of the elevator that exerts an upwards force on you, and this force is just $mg$ giving you an acceleration of $g$. If the elevator is accelerating downwards, for example at $-0.5g$, the force exerted on you by the floor of the elevator is decreased and your total acceleration is decreased (in this case to $+0.5g$). If the downward acceleration of the elevator is $-1g$ then the force on you decreases to zero and you become weightless i.e. your acceleration is zero $g$. 
If the acceleration of the elevator becomes greater (more negative) than $-1g$ you will find yourself standing on the roof of the elevator so it is now the roof that exerts a net downwards force on you. This is how your acceleration can become negative. Instead of the elevator floor accelerating you upwards the elevator roof accelerates you downwards.
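The sign bookkeeping in this answer can be tabulated with a short calculation. A sketch (the rider mass is illustrative, and g is rounded to 10 m/s² for clean numbers): the support force is N = m(g + a), where a is the elevator's upward acceleration.

```python
def normal_force(m, a_up, g=10.0):
    """Force the cab exerts on a rider of mass m (kg) when the elevator
    accelerates upward at a_up (m/s^2); negative a_up means accelerating
    downward. Positive result: the floor pushes up on the rider.
    Negative: the floor can't pull, so the roof must push down."""
    return m * (g + a_up)

m = 70.0                                      # illustrative rider mass
assert normal_force(m, 0.0) == 700.0          # at rest: ordinary weight
assert normal_force(m, -0.5 * 10) == 350.0    # "-0.5 g": half weight, still upward
assert normal_force(m, -1.0 * 10) == 0.0      # free fall: weightless
assert normal_force(m, -1.5 * 10) == -350.0   # beyond -1 g: roof pushes down
```

The last line is exactly the answer's point: once the downward acceleration exceeds g, the required force changes sign, and only the roof can supply it.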
{ "domain": "physics.stackexchange", "id": 37476, "tags": "newtonian-mechanics, forces, reference-frames, acceleration" }
C++ dynamic array implementation
Question: As a C++ beginner coming from Java, I have become increasingly confused on the topic of memory management and how to avoid memory leaks. Is the code below risking a memory leak that I'm not currently aware of? Any help or constructive feedback would be greatly appreciated. #pragma once template <class T> class DynamicArray { private: T *m_arr; int m_length; //amount of elements currently being stored in the array int m_capacity; //actual size of the array public: DynamicArray(); ~DynamicArray(); T get(int index); //O(1) void add(T obj); //no need to push any objects forward, O(1) void insert(int index, T obj); //pushes forward all objects in front of the given index, then sets the obj at the given index, O(n) void set(int index, T obj); //sets the given index of m_arr as obj, O(1) void remove(int index); //removes the object at the given index and pushes all the array contents back, O(n) int size(); //O(1) void print(); }; #include <iostream> #include "Header.h" template<class T> DynamicArray<T>::DynamicArray() : m_arr(new T[1]), m_length(0), m_capacity(1) {} template<class T> DynamicArray<T>::~DynamicArray() { delete[] m_arr; } template<class T> T DynamicArray<T>::get(int index) { if (index < m_length && index >= 0) return m_arr[index]; else throw ("Index out of bounds!"); } template<class T> void DynamicArray<T>::set(int index, T obj) { if (index < m_length && index >= 0) { m_arr[index] = obj; } else throw ("Index out of bounds!"); } template<class T> void DynamicArray<T>::add(T obj) { if (m_length == m_capacity) { T *new_arr = new T[m_length * 2]; for (int i = 0; i < m_length; i++) { new_arr[i] = m_arr[i]; } delete[] m_arr; m_arr = new_arr; m_capacity = m_capacity * 2; } m_arr[m_length] = obj; m_length++; } template<class T> void DynamicArray<T>::insert(int index, T obj) { if (index < m_length && index >= 0) { int size; if (m_length == m_capacity) size = m_length * 2; else size = m_capacity; T *new_arr = new T[size]; for (int i = 0, j = 0; i < 
m_length; i++, j++) { if (i == index) { new_arr[j] = obj; j++; } new_arr[j] = m_arr[i]; } delete[] m_arr; m_arr = new_arr; m_capacity = m_capacity * 2; m_length++; } else throw ("Index out of bounds!"); } template<class T> void DynamicArray<T>::remove(int index) { if (index < m_length && index >= 0) { T *new_arr = new T[m_capacity]; for (int i = 0, j = 0; i < m_length; i++, j++) { if (i == index) i++; if(i < m_length) new_arr[j] = m_arr[i]; } delete[] m_arr; m_arr = new_arr; m_capacity = m_capacity * 2; m_length--; } else throw ("Index out of bounds!"); } template<class T> int DynamicArray<T>::size() { return m_length; } template<class T> void DynamicArray<T>::print() { std::cout << m_arr[0]; for (int i = 1; i < m_length; i++) { std::cout << ", " << m_arr[i]; } } Answer: Welcome to C++, and welcome to Code Review. C++ memory management is, as you probably have realized, tough and error-prone. There are many things that can easily go wrong. Assuming that no exception is thrown, I don't see obvious memory leaks in your code; however, there are still some issues worth discussing. You can take a look at my implementation of a non-resizable dynamic array or a stack-based full-fledged vector for some inspiration. Special member functions You have not defined copy constructors or move constructors, so the compiler will synthesize corresponding constructors that simply copy all the members — which is completely wrong, as now the two dynamic arrays will point to the same memory. Not only are the elements shared between the copies, causing modifications to one array to affect the other, but the two copies will attempt to free the same memory upon destruction, leading to a double-free error, which is way more serious than a memory leak. Initialization semantics It is generally expected that the constructor of the element type is called \$n\$ times if \$n\$ elements are pushed into the dynamic array. 
In your code, however, this is not the case: the number of constructors called is determined by the capacity of the dynamic array. Elements are first default initialized, and then copy-assigned to. The correct way to solve this problem requires allocating an uninitialized buffer, and using placement new (or equivalent features) to construct the elements, which is another can of worms. Exception safety Think of what happens when the construction of an element throws an exception — your code will halt halfway, and there will be a memory leak. Resolving this problem would require a manual try block, or standard library facilities like std::uninitialized_copy (which essentially do the same under the hood) if you switched to uninitialized buffers and manual construction. Move semantics All of the elements are copied every time, which is wasteful. Make good use of move semantics when appropriate. Miscellaneous Use std::size_t instead of int to store sizes and indexes.1 get, size, and print should be const. Moreover, get should return a const T&. In fact, get and set would idiomatically be replaced by operator[]. Don't throw a const char*. Use a dedicated exception class like std::out_of_range instead. Manual loops like for (int i = 0; i < m_length; i++) { new_arr[i] = m_arr[i]; } are better replaced with calls to std::copy (or std::move). Re-allocating every time insert is called doesn't seem like a good idea. A better trade-off might be to append an element and then std::rotate it to the correct position (assuming rotation doesn't throw). Also, print might take an std::ostream& (or perhaps std::basic_ostream<Char, Traits>&) argument for extra flexibility. 1 As Andreas H. pointed out in the comments, this recommendation is subject to debate, since the use of unsigned arithmetic has its pitfalls. An alternative is to use std::ptrdiff_t and std::ssize (C++20) instead. You can write your own version of ssize as shown on the cppreference page if C++20 is not accessible.
{ "domain": "codereview.stackexchange", "id": 40632, "tags": "c++, beginner, memory-management" }
Is the image of a total, non-decreasing function decidable?
Question: This is an exercise I've been struggling with for a while: Let $g : \mathbb{N} \to \mathbb{N}$ be a total, non-decreasing function, i.e. $\forall x > y.\ g(x) \geq g(y)$. Is the image $I_g$ of $g$ a recursive set? Intuitively, I know that the image $I_{g}$ is not recursive, as $g$ is not strictly monotonic. In fact, it's because $g$ is not strictly monotonic that $g$ could be a constant function so testing if $y \in I_{g}$ may not finish as it could be that $\forall x, g(x) = c$, $c$ being a constant s.t. $c < y$. Then, testing if there is an $x$ s.t. $g(x) = y$ by incrementing $x$ while $g(x) < y$ may go on forever. On the other hand, it could be that after a while (for a sufficiently large $x$) it happens that $g(x) > c$ and $g(x) = y$. If it were strictly monotonic, though, then it would be recursive as I would be able to test if $y = g(x)$ by incrementing $x$ until the equality is satisfied or $g(x) > y$ (then $g(x)$ wouldn't get stuck in the same value because $x_1 > x_2$ implies $g(x_1) > g(x_2)$). However, I haven't been able to prove this formally. Can this intuition become part of a formal proof? Or at least could you give me some help in proving it in some other way? A hint or some outline of a proof would be great. Answer: If $g$ is computable, its range is decidable. If $g$ is bounded, let $m$ be the maximum value in its range. Note that this number is not computable from a description of $g$ but it exists and we are only required to determine whether $g$'s range is computable, not whether the problem "Given $x$ and a description of $g$, determine whether $x$ is in the range of $g$" is decidable. Now, to decide if $x$ is in the range, reject if $m$ exists and $x>m$; otherwise, start computing $g(0), g(1), \dots$. If you find that $g(y)=x$ for some $y$, then accept; otherwise, by monotonicity you will find that $g(y)>x$ for some $y$ and reject. If $g$ is not computable, its range may or may not be decidable. 
For example, let $M_0, M_1, \dots$ be an enumeration of all Turing machines and let $$g(x) = |\{i\mid i\leq x \text{ and }M_i(0)\text{ halts}\}|.$$ The range is either all positive integers or all non-negative integers, depending on whether the machine with code zero halts. Whichever of those two sets really is the range of $g$, it is decidable (again, we're not being asked to decide which of these two cases is true; one of them must be). However, if we define $g(0)=0$ and $$g(i+1) = \begin{cases} g(i)+1 & \text{if }M_i(0)\text{ halts}\\ g(i)+2 &\text{otherwise,}\end{cases}$$ then the range of $g$ is undecidable: an algorithm that could find the "gaps" would let you solve the zero-input halting problem.
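The first case of the answer (computable $g$ implies decidable range) translates directly into code. In this sketch $g$ is assumed to be handed to us as a total, non-decreasing Python function, and the bound $m$, when it exists, is simply supplied to the decider — mirroring the answer's point that $m$ need not be computable from a description of $g$; it only needs to exist:

```python
def in_range(x, g, m=None):
    """Decide whether x is in the image of a total, non-decreasing g.
    m is the maximum of g if g is bounded, else None (g unbounded).
    Terminates: either some g(y) hits x, or monotonicity eventually
    yields g(y) > x (for bounded g, x > m was rejected up front)."""
    if m is not None and x > m:
        return False
    y = 0
    while True:
        v = g(y)
        if v == x:
            return True      # found a preimage
        if v > x:
            return False     # non-decreasing: no later y can hit x
        y += 1

# Unbounded example: g(n) = 2n, image = even numbers.
assert in_range(6, lambda n: 2 * n)
assert not in_range(7, lambda n: 2 * n)
# Bounded example: g(n) = min(n, 5), maximum m = 5.
assert in_range(5, lambda n: min(n, 5), m=5)
assert not in_range(9, lambda n: min(n, 5), m=5)
```

Note that without the externally supplied `m`, a bounded `g` would make the loop run forever on inputs above the plateau — exactly the failure mode the question worries about for constant functions.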
{ "domain": "cs.stackexchange", "id": 2252, "tags": "formal-languages, computability, undecidability" }
"Did not get reply from planning scene client" when using CollisionModelsInterface
Question: Hi, I have found that if a node creates a planning_environment::CollisionModelsInterface object and also calls the /environment_server/set_planning_scene_diff service, the service always reports that it is unable to get a reply from planning scene client with the name of the node. Specifically, if I run the pr2_tabletop_manipulation_launch pr2_tabletop_manipulation.launch file and this piece of test code: int main(int argc, char **argv) { ros::init(argc, argv, "planning_scene_test_node"); ros::NodeHandle n; ros::ServiceClient scene_client = n.serviceClient<arm_navigation_msgs::SetPlanningSceneDiff> ("/environment_server/set_planning_scene_diff"); ROS_INFO("Waiting for planning scene service"); scene_client.waitForExistence(); ROS_INFO("Planning scene service is now available"); planning_environment::CollisionModelsInterface cmi("robot_description"); arm_navigation_msgs::SetPlanningSceneDiff ssd; if (!scene_client.call(ssd)) { ROS_ERROR("Unable to set planning scene"); return 1; } ROS_INFO("Successfully set planning scene"); return 0; } the execution will pause for five seconds during the scene_client call and print the following to rosout: [ INFO] [1322764074.606153612, 4535.143000000]: Successfully connected to planning scene action server for /planning_scene_test_node [ INFO] [1322764080.585060399, 4540.165000000]: Did not get reply from planning scene client /planning_scene_test_node. Incrementing counter to 1 The collision models interface declared within this node also does not update to the new planning scene and has to be set manually. I am using 64-bit Ubuntu 10.04 and ROS electric. The version of the arm_navigation stack is 1.0.7-s1321929829~lucid. Thanks, Jenny Originally posted by jbarry on ROS Answers with karma: 280 on 2011-12-01 Post score: 0 Answer: You aren't calling ros::spin anywhere and the main thread of the program is blocking so the necessary callbacks aren't ever occurring. Adding an AsyncSpinner is the right way to go. 
This code works for me: #include <ros/ros.h> #include <arm_navigation_msgs/SetPlanningSceneDiff.h> #include <planning_environment/models/collision_models_interface.h> int main(int argc, char **argv) { ros::init(argc, argv, "planning_scene_test_node"); ros::NodeHandle n; ros::AsyncSpinner spinner(1); spinner.start(); ros::ServiceClient scene_client = n.serviceClient<arm_navigation_msgs::SetPlanningSceneDiff> ("/environment_server/set_planning_scene_diff"); ROS_INFO("Waiting for planning scene service"); scene_client.waitForExistence(); ROS_INFO("Planning scene service is now available"); planning_environment::CollisionModelsInterface cmi("robot_description"); arm_navigation_msgs::SetPlanningSceneDiff ssd; if (!scene_client.call(ssd)) { ROS_ERROR("Unable to set planning scene"); return 1; } ROS_INFO("Successfully set planning scene"); return 0; } Originally posted by egiljones with karma: 2031 on 2011-12-01 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by jbarry on 2011-12-01: Oh, of course. Thank you. It's working for me too now.
{ "domain": "robotics.stackexchange", "id": 7491, "tags": "ros, planning-environment" }
What is Pumping length for Union of Regular languages?
Question: This is an exam question. For E = {a, b}, let us consider the regular language $L = \{x \mid x = a^{2+3k} \text{ or } x = b^{10+12k},\ k \geq 0\}$ Which one of the following can be a pumping length (the constant guaranteed by the pumping lemma) for L? A)3 B)9 C)5 D)24 Ans is 24. As per me, pumping length is the length which gets actually repeated. So, basically it's the length of y as per the pumping lemma statement. Now, for the first regular language, $a^{2+3k}$ How I visualized? There would surely be 2 states which accept min 2 'a' (since k>=0). Now, then I have $a^{3k}$, which should definitely have a loop of length 3. So, I choose x=2 (first 2 states) y=3 (the repeating part) and z=0. So this first language must have at least 3 as the pumping length. And along similar lines, the other must have at least 12 as the pumping length. (X=10, y=12, z=0) And that's why as per me 24 is right. (Still sketchy at concluding) Please tell me if my understanding of pumping length is right? If no, please try suggesting some links/references. If yes, then have I applied the concept right? Answer: A pumping length is a positive integer $p$ such that any string of $L$ of length at least $p$ can be decomposed in the form $xyz$ with $|xy|\leq p$, $|y|>0$ and for all $n\geq0$ the words $xy^nz$ are in $L$. Assume that $p<12$. Then the word $b^{10+12}$ should be able to be decomposed as above. We cannot take $y$ with $1\leq|y|\leq p <12$. Otherwise $xz=xy^0z$ wouldn't be in $L$. We can take $p=12$ or larger. If $10+12k=|b^{10+12k}|\geq p\geq 12$ then $k\geq1$ and we can decompose $b^{10+12k}$ as $xyz$, with $x=\epsilon$, $y=b^{12}$ and $z=b^{10+12(k-1)}$. Then $|xy|=12\leq p$ and for all $n\geq0$ we have $xy^nz=b^0b^{12n}b^{10+12(k-1)}=b^{10+12(n+k-1)}$ in $L$, since $n+k-1\geq k-1\geq0$. If $2+3k=|a^{2+3k}|\geq p\geq 12$, then $k\geq4$. We can decompose $a^{2+3k}$ as $xyz$ with $x=\epsilon$, $y=a^3$ and $z=a^{2+3(k-1)}$. 
We have that $|xy|=3\leq p=12$, $|y|=3\geq1$ and for all $n\geq0$ we have $xy^nz=a^0a^{3n}a^{2+3(k-1)}=a^{2+3(n+k-1)}$ in $L$, since $n+k-1\geq k-1\geq0$. So, the minimum pumping length of $L$ seems to be $12$. Any larger value would also work in this case.
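The decompositions above can be checked mechanically. The snippet below is only a finite spot check — it pumps a handful of values of $n$ — so it is evidence for the argument, not a proof of it:

```python
def in_L(w):
    # L = { a^(2+3k) : k >= 0 } union { b^(10+12k) : k >= 0 }
    if w and set(w) == {"a"}:
        return len(w) >= 2 and (len(w) - 2) % 3 == 0
    if w and set(w) == {"b"}:
        return len(w) >= 10 and (len(w) - 10) % 12 == 0
    return False

def pumps(x, y, z, n_max=4):
    # y must be nonempty, and x y^n z must stay in L for the n we try
    return len(y) > 0 and all(in_L(x + y * n + z) for n in range(n_max + 1))

# p = 12 works: pump y = b^12 out of b^22, and y = a^3 out of a^14.
assert pumps("", "b" * 12, "b" * 10)
assert pumps("", "a" * 3, "a" * 11)

# p < 12 fails on b^22: no y with 1 <= |xy| <= 11 even survives n = 0.
assert not any(pumps("b" * i, "b" * j, "b" * (22 - i - j))
               for i in range(12) for j in range(1, 12 - i))
```

The last assertion is exactly the answer's $p<12$ argument: removing any block of 1 to 11 b's from $b^{22}$ leaves a word whose length is not of the form $10+12k$.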
{ "domain": "cs.stackexchange", "id": 16548, "tags": "regular-languages, regular-expressions, pumping-lemma" }
What does the notation $ [ i \neq k ] $ mean?
Question: I can't figure out what the notation $[x \neq k ]$ means. Here's a bit of context: The formula is: $Pr[A_i^k = 1] = \frac{[i\neq k]}{|k-i| + 1} = \begin{cases} \frac{1}{k-i+1} \text{ if } i \lt k \\ 0 \text { if } i = k \\ \frac{1}{i-k+1} \text{ if } i \gt k \end{cases}$ and is part of a chapter where the average expected time of operations of a randomised treap are proved. $A_i^k$ is an indicator variable defined as $[ x_i \text{ is a proper ancestor of }x_k ]$ where $x_n$ is the node with the $n$-th smallest search key. That probability comes up because $\text{depth}(x_k) = \sum_{i=1}^{n} A_i^k$ and $\mathbf{E}[\text{depth}(x_k)] = \sum_{i=1}^nPr[A_i^k = 1]$. I have no access to the pages that explain the notation since I'm studying from a pdf of a few pages taken from a book. Answer: It is used like a boolean where $[i \neq k] = 1$ if $i \neq k$ and $0$ otherwise. Notice that $[i \neq k]$ is equivalent to $i < k$ or $i > k$ for numbers, which is the right part of the equation you wrote. It is called Iverson bracket and in general $[statement] = 1$ if $statement$ is true and $0$ otherwise.
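In code, the Iverson bracket is just a boolean coerced to 0 or 1, which lets the three-way case split in the question's formula collapse to one line; a small sketch:

```python
def iverson(statement):
    # [statement] = 1 if the statement is true, 0 otherwise
    return 1 if statement else 0

def pr_ancestor(i, k):
    # Pr[A_i^k = 1] = [i != k] / (|k - i| + 1)
    return iverson(i != k) / (abs(k - i) + 1)

assert pr_ancestor(3, 3) == 0                   # i == k case
assert pr_ancestor(2, 5) == 1 / (5 - 2 + 1)     # i < k: 1/(k-i+1)
assert pr_ancestor(7, 5) == 1 / (7 - 5 + 1)     # i > k: 1/(i-k+1)
```

The three assertions reproduce the three branches of the displayed piecewise definition.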
{ "domain": "cs.stackexchange", "id": 16537, "tags": "notation" }
Add a new ROS publisher node to a QTcreator project which already contains a ROS subscriber node
Question: Hi guys, I've used QT to create a package with GUI and a ROS node (subscriber), and every thing works as I expected. Now I need to add a new publisher node to my QT. Please help me to do this if you have any idea. Thanks my whole code is here Main.cpp #include "ros/ros.h" #include "std_msgs/String.h" #include "mainwindow.h" #include #include <boost/thread.hpp> #include #include <string.h> #include #include #include #include #include #include "newwindow.h" #include #include #include "gui_sub/Position.h" using namespace std; MainWindow* mainWin; void infoCallback(const std_msgs::String::ConstPtr& msg) { // ROS_INFO("The New module is: [%s]", msg->data.c_str()); std::string str = msg->data; //The Name of Raspberry Pi std::string str_name = str.substr (14,19); // List of Servo motors std::string str_servo_1 = str.substr (55,23); std::string str_servo_2 = str.substr (82,23); std::string str_servo_3 = str.substr (109,23); std::string str_servo_4 = str.substr (136,23); // List of Brushless motors std::string str_brushless_1 = str.substr (163,25); std::string str_brushless_2 = str.substr (192,25); std::string str_brushless_3 = str.substr (221,25); std::string str_brushless_4 = str.substr (250,25); //Converting the STD Strings to QStrings QString qstr = QString::fromStdString(str); QString qstr_name = QString::fromStdString(str_name); QString qstr_servo_1 = QString::fromStdString(str_servo_1); QString qstr_servo_2 = QString::fromStdString(str_servo_2); QString qstr_servo_3 = QString::fromStdString(str_servo_3); QString qstr_servo_4 = QString::fromStdString(str_servo_4); QString qstr_brushless_1 = QString::fromStdString(str_brushless_1); QString qstr_brushless_2 = QString::fromStdString(str_brushless_2); QString qstr_brushless_3 = QString::fromStdString(str_brushless_3); QString qstr_brushless_4 = QString::fromStdString(str_brushless_4); // Using the new strings to create our lists via signal and slot methods mainWin->updatemethod_info(qstr); 
mainWin->updatemethod_name(qstr_name); mainWin->updatemethod_servo_1(qstr_servo_1); mainWin->updatemethod_servo_2(qstr_servo_2); mainWin->updatemethod_servo_3(qstr_servo_3); mainWin->updatemethod_servo_4(qstr_servo_4); mainWin->updatemethod_brush_1(qstr_brushless_1); mainWin->updatemethod_brush_2(qstr_brushless_2); mainWin->updatemethod_brush_3(qstr_brushless_3); mainWin->updatemethod_brush_4(qstr_brushless_4); } void callbackmethods() { ros::NodeHandle n; ros::Subscriber sub = n.subscribe("/info", 1000, infoCallback); ros::Rate rate(30); while (ros::ok()){ ros::spinOnce(); rate.sleep(); } } int main(int argc, char **argv) { ros::init(argc, argv, "gui_sub"); QApplication app(argc, argv); app.setOrganizationName("Trolltech"); app.setApplicationName("Application"); mainWin = new MainWindow(); mainWin->show(); //boost::thread thread_spin( boost::bind( ros::spin )); boost::thread thread_spin( boost::bind( callbackmethods )); ros::init(argc, argv, "servoinput"); ros::NodeHandle nt; ros::Publisher position_pub; position_pub=nt.advertise<gui_sub::Position>("position", 1000); std::string di; gui_sub::Position msg; std::cout << "I am here "; std::cin>>di; msg.position = atof(di.c_str()); position_pub.publish(msg); return app.exec(); } Originally posted by Saeid on ROS Answers with karma: 3 on 2015-06-29 Post score: 0 Answer: Create another publisher (class) inside the node like in the C++ tutorial for creating nodes? 
Should be something like this: ros::Publisher pub; //When in class this is the member in class definition pub = nh.advertise<pkg::TopicType>("awesome_topic", 1000 /*Queue*/); //This goes to Constructor //Maybe your main loop ros::Rate rate(10); while(ros::ok()) { pkg::TopicType msg; /*add data to your msg here*/ msg.field="some data"; pub.publish(msg); //Publish it ros::spinOnce(); rate.sleep(); //limit loop rate } See here: http://wiki.ros.org/ROS/Tutorials/WritingPublisherSubscriber%28c%2B%2B%29 Originally posted by cyborg-x1 with karma: 1376 on 2015-06-29 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by Saeid on 2015-06-30: Many thanks for your interest to answer my question. I tried to do what you mentioned by adding these lines to my Main.cpp: (NEXT COMMENT) Comment by Saeid on 2015-06-30: ros::Publisher position_pub; ros::NodeHandle nt; position_pub=nt.advertise<beginner_tutorials::Position>("position", 1); std::string di="1.5"; beginner_tutorials::Position msg; msg.position = atof(di.c_str()); position_pub.publish(msg); Comment by Saeid on 2015-06-30: But unfortunately it's not publishing. Comment by cyborg-x1 on 2015-06-30: Could you please paste the whole code? Maybe edit your answer and add it. Comment by cyborg-x1 on 2015-06-30: Probably in qt you will have your main "ros" loop in a second thread. Comment by cyborg-x1 on 2015-06-30: Ahh now I see your problem. position_pub=nt.advertise<gui_sub::Position>("position", 1000); must go into a function you call. I did not think of the main function of the Qt project, just put it somewhere shortly after where you set the values of the message. Comment by cyborg-x1 on 2015-06-30: normally when using Qt you put the while(ros::ok()) loop inside a function of a separate thread. You are executing ros::spin() as thread, it does take care of the incoming messages and calls the callback functions. publish(msg) should be able to be called anywhere. 
Comment by Saeid on 2015-06-30: Thank you man, It works now, thanks a lot for your help man Comment by cyborg-x1 on 2015-06-30: You're welcome ;-)
{ "domain": "robotics.stackexchange", "id": 22038, "tags": "ros, rosnode, qt5, publisher, gui" }
Why are there only four fundamental interactions of nature?
Question: Is there an answer to the question why there are only four fundamental interactions of nature? Answer: The answer "because we do not need more" by @rubenvb is fine. Studying physics, you must realize that physics is not answering fundamental "why" questions. Physics uses mathematical tools to model measurements and these models have to fit new data, i.e. be predictive. As long as the models are not falsified, they are considered valid and useful. Once falsified, modifications or even drastic new models are sought. A prime example is quantum mechanics, which emerged when classical mechanics was invalidated: black body radiation, the photoelectric effect and atomic spectra falsified efforts at classical modelling. Physics, using the appropriate models, shows "how" one goes from data to predictions for new experimental data. Looking for "why" in the models, one goes up or down the mathematics and arrives at the answer "because that is what has been measured".
{ "domain": "physics.stackexchange", "id": 49369, "tags": "forces, particle-physics, interactions" }
Direction of tension force and restoring force
Question: In the above image I have a block of mass m hanging from a massless pulley with the help of a massless string. The other end of the string is attached to a wall. $F_g$ is gravitational force acting downward and $F_s$ is the restoring force of spring acting leftward. T represents tension in the string. I had to find the reading of the spring balance and assumed that it would be the force on the spring balance (to measure extension of spring). However I can't seem to figure out the tension force between the spring balance and the wall. If I consider that the spring balance was a block, then tension force would be leftward and of magnitude T. But what about the tension force leftward? (on the wall) Similarly in the following image, I again need to find the weight measured by the spring balance: My intuitive understanding of the problem is since the bodies are not accelerating, the problem is similar to the previous one and therefore the weight measured by the spring is the weight of one block. But is this correct? If so, what would be the direction of restoring force of the spring? Also, if instead of the spring balance I imagined a rigid block at the same position, then the net force on the block would be zero, but when I use the spring balance, the measured weight is different, why is this so? Sorry if I sound very confused. Edit: I now know that the tension leftward = T (in the first diagram), but still don't understand which direction the restoring force of the spring would be (in the second diagram), since force is being applied in both directions. Answer: What you must remember is that the spring is not moving and so the force on one side of it is the same as the force on the other side of it. I have combined your two diagrams. So you either think of the wall as pulling on the string in your first diagram or the left side bit of string in your second diagram.
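A minimal force-balance sketch of the accepted point — the static, massless spring transmits the same tension to both of its ends, so the balance reads one weight in either setup (mass and g below are illustrative round numbers, not from the question):

```python
g = 10.0   # m/s^2, rounded for illustration
m = 2.0    # kg, illustrative block mass

# Force balance on the hanging block (no acceleration): T - m*g = 0.
T = m * g

# The massless spring is also in equilibrium, so the pull on one end
# equals the pull on the other; a spring balance simply displays that
# single tension, whether the far end is a wall or a second block.
reading = T
assert reading == m * g == 20.0
```

Replacing the wall by a second hanging block changes nothing in this balance: the second block supplies exactly the same pull T that the wall did.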
{ "domain": "physics.stackexchange", "id": 44238, "tags": "homework-and-exercises, newtonian-mechanics" }
Water flow in a sink
Question: When one turns on the tap in the kitchen, a circle is observable in the water flowing in the sink. The circle is the boundary between laminar and turbulent flow of the water (maybe this is the wrong terminology?). On the inside the height of the water is lower than on the outside. I'm sure that you have seen it many times, if not you can try it out yourself. I found out by experiment that the effect is most likely independent of the curvature of the sink, as it works in strongly curved bathroom sinks as well as in flat kitchen sinks. Why is there a circle, why not a gradual change and what is happening qualitatively and/or quantitatively in this system? As I have not found a satisfying answer to this problem yet, every time I turn on a tap, I am reminded of my own ignorance. It is really annoying. Answer: You are observing a hydraulic jump. The Wikipedia article is very good, so I won't try to out-do it. In brief summary, when the water starts running out from the place where it hits the sink, the same flux is spread out over a larger and larger circumference as you move out. This means the flow gets shallower and moves more slowly as you move further out. If a wave propagates in this flow, its wave speed depends on the height of the water. Its speed relative to the flow also depends on the flow speed. So the propagation of wave changes as we move further out as the flow underneath the wave changes. The wave itself changes the height of the water - the water is deep at the peak of the wave and shallow at a trough. Different parts of the wave moves at different speeds. This is clearly a non-linear phenomenon. Similar to waves crashing at the beach, eventually the propagating wave crashes over on itself. This causes a "hydraulic jump". The main effects are The speed of the flow goes down. The height of the water increases, converting some kinetic energy to potential. Some energy is lost to heat through turbulence. 
The physics of water in your sink is not very easy - since the flow is so shallow, surface tension has considerable importance. You can learn more details in the Wikipedia article linked above.
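For readers who want a number to attach to the jump: the standard shallow-water relation for the depth ratio across a hydraulic jump in a flat channel (the Bélanger equation, which the answer does not derive) depends only on the upstream Froude number $Fr = v/\sqrt{g h_1}$. A sketch:

```python
import math

def depth_ratio(froude):
    # Belanger equation for a hydraulic jump: h2/h1 as a function of
    # the upstream Froude number Fr = v / sqrt(g * h1).
    return (math.sqrt(1.0 + 8.0 * froude**2) - 1.0) / 2.0

assert depth_ratio(1.0) == 1.0   # Fr = 1 (critical flow): no jump
assert depth_ratio(3.0) > 1.0    # supercritical inflow: water deepens
```

A jump only forms for Fr > 1, i.e. where the thin fast sheet inside the circle is supercritical; the visible step in water height is this depth ratio made flesh.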
{ "domain": "physics.stackexchange", "id": 821, "tags": "fluid-dynamics, water, flow, turbulence, shock-waves" }
is y[n] = x[n] + n time invariant?
Question: My steps were as follows: $x_2[n] = x[n-k]$, $y[n-k] = x[n-k] + (n-k)$ and $y_2[n] = x_2[n] + n = x[n-k]+(n-k)$. Does this mean that it is indeed time invariant? Answer: No; the system given by $$ y[n] = x[n] + n $$ is time-varying, due to the added term $n$. Your mistake is in the line: $$ y_2[n] = x_2[n] + n = x[n-k]+(n-k) $$ which should instead be $$ y_2[n] = x_2[n] + n = x[n-k]+n $$ and therefore implies that $$ y_2[n] \neq y[n-k]. $$
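The inequality can also be checked numerically. The sketch below (plain Python, with an arbitrary finite-length signal; zero-padding the delayed samples is an assumption made for the finite-length case) feeds a delayed input through the system and compares the result with the delayed output:

```python
def system(x):
    # y[n] = x[n] + n, applied over the indices of a finite signal
    return [x_n + n for n, x_n in enumerate(x)]

def shift(x, k):
    # delay by k samples, zero-padding at the front
    return [0] * k + x[:len(x) - k]

x = [5, 3, 8, 1, 9, 2]
k = 2

y_shifted_input = system(shift(x, k))   # feed x[n-k] into the system
shifted_output = shift(system(x), k)    # compute y[n], then delay it

# For n >= k the two sequences differ by exactly k, because the system
# adds n to the shifted input instead of n-k: the system is time-varying.
print(y_shifted_input)
print(shifted_output)
```
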
{ "domain": "dsp.stackexchange", "id": 8139, "tags": "filters, linear-systems" }
an error while "installing the image_view"
Question: I use Ubuntu 12.04 & ROS (Groovy). I want to use wstool and catkin_make to get the image_view package, but there are some problems: cit@cit-ThinkStation-S20:~/catkin_ws$ catkin_make CMake Error at /opt/ros/hydro/share/catkin/cmake/catkinConfig.cmake:72 (find_package): Could not find a configuration file for package cv_bridge. Set cv_bridge_DIR to the directory containing a CMake configuration file for cv_bridge. The file will have one of the following names: cv_bridgeConfig.cmake cv_bridge-config.cmake Call Stack (most recent call first): image_pipeline/depth_image_proc/CMakeLists.txt:8 (find_package) CMake Error at /opt/ros/hydro/share/catkin/cmake/catkinConfig.cmake:72 (find_package): Could not find a configuration file for package image_geometry. Set image_geometry_DIR to the directory containing a CMake configuration file for image_geometry. The file will have one of the following names: image_geometryConfig.cmake image_geometry-config.cmake Call Stack (most recent call first): image_pipeline/depth_image_proc/CMakeLists.txt:8 (find_package) CMake Error at /opt/ros/hydro/share/catkin/cmake/catkinConfig.cmake:72 (find_package): Could not find a configuration file for package image_transport. Set image_transport_DIR to the directory containing a CMake configuration file for image_transport. The file will have one of the following names: image_transportConfig.cmake image_transport-config.cmake Call Stack (most recent call first): image_pipeline/depth_image_proc/CMakeLists.txt:8 (find_package) CMake Error at /opt/ros/hydro/share/catkin/cmake/catkinConfig.cmake:72 (find_package): Could not find a configuration file for package nodelet. Set nodelet_DIR to the directory containing a CMake configuration file for nodelet.
The file will have one of the following names: nodeletConfig.cmake nodelet-config.cmake Call Stack (most recent call first): image_pipeline/depth_image_proc/CMakeLists.txt:8 (find_package) CMake Error at /opt/ros/hydro/share/catkin/cmake/catkinConfig.cmake:72 (find_package): Could not find a configuration file for package stereo_msgs. Set stereo_msgs_DIR to the directory containing a CMake configuration file for stereo_msgs. The file will have one of the following names: stereo_msgsConfig.cmake stereo_msgs-config.cmake Call Stack (most recent call first): image_pipeline/depth_image_proc/CMakeLists.txt:8 (find_package) -- Eigen found (include: /usr/include/eigen3) -- +++ processing catkin metapackage: 'humanoid_msgs' -- ==> add_subdirectory(humanoid_msgs/humanoid_msgs) -- +++ processing catkin package: 'humanoid_nav_msgs' -- ==> add_subdirectory(humanoid_msgs/humanoid_nav_msgs) -- Generating .msg files for action humanoid_nav_msgs/ExecFootsteps /home/cit/catkin_ws/src/humanoid_msgs/humanoid_nav_msgs/action/ExecFootsteps.action CMake Error at /opt/ros/hydro/share/genmsg/cmake/genmsg-extras.cmake:252 (message): Could not find 'share/actionlib_msgs/cmake/actionlib_msgs-msg-paths.cmake' (searched in '/home/cit/catkin_ws/devel;/opt/ros/groovy'). Call Stack (most recent call first): humanoid_msgs/humanoid_nav_msgs/CMakeLists.txt:29 (generate_messages) -- Configuring incomplete, errors occurred! make: *** [cmake_check_build_system] error 1 Invoking "make cmake_check_build_system" failed Originally posted by doudoushuixiu on ROS Answers with karma: 31 on 2013-11-10 Post score: 0 Answer: It looks like you do not have all of image_view's dependencies, based on lines like this: ... Could not find a configuration file for package cv_bridge. ... Could not find a configuration file for package image_geometry. ... Could not find a configuration file for package image_transport. ... Could not find a configuration file for package nodelet. ... 
Could not find a configuration file for package stereo_msgs. ... You should either use rosinstall_generator and wstool to get them from source, which I describe in this answer: http://answers.ros.org/question/92806/setting-workspace-for-rosws/?answer=92906#post-id-92906 or use rosdep to get them installed from debians: rosdep install --from-paths ./src --ignore-src --rosdistro groovy -y Originally posted by William with karma: 17335 on 2013-11-10 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 16105, "tags": "ros, catkin-make, image, image-view" }
Help Required! Changing Subscribed Topics of a Package
Question: Greetings friends, I am currently working on a low-cost method of mapping and I have decided to use ultrasonic sensors to do it. Ultrasonic Range Package: https://github.com/sharvashish/Obstacle_detection__HCSR04 Mapping Package: http://wiki.ros.org/gmapping The problem is that the ultrasonic node publishes a "sensor" topic while the mapping subscribes to "scan". Is there a method to make the mapping package subscribe to "sensor" instead? Cheers, thanks for your time. Originally posted by AllanXyl on ROS Answers with karma: 21 on 2017-11-01 Post score: 0 Answer: This is a question of semantics. A single measurement is not directly equivalent to a "scan", at least not the type that gmapping is looking for (LaserScan). You'll have to find (or implement) a node that converts your ultrasonic range measurements into a (poor) LaserScan msg. Originally posted by gvdhoorn with karma: 86574 on 2017-11-01 This answer was ACCEPTED on the original site Post score: 1
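The conversion the answer describes can be prototyped outside ROS first. The sketch below is plain Python: the function name, the 30° cone and the HC-SR04 range limits are assumptions for illustration, not part of any ROS API. It smears one ultrasonic reading across several fake beams (imitating the sensor's wide cone) and returns the essential fields of a LaserScan:

```python
import math

def range_to_scan(range_m, fov_rad=math.radians(30.0), n_beams=15,
                  range_min=0.02, range_max=4.0):
    """Spread one ultrasonic range reading over n_beams fake laser beams.

    An ultrasonic sensor has a wide cone, so the single reading is copied
    into every beam inside the assumed field of view. The defaults (30 deg
    cone, 2 cm to 4 m, roughly an HC-SR04) are assumptions.
    Returns (angle_min, angle_increment, ranges), i.e. the essential
    fields of a sensor_msgs/LaserScan.
    """
    angle_min = -fov_rad / 2.0
    angle_increment = fov_rad / (n_beams - 1)
    if range_min <= range_m <= range_max:
        ranges = [range_m] * n_beams
    else:
        ranges = [float('inf')] * n_beams  # out of range -> no return
    return angle_min, angle_increment, ranges
```

In a real node these values, plus a header carrying the sensor's frame_id, would be copied into a sensor_msgs/LaserScan message and published on the topic gmapping expects (scan by default).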
{ "domain": "robotics.stackexchange", "id": 29251, "tags": "ros, 2d-mapping" }
Under what conditions does the body burn fats, proteins and carbohydrates?
Question: Searching the internet, I have found lots of conflicting information from various fitness sites concerning when the human body burns fats, proteins and carbohydrates. My basic understanding was that the body will first utilize glucose and glycogen stores, followed by fats and finally proteins once the fat and glycogen stores become depleted. Also I believe there is some overlap in this process. However many fitness websites claim that during high intensity physical activity, the body will prefer to burn protein over fat. Scientifically speaking, when does the body ordinarily burn the following food groups (proteins, fats and carbohydrates) and how does exercise influence this process? If you are able to provide graphs or data to back up your answer this will be extra helpful because I am bored of all the speculation I see on other sites. Answer: Introduction The system that regulates body energy store consumption is hugely complicated, but is mainly a cooperation between hormones released by the pancreas (insulin, which lowers blood sugar, and glucagon, which raises blood sugar) and the liver (as the body's main glycogen store and factory for various energy-related tasks). Fed State In the fed state (high blood glucose, e.g. after a meal), the body produces a large amount of insulin. When the ratio of insulin:glucagon is high (≈0.5): Dietary glucose is absorbed by the liver when glucose concentrations are over 8mM. This glucose is then either used in respiration by the liver; converted to glycogen to be stored in the liver; or converted into fatty acids which are then transported out to the periphery in very-low-density lipoproteins (VLDLs). Adipose tissue (fat tissue) takes up glucose from the blood. This glucose is then either used in respiration by the adipose tissue, or converted into fatty acids and then tri-acyl-glycerols (TAGs), i.e. fat. VLDLs from the liver can also be absorbed by adipose cells to be used for this.
Skeletal Muscle again directly absorbs glucose from the blood. During contraction glucose is used directly in respiration to fuel the activity; in preparation for future contraction it is converted into glycogen and stored in the cell. Amino acids are incorporated into cellular proteins, though if they are superfluous then they can be used in respiration. The brain directly absorbs glucose and uses it in respiration. Early Fasting State Where the insulin:glucagon ratio has dropped to around 0.15: In the liver Glycogenolysis is activated to release glucose from stored glycogen to compensate for falling plasma glucose levels (signalled by elevated glucagon). Triglycerides in the liver are preferentially broken down for hepatic respiration; fatty acids are transported in from the blood to meet the liver's own energy demands. The liver begins the process of gluconeogenesis, making glucose from non-carbohydrate precursors. Adipose tissue Rapidly starts lipolysis: stored fats are broken down into free fatty acids and glycerol. Some of the free fatty acids are used for the adipose tissue's own respiratory demands; the majority is released to the periphery to be utilised by other tissues. The glycerol released can't be used by most tissues, but is taken up by the liver where it can be used to make glucose by gluconeogenesis. Skeletal Muscle Uptake of glucose from the periphery is reduced to preserve glucose for cells that can't easily use other fuels, e.g. the brain or red blood cells. Free fatty acids become the main fuel for skeletal muscles to conserve the glucose for elsewhere. Unlike in the liver, glycogenolysis is not activated as there are no glucagon receptors in muscle tissue. This means glucose is only produced from glycogen when muscles are actively contracting, as before.
At this point proteins can start to be broken down, their skeletons used as an immediate source of energy and amino acids released to be taken up by the liver and used in gluconeogenesis. The brain directly absorbs glucose and uses it in respiration. Fatty acids cannot be used as these do not cross the blood-brain barrier. Late Fasting State If there has still been no glucose intake, the insulin:glucagon ratio falls even further, to around 0.05. Liver No further glucose has entered the liver and within 24h all hepatic glycogen stores have been depleted. Gluconeogenesis becomes the principal source of all plasma glucose; the liver creates new glucose from: amino acids (muscle & liver protein breakdown), glycerol (from adipose tissue), and lactate (from red blood cells and muscles). High amounts of fatty acids are converted into Acetyl-CoA to be used in respiration by the liver, but too much Acetyl-CoA is produced. The remainder is converted into ketone bodies that are useless to the liver, so they are jettisoned to be used by other tissues (esp. the brain). Adipose tissue As the fast continues into a few days, adipose tissue adapts to producing large amounts of free fatty acids.
This becomes sufficient for most body tissues These fatty acids are used preferentially by all tissues that can use them, conserving glucose for the brain Skeletal Muscle Fuelled almost entirely by fatty acids and ketone bodies from the liver and adipose tissue Protein breakdown continues to release carbon skeletons and amino acids for gluconeogenesis - but this is inhibited in the presence of high levels of ketone bodies to prevent unnecessary muscle wastage (still allowing the human to hunt) Amino acids are incorporated into cellular proteins though if they are superfluous then they can be used in respiration The brain Once ketone levels reach a critical level, they can cross the blood brain barrier to be used as a supplementary energy source However, it is not sufficient alone, so a net source of glucose is needed from somewhere in the body Progressing to starvation This process can continue for a number of weeks, with the body eventually exhausting fatty acids in adipose tissue and being unable to produce ketone bodies in the liver, losing the inhibition of proteolysis causing a last ditch response of muscle breakdown for energy. Death is usually from heart failure resulting from cardiac muscle breakdown. Summary This chart might be useful in summary, it shows where the glucose levels in the blood are coming from at various stages, the gluconeogenesis line indicating protein and fatty acid breakdown: Exercise Exercise has to be very intense to have any effect on this system in normal situations. Resting Muscle metabolises stored fatty acids for its own energy. Glycogen stores are replenished from glucose ready for more vigorous contraction. Brisk Walking fatty acid metabolism again provides almost all the energy Sprinting Glycogen stores are used, respiration is almost entirely anaerobic as blood vessels are constricted by the muscle activity and ventilation has not had time to increase. 
Lactic acid is produced, which can be used by the liver for gluconeogenesis Middle distance running: aerobic metabolism takes over as the body adjusts to the higher oxygen demand. Lactate is still the major end product One example of where this may be more noticeable is during a marathon. 0 Mins - Muscle is resting at the start line, as above. 10 mins - Muscle and liver glycogen released to power muscle contraction 2 Hours - A marathon requires roughly 700g of glycogen to complete, however the liver can only store around 500g. This is largely depleted after roughly 20 miles. Blood glucose levels fall rapidly and the body switches to fatty acid metabolism. This only provides around 60% of the power output and pace falls off (known as hitting the wall). At this point, virtually every reserve power source is being used simultaneously to keep the body going. Finish - Muscle and liver glycogen are entirely depleted. If glucose levels haven't been maintained by other means, hypoglycaemia results causing confusion, hallucinations and even coma and death.
{ "domain": "biology.stackexchange", "id": 1603, "tags": "fitness" }
What's the meaning of sensor_msgs/PointCloud2
Question: Recently, I'm using a VLP-16 to get point cloud data. But I am really confused about the structure of sensor_msgs/PointCloud2. I used the command $ rostopic echo /velodyne_points to check the data. I have learned that 32 numbers are used to describe a point, but how do I transform these numbers into the x, y, z coordinates of the points? Thanks. For example: 164, 129, 148, 191, 79, 29, 4, 191, 114, 146, 181, 188, 1, 0, 0, 0, 0, 0, 128, 63, 7, 0, 0, 0, 70, 88, 8, 112, 61, 127, 0, 0, 163, 248, 106, 191, 20, 9, 209, 190, 228, 208, 137, 62, 1, 0, 0, 0, 0, 0, 0, 64, 15, 0, 0, 0, 70, 88, 8, 112, 61, 127, 0, 0, 226, 211, 151, 191, 178, 208, 6, 191, 65, 147, 185, 60, 1, 0, 0, 0, 0, 0, 176, 66, 8, 0, 0, 0, 70, 88, 8, 112, 61, 127, 0, 0, Originally posted by iROS on ROS Answers with karma: 3 on 2016-08-04 Post score: 0 Original comments Comment by z1huo on 2018-08-18: I am facing the same problem, could you tell me how that structure works? Thanks! Answer: The PointCloud2 message is defined here: http://docs.ros.org/api/sensor_msgs/html/msg/PointCloud2.html For interacting with PointClouds it's common to use the PCL datatype: http://wiki.ros.org/pcl_ros You can subscribe and publish directly as a PCL datatype using pcl_ros/point_cloud.h There are also a couple of iterators defined in sensor_msgs to allow access to the PointCloud2 message without the added dependency on pcl: http://docs.ros.org/api/sensor_msgs/html/annotated.html Originally posted by tfoote with karma: 58457 on 2016-08-08 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by iROS on 2016-08-10: Thanks a lot. And I have solved this problem now. Comment by tfoote on 2016-08-11: Please click the checkmark buttom at left to indicate your question is solved. Comment by MohamedEhab on 2019-04-16: Can you please tell me what solution did you use??
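As a concrete sketch of what the linked message definition says: each point occupies point_step bytes, and the message's fields array gives every field's datatype and byte offset. Assuming the layout typically produced by the velodyne driver (x, y, z as little-endian FLOAT32 at offsets 0, 4 and 8, intensity at 16, ring as a UINT16 at 20; always confirm against the actual fields array of your message), the question's own example bytes decode like this:

```python
import struct

# First 32 bytes from the question's example dump, i.e. one point.
point = bytes([164, 129, 148, 191,  79,  29,   4, 191,
               114, 146, 181, 188,   1,   0,   0,   0,
                 0,   0, 128,  63,   7,   0,   0,   0,
                70,  88,   8, 112,  61, 127,   0,   0])

# '<fff' = three little-endian 32-bit floats starting at offset 0.
x, y, z = struct.unpack_from('<fff', point, 0)
print(x, y, z)  # roughly (-1.16, -0.52, -0.022) metres

# Offsets 16 and 20 are assumed from the velodyne driver's point layout.
intensity, = struct.unpack_from('<f', point, 16)
ring, = struct.unpack_from('<H', point, 20)
```

In ROS 1 Python the same decoding, field names and offsets included, is done generically by sensor_msgs.point_cloud2.read_points, which is usually preferable to hand-parsing.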
{ "domain": "robotics.stackexchange", "id": 25452, "tags": "pcl, sensor-msgs, pointcloud" }
How can a meson beam be created?
Question: I was reading this thread where the answer states that to use deep inelastic scattering on mesons, we have to "generate a meson beam (which is a bit of a trick in and of itself) and direct it into either a fixed baryon target or another beam (electrons, say) in collider mode" But how can a meson beam even be created? I tried to search for an answer but it's pretty difficult to find. Could someone explain? Thanks. Answer: I didn't find a video by Don Lincoln, but here is one by Kristy Duffy, who is also from FermiLab. How do particle accelerators make neutrinos? The process starts with Hydrogen. Atoms are ionized so they can be manipulated by electromagnetic fields. The electrons are stripped off, leaving a beam of protons. The proton beam is smashed into a target, producing a spray of particles like pions and kaons. These are gathered into a horn which focuses the positively charged particles and defocuses the negative ones. Or vice versa, depending on the experiment. This answers your question, but at Fermilab the purpose is to create a neutrino beam (or anti-neutrino beam, if negatively charged particles were kept.) To do this, they simply wait until the particles decay. Then the beam hits a concrete or steel target. This stops all the particles except the neutrinos. The neutrinos are on their own at this point. They are neutral so they can't be steered. They don't interact with matter, so they exit the accelerator and fly through whatever is in their way to the detector. For DUNE, this is through the earth to a detector a mile underground several states away. If you want more information about accelerators, Don Lincoln does make several videos. Accelerator Science: Circular vs. Linear Accelerator Science: Proton vs. Electron Accelerator Science: Why RF? And of course, now I find it How do you make a neutrino beam? The choice of which to watch is easy. If you like banana jokes, watch Kristy Duffy.
If you like Dad jokes, watch Don Lincoln. Here is where to find Fermilab physics playlists. Particle physics at home
{ "domain": "physics.stackexchange", "id": 94009, "tags": "particle-physics, experimental-physics, standard-model, quarks, mesons" }
Speed Up Access to Fractal-Like Array
Question: I'm trying to speed up the following function in C++:

void num_to_xy(int num, int *x, int *y)
{
    *x = (num & 0x03) | ((num & 0x10) >> 2) | ((num & 0x40) >> 3) |
         ((num & 0x100) >> 4) | ((num & 0x400) >> 5) |
         ((num & 0x1000) >> 6) | ((num & 0x4000) >> 7);
    *y = ((num & 0x0c) >> 2) | ((num & 0x20) >> 3) | ((num & 0x80) >> 4) |
         ((num & 0x0200) >> 5) | ((num & 0x0800) >> 6) |
         ((num & 0x2000) >> 7) | ((num & 0x8000) >> 8);
}

which basically converts num to an x,y coordinate. This is equivalent to getting the x,y position of num in the following array $\left| \begin{array}{ccc} 0 &1 &4 &5 &16 &17 &20 &21 &...\\ 2 &3 &6 &7 &18 &19 &22 &23 &...\\ 8 &9 &12 &13 &24 &25 &28 &29 &...\\ 10 &11 &14 &15 &26 &27 &30 &31 &...\\ 32 &33 &36 &37 &48 &49 &52 &53 &...\\ 34 &35 &38 &39 &50 &51 &54 &55 &...\\ 40 &41 &44 &45 &56 &57 &60 &61 &...\\ 42 &43 &46 &47 &58 &59 &62 &63 &...\\ \vdots &\vdots &\vdots &\vdots &\vdots &\vdots &\vdots &\vdots &\ddots & \end{array} \right|$ The array above follows a pattern where for all $k,n$, all numbers between $k4^n$ and $(k+1)4^n-1$ form a perfect square. Can the code above be sped up without generating a ginormous list and storing all the values? Or is there a completely different, faster method? Answer: I think that the fastest approach with modern CPUs is

i = num & 255
j = num >> 8
*x = x1[i] + x2[j]
*y = y1[i] + y2[j]

That is 4 arithmetic ops plus 4 loads.
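A hypothetical sketch of the table-driven decoder the answer proposes (shown in Python for brevity; in C++ the four tables would be static const arrays filled once at startup). It keeps exactly the bit assignments of the question's masks and checks that summing the low-byte and high-byte contributions reproduces the direct bit extraction:

```python
def decode(num):
    # Direct bit extraction, mirroring the masks in the question's function.
    x = ((num & 0x03)
         | ((num & 0x10) >> 2) | ((num & 0x40) >> 3)
         | ((num & 0x100) >> 4) | ((num & 0x400) >> 5)
         | ((num & 0x1000) >> 6) | ((num & 0x4000) >> 7))
    y = (((num & 0x0C) >> 2)
         | ((num & 0x20) >> 3) | ((num & 0x80) >> 4)
         | ((num & 0x200) >> 5) | ((num & 0x800) >> 6)
         | ((num & 0x2000) >> 7) | ((num & 0x8000) >> 8))
    return x, y

# 256-entry tables: the low byte's contribution and the high byte's
# contribution to each coordinate. The two touch disjoint output bits,
# so a plain add recombines them correctly.
x1 = [decode(i)[0] for i in range(256)]
y1 = [decode(i)[1] for i in range(256)]
x2 = [decode(j << 8)[0] for j in range(256)]
y2 = [decode(j << 8)[1] for j in range(256)]

def num_to_xy_fast(num):
    # Per lookup: 2 arithmetic ops to split, 4 loads, 2 adds.
    i, j = num & 0xFF, num >> 8
    return x1[i] + x2[j], y1[i] + y2[j]
```

The tables total 1 KB, so they stay resident in L1 cache, which is what makes this fast on modern CPUs.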
{ "domain": "cs.stackexchange", "id": 14147, "tags": "algorithms" }
Is it possible to determine maximum frequency of a disc using a relative circumference?
Question: On Wikipedia there is an article that shows how to calculate the relativistic effects on a rigid spinning disc; I am referencing the formula under the "Ehrenfest's argument" section in the link. The formula, as I interpret it, shows how the circumference of the disc changes as the speed of rotation changes: the higher the speed, the bigger the circumference. If you derive a new circumference and you have a velocity, you can determine frequencies: $$\text{Frequency} = \text{Velocity} / \text{Relative Circumference}$$ With significant relativistic effects needing a good fraction of the speed of light, you can spin a disc up really fast while increasing its frequency. At one point, though, the circumference "grows" faster than the velocity of its spin and the frequency slows down. Playing around with the formulas, the maximum frequency appears at a velocity equal to $c/\sqrt{2}$. Let me know if this formula is doing what I am interpreting it to be doing. Can it be used like this? You can see the plot with frequency on the y axis and velocity on the x axis in the link below. Wolfram Answer: I believe that the frequency for the rotating disc of radius $r$ can be anything below $1/r$ (or $c/r$ in SI units). I think the picture you consider is flawed because you do not take into account the measurement of the speed by the observer on the disc. He would have to time the passing by two measuring rods at the previously known laboratory distance ($v = L / \Delta t$), but the measured time will appear dilated with respect to the laboratory frame! So, the Lorentz factors will cancel out in your formula. While researching Ehrenfest's paradox, I've found an interesting analogy between it and Bell's spaceship paradox on the internet: http://www.physicspages.com/2015/02/26/lorentz-contraction-in-a-rotating-disk-ehrenfests-and-bells-spaceship-paradoxes/. I believe this might provide you some ideas. Now please, bear with me for a minute, I have a story to tell.
This is how I reached those conclusions; I find this quite entertaining. From the point of view of an observer riding the disc, the disc's geometry is unchanged, although a very strong force points towards the rim of the disc. Basically, he feels a gravitational potential $U(r) = -\frac12 r^2 \omega^2$ where $\omega$ is the laboratory-frame rotation frequency (see https://physics.stackexchange.com/a/53771/119172 and https://ned.ipac.caltech.edu/level5/March01/Carroll3/Carroll4.html for details): $$ ds^2 = -(1 - r^2 \omega^2) dt^2 + dr^2 + r^2 d\phi^2 + dz^2 $$ A cool feature of this situation is that here, like in the case of the Schwarzschild black hole, we have an apparent singularity at the distance $r = 1 / \omega$. And if you compute the scalar curvature in this metric, you will get $$ R = -\frac{2 \omega ^2 \left(r^2 \omega^2 -2\right)}{\left(r^2 \omega ^2-1\right)^2} $$ We see that the curvature is singular at $r=1 / \omega$, providing us with a physical singularity, unlike the one in Schwarzschild's case. What does it mean? As the observer does not know a thing about his own rotation, he can interpret the pull towards the rim as if something massive were surrounding the disc (although a symmetric mass distribution should exert zero net force). And at some distance this force goes through the roof, as if there were a gorgeous cylindrical black hole! On the other hand, let's go back to the laboratory frame. Say we have this disc of radius $r$ and we want to spin it up to frequency $\omega = 1 / r$. What linear speed will correspond to that? Well, $v = \omega r = 1$. So the rim of the disc would have to move at the speed of light!
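For reference, the stationary point the question reports does follow from its own formula when taken at face value (the answer explains why this mixes frames): writing the "relative circumference" as $\gamma \cdot 2\pi R$ and maximizing over $v$,

```latex
% Frequency = velocity / (relative circumference), with C' = gamma * 2*pi*R
f(v) = \frac{v}{C'} = \frac{v}{2\pi R}\sqrt{1-\frac{v^2}{c^2}},
\qquad
\frac{df}{dv} = \frac{1 - 2v^2/c^2}{2\pi R\sqrt{1-v^2/c^2}} = 0
\;\Longrightarrow\;
v = \frac{c}{\sqrt{2}}
```

so the maximum the questioner found on the plot is a genuine feature of that formula, even though, per the answer, the formula itself conflates lab-frame and disc-frame quantities.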
{ "domain": "physics.stackexchange", "id": 31376, "tags": "special-relativity, frequency" }
Casimir operator and particle worldline
Question: I'm studying Killing vectors in 2d Minkowski space-time, with signature $(+,-)$, the usual metric given by $ds^2=dt^2-dx^2$. I have found these Killing vectors: $\xi^{(1)}=(1,0)=\partial_t\equiv p_0$ $\xi^{(2)}=(0,1)=\partial_x\equiv p_1$ $\xi^{(3)}=(x,t)=x\partial_t+t\partial_x\equiv N$ and this Casimir operator: $C\equiv g^{\mu\nu}p_{\mu}p_{\nu}=p_0^2-p_1^2$. Then I have the following statement: "Let's take a free particle, with world-line $(x(s),t(s))$, where $s$ is a generic parameter. We can use our Casimir operator as a Hamiltonian operator $H=C$, so: ${\partial x \over\partial s }=[C,x]=-2p_1$ ${\partial t \over\partial s }=2p_0$ ${\partial p_1 \over\partial s }=0={\partial p_0 \over\partial s }$." My questions are: (1) How do I prove that ${\partial x \over\partial s }=[C,x]$? (2) How does $[C,x]$ work? Do I have to look at $x$ as an operator or as a function of $s$? I don't know how to prove that $[C,x]=-2p_1$. Thank you in advance. EDIT: (1) I found out that you can write Hamilton's equations of motion in terms of the Poisson bracket (which is, for our purposes, the Lie bracket): $\dot q_i = {\partial H \over \partial p_i}= [q_i,H]$ where, in our case, $q_0=t$ and $q_1=x$. Question (1.1): Can I read $\dot q_i$ as the partial derivative with respect to $s$? Therefore: $\dot q_1 =\dot x = {\partial H \over \partial p_1}= [q_1,H]$ But: ${\partial H \over \partial p_1}={\partial C \over \partial p_1}=-2p_1$ $[q_1,H]=[x,C]=-[C,x]=2p_1$ And I have the same problem with $t$ (so this is not signature related). Question (1.2): How do I resolve this last discrepancy? Answer: The answer to question (2) is really trivial and I don't know how I did not realize it before. This should be the answer: $[p_0,x]f(t,x)=(\partial_tx-x\partial_t)f=(\partial_tx)f+x\partial_tf-x\partial_tf={\partial x \over \partial t}f=0 \Rightarrow [p_0,x]=0$ $[p_1,x]f(t,x)= f \Rightarrow [p_1,x]=1$ And vice versa for $[p_0,t]$, $[p_1,t]$.
Therefore: $[C,x]=[p_0^2,x]-[p_1^2,x]=p_0[p_0,x]+[p_0,x]p_0-p_1[p_1,x]-[p_1,x]p_1=-2p_1$ $[C,t]=2p_0$. I still can't figure out how to answer question (1). Any tip would be appreciated. EDIT: I tried to answer it myself and this is what came out. (1.2) I have found another definition of the Poisson bracket: $\{f,g\} := \sum_{i=1}^{n} ({\partial f \over \partial p_i}{\partial g \over \partial q_i}-{\partial f \over \partial q_i}{\partial g \over \partial p_i})$ Then Hamilton's equations can be written in this short form: $\dot x_i= \{H,x_i\}$ where $i=1,...,2n$ and $x_i$ denotes any of the coordinate functions $(q_i,p_i)$. With this definition I have the result I was looking for, because you can prove that the Poisson bracket and the Lie bracket are related by: $X_{\{f,g\}}(h):=\{\{f,g\},h\} =[X_f,X_g](h)$ which basically means that we can use Lie brackets instead of Poisson brackets. The previous definition I was using was: $\{f,g\} := \sum_{i=1}^{n} ({\partial f \over \partial q_i}{\partial g \over \partial p_i}-{\partial f \over \partial p_i}{\partial g \over \partial q_i})$ and in this case we have this relation: $[X_f,X_g]=-X_{\{f,g\}}$. If you want to use the second definition and you want to use the Lie bracket instead of the Poisson bracket, you have to account for the minus sign. (1.1) Probably the original statement was wrong, because it continues saying that, once you have found ${\partial x \over \partial s}= -2p_1$ and ${\partial t \over \partial s}= 2p_0$, you can find the velocity with ${\partial x \over \partial t}=- {p_1 \over p_0}$, but of course the definition of velocity involves the total derivative. Also, the use of the total derivative would be in accordance with what you get from the geodesic equation.
{ "domain": "physics.stackexchange", "id": 61180, "tags": "homework-and-exercises, general-relativity, special-relativity, lie-algebra" }
Difference between 2f+1, 2f and 3f+1
Question: I am currently reading the Practical Byzantine Fault Tolerance paper. I am unable to completely understand the difference between 2f and 2f+1. If 2f+1 means that a majority of the nodes are non-faulty then what does 2f represent? Answer: $2f$ and $2f + 1$ are just numbers. A set of $2f + 1$ nodes has a majority of non-faulty nodes, as you say; a set of $2f$ nodes can be deadlocked between the non-faulty and faulty nodes. A total of $3f + 1$ nodes are needed in the system to be able to construct a set of $2f + 1$ nodes when up to $f$ nodes are not responding (as outlined in section 3).
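The arithmetic behind these numbers is easy to check mechanically. The sketch below (a hypothetical helper, not from the paper) encodes why $n = 3f + 1$ is the required total: a quorum of $2f + 1$ is still reachable when $f$ replicas stay silent, and any two quorums overlap in at least $f + 1$ replicas, hence in at least one non-faulty replica:

```python
def pbft_numbers(f):
    """Quorum arithmetic for PBFT tolerating at most f Byzantine replicas."""
    n = 3 * f + 1          # total replicas in the system
    quorum = 2 * f + 1     # matching messages a replica waits for
    # n - f replicas are guaranteed to respond, so a quorum is reachable:
    assert n - f >= quorum
    # Any two quorums overlap in at least 2*quorum - n = f + 1 replicas,
    # so every pair of quorums shares at least one non-faulty replica:
    overlap = 2 * quorum - n
    assert overlap >= f + 1
    return n, quorum, overlap

for f in range(1, 5):
    print(f, pbft_numbers(f))
```

With $2f$ instead of $2f + 1$, the first assertion shows the symmetry the answer mentions: the responding set could split evenly between faulty and non-faulty replicas, and no decision could be forced.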
{ "domain": "cs.stackexchange", "id": 6044, "tags": "distributed-systems, computer-networks" }
Experimental evolution of condensates
Question: I was talking to a colleague professor the other day and he said something that got me curious. The way I remember it, he said basically that in experiments a Bose-Einstein condensate is usually trapped by some external potential, which I imagine to be an electric or magnetic field. Then the temperature is considerably lowered, the trap is turned off, and one studies the evolution of this state. Can someone further elaborate on this kind of experiment? For instance, it was not clear to me what exactly one tries to measure with it; as far as I understood, the temperature is cooled down so as to form a condensate in the first place, and the release of the external field is intended to study its evolution. What does one want to understand from the evolution of such a condensate? Moreover, what material is used in such experiments? (I am thinking about liquid helium?) Answer: Bose-Einstein condensates (BECs) of dilute gases are generated using magnetic traps or optical traps. Once the trap is turned off the BEC expands. This is usually where the fun begins and one can ask very fundamental questions. Like: BECs are said to be described by a single, macroscopic wave function. If that is true, do BECs interfere? Turns out: yes! They do interfere: https://www.rle.mit.edu/cua_pub/ketterle_group/Projects_2004/Pubs_04/shin04_interferometry.pdf Can one stop the expansion somehow? Yes... some expansion remains, but calculating the temperature just from the expansion, one finds a corresponding temperature of 38 PICO kelvin! https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.127.100401 Just two examples for today. I need to do some work.
{ "domain": "physics.stackexchange", "id": 98660, "tags": "statistical-mechanics, condensed-matter, experimental-physics, bose-einstein-condensate" }
Unitary representation of physics groups in Wu-Ki Tung book
Question: I am reading Wu-Ki Tung's book "Group Theory in Physics" and I'm trying to put the various pieces (chapters) together to understand how he gets the unitary irreducible representations of groups used in physics. On pages 157-158 he shows how to get the unitary irreducible representations of the group $E_2$ (the Euclidean group in two dimensions) and he states "For unitary representations the generators $J,P_1,P_2$ are mapped into Hermitian operators". He doesn't provide any details about that mapping or how to build it, and in the rest he assumes that $P_1,P_2$ are Hermitian. The usual representation of the $P_1$ generator is evidently not Hermitian: $$P_1=\left( \begin{matrix} 0 & 0 & i \\ 0 & 0 & 0 \\ 0 & 0 & 0 \\ \end{matrix}\right) $$ If the generators were Hermitian then the representation of the group would automatically be unitary, because $(e^{iaP_1})^{\dagger}=e^{-iaP_1}=(e^{iaP_1})^{-1}$ On pages 191-198 he shows how to get the unitary irreducible representations of the Poincaré group, and on page 194 he states: "The representation is unitary because the generator are realized as Hermitian operators". Again, I can't see how he gets such Hermitian generators. Isn't this reasoning circular? My guess is that he assumes the unitary representation exists and shows only how to get the matrix elements of the generators. Answer: Indeed, the author illustrates by explicit construction that, for noncompact groups (in contrast to what you have learned for SO(3)!), their unitary representations (Hermitian irreps for the generators) are infinite-dimensional. (And vice versa, finite-dimensional reps are not Hermitian, as you correctly observe for what you call the "usual" rep for $E_2$.) He constructs these infinite-dimensional (endless!) matrices in section 9.2 for $E_2$ and 10.3.3 for Poincaré.
To do that, he assumes the generators are realized by hermitian operators/matrices (the representation map) and pursues the logical consequences of what these operators must be like, using the Lie algebra satisfied by these operators. He finds answers. Nothing circular about that! As you correctly guess, finding a good answer based on his existence assumption, ipso facto justifies said assumption. What more are you after?
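A quick way to make the non-unitarity of the finite-dimensional rep explicit, using only the $P_1$ matrix quoted in the question: that $P_1$ is nilpotent ($P_1^2=0$), so the exponential series terminates and can be checked by hand, $$e^{iaP_1}=1+iaP_1=\left( \begin{matrix} 1 & 0 & -a \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ \end{matrix}\right), \qquad (e^{iaP_1})^{\dagger}=\left( \begin{matrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ -a & 0 & 1 \\ \end{matrix}\right) \neq \left( \begin{matrix} 1 & 0 & a \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ \end{matrix}\right)=(e^{iaP_1})^{-1}$$ for any real $a\neq 0$. So this finite-dimensional rep is indeed non-unitary; Hermitian generators (and hence unitarity) only appear in the infinite-dimensional construction.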
{ "domain": "physics.stackexchange", "id": 99752, "tags": "quantum-field-theory, special-relativity, representation-theory, poincare-symmetry" }
ROS-i ABB Unable to access some members of industrial_msgs/CmdJointTrajectory
Question: Dear all, I am basically trying to subscribe to the rostopic which broadcasts the sensor_msgs/JointState and then call the /joint_path_command service to move the ABB robot in a RobotStudio simulation. Hence, whenever there are new joint positions in that topic, I will call /joint_path_command to move the robot accordingly. I have managed to receive the joint states from the rostopic. However, I have encountered a problem when trying to create the industrial_msgs/CmdJointTrajectory message which is required for the /joint_path_command service. void chatterCallback(const sensor_msgs::JointState::ConstPtr &msg) { for(int i = 0; i<msg->position.size(); i++) { posi.push_back(msg->position.at(i)); } for(int i = 0; i<msg->velocity.size(); i++) { velo.push_back(msg->velocity.at(i)); } for(int i = 0; i<6; i++) { acc.push_back(0.0); } for(int i = 0; i<msg->effort.size(); i++) { eff.push_back(msg->effort.at(i)); } ros::NodeHandle n; ros::ServiceClient client = n.serviceClient<industrial_msgs::CmdJointTrajectory>("joint_path_command"); industrial_msgs::CmdJointTrajectory srv; srv.request.trajectory.header = msg->header; srv.request.trajectory.joint_names = jointnames; srv.request.trajectory.points.positions = posi; srv.velocities = velo; srv.accelerations = acc; srv.effort = eff; // srv.request.trajectory.points = srvvv; // srv.time_from_start = ros::Time::now(); if(client.call(srv)) { ROS_INFO("hoolay"); } } Above is my program that tries to call the joint_path_command service with the information gathered from a sensor_msgs/JointState msg.
Below is the compilation result [100%] [100%] Built target add_two_ints_client Built target add_two_ints_server /home/colin/catkin_ws/src/armpost_listener/src/listen_armpost.cpp: In function 'void chatterCallback(const ConstPtr&)': /home/colin/catkin_ws/src/armpost_listener/src/listen_armpost.cpp:66:33: error: 'trajectory_msgs::JointTrajectory_<std::allocator<void> >::_points_type' has no member named 'positions' srv.request.trajectory.points.positions = posi; /home/colin/catkin_ws/src/armpost_listener/src/listen_armpost.cpp:67:33: error: 'trajectory_msgs::JointTrajectory_<std::allocator<void> >::_points_type' has no member named 'velocities' srv.request.trajectory.points.velocities = velo; ^ /home/colin/catkin_ws/src/armpost_listener/src/listen_armpost.cpp:68:33: error: 'trajectory_msgs::JointTrajectory_<std::allocator<void> >::_points_type' has no member named 'accelerations' srv.request.trajectory.points.accelerations = acc; ^ /home/colin/catkin_ws/src/armpost_listener/src/listen_armpost.cpp:69:33: error: 'trajectory_msgs::JointTrajectory_<std::allocator<void> >::_points_type' has no member named 'effort' srv.request.trajectory.points.effort = eff; ^ make[2]: *** [armpost_listener/CMakeFiles/listen_armpost.dir/src/listen_armpost.cpp.o] Error 1 make[1]: *** [armpost_listener/CMakeFiles/listen_armpost.dir/all] Error 2 make: *** [all] Error 2 Invoking "make -j8 -l8" failed As you can see, The compiler does not recognize the members of the points struct within trajectory/points (positions, velocities, effort, accelerations) of the industrial_msgs/CmdJointTrajectory when it is clearly stated in the rossrv show(shown below) colin@Colin-GF:~$ rossrv show industrial_msgs/CmdJointTrajectory trajectory_msgs/JointTrajectory trajectory std_msgs/Header header uint32 seq time stamp string frame_id string[] joint_names trajectory_msgs/JointTrajectoryPoint[] points float64[] positions float64[] velocities float64[] accelerations float64[] effort duration time_from_start --- 
industrial_msgs/ServiceReturnCode code int8 SUCCESS=1 int8 FAILURE=-1 int8 val Am I doing something wrong? Any help will be greatly appreciated! Thank you! Originally posted by ninanona on ROS Answers with karma: 7 on 2016-12-03 Post score: 0 Original comments Comment by gvdhoorn on 2016-12-04: Just curious: what are you trying to achieve exactly? The code you include above seems to be sending the robot back to where it already is (ie: setpoint == current_state). Answer: points in srv.request.trajectory.points is an array as you correctly note in your question. Thus, you should access its elements in order to access their members (e.g. positions, velocities and accelerations). Instead of srv.request.trajectory.points.positions try srv.request.trajectory.points[i].positions where i is the joint you want to specify. You might also benefit from this answer. Originally posted by jsanch2s with karma: 136 on 2016-12-03 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by ninanona on 2016-12-04: Thank you so much! I didn't realise the point array meant the trajectory points. It solved my problem!
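To make the indexing pattern concrete, here is a minimal Python sketch. The classes below are stand-ins that only mimic the layout of trajectory_msgs/JointTrajectory (real code would use the generated ROS message types; the joint values are made up):

```python
# Stand-in classes mirroring the shape of trajectory_msgs messages
# (assumption: real code would use the generated ROS message classes).
class JointTrajectoryPoint:
    def __init__(self):
        self.positions = []
        self.velocities = []
        self.accelerations = []
        self.effort = []

class JointTrajectory:
    def __init__(self):
        self.joint_names = []
        self.points = []  # an array of points, not a single struct

trajectory = JointTrajectory()
point = JointTrajectoryPoint()                 # fill one waypoint...
point.positions = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6]
trajectory.points.append(point)                # ...then append it to the array

# trajectory.points.positions                  # wrong: the array has no 'positions'
first = trajectory.points[0].positions         # right: index a point first
```

The same holds in the C++ message classes: `points` is a `std::vector<JointTrajectoryPoint>`, so you resize it and then assign `points[i].positions` per waypoint.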
{ "domain": "robotics.stackexchange", "id": 26387, "tags": "ros, ros-service" }
Understanding rosbag timestamps
Question: Looking at the rosbag v2 specification, every message stored in a bag file is comprised of: a Message Header and Message Content. The header contains a connection ID and a TimeStamp. The content is the message serialized to bytes, which can include a std_msgs/Header.stamp. So, I would like to understand what the relationship is between the bag header TimeStamp and the message's std_msgs/Header.stamp. I am also concerned about out-of-order messages, where messages have been acquired at different times but reach the bag file writing queue out of order. Questions: Is it mandatory that the bag header stamp and std_msgs/Header.stamp always match? Is it mandatory that the bag header TimeStamps be in order? Can the std_msgs/Header.stamp be out of order and decoupled from the bag header TimeStamp? In general, who is usually responsible for handling out-of-order messages: the software writing the bag files, or the clients reading them? My understanding is that the bag header TimeStamp is the instant at which the message is written to the file, so they're guaranteed to be in order, while the message's std_msgs/Header.stamp is the acquisition time, which can be out of order and a bit sooner than the time in the bag header. Is this correct? So for example, I can have a bag file with two consecutive messages, where the bag header timestamps are sequential and in order, but the actual message Header.stamps can be sooner and out of order. Is this right? Originally posted by Vicente Penades on ROS Answers with karma: 130 on 2019-03-14 Post score: 8 Answer: The bag header timestamp is populated by rosbag and is the time that a message was received by the rosbag recorder. I believe these will always be in order, but I am not certain. If the serialized message contains a timestamp, it is populated by the publisher of that message.
ROS and rosbag make no guarantees about the meaning, offset between these timestamps and rosbag timestamps, or even the timestamp validity. All of those things are the responsibility of the publisher. Some publishers backdate or future-date timestamps, which can cause them to differ from the actual time by a significant amount. I've seen offsets up to 1/2 a second for properly functioning publishers, and I've seen other publishers that just set all timestamps to 0 because their clients don't need the timestamp at all. The rosbag writers and readers do not look at the header timestamp within the messages, and do not guarantee that messages are ordered by those timestamps. Generally messages from a single publisher are written in the order that they are received, but if there are multiple publishers on a topic, you can easily get header timestamps slightly out of order. It is the client's responsibility to interpret the message header timestamps, and to handle out-of-order timestamps in a way that makes sense for the client. Originally posted by ahendrix with karma: 47576 on 2019-03-14 This answer was ACCEPTED on the original site Post score: 9 Original comments Comment by gvdhoorn on 2019-03-17: The rosbag writers and readers do not look at the header timestamp within the messages, and do not guarantee that messages are ordered by those timestamps. Note that there are options for rosbag play that change this such that rosbag does look at message timestamps. Comment by marctison on 2021-07-13: Oh wow, that's exactly what I was looking for. Could you please share what rosbag play argument you are talking about?
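To illustrate the client-side responsibility concretely, here is a minimal sketch in plain Python (the tuples stand in for deserialized bag records; the times and payloads are invented). A reader that cares about acquisition order re-sorts by the header stamp inside each message rather than trusting the order the recorder wrote them:

```python
# Each record: (bag_receive_time, header_stamp, payload).
# Bag receive times are in order; header stamps from two publishers are not.
records = [
    (100.00, 99.95, "lidar/a"),
    (100.01, 99.90, "lidar/b"),   # older header stamp arrived later
    (100.02, 99.97, "lidar/a"),
]

# Client-side fix: sort by the header stamp inside the message,
# not by the order the recorder wrote the records.
by_header_stamp = sorted(records, key=lambda r: r[1])

playback_order = [r[2] for r in by_header_stamp]
```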
{ "domain": "robotics.stackexchange", "id": 32650, "tags": "ros-melodic, rosbag, ros-lunar" }
Physics engine - collisions
Question: I'm working on a 2D physics engine simulator and I want it to be very accurate from a physics point of view. Currently I'm doing some research about how rigid-body collisions may be calculated, and I've found this page, which presents an equation where $w$ is the angular velocity, $j$ is the impulse during the collision, $r$ is the radius and $\hat{n}$ is (I'm not sure, but I am guessing) a normalized vector. My question is about what exactly $I^{-1}$ is. It says it's an inertia tensor, but as far as I know that is used for 3D objects. Since my engine is 2D, may I simply use the formula presented here as the moment of inertia of a rectangle? If yes, should I still use the (-1) power? Btw, I would also accept a link to some pdf explaining how collisions change angular and linear momentum in 2D... this wikipedia page was the best I could find. Answer: Yes, the I must have the ^-1 exponent, otherwise the unit would not end up in $\mathrm{s}^{-1}$ (the unit for angular velocity). $\hat n$ is the unit vector in the direction of exit after collision. The moment of inertia of a 2D or 3D object is the same as long as they have the same cross section from the perspective of the dimension you want to ignore (for example the thickness of the cuboid). E.g. the moment of inertia of a rectangle rotating about its center in its own plane has the same formula you listed. Note this is true only because the cross sections are the same about this particular axis of rotation, and of course the mass is assumed the same.
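To make the 2D specialization concrete: in 2D the inertia tensor reduces to the single scalar $I$, the cross product reduces to a scalar, and "multiplying by $I^{-1}$" is just dividing by $I$. A minimal numeric sketch (the mass, rectangle dimensions and impulse values are made up; the update rule $\omega' = \omega + j\,(r \times \hat{n})/I$ is the common 2D form of the impulse equation):

```python
# 2D specialization: the inertia tensor is a single scalar I,
# and r x n is the scalar cross product rx*ny - ry*nx.
def rect_moment_of_inertia(m, w, h):
    # Rectangle rotating about its center, in its own plane.
    return m * (w * w + h * h) / 12.0

def cross2(r, n):
    return r[0] * n[1] - r[1] * n[0]

def angular_velocity_after_impulse(omega, j, r, n_hat, I):
    # w' = w + j * (r x n_hat) / I -- dividing by the scalar I plays
    # the role of multiplying by the inverse inertia tensor I^-1.
    return omega + j * cross2(r, n_hat) / I

I = rect_moment_of_inertia(m=2.0, w=1.0, h=0.5)   # = 2*(1 + 0.25)/12
omega_new = angular_velocity_after_impulse(
    omega=0.0, j=3.0, r=(0.5, 0.25), n_hat=(0.0, 1.0), I=I)
```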
{ "domain": "physics.stackexchange", "id": 9983, "tags": "collision" }
Lexicographic perturbation for euclidean shortest path instances?
Question: Assume we have an undirected graph $G=(V,E)$ and vertex locations $\pi: V \rightarrow \mathbb{R}^2$. I am looking for a procedure to perturb the vertex positions to obtain new positions $\pi'$ such that the following statements hold: For every pair $s\neq t \in V$, there is a unique shortest s-t-path $P_{s,t}$ in $G$ w.r.t. the Euclidean weight function $c(v,w)=|\pi'(v)-\pi'(w)|_2$. $P_{s,t}$ is also a shortest s-t-path w.r.t. the original vertex positions $\pi$. $\pi'$ can be deterministically computed in polynomial time given $\pi$. The length of the numbers occurring in $\pi'$ is polynomially bounded in the length of those in $\pi$. I know that lexicographic perturbation is a standard procedure to do this deterministically. Sadly there is no simple way to modify the distance of only a single edge in Euclidean instances. Is there any known approach that can be applied to those Euclidean instances? Even a randomized perturbation algorithm would be interesting for me. Answer: I think you're unlikely to get a good answer, because this is tied up in difficult and unsolved algebraic problems. The issue is that Euclidean path lengths (for points with integer coordinates) can be expressed as sums of square roots, but we don't know how small the difference between two distinct sums of square roots can be. Because of this, we also don't know how far apart the shortest path length and second-shortest distinct path length between a given pair of vertices can be, and therefore we don't know how small we have to make a perturbation to prevent it from changing the shortest path to a path that wasn't originally shortest. For the same reason, shortest paths in Euclidean graphs are not really known to be solvable in polynomial time, in models of computation that take into account the bit complexity of the inputs, even though Dijkstra is polynomial in a model of computation allowing constant-time real-number arithmetic.
So asking for a polynomial time algorithm for a more complicated variant of the problem in which the bit complexity is unavoidable seems likely to have a negative answer.
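The numerical flavor of the sum-of-square-roots obstruction is easy to demonstrate: distinct sums of a few small square roots can agree to several digits, so comparing two path lengths may need more precision than any bound we can currently prove. A small illustration (the particular radicands are arbitrary):

```python
import math

# Two distinct sums of square roots of small integers...
a = math.sqrt(10) + math.sqrt(11)   # ~ 6.47890
b = math.sqrt(5) + math.sqrt(18)    # ~ 6.47871

# ...that agree to four digits: deciding which Euclidean path is
# shorter can require precision with no known polynomial bound.
gap = abs(a - b)
```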
{ "domain": "cstheory.stackexchange", "id": 3311, "tags": "reference-request, graph-theory" }
I have a problem regarding the kuka_experimental installation
Question: Hello everyone, I tried to install KUKA manipulators within ROS-Industrial. I followed this procedure: mkdir -p ~/moveit_ws_2/src --> cd /path/to/catkin_ws/src git clone https://github.com/ros-industrial/kuka_experimental.git --> cd .. rosdep install --from-paths src --ignore-src catkin_make PS: I also sourced the workspace by: echo 'source ~/ws_moveit/devel/setup.bash' >> ~/.bashrc However by putting this command roslaunch kuka_experimental kuka_rsi_simulator.launch I dumped to the following error: [kuka_rsi_simulator.launch] is neither a launch file in package [kuka_experimental] nor is [kuka_experimental] a launch file name The traceback for the exception was written to the log file Could anyone please kindly help me? Originally posted by MTV1368 on ROS Answers with karma: 21 on 2018-08-27 Post score: 0 Original comments Comment by jarvisschultz on 2018-08-27: It looks like there are multiple mistakes in the commands you listed. In (1) did you really run cd /path/to/catkin_ws/src or did you cd to ~/moveit_ws_2/src? Are you sourcing the setup.bash for ~/ws_moveit or ~/moveit_ws_2? What are your ROS environment variables (env |grep ROS)? Comment by jarvisschultz on 2018-08-27: Most often the error that you are describing comes from people improperly sourcing the correct setup.bash file Comment by MTV1368 on 2018-08-27: Sorry for the typo. I did work in ~/moveit_ws/src. I followed the exact same way I have been asked on wiki.ros.org in the same order. I think I sourced it right, as roscd works fine. Answer: I also sourced the workspace by: echo 'source ~/ws_moveit/devel/setup.bash' >> ~/.bashrc That doesn't source the workspace. Or at least: it will source the workspace only after you start a new bash shell, or after you source $HOME/.bashrc. Also: the workspace you created was named moveit_ws_2, not ws_moveit.
However by putting this command roslaunch kuka_experimental kuka_rsi_simulator.launch I dumped to the following error: [kuka_rsi_simulator.launch] is neither a launch file in package [kuka_experimental] nor is [kuka_experimental] a launch file name And this makes sense: kuka_experimental is a metapackage; it doesn't contain any launch files. If you want to start the kuka_rsi_simulator.launch launch file, the command would be: roslaunch kuka_rsi_simulator kuka_rsi_simulator.launch Originally posted by gvdhoorn with karma: 86574 on 2018-08-27 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 31654, "tags": "ros, ros-kinetic, ubuntu, ubuntu-xenial" }
How do deepfakes work and how they might be dangerous?
Question: Deepfakes (a portmanteau of "deep learning" and "fake") are synthetic media in which a person in an existing image or video is replaced with someone else's likeness. Nowadays much of the news circulating in the media and on social networks is fake/gossip/rumor, which may be false positives or false negatives, except WikiLeaks. I know there has been a Deepfake Detection Challenge Kaggle competition with a whopping $1,000,000 in prize money. I would like to know: how do deepfakes work and how might they be dangerous? Answer: In general, deepfakes rely on advanced context-aware digital signal manipulations - usually image, video or audio - that allow for very natural-looking modifications of content that previously would have been costly or near impossible to produce in high quality. The AI models, often based on generative adversarial networks (GANs), style transfer, pose estimation and similar technologies, are capable of tasks such as transferring facial features from subject A to replace those of subject B in a still image or video, whilst copying subject B's pose and expression, and matching the scene's lighting. Similar technologies exist for voices. A good example of this might be these Star Wars edits, where actors' faces have been changed. It is not perfect - you can see a little instability in a few shots if you study the frames - but the quality is still pretty good, and it was done with a relatively inexpensive setup. The work was achieved using freely-available software, such as DeepFaceLab on GitHub. The technology is not limited to simple replacements - other forms of puppet-like control over the output are possible, where an actor can directly control the face of a target in real time using no more than a PC and webcam. Essentially, with the aid of deepfakes, it becomes possible to back up slanderous or libelous commentary with convincing media, at a low price point.
Or the reverse: to re-word or re-enact an event that would otherwise be negative publicity for someone, in order to make it seem very different yet still naturally captured. The danger of this technology is that it puts tools for misinformation into a lot of people's hands. This leads to potential problems including: Attacks on the integrity of public figures, backed by realistic-looking "evidence". Even with the knowledge that this fakery is possible (and perhaps likely given a particular context), damage can still be done, especially by feeding people with already-polarised opinions manufactured events, relying on confirmation bias. Erosion of belief in any presented media as proof of anything. With deepfakes out in the wild, someone confronted with media evidence that goes against any narrative can claim "fake" that much more easily. Neither of these issues is new in the domains of reporting, political bias, propaganda etc. However, it adds another powerful tool for people willing to spread misinformation to support any agenda, alongside things such as selective statistics, quoting out of context, and lies in media that is text-only or crudely photoshopped. A search for papers studying the impact of deepfakes should find academic research such as Deepfakes and Disinformation: Exploring the Impact of Synthetic Political Video on Deception, Uncertainty, and Trust in News. Opinion: Video content, presented feasibly as a live capture or report, is especially compelling, as unlike text and still-image media, it directly interfaces with two key senses that humans use to understand and navigate the world in real time. In short, it is more believable by default at an unconscious and emotional level compared to a newspaper article or even a photo. And that applies despite any academic knowledge of how it is produced that you might possess as a viewer.
{ "domain": "ai.stackexchange", "id": 2074, "tags": "deep-learning, deepfakes" }
Performance concerns for synchronous action methods
Question: I have an ASP.NET MVC 5 application using Entity Framework with SQL Server 2016. The application is mainly basic CRUD. The boilerplate generated when creating the project in Visual Studio did not have any asynchronous action methods, so I've left it alone for now. But at this point I can't figure out if the calls should be async or not. I've never worked with async before, but I have a rudimentary understanding of how it works with ASP.NET. This question could potentially boil down to: How large does a database have to be with how many calls to it before the async overhead is worth it? Most of my controllers look something like this. Would I benefit from asynchrony here? If you can't determine it, how can I? I'm not sure how to "run the numbers", as I often see on Stack Overflow. I also run into the "leave it until it becomes an issue" doctrine often, but this is going right from a test group of 4 to a group of 50 when it is finished. I cringe just thinking of the backlash I'll receive when it poops the bed on the first day. I know 50 is still a relatively small number, but the server is fairly weak. 
using EDB.Database; using EDB.Identity; using EDB.Models; using EDB.Utilities.Controllers; using System.Collections.Generic; using System.Data.Entity; using System.Linq; using System.Net; using System.Web.Mvc; namespace EDB.Controllers { public class ClientsController : Controller { private ExcDbContext db = new ExcDbContext(); // GET: Clients [HierarchialAuth(Permissions.Clients.View)] public ActionResult Index() { return View(GetRows(db.Clients.ToList())); } // Iterates through models in list and creates a new ClientItemViewModel for each private IEnumerable<ClientItemViewModel> GetRows(List<ClientModel> allClients) { foreach (ClientModel client in allClients) { yield return new ClientItemViewModel(client.ID, client.Name, client.PhoneNumber, client.Email, client.PhysicianIDs.Count, db.Physicians.Find(client.PhysicianIDs[0]).Name); } } // GET: Clients/Create [HierarchialAuth(Permissions.Clients.Create)] public ActionResult Create() { return View(new ClientCreateViewModel(ControllerUtil.DropDownListPhysicians(db), ControllerUtil.DropDownListLocations(db))); } // POST: Clients/Create [HttpPost] [ValidateAntiForgeryToken] [HierarchialAuth(Permissions.Clients.Create)] public ActionResult Create(ClientCreateViewModel viewModel) { if (ModelState.IsValid) { ClientModel clientModel = new ClientModel(); viewModel.Unflatten(clientModel); db.Clients.Add(clientModel); db.SaveChanges(); return RedirectToAction("Index"); } viewModel.AllPhysicians = ControllerUtil.DropDownListPhysicians(db); viewModel.AllLocations = ControllerUtil.DropDownListLocations(db); return View(viewModel); } // GET: Clients/Edit/5 [HierarchialAuth(Permissions.Clients.Edit)] public ActionResult Edit(int? 
id) { if (id == null) { return HttpError(HttpStatusCode.BadRequest); } ClientModel clientModel = db.Clients.Find(id); if (clientModel == null) { return HttpError(HttpStatusCode.NotFound); } return View(new ClientEditViewModel(clientModel, ControllerUtil.DropDownListPhysicians(db), ControllerUtil.DropDownListLocations(db))); } // POST: Clients/Edit/5 [HttpPost] [ValidateAntiForgeryToken] [HierarchialAuth(Permissions.Clients.Edit)] public ActionResult Edit(ClientEditViewModel viewModel, int id) { ClientModel cm = db.Clients.Find(id); if (cm == null) { return HttpError(HttpStatusCode.BadRequest); } if (ModelState.IsValid) { viewModel.Unflatten(cm); db.Entry(cm).State = EntityState.Modified; db.SaveChanges(); return RedirectToAction("Index"); } viewModel.AllPhysicians = ControllerUtil.DropDownListPhysicians(db); viewModel.AllLocations = ControllerUtil.DropDownListLocations(db); return View(viewModel); } private IEnumerable<string> GetPhysicianNames(ClientModel cm) { return cm.PhysicianIDs.Select(id => db.Physicians.Find(id).Name); } private IEnumerable<string> GetLocationNames(ClientModel cm) { return cm.LocationIDs.Select(id => db.Locations.Find(id).Name); } // GET: Clients/Details/5 [HierarchialAuth(Permissions.Clients.View)] public ActionResult Details(int? id) { if (id == null) { return HttpError(HttpStatusCode.BadRequest); } ClientModel cm = db.Clients.Find(id); if (cm == null) { return HttpError(HttpStatusCode.NotFound); } return View(new ClientDetailsViewModel(cm, GetPhysicianNames(cm), GetLocationNames(cm))); } // GET: Clients/Delete/5 [HierarchialAuth(Permissions.Clients.Delete)] public ActionResult Delete(int? 
id) { if (id == null) { return HttpError(HttpStatusCode.BadRequest); } ClientModel cm = db.Clients.Find(id); if (cm == null) { return HttpError(HttpStatusCode.NotFound); } return View(new ClientDeleteViewModel(cm, GetPhysicianNames(cm), GetLocationNames(cm))); } // POST: Clients/Delete/5 [HttpPost, ActionName("Delete")] [ValidateAntiForgeryToken] [HierarchialAuth(Permissions.Clients.Delete)] public ActionResult DeleteConfirmed(int id) { ClientModel clientModel = db.Clients.Find(id); db.Clients.Remove(clientModel); db.SaveChanges(); return RedirectToAction("Index"); } protected override void Dispose(bool disposing) { if (disposing) { db.Dispose(); } base.Dispose(disposing); } } } Answer: How large does a database have to be with how many calls to it before the async overhead is worth it? You will have benefits immediately. SQL Server is a multi-user database and even for small/simple/quick queries you will have some benefits. Most of my controllers look something like this. Would I benefit from asynchrony here? Don't think it will speed up your queries: if you have a request X and a request Y executed serially then the total time will be X + Y (possibly query Y will wait for query X to complete). Doing them in parallel won't change X's execution time (on the contrary, it may be slightly worse) but Y doesn't need to wait for X to complete and it will be executed immediately (assuming no locks are involved). In short: for a single non-concurrent query you probably won't see any measurable benefit, but with multiple concurrent requests you will immediately see a performance gain (which is in fact a responsiveness gain). OK, this is not entirely true and you can disable Session State in ASP.NET to let requests be served in parallel (losing TempData) but don't go that far now... You don't need to make it asynchronous now if you don't have any performance problem (even if it's a relatively smooth and painless change).
What you're doing wrong in your code is the way you handle DbContext. You're not handling errors, and some errors may cause the connection to become invalid. Solution: do not create the connection at class level but at function level: [HierarchialAuth(Permissions.Clients.View)] public ActionResult Index() { using (var db = new ExcDbContext()) { return View(GetRows(db.Clients.ToList())); } } It's still not the optimal version; please read Know when to retry or fail when calling SQL Server from C#? for further details about error handling. Here we also have the opportunity to optimize your code. Currently you're fetching the entire Clients table into memory (because of .ToList()) but you do not actually need to materialize the list (which may become pretty huge). Let EF decide when/if it's necessary: using (var db = new ExcDbContext()) { return View(GetRows(db.Clients)); } With: private IEnumerable<ClientItemViewModel> GetRows(IEnumerable<ClientModel> allClients) { } You're also using IEnumerable<ClientItemViewModel> directly as the model for your view. I'd suggest not doing that; encapsulate the list in a proper object: public sealed class ClientsIndexViewModel { public IEnumerable<ClientItemViewModel> Clients { get; set; } } If in the future you need to add any property for this view, you won't need to refactor existing code. You should consider adding pagination and sorting; I suppose you do not want to return clients in random order, and if the list grows enough it may be useful to show a shorter list (especially to improve responsiveness). In Edit(ClientEditViewModel, int) you have an id argument, but if I'm not wrong the client ID is already in ClientEditViewModel, so you can drop it. You often do return HttpError(HttpStatusCode.BadRequest). I don't remember in which version they were introduced, but you can simply return BadRequest() (see also the other similar functions).
Method names (like Index()) are unlikely to change, but I strongly dislike string constants; you can replace return RedirectToAction("Index"); with return RedirectToAction(nameof(Index));. Big blocks of code are repeated in each method. You can extract a reusable function: IActionResult Do(ExcDbContext db, int? id, Func<ClientModel, IActionResult> action) { if (id == null) return BadRequest(); var cm = db.Clients.Find(id); if (cm == null) return NotFound(); return action(cm); } Used like this: [HierarchialAuth(Permissions.Clients.Delete)] public ActionResult Delete(int? id) { using (var db = new ExcDbContext()) { return Do(db, id, cm => { return View(new ClientDeleteViewModel(...)); }); } } You may also rewrite Do() to create an instance of the DB context and to handle all the relevant error handling/retrying logic. I do not know your requirements, but I do not see any security check: if this code can be invoked by clients (for example Edit() to update their own data) then you should also include some validation (a malicious client X cannot modify data of another client Y...) Code like return cm.LocationIDs.Select(id => db.Locations.Find(id).Name); is pretty inefficient when the DB grows, because there is almost no chance for EF to translate it to SELECT Name FROM Locations WHERE Id IN (<id list>). It can easily be rewritten to: return db.Locations.Where(x => ids.Contains(x.Id)).Select(x => x.Name); where ids is your IList<int> of IDs.
{ "domain": "codereview.stackexchange", "id": 27631, "tags": "c#, sql-server, entity-framework, asp.net-mvc" }
Do SALC-AOs really belong to their symmetry species?
Question: I'm working through a molecular symmetry textbook and something keeps nagging at me. If I derive the SALC-AOs for NH3 (using the projection operator method), I'll get A1: $ \frac{1}{\sqrt{3}}(\phi_1+\phi_2+\phi_3)$ Doubly-degenerate E: $ \frac{1}{\sqrt{6}}(2\phi_1-\phi_2-\phi_3)$ $ \frac{1}{\sqrt{2}}(\phi_2-\phi_3)$ where the $\phi$s are the H 1s orbitals, i.e.: (This was the best image I could find, but in the second E orbital the black s-orbital should be enlarged) I'm having trouble seeing how these SALC-AOs would be members of their symmetry species. The A1 SALC-AO is identical under each operation, so it makes sense that it would belong to A1. But if I want to confirm that the two E SALC-AOs belong to E, how would I do that? My intuition is that I should be able to apply each C3v operation to the SALC-AO and get back the E row of the character table (below). But if you apply a C3 rotation to the e2 orbital, you get the black orbital taking the place of the white orbital, the white taking the place of the node, and the node taking the place of the black orbital. This doesn't seem like it could be expressed as a number in a character table, since the SALC-AO isn't being taken to itself or -itself. Am I thinking about this all wrong? How should I understand the SALC-AO as a whole? Any guidance is appreciated; the textbook seems evasive on this. Answer: To show that your two functions do represent the $\mathrm{E}$ irreducible representation, we can approach the problem algebraically. We start by representing the operators of the $C_{3v}$ point group in the basis of the AOs. The identity element is easy in any basis: $$E=\pmatrix{1 &0 &0\\0 &1 &0\\0 &0 &1\\}$$ We can see that applying this operation to $e_1$ and $e_2$ will just return these functions, so $\chi(E)=2$. We can also represent this operation in the basis of $a_1$, $e_1$, and $e_2$, which will be important for explaining the remaining operations.
For the $C_3$ rotation, we can notice that this just has the effect of moving each AO to the next site (i.e. $\phi_1\to\phi_2$, $\phi_2\to\phi_3$, and $\phi_3\to\phi_1$), so we can write it in matrix form as $$C_3=\pmatrix{0 &1 &0\\0 &0 &1\\1 &0 &0\\}$$ Let's see what happens when we apply this to $e_1$: $$C_3e_1=\frac{1}{\sqrt{6}}\pmatrix{0 &1 &0\\0 &0 &1\\1 &0 &0\\}\pmatrix{2\\-1\\-1}=\frac{1}{\sqrt{6}}\pmatrix{-1\\-1\\2}=e_1'$$ Writing $e_1'$ in the basis of the original functions, we obtain: $$e_1'=0a_1-\frac{1}{2}e_1-\frac{\sqrt{3}}{2}e_2$$ Doing the same with $e_2$: $$e_2'=0a_1+\frac{\sqrt{3}}{2}e_1-\frac{1}{2}e_2$$ We can then take the trace of this matrix (or, to simplify, the 2x2 submatrix spanned by $e_1$ and $e_2$) to get the character of $C_3$: $$\chi(C_3)=\mathrm{Tr}\pmatrix{-\frac{1}{2} & \frac{\sqrt{3}}{2} \\ -\frac{\sqrt{3}}{2} & -\frac{1}{2}}=-1$$ You can go through the same sort of exercise with $\sigma_v$ (determine form in AO basis, apply to each function, write resulting functions in original function basis, take the trace to get the character).
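The expansion coefficients and the character can be double-checked numerically by applying the site permutation to the SALC coefficient vectors (a quick sanity check in plain Python; the SALCs are represented by their coefficient triples over $\phi_1,\phi_2,\phi_3$):

```python
import math

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def apply_c3(v):
    # Matches the C3 matrix above: (c1, c2, c3) -> (c2, c3, c1).
    return [v[1], v[2], v[0]]

e1 = normalize([2.0, -1.0, -1.0])   # (2*phi1 - phi2 - phi3)/sqrt(6)
e2 = normalize([0.0, 1.0, -1.0])    # (phi2 - phi3)/sqrt(2)

# Character: sum of diagonal projections <e_i | C3 e_i> -> -1.
chi_c3 = dot(apply_c3(e1), e1) + dot(apply_c3(e2), e2)

# One off-diagonal element, <e2 | C3 e1> = -sqrt(3)/2.
off_diag = dot(apply_c3(e1), e2)
```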
{ "domain": "chemistry.stackexchange", "id": 13709, "tags": "bond, molecular-orbital-theory, orbitals, symmetry" }
Hotel Booking Simulation
Question: I am learning Java and I successfully wrote one small console application. I would love to get reviews and pointers to possible bug sources in my code. I'm particularly concerned with my object structure. The problem: A well-renowned hotel has three branches in Miami, namely x, y, and z. Each has two types of customers: regular and rewardee. Each branch also has its own rating: x is given a 3-star rating, while y has a 5-star rating and z has a 4-star rating. Each hotel has specific rates for weekends and weekdays. x charges $100 for regular customers on weekdays and $120 on weekends, while it is $90 for rewardees on weekdays and $60 on weekends. Similarly, y charges $130 for regular customers on weekdays and $150 on weekends, while it's $100 for rewardees on weekdays and $95 on weekends. z charges $195 for regular customers on weekdays and $150 on weekends, while it's $120 for rewardees on weekdays and $90 on weekends. Now, when a customer requests a quote for particular dates, you need to find which hotel would be cheapest for the customer. In case of a tie between hotels, compare the ratings and provide the result.
Input format: regular: 16Mar2010(sun), 19Mar2010(wed), 21Mar2010(Fri) Output format: 320 410 540 LakeWood Solution: import java.io.BufferedReader; import java.io.IOException; import java.io.InputStreamReader; class HotelFactory { String hotelName; private int regularWeekDay; private int regularWeekEnd; private int rewardeeWeekDay; private int rewardeeWeekEnd; HotelFactory(String name) { this.hotelName = name; } public int getRegularWeekDay() { return regularWeekDay; } public void setRegularWeekDay(int regularWeekDay) { this.regularWeekDay = regularWeekDay; } public int getRegularWeekEnd() { return regularWeekEnd; } public void setRegularWeekEnd(int regularWeekEnd) { this.regularWeekEnd = regularWeekEnd; } public int getRewardeeWeekDay() { return rewardeeWeekDay; } public void setRewardeeWeekDay(int rewardeeWeekDay) { this.rewardeeWeekDay = rewardeeWeekDay; } public int getRewardeeWeekEnd() { return rewardeeWeekEnd; } public void setRewardeeWeekEnd(int rewardeeWeekEnd) { this.rewardeeWeekEnd = rewardeeWeekEnd; } public String getHotelName() { return hotelName; } } public class TestHotel { private static HotelFactory x, y, z; public static void main(String[] args) throws IOException { BufferedReader bf = new BufferedReader(new InputStreamReader(System.in)); String s = bf.readLine(); x = new HotelFactory("LakeWood"); x.setRegularWeekDay(100); x.setRegularWeekEnd(120); x.setRewardeeWeekDay(90); x.setRewardeeWeekEnd(60); y = new HotelFactory("RidgeWood"); y.setRegularWeekDay(130); y.setRegularWeekEnd(150); y.setRewardeeWeekDay(100); y.setRewardeeWeekEnd(95); z = new HotelFactory("BridgeWood"); z.setRegularWeekDay(195); z.setRegularWeekEnd(150); z.setRewardeeWeekDay(120); z.setRewardeeWeekEnd(90); int index = s.indexOf(":"); String type = s.substring(0, index); int cost_x = 0, cost_y = 0, cost_z = 0; int day_index_start = 0, day_index_end = 0; while (day_index_start != -1) { day_index_start = s.indexOf("(", day_index_start + 1); day_index_end = s.indexOf(")", 
day_index_end + 1); if (day_index_start != -1) { String day = s.substring(day_index_start + 1, day_index_end); if (day.equalsIgnoreCase("sun") || day.equalsIgnoreCase("sat")) { if (type.equalsIgnoreCase("regular")) { cost_x += x.getRegularWeekEnd(); cost_y += y.getRegularWeekEnd(); cost_z += z.getRegularWeekEnd(); } else { cost_x += x.getRewardeeWeekEnd(); cost_y += y.getRewardeeWeekEnd(); cost_z += z.getRewardeeWeekEnd(); } } else { if (type.equalsIgnoreCase("regular")) { cost_x += x.getRegularWeekDay(); cost_y += y.getRegularWeekDay(); cost_z += z.getRegularWeekDay(); } else { cost_x += x.getRewardeeWeekDay(); cost_y += y.getRewardeeWeekDay(); cost_z += z.getRewardeeWeekDay(); } } } } System.out.println(cost_x); System.out.println(cost_y); System.out.println(cost_z); String result = min(cost_x, cost_y, cost_z); System.out.println(result); } private static String min(int a, int b, int c) { if (a < b && a < c) { return x.getHotelName(); } else if (b < a && b < c) { return y.getHotelName(); } else if (c < a && c < b) { return z.getHotelName(); } else if (a == b || b == c) { return y.getHotelName(); } else if (c == a) { return z.getHotelName(); } else { return x.getHotelName(); } } } Answer: There are a couple of issue with your code: HotelFactory is not a factory, it's a domain object. It should be called simply "Hotel" Never ever name your variables or methods x, y and z. Never ever name your parameters a, b and c. You should specifically never ever combine these two. There's a special hell for people who do that :D Your Hotels have properties that would not change, correct? 
Then your Hotel objects should be immutable, meaning there are no setter methods for those properties, which should then be initialized via the constructor static class HotelFactory { private final String hotelName; private final int regularWeekDay; private final int regularWeekEnd; private final int rewardeeWeekDay; private final int rewardeeWeekEnd; public HotelFactory(String hotelName, int regularWeekDay, int regularWeekEnd, int rewardeeWeekDay, int rewardeeWeekEnd) { this.hotelName = hotelName; this.regularWeekDay = regularWeekDay; this.regularWeekEnd = regularWeekEnd; this.rewardeeWeekDay = rewardeeWeekDay; this.rewardeeWeekEnd = rewardeeWeekEnd; } public String getHotelName() { return hotelName; } public int getRegularWeekDay() { return regularWeekDay; } public int getRegularWeekEnd() { return regularWeekEnd; } public int getRewardeeWeekDay() { return rewardeeWeekDay; } public int getRewardeeWeekEnd() { return rewardeeWeekEnd; } } The min function doesn't actually do a min operation. It does a minPlusSomStuffs operation. That would be related to breaking ties. You should add a javadoc comment describing how the priority order works with ties. There is a Java built-in min function Math.min(), meaning you don't have to manually perform these if checks. But better yet, you can let Java do the sorting for you using a TreeMap: TreeMap<Integer, HotelFactory> sortedMap = new TreeMap<Integer, HotelFactory>(); sortedMap.put(a, x); sortedMap.put(b, y); sortedMap.put(c, z); return sortedMap.firstEntry().getValue(); What if more hotels are added, say 10? What about 100 more? 1000? Your algorithm doesn't scale well, meaning you have to duplicate a lot of code for every hotel you would add. You should think of an min() algorithm that takes a list of hotels. For totaling days based on weekdays and weekends, regular and non-regular, you are duplicating that logic. Ask yourself, what if there are more types of customers? More types of days in a weekend? Festival days perhaps? 
While duplicating code would certainly work, it becomes a maintenance nightmare. Consider a different data-structure that allows for easy lookups based on a combination of keys (the keys being customer type and day type).
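The reviewer's closing suggestion, a lookup keyed by the combination of customer type and day type, can be sketched language-agnostically. A minimal Python illustration (the RATES table layout and the total_cost function are invented for this sketch; the rates and hotel names come from the question):

```python
# Rates keyed by (customer type, day type): adding a new customer type or a
# new day type (say, "festival") means adding entries, not duplicating branches.
RATES = {
    "LakeWood":   {("regular", "weekday"): 100, ("regular", "weekend"): 120,
                   ("rewardee", "weekday"): 90,  ("rewardee", "weekend"): 60},
    "RidgeWood":  {("regular", "weekday"): 130, ("regular", "weekend"): 150,
                   ("rewardee", "weekday"): 100, ("rewardee", "weekend"): 95},
    "BridgeWood": {("regular", "weekday"): 195, ("regular", "weekend"): 150,
                   ("rewardee", "weekday"): 120, ("rewardee", "weekend"): 90},
}

def total_cost(hotel, customer, day_types):
    return sum(RATES[hotel][(customer, d)] for d in day_types)

# The sample input: a regular customer on sun, wed, fri
days = ["weekend", "weekday", "weekday"]
costs = {h: total_cost(h, "regular", days) for h in RATES}
print(costs)  # LakeWood 320, RidgeWood 410, BridgeWood 540, as in the sample output
```

The cheapest hotel is then `min(costs, key=costs.get)`, with the tie-breaking-by-rating rule layered on top.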
{ "domain": "codereview.stackexchange", "id": 9458, "tags": "java, console" }
Plot of two different matrices in R
Question: I'm trying to plot two different size matrices using one graph (in R), but can't manage to do so. I tried using matplot and the regular plot, but it didn't work. Does anyone know how to plot it? Answer: (Assuming this is a similar question to what was posted on Cross Validated, but was closed): You can merge the data from both matrices while adding a variable that specifies the origin (group), then plot them together in ggplot2: library( ggplot2 ) N1 = 19 N2 = 17 M = 10 m1 = matrix( rnorm(N1*M,mean=0,sd=1), N1, M) m2 = matrix( rnorm(N2*M,mean=0,sd=1), N2, M) y = c( as.vector( t( m1 ) ), as.vector( t( m2 ) ) ) x = c( rep(1:10, each = N1 ), rep(1:10, each = N2 ) ) group = c( rep( '1', N1 * M ), rep( '2', N2 * M ) ) df = data.frame( state = x, value = y, group = group ) ggplot( df, aes( x = state, y = value, colour = group ) ) + geom_point() + ggtitle( "State values in group 1 and 2" ) + labs( x = "State", y = "Value" ) + scale_x_continuous( breaks = seq(10) ) This is the result:
{ "domain": "datascience.stackexchange", "id": 2351, "tags": "r, plotting" }
Balancing a redox reaction with only one product
Question: Consider the following comproportionation reaction: $$\ce{NH4NO3 -> N2}$$ How would I go about balancing this using the half-reaction/ion-electron method? Answer: You should use the same product species in both half-reactions. In the example you gave, this is $\ce{N2}$. $$\ce{NH4NO3-> N2}$$ The half-reactions are constructed from the ammonium and nitrate ions separately, since these feature nitrogen in different oxidation states: \begin{align} \ce{NH4+ &-> N2} \tag{1} \\ \ce{NO3- &-> N2} \tag{2} \end{align} Balancing the reduction half-equation $(2)$: \begin{align} \ce{2NO3- &-> N2} & & \text{(balance N)} \\ \ce{2NO3- &-> N2 \color{red}{+ 6H2O}} & & \text{(add } \ce{H2O} \text{ to balance O)} \\ \ce{2NO3- \color{red}{+ 12H+} &-> N2 + 6H2O} & & \text{(add } \ce{H+} \text{ to balance H)} \\ \ce{2NO3- + 12H+ \color{red}{+ 10e-} &-> N2 + 6H2O} & & \text{(add } \ce{e-} \text{ to balance charge / oxidation state)} \end{align} And balancing the oxidation half-equation $(1)$: \begin{align} \ce{2NH4+ &-> N2} & & \text{(balance N)} \\ \ce{2NH4+ &-> N2 \color{red}{+ 8H+}} & & \text{(add } \ce{H+} \text{ to balance H)} \\ \ce{2NH4+ &-> N2 + 8H+ \color{red}{+ 6e-}} & & \text{(add } \ce{e-} \text{ to balance charge / oxidation state)} \\ \end{align} In order to combine the two balanced half-equations, we need to multiply the first by $3$ and the second by $5$, so that each involves $30$ electrons: \begin{align} \ce{6NO3- + 36H+ + 30e- &-> 3N2 + 18H2O} \\ \ce{10NH4+ &-> 5N2 + 40H+ + 30e-} \\ \hline \ce{6NO3- + 10NH4+ + 36H+ + 30e- &-> 8N2 + 18H2O + 40H+ + 30e-} \end{align} Cancelling the electrons and $36$ of the $\ce{H+}$, then dividing through by $2$, this finally reduces to $$\ce{3NO3- + 5NH4+ -> 4N2 + 9H2O + 2H+}.$$
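This kind of atom-and-charge bookkeeping is easy to script. As a sanity check of the reduction half-reaction, here is a small Python sketch (the species dictionaries and the totals helper are written out by hand for the illustration):

```python
# Each species: (element counts, charge). Electrons carry charge -1 and no atoms.
def totals(side):
    """Sum element counts and charge over one side of an equation,
    given as a list of (coefficient, species) pairs."""
    elems, charge = {}, 0
    for coeff, (atoms, q) in side:
        for el, n in atoms.items():
            elems[el] = elems.get(el, 0) + coeff * n
        charge += coeff * q
    return elems, charge

NO3 = ({"N": 1, "O": 3}, -1)
H   = ({"H": 1}, +1)
e   = ({}, -1)
N2  = ({"N": 2}, 0)
H2O = ({"H": 2, "O": 1}, 0)

# 2 NO3- + 12 H+ + 10 e- -> N2 + 6 H2O
left  = totals([(2, NO3), (12, H), (10, e)])
right = totals([(1, N2), (6, H2O)])
print(left == right)  # True: atoms and charge both balance
```

The same helper applied to the oxidation half-reaction and to the combined equation confirms the balancing step by step.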
{ "domain": "chemistry.stackexchange", "id": 630, "tags": "redox, stoichiometry" }
Do we get the same answer at any time if we measure a system's energy?
Question: Schrödinger's equation says that the only allowed energy states of a system are the eigenvalues of the energy operator $H$. Does this mean that if we measure the energy of the system at any time we will get the same answer? If not, why? All I have found so far is that the eigenvalues are measured with certainty. Answer: $\newcommand{\ket}[1]{\left| #1 \right>}$If your state is an eigenstate of the energy operator, then the answer is that you'll get the same value for the energy every time you measure the particle's energy. That is the reason why the energy eigenstates are also called stationary states. On the other hand, you can also have a superposition of energy eigenstates. Assume that $\ket {E_i}$ are energy eigenfunctions of the Hamiltonian such that $$H\ket{E_i} = E_i \ket {E_i}$$ A general state will be in a superposition of the energy eigenstates, i.e. $$\ket \psi = \sum_i a_i \ket{E_i} \quad \text{with} \quad \sum_i |a_i|^2 =1 $$ Assume that you have a lot of copies of the state $\ket \psi$. Then, measuring copy after copy, you'll obtain the eigenstate $\ket{E_i}$ (and thus the energy $E_i$) a fraction $|a_i|^2$ of the time. In other words, the probability that you'll get the state $\ket{E_i}$ is $|a_i|^2$. Notice, however, that after the measurement you'll lose all the information about the state $\ket \psi$ because of the collapse of the wave function into an eigenstate of the Hamiltonian. For more information about the collapse of the wave function, see the measurement problem.
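The measurement statistics described here are easy to simulate. A small NumPy sketch with two hypothetical eigenvalues and amplitudes (all numbers are illustrative, not tied to any particular system):

```python
import numpy as np

rng = np.random.default_rng(0)

E = np.array([1.0, 2.0])   # hypothetical eigenvalues E_1, E_2
a = np.array([0.6, 0.8])   # amplitudes a_i, chosen so that sum |a_i|^2 = 1
p = np.abs(a) ** 2         # Born-rule probabilities: 0.36 and 0.64

# Each sample models one energy measurement on a fresh copy of |psi>
samples = rng.choice(E, size=200_000, p=p)

print(samples.mean())            # close to <H> = 0.36*1 + 0.64*2 = 1.64
print((samples == 1.0).mean())   # close to |a_1|^2 = 0.36
```

The relative frequencies converge to $|a_i|^2$, while any single measurement returns exactly one eigenvalue.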
{ "domain": "physics.stackexchange", "id": 22847, "tags": "quantum-mechanics, operators, hilbert-space, eigenvalue, observables" }
Is the poundage of a bow relevant for the path of the arrow if the arrow mass weight ratio stays the same?
Question: In archery, we measure the weight of a bow in pounds. It's measured at a draw length of 28", taken from a pivot point plus 1.75". There is also an essential measure for the arrow, which is "grains per pound" (GPP). The arrow has a certain mass measured in grains, and this is set in relation to the draw weight. To make this question a little less complicated, let's just assume a GPP of 9. So, for a 30 # bow, the arrow weighs 270 grains. Considering that the GPP stays the same, does the poundage of a bow even matter for the arrow flight? Will the path of the arrow be more or less the same for a 25 # and a, let's say, 50 # bow? Answer: A constant GPP means that the mass of the arrow is proportional to the force. In other words, for a GPP value of $k$, the mass of the arrow is equal to: $$m=kP$$ where $P$ is the maximum force exerted by the bow on the arrow (i.e. the poundage). The potential energy stored by the bow at full draw is equal to the area under the draw curve (i.e. the force integrated over the draw distance), and the draw weight (aka the poundage) is equal to the maximum of the draw curve. The draw curves of different types of bows are substantially different, which means that different types of bows can store different amounts of potential energy for the same draw weight: As you can see, a compound bow stores more potential energy for the same draw weight. So let's assume that you're comparing two bows of the same type but different poundage, so that the shape of the curve stays basically the same, but it's just scaled up in proportion to the poundage. Then we can say that the potential energy $U$ stored by the bow is related to the poundage $P$ by: $$U=bP$$ where $b$ is a constant, namely, the potential energy stored by a hypothetical 1-pound bow at full draw. When the bow is fired, some of the potential energy of the bowstring is transferred to the arrow, and the arrow gains some kinetic energy, defined as: $$K=\frac{1}{2}mv^2$$ for speed $v$.
The energy transfer is not perfect - for example, some energy goes into pushing air out of the way of the moving bowstring, and some energy is left in the vibrating bowstring and limbs after firing. Let's suppose that the bow transfers energy with some efficiency $\epsilon$, such that: $$K=\epsilon U$$ Putting everything together, we have that: $$K=\frac{1}{2}mv^2=\frac{1}{2}kPv^2=\epsilon U=\epsilon bP$$ In other words, we have the speed of the arrow after firing, and its kinetic energy: $$v=\sqrt{\frac{2\epsilon b}{k}}$$ $$K=P\epsilon b$$ So there are several conclusions you can make here: Increasing the poundage of the bow with constant GPP increases the kinetic energy of the arrow, if all other conditions are held constant. This generally gives the arrow more penetrating power. Increasing the poundage of the bow with constant GPP does not increase the speed of the arrow, if all other conditions are held constant. The speed of the arrow is determined only by the GPP, the efficiency of energy transfer, and the shape of the draw curve. The kinetic energy of the arrow is independent of the GPP. It only depends on the poundage, the efficiency of energy transfer, and the shape of the draw curve. Increasing the GPP gives you a slower arrow with more mass and the same kinetic energy.
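These conclusions can be checked numerically. A Python sketch, where the values of $b$ (stored energy per newton of draw weight) and $\epsilon$ are made up for illustration, since both depend on the particular bow:

```python
import math

GRAIN_KG = 6.479891e-5   # kilograms per grain
LBF_N = 4.448222         # newtons per pound-force

def arrow_stats(poundage_lbf, gpp=9.0, b=1.0, eps=0.75):
    """b (joules per newton of peak draw force) and eps (efficiency)
    are illustrative assumptions, not measured bow data."""
    P = poundage_lbf * LBF_N            # peak draw force P, in newtons
    m = (gpp * GRAIN_KG / LBF_N) * P    # arrow mass m = k*P at constant GPP
    U = b * P                           # stored energy U = b*P
    K = eps * U                         # arrow kinetic energy K = eps*U
    v = math.sqrt(2 * K / m)            # from K = (1/2) m v^2
    return v, K

v25, K25 = arrow_stats(25)
v50, K50 = arrow_stats(50)
print(round(v25, 1) == round(v50, 1), round(K50 / K25, 2))  # True 2.0
```

Doubling the poundage at constant GPP leaves the arrow speed unchanged and exactly doubles its kinetic energy, as derived above.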
{ "domain": "physics.stackexchange", "id": 68784, "tags": "kinematics" }
Creating a new JFrame on label click
Question: So I've been looking for examples on the internet for a while now, and this seems to be the most common way everyone suggests, but now I ask: is there a better way to make a new JFrame pop up on click than this? The code: JLabel prevention = new JLabel("<html><U><font color='red'>Prevencija</font><U></html>"); prevention.setBorder(BorderFactory.createEmptyBorder(0, 0, 0, 100)); prevention.addMouseListener(new MouseAdapter() { @Override public void mouseClicked(MouseEvent e) { JFrame jf = new JFrame("Prevencija"); jf.setContentPane(new PreventionPanel()); jf.setSize(new Dimension(400, 400)); jf.setVisible(true); jf.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); } }); bottom.add(prevention);
{ "domain": "codereview.stackexchange", "id": 18488, "tags": "java, object-oriented, swing" }
Very slow graph walking
Question: My code is very very slow. Could you give me hints on how I can make it much faster? for (stdext::hash_map<unsigned __int32, CVertex*>::iterator it1 = graph.begin(); it1 != graph.end(); it1++) { for (std::vector<CVertex*>::iterator it2 = it1->second->getNeighbors().begin(); it2 != it1->second->getNeighbors().end(); it2++) { if ((*it2)->isEqualTo(it1->second)) continue; for (std::vector<CVertex*>::iterator it3 = (*it2)->getNeighbors().begin(); it3 != (*it2)->getNeighbors().end(); it3++) { if ((*it3)->isEqualTo(*it2) || (*it3)->isEqualTo(it1->second)) continue; for (std::vector<CVertex*>::iterator it4 = (*it3)->getNeighbors().begin(); it4 != (*it3)->getNeighbors().end(); it4++) { if ((*it4)->isEqualTo(*it3) || (*it4)->isEqualTo(*it2) || (*it4)->isEqualTo(it1->second)) continue; for (std::vector<CVertex*>::iterator it5 = (*it4)->getNeighbors().begin(); it5 != (*it4)->getNeighbors().end(); it5++) { if ((*it5)->isEqualTo(*it4) || (*it5)->isEqualTo(*it3) || (*it5)->isEqualTo(*it2) || (*it5)->isEqualTo(it1->second)) continue; for (std::vector<CVertex*>::iterator it6 = (*it5)->getNeighbors().begin(); it6 != (*it5)->getNeighbors().end(); it6++) { if (it1->second->isEqualTo(*it6)) { unsigned __int32 *circle = new unsigned __int32[5]; circle[0] = it1->second->getWord(); circle[1] = (*it2)->getWord(); circle[2] = (*it3)->getWord(); circle[3] = (*it4)->getWord(); circle[4] = (*it5)->getWord(); m_results.push_back(circle); } } } } } } } I think this question is not subjective because I've found many good things on the Internet yet: I use only vectors which is much faster (the not reserved too) because of the cache. You cannot see here, but I use only inline function in vertex class. I tried to reduce the numbers into uint 32 bit. But I still miss good trick because its time is lifetime. For example, do you think I can make better compiler settings? 
Or it would also be very useful if you told me that it cannot be made much faster, because in that case I won't waste more energy on finding a better solution. Answer: Since you've only posted partial code, all we can do is offer some suggestions: Google suggests that the performance of stdext::hash_map might be improved by adding #define _SECURE_SCL 0. See this MSDN article for background. Your ifs look like they might fall victim to branch misprediction. This StackOverflow answer explains that better than I ever could. If you haven't done so already, take care to allocate your vertices (and their internal data) contiguously. The simplest way to do that is usually to shove everything into a vector.
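The six nested loops in the question enumerate oriented 5-cycles: five distinct vertices with a closing edge back to the start. Restating the walk as a recursion over the current path removes the growing chains of isEqualTo checks; here is a language-agnostic sketch in Python of what the posted code computes (the function name and adjacency format are invented for the sketch):

```python
def five_cycles(adj):
    """Enumerate oriented 5-cycles in a graph given as an adjacency dict;
    a recursive restatement of the question's six nested loops."""
    results = []

    def extend(path):
        if len(path) == 5:
            if path[0] in adj[path[-1]]:   # closing edge back to the start vertex
                results.append(tuple(path))
            return
        for nxt in adj[path[-1]]:
            if nxt not in path:            # replaces the chained isEqualTo checks
                extend(path + [nxt])

    for v in adj:
        extend([v])
    return results

# A 5-vertex ring: each cycle is reported once per start vertex and direction,
# exactly as the original nested loops would report it.
ring = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
print(len(five_cycles(ring)))  # 10
```

The asymptotic work is the same as the original, but the path-membership test scales to any cycle length, and deduplicating rotations/reflections becomes a one-line canonicalization step if needed.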
{ "domain": "codereview.stackexchange", "id": 11465, "tags": "c++, optimization, performance, graph" }
Locking during cache population
Question: Here I want to lock when populating a particular cache object, without blocking other calls to the same method requesting Foos for other barIds. I realise the MemoryCache will be thread safe, but if two or more concurrent calls come in it seems like it would be better if only one of them populates the cache and locks the others out while this is done. using System.Runtime.Caching; using Microsoft.Practices.Unity; public class FooCacheService : IFooService { private static readonly ConcurrentDictionary<string, object> FooServiceCacheLocks = new ConcurrentDictionary<string, object>(); [Dependency("Explicit")] public IFooService ExplicitService { get; set; } public IEnumerable<Foo> GetFoosForBar(int barId) { string key = barId.ToString(); object lockObject = FooServiceCacheLocks.GetOrAdd(key, new object()); object cached = MemoryCache.Default[key]; if (cached == null) { lock (lockObject) { cached = MemoryCache.Default[key]; if (cached == null) { IEnumerable<Foo> foosForBar = this.ExplicitService.GetFoosForBar(barId); MemoryCache.Default.Add(key, foosForBar, DateTime.Now.AddMinutes(5)); return foosForBar; } } } return (IEnumerable<Foo>)cached; } } Assume GetFoosForBar is a CPU/IO intensive process and takes a short while to complete. Does this seem like a reasonable way to achieve this? I'm surprised that System.Runtime.Caching does not provide any support for this as default - or have I overlooked something? Answer: What I don't like about your approach is that the lock objects are never removed from FooServiceCacheLocks, even when the object is removed from the cache. One way to simplify your code would be to combine MemoryCache with Lazy: instances of Lazy are cheap (as long as you don't access its Value) so you can create more of them than needed. 
With that your code would look something like this: public IEnumerable<Foo> GetFoosForBar(int barId) { var newLazy = new Lazy<IEnumerable<Foo>>( () => this.ExplicitService.GetFoosForBar(barId)); var lazyFromCache = (Lazy<IEnumerable<Foo>>)MemoryCache.Default.AddOrGetExisting( barId.ToString(), newLazy, DateTime.Now.AddMinutes(5)); return (lazyFromCache ?? newLazy).Value; }
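The shape of the Lazy approach, publish a cheap placeholder under a short lock and run the expensive work outside it, can be sketched language-agnostically. A minimal Python illustration (the class and method names are invented; expiration and error handling are omitted, since the point is only the locking shape):

```python
import threading

class LazyCache:
    """At most one caller runs the expensive loader per key;
    concurrent callers for the same key block and reuse the result."""

    def __init__(self):
        self._cells = {}                 # key -> (event, result holder)
        self._lock = threading.Lock()

    def get(self, key, loader):
        with self._lock:                 # held only while publishing the cell
            cell = self._cells.get(key)
            is_owner = cell is None
            if is_owner:
                cell = (threading.Event(), [])
                self._cells[key] = cell
        event, holder = cell
        if is_owner:
            holder.append(loader())      # expensive work runs outside the lock
            event.set()
        else:
            event.wait()                 # other callers wait for the owner
        return holder[0]
```

As in the Lazy version, the global lock is only held for a dictionary lookup, so requests for different keys never block each other on the expensive load.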
{ "domain": "codereview.stackexchange", "id": 4643, "tags": "c#, locking, cache" }
Efficient all pair bottleneck computation for a tree
Question: Consider a weighted tree $T = (V,E)$. The bottleneck weight for a pair of vertices $v_1,v_2 \in V$ is the highest weight of the edges on the unique path from $v_1$ to $v_2$ (if $v_1 = v_2$ it is 0). Clearly, using a simple graph traversal from a given node we can find the bottleneck weight from that node to all other nodes in $\mathcal O(|V|)$ time, so we can find all $V^2$ bottleneck weights in $\mathcal O(|V|^2)$ time. I am interested in the situation where $V$ is large, and we are not interested in finding bottlenecks for all pairs $(v_1,v_2)$, but nonetheless for an (also large) number of pairs $Q$ (the letter $Q$ stands for queries). I am wondering whether it is possible to do some preprocessing in less than $\mathcal O(|V|^2)$ time -- in particular, in $\mathcal O(|V|\log|V|)$ -- so that we produce some search mechanism, which will then produce a bottleneck weight for $(v_1,v_2)$ in $\mathcal O(\log |V|)$ time. Then we can produce the answer to all queries in $\mathcal O(|V|\log |V| + Q\log|V|)$. Note that there are $\mathcal O(|V|)$ edges, and thus also that many possible answers to a query. Hence, a balanced decision tree whose leaves are the edge weights, which takes pairs of vertices and decides the highest weight edge between them -- if such a tree exists -- would do the job. One attempt to build the tree was as follows: recursively split the original tree by removing the highest weight edge. Assign the nodes of the tree indices so that both resulting trees have indices in a contiguous range. Then make a node in the decision tree, which when given two vertices decides whether they lie in the same of the two ranges; if not, this search tree node represents the highest weight edge between the vertices; if so, continue in either of the children of the search tree node. Unfortunately, this construction fails as the decision tree need not be balanced. 
Consider, for example, the tree which is just a simple path with ever decreasing weights along the path. Balancing the tree does not seem to be an option, as the decision procedures created by the balancing operations grow too fast (traversing the tree would take too long). Any suggestions or insights are welcome. Answer: You can even get $O(|V|)$ time if the edge weights are sorted. See the following paper: Erik D. Demaine, Gad Landau, and Oren Weimann, “On Cartesian Trees and Range Minimum Queries”, in Proceedings of the 36th International Colloquium on Automata, Languages and Programming (ICALP 2009), Lecture Notes in Computer Science, volume 5555, Rhodes, Greece, July 5–12, 2009, pages 341–353.
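For reference, the $\mathcal O(|V|)$ single-source traversal mentioned at the start of the question can be written in a few lines (a Python sketch; the function name and the adjacency format, mapping each vertex to (neighbor, weight) pairs, are my own):

```python
def bottlenecks_from(tree_adj, src):
    """Max edge weight on the path from src to every other vertex,
    via one DFS over the weighted tree."""
    best = {src: 0}
    stack = [src]
    while stack:
        v = stack.pop()
        for u, w in tree_adj[v]:
            if u not in best:                # skip the edge back to the parent
                best[u] = max(best[v], w)    # bottleneck extends along the path
                stack.append(u)
    return best

# A path 0 -(5)- 1 -(3)- 2
path = {0: [(1, 5)], 1: [(0, 5), (2, 3)], 2: [(1, 3)]}
print(bottlenecks_from(path, 2))  # {2: 0, 1: 3, 0: 5}
```

Running this from every source gives the quadratic all-pairs baseline that the preprocessing schemes discussed here aim to beat.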
{ "domain": "cstheory.stackexchange", "id": 3630, "tags": "graph-algorithms, time-complexity" }
Why was PACER abandoned?
Question: The PACER project is described in this question: How much of the energy from 1 megaton H Bomb explosion could we capture to do useful work? Why was it abandoned? It seems that it is the only readily economical and engineering-wise useful path to fusion power, and it seems that its breeder possibilities could easily let it pay for itself by generating fissile elements and helium (which is getting to be rare too nowadays!). Was it political or technical limitations that killed it? Is there hope for renewed interest in this in today's energy-conscious politics? Answer: It seems the main reason was politics. The movement towards the prohibition of nuclear tests had just started, and a facility of this kind would be an ideal test range for nuclear weapons. A few hundred explosions per year plus mass production would automatically result in weapons that were a few orders of magnitude cheaper and more effective. At that time it was not a good idea to boost the development of nuclear weapons that much: they were still too complicated for average countries, and everyone wanted to postpone the time when those countries would get access to nuclear weapons.
{ "domain": "physics.stackexchange", "id": 2480, "tags": "soft-question, nuclear-engineering" }
Do I need to mention every dimension on a to-scale drawing?
Question: In my engineering drawing course, do I need to mention and show every dimension in the diagram if I've drawn everything to scale and also written the scale below? Answer: As with many questions, the real answer is "it depends." The biggest variables are what kind of engineer you are, who your audience is, and what industry you're working in. In a class setting, it is probably a good idea to consult your textbook or your teacher for their opinions in the context of the class, but here are some general factors in making the decision when you don't have someone to ask. In industry, if I know the department or company that is going to do the actual work, I usually ask them what information they need and try to tailor the drawing to them. More formally, many companies will have internal policies or follow consensus standards like ASME Y14.5 or the ISO ICS 01.100 series. But often, you won't have that luxury. Maybe you are drawing something to go out to competitive bid, or for a project that has no manufacturing pipeline set up yet. In this case you have to make decisions based on the context. A number of other answers here provide good guidelines for relatively conventional machined parts, but don't necessarily address other types of drawings you may need to produce. The general hierarchy of dimensioning goes like this: Dimension critical features: These are the items that must be right for your object to serve its purpose. For example, if you are drawing a sprocket, the center bore must be correct, or it will not fit on the shaft. For mechanical drawings, you should pay particular attention to the tolerances on these dimensions and make sure they are very clear. Make sure you are dimensioning what really matters to you, even if this isn't the most convenient way to dimension it for your audience.
In architecture and some other disciplines, you can also add "HOLD TO" to the dimension, to indicate that other dimensions may have to be adjusted slightly to make this dimension correct. Dimension all functional requirements: These are all things that matter to your design, but are not as essential as your critical features. You should make sure that they are dimensioned and clear, but you don't have to draw extra attention to them like you do with a critical dimension. For example, on the sprocket, the thickness of the sprocket needs to be machined correctly for it to fit, but it won't be a very tight tolerance. Dimension things your audience needs to know: In many designs there are some aspects that are arbitrary. Maybe in your situation the thickness of the sprocket hub doesn't matter at all - it's under very little load and going in the middle of a big empty shaft, for example. All the same, you should dimension it so that the person making it doesn't have to make the choice for you. If you want to allow them to make some economical decisions (e.g. start with a material size they have in stock), you could choose to put a very wide tolerance on the dimension. If it impacts what the next person has to do, you should provide some information. Note that this section can be a little different for different industries. Machinists, for example, are used to having a part fully defined on a drawing so there's no guesswork for them to do. On the other hand, if you are drafting a stud wall for a carpenter to frame out in a building, you would typically show them the minimum stud spacing and any special conditions (doorways, windows, etc.) but not every other detail. For example, it typically wouldn't matter to the engineer which side they started the studs from and where they ended, as long as there is no gap greater than you specified.
It might be easier for them to start from the north side because that's where the lumber is, or easier to start from the south side because the adjacent wall has already been framed. They might get clever and start the pattern centered on the wall to save one stud's worth of material. As long as the drawing contains all of the dimensions that matter to you, it's OK to leave it under-defined in this situation. For this step, it is really helpful to understand the process that someone else will be using to make your part whether it's machined, welded, carved, molded, etc. Dimension things that other audiences would like to know (optional:) Often there is useful information you could provide, even though it doesn't matter to the person you're sending the drawing to. In the sprocket example, the machinist won't care what the pitch diameter is if you've already given them the profile to cut. All the same, you might decide to add it. This information will be useful if another mechanical designer wants to use your sprocket and comes across your drawing first. Generally since these dimensions are just for information, not an instruction to the next person, they will be indicated as reference dimensions. That should be done by placing them in parenthesis or adding the abbreviation 'REF' after the word. Once all of these dimensions are listed, and the appropriate tolerances, materials, processes, etc are specified, your drawing should be complete. Now it's possible that if your object is complicated, the drawing will be cluttered with lots of dimensions that may seem redundant. There are a few options to reduce the total number of dimensions but retain the important information. This list is certainly not exhaustive, but it should get you started in most applications. Note that some of these make more work for your audience, so they may not always be received warmly. 
Patterns: If your features follow a grid pattern, it is sufficient to dimension one object in each row/column rather than each item. Similarly if your objects are all distributed rotationally around a point, you could dimension the circle they fall on and the spacing between them. Then as long as one starting position is given, the next person can construct the rest of the pattern. When the pattern is not obvious, it's a good idea to provide dotted lines that clarify which features are in the pattern and which aren't. Typical: If your drawing includes many features that match (entirely or in some aspect) you may dimension one and add "TYPICAL" or just "TYP" after to indicate that everything that looks like that, is the same as that. Similarly if a feature appears more than one place, you can describe the number - give the details on one location and then specify "8 PLACES" for example. Symmetry: If your object is symmetrical, you need only dimension one half of it, and then show a center line. Profile-controlled Geometry: When a feature or part is truly too complex to dimension clearly, even after you have considered detail and auxiliary views, your best bet may be a profile tolerance. With this option, you will provide a (digital) model of the curve or surface to your audience, and your drawing will show the shape without any actual dimensions. A profile tolerance block should be used, which indicates that the dimensions are controlled by the profile and gives your audience information on what the tolerance is. Using a profile dimension on an object that you could dimension reasonably easily would come across as lazy and unprofessional, so save this for truly complex shapes! More information on profile tolerances is available here and here. Note that even when you aren't using profile controlled geometry, you may still give a copy of your digital file to the shop you are working with as a courtesy to speed up their process. 
In general if you provide a properly dimensioned drawing, it will be considered to supersede the computer geometry unless you specifically agree otherwise. Lastly, there are a number of good textbooks on drafting that will go into much more detail with better examples and style guidance. One that I would recommend is Technical Drawing with Engineering Graphics (Giesecke et al.)
{ "domain": "engineering.stackexchange", "id": 1019, "tags": "drafting" }
Multiple "/" in catalysts
Question: What does it mean if a catalyst has multiple "/" symbols, such as $\ce{Mn/Fe/Al2O3}$? Does it mean either $\ce{Mn}$ or $\ce{Fe}$, or does it mean both are supported on $\ce{Al2O3}$? Answer: There should be only one "/" symbol. When more are used, it should be unambiguous from the context what is the carrier, e.g. in your case it would be more appropriate to use a dash: $$\ce{Mn-Fe/Al2O3}$$ From section 1.3 Composition, structure and texture of catalysts of 1975 IUPAC recommendations for heterogeneous catalysis [1, p. 79] (emphasis mine): Support. In multiphase catalysts, the active catalytic material is often present as the minor component dispersed upon a support sometimes called a carrier. The support may be catalytically inert but it may contribute to the overall catalytic activity. Certain bifunctional catalysts (§1.2.8) constitute an extreme example of this. In naming such a catalyst, the active component should be listed first, the support second and the two words or phrases should be separated by a solidus, for example, platinum/silica or platinum/silica-alumina. The solidus is sometimes replaced by the word "on", for example, platinum on alumina. Promoter. In some cases, a relatively small quantity of one or more substances, the promoter or promoters, when added to a catalyst improves the activity, the selectivity, or the useful lifetime of the catalyst. In general, a promoter may either augment a desired reaction or suppress an undesired one. There is no formal system of nomenclature for designating promoted catalysts. One may, however, for example, employ the phrase "iron promoted with alumina and potassium oxide". References Burwell, R. L. Manual of Symbols and Terminology for Physicochemical Quantities and Units. Appendix II. Heterogeneous Catalysis; Pure Appl. Chem. 46, 71, 1976. https://doi.org/10.1351/pac197646010071 (Open Access)
{ "domain": "chemistry.stackexchange", "id": 11390, "tags": "catalysis, notation" }
Can you determinize an NFA in PSPACE?
Question: QUESTION Given some NFA $A$, can you simulate the determinization of it (using Subset-Construction for example) while remaining in $PSPACE$? MORE DETAILS I'm asking this as I want to be able to construct $\overline A$ (the complement of $A$) in poly-space complexity. More specifically, I want to receive $\left <A,w \right >$ as input ($A$ is an NFA) and decide if $w \in \overline A$. So I want to construct $\overline A$ and simulate (in poly-space) $\overline A$ on $w$. MY ATTEMPTS I know that determinizing an NFA (using Subset-Construction) can blow up exponentially. But, I thought that the determinization and/or the simulation can happen "on the fly", where each step overrides the previous one. Other than this thought, I can't manage to develop it into an actual algorithm (if it's at all possible). Thank you. Answer: The subset construction can be computed (and negated) in polynomial space. Remember that the output doesn't count as part of the space usage in bounded space computation. Determinization can exponentially increase the number of states: the language of $0$–$1$ strings such that the $k$th-from-last character is $0$ can be recognized by an NFA with something like $k+1$ states, but any DFA for that language has at least $2^k$ states. So determinizing doesn't give you a polynomial space algorithm to determine if an NFA accepts its input. But you can simulate all computation paths of the NFA simultaneously: as you read the input, just keep track of the set of states the automaton could be in, and see if that set includes an accepting state once you've read the whole input.
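The "on the fly" simulation the answer describes can be sketched in a few lines. The dict-based NFA encoding below is a hypothetical illustration, not from the question; the example automaton recognizes strings whose 2nd-from-last character is 0, a small member of the $k$th-from-last family mentioned in the answer:

```python
# Decide whether an NFA accepts w using space polynomial in the number
# of states: keep only the current set of reachable states, overwriting
# it at each step instead of ever materializing the full DFA.

def nfa_accepts(states, delta, start, accepting, w):
    """delta maps (state, symbol) to a set of successor states."""
    current = {start}
    for symbol in w:
        nxt = set()
        for q in current:
            nxt |= delta.get((q, symbol), set())
        current = nxt  # previous subset is discarded: space stays O(|states|)
    return bool(current & accepting)

# Toy NFA for "the 2nd-from-last character is 0" (3 states; a DFA needs 4).
states = {"s", "a", "b"}
delta = {
    ("s", "0"): {"s", "a"},  # nondeterministic guess: this 0 is 2nd-from-last
    ("s", "1"): {"s"},
    ("a", "0"): {"b"},
    ("a", "1"): {"b"},
}

print(nfa_accepts(states, delta, "s", {"b"}, "1101"))  # True
print(nfa_accepts(states, delta, "s", {"b"}, "1110"))  # False
```

Membership in the complement is then just the negation of the return value, which is why the same loop also decides $w \in \overline A$ in polynomial space.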
{ "domain": "cs.stackexchange", "id": 14491, "tags": "complexity-theory, computability, finite-automata, space-complexity" }
How to visualize the rearrangement of a pentacyclic triterpene alcohol?
Question: I was looking into the exam paper of last year when I found this: Propose arrow-pushing mechanism for the reaction below. I suppose it's a Wagner–Meerwein rearrangement. (Sorry for the dirty mechanism graph.) Is the answer correct? How would I explain the stereochemical configuration of the reaction? And why doesn't the elimination (the leaving of proton) happen at carbon A, as the cation intermediate is just as stable as the others? I tried drawing chair conformation graph of the reaction. I marked the possibly important groups in red for convenience. Answer: You are correct about the beautiful chain of eight Wagner-Meerwein rearrangements. The stereochemistry of each of these is given by the fact that this arrangement is suprafacial and proceeds under retention, thus a group on the top of the molecule will stay above the ring plane and a group below will stay below. During the rearrangement process, since each intermediate cation is planar, the rings A to D (from where the hydroxy group was to where the double bond will be) will undergo a ring flip each, preserving the general trans-decalin structure for each pair of rings. Once the final rearrangement has occurred, we arrive at that position in the molecule which features a cis-decalin system: note how in your drawing of the starting material the final ring E is not aligned with all the others but pointing downward. Assuming that final proton, which is eliminated, would actually migrate, we would generate the final carbocation at the cis-decalin junction. The methyl group between the eliminating proton and carbon A is in the wrong orientation to migrate; the carbon belonging to ring E has a much greater tendency because it is better aligned (anti to the proton). That migration would lead to a 6,5-spiro compound and a trisubstituted double bond. 
That isn’t bad in itself, but the pathway asked for in the question leads to a trans-decalin system by elimination, which features much less strain than a spiro compound. Additionally, a tetrasubstituted double bond is also more stable. To arrive at the all-trans-polycyclic system, the proton on the junction of rings D and E must eliminate, which it is happy to do. Ring D flips, which now turns the cis-decalin system of DE into a trans-decalin system, ring strain is released and the reaction terminates.
{ "domain": "chemistry.stackexchange", "id": 6699, "tags": "organic-chemistry, reaction-mechanism" }
hrwros_gazebo package gives me an error when I do 'catkin build'. Does anyone understand the error? I am using Gazebo 9 on Ubuntu 18
Question: Straight dump of the error message: << hrwros_gazebo:make /home/arslan/hrwros_ws/logs/hrwros_gazebo/build.make.003.log In file included from /home/arslan/hrwros_ws/src/hrwros_gazebo/src/plugins/VacuumGripperPlugin.cc:40:0: /home/arslan/hrwros_ws/src/hrwros_gazebo/include/hrwros_gazebo/plugins/ARIAC.hh:280:13: error: ‘math’ does not name a type; did you mean ‘tanh’? public: math::Pose pose; ^~~~ tanh /home/arslan/hrwros_ws/src/hrwros_gazebo/include/hrwros_gazebo/plugins/ARIAC.hh: In function ‘std::ostream& ariac::operator<<(std::ostream&, const ariac::KitObject&)’: /home/arslan/hrwros_ws/src/hrwros_gazebo/include/hrwros_gazebo/plugins/ARIAC.hh:268:33: error: ‘const class ariac::KitObject’ has no member named ‘pose’ _out << "Pose: [" << _obj.pose << "]" << std::endl; ^~~~ /home/arslan/hrwros_ws/src/hrwros_gazebo/src/plugins/VacuumGripperPlugin.cc: At global scope: /home/arslan/hrwros_ws/src/hrwros_gazebo/src/plugins/VacuumGripperPlugin.cc:81:23: error: ‘math’ does not name a type; did you mean ‘tanh’? public: math::Box dropRegion; ^~~~ tanh /home/arslan/hrwros_ws/src/hrwros_gazebo/src/plugins/VacuumGripperPlugin.cc:84:23: error: ‘math’ does not name a type; did you mean ‘tanh’? 
public: math::Pose destination; ^~~~ tanh /home/arslan/hrwros_ws/src/hrwros_gazebo/src/plugins/VacuumGripperPlugin.cc: In member function ‘bool gazebo::VacuumGripperPluginPrivate::DropObject::operator==(const gazebo::VacuumGripperPluginPrivate::DropObject&) const’: /home/arslan/hrwros_ws/src/hrwros_gazebo/src/plugins/VacuumGripperPlugin.cc:60:25: error: ‘const class gazebo::VacuumGripperPluginPrivate::DropObject’ has no member named ‘dropRegion’ this->dropRegion == _obj.dropRegion && \ ^~~~~~~~~~ /home/arslan/hrwros_ws/src/hrwros_gazebo/src/plugins/VacuumGripperPlugin.cc:60:44: error: ‘const class gazebo::VacuumGripperPluginPrivate::DropObject’ has no member named ‘dropRegion’ this->dropRegion == _obj.dropRegion && \ ^~~~~~~~~~ /home/arslan/hrwros_ws/src/hrwros_gazebo/src/plugins/VacuumGripperPlugin.cc:61:25: error: ‘const class gazebo::VacuumGripperPluginPrivate::DropObject’ has no member named ‘destination’ this->destination == _obj.destination; ^~~~~~~~~~~ /home/arslan/hrwros_ws/src/hrwros_gazebo/src/plugins/VacuumGripperPlugin.cc:61:45: error: ‘const class gazebo::VacuumGripperPluginPrivate::DropObject’ has no member named ‘destination’ this->destination == _obj.destination; ^~~~~~~~~~~ /home/arslan/hrwros_ws/src/hrwros_gazebo/src/plugins/VacuumGripperPlugin.cc: In function ‘std::ostream& gazebo::operator<<(std::ostream&, const gazebo::VacuumGripperPluginPrivate::DropObject&)’: /home/arslan/hrwros_ws/src/hrwros_gazebo/src/plugins/VacuumGripperPlugin.cc:72:30: error: ‘const class gazebo::VacuumGripperPluginPrivate::DropObject’ has no member named ‘dropRegion’ _out << _obj.dropRegion << std::endl; ^~~~~~~~~~ /home/arslan/hrwros_ws/src/hrwros_gazebo/src/plugins/VacuumGripperPlugin.cc:73:44: error: ‘const class gazebo::VacuumGripperPluginPrivate::DropObject’ has no member named ‘destination’ _out << " Dst: [" << _obj.destination << "]" << std::endl; ^~~~~~~~~~~ /home/arslan/hrwros_ws/src/hrwros_gazebo/src/plugins/VacuumGripperPlugin.cc: In destructor ‘virtual 
gazebo::VacuumGripperPlugin::~VacuumGripperPlugin()’: /home/arslan/hrwros_ws/src/hrwros_gazebo/src/plugins/VacuumGripperPlugin.cc:199:53: error: ‘class gazebo::physics::World’ has no member named ‘GetRunning’; did you mean ‘Running’? if (this->dataPtr->world && this->dataPtr->world->GetRunning()) ^~~~~~~~~~ Running /home/arslan/hrwros_ws/src/hrwros_gazebo/src/plugins/VacuumGripperPlugin.cc:201:38: error: ‘class gazebo::physics::World’ has no member named ‘GetPhysicsEngine’; did you mean ‘SetPhysicsEnabled’? auto mgr = this->dataPtr->world->GetPhysicsEngine()->GetContactManager(); ^~~~~~~~~~~~~~~~ SetPhysicsEnabled /home/arslan/hrwros_ws/src/hrwros_gazebo/src/plugins/VacuumGripperPlugin.cc: In member function ‘virtual void gazebo::VacuumGripperPlugin::Load(gazebo::physics::ModelPtr, sdf::ElementPtr)’: /home/arslan/hrwros_ws/src/hrwros_gazebo/src/plugins/VacuumGripperPlugin.cc:213:51: error: ‘class gazebo::physics::World’ has no member named ‘GetName’; did you mean ‘Name’? this->dataPtr->node->Init(this->dataPtr->world->GetName()); ^~~~~~~ Name /home/arslan/hrwros_ws/src/hrwros_gazebo/src/plugins/VacuumGripperPlugin.cc:218:29: error: ‘class gazebo::physics::World’ has no member named ‘GetPhysicsEngine’; did you mean ‘SetPhysicsEnabled’? 
this->dataPtr->world->GetPhysicsEngine()->CreateJoint( ^~~~~~~~~~~~~~~~ SetPhysicsEnabled /home/arslan/hrwros_ws/src/hrwros_gazebo/src/plugins/VacuumGripperPlugin.cc:266:15: error: ‘gazebo::math’ has not been declared gazebo::math::Vector3 min = dropRegionElem->Get<math::Vector3>("min"); ^~~~ /home/arslan/hrwros_ws/src/hrwros_gazebo/src/plugins/VacuumGripperPlugin.cc:276:15: error: ‘gazebo::math’ has not been declared gazebo::math::Vector3 max = dropRegionElem->Get<math::Vector3>("max"); ^~~~ /home/arslan/hrwros_ws/src/hrwros_gazebo/src/plugins/VacuumGripperPlugin.cc:298:7: error: ‘math’ has not been declared math::Box dropRegion = math::Box(min, max); ^~~~ /home/arslan/hrwros_ws/src/hrwros_gazebo/src/plugins/VacuumGripperPlugin.cc:299:7: error: ‘math’ has not been declared math::Pose destination = dstElement->Get<math::Pose>(); ^~~~ /home/arslan/hrwros_ws/src/hrwros_gazebo/src/plugins/VacuumGripperPlugin.cc:300:64: error: ‘dropRegion’ was not declared in this scope VacuumGripperPluginPrivate::DropObject dropObject {type, dropRegion, destination}; ^~~~~~~~~~ /home/arslan/hrwros_ws/src/hrwros_gazebo/src/plugins/VacuumGripperPlugin.cc:300:64: note: suggested alternative: ‘dropRegionElem’ VacuumGripperPluginPrivate::DropObject dropObject {type, dropRegion, destination}; ^~~~~~~~~~ dropRegionElem /home/arslan/hrwros_ws/src/hrwros_gazebo/src/plugins/VacuumGripperPlugin.cc:300:76: error: ‘destination’ was not declared in this scope VacuumGripperPluginPrivate::DropObject dropObject {type, dropRegion, destination}; ^~~~~~~~~~~ /home/arslan/hrwros_ws/src/hrwros_gazebo/src/plugins/VacuumGripperPlugin.cc:300:76: note: suggested alternative: ‘sigaction’ VacuumGripperPluginPrivate::DropObject dropObject {type, dropRegion, destination}; ^~~~~~~~~~~ sigaction /home/arslan/hrwros_ws/src/hrwros_gazebo/src/plugins/VacuumGripperPlugin.cc:300:87: error: too many initializers for ‘gazebo::VacuumGripperPluginPrivate::DropObject’ VacuumGripperPluginPrivate::DropObject dropObject {type, 
dropRegion, destination}; ^ /home/arslan/hrwros_ws/src/hrwros_gazebo/src/plugins/SideContactPlugin.cc: In destructor ‘virtual gazebo::SideContactPlugin::~SideContactPlugin()’: /home/arslan/hrwros_ws/src/hrwros_gazebo/src/plugins/SideContactPlugin.cc:35:18: error: ‘DisconnectWorldUpdateBegin’ is not a member of ‘gazebo::event::Events’ event::Events::DisconnectWorldUpdateBegin(this->updateConnection); ^~~~~~~~~~~~~~~~~~~~~~~~~~ /home/arslan/hrwros_ws/src/hrwros_gazebo/src/plugins/VacuumGripperPlugin.cc:324:38: error: ‘class gazebo::physics::World’ has no member named ‘GetPhysicsEngine’; did you mean ‘SetPhysicsEnabled’? auto mgr = this->dataPtr->world->GetPhysicsEngine()->GetContactManager(); ^~~~~~~~~~~~~~~~ SetPhysicsEnabled /home/arslan/hrwros_ws/src/hrwros_gazebo/src/plugins/SideContactPlugin.cc: In member function ‘virtual void gazebo::SideContactPlugin::Load(gazebo::physics::ModelPtr, sdf::ElementPtr)’: /home/arslan/hrwros_ws/src/hrwros_gazebo/src/plugins/SideContactPlugin.cc:52:33: error: ‘class gazebo::physics::World’ has no member named ‘GetName’; did you mean ‘Name’? this->node->Init(this->world->GetName()); ^~~~~~~ Name /home/arslan/hrwros_ws/src/hrwros_gazebo/src/plugins/VacuumGripperPlugin.cc: In member function ‘void gazebo::VacuumGripperPlugin::OnUpdate()’: /home/arslan/hrwros_ws/src/hrwros_gazebo/src/plugins/VacuumGripperPlugin.cc:411:54: error: ‘class gazebo::physics::Model’ has no member named ‘GetWorldPose’; did you mean ‘SetWorldPose’? 
auto objPose = this->dataPtr->dropAttachedModel->GetWorldPose(); ^~~~~~~~~~~~ SetWorldPose /home/arslan/hrwros_ws/src/hrwros_gazebo/src/plugins/VacuumGripperPlugin.cc:415:20: error: ‘const class gazebo::VacuumGripperPluginPrivate::DropObject’ has no member named ‘dropRegion’ dropObject.dropRegion.Contains(objPose.pos)) ^~~~~~~~~~ /home/arslan/hrwros_ws/src/hrwros_gazebo/src/plugins/VacuumGripperPlugin.cc:422:22: error: ‘const class gazebo::VacuumGripperPluginPrivate::DropObject’ has no member named ‘destination’ dropObject.destination); ^~~~~~~~~~~ /home/arslan/hrwros_ws/src/hrwros_gazebo/src/plugins/VacuumGripperPlugin.cc: In member function ‘void gazebo::VacuumGripperPlugin::OnContacts(ConstContactsPtr&)’: /home/arslan/hrwros_ws/src/hrwros_gazebo/src/plugins/VacuumGripperPlugin.cc:450:31: error: ‘class gazebo::physics::World’ has no member named ‘GetEntity’ this->dataPtr->world->GetEntity(_msg->contact(i).collision1())); ^~~~~~~~~ /home/arslan/hrwros_ws/src/hrwros_gazebo/src/plugins/VacuumGripperPlugin.cc:452:31: error: ‘class gazebo::physics::World’ has no member named ‘GetEntity’ this->dataPtr->world->GetEntity(_msg->contact(i).collision2())); ^~~~~~~~~ /home/arslan/hrwros_ws/src/hrwros_gazebo/src/plugins/VacuumGripperPlugin.cc: In member function ‘bool gazebo::VacuumGripperPlugin::GetContactNormal()’: /home/arslan/hrwros_ws/src/hrwros_gazebo/src/plugins/VacuumGripperPlugin.cc:483:31: error: ‘class gazebo::physics::World’ has no member named ‘GetEntity’ this->dataPtr->world->GetEntity(name1)); ^~~~~~~~~ /home/arslan/hrwros_ws/src/hrwros_gazebo/src/plugins/SideContactPlugin.cc:109:39: error: ‘class gazebo::physics::World’ has no member named ‘GetSimTime’; did you mean ‘SetSimTime’? 
this->lastUpdateTime = this->world->GetSimTime(); ^~~~~~~~~~ SetSimTime /home/arslan/hrwros_ws/src/hrwros_gazebo/src/plugins/VacuumGripperPlugin.cc:493:31: error: ‘class gazebo::physics::World’ has no member named ‘GetEntity’ this->dataPtr->world->GetEntity(name2)); ^~~~~~~~~ /home/arslan/hrwros_ws/src/hrwros_gazebo/src/plugins/VacuumGripperPlugin.cc: In member function ‘void gazebo::VacuumGripperPlugin::HandleAttach()’: /home/arslan/hrwros_ws/src/hrwros_gazebo/src/plugins/VacuumGripperPlugin.cc:517:49: error: ‘math’ has not been declared this->dataPtr->modelCollision->GetLink(), math::Pose()); ^~~~ /home/arslan/hrwros_ws/src/hrwros_gazebo/src/plugins/SideContactPlugin.cc: In member function ‘bool gazebo::SideContactPlugin::FindContactSensor()’: /home/arslan/hrwros_ws/src/hrwros_gazebo/src/plugins/SideContactPlugin.cc:131:20: error: ‘class gazebo::physics::World’ has no member named ‘GetName’; did you mean ‘Name’? this->world->GetName() + "::" + link->GetScopedName() + "::" + this->contactSensorName; ^~~~~~~ Name /home/arslan/hrwros_ws/src/hrwros_gazebo/src/plugins/VacuumGripperPlugin.cc: In member function ‘bool gazebo::VacuumGripperPlugin::CheckModelContact()’: /home/arslan/hrwros_ws/src/hrwros_gazebo/src/plugins/VacuumGripperPlugin.cc:584:59: error: ‘class gazebo::physics::Link’ has no member named ‘GetWorldPose’; did you mean ‘SetWorldPose’? 
auto gripperLinkPose = this->dataPtr->suctionCupLink->GetWorldPose().Ign(); ^~~~~~~~~~~~ SetWorldPose /home/arslan/hrwros_ws/src/hrwros_gazebo/src/plugins/VacuumGripperPlugin.cc:585:5: error: ‘math’ has not been declared math::Vector3 gripperLinkNormal = ^~~~ /home/arslan/hrwros_ws/src/hrwros_gazebo/src/plugins/VacuumGripperPlugin.cc:587:24: error: ‘gripperLinkNormal’ was not declared in this scope double alignment = gripperLinkNormal.Dot(this->dataPtr->modelContactNormal); ^~~~~~~~~~~~~~~~~ /home/arslan/hrwros_ws/src/hrwros_gazebo/src/plugins/ProximityRayPlugin.cc: In member function ‘virtual void gazebo::ProximityRayPlugin::Load(gazebo::sensors::SensorPtr, sdf::ElementPtr)’: /home/arslan/hrwros_ws/src/hrwros_gazebo/src/plugins/ProximityRayPlugin.cc:110:76: error: ‘class gazebo::physics::World’ has no member named ‘GetEntity’ this->link = boost::dynamic_pointer_cast<physics::Link>(this->world->GetEntity(linkName)); ^~~~~~~~~ /home/arslan/hrwros_ws/src/hrwros_gazebo/src/plugins/SideContactPlugin.cc: In member function ‘virtual void gazebo::SideContactPlugin::CalculateContactingLinks()’: /home/arslan/hrwros_ws/src/hrwros_gazebo/src/plugins/SideContactPlugin.cc:185:67: error: ‘class gazebo::physics::World’ has no member named ‘GetEntity’ boost::static_pointer_cast<physics::Collision>(this->world->GetEntity(*collision)); ^~~~~~~~~ /home/arslan/hrwros_ws/src/hrwros_gazebo/src/plugins/ProximityRayPlugin.cc: In member function ‘virtual void gazebo::ProximityRayPlugin::OnNewLaserScans()’: /home/arslan/hrwros_ws/src/hrwros_gazebo/src/plugins/ProximityRayPlugin.cc:126:60: error: ‘class gazebo::physics::World’ has no member named ‘GetSimTime’; did you mean ‘SetSimTime’? 
msgs::Set(this->stateMsg.mutable_stamp(), this->world->GetSimTime()); ^~~~~~~~~~ SetSimTime /home/arslan/hrwros_ws/src/hrwros_gazebo/src/plugins/SideContactPlugin.cc: In member function ‘virtual void gazebo::SideContactPlugin::ClearContactingModels()’: /home/arslan/hrwros_ws/src/hrwros_gazebo/src/plugins/SideContactPlugin.cc:213:25: error: ‘math’ has not been declared model->SetWorldPose(math::Pose(0, 0, -1, 0, 0, 0)); ^~~~ /home/arslan/hrwros_ws/src/hrwros_gazebo/src/plugins/SideContactPlugin.cc: In member function ‘bool gazebo::SideContactPlugin::TimeToExecute()’: /home/arslan/hrwros_ws/src/hrwros_gazebo/src/plugins/SideContactPlugin.cc:224:47: error: ‘class gazebo::physics::World’ has no member named ‘GetSimTime’; did you mean ‘SetSimTime’? gazebo::common::Time curTime = this->world->GetSimTime(); ^~~~~~~~~~ SetSimTime make[2]: *** [CMakeFiles/VacuumGripperPlugin.dir/src/plugins/VacuumGripperPlugin.cc.o] Error 1 make[1]: *** [CMakeFiles/VacuumGripperPlugin.dir/all] Error 2 make[1]: *** Waiting for unfinished jobs.... make[2]: *** [CMakeFiles/ProximityRayPlugin.dir/src/plugins/ProximityRayPlugin.cc.o] Error 1 make[1]: *** [CMakeFiles/ProximityRayPlugin.dir/all] Error 2 make[2]: *** [CMakeFiles/SideContactPlugin.dir/src/plugins/SideContactPlugin.cc.o] Error 1 make[1]: *** [CMakeFiles/SideContactPlugin.dir/all] Error 2 make: *** [all] Error 2 cd /home/arslan/hrwros_ws/build/hrwros_gazebo; catkin build --get-env hrwros_gazebo | catkin env -si /usr/bin/make --jobserver-fds=6,7 -j; cd - ............................................................................... Failed << hrwros_gazebo:make [ Exited with code 2 ] Failed <<< hrwros_gazebo [ 17.8 seconds ] [build] Summary: 15 of 16 packages succeeded. [build] Ignored: 1 packages were skipped or are blacklisted. [build] Warnings: None. [build] Abandoned: None. [build] Failed: 1 packages failed. 
[build] Runtime: 19.5 seconds total Originally posted by hafizas101 on ROS Answers with karma: 3 on 2019-09-13 Post score: 0 Answer: If hrwros_gazebo is one of the packages from the edX MOOC "Hello (Real) World with ROS", then unfortunately you cannot build those right now on Melodic. The MOOC uses a special deployment image with Kinetic and Gazebo 7 for exactly this reason: the plugins and packages used have a hard dependency on Gazebo 7, and they must be updated to build with Gazebo 9. Perhaps there are other users on the forum of the MOOC that have figured this out, but at this time the advice would be to either use the Singularity image that is provided by the MOOC, or install Kinetic. Originally posted by gvdhoorn with karma: 86574 on 2019-09-14 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by hafizas101 on 2019-09-16: Yes you are right about the package. I have heard that the same course will start again on 15th January, 2020. I request you to update your future course so that melodic users can also benefit from your course. Thanks for your response. It helped.
{ "domain": "robotics.stackexchange", "id": 33763, "tags": "ros, gazebo, ros-melodic, plugin" }
Why do heuristic functions only approximate the real value of the cost?
Question: As stated in the title, I'm wondering why heuristic functions only approximate the real value of the cost. I understand it can never overestimate, but can it ensure the cost is accurate? Answer: If you create a heuristic that returns the exact cost for each node in the search tree, you can find the optimal solution easily: Start at the initial state and generate all successor states. Take the state with the best heuristic value and repeat, until you find the goal state. Because the estimated cost (from the heuristic) for each node in the search tree is equal to the actual cost, you never make a wrong decision. Unfortunately, for most problems, constructing such a heuristic would mean that for each node to evaluate you calculate the solution and then return the cost of the solution. Thus you already solve your problem inside the heuristic function.
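The greedy procedure from the answer can be written down directly. The tiny graph and its exact cost-to-go table h* below are hypothetical toy data; at each step the next state is chosen by edge cost plus estimated remaining cost:

```python
def follow_perfect_heuristic(successors, h, start, is_goal):
    """Greedy descent: when h equals the true cost-to-go h*,
    edge_cost + h(successor) is exact, so no step is ever wrong
    and no backtracking or frontier is needed."""
    path = [start]
    state = start
    while not is_goal(state):
        # Pick the successor minimizing edge cost + estimated remaining cost.
        state, _ = min(successors[state], key=lambda sc: sc[1] + h(sc[0]))
        path.append(state)
    return path

# successors[state] -> list of (next_state, edge_cost); "G" is the goal.
graph = {
    "A": [("B", 1), ("C", 2)],
    "B": [("G", 5)],
    "C": [("G", 1)],
}
h_star = {"A": 3, "B": 5, "C": 1, "G": 0}  # exact remaining cost per state

path = follow_perfect_heuristic(graph, h_star.get, "A", lambda s: s == "G")
print(path)  # ['A', 'C', 'G'], the optimal route, found with no search
```

Building such an h* table of course amounts to solving the problem first, which is exactly the answer's point.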
{ "domain": "cs.stackexchange", "id": 3843, "tags": "artificial-intelligence, search-algorithms, heuristics" }
Capacitors in a series circuit with dielectric
Question: When a dielectric slab is inserted between the plates of one of the two identical capacitors in Fig. 25-23, do the following properties of that capacitor increase, decrease, or remain the same: (a) capacitance, (b) charge, (c) potential difference (d) How about the same properties of the other capacitor? CAPACITOR 1 = CAPACITOR WITH DIELECTRIC CAPACITOR 2 = CAPACITOR WITHOUT DIELECTRIC (ABOVE CAPACITOR 1 IN THE DIAGRAM) I said that the potential of the first capacitor decreases and that the charge it stores also increases. For the 2nd capacitor, I said its capacitance would decrease. I'm not so sure though, I think it may stay the same as well? Potential would increase and charge would increase for the 2nd capacitor as well. The main problem I'm having in solving this is the fact that both charge and voltage for the individual capacitors are variable. Please explain the situation and why the values for potential, capacitance and charge either decrease, increase or stay the same. What I think: When a dielectric is added, E between the capacitor decreases by a factor of k so voltage must decrease for the first capacitor and thus the voltage for the 2nd capacitor must increase by the same amount to fulfill Kirchhoff's laws. Adding a dielectric also allows for a capacitor to store more charge at the same potential so the first capacitor must store more charge since c = q/v <- direct relationship. I'm confused on what happens to the 2nd capacitor. They're in series so there's that inverse relationship and total capacitance decreases. Answer: You have to deal with this in stages: Adding a dielectric to $C1$ will increase its capacitance. With the same charge present, that means its voltage $V_{C1}$ will go down. Since $V_{C1}$ goes down, the total of $V_{C1}$ and $V_{C2}$ will now be less than the battery voltage $V_B$. This will start current flowing through the circuit to compensate. 
The current will add equal amounts of charge to both capacitors, and increase their voltages, until $V_{C1}+V_{C2}=V_B$. Once stability returns, the following changes will have been made: The capacitance of $C1$ will have increased, but that of $C2$ will stay the same The charge on both capacitors will have increased, and by the same amount The voltage on $C1$ will have decreased, and that of $C2$ will have increased
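The stages above can be checked with a quick numeric example. All values here (1 μF capacitors, a 10 V battery, dielectric constant k = 3) are assumptions for illustration, not from the textbook figure:

```python
# Series pair across a battery V_B; C1 receives the dielectric.
C = 1e-6      # each capacitor, farads (assumed value)
V_B = 10.0    # battery voltage, volts (assumed value)
k = 3.0       # dielectric constant of the slab (assumed value)

def series(c1, c2):
    return c1 * c2 / (c1 + c2)

# Before the dielectric: identical capacitors split the voltage equally.
Q0 = series(C, C) * V_B
V1_before, V2_before = Q0 / C, Q0 / C

# After inserting the dielectric: C1 -> k*C, C2 unchanged. The battery
# pushes equal extra charge onto both until V1 + V2 = V_B again.
C1 = k * C
Q1 = series(C1, C) * V_B
V1_after, V2_after = Q1 / C1, Q1 / C

print(Q0, Q1)               # charge on each rises (about 5e-06 C -> 7.5e-06 C)
print(V1_before, V1_after)  # C1's voltage falls (about 5.0 V -> 2.5 V)
print(V2_before, V2_after)  # C2's voltage rises (about 5.0 V -> 7.5 V)
assert abs(V1_after + V2_after - V_B) < 1e-9  # Kirchhoff still satisfied
```

The numbers match the answer's conclusions: both charges grow by the same amount, V across C1 drops, and V across C2 grows by exactly what C1 lost.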
{ "domain": "physics.stackexchange", "id": 30334, "tags": "homework-and-exercises, electrostatics, electric-circuits, capacitance, dielectric" }
If a dataset is imbalanced in real life, should I train on my machine learning model on an imbalanced dataset
Question: I have a dataset where around 20% of the data is the positive class and 80% of the data is the negative class. When I undersample and train my classifier on a balanced dataset and test on a balanced dataset, the results are pretty ok. However, if I train on the balanced dataset and test on an imbalanced dataset that replicates the real world (80-20 split) the metrics are not great. Should I train the model on the original imbalanced dataset if I want it to perform well on real world test data that is also imbalanced? Answer: When I undersample and train my classifier on a balanced dataset and test on a balanced dataset, the results are pretty ok It's not surprising that the results are good since the job is easier in this case. It's actually a mistake to test on the artificially balanced dataset, since it's not a fair evaluation of how the system will perform with real data. Should I train the model on the original imbalanced dataset if I want it to perform well on real world test data that is also imbalanced? Both training on the original dataset and training on the balanced dataset are valid methods; choosing between the two options is a matter of design and performance. It's often a good idea to try both and then pick the one which performs better on the real imbalanced dataset.
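The mismatch the answer points out can be reproduced with plain arithmetic. The classifier below is described only by an assumed true-positive rate and true-negative rate (hypothetical values, not from the question), held fixed while the class prior of the test set changes:

```python
# A fixed classifier evaluated under two different class priors.

def metrics(pos_frac, tpr, tnr):
    """Accuracy and precision on a test set whose positive-class
    fraction is pos_frac, for fixed per-class error rates."""
    neg_frac = 1.0 - pos_frac
    tp = pos_frac * tpr              # fraction of data that are true positives
    fp = neg_frac * (1.0 - tnr)      # fraction of data that are false positives
    accuracy = pos_frac * tpr + neg_frac * tnr
    precision = tp / (tp + fp)
    return accuracy, precision

tpr, tnr = 0.9, 0.8  # assumed classifier behavior, identical on both sets

acc_bal, prec_bal = metrics(0.5, tpr, tnr)    # artificially balanced test set
acc_real, prec_real = metrics(0.2, tpr, tnr)  # realistic 20/80 test set

print(round(acc_bal, 3), round(prec_bal, 3))    # 0.85 0.818
print(round(acc_real, 3), round(prec_real, 3))  # 0.82 0.529
```

Accuracy barely moves, but precision drops from about 0.82 to about 0.53 once negatives dominate; this is why the evaluation should happen on the class distribution the model will actually face.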
{ "domain": "datascience.stackexchange", "id": 6812, "tags": "machine-learning, dataset, machine-learning-model" }
state_interfaces_ empty in custom ros2_control controller
Question: I have created a simple state broadcasting controller; I used the joint state broadcaster as a template. The inherited state_interfaces_ vector is empty in the controller when I try to validate the interfaces in on_configure. I have set up the interfaces in the ros2_control plugin and set the names in state_interface_configuration in the controller. I have put debug trace in both the command and state configuration functions but I don't see the trace running. Any ideas? Originally posted by neuromancer2701 on ROS Answers with karma: 71 on 2023-03-01 Post score: 0 Answer: The inherited state_interfaces_ vector is not populated until on_activate, and what it receives depends on the state interfaces requested in state_interface_configuration, which takes place after on_configure. Originally posted by neuromancer2701 with karma: 71 on 2023-03-01 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 38300, "tags": "ros2" }
Observer pattern implementation in C#
Question: In order to implement the Observer pattern in C#, one of the ways to go (at least, the one I chose) is to make classes that implement the IObservable<T> for the observable objects and the IObserver<T> for the observers. In a project of mine I created a base class from which every observable inherits: public class Observable<T> : IObservable<T> { private SubscriptionManager<T> _subscriptionManager; public Observable() { _subscriptionManager = new SubscriptionManager<T>( new List<IObserver<T>>()); } public IDisposable Subscribe(IObserver<T> observer) { _subscriptionManager.Subscribe(observer); return _subscriptionManager; } public void Notify(T obj) { _subscriptionManager.Notify(obj); } } and an IDisposable class that manages the subscriptions to the observable object: public class SubscriptionManager<T> : IDisposable { private ICollection<IObserver<T>> _observers; private IObserver<T> _observer; public SubscriptionManager(ICollection<IObserver<T>> observers) { if (observers == null) { throw new ArgumentNullException("observers"); } _observers = observers; } public void Subscribe(IObserver<T> observer) { _observers.Add(observer); _observer = observer; } public void Notify(T obj) { foreach (var observer in _observers) { observer.OnNext(obj); } } public void Dispose() { _observers.Remove(_observer); } } Do you see anything that can be done more efficiently, or in a better way? Do you see anything else that may be wrong? Answer: Here's a problem: var o = new Observable<Unit>(); var a = o.Subscribe(_ => Console.WriteLine("A")); var b = o.Subscribe(_ => Console.WriteLine("B")); a.Dispose(); o.Notify(Unit.Default); This prints A Disposing of a removed the wrong observer. I'd recommend reading Why shouldn't I implement IObservable<T>? The reason you shouldn't implement IObservable<T> is the same reason you don't usually implement IEnumerable<T>, is that somebody has most likely already built the thing you want.
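The failure the answer demonstrates comes from SubscriptionManager storing only the most recent observer. The usual fix is for every Subscribe call to return its own disposal token that remembers exactly one observer. Here is a structural sketch of that idea in Python rather than C# (names are illustrative); the shape maps one-to-one onto an IDisposable returned per subscription:

```python
class Subscription:
    """Disposal token bound to exactly one observer."""

    def __init__(self, observers, observer):
        self._observers = observers
        self._observer = observer

    def dispose(self):
        # Remove only the observer this token was created for.
        if self._observer in self._observers:
            self._observers.remove(self._observer)

class Observable:
    def __init__(self):
        self._observers = []

    def subscribe(self, observer):
        # A fresh token per call, instead of one shared manager that
        # only remembers the last subscriber.
        self._observers.append(observer)
        return Subscription(self._observers, observer)

    def notify(self, value):
        for observer in list(self._observers):
            observer(value)

o = Observable()
seen = []
a = o.subscribe(lambda _: seen.append("A"))
b = o.subscribe(lambda _: seen.append("B"))
a.dispose()      # removes A, and only A
o.notify(None)
print(seen)      # ['B']: the failing sequence from the answer now behaves
```

In the C# version this corresponds to Subscribe returning a new IDisposable each time, holding a reference to both the observer list and the one observer it should remove.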
{ "domain": "codereview.stackexchange", "id": 16232, "tags": "c#, design-patterns, observer-pattern" }
Is there a better way to get a child?
Question: I have written the following code to get ImageColorPicker child: foreach (CustomTabItem customTabItem in SelectedWindowsTabControl.Items) { TabItem ti = tabControl.ItemContainerGenerator.ContainerFromItem(customTabItem) as TabItem; Popup popup = (Helpers.FindVisualChild<Popup>(ti) as Popup); ImageColorPicker icp = (popup.Child as StackPanel).Children[0] as ImageColorPicker; ... } public class Helpers { /// <summary> /// Return the first visual child of element by type. /// </summary> /// <typeparam name="T">The type of the Child</typeparam> /// <param name="obj">The parent Element</param> public static T FindVisualChild<T>(DependencyObject obj) where T : DependencyObject { for (int i = 0; i < VisualTreeHelper.GetChildrenCount(obj); i++) { DependencyObject child = VisualTreeHelper.GetChild(obj, i); if (child != null && child is T) return (T)child; else { T childOfChild = FindVisualChild<T>(child); if (childOfChild != null) return childOfChild; } } return null; } } Here's the control template XAML of the TabItem (the relevant part): <ControlTemplate TargetType="{x:Type local:CustomTabItem}"> <Grid Height="26" Background="{TemplateBinding Background}"> <Grid.ColumnDefinitions> <ColumnDefinition Width="Auto" /> <ColumnDefinition Width="Auto" /> </Grid.ColumnDefinitions> <ContentPresenter Margin="5,0" HorizontalAlignment="Left" VerticalAlignment="Center" ContentSource="Header"> </ContentPresenter> <StackPanel Grid.Column="1" Height="16" Margin="0,0,1,0" HorizontalAlignment="Center" VerticalAlignment="Center" Orientation="Horizontal"> <ToggleButton x:Name="Edit" Width="16" Content="&#xE104;" Style="{StaticResource CustomizedMetroTabItemToggleButton}" ToolTip="Edit" /> <Popup HorizontalOffset="{Binding Width, ElementName=Edit}" IsOpen="{Binding IsChecked, Mode=TwoWay, ElementName=Edit}" Placement="Left" PlacementTarget="{Binding ElementName=Edit}" PopupAnimation="Slide" StaysOpen="False" VerticalOffset="{Binding ActualHeight, ElementName=Edit}"> <StackPanel> 
<local:ImageColorPicker x:Name="ColorPicker" Width="100" Height="100" HorizontalAlignment="Center" Source="Images/ColorWheel.png" /> </StackPanel> </Popup> </StackPanel> </Grid> </ControlTemplate> Is there a better way to get the ImageColorPicker than what I've done? (getting the TabItem, then the Popup and then the ImageColorPicker, I am sure there's a shorter way) Answer: I don't like a class that's just called Helpers - that's generally a code smell, something that ends up a big dumping ground for anything that doesn't quite fit anywhere else. Be more specific when naming things, perhaps VisualHierarchyHelper would be a better name? I'm using a very similar method - the main difference is essentially the number of return statements, and the childName parameter; I found this code on Stack Overflow a little while ago: /// <summary> /// Finds a Child of a given item in the visual tree. /// </summary> /// <param name="parent">A direct parent of the queried item.</param> /// <typeparam name="T">The type of the queried item.</typeparam> /// <param name="childName">x:Name or Name of child. </param> /// <returns>The first parent item that matches the submitted type parameter. 
/// If not matching item can be found, /// a null parent is being returned.</returns> /// <remarks> /// https://stackoverflow.com/a/1759923/1188513 /// </remarks> public static T FindChild<T>(this DependencyObject parent, string childName) where T : DependencyObject { if (parent == null) return null; T foundChild = null; var childrenCount = VisualTreeHelper.GetChildrenCount(parent); for (var i = 0; i < childrenCount; i++) { var child = VisualTreeHelper.GetChild(parent, i); var childType = child as T; if (childType == null) { foundChild = FindChild<T>(child, childName); if (foundChild != null) break; } else if (!string.IsNullOrEmpty(childName)) { var frameworkElement = child as FrameworkElement; if (frameworkElement != null && frameworkElement.Name == childName) { foundChild = (T)child; break; } } else { foundChild = (T)child; break; } } return foundChild; } Notice the guard clause preventing a NullReferenceException that your method would throw if obj was null. I think this is a pretty neat way of finding a child node in the visual tree. That said, it might be personal preference, but I think the readability of your code could benefit from implicit typing (var), especially in cases like this where the type is already pretty obvious: TabItem ti = tabControl.ItemContainerGenerator.ContainerFromItem(customTabItem) as TabItem; Popup popup = (Helpers.FindVisualChild<Popup>(ti) as Popup); ImageColorPicker icp = (popup.Child as StackPanel).Children[0] as ImageColorPicker; Becomes: var ti = tabControl.ItemContainerGenerator.ContainerFromItem(customTabItem) as TabItem; var popup = (Helpers.FindVisualChild<Popup>(ti) as Popup); var icp = (popup.Child as StackPanel).Children[0] as ImageColorPicker; And here you would have to make sure popup isn't null before accessing its Child member, if you want to avoid that possible NullReferenceException. 
Also, you're casting too much - T should be of the type you've specified, so the return type of FindChild<ImageColorPicker> is ImageColorPicker, casting it to ImageColorPicker is redundant. Update The ImageColorPicker child has a Popup parent, which has a StackPanel parent, which has a Grid parent, which has a TabItem parent. You're not fully using the recursiveness of your function when you're getting the color picker. I'd believe you could get it like this: var tab = tabControl.ItemContainerGenerator.ContainerFromItem(customTabItem) as TabItem; var picker = VisualHierarchyHelper.FindChild<ImageColorPicker>(tab, "ColorPicker"); That should work, because the search is recursive; you don't need to get everything in-between.
{ "domain": "codereview.stackexchange", "id": 6659, "tags": "c#, wpf, xaml" }
What does CkC mean in chemical bonding?
Question: I'm working on a chemistry question where I'm supposed to use average bond energies to estimate enthalpy changes for a reaction. It gives me these bond energies to work with: $$\begin{align} \ce{C-H}&=&413\\ \ce{C-C}&=&348\\ \ce{C-Br}&=&276\\ \ce{H-Br}&=&366\\ \ce{CkC}&=&614\\ \end{align}$$ I know how to do the question, but I don't know what $\ce{CkC}$ is. I can't find it online or in my textbook. What does the "k" mean? Answer: $\ce{C=C}$ average bond energy is somewhere around $\pu{614 kJ/mol}$, so "k" probably indicates a double bond
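A quick sanity check of the bond-energy bookkeeping, treating CkC as C=C. Note the reaction itself is my assumption — the question doesn't name it, but the given bonds line up with the textbook addition of HBr across a double bond (ethene + HBr → bromoethane):

```python
# Estimate ΔH from average bond energies: ΔH ≈ Σ(bonds broken) − Σ(bonds formed).
# Values (kJ/mol) are the ones given in the question; the reaction
# (HBr addition to ethene) is an assumption, used only to illustrate the method.
bond_energy = {"C-H": 413, "C-C": 348, "C-Br": 276, "H-Br": 366, "C=C": 614}

broken = ["C=C", "H-Br"]         # bonds that must break
formed = ["C-C", "C-H", "C-Br"]  # bonds formed in bromoethane

delta_h = sum(bond_energy[b] for b in broken) - sum(bond_energy[b] for b in formed)
print(delta_h)  # → -57, i.e. exothermic
```

With the double-bond reading of "k", the numbers give a sensible, modestly exothermic estimate, which supports the interpretation.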
{ "domain": "chemistry.stackexchange", "id": 9908, "tags": "bond" }
Can we infer the existence of periodic solutions to the three-body problem from numerical evidence?
Question: I recently found out about the discovery of 13 beautiful periodic solutions to the three-body problem, described in the paper Three Classes of Newtonian Three-Body Planar Periodic Orbits. Milovan Šuvakov and V. Dmitrašinović. Phys. Rev. Lett. 110 no. 11, 114301 (2013). arXiv:1303.0181. I am particularly impressed by how elaborate the solutions are, and I'm struck by the tantalizing hint of an infinity of other distinct orbits given by the analogy with a free group. The solutions can be viewed in the Three-Body Gallery, which has animations of the new orbits in real space and in something called the 'shape sphere', which is described in the paper. I was aware already of the figure-of-eight solution, which is described nicely in A new solution to the three body problem - and more. Bill Casselman. AMS Feature Column. and which was discovered numerically by Christopher Moore (Phys. Rev. Lett. 70, 3675 (1993)). I understand that the figure-of-eight solution has been proven to actually exist as a solution of the ODE problem, in A Remarkable Periodic Solution of the Three-Body Problem in the Case of Equal Masses. Alain Chenciner and Richard Montgomery. Ann. Math 152 no. 3 (2000), pp. 881-901. There is also a large class of solutions called $N$-body choreographies by Carles Simó, in which a number of bodies - possibly more than three - all follow the same curve. Simó found a large class of them in 2000 (DOI/pdf), though this nice review (DOI) seems to imply that formal theorematic proof that they exist as periodic solutions of the ODE problem is still lacking. So, this brings me to my actual question. For the numerical simulations, however well you do them, in the end you will only have a finite-precision approximation to a solution of a differential equation that is propagated for a finite time. Additionally, you might do a numerical stability analysis that strongly suggests (or rigorously proves?) that you are (or are not) in a stable orbit. 
However, this is quite a ways away from a rigorous existence theorem for a periodic orbit with that symmetry. With this in mind, then, in what spirit are these simulations done? Is it purely a numerical approach, in the hope that good numerics do indicate existence, but with a rigorous proof left to the mathematicians through whatever other means they can manage? Or is there some overarching theorem that indicates the existence of a truly periodic solution after a given threshold? What tools are there for proving existence theorems of periodic solutions? Answer: It seems like they were able to rigorously prove the existence of N-body choreographies by using interval Krawczyk method to show that a minimum exist to the variational problem solved in the subspace of the full phase space satisfying some symmetry conditions. Following the links given I found this paper where they explain the method. It's not exactly a light reading material but on page 6 they say: "If all these conditions all fulfilled, then from Theorem 4.5 we are sure that in the set $Z \times \{c_0\}$ there is an initial condition for the choreography. Moreover, as the set Z is usually very small, the shape of the proved choreography is very similar to our first approximation." It sounds like starting with "an initial guess", they are able to show that there exist an "exact solution" very close to this initial guess. And one can probably obtain a curve that is arbitrarily close to the actual solution by doing more and more precise calculation. But the existence of the choreography is established rigorously with the assist of their numerical method. Note that in the beginning of the paper, they mention the solutions obtained by the usual numerical methods as "solutions produced in a non-rigorous numerical way."
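To give a flavour of the interval arithmetic that methods like the interval Krawczyk method are built on, here is a bare-bones toy sketch (my own illustration, far simpler than the real machinery, which also rounds endpoints outward): every operation returns an interval guaranteed to enclose all true values, so conclusions drawn from the enclosure become rigorous.

```python
# Toy interval arithmetic: evaluate f(x) = x^2 - 2 over an interval.
# (Genuinely rigorous code also rounds endpoints outward; omitted for brevity.)
class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __mul__(self, other):
        # Interval product: take min/max over all endpoint products.
        ps = [self.lo * other.lo, self.lo * other.hi,
              self.hi * other.lo, self.hi * other.hi]
        return Interval(min(ps), max(ps))

    def __sub__(self, c):  # interval minus a scalar
        return Interval(self.lo - c, self.hi - c)

x = Interval(1.4, 1.5)
fx = x * x - 2        # encloses {t**2 - 2 : 1.4 <= t <= 1.5}
print(fx.lo, fx.hi)   # ≈ -0.04  0.25 — the enclosure straddles 0
```

The point is that the computed bounds provably contain every value of $f$ on the interval; verified-computation proofs chain such enclosures together (with outward rounding) until a fixed-point theorem applies.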
{ "domain": "physics.stackexchange", "id": 12887, "tags": "mathematical-physics, newtonian-gravity, computational-physics, three-body-problem" }
feature engineering mechanism
Question: Why do we need to rescale features that have a large range? I know we do it for a faster rate of gradient descent, but how does rescaling actually work, and why doesn't it break the model? Does rescaling just decrease the training time, or does it also improve the accuracy of the model? Answer: To understand that, we have to look into models: not all models need scaling. Only linear models need scaling, like linear regression, logistic regression or neural networks. That's because they have certain criteria to fulfill to get good results; for example, for linear regression to work well, the data needs to be linearly separable. On the other hand, models like decision trees, random forests or boosting models don't need the data to be scaled, because at their core they are using trees to understand the data. So to conclude, I can say that it does improve the speed, but it also helps the model understand the data better. Tip: to learn about the linear models and their requirements, do some basic research and you will find out about the rules.
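As a minimal sketch of what rescaling does mechanically (plain NumPy standardization, not tied to any particular model): each feature is shifted and divided so all columns end up on the same scale, which is why gradient descent can then take similarly sized steps in every direction.

```python
import numpy as np

# Two features on wildly different ranges, e.g. income vs. number of rooms.
X = np.array([[30000.0, 3.0],
              [90000.0, 5.0],
              [60000.0, 4.0]])

# Standardization: subtract each column's mean, divide by its std.
# This is an invertible per-feature linear map, which is why it
# rescales the problem without destroying any information.
X_scaled = (X - X.mean(axis=0)) / X.std(axis=0)

print(X_scaled.mean(axis=0))  # ~[0. 0.]
print(X_scaled.std(axis=0))   # [1. 1.]
```

Because the transformation is just a linear change of units per feature, the model can always "undo" it through its weights — scaling changes the geometry of the optimization, not the information content.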
{ "domain": "datascience.stackexchange", "id": 11802, "tags": "machine-learning, feature-engineering, gradient-descent, feature-scaling, efficiency" }
First attempt at a Java Blackjack game
Question: I just completed my first multi class program, Blackjack, and it works! It allows the user to play Blackjack against a single dealer, with no other players at the table. I was wondering, seeing as it is my first multi class program, if you could help me optimize it and/or give me any advice. I want to implement insurance and splitting, so any advice to help prepare the code for eventually adding those features would be really helpful! Finally, my main method is pretty long — I was wondering if this is typical of Java programs and, if not, how I can fix that. Card package Blackjack; class Card { /* * Creates a playing card. */ private int rank;//represents the rank of a card private int suit;//represents the suit of a card private int value;//represents the value of a card private static String[] ranks = {"Joker","Ace","Two","Three","Four","Five","Six","Seven","Eight","Nine","Ten","Jack","Queen","King"}; private static String[] suits = {"Clubs","Diamonds","Hearts","Spades"}; /* * Created with an integer that represents a spot in the String array ranks and the String array suits. This represents * the rank and suit of an individual card. */ Card(int suit, int values) { this.rank=values; this.suit=suit; } /* * Returns the string version of a card. */ public String toString() { return ranks[rank]+" of "+suits[suit]; } /* * Returns the rank of a card. */ public int getRank() { return rank; } /* * Returns the suit of a card. */ public int getSuit() { return suit; } /* * Returns the value of a card. If a jack, queen, or king the value is ten. Aces are 11 for now. */ public int getValue() { if(rank>10) { value=10; } else if(rank==1) { value=11; } else { value=rank; } return value; } /* * Sets the value of a card. */ public void setValue(int set) { value = set; } Deck package Blackjack; import java.util.ArrayList; import java.util.Random; /* * Creates and shuffles a deck of 52 playing cards. 
*/ class Deck { private ArrayList<Card> deck;//represents a deck of cards Deck() { deck = new ArrayList<Card>(); for(int i=0; i<4; i++) { for(int j=1; j<=13; j++) { deck.add(new Card(i,j)); } } } /* * Shuffles the deck by changing the indexes of 200 random pairs of cards in the deck. */ public void shuffle() { Random random = new Random(); Card temp; for(int i=0; i<200; i++) { int index1 = random.nextInt(deck.size()-1); int index2 = random.nextInt(deck.size()-1); temp = deck.get(index2); deck.set(index2, deck.get(index1)); deck.set(index1, temp); } } /* * Draws a card from the deck. */ public Card drawCard() { return deck.remove(0); } Dealer package Blackjack; import java.util.ArrayList; import java.util.Arrays; /* * Creates a dealer that the user plays against. */ class Dealer { ArrayList<Card> hand;//represents the dealer's hand private int handvalue=0;//value of the dealer's hand (starts at 0) private Card[] aHand;//used to convert the dealer's hand to an array private int AceCounter;//counts the aces in the dealer's hand Dealer(Deck deck) { hand = new ArrayList<>(); aHand = new Card[]{}; int AceCounter=0; for(int i=0; i<2; i++) { hand.add(deck.drawCard()); } aHand = hand.toArray(aHand); for(int i=0; i<aHand.length; i++) { handvalue += aHand[i].getValue(); if(aHand[i].getValue()==11) { AceCounter++; } while(AceCounter>0 && handvalue>21) { handvalue-=10; AceCounter--; } } } /* * Prints the dealer's first card (the card face up at the beginning of a blackjack game). */ public void showFirstCard() { Card[] firstCard = new Card[]{}; firstCard = hand.toArray(firstCard); System.out.println("["+firstCard[0]+"]"); } /* * Gives the dealer another card and updates the value of his hand. Takes into account the value of aces. 
*/ public void Hit(Deck deck) { hand.add(deck.drawCard()); aHand = hand.toArray(aHand); handvalue = 0; for(int i=0; i<aHand.length; i++) { handvalue += aHand[i].getValue(); if(aHand[i].getValue()==11) { AceCounter++; } while(AceCounter>0 && handvalue>21) { handvalue-=10; AceCounter--; } } } /* * Determines if the dealer wants to hit according to classic Blackjack rules. */ public boolean wantsToHit() { if(handvalue<17) { return true; } return false; } /* * Returns true if the dealer has blackjack. */ public boolean hasBlackJack() { if(hand.size()==2 && handvalue==21) { System.out.println("The dealer has blackjack!"); return true; } return false; } /* * Prints the dealer's hand. */ public void showHand() { System.out.println(hand); } /* * Returns the value of the dealer's hand. */ public int getHandValue() { return handvalue; } /* * Determines if a dealer has busted. */ public boolean busted(int handvalue) { if(handvalue>21) { System.out.println("The dealer busted!"); return true; } return false; } /* * Takes the turn for the dealer and returns the value of his hand. */ public int takeTurn(Deck deck) { while(wantsToHit()) { System.out.println("The dealer hits"); Hit(deck); if(busted(handvalue)) { break; } } if(handvalue<=21) { System.out.print("The dealer stands."); } return handvalue; } Main package Blackjack; import java.util.*; public class Blackjack { private static int cash;//cash the user bets with private static int bet;//how much the user wants to bet private static int AceCounter;//how many aces are in the user's hand private static ArrayList<Card> hand;//represents the user's hand private static int handvalue;//the value of the user's hand private static String name;//name of the user public static void main(String[] args){ System.out.println("Hi! 
What is your name?"); Scanner scan = new Scanner(System.in); name = scan.nextLine(); System.out.println("Hello, "+name+", lets play some BlackJack!"); System.out.println("How much cash do you want to start with?"); Scanner money = new Scanner(System.in); cash = money.nextInt(); System.out.println("You start with cash: "+cash); while(cash>0){ Deck deck = new Deck();//initialize deck, dealer, hands, and set the bet. deck.shuffle(); AceCounter=0; Dealer dealer = new Dealer(deck); List<Card> hand = new ArrayList<>(); hand.add(deck.drawCard()); hand.add(deck.drawCard()); System.out.println("How much would you like to bet?"); bet=Bet(cash); System.out.println("Cash:"+(cash-bet)); System.out.println("Money on the table:"+bet); System.out.println("Here is your hand: "); System.out.println(hand); int handvalue = calcHandValue(hand); System.out.println("The dealer is showing: "); dealer.showFirstCard(); if(hasBlackJack(handvalue) && dealer.hasBlackJack())//check if both the user and dealer have blackjack. { Push(); } else if(hasBlackJack(handvalue))//check if the user has blackjack. { System.out.println("You have BlackJack!"); System.out.println("You win 2x your money back!"); cash=cash+bet; Win(); } else if(dealer.hasBlackJack())//check if the dealer has blackjack. { System.out.println("Here is the dealer's hand:"); dealer.showHand(); Lose(); } else { if(2*bet<cash)//check if the user can double down. { System.out.println("Would you like to double down?");//allows the user to double down. 
Scanner doubledown = new Scanner(System.in); String doubled = doubledown.nextLine(); while(!isyesorno(doubled)) { System.out.println("Please enter yes or no."); doubled = doubledown.nextLine(); } if(doubled.equals("yes")) { System.out.println("You have opted to double down!"); bet=2*bet; System.out.println("Cash:"+(cash-bet)); System.out.println("Money on the table:"+bet); } } System.out.println("Would you like to hit or stand?");//ask if the user will hit or stand Scanner hitorstand = new Scanner(System.in); String hitter = hitorstand.nextLine(); while(!isHitorStand(hitter)) { System.out.println("Please enter 'hit' or 'stand'."); hitter = hitorstand.nextLine(); } while(hitter.equals("hit"))//hits the user as many times as he or she pleases. { Hit(deck, hand); System.out.println("Your hand is now:"); System.out.println(hand); handvalue = calcHandValue(hand); if(checkBust(handvalue))//checks if the user busted { Lose(); break; } if(handvalue<=21 && hand.size()==5)//checks for a five card trick. { fivecardtrick(); break; } System.out.println("Would you like to hit or stand?"); hitter = hitorstand.nextLine(); } if(hitter.equals("stand"))//lets the user stand. { int dealerhand = dealer.takeTurn(deck);//takes the turn for the dealer. System.out.println(""); System.out.println("Here is the dealer's hand:"); dealer.showHand(); if(dealerhand>21)//if the dealer busted, user wins. 
{ Win(); } else { int you = 21-handvalue;//check who is closer to 21 and determine winner int deal = 21-dealerhand; if(you==deal) { Push(); } if(you<deal) { Win(); } if(deal<you) { Lose(); } } } } System.out.println("Would you like to play again?");//ask if the user wants to keep going Scanner yesorno = new Scanner(System.in); String answer = yesorno.nextLine(); while(!isyesorno(answer)) { System.out.println("Please answer yes or no."); answer = yesorno.nextLine(); } if(answer.equals("no")) { break; } } System.out.println("Your cash is: "+cash);//if user doesn't want to play or runs out of cash, either congratulates them on their winnings or lets them know if(cash==0) { System.out.println("You ran out of cash!"); } else { System.out.println("Enjoy your winnings, "+name+"!"); } } /* * Checks if the user has blackjack. */ public static boolean hasBlackJack(int handValue) { if(handValue==21) { return true; } return false; } /* * Calculates the value of a player's hand. */ public static int calcHandValue(List<Card> hand) { Card[] aHand = new Card[]{}; aHand = hand.toArray(aHand); int handvalue=0; for(int i=0; i<aHand.length; i++) { handvalue += aHand[i].getValue(); if(aHand[i].getValue()==11) { AceCounter++; } while(AceCounter>0 && handvalue>21) { handvalue-=10; AceCounter--; } } return handvalue; } /* * Asks the user how much he or she would like to bet. */ public static int Bet(int cash) { Scanner sc=new Scanner(System.in); int bet=sc.nextInt(); while(bet>cash) { System.out.println("You cannot bet more cash than you have!"); System.out.println("How much would you like to bet?"); bet=sc.nextInt(); } return bet; } /* * Called if the user wins. */ public static void Win() { System.out.println("Congratulations, you win!"); cash=cash+bet; System.out.println("Cash: "+cash); } /* * Called if the user loses. 
*/ public static void Lose() { System.out.println("Sorry, you lose!"); cash=cash-bet; System.out.println("Cash: "+cash); } /* * Called if the user pushes */ public static void Push() { System.out.println("It's a push!"); System.out.println("You get your money back."); System.out.println("Cash: "+cash); } /* * Adds a card to user's hand and calculates the value of that hand. Aces are taken into account. */ public static void Hit(Deck deck, List<Card> hand) { hand.add(deck.drawCard()); Card[] aHand = new Card[]{}; aHand = hand.toArray(aHand); handvalue = 0; for(int i=0; i<aHand.length; i++) { handvalue += aHand[i].getValue(); if(aHand[i].getValue()==11) { AceCounter++; } while(AceCounter>0 && handvalue>21) { handvalue-=10; AceCounter--; } } } /* * Determines if a user has input hit or stand. */ public static boolean isHitorStand(String hitter) { if(hitter.equals("hit") || hitter.equals("stand")) { return true; } return false; } /* * Determines if a user has busted. */ public static boolean checkBust(int handvalue) { if(handvalue>21) { System.out.println("You have busted!"); return true; } return false; } /* * Determines if a user has input yes or no. */ public static boolean isyesorno(String answer) { if(answer.equals("yes") || answer.equals("no")) { return true; } return false; } /* * Called if the user has a five card trick. */ public static void fivecardtrick() { System.out.println("You have achieved a five card trick!"); Win(); } Answer: Class Structure A Hand class might be useful. It can calculate and store the hand value. This would also avoid the duplication you currently have (calcHandValue and Hit). Your Dealer class contains a lot of code that I would not place there. It contains the dealer AI (when does the dealer hit?), winning/losing condition check, printing, and counting. With a Hand class, you would already separate out some of it. 
I would also remove all the prints (they make code reuse difficult, and lead to bad code structure), and separate the AI logic to it's own class (this would make it easier to change the rules, because they are all in one place). Your Blackjack class also does way too much. It prints, it reads input, it handles input, it checks winning/losing condition (again, a duplication, see next point), etc. It is the player as well as the game, which violates the single responsibility principle. Whenever you copy/paste code, try to think of a better alternative. In this case, your Dealer and your Blackjack class contain a lot of duplication. Mainly because they both represent a blackjack player (the dealer and the player). A generic Player class might be helpful, from which Dealer and HumanPlayer extend. So to summarize: I would add at a minimum a Hand, Player and HumanPlayer class. Possibly also an Input/Output interface and ConsoleInput/ConsoleOutput class (which would make it easy to add a GUI). There are more classes you could create, but this would be a good start. Misc your whole shuffle function can be replaced by Collections.shuffle(deck);. Why does your Dealer class have hand and aHand? This seems unnecessary and confusing. you have some style issues (position of curly bracket, indentation, spaces, etc). It seems mostly internally consistent (that's the important part), but does not really match what most Java programmers are used to. Most IDEs that support Java (eg Netbeans or Eclipse) format the code per default in a way most Java programmers recognize. variable and method names should start lowercase, so they are not confused with classes. if (cond) return true else return false can be written as return cond.
{ "domain": "codereview.stackexchange", "id": 13924, "tags": "java, beginner, object-oriented, playing-cards" }
Classical analog of the statement "$E$ must exceed the minimum value of $V(x)$
Question: Overall question: Griffiths problem 2.2 states that $E$ must exceed the minimum value of $V(x)$ for every normalizable solution to the time-independent Schrodinger equation. Then, it asks for a proof and what the classical analog of this statement is. I understand the proof for this, but I am very confused about the classical analog. Approach: I was initially thinking that this statement means that the energy must be greater than the potential energy at this $x$ value. I assumed $E$ was the total energy, so this would mean that there must be kinetic energy always. However, I remain confused about many things: Specific Questions: What is kinetic energy in quantum mechanics? Is it proper to say that $E$ here is total energy? Is this the proper classical analog? I don't need an answer, but a hint would be nice. An answer is okay as well but I'm not sure it's allowed. Why can we use the concepts potential and potential energy interchangeably? Here, we are saying that potential has the same units as energy, which means it's potential energy I gathered? Update: I have also thought about the classical analog being underdamped motion, but I am still curious about the above questions and interpretation. Answer: Typically, the kinetic energy operator is formally defined to be $\hat T = \frac{1}{2m} \hat P^2$. Yes If $E\not\geq V_{min}$, then $E<V$ at every point in space. What would this tell you about the kinetic energy? $\hat V$ is the potential energy operator. Some sources may drop the word "energy" from this description but, to be blunt, if you're studying quantum mechanics you should be able to comfortably interpret this rather mild abuse of terminology (which, I would note, you have). 
You are free to avoid this abuse and refer to the "finite potential energy well" and the "delta function potential energy" and the "potential energy step" and the "Coulomb potential energy" (so on and so forth) for your entire physics career, but I suspect this will quickly grow tiresome and you will quickly join the club.
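For completeness, the classical statement behind hint 3 fits on one line — with $E$ the total energy and $T = p^2/2m$ the kinetic energy, exactly as in points 1 and 2 (this is just the standard argument spelled out):

```latex
E \;=\; \underbrace{\frac{p^{2}}{2m}}_{T \,\ge\, 0} + V(x)
\;\ge\; V(x) \;\ge\; V_{\min},
\qquad\text{so } E < V_{\min} \text{ would force } T = E - V(x) < 0,
```

a negative kinetic energy, which is classically impossible: there would be no point in space where the particle is allowed to be.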
{ "domain": "physics.stackexchange", "id": 89557, "tags": "quantum-mechanics, classical-mechanics, energy, schroedinger-equation, potential-energy" }
Bounding this probability in this Monte Carlo algorithm
Question: Let $P$ be a YES-NO decision problem. Let $A$ be an algorithm for deciding on it such that it is correct with probability $4/5$, in both cases (YES and NO). Design an algorithm that is correct with probability at least $p$, in both cases (YES and NO). This is my solution, but I'm stuck at bounding the probability. Let $B$ be an algorithm that runs $A$ a total of $n$ times. Let $X_i$ with $1\leq i \leq n$ be such that $X_i = 1$ if $A$ outputs $YES$ and $X_i = 0$ if $A$ outputs $NO$. Without loss of generality assume that $P=YES$; then $\Pr[\text{B is correct}] = \Pr[\text{B = YES}] = \Pr[X \geq \frac{n}{2}]$. I know that in this case $E[X] = \frac{4}{5}n$, but I don't know what else to do. Answer: Assuming I understand this correctly...the true state is either YES or NO. You have an algorithm A that can detect the truth with probability 4/5 and fails with probability 1/5. You devise an algorithm B that just runs A $n$ times and decides that the truth is YES if B has a sufficient number of YES outcomes out of those $n$ trials. I think that works. You want to find the first values of $n$ and $x$ (with $n/2\le x\le n$) that satisfy: $$\sum_{y=x}^n {n\choose y} (4/5)^y (1/5)^{n-y} \ge p,$$ and $$\sum_{y=0}^{x-1} {n\choose y} (1/5)^y (4/5)^{n-y} \ge p.$$ In that way, your decision rule for B - return a YES if there are at least $x$ yes outcomes in the $n$ trials - will work with probability at least $p$. If $n$ is odd then you really only need the first inequality.
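Since for odd $n$ only the first inequality matters, the binomial sum is easy to evaluate directly. A small sketch (standard library only, my own illustration) that computes the majority-vote success probability and searches for the smallest odd $n$ reaching a target $p$:

```python
from math import comb

def majority_correct(n, q=4/5):
    """Probability that a strict majority of n independent runs is correct,
    when each run is correct independently with probability q."""
    k = n // 2 + 1  # strict-majority threshold
    return sum(comb(n, y) * q**y * (1 - q)**(n - y) for y in range(k, n + 1))

def runs_needed(p, q=4/5):
    """Smallest odd n for which majority vote is correct with probability >= p."""
    n = 1
    while majority_correct(n, q) < p:
        n += 2  # keep n odd so there are no ties
    return n

print(majority_correct(5))  # ≈ 0.94208
print(runs_needed(0.95))    # → 7
```

So with a $4/5$-correct base algorithm, five repetitions already push the success probability above $94\%$, and seven suffice for $p = 0.95$; the error decays exponentially in $n$, as a Chernoff bound on $E[X] = \frac{4}{5}n$ would also show.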
{ "domain": "cs.stackexchange", "id": 19376, "tags": "algorithms, randomized-algorithms" }
The spring of air: Why does a piston undergo SHM when it is displaced lengthening the air-column inside a cylinder filled with air?
Question: Let there be a cylindrical tube, closed at one end, with a well-fitting but freely moving piston of mass $m$. [. . .] The piston has a certain equilibrium position. If the piston is moved a distance $y$, lengthening the air-column, the internal pressure drops, and the result is to provide a restoring force on $m$. We can, in fact, write an equation of the form: $$F = A\Delta p$$ where $\Delta p$ is the change of pressure. Why does the air-column impart a restoring force on the piston? I mean to say, when a system exerts a restoring force, it must have an associated change in potential energy. So, was there any increase in potential energy of the air-column when it was expanded? If so, why? What is the cause of the increasing potential energy of the air-column? Answer: The question says: "If the piston is moved a distance $y$, lengthening the air-column, the internal pressure drops." In other words, someone or something had to grab the piston and pull it upwards against the force caused by the difference in pressure between the air inside the column and the air outside. This meant some work was done, and that work went into increasing the potential energy of the air/column/piston system.
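Quantitatively, the restoring force (and hence the SHM) follows from the gas law for the column. Assuming the oscillation is fast enough to be adiabatic ($pV^\gamma$ constant — the usual assumption for this classic problem; an isothermal column gives the same form with $\gamma \to 1$), with equilibrium pressure $p_0$ and volume $V_0$:

```latex
pV^{\gamma} = \text{const}
\;\Rightarrow\;
\Delta p = -\gamma\, p_0 \frac{\Delta V}{V_0} = -\frac{\gamma p_0 A}{V_0}\, y,
\qquad
F = A\,\Delta p = -\frac{\gamma p_0 A^{2}}{V_0}\, y \equiv -ky,
\qquad
\omega = \sqrt{\frac{k}{m}} = \sqrt{\frac{\gamma p_0 A^{2}}{m V_0}}
```

The force is linear in the displacement, i.e. Hooke's law, and the work done against it, $\tfrac{1}{2}ky^{2}$, is precisely the potential-energy increase the answer attributes to pulling the piston out.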
{ "domain": "physics.stackexchange", "id": 22347, "tags": "harmonic-oscillator, vibrations" }
Cumulative grade-point-average calculator in C++
Question: I wrote a simple Cumulative grade-point-average calculator in C++, and would like to ask for advices to improve the code in terms of best practices (efficiency, reliability) in the language. Here's my code: #include <stdio.h> #include <iostream> #include <string> #include <vector> class Subject { public: std::string name; unsigned creditHour, mark; Subject(const std::string& name, unsigned creditHour, unsigned mark) : name(name), creditHour(creditHour), mark(mark) {} }; class CGPACalculator { public: unsigned getGradePointWith(unsigned mark) { if (mark <= 100 && mark > 90) return 10; if (mark <= 90 && mark > 80) return 9; if (mark <= 80 && mark > 70) return 8; if (mark <= 70 && mark > 60) return 7; if (mark <= 60 && mark > 50) return 6; if (mark <= 50 && mark > 40) return 5; if (mark <= 40 && mark > 30) return 4; return 0; } double calculateCGPAWith(double gradePointsSum, unsigned numOfSubjects) { return gradePointsSum / numOfSubjects; } }; class CGPAPrinter { public: void printSubjectsSummary(std::vector<Subject*> subjects) { for(Subject* s : subjects) { std::cout << "Subject:\t" << s->name << "\t\t"; std::cout << "Credit Hours:\t" << s->creditHour << "\t\t"; std::cout << "Mark:\t" << s->mark << std::endl; } } void printCGPA(double cgpa) { std::cout << "Your CGPA is: " << cgpa << std::endl; } }; int main() { std::string subjectName; unsigned creditHour, mark, numOfSubjects; double gradePointsSum = 0; std::vector<Subject*> subjects; CGPACalculator calculator; CGPAPrinter printer; // Get input from user std::cout << "Enter number of subjects: \t"; std::cin >> numOfSubjects; for(int i = 0; i < numOfSubjects; i++) { std::cout << "Enter subject name: \t"; std::cin >> subjectName; std::cout << "Enter credit hours: \t"; std::cin >> creditHour; std::cout << "Enter mark: \t"; std::cin >> mark; gradePointsSum += calculator.getGradePointWith(mark); Subject *s = new Subject(subjectName, creditHour, mark); subjects.push_back(s); } 
printer.printSubjectsSummary(subjects); printer.printCGPA(calculator.calculateCGPAWith(gradePointsSum, numOfSubjects)); return 0; } Appreciate your valuable feedbacks. Answer: C vs C++ headers #include <stdio.h> This is a C header. For C++ we'd want #include <cstdio> instead (which puts the contents in the std:: namespace). Though it looks like we're not using anything from this header anyway. Aggregate initialization class Subject { public: std::string name; unsigned creditHour, mark; Subject(const std::string &name, unsigned creditHour, unsigned mark) : name(name), creditHour(creditHour), mark(mark) { } }; In modern C++ we don't need to add a constructor just to initialize member variables like this. We can write this as a simple struct, and use aggregate initialization: struct Subject { std::string name; unsigned creditHour, mark; }; ... Subject *s = new Subject{ subjectName, creditHour, mark }; Classes vs free functions class CGPACalculator { public: unsigned getGradePointWith(unsigned mark) { if (mark <= 100 && mark > 90) return 10; if (mark <= 90 && mark > 80) return 9; if (mark <= 80 && mark > 70) return 8; if (mark <= 70 && mark > 60) return 7; if (mark <= 60 && mark > 50) return 6; if (mark <= 50 && mark > 40) return 5; if (mark <= 40 && mark > 30) return 4; return 0; } double calculateCGPAWith(double gradePointsSum, unsigned numOfSubjects) { return gradePointsSum / numOfSubjects; } }; This class has no state (i.e. contains no member variables), so it should not be a class. This also applies to the CGPAPrinter class. We can use a namespace to group related functions instead of a class: namespace GPA { struct Subject { std::string name; unsigned creditHour, mark; }; unsigned getGradePointWith(unsigned mark) { ... } double calculateCGPAWith(double gradePointsSum, unsigned numOfSubjects) { ... } void printSubjectsSummary(std::vector<Subject *> subjects) { ... } void printCGPA(double cgpa) { ... } } // GPA Now we don't need to create any unnecessary objects. 
We can just call the functions we need. Variable declaration int main() { std::string subjectName; unsigned creditHour, mark, numOfSubjects; double gradePointsSum = 0; std::vector<Subject *> subjects; Declaring all variables at the top of a scope is an obsolete and unnecessary habit from ancient C. We should instead declare variables where they are first needed, initialize them to useful values, and avoid reusing them where practical. Something like: int main() { std::cout << "Enter number of subjects: \t"; unsigned numOfSubjects; std::cin >> numOfSubjects; std::vector<Subject *> subjects; double gradePointsSum = 0.0; for (unsigned i = 0; i < numOfSubjects; i++) { std::cout << "Enter subject name: \t"; std::string subjectName; std::cin >> subjectName; std::cout << "Enter credit hours: \t"; unsigned creditHour; std::cin >> creditHour; std::cout << "Enter mark: \t"; unsigned mark; std::cin >> mark; gradePointsSum += GPA::getGradePointWith(mark); Subject *s = new Subject{ subjectName, creditHour, mark }; subjects.push_back(s); } GPA::printSubjectsSummary(subjects); GPA::printCGPA(GPA::calculateCGPAWith(gradePointsSum, numOfSubjects)); return 0; } Memory management std::vector<Subject *> subjects; ... Subject *s = new Subject{ subjectName, creditHour, mark }; If we use new to allocate memory we must ensure we clean it up again by calling delete (to avoid leaking memory). We should generally avoid manual memory management and use a smart pointer (e.g. std::unique_ptr) instead. However, in this case it's unnecessary. We can simply store the subjects by value: std::vector<Subject> subjects; ... subjects.emplace_back(subjectName, creditHour, mark); User input std::cout << "Enter number of subjects: \t"; unsigned numOfSubjects; std::cin >> numOfSubjects; We should add some error checking to ensure that the user actually entered a number, and that we successfully read it. 
Aside Contrary to the other answer, I actually don't mind the series of if statements in: unsigned getGradePointWith(unsigned mark) { if (mark <= 100 && mark > 90) return 10; if (mark <= 90 && mark > 80) return 9; if (mark <= 80 && mark > 70) return 8; if (mark <= 70 && mark > 60) return 7; if (mark <= 60 && mark > 50) return 6; if (mark <= 50 && mark > 40) return 5; if (mark <= 40 && mark > 30) return 4; return 0; } But, we could make it simpler by only checking one bound (since we're checking in descending order): unsigned getGradePointWith(unsigned mark) { if (mark > 100) return 0; // todo: output an error!?? if (mark >= 91) return 10; if (mark >= 81) return 9; if (mark >= 71) return 8; if (mark >= 61) return 7; if (mark >= 51) return 6; if (mark >= 41) return 5; if (mark >= 31) return 4; return 0; } It's much easier to glance at this and understand the grade boundaries than it is to understand the nuances of a math equation.
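One quick way to convince yourself that the simplified one-sided checks are equivalent to the original two-sided ones is to compare both formulations over the whole input range. A minimal sketch (in Python rather than C++, purely for brevity; the function names are mine):

```python
def grade_original(mark):
    # direct transcription of the two-sided range checks
    if 100 >= mark > 90: return 10
    if 90 >= mark > 80: return 9
    if 80 >= mark > 70: return 8
    if 70 >= mark > 60: return 7
    if 60 >= mark > 50: return 6
    if 50 >= mark > 40: return 5
    if 40 >= mark > 30: return 4
    return 0

def grade_simplified(mark):
    # descending one-sided checks, as suggested in the review
    if mark > 100:
        return 0
    for bound, points in ((91, 10), (81, 9), (71, 8), (61, 7),
                          (51, 6), (41, 5), (31, 4)):
        if mark >= bound:
            return points
    return 0

# the two agree for every integer mark, including out-of-range ones
assert all(grade_original(m) == grade_simplified(m) for m in range(0, 151))
```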
{ "domain": "codereview.stackexchange", "id": 45195, "tags": "c++" }
Frequency response given poles and zeros
Question: I'm trying to plot the frequency response $H(z)$ of given zeros/poles using the following code in MATLAB: z=[0.36545+0.88446i; 0.01-0.1057i;] p=[-0.46016+0.87251i; -0.37649-0.94861i;] [num, den]=zp2tf(z,p,K) [h,w]=freqz(num,den) plot(w,abs(h)) But it appears it doesn't work correctly when the coefficients of the transfer function $H(z)$ are complex. And it gives a different response compared to the expected response. This is the expected transfer function: $$H(z)=\frac{1+(-1.3745-0.77689i)z^{-1}+(0.46331+0.85406i)z^{-2}}{1+ (0.83665-0.023904i)z^{-1}+(0.91366+0.061999i)z^{-2}}$$ Answer: The MathWorks Documentation of the function zp2tf says that The zeros and poles must be real or come in complex conjugate pairs. which is not the case for your example. With the given values of the vectors z and p you can do the following: num = poly(z); den = poly(p); which gives 1.00000 + 0.00000i -0.37545 - 0.77876i 0.09714 - 0.02978i for num, and 1.00000 + 0.00000i 0.83665 + 0.07610i 1.00092 + 0.10802i for den. Note that these are different from the coefficients of your 'expected transfer function'. However, these coefficients correspond to the given pole and zero locations. Also note that one of your poles lies outside the unit circle, so your filter is unstable.
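The coefficient and stability claims in the answer are easy to cross-check numerically. Here is a sketch using NumPy's np.poly, which builds polynomial coefficients from roots just like MATLAB's poly:

```python
import numpy as np

z = np.array([0.36545 + 0.88446j, 0.01 - 0.1057j])        # zeros
p = np.array([-0.46016 + 0.87251j, -0.37649 - 0.94861j])  # poles

num = np.poly(z)   # numerator coefficients of H(z)
den = np.poly(p)   # denominator coefficients of H(z)

# matches the coefficients quoted in the answer (rounded to 5 decimals)
assert np.allclose(num, [1, -0.37545 - 0.77876j, 0.09714 - 0.02978j], atol=1e-4)
assert np.allclose(den, [1, 0.83665 + 0.07610j, 1.00092 + 0.10802j], atol=1e-4)

# the second pole lies outside the unit circle, so the filter is unstable
assert abs(p[1]) > 1.0
```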
{ "domain": "dsp.stackexchange", "id": 3805, "tags": "matlab, frequency-response, poles-zeros" }
Variant of Chomsky Normal Form for Languages with Strings of Length $\ge 2$
Question: Given a context-free grammar $G$ for a language $L$ in which every string has length at least $2$, show that there exists some context-free grammar $G'$ generating $L$ such that every rule of $G'$ has the form $$A\to x_1 x_2$$ where each $x_i$ is either a terminal or a non-terminal and $A$ is a non-terminal. I know that this CFG $G'$ has to be similar to the CNF of $G$. However, I am unsure how to remove rules of the form $A\to a$. If $A$ is the start symbol of $G$, then such a rule produces a string of length $1$, which would not be allowed. But what about the case when $A$ is not the start symbol? Answer: The solution is to "substitute" the terminal rules into the binary rules. Start with a grammar in Chomsky normal form. Then for every pair of rules $A\to BC$ and $B\to a$ we add the rule $A\to aC$. This is only one of the possible combinations. Also substitute the right nonterminal, or both of them. Now keep all binary rules (old and new ones) and delete all the original terminal rules $A\to a$. If we compare the derivation trees of the new grammar with those of the original grammar, we see that all terminals are moved one level up as a consequence of the substitutions.
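The substitution step in the answer can be sketched concretely. Below is a small Python illustration (the grammar representation and helper name are mine, not from the answer): given the binary rules and the terminal rules of a CNF grammar, it adds every combination obtained by replacing a nonterminal on a right-hand side with a terminal it directly derives, then drops the terminal rules.

```python
def substitute_terminal_rules(binary_rules, terminal_rules):
    """binary_rules: set of (A, (X, Y)); terminal_rules: set of (B, a).

    Returns rules of the form A -> x1 x2 where each x_i is a terminal
    or a nonterminal, with all A -> a rules eliminated."""
    derives = {}  # nonterminal -> set of terminals it derives directly
    for head, a in terminal_rules:
        derives.setdefault(head, set()).add(a)

    new_rules = set()
    for head, (x, y) in binary_rules:
        # keep the original symbols as candidates alongside their terminals
        for left in derives.get(x, set()) | {x}:
            for right in derives.get(y, set()) | {y}:
                new_rules.add((head, (left, right)))
    return new_rules

# toy CNF grammar: S -> AB, A -> a, B -> b  (uppercase = nonterminal)
rules = substitute_terminal_rules({('S', ('A', 'B'))},
                                  {('A', 'a'), ('B', 'b')})
assert rules == {('S', ('A', 'B')), ('S', ('a', 'B')),
                 ('S', ('A', 'b')), ('S', ('a', 'b'))}
```

Note how the old binary rule survives while both one-sided and two-sided substitutions are added, exactly as the answer describes.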
{ "domain": "cs.stackexchange", "id": 19129, "tags": "formal-languages, context-free, formal-grammars" }
Do fixed hyperparameters perform well regardless of the number of training examples?
Question: I'm new to this community and I don't know whether my question is appropriate here; I will delete this post if it is not. I'm interested in deep learning models and have a question about them. Suppose we have feedforward neural network models with different hyperparameters (for example, the number of layers, neurons and so on), say $H_1,..., H_n$. Also assume all the samples are drawn from a particular distribution. Suppose that with $100$ samples we find that a particular hyperparameter set, $H_i$, performs best. Now I want to increase the number of samples to $1000$, $10000$, even millions. Is $H_i$ still the best-performing hyperparameter set among the $H_j$'s? I want to know whether there are references on this topic; ideally, a mathematical proof of this fact. Again, please tell me if this question is not suited to the community. Thanks a lot! Answer: No. You're asking about model selection. Larger sample sizes will allow you to choose more complex models. Read up on the keywords overfit/underfit, model selection, Structural Risk Minimization, and in particular this article: https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=705570
{ "domain": "cstheory.stackexchange", "id": 4643, "tags": "machine-learning, network-modeling, computational-mathematics" }
nmea_navsat_driver /fix topic hangs indefinitely
Question: Hey everyone! I'm running into an issue that I've previously resolved and have seen posted in a few different places, so apologies if it's a duplicate. This weekend, I started getting this problem again where I run the nmea navsat driver ROS package to read in GPS strings via USB, and can see the serial messages via "$ cat /dev/", but when I try to echo the "/fix" topic it just hangs. Goal: Have the robot read GPS data via USB from a GPS unit. Setup: OS - Ubuntu 14.04 More OS info - Linux CPR-J100-0057 3.16.0-77-generic #99~14.04.1-Ubuntu SMP Tue Jun 28 19:17:10 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux ROS - Indigo 1.11.20 GPS Package - nmea_navsat_driver 0.4.2 GPS unit - Emlid Reach RS, ReachView app 2.11 Walkthrough: With the robot powered on, the ROS core is started with the following env vars using the robot's IP address: export ROS_MASTER_URI=http://192.168.131.11:11311 export ROS_IP=192.168.131.11 GPS device is plugged into the robot's USB port, I see the device shows up in /dev: ls /dev/ttyA* Listed devices: /dev/ttyACM0 /dev/ttyACM1 I'm able to display the serial data coming into the robot: cat /dev/ttyACM1 Example printout: $GPGSA,A,3,07,08,09,11,23,48,51,,,,,,3.1,1.7,2.6*3D $GLGSA,A,3,67,68,77,78,88,,,,,,,,15.2,4.7,14.4*2A $GPGSV,2,1,07,07,44,322,48,08,79,128,45,09,55,248,45,11,27,170,37,1*65 $GPGSV,2,2,07,23,39,190,42,48,26,246,37,51,45,220,43,,,,,1*58 $GLGSV,2,1,05,67,22,239,41,68,22,296,44,77,49,019,38,78,27,317,38,1*7B $GLGSV,2,2,05,88,36,166,37,,,,,,,,,,,,,1*4D $GNGST,173725.20,1.220,,,,4.514,4.529,10.741*5C $GNVTG,,T,,M,0.09,N,0.17,K,A*32 $GNRMC,173725.40,A,3128.5281612,N,08331.7091448,W,0.04,,300418,,,A*4A $GNGGA,173725.40,3128.5281612,N,08331.7091448,W,1,12,0.0,108.391,M,-28.500,M,0.0,*61 Next, I try running the nmea navsat driver ROS package and match the GPS settings for port and baud rate. It seems like these settings being wrong is a classic way for this not to work, but given step 3 above, it seems like that isn't the issue here. 
rosrun nmea_navsat_driver nmea_serial_driver _port:=/dev/ttyACM1 _baud:=4800 NOTE: I get this warning when I run the package, but I think it's only temporary at startup. I've had this same warning a week ago when everything was working. The warning: [WARN] [WallTime: 1525110076.762267] Received a sentence with an invalid checksum. Sentence was: '$GPGSV,2,1,08,07,45,323,43,08,80,117,46,09,54,245,45,11,28,169,$GNRMC,174116.80,A,3128.5283079,N,08331.7093528,W,0.17,,300418,,,A*49' I can see the /fix and /vel topics after running the nmea package when I run rostopic list. Finally, when I try to echo the topic value, it just hangs. rostopic echo /fix Output: Nothing, it just hangs there. Ctrl-C brings the prompt back. NOTE: After the ctrl-c command, the print out for the process running the nmea package says: [WARN] [WallTime: 1525110085.955421] Inbound TCP/IP connection failed: connection from sender terminated before handshake header received. 0 bytes were received. Please check sender for additional details. Things I've tried: In the past, I was able to echo the topic while logged into the robot, but not able to echo it if I were on a remote computer. This was fixed by editing the /etc/hosts files for the remote computer and robot. In the file, the IP and hostname of the robot were added. Checking that the user is added to the dialout group (it is), and that the GPS device's group is set to dialout (it is as well). Checked the privileges of GPS device in case the user didn't have permissions. Doing a quick ls -l /dev/ttyA* shows me that the user has RW access to the GPS device (access settings: crw-rw---). Tried plugging the GPS device into a different computer and run the same procedure laid out above, which gives me the same issue with the topic echo hanging. Any help on things to try, or if you need any additional information, just let me know! I'm starting to run out of ideas. 
Nick Originally posted by popenc on ROS Answers with karma: 26 on 2018-04-30 Post score: 0 Answer: For my particular situation, this doesn't seem like a ROS issue and ended up being an issue on the Emlid Reach RS GPS unit side. As I mentioned in the OP, the problem described in this post didn't happen until after updating the GPS unit's control/monitoring software, ReachView. Given that I was able to verify the serial data (i.e., GPS strings) coming into the robot, I really didn't expect the GPS software update to be a factor in the issue at all. The only reason I was suspicious of the upgrade causing any issues was that it was the only thing changed between it working and not working. When I downgraded my ReachView app version from 2.11 back to 2.10, and ran the same setup described above, I get the /fix and /vel topics echoing again. I'd like to attempt to recreate this exact same behavior before fully committing to the idea that the ReachView upgrade caused an odd bug with ROS, the NMEA driver, or something along the chain. Originally posted by popenc with karma: 26 on 2018-05-02 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by tryan on 2018-09-15: The command stty -F /dev/ttyACM1 may help you troubleshoot. I used it when I had a problem with a ROS node and serial port configuration. I had a serial-to-USB adapter, so I used stty -F /dev/ttyUSB0 to see the working and non-working configurations and find the difference.
{ "domain": "robotics.stackexchange", "id": 30758, "tags": "ros, gps, clearpath, ros-indigo" }
Batch convert txt files to xls
Question: In this SO question I provided two answers for the batch conversion of more than 100K txt to xlsx files. I suspect any automation of Excel is too slow, but would appreciate it if there is a superior (i.e. faster) approach using PowerShell. $files = Get-ChildItem C:\Temp\*.txt Write "Loading Files..." $Excel = New-Object -ComObject Excel.Application $Excel.visible = $false $Excel.DisplayAlerts = $false ForEach ($file in $files) { $WorkBook = $Excel.Workbooks.Open($file.Fullname) $NewFilepath = $file.Fullname -replace ".{4}$" $NewFilepath = $NewFilepath + ".xls" $Workbook.SaveAs($NewFilepath,56) } Stop-Process -processname EXCEL $Excel.Quit() Answer: As far as performance goes, there are two general approaches to address this. Parallel processing Running the COM object is a drag in itself; processing tens of thousands of files with one Excel instance will be draining. PowerShell supports multiple avenues of mitigating this: https://stackoverflow.com/questions/4016451/can-powershell-run-commands-in-parallel https://stackoverflow.com/questions/8781666/run-n-parallel-jobs-in-powershell about_foreach_parallel I am going to show an approach using Jobs. What this will do is group up the identified files. Each group will be passed to its own Excel job for processing. Some comments in the code below explain it more. # Root directory containing your files. $path = "E:\temp\csv" # Get current EXCEL process IDs so they are not affected by the script's cleanup # SilentlyContinue in case there are no active Excels $currentExcelProcessIDs = (Get-Process excel -ErrorAction SilentlyContinue).Id # Collect the files $files = Get-ChildItem -Path $path -Filter "*.txt" -File # Split the files up into processing groups. For each group an Excel process will be started.
$numberOfGroups = 5 $maxGroupMemberSize = [math]::Ceiling($files.Count / $numberOfGroups) # Create as many file groups $fileGroups = 0..($numberOfGroups - 1) | Foreach-object{ $groupIndexStart = $maxGroupMemberSize * $_ # Use the unary comma operator to be sure an array is returned and not unrolled ,$files[$groupIndexStart..($groupIndexStart + $maxGroupMemberSize - 1)] } # Create a job for each file group. for($jobCount = 0; $jobCount -lt $fileGroups.Count; $jobCount++){ # Start a unique Excel instance for this group of files. Start-Job -Name "Excel$jobCount" -ScriptBlock { param($files) $excelFileFormat = 56 #xlExcel8 format # Create a new Excel Instance $excel = New-Object -ComObject Excel.Application $excel.Visible = $false $excel.DisplayAlerts = $false ForEach ($file in $files){ $workbook = $Excel.Workbooks.Open($file.Fullname) $newFilepath = $file.Fullname -replace "\..*$",".xls" $workbook.SaveAs($newFilepath, $excelFileFormat) } # Quit this instance and return its memory $excel.Quit() while([System.Runtime.Interopservices.Marshal]::ReleaseComObject($workbook)){} while([System.Runtime.Interopservices.Marshal]::ReleaseComObject($excel)){} Remove-Variable "workbook","excel" } -ArgumentList (,($fileGroups[$jobCount])) | Out-Null } # Wait for the jobs to be completed and remove them from inventory since they won't have output we need Get-Job -Name "Excel*" | Wait-Job | Receive-Job # Remove any stale Excel processes created by this scripts execution Get-Process excel -ErrorAction SilentlyContinue | Where-Object{$currentExcelProcessIDs -notcontains $_.id} | Stop-Process Excel automation is full of pitfalls but the general approach seems to be working. For one com objects can prevent from being closed. Best thing to do is just kill any remaining processes. There is logic that can help to be sure any Excel opened before the script was run are not affected. Jobs are done in the way as well to not affect processing of other scripts. 
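The grouping arithmetic in the job-based script above — ceil(count / groups) items per slice — is easy to check in isolation. A minimal sketch (in Python rather than PowerShell, purely for brevity; the function name is mine):

```python
import math

def chunk(items, number_of_groups):
    """Split items into up to number_of_groups consecutive slices,
    each holding at most ceil(len(items) / number_of_groups) items."""
    size = math.ceil(len(items) / number_of_groups)
    groups = [items[i * size:(i + 1) * size] for i in range(number_of_groups)]
    return [g for g in groups if g]  # drop empty trailing groups

# 12 files across 5 jobs -> slices of 3; the fifth slice would be empty
assert chunk(list(range(12)), 5) == [[0, 1, 2], [3, 4, 5], [6, 7, 8], [9, 10, 11]]
```

This also shows why the PowerShell version can end up spawning fewer jobs than $numberOfGroups: with ceil-based sizing, the trailing groups may be empty.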
This all would be better as a script using proper parameters, but showing the logic of jobs is the point here. Automate without Excel There are a number of libraries out there that can create Excel documents without the need for COM objects or Excel itself. These would perform faster as well. EPPlus comes to mind, and the ImportExcel module (which uses EPPlus) would be a solid choice as well. I don't know what your text files look like so you will need to experiment. Get-Content E:\temp\csv\data.txt | Export-Excel -Path "e:\temp\csv\file.xlsx" Export-Excel works better with objects so, depending on your data structure, you would do this instead. Import-Csv E:\temp\csv\data.txt | Export-Excel -Path "e:\temp\csv\file.xlsx" Import-Csv is generally a slow process, so you would want to try using a StreamReader and ConvertFrom-Csv if performance suffers. Magic Numbers Your 56 would be an example of that. You have to look it up to know what it represents. So 56 is for Excel Version 8. At a minimum that should be its own variable. My code above uses $excelFileFormat = 56 #xlExcel8 format If you wanted to go a little crazier you could actually import the assembly to access the enum [reflection.assembly]::LoadWithPartialName("Microsoft.Office.InterOp.Excel") | Out-Null [Microsoft.Office.Interop.Excel.XlFileFormat]::xlExcel8 BaseName $NewFilepath = $file.Fullname -replace ".{4}$" $NewFilepath = $NewFilepath + ".xls" This is what you are doing to replace the extension of the file. File objects have a BaseName property that is the file name without the extension. I know that you are dealing with txt files, so you already know how many characters on the end to remove, but using .BaseName would remove that complexity in other areas as well. $NewFilepath = "$($file.BaseName).xls"
{ "domain": "codereview.stackexchange", "id": 23972, "tags": "performance, excel, powershell" }
Client [/mir_auto_bagger] wants topic /move_base/goal to have datatype/md5sum
Question: When I try to do a gmapping with my mobile robot an error pop-up: [ERROR] [1554797108.240628800]: Client [/mir_auto_bagger] wants topic /move_base/goal to have datatype/md5sum [*/8ac6f5411618f50134619b87e4244699], but our version has [move_base_msgs/MoveBaseActionGoal/660d6895a1b9a16dce51fbdd9a64a56b]. Dropping connection. [move_base_node-2] process has died [pid 6194, exit code -6, cmd /opt/ros/kinetic/lib/move_base/move_base cmd_vel:=mobile_base/commands/velocity __name:=move_base_node __log:=/home/chbloca/.ros/log/ed0463be-5a97-11e9-93e4-94c6911e7b24/move_base_node-2.log]. log file: /home/chbloca/.ros/log/ed0463be-5a97-11e9-93e4-94c6911e7b24/move_base_node-2*.log What library should I modify in order to make the PC messages to be compatible with the mobile robot? PD: I cannot access to the mobile robot PC Originally posted by chbloca on ROS Answers with karma: 94 on 2019-04-09 Post score: 0 Original comments Comment by gvdhoorn on 2019-04-09: This seems like a cross-post of dfki-ric/mir_robot#20? Comment by chbloca on 2019-04-09: It is, indeed Comment by gvdhoorn on 2019-04-09: What has changed since you opened that issue? Comment by chbloca on 2019-04-09: Nothing at all. No solutions yet Comment by gvdhoorn on 2019-04-09: I believe @Martin Günther's answer on your mir_robot issue was pretty OK: either use a ROS version that corresponds to what the robot runs, or use his driver to not work with messages directly. Alternatively you could see whether using the same move_base messages on your PC would work, but it's likely that move_base on your own PC will not work with the messages that your robot uses. Comment by chbloca on 2019-04-09: So do you mean that my robot is running under Indigo ROS? Because the ROS in my computer is Kinetic. I cannot use his bridge since it is not suitable for heavy topics such as the ones that the camera produces. 
Comment by gvdhoorn on 2019-04-09: If your robot is running Indigo, then yes, using a client PC running Indigo (or a Docker image) should work. Comment by Martin Günther on 2019-04-10: @gvdhoorn - good catch on finding that GitHub issue, I wouldn't have seen this question if you hadn't mentioned me. Out of curiosity - do you monitor all of GitHub? :D Comment by gvdhoorn on 2019-04-10:\ Out of curiosity - do you monitor all of GitHub? :D yes ;) But in reality: most cross-posters just copy-paste whatever they've posted in place "X" verbatim to place "Y". A search for the title of this post here on ROS Answers immediately leads to the GH issue. Comment by chbloca on 2019-04-10: Thank you guys for the support to this topic. Not all heroes wear capes. Answer: I've posted a full answer to the GitHub issue: https://github.com/dfki-ric/mir_robot/issues/20#issuecomment-481623049 In short, I've updated the MirMoveBase action definition to match the md5sum that's expected by the MiR - this should solve your problem. Originally posted by Martin Günther with karma: 11816 on 2019-04-10 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by chbloca on 2019-04-10: Thanks Martin. So his answer was: I have just updated the mir_actions and mir_msgs packages to the MiR software version 2.3.1. The MirMoveBase action now looks like this: https://github.com/dfki-ric/mir_robot/blob/7682e991e0efb66b1fcfbf0a4c2b42f9a837235a/mir_actions/action/MirMoveBase.action This action definition hasn't changed between MiR software versions 1.9.13 and 2.3.1, so as long as you use any MiR software version within that range, you should be ok. In 2.4.0, the action has changed, which causes problems. If you use this new action definition, the message mir_actions/MirMoveBaseActionGoal has md5sum 8ac6f5411618f50134619b87e4244699, which is exactly what the client wants (see the error message in your first post). 
This is what you need to do if you want to send move_base goals to the MiR directly, without using the mir_bridge: clone and compile the latest version of this repo change your move_base client so that it sends mir_actions/MirMoveBase, ...
{ "domain": "robotics.stackexchange", "id": 32839, "tags": "navigation, move-base, ros-kinetic, gmapping" }
Help to run diff_drive_controller
Question: I’m trying to launch the diff_drive_controller but I get this message when I run the .launch file. “Controller Spawner: Waiting for service controller_manager/load_controller” " Controller Spawner couldn't find the expected controller_manager ROS interface." This is my .launch file: https://www.dropbox.com/s/gwtwy3ogzq8on0m/launch. I try with args="mobile_base_controller" instead of args="diff_drive_controller" And the config file that I copied from the diff_drive_controller site. mobile_base_controller: type : "diff_drive_controller/DiffDriveController" left_wheel : 'wheel_left_joint' right_wheel : 'wheel_right_joint' publish_rate: 50.0 # default: 50 pose_covariance_diagonal : [0.001, 0.001, 1000000.0, 1000000.0, 1000000.0, 1000.0] twist_covariance_diagonal: [0.001, 0.001, 1000000.0, 1000000.0, 1000000.0, 1000.0] # Wheel separation and diameter. These are both optional. # diff_drive_controller will attempt to read either one or both from the # URDF if not specified as a parameter wheel_separation : 1.0 wheel_radius : 0.3 # Wheel separation and radius multipliers wheel_separation_multiplier: 1.0 # default: 1.0 wheel_radius_multiplier : 1.0 # default: 1.0 # Velocity commands timeout [s], default 0.5 cmd_vel_timeout: 0.25 # Base frame_id base_frame_id: base_footprint #default: base_link # Velocity and acceleration limits # Whenever a min_* is unspecified, default to -max_* linear: x: has_velocity_limits : true max_velocity : 1.0 # m/s min_velocity : -0.5 # m/s has_acceleration_limits: true max_acceleration : 0.8 # m/s^2 min_acceleration : -0.4 # m/s^2 has_jerk_limits : true max_jerk : 5.0 # m/s^3 angular: z: has_velocity_limits : true max_velocity : 1.7 # rad/s has_acceleration_limits: true max_acceleration : 1.5 # rad/s^2 has_jerk_limits : true max_jerk : 2.5 # rad/s^3 Thanks for the help, I’m desperate with the ros_control package. Sorry for my english. 
Originally posted by javi_tecla on ROS Answers with karma: 3 on 2018-04-24 Post score: 0 Original comments Comment by Humpelstilzchen on 2018-04-25: args="diff_drive_controller" is definitely wrong since you have it as "mobile_base_controller" in the config. The ns="/robot" also looks wrong, as I don't see the config loaded into this namespace. Answer: Hi Jaime, you have a couple of errors in the information you posted: the args in the launch file do not have the correct name. You should specify there the name of the controller you defined in the config file, that is, mobile_base_controller. Also, in the launch file you are indicating a namespace /robot, but that is not included in the yaml file. You can get rid of it. Those two points should resolve the problem of controller spawner couldn't .... However, you may later have a problem loading the controller with a different error report, since I don't know if you are loading the ROS control plugin and if you are specifying the transmissions. You need to do that in your URDF model if you want the controllers to load properly. Have a look at this video where I show step by step how to modify your files so they work properly: https://goo.gl/ZWE5st Originally posted by R. Tellez with karma: 874 on 2018-05-28 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by javi_tecla on 2018-08-18: Thanks a lot for the answers and especially for the dedicated video. Finally I can launch the control node. You can see one of the tests of the wheelchair working in this video: https://www.youtube.com/watch?v=2xSHQy6WOEo
{ "domain": "robotics.stackexchange", "id": 30729, "tags": "ros, diff-drive-controller, ros-control, ros-kinetic" }
sum of multiples by 3 or 5 using ranges
Question: This program calculates the sum of all integers in the range \$[1, 1000)\$ which are multiples of either 3 or 5 or both. Inspired by x86-64 Assembly - Sum of multiples of 3 or 5 the other day, and also drawing from some of what I learned through Calculate the centroid of a collection of complex numbers I decided to try to tackle this using C++20 ranges. By heavy use of constexpr, I had hoped to find a solution that would calculate everything at compile time, and indeed this does as you can see if you try it online. This version is inspired by this talk C++20 Ranges in Practice - Tristan Brindle - CppCon 2020. I'm interested in general improvements. euler1.cpp #include <iostream> #include <concepts> #include <ranges> #include <iterator> #include <functional> #include <numeric> template <std::ranges::input_range R, typename Init = std::ranges::range_value_t<R>> constexpr Init accumulate(R&& rng, Init init = Init{}) { return std::reduce(std::ranges::begin(rng), std::ranges::end(rng), std::move(init)); } int main() { constexpr auto div3_or_5 = [](int i){ return i % 3 == 0 || i % 5 == 0; }; std::cout << accumulate(std::ranges::iota_view{1, 1000} | std::views::filter(div3_or_5) | std::views::common) << '\n'; } Answer: The optional init argement is never used, so we could omit that for this application, and use the two-argument form of std::reduce(). That would solve the other issue neatly - failure to include <utility> before using std::move().
{ "domain": "codereview.stackexchange", "id": 40306, "tags": "c++, programming-challenge, functional-programming, c++20" }
Why is there no neutral [Cr(OC)6] or anionic [Fe(NC)6] - isomers of cyanide and carbonyl complexes?
Question: I was just reading about linkage isomerism, that usually arise due to the fact that some ligands are ambidentate (i.e. $\ce{SCN}$ and $\ce{NCS}$). I then think to myself, considering only the ligand structure, $\ce{CN-}$ and $\ce{CO}$ ligand also should also be ambidentate ligands (I'm not including bridging). However, I haven't found any reference to that. Here's now, my questions: Are $\ce{CN-}$ and $\ce{CO}$ really ambidentate? why or why not? How different are the complexes? (i.e. $\ce{Cr(OC)6}$ vs $\ce{Cr(CO)6}$) Also, if anyone found a complex with $\ce{NC-}$ or $\ce{OC}$ ligand, please do put the reference to that. Answer: In the case of cyanide we actually do have iron coordinated to the nitrogen ends of the cyanide ligands in Prussian blue. In fact iron atoms are coordinated to both ends of the cyanide ion ligands; the iron forms a simple cubic lattice with the cyanide ions laid out along the edges of the cube like an expanded perovskite structure: From Wikimedia Commons user Ben Mills From Wikipedia user Smokefoot In general, the pi-acceptor ligand that cyanide ion is will bond first through the less electronegative carbon atom, where the $\pi^*$ orbitals that accept electrons from the metal have higher amplitudes (and thus potentially better overlap). But when that bond is already taken up, as in the Prussian blue structure or with nitrile ligands where the cyanide carbon is bonded to another carbon atom, then the nitrogen end of the cyanide can from a second bond.
{ "domain": "chemistry.stackexchange", "id": 17554, "tags": "inorganic-chemistry, coordination-compounds, ligand-field-theory, carbonyl-complexes" }
receiving transforms from openni_tracker
Question: The openni_tracker uses a TransformBroadcaster to publish the transforms it makes while tracking. I was going to write a subscriber node, but I realized that there is probably already one if openni_tracker is publishing something. Does anyone know the name of the subscriber node if it exists? To clarify, I am looking for a specific node that uses a TransformListener to get information from the openni_tracker node. Originally posted by qdocehf on ROS Answers with karma: 208 on 2011-07-11 Post score: 0 Original comments Comment by qdocehf on 2011-07-11: Thanks for the help. Comment by Miguel Prada on 2011-07-11: I'm afraid what you ask for is too specific for it to be available off-the-shelf. I'd insist on taking the TransformListener example and working from there. You can try to first receive just one of the transforms (e.g. right_hand) to get more confident and then extending to the rest of the frames. Comment by qdocehf on 2011-07-11: I already looked at this tutorial, and had the same plan. However, it is somewhat confusing for me to create a node based on a very different one. If there's a node that already listens to openni_tracker, it'd be easier for me to make the connections between what I am doing and the existing node. Comment by Miguel Prada on 2011-07-11: You should take the sample in http://www.ros.org/wiki/tf/Tutorials/Writing%20a%20tf%20listener%20%28C%2B%2B%29 and replace the frames that are requested to the TransformListener for any of the frames published by the openni_tracker. That should serve as a starting point to develop what you need. Comment by qdocehf on 2011-07-11: constant? Comment by qdocehf on 2011-07-11: I'm looking for any node that listens to openni_tracker. I don't need anything very complex, so any nodes will probably be sufficient. I just don't want to write a node if there's already an existing one, as I'm still not great with ROS. 
Also, why would the number of published transforms ever be Comment by Miguel Prada on 2011-07-11: You're looking for a node that listens to information from openni_tracker and does what? A blank node that listens to transforms between "openni_depth_frame" and each of the frames published by openni_tracker? Bear in mind that the number of published transforms is not even constant. Comment by qdocehf on 2011-07-11: Yes. Hopefully my update to the question makes it a bit more clear. Comment by dornhege on 2011-07-11: Are you looking for a TransformListener? Answer: Answered in comments by @Miguel Prada Originally posted by tfoote with karma: 58457 on 2011-10-06 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 6103, "tags": "ros, transforms, openi-tracker" }
Replace match inside tags
Question: The problem I want to solve is to replace a given string inside tags. For example, if I'm given: Some text abc [tag]some text abc, more text abc[/tag] still some more text I want to replace abc for def but only inside the tags, so the output would be: Some text abc [tag]some text def, more text def[/tag] still some more text We may assume that the tag nesting is well formed. My solution is the following: def replace_inside(text): i = text.find('[tag]') while i >= 0: j = text.find('[/tag]', i+1) snippet = text[i:j].replace('abc', 'def') text = text[:i] + snippet + text[j:] i = text.find('[tag]', j) return text I was wondering if it can be solved in a more elegant way, for example, using regex. Answer: Your code is incorrect. See the following case: print replace_inside("[tag]abc[/tag]abc[tag]abc[/tag]") You can indeed use regular expressions pattern = re.compile(r"\[tag\].*?\[/tag\]") return pattern.sub(lambda match: match.group(0).replace('abc','def') ,text)
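A self-contained version of the regex approach from the answer (the search and replacement strings are made parameters here purely for illustration):

```python
import re

def replace_inside(text, old='abc', new='def'):
    # Substitute only within [tag]...[/tag] spans; the non-greedy .*?
    # keeps each match confined to a single tag pair.
    pattern = re.compile(r"\[tag\].*?\[/tag\]")
    return pattern.sub(lambda m: m.group(0).replace(old, new), text)

print(replace_inside("Some text abc [tag]some text abc, more text abc[/tag] still some more text"))
# Some text abc [tag]some text def, more text def[/tag] still some more text
print(replace_inside("[tag]abc[/tag]abc[tag]abc[/tag]"))
# [tag]def[/tag]abc[tag]def[/tag]
```

The second call is the counterexample from the answer: the non-greedy quantifier keeps the `abc` between the two tag pairs untouched.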
{ "domain": "codereview.stackexchange", "id": 1341, "tags": "python, regex" }
At what point does decreasing solvent temperature cause a decrease in gas solubility?
Question: Example: At constant pressure, carbon dioxide becomes less soluble in water as temperature increases. We also know that carbon dioxide becomes more soluble in water as temperature decreases. Does introduction of order and loss of kinetic energy of solvent molecules eventually lead to a decrease in a solvent's ability to accommodate gas molecules? It is hard for me to believe that ice cubes can hold gas more effectively than the same volume of a glass of water. Answer: Rules such as "solubility of A in B rises with lower temperature" are only meant to be used if there is no phase transition of the participating substances.
{ "domain": "chemistry.stackexchange", "id": 8449, "tags": "physical-chemistry, solubility" }
Is decimal, hexadecimal, octadecimal, binary converter efficient?
Question: I have made a method which takes two parameters: the value to convert and the base. Based upon those two parameters my method is adjusted (with several helper methods) to change the desired decimal number to a number with a different base. Is this code practical and clean? If not, what should I change to make it better? public String baseConversion(int base, int value){ String finalResult = ""; String tempResult = ""; int countLength = 0, countFit = 0; // countLength counts the length of the tempResult string in order to know how many '0' to add. countFit counts how many times the tempValue fits into the current parameter 'value' int placeValue; // Records the place value. ex 245: '2' is in the hundreds place value int tempValue = 0; while (value > 0){ placeValue = 1; // calculates the highest power that fits in the value (takes desired base into account) while (value > placeValue){ placeValue*=base; countLength++; } if (value!=placeValue){ countLength--; // subtract to account for the value in the tempResult that is not '0' placeValue/=base; // divide back as the previous while loop went one placeValue higher than needed in order to break out } //figures out how many times the placeValue can fit into the value while (tempValue+placeValue <= value){ tempValue+=placeValue; countFit++; } if (base==16){ tempResult += numConverter(countFit); } if (base==8 || base == 2 || base == 10){ tempResult += String.valueOf(countFit); } // adds zeros to the rest of the string while (countLength > 0){ tempResult+="0"; countLength--; } // takes the current tempResult and combines with previous. 
ex combine 10000 with 100 = 10100 finalResult = combineResults(finalResult, tempResult); value -= tempValue; countFit=0; tempValue=0; tempResult=""; } return finalResult; } Helper methods: private String numConverter(int number){ // only for hexadecimal if(number == 10){ return "A"; }else if (number == 11){ return "B"; }else if (number == 12){ return "C"; }else if (number == 13){ return "D"; }else if(number == 14){ return "E"; }else if (number == 15){ return "F"; }else { return String.valueOf(number); } } private String combineResults(String a, String b){ String result = ""; int indexA = 0, indexB = 0; int aTempLength = a.length(), bTempLength = b.length(); if (a.length() > b.length()){ while (aTempLength > bTempLength){ result+=a.charAt(indexA); indexA++; aTempLength--; } }else { while (bTempLength > aTempLength){ result+=b.charAt(indexB); indexB++; bTempLength--; } } while (aTempLength > 0){ if (String.valueOf(a.charAt(indexA)).equals("0")){ result+=b.charAt(indexB); indexA++; indexB++; }else { result+=a.charAt(indexA); indexA++; indexB++; } aTempLength--; } return result; } Answer: Disclaimer: I'm new here, so this might not be the type of review people give, but I'm trying to match it as best as I can. Formatting A comment explaining the purpose of combineResults(String a, String b) next to the method would be helpful. (Yes, there's one in the main method, but documentation next to the helpers is good, too.) The same could be said for numConverter(int number)... Other than that, indentation looks good, braces look fine, etc. :) Algorithm In general: You should be passing around StringBuilders or StringBuffers (or chars) instead of Strings. The former are mutable, while the latter isn't mutable. This will save some performance. (e.g. appending is better on memory with the builders and buffers, rather than with raw Strings.) Main Algorithm Ok... You took kinda an odd approach to converting bases. 
Rather than using all those while loops, you should focus on determining the next least significant digit by using the modulus operator followed by division. For example, in pseudocode: while(number > 0) { nextDigit = number % base; //Get the next digit using modulo number /= base; // prepend nextDigit to result String Buffer } This makes the combineResults() method unneeded, as prepend is built-in to the StringBuilder. NumConverter In the spirit of StringBuilders vs. String, this should return a char rather than a String. Your current code assumes that the only number ever fed to this method will be less than 16. This could be a problem if it's a larger library, or if you want to expand the bases to more than hexadecimal. You could use a switch/case construct instead of the else if's... Or, if you really want to shorten the code, you could do something like (note: this also implements the char return instead of String): char numConverter(int digit) { if (digit >= 16 || digit < 0) //If we're not given a valid digit return 0; //Let the output know somehow... could throw an exception instead. else if (digit < 10) //If we're dealing with a "normal" digit return (char)('0' + digit); //... return the numeric equivalent else //We're in extended digit range: we need letters! return (char)('A' + digit - 10); //Return hex equivalent of 10-15 (A-F) }
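The modulo/division approach sketched above can be put together as a small self-contained class (a sketch, not the reviewer's exact code; note the standard library already provides this via Integer.toString(value, radix)):

```java
public class BaseConversion {
    // Repeatedly take value % base for the next least significant digit,
    // prepend it, then divide; Character.forDigit handles digits > 9.
    static String toBase(int value, int base) {
        if (value == 0) return "0";
        StringBuilder sb = new StringBuilder();
        while (value > 0) {
            int digit = value % base;  // next least significant digit
            sb.insert(0, Character.toUpperCase(Character.forDigit(digit, base)));
            value /= base;
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(toBase(255, 16)); // FF
        System.out.println(toBase(255, 2));  // 11111111
        System.out.println(toBase(245, 10)); // 245
    }
}
```

Character.forDigit returns lowercase letters for digits 10-15, so the result is upper-cased to match the original numConverter's A-F output.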
{ "domain": "codereview.stackexchange", "id": 6160, "tags": "java, converting, reinventing-the-wheel" }
Force on a point charge in the cavity of an uncharged conductor
Question: There's another question on this particular topic, more specifically my doubts arise from dealing with problem $2.40$ of Griffiths' Electrodynamics (4th ed). I've read some answers and they all refer to the solution of Laplace's equation in the absence of any charge and also Earnshaw's theorem, which I'd like to leave apart for I don't have the necessary knowledge yet. My approach is: I first consider a conductor whose boundary we can call $\Sigma_1$, with a cavity inside (the boundary of the latter I called $\Sigma_2$). Inside the cavity, there's a positive point charge $+q$. Now the induced surface charge densities (both of which are non-uniform) are such that, on the outer surface $\Sigma_1$ of the conductor: $$ q = \int\int_{\Sigma_1}\sigma_+(\mathbf{r'})d\mathbf{a'}$$ Equivalently, for $\Sigma_2$: $$ -q = \int\int_{\Sigma_2}\sigma_-(\mathbf{r'})d\mathbf{a'}$$ Now for my purposes, I can consider a conductor very large, let's say that it extends to infinity. In this way, I can ignore the contribution to the net electrical field inside the cavity since the charges are infinitely far away. I know that the point charge inside the cavity generates a radial field, but I also know that the charge cannot exert a non-null force on itself. So we have the general formula: $$ \mathbf{F}_q = q\mathbf{E}^{ext}_{net}$$ The only contribution to the field comes from the induced negative surface charge on the boundary of the cavity $\Sigma_2$: $$\mathbf{E}^{ext} = \frac{1}{4\pi \epsilon_0} \int_{\Sigma_2} \frac{\sigma_{-}(\mathbf{r}') }{r'^2 } \mathbf{\hat{r}} da'$$ In the last integral we set the origin of our coordinate system at the place where the point charge is, so that $\mathbf{r-r'} = -\mathbf{r'}$. Now it's easy to see that this field plus the one due to the point charge must cancel each other (otherwise the field inside the meat of the conductor wouldn't be null). 
But this obviously does not allow me to assume that the field is such that it generates a null force on the point charge, unless the whole configuration presents some kind of symmetry. Therefore, in general, the point charge will experience a non-null force and this means it will be dragged towards some part of the cavity (yes I know the force can also be repulsive sometimes, in any case the charge will move inside the cavity). This all seemed a pretty good argument to me except for one thing: aren't we in the regime of electrostatics where all the charge is stationary? Isn't this contradictory? My other doubt is: let us suppose that the cavity is spherical (the origin of our coordinate system coinciding with the point charge inside it). Can I assume that the induced charge $\sigma_{-}$ will spread uniformly on the boundary of the cavity? Or is there some weird configuration which could minimize the potential energy of the configuration even more? Because if this isn't the case, then we have found (one of the possible) configuration(s) in which the force on the charge is exactly zero, since $\mathbf{E} = 0$ inside the cavity. Answer: This all seemed a pretty good argument to me except for one thing: aren't we in the regime of electrostatics where all the charge is stationary? Isn't this contradictory? No. Electrostatics assumes charges are at rest, but it does not imply they are at stable rest due to all electric forces acting on any single charged particle cancelling each other. A purely electrostatic system of charged particles (no other forces present) is always unstable, whatever the spatial configuration of charges. This is because electric potential due to other charges, "experienced" by any test charge, cannot have a minimum or a maximum at any point of space; such potential obeys the Laplace equation, which prevents such minimum/maximum from existing anywhere in infinite space. 
Potential everywhere is such that it either 1) increases in certain direction, and decreases in the opposite one (a test charge there would be in unstable position, pushed by electric force) or 2) it does not change with position at all (a test charge there would be in indifferent position, not pushed by electric force). In case 2, it looks like the test particle could be at rest, experiencing zero net electric force. This is the case of system that has only 1 particle. However, if there are other particles, it is not possible to have them all at magic points where they experience zero net electric field. There have to be other, non-electrostatic, forces acting on the particles, to have a system of two or more charges that is in equilibrium. In your example, charge on the surfaces is stabilized by internal forces of the metal, keeping the charge motion only to the metal, preventing it from jumping out to vacuum. let us suppose that the cavity is spherical (the origin of our coordinate system coinciding with the point charge inside it). Can I assume that the induced charge $\sigma_{-}$ will spread uniformly on the boundary of the cavity? Or is there some weird configuration which could minimize the potential energy of the configuration even more? Because if this isn't the case, then we have found (one of the possible ) configuration(s) in which the force on the charge is exactly zero, since $\mathbf{E} = 0$ inside the cavity. Surface charge will have uniform area density only in case the charge inside is at the exact center of the spherical cavity. This situation is unstable, because it takes arbitrarily small shift of the charge inside to make the configuration non-equilibrium. The shifted charge inside induces asymmetric distribution on the inner surface which will attract the charge inside towards the cavity wall.
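The no-extremum claim in the answer is the standard Earnshaw-type argument and can be made explicit. In the charge-free region the potential produced by all other charges satisfies

```latex
\nabla^2 V \;=\; \frac{\partial^2 V}{\partial x^2}
  + \frac{\partial^2 V}{\partial y^2}
  + \frac{\partial^2 V}{\partial z^2} \;=\; 0 .
```

At a strict minimum of $V$ all three second derivatives would be non-negative with at least one positive, making the sum positive; at a maximum the sum would be negative. Both contradict $\nabla^2 V = 0$, so the potential has no extremum in empty space and a test charge has no point of stable electrostatic equilibrium there.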
{ "domain": "physics.stackexchange", "id": 97144, "tags": "homework-and-exercises, electrostatics, conductors" }
Restructuring JSON to create a new JSON where properties are grouped according to similar values
Question: I have a JSON structure in the following example format: [ {date: '01-01-2021', name: 'House 1', value: '500'}, {date: '01-01-2021', name: 'House 2', value: '600'}, {date: '01-01-2021', name: 'House 3', value: '700'}, {date: '01-01-2021', name: 'House 4', value: '800'}, {date: '01-02-2021', name: 'House 1', value: '200'}, {date: '01-02-2021', name: 'House 2', value: '100'}, {date: '01-02-2021', name: 'House 3', value: '300'}, {date: '01-02-2021', name: 'House 4', value: '1000'}, ] I have written logic to group it in the following format: [ {date: '01-01-2021', 'House 1': 500, 'House 2': 600, 'House 3': 700, 'House 4': 800}, {date: '01-02-2021', 'House 1': 200, 'House 2': 100, 'House 3': 300, 'House 4': 1000}, ] Below is code I have written. It works and seems to be efficient, but I am curious if this was the best approach or if there is a simpler way to accomplish this task? The actual data set contains a total of 2274 objects that is reduced to an array of 748 objects. Code: const housing_table_data = ((json_data) => { const unique_dates = housing_data.reduce((accumulator, current_value) => { if (accumulator.indexOf(current_value.date) === -1) { accumulator.push(current_value.date); } return accumulator; }, []); let table_data = []; unique_dates.forEach((date, index) => { table_data.push({date: date}); let house_groupings = housing_data.filter(data => data.date=== date); house_groupings.forEach(group => { table_data[index][group.name] = group.value; }); }); return table_data; })(housing_data); Answer: Scoping issues You are using an "immediately invoked function expression" (IIFE) here to scope the unique_dates, but in the process you forgot to use the function argument you introduced to correctly scope the function you have there. Note how you have ((json_data) => { but the body of the function you define there never refers to json_data. That's not really that great. This doesn't give you any benefit here so instead you should just unwrap that IIFE. 
Naming Convention I just want to mention that the majority of styleguides I have seen for javascript advocate the use of camelCase variable names over snake_case names. Seeing that you are consistent in your use of snake_case that's really just personal preference, though. Algorithmic advice You can cut the number of iterations over the json_data down to a single iteration instead of two iterations. It allows you to avoid filtering the housing_data in the forEach. The "trick" to see is the idea of using the date as a key into an accumulator object instead of using an accumulator array. To transform the accumulator into your target structure you can then use Object.values: const grouped_accumulated = housing_data.reduce((accumulator, current_value) => { if (!accumulator.hasOwnProperty(current_value.date)) { accumulator[current_value.date] = { date: current_value.date }; } accumulator[current_value.date][current_value.name] = current_value.value; return accumulator; }, {}); const housing_table_data = Object.values(grouped_accumulated);
{ "domain": "codereview.stackexchange", "id": 41560, "tags": "javascript, array, functional-programming, ecmascript-6, iteration" }
Using Statement for OleDB - Is this overkill?
Question: I am still trying to wrap my head around good programming practice and just wrote this routine to read in an Excel file and return all the sheets into a dataset: Public Shared Function ReadExcelIntoDataSet(ByVal FileName As String, Optional ByVal TopRowHeaders As Boolean = False) As DataSet Try Dim retval As New DataSet Dim strConnString As String = "Provider=Microsoft.ACE.OLEDB.12.0;Data Source=" & _ FileName & ";Extended Properties=""Excel 12.0;IMEX=1;" & _ "HDR=" & If(TopRowHeaders, "Yes", "No") & """;" Using oleExcelConnection = New OleDb.OleDbConnection(strConnString) oleExcelConnection.Open() Using oleExcelCommandString = New OleDb.OleDbCommand("", oleExcelConnection) Using oleExcelAdapter = New OleDb.OleDbDataAdapter(oleExcelCommandString) Dim dtSchema As DataTable = oleExcelConnection.GetOleDbSchemaTable(OleDb.OleDbSchemaGuid.Tables, Nothing) For Each dr As DataRow In dtSchema.Rows Dim Sheetname As String = dr.Item("TABLE_NAME").ToString If Sheetname.EndsWith("$") Then oleExcelCommandString.CommandText = "Select * From [" & Sheetname & "]" oleExcelAdapter.Fill(retval, Sheetname) End If Next Return retval End Using End Using End Using Catch ex As Exception Dim ErrorData As String = "Unable to Read in Data from the Excel File " & FileName & vbCrLf & _ "Module: Utility > ReadExcelIntoDataSet" Throw New Exception(ErrorData & vbCrLf & "ERROR Data:" & vbCrLf & ex.Message) End Try End Function My first questions are: Is the use of the Using statements good or is it some kind of over-kill and should this maybe be handled in a Finally statement in stead? Is this the best method to read in an Excel File (Given that they can be either .xls or .xlsx) in the fastest possible way? What is the best way to pass the error up to the calling code? Did my Catch statement do that well enough?? Answer: The Using is good practice and not overkill. 
Trying to dispose the resources in the Finally part has the disadvantage that you don't know when the exception happened and which resources have been assigned by that time. The Using statement does this automatically for you internally: If resource <> Nothing Then resource.Dispose() End If You need different connection strings for the different Excel types. You can use this function: Private Function GetConnectionString(ByVal filename As String) As String Const xlsConnString As String = "Provider=Microsoft.Jet.OLEDB.4.0;Data Source={0};Extended Properties=Excel 8.0;" Const xlsxConnString As String = "Provider=Microsoft.ACE.OLEDB.12.0;Data Source={0};Extended Properties=Excel 12.0;" If Path.GetExtension(filename).Length = 5 Then ' .xlsx, .xltx Return String.Format(xlsxConnString, filename) Else ' .xls, .xlt Return String.Format(xlsConnString, filename) End If End Function You can either propagate the exception or return Nothing. In my opinion, your version does it very well.
{ "domain": "codereview.stackexchange", "id": 2965, "tags": "vb.net" }
Generate ligands candidates based on protein shape
Question: Recent approaches to novel drug design using machine learning (ML) and deep learning, often involve generating hundreds of potential ligands which are later tested by docking with a target protein and recording the resulting binding affinity. Is it possible to go the other way around by first constraining the ligand generation process to the shape of the protein thus limiting the search space, as opposed to generating ligands of random shapes? Answer: TL;DR: docking is much slower than any ML approach, but the ML approach can be constrained by pharmacophores dictated by the active site. Side note: Scale The scale for ligand space exploration is generally several orders of magnitude higher than "hundreds": Zinc DB lists 750 million enumerated compounds, GDB13 has enumerated all possible compounds up to 13 atoms (970 million), Enamine Real is 1.2 billion etc. In an actual experiment, one would use a more restrictive set. As an example of a real project, in Covid19 moonshot project there are 2,800 user suggested follow-ups. Side note: More than just binding There are a few requirements for a drug to be effective, and binding is only one of them: Must bind... Must follow Lipinski's rule of five, i.e. it must be membrane permeable and small. Must be synthesisable. Whereas the experience of someone learnèd is by far the best way to tell how an expansion can be done, there are several machine learning approaches that can filter a huge dataset. IBM actually have a fun online tool —. Must not have side effects. So pre-filtering the list of compounds, regardless of docked poses, will happen, be it manually, by cut-offs or by advanced machine learning. CPU time: docking vs. machine learning These various steps are generally followed in that order. Calculating properties (MW, logP etc.) for a given SMILES string to filter based on Lipinski's rule takes a fraction of a second, so that is generally always a first step. 
Whether the compound can be synthesised or bought is in general left last and done manually. Pharmacology is complicated so is ignored. Docking a ligand takes time and different approaches have different time scales. A decent algorithm that uses an implicit solvent model takes about a minute per core. A short MD simulation takes 1 hour per core or more. There are docking programs for screening that can be faster but these dock only one conformer of the small molecule against a rigid protein and the results are not of much value. Also setting up docking properly is not easy (cf. steps required for Autodock 4, each with their own caveats). So to recap what needs to be considered that results in the slowness: hundreds of conformers of the ligand; some flexibility of the protein, side-chains repacking only or even backbone changes; thousands of poses explored; and, optionally, explicit water, as implicit water is not as good as a TIP3 water model. Machine learning has a much smaller time requirement and is way more attractive for publications. Generally these work by using ligand only information, such as properties, splitting it into pharmacophores, colour etc. This requires a decent dataset, such as empirical results, say from a fragment screen, what compounds bound and which didn't. Often there is a lot of hype with machine learning and the result is something close to something already seen, as well put in this blog post by Pat Walters. This is a side effect of the starting data. The recent Stokes et al. 2020 Cell paper uses AI perfectly because of their staggering automated empirical data collection pipeline. ML after docking However, ML can actually be used after docking to improve the quality of the score. The Boyles, Deane, Morris 2019 Bioinformatics paper investigated ligand-only machine learning to see why ML methods that rely solely on the ligand somehow often work. It is a good paper from a sceptical computational biochemistry viewpoint. 
The scorefunction used in docking by the sampler can be classified as forcefield-based, empirical or hybrid. In some empirical cases, many terms are ligand based properties (/features) that would normally go into a ML search (say one trained on failed hits and successful hits). For example Autodock Vina (not 4) has regression derived factors in the final weights of the score. Structural features in machine learning: HotSpots Information of the protein active site can be used in ranking compounds in a non-docking machine learning approach. Namely, finding out what kind of chemical groups bind preferentially to different parts of the protein's active site, i.e. making a "fragment hotspot map". Specifically, a hotspot map allows the determination of a pharmacophore, which is an abstract generalisation of a group of molecules with both electronic and steric features —a coarse-grain model, if you will. This approach is also used to detect where the active site actually is in the extreme cases where it is not known (primarily a problem for pipelines, not humans). There are several implementations as described here, the first being FTMap (site, paper), which docks 16 different types of compounds. Waters can play a role and severely complicate matters, hence approaches can be taken to counter that. However, unlike docking, protein rigidity is not a big nuisance: this is because a hotspot map is most often used for filtering/weighing by the "color" not the shape of the molecules. But shape based searching is possible. 
{ "domain": "bioinformatics.stackexchange", "id": 1365, "tags": "protein-structure, machine-learning, docking" }
Derivation of Gell-Mann Okubo relation for mesons
Question: In SU(3) quark model of hadronic structure one assumes that mass splittings between hadrons is due to difference between masses of $s$ quark and $u,d$. This is modeled by perturbation Hamiltonian $$ \delta H=\frac{m_s-m}{3}(1-3 Y),$$ where $m$ is mass of $u,d$ and $Y$ is hypercharge. In particular in fundamental representation in basis $u,d,s$ this matrix has the form $$\delta H =\mathrm{diag(0,0,m_s-m)}.$$ In eigenbasis of hypercharge one immediately gets the expected values of this operator and from that corrections to energy to first order in perturbation theory. This yields correct formulas for baryon multiplets: mass differences are approximately proportional to differences in hypercharge, $$ M=a+b Y,$$ with $a,b$ some constants. However in lecture notes from the class in particle physics I attended a different approach is used for mesons. My teacher uses the fact that $Y$ is an eighth element of $(1,1)$ irreducible representation of SU(3) and then claims to have used Clebsch-Gordan coefficients for SU(3) to obtain the following formula: $$ M= a'+ b' Y + c' \left( I(I+1) -\frac{1}{4} Y^2 \right), $$ with $a',b',c'$ some constants. From that using assumption that $b'=0$ because mass is the same for particle-antiparticle pairs it is quite easy to get the celebrated Gell-Mann Okubo relation (actually one gets this for masses rather than their squares, but it is closer to truth if we put squares by hand) $$ 4 M_K ^2 = M_{\pi}^2 +3 M_{\eta}^2. $$ I don't understand why in this case we can't just explicitly evaluate the $Y$ operator to get the usual relation which holds for baryons. In Perkins it is written that this GMO relation is empirical rather than derived from SU(3) model. How should I understand this? Answer: This is a good question that puzzled theorists for a while, until the modern understanding of chiral symmetry breaking in QCD clarified itself. 
The crucial thing to note is that the quadratic formula you are quoting is valid, and necessary, for pseudoscalar mesons only---the abnormally light pseudogoldstone bosons of spontaneously broken chiral symmetry. By contrast, if you tried to evaluate the formula for the vector meson octet, instead, i.e. the ρ(775), ω(783), φ(1020), with the ω-φ suitably unmixed to take out the singlet, and the K*(896)s, the linear formula would be pretty good, as the ρ would not punish you as badly as the π! The complete theoretical explanation is in Dashen's formula for the masses of pseudogoldstone bosons, and is neatly summarized in section 5.5 of T. P. Cheng's & L. F. Li's tasteful book. If you were a glutton for detail, you might opt for S. Weinberg's (1996) The Quantum Theory of Fields (v2. Cambridge University Press. ISBN 978-0-521-55002-4. pp. 225–231). The basic idea of Dashen's formula (often also referred to as Gell-Mann-Oakes-Renner (1968) doi:10.1103/PhysRev.175.2195 in the sloppy shorthand of chiral perturbation theory. It is a blending of a current algebra Ward identity with PCAC, $m_\pi^2 f_\pi^2=-\langle 0|[Q_5,[Q_5,H]]|0\rangle$) is that the square of the mass of the pseudogoldstone boson is proportional to the explicit breaking part of the effective lagrangian, here linear in the quark masses, as you indicated. That is, for example, naively, the pion mass, which should have been zero for massless quarks, now picks up a small value $m_\pi^2 \sim m_q \Lambda^3/f_\pi^2$, where $m_q$ is the relevant light quark mass in the real world QCD Lagrangian, which explicitly breaks chiral symmetry; $f_\pi$ is the spontaneously broken chiral symmetry constant, about 100MeV; and Λ the fermion condensate value ~ 250MeV. 
That is to say, the square of the mass of the pseudogoldstone is the coefficient of the second derivative of the effective lagrangian (it pulls two powers of the goldstone out of the chiral vacuum with strength $f_\pi^2$) and so the commutator of the QCD lagrangian w.r.t. two chiral charges. Normally, that would be zero, but if there is a small quark mass term, it snags, so the quark mass term provides a quark bilinear times a quark mass, the v.e.v. of the bilinear amounting to Λ cubed. The GM-O formula served to explain flavor SU(3) breaking half a century ago in terms of "octet dominance" (code for the strong hypercharge Y), effectively your operator δH with the trivial identity term taken out, before quarks were invented, and, more importantly, taken seriously. (There was a strange hiatus of almost a decade in which everybody was thinking in terms of quarks, but it was thought to be flakey to admit it! But George Zweig had no fear.) With the advent of quarks, lattice gauge theory appreciation of chiral symmetry breaking, and finally chiral perturbation theory, such abstract formulas are needlessly obscure, cumbersome, and "magical", and mostly old-timers and science historians spend time on them. Calculators just calculate now.
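As a quick arithmetic check of the multiplet formula quoted in the question: inserting the pseudoscalar quantum numbers (with $b'=0$ and squared masses put in by hand) gives

```latex
\pi:\; I=1,\ Y=0 \;\Rightarrow\; M_\pi^2 = a' + 2c', \qquad
K:\; I=\tfrac{1}{2},\ Y=\pm 1 \;\Rightarrow\; M_K^2 = a' + \tfrac{1}{2}c', \qquad
\eta:\; I=0,\ Y=0 \;\Rightarrow\; M_\eta^2 = a',
```

so that $4M_K^2 = 4a' + 2c' = (a' + 2c') + 3a' = M_\pi^2 + 3M_\eta^2$, which is the Gell-Mann Okubo relation.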
{ "domain": "physics.stackexchange", "id": 32785, "tags": "particle-physics, group-theory, quarks, mesons, baryons" }
Alternatives to Gazebo Fortress (Building Editor) Feature?
Question: I've recently transitioned to using Gazebo Fortress and noticed that it lacks a built-in building editor feature, which was a beneficial tool in Gazebo Classic. This editor was essential for creating custom environments for my robotics simulations. For those who have been working with Ignition Fortress or have faced similar challenges: What are the most recommended alternatives or tools you've used to design environments for Gazebo Ignition? Are there any best practices or specific workflows you've adopted to bridge this gap? Has anyone found a workaround or method to integrate environments from other tools effectively into Ignition? I'm particularly interested in solutions that are tried and tested in real-world projects, as this would provide insights into their robustness. Answer: For the more general case, refer to this previous question. This is not specifically aimed at modeling buildings though. Re. buildings: there are plenty of tutorials e.g. for Blender but I think this would still involve more work than using the building editor. However, you could install both Gazebo Classic and Gazebo Fortress, create the SDF using the building editor and then use that SDF in Fortress? An alternative is to buy a model online and use that. Though you will typically need to clean up the model and reapply the textures in Blender and then bake the textures and/or light map if you want realistic rendering. This is a lot of work, certainly if you have never done that yet and need to find out how everything works. But the result can be really nice. E.g. see the demo shown in the release video of Gazebo Harmonic.
{ "domain": "robotics.stackexchange", "id": 38636, "tags": "gazebo, ros2, simulation, simulator-gazebo, gazebo-simulator" }
Why are the surfaces of liquids always perpendicular to the gravitational force?
Question: I am not a physicist nor do I have a good knowledge of the topic. Pardon me if I use terms erroneously. I observed that when I put some water in a bottle, without regard to how the bottle is placed, the surface of the water is always perpendicular to the direction of the gravitational force. What is the explanation behind this? Answer: I think you mean horizontal, i.e. perpendicular to the direction of gravity. The answer is simple. Any part of the water surface which rises above the general level is dragged down by gravity until pressure from the rest of the liquid prevents it from sinking any lower.
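The same argument can be made quantitative with hydrostatics (this derivation is a supplement, not part of the original answer): in equilibrium the pressure gradient balances gravity, and the free surface is a surface of constant atmospheric pressure.

```latex
% Hydrostatic equilibrium: pressure gradient balances gravity
\nabla P = \rho \, \vec{g}
% The free surface is an isobar: P = P_{\text{atm}} everywhere on it.
% A gradient is normal to its level surfaces, so the surface normal is
% parallel to \nabla P \parallel \vec{g}; the surface is therefore
% perpendicular to gravity, i.e. horizontal.
```

Any tilt of the surface would put a pressure gradient along the surface itself, driving a flow until the tilt is gone.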
{ "domain": "physics.stackexchange", "id": 77846, "tags": "gravity, newtonian-gravity" }
Build a corpus for machine translation
Question: I want to train an LSTM with attention for translation between French and a "rare" language. I say rare because it is an African language with less digital content, and especially few databases in a seq-to-seq-like format. I found a dataset somewhere, but in terms of quality, both the French and native-language sentences were awfully wrong. When I used this dataset, of course my translations were damn funny ... So I decided to do some web scraping to build my parallel corpus myself, and it might be useful for research in the future. It worked well and I managed to collect some good articles from a website containing articles (monthly, since 2016, in both languages). Now the tricky part is putting everything into sentence-to-sentence format. I did a trial with a text and its translation just by tokenizing into sentences and I noticed that, for example, I had 23 sentences for French and 24 for the native language. Further checking showed that some small differences were noticeable in both languages, like a sentence where a comma was replaced in the other language by a dot. So my question is: Is it mandatory to put my articles into sentence-French to sentence-native-language format? Or can I leave them as text / paragraphs? Answer: What you would typically do in your case is to apply a sentence alignment tool. Some popular options for that are: hunalign: a classical tool that relies on a bilingual dictionary. bleualign: it aligns based on BLEU score similarity. vecalign: it is based on sentence embeddings, like LASER's. I suggest you take a look at the preprocessing applied for the ParaCrawl corpus. In the article you can find an overview of the most popular methods for each processing step. A different option altogether, as you suggest, is to translate at the document level.
However, most NMT models are constrained in the length of the input text they accept, so if you go for document-level translation, you must ensure that your NMT system can handle such very long inputs. An example of an NMT system that can be used for document-level NMT out of the box is Marian NMT with its gradient-checkpointing feature.
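For intuition about what alignment tools do under the hood, here is a toy dynamic-programming aligner that scores candidate pairings by character-length similarity, in the spirit of Gale-Church. It is an illustrative sketch with made-up penalty constants; for real data, use hunalign/bleualign/vecalign as recommended above.

```python
def length_align(src, tgt):
    """Align two lists of sentences by character length using dynamic
    programming over 1-1, 1-2, 2-1, 1-0 and 0-1 moves (a crude
    stand-in for Gale-Church; penalty constants are arbitrary)."""
    INF = float("inf")
    n, m = len(src), len(tgt)
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    back = [[None] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    moves = [(1, 1), (1, 2), (2, 1), (1, 0), (0, 1)]
    # Row-major order guarantees every predecessor cell is final
    # before its outgoing moves are relaxed.
    for i in range(n + 1):
        for j in range(m + 1):
            if cost[i][j] == INF:
                continue
            for di, dj in moves:
                ni, nj = i + di, j + dj
                if ni > n or nj > m:
                    continue
                if di and dj:
                    # Length-mismatch penalty, plus a charge for
                    # non-1-1 moves so 1-1 alignments are preferred.
                    step = abs(sum(map(len, src[i:ni])) -
                               sum(map(len, tgt[j:nj])))
                    if (di, dj) != (1, 1):
                        step += 5
                else:
                    step = 10  # flat cost for dropping a sentence
                if cost[i][j] + step < cost[ni][nj]:
                    cost[ni][nj] = cost[i][j] + step
                    back[ni][nj] = (i, j)
    # Backtrack from the corner to recover the aligned pairs.
    pairs, i, j = [], n, m
    while (i, j) != (0, 0):
        pi, pj = back[i][j]
        pairs.append((src[pi:i], tgt[pj:j]))
        i, j = pi, pj
    return pairs[::-1]
```

The 1-2 and 2-1 moves are exactly what absorbs the 23-vs-24 sentence mismatch described in the question: one French sentence can align to two native-language sentences without derailing the rest of the document.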
{ "domain": "datascience.stackexchange", "id": 8859, "tags": "nlp, lstm, sequence-to-sequence, corpus" }
Clipboard detector in Python
Question: The following contains my little app that detects changes to the clipboard and displays them in the GUI. I did my best using my limited knowledge of Python, but I have a feeling that I can definitely improve the program. It works, but Python's CPU usage shoots up to 20% whenever I run the program - which is due to my use of multiple threads and infinite loops, I'm sure.

    #! python3
    #GUI
    import tkinter
    #needed for the clipboard event detection
    import time
    import threading

    #listener class that inherits from Thread
    class ClipListener(threading.Thread):
        #overriding Thread constructor
        def __init__(self, pause = .5):
            #from documentation: If the subclass overrides the constructor, it must make sure to invoke the base class constructor (Thread.__init__()) before doing anything else to the thread.
            super().__init__() #calls Thread class constructor first
            #initialize parameters
            self.pause = pause
            self.stopping = False
            #initialize event to communicate with main thread
            self.copyevent = threading.Event()

        #override run method
        def run(self):
            last_value = tkinter.Tk().clipboard_get() #initialize last_value as
            #continue until self.stopping = true
            while not self.stopping:
                #grab clip_board value
                temp_value = tkinter.Tk().clipboard_get()
                #if last value is not equal to the temp_value, then (obviously) a change has occurred
                if temp_value != last_value:
                    #set last value equal to current (temp) value and print
                    last_value = temp_value
                    print("set")
                    #set the event if clipboard has changed
                    self.copyevent.set()
                time.sleep(self.pause) #sleep for indicated amount of time (.5 by default)

        #override stop method to work with our paramter 'stopping'
        def stop(self):
            self.stopping = True

    #GUI extends Frame, serving as main container for a root window
    class GUI(tkinter.Frame):
        #constructor for GUI - intializes with a default height and width if none are given
        def __init__(self, master, ht=600, wt=800):
            #uses the parent class' constructor
            super().__init__(master, height=ht, width=wt)
            self.var = tkinter.StringVar()
            self.var.set("No copied text")
            self.pack_propagate(False) #window will use it's own width and height as parameters instead of child's dimensions
            self.pack()
            self.label = tkinter.Label(self, textvariable=self.var)
            self.label.pack()

        #method to update the label
        def update_label(self, newText):
            self.var.set(newText)
            self.label.pack()

    def main():
        #good practice to have a variable to stop the loop
        running = True
        #GUI initialized
        root = tkinter.Tk()
        gui = GUI(root)
        #start thread containing Clipboard Listener
        listener = ClipListener(.100)
        listener.start()
        #loop to keep updating the program without blocking the clipboard checks (since mainloop() is blocking)
        while running:
            #update the gui
            root.update();
            #wait .1 seconds for event to be set to true
            event_set = listener.copyevent.wait(.100)
            #if true then update the label and reset event
            if event_set:
                gui.update_label(root.clipboard_get())
                listener.copyevent.clear()

    #only run this program if it is being used as the main program file
    if __name__ == "__main__":
        main()

Answer: Don't use threads

You don't need the complexities of threading for this problem. You can use the Tkinter method after to run a function periodically. Threading is necessary if the function you are running takes a long time, but that is not the case here. You can use a class, but to keep this answer simple I'll only show the function. Also, note that I use an existing window rather than tkinter.Tk() on each call. There's no need to create a new root window every time you do the check.

    def check_clipboard(window):
        temp_value = window.clipboard_get()
        ...

Next, create a function that calls this function every half second:

    def run_listener(window, interval):
        check_clipboard(window)
        window.after(interval, run_listener, window, interval)

To start running, simply call run_listener once, and it will continue to run every 500 milliseconds:

    run_listener(root, 500)

The logic to stop and pause is roughly the same as what you have now. Create a flag, and then check that flag inside of run_listener. With this, you can remove the loop from main since tkinter already has an efficient loop:

    def main():
        root = tkinter.Tk()
        gui = GUI(root)
        # start the listener
        run_listener(root, 500)
        # start the GUI loop
        root.mainloop()
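The polling pattern in the answer separates cleanly from tkinter: the change-detection core only needs a value source and a callback. A minimal, display-free sketch (the names here are illustrative, not from the original answer; in the GUI version, `read_value` would be `window.clipboard_get`):

```python
def make_change_detector(read_value, on_change):
    """Return a zero-argument poll function that invokes on_change
    whenever the polled value differs from the last one seen."""
    last = read_value()  # prime with the current value

    def poll():
        nonlocal last
        current = read_value()
        if current != last:
            last = current
            on_change(current)

    return poll

# In the tkinter version, scheduling is just:
#   window.after(interval_ms, lambda: (poll(), reschedule()))
```

Keeping this logic free of tkinter also makes it trivially unit-testable, which the threaded original is not.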
{ "domain": "codereview.stackexchange", "id": 28966, "tags": "python, performance, python-3.x, tkinter" }
Is it harder to find the ground state of a classical spin glass with equal or unequal bond strengths?
Question: Consider two different models for a classical spin glass in zero applied field, where the degrees of freedom are $\{ s_i = \pm 1\}$. The first model is $$H = \sum_{i < j} J_{ij} s_i s_j,$$ where the $J_{ij}$ are arbitrary real constants. The second model is $$H = J \sum_{i < j} (\pm)_{ij} s_i s_j,$$ i.e. all bonds have the same magnitude, but their signs are arbitrary. For which model is it harder to find an exact or approximate ground state for a fixed number of spins $N$? I could imagine arguing it either way. On the one hand, the former case of arbitrary couplings has a much larger problem state space of $\mathbb{R}^N$, vs. $\{\pm 1\}^N$ for the latter problem. There could also be some clever way to exploit the fact that all bonds have the same magnitude in the latter model to solve the problem more efficiently. This would suggest that the former model is harder to solve. On the other hand, for variable bond strengths, it's more important to satisfy the strongest bonds than the weakest bonds. This suggests a heuristic strategy of satisfying the strongest bonds first and then sequentially trying to satisfy weaker and weaker bonds without de-satisfying the stronger ones. This would suggest that the latter model is harder to solve, because that heuristic strategy doesn't work. Answer: I think it really depends on what kind of comparison you would like to have. If you want to think about the worst-case analysis, then they both are NP-hard as optimization problems, and NP-complete as decision problems. This would be the answer if I interpret your phrase "arbitrary real constants" as an adversarial setup, where we want to think about the worst possible cases.
Actually, if I want to be very precise, the real-valued version must have some kind of a precision promise, because otherwise we can have a "trivially hard" situation where you have two bonds with $J_{ij}=1.000.....0001$ vs $J_{ik}=1.000.....0002$ and you may need to look through arbitrarily many decimals to really decide the ground state. In computer science, people usually avoid this boring type of hardness by just requiring inverse-polynomial precision. You can look at local Hamiltonian problems etc. to see the inverse-polynomial "promise gap". I assume from the fact that you write "approximate ground state", you would be OK with this kind of modification. The two models are basically the Sherrington-Kirkpatrick model and the $\pm J$ model if you are drawing the real coefficients from a Gaussian distribution and from a fair coin for the $\pm 1$ case. But in this case, the problem has a natural probability distribution over instances and it would be less meaningful to think about the worst-case. In both cases, we know that the model goes through a full replica symmetry breaking (full-RSB) phase transition at some temperature. Basically this means that we don't know an efficient algorithm to sample approximate solutions with a finite ratio tolerance level corresponding to that temperature or below. If you further go into the second direction, there is folklore like "full-RSB phases are somewhat easier than the 1-step RSB phase" etc., of which I personally am not fully convinced. There is other folklore like "the more continuous the model is, the relatively easier it becomes", which does make sense, because you always have at least some epsilon move that doesn't change the energy that much. I don't know much about the heuristics. It is likely that the performance of heuristics also depends a lot on the details of the setup.
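For building intuition on small instances, either model's ground state can be found by brute force (this sketch is a supplement, not from the answer; it is exponential in $N$ and only illustrates the problem statement, for real-valued or $\pm J$ couplings alike):

```python
import itertools

def ground_state(J):
    """Exhaustively minimize H = sum_{i<j} J[i][j] * s_i * s_j over all
    2^N spin configurations s_i = +/-1. Feasible only for small N."""
    n = len(J)
    best_energy, best_spins = float("inf"), None
    for spins in itertools.product((-1, 1), repeat=n):
        # Sum only over the upper triangle i < j, matching the Hamiltonian.
        energy = sum(J[i][j] * spins[i] * spins[j]
                     for i in range(n) for j in range(i + 1, n))
        if energy < best_energy:
            best_energy, best_spins = energy, spins
    return best_energy, best_spins
```

A fully ferromagnetic instance (all $J_{ij} = -1$) has the trivially aligned ground state, while an all-antiferromagnetic triangle is frustrated: one bond must stay unsatisfied no matter what, which is the microscopic seed of the glassy hardness discussed above.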
{ "domain": "physics.stackexchange", "id": 91118, "tags": "statistical-mechanics, spin-models" }
How to find the force needed?
Question: To keep the system in equilibrium, the Fg should equal FBuoyant + FMagnetic. However, I'm having trouble seeing how the FMagnetic would push the tube upwards. We're using conventional current, and the current is flowing clockwise, so when I use the right hand rule, I see that the force is pointing downwards. Can anyone point out what I'm missing? Answer: You are completely right, the magnetic force is downwards. The thing is that you want this force to be exactly the difference between the weight and the buoyant force of the wire so that it won't sink or float from its current position. If the force were upwards then the problem would make no sense, as the buoyancy is greater than the weight!
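Written out with up taken as positive (and assuming the standard $F = BIL$ force on a straight current-carrying segment of length $L$, since the exact geometry is not given in the post), the balance with a downward magnetic force is:

```latex
% Force balance, up positive: buoyancy supports both weight and magnetic force
F_B = F_g + F_M
\quad\Longrightarrow\quad
\rho_{\text{fluid}} V g = m g + B I L
% so the required current would be I = (\rho_{\text{fluid}} V - m) g / (B L)
```

This makes the answer's point explicit: the equation only has a positive solution for $I$ when the buoyant force exceeds the weight.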
{ "domain": "physics.stackexchange", "id": 86748, "tags": "homework-and-exercises, magnetic-fields, electric-current, electromagnetic-induction, equilibrium" }
Class Decorator for Verifying Method-Level Permissions
Question: After writing far too many if has_permission(user): statements at the head of my methods, I figured I would try my hand at writing a generic enough decorator to do it for me. This is what I created:

    from inspect import signature

    def permissions(callback, **perm_kwargs):
        '''This class decorator is used to define permission levels for
        individual methods of the decorated class. Attempting to call a
        method named by this decorator will first invoke the given
        callback. If the callback returns True, the user is authorized
        and the method is called normally. If it returns False, a
        permission error is raised instead and the method is not called.'''
        def wrap(wrapped_class):
            class PermissionsWrapper:
                def __init__(self, *args, **kwargs):
                    self.wrapped = wrapped_class(*args, **kwargs)
                    self.permission_levels = {}
                    for name, level in perm_kwargs.items():
                        self.wrap_method(name, level)

                def __getattr__(self, attr_name):
                    attr = getattr(self.wrapped, attr_name)
                    if attr_name not in self.permission_levels:
                        return attr
                    def wrapper_func(*args, **kwargs):
                        user = PermissionsWrapper.unpack_user(attr, *args, **kwargs)
                        if callback(user, self.permission_levels[attr_name]):
                            attr(*args, **kwargs)
                        else:
                            raise RuntimeError('Permission Denied.')
                    return wrapper_func

                @staticmethod
                def unpack_user(method, *args, **kwargs):
                    try:
                        # Verify that the arguments are lexically valid.
                        bindings = signature(method).bind(*args, **kwargs)
                        return bindings.arguments['user']
                    except TypeError:
                        # The arguments are invalid. Call the method with the bad
                        # arguments so that a more descriptive exception is raised.
                        method(*args, **kwargs)

                def wrap_method(self, method_name, perm_level):
                    if not hasattr(wrapped_class, method_name):
                        error = 'No such method: {}'.format(method_name)
                        raise RuntimeError(error)
                    method = getattr(wrapped_class, method_name)
                    if not hasattr(method, '__call__'):
                        error = 'Attribute {} is not a method!'.format(method_name)
                        raise RuntimeError(error)
                    method_sig = signature(method)
                    if 'user' not in method_sig.parameters:
                        error = ('Method signature does not have a "user" argument!'
                                 ' The user argument will have its permission level'
                                 ' verified and either be allowed to use the method'
                                 ', or a permission error will be thrown.')
                        raise RuntimeError(error)
                    self.permission_levels[method_name] = perm_level
            return PermissionsWrapper
        return wrap

You could then use the decorator like this:

    def has_permission(user, required_level):
        if the_user_meets_the_required_level_of_permissions():
            return True
        else:
            return False

    @permissions(callback=has_permission,
                 destroy_the_company='ADMIN_ONLY',
                 view_profile='GUEST',
                 edit_profile='LOGGED_IN',
                 post_comment='GUEST')
    class ProfileManager:
        def edit_profile(self, user, form_data):
            pass

        def view_profile(self, user):
            pass

        def post_comment(self, user, comment_text):
            pass

        def destroy_the_company(self, user):
            drop_all_database_tables()
            overwrite_backups()

At the time of writing, it seemed obvious to me that using a decorator on the whole class would be more self-documenting and require less typing than using a separate decorator on each method. Now, I'm not so sure. Does this offer any advantages/disadvantages over a simpler per-method decorator? Are there any edge cases where this decorator will fail?

EDIT: Fixed the decorator by adding the unpack_user method and calling it in wrapper_func.
Answer: For a class more complex than this, I would think that per-method decorators are more readable and provide better documentation:

    class ProfileManager:
        @permissions("LOGGED_IN")
        def edit_profile(self, user, form_data):
            pass

        @permissions("GUEST")
        def view_profile(self, user):
            pass

        @permissions("GUEST")
        def post_comment(self, user, comment_text):
            pass

        @permissions("ADMIN")
        def destroy_the_company(self, user):
            drop_all_database_tables()
            overwrite_backups()

Implementation of that new permissions decorator could be as simple as:

    from functools import wraps
    import logging

    class AuthenticationError(RuntimeError):
        pass

    def permissions(permission_level):
        def decorator(f):
            @wraps(f)
            def wrapper(self, user, *args, **kwargs):
                if user.has_permission(permission_level):
                    return f(self, user, *args, **kwargs)
                else:
                    logging.critical("User %s tried accessing function %s without permission (needed: %s)",
                                     user.name, f.__name__, permission_level)
                    raise AuthenticationError("403: Not authorized")
            return wrapper
        return decorator

(untested)

A class decorator would make sense IMO only for setting a default permission level for all methods.
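As a sanity check, the per-method approach can be wired up with a minimal stand-in user model (the User class and its set-of-granted-levels has_permission are illustrative assumptions, not part of the original answer):

```python
from functools import wraps

class AuthenticationError(RuntimeError):
    pass

def permissions(permission_level):
    """Per-method decorator: verify the user before running the method."""
    def decorator(f):
        @wraps(f)
        def wrapper(self, user, *args, **kwargs):
            if user.has_permission(permission_level):
                return f(self, user, *args, **kwargs)
            raise AuthenticationError("403: Not authorized")
        return wrapper
    return decorator

class User:
    # Hypothetical user model: a name plus a set of granted levels.
    def __init__(self, name, levels):
        self.name = name
        self.levels = set(levels)

    def has_permission(self, level):
        return level in self.levels

class ProfileManager:
    @permissions("GUEST")
    def view_profile(self, user):
        return "profile of " + user.name

    @permissions("ADMIN")
    def destroy_the_company(self, user):
        return "done"
```

Because the check lives in the wrapper, the permission level is visible right next to each method signature, which is exactly the readability argument made above.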
{ "domain": "codereview.stackexchange", "id": 29309, "tags": "python, meta-programming" }