what are the top level subsets/domains of ML?
Question: I'm not really happy with the mind maps I've been able to find on Google; most of them are algorithm-based. I want to make a good one that is problem/solution-domain based. Do I have this right for my top-level nodes? Here is the general direction I am headed: https://i.stack.imgur.com/fNuwv.jpg My questions/doubts about what I have so far are: Is my starting point below generally correct? e.g. no high-level subclass is missing, and everything presented as a subclass deserves to be here? Is hybrid learning always just a combination of supervised and unsupervised? Or are there real examples of other hybrid models (e.g. 'reinforcement' and 'supervised', etc.)? I know theoretically we can combine any methods... I'm looking for what's real/applied/demonstrable today. Does reinforcement learning belong at this high level, or is it actually a subset of one of the others (or one I've omitted)? Machine Learning 1.1 Supervised (uses labelled data to train and validate) 1.2 Unsupervised (uses unlabeled data, or ignores labels if they are present) 1.3 Semi-supervised (uses partially labelled (mostly unlabeled) data) 1.4 Hybrid (combines a supervised method and an unsupervised method) 1.5 Reinforcement Learning (uses data from the environment) Thank you! Answer: Google (in Images!) for 'machine learning cheat sheet' to find an example like this: https://docs.microsoft.com/en-us/azure/machine-learning/studio/algorithm-cheat-sheet So you are well on your way to identifying all kinds of techniques, but you may want to turn your plot 'sideways': e.g. two-way / multi-class classification are problem (sub)domains and result in different recommended machine learning techniques.
{ "domain": "datascience.stackexchange", "id": 2136, "tags": "machine-learning" }
Molecular simulations of proteins: how good an approximation is electrostatics?
Question: This question has to do with molecular dynamics simulation of molecules, especially proteins, using software such as GROMACS. This type of software uses empirical force fields (eg AMBER) and some constraints on covalent bond length and geometry; the simulation proceeds in steps of typically a few femtoseconds and can cover tens to hundreds of nanoseconds at a time. The forces considered here are pure electrostatics; there is no magnetic or coupled electric-magnetic component. Of course since there are electric charges in the simulation (eg polar groups, molecular dipoles, ...), and they move, and moving charges produce magnetic fields which exert force on other moving charges, the simulation is going to miss some things. The question is: how good of an approximation is electrostatics in this context? This is about developing some intuition and good order of magnitude estimates, not about any kind of precision. I think the pieces of this are: How large are the charges? and How fast do they move? (due to thermal motion, or when transitioning between stable conformations) and consequently: How strong is the resulting magnetic field, and what is the ratio of electrostatic force between charges to the magnetic force between moving charges? As a biologist, the kinds of proteins I would be interested in especially as model systems are voltage-sensing domains (VSDs) of transmembrane proteins, and tubulin/microtubules. Both of these have large conformational changes between different states, and have mobile subunits with large dipoles. Looking up VSDs, it seems the distance of movement between states can be 10-40 A (1 and 2), the macrodipoles of subunits of the protein can be 20-25 Debye (3), and the motion time scale is nanoseconds to milliseconds (4 and 5). 
Answer: The magnetic field of a moving charge can be obtained from the expression of the Liénard–Wiechert potential and it can be expressed as a function of the corresponding electric field as $$ {\bf B}({\bf r},t) = \frac{{\bf n}(t_r)}{c} \times {\bf E}({\bf r},t) $$ where $t_r = t- \frac{\left|{\bf r}-{\bf r^{\prime}}\right|}{c}$ is the retarded time, ${\bf n}$ is the unit vector pointing in the direction from the source and $c$ the speed of light. It is clear from the above formula that the magnetic part of the Lorentz force on a charge moving with speed $v$ cannot be larger than $\frac{v}{c}E$, therefore the ratio between the magnetic and the electric force cannot be larger than $\frac{v}{c}$. In a thermalized system $v\sim \sqrt{k_BT/m}$, therefore $$ \frac{\text{magnetic force}}{\text{electric force}}\sim\frac{v}{c}\sim \sqrt{\frac{k_BT}{mc^2}}\ll 1 $$ Even in the case of a proton, the ratio is of the order $10^{-5}$ at room temperature. Taking into account that the effective force fields used in computer simulations have uncertainties orders of magnitude larger, and that numerical integration algorithms in turn introduce additional, larger inaccuracies, I do not see any reason to be worried about neglecting magnetic forces.
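As a quick sanity check on the quoted order of magnitude, the ratio above can be evaluated directly (a minimal sketch; the constants are standard CODATA values):

```python
import math

# Order-of-magnitude check of the answer's estimate v/c ~ sqrt(kT / m c^2)
# for a thermalized proton at room temperature.
k_B = 1.380649e-23      # Boltzmann constant, J/K
T = 300.0               # room temperature, K
m_p = 1.67262192e-27    # proton mass, kg
c = 2.99792458e8        # speed of light, m/s

ratio = math.sqrt(k_B * T / (m_p * c**2))
print(ratio)            # ~5e-6, i.e. of order 1e-5 as stated
```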
{ "domain": "physics.stackexchange", "id": 63394, "tags": "electromagnetism, electrostatics, simulations, molecular-dynamics" }
A proof that P != NP
Question: I came up with the following. Am I doing something wrong? Suppose $P=NP$. Let $A$ be an NP hard problem. Let $A'$ be the polynomial reduction of $A$. By the assumptions, the Halting problem holts for $A$. Let $T'$ be the running time of $A'$. Clearly, $T' \in O(P(n))$, with $n$ as input size for $A'$ and $P(n)$ representing the class of polynomial functions of $n$. By that, the running time of $A'$ is bounded above and since it is a polynomial reduction of $A$, the running time of $A$ should also be bounded above. By that, there exists an algorithm, namely $X():=$return $True;$ which is a solution to the Halting problem for $A$. By that, the Halting problem does not hold for $A$, which is a contradiction. So the assumption that $P=NP$ can not hold and therefore $P \neq NP$. Answer: Let $A$ be an NP hard problem The halting problem, among others, is $NP$-hard. Did you mean to pick an $NP$-complete problem instead? Let $A'$ be the polynomial reduction of $A$. What is "the polynomial reduction", where is it defined? Reduction to/from what? Do you mean that $A'$ is a polynomial algorithm that solves $A$? (Which would exist by virtue of your $P=NP$ assumption and $A$ being in $NP$.) the Halting problem holts for $A$ What does it mean for the Halting problem to holt (hold?) for $A$? The halting problem isn't parameterized on any particular program. the running time of $A$ should also be bounded above $A$ does not have a running time. It is a problem, not an algorithm. There exists an algorithm [...] is a solution to the Halting problem for $A$ For any program $P$ there is an algorithm that computes whether or not $P$ halts. Namely, the algorithm that trivially returns one of true/false. The Halting Problem doesn't state that there can be no algorithm that computes whether a particular program halts, but rather that there is no algorithm that is correct for all programs.
{ "domain": "cs.stackexchange", "id": 3988, "tags": "algorithms, halting-problem" }
Dyson Sphere Structures
Question: In a type II civilization we may need to harvest the energy of neighboring stars. Freeman Dyson gave an interesting point of view that we could invent a megastructure to harvest this type of energy. In theory, what are the possible ways to construct a Dyson Sphere? Answer: First, I'd like to point out that at this point the Dyson Sphere is purely theoretical in nature. Second, currently all plans to build a Dyson Sphere are, in the words of this website, far beyond humanity's engineering capacity. Portions of the technology involved in the Dyson Sphere have been developed, however, such as solar sails (a method of spacecraft propulsion that uses large mirrors driven by radiation pressure/solar wind from the sun) and orbiting satellites. These steps can be seen as the first small steps toward building a Dyson Sphere. Variants on the Dyson Sphere There are several concepts of the Dyson Sphere. The one closest to the original concept is called the Dyson Swarm. The Dyson Swarm is basically a bunch of solar powered satellites and/or space habitats orbiting in a dense formation around the star. The advantage of this idea is that it can be constructed incrementally. Various forms of wireless energy transfer could be used to get the energy back to Earth. The problem is that the orbits of the swarm would be incredibly complicated. The simplest orbit arrangement is the Dyson Ring, shown below: The orbits, however, can be as complicated as the picture below, and even more complex: Another conceptualization of the Dyson Sphere is the Dyson Shell, commonly depicted in science fiction. This would be a uniform solid shell of matter surrounding the star in question. This design would harness the full power of the Sun or any other star, but has several major theoretical difficulties, such as the fact that the Dyson Shell could be moved by objects such as comets, asteroids, etc., that would normally be deflected by the heliosphere (which would cease to exist).
This is only one of several major problems afflicting the concept of the Dyson Shell. An idealized diagram of the Dyson Shell is shown below: These are the two main conceptualizations of the Dyson Sphere, though others include the Dyson Net, Bubbleworld, Stellar Engine, and Dyson Bubble. Self-Replicating Robots - George Dvorsky's Idea As the Dyson Swarm has the best probability of working (though that is not necessarily a very high probability), that is the conceptualization I will be discussing. The number of spacecraft required to carry out Dyson's concept is far beyond our current abilities, and it is at the point of practical application of Dyson's concept where it all goes downhill. George Dvorsky (a Canadian futurist) suggested self-replicating robots might be able to overcome this limitation. His article, called How to build a Dyson sphere in five (relatively) easy steps, suggests that the way to carry out Dyson's concept would be to follow these five steps: Get energy Mine Mercury Get materials into orbit Make solar collectors Extract energy He says that the self-replicating robots would have to use materials from Mercury, Venus, and any available asteroids to replicate themselves and build the Dyson Sphere. To quote: ...we'll likely have to take the whole planet [Mercury] apart...we are going to have to disassemble not just Mercury, but Venus, some of the outer planets, and any nearby asteroids as well. Personally, I think this sounds absolutely ridiculous. To quote astronomer Phil Plait, Dismantling Mercury, just to start, will take 2 x 10^30 joules, or an amount of energy 100 billion times the US annual energy consumption ... [Dvorsky] kinda glosses over that point. And how long until his solar collectors gather that much energy back, and we're in the black?
Alex Knapp, a science writer for Forbes, estimates in his own article critiquing Dvorsky's idea that dismantling Mercury will take a minimum of centuries (his original estimate was 1.2 trillion years, but readers pointed out a mistake in his understanding of the problem thereby resulting in a mistake in his math). Another article by Rebecca Black, another Forbes science writer, references Knapp's calculations, saying that it will take 174 years to gain back the energy from dismantling Mercury alone. Another thing to consider with Dvorsky's idea is that removing an entire planet, let alone three or four (Dvorsky suggests using Mercury, Venus, and a few of the outer planets), is bound to have consequences. In terms of the outer planets, they act as protectors of the Earth, blocking comets, asteroids, and other space objects that might've hit Earth otherwise. An example of this is the Shoemaker-Levy 9 comet crashing into Jupiter in 1994. Dvorsky's idea seems rather out there. For now, maybe we can start by using the technology we have to harvest energy from the Sun, one probe at a time. Hope this helps! (Pictures are from the Wikipedia website linked at the very beginning of this answer.)
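Plait's energy figure can be roughly cross-checked: the gravitational binding energy of a uniform sphere, E = 3GM^2/(5R), is a crude stand-in for the energy needed to disassemble a planet (a sketch; the Mercury mass and radius are standard textbook values, and the uniform-density formula is an assumption):

```python
# Rough cross-check of Plait's "2 x 10^30 joules" figure via the
# gravitational binding energy of a uniform sphere, E = 3*G*M^2/(5*R).
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M = 3.30e23     # mass of Mercury, kg
R = 2.44e6      # radius of Mercury, m

E = 3 * G * M**2 / (5 * R)
print(E)        # ~1.8e30 J, same order as the quoted 2e30 J
```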
{ "domain": "physics.stackexchange", "id": 31988, "tags": "material-science, stellar-physics, stellar-population" }
Why having a ridge like Ambiguity Function implies lack of Doppler resolution?
Question: I was wondering why, if the AF (Ambiguity Function) has a ridge-like form, it has low Doppler resolution. Answer: If you examine a contour plot of the Ambiguity function, the lines connect equal values of the Ambiguity function. This means that if you travel along one of these lines (i.e. keeping the same value of the function) then your system will not be able to distinguish between signals that have this combination of Doppler and time delay. Note that the values for time/Doppler resolution depend on what value of the Ambiguity function you choose, i.e. which contour line you decide on. I'm saying just pick one for the purposes of illustration. If you look at the Ambiguity function for a CW pulse of 2 sec duration: You see that if you choose the lowest contour value, we would be unable to distinguish between time delays of -1.5 sec and 1.5 sec, and our Doppler resolution ranges from -0.5 Hz to 0.5 Hz. If you were to use a pulse of longer duration, the Ambiguity function would be wider (left to right) and shorter (top to bottom). This means your time resolution would be worse, but your Doppler resolution would be better. The opposite is true if you used a shorter pulse. If we examine the Ambiguity function for an LFM chirp pulse: Now we see that the ambiguity exists for a combination of Dopplers and time delays (choose one of the contours that is essentially an ellipse). Previously, for the CW pulse, across a range of time delays the ambiguity was mostly constant for the same Doppler value. The advantage of the LFM pulse over a CW pulse is that I can use the same pulse length but achieve better time resolution. You could also use a longer pulse and still achieve better time resolution. The longer pulse means you can get more energy on the target and improve the signal to noise ratio.
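The delay-resolution part of this trade-off can be checked numerically: the zero-Doppler cut of the ambiguity function is just the pulse autocorrelation, and its mainlobe width sets the delay resolution. A minimal sketch (the 50 Hz chirp bandwidth is an assumed illustrative value):

```python
import numpy as np

def zero_doppler_width(s, fs):
    """Half-magnitude width (in seconds) of the zero-Doppler cut of the
    ambiguity function, i.e. of the pulse autocorrelation magnitude."""
    r = np.abs(np.correlate(s, s, mode="full"))  # np.correlate conjugates the 2nd arg
    return np.count_nonzero(r >= r.max() / 2) / fs

fs = 1000.0                                # sample rate, Hz
T = 2.0                                    # pulse duration, s (the 2 sec CW pulse above)
t = np.arange(int(T * fs)) / fs

cw = np.ones(t.size)                       # unmodulated (CW) pulse
B = 50.0                                   # assumed chirp bandwidth, Hz (illustrative)
lfm = np.exp(1j * np.pi * (B / T) * t**2)  # linear FM chirp sweeping 0..B Hz

cw_width = zero_doppler_width(cw, fs)      # ~T: delay resolution ~ pulse length
lfm_width = zero_doppler_width(lfm, fs)    # ~1/B: much finer for the same length
print(cw_width, lfm_width)
```

Same pulse length, but the chirp's autocorrelation mainlobe is narrower by roughly the time-bandwidth product, which is exactly the "better time resolution for the same pulse length" claim.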
{ "domain": "dsp.stackexchange", "id": 7359, "tags": "radar, doppler" }
satisfying package dependencies
Question: I have a package that depends on the Willow Garage pr2_cockpit package. Is there a way to inform a user that this dependency exists if the package is not already installed (besides looking through the output of rosmake or the package manifest file and checking manually)? I tried adding "ros-diamondback-pr2-cockpit" as an external dependency in the manifest file. However, rosdep install for my package failed with the following error: rosdep install <package_name> Failed to find rosdep ros-diamondback-pr2-cockpit for package <package_name> on OS:ubuntu version:10.04 ERROR: ABORTING: Rosdeps [u'ros-diamondback-pr2-cockpit'] could not be resolved Any suggestions of a structured way to install external ROS packages would be nice. Is rosinstall the only other option? Thanks for any help. marc Originally posted by mkillpack on ROS Answers with karma: 340 on 2011-05-03 Post score: 0 Answer: At the moment, installation instructions usually consist of: install the following Debian packages: ros-diamondback-foo ros-diamondback-bar; install this rosinstall file: http://example.com/example.rosinstall In general, ROS packages are not designed to be installed via rosdep, though it does work. There are some tools in development to facilitate the installation of larger systems. However, the above system so far works in most cases. Originally posted by tfoote with karma: 58457 on 2011-05-03 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by Benoit Larochelle on 2013-01-20: The link seems broken now
{ "domain": "robotics.stackexchange", "id": 5503, "tags": "ros, rosdep, package" }
A little next combination program in C++
Question: An implementation of the next-combination function from Discrete Mathematics and Its Applications, Rosen, p. 438. How it works: Input: The size n of an integer set {1, 2, ..., n}, which is where you choose objects from. The size r of the subset of the integer set you currently have. A pointer to the subset you currently have. Output: The next subset in lexicographic order. If you want to play around, here is a link to ideone.com. #include <iostream> using std::cout; // C(n, r) and provide the current subset a of size r. bool nextCombination(int n, int r, int *a); void printArray(int *a, int n); int main() { int a[] = {1,2,3}; // So the current subset is {1,2,3} of size 3. int length = sizeof(a)/sizeof(*a); int count = 0; // The following example is C(7,3), start at {1,2,3} do { printArray(a, length); count++; } while (nextCombination(7, length, a)); cout << "Total: " << count << '\n'; // Since we start from {1,2,3}, all C(7,3) subsets are generated and counted. The answer should be 7!/(3!4!)=35 return 0; } bool nextCombination(int n, int r, int *a) { int lastNotEqualOffset = r-1; while (a[lastNotEqualOffset] == n-r+(lastNotEqualOffset+1)) { lastNotEqualOffset--; } if (lastNotEqualOffset < 0) { cout << "the end\n"; return false; } a[lastNotEqualOffset]++; for (int i = lastNotEqualOffset+1; i<r; i++) { a[i] = a[lastNotEqualOffset]+(i-lastNotEqualOffset); } return true; } void printArray(int *a, int n) { for (int i = 0; i < n; i++) { cout << a[i] << " "; } cout << '\n'; } The overview of its output: 1 2 3 1 2 4 1 2 5 1 2 6 1 2 7 1 3 4 ... 4 5 6 4 5 7 4 6 7 5 6 7 the end Total: 35 Answer: using std::cout;, while better than including the entire namespace, is still worse than simply using the std:: prefix. If typing it annoys you too much, you can just make a macro in your editor to type it for you when you press a certain key combo. Overall this looks much more like a C program than a C++ one.
Assuming your algorithm works as intended, you can take some steps to make this more C++ idiomatic: Replace the array with a vector and get rid of raw pointers. Enforce const when possible. Drop return 0 from main because the compiler will generate it for you. This refers explicitly to the return 0 at the end of main. Exiting from main by reaching the end automatically returns 0. So adding the return statement is just duplicate code. Prefer the prefix over the postfix operator. Move the logic into a class so you don't have to pass everything on every call to nextCombination. You can overload operator<< to print the current line. Eliminate magic numbers. Keeping all this in mind, the rewrite could look something like this: #include <iostream> #include <iterator> #include <vector> class Combinatorics { public: Combinatorics(std::vector<int> const& v, int const& n, int const& r) : elements{v} , n{n} , r{r} {} friend std::ostream& operator<<(std::ostream& os, Combinatorics const& obj); bool next() { int lastNotEqualOffset = r - 1; while (lastNotEqualOffset >= 0 && elements[lastNotEqualOffset] == n - r + (lastNotEqualOffset + 1)) { --lastNotEqualOffset; } if (lastNotEqualOffset < 0) { return false; } ++elements[lastNotEqualOffset]; for (int i = lastNotEqualOffset + 1; i < r; ++i) { elements[i] = elements[lastNotEqualOffset] + (i - lastNotEqualOffset); } return true; } private: std::vector<int> elements; int n; int r; }; std::ostream& operator<<(std::ostream& os, Combinatorics const& obj) { std::copy(obj.elements.begin(), obj.elements.end(), std::ostream_iterator<int>(os, " ")); return os; } int main() { constexpr int n = 7; constexpr int r = 3; int total = 0; Combinatorics combinatorics{{1, 2, 3}, n, r}; do { std::cout << combinatorics << "\n"; ++total; } while (combinatorics.next()); std::cout << total << "\n"; } You simply pass all your data into the constructor upfront and then call next until every combination has been generated.
(I kept n and r because they are used in the math context; normally you should try to avoid short variable names like that.)
{ "domain": "codereview.stackexchange", "id": 30031, "tags": "c++, combinatorics" }
Convert regex for comma separated value to CFG grammar
Question: I have the following regex that I'm trying to convert to a CFG: e|a(,a)* (e representing the empty string). Basically I want to match a comma-separated list (without a leading or trailing comma) or nothing at all. Here are some of my attempts: S = A A = e | a | a , A This doesn't work, it matches ,a (leading comma). S = A | B A = e B = a C C = , C | a | e This doesn't work either, it matches a, and aa (trailing and missing comma). Is my regex even representable as a CFG? Thanks in advance Answer: At the heart of your regex is an alternation ($\alpha|\beta$), which means that the regex matches $\alpha$ or $\beta$. That's how you should write the CFG: $$\begin{align}S\to&\;\alpha\\ |&\;\beta \end{align}$$ Here, $\alpha$ is $\varepsilon$ and $\beta$ is the subexpression $a(,a)^*$, for which we'll define the non-terminal $A$. It's easy to see what $A$ is: it starts with an $a$ and can be extended arbitrarily with $, a$: $$\begin{align}S\to&\;\varepsilon\\ |&\;A\\ A\to&\;a\\ |&\;A\;,a \end{align}$$
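The left-recursive production A → A , a turns into a simple loop in a hand-written recognizer, which makes it easy to check the grammar against the original regex. A sketch in Python (matches_cfg is a hypothetical helper name):

```python
import re

def matches_cfg(s: str) -> bool:
    """Recognizer for S -> e | A ; A -> a | A ',' a. The left recursion in A
    becomes a loop: one 'a', then any number of ',a' extensions."""
    if s == "":
        return True                       # S -> e (the empty string)
    i = 0
    if s[i] != "a":                       # A must start with 'a'
        return False
    i += 1
    while i < len(s) and s[i] == ",":     # each extension is ',' followed by 'a'
        if i + 1 >= len(s) or s[i + 1] != "a":
            return False                  # trailing comma, or ',' not followed by 'a'
        i += 2
    return i == len(s)                    # no leftover input (rejects e.g. "aa")

# Cross-check against the original regex e|a(,a)*
regex = re.compile(r"(a(,a)*)?")
for w in ["", "a", "a,a", "a,a,a", ",a", "a,", "aa"]:
    assert matches_cfg(w) == bool(regex.fullmatch(w))
```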
{ "domain": "cs.stackexchange", "id": 20843, "tags": "context-free, formal-grammars, regular-expressions" }
What does "fetching by region is not available for SAM files" mean?
Question: I am used to gzip/biopython solutions when dealing with sequencing data, but now I wish to switch to more elegant pysam. So I looked at the manual, but ran into quite bizarre troubles with the first couple of lines using my bam file import pysam samfile = pysam.AlignmentFile("3_Tms_1_mapped.bam", "rb") for read in samfile.fetch('3_Tms_b3v08_scaf000159'): print(read) samfile.close() returns ValueError: fetching by region is not available for SAM files. Well, the file is bam. I tried to google the error but the only hits I found are the lines in the source code of pysam that check if the file is bam/cram or sam, so somehow pysam thinks that my bam is a sam. How can I convince it otherwise? I have also noticed that the manual is for python 2.7, that's maybe where the problem comes from... Answer: Your 3_Tms_1_mapped.bam file, despite its filename extension, is in fact a bgzipped SAM file. You can verify this using htsfile, which is a small utility packaged with HTSlib: $ htsfile 3_Tms_1_mapped.bam 3_Tms_1_mapped.bam: SAM version 1.3 BGZF-compressed sequence data (For files that really are in BAM format, it reports BAM version 1 compressed sequence data.) So the error message is accurate in this case.
{ "domain": "bioinformatics.stackexchange", "id": 948, "tags": "python, sam, pysam" }
Problem importing collada model
Question: Recently I made a model of a tunnel in Blender that looks like this: I exported it to a Collada file, but when I load the model into Gazebo some parts of the model disappeared. I am using Gazebo 2.2.3. Can you tell me what I am doing wrong? Originally posted by jbga on Gazebo Answers with karma: 13 on 2016-10-03 Post score: 0 Answer: It looks like some of the normals are flipped. Try visualizing them in Blender and changing them where appropriate. Originally posted by chapulina with karma: 7504 on 2016-10-03 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by jbga on 2016-10-03: I flipped the normals and it worked. Thank you
{ "domain": "robotics.stackexchange", "id": 3998, "tags": "mesh, collada" }
What does this scale mean?
Question: $$\log_{10}~\mathrm{pg}/\mathrm{ml}$$ I do not understand the scale. How do I convert this to a standard concentration (without the logs)? Answer: To expand upon the comments, the y axis in this graph is simply the log of the concentration in pg/ml. To convert you use the equation $$C = 10^{y}$$ where y is the y value and C is the concentration. Note that in older literature, log was oftentimes assumed to be base e whereas nowadays we assume base 10. The figure (thankfully) clarifies this potential discrepancy. A final note: log values are unitless so the y axis has no units. The units are reported here so that you know what the appropriate units of the converted value should be.
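A minimal sketch of the conversion (log_to_conc is a hypothetical helper name):

```python
def log_to_conc(y: float) -> float:
    """Convert a log10(pg/ml) axis value back to a concentration in pg/ml."""
    return 10 ** y

print(log_to_conc(2))   # 100 pg/ml
print(log_to_conc(-1))  # 0.1 pg/ml
```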
{ "domain": "chemistry.stackexchange", "id": 522, "tags": "physical-chemistry, concentration, units" }
Choice of metric
Question: We have the metric given by a matrix $g_{\mu\nu}$, however, some textbooks define it as: $$g_{\mu\nu} = \begin{pmatrix} -1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0\\ 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 1 \end{pmatrix}$$ And others as: $$g_{\mu\nu} = \begin{pmatrix} 1 & 0 & 0 & 0\\ 0 & -1 & 0 & 0\\ 0 & 0 & -1 & 0\\ 0 & 0 & 0 & -1 \end{pmatrix}$$ Is there is a specific physical significance to either one of the choices or is it simply a convenience? Answer: There is no physical significance. Choose whichever makes your calculation easier. Of course, if you want to compute something physical like, say, the proper time, you need to put in a negative sign, or not, depending on your choice.
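To make the sign bookkeeping concrete, here is how the proper time picks up its sign in each convention (a sketch using the flat-space metric $\eta_{\mu\nu}$; the same signs carry over to a general $g_{\mu\nu}$):

```latex
% Mostly-plus convention, \eta_{\mu\nu} = \mathrm{diag}(-1,+1,+1,+1):
% timelike intervals have ds^2 < 0, so
\[ c^2\, d\tau^2 = -\,\eta_{\mu\nu}\, dx^\mu dx^\nu \]
% Mostly-minus convention, \eta_{\mu\nu} = \mathrm{diag}(+1,-1,-1,-1):
% timelike intervals have ds^2 > 0, so
\[ c^2\, d\tau^2 = +\,\eta_{\mu\nu}\, dx^\mu dx^\nu \]
% Either way, c^2 d\tau^2 = c^2 dt^2 - dx^2 - dy^2 - dz^2,
% so anything measurable is independent of the choice.
```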
{ "domain": "physics.stackexchange", "id": 65808, "tags": "general-relativity, metric-tensor, conventions" }
Fourier's law and time taken for heat transfer?
Question: Fourier's law states that "The time rate of heat transfer through a material is proportional to the negative gradient in the temperature and to the area." A mathematical description of this law is given as $$\frac{dQ}{dt}=-KA \frac{dT}{dL} \, . \tag{1}$$ where $K$ is the thermal conductivity of the substance in question, $A$ is the area of the substance normal to the direction of flow of heat current and $L$ is the lateral length of the substance. However, we may write $dQ/dt=mC(dT/dt)$, where $m$ is the mass of the substance and $C$ is the specific heat capacity of the substance, to obtain $$m C \frac{dT}{dt} = K A \frac{dT}{dL} \, . \tag{2}$$ This would allow me to cancel the $dT$ on both sides of the equation to obtain $$\frac{mC}{dt} = \frac{K A}{dL} \, , \tag{3}$$ which suggests that the time taken for the heat transfer to complete, between any two temperatures, is a constant given by $$ dt = \frac{dL m C}{KA} \, . \tag{4}$$ However, this is not at all the case. What mistakes have I made in writing the above steps? Answer: There are a few subtle things here. You've concluded that the time for 'heat transfer to complete' (for example, the time for a bar with ends initially at different temperatures to relax to a uniform temperature) doesn't depend on the overall temperature differences. This is true! If you double every temperature difference, you double the amount of heat you need to transfer, but you also double the heat transfer rate. So the total time stays the same; we say heat diffusion has a characteristic timescale. However, the method you've used is incorrect. For example, your result says that $$t \propto L$$ where $L$ is the length of the bar. However, a full analysis shows that $t \propto L^2$ instead; this is a general property of diffusion. The reason your method is incorrect is that the differentials in this problem mean something slightly different from usual.
Here, the temperature $T$ is not a single quantity $T(t)$, but a function $T(x, t)$. And the two $dT$'s in your equation can't be cancelled, because they mean different things. The $dT$ on the left, inside the time derivative, is comparing $T(x, t)$ to $T(x, t+dt)$. The $dT$ on the right, inside the space derivative, is comparing $T(x, t)$ to $T(x+dx, t)$. If $T$ were just a function $T(t)$, then canceling the $dT$'s would have been okay, because $dT$ can only possibly be comparing $T(t)$ and $T(t+dt)$. But the heat equation is more complicated than that.
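The t ∝ L² scaling is easy to see numerically: relax a hot/cold step profile on bars of length L and 2L and compare the relaxation times. A minimal sketch using an explicit finite-difference scheme (grid sizes, time step, and tolerance are illustrative choices):

```python
import numpy as np

def relaxation_steps(n_cells, alpha=1.0, dx=1.0, dt=0.2, tol=0.01):
    """Steps for a 1D insulated bar (explicit FTCS diffusion scheme) to relax
    a hot/cold step profile to within tol of a uniform temperature."""
    u = np.where(np.arange(n_cells) < n_cells // 2, 0.5, -0.5)
    spread0 = u.max() - u.min()
    steps = 0
    while u.max() - u.min() > tol * spread0:
        up = np.pad(u, 1, mode="edge")  # insulated (zero-flux) ends
        u = u + alpha * dt / dx**2 * (up[2:] - 2 * u + up[:-2])
        steps += 1
    return steps

t1 = relaxation_steps(20)
t2 = relaxation_steps(40)   # bar twice as long, same grid spacing
print(t2 / t1)              # close to 4: diffusion time scales as L^2, not L
```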
{ "domain": "physics.stackexchange", "id": 33785, "tags": "thermodynamics, thermal-conductivity" }
Problem setting up hector_slam (no map), please help
Question: Hello, I have been trying to make hector slam work on my turtlebot with a hokuyo 04lx laser. I am using ROS fuerte, ubuntu 12.04. Steps I followed: Did rosmake to hector_slam package with no errors. I already have a urdf of my turtlebot with laser installed. (I can use gmapping) I did try the hector_turtlebot gazebo tutorials successfully. I launched the files in this order. turtlebot.launch file hokuyo.launch file hector_slam_launch tutorial.launch The mapping_default.launch file is like this: <?xml version="1.0"?> <launch> <arg name="tf_map_scanmatch_transform_frame_name" default="scanmatcher_frame"/> <arg name="base_frame" default="base_footprint"/> <arg name="odom_frame" default="nav"/> <arg name="pub_map_odom_transform" default="true"/> <arg name="scan_subscriber_queue_size" default="5"/> <arg name="scan_topic" default="scan"/> <arg name="map_size" default="2048"/> <node pkg="hector_mapping" type="hector_mapping" name="hector_mapping" output="screen"> <!-- Frame names --> <param name="map_frame" value="map" /> <param name="base_frame" value="base_link" /> <param name="odom_frame" value="odom" /> <!-- Tf use --> <param name="use_tf_scan_transformation" value="true"/> <param name="use_tf_pose_start_estimate" value="false"/> <param name="pub_map_odom_transform" value="true"/> <!-- Map size / start point --> <param name="map_resolution" value="0.050"/> <param name="map_size" value="$(arg map_size)"/> <param name="map_start_x" value="0.5"/> <param name="map_start_y" value="0.5" /> <param name="map_multi_res_levels" value="2" /> <!-- Map update parameters --> <param name="update_factor_free" value="0.4"/> <param name="update_factor_occupied" value="0.9" /> <param name="map_update_distance_thresh" value="0.4"/> <param name="map_update_angle_thresh" value="0.06" /> <param name="laser_z_min_value" value = "-1.0" /> <param name="laser_z_max_value" value = "1.0" /> <!-- Advertising config --> <param name="advertise_map_service" value="true"/> <param 
name="scan_subscriber_queue_size" value="5"/> <param name="scan_topic" value="hokuyo_scan"/> <!-- Debug parameters --> <!-- <param name="output_timing" value="false"/> <param name="pub_drawings" value="true"/> <param name="pub_debug_output" value="true"/> --> <param name="tf_map_scanmatch_transform_frame_name" value="$(arg tf_map_scanmatch_transform_frame_name)" /> </node> <!--<node pkg="tf" type="static_transform_publisher" name="map_nav_broadcaster" args="0 0 0 0 0 0 map nav 100"/>--> </launch> I can see the laser scan and robot model on rviz. But the fixed frame and target from gives error on /map topic. If I change both of them to /odom the laser scans are shown. Also the Map field gives error. I cannot see the map being built. Also on the terminal in the tutorial.launch file I get strange errors like this: SUMMARY ======== PARAMETERS * /hector_geotiff_node/draw_background_checkerboard * /hector_geotiff_node/draw_free_space_grid * /hector_geotiff_node/geotiff_save_period * /hector_geotiff_node/map_file_base_name * /hector_geotiff_node/map_file_path * /hector_mapping/advertise_map_service * /hector_mapping/base_frame * /hector_mapping/laser_z_max_value * /hector_mapping/laser_z_min_value * /hector_mapping/map_frame * /hector_mapping/map_multi_res_levels * /hector_mapping/map_resolution * /hector_mapping/map_size * /hector_mapping/map_start_x * /hector_mapping/map_start_y * /hector_mapping/map_update_angle_thresh * /hector_mapping/map_update_distance_thresh * /hector_mapping/odom_frame * /hector_mapping/pub_map_odom_transform * /hector_mapping/scan_subscriber_queue_size * /hector_mapping/scan_topic * /hector_mapping/tf_map_scanmatch_transform_frame_name * /hector_mapping/update_factor_free * /hector_mapping/update_factor_occupied * /hector_mapping/use_tf_pose_start_estimate * /hector_mapping/use_tf_scan_transformation * /hector_trajectory_server/source_frame_name * /hector_trajectory_server/target_frame_name * /hector_trajectory_server/trajectory_publish_rate * 
/hector_trajectory_server/trajectory_update_rate * /rosdistro * /rosversion * /use_sim_time NODES / hector_geotiff_node (hector_geotiff/geotiff_node) hector_mapping (hector_mapping/hector_mapping) hector_trajectory_server (hector_trajectory_server/hector_trajectory_server) rviz (rviz/rviz) ROS_MASTER_URI=http://192.168.111.21:11311 core service [/rosout] found process[rviz-1]: started with pid [16501] process[hector_mapping-2]: started with pid [16502] process[hector_trajectory_server-3]: started with pid [16563] HectorSM map lvl 0: cellLength: 0.05 res x:2048 res y: 2048 HectorSM map lvl 1: cellLength: 0.1 res x:1024 res y: 1024 [ INFO] [1385693168.936804322]: HectorSM p_base_frame_: base_link [ INFO] [1385693168.936939784]: HectorSM p_map_frame_: map [ INFO] [1385693168.937028939]: HectorSM p_odom_frame_: odom [ INFO] [1385693168.937128850]: HectorSM p_scan_topic_: hokuyo_scan [ INFO] [1385693168.937212180]: HectorSM p_use_tf_scan_transformation_: true [ INFO] [1385693168.937281717]: HectorSM p_pub_map_odom_transform_: true [ INFO] [1385693168.937360377]: HectorSM p_scan_subscriber_queue_size_: 5 [ INFO] [1385693168.937426493]: HectorSM p_map_pub_period_: 2.000000 [ INFO] [1385693168.937492279]: HectorSM p_update_factor_free_: 0.400000 [ INFO] [1385693168.937555392]: HectorSM p_update_factor_occupied_: 0.900000 [ INFO] [1385693168.937628583]: HectorSM p_map_update_distance_threshold_: 0.400000 [ INFO] [1385693168.937708414]: HectorSM p_map_update_angle_threshold_: 0.060000 [ INFO] [1385693168.937776160]: HectorSM p_laser_z_min_value_: -1.000000 [ INFO] [1385693168.937842333]: HectorSM p_laser_z_max_value_: 1.000000 process[hector_geotiff_node-4]: started with pid [16620] [ INFO] [1385693170.824508362]: No plugins loaded for geotiff node [ INFO] [1385693170.824670229]: Geotiff node started Got bus address: "unix:abstract=/tmp/dbus-2hLnz9fcNf,guid=bb9cbab66e8849381861c23a00000019" Connected to accessibility bus at: 
"unix:abstract=/tmp/dbus-2hLnz9fcNf,guid=bb9cbab66e8849381861c23a00000019" Registered DEC: true Registered event listener change listener: true QSpiAccessible::accessibleEvent not handled: "8008" obj: QObject(0x0) " invalid interface!" QSpiAccessible::accessibleEvent not handled: "8008" obj: QComboBoxListView(0x1fd6e50) "" [the two messages above repeat many dozens of times] FIXME: handle dialog start. FIXME: handle dialog end. I do not understand what this is. My TF tree and RViz screenshot are attached here.
TF tree (https://www.dropbox.com/s/xr6ziffncbh0ptd/frames.pdf) RViz-Screenshot from 2013-11-29 11:31:31.png Can someone help me point out the errors and how to solve them? Also, how do I resolve the errors I am getting in the terminal? Thanks in advance! Ankit Originally posted by Vegeta on ROS Answers with karma: 340 on 2013-11-28 Post score: 0 Answer: This sounds like the hector_mapping node isn't doing anything, which could be the case if the wrong scan topic is specified (e.g. hokuyo_node publishes to "/scan" but hector_mapping subscribes to "/hokuyo_scan"). Can you verify you really publish the scan to the "hokuyo_scan" topic? Originally posted by Stefan Kohlbrecher with karma: 24361 on 2015-01-20 This answer was ACCEPTED on the original site Post score: 1
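If the driver really publishes on /scan rather than /hokuyo_scan, there are two common fixes in the launch file. The snippet below is a sketch, not the original tutorial.launch: the topic name "scan" is an assumption based on the default hokuyo_node behavior mentioned in the answer.

```xml
<!-- Option 1: point hector_mapping's scan_topic parameter at the topic
     the laser driver actually publishes (here assumed to be "scan"). -->
<param name="scan_topic" value="scan"/>

<!-- Option 2: keep scan_topic as "hokuyo_scan" and remap it inside the
     hector_mapping <node> element instead. -->
<remap from="hokuyo_scan" to="scan"/>
```

Either way, `rostopic list` and `rostopic info <topic>` will show which topic the driver publishes and who is subscribed to it.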
{ "domain": "robotics.stackexchange", "id": 16296, "tags": "slam, navigation, turtlebot, ros-fuerte, hector-slam" }
Altering input field width
Question: This gets every form element and calculates its width, then changes the input fields depending on the parent-form width: $ -> # Alter 'input' Width inputResize = -> $('form').each -> tF = $(this) formWidth = tF.width() tF.find('input').each -> tE = $(this) inputBorder = (tE.outerWidth() - tE.innerWidth()) inputPadding = parseInt(tE.css('padding-left')) + parseInt(tE.css('padding-right')) tE.css 'width', -> (formWidth - inputBorder - inputPadding) # on Resize $(window).resize -> inputResize() # on Init inputResize() It works as intended, but to learn a bit more I just want to know if there is a smarter way to write this little code in CoffeeScript. Answer: Here are a few tips: 1) Use standard naming conventions. tE and tF sound like booleans and not a reference to an element. Try names like el, $this or that instead. 2) Separate complex logic into smaller functions. There's too much going on here. inputResize = -> $('form').each -> tF = $(this) formWidth = tF.width() tF.find('input').each -> tE = $(this) inputBorder = (tE.outerWidth() - tE.innerWidth()) inputPadding = parseInt(tE.css('padding-left')) + parseInt(tE.css('padding-right')) tE.css 'width', -> (formWidth - inputBorder - inputPadding) Since tF is only used once, you can get rid of it and use $(this) directly. Next, extract the input width size calculation to one function and you get something like this.
getNewInputSize = (el, formWidth) -> inputBorder = el.outerWidth() - el.innerWidth() inputPadding = parseInt( el.css("padding-left"), 10) + parseInt(el.css("padding-right"), 10) formWidth - inputBorder - inputPadding resizeFormInputs = -> $("form").each -> formWidth = $(this).width() $(this).find("input").each -> $(this).css "width", getNewInputSize($(this), formWidth) Final Code: $ -> getNewInputSize = (el, formWidth) -> inputBorder = el.outerWidth() - el.innerWidth() inputPadding = parseInt( el.css("padding-left"), 10) + parseInt(el.css("padding-right"), 10) formWidth - inputBorder - inputPadding resizeFormInputs = -> $("form").each -> formWidth = $(this).width() $(this).find("input").each -> $(this).css "width", getNewInputSize($(this), formWidth) $(window).resize(resizeFormInputs).triggerHandler "resize"
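The width calculation itself is plain arithmetic, independent of jQuery. A minimal sketch in Python (hypothetical helper name) that mirrors getNewInputSize, using the fact that in jQuery terms outerWidth minus innerWidth is the total horizontal border:

```python
def content_width(form_width, outer_width, inner_width, pad_left, pad_right):
    """Width available for an input's content box: the parent form's width
    minus the input's border (outerWidth - innerWidth in jQuery terms)
    and its horizontal padding."""
    border = outer_width - inner_width
    return form_width - border - (pad_left + pad_right)

# A 300px-wide form and an input with 2px borders (outer 104, inner 100)
# and 8px padding on each side leaves 280px for the content box.
print(content_width(300, 104, 100, 8, 8))  # 280
```

Note that a CSS-only route (box-sizing: border-box plus width: 100%) can often remove the need for this JavaScript entirely.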
{ "domain": "codereview.stackexchange", "id": 2445, "tags": "coffeescript" }
Why are stars more metallic as you move closer to the galactic bulge?
Question: As I see it, most of the stars in the galactic bulge are Population I stars. However, as one moves farther from the galactic bulge, star metallicity decreases. In fact, halo stars are almost entirely Population II stars. Why is this? Answer: It has to do with the formation of the Milky Way. At the beginning, the Milky Way was much more spherical than it is now - perhaps closer to what an elliptical galaxy is like than a spiral galaxy. Population III stars would have formed first, then quickly died out. Next came Population II stars. They formed when the galaxy was still somewhat spherical, and so they tend to inhabit the galactic spheroid/halo. Eventually, the rotation of the Milky Way flattened out much of the remaining gas and dust, and some of the stars. When younger stars formed, they formed in the flatter disk, nearer to the center. The disk itself became smaller than the spheroid/halo. Thus, the younger Population I stars are found in the galactic disk, and are closer in. No more stars will form in globular clusters; they are relatively dust-free and contain old, Population II stars. The same is true for the galactic halo. Source: Populations & Components of the Milky Way The galactic bulge itself contains several populations of stars. Some may have come from the halo and thick disk (thus being metal-poor) while others may have formed together more recently from the thin disk itself (thus being metal-rich).
{ "domain": "astronomy.stackexchange", "id": 1353, "tags": "star, galaxy, galactic-dynamics, galaxy-center, metal" }
Is the Stirling approximation fundamental to the derivation of the Boltzmann distribution?
Question: In the derivations of the Boltzmann distribution I've encountered, the Stirling approximation seems to play a pivotal role. From these derivations, I've gleaned that the Stirling formula might be intrinsically linked to the emergence of the Boltzmann factor $$ \exp \left(-{\frac {\varepsilon _{i}}{kT}}\right)$$ in statistical mechanics. Have I interpreted this correctly? Furthermore, is there a way to derive the Boltzmann distribution without relying on the Stirling approximation? The Stirling approximation is: $$n!\sim {\sqrt {2\pi n}}\left({\frac {n}{e}}\right)^{n}$$ or equivalently: $$\log _{2}(n!)=n\log _{2}n-n\log _{2}e+O(\log _{2}n)$$ Answer: The derivation I use for the Boltzmann distribution only requires students to have accepted the (sort of hand-wavy) argument that the thermodynamic temperature can be reasonably defined as, $$ \frac{\partial E}{\partial S} = T \implies \frac{\partial S}{\partial E} = \frac{1}{T}$$ We will label a macrostate $A$ and denote the energy of the system and surroundings in this macrostate with $E_{1A}$ and $E_{2A}$, respectively, and the entropies likewise. $B$ will be the label of another macrostate with the same conventions. We can make progress by exploiting the fact that the energy of the system is far smaller than the total energy or the energy of the surroundings.
This lets us use the definition of a (partial) derivative to write, \begin{align*} \frac{\partial S_2}{\partial E} &\approx \frac{S_2(E) - S_2(E-E_{1A})}{E_{1A}} \\ \implies S_{2A} &\approx S_2 (E-E_{1A}) = S_2 (E) - \left( \frac{\partial S_2}{\partial E} \right) E_{1A} \\ \frac{\partial S_2}{\partial E} &\approx \frac{S_2(E) - S_2(E-E_{1B})}{E_{1B}} \\ \implies S_{2B} &\approx S_2(E-E_{1B}) = S_2 (E) - \left( \frac{\partial S_2}{\partial E} \right) E_{1B} \end{align*} where we have noted that $S_{2A} = S_2 (E-E_{1A})$ and $S_{2B} = S_2 (E-E_{1B})$ are the entropy of the surroundings subject to the system having energy $E_{1A}$ and $E_{1B}$, respectively, and have used $E_{1A}$ as our "infinitesimal." Then, using the entropic definition of temperature, we obtain, \begin{gather*} S_{2A} = S_2 (E) - \frac{E_{1A}}{T} \\ S_{2B} = S_2 (E) - \frac{E_{1B}}{T} \end{gather*} where we have used the fact that the system and surroundings have the same temperature $T$. Finally, we note that the ratio of the probabilities of occupying two different macrostates should be the ratio of the number of different microstates that can make them up. We label the multiplicity of each macrostate as $W_A$ or $W_B$. 
By rearranging the Boltzmann definition of entropy, this gives, \begin{align*} \frac{p_A}{p_B} = \frac{W_A}{W_B} &= \frac{W_{1A} W_{2A}}{W_{1B} W_{2B}} \\ &= \frac{W_{1A}e^{\frac{S_{2A}}{k_B}}}{W_{1B}e^{\frac{S_{2B}}{k_B}}} \\ &= \frac{W_{1A}e^{\frac{S_2(E)}{k_B} -\frac{E_{1A}}{k_BT}}}{W_{1B}e^{\frac{S_2(E)}{k_B} -\frac{E_{1B}}{k_BT}}} \\ &= \frac{W_{1A}e^{-\frac{E_{1A}}{k_BT}}}{W_{1B}e^{-\frac{E_{1B}}{k_BT}}} \end{align*} We can use the relative ratio of probabilities for a system to occupy macrostates with energies $E_A$ and $E_B$ to find the absolute probability of occupying the state $B$, \begin{align*} \frac{p_A}{p_B} &= \frac{W_{1A} e^{-\frac{E_{1A}}{k_BT}}}{W_{1B} e^{-\frac{E_{1B}}{k_BT}}} \\ \implies p_A &= \frac{W_{1A}}{W_{1B}} p_B e^{ - \frac{E_{1A} - E_{1B}}{k_BT}} \\ \implies \sum_{A} p_A &= \sum_A p_B \frac{W_{1A}}{W_{1B}} e^{ - \frac{E_{1A} - E_{1B}}{k_BT}} \\ \implies 1 &= \frac{p_B}{W_{1B}} e^{\frac{E_{1B}}{k_BT}} \sum_A W_{1A} e^{- \frac{E_{1A}}{k_BT}} \\ \implies p_B &= \left[ \sum_A W_{1A} e^{- \frac{E_{1A}}{k_BT}} \right]^{-1} W_{1B} e^{-\frac{E_{1B}}{k_BT}} \\ &= \frac{W_{1B}}{Z} e^{-\frac{E_{1B}}{k_BT}} \\ Z &= \sum_A W_{1A} e^{- \frac{E_{1A}}{k_BT}} \end{align*} where we have used the fact that the sum of the probabilities of every possible macrostate is 1, or 100%, by definition. The second to last line gives the Boltzmann factor and the final line gives the canonical partition function.
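The final result can be sanity-checked numerically. The sketch below builds the partition function for a two-level system with unit multiplicities; the energy values are illustrative assumptions, not taken from the answer:

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K
T = 300.0           # temperature, K

# Two macrostates with W = 1 and illustrative energies (assumptions).
E = [0.0, 2.0e-21]  # J
Z = sum(math.exp(-e / (k_B * T)) for e in E)
p = [math.exp(-e / (k_B * T)) / Z for e in E]

# Probabilities sum to 1, and their ratio is the Boltzmann factor.
assert abs(sum(p) - 1.0) < 1e-12
assert abs(p[1] / p[0] - math.exp(-(E[1] - E[0]) / (k_B * T))) < 1e-12
print(p)
```

No Stirling approximation appears anywhere in this route; Stirling only enters when one derives the same distribution by maximizing a multinomial multiplicity, which answers the original question.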
{ "domain": "physics.stackexchange", "id": 97196, "tags": "thermodynamics, statistical-mechanics, probability" }
What is the difference between melting and dissolving?
Question: What is the difference between melting and dissolving? I am looking for some general features. The answer should be adaptable to the melting/dissolving of an ice cube (water) in a glass of pure alcohol (ethanol) just below (or at) the melting point of ice, or similar phenomena. I am now assuming that the ice is dissolving and melting at the same time. In other words, which reaction energy is higher in the following reactions: $$\begin{align} \ce{H_2O(s) &\to H_2O(l)\\ H_2O(s) + n EtOH(l) &\to H_2O \centerdot (EtOH)_{n}} \end{align}$$ Or, are there substances that release more energy in dissolution than they consume in melting? Answer: Juha invited me to write a summary (see comment on my previous answer ) of the differences between melting and dissolving. I'll try to outline this roughly in the same order as his. I'm giving this as a new answer since my last answer was quite long as it was. Differences: Melting and dissolving are completely different processes on the molecular/atomic level that could not be mistaken for each other if you could observe what was happening at that scale. If you cool the liquid that arose from melting the solid, to a temperature below its melting point, you would see the entire sample solidify. If you cool the solution (dissolved solid + solvent) to below the melting point of the solid (solute) in it, you would see no change. (Unless you had a saturated or nearly saturated solution.) Melting requires only a single substance and energy input while dissolving requires a solvent and a solute that are compatible ("like forces"). (This is actually a pretty huge difference and would potentially affect all of the physical and chemical properties.) Dissolving a solid can be either endo- or exo-thermic. "Phase change" (wording from the previous summary) In each case you end up with a liquid.
Melting caused a phase change (composition of the substance didn't change) while dissolving a solid in a liquid is not considered a phase change since a change in composition occurred. Similarities: In each case, forces between the particles that comprise the solid are disrupted and that takes energy. (Whether it's chemical bonds or intermolecular forces depends on the process and on the solid and on your definitions. (See this question.) But melting (rare exception noted in previous comments) is endothermic and dissolving can be either endo- or exo-thermic. In each case you end up with a liquid. Macroscopically, if you walked into a room and saw the liquid on the table, it would be difficult to say whether this liquid came from a solid that had melted or a solid that had dissolved in a solvent and made a solution. But it would be very easy to determine which you had experimentally in a "dozen" different ways. Both melting and dissolving require interaction among groups of atoms, molecules, or ions. There are probably more differences that could be given (how to handle thermodynamic calculations, complexity of the system etc.), and possibly more similarities, but that's enough for me on this topic.
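The sign question at the end of the original post (can dissolution release more energy than melting consumes?) is just an energy balance. A hedged sketch follows: the solvation enthalpy is a free parameter here, not a measured value; the only standard number used is the enthalpy of fusion of ice, about 6.01 kJ/mol.

```python
H_FUS_ICE = 6.01  # kJ/mol, standard enthalpy of fusion of ice

def net_dissolution_enthalpy(h_fus, h_solv):
    """Net enthalpy for solid -> solvated species: breaking up the solid
    (endothermic, h_fus > 0) plus solvation (h_solv, often negative)."""
    return h_fus + h_solv

def overall_exothermic(h_fus, h_solv):
    """True when solvation releases more energy than the solid costs to break."""
    return net_dissolution_enthalpy(h_fus, h_solv) < 0

# Solvation must release more than 6.01 kJ/mol to make dissolving ice
# exothermic overall (the illustrative -3 and -10 are assumptions).
print(overall_exothermic(H_FUS_ICE, -3.0))   # False
print(overall_exothermic(H_FUS_ICE, -10.0))  # True
```

This is the same bookkeeping behind the common observation that some salts dissolve exothermically (strong solvation) and others endothermically (strong lattice energy).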
{ "domain": "chemistry.stackexchange", "id": 45, "tags": "solvents, solubility, melting-point, phase" }
What is the geothermal potential of a volcano?
Question: If there were lots of geothermal plants—even mobile ones—near a volcano, how much power could this provide? Could the sapping of some of the heat energy make the volcano less likely to erupt? Answer: The question is a little off the mark, as volcanoes are just the surface expression of a much larger system. A more general idea is to target shallow magma chambers, and the hydrothermal fluids they act upon. This is being done all over the world, in tectonically active areas where there is a lot of molten rock at reasonably shallow depths. The Iceland Deep Drilling Project has accidentally created the first magma-EGS system. Read more: pangea.stanford.edu/ERE/db/WGC/papers/WGC/2015/37001.pdf
{ "domain": "earthscience.stackexchange", "id": 465, "tags": "geophysics, volcanology, geochemistry, petrology, geothermal-heat" }
Will a computer fan rotate slower if one makes the impeller heavier?
Question: Imagine that we replace a plastic fan impeller with an identical metal impeller (same form, heavier weight). Would the maximum fan speed decrease? Common sense suggests that air resistance and bearing friction do not change, but intuitively I would expect the maximum fan speed to decrease. EDIT: As far as I know, computer fans usually use brushless motors, if that matters. Answer: Yes, the rotation speed will decrease. The reason is that the fan gets a constant power (energy per second) from the system, which translates to a constant torque that rotates the fan against air resistance (and such). Greater mass density of the fan means a greater value of the relevant inertia-tensor component of the fan. This means it takes a greater force to give it the same amount of angular momentum. Put the other way around: for the same amount of force (which can be translated to the power mentioned before) it will spin slower.
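The "same torque, slower spin-up" part of this answer can be made quantitative: for rigid rotation, angular acceleration is alpha = tau / I, and for a fixed impeller shape I scales with mass. The torque and masses below are illustrative assumptions, not measured fan data:

```python
def angular_acceleration(torque, inertia):
    """alpha = tau / I for rigid-body rotation about a fixed axis."""
    return torque / inertia

def disc_inertia(mass, radius):
    """Moment of inertia of a uniform disc about its axis: I = m r^2 / 2
    (a crude stand-in for an impeller of fixed shape)."""
    return 0.5 * mass * radius**2

TAU = 1.0e-3  # N*m, illustrative motor torque
R = 0.04      # m, illustrative impeller radius

alpha_plastic = angular_acceleration(TAU, disc_inertia(0.010, R))  # 10 g rotor
alpha_metal = angular_acceleration(TAU, disc_inertia(0.050, R))    # 50 g rotor

# Same torque, 5x the mass -> roughly 1/5 the angular acceleration.
assert alpha_metal < alpha_plastic
print(alpha_plastic / alpha_metal)
```

Note this directly demonstrates slower spin-up; whether the steady-state speed also drops depends on the torque-speed balance with drag, which is the part the answer argues from constant power.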
{ "domain": "physics.stackexchange", "id": 23652, "tags": "rotational-dynamics, aerodynamics, drag, fan" }
JWT authentication between Django and ReactJS
Question: I am currently using Django (2.1) to build an API, and I have added djangorestframework-jwt to manage JWT. Here is the configuration: JWT_AUTH = { 'JWT_SECRET_KEY': SECRET_KEY, 'JWT_VERIFY': True, 'JWT_VERIFY_EXPIRATION': True, 'JWT_EXPIRATION_DELTA': datetime.timedelta(days=14), 'JWT_ALLOW_REFRESH': True, 'JWT_REFRESH_EXPIRATION_DELTA': datetime.timedelta(days=7), 'JWT_AUTH_HEADER_PREFIX': 'Bearer', } and the endpoints: urlpatterns = [ path('auth/get-token/', obtain_jwt_token), path('auth/refresh-token/', refresh_jwt_token), ] The client is built with ReactJS. I use an axios instance as a client to communicate with the API. This instance is created that way : import axios from 'axios' import jwt_decode from 'jwt-decode' // eslint-disable-line import { signOut } from '../actions/authActions' const signOutOn401 = (statusCode) => { if (statusCode === 401) { signOut() window.location = '/signin' } } const client = axios.create({ baseURL: process.env.API_URL, headers: {'Authorization': ''} }) /* * This interceptor is used for: * - adding Authorization header if JWT available * - refreshing JWT to keep user authenticated */ client.interceptors.request.use((config) => { if (window.localStorage.getItem('token')) { let token = window.localStorage.getItem('token') // Calculate time difference in days // between now and token expiration date const t = ((jwt_decode(token).exp * 1000) - Date.now()) / 1000 / 60 / 60 / 24 // Refresh the token if the time difference is // smaller than 13 days (original token is valid 14 days) if (t < 13) { axios.post(`${process.env.API_URL}/auth/refresh-token/`, { token: token }) .then(({data}) => { token = data.token }) .catch((error) => { signOutOn401(error.response.status) return error }) } config.headers['Authorization'] = `Bearer ${token}` } return config }) /* * This interceptor is used for: * - disconnect user if JWT is expired or revoked */ client.interceptors.response.use( (response) => { return response }, (error) => { 
signOutOn401(error.response.status) return error } ) export default client The signout action only clears the session and cleans the store: export const signOut = () => { window.localStorage.clear() return ({ type: SIGN_OUT, payload: { authenticated: false, user: {}, errorMessage: '' } }) } Everything seems to work fine; I would just like to know whether this implementation is correct and whether there are any security flaws :) Answer: UX concern: refresh period I wonder if you are confusing the access token expiration setting (JWT_EXPIRATION_DELTA) with the refresh token expiration (JWT_REFRESH_EXPIRATION_DELTA). In either case, your t < 13 check should be related to the refresh token expiration, not the access token expiration. 13 seems to be chosen because it is almost 14; hence my comment. At the very least these refreshes should never go beyond your refresh expiration (7 days), or they will fail, which defeats the purpose of providing a refresh token (because the user would always have to authenticate after the expiration). Security concern: access token expiration On a slightly similar note, but this one is a security concern -- you typically want the access token to have a much shorter life than the refresh token. Yours is reversed, as the access token (JWT_EXPIRATION_DELTA) is 14 days vs. the refresh token (JWT_REFRESH_EXPIRATION_DELTA) at 7 days. Note the default JWT_EXPIRATION_DELTA for the djangorestframework-jwt library is a much more conservative and standard 5 minutes: Default is datetime.timedelta(seconds=300)(5 minutes). Per OAuth 2 spec: access tokens may have a shorter lifetime and fewer permissions than authorized by the resource owner Per Auth0 guidance: Access tokens carry the necessary information to access a resource directly. ... Access tokens usually have an expiration date and are short-lived. ... Common implementations allow for direct authorization checks against an access token.
That is, when an access token is passed to a server managing a resource, the server can read the information contained in the token and decide itself whether the user is authorized or not (no checks against an authorization server are needed). This is one of the reasons tokens must be signed (using JWS, for instance). On the other hand, refresh tokens usually require a check against the authorization server. This split way of handling authorization checks allows for three things: Improved access patterns against the authorization server (lower load, faster checks) Shorter windows of access for leaked access tokens (these expire quickly, reducing the chance of a leaked token allowing access to a protected resource) ... Security concern: refresh token revocation It is difficult for me to tell whether that djangorestframework-jwt library will revoke all previous refresh tokens for a given session once a new refresh token is issued for that session. Ideally it should, especially if you are using unauthenticated clients, which it looks like you are (assuming that React app is just a public web app; if it's instead packaged in a native client you may and probably should be authenticating that client). You may want to verify that behavior of the library with your own testing. If the library does not revoke previous refresh tokens, then you should probably mitigate the risk of having so many outstanding valid refresh tokens per session (~= expiration time divided by refresh period, or currently 7 / 1 = 7 valid tokens!!!) by making the refresh period much closer to the refresh token expiration. So if you keep the refresh expiration at 7 days, then only refreshing at 6 days would make sense. Then you would typically have 1 or at most 2 outstanding valid refresh tokens for a session. 
However, if given a choice, it is much more secure to have automatically revoked refresh tokens and shorter refresh periods; that way a compromised refresh token is much more likely to be invalid -- and you still wouldn't have to lose the benefit of the authorized user having a long refresh expiration. (The whole point of the long expiration is for situations such as an unattended laptop or mobile browser.) Per OAuth 2 spec: The authorization server MUST verify the binding between the refresh token and client identity whenever the client identity can be authenticated. When client authentication is not possible, the authorization server SHOULD deploy other means to detect refresh token abuse. For example, the authorization server could employ refresh token rotation in which a new refresh token is issued with every access token refresh response. The previous refresh token is invalidated but retained by the authorization server. If a refresh token is compromised and subsequently used by both the attacker and the legitimate client, one of them will present an invalidated refresh token, which will inform the authorization server of the breach. The authorization server MUST ensure that refresh tokens cannot be generated, modified, or guessed to produce valid refresh tokens by unauthorized parties.
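The review's first point, tying the refresh check to the refresh-token window rather than the access-token lifetime, boils down to a small calculation. The helper below is a language-agnostic sketch (names and the 1-day window are assumptions), mirroring the question's `t` computation:

```python
def days_until(exp_seconds, now_ms):
    """Days between `now_ms` (ms since epoch) and a JWT `exp` claim
    (seconds since epoch) -- the same arithmetic as `t` in the question."""
    return (exp_seconds * 1000 - now_ms) / 1000 / 60 / 60 / 24

def should_refresh(exp_seconds, now_ms, refresh_window_days=1.0):
    """Refresh only inside the final window before expiry, instead of
    almost always (the `t < 13` check against a 14-day lifetime)."""
    return 0 < days_until(exp_seconds, now_ms) < refresh_window_days

now_ms = 1_700_000_000_000
two_days_out = now_ms / 1000 + 2 * 24 * 3600
half_day_out = now_ms / 1000 + 12 * 3600
print(should_refresh(two_days_out, now_ms))  # False: 2 days still left
print(should_refresh(half_day_out, now_ms))  # True: inside the 1-day window
```

With a short refresh window, at most one or two refresh tokens are outstanding per session at any time, which is the property the revocation discussion above is after.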
{ "domain": "codereview.stackexchange", "id": 31920, "tags": "javascript, authentication, react.js, django, jwt" }
Surface Collision Frequency
Question: My young son is fascinated by large numbers (e.g., a googol). One question he posed to me is whether the human body over its lifetime collides with atoms/molecules a googol times (I think not, but it's interesting to do the calculation and see what the number actually, to first order, is). The construct seems simple: $$ \mbox{Total Collision Count}\ = f\times A\times t, $$ where $f$ is the (average) surface collision frequency (collisions per unit area per time), $A$ is the surface area of a body (say 1.5 square meters), and $t$ is (being generous and rounding up) 100 years. Assume 1 atmosphere of pressure and 20 degrees Celsius. Also, again for simplicity, assume only surface contact is with air (!). I'm quickly realizing that my stat mech textbook has far too much dust on it and I'm not sure where to start in determining $f$. I imagine Boltzmann statistics should play a role (à la Maxwell's distribution). I.e.: $$ f\propto\left(\begin{array}{c} \mbox{probability of particle} \\ \mbox{having velocity}\ \bar{v}\end{array}\right)\times\left(\begin{array}{c} \mbox{number of vectors}\ \bar{v} \\ \mbox{corresponding to speed}\ v_x\end{array}\right), $$ where the first factor I believe is $\propto e^{-mv^2/2kT}$ and the second factor needs to take into account we are only interested in surface collisions from one orientation. But at this point I'm not sure if I'm on the right track and, even if I am, where I go from here. Any help? Answer: Evaluating the number of collisions in a lifetime does require some knowledge of physics. One first needs an estimate of the speed of gas molecules at room temperature $(\rm 20^\circ C = 293\,K)$ using the average kinetic energy of a molecule $\frac 12 m v^2$, $m$ being the mass of a molecule, $\sim 5\times 10^{-26}\rm\,kg $, and $v$ an average relating to the speeds of the molecules, with the average kinetic energy equal to $\frac 32 k T$, where $k\sim 10^{-23}\,\rm J/K$ is the Boltzmann constant and $T=300\,\rm K$ is the temperature.
This gives a speed for the molecules of approximately $400\,\rm m/s$. When molecules hit a surface and rebound, their momentum changes and this results in a force $F$ being exerted on the surface. $F= \text{rate of change of momentum} =n( mv - (-mv)) = 2nmv$ where $n$ is the rate at which molecules hit the surface. Atmospheric pressure $\frac FA$, where $A$ is the area of the surface, is approximately $10^5$ pascals. So for one square metre of surface $\frac {2nmv}{1}=10^5 \Rightarrow n \sim 10^{27}$ per second. As this is an order of magnitude calculation and remembering that the surface area of a human changes with age, the surface area of a human can be taken to be one square metre. If a human lives for $70$ years then the number of collisions with air molecules is $70 \times 365 \times 24\times 60\times 60 \times 10^{27} \sim 10^{36}$, a very large number but completely dwarfed by a googol. Even using the age of the universe, $1.38 \times 10^{10} \rm \, years$, produces only $\sim 10^{44}$ collisions!
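The arithmetic above is easy to reproduce. The sketch below uses the same round numbers as the answer; all values are order-of-magnitude estimates, not precise data:

```python
P = 1.0e5      # Pa, atmospheric pressure
m = 5.0e-26    # kg, rough mass of an air molecule
v = 400.0      # m/s, rough molecular speed at room temperature
AREA = 1.0     # m^2, rough human surface area

# Pressure balance P = 2 n m v  =>  collision rate per square metre:
n = P / (2 * m * v)  # ~2.5e27 collisions per second per m^2

seconds_70_years = 70 * 365 * 24 * 60 * 60
lifetime_hits = n * AREA * seconds_70_years

print(f"{n:.1e} hits/s/m^2, {lifetime_hits:.1e} hits in 70 years")
assert lifetime_hits < 1e100  # still nowhere near a googol
```

The lifetime total lands around $10^{36}$, some 64 orders of magnitude short of a googol, which settles the original question decisively.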
{ "domain": "physics.stackexchange", "id": 86760, "tags": "thermodynamics, statistical-mechanics" }
Time complexity analysis for dynamic programming using memoization
Question: I am trying to figure out the time complexity of the "Regular Expression Matching" problem. The problem statement is simple: the only metacharacters allowed are '.' and '*'. The actual problem statement can be found in Link. The solution is in Java and I have solved it using memoization, but I am having difficulty computing the time complexity. Can anyone help me with the explanation, and any reference documents or links to study further on this subject? 1 public boolean isMatch(String s, String p) { 2 return backTrack(s, 0, p, 0, new Boolean[s.length() + 1][p.length() + 1]); 3 } 4 boolean backTrack (String s, int si, String p, int pi, Boolean[][] dp) { 5 if (pi >= p.length()) return si >= s.length(); 6 if (dp[si][pi] != null) return dp[si][pi]; 7 if (pi < p.length() - 1 && p.charAt(pi + 1) == '*') { 8 if (si < s.length() && (s.charAt(si) == p.charAt(pi) || p.charAt(pi) == '.')) { 9 if (backTrack(s, si + 1, p, pi + 2, dp)) return dp[si][pi] = true; 10 if (backTrack(s, si + 1, p, pi, dp)) return dp[si][pi] = true; 11 } 12 if (backTrack(s, si, p, pi + 2, dp)) return dp[si][pi] = true; 13 } 14 if (si < s.length() && (s.charAt(si) == p.charAt(pi) || p.charAt(pi) == '.')) { 15 if (backTrack(s, si + 1, p, pi + 1, dp)) return dp[si][pi] = true; 16 } 17 return dp[si][pi] = false; 18 } Answer: The time-complexity of dynamic-programming with memoization Here is the simple principle. Suppose an algorithm applies dynamic programming to solve a problem, with the majority of running time spent on computing $O(s)$ subproblems for some expression $s$, where each subproblem is computed/solved only once thanks to memoization, and it takes $O(u)$ time for some expression $u$ to compute/solve each subproblem, assuming it takes $O(1)$ time to access the result of any other subproblems that are needed during that computation, then the time-complexity of the algorithm is $O(su)$, in general.
Analysis of the algorithm in the question Let us estimate how many line-executions are done if the algorithm is run upon an input of a string of length $m$ and a pattern of length $n$. There are $(m+1)(n+1)$ entries in the table dp (which represent $O(mn)$ subproblems). The code that actually computes and updates the entries are the code block from line $7$ to line $17$. Because of memoization, these lines will be executed at most once for each entry. So these $17-7+1$ lines will be executed at most $(m+1)(n+1)$ times, i.e., these lines correspond to at most $(17-7+1)(m+1)(n+1)$ line-executions. How many times will line $4$ (for building and quitting calling stack frames), line $5$ and $6$ be executed? Except the first time that is triggered by line $2$, the execution of them is triggered by the execution of any one of $4$ lines, line $9$, $10$, $12$, or $15$, which happens during the execution of the code block mentioned above. So line $4$, $5$ and $6$ are executed at most $1 + 4(m+1)(n+1)$ times, i.e., these $3$ lines correspond to at most $3(1 + 4(m+1)(n+1))$ line-executions. So, the total number of line-executions is at most $$(17-7+1)(m+1)(n+1) + 3(1 + 4(m+1)(n+1)) + 2,$$ where the last number $2$ is for the one-time execution of line $1$ and $2$ at the start of running the algorithm. We can check that it takes $O(1)$ time to execute each line, except for possibly line $2$, which is executed once that costs up to $O(mn)$ time. Hence, the time-complexity of the algorithm is $O(mn)$. The analysis above serves as an example to illustrate and validate the principle. Or we can apply the principle directly. There are $O(mn)$ subproblems. It takes $O(1)$ time to solve each subproblem, considering all recursive calls as taking $O(1)$ time. Hence the time-complexity of the total algorithm is $O(mn\times 1) = O(mn)$.
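For comparison, the same memoized backtracking can be sketched in Python (my own translation, not the poster's code); here `functools.lru_cache` plays the role of the `dp` table, so each of the $O(mn)$ subproblems counted above is solved at most once:

```python
from functools import lru_cache

def is_match(s: str, p: str) -> bool:
    @lru_cache(maxsize=None)  # each (si, pi) pair is solved at most once -> O(mn) subproblems
    def backtrack(si: int, pi: int) -> bool:
        if pi >= len(p):
            return si >= len(s)
        first = si < len(s) and p[pi] in (s[si], '.')
        if pi + 1 < len(p) and p[pi + 1] == '*':
            # either skip the "x*" block entirely, or consume one matching char and stay on it
            return backtrack(si, pi + 2) or (first and backtrack(si + 1, pi))
        return first and backtrack(si + 1, pi + 1)

    return backtrack(0, 0)
```

Because the cache bounds the number of distinct calls by $(m+1)(n+1)$ and each call does $O(1)$ work outside recursion, the total is $O(mn)$, matching the analysis above.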
{ "domain": "cs.stackexchange", "id": 20006, "tags": "time-complexity, dynamic-programming, memoization" }
Why can I see an inverted image of the object needle on top of the image needle using a convex lens on an optical bench?
Question: Basically what the title says. Convex lenses form images that are real and inverted and can be obtained on a screen, unless the object is placed extremely close to the lens. So why am I able to see an inverted image of the object needle through the lens without any screen? The aim of the practical experiment is to find the focal length of the convex lens provided to us using an optical bench. Answer: In optics, a "screen" usually refers to the surface onto which the image of the object is projected after the light passes through the lens. It is easier to picture it this way: suppose you have your lens in the middle of the setup, with a projector on one side and a screen on the other. The light from the projector goes through the lens and is flipped; it then travels on until it reaches the screen, where you see the flipped image. There you see the image of the object inverted because you are projecting the object onto a screen. In your scenario, you yourself are the screen: your retina receives the light coming from the object after it has passed through the lens, and so you see the object inverted.
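For the stated aim of the experiment, once the inverted real image is located (e.g. by the no-parallax method against the image needle), the focal length follows from the thin-lens relation. A brief sketch, assuming the real-is-positive sign convention (conventions differ between textbooks):

```latex
% Thin-lens equation, real-is-positive convention:
%   u = object distance (object needle to lens)
%   v = image distance  (lens to the located inverted image)
\[
  \frac{1}{f} = \frac{1}{u} + \frac{1}{v}
  \qquad\Longrightarrow\qquad
  f = \frac{u\,v}{u + v}
\]
```

Repeating the measurement for several $(u, v)$ pairs and averaging the resulting $f$ values reduces the experimental error.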
{ "domain": "physics.stackexchange", "id": 87071, "tags": "optics, visible-light, experimental-physics, lenses" }
Sufficient conditions for an interaction to be classified as weak, strong, ...?
Question: Let us say I have been given the equation of an interaction/decay/etc. between particles: $$X+Y\rightarrow A+B$$ Are there any sufficient conditions that we can use to determine the type of interaction this is? I know that, for example, if there is a neutrino involved then the interaction is weak. Answer: Life is not so simple, as in all high energy interactions there is a probability of a large number of particles appearing at the main interaction which will subsequently have decays through the weak or electromagnetic interaction. If one sees jets of hadrons in the detectors the strong interaction is involved, but the main vertex may be electromagnetic, as in e+ e- annihilation, for example. A Feynman diagram demonstrating an annihilation of an electron (e–) and a positron (e+) into a photon (γ) that produces a bottom quark (b) and anti-bottom quark (b) pair, which then radiate gluons (blue). Fig. 7 in this link. The bottom quarks, whose decays are weak, will appear as hadronic jets which also may well have decays into neutrinos. Lifetimes and widths of resonances can give an indication, as electromagnetic ones will have narrow widths, weak will have long lifetimes and strong very short ones, according to the couplings of the corresponding interactions. But there are also quantum number conservation rules that can spoil the guess: a good example is the width of the J/psi. So no, there is no sufficient condition. The specific interaction that has been measured has to be studied and fitted with the standard model expectations for the primary vertex.
{ "domain": "physics.stackexchange", "id": 32534, "tags": "particle-physics, standard-model, interactions" }
RTABMAP localization with turtlebot issue
Question: Hello, As instructed in the tutorial Mapping and Navigation with Turtlebot after mapping "$ roslaunch rtabmap_ros demo_turtlebot_mapping.launch args:="--delete_db_on_start" rgbd_odometry:=true" for localization when i try to execute the command " $ roslaunch rtabmap_ros demo_turtlebot_mapping.launch localization:=true" . i get following error: [ INFO] [1454339969.590369016]: rtabmap 0.10.10 started... [ INFO] [1454339970.864339986]: Stopping device RGB and Depth stream flush. [ WARN] [1454339972.332150185]: Timed out waiting for transform from base_footprint to map to become available before running costmap, tf error: Could not find a connection between 'map' and 'base_footprint' because they are not part of the same tree.Tf has two or more unconnected trees.. canTransform returned after 0.100846 timeout was 0.1. [ WARN] [1454339974.865176406]: rtabmap: Could not get transform from odom to base_footprint after 0.100000 second! [ WARN] [1454339974.995544194]: rtabmap: Could not get transform from odom to base_footprint after 0.100000 second! and this goes on.... what's the issue? please guide me i am new to ROS. Thank you Originally posted by kaygudo on ROS Answers with karma: 11 on 2016-02-01 Post score: 0 Original comments Comment by Humpelstilzchen on 2016-02-02: Error missing? Comment by kaygudo on 2016-02-03: due to some reason the error message is not shown here. do u know why? Comment by Humpelstilzchen on 2016-02-03: sorry no, maybe try pastebin Answer: Hi, On your second call, the visual odometry node is not started, so it is why rtabmap is complaining that there is no odometry TF between /odom and /base_footprint. Try this instead: $ roslaunch rtabmap_ros demo_turtlebot_mapping.launch localization:=true rgbd_odometry:=true Note that if you are on the actual robot, you may want to use the odometry TF/topic from turtlebot. 
cheers Originally posted by matlabbe with karma: 6409 on 2016-02-04 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by kaygudo on 2016-02-21: Thanks! That worked. But after being done with mapping, when I proceeded to localization and typed the command as mentioned above by you and in http://wiki.ros.org/rtabmap_ros/Tutorials/MappingAndNavigationOnTurtlebot (localization mode), nothing happens; as mentioned in the link, a 2D map should reappear. Comment by matlabbe on 2016-02-22: On the tutorial it is stated "the 2D map would re-appear again when a loop closure is found", so maybe no loop closure could be found (are there warnings on the console?). The map may be too small.
{ "domain": "robotics.stackexchange", "id": 23618, "tags": "slam, localization, navigation, turtlebot, rtabmap-ros" }
ROS2 DDS communication: How to connect AWS EC2 Instance remotely
Question: I am trying to connect from a Client PC (Raspberry Pi) to an AWS EC2 Instance remotely over the internet using by UDP unicast. I had set up both computers as below.I executed a "ros2 run demo_nodes_py talker" on this Client PC, and executed a "ros2 run demo_nodes_py listener" on this Instance. But this Instance cannot receive topic messageses from this Client PC. Please let me know how to connect AWS EC2 Instance. Details information is below: AWS EC2 Instance configuration TYPE: AWS EC2 VPC Instance(t2.micro) OS: Ubuntu 18.04 LTS ROS Version: ROS2 Dashing Diademata Private IP: aaa.aaa.aaa.aaa Public IP: bbb.bbb.bbb.bbb FastRTPS - DEFAULT_FASTRTPS_PROFILES.xml <?xml version="1.0" encoding="UTF-8" ?> <profiles> <participant profile_name="my_profile_s" is_default_profile="true"> <rtps> <builtin> <metatrafficUnicastLocatorList> <locator/> </metatrafficUnicastLocatorList> <initialPeersList> <locator> <udpv4> <address>ddd.ddd.ddd.ddd</address> </udpv4> </locator> </initialPeersList> </builtin> </rtps> </participant> </profiles> AWS EC2 Security Group Inbound Configuration type : port number: src address ----------------------------------- Custom UDP: port 7412 : 0.0.0.0/0 Custom UDP: port 7413 : 0.0.0.0/0 Custom UDP: port 7414 : 0.0.0.0/0 Custom UDP: port 7415 : 0.0.0.0/0 Custom UDP: port 7416 : 0.0.0.0/0 Client PC configuration TYPE: Raspberry Pi 3+ OS: Ubuntu 18.04 LTS ROS Version: ROS 2 Dashing Diademata Private IP: ccc.ccc.ccc.ccc Public IP: ddd.ddd.ddd.ddd FastRTPS - DEFAULT_FASTRTPS_PROFILES.xml <?xml version="1.0" encoding="UTF-8" ?> <profiles> <participant profile_name="my_profile_c" is_default_profile="true"> <rtps> <builtin> <metatrafficUnicastLocatorList> <locator/> </metatrafficUnicastLocatorList> <initialPeersList> <locator> <udpv4> <address>bbb.bbb.bbb.bbb</address> </udpv4> </locator> </initialPeersList> </builtin> </rtps> </participant> </profiles> Thank you in advance for reading my question. 
Best regards, Iori Originally posted by Iori on ROS Answers with karma: 11 on 2019-07-17 Post score: 1 Original comments Comment by kaliatech on 2019-07-26: Fwiw, I have also been unable to get UDP unicast to work. However, I was able to get TCP to work from client to EC2 server. Let me know if that configuration would be helpful. Answer: Thank you for your advice. To use UDP unicast communication with ROS2, both the client PC and the AWS EC2 instance had to be able to send and receive data over the UDP protocol with each other. Since, in the network setup I asked about, the client PC communicates with AWS EC2 via NAT, UDP unicast communication didn't work. Therefore, I solved this problem by using an OpenVPN network between the client PC and AWS EC2. Best regards, Iori Originally posted by Iori with karma: 11 on 2019-09-05 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by btidd on 2023-05-18: Can I ask how you set up your configuration? With OpenVPN we are able to echo topics on the client PC published on AWS EC2, but not the other way around.
{ "domain": "robotics.stackexchange", "id": 33461, "tags": "ros, ros2, dds" }
How does one measure the vector magnetic field of an astrophysical object?
Question: Magnetic field strength is measured using Zeeman splitting. This is one of the ways the Sun's magnetic field strength is measured. Now, how does one measure the vector magnetic field? Vector magnetic field = magnetic field and its components. Answer: Using a spectropolarimeter, one can measure the vector magnetic field. A spectropolarimeter based on a ferroelectric liquid-crystal modulator is described. An optical system with spatial modulation of the positions of the components of Zeeman splitting is a specific feature of this instrument. In comparison to the familiar instruments, the developed spectropolarimeter utilizes the light flux more efficiently and contains only one photodetector array. An operating spectropolarimeter developed at the Sayan Solar Observatory is considered as an example. Comparative estimates of noise in different operating modes are presented. (Ref: Kolobov, D.Y., Kobanov, N.I. & Grigoryev, V.M. Instrum Exp Tech (2008) 51: 124. https://doi.org/10.1134/S0020441208010156)
{ "domain": "astronomy.stackexchange", "id": 3029, "tags": "magnetic-field" }
What is this insect in a cocoon climbing a web?
Question: I found this thing hanging from the framework of my garage yesterday: The size of the object in the picture was about .75 inches tall by maybe a quarter inch wide. This was taken yesterday in a suburb of Jacksonville, Florida, US in late evening (8:30 pm?). He was (apparently) using his mouth-parts to draw in what appeared to be a strand of spiderweb, and in about 10 minutes had drawn himself upward, towards the top of the garage framework, a total distance of about two feet. An additional question would be what it is that he's in: a cocoon? A piece of bark or something being used like a hermit crab uses a scavenged shell? Any help would be appreciated! Answer: That is probably a Psychidae (Lepidoptera), commonly known as bagworms. They build those "bags" with pieces of wood, twigs and leaves, and the bags protect them from predators and parasitoids. One possible species for Florida is this one: https://en.wikipedia.org/wiki/Evergreen_bagworm
{ "domain": "biology.stackexchange", "id": 9852, "tags": "species-identification, zoology, entomology" }
Calculate sum of values from checkboxes
Question: I have a bunch of checkboxes with a data-amount attribute containing a value (positive or negative). My goal is to generate a running total as the user checks each box, and then output this later on. My code works - just curious for feedback as using filter, map and reduce seems overkill for something like this. Thanks const checkboxes = document.querySelectorAll('input[type=checkbox]'); const output = document.querySelector('.runningTotal'); checkboxes.forEach(function(checkbox) { checkbox.addEventListener('change', function() { const runningTotal = Array.from(checkboxes) .filter(i => i.checked) // remove unchecked checkboxes. .map(i => i.dataset.amount ??= 0) //extract the amount, or 0 .reduce((total, item) => { return total + parseFloat(item)}, 0) console.log(runningTotal) output.innerHTML = runningTotal; }) }); <input type="checkbox" data-amount="100"> 100 <input type="checkbox" data-amount="150"> 150 <input type="checkbox" data-amount="-50"> -50 <input type="checkbox" data-amount="10.50"> 10.50 <input type="checkbox" data-amount="0"> 0 <div class="runningTotal"></div> Answer: Your code is too complex. General points Use textContent when setting text in an HTMLElement rather than innerHTML which forces a reflow. Use a single listener rather than one for each clickable element. See rewrite. The nullish assignment is not required. i => i.dataset.amount ??= 0 is more efficient as i => i.dataset.amount ?? 0 as it does not modify the markup if amount is undefined. However I can not see why you vet the dataset values. Better to use Number to parse a string to a number. Prefer id to uniquely identify DOM elements. Don't add comment that state the obvious. Generally good code is self documenting (meaningful naming, structured, etc...) and thus does not need comments Don't leave console output in release code. Use the spread operator ... to convert from iterable array like objects to array. 
Eg Array.from(checkboxes) is the same as [...checkboxes] Rewrite The rewrite adds a span to accept a single click event that is used to calculate the sum. The listener only iterates the checkboxes once per click to calculate the sum. The element used to display the sum is identified by its id rather than a className The code assumes that all the data set values are correctly setup and thus only need to check if the checkbox is checked. const checkboxes = document.querySelectorAll("input[type=checkbox]"); sumCheckboxes.addEventListener('click', () => { var total = 0; for (const {checked, dataset} of checkboxes) { total += checked ? Number(dataset.amount) : 0; } runningTotal.textContent = total; }); <span id="sumCheckboxes"> <input type="checkbox" data-amount="100"> 100 <input type="checkbox" data-amount="150"> 150 <input type="checkbox" data-amount="-50"> -50 <input type="checkbox" data-amount="10.50"> 10.50 <input type="checkbox" data-amount="0"> 0 </span> <div id="runningTotal"></div> Alternative implementation using a Array.reduce to sum the checkboxes. Note that the array like result of querySelectorAll needs to be converted to an array. This is done once using the spread operator. const checkboxes = [...document.querySelectorAll("input[type=checkbox]")]; sumCheckboxes.addEventListener('click', () => { runningTotal.textContent = checkboxes.reduce((total, el) => total + (el.checked ? Number(el.dataset.amount) : 0), 0 ) }); <span id="sumCheckboxes"> <input type="checkbox" data-amount="100"> 100 <input type="checkbox" data-amount="150"> 150 <input type="checkbox" data-amount="-50"> -50 <input type="checkbox" data-amount="10.50"> 10.50 <input type="checkbox" data-amount="0"> 0 </span> <div id="runningTotal"></div>
{ "domain": "codereview.stackexchange", "id": 43875, "tags": "javascript" }
Stopping power of charged particles in heavy elements
Question: I am performing an experiment which requires knowledge of the stopping power function of roughly 5 MeV alpha particles in heavy elements, e.g. californium. I have not been able to find such data anywhere. For example, NIST (https://physics.nist.gov/PhysRefData/Star/Text/ASTAR.html) only has data for up to Z=92 (Uranium), same goes for SRIM. I believe both of those are based on measured results. Does anyone know if these measurements have actually never been made before? If not, is it safe to use uranium stopping power data to approximate californium stopping power (Z=98)? Or am I better off just using the Bethe formula? Answer: With the advent of practical solid-state detectors with good (~4keV) energy resolution in the late 1950's/early 1960's, the use of Rutherford Backscattering Spectrometry (RBS) for composition analysis took off. To convert detected energy to depth required knowing the stopping power vs energy for the incident and backscattered light ions (usually protons and $\alpha$). There are a variety of tabulations, combining theory and experiment, but as you noted the tables generally stop at uranium. This includes, say, the Handbook of Modern Ion Beam Materials Analysis (ed. J.R. Tesmer and M. Nastasi, MRS 1995). The theory of light ion stopping goes way back, including Bethe, Bragg, Bohr, Lindhard, Ziegler, and many more. Over the years the models have been refined. Quantitative experiments have been performed on a variety of ion/target combinations, with stopping powers generally good to a few percent or so (perhaps). A major source of uncertainty is just how many atoms/cm$^{2}$ the ions in the incident beam actually passed by. For silicon, perhaps the most widely RBS'ed substrate in the world, there are several internally consistent experimental measurements of stopping powers that differ from each other over most of the normal RBS energy range. To measure the stopping powers for Cf, one would need a thin film target and an accelerator. 
One article I found (from phys.org) indicates that 5 milligrams of Cf cost them $1.4 million to obtain from the Department of Energy. I won't even begin to think of the hassles of the safety paperwork I'd have to do to make thin films with it and play with it in an accelerator... So, what to do in your case? Fortunately, a 5MeV $\alpha$ seems unlikely to penetrate a Cf nucleus, so you can likely discount induced nuclear reactions and non-Rutherford nuclear cross sections. But, you want the stopping powers. I suggest two ways to contemplate it. Plot up the tabulated stopping powers and think how you would extrapolate them to Cf. This is not straightforward, since they are not monotonic but oscillate due to the electronic structure of the target atoms. This then requires applying some level of theory to guesstimate correctly. Go to Ziegler's semi-empirical formula and extrapolate from that, taking into account what you know from above. This combination is about as good as you are going to get. Without too much effort, I think you could come up with a value that you could argue to be good to within $\pm 5\%$ or so. By doing both steps you would have a clear argument on the value chosen, and the impact of the exact stopping power on your results.
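For step 2, here is a rough numerical sketch of the simple nonrelativistic Bethe formula (my own, not from the answer): it has no shell, density, or Barkas corrections, and the mean excitation energy is an input you must supply — the uranium value of roughly 890 eV is the commonly quoted tabulated one, while for Cf you would substitute an extrapolated guess. Treat the output as an order-of-magnitude check against the tabulated stopping powers, not a result:

```python
import math

# Physical constants (SI)
e    = 1.602176634e-19     # elementary charge, C
eps0 = 8.8541878128e-12    # vacuum permittivity, F/m
m_e  = 9.1093837015e-31    # electron mass, kg
amu  = 1.66053906660e-27   # atomic mass unit, kg

def bethe_stopping(E_MeV, z, Z, A, rho, I_eV):
    """Nonrelativistic Bethe stopping power -dE/dx in J/m for an alpha
    particle of kinetic energy E_MeV and charge z, in a target of atomic
    number Z, mass number A, density rho (kg/m^3), and mean excitation
    energy I_eV (a guessed input for heavy targets)."""
    m_ion = 4.0 * amu                      # alpha particle mass (approx.)
    v2 = 2.0 * E_MeV * 1e6 * e / m_ion     # v^2 from E = m v^2 / 2
    n = Z * rho / (A * amu)                # electron density, m^-3
    k = e * e / (4.0 * math.pi * eps0)     # Coulomb factor e^2/(4 pi eps0)
    pre = 4.0 * math.pi * n * z * z / (m_e * v2) * k * k
    return pre * math.log(2.0 * m_e * v2 / (I_eV * e))

# 5 MeV alpha in uranium (I ~ 890 eV, rho ~ 19100 kg/m^3)
dEdx = bethe_stopping(5.0, 2, 92, 238, 19100.0, 890.0)
MeV_per_um = dEdx / 1.602176634e-13 * 1e-6
```

For these inputs the result lands near 0.4 MeV/µm, the right ballpark for 5 MeV alphas in uranium; substituting Z = 98, A ≈ 251, the Cf density, and a guessed I gives the analogous extrapolated estimate, which you can then cross-check against the extrapolation of the tabulated values.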
{ "domain": "physics.stackexchange", "id": 49037, "tags": "nuclear-physics, radiation" }
Does cutting of trees affect spin angular momentum of earth?
Question: Cutting trees reduces earth's moment of inertia. So the spinning velocity of earth should be reduced day by day. Does it really happen? Answer: Model the tree as a point mass $m$ located some height $h$ above the ground --- that is, forget the mass of the trunk and assume all the mass of the tree is in the branches and leaves above the ground. Then the moments of inertia of the tree before and after felling are \begin{align} I_\text{tree,up} &= m \left( (R+h)\cos\theta \right)^2 \\ I_\text{tree,down} &= m \left( R\cos\theta \right)^2 \end{align} where $R$ is the radius of Earth and $\theta$ the latitude of the tree. The moment of inertia for the rest of the Earth is $$ I_\text{Earth} \approx \frac25 MR^2 $$ if Earth's mass is $M$ and we pretend Earth is a uniformly-dense sphere. (It isn't, which reduces the fraction out front from $\frac25$ to perhaps something like $\frac15$ --- I haven't done the math carefully or looked it up, and you'll see in a moment it doesn't matter. We'll use the simple assumption.) Angular momentum is conserved when the tree falls, so the frequency $\omega$ of Earth's rotation changes: $$ (I_\text{up} + I_\text{Earth})\omega_\text{up} = (I_\text{down} + I_\text{Earth})\omega_\text{down} $$ We can figure out how much it changes. Let's fell a tree on the equator, where $\cos\theta=1$, and figure out how much the ratio is: \begin{align} \frac{\omega_\text{down}}{\omega_\text{up}} &= \frac {\frac25M + m(1+\frac hR)^2} {\frac25M + m} \cdot \left(\frac RR\right)^2 \\ &\approx \frac {\frac25M + m + 2m\frac hR} {\frac25M + m} \\ &\approx 1 + 2\frac {m} {\frac25M+m} \cdot \frac hR = 1 + 5\frac{mh}{MR} \end{align} So a six-ton ($m/M \approx 10^{-21}$), sixty-meter ($h/R \approx 10^{-5}$) behemoth of a tree on the equator would change the length of a day starting in the 26th significant figure or so. Zeptoseconds. Felling a big tree would change the length of a day by a few zeptoseconds.
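Plugging numbers into the final expression is a quick check (my own arithmetic, using the same point-mass assumptions as above):

```python
M = 5.972e24   # Earth mass, kg
R = 6.371e6    # Earth radius, m
m = 6.0e3      # tree mass, kg (six metric tons)
h = 60.0       # tree height, m

frac = 5 * m * h / (M * R)   # fractional change in rotation rate, ~5mh/(MR)
day = 86400.0                # length of a day, s
delta = day * frac           # change in the length of the day, s
```

For these inputs `frac` is about 5e-26, so the length of the day shifts by a few 1e-21 s, i.e. the zeptosecond range.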
Bigger effects include relative motion of Earth's core and mantle, tectonic shifts, evaporation of equatorial seawater and atmospheric transit of water vapor from tropical to temperate latitude zones or vice-versa. This image suggests there are daily fluctuations in day length of about a millisecond, averaging to about a half-millisecond seasonal variation. That's a seasonal change to the length of a day in the eighth or ninth significant figure. A fun Fermi problem is to notice there are probably more deciduous trees in the northern than southern hemisphere, and assume they all drop their leaves at once in October; even that is a pretty small change in the length of the day.
{ "domain": "physics.stackexchange", "id": 32155, "tags": "homework-and-exercises, newtonian-mechanics, angular-momentum, rotational-dynamics, angular-velocity" }
tf::Quaternion syntax question
Question: Hello, Recently I've found out, that I must give the navigation stack a quaternion describing rotation in addition to the vector3D goal. Quaternions were new to me, so I learnt about them because I would like to use them in ROS. My question is about actual implementation. Suppose that I would like to go from (0,0,0) to (1,1,0). Then I know, that the robot would probably face to 45°, so... double inRadian = radian(degree); tf::Quaternion rotateByThis(tf::Vector3(0,0,1), inRadian); // is this call okay? goal.target_pose.pose.orientation.w = ? // tf::Quaternion has getW() method, should I use that value? goal.target_pose.pose.orientation.x = ? goal.target_pose.pose.orientation.y = ? goal.target_pose.pose.orientation.z = ? Originally posted by Peter Roos on ROS Answers with karma: 119 on 2013-02-21 Post score: 0 Answer: tf provides some methods for the use case that you want to do (2D navigation). In general there is: tf::createQuaternionFromYaw Specifically for setting ROS messages, there is also: tf::createQuaternionMsgFromYaw So, your code can be shortened to: goal.target_pose.pose.orientation = tf::createQuaternionMsgFromYaw(inRadians) You'll almost never have to set quaternion components manually when dealing with ROS data types. There are also conversions between tf types and messages for all common tf types. Originally posted by dornhege with karma: 31395 on 2013-02-21 This answer was ACCEPTED on the original site Post score: 4 Original comments Comment by Reza Ch on 2013-03-21: Hello, I have the question of other way round. How to get rotations of a quaternion to RPY? I mean in my code I use a /tf listener. So after lookupTransform, now I need to call the rotations and have themin RPY. So how to call and convert them? Any wiki link is also appreciated.
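For intuition, a yaw-only quaternion is just a rotation about the z-axis; here is a minimal Python sketch of the math that a yaw-to-quaternion helper computes (my own illustration, not the tf source):

```python
import math

def quaternion_from_yaw(yaw):
    """Quaternion (x, y, z, w) for a rotation of `yaw` radians about the z-axis."""
    return (0.0, 0.0, math.sin(yaw / 2.0), math.cos(yaw / 2.0))

# Facing 45 degrees, as in the question's (0,0,0) -> (1,1,0) goal
x, y, z, w = quaternion_from_yaw(math.radians(45))
```

Only the z and w components are nonzero for a pure yaw, which is why 2D navigation goals never need the x and y components set by hand.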
{ "domain": "robotics.stackexchange", "id": 12998, "tags": "quaternion" }
Change in the width of depletion layer with doping
Question: I have previously learnt that increasing the doping will decrease the width of the depletion layer and vice-versa. However, I am unable to understand this. Does it have some relation with the force of repulsion? Answer: The depletion region forms due to the equilibrium between drift (field driven) and diffusion (concentration gradient driven) currents. If you have very low doping, the depletion region will be large because a large volume of depleted semiconductor is needed to generate enough electric field to balance the diffusion current. On the other hand, if you have very high doping, a much smaller region is required to balance the diffusion of carriers. I was actually doing this calculation last week; here are some results from my simulation of the Poisson-Boltzmann equation for a Ge/GaAs pn-junction for different doping levels. I have indicated the approximate width of the depletion layer with the blue background. The blue and green lines are the conduction and valence bands, the red and light blue lines are the intrinsic Fermi-level and the Fermi-level, respectively. The top plot is for light doping $10^{16}\text{cm}^{-3}$, the bottom is for higher doping $10^{17}\text{cm}^{-3}$. The width of the depletion region with applied bias was discussed here: In an avalanche breakdown, where are the electrons that break free from?, so that might also be interesting.
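The trend can be made quantitative with the textbook depletion-approximation width for an abrupt homojunction. A rough sketch (my own, using silicon numbers for concreteness rather than the Ge/GaAs heterojunction of the figure, which needs the full Poisson-Boltzmann treatment used there):

```python
import math

q    = 1.602176634e-19           # elementary charge, C
kT_q = 0.02585                   # thermal voltage at 300 K, V
eps  = 11.7 * 8.8541878128e-14   # silicon permittivity, F/cm
n_i  = 1.0e10                    # intrinsic carrier density of Si, cm^-3

def depletion_width(Na, Nd):
    """Zero-bias depletion width (cm) of an abrupt p-n homojunction in the
    depletion approximation: W = sqrt(2*eps*Vbi/q * (Na+Nd)/(Na*Nd))."""
    Vbi = kT_q * math.log(Na * Nd / n_i**2)   # built-in potential, V
    return math.sqrt(2.0 * eps * Vbi / q * (Na + Nd) / (Na * Nd))

w_light = depletion_width(1e16, 1e16)   # lighter doping  -> wider region
w_heavy = depletion_width(1e17, 1e17)   # heavier doping  -> narrower region
```

For symmetric doping W scales roughly as $1/\sqrt{N}$ (the logarithm in $V_{bi}$ changes only slowly), so a tenfold increase in doping shrinks the depletion region by about a factor of three, consistent with the two plots described above.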
{ "domain": "physics.stackexchange", "id": 10644, "tags": "semiconductor-physics, electronic-band-theory" }
What is the correct way to calculate the efficiency of a heat exchanger?
Question: I want to evaluate the efficiency of a heat exchanger. From measurements I know all four temperatures $T_1$, $T_2$, $t_1$ and $t_2$ as well as the mass flows of both fluids $m_1$ and $m_2$, and if necessary I would also be able to retrieve e.g. pressure measurements. Now after some research it seems there are multiple ways to calculate a value to attribute to a heat exchange process. On a very simple level I found the thermal efficiency $$ \eta_{thermal} = \frac{T_1 - T_2}{T_1 - t_1}. $$ Then there is the so-called effectiveness $$ \epsilon = \frac{q_{act}}{q_{max}} $$ where I should be able to calculate $q_{act}$ by $$ q_{act} = \dot{m} c_p (T_1 - T_2) $$ I am not 100% sure this is correct though, and I don't know which temperature I would need to use for $c_p$ in this equation. Then the next question is how do I calculate $q_{max}$? Could I use the same equation just with e.g. the inlet temperature of the hot fluid $T_1$ and the outlet temperature of the cold fluid $t_2$? But which mass flow would I then use for the calculation? Finally there are more sophisticated methods like NTU and LMTD, which I am not sure I actually need if I simply want to calculate an efficiency/effectiveness of a heat exchanger. So what is the correct way if I basically want to compare a single heat exchanger over its operating life? Efficiency? Thermal or another equation? Or effectiveness? And do I need a method like NTU or LMTD to calculate it? Answer: Effectiveness using $\epsilon = \frac{q_{act}}{q_{max}}$ seems to be most general. It assumes comparing two cases, and I would choose cases where the inlet temperatures and flow rates are fixed for both of them.
When calculating $q_{act}$ for the case where you already know all the inlet/outlet temperatures, you can use enthalpy difference between inlet and outlet multiplied by mass flow instead of approach with $c_p$: $$q_{act} = \left(h_C\left(T_{C,out}\right)-h_C\left(T_{C,in}\right)\right)\cdot \dot{m}_C = \left(h_H\left(T_{H,in}\right)-h_H\left(T_{H,out}\right)\right)\cdot \dot{m}_H$$ However, calculating $q_{max}$ might be trickier, because you need to imagine, that the heat exchange between both fluids is perfect at any point in the exchanger, and this I think can manifest in 2 ways: In cocurrent heat exchanger, each fluid enters with different temperature, but they will reach the same temperature at the outlet, i.e. $T_{C,out}^* = T_{H,out}^*$. In countercurrent heat exchanger, different inlet temperatures, $T_{C,in}$ and $T_{H,in}$, are also given at the start and the outlet temperatures will be: $T_{C,out}^* = T_{H,in}$ and $T_{H,out}^* = T_{C,in}$; Knowing all the inlet/outlet temperatures, you can also use the enthalpy approach for calculating $q_{max}$: $$q_{max} = \left(h_C\left(T_{C,out}^*\right)-h_C\left(T_{C,in}\right)\right)\cdot \dot{m}_C = \left(h_H\left(T_{H,in}\right)-h_H\left(T_{H,out}^*\right)\right)\cdot \dot{m}_H$$ Expressing efficiency, flowrates can cancel out, so the expression is simplified: $$\epsilon = \frac{h_C\left(T_{C,out}\right)-h_C\left(T_{C,in}\right)}{h_C\left(T_{C,out}^*\right)-h_C\left(T_{C,in}\right)} = \frac{h_H\left(T_{H,in}\right)-h_H\left(T_{H,out}\right)}{h_H\left(T_{H,in}\right)-h_H\left(T_{H,out}^*\right)}$$ Some heat exchangers might involve different setups, but I think it is always possible to imagine situation with perfect heat transfer between the fluids.
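If the specific heats can be treated as constant over the temperature range, the enthalpy differences above reduce to $c_p\,\Delta T$, and the countercurrent ideal case is limited by whichever stream has the smaller heat-capacity rate $C = \dot{m}\,c_p$, giving $q_{max} = C_{min}\,(T_{H,in} - T_{C,in})$. A minimal sketch under that constant-$c_p$ assumption (my own, with made-up example numbers):

```python
def effectiveness(m_h, cp_h, Th_in, Th_out, m_c, cp_c, Tc_in):
    """Heat-exchanger effectiveness, assuming constant specific heats.
    q_act from the measured hot-side temperature drop; q_max is limited
    by the stream with the smaller heat-capacity rate C = m_dot * cp."""
    q_act = m_h * cp_h * (Th_in - Th_out)
    c_min = min(m_h * cp_h, m_c * cp_c)
    q_max = c_min * (Th_in - Tc_in)
    return q_act / q_max

# Example: hot water at 1 kg/s cooled 100 -> 60 C against cold water
# at 2 kg/s entering at 20 C (cp ~ 4180 J/(kg K) for both streams)
eps = effectiveness(1.0, 4180.0, 100.0, 60.0, 2.0, 4180.0, 20.0)  # -> 0.5
```

For tracking one exchanger over its operating life, computing this number at fixed inlet temperatures and flow rates is exactly the comparison described above; when $c_p$ varies strongly, fall back to the enthalpy form.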
{ "domain": "engineering.stackexchange", "id": 5351, "tags": "thermodynamics, heat-transfer, heat-exchanger" }
MOOCs for Python in Data Science
Question: Thanks also to SE, I've recently changed jobs and now I'm working in Data Science, mainly on Analytics for the IoT (Internet of Things). Analytics are applications on cloud platforms which collect real-time, streaming sensor data from industrial machines and allow one to estimate their actual performance, predict the probability of a failure and the time before it happens, detect anomalies, etc. Until now, I've been using R to build Statistical Learning models on datasets which would fit in the memory of my workstation, so I'm not a novice when it comes to Statistical modeling and Data Science. However, I'm a novice with Python, and I need to learn it, especially the part of the Python ecosystem related to Data Science, because that's what my team uses. I don't need to develop the cloud platform: I just need to develop the "core" Analytics. I got the book by Jake Van der Plas: https://www.amazon.com/Python-Data-Science-Handbook-Essential/dp/1491912057 but I would like to also follow a MOOC on using Python for Data Science. Can you suggest one? DISCLAIMER I already asked this question on CV and it wasn't considered very appropriate there. Since my other question on MOOCs, MOOC or book on Deep Learning in Python for someone with a basic knowledge of neural networks, was well-received on this site, I thought of asking again here, after deleting the one on CV (no cross-posting). Hope this is fine. Answer: Check out the Applied Data Science with Python Specialization from Coursera. It is a series of 5 courses from the University of Michigan.
{ "domain": "datascience.stackexchange", "id": 1824, "tags": "python, reference-request" }
Use of Correlation Map in Machine Learning
Question: I would like to know the use of a correlation map in machine learning. For example, if there are 2 features with high correlation, should either of the features be removed before applying the algorithm, or does it depend on the data set? Any explanation would be highly helpful. Thanks in advance. Answer: It depends. A high correlation between two features suggests that they represent almost the same information. For some problems like clustering, it is always useful to remove redundant features, while some algorithms, like Gradient Boosting in xgboost, are not affected at all by such features. So, it all depends on what you want to do with your data set. In my opinion, if your dataset has too many features, then I would suggest checking the correlation between those features and applying PCA to reduce the dimensionality of your dataset, especially if you are doing tasks like clustering or regression.
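One common way to act on a correlation map is to drop one feature from each highly correlated pair. A minimal pandas sketch (the column names, synthetic data, and 0.9 threshold are illustrative choices, not fixed rules):

```python
# Inspect pairwise correlations and drop one feature from each highly
# correlated pair before modelling.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
x = rng.normal(size=200)
df = pd.DataFrame({
    "a": x,
    "b": x + rng.normal(scale=0.01, size=200),  # nearly a duplicate of "a"
    "c": rng.normal(size=200),                  # independent feature
})

corr = df.corr().abs()
# Keep only the upper triangle so each pair is considered once
upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
to_drop = [col for col in upper.columns if (upper[col] > 0.9).any()]
reduced = df.drop(columns=to_drop)
print(to_drop)                 # ['b']
print(list(reduced.columns))   # ['a', 'c']
```

For tree-based models such as xgboost this pruning is optional, as the answer notes, but for distance-based methods (clustering, kNN) redundant features effectively double-count the same information.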
{ "domain": "datascience.stackexchange", "id": 1227, "tags": "machine-learning, data-mining, visualization" }
Frames per second capabilities cameras currently installed in Modern Telescopes?
Question: What are the present real-time capabilities of various modern telescopes as video cameras? How do these compare with the capabilities of various space telescopes? For example, if it was known that there would soon be a Saturn-comet impact (akin to the Shoemaker-Levy impact into Jupiter), what would the "frames per second" of the captured imagery be, regardless of imaging resolution? Video of the Shoemaker-Levy impact: https://youtu.be/p7RP2SW_gSw?t=92. Is this video presented in real time? Animation of the Shoemaker-Levy impact: https://en.wikipedia.org/wiki/Comet_Shoemaker%E2%80%93Levy_9#/media/File:Max_Planck_Institute_Shoemaker%E2%80%93Levy_9.gif Live Streaming Jupiter (from Earth): https://www.youtube.com/watch?v=ILh4lWHi_ag Shoemaker-Levy image: https://en.wikipedia.org/wiki/Comet_Shoemaker%E2%80%93Levy_9 ISS Live Stream [for comparison]: https://www.youtube.com/watch?v=EEIk7gwjgIM Answer: As several people have said, the capabilities of the sensor are probably less important than the problem of getting enough light to have anything to record in a short exposure. We find that a magnitude zero star (one of the brightest) corresponds to a visible light flux of about $3.6\times 10^{-20} erg/(s·cm^2·Hz)$. Visible light corresponds to a bandwidth of about $4\times 10^{14} Hz$ and a large ground based telescope might have a collecting area of about $50 m^2$ (or $5\times 10^5 cm^2$). So it intercepts about $6 erg/s$ of visible light from the star. Assume this is spread across a 2000x2000 pixel sensor; per pixel, after switching to SI units, this is $1.5\times 10^{-13} J/s$. This is actually a few $10^5$ photons per second per pixel, so for a very bright source like this, quite a rapid frame rate would be possible, probably thousands of frames per second, and, rather to my surprise, the readout electronics might be the main problem, rather than light intensity. 
I've made lots of assumptions here, about sensor sensitivity, number of pixels, and many other things, but it seems like, at least from Earth, with an 8m class telescope you could image a magnitude zero source like Saturn at a pretty high frame rate.
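The arithmetic in this answer can be checked with a short script. The flux, bandwidth, collecting area, and pixel count are the values assumed above; the ~550 nm photon energy is my added assumption, and the script skips the answer's rounding of the intercepted power down to 6 erg/s, so the per-pixel figure comes out slightly higher:

```python
# Back-of-envelope reproduction of the photon-rate estimate above.
flux_density = 3.6e-20 * 1e-7 * 1e4   # erg/(s cm^2 Hz) -> J/(s m^2 Hz)
bandwidth = 4e14                      # Hz, rough visible band
area = 50.0                           # m^2 collecting area
n_pixels = 2000 * 2000

power_per_pixel = flux_density * bandwidth * area / n_pixels  # W per pixel
photon_energy = 6.626e-34 * 3e8 / 550e-9                      # J at ~550 nm
photons_per_sec = power_per_pixel / photon_energy

print(f"{power_per_pixel:.1e} W/pixel")    # ~1.8e-13 W
print(f"{photons_per_sec:.1e} photons/s")  # a few 1e5, as the answer states
```

At a few 10^5 photons per pixel per second, even a millisecond exposure collects hundreds of photons per pixel, which is why kilohertz frame rates are plausible for such a bright source.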
{ "domain": "astronomy.stackexchange", "id": 4461, "tags": "telescope, photography, space-telescope" }
Solving Subproblem in Logic (first-order, propositional, pddl)
Question: One sentence question Is there any algorithm able to prove (solve) a logic problem (first-order, propositional, PDDL) by finite induction? Background I am researching hierarchical planning solvers and I found the following problem. Consider the problem of climbing a stairway. An agent is able to move up or down. For example, suppose the following "move up" action: moveup(position): pre-condition: on(position) post-condition: ~on(position) ^ on(next(position)) Now, let a stairway be defined as: stairway: next(first_step) = second_step next(second_step) = third_step ... next(nminusoneth_step) = nth_step and the agent's starting point is on the first step: on(first_step) Now, I want to create a plan to bring the agent from the first step to the last one. Of course, I can solve that in many different ways (brute force for example), but I know how to climb a one-step stairway (just move up), so I am able to infer that all I need to climb an n-step stairway is just moving up (it is not necessary to decide each action individually). Is there already any algorithm for that (recognize a subproblem structure and infer actions by induction)? Answer: In general, using mathematical induction to prove theorems automatically is quite difficult. First, note that this isn't really a problem of first-order vs second-order logic, since induction can easily be expressed as a schema in FOL, and generally be reduced to a finite set of axioms. It's more useful though to keep it as a schema, and add induction as a logical rule rather than a list of axioms. This has been studied quite a bit, but success of general purpose automatic theorem provers is somewhat limited in the presence of induction, for the following reasons: (Of course) the first order (quantifier free) theory with induction is undecidable. 
A good gauge of how undecidable a theory is comes from measuring how expressive the resulting theory is, and the theory with induction is essentially capable of expressing a large part of elementary mathematics quite easily. This is a pretty bad sign. Induction is always applicable: any goal which contains some occurrence of an inductive data type (like a list or a natural number) may be proven (indeed, may need to be proven) by induction. Clearly there needs to be a clever heuristic to decide when to apply induction. The general rule is: do everything but induction, and apply induction as an absolute last resort! Even with this heuristic, you need to know when to give up, which is again undecidable, but necessary if any kind of backtracking is to be performed. Sometimes the goal needs to be generalized. This is a distinctive phenomenon of induction, that sometimes the induction hypothesis is too weak to prove the conclusion, and as a result the whole goal needs to be strengthened, which is rather counter-intuitive (failure to prove a goal results in trying to prove a stronger one!). Finding and proving these strengthenings automatically is the subject of exciting current research in computer science (in the field of loop invariant synthesis). Given all these difficulties, it's clear there's no general purpose algorithm for proving theorems by induction in full generality. However the problem itself is being actively studied, and I'll just leave you with this link to a workshop specifically addressing this problem. As a footnote, your problem doesn't seem to be solvable by much other than standard "brute force" (or resolution), since the algorithm needs to verify that the last step is reachable by explicitly going through all the steps. Unless you have further knowledge (e.g. each step corresponds to a natural number), you need to verify that each step can be taken, and that there are no "breaks".
{ "domain": "cs.stackexchange", "id": 6345, "tags": "logic, induction" }
How to interpret intervals on a FT-IR spectrogram of soil samples?
Question: I am studying to analyze the IR spectra of soil samples. And I'm interested in the question - how to interpret the intervals in (1750..1820) and (3020 ... 2820) of the two samples on the graph? The attached file - 1200..600 spectra of soil samples taken on a Nicolet 6700 FT-IR Thermo Scientific in DRIFT mode and particular clippings of the (1750..1820) and (3020 ... 2820) spectra. UPDATE Scan method – FT-IR DRIFT 1 200 cm-1 .. 6 000 cm-1 Red line – paleosol untreated (only dry the samples and sieve it through a 2 mm mesh) Blue line – weathering sandy soil untreated (only dry the samples and sieve it through a 2 mm mesh) Answer: You are almost certainly looking at bands corresponding to functional groups in the humic acid fraction of your sample. The band at 1750 - 1820 cm$^{-1}$ is likely the $\ce{C=O}$ stretching mode in $\ce{COOH}$ groups in humic acid, but this band is also used to differentiate the fulvic fraction from the humic fraction (see quoted text below). The band at 2820 - 3020 cm$^{-1}$ is likely aliphatic $\ce{C-H}$ stretching. The composition of humic and fulvic acid fractions in soils varies quite a bit, and how the samples were prepared can significantly alter the composition of the fractions as well. Keep that in mind when interpreting your results. Your bands appear to be shifted from those in the reference I chose to cite, but that is to be expected depending on the composition of your soil and the organic matter contained therein (and can also indicate greater or lesser degrees of $\ce{H}$-bonding, resulting in red/blue shifts). The soil organic fractions referenced in the study below are sandy loams from Ibri, Oman, in an agricultural region. Your samples will differ in composition based on geography, geology, climate, and other factors. Reference: Helal, Murad, and Helal. Characterization of different humic materials by various analytical techniques. 
Arabian Journal of Chemistry 4(1), 51–54 (2011) (link to free PDF from ScienceDirect): The IR spectra of the three humic fractions are shown in Fig. 2. They have a diversity of bands more or less typical to those distinguishing the humic materials (Stevenson, 1994 and Aiken et al., 1985). Major absorption bands are in the regions of 3400–3300 cm$^{-1}$ ($\ce{H}$-bonded $\ce{OH}$ groups), 2940–2900 cm$^{−1}$ (aliphatic $\ce{C–H}$ stretching), 1750–1720 cm$^{−1}$ ($\ce{C=O}$ stretching of $\ce{COOH}$), 1620 cm$^{−1}$ (aromatic $\ce{C=C}$, $\ce{COO−}$, $\ce{H}$-bonded $\ce{C=O}$), 1280–1230 cm$^{−1}$ ($\ce{C–O}$ stretching and $\ce{OH}$ deformation of $\ce{COOH}$) and 1040 cm$^{−1}$ ($\ce{C–O}$ stretching of polysaccharide or $\ce{Si–O}$ of silicate impurities). The spectra evidently show predominance of $\ce{OH}$, $\ce{COOH}$ and $\ce{COO−}$ groups which are the most characteristic features of soil humic materials. It is clear from the spectra that fulvic acid is characterized by stronger absorption near 1720 cm$^{−1}$ which implies the high carboxylate capacity, and that of humic acid is stronger than that of humin. This is exactly the case in the results of potentiometric titration (Helal, 2007). The spectrum of fulvic acid is also characterized by the absorption at 1400–1390 cm$^{−1}$ ($\ce{OH}$ deformation and $\ce{C–O}$ stretching of phenolic $\ce{OH}$). The results of IR spectra indicate that fulvic acid is more aliphatic, and humic acid and humin are more aromatic. It is obvious that the IR results are in good agreement with the other characterization findings. Figure 2, from the above-referenced article. FA is fulvic acid and HA is humic acid.
{ "domain": "chemistry.stackexchange", "id": 6825, "tags": "analytical-chemistry, ir-spectroscopy" }
Properly disposing of a WCF connection using IDisposable
Question: This is a follow up question to this question: https://stackoverflow.com/questions/4573526/what-could-be-causing-a-cannot-access-a-disposed-object-error-in-wcf I have created a WCF service and it is up and running. When I used it, I was closing the connection after each time I got a response in a finally. Trying to use the client for a second time, I got the exception of the linked question - can not access a disposed object. I tried implementing the IDisposable pattern without closing the connection every time. 1) First of all - Did I implement the dispose pattern correctly? the code (it's short, don't worry): I got a singleton class which is responsible for the service communication in which it creates the client class: public class WebService : IDisposable { // Flag: Has Dispose already been called? bool disposed = false; private MyWebServiceContractClient client; private static readonly Lazy<WebService> webServiceHandler = new Lazy<WebService>(() => new WebService()); private WebService() { client = new MyWebServiceContractClient(); } public static WebService Instance { get { return webServiceHandler.Value; } } public double GetAnswer() { try { if (!(client.State == CommunicationState.Opened) && !(client.State == CommunicationState.Opening)) { client.Open(); } //do some work return answer; } catch (Exception e) { Console.WriteLine(e.Message); return -1; } finally { // here i would normally call client.Close(); but now im not } } public void Dispose() { // Dispose of unmanaged resources. Dispose(true); // Suppress finalization. GC.SuppressFinalize(this); } protected virtual void Dispose(bool disposing) { if (disposed) return; if (disposing) { client.Close(); } disposed = true; } } 2) If I'm only closing the connection in the dispose method, doesn't it mean I'm always keeping the connection alive? Isn't it a waste of resources and even a security risk to just keep it open? 
Answer: The advised way to call a service endpoint, when the connection is not an open stream, is to have the lifecycle of your client in line with the operation you are calling. So rather than storing an instance of the client.. private WebService() { client = new MyWebServiceContractClient(); } You should create a client on demand. To work around the known issue on Dispose, a solid solution is available. public double GetAnswer() { try { using (var client = new MyWebServiceContractClient()) { //do some work with client return answer; } // <- there is a known issue here: https://stackoverflow.com/questions/573872/what-is-the-best-workaround-for-the-wcf-client-using-block-issue } catch (Exception e) { Console.WriteLine(e.Message); return -1; } }
{ "domain": "codereview.stackexchange", "id": 34642, "tags": "c#, web-services, wcf" }
Difference lists in functional programming
Question: The question What's new in purely functional data structures since Okasaki?, and jbapple's epic answer, mentioned using difference lists in functional programming (as opposed to logic programming), which is something I've recently been interested in. This led me to find the difference list implementation for Haskell. I have two questions (forgive/correct me if I should make them two different questions on the StackExchange). The simple question is, is anyone aware of academic consideration of difference lists in functional programming and/or implementations besides the one in the Haskell library? jbapple's answer didn't give a citation for difference lists (difference lists in logic programming exist in the lore and in a couple of sources which I have Around Here Somewhere (TM)). Before finding the Haskell implementation I wasn't aware that the idea had leaped from logic to functional programming. Granted, the Haskell difference lists are something of a natural use of higher-order functions and work quite differently from the ones in logic programming, but the interface is certainly similar. The more interesting (and far fuzzier-headed) thing I wanted to ask about is whether the claimed asymptotic upper bound for the aforementioned Haskell difference list library seems correct/plausible. My confusion may be because I am missing something about obvious about complexity reasoning with laziness, but the claimed bounds only make sense to me if substitution over a large data structure (or closure formation, or variable lookup, or something) always takes constant time. Or is the "catch" simply that there's no bound on the running time for "head" and "tail" precisely because those operations may have to plow through an arbitrary pile of deferred computations/substitutions? 
Answer: Or is the "catch" simply that there's no bound on the running time for "head" and "tail" precisely because those operations may have to plow through an arbitrary pile of deferred computations/substitutions? I think that's more or less correct. DLs only really have fast build operations, though, so the plowing is $\Theta(m)$, where $m$ is the number of operations used to build the DL. The following defunctionalized version of some of the essential operations requires laziness for $O(1)$ fromList, but otherwise should be a straightforward way of understanding the complexity bounds claimed in the original. {-# LANGUAGE NoMonomorphismRestriction #-} data DL a = Id | Cons a | Compose (DL a) (DL a) fromList [] = Id fromList (x:xs) = Compose (Cons x) (fromList xs) toList x = help x [] where help Id r = r help (Cons a) r = a:r help (Compose f g) r = help f $ help g r empty = Id singleton = Cons cons x = append (singleton x) append = Compose snoc xs x = append xs (singleton x) The $\Theta(n)$ operations head and tail can be implemented the same way as they are in the [a] -> [a] version, using toList.
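The `[a] -> [a]` encoding discussed here can also be sketched outside Haskell. Below is an illustrative Python version (function names are my own): a difference list is a function that prepends its elements to whatever tail it is given, so append is O(1) function composition, and the "plowing through deferred computations" the question asks about is exactly the cost paid in `to_list`:

```python
# Function-based difference lists: a DL is a function rest -> xs ++ rest.
def from_list(xs):
    return lambda rest: list(xs) + rest

def append(f, g):
    return lambda rest: f(g(rest))   # composition: O(1) to build

def to_list(f):
    return f([])                     # the accumulated work is done here

dl = from_list([1, 2])
for chunk in ([3], [4, 5]):
    dl = append(dl, from_list(chunk))
print(to_list(dl))  # [1, 2, 3, 4, 5]
```

Note that `to_list` is Θ(m) in the number of build operations m, which matches the observation above that head/tail on a DL have no per-call bound: they must first force the whole composition.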
{ "domain": "cstheory.stackexchange", "id": 302, "tags": "reference-request, ds.data-structures, pl.programming-languages, functional-programming, logic-programming" }
Find the missing link
Question: Three million men each write down their own name and the name of the person to their left, and the person on the far left throws out his paper. Reconstruct the list. A List takes a while. Don't try 3 million unless you have some time. Up front we do NOT know the remaining man on the left is 0. That is what we are looking for. The remaining man here is 0, as that made the test data easy to create. public static void TAOCPthreeMillionMenTest() { int numberOfMen = 30000; // test at less than 3 million or get killed List<PairInt> pairs = new List<PairInt>(numberOfMen); for (int i = 0; i < numberOfMen; i++) { pairs.Add(new PairInt(i, i + 1)); } // shuffle left Random rand = new Random(); int temp; int j; // shuffle up for (int i = numberOfMen - 1; i > 0; i--) { j = rand.Next(i + 1); if (i != j) { temp = pairs[i].Rght; pairs[i].Rght = pairs[j].Rght; pairs[j].Rght = temp; temp = pairs[i].Left; pairs[i].Left = pairs[j].Left; pairs[j].Left = temp; } } foreach (PairInt pair in pairs) { //Debug.WriteLine(pair.Left + " " + pair.Rght); if (pair.Left == pair.Rght) Debug.WriteLine("pair.Left == pair.Rght"); } Debug.WriteLine(""); Debug.WriteLine(TAOCPthreeMillionManLeft(pairs)); Debug.WriteLine(""); } public static int TAOCPthreeMillionManLeft(List<PairInt> pairs) { List<PairInt> lineEmUp = new List<PairInt>(); int lineEmUpCountMax = 0; int lineEmUpCountMaxCount = 0; PairInt matchRght; PairInt matchLeft; int count = 0; foreach (PairInt pair in pairs) { count++; //Debug.WriteLine("new " + pair.Left + " " + pair.Rght); //foreach (PairInt exist in lineEmUp) //{ // Debug.WriteLine("exist " + exist.Left + " " + exist.Rght); //} //Debug.WriteLine(""); matchRght = lineEmUp.FirstOrDefault(x => x.Rght == pair.Left); matchLeft = lineEmUp.FirstOrDefault(x => x.Left == pair.Rght); if (matchRght != null) { if (matchLeft != null) { matchRght.Rght = matchLeft.Rght; lineEmUp.Remove(matchLeft); } else { matchRght.Rght = pair.Rght; } } else if (matchLeft != null) { matchLeft.Left = pair.Left; } else { lineEmUp.Add(pair); if 
(lineEmUpCountMax < lineEmUp.Count) { lineEmUpCountMax = lineEmUp.Count; lineEmUpCountMaxCount = count; } //Debug.WriteLine("lineEmUp.Count " + lineEmUp.Count); } } // lineEmUp should have only one entry if (lineEmUp.Count != 1) { Debug.WriteLine(""); Debug.WriteLine("lineEmUp.Count != 1"); foreach (PairInt pair in lineEmUp) { Debug.WriteLine(pair.Left + " " + pair.Rght); } Debug.WriteLine(""); } return lineEmUp[0].Left; } public class PairInt { public int Left { get; set; } public int Rght { get; set; } public PairInt(int left, int rght) { Left = left; Rght = rght; } } I tried a double dictionary but could not make it work. Answer: Not sure if I get the problem, but one solution for reconstructing the original list (even 3.000.000 items) with a dictionary is: public static void ReconstructList(List<PairInt> pairs) { var leftRightDict = pairs.ToDictionary(p => p.Left, p => p.Rght); var idx = 0; while (leftRightDict.TryGetValue(idx, out idx)) Console.WriteLine(idx); } Update: The following code gets the ID of the left-most men: public static void ReconstructList(List<PairInt> pairs) { var rightLefttDict = pairs.ToDictionary(p => p.Rght, p => p.Left); // just start with any men var right = pairs[0].Rght; while (rightLefttDict.TryGetValue(right, out right)) {} Console.WriteLine(right); }
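The dictionary idea in this answer generalizes: the left-most man is the only name that appears in some pair's "left" slot but never in any "right" slot, so a set difference finds him in O(n) without pairwise scanning. A small Python sketch of both steps (variable names are illustrative, and the demo uses 30 men rather than 3 million):

```python
# O(n) reconstruction: find the left-most man by set difference, then walk
# a left->right dictionary to rebuild the line-up.
import random

n = 30  # small demo; the same idea scales to millions of pairs
pairs = [(i, i + 1) for i in range(n)]
random.shuffle(pairs)

lefts = {left for left, _ in pairs}
rights = {right for _, right in pairs}
leftmost = (lefts - rights).pop()   # appears as a left, never as a right
print(leftmost)  # 0 for this test data

# Reconstructing the full line-up is then a single dictionary walk:
nxt = dict(pairs)
order = [leftmost]
while order[-1] in nxt:
    order.append(nxt[order[-1]])
print(order == list(range(n + 1)))  # True
```

Compared to the question's list-splicing approach with `FirstOrDefault` (which is quadratic in the worst case), both the set difference and the dictionary walk are single passes.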
{ "domain": "codereview.stackexchange", "id": 24144, "tags": "c#" }
The Genetic Code
Question: This question is part of a series solving the Rosalind challenges. For the previous question in this series, see Wascally wabbits. The repository with all my up-to-date solutions so far can be found here. Problem: PROT The 20 commonly occurring amino acids are abbreviated by using 20 letters from the English alphabet (all letters except for B, J, O, U, X, and Z). Protein strings are constructed from these 20 symbols. Henceforth, the term genetic string will incorporate protein strings along with DNA strings and RNA strings. The RNA codon table dictates the details regarding the encoding of specific codons into the amino acid alphabet. Given: An RNA string \$s\$ corresponding to a strand of mRNA (of length at most 10 kbp). Return: The protein string encoded by \$s\$. Sample Dataset: AUGGCCAUGGCGCCCAGAACUGAGAUCAAUAGUACCCGUAUUAACGGGUGA Sample Output: MAMAPRTEINSTRING My solution solves the sample dataset and the actual dataset given. Dataset: AUGCGCCCUUGGUCGCUCCUUGGAUCAGAGCAUAUUCUAUCACGGCGCGUCGAAGGAAUAACCCACGACAUCUCUCUAUAUUGGAUUCCCUUUUUUUCGGUUCAGGUAGAUCAUUUGGCUACUGGACUUUCUAAGAUUUACUCCGCCAUGUUCCUAUAUGUUACAUUCUCAGCCGAAGUCGCUGUAUAUCACGUUAAGGUAGACGGUUCCUUGACUACCAGCGACGCCUGUAGGGAGAAUUCCAUCCAUCAUGCAUGUAUGGGCAGUGCGCACUUACAGCGCCAUAGGCAACGGACGGACAGACCUCCUUUCCUGUCGGACGGUAAGCCGCGAUCCAAUACAGAGCAAAGUCCCACGCCCUCCUAUAGACUCACGCCAAGAUUGUAUUCCCCGUUAACCGCUCUCUCAGGGAAGUUGUAUCUACUCGGAUCGGGAUGUCCUUGGAAAUGUAGGAAAAUGGCUCAAACUACGAUUGUAUACCGUGCGAGACGUUGGAUCCCGCUUAUCACUGAUACCAUAAUCUGUGUGGCCCCCUUACCACAACCUAACCAUGGAGUAGUAGCCCUGGCCGUCCCUUCAAGGGCAAGACCUCAUUGUCUUGUACGCUUAUCACAAGGGCCAUCUAACAAUGUGUACCGGUAUAAUUUUACGUGGUAUUGUCCAGACGGCGGUACGGCCGAUCCGUUGCCAUUUCGUCAUGGCAUAACCUCGGUCUAUCUUCCUUCCUACUCGGGAAUAGUUCGCAGUACACCAUACCUCAUCGGCACUUACGCUGUUCCAACACAAAAUUCUGAUCCCUUCGCUACCACCCGCUGGGUAUCUGUCAGGUUACUGGCCUCCACUACCGGAGAGGGCGAUACGGGGGACCGCGGAACACUUUCUACAUUUUUGGACUGCUGUAUGUCUACUUCGGCUCUUCCUCCCGCGAGAUUUAUAUCGGCAUACAGAGUAAACGCUACCCAUGGCGACGACACUCGUCUCACCGUUAAGAAGCUGUGCACACCUUAUAAAAGCCUUUGGUCAGGUAAUGUAUGUA
GACCUCUAUCGGUCUAUAAGUUGAAGGAAAGAUAUAUAGUCCUCACCGUUGGUUAUCAUUCCCUCCUAUCCCGCAUGUCCACCCUAAGGAAAACUUACUCCGUGGACCAAGCGGGACCGGCAAGGCCCUUGGUUGCCGAAAAAUCAGGAGUUUGCGAGUGCGACUAUGAUCCCAGGUUUGUAAUCGUUAUCGCGACUGUUCAUUUCGGGAAGGCUACUGGGCUGCAUUCAAAGGCGGACCGCUGCUCUCGCUGGGUCGGGUCCAGUGGACGGGUCGAGGCUGCUCAGCUCCAUCAUCAAAGAAAUAAGAAGCCCACCGGAAGAGACAGUGAUGCCGAGAAGAUGUCCGGCAAAGGCAGACUCUGGUCCCAAGUUAGACUGGGUGGUUUAGAGAUCCAGAUAGGUCGUCGUUACCAGUUCUACGAAUACUUUCUUGGUAAACUGAUUAUCCGUCAAAGGCCGACUCACGAGAGGACACAGGGUGCAAAGAAACCCUCACAGAAGGGAACGCGAUUGGCGACACGUAUGGCGCUCGCGUGGUGUGACAGGUUUGGUAGGAUCUAUAUCCCCCAGUGCAACAUAUUAGUUCACACUAUAAUGAAGGUCCGAUUGCACCAAACAGCCUGCGAUGAUAACACGUGGACUGCUGGAGAGUAUGACUUGUACGACUGCACGCGCGAUGUACCCAACAUCGUCCUGUCCCUACCGCACCAUCUUUAUGAACUGCUGGUCUCUGAUGCACUCCCCGCCCCCCACCUCCUCUUUCUGGGGGAAUCUGGUUCCGCCAGGCAUAGGACACGUUCGGUUGGACUAACUAUUGCUAACUACACGAUCAUUCGUGAUAGGUGUCGGUCCACAUGUAUAAGGUUGGAUGAACCUAACCCCUACCGACGAUUUGAUGUAUUGUGGCCACUACUAAUGACCCCCGCUCGUAACACCCAAACACGCGCAUUUCUCUGCUCCGUGGCUGGCUGGAAUUGCCAGUAUUACAGACCCCCUGACAGAUCCAGUAGUAGGACGUACUUGAUCGCACUACAUUUAGCAAUCAAAUUGCGUUCACGCUACCCCAAUCGUUCAGAUCUGGCUUAUCCCACAUCUGUAAACAGGAACACGGUGUAUAUCACCGUAGUCUCUCUACGCACAGCGAAACUAAGAUACAACAGUUACACACCCAGCAAACUGCGGCCCGAUCAAGGCAAUGACCCCAGCUACCGAACUUCGGAAAGAGGGAAUUUCGUCCGGAACUUGCCAUCAGUGAAACCGUACCGAGACUUCAUGAAAACGAUCAGCAUUUCCUUUACAGGAUUCCGGACCAAAUUUGAUCGUCAAAUUGGGAAUUCCAUAGGCCGAGGAUUCACGGGAGCAAGGCCGACCUCAUUGAAGCGUCAUAGUCGCUUUUCGCUCACCGUACAUUCUAGCAAGCGCUAUCUCUCCCGCCUCAACGCCUACGUCUCUUUUACAAUAAAACAUCGGAUAACGAGAUUCGACGUGGCUACGCGCGAAGUUAAGGCUUCCGCCGUGGUACCUAAUGCGGAAAUAGUCCGGAAAAGUACGAAGGUGUGCUGGUUUAUGUGCAUCAUCAGACUCCAAACUGUCCGAGCUACCAAACCGACCAUUCAGAAGCAGUUGUUGAGAUUAAGUGGCCCGUUUCAAUGCGGGGAUUCCACCAAUUACGUACAACCUGGUUACUUCCACAGUUCAAACGCGCCCCGCCCGGCGUGUGUCAGUGUGUGUAUCAGCCCGGGGUUAUGGGACCUGUUGGUAAAAACCCGGAAUGCUUUCUCCCGUGUGCACGGGGGUACUAUCCUUACUUUAGUUCAGCAUGACAUUCAUAAAGUAGAAUUAUCGUCAGCAUGCACUCGCGAGCGGGCUACCAACCUGGCAAUGACUGAAAGCGUAACGUCAUACUCUCAGUGCGAGGCCUCGACUCGCCAUACCGAAAUACAAGCUGUUAGCUCAAUUGUGUAUCUCCACUUGACUGCGGCCCGCAGGGAGAAACAC
ACAGAGAAGAGGGCCGACGCGAAGCAUCAUGUGUCUACUUGGCGCGAGGGUAAAACCGAAAACGUCAUUGGAAGGCUCAGAUCGUCACAUACAUUAACCCUUAGGUUACAUCCAUCCUCGUUGGACAACUGGCCGUUCAUUCUUGGGGAAUGCCAGAGAGGAACGGAUAUCGAGGAACGCAUCCCGGCACAUGCGGAAUGUACAGACAAGGCUGUAGGCGUAGCCUUUCAGUCGACGCUAUGGGCAGAUUCGGCGAAUCCGCGAGGUGGAUCUCGCUUGAGAAGAGGGAUUAGGGGCCCCAACGCGAUGAAUAUUGAAUGCGGAUAUUAUCUGGCGAGACAGCUUCUUGACCGCUCUUGUAGUCGCAAGAUAGGCGAGACUCUAAGACAAACUAGUUCCCGCACGCCAUUGCCAUGCAAGCGAGGCCGCGUCCCAAAACCCCUGGAACCUAAAGAGUCGAACAGGUCAGGAGCAAGUAGCGUAUGGAUACCAGUCGGAGUUAGGCUGGGUCCCUCCGCUGCGAAGACUCCGCCCUGGCGACAUGGUCGCCCGCGACAACUUCUAAUCUCUCCUCAAGUUAUUCCGUUAAGACACCCGAGCAAACGAGCUAGUCAAAGGGAUCAGUGCGAGCUCCCAUCAUGUCCUGAGUACAAGACCCCAGUGUGCCGACUUGCUUUGGGUAGCUCCAGAAUGGUUCACGAAUUAGCCCUUAAGAUGCUCUCCCCUGUUCCGGAGUUCGUGUGGAGGGUCGGAGGCGGGAAAGUCUAUUUAACGGCGGACCCACCUAGGGUAAAUCUGACGCAUAUGUCUGAACACGCCCUGGUGGUACCAGGAGUUUCCCUAUGGGCCCUUUUUCUUUUAAGACCUUUAUUUUUCCAUCUCCAACCUCGAUUAUCGACAACAUACCGUUGGCGCAGACACUUAUGCUUACCCGUUCAACUGCAUUCGUACAGGCUGGGUGAUAUGCAAUUAGGAGCAUCUCGACGUUGGGUAGGCCCCCGAAAUAUAUGGAGACAGGGUGUGUAUGCGUGGGAGAUAUAUGAGAUUCGAACUGUACCGGCUCCAAGGGUGUCACUGUUCCGCCGUUGGAAGGAAAACACUUACACCCUCUUUGGAUCAGGGGAAAUUACAGCGAAUGUUAAGACGGCUAUGUAUCGGAUAACGUCACAUCCGUUUAGACUGUAUGCGGGCGCCCGAGAAUUCAGUCCAUUCCGACUAAACGAGAAAAAGUUUGCCCCCGCGGGGAUUACGUACAAAACCGGAUGCGAUUAUAGCCGUUCUGGAGAACUCUGUGAGGGGCGGGGAAGGAAAAAUAGUUUUAUGUACCAUUGGGCGGCCCUUCCUCUCCAUGCACAUAAAACAAACAGCCUCAUUGAUUUCUACACUCCGUGCAACCCAAAUGCGGCUGCUGACAUGCUAGUGCGUAGUACAGAAGAUGCCCGAGCUCGAAUACAUUGCAGAUAUUGGGUUAACAAUUCGUUUUGCAAAUAUAUAUGUUGGCACUCCAGGUUCUUGAUAGAACCGAUUCAGAAGAAAUGGUGUACACCCAUUGAGAGGCGUCGCCCCGUAAUUAACGGGGAUGUCUUAAACGGGUCAGAGGUAACUACUAAGACGCGGUGCUGUUUCAGAUGGGCAAGUCAUACGGGCCGUUCUUACGGAAGAGAUCGUGCUAAUACAAACCUGCUUGUUAUGGGCGACGCAUCGCCCGAGGGGGGCGCGAAUCGGAGACUUGCAAGUACGACCGGUGGAUUCGUGCAAUUUAAGGUAUACAUUUCACGCGGAGACCCCGGGAAGGAGCUCCCCUACAUAACACGAAUAUCACCCGGCCGAAUUAGGGCUCGACGGUCCUUCCGCCUAAUGUGUGCAGUGAACGAGUUGGUGCCUGAAGAUGGUUUCACUCACAGGCGCCAAAGUACGACUCCUCCUUCCCGAUCAGUUUGCGACGGGCCUGUCCGCUUUAGAAUAAAGACCCACUUCCAAACUUCGACCGGCUGGGGGAA
ACAUUGGAGCAGCUUCCAAUGUGAUCAAAACUGUAGCAGAUCUCUAGCAUACUUCAAACAGGAAGUUACUGUAUGUGGUGCACGACCUGGCCAAGGUAGUUUCCUCCCCCUAGGACUGGUGAACGAUGGCGGGUGGAUUGUCAUUCAUAGUGAGAGAUUAGCCGUGCCUGCUUAUGACGGGAUCGGCGAUCUAGUAAGCGAUUCCAAAUCGAACAGCAUGCGCCGAGGGGACACUUACUUGGAGGUACUUAUCCGAGCGAAAAGGAGGGAGCCCAUAUCCAAAUGCGCUAGUAGAGGAGCACUGUCGAGUCAUGACCGCGGCCAUUCACUCGUAAGUACAGGGACACACUUCCAUAUCCUCCGGGGACUGAUGGGUAUUCGUACGAUUCGGCUGAGCGGGUCGCGGGACCCUACGGUCCGAACGUCUCGCGAGGGGUGUCAGGCUCCUCAUUUCAUGGUGCAGUCUGUCUGGAAGCCGACUACACUACGGGGUAGUCCUGCCCUUGAUAAUGCUAAUGAGAGUAACUCACGCCCCGCCCAUAACAAAGGCCGGGGGCCCUCUCAAUCAAACAGGCGGACGGGGAAUGUCAGGCAGGUUGUCGUUGGCAGAGUUACGCUCUCGCAGGAGAUUAAUCCUUUUGUAAAGCAUUUGGAACUAGUCCCCGGCUAUUAUUUAGCUGAGUAUCCAAUGCCUAGAAGCCUUGCGUCCCGUUCUAACCUGCGCGUAAUUCAUACAUCGCAUGAGAGAGCAAGGCAAACAAUCCAUUCGCCUGGCAAGAGAAACCGAGGAGCAAGUCACCGAACGCCCGCCGGGAAGCACCGCGAGUACCCACGACAAAACAGCUGCUUGGACUAUUAUGAACCCUCCAUACGUAGGAAGGAGGCCUAUGGGUGCGUCAAUAACGCACUCCCUGAUUGUCCUGACAAGGACGAUCGCGAAUGGACGCGCUCGCAAUCCAUGAUUGAAAUGUCCAGACCAACCGAGUCCCUGCUCAGUGCCUCCUGGCAUCGGCCAUUGGUUCUUGGAAGCCUCAACUACGGAUUCAUCACUGACCCCGUGGCGCUCACUGGUCAAAGAAAACUAGGAUGCCGUGGAAUGAUGAACACGUUAAUGUUAAUAUGGAACCAUCAUUUCGGCCCCUAUGGUUCAACCCCAAGAUUAGUUUUCGUUUGUGAAGCCAAGCGGCACCGGGGAUCUUGGGCAAACUACACUGAAGCAAAACUCCCUUCCUAUUAUGUAAUAACACUGGCACAGGGUCUUGGCCCGCGCGCUGGGCUCCACCACAACGUGUACUGUCUUCACCCUCAACGAGUUUUCCACUUCUGUCCCUUCGUAUCAGUUCACUUGCAAUUCCUAUCCCAUGUUUCGACUAGCCCAAGCGCUAAGUGUGCCCGCCUAGAUCCAGUCCAUCUUCCGGCUGAGGUGGGGAUCGUCAAACCUGCGGGGCGGAUUAAGAAGUCAUUUGUUGGUGGCGCGGGGCCUCUCAGAAUGUUAAAUAGGCAACGUGUAUGUUCGGUGGGGCCUGGAAGUGGACCGCCGUCCGUGGCGGAUUGUGCCAAAUUAACGGCUGAAGUGGAGUGGACUUCCAUCCACCCAGCUGCAGCAGAUCGGGGGUUAUCCCAAAGCACCAUCCAUGCCAGCAUGAUGCUGACUCACCAAAUAAGCUUCACUGAAUGCGACAAGUUCGCGCAAAUGGCUCAGAGCAGCGUGUCCCACACCGUGGGACAAAGGGUAUACUCGACUUCUCCACCUUGCGCGAAACCUGGCCCCGCUGGAUACAGACUGAUCAGUUCUAUCGAAUGUACCGUGCACAAAUGUAAACGUCGACAUAUGGCCGGCGCGCUACUGCGGCCCCGGGAACCUGGCCUCCUACCCGAUGACAAUGUAUCACCCGUCCCUCUCCGGUACGGCGAUAAUAUAUUGGCUCAUCGGGUGGCAUCUUACUCCCGGCGUACUUCCGACCCGUCGCAUCAAGUCCGAUCUGACCAAUUCUGGGACA
UUAAUGUACAACCACCUAUACCCUUCUUCGCACCUCUGCUGAAUUUGGCUUCACGAACCUAUGGGCGAGGAGCGCUGCUGUCCCCGCCGGAACCACAGAUUCACGCUGCCACUAUGGCUUCAGCUAGAUGUGAGUCAAAUAAUAGAUCAGUACUCGUUAUGCGGCAUGAUCAUGAAGGUAAACUGCCCUUGCACCGAUCCAAGCUAAGCGGGCUAGCUGUAAUCCUUAGCCGGGGAUCUUCCGAUGUAUGUGCCCCCUCGGACAUGAAACACAUCCACAGUGGAGAUAGACAAAUGACUGAGGAGCUUAGAUUUCUGGAGAACAAAAACUUGAUGGGCUUAAGAUAUGGUCUAUACUCAUUAACAUCGAGAUGUGCUCGAAACGUCGAUAGACUCAUUCCUUUUAUUCGCCUACAGCAAGUGUUCGGGGAAUCAAAGUUGGAGUCACUUGCCCCAGGGGUCAAGCCGCUCCCGAUUUUCGUCGAGCGUCGUAGGAUGUGGCCGCCGGUUAUAUGGAUAAGUAUACGUUGCGGACACCAGACCAGACCCUAUAUACGAGACCGUUCUGCAGCUAAAUGUCGGGGGGGCCAGUCGCGGCCCGCCCUCUCUAAACAACUUAUUUACGUUCGGCGGGUAAGGCAGUGGCGGUUACAUCCAGGCAGACAGAUGGUCCUUGGUCAUACGUUCGCGAGCUCUUUCCGUCAGGAACUCUCCGCACAGCAUACUGCAACUCGCCGGAUUACAAGACCCCUUAGUGCUCCCUUUUAUGUACGUCCCCGGCCCUGGACUGGCGGUACUGUCGAGCUUUGCAUUUUUAGAGGCGCCUCAUGCAGGACUUCAGAAUUCGGCAAGGGAGCUACCCCCAAAGAGCUCCUCGUAAUGAACGGGUUCCUAGUGGUGUAUUACCCAGCCGGACAAAGGCCCGGUCUAACGUCUUUUGUCCGUUCGCAUUCCAUACGUCCCGUGUACGCCGAGCUACUCGGUAGUGAAACUAGGCGAGACUUGCGGAGGUCUUUUUGGUCAGUAAACGUAGUACUUGGUGUAUAUCGUCACUUACGCCACGGCACAAGGCAAAGGAGUGCGUCAUCCGGAUUGAAGGGCACUCUCAAGGUUGAUUCGCCAAUGGGUGUUGUUCGUCGCAAACCGAACCCGAUCCACUUUUUACCCUGGAAAGGGGUGUCAAGGGCGGACUUCGUGGCUCUAUCCGUCCAUGGAGUAUACUCGUCCUCAGUAAGUAGUGUAGGAUGGUUCACCGGAUGGAAAGGUAACGUUAAAAGACCGCUUCGUUGUUUAAUUGCGCAAGACUUCAAGUGCUCGAGCUUAGGUCUUCCCAUUAUGUUUAGGGAUGUAUUCUCACAAAUGCCUUAUUGUAGAUUGAGACAAGCUCCAUACGUAGUAGCACCCUUUGACUCGGGCGUUCUAUGGAUAGCUCGCAAGACGUGGAUCGCAUUCAGUCACUUACGAAAAUCCAGAUUCUGCCCUGCCUGGCUGUCAACAGACAACACCUUCGAUCAAUACGGAUCUAUCUUGGUGAGCGAAUUUUCUCCCACCCCGCGGGGAAUCGCACUGGUGGUCUGUGUGCCCCGAUCCAUUGUCUGCCGGAGCCACGGGAAAAAUUUUAAAUUCUGUAUCCUACUCCCCCGUGUGGCUGUAGCCCAGCUGAGGUCAGUAUGUCACCUUGUCGCUAUUAGGUGUUUCACCAUCCUAAUUGGCAAACUGUUUCAACCCUGCCAGAUAAGGUCAGAGCAGCCCCUUCGCUGGUAUUUAUCCCACAGCCCCCCUUCGAAGCGUUCCGCUAAGGCAAUACCAGCUCCGUACAGAGCGCCGGGUACCUUCCUCAUCUACUCCUGGAUCUACUUCUUACUUUGUAGGUCCACGGAUCAAGGCUGUUACUUUUGCAUAGUUCAUCGUGCCAUUACGCAGAGGACUGGAUGUCCCAGAAUACUUCUUGGAUUCACACUUGUCUCAAAUGAGCUUACGGUGGCGCACGGGAUUCAAGCU
CCCGUGUUAGAGCCUCGGGCGCUGCCGUACAAUAGGGCAACUCCCAGAACCGAUCACGGAGUUUCUCCGGUGCGUAGACGUAGGUGCAGUAAUAUUCCUAUAAAUGUUGGAGAGUACCGCUGGUUGUUUACUUUUUCGGUAUGCAUACCUACCGACAGUCGCAAGGCAGUACAUGCCACGCAAGUUAGUUGUUUAAUGGUUUUGCCGCGCACAGCUCGUGCCUAUCAUAGGGUAAGGUACACCAGCUUCGGGCUUGCUUCUGAGCAGACCCAAACUAUUUUUCUGAUCCACAUAUCAUCAGACAACAAUUUUGCUCGAAAAGUAUGCAUACCCCCAUUAGUCCUUCUCUGA Output: MRPWSLLGSEHILSRRVEGITHDISLYWIPFFSVQVDHLATGLSKIYSAMFLYVTFSAEVAVYHVKVDGSLTTSDACRENSIHHACMGSAHLQRHRQRTDRPPFLSDGKPRSNTEQSPTPSYRLTPRLYSPLTALSGKLYLLGSGCPWKCRKMAQTTIVYRARRWIPLITDTIICVAPLPQPNHGVVALAVPSRARPHCLVRLSQGPSNNVYRYNFTWYCPDGGTADPLPFRHGITSVYLPSYSGIVRSTPYLIGTYAVPTQNSDPFATTRWVSVRLLASTTGEGDTGDRGTLSTFLDCCMSTSALPPARFISAYRVNATHGDDTRLTVKKLCTPYKSLWSGNVCRPLSVYKLKERYIVLTVGYHSLLSRMSTLRKTYSVDQAGPARPLVAEKSGVCECDYDPRFVIVIATVHFGKATGLHSKADRCSRWVGSSGRVEAAQLHHQRNKKPTGRDSDAEKMSGKGRLWSQVRLGGLEIQIGRRYQFYEYFLGKLIIRQRPTHERTQGAKKPSQKGTRLATRMALAWCDRFGRIYIPQCNILVHTIMKVRLHQTACDDNTWTAGEYDLYDCTRDVPNIVLSLPHHLYELLVSDALPAPHLLFLGESGSARHRTRSVGLTIANYTIIRDRCRSTCIRLDEPNPYRRFDVLWPLLMTPARNTQTRAFLCSVAGWNCQYYRPPDRSSSRTYLIALHLAIKLRSRYPNRSDLAYPTSVNRNTVYITVVSLRTAKLRYNSYTPSKLRPDQGNDPSYRTSERGNFVRNLPSVKPYRDFMKTISISFTGFRTKFDRQIGNSIGRGFTGARPTSLKRHSRFSLTVHSSKRYLSRLNAYVSFTIKHRITRFDVATREVKASAVVPNAEIVRKSTKVCWFMCIIRLQTVRATKPTIQKQLLRLSGPFQCGDSTNYVQPGYFHSSNAPRPACVSVCISPGLWDLLVKTRNAFSRVHGGTILTLVQHDIHKVELSSACTRERATNLAMTESVTSYSQCEASTRHTEIQAVSSIVYLHLTAARREKHTEKRADAKHHVSTWREGKTENVIGRLRSSHTLTLRLHPSSLDNWPFILGECQRGTDIEERIPAHAECTDKAVGVAFQSTLWADSANPRGGSRLRRGIRGPNAMNIECGYYLARQLLDRSCSRKIGETLRQTSSRTPLPCKRGRVPKPLEPKESNRSGASSVWIPVGVRLGPSAAKTPPWRHGRPRQLLISPQVIPLRHPSKRASQRDQCELPSCPEYKTPVCRLALGSSRMVHELALKMLSPVPEFVWRVGGGKVYLTADPPRVNLTHMSEHALVVPGVSLWALFLLRPLFFHLQPRLSTTYRWRRHLCLPVQLHSYRLGDMQLGASRRWVGPRNIWRQGVYAWEIYEIRTVPAPRVSLFRRWKENTYTLFGSGEITANVKTAMYRITSHPFRLYAGAREFSPFRLNEKKFAPAGITYKTGCDYSRSGELCEGRGRKNSFMYHWAALPLHAHKTNSLIDFYTPCNPNAAADMLVRSTEDARARIHCRYWVNNSFCKYICWHSRFLIEPIQKKWCTPIERRRPVINGDVLNGSEVTTKTRCCFRWASHTGRSYGRDRANTNLLVMGDASPEGGANRRLASTTGGFVQFKVYISRGDPGKELPYITRISPGRIRARRSFRLMCAVNELVP
EDGFTHRRQSTTPPSRSVCDGPVRFRIKTHFQTSTGWGKHWSSFQCDQNCSRSLAYFKQEVTVCGARPGQGSFLPLGLVNDGGWIVIHSERLAVPAYDGIGDLVSDSKSNSMRRGDTYLEVLIRAKRREPISKCASRGALSSHDRGHSLVSTGTHFHILRGLMGIRTIRLSGSRDPTVRTSREGCQAPHFMVQSVWKPTTLRGSPALDNANESNSRPAHNKGRGPSQSNRRTGNVRQVVVGRVTLSQEINPFVKHLELVPGYYLAEYPMPRSLASRSNLRVIHTSHERARQTIHSPGKRNRGASHRTPAGKHREYPRQNSCLDYYEPSIRRKEAYGCVNNALPDCPDKDDREWTRSQSMIEMSRPTESLLSASWHRPLVLGSLNYGFITDPVALTGQRKLGCRGMMNTLMLIWNHHFGPYGSTPRLVFVCEAKRHRGSWANYTEAKLPSYYVITLAQGLGPRAGLHHNVYCLHPQRVFHFCPFVSVHLQFLSHVSTSPSAKCARLDPVHLPAEVGIVKPAGRIKKSFVGGAGPLRMLNRQRVCSVGPGSGPPSVADCAKLTAEVEWTSIHPAAADRGLSQSTIHASMMLTHQISFTECDKFAQMAQSSVSHTVGQRVYSTSPPCAKPGPAGYRLISSIECTVHKCKRRHMAGALLRPREPGLLPDDNVSPVPLRYGDNILAHRVASYSRRTSDPSHQVRSDQFWDINVQPPIPFFAPLLNLASRTYGRGALLSPPEPQIHAATMASARCESNNRSVLVMRHDHEGKLPLHRSKLSGLAVILSRGSSDVCAPSDMKHIHSGDRQMTEELRFLENKNLMGLRYGLYSLTSRCARNVDRLIPFIRLQQVFGESKLESLAPGVKPLPIFVERRRMWPPVIWISIRCGHQTRPYIRDRSAAKCRGGQSRPALSKQLIYVRRVRQWRLHPGRQMVLGHTFASSFRQELSAQHTATRRITRPLSAPFYVRPRPWTGGTVELCIFRGASCRTSEFGKGATPKELLVMNGFLVVYYPAGQRPGLTSFVRSHSIRPVYAELLGSETRRDLRRSFWSVNVVLGVYRHLRHGTRQRSASSGLKGTLKVDSPMGVVRRKPNPIHFLPWKGVSRADFVALSVHGVYSSSVSSVGWFTGWKGNVKRPLRCLIAQDFKCSSLGLPIMFRDVFSQMPYCRLRQAPYVVAPFDSGVLWIARKTWIAFSHLRKSRFCPAWLSTDNTFDQYGSILVSEFSPTPRGIALVVCVPRSIVCRSHGKNFKFCILLPRVAVAQLRSVCHLVAIRCFTILIGKLFQPCQIRSEQPLRWYLSHSPPSKRSAKAIPAPYRAPGTFLIYSWIYFLLCRSTDQGCYFCIVHRAITQRTGCPRILLGFTLVSNELTVAHGIQAPVLEPRALPYNRATPRTDHGVSPVRRRRCSNIPINVGEYRWLFTFSVCIPTDSRKAVHATQVSCLMVLPRTARAYHRVRYTSFGLASEQTQTIFLIHISSDNNFARKVCIPPLVLL PROT.rb: def abbreviate(str) list = "" str.scan(/.../) do |sub| case sub when "UUU", "UUC" list += "F" when "UUA", "UUG" list += "L" when "UCU", "UCC", "UCA", "UCG", "AGU", "AGC" list += "S" when "UAU", "UAC" list += "Y" when "UGU", "UGC" list += "C" when "UGG" list += "W" when "CUU", "CUC", "CUA", "CUG" list += "L" when "CCU", "CCC", "CCA", "CCG" list += "P" when "CAU", "CAC" list+= "H" when "CAA", "CAG" list += "Q" when "CGU", "CGC", "CGA", "CGG", "AGA", "AGG" list += "R" when "AUU", "AUC", "AUA" list 
+= "I" when "AUG" list += "M" when "ACU", "ACC", "ACA", "ACG" list += "T" when "AAU", "AAC" list += "N" when "AAA", "AAG" list += "K" when "GUU", "GUC", "GUA", "GUG" list += "V" when "GCU", "GCC", "GCA", "GCG" list += "A" when "GAU", "GAC" list += "D" when "GAA", "GAG" list += "E" when "GGU", "GGC", "GGA", "GGG" list += "G" else return list end end end user_input = gets.chomp abbreviate(user_input) Yea, that's a very long switch. Basically it's a translator. Take 3 characters, output 1 other. I've thought about implementing this using a key-value map, but that wouldn't solve the repetitiveness. The readability is quite good and so is the speed. However, I'm quite sure it isn't idiomatic. I have no clue how this would score on maintainability. There's one thing striking me as odd: return should be implicit. But simply stating list instead of return list modifies the behaviour and I'm not sure why. Answer: Some notes: As others have already pointed out, you should use a hash instead of a gigantic case. But make sure your get operations on that hash are O(1), otherwise the method will be very inefficient. You can use Enumerable#take_while to manage the stop amino acids. Encapsulate the code in a module/class. You need a return because it's not the last expression of the method, it's within the scan, which you want to break. Note that this works: "123456".gsub(/.../) { |triplet| triplet[0] } #=> "14" This is a common pattern: write the data structure in the most declarative/simple way and then programmatically build (on initialization) whatever (efficient) data structures you need in the algorithm. 
I'd write it in functional style:

module Rosalind
  CODONS_BY_AMINOACID = {
    "F" => ["UUU", "UUC"],
    "L" => ["UUA", "UUG", "CUU", "CUC", "CUA", "CUG"],
    "S" => ["UCU", "UCC", "UCA", "UCG", "AGU", "AGC"],
    "Y" => ["UAU", "UAC"],
    "C" => ["UGU", "UGC"],
    "W" => ["UGG"],
    "P" => ["CCU", "CCC", "CCA", "CCG"],
    "H" => ["CAU", "CAC"],
    "Q" => ["CAA", "CAG"],
    "R" => ["CGU", "CGC", "CGA", "CGG", "AGA", "AGG"],
    "I" => ["AUU", "AUC", "AUA"],
    "M" => ["AUG"],
    "T" => ["ACU", "ACC", "ACA", "ACG"],
    "N" => ["AAU", "AAC"],
    "K" => ["AAA", "AAG"],
    "V" => ["GUU", "GUC", "GUA", "GUG"],
    "A" => ["GCU", "GCC", "GCA", "GCG"],
    "D" => ["GAU", "GAC"],
    "E" => ["GAA", "GAG"],
    "G" => ["GGU", "GGC", "GGA", "GGG"],
    "STOP" => ["UGA", "UAA", "UAG"],
  }

  AMINOACID_BY_CODON = CODONS_BY_AMINOACID.
    flat_map { |c, as| as.map { |a| [a, c] } }.to_h

  def self.problem_prot(aminoacids_string)
    aminoacids_string.
      scan(/[UGTCA]{3}/).
      map { |codon| AMINOACID_BY_CODON[codon] }.
      take_while { |aminoacid| aminoacid != "STOP" }.
      join
  end
end
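The declarative-table-plus-programmatic-inversion pattern from the answer carries over to other languages as well. Below is a minimal Python sketch of the same idea; it uses a deliberately reduced codon table (a handful of codons, not the full genetic code) so the mechanics stay visible, and the translate helper is an illustrative assumption, not part of the original Ruby code.

```python
from itertools import takewhile

# Reduced codon table for illustration only -- NOT the full genetic code.
CODONS_BY_AMINOACID = {
    "M": ["AUG"],
    "F": ["UUU", "UUC"],
    "G": ["GGU", "GGC", "GGA", "GGG"],
    "STOP": ["UAA", "UAG", "UGA"],
}

# Invert once on startup: codon -> amino acid, so each lookup is O(1).
AMINOACID_BY_CODON = {
    codon: aa for aa, codons in CODONS_BY_AMINOACID.items() for codon in codons
}

def translate(rna):
    # Walk the string three characters at a time, dropping any trailing
    # partial codon, and stop at the first STOP codon (like take_while).
    codons = (rna[i:i + 3] for i in range(0, len(rna) - len(rna) % 3, 3))
    aminoacids = (AMINOACID_BY_CODON[c] for c in codons)
    return "".join(takewhile(lambda aa: aa != "STOP", aminoacids))

print(translate("AUGUUUGGUUAA"))  # -> MFG
```

Codons outside the reduced table would raise a KeyError here; with the full table from the answer that case cannot occur for valid RNA.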
{ "domain": "codereview.stackexchange", "id": 19964, "tags": "strings, ruby, programming-challenge, bioinformatics" }
When to question output of model
Question: I'm unsure of how to ask a question without making it seem like a code review question. At what point does one question whether they've actually implemented the algorithm and/or model correctly? Getting spot-on results is great and all, but seems highly suspect. Also, what checks can be done to ensure that the algorithm and/or model is being implemented correctly? The reason I'm asking is because I'm getting perfect classification and subsequently accuracy, precision, etc. with the implementation of SVM. I am including the code, but feel free to ignore.

# Make a copy of the df
iris_df_copy = iris_df.copy()

# Create a new column, labeled 'T/F', whose value will be based on the value in the 'Class' column. If the value in the
# 'Class' column is 'Iris-setosa', then set the value of the 'T/F' column to 1. If the value in the 'Class' column is
# not 'Iris-setosa', then set the value of the 'T/F' column to 0.
iris_df_copy.loc[iris_df_copy.Class == 'Iris-setosa', 'T/F'] = 1
iris_df_copy.loc[iris_df_copy.Class != 'Iris-setosa', 'T/F'] = 0

X_svm = np.array(iris_df_copy[['Sepal_Length', 'Sepal_Width', 'Petal_Length', 'Petal_Width']])
y_svm = np.ravel(iris_df_copy[['T/F']])

# Split the samples into two subsets, use one for training and the other for testing
X_train_svm, X_test_svm, y_train_svm, y_test_svm = train_test_split(X_svm, y_svm, test_size=0.25, random_state=4)

# Instantiate the learning model - Linear SVM
linear_svm = svm.SVC(kernel='linear')

# Fit the model - Linear SVM
linear_svm.fit(X_train_svm, y_train_svm)

# Predict the response - Linear SVM
linear_svm_pred = linear_svm.predict(X_test_svm)

# Confusion matrix and quantitative metrics - Linear SVM
print("The confusion matrix is: " + np.str(confusion_matrix(y_test_svm, linear_svm_pred)))
print("The accuracy score is: " + np.str(accuracy_score(y_test_svm, linear_svm_pred)))
print("The precision is: " + np.str(precision_score(y_test_svm, linear_svm_pred, average="macro")))
print("The recall is: " + np.str(recall_score(y_test_svm, linear_svm_pred, average="macro")))

Answer: You need to know what the outcome should be of a given test on a dataset before you try a new method on it. Ask yourself, 'What do I expect from this?'

Linear SVM finds a plane to cut through the data to best represent the difference between two sets. If you have a look at what you are separating (Iris_setosa from Iris_virginica and Iris_versicolor), you'll find that the clumps themselves are perfectly separated. You can draw a line easily on each graph you care to use, and that is what I have done in the picture below. If the clumps are perfectly separated, then the SVM will return a perfectly separated result.

By Nicoguaro - Own work, CC BY 4.0, https://commons.wikimedia.org/w/index.php?curid=46257808

Test the SVM on separating virginica and versicolor to see how it does in a more difficult context. Or alternatively, just generate a dataset of your own from randomly placed Gaussian points.
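The answer's point — that perfectly separated clusters make perfect scores expected rather than suspect — can be checked without scikit-learn at all. Here is a stdlib-only Python sketch; the cluster means and spreads are made-up stand-ins for "setosa vs. the rest" and "versicolor vs. virginica", not real iris statistics.

```python
import random
import statistics

random.seed(0)

# Two pairs of 1-D Gaussian clusters: one well separated, one overlapping.
separated_a = [random.gauss(1.5, 0.2) for _ in range(100)]
separated_b = [random.gauss(5.0, 0.3) for _ in range(100)]
overlap_a = [random.gauss(4.3, 0.5) for _ in range(100)]
overlap_b = [random.gauss(5.5, 0.5) for _ in range(100)]

def threshold_accuracy(a, b):
    """Accuracy of the midpoint threshold between the two class means --
    the crudest possible 'linear' classifier."""
    t = (statistics.mean(a) + statistics.mean(b)) / 2
    correct = sum(x < t for x in a) + sum(x >= t for x in b)
    return correct / (len(a) + len(b))

print(threshold_accuracy(separated_a, separated_b))  # 1.0: perfect scores expected
print(threshold_accuracy(overlap_a, overlap_b))      # below 1.0: overlap costs accuracy
```

If even this trivial threshold scores 100% on the separated pair, a linear SVM doing the same on setosa is unremarkable; the overlapping pair is where a classifier's quality actually shows.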
{ "domain": "datascience.stackexchange", "id": 4865, "tags": "machine-learning, scikit-learn, svm" }
video stream between multiple machines
Question: Hi there, I'm working on a UAV which is equipped with a webcam. The webcam is running with the usb_cam package and it is working fine on the UAV machine. Now I'm trying to send the image data wirelessly to the base station, but I can't get the messages. I exported the Master to the UAV and can even see which topics are published there, but when I try rostopic hz /logitech_usb_webcam/image_raw I don't receive any message. Could that be a result of two different versions of ROS (UAV = diamondback / Base = electric), or is that a failure somewhere else? Would be great to get some help. Cheers

Originally posted by dinamex on ROS Answers with karma: 447 on 2012-05-07
Post score: 0

Answer: As Dan already said, you should check the ROS NetworkSetup. In this case I assume that the error is caused by your /etc/hosts file. You should be aware that local entries in the /etc/hosts file overwrite entries of your DHCP server...

Originally posted by michikarg with karma: 2108 on 2012-05-07
This answer was ACCEPTED on the original site
Post score: 1

Original comments
Comment by dinamex on 2012-05-07: Thanks for the quick response. I worked only with the IPs; setting an alias in the /etc/hosts file was the solution.
{ "domain": "robotics.stackexchange", "id": 9276, "tags": "ros, multiplemachines" }
Can I refactor my code anymore?
Question: I have written one method; however, I feel I can refactor it even more but am not sure how. Can someone help me here?

public static PaymentOptions GetPaymentOptions_NonGBCulture(TestConfigurationCDO testConfiguration, int siteId)
{
    var paymentOptions = default(PaymentOptions);
    var paymentOptionList = SitePaymentRepository.GetSitePaymentInfoBySiteId(testConfiguration, siteId);

    var paymentAuto = paymentOptionList.First(x => x.PaymentType == PaymentMethod.PayPalDirect ||
                                                   x.PaymentType == PaymentMethod.Braintree ||
                                                   x.PaymentType == PaymentMethod.BankTransfer ||
                                                   x.PaymentType == PaymentMethod.AdyenDropIn);

    paymentOptions = new ClientCheckoutOptions()
    {
        paymentMethod = paymentAuto.PaymentType
    };

    return paymentOptions;
}

I feel I could create a list of the possible payment types as below:

PaymentMethod[] checkoutGroup =
{
    PaymentMethod.PayPalDirect,
    PaymentMethod.Braintree,
    PaymentMethod.BankTransfer,
    PaymentMethod.AdyenDropIn
};

But I'm not sure how to add that check to the query.

Answer: I can see at least two places where you can make your code more concise.

PaymentType

As you have suspected, replacing multiple x.PaymentType == PaymentMethod.XYZ || statements with a collection lookup might help. With this approach you separate data from operation.

Before

var paymentAuto = paymentOptionList
    .First(x => x.PaymentType == PaymentMethod.PayPalDirect
             || x.PaymentType == PaymentMethod.Braintree
             || x.PaymentType == PaymentMethod.BankTransfer
             || x.PaymentType == PaymentMethod.AdyenDropIn);

After

ImmutableArray<PaymentMethod> checkoutGroup = new []
{
    PaymentMethod.PayPalDirect,
    PaymentMethod.Braintree,
    PaymentMethod.BankTransfer,
    PaymentMethod.AdyenDropIn
}.ToImmutableArray();

var paymentAuto = paymentOptionList
    .First(x => checkoutGroup.Contains(x.PaymentType));

Note: I've used an Immutable collection here to prevent accidental (or intentional) add and/or remove.

paymentOptions

You can get rid of this variable because you set it once (the initialization code is unnecessary) and immediately return.

Before

public static PaymentOptions GetPaymentOptions_NonGBCulture(TestConfigurationCDO testConfiguration, int siteId)
{
    var paymentOptions = default(PaymentOptions);
    //...
    paymentOptions = new ClientCheckoutOptions()
    {
        paymentMethod = paymentAuto.PaymentType
    };
    return paymentOptions;
}

After

public static PaymentOptions GetPaymentOptions_NonGBCulture(TestConfigurationCDO testConfiguration, int siteId)
{
    //...
    return new ClientCheckoutOptions
    {
        paymentMethod = paymentAuto.PaymentType
    };
}

Note: In case of object initialization you don't need to explicitly call the parameterless ctor (new ClientCheckoutOptions() { ... } >> new ClientCheckoutOptions { ... }).

UPDATE: The most concise version I can think of:

private static readonly ImmutableArray<PaymentMethod> checkoutGroup = new []
{
    PaymentMethod.PayPalDirect,
    PaymentMethod.Braintree,
    PaymentMethod.BankTransfer,
    PaymentMethod.AdyenDropIn
}.ToImmutableArray();

public static PaymentOptions GetPaymentOptions_NonGBCulture(TestConfigurationCDO testConfiguration, int siteId)
    => new ClientCheckoutOptions
    {
        paymentMethod = SitePaymentRepository
            .GetSitePaymentInfoBySiteId(testConfiguration, siteId)
            .Select(po => po.PaymentType)
            .First(checkoutGroup.Contains)
    };
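The same "data, not logic" idea — hoist the membership set into one immutable constant and test with a Contains-style lookup — applies outside C# too. A Python sketch with a hypothetical PaymentMethod enum mirroring the one in the question (the names and helper are illustrative, not from the original code):

```python
from enum import Enum, auto

class PaymentMethod(Enum):
    # Hypothetical members mirroring the enum in the question.
    PAYPAL_DIRECT = auto()
    BRAINTREE = auto()
    BANK_TRANSFER = auto()
    ADYEN_DROP_IN = auto()
    CASH = auto()

# The checkout group is data, kept in one immutable constant;
# a frozenset cannot be mutated and gives O(1) membership tests.
CHECKOUT_GROUP = frozenset({
    PaymentMethod.PAYPAL_DIRECT,
    PaymentMethod.BRAINTREE,
    PaymentMethod.BANK_TRANSFER,
    PaymentMethod.ADYEN_DROP_IN,
})

def first_checkout_option(payment_types):
    # Equivalent of .First(x => checkoutGroup.Contains(x.PaymentType)).
    return next(pt for pt in payment_types if pt in CHECKOUT_GROUP)

print(first_checkout_option([PaymentMethod.CASH, PaymentMethod.BRAINTREE]))
# -> PaymentMethod.BRAINTREE
```

Adding or removing an accepted payment method is then a one-line data change, with no edit to the lookup logic.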
{ "domain": "codereview.stackexchange", "id": 41126, "tags": "c#, linq, selenium" }
Does every infinite context free language contain an infinite regular subset?
Question: Can someone explain to me if this is true or not?

Answer: No. A counterexample is simply $E=\{a^nb^n\mid n\ge0\}$, the classic context-free language that is not regular.

For the sake of contradiction, suppose $E$ has an infinite regular subset $S$. So $S$ satisfies the pumping lemma for regular languages. Suppose $p\ge1$ is a pumping length of $S$. Since $S$ is infinite, $w=a^mb^m \in S$ for some $m\ge p$. Since $|w|=2m\gt p$, $w$ can be written as $w=xyz$, satisfying the following conditions:

$|y| \geq 1$
$|xy|\leq p$
$\forall n\geq 0$, $xy^{n}z\in S$

Since $|xy|\le p$ and $w$ starts with at least $p$ $a$'s, $y=a^k$ for some $k\ge1$. Since $xz=xy^0z$ has fewer $a$'s than $b$'s, $xz\not\in S$. This contradiction shows that $S$ is not regular. That is, no infinite subset of $E$ is regular.

What we just showed is the following fact: given any language, if no word in it can be pumped as in the pumping lemma for regular languages, then that language does not contain an infinite regular subset.

Here is a related exercise.

Exercise. A language is called prefix-closed if all prefixes of every string in the language are also in the language. Let $P$ be an infinite, prefix-closed, context-free language. Show that $P$ contains an infinite regular subset.
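The contradiction in the proof can be checked mechanically for any concrete pumping length $p$: enumerate every decomposition $w=xyz$ of $w=a^pb^p$ with $|xy|\le p$ and $|y|\ge1$, and confirm that pumping always leaves the language. A small brute-force Python sketch (the helper names are my own):

```python
import re

def in_E(s):
    """Membership test for E = { a^n b^n : n >= 0 }."""
    m = re.fullmatch(r"(a*)(b*)", s)
    return m is not None and len(m.group(1)) == len(m.group(2))

def pumpable(w, p):
    """Does SOME decomposition w = xyz with |xy| <= p, |y| >= 1 keep
    x y^n z in E for n in {0, 1, 2}?  (n = 0 alone suffices for the proof.)"""
    limit = min(p, len(w))
    for i in range(limit + 1):              # |x| = i
        for j in range(i + 1, limit + 1):   # |xy| = j, so |y| = j - i >= 1
            x, y, z = w[:i], w[i:j], w[j:]
            if all(in_E(x + y * n + z) for n in (0, 1, 2)):
                return True
    return False

p = 5
w = "a" * p + "b" * p
print(pumpable(w, p))  # False: every decomposition already fails at n = 0
```

Since $|xy|\le p$ forces $y=a^k$, dropping $y$ always leaves fewer $a$'s than $b$'s — exactly the step the proof uses.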
{ "domain": "cs.stackexchange", "id": 12819, "tags": "regular-languages, context-free" }
Why does the same file differ so much in size between FITS / CSV / ECSV formats?
Question: Maybe this is obvious for some people but I'm a newbie when it comes to working with astronomical data. I have the same catalogue in different formats and am a bit surprised as to how sizes change. This is the breakdown: catalogue.fits: 9.3 GB catalogue.csv (saved as plain CSV): 14.9 GB catalogue.csv (saved as ECSV): 18.3 GB The original catalogue was ECSV (GaiaDR3). I made the conversion using stilts/topcat and loaded them into topcat to make sure they all have the same data. I have googled this question but couldn't find any useful information, other than it is recommended to work with fits files when catalogues become too large. Is this expected? I'm afraid I might have screwed something in the conversion even though the process was pretty straight forward. EDIT: as per request in comments, I am pasting an extract of the CSV and ECSV files. These are the first 10 lines after the header: CSV 1636148068921376768,Gaia DR3 34361129088,34361129088,894504938,2016.0,45.00432028915398,0.09731972,0.021047763781174733,0.101752974,3.235017271512856,0.12045025,26.857704,35.230515,29.518344127131527,0.13369285,19.231654938806578,0.13392176,0.16325329,6.428645E-4,-0.073663116,-0.012016551,-0.40389284,-0.10152152,-0.31593448,0.14065048,0.23142646,0.38175407,172,0,171,1,1.081467,194.59933,0.26741344,1.0328022,31,false,1.285487,,,,,,,,20,15,0.22053866,20,9,0,0.05737302,84.542816,0,0,1.0578898,0.28431648,0.18242157,0.4234895,0.8483561,-101.856606,-31.586445,-44.381237,29.909302,false,170,1763.191386728999,2.1212356,831.2096,17.571619,18,389.99713585371074,9.491409,41.089485,18.86089,19,2178.214858374066,15.074686,144.49487,16.402643,1.4565701,0,2,0,1,0,2.4582462,1.2892704,1.1689758,,,,,,,,,,,,,,,,,,,,,,,NOT_AVAILABLE,176.94278852482034,-48.88493355232444,42.54657309907107,-16.317212317623884,false,false,0,false,false,false,false,false,true,true,false,1.0820195E-13,5.682676E-13,0.9993886,3478.5408,3461.1475,3497.5784,4.7,4.6405,4.7734,-0.6143,-0.7064,-0.4964,302.2347,292.
5325,312.6373,0.7643,0.7292,0.7975,0.505,0.4815,0.5273,0.3096,0.2956,0.3228,MARCS 1636148068921376768,Gaia DR3 38655544960,38655544960,1757259052,2016.0,45.004978371745516,0.017885398,0.019879675701858644,0.01877158,3.1391701154499523,0.022347411,140.47131,35.30821,29.686339048921702,0.023771733,19.115199913956804,0.023830384,0.1152631,0.07323115,-0.10691941,-0.03021361,-0.4488658,-0.15551351,-0.37927917,0.18184616,0.26367012,0.35528076,183,0,182,1,0.26434276,181.43846,0.0,0.0,31,false,1.4550159,,,,,,,,21,15,0.03929549,21,9,0,0.024301996,98.629005,0,0,1.012191,0.30656147,0.20578752,0.45299426,0.84656596,-96.31889,-34.497215,-44.82578,30.34742,false,180,42030.60043942405,11.392837,3689.213,14.128453,20,17955.47937733753,26.03932,689.55255,14.70305,19,34263.48754002838,36.75135,932.30554,13.410816,1.2424035,0,3,0,2,0,1.2922335,0.5745964,0.71763706,41.187176,3.1130338,2,10,1,8,7.034563,,,749.9199,,4500.0,3.0,-0.25,111,,,,13.068616,0.049816404,10,,NOT_AVAILABLE,176.94476211452783,-48.88527012426483,42.546872019115916,-16.318521975182243,false,false,0,true,true,false,false,false,true,true,false,1.03982646E-13,5.193881E-13,0.9998059,4708.7944,4659.062,4723.2773,4.5588,4.5261,4.5654,-0.087,-0.1218,-0.0681,332.8322,330.4709,347.1729,0.2345,0.184,0.2516,0.182,0.1425,0.1955,0.0961,0.0752,0.1032,MARCS 1636148068921376768,Gaia DR3 
549755818112,549755818112,1176142027,2016.0,45.04828232129832,0.027803512,0.04825396034378256,0.026499804,1.5834770072004039,0.03442545,45.99728,16.465364,0.8431278207235642,0.03881713,-16.443764103221557,0.032919735,0.15041357,-0.14103404,0.058549184,0.17610951,-0.47409967,0.19906765,0.040113226,-0.14495842,-0.13733767,0.25881717,186,0,185,1,1.7361301,220.66844,0.072144866,0.96563005,31,true,1.4643211,,,,,,,,21,13,0.05741181,21,8,0,0.02840035,136.8429,0,0,1.0909767,0.3311718,0.18608999,0.49344972,0.84215754,-121.24405,-9.218482,-38.814762,29.860806,false,182,19047.581229390133,6.5483584,2908.7566,14.987767,18,8336.447382322891,16.801348,496.1773,15.53609,17,15362.299344786756,18.731024,820.15265,14.2817545,1.2441866,0,0,0,0,0,1.2543354,0.5483227,0.7060127,,,,,,,,,,,,,,,,,,,,,,,NOT_AVAILABLE,176.95936562964246,-48.83439750417775,42.59867306757699,-16.30407740663594,false,false,0,false,false,false,false,false,true,true,false,1.0493143E-13,5.2412805E-13,0.999712,4837.496,4819.394,4859.1646,4.4109,4.3971,4.426,-0.6022,-0.6293,-0.5715,621.2011,607.6643,634.6024,0.3668,0.3498,0.3862,0.2886,0.275,0.3042,0.155,0.1476,0.1633,MARCS 1636148068921376768,Gaia DR3 
828929527040,828929527040,602360354,2016.0,45.02361979732255,0.054348446,0.06841876724959775,0.057792775,1.2030946627289945,0.066816084,18.006063,17.646046,13.952005440191227,0.078203134,-10.803908607898379,0.077209964,0.15176746,0.035847045,-0.17484911,-0.019222464,-0.43819016,-0.13019522,-0.38296714,0.18708444,0.24369827,0.3748652,180,0,174,6,0.67423594,189.07535,0.0,0.0,31,false,1.4280801,,,,,,,,21,15,0.12849106,21,9,0,0.021228304,100.85552,0,0,1.0349011,0.31510893,0.21111594,0.4635199,0.84135246,-94.50633,-34.891853,-45.02362,30.44096,false,175,4394.201551830577,3.3469398,1312.9012,16.580168,15,1758.2038783868772,16.53475,106.33386,17.225868,17,3789.788093274453,18.455858,205.34337,15.801358,1.2625711,0,2,0,0,0,1.42451,0.64570045,0.77880955,,,,,,,,,,,,,,,,,,,,,,,NOT_AVAILABLE,176.91130587619824,-48.838162803700186,42.58025775246733,-16.277574201332826,false,false,0,false,false,false,false,false,true,true,false,1.0353673E-13,5.1752347E-13,0.9998501,4333.865,4303.369,4382.055,4.6641,4.6466,4.6782,-0.3251,-0.3924,-0.2468,690.5604,669.148,719.1542,0.0405,0.0079,0.0907,0.0304,0.0059,0.0684,0.0162,0.0031,0.0363,PHOENIX 1636148068921376768,Gaia DR3 
1275606125952,1275606125952,1616763991,2016.0,44.993270784169155,0.044207256,0.07633404499591856,0.037413534,0.6296499872212442,0.0480792,13.096099,6.749295,-1.4354337293932473,0.05779658,-6.594885755987001,0.046561327,0.017531538,0.15331538,-0.041159563,-0.02230982,-0.19973202,-0.025520978,-0.18153821,-0.0039606066,6.9630594E-5,0.075942636,204,0,204,0,0.22948465,211.85617,0.0679066,0.30813953,31,false,1.5075005,,,,,,,,24,15,0.08034339,24,10,0,0.019139638,149.57747,0,0,1.0098441,0.20883662,0.20302725,0.30122048,0.84553945,-73.91511,-2.6010761,-45.711555,29.540922,false,201,6031.684729758614,3.7787752,1596.2009,16.23627,21,2954.204025222018,15.563785,189.8127,16.662441,20,4378.056173763009,17.483109,250.41635,15.644692,1.2156239,0,2,0,2,0,1.0177488,0.42617035,0.5915785,,,,,,,,,,,,,,,,,,,,,,,NOT_AVAILABLE,176.87062873355376,-48.85450727416134,42.552445474053975,-16.26111514698619,false,false,0,true,false,false,false,false,true,true,false,1.0231336E-13,5.1105317E-13,0.99997115,5040.7686,5034.1396,5048.8696,4.4445,4.4181,4.4688,-0.6965,-0.7331,-0.6609,1343.5872,1283.9733,1404.766,0.004,9.0E-4,0.01,0.0032,7.0E-4,0.0081,0.0017,4.0E-4,0.0043,PHOENIX 1636148068921376768,Gaia DR3 
1374389600384,1374389600384,866663434,2016.0,44.932802106149424,0.13405186,0.06480894307270345,0.116464846,1.7650550406017367,0.14989038,11.775639,11.373369,5.092345091384519,0.17400531,-10.169638649405274,0.14390641,0.012303141,0.1726108,-0.06496814,0.059556037,-0.20390691,0.035888158,-0.14015855,0.07941218,0.050555214,0.07260491,237,0,236,1,-1.963938,199.77853,0.0,0.0,31,true,1.2888188,,,,,,,,27,16,0.24213506,27,11,0,0.046150215,65.5374,0,0,0.9086372,0.21830778,0.20200518,0.28883445,0.8636228,-77.12932,3.91698,-47.065266,29.11508,false,233,849.1830497115145,1.5339631,553.58765,18.364864,24,174.8028102530512,8.37604,20.869385,19.732172,26,962.1708521083773,8.57319,112.23021,17.289764,1.338903,0,0,0,1,0,2.4424076,1.3673077,1.0751,,,,,,,,,,,,,,,,,,,,,,,NOT_AVAILABLE,176.81934898907144,-48.906005037340215,42.48870623058961,-16.25440588954115,false,false,0,false,false,false,false,false,true,false,false,1.9225565E-13,9.105778E-13,0.9999547,3404.9011,3392.6704,3433.1738,4.4294,4.3898,4.4884,-1.1774,-1.1924,-1.1333,426.0444,406.6133,445.8924,0.4993,0.4523,0.5627,0.3329,0.3016,0.3757,0.1986,0.1802,0.2236,MARCS 1636148068921376768,Gaia DR3 
1619203481984,1619203481984,955831289,2016.0,44.95115803041135,0.11489181,0.10531247613400328,0.092665374,1.7172992921993269,0.11708278,14.667395,18.377512,13.217855646757254,0.15483287,12.767977485493745,0.11529025,-0.10194003,0.12885134,-0.26817575,0.17114455,-0.19460984,0.16064924,-0.19148935,0.05554392,0.046069436,-0.09006234,229,0,224,5,-0.09883537,226.36523,0.2717901,0.8230513,31,false,1.2836995,,,,,,,,26,16,0.21913311,26,11,0,0.014332649,110.389854,0,0,0.9937619,0.19364144,0.2579131,0.27664497,0.8563101,-81.96813,9.220485,-45.121784,29.153976,false,219,1329.8584660892113,1.9149755,694.45197,17.877853,24,280.1407226446208,8.440962,33.188248,19.220102,21,1573.835843510322,13.714526,114.75685,16.755497,1.3941157,0,3,0,0,0,2.4646053,1.3422489,1.1223564,,,,,,,,,,,,,,,,,,,,,,,NOT_AVAILABLE,176.79447406087172,-48.8646537946501,42.519355519308434,-16.221066034883595,false,false,0,false,false,false,false,false,true,true,false,1.06596135E-13,5.396279E-13,0.999561,3595.7598,3589.6238,3605.467,4.3742,4.3274,4.3944,-0.4565,-0.5201,-0.3926,696.264,663.922,721.3262,0.9776,0.9422,1.0066,0.652,0.6285,0.6718,0.394,0.3805,0.4049,MARCS 1636148068921376768,Gaia DR3 
1717987078400,1717987078400,71332990,2016.0,44.98309734471892,0.16068149,0.09640645832988629,0.14490694,2.7367159160633827,0.17841817,15.338774,14.770221,2.7031271619485757,0.21807563,-14.520761539059677,0.18491341,0.07472117,0.1078753,-0.14565209,0.053458534,-0.1670114,0.046779558,-0.19464372,0.067384295,0.035748687,0.15527137,211,0,210,1,-1.6899631,225.55008,0.20050569,0.20059653,95,false,,1.1166438,0.046840165,-0.06460931,-0.051463306,-0.0800577,-0.12249996,-0.01685062,24,16,0.31002954,24,10,0,0.017960072,159.78165,0,0,0.9159248,0.23936273,0.15536572,0.27342117,0.85889024,-65.00193,-16.740282,-48.247776,29.825163,false,207,665.7283829462514,1.6562455,401.95032,18.629124,19,125.10427068321398,11.716241,10.677851,20.095362,20,871.0756987417084,12.082568,72.09359,17.397757,1.496376,0,2,0,0,0,2.6976051,1.466238,1.2313671,,,,,,,,,,,,,,,,,,,,,,,NOT_AVAILABLE,176.83794624758525,-48.84788093533632,42.548440578810926,-16.23894270538447,false,false,0,false,false,false,false,false,true,false,false,1.4650697E-13,5.7786913E-13,0.9998262,3173.697,3167.9666,3183.0564,4.9626,4.9521,4.9725,-0.0169,-0.0595,0.0102,313.3219,302.2305,321.4363,0.0844,0.0351,0.1558,0.0525,0.0218,0.0971,0.0365,0.0152,0.0672,MARCS 1636148068921376768,Gaia DR3 2512556873600,2512556873600,831324186,2016.0,45.07492471994457,33.54464,0.1108820420939314,24.830378,,,,,,,,,-0.1309745,,,,,,,,,,47,0,47,0,35.859825,1964.6633,119.27316,184.45139,3,false,,,,,,,,,7,6,120.0138,8,8,0,0.41287413,54.55582,0,0,,0.4272169,0.21570276,0.5622911,0.8563249,-75.8122,-1.6494459,-50.740902,30.197552,false,47,45.0438638293822,2.318626,19.426964,21.553278,5,296.537445766598,54.37354,5.4537086,19.158344,5,444.69570491458535,83.88305,5.3013773,18.127739,16.455807,0,0,0,0,0,1.0306053,-2.3949337,3.425539,,,,,,,,,,,,,,,,,,,,,,,NOT_AVAILABLE,176.91896393240253,-48.771755887594544,42.64430938681278,-16.251990466593394,false,false,0,false,false,false,false,false,false,false,false,0.004575429,0.036596417,0.9587768,,,,,,,,,,,,,,,,,,,,,,null 
1636148068921376768,Gaia DR3 2821794351744,2821794351744,1389218613,2016.0,45.124152453120615,1.6364167,0.13677852527701542,1.5161866,,,,,,,,,-0.057091914,,,,,,,,,,162,0,145,17,85.27109,11890.783,11.945554,971.3325,3,false,,,,,,,,,18,12,3.0579515,20,5,0,0.17080484,85.50274,58,0,,0.3759727,0.1906163,0.49689272,0.8553549,-98.59949,-1.5183473,-46.745052,28.801044,false,162,1180.830086047288,13.438973,87.86609,18.006899,18,448.9534667774457,10.775265,41.66519,18.708038,16,2295.837741126928,23.04338,99.63112,16.345543,2.324459,0,2,0,1,0,2.3624954,0.70113945,1.661356,,,,,,,,,,,,,,,,,,,,,,,NOT_AVAILABLE,176.94249514790158,-48.718340373548315,42.70123151869663,-16.2416233995086,false,false,0,false,false,false,false,false,false,false,false,0.0,0.0,0.9105993,,,,,,,,,,,,,,,,,,,,,,null ECSV 1636148068921376768,"Gaia DR3 34361129088",34361129088,894504938,2016.0,45.00432028915398,0.09731972,0.021047763781174733,0.101752974,3.235017271512856,0.12045025,26.857704,35.230515,29.518344127131527,0.13369285,19.231654938806578,0.13392176,0.16325329,6.428645E-4,-0.073663116,-0.012016551,-0.40389284,-0.10152152,-0.31593448,0.14065048,0.23142646,0.38175407,172,0,171,1,1.081467,194.59933,0.26741344,1.0328022,31,"False",1.285487,null,null,null,null,null,null,null,20,15,0.22053866,20,9,0,0.05737302,84.542816,0,0,1.0578898,0.28431648,0.18242157,0.4234895,0.8483561,-101.856606,-31.586445,-44.381237,29.909302,"False",170,1763.191386728999,2.1212356,831.2096,17.571619,18,389.99713585371074,9.491409,41.089485,18.86089,19,2178.214858374066,15.074686,144.49487,16.402643,1.4565701,0,2,0,1,0,2.4582462,1.2892704,1.1689758,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,"NOT_AVAILABLE",176.94278852482034,-48.88493355232444,42.54657309907107,-16.317212317623884,"False","False",0,"False","False","False","False","False","True","True","False",1.0820195E-13,5.682676E-13,0.9993886,3478.5408,3461.1475,3497.5784,4.7,4.6405,4.7734,-0.6143,-0.7064,-0.4
964,302.2347,292.5325,312.6373,0.7643,0.7292,0.7975,0.505,0.4815,0.5273,0.3096,0.2956,0.3228,"MARCS" 1636148068921376768,"Gaia DR3 38655544960",38655544960,1757259052,2016.0,45.004978371745516,0.017885398,0.019879675701858644,0.01877158,3.1391701154499523,0.022347411,140.47131,35.30821,29.686339048921702,0.023771733,19.115199913956804,0.023830384,0.1152631,0.07323115,-0.10691941,-0.03021361,-0.4488658,-0.15551351,-0.37927917,0.18184616,0.26367012,0.35528076,183,0,182,1,0.26434276,181.43846,0.0,0.0,31,"False",1.4550159,null,null,null,null,null,null,null,21,15,0.03929549,21,9,0,0.024301996,98.629005,0,0,1.012191,0.30656147,0.20578752,0.45299426,0.84656596,-96.31889,-34.497215,-44.82578,30.34742,"False",180,42030.60043942405,11.392837,3689.213,14.128453,20,17955.47937733753,26.03932,689.55255,14.70305,19,34263.48754002838,36.75135,932.30554,13.410816,1.2424035,0,3,0,2,0,1.2922335,0.5745964,0.71763706,41.187176,3.1130338,2,10,1,8,7.034563,null,null,749.9199,null,4500.0,3.0,-0.25,111,null,null,null,13.068616,0.049816404,10,null,"NOT_AVAILABLE",176.94476211452783,-48.88527012426483,42.546872019115916,-16.318521975182243,"False","False",0,"True","True","False","False","False","True","True","False",1.03982646E-13,5.193881E-13,0.9998059,4708.7944,4659.062,4723.2773,4.5588,4.5261,4.5654,-0.087,-0.1218,-0.0681,332.8322,330.4709,347.1729,0.2345,0.184,0.2516,0.182,0.1425,0.1955,0.0961,0.0752,0.1032,"MARCS" 1636148068921376768,"Gaia DR3 
549755818112",549755818112,1176142027,2016.0,45.04828232129832,0.027803512,0.04825396034378256,0.026499804,1.5834770072004039,0.03442545,45.99728,16.465364,0.8431278207235642,0.03881713,-16.443764103221557,0.032919735,0.15041357,-0.14103404,0.058549184,0.17610951,-0.47409967,0.19906765,0.040113226,-0.14495842,-0.13733767,0.25881717,186,0,185,1,1.7361301,220.66844,0.072144866,0.96563005,31,"True",1.4643211,null,null,null,null,null,null,null,21,13,0.05741181,21,8,0,0.02840035,136.8429,0,0,1.0909767,0.3311718,0.18608999,0.49344972,0.84215754,-121.24405,-9.218482,-38.814762,29.860806,"False",182,19047.581229390133,6.5483584,2908.7566,14.987767,18,8336.447382322891,16.801348,496.1773,15.53609,17,15362.299344786756,18.731024,820.15265,14.2817545,1.2441866,0,0,0,0,0,1.2543354,0.5483227,0.7060127,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,"NOT_AVAILABLE",176.95936562964246,-48.83439750417775,42.59867306757699,-16.30407740663594,"False","False",0,"False","False","False","False","False","True","True","False",1.0493143E-13,5.2412805E-13,0.999712,4837.496,4819.394,4859.1646,4.4109,4.3971,4.426,-0.6022,-0.6293,-0.5715,621.2011,607.6643,634.6024,0.3668,0.3498,0.3862,0.2886,0.275,0.3042,0.155,0.1476,0.1633,"MARCS" 1636148068921376768,"Gaia DR3 
828929527040",828929527040,602360354,2016.0,45.02361979732255,0.054348446,0.06841876724959775,0.057792775,1.2030946627289945,0.066816084,18.006063,17.646046,13.952005440191227,0.078203134,-10.803908607898379,0.077209964,0.15176746,0.035847045,-0.17484911,-0.019222464,-0.43819016,-0.13019522,-0.38296714,0.18708444,0.24369827,0.3748652,180,0,174,6,0.67423594,189.07535,0.0,0.0,31,"False",1.4280801,null,null,null,null,null,null,null,21,15,0.12849106,21,9,0,0.021228304,100.85552,0,0,1.0349011,0.31510893,0.21111594,0.4635199,0.84135246,-94.50633,-34.891853,-45.02362,30.44096,"False",175,4394.201551830577,3.3469398,1312.9012,16.580168,15,1758.2038783868772,16.53475,106.33386,17.225868,17,3789.788093274453,18.455858,205.34337,15.801358,1.2625711,0,2,0,0,0,1.42451,0.64570045,0.77880955,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,"NOT_AVAILABLE",176.91130587619824,-48.838162803700186,42.58025775246733,-16.277574201332826,"False","False",0,"False","False","False","False","False","True","True","False",1.0353673E-13,5.1752347E-13,0.9998501,4333.865,4303.369,4382.055,4.6641,4.6466,4.6782,-0.3251,-0.3924,-0.2468,690.5604,669.148,719.1542,0.0405,0.0079,0.0907,0.0304,0.0059,0.0684,0.0162,0.0031,0.0363,"PHOENIX" 1636148068921376768,"Gaia DR3 
1275606125952",1275606125952,1616763991,2016.0,44.993270784169155,0.044207256,0.07633404499591856,0.037413534,0.6296499872212442,0.0480792,13.096099,6.749295,-1.4354337293932473,0.05779658,-6.594885755987001,0.046561327,0.017531538,0.15331538,-0.041159563,-0.02230982,-0.19973202,-0.025520978,-0.18153821,-0.0039606066,6.9630594E-5,0.075942636,204,0,204,0,0.22948465,211.85617,0.0679066,0.30813953,31,"False",1.5075005,null,null,null,null,null,null,null,24,15,0.08034339,24,10,0,0.019139638,149.57747,0,0,1.0098441,0.20883662,0.20302725,0.30122048,0.84553945,-73.91511,-2.6010761,-45.711555,29.540922,"False",201,6031.684729758614,3.7787752,1596.2009,16.23627,21,2954.204025222018,15.563785,189.8127,16.662441,20,4378.056173763009,17.483109,250.41635,15.644692,1.2156239,0,2,0,2,0,1.0177488,0.42617035,0.5915785,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,"NOT_AVAILABLE",176.87062873355376,-48.85450727416134,42.552445474053975,-16.26111514698619,"False","False",0,"True","False","False","False","False","True","True","False",1.0231336E-13,5.1105317E-13,0.99997115,5040.7686,5034.1396,5048.8696,4.4445,4.4181,4.4688,-0.6965,-0.7331,-0.6609,1343.5872,1283.9733,1404.766,0.004,9.0E-4,0.01,0.0032,7.0E-4,0.0081,0.0017,4.0E-4,0.0043,"PHOENIX" 1636148068921376768,"Gaia DR3 
1374389600384",1374389600384,866663434,2016.0,44.932802106149424,0.13405186,0.06480894307270345,0.116464846,1.7650550406017367,0.14989038,11.775639,11.373369,5.092345091384519,0.17400531,-10.169638649405274,0.14390641,0.012303141,0.1726108,-0.06496814,0.059556037,-0.20390691,0.035888158,-0.14015855,0.07941218,0.050555214,0.07260491,237,0,236,1,-1.963938,199.77853,0.0,0.0,31,"True",1.2888188,null,null,null,null,null,null,null,27,16,0.24213506,27,11,0,0.046150215,65.5374,0,0,0.9086372,0.21830778,0.20200518,0.28883445,0.8636228,-77.12932,3.91698,-47.065266,29.11508,"False",233,849.1830497115145,1.5339631,553.58765,18.364864,24,174.8028102530512,8.37604,20.869385,19.732172,26,962.1708521083773,8.57319,112.23021,17.289764,1.338903,0,0,0,1,0,2.4424076,1.3673077,1.0751,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,"NOT_AVAILABLE",176.81934898907144,-48.906005037340215,42.48870623058961,-16.25440588954115,"False","False",0,"False","False","False","False","False","True","False","False",1.9225565E-13,9.105778E-13,0.9999547,3404.9011,3392.6704,3433.1738,4.4294,4.3898,4.4884,-1.1774,-1.1924,-1.1333,426.0444,406.6133,445.8924,0.4993,0.4523,0.5627,0.3329,0.3016,0.3757,0.1986,0.1802,0.2236,"MARCS" 1636148068921376768,"Gaia DR3 
1619203481984",1619203481984,955831289,2016.0,44.95115803041135,0.11489181,0.10531247613400328,0.092665374,1.7172992921993269,0.11708278,14.667395,18.377512,13.217855646757254,0.15483287,12.767977485493745,0.11529025,-0.10194003,0.12885134,-0.26817575,0.17114455,-0.19460984,0.16064924,-0.19148935,0.05554392,0.046069436,-0.09006234,229,0,224,5,-0.09883537,226.36523,0.2717901,0.8230513,31,"False",1.2836995,null,null,null,null,null,null,null,26,16,0.21913311,26,11,0,0.014332649,110.389854,0,0,0.9937619,0.19364144,0.2579131,0.27664497,0.8563101,-81.96813,9.220485,-45.121784,29.153976,"False",219,1329.8584660892113,1.9149755,694.45197,17.877853,24,280.1407226446208,8.440962,33.188248,19.220102,21,1573.835843510322,13.714526,114.75685,16.755497,1.3941157,0,3,0,0,0,2.4646053,1.3422489,1.1223564,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,"NOT_AVAILABLE",176.79447406087172,-48.8646537946501,42.519355519308434,-16.221066034883595,"False","False",0,"False","False","False","False","False","True","True","False",1.06596135E-13,5.396279E-13,0.999561,3595.7598,3589.6238,3605.467,4.3742,4.3274,4.3944,-0.4565,-0.5201,-0.3926,696.264,663.922,721.3262,0.9776,0.9422,1.0066,0.652,0.6285,0.6718,0.394,0.3805,0.4049,"MARCS" 1636148068921376768,"Gaia DR3 
1717987078400",1717987078400,71332990,2016.0,44.98309734471892,0.16068149,0.09640645832988629,0.14490694,2.7367159160633827,0.17841817,15.338774,14.770221,2.7031271619485757,0.21807563,-14.520761539059677,0.18491341,0.07472117,0.1078753,-0.14565209,0.053458534,-0.1670114,0.046779558,-0.19464372,0.067384295,0.035748687,0.15527137,211,0,210,1,-1.6899631,225.55008,0.20050569,0.20059653,95,"False",null,1.1166438,0.046840165,-0.06460931,-0.051463306,-0.0800577,-0.12249996,-0.01685062,24,16,0.31002954,24,10,0,0.017960072,159.78165,0,0,0.9159248,0.23936273,0.15536572,0.27342117,0.85889024,-65.00193,-16.740282,-48.247776,29.825163,"False",207,665.7283829462514,1.6562455,401.95032,18.629124,19,125.10427068321398,11.716241,10.677851,20.095362,20,871.0756987417084,12.082568,72.09359,17.397757,1.496376,0,2,0,0,0,2.6976051,1.466238,1.2313671,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,"NOT_AVAILABLE",176.83794624758525,-48.84788093533632,42.548440578810926,-16.23894270538447,"False","False",0,"False","False","False","False","False","True","False","False",1.4650697E-13,5.7786913E-13,0.9998262,3173.697,3167.9666,3183.0564,4.9626,4.9521,4.9725,-0.0169,-0.0595,0.0102,313.3219,302.2305,321.4363,0.0844,0.0351,0.1558,0.0525,0.0218,0.0971,0.0365,0.0152,0.0672,"MARCS" 1636148068921376768,"Gaia DR3 
2512556873600",2512556873600,831324186,2016.0,45.07492471994457,33.54464,0.1108820420939314,24.830378,null,null,null,null,null,null,null,null,-0.1309745,null,null,null,null,null,null,null,null,null,47,0,47,0,35.859825,1964.6633,119.27316,184.45139,3,"False",null,null,null,null,null,null,null,null,7,6,120.0138,8,8,0,0.41287413,54.55582,0,0,null,0.4272169,0.21570276,0.5622911,0.8563249,-75.8122,-1.6494459,-50.740902,30.197552,"False",47,45.0438638293822,2.318626,19.426964,21.553278,5,296.537445766598,54.37354,5.4537086,19.158344,5,444.69570491458535,83.88305,5.3013773,18.127739,16.455807,0,0,0,0,0,1.0306053,-2.3949337,3.425539,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,"NOT_AVAILABLE",176.91896393240253,-48.771755887594544,42.64430938681278,-16.251990466593394,"False","False",0,"False","False","False","False","False","False","False","False",0.004575429,0.036596417,0.9587768,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null 1636148068921376768,"Gaia DR3 
2821794351744",2821794351744,1389218613,2016.0,45.124152453120615,1.6364167,0.13677852527701542,1.5161866,null,null,null,null,null,null,null,null,-0.057091914,null,null,null,null,null,null,null,null,null,162,0,145,17,85.27109,11890.783,11.945554,971.3325,3,"False",null,null,null,null,null,null,null,null,18,12,3.0579515,20,5,0,0.17080484,85.50274,58,0,null,0.3759727,0.1906163,0.49689272,0.8553549,-98.59949,-1.5183473,-46.745052,28.801044,"False",162,1180.830086047288,13.438973,87.86609,18.006899,18,448.9534667774457,10.775265,41.66519,18.708038,16,2295.837741126928,23.04338,99.63112,16.345543,2.324459,0,2,0,1,0,2.3624954,0.70113945,1.661356,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,"NOT_AVAILABLE",176.94249514790158,-48.718340373548315,42.70123151869663,-16.2416233995086,"False","False",0,"False","False","False","False","False","False","False","False",0.0,0.0,0.9105993,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null Answer: FITS is a versatile format that may contain binary data. Now, storing numbers in binary rather than in ascii is a great advantage in terms of disk space. Consider for instance a double floating point number, which is represented in binary with 8 bytes. The same number, if written with all significant digits in ascii, will be about 16-19 characters, depending on the sign and how many digits one considers significant. Since each ascii character is 1 byte, this means that the binary format occupies about half of the space. A similar argument can be made for integers and things get even better with booleans, which in binary are just 1 bit, versus the 5 bytes of "false" and the 4 bytes of "true". FITS binary tables consist of usually short headers of key, value, comment which are 80 bytes each followed by the binary table itself which is just the size of the data types. 
So if I have a table of 2 doubles (8 bytes each), 4 floats (4 bytes each) and 1 double value per row, then the space used for that row is just the sum of the sizes (40 bytes), no padding at all. FITS sections (headers and tables/images) are blocked into 2880 byte blocks so a 10 line header uses 2880 bytes not 10*80=800 bytes Therefore my guess would be that at least part of the data in the FITS file is in binary format. Regarding the csv and the ecsv formats, they are similar, but ecsv is a bit more strict on the syntax and allows to store also multidimensional data and unstructured objects. These fancy capabilities of course come at a price, which is larger space on disk, but it is not the case here: the data being stored is the same, it is just a table. The difference becomes apparent when looking at the extracts of the files. They are almost identical, but for the treatment of empty fields. In the csv file, empty fields are just left empty, no characters between the commas. The ecsv writes "null" instead. In the 10 lines of the extract, there are 351 nulls, which contain about 11% of the total number of characters. Since the difference in size between the csv and the ecsv files is about 18%, it is reasonable to assume that the presence of nulls explains most of this difference.
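The size argument is easy to check with Python's standard struct module; a minimal sketch (the sample value is arbitrary):

```python
import struct

value = 3.141592653589793

# In a FITS binary table a double-precision column stores each value
# in exactly 8 bytes, regardless of how many digits it has.
binary_size = len(struct.pack('>d', value))  # FITS uses big-endian

# Written out in ASCII with full precision, the same number costs one
# byte per character -- roughly twice as much.
ascii_size = len(repr(value))

print(binary_size, ascii_size)  # 8 17
```

The roughly 2:1 ratio for doubles (and better for integers) is consistent with the FITS file being about half the size of the text exports.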
{ "domain": "astronomy.stackexchange", "id": 6540, "tags": "star-catalogues, fits" }
CSS selector for image captions
Question: I have this CSS selector: .img + *{ } Which, from my understanding, will make the browser look for all the elements and then select only those which follow an element with the class img. Is there a better way of writing this selection to improve performance? Relevant HTML structure: <div class="media"> <div class="img"> <img src="an-image.jpg"> </div> <random-element>content is here</random-element> </div> Answer: My recommendation would be to not worry about the performance (see: http://www.kendoui.com/blogs/teamblog/posts/12-09-28/css_tip_star_selector_not_that_bad.aspx for actual tests). If you're still not convinced that it's not as bad as everyone makes it out to be, then your only real option is to modify the contents of <random-element />. <div class="media"> <div class="img"> <img src="an-image.jpg"> </div> <random-element><div id="foo">content is here</div></random-element> </div> Which allows your CSS to be: .img + #foo { ... } Or just #foo { ... }
{ "domain": "codereview.stackexchange", "id": 5138, "tags": "performance, css" }
Marking a rectangular region in a NumPy array as an image mask
Question: I'm currently working on creating a mask for an image. I have initialized a two-dimensional numpy zeros array. I now want to replace the values of the mask corresponding to pixels following some conditions such as x1< x < x2 and y1 < y < y2 (where x and y are the coordinates of the pixels) to 1. Is there an easier way to do it (maybe through slicing) without looping through the mask like below clusterMask = np.zeros((h, w)) for x in range(h): for y in range(w): if x <= clusterH + 2 * S and x >= clusterH - 2*S and y <= clusterW + 2*S and y >= clusterW - 2*S: clusterMask[x][y] = 1 Answer: It turns out that Numpy has various nifty ways of indexing. I found that my question can be solved by clusterMask[clusterH - 2*S : clusterH + 2*S, clusterW - 2*S : clusterW + 2*S] = 1 As given in one of the comments, this link contains all information regarding numpy indexing: https://docs.scipy.org/doc/numpy/reference/arrays.indexing.html
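One subtlety worth flagging: the loop's <= comparisons are inclusive on both ends, while Python slices exclude the stop index, so reproducing the loop exactly needs a + 1 on each stop. A minimal sketch (the concrete values for h, w, clusterH, clusterW and S are made up for illustration):

```python
import numpy as np

# Made-up example values; the real ones come from the question's code.
h, w, clusterH, clusterW, S = 10, 12, 5, 6, 1

# Loop version from the question (inclusive on both ends).
loop_mask = np.zeros((h, w))
for x in range(h):
    for y in range(w):
        if clusterH - 2*S <= x <= clusterH + 2*S and \
           clusterW - 2*S <= y <= clusterW + 2*S:
            loop_mask[x][y] = 1

# Sliced version: +1 on the stop indices because slices are half-open.
sliced_mask = np.zeros((h, w))
sliced_mask[clusterH - 2*S : clusterH + 2*S + 1,
            clusterW - 2*S : clusterW + 2*S + 1] = 1

print(np.array_equal(loop_mask, sliced_mask))  # True
```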
{ "domain": "codereview.stackexchange", "id": 32478, "tags": "python, python-3.x, numpy" }
C++ function composition
Question: What is a good way to compose std::function objects in C++? I tried the following, and it seems to work well: template<typename ... Fs> struct compose_impl { compose_impl(Fs&& ... fs) : functionTuple(std::forward_as_tuple(fs ...)) {} template<std::size_t> struct int2type{}; template<size_t N, typename ... Ts> auto apply(int2type<N>, Ts&& ... ts) { return std::get<N>(functionTuple)(apply(int2type<N+1>(),std::forward<Ts>(ts)...)); } static const size_t size = sizeof ... (Fs); template<typename ... Ts> auto apply(int2type<size-1>, Ts&& ... ts) { return std::get<size-1>(functionTuple)(std::forward<Ts>(ts)...); } template<typename ... Ts> auto operator()(Ts&& ... ts) { return apply(int2type<0>(), std::forward<Ts>(ts)...); } std::tuple<Fs ...> functionTuple; }; template<typename ... Fs> auto compose(Fs&& ... fs) { return compose_impl<Fs ...>(std::forward<Fs>(fs) ...); } With this, one can compose functions as long as the signatures fit together. Example: auto f1 = [](std::pair<double,double> p) {return p.first + p.second; }; auto f2 = [](double x) {return std::make_pair(x, x + 1.0); }; auto f3 = [](double x, double y) {return x*y; }; auto g = compose(f1, f2, f3); std::cout << g(2.0, 3.0) << std::endl; //prints '13', evaluated as (2*3) + ((2*3)+1) Comments and suggestions for improvement are welcome! Answer: Generally speaking, it is really well done, for several reasons: std::tuple often takes advantage of the empty base class optimization, which means that since you feed it lambdas, your class will often weigh almost nothing, and everything is correctly forwarded. The only things I see that could be improved are the following ones: You could const-qualify apply and operator(). size should be static constexpr instead of static const to make it even clearer that it is a compile-time constant. You should be consistent when qualifying std::size_t: either use the prefix std:: or leave it, but stay consistent. As you can see, these are really minor improvements. 
I also have some other remarks, but those will be opinions more than actual advice: int2type kind of already exists in the standard and is named std::integral_constant. However, I will concede that it takes another template parameter for the type and that it might be too verbose for your needs. I had some trouble understanding how your recursion worked because it was in ascending order. For some reason, I am more used to descending order. I would have overloaded apply for int2type<0> and not for int2type<size-1> and performed a descending recursion. That would have allowed me to write: template<typename ... Ts> auto operator()(Ts&& ... ts) { return apply(int2type<sizeof ... (Fs) - 1>(), std::forward<Ts>(ts)...); } And then, size wouldn't have had to be a member of the class anymore. But I have to admit that this is an opinion and not a guideline. Your code is good enough that I see almost nothing that could be improved :)
{ "domain": "codereview.stackexchange", "id": 9752, "tags": "c++, c++14, higher-order-functions" }
SBPL Non-Holonomic Motion Primitives
Question: I am not having much luck creating motion primitives that follow non-holonomic type constraints. I keep running into the issue that I have no end poses that match to a start pose. Is there anyone willing to share a modified .m that they were able to implement these types of mprims? Originally posted by blake11 on ROS Answers with karma: 11 on 2013-04-09 Post score: 1 Original comments Comment by rsafonov on 2013-05-01: Are you still looking for the answer? If yes, can you please provide more details about your project? Answer: I am working on a similar problem. I have a dynamical model of a boat (Nomoto turn-rate model, with side-slip and linear speed dynamics added) that I am trying to use to generate motion primitives. As I understand, ideally I should choose primitives that end up in specific grid cells. However, as mentioned by Martin, it is not necessarily easily possible with the non-holonomic model I am using. In my case, currently I am applying certain control actions for a specific period of time to forward propagate the dynamic model. Then whatever end-pose I end up with, I map it into its corresponding grid cell. Depending on discretization used, this approach will result in discontinuities at end grid points. I was wondering if anyone else has come across such a scenario. I would certainly appreciate an input. Thank you very much for your help! Originally posted by Aditya with karma: 287 on 2013-05-28 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 13750, "tags": "ros, sbpl" }
Print bottom view of a binary tree
Question: For a binary tree we define horizontal distance as follows: Horizontal distance(hd) of root = 0 If you go left then hd = hd(of its parent)-1, and if you go right then hd = hd(of its parent)+1. The bottom view of a tree then consists of all the nodes of the tree, where there is no node with the same hd and a greater level. (There may be multiple such nodes for a given value of hd. In this case all of them belong to the bottom view.) I'm looking for an algorithm that outputs the bottom view of a tree. Examples: Suppose the binary tree is: 1 / \ 2 3 / \ / \ 4 5 6 7 \ 8 Bottom view of the tree is: 4 2 5 6 8 7 Ok so for the first example, Horizontal distance of node with value 1: 0, level = 1 Horizontal distance of node with value 2: 0 - 1 = -1, level = 2 Horizontal distance of node with value 3: 0 + 1 = 1, level = 2 Horizontal distance of node with value 4: -1 - 1 = -2, level = 3 Horizontal distance of node with value 5: -1 + 1 = 0, level = 3 Horizontal distance of node with value 6: 1 - 1 = 0, level = 3 Horizontal distance of node with value 7: 1 + 1 = 2, level = 3 Horizontal distance of node with value 8: 0 + 1 = 1, level = 4 So for each vertical line that is for hd=0, print those nodes which appear in the last level of that line. 
So for hd = -2, print 4 for hd = -1, print 2 for hd = 0, print 5 and 6 because they both appear in the last level of that vertical line for hd = 1, print 8 for hd = 2, print 7 One more example for reference: 1 / \ 2 3 / \ / \ 4 5 6 7 / \ / \ / \ / \ 8 9 10 11 12 13 14 15 So the output for this will be: 8 4 9 10 12 5 6 11 13 14 7 15 Similarly for this example hd of node with value 1: 0, level = 1 hd of node with value 2: -1, level = 2 hd of node with value 3: 1, level = 2 hd of node with value 4: -2, level = 3 hd of node with value 5: 0, level = 3 hd of node with value 6: 0, level = 3 hd of node with value 7: 2, level = 3 hd of node with value 8: -3, level = 4 hd of node with value 9: -1, level = 4 hd of node with value 10: -1, level = 4 hd of node with value 11: 1, level = 4 hd of node with value 12: -1, level = 4 hd of node with value 13: 1, level = 4 hd of node with value 14: 1, level = 4 hd of node with value 15: 3, level = 4 So, the output will be: hd = -3, print 8 hd = -2, print 4 hd = -1, print 9 10 12 hd = 0, print 5 6 hd = 1, print 11 13 14 hd = 2, print 7 hd = 3, print 15 So the output will be: 8 4 9 10 12 5 6 11 13 14 7 15
And this is the implementation of this method: void printBottom(Node *node, int level, int hd, int min, map< int, vector<int> >& visited, int lev[], int l) { if(node == NULL) return; if(level == 1){ if(lev[hd-min] == 0 || lev[hd-min] == l){ lev[hd-min] = l; visited[hd-min].push_back(node->data); } } else if(level > 1) { printBottom(node->left, level-1, hd-1, min, visited, lev, l); printBottom(node->right, level-1, hd+1, min, visited, lev, l); } } int main() { find the minimum and maximum values for hd via DFS int lev[max-min+1]; //lev[hd] contains the maximum level for which we have found nodes with this value of hd; initialized with 0's map < int, vector<int> > visited; //the nodes in the bottom view int h = height(root); for (int i=h; i>0; i--){ printBottom(root, i, 0, min, visited, lev, i); } output visited } I am seeking help to do this in a more optimized way, using less space or time. Is there any other efficient method for this problem? Answer: First, you can get the time complexity down to ${\cal O}(n)$, while keeping the same space complexity. You can do this by filling visited in a single run of printBottom: void printBottom(Node *node, int level, int hd, int min, map< int, vector<int> >& visited, int lev[]) { if(node == NULL) return; if(lev[hd-min] < level){ lev[hd-min] = level; visited[hd-min].clear(); //erase old values, they are hidden by the current node } if(lev[hd-min] <= level){ visited[hd-min].push_back(node->data); } printBottom(node->left, level+1, hd-1, min, visited, lev); printBottom(node->right, level+1, hd+1, min, visited, lev); } with the initial call printBottom(root, 1, 0, min, visited, lev); If you insist on the nodes being output in order of increasing value of hd, I don't think you can improve space consumption.
However, if you allow a different order of output, you can get rid of visited, by first determining for each value of 'hd', which level should be output and then making another pass, printing the values that match: void fillLev(Node *node, int level, int hd, int min, int lev[]) { if(node == NULL) return; if(lev[hd-min] < level){ lev[hd-min] = level; } fillLev(node->left, level+1, hd-1, min, lev); fillLev(node->right, level+1, hd+1, min, lev); } void printBottom(Node *node, int level, int hd, int min, int lev[]) { if(node == NULL) return; if(lev[hd-min] == level){ cout << node->data; } printBottom(node->left, level+1, hd-1, min, lev); printBottom(node->right, level+1, hd+1, min, lev); } with calls fillLev(root, 1, 0, min, lev); and printBottom(root, 1, 0, min, lev);.
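If it helps to see the single-pass idea outside C++, here is the same strategy as a breadth-first traversal in Python, keeping for each horizontal distance every node at the deepest level seen so far (Node and bottom_view are my own names, not part of the question):

```python
from collections import deque

class Node:
    def __init__(self, data, left=None, right=None):
        self.data, self.left, self.right = data, left, right

def bottom_view(root):
    """For each horizontal distance keep every node at the deepest
    level seen for that distance, then emit them left to right."""
    if root is None:
        return []
    best = {}                       # hd -> (level, [values at that level])
    queue = deque([(root, 0, 1)])   # (node, hd, level)
    while queue:
        node, hd, level = queue.popleft()
        lvl, vals = best.get(hd, (0, None))
        if level > lvl:
            best[hd] = (level, [node.data])   # deeper node hides old ones
        elif level == lvl:
            vals.append(node.data)            # same depth: keep both
        if node.left:
            queue.append((node.left, hd - 1, level + 1))
        if node.right:
            queue.append((node.right, hd + 1, level + 1))
    return [v for hd in sorted(best) for v in best[hd][1]]

# First example tree from the question: 8 is the right child of 5.
root = Node(1,
            Node(2, Node(4), Node(5, right=Node(8))),
            Node(3, Node(6), Node(7)))
print(bottom_view(root))  # [4, 2, 5, 6, 8, 7]
```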
{ "domain": "cs.stackexchange", "id": 2568, "tags": "algorithms, binary-trees" }
Classification of images of different size
Question: I am doing image classification using Convolutional neural networks, but I have a problem, because the images I want to classify are all of different sizes. My code is the following: import numpy as np import tensorflow as tf import keras from keras.preprocessing.image import ImageDataGenerator trainingset = '/content/drive/My Drive/Colab Notebooks/Train' testset = '/content/drive/My Drive/Colab Notebooks/Test' batch_size = 32 train_datagen = ImageDataGenerator( rescale = 1. / 255,\ zoom_range=0.1,\ rotation_range=10,\ width_shift_range=0.1,\ height_shift_range=0.1,\ horizontal_flip=True,\ vertical_flip=False) train_generator = train_datagen.flow_from_directory( directory=trainingset, target_size=(118, 224), color_mode="rgb", batch_size=batch_size, class_mode="categorical", shuffle=True ) test_datagen = ImageDataGenerator( rescale = 1. / 255) test_generator = test_datagen.flow_from_directory( directory=testset, target_size=(118, 224), color_mode="rgb", batch_size=batch_size, class_mode="categorical", shuffle=False ) num_samples = train_generator.n num_classes = train_generator.num_classes input_shape = train_generator.image_shape classnames = [k for k,v in train_generator.class_indices.items()] print("Image input %s" %str(input_shape)) print("Classes: %r" %classnames) print('Loaded %d training samples from %d classes.' % (num_samples,num_classes)) print('Loaded %d test samples from %d classes.' 
% (test_generator.n,test_generator.num_classes)) and from keras.models import Sequential from keras.layers import Dense, Activation, Dropout, Flatten,\ Conv2D, MaxPooling2D from keras.layers.normalization import BatchNormalization from keras import regularizers from keras import optimizers def AlexNet(input_shape, num_classes, regl2 = 0.0001, lr=0.0001): model = Sequential() # C1 Convolutional Layer model.add(Conv2D(filters=96, input_shape=input_shape, kernel_size=(11,11),\ strides=(2,4), padding='valid')) model.add(Activation('relu')) # Pooling model.add(MaxPooling2D(pool_size=(2,2), strides=(2,2), padding='valid')) # Batch Normalisation before passing it to the next layer model.add(BatchNormalization()) # C2 Convolutional Layer model.add(Conv2D(filters=256, kernel_size=(11,11), strides=(1,1), padding='valid')) model.add(Activation('relu')) # Pooling model.add(MaxPooling2D(pool_size=(2,2), strides=(2,2), padding='valid')) # Batch Normalisation model.add(BatchNormalization()) # C3 Convolutional Layer model.add(Conv2D(filters=384, kernel_size=(3,3), strides=(1,1), padding='valid')) model.add(Activation('relu')) # Batch Normalisation model.add(BatchNormalization()) # C4 Convolutional Layer model.add(Conv2D(filters=384, kernel_size=(3,3), strides=(1,1), padding='valid')) model.add(Activation('relu')) # Batch Normalisation model.add(BatchNormalization()) # C5 Convolutional Layer model.add(Conv2D(filters=256, kernel_size=(3,3), strides=(1,1), padding='valid')) model.add(Activation('relu')) # Pooling model.add(MaxPooling2D(pool_size=(2,2), strides=(2,2), padding='valid')) # Batch Normalisation model.add(BatchNormalization()) # Flatten model.add(Flatten()) flatten_shape = (input_shape[0]*input_shape[1]*input_shape[2],) # D1 Dense Layer model.add(Dense(4096, input_shape=flatten_shape, kernel_regularizer=regularizers.l2(regl2))) model.add(Activation('relu')) # Dropout model.add(Dropout(0.4)) # Batch Normalisation model.add(BatchNormalization()) # D2 Dense Layer 
model.add(Dense(4096, kernel_regularizer=regularizers.l2(regl2))) model.add(Activation('relu')) # Dropout model.add(Dropout(0.4)) # Batch Normalisation model.add(BatchNormalization()) # D3 Dense Layer model.add(Dense(1000,kernel_regularizer=regularizers.l2(regl2))) model.add(Activation('relu')) # Dropout model.add(Dropout(0.4)) # Batch Normalisation model.add(BatchNormalization()) # Output Layer model.add(Dense(num_classes)) model.add(Activation('softmax')) # Compile adam = optimizers.Adam(lr=lr) model.compile(loss='categorical_crossentropy', optimizer=adam, metrics=['accuracy']) return model # create the model model = AlexNet(input_shape,num_classes) model.summary() now, if I do the training, I get: steps_per_epoch=train_generator.n//train_generator.batch_size val_steps=test_generator.n//test_generator.batch_size+1 try: history = model.fit_generator(train_generator, epochs=50, verbose=1,\ steps_per_epoch=steps_per_epoch,\ validation_data=test_generator,\ validation_steps=val_steps) except KeyboardInterrupt: pass if get the following error message: ValueError Traceback (most recent call last) <ipython-input-11-70354a7752ae> in <module>() 3 4 try: ----> 5 history = model.fit_generator(train_generator, epochs=50, verbose=1, steps_per_epoch=steps_per_epoch, validation_data=test_generator, validation_steps=val_steps) 6 except KeyboardInterrupt: 7 pass 8 frames /usr/local/lib/python3.6/dist-packages/keras/engine/training_utils.py in standardize_input_data(data, names, shapes, check_batch_axis, exception_prefix) 139 ': expected ' + names[i] + ' to have shape ' + 140 str(shape) + ' but got array with shape ' + --> 141 str(data_shape)) 142 return data 143 ValueError: Error when checking target: expected activation_9 to have shape (4,) but got array with shape (5,) so, this should mean that the images I want to classify are of different sizes. So how can I do classification in this case? I think I should reshape the images somehow in such a way they have all the same size. 
I have looked on the internet for a solution, but I haven't found anything that works well. Can somebody please help me? Thanks in advance. [EDIT] I am trying to do the following to resize the photos: from PIL import Image import os, sys path = "/content/drive/My Drive/Colab Notebooks/Train" dirs = os.listdir( path ) def resize(): for item in dirs: if os.path.isfile(path+item): im = Image.open(path+item) f, e = os.path.splitext(path+item) imResize = im.resize((200,200), Image.ANTIALIAS) imResize.save(f + ' resized.jpg', 'JPEG', quality=90) resize() In particular, I write this code before building the network. But it still gives me the same error. I am really stuck on this. [EDIT 2] I have also tried to apply this to the sub folders, so if I have: I have considered individually the sub-directories HAZE,SUNNY,CLOUDY,SNOWY, but it still does not work. The fact is that I don't see what I am doing wrong in the code above. Answer: I am not an expert in image classification but from the little I know I can tell you that the images should have the same size because the images are converted to a structured array of size n (pictures) x (width (px) x height (px) x 3). The 3 is due to the RGB channels. If the width and height are not the same, the derived structured array will not have the same size for all individuals. There should be some package that helps you convert the images to the same size.
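If you want to pre-resize the arrays yourself without extra packages, nearest-neighbour sampling with plain NumPy indexing is enough to force a common shape; resize_nearest and the concrete sizes below are my own names and choices, not from any library. (Two side notes on the PIL loop in the question: path+item lacks a path separator, so os.path.join(path, item) is safer, and flow_from_directory already rescales each image to target_size.)

```python
import numpy as np

def resize_nearest(img, out_h, out_w):
    """Nearest-neighbour resize of an (H, W, C) array -- a minimal,
    dependency-free stand-in for PIL's Image.resize."""
    in_h, in_w = img.shape[:2]
    rows = np.arange(out_h) * in_h // out_h   # source row per output row
    cols = np.arange(out_w) * in_w // out_w   # source column per output column
    return img[rows[:, None], cols]

# Two images of different sizes become stackable into one batch.
imgs = [np.ones((118, 224, 3)), np.ones((300, 150, 3))]
batch = np.stack([resize_nearest(im, 200, 200) for im in imgs])
print(batch.shape)  # (2, 200, 200, 3)
```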
{ "domain": "datascience.stackexchange", "id": 6428, "tags": "machine-learning, neural-network, deep-learning, classification, image-classification" }
What did the author of the "Using the robot state publisher on your own robot" tutorial intend by their launch file example?
Question: LINK TO TUTORIAL I am trying to figure out how to publish my robot's state using the 'robot_state_publisher'. I fortunately found the tutorial linked to above, but unfortunately the author was rather vague in their example of a launch file (copied below from the tutorial). <launch> <node pkg="robot_state_publisher" type="robot_state_publisher" name="rob_st_pub" > <remap from="robot_description" to="different_robot_description" /> <remap from="joint_states" to="different_joint_states" /> </node> </launch> I would like to know if anyone can better explain what the difference is between 'robot_description' and 'different_robot_description' and exactly what information these are supposed to contain. Originally posted by RigorMortis on ROS Answers with karma: 88 on 2014-06-03 Post score: 4 Answer: I managed to figure it out. The 'remap' command seems to change parameters that the node uses 'from' some original value 'to' a new value specified by the user. In this case, the 'from' parts are correct, but the 'to' parts need to be changed to reflect your particular robot. For example, if your robot's joint_states are being published to /myrobot/joint_states, and if you load your URDF to the parameter server with this line: <param name="myrobot" textfile="$(find mypackage)/urdf/myrobot.urdf" /> And then you should write the robot state publisher node as below: <launch> <node pkg="robot_state_publisher" type="state_publisher" name="rob_st_pub" > <remap from="robot_description" to="myrobot" /> <remap from="joint_states" to="myrobot/joint_states" /> </node> </launch> *Note that there was also a mistake in the 'node' line of the launch file that I have corrected. I can't be certain that this is correct for all ROS versions, but I can verify that it works on ROS Fuerte perfectly. 
Originally posted by RigorMortis with karma: 88 on 2014-06-03 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by JoSo on 2014-07-03: Hey, I'm a noobie trying this same tutorial. are you setting the <param ... /> within that launch file? I would like to use my custom arm robot urdf with the joint state publisher so I can manipulate the arm in Rviz using the interactive markers. Any thoughts?
{ "domain": "robotics.stackexchange", "id": 18150, "tags": "robot-state-publisher" }
ELI5 what is true breeding?
Question: In "Variation under Domestication", Darwin makes several references to the concept of true breeding: They believe that every race which breeds true, let the distinctive characters be ever so slight, has had its wild prototype. And: I crossed some white fantails, which breed very true, with some black barbs. Before diving into the book, I never heard of the term, so I looked it up and found this definition in the Wikipedia article on Zygosity: A cell is said to be homozygous for a particular gene when identical alleles of the gene are present on both homologous chromosomes. The cell or organism in question is called a homozygote. True breeding organisms are always homozygous for the traits that are to be held constant. But this is rather technical and it's not the kind of explanation I would give to a child. What is true breeding, in simple terms? When is breeding not considered true? Answer: In the context of Darwin's Variation under Domestication, "true breeding" is a phenotypic characteristic rather than a genetic one. True-breeding organisms produce offspring that are identical to themselves, concerning some trait -- i.e. white fantails, when bred with white fantails, produce characteristically white offspring. For diploid organisms, true-breeding typically implies that the parents are homozygous at the locus conferring the trait of interest.
{ "domain": "biology.stackexchange", "id": 10189, "tags": "genetics, evolution, zygosity" }
What is the usage of orbitals more complex than f orbitals?
Question: Every high school learner, in each corner of the world, faces the lesson History of Atom during his courses, just as I did. We learned about s, p, d and f orbitals, though there were no signs of orbitals in molecules. Then I wondered, are there any other orbitals, simpler or more complex, than the four mentioned? Surprisingly, I learned that there are also usages for orbitals g, h, i and even k and l. Yes, I use the word "usages". Because I believe, unless something is useful, it will never enter the domain of science. Anyway, I read in Wikipedia (though not much of it I did understand) that these orbitals are used when describing and doing the measurements of molecular orbitals. Since no element in the periodic table has enough electrons to fill even orbital g (in its base state), in cases of molecular orbitals that have a g defined in themselves, atoms must have been excited. Excitation needs energy, doesn't it? Where does this energy come from? Isn't the formation of new bonds usually exothermic? Answer: Surprisingly, I learned that there are also usages for orbitals g,h,i and even j. Actually, the letter "j" is not used, so it is s, p, d, f, g, h, i, k, l, etc. The higher angular momentum orbitals do enter the domain of science, due to excited states of atoms. Transitions to and from excited states are observable through atomic spectroscopy. For example there is the article Microwave spectroscopy of Al I atoms in Rydberg states: D and G terms
{ "domain": "chemistry.stackexchange", "id": 10433, "tags": "electronic-configuration, orbitals, molecular-orbital-theory, periodic-table" }
Is reversal of magnetic polarity in a planet an instantaneous occurrence?
Question: Just what the title states - Does reversal of magnetic poles in a planet refer to the point in time when reversal is complete? OR Does it refer to the entire drawn out process (assuming the poles flip gradually from 0 through 180 degrees? Answer: As far as I know the term "reversal of magnetic poles" doesn't have a strict definition, so I suppose different commentators might use it in different ways. However I suspect most of us would use it to describe the whole process. You describe the process as "drawn out" but no-one knows how long it takes because the dynamics of the Earth's core are poorly understood. On a geological timescale the process looks instantaneous, but then geological timescales are pretty long. Models suggest it could be pretty quick, though how realistic the liquid sodium models are is open to debate.
{ "domain": "physics.stackexchange", "id": 3800, "tags": "electromagnetism, polarization, earth" }
Why is salt so hard to remove from water?
Question: Water molecules and various salt molecules are very different. However, it seems very difficult to separate the two. Once a salt is dissolved in water, an energy or chemical intensive method (like boiling) is required to separate the salt back out again. Why is this? Answer: From The Feynman Lectures on Physics, Vol. I [1]: If we put a crystal of salt in the water, what will happen? Salt is a solid, a crystal, an organized arrangement of “salt atoms.” [...] Strictly speaking, the crystal is not made of atoms, but of what we call ions. An ion is an atom which either has a few extra electrons or has lost a few electrons. In a salt crystal we find chlorine ions (chlorine atoms with an extra electron) and sodium ions (sodium atoms with one electron missing). The ions all stick together by electrical attraction in the solid salt, but when we put them in the water we find, because of the attractions of the negative oxygen and positive hydrogen for the ions, that some of the ions jiggle loose. [Figure 1–6] In Fig. 1–6 we see a chlorine ion getting loose, and other atoms floating in the water in the form of ions. This picture was made with some care. Notice, for example, that the hydrogen ends of the water molecules are more likely to be near the chlorine ion, while near the sodium ion we are more likely to find the oxygen end, because the sodium is positive and the oxygen end of the water is negative, and they attract electrically. Feynman has done well explaining the process from an atomic point of view. Now to the difficulty of separating salt from water in a salt solution. During the process of boiling, the intermolecular forces will be broken between water molecules, and also between ions and water molecules. Water molecules ($\mathrm{H_2O}$) being less massive ($18.01528(33)$) than the other two individual ions ($\mathrm{Na}$, $22.98976928(2)$ – $\mathrm{Cl}$, $35.45(1)$), fly off easily, leaving sodium and chlorine ions.
These ions once again attract each other to form crystals. In other words, energy is required to break the intermolecular forces and release ions from prison to join their partners. Reference Feynman Lectures on Physics. Vol. 1, pp. 1–6 (numbers may vary depending on edition).
{ "domain": "physics.stackexchange", "id": 28234, "tags": "energy, water, physical-chemistry, atomic-physics" }
Blackjack game made in Python 3
Question: This is a simple blackjack game I finished making using Python. I hope you like it and I'm open to any suggestions or critiques you would give me.

    from random import shuffle
    import sys

    def deal(deck, player, dealer):
        shuffle(deck)
        for _ in range(2):
            player.append(deck.pop())
            dealer.append(deck.pop())

    def score(hand):
        non_aces = [c for c in hand if c != 'A']
        aces = [c for c in hand if c == 'A']
        total = 0
        for card in non_aces:
            if card in 'JQK':
                total += 10
            else:
                total += int(card)
        for card in aces:
            if total <= 10:
                total += 11
            else:
                total += 1
        return total

    def display_info(player, dealer, stand):
        print("Your cards: [{}] ({})".format(']['.join(player), score(player)))
        if stand:
            print("Dealer cards: [{}] ({})".format(']['.join(dealer), score(dealer)))
        else:
            print("Dealer cards: [{}] [?]".format(dealer[0]))

    def results(player, dealer, hand, stand):
        if score(player) == 21 and hand:
            print("Blackjack! You won!")
            sys.exit()
        elif score(player) > 21:
            print("Busted. You lost!")
            sys.exit()
        if stand:
            if score(dealer) > 21:
                print("Dealer busted. You won!")
            elif score(player) > score(dealer):
                print("You beat the dealer! You won!")
            elif score(player) < score(dealer):
                print("You lost!")
            else:
                print("Push. Nobody wins or loses.")
            sys.exit()

    def hit_stand(deck, player, dealer, hand, stand):
        print("What would you like to do")
        print("[1] - Hit\n[2] - Stand")
        choice = input("> ")
        hand = False
        if choice == '1':
            player.append(deck.pop())
        elif choice == '2':
            stand = True
            while score(dealer) <= 16:
                dealer.append(deck.pop())
        display_info(player, dealer, stand)
        results(player, dealer, hand, stand)

    if __name__ == '__main__':
        deck = ['A', '2', '3', '4', '5', '6', '7', '8', '9', '10', 'J', 'Q', 'K']*4
        player = []
        dealer = []
        standing = False
        first_hand = True
        deal(deck, player, dealer)
        while True:
            display_info(player, dealer, standing)
            results(player, dealer, first_hand, standing)
            hit_stand(deck, player, dealer, first_hand, standing)

The follow-up is "Blackjack game - follow-up".
Answer: Welcome to CodeReview, and greetings! Your code looks nice: it is well indented, mostly PEP8-conformant, and has clear names. I suspect you may not have spent enough time in Atlantic City or Las Vegas, however. ;-)

Encapsulation

You correctly guard your "main routine" code with if __name__ == '__main__' but you didn't bother to put all that code into a function. Please do so. Call it main if you like, but take the chance to wrap it all up into something that can be invoked from outside, for testing.

    def main():
        ''' Play one hand of blackjack. '''
        deck = ['A', '2', '3', '4', '5', '6', '7', '8', '9', '10', 'J', 'Q', 'K']*4
        player = []
        dealer = []
        standing = False
        first_hand = True
        deal(deck, player, dealer)
        while True:
            display_info(player, dealer, standing)
            results(player, dealer, first_hand, standing)
            hit_stand(deck, player, dealer, first_hand, standing)

    if __name__ == '__main__':
        main()

Similarly, have a look at this:

    if choice == '1':
        player.append(deck.pop())

And this:

    while score(dealer) <= 16:
        dealer.append(deck.pop())

What are you doing there? That seems like another opportunity to use a name to describe what's happening:

    if choice == '1':
        deal_card(shoe, player)

    # ... and ...

    while score(dealer) <= 16:
        deal_card(shoe, dealer)

Also, what's 16? Is that a magic number? Should it have a name? (Hint: yes!)

    while score(dealer) <= DEALER_MINIMUM_SCORE:
        deal_card(shoe, dealer)

While we're looking at the hit_stand function, consider that it does two things:

    def hit_stand(deck, player, dealer, hand, stand):
        print("What would you like to do")
        print("[1] - Hit\n[2] - Stand")
        choice = input("> ")
        ########## End of part 1, start of part 2 ##########
        hand = False
        if choice == '1':
            player.append(deck.pop())
        elif choice == '2':
            stand = True
            while score(dealer) <= 16:
                dealer.append(deck.pop())
        display_info(player, dealer, stand)
        results(player, dealer, hand, stand)

First, it asks for user input. Then it tries to update the game state based on that input.
That's one thing too many! Create separate functions for separate things. I know it may seem pointless when some of those functions are one line long. But it represents a chance for you to "write" an explanation of what you are doing, and it also provides a place to make changes to the behavior. Also, consider that when you "grow" this blackjack game you will really want places to add behavior. There are many more possibilities than "hit or stand" at the table! And always remember, user input is unreliable! You have to validate it. What if I type "hit" or "stand" instead of 1 or 2?

Control Flow

You don't actually have an exit condition for your while loop. Instead, you are relying on calls to sys.exit() from lower in your code. That's not going to work for very long as your code complexity grows. Instead, why not create a boolean variable still_playing = True and use while still_playing: as your loop condition. Then you can change that value when it's appropriate.

And your program structure doesn't really resemble the flow of a blackjack game. At a blackjack table, the dealer deals starting cards to all the players, then each player "plays" their entire game, then the dealer "plays" according to his rules, then the results are evaluated. You jammed your "dealer plays" element into the hit_stand function. Pry it loose! Make your flow look more like the real flow of the table, and it will be easier to understand and easier to extend.

Data Flow

In Pascal, a programming language we used in that last millennium by training dinosaurs to interpret the statements for us, there were two kinds of subroutines, procedures and functions. The difference was that a function could return a value, while a procedure just executed statements with no returned result. That discrimination doesn't exist in Python: you can do either thing, but it all starts with def. Regardless, you seem to write only subroutines that do not return values. I think you should change that.
Creating and shuffling the deck is a place where a function could be put to good use, especially if you grow your game. Just write:

    shoe = new_shoe()

or later:

    shuffle_shoe(shoe)

(If you can't tell, this is heading towards a Shoe class. That will come in a much later review. Also: a shoe is what the dealer takes the cards from.)

Speaking of data, have a look at this:

    def hit_stand(deck, player, dealer, hand, stand):
        print("What would you like to do")
        print("[1] - Hit\n[2] - Stand")
        choice = input("> ")
        hand = False

Notice that you take a hand parameter, do nothing with it, and then set it to False? What's up with that? The only thing you use it for is to pass it down to results, but you could just as well write results(..., False, ...) and not bother with the hand parameter.

Also, have a look at the results subroutine. You call it in the middle of the loop, right after displaying the hands. But you've divided showing the hands and evaluating them, when that's kind of the same thing. And you're ignoring the distinction between "showing the current situation" and "wrapping up the end of the game." I'd suggest that your checking for blackjack and for busting should go near where you deal the cards, and should update the loop control variable.

Game Flow

There are a few aspects of blackjack that you are missing, even at this simple level. First, check for and announce blackjack, for both the dealer and the player! If a player gets blackjack, there's no Hit/Stand question, there's only the question of whether the dealer pushes. And if the dealer gets blackjack, the game is over before it began. So check that first. The obvious extras, like assigning and displaying card suits as well as ranks, multiple players, support for splits, doubling down, insurance, surrendering, etc., would all take you away from a "simple" blackjack game. But your code should be organized in such a way that it is simple to add those features!
Update: To answer your question from the comments,

Also, how do I handle the choice variable in the second function if I split hit_stand() as you suggested? And how results() should look like after I make still_playing Boolean?

Let's say hit_or_stand becomes a separate function. You would write something like:

    def hit_or_stand():
        while True:
            ans = input("Hit or stand? ").lower()
            if ans in ('hit', 'stand'):
                return ans

    # In your main loop:
    player_stands = False
    while not player_stands:
        # stuff like before
        if hit_or_stand() == 'stand':
            player_stands = True
        if not player_stands:
            deal_one_card(shoe, player)
            if score(player) > 21:
                break

    # After loop: dealer takes cards, etc.
    results()

The results can stay mostly the same. But some of your passed in variables are not needed, since you can compute them from the other data. For example, if the len(player) is 2, that's the first_hand flag. What's key is that results is outside the loop, not inside. You print the results after breaking/exiting from the loop.
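Putting several of these review suggestions together, the non-interactive core of the game might look like the sketch below. This assembly is mine, not the reviewer's; it also tightens the ace handling in score, which in the original could over-count a second ace (e.g. A, K, A scored as 22 instead of 12):

```python
from random import shuffle

DEALER_MINIMUM_SCORE = 16  # dealer draws at 16 or below, stands at 17+

def new_shoe(decks=1):
    """Build and shuffle the shoe the dealer deals from."""
    ranks = ['A'] + [str(n) for n in range(2, 11)] + ['J', 'Q', 'K']
    shoe = ranks * 4 * decks
    shuffle(shoe)
    return shoe

def deal_card(shoe, hand):
    hand.append(shoe.pop())

def score(hand):
    """Best blackjack score: count every ace as 1, then promote one to 11 if it fits."""
    total = sum(1 if c == 'A' else 10 if c in ('J', 'Q', 'K') else int(c)
                for c in hand)
    if 'A' in hand and total + 10 <= 21:
        total += 10
    return total

def dealer_plays(shoe, dealer):
    """The dealer's fixed strategy, pried loose from hit_stand."""
    while score(dealer) <= DEALER_MINIMUM_SCORE:
        deal_card(shoe, dealer)

print(score(['A', 'K']))       # 21
print(score(['A', 'A', '9']))  # 21
print(score(['A', 'K', 'A']))  # 12
```

Because each piece returns a value or mutates one explicit argument, every function here can be exercised by tests without patching input().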
{ "domain": "codereview.stackexchange", "id": 33745, "tags": "python, beginner, python-3.x, playing-cards" }
Shear strain correlations from displacement Fourier transform
Question: Currently working with molecular dynamics simulations, I would like to compute shear strain correlations in my 2-dimensional system. How I used to do things Accumulated shear strain at position $\vec{r}$ between times $t$ and $t + \Delta t$ is defined as $$ \varepsilon_{xy}(\vec{r}, t, t + \Delta t) = \frac{1}{2}\left(\frac{\partial}{\partial x} u_y(\vec{r}, t, t + \Delta t) + \frac{\partial}{\partial y} u_x(\vec{r}, t, t + \Delta t)\right) $$ with $\vec{u}(\vec{r}, t, t + \Delta t) = \begin{pmatrix} u_x(\vec{r}, t, t + \Delta t) \\ u_y(\vec{r}, t, t + \Delta t) \end{pmatrix}$ the displacement of the particle initially at position $\vec{r}$ at time $t$ between times $t$ and $t + \Delta t$. Hence the shear strain auto-correlation function $$ C_{\varepsilon_{xy}\varepsilon_{xy}}(\Delta \vec{r}, \Delta t) = \frac{\int dt \int d^2\vec{r}~ \varepsilon_{xy}(\vec{r}, t, t + \Delta t) \varepsilon_{xy}(\vec{r} + \Delta \vec{r}, t, t + \Delta t) }{\int dt \int d^2\vec{r}~ \varepsilon_{xy}(\vec{r}, t, t + \Delta t)^2} $$ which I want to compute. One can notice that $$ \int d^2\vec{r}~ \varepsilon_{xy}(\vec{r}, t, t + \Delta t) \varepsilon_{xy}(\vec{r} + \Delta \vec{r}, t, t + \Delta t) = \mathcal{F}^{-1}\{\mathcal{F}\{\varepsilon_{xy}\}^* \times \mathcal{F}\{\varepsilon_{xy}\}\}(\Delta \vec{r}, t, t + \Delta t) $$ with $\mathcal{F}$ the Fourier transform operator. Computationally speaking, this identity is very useful to quickly evaluate correlations. Up to now, I have then followed this method: 1. Coarse-grain shear strain at positions linearly distributed on a grid from particles' positions between times $t$ and $t + \Delta t$, following J. Chattoraj and A. Lemaître, Phys. Rev. Lett. 111, 066001 (2013) (available here) and Goldhirsch, I. & Goldenberg, C. Eur. Phys. J. E (2002) 9: 245 (available here). 2. Compute shear strain correlations using Fast Fourier Transform (FFT) then inverse FFT from the obtained grid.
This method works, but is unfortunately very slow despite my best efforts to enhance my code... How I would like to do things There is in B. Illing, S. Fritschi, D. Hajnal, C. Klix, P. Keim, and M. Fuchs, Phys. Rev. Lett. 117, 208002 (2016) (available here with supplemental material) a method to compute shear strain correlations from the displacement Fourier transform. For that they introduce — without much explanation — the transversal and longitudinal "collective mean-square displacement" in Fourier space, respectively $C^{\perp}(\vec{q}, \Delta t)$ and $C^{||}(\vec{q}, \Delta t)$, with $\vec{q} = \begin{pmatrix}q_x \\ q_y\end{pmatrix}$ the wave vector, and then claim that (see equation 10 in supplemental material) $$ C_{\varepsilon_{xy}\varepsilon_{xy}}(\Delta \vec{r}, \Delta t) = \mathcal{F}^{-1}\left\{\left(C^{\perp}(\vec{q}, \Delta t) - C^{||}(\vec{q}, \Delta t)\right)\frac{-q_x^2q_y^2}{q^2} + C^{\perp}(\vec{q}, \Delta t) \frac{q_x^2 + q_y^2}{4}\right\}(\Delta \vec{r}, \Delta t) $$ What I don't understand First of all, I have struggled to understand the significance of $C^{\perp}$ and $C^{||}$. Inspired by F. Leonforte, R. Boissière, A. Tanguy, J. P. Wittmer, and J.-L. Barrat, Phys. Rev. B 72, 224206 (2005) (available here), I used the following definitions $$ \begin{aligned} C^{\perp}(\vec{q}, \Delta t) &= \frac{1}{q^2} \left< ||\vec{q}\wedge\mathcal{F}\{\vec{u}\}(\vec{q}, t, t + \Delta t)||^2 \right>\\ C^{\parallel}(\vec{q}, \Delta t) &= \frac{1}{q^2} \left< ||\vec{q}\cdot\mathcal{F}\{\vec{u}\}(\vec{q}, t, t + \Delta t)||^2 \right> \end{aligned} $$ where $\left<\right>$ denotes an average over times $t$. Using these definitions works — almost — fine, and computing shear strain correlations is now incredibly quicker. However I am unable to work through the math and derive the strain correlation expression from these definitions. Not having a solid mathematical proof also keeps me from knowing if I forgot some factors or if I am completely mistaken.
If you know this proof or the correct definitions of the collective mean-squared displacements $C^{\perp}(\vec{q}, \Delta t)$ and $C^{||}(\vec{q}, \Delta t)$, or have seen either one elsewhere, this would help me a lot! Thank you! Answer: Since nobody has answered this, I shall have a go. Most likely you have worked this out by now, but others may come across this question, so it may be helpful. I believe that your definitions of $C^\perp(\vec{q})$ and $C^\parallel(\vec{q})$ are correct, and hopefully I can shed some light. I think that the collective mean-square displacement tensor is defined $$ \mathbb{C} = \langle \vec{u}(\vec{q})^* \, \vec{u}(\vec{q}) \rangle $$ i.e. as a dyadic product, a $2\times2$ matrix (in 2D). I'm omitting the time argument(s) throughout for clarity. Also we would normally use a hat or tilde to indicate Fourier transformed variables, but I'm omitting that as well. Now, for nonzero $\vec{q}$, this is not an isotropic tensor, even though the material (a glass) is taken to be isotropic. However, it is clear (by symmetry) that in a coordinate system based on unit vectors $(\vec{e}_\parallel,\vec{e}_\perp)$, defined so that $\vec{q}=q\vec{e}_\parallel$, and $\vec{e}_\perp$ is perpendicular to $\vec{q}$, the tensor will be diagonal. There will be a longitudinal component of $\vec{u}$, parallel to $\vec{q}$, and a transverse component, perpendicular to $\vec{q}$: $$ \mathbb{C}' = \begin{pmatrix} \langle |u_\parallel(\vec{q})|^2 \rangle & 0 \\ 0 & \langle |u_\perp(\vec{q})|^2 \rangle \end{pmatrix} \equiv \begin{pmatrix} C^\parallel(\vec{q}) & 0 \\ 0 & C^\perp(\vec{q}) \end{pmatrix} $$ All the physics lies in those two functions $C^\parallel(\vec{q})$ and $C^\perp(\vec{q})$; there is no cross term. The definitions of these functions given here are the same (I believe) as the ones you took from the paper by the Barrat group. 
The various coefficients in the complicated strain correlation expression, eqn (10) in the supplementary material for the Illing paper, are simply what's needed to rotate the matrix back from this diagonal form $\mathbb{C}'$ to the space-fixed form $\mathbb{C}$, and to use it to calculate the desired quantity related to the strain $C_{\varepsilon_{xy}\varepsilon_{xy}}$ rather than simply the displacement. The space-fixed $xy$ system is arbitrary, of course, but fixed; whereas you'll be considering a wide variety of $\vec{q}$ vectors. The cosine and sine of the rotation angle $\phi$ between the two coordinate systems are simply related to the components of the unit vector derived from $\vec{q}$. The conversion formula is $$ \begin{pmatrix} u_x \\ u_y \end{pmatrix} = \begin{pmatrix} \cos\phi & -\sin\phi \\ \sin\phi & \cos\phi \end{pmatrix} \begin{pmatrix} u_\parallel \\ u_\perp \end{pmatrix} = \begin{pmatrix} q_x/q & -q_y/q \\ q_y/q & q_x/q \end{pmatrix} \begin{pmatrix} u_\parallel \\ u_\perp \end{pmatrix} $$ The strain is the symmetrized gradient of the displacement, so the appropriate term in Fourier space is (where $i=\sqrt{-1}$) \begin{align*} \varepsilon_{xy}(\vec{q}) &= \frac{1}{2}\left[ iq_x u_y(\vec{q}) + iq_y u_x(\vec{q})\right] \\ \langle |\varepsilon_{xy}|^2\rangle &= \frac{1}{4}\left[ q_x^2 \langle | u_y |^2\rangle + q_y^2 \langle | u_x |^2 \rangle + q_xq_y \langle u_x^* u_y\rangle + q_xq_y \langle u_y^* u_x\rangle \right] \end{align*} Substituting for $(u_x,u_y)$ using the above rotation formula is tedious but straightforward, and there is some simplification since all cross terms vanish: \begin{align*} C_{\varepsilon_{xy}\varepsilon_{xy}} = \langle |\varepsilon_{xy}|^2\rangle &= \frac{q_x^2q_y^2}{q^2} \langle | u_\parallel |^2 \rangle +\frac{q_x^4+q_y^4-2q_x^2q_y^2}{4q^2}\langle | u_\perp |^2\rangle \\ &= \frac{q_x^2q_y^2}{q^2} C^\parallel +\frac{q_x^4+q_y^4-2q_x^2q_y^2}{4q^2} C^\perp \end{align*} Bearing in mind that $q_x^2+q_y^2=q^2$, this is 
identical with the formula that you were concerned about.
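The reconstruction formula can also be checked numerically. The sketch below (my own, using numpy; grid size and field are arbitrary) builds a purely transverse displacement field, for which the longitudinal part and hence the dropped cross terms vanish identically, and verifies that the formula reproduces the directly computed strain power $\langle|\varepsilon_{xy}(\vec{q})|^2\rangle$ mode by mode:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 32
q1 = 2 * np.pi * np.fft.fftfreq(N)
qx, qy = np.meshgrid(q1, q1, indexing='ij')
q2 = qx**2 + qy**2
q2[0, 0] = 1.0  # avoid 0/0 at q = 0; that mode carries no strain anyway

# Purely transverse (divergence-free) displacement field: u(q) = (-qy, qx) * phi(q)
phi = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
ux_hat = -qy * phi
uy_hat = qx * phi

# Longitudinal / transverse mean-square displacements (single realization)
C_par = np.abs(qx * ux_hat + qy * uy_hat)**2 / q2
C_perp = np.abs(qx * uy_hat - qy * ux_hat)**2 / q2

# Direct Fourier-space strain power: eps_xy(q) = (i/2)(qx u_y + qy u_x)
direct = np.abs(0.5j * (qx * uy_hat + qy * ux_hat))**2

# The answer's result, using qx^4 + qy^4 - 2 qx^2 qy^2 = (qx^2 - qy^2)^2
formula = (qx**2 * qy**2 / q2) * C_par + ((qx**2 - qy**2)**2 / (4 * q2)) * C_perp

print(np.allclose(direct, formula))  # True
```

The real-space correlation then follows from an inverse FFT (np.fft.ifft2) of the bracketed quantity, averaged over time origins; for a generic field the agreement holds only after that average, since the cross terms vanish by isotropy rather than identically.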
{ "domain": "physics.stackexchange", "id": 50974, "tags": "condensed-matter, computational-physics, fourier-transform, correlation-functions, soft-matter" }
Finding a minimal cover of a subset of a finite cartesian product by cartesian products
Question: Given a subset of a cartesian product $I \times J$ of two finite sets, I wish to find a minimal cover of it by sets which are cartesian products themselves. For example, given a product between $I=\{A,B,C\}$ and $J=\{1,2,3\}$, I may observe the subset $\{(A,2), (B,3), (B,2)\}$ and try to cover it with a minimal number of cartesian products. Two ways to do so are $\{A\} \times \{2\} + \{B\} \times \{2,3\}$ and $\{A,B\}\times \{2\} + \{B\}\times \{3\}$, both requiring 2 products. A sub-optimal solution may be breaking it into 3 trivial products. Can such an optimal cover be found efficiently (e.g., in polynomial time)? Answer: NM reformulates this problem in the comments as finding the minimum number of bipartite cliques (bicliques) that cover a bipartite graph: the two sets you mention are the two vertex sets of the bipartite graph, and the cartesian products of subsets of the two vertex sets are bicliques. Wikipedia states this is the bipartite dimension problem; it is problem GT18 in Garey and Johnson, proved NP-complete by a straightforward reformulation of the set basis problem SP7.
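Since the problem is NP-complete, only brute force is guaranteed optimal in general; a small sketch (mine) that solves tiny instances such as the question's example:

```python
from itertools import chain, combinations

def min_biclique_cover(edges):
    """Smallest number of cartesian-product sets (bicliques) covering the given
    edge set.  Brute force -- exponential time, tiny instances only, since the
    problem (bipartite dimension, GT18) is NP-complete."""
    edges = set(edges)
    left = {i for i, _ in edges}
    right = {j for _, j in edges}

    def subsets(s):
        s = sorted(s)
        return chain.from_iterable(combinations(s, r) for r in range(1, len(s) + 1))

    # Candidate bicliques: A x B whose product lies entirely inside the edge set
    bicliques = [frozenset((i, j) for i in A for j in B)
                 for A in subsets(left) for B in subsets(right)
                 if all((i, j) in edges for i in A for j in B)]

    for k in range(1, len(edges) + 1):
        for combo in combinations(bicliques, k):
            if frozenset().union(*combo) == edges:
                return k

print(min_biclique_cover({('A', 2), ('B', 3), ('B', 2)}))  # 2
```

On the question's example it confirms that no single product covers all three pairs, and that two (e.g. $\{A,B\}\times\{2\}$ plus $\{B\}\times\{3\}$) suffice.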
{ "domain": "cs.stackexchange", "id": 5530, "tags": "algorithms, set-cover" }
Understanding this Haar Wavelet Example
Question: I'm trying to understand the Wavelet Transform. I've done well enough understanding the Continuous Wavelet Transform, which was easy enough: we simply stretch the wavelet and match it with the original signal. But I couldn't understand the Discrete Wavelet Transform; in fact I've always had trouble understanding the discrete version of any transform (like the DFT and DCT, which are still not clear to me). According to the Wavelet transform, we stretch the wavelet and then match it with the original signal to find out the frequency. This is the example shown in the book: In this hypothetical example the student does fairly well the first half of the term then neglects his or her studies for the last half. Thus the exam scores for the term were 80%, 80%, 80%, 80%, 0%, 0%, 0%, and 0%. We can tell the average of all the scores (40%) and when the scores “tanked” after the 4th exam just by looking. Knowing the answer in advance, however, is a good way to learn and to verify the wavelet transforms. Then we can use them with confidence on real-world data where we can’t simply “eyeball” the final values. We will now walk through the CWT process step by step using the simplest of the wavelet filters on this example. We begin by comparing the humble Haar wavelet filter, [1 –1]

    Signal -> [80 80 80 80 0 0 0 0]
    Filter (or signal that will be stretched later) -> [1 -1]

Comparing the first 2 points with the wavelet filter we obtain 80 – 80 = 0. For this very simple high-pass filter we can say there was no change in the first 2 exam scores.
This is how we've done it,

    80*1 + 80*(-1) => 80 - 80 = 0

Now we have, [0 80 80 80 0 0 0 0]. Shifting once to the right and applying the filter again we get,

    80*1 + 80*(-1) => 80 - 80 = 0

Now we have, [0 0 80 80 0 0 0 0]. Shifting once again to the right and applying the filter again we get,

    80*1 + 80*(-1) => 80 - 80 = 0

Now we have, [0 0 0 80 0 0 0 0]. Shifting again,

    80*1 + 0*(-1) => 80 - 0 = 80

Now we have, [0 0 0 80 0 0 0 0]. We will eventually end up with [0, 0, 0, 80, 0, 0, 0]. This is significant in that this wavelet process of comparison and shifting has just indicated a large change between the 4th and 5th exam. We have “found the discontinuity”. This actually makes sense, as we had a large change from the 4th to the 5th point. But how does this discontinuity define a frequency here? Now if we stretch our filter from [1 -1] to [1 0 -1] and repeat the process, we will eventually end up with,

    –80 –80 0 0 80 80 0 0 0 0

When we used the filter [1 -1], our output at least made some sense, it clearly shows a rapid change, but what happened here in the case of the 3-point filter? What are these weird values (-80 -80)? If we keep scaling up to 10 times, here is what we'll eventually get,

    (scale = 10) –320 –160   0  160  320  320  240  160
    (scale =  9) –240  –80  80  240  320  240  160   80
    (scale =  8) –320 –160   0  160  320  240  160   80
    (scale =  7) –240  –80  80  240  240  160   80    0
    (scale =  6) –240 –160   0  160  240  160   80    0
    (scale =  5) –160  –80  80  160  160   80    0    0
    (scale =  4) –160  –80   0   80  160   80    0    0
    (scale =  3)  –80    0   0   80   80    0    0    0
    (scale =  2)  –80    0   0    0   80    0    0    0
    (scale =  1)    0    0   0    0    0    0    0    0

While studying the Continuous Wavelet Transform the scaling made sense, but I couldn't understand the scaling here. What do these points -80, -160, -240, -320 represent? This is the final output. How can I relate this output to my original signal? Answer: It looks like there is a problem with your scaling. The scaling for the DWT has the same interpretation as it does for the CWT.
For the DWT, the scale of the analyzing function (wavelet) is increased using a dyadic scale (increasing by factors of 2) in non-overlapping time intervals. Increasing scale corresponds to narrower frequency distributions in the frequency domain. Basically think of the analyzing function (wavelet) as a bandpass filter for which you are both narrowing the bandwidth and shifting down the center frequency by increasing the time scale. As an example (see images below) I've transformed an audio signal using a CWT and then again using a DWT. Notice that the results look similar, they scale in the same way, except in the DWT, the regions of analysis are discrete non-overlapping intervals (sharp boundaries). Notice also that as the time scale increases (heading south on the CWT diagram), you have fewer DWT coefficients, so the number of blocks in the image decrease. I don't see this kind of scaling in your data.
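The question's hand computations are easy to reproduce numerically; a quick check (numpy, my own sketch). The "weird" −80s of the three-tap filter are boundary values that appear when the comparison is allowed to slide past the ends of the signal, i.e. a "full" correlation rather than a "valid" one:

```python
import numpy as np

x = [80, 80, 80, 80, 0, 0, 0, 0]

# Two-tap Haar comparison x[n] - x[n+1], no boundary overlap ("valid"):
d = [x[i] - x[i + 1] for i in range(len(x) - 1)]
print(d)  # [0, 0, 0, 80, 0, 0, 0]

# Three-tap [1 0 -1] comparison including the partial overlaps at the ends
# ("full" correlation == convolution with the time-reversed filter):
print(np.convolve(x, [-1, 0, 1]).tolist())
# [-80, -80, 0, 0, 80, 80, 0, 0, 0, 0]
```

The first output reproduces [0, 0, 0, 80, 0, 0, 0], and the second reproduces the –80 –80 0 0 80 80 0 0 0 0 row, confirming that the leading negatives are edge effects, not new signal features.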
{ "domain": "dsp.stackexchange", "id": 723, "tags": "wavelet" }
Better way than setInterval to wait for an image load?
Question: This is really a simple affair, I just wonder if there is a better way to do it:

    let el = document.querySelectorAll('#comic img');

    let loadPromise = new Promise(function(resolve, reject) {
        let ticker = setInterval(() => {
            if (el[0].complete) {
                clearInterval(ticker);
                resolve();
            }
        }, 100);
        setTimeout(() => {
            clearInterval(ticker);
            reject('Timeout');
        }, 5000)
    });

    loadPromise.then(() => {
        console.log(el[0].naturalWidth);
        let container = document.querySelectorAll('.main-wrapper > div > section > section.column.column-1.grid-left-4.grid-width-16');
        container[0].style.width = (el[0].naturalWidth + 420) + 'px';
    });

I must wait for the image to load, so I know its size so that I can adjust an element on the page. But setInterval (aka polling the element) seems so... medieval. Is there a nicer way in modern JS?

Answer: "Must wait for the image to load" ... "Is there a nicer way?" Yes. At first, instead of accessing the image by index el[0], extract the image at once and give the variable a meaningful name:

    let img = document.querySelectorAll('#comic img')[0];

Next, you need to embed a "load" event listener into the Promise executor function to ensure the image has loaded:

    let loadPromise = new Promise(function(resolve, reject) {
        img.addEventListener('load', function() {
            resolve();
        });
        setTimeout(() => {
            if (!img.complete) reject('Timeout');
        }, 5000)
    });

    loadPromise.then(() => {
        console.log(img.naturalWidth);
        let container = document.querySelectorAll('.main-wrapper > div > section > section.column.column-1.grid-left-4.grid-width-16');
        container[0].style.width = (img.naturalWidth + 420) + 'px';
    });
{ "domain": "codereview.stackexchange", "id": 36616, "tags": "javascript, asynchronous" }
How is energy conserved in an elastic collision between two unequal masses?
Question: In an elastic collision between two masses, if one mass is much heavier than the other, then the heavier mass will continue to move with the same velocity while the lighter mass doubles its velocity. How is the Law of Conservation of Energy conserved in this? I concluded that the assumption of neglecting the change in velocity of the heavier mass is responsible for an increase in kinetic energy of the whole system, thus failing to satisfy the Law of Conservation of Energy. Is this a correct explanation? Answer: You should recognize this "the heavier mass will continue to move with the same velocity" as an approximation. The actual velocity will be reduced, and you can figure out how much from the conservation of momentum. $$ \Delta V \approx - 2 V \frac{m}{M} \tag{for $M \gg m$}\;.$$ (Strictly speaking this is another approximation, but now we're talking about a small correction to a small correction and I'm going to ignore it.) Now that is a small change in speed but by hypothesis it is connected with a large mass so it results in a non-trivial change of kinetic energy \begin{align} \Delta K_M &= \frac{1}{2} M \left[ (V + \Delta V)^2 - V^2\right] \\ &= \frac{1}{2} M \left[ 2 V \Delta V + (\Delta V)^2\right] \;. \end{align} Now, we factor one power of $\Delta V$ out and notice that it cancels the $M$ to give \begin{align} \Delta K_M &= - m V \left[ 2V + \Delta V \right] \\ &\approx - m V \left[ 2V \right] \\ &= - \frac{1}{2} m (2V)^2 \end{align} The large mass loses approximately the same energy as the small one gains. Keeping all the terms is more algebraically difficult, but follows exactly the same pattern.
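As a numeric sanity check of this answer (my own sketch, using the standard exact formulas for a 1D elastic collision with the light mass initially at rest):

```python
def elastic_1d(M, m, V):
    """Exact final velocities when mass M moving at V hits mass m at rest (1D elastic)."""
    v_M = (M - m) / (M + m) * V
    v_m = 2 * M / (M + m) * V
    return v_M, v_m

M, m, V = 1000.0, 1.0, 1.0
v_M, v_m = elastic_1d(M, m, V)

dK_M = 0.5 * M * (v_M**2 - V**2)   # energy lost by the heavy mass
dK_m = 0.5 * m * v_m**2            # energy gained by the light one

print(round(v_m, 3))                              # 1.998  (close to 2V)
print(abs(dK_M + dK_m) < 1e-9)                    # True   (total KE conserved exactly)
print(round(-dK_M / (0.5 * m * (2 * V)**2), 3))   # 0.998  (matches m(2V)^2/2)
```

So the exact solution conserves energy; only the "heavy mass keeps the same velocity" approximation appears to create energy, exactly as the answer argues.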
{ "domain": "physics.stackexchange", "id": 57501, "tags": "newtonian-mechanics, momentum, energy-conservation, conservation-laws, collision" }
Most elegant image to image frame comparison algorithm
Question: I'm interested in finding the most elegant and 'correct' solution to this problem. I'm new to JS/PHP and I'm really excited to be working with it. The goal is to compare the size of a given image to the size of a 'frame' for the image (going to be a <div> or <li> eventually) and resize the image accordingly. Users will be uploading images of many sizes and shapes and there are going to be many sizes and shapes of frames. The goal is to make sure that any given image will be correctly resized to fit any given frame. Additionally, images will be cropped with overflow:hidden; so they'll never be distorted. Final implementation will be in PHP, but I'm working out the logic in JS. Here's what I have: //predefined variables generated dynamically from other parts of script, these are example values showing the 'landscape in landscape' ratio issue var $imageHeight = 150, $imageWidth = 310, $height = 240, $width = 300; //check image orientation if ( $imageHeight == $imageWidth ) { var $imageType = 1; // square image } else if ( $imageHeight > $imageWidth ) { var $imageType = 2; // portrait image } else { var $imageType = 3; // landscape image }; //check frame orientation and compare to image orientation if ( $height == $width) { //square frame if ( ( $imageType === 1 ) || ( $imageType === 3 ) ) { $deferToHeight = true; } else { $deferToHeight = false; }; } else if ( $height > $width ) { //portrait frame if ( ( $imageType === 1 ) || ( $imageType === 3 ) ) { $deferToHeight = true; } else { if (($imageHeight / $height) < 1) { $deferToHeight = true; } else { $deferToHeight = false; }; }; } else { //landscape frame if ( ( $imageType === 1 ) || ( $imageType === 2 ) ) { $deferToHeight = false; } else { if (($imageWidth / $width) > 1) { $deferToHeight = true; } else { $deferToHeight = false; }; }; }; //set values to match (null value scales proportionately) if ($deferToHeight == true) { //defer image size to height of frame $imageWidth = null; $imageHeight = $height; } 
else { //defer image size to width of frame $imageWidth = $width; $imageHeight = null; }; I've tested it with a bunch of different values and it works, but I have a suspicion that I could have implemented a much more elegant solution. What could I have done better? Answer: If we move past rewriting the boolean expressions and instead rework the formulas, I think we can get something a little easier to manage. If I understand correctly, your goal is to have the image cover the frame. That is to say, for the frame to be filled and the image cropped. In that case, all we need to do is scale the image to the frame. var imageHeight = 150, imageWidth = 310, height = 240, width = 300; //Find the largest discrepancy between frame size and corresponding image size var scaleRatio = Math.max( height/imageHeight, width/imageWidth ); //Apply the scale to the whole image. imageHeight *= scaleRatio; imageWidth *= scaleRatio; If instead your desire is to have the image fit in the frame, then all we need to do is replace Math.max with Math.min.
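The same cover/contain idea can be written as one small function — a Python sketch of the answer's logic, not from the original code (the function name is made up):

```python
def scale_to_frame(image_w, image_h, frame_w, frame_h, cover=True):
    """Scale an image to a frame while keeping its aspect ratio.

    cover=True  -> fill the frame; overflow is cropped (Math.max in the JS)
    cover=False -> fit entirely inside the frame   (Math.min in the JS)
    """
    pick = max if cover else min
    ratio = pick(frame_w / image_w, frame_h / image_h)
    return image_w * ratio, image_h * ratio

# The question's example: a 310x150 landscape image in a 300x240 frame.
print(scale_to_frame(310, 150, 300, 240))  # (496.0, 240.0) -- height matches, width overflows
```

The single `ratio` replaces all of the orientation case analysis from the question.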
{ "domain": "codereview.stackexchange", "id": 1332, "tags": "javascript" }
Dimensional analysis on absolute magnitude of a star
Question: Absolute magnitude $M$ of a star is $$M = m-5\log \Big(\frac{d}{10}\Big)$$ where $d$ is measured in $parsec$, but this raises a couple of problems. $M$ and $m$ are dimensionless quantities but $d$ is in $parsec$, and I don't remember ever getting dimensionful quantities inside trigonometric, exponential and logarithmic functions during my studies, so what does $\log$ of a parsec mean? First thought: $\log$ kills the physical unit and everything seems fine again. Second thought: In the $\log$ function there's an invisible $pc^{-1}$, but I don't see where it comes from. Third thought: If I express $d$ in terms of apparent and absolute magnitude, I get the following $$d = 10^{(m-M+5)/5}$$ and from this I would expect $d$ to be dimensionless, yet it's dimensionful (in parsec), so another missing pc. $\underline{\hspace{17cm}}$ Questions 1. I prefer to avoid making such a bald claim as in the first thought. Can anyone give a detailed answer on what happens if there are dimensionful quantities in logarithmic/exponential and trigonometric functions? 2. How to resolve the problems in the second and third thoughts? Answer: The answer is that the number 10 in your formula is not dimensionless and should not really be written without its units - which are parsecs. $$M = m-5\log \Big(\frac{d}{10\ {\rm pc}}\Big)\ ,$$ $$d = 10^{(m-M+5)/5}\ {\rm pc}\ .$$
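A quick numerical sanity check of the two unit-annotated formulas (a sketch; 4.83 is roughly the Sun's absolute magnitude, used only as an example value):

```python
import math

def absolute_magnitude(m, d_pc):
    """M = m - 5*log10(d / 10 pc), with d_pc the distance in parsecs."""
    return m - 5 * math.log10(d_pc / 10)

def distance_pc(m, M):
    """d = 10**((m - M + 5) / 5) parsecs -- the inverse relation."""
    return 10 ** ((m - M + 5) / 5)

# A star at exactly 10 pc has M == m, and the round trip is consistent:
print(absolute_magnitude(4.83, 10))                       # 4.83
print(distance_pc(4.83, absolute_magnitude(4.83, 250)))   # ~250 (parsecs)
```

Note that only the dimensionless ratio `d_pc / 10` ever enters the logarithm, which is exactly the answer's point.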
{ "domain": "physics.stackexchange", "id": 75314, "tags": "astrophysics, astronomy, dimensional-analysis" }
Why didn't Escobar's hippos, introduced in a single event, die out due to inbreeding?
Question: Today I read a BBC Report about how Pablo Escobar had once imported 4 hippos (1 male, 3 female) into his estate in Colombia for his private zoo. After his downfall, while other species were shipped out, the hippos were considered too big to move and were expected not to survive. However, to the surprise of all, the hippos are thriving and are so numerous that there have been calls to cull them. From the report: Numbers are projected to only get bigger. [Colombian biologist Nataly] Castelblanco and her peers say the population will reach over 1,400 specimens as early as 2034 without a cull - all of them descended from the original group of a male and three females. In the study, they envisaged an ideal scenario in which 30 animals need to be culled or castrated every year to stop that happening. My understanding is that since there was only 1 male, the gene pool would be limited and lead to a lot of inbreeding in the descendants. This would cause the population not to explode because some individuals would be unfit to survive. Why has this not happened in the case of hippos? Is it because there are 3 females (probably unrelated to each other), which keeps the gene pool large enough? Or can mutations explain this phenomenon? Would results have been different if originally 2 females had been moved and only 1 retained? EDIT - I have just found that the current population is maybe less than 100 individuals, which though big is not massive. EDIT2 - I have edited the question title to keep focus on the hippos although a general answer would be welcome.
Moreover, the dominant males are highly territorial, which means that much of the breeding might even now be still coming from the initial male and not his descendants. Still, the initial male breeding with his own daughters would indeed produce a significant inbreeding coefficient. The impact of inbreeding, however, is also affected by the density of deleterious alleles. In geographically restricted species, the higher natural level of inbreeding can result in purging selection that leads to a much lower frequency of accumulated deleterious alleles than in highly social and gregarious species like humans and dogs. Hippos appear likely to be such a species, as well as having overall lower genetic variation than other large African mammals, suggesting a fairly recent expansion. In short: it hasn't been that many generations, and if the inbreeding sensitivity of hippos is indeed fairly low to begin with, it may indeed be that there simply isn't any significant impact from inbreeding at this point in time.
{ "domain": "biology.stackexchange", "id": 11194, "tags": "genetics, zoology, population-genetics, invasive-species" }
Is "duplicate" in RPN enough for replacing variable binding in term expressions?
Question: I try to work out some consequences of storing (or "communicating"/"transmitting") a rational number by a term expression using the following operators: $0$, $\mathsf{inc}$, $\mathsf{add}$, $\mathsf{mul}$, $\mathsf{neg}$, and $\mathsf{inv}$. Here $\mathsf{add}$ and $\mathsf{mul}$ are binary operators, $\mathsf{inc}$, $\mathsf{neg}$, and $\mathsf{inv}$ are unary operators, and $0$ is a $0$-ary operator (i.e. a constant). Because I want to be able to also store numbers like $(3^3+3)^3$ efficiently, I need some form of variable binding. I will use the notation $(y:=t(x).f(y))$ to be interpreted as $f(t(x))$ in this question. Now I can store $(3^3+3)^3$ as $$(c3:=(1+1+1).(x:=((c3*c3*c3)+c3).(x*x*x))).$$ If I stick to the operators $0$, $\mathsf{inc}$, $\mathsf{add}$, and $\mathsf{mul}$, this becomes $$(c3:=\mathsf{inc}(\mathsf{inc}(\mathsf{inc}(0))).(x:=\mathsf{add}(\mathsf{mul}(\mathsf{mul}(c3,c3),c3),c3).\mathsf{mul}(\mathsf{mul}(x,x),x))).$$ Using RPN with a "duplicate" operation written $\mathsf{dup}$ instead of variable binding, this becomes $$0\ \mathsf{inc}\ \mathsf{inc}\ \mathsf{inc}\ \mathsf{dup}\ \mathsf{dup}\ \mathsf{dup}\ \mathsf{mul}\ \mathsf{mul}\ \mathsf{add}\ \mathsf{dup}\ \mathsf{dup}\ \mathsf{mul}\ \mathsf{mul}.$$ My question is whether it is always possible to replace variable binding by the "duplicate" operation. The binary operations ($\mathsf{add}$ and $\mathsf{mul}$) are associative and commutative, but it seems to me that even this is not enough for ensuring that variable binding can be completely eliminated. 
Take for example $$(c2:=(1+1).(x:=(((c2+1)*c2)+1).(y:=(x*x).((y+c2)*y)))).$$ If I stick to the operators $0$, $\mathsf{inc}$, $\mathsf{add}$, and $\mathsf{mul}$, this becomes $$(c2:=\mathsf{inc}(\mathsf{inc}(0)).(x:=\mathsf{inc}(\mathsf{mul}(\mathsf{inc}(c2),c2)).(y:=\mathsf{mul}(x,x).\mathsf{mul}(\mathsf{add}(y,c2),y)))).$$ Using RPN with a "store" operation written $\mathsf{sto}(x)$ instead of variable binding, this becomes $$0\ \mathsf{inc}\ \mathsf{inc}\ \mathsf{sto}(c2)\ c2\ \mathsf{inc}\ c2\ \mathsf{mul}\ \mathsf{inc}\ \mathsf{sto}(x)\ x\ x\ \mathsf{mul}\ \mathsf{sto}(y)\ y\ c2 \ \mathsf{add}\ y\ \mathsf{mul}.$$ After eliminating $\mathsf{sto}(x)$ and $\mathsf{sto}(y)$ by $\mathsf{dup}$, this becomes $$0\ \mathsf{inc}\ \mathsf{inc}\ \mathsf{sto}(c2)\ c2\ \mathsf{inc}\ c2\ \mathsf{mul}\ \mathsf{inc}\ \mathsf{dup}\ \mathsf{mul}\ \mathsf{dup}\ c2 \ \mathsf{add}\ \mathsf{mul}.$$ Using explicit substitution to eliminate $\mathsf{sto}(c2)$, this becomes $$0\ \mathsf{inc}\ \mathsf{inc}\ \mathsf{dup}\ \mathsf{inc}\ \mathsf{mul}\ \mathsf{inc}\ \mathsf{dup}\ \mathsf{mul}\ \mathsf{dup}\ 0\ \mathsf{inc}\ \mathsf{inc}\ \mathsf{add}\ \mathsf{mul}.$$ My issue with explicit substitution is that it might lead to an exponential increase in the size of the expression. It's easy to see that expressions like $(3^3+3)^3$ or $((3^3+3)^3+3)^3$ can't be stored efficiently without something like $\mathsf{sto}(x)$ or $\mathsf{dup}$. Is there another way to eliminate $\mathsf{sto}(x)$, like an additional first-in, first-out queue? Or can one prove that an exponential blowup of the expression won't happen, if only explicit substitution and $\mathsf{dup}$ are "suitably" used together? Answer: Is “duplicate” in RPN enough for replacing variable binding in term expressions? No, it is not strong enough. One can eliminate $\mathsf{dup}$ by replacing each sequence of $\mathsf{dup}$ by a sequence of $x$ preceded by one $\mathsf{sto}(x)$ operation. 
Since the single variable "$x$" is enough to eliminate all $\mathsf{dup}$ operations, any term expression which requires more than one variable to be stored efficiently can't be expressed (efficiently) using only $\mathsf{dup}$. If the $\mathsf{inv}$ operation is omitted, then this whole intended representation by a term expression can be seen as a special case of the directed acyclic graph representation of a polynomial used in arithmetic circuit complexity. So the RPN stack from the question turns out to be just a representation of a tree. Now a directed acyclic graph can't be "emulated easily" by a tree, and especially a non-planar DAG will be "too difficult" for the $\mathsf{dup}$ operation. In order to add the $\mathsf{inv}$ operation again, note that the ring of polynomials in $x_1,\dots x_n$ over $\mathbb Z$ is just the free commutative ring with generators $x_1,\dots x_n$. The corresponding free algebra with an $\mathsf{inv}$ operation would be the free commutative regular ring with generators $x_1,\dots x_n$. To be faithful to the canonical representation of the free algebra of a variety (equationally defined collection of algebras), the $\mathsf{inc}$ operation should be replaced by $1$.
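A small stack-machine interpreter (a sketch, not part of the question or answer) makes the $\mathsf{dup}$ semantics concrete and checks the dup-only program for $(3^3+3)^3$ from the question:

```python
def eval_rpn(tokens):
    """Evaluate an RPN program over the question's operators plus dup."""
    stack = []
    ops = {
        "0":   lambda: stack.append(0),
        "inc": lambda: stack.append(stack.pop() + 1),
        "add": lambda: stack.append(stack.pop() + stack.pop()),
        "mul": lambda: stack.append(stack.pop() * stack.pop()),
        "dup": lambda: stack.append(stack[-1]),  # duplicate the top of the stack
    }
    for tok in tokens:
        ops[tok]()
    return stack.pop()

# The question's dup-only encoding of (3**3 + 3)**3:
program = "0 inc inc inc dup dup dup mul mul add dup dup mul mul".split()
print(eval_rpn(program))  # 27000 == (3**3 + 3)**3
```

Since `dup` can only copy the top of the stack, it behaves like a single reusable variable — which is exactly why the answer concludes it cannot replace general (multi-variable) binding efficiently.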
{ "domain": "cs.stackexchange", "id": 3535, "tags": "formal-languages, pushdown-automata, stacks, term-rewriting" }
Writing a Density matrix in terms of the magnitude of the Bloch Vector
Question: Working with the density matrix and the Bloch sphere, I have been attempting to complete an exercise in Entangled Systems; New Directions in Quantum Physics. If anyone has the book it is Question 4.3 on Pg 87 of the English Edition. In summary the question asks me to rewrite the density matrix, $\rho(\vec{r})$, in terms of $|\vec{r}|$ instead. To start I have $$\rho(\vec{r})=\frac1{2}\left( \begin{array}{cc} 1+r_3 & r_1-ir_2 \\ r_1+ir_2 & 1-r_3 \end{array} \right)$$ with $\vec{r} = (r_1, r_2, r_3)$. The final answer it is expecting is $$\rho(|\vec{r}|) = \left( \begin{array}{cc} 1-|\vec{r}|^2 & 0 \\0 & 1+|\vec{r}|^2 \end{array} \right),$$ as it is stated as a part of the question. Now, the question leaves the hint of finding the eigenvalues of $\rho(\vec{r})$. I believe it says this as we could then use the eigenvalues to write the diagonalized version of the density matrix by placing the eigenvalues on the diagonal. Having done so I got $$\rho(|\vec{r}|) = \left( \begin{array}{cc} \frac1{2}(1-|\vec{r}|) & 0 \\0 & \frac1{2}(1+|\vec{r}|) \end{array} \right).$$ This is similar, but not the same, and even though I could factor out the 1/2's and still maintain $tr[\rho] = 1$, I'm not sure how this needs to be manipulated to get the version he portrayed in the question. So I ask, is this a typo? (There have been others) Or is there something I have not thought of to manipulate this to get the correct result? Answer: It must be a typo. Remember that a density matrix must have trace 1, which yours does, and your expected answer doesn't.
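A quick NumPy check (not part of the book's exercise; the Bloch vector below is an arbitrary example) confirms the asker's eigenvalues $(1 \pm |\vec{r}|)/2$ and that their diagonal form — not the book's — has trace 1:

```python
import numpy as np

r1, r2, r3 = 0.3, -0.4, 0.5                      # any Bloch vector with |r| <= 1
rho = 0.5 * np.array([[1 + r3, r1 - 1j * r2],
                      [r1 + 1j * r2, 1 - r3]])

norm = np.sqrt(r1**2 + r2**2 + r3**2)            # |r| ~ 0.707
eigvals = np.linalg.eigvalsh(rho)                # ascending order for Hermitian input
print(np.allclose(eigvals, [(1 - norm) / 2, (1 + norm) / 2]))  # True
print(np.isclose(np.trace(rho).real, 1.0))                     # True
```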
{ "domain": "quantumcomputing.stackexchange", "id": 4130, "tags": "textbook-and-exercises, mathematics, density-matrix, bloch-sphere, matrix-representation" }
Get map dimensions for map saved from map server
Question: After implementing SLAM using gmapping, the map was saved using the map_server package as a pgm file and a yaml file. I need to know the dimensions of the objects in the map and of the map itself. How do I achieve this? Originally posted by fiorano10 on ROS Answers with karma: 45 on 2017-12-06 Post score: 0 Answer: Get the dimensions of the pgm (x pixels by y pixels) then examine your yaml file. In the yaml file the resolution of the map is given in meters/pixel. This will allow you to determine the size of your map in meters. You would figure out the size of objects in the map in a similar fashion (i.e., multiply the number of pixels your object takes by the resolution of your map and you'll have the physical size of the object). Refer to http://wiki.ros.org/map_server#YAML_format Originally posted by jayess with karma: 6155 on 2017-12-06 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by fiorano10 on 2017-12-06: What are some ways I could get the x and y pixels for the map and not the pgm file? My pgm file has larger dimensions than the actual map. Comment by jayess on 2017-12-06: You can just manually edit the pgm so that it is the size of your map. I've used GIMP to edit maps before.
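The pixel-count times resolution arithmetic from the answer, as a short sketch (the numbers below are made up for illustration; 0.05 m/px is a commonly seen resolution value in the yaml):

```python
def map_size_meters(width_px, height_px, resolution_m_per_px):
    """Physical size of an occupancy-grid map saved by map_server."""
    return width_px * resolution_m_per_px, height_px * resolution_m_per_px

# e.g. a 4000x2000 px .pgm with a 0.05 m/px resolution from the .yaml:
print(map_size_meters(4000, 2000, 0.05))   # (200.0, 100.0) -> a 200 m x 100 m map

# An object spanning 14 px at 0.05 m/px is about 0.7 m across:
print(round(14 * 0.05, 3))                 # 0.7
```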
{ "domain": "robotics.stackexchange", "id": 29546, "tags": "slam, navigation, map-saver, slam-gmapping, gmapping" }
Simulated kinect
Question: Hello, I have problem with simulated Kinect. Turtlebot is placed in some environment (ipa-kitchen from COB stack) and pointcloud from it looks like a plane in front of robot as you can see on screenshot: Data looks like this: header: seq: 4196 stamp: secs: 305 nsecs: 177000000 frame_id: kinect_depth_optical_frame height: 480 width: 640 fields: - name: x offset: 0 datatype: 7 count: 1 - name: y offset: 4 datatype: 7 count: 1 - name: z offset: 8 datatype: 7 count: 1 is_bigendian: False point_step: 16 row_step: 10240 data: [27, 146, 19, 191, 153, 61, 221, 190, 0, 0, 128, 63, 0, 0, 128, 63, 222, 27, 19, 191, 153, 61, 221, 190, 0, 0, 128, 63, 0, 0 , 128, 63, 160, 165, 18, 191, 153, 61, 221, 190, 0, 0, 128, 63, 0, 0, 128, 63, 98, 47, 18, 191, 153, 61, 221, 190, 0, 0, 128, 63, 0, 0, 128, 63, 36, 185, 17, 191, 153, 61, 221, 190, 0, 0, 128, 63, 0, 0, 128, 63, 230, 66, 17, 191, 153, 61, 221, 190, 0, 0, 128, 63, 0, 0, 128, 63, 169, 204, 16, 191, 153, 61, 221, 190, 0, 0, 128, 63, 0, 0, 128, 63, 107, 86, 16, 191, 153, 61, 221, 190, 0, 0, 128, 63, 0, 0, 128, 63, 45, 224, 15, 191, 153, 61, 221, 190, 0, 0, 128, 63, 0, 0, 128, 63, 239, 105, 15, 191, 153, 61, 221, 190, 0 , 0, 128, 63, 0, 0, 128, 63, 177, 243, 14, 191, 153, 61, 221, 190, 0, 0, 128, 63, 0, 0, 128, 63, 116, 125, 14, 191, 153, 61, 221, 190, 0, 0, 128, 63, 0, 0, 128, 63, 54, 7, 14, 191, 153, 61, 221, 190, 0, 0, 128, 63, 0, 0, 128, 63, 248, 144, 13, 191, 153, 61, 22 1, 190, 0, 0, 128, 63, 0, 0, 128, 63, 186, 26, 13, 191, 153, 61, 221, 190, 0, 0, 128, 63, 0, 0, 128, 63, 125, 164, 12, 191, 153, 6 1, 221, 190, 0, 0, 128, 63, 0, 0, 128, 63, 63, 46, 12, 191, 153, 61, 221, 190, 0, 0, 128, 63, 0, 0, 128, 63, 1, 184, 11, 191, 153, 61, 221, 190, 0, 0, 128, 63, 0, 0, 128, 63, 195, 65, 11, 191, 153, 61, 221, 190, 0, 0, 128, 63, 0, 0, 128, 63, 133, 203, 10, 191, 153, 61, 221, 190, 0, 0, 128, 63, 0, 0, 128, 63, 72, 85, 10, 191, 153, 61, 221, 190, 0, 0, 128, 63, 0, 0, 128, 63, 10, 223, 9, 19 1, 153, 61, 221, 190, 0, 0, 128, 
63, 0, 0, 128, 63, 204, 104, 9, 191, 153, 61, 221, 190, 0, 0, 128, 63, 0, 0, 128, 63, 142, 242, 8 , 191, 153, ... There are just this two warnings in rxconsole... gazebo_ros_camera simulation does not support non-zero distortion parameters right now, your simulation maybe wrong. and Message from [/gazebo] has a non-fully-qualified frame_id [kinect_depth_optical_frame]. Resolved locally to [/kinect_depth_optical_frame]. This is will likely not work in multi-robot systems. This message will only print once. Used launch files: roslaunch cob_gazebo_worlds ipa-kitchen.launch roslaunch turtlebot_gazebo robot.launch Any hints how to solve this? Thanks a lot. UPDATE: I worked out that it depends on used world and orientation of robot... But it is always plane in front of robot, sometimes with holes in it. Is it problem with transformation? For instance simulated Kinect works for me on Care-o-bot (but I'm only able to visualize PointCloud1). UPDATE2: I tried it also with empty world and some spawned object. Object can be recognized in pointcloud data, but it is still not good (as you can see). Is there some problem with transformation? Used launch files: roslaunch turtlebot_gazebo turtlebot_empty_world.launch roslaunch cob_gazebo_worlds cabinet.launch Originally posted by ZdenekM on ROS Answers with karma: 704 on 2011-12-07 Post score: 2 Answer: I have posted the same question. See here. Originally posted by Laurie with karma: 103 on 2011-12-08 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by ZdenekM on 2011-12-08: "roslaunch cob_gazebo_worlds ipa-kitchen.launch" and "roslaunch turtlebot_gazebo robot.launch" Comment by hsu on 2011-12-08: I cannot reproduce the error on my desktop (with nvidia card). What files did you launch? thanks. Comment by ZdenekM on 2011-12-08: Maybe the reason could be graphic card? I'm using Intel integrated card which is not well supported by Gazebo - so maybe this is the reason for weird data...
{ "domain": "robotics.stackexchange", "id": 7571, "tags": "gazebo, simulation, kinect, turtlebot" }
Le Chatelier's Principle - Phosphofructokinase in Glycolysis
Question: In glycolysis, PFK is allosterically regulated by ATP, F6P, etc. When the concentration of F6P increases, the concentration of ATP increases too. I understand that high levels of ATP shift the eq towards the T-state, and this decreases the affinity of PFK for F6P. But is there a principle in Le Chatelier's which explains how the increase in conc of one reactant affects the conc of another reactant? The reaction regulated by PFK is: $F6P$ + $ATP$ <-> $F1,6BP$ + $ADP$ Thanks! Answer: PFK plays a central role in the control of glycolysis because it catalyses one of the pathway's rate-determining reactions. The control of phosphofructokinase (PFK) is exquisitely complex. To analyse this complicated enzymatic reaction pathway, we have to revisit important aspects of metabolic control mechanisms: the flux (rate of flow) of intermediates through a metabolic pathway is constant; that is, the rates of synthesis and breakdown of each pathway intermediate maintain it at a constant concentration. Regulation of the steady state (homeostasis) must be maintained in the face of changes in flux through the pathway in response to changes in demand. In an open system, such as metabolism, the steady state is the state of maximum thermodynamic efficiency. This picture is in partial agreement with Le Chatelier's principle (i.e. where a dynamic equilibrium is affected by changing conditions, the position of equilibrium moves to counteract the change). Consider this simple relation: J = vf − vr. Since a metabolic pathway is a series of enzyme-catalysed reactions, it is easiest to describe the flux of metabolites through the pathway by considering its reaction steps individually. The flux of metabolites, J, through each reaction step is the rate of the forward reaction, vf, less that of the reverse reaction, vr. At equilibrium, by definition, there is no flux (J=0), although vf and vr may be quite large.
At the other extreme, in reactions that are far from equilibrium, vf >> vr, so that the flux is essentially equal to the rate of the forward reaction, J ~ vf. The flux throughout a steady-state pathway is constant and is set (generated) by the pathway's rate-determining step (or steps). Consequently, control of flux through a metabolic pathway requires that the flux through this flux-generating step vary in response to the organism's metabolic requirements, and that this change in flux be communicated throughout the pathway to maintain a steady state. Phosphofructokinase is a non-equilibrium enzyme of glycolysis. One important thing to take note of is that the enzyme phosphofructokinase, a highly controlled enzyme functioning far from equilibrium (but which, ironically, itself operates via an equilibrium of two conformational states), is evidently the major target for regulating glycolysis. PFK is a tetrameric enzyme with two conformational states, R and T, that are in equilibrium. The substrate site binds ATP equally well in either conformation, but the inhibitor site binds ATP almost exclusively in the T state. The other substrate of PFK, F6P, preferentially binds to the R state. Consequently, at high concentrations, ATP acts as a heterotropic allosteric inhibitor of PFK by binding to the T state, thereby shifting the T <=> R equilibrium in favour of the T state and thus decreasing PFK's affinity for F6P. An increase in the concentration of PFK does not increase the rate of glycolysis. In this scenario I believe Le Chatelier's principle applies little, because the control is very complicated. Although PFK has a major role in regulating the flux through glycolysis, it is controlled, in vivo, by factors outside the pathway. An increase in the in vivo concentration of PFK will therefore not increase the flux through the pathway because these controlling factors adjust the catalytic activity of PFK only to meet the needs of the cell.
Let's consider another illustration showing enzyme activity versus substrate concentration: the inhibition of PFK by ATP is relieved by AMP. This results from AMP's preferential binding to the R state of PFK. If a PFK solution containing 1 mM ATP and 0.5 mM F6P is brought to 0.1 mM in AMP, the activity of PFK rises from 10 to 50% of its maximal activity, a 5-fold increase. When [F6P] = 0.5 mM (the dashed line in the textbook figure), the enzyme is nearly maximally active, but in the presence of 1 mM ATP, the activity drops to 15% of its original level (a nearly 7-fold decrease). However, measurements of [ATP] in vivo at various levels of metabolic activity indicate that [ATP] varies <10% between rest and vigorous exertion. Yet there is no known allosteric mechanism that can account for a 100-fold change in flux of a nonequilibrium reaction with only a 10% change in effector concentration. Thus, some other mechanism, or mechanisms, must be responsible for controlling glycolytic flux. Let's discuss the action of adenylate kinase; adenylate kinase catalyses the following reaction: 2ADP <=> ATP + AMP (K = 0.44). As a result of the adenylate kinase reaction, a 10% decrease in [ATP] will cause over a 4-fold increase in [AMP]. Consequently, a metabolic signal consisting of a decrease in [ATP] too small to relieve PFK inhibition is amplified significantly by the adenylate kinase reaction, which increases [AMP] by an amount sufficient to produce a much larger increase in PFK activity. In summary, the control of PFK (a non-equilibrium enzyme) leaves little room for Le Chatelier's principle, as other crucial regulation mechanisms exist throughout the reaction pathway. Even if you were to consider temperature changes, these control enzymes operate far from equilibrium (they have a very negative delta G°) and the actual free energy changes in vivo differ from experimental ones, making it difficult to simulate actual effects in equilibrium states. Control is more likely based on a supply-demand basis.
One piece of evidence of this is the variation of metabolic flux through glycolysis (it may vary by 100-fold or more), depending on the metabolic demand for ATP. However, for pathways which operate close to equilibrium the principle may apply: for example, the delta G° for aldolase is endergonic, whereas under physiological conditions in heart muscle it is close to zero, indicating that the in vivo activity of aldolase is sufficient to equilibrate its substrates and products. Their forward and reverse rates are much faster than the actual flux through the pathway (although their forward rates must be at least slightly greater than their reverse rates). Consequently, these near-equilibrium reactions are very sensitive to changes in the concentration of pathway intermediates. References: Biochemistry, Voet and Voet
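A numeric footnote to the adenylate kinase argument above (the resting concentrations below are illustrative, not taken from the answer or its references): the quoted >4-fold rise in [AMP] for a 10% drop in [ATP] follows directly from the equilibrium expression [AMP] = K[ADP]²/[ATP].

```python
K = 0.44  # adenylate kinase equilibrium: K = [ATP][AMP] / [ADP]**2

# Illustrative resting concentrations (mM); [AMP] is pinned by the equilibrium.
atp, adp = 5.0, 0.5
amp = K * adp**2 / atp                 # ~0.022 mM

# ATP hydrolysis lowers [ATP] by 10%, and that ATP reappears as ADP:
atp2 = 0.9 * atp                       # 4.5 mM
adp2 = adp + 0.1 * atp                 # 1.0 mM
amp2 = K * adp2**2 / atp2              # ~0.098 mM

print(amp2 / amp)                      # ~4.4: a >4-fold rise in [AMP]
```

The amplification comes from the squared [ADP] term: a small absolute ATP change is a large fractional ADP change, and [AMP] tracks its square.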
{ "domain": "chemistry.stackexchange", "id": 8106, "tags": "biochemistry" }
Grade calculator for a project with four aspects
Question: I have a program that will ask the user to input their grades for 4 different sections of a project, then tell them what their total mark is, what grade they got and how many marks away they were from the next grade. I managed to make a single loop for all inputs rather than having a loop for each individual one, but there are still quite a lot of if statements to determine what grade they got and how far away they were from the next one, and I can't figure out how to optimise it since I'm still very new to Java. import java.util.Arrays; import java.util.InputMismatchException; import java.util.Scanner; public class PortfolioGrade { public static void main(String[] args) { // TODO Auto-generated method stub String[] words = new String[]{"Analysis", "Design", "Implementation", "Evaluation"}; int[] marks = new int[words.length]; for(int counter = 1; counter <= words.length; counter++) { System.out.println("Enter your mark for the '" + words[counter - 1] + "' part of the project: "); while(true) { try { Scanner reader = new Scanner(System.in); marks[counter - 1] = reader.nextInt(); if(marks[counter - 1] < 0 || marks[counter - 1] > 25) { System.out.println("Please input a number between 0 and 25."); continue; } break; } catch(InputMismatchException e) { System.out.println("Please input a valid integer."); } } } int totalmark = Arrays.stream(marks).sum(); String grade = null; String nextgrade = null; Integer marksaway = null; if(totalmark < 2) { grade = "U"; marksaway = 2 - totalmark; nextgrade = "1"; } else if(totalmark >= 2 && totalmark < 4) { grade = "1"; marksaway = 4 - totalmark; nextgrade = "2"; } else if(totalmark >= 4 && totalmark < 13) { grade = "2"; marksaway = 13 - totalmark; nextgrade = "3"; } else if(totalmark >= 13 && totalmark < 22) { grade = "3"; marksaway = 22 - totalmark; nextgrade = "4"; } else if(totalmark >= 22 && totalmark < 31) { grade = "4"; marksaway = 31 - totalmark; nextgrade = "5"; } else if(totalmark >= 31 && totalmark < 41) { 
grade = "5"; marksaway = 41 - totalmark; nextgrade = "6"; } else if(totalmark >= 41 && totalmark < 54) { grade = "6"; marksaway = 54 - totalmark; nextgrade = "7"; } else if(totalmark >= 54 && totalmark < 67) { grade = "7"; marksaway = 67 - totalmark; nextgrade = "8"; } else if(totalmark >= 67 && totalmark < 80) { grade = "8"; marksaway = 80 - totalmark; nextgrade = "9"; } else if(totalmark >= 80) { grade = "9"; } System.out.println("Your total mark was " + totalmark + "."); System.out.println("You got a Grade " + grade + "."); if(grade == "9") { System.out.println("You achieved the highest grade!"); } else if(marksaway == 1) { System.out.println("You were " + marksaway + " mark away from a Grade " + nextgrade + "."); } else { System.out.println("You were " + marksaway + " marks away from a Grade " + nextgrade + "."); } } } Answer: Because there are no holes you can simply have an array of "breakpoints". int [] steps = new int[] { 2, 4, 13, 22, 31, 41, 54, 67, 80 }; int i; for(i=0; i<steps.length && totalmark>=steps[i]; i++); grade = i==0 ? "U" : ""+i; if(i<steps.length) marksaway=steps[i]-totalmark; nextgrade=""+(i+1);
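For comparison, the same breakpoint-scan idea can be sketched in Python with the standard bisect module (a sketch to sanity-check the boundaries, not a drop-in for the Java above; the function name is made up):

```python
import bisect

STEPS = [2, 4, 13, 22, 31, 41, 54, 67, 80]   # same breakpoints as the Java answer

def grade_info(totalmark):
    """Return (grade, marks_away, next_grade); the last two are None for grade 9."""
    i = bisect.bisect_right(STEPS, totalmark)  # how many breakpoints are <= totalmark
    grade = "U" if i == 0 else str(i)
    if i == len(STEPS):
        return grade, None, None               # already the top grade
    return grade, STEPS[i] - totalmark, str(i + 1)

print(grade_info(1))    # ('U', 1, '1')
print(grade_info(40))   # ('5', 1, '6')
print(grade_info(95))   # ('9', None, None)
```

Note that grade 9 cleanly returns no "next grade", which also sidesteps the top-grade special case in the original printout logic.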
{ "domain": "codereview.stackexchange", "id": 32905, "tags": "java, beginner" }
EKF or KF package that fuses IMU and Pressure Sensor data for Velocity estimation?
Question: Hi, I'm looking for a ROS package (KF, UKF or EKF) that can fuse IMU and pressure sensor data. I would like to obtain 6x6 estimated velocity matrices (linear and angular) from the IMU and pressure sensor data. The IMU is 9 DOF (orientation, angular_velocity and linear_acceleration). The barometer (pressure sensor) data can be used for the underwater robot: assuming the sea (water) level is the same (constant), the pressure should maintain the same value during linear movement of the underwater robot (vehicle). Is it possible to use such a package to fuse the IMU and pressure data to obtain an estimated velocity (linear and angular)? If no existing ROS package serves as a velocity observer and fuses IMU and pressure data, is there any other code or help that I can use and implement in ROS? Thanks Originally posted by Astronaut on ROS Answers with karma: 330 on 2021-09-22 Post score: 0 Answer: Take a look at this ROS package: http://wiki.ros.org/hector_localization The hector_localization stack is a collection of packages that provide the full 6DOF pose of a robot or platform. It uses various sensor sources, which are fused using an Extended Kalman filter. Acceleration and angular rates from an inertial measurement unit (IMU) serve as primary measurements. The usage of other sensors is application-dependent. The hector_localization stack currently supports GPS, magnetometer, barometric pressure sensors and other external sources that provide a geometry_msgs/PoseWithCovariance message via the poseupdate topic. Originally posted by osilva with karma: 1650 on 2021-09-23 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by Astronaut on 2021-09-25: thanks. Ok will check it Comment by Astronaut on 2021-09-29: how to set up the launch file to use IMU and Pressure sensors? Comment by osilva on 2021-09-29: I recommend to start another question as per the FAQ guidance. Thank you Comment by Astronaut on 2021-09-29: I have done it.
Please check it
{ "domain": "robotics.stackexchange", "id": 36939, "tags": "ros, sensor-fusion, ros-melodic" }
Fetching system info using python
Question: The following piece of code does ssh do different servers and fetches the system info and display it to the user: import paramiko #list variables : ["IP_ADDR", "USERNAME", "PASSWD", "ROOT_PASSWD"] ip_list = [ ["192.168.11.44", "root", "****", "****"], ["192.168.11.8", "root", "****", "****"], ["192.168.11.30", "root", "****", "****"], ["192.168.11.6", "****", "****", "****"] ] os_check_list = ["DISTRIB_DESCRIPTION"] hard_disks = [ 'sda', 'sdb', 'sdc', 'sdd', 'sde', 'sdf', 'sdg', 'sdh', 'sdi', 'sdj', 'sdk', 'sdl', 'sdm', 'sdn', 'sdo', 'sdp', 'sdq', 'sdr', 'sds' ] os = None ssh = paramiko.SSHClient() ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy()) for ip in ip_list: ssh.connect(ip[0], username = ip[1], password = ip[2]) print("\n \n ************************************************************ \ \n IP ADDR = {} SYSTEM INFO \ \n ******************************************************************". format(ip[0])) stdin, stdout, stderr = ssh.exec_command("cat /etc/*release") stdin.write(ip[3]+"\n") for line in stdout.readlines(): if any(x in line for x in os_check_list): os_dist = line.split("=") os = os_dist[1] print(" Operating System is {}" .format(os)) if not os: stdin, stdout, stderr = ssh.exec_command("cat /etc/system-release") for line in stdout.readlines(): os = line print(" Operating System is {}" .format(os)) os = None stdin, stdout, stderr = ssh.exec_command("sudo -k udisksctl status", get_pty = True) stdin.write(ip[3]+"\n") for line in stdout.readlines(): if any(x in line for x in hard_disks): print(line) elif "command not found" in line: print("udisksctl not installed on target server") stdin, stdout, stderr = ssh.exec_command("sudo dmidecode -t 0 | grep -i version", get_pty = True) stdin.write(ip[3]+"\n") for line in stdout.readlines(): if "Version" in line: print(line) elif "command not found" in line: print("dmidecode not installed on target server") Even though this is working, I felt like I am running too many for loops for fetching 
the information. How can I minimize all these loops? Answer: I would start by making your ip_list easier to understand. You could write a class for this, but using a collections.namedtuple is a lot easier here: from collections import namedtuple Client = namedtuple("Client", "ip username passwd root_passwd") clients = [Client("192.168.11.44", "root", "****", "****"), Client("192.168.11.8", "root", "****", "****"), Client("192.168.11.30", "root", "****", "****"), Client("192.168.11.6", "****", "****", "****")] For the hard_disks, you could use the string module for all lowercase letters: import string HARD_DISKS = ['sd{}'.format(c) for c in string.ascii_lowercase[:19]] For your repeated stuff, you could write a function: def ssh_cmd(cmd, *args, root_passwd=None, **kwargs): stdin, stdout, stderr = ssh.exec_command(cmd, *args, **kwargs) if root_passwd is not None: stdin.write(root_passwd + "\n") return stdout.readlines() This makes the rest slightly nicer to read. However, I would go further and put the other stuff in dedicated functions as well, which can then be put into a utils.py: import string from collections import namedtuple Client = namedtuple("Client", "ip username passwd root_passwd") HARD_DISKS = ['sd{}'.format(c) for c in string.ascii_lowercase[:19]] def get_os(client): for line in ssh_cmd("cat /etc/*release", root_passwd=client.root_passwd): if any(x in line for x in os_check_list): # check for the obvious stuff first return line.split("=")[1] for os in ssh_cmd("cat /etc/system-release"): return os def print_hard_disks(client, hard_disks=HARD_DISKS): for line in ssh_cmd("sudo -k udisksctl status", get_pty=True, root_passwd=client.root_passwd): if any(hd in line for hd in hard_disks): print(line) elif "command not found" in line: print("udisksctl not installed on target server") def print_dmi_version(client): for line in ssh_cmd("sudo dmidecode -t 0 | grep -i version", get_pty=True, root_passwd=client.root_passwd): if "Version" in line: print(line) elif
"command not found" in line: print("dmidecode not installed on target server") It can then be imported in your main script: import paramiko from utils import Client, get_os, print_hard_disks, print_dmi_version BANNER = """ ****************************************************************** IP ADDR = {} SYSTEM INFO ******************************************************************""" CLIENTS = [Client("192.168.11.44", "root", "****", "****"), Client("192.168.11.8", "root", "****", "****"), Client("192.168.11.30", "root", "****", "****"), Client("192.168.11.6", "****", "****", "****")] def main(clients): ssh = paramiko.SSHClient() ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy()) for client in clients: ssh.connect(client.ip, username=client.username, password=client.passwd) print(BANNER.format(client.ip)) print(" Operating System is {}" .format(get_os(client))) print_hard_disks(client) print_dmi_version(client) if __name__ == "__main__": main(CLIENTS)
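The two data helpers the review introduces (the namedtuple record and the generated disk list) carry the refactoring's main idea, and they can be exercised on their own, without paramiko or an SSH target — a quick sketch:

```python
import string
from collections import namedtuple

# Same helpers as in the review: a named record instead of a bare list,
# and the 19 disk names sda..sds generated from the alphabet.
Client = namedtuple("Client", "ip username passwd root_passwd")
HARD_DISKS = ["sd{}".format(c) for c in string.ascii_lowercase[:19]]

client = Client("192.168.11.44", "root", "****", "****")
print(client.ip)                       # field access by name instead of ip[0]
print(HARD_DISKS[0], HARD_DISKS[-1])   # first and last generated disk names
```

Swapping the index lookups (`ip[3]`) for named fields (`client.root_passwd`) is what makes the later helper functions readable.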
{ "domain": "codereview.stackexchange", "id": 25380, "tags": "python, linux, ssh" }
Data screening using Perl
Question: Background information I've been asked to write a little Perl script that allows genomic data to be screened against reference files in order to determine locations of specific mutations. The input file format look like this (information are tab-separated). In the script it's referred to as Funestus_NON_SYN.txt. KB669306 9962 . C T 520.77 PASS AC=13;AF=0.929;AN=14;BaseQRankSum=0.540;DP=103;Dels=0.00;EFF=SYNONYMOUS_CODING(LOW|SILENT|cgC/cgT|R15||AFUN012013|||AFUN012013-RA|1|1);FS=0.000;HaplotypeScore=0.0000;MQ0=0;MQRankSum=0.231;ReadPosRankSum=0.694;set=variant-variant2-variant3-variant5-variant6-variant7-variant10 GT:AD:DP:GQ:PL 1/1:0,18:18:42:549,42,0 1/1:0,12:12:33:399,33,0 1/1:0,13:13:39:505,39,0 ./. 0/1:9,4:13:99:104,0,297 1/1:0,17:17:45:570,45,0 1/1:0,18:18:51:624,51,0 ./. ./. 1/1:0,12:12:33:420,33,0 KB669306 10439 . C T 668.77 PASS AC=15;AF=0.938;AN=16;BaseQRankSum=-0.643;DP=139;Dels=0.00;EFF=SYNONYMOUS_CODING(LOW|SILENT|gcC/gcT|A174||AFUN012013|||AFUN012013-RA|1|1);FS=0.000;MQ0=0;MQRankSum=-0.746;ReadPosRankSum=-1.106;set=variant-variant3-variant5-variant6-variant7-variant8-variant9-variant10 GT:AD:DP:GQ:PL 1/1:0,22:22:51:697,51,0 ./. 1/1:0,20:20:48:667,48,0 ./. 0/1:14,12:25:99:317,0,403 1/1:0,15:15:36:469,36,0 1/1:0,16:16:36:499,36,0 1/1:0,13:13:30:415,30,0 1/1:0,14:14:30:412,30,0 1/1:0,13:13:36:492,36,0 KB668289 903 . T C 577.77 PASS AC=14;AF=0.875;AN=16;DP=173;Dels=0.00;EFF=DOWNSTREAM(MODIFIER||3412|||AFUN005870|||AFUN005870-RA||1),SYNONYMOUS_CODING(LOW|SILENT|ctA/ctG|L578||AFUN005868|||AFUN005868-RA|4|1),UPSTREAM(MODIFIER||2667|||AFUN005869|||AFUN005869-RA||1);MQ0=0;set=variant-variant2-variant3-variant5-variant6-variant7-variant9-variant10 GT:AD:DP:GQ:PL 1/1:0,20:20:45:606,45,0 1/1:0,19:19:42:588,42,0 1/1:0,22:22:54:720,54,0 ./. 1/1:0,29:29:63:882,63,0 0/1:16,13:28:99:401,0,391 1/1:0,13:13:30:425,30,0./. 1/1:0,26:26:57:785,57,0 0/1:8,7:15:99:223,0,243 KB668289 1224 . 
A C 572.77 PASS AC=10;AF=0.833;AN=12;DP=122;Dels=0.00;EFF=DOWNSTREAM(MODIFIER||3091|||AFUN005870|||AFUN005870-RA||1),SYNONYMOUS_CODING(LOW|SILENT|cgT/cgG|R471||AFUN005868|||AFUN005868-RA|4|1),UPSTREAM(MODIFIER||2346|||AFUN005869|||AFUN005869-RA||1);HaplotypeScore=0.0000;MQ0=0;set=variant-variant3-variant6-variant7-variant8-variant9 GT:AD:DP:GQ:PL 1/1:1,20:20:42:601,42,0 ./. 1/1:0,26:26:63:865,63,0 ./. ./. 0/1:9,10:19:99:276,0,236 1/1:0,17:17:42:583,42,0 0/1:14,5:19:99:119,0,406 1/1:0,20:20:45:630,45,0 ./. Now, I have to screen the first column against files containing genes in the following format. Referred to as gsts-funestus.gff in the script # start gene FUNEGST002 KB668289 AUGUSTUS gene 410926 411627 0.18 + . FUNEGST002; gene_name "GSTd3"; auto "g17128_modified"; KB668289 AUGUSTUS transcript 410926 411627 0.18 + . FUNEGST002.t1 KB668289 AUGUSTUS start_codon 410926 410928 . + 0 transcript_id "FUNEGST002.t1"; gene_id "FUNEGST002"; KB668289 AUGUSTUS intron 411058 411126 0.99 + . transcript_id "FUNEGST002.t1"; gene_id "FUNEGST002"; KB668289 AUGUSTUS CDS 410926 411057 0.58 + 0 transcript_id "FUNEGST002.t1"; gene_id "FUNEGST002"; KB668289 AUGUSTUS CDS 411127 411627 0.74 + 0 transcript_id "FUNEGST002.t1"; gene_id "FUNEGST002"; KB668289 AUGUSTUS stop_codon 411625 411627 . + 0 transcript_id "FUNEGST002.t1"; gene_id "FUNEGST002"; # end gene FUNEGST002 Basically I have to extract the first line containing gene in the 3rd column and print a hybrid output in the following format. GSTd3 KB668289 903 T=>C AC=14;AF=0.875;AN=16;DP=173;Dels=0.00;EFF=DOWNSTREAM(MODIFIER||3412|||AFUN005870|||AFUN005870-RA||1),SYNONYMOUS_CODING(LOW|SILENT|ctA/ctG|L578||AFUN005868|||AFUN005868-RA|4|1),UPSTREAM(MODIFIER||2667|||AFUN005869|||AFUN005869-RA||1);MQ0=0;set=variant-variant2-variant3-variant5-variant6-variant7-variant9-variant10 GT:AD:DP:GQ:PL 1/1:0,20:20:45:606,45,0 1/1:0,19:19:42:588,42,0 1/1:0,22:22:54:720,54,0 ./. 
1/1:0,29:29:63:882,63,0 0/1:16,13:28:99:401,0,391 1/1:0,13:13:30:425,30,0./. 1/1:0,26:26:57:785,57,0 0/1:8,7:15:99:223,0,243 GSTd3 KB668289 1224 A=>C AC=10;AF=0.833;AN=12;DP=122;Dels=0.00;EFF=DOWNSTREAM(MODIFIER||3091|||AFUN005870|||AFUN005870-RA||1),SYNONYMOUS_CODING(LOW|SILENT|cgT/cgG|R471||AFUN005868|||AFUN005868-RA|4|1),UPSTREAM(MODIFIER||2346|||AFUN005869|||AFUN005869-RA||1);HaplotypeScore=0.0000;MQ0=0;set=variant-variant3-variant6-variant7-variant8-variant9 GT:AD:DP:GQ:PL 1/1:1,20:20:42:601,42,0 ./. 1/1:0,26:26:63:865,63,0 ./. ./. 0/1:9,10:19:99:276,0,236 1/1:0,17:17:42:583,42,0 0/1:14,5:19:99:119,0,406 1/1:0,20:20:45:630,45,0 ./. The Script My script contains this part to read and process all lines in the input file. #!/usr/bin/perl -w use strict; use warnings; print "Enter your input file containing all SNPs [e.g. Funestus_NON_SYN.txt]:\t"; chomp (my $input_file = <STDIN>); # Open and import the information found in the files # my @input = &InputFileHandler($input_file); # Store the gene information in a hash with the key as scaffold and location # my %genes = (); foreach (@input) { next if $_ !~ /^KB.*/; my($scaffold,$location,$dot,$start,$end,$unknown,$pass,$rest) = split(/\t/, $_, 8); my $mutation = "$start=>$end"; my $key = $scaffold."_".$location; my $value = "$scaffold\t$location\t$mutation\t$rest"; $genes{$key} = $value; } The next part allows the user to define the number of reference files and screens out input file against it. Furthermore, it prints the results in a tab-separated output file. # Open the output file my $output_file = &GenerateFilename(); open my $fh_out,">",$output_file.".tsv" or die "Cannot open $output_file: $!\n"; # Print the header in the CSV file print $fh_out "Gene\tSNP\tLocation\tMutation\tRemaining information\n###\n"; print "Enter the number of reference files [e.g. 
gff files]:\t"; chomp (my $i = <STDIN>); # Generic loop that let's the user enter as many reference files as required # for (my $j = 1; $j <= $i;$j++) { print "Enter filename $j [e.g. gsts-funestus.gff]:\t"; chomp (my $file = <STDIN>); my @file_raw = &InputFileHandler($file); my @file_genes = &ScanArray(@file_raw); my @file_genes_print = &ScanPrint(@file_genes); print $fh_out "$_\n" foreach (@file_genes_print); print $fh_out "\n###\n"; } Finally, my subroutines are the following. sub GenerateFilename # Generates a filename based on the current date and time. { my @timeData = localtime(time); my $filename = join('_', @timeData); return $filename; } sub InputFileHandler # Opens a file and saves its content in an array { my $file = shift; my @lines; open my $fh,"<",$file or die "Cannot open $file: $!\n"; while (my $line = <$fh>) { chomp($line); push (@lines,$line); } close ($fh); return @lines; } sub Information # Separates the mutation from the remaining information { my $rest = shift; my @rest = split (/\t/,$rest); my $aa_change = "$rest[1]=>$rest[2]"; my $return = "$aa_change,$rest[5]"; return $return; } sub ScanArray # Scans a file array for lines containing SNPs, discards rest { my @array = @_; my @return; foreach (@array) { next if $_ !~ /^KB.*\t.*\t[Gg][Ee][Nn][Ee]/; my ($scaffold,$second,$type,$start,$end,$sixth,$seventh,$eigth,$info) = split (/\t/,$_); my ($unknown,$gene_name,$auto) = split (/; /,$info,3); $gene_name =~ s/.*"(.*?)".*/$1/s; my $gene = $scaffold."_".$gene_name; push (@return,$gene); } return @return; } sub ScanPrint # Filters for genes that contain SNPs { my @array = @_; my @return; foreach (@array) { my ($scaffold,$gene_name) = split (/_/,$_,2); foreach my $key (keys %genes) { next if $key !~ /$scaffold/; my $line = "$gene_name\t$genes{$key}"; push (@return, $line); } } return @return; } Question Do you people have any suggestions or ways of improving this script. 
It all works and no warnings have been returned with all search queries I've done (I used different files of the same format). EDIT 1 As pointed out to me by @Edward: Yes, my output should be the two lines. I have adjusted it in the main part of the question. Answer: Style Guides The Perl community does not have a single universal style – there is more than one way to do it. But sometimes, consistency isn't a bad thing either. Here are the three important cornerstones of Perl style: A perlstyle manpage exists which explains a sensible core of a style guide. Damian Conway published the Perl Best Practices book with many valuable tips, although I'd view his preferences as over-zealous. The perlcritic tool exists to automatically check your code for a set of style violations. It's inspired by Perl Best Practices. You can install it via the Perl::Critic module. You can layout your code automatically with perltidy, installable via Perl::Tidy. Based on those references, I make the recommendations below. Objective Style Issues Now lets look at what you've written. Don't use the -w command line flag, only use warnings, and use it always. They are fundamentally the same (they activate warnings), but -w has an unfortunate global effect. In your case it doesn't matter, but -w is a bad habit to get into. Don't call functions with an ampersand, like &foo(). This has a specific meaning (it disables so-called prototypes), but it hasn't been necessary to use it for over 20 years. Simply invoke your functions like foo(). When looping over a range, do not use the C-style for (my $j = 1; $j <= $i; $j++). Instead, use the range operator, and a foreach loop: for my $j (1 .. $i) { Since perl 5.10.0 (released 2007), you can use feature 'say' to enable the say function. It behaves exactly like print but always appends a newline, making it more convenient to use in most cases. The keywords for and foreach are absolutely equivalent, so we'd use the shorter form. 
When used as a statement modifier, the parens around the iterated values are optional: ... for @file_genes_print; In a foreach loop, always name the iteration variable rather than using $_. This is not possible in the statement modifier use case. for my $iteration_variable (@list) { ... } Hash variables are initialized by declaration. Assigning them the empty list is unnecessary – my %hash; rather than my %hash = ();. For readability, any comma should be followed by a space. open my $fh_out, ">", "$output_file.tsv" or die ... Prefer interpolation over concatenation. $scaffold."_".$location could be "${scaffold}_${location}" and $output_file.".tsv" could be "$output_file.tsv". Sometimes, join is preferable to both interpolation and concatenation. For example, "$scaffold\t$location\t$mutation\t$rest" becomes more readable as join "\t", $scaffold, $location, $mutation, $rest The print (and say) syntax is horrible because of the difference between print $foo $bar and print $foo, $bar – the former prints $bar to the filehandle $foo, whereas the latter prints "$foo$bar" to the currently selected filehandle (usually STDOUT). We could use the object-oriented interface. use IO::File; $foo->print($bar); We can use curly braces for the indirect object notation print { $foo } $bar I'd recommend always using the curly braces. You only really need them when the file handle is something other than a simple variable, but it's a good visual distinction. The ScanPrint uses the %genes hash which is therefore essentially a global variable. Instead, pass it in as an argument: scan_print(\%genes, @file_genes); ... sub scan_print { my ($genes, @array) = @_; ... for my $key (keys %$genes) { Don't load a whole file into memory unless you have to. If you can, process files line-by-line. Especially your use of InputFileHandler violates this. On the other hand, this may be absolutely OK for smaller files, or when you are going to use that amount of memory anyway.
Subjective Style Issues Variable and function names should generally be snake_case: lowercase, and words separated by underscores. You apply this for variables, but not for subroutine names. Consider using the K&R style/egyptian brackets. This means that the opening curly brace is on the same line as its introducing keyword and not on a line of its own: for (...) { ... } Why? For no other reason than that most Perl programmers do this. It is customary to call builtins without parentheses. This is partly because builtins are actually operators, not functions, and partly because it looks nicer. So let's do close $fh rather than close ($fh). In regexes, put symbols into character classes to emphasize them: /; / might become /[;][ ]/. Performance Issues, Bugs, and Other Notes You are using a regex /^KB.*/. The trailing .* is entirely unnecessary as this always matches any string. Remove it: /^KB/. Do not prompt for file names. Instead, take these from the command line arguments. The arguments are in the @ARGV array. So you could do: my ($input_file, @reference_files) = @ARGV; and invoke your script like perl my_script.pl input_file.tsv ref1 ref2. $key !~ /$scaffold/ interprets $scaffold as a regex. To test that $key does not contain the $scaffold, use the more efficient -1 == index $key, $scaffold. To test that they are not equal, use $key ne $scaffold, in which case you could reduce the surrounding loop to if (exists $genes{$scaffold}) { push @return, "$gene_name\t$genes{$scaffold}"; } The substitution s/.*"(.*?)".*/$1/s is iffy. If you want to match a quoted string, "[^"]*" is simpler for the regex engine to handle than ".*?". Of course, it also doesn't support escapes. If you want to replace the whole string by the thing in the quotes, just use a normal match rather than a substitution, e.g: if ($gene_name =~ /["]([^"]*)["]/) { $gene_name = $1; } which is more straightforward. Creating a time-based filename is fragile and unintuitive for a user.
Instead, let the user specify the output file name manually. Alternatively, generate a more unique filename (e.g. by using the process ID $$ in the filename). Another approach, which I have taken below, is to print to standard output, and let the user write the results to a file using shell redirection. The Information subroutine isn't used. Suggested Rewrite This rewrite doesn't use any prompting (instead it uses command line arguments and shell redirection), and it doesn't use any subroutines. The result should be shorter, simpler, and more efficient. This code has not been tested. #!/usr/bin/perl use strict; use warnings; use feature 'say'; # we take the input file as STDIN # and the output file as STDOUT # the user can redirect this to desired files: # $ perl my_script.pl ref1 ref2 <input >output my @reference_files = @ARGV; # early input validation if (not @reference_files) { die "You have to specify at least one reference file as argument\n"; } for my $file (@reference_files) { die "The reference file '$file' does not exist\n" if not -e $file; } # parse the input file my %genes; while (my $line = <STDIN>) { chomp $line; next if not $line =~ /^KB/; my ($scaffold, $location, $dot, $start, $end, $unknown, $pass, $rest) = split /\t/, $line, 8; $genes{$scaffold}{$location} = join "\t", $scaffold, $location, "$start=>$end", $rest; } # print the header say join "\t", 'Gene', 'SNP', 'Location', 'Mutation', 'Remaining Information'; say "###"; # go through each reference file for my $file (@reference_files) { open my $ref, "<", $file or die "Can't open reference file '$file': $!"; while (my $line = <$ref>) { chomp $line; my ($scaffold, undef, $type, $start, $end, undef, undef, undef, $info) = split /\t/, $line; next if not $scaffold =~ /^KB/; next if not $type =~ /^GENE/i; my ($transcript_id, $gene_name, $auto) = split /[;][ ]/, $info; $gene_name = $1 if $gene_name =~ /["]([^"]*)["]/; if (my $matching_genes = $genes{$scaffold}) { say join "\t", $gene_name, $_ for values
%$matching_genes; } } say ""; say "###"; }
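The join at the heart of the suggested rewrite — index the SNP lines by scaffold, then look each gff gene line's scaffold up — is language-independent. A minimal Python illustration of that logic, using made-up inline data that mirrors the two tab-separated formats (this is a sketch, not the Perl script above):

```python
import re
from collections import defaultdict

# Hypothetical inline data mirroring the two tab-separated file formats
snp_lines = [
    "KB668289\t903\t.\tT\tC\t577.77\tPASS\tAC=14;DP=173",
    "KB669306\t9962\t.\tC\tT\t520.77\tPASS\tAC=13;DP=103",
]
gff_lines = [
    'KB668289\tAUGUSTUS\tgene\t410926\t411627\t0.18\t+\t.\t'
    'FUNEGST002; gene_name "GSTd3"; auto "g17128";',
]

# 1) index the SNPs by scaffold (the first column)
snps = defaultdict(list)
for line in snp_lines:
    scaffold, location, _, ref, alt, _, _, rest = line.split("\t", 7)
    snps[scaffold].append((location, f"{ref}=>{alt}", rest))

# 2) scan gff lines whose third column is "gene", pull the quoted name,
#    and emit one hybrid row per SNP on the same scaffold
rows = []
for line in gff_lines:
    fields = line.split("\t")
    if len(fields) < 9 or fields[2].lower() != "gene":
        continue
    m = re.search(r'gene_name "([^"]*)"', fields[8])
    gene_name = m.group(1) if m else "?"
    for location, mutation, rest in snps.get(fields[0], []):
        rows.append((gene_name, fields[0], location, mutation, rest))
```

With the sample data this yields one row, `GSTd3 / KB668289 / 903 / T=>C`, matching the desired hybrid output format.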
{ "domain": "codereview.stackexchange", "id": 7280, "tags": "perl, csv, bioinformatics, join" }
Can El Niño/La Niña occur again after normal state?
Question: As far as I know, El Niño and La Niña usually occur alternately, but is it possible for an El Niño to occur again after a previous El Niño has returned to the normal state? Answer: If you look at the ENSO index record, you can see many instances of El Niño or La Niña occurring multiple times in a row. For instance, the period 1977-1983 and even after that is a succession of El Niño events with normal conditions in between and no La Niña period. Multivariate ENSO Index (MEI) from NOAA
{ "domain": "earthscience.stackexchange", "id": 848, "tags": "enso" }
Creating ethane through electrolysis of vinegar
Question: Some sources that I studied say that it is possible to produce ethane through the electrolysis of ethanoic acid. Would this work with vinegar (5-10% acid)? Also, other sources say that ethanoate salts should be used. I can easily obtain such salts (sodium hydrogen carbonate (baking soda) + ethanoic acid (vinegar)), and vinegar as well, so which one is better for creating ethane? I don't care about the amount of carbon dioxide, oxygen, hydrogen or any other gas produced. Answer: Ethane is formed at the anode through $\ce{CH3COO- -> CH3. + CO2 + e-}$ and then $\ce{2 CH3. -> C2H6}$. Since acetate solutions have a higher concentration of acetate anions than acetic acid, I would use an acetate solution to get a higher concentration of acetate at the anode in favor of the formation of ethane.
{ "domain": "chemistry.stackexchange", "id": 5287, "tags": "electrolysis, hydrocarbons" }
Basecalling process carried out by full_1dsq_basecaller.py/.exe
Question: I am unable to understand what exactly the full_1dsq_basecaller.py/.exe script does. Is it giving the consensus sequence for the linear and complementary strands, or is it just giving the reads that came from the complementary strand? If we are getting two fastq files as output, which one should we use for alignment? Thank you in advance. I greatly appreciate any help. Answer: My understanding is that the 1D$^2$ process is carried out in three stages:
{ "domain": "bioinformatics.stackexchange", "id": 440, "tags": "nanopore, albacore, 1d2-reads" }
Is there some genetic variance underlying music appreciation?
Question: Is there any research done on the genetic variance for music appreciation? If not, why is there no genetic variance for this trait? Answer: In 2013, Dr. Liisa Ukkola-Vuoti, University of Helsinki, Finland, conducted a detailed GWCNV (Genome-Wide Copy Number Variation) analysis of a group of people for musical creativity and aptitude. Genome-Wide Copy Number Variation Analysis in Extended Families and Unrelated Individuals Characterized for Musical Aptitude and Creativity in Music They analyzed genome-wide copy number variations (CNVs) in five extended pedigrees and in 172 unrelated subjects characterized for musical aptitude and creative functions in music. Musical aptitude is taken as the sum of scores of an auditory structuring ability test and the Seashore tests for pitch and for time. In addition, data on creativity in music were surveyed using a web-based questionnaire. Several CNVRs containing genes that affect neurodevelopment, learning and memory were detected. A deletion at 5q31.1 covering the protocadherin-α gene cluster (Pcdha 1-9) was found co-segregating with low music test scores (COMB) in both sample sets. Pcdha is involved in neural migration, differentiation and synaptogenesis. Creativity in music was found to co-segregate with a duplication covering the glucose mutarotase gene (GALM) at 2p22. GALM has an influence on serotonin release and membrane trafficking of the human serotonin transporter. Genes related to serotonergic systems have been shown to associate not only with psychiatric disorders but also with creativity and music perception. Both Pcdha and GALM are related to the serotonergic systems influencing cognitive and motor functions, important for music perception and practice. A 1.3 Mb duplication was identified in a subject with low COMB scores in the region previously linked with absolute pitch (AP) at 8q24. No differences in the CNV burden were detected among the high/low music test scores or creative/non-creative groups.
In summary, CNVs and genes found in this study are related to cognitive functions. Our result suggests new candidate genes for music perception related traits and supports the previous results from AP study. Source: [1] Genome-Wide Copy Number Variation Analysis in Extended Families and Unrelated Individuals Characterized for Musical Aptitude and Creativity in Music [2] Musical Aptitude Is Associated with AVPR1A-Haplotypes
{ "domain": "biology.stackexchange", "id": 2525, "tags": "genetics, human-genetics" }
Where does the expectation value of $x$ formula come from?
Question: I want to understand precisely where the formula for the expectation value of $x$ comes from (in QM): $$\langle x\rangle=\int _{-\infty}^{\infty}\psi ^*x\psi dx $$ I know that an expectation value (in statistics) is just the sum of the products of the possible values $f(x)$ times their probabilities $\rho (x)$: $$\langle f(x)\rangle=\int f(x) \rho (x)dx $$ Since in QM mechanics the probability is given by $|\psi|^2 $, the expectation value of $f(x)$ would be: $$\langle f(x)\rangle=\int f(x) |\psi|^2dx=\int f(x)\psi^*\psi dx$$ But this differs from the form above. If $f(x)$ was Hermitian I could use the property of Hermitian operators to "move it" into the position that it should be, but since it is not necessarily Hermitian, I don't know how to explain this difference, or how to solve it. I have consulted Griffith's QM and also online, but I cannot find an answer. What am I missing here? Answer: In quantum mechanics, the expectation value of an observable $\hat{O}$ in a state $|\Psi\rangle$ is defined by $$ \langle \Psi|\hat{O}|\Psi\rangle \quad . $$ In your case the observable is the position operator with $\hat{x}\, |x\rangle= x\, |x\rangle$ and $\langle x|x'\rangle = \delta(x-x')$. We can write its expectation value, by making use of the relation $1 = \int \mathrm{d}x\, |x\rangle\langle x|$, as (here in 1D) $$ \langle \Psi|\hat{x}|\Psi\rangle = \int\mathrm{d}x \int\mathrm{d}x'\, \langle \Psi|x\rangle\langle x| \hat{x}|x'\rangle\langle x'|\Psi\rangle \quad, $$ which reduces to $$\langle \Psi|\hat{x}|\Psi\rangle = \int \mathrm{d}x\, \Psi^*(x)\, x \, \Psi(x) =\int \mathrm{d}x\, x\, |\Psi(x)|^ 2 \quad ,$$ where we have defined $\langle x|\Psi\rangle \equiv \Psi(x)$. Edit: As OP is asking also for the case of the momentum operator: We can make similar arguments here and find $$\langle \Psi|\hat{p}|\Psi\rangle = \int \mathrm{d}p\,p\, |\Psi(p)|^2\ \quad. 
$$ Edit 2: The above expectation value can also be carried out in the $x$-representation by using the fact that $\langle x|\hat{p}|\Psi\rangle = -i\hbar\, \partial_x \Psi(x)$: $$ \langle \Psi|\hat{p}|\Psi\rangle= -i\hbar \int \mathrm{d}x\, \Psi^*(x)\, \partial_x \Psi(x) \quad . $$
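As a quick numerical sanity check of the final formula (not part of the original answer), one can discretize a Gaussian probability density $|\Psi(x)|^2$ centered at $x_0$ and verify that $\int \mathrm{d}x\, x\, |\Psi(x)|^2$ recovers $x_0$ — a plain-Python sketch with $\sigma = 1$ and $x_0 = 1.5$:

```python
import math

x0 = 1.5                    # center of the wave packet
n = 20001
xs = [-10.0 + 20.0 * i / (n - 1) for i in range(n)]
dx = xs[1] - xs[0]

# |Psi(x)|^2 for a normalized Gaussian wave packet with sigma = 1
prob = [math.exp(-(x - x0) ** 2 / 2.0) / math.sqrt(2.0 * math.pi) for x in xs]

norm = sum(prob) * dx                               # should be ~1
x_expect = sum(x * p for x, p in zip(xs, prob)) * dx  # should be ~x0
```

The Riemann sum reproduces the normalization and $\langle x\rangle = x_0$ to well below $10^{-6}$ on this grid.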
{ "domain": "physics.stackexchange", "id": 75448, "tags": "quantum-mechanics, hilbert-space, operators, wavefunction, observables" }
Simple Linux pipeline
Question: I've compiled a simple Linux pipeline and using my C style and modularization that I will use for my own operating system. Do you have something to say about this code (partly taken from SO)? For example, are all my imports necessary? How can I improve error handling? #include <sys/types.h> #include <errno.h> #include <stdio.h> #include <stdlib.h> #include <unistd.h> #include <string.h> struct command { const char **argv; }; int spawn_proc (int in, int out, struct command *cmd) { pid_t pid; if ((pid = fork ()) == 0) { if (in != 0) { dup2 (in, 0); close (in); } if (out != 1) { dup2 (out, 1); close (out); } return execvp (cmd->argv [0], (char * const *)cmd->argv); } return pid; } int fork_pipes (int n, struct command *cmd) { int i; pid_t pid; int in, fd [2]; in = 0; for (i = 0; i < n - 1; ++i) { pipe (fd); spawn_proc (in, fd [1], cmd + i); close (fd [1]); in = fd [0]; } if (in != 0) dup2 (in, 0); return execvp (cmd [i].argv [0], (char * const *)cmd [i].argv); } int main (int argc, char ** argv) { int i; if (argc == 1) { /* There were no arguments */ const char *printenv[] = { "printenv", 0}; const char *sort[] = { "sort", 0 }; const char *less[] = { "less", 0 }; struct command cmd [] = { {printenv}, {sort}, {less} }; return fork_pipes (3, cmd); } if (argc > 1) { /* I'd like an argument */ char *tmp; int len = 1; for( i=1; i<argc; i++) { len += strlen(argv[i]) + 2; } tmp = (char*) malloc(len); tmp[0] = '\0'; int pos = 0; for( i=1; i<argc; i++) { pos += sprintf(tmp+pos, "%s%s", (i==1?"":"|"), argv[i]); } const char *printenv[] = { "printenv", 0}; const char *grep[] = { "grep", "-E", tmp, NULL}; const char *sort[] = { "sort", 0 }; const char *less[] = { "less", 0 }; struct command cmd [] = { {printenv}, {grep}, {sort}, {less} }; return fork_pipes (4, cmd); free(tmp); } } What it does is simply a printenv | sort | less if there are no args and if there are arguments it is taken as parameter list to grep to grep for environment varibles. 
I'm using it to develop my programming and make my own command-line environment for my own FPGA operating system. Answer: dup2 handles equal file descriptors very nicely. There is no need to test for in != 0 etc. On the other hand, you must check the return values, especially those of the system calls, and give a proper diagnostics when necessary. An idea of executing the last pipeline component inside a master process is dubious. You lose the control over children's lifecycle. I recommend to run each component in its own process, and wait for all of them. An i < n - 1 loop terminating condition looks alarming. It takes some mental effort to understand what you are after. You may try to heal it with the comment, but most likely you should treat all pipeline components equally. Keep in mind that some commands (cd being most prominent) must be builtins. Spawning a shell for them would be a design mistake.
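The reviewer's main point — run every pipeline stage in its own child and wait for all of them, instead of exec'ing the last stage in the master process — can be sketched in Python with subprocess (a hypothetical illustration of the pattern, not a fix for the C code; the two one-liner stages stand in for `sort | uniq` so the example needs no external tools):

```python
import subprocess
import sys

def run_pipeline(cmds, feed=b""):
    """Run a list of argv lists as `cmd1 | cmd2 | ...`, each stage in
    its own process, then wait for every child, so the master keeps
    control over all children's lifecycles."""
    procs = []
    prev = None
    for argv in cmds:
        stdin = prev.stdout if prev else subprocess.PIPE
        p = subprocess.Popen(argv, stdin=stdin, stdout=subprocess.PIPE)
        if prev:
            prev.stdout.close()  # let the child own the read end
        procs.append(p)
        prev = p
    procs[0].stdin.write(feed)
    procs[0].stdin.close()
    out = procs[-1].stdout.read()
    for p in procs:
        p.wait()
    return out

# portable stand-ins for sort | uniq
sort_cmd = [sys.executable, "-c",
            "import sys; sys.stdout.write(''.join(sorted(sys.stdin)))"]
uniq_cmd = [sys.executable, "-c",
            "import sys\nprev=None\nfor l in sys.stdin:\n if l!=prev: sys.stdout.write(l)\n prev=l"]

result = run_pipeline([sort_cmd, uniq_cmd], feed=b"b\na\nb\n")
```

Closing the parent's copy of each intermediate read end mirrors the `close(fd[1])` bookkeeping in the C version, and the final `wait` loop is what the exec-in-master design gives up.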
{ "domain": "codereview.stackexchange", "id": 12912, "tags": "c, linux" }
What is the motivation for using Q-Learning in RL?
Question: In Spinning Up by OpenAI, it says the following regarding policy optimization methods and Q-Learning as ways of getting a good policy for RL. Trade-offs Between Policy Optimization and Q-Learning. The primary strength of policy optimization methods is that they are principled, in the sense that you directly optimize for the thing you want. This tends to make them stable and reliable. By contrast, Q-learning methods only indirectly optimize for agent performance, by training $Q_{\theta}$ to satisfy a self-consistency equation. There are many failure modes for this kind of learning, so it tends to be less stable. But, Q-learning methods gain the advantage of being substantially more sample efficient when they do work, because they can reuse data more effectively than policy optimization techniques. What I am wondering is the motivation behind Q-Learning in this sense; I understand that when it works, it can be nice getting better sample efficiency, but what I don't understand is why Q-Learning was even considered in the first place as a way to approximate the optimal policy. It seems counterintuitive to me to have something I want to optimize and then to not optimize it, but rather optimize something else. In other words, why does Q-learning work when it does? Answer: First, you asked why Q-Learning was considered. The reason is that it was a big revolution in the history of RL and the best method for a variety of problems at the time (1989, Watkins). Policy optimization was only introduced later by R. Williams in 1992. There are some points where Q-Learning can have some advantages over policy optimization algorithms: tabular Q-learning is easy to implement and does not require any form of gradients. So for small environments (small in the sense that all the state-action pairs fit easily in memory or in a database) this might be a good fit.
policy optimization models the policy while q-learning models the state-action value function (note that actor-critic methods can model both). The policy might be the easier function to model for some problems but not for all. So it depends on the problem which to use. Q-Learning is in general better understood theoretically (according to Sutton-Barto, but this might not hold for deep Q-Learning). Q-Learning is an off-policy method, meaning that one can learn from a policy other than the one being optimized, which improves sample efficiency (note that there are soft actor critic and td3 which are also off-policy methods using policy gradients). policy gradient methods do require that the policy can be modeled by a differentiable function. Q-Learning does not have this requirement. Why does Q-Learning work: In general reinforcement learning problems can be modeled by the Bellman equation: \begin{equation} v_{\pi}(s) = \sum_{a}\pi(a|s)\sum_{s', r}p(s', r|s,a)\left[r+\gamma\, v_{\pi}(s')\right]. \end{equation} Problems like this can be tackled with various methods like dynamic programming, monte carlo methods or temporal difference methods, which combine the previous two. Q-Learning is a temporal difference method and there are theoretical proofs showing that it converges to optimality under certain conditions. The problems of Q-Learning are of a more practical nature: the policy is indirectly modeled by the state-action value function, which might not be ideal; the policy can depend on hyperparameters for exploration which might not be tuned well; or the actor policy might not visit crucial states often enough.
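The "tabular Q-learning is easy to implement and does not require any form of gradients" point can be made concrete with a minimal sketch (a generic toy example, not from the answer): on a 5-state corridor with reward 1 at the right end, the update $Q(s,a) \leftarrow Q(s,a) + \alpha\,[r + \gamma \max_{a'} Q(s',a') - Q(s,a)]$ learns a greedy policy of always moving right.

```python
import random

random.seed(0)
N_STATES, GOAL = 5, 4          # corridor of states 0..4, reward on reaching 4
LEFT, RIGHT = 0, 1
alpha, gamma, eps = 0.5, 0.9, 0.3

Q = [[0.0, 0.0] for _ in range(N_STATES)]   # the whole "model": a table

def step(s, a):
    """Deterministic corridor dynamics."""
    s2 = max(0, s - 1) if a == LEFT else min(GOAL, s + 1)
    return s2, (1.0 if s2 == GOAL else 0.0)

for _ in range(500):           # episodes of epsilon-greedy Q-learning
    s = 0
    while s != GOAL:
        if random.random() < eps:
            a = random.randrange(2)
        else:
            a = RIGHT if Q[s][RIGHT] >= Q[s][LEFT] else LEFT
        s2, r = step(s, a)
        target = r if s2 == GOAL else r + gamma * max(Q[s2])
        Q[s][a] += alpha * (target - Q[s][a])   # no gradients anywhere
        s = s2
```

After training, `Q[s][RIGHT] > Q[s][LEFT]` for every non-terminal state, and the values approach the discounted fixed points ($0.9^3, 0.9^2, 0.9, 1$ for the RIGHT action) — the self-consistency the quoted passage refers to.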
{ "domain": "ai.stackexchange", "id": 3671, "tags": "reinforcement-learning, deep-rl, q-learning" }
Time complexity and content-free evaluation
Question: I am having trouble answering the question below: "Explain why the statement, “The running time of algorithm A is at least O(n^2)”, is content-free." The statement apparently does not give any information on the running time of A, but if A's running time is $T(n)$, then $T(n) \geq O(n^2)$. Doesn't this at the very least tell us that $A$'s running time grows at least as fast as $n^2$? Answer: As suggested by Steven, I turned my comment into a full answer. I think the question aims to point out that for a constant running time $c$ you have $c \in \mathcal{O}(1) \subseteq \mathcal{O}(n^2)$. So you might interpret the statement in your exercise as "There is a running time in $\mathcal{O}(n^2)$ that is a lower bound for $\mathcal{A}$'s running time." However, since even a constant running time can be selected from $\mathcal{O}(n^2)$, this is a completely trivial statement that does not provide any insights about $\mathcal{A}$'s actual running time.
{ "domain": "cs.stackexchange", "id": 21537, "tags": "time-complexity" }
Using Total Variation Denoising to Clean Accelerometer Data
Question: I know this is maybe a very basic question but I am doing this as a hobby and I can't find a solution to this problem. Basically I am trying to remove some noise from data I am reading from an accelerometer. This is what I want to achieve (taken from Total Variation Denoising (An MM algorithm)): I read in Picking the correct filter for accelerometer data that Total Variation Denoising would fit my needs. So I read the Total Variation Denoising article on Wikipedia and I think I have to use one of these equations: But I don't understand how I apply this to my signal. Suppose I have a set of x,y points like in the plots above, how do I apply the equation to that data? I implemented some simple low-pass and high-pass filters like this: gravity[0] = alpha * gravity[0] + (1 - alpha) * event.values[0]; But this is maybe too complex and I don't know where or how to start. I want to implement this in Java or C, so MATLAB is not an option (I have seen a lot of MATLAB implementations of this). I will appreciate any help to guide me in the right direction! Answer: Apart from Total Variation Denoising you could try a first, much simpler approach: a median filter. You just move a window along your data and replace the current input value by the median of all data in the window. You just have to optimize the window length (by experimenting). By the way, the equations you copied into your question are for 2-dimensional data, but your data are 1-D.
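To illustrate the suggested median filter, here is a minimal sketch in Python rather than Java/C (purely illustrative; porting it is a direct translation, and edge samples simply use a truncated window):

```python
import statistics

def median_filter(signal, window=5):
    """Sliding-window median; edge samples use a truncated window."""
    half = window // 2
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - half), min(len(signal), i + half + 1)
        out.append(statistics.median(signal[lo:hi]))
    return out
```

A single spike of impulsive noise is removed entirely, which is exactly what a low-pass filter cannot do without smearing the spike across its neighbours.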
{ "domain": "dsp.stackexchange", "id": 6294, "tags": "matlab, discrete-signals, noise, denoising, total-variation" }
Electric field of a full disk - when $R \to 0$ - it's not equal to Coulomb's law
Question: An MIT document states that the electric field of a full disk, when $R \to 0$, reduces to Coulomb's law $$\mathbf E_{disk}=2\pi k_e\sigma\left[1-\frac{x}{\left(x^2+R^2\right)^{1/2}}\right]\hat{i}=\frac{\sigma}{2\varepsilon_0}\left[1-\frac{x}{\left(x^2+R^2\right)^{1/2}}\right]\hat{i}$$ Either version is fine, it's just a different way of writing the constant. You should also check the limits: for $R\to0$ (but keep $Q$ constant!) it should go to a point charge. For $R\to\infty$ (infinite plane) it should be a constant. Though, I don't think that it works that way: it is easily seen that when $R \to 0$ (holding $\sigma$ fixed), $\mathbf E_{disk} \to 0$. Can somebody help me figure out how to arrive at the stated result - what am I missing to get the field of a point charge when the disk size goes to zero? Answer: Remember that we keep leading order terms. So for the second part of the expression in parentheses, as $R \rightarrow 0$, we don't just get 1. Using the Taylor expansion, we get $$ \frac{1}{\sqrt{1+\frac{R^2}{x^2}}}\Rightarrow 1 - \frac{1}{2}\frac{R^2}{x^2}+....$$ Plug this into the original equation while remembering $\sigma= \frac{Q}{\pi R^2}$ gives $$ \vec{E}_{disc}= \frac{\sigma}{2 \epsilon_0}\left[\frac{R^2}{2x^2}\right] = \frac{Q}{4 \pi \epsilon_0 x^2}$$ which is exactly the field of a point charge that we want.
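A quick numerical check of the limit (SI units; the helper names are just for illustration): shrink $R$ while holding $Q$ fixed and compare against the point-charge field. The ratio approaches 1 as $R \to 0$, and the discrepancy shrinks like $R^2/x^2$, matching the next Taylor term.

```python
import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def e_disk(x, R, Q):
    """On-axis field of a uniformly charged disk, with sigma = Q / (pi R^2)."""
    sigma = Q / (math.pi * R**2)
    return sigma / (2 * EPS0) * (1 - x / math.sqrt(x**2 + R**2))

def e_point(x, Q):
    """Coulomb field of a point charge."""
    return Q / (4 * math.pi * EPS0 * x**2)
```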
{ "domain": "physics.stackexchange", "id": 20392, "tags": "electrostatics" }
Why doesn't Epsom salt dissolved in water act like sulfuric acid?
Question: People take baths in it so it clearly doesn't. But if Epsom salt is the salt of sulfuric acid, when it dissolves, the chemical ions present in sulfuric acid should also be present in the bath water, and in abundance enough to melt you.. right? I also remember that water autodissociates into hydronium and hydroxide, so my other question is why wouldn't plain water simultaneously be an acid and a base and also just melt you? If there's a trillion (or whatever) water molecules next to your skin and even a tiny fraction of them turn into an acid and some of them turn into a base, there'd cumulatively be a lot of acid on your skin. I can't imagine adding salt which would also ionize could help things. Answer: Well, as you might have guessed, it's the $\ce{H+}$ in sulfuric acid that is dangerous, not the $\ce{SO4^2-}$. So, Epsom salts, being $\ce{MgSO4}$, aren't very dangerous. In the same vein, table salt ($\ce{NaCl}$) is perfectly OK despite hydrochloric acid ($\ce{HCl}$) being rather nasty. And $\ce{H+}$ is only corrosive in high concentrations, mind you - otherwise we'd be burning ourselves drinking orange juice, which itself has a small concentration of $\ce{H+}$. That also answers why water doesn't kill you: only a tiny fraction of it (roughly two molecules in every billion at any one time) actually undergoes autodissociation. It's not about the actual number of $\ce{H+}$ ions, but rather the concentration. If you dilute $\ce{HCl}$ enough you can pretty much just drink it. (Please don't be stupid and take this as a licence to do a home experiment and drink $\ce{HCl}$.)
{ "domain": "chemistry.stackexchange", "id": 6905, "tags": "acid-base, water, safety" }
How do I access the source of Mimick that mimick_vendor's CMake file references, in order to edit Mimick's CMake file to give it arm7l compatibility
Question: I am trying to get ROS2 to work on the Raspberry Pi 4 by building it from source, but this one library seems to be the ONLY thing that refuses to build when building from source for Raspbian. The Ubuntu 20.04 Pi image has horrible Pi compatibility and I'm not sure if it even works with peripherals. The only way ROS2 is going to work properly with all Pi functionality intact is with Raspbian. EDIT: I've done a bit of debugging on this issue and have narrowed down the problem to the ROS2 version of Mimick (https://github.com/ros2/Mimick) being built to throw an error on untested archs. According to https://github.com/Snaipe/Mimick/issues/17#issue-733092525 there is a workaround to allow Mimick to support arm7l architectures, but the ROS2 source does not seem to have a Mimick source folder I can locate, and the only references to its location are in mimick_vendor's CMake file: build_mimick() ament_export_libraries(mimick) ament_export_dependencies(mimick) How do I access the source of the mimick library that mimick_vendor uses during compile time so I can append it with a workaround to give it arm7l compatibility? Originally posted by rydb on ROS Answers with karma: 125 on 2020-10-29 Post score: 1 Original comments Comment by gvdhoorn on 2020-10-30: mimick_vendor brings in a dependency which is AFAIK only used during tests. If you don't care about tests, you could disable building them and you would probably not even need mimick_vendor. That's not really a solution of course, more of a work-around. Comment by rydb on 2020-10-30: The problem though is that not building it breaks rcutils, and a lot of important packages depend on rcutils. Comment by gvdhoorn on 2020-10-30: If you disable testing for rcutils (and other dependants) I would expect mimick_vendor to no longer be needed. Comment by rydb on 2020-10-30: How do I do that? Comment by gvdhoorn on 2020-10-31: See #q329930.
Comment by rydb on 2020-11-01: Still breaks with the same error with the following command: colcon build --symlink-install --cmake-args -DBUILD_TESTING=OFF Comment by sinceaway on 2023-03-23: I want to know: if my files don't have a mimick_vendor config.cmake, what should I do? Comment by rydb on 2023-03-24: I only vaguely remember the problem I had, but if I remember correctly, the main thing is not checking for mimick_vendor; it's also deleting any rendering-related packages before building ROS2 from source (rviz2, ogre, etc.). Visualization of ROS2 should be done off-board, and that is what I think was requiring mimick_vendor. Answer: I solved the problem over the course of the week. The problem of ROS2 not compiling for the Pi 4 was actually two problems: 1. Rviz is broken on the Pi 4 and will probably not compile due to missing atomic functions. You will have to delete it from the src in order to compile ROS2. Whatever Rviz functionality you need will have to run on a different computer that can run it. 2. The ros2/mimick library that mimick_vendor vends was missing the ability to detect arm7l, the architecture the Pi uses in the 32-bit setting. Using a workaround from the Snaipe issue page, I added arm7l detection to the CMake file of the ROS2 Mimick and now it works. Until the pull request is accepted, replace the mimick_vendor CMakeLists.txt file with mine at https://github.com/rydb/Mimick/blob/ros2/CMakeLists.txt Edit: I built ROS2 with the following command: colcon build --symlink-install --cmake-args "-DCMAKE_SHARED_LINKER_FLAGS='-latomic'" "-DCMAKE_EXE_LINKER_FLAGS='-latomic'" Originally posted by rydb with karma: 125 on 2020-11-06 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by anthares on 2020-11-23: hey! thanks for your answer. I tried to follow your steps. mimick_vendor builds with no issue on a BeagleBone Blue too.
However, rcutils still throws an error: Starting >>> rcutils --- stderr: rcutils CMake Error at CMakeLists.txt:185 (find_package): By not providing "Findmimick_vendor.cmake" in CMAKE_MODULE_PATH this project has asked CMake to find a package configuration file provided by "mimick_vendor", but CMake did not find one. Could not find a package configuration file provided by "mimick_vendor" with any of the following names: mimick_vendorConfig.cmake mimick_vendor-config.cmake Add the installation prefix of "mimick_vendor" to CMAKE_PREFIX_PATH or set "mimick_vendor_DIR" to a directory containing one of the above files. If "mimick_vendor" provides a separate development package or SDK, be sure it has been installed. Failed <<< rcutils [8.11s, exited with code 1] Any step missing? Comment by rydb on 2020-12-07: Sorry I didn't check the ROS forums for a while; I'm thinking your mimick_vendor installation build failed. Find out what architecture your BeagleBone is, and try adding its architecture to the list of accepted architectures: clone my Mimick pull, then edit that Mimick's CMake to accept your BeagleBone's architecture. Then, have mimick_vendor vend your modified Mimick instead of the ROS2 one.
{ "domain": "robotics.stackexchange", "id": 35693, "tags": "ros2" }
Size of constant depth circuit for digital comparator?
Question: Is a lower bound of $\Omega(n^2)$ known for the size of any constant depth circuit expressing a digital comparator for two $n$-bit numbers? Two $n$-bit binary numbers can be compared using a digital comparator circuit. The straightforward way to implement such a circuit is to compare the high-order bits; if they are the same then continue to compare the second most significant bits, and so on. This circuit has size (measured in the number of gates) that is roughly quadratic in $n$, with linear depth. By standard arguments, this can be folded up into an equivalent circuit of roughly the same size that has logarithmic depth (in $n$). In fact, constant depth circuits of size $O(n^2)$ are sufficient for a digital comparator, if unbounded fan-in gates are allowed: see Heribert Vollmer's Introduction to Circuit Complexity (Exercise 1.4). Is a tight lower bound known? Answer: Comparison can be implemented by subtracting twos-complement numbers and then testing whether the most significant bit is 0 or 1. Subtraction can be implemented as addition ($x-y = x + \overline{y} + 1$, where $\overline{y}$ is obtained by flipping all bits of $y$). Addition can be implemented using carry-lookahead methods. If you put all this together, I think you get a comparison circuit that uses $O(n)$ gates and has constant depth. In particular, with $a = x$ and $b = \overline{y}$ (and the $+1$ supplied as carry-in $c_0 = 1$), let $g_i = a_i \land b_i$ ("generate"), $p_i = a_i \lor b_i$ ("propagate"), $q_i = p_{i+1} \land \cdots \land p_{n-2}$, $t_i = g_i \land q_i$, and $c_{n-1} = t_0 \lor \cdots \lor t_{n-2} \lor (c_0 \land p_0 \land \cdots \land p_{n-2})$ ("carry into the most significant bit"). Each $q_i$ is a single unbounded fan-in AND gate, so this lets you compute the carry into the most significant bit in constant depth and $O(n)$ gates. That's all you need for the results of the comparison (you don't need to compute the value of the other bits of the sum). The reason this requires fewer gates than a full addition or full subtraction is that we only need the carry into the most significant bit, not all the carry bits. Double-check my reasoning -- I could have gone awry somewhere.
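As a sanity check on the carry-lookahead idea, here is a hypothetical Python sketch (names are invented): it computes the carry out of $x + \overline{y} + 1$ from generate/propagate terms, where each suffix-AND corresponds to a single unbounded fan-in gate, and that carry is 1 exactly when $x \ge y$:

```python
def geq_const_depth(x, y, n):
    """Carry out of x + ~y + 1 over n bits, computed the lookahead way;
    it equals 1 exactly when x >= y (unsigned, x, y < 2**n)."""
    a = [(x >> i) & 1 for i in range(n)]          # bits of x, LSB first
    b = [1 - ((y >> i) & 1) for i in range(n)]    # bits of the complement of y
    g = [ai & bi for ai, bi in zip(a, b)]         # generate: x_i = 1, y_i = 0
    p = [ai | bi for ai, bi in zip(a, b)]         # propagate: x_i >= y_i
    q = [all(p[i + 1:]) for i in range(n)]        # each q_i: one wide AND gate
    t = [bool(gi) and qi for gi, qi in zip(g, q)]
    carry_in_propagates = all(p)                  # the "+1" carried through every position
    return any(t) or carry_in_propagates
```

Exhaustively checking all pairs for small $n$ confirms the formula agrees with direct comparison.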
{ "domain": "cs.stackexchange", "id": 8333, "tags": "lower-bounds, circuits, digital-circuits" }
Regex for finding an expression in a sentence
Question: I have a regular expression that matches strings containing the word "duration", followed by a < or > operator and then a number. Here is the expression: duration\s*(<|>)\s*([0-9]+) Is there any way I can make this expression better? Answer: Since you only have two characters to contend with, namely < and >, use a character set. Other than that, you might consider \d for matching your digits instead of 0-9, as \d covers decimal digits across the entire Unicode range. The final expression would be: duration\s*([<>])\s*(\d+) I've left the matching groups in place, considering that you might be referencing them in your later code.
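A quick sanity check of the final pattern, shown in Python's `re` module purely for illustration (the pattern itself is unchanged from the C# version):

```python
import re

# Same pattern as the reviewed expression, just exercised from Python
pattern = re.compile(r"duration\s*([<>])\s*(\d+)")

m = pattern.search("where duration < 30 and status = 'ok'")
operator, value = m.group(1), int(m.group(2))
```

The two capture groups give you the operator and the number directly, and strings like `duration = 5` correctly fail to match.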
{ "domain": "codereview.stackexchange", "id": 19925, "tags": "c#, .net, regex" }
Cubic Permutations | Project Euler #62
Question: The cube, 41063625 ($345^3$), can be permuted to produce two other cubes: 56623104 ($384^3$) and 66430125 ($405^3$). In fact, 41063625 is the smallest cube which has exactly three permutations of its digits which are also cube. Find the smallest cube for which exactly five permutations of its digits are cube. The following code can find three permutations under a second, but it's taking very long for finding five permutations. How do I improve the runtime? #! /usr/bin/env python import itertools def is_cube(n): return round(n ** (1 / 3)) ** 3 == n def main(): found = False n = 100 while not found: n += 1 cube = n ** 3 perms = [ int("".join(map(str, a))) for a in itertools.permutations(str(cube)) ] perms = [perm for perm in perms if len(str(perm)) == len(str(cube))] filtered_perms = set(filter(is_cube, perms)) if len(filtered_perms) == 5: found = True print(filtered_perms) if __name__ == "__main__": main() Answer: Tend to use one style perm for perm in perms if len(str(perm)) == len(str(cube)) and filter(is_cube, perms) are almost the same: an iterator that applies some filter. Don't confuse the reader - use the same style for both... or even join them in one expression. Don't create collections until needed perms is a list of permutations. For a 9-digit number there will be roughly 9! = 362880 permutations, and they are filtered afterwards. You can use a generator, so all filters will be applied together: perms = (int("".join(map(str, a))) for a in itertools.permutations(str(cube))) perms = (perm for perm in perms if len(str(perm)) == len(str(cube))) filtered_perms = set(filter(is_cube, perms)) The first two expressions won't actually do anything; the third will run all the actions because the set needs to be populated. So you'll save some memory and allocation/deallocation operations. Use the break keyword instead of a found variable Yes, it's not structural, but it makes code more readable if there's only one break (or several are located together in a long loop).
Instead of setting found = True just break the loop. Use itertools.count I think for n in itertools.count(100): looks better than while True: and all the operations with n. Algorithm As I've said, there are too many permutations. And then you do extra work because you're checking permutations you've already checked again on different numbers. Instead of that, just turn every cube into a kind of fingerprint - a string of sorted digits, ''.join(sorted(str(n**3))), and count the times you've met each fingerprint (in a dict or collections.Counter). All permuted numbers will have the same fingerprint. The only possible problem is that when you meet some fingerprint 5 times, you should also check that it won't be met a 6th time.
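Putting the fingerprint idea together, a hypothetical rewrite might look like this. Judging the groups only when the digit length rolls over handles the "met a 6th time" concern, since cubes with different digit counts can never be permutations of one another:

```python
from collections import defaultdict
from itertools import count

def smallest_cube_with_permutations(target):
    """Group cubes by their sorted-digit fingerprint; judge the groups only
    once every cube of the current digit length has been generated."""
    groups = defaultdict(list)   # fingerprint -> base numbers n, ascending
    digits = 1
    for n in count(1):
        cube = n ** 3
        if len(str(cube)) > digits:
            # all cubes with `digits` digits have been seen: groups are final
            hits = [g for g in groups.values() if len(g) == target]
            if hits:
                return min(g[0] ** 3 for g in hits)
            groups.clear()
            digits = len(str(cube))
        groups["".join(sorted(str(cube)))].append(n)
```

For the three-permutation case from the problem statement this recovers 41063625 almost instantly, since each cube is fingerprinted once instead of expanding all of its permutations.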
{ "domain": "codereview.stackexchange", "id": 41838, "tags": "python-3.x, programming-challenge" }
Operator name in LL(1) computation
Question: I'm working from a definition of the LL(1) property of context-free languages in order to build an LL(1)-computer, i.e., a program capable of determining whether a given context-free language is in LL(1). The definition requires the disjointness of certain sets; each set is defined as an infinite limit, but practically calculated via fixed-point iteration of an inductive, case-based equation. Present in these equations is the definition of a new operator to simplify the notation (let $\epsilon$ be the empty sequence): $$\forall S,T : S \oplus T = \begin{cases} S & \epsilon \not\in S \\ (S \setminus \{\epsilon\}) \cup T & \epsilon \in S \end{cases}$$ I know this operator in LaTeX as \oplus. In an attempt to write readable code, I want to give this operator a name (and endow Scala Sets with a trait that expresses this name). Is there a traditional name for this usage of $\oplus$? If so, what is it? Bonus points: if the name is, e.g., add, is there an acceptable word like addable with the meaning that two objects—mathematical or code—are capable of being added? Answer: A set of values that comes with an "add" operation that is associative is called a semigroup. If it also comes with an element that plays the role of 0 (so that 0 + x = x and x + 0 = x for all x), it is a monoid. In your case, you have a monoid, as $\{\epsilon\}$ plays the role of 0 (it is an identity).
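If it helps, the operation and the monoid laws are easy to spot-check in code. Here is a hypothetical Python sketch (a Scala version with a trait would be analogous), using the empty string as $\epsilon$:

```python
EPSILON = ""  # using the empty string as the empty sequence

def oplus(s, t):
    """S ⊕ T: S if ε ∉ S, otherwise (S − {ε}) ∪ T."""
    s, t = frozenset(s), frozenset(t)
    return s if EPSILON not in s else (s - {EPSILON}) | t
```

Spot checks confirm that $\{\epsilon\}$ is a two-sided identity and that the operation is associative on sample inputs, which is exactly the monoid structure named in the answer.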
{ "domain": "cs.stackexchange", "id": 13801, "tags": "formal-languages, context-free, terminology, formal-grammars" }
Show that a decidable language is not decided by a decider in a given set
Question: M. Sipser's Introduction to the Theory of Computation offers the following problem in its chapter on decidability: Let A be a Turing-recognizable language consisting of descriptions of Turing machines, {⟨M1⟩,⟨M2⟩, ...}, where every Mi is a decider. Prove that some decidable language D is not decided by any decider Mi whose description appears in A. (Hint: You may find it helpful to consider an enumerator for A.) My qualm about this is that the question seems to imply finding a decidable language, the decider for which is not in the set of all deciders, which goes against the definition of decidability of languages. Could you explain whether my doubts are justified? And if not, could you provide a proof (or a sketch of a proof) for the given problem (with or without the enumerator that is mentioned in the hint)? Answer: Suppose that $A$ is a recognizable set containing descriptions of always-halting Turing machines. We want to show that there exists a decidable language $L$ such that for any $M$ deciding $L$ we have $\langle M\rangle\notin A$. We can construct such an $L$ via simple diagonalization. Assume $A$ is infinite (otherwise the problem is trivial). We will define a Turing machine $M$ such that $L(M)$ does not have a decider in $A$. Let $M_A$ be an enumerator of $A$. Given input $x$, $M$ executes $M_A$ until it outputs the $|x|$-th word in the enumeration, $\langle M_{|x|}\rangle$. Now, execute $M_{|x|}$ on $x$ and flip its answer. I leave it to you to show that $M$ always halts and that $L(M)$ disagrees with $L(M_i)$ on at least $2^i$ inputs, where $M_i$ is the $i$-th Turing machine in the enumeration of $A$.
{ "domain": "cs.stackexchange", "id": 16381, "tags": "formal-languages, turing-machines" }
Text-messaging app using Twilio
Question: I built a small web app using Flask for my parents to text their customers about promotions. I am using the Twilio API. I just finished the myapp.com/new-campaign page and I would like to have some suggestions, as it's my first full-stack application and I just added a new feature: estimated cost of the campaign (coût). What are the requirements of this page: List: select a list of clients (I don't think we will ever have more than 2 lists, one for testing and 1 for customers) Coût: total estimated cost for the campaign Message: message to send How is the cost calculated? number of recipients (which lists I am using) message length (basically, between 0 and 160 characters it's segment 1; 160 to 320 it's segment 2 and so on). Segment 1 is $0.06, segment 2 is $0.06 * 2 and so on. My goal was to give a direct estimate of the total cost of a campaign. When the user selects a new customer list: make a back-end call to get the length of the list. For the message input, I told myself that it would be stupid to make a back-end call every time I add or remove a letter in the textarea. I just need to recalculate the cost when the message reaches a new segment, based on the number of characters.
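For reference, the pricing rule described above can be sketched as a standalone helper (a hypothetical rewrite; the names and constants are illustrative, mirroring the app's settings):

```python
import math

# Illustrative constants mirroring the app's configuration values
COST_PER_SEGMENT = 0.06
MAX_CHARS_PER_SEGMENT = 160

def estimate_cost(message_length, recipients):
    """Total campaign cost: an empty message still bills one segment."""
    segments = max(1, math.ceil(message_length / MAX_CHARS_PER_SEGMENT))
    return round(segments * COST_PER_SEGMENT * recipients, 2)
```

Crossing a segment boundary (e.g. going from 160 to 161 characters) is the only event that changes the result, which is exactly why an AJAX call per segment change, rather than per keystroke, suffices.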
So I am using JS for that and making an AJAX call only when I am in a new segment (upgrade or downgrade) message-calculator.js var inputMessageLength = 0; var currentSegment = 1; var SelectedCustomerList; // on load of the page $(document).ready(function () { updateList(getSelectedCustomersList()); updateCost(); }); // call the get_cost_estimation to update the price function updateCost() { $.ajax({ url: "/get_cost_estimation", type: "POST", data: JSON.stringify({ input_length: inputMessageLength, selected_list: SelectedCustomerList, }), contentType: "application/json; charset=utf-8", dataType: "json", }).done(function (data) { var newCost = data['estimated_cost'] + ' ' + data['currency']; document.getElementById("total-cost-value").innerHTML = newCost }); } function updateList(NewSelectedCustomerList) { SelectedCustomerList = NewSelectedCustomerList; updateCost(); } function getSelectedCustomersList() { return $("input[name=list]:checked").val(); } function updateSegmentNumber(newSegmentNumber){ currentSegment = newSegmentNumber; } // caled everytime the user make a change in the message input area function updateMessage(body, SegmentCharactersLimit) { var messageContent = body.value; inputMessageLength = messageContent.length; updateSegmentMessage(inputMessageLength, SegmentCharactersLimit); var newSegment = Math.ceil(inputMessageLength / SegmentCharactersLimit) if(newSegment != currentSegment){ updateSegmentNumber(newSegment); updateCost(); } } // alert message displayed to tell the client that he already reached the limit of segment 1 function updateSegmentMessage(inputMessageLength, SegmentCharactersLimit) { if (inputMessageLength > SegmentCharactersLimit) { document.getElementById("message-cost-alert").style.display = "block"; } else { document.getElementById("message-cost-alert").style.display = "none"; } } views.py from config.settings import COST_PER_SEGMENT, MAX_CARACTERS_PER_SEGMENT ... 
def total_cost_estimation(quantity, input_length): number_of_segments = math.ceil(input_length / MAX_CARACTERS_PER_SEGMENT) if number_of_segments < 1: number_of_segments = 1 estimated_cost_per_sms = number_of_segments * COST_PER_SEGMENT total_estimated_cost = estimated_cost_per_sms * quantity print(estimated_cost_per_sms) print(quantity) return round(total_estimated_cost, 2) @user.route("/get_cost_estimation", methods=['GET', 'POST']) def get_cost_estimation(): currency = '&euro;' input_length = request.json['input_length'] selected_list = request.json['selected_list'] if selected_list == 'test-list': count = mongo.db[customers_test].count() else: count = mongo.db[customers_production].count() estimated_cost = str(total_cost_estimation(count, input_length)) return jsonify( estimated_cost=estimated_cost, currency=currency, ) @user.route("/campaigns", methods=['GET']) @login_required def campaigns(): return render_template('campaigns.html', currency='€', cost_per_sms = 0, max_caracters = MAX_CARACTERS_PER_SEGMENT) ... campaigns.html <div class="container"> <h2>Choisissez votre message</h2> <form name="sms-form" action="/launch-campaign" method="POST"> <div class="form-group row"> </div> <div class="form-group row"> </div> <fieldset class="form-group"> <div class="row"> <legend class="col-form-label col-sm-2 pt-0">Liste</legend> <div class="col-sm-10"> <div class="form-check"> <input class="form-check-input" onclick="updateList('test-list')" type="radio" name="list" id="gridRadios1" value="test-list" checked> <label class="form-check-label" for="gridRadios1"> Test </label> </div> <div class="form-check"> <input class="form-check-input" onclick="updateList('production-list')" type="radio" name="list" id="gridRadios2" value="production-list"> <label class="form-check-label" for="gridRadios2"> Clients </label> </div> </div> </fieldset> <fieldset class="form-group"> ... 
</fieldset> <div class="form-group row"> <div class="col-sm-2">Message</div> <div class="col-sm-10"> <div class="form-check"> <textarea name="body" oninput="updateMessage(this, {{max_caracters}})" id="form10" class="md-textarea form-control" rows="3"></textarea> </div> <div id="message-cost-alert"class="alert alert-warning alert-dismissible fade show"> <strong>Attention!</strong> Votre message dépasse le nombre de {{max_caracters}} caracters <button type="button" class="close" data-dismiss="alert">&times;</button> </div> <div class="form-group row"> <div class="col-sm-10"> <br> <button type="submit" class="btn btn-primary" onclick="return confirm('Are you sure?')">Envoyer</button> </div> </div> </form> </div> Answer: Typos caled everytime -> called every time MAX_CARACTERS_PER_SEGMENT -> MAX_CHARACTERS_PER_SEGMENT max_caracters -> max_characters And my first language is not French, but shouldn't Votre message dépasse le nombre de {{max_caracters}} caracters be Votre message dépasse le nombre de {{max_characters}} caractères ? Language consistency Your UI is in French - great; but then why do you show return confirm('Are you sure?') ? DRY indexing if selected_list == 'test-list': count = mongo.db[customers_test].count() else: count = mongo.db[customers_production].count() can be something like if selected_list == 'test-list': table = customers_test else: table = customers_production count = mongo.db[table].count() Immutable methods Since the methods on a route are not expected to change, I find methods=['GET', 'POST'] to be better-expressed as a tuple: methods=('GET', 'POST')
{ "domain": "codereview.stackexchange", "id": 41092, "tags": "python, javascript, object-oriented, flask" }
Rails query for all questions in tournaments that a user participates in
Question: I'm new to Rails and this is the situation I'm in: I have a User model that has many tournaments, a Tournament has many users and questions, and a Question belongs to a tournament. I'm trying to write a query for all questions that are in tournaments the user is a part of. For example, if we have the user A in tournament 1, which has questions alpha, beta, gamma, and another tournament 2 with questions omega and delta, I want to display alpha, beta and gamma. So far this is what I've come up with: def user_questions first_questions = scope.joins(:tournament)\ .merge(Tournament.where(id: user.tournaments.first.id)) user.tournaments.each do |tournament| current_questions = scope.joins(:tournament)\ .merge(Tournament.where(id: tournament.id)) first_questions = first_questions.or(current_questions) end first_questions end (Imagine scope is Question) Now this works, but even I can see this code isn't that good, to put it mildly. I'm looking for suggestions on how to improve it. Answer: OK, so based on what you said your associations are, you can do class User has_many :tournaments has_many :questions, through: :tournaments end u = User.first questions = u.questions If you don't want that for some reason you can do class User has_many :tournaments end class Question belongs_to :tournament end u = User.first questions = Question.where(tournament_id: u.tournament_ids) Rails will convert that into a where tournament_id in (?) SQL statement
{ "domain": "codereview.stackexchange", "id": 21518, "tags": "beginner, ruby, ruby-on-rails, active-record, join" }
Mechanically increasing RPM of an AC induction motor
Question: If you spin the stator of an AC induction motor, then would the rotation of the rotor increase? Or would it have a different effect, such as moving with a higher force? Answer: A standard induction motor has to turn at (very nearly) the synchronous speed determined by the supply current frequency. If you attempt to apply external torque to drive it faster it will act as a brake resisting that torque. If you apply sufficient torque to force the rotor out of phase with the driving current it would effectively stall (the "braking force" dropping way off and/or becoming erratic).
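For reference, the "speed determined by the current frequency" is the synchronous speed $n_s = 120 f / P$; a small illustrative helper (the frequency and pole counts below are just example values):

```python
def synchronous_speed_rpm(freq_hz, poles):
    """Synchronous speed n_s = 120 f / P in rpm; an induction motor runs
    slightly below this when motoring (slip) and acts as a generator/brake
    if driven above it."""
    return 120.0 * freq_hz / poles
```

So a 4-pole motor on 60 Hz mains locks to about 1800 rpm, and driving it mechanically faster than that pushes it into generating/braking rather than simply spinning it up.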
{ "domain": "engineering.stackexchange", "id": 1509, "tags": "electrical-engineering, motors, ac" }
Computing logits for a vector and for all vectors in a set
Question: I've had to write two different functions (shown below), but I want to combine the two functions into one. Is there a way to do this? softmax_a_set() takes a list of numpy arrays, applies softmax() to each individual numpy array, and then returns a list of processed numpy arrays. def softmax(a_vector): """Compute a logit for a vector.""" denom = sum(numpy.exp(a_vector)) logit = numpy.exp(a_vector)/denom return logit def softmax_a_set(a_set): """computes logits for all vectors in a set""" softmax_set = numpy.zeros(a_set.shape) for x in range(0, len(a_set)): softmax_set[x] = softmax(a_set[x]) return softmax_set Answer: Why do you want to combine them into one function? It's possible, but I don't think it would be an improvement. They're two fairly distinct functions, and cramming the code for both into a single function would make the code less readable. If you really need a single function that handles both, you could dispatch on the shape of the input, something like: def softmax_dispatch(a): if a.ndim == 1: return softmax(a) else: return softmax_a_set(a) which gets what you want, but you keep the nice separation of code into distinct functions. One other minor thing: you can improve the for loop in softmax_a_set with enumerate: for idx, a_vector in enumerate(a_set): softmax_set[idx] = softmax(a_vector)
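Another way to get a single function, if the "set" is a 2-D NumPy array as in softmax_a_set, is to vectorize along the last axis. This is only a sketch (the max-subtraction is an extra numerical-stability trick, not part of the original code):

```python
import numpy as np

def softmax_vectorized(a):
    """Softmax along the last axis: handles one vector (1-D) or a stack
    of vectors (2-D) in a single call, with no Python-level loop."""
    a = np.asarray(a, dtype=float)
    shifted = a - a.max(axis=-1, keepdims=True)   # shift so exp never overflows
    e = np.exp(shifted)
    return e / e.sum(axis=-1, keepdims=True)
```

Each row of the result sums to 1, for both the 1-D and the 2-D case.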
{ "domain": "codereview.stackexchange", "id": 19137, "tags": "python, numpy" }
How can internal energy be expressed as a function of any two of $p, v, T$?
Question: In the book of Irey, Thermodynamics, the author states (while talking about single-phase substances): For a simple compressible media, we may choose as our measurable independent variables any two of $p, v, T$. The traditional choice for internal energy is temperature and specific volume, $u = u(T,v)$. However, this statement is presented as if it just falls from the sky; the author does not provide any argument for why $u$ can be expressed as a function of any two of those variables, nor does he give any argument about the relationship of one of those variables to the other two. Question: I'm looking for an explanation of the concerns that I raised above. Edit: As @SolarMike pointed out, the author explicitly considers gaseous substances in the above comment; however, later he also defines $$Z = \frac{ pv}{RT } = Z (p, T) ,$$ i.e. knowing $p,T$ allows you to calculate $Z$, and then you can find $v$, but he still does not give any argument why $Z = Z (p,T)$. As far as I can see, Charles's law and the other two laws accompanying it are for ideal gases, but we are not working with ideal gases yet. Answer: The component that you're missing is that the various state variables of a system are related by the equation of state for the system, which reduces the number of degrees of freedom by one. For ideal gases, this is the ideal gas law; for real gases, something like the van der Waals equation serves as an equation of state. For solids and liquids, there are various forms of the equation of state that vary in the kind of behavior they're meant to model best; for a short sampling of them, see here: https://en.wikipedia.org/wiki/Equation_of_state#Equations_of_state_for_solids_and_liquids.
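As a concrete illustration of how an equation of state removes one degree of freedom: for an ideal gas ($Z = 1$), fixing $p$ and $T$ pins down $v$ completely. A minimal sketch (SI units; the test values are just the standard molar-volume check at 0 °C and 1 atm):

```python
R = 8.314  # universal gas constant, J/(mol*K)

def molar_volume_ideal(p, T):
    """v from the ideal-gas equation of state pv = RT (i.e. Z = 1).
    Fixing any two of (p, v, T) determines the third."""
    return R * T / p
```

The same logic carries over to real gases: once $Z(p, T)$ is known, $v = Z R T / p$, so only two of the three variables are ever independent.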
{ "domain": "physics.stackexchange", "id": 58873, "tags": "thermodynamics, energy, pressure, temperature, volume" }