how does ros schedule nodes
Question: how does ros schedule nodes? is this simply using the linux scheduler? Originally posted by zaddan on ROS Answers with karma: 21 on 2019-03-25 Post score: 1 Answer: Yes, ROS 1 uses the linux scheduler. By default, roscpp has a background thread to handle socket activity, and processes callbacks serially on the main thread when you call ros::spin() or ros::spinOnce(). You can change how many threads are used by using an async spinner or multithreaded spinner. You are free to use the linux tools for scheduling to change how linux schedules your processes or threads. (I am not sure about ROS 2 but I think it is the same). Originally posted by ahendrix with karma: 47576 on 2019-03-25 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by zaddan on 2019-03-25: thanks for the answer. So does ros actually use the linux scheduler, or does it simply stay out of scheduling (and let linux take care of the scheduling of nodes, since nodes are processes anyway and hence the unit of scheduling that linux deals with)? Comment by ahendrix on 2019-03-25: ROS stays out of scheduling completely. Comment by zaddan on 2019-03-25: thanks for the clarification.
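As a sketch of "using the linux tools for scheduling" mentioned in the answer, here is what the ordinary kernel-level knobs look like from the Python standard library (Linux-only calls; the real-time example is commented out because it needs elevated privileges):

```python
import os

# ROS leaves scheduling entirely to the kernel, so the usual POSIX
# scheduling controls apply to any node's process.
policy = os.sched_getscheduler(0)   # 0 = this process; typically SCHED_OTHER
niceness = os.nice(0)               # passing 0 just reads the current niceness

# De-prioritise a best-effort node; raising niceness needs no privileges.
os.nice(1)

# Promoting a node to a real-time policy is also possible, but requires
# root or CAP_SYS_NICE, so it is left commented out here:
# os.sched_setscheduler(0, os.SCHED_FIFO, os.sched_param(50))
```

The same effect can be had externally with tools like nice, renice, and chrt, without touching the node's code at all.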
{ "domain": "robotics.stackexchange", "id": 32751, "tags": "ros, linux, ros-kinetic" }
What is the real interpretation of Planck's constant and what are its origins?
Question: In the physics texts I have read and from other online information, I gather that Planck's constant is the quantum of action, or that it is a constant specifying the ratio of the energy of a particle to its frequency. However, I still don't understand exactly what it is. From other things I have read, I understand that Planck did a "fit" of data concerning others' experiments and came up with this value; exactly what data did he fit to arrive at this really small value? Or maybe he did it some other way? Perhaps an answer concerning its origins will help me understand my first question better? Answer: In point particle classical mechanics, the action $S$ is the time integral of the Lagrangian $L$ $$S=\int Ldt$$ You can check that its dimensions are $[ML^2T^{-2}][T]=[ML^2T^{-1}]$, that is, energy times time. The constant ratio is due to the energy $E$ and frequency $\nu$ relation for photons: $$E=h\nu \Rightarrow h=\frac{E}{\nu}$$ The "fit" that you are talking about comes from the blackbody radiation spectrum. If we use as variables temperature $T$ and frequency $\nu$, in classical physics we have two laws: High frequency law: Wien's law $$I(\nu,T)=\frac{2h\nu^3}{c^2}e^{-\frac{h\nu}{kT}}$$ Low frequency law: Rayleigh-Jeans law $$I(\nu,T)=\frac{2kT\nu^2}{c^2} $$ There is no intermediate frequency law. Planck assumed that radiative energy is quantized via $E=h\nu$ and interpolated, fitting the energy to an expression of the type $$I(\nu,T)=F(\nu,T)e^{g(\nu,T)}$$ that should satisfy both limits ($h\nu \ll kT$ and $h\nu \gg kT$). Finally he obtained $$I(\nu,T)=\frac{2h\nu^3}{c^2}\frac{1}{e^{\frac{h\nu}{kT}}-1} $$ However, there is a much nicer and more physical derivation of Planck's law due to Einstein, which you can find in Walter Greiner's Quantum Mechanics: An Introduction, chapter 2.
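The two limits can be checked numerically in a few lines (a quick sketch with rounded constants, not part of the original answer):

```python
import math

h = 6.626e-34   # Planck constant, J*s
k = 1.381e-23   # Boltzmann constant, J/K
c = 2.998e8     # speed of light, m/s

def planck(nu, T):
    # Planck's law; expm1 keeps the low-frequency limit numerically stable
    return (2 * h * nu**3 / c**2) / math.expm1(h * nu / (k * T))

def wien(nu, T):
    # high-frequency (h*nu >> k*T) approximation
    return (2 * h * nu**3 / c**2) * math.exp(-h * nu / (k * T))

def rayleigh_jeans(nu, T):
    # low-frequency (h*nu << k*T) approximation
    return 2 * k * T * nu**2 / c**2
```

Evaluating at frequencies where $h\nu/kT$ is very small or very large shows Planck's formula reducing to Rayleigh-Jeans and Wien respectively.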
{ "domain": "physics.stackexchange", "id": 18461, "tags": "quantum-mechanics, photons, units, physical-constants, blackbody" }
Relation between the "Point-Cover-Interval" problem and the "Interval Scheduling" problem
Question: Point-Cover-Interval Problem: Given a set $\mathcal{I}$ of $n$ intervals $[s_1, f_1], \ldots, [s_n, f_n]$ along a real line, find a minimum number of points $P$ such that each interval contains some point, that is $\forall I \in \mathcal{I}: \exists p \in P, p \in I$. Interval Scheduling Problem: Given a set $\mathcal{I}$ of $n$ intervals $[s_1, f_1], \ldots, [s_n, f_n]$ along a real line, find a maximum number of intervals such that no two of them overlap. Interestingly, the two problems above have exactly the same greedy algorithm, illustrated by the following figure (from [1], for the interval scheduling problem; see the "Greedy Algorithm" part below). Since they share the same algorithm, I expect that they are the same (or, at least, closely related) problems, say, in the view of reduction. However, I failed to reduce them to each other. Question: Are these two problems the same? Or what is the relation between them? Can we reduce them to each other? Note that the first problem asks for a minimum solution while the second one asks for a maximum solution. Greedy Algorithm: The greedy algorithm for the "Interval Scheduling" problem is as follows: sort the intervals in increasing order of their finishing times, still denoted as $\mathcal{I}$. while ($\mathcal{I} \neq \emptyset$), choose the first $I \in \mathcal{I}$ and do: add $I$ into the result-set (darker lines in the figure); delete all intervals from $\mathcal{I}$ that conflict with $I$ (dashed lines in the figure). For the "Point-Cover-Interval" problem, we simply collect the finishing time point of each interval $I$ chosen in each iteration of the algorithm above. [1]: Algorithm Design. By Jon Kleinberg and Éva Tardos. (Section 4.1) Answer: Here is a proof that the two numbers are equal: For every interval $I$, consider the largest set of disjoint intervals with $I$ as the rightmost interval. This has a size, say $D_I$.
Now if $D_I = D_J$, then $I$ and $J$ have to intersect, so the set of intervals with equal $D_I$ values all pairwise intersect each other, and so have a common point (by the 1D version of Helly's theorem, or by just considering the intervals arranged in order of their left endpoints). Now if $d = \max D_I$ (the size of the largest set of non-intersecting intervals), then $D_I$ takes all the values $\{1, 2, \dots, d\}$, and thus $d$ points are enough to cover all the intervals, and we need at least $d$ to cover the largest set of disjoint intervals. For some related reading, look at Mirsky's theorem, which is a dual of Dilworth's theorem. Dilworth's theorem says that in a set of elements with a partial order defined on them, the size of the largest antichain (an antichain is a set of elements not comparable to each other) is the same as the size of the smallest partition into chains (each set of the partition is a totally ordered set). Mirsky's theorem says that the size of the largest chain is the same as the smallest number of antichains into which the set might be partitioned. In this case, define your order relation as follows: $$I = (i,j) \lt I' = (i', j') \iff j \lt i'$$ Basically, two disjoint intervals are comparable, with the left one being smaller. Because of Helly's theorem, the two numbers you are computing are exactly the numbers in Mirsky's theorem, which is the height of the partial order (Dilworth's theorem deals with the width).
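The shared greedy pass described in the question can be sketched in one loop (assuming closed intervals; the helper name is mine):

```python
def greedy_sweep(intervals):
    """One pass over intervals sorted by finish time.

    Returns (piercing_points, schedule): the same greedy decision yields
    both a minimum set of piercing points and a maximum set of pairwise
    disjoint intervals, which is why the two optimal values coincide.
    """
    points, schedule = [], []
    for s, f in sorted(intervals, key=lambda iv: iv[1]):
        if not points or points[-1] < s:   # interval not yet pierced
            points.append(f)               # its finish time pierces it
            schedule.append((s, f))        # and it joins the disjoint set
    return points, schedule
```

Every accepted interval contributes one point and one scheduled interval, and every skipped interval either is already pierced or overlaps a scheduled one, which mirrors the $D_I$ argument above.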
{ "domain": "cs.stackexchange", "id": 4772, "tags": "algorithms, reductions, greedy-algorithms" }
How to identify if a dependency is run time or build dependency?
Question: I am creating a package to subscribe to a topic from a node in another package. I understand that the only dependency required is the message package. Is this a run time or build dependency? Originally posted by skr_robo on ROS Answers with karma: 178 on 2016-07-13 Post score: 0 Answer: Are you using the messages when you build your code? If yes, they're a build dependency. Are you using the messages when you run your code? If yes, they're a run-time dependency. It is perfectly fine to have a package as both a build and run-time dependency. This question and answer have a more in-depth discussion: http://answers.ros.org/question/238900/should-cc-packages-run_depend-on-_msgs-packages/ Originally posted by ahendrix with karma: 47576 on 2016-07-13 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by spmaniato on 2016-07-13: Also worth mentioning REP 140, "Package Manifest Format Two Specification". It has some nice explanations and of course introduces new tags. Comment by skr_robo on 2016-07-14: Thank you, both of you.
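Concretely, for a hypothetical package depending on a message package my_msgs, a format-one package.xml would declare both tags (my_msgs is a placeholder name; under REP 140's format two, a single depend tag covers both roles):

```xml
<package>
  <!-- ... name, version, description, maintainer, license ... -->
  <build_depend>my_msgs</build_depend>  <!-- headers needed at compile time -->
  <run_depend>my_msgs</run_depend>      <!-- message definitions needed at runtime -->
</package>
```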
{ "domain": "robotics.stackexchange", "id": 25235, "tags": "ros, build, message" }
install urdfdom from source
Question: I've installed ROS from source for Hydro as the documentation states. I source that ROS workspace and then create a new workspace for MoveIt development that I overlay on top. The issue is that it cannot find urdfdom: CMake Error at /home/dave/ros/ws_ros_catkin/install_isolated/share/catkin/cmake/catkinConfig.cmake:72 (find_package): Could not find a configuration file for package urdfdom. Set urdfdom_DIR to the directory containing a CMake configuration file for urdfdom. The file will have one of the following names: urdfdomConfig.cmake urdfdom-config.cmake Call Stack (most recent call first): moveit_core/CMakeLists.txt:10 (find_package) CMake Error at /home/dave/ros/ws_ros_catkin/install_isolated/share/catkin/cmake/catkin_package.cmake:156 (message): catkin_package() DEPENDS on 'urdfdom' which must be find_package()-ed before. If it is a catkin package it can be declared as CATKIN_DEPENDS instead without find_package()-ing it. Call Stack (most recent call first): /home/dave/ros/ws_ros_catkin/install_isolated/share/catkin/cmake/catkin_package.cmake:98 (_catkin_package) moveit_core/CMakeLists.txt:63 (catkin_package) -- Configuring incomplete, errors occurred! Invoking "cmake" failed Digging around, I found that in my ROS source installation workspace it was missing a .cmake file and this entire folder: /home/dave/ros/ws_ros_catkin/install_isolated/share/urdfdom I don't know how it was able to build the rest of the packages without this. My solution was to manually build urdfdom: cd /home/dave/ros/ws_ros_catkin/src/urdfdom mkdir build && cd build cmake ../ make sudo make install Then I symlinked the following folder: ln -s /usr/local/share/urdfdom /home/dave/ros/ws_ros_catkin/src/urdfdom I'm shocked that it worked, but I feel like this is a bad solution. What did I do wrong and how do I fix it in the future? Thanks! 
Edit Here's the output of a catkin_make_isolated --install build: https://gist.github.com/davetcoleman/8987615 My ROS_PACKAGE_PATH is the following: ws_jsk2/src ws_moveit/src ws_ros_catkin/install_isolated/share ws_ros_catkin/install_isolated/stacks Did you install your hydro workspace before overlaying moveit? Yes, I ran setup.bash of my ros source install before starting to build moveit. Edit 2 In /home/dave/ros/ws_ros_catkin/src/urdfdom, tree output: https://gist.github.com/davetcoleman/8994392 Note that I made the build folder myself, as described above, after being unable to fix the issue myself Edit 3 My checkout of urdfdom is directly from the Github repo: git clone https://github.com/ros/urdfdom It appears there is no package.xml in that repo, I guess it is added as some patch? This is undocumented and seems like a messy way of doing things. What is the recommended way of making changes to urdfdom with ROS and committing it to Github? I now see where the package.xml comes from: https://github.com/ros-gbp/urdfdom-release/blob/master/hydro/package.xml Originally posted by Dave Coleman on ROS Answers with karma: 1396 on 2014-02-12 Post score: 1 Original comments Comment by Dirk Thomas on 2014-02-13: It works fine for me so it is likely not a problem with urdfdom itself. Can you please more information from your build, e.g. the CMake configure as well as the install output of your ROS hydro workspace (probably best via gist or something similar)? The fact that your first build did not result in having the share/urdfdom folder indicates that something went wrong. Your way of trying to patch the problem after the fact was incomplete and therefore it failed to find urdfdom later. Comment by Dave Coleman on 2014-02-13: I'm not sure how to get my CMake configure information, can you explain? I've added some more info, above. Thanks! Answer: Your workspace does not contain urdfdom. 
Or to be more specific: your urdfdom folder does not have a package.xml file, therefore it is not a catkin package. The result is that the list of packages in topological order does not contain it. Where do you get the folder content from? You might want to clone the latest released version from the GBP repository. You can use rosinstall_generator to get the exact repository information. Update: This is the recommended way for packages which do not want any catkin related files in the upstream repository. The package.xml file, as well as at least the install rule for it in the CMakeLists.txt, are then applied as a patch in the GBP repository. You should just make sure to checkout the latest released version from the GBP repository instead of the source repo. Originally posted by Dirk Thomas with karma: 16276 on 2014-02-13 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by Dave Coleman on 2014-02-13: I understand the issue now, but I think I disagree with it. I've had problems with the tar download via wstool being a different version than the github version, so if you want to commit back any changes you can overwrite unreleased changes, as happened to me just yesterday with the class_loader repo. Comment by Dirk Thomas on 2014-02-14: I usually avoid the tarballs (even if they are faster to download) since a git clone of the appropriate branch/tag of the gbp repo allows me to diff changes in the case I am applying some. Then I can use those diffs to commit them back to the upstream repo. If you have any ideas for how to make it more convenient (besides convincing the maintainer that it doesn't harm to have the package.xml file in the upstream repo) I am happy to hear them. Comment by Dave Coleman on 2014-02-17: I documented the problem here: https://github.com/ros/urdfdom/pull/29
{ "domain": "robotics.stackexchange", "id": 16959, "tags": "catkin, cmake" }
Genetic algorithm to find the minimum of a three-variable function
Question: I've implemented a simple genetic algorithm for continuous floating point parameter spaces and without recombination. However, the way I'm passing in parameters and acquiring results from multiprocessing feels wrong. Should I be using concurrent.futures? The code below uses the genetic algorithm to find the minimum of the equation x^2+y^2+z/10 over the parameter space -2 < x < 0, 0 < y < 2 and 10 < z < 11, but I'd like to keep the code easy to modify for various parameter spaces and evaluation functions. import numpy as np import multiprocessing from collections import OrderedDict import os import time def eval_iter(arg_lst, l_lst): for c_i, args in enumerate(arg_lst): yield c_i, args, l_lst def eval_func(c_i, args, l_lst): assert len(args) == 3 x = args[0] y = args[1] z = args[2] res = x**2 + y**2 + z/10 print(f"Eval {x}, {y}, {z}: {res}") l_lst[c_i] = res if __name__ == '__main__': generation_num = 10 child_num = 5 space = OrderedDict(( ('x', (-2., 0.)), ('y', (0., 2.)), ('z', (10., 11.)) )) params = OrderedDict([(nm, []) for nm in space.keys()]) for nm, v_range in space.items(): params[nm] = np.random.uniform(v_range[0], v_range[1], size=child_num) arg_list = [] for c_n in range(child_num): arg_list.append([val[c_n] for val in params.values()]) manager = multiprocessing.Manager() loss_lst = manager.list([np.inf for i in range(child_num)]) for r_n in range(generation_num): with multiprocessing.Pool(os.cpu_count()) as pool: pool.starmap(eval_func, eval_iter(arg_list, loss_lst)) fittest_idx = int(np.argmin(loss_lst)) base_args = arg_list[fittest_idx] print(f"Best {base_args}\n") # mutate offspring from fittest individual params = OrderedDict([(nm, []) for nm in space.keys()]) for s_i, (nm, v_range) in enumerate(space.items()): std = (v_range[1] - v_range[0]) / 2 noise = np.random.normal(0, std, size=child_num) new_param = base_args[s_i] + noise params[nm] = np.clip(new_param, v_range[0], v_range[1]) arg_list = [] for c_n in range(child_num): 
arg_list.append([val[c_n] for val in params.values()]) loss_lst = manager.list([np.inf for i in range(child_num)]) Answer: Better organisation Your program in its current state is one huge chunk of code with all the logic inside it. You should consider splitting it into separate smaller functions. The limits on \$ x, y, z \$ are preset. Consider putting them as a GLOBAL_CONSTANT. The ideal import order (according to PEP8) is: standard library imports, related third party imports, local application/library specific imports. os/collections etc. are standard library, and should be imported first, followed by numpy. Pythonify your code x = args[0] y = args[1] z = args[2] can be expressed as: x, y, z = args similarly manager.list([np.inf for i in range(child_num)]) can become manager.list([np.inf] * child_num) I am not sure about how numpy.argmin works, but I think using None instead of np.inf could be better in the sense that you might extend your program to also find local/global maxima along with minima points.
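On the "Should I be using concurrent.futures?" part of the question: yes, an executor's map returns results directly, which removes the manager list and the index bookkeeping entirely. A minimal sketch of the same search (function and variable names are mine; ThreadPoolExecutor is used so the example is self-contained, but for a CPU-bound evaluation function you would swap in ProcessPoolExecutor):

```python
from concurrent.futures import ThreadPoolExecutor

import numpy as np

# parameter space as a module-level constant, as suggested above
SPACE = {'x': (-2.0, 0.0), 'y': (0.0, 2.0), 'z': (10.0, 11.0)}

def loss(args):
    x, y, z = args
    return x**2 + y**2 + z / 10

def evolve(generations=10, children=5, seed=0):
    rng = np.random.default_rng(seed)
    lows = np.array([lo for lo, hi in SPACE.values()])
    highs = np.array([hi for lo, hi in SPACE.values()])
    pop = rng.uniform(lows, highs, size=(children, len(SPACE)))
    with ThreadPoolExecutor() as pool:
        for _ in range(generations):
            # map returns losses in order -- no shared list needed
            losses = list(pool.map(loss, pop))
            best = pop[int(np.argmin(losses))]
            # mutate offspring around the fittest individual
            noise = rng.normal(0.0, (highs - lows) / 2, size=pop.shape)
            pop = np.clip(best + noise, lows, highs)
    return best, min(losses)
```

Because the evaluation results come back as return values, the design no longer needs multiprocessing.Manager at all.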
{ "domain": "codereview.stackexchange", "id": 29665, "tags": "python, python-3.x, numpy, multiprocessing, genetic-algorithm" }
Squeezing decorators into functional extensions
Question: When you try to use several decorators it can get ugly pretty quickly and you'll end up with: new RelativeFileProvider( new SystemVariableFileProvider( new PhysicalFileProvider() ), "%TEMP%" ); or, if you prefer one-liners, then like this: new RelativeFileProvider(new SystemVariableFileProvider(new PhysicalFileProvider()), "%TEMP%"); I thought I'd turn them into extensions and make them play along more nicely in a functional way, so I created a couple of helper methods that allow me to do this: new PhysicalFileProvider() .DecorateWith(SystemVariableFileProvider.Create()) .DecorateWith(RelativeFileProvider.Create("%TEMP%")); Don't be confused about the empty classes. To be able to experiment without being distracted by the implementations, I extracted the interface and the classes from my question about Multiple file access abstractions that I'm going to use this pattern for. They all have implementations that have been already reviewed there. This question is about the additional decorator helper APIs and they are implementation-neutral.
Here are the types I used: Version 1 - with an extension interface IFileProvider { } class PhysicalFileProvider : IFileProvider { public static PhysicalFileProvider Create() { return new PhysicalFileProvider(); } } class RelativeFileProvider : IFileProvider { public RelativeFileProvider(IFileProvider fileProvider, string basePath) { } public static Func<IFileProvider, RelativeFileProvider> Create(string basePath) { return decorable => new RelativeFileProvider(decorable, basePath); } } class SystemVariableFileProvider : IFileProvider { public SystemVariableFileProvider(IFileProvider fileProvider) { } public static Func<IFileProvider, SystemVariableFileProvider> Create() { return decorable => new SystemVariableFileProvider(decorable); } } static class FileProviderExtensions { public static IFileProvider DecorateWith(this IFileProvider decorable, Func<IFileProvider, IFileProvider> createDecorator) { return createDecorator(decorable); } } Version 2 - with an interface This API forces each type to implement the DecorateWith method instead of relying on an extension.
interface IDecorable<T> { T DecorateWith(Func<T, T> createDecorator); } interface IFileProvider : IDecorable<IFileProvider> { } class PhysicalFileProvider : IFileProvider, IDecorable<IFileProvider> { public static PhysicalFileProvider Create() { return new PhysicalFileProvider(); } public IFileProvider DecorateWith(Func<IFileProvider, IFileProvider> createDecorator) { return createDecorator(this); } } class RelativeFileProvider : IFileProvider, IDecorable<IFileProvider> { public RelativeFileProvider(IFileProvider fileProvider, string basePath) { } public static Func<IFileProvider, RelativeFileProvider> Create(string basePath) { return decorable => new RelativeFileProvider(decorable, basePath); } public IFileProvider DecorateWith(Func<IFileProvider, IFileProvider> createDecorator) { return createDecorator(this); } } class SystemVariableFileProvider : IFileProvider, IDecorable<IFileProvider> { public SystemVariableFileProvider(IFileProvider fileProvider) { } public static Func<IFileProvider, SystemVariableFileProvider> Create() { return decorable => new SystemVariableFileProvider(decorable); } public IFileProvider DecorateWith(Func<IFileProvider, IFileProvider> createDecorator) { return createDecorator(this); } } And these are the whys about the factories: I chose static factory methods over free arguments to keep parameter names and their order. I chose them also to avoid new. What do you think of this system? Do you prefer one version over the other? Can this be made even more convenient? Or would you say this is madness? Answer: IMO the first version is easier to maintain, and I don't see from the shown code what you gain by the second. In the second version: Each FileProvider class doesn't need to explicitly inherit IDecorable<IFileProvider> because that is inherited via IFileProvider.
I may be overlooking something, or you may find it too obvious to show, but I don't like that you implement the interface IFileProvider in all the provider classes, because they are then forced to implement all (I can see there aren't any (yet)) possible common members, which could be taken care of in a base class. Therefore I would create an abstract base class for the providers like: interface IDecorable<T> { T DecorateWith(Func<T, T> createDecorator); } interface IFileProvider : IDecorable<IFileProvider> { } abstract class FileProvider : IFileProvider { public virtual IFileProvider DecorateWith(Func<IFileProvider, IFileProvider> createDecorator) { return createDecorator(this); } } class PhysicalFileProvider : FileProvider { public static PhysicalFileProvider Create() { return new PhysicalFileProvider(); } } class RelativeFileProvider : FileProvider { public RelativeFileProvider(IFileProvider fileProvider, string basePath) { } public static Func<IFileProvider, RelativeFileProvider> Create(string basePath) { return decorable => new RelativeFileProvider(decorable, basePath); } } ... In this way you gain both the benefits of normal OOP inheritance/polymorphic behavior and the decorator pattern, and at the same time are still free to chain .DecorateWith(...) with other implementers of IFileProvider. Madness is a strong word. The shown chain looks a little nicer than a chain of new xx() statements, and you explicitly "explain" the pattern and behavior. And the first version doesn't need much attention when first written, so no harm done at least.
{ "domain": "codereview.stackexchange", "id": 32803, "tags": "c#, design-patterns, comparative-review, extension-methods" }
Pseudo-random sequence prediction
Question: Disclaimer: I am a biologist, so sorry for a (perhaps) basic question phrased in such crude terms. I am not sure if I should ask this question here or on DS/SC, but CS is the largest of the three, so here goes. (After I posted, it occurred to me that Cross-Validated might be the better place for it, but alas). Imagine there is an agent who makes binary decisions, and an environment which, for each of the agent's decisions ("trials"), either rewards the agent or not. The criteria for rewarding the agent's decisions are not simple. In general the criteria are random, but they have limitations; for example, the environment never rewards the same decision more than 3 times and never alternates the rewarded decision more than 4 times in a row. A sequence of criteria might then look something like this: 0 0 0 1 0 1 0 0 1 1 1 0 1 1 0 0 1 0 ... but never 0 0 0 1 0 1 0 0 1 1 1 1 1 1 0 0 1 0 ... because the reward criterion cannot repeat more than 3 times. Under these conditions it is quite easy to formulate the strategy an ideal observer should undertake to maximize the reward; something along the lines of: decide randomly by default; if you detect that the criterion repeated 3 times, decide opposite to the last criterion; if you detect that the criteria alternated 4 times, decide according to the last criterion. Now, the difficult part. Now the criterion on each trial depends not only on the history of previous criteria, but also on the history of the agent's decisions, e.g. if the agent alternates on more than 8 out of the last 10 trials, reward the same decision as the agent made last time (as if to discourage the agent from alternating), and if the agent repeated the same decision on more than 8 of the last 10 trials, i.e. it is biased, make the criterion opposite of the bias. The priority of the history of criteria over the history of decisions is specified in advance, so there is never ambiguity. The sequences of decisions (d) and criteria (c) might now look like this d: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 0 0 0 1 1 0 1 0 1 0 ...
c: 1 0 1 0 0 0 1 1 0 0 1 1 1 1 1 1 1 1 0 1 0 0 1 1 0 0 0 1 0 ... ↑ here the criteria counteract the bias in decisions. I do not see any simple way of inventing a maximizing strategy for the agent. But I am sure there must be one, and some kind of clever machine learning algorithm should be able to identify it. My question is not so much about how to solve this problem (although I would be happy if you suggest a solution), but more about what these types of problems are called. Where can I read about them? Is there an abstract solution, or can only simulation help? In general, how can I, as a biologist, approach this type of problem? Answer: You can approach this problem using Reinforcement Learning. A classic book for this is Sutton and Barto; the draft of the second edition is available for free: https://webdocs.cs.ualberta.ca/~sutton/book/the-book.html In order to make your problem Markovian, define each state as a vector of the last ten decisions. Your actions will be 1 or 0.
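A minimal sketch of that Markovian setup: encode the last ten decisions as an index into a Q-table and apply a tabular Q-learning update (all names and hyperparameters here are illustrative assumptions, not from the answer):

```python
import random

def encode(last_ten):
    """Map a length-10 binary decision history to a table index in 0..1023."""
    assert len(last_ten) == 10
    idx = 0
    for bit in last_ten:
        idx = 2 * idx + bit
    return idx

Q = [[0.0, 0.0] for _ in range(1024)]   # Q[state][action], actions are 0 and 1
alpha, gamma, eps = 0.1, 0.9, 0.1       # learning rate, discount, exploration

def choose(state):
    """Epsilon-greedy action selection."""
    if random.random() < eps:
        return random.randint(0, 1)
    return 0 if Q[state][0] >= Q[state][1] else 1

def update(state, action, reward, next_state):
    """Standard Q-learning backup toward reward + discounted best next value."""
    best_next = max(Q[next_state])
    Q[state][action] += alpha * (reward + gamma * best_next - Q[state][action])
```

Running this loop against a simulator of the environment's rules would let the agent discover the reward structure (including the anti-alternation and anti-bias criteria) without it ever being specified explicitly.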
{ "domain": "cs.stackexchange", "id": 6158, "tags": "machine-learning, probability-theory" }
Can a current carrying loop experience force due to its own magnetic field?
Question: In my opinion, the wire must expand due to the magnetic force acting radially outwards on two diametrically opposite ends of the loop, as a result of the equation $$d\mathbf{F} = I\,(d\mathbf{l} \times \mathbf{B})$$ where $\mathbf{B}$ is the magnetic field due to the loop, which is perpendicular to the plane of the loop. Answer: Yes, the magnetic force on the wire elements of the loop acts to expand it. But this is usually a very weak force, so no expansion is observed. Similarly for a current-carrying solenoid. If the magnetic field is very strong (like in P. Kapitza's super quick experiments, around 100 Tesla), these expansion forces are strong, and he had to bandage the wires very well to a solid body that is well attached to the ground (during increases/decreases of the magnetic field, the solenoid also wants to rotate).
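To put numbers on "usually very weak": the outward force per unit area on the conductor is of the order of the magnetic pressure $B^2/2\mu_0$ (a rough order-of-magnitude estimate, not an exact hoop-force calculation for a loop):

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability, T*m/A

def magnetic_pressure(B):
    """Magnetic pressure B^2 / (2*mu0) in pascals, for field strength B in tesla."""
    return B**2 / (2 * MU0)

# Kapitza-scale pulsed field (~100 T): pressure of order gigapascals,
# which is why the windings must be mechanically restrained.
p_kapitza = magnetic_pressure(100.0)

# A modest lab field (~0.01 T): tens of pascals, far too small to
# visibly deform a wire loop.
p_lab = magnetic_pressure(0.01)
```

The ten-thousand-fold increase in field gives a hundred-million-fold increase in pressure, matching the answer's contrast between everyday loops and pulsed-field experiments.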
{ "domain": "physics.stackexchange", "id": 94716, "tags": "electromagnetism, forces, magnetic-fields" }
How can I make this primality test algorithm faster?
Question: Is it possible to make this algorithm faster? The algorithm is a primality test for Wagstaff numbers. import java.math.BigDecimal; import java.math.BigInteger; import java.math.MathContext; import java.math.RoundingMode; public class WPT { public static void main(String[] args) { int n; n = Integer.parseInt(args[0]); BigInteger m; m = BigInteger.valueOf(n); BigDecimal a = new BigDecimal("1.5"); BigDecimal b = new BigDecimal("5.5"); BigDecimal c = new BigDecimal("13.5"); BigDecimal d = new BigDecimal("16.5"); BigDecimal s; BigDecimal r; if (m.mod(BigInteger.valueOf(4)).equals(BigInteger.ONE)) { s = a; r = a; } else { if (m.mod(BigInteger.valueOf(6)).equals(BigInteger.ONE)) { s = b; r = b; } else { if (m.mod(BigInteger.valueOf(12)).equals(BigInteger.valueOf(11)) && (m.mod(BigInteger.valueOf(10)).equals(BigInteger.valueOf(1))) || m.mod(BigInteger.valueOf(10)).equals(BigInteger.valueOf(9))) { s = c; r = c; } else { s = d; r = d; } } } BigDecimal W; W = BigDecimal.valueOf(2).pow(n).add(BigDecimal.ONE).divide(BigDecimal.valueOf(3)); int k = (n-1)/2; for (int i = 1; i <= k; i ++) { s = s.pow(4).multiply(BigDecimal.valueOf(8)).subtract(s.pow(2).multiply(BigDecimal.valueOf(8))).add(BigDecimal.ONE).remainder(W).setScale(1, BigDecimal.ROUND_UP); } if (s.equals(r)) { System.out.println("prime"); } else { System.out.println("composite"); } } } The corresponding very fast Mathematica code: p = 269987; W = (2^(p) + 1)/3; If[Mod[p, 4] == 1, a = 3/2, If[Mod[p, 6] == 1, a = 11/2, If[Mod[p, 10] == 3 || Mod[p, 10] == 7, a = 33/2, a = 27/2]]]; For[i = 1; s = a, i <= (p - 1)/2, i++, s = Mod[ChebyshevT[4, s], W]]; If[s == a, Print["prime"], Print["composite"]]; Answer: Avoid dealing with BigDecimals at all costs - they are slow. I changed the formula to work with "twice the number", so my s is actually 2*s. Further keep the numbers in the loop small by "modding" the intermediate results, too. Finally I tried to simplify the syntax and the initial conditions for s a little bit.
import java.math.BigInteger; public class WPT { private final static BigInteger _1 = BigInteger.ONE; private final static BigInteger _2 = _(2); private final static BigInteger _4 = _(4); public static BigInteger _(long n) { return BigInteger.valueOf(n); } public static void main(String[] args) { int n = Integer.parseInt(args[0]); BigInteger m = _(n); BigInteger s = (m.mod(_(4)).equals(_1)) ? _(3) : (m.mod(_(6)).equals(_1)) ? _(11) : (m.mod(_(12)).equals(_(11)) && (m.mod(_(10)).equals(_1)) || m.mod(_(10)).equals(_(9))) ? _(27) : _(33); BigInteger r = s; BigInteger W = _2.pow(n).add(_1).divide(_(3)); int k = (n - 1) / 2; for (int i = 0; i < k; i++) { BigInteger s2 = s.modPow(_2, W); BigInteger s4 = s2.modPow(_2, W); s = s4.subtract(s2.multiply(_4)).add(_2).mod(W); } System.out.println(s.equals(r) ? "prime" : "composite"); } }
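For comparison, a Python transliteration of the rewritten integer version (same doubled seed and the same recurrence, which is the Chebyshev step $2T_4(s/2)$, i.e. $s \mapsto s^4 - 4s^2 + 2 \pmod W$; note this is the conjectural test from the question, not a proven primality test):

```python
def wagstaff_probable_prime(p):
    """Test W = (2**p + 1)//3 using the doubled-seed recurrence from the
    BigInteger rewrite above. The seed selection mirrors the Java code,
    including its operator grouping."""
    W = (2**p + 1) // 3
    if p % 4 == 1:
        s = 3
    elif p % 6 == 1:
        s = 11
    elif (p % 12 == 11 and p % 10 == 1) or p % 10 == 9:
        s = 27
    else:
        s = 33
    r = s
    for _ in range((p - 1) // 2):
        s2 = pow(s, 2, W)          # built-in modular exponentiation
        s4 = pow(s2, 2, W)
        s = (s4 - 4 * s2 + 2) % W  # doubled Chebyshev T_4 step
    return s == r
```

Python's three-argument pow gives the same "mod the intermediate results" benefit the answer describes, without any BigDecimal-style overhead.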
{ "domain": "codereview.stackexchange", "id": 2794, "tags": "java, optimization" }
When a star becomes a black hole, does its gravitational field become stronger?
Question: I've seen in a documentary that when a star collapses and becomes a black hole, it starts to eat the planets around it. But it has the same mass, so how does its gravitational field strength increase? Answer: Actually, it doesn't have the same mass, it has significantly less mass than its precursor star. Something like 90% of the star is blown off in the supernova event (Type II) that creates the black hole. The Schwarzschild radius is the radius at which, if an object's mass were compressed to a sphere of that size, the escape velocity at the surface would be the speed of light $c$; this is given by $$ r_s=\frac{2Gm}{c^2} $$ For a 3-solar mass black hole, this amounts to about 10 km. If we measure the gravitational acceleration from this point, $$ g_{BH}=\frac{Gm_{BH}}{r_s^2}\simeq10^{13}\,{\rm m/s^2} $$ and compare this to the acceleration due to the precursor 20 solar mass star with radius of $r_\star=5R_\odot\simeq3.5\times10^9$ m, we have $$ g_{M_\star}=\frac{Gm_\star}{r_\star^2}\simeq2\times10^2\,{\rm m/s^2} $$ Note that this is the acceleration due to gravity at the surface of the object, and not at some distance away. If we measure the gravitational acceleration of the smaller black hole at the distance of the original star's radius, you'll find it is a lot smaller than the star's surface gravity (by a factor of about 7, the ratio of the two masses).
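Plugging standard constants into these formulas confirms the comparison (a quick sketch; note that $5R_\odot \approx 3.5\times10^9$ m, which puts the stellar surface gravity nearer $2\times10^2\,\mathrm{m/s^2}$):

```python
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8         # speed of light, m/s
M_SUN = 1.989e30    # solar mass, kg
R_SUN = 6.957e8     # solar radius, m

def schwarzschild_radius(m):
    return 2 * G * m / c**2

def g_at(m, r):
    """Newtonian gravitational acceleration at distance r from mass m."""
    return G * m / r**2

r_s = schwarzschild_radius(3 * M_SUN)      # about 9 km
g_bh = g_at(3 * M_SUN, r_s)                # ~5e12 m/s^2 at the horizon scale
g_star = g_at(20 * M_SUN, 5 * R_SUN)       # ~2e2 m/s^2 at the old surface
g_far = g_at(3 * M_SUN, 5 * R_SUN)         # BH's pull felt at the old radius
```

At the old stellar radius the black hole's pull is weaker than the star's was, by exactly the mass ratio 20/3; the field only becomes extreme because you can now get ten orders of magnitude closer to the mass.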
{ "domain": "physics.stackexchange", "id": 28546, "tags": "gravity, black-holes, stars, stellar-evolution" }
What boundary condition should I use for the edge of a blind flange?
Question: When testing a pressurized pipe with off-standard flanges, often the materials available to fabricate a blind flange of the correct size are limited to what is around. I've made it a habit of referring to Roark's Formulas for Stress and Strain, and going to the table for large-deflection circular plates for these. I use the area inside the bolt circle. They have three boundary conditions available: Simply supported (neither fixed nor held); Fixed but not held (no edge tension); Fixed and held. Bolted connections are unusual - especially since I have full face flanges. I've designed to both simply supported and fixed but not held. The simply supported works, the fixed but not held does not. However, due to the flatness out to the OD of the face, it seems like the rotation of the edge should be 0, so it should be fixed but not held. I'd like to make these thinner, so it would be nice if someone could demonstrate an accurate boundary condition (or perhaps show via FEA that 80% of the fixed + 20% of the simply supported result would be conservative). I'd like to see better research into the mechanics of design for blind flanges (which is rarely covered in the context of the Boiler and Pressure Vessel Code, or elsewhere). The main question is: what boundary condition for the edge should I use? Answer: Given the clamping force of the bolts, I would think that Fixed and Held would be appropriate. One way to test this is to calculate the edge tension (I don't recall if Roark's has formulations for this) and compare this to the clamping force of the bolts (i.e., the static frictional force between the flanges due to the clamping will need to be greater than the edge tension). I'm not certain that this case really meets the criterion of "large deflections" for the blind flange. The ones I've seen used are pretty beefy in thickness compared to the span and have almost no deflection.
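As a rough first check of how much the boundary condition matters, the classical small-deflection thin-plate results (Timoshenko/Roark) for a uniformly loaded circular plate can be compared directly; these are only indicative for the large-deflection regime the question mentions, and all symbols here are mine:

```python
import math

# Small-deflection maximum bending stress for a uniformly loaded circular
# plate of radius a, thickness t, pressure q, Poisson's ratio nu:
#   clamped edge:     sigma_max = 3 q a^2 / (4 t^2)          (at the edge)
#   simply supported: sigma_max = 3 (3 + nu) q a^2 / (8 t^2) (at the centre)
# Solving each for t at an allowable stress sigma gives required thicknesses.

def t_clamped(q, a, sigma):
    return math.sqrt(3 * q * a**2 / (4 * sigma))

def t_simply_supported(q, a, sigma, nu=0.3):
    return math.sqrt(3 * (3 + nu) * q * a**2 / (8 * sigma))
```

For steel (nu = 0.3) the simply supported assumption demands a plate about sqrt((3 + nu)/2), roughly 28%, thicker than the clamped one, which quantifies why the choice of edge condition drives how thin the blind flange can be made.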
{ "domain": "engineering.stackexchange", "id": 313, "tags": "mechanical-engineering, piping, bolting" }
Is turquoise closer to blue or green?
Question: I am having a discussion with my coworkers. Does anyone know what the wavelength of the colour turquoise is, and whether it is closer to green or blue when comparing their wavelengths? Answer: In general it is not possible to tell, because some colours simply cannot be assigned a definite wavelength. Some colours, like green or blue, have essentially unique combinations of wavelengths that produce the (subjective) impression of green or blue. Some colours, like yellow, can be produced in multiple ways: light of 580 nm will look yellow but so will certain combinations of red and green. Certain colours, like pink or most earth-toned colours, can only be produced by combining multiple different wavelengths, so they cannot be assigned a unique wavelength of their own. In general, the most complete way to characterize the 'colour' of an object is not its colour but its reflectance spectrum: how much of each wavelength it reflects from a white light. For the turquoise mineral, this looks like the spectrum taken from Ultraviolet-visible, near infrared and mid infrared reflectance spectroscopy of turquoise. B. Reddy, R. Frost, M. Weier and W. Martens. J. Near Infrared Spec. 14 no. 1, p. 241 (2006). The spectrum depends on the specific sample. Reddy et al. also present the spectrum of a sample from Senegal, which shares some features but not others. (In particular, it has a lot more content in the region from 350 nm to 550 nm, which is precisely the blues and greens, compared to the hump at the red end of the spectrum.)
{ "domain": "physics.stackexchange", "id": 12387, "tags": "energy, visible-light" }
Refactoring Calculations through a series of textboxes
Question: Alright guys, I know this isn't the proper way of problem solving, But I need some guidance here. Since I am just starting in programming, the code is not really readable. The Winforms program (essentially a glorified Excel sheet) I created is made up from different textboxes (as shown in designer picture below). The textboxes are used for input of numbers. All the input is extracted, converted to int or double and run through the calculation. The textboxes on the right hand side (orange boxes) are used to show the calculated price to the users. The questions are: How do I make the code more readable by encapsulating and inheritance? What do I use to make a list of every variable and how do I make a standard calculation and loop every pair of textboxes through the same calculation? using System; using System.Collections.Generic; using System.ComponentModel; using System.Data; using System.Drawing; using System.Drawing.Printing; using System.Drawing.Imaging; using System.Linq; using System.Text; using System.Threading.Tasks; using System.Windows.Forms; partial class BudgyMobilePlan : Form { //declaring the variables used in the calculation int ContractDuration1; int ContractDuration2; int ContractDuration3; int ContractDuration4; int ContractDuration5; int ContractDuration6; double MonthlyFee1; double MonthlyFee2; double MonthlyFee3; double MonthlyFee4; double MonthlyFee5; double MonthlyFee6; double PhonePrice1; double PhonePrice2; double PhonePrice3; double PhonePrice4; double PhonePrice5; double PhonePrice6; double AdditionalCost1; double AdditionalCost2; double AdditionalCost3; double AdditionalCost4; double AdditionalCost5; double AdditionalCost6; private void CalculatedCostButton_Click(object sender, EventArgs e) { bool isComplete = true; foreach (Control control in this.Controls) { if (control is TextBox) { TextBox tb = control as TextBox; if (string.IsNullOrEmpty(tb.Text)) { isComplete = false; tb.ForeColor = Color.White; tb.Text = "0"; continue; } 
} } if (isComplete) { //Parsing the variables (getting number from text) ContractDuration1 = int.Parse(tbContractDuration1.Text); ContractDuration2 = int.Parse(tbContractDuration2.Text); ContractDuration3 = int.Parse(tbContractDuration3.Text); ContractDuration4 = int.Parse(tbContractDuration4.Text); ContractDuration5 = int.Parse(tbContractDuration5.Text); ContractDuration6 = int.Parse(tbContractDuration6.Text); MonthlyFee1 = double.Parse(tbMonthlyFee1.Text); MonthlyFee2 = double.Parse(tbMonthlyFee2.Text); MonthlyFee3 = double.Parse(tbMonthlyFee3.Text); MonthlyFee4 = double.Parse(tbMonthlyFee4.Text); MonthlyFee5 = double.Parse(tbMonthlyFee5.Text); MonthlyFee6 = double.Parse(tbMonthlyFee6.Text); MonthlyFee1 = double.Parse(tbMonthlyFee1.Text); MonthlyFee2 = double.Parse(tbMonthlyFee2.Text); MonthlyFee3 = double.Parse(tbMonthlyFee3.Text); MonthlyFee4 = double.Parse(tbMonthlyFee4.Text); MonthlyFee5 = double.Parse(tbMonthlyFee5.Text); MonthlyFee6 = double.Parse(tbMonthlyFee6.Text); PhonePrice1 = double.Parse(tbPhonePrice1.Text); PhonePrice2 = double.Parse(tbPhonePrice2.Text); PhonePrice3 = double.Parse(tbPhonePrice3.Text); PhonePrice4 = double.Parse(tbPhonePrice4.Text); PhonePrice5 = double.Parse(tbPhonePrice5.Text); PhonePrice6 = double.Parse(tbPhonePrice6.Text); AdditionalCost1 = double.Parse(tbAdditionalCost1.Text); AdditionalCost2 = double.Parse(tbAdditionalCost2.Text); AdditionalCost3 = double.Parse(tbAdditionalCost3.Text); AdditionalCost4 = double.Parse(tbAdditionalCost4.Text); AdditionalCost5 = double.Parse(tbAdditionalCost5.Text); AdditionalCost6 = double.Parse(tbAdditionalCost6.Text); } OnCalculateButtonClick(); } private void Reset_Click(object sender, EventArgs e) { BudgyMobilePlan NewForm = new BudgyMobilePlan(); NewForm.Show(); this.Dispose(false); } public BudgyMobilePlan() { InitializeComponent(); } // Calculations when Calculate Button is clicked public void OnCalculateButtonClick() { double CalculatedCost1; double CalculatedCost2; double CalculatedCost3; 
double CalculatedCost4; double CalculatedCost5; double CalculatedCost6; CalculatedCost1 = (((ContractDuration1 * 12) * MonthlyFee1) + PhonePrice1 + AdditionalCost1) / (ContractDuration1 * 12); tbCalculatedCost1.Text = CalculatedCost1.ToString("F2"); CalculatedCost2 = (((ContractDuration2 * 12) * MonthlyFee2) + PhonePrice2 + AdditionalCost2) / (ContractDuration2 * 12); tbCalculatedCost2.Text = CalculatedCost2.ToString("F2"); CalculatedCost3 = (((ContractDuration3 * 12) * MonthlyFee3) + PhonePrice3 + AdditionalCost3) / (ContractDuration3 * 12); tbCalculatedCost3.Text = CalculatedCost3.ToString("F2"); CalculatedCost4 = (((ContractDuration4 * 12) * MonthlyFee4) + PhonePrice4 + AdditionalCost4) / (ContractDuration4 * 12); tbCalculatedCost4.Text = CalculatedCost4.ToString("F2"); CalculatedCost5 = (((ContractDuration5 * 12) * MonthlyFee5) + PhonePrice5 + AdditionalCost5) / (ContractDuration5 * 12); tbCalculatedCost5.Text = CalculatedCost5.ToString("F2"); CalculatedCost6 = (((ContractDuration6 * 12) * MonthlyFee6) + PhonePrice6 + AdditionalCost6) / (ContractDuration6 * 12); tbCalculatedCost6.Text = CalculatedCost6.ToString("F2"); } } } Answer: Honestly, it looks like you want to use a domain object to model your providers and perform the pay calculations. Rather than using a bunch of text fields and drop-down controls, you can then data bind your domain object to a DataGridView. Domain Object You want an object which has properties for all your display values. 
Since data binding is involved, you need to ensure the object implements the INotifyPropertyChanged interface, like so: public sealed class Provider : INotifyPropertyChanged { public event PropertyChangedEventHandler PropertyChanged; public string Name { get { return _name; } set { _name = value; PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(nameof(Name))); } } public PlanType PlanType { get { return _planType; } set { _planType = value; PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(nameof(PlanType))); } } public double Duration { get { return _duration; } set { _duration = value; PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(nameof(Duration))); } } public double MonthlyFee { get { return _monthlyFee; } set { _monthlyFee = value; PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(nameof(MonthlyFee))); } } public double Price { get { return _price; } set { _price = value; PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(nameof(Price))); } } public double AdditionalCost { get { return _additionalCost; } set { _additionalCost = value; PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(nameof(AdditionalCost))); } } public double Pay { get { return _pay; } } public void CalculatePay() { var months = Duration * 12; _pay = (months * MonthlyFee + Price + AdditionalCost) / months; PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(nameof(Pay))); } private string _name; private PlanType _planType; private double _duration; private double _monthlyFee; private double _price; private double _additionalCost; private double _pay; } For the plan type, you can simply use an enumeration: public enum PlanType { CoolPlan, BadPlan, GoodPlan, } Without knowing your domain, I just threw in placeholders for illustration purposes. Data Binding If your list of providers is fixed, you can bind any IEnumerable<T> to the grid.
However, in the more likely case you want to allow adding new providers, I suggest instantiating and binding to a BindingList<T>. I would then add a BindingSource to your form which points to the Provider type, then set the DataSource property of the grid to the binding source, and tweak your columns to look how you like. In your code-behind, you will set the DataSource property on this BindingSource to your collection of providers. The PlanType field requires a little extra care. In this case, you want to set up the column as a DataGridViewComboBoxColumn. Once you do that, you want to supply the possible list of values for PlanType to the column so the drop-down works as expected. This can be done easily with Enum.GetValues: var availableTypes = Enum.GetValues(typeof(PlanType)); planTypeDataGridViewTextBoxColumn.DataSource = availableTypes; Pay Calculation The calculation step is now a simple matter of enumerating your providers and calling CalculatePay: private void calculateButton_Click(object sender, EventArgs e) { foreach (var provider in _providers) { provider.CalculatePay(); } } That's assuming, of course, that you even want to wait to calculate pay. Another alternative which might be more usable would be to calculate on-the-fly as users enter values. To do this, you add a call to CalculatePay in Provider's property setters: public double Duration { get { return _duration; } set { _duration = value; PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(nameof(Duration))); CalculatePay(); } } For this to work out, you have to tweak the CalculatePay method to handle zero values: public void CalculatePay() { if (Duration == 0) { _pay = 0; } else { var months = Duration * 12; _pay = (months * MonthlyFee + Price + AdditionalCost) / months; } PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(nameof(Pay))); } As a result, you could eliminate the Calculate button.
The end result is that your code-behind for the form looks something more like the following (minus the click handler if you decided to go with auto-updating Pay): public partial class Form1 : Form { public Form1() { InitializeComponent(); var availableTypes = Enum.GetValues(typeof(PlanType)); planTypeDataGridViewTextBoxColumn.DataSource = availableTypes; _providers = new BindingList<Provider>(); providerBinding.DataSource = _providers; } private void calculateButton_Click(object sender, EventArgs e) { foreach (var provider in _providers) { provider.CalculatePay(); } } private readonly BindingList<Provider> _providers; }
{ "domain": "codereview.stackexchange", "id": 21031, "tags": "c#, winforms" }
Are all possible programming languages a formal system?
Question: Based on the Wikipedia page for a formal system, will all programming languages be contained within the following rules? A finite set of symbols. (This seems obvious since the computer is a discrete machine with finite memory and therefore a finite number of ways to express a symbol.) A grammar. A set of axioms. A set of inference rules. Are all possible languages constrained by these rules? Is there a notable proof? EDIT: I've been somewhat convinced that my question may actually be: can programming languages be represented by something other than a formal system? Answer: Technically, yes, because you can make your formal system have a single axiom that says “the sequence of symbols is in the set $S$” where $S$ is the set of programs in the programming language. So this question isn't very meaningful. The notion of formal system is so general that it isn't terribly interesting in itself. The point of using formal systems is to break down the definition of a language into easily-manageable parts. Formal systems lend themselves well to compositional definitions, where the meaning of a program is defined in terms of the meaning of its parts. Note that your approach only defines whether a sequence of symbols is valid, but the definition of a programming language needs more than this: you also need to specify the meaning of each program. This can again be done by a formal system where the inference rules define a semantics for source programs.
{ "domain": "cs.stackexchange", "id": 12868, "tags": "formal-languages" }
Find the Nth number divisible by only 2,3,5, or 7
Question: I recently participated in a small friendly competition and this was one of the questions: A number whose only prime factors are 2, 3, 5 or 7 is called a humble number. The first 20 humble numbers are: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12, 14, 15, 16, 18, 20, 21, 24, 25, 27, ... Given n, find the nth humble number. Example n = 1: 1; n = 11: 12; n = 22: 30 [input] integer: positive integer n [output] integer: The nth humble number. There should be a O(N) solution! Here's my solution: def humbleNumber(n): curr = 0 i = 0 dp = {} while i < n: curr += 1 if ishumble(curr, dp): dp[curr] = True i += 1 else: dp[curr] = False return curr def ishumble(x,dp): if x == 1: return True acc = [2,3,5,7] for i in acc: if x % i == 0: if (x/i) in dp: return dp[x/i] return ishumble(x / i) #i don't believe this ever get's called return False I believe that this solution is \$O(N)\$. I am iterating up to \$n\$, and at each step, I am calling a constant-time helper function. The helper function ishumble(x, dp) will check if the current number x is a humble number. We are able to do this in constant time because we are guaranteed that if x % i == 0, then (x/i) is in dp! So I believe that the return ishumble(x / i) line is actually never called. Therefore, this solution is in \$O(N)\$. Is my analysis correct? Is there a better solution? Answer: No, not \$O(n)\$. Not even close. I'll get back to this later though. First, the code: It's a little difficult to tell what your loop logic is. curr is getting incremented each time, but i isn't... whereas typically we'd use i as a loop index. 
I'd propose using i as the loop index, and then keeping a humble number count named count (or num_humbles or something): count = 0 dp = {} for i in itertools.count(start=1): if ishumble(i, dp): dp[i] = True count += 1 if count == n: return i else: dp[i] = False You are also correct in this comment: #i don't believe this ever get's called That'll happen in the case where x is divisible by one of our primes, but we somehow haven't seen x/p yet. But we are iterating in order, so you're guaranteed that y in dp for all y<x. So you can simplify that block to: if x % i == 0: return dp[x/i] Though if you reread that statement again, you'll see that there's no need for dp to be a dictionary. You can just make it a list. Split things up Rather than having one function that does both generate the humble numbers and return the nth one, you can split them up. Have one function that generates the humble numbers: def humble_numbers(): dp = {} for i in itertools.count(start=1): if ishumble(i, dp): dp[i] = True yield i else: dp[i] = False And another function that returns the nth one. Actually, that part is a snap using the nth recipe: def nth(iterable, n, default=None): "Returns the nth item or a default value" return next(islice(iterable, n, None), default) So the solution becomes: def humbleNumber(n): return nth(humble_numbers(), n) Note that now that we separated out the original idea into "generate the numbers" and "get the nth one" as separate, we can just play around with the generating functions. Runtime Your algorithm is deceptively easy to analyse. For each number we check, we do at most 4 modulo operations, at most 1 dictionary lookup, and 1 dictionary insertion. That's all reasonable and obviously super cheap. But, are we doing one cheap operation per \$n\$? NO, we're not, but it's easy to fall into that trap. Because we're not iterating up to \$n\$... we're iterating up to the \$n\$th humble number. Thus, the runtime is really \$O(H(n))\$. 
And humble numbers get really sparse. Here's an unnecessarily prettified table for what the growth rate looks like: +---------+---------------------------------------+ | N | H(N) | +---------+---------------------------------------+ | 10 | 12 | | 100 | 480 | | 1000 | 387072 | | 10000 | 63248102400 | | 100000 | 123098144531250000000 | | 1000000 | 4157518627998641643389868083057746290 | +---------+---------------------------------------+ Your runtime complexity grows commensurately with the second column. According to Wikipedia, that means your algorithm is something on the order of \$O(e^{\sqrt[4]{n}})\$? A better approach Rather than checking every individual number (which leads to terrible runtime complexity), we can build numbers up from the bottom. The numbers we want are basically \$2^a3^b5^c7^d\$, for all \$a,b,c,d\$. Of course, that's difficult to iterate over (how do you know which one to do next?). But a different way of writing that is each humble number is either 2x, 3x, 5x, or 7x some previous humble number. And that's easy to iterate over: def humble_numbers2(): primes = (2,3,5,7) result = [1] yield 1 def make_multiple(_p): return (_p * result[i] for i in itertools.count()) iters = map(make_multiple, primes) merged = heapq.merge(*iters) for k,_ in itertools.groupby(merged): result.append(k) yield k The key is heapq.merge, which joins sorted iterables (which we know ours are). Basically we're on the fly creating the lists: 2*humble numbers, 3*humble numbers, 5*humble numbers, and 7*humble numbers - which are all humble numbers - and then using itertools.groupby to drop the duplicates (because, e.g., 14 is both 2x the 7th humble number and 7x the 2nd humble number) This approach is \$O(n)\$. We are only performing constant operations to determine the next humble number. 
Which, when in doubt, we can always verify by timing how long it takes to find the Nth humble number 10 times (after making the modifications I proposed to your solution): +------+----------------+-----------------+ | N | humble_numbers | humble_numbers2 | +------+----------------+-----------------+ | 100 | 0.003s | 0.002s | | 1000 | 2.890s | 0.028s | | 2000 | 53.332s | 0.059s | | 3000 | 384.589s | 0.091s | | 4000 | 1730.465s | 0.123s | #ok, I only ran this one once +------+----------------+-----------------+
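As a quick sanity check (my addition, not part of the original answer), the merge-based generator can be tested against the first twenty humble numbers listed in the question:

```python
import heapq
import itertools

def humble_numbers2():
    """Humble numbers via lazy merging of the 2x, 3x, 5x and 7x streams."""
    primes = (2, 3, 5, 7)
    result = [1]
    yield 1
    def make_multiple(p):
        # Stream of p * (every humble number found so far, in order).
        return (p * result[i] for i in itertools.count())
    merged = heapq.merge(*map(make_multiple, primes))
    for k, _ in itertools.groupby(merged):  # groupby drops duplicates like 14
        result.append(k)
        yield k

first_20 = list(itertools.islice(humble_numbers2(), 20))
assert first_20 == [1, 2, 3, 4, 5, 6, 7, 8, 9, 10,
                    12, 14, 15, 16, 18, 20, 21, 24, 25, 27]
```

The assertion matches the sequence 1, 2, 3, ..., 27 given in the problem statement.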
{ "domain": "codereview.stackexchange", "id": 17804, "tags": "python, primes, iteration, dynamic-programming, memoization" }
Why does the focal point of the eye not burn the retina?
Question: To see an object, its light rays have to meet on the retina in the focal point. But the focal point is a small white dot. Basically nothing would be distinguishable and the retina would burn because all energy is concentrated at a single point? Obviously, this is not happening. But why? Answer: The following diagrams show how the eye focuses: There may be people with vision problems such that the focal point is a single point on the retina, but this can be corrected with glasses.
{ "domain": "physics.stackexchange", "id": 94519, "tags": "optics, lenses, vision" }
Energy required to break the "delocalised electron cloud" of benzene
Question: I learned that when benzene undergoes hydrogenation, $\pu{208 kJ/mol}$ of energy is given off, and thus $\pu{152 kJ/mol}$ less energy than what would be given off if Kekule's structure were correct. My confusion is that my book says that this $\pu{152 kJ/mol}$ of energy is required to break the delocalized electron cloud - this does not make sense to me. How is this energy discrepancy equal to the energy required to break the delocalized electron cloud in benzene? Answer: Let's sum it up :) When cyclohexene is hydrogenated, $\mathrm{-120\, kJ\cdot mol^{-1}}$ is released. When 1,3-cyclohexadiene is hydrogenated, $\mathrm{-232\, kJ\cdot mol^{-1}}$ is released. That's pretty close to 240, so it looks like the energies for the hydrogenation of the double bonds sum up. So far, we're cool. Based on Kekule's model of alternating single and double bonds in benzene, we would now assume that the energy for hydrogenation is $\mathrm{ 3 \times -120 = -360 \, kJ\cdot mol^{-1}}$. But as your textbook says, it's only $\mathrm{-208\, kJ\cdot mol^{-1}}$. We must conclude that Kekule was almost right, but the bonding situation in benzene is special. The energy of the molecule is much lower than that of a hypothetical cyclohexatriene. And when it's lower in energy, less is released upon hydrogenation. The difference of $\mathrm{152\, kJ\cdot mol^{-1}}$ is attributed to resonance stabilization. Edit 1 If that's still a bit unclear with the signs of the energies, we can do some reverse experiments in our minds. Let's take cyclohexane and rip adjacent hydrogen atoms off to generate unsaturated molecules. We need $\mathrm{120\, kJ\cdot mol^{-1}}$ to make cyclohexene. We need $\mathrm{232\, kJ\cdot mol^{-1}}$ to make 1,3-cyclohexadiene. If benzene were cyclohexatriene, we would need $\mathrm{360 \, kJ\cdot mol^{-1}}$. But we need less energy to make benzene, because it is more stable than a hypothetical cyclohexatriene. Actually, it is even more stable than 1,3-cyclohexadiene!
Edit 2 Benzene's actual structure is stable by $\pu{152 kJ/mol}$ w.r.t. Kekule's structure. That $\pu{152 kJ/mol}$ is used for breaking down the delocalised electron cloud of benzene. Guess what? The teacher is just repeating in other words what we have already worked out on our own. Imagine that you want to convert benzene to the hypothetical Kekule structure. It's a process that would "consume" energy, $\mathrm{152\, kJ\cdot mol^{-1}}$ to be precise. Why is that? It's because benzene has the delocalized $\pi$-system that aromatic molecules have. If you want to overcome that and push the stable molecule uphill and turn it into the less stable (= not having the delocalisation) cyclohexatriene, you have to do some work.
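The bookkeeping in the answer is simple enough to check in a few lines (my own sketch, using the rounded values quoted above, all in kJ/mol):

```python
# Hydrogenation enthalpies quoted above (kJ/mol, rounded)
cyclohexene = -120                  # one isolated C=C double bond
kekule_benzene = 3 * cyclohexene    # hypothetical cyclohexatriene: -360
real_benzene = -208                 # measured value for benzene

# Benzene releases less heat because it starts out lower in energy;
# the shortfall is the resonance stabilization energy.
resonance_stabilization = real_benzene - kekule_benzene
assert resonance_stabilization == 152
```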
{ "domain": "chemistry.stackexchange", "id": 914, "tags": "aromatic-compounds, energy, stability" }
Matched Filter of Gaussian Signal
Question: How can we evaluate the matched filter's impulse response for a Gaussian function $x(t) = \exp(-\frac{t^2}{2})$? As far as we know, for a signal of finite duration $T$, the impulse response of its matched filter is $h(t) = x(T-t)$. But as the Gaussian function is of infinite duration, how can we find the matched filter? Answer: If you look at https://en.wikipedia.org/wiki/Matched_filter the derivation does not depend on the filter being either causal or of finite duration. The derivation is fully satisfactory for convolution over time index $k$ from $-\infty$ to $\infty$. It's all about what filter will maximize the Schwarz inequality. In this case $$ x(t)=\exp(-\frac{(t-0)^2}{2}) = x(0-t) $$ Functions that are symmetric are their own matched filter. The $T$ parameter enters the discussion in terms of when one samples the filter output for the purpose of detection. If $x(t)$ were delayed by some unknown time (nonsynchronous detection), I would use the matched filter to estimate its delay.
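A small numerical sketch (mine, not from the answer) confirms the point: a sampled version of the symmetric Gaussian is its own matched filter, and the filter output peaks at zero delay:

```python
import math

# Sample x(t) = exp(-t^2/2) on a symmetric grid over [-5, 5].
N = 201
x = [math.exp(-(-5 + 10 * i / (N - 1)) ** 2 / 2) for i in range(N)]

# Matched filter: time-reversed pulse h[n] = x[N-1-n].
# Because the Gaussian is symmetric, the filter equals the pulse itself.
h = x[::-1]
assert all(abs(a - b) < 1e-9 for a, b in zip(h, x))

# Direct full convolution; output has length 2N-1, with zero lag at index N-1.
y = [0.0] * (2 * N - 1)
for n in range(N):
    for k in range(N):
        y[n + k] += x[n] * h[k]

peak = max(range(len(y)), key=y.__getitem__)
assert peak == N - 1  # matched-filter output is maximized at zero delay
```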
{ "domain": "dsp.stackexchange", "id": 6263, "tags": "filters, gaussian" }
Comparison between Atomic Spectrum of Hydrogen and Alkali Metals
Question: I posted a question on Chemistry Stack Exchange a while back. It was related to the naming of atomic orbitals. One of the answers to it mentioned a research experiment. The link to the question is given below. Size of Orbitals, Making Intuitive Sense of Quantum Model, Nomenclature of Subshells in the Quantum Model As mentioned in the answer, alkali metals show various different bands of spectrum, on the basis of which we named the S, P, D and F orbitals. But I didn't get why we don't have these separate spectra in hydrogen. (I've studied the hydrogen spectrum in my classes, and it doesn't talk about anything like this. It only talks about Bohr using it to justify the quantisation of energy.) So what makes hydrogen different? Also, do elements other than alkali metals show a similar spectrum pattern, or is it a single spectrum like hydrogen's? Which ones do? What's the defining criterion for a single spectrum versus multiple spectra? Answer: As mentioned in my previous answer to your first query, the sharp, principal, diffuse, and fundamental notation for modern orbital "labels" comes from alkali and alkaline earth metal spectra. This is very well established in historical spectroscopy. It did not come from analyzing the hydrogen spectrum. Why? The reason is that these visual "labels" for spectrum lines come from the time when the electron had not yet been discovered, and it was very easy to study alkali and alkaline earth metals by electric arcs or sparks. It was just pure mathematical analysis (hence the name series spectra...the series is the empirical mathematical series). The hydrogen atom is the simplest atom, with only one electron. How they found out that hydrogen has only one electron is another story. This was the first atom that was tackled with sound and rigorous physics principles known to man at that time. Those wonderful years were the mid-1920s for physics but perhaps not for mankind.
In fact, hydrogen atom spectrum lines also showed "mathematical series" behavior; these are the Lyman, Balmer, Paschen, Pfund series etc., named after the spectroscopists who studied them. They did not call them sharp, principal, diffuse, fundamental series. Today these s, p, d, f labels just have a notational meaning related to the orbital angular momentum $l$ values in Schrodinger's equation. If $l=0$, you label it as $s$; if $l=1$, it is $p$; etc. This is pure algebraic use. Do not associate any other historical meaning to them now.
{ "domain": "chemistry.stackexchange", "id": 16522, "tags": "quantum-chemistry, spectroscopy, hydrogen, atomic-structure, alkali-metals" }
Does parity violation just mean particles are chiral?
Question: Wu's experiment shows that the mirror image of a system doesn't necessarily act the same as the original system. But the experiment only mirrors the position of every particle, not the particles themselves! Wouldn't the logical conclusion be that quarks are not just points, but rather more complex structures that have a chirality, and that when we mirror the system, we should also be mirroring the quarks themselves, rather than just their position. The 60% preference towards one end would then mean that 60% of quarks around us are left-handed, and 40% are right-handed. Answer: Generally, a parity violation is observed when a scalar quantity, such as an interaction rate or an energy, is found to depend on pseudoscalar quantity. For example, the "polar" vectors describing position $\vec r$ or momentum $\vec p$ change sign under reflection, but angular momentum $\vec L=\vec r\times\vec p$ does not (or changes sign twice, if you prefer). In the Wu et al. experiment, the reaction rate (a scalar) depends on the scalar product between the nuclear spin $\vec\sigma$ and the electron momentum $\vec p$. But the scalar product $\vec\sigma\cdot\vec p$ between an "axial vector" and a polar vector will change sign under reflection: the reaction rate is a mixture of scalar and "pseudoscalar." That's the parity violation. Note that the product $\vec\sigma\cdot\vec p$ for a single particle is its helicity, not its chirality; the two are correlated only in the high-momentum limit. A massive particle in its rest frame is equal parts left- and right-handed chirality, regardless of its spin polarization. And decays (as in Wu et al.) must be analyzed in the rest frame of the decaying particle. (If you object that cobalt is a big nucleus, look instead at decays of free neutrons. Or muons, even, which have no substructure at all. The Lederman et al. discovery of parity violation in muon decay is the paper following Wu et al. in Physical Review.) 
The explanation in the Standard Model for parity violation is that the charged weak current, whose vector boson is the $W^\pm$, interacts with left-chiral particles and with right-chiral antiparticles, but not vice-versa. That explains parity violation in rest-frame decays, while a boost-dependent excess of one chirality would not.
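To make the transformation rules concrete, here is a small numerical illustration (my own sketch, not part of the answer): under reflection the polar vectors $\vec r$ and $\vec p$ flip sign, the axial vector $\vec L=\vec r\times\vec p$ does not, and a product like $\vec\sigma\cdot\vec p$ flips sign, i.e. behaves as a pseudoscalar:

```python
def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

def flip(a):
    # Parity: polar vectors (position, momentum) change sign.
    return tuple(-c for c in a)

r = (1.0, 2.0, 3.0)      # position (polar)
p = (-0.5, 0.4, 1.0)     # momentum (polar)

# Angular momentum L = r x p is axial: the two sign flips cancel.
assert cross(flip(r), flip(p)) == cross(r, p)

# Spin is axial too, so it is left alone by the reflection while p
# flips sign, which makes sigma . p a pseudoscalar.
sigma = (0.0, 0.0, 1.0)
assert dot(sigma, flip(p)) == -dot(sigma, p)
```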
{ "domain": "physics.stackexchange", "id": 70486, "tags": "parity" }
Ensuring all individuals are genetically identical before experiments
Question: I am a beginner in experiment design, so please correct me if I make any mistakes. If we want to study the behavioral changes after a particular mutation of a gene, it seems to be a must for all individuals to be genetically identical before we induce the mutation (otherwise those changes may not be caused by the mutation, but by other genetic variants). For example, if we are using fruit flies for a genetic study, we need to enclose them (with food and air) such that they can only mate over many generations in that container. Question: After many generations, supposedly a very large number of offspring with very similar genetic makeup can be generated, ready for mutagenesis. However, may I ask how to test whether they are 100% genetically identical? I have thought of gene sequencing, but it is highly impractical since there is a very large population involved. Answer: For fruit flies one usually wants pure lineages for specific alleles, corresponding to distinct phenotypes. One cannot arrive at such a state by merely enclosing the flies for many generations - in this case one will quickly end up in the situation of Hardy-Weinberg equilibrium, where all the alleles are present in constant proportions, not changing with time. Of course, in a finite population one expects that eventually one allele fixes itself, but that may take a rather long time in the absence of selection. Thus, in practice one separates the desired phenotypes in every generation, and makes them breed among themselves. In fact, this is pretty much what Mendel did with peas to obtain pure lineages - it is worth reviewing this chapter. Now, there is always a risk that the population is not 100% pure, just as there is always a possibility that other genes contribute to the trait of interest. Here is where the (bio)statistical analysis comes in: testing for significance of the effect and filtering out random effects.
Update Just to provide a quick calculation: suppose that we have alleles A and a, with A being the dominant one. In every generation we remove the undesired genotype aa. Initially we have the genotype frequencies of AA, Aa, aa given by $P, H, Q=0$, where $P+H=1$. The allele frequencies are $$p=P+\frac{H}{2},q=\frac{H}{2}$$ After crossing we obtain the genotype frequencies $p^2, 2pq, q^2$, which after removing the aa genotype give: $$ P'=\frac{p^2}{p^2+2pq}=\frac{p}{p+2q}, H'=\frac{2pq}{p^2+2pq}=\frac{2q}{p+2q} $$ The new allele frequencies are $$p'=\frac{p+q}{p+2q}=\frac{1}{1+q}, q'=\frac{q}{p+2q}=\frac{q}{1+q},$$ where I used the fact that $p+q=1$. More generally, the frequency of the unwanted allele in generation $t+1$, in terms of the frequencies in the previous generation, is given by $$ q_{t+1}=\frac{q_t}{1+q_t} $$ The solution to this difference equation is $$ q_t=\frac{q_0}{1+tq_0}. $$ Now, if we want the frequency of the unwanted allele to be below the desired threshold $\alpha$, we must solve the inequality $q_t<\alpha$, obtaining $$ t>\frac{1}{\alpha}-\frac{1}{q_0} $$ generations. A good choice of $\alpha$ is $1/N$, where $N$ is the number of individuals in the population (then after roughly $N-1/q_0$ generations we are likely to have no more than one individual carrying the minority allele). Remark A similar calculation can be found in connection with strong selection, under conditions of complete dominance: if $s=1$, the aa genotype is completely eliminated at each generation. See, e.g., Introduction to Quantitative Genetics by Douglas Falconer.
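The recurrence and its closed-form solution are easy to verify numerically (a quick sketch of my own; the starting frequency is an arbitrary illustrative value):

```python
q0 = 0.3   # hypothetical initial frequency of the unwanted allele
q = q0
for t in range(1, 51):
    q = q / (1 + q)                  # culling aa each generation: q_{t+1} = q_t/(1+q_t)
    closed_form = q0 / (1 + t * q0)  # closed-form solution q_t = q0/(1+t*q0)
    assert abs(q - closed_form) < 1e-9
```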
{ "domain": "biology.stackexchange", "id": 11600, "tags": "experimental-design" }
Producing all allocations of n items from a list
Question: I have a strong feeling that the code below is ugly, at least because there are two identical "cons" calls. I would appreciate advice on ways to improve it. The code produces all allocations of n items from the list lst.

(defun allocations (lst n)
  (if (= n 1)
      (loop for i in lst collect (cons i nil))
      (loop for i in lst
            append (mapcar #'(lambda (l) (cons i l))
                           (allocations (remove i lst) (- n 1))))))

Answer:

(defun allocations (source length)
  (if (= 1 length)
      (list (list (car source)))
      (loop for processed = nil then (cons (car i) processed)
            for i on source
            for todo = (cdr i)
            appending (loop for intermediate
                              in (allocations (append (reverse processed) todo)
                                              (1- length))
                            appending (loop for prefix = nil then (cons (car suffix) prefix)
                                            for suffix on intermediate
                                            collect (append prefix (list (car i)) suffix))))))

This should be a more general case of the problem (i.e. it makes no assumption about the nature of the elements in the list), but it doesn't verify that there are enough elements in the source list to build the required number of permutations.
{ "domain": "codereview.stackexchange", "id": 2845, "tags": "combinatorics, common-lisp" }
What happens when a man walks on a canoe? What are the forces acting on the canoe and the man, and how can this be described with the law of conservation of momentum?
Question: In my physics class our teacher taught us about this canoe-man problem, where a man walks across a canoe and, due to the "law of momentum conservation", the canoe attains a constant velocity $v$ and the man attains a constant velocity $u$. Is this possible because with each step he exerts a force on the canoe and that gives the canoe and man different velocities, or should I assume that with steps 2, 4, 6, ... the canoe stops, as $Mv = mu$ by the law of conservation of momentum, and with steps 1, 3, 5, ... the canoe starts to move again? 1. The friction of water is considered negligible. 2. If my assumptions are wrong, what kind of motion do the canoe and the man have? 3. Does the canoe have a "go-stop-go-stop" motion? 4. Does it move with constant acceleration? If you want more clarification on my question please let me know. Answer: the canoe attains a constant 'v' velocity, and the man attains constant 'u' velocity. Is this possible because, each step he exerts a force on the canoe and that gives the canoe and man different velocities,... If your teacher is suggesting that the velocities of the man and canoe must be constant for momentum to be conserved, I believe your teacher is incorrect. I don't think it matters that the velocities and the momentum of the man and canoe are changing in time. The important thing is that at any instant in time, the instantaneous momentum of the man is his mass times his instantaneous velocity and the momentum of the canoe is its mass times its instantaneous velocity. It is the sum of the instantaneous momentum of the man plus canoe that is conserved, or $$m_{man}v_{man}+m_{canoe}v_{canoe}=constant$$ If the man and canoe velocities are not constant, then the velocities are instantaneous velocities. 
Keep in mind that momentum is a vector quantity and that at any instant in time the sign of the velocity of the man, and thus of his momentum, is opposite to the sign of the velocity of the canoe and its momentum, since the man and canoe are moving in opposite directions with respect to the center of mass of the system. ...or should I assume that with 2,4,6... steps the canoe stops as 'Mv = mu' as in L.O.C.M ,and in 1,3,5... steps the canoe starts to move again. I'm having trouble following the numbers. For example, if you are referring to the numbered items, there are no items 5 and 6. I will just address the significance of 1-4 in the analysis. 1. the friction of water is considered negligible. This is a necessary condition if the man and canoe are to be considered an isolated system, and in order for momentum to be conserved the system must be appropriately isolated. In the present example we are dealing with horizontal momentum. For conservation of horizontal momentum, there can be no net external horizontal forces acting on the system. Water friction on the canoe would constitute such an external force. (Gravity is not a horizontal force so it doesn't affect the current example). 2. if my assumptions are wrong, what kind of motion does the canoe and the man has. I don't believe your assumptions are wrong because the details of the motion of the man and canoe don't matter. As long as the system is isolated, at any instant in time the instantaneous momentum of the system is conserved (constant). It equals zero if the center of mass of the system was initially at rest with respect to the water. 3. does the canoe has a "go-stop-go-stop" motion? Yes, if the man takes a step, stops and waits a while, repeating the sequence until reaching the other end of the canoe. When he stops, his and the canoe's momentum with respect to the center of mass is zero. 
When he takes a step and accelerates, the sum of his and the canoe's instantaneous momentum (based on their instantaneous velocities) is zero. 4. does it move in constant acceleration? The COM of the man-canoe system has no acceleration if the system is isolated, that is, if the system is not subjected to a net external horizontal force. The force that the man exerts on the canoe and the force that the canoe exerts on the man are internal forces having no effect on the motion of the system as a whole. Those internal forces are equal and opposite per Newton's 3rd law. If it is a step-and-stop sequence, there is a combination of an initial acceleration of the man and canoe with respect to the COM of the system when the man initiates the step and a deceleration when the man ends the step (stops). Hope this helps.
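As a numerical illustration of the point that the details of the stepping don't matter (the masses and velocities below are invented for the example, not taken from the problem), conservation of momentum fixes the canoe's instantaneous velocity from the man's at every instant:

```python
# Man-canoe system initially at rest: total horizontal momentum stays zero.
# Masses are arbitrary example values.
m_man = 70.0    # kg
m_canoe = 30.0  # kg

# Instantaneous man velocities (m/s) at a few moments of a
# "step and stop" sequence — accelerating, decelerating, stopped.
man_velocities = [0.0, 0.3, 0.6, 0.3, 0.0, 0.6, 0.0]

for v_man in man_velocities:
    # canoe velocity required by conservation of momentum
    v_canoe = -m_man * v_man / m_canoe
    total_p = m_man * v_man + m_canoe * v_canoe
    assert abs(total_p) < 1e-12  # momentum is conserved at every instant
```

Whatever pattern of speeds the man goes through, the canoe's velocity at that same instant is just $v_{canoe} = -(m_{man}/m_{canoe})\,v_{man}$, so the total momentum never deviates from zero.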
{ "domain": "physics.stackexchange", "id": 72607, "tags": "newtonian-mechanics, momentum, free-body-diagram" }
Qiskit: can you get a circuit from a unitary matrix?
Question: In Qiskit you can get a unitary matrix from a circuit (circuit to unitary matrix example). Is the opposite direction possible? Can you input a unitary matrix and have Qiskit come up with a circuit? If it helps, you can restrict the matrices to be Clifford + multi-qubit controlled Pauli strings. Here's an example that goes from circuit -> unitary and then attempts to get the original circuit back based on the answer below:

import numpy as np
np.set_printoptions(threshold=np.inf)
import qiskit

backend = qiskit.Aer.get_backend('unitary_simulator')
qr = qiskit.QuantumRegister(4, name="qr")

CirA = qiskit.QuantumCircuit(qr)
CirA.cx(3, 2)
CirA.h(0)
CirA.cx(0, 2)
CirA.h(1)
CirA.cx(1, 3)
print(CirA)

job = qiskit.execute(CirA, backend, shots=1)
result = job.result()
MatA = result.get_unitary(CirA, 3)

CirB = qiskit.QuantumCircuit(qr)
CirB.unitary(MatA, [0, 1, 2, 3], label='CirB')
print(CirA)

unroller = qiskit.transpiler.passes.Unroller(basis=['u', 'cx'])
uCirA = qiskit.converters.dag_to_circuit(unroller.run(qiskit.converters.circuit_to_dag(CirA)))
print(uCirA)
uCirB = qiskit.converters.dag_to_circuit(unroller.run(qiskit.converters.circuit_to_dag(CirB)))
print(uCirB)

Answer: You can easily create a quantum circuit which implements a unitary by appending a UnitaryGate to the circuit, or using the QuantumCircuit.unitary() method:

# Get some random unitary:
from qiskit.quantum_info import random_unitary
num_qubits = 4
U = random_unitary(2 ** num_qubits)

# Create the quantum circuit:
qr = QuantumRegister(num_qubits, 'q')
circ = QuantumCircuit(qr)
circ.unitary(U, qr)

For more information about how Qiskit constructs the circuit from the unitary matrix see here.
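A related practical point, independent of Qiskit itself: the matrix you pass in must actually be unitary (up to numerical tolerance), or the gate construction will be rejected. The pure-NumPy sketch below builds a random unitary via a QR decomposition — a generic numerical technique, not Qiskit's internal method — and checks $U^\dagger U = I$ before it would be handed to a circuit:

```python
import numpy as np

rng = np.random.default_rng(seed=0)
n = 16  # dimension for 4 qubits: 2**4

# Build a random unitary as the Q factor of a QR decomposition
# of a complex Gaussian matrix.
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
U, _ = np.linalg.qr(A)

# Sanity check before handing the matrix to a circuit: U†U = I
assert np.allclose(U.conj().T @ U, np.eye(n))
```

This kind of check is a cheap way to catch rounding or transcription errors in a matrix before attempting circuit synthesis from it.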
{ "domain": "quantumcomputing.stackexchange", "id": 4209, "tags": "qiskit, circuit-construction, gate-synthesis" }
Why doesn't the light from galaxies appear stretched?
Question: Maybe it's my ignorance of astrophysics/cosmology, but I have been wondering this: Why do galaxies not appear stretched when we observe them? Assuming a galaxy that we observe is 100,000 light years in diameter, and we are viewing at an angle that is almost sideways, but enough to see its shape, there would be a 100,000 year delay between what we see at the front of the galaxy versus the far end of the galaxy. So wouldn't it look all jumbled up instead of a perfect spiral? At first, I thought that the Big Bang would explain this, because the light from that matter would have been there from the start, but that doesn't make sense when we consider that it took hundreds of thousands of years for the universe to clear up enough for photons to travel far in any direction without bouncing off of something. Also, if inflation theory is true, wouldn't that just reinforce the confusion about the delay in light, given that the universe expanded faster than light for an instant? It's confusing, but if someone has an answer for this I'd appreciate the enlightenment. Answer: Galaxies would appear stretched along the line of sight, not jumbled. Let's say a galaxy (specifically, the closest side of it) is ten million light years away and, as you proposed, is 100,000 light years across and we see it nearly edge on. The front of the galaxy will appear to us as it did ten million years ago and the back of the galaxy as it did 10,100,000 years ago. Thus, if the galaxy is moving towards us it will appear bigger than it actually is due to the delay between light from the front of the galaxy and the rear of the galaxy reaching us. If the galaxy is moving away from us, it will appear smaller than it actually is (compressed along the line of sight). As far as the effects of this time delay on viewing the rotation of the galaxy, the length of the cosmic year (the length of time it takes for the sun to travel around the center of the Milky Way) is 225 to 250 million years. 
So, as many other people have already pointed out, the rotation period of a galaxy is small compared to the time it takes for light to travel across the galaxy and there is no jumbling effect.
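To put numbers on that comparison (values taken from the answer above), the light-crossing delay across the galaxy is a tiny fraction of its rotation period, so the far side has barely rotated relative to the near side in the image we receive:

```python
# Light-crossing delay across the galaxy vs. its rotation period.
# A 100,000 ly diameter means a 100,000 yr delay; one galactic
# rotation takes roughly 225 million years (lower end of the
# 225–250 Myr range quoted in the answer).
diameter_delay_yr = 100_000
rotation_period_yr = 225_000_000

fraction = diameter_delay_yr / rotation_period_yr
rotation_deg = 360.0 * fraction

print(f"{fraction:.2%} of a rotation")  # 0.04% of a rotation
print(f"{rotation_deg:.2f} degrees")    # 0.16 degrees
```

During the 100,000-year delay the galaxy turns by only about a sixth of a degree, far too little to visibly "jumble" the spiral pattern.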
{ "domain": "physics.stackexchange", "id": 15760, "tags": "cosmology, big-bang, cosmological-inflation, galaxies" }
1,6-dimethylcyclohexene and 2,3-dimethylcyclohexene
Question: Is there a difference between these two molecules? If so, how does one tell? What is the correct nomenclature for cycloalkenes? I know that the double bond is assumed to be between C1 and C2, but why can't we number a cyclohexene to get 2,3? Is it because a lower sum is not what is preferred, but simply the lowest individual numbers where possible? Answer: The IUPAC rule is for the first branch to have the lowest number, even if that gives you 1,6-dimethylcyclohexene.
{ "domain": "chemistry.stackexchange", "id": 3616, "tags": "organic-chemistry, nomenclature" }
Is the water underneath Europa's ice cap potable?
Question: I read this question on Worldbuilding.SE, and figured that the astronomy site would have answers too, particularly for the specific example of Europa. The idea is that Earth's oceans are salty because rain falls on continents, and while the rain makes its way to the sea it absorbs minerals and salts from the land. That piles up in the oceans, and that's why you can drink river water but not sea water. Europa has no continents; as far as we know it's an ice cap of a couple tens of km, then roughly 100 km of liquid water, and only then something rocky that might contain salts. Does that mean that the mentioned liquid water is likely to be pure? Or at least pure compared to the Atlantic Ocean? Or do we not know? Answer: According to this 2007 paper, the research available at the time spanned a huge range of possible concentrations of $\text{MgSO}_4$ (magnesium sulfate), with over four orders of magnitude (a factor of roughly $30,\!000$) between the extreme ends of the predictions. The paper conducts its own analyses and, near the end, offers some analysis of habitability. They say (with slight formatting modifications for units by me): If the ice and liquid water layers on Europa fall within the limits of Fig. 2 (A = 0.7) then, by standard definitions of “freshwater” environments on Earth [broadly meaning $<3$ g salt per kg H${}_2$O (Barlow, 2003)], Europa’s ocean would be a freshwater ocean, though admittedly more salty than most terrestrial lakes. Indeed, in this case, the putative global ocean of Europa could be more like the mildly saline environment of Pyramid Lake, Nevada than like the Earth’s ocean. While the drinking water regulations of the U.S. Environmental Protection Agency recommend no more than 0.25 g of sulfate per kilogram of water, adult humans can acclimatize to drinking water with nearly 2g MgSO${}_4$ per kg H2O without much discomfort (EPA, 2004; CDC-EPA, 1999). 
Animal toxicity (the lethal dose for 50% of the population) is in the range of 6 g MgSO4 per kg H2O (CDC-EPA, 1999), but most livestock are satisfied provided the total salt concentration is less than 5 grams per kilogram of water (ESB-NAS, 1972). If we assume the low amplitude regime for our solution (A < 0.8) then it is possible that human or beast could drink the water of Europa. However, the best estimates they had for the parameter $A$ were from magnetic field observations, which put $A = 0.97 \pm 0.02$. In this case the article concludes that the subsurface ocean would be very salty. The most salt-tolerant organisms we know of could potentially survive in the environment. However, such organisms evolved into such salty niches from less salty ones, rather than having evolved directly within them. Current evidence suggests that life as we know it is unlikely to be able to arise in such a salty medium.
{ "domain": "astronomy.stackexchange", "id": 5091, "tags": "water, astrochemistry, europa" }
Why do we get different imaginary parts of a zero-centered Gaussian for the same number of data points N?
Question: Suppose we have a total of N = 2048 points in a data set and we wish to have a zero-centered Gaussian. There are two possibilities for the x-axis:

x1 = [-1023:1:1024];  % x axis spans from -1023 to 1024 with 1 unit steps
x2 = [-1024:1:1023];  % x axis spans from -1024 to 1023 with 1 unit steps

and if we make two zero-centered Gaussians using these x values:

Gauss1 = normpdf(x1, 0, 10);  % The syntax is normpdf(x, mean, standard deviation)
Gauss2 = normpdf(x2, 0, 10);

and obtain their FTs as follows in MATLAB:

FFTGauss1 = fft(Gauss1);
FFTGauss2 = fft(Gauss2);

The real parts are identical and their magnitudes exactly match. For some reason, the imaginary parts vary drastically. Why do we see large imaginary parts in one case and almost non-existent imaginary parts in the other? Thanks. Answer: When $x_2 = [-1024:1:1023]$, then $x_2[n]$ satisfies the condition $x_2[n] = x_2[(N-n)\mod N]$. That is why when $x_2 = [-1024:1:1023]$, the FFT is real and hence the imaginary part is $0$. If you look at the scale of the $y$-axis for the imaginary part of the $x_2$ plot, it is of the order of $10^{-17}$, which is effectively $0$ in MATLAB. Detailed Explanation: When $x \in \{-1024, -1023, -1022,..., 0, 1, 2, ..., 1023\}$, then you get the following mapping : $$\begin{array}{lcl}x[0] &=& {\rm gaussian}(-1024)\\ x[1] &=& {\rm gaussian}(-1023)\\ x[2] &=& {\rm gaussian}(-1022) \\ & \vdots\\ x[1024] &=& {\rm gaussian}(0)\\ x[1025] &=& {\rm gaussian}(1) \\ & \vdots \\ x[2047] &=& {\rm gaussian}(1023)\end{array}$$ Observe that $x[1] = x[2047] = x[(2048 - 1)\mod\ 2048]$, $x[2] = x[2046] = x[(2048 - 2)\mod \ 2048]$ and so on. This makes $x[n]$ real and symmetric mod $N$, which will in turn make $X[k]$ real, and that is why you see the imaginary part as $0$. MATLAB shows $0$ as values of the order of $10^{-17}$. Do the same mapping for $x = [-1023:1:1024]$, and you will see that $x[n] \ne x[(N-n)\mod \ N]$ and hence the imaginary part is not $0$.
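The symmetry argument is easy to reproduce numerically. The sketch below is a NumPy translation of the MATLAB setup (the Gaussian is written out explicitly so no statistics toolbox is needed):

```python
import numpy as np

N = 2048
sigma = 10.0

def gaussian_pdf(x, sigma):
    # same as MATLAB's normpdf(x, 0, sigma)
    return np.exp(-x**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

x1 = np.arange(-1023, 1025)  # -1023 .. 1024
x2 = np.arange(-1024, 1024)  # -1024 .. 1023

G1 = np.fft.fft(gaussian_pdf(x1, sigma))
G2 = np.fft.fft(gaussian_pdf(x2, sigma))

# x2 samples satisfy x[n] == x[(N - n) % N], so its FFT is real up
# to floating-point roundoff; x1 does not, so its FFT has a
# genuinely non-zero imaginary part.
assert np.max(np.abs(G2.imag)) < 1e-10
assert np.max(np.abs(G1.imag)) > 1e-6
```

The $x_1$ case is just the $x_2$ sequence circularly shifted by one sample, so by the DFT shift theorem the magnitudes agree while a frequency-dependent phase appears, which is exactly the "large imaginary parts" seen in the plot.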
{ "domain": "dsp.stackexchange", "id": 8778, "tags": "fourier-transform, fourier" }
Cannot load rxbag_plugins correctly
Question: Hey! I am trying to plot my trajectory topic in a decent way. rxbag_plugins plot seems like a very promising tool to plot the trajectory topic. But while run rxbag, I got the following error. Can anyone tell me what is wrong at here? thanks. By the way, I am using fuerte on ubuntu 12.04. thanks. Unable to load plugin [rxbag_plugins.plugins] from package [rxbag_plugins]: Traceback (most recent call last): File "/opt/ros/fuerte/lib/python2.7/dist-packages/rxbag/plugins.py", line 70, in load_plugins roslib.load_manifest(pkg) File "/opt/ros/fuerte/lib/python2.7/dist-packages/roslib/launcher.py", line 62, in load_manifest sys.path = _generate_python_path(package_name, _rospack) + sys.path File "/opt/ros/fuerte/lib/python2.7/dist-packages/roslib/launcher.py", line 98, in _generate_python_path packages = get_depends(pkg, rospack) File "/opt/ros/fuerte/lib/python2.7/dist-packages/roslib/launcher.py", line 51, in get_depends vals = rospack.get_depends(package, implicit=True) File "/usr/lib/pymodules/python2.7/rospkg/rospack.py", line 201, in get_depends s.update(self.get_depends(p, implicit)) File "/usr/lib/pymodules/python2.7/rospkg/rospack.py", line 201, in get_depends s.update(self.get_depends(p, implicit)) File "/usr/lib/pymodules/python2.7/rospkg/rospack.py", line 195, in get_depends names = [p.name for p in self.get_manifest(name).depends] File "/usr/lib/pymodules/python2.7/rospkg/rospack.py", line 133, in get_manifest return self._load_manifest(name) File "/usr/lib/pymodules/python2.7/rospkg/rospack.py", line 172, in _load_manifest retval = self._manifests[name] = parse_manifest_file(self.get_path(name), self._manifest_name) File "/usr/lib/pymodules/python2.7/rospkg/rospack.py", line 164, in get_path raise ResourceNotFound(name, ros_paths=self._ros_paths) ResourceNotFound: catkin ROS path [0]=/opt/ros/fuerte/share/ros ROS path [1]=/home/me/sandbox ROS path [2]=/home/me/test ROS path [3]=/home/me/hg_repo_cambridge ROS path 
[4]=/home/me/ros_pkg/rxbag_plugin/rxbag_plugins/src/rxbag_plugins ROS path [5]=/home/me/ros_pkg ROS path [6]=/opt/me/fuerte/share ROS path [7]=/opt/me/fuerte/stacks Unable to load plugin [tf2_visualization.rxbag_plugin] from package [tf2_visualization]: Traceback (most recent call last): File "/opt/ros/fuerte/lib/python2.7/dist-packages/rxbag/plugins.py", line 70, in load_plugins roslib.load_manifest(pkg) File "/opt/ros/fuerte/lib/python2.7/dist-packages/roslib/launcher.py", line 62, in load_manifest sys.path = _generate_python_path(package_name, _rospack) + sys.path File "/opt/ros/fuerte/lib/python2.7/dist-packages/roslib/launcher.py", line 98, in _generate_python_path packages = get_depends(pkg, rospack) File "/opt/ros/fuerte/lib/python2.7/dist-packages/roslib/launcher.py", line 51, in get_depends vals = rospack.get_depends(package, implicit=True) File "/usr/lib/pymodules/python2.7/rospkg/rospack.py", line 201, in get_depends s.update(self.get_depends(p, implicit)) File "/usr/lib/pymodules/python2.7/rospkg/rospack.py", line 201, in get_depends s.update(self.get_depends(p, implicit)) File "/usr/lib/pymodules/python2.7/rospkg/rospack.py", line 195, in get_depends names = [p.name for p in self.get_manifest(name).depends] File "/usr/lib/pymodules/python2.7/rospkg/rospack.py", line 133, in get_manifest return self._load_manifest(name) File "/usr/lib/pymodules/python2.7/rospkg/rospack.py", line 172, in _load_manifest retval = self._manifests[name] = parse_manifest_file(self.get_path(name), self._manifest_name) File "/usr/lib/pymodules/python2.7/rospkg/rospack.py", line 164, in get_path raise ResourceNotFound(name, ros_paths=self._ros_paths) ResourceNotFound: catkin ROS path [0]=/opt/ros/fuerte/share/ros ROS path [1]=/home/me/sandbox ROS path [2]=/home/me/test ROS path [3]=/home/me/hg_repo_cambridge ROS path [4]=/home/me/ros_pkg/rxbag_plugin/rxbag_plugins/src/rxbag_plugins ROS path [5]=/home/me/ros_pkg ROS path [6]=/opt/me/fuerte/share ROS path [7]=/opt/me/fuerte/stacks 
Originally posted by jayson ding on ROS Answers with karma: 29 on 2013-05-08 Post score: 0 Answer: ResourceNotFound: catkin You need to install catkin. Also, why not consider using rqt_bag, which succeeds rxbag from ROS Groovy but is available in Fuerte too? Originally posted by 130s with karma: 10937 on 2013-07-16 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 14118, "tags": "ros, ubuntu, ros-fuerte, ubuntu-precise, rxbag" }
Suggested reading for quantum field theory in curved spacetime
Question: I want to learn some QFT in curved spacetime. What papers/books/reviews can you suggest for learning this area? Are there any good books or other reference material which can help in learning about QFT in curved spacetime? There is no restriction on the material, whether physical or mathematical. Answer: Quantum field theory (QFT) in curved spacetime is nowadays a mature set of theories, quite technically advanced from the mathematical point of view. There are several books and reviews one may profitably read depending on one's own interests. I deal with this research area from a quite mathematical viewpoint, so my suggestions could reflect my attitude (or be biased in favor of it). First of all, Birrell and Davies' book is the first attempt to present a complete account of the subject. However, the approach is quite old both in its ideas and in the mathematical technology presented; you could have a look at some chapters without sticking to it. Parker and Toms' recent textbook should be put on the same level as the classic Birrell and Davies book in scope, but it is more up to date. Another interesting book is Fulling's ("Aspects of QFT in curved spacetime"). That book is more advanced and rigorous than BD's textbook from the theoretical viewpoint, but it deals with a considerably smaller variety of topics. The Physics Report by Kay and Wald on QFT in the presence of bifurcate Killing horizons is a further relevant step towards the modern (especially mathematical) formulation, as it profitably takes advantage of the algebraic formulation and presents the first rigorous definition of a Hadamard quasifree state. 
An account of the interplay of Euclidean and Lorentzian QFT in curved spacetime, exploiting zeta-function and heat-kernel technologies, with many applications, can be found in a book I wrote with other authors ("Analytic Aspects of Quantum Fields", 2003). A more advanced approach to Lorentzian QFT in curved spacetime can be found in Wald's book on black hole thermodynamics and QFT in curved spacetime. Therein, the microlocal analysis technology is (briefly) mentioned for the first time. As the last reference I would like to suggest the PhD thesis of T. Hack http://arxiv.org/abs/arXiv:1008.1776 (I was one of the advisors together with K. Fredenhagen and R. Wald). Here, cosmological applications are discussed. ADDENDUM. I forgot to mention the very nice lecture notes by my colleague Chris Fewster! http://www.science.unitn.it/~moretti/Fewsternotes.pdf ADDENDUM2. There is now a quick introductory technical paper, by myself and I. Khavkine, on the algebraic formulation of QFT on curved spacetime: http://arxiv.org/abs/1412.5945 which in fact will be a chapter of a book by Springer.
{ "domain": "physics.stackexchange", "id": 28792, "tags": "quantum-field-theory, general-relativity, resource-recommendations, qft-in-curved-spacetime" }
Quantum state expanded in basis or just orthonormal set
Question: Consider a quantum state $\vert{\psi}⟩$. It can be expanded in the form $$\vert{\psi}⟩=\sum_ic_i\vert{\phi_i}⟩,$$ where the vectors $\vert{\phi_i}⟩$ form an orthonormal basis. My question is, if my Hilbert space is not necessarily separable, do the $\vert{\phi_i}⟩$'s need to be a basis (in the sense of being complete) for the expansion to hold, or is it enough if they form an orthonormal system? Answer: If $B$ were a basis, we would have $\mathrm{span}(B) = \mathcal{H}$, which means that any element of the Hilbert space is a finite linear combination of vectors from $B$. Usually in quantum mechanics, $\mathrm{span}(B)$ is only dense in $\mathcal{H}$. This means that in order to get an arbitrary element, we generally need an infinite linear combination. In this sense, what most textbooks call an "orthonormal basis" is not strictly a basis. I've personally never seen a situation in quantum mechanics where it's desirable to have a true basis for $L^2(\mathbb{R}^d)$. Such a basis would have to be very large. And in particular, it would not be possible to write its elements as $\left | \phi_i \right>$. As an aside, for some Banach spaces like $L^1$ and $L^\infty$, finding a basis is hard enough that it matters whether you accept the axiom of choice.
{ "domain": "physics.stackexchange", "id": 81014, "tags": "quantum-mechanics, hilbert-space, quantum-states" }
For $\psi(x, y, z, t) = Ae^{i[k(\alpha x + \beta y + \gamma z) \mp \omega t]}$, $\alpha^2 + \beta^2 + \gamma^2 = 1$
Question: I am currently studying Optics, fifth edition, by Hecht. I am presented with the plane wave in Cartesian coordinates as follows: $$\psi(x, y, z, t) = Ae^{i[k(\alpha x + \beta y + \gamma z) \mp \omega t]}$$ I am then told that $\alpha^2 + \beta^2 + \gamma^2 = 1$. Can someone please explain why $\alpha^2 + \beta^2 + \gamma^2 = 1$? Thank you. Answer: Here $\alpha$, $\beta$, and $\gamma$ are the Cartesian components of the unit wave vector, i.e. the direction cosines of the propagation direction. The wave vector is $\mathbf k=(k_x,k_y,k_z)=k(\alpha,\beta,\gamma)$. The squares of the components of any unit vector sum to $1$.
{ "domain": "physics.stackexchange", "id": 65551, "tags": "waves" }
Scale an image to fit some maximum dimension using vanilla JavaScript code
Question: Fill in the ? with a JavaScript expression to set the scale ratio for an image having a given height and width so that it fits inside a maxdim-by-maxdim square area (touching at least two edges).

function scaleImage(width, height, maxdim) {
    var scale = ?;
    return [scale * width, scale * height];
}

Here is my solution:

function scaleImage(width, height, maxdim) {
    var scale = (width > height ? maxdim / width : maxdim / height);
    return [scale * width, scale * height];
}

Is there a cleaner way of doing this? Answer: I would probably use Math.max(), as that seems more readable to me and would make what is happening clearer. However, I don't think it will affect efficiency in any meaningful way, since it's still making a comparison to determine the max.

var scale = maxdim / Math.max(width, height);
{ "domain": "codereview.stackexchange", "id": 30579, "tags": "javascript" }
How is $Q(s', a')$ calculated in SARSA and Q-Learning?
Question: I have a question about how to update the Q-function in Q-learning and SARSA. Here (What are the differences between SARSA and Q-learning?) the following updating formulas are given: Q-Learning $$Q(s,a) = Q(s,a) + \alpha (R_{t+1} + \gamma \max_aQ(s',a) - Q(s,a))$$ SARSA $$Q(s,a) = Q(s,a) + \alpha (R_{t+1} + \gamma Q(s',a') - Q(s,a))$$ How can we calculate the $Q(s', a')$ in both SARSA and Q-learning for updating the Q-function? After having taken an action $a$ at state $s$, we get the reward $r$, which we can observe. But we cannot observe $Q(s',a')$ from the environment as far as I see it. Can anyone maybe think about a comprehensive example where you can see how it is done (or a link to a website)? Answer: It seems that your problem is that you think that we must know the true value of $Q(s', a')$ in order to perform the SARSA update. This is not the case! SARSA is a reinforcement learning algorithm, not a supervised learning algorithm (although you can also view RL as a form of SL). If you are familiar with supervised learning (SL), then you know that, to train a model, you need the ground-truth labels. The typical SL example is that of binary classification of dogs and cats. So, you are given an image of a dog or cat $x$, you pass it to your neural network $f$, which produces a prediction $\hat{y} = f(x)$. Now, if $x$ is a dog but $\hat{y}$ is cat, the neural network $f$ made a mistake. So, we need to change the weights of this model so that $\hat{y} = \text{dog}$ when $x$ is an image of a dog (of course, this reasoning also applies to the case when $x$ is an image of a cat). A typical way to solve this problem in SL is to use a loss function that computes some notion of distance between $\hat{y}$ (the prediction) and $y$ (the true label). The usual loss function, in this case, is the binary cross-entropy, but you don't need to know the details now. 
In reinforcement learning, you don't really have ground-truth labels, but you have experience, which is just the tuples $\langle s_t, a_t, r_{t+1}, s_{t+1} \rangle $, where $s_t$ is the state of the agent/environment at time step $t$ $a_t$ is the action that the agent takes at time step $t$ in state $s_t$ $r_{t+1}$ is the reward the agent receives after having taken action $a_t$ in $s_t$; this reward indicates how good that action is, but it doesn't tell whether you took the correct/optimal (or ground-truth) action or not (this is the main difference between reinforcement learning and supervised learning!) $s_{t+1}$ is the state the agent ends up in after having taken $a_t$ in $s_t$. Now, in reinforcement learning, there are many problems that you may want to solve. However, the main goal of an RL agent is to maximize expected reward in the long run (known also as expected return), so you could say that your objective function is $$\mathbb{E} \left[ \sum_{t=0}^\infty R_t \right],$$ where $G = \sum_{t=0}^\infty R_t$ is the so-called return (i.e. the cumulative reward or reward in the long run). The goal is to maximize this expectation. In practice, what you do is ESTIMATE a so-called (state-action or just action) value function. In the case of SARSA, it's defined as $q: \mathcal{S} \times \mathcal{A} \rightarrow \mathbb{R}$, where $\mathcal{S}$ and $\mathcal{A}$ are respectively the set of states and actions of the environment (aka MDP). Why do you want to estimate a value function? In the case of $q$ (so SARSA and Q-learning), $q(s, a)$, for some $s \in \mathcal{S}$ and $a \in \mathcal{A}$, is defined as the expected cumulative reward that you will get from taking action $a$ in the state $s$. So, if you know that you will get more reward by taking action $a_1$ rather than action $a_2$ in state $s$, then $q(s, a_1) > q(s, a_2)$, so you will take action $a_1$ when in state $s$. 
In fact, you can also define $q(s, a)$ as follows $q(s, a) = \mathbb{E}\left[ G \mid s, a \right]$, where $G$ is our cumulative reward, aka return (for simplicity, I ignore a few details). HOWEVER, we do not (usually) know $q$. That's why we need Q-learning and SARSA, i.e. to estimate the state-action value function. So, in SARSA, you know $s'$ and $a'$ (read the pseudocode!), but we do not know the true value of $q(s', a')$. So, you say, but then why do we use it in the update of SARSA? The reason is: initially, SARSA uses possibly wrong estimates of $q$ to learn $q$ itself. We denote these estimates with the capital letter $Q$. So, we don't know the true value of $a'$ in $s'$. Or, more precisely, at the beginning of SARSA, if $q$ is implemented as a 2d array (or matrix), then $Q[s', a']$ is not a good estimate of the true value of $a'$ in state $s'$, i.e. $q(s', a')$. In other words, $Q[s', a'] \not\approx q(s', a')$. Now, you ask: why can we use a possibly wrong estimate, $Q(s', a')$, to compute $Q(s, a)$ (another estimate)? The idea of using possibly wrong estimates of the state-value function to update other estimates of the value function is present in all temporal-difference algorithms (including Q-learning): this is called bootstrapping. However, the specific reason why tabular SARSA converges to the true estimates is a different (although related) story (more info here). Now, if you didn't understand this answer, then you really need to pick up a book and read it carefully from the beginning. It takes time to understand RL at the beginning, but then it becomes easy. The most common textbook for RL is Reinforcement Learning: An Introduction by Sutton and Barto. You can find other books here.
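A minimal tabular sketch of the two updates may make the mechanics concrete (the state/action names and reward here are placeholders, not from any particular environment). Note that SARSA uses the next action $a'$ actually chosen by the behavior policy, while Q-learning maximizes over next actions:

```python
from collections import defaultdict

alpha = 0.1   # learning rate
gamma = 0.99  # discount factor

# Q-table: all estimates Q[(s, a)] start at 0 and bootstrap off
# each other — no ground-truth q(s', a') is ever needed.
Q = defaultdict(float)

def sarsa_update(s, a, r, s_next, a_next):
    # uses the action a' actually taken by the (e.g. epsilon-greedy) policy
    target = r + gamma * Q[(s_next, a_next)]
    Q[(s, a)] += alpha * (target - Q[(s, a)])

def q_learning_update(s, a, r, s_next, actions):
    # maximizes over possible next actions instead
    target = r + gamma * max(Q[(s_next, b)] for b in actions)
    Q[(s, a)] += alpha * (target - Q[(s, a)])

# One experience tuple <s, a, r, s'> plus the chosen a':
sarsa_update('s0', 'a0', r=1.0, s_next='s1', a_next='a1')
print(Q[('s0', 'a0')])  # 0.1, since Q[('s1', 'a1')] is still 0
```

The first update shows the bootstrapping clearly: the (initially zero, and certainly wrong) estimate $Q(s', a')$ enters the target, and the table still moves toward sensible values as rewards accumulate.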
{ "domain": "ai.stackexchange", "id": 3149, "tags": "reinforcement-learning, q-learning, sarsa, bootstrapping" }
Is the smallest or largest diameter important for the color of a molecule?
Question: I learned that phenol absorbs longer wavelengths than benzene because the electron cloud is bigger. But what about penta-1,2,3,4-tetraene (1) and 2,2-dimethylpropane (2)? (1) has the largest diameter but (2) has the smallest. Which one absorbs the longer wavelengths? Answer: The size of a molecule does not correlate with the color of a molecule. Color depends upon which wavelengths of light are absorbed and which are transmitted by a molecule. Visible light typically ranges from around 390 to 700 nm. Light with a wavelength between approximately 635 to 700 nm appears red to the human eye. If we have a molecule that absorbs all visible light below 635 nm, the molecule will appear red; it absorbs all visible light except for the red wavelengths, hence only the red wavelengths will be reflected or transmitted and be available for the eye to detect. What determines which wavelengths of light are absorbed by a molecule? This is controlled by the energy separation (energy gap) between the highest energy orbital that contains an electron (the HOMO or highest occupied molecular orbital) and the lowest energy orbital that an electron can be promoted to (the LUMO or lowest unoccupied molecular orbital). If this energy gap falls in the visible range, then the molecule will appear colored to the eye when the molecule absorbs light at the wavelength that corresponds to the energy gap (e.g. when a photon is absorbed that promotes an electron from the HOMO to the LUMO). Molecules like 2,2-dimethylpropane contain only sigma bonds. The energy gap in typical sigma bonds is very large, meaning that it takes a lot of energy to promote an electron from the HOMO to the LUMO. The amount of energy required typically falls in the ultra-violet region, outside the visible range; therefore these molecules are usually colorless since they transmit all visible wavelengths. Molecules containing pi bonds have smaller HOMO-LUMO energy gaps than molecules with just sigma bonds.
If there are enough pi bonds and they are conjugated then the gap can be small enough that light in the visible range can be absorbed and make the molecule appear colored. Here is a diagram showing how the energy gap decreases as we increase the length of our conjugated pi system. Even with 1,3,5-hexatriene the light absorption at 258 nm is still outside the visible range, so the molecule will not be colored. But as we continue to extend the conjugation, eventually the molecule will absorb light around 400-500 nm, the blue light range. As the molecule absorbs blue light it will appear yellow. In addition to pi conjugation, certain substituents such as the nitro group and the carbonyl, to just name two, can also serve to extend the conjugation path in a molecule. The molecule you asked about, penta-1,2,3,4-tetraene, looks roughly like this. If we examine it closely, we'll notice that the orbitals really look like those in two distinct (orthogonal) 1,3-butadiene molecules. As the chart above shows, 1,3-butadiene does not absorb in the visible range. Nonetheless, because penta-1,2,3,4-tetraene does have an extended pi system it will have a smaller HOMO-LUMO gap than 2,2-dimethylpropane and will absorb light at longer ultra-violet wavelengths (lower energy) than 2,2-dimethylpropane.
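The qualitative link between gap size and absorbed wavelength can be made quantitative with $\lambda = hc/E$. A small sketch; the gap values below are illustrative round numbers, not measured data from the answer:

```python
# Convert a HOMO-LUMO gap to the wavelength of the photon it absorbs,
# via E = h*c/lambda. The example gap values are illustrative, not
# measured data for any specific molecule.
H = 6.626e-34        # Planck constant, J*s
C = 2.998e8          # speed of light, m/s
EV = 1.602e-19       # joules per electronvolt

def gap_to_wavelength_nm(gap_ev):
    """Wavelength (nm) of a photon matching an energy gap given in eV."""
    return H * C / (gap_ev * EV) * 1e9

# A large sigma-bond-like gap absorbs deep in the UV; a small gap from
# an extended conjugated pi system absorbs visible (blue-ish) light.
print(round(gap_to_wavelength_nm(7.0)))   # ultraviolet
print(round(gap_to_wavelength_nm(2.8)))   # visible range
```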
{ "domain": "chemistry.stackexchange", "id": 2464, "tags": "organic-chemistry, electrons, energy" }
Is there an upper limit on the coordination number of a coordination compound?
Question: From the Gold Book, coordination number is: In an inorganic coordination entity, the number of σ-bonds between ligands and the central atom. π-bonds are not considered in determining the coordination number. In high school, we mostly had complexes up to coordination number six (like $\ce{[FeF6]^3-}$). But I wondered if the coordination number could go even higher. From my search, I was able to reach this research paper (J. Am. Chem. Soc., 1979, 101 (2), pp 334–340 DOI: 10.1021/ja00496a010) where they mention lanthanide complexes of coordination number up to nine! So, is nine the upper limit? Or can the coordination number go higher? Also, does the upper limit exist only theoretically, or has the corresponding complex been synthesized? Answer: I don't see why not. It is a matter of having enough of an interaction between the center and the ligands, usually in the form of donor/acceptor orbital pairs, and enough space around the coordination center. These go hand-in-hand, as any atom that will have enough electrons to donate to ligands will also most likely have a larger atomic radius than a first-row transition metal like iron. For example, here is a thorium complex that has CN = 15 in its crystal structure and CN = 16 in an isolated gas-phase density functional calculation: Daly, Scott R.; Piccoli, Paula M. B.; Schultz, Arthur J.; Todorova, Tanya K.; Gagliardi, Laura; Girolami, Gregory S. Synthesis and Properties of a Fifteen-Coordinate Complex: The Thorium Aminodiboronate $\ce{[Th(H3BNMe2BH3)4]}$. Angew. Chem. Int. Ed. 2010, 49, 3379-3381, DOI: 10.1002/anie.200905797
{ "domain": "chemistry.stackexchange", "id": 10061, "tags": "coordination-compounds" }
On the definition of Error-Correcting Codes
Question: Let us start with the following well-known definition: Definition 1. Let $C\subseteq A^n$ be a code over $A$ and let $t\in \Bbb Z^+$ be a positive integer. We say that the code $C$ is $\boldsymbol t$-error correcting if nearest neighbour decoding is able to correct at most $t$ errors, assuming that if a tie occurs in the decoding process, a decoding error is reported. That is, if whenever $x \in C$ and $y\in A^n$ such that $\mathrm{d}(x,y)\leq t$, then $y$ is decoded to $x$ using nearest neighbour decoding. Now let us make an observation. Remark. Recall that when there is no unique nearest codeword to $x\in C$, then nearest neighbour decoding fails. So, a code $C\subseteq A^n$ is $t$-error correcting if and only if whenever $y\in A^n$ is a word within distance $t$ of a codeword $x\in C$ (that is, $\mathrm{d}(x,y)\leq t$), then $$\mathrm{d}(x,y) < \mathrm{d}(z,y), \quad \text{for all } x\neq z \in C.$$ The Remark gives us a more abstract way to state the definition of $t$-error correcting without mentioning nearest neighbour decoding. However, I stumbled on another statement of the definition (which I found on some lecture notes on the web): Definition 2. The code $C$ is $t$-error correcting if there do not exist $x,z \in C$ such that $x\neq z$ and $y\in A^n$ such that $$\mathrm d(x, y)\leq t, \quad \mathrm d (z, y)\leq t.$$ My question is why the last definition is equivalent to the first one (given with the form of the remark). Could you please give me a hand? 
UPDATE: My attempt: Definition 1 tells us that $$(\forall x\in C)(\forall z \in C)(\forall y\in A^n)(d(x,y)\leq t)(z\neq x \implies d(x,y)<d(z,y)).$$ Definition 2 tells us that $$(\forall x\in C)(\forall z \in C)(\forall y\in A^n)(d(x,y)\leq t)(d(z,y)\leq t \implies z=x).$$ Equivalently, $$(\forall x\in C)(\forall z \in C)(\forall y\in A^n)(d(x,y)\leq t)(z\neq x\implies d(z,y)>t ).$$ Equivalently, $$(\forall x\in C)(\forall z \in C)(\forall y\in A^n)(d(x,y)\leq t)(z\neq x\implies d(z,y)>t\geq d(x,y)).$$ So the Def2 $\implies$ Def 1. Is that correct? How about $\Longleftarrow$? Answer: Here is what the remark says: A code $C \subseteq A^n$ is $t$-error-correcting if for all $x \in C$ and $y \in A^n$ such that $d(x,y) \leq t$, all $z \in C$ other than $x$ satisfy $d(x,y) < d(z,y)$. When is a code not $t$-error-correcting according to this remark? A code $C \subseteq A^n$ is not $t$-error-correcting if there exist $x,z \in C$ and $y \in A^n$ such that $x \neq z$ and $d(z,y) \leq d(x,y) \leq t$. Given $x,y,z$, either $d(z,y) \leq d(x,y)$ or $d(x,y) \leq d(z,y)$, and so, since the other constraints on $x,z$ are symmetric in $x,z$, the above definition is equivalent to A code $C \subseteq A^n$ is not $t$-error-correcting if there exist $x,z \in C$ and $y \in A^n$ such that $x \neq z$ and $d(x,y),d(z,y) \leq t$. This is exactly Definition 2.
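The equivalence argued above can also be checked mechanically by brute force on small binary codes. A sanity-check sketch; the code length, binary alphabet, and sample sizes are arbitrary choices:

```python
from itertools import product
import random

# Brute-force check, on small random binary codes, that the two
# definitions of "t-error-correcting" discussed above agree.
def d(x, y):
    """Hamming distance."""
    return sum(a != b for a, b in zip(x, y))

def def1(C, n, t):
    # Remark form: any word within distance t of a codeword x is
    # strictly closer to x than to every other codeword.
    return all(d(x, y) < d(z, y)
               for x in C for y in product((0, 1), repeat=n)
               if d(x, y) <= t
               for z in C if z != x)

def def2(C, n, t):
    # No word lies within distance t of two distinct codewords.
    return not any(d(x, y) <= t and d(z, y) <= t
                   for x in C for z in C if x != z
                   for y in product((0, 1), repeat=n))

random.seed(1)
n = 5
words = list(product((0, 1), repeat=n))
for _ in range(200):
    C = random.sample(words, 3)
    for t in range(n + 1):
        assert def1(C, n, t) == def2(C, n, t)
print("definitions agree on all sampled codes")
```

For example, the repetition code $\{000, 111\}$ satisfies both definitions for $t = 1$ and fails both for $t = 2$.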
{ "domain": "cs.stackexchange", "id": 17771, "tags": "coding-theory, error-correcting-codes, algebra, definitions" }
Optimize vector rotation
Question: I have a trivial function that rotates 2d vectors, and a method in a class representing a polygon that rotates every point in the polygon around an origin. The code is fairly optimized as it is, but I was wondering if there is any faster way of doing it, since the function is called a HUGE amount of times and I need it to be as fast as it can possibly be. Here is the code for the rotation function (in a file called geo.py):

def rotate_vector(v, angle, anchor):
    """Rotate a vector `v` by the given angle, relative to the anchor point."""
    x, y = v
    x = x - anchor[0]
    y = y - anchor[1]
    # Here is a compiler optimization; inplace operators are slower than
    # non-inplace operators like above. This function gets used a lot, so
    # performance is critical.
    cos_theta = math.cos(angle)
    sin_theta = math.sin(angle)
    nx = x*cos_theta - y*sin_theta
    ny = x*sin_theta + y*cos_theta
    nx = nx + anchor[0]
    ny = ny + anchor[1]
    return [nx, ny]

And here is the code for the polygon object:

import geo

class ConvexFrame(object):
    """A basic convex polygon object."""

    def __init__(self, *coordinates, origin=None):
        self._origin = origin
        # The coordinates in this object are stored as offset values, that is,
        # coordinates that represent a certain displacement from the given origin.
        # We will see later that if the origin is None, then it is set to the
        # centroid of all the points.
        self._offsets = []
        if not self._origin:
            # Calculate the centroid of the points if no origin given.
            self._origin = geo.centroid(*coordinates)
        orx, ory = self._origin
        append_to_offsets = self._offsets.append
        for vertex in coordinates:
            # Calculate the offset values for the given coordinates
            x, y = vertex
            offx = x - orx
            offy = y - ory
            append_to_offsets([offx, offy])
        offsets = self._offsets
        left = geo.to_the_left
        # geo.to_the_left takes three vectors (v0, v1 and v2) and tests if
        # vector v2 lies to the left of the line between v0 and v1. The offset
        # values are input in counter-clockwise order, so all points v(i)
        # should lie to the left of the line v(i-2)v(i-1).
        n = len(offsets)
        for i in range(n):
            v0 = offsets[i-1]
            v1 = offsets[i]
            v2 = offsets[(i+1)%n]
            if not left(v0, v1, v2):
                raise ValueError("""All vertices of the polygon must be convex.""")

    def rotate(self, angle, anchor=(0, 0)):
        # Avg runtime for 4 vertices: 7.2e-06s
        orx, ory = self._origin
        x, y = anchor
        if x or y:
            # Default values of x and y (0, 0) indicate
            # for the method to use the frame origin as
            # the anchor. Since we are rotating the offset
            # values and not actually the coordinates, we
            # have to adjust the anchor relative to the origin.
            x = x - orx
            y = y - ory
        _rot = geo.rotate_vector
        self._offsets = [_rot(v, angle, (x, y)) for v in self._offsets]

If I can get this below 3e-06s for 4 vertices that would be phenomenally helpful.

UPDATE 1: Just found an optimization; in the list comprehension I say (x, y) every iteration, meaning I have to rebuild the tuple every single iteration. Removing that shaves the time down to between 7e-06 and 6.9e-06s for 4 vertices.

    def rotate2(self, angle, anchor=(0, 0)):
        # Avg runtime for 4 vertices: 7.0e-06s
        # Best time of 50 tests: 6.92e-06s
        orx, ory = self._origin
        x, y = anchor
        if x or y:
            # Default values of x and y (0, 0) indicate
            # for the method to use the frame origin as
            # the anchor.
            x = x - orx
            y = y - ory
            anchor = x, y
        _rot = geo.rotate_vector
        self._offsets = [_rot(v, angle, anchor) for v in self._offsets]

Answer: You will likely be rotating many vectors by the same angle. Therefore, it would be wasteful to compute \$\cos \theta\$ and \$\sin \theta\$ repeatedly.
The typical way to think of linear transformations is as matrix multiplication: $$ \left[ \begin{array}{c} x' \\ y' \end{array} \right] = \left[ \begin{array}{rr} \cos \theta & -\sin \theta \\ \sin \theta & \cos \theta \end{array} \right] \left[ \begin{array}{c} x \\ y \end{array} \right] $$ So, define a make_rotation_transformation(angle, origin) function that returns a closure that holds the transformation matrix and origin vector.

from math import cos, sin

def make_rotation_transformation(angle, origin=(0, 0)):
    cos_theta, sin_theta = cos(angle), sin(angle)
    x0, y0 = origin

    def xform(point):
        x, y = point[0] - x0, point[1] - y0
        return (x * cos_theta - y * sin_theta + x0,
                x * sin_theta + y * cos_theta + y0)

    return xform

def rotate(self, angle, anchor=(0, 0)):
    xform = make_rotation_transformation(angle, anchor)
    self._offsets = [xform(v) for v in self._offsets]
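To see the closure in use, here is a self-contained sketch applying the answer's `make_rotation_transformation` to a small polygon; the square and the 90-degree angle are illustrative choices:

```python
from math import cos, sin, pi

def make_rotation_transformation(angle, origin=(0, 0)):
    # Precompute cos/sin once; the closure reuses them for every point.
    cos_theta, sin_theta = cos(angle), sin(angle)
    x0, y0 = origin

    def xform(point):
        x, y = point[0] - x0, point[1] - y0
        return (x * cos_theta - y * sin_theta + x0,
                x * sin_theta + y * cos_theta + y0)

    return xform

# Rotate a unit square 90 degrees counter-clockwise about its centre.
xform = make_rotation_transformation(pi / 2, origin=(0.5, 0.5))
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
rotated = [xform(p) for p in square]
```

The trigonometric functions are evaluated once per rotation, not once per point, which is the whole performance win over the original `rotate_vector`.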
{ "domain": "codereview.stackexchange", "id": 7047, "tags": "python, optimization, matrix, computational-geometry" }
How to define (logically) the complement language?
Question: I found it a little bit difficult and confusing to define the complement language in specific cases. For example, take the next language: $$L = \left\{\langle M, w\rangle \;\middle|\; \begin{array}{l}M \text{ is a Non-Deterministic TM,}\\ \text{and it has an accepting run on }w\text{ of length }\leq |w|\end{array}\right\}\,.$$ When I've tried to find $L$-complement I did it like this: M isn't an NDTM OR it doesn't have an accepting run on $w$ of length $> |w|$. My way of thinking is to change each one of the quantifiers. Is that OK? Someone can write the logic behind that (I want to understand for other cases as well, this language isn't critical right now)? Thanks! Answer: This is just the application of de Morgan's laws: \begin{align*} \neg(A\land B) &\equiv (\neg A)\lor (\neg B)\\ \neg(A\lor B) &\equiv (\neg A)\land (\neg B)\,. \end{align*} These are fairly obvious, if you think about them for a moment: if something isn't both $A$ and $B$, then it must fail to be one or the other (or both); if something is neither $A$ nor $B$, then it is not $A$ and it is not $B$. In your case, $A$ is "it is a NTM" and $B$ is "it has an accepting run on $w$ of length at most $|w|$." Using the first version of the law, the complement is "it is not an NTM or it does not have an accepting run on $w$ of length at most $|w|$." That alone may be enough to answer the question, but you might also want to push the negation in the second half ("it does not have an accepting run on $w$ of length at most $|w|$"). In this case, we need the rules for negating quantifiers: \begin{align*} \neg\forall x\,C(x) &\equiv \exists x\,\neg C(x)\\ \neg\exists x\,C(x) &\equiv \forall x\,\neg C(x)\,. \end{align*} Again, take a moment to convince yourself that these are true: if it's not true that everything is $C$, it must be that something is not $C$; if there doesn't exist something that is $C$, then everything must be not $C$. 
In your case, writing $B$ more formally gives "There exists a run $r$ on $w$ such that $r$ accepts and $r$ has length at most $|w|$", so $C$ is the property "is accepting and has length at most $|w|$." Negating $B$ gives "Every run $r$ on $w$ is not both accepting and of length at most $|w|$" and applying de Morgan again gives "Every run on $w$ is rejecting or has length more than $|w|$ (or both)."
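These negation rules can be sanity-checked mechanically on a finite domain, where $\forall$ is `all` and $\exists$ is `any`. A small sketch with arbitrary example predicates:

```python
# Finite-domain sanity check of the rules used above:
#   not all(C) == any(not C)   and   not any(C) == all(not C),
# plus de Morgan on a conjunction. Domain and predicates are
# arbitrary illustrative choices.
domain = range(10)

predicates = [
    lambda x: x % 2 == 0,
    lambda x: x > 3,
    lambda x: x == 99,   # false everywhere
    lambda x: x >= 0,    # true everywhere
]

for C in predicates:
    assert (not all(C(x) for x in domain)) == any(not C(x) for x in domain)
    assert (not any(C(x) for x in domain)) == all(not C(x) for x in domain)

# De Morgan on the conjunction from the question:
for A in (True, False):
    for B in (True, False):
        assert (not (A and B)) == ((not A) or (not B))
```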
{ "domain": "cs.stackexchange", "id": 12267, "tags": "turing-machines, computability, reductions, logic" }
Where is the Physical Singularity under the Holographic Principle?
Question: From this article, it states that The holographic principle states that the entropy of ordinary mass (not just black holes) is also proportional to surface area and not volume; that volume itself is illusory and the universe is really a hologram which is isomorphic to the information "inscribed" on the surface of its boundary. So, using the holographic principle, the observable universe has a 2-sphere boundary. This boundary acts like an event horizon of a Schwarzschild black hole. The question: event horizons are coordinate singularities in GR that enshroud physical singularities. Since the edge of the observable universe is the event horizon, then where is the physical singularity "within" the holographic universe? Answer: One of the major advantages of a picture like that is that it eliminates the singularity entirely -- all of the dynamics can be described as degrees of freedom on the boundary, and you don't have to worry about quantum mechanics needing some sort of principle that saves you from physical singularities. You merely have to deal with smooth theories on smooth boundary manifolds. This question is strange to me. It's asking "where is the place for the pathological thing in a picture without a clear pathology?"
{ "domain": "physics.stackexchange", "id": 55293, "tags": "general-relativity, black-holes, string-theory, singularities, holographic-principle" }
Why do the definitions of non-deterministic Turing machines look strange?
Question: I read some definitions of the NDTM in several books. Something makes me confused. Some definitions say that the NDTM $M$ makes an arbitrary choice as to which of its transition functions to apply. So, if $x \in L(M)$, then $M$ can always guess a branch which leads to $q_{accept}$. And if $x \not\in L(M)$, then $M$ just chooses an arbitrary branch until it enters $q_{reject}$. It seems that $M$ knows the result after reading $x$ because the rule which tells $M$ how to guess depends on whether $x$ is in $L(M)$. It looks so strange! Other definitions say that $M$ executes all branches in parallel. That looks better. And it shows that if $x \in L(M)$, then $M$ halts the first time it enters $q_{accept}$. However, the running time of NDTM is defined as the maximum number of steps that $M$ uses on any branch of its computation on any input. I am not sure if the definitions of the running time of NDTM are equivalent. Answer: Do not think of a non-deterministic machine as a mechanism that actually operates in some fashion. Instead, think of it as a way of making a tree of possibilities: whenever the machine "guesses" or "chooses" a transition, think of that as a node which branches into several further computation trees, one for each transition. The tree will have many possible paths of execution. We can ask questions like: Is there at least one path that leads to accepting the input? Do all paths lead to accepting the input? Do more than half the paths lead to accepting the input? And so on. The usual notion of a non-deterministic machine uses the first point above, i.e., we usually study whether there is any choice of transitions that leads to accepting the input. This is different from trying to devise an actual machine that will make such choices. We just want to know whether the choices exist. Once you get used to the trees, you can start thinking of such a tree as describing a machine which actually runs.
Of course, you need to explain to yourself how the machine chooses a particular path in the tree. You can think of it "guessing the future" or "being lucky" or "running all possibilities in parallel universes and then picking the one that works". It doesn't really matter what story you make up, so long as it fits your intuition and the definition. We're not actually going to build a non-deterministic machine (except if someone can master parallel universes or make a machine that can predict the future).
{ "domain": "cs.stackexchange", "id": 9917, "tags": "turing-machines, nondeterminism" }
How to determine which of two metals will have the higher temperature after applying the same amount of thermal energy?
Question: Two metal samples, $\mathrm{X}$ and $\mathrm{Z}$, of the same mass and initially at $\pu{25 ^\circ C}$, are heated so that each metal receives the same amount of thermal energy, which metal will have the highest final temperature? Specific heat capacities: $c(\mathrm{X}) = \pu{0.35 J//g*K}$, $c(\mathrm{Z}) = \pu{0.895 J//g*K}$ Does the specific heat capacity matter in the final temperature or not? Answer: By definition, the heat capacity is the energy required for a body to be warmed by $\pu{1 K}$. For the specific heat capacity it's the same but normalized by the mass. The higher the heat capacity is, the more energy will be required to warm it. Metal $\mathrm{X}$, having the lower specific heat capacity, will reach the higher temperature if it receives the same amount of energy as metal $\mathrm{Z}$.
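A quick worked version of the answer, using $\Delta T = Q/(mc)$; the heat input $Q$ and mass $m$ below are illustrative assumptions, since the question gives neither:

```python
# Worked check of the answer above: dT = Q / (m * c).
# The heat input Q and mass m are illustrative assumptions.
def final_temp(q_joules, mass_g, c, t0=25.0):
    """Final temperature (deg C) after adding q_joules of heat."""
    return t0 + q_joules / (mass_g * c)

Q, m = 500.0, 10.0                    # 500 J into 10 g of each metal
t_x = final_temp(Q, m, 0.35)          # c(X) = 0.35 J/(g*K)
t_z = final_temp(Q, m, 0.895)         # c(Z) = 0.895 J/(g*K)
print(t_x, t_z)                       # X, with the lower c, ends up hotter
```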
{ "domain": "chemistry.stackexchange", "id": 2400, "tags": "thermodynamics, heat" }
Why do Repeat Measurements Result in a Reduced Error?
Question: I'm currently reading "Concepts in Thermal Physics", and in the chapter on independent variables it has the following example: If we have $n$ independent variables $X_i$, each with a mean $\left<X\right>$, and a variance $\sigma_X^2$, we can sum them to get the following: $$\begin{split} Y & = \sum^n_iX_i \\ \left<Y\right> & = \sum^n_i\left<X_i\right> = n\left<X\right> \\ \sigma_Y^2 & = n\sigma_X^2 \end{split}$$ I understand the derivation of all this fine, however the following is then stated: The results proved in this last example have some interesting applications. The first concerns experimental measurements. Imagine that a quantity $X$ is measured $n$ times, each time with an independent error, which we call $\sigma_X$. If you add up the results of the measurements to make $Y = \sum_iX_i$, then the rms error in $Y$ is only $\sqrt{n}$ times the rms error of a single $X$. Hence if you try and get a good estimate of $X$ by calculating $(\sum_iX_i)/n$, the error in this quantity is equal to $\sigma_X/ \sqrt{n}$. I'm not entirely sure what they mean here by the root mean square error. Is that just another way of saying the standard deviation? If it is, in what sense can the above example lead to the statement that follows? The only way I can personally see this making sense, is if they are modelling the error in a single measurement as the standard deviation of a probability distribution. This doesn't seem correct to me, is this actually what they are doing? Answer: Regarding your first question about rms error: Say the true value of $X$ is $\bar{X}$, and you measured $X_i$ (which on average should be $\bar{X}$). The measurement error would be: $X_i - \bar{X}$. The mean of the square of the errors would be $\langle (X_i - \bar{X})^2 \rangle $ which is exactly the variance. The root of the mean of the squares is the square-root of the variance, meaning the standard deviation. 
Second, after you have taken $n$ measurements you want to estimate $\bar{X}$, so you average your measurements and get $\langle X_i \rangle $. Of course, this cannot be equal precisely to $\bar{X}$ because all of these numbers are on a continuum. So how far off are you from the truth? The central limit theorem tells us that after taking enough measurements, no matter the distribution of $X_i$, your estimate will behave as a Gaussian with a mean of $\bar{X}$ and standard deviation of $\frac{\sigma}{\sqrt{n}}$, meaning the more you increase $n$, the narrower your Gaussian will be and the closer your estimate will be to the truth. The intuition behind this is as @Physics Enthusiast answered.
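This $\sigma/\sqrt{n}$ behaviour is easy to see numerically. A sketch; the uniform distribution and sample sizes are arbitrary illustrative choices:

```python
import numpy as np

# Simulate the claim above: the standard deviation of the mean of n
# independent measurements shrinks like sigma / sqrt(n). The uniform
# distribution and sample sizes are illustrative choices.
rng = np.random.default_rng(0)
sigma = np.sqrt(1 / 12)          # std of Uniform(0, 1)

for n in (1, 4, 16, 64):
    # 100_000 repeated "experiments", each averaging n measurements
    means = rng.random((100_000, n)).mean(axis=1)
    print(n, means.std(), sigma / np.sqrt(n))
```

The empirical spread of the averages tracks $\sigma/\sqrt{n}$ closely even though the underlying distribution is nothing like a Gaussian.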
{ "domain": "physics.stackexchange", "id": 82785, "tags": "statistical-mechanics, probability, measurements, error-analysis, statistics" }
What graph data structure works fastest with Dijkstra's algorithm?
Question: What data structure should I store my graph in to get the best performance from the Dijkstra algorithm? Object-pointer? Adjacency list? Something else? I want the lowest O(). Any other tips are appreciated too! Answer: Implementing Dijkstra's algorithm with a Fibonacci-heap gives $O(|E|+|V|\log |V|)$ time, and is the fastest implementation known. As for the representation of the graph - theoretically, Dijkstra may scan the entire graph, so an adjacency list should work best, since from every vertex the algorithm scans all its neighbors.
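As a practical note: the Fibonacci heap mostly matters for the asymptotic bound; in practice a binary heap over an adjacency list, as sketched below, runs in $O(|E|\log|V|)$ and is what most real implementations use. The example graph is illustrative:

```python
import heapq

def dijkstra(adj, source):
    """Shortest-path distances from source.
    adj: adjacency list {u: [(v, weight), ...]} -- each vertex's
    neighbors are scanned once when the vertex is settled."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale heap entry, skip
        for v, w in adj.get(u, ()):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

graph = {"a": [("b", 1), ("c", 4)],
         "b": [("c", 2), ("d", 6)],
         "c": [("d", 3)],
         "d": []}
print(dijkstra(graph, "a"))
```

Instead of a decrease-key operation (which `heapq` lacks), this version pushes duplicate entries and discards stale ones on pop, which keeps the same asymptotic bound.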
{ "domain": "cs.stackexchange", "id": 1088, "tags": "algorithms, data-structures" }
Prepare data for an LSTM
Question:
-I want to make a python program using the LSTM model to predict an output value that is 1 or 0.
-My data is stored in a .csv file of the form (example of a line):

Date        time   temperature  wind  value-output
10-02-2020  10:00  25           10    1

-I found several courses and many LSTM examples, but none match my classification problem; most examples are about translation.
-I am stuck on how to prepare my data to give to the LSTM model.
-I want to take into consideration my temperature and wind inputs, in addition to the time, to predict the output value. (I have already made a python program based on a simple ANN to predict my output value by following a tutorial), but for the LSTM I find it difficult.
Thanks in advance for your help.

Answer: You have to prepare your data as a numpy array with the following shape:

(Number of observations, Input length, Number of variables)

Assuming you are working with Keras, the input of the LSTM() layer is as above, but you don't need to report the number of observations: input_shape = (Input length, Number of variables). Input length is a hyperparameter of your choice. I pushed this Notebook on GitHub that contains a function to preprocess your dataset for RNNs:

def univariate_processing(variable, window):
    '''
    RNN preprocessing for single variables.
    Can be iterated for multidimensional datasets.
    '''
    import numpy as np
    # create empty 2D matrix from variable
    V = np.empty((len(variable)-window+1, window))
    # take each row/time window
    for i in range(V.shape[0]):
        V[i,:] = variable[i : i+window]
    V = V.astype(np.float32)  # set common data type
    return V

def RNN_regprep(df, y, len_input, len_pred): #, test_size):
    '''
    RNN preprocessing for multivariate regression.
    Builds multidimensional dataset by iterating univariate
    preprocessing steps. Requires univariate_processing() function.
    Args:
        df, y: X and y data in numpy.array() format
        len_input, len_pred: length of input and prediction sequences
    Returns:
        X, Y matrices
    '''
    import numpy as np
    # create 3D matrix for multivariate input
    X = np.empty((df.shape[0]-len_input+1, len_input, df.shape[1]))
    # Iterate univariate preprocessing on all variables - store them in X
    for i in range(df.shape[1]):
        X[ : , : , i ] = univariate_processing(df[:,i], len_input)
    # create 2D matrix of y sequences
    y = y.reshape((-1,))  # reshape to 1D if needed
    Y = univariate_processing(y, len_pred)
    ## Trim dataframes as explained
    X = X[ :-(len_pred + 1) , : , : ]
    Y = Y[len_input:-1 , :]
    # Set common datatype
    X = X.astype(np.float32)
    Y = Y.astype(np.float32)
    return X, Y

Let me know if that's what you were looking for.
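As a condensed, self-contained illustration of the same sliding-window idea (synthetic data and an arbitrary window length; this is a sketch, not the notebook code itself):

```python
import numpy as np

# Sliding-window preprocessing sketch for an LSTM, mirroring the answer
# above: synthetic data, illustrative window length.
def make_windows(data, targets, window):
    """data: (T, n_vars) array. Returns X of shape
    (T - window, window, n_vars), with y aligned to the time step
    immediately after each window."""
    X = np.stack([data[i : i + window] for i in range(len(data) - window)])
    y = targets[window:]
    return X.astype(np.float32), y.astype(np.float32)

T, n_vars, window = 100, 3, 12              # e.g. temperature, wind, hour
data = np.random.rand(T, n_vars)
targets = np.random.randint(0, 2, size=T)   # binary output value

X, y = make_windows(data, targets, window)
print(X.shape, y.shape)                     # (88, 12, 3) (88,)
```

A Keras `LSTM` layer would then take `input_shape=(window, n_vars)`, i.e. `(12, 3)` here, matching the "(Input length, Number of variables)" rule in the answer.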
{ "domain": "datascience.stackexchange", "id": 7247, "tags": "lstm, data" }
Are all Y-chromosomes the same?
Question: Since the Y-chromosome can only pass from male to male child, it would seem to pass intact. Thus, a boy's Y-chromosomes would, I guess, be the same as his father's. Going backwards, would not all men have identical Y-chromosomes for this reason, being somewhat like mitochondrial DNA? Answer: Actually, no. There are also recombination-prone regions of the Y chromosome that recombine and exchange material with X chromosomes, and these are called pseudoautosomal regions (PARs). Y chromosomes can be used similarly to mitochondrial DNA to build up profiles of ancestry, but the sequences used for this purpose lie outside PARs, in the non-recombining region. What about sequences outside PARs, do they show genetic variation? As pointed out in the comments, PARs comprise only 5% of the Y chromosome, but even in the non-recombining regions of the Y chromosome, studies in well-characterised populations have found that the mutation rate can approximate autosomal mutation rates in some regions of the chromosome, and exceed them in others. What processes, other than recombination, drive mutations? There are loads of different mechanisms by which mutations in general can be generated, including exposure to UV, free radicals, tobacco smoke, aristolochic acid, and perhaps most importantly spontaneous deamination of methylcytosines, which is an age-related mutational process, as well as proofreading errors inherent to DNA polymerases. Also, it is well known that in sperm, with age the mutation rate goes up throughout the genome. While UV is unlikely to be a prominent mutagen as was pointed out to me in the comments insofar as sperm are concerned, factors like smoking still play a role. An illuminating read about mutational processes and the mutational footprints they leave in genomes, as understood from studying cancer genomes, is here.
{ "domain": "biology.stackexchange", "id": 4507, "tags": "dna, sex-chromosome, dna-replication" }
Convolution of $f(2x)$ and $g(3x)$
Question: As I know, convolution is defined as $f(x)*g(x) = \int_{-\infty}^{+\infty}f(\tau)g(x-\tau)d\tau$, but what if we want to convolve $f(2x)$ and $g(3x)$? It should be like $f(2x)*g(3x) = \int_{-\infty}^{+\infty}f(2\tau)g(3x-\tau)d\tau$ or $f(2x)*g(3x) = \int_{-\infty}^{+\infty}f(2\tau)g(3x-3\tau)d\tau$ or anything else? Answer: In the second function, replace $x$ by $x - \tau$: writing $h(x) = g(3x)$, we get $h(x-\tau) = g(3(x-\tau)) = g(3x - 3\tau)$. So it is option b, i.e. $\int_{-\infty}^{+\infty}f(2\tau)g(3x-3\tau)d\tau$.
{ "domain": "dsp.stackexchange", "id": 4442, "tags": "convolution" }
Odd Docker roscore/roslaunch delays with Linux kernel 6.1
Question: Hello, I have an odd one here and am trying to figure out a possible root cause!

The Situation

I am running a custom Linux version made using Buildroot, with application ROS code containerized. I have noticed some odd behavior with both roscore and roslaunch delaying between log messages and ultimately taking minutes to execute. This behavior occurs only with ROS kinetic, lunar, and melodic versions of roscore and roslaunch on Linux kernel 6.1. ROS noetic behaves as expected. Note: the same configuration on Linux kernel 5.15 does not exhibit this behavior.

Test Summary

On Linux kernel 6.1, using the ros:kinetic/ros:lunar/ros:melodic Docker images from Docker Hub. Run docker run -it --network host --entrypoint bash ros:kinetic, then from within the container run source /opt/ros/${ROS_DISTRO}/setup.bash && roscore -v

The terminal displays:

... logging to /root/.ros/log/0dfbd9a8-c533-11ed-998a-78d0042ad370/roslaunch-oc-general-68.log
Checking log directory for disk usage. This may take awhile.
Press Ctrl-C to interrupt
Done checking log file disk usage. Usage is <1GB.

... loading XML file [/opt/ros/kinetic/etc/ros/roscore.xml]
... executing command param [rosversion roslaunch]
Added parameter [/rosversion]
... executing command param [rosversion -d]
Added parameter [/rosdistro]
Added core node of type [rosout/rosout] in namespace [/]
started roslaunch server http://localhost:40105/
ros_comm version 1.12.17

SUMMARY
========

PARAMETERS
 * /rosdistro: kinetic
 * /rosversion: 1.12.17

NODES

auto-starting new master

And it hangs on auto-starting new master for ~100 seconds. The CPU% from htop is 100%. After the initial ~100 seconds, roscore printed:

process[master]: started with pid [78]
ROS_MASTER_URI=http://localhost:11311/
setting /run_id to 0dfbd9a8-c533-11ed-998a-78d0042ad370

And hangs for another ~100 seconds on setting /run_id .... The CPU% from htop is also 100%.
Finally roscore finished with:

process[rosout-1]: started with pid [91]
started core service [/rosout]

Similar behavior exists when running roslaunch ... with the ~100 second wait occurring on the ROS_MASTER_URI=http://localhost:11311 output.

ROS Environment Variables

ROS_ROOT=/opt/ros/kinetic/share/ros
ROS_PACKAGE_PATH=/opt/ros/kinetic/share
ROS_MASTER_URI=http://localhost:11311
ROS_PYTHON_VERSION=2
ROS_VERSION=1
ROSLISP_PACKAGE_DIRECTORIES=
ROS_DISTRO=kinetic
ROS_ETC_DIR=/opt/ros/kinetic/etc/ros

Interestingly enough, if I use rosmon to launch I do not see the ~100 second wait. Could this point to a python issue shared by roscore and roslaunch, since rosmon is written in C++? Any advice would be greatly appreciated!

Update

Using pdb to run roscore in the ros:melodic container, I have narrowed it down to the following call in the roslaunch/nodeprocess.py file:

/opt/ros/melodic/lib/python2.7/dist-packages/roslaunch/nodeprocess.py(340)

Which is a call to subprocess.Popen (see https://github.com/ros/ros_comm/blob/melodic-devel/tools/roslaunch/src/roslaunch/nodeprocess.py#L340)

Stepping into the subprocess.Popen call, it seems to be here:

> /usr/lib/python2.7/subprocess.py(394)__init__()
-> errread, errwrite)

After executing that subprocess function the output from ps -aux is:

root 160 0.1 0.0 377148 63644 pts/0 Sl+ 01:35 0:00 python -m pdb roscore
root 174 99.2 0.0 377148 54332 ? Rs 01:38 0:53 python -m pdb roscore

Which eventually turns into:

root 174 96.3 0.0 435416 60136 ? Ssl 01:38 1:40 /usr/bin/python /opt/ros/melodic/bin/rosmaster --core -p 11311 -w 3 __log:=/root/.ros

Maybe unrelated, but I did notice that after the rosmaster subprocess is spawned, it gradually decreases its CPU% over time.

Update #2

I built the melodic ros_base variant from source with Python3 support, and do not see delays in roscore or roslaunch.
Just to double-check, I made sure Python3 was being used to run roscore:

root 199 0.5 0.0 348944 40940 pts/0 Sl+ 21:22 0:00 /usr/bin/python3 /opt/ros/melodic/bin/roscore -v
root 209 0.4 0.0 561516 37780 ? Ssl 21:22 0:00 /usr/bin/python3 /opt/ros/melodic/bin/rosmaster --core -p 11311 -w 3 __log:=...

Definitely seems like an issue with Python2 and Linux kernel 6.1...bummer.

Update #3

After some additional debugging, I determined that the issue occurs only when the close_fds argument of the Python2 subprocess.Popen is set to True:

close_file_descriptor = True
...
self.popen = subprocess.Popen(self.args, cwd=cwd, stdout=logfileout, stderr=logfileerr, env=full_env, close_fds=close_file_descriptor, preexec_fn=preexec_function)

This issue is not strictly related to ROS, as I tested using a base ubuntu image with a test Python2 script that called subprocess.Popen. After digging around the internet, I found that there have been issues with Python2's subprocess module causing deadlocks and race conditions. Google released a package called subprocess32 that is a direct replacement for subprocess; it addresses those issues and backports Python3's implementation of fork and exec using a C module (see https://stackoverflow.com/a/25213194). So I tested this by replacing the subprocess Python2.7 module with subprocess32 in the ros:kinetic-ros-base container:

# Install subprocess32, replace subprocess
RUN pip install --no-cache subprocess32
RUN cp /usr/local/lib/python2.7/dist-packages/subprocess32.py /usr/lib/python2.7/subprocess.py
RUN cp /usr/local/lib/python2.7/dist-packages/_posixsubprocess32.so /usr/lib/python2.7/lib-dynload/_posixsubprocess32.so

and ran it on my original Buildroot configuration with Kernel 6.1 and did not have any issues with the roscore delays. My best guess at why I saw this issue in my updated Buildroot configuration was due to some race condition or deadlock interaction between Python2.7 subprocess and the updated version of glibc.
In the new configuration glibc was at version 2.36-81 vs 2.34-109.

Update #4

After doing more investigation and running into OOM issues caused by every ROS node consuming ~8GB of memory (similar to the issues in https://answers.ros.org/question/336963/rosout-high-memory-usage/), I think I might have found the real culprit behind the underlying issue. It turns out newer versions of docker and containerd set LimitNOFILE=infinity in the corresponding systemd unit: https://github.com/containerd/containerd/pull/7566. This then evaluates to a ulimit within the container of:

:~# ulimit -Hn
1073741816
:~# ulimit -Sn
1073741816

Where the host evaluates to:

:~# ulimit -Hn
524288
:~# ulimit -Sn
1024

The Python2 subprocess.Popen call with close_fds=True most likely iterates through all file descriptors to find the fds owned by the Popen call to close (which would be roughly 1073741816), per https://github.com/containerd/containerd/pull/7566#issuecomment-1461140261:

This is problematic for some software. A common daemon practice is to close all inherited file descriptors (typically 1024 from the standard soft limit) - which in Docker looks like a stalled / hanging process (but is actually performing over a billion close() syscalls IIRC)

subprocess32 probably does a much better job at determining the fds to close and therefore resolves this issue. For the closing-inherited-FDs practice that daemons perform, there are more modern approaches (see https://stackoverflow.com/questions/899038/getting-the-highest-allocated-file-descriptor/918469#918469). The real solution is to limit the soft/hard nofiles that docker/containerd provides to its containers, and there has been some significant discussion on this in https://github.com/containerd/containerd/pull/7566.
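As a quick way to check this diagnosis on a given system, the following short Python 3 sketch (an illustrative addition, Unix-only, assuming a true binary on PATH) prints the inherited nofile limits and times a close_fds=True spawn:

```python
import resource
import subprocess
import time

# The nofile limits this process inherited; inside a container started
# with LimitNOFILE=infinity these can be in the billions.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print("nofile soft/hard:", soft, hard)

# Python 2 closed inherited fds by looping from 3 up to the soft limit;
# timing a close_fds=True spawn shows whether that cost is visible.
start = time.time()
subprocess.Popen(["true"], close_fds=True).wait()
print("Popen(close_fds=True) took %.3f s" % (time.time() - start))
```

Note that under Python 3 the spawn stays fast even with a huge limit, because its reimplementation reads /proc/self/fd on Linux instead of looping up to the soft limit; reproducing the actual hang requires Python 2 inside the affected container.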
Originally posted by danambrosio on ROS Answers with karma: 106 on 2023-03-17

Post score: 2

Original comments

Comment by gvdhoorn on 2023-03-18: If it hangs for multiple minutes, seeing what it's hanging on should not be too difficult with something like ptrace, or perhaps even pdb in case of Python scripts/packages. That might lead to results faster than speculation. Might be worth a try.

Comment by danambrosio on 2023-03-18: @gvdhoorn using pdb while running roscore in the ros:melodic container, I have narrowed it down to the following call in the roslaunch/nodeprocess.py file: /opt/ros/melodic/lib/python2.7/dist-packages/roslaunch/nodeprocess.py(340) Which is a call to subprocess.Popen (see here) Stepping into the subprocess.Popen call it seems to be here: > /usr/lib/python2.7/subprocess.py(394)__init__() -> errread, errwrite) After executing that subprocess function the output from ps -aux is: root 160 0.1 0.0 377148 63644 pts/0 Sl+ 01:35 0:00 python -m pdb roscore root 174 99.2 0.0 377148 54332 ? Rs 01:38 0:53 python -m pdb roscore Which eventually turns into: root 174 96.3 0.0 435416 60136 ? Ssl 01:38 1:40 /usr/bin/python /opt/ros/melodic/bin/rosmaster --core -p 11311 -w 3 __log:=/root/.ros

Comment by danambrosio on 2023-03-18: Maybe unrelated, but I did notice that after the rosmaster subprocess is spawned, it gradually decreases its CPU% over time.

As regular posts don't have max lengths, please also include context around the subprocess.Popen(..) call. Especially the call chain leading to it. roscore starts multiple processes, and it'd be interesting to know which is causing the issue (could be all of them, or a specific one).

Comment by gvdhoorn on 2023-03-19: Please add this to your initial post. Please also update all links to code to permalinks.
Comment by gvdhoorn on 2023-03-19: Seeing as there don't appear to be any really interesting changes to the file you mention (git diff melodic-devel...noetic-devel -- tools/roslaunch/src/roslaunch/nodeprocess.py), I'm wondering whether this could be due to some incompatibility between Python 2.7 and kernel 6.1. You mention Noetic works, so that would be Python 3. Melodic is Python 2. You could consider building a minimal Melodic with Python 3 support and starting roscore. There are a couple of Q&As here on ROS Answers which document how to do that. Seeing as you only need ros_comm that should not be too much work (ie: you won't be building anything which actually needs Python-bindings, which is where it gets more complex). ptrace might show you exactly what is happening at the point of the hang though -- on the OS level, below what pdb can show you. subprocess can hang for various reasons, and with the very recent kernel, it might be something at the OS-level.

Comment by Mike Scheutzow on 2023-03-19: Is it possible that another roscore process is already running on this machine? Only one port 11311 is available per machine.

Comment by danambrosio on 2023-03-19: @gvdhoorn, updated the original post with the above comments, sorry about that. Feel free to remove the comments.

Comment by danambrosio on 2023-03-19: @Mike Scheutzow there is only a single roscore being run. If there were multiple I would expect to see: RLException: roscore cannot run as another roscore/master is already running. Please kill other roscore/master processes before relaunching. The ROS_MASTER_URI is http://localhost:11311/ The traceback for the exception was written to the log file

Comment by danambrosio on 2023-03-19: @gvdhoorn, see the update in the original post, seems like this is an issue with Python2 and Linux kernel 6.1. Suggest we close this as the path forward would be to migrate to noetic.
Comment by gvdhoorn on 2023-03-20: There's another possible cause, and that could be that a Python 2 idiom (or pattern) doesn't work with kernel 6.1 any more. I'd say something like subprocess.Popen(..) would be especially "vulnerable" to that, as it deals with starting, monitoring and stopping processes as well as stdin, stdout, etc. Suggest we close this as path forward would be to migrate to noetic. This could indeed be an option. Or use a from-source Melodic-with-Python3. It's not too difficult to build those. There's probably a way to fix the problem you're seeing though. It would just require more debugging.

Comment by danambrosio on 2023-03-20: Still digging into this, I have started to convince myself this may not be an issue with Kernel 6.1 but a package or packages that come with the Buildroot LTS that provides 6.1.

Comment by danambrosio on 2023-03-25: @gvdhoorn let me know if you want to mark this closed or add an answer based on my recent update. Thank you for all your help!

Comment by gvdhoorn on 2023-03-25: I would suggest to post your last edit as the answer, as it seems like it is the answer. +1 for getting to the bottom of it. I'm sure this will save some users (quite) some time when they try to do something similar.

Comment by Mike Scheutzow on 2023-03-25: @danambrosio That was a difficult one! Well done.

Answer: Update #3 in the original post is a bandaid solution for the delays in roscore and roslaunch. Update #4 highlights the underlying issue with nofile ulimits within a docker container. To override the values docker passes to containers for nofile soft and hard limits, perform one of the following:

docker run ...
--ulimit nofile=1024:524288

If you want this for all containers consider adding:

"default-ulimits": {
  "nofile": {
    "Name": "nofile",
    "Hard": 524288,
    "Soft": 1024
  }
}

Originally posted by danambrosio with karma: 106 on 2023-03-25

This answer was ACCEPTED on the original site

Post score: 2

Original comments

Comment by ruffsl on 2023-04-07: Wow, what a deep rabbit hole! Congrats on finding the root cause. That containerd ticket you linked seems to have impacted a lot of projects, and was an interesting read. Thanks! https://github.com/containerd/containerd/pull/7566 If I understood correctly, your issue was an unfortunate syzygy of using an older Python2 Popen submodule, with a modern linux kernel, and the latest containerd runtime? Should I expect to see such symptoms with modern or future distros (Ubuntu >= 22.04 LTS) and ROS releases (ROS 2 >= Humble)?

Comment by danambrosio on 2023-04-10: @ruffsl the issue was really that the newer buildroot LTS (2023.02) brought in a docker/containerd change that increased the container nofile soft and hard limits to infinity (over 1 billion). The Python2 Popen call with close_fds=True seemed to loop through all of these file descriptors, causing it to hang, as well as an XmlRpc issue that caused large memory allocation in ROS1 post-kinetic. I tested this with Humble and buildroot (2023.02) and did not run into any issue. Also it seems as if the fix is going into future releases of Docker, containerd and Go.
{ "domain": "robotics.stackexchange", "id": 38320, "tags": "ros-melodic, ros-lunar, ros-kinetic, docker, roscore" }
Apply constant load to an electric motor - low tech way
Question: I want to know how much I can over-drive a small electric motor. Its speed at the rated voltage is too slow, so I want to find a higher voltage that won't blow it. It will only be used intermittently, but I figure if I run it for 100x the max expected duration with the expected load then it will probably survive repeated intermittent use. So my question is, safety concerns aside, how can I apply a relatively constant load? I can't think of a reliable way of using friction, such as attaching a disk and holding something against it, vaguely resembling a vehicle brake disk. I could attach a weight via a pulley, but it would quickly run out of length. I could use a second motor and load that electrically to increase the torque, which is IMO the best idea, but I'm wondering if there are any other mechanical solutions? I think I'm missing something really obvious.

Answer: Probably the easiest way to add a reasonably constant mechanical load to a motor is a fan. Because aerodynamic forces rise steeply with speed (drag torque grows roughly with the square of speed), this should be consistent enough for evaluation purposes, and it has the advantage that you don't have the problems of wear and adjustment associated with solid friction brakes. Here the obvious solution for a small motor is to scavenge the fan from a cheap PC cooler. If the fan has an enclosure you can even vary the load by the simple expedient of using duct tape to restrict the flow. For a small motor a fan in air should be fine; for higher-power applications a paddle wheel immersed in a fluid is more compact, albeit more complex to set up. If you want to be able to measure or vary the load then a friction band brake is a good solution, as you can measure the tension in the band to give a crude measure of torque, and even a simple setup should at least give you a consistent range even if you can't derive precise figures for power.
You can also get cheap handheld non-contact tachometers to measure rpm. If you want more detailed measurements of load then you could always connect the motor to a generator and measure the power generated. This can give fairly reliable direct measures of power output, and a simple optical or magnetic tachometer will give you rpm and thus torque. With the right instruments this setup also allows for sampling of variations over time. With some basic programming and electronics knowledge, building a data logger connected to a PC is realistic (especially with a legacy parallel port).
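If you go the fan route, the standard fan affinity laws (torque scaling with speed squared, absorbed power with speed cubed) give a rough way to predict the load at an over-driven speed from one measured operating point. A small sketch; the rated speed and power below are made-up numbers for illustration:

```python
# Fan affinity laws: torque ~ speed^2, power ~ speed^3.
# Hypothetical operating point measured at the rated voltage:
rated_rpm = 3000.0
rated_power_w = 2.0  # mechanical power absorbed by the fan at rated speed

def fan_power(rpm):
    """Estimated fan load at another speed, via the cube law."""
    return rated_power_w * (rpm / rated_rpm) ** 3

# Over-driving to 1.5x speed multiplies the fan load by 1.5^3 = 3.375.
print(fan_power(4500.0))  # 6.75 (watts)
```

This is only an estimate for the fan itself; the motor's own losses (copper, iron, friction) scale differently, which is exactly why a soak test under load is still worthwhile.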
{ "domain": "engineering.stackexchange", "id": 859, "tags": "friction" }
Weird assumption in a paper to prove equation
Question: Let $M_k$ and $M_{k+1}$ be two successive positions. Supposing the road is perfectly planar and horizontal, as the motion is locally circular, we have: $$\Delta = \rho\,\omega$$ where $\Delta$ is the length of the circular arc followed by $M$, $\omega$ the rotation angle, and $\rho$ the radius of curvature. My problem is that the paper's author assumed the distance between $M_k$ and $M_{k+1}$ to be equal to $\Delta$. His argument for doing so is the following: "From basic Euclidean geometry, we know that delta is approximately $|M_kM_{k+1}|$ up to the second order." Could anybody explain how we can assume they are similar? Extra explanation: The symbol $\omega$ is the rotation of the mobile frame and the heading angle is denoted by $\theta$. Paper: Data Fusion of Four ABS Sensors and GPS for an Enhanced Localization of Car-like Vehicles. Answer: The authors are making an approximation of the path length by using a straight line between the start and end of the circular arc. They do this to make the math easier, and because the difference doesn't matter since the approximation becomes exact in the limit where the path length goes to zero (as in going from a discrete step to a continuous path). The actual path length along the circular arc is given by $$\Delta = \rho\omega.$$ The straight-line path is given by the length of the chord: $$|M_kM_{k+1}| = 2\rho\sin\left(\frac{\omega}{2}\right).$$ For angles near zero, we can approximate the $\sin$ function with a Taylor polynomial: $$\sin x = x - \frac{1}{6}x^3 + \frac{1}{120}x^5 - \cdots$$ When the authors say "up to second order," they mean that the straight-line approximation is the result of using the second-order (maximum degree two) Taylor polynomial of the actual formula. In our case, the second-degree approximation is $$\sin x = x$$ since the quadratic term of the Taylor expansion of $\sin x$ is zero.
Plugging this into the approximation results in $$|M_kM_{k+1}| = 2\rho\sin\left(\frac{\omega}{2}\right) \approx 2\rho\frac{\omega}{2} = \rho\omega = \Delta.$$ So, for small distances or time increments (where $\omega \ll 1\,\textrm{rad}$), dividing the circular path into straight-line segments is an accurate approximation.
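The size of the error is easy to check numerically. A short Python sketch (the radius $\rho = 10$ is an arbitrary choice) comparing the chord length to the arc length as $\omega$ shrinks:

```python
import math

rho = 10.0  # radius of curvature (arbitrary units)

for omega in (0.5, 0.1, 0.01):  # rotation angle in radians
    arc = rho * omega                      # exact path length Delta
    chord = 2 * rho * math.sin(omega / 2)  # straight-line |M_k M_{k+1}|
    rel_err = abs(arc - chord) / arc
    print("omega=%g  relative error=%.2e" % (omega, rel_err))
```

The relative error falls off as $\omega^2$ (it is $\omega^2/24$ to leading order, from the cubic term of the sine series), consistent with the "up to second order" claim.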
{ "domain": "physics.stackexchange", "id": 61047, "tags": "vectors, geometry" }
Deciding whether a metric is a tree metric
Question: An n-point metric space is a tree metric if it isometrically embeds into the shortest path metric of a tree (with nonnegative edge weights). Tree metrics can be characterized by the 4-point property, i.e. a metric is a tree metric iff every 4-point subspace is a tree metric. In particular this implies that one can decide in polynomial time whether a given metric is a tree metric by examining all quadruples of points in the space. My question now is: what other (than the trivial) algorithms are there? Can one check in linear (in the number of points) time whether a metric is a tree metric? Answer: Bioinformatics people seem to know an $O(n^2)$-time algorithm (hence linear in the size of the $n \times n$ distance matrix, i.e. linear in the input) in the context of the reconstruction of phylogenetic trees based on distance matrices. Please look at Pages 27-34 of the slides available at http://www.cs.lth.se/home/Andrzej_Lingas/PhylogeneticTrees.pdf.
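The trivial quadruple test from the question is straightforward to implement as a sketch (the function name and the example metrics below are illustrative). The usual form of the four-point condition is that, among the three pairing sums d(w,x)+d(y,z), d(w,y)+d(x,z), d(w,z)+d(x,y), the two largest must be equal:

```python
from itertools import combinations

def is_tree_metric(points, d):
    """Brute-force O(n^4) check of the four-point condition.
    (With float distances, compare within a tolerance instead of !=.)"""
    for w, x, y, z in combinations(points, 4):
        sums = sorted([d(w, x) + d(y, z),
                       d(w, y) + d(x, z),
                       d(w, z) + d(x, y)])
        if sums[1] != sums[2]:  # the two largest sums must coincide
            return False
    return True

# Points on a line give a path metric, and a path is a tree.
print(is_tree_metric(range(5), lambda a, b: abs(a - b)))  # True

# The shortest-path metric of the 4-cycle with unit edges is not a tree metric.
c4 = {(0, 1): 1, (1, 2): 1, (2, 3): 1, (0, 3): 1, (0, 2): 2, (1, 3): 2}

def d_cycle(a, b):
    return c4[tuple(sorted((a, b)))]

print(is_tree_metric(range(4), d_cycle))  # False
```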
{ "domain": "cstheory.stackexchange", "id": 597, "tags": "ds.algorithms, reference-request, metrics" }
Are there Wolter telescope lenses for consumer cameras?
Question: Wolter telescopes are used for x-ray astronomy. However, I see no reason why they couldn't be used for visible light as well. How would a visible image look when taken through a Wolter telescope? Are there Wolter-type attachments for consumer cameras available? Answer: Reflective optics have strong aberrations when the image is off the optical axis, therefore they are ok for telescopes that have a very narrow field of view but not ok for regular cameras. You should be able to purchase a Schwarzschild objective (operating on a similar principle to the Wolter) ($$$) and mount it on a camera.
{ "domain": "physics.stackexchange", "id": 86753, "tags": "optics, astronomy, telescopes, camera, astrophotography" }
Can I add individual kWh measurements to get the total?
Question: I have a time series file showing kWh electricity use measurements from a building every 15 minutes. For example:

12:00 100 kWh
12:15 302 kWh
12:30 85 kWh
12:45 97 kWh

If I want to work out the building's total electricity use (in kWh) over a 24 h period, would it be appropriate to simply add up all of the individual measurements for that day? Answer: Yes. These are measurements of energy because energy equals power (measured in kilowatts) multiplied by time (measured in hours). The total energy used is simply the sum of the energies used in the individual 15-minute intervals because energy is a scalar.
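Concretely, taking the four example readings from the question (and assuming each value is the energy used during its 15-minute interval, not an instantaneous power), the total is just a sum:

```python
# Each reading is the energy (kWh) consumed during one 15-minute interval.
readings_kwh = [100, 302, 85, 97]

total_kwh = sum(readings_kwh)
print(total_kwh)  # 584 kWh for this hour

# A full day at 15-minute resolution has 24 * 4 = 96 readings;
# the daily total is computed the same way.
```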
{ "domain": "physics.stackexchange", "id": 65711, "tags": "electricity, statistics" }
Given a finite group, how to figure out which chiral algebra can realise this symmetry?
Question: The classification of the "minimal models" of chiral algebra gives us rational conformal field theories in two dimensions. For example, the classification of unitary representations of Virasoro algebra gives us minimal models, which have $Z_2$ global symmetry. Similarly, the minimal models of $W_3$ algebra all have $S_3$ symmetry; the leading member of this family corresponds to the critical 3-state Potts model. Suppose I construct a statistical physics lattice model in two dimensions which preserves a certain finite group symmetry; one natural question is whether this lattice model, in the thermodynamic limit, will go through a second-order phase transition as we change temperature. (It is not difficult to construct such lattice models: for example, one introduces a spin degree of freedom on the site of a square lattice, which transforms in a certain irrep of the finite group, and one can then use the invariant tensor of this finite group to introduce coupling among neighboring spins). To answer this question, it seems to me that the first question to ask is whether there is a 2d CFT whose fusion rule preserves such a finite group symmetry. This leads to the question in the title "Given a finite group, how to figure out which chiral algebra can realize this symmetry?" (Will coset construction give us some hint?) Answer: The global symmetries you cite are properties of specific models, not of chiral algebras. With a Virasoro algebra, you could have no global symmetry (Lee-Yang minimal model) or $S_Q$ ($Q$-state Potts model) or $O(N)$, etc. Even if your chiral algebra has a nontrivial group of automorphisms, for example $S_N$ for an algebra made of $N$ copies of Virasoro, this could in principle be broken in a specific model. If you start with the $2d$ Ising model on the lattice, you notice the $\mathbb{Z}_2$ symmetry, then you argue that the critical limit exists, and then you start looking for a CFT with Virasoro and $\mathbb{Z}_2$ symmetries.
The minimal model $M(4, 3)$ has the required symmetries, but it may not be what you want: if you are interested in nonlocal objects such as cluster connectivities, you need a bigger CFT.
{ "domain": "physics.stackexchange", "id": 92903, "tags": "quantum-field-theory, statistical-mechanics, group-theory, conformal-field-theory, representation-theory" }
Possibility to save output blastn table in memory using biopython
Question: Is it possible, using Biopython, to save the output table of blastn in memory rather than in a file on the hard drive, process it (using pandas, for example) and then delete it from memory? I mean, is there a way to redirect the output of the "out" argument of this function into memory, into a pandas DataFrame for example?

NcbiblastnCommandline(query="fasta1.fasta", subject="fasta2.fasta", out="output_table.csv", outfmt=6)

I always saved output tables to a file before, but now I need to align one set of small fasta files against another set of small fasta files, and I think the best way is to avoid writing many small tables to the hard drive, because after receiving one table I have to read and process it immediately, and after that I don't need it anymore.

Answer: You can send the output to stdout by specifying out='-' in the Biopython wrapper.

from Bio.Blast.Applications import NcbiblastnCommandline
import pandas as pd

cline = NcbiblastnCommandline(query='seq.fna', subject='seq2.fna', outfmt=6, out='-')
output = cline()[0].strip()

rows = [line.split() for line in output.splitlines()]
cols = ['qseqid', 'sseqid', 'pident', 'length', 'mismatch', 'gapopen',
        'qstart', 'qend', 'sstart', 'send', 'evalue', 'bitscore']
data_types = {'pident': float, 'length': int, 'mismatch': int, 'gapopen': int,
              'qstart': int, 'qend': int, 'sstart': int, 'send': int,
              'evalue': float, 'bitscore': float}
df = pd.DataFrame(rows, columns=cols).astype(data_types)
{ "domain": "bioinformatics.stackexchange", "id": 1323, "tags": "python, blast, pandas" }
Why are my bacterial smears disappearing?
Question: I'm trying to inspect simple stained bacterial smears. But my smear suddenly disappears after a successful inspection with the oil immersion lenses. The background can become too red (the color of the stain) and the oil residue coming from the oil immersion objective when it's wiped, is red colored. Red background Usual background Also when I try to wipe the oil from the slide, the smear is removed, even with a gentle wipe. I'm suspecting three possible causes, which may include dependent causes: The immersion oil I use makes the smear easy to wipe: I don't have a dropper that releases just a small drop, but I have used another oil with the same type of dropper that releases more than the needed quantity and it didn't wipe my smear. The back-and-forth movement that eliminates air bubbles: I hear the spring sound, is this correct? Smear fixation may not be adequate: could it be possible that the smear can be fixed in an incomplete way so that it resists being wiped from the staining procedure, but not from further manipulation? My specifications Immersion oil: Non-drying, non-hardening Cedarwood oil Stain: Carbol fuchsin (20%) Smear fixation: heat fixation by passing the slide three times through the flame Answer: Bacteriological smears are a one-and-done scenario. Very few of them are intended for repeat use. Basically, all you can expect is that you observe it and then discard the slide. Wiping the oil off will also remove the bacteria as they are not firmly fixed to the slide, despite the "heat fixation" name, as I will explain below. If you do want to reuse a slide I would recommend that you get a coverslip and cover your slides, sealing the edges with nail-polish or glue will make these more or less permanent. However, this is tricky because you may need to change your high power objective lens (assuming 100x objective) to cope with the thickness of the coverslip. To do this you will need to look for a longer focal-length on the objective lens. 
Some higher-quality lenses come with an adjuster for focal lengths. Heat fixation in bacteriological slides is about two things. Primarily, the heating allows drying out of the specimen, which adheres it to the slide enough that it can be further manipulated. For most stains, simply drying the slide without heating will work perfectly well for adherence. Secondarily, it is about killing the bacteria. This is the fixation part and the name is a hang-over from classical biology where chemicals such as formaldehyde or ethanol were used to "fix" the specimen in an unchanging state (fixed state as opposed to changing). It should be noted that even with passing the slide through the flame 3x, the slide rarely reaches above about 50 C, which isn't really enough to kill most bacterial species with certainty (food safety tells us at least 65 C for 10 min...), and is certainly not enough for tough bacteria like Mycobacterium species - it's the dehydration from the heating that does most of the killing. In addition, most stains have a fixative of some sort in them, usually ethanol, which will work against most of the common bacterial species (though also not Mycobacterium), again by dehydration of the bacterium, but also protein denaturation. With the colour coming off, I think you have two problems: First - staining, make sure that you wash the slide with decolourizer until there is no residual red from the Fuchsin. It's a delicate balance, so takes some experience to get it right. Also, make sure your slide is completely dry after your counterstain and wash step. Second - I think that cedarwood oil is the wrong choice for immersion oil here. The cedrol and cedrene components strike me as chemically similar to phenol, which is one of the major components of Carbol Fuchsin (Ziehl-Neelson) stain (PDF with components). This means it might well solubilize the stain. 
Cedar oil also hardens on the lenses and can easily dissolve the glues that hold the lens together, so be very careful using it with modern lenses.
{ "domain": "biology.stackexchange", "id": 12403, "tags": "microbiology, bacteriology, microscopy, morphology" }
What does a Hilbert space state vector represent in Koopman–von Neumann theory?
Question: I understand what a state vector is in quantum mechanics. I also understand that in KvN theory, both the quantum Hilbert space and the classical Hilbert space are the same (see the answer to this question Koopman Von Neumann state vs Quantum state). But what does a state vector represent in classical mechanics? A probability distribution on phase space? In quantum mechanics a state vector is a superposition of eigenstates of some observable and the Born rule tells us what the probability is of measuring a certain value when measuring this observable. But in classical mechanics, it doesn't really make sense to talk about superpositions of states. Answer: You've already mentioned the answer: A vector in KvN space is associated to a classical probability distribution on phase space. Since $[x,p] = 0$ in KvN mechanics, there is a (rigged) basis of vectors $\lvert x,p\rangle$ that are simultaneous eigenstates of $x$ and $p$. So every vector can be written as $\lvert \psi\rangle = \int f(x,p) \lvert x,p\rangle \mathrm{d}x\mathrm{d}p$, and the classical phase space probability distribution associated with it is $\lvert f(x,p)\rvert^2$ (after normalizing its integral to 1).
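As a small numerical sketch of that last statement (the Gaussian amplitude below is an arbitrary choice for illustration), normalizing $\langle\psi|\psi\rangle = 1$ on a discretized phase-space grid makes $\lvert f\rvert^2$ integrate to 1, as a classical probability density should:

```python
import numpy as np

# Phase-space grid for (x, p).
x = np.linspace(-5.0, 5.0, 201)
p = np.linspace(-5.0, 5.0, 201)
X, P = np.meshgrid(x, p)
dx, dp = x[1] - x[0], p[1] - p[0]

# Arbitrary amplitude f(x, p); a Gaussian is used here for illustration.
f = np.exp(-(X**2 + P**2) / 2.0)

# Normalize <psi|psi> = integral |f|^2 dx dp = 1.
f /= np.sqrt(np.sum(np.abs(f)**2) * dx * dp)

# |f|^2 is then a classical probability density on phase space.
density = np.abs(f)**2
print(np.sum(density) * dx * dp)  # 1.0 (by construction)
```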
{ "domain": "physics.stackexchange", "id": 73612, "tags": "quantum-mechanics, classical-mechanics, hilbert-space, born-rule" }
Is this animated gif an accurate representation of EM waves?
Question: From multiple sources, I learned that, in EM waves, electric field vectors are perpendicular to magnetic field vectors and both are perpendicular to the direction of propagation. On this gif, which claims to be a visual representation of EM waves, the electric and magnetic fields are perpendicular to each other, but the magnetic vectors are not always perpendicular to the direction of propagation, describing "circles". So, is it an accurate representation still? Am I missing something? Answer: These look like TE waves in a rectangular waveguide. The electric field, magnetic field and direction of propagation form a mutually orthogonal set (a TEM wave) in free-space propagation, but not necessarily in guided waves. In fact, one can show from Maxwell's equations that a single-conductor waveguide cannot support a single one of those TEM waves you are talking about. These structures support TE and TM waves, which have magnetic and electric fields along the propagation direction, respectively. The fields in that gif are the TE10 mode of a rectangular waveguide, which looks like this: (Image from Field and Wave Electromagnetics by D. K. Cheng)
{ "domain": "physics.stackexchange", "id": 44280, "tags": "electromagnetism, waves, electromagnetic-radiation" }
Example of a Multithreaded C program
Question: In answering password cracker in c with multithreading, I ended up writing a sample in C, which is not my forte. Is there anything that I missed which should have been included in a responsible sample (read: likely to be copied and pasted into production code)?

// compile with `gcc counter.c -o counter -lpthread -Wall -std=gnu99`
#include <pthread.h>
#include <stdio.h>
#include <stdbool.h>
#include <limits.h>

#define SEARCH_THREAD_COUNT (10)
#define SEARCH_VALUE (21325)
#define SEARCH_MAX (INT_MAX)

typedef struct {
    int start;
    int end;
    int search;
    bool* shouldStop;
    int* answer;
    pthread_mutex_t* answerMutex;
    pthread_cond_t* answerFound;
    pthread_t thisThread;
} find_worker_init;

void* find_value(void* vpInitInfo) {
    find_worker_init* pUnitOfWork = (find_worker_init*)vpInitInfo;
    int cValue = pUnitOfWork->start;
    while (!(*(pUnitOfWork->shouldStop))) {
        if (cValue == pUnitOfWork->search) {
            printf("found value\n");
            // found the search value
            pthread_mutex_lock(pUnitOfWork->answerMutex);
            if (!(*(pUnitOfWork->shouldStop))) {
                *(pUnitOfWork->shouldStop) = true;
                *(pUnitOfWork->answer) = cValue;
                pthread_cond_broadcast(pUnitOfWork->answerFound);
            }
            pthread_mutex_unlock(pUnitOfWork->answerMutex);
            return NULL;
        }
        cValue++;
        if (cValue == pUnitOfWork->end) {
            // we exhausted our search space, end.
            printf("exhausted\n");
            return NULL;
        }
    }
    // we were usurped by another thread
    printf("usurped\n");
    return NULL;
}

int main( int argc, const char* argv[] ) {
    pthread_mutex_t answerMutex = PTHREAD_MUTEX_INITIALIZER;
    pthread_cond_t answerFound = PTHREAD_COND_INITIALIZER;
    bool shouldStop = false;
    int answer = -1;

    // initialize thread jobs
    find_worker_init startInfo[SEARCH_THREAD_COUNT];
    int current_search_start = 0;
    int search_unit = SEARCH_MAX / SEARCH_THREAD_COUNT;
    for (int i = 0; i < SEARCH_THREAD_COUNT; i++) {
        startInfo[i].start = current_search_start;
        // set the search space for this thread to be either the standard
        // search space, or for the last thread, whatever is remaining
        // (this is to prevent integer-division from skipping a portion
        // at the end)
        int current_size = (i == SEARCH_THREAD_COUNT - 1) ? (SEARCH_MAX - current_search_start) : search_unit;
        startInfo[i].end = current_search_start + current_size;
        startInfo[i].search = SEARCH_VALUE;
        startInfo[i].shouldStop = &shouldStop;
        startInfo[i].answer = &answer;
        startInfo[i].answerMutex = &answerMutex;
        startInfo[i].answerFound = &answerFound;
    }

    // start threads
    for (int i = 0; i < SEARCH_THREAD_COUNT; i++) {
        if (pthread_create(&(startInfo[i].thisThread), NULL, find_value, &(startInfo[i]))) {
            fprintf(stderr, "Error creating thread\n");
            return 1;
        }
    }

    // wait for answer to be found
    pthread_mutex_lock(&answerMutex);
    while (!shouldStop) {
        pthread_cond_wait(&answerFound, &answerMutex);
    }
    printf("signaled\n");

    // join
    for (int i = 0; i < SEARCH_THREAD_COUNT; i++) {
        if (pthread_join(startInfo[i].thisThread, NULL)) {
            fprintf(stderr, "Error joining thread\n");
            return 1;
        }
    }

    // answer
    printf("answer: %d\n", answer);
    pthread_mutex_unlock(&answerMutex);
    return 0;
}

Answer: Other than the problems the other post addressed (and the extra parentheses, unusual indentation and camel_casing really scream "I'm not comfortable driving this language" to me, which is not a bad thing, but it really stands out), you have one large
problem with your code. Your reading of the value pointed to by shouldStop violates the pthreads standard, specifically this part: http://pubs.opengroup.org/onlinepubs/9699919799/basedefs/V1_chap04.html#tag_04_11

"Applications shall ensure that access to any memory location by more than one thread of control (threads or processes) is restricted such that no thread of control can read or modify a memory location while another thread of control may be modifying it."

This is not just to avoid race conditions, as many people seem to believe. If you want your pthreads code to be fully portable, you can't assume that it will be running on a cache-coherent architecture. That means that without calling one of the locking functions (or the other memory-synchronizing functions mentioned in there) you can't assume that you'll ever see the modified value of shouldStop.

This actually makes stopping worker threads really hard to get right. If you have a tight loop that is doing many cheap operations, the overhead of locking on every loop step would be too high. You'd basically end up contended on the lock and serialize the whole program, making the threads a waste of time. Even reader-writer locks (rwlocks) end up serializing things because of how the memory-coherency protocol works on most CPUs.

Your best bet to stop worker threads is to check that condition, with locking, after "enough" work units have been performed, for some hand-tuned value of "enough".

You could also use pthread_cancel, but it is very hard to get right as soon as you start doing something complex. As soon as you enable asynchronous delivery of cancellation, you have to disable it in the parts of the code that allocate resources or take locks (to avoid resource leaks), and your code ends up being an endless mess of calls to pthread_setcancelstate.

Another option is to use <stdatomic.h> from C11, which should work even though its interaction with pthreads isn't really standardized.
Or just ignore the problem like most of the world does. Because the people who run on architectures that aren't cache coherent are quite used to nothing working on their systems anyway.
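The "lock only after enough work units" pattern recommended above can be sketched language-agnostically. Below is a minimal Python illustration of the same idea (Python is used only for brevity; the names search_range and CHECK_INTERVAL are invented for this sketch and are not from the original C code):

```python
import threading

CHECK_INTERVAL = 4096  # "enough" work units between stop-flag checks

def search_range(start, end, target, stop, result, result_lock):
    """Scan [start, end); poll the shared stop flag only every CHECK_INTERVAL steps."""
    for i, value in enumerate(range(start, end)):
        if value == target:
            with result_lock:
                # double-check under the lock so only one thread records an answer
                if not stop.is_set():
                    result.append(value)
                    stop.set()
            return
        # amortize the synchronization cost instead of checking every iteration
        if i % CHECK_INTERVAL == 0 and stop.is_set():
            return

stop = threading.Event()
result, result_lock = [], threading.Lock()
threads = [
    threading.Thread(target=search_range,
                     args=(lo, lo + 50_000, 21_325, stop, result, result_lock))
    for lo in range(0, 200_000, 50_000)
]
for t in threads:
    t.start()
for t in threads:
    t.join()
answer = result[0] if result else -1
```

The same structure maps back onto the C version: the Event plays the role of the shouldStop flag plus its synchronization, and the interval check bounds how stale a worker's view of the flag can be.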
{ "domain": "codereview.stackexchange", "id": 9997, "tags": "c, pthreads" }
Bernoulli's equation + Torricelli's law: does the speed of the fluid change if we change the area of the hole but not the height?
Question: Let's say we have a Torricelli's Law apparatus, where, in the picture below, we are concerned about the velocity v coming out of the bottom-most spigot that is a height h below the top of the water. The law states that $V=\sqrt{2gh}$. Essentially, the speed of the efflux in a Torricelli apparatus is determined only by the height of the fluid above it. We also know, however, that in fluid dynamics, volume flow rate is constant, demonstrated quantitatively by the continuity equation $Av = \textrm{constant},$ or $$A_1v_1 = A_2v_2\iff v_1 = \frac{A_2}{A_1}\cdot v_2$$ We can interpret this as: v is inversely proportional to the area of the hole of the container it is flowing through. My question is now this: if we changed the area of the spigot (ever so slightly making it greater or less, but not so much as to deem the hole too big for Torricelli's Law to work) but kept the height h of the hole the same, would the velocity change (as the continuity equation would suggest), or stay the same (as Torricelli's Law would suggest)? Answer: The velocity would stay the same whether the area is slightly increased or decreased, provided the hole is at the same height; it is quantified by $V=\sqrt{2gh}$. At the same time, the continuity principle isn't violated. Where you've gone wrong is in what you are comparing. It should be as follows. Case 1: Area increased. The velocity stays the same, but the flow rate increases. The area at the exit point (A2) is increased and the velocity (V2) stays the same. The cross-sectional area of the tank (A1) is unchanged, but the velocity with which the water level moves down (V1) increases, emptying the tank quickly because the flow rate is high. So A1V1 = A2V2 holds, since if A2 increases then V1 increases. Case 2: Area decreased. The velocity stays the same, but the flow rate decreases. The area at the exit point (A2) is decreased and the velocity (V2) stays the same.
The cross-sectional area of the tank (A1) is unchanged, but the velocity with which the water level moves down (V1) decreases, emptying the tank comparatively slowly because the flow rate is low. So A1V1 = A2V2 holds, since if A2 decreases then V1 decreases too.
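A quick numerical check of this answer (a Python sketch; the values for g, h and the two hole areas are made up for illustration): the efflux speed from $V=\sqrt{2gh}$ does not involve the hole area at all, while the volume flow rate $Q=AV$ scales directly with it.

```python
import math

g, h = 9.81, 0.5  # illustrative values: m/s^2, metres of water above the spigot

def efflux_speed(height):
    # Torricelli's law: depends only on the head of water, not the hole area
    return math.sqrt(2 * g * height)

A_small, A_big = 1e-4, 2e-4            # hole areas in m^2; the second is doubled
v_small = efflux_speed(h)              # same h, so the speed is identical...
v_big = efflux_speed(h)
Q_small = A_small * v_small            # ...but the flow rate doubles with the area
Q_big = A_big * v_big
```

So the continuity equation is satisfied by the flow rate (and by V1 at the tank surface), not by a change in the efflux speed.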
{ "domain": "physics.stackexchange", "id": 34402, "tags": "fluid-dynamics, bernoulli-equation" }
Can we balance a redox reaction in basic medium just like in acidic medium and add OH- ions on both sides to cancel out H+ ions?
Question: My teacher told me that the method (which I mentioned above) is wrong, but I find many references saying that the same method is correct. Answer: There are of course two ways of balancing redox half-equations in basic solution: writing them directly with $\ce{OH-}$ ions, or starting in acidic conditions and then destroying the $\ce{H+}$ ions by adding $\ce{OH-}$ ions. Let's compare these two approaches with chromate ions being reduced to chromium(III). First method (acidic solution). The redox half-equation in acidic conditions is quickly obtained: $$\ce{CrO4^{2-} + 8 H+ + 3 e- -> Cr^{3+} + 4 H2O} \tag{1}$$ In basic solution, $\ce{8 OH-}$ have to be added on both sides to destroy the $\ce{8 H+}$ and get: $$\ce{CrO4^{2-} + 8 H2O + 3 e- -> Cr^{3+} + 4 H2O + 8 OH-} \tag{2}$$ which can be simplified thus: $$\ce{CrO4^{2-} + 4 H2O + 3 e- -> Cr^{3+} + 8 OH-} \tag{3}$$ Second method (basic solution). Balancing the redox half-equation from $\ce{CrO4^{2-}}$ to $\ce{Cr^{3+}}$ without $\ce{H+}$ is not so easy, because the $4$ oxygen atoms of the chromate ion are of course transformed into $\ce{4 OH-}$ ions, provided enough $\ce{H}$ atoms are available. So this requires enough $\ce{H2O}$ on the left-hand side to compensate for the $\ce{4 H}$ atoms included in these $\ce{4 OH-}$: $\ce{2 H2O}$. But these $\ce{2 H2O}$ molecules bring new oxygen atoms. It is not obvious to see that, in the end, $\ce{4 H2O}$ (and not $2$) have to be added on the left-hand side. The final half-equation is: $$\ce{CrO4^{2-} + 4 H2O + 3 e- -> Cr^{3+} + 8 OH-} \tag{4}$$ This is equal to $(3)$, but it is not so easy to obtain. Final remarks. 1. Whatever the method used, it may be useful to state that the $\ce{Cr^{3+}}$ ion does not exist in basic solution and forms a precipitate, so that the final equation should be written $$\ce{CrO4^{2-} + 4 H2O + 3 e- -> Cr(OH)3 + 5 OH-} \tag{5}$$ 2. The same reasoning could have been done starting from the ion $\ce{Cr2O7^{2-}}$ in acidic conditions.
With the same conclusion.
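Whichever route is taken, the result can be checked mechanically: both sides of a half-equation must carry the same atom counts and the same total charge (counting each electron as charge -1). A small Python sketch of that bookkeeping for half-equations (3) and (5) above (the species encoding is ad hoc for this example):

```python
from collections import Counter

def side(*species):
    """Sum atom counts and total charge over (atoms, charge, coefficient) tuples."""
    atoms, charge = Counter(), 0
    for a, q, n in species:
        for element, k in a.items():
            atoms[element] += n * k
        charge += n * q
    return atoms, charge

# (3): CrO4^2- + 4 H2O + 3 e-  ->  Cr^3+ + 8 OH-
left3 = side(({"Cr": 1, "O": 4}, -2, 1), ({"H": 2, "O": 1}, 0, 4), ({}, -1, 3))
right3 = side(({"Cr": 1}, +3, 1), ({"O": 1, "H": 1}, -1, 8))
balanced3 = left3 == right3

# (5): CrO4^2- + 4 H2O + 3 e-  ->  Cr(OH)3 + 5 OH-
left5 = side(({"Cr": 1, "O": 4}, -2, 1), ({"H": 2, "O": 1}, 0, 4), ({}, -1, 3))
right5 = side(({"Cr": 1, "O": 3, "H": 3}, 0, 1), ({"O": 1, "H": 1}, -1, 5))
balanced5 = left5 == right5
```

Both come out balanced: each side carries 1 Cr, 8 O, 8 H and a total charge of -5.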
{ "domain": "chemistry.stackexchange", "id": 15210, "tags": "redox" }
How does the Lorentz transformation $\Lambda^{\mu}{}_{\nu}$ transform?
Question: For example the four-velocity transforms as $$U^{a'}=\Lambda^{a'}{}_{\nu}U^{\nu},$$ the Faraday tensor as $$F^{a'b'}=\Lambda_{\,\,\mu}^{a'}\Lambda_{\,\,\nu}^{b'}F^{\mu\nu}$$ or in matrix notation: $$F'=\Lambda F\Lambda^{T},$$ where $\Lambda^{T}$ is the transpose of the matrix. But the Lorentz matrix $\Lambda^{\mu}{}_{\nu}$ is not a tensor. Does $\Lambda$ nevertheless transform like a second-rank tensor, in the same way as the Faraday tensor? Answer: It is a transform, not a tensor. Tensors describe a physical quantity at a selected point in spacetime and have to transform accordingly. But the Lorentz matrix doesn't describe any physical quantity in a single frame; it's just a change of variables between two coordinate frames. You can understand this by analogy with 3D rotations. Transforms are composed (multiplied) together: if you go from coordinates $a$ to $a'$ and then from $a'$ to $a''$, you multiply the transforms to get a direct transform from $a$ to $a''$.
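The distinction can be made concrete numerically (a numpy sketch; the boost speed beta = 0.6 and the pure-E-field choice for F are arbitrary illustrative values, with a common sign convention assumed for the field components). What characterizes a Lorentz transform is that it preserves the Minkowski metric, while a genuine tensor like F transforms with it; and transforms compose by multiplication, with relativistic velocity addition emerging automatically:

```python
import numpy as np

def boost_x(beta):
    """Lorentz boost along x in (t, x, y, z) coordinates, c = 1."""
    g = 1 / np.sqrt(1 - beta**2)
    L = np.eye(4)
    L[0, 0] = L[1, 1] = g
    L[0, 1] = L[1, 0] = -g * beta
    return L

eta = np.diag([1.0, -1.0, -1.0, -1.0])
L = boost_x(0.6)

# Lambda is a frame change: its defining property is Lambda^T eta Lambda = eta
preserves_metric = np.allclose(L.T @ eta @ L, eta)

# transforms compose by multiplication (the 3D-rotations analogy);
# two collinear boosts compose into the boost given by velocity addition
composed = boost_x(0.6) @ boost_x(0.3)
beta_added = (0.6 + 0.3) / (1 + 0.6 * 0.3)
compose_ok = np.allclose(composed, boost_x(beta_added))

# a genuine tensor such as F transforms *with* Lambda: F' = Lambda F Lambda^T
F = np.zeros((4, 4))
F[1, 0], F[0, 1] = 1.0, -1.0          # a pure E-field along x (illustrative)
F_prime = L @ F @ L.T
parallel_E_unchanged = np.isclose(F_prime[1, 0], F[1, 0])
```

The last check also reproduces a known physical fact: the electric-field component parallel to the boost is unchanged.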
{ "domain": "physics.stackexchange", "id": 17647, "tags": "special-relativity, tensor-calculus, covariance" }
Do certain quasi-particles really have negative mass?
Question: Do phonons for example really have negative mass or does it just seem like they have negative mass? Could one use the negative mass of certain quasi particles to meet the negative energy requirements of warp drives or wormholes? Answer: Quasiparticles do not exist physically per se, but are more of a mathematical trick to simplify our models. An excellent example is the "electron hole" quasiparticle, whose physical manifestation is simply the absence of an electron in a Fermi sea. The phonon is also not a physical particle, but is rather a phenomenon which is practical to describe as such. Such quasiparticles can indeed be considered as having negative mass, but only in the very specific condensed matter context in which they are defined. This has nothing to do with negative mass in the relativistic context you are talking about, and therefore no, it cannot be used to build a warp drive or a wormhole.
{ "domain": "physics.stackexchange", "id": 99050, "tags": "mass, phonons, wormholes, warp-drives, quasiparticles" }
Will the stars dim in the future because of the expansion of the universe?
Question: We know that the universe is expanding, and that means everything is spreading apart. So does that mean that in the future all the stars will dim and eventually disappear from our night sky because of the expansion? Just a curious thought that came to mind. Answer: Actually, not everything is moving apart. On a scale of millions of light years gravity dominates and there isn't metric expansion of space. This is why (for example) the Andromeda galaxy is moving towards the Milky Way. So nearby stars (only a few tens of light years away) are not affected by the expansion of spacetime at all. In the very distant future (countless trillions of years, i.e. long after the Sun has died) it is possible that the expansion of space accelerates to the point that objects on smaller and smaller scales move apart. This is sometimes called the Big Rip. It is strictly hypothetical, and observations tend to suggest it won't happen at all. So the short answer is "no": stars won't get dimmer due to the expansion of space, because locally, it's not expanding.
{ "domain": "astronomy.stackexchange", "id": 6610, "tags": "universe, night-sky, future" }
Given a string, is it possible to determine which hashing algorithm has produced it, if any?
Question: Given a string, is it possible to determine which hashing algorithm has produced it, if any? For example, the MD5 hash of "string" is b45cffe084dd3d20d928bee85e7b0f21. Is it possible to determine whether the above hash is: 1) Indeed a genuine hash in some hashing algorithm, as opposed to a string of characters that is not a hash produced by one of some set of algorithms 2) a definite hash of a specific hashing algorithm? Possible methods: 1) For hashes susceptible to rainbow-table attacks, it would be viable to search for the hash in such a rainbow table for various algorithms, to find a match; if a match is found, we know which algorithm produced it. Answer: You will first need to define what you mean by a hashing algorithm. For example, my favorite hashing algorithm is simple: check whether the input is "string", and if so, output "b45cffe084dd3d20d928bee85e7b0f21", otherwise output "error". In the simplest case, you have one algorithm $A$, and string $w$ and you are wondering, is there an input $x$ (and maybe a seed $s$) such that $A(x,s)=w$? You can try brute force, but if you have the source code, is there something more clever that you can do? If not, then your algorithm is a one-way function. Whether one-way functions exist is an open question. We know that if one-way functions exist, then $P\ne NP$, and therefore if $P=NP$, one-way functions do not exist, but that still leaves three possibilities.
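The rainbow-table/brute-force idea from the question reduces to enumerating candidate inputs under each candidate algorithm and looking for a match. A toy Python sketch with hashlib (the lowercase alphabet and the length cap are deliberately tiny choices; real search spaces are far too large for this to be feasible in general):

```python
import hashlib
import itertools
import string

# the question's example really is the MD5 of "string":
md5_of_string = hashlib.md5(b"string").hexdigest()

def brute_force(target_hex, algorithm="md5", max_len=3, alphabet=string.ascii_lowercase):
    """Enumerate short candidate strings under one algorithm; None if no match."""
    for n in range(1, max_len + 1):
        for letters in itertools.product(alphabet, repeat=n):
            candidate = "".join(letters)
            if hashlib.new(algorithm, candidate.encode()).hexdigest() == target_hex:
                return candidate
    return None

# succeeds only because the toy search space happens to contain the preimage
found = brute_force(hashlib.md5(b"cat").hexdigest())
```

A failed search tells you nothing definitive, which mirrors the answer's point: a match identifies a plausible algorithm-and-input pair, but absence of a match never proves the string isn't some algorithm's output.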
{ "domain": "cs.stackexchange", "id": 8006, "tags": "algorithms, cryptography, hash, hashing, encryption" }
How to represent "terminate episode" for Knapsack problem with a Pointer Network?
Question: I am currently implementing a Pointer Network to solve a simple Knapsack Problem. However, I am a bit puzzled over the correct (or common, or "best") way to give the agent the option to stop taking items (terminate the episode). Currently, I have done it in 2 ways: adding raw dummy features or adding encoded dummy features (the dummy features are all zeros). If the agent selects the dummy item, then the agent will stop taking items and the episode is terminated. I trained both methods for 500K episodes and evaluated their performance on a single predefined test case in each episode, after applying the gradient. I found that concatenating dummy features with the encoded features yielded a higher score earlier, but also scored 0 very often. On the other hand, adding the dummy features to the raw features learned to maximize the score very slowly. Therefore, my questions are: Does adding the raw dummy features make learning slower because of the additional encoding-layer learning? What is the most correct (or common, or arguably best) way to give the agent the option to terminate the episode (in this case, stop taking items)? Answer: I do not think there is one standard way to do this; it will depend too much on context. Ultimately you want the agent to output a stop action that is different from a continue action. That stop/continue choice could either be part of the existing action encoding, additional data in parallel with the action sequence, or an entirely separate action choice on time steps alternating with the main action choices. I do not know anything about pointer networks, so I am not sure what your action-coding options are. Your "dummy" object selection seems like a reasonable choice though; it is part of the existing action encoding that the neural network can already output. In that sense it seems a lot like an <END> token that a seq2seq model might output for e.g. translation or summarisation language tasks.
An alternative might be to add a separate head to the pointer network that outputs a stop classifier alongside each object choice. The combined action would be [selection, stop]. If you are feeding the previous choice back into the RNN as input (it is not clear to me, but sampling seq2seq networks do this), you have free choice as to whether to use the combined action (with the stop flag as a raw confidence in $[0,1]$) or to just continue with the previous selection as the only feedback. Finally, you could use a different NN for deciding whether to stop or continue and feed it the same sequence. Which is better? I cannot say, but your dummy object selection appears to work already to some degree, so I would stick with experimenting on that, using the simple all-zeroes token. The common 0 total rewards may be an issue with your RL agent's exploration, or maybe with a difficult reward metric. E.g. can the agent score less than zero if it overfills the knapsack? If so, not trying anything at all may sometimes look good to the agent, and it will need more training so it can properly predict when this is not the case.
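The dummy-item / <END>-token mechanism can be sketched as a greedy decode loop (a numpy sketch; the item values, the hand-written scoring function standing in for the trained network's logits, and the masking scheme are all invented for illustration):

```python
import numpy as np

values = np.array([3.0, 1.0, 4.0, 1.5])
weights = np.array([2.0, 1.0, 3.0, 2.0])
capacity = 5.0
STOP = len(values)  # index of the all-zeros dummy item, acting as an <END> token

def toy_scores(load):
    # stand-in for the trained pointer network's logits: value density per item,
    # minus infinity for items that no longer fit, zero for the stop token
    s = np.append(values / weights, 0.0)
    s[:-1][weights + load > capacity] = -np.inf
    return s

def greedy_rollout():
    taken, load = [], 0.0
    mask = np.zeros(len(values) + 1, dtype=bool)
    while True:
        scores = toy_scores(load)
        scores[mask] = -np.inf          # never re-select an already-taken item
        choice = int(np.argmax(scores))
        if choice == STOP:              # selecting the dummy item ends the episode
            return taken
        taken.append(choice)
        load += weights[choice]
        mask[choice] = True

selection = greedy_rollout()
```

Once no remaining item fits, only the stop token has a finite score, so the episode terminates through the same action space the network already uses, which is the appeal of the dummy-item approach.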
{ "domain": "ai.stackexchange", "id": 3015, "tags": "reinforcement-learning, ai-design, knapsack-problem" }
Can we pass multiple config files to a node in ROS2
Question: Currently, in all the examples I have seen, a YAML file is loaded inside the launch file and passed to the ROS2 node. I am curious, is there a possibility to pass more than one YAML file to a ROS2 node? Something like:

def generate_launch_description():
    ld = LaunchDescription()

    paramaters_one = os.path.join(
        get_package_share_directory('my_robot_bringup'),
        'config',
        'my_params1.yaml'
    )

    paramaters_two = os.path.join(
        get_package_share_directory('my_robot_bringup'),
        'config',
        'my_params2.yaml'
    )

    multiple_parameters_node = Node(
        package='my_robot_bringup',
        executable='multiple_parameters_check',
        name='multiple_parameters_check',
        parameters=[paramaters_one, paramaters_two]
    )

    ld.add_action(multiple_parameters_node)
    return ld

Originally posted by Raza Rizvi on ROS Answers with karma: 95 on 2022-01-24
Post score: 0

Answer: From @destogl https://discourse.ros.org/t/can-we-pass-multiple-config-files-to-a-node-in-ros2/23970/2?u=tfoote You can find an example of how we are doing it in the UR Driver repository. For loading files, please check the few lines before.

Originally posted by tfoote with karma: 58457 on 2022-01-24
This answer was ACCEPTED on the original site
Post score: 1

Original comments
Comment by Raza Rizvi on 2022-01-25: This should work in my case, thanks a lot.
Comment by tfoote on 2022-01-25: Please use the checkmark at the left of the answer to accept it so others will know that your question is resolved.
{ "domain": "robotics.stackexchange", "id": 38771, "tags": "ros, ros2, roslaunch" }
Are scattering experiments probabilistic in quantum mechanics?
Question: Suppose we have an electron that will scatter off an atom. When the electron is far away from the atom, long before the scattering, the system is represented by the state $\left|\psi_\text{in}\right\rangle$. After the scattering the system will be in the state $\left|\psi_\text{out}\right\rangle$. Is this state $\left|\psi_\text{out}\right\rangle$ obtained probabilistically from the state $\left|\psi_\text{in}\right\rangle$, or deterministically from $\left|\psi_\text{in}\right\rangle$ in the sense $$\left|\psi_\text{out}\right\rangle =U(T)\left|\psi_\text{in}\right\rangle$$ where $U(t)$ is the evolution operator? Answer: The pre-scattering state vector $|\psi(0)\rangle\equiv |\psi_{in}\rangle$ evolves into the post-scattering state vector $|\psi(T)\rangle \equiv |\psi_{out}\rangle$ deterministically, via the unitary propagator $U(T)$. The outcomes of measurements performed on the post-scattering state, however, are non-deterministic and can only be understood probabilistically. There are interpretations of quantum mechanics (e.g. many worlds) in which all evolution is deterministic, and the final state (which encompasses both the electron and the measurement apparatus) is entangled. However, from the perspective of a physicist standing in a laboratory wondering where the next electron will hit the detector, the measurement process is effectively non-deterministic. The precise nature of this apparently non-deterministic evolution is the subject of the measurement problem, and remains an important open question.
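The deterministic-evolution versus probabilistic-measurement split can be illustrated with a toy finite-dimensional system (a numpy sketch; the random Hermitian "Hamiltonian" and the time T are arbitrary stand-ins for a real scattering problem):

```python
import numpy as np

rng = np.random.default_rng(0)

# toy 3-level system: any Hermitian H yields a unitary propagator U(T) = exp(-iHT)
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
H = (A + A.conj().T) / 2
w, V = np.linalg.eigh(H)
T = 1.7
U = V @ np.diag(np.exp(-1j * w * T)) @ V.conj().T

psi_in = np.array([1.0, 0.0, 0.0], dtype=complex)
psi_out = U @ psi_in                   # deterministic: same input, same output

is_unitary = np.allclose(U.conj().T @ U, np.eye(3))
norm_kept = np.isclose(np.vdot(psi_out, psi_out).real, 1.0)

# probability enters only at measurement, via the Born rule
probs = np.abs(psi_out) ** 2
```

Running this twice gives the identical psi_out, while the Born-rule probabilities describe the statistics of measurement outcomes on that fixed state.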
{ "domain": "physics.stackexchange", "id": 82609, "tags": "quantum-mechanics, scattering, probability, time-evolution, determinism" }
For loop step including last value
Question: I've got the following code. I want to loop through my dataset (which is a matrix) in steps of, let's say, 50000 and perform biglm on it.

lengthdata <- 100340
stepsize <- 50000

for (i in seq(1, lengthdata, stepsize)){
  if(i+stepsize > lengthdata){
    ## call function (i, lengthdata)
    print(paste(i, " - ", lengthdata))
  } else {
    ## call function (i, i+stepsize)
    print(paste(i, " - ", i + stepsize))
  }
}

This is my output:

[1] "1 - 50001"
[1] "50001 - 100001"
[1] "100001 - 100340"

Is there a more effective or cleaner way of coding this? I'm quite inexperienced with R!

Answer: You can save some typing if you declare a variable for where your range ends:

for (start.i in seq(1, lengthdata, stepsize)){
  end.i <- if (start.i + stepsize > lengthdata) lengthdata else start.i + stepsize
  ## call function (start.i, end.i)
  print(paste(start.i, " - ", end.i))
}

Another improvement would be to compute end.i using the min function (less typing!):

end.i <- min(start.i + stepsize, lengthdata)

As you become an advanced user, you might like working with vectors directly:

start.i <- seq(1, lengthdata-1, stepsize)
end.i <- pmin(start.i + stepsize, lengthdata)

and call your function(s) (e.g. paste) on all of them using Map (or mapply):

output <- Map(paste, start.i, " - ", end.i)
{ "domain": "codereview.stackexchange", "id": 16193, "tags": "performance, r" }
Reactivity of Alkanes
Question: Does the tetrahedral structure of the alkanes contribute to their lower reactivity? I thought that because a tetrahedral structure suggests ${sp^3}$ hybridization, it should contribute to its lower reactivity. Is this right? Answer: Does the tetrahedral structure of the alkanes contribute to their lower reactivity? Hybridization is at the center of the answer. Due to poor P-P overlap, it only takes something on the order of 60 kcal/mol to break a pi bond and produce cis-trans isomerization about a double bond involving ${sp^2}$ hybridized carbons, whereas it takes something like 90 kcal/mol to break a carbon-carbon single bond involving ${sp^3}$ hybridized carbon atoms with highly directional orbitals that result in strong bonds. This poorer overlap with the pi bond causes ethene to be higher in energy than ethane. Alternatively one can view ethene as a two-membered ring (ref. 1), no pi bond, just 2 sigma bonds between the 2 carbons forming a two-membered ring. It is easy to imagine that such a ring system would contain a significant amount of strain (ref. 2). Either way (and by the way these two views of ethene are equivalent and complementary) - using a poorer overlap argument or a higher strain energy argument, we see why alkenes (and even more so alkynes) would generally be more reactive than alkanes. From this argument we would expect that if we were to somehow increase the strain in an alkane, then we should increase its reactivity. A good example of such behavior can be found in cyclopropane. Placing 3 "tetrahedral" carbon atoms into a ring dramatically increases the strain energy in the system (it changes the carbon hybridization too, see ref. 3) and the reactivity increases dramatically. Indeed, cyclopropane adds bromine much like an alkene (ref. 3) Since someone above commented that ${sp^3}$ $\ce{C-H}$ bonds are "very stable and typically do not want to react". 
Let me just point out that $\ce{C-H}$ bond energies vary as follows \begin{array}{|c|c|c|c|} \hline \ce{C-H} & \text{Bond Strength} \\ \text{bond type} & \text{(kcal/mol)} \\ \hline \ce{sp^3 C-H} & 101 \\ \hline \ce{sp^2 C-H} & 111 \\ \hline \ce{sp C-H} & 133 \\ \hline \end{array} These bond dissociation energies (ref. 4) don't fit the observed reactivity pattern. They indicate that an ${sp^3}$ bond is actually easier to break than an ${sp^2}$ or ${sp}$ $\ce{C-H}$ bond
{ "domain": "chemistry.stackexchange", "id": 7067, "tags": "organic-chemistry, reactivity" }
OO design of Reverse Polish Notation Calculator
Question: This solution focuses on the design of classes, not the actual push and pop stack part. I will include that part of the code as well. The current code has 2 operators (plus and minus). If we add another subclass under Token, is there a way that we don't need to check in processInput() what each operator is and perform the right calculation? I didn't know how to answer that question. I looked into Java Reflection, but still don't know how reflection can help me in this case. Can anyone shed some light on how I can make this design better? I want to allow people to add new operators, multiply and divide, and maybe make their own definition for special operators. This application will take input string like "1 2 + 1 -" and output 2 because "1 2 + 1 -" is the reverse Polish notation for (1+2)-1 = 2. import java.util.*; public class RPNCalculator { public static Stack<Integer> stack; //assume this string has integers and legal operators +,- and deliminated by space // an example would be "12 2 + 1 -" private static String input; static String operators ="+-"; public RPNCalculator(String userInput){ input = userInput; stack = new Stack<Integer>(); } public void processInput(){ StringTokenizer st = new StringTokenizer(input, " "); while (st.hasMoreTokens()){ String str = st.nextToken(); if(!operators.contains(str)){ stack.push(Integer.parseInt(str)); } else{ if(str.equals("+")){ Plus obj = new Plus(); obj.calc(); } else if(str.equals("-")){ Minus obj2 = new Minus(); obj2.calc(); } } } System.out.println(stack.pop()); } } public abstract class Token { abstract void calc(); } public class Minus extends Token { @Override void calc() { int a = RPNCalculator.stack.pop(); int b = RPNCalculator.stack.pop(); RPNCalculator.stack.push(b-a); } } public class Plus extends Token { @Override void calc() { int a = RPNCalculator.stack.pop(); int b = RPNCalculator.stack.pop(); RPNCalculator.stack.push(a+b); } } public class RPNDriver { public static void main(String[] args) { 
System.out.println("starting calculator..."); RPNCalculator rpn = new RPNCalculator("1 2 +"); rpn.processInput(); } } Answer: You can add more flexibility by storing a mapping from a String to a Token in a Map. After populating the map, you'll just need to look up for the token in it instead of writing a a long sequence of if-else blocks. That is, you can define and populate it like this: Map<String, Token> operators = new HashMap<>(); operators.add("+", new Plus()); // other operators go here and then use it like this: operators.get(str).calc() This way, you just need to define a new subclass of the Token and add its instance to the operators map to define a new operator. You can make the code even more flexible by passing this map as a parameter to the processInput method to decouple reading the input from the computations (this way, a client will be able to define his own subclass of the Token and pass it to this method). I'd also rename the Token. Operator or Operation seems more appropriate to me (because a number is also a token, so this name is more precise). There's also no point in having static members in the RPNCalculator class. It makes it non-reusable and might create a bunch of issues in a multithreaded environment. I'd suggest to make all members non-static and encapsulate them properly (and pass a reference to an instance of this class to the constructor of the concrete Token subclasses). You can also decouple the operators from the calculator completely by changing the signature of the calc method to public int calc(int a, int b) (it might be an issue for non-binary operators, though). You can improve your code by handling the errors more carefully. For instance, if the number of tokens on the stack is more than one in the end of the evaluation, your code will return some value, but it should be an error ("1 2" is not a valid expression, is it?). You can check such cases and throw an appropriate exception.
{ "domain": "codereview.stackexchange", "id": 25745, "tags": "java, object-oriented, math-expression-eval" }
Inverse $\mathcal Z$-transform problem
Question: $B(z)+B(-z) = 2c$, explain the structure of $b[n]$ and find the constraint of its length given that $c$ cannot be $0$. This is a homework problem. "Explain the structure" means that $b[n]$ is zero for certain values of $n$, and has a certain shape. I'm trying to take the inverse $\mathcal Z$-transform of $B(z)$ and $B(-z)$, but I'm not sure how the inverse $\mathcal Z$-transform of $B(-z)$ is related to $b[n]$, so I'm stuck...can anyone give me some advice? Answer: Just use the definition of the $\mathcal{Z}$-transform: $$B(z)=\sum_{n=-\infty}^{\infty}b[n]z^{-n}\tag{1}$$ from which it follows that $$B(-z)=\sum_{n=-\infty}^{\infty}b[n](-z)^{-n}=\sum_{n=-\infty}^{\infty}b[n](-1)^nz^{-n}\tag{2}$$ From $(2)$, the sequence that corresponds to $B(-z)$ is $b[n](-1)^n$. Now taking the inverse $\mathcal{Z}$-transform of $$B(z)+B(-z)=2c\tag{3}$$ gives $$b[n]+b[n](-1)^n=2c\delta[n]\tag{4}$$ For odd $n$, the left side of $(4)$ equals zero. For even $n$ it equals $2b[n]$. Consequently, equation $(4)$ can only be satisfied if $b[n]=0$ for even $n$ except for $n=0$, where we require $b[0]=c$: $$b[n]=\begin{cases}c,&n=0\\ 0,&n\text{ even}\end{cases}$$
{ "domain": "dsp.stackexchange", "id": 4013, "tags": "z-transform, homework" }
Best practices in regards to pass and deserialize data, when calling an API endpoint from a MVC-project?
Question: In my application, I only show users their own data. For that, I need to know in the backend, which user is requesting data. I use the username for that and therefore, I need to send the username as part of the GET request. The method in question, located in MVC-controller: public async Task<IActionResult> Index() { ////Here, I pass username. ApiClient is a HttpClient and has a configured BaseAddress. var response = await ApiHelper.ApiClient.GetAsync("username?username=" + User.Identity.Name); var result = await response.Content.ReadAsStringAsync(); var options = new JsonSerializerOptions { PropertyNameCaseInsensitive = true }; List<MatchViewModel> match = JsonSerializer.Deserialize<List<MatchViewModel>>(result, options); return View(match); } Points of interest for feedback: Is this a good/safe way to communicate with endpoint? Couldn't a hostile party just change the username parameter by calling the API directly, and get another user's data? Does it make the method for confusing, that I instantiate a variable of JsonSerializerOptions? Whatever feedback, advice or critique you might have, I am interested. Thanks in advance. Answer: Few tips: respect IDisposable objects, apply using where necessary. if you encapsulating HTTP API logic into a helper class, do it completely exposing HttpClient outside isn't a good idea, what if you'll decide to change HttpClient for something else, either HttpClientFactory or something. JsonSerializerOptions can be instantiated once and reused. Property getter/setter might help but why not expose the method that does exactly what needed? 
It would make the usage simpler like: public async Task<IActionResult> Index() { var query = new Dictionary<string, string> { ["username"] = User.Identity.Name }; List<MatchViewModel> match = await ApiHelper.GetJsonAsync<List<MatchViewModel>>("username", query); return View(match); } The example public class ApiHelper { private static readonly HttpClient _client = new HttpClient(); private static readonly JsonSerializerOptions _options = new JsonSerializerOptions { PropertyNameCaseInsensitive = true }; public static async Task<T> GetJsonAsync<T>(string method, IEnumerable<KeyValuePair<string, string>> query) { string queryString = new FormUrlEncodedContent(query).ReadAsStringAsync().Result; using var response = await _client.GetAsync($"{method}?{queryString}", HttpCompletionOption.ResponseHeadersRead).ConfigureAwait(false); using var stream = await response.Content.ReadAsStreamAsync().ConfigureAwait(false); return await JsonSerializer.DeserializeAsync<T>(stream, _options).ConfigureAwait(false); } } The method can be simplified using GetFromJsonAsync extension. public static async Task<T> GetJsonAsync<T>(string method, IEnumerable<KeyValuePair<string, string>> query) { string queryString = new FormUrlEncodedContent(query).ReadAsStringAsync().Result; return await _client.GetFromJsonAsync<T>($"{method}?{queryString}", _options).ConfigureAwait(false); } Finally, as here we have only one and the last await, then async state machine can be optimized out. There's no sense to launch the State Machine that has only one state, right? But be careful with this kind of optimization. Not sure - don't use. For example, it can break the method when the await is inside of the using or try-catch clause. 
public static Task<T> GetJsonAsync<T>(string method, IEnumerable<KeyValuePair<string, string>> query) { string queryString = new FormUrlEncodedContent(query).ReadAsStringAsync().Result; return _client.GetFromJsonAsync<T>($"{method}?{queryString}", _options); } Converting key-value pairs to URL-encoded query looks a bit tricky but new FormUrlEncodedContent(query).ReadAsStringAsync().Result; does exactly that thing. I know that ReadAsStringAsync is async method but I know that content is already inside as I put the source directly to the constructor. Then the ReadAsStringAsync is completed synchronously here, then to save some resources I call .Result for the already completed Task, which isn't a bad practice. Only calling .Result for a not completed Task may lead to a problem with locked/deadlocked threads. Never use .Result or .Wait() or .GetAwaiter().GetResult() when you're not sure if the Task was already completed, use await instead (or always use await). About ConfigureAwait(false) you may read here and here.
{ "domain": "codereview.stackexchange", "id": 41680, "tags": "c#, mvc, asp.net-core-webapi, .net-5" }
What is the probability of 3 possible products, between two chemical species?
Question: I am not a chemist. I hope I will be specific enough. Suppose there are two chemical species $\ce{A}$, $\ce{B}$ with the following properties: at temperature $t < T_r$, no reaction occurs between $\ce{A}$ and $\ce{B}$ (in any combination). At $t\ge T_r$, $\ce{A}$ reacts with itself to create $\ce{A_2}$, $\ce{B}$ reacts with itself to create $\ce{B_2}$, and $\ce{A}$ and $\ce{B}$ react to create $\ce{AB}$. $\ce{A_2}$, $\ce{B_2}$ and $\ce{AB}$ never react further. In the experiment, we first mix $\ce{A}$ and $\ce{B}$ at temperature $t<T_r$. The amounts mixed are $a$ for $\ce{A}$ and $b$ for $\ce{B}$. Then we add heat to reach temperature $t\ge T_r$ and start the reactions. What amounts of $\ce{A_2}$, $\ce{B_2}$, and $\ce{AB}$ can be expected to be produced? To obtain the amounts, should probability theory be used? E.g., the amount of $\ce{AB}$ equals the probability that species $\ce{A}$ and $\ce{B}$ will interact ("collide" or similar interpretation). Assume the rates of the reactions are equal. Answer: Well, if you assume the rates are known and the reactions' orders follow from stoichiometry (e.g. if they are elementary reactions), you can put the chemical kinetics into simple equations: $$\frac{\mathrm da}{\mathrm dt} = -k_1 a(t)^2 - k_3 a(t)b(t) $$ $$\frac{\mathrm db}{\mathrm dt} = -k_2 b(t)^2 - k_3 a(t)b(t) $$ ($t$ here being time, not temperature). Knowing initial amounts or concentrations $a_0=a(t=0)$ and $b_0=b(t=0)$, you can pretty much integrate the system to find out what happens. Edit: solving this system for $k_1=k_2=k_3$ yields the quantities of AA, AB and BB at infinite time to be as follows: $$aa = \frac{a_0^2}{a_0+b_0}$$ $$ab = \frac{a_0 b_0}{a_0+b_0}$$ $$bb = \frac{b_0^2}{a_0+b_0}$$
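The closed-form limits in the answer can be sanity-checked numerically. The sketch below (Python, not part of the original answer; the initial amounts, rate constant, step size and end time are illustrative choices of mine) integrates the rate equations with a simple explicit Euler scheme, accumulating the products via d(aa)/dt = k*a^2, d(ab)/dt = k*a*b, d(bb)/dt = k*b^2, consistent with the consumption terms above:

```python
# Explicit Euler integration of the kinetics with k1 = k2 = k3 = k.
# a0, b0, k, dt and t_end are illustrative, not taken from the post.

def integrate(a0, b0, k=1.0, dt=1e-3, t_end=200.0):
    a, b = a0, b0
    aa = ab = bb = 0.0
    for _ in range(int(t_end / dt)):
        ra, rb, rab = k * a * a, k * b * b, k * a * b
        aa += ra * dt      # A2 channel
        bb += rb * dt      # B2 channel
        ab += rab * dt     # AB channel
        a -= (ra + rab) * dt
        b -= (rb + rab) * dt
    return aa, ab, bb

a0, b0 = 2.0, 1.0
aa, ab, bb = integrate(a0, b0)
# Expected long-time limits from the answer:
#   aa -> a0^2/(a0+b0), ab -> a0*b0/(a0+b0), bb -> b0^2/(a0+b0)
print(aa, ab, bb)  # roughly 4/3, 2/3, 1/3
```

The run reproduces the stated limits to within the truncation and step-size error of the scheme.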
{ "domain": "chemistry.stackexchange", "id": 68, "tags": "kinetics" }
Create `suffixes` Function on List
Question: I wrote the following suffixes function. Given a list, it returns the list + all sub-lists. suffixes [1,2,3,4] == [[1,2,3,4], [2,3,4], [3,4], [4], []] Please critique my implementation. suffixes :: [a] -> [[a]] suffixes [] = [] : [] suffixes xxs@(_:xs) = xxs : suffixes xs Answer: Looks good to me. The only thing I would change is suffixes [] = [] : [] to suffixes [] = [[]] as it's a bit more readable. When re-inventing functions, it can be instructive to look up their definition using Hoogle. Following the links, we find this definition in Data.List: tails :: [a] -> [[a]] tails xs = xs : case xs of [] -> [] _ : xs' -> tails xs'
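For readers who want to experiment outside Haskell, the same recursion can be sketched in Python (not part of the original post):

```python
def suffixes(xs):
    """Return xs and all of its suffixes, ending with the empty list,
    mirroring the Haskell definition (and Data.List.tails)."""
    if not xs:
        return [[]]
    return [xs] + suffixes(xs[1:])

print(suffixes([1, 2, 3, 4]))
# [[1, 2, 3, 4], [2, 3, 4], [3, 4], [4], []]
```

As in the Haskell version, the base case contributes the trailing empty list.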
{ "domain": "codereview.stackexchange", "id": 10109, "tags": "haskell, reinventing-the-wheel" }
How to compute the Jaccard Similarity in this example? (Jaccard vs. Cosine)
Question: I am trying to understand the difference between Jaccard and Cosine. However, there seems to be a disagreement in the answers provided in Applications and differences for Jaccard similarity and Cosine Similarity. I am hoping someone could step me through the calculations of the Jaccard Similarity in this Cosine Similarity example from https://bioinformatics.oxfordjournals.org/content/suppl/2009/10/24/btp613.DC1/bioinf-2008-1835-File004.pdf Given (from an image in the original post): $t_1 = (1, 1, 0, 1)$ and $t_2 = (2, 0, 1, 1)$. Question: How do we compute the Jaccard Similarity index between t1 and t2? Thank you. Answer: Cosine similarity is for comparing two real-valued vectors, but Jaccard similarity is for comparing two binary vectors (sets). So you cannot compute the standard Jaccard similarity index between your two vectors, but there is a generalized version of the Jaccard index for real-valued vectors which you can use in this case: $J_g(\Bbb{a}, \Bbb{b}) =\frac{\sum_i \min(\Bbb{a}_i, \Bbb{b}_i)}{\sum_i \max(\Bbb{a}_i, \Bbb{b}_i)}$ So for your examples of $t_1 = (1, 1, 0, 1), t_2 = (2, 0, 1, 1)$, the generalized Jaccard similarity index can be computed as follows: $J(t_1, t_2) = \frac{1+0+0+1}{2+1+1+1} = 0.4$ Alternatively you can treat your bag-of-words vector as a binary vector, where a value $1$ indicates a word's presence and $0$ indicates a word's absence, i.e. $t_1 = (1, 1, 0, 1), t_2 = (1, 0, 1, 1)$. From there, you can compute the original Jaccard similarity index: $J(t_1, t_2) = \frac{2}{2+1+1} = 0.5$
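Both computations in the answer can be sketched in a few lines of Python (vectors taken from the answer; the function names are mine):

```python
def generalized_jaccard(a, b):
    # Sum of element-wise minima over sum of element-wise maxima.
    return sum(min(x, y) for x, y in zip(a, b)) / sum(max(x, y) for x, y in zip(a, b))

def binary_jaccard(a, b):
    # Treat nonzero entries as set membership: |A ∩ B| / |A ∪ B|.
    sa = {i for i, x in enumerate(a) if x}
    sb = {i for i, x in enumerate(b) if x}
    return len(sa & sb) / len(sa | sb)

t1 = (1, 1, 0, 1)
t2 = (2, 0, 1, 1)
print(generalized_jaccard(t1, t2))  # 0.4
print(binary_jaccard(t1, t2))       # 0.5
```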
{ "domain": "datascience.stackexchange", "id": 1341, "tags": "similarity" }
Sending variables from one nodes to another
Question: Hello all! I am using the Hydro version of ROS. I have created a node in which I made a variable and stored a value in it. Now I have to pass that variable's value to another node so that it can access it. Can anybody tell me how I can do this? I am working in Python, so it would be nice if the answer were in Python. Thanks! Looking forward to answers! Originally posted by jashanvir on ROS Answers with karma: 68 on 2014-06-18 Post score: 0 Answer: There are many ways to pass data from one node to another. You could publish your variable on a topic: Tutorial Understanding Topics You could use a setter-like service: Tutorial Understanding Services And there are some more advanced ways to do it. Originally posted by BennyRe with karma: 2949 on 2014-06-18 This answer was ACCEPTED on the original site Post score: 2
{ "domain": "robotics.stackexchange", "id": 18301, "tags": "ros, python, ros-hydro, environment-variables" }
Is it possible to prove Language L context-free?
Question: Given a question: Language L = {a^n b^(n+m) a^m}, where both n and m are >= 0. Is L context-free or not? If the answer is yes, can I use the following PDA to prove it? Since {a^n b^(n+m) a^m} = {a^n b^n b^m a^m}, in the PDA, we first push n a's onto the stack. Then, we pop n a's from the stack by reading n b's. Next, we push m b's onto the stack, and we pop m b's from the stack by reading m a's. Finally, the strings will be accepted in q5. Conversely, if the language is not context-free, should I use the pumping lemma to prove it? Thank you! Answer:
S → AB
A → aAb | ε
B → bBa | ε
This grammar generates the language and follows the context-free grammar rules, so the language it generates is a CFL.
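To gain confidence in the grammar, a quick sketch (in Python, not part of the original answer) can check that the strings it derives are exactly of the form a^n b^(n+m) a^m:

```python
import re

def in_language(s):
    """Membership test for L = { a^n b^(n+m) a^m : n, m >= 0 }."""
    match = re.fullmatch(r'(a*)(b*)(a*)', s)
    if not match:
        return False
    n, bs, m = (len(g) for g in match.groups())
    return bs == n + m

def derive(n, m):
    """String produced by the grammar S -> AB, A -> aAb | e, B -> bBa | e
    with n expansions of A and m expansions of B."""
    return 'a' * n + 'b' * n + 'b' * m + 'a' * m

# Every derived string is in the language:
assert all(in_language(derive(n, m)) for n in range(5) for m in range(5))
```

Note that A contributes the a^n b^n prefix and B the b^m a^m suffix, matching the split {a^n b^n b^m a^m} used in the PDA sketch.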
{ "domain": "cs.stackexchange", "id": 7874, "tags": "context-free" }
Google Code Jam - Alien Language
Question: I've successfully solved the Alien Languages problem for Google Code Jam in Haskell: The algorithm is trivial - turn the patterns into regular expressions and see how many of the known words match these expressions. But - this has taken me far longer than I expected it to, and most of the time I was battling with getting my types correct. Which leaves me wondering whether there is something wrong with my understanding of functional programming. module Main where -- http://code.google.com/codejam/contest/90101/dashboard#s=p0 -- Input and output with standard redirection operators -- Unless otherwise indicated, all modules used are either bundled with -- the Haskell Platform (http://hackage.haskell.org/platform/) or available -- as a separate download from Hackage (http://hackage.haskell.org/). import Data.List import Text.Regex.Posix import Data.String.Utils numberOfMatches :: String -> [String] -> Int numberOfMatches pattern = foldl' matches 0 where matches acc word = if word =~ pattern' :: Bool then acc + 1 else acc pattern' = replace "(" "[" $ replace ")" "]" pattern getResult :: String -> ([String], Int, [String]) -> ([String], Int, [String]) getResult pattern (w, count, accum) = (w, count - 1, res : accum) where res = "Case #" ++ show count ++ ": " ++ show (numberOfMatches pattern w) main :: IO () main = do (header:xs) <- fmap lines getContents -- IO is a Functor. let [_, d, n] = map read $ words header let knownWords = take d xs let patterns = drop d xs let (_, _, results) = foldr getResult (knownWords, n, []) patterns mapM_ putStrLn results Which, for a test input of: 3 5 4 abc bca dac dbc cba (ab)(bc)(ca) abc (abc)(abc)(abc) (zyx)bc Yields the results: Case #1: 2 Case #2: 1 Case #3: 3 Case #4: 0 Answer: A good rule of thumb is that you should only ever use foldr when you're really sure that your fold is not an instance of something simpler. 
In your case, the fold is doing pretty much exactly two things while traversing the pattern list: Keeping track of the "case index" Accumulating the result list The second should be easily recognisable as a map - maybe less obvious is that the first can just be written as a zip with an enumeration: getResult :: [String] -> (Int, String) -> String getResult w (count, pattern) = "Case #" ++ show count ++ ": " ++ show (numberOfMatches pattern w) main :: IO () main = do [...] let results = map (getResult knownWords) $ zip [1..n] patterns Which is much easier to understand than trying to bend the fold to do the right thing. Also, just as a suggestion, here's main implemented in a more "imperative" (read: monadic) style. After all, Haskell is said to be the best imperative language ever invented, so we can do that proudly: main :: IO () main = do [_, d, n] <- fmap (map read . words) getLine knownWords <- replicateM d getLine forM_ [1..n] $ \count -> do pattern <- getLine let matches = numberOfMatches pattern knownWords putStrLn $ concat ["Case #", show count, ": ", show matches]
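The pattern-to-regex trick itself is easy to sanity-check outside Haskell; here is a Python sketch (not part of the original solution) using the sample case from the question:

```python
import re

def number_of_matches(pattern, words):
    # Turn the alien pattern into a character-class regex:
    # "(ab)(bc)(ca)" becomes "[ab][bc][ca]", then count full matches
    # among the known words.
    regex = re.compile(pattern.replace('(', '[').replace(')', ']'))
    return sum(1 for w in words if regex.fullmatch(w))

words = ['abc', 'bca', 'dac', 'dbc', 'cba']
cases = ['(ab)(bc)(ca)', 'abc', '(abc)(abc)(abc)', '(zyx)bc']
for i, p in enumerate(cases, 1):
    print(f'Case #{i}: {number_of_matches(p, words)}')
# Case #1: 2
# Case #2: 1
# Case #3: 3
# Case #4: 0
```

This reproduces the expected output for the sample input given in the question.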
{ "domain": "codereview.stackexchange", "id": 2652, "tags": "haskell, programming-challenge" }
ros2 lookupTransform() to get latest transform
Question: Hello all, In foxy tf2_ros::Buffer::lookupTransform(), geometry_msgs::msg::TransformStamped tf2_ros::Buffer::lookupTransform(const std::string& target_frame, const std::string& source_frame, const tf2::TimePoint& time, const tf2::Duration timeout ) const @param time: The time at which the value of the transform is desired. (0 will get the latest) We can use tf2::TimePoint(std::chrono::nanoseconds(0)) to get the latest transform. In the foxy tf2 namespace: using tf2::Duration = typedef std::chrono::nanoseconds using tf2::TimePoint = typedef std::chrono::time_point<std::chrono::system_clock, Duration> tf2::TimePoint uses system_clock. Although we can use tf2::TimePoint fromRclcpp(const rclcpp::Time & time) to convert an rclcpp::Time into a tf2::TimePoint, rclcpp::Time may use a ROS time source, such as RCL_ROS_TIME: inline tf2::TimePoint fromRclcpp(const rclcpp::Time & time) { // tf2::TimePoint is a typedef to a system time point, but rclcpp::Time may be ROS time. // Ignore that, and assume the clock used from rclcpp time points is consistent. return tf2::TimePoint(std::chrono::nanoseconds(time.nanoseconds())); } My questions are: 1: Could we directly use the rclcpp::Time overload of lookupTransform() to get the latest transform, with no need to convert into tf2::TimePoint? 2: AutowareArchitectureProposal: Replacing tf2_ros::Buffer says there is some bug in tf2_ros::Buffer, is it true? Thanks for your help and time! Originally posted by jxl on ROS Answers with karma: 252 on 2022-03-25 Post score: 1 Answer: Hello, I use "tf2::TimePointZero", it gives me the latest transform. This tutorial can help you: link text I don't have any problems with my foxy version with timeout = 0. Originally posted by timRO with karma: 26 on 2022-03-25 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by jxl on 2022-03-28: @timRO, thanks for your help :). tf2::TimePointZero is defined here line=50 btw. 
Did you also find that the tf delay is very large when playing back bag data, as in this question?
{ "domain": "robotics.stackexchange", "id": 37527, "tags": "ros, transforms, tf2-ros, tf2" }
Do particles get arbitrarily delocalized in interstellar space?
Question: I was watching this simulation of two quantum wave packets colliding in a box: https://physics.weber.edu/schroeder/software/CollidingPackets.html The wave function gets arbitrarily delocalized as time goes on, until you have almost an equal probability of finding the two particles anywhere in the box. This would mean that the particles are "everywhere" in the box and the place where they would "appear" upon measurement would be almost random. Do real particles traveling in interstellar space, without interacting much at all with anything, get smeared out in this way? If so, how can we form images of extremely far away objects? It seems as if any photon emitted from a star would be extremely delocalized by the time the wave packet reaches Earth, even if it has an initially well defined direction and position when emitted. The probability of finding it at any given point on the expanding "sphere of influence" of the wave packet would be extremely small. Also, why do most of the particles in our "real" world always seem to be very localized? I mean, my body doesn't dissolve as time goes on. I can imagine these reasons why this happens in practice: Because interactions between individual particles and large systems keep them localized. Those interactions would be more similar to "measurements" than the interaction seen in the simulation, and so their wave functions would be continuously collapsing. Most of my particles are "locked" into a kind of potential from which they can't escape, like the one in the simulation. A combination of both. Which one is correct? Is there another explanation? Thanks Answer: Yes they do. When a particle is finally measured, its wave function collapses to a point in the detector. This "collapse of the wave function" is a fundamental aspect of quantum theory and is highly nonlocal. 
Photons have been observed being bent round opposite sides of a black hole, in what is called gravitational lensing, with the wave function interfering with itself when it reaches Earth, in a kind of cosmic-scale Young's slits diffraction experiment. Such a photon wave may have expanded across billions of lightyears since it left its parent star, but vanishes in an unmeasurably short moment when it hits a digital camera. Bing! Gone, just like that. How come it manages to find our camera at all in all that vast waste of emptiness? Because there are many billions upon billions of photon waves pouring out from that star and being lensed around that black hole. A few will make it to the camera, but which few is down to pure chance. That chance is described by the quantum wave function. The reason our Earthly matter seems so localized is that its waves (called de Broglie waves) spread only slowly and very soon hit their neighbours. But in fact you can perform the Young's slits diffraction experiment on electrons, with care even on buckyballs, and they will behave like waves. It is all so mind-boggling that some physicists try to develop theories in which a real particle is somehow steered by "hidden variables" and the wave function is just an expression of our ignorance as to what those variables are. But all the quantum weirdness does not then go away, it must remain inherent in those hidden variables, which rather dampens the point of the exercise. It was Hamlet to whom Shakespeare gave to say, "There are more things in Heaven and Earth ... than are dreamed of in your philosophy." I sometimes think this was the first historical expression of quantum physics.
{ "domain": "physics.stackexchange", "id": 67747, "tags": "quantum-mechanics, waves" }
How much of your stock solution should you take to make the 1000 cells/mL mixture?
Question: You need a solution containing 1000 buccal cells/mL. You count that you have 125 buccal cells in 50 µL, from a total solution of 8 mL. How much of your stock solution should you take to make the 1000 cells/mL mixture? How much solvent if you would like a total volume of 5 mL? I don't even know how to approach this!!! Help?? Answer: Welcome to cell culture & dimensional analysis.
125 cells / 50 µL, or 2.5 cells/µL, is your concentration (a better way of expressing it).
2.5 cells/µL × 8000 µL = 20,000 total cells you have (there are 1000 µL in 1 mL, that's why the 8000 µL).
20,000 cells ÷ 1000 cells/mL = 20 mL final culture volume to reach the desired concentration.
20 mL - 8 mL = 12 mL, so you need to add 12 mL of medium / saline / whatever to your 8 mL to dilute it to the correct concentration.
When you ask about solvent to ADD to make 5 mL, the problem with the question is that you don't state the desired concentration. So, assuming it's still 1000 cells/mL, kind of go backwards: 1000 cells/mL × 5 mL = 5000 cells is what you want. To get it: 5000 cells ÷ 2.5 cells/µL = 2000 µL, or 2 mL, of your culture. So take 2 mL of your culture and mix it with 3 mL of solvent / medium (5 mL - 2 mL). I'm quite certain chris below is not correct.
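The same dimensional analysis, scripted as a sketch (Python; the variable names are mine, the values come from the question and answer):

```python
cells_counted = 125        # cells counted in the sample
sample_ul = 50             # sample volume, µL
stock_ml = 8               # total stock volume, mL
target_conc = 1000         # desired cells/mL

conc_per_ul = cells_counted / sample_ul          # 2.5 cells/µL
conc_per_ml = conc_per_ul * 1000                 # 2500 cells/mL
total_cells = conc_per_ml * stock_ml             # 20,000 cells

final_volume_ml = total_cells / target_conc      # 20 mL total
solvent_to_add_ml = final_volume_ml - stock_ml   # 12 mL to add

# For a 5 mL final volume at the same 1000 cells/mL:
wanted_cells = target_conc * 5                   # 5000 cells
stock_to_take_ml = wanted_cells / conc_per_ml    # 2 mL of culture
solvent_ml = 5 - stock_to_take_ml                # 3 mL of solvent

print(solvent_to_add_ml, stock_to_take_ml, solvent_ml)  # 12.0 2.0 3.0
```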
{ "domain": "biology.stackexchange", "id": 1949, "tags": "homework" }
If body temperature is 37°C (98.6°F), why are most people more comfortable at around 21°C (70°F)?
Question: It may be different for other people, but for me, anything above 32°C (90°F) is very uncomfortable, and my body is inclined to seek cooler temperatures. But I would think that at 32°C, the body would have less work to do to get itself to 37°C. So why is it not comfortable in those temperatures? My theory is this, but I don't know if it's right: The body's abilities for warming itself are much more sophisticated than its abilities for cooling itself (which are non-existent, possibly?). So it likes to be in an environment 20-30 degrees below optimal because it can easily handle that. But up in the 32's and we're dangerously close to going over the optimal, and the body doesn't know how to get it back down after that, so we are inclined to seek safer temperatures. Is it something like that? Answer: This is due to the fact that skin is the interface where heat is lost. Our body, due to constant functioning, produces heat constantly as a by-product (mainly due to the exothermic breakdown of ATP). The excess heat needs to be conducted away from the body, or it will cause a decrease in the body metabolism to prevent a temperature rise. Heat is lost mainly through the skin by:
Sweating - through evaporation
Radiation - as heat waves (IR rays - that's why an IR camera captures people at night)
Conduction - directly through objects that touch the skin
Convection - through air circulation
When the ambient temperature rises, the heat lost through radiation, conduction, and convection drastically decreases. And often when the temperature is high, there is an accompanying rise in the relative humidity, which decreases the heat loss through sweating (as the amount of water vapor is high in the atmosphere, the sweat does not evaporate, so no heat is lost). So the heat which is not lost is felt as the "hot sensation". It is relieved by stopping any activity, seeking shade or a cool place, etc., all of which increase the heat lost or decrease the heat produced. 
You have to note that the temperature of skin is lower than the body temperature. The skin temperature is lower than the core body temperature for two reasons: The skin acts as a medium through which the external temperature is measured - as such, the skin temperature is at equilibrium with the external temperature. The brain regulates the core-body temperature in response to the temperature measured through the skin. If a person is suddenly exposed to a cold environment, skin loses much of its heat in the form of radiation (radiation is directly proportional to the temperature difference); this will cause a perception of cold, and the body starts shivering even though no actual heat loss has occurred from the core body thermal load (only the skin loses heat, not the body core). The brain anticipates that the core will lose its heat when exposed to such a low temperature for prolonged periods and starts the warming mechanism before the actual cooling occurs, such that the cooling is either prevented or minimized. This is called anticipatory control, and the temperature of skin being close to the ambient temperature within physiological limits is needed for this. Skin is the medium (almost the only medium) through which the excessive heat produced by the body core during activity is expelled. The skin temperature is lower so that a constant gradient can be created between the body core and body surface to maintain the flow of heat. (Heat losses through urine and feces are minimal.) For more details see this question. All the answers in this question are good and will increase your understanding of what actually happens.
{ "domain": "biology.stackexchange", "id": 3319, "tags": "human-biology, homeostasis" }
Enter the Matrix
Question: I've semi-often been frustrated at the lack of a proper Matrix data structure in VBA. A multi-dimensional array is obviously the right way to handle it, but there is so much missing... for example, you can't natively check to see if an array has been dimensioned, you can't resize the array while preserving values except on the last dimension, there is no convenient VBA syntax for loading immediate values into the array, etc. So I created a Matrix class that supports:
Matrix operations - Add, Subtract, Multiply, ScalarMultiply, Augment, Transpose
Elementary Row Operations - SwapRows, ScaleRow, AddScalarMultipleRow
A parser for loading the Matrix from a String - LoadMatrixString
Utility functions - ToString, Clone
An implementation of Gaussian Elimination - RowReduce
The parser is based on this tutorial on hand coding a recursive descent parser. The Elementary Row Operations are destructive, because doing otherwise would degrade the performance too much. The Matrix operations are non-destructive, in that they create a new Matrix with the results and return it. This allows method chaining, such as Set D = A.Multiply(B).Add(C).ScalarMultiply(5), and the intuitive behavior such that C = A x B and A and B themselves are not modified in the process. I'm tempted to make these methods destructive to improve performance (an object is created for every intermediate matrix operation), but I'm not sure how intuitive it would be that the result of A.Multiply(B) would be A. I posted an earlier version of the class as an answer to a question here, but have since made some improvements. I'm particularly interested to know whether I should split the parser off into a separate class to be used independently, or maybe be called by the Matrix class itself. 
I've tried to clean up the code naming conventions - PascalCase for the sub/functions and camelCase for the variable names and removing Hungarian - but please point out to me if I've missed something. I've been reading that unless you are specifically coding for performance, it's better from a code maintainability standpoint to call accessors when possible within the class instead of always modifying private members directly because if the implementation of the accessor ever changes, you wouldn't have to then go through the rest of the code and change the way it's done in the other functions - does that sound right? Here is the very self-contained Matrix class: Option Compare Database Option Explicit Private Declare Sub CopyMemory Lib "kernel32.dll" Alias "RtlMoveMemory" (ByVal Destination As Long, ByVal Source As Long, ByVal Length As Integer) '---------------------------------- 'This array holds the values of the Matrix Private matrixArray() As Double '---------------------------------- 'Shared recursive descent parsing variables Private tempMatrixString As String Private look As String Public Sub Class_Initialize() End Sub '************************************************ '* Accessors and Utility Functions * '*********************************** Public Property Get Value(r As Long, c As Long) As Double CheckDimensions Value = matrixArray(r, c) End Property Public Property Let Value(r As Long, c As Long, val As Double) CheckDimensions matrixArray(r, c) = val End Property Public Property Get Rows() As Long If GetDims(matrixArray) = 0 Then Rows = 0 Else Rows = UBound(matrixArray, 1) + 1 End If End Property Public Property Get Cols() As Long If GetDims(matrixArray) = 0 Then Cols = 0 Else Cols = UBound(matrixArray, 2) + 1 End If End Property Public Sub LoadMatrixString(str As String) tempMatrixString = str ParseMatrix str tempMatrixString = "" look = "" End Sub Public Sub Resize(Rows As Long, Cols As Long, Optional blPreserve As Boolean = False) Dim tempMatrix As 
Matrix Dim r As Long Dim c As Long If blPreserve Then CheckDimensions Set tempMatrix = Me.Clone ReDim matrixArray(0 To Rows - 1, 0 To Cols - 1) For r = 0 To MinLongs(tempMatrix.Rows, Me.Rows) - 1 For c = 0 To MinLongs(tempMatrix.Cols, Me.Cols) - 1 Value(r, c) = tempMatrix.Value(r, c) Next Next Else ReDim matrixArray(0 To Rows - 1, 0 To Cols - 1) End If End Sub Public Function Clone() As Matrix Dim mresult As Matrix Dim r As Long Dim c As Long CheckDimensions Set mresult = New Matrix mresult.Resize Me.Rows, Me.Cols For r = 0 To Me.Rows - 1 For c = 0 To Me.Cols - 1 mresult.Value(r, c) = Me.Value(r, c) Next Next Set Clone = mresult End Function Public Function ToString() As String Dim str As String Dim r As Long Dim c As Long Dim tempRow() As String Dim tempRows() As String ReDim tempRow(0 To Me.Cols - 1) ReDim tempRows(0 To Me.Rows - 1) If Not GetDims(matrixArray) = 0 Then 'Need to check if array is empty For r = 0 To Me.Rows - 1 For c = 0 To Me.Cols - 1 tempRow(c) = Me.Value(r, c) Next tempRows(r) = "[" & Join(tempRow, ", ") & "]" Next ToString = "[" & Join(tempRows, vbCrLf) & "]" Else ToString = "" End If End Function '*********************************************************** '* Matrix Operations * '********************* Public Function Add(m As Matrix) As Matrix Dim mresult As Matrix Dim r As Long Dim c As Long CheckDimensions If m.Rows = Me.Rows And m.Cols = Me.Cols Then Set mresult = New Matrix mresult.Resize Me.Rows, Me.Cols For r = 0 To Me.Rows - 1 For c = 0 To Me.Cols - 1 mresult.Value(r, c) = Me.Value(r, c) + m.Value(r, c) Next Next Else Err.Raise vbObjectError + 1, "Matrix.Add", "Could not Add matrices: the Rows and Columns must be the same. The left matrix is (" & Me.Rows & ", " & Me.Cols & ") and the right matrix is (" & m.Rows & ", " & m.Cols & ")." 
End If Set Add = mresult End Function Public Function Subtract(m As Matrix) As Matrix Dim mresult As Matrix Dim r As Long Dim c As Long CheckDimensions If m.Rows = Me.Rows And m.Cols = Me.Cols Then Set mresult = New Matrix mresult.Resize Me.Rows, Me.Cols For r = 0 To Me.Rows - 1 For c = 0 To Me.Cols - 1 mresult.Value(r, c) = Me.Value(r, c) - m.Value(r, c) Next Next Else Err.Raise vbObjectError + 2, "Matrix.Subtract", "Could not Subtract matrices: the Rows and Columns must be the same. The left matrix is (" & Me.Rows & ", " & Me.Cols & ") and the right matrix is (" & m.Rows & ", " & m.Cols & ")." End If Set Subtract = mresult End Function Public Function Multiply(m As Matrix) As Matrix Dim mresult As Matrix Dim i As Long Dim j As Long Dim n As Long CheckDimensions If Me.Cols = m.Rows Then Set mresult = New Matrix mresult.Resize Me.Rows, m.Cols For i = 0 To Me.Rows - 1 For j = 0 To m.Cols - 1 For n = 0 To Me.Cols - 1 mresult.Value(i, j) = mresult.Value(i, j) + (Me.Value(i, n) * m.Value(n, j)) Next Next Next Else Err.Raise vbObjectError + 3, "Matrix.Multiply", "Could not Subtract matrices: the Columns of the left matrix and Rows of the right must be the same. The left matrix has " & Me.Cols & " Columns and the right matrix has " & m.Rows & " Rows." 
End If Set Multiply = mresult End Function Public Function ScalarMultiply(scalar As Double) As Matrix Dim mresult As Matrix Dim r As Long Dim c As Long CheckDimensions Set mresult = New Matrix mresult.Resize Me.Rows, Me.Cols For r = 0 To Me.Rows - 1 For c = 0 To Me.Cols - 1 mresult.Value(r, c) = Me.Value(r, c) * scalar Next Next Set ScalarMultiply = mresult End Function Public Function Augment(m As Matrix) As Matrix Dim mresult As Matrix Dim r As Long Dim c As Long CheckDimensions If Me.Rows = m.Rows Then Set mresult = New Matrix mresult.Resize Me.Rows, Me.Cols + m.Cols For r = 0 To Me.Rows - 1 For c = 0 To Me.Cols - 1 mresult.Value(r, c) = Me.Value(r, c) Next Next For r = 0 To Me.Rows - 1 For c = 0 To m.Cols - 1 mresult.Value(r, Me.Cols + c) = m.Value(r, c) Next Next Else Err.Raise vbObjectError + 4, "Matrix.Augment", "Could not Augment matrices: the matrices must have the same number of Rows. The left matrix has " & Me.Rows & " Rows and the right matrix has " & m.Rows & " Rows." End If Set Augment = mresult End Function Public Function Transpose() As Matrix Dim mresult As Matrix Dim r As Long Dim c As Long CheckDimensions If Me.Rows = Me.Cols Then Set mresult = New Matrix mresult.Resize Me.Cols, Me.Rows For r = 0 To Me.Rows - 1 For c = 0 To Me.Cols - 1 Me.Value(r, c) = mresult(c, r) Next Next Else Err.Raise vbObjectError + 5, "Matrix.Augment", "Could not Transpose matrix: the matrix must have the same number of Rows and Cols. The matrix is (" & Me.Rows & ", " & Me.Cols & ")." 
End If Set Transpose = mresult End Function Public Function RowReduce() As Matrix Dim i As Long Dim j As Long CheckDimensions 'Row Echelon Dim mresult As Matrix Set mresult = Me.Clone For i = 0 To mresult.Rows - 1 If Not mresult.Value(i, i) <> 0 Then For j = i + 1 To mresult.Rows - 1 If mresult.Value(j, i) > 0 Then mresult.SwapRows i, j Exit For End If Next End If If mresult.Value(i, i) = 0 Then Exit For End If mresult.ScaleRow i, 1 / mresult.Value(i, i) For j = i + 1 To mresult.Rows - 1 mresult.AddScalarMultipleRow i, j, -mresult.Value(j, i) Next Next 'Backwards substitution For i = IIf(mresult.Rows < mresult.Cols, mresult.Rows, mresult.Cols) - 1 To 1 Step -1 If mresult.Value(i, i) > 0 Then For j = i - 1 To 0 Step -1 mresult.AddScalarMultipleRow i, j, -mresult.Value(j, i) Next End If Next Set RowReduce = mresult End Function '************************************************************* '* Elementary Row Operaions * '**************************** Public Sub SwapRows(r1 As Long, r2 As Long) Dim temp As Double Dim c As Long CheckDimensions For c = 0 To Me.Cols - 1 temp = Me.Value(r1, c) Me.Value(r1, c) = Me.Value(r2, c) Me.Value(r2, c) = temp Next End Sub Public Sub ScaleRow(row As Long, scalar As Double) Dim c As Long CheckDimensions For c = 0 To Me.Cols - 1 Me.Value(row, c) = Me.Value(row, c) * scalar Next End Sub Public Sub AddScalarMultipleRow(srcrow As Long, destrow As Long, scalar As Double) Dim c As Long CheckDimensions For c = 0 To Me.Cols - 1 Me.Value(destrow, c) = Me.Value(destrow, c) + (Me.Value(srcrow, c) * scalar) Next End Sub '************************************************************ '* Parsing Functions * '********************* Private Sub ParseMatrix(strMatrix As String) Dim arr() As Double Dim c As Long GetChar 1 Match "[" SkipWhite If look = "[" Then arr = ParseRow Me.Resize 1, UBound(arr) + 1 'ReDim matrixArray(0 To UBound(arr), 0 To 0) For c = 0 To Me.Cols - 1 Me.Value(0, c) = arr(c) Next SkipWhite While look = "," Match "," SkipWhite arr = 
ParseRow Me.Resize Me.Rows + 1, Me.Cols, True If UBound(arr) <> (Me.Cols - 1) Then 'Error jagged array Err.Raise vbObjectError + 6, "Matrix.LoadMatrixString", "Parser Error - Jagged arrays are not supported: Row 0 has " & Me.Cols & " Cols, but Row " & Me.Rows - 1 & " has " & UBound(arr) + 1 & " Cols." End If For c = 0 To Me.Cols - 1 Me.Value(Me.Rows - 1, c) = arr(c) Next SkipWhite Wend Match "]" ElseIf look = "]" Then Match "]" Else MsgBox "Error" End If SkipWhite If look <> "" Then Err.Raise vbObjectError + 7, "Matrix.LoadMatrixString", "Parser Error - Unexpected Character: """ & look & """." End If End Sub Private Function ParseRow() As Variant Dim arr() As Double Match "[" SkipWhite ReDim arr(0 To 0) arr(0) = ParseNumber SkipWhite While look = "," Match "," ReDim Preserve arr(0 To UBound(arr) + 1) arr(UBound(arr)) = ParseNumber SkipWhite Wend Match "]" ParseRow = arr End Function Private Function ParseNumber() As Double Dim strToken As String If look = "-" Then strToken = strToken & look GetChar End If While IsDigit(look) strToken = strToken & look GetChar Wend If look = "." 
Then
        strToken = strToken & look
        GetChar
        While IsDigit(look)
            strToken = strToken & look
            GetChar
        Wend
    End If
    ParseNumber = CDbl(strToken)
End Function
'****************************************************************
Private Sub GetChar(Optional InitValue)
    Static i As Long
    If Not IsMissing(InitValue) Then
        i = InitValue
    End If
    If i <= Len(tempMatrixString) Then
        look = Mid(tempMatrixString, i, 1)
        i = i + 1
    Else
        look = ""
    End If
End Sub
'****************************************************************
'* Skip Functions (Parser) *
'***************************
Private Sub SkipWhite()
    While IsWhite(look) Or IsEOL(look)
        GetChar
    Wend
End Sub
'****************************************************************
'* Match/Expect Functions (Parser) *
'***********************************
Private Sub Match(char As String)
    If look <> char Then
        Expected """" & char & """"
    Else
        GetChar
        SkipWhite
    End If
    Exit Sub
End Sub

Private Sub Expected(str As String)
    'MsgBox "Expected: " & str
    Err.Raise vbObjectError + 8, "Matrix.LoadMatrixString", "Parser Error - Expected: " & str
End Sub
'****************************************************************
'* Character Class Functions (Parser) *
'**************************************
Private Function IsDigit(char As String) As Boolean
    Dim charval As Integer
    If char <> "" Then
        charval = Asc(char)
        If 48 <= charval And charval <= 57 Then
            IsDigit = True
        Else
            IsDigit = False
        End If
    Else
        IsDigit = False
    End If
End Function

Private Function IsWhite(char As String) As Boolean
    Dim charval As Integer
    If char <> "" Then
        charval = Asc(char)
        If charval = 9 Or charval = 11 Or charval = 12 Or charval = 32 Or charval = 160 Then '160 because MS Exchange sucks
            IsWhite = True
        Else
            IsWhite = False
        End If
    Else
        IsWhite = False
    End If
End Function

Private Function IsEOL(char As String) As Boolean
    If char = Chr(13) Or char = Chr(10) Then
        IsEOL = True
    Else
        IsEOL = False
    End If
End Function
'*****************************************************************
'* Helper Functions *
'********************
Private Sub CheckDimensions()
    If GetDims(matrixArray) = 0 Then 'Error, uninitialized array
        Err.Raise vbObjectError + 1, "Matrix", "Array has not been initialized"
    End If
End Sub

Private Function GetDims(VarSafeArray As Variant) As Integer
    Dim lpSAFEARRAY As Long
    Dim lppSAFEARRAY As Long
    Dim arrayDims As Integer
    'This check ensures that the value inside the Variant is actually an array of some type
    If (VarType(VarSafeArray) And vbArray) > 0 Then
        'If the Variant contains an array, the pointer to the pointer to the array is located at VarPtr(VarSafeArray) + 8...
        CopyMemory VarPtr(lppSAFEARRAY), VarPtr(VarSafeArray) + 8, 4&
        '...and now dereference the pointer to pointer to get the actual pointer to the array...
        CopyMemory VarPtr(lpSAFEARRAY), lppSAFEARRAY, 4&
        '...which will be 0 if the array hasn't been initialized
        If Not lpSAFEARRAY = 0 Then
            'If it HAS been initialized, we can pull the number of dimensions directly from the pointer, since it's the first member in the SAFEARRAY struct
            CopyMemory VarPtr(arrayDims), lpSAFEARRAY, 2&
            GetDims = arrayDims
        Else
            GetDims = 0 'Array not initialized
        End If
    Else
        GetDims = 0 'It's not an array... Type mismatch maybe?
    End If
End Function

Private Function MinLongs(a As Long, b As Long) As Long
    If a < b Then
        MinLongs = a
    Else
        MinLongs = b
    End If
End Function

And here are a couple examples of use:

Option Compare Database

Public Sub TestMatrix()
    Dim m1 As Matrix
    Set m1 = New Matrix
    m1.LoadMatrixString ("[[ 0, 1, 4, 9, 16]," & _
                         " [16, 15, 12, 7, 0]," & _
                         " [ 1, 1, 1, 1, 1]]")

    Dim m2 As Matrix
    Set m2 = New Matrix
    m2.LoadMatrixString ("[[190]," & _
                         " [190]," & _
                         " [ 20]]")

    MsgBox m1.Augment(m2).RowReduce.ToString
End Sub

Public Sub TestMatrix2()
    'This is an example iteration of a matrix Petri Net as described here:
    'http://www.techfak.uni-bielefeld.de/~mchen/BioPNML/Intro/MRPN.html
    Dim D_Minus As Matrix
    Dim D_Plus As Matrix
    Dim D As Matrix

    Set D_Minus = New Matrix
    D_Minus.LoadMatrixString "[[0, 0, 0, 0, 1]," & _
                             " [1, 0, 0, 0, 0]," & _
                             " [0, 1, 0, 0, 0]," & _
                             " [0, 0, 1, 1, 0]]"

    Set D_Plus = New Matrix
    D_Plus.LoadMatrixString "[[1, 1, 0, 0, 0]," & _
                            " [0, 0, 1, 1, 0]," & _
                            " [0, 0, 0, 1, 0]," & _
                            " [0, 0, 0, 0, 1]]"

    Set D = D_Plus.Subtract(D_Minus)
    MsgBox D.ToString

    Dim Transition_Matrix As Matrix
    Dim Marking_Matrix As Matrix
    Dim Next_Marking As Matrix

    Set Transition_Matrix = New Matrix
    Transition_Matrix.LoadMatrixString "[[0, 1, 1, 0]]"

    Set Marking_Matrix = New Matrix
    Marking_Matrix.LoadMatrixString "[[2, 1, 0, 0, 0]]"

    Set Next_Marking = Transition_Matrix.Multiply(D).Add(Marking_Matrix)
    MsgBox Next_Marking.ToString
End Sub

Answer:

Public Sub Class_Initialize()
End Sub

Avoid empty members; this initializer serves no purpose, remove it. Although I could infer r and c are meant for row and column, these single-letter parameters should probably be called row and column, for clarity. Likewise, Cols should probably be called Columns.
This is unfortunate:

Public Property Let Value(r As Long, c As Long, val As Double)

I'd consider calling the property ValueAt, and the val parameter could then be called value - and since parameters are passed ByRef by default, I'd be explicit about them being passed ByVal - there's no need to pass them by reference:

Public Property Let ValueAt(ByVal rowIndex As Long, ByVal columnIndex As Long, ByVal value As Double)

In the case of LoadMatrixString, I'd consider changing the signature from this:

Public Sub LoadMatrixString(str As String)

To that:

Public Sub LoadMatrixString(ByVal values As String)

And for the members that take a m As Matrix parameter, I'd go with ByVal value As Matrix and avoid single-letter identifiers. I find "value" remains the most descriptive name in these contexts. There's an inconsistency in the way you're naming "Dimensions": you have CheckDimensions, but then you also have GetDims - I'd rename the latter GetDimensions. I like how the class is self-contained, but then it seems to me like the ToString implementation would be a perfect excuse to use your wonderful StringBuilder class, and I bet you'd get the string output much, much faster ;)

As for this:

I'm particularly interested to know whether I should split the parser off into a separate class to be used independently, or maybe be called by the Matrix class itself.

I think you could simply move the parsing code to a MatrixParser class, and be done with it! ...Actually, I'd copy the LoadMatrixString procedure there, and rename it Parse, make it a Function and have it return a Matrix. Then LoadMatrixString could be modified to call this new function.
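To make the suggested refactor concrete, here is an illustrative sketch in Python (not VBA) of the same idea: the recursive-descent parsing logic lives in its own parser class whose parse method returns the parsed matrix, and the matrix class merely delegates to it. The class and method names follow the review's suggestion (MatrixParser, Parse); everything else here is an assumption for illustration, not the original code.

```python
# Sketch of the review's suggestion: parsing split into its own class.
# MatrixParser knows only about text; Matrix delegates to it.

class MatrixParser:
    """Parses strings like '[[1, 2], [3, 4]]' into a list of rows."""

    def __init__(self, text):
        self.text = text
        self.pos = 0

    def parse(self):
        self._match('[')
        rows = [self._parse_row()]
        while self._peek() == ',':
            self._match(',')
            rows.append(self._parse_row())
        self._match(']')
        if len({len(r) for r in rows}) > 1:  # all rows must have equal length
            raise ValueError("ragged rows")
        return rows

    def _parse_row(self):
        self._match('[')
        values = [self._parse_number()]
        while self._peek() == ',':
            self._match(',')
            values.append(self._parse_number())
        self._match(']')
        return values

    def _parse_number(self):
        self._skip_white()
        start = self.pos
        while self.pos < len(self.text) and self.text[self.pos] in '+-.0123456789eE':
            self.pos += 1
        if start == self.pos:
            raise ValueError("expected number at position %d" % self.pos)
        return float(self.text[start:self.pos])

    def _peek(self):
        self._skip_white()
        return self.text[self.pos] if self.pos < len(self.text) else ''

    def _match(self, char):
        if self._peek() != char:
            raise ValueError("expected %r at position %d" % (char, self.pos))
        self.pos += 1

    def _skip_white(self):
        while self.pos < len(self.text) and self.text[self.pos] in ' \t\r\n':
            self.pos += 1


class Matrix:
    @classmethod
    def from_string(cls, text):
        m = cls()
        m.rows = MatrixParser(text).parse()  # delegate parsing to the parser class
        return m
```

The payoff of the split is exactly what the reviewer describes: the parser can be tested and reused independently, while the matrix class keeps only matrix concerns.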
{ "domain": "codereview.stackexchange", "id": 31836, "tags": "matrix, vba" }
Why can't the Earth's core melt the whole planet?
Question: The Earth's core temperature ranges from about 4,400° Celsius (7,952° Fahrenheit) to about 6,000° Celsius (10,800° Fahrenheit). Source Why can't the Earth's core melt the whole planet? In other words, what is stopping the Earth from being melted all the way up to its surface? Answer: Think about a frozen-over lake in the winter. The water underneath is liquid, but it doesn't melt the ice. In fact, it wasn't even able to stop the ice from freezing as the weather got colder in the winter. The surface of the lake was losing heat faster than it could soak up heat from the warmer water below, so it froze while the deeper water was still liquid. The earth was completely molten right after the impact that formed the moon. That's like the lake at the end of fall. The liquid surface radiated heat away into space until first the surface solidified (pretty quickly) and then the depth of solid rock got greater and greater. The hotter molten rock down below just couldn't heat up the surface fast enough to keep it molten.
{ "domain": "physics.stackexchange", "id": 77009, "tags": "thermodynamics, temperature, earth, estimation, geophysics" }
BERTopic: Is it okay to ignore the first two topics?
Question: I used BERTopic to generate a topic model over a large dataset of texts. The result is very appealing and the modeled topics are mostly perfectly interpretable for a human, especially compared to other topic modeling approaches. According to the documentation (e.g. https://maartengr.github.io/BERTopic/getting_started/quickstart/quickstart.html) the topic with number -1 refers to outliers and should be ignored. Topic -1 is in my case indeed a topic consisting of unrelated common words without a possible interpretation. However, in my topic model, the second topic, with number 0, is likewise just a mixture of unrelated words and not a "nice topic" in the human sense. All the following topics are very nice topics with a clear meaning. My question: Is it okay to ignore the first two topics (-1 & 0) modeled by BERTopic and only start using the topics from topic 1 on? Or is this problematic, indicating an issue with the model? Is there a parameter that can be changed to alter this behavior, so that only topic -1 will be an uninterpretable topic? Answer: I received an answer for this question on GitHub: It might be that topic 0 groups documents together that have little meaning and as such can be considered to be a topic. From that perspective, they could indeed be considered outliers. For example, if you pass the model a couple of hundred empty documents, it will generate a perfectly valid cluster with no labels. That cluster could then be ignored. You could, however, merge -1 and 0 together with .merge_topics or .update_topics to make sure they are indeed -1.
{ "domain": "datascience.stackexchange", "id": 11654, "tags": "bert, topic-model" }
Using gmapping to build map and transport to the Non-ROS PC
Question: Hello ROS users, Is it possible to use gmapping to build the map and transport the map to a non-ROS computer? To specify my question: I have a SICK TiM 561 lidar (a 2D lidar) and I can launch it from the ROS package on an Ubuntu 14.04 LTS ROS computer. Because I need to build the map to let my vehicle know where it is and then do some vehicle dynamics such as obstacle avoidance and parking, I want to use gmapping to build the maps and then transport the map to the non-ROS computer. But is it really possible to transport a map built by gmapping to a non-ROS computer? Does someone know? Could you give me some directions or suggestions? Thanks in advance. Originally posted by Terry Su on ROS Answers with karma: 23 on 2016-01-04 Post score: 0 Answer: You could use the map_saver in http://wiki.ros.org/map_server to convert the map into an image that can easily be copied to your non-ROS PC. Originally posted by NEngelhard with karma: 3519 on 2016-01-04 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by Terry Su on 2016-01-04: Because the lidar transmits the distance data to my ROS computer. Can the map_saver collect all the data of the lidar and convert it into the X,Y coordinates of the map? Thank you. Comment by NEngelhard on 2016-01-04: No it cannot. As it is clearly explained in the linked page... Comment by Terry Su on 2016-01-05: @NEngelhard, I saw it and I think I need to change the title, because I need to transport the map, which is just a 0-or-1 signal or the x,y location of the object, and feed that to the non-ROS computer, which is the controller that lets the robot really know the environment. But thanks anyway.
{ "domain": "robotics.stackexchange", "id": 23340, "tags": "ros, slam, navigation, follow-joint-trajectory, gmapping" }
Does there exist a hyperbolic relationship between frequency $\omega$ and wavenumber $k$?
Question: As the title states, is it possible to derive a hyperbolic relationship of the form $\frac{x^2}{a^2} - \frac{y^2}{b^2} = 1$ between frequency $\omega$ and wavenumber $k$? I have tried to start this from the general phase-group velocity relationship $v_p v_g = c^2$, where $v_p = \frac{\omega}{k}$ and $v_g = \frac{d\omega}{dk}$, and upon substitution arrived at the obvious conclusion that $\omega^2/k^2 = c^2$. Can someone help me carry this train of thought further? Thanks! Answer: The free-space dispersion equation is $\omega^2 = k^2\,c^2$ and this cannot change: this simply follows from considering plane wave components of propagating fields, which all fulfil the Helmholtz equation $$\nabla^2 A_j + \frac{\omega^2}{c^2} A_j = 0\tag{1}$$ which is fulfilled by all Cartesian components of the monochromatic EM field vectors and, for a plane wave, $\nabla^2 = -|k|^2$, whence the dispersion relationship. The dispersion of a massive particle fulfilling the Klein-Gordon equation yields what you want. All particles fulfilling the Dirac equation in their first quantised (semiclassical) description fulfill the KG equation (but the converse is not true). The Klein-Gordon equation is: $$\frac {1}{c^2} \frac{\partial^2}{\partial t^2} \psi - \nabla^2 \psi + \frac {m^2 c^2}{\hbar^2} \psi = 0\tag{2}$$ where $m$ is the particle's rest mass and, on writing $\nabla^2 \mapsto -|k|^2$, $\partial_t\mapsto-i\,\omega$ for a plane, monochromatic wave, we find the dispersion relationship is: $$\frac{\omega^2}{c^2} - k^2 = \frac{m^2\,c^2}{\hbar^2}\tag{3}$$ i.e. the equation for a hyperbola, as you wished for.
This hyperbolic relationship actually yields a really interesting insight into mass at the quantum level: from classical mechanics and general relativity, the notions of inertial mass and gravitational mass are wonted to us, but a third interpretation is that rest mass measures a particle's "stay-putability": the group velocity is: $$v_g = \frac{\mathrm{d}\,\omega}{\mathrm{d}\,k} = \frac{c}{\sqrt{1+\frac{m^2\,c^2}{\hbar^2\,k^2}}}\tag{4}$$ Massless particles must always be observed to be travelling at speed $c$, as shown by (4) (which becomes $v_g=c$ if $m=0$). They are always dispersionless. However, if $m$ is nonzero in (4) you can slow a particle down, or "make it stay put", by making the momentum $\hbar\,k$ very small. You can see now from (4) what I mean by saying that mass measures a particle's "stay-putability". You can make a massive particle stay put, at least as far as the Heisenberg uncertainty principle lets you. So now, can we simulate this dispersion for light? On elimination of the magnetic field vectors from Maxwell's equations, we find, for each of the Cartesian components of the electric field: $$-\nabla^2 E + \frac{1}{c^2}\frac{\partial^2}{\partial\,t^2} E +\mu\,\frac{\partial}{\partial\,t} J = 0\tag{5}$$ where $J$ is the corresponding Cartesian component of the current density. So, to simulate your hyperbolic relationship, we need to reproduce the Klein-Gordon equation, which in turn requires that: $$\mu\,\frac{\partial}{\partial\,t} J = \frac{m^2\,c^2}{\hbar^2}\,E\tag{6}$$ This is a highly unusual relationship: Ohm's law is $J = \sigma\, E$, so (6) implies a nondissipative current density in phase quadrature with the electric field. I can't imagine a physical material that would do this.
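The hyperbola (3) and the group-velocity formula (4) are easy to check numerically. The sketch below (plain Python, illustrative units with $\hbar = c = 1$ and an arbitrary test mass) verifies both, comparing (4) against a finite-difference derivative of $\omega(k)$:

```python
# Numerical check of the massive dispersion relation (3) and the group
# velocity (4), in units where hbar = c = 1 (an illustrative choice).
import math

m = 2.0  # rest mass, arbitrary test value

def omega(k):
    # omega^2 = k^2 + m^2, i.e. omega^2/c^2 - k^2 = m^2 c^2 / hbar^2
    return math.sqrt(k * k + m * m)

def v_group(k, dk=1e-6):
    # central finite difference for d(omega)/dk
    return (omega(k + dk) - omega(k - dk)) / (2 * dk)

def v_group_closed(k):
    # equation (4): c / sqrt(1 + m^2 c^2 / (hbar^2 k^2))
    return 1.0 / math.sqrt(1.0 + m * m / (k * k))

for k in (0.1, 1.0, 5.0, 50.0):
    assert abs(omega(k) ** 2 - k ** 2 - m * m) < 1e-9   # hyperbola (3)
    assert abs(v_group(k) - v_group_closed(k)) < 1e-6   # (4) matches d(omega)/dk
    assert v_group(k) < 1.0                             # always subluminal
# As k grows, v_g -> c; as k -> 0, v_g -> 0: the "stay-putability" of mass.
```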
{ "domain": "physics.stackexchange", "id": 18310, "tags": "electromagnetism, waves, speed-of-light, wavefunction, velocity" }
Topology of equipotential surfaces
Question: Let's think of planar equipotential surfaces, say parallel to the x-y plane; then apparently $p_x$ and $p_y$ are conserved quantities. Next, let's move on to cylindrical equipotentials. Then $p_z$ and $p_\phi$ are conserved, from symmetries. The other way we can think about it is that a cylinder can be obtained by identifying two opposite borders of a rectangle. So we can map $p_x \rightarrow p_z, p_y\rightarrow p_\phi$. Let's do the folding once more, so that the cylinder now becomes a torus. If you believe topology, then we should expect two conserved momenta, associated with orbiting the red circle in the image and the magenta circle. But careful calculation invoking Noether's theorem doesn't seem to support this belief, because transformation along the red circle doesn't seem to preserve the Lagrangian and hence doesn't provide a conserved momentum. To expand a little more, I use the following coordinates on the torus, where $\theta$ is the angle on the minor circles and $\phi$ is on the major circle. Then the Lagrangian expressed in terms of $\theta$ and $\phi$ depends on $\theta$, so $p_\theta$ is not conserved. So, if you believe in topology, where is the other symmetry? \begin{align}x(\theta,\varphi) &= (R+ r\cos\theta)\cos\varphi\\ y(\theta,\varphi) &= (R+ r\cos\theta)\sin\varphi\\ z(\theta,\varphi)&= r\sin\theta\end{align} Answer: Your represented (in the nice figure!) torus is non-flat and is immersed in $\mathbb R^3$; the one obtained by the identifications of the opposite edges of a rectangle is instead flat and is not metrically immersed in $\mathbb R^3$. These two kinds of torus are topologically identical (homeomorphic and also diffeomorphic actually), but they are metrically distinct (they are not isometric). Here metrical notions matter. This is the reason why the immersed torus has one symmetry less than the flat torus.
The orbits of the angle $\theta$ are symmetries provided the metric on the torus is flat, as it arises from putting the metric of the plane on the torus with the standard identifications, just to produce a flat torus. However, this metric is not the one the torus receives from the metric of $\mathbb R^3$ viewing it as an immersed surface: the curvature shows up here and $\theta$ is not a metrically invariant direction. In the flat torus, for instance, all the violet circles have the same length; in the immersed torus, their length varies with $\theta$, as is evident from the figure... The Lagrangian possesses the corresponding symmetries depending on which notion of torus you consider. In the limit case of a torus with an infinite radius $R$, that is a cylinder, the two metrics coincide. This is the reason why you cannot see the problem just looking at the cylinder. This is an interesting example where topology is not enough to fix physics.
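The metric point can be made concrete: pulling back the Euclidean metric of $\mathbb R^3$ through the parametrization in the question gives (my computation, consistent with the answer) $g_{\theta\theta}=r^2$ and $g_{\varphi\varphi}=(R+r\cos\theta)^2$, so the length of a $\varphi$-circle at fixed $\theta$ is $2\pi(R+r\cos\theta)$ and depends on $\theta$, whereas on the flat torus all such circles have equal length. A small numerical check using the embedding:

```python
# Check that circle lengths on the embedded torus depend on theta, unlike on
# the flat torus. We approximate the length of a phi-circle at fixed theta by
# summing small chords of the embedding given in the question.
import math

R, r = 2.0, 0.5  # major and minor radii, arbitrary test values

def point(theta, phi):
    return ((R + r * math.cos(theta)) * math.cos(phi),
            (R + r * math.cos(theta)) * math.sin(phi),
            r * math.sin(theta))

def phi_circle_length(theta, n=20000):
    total = 0.0
    prev = point(theta, 0.0)
    for i in range(1, n + 1):
        cur = point(theta, 2 * math.pi * i / n)
        total += math.dist(prev, cur)
        prev = cur
    return total

outer = phi_circle_length(0.0)        # theta = 0: a circle of radius R + r
inner = phi_circle_length(math.pi)    # theta = pi: a circle of radius R - r
assert abs(outer - 2 * math.pi * (R + r)) < 1e-3
assert abs(inner - 2 * math.pi * (R - r)) < 1e-3
assert outer > inner  # length varies with theta, so translation in theta is
                      # not an isometry and p_theta is not conserved
```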
{ "domain": "physics.stackexchange", "id": 33966, "tags": "classical-mechanics, symmetry, topology, noethers-theorem" }
Java PrintWriter not printing data to file
Question: This program is compiling just fine, and the variables and such seem to be passing through. However, when I open the text file there is no data. Can anyone point me in the right direction or nudge my brain into the right thought process?

package dvdlogger;

import java.util.*;
import java.io.*;

public class DVDLogger {

    public DVDLogger() throws IOException {
        String title = null;
        double price = 0;
        storeDVDInfo(title, price);
    }

    private void storeDVDInfo(String title, double price) throws IOException {
        File path = new File("C:\\users\\crazy\\desktop\\dvd.txt");
        PrintWriter output = new PrintWriter(path);
        output.printf("%1$s %2$.2f", title, price);
    }

    public static void main(String[] args) throws IOException {
        DVDLogger dvd;
        String title;
        double price;
        try (Scanner read = new Scanner(System.in)) {
            dvd = new DVDLogger();
            System.out.printf("%nPlease enter the title of the DVD: ");
            title = read.nextLine();
            System.out.printf("%nPlease enter the price of the DVD: ");
            price = read.nextDouble();
        }
        dvd.storeDVDInfo(title, price);
    }
}

Answer: With any Java Writer or Stream... YOU MUST PROPERLY FLUSH AND CLOSE it. The documentation is not fantastically clear... but: when you exit the method, you must ensure that the file is flushed and closed. You are exiting your program before the data is written to disk. Always use the following pattern when doing IO (if you are using Java 7):

try (PrintWriter output = new PrintWriter(path)) {
    output.printf("%1$s %2$.2f", title, price);
    output.flush();
}

This is a 'try-with-resources' block.
{ "domain": "codereview.stackexchange", "id": 5275, "tags": "java, file, io" }
Perfect matching problem
Question: Suppose you are given two sets of integers L and M, both having N elements. The problem is to match each number in L to a number in M. Such a perfect matching has a cost given by $\sum_{i=1}^{N} l_i\,m_i$. I want to find a perfect matching with some given cost. I suspect that this is hard (i.e. NP-complete). Can you solve it quickly? (find an efficient algorithm). Here is an example: we have three courses; credit hours are 4, 5, 8, and grade points are 4, 3, 2. A solution is a perfect matching between the lists that results in some given GPA. For this to be computationally meaningful, the largest grade point and largest credit hour are unbounded. P.S. Yuval hinted at the reduction from the subset sum problem. I am interested in a hardness proof of strong NP-completeness. Answer: This problem is indeed $\mathsf{NP}$-complete, as you have suspected. To see this, we will show a reduction from $\mathrm{SubsetSum}$.

The reduction

Let $A,s$ be an instance of $\mathrm{SubsetSum}$, where $|A|=n$. We will define $L$ as all the numbers in $A$, in addition to another $n$ zeros. For example, if $A= 4, 5, 8$ then $L=4,5,8,0,0,0$. Now, we define $M$ to have $2n$ values: $n$ ones and $n$ zeros. The target value will be $s$.

Proof

Such a perfect matching defines a subset $A'\subseteq A$, where $a\in A'$ iff $1$ is matched with $a$. Since we have $n$ values in $A$, and we have $n$ ones and $n$ zeros in $M$, each pair of $(0,1)$ in $M$ can be assigned to every pair $(a, 0)$ where $a\in A$, and thus any $a\in A$ can be matched with either $0$ or $1$, contributing to the total sum either $a$ or $0$. Therefore, there is a perfect matching if and only if there is a subset with that sum (as the value of the matching is the sum of the subset $A'$).
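The reduction is easy to sanity-check on a small instance by brute force (this is exponential enumeration, of course, not an efficient algorithm; it just confirms the equivalence on the answer's own example):

```python
# Sanity check of the reduction: for a SubsetSum instance (A, s), build
# L = A plus n zeros and M = n ones plus n zeros. A perfect matching of
# cost s exists iff some subset of A sums to s. Brute force, small n only.
from itertools import permutations, combinations

def has_matching_with_cost(L, M, target):
    # try every assignment of M's values to positions of L
    return any(sum(l * m for l, m in zip(L, perm)) == target
               for perm in permutations(M))

def has_subset_with_sum(A, target):
    return any(sum(c) == target
               for k in range(len(A) + 1)
               for c in combinations(A, k))

A = [4, 5, 8]
n = len(A)
L = A + [0] * n
M = [1] * n + [0] * n
# the two predicates agree for every candidate target value
for s in range(sum(A) + 2):
    assert has_matching_with_cost(L, M, s) == has_subset_with_sum(A, s)
```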
{ "domain": "cs.stackexchange", "id": 17823, "tags": "algorithms, complexity-theory, time-complexity" }
SPARK 1.5.1: Convert multi-labeled data into binary vector
Question: I am using Spark 1.5.1, and I have a DataFrame that looks as follows:

labelsCol, featureCol
(Label1, Label2, Label32), FeatureVector
(Label1, Label10, Label16, Label30, Label48), FeatureVector
...
(Label1, Label95), FeatureVector

The first column is the list of labels for that sample, and in total I have 100 labels. I would like to build a binary classifier for each label, so I want to transform the labels-list column into a binary vector. The binary vector will have length 100, and each value will be 0 or 1 depending on whether the label is present for that sample. Is there any straightforward solution for this? Answer: Spark only recently implemented CountVectorizer, which will take the labels (as strings) and encode them as your 100-dimensional vector (assuming all 100 labels show up somewhere in your dataset). Once you have those vectors, it should be a simple step to threshold them to make them 0/1 instead of a frequency.
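Outside of Spark, the transformation itself is simple: fit a label-to-index vocabulary, then for each row set position i to 1.0 iff label i occurs (i.e., threshold counts to presence). A plain-Python sketch of that multi-hot encoding, with made-up label names for illustration:

```python
# Multi-hot (binary) encoding sketch: turn each sample's label list into a
# fixed-length 0/1 vector. The vocabulary fitting mirrors what Spark's
# CountVectorizer does; the sample rows below are illustrative only.

def fit_label_index(label_lists):
    """Assign a stable index to every distinct label seen in the data."""
    vocab = sorted({lab for labels in label_lists for lab in labels})
    return {lab: i for i, lab in enumerate(vocab)}

def multi_hot(labels, index):
    vec = [0.0] * len(index)
    for lab in labels:
        vec[index[lab]] = 1.0  # presence only: counts thresholded to 0/1
    return vec

rows = [("Label1", "Label2", "Label32"),
        ("Label1", "Label10", "Label16"),
        ("Label1", "Label95")]
index = fit_label_index(rows)
vectors = [multi_hot(labels, index) for labels in rows]

assert all(len(v) == len(index) for v in vectors)
assert all(v[index["Label1"]] == 1.0 for v in vectors)  # Label1 is in every row
assert vectors[2][index["Label2"]] == 0.0               # absent labels stay 0
```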
{ "domain": "datascience.stackexchange", "id": 506, "tags": "classification, apache-spark, multilabel-classification" }
Non-deterministic Büchi vs Rabin: Automaton size for LTL->automaton
Question: Is there any general result showing which automaton is more succinct? I have a set of LTL properties and I would like to know (show) which automaton is more efficient in terms of the number of states and edges. Answer: Consider a nondeterministic Büchi automaton $A = \langle \Sigma, Q, q_0, \delta, \alpha\rangle$. The acceptance condition of $A$ is a subset of states $\alpha\subseteq Q$, and an infinite run $r = q_0, q_1, q_2, \ldots$ over an infinite word $\sigma_1\sigma_2\cdots $ is accepting iff $r$ visits a state in $\alpha$ infinitely many times. In Rabin automata, the acceptance condition is given by $\alpha = \{\langle \alpha_1, \beta_1 \rangle, \langle \alpha_2, \beta_2 \rangle, \ldots , \langle \alpha_k, \beta_k \rangle \}$, where $ \alpha_i, \beta_i\subseteq Q$, for all $i\in [k]$. An infinite run $r = q_0, q_1, q_2, \ldots$ over an infinite word $\sigma_1\sigma_2\cdots $ is accepting iff for some $i\in [k]$, $r$ visits a state in $\alpha_i$ infinitely many times, and visits the states in $\beta_i$ finitely many times. The number $k$ is the index of the automaton, and is usually taken into account when defining the automaton's size. Clearly, a Büchi condition $\alpha$ is equivalent to the Rabin condition $\{ \langle \alpha, \emptyset \rangle\}$. Thus, nondeterministic Büchi automata can be translated to nondeterministic Rabin automata with no blowup (they can be thought of as a simple fragment of Rabin automata - although they are equally expressive). So, nondeterministic Büchi automata cannot be more succinct than nondeterministic Rabin automata. However, a nondeterministic Rabin automaton with $n$ states, $m$ transitions, and index $k$, can be translated to a nondeterministic Büchi automaton with $O(n\cdot k)$ states, and $O(k\cdot m)$ transitions. The latter translation is tight, as justified by a matching lower bound. So, nondeterministic Rabin automata are polynomially more succinct than nondeterministic Büchi automata, which is not significant.
A comprehensive overview of the translations can be found here (along with other relevant references). Note that you can easily translate an LTL formula into a nondeterministic Büchi automaton (see section 6 here), and the translation has a tight exponential bound. However, following your comment, if you're interested in translating the LTL formula into a deterministic automaton, then you can translate the Büchi automaton to a deterministic Rabin automaton (such a determinization construction has a tight bound of $2^{\Theta(n \log n)}$). Also, the double-exponential blow-up of translating LTL formulas to deterministic automata cannot be avoided (see theorem 26 here).
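The two acceptance conditions, and the "no blowup" direction (a Büchi set $\alpha$ is the one-pair Rabin condition $\{\langle\alpha,\emptyset\rangle\}$), can be checked concretely on ultimately periodic ("lasso") runs, where "infinitely often" means "occurs in the loop". This is my own illustrative rendering, not from a library:

```python
# For an ultimately periodic run (finite stem, then a loop repeated forever),
# a state is visited infinitely often iff it appears in the loop. With that,
# the Buchi condition coincides with the one-pair Rabin condition <alpha, {}>.

def buchi_accepts(loop, alpha):
    # some state of alpha is visited infinitely often
    return any(q in alpha for q in loop)

def rabin_accepts(loop, pairs):
    # for some pair (A_i, B_i): A_i visited infinitely often, B_i only finitely
    inf = set(loop)
    return any(inf & A and not (inf & B) for A, B in pairs)

alpha = {2}
for loop in ([0, 1], [1, 2], [2], [0, 1, 2]):
    assert buchi_accepts(loop, alpha) == rabin_accepts(loop, [(alpha, set())])

# A genuinely Rabin-style requirement on the run itself: visit state 1
# infinitely often AND state 3 only finitely often. A single Buchi set over
# the same states cannot express the "finitely often" half.
pairs = [({1}, {3})]
assert rabin_accepts([0, 1], pairs)
assert not rabin_accepts([1, 3], pairs)
```

Encoding such a pair into a Büchi automaton requires extra state (copies that "commit" to avoiding $\beta_i$ from some point on), which is exactly where the $O(n\cdot k)$ factor in the translation comes from.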
{ "domain": "cs.stackexchange", "id": 17455, "tags": "automata, linear-temporal-logic" }
Turning non-symmetric filter into a symmetric filter
Question: I would like to ask if there is a systematic way to turn a non-symmetric filter into a symmetric equivalent with the same frequency response. For example, let's say I have a finite-length filter with the coefficients [1,-1.5,0.5] for its impulse response (the zeros are at $z=1$ and $z=0.5$). What would be the coefficients of the new filter so that they both have the same frequency response? My guess is something like this: $$ H_{new}(z) = \frac{(z-1)(z-p)(z-1/p)}{z^3},\ \ \ \ 0<p<1 $$ Is there any generalizable approach to finding the new coefficients based on the ones I have? What about anti-symmetric? Answer: I would like to ask if there is a systematic way to turn a non-symmetric filter into a symmetric equivalent with the same frequency response. Nope. Consider the filter $$H(z) = \left(1 + 0.5z^{-1}\right) \left(1 + 0.4z^{-1}\right) = 1 + 0.9z^{-1} + 0.2z^{-2}$$ It has zeros at $z = \{-0.5, -0.4\}$ (and a double pole at $z = 0$). It is non-symmetric, and that non-symmetry is forced by the zeros. The other thing that's forced by the combination of poles (boring in this case) and zeros is the amplitude response. The only thing you can do to change the filter's impulse response without changing its amplitude response is to replace a pole or a zero by its complement on the other side of the stability boundary. In this case, that means that you can exchange $1 + 0.5z^{-1}$ with $0.5 + z^{-1}$, and you can exchange $1 + 0.4z^{-1}$ with $0.4 + z^{-1}$. Swapping either one of these will give you a filter that is more symmetrical -- i.e. $$H(z) = \left(1 + 0.5z^{-1}\right) \left(0.4 + z^{-1}\right) = 0.4 + 1.2z^{-1} + 0.5z^{-2}$$ but you won't get to exactly symmetrical with the same frequency response.
This is going to be the case any time that you have a filter that does not have complementary pairs of poles and zeros (i.e., $1 - az^{-1}$ paired with $a - z^{-1}$, or $s - a$ paired with $s + a$), or at least conjugate pairs on the stability boundary (i.e., $1 + z^{-1} + z^{-2}$, with zeros at $z = -\frac 1 2 \pm j \frac {\sqrt{3}} 2$). You can always come arbitrarily close with a symmetric filter using FIR filter synthesis. However, you will never get spot-on with a finite-length filter, and the closer you get, in general, the longer your filter will need to be.
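The zero-flipping claim is easy to verify numerically: replacing $1 + 0.4z^{-1}$ by $0.4 + z^{-1}$ leaves $|H(e^{j\omega})|$ unchanged at every frequency, while changing the (non-symmetric) tap values. A stdlib-Python sketch:

```python
# Verify that flipping a zero to the other side of the unit circle preserves
# the magnitude response: h1 = (1 + 0.5 z^-1)(1 + 0.4 z^-1) versus
# h2 = (1 + 0.5 z^-1)(0.4 + z^-1). Same |H|, different (more symmetric) taps.
import cmath, math

h1 = [1.0, 0.9, 0.2]   # 1 + 0.9 z^-1 + 0.2 z^-2
h2 = [0.4, 1.2, 0.5]   # 0.4 + 1.2 z^-1 + 0.5 z^-2

def freq_mag(h, w):
    z1 = cmath.exp(-1j * w)  # z^-1 evaluated on the unit circle
    return abs(sum(c * z1 ** n for n, c in enumerate(h)))

for k in range(64):
    w = math.pi * k / 63
    assert abs(freq_mag(h1, w) - freq_mag(h2, w)) < 1e-12

# h2 is closer to symmetric than h1, but still not exactly symmetric:
assert h2 != h2[::-1]
```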
{ "domain": "dsp.stackexchange", "id": 12487, "tags": "filter-design, frequency-response, linear-phase, symmetry" }
Why do people categorically dismiss some simple quantum models?
Question: Deterministic models. Clarification of the question: The problem with these blogs is that people are inclined to start yelling at each other. (I admit, I got infected and it's difficult not to raise one's electronic voice.) I want to ask my question without an entourage of polemics. My recent papers were greeted with scepticism. I've no problem with that. What disturbs me is the general reaction that they are "wrong". My question is summarised as follows: Did any of these people actually read the work and can anyone tell me where a mistake was made? Now the details. I can't help being disgusted by the "many world" interpretation, or the Bohm-de Broglie "pilot waves", and even the idea that the quantum world must be non-local is difficult to buy. I want to know what is really going on, and in order to try to get some ideas, I construct some models with various degrees of sophistication. These models are of course "wrong" in the sense that they do not describe the real world, they do not generate the Standard Model, but one can imagine starting from such simple models and adding more and more complicated details to make them look more realistic, in various stages. Of course I know what the difficulties are when one tries to underpin QM with determinism. Simple probabilistic theories fail in an essential way. One or several of the usual assumptions made in such a deterministic theory will probably have to be abandoned; I am fully aware of that. On the other hand, our world seems to be extremely logical and natural. Therefore, I decided to start my investigation at the other end. Make assumptions that later surely will have to be amended; make some simple models, compare these with what we know about the real world, and then modify the assumptions any way we like. The no-go theorems tell us that a simple cellular automaton model is not likely to work. One way I tried to "amend" them was to introduce information loss.
At first sight this would carry me even further away from QM, but if you look a little more closely, you find that one still can introduce a Hilbert space, but it becomes much smaller and it may become holographic, which is something we may actually want. If you then realize that information loss makes any mapping from the deterministic model to QM states fundamentally non-local—while the physics itself stays local—then maybe the idea becomes more attractive. Now the problem with this is that again one makes too big assumptions, and the math is quite complicated and unattractive. So I went back to a reversible, local, deterministic automaton and asked: To what extent does this resemble QM, and where does it go wrong? With the idea in mind that we will alter the assumptions, maybe add information loss, put in an expanding universe, but all that comes later; first I want to know what goes wrong. And here is the surprise: In a sense, nothing goes wrong. All you have to assume is that we use quantum states, even if the evolution laws themselves are deterministic. So the probability distributions are given by quantum amplitudes. The point is that, when describing the mapping between the deterministic system and the quantum system, there is a lot of freedom. If you look at any one periodic mode of the deterministic system, you can define a common contribution to the energy for all states in this mode, and this introduces a large number of arbitrary constants, so we are given much freedom. Using this freedom I end up with quite a few models that I happen to find interesting. Starting with deterministic systems I end up with quantum systems. I mean real quantum systems, not any of those ugly concoctions. On the other hand, they are still a long way off from the Standard Model, or even anything else that shows decent, interacting particles. Except string theory. 
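A toy illustration (my own rendering, not taken from the referenced papers) of "a periodic mode of the deterministic system" becoming a quantum system: the deterministic update of an $N$-state cycle is a permutation, and the discrete-Fourier vectors diagonalize it with evolution phases $e^{-2\pi i k/N}$, i.e. equally spaced "energies" defined only up to the kind of arbitrary additive constants mentioned above.

```python
# Toy model (illustrative only): a deterministic N-state cycle n -> n+1 mod N.
# Its one-step update is a permutation operator U. The DFT vectors |k> are
# eigenvectors of U with eigenvalue exp(-2*pi*i*k/N), so mode k carries an
# "energy" 2*pi*k/N, fixed only up to an arbitrary additive constant.
import cmath, math

N = 6

def apply_U(state):
    # U |n> = |n+1 mod N>: amplitude at n comes from the old amplitude at n-1
    return [state[(n - 1) % N] for n in range(N)]

def dft_vector(k):
    norm = 1.0 / math.sqrt(N)
    return [norm * cmath.exp(2j * math.pi * k * n / N) for n in range(N)]

for k in range(N):
    v = dft_vector(k)
    Uv = apply_U(v)
    lam = cmath.exp(-2j * math.pi * k / N)  # expected eigenvalue of mode k
    assert all(abs(Uv[n] - lam * v[n]) < 1e-12 for n in range(N))

# An "ontic" basis state evolves classically (a permutation of probabilities),
# while in the Fourier basis the same dynamics looks like quantum phase evolution.
ontic = [0.0] * N
ontic[2] = 1.0
assert apply_U(ontic)[3] == 1.0
```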
Is the model I constructed a counterexample, showing that what everyone tells me about fundamental QM being incompatible with determinism, is wrong? No, I don't believe that. The idea was that, somewhere, I will have to modify my assumptions, but maybe the usual assumptions made in the no-go theorems will have to be looked at as well. I personally think people are too quick in rejecting "superdeterminism". I do reject "conspiracy", but that might not be the same thing. Superdeterminism simply states that you can't "change your mind" (about which component of a spin to measure), by "free will", without also having a modification of the deterministic modes of your world in the distant past. It's obviously true in a deterministic world, and maybe this is an essential fact that has to be taken into account. It does not imply "conspiracy". Does someone have a good, or better, idea about this approach, without name-calling? Why are some of you so strongly opinionated that it is "wrong"? Am I stepping on someone's religious feelings? I hope not. References: "Relating the quantum mechanics of discrete systems to standard canonical quantum mechanics", arXiv:1204.4926 [quant-ph]; "Duality between a deterministic cellular automaton and a bosonic quantum field theory in $1+1$ dimensions", arXiv:1205.4107 [quant-ph]; "Discreteness and Determinism in Superstrings", arXiv:1207.3612 [hep-th]. Further reactions on the answers given. (Writing this as "comment" failed, then writing this as "answer" generated objections. I'll try to erase the "answer" that I should not have put there...) First: thank you for the elaborate answers. I realise that my question raises philosophical issues; these are interesting and important, but not my main concern. I want to know why I find no technical problem while constructing my model. I am flattered by the impression that my theories were so "easy" to construct. Indeed, I made my presentation as transparent as possible, but it wasn't easy.
There are many dead alleys, and not all models work equally well. For instance, the harmonic oscillator can be mapped onto a simple periodic automaton, but then one does hit upon technicalities: The hamiltonian of a periodic system seems to be unbounded above and below, while the harmonic oscillator has a ground state. The time-reversible cellular automaton (CA) that consists of two steps $A$ and $B$, where both $A$ and $B$ can be written as the exponent of physically reasonable Hamiltonians, itself is much more difficult to express as a Hamiltonian theory, because the BCH series does not converge. Also, explicit $3+1$ dimensional QFT models resisted my attempts to rewrite them as cellular automata. This is why I was surprised that the superstring works so nicely, it seems, but even here, to achieve this, quite a few tricks had to be invented. @RonMaimon. I here repeat what I said in a comment, just because there the 600 character limit distorted my text too much. You gave a good exposition of the problem in earlier contributions: in a CA the "ontic" wave function of the universe can only be in specific modes of the CA. This means that the universe can only be in states $\psi_1,\ \psi_2,\ ...$ that have the property $\langle\psi_i\,|\,\psi_j\rangle=\delta_{ij}$, whereas the quantum world that we would like to describe, allows for many more states that are not at all orthonormal to each other. How could these states ever arise? I summarise, with apologies for the repetition: We usually think that Hilbert space is separable, that is, inside every infinitesimal volume element of this world there is a Hilbert space, and the entire Hilbert space is the product of all these. Normally, we assume that any of the states in this joint Hilbert space may represent an "ontic" state of the Universe. I think this might not be true. The ontic states of the universe may form a much smaller class of states $\psi_i$; in terms of CA states, they must form an orthonormal set. 
In terms of "Standard Model" (SM) states, this orthonormal set is not separable, and this is why, locally, we think we have not only the basis elements but also all superpositions. The orthonormal set is then easy to map back onto the CA states. I don't think we have to talk about a non-denumerable number of states, but the number of CA states is extremely large. In short: the mathematical system allows us to choose: take all CA states, then the orthonormal set is large enough to describe all possible universes, or choose the much smaller set of SM states, then you also need many superimposed states to describe the universe. The transition from one description to the other is natural and smooth in the mathematical sense. I suspect that, this way, one can see how a description that is not quantum mechanical at the CA level (admitting only "classical" probabilities), can "gradually" force us into accepting quantum amplitudes when turning to larger distance scales, and limiting ourselves to much lower energy levels only. You see, in words, all of this might sound crooked and vague, but in my models I think I am forced to think this way, simply by looking at the expressions: In terms of the SM states, I could easily decide to accept all quantum amplitudes, but when turning to the CA basis, I discover that superpositions are superfluous; they can be replaced by classical probabilities without changing any of the physics, because in the CA, the phase factors in the superpositions will never become observable. @Ron I understand that what you are trying to do is something else. It is not clear to me whether you want to interpret $\delta\rho$ as a wave function. (I am not worried about the absence of $\mathrm{i}$, as long as the minus sign is allowed.) My theory is much more direct; I use the original "quantum" description with only conventional wave functions and conventional probabilities. (New since Sunday Aug. 20, 2012) There is a problem with my argument.
(I correct some statements I had put here earlier). I have to work with two kinds of states: 1: the template states, used whenever you do quantum mechanics, these allow for any kind of superposition; and 2: the ontic states, the set of states that form the basis of the CA. The ontic states $|n\rangle$ are all orthonormal: $\langle n|m\rangle=\delta_{nm}$, so no superpositions are allowed for them (unless you want to construct a template state of course). One can then ask the question: How can it be that we (think we) see superimposed states in experiments? Aren't experiments only seeing ontic states? My answer has always been: Who cares about that problem? Just use the rules of QM. Use the templates to do any calculation you like, compute your state $|\psi\rangle$, and then note that the CA probabilities, $\rho_n=|\langle n|\psi\rangle|^2$, evolve exactly as probabilities are supposed to do. That works, but it leaves the question unanswered, and for some reason, my friends on this discussion page get upset by that. So I started thinking about it. I concluded that the template states can be used to describe the ontic states, but this means that, somewhere along the line, they have to be reduced to an orthonormal set. How does this happen? In particular, how can it be that experiments strongly suggest that superpositions play extremely important roles, while according to my theory, somehow, these are plutoed by saying that they aren't ontic? Looking at the math expressions, I now tend to think that orthonormality is restored by "superdeterminism", combined with vacuum fluctuations. The thing we call vacuum state, $|\emptyset\rangle$, is not an ontological state, but a superposition of many, perhaps all, CA states. The phases can be chosen to be anything, but it makes sense to choose them to be $+1$ for the vacuum. This is actually a nice way to define phases: all other phases you might introduce for non-vacuum states now have a definite meaning.
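The statement that the CA probabilities $\rho_n=|\langle n|\psi\rangle|^2$ evolve exactly as classical probabilities, with the superposition phases never becoming observable, is easy to check numerically for a toy automaton. The following sketch (my own illustration, not part of the original post) evolves a template superposition by a permutation matrix and verifies that the Born-rule probabilities simply follow the permutation, independently of the phases chosen:

```python
import numpy as np

rng = np.random.default_rng(0)

# A deterministic, reversible CA on 6 "ontic" basis states is a permutation.
perm = rng.permutation(6)
U = np.eye(6)[perm]                      # (U psi)[i] = psi[perm[i]]

# A "template" superposition with arbitrary phases over the ontic basis
amps = rng.random(6) * np.exp(2j * np.pi * rng.random(6))
psi = amps / np.linalg.norm(amps)

rho = np.abs(psi) ** 2                   # Born-rule probabilities rho_n
rho_next = np.abs(U @ psi) ** 2          # quantum evolution of the template

# The probabilities evolve by the same permutation, purely classically:
assert np.allclose(rho_next, rho[perm])

# ...and they are independent of the phases we happened to choose:
psi2 = np.abs(psi) * np.exp(2j * np.pi * rng.random(6))
assert np.allclose(np.abs(U @ psi2) ** 2, rho_next)
```

Since a deterministic, reversible update of a finite state space is a permutation, the induced operator on Hilbert space is unitary, and the phase freedom of the template state is exactly the part that drops out of all predictions.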
The states we normally consider in an experiment are usually orthogonal to the vacuum. If we say that we can do experiments with two states, $A$ and $B$, that are not orthonormal to each other, this means that these are template states; it is easy to construct such states and to calculate how they evolve. However, it is safe to assume that, actually, the ontological states $|n\rangle$ with non-vanishing inner product with $A$, must be different from the states $|m\rangle$ that occur in $B$, so that, in spite of the template, $\langle A|B\rangle=0$. This is because the universe never repeats itself exactly. My physical interpretation of this is "superdeterminism": If, in an EPR or Bell experiment, Alice (or Bob) changes her (his) mind about what to measure, she (he) works with states $m$ which all differ from all states $n$ used previously. In the template states, all one has to do is assume at least one change in one of the physical states somewhere else in the universe. The contradiction then disappears. The role of vacuum fluctuations is also unavoidable when considering the decay of an unstable particle. I think there's no problem with the above arguments, but some people find it difficult to accept that the working of their minds may have any effect at all on vacuum fluctuations, or the converse, that vacuum fluctuations might affect their minds. The "free will" of an observer is at risk; people won't like that. But most disturbingly, this argument would imply that what my friends have been teaching at Harvard and other places, for many decades as we are told, is actually incorrect. I want to stay modest; I find this disturbing. A revised version of my latest paper was now sent to the arXiv (will probably be available from Monday or Tuesday). Thanks to you all. My conclusion did not change, but I now have more precise arguments concerning Bell's inequalities and what vacuum fluctuations can do to them. Answer: I can tell you why I don't believe in it. 
I think my reasons are different from most physicists' reasons, however. Regular quantum mechanics implies the existence of quantum computation. If you believe in the difficulty of factoring (and a number of other classical problems), then a deterministic underpinning for quantum mechanics would seem to imply one of the following:

1. There is a classical polynomial-time algorithm for factoring and other problems which can be solved on a quantum computer.
2. The deterministic underpinnings of quantum mechanics require $2^n$ resources for a system of size $O(n)$.
3. Quantum computation doesn't actually work in practice.

None of these seem at all likely to me. For the first, it is quite conceivable that there is a polynomial-time algorithm for factoring, but quantum computation can solve lots of similar periodicity problems, and you can argue that there can't be a single algorithm that solves all of them on a classical computer, so you would have to have different classical algorithms for each problem that a quantum computer can solve by period finding. For the second, deterministic underpinnings of quantum mechanics that require $2^n$ resources for a system of size $O(n)$ are really unsatisfactory (but maybe quite possible ... after all, the theory that the universe is a simulation on a classical computer falls in this class of theories, and while truly unsatisfactory, can't be ruled out by this argument). For the third, I haven't seen any reasonable way you could make quantum computation impossible while still maintaining consistency with current experimental results.
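Regarding the second option, the $2^n$ cost is concrete: a brute-force classical (state-vector) simulation of quantum mechanics stores one complex amplitude per basis state, so $n$ qubits already need $2^n$ numbers. A minimal sketch (my own illustration; the axis-reshaping trick for applying a gate is standard, but the code is only a toy):

```python
import numpy as np

def apply_1q(state, gate, target, n):
    """Apply a single-qubit gate to qubit `target` of an n-qubit state vector."""
    # View the 2**n amplitudes so the target qubit is its own tensor axis.
    state = state.reshape([2] * n)
    state = np.tensordot(gate, state, axes=([1], [target]))
    state = np.moveaxis(state, 0, target)   # put the target axis back in place
    return state.reshape(-1)

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate

for n in (4, 8, 12):
    state = np.zeros(2 ** n, dtype=complex)
    state[0] = 1.0                              # |00...0>
    for q in range(n):
        state = apply_1q(state, H, q, n)
    # Uniform superposition over 2**n basis states: the classical
    # description already carries 2**n complex numbers.
    assert np.allclose(np.abs(state) ** 2, 1.0 / 2 ** n)
    print(n, state.nbytes, "bytes")             # memory doubles with each qubit
```

Each added qubit doubles the storage and the work, which is exactly the kind of exponential overhead option 2 would saddle a deterministic underpinning with.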
{ "domain": "physics.stackexchange", "id": 4358, "tags": "quantum-mechanics, models, determinism" }
Acoustic power spectral density change due to diffraction
Question: Say we have an acoustical point source emitting white noise - i.e., a power spectral density of $$S_x(f) = \frac{N_0}{2}$$ The source is embedded in the plane such that it radiates in half-space - i.e., a directivity factor $Q = 2$. A receiver is located such that there's a step barrier between it and the source such that the total vertical displacement between source and receiver is $$h = h1 + h2$$ And the total horizontal displacement is $$d = d1 + d2$$ What is the power spectral density at the receiver? edit 1: I'm expecting the barrier to act a bit like a low-pass filter such that the resulting spectrum begins to roll off at a frequency of about $\frac{1}{h2}$, is that right? edit 2: found a helpful simulation by Kai Saksela based on a geometrical acoustics method from Svensson et al 1999. Takes a while to load (10 minutes?), but appears to be the kind of approach that could find a solution. It also makes me realize that, despite seeming like a pretty straightforward problem, it's probably quite complicated. edit 3: there's actually an entire Python library dedicated to the calculation of sound fields, sfs-python. From it I was able to calculate normalized sound fields with a barrier at a variety of frequencies and barrier distances, for example a source at 1000 Hz, $h2 = 1.5\ \mathrm{m}$, and $d2 = 2.0\ \mathrm{m}$. Though interesting, I'm still after an analytical solution. Answer: It turns out that the generalized calculation of diffraction-based attenuation at a specific frequency is given by the Kurze-Anderson formula, $$\Delta L_B = 5 + 20 \log \frac{\sqrt{2 \pi N}}{\tanh \sqrt{2 \pi N}}$$ where $N$ is calculated using different distances ($A$, $B$, and $d$) than the way the question was originally posed: $N = \frac{2 f}{c} \cdot (A + B - d)$.
We can trivially convert using the Pythagorean theorem: $$B = \sqrt{h1^2 + d1^2}$$ $$A = \sqrt{h2^2 + d2^2}$$ $$d = \sqrt{(h1 + h2)^2 + (d1 + d2)^2}$$ Then substitute, arriving at an expression in terms of our original distances. $$N = \frac{2 f}{c} \cdot (\sqrt{h2^2 + d2^2} + \sqrt{h1^2 + d1^2} - \sqrt{(h1 + h2)^2 + (d1 + d2)^2})$$ Because barriers are much more effective at attenuating waves with wavelengths smaller than the barrier itself, we expect the barrier to work like a low-pass filter with a cutoff wavelength approximately the size of $h2$ (that is, a cutoff frequency of order $c/h2$). Plotting the results for ($d1 = 10\ \mathrm{m}$, $d2 = 10\ \mathrm{m}$, $h1 = 1.5\ \mathrm{m}$, $h2 = 0.5\ \mathrm{m}$) we find that this expectation is verified.
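Translating the formula into code makes the low-pass behaviour easy to inspect. The sketch below is my own (it assumes $c = 343\ \mathrm{m/s}$ and reads the Kurze-Anderson logarithm as base 10, since $\Delta L_B$ is in dB):

```python
import numpy as np

c = 343.0  # speed of sound in air, m/s (assumed)

def barrier_attenuation_db(f, h1, d1, h2, d2):
    """Kurze-Anderson insertion loss (dB) of a thin barrier."""
    B = np.hypot(h1, d1)                 # source-to-edge path
    A = np.hypot(h2, d2)                 # edge-to-receiver path
    d = np.hypot(h1 + h2, d1 + d2)       # direct line-of-sight path
    N = 2 * f / c * (A + B - d)          # Fresnel number
    x = np.sqrt(2 * np.pi * N)
    return 5 + 20 * np.log10(x / np.tanh(x))

f = np.logspace(1, 4, 7)                 # 10 Hz .. 10 kHz
dL = barrier_attenuation_db(f, h1=1.5, d1=10.0, h2=0.5, d2=10.0)
print(np.round(dL, 1))                   # attenuation grows with frequency
```

The PSD at the receiver, ignoring spreading loss, is then $S_y(f) = \frac{N_0}{2} \cdot 10^{-\Delta L_B(f)/10}$; in the low-frequency limit $\Delta L_B \to 5$ dB, and the attenuation rises monotonically with frequency, as a low-pass characteristic should.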
{ "domain": "physics.stackexchange", "id": 52249, "tags": "acoustics, diffraction, signal-processing" }
Wave functions and wave packets
Question: Is every quantum mechanical wave function a wave packet, i.e., an infinite superposition of differently wave-numbered sinusoidal waves, irrespective of whether the potential admits bound states or scattering states? Then, for the free particle, the stationary states are of the form (or at least contain the combination) $x \pm vt = \text{constant}$, implying a constant phase velocity $v$. Where in the computation of the wave function are all the sinusoidal components taken into account? Answer: Not every wave function is a wave packet. Wave functions are frequently considered to include functions that are not properly normalizable, like $\operatorname{e}^{ikx}$, so we describe them as "normalized to the delta function." That is, we demand $$\delta(k - k') = \int_{-\infty}^\infty \operatorname{d} x \psi^\star(x, k') \psi(x, k),$$ leading to things like $\psi(x,k) = \operatorname{e}^{ikx} / \sqrt{2\pi}$. With a wave packet we make the more strict requirement that it be normalizable and localized in both $x$ and $p$. The canonical example of a wave packet is: $$\psi(x) \propto \exp \left(-\frac{1}{4} \frac{(x - x_0)^2}{\sigma^2} + i \frac{p_0 x}{\hbar}\right),$$ which has $\langle x\rangle = x_0$ and $\langle p \rangle = p_0$, and is normalizable with finite width in both $x$ and $p$ space. Another example of a wave packet is the $\operatorname{sinc}$ function: $$\psi(x) \propto \frac{\sin\left(\frac{x-x_0}{2\hbar}\Delta p\right)}{(x-x_0)} \operatorname{e}^{ip_0 x / \hbar}.$$ It is localized in $x$, in the sense that it is normalizable and has definite quantiles, but it has divergent variance. In $p$ it is much better behaved because it is a boxcar function that stretches from $p_0 - \Delta p / 2$ to $p_0 + \Delta p / 2$.
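As a quick sanity check of the Gaussian packet's quoted moments, one can evaluate $\langle x\rangle$ and $\langle p\rangle$ on a grid. This is my own numerical sketch, with arbitrary illustrative parameter values:

```python
import numpy as np

hbar, x0, p0, sigma = 1.0, 1.5, 2.0, 0.7       # illustrative values
x = np.linspace(x0 - 12 * sigma, x0 + 12 * sigma, 4001)
dx = x[1] - x[0]

# The Gaussian wave packet, normalized numerically on the grid
psi = np.exp(-0.25 * (x - x0) ** 2 / sigma ** 2 + 1j * p0 * x / hbar)
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)

# <x> = integral of x |psi|^2; <p> = integral of psi* (-i hbar d/dx) psi
mean_x = np.sum(x * np.abs(psi) ** 2) * dx
mean_p = np.real(np.sum(np.conj(psi) * (-1j * hbar) * np.gradient(psi, dx)) * dx)

print(mean_x, mean_p)   # approximately 1.5 and 2.0, i.e. <x> = x0 and <p> = p0
```

The phase factor $\operatorname{e}^{ip_0x/\hbar}$ drops out of $|\psi|^2$, so it shifts only the momentum expectation, not the position one, exactly as the answer states.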
{ "domain": "physics.stackexchange", "id": 34433, "tags": "quantum-mechanics, wavefunction, hilbert-space, terminology, scattering" }
Constraints on Phase Space
Question: This question here motivated me to record the following fact: Consider a $2n$ dimensional phase space with coordinates $q_1,...,q_n,p_1,...,p_n$. Consider the constraint $C(\vec q)=0$. What is the constraint on the momenta which, together with $C=0$, reduces the phase space dimension to $2n-2$? Answer: First one performs a change of coordinates: $\vec q\to \vec x(\vec q)$. These coordinates are to be chosen such that $x_1(\vec q)=0\longleftrightarrow C(\vec q)=0$. Let $\vec \pi$ be the momenta conjugate to $\vec x$. They are related to the original coordinates and momenta by $$ \pi_i=\frac{\partial q_j(\vec x)}{\partial x_i}\, p_j. \tag{1} $$ In the new coordinates it is clear that the constraint on the momenta is simply $\pi_1=0$. Using (1) we see that this is just $$ \frac{\partial q_i}{\partial x_1}{p_i}=0. $$ This is the answer to the question, but I will go on to elaborate on other aspects. In the path integral one must insert delta functions in the integrals over $\vec q$ and $\vec p$. The appropriate combination of delta functions is $$ \delta(x_1)\delta(\pi_1)=\delta(x_1)\delta\left(\frac{\partial q_i}{\partial x_1}{p_i}\right) \tag{2} $$ Since $x_1(\vec q)=0\longleftrightarrow C(\vec q)=0$, (2) can be written $$ \bigg{|}\frac{\partial C}{\partial x_1}\bigg{|}\delta(C)\delta\left(\frac{\partial q_i}{\partial x_1}{p_i}\right) \tag{3} $$ In gauge theory, the 'original coordinates' are the gauge fields. The constraint is the gauge-fixing condition $C^a=0$. What we called $x_1$ is simply the gauge transformation parameter $\xi^a(x)$.
Therefore (3) for gauge theories is $$ \bigg{|}\frac{\partial C^a}{\partial \xi^b}\bigg{|}\delta(C^a)\delta\left(\int dx\frac{ \partial A^b_i(x)}{\partial \xi^a}{\pi_i^b(x)}\right) \tag{4} $$ Using $\frac{ \partial A^b_i(x)}{\partial \xi^a(y)}=\delta_{ab}D_i\delta(x-y)$, (4) finally becomes $$ \bigg{|}\frac{\partial C^a}{\partial \xi^b}\bigg{|}\delta(C^a)\delta\left(D_i\pi^a_i\right) $$ We recognise the first term as the Faddeev-Popov determinant, the second as the gauge-fixing constraint, and the last as the momentum constraint.
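A concrete check of the momentum constraint (my own worked example, not part of the original answer): for $n=2$, take the circle constraint $C(\vec q)=q_1^2+q_2^2-R^2$ and choose $$ x_1=\sqrt{q_1^2+q_2^2}-R,\qquad x_2=\arctan(q_2/q_1), $$ so that $x_1(\vec q)=0\longleftrightarrow C(\vec q)=0$. The inverse map is $q_1=(R+x_1)\cos x_2$, $q_2=(R+x_1)\sin x_2$, and therefore $$ \frac{\partial q_i}{\partial x_1}\,p_i=p_1\cos x_2+p_2\sin x_2=\frac{q_1 p_1+q_2 p_2}{\sqrt{q_1^2+q_2^2}}. $$ The momentum constraint is thus $\vec q\cdot\vec p=0$: the radial momentum vanishes, and together with $C=0$ this leaves the $2n-2=2$ dimensional phase space of a particle on the circle.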
{ "domain": "physics.stackexchange", "id": 94178, "tags": "quantum-mechanics, hamiltonian-formalism, phase-space, constrained-dynamics" }
Number of bits needed to express physical laws?
Question: What is the minimum number of bits that would be needed to express a given physical law, like the law of universal gravitation? How many bits are needed to express each of the four fundamental forces? Is there a pattern here? Answer: One productive way of thinking of the complexity of physical laws is in terms of the Kolmogorov complexity of the algorithm that simulates a given physical situation. This is defined as the length of the shortest code which does the simulation. If you are given a law of nature, like Newton's law of universal gravitation, you can write a simulation of $N$ interacting particles. If you are only interested in an in-principle answer, you are looking for the best algorithm to simulate Newton's laws on a computer. The problem of computing the Kolmogorov complexity precisely is in general the worst of all uncomputable problems. You can usually shrink code written from scratch by a lot by cleverly rewriting the subroutines to make a specialized language for the description. You would never know if you have the optimal coding, since maybe you could compress things more by adding a special interpretation layer. The rigorous version of this annoyance is the theorem that no axiomatic system can prove an algorithm is optimal (i.e., of minimal length) if the length is significantly greater than the length of the program that does the deduction in this axiomatic system. The proof is a simple contradiction: suppose the axiomatic system proves program P is optimal. Write a program CONTRADICTION which goes through all the theorems of the axiomatic system until it finds a program which is proved optimal and whose length is greater than the length of CONTRADICTION. Then, run this program. By construction, CONTRADICTION is shorter than this program and yet has the same output. If you think about what CONTRADICTION is doing, it is generating the code for a completely different program, and then running it.
This gives a hint of the nature of the difficulty in finding a minimal description. But if you are happy with a crude estimate of the complexity, you just write any old code to simulate the physics, and the length of this code is an estimate of the complexity. This is a useful heuristic that makes precise the desire for simple elegant theories.
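To make the "any old code" estimate concrete, here is a deliberately naive simulator of Newtonian gravity (my own sketch; the constants are standard SI values, and explicit Euler integration is chosen purely for brevity). Its source is a few hundred bytes, which gives a crude upper bound on the complexity of the law of universal gravitation, up to the choice of language and the size of the interpreter:

```python
import numpy as np

G = 6.674e-11  # Newton's constant, SI units

def step(pos, vel, mass, dt):
    """One explicit-Euler step for N gravitating point masses."""
    r = pos[None, :, :] - pos[:, None, :]            # r[i, j] = q_j - q_i
    d3 = (np.sum(r ** 2, axis=-1) + np.eye(len(mass))) ** 1.5
    # a_i = G * sum_j m_j r_ij / |r_ij|^3  (the eye() term only guards i = j,
    # where the numerator already vanishes)
    acc = G * np.sum(mass[None, :, None] * r / d3[:, :, None], axis=1)
    return pos + vel * dt, vel + acc * dt

# Toy Sun-Earth system with circular-orbit initial data
pos = np.array([[0.0, 0.0, 0.0], [1.496e11, 0.0, 0.0]])
vel = np.array([[0.0, 0.0, 0.0], [0.0, 2.978e4, 0.0]])
mass = np.array([1.989e30, 5.972e24])
for _ in range(1000):                                # simulate 1000 hours
    pos, vel = step(pos, vel, mass, dt=3600.0)
print(np.linalg.norm(pos[1] - pos[0]))               # still close to 1.5e11 m
```

Compressing this further, in the spirit of the paragraph above, would mean inventing ever more specialized notation for the same dynamics, with no way to prove you have reached the minimum.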
{ "domain": "physics.stackexchange", "id": 3811, "tags": "information" }