What should be the relation between flange length and web thickness for an optimum I-beam?
Question: While learning about I-beams I became curious to know if there is any relation between the flange length and web thickness of I-beams or if it's arbitrary. Answer: The answer is "Yes", and the direct source for such a relationship is technical publications that offer steel design tables/charts showing the optimum beam/column sections for a certain load with respect to the length of the beam or column. The image below is an example of a chart comparing the moment capacity (indicated on the left axis) of beam sections with respect to the unbraced length indicated on the bottom. While many sections will satisfy either constraint, there is only one that satisfies both constraints with the least "weight", which is often the focus of optimization. You can also make your own calculation to find the optimum cross section for the constraints you have in mind. For instance, say you want a section that can resist a given set of loads, moment and shear, yet has the least weight. Recalling the stress formulas $f_b = My/I$ and $f_v = V/A_w$, and the fact that $I$ (moment of inertia) and $A_w$ (effective area of the web for shear) must be kept constant to yield constant stresses under the given set of loads, it is now a simple matter of writing the equations for $I$, $A$ and $A_w$ ($A$ is the area of the entire cross section) and plugging in various combinations of $b_f$, $t_f$, $d$ and $t_w$ that yield the constant $I$ and $A_w$ while the weight $W = r \cdot A$ is minimized. Hope this helps.
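The trial-and-error procedure described in the answer can be sketched in code (a hypothetical illustration: the required $I$ and $A_w$ targets, the candidate dimensions and the units are all made up; a real design would use code-prescribed allowable stresses and section tables):

```python
import itertools

# Hypothetical required section properties (units: cm^4 and cm^2)
I_req, Aw_req = 20000.0, 30.0

def section_properties(bf, tf, d, tw):
    """Strong-axis moment of inertia, web shear area and total area of an I-section."""
    h = d - 2 * tf                                # clear web height
    I = bf * d**3 / 12 - (bf - tw) * h**3 / 12    # strong-axis moment of inertia
    Aw = h * tw                                   # effective shear area of the web
    A = 2 * bf * tf + h * tw                      # total cross-sectional area
    return I, Aw, A

best = None
for bf, tf, d, tw in itertools.product(
        [10, 15, 20, 25],      # flange width b_f
        [1.0, 1.5, 2.0],       # flange thickness t_f
        [30, 40, 50, 60],      # overall depth d
        [0.6, 0.8, 1.0]):      # web thickness t_w
    I, Aw, A = section_properties(bf, tf, d, tw)
    # keep the combination meeting both constraints with the least area (= least weight)
    if I >= I_req and Aw >= Aw_req and (best is None or A < best[0]):
        best = (A, bf, tf, d, tw)

print(best)  # lightest candidate section satisfying both constraints
```

Since the weight per unit length is just density times $A$, minimizing $A$ over the feasible combinations is the same as minimizing $W = r \cdot A$.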
{ "domain": "engineering.stackexchange", "id": 3933, "tags": "mechanical-engineering, civil-engineering, solid-mechanics" }
Evaluate boolean circuit on batch of similar inputs
Question: Suppose I have a boolean circuit $C$ that computes some function $f:\{0,1\}^n \to \{0,1\}$. Assume the circuit is composed of AND, OR, and NOT gates with fan-in and fan-out at most 2. Let $x \in \{0,1\}^n$ be a given input. Given $C$ and $x$, I want to evaluate $C$ on the $n$ inputs that differ from $x$ in a single bit position, i.e., to compute the $n$ values $C(x^1),C(x^2),\dots,C(x^n)$ where $x^i$ is the same as $x$ except that its $i$th bit is flipped. Is there a way to do this that is more efficient than independently evaluating $C$ $n$ times on the $n$ different inputs? Assume $C$ contains $m$ gates. Then independently evaluating $C$ on all $n$ inputs will take $O(mn)$ time. Is there a way to compute $C(x^1),C(x^2),\dots,C(x^n)$ in $o(mn)$ time? Optional context: If we had an arithmetic circuit (whose gates are multiplication, addition, and negation) over $\mathbb{R}$, then it would be possible to compute the $n$ directional derivatives ${\partial f \over \partial x_i}(x)$ in $O(m)$ time. Basically, we could use standard methods for computation of the gradient (back-propagation / chain rule), in $O(m)$ time. That works because the corresponding function is continuous and differentiable. I'm wondering whether something similar can be done for boolean circuits. Boolean circuits aren't continuous and differentiable, so you can't do the same trick, but maybe there is some other clever technique one can use? Maybe some kind of Fourier trick, or something? (Variant question: if we have boolean gates with unbounded fan-in and bounded fan-out, can you do asymptotically better than evaluating $C$ $n$ times?) Answer: I'd consider it unlikely that such a trick is easy to find and/or will give you significant gains, as it would give nontrivial satisfiability algorithms.
Here's how: First of all, while ostensibly easier, your problem can actually solve the more general problem of, given a circuit $C$ and $N$ inputs $x_0, \ldots, x_{N-1}$, evaluate $C$ at all the inputs faster than $\tilde{O}(N\cdot|C|)$ time. The reason is that we can tweak $C$ into a circuit $C'$ of size $|C| + \tilde{O}(Nn)$ which, on input $0^i10^{N-1-i}$, outputs $C(x_{i})$. Basically, we just make a little lookup table that sends $0^i10^{N-1-i}$ to $x_i$, and wire it into $C$. Nontrivial algorithms for batch evaluation of boolean-circuits can then be used to make fast satisfiability algorithms. Here's an example in the simple case where we suppose we have an algorithm doing evaluation in time $\tilde{O}(|C|^{2-\epsilon} + (N\cdot|C|)^{1-\epsilon/2} + N^{2-\epsilon})$ for any constant $\epsilon > 0$. On input a circuit $C$, we can decide satisfiability by expanding $C$ into a circuit $C'$ of size $2^{n/2}\cdot |C|$ which is just the OR over all possible choices of the first $n/2$ inputs to $C$ (leaving the other inputs free). We then batch-evaluate $C'$ on all of its $2^{n/2}$ inputs. The end result is that we find a satisfying assignment to $C'$ iff $C$ is satisfiable. The running time is $\tilde{O}(2^{(n/2)(2-\epsilon)}\cdot |C|^{2-\epsilon}) = \tilde{O}(2^{n(1-\epsilon/2)}\cdot \textrm{poly}(|C|))$.
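For reference, the naive $O(mn)$ baseline from the question can be sketched as follows (the gate-list encoding of the circuit is my own, not from the question):

```python
def eval_circuit(gates, x):
    """Evaluate a fan-in-2 boolean circuit given as a topologically ordered gate list.

    Each gate is (op, i, j) with op in {'AND', 'OR', 'NOT'}; i and j index
    earlier wires (the n inputs first, then gate outputs in order).
    The last wire is the circuit output.
    """
    wires = list(x)
    for op, i, j in gates:
        if op == 'AND':
            wires.append(wires[i] & wires[j])
        elif op == 'OR':
            wires.append(wires[i] | wires[j])
        else:  # NOT ignores its second operand
            wires.append(1 - wires[i])
    return wires[-1]

def eval_neighbors(gates, x):
    """Naively compute C(x^1), ..., C(x^n): O(m) work for each of n flipped inputs."""
    results = []
    for i in range(len(x)):
        xi = list(x)
        xi[i] ^= 1  # flip the i-th bit
        results.append(eval_circuit(gates, xi))
    return results

# Example: C(x) = (x0 AND x1) OR (NOT x2) on input x = (1, 0, 1)
gates = [('AND', 0, 1), ('NOT', 2, 2), ('OR', 3, 4)]
print(eval_neighbors(gates, [1, 0, 1]))  # → [0, 1, 1]
```

The question asks whether this doubly nested loop can be beaten asymptotically; the answer above argues that a significant speedup would already imply nontrivial satisfiability algorithms.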
{ "domain": "cstheory.stackexchange", "id": 3784, "tags": "ds.algorithms, circuit-complexity, boolean-functions, circuits" }
URDF link mass inertia properties
Question: Hi I'm building a URDF for rviz, and eventually Gazebo. I'm a bit confused about the best values to put in for the Inertia Origin and Moment of Inertia matrix. Below you should see a screenshot of a robot shoulder in SolidWorks. (Right-click and open to see it in full size) The origin is the tri-color axis, which I put there because it matches the link origin. This means I should be able to specify the mesh visual origin as 0,0,0. The pink axis is the Center of Mass found by SolidWorks. The URDF XML doc: http://ros.informatik.uni-freiburg.de/roswiki/urdf(2f)XML(2f)inertial.html says the Inertial Origin should be the "Link Center of Mass". So I could use the SolidWorks Center of Mass x,y,z co-ordinates as the Inertial Origin. But I don't know which Ix,Px, Lxx, or Ixx values from SolidWorks to use? Or, why can't I just set the Inertial Origin to 0,0,0 and use the very last set of values from SolidWorks: Ixx, Ixy, Ixz, etc. Will Gazebo/ODE get confused because the Inertial Origin is not the Center of Mass of the link? These two posts provided some info, but no complete explanation: http://answers.ros.org/question/11407/what-is-the-origin-of-the-inertia-tensor-in-urdf/ http://answers.ros.org/question/30539/choosing-the-right-coefficients-for-gazebo/ Thanks. Originally posted by dbworth on ROS Answers with karma: 1103 on 2012-07-17 Post score: 2 Answer: The origin of the inertial element of the URDF must be taken from the center of mass of the link. So the XYZ of the inertial origin is the offset from the link's origin to the center of mass. Currently, the orientation of the inertial element is assumed to match the orientation of link's origin and RPY for the inertial origin is not supported. Therefore if you were pulling values from SolidWorks mass properties, you should probably use the moment of inertia taken at the center of mass and aligned with the output coordinate system (Lxx). However, a few caveats. 
Make sure the 'output coordinate system' you choose aligns with the coordinate system in the URDF. Many times in SW, the default coordinate system has Y pointing upward, but in URDF and Gazebo, Z is assumed to be pointing upward. Place your 'output coordinate system' by inserting a reference coordinate system in SolidWorks at either the origin location of the joint connecting this link with its parent link, or in the case of the base_link, place the output coordinate system at the collision/visual origin (typically the default origin for the model). I'm going to toot my own horn here and plug the SolidWorks to URDF exporter for exactly this. Though, I realize answering this question four months after you asked it is of little help. Originally posted by brawner with karma: 581 on 2012-12-19 This answer was ACCEPTED on the original site Post score: 6
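Putting the answer together, the inertial block of a link would look something like this (a hypothetical fragment; the mass, center-of-mass offset and inertia numbers are placeholders — `xyz` is the SolidWorks center of mass expressed in the link frame, and `ixx`…`izz` are the Lxx…Lzz values taken at the center of mass, aligned with the output coordinate system):

```xml
<link name="shoulder">
  <inertial>
    <!-- offset from the link origin to the center of mass; rpy must stay 0 0 0,
         since an inertial orientation is not supported -->
    <origin xyz="0.012 -0.003 0.057" rpy="0 0 0"/>
    <mass value="1.25"/>
    <!-- SolidWorks Lxx..Lzz: moments of inertia at the COM,
         aligned with the output coordinate system -->
    <inertia ixx="0.0023" ixy="0.0001" ixz="0.0000"
             iyy="0.0021" iyz="0.0002" izz="0.0018"/>
  </inertial>
</link>
```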
{ "domain": "robotics.stackexchange", "id": 10249, "tags": "gazebo, urdf, solidworks" }
Connection to R-30iB
Question: Hi, I haven't been able to get through to Fanuc about this concern I have and I am wondering how a machine running ROS is typically connected to an industrial robot controller? I have an R-30iB with me and there are connections to an RS-232C port and a PCMCIA connection port as well. I'm not sure which adapter I should buy to initiate communication between my laptop and the controller. Initially I was looking for an ethernet connection but I couldn't find one on the panel; if anyone who has ever had experience with using ROS on industrial robot arms could give me some advice it would be greatly appreciated. Regards, Devin Originally posted by TheDude35 on ROS Answers with karma: 19 on 2015-09-17 Post score: 0 Answer: General comment: the ROS-Industrial 'branch' of ROS is where we focus on using ROS with industrial robots. You might want to get in touch with us on the ROS-Industrial mailing list. Initially I was looking for an ethernet connection but I couldn't find one on the panel [..] The fanuc_driver page lists the requirements on both the software and the hardware of your controller: you must have ethernet support, as well as options R632 (KAREL) and R648 (User Socket Messaging). Afaik, all R-30iB controllers come with ethernet networking installed; I'm not sure where the port(s) is (are) located inside the controller itself though. As for the other options: please make sure you have them (they are not installed by default), otherwise the fanuc_driver package will not be compatible. Edit: according to the R-30iB Controller Maintenance Manual (B-83195EN/03), Section 4.10.2.1 (Connection to Ethernet), the port is labelled CD38{A,B,C}. Pages 223 and 224 should show you where it is located. Originally posted by gvdhoorn with karma: 86574 on 2015-09-17 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 22657, "tags": "ros, fanuc" }
How to work with std_msgs::Int16MultiArray message type?
Question: What is needed to publish a std_msgs::Int16MultiArray message? I tried std_msgs::Int16MultiArray msg; int16_t *velocities; msg.data = velocities; but just those few lines give me errors while compiling error: no match for ‘operator=’ (operand types are ‘std_msgs::Int16MultiArray_<std::allocator<void> >::_data_type {aka std::vector<short int, std::allocator<short int> >}’ and ‘int16_t* {aka short int*}’) msg.data = velocities; Originally posted by kump on ROS Answers with karma: 308 on 2019-01-28 Post score: 0 Original comments Comment by gvdhoorn on 2019-01-28: I believe this is a duplicate of #q37185. Answer: This works: // definition in header std::vector<physics::JointPtr> joints; ros::Publisher rosPub; std::unique_ptr<ros::NodeHandle> rosNode; // initialization this->rosNode.reset(new ros::NodeHandle( node_name )); this->rosPub = this->rosNode->advertise<std_msgs::Float32MultiArray>("/velocity",1); // callback function std_msgs::Float32MultiArray msg; msg.data.push_back( this->joints[0]->GetVelocity(0) ); msg.data.push_back( this->joints[1]->GetVelocity(0) ); msg.data.push_back( this->joints[2]->GetVelocity(0) ); this->rosPub.publish(msg); Originally posted by kump with karma: 308 on 2019-01-29 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by gvdhoorn on 2019-01-29: Please note that you're not initialising the layout field here. Subscribers that expect that field to describe the size, dimensionality and shape of the data you're publishing will not work correctly.
{ "domain": "robotics.stackexchange", "id": 32358, "tags": "ros, ros-kinetic, ubuntu, std-msgs, ubuntu-trusty" }
Recursive type checking for containers
Question: This code checks the type, length and depth (highest number of nested containers) of the container as well as the type of its sub-containers and its elements. There are also two simple methods for type checking non-nested list / tuples. I mainly want a code review on readability and the quality of the docstrings. from inspect import isclass from itertools import islice from typing import Any, Optional, Union from collections.abc import Iterable, Sized, Container as GenericContainer from typing import List, Tuple, Callable def is_n_container( obj: Any, matching_type: Union[type, List[type], Tuple[type, ...], None] = None, container_type: Union[type, List[type], Tuple[type, ...], None] = None, length: Union[int, Callable[[Any], bool], None] = None, depth: Union[int, Callable[[Any], bool], None] = None ) -> Optional[bool]: """Checks if `obj` is a container with expected properties. Verifies that container `obj` and its sub-containers are of a type specified in `container_type` and all non-container elements of `obj` are of a type specified in `matching_type`. The length of the container `obj` and its maximum depth is confirmed. More complex requirements can be satisfied by providing a function for its length and depth (see notes). `None` acts as a "wildcard" for the parameters if the type of non-container elements, the type of the container `obj` and its sub-containers, the length of container `obj` or the depth of container `obj` doesn't matter. Parameters ---------- obj : any Object that is checked to be a container with specified properties. matching_type : type or list/tuple of types, optional Allowed types for non-container elements of container 'obj'. container_type : type or list/tuple of types, optional Allowed types for container 'obj' and its sub-containers. length : int or function, optional Number of elements of depth zero in container 'obj'. depth : int or function, optional Maximum depth of nested containers. 
Returns ------- bool, optional Returns 'None' if a parameter has an incorrect format, 'True' if 'obj' is container of parameter-defined properties or 'False' otherwise. Raises ------ ValueError If 'length' or 'depth' is smaller than zero. TypeError If 'length' is specified but 'obj' has no length attribute accessible through 'len()' or by iterating over it. Warnings -------- **Warning**: If a container type is included in the `matching_type` parameter, checking will stop once the container type is found without analysing its internal structure. **Warning**: If iterating over the container changes it as a side effect, the behaviour of `is_n_container` becomes undefined. Notes ----- A function used as argument for parameter `length` or parameter `depth` should take an object (`obj`) as argument and return `True` if the object has the required length/depth or `False` otherwise. Variable length containers also include empty containers. If empty containers should be excluded, check for empty containers separately or use `not is_n_container(obj, length=0, **kwargs) and is_n_container(obj, length=None, **kwargs)`. Even when a sub-container is empty it adds another level of depth (e.g. ([], 1, 2, 3) has depth 1), unless the sub-container is a type in parameter `matching_type` (e.g. with `matching_type=(int, list)` ([], 1, 2, 3) has depth 0). Examples -------- Types of elements: int, float, str \n Types of containers: list, tuple \n Depth 0: [(I), 2, 3, (II), (III), 2.3] - Length: 6 \n Depth 1: I - 'a', 1, 'b' | II - 4, 5 | III - (IV), 9 \n Depth 2: IV - (V), 8 \n Depth 3: V - 6, 7 >>> container = [['a', 1, 'b'], 2, 3, (4, 5), (((6, 7), 8), 9), 2.3] Use all parameters and `bool` as an unused `matching_type` parameter >>> is_n_container(container, matching_type=(int, str, float, bool), ... container_type=(tuple, list), length=6, depth=3) True Wrong length >>> is_n_container(container, matching_type=(int, str, float, bool), ... 
container_type=(tuple, list), length=5, depth=3) False Allow more depth than needed >>> is_n_container(container, matching_type=(int, str, float), ... container_type=(tuple, list), depth=5) True Expect a too shallow structure >>> is_n_container(container, matching_type=(int, str, float), ... container_type=(tuple, list), length=6, depth=2) False Missing `matching_type` argument `float` >>> is_n_container(container, matching_type=(int, str), ... container_type=(tuple, list)) False Missing 'container_type' argument 'list' >>> is_n_container(tuple(container), ... matching_type=(int, str, float), ... container_type=tuple) False Container `obj` is still a list >>> is_n_container(container[1:-1], matching_type=int, ... container_type=tuple) False Now all containers are of type `tuple` >>> is_n_container(tuple(container[1:-1]), matching_type=int, ... container_type=tuple) True Using `None` for an argument allows for a wide variety of acceptable containers, but not non-container objects >>> containers = [[1, 2, 3], ('a', 'b', 'c'), {1.5, 2.4, 3.3}, True] >>> for i in (0, 1, 2, 3): ... is_n_container(containers[i], depth=1) True True True False Even if the container is empty it adds another level of depth >>> container = [(), ([], 1), 2] >>> for i in (1, 2): ...
is_n_container(container, matching_type=int, depth=i) False True Unless the container is a type in parameter `matching_type` >>> is_n_container(container, matching_type=(int, list), depth=1) True """ if _check_parameters(matching_type, container_type, length, depth): _matching_type = (matching_type if not isinstance(matching_type, list) else tuple(matching_type)) _container_type = (container_type if not isinstance(container_type, list) else tuple(container_type)) if (not isinstance(obj, GenericContainer) or not (_container_type is None or isinstance(obj, _container_type)) or (isinstance(length, Callable) and not length(obj)) or (isinstance(depth, Callable) and not depth(obj))): return False _length = length if isinstance(length, int) else None _depth = depth if isinstance(depth, int) else float('inf') if _length is not None: if _length < 0: raise ValueError("length of the container must be positive," + f" not {_length}") if not (isinstance(obj, Sized) or isinstance(obj, Iterable) or hasattr(obj, '__getitem__')): raise TypeError(f"object of type '{type(obj).__name__}'" + " has no attribute len()") if ((isinstance(obj, Sized) and _length != len(obj)) or ( (isinstance(obj, Iterable) or hasattr(obj, '__getitem__')) and _length != sum(1 for _ in islice(obj, _length + 1)))): return False if _depth < 0: raise ValueError("depth of the container must be positive," + f" not {_depth}") if (matching_type is None or not (isinstance(obj, Iterable) or hasattr(obj, '__getitem__'))): return (_container_type is None or isinstance(obj, _container_type)) return _is_container(obj, _matching_type, _container_type, _depth) return None def _check_parameters(matching_type, container_type, length, depth) -> bool: """Returns `True` if all arguments for `is_n_container` have the correct format.""" return (True # Added for better readability and (matching_type is None or isinstance(matching_type, type) or is_n_list(matching_type, None, type) or is_n_tuple(matching_type, None, type)) and 
(container_type is None or (isclass(container_type) and issubclass(container_type, GenericContainer)) or (isinstance(container_type, list) and all((isclass(c_type) and issubclass(c_type, GenericContainer)) for c_type in container_type)) or (isinstance(container_type, tuple) and all((isclass(c_type) and issubclass(c_type, GenericContainer)) for c_type in container_type))) and (length is None or isinstance(length, int) or isinstance(length, Callable)) and (depth is None or isinstance(depth, int) or isinstance(depth, Callable))) def _is_container(obj: Iterable, matching_type: Union[type, Tuple[type, ...]], container_type: Union[type, Tuple[type, ...], None], depth: Union[int, float]) -> bool: """Checks properties of container `obj` recursively. Used in `is_n_container`.""" return depth >= 0 and all( isinstance(element, matching_type) or ((isinstance(element, Iterable) or hasattr(element, '__getitem__')) and (container_type is None or isinstance(element, container_type)) and _is_container(element, matching_type, container_type, depth-1)) for element in obj ) def is_n_list(obj: Any, length: Optional[int], matching_type: Union[type, Tuple[type, ...]]) -> bool: """Checks if `obj` is a list with expected properties. Parameters ---------- obj : any Object that is checked to be a list with specified properties. matching_type : type or tuple of types, optional Allowed types for non-container elements of container 'obj'. length : int, optional Number of elements in container 'obj'. Returns ------- bool Returns 'True' if 'obj' is a list and all its elements are of a type defined in 'matching_type'. See Also -------- `is_n_container`: Checks if `obj` is a container with expected properties. 
""" return (isinstance(obj, list) and (length is None or length == len(obj)) and all(isinstance(element, matching_type) for element in obj)) def is_n_tuple(obj: Any, length: Optional[int], matching_type: Union[type, Tuple[type, ...]]) -> bool: """Checks if `obj` is a tuple with expected properties. Parameters ---------- obj : any Object that is checked to be a tuple with specified properties. matching_type : type or tuple of types, optional Allowed types for non-container elements of container 'obj'. length : int, optional Number of elements in container 'obj'. Returns ------- bool Returns 'True' if 'obj' is a tuple and all its elements are of a type defined in 'matching_type'. See Also -------- `is_n_container`: Checks if `obj` is a container with expected properties. """ return (isinstance(obj, tuple) and (length is None or length == len(obj)) and all(isinstance(element, matching_type) for element in obj)) Additional specific questions: Should I scrap the whole thing from a Python standpoint, since its follows more of a LBYL principle than a EAFP principle (LBYL/EAFP)? If not the whole thing, should I get rid of the _check_parameters method? Are there enough / too many examples for is_n_container in the docstring and are these examples helpful or should they be put in a example context? PEP8 suggests to use blank lines in functions sparingly. Should I use some in is_n_container? And if yes, where? Is the iteration warning about a possible changed container necessary / helpful? Is it ok to use three different indentations for the function parameters (*see remarks)? Some remarks: I use python 3.8.8 and pycharm 2020.3.4 . Since the sections 'Parameters', 'Returns' and 'Raises' won't format single back-ticks (``) in the docstring correctly (for me), I used single quotation marks ('') instead. 
I imported List, Tuple and Callable from typing separately, because in Python 3.9 List and Tuple should be replaced with the generic list and tuple, and Callable should be imported from collections.abc. I don't use any type hinting in _check_parameters as the parameters all have type Any (I want to accept any type and report whether it is of the desired type). *I use three different indentations because I usually use: If I can get it into two lines def func(var1: type, var2: type, var3: type, var4: type) -> type Else if I can get the type hint of the result into the line with the last variable def name_too_long_to_get_parameters_in_two_lines(long_var_name1: type, long_var_name2: type, long_var_name3: type) -> type Else def name_and_result_type_hinting_too_long( var1: type, var2: type, var3: type ) -> really_long_result_type Answer: Assuming that you decide to retain _check_parameters(), here's an illustration of some alternative techniques for organizing a complex boolean check. (1) The isinstance() function will take a tuple of types, so you can do some consolidation. (2) If you need to check for None and check for types, you can do it all in one shot. (3) Some of your checks were repetitive; help the reader by factoring things out. (4) Organize the checks like a pretty-printed data structure because it gets the parens/brackets working for you to convey logical structure. (5) Sometimes simple and fairly banal code comments can function as visual/organizational sign posts to guide the reader. (6) I prefer the lines of code to lead with the substance rather than the boolean operator -- which is mostly a stylistic preference, but I do think it combines better in these kinds of complex checks. For example, I felt no readability-driven urge to add a preliminary True and to the expression. Also, most people use editors with syntax highlighting, so the operators pop out visually and there's no need to waste the prime real estate (the start of each line) on the operator.
TNone = type(None) check_ctype_seq = lambda ctypes, cls: ( isinstance(ctypes, cls) and all( isclass(ct) and issubclass(ct, GenericContainer) for ct in ctypes ) ) return ( # Length and depth. isinstance(length, (int, Callable, TNone)) and isinstance(depth, (int, Callable, TNone)) and # Matching type. ( isinstance(matching_type, (TNone, type)) or is_n_list(matching_type, None, type) or is_n_tuple(matching_type, None, type) ) and # Container_type. ( container_type is None or ( isclass(container_type) and issubclass(container_type, GenericContainer) ) or check_ctype_seq(container_type, list) or check_ctype_seq(container_type, tuple) ) )
{ "domain": "codereview.stackexchange", "id": 40947, "tags": "python, python-3.x, recursion, validation" }
Are measurement results only orthogonal?
Question: Are all measurement operators on a quantum mechanical system defined by a Hilbert space, such that all possible post-measurement states are orthogonal? For example measuring a qubit in some orthonormal basis $\{|0\rangle,|1\rangle\}$. The possible outcome states after measurement are $|0\rangle$ and $|1\rangle$. I know the example I gave above is a projective measurement, a special case of general measurement. So is there an example where all possible post-measurement states are not orthogonal? I know if the measurement operators are $\{M_m\}$ ($m$ denotes a possible outcome), then if the outcome is $m$ the post-measurement state is $\frac{M_m|\psi \rangle}{\| M_m|\psi \rangle \|}$ ($|\psi\rangle$ being the initial state of the system) such that $\sum_m M^{\dagger}_m M_m=I$. Thus I can mathematically see that the example I am looking for is possible, but I can't come up with one having some physical significance. Answer: Yes. Nonweak measurements correspond to Hermitian (or self-adjoint) operators. The results are 1) an eigenvalue and 2) you project the state vector onto the corresponding eigenspace. The projections onto different eigenspaces produce eigenvectors with different eigenvalues, and eigenvectors of a Hermitian operator with different eigenvalues are orthogonal. So for nonweak measurements the different outcomes are orthogonal. Always.
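The orthogonality claim in the answer — eigenvectors of a Hermitian operator with distinct eigenvalues are orthogonal — is easy to check numerically (a sketch with a randomly generated Hermitian observable):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = A + A.conj().T                      # Hermitian observable

evals, evecs = np.linalg.eigh(H)        # eigh returns orthonormal eigenvectors
overlap = evecs.conj().T @ evecs        # Gram matrix of the eigenvectors
print(np.allclose(overlap, np.eye(4)))  # → True
```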
{ "domain": "physics.stackexchange", "id": 21141, "tags": "quantum-mechanics, operators, quantum-information, measurement-problem" }
Getting started with Program Synthesis
Question: There are some internet pages: http://en.wikipedia.org/wiki/Program_synthesis http://research.microsoft.com/en-us/um/people/sumitg/pubs/synthesis.html https://sites.google.com/site/asergrp/bibli/program-synthesis But, honestly, I can't find an entrance to the topic. There are a lot of advanced-level talks but only a few "hello world" examples (or none at all). So where can one start learning about program synthesis? Answer: Start with the notes from Ras Bodik's course, Program Synthesis for Everyone: http://www.cs.berkeley.edu/~bodik/cs294fa12 The slides from the recent summer school are also useful: https://excape.cis.upenn.edu/summer-school.html
{ "domain": "cstheory.stackexchange", "id": 2154, "tags": "ds.algorithms" }
Under what conditions can a body be approximated as a black body?
Question: I have read this post and from it was my understanding that the definition of a blackbody is: Black body means a body which ABSORBS all wavelengths completely. After reading this answer on another post I gained the understanding that a $\color{green}{\text{black body doesn't have to be in thermal equilibrium.}}$ Wikipedia agrees with both the statements above. So all is well, but then I read this (from the Imperial College London, Department of Physics): Black Body Radiation Stars have high densities -> frequent collisions that can lead to thermodynamic equilibrium where all particles (electrons, ions, photons) have a single temperature Can often approximate stars as black bodies! This has got to be a joke, right? So does it seem that any object in thermal equilibrium with its surroundings will emit as a black body? I don't see how thermal equilibrium implies a black body (even if it is an approximation). This also contradicts the line in green above. Answer: No, thermal equilibrium does not mean something is a blackbody. A red apple can be in thermal equilibrium with its surroundings. It is manifestly not a blackbody radiator. The outer parts of a star, from which the radiation we see escapes, are approximately in thermal equilibrium at a certain temperature. A star absorbs nearly all radiation incident upon it. A star is approximately a blackbody radiator.
{ "domain": "physics.stackexchange", "id": 64450, "tags": "thermodynamics, thermal-radiation" }
Directions of angular velocity and angular acceleration
Question: How can the directions of angular velocity and angular acceleration be determined in the case of uniform circular motion? Answer: The convention is to use the right hand rule. Curl your fingers in the direction of linear velocity, and your thumb points in the direction of angular velocity. This allows one to talk about a fixed angular velocity for a rotating system. For instance, if a fly is flying in counterclockwise circles on the plane of your computer screen, its angular velocity is towards you.
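The same rule falls out of the cross-product definition $\vec{\omega} = (\vec{r} \times \vec{v})/|\vec{r}|^2$ for circular motion, which can be checked numerically (a sketch for the fly example, with the screen taken as the xy-plane):

```python
import numpy as np

r = np.array([1.0, 0.0, 0.0])          # fly's position on the circle
v = np.array([0.0, 2.0, 0.0])          # counterclockwise linear velocity
omega = np.cross(r, v) / np.dot(r, r)  # angular velocity vector
print(omega)  # → [0. 0. 2.] : along +z, out of the screen toward you
```

Swapping the sign of v (clockwise motion) flips omega to point into the screen, exactly as the right hand rule predicts.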
{ "domain": "physics.stackexchange", "id": 33610, "tags": "homework-and-exercises, newtonian-mechanics" }
Where’s the flaw in my proposed TB Treatment?
Question: Ten years ago, I emailed a prominent lung specialist with my suggestion for a treatment for Tuberculosis. His lack of response led me to believe that the idea had no merit whatsoever – but I had no idea what the failings might be. I still don’t and I’m hoping someone will enlighten me as to why this is not worth exploring. The idea is based upon the fact that Mycobacterium Tuberculosis is aerobic – in fact Wikipedia states it “requires high levels of oxygen”. I’d heard that the TB “Sanatoriums” used to practice the treatment of breaking ribs on one side to collapse that lung and “rest” it. It seemed to me that this resting was actually starving the bacterium of Oxygen and killing the infestation. My suggestion is that we take advantage of having two lungs by inserting a pair of breathing tubes into the respiratory tract and into the head of the two bronchi: it may be necessary to use ultrasound or other imaging to accurately position the tubes. We feed pure Nitrogen into one tube, and 40/60 Oxygen/Nitrogen into the other – double Oxygen concentration. The patient therefore receives all their Oxygen requirements through one lung. After some period of time, when we judge that the bacteria in the “starved” lung are all dead, we switch the supplies to the 2 tubes and treat the other lung. Can anyone see why this idea is so flawed? Answer: High oxygen concentration can be deleterious; it can induce oxidative damage. The systemic blood circulation would supply oxygen to the "oxygen-deprived" lung. Moreover, Mycobacteria can survive in anaerobic conditions. And what if both lungs are infected? There are many flaws in the proposed therapy. Adding this point from one of the comments below: There are several cases of extrapulmonary tuberculosis and in these cases treating just the lung would not make sense.
{ "domain": "biology.stackexchange", "id": 4219, "tags": "human-biology, medicine, tuberculosis" }
Prettifying URLs for a PHP-based club website
Question: I have a website for my car enthusiast club ("example.org") whose content can be divided into three main sections: Startseite (the "front page" of the site), Mitglieder (membership directory), and Vereinsinfos ("about us"). For each section, the content is handled by a corresponding PHP script: /home.php, /mitglieder.php, and /verein.php. I'd like to prettify the URLs to eliminate the .php extensions, so that the URLs look like http://example.org/home (maps to /home.php) http://example.org/home/ (also maps to /home.php) http://example.org/home/1 (maps to /home.php with a data=1 query string) After reading all documentation, I formed my .htaccess like this (repeated for the mitglieder and verein sections): RewriteRule ^home\/$ /home.php [L] RewriteRule ^home$ /home/ [L] RewriteRule ^home\/([^/]*)$ /home.php?data=$1 [L] This works, but I can't figure out how to improve it further after reading all the documentation on the Internet. (I have difficulty understanding regular expressions.) In addition to the content above, I also have static resources under /images/, /js/, /css/, etc. which should be left alone. Answer: First, I'd like to point out that when you map URLs with varying numbers of slashes to the same content, you must take care when referencing images or links in your HTML. Basically, you must use absolute URLs everywhere instead of relative URLs. If you want to have both /home and /home/ refer to the exact same content, it would be good practice to pick one of them as the canonical URL. You should either enforce your choice by redirecting the alternate URL to the canonical URL, or just inform web crawlers of your preference by putting a <link rel="canonical" href="…"> tag in the page. If you don't want the canonicalizing redirect, then the first two rules would be better combined as RewriteRule ^home/?$ /home.php [L]. (The question mark in the regex means that the trailing slash is optional; both home and home/ will match.)
If you do want a canonicalizing redirect, then you'll need the first two rules to remain separate: RewriteRule ^home/$ /home.php [L] RewriteRule ^home$ /home/ [L,R] # canonicalizing redirect The third rule can remain as it is. However, I recommend modifying your PHP script so that it accepts the additional path elements using $_SERVER['PATH_INFO'] instead of a $_GET['data'] parameter. Then you can combine your first and third rules: RewriteRule ^home/(.*)$ /home.php/$1 [L] RewriteRule ^home$ /home/ [L,R] # canonicalizing redirect Or, without the canonicalizing redirect, it's just one rule: RewriteRule ^home(/.*)?$ /home.php$1 [L] You have three scripts. Instead of repeating all of the rules for each script, you should be able to combine them. With canonicalizing redirects: RewriteRule ^(home|mitglieder|verein)(/.*)?$ /$1.php$2 [L] RewriteRule ^(home|mitglieder|verein)$ /$1/ [L,R] With no canonicalizing redirects, it all collapses into one rule: RewriteRule ^(home|mitglieder|verein)(/.*)?$ /$1.php$2
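As a sanity check on that last combined pattern, here is a small Python sketch (my own illustration, not Apache itself — Python's re is close enough to mod_rewrite's PCRE for this pattern) of how request paths would be mapped; the rewrite function and example paths are hypothetical:

```python
import re

# Approximates the single combined rule:
#   RewriteRule ^(home|mitglieder|verein)(/.*)?$ /$1.php$2
# Per-directory (.htaccess) patterns match the path without a leading slash.
RULE = re.compile(r'^(home|mitglieder|verein)(/.*)?$')

def rewrite(path):
    """Return the internal target for a request path, or None if no rule matches."""
    m = RULE.match(path)
    if not m:
        return None  # static resources (/images/, /js/, /css/) fall through
    section, rest = m.group(1), m.group(2) or ''
    return '/{}.php{}'.format(section, rest)

print(rewrite('home'))             # /home.php
print(rewrite('home/'))            # /home.php/
print(rewrite('mitglieder/7'))     # /mitglieder.php/7
print(rewrite('images/logo.png'))  # None
```

The optional group `(/.*)?` is what lets one rule cover the bare section name, the trailing-slash form, and any extra path elements handed to PATH_INFO.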
{ "domain": "codereview.stackexchange", "id": 14702, "tags": "regex, url, .htaccess" }
How does $U_f$ act on a qudit state in the Deutsch-Jozsa Algorithm
Question: The problem starts with the given the input state $|\psi_{in} \rangle = |0 \rangle |1 \rangle$, I'm asked to calculate $|\psi'\rangle = H_d \otimes H_d |\psi_{in} \rangle$ where $H_d$ is the Hadamard gate for $d=4$ dimensional system. $$H_d = \frac{1}{2} \begin{pmatrix} 1 & 1 & 1 & 1\\ 1 & -1 & 1 & -1\\ 1 & 1 & -1 & -1\\ 1 & -1 & -1 & 1 \end{pmatrix}$$ Well, $H_d|0\rangle = |0\rangle + |1\rangle + |2\rangle + |3\rangle$ and $H_d |1 \rangle = |0\rangle - |1\rangle + |2 \rangle - |3 \rangle$. So $|\psi'\rangle = H_d \otimes H_d |\psi_{in} \rangle = \frac{1}{4} \sum_{x,y=0}^{d-1} (-1)^{y} |x\rangle |y \rangle$. The problem suggests that the $\frac{1}{4}$ is not part of the new state, but I think it is. Now the question I'm stuck on is using the unitary operator $U_f | x,y \rangle = |x, y \oplus f(x) \rangle$, show that $U_f|\psi'\rangle = \Big( \sum_{x=0}^{d-1}(-1)^{x} |x\rangle \Big) |1 \rangle_H$ $|1\rangle_H$ is not defined in the problem but I'm guessing it's equivalent to $H_d|1\rangle$ My problem is we don't know what $f(x)$ is, only that it might be constant or balanced. So why is $|y\rangle$ always transformed into $|1\rangle_H$? Here, a function $f$ in $d$ dimensions is defined as constant if $f(0) \oplus f(1) \oplus \ldots \oplus f(d-1) = 0$ and it is balanced if $f(0) \oplus f(1) \oplus \ldots \oplus f(d-1) = \frac{d}{2}$ where $\oplus$ is addition mod $d$. Answer: This is how I understand exercise 9.6 from the book. Firstly note that $$|1_H\rangle = |0\rangle - |1\rangle + |2\rangle -|3\rangle = \sum_{y = 0}^{d-1} (-1)^{y}|y\rangle$$ so let's write $|\psi'\rangle$ in a slightly different way: $$ |\psi'\rangle = H_d \otimes H_d |01 \rangle = \sum_{x,y=0}^{d-1} (-1)^{y} |x\rangle |y \rangle = \sum_{x}^{d-1} |x\rangle |1_H \rangle $$ Here I drop the normalization factors ($\frac{1}{4}$ or $\frac{1}{2}$) like in the book. 
Also, I assume that $f(x)$ is a binary function ($f(x) = 0$ or $f(x) = 1$), and when we have this definition for $U_f$ $$U_f |x,y\rangle = |x,y \oplus f(x)\rangle$$ then $$U_f |\psi'\rangle = \sum_{x}^{d-1} |x\rangle |1_H \oplus f(x)\rangle$$ If $f(x) = 0$, then $|1_H \oplus f(x)\rangle = |1_H \rangle$ and if $f(x) = 1$, then $|1_H \oplus f(x)\rangle = -|1_H \rangle$, so $$U_f |\psi'\rangle = \Big( \sum_{x=0}^{d-1}(-1)^{f(x)} |x\rangle \Big) |1_H \rangle$$ Here I have a difference with the book. In the book, instead of $(-1)^{f(x)}$ we have $(-1)^{x}$, which I guess is a typo in the exercise. If $f(x)$ is constant, then it can be proved that either $f(x) = 0$ always or $f(x) = 1$ always, using the given definition of constant functions and my assumption that $f(x)$ is binary. So in the constant case: $$U_f |\psi'\rangle = \Big( \sum_{x=0}^{d-1}(-1)^{f(x)} |x\rangle \Big) |1_H \rangle = \pm\Big( \sum_{x=0}^{d-1} |x\rangle \Big) |1_H \rangle = \pm |0_H\rangle |1_H\rangle$$ If we then apply $H_d$ to the first qudit, since $H_d|0_H\rangle = |0\rangle$, we will always obtain $|0\rangle$. If it's a balanced function, then with the given definition and the assumption that $f(x)$ is binary, it can be proved that in half of the cases (2 of them) $f(x) = 0$ and in the other half (2) $f(x) = 1$, so $$\sum_{x=0}^{d-1}(-1)^{f(x)} |x\rangle = \pm |1_H\rangle \text{ or } \pm |2_H\rangle \text{ or } \pm |3_H\rangle$$ In all of these cases, if we apply $H_d$, we will never obtain $|0\rangle$ after the measurement, so the measurement will indicate that we had a balanced function. Here I have used the definition of $H_d$ from the exercise: $$H_d |x\rangle = \frac{1}{\sqrt{d}}\sum_{y = 0}^{d-1} (-1)^{x \cdot y} |y\rangle$$ And hence $H_d^2|j\rangle = H_d |j_H\rangle = |j\rangle$, where $j \in \{0,1,2,3\}$. 
In the end the author writes "Finally, apply qudit Hadamard gates to the first set of qudits", and I don't see where in the exercise the first qudit became quditS, but I imagine the argument can be generalized in a similar fashion to the multiple-input-qudit case.
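The constant-versus-balanced bookkeeping above is easy to verify numerically. Below is a small pure-Python sketch (my own check, not part of the exercise) that builds $H_4$ from the exercise's definition, taking $x \cdot y$ to be the bitwise dot product (so $H_4 = H_2 \otimes H_2$, matching the matrix in the question), confirms $H_d^2 = I$, and checks that constant $f$ always yields outcome $|0\rangle$ while balanced $f$ never does:

```python
import itertools
from math import isclose, sqrt

d = 4
# Qudit Hadamard from the exercise: <y|H_d|x> = (-1)^(x.y) / sqrt(d),
# where x.y is the bitwise dot product (so H_4 = H_2 tensor H_2).
H = [[(-1) ** bin(x & y).count('1') / sqrt(d) for y in range(d)]
     for x in range(d)]

def apply(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

# H_d is its own inverse: H_d^2 |j> = |j>
for j in range(d):
    e = [1.0 if k == j else 0.0 for k in range(d)]
    out = apply(H, apply(H, e))
    assert all(isclose(out[k], e[k], abs_tol=1e-12) for k in range(d))

def prob_zero(f):
    """P(measure |0>) on the first register after the final Hadamard,
    starting from the normalized state (1/2) * sum_x (-1)^f(x) |x>."""
    state = [(-1) ** f[x] / sqrt(d) for x in range(d)]
    return apply(H, state)[0] ** 2

for f in itertools.product([0, 1], repeat=d):
    if len(set(f)) == 1:      # constant: outcome |0> with certainty
        assert isclose(prob_zero(f), 1.0)
    elif sum(f) == 2:         # balanced: outcome |0> never occurs
        assert isclose(prob_zero(f), 0.0, abs_tol=1e-12)
```

This mirrors the argument exactly: the amplitude of $|0\rangle$ after the final $H_d$ is proportional to $\sum_x (-1)^{f(x)}$, which is $\pm d$ for constant $f$ and $0$ for balanced $f$.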
{ "domain": "quantumcomputing.stackexchange", "id": 2298, "tags": "textbook-and-exercises, hadamard, deutsch-jozsa-algorithm, quantum-state" }
How to prove the existence of a number which cannot be written by any algorithm?
Question: I have the problem: Show that there exists a real number for which no program exists that runs infinitely long and writes that number's decimal digits. I suppose it can be solved by reducing it to the Halting problem, but I have no idea how to do so. I would also appreciate links to similar problems for further practice. Answer: As Sebastian indicates, there are only (infinitely but) countably many programs. List them to create a list of programs. The list is (infinitely but) countably long. Each program generates one number in R. From that we can create an (infinite but) countable list of numbers in R. Now we can apply Cantor's diagonal argument directly to prove that there still must be other numbers. By the way, if the algorithm has (finite) arguments, you can just rewrite that as a "longer" list of programs where each program doesn't have any arguments. With regard to your comment "What if real numbers are allowed as argument", then the question's premise is wrong: all numbers in R can then be generated. If someone finds a number, say $\pi$, and claims it cannot be computed, we have the following "algorithm": func(number): return number and call func($\pi$)
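To make the diagonal argument concrete, here is a small Python sketch (my own illustration) that, given any finite prefix of the list of digit streams, produces digits differing from the $i$-th stream at position $i$; digits 0 and 9 are avoided so the constructed number cannot coincide with a listed number via an alternative decimal expansion (e.g. 0.0999… = 0.1):

```python
def diagonal(streams):
    """Given a list of decimal-digit sequences (one per program), return
    the digits of a number that differs from stream i at position i."""
    # Any rule mapping each digit to a different digit works; avoiding 0
    # and 9 sidesteps the dual-expansion issue (0.0999... = 0.1000...).
    return [5 if s[i] == 4 else 4 for i, s in enumerate(streams)]

# A finite illustration: four "programs", each printing digits forever.
listed = [
    [3, 1, 4, 1],  # e.g. a program printing pi's digits
    [0, 0, 0, 0],
    [1, 4, 1, 4],
    [9, 9, 9, 9],
]
diag = diagonal(listed)
print(diag)  # [4, 4, 4, 4] -- differs from row i at column i
for i, s in enumerate(listed):
    assert diag[i] != s[i]
```

The actual proof applies this rule to the infinite list of all programs: the resulting digit stream disagrees with every program's output somewhere, so no program writes it.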
{ "domain": "cs.stackexchange", "id": 11447, "tags": "algorithms, reductions, halting-problem" }
"roslaunch pi_tracker skeleton.launch" Fails
Question: ... logging to /home/geeko/.ros/log/5e3bf648-4dca-11e0-a040-001d92bb2297/roslaunch-geeko-MS-7312-12457.log Checking log directory for disk usage. This may take awhile. Press Ctrl-C to interrupt Done checking log file disk usage. Usage is <1GB. started roslaunch server http://geeko-MS-7312:53390/ SUMMARY ======== PARAMETERS * /use_sim_time * /skeleton_tracker/fixed_frame * /skeleton_tracker/load_filepath * /skeleton_tracker/holonomic * /skeleton_tracker/skel_to_joint_map/right_shoulder * /skeleton_tracker/tracking_rate * /skeleton_tracker/skel_to_joint_map/right_hip * /skeleton_tracker/command_rate * /skeleton_tracker/skel_to_joint_map/left_foot * /skeleton_tracker/skel_to_joint_map/left_elbow * /skeleton_tracker/skel_to_joint_map/right_foot * /skeleton_tracker/use_real_robot * /skeleton_tracker/skel_to_joint_map/right_knee * /skeleton_tracker/skel_to_joint_map/left_shoulder * /skeleton_tracker/scale_drive_speed * /skeleton_tracker/base_controller_rate * /skeleton_tracker/skel_to_joint_map/left_knee * /skeleton_tracker/base_control_side * /rosdistro * /robot_description * /skeleton_tracker/skel_to_joint_map/head * /skeleton_tracker/max_rotation_speed * /rosversion * /skeleton_tracker/skel_to_joint_map/torso * /skeleton_tracker/skel_to_joint_map/right_hand * /skeleton_tracker/skel_to_joint_map/right_elbow * /skeleton_tracker/default_joint_speed * /skeleton_tracker/max_drive_speed * /skeleton_tracker/scale_rotation_speed * /robot_state_publisher/publish_frequency * /skeleton_tracker/reverse_rotation * /skeleton_tracker/skel_to_joint_map/neck * /skeleton_tracker/skel_to_joint_map/left_hand * /skeleton_tracker/joint_controller_rate * /skeleton_tracker/skel_to_joint_map/left_hip NODES / robot_state_publisher (robot_state_publisher/state_publisher) kinect_base_link (tf/static_transform_publisher) kinect_base_link1 (tf/static_transform_publisher) kinect_base_link2 (tf/static_transform_publisher) kinect_base_link3 (tf/static_transform_publisher) skeleton_tracker 
(pi_tracker/skeleton_tracker) base_world_broadcaster (tf/static_transform_publisher) ROS_MASTER_URI=http://localhost:11311 core service [/rosout] found process[robot_state_publisher-1]: started with pid [12478] process[kinect_base_link-2]: started with pid [12479] process[kinect_base_link1-3]: started with pid [12480] process[kinect_base_link2-4]: started with pid [12482] process[kinect_base_link3-5]: started with pid [12495] process[skeleton_tracker-6]: started with pid [12505] process[base_world_broadcaster-7]: started with pid [12512] [skeleton_tracker-6] process has died [pid 12505, exit code -11]. log files: /home/geeko/.ros/log/5e3bf648-4dca-11e0-a040-001d92bb2297/skeleton_tracker-6*.log Originally posted by GeniusGeeko on ROS Answers with karma: 299 on 2011-03-13 Post score: 0 Answer: Hi GeniusGeeko, Looks like the SamplesConfig.xml file disappeared in the new openni_kinect package recently released. As a quick fix, I have copied the SamplesConfig.xml file from the ni stack into the pi_tracker params directory and modified the skeleton.launch file accordingly. If you do a "svn update" in the pi_tracker directory, hopefully the launch will then work OK. --patrick P.S. I just noticed that even though the above update fixed the problem on my machine, the skeleton_tracker process can sometimes die on the first attempt. If that occurs, type Ctrl-C in the launch window to kill all the nodes, then try the launch again. I think I have seen this before with other Kinect applications. Perhaps some kind of "wait for openni topic" is required in my skeleton_tracker.cpp file... UPDATE: If you haven't already, roscd into pi_tracker, then run a "make clean" followed by "make" to rebuild the skeleton_tracker binary. If this still doesn't work, can you please provide a little more info about your setup? Helpful information would include. Are you successfully running other ROS packages with your Kinect? Are you using Cturtle or Diamondback? Debian packages or SVN? 
Ubuntu or other OS? The NI stack from http://www.ros.org/wiki/ni (which is now officially deprecated) or the newer openni_kinect stack? (And if the latter, the Debian package or the latest from the repository?) Also, can you include whether or not your machine supports SSE3? On Ubuntu, you can use the command "cat /proc/cpuinfo | grep sse3" to see if it appears among the flags. Finally, can you run the following modified launch file that will fire up the gdb debugger which should tell us the error that is causing the process to die? When you launch this file, a separate xterm will pop up with a gdb prompt. When you get the prompt, enter the command "run" and then copy and paste any output that results. To close the gdb window, type the "quit" command: <launch> <param name="robot_description" command="$(find xacro)/xacro.py '$(find pi_tracker)/urdf/pi_robot.urdf.xacro'" /> <param name="/use_sim_time" value="False" /> <node name="robot_state_publisher" pkg="robot_state_publisher" type="state_publisher"> <param name="publish_frequency" value="20.0"/> </node> <include file="$(find openni_camera)/launch/kinect_frames.launch" /> <node launch-prefix="xterm -e gdb --args" name="skeleton_tracker" pkg="pi_tracker" type="skeleton_tracker"> <param name="load_filepath" value="$(find pi_tracker)/params/SamplesConfig.xml" /> <rosparam command="load" file="$(find pi_tracker)/params/tracker_params.yaml" /> </node> <node pkg="tf" type="static_transform_publisher" name="base_world_broadcaster" args="0 0 0 0 0 0 /base_link /world 100" /> </launch> Originally posted by Pi Robot with karma: 4046 on 2011-03-13 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by Pi Robot on 2011-03-16: Cool! Looking forward to your improvements. Comment by GeniusGeeko on 2011-03-16: I read a little more on the wiki, I'm throwing a full overhaul on your teleoperation sections. I'm trying to make them a little more generic for other users. 
I am however running into troubles getting rviz simulation to work properly. It just shoots out thousands of errors. Comment by Pi Robot on 2011-03-16: Glad to hear it's working! Alas, I don't using IM (I can barely get enough done as it is) and I really like this forum since other users get to see the questions and answers. Comment by GeniusGeeko on 2011-03-16: For the people that are wondering, my processor didn't support SSE3 just SSE2. Comment by GeniusGeeko on 2011-03-15: Awesome it works perfectly now, do you have a IMing service we could use for further discussion? Comment by Pi Robot on 2011-03-14: Please see the "Update" at the bottom of my answer above. Comment by GeniusGeeko on 2011-03-14: Can I get some more help here? Comment by GeniusGeeko on 2011-03-13: Now I am getting.. [skeleton_tracker-6] process has died [pid 5930, exit code -4]. log files: /home/geeko/.ros/log/1e7cb524-4de8-11e0-9638-001d92bb2297/skeleton_tracker-6*.log
{ "domain": "robotics.stackexchange", "id": 5055, "tags": "ros, kinect, pi-tracker, openni-kinect" }
Understand equations of a conducting sphere
Question: Can somebody explain to me when the following two equations (equations 2.48 and 2.50 in this document) are applicable and what $\Phi_s$ and $\Phi$ actually are? The thing is, I want to find general equations that determine the field produced by a conducting sphere in an external field and was wondering whether these are the equations I am looking for. $$\Phi (r,\theta,\phi)=\sum_{l,m}\left(\frac{a}{r} \right )^{l+1}Y_{lm}(\theta,\phi)\oint\Phi_{s}(\theta',\phi')Y^*_{lm}(\theta',\phi')d\Omega',\,\,\,\,r>a$$ $$\Phi (r,\theta,\phi)=\sum_{l,m}\left(\frac{r}{a} \right )^{l}Y_{lm}(\theta,\phi)\oint\Phi_{s}(\theta',\phi')Y^*_{lm}(\theta',\phi')d\Omega',\,\,\,\,r<a$$ Or is it rather this equation (equation (17.3) in this document); probably they are one and the same: $$\Phi (\mathbf{x})=-\frac{1}{4\pi}\int_S \Phi(\mathbf{x}')(\hat n.\nabla')G_D(\mathbf{x},\mathbf{x}')dS'$$ Answer: If I understand the situation correctly, you have a metal sphere with a total charge of $Q$ in an external uniform field. ($Q$ can be zero; it's the general case.) Assume the field's direction is $\hat z$. Since we usually choose the center of the sphere as the zero potential point, to make the problem simpler we can substitute the charged sphere with a grounded sphere and a charge $Q$ at the origin. Nothing changes. Now we solve the problem for the grounded sphere without considering the charge at the center and then add the two potentials: $$\text{boundary conditions: }\,\cases{V=0 \,\,\,\,\,\,\,\,\,\,\ r=R \\ V \to -E_0 r \cos \theta \,\,\,\,\,\,\,\,\,\,\ r\gg R }$$ $$V(r,\theta)=\sum_{l=0}^{\infty}\left(A_lr^l+B_lr^{-(l+1)}\right)\mathrm{P}_l(\cos \theta) $$ Applying these boundary conditions we will have: $$V(r,\theta)=-E_0\left( r-\frac{R^3}{r^2}\right)\cos \theta$$ Now we add the effect of the charge $Q$: $$V_{total}(r,\theta)=-E_0\left( r-\frac{R^3}{r^2}\right)\cos \theta+ \frac{1}{4\pi \epsilon_0}\frac{Q}{r}$$
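A quick numerical check of the final expression (my own sketch; the radius, field strength, and charge values are arbitrary illustrations): on the surface $r = R$ the field-induced term vanishes, leaving only the constant Coulomb term, and far from the sphere the potential approaches $-E_0 r \cos\theta$:

```python
from math import cos, pi, isclose

EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

def V_total(r, theta, R=0.1, E0=100.0, Q=1e-9):
    """Potential of a conducting sphere (radius R, total charge Q)
    in a uniform external field E0 along z. Parameter values are
    illustrative, not from the original problem."""
    induced = -E0 * (r - R**3 / r**2) * cos(theta)
    coulomb = Q / (4 * pi * EPS0 * r)
    return induced + coulomb

R, Q = 0.1, 1e-9
V_surface = Q / (4 * pi * EPS0 * R)  # expected constant value on r = R
for theta in (0.0, pi / 3, pi / 2, 2.3, pi):
    assert isclose(V_total(R, theta), V_surface)

# Far away, the uniform-field term dominates: V ~ -E0 * r * cos(theta)
r_far = 1e6
assert isclose(V_total(r_far, 0.0), -100.0 * r_far, rel_tol=1e-3)
```

An equipotential surface at $r = R$ is exactly the boundary condition a conductor must satisfy, which is why the $r - R^3/r^2$ combination appears.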
{ "domain": "physics.stackexchange", "id": 8817, "tags": "electromagnetism, electric-fields, potential, classical-electrodynamics" }
How can I create a map using ROS?
Question: Hello everyone!! I have a Hokuyo laser scanner, Kinect, IMU, and other hardware. How can I create a map using ROS? Originally posted by programmer on ROS Answers with karma: 61 on 2014-01-21 Post score: 1 Answer: Hi, if you want to build a map with only the laser, I suggest you use hector_slam. If you want to use hector_slam, you need a launch file, which you can find at this link. But if you want to use more sensors than the laser scanner, you can use gmapping, which uses odometry. Originally posted by Hamid Didari with karma: 1769 on 2014-01-22 This answer was ACCEPTED on the original site Post score: 3 Original comments Comment by programmer on 2014-01-22: Thanks a lot.. :-) Comment by programmer on 2014-01-26: I followed the instructions in the link and finally I can create a map, but when I run slam.launch (a modification of tutorial.launch) the error below is raised; what's the problem? [ERROR] [1390736724.097883974]: Trajectory Server: Transform from /map to scanmatcher_frame failed: Frame id /map does not exist! Frames (3): Frame /laser exists with parent /base_link. Frame /base_link exists with parent NO_PARENT.
{ "domain": "robotics.stackexchange", "id": 16722, "tags": "ros" }
Is it possible to prove conventional current is always equivalent to actual current?
Question: I understand how the conventional current is logically equivalent to the actual current of electrons in a circuit. However, whenever I'm studying some new concept, and things are assumed as working with a current of protons (or positively charged particles), I have to always prove to myself that this is true because it also holds true for a current of electrons. This proving things to myself is getting irritating. I wanted to know if there is an argument, a very fundamental one, that could make me stop wanting to prove these things to myself? Something like, “whatever we can say about a current of positive charge particles flowing one way, we can say about electrons flowing the opposite way because ....[followed by a proof]”. Answer: I suggest you look at all your arguments to date and see what they have in common. I'd almost be willing to bet that they can be reduced to something like the following: The equations governing a circuit's behaviour are linear in the vector of state variables, amongst whose members is the current. That is, if $\vec{U}$ is a column vector of state variables and it solves the equations, then so does $\alpha \vec{U}$ as well, where $\alpha$ is any real (or complex, in the case of phasor notation) scalar. This linearity follows from the linearity of all the circuit elements you use together with the linearity of the equations used to combine them: check that the relationships between the state variables defining an element's electrical behaviour hold in each case. For example, the inductor is defined by $v(t) = - L \,\mathrm{d}_t i(t)$, and this equation holds true if we make the transformation $(v,\,i)\mapsto(\alpha\,v,\,\alpha\,i)$. Then you check the linearity of the equations that combine these equations in a circuit. These are simply the Kirchhoff voltage and current laws (conservation of energy around a loop and charge, respectively), and are most decidedly linear in the $v$ and $i$ variables they combine. 
The arbitrary choice of current direction is simply this argument in the special case where $\alpha=-1$. You must switch the sign of all the voltage definitions together with your currents. Actually, equations of state that survive multiplication by $-1$ are more general than simply linear ones, but this is the easiest argument. Note that the above arguments do not hold for all physics governing the electrical circuit: protons going the opposite way to the equivalent number of electrons going the other are most certainly not the same physics. A good example was given to you by user dmckee: Note that the Hall Effect unambiguously differentiates between the two cases, so it is not true that “whatever we can say about a current of positive charge particles flowing one way, we can say about electrons flowing the opposite way", but for the usual questions of circuit analysis it doesn't make any difference.
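For a concrete instance of the scaling argument, here is a small sketch (my own, assuming a series RL loop obeying $v_s = R\,i + L\,\mathrm{d}_t i$ by KVL; the component values are arbitrary): if $i(t)$ solves the loop equation for source $v_s$, then $\alpha\,i(t)$ solves it for source $\alpha\,v_s$, including the sign-flip case $\alpha = -1$:

```python
# Finite-difference check that the series-RL loop equation
#   v_s(t) = R*i(t) + L*di/dt
# is invariant under (v_s, i) -> (alpha*v_s, alpha*i).
from math import exp, isclose

R_ohm, L_h, V0 = 10.0, 0.5, 5.0   # illustrative component values
tau = L_h / R_ohm

def i_step(t, scale=1.0):
    """Step response of the RL loop to a source scale*V0 (t >= 0)."""
    return scale * (V0 / R_ohm) * (1.0 - exp(-t / tau))

def residual(t, scale=1.0, h=1e-7):
    """v_s - (R*i + L*di/dt), using a central difference for di/dt.
    Zero (up to finite-difference error) means the loop equation holds."""
    di_dt = (i_step(t + h, scale) - i_step(t - h, scale)) / (2 * h)
    return scale * V0 - (R_ohm * i_step(t, scale) + L_h * di_dt)

for t in (0.01, 0.05, 0.2):
    for alpha in (1.0, -1.0, 3.5):   # alpha = -1 is the sign-flip case
        assert isclose(residual(t, alpha), 0.0, abs_tol=1e-4)
```

The residual stays zero for every $\alpha$ precisely because each term in the loop equation is linear in the state variables.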
{ "domain": "physics.stackexchange", "id": 16912, "tags": "electromagnetism, electric-current" }
Would time dilation be too great for the early universe to expand?
Question: I read that one second after the big bang the universe was composed of photons, electrons, and neutrinos. Wouldn't the density of energy/matter have caused such extreme time dilation that the universe would never expand? Answer: Time dilation only applies between distant observers. Local observers always say that time goes normally around THEM. It's only when separated observers compare each other that you get a problem. So, no, there is no contradiction in having an observer in an arbitrarily dense area say that his local neighborhood is expanding.
{ "domain": "physics.stackexchange", "id": 11290, "tags": "general-relativity, time, big-bang" }
ur_gazebo: changed controller configuration does not work
Question: Hello, I'm using ros-melodic and I'm having trouble controlling the ur5 manipulator with gazebo. I downloaded the ur_gazebo package from https://github.com/ros-industrial/universal_robot.git Planning and execution with moveit works well. What I'm trying to do is execute position control of each joint from an optimization result, so I modified the source code to watch how it works. What I wanted is to control shoulder_pan_joint by publishing data through rostopic. I changed ur5.launch and arm_controller_ur5.yaml like below. ur5.launch <?xml version="1.0"?> <launch> <arg name="limited" default="false" doc="If true, limits joint range [-PI, PI] on all joints." /> <arg name="paused" default="false" doc="Starts gazebo in paused mode" /> <arg name="gui" default="true" doc="Starts gazebo gui" /> <!-- startup simulated world --> <include file="$(find gazebo_ros)/launch/empty_world.launch"> <arg name="world_name" default="worlds/empty.world"/> <arg name="paused" value="$(arg paused)"/> <arg name="gui" value="$(arg gui)"/> </include> <!-- send robot urdf to param server --> <include file="$(find ur_description)/launch/ur5_upload.launch"> <arg name="limited" value="$(arg limited)"/> </include> <!-- push robot_description to factory and spawn robot in gazebo --> <node name="spawn_gazebo_model" pkg="gazebo_ros" type="spawn_model" args="-urdf -param robot_description -model robot -z 0.1" respawn="false" output="screen" /> <include file="$(find ur_gazebo)/launch/controller_utils.launch"/> <!-- start this controller --> <rosparam file="$(find ur_gazebo)/controller/arm_controller_ur5.yaml" command="load"/> <node name="arm_controller_spawner" pkg="controller_manager" type="controller_manager" args="spawn arm_controller" respawn="false" output="screen"/> <!-- load other controllers --> <node name="ros_control_controller_manager" pkg="controller_manager" type="controller_manager" respawn="false" output="screen" args="load joint_group_position_controller" /> <node 
name="simple_controller" pkg="controller_manager" type="controller_manager" respawn="false" output="screen" args="load prac" /> </launch> arm_controller.yaml prac: type: position_controllers/JointPositionController joint: shoulder_pan_joint pid: p: 100.0 d: 10.0 arm_controller: type: position_controllers/JointTrajectoryController joints: - shoulder_pan_joint - shoulder_lift_joint - elbow_joint - wrist_1_joint - wrist_2_joint - wrist_3_joint constraints: goal_time: 0.6 stopped_velocity_tolerance: 0.05 shoulder_pan_joint: {trajectory: 0.1, goal: 0.1} shoulder_lift_joint: {trajectory: 0.1, goal: 0.1} elbow_joint: {trajectory: 0.1, goal: 0.1} wrist_1_joint: {trajectory: 0.1, goal: 0.1} wrist_2_joint: {trajectory: 0.1, goal: 0.1} wrist_3_joint: {trajectory: 0.1, goal: 0.1} stop_trajectory_duration: 0.5 state_publish_rate: 25 action_monitor_rate: 10 joint_group_position_controller: type: position_controllers/JointGroupPositionController joints: - shoulder_pan_joint - shoulder_lift_joint - elbow_joint - wrist_1_joint - wrist_2_joint - wrist_3_joint In ur5.launch I added a node named simple_controller, and in the yaml file I added the controller description. Then I ran the launch file roslaunch ur_gazebo ur5.launch and published the desired position with rostopic pub /prac/command std_msgs/Float64 "data: 0.5" However, the robot doesn't seem to move. Can anyone tell me what is wrong? Originally posted by ilkwon on ROS Answers with karma: 3 on 2021-04-29 Post score: 0 Original comments Comment by ilkwon on 2021-04-30: What I think is the cause of this problem is when I do the roslaunch ur_gazebo ur5.launch, I get these messages Loaded 'prac' Loaded 'joint_group_position_controller' Loaded 'joint_state_controller' Started ['joint_state_controller'] successfully Loaded 'arm_controller' Started ['arm_controller'] successfully It seems prac (the single joint controller I added) is just loaded and not started. How can I deal with this problem? 
Comment by gvdhoorn on 2021-04-30: This doesn't seem like a problem with ur_gazebo, but with how you changed the provided files and the way you interact with the controllers. The title should reflect that. Edit: I've updated the title. Answer: You have this in the .launch file you edited: <!-- start this controller --> <rosparam file="$(find ur_gazebo)/controller/arm_controller_ur5.yaml" command="load"/> <node name="arm_controller_spawner" pkg="controller_manager" type="controller_manager" args="spawn arm_controller" respawn="false" output="screen"/> <!-- load other controllers --> <node name="ros_control_controller_manager" pkg="controller_manager" type="controller_manager" respawn="false" output="screen" args="load joint_group_position_controller" /> <node name="simple_controller" pkg="controller_manager" type="controller_manager" respawn="false" output="screen" args="load prac" /> The simple_controller node (which is essentially a controller_manager) only loads the controller settings with the name prac. You don't actually start the controller. For that, you'd need to run the controller_manager with the spawn argument. Note: with the current .launch file, the arm_controller is spawned. You cannot start two controllers for the same joint (resource really) at the same time. That would lead to conflicts, and is specifically prohibited. You'll have to stop the running controller and then start your prac controller. You could either do that in separate steps, or use the switch_controller service. Note also: because your prac controller controls only a single joint, I expect the rest of the robot to flail around in Gazebo. Originally posted by gvdhoorn with karma: 86574 on 2021-04-30 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by ilkwon on 2021-05-01: Oh.... This worked thanks!,!
{ "domain": "robotics.stackexchange", "id": 36384, "tags": "ros, ros-melodic, gazebo-ros-control" }
$3EQ \leq _P 2EQ$
Question: Let: $2EQ$ - The language of all binary ($\mathbb{Z}_2$) equation sets that have a solution in $\mathbb{Z}_2$, where each multiplication is of at most two $x_i,\, x_j$. Meaning a set of equations of the form: $\Sigma a_i \cdot x_i + \Sigma \, \Sigma \, a_{i,j} \cdot x_i \cdot x_j = b $. $3EQ$ - The same, just with products of up to 3 variables, meaning a set of equations of the form: $\Sigma a_i \cdot x_i + \Sigma \, \Sigma \, a_{i,j} \cdot x_i \cdot x_j + \Sigma \, \Sigma \, \Sigma \, a_{i,j,k} \cdot x_i \cdot x_j \cdot x_k = b $. I'd like to show that $3EQ \leq _P 2EQ$, but I don't know how to deal with the expressions of $x_i \cdot x_j \cdot x_k$. I thought about trying to represent them using only at most two $\wedge$'s, but I'm not sure if it's even possible. Maybe there's a better idea? Answer: Hint: You can add a variable $x_{ij}$ whose value is always $x_ix_j$ by adding the equation $x_{ij}-x_ix_j=0$. This allows you to reduce cubic equations to quadratic ones.
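The hint can be carried out mechanically. Below is a Python sketch (my own illustration; the encoding of equations as lists of monomials is an assumption of the sketch) that introduces a fresh variable $w$ for the pair $x_i x_j$ inside each cubic monomial and adds the consistency equation $w + x_i x_j = 0$ (over $\mathbb{Z}_2$, $-1 = +1$), together with a brute-force check that satisfiability is preserved:

```python
from itertools import product

def reduce_cubic(equations, n):
    """equations: list of (monomials, b), each monomial a tuple of <= 3
    variable indices, equation meaning sum of monomials = b over Z2.
    Returns an equivalent degree-<=2 system and the new variable count."""
    aux = {}    # (i, j) -> index of the fresh variable w for x_i * x_j
    out = []
    for monomials, b in equations:
        new_monomials = []
        for m in monomials:
            if len(m) == 3:
                i, j, k = m
                if (i, j) not in aux:
                    aux[(i, j)] = n + len(aux)
                    # consistency: w + x_i*x_j = 0  (i.e. w = x_i*x_j)
                    out.append(([(aux[(i, j)],), (i, j)], 0))
                new_monomials.append((aux[(i, j)], k))  # w * x_k
            else:
                new_monomials.append(m)
        out.append((new_monomials, b))
    return out, n + len(aux)

def satisfiable(equations, n):
    """Brute-force Z2 satisfiability for a monomial system."""
    for x in product((0, 1), repeat=n):
        if all(sum(all(x[v] for v in m) for m in ms) % 2 == b
               for ms, b in equations):
            return True
    return False

# x0*x1*x2 = 1 with x0 + x1 = 0 is satisfiable (all ones), before and after.
sys3 = [([(0, 1, 2)], 1), ([(0,), (1,)], 0)]
sys2, m = reduce_cubic(sys3, 3)
assert satisfiable(sys3, 3) and satisfiable(sys2, m)
# x0*x1*x2 = 1 with x0 = 0 is unsatisfiable, before and after.
sys3 = [([(0, 1, 2)], 1), ([(0,)], 0)]
sys2, m = reduce_cubic(sys3, 3)
assert not satisfiable(sys3, 3) and not satisfiable(sys2, m)
```

The reduction only adds one variable and one quadratic equation per distinct pair, so it is clearly polynomial time.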
{ "domain": "cs.stackexchange", "id": 4041, "tags": "complexity-theory, turing-machines, reductions" }
Ordering a .txt file numerically by size and alphabetically
Question: I am doing a controlled assessment and one of the tasks is that we have to sort the saved pupils scores (numerically), and their names (alphabetically) in a .txt file. Any other improvements to the code would also be greatly appreciated. import random classcheck = False while classcheck == False: classnumber = input("what class are you in") if classnumber == "1": print() classcheck = True elif classnumber == "2": print() classcheck = True elif classnumber == "3": print() classcheck = True else: print("that is not a valid class number") student1name = input("please enter your name") student1score = 0 for i in range(10): question1no1 = random.randint(1,20) question1no2 = random.randint(1,20) operators = ['+','-','*'] op_number = random.randint(0,2) op_sym = operators[op_number] op = op_number if op == 0: ans = question1no1 + question1no2 elif op == 1: ans = question1no1 - question1no2 elif op == 2: ans = question1no1 * question1no2 print(str(question1no1), str(op_sym) , str(question1no2)) question = "what is "+str(question1no1) + str(op_sym) + str(question1no2)+"?" student1answer1 = input(question) print (student1answer1) if str(student1answer1) == str(ans): print("congrats you got the answer right") student1score = student1score + 1 else: print("sorry you got the answer wrong") print("your score was " + str(student1score) + " out of 10") if classnumber == "1": class1score = open("class1score.txt", "a") class1score.write("\n" + student1name + (" ") + str(student1score)) class1score.close() if classnumber == "2": class2score = open("class2score.txt", "a") class2score.write("\n" + student1name + (" ") + str(student1score)) class2score.close() if classnumber == "3": class3score = open("class3score.txt", "a") class3score.write("\n" + student1name + (" ") + str(student1score)) class3score.close() Answer: My first suggestion would be to use some extra functions. However, let's look at it in several parts. First, the getting of the class number. 
You have classcheck = False while classcheck == False: classnumber = input("what class are you in") if classnumber == "1": print() classcheck = True elif classnumber == "2": print() classcheck = True elif classnumber == "3": print() classcheck = True else: print("that is not a valid class number") The if statements are quite the same. By using set-membership, we can reduce the duplicate code. classcheck = False while classcheck == False: classnumber = input("what class are you in") if classnumber in {"1", "2", "3"}: classcheck = True else: print("that is not a valid class number") Furthermore, writing while foo == False: is not really idiomatic Python. Better would be while foo:. But, in this case you're trying to emulate a do {...} while (...) loop. I'd suggest writing it as follows: while True: classnumber = input("what class are you in") if classnumber in {"1", "2", "3"}: break print("that is not a valid class number") The asking of the name is quite obvious, let's move to the part where the questions get asked: for i in range(10): The variable i does not actually get used. It's a minor nitpick, but convention has it that you should write for _ in range(10): instead. question1no1 = random.randint(1,20) question1no2 = random.randint(1,20) What is question1no1 referring to? This will also be executed for question 2 to 10. Maybe operand_left and operand_right would be better names? Leaving them be for now, but it's something you can ponder about. Next, the creation of the 'puzzle'/question. operators = ['+','-','*'] op_number = random.randint(0,2) op_sym = operators[op_number] op = op_number if op == 0: ans = question1no1 + question1no2 elif op == 1: ans = question1no1 - question1no2 elif op == 2: ans = question1no1 * question1no2 There are a few lines between the definition of the operators, and the calculation of the desired result. 
First suggestion: use random.choice(operators) instead of random.randint(0, 2) operators = ['+','-','*'] op_sym = random.choice(operators) if op_sym == '+': ans = question1no1 + question1no2 elif op_sym == '-': ans = question1no1 - question1no2 elif op_sym == '*': ans = question1no1 * question1no2 Already a lot clearer, no? Still, I've typed the + sign 3 times here. By using the operator module, I could write: import operator ... ... operators = [ ('+', operator.add), ('-', operator.sub), ('*', operator.mul), ] op_sym, op_func = random.choice(operators) ans = op_func(question1no1, question1no2) Adding a new operator would be a simple method of adding another line in the list above. I'll assume the explicit print statements are a bit of debugging work, and ignore those. Ideally you'd remove them. Look at how you write the question. question = "what is "+str(question1no1) + str(op_sym) + str(question1no2)+"?" There are so many things going on, it is a bit worry-some. Also, you might want to add a space around the operator, causing the need for yet some more work. By using string formatting (https://docs.python.org/3.5/library/stdtypes.html#str.format), you can make it a bit simpler: question = "what is {} {} {}?".format(question1no1, op_sym, question1no2) Now, we get to student1answer1. Why not just answer? (Or given_answer, and rename ans to expected_answer). As for checking the results: if str(student1answer1) == str(ans): print("congrats you got the answer right") student1score = student1score + 1 else: print("sorry you got the answer wrong") You can remove the empty line between the if and the else block. But more importantly, instead of student1score = student1score + 1 you can write student1score += 1 with the same effect. 
At the end, we also see duplicated code regarding to the class numbers: if classnumber == "1": class1score = open("class1score.txt", "a") class1score.write("\n" + student1name + (" ") + str(student1score)) class1score.close() if classnumber == "2": class2score = open("class2score.txt", "a") class2score.write("\n" + student1name + (" ") + str(student1score)) class2score.close() if classnumber == "3": class3score = open("class3score.txt", "a") class3score.write("\n" + student1name + (" ") + str(student1score)) class3score.close() It should be quite obvious that the only difference in these statements is the filename. "Easy" fix: if classnumber == "1": filename = "class1score.txt" if classnumber == "2": filename = "class2score.txt" if classnumber == "3": filename = "class3score.txt" class_score = open(filename, "a") class_score.write("\n" + student1name + " " + str(student1score)) class_score.close() But the if is now still very suspicious. Let's replace those 3 if-statements with 1 statement: filename = "class" + classnumber + "score.txt" or filename = "class{}score.txt".format(classnumber) (my preference is the second). Finally: please use capitalization when writing strings for the end-user. So, instead of "what class are you in", write "What class are you in? ". (The final space is to make the question look even better when entering the data, but that's merely convention.) After making the changes, I ended up with import operator import random while True: classnumber = input("What class are you in? ") if classnumber in {"1", "2", "3"}: break print("That is not a valid class number.") student1name = input("Please enter your name: ") student1score = 0 for _ in range(10): question1no1 = random.randint(1,20) question1no2 = random.randint(1,20) operators = [ ('+', operator.add), ('-', operator.sub), ('*', operator.mul), ] op_sym, op_func = random.choice(operators) ans = op_func(question1no1, question1no2) question = "What is {} {} {}? 
".format(question1no1, op_sym, question1no2) student1answer1 = input(question) if str(student1answer1) == str(ans): print("Congrats, you got the answer right!") student1score += 1 else: print("Sorry, you got the answer wrong.") print("Your score was " + str(student1score) + " out of 10.") filename = "class{}score.txt".format(classnumber) class_score = open(filename, "a") class_score.write("\n" + student1name + " " + str(student1score)) class_score.close() Also, I'd like to change the last write to have the "\n" at the end instead of at the beginning, as it's convention to end a line with a "\n" instead of starting one with it. But that's up to you, as it would change the semantics of the script.
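The review's operator-table suggestion can be checked in isolation. This sketch pulls the question-building logic into a small function; the function name make_question and the explicit op_pair argument are my additions for testability, not part of the original program:

```python
import operator
import random

# Symbol-to-function table, as suggested in the review above.
OPERATORS = [
    ('+', operator.add),
    ('-', operator.sub),
    ('*', operator.mul),
]

def make_question(left, right, op_pair=None):
    """Return (question_text, expected_answer) for one quiz item.

    op_pair defaults to a random entry from OPERATORS; passing one
    explicitly makes the function deterministic, which is handy for tests.
    """
    if op_pair is None:
        op_pair = random.choice(OPERATORS)
    op_sym, op_func = op_pair
    question = "What is {} {} {}? ".format(left, op_sym, right)
    return question, op_func(left, right)
```

With this shape, adding (say) integer division is indeed one extra line in OPERATORS, and the scoring loop never needs to change.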
{ "domain": "codereview.stackexchange", "id": 18078, "tags": "python, python-3.x, file" }
Is temperature in vacuum zero?
Question: From the Wikipedia entry on Kinetic Theory: The temperature of an ideal monatomic gas is a measure of the average kinetic energy of its atoms. Now if I remove all the particles from the box shown below, will the temperature be zero? Answer: There's no temperature. If we use the definition "temperature is the average kinetic energy of the particles", then no particles means no temperature. At first sight this answer doesn't seem good enough, but if you wanted to calculate an "average spin" or "average charge", those parameters would likewise make no sense if there are no particles to calculate them from.
{ "domain": "physics.stackexchange", "id": 83921, "tags": "statistical-mechanics, temperature" }
Hawking radiation and reversibility
Question: It's often said that, as long as the information that fell into a black hole comes out eventually in the Hawking radiation (by whatever means), pure states remain pure rather than evolving into mixed states, and "the universe is safe for quantum mechanics." But that can't be the whole story! For quantum-mechanical reversibility doesn't merely say that information must eventually be retrievable, after $10^{70}$ years or whatever; but also that, if $U$ is an admissible transformation of a physical system, then $U^{-1}$ is also an admissible transformation. So, just like it must be consistent with reversible microlaws for smoke and ash to spontaneously reassemble into a book, it must also be consistent for a black hole to spontaneously "uncollapse" into a star, or into whatever configuration of ordinary matter could have collapsed to form the black hole in the first place. And this "white-hole uncollapse process" must be possible in exactly the same amount of time as the black-hole collapse process, rather than an astronomically longer time (as with Hawking radiation). In both cases, the explanation for why we never see these processes must be thermodynamic -- i.e., sure they're allowed, but they involve such a crazy decrease in entropy that they're exponentially suppressed. I get that. But I'm still confused about something, and here's my best attempt to crystallize my confusion: In order to explain how it could even be possible for information to come out of a black hole, physicists typically appeal to Hawking radiation, which provides a mechanism based on more-or-less understood quantum field theory in curved spacetime. (Granted, QFT also predicts that the radiation should be thermal! But because of AdS/CFT and so forth, today people seem pretty confident that the information, after hanging out near the event horizon, is carried away by the Hawking radiation in some not-yet-understood way.) 
However, suppose it's objected that a Hawking radiation process seems nothing whatsoever like the time-reverse of an ordinary black-hole formation process. Then the only response I know would be along the lines of, "well, do you believe that QM will survive unaltered in a future quantum theory of gravity, or don't you? If you do, then consider the unitary $U$ corresponding to a black-hole formation process, and invert it to get $U^{-1}$!" My question is: why couldn't people have made that same straightforward argument even before they knew anything about Hawking radiation? (Or did they make it?) More generally, even if Hawking radiation does carry away the infalling information, that still seems extremely far from implying full quantum-mechanical reversibility. So, how much does the existence of Hawking radiation really have to do with the case for the compatibility between quantum mechanics and black holes? Answer: As you said, the case of black holes is conceptually totally analogous to the burning books. In principle, the process is reversible, but the probability of the CPT-conjugated process (a more accurate symmetry than time reversal alone) is different from the original one because $$ \frac{Prob(A\to B)}{Prob(B^{CPT}\to A^{CPT})} \approx \exp(S_B-S_A ).$$ This is true because the probabilities of evolution between ensembles are obtained by summing over final states but averaging over initial states. The averaging differs from summing by the extra factor of $1/N = \exp(-S)$, and that's why the exponential of the entropy difference quantifies the past-future asymmetry of the evolution. At the qualitative level, a white hole is exactly as impossible in practice as a burning coal suddenly rearranging into a particular book. Quantitatively speaking, it's more impossible because the drop of entropy would be much greater: black holes have the greatest entropy among all localized or bound objects of the same total mass. 
However, the Hawking radiation isn't localized or bound and it actually has an even greater entropy – by a significant factor – than the black hole from which it evaporated. That's needed and that's true because even the Hawking evaporation process agrees with the second law of thermodynamics. At the level of classical general relativity, nothing prevents us from drawing a white hole spacetime. In fact, the spacetime for an eternal black hole is already perfectly time-reversal-symmetric. We still mostly call it a black hole but it's a "white hole" at the same moment. Such solutions don't correspond to the reality in which black holes always come from a lower-entropy initial state – because the initial state of the Universe couldn't have any black holes. So the real issue is the realistic diagrams for a star collapsing into a black hole which later evaporates. Such a diagram is clearly time-reversal-asymmetric. The entropy increases during the star collapse as well as during the Hawking radiation. You may flip the diagram upside down and you will get a picture that solves the equations of general relativity. However, it will heavily violate the second law of thermodynamics. Any consistent classical or quantum theory explains and guarantees the thermodynamic phenomena and laws microscopically, i.e. by statistical physics applied to its phase space or Hilbert space. That's true for burning books but that's true for theories containing black holes, too. So if one has a consistent microscopic quantum theory for this process – but the same comment would hold for a classical theory as well: your question has really nothing to do with quantum mechanics per se – then this theory must predict that the inverted processes that decrease entropy are exponentially unlikely. Whenever there is a specific model with well-defined microstates and a microscopic T or CPT symmetry, it's easy to prove the equation I started with. 
A genuine microscopic theory really establishes that the inverted processes (those that lower the total entropy) are possible but very unlikely. A classical theory of macroscopic matter however "averages over many atoms". For solids, liquids, and gases, this is manifested by time-reversal-asymmetric terms in the effective equations - diffusion, heat diffusion, friction, viscosity, all these things that slow things down, heat them up, and transfer heat from warmer bodies to cooler ones. The transfer of heat from warmer bodies to cooler ones may either occur by "direct contact" which really looks classical but it may also proceed via the black body radiation – which is a quantum process and may be found in the first semiclassical corrections to classical physics. The Hawking radiation is an example of the "transfer of heat from warmer to cooler bodies", too. The black hole has a nonzero temperature so it radiates energy away to the empty space whose temperature is zero. Again, it doesn't "realistically" occur in the opposite chronological order because the entropy would decrease and a cooler object would spontaneously transfer its heat to a warmer one. In an approximate macroscopic effective theory that incorporates the microscopic statistical phenomena collectively, much like friction terms in mechanics, those time-reversal-violating terms appear explicitly: they are replacements/results of some statistical physics calculations. In the exact microscopic theory, however, there are no explicit time-reversal-breaking terms. And indeed, according to the full microscopic theory – e.g. a consistent theory of quantum gravity – the entropy-lowering processes aren't strictly forbidden, they may just be calculated to be exponentially unlikely. The probability that we arrange the initial state of the black hole so that it will evolve into a star with some particular shape and composition is extremely tiny. 
It is hard to describe the state of the black hole microstates explicitly, but even in setups where we know them in principle, it's practically impossible to locate black hole microstates that have evolved from a recent star (or will evolve into a star soon, which is the same mathematical problem). Your $U^{-1}$ transformation undoubtedly exists in a consistent theory of quantum gravity – e.g. in AdS/CFT – but if you want the final state $U^{-1}|initial\rangle$ to have a lower entropy than the initial one, you must carefully cherry-pick the initial one and it's exponentially unlikely that you will be able to prepare such an initial state, whether it is experimental preparation or a theoretical one. For "realistically preparable" initial states, the final states will have a higher entropy. This is true everywhere in physics and has nothing specific in the context of quantum gravity with black holes. Let me also say that the "white hole" microstates exist but they're the same thing as the "black hole microstates". The reason why these microstates almost always behave as black holes and not white holes is the second law of thermodynamics once again: it's just very unlikely for them to evolve to a lower-entropy state (at least if we expect this entropy drop to be imminent: within a long enough, Poincaré recurrence time, such thing may occur at some point). That's true for burned books, too. A "white hole" is analogous to a "burned book that will conspire its atomic vibrations and rearrange itself into a nice and healthy book again". 
But macroscopically, such "books waiting to be revived" don't differ from other piles of ashes; that's the analogous claim to the claim that there is no visible difference between black hole and white hole microstates, and due to their "very likely" future evolution, the whole class should better be called "black hole microstates" and not "white hole microstates"; even the microstates that will drop entropy soon represent only a tiny fraction of this set. My main punch line is that at the level of general reversibility, there has never been any qualitative difference between black holes and other objects that are subject to thermodynamics and, which is related, there has never been (and there is not) any general incompatibility between the general principles of quantum mechanics, microscopic reversibility, and macroscopic irreversibility, whether black holes are present or not. The only "new" feature of black holes that sparked decades of effort and debate was causality. While a burning book may still transfer the information in both ways, the material inside the black hole should no longer be able to transfer the information about itself to infinity because it's equivalent to superluminal signals forbidden in relativity. However, we know today that the laws of causality aren't this strict in the presence of black holes and the information is leaked, so the qualitative features of a collapsing star and evaporating black hole are literally the same as in a book that is printed by diffusing ink and then burned.
{ "domain": "physics.stackexchange", "id": 59802, "tags": "quantum-mechanics, black-holes, hawking-radiation, reversibility, black-hole-thermodynamics" }
Constant Pressure Cylinder
Question: The solution to this states that "pressure of the gas is constant." What implies this? Is it because the piston was elevated then stopped? If the gas pushed the piston all the way up to where it stopped moving (and kept pushing), would the pressure be constant then? Thank you. Answer: For the pressure of the gas to be considered constant, the expansion would have to be carried out quasistatically, that is, very slowly, so that the pressure of the gas was only differentially greater than the external pressure at each stage of the expansion, or $$P_{gas}=P_{ext}+dP$$ In this way the pressure of the gas can be considered in equilibrium with the external pressure throughout the expansion. Hope this helps.
{ "domain": "physics.stackexchange", "id": 59773, "tags": "homework-and-exercises, thermodynamics" }
Why is synchrotron radiation from relativistic electrons low in energy?
Question: Why do populations of relativistic (high energy) electrons emitting synchrotron radiation emit at mostly radio wavelengths? The fact that they are high energy makes me think they would emit high energy photons. As a particle moves in the magnetic field of, say, an accretion disk, is it constantly emitting synchrotron radiation as it goes around? Answer: Synchrotron radiation is emitted by charged particles (mostly electrons) executing helical motion, accelerated by the Lorentz force exerted by the vector product of their velocity and the magnetic field. The frequency of the radiation depends on how fast the electrons orbit, which in turn depends on the magnetic field strength. The acceleration thus depends on their velocity and the strength of the magnetic field. For non-relativistic electrons, the frequency at which the electromagnetic radiation appears would simply be the orbital frequency, $$\omega = \frac{qB}{mc}\ ,$$ where $q$ and $m$ are the charge and mass of the electron and $B$ is the magnetic field strength. For relativistic electrons, this frequency becomes even smaller (by the Lorentz factor $\gamma$). If you work out what these frequencies are for the typical magnetic field strengths in galaxies, then they are at very, very long radio wavelengths and at very small frequencies, well below the plasma frequency of the interstellar medium. However, there are two effects that boost the spectrum back up to MHz, GHz, or in the case of some very strong magnetic fields (around pulsars, for example), even optical frequencies. First, the radiation is Doppler-shifted to higher frequencies when the electrons move towards the observer. Second, the radiation from a relativistic electron is beamed into a narrow cone in the forward direction, so that a distant observer would see a set of narrow, short pulses as the electron spirals around the field lines. 
The spectrum (Fourier transform) of this pulsed emission leads to power at much higher frequencies than the simple orbital frequency of the electron, but still not in the visible range unless the magnetic fields are very strong. Similar arguments would apply to synchrotron radiation from relativistic charged particles in orbit around a stellar-sized black hole. In this case, the relevant orbital frequency is the actual orbital frequency around the black hole. It is highest at the innermost stable circular orbit of a Schwarzschild black hole but is only $2200 (M/M_\odot)^{-1}$ Hz (i.e., just hundreds of Hz for typical black holes). So the same argument applies here: it is only the Doppler beaming effects that boost these frequencies up to observable radio wavelengths.
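To put a number on the answer's point about galactic fields, here is a quick back-of-the-envelope sketch in SI units; the ~5 microgauss field strength is an assumed, typical interstellar value, not a figure from the question:

```python
import math

# Electron charge and mass in SI units.
E_CHARGE = 1.602176634e-19     # C
M_ELECTRON = 9.1093837015e-31  # kg

def cyclotron_frequency_hz(b_tesla):
    """Orbital frequency nu = qB / (2 pi m) of a non-relativistic electron.

    This is the SI form of the answer's Gaussian-units omega = qB/(mc);
    for a relativistic electron the frequency is lower still, by gamma.
    """
    return E_CHARGE * b_tesla / (2 * math.pi * M_ELECTRON)

# A typical interstellar magnetic field is of order 5 microgauss = 5e-10 T.
nu = cyclotron_frequency_hz(5e-10)  # roughly 14 Hz, nowhere near radio bands
```

A result of order 10 Hz, far below the ISM plasma frequency, is exactly why the Doppler and beaming effects described above are needed to explain the observed MHz–GHz emission.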
{ "domain": "astronomy.stackexchange", "id": 6082, "tags": "radiation, accretion-discs, synchrotron-radiation" }
ros_canopen homing simultaneously or one-by-one?
Question: Hello, this question has been addressed in these questions as well (Q1, Q2); however, no answer arrived, therefore I'm asking as well. It is not clear from the ros_canopen wiki how the node executes the homing of multiple motors/joints. Do the motors execute the homing simultaneously or one-by-one? If one-by-one, then does it mean that the execution of the next homing procedure is not started until the previous one has finished? Also, is it possible to change the order of the sequence? I mean, what if I want to execute the homing like joint1-joint3-joint2-joint4-joint5? Would it be enough just to change the order in the yaml config file? Thank you in advance. Originally posted by akosodry on ROS Answers with karma: 121 on 2019-06-21 Post score: 0 Answer: Do the motors execute the homing simultaneously or one-by-one? One-by-one (canopen_motor_node, as-is). If one-by-one, then does it mean that the execution of the next homing procedure is not started until the previous one has finished? Exactly. Also, is it possible to change the order of the sequence? I mean, what if I want to execute the homing like joint1-joint3-joint2-joint4-joint5? Yes, just use list syntax to specify the overall order. Originally posted by Mathias Lüdtke with karma: 1596 on 2019-06-21 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 33237, "tags": "ros, ros-kinetic, ros-canopen" }
Couldn't find an AF_INET address for [hostname]
Question: When running ROS across multiple machines, the two machines can share a master, but when I run a node, an error like "Couldn't find an AF_INET address for [hostname]" appears, and I don't know why. When I run a node on machine A, it publishes a topic named "message", but I can't subscribe to "message" from a node on machine B, even though when I use "rostopic list" the topic named "message" is listed. Originally posted by cros on ROS Answers with karma: 3 on 2016-02-24 Post score: 0 Answer: Searching for your error message finds a few other users who have had the same problem and resolved it. Have you tried any of their solutions: http://answers.ros.org/question/163556/how-to-solve-couldnt-find-an-af_inet-address-for-problem/ http://answers.ros.org/question/206546/couldnt-find-an-af_inet-address-for-with-virtual-machine/ Originally posted by ahendrix with karma: 47576 on 2016-02-24 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 23885, "tags": "ros, multiple, machines" }
Equilibrium of rotating bodies - non-balancing vertical forces?
Question: I am slightly confused about this problem. We have a $6$ kg non-uniform rod $MN$ which is pivoted about $M$. There is a force of $40$ N applied at $N$ at an angle to the rod. The rod is said to be in equilibrium as a result of the $40$ N force. However, my question is: if we resolve vertically, we have the component of the $40$ N force and the weight of the rod. These will not be equal, so why does the rod not fall downwards? I know it has something to do with the pivot, but I can't see why the pivot will produce a vertical force. Answer: The rod is fixed to the pivot - hence it does not fall. The pivot here means something that keeps that end of the rod permanently in place, regardless of the forces exerted. It does this by exerting whatever force is necessary to counteract the other forces. If balancing the vertical forces leaves a net force of, say, $20$ N downwards, then the pivot produces a force of $20$ N upwards to counteract that. As an illustration, consider Farcher's example of a trapdoor. If you pull one edge of the trapdoor, it opens. If you keep pulling it vertically, eventually the trapdoor is vertical. Now you can pull very hard and the trapdoor still won't move. That's because the other edge of the trapdoor is fixed to the floor, and that edge produces an almost-arbitrarily large force to balance the force you are pulling with.
{ "domain": "physics.stackexchange", "id": 74320, "tags": "newtonian-mechanics, forces, rotational-dynamics, vectors, free-body-diagram" }
Proving that a language of Turing machine descriptions is/is not Turing recognizable
Question: How should one approach solving this question and others like it? Let $L$ be the set of strings $\langle M\rangle$ such that $M$ accepts all strings of even length and does not accept any strings of odd length. a) Is $L$ Turing-recognizable? Prove your answer. b) Is the complement of $L$ Turing-recognizable? Again prove your answer. Please have a look at my approach; is it correct? Since the string representation of the set of all Turing machines (say $S$) is countably infinite and recursively enumerable, there exists a Turing machine that accepts $S$. Now, if we choose the Turing machines M1, M2, ... from $S$, in the given order, such that all the chosen machines accept even-length strings and reject odd-length strings, then the set we get (say $T$) will also be countably infinite and recursively enumerable and will have a Turing machine which accepts it. That is why the language $T$ is Turing-recognizable. Similarly, the complement of $T$ is Turing-recognizable. @Rick if I use the following code : RL(<M>) = for n = 0, 1, ... for s = s0, ..., sn \\in the standard order on all strings run M on s for one move if M(s) = accept AND |s| is odd reject // accept as string in L complement if not rejected for any string then accept as a string in L This code will select all the $M$ which accept all strings of even length and do not accept any string of odd length. Now, since we have a membership algorithm for $L$ (as well as for $\overline L$), can we say that $L$ and $\overline L$ are recursive? Answer: The problem with your proposed algorithm is where you say ... choose the Turing machines M1, M2, ... from $S$, in the given order, such that all the chosen machines accept even-length strings and reject odd-length strings. How are you going to do this? If you could, then you would have answered your original question, i.e., you've fallen into the trap of assuming what you want to prove. In fact, your language $L$ isn't recursively enumerable. 
There are at least two ways to show that $L$ isn't r.e. I'll give one, but it requires that you be able to show that $L$ isn't recursive. Rice's theorem is an immediate way, but you don't know it yet, so you'll have to reduce from a known non-recursive language like the Halting language. That's not hard, but I won't give it now. Just accept for the moment that $L$ is not recursive. Suppose that we wrongly guessed that $L$ was recursive and we wanted to build a recognizer for it. One way is to simply dovetail all strings and test whether $M$ accepted any odd-length strings (so we'd know then if $M$ wasn't in $L$). This gives us a proposed recognizer for $L$: RL(<M>) = for n = 0, 1, ... for s = s0, ..., sn \\in the standard order on all strings run M on s for one move if M(s) = accept AND |s| is odd reject If $M$ accepts any odd-length string, sooner or later this program will find it and so will be able to recognize that $M$ is not in $L$. Of course this isn't what we wanted: we need to make a program that recognizes if a given $M$ is in $L$. However, all is not lost---simply by changing the "reject" to "accept" we would have made a recognizer for the complement, $\overline{L}$, so we've stumbled across the interesting fact that $\overline{L}$ is recognizable. So what, you might ask? Remember, if a language $A$ and its complement, $\overline{A}$, are both r.e., then $A$ must be recursive. We just showed that $\overline{L}$ is r.e., so if $L$ was also r.e., then we could conclude that $L$ was recursive. However, you've accepted that $L$ is not recursive, so we must conclude that $L$ is not r.e.
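The dovetailing idea behind RL can be illustrated with ordinary code. In this sketch a "machine" is modelled as a Python function run_machine(s, steps) that reports whether it accepts s within a given step budget (a stand-in for step-bounded TM simulation), and the bounded outer loop stands in for the real recognizer's infinite loop; all the names here are illustrative, not from the answer:

```python
from itertools import product

def strings_up_to(n):
    """Binary strings of length <= n, in the standard order."""
    for k in range(n + 1):
        for bits in product("01", repeat=k):
            yield "".join(bits)

def co_recognizer(run_machine, max_stage=50):
    """Dovetailed search for an odd-length accepted string.

    Returns True as soon as the machine is seen to accept an odd-length
    string (so <M> lies in the complement of L); returns None when the
    finite budget runs out, where a true recognizer would loop forever.
    """
    for n in range(max_stage):
        for s in strings_up_to(n):
            if len(s) % 2 == 1 and run_machine(s, n):
                return True
    return None

# A toy "machine" that accepts exactly the odd-length string "101":
# co_recognizer finds it at some finite stage, mirroring how RL
# eventually witnesses that M is not in L.
toy = lambda s, steps: s == "101"
```

Note the asymmetry the answer exploits: a "yes, M accepts an odd-length string" witness is found in finite time, but the absence of such a string can never be confirmed by this search, which is exactly why the same loop recognizes $\overline{L}$ but not $L$.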
{ "domain": "cs.stackexchange", "id": 5445, "tags": "computability, semi-decidability" }
Convert Float32 to C++ float variable
Question: Hi, I'm trying to convert a Float32 type std_msg into a float variable. Following is the code I have at the moment. void update_yaw(const std_msgs::Float32 &yaw) { std_msgs::Float32 imu; imu = *yaw; fromIMU::Yaw = float(imu.data); ROS_INFO("Yaw updated"); } But when I try to compile, the following error comes. /home/padmal/catkin_ws/src/ouster_example/ouster_ros/src/merger_node.cpp: In function ‘void update_yaw(const Float32&)’: /home/padmal/catkin_ws/src/ouster_example/ouster_ros/src/merger_node.cpp:89:11: error: no match for ‘operator*’ (operand type is ‘const Float32 {aka const std_msgs::Float32_<std::allocator<void> >}’) imu = *yaw; ^~~~ In file included from /usr/include/boost/config/no_tr1/complex.hpp:21:0, from /usr/include/boost/math/policies/error_handling.hpp:17, from /usr/include/boost/math/special_functions/round.hpp:14, from /opt/ros/melodic/include/ros/time.h:58, from /opt/ros/melodic/include/ros/ros.h:38, from /home/padmal/catkin_ws/src/ouster_example/ouster_ros/src/merger_node.cpp:1: /usr/include/c++/7/complex:404:5: note: candidate: template<class _Tp> std::complex<_Tp> std::operator*(const _Tp&, const std::complex<_Tp>&) operator*(const _Tp& __x, const complex<_Tp>& __y) ^~~~~~~~ /usr/include/c++/7/complex:404:5: note: template argument deduction/substitution failed: /home/padmal/catkin_ws/src/ouster_example/ouster_ros/src/merger_node.cpp:89:12: note: candidate expects 2 arguments, 1 provided imu = *yaw; ^~~ In file included from /usr/include/boost/config/no_tr1/complex.hpp:21:0, from /usr/include/boost/math/policies/error_handling.hpp:17, from /usr/include/boost/math/special_functions/round.hpp:14, from /opt/ros/melodic/include/ros/time.h:58, from /opt/ros/melodic/include/ros/ros.h:38, from /home/padmal/catkin_ws/src/ouster_example/ouster_ros/src/merger_node.cpp:1: /usr/include/c++/7/complex:395:5: note: candidate: template<class _Tp> std::complex<_Tp> std::operator*(const std::complex<_Tp>&, const _Tp&) operator*(const complex<_Tp>& 
__x, const _Tp& __y) ^~~~~~~~ /usr/include/c++/7/complex:395:5: note: template argument deduction/substitution failed: /home/padmal/catkin_ws/src/ouster_example/ouster_ros/src/merger_node.cpp:89:12: note: ‘const Float32 {aka const std_msgs::Float32_<std::allocator<void> >}’ is not derived from ‘const std::complex<_Tp>’ imu = *yaw; ^~~ In file included from /usr/include/boost/config/no_tr1/complex.hpp:21:0, from /usr/include/boost/math/policies/error_handling.hpp:17, from /usr/include/boost/math/special_functions/round.hpp:14, from /opt/ros/melodic/include/ros/time.h:58, from /opt/ros/melodic/include/ros/ros.h:38, from /home/padmal/catkin_ws/src/ouster_example/ouster_ros/src/merger_node.cpp:1: /usr/include/c++/7/complex:386:5: note: candidate: template<class _Tp> std::complex<_Tp> std::operator*(const std::complex<_Tp>&, const std::complex<_Tp>&) operator*(const complex<_Tp>& __x, const complex<_Tp>& __y) ^~~~~~~~ /usr/include/c++/7/complex:386:5: note: template argument deduction/substitution failed: /home/padmal/catkin_ws/src/ouster_example/ouster_ros/src/merger_node.cpp:89:12: note: ‘const Float32 {aka const std_msgs::Float32_<std::allocator<void> >}’ is not derived from ‘const std::complex<_Tp>’ imu = *yaw; ^~~ What could be the issue here? Thanks in advance. Originally posted by Padmal on ROS Answers with karma: 3 on 2020-08-04 Post score: 0 Answer: Your function signature shows you get a reference to yaw. It is not a pointer. So you don't need to dereference it. Simply imu = yaw should suffice. Originally posted by mgruhler with karma: 12390 on 2020-08-04 This answer was ACCEPTED on the original site Post score: 3
{ "domain": "robotics.stackexchange", "id": 35366, "tags": "ros-melodic, roscpp" }
Why does the definition of the reward function $r(s, a, s')$ involve the term $p(s' \mid s, a)$?
Question: Sutton and Barto define the state–action–next-state reward function, $r(s, a, s')$, as follows (equation 3.6, p. 49) $$ r(s, a, s^{\prime}) \doteq \mathbb{E}\left[R_{t} \mid S_{t-1}=s, A_{t-1}=a, S_{t}=s^{\prime}\right]=\sum_{r \in \mathcal{R}} r \frac{p(s^{\prime}, r \mid s, a )}{\color{red}{p(s^{\prime} \mid s, a)}} $$ Why is the term $p(s' \mid s, a)$ required in this definition? Shouldn't the correct formula be $\sum_{r \in \mathcal{R}} r p(s^{\prime}, r \mid s, a )$? Answer: The expectation of the reward after taking action $a$ in state $s$ and ending up in state $s'$ would simply be \begin{equation} r(s, a, s') = \sum_{r \in R} r \cdot p(r|s, a, s') \end{equation} The problem with this is that they do not define a probability distribution for rewards separately; they use the joint distribution $p(s', r|s, a)$, which represents the probability of ending up in state $s'$ with reward $r$ after taking action $a$ in state $s$. This probability can be separated into two parts using the product rule \begin{equation} p(s', r|s, a) = p(s'|s, a)\cdot p(r|s', s, a) \end{equation} which represents the probability of getting to state $s'$ from $(s, a)$, and then the probability of getting reward $r$ after ending up in $s'$. If we define the reward expectation through the joint distribution, we would have \begin{align} r(s, a, s') &= \sum_{r \in R} r \cdot p(s', r|s, a)\\ &= \sum_{r \in R} r \cdot p(s'|s, a) \cdot p(r|s', s, a) \end{align} but this would not be correct, since we have this extra $p(s'|s, a)$, so we divide everything by it to get an expression with only $p(r|s', s, a)$. So, in the end we have \begin{equation} r(s, a, s') = \sum_{r \in R} r \frac{p(r, s'|s, a)}{p(s'|s, a)} \end{equation}
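The divide-by-$p(s' \mid s, a)$ step can be sanity-checked numerically. This sketch builds a toy joint distribution $p(s', r \mid s, a)$ for a single $(s, a)$ pair (the probabilities are made up for illustration) and confirms that the formula above equals the conditional expectation of $r$ given $s'$:

```python
# Toy joint distribution p(s', r | s, a) for one fixed (s, a) pair,
# stored as {(next_state, reward): probability}. Numbers are made up.
joint = {
    ("s1", 1.0): 0.3,
    ("s1", 0.0): 0.3,
    ("s2", 5.0): 0.4,
}

def p_next(s_next):
    """Marginal p(s' | s, a) = sum over r of p(s', r | s, a)."""
    return sum(p for (sn, _), p in joint.items() if sn == s_next)

def r_sas(s_next):
    """r(s, a, s') = sum_r r * p(s', r | s, a) / p(s' | s, a)."""
    num = sum(r * p for (sn, r), p in joint.items() if sn == s_next)
    return num / p_next(s_next)

# Without the division, the "expectation" for s1 would be
# sum_r r * p(s1, r | s, a) = 0.3, weighted down by p(s1 | s, a) = 0.6;
# the true conditional expectation given s' = s1 is 0.3 / 0.6 = 0.5.
```

Dropping the denominator, as the question proposes, only coincides with the conditional expectation when $p(s' \mid s, a) = 1$, i.e. when the transition is deterministic.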
{ "domain": "ai.stackexchange", "id": 1859, "tags": "reinforcement-learning, definitions, markov-decision-process, reward-functions, sutton-barto" }
Assigning a value to a class variable
Question: I have tried the following code: SystemUser user = (SystemUser)Session["CurrentUser"]; Data.Configuration configuration = db.Configurations.Single(c => c.ID == user.ID); MailAddress from = new MailAddress("noreply@test.com", "Test"); if (configuration.EmailFromAddress != null) { from = new MailAddress(configuration.EmailFromAddress, "Test"); } Here I declare the from variable of the MailAddress class with a default value, and if configuration.EmailFromAddress exists, I am assigning that value to from. Is this following best practice? Here, the MailAddress constructor is called twice to assign the value. Is it good performance- and memory-wise? Another implementation is using a string variable: SystemUser user = (SystemUser)Session["CurrentUser"]; Data.Configuration configuration = db.Configurations.Single(c => c.ID == user.ID); string fromAddress = "noreply@test.com"; if (configuration.EmailFromAddress != null) { fromAddress = configuration.EmailFromAddress; } MailAddress from = new MailAddress(fromAddress, "Test"); Which one follows best practice? Answer: If you are sure that configuration.EmailFromAddress will always be valid or null, I'd suggest using the null-coalescing operator (??) SystemUser user = (SystemUser)Session["CurrentUser"]; Data.Configuration configuration = db.Configurations.Single(c => c.ID == user.ID); var fromAddress = configuration.EmailFromAddress ?? "noreply@test.com"; MailAddress from = new MailAddress(fromAddress, "Test"); In the case that configuration.EmailFromAddress can also be empty, I'd use the ternary operator SystemUser user = (SystemUser)Session["CurrentUser"]; Data.Configuration configuration = db.Configurations.Single(c => c.ID == user.ID); var fromAddress = String.IsNullOrEmpty(configuration.EmailFromAddress) ? "noreply@test.com" : configuration.EmailFromAddress; MailAddress from = new MailAddress(fromAddress, "Test");
{ "domain": "codereview.stackexchange", "id": 9994, "tags": "c#, comparative-review" }
Is there a difference between np.matrix(np.array([0,0])) and np.matrix([0,0])?
Question: I was reading this code, for implementing linear regression from scratch: # convert from data frames to numpy matrices X = np.matrix(X.values) y = np.matrix(y.values) theta = np.matrix(np.array([0,0])) When I came across this line: np.matrix(np.array([0,0])) I was wondering why the person didn't just write np.matrix([0,0]). I ran both in a Jupyter notebook and got the same output: theta = np.matrix([0,0]) theta2 = np.matrix(np.array([0,0])) print(theta,theta2,type(theta),type(theta2)) Output:[[0 0]] [[0 0]] <class 'numpy.matrix'> <class 'numpy.matrix'> Is there a difference between the two? Does the extra np.array part somehow add to the functionality of theta? Will the final code function properly if I replace the former with the latter? Thanks. Edit: Is this the right place to ask this question? I am new here... Answer: No, they're absolutely the same. In this case there is absolutely no difference apart from perhaps a trivial amount of processing time. This is all open source code so we can just read it: the relevant part of NumPy for us here is the matrix constructor (yes, np.matrix is a Python class under the hood). In a summary from the NumPy code we see: class matrix(N.ndarray): # ... def __new__(subtype, data, dtype=None, copy=True): # ... if isinstance(data, N.ndarray): if dtype is None: intype = data.dtype else: intype = N.dtype(dtype) new = data.view(subtype) if intype != data.dtype: return new.astype(intype) if copy: return new.copy() else: return new # ... arr = N.array(data, dtype=dtype, copy=copy) ndim = arr.ndim shape = arr.shape # some extra checks ret = N.ndarray.__new__(subtype, shape, arr.dtype, buffer=arr, order=order) return ret What we give as the data argument is literally what we give to np.matrix(). Therefore we can trace the two cases: np.matrix([0, 0]) The python interpreter builds two integers: 0 and 0. The python interpreter builds a list from the pointers to the two integers.
The python interpreter evaluates the matrix constructor with the list as data. The if in the constructor is not executed; instead an np.array is built from the list. Inside the array constructor data types are checked. The final array is returned (the second array constructor performs much less work because it is passed buffer=). np.matrix(np.array([0, 0])) The python interpreter builds two integers: 0 and 0. The python interpreter builds a list from the pointers to the two integers. The python interpreter evaluates the array constructor. The resulting array is passed as data to the matrix constructor, and the if is executed. Within the if the data type is taken from the existing array. The array is copied and returned. Both ways execute pretty much the same number of constructors and lines of code. One could argue that copying the array (the copy= argument) may be a slow operation. Yet, given the fact that to have enough data for array.copy() to be slow one would first need to construct a full python list of that size, the copy() time is negligible compared to the list construction. In other words, both methods need to construct the list - because python will always evaluate arguments before passing them - which is the slowest part of the execution of this code. As for the return value, they're absolutely and completely the same. Most of the code within the constructor summarized (and linked) above is to make sure that you get the same return if you give equivalent input. P.S. (Unrelated Note) If one starts reading data from a file (or any other external source) the picture changes. If one reads directly into an array without going through the python list phase, that method is bound to be much faster. The processing bottleneck is the python list; if one can avoid it, things will go faster.
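The claim that the two constructions are interchangeable is easy to verify directly (note that np.matrix is soft-deprecated in recent NumPy releases, though it still works):

```python
import numpy as np

theta1 = np.matrix([0, 0])
theta2 = np.matrix(np.array([0, 0]))

# Same class, same shape, same dtype, same contents -- the intermediate
# np.array changes nothing about the resulting object.
assert type(theta1) is np.matrix and type(theta2) is np.matrix
assert theta1.shape == theta2.shape == (1, 2)
assert theta1.dtype == theta2.dtype
assert (theta1 == theta2).all()
```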
{ "domain": "datascience.stackexchange", "id": 5297, "tags": "linear-regression, numpy, matrix" }
What explains the effect of water's path falling around a ball/curve?
Question: Disclaimer: I'm not a physicist. If you take an apple or a ball (a cylinder works too) and put it under a tap such that the water falls down the side (not the centre) of the sphere, you get water flow roughly like this: It seems to me that the fluid is sticking to the surface of the ball, and due to viscosity or whatnot the water holds itself together and creates this strange “following the direction of the ball” behaviour until the surface angles upward, at which point gravity overcomes the momentum plus this stickiness and the water falls. What do you call this kind of phenomenon? Does this specific case have a name? Has it ever been used for something practical? I expect the speed at which the water is falling also contributes, e.g. at terminal velocity it might just brush right past the apple rather than being curved. Is that right? Answer: What do you call this kind of phenomenon? Does this specific case have a name? Liquid adhesion ... Has it ever been used for something practical? ...is used in some gutters
{ "domain": "physics.stackexchange", "id": 9478, "tags": "gravity, fluid-dynamics" }
Commented Parser Combinators in Lisp-style C
Question: I've attempted to remedy the issues outlined in the answer to my previous question. I've added several hundred blank lines to better illustrate the grouping of functions and generally make things look less dense. I've trimmed all the lines to 80 columns except for one 83 character line in ppnarg.h which was inside the original author's comment, so I chose not to alter that. I've added forward declarations for all the static functions inside the .c files so all the static "helper functions" can be placed below the non-static function that uses them, so the implementation can be presented in a more top down fashion overall. I've added a description of every API function next to its declaration in the .h file, and comments in the .c files explaining design decisions that are important for understanding the implementation. github README.md omitted for size Makefile CFLAGS= -std=c99 -g -Wall -Wpedantic -Wextra -Wno-unused-function -Wno-unused-parameter -Wno-switch -Wno-return-type -Wunused-variable CFLAGS+= $(cflags) test : pc11test ./$< pc11test : pc11object.o pc11parser.o pc11io.o pc11test.o $(CC) $(CFLAGS) -o $@ $^ $(LDLIBS) pc11object.o : pc11object.[ch] pc11parser.o : pc11parser.[ch] pc11object.h pc11io.o : pc11io.[ch] pc11object.h pc11parser.h pc11test.o : pc11test.[ch] pc11object.h pc11parser.h pc11io.h clean : rm *.o pc11test.exe count : wc -l -c -L pc11*[ch] ppnarg.h cloc pc11*[ch] ppnarg.h ppnarg.h omitted for size pc11object.h #define PC11OBJECT_H #include <stdlib.h> #include <stdio.h> #if ! 
PPNARG_H #include "ppnarg.h" #endif /* Variant subtypes of object, and signatures for function object functions */ #define IS_THE_TARGET_OF_THE_HIDDEN_POINTER_ * typedef union object IS_THE_TARGET_OF_THE_HIDDEN_POINTER_ object; typedef object integer; typedef object list; typedef object symbol; typedef object string; typedef object boolean; typedef object suspension; typedef object parser; typedef object operator; typedef operator predicate; typedef operator binoperator; typedef object fSuspension( object env ); typedef object fParser( object env, list input ); typedef object fOperator( object env, object input ); typedef boolean fPredicate( object env, object input ); typedef object fBinOperator( object left, object right ); typedef enum { INVALID, INT, LIST, SYMBOL, STRING, VOID, SUSPENSION, PARSER, OPERATOR, END_TAGS } tag; enum object_symbol_codes { T, END_OBJECT_SYMBOLS }; struct integer { tag t; int i; }; struct list { tag t; object first, rest; }; struct symbol { tag t; int code; const char *printname; object data; }; struct string { tag t; char *str; int disposable; }; struct void_ { tag t; void *pointer; }; struct suspension { tag t; object env; fSuspension *f; const char *printname; }; struct parser { tag t; object env; fParser *f; const char *printname; }; struct operator { tag t; object env; fOperator *f; const char *printname; }; struct header { int mark; object next; int forward; }; union object { tag t; struct integer Int; struct list List; struct symbol Symbol; struct string String; struct void_ Void; struct suspension Suspension; struct parser Parser; struct operator Operator; struct header Header; }; /* Global true/false objects. */ extern object NIL_; /* .t == INVALID */ extern symbol T_; /* Determine if object is non-NULL and non-NIL. Will also convert a boolean T_ or NIL_ to an integer 1 or 0. 
*/ static int valid( object it ){ return it && it->t < END_TAGS && it->t != INVALID; } /* Constructors */ integer Int( int i ); boolean Boolean( int b ); string String( char *str, int disposable ); object Void( void *pointer ); /* List of one element */ list one( object it ); /* Join two elements togther. If rest is a list or NIL_, result is a list. */ list cons( object first, object rest ); /* Join N elements together in a list */ #define LIST(...) \ reduce( cons, PP_NARG(__VA_ARGS__), (object[]){ __VA_ARGS__ } ) /* Macros capture printnames automatically for these constructors */ #define Symbol( n ) \ Symbol_( n, #n, NIL_ ) symbol Symbol_( int code, const char *printname, object data ); #define Suspension( env, f ) \ Suspension_( env, f, __func__ ) suspension Suspension_( object env, fSuspension *f, const char *printname ); #define Parser( env, f ) \ Parser_( env, f, __func__ ) parser Parser_( object env, fParser *f, const char *printname ); #define Operator( env, f ) \ Operator_( env, f, #f ) operator Operator_( object env, fOperator *f, const char *printname ); /* Printing */ /* Print list with dot notation or any object */ void print( object a ); /* Print list with list notation or any object */ void print_list( object a ); /* Functions over lists */ /* car */ object first( list it ); /* cdr */ list rest( list it ); /* Length of list */ int length( list ls ); /* Force n elements from the front of (lazy?) list */ list take( int n, list it ); /* Skip ahead n elements in (lazy?) list */ list drop( int n, list it ); /* Index a (lazy?) list */ object nth( int n, list it ); /* Apply operator to (lazy?) object */ object apply( operator op, object it ); /* Produce lazy lists */ list infinite( object mother ); list chars_from_str( char *str ); list chars_from_file( FILE *file ); /* Lazy list adapters */ list ucs4_from_utf8( list o ); list utf8_from_ucs4( list o ); /* Maps and folds */ /* Transform each element of list with operator; yield new list. 
*/ list map( operator op, list it ); /* Fold right-to-left over list with f */ object collapse( fBinOperator *f, list it ); /* Fold right-to-left over array of objects with f */ object reduce( fBinOperator *f, int n, object *po ); /* Comparisons and Association Lists (Environments) */ /* Compare for equality. For symbols, just compare codes. */ boolean eq( object a, object b ); /* Call eq, but avoid the need to allocate a Symbol object */ boolean eq_symbol( int code, object b ); /* Return copy of start sharing end */ list append( list start, list end ); /* Prepend n (key . value) pairs to tail */ list env( list tail, int n, ... ); /* Return value associated with key */ object assoc( object key, list env ); /* Call assoc, but avoid the need to allocate a Symbol object */ object assoc_symbol( int code, list env ); /* Conversions */ /* Copy integers and strings into *str. modifies caller supplied pointer */ void fill_string( char **str, list it ); /* Convert integers and strings from list into a string */ string to_string( list ls ); /* Dynamically create a symbol object corresponding to printname s. Scans the list of allocations linearly to find a matching printname. Failing that, it allocates a new symbol code from the space [-2,-inf). */ symbol symbol_from_string( string s ); /* That one lone function without a category to group it in. */ /* Report (an analogue of) memory usage. By current measure, an allocation is 64 bytes, ie. 2x 32 byte union objects. 
*/ int count_allocations( void ); pc11object.c #define _BSD_SOURCE #include "pc11object.h" #include <stdarg.h> #include <string.h> static void print_listn( object a ); static int leading_ones( object byte ); static int mask_off( object byte, int m ); static fSuspension force_first; static fSuspension force_rest; static fSuspension force_apply; fSuspension infinite; static fSuspension force_chars_from_string; static fSuspension force_chars_from_file; static fSuspension force_ucs4_from_utf8; static fSuspension force_utf8_from_ucs4; fBinOperator map; fBinOperator eq; fBinOperator append; fBinOperator assoc; /* Helper macro for constructor functions. */ #define OBJECT(...) new_( (union object[]){{ __VA_ARGS__ }} ) /* Flags controlling print(). */ static int print_innards = 1; static int print_chars = 1; static int print_codes = 0; /* Define simple objects T_ and NIL_, the components of our boolean type. */ static union object nil_object = { .t=INVALID }; object NIL_ = & nil_object; object T_ = 1 + (union object[]){ {.Header={1}}, {.Symbol={SYMBOL, T, "T", & nil_object}} }; /* Allocation function is defined at the end of this file with its file scoped data protected from the vast majority of other functions here. */ static object new_( object prototype ); integer Int( int i ){ return OBJECT( .Int = { INT, i } ); } boolean Boolean( int b ){ return b ? 
T_ : NIL_; } string String( char *str, int disposable ){ return OBJECT( .String = { STRING, str, disposable } ); } object Void( void *pointer ){ return OBJECT( .Void = { VOID, pointer } ); } list one( object it ){ return cons( it, NIL_ ); } list cons( object first, object rest ){ return OBJECT( .List = { LIST, first, rest } ); } symbol Symbol_( int code, const char *printname, object data ){ return OBJECT( .Symbol = { SYMBOL, code, printname, data } ); } suspension Suspension_( object env, fSuspension *f, const char *printname ){ return OBJECT( .Suspension = { SUSPENSION, env, f, printname } ); } parser Parser_( object env, fParser *f, const char *printname ){ return OBJECT( .Parser = { PARSER, env, f, printname } ); } operator Operator_( object env, fOperator *f, const char *printname ){ return OBJECT( .Operator = { OPERATOR, env, f, printname } ); } void print( object a ){ switch( a ? a->t : 0 ){ default: printf( "() " ); break; case INT: printf( print_chars ? "'%c' " : "%d ", a->Int.i ); break; case LIST: printf( "(" ), print( a->List.first ), printf( "." ), print( a->List.rest ), printf( ")" ); break; case SUSPENSION: printf( "...(%s) ", a->Suspension.printname ); break; case PARSER: printf( "Parser(%s", a->Parser.printname ), (print_innards & ! a[-1].Header.forward) && (printf( ", " ), print( a->Parser.env ),0), printf( ") " ); break; case OPERATOR: printf( "Oper(%s", a->Operator.printname ), printf( ", " ), print( a->Operator.env ), printf( ") " ); break; case STRING: printf( "\"%s\" ", a->String.str ); break; case SYMBOL: if( print_codes ) printf( "%d:%s ", a->Symbol.code, a->Symbol.printname ); else printf( "%s ", a->Symbol.printname ); break; case VOID: printf( "VOID " ); break; } } void print_list( object a ){ switch( a ? a->t : 0 ){ default: print( a ); break; case LIST: printf( "(" ), print_list( first( a ) ), print_listn( rest( a ) ), printf( ") " ); break; } } static void print_listn( object a ){ if( ! 
valid( a ) ) return; switch( a->t ){ default: print( a ); break; case LIST: print_list( first( a ) ), print_listn( rest( a ) ); break; } } /* force_() executes a suspension function to instantiate and yield a value. It may unwrap many layers of suspended operations to shake off any laziness at the front of a list or resolve a lazy calculation down to its result. In order to simulate the feature of lazy evaluation that a lazy list will manifest its elements "in place", the resulting object from force_() must be overwritten over the representation of the suspension object to provide the illusion that the list magically manifests for all handles to that part of the list. Consequently, force_() is declared static to this file and it is exclusively used in the stereotyped form: *it = *force_( it ); Functions outside of this module requiring the forced execution of a potential suspension must use side effect of take() or drop(). Eg. drop( 1, it ) will transform a suspended calculation into its actual resulting value. If it is a lazy list, this will manifest the list node with a new suspension as the rest(). */ static object force_( object it ){ if( it->t != SUSPENSION ) return it; return force_( it->Suspension.f( it->Suspension.env ) ); } object first( list it ){ if( it->t == SUSPENSION ) return Suspension( it, force_first ); if( it->t != LIST ) return NIL_; return it->List.first; } static object force_first ( object it ){ *it = *force_( it ); return first( it ); } object rest( list it ){ if( it->t == SUSPENSION ) return Suspension( it, force_rest ); if( it->t != LIST ) return NIL_; return it->List.rest; } static object force_rest ( object it ){ *it = *force_( it ); return rest( it ); } int length( list ls ){ return valid( ls ) ? valid( first( ls ) ) + length( rest( ls ) ) : 0; } list take( int n, list it ){ if( n == 0 ) return NIL_; *it = *force_( it ); if( ! 
valid( it ) ) return NIL_; return cons( first( it ), take( n-1, rest( it ) ) ); } list drop( int n, list it ){ if( n == 0 ) return it; *it = *force_( it ); if( ! valid( it ) ) return NIL_; return drop( n-1, rest( it ) ); } object nth( int n, list it ){ return first( take( 1, drop( n-1, it ) ) ); } object apply( operator op, object it ){ if( it->t == SUSPENSION ) return Suspension( cons( op, it ), force_apply ); return op->Operator.f( op->Operator.env, it ); } static object force_apply( list env ){ operator op = first( env ); object it = rest( env ); *it = *force_( it ); return apply( op, it ); } list infinite( object mother ){ return cons( mother, Suspension( mother, infinite ) ); } list chars_from_str( char *str ){ if( ! str ) return NIL_; return Suspension( String( str, 0 ), force_chars_from_string ); } static list force_chars_from_string( string s ){ char *str = s->String.str; if( ! *str ) return one( Symbol( EOF ) ); return cons( Int( *str ), Suspension( String( str+1, 0 ), force_chars_from_string ) ); } list chars_from_file( FILE *file ){ if( ! file ) return NIL_; return Suspension( Void( file ), force_chars_from_file ); } static list force_chars_from_file( object file ){ FILE *f = file->Void.pointer; int c = fgetc( f ); if( c == EOF ) return one( Symbol( EOF ) ); return cons( Int( c ), Suspension( file, force_chars_from_file ) ); } /* UCS4 <=> UTF8 */ list ucs4_from_utf8( list input ){ if( ! input ) return NIL_; return Suspension( input, force_ucs4_from_utf8 ); } list utf8_from_ucs4( list input ){ if( ! 
input ) return NIL_; return Suspension( input, force_utf8_from_ucs4 ); } static list force_ucs4_from_utf8( list input ){ *input = *force_( input ); object byte; byte = first( input ), input = rest( input ); if( !valid(byte) ) return NIL_; if( eq_symbol( EOF, byte ) ) return input; int ones = leading_ones( byte ); int bits = mask_off( byte, ones ); int n = ones; while( n-- > 1 ){ *input = *force_( input ); byte = first( input ), input = rest( input ); if( eq_symbol( EOF, byte ) ) return input; bits = ( bits << 6 ) | ( byte->Int.i & 0x3f ); } if( bits < ((int[]){0,0,0x80,0x800,0x10000,0x110000,0x4000000})[ ones ] ) fprintf( stderr, "Overlength encoding in utf8 char.\n" ); return cons( Int( bits ), Suspension( input, force_ucs4_from_utf8 ) ); } static list force_utf8_from_ucs4( list input ){ *input = *force_( input ); object code = first( input ); if( eq_symbol( EOF, code ) ) return input; int x = code->Int.i; object next = Suspension( drop( 1, input ), force_utf8_from_ucs4 ); if( x <= 0x7f ) return cons( code, next ); if( x <= 0x7ff ) return LIST( Int( (x >> 6) | 0xc0 ), Int( (x & 0x3f) | 0x80 ), next ); if( x <= 0xffff ) return LIST( Int( (x >> 12) | 0xe0 ), Int( ( (x >> 6) & 0x3f ) | 0x80 ), Int( ( x & 0x3f ) | 0x80 ), next ); if( x <= 0x10ffff ) return LIST( Int( (x >> 18) | 0xf0 ), Int( ( (x >> 12) & 0x3f ) | 0x80 ), Int( ( (x >> 6) & 0x3f ) | 0x80 ), Int( ( x & 0x3f ) | 0x80 ), next ); if( x <= 0x3ffffff ) return LIST( Int( (x >> 24) | 0xf8 ), Int( ( (x >> 18) & 0x3f ) | 0x80 ), Int( ( (x >> 12) & 0x3f ) | 0x80 ), Int( ( (x >> 6) & 0x3f ) | 0x80 ), Int( ( x & 0x3f ) | 0x80 ), next ); if( x <= 0x3fffffff ) return LIST( Int( (x >> 30) | 0xfc ), Int( ( (x >> 24) & 0x3f ) | 0x80 ), Int( ( (x >> 18) & 0x3f ) | 0x80 ), Int( ( (x >> 12) & 0x3f ) | 0x80 ), Int( ( (x >> 6) & 0x3f ) | 0x80 ), Int( ( x & 0x3f ) | 0x80 ), next ); fprintf( stderr, "Invalid unicode code point in ucs4 char.\n" ); return next; } static int leading_ones( object byte ){ if( byte->t != INT ) 
return 0; int x = byte->Int.i; return x&0200 ? x&0100 ? x&040 ? x&020 ? x&010 ? x&4 ? 6 : 5 : 4 : 3 : 2 : 1 : 0; } static int mask_off( object byte, int m ){ if( byte->t != INT ) return 0; int x = byte->Int.i; return x & (m ? (1<<(8-m))-1 : -1); } list map( operator op, list it ){ if( ! valid( it ) ) return it; return cons( apply( op, first( it ) ), map( op, rest( it ) ) ); } object collapse( fBinOperator *f, list it ){ if( !valid( it ) ) return it; object right = collapse( f, rest( it ) ); if( !valid( right ) ) return first( it ); return f( first( it ), right ); } object reduce( fBinOperator *f, int n, object *po ){ return n==1 ? *po : f( *po, reduce( f, n-1, po+1 ) ); } boolean eq( object a, object b ){ return Boolean( !valid( a ) && !valid( b ) ? 1 : !valid( a ) || !valid( b ) ? 0 : a->t != b->t ? 0 : a->t == SYMBOL ? a->Symbol.code == b->Symbol.code : !memcmp( a, b, sizeof *a ) ? 1 : 0 ); } boolean eq_symbol( int code, object b ){ return eq( (union object[]){ {.Symbol = {SYMBOL, code, "", 0} } }, b ); } list append( list start, list end ){ if( ! valid( start ) ) return end; return cons( first( start ), append( rest( start ), end ) ); } list env( list tail, int n, ... ){ va_list v; va_start( v, n ); list r = tail; while( n-- ){ object a = va_arg( v, object ); object b = va_arg( v, object ); r = cons( cons( a, b ), r ); } va_end( v ); return r; } object assoc( object key, list b ){ if( !valid( b ) ) return NIL_; object pair = first( b ); if( valid( eq( key, first( pair ) ) ) ) return rest( pair ); else return assoc( key, rest( b ) ); } object assoc_symbol( int code, list b ){ return assoc( (union object[]){ {.Symbol = {SYMBOL, code, "", 0}} }, b ); } static int string_length( object it ){ switch( it ? it->t : 0 ){ default: return 0; case INT: return 1; case STRING: return strlen( it->String.str ); case LIST: return string_length( first( it ) ) + string_length( rest( it ) ); } } void fill_string( char **str, list it ){ switch( it ? 
it->t : 0 ){ default: return; case INT: *(*str)++ = it->Int.i; return; case STRING: strcpy( *str, it->String.str ); *str += strlen( it->String.str ); return; case LIST: fill_string( str, first( it ) ); fill_string( str, rest( it ) ); return; } } string to_string( list ls ){ char *str = calloc( 1 + string_length( ls ), 1 ); string s = OBJECT( .String = { STRING, str, 1 } ); fill_string( &str, ls ); return s; } /* The following functions are isolated to the bottom of this file so that their static variables are protected from all other functions in this file. */ /* Allocation of objects */ static list allocation_list = NULL; static object new_( object prototype ){ object record = calloc( 2, sizeof *record ); if( record ){ record[0] = (union object){ .Header = { 0, allocation_list } }; allocation_list = record; record[1] = *prototype; } return record + 1; } /* Construction of dynamic symbols */ static int next_symbol_code = -2; symbol symbol_from_string( string s ){ list ls = allocation_list; while( ls != NULL && valid( ls + 1 ) ){ if( ls[1].t == SYMBOL && strcmp( ls[1].Symbol.printname, s->String.str ) == 0 ){ return ls + 1; } ls = ls[0].Header.next; } return Symbol_( next_symbol_code--, strdup( s->String.str ), NIL_ ); } int count_allocations( void ){ list ls = allocation_list; int n = 0; while( ls != NULL && valid( ls + 1 ) ){ ++n; ls = ls->Header.next; } return n; } pc11parser.h #define PC11PARSER_H #if ! PC11OBJECT_H #include "pc11object.h" #endif enum parser_symbol_codes { VALUE = END_OBJECT_SYMBOLS, OK, FAIL, SATISFY_PRED, EITHER_P, EITHER_Q, SEQUENCE_P, SEQUENCE_Q, SEQUENCE_OP, BIND_P, BIND_OP, INTO_P, INTO_ID, INTO_Q, REGEX_ATOM, PROBE_P, PROBE_MODE, EBNF_SEQ, EBNF_ANY, EBNF_EPSILON, EBNF_MAYBE, EBNF_MANY, END_PARSER_SYMBOLS }; /* Parse the input using parser p. */ list parse( parser p, list input ); /* Check result from parse(). */ int is_ok( list result ); int not_ok( list result ); /* Return OK or FAIL result. 
*/ parser succeeds( list result ); parser fails( list errormsg ); /* Emit debugging output from p. Print on ok iff mode&1; print not ok iff mode&2. */ parser probe( parser p, int mode ); /* The basic (leaf) parser. */ parser satisfy( predicate pred ); /* Simple parsers built with satisfy(). */ parser alpha( void ); parser upper( void ); parser lower( void ); parser digit( void ); parser literal( object example ); parser chr( int c ); parser str( char *s ); parser anyof( char *s ); parser noneof( char *s ); /* Accept any single element off the input list. */ parser item( void ); /* Choice ("OR" branches) */ /* Combine 2 parsers into a choice. */ parser either( parser p, parser q ); /* Combine N parsers into a choice. */ #define ANY(...) \ reduce( either, \ PP_NARG(__VA_ARGS__), \ (object[]){ __VA_ARGS__ } ) /* Sequence ("AND" branches) */ /* Combine 2 parsers into a sequence, using op to merge the value portions of results. */ parser sequence( parser p, parser q, binoperator op ); /* Sequence 2 parsers but drop result from first. */ parser xthen( parser x, parser q ); /* Sequence 2 parsers but drop result from second. */ parser thenx( parser p, parser x ); /* Sequence 2 parsers and concatenate results. */ parser then( parser p, parser q ); /* Sequence N parsers and concatenate results. */ #define SEQ(...) \ reduce( then, \ PP_NARG(__VA_ARGS__), \ (object[]){ __VA_ARGS__ } ) /* Sequence 2 parsers, but pass result from first as a (id.value) pair in second's env. */ parser into( parser p, object id, parser q ); /* Repetitions */ /* Accept 0 or 1 successful results from p. */ parser maybe( parser p ); /* Accept 0 or more successful results from p. */ parser many( parser p ); /* Accept 1 or more successful results from p. */ parser some( parser p ); /* Transform of values */ /* Process succesful result from p by transforming the value portion with op. 
*/ parser bind( parser p, operator op ); /* Building recursive parsers */ /* Create an empty parser, useful for building loops. A forward declaration of a parser. */ parser forward( void ); /* Compilers */ /* Compile a regular expression into a parser. */ // E->T ('|' T)* // T->F* // F->A ('*' | '+' | '?')? // A->'.' | '('E')' | C // C->S|L|P // S->'\' ('.' | '|' | '(' | ')' | '[' | ']' | '/' ) // L->'[' '^'? ']'? [^]]* ']' // P->Plain char parser regex( char *re ); /* Compile a block of EBNF definitions into a list of (symbol.parser) pairs. */ // D->N '=' E ';' // N->name // E->T ('|' T)* // T->F* // F->R | N | '[' E ']' | '{' E '}' | '(' E ')' | '/' regex '/' // R->'"' [^"]* '"' | "'" [^']* "'" list ebnf( char *productions, list supplements, list handlers ); pc11parser.c #include "pc11parser.h" #include <ctype.h> #include <string.h> static fParser success; static fParser fail; static fParser parse_satisfy; static fPredicate is_upper; static fPredicate is_alpha; static fPredicate is_lower; static fPredicate is_digit; static fPredicate is_literal; static fPredicate is_range; static fPredicate is_anyof; static fPredicate is_noneof; static fPredicate always_true; static fParser parse_either; fBinOperator either; static fParser parse_sequence; static fBinOperator concat; fBinOperator then; static fBinOperator left; static fBinOperator right; fBinOperator xthen; fBinOperator thenx; static fParser parse_bind; static fParser parse_into; static fParser parse_probe; static fOperator apply_meta; static fOperator on_dot; static fOperator on_chr; static fOperator on_meta; static fOperator on_class; static fOperator on_term; static fOperator on_expr; static fOperator stringify; static fOperator symbolize; static fOperator encapsulate; static fOperator make_matcher; static fOperator make_sequence; static fOperator make_any; static fOperator make_maybe; static fOperator make_many; static fOperator define_forward; static fOperator compile_bnf; static fOperator compile_rhs; static 
fOperator define_parser; static fOperator wrap_handler; /* Execute a parser upon an input stream by invoking its function, supplying its env. */ list parse( parser p, list input ){ if( !valid( p ) || !valid( input ) || p->t != PARSER ) return cons( Symbol(FAIL), cons( String("parse() validity check failed",0), input ) ); return p->Parser.f( p->Parser.env, input ); } /* The result structure from a parser is either ( OK . ( <value> . <remaining input ) ) or ( FAIL . ( <error message> . <remaining input> ) ) */ static object success( object result, list input ){ return cons( Symbol(OK), cons( result, input ) ); } static object fail( object errormsg, list input ){ return cons( Symbol(FAIL), cons( errormsg, input ) ); } int is_ok( list result ){ return valid( eq_symbol( OK, first( result ) ) ); } int not_ok( list result ){ return ! is_ok( result ); } parser succeeds( list result ){ return Parser( result, success ); } parser fails( list errormsg ){ return Parser( errormsg, fail ); } /* For all of the parsers after this point, the associated parse_*() function should be considered the "lambda" or "closure" function for the constructed parser object. C, of course, doesn't have lambdas. Hence these closely associated functions are close by and have related names. These parse_* functions receive an association list of (symbol.value) pairs in their env parameter, and they extract their needed values using assoc_symbol(). */ /* The satisfy(pred) parser is the basis for all "leaf" parsers. Importantly, it forces the first element off of the (lazy?) input list. Therefore, all other functions that operate upon this result of this parser need not fuss with suspensions at all. */ parser satisfy( predicate pred ){ return Parser( env( NIL_, 1, Symbol(SATISFY_PRED), pred ), parse_satisfy ); } static list parse_satisfy( object env, list input ){ predicate pred = assoc_symbol( SATISFY_PRED, env ); drop( 1, input ); object item = first( input ); if( ! 
valid( item ) ) return fail( String( "empty input", 0 ), input ); return valid( apply( pred, item ) ) ? success( item, rest( input ) ) : fail( LIST( String( "predicate not satisfied", 0 ), pred, NIL_ ), input ); } parser item( void ){ return satisfy( Operator( NIL_, always_true ) ); } boolean always_true( object v, object it ){ return T_; } parser alpha( void ){ return satisfy( Operator( NIL_, is_alpha ) ); } static boolean is_alpha( object v, object it ){ return Boolean( it->t == INT && isalpha( it->Int.i ) ); } parser upper( void ){ return satisfy( Operator( NIL_, is_upper ) ); } static boolean is_upper( object v, object it ){ return Boolean( it->t == INT && isupper( it->Int.i ) ); } parser lower( void ){ return satisfy( Operator( NIL_, is_lower ) ); } static boolean is_lower( object v, object it ){ return Boolean( it->t == INT && islower( it->Int.i ) ); } parser digit( void ){ return satisfy( Operator( NIL_, is_digit ) ); } static boolean is_digit( object v, object it ){ return Boolean( it->t == INT && isdigit( it->Int.i ) ); } parser literal( object example ){ return satisfy( Operator( example, is_literal ) ); } static boolean is_literal( object example, object it ){ return eq( example, it ); } parser chr( int c ){ return literal( Int( c ) ); } parser str( char *s ){ return !*s ? succeeds( NIL_ ) : !s[1] ? 
chr( *s ) : then( chr( *s ), str( s+1 ) ); } parser range( int lo, int hi ){ return satisfy( Operator( cons( Int( lo ), Int( hi ) ), is_range ) ); } static boolean is_range( object bounds, object it ){ int lo = first( bounds )->Int.i, hi = rest( bounds )->Int.i; return Boolean( it->t == INT && lo <= it->Int.i && it->Int.i <= hi ); } parser anyof( char *s ){ return satisfy( Operator( String( s, 0 ), is_anyof ) ); } static boolean is_anyof( object set, object it ){ return Boolean( it->t == INT && strchr( set->String.str, it->Int.i ) ); } parser noneof( char *s ){ return satisfy( Operator( String( s, 0 ), is_noneof ) ); } static boolean is_noneof( object set, object it ){ return Boolean( it->t == INT && ! strchr( set->String.str, it->Int.i ) ); } /* The choice combinator. Result is success if either p or q succeed. Short circuits q if p was successful. Not lazy. */ parser either( parser p, parser q ){ return Parser( env( NIL_, 2, Symbol(EITHER_Q), q, Symbol(EITHER_P), p ), parse_either ); } static object parse_either( object env, list input ){ parser p = assoc_symbol( EITHER_P, env ); object result = parse( p, input ); if( is_ok( result ) ) return result; parser q = assoc_symbol( EITHER_Q, env ); return parse( q, input ); } /* Sequence 2 parsers and join the 2 results using a binary operator. By parameterizing this "joining" operator, this parser supports then(), thenx() and xthen() while being completely agnostic as to how joining might or might not be done. 
*/ parser sequence( parser p, parser q, binoperator op ){ return Parser( env( NIL_, 3, Symbol(SEQUENCE_OP), op, Symbol(SEQUENCE_Q), q, Symbol(SEQUENCE_P), p ), parse_sequence ); } static object parse_sequence( object env, list input ){ parser p = assoc_symbol( SEQUENCE_P, env ); object p_result = parse( p, input ); if( not_ok( p_result ) ) return p_result; parser q = assoc_symbol( SEQUENCE_Q, env ); list remainder = rest( rest( p_result ) ); object q_result = parse( q, remainder ); if( not_ok( q_result ) ){ object q_error = first( rest( q_result ) ); object q_remainder = rest( rest( q_result ) ); return fail( LIST( q_error, String( "after", 0), first( rest( p_result ) ), NIL_ ), q_remainder ); } binoperator op = assoc_symbol( SEQUENCE_OP, env ); return success( op->Operator.f( first( rest( p_result ) ), first( rest( q_result ) ) ), rest( rest( q_result ) ) ); } parser then( parser p, parser q ){ return sequence( p, q, Operator( NIL_, concat ) ); } parser xthen( parser x, parser q ){ return sequence( x, q, Operator( NIL_, right ) ); } parser thenx( parser p, parser x ){ return sequence( p, x, Operator( NIL_, left ) ); } /* Some hacking and heuristics to massage 2 objects together into a list, taking care if either is already a list */ static object concat( object l, object r ){ if( ! valid( l ) ) return r; if( r->t == LIST && valid( eq_symbol( VALUE, first( first( r ) ) ) ) && ! valid( rest( r ) ) && ! valid( rest( first( r ) ) ) ) return l; switch( l->t ){ case LIST: return cons( first( l ), concat( rest( l ), r ) ); default: return cons( l, r ); } } static object right( object l, object r ){ return r; } static object left( object l, object r ){ return l; } /* Sequence parsers p and q, but define the value portion of the result of p (if successful) as (id.value) in the env of q. 
*/ parser into( parser p, object id, parser q ){ return Parser( env( NIL_, 3, Symbol(INTO_P), p, Symbol(INTO_ID), id, Symbol(INTO_Q), q ), parse_into ); } static object parse_into( object v, list input ){ parser p = assoc_symbol( INTO_P, v ); object p_result = parse( p, input ); if( not_ok( p_result ) ) return p_result; object id = assoc_symbol( INTO_ID, v ); parser q = assoc_symbol( INTO_Q, v ); object q_result = q->Parser.f( env( q->Parser.env, 1, id, first( rest( p_result ) ) ), rest( rest( p_result ) ) ); if( not_ok( q_result ) ){ object q_error = first( rest( q_result ) ); object q_remainder = rest( rest( q_result ) ); return fail( LIST( q_error, String( "after", 0), first( rest( p_result ) ), NIL_ ), q_remainder ); } return q_result; } /* If the parser p succeeds, great! return its result. If not, who cares?! call it a success, but give a nothing value. If this parser is composed using then(), the merging of values will simply ignore this nothing value. It just disappears. If you bind() this parser to an operator, the operator can test if valid( input ) to tell whether p succeeded (and yielded a value) or not (which yielded NIL). */ parser maybe( parser p ){ return either( p, succeeds( NIL_ ) ); } /* Uses a forward() to build an infinite sequence of maybe(p). */ parser many( parser p ){ parser q = forward(); *q = *maybe( then( p, q ) ); return q; } parser some( parser p ){ return then( p, many( p ) ); } /* Bind transforms a succesful result from the child parser through the operator. The operator's environment is supplemented with the environment passed to bind itself. 
*/ parser bind( parser p, operator op ){ return Parser( env( NIL_, 2, Symbol(BIND_P), p, Symbol(BIND_OP), op ), parse_bind ); } static object parse_bind( object env, list input ){ parser p = assoc_symbol( BIND_P, env ); operator op = assoc_symbol( BIND_OP, env ); object result = parse( p, input ); if( not_ok( result ) ) return result; object payload = rest( result ), value = first( payload ), remainder = rest( payload ); return success( apply( (union object[]){{.Operator={ OPERATOR, append(op->Operator.env, env), op->Operator.f, op->Operator.printname }}}, value ), remainder ); } /* Construct a forwarding parser to aid building of loops. This parser can be composed with other parsers. Later, the higher level composed parser can be copied over this object to create the point of recursion in the parser graph. Remembers the fact that it was created as a forward by storing a flag in the hidden allocation record for the parser. This flag is not altered by overwriting the parser's normal union object. */ parser forward( void ){ parser p = Parser( 0, 0 ); p[-1].Header.forward = 1; return p; } parser probe( parser p, int mode ){ return Parser( env( NIL_, 2, Symbol(PROBE_MODE), Int( mode ), Symbol(PROBE_P), p ), parse_probe ); } static object parse_probe( object env, object input ){ parser p = assoc_symbol( PROBE_P, env ); int mode = assoc_symbol( PROBE_MODE, env )->Int.i; object result = parse( p, input ); if( is_ok( result ) && mode&1 ) print( result ), puts(""); else if( not_ok( result ) && mode&2 ) print_list( result ), puts(""); return result; } /* Regex compiler */ static parser regex_grammar( void ); static parser regex_parser; parser regex( char *re ){ if( !regex_parser ) regex_parser = regex_grammar(); object result = parse( regex_parser, chars_from_str( re ) ); if( not_ok( result ) ) return result; return first( rest( result ) ); } #define META "*+?" 
#define SPECIAL META ".|()[]/" static parser regex_grammar( void ){ parser dot = bind( chr('.'), Operator( NIL_, on_dot ) ); parser meta = anyof( META ); parser escape = xthen( chr('\\'), anyof( SPECIAL "\\" ) ); parser class = xthen( chr('['), thenx( SEQ( maybe( chr('^') ), maybe( chr(']') ), many( noneof( "]" ) ) ), chr(']') ) ); parser character = ANY( bind( escape, Operator( NIL_, on_chr ) ), bind( class, Operator( NIL_, on_class ) ), bind( noneof( SPECIAL ), Operator( NIL_, on_chr ) ) ); parser expr = forward(); { parser atom = ANY( dot, xthen( chr('('), thenx( expr, chr(')') ) ), character ); parser factor = into( atom, Symbol(REGEX_ATOM), bind( maybe( meta ), Operator( NIL_, on_meta ) ) ); parser term = bind( many( factor ), Operator( NIL_, on_term ) ); *expr = *bind( then( term, many( xthen( chr('|'), term ) ) ), Operator( NIL_, on_expr ) ); } return expr; } /* syntax directed compilation to parser */ static parser apply_meta( parser a, object it ){ switch( it->Int.i ){ default: return a; case '*': return many( a ); case '+': return some( a ); case '?': return maybe( a ); } } static parser on_dot( object v, object it ){ return item(); } static parser on_chr( object v, object it ){ return literal( it ); } static parser on_meta( object v, object it ){ parser atom = assoc_symbol( REGEX_ATOM, v ); if( it->t == LIST && valid( eq_symbol( VALUE, first( first( it ) ) ) ) && ! valid( rest( it ) ) && ! valid( rest( rest( it ) ) ) ) return atom; return apply_meta( atom, it ); } static parser on_class( object v, object it ){ if( first( it )->Int.i == '^' ) return satisfy( Operator( to_string( rest( it ) ), is_noneof ) ); return satisfy( Operator( to_string( it ), is_anyof ) ); } static parser on_term( object v, object it ){ if( ! valid( it ) ) return NIL_; if( it->t == LIST && ! valid( rest( it ) ) ) it = first( it ); if( it->t == PARSER ) return it; return collapse( then, it ); } static parser on_expr( object v, object it ){ if( it->t == LIST && ! 
valid( rest( it ) ) ) it = first( it ); if( it->t == PARSER ) return it; return collapse( either, it ); } /* EBNF compiler */ static parser ebnf_grammar( void ); /* Compile a block of EBNF definitions into an association list of (symbol.parser) pairs. Accepts an association list of supplemental parsers for any syntactic constructs that are easier to build outside of the EBNF syntax. Accepts an association list of operators to bind the results of any named parser from the EBNF block or the supplements. */ list ebnf( char *productions, list supplements, list handlers ){ static parser ebnf_parser; if( !ebnf_parser ) ebnf_parser = ebnf_grammar(); object result = parse( ebnf_parser, chars_from_str( productions ) ); if( not_ok( result ) ) return result; object payload = first( rest( result ) ); list defs = append( payload, env( supplements, 1, Symbol(EBNF_EPSILON), succeeds(NIL_) ) ); list forwards = map( Operator( NIL_, define_forward ), defs ); list parsers = map( Operator( forwards, compile_rhs ), defs ); list final = map( Operator( forwards, define_parser ), parsers ); map( Operator( forwards, wrap_handler ), handlers ); return final; } static parser ebnf_grammar( void ){ if( !regex_parser ) regex_parser = regex_grammar(); parser spaces = many( anyof( " \t\n" ) ); parser defining_symbol = thenx( chr( '=' ), spaces ); parser choice_symbol = thenx( chr( '|' ), spaces ); parser terminating_symbol = thenx( chr( ';' ), spaces ); parser name = some( either( anyof( "-_" ), alpha() ) ); parser identifier = thenx( name, spaces ); parser terminal = bind( thenx( either( thenx( xthen( chr( '"'), many( noneof("\"") ) ), chr( '"') ), thenx( xthen( chr('\''), many( noneof( "'") ) ), chr('\'') ) ), spaces ), Operator( NIL_, make_matcher ) ); parser symb = bind( identifier, Operator( NIL_, symbolize ) ); parser nonterminal = symb; parser expr = forward(); { parser factor = ANY( terminal, nonterminal, bind( xthen( then( chr( '[' ), spaces ), thenx( expr, then( chr( ']' ), spaces ) ) 
), Operator( NIL_, make_maybe ) ), bind( xthen( then( chr( '{' ), spaces ), thenx( expr, then( chr( '}' ), spaces ) ) ), Operator( NIL_, make_many ) ), bind( xthen( then( chr( '(' ), spaces ), thenx( expr, then( chr( ')' ), spaces ) ) ), Operator( NIL_, encapsulate ) ), bind( xthen( chr( '/' ), thenx( regex_parser, chr( '/' ) ) ), Operator( NIL_, encapsulate ) ) ); parser term = bind( many( factor ), Operator( NIL_, make_sequence ) ); *expr = *bind( then( term, many( xthen( choice_symbol, term ) ) ), Operator( NIL_, make_any ) ); }; parser definition = bind( then( symb, xthen( defining_symbol, thenx( expr, terminating_symbol ) ) ), Operator( NIL_, encapsulate) ); return some( definition ); } /* helpers */ static string stringify( object env, object input ){ return to_string( input ); } static symbol symbolize( object env, object input ){ return symbol_from_string( to_string( input ) ); } static list encapsulate( object env, object input ){ return one( input ); } /* syntax directed translation to list form */ static parser make_matcher( object env, object input ){ return str( to_string( input )->String.str ); } static list make_sequence( object env, object input ){ if( length( input ) == 0 ) return Symbol(EBNF_EPSILON); if( length( input ) < 2 ) return input; return one( cons( Symbol(EBNF_SEQ), input ) ); } static list make_any( object env, object input ){ if( length( input ) < 2 ) return input; return one( cons( Symbol(EBNF_ANY), input ) ); } static list make_maybe( object env, object input ){ return one( cons( Symbol(EBNF_MAYBE), input ) ); } static list make_many( object env, object input ){ return one( cons( Symbol(EBNF_MANY), input ) ); } /* stages of constructing the parsers from list form */ static list define_forward( object env, object it ){ if( rest( it )->t == PARSER ) return it; return cons( first( it ), forward() ); } static parser compile_bnf( object env, object it ){ operator self = (union object[]){{.Operator={OPERATOR,env,compile_bnf}}}; switch( 
it->t ){ default: return it; case SYMBOL: { object ob = assoc( it, env ); return valid( ob ) ? ob : it; } case LIST: { object f = first( it ); if( valid( eq_symbol( EBNF_SEQ, f ) ) ) return collapse( then, map( self, rest( it ) ) ); if( valid( eq_symbol( EBNF_ANY, f ) ) ) return collapse( either, map( self, rest( it ) ) ); if( valid( eq_symbol( EBNF_MANY, f ) ) ) return many( map( self, rest( it ) ) ); if( valid( eq_symbol( EBNF_MAYBE, f ) ) ) return maybe( map( self, rest( it ) ) ); if( length( it ) == 1 ) return compile_bnf( env, f ); return map( self, it ); } } } static list compile_rhs( object env, object it ){ if( rest( it )->t == PARSER ) return it; object result = cons( first( it ), map( (union object[]){{.Operator={OPERATOR,env,compile_bnf}}}, rest( it ) ) ); return result; } static list define_parser( object env, object it ){ object lhs = assoc( first( it ), env ); if( valid( lhs ) && lhs->t == PARSER && lhs->Parser.f == NULL ){ object rhs = rest( it ); if( rhs->t == LIST ) rhs = first( rhs ); *lhs = *rhs; } return it; } static list wrap_handler( object env, object it ){ object lhs = assoc( first( it ), env ); if( valid( lhs ) && lhs->t == PARSER ){ object op = rest( it ); parser copy = Parser( 0, 0 ); *copy = *lhs; *lhs = *bind( copy, op ); } return it; } pc11io.h #define PC11IO_H #if ! PC11PARSER_H #include "pc11parser.h" #endif enum io_symbol_codes { ARGS = END_PARSER_SYMBOLS, END_IO_SYMBOLS }; int pprintf( char const *fmt, ... ); int pscanf( char const *fmt, ... ); pc11io.c code omitted for size pc11test.h #define PC11TEST_H #if ! 
PC11IO_H #include "pc11io.h" #endif int main( void ); pc11test.c #include <ctype.h> #include "pc11test.h" enum test_symbol_codes { TEST = END_IO_SYMBOLS, DIGIT, UPPER, NAME, NUMBER, EOL, SP, postal_address, name_part, street_address, street_name, zip_part, END_TEST_SYMBOLS }; static int test_basics(); static int test_parsers(); static int test_regex(); static int test_ebnf(); static int test_io(); int main( void ){ return 0 || test_basics() || test_parsers() || test_regex() || test_ebnf() || test_io() ; } static fOperator to_upper; static integer to_upper( object env, integer it ){ return Int( toupper( it->Int.i ) ); } static int test_basics(){ puts( __func__ ); list ch = chars_from_str( "abcdef" ); print( ch ), puts(""); print_list( ch ), puts(""); integer a = apply( Operator( NIL_, to_upper ), first( ch ) ); print( a ), puts(""); drop( 1, a ); print( a ), puts(""); drop( 6, ch ); print( ch ), puts(""); print_list( ch ), puts(""); drop( 7, ch ); print( ch ), puts(""); print_list( ch ), puts(""); puts(""); list xs = infinite( Int('x') ); print_list( xs ), puts(""); drop( 3, xs ); print_list( xs ), puts(""); puts(""); return 0; } static int test_parsers(){ puts( __func__ ); list ch = chars_from_str( "a b c d 1 2 3 4" ); parser p = succeeds( Int('*') ); print_list( parse( p, ch ) ), puts(""); parser q = fails( String("Do you want a cookie?",0) ); print_list( parse( q, ch ) ), puts(""); parser r = item(); print_list( parse( r, ch ) ), puts(""); parser s = either( alpha(), item() ); print_list( parse( s, ch ) ), puts(""); parser t = literal( Int('a') ); print_list( parse( t, ch ) ), puts(""); puts(""); return 0; } static int test_regex(){ puts( __func__ ); parser a = regex( "." ); print_list( a ), puts(""); print_list( parse( a, chars_from_str( "a" ) ) ), puts(""); print_list( parse( a, chars_from_str( "." ) ) ), puts(""); print_list( parse( a, chars_from_str( "\\." ) ) ), puts(""); puts(""); parser b = regex( "\\." 
); print_list( b ), puts(""); print_list( parse( b, chars_from_str( "a" ) ) ), puts(""); print_list( parse( b, chars_from_str( "." ) ) ), puts(""); print_list( parse( b, chars_from_str( "\\." ) ) ), puts(""); puts(""); parser c = regex( "\\\\." ); print_list( c ), puts(""); print_list( parse( c, chars_from_str( "a" ) ) ), puts(""); print_list( parse( c, chars_from_str( "." ) ) ), puts(""); print_list( parse( c, chars_from_str( "\\." ) ) ), puts(""); print_list( parse( c, chars_from_str( "\\a" ) ) ), puts(""); puts(""); parser d = regex( "\\\\\\." ); print_list( d ), puts(""); print_list( parse( d, chars_from_str( "a" ) ) ), puts(""); print_list( parse( d, chars_from_str( "." ) ) ), puts(""); print_list( parse( d, chars_from_str( "\\." ) ) ), puts(""); print_list( parse( d, chars_from_str( "\\a" ) ) ), puts(""); puts(""); parser e = regex( "\\\\|a" ); print_list( e ), puts(""); print_list( parse( e, chars_from_str( "a" ) ) ), puts(""); print_list( parse( e, chars_from_str( "." ) ) ), puts(""); print_list( parse( e, chars_from_str( "\\." ) ) ), puts(""); print_list( parse( e, chars_from_str( "\\a" ) ) ), puts(""); puts(""); parser f = regex( "[abcd]" ); print_list( f ), puts(""); print_list( parse( f, chars_from_str( "a" ) ) ), puts(""); print_list( parse( f, chars_from_str( "." ) ) ), puts(""); puts(""); return 0; } static fOperator stringify; static string stringify( object env, list it ){ return to_string( it ); } static int test_ebnf(){ puts( __func__ ); Symbol(postal_address); Symbol(name_part); Symbol(street_address); Symbol(street_name); Symbol(zip_part); list parsers = ebnf( "postal_address = name_part street_address zip_part ;\n" "name_part = personal_part SP last_name SP opt_suffix_part EOL\n" " | personal_part SP name_part ;\n" "personal_part = initial '.' | first_name ;\n" "street_address = house_num SP street_name opt_apt_num EOL ;\n" "zip_part = town_name ',' SP state_code SP zip_code EOL ;\n" "opt_suffix_part = 'Sr.' | 'Jr.' 
| roman_numeral | ;\n" "opt_apt_num = [ apt_num ] ;\n" "apt_num = NUMBER ;\n" "town_name = NAME ;\n" "state_code = UPPER UPPER ;\n" "zip_code = DIGIT DIGIT DIGIT DIGIT DIGIT ;\n" "initial = 'Mrs' | 'Mr' | 'Ms' | 'M' ;\n" "roman_numeral = 'I' [ 'V' | 'X' ] { 'I' } ;\n" "first_name = NAME ;\n" "last_name = NAME ;\n" "house_num = NUMBER ;\n" "street_name = NAME ;\n", env( NIL_, 6, Symbol(EOL), chr('\n'), Symbol(DIGIT), digit(), Symbol(UPPER), upper(), Symbol(NUMBER), some( digit() ), Symbol(NAME), some( alpha() ), Symbol(SP), many( anyof( " \t\n" ) ) ), env( NIL_, 2, Symbol(name_part), Operator( NIL_, stringify ), Symbol(street_name), Operator( NIL_, stringify ) ) ); parser start = assoc_symbol( postal_address, parsers ); if( valid( start ) && start->t == LIST ) start = first( start ); print_list( start ), puts("\n"); print_list( parse( start, chars_from_str( "Mr. luser droog I\n" "2357 Streetname\n" "Anytown, ST 00700\n" ) ) ), puts(""); printf( "%d objects\n", count_allocations() ); return 0; } static int test_io(){ pprintf( "%s:%c-%c\n", "does it work?", '*', '@' ); return 0; } Size: $ make count wc -l -c -L pc11*[ch] ppnarg.h 180 4442 78 pc11io.c 13 218 36 pc11io.h 549 13731 77 pc11object.c 361 5955 77 pc11object.h 818 20944 80 pc11parser.c 214 3601 63 pc11parser.h 202 5453 69 pc11test.c 6 82 21 pc11test.h 29 1018 83 ppnarg.h 2372 55444 83 total cloc pc11*[ch] ppnarg.h 9 text files. 9 unique files. 0 files ignored. 
github.com/AlDanial/cloc v 1.93  T=0.05 s (194.9 files/s, 51356.5 lines/s)
-------------------------------------------------------------------------------
Language                     files          blank        comment           code
-------------------------------------------------------------------------------
C                                4            316             98           1335
C/C++ Header                     5            241             99            283
-------------------------------------------------------------------------------
SUM:                             9            557            197           1618
-------------------------------------------------------------------------------

Any improvements to make to the interface, implementation, or documentation?

Answer:

Makefile improvements

At first glance, the makefile looks OK, but a few improvements can be made.

First, you always override CFLAGS, although you allow things to be added to it via the environment variable $cflags. However, that lower-case form is very non-standard, and it's more common to expect CFLAGS=... make to work. The usual solution is this:

CFLAGS ?= -g -Wall -Wpedantic -Wextra -Wno-unused-function -Wno-unused-parameter -Wno-switch -Wno-return-type -Wunused-variable
CFLAGS += -std=c99

Where in the first line, we only add those options if no CFLAGS were provided via the environment, and in the second line we unconditionally add any required flags for the build to work.

Second, targets that don't build anything but just run commands should be marked as .PHONY, so if you accidentally created a file test that has a timestamp newer than pc11test, it wouldn't prevent make test from working as expected. So add:

.PHONY: test clean count

About forward declarations

I've added forward declarations for all the static functions inside the .c files so all the static "helper functions" can be placed below the non-static function that uses them, so the implementation can be presented in a more top down fashion overall.

Personally I don't think that is very helpful. Now you have both the forward declaration and the actual definition to keep in sync.
The files are long enough that you are going to use search functionality anyway to jump to functions.

Documentation

It is great that you are documenting all the functions and also having them grouped in a sensible way. However, I recommend that you write these documentation comments in Doxygen format. Doxygen is a widely used standard for documenting C and C++ code, and the Doxygen tools can then do all kinds of nice stuff for you: apart from generating documentation in navigable PDF and HTML formats, it can also warn you when you forget to document functions and/or function parameters.

Naming things

Especially in pc11object.h, I am a bit surprised by some of the function names, in particular when the comments above them describe them in terms that don't match the function name itself. For example, drop() has as documentation "Skip ahead n elements". Why not call the function skip() then, or alternatively, write "Drop the first n elements" in the comments? There are more examples of this, like map() "Transform", collapse()/reduce() "Fold", env() "Prepend", and so on. Some comments don't make sense at all to a C programmer, like first() being documented as "car". If you don't know LISP, you might think "what does this have to do with automobiles?". Some comments are needlessly complicated, like append() being documented as "return copy of start sharing end". It also raises questions: does this append one list to another, like the function name implies, or does it create a new list that is the concatenation of two lists, like the comment hints at? Make sure the function name, while concise, conveys clearly what it is going to do, and make sure the documentation matches. I would go over all the function names and make changes where appropriate. For example, instead of having to remember whether collapse() is for lists and reduce() is for arrays of objects, why not make them fold_list() and fold_objects()?
{ "domain": "codereview.stackexchange", "id": 43545, "tags": "c, parsing, functional-programming" }
Why are fluoroalkyl chains hydrophobic/oleophobic?
Question: I'm searching for an answer that explains the hydrophobicity/oleophobicity in terms of intermolecular forces, but can't really find one. Below is an example fluoroalkyl nanoparticle, F-POSS. It has 8 ligands of a fluoroalkyl chain $\ce{(CH2)2(CF2)4CF3}$ with a silicon-oxygen cage in the center. When grouped with other F-POSS molecules it forms a membrane with the fluoroalkyl chains:

Hydrogen bonding - Wouldn't the hydrogen be attracted to the fluorine atoms? They're essentially electron sinks.

London dispersion forces - Why would other molecules not be attracted to these ligands? I was told that the ligands themselves aren't even attracted to each other.

Apologies for the lack of citation, but what's been given to me has mostly been word of mouth. Research articles I've read don't really go into the intermolecular forces. Image source: http://pubs.acs.org/doi/abs/10.1021/la201545a?journalCode=langd5

Answer: You have identified a fairly common but counter-intuitive result. At least it's counter-intuitive based on the way the properties of fluorine are described in undergraduate chemistry classes. Specifically, we're told it's very electronegative and hence forms very strong hydrogen bonds, and that it is very reactive. This is, however, a poor description of what tends to dominate the behavior of fluorine when it is bonded to carbon. Let's start with something simple but illustrative. If I told you that the boiling point of $\ce{CCl4}$ is $\pu{350 K}$ and the boiling point of $\ce{CBr4}$ is $\pu{463 K}$, what would be your guess for the boiling point of $\ce{CF4}$? Probably less based on the trend, but just how much less? $\ce{CF4}$: b.p. = $\pu{145.5 K}$; $\ce{CH4}$: b.p. = $\pu{111.7 K}$. Well, hopefully you don't look this far, but you'll notice that the boiling point of $\ce{CF4}$ is much, much lower than that of $\ce{CCl4}$.
Importantly, the difference between the fluorine and chlorine compounds is much larger than that of the chlorine and bromine compounds. Additionally, the fact that $\ce{CF4}$ boils only slightly higher than $\ce{CH4}$ indicates that the molecule basically loses all interactions when going from the chlorine to the fluorine. What does this actually illustrate? It doesn't tell us anything about hydrogen-bonding or dipole interactions (all have zero dipole), but it tells us about the dispersion forces present in the system. Namely, if this result generalizes, we have learned that a $\ce{C-F}$ bond is essentially non-polarizable. This is very important because polarization is most of what determines the strength of dispersion interactions. Okay, this basically answers the question, but let's be more systematic. $\textbf{Dipole-Dipole Interactions:}$ The paper you linked is basically looking for a material which won't interact strongly with water and other polar substances. One important feature is to make sure the ligands on this material do not have an appreciable dipole then. It is true that the $\ce{C-F}$ is statically polarized towards the fluorine atom, so there is a dipole along the bond, but the molecular dipole is generally what we will care about. The ligands are basically cylindrical, so any dipole perpendicular to the cylinder ought to roughly cancel out. If anything there would be a dipole along the ligand backbone, but this is probably very small if it exists at all. So, dipole interactions don't particularly matter. $\textbf{Hydrogen-Bonding:}$ Despite what you might think, it actually is quite possible for water (or other $\ce{O-H}$ containing solvents) to hydrogen-bond with the fluorine in a $\ce{C-F}$ bond. See [1] for some calculations demonstrating this. So, if you're designing this material, you might have to worry about this a little bit, but leave it up to the experiment to tell if this matters. $\textbf{Dispersion Forces:}$ Here is the key. 
In organic chemistry, you're usually sold a lie that the things determining the size of dispersion forces are the surface area and the molecular weight of a molecule. If this were strictly true, then we would expect the increase in boiling point we saw above going from $\ce{CH4}$ to $\ce{CF4}$ to be much more dramatic than it is based on the change going from $\ce{CCl4}$ to $\ce{CBr4}$. The reason this rule works is because generally organic compounds are just a bunch of $\ce{C-H}$ bonds, so most things are roughly as polarizable as the next thing. Okay, but it's not like the $\ce{C-F}$ bonds are inherently unpolarizable. They are about as polarizable as a $\ce{C-H}$ bond, so for all consideration of physical interactions, we can basically imagine that these ligands are just lipids. $\textbf{Why Do Lipids Form Membranes?}$ This is actually quite a tricky, but interesting question. Imagine a big fat lipid in a solvent of water. One thing we know is that water likes to hydrogen bond. This lipid disrupts the hydrogen-bonding network. Furthermore, because there has to be some water at the surface of the lipid, it is likely these waters will get stuck aligning their dipoles with whatever dipole is present in the bond it is near. So, it seems like having the lipid there is both energetically unfavorable and entropically unfavorable. It is unclear without an experiment which of these things will dominate and if clumping all the lipids into a membrane is actually better because that's definitely a decrease in entropy for the lipid. Well, [2] shows the experiment describing this same idea. Slightly different system but same idea. Basically the conclusion is that the energy plays almost no role ($\Delta H\approx 0$) but the entropy for forming a membrane dominates. 
Shamelessly quoting from the abstract: Possible mechanisms for the entropy increase include: (i) the attraction between discrete oppositely-charged areas, releasing counterions; (ii) the release of loosely-bound water molecules from the inter-membrane gap; (iii) the increased orientational freedom of previously-aligned water dipoles; and (iv) the lateral rearrangement of membrane components. The lesson to learn here is that the entropy of the solvent can actually be very important when considering both the equilibrium arrangements of heterogeneous systems as well as chemical reactions. $\textbf{Bringing it All Together:}$ So, I hope you see what I mean when I said that noticing that $\ce{C-H}$ and $\ce{C-F}$ bonds give rise to almost identical intermolecular forces basically answers the question. When we consider explicitly the interactions of a substance with the ligands you have asked about (in comparison to a lipid), we see that dipole interactions might be a little stronger between the solvent and the ligand, but the ligand-ligand interactions should also be a bit stronger than in the lipid. So, again, $\Delta H\approx 0$. In contrast, the stronger dipole means that the solvent is more tied up, so the entropy change should increase (not accounting for the effect of the cube the ligands are mounted on). Note that this argument for membrane formation based on entropy is not exclusive to strongly polar solvents such as water. That is, pretty much any solvent gains a lot of entropy when a membrane forms. So, as long as it does not interact strongly with the ligand, the ligands should form a membrane.
Plus, alkanes have the problem of being very combustible, which is probably not good as these materials would likely be used at high temperatures. Random semi-relevant note: Teflon is just an alkane but with all $\ce{C-F}$ bonds, and it too is quite slippery (it's the stuff on non-stick pans). $\textbf{Edits:}$ I forgot to address the oleophobic nature of fluorocarbons, which is the most important part of the question, really. $\textbf{Oleophobicity of Fluorocarbons}$ One interesting trend, which I was not aware of, is that the boiling point of fluorocarbons increases more slowly than the boiling point of the corresponding hydrocarbons. For instance, the boiling point of perfluorooctane is $\pu{103 ^\circ C}$ while the boiling point of octane is $\pu{125 ^\circ C}$. So, this means that the dispersion forces in hydrocarbons grow more quickly than those in fluorocarbons. This makes some sense because the carbon in fluorocarbons will be more positively charged, and the fluorine atoms are comparable in polarizability to hydrogen, so the polarizability of the carbon atoms is diminished. It seems, however, that the oleophobicity of this particular structure has a lot to do with the fact that the chains are not free to move about in solution. For instance in [3], the authors show that by attaching a fluorocarbon, which is ordinarily not oleophobic, to silica particles, the fluorocarbon becomes oleophobic. This indicates to me (and this could be argued) that by restricting the translational motion of the fluorocarbons by mounting them on a large structure, as is done in the example you give, the favorability of mixing is lost. This assumes that the energetics of having the fluorocarbons dispersed throughout the oil is roughly the same as having the fluorocarbons and oil separated into phases. So, it seems like entropy is the dominant factor in both the hydrophobic and oleophobic characteristics of these fluorocarbons.
The big difference is that basically any nonpolar organic molecule will form its own phase in the presence of a polar solvent like water. This is because the interactions between the polar substances are fairly strong, but there is also still a lot of free motion (hydrogen-bonds, etc. break and form again quite often). With the fluorocarbon, one has to manually restrict the entropy gain due to mixing by forcing the fluorocarbons onto a silica base. These things individually have the same effect of disrupting some of the degrees of freedom of the solvent, so the two separate to maximize entropy. Obviously, some of the parts about oleophobicity are a little bit hand-wavy because really either effect could dominate, but the fact that putting it on the silica base makes the fluorocarbon oleophobic as ref. 3 points out convinces me this is due to entropy and not enthalpy. References: [1]: Andrew V. Razgulin, Sandro Mecozzi, "Binding properties of aromatic carbon-bound fluorine," Journal of medicinal chemistry 2006, 49(26), 7902-7906 (https://doi.org/10.1021/jm0600702). [2]: Husen Jia, John R. Liggins, Wah Soon Chow, "Entropy and biological systems: Experimentally-investigated entropy-driven stacking of plant photosynthetic membranes," Scientific reports 2014, 4, Article number: 4142 (7 pages)(DOI: https://doi.org/10.1038/srep04142)(PDF). [3]: H. F. Hoefnagels, D. Wu, G. de With, W. Ming, "Biomimetic superhydrophobic and highly oleophobic cotton textiles," Langmuir 2007, 23(26), 13158-13163 (https://doi.org/10.1021/la702174x).
{ "domain": "chemistry.stackexchange", "id": 8666, "tags": "organic-chemistry, intermolecular-forces" }
Programming Symbols : Instance/Instantiation
Question: Is there a generally accepted symbol for indicating instantiation, that is, indicating that an object is an instance of a class? My first guess is to use a left arrow with a double or triple line, but this seems more like a functional-programming symbol, based upon what I've seen of Haskell. Wikipedia has no examples, and a quick Google for pseudocode mostly turns up simple functional or procedural algorithms. Object-oriented examples don't seem to feature, or do not use, any specific symbol. Answer: In most contexts, objects are values and classes are types, so I would simply use the colon, representing the "has type" relation: $Object : Class$ That said, this depends on your context, and whether $:$ carries some other meaning. You could also use $\in$, since you can identify a class with the set of objects that are instances of that class.
{ "domain": "cs.stackexchange", "id": 7625, "tags": "programming-languages, semantics, notation" }
How do graph neural networks adapt to different number of nodes and connections of different graphs?
Question: I have recently been studying GNNs, and the fundamental idea seems to be the aggregation and transfer of information from a node's neighborhood to update the node's internal state. However, there are few sources that mention the implementation of GNNs in code; specifically, how do GNNs adapt to the differing number of nodes and connections in a dataset? For example, say we have two graph data points that look like this: It is clear that the number of weights required in the two data points would be different. So, how would the model adapt to this varying number of weight parameters? Answer: The essence of the reason why this approach works for graphs with a different number of nodes is locality and node-order permutation invariance. The typical form of the layer-wise signal propagation rule is: $$ H^{(l+1)} = f(H^{(l)}, A) = \sigma (A H^{(l)} W^{(l)}) $$ Here $H^{(l)}$ are the activations of the $l$-th layer, $W^{(l)}$ is the weight matrix, $A$ is the adjacency matrix and $\sigma$ is the activation function. The activation function $\sigma$ and the weight matrix $W^{(l)}$ are the same on any graph, and the difference is only in the choice of adjacency matrix $A$. Aggregation of the information from the neighborhood is done in a permutation-invariant way, and the only way to do this is to assign the same weight to every member of the neighborhood, and (possibly) some other weight to the node itself. For Graph Convolutional Neural Networks (GCNNs) this works as follows: $$ h_{v_i}^{(l+1)} = \sigma (\sum_{j \in N(i)} \frac{1}{c_{ij}} h_{v_j}^{(l)} W^{(l)}) $$ Regardless of whether the node is isolated or has many neighbors, the procedure is unchanged. Older approaches, for example the spectral ones, had to calculate the graph Laplacian and perform its eigendecomposition; they did not generalize to other graphs.
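A minimal NumPy sketch (my own illustration, not from the original answer) of why the same weight matrix $W^{(l)}$ applies to graphs of any size: $W$ only mixes feature dimensions, while the normalised adjacency matrix handles the graph-dependent aggregation.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN propagation step: sigma(A_hat @ H @ W).

    A_hat is the adjacency matrix with self-loops, row-normalised
    (the 1/c_ij mean-aggregation from the answer); W is shared by
    every graph, so its shape depends only on the feature sizes.
    """
    A_hat = A + np.eye(A.shape[0])                    # add self-connections
    A_hat = A_hat / A_hat.sum(axis=1, keepdims=True)  # 1/c_ij normalisation
    return np.maximum(A_hat @ H @ W, 0)               # ReLU activation

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 8))       # one weight matrix, fixed size

# Graph 1: 3 nodes; Graph 2: 5 nodes. The same W works for both.
A1 = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
A2 = (rng.random((5, 5)) < 0.5).astype(float)
A2 = np.triu(A2, 1)
A2 = A2 + A2.T                    # symmetric, no self-loops

H1 = rng.normal(size=(3, 4))
H2 = rng.normal(size=(5, 4))

print(gcn_layer(A1, H1, W).shape)  # (3, 8)
print(gcn_layer(A2, H2, W).shape)  # (5, 8)
```

The output feature size (8 here) is fixed by $W$, but the number of rows simply follows the number of nodes in whichever graph is fed in.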
{ "domain": "ai.stackexchange", "id": 3064, "tags": "implementation, geometric-deep-learning, graph-neural-networks" }
How can I show that the Cook-Levin theorem does not relativize?
Question: The following is an exercise which I am stuck at (source: Sanjeev Arora and Boaz Barak; it's not homework): Show that there is an oracle $A$ and a language $L \in NP^A$ such that $L$ is not polynomial-time reducible to 3SAT even when the machine computing the reduction is allowed access to $A$. What I tried was: take $A$ to be the oracle for the halting problem and let $L=\{1^n \mid \;\exists \; \langle M,w \rangle \; \text{s.t.} \; |\langle M,w \rangle|=n \; \text{ and Turing machine M halts on w} \}$. With this assignment I ensure $L \in NP^{A}$, and $L$ is not polynomial-time reducible to 3SAT if the oracle is not provided to the machine carrying out the reduction. However, to map an instance $1^n$ I would have to search through $2^n$ strings even if the oracle is provided to the reduction machine, and this does not seem like a proof of the absence of a polynomial-time reduction in this case. Is there a way to prove it using the same example? Is there a simpler example? Answer: Please refer to "Does Cook-Levin Theorem relativize?". Also refer to Arora, Impagliazzo and Vazirani's paper Relativizing versus Nonrelativizing Techniques: The Role of Local Checkability. In the paper by Baker, Gill and Solovay (BGS) on relativizations of the P =? NP question (SIAM Journal on Computing, 4(4):431-442, December 1975), they give a language $B$ and $U_B$ such that $U_B \in NP^B$ and $U_B \not\in P^B$, thus proving that there are oracles $B$ for which $P^B \neq NP^B$. We shall modify $U_B$ and $B$ to $U_{B'}$ and $B'$ such that we get a new language that cannot be reduced to 3SAT even with the availability of $B'$ as an oracle. First, assume that we can pad every 3SAT boolean instance $\phi$ to $\phi'$ with some additional dummy 3CNF clauses such that $|\phi'|$ is odd and they are equivalent, i.e., $\phi$ is satisfiable iff $\phi'$ is satisfiable. We can do it in $n+O(1)$ time and with $O(1)$ padding, but even if it takes polynomial time and extra polynomial padding it does not matter.
Now we need to combine $B$ and 3SAT into $B'$ somehow so that the BGS theorem still holds but, additionally, $3SAT \in P^{B'}$. So we do something like the following. $U_{B'} = \{1^n \ \ |\ \ \exists x \in B'$ such that $|x| = 2n\}$ and $B' = B'_{constructed} \ \cup \{\phi \ \ |\ \ \phi \in 3SAT $ and $ |\phi| $ is odd $\}$. Now we shall construct $B'_{constructed}$ according to the theorem such that if the deterministic machine $M_i^{B'}$ for input $1^n$ ($n$ is determined as in the theorem) asks the oracle $B'$ a query of odd length, we check if it is in 3SAT and answer correctly, but if it asks a query of even length we proceed according to the construction, that is, answering correctly if it is already in the table, otherwise answering no every time. Then, since we are running for $1^n$, we flip the answers at length $2n$ so that $M_i^{B'}$ does not decide $U_{B'}$. We can prove, similarly as in the BGS theorem, that for this $B'$ and $U_{B'}$ too we have $U_{B'} \in NP^{B'}$ and $U_{B'} \not\in P^{B'}$. $U_{B'} \in NP^{B'}$ is easy to prove. We construct a non-deterministic Turing machine which, for input $1^n$, creates non-deterministic branches, each running for $2n$ steps to generate a different $2n$-length string, and then asks oracle $B'$ whether the $2n$-length string is in $B'$; if the answer is yes it accepts $1^n$, else it rejects $1^n$. This construction shows that $U_{B'} \in NP^{B'}$. $U_{B'} \not\in P^{B'}$ can be proved with the help of a diagonalization argument: basically, $U_{B'}$ differs from $L(M_i^{B'})$ for every oracle Turing machine that has $B'$ as an oracle. This is because of how we construct $B'_{constructed}$. Now we shall prove by contradiction that there does not exist a reduction from $U_{B'}$ to 3SAT even with the availability of oracle $B'$. Assume there is a reduction using oracle $B'$, i.e., $U_{B'} \leq^{B'}_P 3SAT$.
That means we can reduce a string of the form $1^n$ to a 3SAT instance $\phi$ using a polynomial-time deterministic machine which uses $B'$ as an oracle. We can now describe a deterministic TM $M^{B'}$ which will decide $U_{B'}$ in polynomial time using $B'$ as an oracle. First this machine reduces the input $1^n$ to a 3SAT instance $\phi$ using $B'$ as an oracle; this can be done because we have the reduction above. Then, if $\phi$ is not of odd length, $M^{B'}$ will pad it to make $\phi'$, which is of odd length. Next, it will give this $\phi'$ to oracle $B'$ and get the answer yes/no. It will accept if the answer is yes and reject if the answer is no. This machine is deterministic polynomial-time and uses oracle $B'$. Thus we have proved that $U_{B'} \in P^{B'}$, a contradiction. Therefore $U_{B'} \not\leq^{B'}_P 3SAT$.
{ "domain": "cs.stackexchange", "id": 6131, "tags": "complexity-theory, np, nondeterminism, oracle-machines, relativization" }
Is Loss not a good indication of performance?
Question: I'm trying to segment 3D volumes using a 3D U-Net network. I've reached a stage where I am getting very good validation loss using cross-entropy and BCE:

idx: 0 of 53 - Validation Loss: 0.029650183394551277
idx: 5 of 53 - Validation Loss: 0.009899887256324291
idx: 10 of 53 - Validation Loss: 0.05049080401659012
idx: 15 of 53 - Validation Loss: 0.02019292116165161
idx: 20 of 53 - Validation Loss: 0.04724293574690819
idx: 25 of 53 - Validation Loss: 0.02810296043753624
idx: 30 of 53 - Validation Loss: 0.02642594277858734
idx: 35 of 53 - Validation Loss: 0.029894422739744186
idx: 40 of 53 - Validation Loss: 0.04158024489879608
idx: 45 of 53 - Validation Loss: 0.04574814811348915
idx: 50 of 53 - Validation Loss: 0.05406259000301361

I assumed my network was performing very well, so I wrote a script to visualize my network outputs against their respective targets. What I get is something very different, not something that justifies this loss. The samples are of depth 32 and I've outputted each z-plane as a single image. Here is the target: And the predicted output: All samples are like this; not one of them accurately represents the target given the reported loss. So I ask: is my loss wrong? What should I look into to fix this? Thanks

Answer: To answer the title of your question: Is loss not a good indication of performance? It is only a relative indicator of performance over the training session. First, what is loss anyway? In general, the loss is some expression of the difference between the model's predicted output and the target output. Depending on the loss function used (e.g. if a log is involved) or on the nominal values of the inputs themselves, the value of the loss can be very small or very large. People usually normalise data so the values are smaller, but the point is that you cannot always say that a validation loss of 0.0012345 is actually a good value, or that 12345 is definitely bad!
Other possible loss functions

In other 3D segmentation models that I have seen, it is common to use the Dice coefficient for your cost/loss. Maybe give that a try. The Dice coefficient is essentially the same as the F1 score; you are really finding a trade-off in how to penalise a model for its mistakes in classification, e.g. of pixels or voxels. Do you want to strongly punish bad cases, or take a more averaging approach? As that link points out, it is similar to the difference between the $L_1$ and $L_2$ losses. There is also the Jaccard index, which is essentially the same as the Dice coefficient. The Tversky index is a generalisation of the two; it is an asymmetric similarity measure: $$ Tversky(A, B; \alpha, \beta) = \frac{|TruePos|}{|TruePos| + \alpha |FalsePos| + \beta |FalseNeg|} $$ The Dice coefficient is this with $\alpha = \frac{1}{2}$ and $\beta = \frac{1}{2}$. The Jaccard index instead has $\alpha = 1$ and $\beta = 1$. This might be a nice approach for your problem, to tweak how the loss is computed. There are plenty of sources online to explain more about these well-defined measures and perhaps help you gain an intuition for their results.

Your results

With the losses you posted, it doesn't actually look as good as you explained. I am not sure exactly what your idx values mean, but the loss values are actually going up:

In [1]: import numpy as np
In [2]: import pandas as pd
In [3]: import matplotlib.pyplot as plt
In [4]: val_loss = np.array([0.029650183394551277, 0.009899887256324291, 0.05049080401659012, 0.02019292116165161, 0.04724293574690819, 0.02810296043753624, 0.02642594277858734, 0.029894422739744186, 0.04158024489879608, 0.04574814811348913, 0.05406259000301361])
In [5]: pd.Series(val_loss, index=range(0, 51, 5)).plot()
Out[5]: <matplotlib.axes._subplots.AxesSubplot at 0x7f4405326390>
In [6]: plt.show()

This suggests you might be overfitting.
However, the differences between the predictions and ground truth in the images you show suggest there is a more fundamental problem.
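The Tversky formula above is easy to sketch directly in NumPy (an illustrative implementation of my own, with Dice and Jaccard falling out as special cases; a loss would typically be one minus this value):

```python
import numpy as np

def tversky(pred, target, alpha=0.5, beta=0.5, eps=1e-7):
    """Tversky index between two binary masks.

    alpha = beta = 0.5 gives the Dice coefficient,
    alpha = beta = 1.0 gives the Jaccard index.
    eps guards against division by zero on empty masks.
    """
    pred = pred.astype(bool).ravel()
    target = target.astype(bool).ravel()
    tp = np.sum(pred & target)   # true positives
    fp = np.sum(pred & ~target)  # false positives
    fn = np.sum(~pred & target)  # false negatives
    return tp / (tp + alpha * fp + beta * fn + eps)

# Toy "segmentations": 3 true positives, 1 false positive, 1 false negative.
pred   = np.array([1, 1, 1, 1, 0, 0])
target = np.array([1, 1, 1, 0, 1, 0])

dice    = tversky(pred, target, 0.5, 0.5)  # 3 / (3 + 0.5 + 0.5) = 0.75
jaccard = tversky(pred, target, 1.0, 1.0)  # 3 / (3 + 1 + 1)     = 0.60
```

Raising $\beta$ above $\alpha$ penalises false negatives more heavily, which is the usual motivation for the Tversky loss on imbalanced segmentation targets.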
{ "domain": "datascience.stackexchange", "id": 4179, "tags": "deep-learning, cnn, visualization" }
Minimum True Monotone 3SAT
Question: I am interested in a SAT variation where the CNF formula is monotone (no variables are negated). Such a formula is obviously satisfiable. But say the number of true variables is a measure of how good our solution is. So we have the following problem:

MINIMUM TRUE MONOTONE 3SAT
INSTANCE: A set U of variables and a collection C of disjunctive clauses of 3 literals, where a literal is a variable (not negated).
SOLUTION: A truth assignment for U that satisfies C.
MEASURE: The number of variables that are true.

Could someone give me some helpful remarks on this problem?

Answer: This problem is the same as the Vertex Cover problem for $3$-uniform hypergraphs: given a collection $H$ of subsets of $V$ of size $3$ each, find a minimum subset $U\subseteq V$ that intersects each set in $H$. It is therefore NP-hard, but fixed-parameter tractable. It is also NP-hard to approximate to within a factor of $2-\epsilon$ for every $\epsilon>0$. This was shown in the following paper: Irit Dinur, Venkatesan Guruswami, Subhash Khot and Oded Regev. A New Multilayered PCP and the Hardness of Hypergraph Vertex Cover, SIAM Journal on Computing, 34(5):1129-1146, 2005.
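The vertex-cover view can be made concrete with a tiny brute-force sketch (illustrative only; the function and variable names are mine): minimising the number of true variables is exactly finding a minimum set of vertices that hits every 3-element hyperedge.

```python
from itertools import combinations

def min_true_monotone_sat(variables, clauses):
    """Brute-force the minimum number of true variables satisfying a
    monotone CNF, i.e. a minimum hitting set / hypergraph vertex cover.
    Exponential time; only for small instances."""
    for k in range(len(variables) + 1):
        for true_set in combinations(variables, k):
            s = set(true_set)
            # A clause is satisfied iff at least one of its variables is true.
            if all(any(v in s for v in clause) for clause in clauses):
                return s
    return None  # unreachable: setting every variable true satisfies C

# Clauses as 3-tuples of (non-negated) variables.
clauses = [("a", "b", "c"), ("c", "d", "e"), ("a", "d", "f")]
print(min_true_monotone_sat("abcdef", clauses))  # a minimum solution of size 2, e.g. {'a', 'c'}
```

No single variable hits all three clauses here, so the optimum is 2, matching the hypergraph-cover intuition.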
{ "domain": "cstheory.stackexchange", "id": 1207, "tags": "cc.complexity-theory, np-hardness, sat" }
Can we really see the bonds?
Question: I was wondering: is there really a bond present at the microscopic level, or are atoms/molecules just nearby and connected by a force which is not visible (like the gravitational force between the Earth and the Sun), with bonds being something we define just for understanding? Answer: All credit to Zhang et al., "Real-Space Identification of Intermolecular Bonding with Atomic Force Microscopy," Science Vol. 342, no. 6158, pp. 611-614. Yes, direct images of bonds, not only covalent bonds but also intermolecular hydrogen bonds, have been recorded. It is the electron density that is being observed; covalent and hydrogen bonds involve high electron density between the atoms. Scanning tunneling microscopy can also be utilized to directly observe the electron density of bonds.
{ "domain": "chemistry.stackexchange", "id": 2565, "tags": "bond" }
Speed of blast from supernova
Question: At what speed does the blast front of a supernova expand? Is it close to the speed of light, or is it less than a quarter of the speed of light? Answer: The speed of the blast front depends on the initial energy release and the density of the medium into which it is expanding; see here. Theory suggests, and measurements confirm, expansion rates of the order of thousands of km/s, or a few $\times 10^6\ \mbox{m/s}$, i.e. $\sim 1\%\ c$.
{ "domain": "astronomy.stackexchange", "id": 991, "tags": "supernova, explosion" }
Newton's Second Law in vertical launch of a rocket
Question: Consider a rocket being launched vertically. Let $T(t)$ denote the thrust from the engine and $M(t)$ be the total mass of the rocket at time $t$. At $t=0$, $T(0)=M(0)g$ (so that the normal force due to the launch pad need not be considered). The acceleration $a(t)$ of the rocket at time $t$ can be obtained (along with other variables, like the ejection speed of the fuel, that are less important to my question) from Newton's second law of motion: $$T(t)-M(t)g=\frac{dp}{dt}=\frac{d(M(t)v(t))}{dt}$$ $$=M(t)\frac{dv}{dt}+v(t)\frac{dM}{dt}=M(t)\frac{dv}{dt}=M(t)a(t)\tag{1}$$ So it seems to me that in general we do not need to consider the $\frac{dM}{dt}$ term? But shouldn't $\frac{dM(t)}{dt}$ be non-zero if the total mass of the rocket is decreasing over time? Or is it that the change in mass over time is accounted for by $M=M(t)$ alone already? And when do we need to consider the $\frac{dM}{dt}$ term in Newton's second law? Answer: Your second equation in $(1)$ isn't valid when the mass is changing; see here. When you have a variable mass (or rather, the mass of the body of concern is changing), you need to think carefully about the system to which you are applying the second law. Here are two ways to go about this: At time $t - \delta t$, the rocket mass is $M(t) + \delta m$ and at time $t$ it is $M(t)$. Apply the second law to the system that is only the mass that will remain at time $t$, i.e. the mass $M(t)$. This mass isn't changing during this time interval. We can write $$T(t) - M(t)g = M(t)\frac{dv}{dt}.\tag{1}$$ This equation is instantaneously valid during the entire motion of the rocket. Now consider the same situation, but this time choose the system to be the entire rocket mass $M(t) + \delta m$ at time $t - \delta t$, including the mass $\delta m$ that will have been ejected by time $t$. This mass again does not vary during the time interval $\delta t$. The only external force applied on this system is the weight.
Suppose the mass $\delta m$ is ejected from the rocket at a speed of $v_e$ relative to the rocket, and $M(t)$ picks up a velocity increment $\delta v$. The second law now states $$-(M(t) + \delta m)g = \frac{1}{\delta t}\left[M(t)(v+\delta v)+\delta m(v -v_e) - (M(t)+\delta m)v \right].$$ As $\delta t\to 0$, we get $$-M(t)g=M(t) \frac{dv}{dt} +\frac{dM(t)}{dt}v_e.\tag{2}$$ Here $dM/dt$ is the rate of change of mass, not the rate at which mass is ejected; i.e., $\delta m$ was positive but $dM/dt$ is negative. Comparing $(1)$ and $(2)$, you see that $$T(t) = -\frac{dM(t)}{dt}v_e$$ so the ejection speed isn't unimportant after all. When $v_e$ is constant, neglecting the weight term and integrating $(2)$ yields the famous Tsiolkovsky rocket equation.
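The final step can be checked numerically: integrating equation $(2)$ with the weight term neglected, as the answer does, should reproduce the closed-form Tsiolkovsky result $\Delta v = v_e \ln(M_0/M_f)$. A small sketch with made-up illustrative numbers:

```python
import math

# Numerically integrate M dv/dt = -v_e dM/dt (weight neglected) and
# compare against the Tsiolkovsky result v = v_e ln(M0 / Mf).
# All values below are illustrative, not from the original question.
v_e = 3000.0             # exhaust speed relative to the rocket, m/s
M0, Mf = 1000.0, 400.0   # initial and final mass, kg
mdot = -2.0              # dM/dt, kg/s (negative: mass is ejected)

dt = 1e-3
M, v = M0, 0.0
while M > Mf:
    v += -v_e * mdot / M * dt  # dv = -v_e dM / M
    M += mdot * dt

exact = v_e * math.log(M0 / Mf)
print(round(v), round(exact))  # both close to ~2749 m/s
```

The agreement confirms that once gravity is dropped, only the mass ratio and the exhaust speed determine the velocity gain.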
{ "domain": "physics.stackexchange", "id": 68940, "tags": "newtonian-mechanics, forces, classical-mechanics, kinematics, calculus" }
C++ Linked list implementation using smart pointers. Advice on move semantics
Question: As an exercise to familiarise myself with smart pointers, I implemented a template linked list class in C++, closely following the very good tutorial at https://solarianprogrammer.com/2019/02/22/cpp-17-implementing-singly-linked-list-smart-pointers/, which I duly acknowledge and from which I borrow freely. The code seems to be working as expected so far, and I feel that I have a better understanding of how to use unique_ptrs for ownership. However, I have taken a shortcut in my code for popping an element from the front of the list, which is also used in the clean() member function (see code below). In particular, when popping an element or iteratively deleting nodes, I have noticed that some people use an intermediary unique pointer to take ownership of the node to be deleted, as in the following:

/**
 * Pop the top element off the list
 */
template <typename T>
void LinkedList<T>::pop() {
    if (ptrHead == nullptr) {
        return;
    }
    // can we safely avoid the ptrDetached intermediary?
    std::unique_ptr<Node> ptrDetached = std::move(ptrHead);
    ptrHead = std::move(ptrDetached->ptrNext);
}

However, it appears to me that the ptrDetached intermediary is unnecessary, and one can instead use:

template <typename T>
void LinkedList<T>::pop() {
    if (ptrHead == nullptr) {
        return;
    }
    ptrHead = std::move(ptrHead->ptrNext);
}

This second version of the code seems to be working [Linux, g++ (Debian 9.2.1-22) 9.2.1 20200104], but I am concerned that I might be making some naive assumptions about unique_ptr's move constructor that might come back to bite me. Is there anything wrong with the second approach? Can anyone offer some advice on best practice here? Is my shortcut above generally safe? My header file for the complete LinkedList follows (it's a work in progress). Suggestions for improvement would be very welcome. Thanks in advance, and my apologies if there is something I have overlooked (this is my first post).
/*
 * File: LinkedList.h
 */

#ifndef LINKEDLIST_H
#define LINKEDLIST_H

#include <iostream>
#include <memory>
#include <exception>

template <typename T>
class LinkedList {
    template <typename U>
    friend std::ostream& operator<<(std::ostream&, const LinkedList<U>&);

public:
    // default constructor
    LinkedList() : ptrHead{nullptr} {}

    // copy constructor: duplicate a list object
    LinkedList(LinkedList&);

    // move constructor
    LinkedList(LinkedList&&);

    // destructor
    ~LinkedList();

    // TODO: Add overloaded assignment and move assignment operators
    // For now, just delete them.
    const LinkedList& operator=(const LinkedList&) = delete;
    const LinkedList& operator=(const LinkedList&&) noexcept = delete;

    // utility methods
    void push(const T&);
    void pop();
    T peek() const;
    void clean();
    void print(std::ostream&) const;

private:
    // basic node structure
    struct Node {
        // constructor
        explicit Node(const T& data) : element{data}, ptrNext{nullptr} {}

        // destructor (only for testing deallocation)
        ~Node() {
            std::cout << "Destroying Node with data " << element << std::endl;
        }

        T element;
        std::unique_ptr<Node> ptrNext;
    };

    std::unique_ptr<Node> ptrHead;  // head of the list
};

/////////////////////////////////////////////////////////////////////////////
// Implementation of Linked List

/**
 * Copy constructor
 *
 * @param list
 */
template <typename T>
LinkedList<T>::LinkedList(LinkedList& list) {
    // the new head of list
    std::unique_ptr<Node> ptrNewHead{nullptr};
    // raw pointer cursor for traversing the new (copied) list
    Node* ptrCurrNode{nullptr};
    // raw pointer for traversing the list to be copied
    Node* ptrCursor{list.ptrHead.get()};

    while (ptrCursor) {
        // allocate a new node, copying the element of the current node
        std::unique_ptr<Node> ptrTemp{std::make_unique<Node>(ptrCursor->element)};
        // add it to the new list, as appropriate
        if (ptrNewHead == nullptr) {
            ptrNewHead = std::move(ptrTemp);
            ptrCurrNode = ptrNewHead.get();
        } else {
            ptrCurrNode->ptrNext = std::move(ptrTemp);
            ptrCurrNode = ptrCurrNode->ptrNext.get();
        }
        ptrCursor = ptrCursor->ptrNext.get();
    }
    ptrHead = std::move(ptrNewHead);
}

/**
 * Copy move constructor
 *
 * @param list
 */
template <typename T>
LinkedList<T>::LinkedList(LinkedList&& list) {
    ptrHead = std::move(list.ptrHead);
}

// TODO: Add overloaded assignment and move assignment operators

/**
 * Destructor
 */
template <typename T>
LinkedList<T>::~LinkedList() {
    clean();
}

/**
 * Push a T value onto the list
 *
 * @param data
 */
template <typename T>
void LinkedList<T>::push(const T& data) {
    std::unique_ptr<Node> ptrTemp{std::make_unique<Node>(data)};
    ptrTemp->ptrNext = std::move(ptrHead);
    ptrHead = std::move(ptrTemp);
}

/**
 * Pop the top element off the list
 */
template <typename T>
void LinkedList<T>::pop() {
    if (ptrHead == nullptr) {
        return;
    }
    // can we safely avoid the ptrDetached intermediary?
    // std::unique_ptr<Node> ptrDetached = std::move(ptrHead);
    // ptrHead = std::move(ptrDetached->ptrNext);
    ptrHead = std::move(ptrHead->ptrNext);
}

/**
 * Peek at the value of the top element.
 *
 * Throws an exception if the list is empty.
 *
 * @return
 */
template <typename T>
T LinkedList<T>::peek() const {
    if (ptrHead == nullptr) {
        throw std::out_of_range{"Empty list: Attempt to dereference a NULL pointer"};
    }
    return ptrHead->element;
}

/**
 * Clean the list
 */
template <typename T>
void LinkedList<T>::clean() {
    while (ptrHead) {
        // can we safely avoid the ptrDetached intermediary?
        // std::unique_ptr<Node> ptrDetached = std::move(ptrHead);
        // ptrHead = std::move(ptrDetached->ptrNext);
        ptrHead = std::move(ptrHead->ptrNext);
    }
}

/**
 * Print the list on ostream os
 *
 * @param os
 */
template <typename T>
void LinkedList<T>::print(std::ostream& os) const {
    Node* ptrCursor{ptrHead.get()};  // raw pointer for iteration
    while (ptrCursor) {
        os << ptrCursor->element << " -> ";
        ptrCursor = ptrCursor->ptrNext.get();
    }
    os << "NULL" << std::endl;
}

/**
 * Overloaded operator << for the LinkedList
 */
template <typename T>
std::ostream& operator<<(std::ostream& os, const LinkedList<T>& list) {
    list.print(os);
    return os;
}

#endif /* LINKEDLIST_H */

Answer: Overall: good. Personally I don't like building containers using smart pointers. Containers and smart pointers are the techniques we use to manage memory for objects (singular or plural, respectively). As such, they should both manage their own memory correctly. But other people do it (use smart pointers), so I don't see it as a big deal; I just think you will learn more from implementing the container as the class that handles memory management.

Overview

You should put your stuff inside its own namespace.

Code Review

Not very unique:

#ifndef LINKEDLIST_H
#define LINKEDLIST_H

If you add your own namespace to that guard it may become unique.

A friend template for a different type?

class LinkedList {
    template <typename U>
    friend std::ostream& operator<<(std::ostream&, const LinkedList<U>&);

You can simplify this to:

class LinkedList {
    friend std::ostream& operator<<(std::ostream&, LinkedList const&);

Even though print() is a public method, and thus the operator does not need to be a friend to call it, I would still encourage making it a friend because it declares the tight coupling of the interface.

Nice use of the initializer list here:

LinkedList() : ptrHead{nullptr} {}

Curious why you don't use it in the copy constructor body! I'll get to that below.

Normally you would pass the list by const reference.
LinkedList(LinkedList&);

Here you could make a mistake in your copy constructor and accidentally modify the input list.

Move constructors are usually noexcept safe:

LinkedList(LinkedList&&);

This gives the standard library the opportunity to add optimizations when using its containers. If objects can be moved safely without the chance of an exception, then the move constructor can be used. If the move constructor is not exception safe, then the library cannot always provide the Strong Exception Guarantee and thus must use a technique that copies rather than moves. So if you can guarantee exception-safe moves, you should let the compiler know with noexcept.

Seriously, that comment does me no good:

// destructor
~LinkedList();

That is a bad comment, because comments need to be maintained with the code (comments, like code, rot over time). So be careful to avoid useless comments, as they take effort to maintain (and people will put as little effort into maintenance as they can). As a result, comments and code can drift apart over time and cause confusion.

// TODO: Add overloaded assignment and move assignment operators
// For now, just delete them.
const LinkedList& operator=(const LinkedList&) = delete;
const LinkedList& operator=(const LinkedList&&) noexcept = delete;

These are both exceptionally easy to implement if you have a swap() noexcept method.

Note 1: Assignment operators don't usually return const references.
Note 2: The move assignment operator does not take a const input. Moving the source into the destination will modify it.

LinkedList& operator=(LinkedList const& input)
{
    LinkedList copy(input);
    swap(copy);
    return *this;
}
LinkedList& operator=(LinkedList&& input) noexcept
{
    clean();
    swap(input);
    return *this;
}

You have a move constructor. Why don't you have a move push()?

void push(const T&);

Nice:

void pop();

Clean separation of the pop from the peek().

Why are you returning by value?

T peek() const;

You should return a const reference to the object.
This will prevent an extra copy (which is important if T is expensive to copy). But you can also provide a normal reference (if your class needs it) that would allow you to modify the object in place inside the list:

T const& peek() const;
T& peek();  // Optional.

template <typename T>
LinkedList<T>::LinkedList(LinkedList& list) {

Why not use the initializer list to do this?

// the new head of list
std::unique_ptr<Node> ptrNewHead{nullptr};

Which of course is the default action of the unique_ptr default constructor, so this operation is already done by this point in the constructor.

I am going to mention comments again:

// raw pointer cursor for traversing the new (copied) list
Node* ptrCurrNode{nullptr};
// raw pointer for traversing the list to be copied
Node* ptrCursor{list.ptrHead.get()};

These comments are not useful. They do not tell me more than I can already understand from simply reading the code. In fact, your code could be made more readable by removing the comments and using better variable names. Comments should not be used for describing the code (the code does that very well). Also, because of comment rot over time, the code and comments can easily become disjoint. If a maintainer comes across code that does not match its comment, do they fix the comment or do they fix the code? If they are good, they have to do one or the other, which means they have to do research. It is better to write "self-documenting code", so the code describes what it does. Your comments should describe WHY, or an overall ALGORITHM, or some particularly OBSCURE point that code cannot describe. DO NOT simply convert your code into English and call it a comment.

So, above you ask why people assigned this to a temporary:

void LinkedList<T>::pop() {
    ....
    ptrHead = std::move(ptrHead->ptrNext);
}

The question you have to ask yourself:

Q: Does the std::unique_ptr assignment operator call the destructor on the object it contains before or after it assigns the new value?
Let us imagine two different versions of the assignment operator:

oldValue = internalPtr;
internalPtr = newValue;
delete oldValue;

or

delete internalPtr;
internalPtr = newValue;

How do those different implementations affect your code? What guarantees does the standard provide?

No need to use std::endl here:

os << "NULL" << std::endl;

Prefer to use "\n". This is exactly the same, except it does not force a flush of the stream. The main problem with people timing C++ streams is that they always manually flush them (like this) and then complain they are not as fast as C streams. If you don't manually flush them (especially since the stream knows when to flush itself very efficiently), the C++ streams are comparable to C streams.
{ "domain": "codereview.stackexchange", "id": 37244, "tags": "c++, linked-list, pointers" }
Moving clocks tick slow and time dilation
Question: Here's what I don't understand about time dilation. Alice is at rest and Bob is moving with velocity $V$ with respect to Alice. Let's say Alice measures two events separated in time by $\Delta t_A$ according to her own watch, and she measures those events in the same place, so she has measured her own proper time. By applying Lorentz transformations, we get that $\Delta t_B' = \gamma \Delta t_A$. What does this time mean? Is this the proper time elapsed between the two events as measured by another observer, Bob, who is measuring time with his own clock at the same place? I mean, is what we compute by using Lorentz transformations Bob's proper time, or is it something different? And if it IS something different, how can we compute Bob's proper time? Also, since $\Delta t_B' > \Delta t_A$, I don't understand why people usually state that "moving clocks tick slower". Alice, being at rest, thinks her proper time $\Delta t_A$ is less than $\Delta t_B'$. For example, Alice thinks she has measured one hour and Bob measured two hours, right? If I were Alice, I would conclude that Bob's moving clock is running faster, rather than slower. Answer: Let's say Alice measures two events separated in time by $\Delta t_A$ according to her own watch, and she measures those events in the same place, so she has measured her own proper time. Technically, the proper time is only defined along the worldline of an observer, so for the time between two events to be Alice's proper time it is necessary not only that they be at the same place, but also that place must be Alice's position. (This is a nit-picking requirement that is not important in SR but becomes important in GR.) So I will assume that the two events are indeed on Alice's worldline, so that it is indeed her proper time. Is this the proper time elapsed between the two events as measured by another observer Bob who is measuring time with his own clock at the same place? I mean, what we compute by using LTs, is Bob's proper time or is it something different?
It cannot be Bob's proper time, because Bob is moving relative to Alice, so at most one of those events could be on both Bob's and Alice's worldlines. So it is instead Bob's coordinate time, meaning the time as shown on a lattice of clocks at rest relative to Bob and synchronized using the Einstein synchronization convention. I don't understand why people usually state that "moving clocks tick slower". It is true that this phrasing is a bit confusing. Since whether a clock is moving or not depends on the reference frame, and since no frame is specified, it is ambiguous. What is unambiguous is the following: $\Delta t_A$ is measured by a single clock which was present at both events, but $\Delta t'_B$ is measured by a pair of synchronized clocks, each of which was present at only one event. The time difference for the single clock is always less than the time difference for the pair of synchronized clocks. "Moving clocks tick slower" means that whenever you are comparing a single clock to a lattice of synchronized clocks (in the lattice clocks' frame), then the single clock will be slower whenever it is moving (in the lattice frame). For example, Alice thinks she has measured one hour and Bob measured two hours, right? If I were Alice, I would conclude that Bob's moving clock is running faster, rather than slower. But in that comparison Alice is not looking at a single Bob clock. Any one Bob clock will run slow compared to Alice's lattice of synchronized clocks. But Bob's clocks are not synchronized in Alice's frame, due to the relativity of simultaneity. So subtraction of times on different Bob clocks is essentially meaningless for Alice.
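The distinction can be checked numerically with the Lorentz transformation (my own illustrative numbers, working in units where $c = 1$): two ticks of Alice's clock at $x = 0$ transform to Bob-frame coordinates whose time difference is $\gamma \Delta t_A$, but whose spatial positions differ, which is exactly why that interval is coordinate time read off two different Bob clocks, not one clock's proper time.

```python
import math

c = 1.0   # work in units where c = 1
v = 0.8   # Bob's speed relative to Alice
gamma = 1 / math.sqrt(1 - v**2 / c**2)  # = 5/3 here

def lorentz(t, x):
    """Coordinates of an event in Bob's frame (standard configuration)."""
    t_p = gamma * (t - v * x / c**2)
    x_p = gamma * (x - v * t)
    return t_p, x_p

dt_A = 1.0
t1, x1 = lorentz(0.0, 0.0)   # first tick of Alice's clock
t2, x2 = lorentz(dt_A, 0.0)  # second tick, same place for Alice

dt_B = t2 - t1
print(dt_B, gamma * dt_A)    # equal: Bob's *coordinate* time is gamma * dt_A
print(x2 - x1)               # nonzero: different places in Bob's frame
```

The nonzero spatial separation in Bob's frame is the relativity-of-simultaneity ingredient that makes naive clock comparisons between the two frames misleading.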
{ "domain": "physics.stackexchange", "id": 90864, "tags": "special-relativity, time-dilation" }
Denoise an image under extreme time pressure
Question: I'm working on a real-time, embedded system image processing application for my group engineering capstone in undergrad. I'm receiving data at 60FPS, and have to isolate and detect the location of a flying object in each frame, if it exists, before the next frame. This gives me about 15ms to perform the entire image processing algorithm. One important step in the process is denoising the image. The input to the denoising function is an \$MxN\$ image, obtained by background subtraction/frame differencing. Each pixel is represented by a single bit. (Basically we take the frame at time \$t+1\$, subtract it from the frame at time \$t\$, and if the absolute difference is above some threshold, we set the bit equal to 1.) This means that we store the entire image in \$\frac{M*N}{8}\$ bytes. Owing to a quirk of how the background subtraction algorithm was implemented, the LSB of each byte contains the earliest pixel sent by the camera, and the MSB contains the latest pixel sent by the camera. So our input data is ordered something like this in our array: 7 6 5 4 3 2 1 0 | 15 14 13 12 11 10 9 8 | 23 22 21 20 19 18 17 16 | 31 ... My denoising function performs two operations: Sets a pixel to 0 if there are less than 3 pixels next to it with a 1. Flips the orientation of data to simplify further processing, so that it looks like 0 1 2 3 4 5 6 7 | 8 9 ... The Problem It's too slow, by an order of magnitude. This runs in about 109ms on my hardware for a given image (320x240 in my case). It should be running around 10-12ms. Is the slowness I'm experiencing due to the nature of the job I'm trying to do, or to my implementation? How can I speed it up? 
// bitBuffer is a full image void Denoise(uint8_t* unsafe bitBuffer) { for (int i = 2; i < IMG_HEIGHT; i++) { DenoiseRow( &bitBuffer[(i-2)*IMG_WIDTH], &bitBuffer[(i-1)*IMG_WIDTH], &bitBuffer[i*IMG_WIDTH]); } } // you're never going to get the top, bottom rows in current void DenoiseRow( uint8_t* unsafe top, uint8_t* unsafe cur, uint8_t* unsafe bot) { // deal with leftmost byte in row. cur[0] = DenoiseAndFlipByte(top[0], 0, cur[0], cur[1], bot[0]); for (int byte = 1; byte < IMG_WIDTH-1; byte++) { cur[byte] = DenoiseAndFlipByte(top[byte], cur[byte-1], cur[byte], cur[byte+1], bot[byte]); } // deal with rightmost byte in row. cur[IMG_WIDTH-1] = DenoiseAndFlipByte(top[IMG_WIDTH-1], cur[IMG_WIDTH-2], cur[IMG_WIDTH-1], 0, bot[IMG_WIDTH-1]); } uint8_t DenoiseAndFlipByte( uint8_t top, uint8_t left, uint8_t cur, uint8_t right, uint8_t bot) { // bits uint8_t topBit, botBit; uint8_t leftBit, curBit, rightBit; // final byte to save back uint8_t toSaveByte = 0; // number of white pixels around current uint8_t count = 0; // deal with the first bit. topBit = top & 0x1; top = top >> 1; botBit = bot & 0x1; bot = bot >> 1; // Once we arrive here, the leftByte has already been flipped. So now we have this orientation of bytes left/current/right: 0 1 2 3 4 5 6 7 ||| 15 14 13 12 11 10 9 8 | 23 22 ... 
// Therefore, this next command gets bit 7 (the LSB in "left") rightBit = left & 0x1; curBit = cur & 0x1; cur = cur >> 1; leftBit = cur & 0x1; cur = cur >> 1; count = topBit + botBit + leftBit + rightBit; count = (count > 2); toSaveByte |= count << 7; // deal with middle bytes for (int i = 1; i < 7; i++) { topBit = top & 0x1; top = top >> 1; botBit = bot & 0x1; bot = bot >> 1; rightBit = curBit; curBit = leftBit; leftBit = cur & 0x1; cur = cur >> 1; count = topBit + botBit + leftBit + rightBit; count = (count > 2); toSaveByte |= count << (7 - i); } // deal with the last bit topBit = top & 0x1; botBit = bot & 0x1; rightBit = curBit; curBit = leftBit; // counterintuitive, but this is the orientation of bytes left/cur/right: ... 6 7 | 15 14 13 12 11 10 9 8 | 23 22 21 20 19 18 17 16 // so this next command gets bit 16. leftBit = right & 0x1; count = topBit + botBit + leftBit + rightBit; count = (count > 2); toSaveByte |= count; return toSaveByte; } Answer: Program doesn't produce correct result At first I was going to write something about how to speed up your program, but as I examined it more closely, I found that it wasn't producing the correct result. I think you first need to address these issues (or clarify that they are not issues) before I can talk about how to fix your speed issues. Here are the problems I found: Top and bottom rows unflipped Your function does two things: denoise and flip the bit orientations. But you never flip the bit orientations of the top and bottom row, so your output image will have a mix of bit orientations. Incorrect handling of upper row bits Since your algorithm works from top to bottom and left to right, when you get to a particular pixel, both the upper and the left bytes will already be flipped. You correctly take care of the left side by doing left & 1 instead of left & 0x80 (as mentioned in your comments). However, you didn't do the same for the upper row bits. 
You handle the upper and lower rows the same way, when you should be dealing with the upper row in a reversed fashion since they have already been flipped. More fundamental issue The previous issue points to a more fundamental one. Your algorithm uses "new bits" from the top and left and "old bits" from the bottom and right to compute what happens to the current pixel. I think you should only use "old bits" from the original image to compute the results for each pixel. In other words, I think you should write your results to a copy of the original image instead of doing the work in-place. If you do the denoising in-place, you will cause an effect to occur where the upper left pixel in the image will have an effect on what happens to the lower right pixel, because each pixel affects its right and bottom neighbors, and then those pixels affect their right and bottom neighbors, etc. There could be cases where you could change a single pixel in the upper left corner and it could cause the output to look entirely different due to a cascading chain of effects. From your problem specification, it doesn't seem that this cascading effect is intended. However, this effect might not matter to you. I don't really know. Speed optimization / followup question I have an idea in mind to speed up your program by using a precomputed lookup table to do the denoising. However, without knowing how you want to handle the previously mentioned issues, I can't really show you an example of what it would look like. If you address those issues and post a followup question, I would be able to give you an answer there.
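The cascading effect of an in-place pass is easy to demonstrate on a toy grid. The sketch below is Python rather than C, uses made-up data, and implements the rule as stated in the question (zero any pixel with fewer than 3 set 4-neighbours) rather than the exact bit-level code; it runs one pass reading from a frozen copy and one pass reading from the image as it is modified, and the two disagree:

```python
# Toy demonstration of the in-place cascading effect described above.

def denoise_once(img, in_place):
    """One denoising pass: zero each pixel with fewer than 3 set 4-neighbours.
    Reads either from the array being modified (in_place=True) or from a
    frozen copy of the original (in_place=False)."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    src = out if in_place else [row[:] for row in img]
    for i in range(h):
        for j in range(w):
            nbrs = sum(src[i + di][j + dj]
                       for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1))
                       if 0 <= i + di < h and 0 <= j + dj < w)
            if nbrs < 3:
                out[i][j] = 0
    return out

img = [[1, 1, 0],
       [1, 1, 1],
       [0, 1, 0]]

buffered = denoise_once(img, in_place=False)  # centre pixel survives (4 neighbours)
cascaded = denoise_once(img, in_place=True)   # upper/left zeros cascade; centre dies
print(buffered[1][1], cascaded[1][1])  # 1 0
```

This is only meant to show that the in-place and buffered variants are genuinely different algorithms, which is why the choice matters before any speed optimization.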
{ "domain": "codereview.stackexchange", "id": 24600, "tags": "c, time-limit-exceeded, image, homework, signal-processing" }
Is the six-layer cortex model of the mammalian cortex still the most accepted model?
Question: I've been reading a bit about the different layers of the cerebral cortex and it's clear that certainly not every region of the cortex has the same number of layers. Thus, the idea that every region has six layers is clearly false. And I think this fact is definitely well known. Do neuroscientists still view the six-layer model as an accurate model? If so, what aspects of it have changed over time that now incorporate the fact that not every region has six layers? Aside: Let me clarify my question. If not every region has 6 layers, why don't we just say there is a layered cortex and not specify a number? But if we decide to do this, what distinguishes the mammalian "6-layered" cortex from non-mammalian species that have a so-called "3-layer" cortex? See The Microcircuit Concept Applied to Cortical Evolution: from Three-Layer to Six-Layer Cortex by Gordon Shepherd for more. Answer: Short answer The layers in the cortex are histologically and functionally defined. Both 3- and 6-layered cortices are found in the human brain. Hence, the different cortical layers are not models, but classifications based on empirical observations. Background The different cortical layers have been defined by histological staining and microscopy. The layered organization of the mammalian cortex is typically explained in textbooks by using the neocortex as an example, which has 6 layers (Fig. 1): Fig. 1. Layered cortex. Source: What-When-How - Neuroscience. As one can see in this picture, layer VI can be divided histologically into 2 sub-layers, namely VIa, containing mainly pyramidal cells, and VIb with mainly horizontal cells (Prieto & Weiner, 1999). Hence, a 7-layered cortex could be argued for as well - it is all kind of subjective. The functions of the 6 layers are illustrated in Fig. 2. Fig. 2. Functions of the 6 layers. Source: Free-Stock-Illustration.
However, the primary olfactory human cortex (the paleocortex) contains only 3 histological layers, as opposed to the 6 identified layers in the neocortex (Fig. 3). The primary olfactory cortex is part of the allocortex in man. Fig. 3. Neocortex versus paleocortex. Source: Slideshare. The primary olfactory cortex receives direct sensory input from mitral cells in the olfactory bulb in the outermost layer Ia. Layers Ib, II, and III receive input via local and long-range intracortical connections. Sensory and intracortical inputs converge in layers II and III. The three-layered paleocortex is different from the six-layered neocortex in sensory areas. In the neocortex, sensory and intracortical microcircuits are distributed to two different layers. Sensory inputs from the thalamus target layer 4. Layer 4 subsequently projects onto layer II/III, which distributes and receives intracortical associative fibers. Note that the olfactory system is exceptional in that it does not relay its information via the thalamus, explaining the absence of layer IV (Wiegand et al., 2011). Reptiles and birds typically have a 3-layered cortex (Naumann et al., 2015). References - Naumann et al., Current Biology; 25(8): R317–R21 - Prieto & Weiner, J Comparative Neurology (1999); 404:332–58 - Wiegand et al., J Neurosci (2011); 31(34):12149–58
{ "domain": "biology.stackexchange", "id": 4356, "tags": "neuroscience, brain, neuroanatomy" }
Are sinusoidal travelling waves also normal modes of vibration?
Question: The definition of normal modes says that if all the independent parts of a system vibrate at the same frequency and their amplitudes preserve a fixed ratio, then that motion is a normal mode of the system. In a sinusoidal travelling wave the different parts also move at the same frequency and preserve a fixed amplitude ratio, so shouldn't such waves be normal modes as well? So are sinusoidal travelling waves normal modes? Answer: Yes, travelling wave systems have normal modes. In fact, the physics and mathematics of two coupled oscillators are strikingly similar to those of coupled waveguides. I'm most familiar with electrical oscillators, but everything written below applies to any coupled harmonic oscillators. Coupled oscillators Consider two electrical oscillators "$a$" and "$b$". Oscillator $a$ has capacitance $C_a$ and inductance $L_a$, and similarly for oscillator $b$. The oscillators are coupled through a capacitance $C_g$ and mutual inductance $L_g$. Each oscillator has a magnetic flux $\Phi$ and an electric charge $Q$.$^{[a]}$ We could study this system using Kirchhoff's laws, but it's a lot easier to convert everything to the Hamiltonian formalism. The Hamiltonian for the system is $$H = \frac{\Phi_a^2}{2 L_a'} + \frac{\Phi_b^2}{2 L_b'} + \frac{Q_a^2}{2 C_a'} + \frac{Q_b^2}{2 C_b'} + \frac{Q_a Q_b}{C_g'} - \frac{\Phi_a \Phi_b}{L_g'} $$ where all those primes on the various constants have to do with the fact that the coupling renormalizes each oscillator's capacitance and inductance. Now we introduce the variables \begin{align} a &= \frac{1}{\sqrt{2}}\left( \frac{\Phi_a}{\sqrt{Z_a'}} + i \sqrt{Z_a'} Q_a \right) \\ b &= \frac{1}{\sqrt{2}}\left( \frac{\Phi_b}{\sqrt{Z_b'}} + i \sqrt{Z_b'} Q_b \right) \end{align} where the impedance $Z$ is defined by $Z \equiv \sqrt{L/C}$.
With these variables, the Hamiltonian becomes \begin{align} H &= \omega_a' a^* a + \omega_b' b^* b \\ &-\left( ab + a^* b^* \right) \underbrace{\frac{1}{2} \left( \frac{1}{C_g' \sqrt{Z_a' Z_b'}} + \frac{\sqrt{Z_a' Z_b'}}{L_g'} \right)}_\chi \\ &+\left( a b^* + a^* b \right) \underbrace{\frac{1}{2} \left( \frac{1}{C_g' \sqrt{Z_a' Z_b'}} - \frac{\sqrt{Z_a' Z_b'}}{L_g'} \right)}_g \, . \end{align} Let's remember for a moment what the Hamiltonian means: it provides a way to get the time evolution of the system. In the present case the time dependences come from $$ \dot a(t) = -i \frac{\partial H}{\partial a^*} \qquad \dot b(t) = -i \frac{\partial H}{\partial b^*} \, . $$ Using these equations, we can write a matrix equation for the whole system: $$ \frac{d}{dt} \left( \begin{array}{c} a \\ b \\ a^* \\ b^* \end{array} \right) = -i \left( \begin{array}{cc} \omega_a' & g & 0 & - \chi \\ g & \omega_b' & -\chi & 0 \\ 0 & \chi & -\omega_a' & -g \\ \chi & 0 & -g & -\omega_b' \end{array} \right) \left( \begin{array}{c} a \\ b \\ a^* \\ b^* \end{array} \right) \, . $$ Ok now here's the point: the normal modes and frequencies of the system are precisely the eigenvectors and eigenvalues of that matrix. If the coupling is turned off (i.e. $C_g=0$ and $L_g=0$), then $g = \chi = 0$ and the eigenvalues are $\pm \omega_a'$ and $\pm \omega_b'$, which makes complete sense. Coupled waveguides Alright now suppose we have two waveguides "$a$" and "$b$" that are coupled to each other through some mutual capacitance and inductance per length of the waveguide. Denote the rightward and leftward moving amplitudes in waveguide $a$ as $a_\pm$, and similarly for waveguide $b$. 
If you work it all out, you find that $$ \frac{d}{dx} \left( \begin{array}{c} a_+ \\ b_- \\ a_- \\ b_+ \end{array} \right) = i \left( \begin{array}{cc} k_a' & -g & 0 & \chi \\ g & -k_b' & -\chi & 0 \\ 0 & -\chi & -k_a' & g \\ \chi & 0 & -g & k_b' \end{array} \right) \left( \begin{array}{c} a_+ \\ b_- \\ a_- \\ b_+ \end{array} \right) $$ where the $k$'s are the wave numbers associated with each waveguide (and I should say that the meanings of $g$ and $\chi$ are slightly different than they were for the coupled oscillator case). Comparison between the problems Thus the coupled waveguide problem has the same form as the coupled oscillator problem (the signs are different, but that's just because of how we ordered the variables). In both cases we have a system of first-order differential equations, and in both cases the eigenvectors and eigenvalues of the matrix in the differential equation tell us what the normal modes and frequencies (for the oscillator case) or wave numbers (for the waveguide case) of the system are. $[a]$: There's a direct correspondence between electrical and mechanical oscillators. The electrical flux and charge correspond to position and momentum. Capacitance corresponds to mass, and inductance corresponds to one over the spring constant. Where we use Kirchhoff's laws for the electrical case, we use Newton's law ($F = ma$) in the mechanical case.
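The eigenvalue statement above — that the normal modes and frequencies are the eigenvectors and eigenvalues of the coefficient matrix — can be checked numerically for the coupled-oscillator case. A small numpy sketch (the values of $\omega_a'$, $\omega_b'$, $g$, $\chi$ are arbitrary illustrative choices):

```python
import numpy as np

def mode_matrix(wa, wb, g, chi):
    """The 4x4 matrix M from d/dt (a, b, a*, b*) = -i M (a, b, a*, b*)."""
    return np.array([[wa,  g,    0.0, -chi],
                     [g,   wb,  -chi,  0.0],
                     [0.0, chi, -wa,  -g],
                     [chi, 0.0, -g,   -wb]])

wa, wb = 1.0, 1.5   # arbitrary bare frequencies

# Coupling off: eigenvalues reduce to exactly +-wa and +-wb.
ev_off = np.sort(np.linalg.eigvals(mode_matrix(wa, wb, 0.0, 0.0)).real)
print(ev_off)       # bare frequencies +-1.0 and +-1.5

# Weak coupling on: the normal-mode frequencies shift away from the bare ones,
# but the spectrum keeps zero trace.
ev_on = np.sort(np.linalg.eigvals(mode_matrix(wa, wb, 0.2, 0.05)).real)
print(ev_on)
```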
{ "domain": "physics.stackexchange", "id": 56983, "tags": "newtonian-mechanics, waves, vibrations" }
Adding values from DictReader to empty dictionary
Question: I made a rock climbing simulator game for fun. Below is a function from the program, which works. But I can't help but think it could be refactored or more pythonic. Namely, I open the data with csv.DictReader and read through the data, adding it to a blank dictionary called route_info. Is this step necessary? The function has a with-open block, which opens a .csv file, but a link to the data is here. import csv import time routes_climbed = [] pitches_climbed = 0 def choose_route(): """After deciding about the weather, you come here and choose a route to climb. Reads list of routes from .csv file with names, grades, and pitches.""" global routes_climbed, pitches_climbed route_info = {} with open('seneca_routes.csv', 'r', encoding='utf-8-sig') as f: #weird encoding from excel? reader = csv.DictReader(f, ('route_name', 'route_grade', 'pitches')) print("Choose a route to climb from the following list of classics.") print() for row in reader: print(row['route_name'],row['route_grade']) route_info[row['route_name']] = row['pitches'] route_choice = input("Which route do you want to climb? : ") if route_choice in route_info.keys(): routes_climbed.append(route_choice) pitches_climbed += int(route_info[route_choice]) climb_route(route_choice) else: print("Incorrect route name") time.sleep(3) choose_route() choose_route() Answer: Some suggestions: Pass the code through at least one linter such as pycodestyle or flake8 to get more idiomatic Python. Don't use global. Returning two values, while ugly, is preferable to having global state. You can add \n to the end of a string to print a newline, avoiding empty print()s. Use argparse rather than input() to make the script … well, scriptable. Each route should probably be an object rather than a disparate set of just stuff, even though these objects will have no methods for now. Using a main method as the entry point for the functionality would make the script usable by other scripts.
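A rough sketch of what those suggestions could look like together (the function names, the Route class, and the demo data are my own inventions for illustration, not from the original program):

```python
import csv
import io
from dataclasses import dataclass

@dataclass
class Route:
    name: str
    grade: str
    pitches: int

def load_routes(lines):
    """Build Route objects keyed by name from CSV lines.
    `lines` is any iterable of lines (an open file, a StringIO, ...);
    with a real file you would use open(path, encoding='utf-8-sig')."""
    reader = csv.DictReader(
        lines, fieldnames=('route_name', 'route_grade', 'pitches'))
    return {row['route_name']: Route(row['route_name'],
                                     row['route_grade'],
                                     int(row['pitches']))
            for row in reader}

def choose_route(routes, name):
    """Return the chosen Route, or None if it is unknown.
    No globals: the caller keeps its own routes_climbed / pitches_climbed."""
    return routes.get(name)

# tiny demo with invented data
routes = load_routes(io.StringIO('Conn East,5.4,3\nGunsight,5.3,2\n'))
print(choose_route(routes, 'Conn East').pitches)  # 3
```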
{ "domain": "codereview.stackexchange", "id": 32418, "tags": "python, python-3.x" }
Is there a possibility to convert the following problem into divide and conquer
Question: $A$ is an $m \times n$ matrix and $B$ is an $n \times n$ matrix. I want to return a matrix $C$ of size $m \times n$ such that: $C_{ij} = \sum_{k=1}^{n} \max(0, a_{ij} - b_{jk})$ In pseudocode it could be like below for i = 1 to m: for j = 1 to n: C[i,j] = 0 for k = 1 to n: C[i,j] += max(0, a[i,j] - b[j,k]) This runs in $O(mn^2)$, but it is possible to lower that. Answer: We can note that to compute $C_{ij}$, it is enough to know the sum and the count of $\{b_{jk}\}_{k=1}^n$ such that $b_{jk} < a_{ij}$. This can be done after a simple preprocessing, and the final time complexity will be $O(mn\log{n})$.
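A sketch of the preprocessing the answer alludes to (plain Python, illustrative data): sort each row of $B$, store its prefix sums, and then each entry of $C$ costs one binary search, since $\sum_k \max(0, a - b_{jk}) = \mathrm{cnt}\cdot a - (\text{sum of the } \mathrm{cnt} \text{ entries below } a)$.

```python
from bisect import bisect_left

def fast_c(A, B):
    """C[i][j] = sum_k max(0, A[i][j] - B[j][k]) in O(m n log n).

    Preprocess: sort row j of B and store its prefix sums.  Then
    sum_k max(0, a - B[j][k]) = cnt * a - prefix[j][cnt], where cnt is
    the number of entries of row j strictly below a (entries >= a
    contribute 0)."""
    rows = [sorted(row) for row in B]
    prefix = []
    for r in rows:
        p = [0]
        for x in r:
            p.append(p[-1] + x)
        prefix.append(p)
    C = []
    for arow in A:
        crow = []
        for j, a in enumerate(arow):
            cnt = bisect_left(rows[j], a)   # count of entries b with b < a
            crow.append(cnt * a - prefix[j][cnt])
        C.append(crow)
    return C

A = [[3, 1], [0, 5]]
B = [[2, 4], [1, 1]]
print(fast_c(A, B))  # [[1, 0], [0, 8]]
```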
{ "domain": "cs.stackexchange", "id": 20471, "tags": "algorithms, time-complexity" }
Quantum Mechanics - Finding momentum probability density
Question: I got a bit stuck on 2(iii). This is supposed to be an easy question, but I don't know how you get the square term. I thought you just do the Fourier transform, but then I got some exponential out of it and I don't know what to do. Can anyone suggest or shed some light on the problem? THANKS If you just plug infinity into the integral, how can you avoid the problem of getting zero? You may use the integral provided. Answer: Recall that $\left|\psi\left(x\right)\right\rangle =\int\left|\varphi_{p}\left(x\right)\right\rangle \left\langle \varphi_{p}\left(x\right)|\psi\left(x\right)\right\rangle dp$ (for continuous $p$), where $\left\langle \varphi_{p}\left(x\right)|\psi\left(x\right)\right\rangle=\Phi\left(p\right)$, which is the amplitude of the momentum measurement $p$. Then $\Phi\left(p\right)=\left\langle \varphi_{p}\left(x\right)|\psi\left(x\right)\right\rangle =\intop_{-\infty}^{\infty}\bar{\varphi_{p}}\left(x\right)\psi\left(x\right)dx =\frac{1}{\sqrt{2\pi a\bar{h}}}\int_{-\infty}^{\infty}e^{\frac{-\left|x\right|}{a}+ix\left(k-\frac{P}{\bar{h}}\right)}dx$ By solving the integral, you get $\frac{1}{\sqrt{2\pi a\bar{h}}}\left(-\frac{1}{\frac{-1}{a}+i\left(k-\frac{P}{\bar{h}}\right)}+\frac{1}{\frac{1}{a}+i\left(k-\frac{P}{\bar{h}}\right)}\right)$
{ "domain": "physics.stackexchange", "id": 13637, "tags": "quantum-mechanics, homework-and-exercises, momentum, fourier-transform" }
The right way to hang a man
Question: Problem: I have seen a few questions around hangman. Usually this is done in a very hackish way, which usually can not be generalized any further. My thought or question is about the creation of the gown, which is a central part in any hangman game. I want to create the ASCII image below _____ | | O | /|\ | / \ | | ---------- Where one "limb" of the stick figure should appear each time the function is prompted. I also wanted to be able to specify both the height and width of the gown. However I was not able to scale the stick figure relative to the gown size. The code is provided below and I have two simple questions about it =) Question: Is there a cleaner way to hang the man? I feel my method is very barbaric (I used some black voodo graduate mathematics to get it to look correct). Is there a way to make the stick figure scale with the size of the gown? (essentially providing more guesses). Code: from math import ceil def create_gown(width, height): gown = [] gown.append('{:>{}s}'.format('_'*int(width/2), 10)) gown.append('{:>{}s} {:>{}s}'.format( '|', 11 - int(width/2), '|', int(width/2)-2)) for i in range(height-3): gown.append('{:^{}s}'.format('|', 20)) gown.append('{:^{}s}'.format('-'*width, 20)) return gown def wrong_answer(gown, attempt=0): height, width = len(gown), len(gown[-1].strip()) offset1 = int((-width+23)*0.5)+1 offset3 = int(ceil(0.5*(width-7))) if attempt == 0: return gown elif attempt == 1: new_line = '{:>{}s} {:>{}s}'.format('O', offset1-1, '|', offset3+1) row = 2 elif attempt == 2: new_line = '{:>{}s} {:>{}s}'.format('|', offset1-1, '|', offset3+1) row = 3 elif attempt == 3: new_line = '{:>{}s} {:>{}s}'.format('/| ', offset1, '|', offset3) row = 3 elif attempt == 4: new_line = '{:>{}s} {:>{}s}'.format('/|\\', offset1, '|', offset3) row = 3 elif attempt == 5: new_line = '{:>{}s} {:>{}s}'.format('/ ', offset1, '|', offset3) row = 4 elif attempt == 6: new_line = '{:>{}s} {:>{}s}'.format('/ \\', offset1, '|', offset3) row = 4 else: raise 
Exception("Ops! The number of attempts must be an integer from 0 to 6.") gown[row] = new_line return gown def print_gown(gown): for line in gown: print line if __name__ == '__main__': gown = create_gown(10, 7) print len(gown) for i in range(7): gown = wrong_answer(gown, i) print_gown(gown) Answer: Repetitions All your elif share the same structure. You just need to ensure that the inserted pattern is always 3 wide and you can drop your offsets adjustments. The logical next step is to use a dictionary to store the pattern and the row for each attempt: _GOWN_MODIFIER = { 1: (' O ', 2), 2: (' | ', 3), 3: ('/| ', 3), 4: ('/|\\', 3), 5: ('/ ', 4), 6: ('/ \\', 4), } def wrong_answer(gown, attempt=0): height, width = len(gown), len(gown[-1].strip()) offset1 = int((-width+23)*0.5)+1 offset3 = int(ceil(0.5*(width-7))) if not attempt: return gown try: pattern, row = _GOWN_MODIFIER[attempt] except KeyError: raise Exception("Ops! The number of attempts must be an integer from 0 to 6.") else: gown[row] = '{:>{}s} {:>{}s}'.format(pattern, offset1, '|', offset3) return gown Calculus You do not need ceil nor int to perform an integer division by 2. In Python 2, / is already the operator you need. You can also use // which is also the same operation. The advantage of the latter over the former is that the behaviour is the same in Python 3, whereas / will perform a decimal division in Python 3. _GOWN_MODIFIER = { 1: (' O ', 2), 2: (' | ', 3), 3: ('/| ', 3), 4: ('/|\\', 3), 5: ('/ ', 4), 6: ('/ \\', 4), } def wrong_answer(gown, attempt=0): if not attempt: return gown height, width = len(gown), len(gown[-1].strip()) offset1 = (23 - width) // 2 + 1 offset3 = (width - 6) // 2 try: pattern, row = _GOWN_MODIFIER[attempt] except KeyError: raise Exception("Ops! 
The number of attempts must be an integer from 0 to 6.") else: gown[row] = '{:>{}s} {:>{}s}'.format(pattern, offset1, '|', offset3) return gown def create_gown(width, height): half_width = width // 2 gown = [] gown.append('{:>{}s}'.format('_'*half_width, 10)) gown.append('{:>{}s} {:>{}s}'.format( '|', 11 - half_width, '|', half_width-2)) for i in range(height-3): gown.append('{:^{}s}'.format('|', 20)) gown.append('{:^{}s}'.format('-'*width, 20)) return gown Creation vs modifications You're aware that you can create sequences of repeating elements using seq(element) * count. You can use that to create your gown variable by repeating the pole and then modifying the top two rows and the last one; it should be more memory friendly as the space needed to store the list will be allocated in one go: def create_gown(width, height): half_width = width // 2 gown = ['{:^{}s}'.format('|', 20)] * height gown[0] = '{:>{}s}'.format('_' * half_width, 10) gown[1] = '{:>{}s} {:>{}s}'.format( '|', 11 - half_width, '|', half_width-2) gown[-1] = '{:^{}s}'.format('-' * width, 20) return gown On modifying in place and returning a value and a bit about printing too Your wrong_answer both modifies the gown in place and return the modified value. This is unnecessary as the caller still should hold a reference to the value (which was modified). When calling the code, you can have: gown2 = wrong_answer(gown, attempt=3) but >>> gown2 is gown True So there is really no point in duplicating references to the same object. Instead, what you can do is preprocess the printing job, so you don't really need the print_gown function: def wrong_answer(gown, attempt=0): height, width = len(gown), len(gown[-1].strip()) offset1 = (23 - width) // 2 + 1 offset3 = (width - 6) // 2 if attempt: try: pattern, row = _GOWN_MODIFIER[attempt] except KeyError: raise Exception("Ops! 
The number of attempts must be an integer from 0 to 6.") else: gown[row] = '{:>{}s} {:>{}s}'.format(pattern, offset1, '|', offset3) return '\n'.join(gown) Usage: print(wrong_answer(gown, attempt=3)) Exceptions Raising a generic purpose Exception is bad practice, as it makes the except clauses trying to handle your code being able to catch more than they should. You should at least use a more generic exception (such as ValueError) or define your own: class AlreadyHangedError(ValueError): pass def wrong_answer(gown, attempt): ... raise AlreadyHangedError('...') ... However, there is something odd in the way you use your wrong_answer: the caller is expected to call this function using increasing attempts. This is the job for a generator. By turning wrong_answer into a generator, you can call it to create a generator instance and then call next on this instance (or let a for loop do it for you) to get the next gown to print: def wrong_answer(width, height): gown = create_gown(width, height) offset1 = (23 - width) // 2 + 1 offset3 = (width - 6) // 2 yield '\n'.join(gown) for attempt in range(1, 7): pattern, row = _GOWN_MODIFIER[attempt] gown[row] = '{:>{}s} {:>{}s}'.format(pattern, offset1, '|', offset3) yield '\n'.join(gown) Usage: for gown in wrong_answer(10, 7): print(gown) or, in a more real-case scenario: def hangman_game(): for gown in wrong_answer(10, 7): while True: status = manage_user_input() if status == 'FAILED': # Whatever break if status == 'COMPLETED': # Whatever return print gown print 'You lose' Magic numbers create_gown seems to center the gown in 20-sized strings, but what if width is greater than that? At first, you should define this value and give it a meaningful name, and same for all values that derive from it; and then, you might want to take max(20, width) as a base for your calculus.
{ "domain": "codereview.stackexchange", "id": 20340, "tags": "python, python-2.x, hangman, ascii-art" }
Spring Loaded Pin with Chamfered Pin Finish?
Question: I would like to try out a component that operates like a combination of the two pictures provided - a spring loaded pull pin, with a pin finish like a tubular latch (chamfered/filleted finish). Do these exist? If somebody can tell me their correct name or, better still, provide me with a link to a manufacturer's site, that would be great. Thank you Answer: Please note that (since there is a huge number of manufacturers) it is not our job to look for your part. But as a starting point: here are some sites where you can look for an index plunger. There is a ton of options, but at first glance I couldn't make out the same type you were looking for. It's likely that you have to resort to GisMofx's hack and grind/mill the bevel on the standard part. Norelem Ganter Halder Carr-Lane I'll extend this list to more sources when people comment.
{ "domain": "engineering.stackexchange", "id": 2948, "tags": "springs" }
Lorentz transformation of a frequency modulated signal
Question: Let's consider the following problem: A spacecraft starts at time $t_0 = 0$ with speed $v > 0$ and moves along the $x$-axis. When the distance between the spacecraft and earth equals $R$ (in the reference frame of the earth), i.e. at $t=\frac{R}{v}$, a frequency modulated signal $$ u \colon \left[ \frac{R}{v}, \frac{R}{v} + \tau \right] \rightarrow \mathbb{R} \\ u(t) := u_0 \cos\left( F\left(t - \frac{R}{v}\right) \right) $$ of duration $\tau$ is sent from earth to the spacecraft. To compute the signal received from the spacecraft we have to consider the spacetime-signal $$ u(t, x) = u_0 \cos\left( F\left(t - \frac{R}{v} - \frac{x}{c}\right) \right) $$ and use the Lorentz transformation, i.e., $$ t^\prime = \gamma \left( t + \frac{vx}{c^2} \right)\\ x^\prime = \gamma \left( x + vt \right). $$ Hence, we get the transformation $$ \left(t - \frac{x}{c}\right) \longrightarrow \sqrt{\frac{c-v}{c+v}} \left(t - \frac{x}{c}\right), $$ and clearly see the Doppler shift. Let's come to the actual question: Assume that the spacecraft did not start at $t_0=0$ from earth. Instead, an observer from earth knows that the distance to the spacecraft at $t_0=0$ is $R$ and that it moves with $v\geqslant 0$. Question 1: What is the correct Lorentz transform in that case? For example, we could just shift the time, i.e., $$ t^\prime = \gamma \left( t + \frac{R}{v} + \frac{vx}{c^2} \right)\\ x^\prime = \gamma \left( x + R + vt \right), $$ but obviously the limiting case $v \to 0$ does not exist. Question 2: A spatial shift instead of a time shift, i.e., $$ t^\prime = \gamma \left( t + \frac{v(x+R)}{c^2} \right)\\ x^\prime = \gamma \left( x + R + vt \right), $$ is somewhat confusing for me. At least the limiting case $v \to 0$ exists in this case, but what is the inverse of this transformation if the spacecraft also knows that the distance to earth at $t_0 = 0$ is $R$ (in the reference frame of the earth).
Probably I have a wrong understanding about simultaneity and synchronization of both reference frames... Maybe someone can shed light on the darkness :) Answer: What is the correct Lorentz transform in that case? There is no need to change the Lorentz transform at all in that case. It is perfectly fine for a ship to be located somewhere other than the origin. In fact, even in the original scenario although the ship starts at the origin by the time it finishes the signal it is no longer at the origin. That is not important in either scenario. However, if you want to transform it then there is no problem doing a spatial and/or temporal translation. The order of operations matters, it is different to do a translation first followed by a boost vs a boost followed by a translation. Based on the description, I think that a translation followed by a boost makes more sense. The overall combined transform would be: $$ t'= \gamma \left( t+ \Delta t +\frac{v \ (x+\Delta x)}{c^2} \right)$$ $$x' = \gamma \left( x+ \Delta x + v \ (t+\Delta t) \right)$$ For your scenario as I understand it $\Delta x = R$ and $\Delta t = 0$ so the above simplifies to $$ t'= \gamma \left( t +\frac{v \ (x+R)}{c^2} \right)$$ $$x' = \gamma \left( x+ R + v \ t \right)$$ Note that this agrees with your second expression. Now, your first expression is a little odd. What you are doing is shifting in time, but you are shifting in time by the time that it takes for the ship to arrive at the origin traveling a distance of $R$ at a speed $v$. If $v=0$ then there is no such time. This has nothing to do with relativity, the same thing would happen in Newtonian physics. If $v=0$ then the ship stays at $R$ and never arrives at Earth. 
what is the inverse of this transformation The inverse transform is found simply by algebraically solving the forward transform listed above: $$t=\gamma \left( t' - \frac{v \ x'}{c^2} \right) - \Delta t$$ $$x = \gamma \left( x'- v \ t' \right) -\Delta x $$ Note that this is not simply the first transformation listed with the signs reversed. That is because the order of transformations matters. A translation followed by a boost is different from a boost followed by a translation. And the inverse of a translation followed by a boost is not a translation followed by a boost. In fact, the inverse of a translation followed by a boost is a boost followed by a translation. That is the form that we see in the final expression.
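A quick round-trip check of the forward transform and the inverse given above (a sketch in units with $c = 1$; the numbers, including $R = 2.5$, are arbitrary):

```python
def forward(t, x, v, dt, dx):
    """Translation by (dt, dx) followed by a boost with speed v (c = 1)."""
    g = 1.0 / (1.0 - v * v) ** 0.5
    return (g * (t + dt + v * (x + dx)),
            g * (x + dx + v * (t + dt)))

def inverse(tp, xp, v, dt, dx):
    """The inverse: a boost with -v first, then a translation by (-dt, -dx)."""
    g = 1.0 / (1.0 - v * v) ** 0.5
    return (g * (tp - v * xp) - dt,
            g * (xp - v * tp) - dx)

t, x = 0.8, -1.2
tp, xp = forward(t, x, v=0.6, dt=0.0, dx=2.5)
t2, x2 = inverse(tp, xp, v=0.6, dt=0.0, dx=2.5)
print(t2, x2)   # recovers (0.8, -1.2) up to rounding
```

Note that the inverse applies the boost first and the translation last, matching the point about the order of transformations.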
{ "domain": "physics.stackexchange", "id": 71738, "tags": "special-relativity, doppler-effect" }
Should I use multi-armed-bandits or RL for a financial time-series problem?
Question: If we take simple financial time-series data (stock/commodity/currency prices), State(t+1) does not depend on the action that we choose to take at State(t), as it does in a maze or chess problem. Simple example: as states we can have the sum of the daily returns of 5 different ETFs. Based on that, we want to take an action - either buy (go long) or sell (go short) - in another ETF. No matter what we choose, however, our action will not determine what the next state will be (we do not have any control over what the returns of those 5 ETFs will be tomorrow). In that case of simple financial time-series data, would a multi-armed-bandit approach be more suitable? Answer: Your agent's actions will (probably) not have much impact on the observed financial time series. However, they will make a large difference to other things - namely what stock your agent is holding and the account balance. If you are happy to ignore the agent's current portfolio and balance as not part of your problem, effectively treating these items as infinite sinks, then yes, a multi-armed bandit might be a reasonable solution. But then so might any other sequence-predicting algorithm, if what you are searching for is some kind of financial prediction of good times to buy or sell. If the portfolio and cash balance are an important part of your problem, you should add them to the state and use reinforcement learning. You might do this if your goal is to model a single investor playing the markets. Note that although it may be possible to use machine learning techniques to analyse markets, and base investments on the advice of a trained AI agent, it is a very risky venture. There are lots of ways you can fool yourself into believing too strongly in your solution, and you stand to lose significant amounts of money.
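If you do adopt the portfolio-free framing, the bandit part is cheap to prototype. Here is a hedged Python sketch of an epsilon-greedy two-armed bandit choosing between going long and going short; the arm win rates are invented toy numbers, and real market returns are nothing like this clean:

```python
import random

random.seed(0)

ARMS = ["long", "short"]
TRUE_WIN_RATE = {"long": 0.9, "short": 0.1}  # hypothetical, for illustration

counts = {arm: 0 for arm in ARMS}
values = {arm: 0.0 for arm in ARMS}  # running mean reward per arm

def choose(eps=0.1):
    """Explore with probability eps, otherwise exploit the best estimate."""
    if random.random() < eps:
        return random.choice(ARMS)
    return max(ARMS, key=lambda arm: values[arm])

def update(arm, reward):
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean

for _ in range(5000):
    arm = choose()
    reward = 1.0 if random.random() < TRUE_WIN_RATE[arm] else 0.0
    update(arm, reward)
```

After a few thousand rounds the agent has concentrated its pulls on the better arm, which is all a context-free bandit can offer: a ranking of actions, not a model of the market's dynamics.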
{ "domain": "ai.stackexchange", "id": 3632, "tags": "reinforcement-learning, comparison, time-series, multi-armed-bandits" }
Is filming 25 micrometers from opposite sides a possible QM event?
Question: I have a Gaussian process where the mean is 0.381 and the standard deviation is 0.524. The process is simply the difference of the diameters $d_1$ and $d_2$, where $d_1$ is measured from one side and $d_2$ from the other by CadCam. The size of the diameter is about 0.25 - 0.50 micrometers, so a QM event could be possible. However, neither I nor my colleagues can come up with an explanation, since we think the values should all be 0. There may be something quantum-mechanical happening at such small distances. A small computational or setup error would lead to a fixed offset, but what we see is a distribution. How can filming the disc (i.e. its diameter) from both sides yield a Gaussian distribution? Answer: You do not need to invoke QM at all. Measurements are often normally (Gaussian-like) distributed due to random errors that are unavoidable in each individual measurement. Actually, your result $d_1-d_2= 0.381 \pm 0.524$ is consistent with the difference being zero (because the error is larger than the obtained average value).
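The answer's point, that random per-measurement errors alone produce a Gaussian-distributed difference centred near zero, is easy to see in simulation. In this Python sketch the true diameter is identical from both sides; the noise level is an invented, illustrative number:

```python
import random
import statistics

random.seed(1)

TRUE_DIAMETER = 0.40  # same from both sides, arbitrary units
SIGMA = 0.37          # per-measurement noise, chosen for illustration

diffs = []
for _ in range(20000):
    d1 = random.gauss(TRUE_DIAMETER, SIGMA)  # measured from one side
    d2 = random.gauss(TRUE_DIAMETER, SIGMA)  # measured from the other side
    diffs.append(d1 - d2)

mean_diff = statistics.fmean(diffs)
sd_diff = statistics.pstdev(diffs)
# Difference of two independent N(mu, sigma) draws is N(0, sigma * sqrt(2)).
```

Nothing quantum is happening, yet the differences form a Gaussian around zero with width $\sigma\sqrt{2}$.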
{ "domain": "physics.stackexchange", "id": 26004, "tags": "quantum-mechanics, quantum-electrodynamics" }
Data-checking class supporting letters
Question: I'm starting to learn OOP with PHP, and all I've learned so far is just by searching and reading. So I have this need to check input data for certain things like min of chars, max of chars, spaced or not spaced, just letters or not. So far I've just created the alpha() method which is just for letters only. I'm pretty sure this is horrible coding and can and should be improved. I would love some feedback on how I can improve this class. I also have a function that strips tags when '<' and '>' becomes available on the strings, but it's not currently in the code. class dataValidator { public function alpha($data, $space = null, $minimum = null, $maximum = null, $extends = null) { $data = trim($data); if ( !empty($data) ) { $data = preg_replace("/\s{2,}/", " ", $data); if ( isset($space) && isset($minimum) && isset($maximum) && isset($extends) ) { if ( $minimum == 0 || $maximum < $minimum ) { return false; } if ( $space === true ) { if ( $extends === "EXT_PUNCTUATION" ) { $newData = html_entity_decode($data, ENT_HTML5 | ENT_QUOTES, "UTF-8"); $newData = preg_replace("/[^\p{L}\p{P}\ ]/u", "", $newData); $dataLen = strlen($newData); if ( $dataLen >= $minimum && $dataLen <= $maximum ) { $newData = htmlentities($newData, ENT_HTML5 | ENT_QUOTES, "UTF-8"); return $newData; } else { return false; } } else if ( $extends === "EXT_ANY" ) { $newData = html_entity_decode($data, ENT_HTML5 | ENT_QUOTES, "UTF-8"); $newData = preg_replace("/[^\p{L}\p{P}\p{S}\ ]/u", "", $newData); $dataLen = strlen($newData); if ( $dataLen >= $minimum && $dataLen <= $maximum ) { $newData = htmlentities($newData, ENT_HTML5 | ENT_QUOTES, "UTF-8"); return $newData; } else { return false; } } else { return false; } } else { $newData = html_entity_decode($data, ENT_HTML5 | ENT_QUOTES, "UTF-8"); $newData = preg_replace("/[^\p{L}]/u", "", $newData); $dataLen = strlen($newData); if ( $dataLen >= $minimum && $dataLen <= $maximum ) { $newData = htmlentities($newData, ENT_HTML5 | ENT_QUOTES, "UTF-8"); 
return $newData; } else { return false; } } } else if ( isset($space) && isset($minimum) && isset($maximum) ) { if ( $minimum == 0 || $maximum < $minimum ) { return false; } if ( $space === true ) { $newData = html_entity_decode($data, ENT_HTML5 | ENT_QUOTES, "UTF-8"); $newData = preg_replace("/[^\p{L}\ ]/u", "", $newData); $dataLen = strlen($newData); if ( $dataLen >= $minimum && $dataLen <= $maximum ) { $newData = htmlentities($newData, ENT_HTML5 | ENT_QUOTES, "UTF-8"); return $newData; } else { return false; } } else { $newData = html_entity_decode($data, ENT_HTML5 | ENT_QUOTES, "UTF-8"); $newData = preg_replace("/[^\p{L}]/u", "", $newData); $dataLen = strlen($newData); if ( $dataLen >= $minimum && $dataLen <= $maximum ) { $newData = htmlentities($newData, ENT_HTML5 | ENT_QUOTES, "UTF-8"); return $newData; } else { return false; } } } else if ( isset($space) && isset($minimum) ) { if ( $minimum == 0 ) { return false; } if ( $space === true ) { $newData = html_entity_decode($data, ENT_HTML5 | ENT_QUOTES, "UTF-8"); $newData = preg_replace("/[^\p{L}\ ]/u", "", $newData); if ( strlen($newData) >= $minimum ) { $newData = htmlentities($newData, ENT_HTML5 | ENT_QUOTES, "UTF-8"); return $newData; } else { return false; } } else { $newData = html_entity_decode($data, ENT_HTML5 | ENT_QUOTES, "UTF-8"); $newData = preg_replace("/[^\p{L}]/u", "", $newData); if ( strlen($newData) >= $minimum ) { $newData = htmlentities($newData, ENT_HTML5 | ENT_QUOTES, "UTF-8"); return $newData; } else { return false; } } } else if ( isset($space) ) { if ( $space === true ) { $newData = html_entity_decode($data, ENT_HTML5 | ENT_QUOTES, "UTF-8"); $newData = preg_replace("/[^\p{L}\ ]/u", "", $newData); $newData = htmlentities($newData, ENT_HTML5 | ENT_QUOTES, "UTF-8"); return $newData; } else { return self::alpha($data); } } else { $newData = html_entity_decode($data, ENT_HTML5 | ENT_QUOTES, "UTF-8"); $newData = preg_replace("/[^\p{L}]/u", "", $newData); $newData = htmlentities($newData, 
ENT_HTML5 | ENT_QUOTES, "UTF-8"); return $newData; } } else { return false; } } } So using this class the following code: $string = "Hola mis 100 canarios, un dia hermoso en la cabaña.! And what about thís guy right here.? Are this &lt;i&gt; and &lt;/i&gt; tags ?"; echo "<pre>String: $string</pre><br />"; $_validator = new dataValidator(); $newString_1 = $_validator->alpha($string); $newString_2 = $_validator->alpha($string, true); $newString_3 = $_validator->alpha($string, true, 500); $newString_4 = $_validator->alpha($string, true, 20, 150); $newString_5 = $_validator->alpha($string, true, 20, 150, "EXT_PUNCTUATION"); $newString_6 = $_validator->alpha($string, true, 20, 150, "EXT_ANY"); echo "<pre>Alpha(data) validator: $newString_1</pre>"; echo "<pre>Alpha(data, space: true) validator: $newString_2</pre>"; echo "<pre>Alpha(data, space: true, min: 500) validator: $newString_3</pre>"; echo "<pre>Alpha(data, space: true, min: 20, max: 100) validator: $newString_4</pre>"; echo '<pre>Alpha(data, space: true, min: 20, max: 100, extends: "EXT_PUNCTUATION") validator: '.$newString_5.'</pre>'; echo '<pre>Alpha(data, space: true, min: 20, max: 100, extends: "EXT_ANY") validator: '.$newString_6.'</pre>'; Will output something like: String: Hola mis 100 canarios, un dia hermoso en la cabaña.! And what about thís guy right here.? Are this <i> and </i> tags ? Alpha(data) validator: HolamiscanariosundiahermosoenlacabañaAndwhataboutthísguyrighthereArethisianditags Alpha(data, space: true) validator: Hola mis canarios un dia hermoso en la cabaña And what about thís guy right here Are this i and i tags Alpha(data, space: true, min: 500) validator: Alpha(data, space: true, min: 20, max: 100) validator: Hola mis canarios un dia hermoso en la cabaña And what about thís guy right here Are this i and i tags Alpha(data, space: true, min: 20, max: 100, extends: "EXT_PUNCTUATION") validator: Hola mis canarios, un dia hermoso en la cabaña.! And what about thís guy right here ? 
Are this i and /i tags ? Alpha(data, space: true, min: 20, max: 100, extends: "EXT_ANY") validator: Hola mis canarios, un dia hermoso en la cabaña.! And what about thís guy right here ? Are this <i> and </i> tags ? For strings in Spanish, I need to check for UTF-8 characters like 'ñ, é, í' etc. Answer: Standards It's more common to start class names with a capital letter: class DataValidator { public function alpha($data, $space = null, $minimum = null, $maximum = null, $extends = null) { $data = trim($data); Early Return You write (with a bunch of code in the middle): if ( !empty($data) ) { } else { return false; } It's often easier to just write if ( empty($data) ) { return false; } That way you can see immediately what happens when $data is empty, and you can see that the rest of the function is about when it isn't. This only works when you return to end the else, but that's a relatively common pattern. Note that I also moved the single statement onto its own line. This makes it more consistent with the multiline then blocks. Perl Regular Expressions $data = preg_replace("/\s{2,}/", " ", $data); This replaces two tabs with a single space but leaves one tab as a tab. This seems undesirable. $data = preg_replace('{\s+}', ' ', $data); This code will cause it to sometimes replace a single space with a single space, but it will always replace any amount of whitespace with a single space. I also changed the double quoted strings to single quoted strings. This saves a step of checking the strings for variables and more clearly expresses what you are doing. I prefer to use {} as my delimiters in Perl regular expressions. It's often more readable that way. Note that we can get rid of /\ here. Don't Repeat Yourself (DRY) You have an awful lot of repeated code that can be replaced by $newData = html_entity_decode($data, ENT_HTML5 | ENT_QUOTES, 'UTF-8'); This statement appears in every branch that does not return false, nine times. Once seems to be enough. 
if ( isset($minimum) ) { if ( 0 == $minimum ) { return false; } $dataLen = strlen($newData); if ( $dataLen < $minimum ) { return false; } if ( isset($maximum) ) { if ( $maximum < $minimum || $dataLen > $maximum ) { return false; } } } Note: under some circumstances, it might be worth making this a separate function. It's marginal but worth considering. Note how it has no interactions with $space or $extends, so we don't have to check those at the same time. if ( isset($space) && true === $space ) { if ( isset($extends) ) { if ( 'EXT_PUNCTUATION' === $extends ) { $newData = preg_replace('{[^\p{L}\p{P}\ ]}u', '', $newData); } else if ( 'EXT_ANY' === $extends ) { $newData = preg_replace('{[^\p{L}\p{P}\p{S}\ ]}u', '', $newData); } else { return false; } } else { $newData = preg_replace('{[^\p{L}\ ]}u', '', $newData); } } else { $newData = preg_replace('{[^\p{L}]}u', '', $newData); } This replaces nine preg_replace statements with four. That seems a bit better. If your compiler will support it, I'd prefer the full word Unicode properties to the single letter versions. '{[^\p{Letter}\p{Punctuation}\p{Symbol}\ ]}u'. if ( isset($minimum) ) { $dataLen = strlen($newData); if ( $dataLen < $minimum ) { return false; } if ( isset($maximum) ) { if ( $dataLen > $maximum ) { return false; } } } $newData = htmlentities($newData, ENT_HTML5 | ENT_QUOTES, 'UTF-8'); return $newData; The last two statements appear in all nine branches that do not return false, but we now we do them only once. If I counted properly, this replaces twenty-two return statements with eight. } } And that's the entire class in 64 lines (as compared to 160 for your version). 
Usage $newString_1 = $_validator->alpha($string); $newString_2 = $_validator->alpha($string, true); $newString_3 = $_validator->alpha($string, true, 500); $newString_4 = $_validator->alpha($string, true, 20, 150); $newString_5 = $_validator->alpha($string, true, 20, 150, "EXT_PUNCTUATION"); $newString_6 = $_validator->alpha($string, true, 20, 150, "EXT_ANY") I'm not crazy about this. It's not clear to me what true does or 20 or 150. If you had a validation options class, you could say something like $options->setSpacesAllowed(true); $options->setMinimumLength(20); $options->setMaximumLength(150); $newString = $_validator->alpha($string, $options); Which is more verbose but self-commenting.
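One detail from the regex section is worth verifying: `\s{2,}` only collapses runs of two or more whitespace characters, so a lone tab survives, while `\s+` normalizes every run. The same PCRE-style semantics hold in Python, which makes for a quick cross-check (the sample string is invented):

```python
import re

s = "one\t\ttwo\tthree    four"

# \s{2,}: only runs of TWO or more whitespace characters collapse.
a = re.sub(r"\s{2,}", " ", s)

# \s+: every whitespace run, including a single tab, becomes one space.
b = re.sub(r"\s+", " ", s)
```

The first result keeps the single tab between "two" and "three"; the second normalizes everything to single spaces, which is what the review recommends.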
{ "domain": "codereview.stackexchange", "id": 11826, "tags": "php, optimization, object-oriented, validation, unicode" }
Chordal graph and its clique tree
Question: A graph $G$ is chordal if it is the intersection graph of subtrees of a tree $T$. In particular $T$ can be chosen such that each node of $T$ corresponds to a maximal clique of $G$ and the subtrees $T_v$ consist of precisely those maximal cliques in $G$ that contain $v$. $T$ is then called the clique tree of $G$. Now my question is the following. Can any tree be represented as the clique tree of some chordal graph? Any counterexample or hint of a proof is welcome. Answer: Chordal graphs can be defined as intersection graphs of subtrees of any tree. So the answer to your decision question is trivially YES. On the construction side, for each subtree $T_v$ of bags (it's convenient and conventional to call the nodes of the tree bags), you introduce a unique new vertex $v$, which is put into all bags of $T_v$.
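The construction in the answer is mechanical enough to script. In this Python sketch (the tree shape, bag contents, and vertex names are all invented) each vertex $v$ gets the subtree $T_v$ of bags containing it, and two vertices are adjacent in $G$ exactly when their subtrees share a bag:

```python
from itertools import combinations

# A path-shaped tree with three bags b1 - b2 - b3; each bag is a set of vertices.
bags = {
    "b1": {"a", "b"},
    "b2": {"b", "c", "d"},
    "b3": {"c", "d", "e"},
}

vertices = set().union(*bags.values())

def subtree(v):
    """The bags containing v, i.e. the subtree T_v of the clique tree."""
    return {name for name, bag in bags.items() if v in bag}

# Intersection graph: edge uv iff T_u and T_v share at least one bag.
edges = {
    frozenset((u, v))
    for u, v in combinations(sorted(vertices), 2)
    if subtree(u) & subtree(v)
}
```

Note that each $T_v$ here is contiguous along the path b1-b2-b3, as a subtree must be; the resulting graph is chordal by construction.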
{ "domain": "cstheory.stackexchange", "id": 2572, "tags": "graph-theory" }
Creating a scatterplot by a json data in js
Question: I have a json file of protein data of mouse and human like { "name": "P04202", "full_name": "Transforming growth factor beta-1 proprotein", "symbol": "TGFB1_MOUSE", "length": 390, "species": "mouse", "function": "Transforming growth factor beta-1 proprotein: Precursor of the Latency-associated peptide (LAP) and Transforming growth factor beta-1 (TGF-beta-1) chains, which constitute the regulatory and active subunit of TGF-beta-1, respectively.", "nr_modifications": 2821, "residues": [ { "pos": 1, "residue": "M", "modifications": [] }, { "pos": 2, "residue": "P", "modifications": [] }, ... { "pos": 60, "residue": "A", "modifications": [ { "mod": "[127] Fluoro", "type": "Chemical derivative", "prob": 1 }, { "mod": "[53] HNE", "type": "Post-translational", "prob": 0.25 } ] }, ... ] } How do I create a scatter plot from it? (mock-up image omitted) Thanks for any idea. So far I gathered code to fetch the data from an online json: export const load = ({ fetch }) => { const fetchProteins = async () => { const res = await fetch('https:address/assets/ptm.json') const data = await res.json() return data } return { Proteins: fetchProteins() } } This code generates a scatter plot, but I will need to hover around the dots: <script> import { extent } from 'd3-array'; import { scaleLinear } from 'd3-scale'; export let data = []; const margin = 30; $: xScale = scaleLinear() .domain(extent(data.proteins.map((d) => d.length))) .range([0, 800 - (margin * 2)]); $: yScale = scaleLinear() .domain(extent(data.proteins.map((d) => d.nr_modifications))) .range([0, 400 - (margin * 2)]); </script> <svg width="800" height="400"> <g transform="translate({margin}, {margin})"> {#each data.proteins as datapoint} <circle cx={xScale(datapoint.length)} cy={yScale(datapoint.nr_modifications)} class:human={datapoint.species === 'human'} r="20" /> {/each} <circle cx={10} cy={10} class="human" r="10" /> <circle cx={10} cy={40} class="mouse" r="10" /> </g> </svg> <style> svg { border: 1px;
border-style: solid; } circle { fill: steelblue; fill-opacity: 0.5; } .human { fill: red; } </style> Answer: It seems like you are already on the right track. The code you provided fetches the protein data from the JSON file and generates a scatter plot with D3.js based on two properties: length and nr_modifications. To add an interactive hover event, you can add a tooltip to your scatterplot. The tooltip will display additional information when you hover over each dot. Below is an example of how you can add a tooltip: First, add a <div> element in your HTML that will be used as the tooltip: <div id="tooltip" style="position: absolute; opacity: 0;"> <p><strong>Protein:</strong> <span id="protein"></span></p> <p><strong>Modifications:</strong> <span id="modifications"></span></p> <p><strong>Length:</strong> <span id="length"></span></p> </div> Then, in your script, add the code to handle the mouseover and mouseout events for each circle: {#each data.proteins as datapoint} <circle cx={xScale(datapoint.length)} cy={yScale(datapoint.nr_modifications)} class:human={datapoint.species === 'human'} r="20" on:mouseover="{(e) => handleMouseover(e, datapoint)}" on:mouseout="{(e) => handleMouseout(e)}" /> {/each} Define the handleMouseover and handleMouseout functions in your script. These functions will show and hide the tooltip respectively. For the handleMouseover function, it also updates the tooltip content with the protein data: <script> // ... your existing code ... 
const handleMouseover = (event, datapoint) => { // Update the tooltip content document.getElementById('protein').textContent = datapoint.symbol; document.getElementById('modifications').textContent = datapoint.nr_modifications; document.getElementById('length').textContent = datapoint.length; // Show the tooltip document.getElementById('tooltip').style.opacity = 1; // Position the tooltip document.getElementById('tooltip').style.left = event.pageX + 'px'; document.getElementById('tooltip').style.top = event.pageY + 'px'; }; const handleMouseout = () => { // Hide the tooltip document.getElementById('tooltip').style.opacity = 0; }; </script> In standard HTML or JavaScript, the {#each} syntax and usage of {} for expressions and variable binding won't work as they are specific to Svelte. Same applies to the usage of on:mouseover and on:mouseout. In standard HTML or JavaScript, we use onmouseover and onmouseout. Here are some things which I haven't checked, but you might want to consider looking at: It looks like you are trying to load JSON data from an external source, but the function fetchProteins() is never called and its result (the promise returned) is not being awaited. The way it's structured now, Proteins in the returned object will hold a Promise, not the actual data. The code assumes that the returned JSON data is an array stored in data.proteins, but the provided JSON example suggests that the data structure might be different. It doesn't show any 'proteins' key in the JSON. The code needs to align with the actual structure of the returned JSON data. data.proteins implies that data is an object and proteins is a property of that object containing an array. In your SVG transform attribute, you are using braces {} though the correct syntax for a translate transformation in SVG is translate(x,y). 
You might want to try looking at the following changes: export const load = ({ fetch }) => { const fetchProteins = async () => { const res = await fetch('https:address/assets/ptm.json') const data = await res.json() return data } return { Proteins: fetchProteins } } $: xScale = scaleLinear() .domain(extent(data.map((d) => d.length))) .range([0, 800 - (margin * 2)]); $: yScale = scaleLinear() .domain(extent(data.map((d) => d.nr_modifications))) .range([0, 400 - (margin * 2)]); <g transform="translate({margin}px, {margin}px)"> {#each data as datapoint} <circle cx={xScale(datapoint.length)} cy={yScale(datapoint.nr_modifications)} class="{datapoint.species === 'human' ? 'human' : ''}" r="20" /> {/each} <!-- The rest of your code --> </g>
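As a side note, the `scaleLinear` calls above are just affine maps from the data extent onto pixel coordinates. A dependency-free Python sketch of the same arithmetic (the domain endpoints stand in for a hypothetical length extent) shows exactly what `xScale(datapoint.length)` computes:

```python
def linear_scale(domain, out_range):
    """Affine map in the spirit of d3's scaleLinear: domain -> out_range."""
    d0, d1 = domain
    r0, r1 = out_range
    return lambda x: r0 + (x - d0) / (d1 - d0) * (r1 - r0)

margin = 30
# Hypothetical extent of protein lengths, i.e. extent(data.map(d => d.length)).
x_scale = linear_scale((100, 500), (0, 800 - margin * 2))
```

A protein at the minimum length maps to pixel 0, the maximum to the right edge of the plotting area, and everything in between interpolates linearly.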
{ "domain": "bioinformatics.stackexchange", "id": 2489, "tags": "proteins, graphs, json" }
Quadrotor navigation in gazebo
Question: I am a beginner user of Gazebo, and am confused about how to navigate my quadrotor. I have used the hector_quadrotor package. I tried using rosrun pr2_teleop teleop_pr2_keyboard (after installing the pr2_teleop package), but it still gives the error "pr2_teleop package not found". Is there some other, easier way to navigate/control the quadrotor model? Originally posted by Siddharth on ROS Answers with karma: 1 on 2016-11-11 Post score: 0 Answer: Try running rosrun teleop_twist_keyboard teleop_twist_keyboard.py (to install: sudo apt-get install ros-$YOURDISTRO$-teleop-twist-keyboard). It should work now. It will give additional yaw control. Or else, you can create a separate node which publishes the /cmd_vel topic that the quad can use to navigate. If you can't create a node, publish the velocity using the "rostopic pub" utility: rostopic pub -1 /cmd_vel geometry_msgs/Twist -- '[0.0, 0.0, 0.3]' '[0.0, 0.0, 0.0]' This will give a linear vertical velocity. To check: use rqt_plot. Make sure that your initiated node publishes the /cmd_vel topic to the Gazebo model. Originally posted by Ajith kumar with karma: 28 on 2016-11-13 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 26228, "tags": "gazebo, hector-quadrotor" }
Probability of all alleles represented in a sample
Question: I'm trying to wrap my head around some formulas presented in the 1992 paper from Chakraborty, Sample Size Requirements for Addressing the Population Genetic Issues of Forensic Use of DNA Typing, but I have not been able to. Specifically, the left hand side of formula (16) and its relation with formula (13). $1-\sum\limits_{i=1}^{k}(1-p_{i})^{2n}$ (13) $[1-(1-p)^{2n}]^{r}\geqslant1-\alpha$ (16) Formula 13 indicates the probability, for a locus with $k$ segregating alleles whose frequencies are contained in the vector $p$, that all alleles are represented in a given sample of size $n$, and the left hand side of formula 16 indicates the probability of $r$ alleles being represented in a given sample of size $n$. First of all, why, based on (13), does the expression inside the summation indicate the probability of an allele of frequency $p_i$ remaining unobserved in a sample of size $n$? I tried to understand this from the Hardy-Weinberg equation but did not have any success. Second, why take the expression in (16) to the $r$th power? Which biological concepts am I missing? Answer: I'm going to strictly answer the questions, rather than step through the proof, because it involves a lot of formatting that I'm not familiar with. Other folks are welcome to edit this! Equation 13 This equation assumes a diploid genotype, given by the $2n$ power with $n$ individuals. For anything with greater ploidy than mono-, it's mathematically simpler to determine the probability that an allele is not present. As an example, see this calculation of a triploid Hardy-Weinberg equilibrium equation. Using this simplification, $$P(\text{single allele not present}) = \left(1 - P(\text{allele present})\right)^{\text{ploidy}\,\cdot\, n},$$ so for a diploid locus an allele of frequency $p_i$ is missed with probability $(1-p_i)^{2n}$. With $k$ segregating alleles, each allele has its own non-presence probability.
The probability that all $k$ alleles are represented is then $1 - \sum_{i=1}^{k}(1-p_i)^{2n}$, which is formula (13). Equation 16 In this equation, the author describes the probability that all $r$ alleles, each at the same frequency $p$, are present. These allele presences are treated as independent of each other and therefore multiplicative. Since every factor equals $1-(1-p)^{2n}$, the product simplifies to the $r$th power, giving $[1-(1-p)^{2n}]^{r}$.
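Both formulas are easy to explore numerically. The Python sketch below evaluates (13) for an example allele-frequency vector and inverts (16) to find the smallest sample size $n$ for which all $r$ equally frequent alleles appear with probability at least $1-\alpha$; all of the example numbers are invented:

```python
def p_all_seen(freqs, n):
    """Eq. (13): probability every allele is represented among 2n gene copies."""
    return 1 - sum((1 - p) ** (2 * n) for p in freqs)

def min_sample_size(p, r, alpha):
    """Smallest n with [1 - (1 - p)^(2n)]^r >= 1 - alpha, as in eq. (16)."""
    n = 1
    while (1 - (1 - p) ** (2 * n)) ** r < 1 - alpha:
        n += 1
    return n

prob = p_all_seen([0.5, 0.3, 0.2], n=10)            # three alleles, 10 individuals
n_needed = min_sample_size(p=0.05, r=4, alpha=0.01)  # rare alleles need many samples
```

Rarer alleles dominate the sum in (13), and the required $n$ from (16) grows quickly as $p$ shrinks, which is the paper's central point about sample-size requirements.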
{ "domain": "biology.stackexchange", "id": 10936, "tags": "genetics, statistics, forensics" }
What is the minimum number of separable pure states needed to decompose arbitrary separable states?
Question: Consider a separable state $\rho$ living in a tensor product space $\mathcal H\otimes\mathcal H'$, with $\mathcal H$ and $\mathcal H'$ of dimensions $D$ and $D'$, respectively. If $\rho$ is separable, then it is by definition possible to write it as a convex combination of (projectors over) separable pure states. $\newcommand{\ketbra}[1]{\lvert #1\rangle\!\langle #1\rvert}$Because a state is Hermitian and positive by definition, we can trivially always write it in terms of its eigenvectors and eigenvalues as $$ \rho = \sum_{k=1}^{D D'} \lambda_k \ketbra{\psi_k}, \quad \lambda_k\ge0, $$ where $\rho|\psi_k\rangle=\lambda_k|\psi_k\rangle$. However, the $|\psi_k\rangle$ will in general be non-separable states. What I am looking for is the decomposition of $\rho$ in terms of only separable states. For example, a trivial case is $\rho=I/DD'$, which is easily seen to be decomposable as $$\frac{1}{DD'}I=\frac{1}{DD'}\sum_{k=1}^D\sum_{\ell=1}^{D'}\ketbra{k,\ell}.$$ This shows that, to decompose an unknown state $\rho$ in terms of separable states, at least $DD'$ elements are required. Is this number sufficient for any separable $\rho$? In other words, what I'm looking for is the smallest $M$ such that a representation of the form $$\rho = \sum_{j=1}^M p_j \,\ketbra{\alpha_j}\otimes\ketbra{\beta_j}$$ holds for all separable $\rho$.
More formally, this amounts to finding $$\min\left\{M\in\mathbb N\,:\,\,\forall\rho\exists\{p_k\}_k,\{|\alpha_k\rangle\}_k,\{|\beta_k\rangle\}_k\,:\,\rho=\sum_{j=1}^M p_j \,\ketbra{\alpha_j}\otimes\ketbra{\beta_j}\right\}.$$ Answer: First of all, your problem is a special version of a more general problem, namely finding the minimum number of states which minimize the entanglement of formation; that is, given a state $\rho$ on AB$\equiv \mathbb C^D\otimes \mathbb C^{D'}$, find the decomposition $$ \rho = \sum_{i=1}^m p_i |\psi_i\rangle\langle\psi_i| $$ which minimizes $\sum_i p_i E(|\psi_i\rangle)$, where $E(|\psi_i\rangle) = S(\mathrm{tr}_B(|\psi_i\rangle\langle\psi_i|))$, and find the minimum $m$ for which such a decomposition exists. Your problem is just the variant of this where the state has entanglement of formation zero. This is a well-studied problem and in turn a special case of a so-called "convex roof construction". Uhlmann, for instance, states that for any such problem, at most $(DD')^2+1$ states are needed for the optimal decomposition (Proposition 2.1). It is likely that better bounds exist for the special problem of entanglement of formation, or the given problem of separable states. I was unable to find any in the literature, but one should be able to prove one along the following lines: First, note that one can relax the optimization to all decompositions $$\rho=\sum p_i\rho_i\,\tag{1}$$ where one minimizes $\sum p_i S(\mathrm{tr}_B\rho_i)$, since the entropy is concave, i.e. the minimum will always be (also) attained on pure $\rho_i$. Thus, we can instead consider decompositions of the reduced density matrix $\rho^A = \sum p_i \rho_i^A$ -- any such decomposition arises from a decomposition (1) of $\rho$ (e.g. by writing $p_i\rho_i^A$ as $M_k\rho M_k^\dagger$ with a POVM $M_k$ and applying $M_k\otimes I$ to $\rho$). Now consider an optimal decomposition $\rho^A = \sum p_i \rho_i^A$.
If it has more than $D^2$ terms, the $\rho_i^A$ must be linearly dependent. Thus, we can decrease the weight of some $\rho_j^A$ all the way down to zero by shifting the weights of all the other $\rho_i^A$ (keeping $p_i\ge0$!). Again, due to concavity, this will not change the average entanglement. We are now left with an optimal decomposition $\rho^A=\sum p_i\rho^A_i$ with $D^2$ terms. This yields a decomposition of $\rho$, $\rho=\sum p_i \rho_i$, which minimizes $\sum p_i S(\rho_i^A)$ (as described in 2.). We can now decompose each $\rho_i$ in their eigenbasis (which has at most $DD'$ terms), which yields a total of $D^3D'$ terms. There is likely space for improvement: For instance, one could rewrite each of the $\rho_i^A$ in a basis of pure states $|\phi_{k,i}\rangle\langle\phi_{k,i}|$. Such a basis has size at most $D^2+1$ ($D^2$ being the dimension of the convex space), and the coefficients are $\mathrm{tr}(\rho_i^A|\phi_k\rangle\langle\phi_k|)$ and thus positive. Again, convexity yields an optimal decomposition with pure $\rho_i^A$ and $D^2$ terms. It only remains to decompose the corresponding $\rho_i^B$, which results in a total of $(D^2+1)D'$ terms.
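The maximally mixed example from the question is quick to verify numerically. Assuming NumPy is available, this sketch builds the $DD'$ product projectors $|k,\ell\rangle\langle k,\ell|$ for $D=2$, $D'=3$ and checks that their uniform mixture is exactly $I/DD'$:

```python
import numpy as np

D, Dp = 2, 3  # dimensions of H and H'

def ketbra(dim, k):
    """The projector |k><k| on a dim-dimensional space."""
    v = np.zeros((dim, 1))
    v[k, 0] = 1.0
    return v @ v.T

# rho = (1 / DD') * sum over k, l of |k><k| tensor |l><l|
rho = sum(
    np.kron(ketbra(D, k), ketbra(Dp, l))
    for k in range(D)
    for l in range(Dp)
) / (D * Dp)
```

Each term is a projector onto a separable product state, so this is an explicit separable decomposition of the maximally mixed state with exactly $DD' = 6$ elements.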
{ "domain": "physics.stackexchange", "id": 85823, "tags": "quantum-mechanics, quantum-information, quantum-entanglement, density-operator, quantum-states" }
How different sizes of water bubbles behave in space
Question: I watched a YouTube video of Chris Hadfield talking about different preventive measures for spills in a space station. He is using water to demonstrate spills in the space station. At 0:24 the initially formed smaller water bubbles accelerate quickly and move away from the tube in different directions, whereas the bigger water bubbles accelerate slowly. Is there any scientific reason for this behavior? Answer: Well, the reason for the water bubbles is the hydrogen bonds: they keep the $H_2O$ molecules together, forming a mass (a very unstable mass, though). So when some outside force is applied to these bubbles, they behave like any other moving object on Earth, having a kinetic energy: $$E_k=\frac{1}{2}mv^2$$ Because the smaller water bubbles have a smaller mass $m_s$, they accelerate much faster than the big water bubble with a big mass $m_b$ as soon as they come out of the water bottle: $$v_{small} = \sqrt{\frac{2E_k}{m_{s}}} > v_{big}= \sqrt{\frac{2E_k}{m_{b}}}$$ Strictly speaking, you'd have to differentiate these formulas with respect to time in order to get the acceleration $a$ (but that's not inherently necessary to understand the underlying physics): $$a_{small} = \frac{dv_{small}}{dt} > a_{big} = \frac{dv_{big}}{dt}$$ Clearly, there might be some other forces, like air vents, responsible for the smaller bubbles moving faster in this or another direction, but the underlying formulas would be the same: just apply any other outside force to the mass and you can calculate the corresponding speed and acceleration.
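The mass dependence claimed in the answer is a one-liner to check: inverting $\frac{1}{2}mv^2 = E$ gives $v=\sqrt{2E/m}$, so a droplet with a quarter of the mass leaves twice as fast for the same imparted energy. A Python sketch with made-up numbers:

```python
import math

def speed(kinetic_energy, mass):
    """v = sqrt(2 E / m), inverted from E = (1/2) m v^2."""
    return math.sqrt(2.0 * kinetic_energy / mass)

E = 1e-9        # joules imparted by the squeeze (illustrative)
m_small = 1e-9  # kg, small droplet (illustrative)
m_big = 4e-9    # kg, bigger droplet

v_small = speed(E, m_small)
v_big = speed(E, m_big)
```

The ratio of the two speeds is exactly $\sqrt{m_b/m_s} = 2$, matching the scaling in the answer.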
{ "domain": "physics.stackexchange", "id": 71948, "tags": "everyday-life, space" }
C++ Quiz game w/questions in random order
Question: This is a 10-question quiz that assesses the user's knowledge of the C++ programming language. Upon the start of the program some ASCII welcome art prints. The questions, answers, and correct answer are stored in a text file. The questions are loaded in random order. The user is prompted to answer each question; if the answer is right, points are added and a win message is displayed. If the question is answered wrong, the user is told they are wrong and the right answer is displayed. If a user passes, it will print "You Passed" ASCII art stored in a text file; if the user fails, it will just tell them they failed. There are 25 total questions stored inside the text file. I plan to add more to the .txt file, so that the quiz will have more variety and feel less repetitive. These questions mostly come from http://www.cprogramming.com and a textbook I have for programming logic (Programming Logic and Design 8th Edition). Example of one of the questions in quiz_data.txt: What command prints something to the screen? cin cout char print b quiz_passed.txt: __ __ ____ ____ \ \/ /___ __ __ / __ \____ ______________ ____/ / / \ / __ \/ / / / / /_/ / __ `/ ___/ ___/ _ \/ __ / / / / /_/ / /_/ / / ____/ /_/ (__ |__ ) __/ /_/ /_/ /_/\____/\__,_/ /_/ \__,_/____/____/\___/\__,_(_) One thing I would really like to do with this code is to separate the PrintResults() function from the InititializeQuizGame() function. What I have right now works as intended, but I feel like PrintResults() should be separate from InititializeQuizGame() for better readability and logic. Any thoughts on this or other criticism appreciated. This is my first major program and I would like to know how I can make this code more efficient or readable. Thanks in advance!
My Code: #include <iostream> #include <fstream> #include <string> #include <sstream> #include <vector> #include <algorithm> #include <random> namespace { const int s_questionScore = 10; // Points rewarded for each correct answer. const int s_failingGrade = 60; const int s_numQuestions = 10; const char* s_winMessage = "Correct!\n"; const char* s_loseMessage = "Incorrect, the correct answer was "; const char* s_promptAnswer = "What is your answer?\n"; } class Question { public: int askQuestion(int num = -1); friend std::istream& operator >> (std::istream& is, Question& ques); private: std::string question_text; std::string answer_1; std::string answer_2; std::string answer_3; std::string answer_4; char correct_answer; }; void PrintArt(std::ifstream myfile); void InititializeQuizGame(std::ifstream data); void load(std::istream& is, std::vector<Question>& questions); void Shuffle(std::vector<Question>& questions); void PrintResults(std::vector<Question>& questions); void clearScreen(); void PositionCursor(); int main() { PrintArt(std::ifstream("welcome.txt")); InititializeQuizGame(std::ifstream("quiz_data.txt")); //Load questions from .txt file return 0; } void PrintArt(std::ifstream myfile) { std::string line; if (myfile.is_open()) { while (getline(myfile, line)) { std::cout << line << '\n'; } myfile.close(); std::cin.get(); clearScreen(); } else { std::cout << "Error: File not found!\n"; } } void InititializeQuizGame(std::ifstream data) { if (data.is_open()) { std::vector<Question> questions; load(data, questions); Shuffle(questions); PrintResults(questions); } else { std::cout << "Error: File not found!\n"; } std::cin.get(); } std::istream& operator >> (std::istream& is, Question& ques) { std::string line; while (std::getline(is, line)) { if (line.size() == 0) continue; break; } ques.question_text = line; getline(is, ques.answer_1); getline(is, ques.answer_2); getline(is, ques.answer_3); getline(is, ques.answer_4); is >> ques.correct_answer; return is; } void 
load(std::istream& is, std::vector<Question>& questions) { Question q; while (is >> q) questions.push_back(q); } int Question::askQuestion(int num) //Ask the question. { int score = 0; std::cout << "\n"; if (num > 0) std::cout << num << ".) "; std::cout << question_text << "\n"; std::cout << "a. " << answer_1 << "\n"; std::cout << "b. " << answer_2 << "\n"; std::cout << "c. " << answer_3 << "\n"; std::cout << "d. " << answer_4 << "\n"; //Ask user for their answer. char guess = ' '; PositionCursor(); std::cout << s_promptAnswer; std::cin >> guess; if (guess == correct_answer) { std::cout << s_winMessage; score = s_questionScore; std::cin.get(); std::cin.get(); } else { std::cout << s_loseMessage << correct_answer << ".\n"; std::cin.get(); std::cin.get(); } return score; } void Shuffle(std::vector<Question>& questions) //Shuffle the questions. { std::random_device rd; std::mt19937 randomGenerator(rd()); std::shuffle(questions.begin(), questions.end(), randomGenerator); } void PrintResults(std::vector<Question>& questions) { int total = 0; //Total score. //Keep track of score. for (size_t i = 0; i < s_numQuestions; ++i) { total += questions[i].askQuestion(i + 1); } //Print Total score. clearScreen(); if (total >= s_failingGrade) { std::cout << "\n\n"; std::cout << "You scored " << total << " out of 100!\n"; PrintArt(std::ifstream("quiz_passed.txt")); } else { std::cout << "You scored " << total << " out of 100....\n"; std::cout << "Sorry, you failed... Better luck next time.\n"; PositionCursor(); } } void clearScreen() { std::cout << std::string(22, '\n'); } void PositionCursor() { std::cout << std::string(22, '\n'); } Answer: Here are some things that may help you improve your code. Use only required #includes The code has the line: #include <sstream> but as far as I can see, nothing from <sstream> is actually used or needed. Only include files that are actually needed. 
Consider better naming The function named InitializeQuizGame is actually rather misleading because it doesn't simply initialize the game, but also runs it to completion. Most of the other names are not bad, but the odd mix of capitalization (as with InitializeQuizGame and load) makes the code a bit harder to read than it should be. Let the computer do the mathematics If you decided to make a 20 question quiz or a 50 question quiz, it would be nice if the program would automatically adjust to that without having to recompile the program. Fortunately, it's quite easy to do. Assuming the questions are all weighted the same, and if there are \$n\$ questions, each question is worth \$100/n\$ percent. Since you're reading all of the questions into a std::vector anyway, why not just let the computer perform the mathematics rather than hardcoding the values? When is a failing grade not a failing grade? If I see a variable named failingGrade I would expect it to actually represent a failing grade, but surprisingly in this code, if one gets 60% correct (even though s_failingGrade = 60) the message says that I passed the quiz. Perform input sanitation The function that reads in the questions does not validate the data. In particular, the correct answer is apparently always supposed to be a, b, c, or d but the data read is not validated to assure that. Prefer std::istream to std::ifstream In several cases, a passed parameter is an std::ifstream but the interface would be more flexible if they instead used a std::istream. I'd also recommend having the functions accept a std::istream & to both avoid making a copy and to have the calling function handle any problems opening files. It also makes it slightly easier to write automated tests because you can use std::stringstreams as inputs. 
Consider more effective use of objects The Question class is not a bad start for a quiz program, but the object is a little strange in that its only member function relies on both an externally provided question number and various global variables in order to function. These both suggest to me that it might be better to introduce a Quiz class to encapsulate the vector of Question objects. It might look something like this: class Quiz { public: class Question { public: friend std::istream& operator >> (std::istream& is, Question& ques); bool ask() const; private: std::string question_text; std::string answer_1; std::string answer_2; std::string answer_3; std::string answer_4; char correct_answer; }; Quiz(std::istream &data); void operator()() const; private: std::vector<Question> questions; static const int s_failingGrade = 60; static const char* s_winMessage; static const char* s_loseMessage; static const char* s_promptAnswer; }; const char* Quiz::s_winMessage = "Correct!\n"; const char* Quiz::s_loseMessage = "Incorrect, the correct answer was "; const char* Quiz::s_promptAnswer = "What is your answer?\n"; Usage might look like this: int main() { std::ifstream in{"quiz_data.txt"}; if (!in) { std::cout << "Error: could not open quiz data file\n"; return 1; } Quiz quiz(in); //Load questions from .txt file quiz(); } Avoid hardcoding values The file names, the failing grade and the various error messages are all hardcoded now. Rather than hardcode those, you could read them from a configuration file, making the program much more flexible and usable at very little additional effort. Consider shuffling the answers Right now each question is asked with all of the answers in the same order as they appeared in the input file. It might make for a better quiz if the program shuffled the available answers each time, just as it shuffles the questions. 
Omit return 0 When a C or C++ program reaches the end of main the compiler will automatically generate code to return 0, so there is no need to put return 0; explicitly at the end of main. Note: when I make this suggestion, it's almost invariably followed by one of two kinds of comments: "I didn't know that." or "That's bad advice!" My rationale is that it's safe and useful to rely on compiler behavior explicitly supported by the standard. For C, since C99; see ISO/IEC 9899:1999 section 5.1.2.2.3: [...] a return from the initial call to the main function is equivalent to calling the exit function with the value returned by the main function as its argument; reaching the } that terminates the main function returns a value of 0. For C++, since the first standard in 1998; see ISO/IEC 14882:1998 section 3.6.1: If control reaches the end of main without encountering a return statement, the effect is that of executing return 0; All versions of both standards since then (C99 and C++98) have maintained the same idea. We rely on automatically generated member functions in C++, and few people write explicit return; statements at the end of a void function. Reasons against omitting seem to boil down to "it looks weird". If, like me, you're curious about the rationale for the change to the C standard read this question. Also note that in the early 1990s this was considered "sloppy practice" because it was undefined behavior (although widely supported) at the time. So I advocate omitting it; others disagree (often vehemently!) In any case, if you encounter code that omits it, you'll know that it's explicitly supported by the standard and you'll know what it means.
{ "domain": "codereview.stackexchange", "id": 23268, "tags": "c++, beginner, object-oriented, random, quiz" }
Confusion about rolling motion
Question: Why is rolling motion translational? Isn't translational motion a motion where a body moves without changing its orientation? But in rolling motion, the object is rotating, i.e. changing its orientation all the time. So why is rolling motion called a combination of translational and rotational motion? Answer: Here is a diagram of a body rolling at an angular speed $\omega$ and not slipping relative to the surface. Note that every particle, including the one at the centre of mass, has a translational speed of $v$ but also a speed of $r\omega$ due to the rotational motion. To get the velocity of a particle (blue arrows) one needs to add the velocity due to translation (red arrows) and the velocity due to rotation about the centre of mass (grey arrows).
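A small numerical version of that picture (my own sketch, with made-up numbers): the velocity of any rim particle is the centre's translational velocity plus the rotational velocity about the centre.

```python
import numpy as np

v, r = 2.0, 0.5          # centre speed (m/s) and wheel radius (m), made up
omega = v / r            # rolling without slipping: v = r * omega

def rim_velocity(theta):
    """Velocity of a rim particle at angle theta, measured from the
    contact point, for a wheel rolling in the +x direction."""
    pos = r * np.array([np.sin(theta), -np.cos(theta)])  # relative to centre
    v_trans = np.array([v, 0.0])                  # same for every particle
    v_rot = omega * np.array([pos[1], -pos[0]])   # clockwise rotation, speed r*omega
    return v_trans + v_rot

print(rim_velocity(0.0))     # contact point: approximately [0, 0], at rest
print(rim_velocity(np.pi))   # top of the wheel: approximately [2v, 0]
```

The contact point is instantaneously at rest (the translation and rotation cancel there), while the top of the wheel moves at twice the centre speed, exactly as in the diagram.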
{ "domain": "physics.stackexchange", "id": 93401, "tags": "newtonian-mechanics, rotational-dynamics, terminology" }
What is meant by "arbitrary" in the context of digital filter design?
Question: I am trying to understand what 'arbitrary' means here. I have read many references; one says it means to 'begin the process with a transfer function of your choice', while another relates it to the starting point. On the other hand, I have read about Chebyshev approximation with arbitrary magnitude. How can I find the best design with an arbitrary magnitude? Every design method has its criteria; is 'arbitrary' related to the method of design or to the transfer function? Answer: The term "arbitrary" in this context refers to the desired frequency response, and what it means is that the respective design method accepts any desired response. So, unlike what is required by many standard design routines, the magnitude response does not need to be piecewise constant or linear, and, more importantly, the phase response need not be linear or minimum-phase. The corresponding design problem is a complex approximation problem because the desired frequency response which must be approximated is a complex-valued function.
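As a sketch of what such a complex approximation problem looks like in practice (my own illustration, not from the answer): sample the desired complex response on a dense frequency grid and solve a least-squares problem for the FIR taps. Here the "arbitrary" desired response is manufactured from a known 8-tap filter purely so that the result can be checked:

```python
import numpy as np

rng = np.random.default_rng(0)
h_true = rng.standard_normal(8)              # hidden 8-tap FIR filter

w = np.linspace(0, np.pi, 200)               # dense frequency grid
n = np.arange(8)
A = np.exp(-1j * np.outer(w, n))             # A[k, m] = e^{-j w_k m}
H_desired = A @ h_true                       # any sampled complex H(w) would do

# Complex approximation: min_h || A h - H_desired ||_2 over real taps h,
# solved by stacking real and imaginary parts into one real LS problem.
A_ri = np.vstack([A.real, A.imag])
b_ri = np.concatenate([H_desired.real, H_desired.imag])
h_ls, *_ = np.linalg.lstsq(A_ri, b_ri, rcond=None)

print(np.allclose(h_ls, h_true))  # True: the LS design matches the target
```

Nothing in the setup requires `H_desired` to have piecewise-constant magnitude or linear phase, which is exactly the sense of "arbitrary" described above.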
{ "domain": "dsp.stackexchange", "id": 6573, "tags": "filter-design" }
Clustering geo location coordinates (lat,long pairs)
Question: What is the right approach and clustering algorithm for geolocation clustering? I'm using the following code to cluster geolocation coordinates: import numpy as np import matplotlib.pyplot as plt from scipy.cluster.vq import kmeans2, whiten coordinates= np.array([ [lat, long], [lat, long], ... [lat, long] ]) x, y = kmeans2(whiten(coordinates), 3, iter = 20) plt.scatter(coordinates[:,0], coordinates[:,1], c=y); plt.show() Is it right to use K-means for geolocation clustering, as it uses Euclidean distance, and not the Haversine formula as a distance function? Answer: K-means is not the most appropriate algorithm here. The reason is that k-means is designed to minimize variance. This is, of course, appealing from a statistical and signal processing point of view, but your data is not "linear". Since your data is in latitude, longitude format, you should use an algorithm that can handle arbitrary distance functions, in particular geodetic distance functions. Hierarchical clustering, PAM, CLARA, and DBSCAN are popular examples of this. OPTICS clustering is also recommended. The problems of k-means are easy to see when you consider points close to the +-180 degrees wrap-around. Even if you hacked k-means to use Haversine distance, in the update step when it recomputes the mean the result will be badly screwed. Worst case is, k-means will never converge!
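A minimal sketch of the suggested alternative (my own, with made-up coordinates): scikit-learn's DBSCAN accepts metric="haversine" when the (lat, long) pairs are converted to radians, so eps becomes a great-circle distance expressed in radians:

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Two tight groups of (lat, long) points in degrees (illustrative values).
coords = np.array([[52.52, 13.40], [52.53, 13.41],   # near Berlin
                   [48.85, 2.35],  [48.86, 2.36]])   # near Paris

earth_radius_km = 6371.0088
eps_km = 50.0                      # neighborhood radius of 50 km

# haversine expects [lat, long] in radians; eps is then in radians too.
db = DBSCAN(eps=eps_km / earth_radius_km, min_samples=2,
            metric="haversine", algorithm="ball_tree").fit(np.radians(coords))
print(db.labels_)  # [0 0 1 1]: one cluster per city, no noise points
```

Unlike the k-means code in the question, this uses a true geodetic distance, so it behaves correctly near the poles and the +-180 degree wrap-around.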
{ "domain": "datascience.stackexchange", "id": 9828, "tags": "machine-learning, python, clustering, k-means, geospatial" }
How do I go about balancing these equations?
Question: I have to use the oxidation number approach. $\ce{Co^2+ (aq) + H2SO3(aq) -> Co^3+(aq) + S2O3^2- (aq)}$ This is my approach so far: The change in Co is 1 electron, the change in S is 2 electrons, so Co must be multiplied by two $\ce{2 Co^2+ + H2SO3(aq) -> 2 Co^3+ + S2O3^2-}$ I balance the O's, then balance the H's and I get $\ce{2 Co^2+ + 2H2SO3(aq) + 2H^+ -> 2 Co^3+ + S2O3^2- + 3H2O(l)}$ I did it the half-reaction way as well; I got the same answer but the coefficients on the Co's were 4. I suspect my problem is my assumptions about the oxidation number change at the beginning. Thanks in advance Answer: I agree with you, the right balancing of the redox reaction is the following: $$\ce{4 Co^2+(aq) + 2H2SO3(aq) + 2H^+ (aq)-> 4 Co^3+(aq) + S2O3^2- (aq) + 3H2O(l)}$$ The problem is in your assumptions about the oxidation number change at the beginning, and more precisely about sulfur. The oxidation number of sulfur in $\ce{H2SO3}$ is four. But in the thiosulfate ion, you have two inequivalent sulfur atoms: Please see: Oxidation states of the sulfur atoms in the thiosulfate ion The oxidation number is $\ce{+IV}$ for the central one, while it's zero for the terminal one. In sum, the variation in the oxidation number for sulfur is four, not two. (As if you have two sulfur atoms with oxidation number of four in $\ce{2H2SO3}$. One of them changes its oxidation number to zero, while the other one stays intact.)
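As a quick sanity check of the balanced equation (a script of mine, not part of the original answer), one can count the atoms and the total charge on each side:

```python
from collections import Counter

def side(*terms):
    """Total atom counts and net charge for (coefficient, formula, charge) terms."""
    atoms, charge = Counter(), 0
    for coeff, formula, q in terms:
        for element, n in formula.items():
            atoms[element] += coeff * n
        charge += coeff * q
    return atoms, charge

# 4 Co^2+ + 2 H2SO3 + 2 H+  ->  4 Co^3+ + S2O3^2- + 3 H2O
left = side((4, {"Co": 1}, +2),
            (2, {"H": 2, "S": 1, "O": 3}, 0),
            (2, {"H": 1}, +1))
right = side((4, {"Co": 1}, +3),
             (1, {"S": 2, "O": 3}, -2),
             (3, {"H": 2, "O": 1}, 0))
print(left == right)  # True: atoms (Co, H, S, O) and charge (+10) both balance
```

Running the same check on the questioner's version with coefficients of 2 on Co would show the charge failing to balance, which is exactly the electron-count error the answer identifies.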
{ "domain": "chemistry.stackexchange", "id": 3004, "tags": "redox" }
How to run ROS commands from C++ application?
Question: (Original Author) want(s) to run ROS commands, e.g. roscore, from a C++ application. The following example does not work: #include <stdlib.h> int main() { system("/opt/ros/diamondback/ros/bin/roscore"); return 0; } Execution results in the following message: Traceback (most recent call last): File "/opt/ros/diamondback/ros/bin/roscore", line 34, in <module> from ros import roslaunch ImportError: No module named ros Original Author was using Ubuntu 10.04. Original Author notes that roscore successfully launches the master when executed from the command line. Originally posted by ASMIK2011ROS on ROS Answers with karma: 62 on 2011-05-22 Post score: 1 Original comments Comment by Asomerville on 2012-08-06: I accidentally pushed this to the top by correcting the formatting. :P Comment by domikilo on 2013-12-18: you must use system("/opt/ros/diamondback/bin/roscore"); Answer: Maybe it is a problem with missing environment variables (python path?). You could try to run bash with the command as parameter, e.g. something like bash -i -c "/opt/ros/diamondback/ros/bin/roscore". -i makes the bash read ~/.bashrc where I assume you source /opt/ros/diamondback/setup.sh. Originally posted by Felix Endres with karma: 6468 on 2011-05-22 This answer was ACCEPTED on the original site Post score: 4
{ "domain": "robotics.stackexchange", "id": 5627, "tags": "c++" }
I can't use rosdep?
Question: I'm a beginner with Ubuntu (v12.04) and Fuerte ROS. I've tried several tutorials in ROS and mostly things were OK. Now, I'm trying to build some stacks, i.e. robotino*. But I cannot use rosdep: rosdeps: command not found Did I install ROS incorrectly, or am I missing some steps to initialize the ROS platform? Thank you! Originally posted by roskidos on ROS Answers with karma: 110 on 2012-10-08 Post score: 0 Original comments Comment by roskidos on 2012-10-08: I've tried to track the error and see that I'm lacking rosbash. So I run "source ${ROS_ROOT}/tools/rosbash/rosbash" but the error comes: "No such file or directory" Comment by Lorenz on 2012-10-08: Have a look at the environment setup section and the overlay page. You probably don't have your environment set up correctly. Comment by roskidos on 2012-10-08: thank you @Lorenz!! I've found that I perhaps missed this command in the installation page of ROS: sudo apt-get install python-rosinstall python-rosdep Answer: The command is not rosdeps but rosdep. You need to install rosdep with: sudo apt-get install python-rosdep Originally posted by Lorenz with karma: 22731 on 2012-10-08 This answer was ACCEPTED on the original site Post score: 4
{ "domain": "robotics.stackexchange", "id": 11277, "tags": "rosdep, ros-fuerte, ubuntu-precise, ubuntu" }
Proving that two sets of strings are equal
Question: I am stuck on this problem: Let $A=(\Sigma, Q, q_1, F, \delta)$ be a finite deterministic automaton (i.e. $\delta:Q\times\Sigma\to Q$) such that $Q=\{q_1,...,q_m\}$. Let's define for each $i,j\in\{1,...,m\}$, $k\in\{0,1,...,m\}$ (Note: the $\delta$ below is the extension of $\delta$ to $\Sigma^*$) $L_{i,j}^k=\{w\in\Sigma^*|\delta(q_i,w)=q_j \land \forall u\in PREFIX(w)-\{\epsilon,w\}, \delta(q_i,u)=q_x\to x\leq k\}$ Now let's define the following sets recursively: For all $i,j\in\{1,...,m\}$: $M_{i,j}^0=\{\sigma\in\Sigma|\delta(q_i,\sigma)=q_j\}\cup\begin{cases} \emptyset, \text{if $i\neq j$} \\ \{\epsilon\}, \text{if $i=j$} \end{cases}$ For all $i,j,k\in\{1,...,m\}$: $M_{i,j}^k=M_{i,k}^{k-1}\cdot (M_{k,k}^{k-1})^* \cdot M_{k,j}^{k-1}\cup M_{i,j}^{k-1}$ Prove that for all $i,j\in\{1,...,m\}$ and $k\in\{0,1,...,m\}$ we get $L_{i,j}^k=M_{i,j}^k$. I've tried to prove it by induction on $k$ but failed. (Note: I've encountered these sets in the proof of the theorem that says that every regular language has a regular expression.) Thanks a lot. Answer: It is not totally clear what your main obstacles are in getting the proof done. Note that $L_{i,j}^k$ is the set of all strings $w$ on a path $\pi$ from $q_i$ to $q_j$ such that for each intermediate state $q_x$ we have $x\le k$; i.e., all intermediate states have index at most $k$. Consider state $q_k$. Now either the path $\pi$ does not enter $q_k$ at all: then all indices are at most $k-1$, meaning that $w$ belongs to $L_{i,j}^{k-1}$. Or it may enter $q_k$ one or more times. Thus the path is of the form $\pi: q_i \leadsto q_k \leadsto \dots \leadsto q_k \leadsto q_j$, where the subpaths may start and end in $q_k$, but do not pass through that state. So the string $w$ can be partitioned into strings that belong to $L_{i,k}^{k-1}$, $L_{k,k}^{k-1}$, ..., $L_{k,k}^{k-1}$, $L_{k,j}^{k-1}$. 
That should prove that $L_{i,j}^{k} = L_{i,k}^{k-1} ( L_{k,k}^{k-1} )^*L_{k,j}^{k-1} \cup L_{i,j}^{k-1}$ (or at least the inclusion from left to right; the other inclusion follows by a similar argument). With this knowledge the induction is on the superscript $k$, and the induction step is: If $L_{i,j}^{k-1} = M_{i,j}^{k-1}$ for all $i,j$ then $L_{i,j}^{k} = M_{i,j}^{k}$ for all $i,j$. This should be obvious; just plug in the equations we have.
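Not part of the original proof, but a quick way to build intuition: brute-force-check $L_{i,j}^k = M_{i,j}^k$ on a tiny DFA, with every set restricted to strings of length at most MAXLEN. The cap is sound because a decomposition of a string never needs pieces longer than the string itself.

```python
from itertools import product

# A tiny two-state DFA over {a, b}: delta[state][symbol] = next state.
delta = {1: {"a": 2, "b": 1}, 2: {"a": 1, "b": 2}}
SIGMA, MAXLEN = "ab", 4

def run(state, w):
    """States reached after each nonempty prefix of w, starting from `state`."""
    path = []
    for c in w:
        state = delta[state][c]
        path.append(state)
    return path

def L(i, j, k):
    """Direct simulation of L_{i,j}^k, restricted to strings of length <= MAXLEN."""
    out = set()
    for n in range(MAXLEN + 1):
        for w in map("".join, product(SIGMA, repeat=n)):
            path = run(i, w)
            end = path[-1] if path else i
            # path[:-1] are the states after proper nonempty prefixes of w
            if end == j and all(x <= k for x in path[:-1]):
                out.add(w)
    return out

def cat(A, B):
    return {a + b for a in A for b in B if len(a) + len(b) <= MAXLEN}

def star(A):
    S = {""}
    while True:
        S2 = S | cat(S, A)
        if S2 == S:
            return S
        S = S2

def M(i, j, k):
    if k == 0:
        return {c for c in SIGMA if delta[i][c] == j} | ({""} if i == j else set())
    return cat(cat(M(i, k, k - 1), star(M(k, k, k - 1))), M(k, j, k - 1)) | M(i, j, k - 1)

ok = all(L(i, j, k) == M(i, j, k)
         for i in (1, 2) for j in (1, 2) for k in (0, 1, 2))
print(ok)  # True: the recursive M sets match the path-based L sets
```

For instance, $M_{1,2}^1 = \{\epsilon,b\}\cdot\{\epsilon,b\}^*\cdot\{a\} \cup \{a\} = b^*a$, which is exactly the set of strings driving the DFA from $q_1$ to $q_2$ with no intermediate state of index above 1.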
{ "domain": "cs.stackexchange", "id": 5715, "tags": "formal-languages, regular-languages, automata, finite-automata" }
If a force is a vector quantity, thus having direction and magnitude, why do we express pounds/newtons, a unit of weight, as a scalar?
Question: It is my understanding that forces are vector quantities, and thus have both magnitude and direction. Since weight is the force of gravity, it also must have magnitude and direction. Why do we define the weight of an object (through SI newtons or customary pounds) as a scalar, such as 5 newtons? Why, despite weight being a force/vector quantity, is the direction not specified? Answer: It's just lazy language; the direction is implicit. If I say "the weight is 4 Newtons", then it's implied, because we're talking about weight, that the direction is "toward the center of the Earth". Similarly, if we say "the thrust on the airplane from the engines is 11,000 lbs", it's implied that the direction is "in the direction the airplane is going".
{ "domain": "physics.stackexchange", "id": 34127, "tags": "terminology, vectors, units" }
What will happen if an ideal current source is connected to a isolated resistance?
Question: I know that an ideal current source is a theoretical thing, but when it comes to solving problems we have to consider these scenarios. Now suppose an ideal current source is connected to an isolated resistance; what should happen? Should current flow in the resistance? I am asking because an isolated resistance has infinite potential, and the ideal current source connected at its other end also has infinite resistance, so what do I make of this: should the current flow or not? Answer: In the "theoretically ideal" world, an "ideal current source" has infinite voltage available, so it can assert its current flow through an infinite resistance, like the case where the other leg of the resistor is not connected to the return port of the current source. In the real world: unless the current path is from the current source, through the resistor, and then back to the current source in a complete loop, no current will flow, because no one has on their lab bench a current source that can push any amount of current through several million ohms of resistance in an open circuit.
{ "domain": "physics.stackexchange", "id": 57514, "tags": "electric-circuits, electric-current, electrical-resistance, short-circuits" }
When convolving two functions that are constants in a region and 0 everywhere else, where does the integration start?
Question: Heads up, this is for homework. I never took a signals and systems course, so I'm behind on this stuff. I want to compute the convolution of two rectangular regions. I know the standard equation for convolution, where the integrals are from $-\infty$ to $\infty$, but it is useful to restrict the integration to the areas that matter. I have one function that has a value of $1/3$ when $-0.5 \le x \le 0.5$ and $0 \le y \le 0.4$. I have a second function that has a value of $1$ when $-1/50 \le x,y \le 1/50$. Where would the bounds for integration be for convolving functions like this? Answer: For simplicity, let's talk about one-dimensional convolutions only. You need to understand that the result of a convolution is a function and the alleged convolution integral is actually an infinitude of integrals, one for each time instant. By this I mean that to compute the output $y(t)$ of a system with impulse response $h(t)$ to an input signal at time $t = 5$, we write $$\begin{align*} y(5) &= \int_{-\infty}^\infty h(\tau)x(5-\tau)\mathrm d\tau,\\ \end{align*}$$ and compute $y(5)$. For $t = 6.5$, we write $$\begin{align*} y(6.5) &= \int_{-\infty}^\infty h(\tau)x(6.5-\tau)\mathrm d\tau,\\ \end{align*}$$ and compute $y(6.5)$. For $t = 10.2$, we write $$\begin{align*} y(10.2) &= \int_{-\infty}^\infty h(\tau)x(10.2-\tau)\mathrm d\tau,\\ \end{align*}$$ and compute $y(10.2)$, and so on. After doing this a while, some smart-ass said, "Hey Ma, I am beginning to see a pattern here! Lookit, I can write $$\begin{align*} y(t) &= \int_{-\infty}^\infty h(\tau)x(t-\tau)\mathrm d\tau, ~\text{for all} ~t\\ \end{align*}$$" and the world was never the same again. Suppose that $h(t)$ is $0$ for $t < 0$ or $t > 12$. Then all the integrals exhibited above simplify in the sense that the limits can be changed to $0$ and $12$ since the integrand, being a multiple of $h(\tau)$, is $0$ when $\tau < 0$ or $\tau > 12$. Suppose in addition that $x(t)$ is nonzero only when $|t| < 3$. 
Now look at the first integral (for $y(5)$ with the limits changed to $0$ and $12$). At $\tau = 0$, the integrand is $h(0)x(5-0) = 0$ since $x(5) = 0$. As $\tau$ increases towards $12$, the argument of $x$ in the integrand decreases from $5$ downwards till the argument hits $-7$ at $\tau = 12$. Thus, the integrand is nonzero only for $\tau \in (2,8)$, having value $h(2)x(5-2) = h(2)x(3)$ at one end of the interval and value $h(8)x(5-8) = h(8)x(-3)$ at the other. We conclude that $$y(5) = \int_{-\infty}^\infty h(\tau)x(5-\tau)\mathrm d\tau = \int_{0}^{12} h(\tau)x(5-\tau)\mathrm d\tau = \int_{2}^{8} h(\tau)x(5-\tau)\mathrm d\tau.$$ Similarly, $$y(6.5) = \int_{-\infty}^\infty h(\tau)x(6.5-\tau)\mathrm d\tau = \int_{0}^{12} h(\tau)x(6.5-\tau)\mathrm d\tau = \int_{3.5}^{9.5} h(\tau)x(6.5-\tau)\mathrm d\tau.$$ However, $$y(10.2) = \int_{-\infty}^\infty h(\tau)x(10.2-\tau)\mathrm d\tau = \int_{0}^{12} h(\tau)x(10.2-\tau)\mathrm d\tau = \int_{7.2}^{12} h(\tau)x(10.2-\tau)\mathrm d\tau$$ (can you see why?) So, there is not a simple answer to your question about the limits of integration; you have to work them out for each value of $t$ for which you want to evaluate the convolution integral. But you don't have to do an infinite amount of work! There are Look Ma moments here and there. Can you combine two of the results above to write that for each $t$, $3 \leq t \leq 9$, we have the useful pattern $$y(t) = \int_{t-3}^{t+3} h(\tau)x(t-\tau)\mathrm d\tau$$ shown above? I hope this will help you gain the understanding that you need in order to complete your homework.
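To see those moving limits numerically (my own sketch, using the answer's example supports: $h$ nonzero on $[0,12]$, $x$ nonzero on $|t|<3$), a crude Riemann sum reproduces the three values worked out above:

```python
import numpy as np

dt = 0.001
tau = np.arange(-20.0, 20.0, dt)
h = ((tau >= 0) & (tau <= 12)).astype(float)     # h(tau) nonzero on [0, 12]

def y(t0):
    # Riemann sum for y(t0) = integral of h(tau) x(t0 - tau) dtau,
    # where x(t) = 1 for |t| < 3 and 0 elsewhere.
    x_shifted = (np.abs(t0 - tau) < 3).astype(float)
    return np.sum(h * x_shifted) * dt

print(y(5.0))   # integrand is 1 only for tau in (2, 8):     about 6.0
print(y(6.5))   # integrand is 1 only for tau in (3.5, 9.5): about 6.0
print(y(10.2))  # integrand is 1 only for tau in (7.2, 12):  about 4.8
```

The widths of those three intervals (6, 6, and 4.8) are the integral values, since the integrand is 1 on each interval; this matches the limits derived in the answer for each $t$.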
{ "domain": "dsp.stackexchange", "id": 169, "tags": "homework, continuous-signals, convolution" }
How can I aggregate/combine 3 columns of a data frame into one column with the sum of the values of the other three in R?
Question: Part of my dataset is shown in this image. I want to combine the columns GR_S01_w1_a, GR_S01_w1_b and GR_S01_w1_c into a single column - GR_S01_w1 - whose values are the sum of the three. I know how to use mutate to add a new column which does this, but I also want to delete the other three, and do this about 100 more times for all the other samples I have. So essentially - I have three replicates of each sample in the form of a column of the format samplename_a, samplename_b and samplename_c, and I want to replace these with a single column, many times over. I have tried using mutate like this - Gregory <- Gregory %>% mutate(GR_S01_w1 = sum(GR_S01_w1_a, GR_S01_w1_b, GR_S01_w1_c)) but for all of the samples that I have this would of course take far too long. Is there a quick way for me to do this (other than manually on excel which is what I'm doing at the moment)? Answer: This can be done by following a number of steps: Use grep to get the groups of columns to sum over Use rowSums over each group of columns base <- c("GR_S01_w1", "GR_S01_w2") cols <- lapply(base, grep, names(Gregory), fixed = TRUE) for (i in seq_along(base)) { Gregory[, base[i]] <- rowSums(Gregory[, cols[[i]]]) } This automates the whole process without defining any names manually (apart from the group names), and without having to transform your dataset to long and then back to wide. Finding the sample names automatically If you also don't want to have to specify the samples by hand, then you can use grep and sub. Here, we make the assumption that your structure is always "sample underscore letter", e.g. sample_d or test_sample_b. We can do this by using grep: relevant_columns <- grep(".*_[a-zA-Z]{1}$", names(Gregory), value = TRUE) base <- unique(sub("(_[a-zA-Z]{1})$", "", relevant_columns)) base # [1] "GR_S01_w1" "GR_S01_w2" What the grep term means: .*: Any number of any characters. _: Presence of an underscore, followed by... 
[a-zA-Z]: Any of the alphabetic letters (lowercase or upper case). {1}: Only one of those. $: This is the end of the word. Next, we just use sub to remove that part, select the unique values, and we're done. This does assume that: There are no other columns that end with _[a-zA-Z]; you can just avoid those columns by inputting names(Gregory)[-1] or whatever columns you DON'T want to consider. Names are only followed by ONE letter, not e.g. two or three.
{ "domain": "datascience.stackexchange", "id": 9657, "tags": "r, statistics" }
Differential equation of a series $RLC$ circuit driven by a DC voltage source?
Question: From math below it seems no oscillations are possible and the steady state reaches instantly. I know this is wrong but I'm new to differential equations and don't see my mistake. Summary: For the initial conditions on capacitor $$v(0) = 0, \qquad i(0) = C v'(0) = 0$$ I'm getting the homogeneous solution constants $c_1 = 0$ and $c_2 = 0$. This means there is no transient response and the response reaches steady-state instantly. How is this possible? Let $v$ denote voltage across capacitor. Differential equation for series RLC circuit in terms of voltage across the capacitor is $$Ri + L\frac{di}{dt} + v = V$$ Since $i = C\frac{dv}{dt}$ it follows $$RC\frac{dv}{dt} + LC\frac{d^2v}{dt^2 } + v = V$$ $$\frac{d^2v}{dt^2} + \frac{R}{L}\frac{dv}{dt} + \frac{1}{LC}v = \frac{V}{LC}$$ With $2p = \frac{R}{L}$ and $\omega_0^2=\frac{1}{LC}$ the differential equation is $$v''(t) + 2pv'(t) + \omega_0^2 v(t) = \omega_0^2 V$$ The homogeneous solution for the above differential equation is $$v_h(t) = e^{-pt} ( c_1 \cos(\omega t) + c_2 \sin(\omega t) )$$ where $\omega^2 = \omega_0^2 - p^2$. From the initial conditions it follows $$v(0) = 0 \implies c_1 = 0$$ $$i(0) = v'(0) = 0 \implies c_2 = 0$$ This means that the differential equation has no transient response! A particular solution is $v_p(t) = V$ and this makes the complete solution! What am I doing wrong? Answer: TL;DR In the procedure you posted you forgot to include particular solution. The homogeneous solution will always evaluate to $0$ when used as a solution to the general differential equation. Homogeneous solution The roots of the differential equation $$v''(t) + 2p v'(t) + \omega_0^2 v(t) = \omega_0^2 V$$ are $q_{1,2} = -p \pm j \sqrt{\omega_0^2 - p^2}$. 
For these roots there are three types of homogeneous solutions: $$v_h(t) = \left\{ \begin{array}{lll} e^{-pt} (A \cos(\omega t) + B \sin(\omega t)), & \omega_0^2 > p^2 & \qquad\text{(Underdamped response)} \\ e^{-pt} (A + B t), & \omega_0^2 = p^2 & \qquad\text{(Critically damped response)} \\ e^{-pt} (A e^{\omega_1 t} + B e^{-\omega_1 t}), & \omega_0^2 < p^2 & \qquad\text{(Overdamped response)} \end{array} \right. $$ where $A$ and $B$ are unknown constants, $\omega^2 = \omega_0^2 - p^2$ and $\omega_1^2 = -\omega^2$. We will now focus only on the underdamped response, as that is the one you analyze in your question. Particular solution The particular solution depends on the input (driver) function and the roots of the differential equation. In your case, the particular solution is $$v_p(t) = K$$ where $K$ is unknown constant. Solving for unknown constants The total solution is a linear combination of homogeneous and particular solutions $$v(t) = v_h(t) + v_p(t) = e^{-pt} \bigl( A \cos(\omega t) + B \sin(\omega t) \bigr) + K$$ If we use this as a solution to the general differential equation we get $$\underbrace{v_h''(t) + 2p v_h'(t) + \omega_0^2 v_h(t)}_{\text{always } 0} + \omega_0^2 v_p(t) = \omega_0^2 V$$ from which it follows $K = V$. The other two unknown constants are determined from initial conditions $v(0) = v_0$ and $v'(0) = v_0'$ $$A + K = v_0, \qquad -p A + \omega B = v_0'$$ from which it follows $A = v_0 - V$ and $B = \frac{1}{\omega} (v_0' + p v_0 - p V)$. Final solution In your special case, $v_0 = 0$ and $v_0' = 0$, and the final solution is $$v(t) = - V e^{-pt} \Bigl( \cos(\omega t) + \frac{p}{\omega} \sin(\omega t) \Bigr) + V$$ The above solution can be written in a more compact form $$\boxed{v(t) = V \Bigl( 1 - \frac{\omega_0}{\omega} e^{-pt} \cos\bigl( \omega t - \arctan \frac{p}{\omega} \bigr) \Bigr) }$$ Response overshoot Note that underdamped (oscillatory) responses naturally have an overshoot. 
This means that at some point the voltage on the capacitor will be higher than the input voltage. After the transient response, the voltage on the capacitor settles to the input DC voltage $$v_f = \lim_{t \to \infty} v(t) = V$$ The response overshoot magnitude can be found as $$\text{PO} = \frac{v(t_m) - v_f}{v_f} \cdot 100\%$$ where $t_m$ is the time of the response maximum. From $v'(t_m) = 0$ it follows that $t_m = k \pi / \omega$ where $k$ is an odd positive number; the largest overshoot occurs at $k = 1$. The overshoot magnitude is $$\boxed{\text{PO} = e^{-k \pi p/\omega} \cdot 100\%}$$
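As a sanity check of the final solution (a sketch with illustrative component values of my own choosing, not taken from the question), the closed form can be evaluated numerically and compared against the overshoot formula with $k = 1$:

```python
import math

# Illustrative underdamped series RLC (assumed values, not from the question).
R, L, C, V = 1.0, 1.0, 1.0, 1.0            # ohms, henries, farads, volts
p = R / (2 * L)                             # from 2p = R/L
w0 = 1 / math.sqrt(L * C)                   # natural frequency
w = math.sqrt(w0 ** 2 - p ** 2)             # damped frequency (w0^2 > p^2 here)

def v(t):
    """Capacitor voltage for v(0) = 0, v'(0) = 0 (the boxed compact form)."""
    return V * (1 - (w0 / w) * math.exp(-p * t)
                * math.cos(w * t - math.atan(p / w)))

# First maximum at t_m = pi / w; fractional overshoot e^{-pi p / w}.
t_m = math.pi / w
po_formula = math.exp(-math.pi * p / w)
po_direct = (v(t_m) - V) / V
print(po_formula, po_direct)
```

Both numbers agree, and the response indeed peaks above the input voltage before settling to $V$.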
{ "domain": "physics.stackexchange", "id": 86434, "tags": "electric-circuits, electrical-resistance, harmonic-oscillator, differential-equations" }
The equipartition theorem in momentum space
Question: Motivated by the answers to this question on turbulence, I'm interested in an explanation and/or derivation/reference of the equipartition theorem in momentum space. To formulate it as a question: If one considers a physical configuration, which admits a description in a dual/momentum space in the framework of statistical mechanics (basically like fields do), what is the realization of the equipartition theorem in these $k$-space terms? I.e. how does it affect the distribution of energy and what consequences do different dispersion relations have on the evolution, or even outcome of that partition? How do we understand statistical mechanics, not in the initial degrees of freedom, which are usually well motivated by the model building process, but in abstract $k$-space terms, and what do we typically expect?

Answer:

General Mumbo-Jumbo about Statistics

When you have any Hamiltonian mechanical system, with degrees of freedom $q_i$, conjugate variables $p_i$, and Hamiltonian $H(q_i,p_i)$, there is a conserved phase space volume, which is just the area in q,p space, defined by the volume element $$\prod_i dp_i dq_i$$ The conservation of phase space volume is Liouville's theorem, and it is easy to prove directly. Now add the plausible but generally next-to-impossible-to-prove assumption of ergodicity, which says that for large enough systems with generic interactions there are no special surfaces on which the motion is confined, so that any motion is as likely as any other.
This is equivalent to the absence of any other conserved quantity which is defined by a nice analytic surface: all the other conservation laws (which necessarily exist, because every point on the trajectory is determined by the initial conditions, so the initial condition values are the other conserved quantities) give intrinsically complicated, mixed-up surfaces which become ever more mixed up in the infinite degree of freedom limit, so points with different values of these phony conserved quantities cannot be meaningfully separated from other points with different values. If this is so, the only real conservation law is the conservation of energy, and this gives you that the correct invariant probability distribution on the phase space is the uniform distribution on the surface H=E (although you must be careful that the measure at any point on the energy surface is defined by the volume in the full phase space between two infinitesimally separated surfaces of energy E and E+dE), and this distribution describes the statistics of nearly every trajectory. This uniform probability distribution on the constant energy surface is the microcanonical ensemble. The log of the volume of the microcanonical ensemble is the entropy S(E). The assumption of ergodicity says that any point in phase space p,q is as likely as any other point, except to the degree that the energy varies in the direction perpendicular to the constant energy surface; the reciprocal of this rate of change is the local microcanonical density. For a subsystem of a large system, dividing the big system into parts 1 and 2, where 1 is big but small compared to 2, you know that the whole thing is described by the microcanonical ensemble, so that the sub-part 1 is described by the microcanonical ensemble with energy $e$, and the other part by the microcanonical ensemble with energy $E-e$.
Statistically speaking, you expect that if system 1 is big, the energy e fluctuates only by a small amount from its average value. The total phase space volume for two independent systems is the product of the volume of each one, so the entropy (which is the log of the volume) is the sum of the entropies: $$ S_1(e) + S_2(E - e) \approx S_2(E) + S_1(e) - {\partial S_2\over \partial E}\, e = \text{const} + S_1(e) - \beta e $$ So that the probability, which is found by exponentiating the entropy, is weighted by a factor of $e^{-\beta e}$. The assumption here is that system 2 is a large bath, so that the derivative of entropy with respect to energy is almost exactly constant over all points of the microcanonical ensemble. This tells you that any sub-part of a large system has a distribution on phase space which is given by the canonical ensemble--- every state has a probability at any point of phase space equal to $$ P(E) = e^{-\beta E} $$ where $\beta = {1\over T}$ defines the absolute thermodynamic temperature (in temperature units where Boltzmann's constant is equal to 1). The exponential suppression of large energies is easy to understand statistically. There is a conserved quantity, the energy, which is global. In order to absorb a unit of energy, you have to pay a probability cost which is uniform, but otherwise, there is no constraint on the motion. This is the maximum-entropy interpretation of the Boltzmann state--- each subsystem pays a probability cost for absorbing a unit of energy, and this probability cost is adjusted by making sure the total energy is whatever it is. The maximum entropy distribution for any conserved quantity is found by imposing a probability cost for absorbing each unit of the conserved quantity, so that each subsystem is only probabilistically constrained by the amount of each conserved quantity that it has.
The log of the probability cost for the quantity is the thermodynamically conjugate variable (or, rather, it would be, if thermodynamics had developed logically rather than historically. In actual life, people multiply all the thermodynamic conjugate quantities by the temperature for no good reason, to turn the more fundamental extra log-probability quantity into a less fundamental quantity which is the extra free energy per unit conserved quantity, so that the thermodynamically conjugate quantity to U has units of energy per unit U, rather than (dimensionless, quantumly additively unambiguous) entropy per unit U. I try not to use this otherwise universal convention, because I think it is wrongheaded. Also, it is good to use $\beta$ instead of T most of the time, since $\beta$ is the thermodynamically conjugate variable to E.) The thermodynamic formalism is not just a solution to the problem of deriving thermodynamics--- it also gives a solution to the generally unsolvable problem of the statistics of a generic motion in a mechanical system. If you ask "what does a typical motion look like for a subsystem of a big system without any conserved quantities other than the energy?", the answer is that it can only look like a canonical ensemble for the individual subsystems that make up the system. If it doesn't look like this yet, it must be in a special state, and this state will be unstable to spreading out in phase space. When the state is fully randomized, the statistics will be those of the canonical ensemble with a temperature determined by matching the energy at temperature T to the total energy. This is remarkable, because the general problem of describing a deterministic thing statistically is impossibly difficult to treat mathematically in any rigorous way.
If you take a Rubik's cube, and shuffle it by a sequence of moves of the form "turn the front face clockwise, then turn the cube in a direction determined by the next digit of pi modulo 4, and repeat", you will not be able to prove anything about the state you get. But it is obvious that the probability distribution of the colors you see will always eventually be indistinguishable from the uniform distribution on all Rubik's cube configurations, even though you can't prove anything of the sort with any rigor. For another, simpler, example, if you take a long string of binary digits which ends with "1", shift it one position to the left by concatenating a 1 at the rightmost position, add the two strings in binary, and shift the result of the addition to the right until you get rid of all the zeros on the right, you have a deterministic procedure on bit strings which you can iterate. It is clear by doing it a few times that you always quickly get a randomized pattern of bits, where any pattern of 1s and 0s is equally likely in any small window. This process is well known in mathematics--- it is the 3n+1 procedure, the Collatz problem. It is a simple consequence of eventual randomization that the 3n+1 Collatz conjecture is true, that all finite patterns reach "1" eventually (because all infinite random bit strings are shifted to the right after many iterations with probability 1). But to prove this conjecture rigorously is well beyond current mathematical methods. So proving that a deterministic system turns random in any meaningful way is generally extremely difficult. Even so, seeing that it turns random is generally not difficult--- you can identify the stochasticity by eye and by simple statistical tests.
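The bit-string procedure described above is easy to implement; a minimal sketch (the step counts are simply whatever the iteration produces, and no claim is made beyond the examples tried):

```python
def collatz_steps(n):
    """Apply the bit-string procedure to an odd integer n:
    'shift left and append a 1' is 2*n + 1, adding that to n gives 3*n + 1,
    and 'shift right past the trailing zeros' strips the factors of two."""
    assert n > 0 and n % 2 == 1
    steps = 0
    while n != 1:
        n = 3 * n + 1            # n + (2*n + 1)
        while n % 2 == 0:        # drop trailing zeros of the binary string
            n //= 2
        steps += 1
    return steps

# Every odd number tried empirically reaches 1 (the Collatz conjecture).
print([collatz_steps(n) for n in (27, 97, 871)])
```

Printing the intermediate values in binary shows the quickly randomized bit patterns the answer refers to.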
Further, it is often not difficult to identify what the correct probability distribution should be, once it turns random, just by identifying the conserved quantities in the problem, and making a distribution function from these conserved quantities which is preserved under time evolution. This is the source of many conjectures. Boltzmann solved this problem of statistics of the invariant distribution for mechanical systems in equilibrium: the general solution to the problem of the statistics of the deterministic motion on a subsystem, starting from any initial conditions, and for long enough times, is given by the Boltzmann distribution defining the canonical ensemble. So nontrivial statistical problems in deterministic systems are physics, not rigorous mathematics, barring a major breakthrough in mathematical methods. The reason is that the physicist does not worry about justifying randomization in a rigorous sense, only in a scientific sense--- if a system looks random and passes the appropriate statistical tests, it is scientifically random (although, of course, mathematically you haven't proved anything). You can think of this as an ad-hoc axiom schema about the statistical properties of various deterministic automata, although if improperly formulated, some of these axioms will be false, and I don't think that very many of these axioms are likely to be independent of powerful enough set-theoretic axioms, and all of them should probably be resolved by large enough cardinals or finitistic analogs of large cardinals. So they are not really new axioms per se, just an infinite list of 3n+1 Collatz-style conjectures, obviously true yet incredibly difficult to prove. The upshot is that, when you have your physicist hat on, you should take any such randomization result for granted. In particular, you take the ergodicity hypothesis for granted, when you can't find conserved quantities, and numerical integration shows that they are not present.
Equipartition

Once you understand the Boltzmann distribution, equipartition is very easy. Consider a mechanical system consisting of n oscillators, with Hamiltonian $$H = A_{ij} p_i p_j + B_{ij} q_i q_j $$ where $A$ and $B$ are two positive definite matrices, only the symmetric part of which is important, so take them symmetric. The canonical ensemble distribution for the states is, after a diagonalization rotation of p and q, a product of Gaussians: $$ P(p,q) = \prod_i e^{- \beta( A_i p_i^2 + B_i q_i^2)} $$ And a direct calculation shows that the expected value of any $A_i p_i^2$ or $ B_i q_i^2 $ is just ${1\over 2\beta}$. This is the equipartition theorem. Every mechanical oscillator in thermal equilibrium has $T/2$ kinetic and $T/2$ potential energy, and in general every quadratic term carries $T/2$. Note that this result does not depend on the stiffness of the oscillator. If the oscillator is stiffer, so that $B_i$ is big, the oscillations are smaller in amplitude, but still carry the same energy. So if you have fast oscillators and slow ones, classically, they are only in equilibrium when they are at the same temperature, so that they are oscillating with the same energy. For higher order terms, if the potential goes as $q^4$ or $q^8$, the result is different, but generally, the kinetic energy is always quadratic in the momentum. In the limit of a $|q|^\infty$ potential, the box potential, there is 0 potential energy but still T/2 kinetic energy. So the energy in a given mode is generically between $T/2$ and $T$, and always of the order of T. This result applies for any subsystem with a separated energy, so that the interactions between the subsystem and the rest of the system are small compared to the internal interactions, and the conditions of thermal equilibrium apply. If certain oscillators are damped, these oscillators lose energy, and become cold. Then energy flows in from oscillators which are not damped, according to the internal thermal gradient.
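The stiffness-independence of equipartition is easy to check numerically (my own sketch, with arbitrary stiffness values): the average of each quadratic term under the Boltzmann weight comes out to $T/2$ regardless of the coefficient.

```python
import math

# Numerical check: under the weight exp(-beta * c * x^2), the average of
# c * x^2 is T/2 = 1/(2*beta), independently of the stiffness c.
beta = 2.0                      # inverse temperature, so T = 0.5

def quadratic_average(c, n=100001, cut=20.0):
    """<c * x^2> under weight exp(-beta * c * x^2), by simple quadrature."""
    h = 2 * cut / (n - 1)
    num = den = 0.0
    for i in range(n):
        x = -cut + i * h
        w = math.exp(-beta * c * x * x)
        num += c * x * x * w
        den += w
    return num / den

for c in (0.3, 1.0, 7.5):       # soft, medium, stiff oscillators
    print(c, quadratic_average(c))   # all close to T/2 = 0.25
```

The stiffer oscillator has smaller-amplitude fluctuations, but the quadrature ratio lands on the same $1/(2\beta)$ every time.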
The description can sometimes be by non-equilibrium thermodynamics, where you assume that different parts of the system are locally in equilibrium, but with a temperature which is different from part to part. For fluids, the flow at the smallest scales is always cold, because it is damped by viscosity, while you are stirring the largest scales, so these are always hot. The flow of energy from hot to cold unfortunately is not well described by non-equilibrium thermodynamics, because the assumption that each k region is in a local thermodynamic equilibrium is not true. Nevertheless, this point of view is a useful first approximation to the turbulence problem.

Ultraviolet Catastrophe

For fields, the equipartition of energy leads to the famous ultraviolet catastrophe. To see this, you formulate the field statistical problem in k-space. I'll use a simple field where the Fourier description is obviously by coupled oscillators (this is not so simple for the Navier-Stokes equations, because any physical oscillation frequency depends on the nonlinearity crucially). I'll use quartic field theory, with Hamiltonian $$ H = \sum_i {1\over 2} (|\Pi_i|^2 + |\nabla \phi_i|^2) + V(\phi) $$ where $\Pi_i = \dot{\phi_i}$ is the conjugate momentum to the field $\phi$, derived from the usual relativistic Lagrangian. $$ V(\phi) = \sum_{i,j,k,l} \lambda_{ijkl} \phi_i \phi_j \phi_k \phi_l $$ where the $\lambda$'s are chosen so that the potential is non-negative in every direction in field space. The quartic interaction will make the field system non-integrable in general, so that the field modes will all be coupled in a nonlinear way, which should lead to ergodic mixing in phase space, with an approach to equilibrium. This gives a Boltzmann distribution on phase space $\phi,\Pi$, $$P(\phi,\Pi) = e^{-\beta (\sum_i {1\over 2} |\Pi_i|^2 + {1\over 2} |\nabla \phi_i|^2 + V(\phi) )} $$ which, ignoring the nonlinearity, is just a bunch of oscillators.
In terms of the Fourier variables $\Pi(k),\phi(k)$, it is $$ P(\phi,\Pi) = e^{-\beta \sum_{i,k} \left( {1\over 2} |\Pi_i(k)|^2 + {1\over 2} k^2|\phi_i(k)|^2 \right) - \beta V(\phi)} $$ which is quadratic in the momentum and the position, ignoring the potential V, and so gives the equipartition for field modes--- each k mode has energy ${1\over \beta}$ in thermal equilibrium, split evenly between potential and kinetic energy. The thermal equilibrium state at any temperature consists of fluctuating fields with a divergent amount of energy, which if you cut off k at wavenumber $\Lambda$, will diverge as $\Lambda^4$. This is just like the vacuum energy problem in quantum fields, except here it is physical--- a classical field cannot reach thermal equilibrium. If you have total initial energy E and cutoff $\Lambda$, you will equilibrate at a temperature which is something like $T = E/\Lambda^4$, which goes to zero as the cutoff goes away. The system just dumps all the energy into progressively smaller wavelengths, dividing it into smaller and smaller parcels in the process. I ignored the quartic term completely in the above analysis, but it is easy to see that the quartic nonlinearity can't affect the result too much. At worst, it can drive the equilibrium potential energy in a given mode to 0, leaving only ${1\over 2\beta}$ kinetic energy in the mode. It doesn't affect the kinetic energy at all. Really, the quartic term doesn't have to affect the potential energy very much, because you can tune $\lambda$ close to 0, in which case it will slow down the mixing time between different modes, but it will still lead to thermalization over time, which still sucks the energy out of any large wavelength modes into the smallest wavelength modes, in accordance with the ultraviolet catastrophe expectations. This observation, that classical fields suck all the energy into the tiniest wavelength modes, is due to Einstein, Rayleigh, and Jeans.
There is some history of science literature on the proper attribution of this result, due to Thomas Kuhn, which says that the result is new in 1905, and did not motivate Planck. I am not sure if this analysis is correct (for a probably inaccurate discussion, see the talk page of the Wikipedia article on ultraviolet catastrophe--- warning, I didn't read the original literature, Kuhn did, and Kuhn says Planck didn't care about equipartition, but I think Planck and others knew about it anyway). The inability of fields to get to thermal equilibrium famously led to quantum mechanics, but it is also an important insight for the problem of classical turbulence. Classical turbulence should be viewed as the attempt of a classical field to thermalize in those situations where it is constantly driven by an energy input, and where the energy leaks to smaller wavelengths in an irreversible way, because there are infinitely many modes down there. If the energy is constantly replenished, you end up with a steady state energy flow downward in k space. The k-space energy flow is somewhat analogous to the flow of heat along a thermally conducting material from a hot reservoir to a cold reservoir. It's not exactly analogous, because the local concept of temperature is more iffy.

Nonequilibrium thermodynamics of turbulent mixing

Suppose that you drive the field at low k with a stirring force, and suppose that the stirring force produces low-k modes in local thermal equilibrium, so that the low-k modes are populated according to the Boltzmann distribution for the low-k modes, but the high-k modes are not populated at all. The low-k modes try to thermalize higher k. The thermalization process depends on the nonlinear mixing, so you have to examine how that works.
In Fourier space (transforming the space but not the time), the nonlinear term is $$ \sum_{k_1, k_2, k_3, k_4} \lambda_{ijkl} \phi_i(k_1) \phi_j (k_2) \phi_k(k_3) \phi_l(k_4) \delta(k_1 + k_2 + k_3 + k_4) $$ The delta function enforces translation invariance, and it is important, because it says that modes of size |k| can only interact with each other to make modes of size bounded by 3|k|, at worst, and typically only of size $2|k|$. This means that the flow of energy is sort-of local in k-space, because the mixing nonlinearity can't push the energy in one step from small |k| to very large |k|, it can only add a factor of $\log 2$ to the log of the size of k. This is obviously true for any polynomial term in a nonlinear equation. If you assume that the nonlinear coefficient $\lambda$ is small enough, then the modes interact perturbatively over many linear oscillation cycles, so that the condition for resonant interaction is that the energy is conserved as well. This is best expressed by using an additional delta-function in the "energy" (quantum mechanically, this is the conservation of energy--- this isn't quantum, so it is just a resonant frequency matching condition) $$ \delta(\epsilon(|k_1|) + \epsilon(|k_2|) + \epsilon(|k_3|) + \epsilon(|k_4|)) $$ where for the case of the previous equation $\epsilon(|k|) = |k|$. This condition is not so restrictive: it doesn't prevent a long-distance cascade in k-space. But if you change the equation to break Lorentz invariance (but not rotational invariance), and make $\epsilon(|k|) = k^{2N}$ where N is large, you can find a limit where the turbulence is described precisely by a local flow of energy in k space.
The reason is that the constraint of energy conservation, or resonant mode interaction, for large N requires that the length of the sum vector must be equal to the length of the longest vector of $|k_1|,|k_2|,|k_3|$, up to a small correction which goes as $1/N$ times the 2N-th power of the ratio of the second-longest k to the longest k. This means that the dynamics in k-space becomes entirely local, with k's only sourcing neighboring k's, because far-away k's are not at all resonant, their frequency being completely different. The resulting cascade for the dispersion relation $\epsilon(|k|) = k^{2N}$ with a weak quartic coupling is therefore described by a local thermal equilibrium with a temperature that depends only on |k|. This observation seems interesting, but I am not sure if it is new. In this model, it is straightforward to calculate all properties of the turbulent cascade from a thermal gradient on the k-space.
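The locality claim is easy to see numerically (a sketch with made-up wavevector magnitudes): for $\epsilon(|k|) = k^{2N}$, the frequency-matched fourth wavevector is pinned to the longest of the other three as $N$ grows.

```python
def resonant_k4(ks, N):
    """Magnitude of the fourth wavevector fixed by frequency matching for
    eps(k) = k^(2N):  k4 = (sum of k_i^(2N))^(1/(2N))."""
    return sum(k ** (2 * N) for k in ks) ** (1.0 / (2 * N))

ks = (0.5, 0.8, 1.0)             # arbitrary example magnitudes
for N in (1, 5, 50):
    print(N, resonant_k4(ks, N) / max(ks))   # ratio approaches 1 as N grows
```

At $N = 1$ the matched mode can sit noticeably above the longest input; by $N = 50$ it is indistinguishable from it, which is the "only neighboring k's are resonant" statement in miniature.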
{ "domain": "physics.stackexchange", "id": 2398, "tags": "statistical-mechanics, specific-reference, field-theory, turbulence" }
I was told whenever we have equi-spaced equi-potential planes, field must be constant. I seek the proof of this statement
Question: Y-axis vertical line and the other is X-axis Answer: You should have presented a better pic for the question. I will provide a crude explanation. To prove it, one should remember that $$ E = -\nabla \phi $$ and that the equipotential surfaces $\phi = $ const are orthogonal to the lines of $E$. These surfaces are all parallel, therefore, the vector $E$ points in the same direction at every point. The difference of potentials between every two points 1 and 2 is: $$ \phi(2) - \phi(1) = - \int \limits_1^{2} \vec{E} \cdot d \vec{x} $$ Because the planes are located equispaced, taking, for example, an infinitesimal distance between 1 and 2, one finds that for a displacement by the same fixed amount $dx$ along the direction of $E$, the potential changes by the same value $d \phi$. Therefore, $E$ is also constant when moving along the direction of $\vec{E}$.
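A finite-difference illustration of the same argument (the spacing and potential values below are made up): equispaced planes carrying equally stepped potentials force the same field value between every pair of neighbours.

```python
# Equispaced equipotential planes: potential drops by the same amount each step.
dx = 0.1                                        # plane spacing (arbitrary units)
phis = [10.0 - 2.0 * i for i in range(6)]       # 10, 8, 6, ... on successive planes
# E = -d(phi)/dx between each pair of neighbouring planes
E_fields = [-(phis[i + 1] - phis[i]) / dx for i in range(5)]
print(E_fields)   # identical everywhere, so the field is uniform
```

If the planes were equispaced but the potential steps unequal (or vice versa), the list would no longer be constant, which is the contrapositive of the claim.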
{ "domain": "physics.stackexchange", "id": 69424, "tags": "homework-and-exercises, electrostatics, electric-fields, potential, gauss-law" }
Evaluate clustering by using decision tree unsupervised learning
Question: I am trying to evaluate some clustering results that a company did for some data, but they used an evaluation method for clustering that I have never seen before. So I would like to ask your opinion, and obviously if someone is aware of this method it would be great if he/she could explain the whole idea to me. Clusters have been made on the data set (sample of 250000 rows and 5 features out of 500000 rows) by using k-prototypes, as one of the features is categorical. All the combinations of k = 2:10 and lambda = c(0.3,0.5,0.6,1,2,4,6.693558,10) have been tried, and 3 methods to figure out the best combination have been used:

Elbow method (pick the number of clusters and lambda with the min WSS)

Silhouette method (pick the number of clusters and lambda with the max silhouette)

Decision tree: they built a decision tree for the data and after that they calculated for every different clustering combination the following value: (inverse leaf size weighted within cluster purity) * cluster size / total obs, and they picked the combination which had the max value (k=10 and lambda=4).

So my question is: Is there such a thing? Can we use the tree to identify which combination will give us higher cluster purity? Also, if we can do that, can we just use a simple tree without even evaluating how good or bad the tree is? And finally, as every single method is giving us different answers, how can we decide which one to use to pick the right combination? I would really appreciate it if someone could help me with that. Thanks in advance!
I heard of this calculation for the first time (I know some other measures very similar to it), but it makes sense, since it still measures how distinct from each other and how dense the clusters are. Second (and the most used one), fitting a decision tree by assigning cluster labels as class labels. The fit here should be an overfit (training error should be nearly 0). This lets you, when you have a new customer (let's say segmentation in e-commerce), skip calculating all the distances to find the cluster: you just predict the new customer with the tree and assign the cluster = segment label to her/him. Also if we can do that can we just use a simple tree without even evaluate how good or bad tree is? No, you cannot. The tree must fit well (nearly overfit), since you want to obtain cluster separation rules from your data rather than a generalizing model. And finally, as every single method is giving us different answers how can we decide and pick which one to use to pick the right combinations? At that stage you should ask: why are we doing this segmentation? With all possible cluster models, try to simulate your business approach and compare the results. Hope it helps!
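For concreteness, here is one possible pure-Python reading of the quoted score; the exact formula the company used is not spelled out in the question, so treat the weighting below as an assumption of mine, and the data as toy data:

```python
from collections import Counter

def leaf_weighted_purity(leaves, clusters):
    """One possible reading of '(inverse leaf size weighted within cluster
    purity) * cluster size / total obs' (assumed interpretation): for each
    cluster, average its within-leaf purity over its members, weighting each
    member by the inverse size of its leaf, then weight clusters by their
    share of all observations."""
    total = len(leaves)
    leaf_counts = Counter(leaves)                    # leaf -> size
    pair_counts = Counter(zip(leaves, clusters))     # (leaf, cluster) -> size
    score = 0.0
    for cluster, c_size in Counter(clusters).items():
        acc = w = 0.0
        for (leaf, c), n in pair_counts.items():
            if c != cluster:
                continue
            purity = n / leaf_counts[leaf]           # cluster's share of the leaf
            acc += (1.0 / leaf_counts[leaf]) * n * purity
            w += (1.0 / leaf_counts[leaf]) * n
        score += (acc / w) * (c_size / total)
    return score

# Toy data: leaves that separate the clusters perfectly score 1.0,
# a single mixed leaf scores lower.
perfect = leaf_weighted_purity(["a", "a", "b", "b"], [0, 0, 1, 1])
mixed = leaf_weighted_purity(["a", "a", "a", "a"], [0, 0, 1, 1])
print(perfect, mixed)
```

Whatever the exact weighting, the idea is the same: clusterings whose members land in pure tree leaves score higher, so the score rewards cluster boundaries that the tree can reproduce.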
{ "domain": "datascience.stackexchange", "id": 6444, "tags": "r, clustering, decision-trees, k-means, model-evaluations" }
How can a railcar withstand high pressure but fail under a vacuum?
Question: A 25,000 gallon (95,000 litre) bulk chemical storage railcar can store products with vapor pressures in excess of 200 psia (1.38 MPa). The same railcar can not withstand a vacuum when being unloaded. I want to understand why. A bulk chemical storage car in this example (assume refrigerant on a warm day) is unique in its design in the sense that it is a pressure vessel that contains well over 200 psig (1.38 MPa) internal pressure, whereas many metal tanks are rated for far lower pressures; thus the pressure differential between atmosphere and the inside of the metal tank may typically be close to the difference found in this example, 14.7 psi (101 kPa) or less. The definition of a metal tank can be open to interpretation not only with regard to pressure ratings but also wall thicknesses. One of the answers posed refers to plastic or aluminum soda containers. The material properties are far different than those typically found in the types of rail cars I have presented here.
The basic principle works exactly the same way with simpler geometry, for example Euler buckling of a column. If you apply a tension load to a column, the only possibility is to stretch the material and make the column longer. But if you apply a compressive load, you can make the column shorter without changing the length of the material, when it buckles into a curved shape.
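For the column case the answer mentions, the classical Euler result can be put in numbers (the geometry and material values below are illustrative choices of my own, not from the question):

```python
import math

def euler_critical_load(E, I, L, K=1.0):
    """Euler buckling load for a slender column: P_cr = pi^2 * E * I / (K*L)^2.
    K is the effective-length factor (1.0 for pinned-pinned ends)."""
    return math.pi ** 2 * E * I / (K * L) ** 2

# Illustrative numbers: a 2 m slender steel rod, E = 200 GPa,
# second moment of area I = 1e-8 m^4.
P = euler_critical_load(E=200e9, I=1e-8, L=2.0)
print(P)   # ~4.9e3 N in compression before buckling
```

The same rod loaded in tension is limited only by material yield, typically orders of magnitude higher, which mirrors the pressure-vessel asymmetry: stretching the wall is hard, crumpling it is cheap.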
{ "domain": "physics.stackexchange", "id": 37008, "tags": "pressure, solid-state-physics, vacuum" }
How to evaluate triple extraction in NLP?
Question: In my current NLP work, I am extracting triples using the triple extraction functions in the Stanford NLP and spaCy libraries. I am looking for a good method to evaluate how good the extraction has been. Any suggestions? Answer: The standard evaluation method works for this kind of task: measure precision, recall and F1-score on a manually annotated sample. In general one can find which evaluation measure is standard for a particular task in the literature. For example this paper seems to address the topic (I didn't read it).
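A minimal sketch of such an evaluation, treating each (subject, relation, object) triple as a single unit and using exact matching against a gold sample (the triples below are made up; real evaluations often also score partial matches):

```python
def triple_prf(predicted, gold):
    """Precision/recall/F1 over sets of triples, exact match only."""
    pred, ref = set(predicted), set(gold)
    tp = len(pred & ref)                       # true positives
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(ref) if ref else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Made-up example triples:
gold = {("cats", "eat", "fish"), ("fish", "live_in", "water")}
pred = {("cats", "eat", "fish"), ("cats", "eat", "water")}
print(triple_prf(pred, gold))
```

Exact match is strict: normalizing case, lemmas, or entity spans before comparison is a common relaxation, but that choice should be reported alongside the scores.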
{ "domain": "datascience.stackexchange", "id": 10366, "tags": "machine-learning, nlp, text-mining, text-classification, stanford-nlp" }
How many frames can human eye see?
Question: What's the limitation of our eyes? Can we differentiate 60 fps from 120 fps? Are new 144 Hz monitors just a marketing trick? Couldn't find any proper journals or studies about the matter. Answer: Different parts of the eye have different response speeds. The corner of your eye doesn't see color, but is fast; the center sees color, and is slower. This means that when you look at a 60 Hz monitor straight-on, the image is perfectly steady; but when you look at it from the corner of your eye, it is flickering. As you go to even higher frequencies of refresh, even the rods don't respond fast enough. This makes sense from an evolutionary perspective. When the saber-toothed tiger jumps at you, you need to know about it - quickly. You don't need to know its color. So using the faster rods (sensitive, fast, no color sense) at the edge of the field of view is a good survival strategy. But since we can't move very far in 1/100th of a second, there is no need for sensors that respond at that speed. The difference is real, and can be perceived. In the corner of your eye, for most people. Incidentally, the rendering of fast motion is helped by the higher frame rate; if you show a bright object against a dark background moving left-to-right across the screen in 1/30th of a second, the brain will notice the difference between "two images comprise the full motion" and "four images comprise the full motion", even if you don't really perceive the individual frames. You will see a smoother action when more frames make up the motion: after all, in real life you really see "infinitely many frames" even though they blur together.
{ "domain": "physics.stackexchange", "id": 27494, "tags": "biophysics, biology, vision, perception" }
Dirac delta function mathematical expression proof
Question: In a discussion of the second order transitions in graphene this mathematical expression is used. $$ \left|\frac{1}{\varepsilon + i \Gamma/2}\right|^2 = \frac{2\pi}{\Gamma}\delta(\varepsilon) $$ And I'm kind of confused right now. Can someone prove this equation? Answer: OP's formula is derived from the Poisson kernel representation $$ \delta(\varepsilon)~=~\lim_{\Gamma \to 0^+}\frac{1}{\pi}\frac{\Gamma/2}{|\varepsilon+i\Gamma/2|^2} $$ of the Dirac delta distribution.
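The limit can be checked numerically (a sketch; the grid and cutoff values are arbitrary choices of mine): the Lorentzian keeps unit area while its peak sharpens as $\Gamma \to 0$, which is exactly the delta-sequence behavior.

```python
import math

def lorentzian(eps, gamma):
    """The Poisson kernel above: (1/pi) * (gamma/2) / (eps^2 + gamma^2/4)."""
    return (gamma / (2 * math.pi)) / (eps * eps + gamma * gamma / 4)

def area(gamma, cut=100.0, n=400001):
    """Riemann-sum approximation of the integral over [-cut, cut]."""
    h = 2 * cut / (n - 1)
    return h * sum(lorentzian(-cut + i * h, gamma) for i in range(n))

areas = {g: area(g) for g in (1.0, 0.1, 0.01)}
peaks = {g: lorentzian(0.0, g) for g in (1.0, 0.1, 0.01)}
print(areas)   # all close to 1
print(peaks)   # peak height 2/(pi*gamma) grows as gamma shrinks
```

Constant area plus an unboundedly growing, narrowing peak is the defining property of a nascent delta function, which is what lets one trade $|1/(\varepsilon + i\Gamma/2)|^2$ for $(2\pi/\Gamma)\,\delta(\varepsilon)$ in the small-$\Gamma$ limit.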
{ "domain": "physics.stackexchange", "id": 57608, "tags": "mathematical-physics, mathematics, dirac-delta-distributions, fermis-golden-rule" }
Is the interval $ds^2$ NOT invariant under translation in an inhomogenous space?
Question: In Chapter 9 (Symmetries), Section 9.1 (The Killing vectors, page 101), Killing vectors are defined such that an infinitesimal translation along the vector keeps the line element invariant. It means that if the Killing vector equation doesn't have a solution, the space is not homogeneous. My doubt is: If the space is not homogeneous, the line element is not an invariant. This contradicts that scalars are invariant under any general coordinate transform. Where am I wrong? Are tensors not covariant in an inhomogeneous space? Answer: These are two different concepts. 1) The transformation under which the geometry, i.e. the spacetime interval $ds$, is invariant is expressed infinitesimally as a motion in the direction of a Killing vector $K^\mu$. The vector $K^\mu$ is said to generate the isometry. 2) A scalar is invariant under a general coordinate transformation. In a coordinate transformation the physical point in spacetime does not change, but it is renamed. Similar considerations hold for tensors. Any combination of tensors that results in a scalar is invariant under a general coordinate transformation.
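For completeness, a textbook sketch of where the Killing equation comes from (not part of the original answer): demanding that the infinitesimal motion $x^\mu \to x^\mu + \epsilon K^\mu$ leave $ds^2$ unchanged means the Lie derivative of the metric along $K$ vanishes.

```latex
% Invariance of the metric under x^mu -> x^mu + epsilon K^mu:
\begin{align}
  \delta g_{\mu\nu}
    &= \epsilon \left( K^\lambda \partial_\lambda g_{\mu\nu}
       + g_{\lambda\nu}\,\partial_\mu K^\lambda
       + g_{\mu\lambda}\,\partial_\nu K^\lambda \right)
     = \epsilon\, \mathcal{L}_K g_{\mu\nu} = 0 , \\
  \mathcal{L}_K g_{\mu\nu}
    &= \nabla_\mu K_\nu + \nabla_\nu K_\mu = 0
  \qquad \text{(Killing's equation).}
\end{align}
```

When this equation has no solution, the metric admits no isometry in any direction, which is the precise sense in which the space fails to be homogeneous; scalars built from tensors remain coordinate-transformation invariants regardless.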
{ "domain": "physics.stackexchange", "id": 59562, "tags": "general-relativity, symmetry, metric-tensor, tensor-calculus" }
Can I modify portion of the global plan in TrajectoryPlannerROS?
Question: Hi there, I would like to know if it is possible to change the length of the "portion of the global plan that the local planner is currently attempting to follow" as stated in the wiki for the published topic. The wiki says for the goal_distance_bias parameter: "The weighting for how much the controller should attempt to reach its local goal". I assume the local goal is the end of the portion of the local plan, is that correct? I want to use the TrajectoryPlannerROS plugin in move_base to follow a global path, but the robot tends to drive towards walls and gets stuck when driving around corners. I think I could improve the performance if the portion of the global plan was shorter, because the robot would not try to take shortcuts that lead into walls. Basically I want the robot to follow the path more closely; however, I don't want to increase the path_distance_bias because a higher path_distance_bias means the robot can't avoid obstacles on the path. I'm not sure what impact the local path length has on navigation, but I think it's worth trying. So can I somehow influence the length of the current portion of the path? If it's possible, how would I do this? Of course I surely can get the source code and make a new planner, but I don't want to do this if it can be avoided. Thanks in advance for anything that might help me improve the local planner behavior. UPDATE1: Perhaps I could improve the behavior of the local planner by changing some other parameters, but I can't get it to work reliably. The footprint matches the actual robot plus some extra space for movement of the kinect and some padding. I don't think further increasing the footprint will help me. I tried different settings for the local costmap, but it did not help either. When I use high cost values around the obstacles the robot has problems going through doors.
The idea behind my idea was , that it seems pretty useless scoring a local trajectory by "heading towards goal" if the goal is in a different room. I have an image of rviz: imagebam.com http://thumbnails46.imagebam.com/17875/e103d8178747299.jpg As you can see, the robot is heading toward the wall, where the goal is behind. If I could dynamically change the length of the plan portion, I could perhaps make the planner head towards the door. Though I'm not sure if this really works. I noticed I can improve the planner's behavior by increasing the sim_time parameter (to about 10 sec), but it does not work in every situation. Here is a parameter dump of the relevant parameters: TrajectoryPlannerROS: {acc_lim_th: 1, acc_lim_theta: 1.0, acc_lim_x: 1.0, acc_lim_y: 0.0, angular_sim_granularity: 0.025, controller_frequency: 20, dwa: true, escape_reset_dist: 0.1, escape_reset_theta: 1.57079632679, escape_vel: -0.1, gdist_scale: 0.8, global_frame_id: odom, goal_distance_bias: 0.8, heading_soring: false, holonomic_robot: false, latch_xy_goal_tolerance: true, max_rotational_vel: 0.3, max_vel_theta: 1.0, max_vel_x: 0.3, min_in_place_rotational_vel: 0.05, min_in_place_vel_theta: 0.4, min_vel_theta: 0.0, min_vel_x: 0.05, occdist_scale: 0.05, oscillation_reset_dist: 0.02, path_distance_bias: 0.6, pdist_scale: 0.6, publish_cost_grid_pc: true, restore_defaults: false, sim_granularity: 0.025, sim_time: 3.0, simple_attractor: false, vtheta_samples: 20, vx_samples: 20, xy_goal_tolerance: 0.1, y_vels: '-0.3,-0.1,0.1,-0.3', yaw_goal_tolerance: 0.1} aggressive_reset: {reset_distance: 1.84} base_global_planner: navfn/NavfnROS base_local_planner: base_local_planner/TrajectoryPlannerROS clearing_rotation_allowed: true conservative_reset: {reset_distance: 3.0} conservative_reset_dist: 3.0 controller_frequency: 20.0 controller_patience: 5.0 local_costmap: cost_scaling_factor: 4.0 footprint: 
'[[0.217,0.33],[-0.325,0.33],[-0.325,0.25],[-0.53,0.13],[-0.53,-0.13],[-0.325,-0.25],[-0.325,-0.33],[-0.0936,-0.33],[-0.0936,-0.51],[0.397,-0.51],[0.397,-0.079],[0.31,0.083],[0.217,0.083]]' footprint_padding: 0.03 global_frame: map height: 10 inflation_radius: 0.65 kinect: {clearing: true, data_type: PointCloud, marking: true, min_obstacle_height: 0.1, obstacle_range: 2.5, raytrace_range: 3.0, sensor_frame: kinect_link, topic: kinect/cloud} laser_scan_sensor: {clearing: true, data_type: LaserScan, marking: true, sensor_frame: base_scan_link, topic: base_scan/scan} lethal_cost_threshold: 100 map_topic: map map_type: voxel mark_threshold: 0 max_obstacle_height: 1.8 max_obstacle_range: 2.5 observation_sources: laser_scan_sensor kinect obstacle_range: 9.5 origin_x: 0.0 origin_y: 0.0 origin_z: 0.0 publish_frequency: 2.0 publish_voxel_map: true raytrace_range: 9.5 resolution: 0.05 restore_defaults: false robot_base_frame: base_footprint robot_radius: 0.46 rolling_window: true static_map: false track_unknown_space: true transform_tolerance: 0.3 unknown_cost_value: 0 unknown_threshold: 15 update_frequency: 5.0 width: 10 z_resolution: 0.2 z_voxels: 10 oscillation_distance: 0.5 oscillation_timeout: 0.0 planner_frequency: 0.0 planner_patience: 5.0 recovery_behavior_enabled: true restore_defaults: false shutdown_costmaps: false Originally posted by Kai Bublitz on ROS Answers with karma: 357 on 2012-03-07 Post score: 1 Answer: I know this may not directly answer your question, but I would recommend first tuning your robot_footprint and inflation_radius parameters. By increasing the robot's footprint to the correct size, it will automatically cause the robot to leave more space around obstacles (the corner) and could potentially solve your problem without having to modify base_local_planner. See costmap_2d for descriptions on the robot_footprint and see this question for information regarding the inflation radius. 
To actually answer your question, you cannot modify the local path without intercepting it in move_base. The move_base node is responsible for passing the plan from the global planner to the local planner. You'd have to modify the move_base source code if you want to add this functionality. Could it be done? Yes. Is it the best way? I'm not so sure. EDIT: From your parameter dump, I've noticed that your path_distance_bias is set to 0.0. This means that the local planner (in theory) won't care about the path generated by the global planner, and will head toward the goal as directly as possible. You need to give some path bias to tell the robot to value the plan in order to move out of the room. Originally posted by DimitriProsser with karma: 11163 on 2012-03-08 This answer was ACCEPTED on the original site Post score: 4 Original comments Comment by Kai Bublitz on 2012-03-08: Thanks for your answer. I updated my question to add some more information. I already spent a lot of time optimizing the footprint and inflation parameters, but I can't find a working configuration. I don't really want to modify move_base, but if there's no better way I guess I'll give it a try Comment by Kai Bublitz on 2012-03-08: my path_distance_bias is usually set to 0.6. I just had it set to 0 in the launch file to test something, but I used the reconfigure_gui to set it (that's pdist_scale) back to 0.6. I corrected that in my question now. Comment by Gazer on 2013-06-17: so, did you solve your problem?
{ "domain": "robotics.stackexchange", "id": 8525, "tags": "navigation, move-base" }
roscore is not working and cant install pip or rospkg?
Question: I had Fuerte, then I removed it and downloaded Hydro. I followed the steps until I reached this step: sudo rosdep init I got the following error: Traceback (most recent call last): File "/usr/bin/rosdep", line 3, in <module> from rosdep2.main import rosdep_main File "/usr/lib/pymodules/python2.7/rosdep2/__init__.py", line 40, in <module> from .installers import InstallerContext, Installer, \ File "/usr/lib/pymodules/python2.7/rosdep2/installers.py", line 35, in <module> from rospkg.os_detect import OsDetect ImportError: No module named rospkg.os_detect Then I looked for a solution and I did the following: pip install -U rospkg I got the following error: Downloading/unpacking rospkg Cannot fetch index base URL http://pypi.python.org/simple/ Could not find any downloads that satisfy the requirement rospkg No distributions at all found for rospkg Storing complete log in /home/asctec/.pip/pip.log Traceback (most recent call last): File "/usr/bin/pip", line 9, in <module> load_entry_point('pip==1.0', 'console_scripts', 'pip')() File "/usr/lib/python2.7/dist-packages/pip/__init__.py", line 116, in main return command.main(initial_args, args[1:], options) File "/usr/lib/python2.7/dist-packages/pip/basecommand.py", line 147, in main log_fp = open_logfile(log_fn, 'w') File "/usr/lib/python2.7/dist-packages/pip/basecommand.py", line 176, in open_logfile log_fp = open(filename, mode) IOError: [Errno 13] Permission denied: '/home/asctec/.pip/pip.log' Also I tried: sudo pip install --upgrade pip and I got the following error: Downloading/unpacking pip Downloading pip-1.5.6.tar.gz (938Kb): 40Kb downloaded Exception: Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/pip/basecommand.py", line 126, in main self.run(options, args) File "/usr/lib/python2.7/dist-packages/pip/commands/install.py", line 223, in run requirement_set.prepare_files(finder, force_root_egg_info=self.bundle, bundle=self.bundle) File 
"/usr/lib/python2.7/dist-packages/pip/req.py", line 955, in prepare_files self.unpack_url(url, location, self.is_download) File "/usr/lib/python2.7/dist-packages/pip/req.py", line 1072, in unpack_url return unpack_http_url(link, location, self.download_cache, only_download) File "/usr/lib/python2.7/dist-packages/pip/download.py", line 441, in unpack_http_url download_hash = _download_url(resp, link, temp_location) File "/usr/lib/python2.7/dist-packages/pip/download.py", line 366, in _download_url chunk = resp.read(4096) File "/usr/lib/python2.7/socket.py", line 380, in read data = self._sock.recv(left) File "/usr/lib/python2.7/httplib.py", line 561, in read s = self.fp.read(amt) File "/usr/lib/python2.7/socket.py", line 380, in read data = self._sock.recv(left) File "/usr/lib/python2.7/ssl.py", line 241, in recv return self.read(buflen) File "/usr/lib/python2.7/ssl.py", line 160, in read return self._sslobj.read(len) SSLError: The read operation timed out Storing complete log in /home/asctec/.pip/pip.log Traceback (most recent call last): File "/usr/bin/pip", line 9, in <module> load_entry_point('pip==1.0', 'console_scripts', 'pip')() File "/usr/lib/python2.7/dist-packages/pip/__init__.py", line 116, in main return command.main(initial_args, args[1:], options) File "/usr/lib/python2.7/dist-packages/pip/basecommand.py", line 147, in main log_fp = open_logfile(log_fn, 'w') File "/usr/lib/python2.7/dist-packages/pip/basecommand.py", line 176, in open_logfile log_fp = open(filename, mode) IOError: [Errno 13] Permission denied: '/home/asctec/.pip/pip.log' What should I do exactly to make roscore working because it is not working ?????????? Originally posted by RSA_kustar on ROS Answers with karma: 275 on 2014-09-22 Post score: 0 Original comments Comment by bvbdort on 2014-09-22: try sudo easy_install rospkg Comment by Vova Niu on 2014-09-22: what version of Ubuntu are you using? 
Comment by RSA_kustar on 2014-10-09: @Vova Niu ubuntu 12.04 Comment by RSA_kustar on 2014-10-09: @bvbdort I solve it using the command of re-install. I will post the answer Answer: I solved this problem using the following command sudo apt-get re-install python-rosdep Originally posted by RSA_kustar with karma: 275 on 2014-10-09 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by fish24 on 2019-08-01: In my case it did not work, but I used: sudo apt-get --reinstall install python-rosdep
{ "domain": "robotics.stackexchange", "id": 19475, "tags": "ros, python, ros-hydro, ros-fuerte, roscore" }
Variance Due to white noise input
Question: I have the problem below. It sounds simple but for some reason I have been stuck on it for a long time and don't know what I am doing wrong. I am trying to solve this using correlations. So we all know that the variance $\sigma _y^2\:=\:R_y\left(0\right)$ which is basically the correlation of $y$ evaluated at zero. Now we know the convolution $y(n)\:=\:x(n)* h(n) = \sum _{m\:=-\infty }^{\infty }h\left(m\right)x\left(n-m\right)$ Using the idea of correlation, I get the following: $R_y\left(k\right)\:=\:E\left(y\left(k\right)y\left(n+k\right)\right)\:=\:E\left[\sum _{m=-\infty }^{\infty }h\left(m\right)x\left(k-m\right)\sum _{m=-\infty }^{\infty }h\left(m\right)x\left(n+k-m\right)\right]$ Now I am stuck here and just don't know what to do. Please help by pointing me in the right direction and by providing clear and coherent steps and explanations. Thank you very much in advance. Answer: You have a typo in your definition of $R_y(k)$, and an error in the time indices when developing the equation. 
The proper definition of autocorrelation for complex signals (in the case of wide-sense stationary processes) is $$R_y(k)=\mathbb{E}\left[y(n)\overline{y(n+k)}\right],$$ and setting $k=0$, we obtain $$\begin{align} R_y(0)&=\mathbb{E}\left[y(n)\overline{y(n+0)}\right]=\mathbb{E}\left[\left|y(n)\right|^2\right]\\ &=\mathbb{E}\left[\left|\sum_{m=-\infty}^{\infty}h(m)x(n-m)\right|^2\right]\\ &=\mathbb{E}\left[\left(\sum_{m=-\infty}^{\infty}h(m)x(n-m)\right)\overline{\left(\sum_{i=-\infty}^{\infty}h(i)x(n-i)\right)}\right]\\ &=\mathbb{E}\left[\sum_{m=-\infty}^{\infty}\sum_{i=-\infty}^{\infty}h(m)\overline{h(i)}x(n-m)\overline{x(n-i)}\right]\\ &=\sum_{m=-\infty}^{\infty}\sum_{i=-\infty}^{\infty}h(m)\overline{h(i)}\,\mathbb{E}\left[x(n-m)\overline{x(n-i)}\right]\\ &=\sum_{m=-\infty}^{\infty}\sum_{i=-\infty}^{\infty}h(m)\overline{h(i)}\,R_x(m-i)\\ \end{align}$$ where I first use the fact that the product of sums is the sum of products (and I use a different summation index to better show the cross-products), and second, that the impulse response of the system is deterministic and therefore constant for the purposes of taking expectation. Now, we can group those terms where $m=i$ and those where $m\neq i$, obtaining $$\begin{align} R_y(0)&=\sum_{m=-\infty}^{\infty}\sum_{i=-\infty}^{\infty}h(m)\overline{h(i)}\,R_x(m-i)\\ &=\sum_{m=-\infty}^{\infty}h(m)\overline{h(m)}\,R_x(0)+\sum_{m=-\infty}^{\infty}\sum_{i=-\infty\\ i\neq m}^{\infty}h(m)\overline{h(i)}\,R_x(m-i)\\ &=R_x(0)\sum_{m=-\infty}^{\infty}|h(m)|^2+\sum_{m=-\infty}^{\infty}\sum_{i=-\infty\\ i\neq m}^{\infty}h(m)\overline{h(i)}\,R_x(m-i)\\ \end{align}$$ Finally, we know that the input noise is white, and therefore $R_x(k)=0$ for $k\neq0$. We also know that $R_x(0)=\sigma_x^2$. Thus, the second term vanishes, and we obtain $$\begin{align} R_y(0) &=R_x(0)\sum_{m=-\infty}^{\infty}|h(m)|^2=\sigma_x^2\sum_{m=-\infty}^{\infty}|h(m)|^2 \end{align}.$$
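A quick empirical check of the final formula $R_y(0)=\sigma_x^2\sum_m|h(m)|^2$ (a sketch with an arbitrary 3-tap real filter and simulated Gaussian white noise; the filter taps and noise level are made-up):

```python
import random

def filtered_variance(h, sigma, n=200000, seed=0):
    # pass zero-mean Gaussian white noise of std sigma through the FIR filter h
    rng = random.Random(seed)
    x = [rng.gauss(0.0, sigma) for _ in range(n)]
    m = len(h)
    # y(i) = sum_k h(k) x(i-k), skipping the start-up transient
    y = [sum(h[k] * x[i - k] for k in range(m)) for i in range(m - 1, n)]
    mu = sum(y) / len(y)
    return sum((v - mu) ** 2 for v in y) / len(y)

h = [0.5, 0.3, -0.2]                         # arbitrary real impulse response
sigma = 2.0                                  # input noise std
theory = sigma**2 * sum(c * c for c in h)    # sigma_x^2 * sum |h(m)|^2
print(filtered_variance(h, sigma), theory)
```

The small gap between the two printed numbers is ordinary Monte Carlo error and shrinks as n grows.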
{ "domain": "dsp.stackexchange", "id": 9886, "tags": "filters, discrete-signals, finite-impulse-response, stochastic, covariance" }
How to train this neural network?
Question: I seek the neural network (NN) which satisfies the 100 equations ($i=1,2,\dots,100$) $\sum_{j=1}^{2000} NN(A_{ij},B_{ij},C_{ij})=Q_i$, where $A,B,C$ are 100x2000 matrices, so I know $Q$, $A$, $B$ and $C$. How can one find the NN? (i.e. train it using this data) I have a second dataset to test it on. I also have the MATLAB neural network toolbox, but it is not a necessity for me to use that. Answer: Let $w$ denote the weights of the neural network. Define the objective function $\Psi$ by $$\Psi(w) = \sum_i \left(\left(\sum_j NN_w(A_{ij},B_{ij},C_{ij})\right) - Q_i\right)^2.$$ Then, find $w$ that minimizes $\Psi(w)$ using gradient descent. You can find the gradient of $\Psi(w)$ using backpropagation through the neural network. You can use all the standard methods for speeding up training of neural networks: stochastic gradient descent, Adam or momentum or Adagrad, etc. This is very similar to training neural networks for regression problems. The only difference here is that you are summing the output of the neural network on multiple inputs. It is straightforward to adjust the objective function to take that into account, as shown above.
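A minimal sketch of this recipe (with a deliberately tiny, hand-made dataset and a linear map standing in for the neural network so the gradient of $\Psi$ can be written by hand; in practice you would use a real network and obtain the gradient by backpropagation, e.g. via the MATLAB toolbox or any autodiff framework):

```python
def model(w, a, b, c):
    # toy stand-in for NN_w(a, b, c): a linear map with weights w
    return w[0] * a + w[1] * b + w[2] * c

def psi_and_grad(w, A, B, C, Q):
    # Psi(w) = sum_i ( (sum_j model(w, A_ij, B_ij, C_ij)) - Q_i )^2
    psi, g = 0.0, [0.0, 0.0, 0.0]
    for Ai, Bi, Ci, Qi in zip(A, B, C, Q):
        r = sum(model(w, a, b, c) for a, b, c in zip(Ai, Bi, Ci)) - Qi
        psi += r * r
        # for the linear toy model, d r / d w_k = sum_j of the k-th input row
        g[0] += 2.0 * r * sum(Ai)
        g[1] += 2.0 * r * sum(Bi)
        g[2] += 2.0 * r * sum(Ci)
    return psi, g

# 3 equations with 2 terms each, generated from known weights
A = [[1, 0], [0, 1], [1, 1]]
B = [[0, 1], [1, 0], [1, -1]]
C = [[1, 1], [0, 0], [0, 1]]
true_w = [0.5, -0.25, 0.1]
Q = [sum(model(true_w, a, b, c) for a, b, c in zip(Ai, Bi, Ci))
     for Ai, Bi, Ci in zip(A, B, C)]

w = [0.0, 0.0, 0.0]
for _ in range(3000):                    # plain gradient descent
    psi, g = psi_and_grad(w, A, B, C, Q)
    w = [wk - 0.02 * gk for wk, gk in zip(w, g)]
print(w, psi)   # w approaches true_w, Psi approaches 0
```

Because this toy problem has a unique minimizer, plain gradient descent recovers the generating weights; with a real network the same loop applies, with the hand-written gradient replaced by backpropagation.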
{ "domain": "cs.stackexchange", "id": 13326, "tags": "neural-networks" }
Do cuttlefish know how to mimic animals since the moment they are born or do they learn by observing?
Question: I can't seem to find information anywhere else so I hope somebody can help me out. So, as the question states, I saw a cuttlefish mimicking a hermit crab the other day and I was wondering if cuttlefish carry information of what they can mimic as spiders carry information of how to make a web (innate behaviour) or if they learn how to mimic by observing other animals. Answer: I haven't been able to find any definitive research/answers, but, based on the usage of their mimicry (camouflage and sexual), and some of its features (alteration of the surface texture of their skin, electric camouflage, development+change through lifetime), I'd hypothesise that it's most likely a mixture of the two. They likely have a large innate capacity to use the form of mimicry young cephalopods do, which is further developed socially/as a learned behaviour, which subsequently causes the change in the nature of their mimicry. Cuttlefish texture change (Primary research article) Transient sexual mimicry (Summary article (Nature)) Research has shown that cuttlefish mimicry develops and changes as they grow; it has also been shown to differ based on the location of the creature. There is also evidence of cephalopods learning socially, so it does indicate that their behaviour is at least partially learned. (Source + additional references- reed.edu ->Visual Mimicry in Cephalopods -> Ontogeny)
{ "domain": "biology.stackexchange", "id": 7444, "tags": "zoology, ethology, molluscs" }
Why can't viruses be seen?
Question: With the coronavirus pandemic, a lot of websites are publishing articles about viruses. In particular, I've seen some of these stating that viruses cannot be seen because they are so tiny they cannot reflect light. This seems to be true at first sight but I've been intrigued ever since I put some real thought into it. Early quantum physics teaches us that electrons can absorb and re-emit photons. Thus, if we picture a light beam hitting a single atom, the electrons of this atom might absorb some photons and change their state. If these electrons readily re-emit these photons, the light is being reflected and, otherwise, the light is absorbed. This very simple model shows that even electrons, which are much smaller than a virus, can emit photons and 'reflect' light. Of course, we cannot see electrons either, but this could be explained by the fact that a single electron emits a small number of photons per second and our eyes cannot process an image. But, if we put a huge number of atoms together, the net effect allows us to see these objects, their colour and so on. So, is the size of a virus the real explanation for the fact that they cannot be seen, or should the correct explanation be due to the fact that we cannot put together as many viruses as it would take in order to have a proper image of something? PS: I know the interaction between light and matter is better explained using QED but I'm proposing a toy model here just to think about things classically, if possible. Answer: Seeing means imaging using optical methods, which range from 400 nm (violet) to 750 nm (dark red). A single SARS-CoV-2 virus measures about 120 nm, so the virus is much smaller than optical wavelengths. The images of virions that we do have were made by scanning electron microscopes, which have a resolution limit of better than 1 nm. Atoms measure about 0.1 nm and can be visualized by atomic force microscopy. Images of electrons do not exist as far as I know. 
It is also possible to incorporate fluorescent protein markers into viruses for detection. This could enable, if anyone had that purpose, single virus detection by optical microscopy. (I added this to deal with the statement on single molecule detection by @S.McGrew.)
{ "domain": "physics.stackexchange", "id": 66844, "tags": "optics, electromagnetic-radiation, estimation, wavelength, microscopy" }
Effective way to find whether two strings are anagrams
Question: My approach for checking anagrams is: step 1: Remove all the space from both input strings. step 2: Sort both of the strings. step 3: Return false if the lengths differ. step 4 : Return true if all char matches. #include<iostream> #include<set> #include<string> #include <algorithm> bool anagrams(std::string usr1,std::string usr2) { if(usr1.length()==usr2.length()) { for(std::string::size_type pos = 0 ; pos<= usr1.length()-1 ; ++pos) { if(pos==usr1.size()-1) { if(usr1[pos]==usr2[pos]) return true; } if(usr1[pos]==usr2[pos]) { continue ; } } } return false; } int main() { std::string userInput1; std::string userInput2; std::getline(std::cin,userInput1); std::getline(std::cin,userInput2); std::string::iterator end_pos1 = std::remove(userInput1.begin(),userInput1.end(),' '); userInput2.erase(end_pos1,userInput1.end()); std::string::iterator end_pos2 = std::remove(userInput2.begin(),userInput2.end(),' '); userInput1.erase(end_pos2,userInput2.end()); std::sort(userInput1.begin(),userInput1.end()); std::sort(userInput2.begin(),userInput2.end()); if(userInput1.empty() || userInput2.empty()) return 0; if(anagrams(userInput1,userInput2)) std::cout<<"String is anagrams"<<"\n"; else std::cout<<"String not anagrams"<<"\n"; return 0; } Answer: anagram seems like a misnomer. A better name for such function would be is_anagram. I don't see why <set> is included. Fail early: if (usr1.length() != usr2.length()) { return false; } spares you a level of indentation. Test for pos==usr1.size()-1 is waste of time, because it fails at every iteration except the last one, and still tests for characters at pos. Sorting inputs is an essential part of the algorithm, and hence shall be performed inside anagram. As written, anagram only tests two string for being identical. On the other hand, removing spaces does not belong to the core algorithm, and you correctly let the caller do it. You may also want to remove punctuation, and maybe convert the strings to the lower case. 
This logic indeed belongs to a caller. Since you already #include <algorithm>, an std::equal(usr1.begin(), usr1.end(), usr2.begin()) looks more C++ish (if you are open to C++14, a variant std::equal(usr1.begin(), usr1.end(), usr2.begin(), usr2.end()) even spares the test for equal length). All that said, bool is_anagram(std::string s1, std::string s2) { std::sort(s1.begin(), s1.end()); std::sort(s2.begin(), s2.end()); return std::equal(s1.begin(), s1.end(), s2.begin(), s2.end()); } As mentioned in comments, an alternative implementation using histograms is very appealing.
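The histogram idea mentioned at the end can be sketched in a few lines (shown here in Python for brevity rather than C++; in C++ the same thing would be a count array incremented for one string and decremented for the other). The space-stripping and case-folding steps are the normalizations the review itself suggests leaving to the caller:

```python
from collections import Counter

def is_anagram(s1: str, s2: str) -> bool:
    # count each character once per string: O(n) instead of O(n log n) for sorting
    def normalize(s):
        return s.replace(" ", "").lower()
    return Counter(normalize(s1)) == Counter(normalize(s2))

print(is_anagram("listen", "silent"))         # True
print(is_anagram("Dormitory", "dirty room"))  # True
print(is_anagram("abc", "abd"))               # False
```

The Counter comparison runs in linear time versus O(n log n) for sorting, at the cost of extra memory for the counts.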
{ "domain": "codereview.stackexchange", "id": 28589, "tags": "c++, strings, c++11" }
When are all cannabinoids infused into grape seed oil using non-decarboxylated, dried plant matter?
Question: I am unsure which is the correct terminology, infusion or extraction? For all practical purposes I will continue using infusion here. (What is the difference between the two?) When making the infusion[1] I'm using a ratio of 2 oz of dried, but not decarboxylated plant matter to 1 quart of grape seed oil. Any brand will work. The plant matter mainly consists of the buds and colas. These have gone through the curing phase already and are dry. I put the plant material in the oil inside of a small 1.5 qt crockpot. The last time I did this, I let the crockpot sit on high for 8 hours before switching to low for 3 days (72 hours). A lot of tutorials and online videos will say to let it sit for 4, 8, 24, or all the way up to 72 hours and more. How long will it actually take until most cannabinoids are infused into the oil, without infusing other, unwanted compounds like chlorophyll? I'm assuming that there is a breaking point where the plant matter is done decarboxylating, and the oil has extracted all of the cannabinoids. I've done this kind of infusion using coconut oil and I didn't get the same potency as when I did the infusion with grape seed oil. What is the reason on a molecular level that allows grape seed oil to infuse cannabinoids more easily? [1] Legality is not an issue since I live in a state where I am allowed to grow up to 6 plants of varying maturity and also produce my own grape seed oil infusions. Answer: The time taken (before there are diminishing returns) of extracting the herb is totally dependent on the solvent used, or in your case, the oil. If you were to use water, the answer might be days. If you used heptane, a couple of hours. The decarboxylation and the extraction - a catch-all chemical term for removal of components or a component from a particular solid, liquid or gas phase, as opposed to infusion which seems to be specific to botanical extractions (so both are correct in this instance) - should be decoupled. 
If you start with the cannabinoidal acid and heat in solvent, you'll start extracting components at different rates, while also decarboxylating and extracting the cannabinoid. This reduces the selectivity of the extraction. Again, depending on your chosen oil, the rate at which your desired components and undesired ones are pulled into your oil will differ. You seem to have a handle on which of your two oils is better. I would try two experiments: Heat the dry herb to decarboxylate; less than two hours should be ample and may even be too much. Then extract/infuse your oil. Bear in mind, the more oil you use also leaves you open to extracting more rubbish you don't want, as does extending the time. You're better cutting your time down first and monitoring potency (as extracting more undesired components will lower your potency in the final extract, so potency should increase as you bring your time down) and then experimenting on reducing the oil content until your potency decreases. A decarboxylation should take about one hour at around 110C, but it depends on the amount of herb you have and if it's whole leaf, shredded, milled etc. I'm talking fine material at kilo scale. Extraction under ideal conditions is done in six hours, depending on solvent. The second option is to extract at cooler temperatures (cool temp to 50-60C absolute maximum) first to get the cannabinoidal acids. If you have the facility, add base (diluted sodium hydroxide is ideal and would take minutes . Sodium carbonate dissolved in water may work but would be slow (days?) and you'd have to monitor carbon dioxide evolution). The acids you want will form salts and extract into the water. The neutral stuff you don't want (terpenes, waxes - long chain alkanes) stay in the oil. Separate the oil and water. You have a choice here. Add acid to your water layer - this may cause your cannabinoid acid to precipitate, so filter it off. 
Otherwise add fresh oil to the water, then acid and your cannabinoid acid will now be in the oil after mixing. Whatever you choose, heat your cannabinoid acid to over 100 degrees Celsius and you should be done in an hour or two. Over 120 degrees Celsius may be too much. As I mention, some experimenting will be needed but try to ensure your herbs are consistent and not mixtures of different varieties each time. As for your oils. One is more like a liquid "fat" - the coconut oil. The grapeseed oil is still a mixture, but is primarily "polyunsaturated" ( it's actually di-unsaturated) and about 50% bigger than the main coconut oil component. These oil mixtures are not usually used in chemical extractions as they're not pure. I assume you use digestible, non-toxic solvent oils. Otherwise I'd suggest limonene.
{ "domain": "chemistry.stackexchange", "id": 8667, "tags": "experimental-chemistry, home-experiment, extraction" }
conservation of C-peptide sequence in the guinea pig
Question: Where can I compare the C-peptide sequence of the guinea pig with human or mouse? I am also interested in finding whether the guinea pig has insulin 1 and 2, to know whether I could use an anti-C-peptide 1 and 2? Answer: The NCBI Nucleotide database can be used to search for coding sequences (mRNA and translated protein) of genes. You need to enter the name of the gene (INS) or protein (insulin) and then species (use either the common name, "guinea pig" or the scientific name, "Cavia porcellus"). Filter the results to show you only mRNA coding sequences (sometimes abbreviated as "complete CDS"). These entries are annotated with the regions that account for the signal peptide, mature peptide(s) and prodomain(s). In your case, the full proinsulin is annotated as mature peptide and the B and A chains. The C peptide connects the A and B chains, located between the only two dibasic sites (R55R56 and K88R89). Another useful database is the UniProt Protein Knowledge Base which presents similar information in a slightly more graphical presentation. Here is guinea pig insulin in this database. If you happen to compile a whole list of C-peptide domains from other mammals, then you can use an alignment tool like ClustalOmega to align the peptide sequences to look for commonalities. As for guinea pigs, there is only a single insulin gene (like most mammals). Only mice and rats have a two-gene insulin system, Ins1 and Ins2. Generally speaking, there is poor conservation of the C peptide domain across mammals, in part because it only serves to link the A and B chains together to form the necessary disulfide bridges. If you are wondering whether an anti-guinea pig antibody will cross-react with another species' epitope, that information is listed on the antibody specification sheet under species cross-reactivity. Some antibodies are more cross-reactive than others, and you will need to refer to the specific manufacturer's information to be sure. 
Wang S, Wei W, Zheng Y, Hou J, Dou Y, et al. (2012) The Role of Insulin C-Peptide in the Coevolution Analyses of the Insulin Signaling Pathway: A Hint for Its Functions. PLoS ONE 7(12): e52847. doi:10.1371/journal.pone.0052847 A note on this paper: I would not take at face value the conclusion that C-peptide has a physiological role in intracellular or endocrine signalling, because it is still quite a controversial issue. I used this article because it was open access and to demonstrate in Figure 1 the variability of the C-peptide domain across selected mammals.
{ "domain": "biology.stackexchange", "id": 2171, "tags": "proteins" }
Camera Calibration Matrix accuracy
Question: When we do camera calibration, we have to find the calibration matrix $M$, which is found by first finding the extrinsic matrix and then the intrinsic matrix. To validate the accuracy of the calibration matrix, we compare ground-truth projections $points \times intrinsic-matrix$ against the prediction $points \times extrinsic-matrix \times intrinsic-matrix$. To illustrate results, I plotted the figure below. Source Question: Why are the projections from $points \times intrinsic-matrix$ regarded as the ground truth? Answer: The reason it's the ground truth is that in the tutorial it's all synthetic data and it's using the ground truth parameters of the model to calculate/construct those values exactly. First we define the necessary parameters and create the camera extrinsic matrix and intrinsic matrix. These are required to build the pipeline and prepare the ground truth. This example is not how to do a camera calibration in the real world but is teaching you the fundamentals of how the calibration algorithms work, thus you can know the ground truth before you try to do the optimization and see the performance.
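To make this concrete, here is a minimal sketch of how such synthetic ground truth is built (all numbers, focal lengths, principal point, rotation, and translation alike, are made-up for illustration): a world point is mapped through the extrinsic matrix $[R\,|\,t]$ into the camera frame and then through the intrinsic matrix $K$ onto the image.

```python
import math

def matvec(M, v):
    return [sum(M[r][c] * v[c] for c in range(len(v))) for r in range(len(M))]

# hypothetical intrinsics K: focal lengths and principal point in pixels
K = [[800.0,   0.0, 320.0],
     [  0.0, 800.0, 240.0],
     [  0.0,   0.0,   1.0]]

# hypothetical extrinsics [R | t]: 10-degree rotation about y, small translation
th = math.radians(10.0)
Rt = [[ math.cos(th), 0.0, math.sin(th), 0.1],
      [          0.0, 1.0,          0.0, 0.0],
      [-math.sin(th), 0.0, math.cos(th), 2.0]]

def project(Xw):
    Xc = matvec(Rt, Xw + [1.0])   # world -> camera frame (homogeneous input)
    u, v, w = matvec(K, Xc)       # camera frame -> homogeneous pixel coords
    return u / w, v / w           # perspective divide

print(project([0.0, 0.0, 1.0]))
```

Evaluating these projections with the true K and [R | t] is exactly what makes them "ground truth" in the tutorial: every quantity on the right-hand side is known, so the resulting pixel coordinates are exact, and a calibration estimate can then be scored against them.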
{ "domain": "robotics.stackexchange", "id": 2547, "tags": "computer-vision, cameras, calibration" }
Is a correlation matrix meaningful for a binary classification task?
Question: When examining my dataset with a binary target (y) variable, I wonder if a correlation matrix is useful to determine the predictive power of each variable. My predictors (X) contain some numeric and some factor variables. Answer: Well, correlation, namely the Pearson coefficient, is built for continuous data. Thus, when applied to binary/categorical data, you will obtain a measure of a relationship which does not have to be correct and/or precise. There are quite a few answers on Stats Exchange covering this topic - this or this, for example.
{ "domain": "datascience.stackexchange", "id": 1146, "tags": "classification, correlation, binary" }
Roslaunch fails to set MASTER_URI over ssh
Question: I've read this question and set my MASTER_URI accordingly (own machine name), yet the remote node gets launched with another roscore. Here is my launch file:

<launch>
  <machine name="Panda" address="LabRob-Panda-01" user="panda" default="false" env-loader="/opt/ros/fuerte/env.sh"/>

  <!-- send roomba urdf to param server -->
  <param name="robot_description" command="$(find xacro)/xacro.py /home/erupter/Apps/ROS/simple.urdf" />
  <param name="tf_prefix" value="laser"/>
  <param name="use_rep_117" value="true"/>

  <node pkg="tf" type="static_transform_publisher" name="tf" args="0 0 0 0 0 0 base_link laser 10" />
  <!-- <node pkg="joint_state_publisher" type="joint_state_publisher" name="jsp"/> -->
  <!-- <node pkg="robot_state_publisher" type="state_publisher" name="laser" > <remap from="tf" to="laser" /> </node> -->

  <!-- Hokuyo Base Node -->
  <node pkg="hokuyo_node" type="hokuyo_node" name="hokuyo_laser" machine="Panda"></node>

  <!-- Rviz Node -->
  <node pkg="rviz" type="rviz" args="-d /home/erupter/Apps/ROS/hokuyo/hokuyo_node/hokuyo_test.vcg" name="rviz01"> </node>
</launch>

And here is the result of the launching:

erupter@FCD-04-Ubuntu:~/Apps/ROS$ echo $ROS_MASTER_URI
http://FCD-04-Ubuntu:11311/
erupter@FCD-04-Ubuntu:~/Apps/ROS$ roslaunch hokuyo.launch
... logging to /home/erupter/.ros/log/ce7b5b28-6584-11e2-96a3-e0cb4e8e9c06/roslaunch-FCD-04-Ubuntu-32673.log
Checking log directory for disk usage. This may take awhile.
Press Ctrl-C to interrupt
Done checking log file disk usage. Usage is <1GB.
started roslaunch server http://FCD-04-Ubuntu:57645/
remote[LabRob-Panda-01-0] starting roslaunch
remote[LabRob-Panda-01-0]: creating ssh connection to LabRob-Panda-01:22, user[panda]
launching remote roslaunch child with command: [/opt/ros/fuerte/env.sh roslaunch -c LabRob-Panda-01-0 -u http://FCD-04-Ubuntu:57645/ --run_id ce7b5b28-6584-11e2-96a3-e0cb4e8e9c06]
remote[LabRob-Panda-01-0]: ssh connection created

SUMMARY
========

PARAMETERS
 * /robot_description
 * /rosdistro
 * /rosversion
 * /tf_prefix
 * /use_rep_117

MACHINES
 * Panda

NODES
  /
    hokuyo_laser (hokuyo_node/hokuyo_node)
    rviz01 (rviz/rviz)
    tf (tf/static_transform_publisher)

auto-starting new master
process[master]: started with pid [32695]
ROS_MASTER_URI=http://FCD-04-Ubuntu:11311/
setting /run_id to ce7b5b28-6584-11e2-96a3-e0cb4e8e9c06
process[rosout-1]: started with pid [32708]
started core service [/rosout]
process[tf-2]: started with pid [32722]
process[rviz01-3]: started with pid [32735]
[LabRob-Panda-01-0]: launching nodes...
[LabRob-Panda-01-0]: auto-starting new master
[LabRob-Panda-01-0]: process[master]: started with pid [28756]
[LabRob-Panda-01-0]: ROS_MASTER_URI=http://localhost:11311
[LabRob-Panda-01-0]: setting /run_id to ce7b5b28-6584-11e2-96a3-e0cb4e8e9c06
[LabRob-Panda-01-0]: process[hokuyo_laser-1]: started with pid [28770]
[LabRob-Panda-01-0]: ... done launching nodes

I also tried setting an env like so:

<node pkg="hokuyo_node" type="hokuyo_node" name="hokuyo_laser" machine="Panda">
  <env name="ROS_MASTER_URI" value="http://FCD-04-Ubuntu:11311"/>
</node>

But it doesn't work. Distro is fuerte on both machines. Originally posted by Claudio on ROS Answers with karma: 859 on 2013-01-23 Post score: 0 Answer: This is covered in detail in this answer. The problem is the ros_comm version together with the actual shell variable. Updating ros_comm and correctly setting the env variable ROS_MASTER_URI corrects the problem.
Originally posted by Claudio with karma: 859 on 2013-01-24 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 12552, "tags": "master, ros-master-uri, ros-fuerte, ssh, roscore" }
How far do air particles move when a sound wave passes through them?
Question: How far do air particles move when a sound wave passes through them? I know that they don't actually travel; the question is how far they oscillate, i.e. what is the physical amplitude of the oscillation? Obviously the answer differs for different media and volumes (and maybe frequencies), so for the purposes of the question, assume we're talking about a 100 Hz sine wave at about 75 dB (low bass, at listening volume), travelling through air at sea-level pressure. Answer: Sound pressure level (SPL) in dB is defined relative to a reference pressure $p_{ref}=20\,\mu Pa$: $$L_p=20\log_{10}\left(\frac{p_{rms}}{p_{ref}}\right )$$ 75 dB corresponds to an RMS acoustic pressure of 0.11 Pa; you can use this online calculator to easily check other SPLs. Acoustic velocity is proportional to acoustic pressure through the acoustic impedance $Z=\rho c$, where $\rho$ is air density and $c$ is the speed of sound; for air at room temperature, $Z \approx 400 \frac{Ns}{m^3}$. With acoustic pressure and impedance you can calculate all sorts of quantities. In particular, particle displacement is calculated as $$ \xi =\frac{p}{Z\omega}=\frac{p}{Z \cdot 2\pi f} $$ where $f$ is the acoustic frequency. Plugging all the numbers into this formula, we get $\xi=4.4 \cdot 10^{-7}\,m$. Keep in mind that SPL is on a logarithmic scale, so if you take $L_p=150\,dB$ then $\xi= 0.0025\,m=2.5\,mm$.
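The arithmetic above is easy to reproduce; the sketch below just plugs the answer's two formulas in, using the same assumed constants ($p_{ref}=20\,\mu Pa$, $Z \approx 400\ Ns/m^3$):

```python
import math

P_REF = 20e-6   # reference pressure, Pa
Z = 400.0       # acoustic impedance of air at room temperature, N*s/m^3

def spl_to_pressure(spl_db):
    """Invert L_p = 20*log10(p/p_ref) to get RMS acoustic pressure in Pa."""
    return P_REF * 10 ** (spl_db / 20)

def particle_displacement(spl_db, freq_hz):
    """Particle displacement xi = p / (Z * 2*pi*f), in metres."""
    p = spl_to_pressure(spl_db)
    return p / (Z * 2 * math.pi * freq_hz)
```

For the question's 100 Hz tone at 75 dB this returns about 4.5e-7 m, matching the answer; at 150 dB it returns about 2.5 mm.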
{ "domain": "physics.stackexchange", "id": 11295, "tags": "waves, pressure, acoustics, distance" }
Question involving Vectors and Forces
Question: This is a question I found in Mechanics for Engineers by Beer & Johnston. A 600-lb crate is supported by the rope and pulley arrangement shown below. Write a computer program which can be used to determine, for a given value of $\beta$, the magnitude and direction of the force $F$ which should be exerted on the free end of the rope. Use this program to calculate $F$ and $\alpha$ for values of $\beta$ from 0 degrees to 30 degrees at 5-degree intervals. For writing the program, I need to obtain a relation between $F$, $\beta$ and $\alpha$. The only thing I was able to think of was that the component $F\cos\alpha = 600\ \text{lb}$. I tried to relate the angles $\alpha$ and $\beta$ using high-school geometry, but I did not obtain any result which I could use. Can anyone help me with the physics part of the question? Answer: Another good trick to solve this would be to use Lami's theorem. With this you get $$\frac{F}{\sin(\pi-\beta)}=\frac{600}{\sin\left(\frac\pi2+\beta-\alpha\right)},$$ where $\sin(\pi-\beta)=\sin\beta$. While this isn't enough to solve it, Lami's theorem is a useful trick for your toolbox.
{ "domain": "physics.stackexchange", "id": 17543, "tags": "homework-and-exercises, newtonian-mechanics, forces, vectors" }