Does the Slavnov-Taylor identity still hold for scalar Yang-Mills?
Question: I want to renormalize the minimally-coupled scalar Yang-Mills theory: $$\mathcal{L}_{YM\phi}=(D_\mu\phi)^\dagger(D^\mu\phi)-\frac{1}{4}F_{\mu\nu}^a{F^{\mu\nu}}^a-\frac{1}{2\xi}(\partial_\mu {A^\mu}^a)^2-\bar{c}^a\partial^\mu D_\mu^{ac}c^c$$ to find the $\beta(g)$ function, and I would like to do so using the vector-scalar-scalar vertex. In a Yang-Mills theory coupled to fermions, I can safely use the vector-fermion-fermion vertex, knowing that choosing other vertices would yield the same result because of the Slavnov-Taylor identity, obtained from BRST symmetry. Now, is there an analogous BRST symmetry (and corresponding Slavnov-Taylor identity) for the scalar case? Answer: Yes, the Slavnov–Taylor (ST) identities are generalized Ward identities for non-Abelian Yang-Mills theory with or without matter. (If there's matter, the ST identities contain more terms.) More generally, any anomaly-free gauge theory can in principle be given a BRST formulation with corresponding generalized Ward identities.
{ "domain": "physics.stackexchange", "id": 84339, "tags": "quantum-field-theory, renormalization, yang-mills, brst, ward-identity" }
Undefined Reference in Linking Using colcon
Question: Background: I work in underwater robotics, and DCCL is a library for encoding messages to fit within our extremely constrained message sizes, based on an original protobuf format. Just for practice before applying this to one of our real messages, I'm modifying the basic publisher/subscriber from the tutorial to encode the message with DCCL before sending. It seems to do just fine with the protobuf library, but I'm getting an undefined reference error on the DCCL library, which I assume is a linking error.

Error:

teddybouch@Norby:~/workspace/ros2_ws$ colcon build --packages-select cpp_pubsub
Starting >>> cpp_pubsub
--- stderr: cpp_pubsub
/usr/bin/ld: CMakeFiles/talker.dir/src/publisher_member_function.cpp.o: in function `MinimalPublisher::timer_callback()':
publisher_member_function.cpp:(.text._ZN16MinimalPublisher14timer_callbackEv[_ZN16MinimalPublisher14timer_callbackEv]+0xa2): undefined reference to `dccl::Codec::Codec(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)'
/usr/bin/ld: publisher_member_function.cpp:(.text._ZN16MinimalPublisher14timer_callbackEv[_ZN16MinimalPublisher14timer_callbackEv]+0xed): undefined reference to `testdccl::NavigationReport::NavigationReport()'
/usr/bin/ld: publisher_member_function.cpp:(.text._ZN16MinimalPublisher14timer_callbackEv[_ZN16MinimalPublisher14timer_callbackEv]+0x1d5): undefined reference to `dccl::Codec::encode(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >*, google::protobuf::Message const&, bool, int)'
/usr/bin/ld: publisher_member_function.cpp:(.text._ZN16MinimalPublisher14timer_callbackEv[_ZN16MinimalPublisher14timer_callbackEv]+0x3b7): undefined reference to `testdccl::NavigationReport::~NavigationReport()'
/usr/bin/ld: publisher_member_function.cpp:(.text._ZN16MinimalPublisher14timer_callbackEv[_ZN16MinimalPublisher14timer_callbackEv]+0x3c6): undefined reference to `dccl::Codec::~Codec()'
/usr/bin/ld: publisher_member_function.cpp:(.text._ZN16MinimalPublisher14timer_callbackEv[_ZN16MinimalPublisher14timer_callbackEv]+0x48b): undefined reference to `testdccl::NavigationReport::~NavigationReport()'
/usr/bin/ld: publisher_member_function.cpp:(.text._ZN16MinimalPublisher14timer_callbackEv[_ZN16MinimalPublisher14timer_callbackEv]+0x4a3): undefined reference to `dccl::Codec::~Codec()'
/usr/bin/ld: CMakeFiles/talker.dir/src/publisher_member_function.cpp.o: in function `void dccl::Codec::load<testdccl::NavigationReport>()':
publisher_member_function.cpp:(.text._ZN4dccl5Codec4loadIN8testdccl16NavigationReportEEEvv[_ZN4dccl5Codec4loadIN8testdccl16NavigationReportEEEvv]+0x11): undefined reference to `testdccl::NavigationReport::descriptor()'
/usr/bin/ld: publisher_member_function.cpp:(.text._ZN4dccl5Codec4loadIN8testdccl16NavigationReportEEEvv[_ZN4dccl5Codec4loadIN8testdccl16NavigationReportEEEvv]+0x28): undefined reference to `dccl::Codec::load(google::protobuf::Descriptor const*, int)'
collect2: error: ld returned 1 exit status
make[2]: *** [CMakeFiles/talker.dir/build.make:132: talker] Error 1
make[1]: *** [CMakeFiles/Makefile2:82: CMakeFiles/talker.dir/all] Error 2
make: *** [Makefile:141: all] Error 2
--- Failed <<< cpp_pubsub [0.97s, exited with code 2]
Summary: 0 packages finished [1.16s]
1 package failed: cpp_pubsub
1 package had stderr output: cpp_pubsub

Except for the addition of the proto directory and the protobuf message definition file in it, I think that the only relevant changes are in the CMakeLists.txt and publisher_member_function.cpp, which I'm including below, but if I've missed something or anyone wants all the code for some reason, the whole package is on my Dropbox here.
CMakeLists.txt:

cmake_minimum_required(VERSION 3.5)
project(cpp_pubsub)

# Default to C99
if(NOT CMAKE_C_STANDARD)
  set(CMAKE_C_STANDARD 99)
endif()

# Default to C++14
if(NOT CMAKE_CXX_STANDARD)
  set(CMAKE_CXX_STANDARD 14)
endif()

if(CMAKE_COMPILER_IS_GNUCXX OR CMAKE_CXX_COMPILER_ID MATCHES "Clang")
  add_compile_options(-Wall -Wextra -Wpedantic)
endif()

# find dependencies
find_package(ament_cmake REQUIRED)
find_package(rclcpp REQUIRED)
find_package(std_msgs REQUIRED)
find_package(dccl REQUIRED)
message(STATUS "Using DCCL in ${DCCL_DIR}")
find_package(Protobuf REQUIRED)

if(NOT DEFINED DCCL_INCLUDE_DIR)
  # for DCCL 3.0.3 and newer
  get_target_property(DCCL_INCLUDE_DIR dccl INTERFACE_INCLUDE_DIRECTORIES)
endif()
if(DCCL_INCLUDE_DIR)
  message(STATUS "\tIncluding DCCL protobuf dir: ${DCCL_INCLUDE_DIR}")
  include_directories("${DCCL_INCLUDE_DIR}")
  #protobuf_include_dirs("${DCCL_INCLUDE_DIR}")
endif()

# build the protobuf messages
file(GLOB ProtoFiles "${CMAKE_CURRENT_SOURCE_DIR}/proto/*.proto")
PROTOBUF_GENERATE_CPP(ProtoSources ProtoHeaders ${ProtoFiles})
add_library(proto STATIC ${ProtoSources} ${ProtoHeaders})
target_link_libraries(proto ${PROTOBUF_LIBRARY})
message(STATUS ${CMAKE_BINARY_DIR})
include_directories(${CMAKE_BINARY_DIR})

add_executable(talker src/publisher_member_function.cpp)
ament_target_dependencies(talker rclcpp std_msgs dccl Protobuf)
add_executable(listener src/subscriber_member_function.cpp)
ament_target_dependencies(listener rclcpp std_msgs)

install(TARGETS talker listener DESTINATION lib/${PROJECT_NAME})

if(BUILD_TESTING)
  find_package(ament_lint_auto REQUIRED)
  ament_lint_auto_find_test_dependencies()
endif()

ament_package()

publisher_member_function.cpp:

#include <chrono>
#include <memory>

#include "rclcpp/rclcpp.hpp"
#include "std_msgs/msg/string.hpp"
#include "dccl.h"
#include "navreport.pb.h"

using namespace std::chrono_literals;

/* This example creates a subclass of Node and uses std::bind() to register a
 * member function as a callback from the timer. */

class MinimalPublisher : public rclcpp::Node
{
public:
  MinimalPublisher()
  : Node("minimal_publisher"), count_(0)
  {
    publisher_ = this->create_publisher<std_msgs::msg::String>("topic", 10);
    timer_ = this->create_wall_timer(
      500ms, std::bind(&MinimalPublisher::timer_callback, this));
  }

private:
  void timer_callback()
  {
    auto message = std_msgs::msg::String();
    std::string encoded_bytes;
    dccl::Codec codec;

    // Create the DCCL-encoded protobuf message
    codec.load<testdccl::NavigationReport>();
    testdccl::NavigationReport report;
    report.set_x( count_++ );
    report.set_y( 0 - count_ );
    report.set_z( -999 );
    //report.set_veh_class( testdccl::NavigationReport::AUV );
    report.set_battery_ok( true );
    codec.encode( &encoded_bytes, report );

    // Serialize the protobuf data
    message.data = encoded_bytes;
    RCLCPP_INFO(this->get_logger(), "Publishing: '%s'", message.data.c_str());
    publisher_->publish(message);
  }
  rclcpp::TimerBase::SharedPtr timer_;
  rclcpp::Publisher<std_msgs::msg::String>::SharedPtr publisher_;
  size_t count_;
};

int main(int argc, char * argv[])
{
  rclcpp::init(argc, argv);
  rclcpp::spin(std::make_shared<MinimalPublisher>());
  rclcpp::shutdown();
  return 0;
}

Truth be told, I'm still getting the hang of CMake and linking in general, so I think that what's going on is that I'm getting confused by the intersection of ROS, colcon, protobuf, DCCL, and CMake, and I'm probably missing something obvious. But I've spent over an hour now looking between the CMake and DCCL documentation (plus various other Googling) to no avail, so if anyone has a moment to spare to look at it I would be grateful. Thanks!
gcc version: 9.4.0
CMake version: 3.16.3
ROS version: foxy

Originally posted by teddybouch on ROS Answers with karma: 320 on 2022-05-06
Post score: 0

Answer: TL;DR, these are the lines you've got to change in your CMakeLists.txt:

add_executable(talker src/publisher_member_function.cpp)
ament_target_dependencies(talker rclcpp std_msgs) # Remove dccl and Protobuf
target_link_libraries(talker dccl proto) # Link non-ament dependencies

In more detail: there seem to be two issues in the error log you posted: not linking against dccl correctly, and not linking against your proto file correctly. Both are related to linking, as you suspected. Regarding not linking against dccl correctly: dccl is a pure CMake package (it doesn't follow the ament guidelines), so you have to use target_link_libraries instead of ament_target_dependencies. There is no issue calling one after the other on the same target (i.e. talker). Regarding not linking against your proto file correctly: you have to use target_link_libraries to link against the target (i.e. proto) that you generate from your proto file in the add_library(proto ... line. Note that you already have Protobuf linked to the proto target, so if you link talker to the proto target, you don't need to link Protobuf as well.

Originally posted by ijnek with karma: 460 on 2022-05-08
This answer was ACCEPTED on the original site
Post score: 2

Original comments
Comment by teddybouch on 2022-05-08: Thank you so much! Not only did this fix the immediate problem, but your explanation was excellent and helped me understand the context a lot better as well!
{ "domain": "robotics.stackexchange", "id": 37646, "tags": "ros, ros2, colcon" }
Reconcile the Beer Lambert Law with rate of absorption
Question: The Beer-Lambert law gives the proportion of incident light intensity that will be absorbed. Intensity isn't strictly a rate, but if the light is monochromatic and the path length is constant it sort of is. $$I=I_0\exp(-\epsilon c l)$$ But the rate of absorption of photons is given by: $$K=B\rho$$ where $B$ is the Einstein coefficient and $\rho$ is the spectral energy density, which is proportional to the intensity of the incident radiation. Surely the second equation should mean that the intensity decays linearly? Would the absorbed intensity double if the concentration was doubled? Answer: To answer your second question first: just use the Beer-Lambert law. The first question needs us to go back a few steps and re-define the Einstein B coefficients in terms of the intensity, which means that the 'new' B is $B^I=B^{\rho}/c$; from now on, for simplicity, we just use $B$ instead of $B^I$. The intensity $I$ is defined (per unit frequency interval) as energy flux across unit area per unit time, so it has units $\pu{W m^-2 Hz^-1}$, or mass time$^{-2}$, and thus is not a rate. The energy density $\rho$ means energy per unit volume and is related to $I$ as $I=c\rho$, where $c$ is the speed of light. (The rate of absorption is unchanged but written as $B^II$ rather than $B^{\rho}\rho$.) The energy density $\rho$ has units $\pu{J m^-3 Hz^-1}$, which is mass length$^{-1}$ time$^{-1}$. In these units the Einstein B coefficient has dimensions time mass$^{-1}$, or $\pu{s kg^{-1}}$ in SI units. The Beer-Lambert law derives from the differential form $$-dI_{\nu} =I_{\nu}k_{\nu}dx$$ where for clarity $x$ is used to indicate length rather than $l$. The constant $k_{\nu} = \epsilon [c]$ at frequency $\nu$ (with $[c]$ the concentration) and has dimensions of length$^{-1}$.
If the absorption band is wide we take a small element about frequency $\nu$, which is $\delta\nu$, and calculate the rate of energy removal from the beam during absorption from state 1 to state 2 with populations $N_1$ and $N_2$ respectively, with Einstein coefficients for (stimulated) absorption $B_{12}$ and stimulated emission $B_{21}$: $$ -dI_{\nu}\delta\nu = h\nu I_{\nu} [N_1B_{12}dx -N_2B_{21}dx ]$$ Rearranging gives $$-\frac{1}{I_{\nu}}\frac{dI_{\nu}}{dx}\delta\nu=h\nu[N_1B_{12} -N_2B_{21} ]=k_{\nu}\delta\nu $$ If the light intensity is low, the population of the upper level is only a minute fraction of that of the lower level and can be set to zero. In this case $B$ can be obtained directly as $$B_{12}=\frac{k_{\nu}\delta\nu}{h\nu N_1}$$ Then integration over $\delta\nu$ gives the B value for the absorption band. The value of $N_1$ is effectively the total population. This shows that the B values are related to the extinction coefficients, as expected, and that time does not come into the calculation. However, this is only part of the story. A quantum calculation starting with the Schrödinger equation, using the dipole approximation and first order perturbation theory, is necessary to explain the electric dipole interaction. The result of this calculation (see Atkins & Friedman, Molecular Quantum Mechanics, chapter 6 for the gory details) is that the probability of absorption from state a to b is $$P_{ab}= \frac{|V_{ba}|^2}{\hbar^2}\frac{\sin^2((\omega-\omega_{ab})t/2)}{(\omega-\omega_{ab})^2}$$ where $V_{ba}= \langle b|H|a\rangle$ is the interaction energy, $\omega$ the radiation frequency and $\omega_{ab} = (E_a-E_b)/\hbar$. This oscillating behaviour as a function of frequency is not ordinarily observed in thermal samples, but has been observed in a cold molecular beam of HCN molecules. Time was made constant by passing the beam at constant speed through the fixed illuminated region (see Dyke et al., J. Chem. Phys. 57, 2277, 1972).
The rate of absorption is the time derivative of the probability, which is $$ R=\frac{|V_{ba}|^2}{2\hbar^2}\frac{\sin((\omega-\omega_{ab})t)}{\omega-\omega_{ab}}$$ and as the absorption coefficient is directly proportional to the rate of absorption, $\gamma(\omega,t)=\beta R$ (where $\beta$ only contains constants), which indicates that near resonance the absorption should increase in time. However, this is not what is usually observed for molecules at room temperature as a vapour or in solution. The reason for the generally observed behaviour is that the absorption is continually interrupted by first order processes, for example by collisions. Suppose that the system is initially in state a and then the radiation field is turned on. The molecule is then in a superposition state involving a and b, for example $\Psi=c_a(t)\psi_a+c_b(t)\psi_b$, where the $c$'s are time-dependent coefficients that move the system from a to b. A collision will destroy the superposition and return the molecule to state a or b, each with a certain probability; then the radiation causes the superposition to grow again. If we assume that the number of superposition states is $n$ and that a first order process destroys them with lifetime $\tau$, then $n(t)=n(0)\exp(-t/\tau)$. There is now a competition between the superposition state growing and being destroyed. The average absorption coefficient is obtained by averaging over the probability distribution of its being interrupted, thus $$\gamma(\omega)=\frac{\int_0^{\infty}\gamma(\omega,t)\exp(-t/\tau)dt}{\int_0^{\infty}\exp(-t/\tau)dt}$$ Evaluating gives $$\gamma(\omega) \propto L(\omega_{ab}-\omega)$$ where $L$ is the Lorentzian function $$L(\omega_{ab}-\omega)=\frac{1}{\pi }\frac{\tau^{-1}}{\tau^{-2}+(\omega_{ab}-\omega)^2 } $$ and examining the function, the half-width at half-height is $\Delta \omega = 1/\tau$. The first order process thus explains why the absorption does not increase in time.
However, the experimentally observed line shapes of many molecules do not have a Lorentzian shape. The reason is not that the idea of interrupting the superposition is wrong, but that other effects dominate: Doppler broadening, for example, or, in solution, inhomogeneous broadening, which is always present. This is caused by interactions with solvent molecules, which shift energy levels up and down, so that the solution effectively contains molecules with a mixture of energy levels. That this is so can be observed at very low temperatures (say 10 K), where it is possible to burn a spectral hole in a sample and so remove a small part of the inhomogeneous population. The spectral hole produced is very narrow, as expected from the calculation above.
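As a consistency check (a sketch added here, writing $\Delta=\omega_{ab}-\omega$ and using $\gamma(\omega,t)\propto\sin(\Delta t)/\Delta$ from above), the averaging integral can be evaluated with the standard Laplace transforms $$\int_0^{\infty}e^{-t/\tau}\sin(\Delta t)\,dt=\frac{\Delta}{\Delta^2+\tau^{-2}},\qquad \int_0^{\infty}e^{-t/\tau}\,dt=\tau$$ giving $$\gamma(\omega)\propto\frac{1}{\tau\Delta}\cdot\frac{\Delta}{\Delta^2+\tau^{-2}}=\frac{\tau^{-1}}{\Delta^2+\tau^{-2}}\propto L(\omega_{ab}-\omega)$$ which is exactly the Lorentzian quoted above, with half-width at half-height $1/\tau$.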
{ "domain": "chemistry.stackexchange", "id": 7143, "tags": "physical-chemistry, photochemistry" }
Knit with R markdown
Question: I have tried running some code in RStudio Desktop; within the chunk the code runs smoothly, but when knitting to view it as HTML, I get a message that something is wrong with a line of code. What can I do? The code I wrote ran fine on its own, but I got this error message while trying to view it as HTML. Answer: The message tells you that there is no variable mcq_grade. Most likely, when you ran the code yourself you had already loaded some data from a file and assigned it to mcq_grade; it could look like this: mcq_grade <- read.table(filename) All the code that you put in your R Markdown source document must be independent, because it's executed in a new environment. By the way, this is the goal: this way the code is reproducible independently of your interactive environment. This means that if some dataset must be loaded first, it must be loaded from the R Markdown document itself. Mind that the R Markdown document is executed in the directory where the source is located, so you might have to adjust the path to the file.
{ "domain": "datascience.stackexchange", "id": 10429, "tags": "r, data, data-cleaning, data-analysis" }
How do you publish on a ROS topic when the message type is not known?
Question: I am trying to publish a ROS message in Python but I do not know the message type. I tried using AnyMsg and it did not work. If this is possible, would someone be able to provide a Python code snippet? Answer: You can use AnyMsg to subscribe to the topic, then use this to acquire information about the topic, which can be used to construct a publisher and a new message which can be published. For an example, see below.

import rospy
import genpy
from rospy.msg import AnyMsg

rospy.init_node("test_node")

publisher = None
msg_class = None

# callback that we'll use to setup the publisher once we've gotten a message
def setup_publisher(msg):
    global msg_class
    global publisher
    topic_type = msg._connection_header["type"]
    msg_class = genpy.message.get_message_class(topic_type)
    publisher = rospy.Publisher("test_topic", msg_class, queue_size=10)

# subscribe and wait for a message to come in
check_rate = rospy.Rate(10)
sub = rospy.Subscriber("test_topic", AnyMsg, setup_publisher)
while publisher is None and not rospy.is_shutdown():
    check_rate.sleep()
sub.unregister()  # all done with the subscriber now
assert publisher is not None

while not rospy.is_shutdown():
    # now that we have a valid publisher we can create a new message.
    # we could use genpy like in the callback, or we can just use the
    # publisher:
    new_msg = publisher.data_class()

    # even if we don't know what type it is beforehand,
    # we can find out what information can be populated
    # thanks to the 'slots' attribute provided by messages
    if "data" in new_msg.__slots__:
        if type(new_msg.data) == int:
            new_msg.data = 42
        elif type(new_msg.data) == str:
            new_msg.data = "Hello World!"
        elif type(new_msg.data) == float:
            new_msg.data = 3.14159

    # we can also make use of the setattr/getattr
    # functions in a slightly more traditionally-python way
    for slot in dir(new_msg):
        if slot.startswith("_"):  # ignore private attributes
            continue
        if type(getattr(new_msg, slot)) == int:
            # set any integer to 42
            setattr(new_msg, slot, 42)

    # now we can publish the message
    publisher.publish(new_msg)

But notice that we need to do some introspection on the message to find/set fields if we don't know them beforehand, and this can be a bit messy. For that reason, whenever you can, it's recommended to just use the known message type.
{ "domain": "robotics.stackexchange", "id": 38808, "tags": "ros, python, python3" }
Calculating time dilation for photon traveling towards a moving spaceship
Question: Suppose a spaceship is moving away from the Earth at $0.5c$. When the spaceship is one light-year away from Earth, an observer on Earth sends a photon toward the spaceship. According to the observer on Earth, the photon is traveling at $c$ and the spaceship is traveling in the same direction at $0.5c$, so it takes 2 years for the photon to reach the spaceship. From the frame of reference of someone inside the spaceship, the spaceship is not moving and the photon is traveling towards it at $c$. So it takes 1 year for the photon to reach the spaceship. So why isn't 1 year inside the spaceship equal to 2 years on Earth, i.e. why is this method of deriving the Lorentz factor invalid? Answer: "So it takes 1 year for the photon to reach the spaceship." This isn't correct. Assume, for simplicity, that both the Earth's clock and spacecraft's clock read $t=0=t'$ when the spacecraft passes Earth. Then, when the photon is sent, Earth's clock reads $t=2\, \mathrm {y}$. Both observers agree on this. Also, both observers agree that the spacecraft's clock reads $t' = 3.464\, \mathrm {y}$ when the photon is received. However, due to the relativity of simultaneity, the observers don't agree on the reading of the spacecraft's clock when the photon is sent. According to the observer on Earth, the spaceship's clock reads $t' = 1.732\, \mathrm {y}$ when the photon is sent. But, according to the observer on the spacecraft, the spacecraft's clock reads $t' = 2.309\, \mathrm {y}$ when the photon is sent. And, the observers don't agree on the reading of Earth's clock when the photon is received. According to the observer on Earth, the Earth's clock reads $t = 4\, \mathrm {y}$ when the photon is received. But, according to the observer on the spacecraft, the Earth's clock reads $t = 3\, \mathrm {y}$ when the photon is received. In summary, according to the observer on Earth, 2 Earth years elapse between the emission and reception events while 1.732 spacecraft years elapse.
According to the observer on the spacecraft, 1.155 spacecraft years elapse between the emission and reception events while 1 Earth year elapses. From either perspective, the moving clock's elapsed time is smaller, and by the same proportion: $$\gamma = \frac{2}{1.732} = \frac{1.155}{1} = \frac{1}{\sqrt{1 - (0.5)^2}} $$ If you draw the spacetime diagram for this, the above results will be clear.
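The clock readings above follow directly from the Lorentz transformation; a minimal numerical check (a sketch in units of years and light-years with $c=1$, both clocks zeroed as the ship passes Earth):

```python
import math

v = 0.5                          # spaceship speed in units of c
gamma = 1 / math.sqrt(1 - v**2)  # Lorentz factor, ~1.1547

def to_ship_frame(t, x):
    """Lorentz-transform an event (t, x) from the Earth frame to the ship frame."""
    return gamma * (t - v * x), gamma * (x - v * t)

# Earth frame: the photon leaves Earth (x = 0) at t = 2 y (when the ship is
# 1 ly away) and catches the ship at t = 4 y, x = 2 ly.
t_emit, _ = to_ship_frame(2.0, 0.0)
t_recv, x_recv = to_ship_frame(4.0, 2.0)

print(round(t_emit, 3))           # 2.309: ship clock at emission, ship frame
print(round(t_recv, 3))           # 3.464: ship clock at reception
print(round(t_recv - t_emit, 3))  # 1.155: elapsed ship time between the events
print(round(t_recv / gamma, 3))   # 3.0:   Earth clock at reception, ship frame
print(round(x_recv, 3))           # 0.0:   the ship sits at its own spatial origin
```

Transforming the same two Earth-frame events reproduces every reading quoted in the answer, including the relativity-of-simultaneity offsets.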
{ "domain": "physics.stackexchange", "id": 20331, "tags": "homework-and-exercises, special-relativity, reference-frames" }
Why isn't an observer on earth considered to be accelerating in the twin paradox?
Question: This question has a duplicate, but the answers to that question went miles above my head. That was bound to happen, since I have only previewed special relativity, and that at a high-school level; I haven't even gone near general relativity. So this question is asked from that point of view, and I hope it won't irritate anyone. In the twin paradox, how do we conclude that the observer in the rocket is the one responsible for breaking the symmetry by accelerating, even though according to him the other observer on Earth is accelerating? Is it confirmed in reference to another (third) observer who was in the same inertial frame as the other two at the beginning? Again, if the whole trip took 10 years according to the observer on Earth and 6 years for the observer in the rocket, are the 6 years out of 10 accounted for by the symmetry due to being in the same inertial frame, and the remaining 4 due to the acceleration? Or is this calculation more peculiar than it seems (I mean, the calculation isn't as uniform as I assumed in the previous question)? Any reference that would clear up my problems would be enough, or even a book suggestion, if this question doesn't seem worth answering directly. Answer: The key to understanding this is a concept called proper acceleration. The Wikipedia article I have linked looks formidably complicated, but actually the idea is very simple. Suppose you are floating freely in space, far from any source of gravity, so there are no forces acting on you. Now drop an object, e.g. a ball. The ball will just float beside you and will not move away. Now suppose we put you in a rocket that is accelerating at some acceleration $a$. Now if you drop the ball you'll see it accelerate away from you at that acceleration $a$. This happens because while you are holding the ball the rocket is exerting a force on both you and the ball, so you accelerate at the same rate.
When you release the ball the rocket stops accelerating the ball, but it is still accelerating you. The result is that you accelerate away from the ball, and this looks to you as if the ball is accelerating away from you. We define your proper acceleration as your acceleration relative to an object (like the ball) that is moving freely, and the value of proper acceleration is that it is an unambiguous way of detecting when a force is acting on you. Suppose in our twin paradox I am floating freely in space while you jump in the rocket and accelerate away then back again. To both of us it looks as if the other is accelerating, but my ball will remain floating freely next to me while yours will not. This means my proper acceleration is zero while yours is non-zero. And this is what breaks the symmetry: it is the twin with the non-zero proper acceleration who experiences less elapsed time. This is discussed in gory detail in the question What is the proper way to explain the twin paradox?, which I assume is the one you referred to in your question, but the details need not worry us. All we need to know is that it is always possible to measure the proper acceleration for an observer, so there is never any ambiguity about which twin did the accelerating.
{ "domain": "physics.stackexchange", "id": 82830, "tags": "special-relativity, acceleration, inertial-frames, observers" }
Constant value NP-complete vs W[1]-hard
Question: I am a research scholar currently working in parameterized algorithms. I am studying the complexity of a problem (say $P$) for $\Delta_{10}$ graphs and was able to provide a reduction from a known NP-complete problem. Hence, I have that $P$ is NP-complete on $\Delta_{10}$ graphs. It follows that, unless P = NP, there is no FPT (fixed-parameter tractable) algorithm for the problem w.r.t. the parameter maximum degree. My question is: can I say that since the problem is NP-complete for constant maximum degree graphs, it is also W[1]-hard for the parameter maximum degree? Answer: By definition, a parametrized reduction from a parametrized problem $A$ to $B$ needs to satisfy the following properties: (1) each instance $(x,k)$ of $A$ is mapped to some instance $(x',k')$ of $B$ such that $(x,k)$ is a yes-instance if and only if $(x',k')$ is a yes-instance; (2) the parameter $k'$ obtained after the reduction needs to be bounded by a function of $k$; (3) the reduction takes at most $f(k)\cdot|x|^{O(1)}$ time, for some computable function $f$. A polynomial time reduction from $A'$ to $B'$ already satisfies properties (1) and (3) if we add an arbitrary parameter to these problems. If we add a parameter $p$ to $B'$ such that $p\leq C$ on all instances for some constant $C$, then property (2) is also satisfied. So, there is a parametrized reduction from $A'$ with any parameter to $B'$ with a "bounded parameter". Now take an NP-hard problem $X$ and a parameter $p$ that is bounded by a constant on all instances of $X$. There is a polynomial time reduction from CLIQUE to $X$, because $X$ is NP-hard and CLIQUE is in NP. By the argument above, there is a parametrized reduction from $k$-CLIQUE to $X$, so $X$ is W[1]-hard. Now, to answer your question: yes, problems that are NP-hard become W[1]-hard if you add a parameter that is bounded by a constant on all inputs.
However, the fact that your problem "$P$ for $\Delta_{10}$ graphs" becomes W[1]-hard after adding a bounded parameter is not an insightful property of your particular problem, precisely because this is true for any NP-hard problem. In fact, one way to interpret the fact that some problem is W[1]-hard for some parameter is that we expect the additional assumption that this parameter is small will not help us construct a fixed-parameter tractable algorithm. If the parameter chosen is already guaranteed to be small in the problem, then assuming it is small a second time does not help!
{ "domain": "cs.stackexchange", "id": 20868, "tags": "np-hard, parameterized-complexity" }
How in experimental practice does a momentum measurement reduce a state to a momentum eigenfunction?
Question: It's easy to think of ways to reduce the state of a particle to a position eigenfunction (or at least a narrow spread in position space), whether by trapping the particle in a potential well or by striking it with a probe photon, and we can calculate the precision $\Delta x$ necessary to produce an experimentally significant $\Delta p$ through the $[x,p]$ uncertainty relation. However, despite how much emphasis there is in QM texts on position and momentum eigenfunctions and the $[x,p]$ uncertainty relation, I have not come across an example of an actual experiment in which a momentum measurement could reduce a state to a momentum eigenfunction, with corresponding spread in the position state in accordance with the uncertainty relation. For example, in HEP momentum is measured through sequential position measurements in order to establish radius of curvature, but clearly you cannot produce momentum eigenstates by measuring position! Similarly for time-of-flight measurements, or anything else I can think of. Diffraction could be used to establish the expectation value of wavelength (and thus momentum) with an ensemble of identically prepared states, but you can't measure a diffraction pattern with a single particle! Can someone point out any experiment that measures particle momentum with the result of leaving the state in a momentum eigenfunction? Answer: Use a collimator to filter out particles with the wrong direction, and then use a narrow band-pass filter. Such a filter can be made using interference to reflect particles with wavevector outside of the passband. For photons this is typically implemented in photonic crystals; for electrons, a superlattice can be made.
{ "domain": "physics.stackexchange", "id": 56121, "tags": "quantum-mechanics, heisenberg-uncertainty-principle, quantum-measurements" }
Diamagnetic vs Paramagnetic
Question: As far as I know, to determine whether a substance is paramagnetic or diamagnetic, we check whether that particular species has an unpaired electron or not. But my textbook lists: Examples of diamagnetic materials - Copper, Lead, Silicon, etc. Examples of paramagnetic materials - Aluminium, Sodium, Calcium, etc. The valence shell configuration of copper is 3d10 4s1, so shouldn't it be paramagnetic? The valence shell configuration of calcium is 4s2, so shouldn't it be diamagnetic? Answer: You're right that Cu0 has one unpaired electron in the 4s orbital. If you were to observe a single Cu0 atom in the gas phase, you would observe some paramagnetic behavior. However, in metallic copper, the valence orbitals are engaged in metal-to-metal bonding. The 3d and 4s orbitals interact in constructive overlap with neighboring copper atoms to produce bonding molecular orbitals. When two 4s orbitals (each containing 1 unpaired electron) combine to form a sigma bonding MO, the electrons in that bonding MO become paired, per the image below. (Note: that MO diagram depicts the bonding in H2, but I'm just using it to illustrate the principle of partially filled atomic orbitals coming together to form a filled bonding MO.) The process of deriving the MO diagram for a metallic solid is obviously more complicated than this, but the answer to your question is that a single Cu0 atom is paramagnetic, while Cu2 would be diamagnetic. By extension, metallic copper is effectively Cu$_\infty$ and also diamagnetic. As you can tell, determining whether an atom or ion is para/diamagnetic is a very different question from whether the bulk substance (metal) is para/diamagnetic. You can't do the latter by electron counting alone; you need to know more about bonding in the substance. As another example of this, you mention that lead is diamagnetic.
Lead is another example of a paramagnetic atom (6p2; the 6p orbital contains two unpaired electrons) - but lead metal is diamagnetic because those electrons become paired in the bonding MOs.
{ "domain": "chemistry.stackexchange", "id": 12602, "tags": "electronic-configuration, atomic-structure" }
Two operators commute with the Hamiltonian, but do not commute with each other
Question: I was reading Griffiths, and he made a statement that if two operators commute with the Hamiltonian, but do not commute with each other, then the energy spectrum has to be degenerate. He gave the following reasoning: If there is not a complete set of simultaneous eigenstates of all three operators, does that mean that there is some state in the Hilbert space that cannot be written as a linear combination of the simultaneous eigenstates? Why do we know from that that "there must be some $|\psi\rangle$ such that $\Lambda|\psi\rangle$ is distinct from $|\psi\rangle$?" (the underlined sentence). Answer: If $Q$ and $\Lambda$ don't commute, all we know for certain is that there is no complete basis where each state in that basis is an eigenstate of $Q$ and $\Lambda$. There might be some states that are eigenstates of both, but that is not guaranteed. So what you called "simultaneous eigenstates" can not span the whole Hilbert space, and may even be an empty set. The text states that there must be a $|\psi\rangle$, such that $\Lambda|\psi\rangle$ is distinct from $|\psi\rangle$. That is simply the meaning of not having a common complete basis. If $\Lambda|\psi\rangle \sim |\psi\rangle$ would hold for all $|\psi\rangle$, then $|\psi\rangle$ would be an eigenbasis of $\Lambda$ (and it is by construction an eigenbasis of $Q$), so then $\Lambda$ and $Q$ would have the same eigenbasis and would commute.
{ "domain": "physics.stackexchange", "id": 76492, "tags": "quantum-mechanics, operators, commutator" }
Is PReLU superfluous with respect to ReLU?
Question: Why do people use the $PReLU$ activation? $PReLU[x] = ReLU[x] + ReLU[p*x]$ with the parameter $p$ typically being a small negative number. If a fully connected layer is followed by an at-least-two-element $ReLU$ layer, then the combined layers together are capable of emulating exactly the $PReLU$, so why is it necessary? Am I missing something? Answer: Let's assume we have 3 Dense layers, where the activations are $x^0 \rightarrow x^1 \rightarrow x^2$, such that $x^2 = \psi PReLU(x^1) + \gamma$ and $x^1 = PReLU(Ax^0 + b)$. Now let's see what it would take to conform the PReLU into a ReLU: $\begin{align*} PReLU(x^1) &= ReLU(x^1) + ReLU(p \odot x^1)\\ &= ReLU(Ax^0+b) + ReLU(p\odot(Ax^0+b))\\ &= ReLU(Ax^0+b) + ReLU(eye(p)Ax^0 + eye(p)b)\\ &= ReLU(Ax^0+b) + ReLU(Qx^0+c) \quad s.t. \quad Q = eye(p)A, \ \ c = eye(p)b\\ &= [I, I]^T[ReLU(Ax^0+b), ReLU(Qx^0+c)]\\ \implies x^2 &= [\psi, \psi][ReLU(Ax^0+b), ReLU(Qx^0+c)]\\ &= V*ReLU(Sx^0 + d) \quad V=[\psi, \psi], \ \ S=[A, Q], \ \ d=[b, c] \end{align*}$ So as you said, it is possible to break the form of the intermediary $PReLU$ into a pure $ReLU$ while keeping it a linear model, but if you take a second look at the parameters of the model, the size increases drastically. The hidden units of $S$ doubled, meaning that to keep $x^2$ the same size, $V$ also doubles in size. So this means that if you don't want to use the $PReLU$ you are learning double the parameters to achieve the same capability (granted, it allows you to learn a wider span of functions as well), and if you enforce the constraints on $V,S$ set by the $PReLU$ the number of parameters is the same but you are still using more memory and more operations! I hope this example convinces you of the difference
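As a sanity check on the construction in the answer, here is a small NumPy sketch (dimensions and random seed are my own arbitrary choices) showing that the doubled-width ReLU layer with $S=[A; eye(p)A]$ and $d=[b; eye(p)b]$ reproduces the PReLU hidden layer exactly, at the cost of twice the rows:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def prelu(x, p):
    # PReLU as written in the question: ReLU(x) + ReLU(p * x)
    return relu(x) + relu(p * x)

rng = np.random.default_rng(0)
x0 = rng.normal(size=3)
A, b = rng.normal(size=(4, 3)), rng.normal(size=4)
p = -0.1 * np.ones(4)            # small negative slope parameter

# Direct PReLU hidden layer
h_prelu = prelu(A @ x0 + b, p)

# Emulation with a doubled ReLU layer: S = [A; diag(p)A], d = [b; p*b]
S = np.vstack([A, np.diag(p) @ A])
d = np.concatenate([b, p * b])
h_relu = relu(S @ x0 + d)

# Summing the two halves (i.e. V = [I, I]) reproduces the PReLU output
assert np.allclose(h_prelu, h_relu[:4] + h_relu[4:])
print(S.shape, "vs", A.shape)    # twice the rows, so twice the parameters
```

The assertion passing is the emulation argument from the question; the shape comparison is the answer's point about the parameter count doubling.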
{ "domain": "ai.stackexchange", "id": 1265, "tags": "neural-networks, machine-learning, activation-functions, relu" }
Why do we get a reflected electromagnetic wave when it hits a perfect conductor?
Question: We've an EM wave $\vec{E_i}=\vec{E_0}e^{i(\omega t-kz)}$ As it reaches the surface of the perfect conductor we know the electric field must be zero, so we deduce that another electric field must be produced which will cancel the original field on the surface. So we can quickly say that the induced field is equal to the inverted incident field on the surface. But we cannot say that a reflected wave is produced from this logic. We can only say that an induced field exists, which is opposite to that of the incident one and lives on the surface of the conductor. And it doesn't follow that this induced field should travel out of the surface in the form of a reflected wave. How then can one deduce that a reflected EM wave exists when an EM wave strikes a perfect conductor? Thank you. Answer: It is true that the electric field inside a perfect conductor is zero. But consider what is happening on the surface of the conductor. We can only say that an induced field exists, which is opposite to that of the incident one and lives on the surface of the conductor. The incident electromagnetic wave moves the free charges on the conductor, which produces a current that then creates a radiating field, which is the reflected wave. We can only say that an induced field exists, which is opposite to that of the incident one and lives on the surface of the conductor. And this field is oscillating charges on the surface of the conductor. And it doesn't follow that this induced field should travel out of the surface in the form of a reflected wave. It is these induced (changing) fields on the surface that then cause the reflected electromagnetic wave. How then can one deduce that a reflected EM wave exists when an EM wave strikes a perfect conductor? Since the electric field inside the conductor is zero, there is an infinite impedance to the electromagnetic wave right at the surface. 
Now if we take the phase of the electromagnetic wave at this interface to be $0$ degrees (and since the conductor allows no electric field), there must be an electromagnetic wave with an opposite phase of $180$ degrees to cancel the incident electromagnetic wave at the interface.
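A minimal numerical check of that phase argument (amplitudes and frequencies here are arbitrary): adding a counter-propagating wave with a $180$-degree phase shift cancels the incident field at the conductor surface $z=0$ for all times.

```python
import numpy as np

# Incident wave E0*exp(i(wt - kz)) plus a reflected wave
# E0*exp(i(wt + kz) + i*pi); at the conductor surface z = 0 they
# cancel for every t, as required by E = 0 on a perfect conductor.
E0, w, k = 1.0, 2 * np.pi, 2 * np.pi
t = np.linspace(0.0, 1.0, 50)
z = 0.0
E_inc = E0 * np.exp(1j * (w * t - k * z))
E_ref = E0 * np.exp(1j * (w * t + k * z) + 1j * np.pi)  # 180 deg flip
assert np.allclose(E_inc + E_ref, 0)  # total field vanishes at the surface
```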
{ "domain": "physics.stackexchange", "id": 75009, "tags": "electromagnetism, electromagnetic-radiation, electric-fields, maxwell-equations" }
Direction of Induced Electric Field
Question: I have a time dependent magnetic field $B(t)$ coming out of the plane. A charged particle (q) with some velocity v is placed in the the magnetic field, it will follow a certain path that is dictated by $$\vec F= q(\vec v \times \vec B)$$ It follows a circular path in the magnetic field. My question is what does the induced electric field look like? I understand this field is non conservative so probably will not have have the form of a radially outward field like the field from a stationary charge. Answer: If the magnetic field has boundaries, then the induced electric field will form loops around the magnetic field. The size and shape of the loops will depend on the size and shape of the magnetic field and they will be centered on the effective center of the magnetic field.
{ "domain": "physics.stackexchange", "id": 75990, "tags": "electromagnetism, magnetostatics" }
Can polarized light be unpolarized again?
Question: I was just wondering if there could be a process that could unpolarize polarized light. Is that possible? Answer: Sure. Un-polarized light is just a superposition of many polarizations. Even if you are in vacuum you can use some beam splitters in cascade to obtain many rays, change (rotate) the polarization of each one in a different way, and then recombine the beam.
{ "domain": "physics.stackexchange", "id": 21487, "tags": "electromagnetic-radiation, waves" }
Is there a consensus on whether or not race exists on a biological level?
Question: The most recent survey I could find was from 1985 which said that 16% of biologists disagreed that "[t]here are biological races in the species Homo sapiens." I was wondering if there's been a change in this position. Answer: This has been investigated extensively on skeptics.stackexchange.com. Unfortunately, it’s not easy to determine a scientific consensus since nobody has cited a relevant survey. On the other hand, here are some salient quotes from scientific literature (stolen from the answers to the above-mentioned question) which I find worth repeating: A subspecies (race) is a distinct evolutionary lineage within a species. This definition requires that a subspecies be genetically differentiated due to barriers to genetic exchange that have persisted for long periods of time; that is, the subspecies must have historical continuity in addition to current genetic differentiation. [Templeton, 1998], cited in [Long & Kittles, 2003] Which speaks against the existence of races as a meaningful biological concept. On the other hand, Jorde & Wooding (2004) contend that, Genetic variation is geographically structured, as expected from the partial isolation of human populations during much of their history. Because traditional concepts of race are in turn correlated with geography, it is inaccurate to state that race is "biologically meaningless". But they concede that there is no scientific support for the concept that human populations are discrete, nonoverlapping entities. Finally, it’s worth noting that in the universally agreed-on taxonomy, formalised in the International Code of Zoological Nomenclature, there simply is no category for races, whether they would make sense or not. Whether or not this view enjoys a majority vote in biology, it’s the de facto consensus. Jeffrey C. Long & Rick A. Kittles, Human Genetic Diversity and the Nonexistence of Biological Races, Human Biology 75 (4), pp. 449–471, Aug 2003 Lynn B. Jorde & Stephen P. 
Wooding, Genetic variation, classification and ‘race’, Nature Genetics 36, pp. 28–33, 2004
{ "domain": "biology.stackexchange", "id": 3295, "tags": "human-biology" }
costmaps being updated with data from two sensors of different heights
Question: Hi I have a robot with an RPLidar sensor on top and would like to add an ultrasonic sensor to identify and avoid obstacles that are below the LIDAR line. It is possible? My robot: Let's say the ultrasonic sensor publishes distance values less than one meter in the topic /ultrasonic_readings and the ultrasonic sensor link is called ultrasonic_link . I believe the ultrasonic sensor readings should be put into the cost maps (global and local), but I'm not sure how to do this. Has anyone done something like that? costmap_common_params.yaml: obstacle_range: 2.5 raytrace_range: 3.0 footprint: [[-0.045,-0.10],[-0.045,0.10],[0.17,0.10],[0.17,-0.10]] observation_sources: scan scan: {sensor_frame: rplidar_link, observation_persistence: 0.0, max_obstacle_height: 0.4, min_obstacle_height: 0.0, data_type: LaserScan, topic: /scan, marking: true, clearing: true} inflation_layer: inflation_radius: 0.3 cost_scaling_factor: 3.0 local_costmap_params.yaml: local_costmap: global_frame: /odom robot_base_frame: /base_footprint update_frequency: 7.0 publish_frequency: 7.0 static_map: false rolling_window: true width: 2.0 height: 2.0 resolution: 0.02 transform_tolerance: 0.5 global_costmap_params.yaml: global_costmap: global_frame: /map robot_base_frame: /base_footprint transform_tolerance: 0.5 update_frequency: 7.0 publish_frequency: 7.0 static_map: true rolling_window: false thanks Originally posted by mateusguilherme on ROS Answers with karma: 125 on 2021-08-05 Post score: 1 Answer: It is possible. By setting the plugins parameter, you can stack multiple cost maps. I found a QA that is working on something similar, please refer to it. https://answers.ros.org/question/206805/adding-range_sensor_layer-to-layered-costmap-for-global-planning/ Originally posted by miura with karma: 1908 on 2021-08-05 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 36775, "tags": "ros-melodic" }
Gravitational potential energy in Escape Velocity
Question: From the derivation of the formula for escape velocity we know that Minimum kinetic energy = Gravitational potential energy. But in this circumstance, shouldn't the value of the gravitational potential energy be infinite, since Energy = Force x Displacement and the object travels an infinite distance from the planet? I don't understand why my reference book shows that Gravitational potential energy $= -GMm/r .$ Why is $r$ taken as the distance between the center of the planet and the object, since the force is applied over an infinite distance? Answer: The long answer is that the equation you provided (force times displacement, $E=\mathbf{F}\cdot\boldsymbol{\Delta}\mathbf{r}$) is for a specific type of energy, but this energy is work (not potential energy) and the force is not constant. So to find the potential energy we need to add up all the instantaneous forces. An approximation can be: $$E\approx\sum_n\,\mathbf{F}(\mathbf{r}_n)\cdot\boldsymbol{\Delta}\mathbf{r}$$ Where at each step (literally a step in space of size $\boldsymbol{\Delta}\mathbf{r}$) we add the force at that point. The summation is along the path we imagine our rocket or particle traveling, say, from an initial position, $\mathbf{R}_i$, to a final position, $\mathbf{R}_f$. However, for this to be correct (i.e., exact) we need the steps to be infinitely small (called infinitesimal steps). We achieve this when $\boldsymbol{\Delta}\mathbf{r}\rightarrow0$. 
Thus: $$E=\lim_{\boldsymbol{\Delta}\mathbf{r}\rightarrow0}\left[\sum_n\,\mathbf{F}(\mathbf{r}_n)\cdot\boldsymbol{\Delta}\mathbf{r}\right]$$ If you have taken calculus this is the integral: $$E=\int_{\mathbf{R}_i}^{\mathbf{R}_f}\mathbf{F}(\mathbf{r})\cdot d\mathbf{r}$$ The gravitational force is: $$\mathbf{F}(\mathbf{r})=-\frac{GMm}{r^2}\mathbf{\hat{r}}$$ Thus the energy is: $$E=\int_{\mathbf{R}_i}^{\mathbf{R}_f}\left(-\frac{GMm}{r^2}\right)\mathbf{\hat{r}}\cdot d\mathbf{r}=-GMm\int_{R_i}^{R_f}\frac{dr}{r^2}=GMm\left(\left.\frac{1}{r}\right|_{R_i}^{R_f}\right)=GMm\left(\frac{1}{R_f}-\frac{1}{R_i}\right)$$ $$E=\frac{GMm}{R_f}-\frac{GMm}{R_i}$$ This energy is called work and its relation to potential energy is given by $\Delta U(\mathbf{r})=-W$ or $U(\mathbf{R}_f)-U(\mathbf{R}_i)=-W$, which gives: $$U(\mathbf{R}_f)-U(\mathbf{R}_i)=-\frac{GMm}{R_f}-\left(-\frac{GMm}{R_i}\right)$$ Where we can see that: $$U(\mathbf{r})=-\frac{GMm}{r}$$ This function is the potential energy and it tells us that its difference is the negative of the work. Another way of looking at this is through energy conservation: $$E_i=E_f$$ $$U(\mathbf{R}_i)+K_i=U(\mathbf{R}_f)+K_f$$ $$K_i-K_f=U(\mathbf{R}_f)-U(\mathbf{R}_i)$$ $$-\Delta K=\Delta U$$
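The limiting sum in the answer can be checked numerically. This sketch (Earth-like values chosen purely for illustration) sums $\mathbf{F}\cdot\boldsymbol{\Delta}\mathbf{r}$ over many small radial steps and compares with the closed form $GMm(1/R_f-1/R_i)$:

```python
import numpy as np

# Earth-like constants, SI units; m = 1 kg test mass (illustrative values)
G, M, m = 6.674e-11, 5.972e24, 1.0
Ri, Rf = 6.371e6, 50 * 6.371e6      # from the surface out to 50 Earth radii

r = np.linspace(Ri, Rf, 200_001)
F = -G * M * m / r**2               # radial component of the gravitational force

# Trapezoidal sum of F(r)·dr, i.e. the limit-of-sums definition of work
W_numeric = np.sum(0.5 * (F[1:] + F[:-1]) * np.diff(r))
W_exact = G * M * m * (1 / Rf - 1 / Ri)
assert abs(W_numeric - W_exact) < 1e-6 * abs(W_exact)
```

As the step size shrinks, the sum converges to the integral, which is the whole point of replacing "force times displacement" with $\int\mathbf{F}\cdot d\mathbf{r}$.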
{ "domain": "physics.stackexchange", "id": 98745, "tags": "newtonian-mechanics, newtonian-gravity, potential-energy, orbital-motion, escape-velocity" }
Why do sinusoids have DFT magnitudes of N / 2 while we typically normalize by N?
Question: I'm wondering why evaluating a sinusoid that matches one of the frequencies of the DFT basis functions has a magnitude of $N / 2$. Using this definition of the Discrete Fourier transform, it looks like this holds (I guess for non-zero $k$ and $N \geq 3$): $$ \left| \sum_{n=0}^{N - 1} a \cdot \sin \left( 2 \pi \frac{k n}{N} \right) \cdot e^{ -i 2 \pi \frac{k n}{N}} \right| ~=~ a \cdot \frac{N}{2} $$ In other words: Taking the DFT of a sinusoid that exactly matches one of the frequencies used by the DFT results in the corresponding coefficient taking a value of amplitude times half the sample length. What surprises me about this: The common normalization of the DFT is to divide by $N$, i.e., the full sample length. Therefore my intuitive expectation was that these magnitudes actually are $a \cdot N$ instead of $a \cdot N / 2$. Is there an intuitive way to see where this factor of $1 / 2$ is coming from? Bonus question: Why hasn't dividing by $N / 2$ become a standard for normalization? When creating e.g. a periodogram, isn't it a nice property that after normalizing by $N / 2$ one could directly read off the amplitudes of the DFT coefficients? The normalization by $N$ feels awkward in comparison, because one has to double the coefficients to actually obtain their original sinusoidal amplitudes. What am I missing that explains why dividing by $N$ is "better"? Answer: The DFT is providing the coefficients of the basis functions given as samples of $e^{j\omega t}$ not cosines or sines. Review the formula for the inverse DFT which shows this relationship: $$x[n]= \frac{1}{N}\sum_{k=0}^{N-1}X[k]e^{jk \omega_o n}$$ Where $\omega_o = 2\pi/N$ in radians per sample (this bin spacing corresponds to $f_s/N$ in Hz, with $f_s$ as the sampling rate). Given Euler’s formula relating sinusoids and exponentials we see how the 1/2 factor makes sense and why we get two peaks in the DFT result for a sinusoid: $$\cos(\omega_o n) = \frac{1}{2}e^{j\omega_o n}+ \frac{1}{2}e^{-j\omega_o n}$$
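The $N/2$ magnitude, and the split of the amplitude between the two peaks at bins $k$ and $N-k$, is easy to verify numerically (the values of $N$, $k$, and $a$ below are arbitrary):

```python
import numpy as np

N, k, a = 64, 5, 2.0
n = np.arange(N)
x = a * np.sin(2 * np.pi * k * n / N)   # sinusoid exactly on DFT bin k
X = np.fft.fft(x)

# The bin matching the sinusoid has magnitude a*N/2, not a*N, because
# sin splits into two complex exponentials, landing in bins k and N-k.
assert np.isclose(abs(X[k]), a * N / 2)
assert np.isclose(abs(X[N - k]), a * N / 2)
```

Summing the magnitudes of the two conjugate bins recovers $a\cdot N$, which is exactly the Euler's-formula argument in the answer.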
{ "domain": "dsp.stackexchange", "id": 12076, "tags": "dft" }
Why are centrioles aligned at 90 degree with each other?
Question: The centrioles are aligned at 90 degrees with each other. What is the function of this? Answer: As far as I know, the function is not truly known, although there are some seriously interesting guesses. Part of the problem seems, from scanning the literature, to be that it's not easy or obvious to disrupt centriole orientation. This paper in PLoS Biology presents some really interesting results, in particular two pieces of evidence: Centriole orientation, to some degree, is dictated by the "mother" centriole during centriole division and is thus passed on from cell to (daughter) cell. Defects in centriole orientation can result in organelle localization defects in the cell (e.g. nuclear orientation), and at least some of these defects are genetic. So it seems that orientation is, in some part, a product simply of centriole division, but that there are some gene products that do help determine orientation and localization. This paper puts forth the hypothesis that centriole orientation determines the axis of cell division which can affect embryo orientation, whereas this suggests that external environmental factors could translate into altered centriole, and therefore cellular, orientation.
{ "domain": "biology.stackexchange", "id": 1328, "tags": "cell-biology, homework, cell-division" }
What is the reason for Avogadro's Law?
Question: From outside the quantum stuff I'm tirelessly learning, I've been reminded about Avogadro's Law — i.e., the fact that equal volumes of any molecular gas at the same temperature and pressure contain the same count of molecules. That was a bit surprising because I think about solids more often; with quantum properties of their nuclei propagating up to the macroscopic level (namely, distinguishing them into conductors, semiconductors, magnets, etc.). But then I thought that the reason why gases are different is because in normal conditions (i.e., if the gas is not ionized, or whatever), their molecules must be neutral; therefore, indeed, any gas is just a collection of EM-neutral floating balls: it doesn't matter how heavy they are. So, the sole reason for Avogadro's Law is EM-neutrality, is that correct? EDIT: so, the question was about real gases; not idealized models. Answer: This is true only for an ideal gas, i.e. a gas that obeys the equation of state: $$ PV = nRT $$ A rearrangement gives us the molar density: $$ \frac{n}{V} = \frac{P}{RT} $$ and for constant $P$ and $T$ the molar density is a constant, hence Avogadro's law. It is certainly true that the gas molecules have to be neutral to be ideal because there cannot be any intermolecular forces in an ideal gas. However there are other restrictions as well, e.g. the molecules must have a zero volume. Showing that an ideal gas obeys the equation above is simple enough if you don't mind a hand-waving argument. See for example my answer to Why/How is $PV=k$ true in an ideal gas?
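The rearranged equation of state can be checked with a two-line computation (standard values of $R$, $P$, and $T$ assumed): the molar density $n/V = P/(RT)$, and hence the molar volume, is the same for every ideal gas at a given pressure and temperature, which is Avogadro's law.

```python
# Molar density n/V = P/(RT) is independent of which gas it is.
R = 8.314      # J/(mol K)
P = 101325.0   # Pa, 1 atm
T = 273.15     # K

molar_density = P / (R * T)                 # mol per m^3
molar_volume_L = 1000.0 / molar_density     # litres per mole
print(round(molar_volume_L, 1))             # ~22.4 L/mol for any ideal gas
```

No property of the molecules (mass, size, composition) enters the formula, which is the point the answer makes: ideality, not just neutrality, is what the law requires.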
{ "domain": "physics.stackexchange", "id": 92017, "tags": "electromagnetism, physical-chemistry, quantum-chemistry" }
Humidity in home is much higher than outside
Question: We have a two-year-old, two-story town house with: fiberglass insulation in the walls; a Tyvek vapor barrier under foam board on the exterior walls; blown cellulose insulation in the attic; stucco outside; and refrigeration-cycle air conditioning. We live in Phoenix, AZ (desert climate) and our indoor humidity levels are consistently 20% to 35% higher than outside. During the summer we consistently have humidity levels in excess of 80% (not 70%), even with the damper fully open. It has been so humid inside that the tile floors had sweat on them. We have no indoor plants or water features. Our dryer vent is operational, and our exhaust fans from the bathrooms are vented externally through the roof. There are no unusual ceiling vaults and we do have ceiling fans to stir the air. Ours is an end unit, so we have one shared wall with a neighboring town house, which has no vapor barrier. We have had the builder heavily involved for warranty work to determine the root cause of the humidity sources. We have tried: Slowing the AC blower motor down to allow the water in air to condense; Looking for water leaks with an infrared camera (FLIR); Looking for blockages in the vents in the eaves; Calibrating the thermostat (Ecobee); Balancing the air flow by adjusting the vents inside with flow meter hood; Adjusting the air conditioner refrigerant cycle pressures; Fully opening the damper to let in dry air; Increasing the exhaust fan to run for 30 min in a 60 min cycle. The problem hasn't gone away and we're not sure what to do next. Is there anything else we can try before we resort to installing a whole house dehumidifier, or adding additional attic ventilation? What are your thoughts on the RH trend over time? 
Also, I did a rough calculation that the home needs to express ~350 pints of water per day (ASHRAE psychrometric chart corrected for altitude), but the HVAC company verified the AC system is operating properly and measured that the AC will remove 77 pints of water per day. I read that whole home dehumidifiers typically remove ~100 pints per day. What are your recommendations on how to best ventilate the attic while controlling cost? Historical Indoor RH and Temp. Including limited outdoor data Answer: Sounds like you have covered your bases. The only thing I can recommend is gathering data and considering some scenarios to evaluate that data against. I am not an HVAC guy but can give you the 1000ft engineering perspective. Any of these trends would be helpful in debugging the situation: Relative humidity and temperature at a discharge of the HVAC system trended over time. Condensate discharge from the air conditioner into a bucket, gallons per day. Relative humidity and temperature in each room trended over time. Surrounding water table level. Condensation locations and times of day. Possible scenarios in order of likelihood: A portion of the condensate being removed from the air is not being drained off and is being re-evaporated, either due to an installation issue, a manufacturing defect, or possibly a very frequent cycle time. Too small an airflow is being cooled and then mixed with the main air stream. When a small stream of air is cooled very cold, almost all the water is removed from this stream. However, no water is removed from the larger main stream it mixes with. This can result in the mixed stream being at 100% humidity (think fog from dry ice). To minimize humidity, fresh air from outside should be introduced upstream of the air conditioner before being distributed to the rest of the house. Moisture wicking through concrete. May be exacerbated by irrigation or a high water table. An unnoticed moisture-producing item, e.g. a fridge with bad defrost settings and an icemaker.
{ "domain": "engineering.stackexchange", "id": 1865, "tags": "thermodynamics, hvac, building-design, environmental-engineering" }
Turtlebot amcl demo, using map-files from selected directory?
Question: Map files created with the turtlebot 'SLAM map building' tutorial are stored in the /tmp/ directory. I have tried moving the generated files to my home directory and then running the 'amcl-navigation demo' pointing to the files in this directory, but the map-server doesn't seem to accept this. How can I store and reuse different map files? roslaunch turtlebot_navigation amcl_demo.launch map_file:=/turtlebot/freja02.yaml HFOR Originally posted by hfor on ROS Answers with karma: 1 on 2011-09-29 Post score: 0 Answer: As @tfoote pointed out, the command should look like: roslaunch turtlebot_navigation amcl_demo.launch map_file:=/home/turtlebot/freja02.yaml Also, make sure your .yaml file points to the right location of the .pgm file. Suppose you have freja02.pgm also in the home directory; you might want to do something like: emacs freja02.yaml & This will open a text editor; then correct, perhaps, the first line, which describes the path of the .pgm file, as: image: /home/turtlebot/freja02.pgm then save the file by Ctrl-s (pressing the control key and s). Close the window, and run the .launch file again. Originally posted by 130s with karma: 10937 on 2011-11-23 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 6821, "tags": "navigation, turtlebot, amcl" }
How to setup planning group moveit for dual arm
Question: Hi all, I have a self-made dual-arm system and want to set up the MoveIt configuration. I followed the advice to configure it like this. <group name="left_arm"> <joint name="left_arm_base_joint" /> ... <joint name="left_fake_end_joint" /> <chain base_link="left_arm_base_link" tip_link="left_fake_end" /> </group> <group name="right_arm"> <joint name="right_arm_base_joint" /> ... <joint name="right_fake_end_joint" /> <chain base_link="right_arm_base_link" tip_link="right_fake_end" /> </group> <group name="both_arms"> <group name="left_arm" /> <group name="right_arm" /> </group> But I got this error [ERROR] []: Group 'both_arms' is not a chain [ERROR] []: Kinematics solver of type 'kdl_kinematics_plugin/KDLKinematicsPlugin' could not be initialized for group 'both_arms' [ERROR] []: Kinematics solver could not be instantiated for joint group both_arms. I also tried the configuration below, setting each arm up as a chain the way the PR2 does, but got the same error. I did a test on the PR2: when I choose both arms, it shows two interactive markers, one for each arm. But in my case, I don't have any; yet if I choose right_arm or left_arm alone, there is one interactive marker for that arm. This is my srdf, quite similar to the PR2's. <group name="left_arm"> <chain base_link="body_link" tip_link="left_arm_link4" /> </group> <group name="right_arm"> <chain base_link="body_link" tip_link="right_arm_link4" /> </group> <group name="both_arms"> <group name="left_arm" /> <group name="right_arm" /> </group> I don't know how the PR2 sets up its arm planning groups; it would be much appreciated if someone could help me, thanks! Originally posted by peng cheng on ROS Answers with karma: 36 on 2021-08-21 Post score: 0 Original comments Comment by fvd on 2021-08-25: Is there nothing else that is different between your robot and the PR2? What if you add the joints to your two arm groups, like in the PR2 URDF? Comment by peng cheng on 2021-08-28: Thanks! It is the same result when I change to joints. 
The kinematics solver for the "both_arms" group is set to None; the PR2 also does this, if I am correct. My question is how this None kinematics solver does the motion planning, or does it plan for the "left_arm" and "right_arm" subgroups separately instead? I have no idea; it is quite confusing to me. Answer: I don't know why, but after I added an end effector for each arm, two interactive markers appear. So problem solved. Originally posted by peng cheng with karma: 36 on 2021-09-08 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by ColorJ on 2022-06-12: Would you like to share your SRDF file?
{ "domain": "robotics.stackexchange", "id": 36830, "tags": "moveit, ros-kinetic" }
What is a general definition of impedance?
Question: Impedance is a concept that shows up in any area of physics concerning waves. In transmission lines, impedance is the ratio of voltage to current. In optics, index of refraction plays a role similar to impedance. Mechanical impedance is the ratio of force to velocity. What is a general definition of impedance? What are some examples of "impedance matching" other than in electrical transmission lines? Answer: I found a general, qualitative answer in David Blackstock's book Physical Acoustics, on page 46: Impedance is often described as the ratio of a "push" variable $q_p$ (such as voltage or pressure) to a corresponding "flow" variable $q_f$ (such as current or particle velocity). I also received a nice answer to this question on another Q&A site which expands a bit on this qualitative statement with a quantitative one. In particular, this answer makes the point that impedance is the ratio (transfer function) of a force applied at a particular point to the velocity at that point. I suppose what I am looking for next is an intuitive explanation of the phenomenon of impedance matching and maximum power transfer.
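One quantitative example of impedance matching, to accompany the qualitative "push over flow" definition (a purely resistive source driving a resistive load; the numbers are arbitrary): the power delivered to the load, $P = V^2 R_L/(R_s+R_L)^2$, is maximized when the load resistance matches the source resistance.

```python
import numpy as np

# Maximum power transfer: sweep the load resistance RL and find where
# the delivered power P = V^2 * RL / (Rs + RL)^2 peaks.
V, Rs = 10.0, 50.0                      # source voltage and resistance
RL = np.linspace(1.0, 500.0, 100_000)   # candidate load resistances
P = V**2 * RL / (Rs + RL) ** 2
best = RL[np.argmax(P)]
assert abs(best - Rs) < 0.1             # matched load maximizes power
```

The same maximum-power logic is why transmission lines are terminated in their characteristic impedance and why acoustic horns taper gradually: a mismatch at the boundary reflects part of the wave instead of delivering it.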
{ "domain": "physics.stackexchange", "id": 88248, "tags": "optics, waves, terminology, definition, electrical-engineering" }
Simulating robot_localization on pioneer3at with xsens 300 AHRS
Question: I am testing robot_localization using the receive_xsens stack listed here: link text, and the ros-pioneer3at stack listed here: link text. I set up the frame id's to match those in the pioneer3at simulation stack as follows: map_frame = Pioneer3AT/map odom_frame = Pioneer3AT/odom base_link_frame = Pioneer3AT/base_link world_frame = Pioneer3AT/odom I configured xsens imu to produce orientation, angular velocity, and acceleration. The imu0 configuration looks like this: <param name="imu0" value="/imu/data" /> I will incorporate p3at hardware later with wheel encoders. For now trying to fuse simulated odometry from the ros-pioneer3at stack. Specifically, I have configured the odom0 as follows: <param name="odom0" value="Pioneer3AT/pose" /> After configuring, I launch the ros-pioneer3at simulation, then launch the receive_xsens hardware, then launch the ekf_localization to fuse the sensor and simulation data. There is no data being published to odometry/filtered. I am not sure where to start the debugging. -----edit 1----- Here is the node graph: rosgraph.png -----edit 2----- Thanks for the insight! 
Here is the script for /Pioneer3AT/pose: import rospy import tf from nav_msgs.msg import Odometry def subCB(msg): P = (msg.pose.pose.position.x, msg.pose.pose.position.y, msg.pose.pose.position.z ) Q = (msg.pose.pose.orientation.x, msg.pose.pose.orientation.y, msg.pose.pose.orientation.z, msg.pose.pose.orientation.w) br = tf.TransformBroadcaster() br.sendTransform(P, Q, rospy.Time.now(), "/Pioneer3AT/base_link", "world") if __name__ == '__main__': rospy.init_node('tf_from_pose') rospy.Subscriber("/Pioneer3AT/pose", Odometry, subCB) rospy.spin() Also, here is the ekf_localization_node launch file I used to try and fuse the xsens imu hardware with the ros-pioneer3at simulation: <launch> <node pkg="robot_localization" type="ekf_localization_node" name="ekf_localization" clear_params="true"> <param name="frequency" value="30"/> <param name="sensor_timeout" value="0.1"/> <param name="two_d_mode" value="false"/> <param name="map_frame" value="/Pioneer3AT/map"/> <param name="odom_frame" value="/Pioneer3AT/odom"/> <param name="base_link_frame" value="/Pioneer3AT/base_link"/> <param name="world_frame" value="/Pioneer3AT/odom"/> <param name="odom0" value="/Pioneer3AT/pose"/> <param name="imu0" value="/imu/data"/> <rosparam param="odom0_config">[true, true, true, true, true, true, false, false, false, false, false, false, false, false, false]</rosparam> <rosparam param="imu0_config">[false, false, false, true, true, true, false, false, false, true, true, true, true, true, true]</rosparam> <param name="odom0_differential" value="false" /> <param name="imu0_differential" value="false" /> <param name="imu0_remove_gravitational_acceleration" value="true"/> <param name="odom0_pose_rejection_threshold" value="5"/> <param name="imu0_pose_rejection_threshold" value="0.3"/> <param name="imu0_angular_velocity_rejection_threshold" value="0.1"/> <param name="imu0_linear_acceleration_rejection_threshold" value="0.1"/> --> <rosparam param="process_noise_covariance"> [0.05, 0.0, 0.0, 0.0, 
0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.05, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.06, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.03, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.03, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.00, 0.06, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.00, 0.0, 0.025, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.00, 0.0, 0.0, 0.025, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.00, 0.0, 0.0, 0.0, 0.04, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.00, 0.0, 0.0, 0.0, 0.0, 0.01, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.00, 0.0, 0.0, 0.0, 0.0, 0.0, 0.01, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.00, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.02, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.00, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.01, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.00, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.01, 0.0, 0.0, 0.0, 0.0, 0.0, 0.00, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.015]</rosparam> </node> </launch> Here are the sample message for /imu/data: header: seq: 53 stamp: secs: 1422541047 nsecs: 981105890 frame_id: /imu orientation: x: -0.0338176414371 y: -0.0273063704371 z: -0.260511487722 w: 0.964491903782 orientation_covariance: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0] angular_velocity: x: -0.00170594989322 y: 0.000133183944854 z: 0.0014496203512 angular_velocity_covariance: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0] linear_acceleration: x: 0.690724253654 y: -0.452333807945 z: 9.75730419159 linear_acceleration_covariance: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0] --- Here is a sample sensor message for /Pioneer3AT/pose: header: seq: 3203 stamp: secs: 1422541097 nsecs: 747374621 frame_id: /Pioneer3AT/odom child_frame_id: /Pioneer3AT/base_link pose: pose: position: x: 0.000293139455607 y: 0.00299759814516 z: 
0.17000131309 orientation: x: 2.56361946104e-06 y: 7.77009058467e-06 z: 0.0016544693635 w: 0.999998631331 covariance: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0] twist: twist: linear: x: 0.0 y: 0.0 z: 0.0 angular: x: 0.0 y: 0.0 z: 0.0 covariance: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0] --- ---edit 3--- Still trying to make it work! I now started trying to get odometry working without an imu as you suggest. robot_localization improves the estimate as there is less drift than before. Next, I attempt to integrate the imu. I have an updated launch file here: <launch> <node pkg="robot_localization" type="ekf_localization_node" name="ekf_localization" clear_params="true"> <param name="frequency" value="30"/> <param name="sensor_timeout" value="1"/> <param name="two_d_mode" value="false"/> <param name="map_frame" value="/Pioneer3AT/map"/> <param name="odom_frame" value="/Pioneer3AT/odom"/> <param name="base_link_frame" value="/Pioneer3AT/base_link"/> <param name="world_frame" value="/Pioneer3AT/odom"/> <param name="odom0" value="/Pioneer3AT/odom"/> <param name="imu0" value="/imu"/> <rosparam param="odom0_config">[true, true, true, true, true, true, false, false, false, false, false, false, false, false, false]</rosparam> <rosparam param="imu0_config">[false, false, false, true, true, true, false, false, false, true, true, true, true, true, true]</rosparam> <param name="odom0_differential" value="true" /> <param name="imu0_differential" value="true" /> <param name="imu0_remove_gravitational_acceleration" value="true"/> <param name="odom0_pose_rejection_threshold" value="5"/> <param name="imu0_pose_rejection_threshold" value="0.3"/> <param name="imu0_angular_velocity_rejection_threshold" value="0.1"/> 
<param name="imu0_linear_acceleration_rejection_threshold" value="0.1"/> <rosparam param="process_noise_covariance">[0.05, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.05, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.06, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.03, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.03, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.00, 0.06, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.00, 0.0, 0.025, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.00, 0.0, 0.0, 0.025, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.00, 0.0, 0.0, 0.0, 0.04, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.00, 0.0, 0.0, 0.0, 0.0, 0.01, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.00, 0.0, 0.0, 0.0, 0.0, 0.0, 0.01, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.00, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.02, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.00, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.01, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.00, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.01, 0.0, 0.0, 0.0, 0.0, 0.0, 0.00, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.015]</rosparam> </node> </launch> I connect Pioneer3AT/base_link->/imu with the following: <launch> <node pkg="tf" type="static_transform_publisher" name="link1_broadcaster" args="0 0 0 0 0 0 1 /Pioneer3AT/base_link /imu 100" /> </launch> The tf_tree is here with ekf node running: One observation: I notice that in RVIZ the wheels of the P3AT bounce up and down every few seconds. Is this likely because I am trying to set position/orientation with two broadcasters? Originally posted by Orso on ROS Answers with karma: 37 on 2015-01-26 Post score: 0 Original comments Comment by l0g1x on 2015-01-26: your topics might be namespaced to "pioneer3AT/imu/data" for example. can you post a picture of the ros node graph from rqt? 
Comment by l0g1x on 2015-01-26: Seems like you only have one input source (imu). I may be wrong, but im pretty sure you may need at least two. (wheel odometry, GPS, etc..) Comment by Orso on 2015-01-26: @Tom Moore: --Pioneer3AT/pose is of the type nav_msgs.msg/Odometry. I have included the scripts in my original post, along with my ekf_localization_node, and a sample message from each sensor. --Still trying to get one sensor at a time up and running. --Not able to run /imu/data alone. Comment by Tom Moore on 2015-01-28: Would you mind just posting one example of each sensor message? You don't need to do record all of them with -p. Just do rostopic echo, copy a single message, and paste it in the question. I'm not trying to debug all your data. I just want to see one sample message of each. Thanks! Comment by Tom Moore on 2015-01-29: Ha, no, sorry, I meant the entire message, complete with frame_ids. Like this (click the "more" link in the question to see the whole thing). Comment by Tom Moore on 2015-02-08: What do you mean by "set position/orientation with two broadcasters"? Do you still have the P3 node broadcasting the odom->base_link transform as well? Comment by Orso on 2015-02-08: I meant that ekf_localization is broadcasting odom->base_link and Gazebo_Bridge is broadcasting odom->base_link. I could not realize another way to provide odom data in my current setup. For now, I am fusing simulation odom data, with xsens data. Later planned to use wheel odometry data. Comment by Tom Moore on 2015-02-09: What do you mean by "provide odom data"? You simply can not have two different nodes publishing the same transform. If you are running ekf_localization_node, it is providing your odometry message and the odom->base_link transform. Answer: One data source will work. What message type is Pioneer3AT/pose? If it's a PoseWithCovarianceStamped, then your configuration is wrong. 
Change it to <param name="pose0" value="Pioneer3AT/pose" /> When in doubt, start with one sensor at a time. Get your pose data working, then add the IMU, etc. EDIT: Also, the best thing to do is paste your entire ekf_localization_node launch file, and post a sample message from each sensor. EDIT 2: Your IMU data appears to be in the /imu frame. Are you providing a transform from /imu to /Pioneer3AT/base_link? Also, you want to make sure that your Pioneer code is not broadcasting a /Pioneer3AT/odom->/Pioneer3AT/base_link transform. EDIT 3: So I ran your bag file with the launch file you posted. I am definitely getting publications on /odometry/filtered, though the only data going into it is IMU data, because your bag didn't have any Pioneer odometry messages in it. If you're still having trouble, do this for me: Set up your robot as before, without ekf_localization_node running. Bring everything up. Start bagging everything. Drive your robot in some pattern. A square works well. Make note of how you drove the robot (e.g., clockwise turns, leg lengths, etc.). Post the bag, and let me know what, if any, output you were getting from ekf_localization_node. EDIT 4: If you download the bag file you posted, and then start playing it back, and do rostopic echo /Pioneer3AT/pose, you'll see that no messages are published to /Pioneer3AT/pose (or you can just do rosbag info on the bag and see that no odometry messages are in there). Now run your system live, and do rostopic echo /Pioneer3AT/pose. If nothing comes out, then you've got a problem with your odometry publishing. EDIT 5 (in response to comment): here's a screenshot of the output of your bag after about 30 seconds. Is this what you mean by "light years"? This is exactly what I would expect to see from a single IMU being used for your state estimate. You're fusing linear acceleration, which is typically pretty noisy, and is clearly not zero-mean in this case, as you are creeping along in all three dimensions. 
If you have no other measurements from other sensors to help, small errors in your IMU acceleration will accumulate and lead to false velocity, which will continue to grow infinitely, and your position will as well. Long story short: you can't get a good state estimate with a single IMU by itself. Re: the odom->base_link transform, see REP-105. ROS assumes three principal frames: map, odom, and base_link (or whatever naming scheme you want). The purpose of packages like robot_localization is to provide two things: A nav_msgs/Odometry message with the robot's current pose estimate A transform from odom->base_link or a transform from map->odom. With your configuration, we are going from /Pioneer3/odom->/Pioneer3/base_link. This transform will contain effectively the same data as the message, but will be used by tf. Note that the transform from the odom frame to the base_link frame is equivalent to your pose in the world (technically, it's the inverse, but when you put the transform together, you use the same data as your message). Now, tf does not like two different nodes broadcasting the same transform. The Pioneer node you're using has an option that allows it to broadcast the /Pioneer3/odom->/Pioneer3/base_link based on its raw wheel odometry. You want a better estimate of your position, so you fuse the raw wheel odometry with IMU data. This produces the two outputs above. Originally posted by Tom Moore with karma: 13689 on 2015-01-27 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by Orso on 2015-01-28: I am not providing a transform from /imu frame to /Pioneer3AT/base_link. I believe that I had the broadcaster on from /Pioneer3AT_Gazebo, broadcasting a /Pioneer3AT/odom->/Pioneer3AT/base_link transform. Sorry about that. Comment by Tom Moore on 2015-01-29: No worries! Try providing the /Pioneer3AT/base_link->imu transform (you can probably just use tf's static_transform_publisher). 
Then disable the broadcast of the odom->base_link transform from Gazebo. Let me know if that helps. Comment by Orso on 2015-01-30: I tried tf's static_transform_publisher for Pioneer3AT/base_link->imu while disabling broadcast of odom->base_link. /odometry/filtered publish's pose and orientation data. Both grow without bounds, and seem unaffected by the imu. Would you have any other suggestions? Greatly appreciated! Comment by Tom Moore on 2015-01-30: Can you run your system without running anything from robot_localization, and then post a bag? Thanks! Comment by Tom Moore on 2015-01-30: Unless you have a Kinect/camera/LIDAR, then yes. Comment by Orso on 2015-01-30: Tom: Here is a link to the bag file: link text Please let me know if I can provide anything else....Is it best to post in the question? Thanks! Comment by Orso on 2015-02-02: Here is link to bag file: link text. I turned off Pioneer3AT/odom->Pioneer3AT/base_link. Just using imu sensor for now. I moved forward a few meters, turned CCW, forward a few meters, CCW, forward a few meters. Comment by Orso on 2015-02-03: The bag file is in the previous comment. This time I surely had nothing running from robot_localization (Sorry about the previous bag file that did), and I got no output from robot_localization. Thanks! Comment by Orso on 2015-02-03: Right. I tried not publishing odometry data to Pioneer3AT/pose in order to work with just the imu sensor. Do I need to have that published? Thanks! Comment by Tom Moore on 2015-02-03: I guess it depends on what remains to be answered in your question. When I use your launch file, the IMU data is clearly being integrated and a state estimate is being produced (i.e., the /odometry/filtered message gets published). Is that not the case for you? Comment by Orso on 2015-02-03: The IMU data is being integrated in my case too, but the state estimate is off by light years. Also, it is unaffected by any motion of the IMU. 
I turned off the odom->base_link transform, and I am not sure how to get pose data without that broadcast.
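Tom's point in EDIT 5, that a small acceleration bias in a lone IMU is double-integrated into unbounded velocity and position error, is easy to demonstrate numerically. The following is a plain-Python illustration (no ROS); the bias, rate, and duration are made-up but plausible numbers for a MEMS IMU:

```python
# Why a single IMU cannot give a bounded position estimate: a small constant
# accelerometer bias is double-integrated, so the false velocity grows
# linearly with time and the false position grows quadratically.

def dead_reckon(bias, dt, steps):
    """Integrate a constant acceleration bias into velocity and position."""
    vel = pos = 0.0
    for _ in range(steps):
        vel += bias * dt   # v <- v + a*dt
        pos += vel * dt    # x <- x + v*dt
    return vel, pos

# 0.05 m/s^2 of bias (optimistic), sampled at 100 Hz for one minute:
vel, pos = dead_reckon(0.05, 0.01, 6000)
print(vel, pos)   # ~3 m/s of false velocity, ~90 m of false position
```

After just sixty seconds the estimate is tens of meters off, which is why the filter output creeps away in all three dimensions unless some other sensor (wheel odometry, GPS, etc.) bounds the drift.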
{ "domain": "robotics.stackexchange", "id": 20693, "tags": "navigation, robot-localization" }
Adding extension methods to IServiceCollection in ASP.NET Core
Question: I have the following extension method to add themes support to my application:

public static void AddThemes(this IServiceCollection services, Action<ThemesOptions> setup)
{
    services.Configure(setup);
}

Where ThemesOptions is defined as:

public class ThemesOptions
{
    public IEnumerable<string> Themes { get; set; }
}

Now in my application's startup ConfigureServices method I can say:

services.AddThemes(options =>
{
    options.Themes = Configuration.GetSection("Themes").Get<ThemesOptions>().Themes;
});

I'm not sure that I like that I have to set every property for the options. Alternatively I tried:

services.AddThemes(options =>
{
    options = Configuration.GetSection("Themes").Get<ThemesOptions>();
});

And:

services.AddThemes(options => Configuration.GetSection("Themes"));

However, when I inject IOptions<ThemesOptions> the Themes property is null. As an alternative, I changed my extension method to:

public static void AddThemes(this IServiceCollection services, IConfiguration configuration)
{
    services.Configure<ThemesOptions>(configuration.GetSection("Themes"));
}

Now I can say the following within my application's Startup ConfigureServices method:

services.AddThemes(Configuration);

This worked fine; however, I feel the problem with this approach is that the extension method only allows the options to be set from the configuration. I'd appreciate it if someone could confirm whether my first solution is correct and if it can be improved upon.

Answer: Instead of doing

services.AddThemes(options =>
{
    options = Configuration.GetSection("Themes").Get<ThemesOptions>();
});

you could use

services.AddThemes(options =>
{
    Configuration.GetSection("Themes").Bind(options);
});

which will programmatically set each value of options based on what's in the configuration.
{ "domain": "codereview.stackexchange", "id": 34931, "tags": "c#, dependency-injection, extension-methods, asp.net-core" }
MATLAB fir2 - npt and lap
Question: This might not be the right place to ask this, but I'm hoping someone can explain two of the arguments in the MATLAB fir2 function. It is a function for designing filters using the frequency sampling method. There are two optional arguments: npt - Number of grid points, specified as a positive integer scalar. npt must be larger than one-half the filter order: npt > n/2. lap - Length of region around duplicate frequency points, specified as a positive integer scalar. I am struggling to find any meaningful literature around this so if anyone could explain these two in further detail or point me to something to read that would be greatly appreciated. Answer: npt is the number of frequency points that are used to define the desired frequency response. It's the length of the inverse FFT that is applied to the frequency domain data. It defaults to $512$ points, but if you want to design very long filters (with many taps) then you should choose a larger number. lap defines the width of the transition band, if there is one. There is a transition band if there are two equal frequencies specified in the vector f, where a step in the desired response occurs. The wider the transition band the smaller the approximation error in the pass bands and stop bands, so there is a trade-off between steep transitions and approximation errors in the pass bands and stop bands.
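The frequency-sampling procedure behind fir2 (interpolate the desired magnitude onto an npt-point grid, impose linear phase, inverse-transform, apply a window) can be sketched in a few lines. The following is an illustrative pure-Python sketch, not a bit-exact reimplementation of the MATLAB function; the name `fir2_sketch` and its defaults are made up for this example:

```python
# Sketch of the frequency-sampling idea behind fir2. "npt" is the size of the
# frequency grid the desired response is interpolated onto before the inverse
# transform; more taps need a finer grid, which is why npt must exceed n/2.
import math

def fir2_sketch(numtaps, freq, gain, npt=512):
    """freq: normalized frequencies in [0, 1] (1 = Nyquist); gain: |H| there."""
    d = (numtaps - 1) / 2.0  # linear-phase group delay in samples

    def interp(f):
        # piecewise-linear interpolation of the desired magnitude
        for (f0, g0), (f1, g1) in zip(zip(freq, gain), zip(freq[1:], gain[1:])):
            if f0 <= f <= f1:
                return g0 if f1 == f0 else g0 + (g1 - g0) * (f - f0) / (f1 - f0)
        return gain[-1]

    dw = math.pi / (npt - 1)
    h = []
    for n in range(numtaps):
        # h[n] = (1/pi) * integral_0^pi A(w) cos(w (n - d)) dw  (trapezoid rule)
        acc = 0.0
        for i in range(npt):
            w = i * dw
            term = interp(w / math.pi) * math.cos(w * (n - d))
            acc += term * (0.5 if i in (0, npt - 1) else 1.0)
        # Hamming window, which fir2 also applies by default
        win = 0.54 - 0.46 * math.cos(2 * math.pi * n / (numtaps - 1))
        h.append((acc * dw / math.pi) * win)
    return h

# Lowpass: passband to 0.3, transition 0.3-0.4, stopband above
h = fir2_sketch(31, [0.0, 0.3, 0.4, 1.0], [1.0, 1.0, 0.0, 0.0])
print(round(sum(h), 3))   # DC gain close to the requested 1.0
```

The two breakpoints at 0.3 and 0.4 play the role of the lap-style transition region: widening that gap reduces the ripple in the pass band and stop band at the cost of a slower roll-off, which is the trade-off described above.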
{ "domain": "dsp.stackexchange", "id": 7796, "tags": "matlab, filter-design, finite-impulse-response, digital-filters" }
Getting deeper into map (Occupancy Grid)
Question: I'm developing a navigation algorithm, for which I need to find out corners and walls present in the room. I've been playing around with turtlebot simulation. I've made a very simple rectangular room of about 3.0 x 2.4 meter square. I've saved the map I generated using gmapping. To find the corners and walls, I'm writing my own python script, using OpenCV functions like Harris Corner detection and Contours, and Canny. I'll be feeding the map.pgm file to do the same. I'll receive the pixels with corners and wall. Now, I need to know where these are in relation to the bot. That's where the problem is. I am not able to understand where does the map set its origin? And if it's the same for every map file? Does the robot's start, end pose or trajectory during the gmapping affect the map origin? Also, if the map isn't at the top left corner, then how can I compute the distance between that pixel and my robot? I'd thought of identifying the pixel, then knowing its location with respect to map, and map's origin with respect to base_footprint. That way I'd know the vector joining the pixel and the base_footprint, and can drive to the point if I want to. I'm using the default 0.05 resolution, therefore each pixel should be 5 cm. According to my observations, the map origin depends on the start pose of the robot while the mapping process. But if someone experienced can answer these questions, it would be a great help. Also, if you think this isn't a good approach, and have a better approach, please teach me, or let me know. Thanks! As per following images, map origin isn't same, and not necessarily at the top left. Map 1: Map 2: Originally posted by parzival on ROS Answers with karma: 463 on 2019-11-09 Post score: 2 Original comments Comment by parzival on 2019-11-09: I read about nav_msgs/OccupancyGrid and nav_msgs/MapMetaData. 
So the /map will have cell values (-1, 0 or 100), and that MapMetaData is a part of the OccupancyGrid message, and holds info like height, width, resolution and origin. Can I use the origin data to calculate the cell value of the map origin, and therefore to calculate its distance from any pixel of interest?

Answer: Hi parzival,

I am not able to understand where does the map set its origin?

The origin of the map is defined in its .yaml file. Its value depends on how the map is created, but when using gmapping it is usually derived from the width and height of the map and its resolution, so as to try to put the 0,0 position at the center of the map. This is by no means a rule, though; it depends on a variety of things.

And if it's the same for every map file?

No, it depends on the map; moreover, it can be changed in the .yaml file.

Does the robot's start, end pose or trajectory during the gmapping affect the map origin?

Not really, it only affects the position of the robot on the map.

Also, if the map isn't at the top left corner, then how can I compute the distance between that pixel and my robot?

You need to put both things in the same reference frame. You can extract the position of a given pixel in the /map frame using a simple computation, for example:

void mapToWorld(unsigned int map_x, unsigned int map_y,
                double& pos_x, double& pos_y,
                nav_msgs::OccupancyGrid map)
{
    pos_x = map.info.origin.position.x + (map_x + 0.5) * map.info.resolution;
    pos_y = map.info.origin.position.y + (map_y + 0.5) * map.info.resolution;
}

Then you can easily get the distance to the robot's position on the map.

Originally posted by Mario Garzon with karma: 802 on 2019-11-09

This answer was ACCEPTED on the original site

Post score: 5
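The same cell-to-world computation, plus the distance step the asker wants, can be written in Python. This is a sketch with made-up values: the origin (-5, -5) and the 0.05 m resolution stand in for whatever the map's .yaml / MapMetaData actually reports:

```python
# Convert an occupancy-grid cell index to world coordinates in the map frame,
# then measure its distance to the robot. Mirrors the mapToWorld C++ snippet.
import math

def map_to_world(mx, my, origin_x, origin_y, resolution):
    """Grid cell (mx, my) -> world coordinates of the cell's center."""
    wx = origin_x + (mx + 0.5) * resolution   # +0.5 -> center of the cell
    wy = origin_y + (my + 0.5) * resolution
    return wx, wy

# Example: 0.05 m/cell map whose origin (lower-left corner) sits at (-5, -5)
wx, wy = map_to_world(100, 100, -5.0, -5.0, 0.05)
robot_x, robot_y = 0.0, 0.0               # robot pose, also in the map frame
dist = math.hypot(wx - robot_x, wy - robot_y)
print(round(wx, 3), round(wy, 3), round(dist, 4))   # 0.025 0.025 0.0354
```

Note both positions must already be in the same frame (here /map); if the robot pose is published in another frame, look it up through tf first.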
{ "domain": "robotics.stackexchange", "id": 33991, "tags": "navigation, mapping, turtlebot, ros-kinetic, gmapping" }
Controller algorithm implementation in ROS/Gazebo
Question: I am doing some robotic simulations in ROS/Gazebo and wondering what is the best way (programming-wise, since I don't have a CS background) to implement a robot's motion controller. Here are the questions:

1. Where should I call my Controller function (in a callback? which callback if I have multiple callbacks running simultaneously)?
2. How do I sync data coming from different nodes, i.e. callbacks?
3. What if my sensor nodes are running faster than my controller loop; how should I implement the controller then?
4. Is there any way to subscribe to different topics with a single callback?
5. Should I process the data in a callback or process it in the controller function?
6. What is the exact use of buffers in ROS subscribers?

Sorry for a long list of inquiries, but these questions have been bothering me for some time, and I would like to make my code more efficient and aesthetically pleasant at the same time if possible.

P.S: I am using C++ in ROS.

Answer: That is a long list of broad questions. Some of the answers depend heavily on application and personal preference. Assuming you're not using ROS2, you might consider looking at the ros_control package. Not necessarily to use - just read through their architecture and tutorials for now. It will at least help give you an idea of where to start. As for the specifics:

1. In general your control loop should be running at a fixed rate (e.g. 50 Hz), so your control function should be running independently of your sensor callbacks.
2. It depends: what is your sensor data and where is it coming from? In your control node, your sensor subscribers could just provide your control loop with the most up-to-date data possible.
3. This also depends on the application. Obviously, if your sensor data is coming in at 10 Hz you probably don't need a 2000 Hz control loop.
4. Maybe look at the message_filters package. It sounds like it'll answer some of your questions, too.
5. Depends on your data.
But you probably want your controller function to handle combining all of your sensor input and processing it at the controller's loop rate.

Do you mean the queue sizes? They give flexibility in the timing of processing published messages. For your sensor data you probably only care about the latest available reading, and your callback should probably never fall behind on processing messages from the queue. But say you had a sensor that produced a huge burst of information in a short amount of time; then, with a sufficient queue size, all of those messages could be published and processed without losing any of the data.

Hope that helps. I think the ros_control stuff might be a good place for you to start.
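The "fixed-rate loop plus latest-sample callbacks" pattern the answer describes can be sketched without any ROS at all. The class and names below are illustrative, not a ROS API; in a real node the callback would be a topic subscriber and the control step would fire from a rate-limited timer:

```python
# Callbacks only cache the newest message; a fixed-rate control step consumes
# whatever is newest when it fires. This decouples sensor rate from loop rate.

class Controller:
    def __init__(self):
        self.latest_scan = None   # newest sensor reading, written by callback
        self.command_log = []

    def scan_callback(self, msg):
        # Runs at the sensor's rate: cache only, do no heavy work here.
        self.latest_scan = msg

    def control_step(self):
        # Runs at the controller's fixed rate, independent of the callbacks.
        if self.latest_scan is None:
            return None                    # no data yet -> no command
        cmd = -0.5 * self.latest_scan      # toy proportional control law
        self.command_log.append(cmd)
        return cmd

ctrl = Controller()
# Five sensor updates arrive between controller ticks; the controller then
# acts only on the newest one, so a fast sensor never floods a slow loop.
for sample in [0.1, 0.2, 0.3, 0.4, 0.5]:
    ctrl.scan_callback(sample)
print(ctrl.control_step())   # -0.25
```

In a multi-threaded node (e.g. an AsyncSpinner in C++), the cached `latest_scan` would additionally need a mutex or atomic swap so the control step never reads a half-written message.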
{ "domain": "robotics.stackexchange", "id": 2087, "tags": "mobile-robot, ros, c++, gazebo" }
Why do solar panels not have focusing mirrors?
Question: Most of the solar panels that I have seen do not have any mirrors, etc., but usually solar cookers have mirrors. What is the reason for solar panels not having focusing mirrors? Answer: The simple answer is that the two devices work in completely different ways. Solar cookers, as well as so-called 'solar thermal collectors', focus the light of the sun to heat something (a pot in a cooker, some oil or ceramics) and the heat is then transferred somewhere, where it generates electricity, usually by some steam engine. So, the more heat, the better. Solar panels on the other hand use the photovoltaic effect, which directly converts light into electric energy. Light excites electrons to the conduction band and the current is then transmitted somewhere. Too much heat, however, destroys the materials used, so focusing might be a very bad idea.
{ "domain": "physics.stackexchange", "id": 23248, "tags": "energy, visible-light, reflection, efficient-energy-use" }
The concept of flux
Question: I find it difficult to understand the concepts of flux and field. I have searched the internet and referred to a few books. From what I understand, generally flux can be defined as "some quantity per unit of some other quantity". E.g.: "the force of attraction at a point in space per unit charge placed at that point is called electric flux", "the rate of heat transfer per unit area is called heat flux in conduction". Please suggest whether I am right or wrong.

Answer: From what I understand, generally flux can be defined as "some quantity per unit of some other quantity". That's about right. The "some other quantity" is often one of particular interest. Typically, it is a surface, or a facet of a surface, used in control volume analysis. And there is a flux operator. It returns a scalar quantity from a vector operation. In the case of fluids, the fluid advects the quantity of interest. So you have a description of the density of some thing of interest at time t (a scalar field), a velocity field of the fluid (a vector field), and a reference surface S. The flux operator tallies the rate at which stuff is advecting through the surface. Of course, when dealing with pure field equations as in electromagnetism, there is no need for an advective ether. Flux pertains to the vector field itself with respect to some surface of interest. We often have a system of equations that all involve this same control volume, and which have multiple conservation laws, which relate to a potential field. The flux idea is very useful in writing these equations in a compact form. For example, the equation below can represent a system of three conservation equations, $F$ being the flux function. $$\frac{d}{dt} \iint_{\Omega}\mathbf q\ dx\ dy + \int_{\partial \Omega} F(\mathbf q)\cdot\mathbf n\ ds = \iint_{\Omega} {\psi}\ dx\ dy $$ The equations would be conservation of mass, conservation of x momentum, and conservation of y momentum. See page 4 of this document to see this in action.
- Numerical Approximation of the Nonlinear Shallow Water Equations with Topography and Dry Beds: A Godunov-Type Scheme, by David L. George

There is a nice series of three videos from the Khan Academy that deal with the flux operator. They play in sequence. The first one is here. (There should be a badge for learning MathJax. That formula took me 45 minutes.)
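The boundary term $\int_{\partial\Omega} F\cdot\mathbf n\, ds$ in the answer's conservation law is exactly the flux operator: a vector field dotted with the outward normal and summed over a surface, returning a scalar. A small numeric sketch (my own example, not from the answer) checks it against the divergence theorem for $F(x,y) = (x,y)$ through the unit circle, where the flux must equal $\operatorname{div}F \cdot \text{area} = 2\pi$:

```python
# Approximate the outward flux (line integral of F . n_hat ds) of a 2-D
# vector field through a circle, and compare with the divergence theorem.
import math

def flux_through_circle(field, radius=1.0, n=10000):
    """Sum field . n_hat over n small arc segments of length ds."""
    total = 0.0
    ds = 2 * math.pi * radius / n
    for i in range(n):
        theta = 2 * math.pi * i / n
        x, y = radius * math.cos(theta), radius * math.sin(theta)
        nx, ny = math.cos(theta), math.sin(theta)   # outward unit normal
        fx, fy = field(x, y)
        total += (fx * nx + fy * ny) * ds            # scalar from a vector op
    return total

# F(x, y) = (x, y) has div F = 2, so flux = 2 * (area of unit disk) = 2*pi
flux = flux_through_circle(lambda x, y: (x, y))
print(round(flux, 6))   # 6.283185
```

The same tally with a velocity field in place of $F$ gives the advective rate the answer describes: how much "stuff" is carried through the surface per unit time.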
{ "domain": "engineering.stackexchange", "id": 2461, "tags": "electrical-engineering, heat-transfer" }
Implementation of an asynchronous TCP/UDP server
Question: I am trying to implement a TCP/UDP server so all I have to do is something like this: var server = new Server(Type.UDP, "127.0.0.1", 8888); server.OnDataRecieved += Datahandler; server.Start(); I have tried to make it perform as fast as possible by using Asynchronous calls where possible. I basically would like to know if there is anything missing/any changes that people would recommend (and why). It is not finished yet as I need to handle exceptions better etc... TODO: I need to complete the signature of the events to make them more meaningful, etc. using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Net; using System.Net.Sockets; using System.Threading; namespace SBlackler.Networking { public sealed class HighPerformanceServer { private Int32 _currentConnections = 0; Socket listener; EndPoint ipeSender; #region "Properties" public Int32 Port { get; set; } public Int32 CurrentConnections { get { return _currentConnections; } } public Int32 MaxQueuedConnections { get; set; } public IPEndPoint Endpoint { get; set; } public ServerType Type { get; set; } #endregion #region "Constructors" private HighPerformanceServer() { // do nothing } public HighPerformanceServer(ServerType type, String IpAddress) { Init(type, IpAddress, 28930); } public HighPerformanceServer(ServerType type, String IpAddress, Int32 Port) { Init(type, IpAddress, Port); } private void Init(ServerType server, String IpAddress, Int32 Port) { IPAddress ip; // Check the IpAddress to make sure that it is valid if (!String.IsNullOrEmpty(IpAddress) && IPAddress.TryParse(IpAddress, out ip)) { this.Endpoint = new IPEndPoint(ip, Port); // Make sure that the port is greater than 100 as not to conflict with any other programs if (Port < 100) { throw new ArgumentException("The argument 'Port' is not valid. 
Please select a value greater than 100."); } else { this.Port = Port; } } else { throw new ArgumentException("The argument 'IpAddress' is not valid"); } // We never want a ServerType of None, but we include it as it is recommended by FXCop. if (server != ServerType.None) { this.Type = server; } else { throw new ArgumentException("The argument 'ServerType' is not valid"); } } #endregion #region "Events" public event EventHandler<EventArgs> OnServerStart; public event EventHandler<EventArgs> OnServerStarted; public event EventHandler<EventArgs> OnServerStopping; public event EventHandler<EventArgs> OnServerStoped; public event EventHandler<EventArgs> OnClientConnected; public event EventHandler<EventArgs> OnClientDisconnecting; public event EventHandler<EventArgs> OnClientDisconnected; public event EventHandler<EventArgs> OnDataReceived; #endregion public void Start() { // Tell anything that is listening that we have starting to work if (OnServerStart != null) { OnServerStart(this, null); } // Get either a TCP or UDP socket depending on what we specified when we created the class listener = GetCorrectSocket(); if (listener != null) { // Bind the socket to the endpoint listener.Bind(this.Endpoint); // TODO :: Add throttleling (using SEMAPHORE's) if (this.Type == ServerType.TCP) { // Start listening to the socket, accepting any backlog listener.Listen(this.MaxQueuedConnections); // Use the BeginAccept to accept new clients listener.BeginAccept(new AsyncCallback(ClientConnected), listener); } else if (this.Type == ServerType.UDP) { // So we can buffer and store information, create a new information class SocketConnectionInfo connection = new SocketConnectionInfo(); connection.Buffer = new byte[SocketConnectionInfo.BufferSize]; connection.Socket = listener; // Setup the IPEndpoint ipeSender = new IPEndPoint(IPAddress.Any, this.Port); // Start recieving from the client listener.BeginReceiveFrom(connection.Buffer, 0, connection.Buffer.Length, SocketFlags.None, ref 
ipeSender, new AsyncCallback(DataReceived), connection); } // Tell anything that is listening that we have started to work if (OnServerStarted != null) { OnServerStarted(this, null); } } else { // There was an error creating the correct socket throw new InvalidOperationException("Could not create the correct sever socket type."); } } internal Socket GetCorrectSocket() { if (this.Type == ServerType.TCP) { return new Socket(this.Endpoint.AddressFamily, SocketType.Stream, ProtocolType.Tcp); } else if (this.Type == ServerType.UDP) { return new Socket(this.Endpoint.AddressFamily, SocketType.Dgram, ProtocolType.Udp); } else { return null; } } public void Stop() { if (OnServerStopping != null) { OnServerStopping(this, null); } if (OnServerStoped != null) { OnServerStoped(this, null); } } internal void ClientConnected(IAsyncResult asyncResult) { // Increment our ConcurrentConnections counter Interlocked.Increment(ref _currentConnections); // So we can buffer and store information, create a new information class SocketConnectionInfo connection = new SocketConnectionInfo(); connection.Buffer = new byte[SocketConnectionInfo.BufferSize]; // We want to end the async event as soon as possible Socket asyncListener = (Socket)asyncResult.AsyncState; Socket asyncClient = asyncListener.EndAccept(asyncResult); // Set the SocketConnectionInformations socket to the current client connection.Socket = asyncClient; // Tell anyone that's listening that we have a new client connected if (OnClientConnected != null) { OnClientConnected(this, null); } // TODO :: Add throttleling (using SEMAPHORE's) // Begin recieving the data from the client if (this.Type == ServerType.TCP) { asyncClient.BeginReceive(connection.Buffer, 0, connection.Buffer.Length, SocketFlags.None, new AsyncCallback(DataReceived), connection); } else if (this.Type == ServerType.UDP) { asyncClient.BeginReceiveFrom(connection.Buffer, 0, connection.Buffer.Length, SocketFlags.None, ref ipeSender, new AsyncCallback(DataReceived), 
connection); } // Now we have begun recieving data from this client, // we can now accept a new client listener.BeginAccept(new AsyncCallback(ClientConnected), listener); } internal void DataReceived(IAsyncResult asyncResult) { try { SocketConnectionInfo connection = (SocketConnectionInfo)asyncResult.AsyncState; Int32 bytesRead; // End the correct async process if (this.Type == ServerType.UDP) { bytesRead = connection.Socket.EndReceiveFrom(asyncResult, ref ipeSender); } else if (this.Type == ServerType.TCP) { bytesRead = connection.Socket.EndReceive(asyncResult); } else { bytesRead = 0; } // Increment the counter of BytesRead connection.BytesRead += bytesRead; // Check to see whether the socket is connected or not... if (IsSocketConnected(connection.Socket)) { // If we have read no more bytes, raise the data received event if (bytesRead == 0 || (bytesRead > 0 && bytesRead < SocketConnectionInfo.BufferSize)) { byte[] buffer = connection.Buffer; Int32 totalBytesRead = connection.BytesRead; // Setup the connection info again ready for another packet connection = new SocketConnectionInfo(); connection.Buffer = new byte[SocketConnectionInfo.BufferSize]; connection.Socket = ((SocketConnectionInfo)asyncResult.AsyncState).Socket; // Fire off the receive event as quickly as possible, then we can process the data... 
if (this.Type == ServerType.UDP) { connection.Socket.BeginReceiveFrom(connection.Buffer, 0, connection.Buffer.Length, SocketFlags.None, ref ipeSender, new AsyncCallback(DataReceived), connection); } else if (this.Type == ServerType.TCP) { connection.Socket.BeginReceive(connection.Buffer, 0, connection.Buffer.Length, SocketFlags.None, new AsyncCallback(DataReceived), connection); } // Remove any extra data if (totalBytesRead < buffer.Length) { Array.Resize<Byte>(ref buffer, totalBytesRead); } // Now raise the event, sender will contain the buffer for now if (OnDataReceived != null) { OnDataReceived(buffer, null); } buffer = null; } else { // Resize the array ready for the next chunk of data Array.Resize<Byte>(ref connection.Buffer, connection.Buffer.Length + SocketConnectionInfo.BufferSize); // Fire off the receive event again, with the bigger buffer if (this.Type == ServerType.UDP) { connection.Socket.BeginReceiveFrom(connection.Buffer, 0, connection.Buffer.Length, SocketFlags.None, ref ipeSender, new AsyncCallback(DataReceived), connection); } else if (this.Type == ServerType.TCP) { connection.Socket.BeginReceive(connection.Buffer, 0, connection.Buffer.Length, SocketFlags.None, new AsyncCallback(DataReceived), connection); } } } else if(connection.BytesRead > 0) { // We still have data Array.Resize<Byte>(ref connection.Buffer, connection.BytesRead); // call the event if (OnDataReceived != null) { OnDataReceived(connection.Buffer, null); } } } catch (Exception ex) { Console.WriteLine(ex.Message); } } internal bool IsSocketConnected(Socket socket) { return !(socket.Poll(1, SelectMode.SelectRead) && socket.Available == 0); } internal void DisconnectClient(SocketConnectionInfo connection) { if (OnClientDisconnecting != null) { OnClientDisconnecting(this, null); } connection.Socket.BeginDisconnect(true, new AsyncCallback(ClientDisconnected), connection); } internal void ClientDisconnected(IAsyncResult asyncResult) { SocketConnectionInfo sci = 
(SocketConnectionInfo)asyncResult; sci.Socket.EndDisconnect(asyncResult); if (OnClientDisconnected != null) { OnClientDisconnected(this, null); } } } public class SocketConnectionInfo { public const Int32 BufferSize = 1048576; public Socket Socket; public byte[] Buffer; public Int32 BytesRead { get; set; } } public enum ServerType { None = 0, TCP = 1, UDP = 2 } } Answer: At a quick glance, there are a couple minor things I noticed regarding how you handle your events: You are passing null event args. I would instead use EventArgs.Empty, as callers will typically assume the EventArgs object they get from the event handler will be non-null. You are using Interlocked.Increment on your connection counter, suggesting you are going to be using this in multi-threaded code. As such, you should note that if (OnClientConnected != null) { OnClientConnected(this, null); } is not thread-safe. Instead, you will want to do something more like the following: var evt = OnClientConnected; if (evt != null) { evt(this, EventArgs.Empty); } I would suggest converting all your internal members to private, unless there is a specific need for other classes to access them, which seems unlikely, given their content. Additionally, since bytesRead is always >= 0 and SocketConnectionInfo.BufferSize is > 0, if (bytesRead == 0 || (bytesRead > 0 && bytesRead < SocketConnectionInfo.BufferSize)) can be converted to if (bytesRead < SocketConnectionInfo.BufferSize)
{ "domain": "codereview.stackexchange", "id": 1051, "tags": "c#, asynchronous, networking" }
Maximum distance a particle can move
Question: This is problem 2.13 in Marion and Thornton's Classical Mechanics book. They ask to show that the maximum distance a particle can move under the influence of a retarding force equal to $mk(v^3+a^2v)$ is $\frac{\pi}{2ka}$ and that the particle only comes to rest as $t \rightarrow \infty$. If you've seen their solutions manual, their solution is relatively long and involves some tricky substitutions. So, I was wondering whether this problem can be solved alternatively in the following way: Use the substitution $\frac{d^2x}{dt^2} = v\frac{dv}{dx}$ and integrate $\frac{dv}{v^2+a^2}=-k\,dx$. We'd then get $$x=\frac{1}{ka}\left[\operatorname{arctan}\left(\frac{v_i}{a}\right)-\operatorname{arctan}\left(\frac{v}{a}\right)\right]$$ We could then claim that $v \rightarrow 0$ as $t \rightarrow \infty$ (which makes sense considering the problem), or we could integrate $\frac{dv}{v(v^2+a^2)}=-k\,dt$ the same way they did and isolate $v$ in terms of $t$ just to confirm the claim. Thus, as $t\rightarrow \infty$, we have $x=\frac{1}{ka}\operatorname{arctan}\left(\frac{v_i}{a}\right)$, which is bounded by $\frac{\pi}{2ka}$ for any initial velocity. Would this be a solution? Answer: It's valid, except you'd want to point out that $x(t)$ is always increasing, which means its absolute maximum will be as $t\to\infty$. I.e. make sure you didn't miss a potential absolute maximum anywhere on $t\geq0$.
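A quick numerical check of this argument (not part of the book or the original post; the parameter choices $k = a = 1$ and $v_i = 2$ are arbitrary) is to RK4-integrate the equation of motion and compare the travelled distance with the closed form $x_\infty = \frac{1}{ka}\arctan(v_i/a)$:

```python
import math

def retard(v, k, a):
    """Retarding acceleration dv/dt = -k (v^3 + a^2 v)."""
    return -k * (v**3 + a**2 * v)

def simulate(v0, k=1.0, a=1.0, dt=1e-3, steps=50_000):
    """RK4-integrate dx/dt = v, dv/dt = -k (v^3 + a^2 v) from x = 0, v = v0."""
    x, v = 0.0, v0
    for _ in range(steps):
        k1x, k1v = v, retard(v, k, a)
        k2x, k2v = v + 0.5 * dt * k1v, retard(v + 0.5 * dt * k1v, k, a)
        k3x, k3v = v + 0.5 * dt * k2v, retard(v + 0.5 * dt * k2v, k, a)
        k4x, k4v = v + dt * k3v, retard(v + dt * k3v, k, a)
        x += dt / 6 * (k1x + 2 * k2x + 2 * k3x + k4x)
        v += dt / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
    return x, v

x_inf, v_inf = simulate(v0=2.0)         # integrates to t = 50, long after the motion has died out
closed_form = math.atan(2.0)            # (1/ka) arctan(v_i/a) with k = a = 1
assert abs(x_inf - closed_form) < 1e-6  # distance matches the arctan expression
assert abs(v_inf) < 1e-10               # v only vanishes asymptotically
assert closed_form < math.pi / 2        # and the distance stays below pi/(2ka)
```

The last assertion is the bound in question: since $\arctan$ is bounded by $\pi/2$, no initial velocity can carry the particle further than $\pi/(2ka)$.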
{ "domain": "physics.stackexchange", "id": 81479, "tags": "homework-and-exercises, forces, classical-mechanics" }
Why does ice freeze the way it does?
Question: I was just recently looking into some of the details of the different crystal structures that ice can freeze in. I was aware that many of the unusual properties of ice $I_h$ (the structure of “everyday ice”) are a result of the proton disorder necessary for the hydrogen-bonding arrangement it adopts. What I also found is that there are other arrangements of ice which are proton ordered and have nearly identical density to ice $I_h$. Now, I know density has no predictive value on what crystal structure will be taken (rather, crystal structure can be used to predict density), but it seems unusual to me that there is such a massive divide between ice $I_h$ and the rest of the ice crystal structures. Is the entropic gain of proton disorder an explanation for why ice freezes the way it does? Also, is there a large energy minimization associated with the hexagonal lattice that ice $I_h$ takes on? Doesn't it seem like a proton-ordered arrangement would be preferable over a proton-disordered arrangement? Or is it all much more complicated than that? It just seems like there should be a crystal structure of ice which is comparable to ice $I_h$ in energy, stability, etc. and yet there isn't. I suppose a true answer to this question, because I admit it's rather broad, would deal with why the hexagonal lattice is so much more preferable for water than some other lattice arrangement. By dealing strictly with water, rather than crystals in general, I believe an answer to this question is more within reach. Answer: Some of what you are asking about are still open questions… and the few data published on the topic are often controversial. But let's try to get some of the facts that appear to be accepted rather broadly: There is a proton-ordered ice structure, called ice XI, that is structurally equivalent to our everyday hexagonal proton-disordered ice (aka ice Ih).
In the commonly accepted version of the ice phase diagram, it can be seen that it is the most stable phase of ice at ambient pressure and low temperature. (Although obtaining this phase in practice can be difficult, and is done by doping the ice crystals with dilute KOH.) Thus, proton ordering is energetically favored over proton disorder, while proton disorder is entropically favored. Following simple combinatorial arguments [G. P. Johari, J. Chem. Phys., 109, 9543 (1998)], one can estimate the entropy difference at $R \log(3/2) \approx 3.37$ J/mol/K. From the transition temperature of $T_0 = 72$ K, one can then work out the enthalpy difference at $\Delta H = -T_0 \Delta S \approx -0.243$ kJ/mol. We can note that both terms are very small. The hexagonal arrangement is not the only one accessible: there is a metastable cubic ice ($I_c$) phase accessible at ambient pressure. The enthalpy differences between cubic and hexagonal ices are even smaller than with ice XI, of the order of 40 J/mol. There are, finally, numerous other phases of ice (some experimental, some hypothetical) at positive or negative pressures. Since their boundaries are almost horizontal on the phase diagram, it implies that these transitions are dominantly density-driven. There are also amorphous ices! To your final question: why does ice prefer the hexagonal net? Well, in order to form low-energy crystalline structures, water must adopt a tetrahedral / four-fold coordination, with each molecule surrounded by four nearest neighbors: two on the donating end of hydrogen bonds, two on the receiving end. So the question is: how many periodic structures (crystals) are mathematically possible with this arrangement? They are called four-connected nets, and there are an infinite number of those. However, it happens that among those with relatively low density and “simplicity” (i.e. relatively few inequivalent vertices), most are of high symmetry, including hexagonal.
There is actually an interesting parallel between ice structures and another category of solids having four-connected nets: silicas.
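The two figures quoted above are easy to reproduce; the sketch below just redoes the arithmetic, using the combinatorial entropy estimate $R\ln(3/2)$ and the fact that $\Delta G = 0$ at the transition temperature, so $|\Delta H| = T_0\,\Delta S$:

```python
import math

R = 8.314    # gas constant, J/(mol K)
T0 = 72.0    # ice XI -> ice Ih transition temperature, K

dS = R * math.log(3 / 2)   # residual entropy of proton disorder, J/(mol K)
dH = T0 * dS               # at the transition dG = 0, so |dH| = T0 * dS, J/mol

assert abs(dS - 3.37) < 0.01            # matches the 3.37 J/mol/K quoted above
assert abs(dH / 1000 - 0.243) < 0.001   # matches the 0.243 kJ/mol quoted above
```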
{ "domain": "chemistry.stackexchange", "id": 6158, "tags": "water, crystal-structure" }
Confusion about rotations of quantum states: $SO(3)$ versus $SU(2)$
Question: I'm trying to understand the relationship between rotations in "real space" and in quantum state space. Let me explain with this example: Suppose I have a spin-1/2 particle, lets say an electron, with spin measured in the $z+$ direction. If I rotate this electron by an angle of $\pi$ to get the spin in the $z-$ direction, the quantum state "rotates" half the angle $(\pi/2)$ because of the orthogonality of the states $|z+\rangle$ and $|z-\rangle$. I think this isn't very rigorous, but is this way of seeing it correct? I searched for how to derive this result, and started to learn about representations. I read about $SO(3)$ and $SU(2)$ and their relationship, but it's still unclear to me. I found this action of $SU(2)$ on spinors: $$ \Omega(\theta, \hat n) = e^{-i \frac{\theta}{2}(\hat n\cdot \vec\sigma)}, $$ where $\hat n$ is a unitary $3D$ vector, $\vec\sigma = (\sigma_1,\sigma_2,\sigma_3)$ is the Pauli vector and $\sigma_i$ are the Pauli matrices. I see the factor of $\frac{1}{2}$ on the rotation angle $\theta$, but where does it come from? I saw $[\sigma_i,\sigma_j] = 2i\epsilon_{ijk}\sigma_k $, and making $ X_j = -\frac{i}{2}\sigma_j $ the commutator becomes $[X_i,X_j] = \epsilon_{ijk}X_k$, which is the commutator of the $\mathfrak{so}(3)$ Lie algebra, isn't it? So when I compute the exponential $$e^{\theta(\hat n\cdot X)} = e^{-i \frac{\theta}{2}(\hat n\cdot \vec\sigma)}.$$ I get my result, and it seems like it's a rotation, but I read that it isn't an $SO(3)$ representation. So where do rotations appear? However, my central question is: How can I demonstrate that a rotation on "our world" generates a rotation of quantum states, and how do I use that to show the formula for rotations on quantum states? And how I do it for higher spin values? I'm really new on this topic, and it was hard to formulate this question, so feel free to ask me for a better explanation or to clear any misconception. Answer: This is a very good question. 
There are several ways to answer your question. I remember understanding all the group theory and yet feeling I didn't understand the physics when I first encountered the problem, so I will try to explain things physically. First one should ask: how would you know something has angular momentum, and how would you measure it? I am not talking about spin only but also everyday objects. You would couple it to something else that can pick up the angular momentum, like a bat hitting a baseball. In the case of electrons it comes from the spin-orbit coupling or emission of photons that carry spin. However, we have a much more accessible example because electrons carry charge. The charge helps us, as the angular momentum then manifests itself as a magnetic moment which couples to a magnetic field, and we can use the Stern-Gerlach setup to influence the trajectories of particles with different angular momentum (as long as they are charged under the usual U(1) EM charge). So basically the Hamiltonian has a coupling $$ \Delta H = g ~ \hat n. \vec J $$ where in the Stern-Gerlach case the $\hat n$ corresponds to the magnetic field direction. Invariance under infinitesimal rotations means the $J$ have to satisfy the algebra (you should try to prove this or ask a separate question) $$ [J_i,J_j]=\epsilon_{ijk} J_k $$ where the summation convention on repeated indices is used. The smallest dimension operators that satisfy it are $2 \times 2$ (a result otherwise known as the smallest spin being $\frac{1}{2}$) and these operators are spanned by the basis $\{\frac{1}{2} \sigma_x,\dots \}$. At this point note a mathematical fact that I have never tried to prove: for half-integral angular momentum operators, exponentiation does not cover the rotation group $SO(3)$ but instead covers $SU(2)$. These are locally isomorphic and that is all that is required from the coupling in the Hamiltonian.
In other words, representations of $SU(2)$ can couple to classical instruments that measure angular momentum (this doesn't mean they have to; for instance, the SU(2) subgroup of color SU(3) does not). So, we conclude that for spin $\frac{1}{2}$ the most general angular momentum operator is $$ \mathcal O = \frac{1}{2} \hat n . \vec \sigma $$ where $\hat n$ is a unit vector in some direction. Now we can do a lot of mathematics and group/representation theory to say this transforms in the adjoint representation, or we can see the physics. Suppose we perform a rotation of angle $\theta$ around the axis given by unit vector $\hat m$. What do we expect? We'd expect to get a new operator that corresponds to rotating $\hat n$ by $\theta$ around $\hat m$. Such a vector is $$ \tilde {\hat n}= (\hat n . \hat m) \hat m + ( \hat n - (\hat n . \hat m) \hat m) \cos \theta + (\hat m \times \hat n) \sin \theta $$ and the rotated operator is $$ \tilde {\mathcal O} = \frac{1}{2} \tilde {\hat n}. \vec \sigma $$ Now the claim is that the operator $\mathcal O$ transforms in the adjoint rep and that means we should get $$ \tilde {\mathcal O}= \frac{1}{2} e^{-i\frac{\theta}{2} \hat m.\vec \sigma} ~\hat n. \vec \sigma ~e^{i\frac{\theta}{2} \hat m.\vec \sigma} $$ It indeed does, and for completeness I will derive it here (the overall factor of $\frac{1}{2}$ is dropped on both sides). We will need $$ (\vec a. \vec \sigma)(\vec b. \vec \sigma) = (\vec a.\vec b) +i (\vec a \times \vec b).\vec \sigma $$ We see that $$ \begin{align} & e^{-i\frac{\theta}{2} \hat m.\vec \sigma} ~\hat n. \vec \sigma ~e^{i\frac{\theta}{2} \hat m.\vec \sigma} \\ &=[ \cos(\frac{\theta}{2}) - i (\hat m.\vec \sigma) \sin(\frac{\theta}{2})] [ \hat n.\vec \sigma] [ \cos(\frac{\theta}{2}) + i (\hat m.\vec \sigma) \sin(\frac{\theta}{2})] \\ &=[ \cos(\frac{\theta}{2}) - i (\hat m.\vec \sigma) \sin(\frac{\theta}{2})] [ (\hat n.\vec \sigma) \cos(\frac{\theta}{2}) +i ( \hat n.\hat m + i (\hat n \times \hat m).\vec \sigma) \sin(\frac{\theta}{2})] \\ &=(\hat n.
\vec \sigma) \cos^2 ( \frac{\theta}{2}) + 2 ( \hat m \times \hat n).\vec \sigma \sin(\frac{\theta}{2}) \cos (\frac{\theta}{2}) +(\hat n.\hat m) \hat m .\vec \sigma \sin^2 (\frac{\theta}{2}) \\ &~~~-( \hat n - (\hat n.\hat m) \hat m).\vec \sigma \sin^2(\frac{\theta}{2}) \\ &=\tilde {\hat n} .\vec \sigma \end{align} $$ So we see that indeed the operator transforms in the adjoint representation. Now the expectation value for a state $|\psi \rangle$ is $$ \langle \psi | \mathcal O | \psi \rangle $$ and therefore invariance under rotations means that $$ \begin{align} | \tilde \psi \rangle &= e^{-i\frac{\theta}{2} \hat m.\vec \sigma} |\psi \rangle \\ &= [\cos(\frac{\theta}{2}) - i (\hat m.\vec \sigma) \sin (\frac{\theta}{2})]|\psi \rangle \end{align} $$ which shows that under a rotation by $\pi$ we get $| \tilde \psi \rangle = -i (\hat m. \vec \sigma) | \psi \rangle$ as the OP asked. The generalization to higher spins is not straightforward, as we do not have properties like $J_x^2=1$ and one must use the Baker–Campbell–Hausdorff formula. Nevertheless, once the above idea is clear one can use group theory results to see that for spin-1, for instance, we will have 3 $3 \times 3$ matrices $$ \begin{align} J_x &= \frac{1}{\sqrt{2}} \begin{pmatrix} 0 &1 &0\\ 1 &0 &1\\ 0 &1 &0 \end{pmatrix} \\ J_y &= \frac{1}{\sqrt{2}} \begin{pmatrix} 0 &-i &0\\ i &0 &-i\\ 0 &i &0 \end{pmatrix} \\ J_z &= \begin{pmatrix} 1 &0 &0\\ 0 &0 &0\\ 0 &0 &-1 \end{pmatrix} \end{align} $$ and again the most general operator will be $$ \mathcal O_3 = \hat n.\vec J $$ Now since the operator is made of the generators of SO(3) we know that it will transform under the adjoint rep and, while it will take considerably more work than above to prove it is so, the meaning of such a transformation is that $$ \tilde {\mathcal O_3} = \tilde{\hat n}.\vec J $$ and then the same argument as above will give us that the state transforms in the fundamental representation as $$ | \tilde {\psi_3} \rangle = e^{-i \theta ~ \hat n.
\vec J} |\psi_3 \rangle $$ Alternatively, one can build the spin-1 particle as the triplet state of two spin-$\frac{1}{2}$ particles and work from there.
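The adjoint-action identity derived above is easy to verify numerically. The sketch below (NumPy, with randomly chosen unit vectors) uses the exact expansion $e^{-i\frac{\theta}{2}\hat m\cdot\vec\sigma} = \cos(\frac{\theta}{2})\,I - i\sin(\frac{\theta}{2})\,\hat m\cdot\vec\sigma$, valid because $(\hat m\cdot\vec\sigma)^2 = I$:

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def dot_sigma(v):
    """v . sigma for a 3-vector v."""
    return v[0] * sx + v[1] * sy + v[2] * sz

def su2(theta, m):
    """exp(-i theta/2 m.sigma), exact because (m.sigma)^2 = identity."""
    return np.cos(theta / 2) * np.eye(2) - 1j * np.sin(theta / 2) * dot_sigma(m)

rng = np.random.default_rng(0)
n = rng.normal(size=3); n /= np.linalg.norm(n)   # axis of the operator n.sigma
m = rng.normal(size=3); m /= np.linalg.norm(m)   # rotation axis
theta = 1.234

U = su2(theta, m)
rotated_op = U @ dot_sigma(n) @ U.conj().T       # adjoint action on n.sigma

# Rodrigues rotation of n about m by theta: the n-tilde formula from the answer
n_rot = (n @ m) * m + (n - (n @ m) * m) * np.cos(theta) + np.cross(m, n) * np.sin(theta)
assert np.allclose(rotated_op, dot_sigma(n_rot))

# a 2*pi rotation acts trivially on the operator but flips the sign of the state
assert np.allclose(su2(2 * np.pi, m), -np.eye(2))
```

The final assertion is the double cover at work: a full $2\pi$ rotation returns every observable to itself yet multiplies the spinor by $-1$, which is exactly why exponentiating the spin-$\frac{1}{2}$ generators yields $SU(2)$ rather than $SO(3)$.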
{ "domain": "physics.stackexchange", "id": 47681, "tags": "quantum-mechanics, group-theory, representation-theory, group-representations, lie-algebra" }
How to decide a train-test split?
Question: In almost every ML model, a train-test (or train-val-test) split is critical to assess the model's performance. However, I have always wondered what the rationale is to decide a particular train-test split. I've seen that some people like an 80-20 split, others opt for 90-10, but why? Is it simply a matter of preference? Also, why not 70-30 or 60-40? What is the best way to decide? Answer: I don't think there is any rationale behind choosing 80/20 over 75/25 or others. But those are the numbers for rather small datasets. If your dataset is large enough (like hundreds of thousands of samples), you can even work with a 98/1/1 percent split for train/val/test, as discussed by Andrew Ng in this video. Neural networks thrive with big data and it is always a good idea to make the most of it.
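For reference, a minimal, dependency-free sketch of how such a shuffled split is usually done in practice (the 80/10/10 fractions and the fixed seed here are arbitrary illustrative choices, not a recommendation):

```python
import random

def train_val_test_split(data, fractions=(0.8, 0.1, 0.1), seed=0):
    """Shuffle a dataset and split it into train/val/test by the given fractions."""
    assert abs(sum(fractions) - 1.0) < 1e-9
    items = list(data)
    random.Random(seed).shuffle(items)   # fixed seed -> reproducible split
    n = len(items)
    n_train = int(fractions[0] * n)
    n_val = int(fractions[1] * n)
    return items[:n_train], items[n_train:n_train + n_val], items[n_train + n_val:]

train, val, test = train_val_test_split(range(1000))
assert len(train) == 800 and len(val) == 100 and len(test) == 100
assert sorted(train + val + test) == list(range(1000))   # no sample lost or duplicated
```

Changing `fractions` to `(0.98, 0.01, 0.01)` gives the large-dataset split mentioned in the answer; the mechanics are identical, only the proportions change.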
{ "domain": "ai.stackexchange", "id": 3054, "tags": "machine-learning, training, cross-validation, testing" }
4 dimensional interpretation
Question: Has it ever been hypothesized that, in a 4-dimensional space with time as the 4th dimension, a body could travel through the dimensions at the combined speed of $c$? If a body is at rest in the classical 3 dimensions, it would travel through time at $c$, but if traveling at $c$ in space, it would be resting in the "time" dimension... Answer: As it happens, you are absolutely correct. The velocities we encounter in everyday life are 3D velocities that are vectors defined as: $$ \vec{v} = \left(\frac{dx}{dt}, \frac{dy}{dt}, \frac{dz}{dt}\right) $$ In special relativity we use a 4D velocity called the four-velocity, and this is a four-vector defined as: $$ \vec{v} = \left(c\frac{dt}{d\tau}, \frac{dx}{d\tau}, \frac{dy}{d\tau}, \frac{dz}{d\tau}\right) $$ where the quantity $\tau$ is called the proper time. The proper time is the time shown on a clock carried by the moving object. But there's something funny about this four-velocity. Suppose we choose coordinates $(t, x, y, z)$ in which I am not moving. Then $dx/d\tau = dy/d\tau = dz/d\tau = 0$. But I am moving in time, at one second per second, so $dt/d\tau = 1$. In that case my four-velocity is: $$ \vec{v} = (c, 0, 0, 0) $$ And the magnitude of my four-velocity is $c$. In other words I am moving at the speed of light even when I am stationary. In fact you can easily prove that the magnitude of the four-velocity is always $c$. I won't do that here because I suspect the maths is a bit more in depth than you want (shout if you do want the proof and I'll edit it in). But basically when you're moving the $dx/d\tau$ etc. are not zero, but time dilation changes $dt/d\tau$ to compensate, so the magnitude always remains $c$.
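The "magnitude is always $c$" claim can be checked numerically (this is just an illustration, not the proof the answer alludes to): for motion along $x$ at speed $v$ the four-velocity is $(\gamma c, \gamma v, 0, 0)$ with $\gamma = 1/\sqrt{1 - v^2/c^2}$, and its Minkowski magnitude with the $(+,-,-,-)$ metric is $c$ for every $v$:

```python
import math

c = 299_792_458.0  # speed of light, m/s

def four_velocity(v):
    """Four-velocity of a particle moving along x at coordinate speed v."""
    gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)   # time dilation factor dt/dtau
    return (gamma * c, gamma * v, 0.0, 0.0)

def minkowski_norm(u):
    """Magnitude using the (+,-,-,-) metric."""
    t, x, y, z = u
    return math.sqrt(t**2 - x**2 - y**2 - z**2)

for v in (0.0, 0.5 * c, 0.99 * c):
    assert math.isclose(minkowski_norm(four_velocity(v)), c, rel_tol=1e-9)
```

At $v = 0$ this reduces to the $(c, 0, 0, 0)$ case from the answer; as $v$ grows, the growing spatial components are exactly compensated by the growing $\gamma c$ time component.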
{ "domain": "physics.stackexchange", "id": 22027, "tags": "special-relativity, spacetime-dimensions" }
How does MSF estimate heading?
Question: Hi All, I am relatively new to MSF and am trying to set it up for estimating the pose (absolute) of a ground vehicle. There are two update pose sensors (in addition to IMU for the prediction step). One pose sensor provides absolute pose measurements in the world frame and the second one provides a relative pose (odometric data). The setup seems to work okay but the heading seems to be off. More precisely, the movement of the robot seems to be interpreted along the global frame and not the local frame. To clarify, an x velocity translates the robot along the global x axis and not the local x axis. I know this is not a lot to debug the problem, but is there something that I am doing wrong? If some information/links could be provided as to the part where heading is estimated and how it is being estimated, it would be really helpful. Thanks. -Abhishek Originally posted by abhi on ROS Answers with karma: 1 on 2016-07-11 Post score: 0 Original comments Comment by M@t on 2016-07-11: It would help to have a little more information about your system - ROS distro, launch files, bag files etc. But first, check that your local and global frames have the same origin and orientation - if they don't the odometry can be very different in each one. Comment by abhi on 2016-07-11: Distro: Indigo, launch file being used: position_pose_sensor.launch (supplied with MSF). The data is composed of: pose data from wheel encoders on a simulated robot and absolute pose data from vicon. The part about the global and local frames having the same origin and orientation is satisfied. Answer: This isn't a proper answer, sorry, but it's too long an explanation for comments and it may give you some leads to look into. I would make sure ethzasl sensor fusion works on Indigo because from the wiki page it looks like Groovy is the latest distribution that's supported, so there may be compatibility issues (hopefully someone more familiar with ROS than me can tell you).
I don't have any personal experience with ethzasl (I'm using the robot_localization package). But what I can say is that I see similar behavior quite a bit, and it's usually because sensor data is being converted to odometry in a reference frame, and somewhere along the way an angular offset is added to the data. For example if you line up your robot with the 'X' axis of your global frame and drive it in a straight line but your odometry in that frame shows the robot travelling at an angle to the 'X' axis. E.g.: My global, local and GPS frames line up with the 'X' and 'Y' axes, but my global (red) and GPS (green) data is reported at an angle. Is this similar to what you're seeing? I suggest you try something similar, drive your robot in a straight line outside and plot the odometry in each frame in Excel. If this looks familiar it's usually because your IMU heading has been distorted (nearby metal objects will do that) or because a yaw-related parameter has been set incorrectly. Is one of your sensors a GPS? If so then it will almost certainly be doing some sort of geodetic conversion to produce an absolute pose - and for such conversions it's critical that parameters such as your magnetic declination and yaw offset are correct (as per REP-103 and REP-105, most IMUs read 0 when pointing North, whereas ROS expects them to read 0 pointing East). If you're using a provided launch file your robot may not report sensor data in the way the launch file expects it to. I'm sorry I can't give you more specific advice, but hopefully this will give you some things to check. And in general, the more information you can provide about the problem the easier it is for others to debug - if you could post some bag files or graphs of your odometry similar to the one above that would help. Originally posted by M@t with karma: 2327 on 2016-07-11 This answer was ACCEPTED on the original site Post score: 1
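To illustrate the heading-convention point above: a sketch of converting a compass-style magnetic heading (0 at North, clockwise positive) to the ENU yaw convention ROS expects (0 at East, counter-clockwise positive). The function name and the declination handling are my own illustration, not part of MSF or robot_localization:

```python
import math

def compass_to_enu_yaw(heading_deg, declination_deg=0.0):
    """Convert a magnetic compass heading (0 deg = North, clockwise)
    to an ENU yaw in radians (0 = East, counter-clockwise)."""
    true_heading = heading_deg + declination_deg   # correct for magnetic declination
    yaw = math.radians(90.0 - true_heading)        # swap reference axis and direction
    # normalize to (-pi, pi]
    while yaw <= -math.pi:
        yaw += 2 * math.pi
    while yaw > math.pi:
        yaw -= 2 * math.pi
    return yaw

assert math.isclose(compass_to_enu_yaw(0.0), math.pi / 2)            # North -> +90 deg
assert math.isclose(compass_to_enu_yaw(90.0), 0.0, abs_tol=1e-12)    # East  -> 0
assert math.isclose(compass_to_enu_yaw(180.0), -math.pi / 2)         # South -> -90 deg
```

A missing 90-degree offset or an uncorrected declination in this conversion produces exactly the symptom described: odometry drawn at a constant angle to the axis the robot actually drove along.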
{ "domain": "robotics.stackexchange", "id": 25209, "tags": "ros" }
Designing a (thought) experiment to determine exact position
Question: I know that Heisenberg's Uncertainty Principle prevents us from even theoretically predicting the exact position and momentum of a particle. There is always an uncertainty in one or the other, given by $$\Delta x\cdot \Delta p\geqslant \frac h{4\pi} $$ This apparently leads to our inability to see the wave nature of the electron when we have an observer observing the slit through which it passes (delayed choice quantum eraser, Kim et al. (1999)). So, we have enough evidence (the electron not falling into the atom, for instance) that one cannot determine position and momentum accurately and simultaneously. But still, Heisenberg's principle does not prevent us from measuring the exact position at all; but for that we must sacrifice any knowledge about the momentum of an electron. Is it possible, ideally (assuming we have powerful techniques and instruments, and all errors are accounted for or absent), to devise an experiment which in no way gives us an idea about momentum, with which we can measure the exact position of the particle? Answer: I interpret your question to be: Does the position operator on, say, $L^2({\mathbb R})$, have any eigenvectors? The answer is no, because if $f$ is a square integrable function (or for that matter any function) with $xf=\lambda f$ almost everywhere, then $f=0$ almost everywhere. You can try to get around this by replacing the state space with a rigged Hilbert space, but at that point you've left the formalism of quantum mechanics.
{ "domain": "physics.stackexchange", "id": 66940, "tags": "quantum-mechanics, momentum, heisenberg-uncertainty-principle" }
How should I ship plasmids?
Question: I shipped 10 µL of my vector miniprep to a collaborator in a 1.5 mL eppendorf parafilmed shut and stuffed into a 50 mL conical with some paper-towel padding. However, something happened on the way and there was nothing (no liquid) in the tube when it arrived. They didn't make any comments about the microcentrifuge tube popping open or broken parafilm, so nothing crazy happened but something did. What's the most reliable way to ship plasmids? Answer: Summary: (1) the 10 uL of plasmid miniprep may have been splattered in the cap of the tube (AnnaF); (2) the eppendorf tube may have depressurized during air shipment and allowed the 10 uL to escape and evaporate. Solution: try air-drying or blotting (Jonas) your minipreps prior to air shipment. Details: As AnnaF wrote, the 10 uL of your plasmid could have been hidden in the cap or dispersed around the tube, making it appear empty. You should check with your collaborators to be sure they centrifuged it. According to a fedex document on shipping perishables (pdf) and a paper measuring the temp and pressure of air shipments (pdf), fedex and ups air shipments may experience low pressure environments around 0.56 - 0.74 atmospheres (atm). At these relatively low pressures, perhaps an eppendorf tube sealed at 1 atm might breach. The papers also note that ground shipments that pass over the Rockies (i.e. in Colorado) may experience ~ 0.64 atm. So perhaps your 1.5 ml eppendorf tube depressurized during the shipment? It would be interesting to do some tests on the pressure-worthiness of eppendorf tubes. Regarding the original question, in 2007 I prepared and shipped (fedex) a library of thousands of minipreps to hundreds of users. 1 uL of miniprep was dispensed into wells of 384-well plates and air-dried, then sealed with aluminum, then mailed. Users rehydrate a well with 10 uL of water. Generally it works.
{ "domain": "biology.stackexchange", "id": 210, "tags": "plasmids" }
What is the Hamiltonian in the "energy basis" for a simple harmonic oscillator?
Question: My textbook says that for a simple harmonic oscillator the Hamiltonian can be expressed in the "energy basis" in this way: $$\hat H=\hbar\omega\bigg(\hat a^{\dagger}\hat a + {1\over 2}\bigg).$$ I know that $\hat a^{\dagger}$ and $\hat a$ are the raising and lowering operators, and that they can be written in terms of $\hat p_x$ and $\hat x$, but how is this the "energy" basis? What does that even mean? Answer: ...how is this the "energy" basis? What does that even mean? Any of our observable operators in their own eigenbasis are diagonal, where the diagonal entries are the eigenvalues.$^*$ We can see this is true. Let $|\psi_i\rangle$ be the eigenvector such that $H|\psi_i\rangle=E_i|\psi_i\rangle$. Then the Hamiltonian in its own eigenbasis is: $$[H]_{m,n}=\langle\psi_m|H|\psi_n\rangle=\langle\psi_m|E_n|\psi_n\rangle=E_n\langle\psi_m|\psi_n\rangle$$ Since the eigenvectors are orthonormal: $$[H]_{m,n}=\delta_{m,n}E_n$$ Which means that the Hamiltonian is diagonal in its own eigenbasis. Notice how this doesn't depend on what $H$ actually is. If you want to work with your specific example (I'll leave the work to you): $$\langle\psi_m|\hbar\omega\left(a^\dagger a+\frac 12\right)|\psi_n\rangle=\delta_{m,n}\hbar\omega\left(n+\frac12\right)=\delta_{m,n}E_n$$ Therefore, the expression you give must be the Hamiltonian in its own eigenbasis. $^*$In treating our operators like matrices, in general an operator in some basis tells us the following information. Each column of the operator tells us how the corresponding basis vector transforms upon multiplication by that operator. Therefore, it makes sense that an operator in its own eigenbasis is diagonal, because the eigenvectors are the basis vectors, and the resulting transformation of each basis vector corresponds to just multiplying them by the corresponding eigenvalue.
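One way to make this concrete is to build $\hat a$ as a matrix in a truncated number basis and check that the resulting $\hat H$ is diagonal with entries $\hbar\omega(n+\frac{1}{2})$. The sketch below uses natural units $\hbar = \omega = 1$ and an arbitrary truncation at 8 levels:

```python
import numpy as np

hbar = omega = 1.0   # natural units
N = 8                # truncate the Fock space at 8 levels

# lowering operator in the number basis: a|n> = sqrt(n)|n-1>
a = np.diag(np.sqrt(np.arange(1, N)), k=1)
H = hbar * omega * (a.conj().T @ a + 0.5 * np.eye(N))

# H is diagonal in the energy (number) basis ...
assert np.allclose(H, np.diag(np.diag(H)))
# ... with eigenvalues hbar*omega*(n + 1/2)
assert np.allclose(np.diag(H), hbar * omega * (np.arange(N) + 0.5))
```

This is exactly the content of the answer: written in its own eigenbasis, $\hat H$ is nothing but $\delta_{m,n}E_n$, and the number operator $\hat a^\dagger\hat a$ supplies the $n$.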
{ "domain": "physics.stackexchange", "id": 53234, "tags": "quantum-mechanics, energy, hilbert-space, harmonic-oscillator, hamiltonian" }
Shouldn't black holes exert the same gravitational force as an object of similar mass but lower density?
Question: If the earth shrank to the size of a peanut, it would turn into a black hole, which would have a higher density but the same mass. Since the center of mass of both bodies would be the same, the distance between a far-away object and the centers of mass would be the same. Since both the variables (mass, distance) would be the same, wouldn't the gravitational force exerted by both the earth and the black hole on a far-away object be the same? If this is true, wouldn't light be unable to escape the earth as well, since light can't escape black holes? Answer: The parameter you're not considering is the distance. The Earth is an object with the mass of the Earth $m_E$ and the radius of the Earth $r_E$ (duh). If you take a black hole with mass $m_E$, then its radius will be the radius of a peanut, $r_p$. When shooting a light ray on Earth, the light easily escapes its gravitational pull because it is shot at a distance $r_E$ from the center of mass of the object. When shooting a light ray near our black hole, it can't escape its gravitational pull and falls into it, because it is shot from a much shorter distance, $r_p$. If you shoot a light ray at a distance of $r_E$ from the black hole, it will behave the same way as it does on the Earth. This is the punchline: light can escape black holes from a great enough distance. Then does this mean that if we go near the center of the Earth and shoot a light ray when we're at a distance of about $r_p$ the light will be pulled into the center of the Earth? Of course not, because the mass "inside" a radius $r_p$ of the Earth's center is much, much smaller than $m_E$.
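Putting rough numbers on this (standard approximate constants, not from the answer above): the Schwarzschild radius $r_s = 2GM/c^2$ tells you how small the Earth would have to be squeezed, and the Newtonian escape velocity $\sqrt{2GM/r}$ shows that what matters is the distance you start from, not how the mass inside is compressed:

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M = 5.972e24         # mass of the Earth, kg
r_earth = 6.371e6    # radius of the Earth, m

# Schwarzschild radius: the size the Earth must shrink to in order to become a black hole
r_s = 2 * G * M / c**2
assert 0.008 < r_s < 0.010           # about 9 mm -- indeed roughly peanut-sized

# escape velocity depends only on the mass and the starting distance
def v_escape(r):
    return math.sqrt(2 * G * M / r)

assert math.isclose(v_escape(r_earth), 11.2e3, rel_tol=0.01)  # ~11.2 km/s at Earth's surface
assert math.isclose(v_escape(r_s), c, rel_tol=0.01)           # light speed right at r_s
```

Both objects give the same 11.2 km/s at a distance $r_E$; only by getting all the way down to $r_s$ with the full mass still beneath you does escape require the speed of light.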
{ "domain": "physics.stackexchange", "id": 93573, "tags": "general-relativity, gravity, black-holes, curvature, event-horizon" }
How do the two half cells of a Daniell cell know when they are connected?
Question: I'm currently studying electrochemistry in school, and there is something I can't quite understand about the Daniell cell. When you put $\ce{Zn}$ solid into $\ce{ZnSO4}$ solution, does the electrode become negatively charged even before connecting it with the $\ce{Cu}$ half cell? I searched but I wasn't able to form a clear understanding. I read that the answer to the previous question is no: the two cells need to be connected for the redox reaction to happen. If that is the case, how do the half cells communicate? How do they know when they are connected even when there is distance between them? If the electrodes were charged before connecting them that would make perfect sense to me: the electrons repel each other, and when connecting the two half cells with a wire, the electrons move because of the negatively charged electrode, but from what I've read that is not the case? I'm really sorry if that is a stupid question but I really can't find an answer. Answer: No, the electrodes do not know about each other. When an electrode is inserted into an electrolyte, the electrochemical reaction is ongoing in both directions. If the reduction direction overruns oxidation, the potential of the electrode increases (or vice versa) until the rates of both reactions become equal, the net reaction rate is zero, and the electrode reaches its equilibrium potential. That may be quick or slow, depending on the electrode and the respective reaction kinetics. When both electrodes are galvanically connected, the electron exchange due to the potential difference unbalances the equilibrium potentials of both electrodes and the net electrochemical reactions reach a steady non-zero rate. Note that the electrode potential is conventionally referred to the Standard hydrogen electrode (SHE), which itself has a potential of $\pu{+4.44 \pm 0.02 V}$ (Wikipedia) wrt the potential of a free electron in vacuum. So saying the $\ce{Zn}$ electrode potential is negative rather means it is less positive.
You may want to try searching various terms on Wikipedia and following relevant internal/external links. A great source of compactly explained topics is HyperPhysics, though with limited scope. It is a kind of hyperlinked quick cards/cheat sheets, very handy if you want to understand the basics. This link is particularly for Electrochemistry.
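To make the equilibrium potentials concrete: each half cell settles at its own standard reduction potential (vs. SHE), and only the difference matters once the electrodes are connected. A small sketch using textbook values (an illustration, not from the original answer):

```python
# Standard reduction potentials vs. SHE at 25 C (textbook values, in volts)
E_cu = +0.34   # Cu2+ + 2e- -> Cu
E_zn = -0.76   # Zn2+ + 2e- -> Zn

# Each half cell reaches its own equilibrium potential independently;
# only when connected does the difference drive a net current.
E_cell = E_cu - E_zn
print(E_cell)  # 1.10 V for the standard Daniell cell
```

Note that both values are "less positive" or "more positive" relative to SHE, in line with the convention described above.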
{ "domain": "chemistry.stackexchange", "id": 13499, "tags": "electrochemistry, redox" }
How to install and launch Python node inside a virtualenv?
Question: I'm writing a Python ROS node, and all the documentation seems to assume I've installed all my package dependencies into the global site-packages. Best practice in the Python world is to use a virtualenv. However, I can't find anything in the tutorials that mentions this. When launching my node, is it enough to source my virtualenv, or is there some special configuration I need to do first? Originally posted by Cerin on ROS Answers with karma: 940 on 2016-03-07 Post score: 6 Answer: If the python code was written by you and is part of one of your own ROS packages, then sourcing the ROS workspace's devel/setup.bash will make all the python libraries available for import just like virtualenv does. These links should give enough info to set up a ros package to do so. http://docs.ros.org/jade/api/catkin/html/howto/format2/installing_python.html https://github.com/ros-infrastructure/rep/blob/master/rep-0008.rst AFAIK there isn't any special configuration needed to use a virtualenv when developing Python nodes. I sourced devel/setup.bash in my ros workspace first, then sourced bin/activate in a virtualenv and found I was able to import modules from both the ros packages and the libraries installed in just the virtualenv. Originally posted by sloretz with karma: 3061 on 2016-03-09 This answer was ACCEPTED on the original site Post score: 8 Original comments Comment by Cerin on 2016-03-09: Thanks. I've verified this also.
{ "domain": "robotics.stackexchange", "id": 24030, "tags": "python" }
Basic Caesar Cipher in Swift 3.1
Question: I have been learning Swift for the past day and thought I'd try a basic problem where I can do String manipulation. I am only on pg.100 of the Swift Programming Language on iBooks. Compared to python it seems to be quite difficult to actually modify individual characters as Swift is strongly typed/type safe. A Caesar cipher is used to encrypt a String by shifting each letter by x, where x is an integer, replacing each letter with the shifted letter, so encrypt("AB", 2) becomes "CD". If anyone could point out where I can reduce the number of lines, or an easier way of doing this, or idioms I have missed, that'd be just what I am looking for. import Cocoa func encrypt(message: String, shift: Int) -> String { let scalars = Array(message.unicodeScalars) let unicodePoints = scalars.map({x in Character(UnicodeScalar(Int(x.value) + shift)!)}) return String(unicodePoints) } let message = "Attack at dawn" print(encrypt(message: message, shift: 2)) Answer: Using the unicodeScalars view of the Swift String is a good start, but the code can be simplified: There is no need to create an array first, you can call map directly on the unicodeScalars collection. The conversion to Character is not needed, the array of shifted unicode scalars can be converted directly back to a string (attribution to this answer on Stack Overflow). Also the variable name unicodePoints is misleading, because that is an array of Character in your code. Your method then becomes func encrypt(message: String, shift: Int) -> String { let unicodeScalars = message.unicodeScalars.map { UnicodeScalar(Int($0.value) + shift)! } return String(String.UnicodeScalarView(unicodeScalars)) } However, this is probably not yet what you want.
Example: let message = "Attack at dawn" print(encrypt(message: message, shift: 2)) // Cvvcem"cv"fcyp print(encrypt(message: message, shift: 10)) // K~~kmu*k~*nkx All characters are shifted by the given amount so that a space becomes a quotation mark, and some letters are transformed to non-letters. Also print(encrypt(message: message, shift: -100)) crashes at runtime, because the unicode scalar value is shifted to an invalid value. The traditional Caesar Cipher transforms only letters in the range A...Z, and rotates them, so that shifting "Z" by 2 becomes "B". Lowercase letters can be transformed to uppercase, but other characters are usually ignored (either removed or left unchanged). Here is a possible implementation with an inner function shiftLetter() which transforms a single letter. This inner function is used as argument in msg.unicodeScalars.map(shiftLetter) to transform the entire String: func encrypt(message: String, shift: Int) -> String { func shiftLetter(ucs: UnicodeScalar) -> UnicodeScalar { let firstLetter = Int(UnicodeScalar("A").value) let lastLetter = Int(UnicodeScalar("Z").value) let letterCount = lastLetter - firstLetter + 1 let value = Int(ucs.value) switch value { case firstLetter...lastLetter: // Offset relative to first letter: var offset = value - firstLetter // Apply shift amount (can be positive or negative): offset += shift // Transform back to the range firstLetter...lastLetter: offset = (offset % letterCount + letterCount) % letterCount // Return corresponding character: return UnicodeScalar(firstLetter + offset)! default: // Not in the range A...Z, leave unchanged: return ucs } } let msg = message.uppercased() return String(String.UnicodeScalarView(msg.unicodeScalars.map(shiftLetter))) } Examples: let message = "Attack at dawn" print(encrypt(message: message, shift: 2)) // CVVCEM CV FCYP print(encrypt(message: message, shift: 10)) // KDDKMU KD NKGX print(encrypt(message: message, shift: -100)) // EXXEGO EX HEAR
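The heart of the corrected Swift version is the double-modulo trick (offset % letterCount + letterCount) % letterCount, which keeps the offset in 0...25 even for negative shifts. The same idea in Python, as an illustrative sketch (function names are made up, not part of the original review):

```python
def shift_letter(ch, shift):
    """Rotate an uppercase letter A-Z by `shift`; leave other characters unchanged."""
    if not "A" <= ch <= "Z":
        return ch
    offset = ord(ch) - ord("A") + shift
    # Double-modulo keeps the result in 0..25 even for negative shifts,
    # mirroring the Swift code above.
    offset = (offset % 26 + 26) % 26
    return chr(ord("A") + offset)

def encrypt(message, shift):
    return "".join(shift_letter(ch, shift) for ch in message.upper())

print(encrypt("Attack at dawn", 2))     # CVVCEM CV FCYP
print(encrypt("Attack at dawn", -100))  # EXXEGO EX HEAR
```

The outputs match the Swift examples above, including the shift of -100, which is equivalent to +4 modulo 26.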
{ "domain": "codereview.stackexchange", "id": 26831, "tags": "beginner, swift, caesar-cipher, swift3" }
Hi everyone, i am new to ROS and i want to make a 3D map using a lidar and a kinect. Is it possible?
Question: https://www.youtube.com/watch?v=jgNFeDH3qFk Like the video above, they have used a Hokuyo lidar to 2D-map the environment using the hector_slam package and then generate an octomap using a Kinect sensor, I guess. I am not sure how I can do this. Let's suppose I have generated the 2D map and octomap separately; then how can I combine both like in the video? Thanks in advance! Originally posted by hashim on ROS Answers with karma: 53 on 2016-01-13 Post score: 0 Answer: Well this obviously is possible, because otherwise the video wouldn't be available ;) The hector_slam setup is following the conventions laid out in the SettingUpForYourRobot tutorial. The octomap is created by feeding Kinect data to an octomap generator node. This part is using tf data generated from SLAM+IMU and essentially performs mapping with known poses. To reproduce what is shown in your video, you'll want to get SLAM working and have a proper robot model (i.e. have the Kinect mounted correctly on your model and a complete tf tree). Generating the octomap then boils down to feeding the Kinect data to an octomap generator node. The underlying ROS tools (tf, octomap) then will do their magic and create a map. These slides provide a brief overview. Originally posted by Stefan Kohlbrecher with karma: 24361 on 2016-01-13 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by hashim on 2016-01-15: Do I need to integrate the IMU and encoder data to get the hector_elevation, octomap working? Comment by hashim on 2016-05-18: Hi @stefan, I have successfully built a 3D map using hector_slam and octomaps and hector_localization.
But there is just one issue, which is that after some time my robot loses its position and the map goes to infinity :P Comment by hashim on 2016-05-18: As in your slides you have mentioned that the localization and slam nodes must be loosely coupled and the localization node must subscribe to imu, magnetometer, gps and poseupdate (from the slam node) and must send back the orientation, turn rate and velocity. Comment by hashim on 2016-05-18: @stefan the problem is here, how can I feed orientation, velocity and turn rate to the scanmatcher? Comment by hashim on 2016-05-18: Added the photo above for reference
{ "domain": "robotics.stackexchange", "id": 23426, "tags": "kinect, 3dmapping, rplidar, octomap, rtabmap" }
Transparent Interior of Mesh in Gazebo
Question: I am trying to use a tunnel-like mesh in Gazebo as a model of its own. I have the pointcloud and I followed this tutorial to transform it to a mesh https://gazebosim.org/api/gazebo/3.3/pointcloud.html My problem is that in Gazebo, the inside of the mesh is completely transparent, so the inside of the tunnel cannot be detected by a lidar. Any idea how to fix it? Answer: Sounds like you're seeing backface culling, where faces that don't point toward the viewer aren't rendered. Here's what it looks like when face normals aren't rendered correctly (from this page): There are typically a few ways around this: Invert the normals in CloudCompare, but this will mean the outside of the mesh is transparent and the inside is visible. Duplicate the mesh and flip the normals to get a "two-faced" mesh. With Blender, open the mesh, select the mesh, hit "tab" to go to edit mode, hit "a" to select all, "shift + D" to duplicate the mesh, "esc" to cancel moving it, then go to Mesh -> Normals -> Flip and flip the normals. It might be possible to disable backface culling in Gazebo but the link in that post is dead, and I don't get any results when I search for backface culling. That's it, typically - either the normals exist (options 1 and 2) or you tell the engine to ignore the normals (option 3). The lidar itself will generally render only what the engine allows to be rendered, and if the normals are getting culled then there's nothing there for the lidar to detect. I have not used Gazebo, though, so things might be different there.
{ "domain": "robotics.stackexchange", "id": 2617, "tags": "gazebo" }
Are "interference fit" equations appropriate for calculating barrel stress?
Question: I am curious about calculating stress at various points along a barrel as a bullet is being fired through it. The interference fit (press/friction fit) equations immediately come to mind, but I am wondering if these equations are appropriate to use because of: (i) The non-linear pressure curve of the gas behind a fired bullet (ii) The bullet is moving through the barrel (dynamic system) Here is a link to some interference fit equations I'm referring to, http://www.engineersedge.com/calculators/machine-design/press-fit/press-fit-equations.htm Any pointers? Answer: Indeed, you can use the same equations for calculating the stresses, as the barrel essentially is a thick-walled pipe with internal pressure. Looking at the equations you linked: (7) "Radial Stress Caused by axial force" (8) "Circumferential Stress Caused by Axial force" are the right ones. (There seems to be a mistake though: the correct term for (8) would be "Circumferential Stress Caused by internal pressure".) The nonlinearity of the pressure: the pressure will be the highest at the chamber. Assuming an even wall thickness, check the stresses here and you are ok. In case of a changing barrel outer diameter, you should look for benchmark pressure curves and check the stresses at multiple locations along the barrel. Dynamics: in my opinion this concerns the fatigue life of the barrel. To calculate the lifetime in terms of number of shots fired and survival probability, you will need the Haigh diagram of the chosen material. But I believe the wear will be the limiting factor. Edit: this paper seems to be dealing with the same issue: http://www.slideshare.net/JoshuaRicci/design-of-a-rifle-barrel
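For reference, the underlying thick-walled-cylinder (Lamé) formulas can be evaluated directly. The sketch below uses made-up barrel dimensions and a made-up peak chamber pressure purely for illustration; the hoop stress at the bore is the maximum and is what you would compare against the material's yield strength with a safety factor:

```python
# Lame thick-walled cylinder stresses at the bore (inner surface) under
# internal pressure only -- a sketch with hypothetical barrel dimensions.
p_i = 400e6    # peak chamber pressure, Pa (~58 kpsi; assumed for illustration)
r_i = 0.004    # bore radius, m (assumption)
r_o = 0.012    # outer radius, m (assumption)

# Circumferential (hoop) stress at r = r_i, the maximum value:
sigma_hoop = p_i * (r_o**2 + r_i**2) / (r_o**2 - r_i**2)
# Radial stress at r = r_i equals -p_i (compressive, by the boundary condition):
sigma_radial = -p_i

print(sigma_hoop / 1e6)  # ~500 MPa for these assumed dimensions
```

As the answer notes, evaluating this at the chamber (highest pressure) and at a few stations along a tapered barrel covers the static part of the problem; fatigue and wear are separate questions.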
{ "domain": "engineering.stackexchange", "id": 1031, "tags": "stresses, dynamics, friction" }
First Linked List in Java
Question: I just started learning Java, and wrote up a linked list. I'm coming from just having learned Haskell, so I've been out of the mindset of stateful computations for awhile. I'd like to know how it could be improved in any way. Please be critical. Mainly though, I'd like suggestions on readability, and safe practices. It's very simple right now, only allowing additions at the end and deletions; but I figured it would be sufficient for a review. class Node<T> { T cargo; Node<T> tail; public Node(T nCargo) { cargo = nCargo; tail = null; } public Node<T> getNext() { return tail; } public T getCargo() { return cargo; } public boolean isLast() { return tail == null; } public Node<T> getLast() { Node<T> curNode = this; while (!curNode.isLast()) { curNode = curNode.getNext(); } return curNode; } public void add(T nCargo) { tail = new Node<T>(nCargo); } public boolean canLink() { return this.getNext().getNext() != null; } public void deleteNext() { if (this.canLink()) { tail = tail.getNext(); } else { tail = null; } } } public class Vect<T> { Node<T> head; int vSize; public Vect() { head = null; vSize = 0; } public Vect(T cargo) { head = new Node<T>(cargo); vSize = 1; } public void pushBack(T cargo) { Node<T> last = head.getLast(); vSize += 1; last.add(cargo); } private Node<T> getNodeAt(int i) { Node<T> curNode = head; for (int cI = 0; i > cI; cI++) { curNode = curNode.getNext(); } return curNode; } public T getAt(int i) { return getNodeAt(i).getCargo(); } public int length() { return vSize; } public void deleteHead() { if (head.getNext() != null) { head = head.getNext(); } else { head = null; } vSize -= 1; } public void deleteAt(int i) { if (i == 0) { deleteHead(); } else { Node<T> curNode = this.getNodeAt(i - 1); curNode.deleteNext(); vSize -= 1; } } } and its use: public class Main { public static void main(String[] args) { Vect <Integer> v = new Vect<Integer>(2); v.pushBack(3); v.pushBack(4); v.pushBack(5); v.pushBack(6); for (int i = 0; i < v.length(); ++i) { 
System.out.println(v.getAt(i)); } v.deleteAt(2); for(int i = 0; i < v.length(); ++i) { System.out.println(v.getAt(i)); } } } Answer: Access to member variables Restrict the accessibility of member variables as much as possible: private final T cargo; private Node<T> tail; You don't want these fields to be editable outside of this class. Both of these should be private. And since you never change cargo after setting it, it can be final. Naming I suggest the following renames: data instead of cargo cargo instead of nCargo in the constructor (and then use this.cargo = cargo to assign to the field) index instead of i in getAt and getNodeAt size instead of vSize Use the diamond operator Assuming you can use Java 7 (hopefully, because Java 6 is no longer supported), you should change this: tail = new Node<T>(nCargo); to this: tail = new Node<>(nCargo); This is called the diamond operator. The compiler can figure out the right type. Formatting This is very badly formatted public void deleteNext() { if (this.canLink()) { tail = tail.getNext(); } else { tail = null; } } It should have been like this: public void deleteNext() { if (this.canLink()) { tail = tail.getNext(); } else { tail = null; } } But actually, I would use a ternary operator to make it more compact: public void deleteNext() { tail = canLink() ? tail.getNext() : null; } Simplifications This is not only badly formatted, it can also be vastly simplified: public void deleteHead() { if (head.getNext() != null) { head = head.getNext(); } else { head = null; } vSize -= 1; } to this: public void deleteHead() { head = head.getNext(); --size; } Misc A more common way of writing x += 1 is ++x, and x -= 1 as --x.
{ "domain": "codereview.stackexchange", "id": 10815, "tags": "java, linked-list" }
Fourier Series of a given function
Question: This is a very simple piece of code that expresses a function in terms of a Trigonometric Fourier Series and generates an animation based on the harmonics. I would like to know some ways to improve the performance of the code and its readability. I believe that any question about the mathematical part can be answered here. Here's the whole code: import matplotlib.pyplot as plt from numpy import sin, cos, pi, linspace from scipy.integrate import quad as integral from celluloid import Camera def desired_function(t): return t**2 def a_zero(function, half_period, inferior_limit, superior_limit): return (1/(2*half_period)) * integral(function, inferior_limit, superior_limit)[0] def a_k(k, half_period, inferior_limit, superior_limit): return (1/half_period) * integral(lambda x: desired_function(x) * cos(x * k * (pi / half_period)), inferior_limit, superior_limit)[0] def b_k(k, half_period, inferior_limit, superior_limit): return (1/half_period) * integral(lambda x: desired_function(x) * sin(x * k * (pi / half_period)), inferior_limit, superior_limit)[0] def main(): fig = plt.figure() camera = Camera(fig) inferior_limit, superior_limit = -pi, pi half_period = (superior_limit - inferior_limit) / 2 x = linspace(inferior_limit, superior_limit, 1000) f = a_zero(desired_function, half_period, inferior_limit, superior_limit) # The sum ranging from 1 to total_k total_k = 30 for k in range(1, total_k+1): f += a_k(k, half_period, inferior_limit, superior_limit)*cos(k*x*(pi/half_period)) +\ b_k(k, half_period, inferior_limit, superior_limit)*sin(k*x*(pi/half_period)) plt.plot(x, desired_function(x), color='k') plt.plot(x, f, label=f'k = {k}') camera.snap() animation = camera.animate() plt.close() animation.save('animation.gif') if __name__ == '__main__': main() And my biggest issue is this for loop being used as a Σ, is there a more elegant way to express this?
# The sum ranging from 1 to total_k total_k = 30 for k in range(1, total_k+1): f += a_k(k, half_period, inferior_limit, superior_limit)*cos(k*x*(pi/half_period)) +\ b_k(k, half_period, inferior_limit, superior_limit)*sin(k*x*(pi/half_period)) Thanks in advance to anyone who takes the time to help me with this! Answer: I like it. This code hews closely to the underlying math, using lovely identifiers, and is very clear. For {a,b}_k, here's the only part I'm not super happy with: def a_k(k, half_period, inferior_limit, superior_limit): return (1/half_period) * integral(lambda x: desired_function(x) * cos(x * k * (pi / half_period)), inferior_limit, superior_limit)[0] We went with desired_function as a global. Ok, not the end of the world. Just sayin', consider adding it to the parameters. I probably would have gone with a concise lo, hi instead of inferior_limit, superior_limit. But hey, that's just me, it's a matter of taste. Don't change it on my account, looks good as it is. Maybe package them both as the 2-tuple limits, and then the call goes from ... * integral(... * (pi / half_period)), inferior_limit, superior_limit)[0] to ... * integral(... * (pi / half_period)), *limits)[0] def main(): ... x = linspace(inferior_limit, superior_limit, 1000) ... total_k = 30 We have two magic numbers, both quite reasonable. Consider burying one or both of them in the signature: def main(num_points=1000, total_k=30): One benefit of such practice is it makes it easy for unit tests to adjust numbers to more convenient values. It wouldn't hurt to sometimes add a """docstring""" to a function, explaining what it expects and what it offers. Not every function, mind you, just the big one(s). To follow PEP-8 conventions more closely, consider running black over this source. Whitespace within math expressions tends to improve readability. more elegant way to express [the summation?] No, I respectfully disagree with your assessment. 
It exactly corresponds to the math and it looks beautiful to me. You could change it, but then I fear it would be less clear. Now, if we wanted to worry about efficiency and execution speed, there are some constants that could be hoisted out of the loop, and the repeated calls to integral() seem like they might show up on cProfile's hit list. But if we cared about elapsed time we would use numba's @jit or another technique, and compilers are good at hoisting. This code achieves its design goals and is maintainable. ATM it looks good to me. Ship it!
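One sanity check worth adding to the review: for $f(t)=t^2$ on $[-\pi,\pi]$ the coefficients have closed forms — with the conventions in the posted code, $a_0=\pi^2/3$, $a_k=4(-1)^k/k^2$ and $b_k=0$ — so the quadrature can be verified against them. A dependency-free sketch using composite Simpson's rule (an illustration, not part of the original review):

```python
from math import pi, cos

def f(t):
    return t**2

def simpson(g, a, b, n=2000):
    # Composite Simpson's rule; n must be even.
    h = (b - a) / n
    s = g(a) + g(b) + sum((4 if i % 2 else 2) * g(a + i * h) for i in range(1, n))
    return s * h / 3

half_period = pi
a0 = simpson(f, -pi, pi) / (2 * half_period)

def a_k(k):
    return simpson(lambda t: f(t) * cos(k * t), -pi, pi) / half_period

# Closed forms for f(t) = t^2: a0 = pi^2/3, a_k = 4*(-1)^k / k^2
print(abs(a0 - pi**2 / 3) < 1e-6)               # True
print(abs(a_k(3) - 4 * (-1)**3 / 3**2) < 1e-6)  # True
```

A check like this makes a handy unit test when refactoring the loop, whatever form it ends up taking.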
{ "domain": "codereview.stackexchange", "id": 44553, "tags": "python, performance, numpy, signal-processing, scipy" }
Mechanism for the reaction between trisilylamine and hydrogen chloride
Question: In Concise Inorganic Chemistry by J. D. Lee (adapted by Sudarsan Guha), in the chapter "Chemical Bonding", under the topic "Back Bonding with Nitrogen as a donor atom" the following reaction is given: $\ce{(SiH3)3N +4HCl \to NH4Cl + 3SiH3Cl}$ (bond cleavage through $\mathrm{S_N2}$ mechanism) I have learnt about $\mathrm{S_N2}$ in organic chemistry. I know its mechanism there and tried to do the same here (there is no reaction mechanism given for this in my book). As suggested by this comment, $\ce{(SiH3)3N}$ is still basic so the following set of reactions takes place: $$\ce{\color{orange}{(SiH3)3N} + \color{blue}{HCl} \to (SiH3)3NH+ +Cl-}\tag{Protonation}$$ $$\ce{(SiH3)3NH+ +Cl- \to (SiH3)2NH + \color{red}{SiH3Cl}}\tag{Nucleophilic Attack}$$ $$\ce{(SiH3)2NH + \color{blue}{HCl} \to (SiH3)2NH2+ + Cl-}\tag{Protonation}$$ $$\ce{(SiH3)2NH2+ +Cl- \to (SiH3)NH2 + \color{red}{SiH3Cl}}\tag{Nucleophilic Attack}$$ $$\ce{(SiH3)NH2 + \color{blue}{HCl} \to (SiH3)NH3+ + Cl-}\tag{Protonation}$$ $$\ce{(SiH3)NH3+ + Cl- \to NH3 + \color{red}{SiH3Cl}}\tag{Nucleophilic Attack}$$ $$\ce{NH3 + \color{blue}{HCl} \to \color{green}{NH4+Cl-}}\tag{Protonation}$$ Summing up all the above reactions we get the overall reaction as follows: $$\ce{\color{orange}{(SiH3)3N} + 4\color{blue}{HCl} \to \color{green}{NH4+Cl-} + 3\color{red}{SiH3Cl} }$$ I got the mechanism I was looking for! But the main problem is the first step - protonation. In my book, it's given that in $\ce{(SiH3)3N}$, due to the presence of back bonding, the lone pair on nitrogen atom is not available for donation. Or in other words, $\ce{(SiH3)3N}$ cannot act as a lewis base (electron-pair donor). Thus the first step of protonation (acid-base reaction) is ruled out. So, how does the above reaction take place? Do we have any other reaction mechanism for the above reaction? If yes, could you please specify the mechanism? Or is the above one is itself correct? 
I am unable to find relevant information regarding this on the internet. Answer: I am not sure what back bonding you are referring to. It may be related to the outdated concept of silicon using its vacant, high-energy d orbitals in any meaningful way which is not actually the case. There is a second effect which consists of overlap of nitrogen’s p orbitals with the σ* orbitals of the $\ce{Si-H}$ bonds (which are silicon-centred due to the low electronegativity of silicon). This indeed reduces nitrogen’s basicity slightly compared to other nitrogen compounds but the effect is not that important overall. If you check page 2 of the Evans $\mathrm pK_\mathrm a$ table, you can see that the difference between $\ce{(Me3Si)2NH}$ and $\ce{(iPr)2NH}$ is about 10 logarithmic units which is noticeable (i.e. the HMDS salts are milder bases than LDA) but it does nothing to counteract their overall behaviour. (I am aware that I am arguing with the wrong $\mathrm pK_\mathrm a$ values but sadly the table does not contain data for $\ce{(Me3Si)2NH2+}$ so this is the closest equivalent.) It is also important to point out that you are using a very strong acid $(\mathrm pK_\mathrm a \approx 8)$ which will protonate practically any nitrogen lone pair that it can come across.
{ "domain": "chemistry.stackexchange", "id": 12782, "tags": "inorganic-chemistry, acid-base, reaction-mechanism, nucleophilic-substitution" }
How do moons get captured?
Question: A moon-sized object is running loose in the Solar System, perhaps after a planetary collision. As it approaches a planet, it's presumably following an approximately hyperbolic path. If it goes on past, it's still on the same hyperbola, on a curve mirroring its approach (presumably). How can the planet ever capture it, whatever the body's velocity? Why doesn't it either collide or go on past? Answer: How can a planet capture a moon? There are 178 moons in the Solar System, according to the NASA Planetary Fact Sheet, so it seems to be a common event. The following sections will show that moon capture is actually unlikely, but when a planet has one or more moons capture becomes easier. Initial Conditions Starting from the initial conditions, the planet is in orbit about the sun, and an asteroid is in a different orbit about the sun. In order for capture to become possible, the asteroid and the planet must come into proximity. When the asteroid comes inside the Sphere of influence of the planet, the gravity of the planet is the main factor in determining the path of the asteroid. Possible Outcomes Relative to the planet, the asteroid will be following a hyperbolic trajectory, and hence has sufficient kinetic energy to avoid capture. A large variety of outcomes may occur, but the ones that lead to capture are those where the asteroid somehow loses enough kinetic energy for its velocity to fall below the escape velocity of the planet while retaining enough energy to achieve a closed (elliptical) orbit. The main (not the only) possible outcomes are the orbit of the asteroid is perturbed, by a greater or lesser extent, and it continues on its way out of the sphere of influence of the planet. the orbit of the asteroid is perturbed, and the asteroid impacts the planet surface. 
That would usually be the end of the process, but current theories on how Earth captured the Moon are that a body named Theia impacted the Earth, and the Moon formed from some of the collision debris. the orbit of the asteroid is perturbed, and the path of the asteroid intersects the atmosphere of the planet, losing kinetic energy as heat in the atmosphere (similar to aerobraking). the orbit of the asteroid nears an existing moon of the planet and is accelerated (in the sense that deceleration is just acceleration with the opposite sign) by the existing moon, as used by the MESSENGER spacecraft to slow its speed before orbiting Mercury. The last two cases admit the possibility of capture. Possible Capture After losing energy in the planetary atmosphere, if the asteroid has lost enough energy it may enter a closed orbit around the planet. The problem is that the orbit will intersect the atmosphere again, losing energy each time it does so, until it impacts on the planetary surface. Capture can occur when an existing moon is present and is in just the right place for its gravity to reduce the eccentricity of the orbit of the asteroid. So, the most likely case where a planet can capture a free asteroid is when there are already one or more moons present. The incoming asteroid must avoid entering the Hill sphere of the existing moon - the region where the moon would dominate the path of the asteroid. Gravity assist can accelerate an asteroid when the asteroid is passing outside the orbit of the moon, but can decelerate the asteroid when it is passing inside the orbit of the moon. In this case some of the kinetic energy of the asteroid is transferred to the moon. As is the case with aerobraking capture, gravity assisted capture requires the existing moon to be in just the right place. Another mechanism A rather elegant paper published in Nature (mentioned below) shows how two bodies orbiting each other as they approach the planet could have led to one being captured by Neptune.
This mechanism could apply in other cases also. This Dissertation (pdf) discusses a similar process for Jupiter. Irregular bodies It turns out that irregularly shaped bodies can be captured more easily than spherical bodies. Orbiting within the Hill sphere of the planet is not enough for capture to be permanent. Only orbits in the lower half of the Hill sphere are stable. Bodies in higher orbits can be perturbed by nearby planets, and the body can eventually be ejected. But irregularly shaped bodies exert minute fluctuations in gravitational attraction on the planet, and actually orbit in a chaotic manner. When other moons or rings are present these chaotic orbits gradually transfer energy to the bodies in the lower orbits, causing the new body to orbit lower, and hence become immune to external perturbation. [citation needed] Prograde vs retrograde orbits The same analysis of chaotic orbits, and earlier work, also concluded that retrograde orbits are more stable than prograde orbits. Whereas prograde orbits are only stable in the inner half of the Hill sphere, retrograde orbits can be stable out to 100% of the Hill radius. Hence retrograde capture is more commonly observed (this is not the whole story, it is still a matter of research). Multiple existing moons, rings, and the early Solar System While the probability of a single moon being in the right place at the right time is low, when there are multiple moons the probability of an initial helpful interaction rises linearly. But the probability of additional interactions rises geometrically, so the more moons a planet has the more likely it is to capture more. The existence of rings also aids capture by exerting a drag on the new moon, taking its energy and lowering its orbit, in much the same way that uncaptured gas would do in the early Solar System. The biggest planets have the most moons It may be obvious, but the biggest planets have the most moons.
This is because they have deeper gravity wells, and sweep in more objects. Even though the probability of capture is low (most objects are just pulled into the planet), a steady trickle has been captured over millions of orbits. Conclusion Each capture mechanism requires a fortuitous set of conditions, and so is actually a quite rare event. One mechanism is that a pair of co-orbiting asteroids becomes separated when one enters the planetary Hill sphere. The odds for an individual asteroid are improved when the asteroid arrives with low kinetic energy that must be given up to other bodies orbiting the planet, and when there are already many moons or a ring system. See also Could Earth's gravity capture an asteroid? - earthsky.org Dynamics of Distant Moons of Asteroids - Icarus (pdf) Hypothesis: New evidence on origin of the Moon support cataclysmic collision theory Neptune's capture of its moon Triton in a binary-planet gravitational encounter - Nature Planetary Fact Sheet - NASA The use of the two-body energy to study problems of escape/capture (pdf) Wikipedia Aerobraking Elliptic orbit Escape velocity Hill sphere Hyperbolic trajectory Gravity assist Retrograde and prograde motion Sphere of influence Theia
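The Hill sphere mentioned throughout can be estimated with $r_H \approx a\,(m/3M)^{1/3}$. A small sketch for the Earth (rounded constants; an illustration added alongside the answer, not part of it):

```python
a_earth = 1.496e11   # Earth's semi-major axis around the Sun, m
m_earth = 5.972e24   # mass of the Earth, kg
m_sun = 1.989e30     # mass of the Sun, kg

# Hill radius: the region where Earth's gravity dominates the Sun's
r_hill = a_earth * (m_earth / (3 * m_sun)) ** (1 / 3)
print(r_hill / 1e9)  # ~1.5, i.e. about 1.5 million km

# Prograde orbits are stable only out to roughly half the Hill radius;
# the Moon at ~384,400 km sits comfortably inside that.
moon_orbit = 3.844e8
print(moon_orbit < 0.5 * r_hill)  # True
```

This is the same stability criterion the answer uses to explain why retrograde captures, stable out to the full Hill radius, are more commonly observed.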
{ "domain": "astronomy.stackexchange", "id": 779, "tags": "jupiter, natural-satellites" }
Working with text files in Excel
Question: I have an excel file containing a long text in column A. I am looking for the words starting by "popul" such as popular and populate . I can find these cells by the formula: =SEARCH("popul",A1,1) I want a function that returns the whole words starting by popul such as popular and populate. Answer: I'm no Excel expert, as I generally use Python or R instead, but this might get you started until an Excel expert comes along. In the meantime, it would help if you clarified your question. And you should be aware that search will only find you the index of the first match, not all matches in the string. If you only need the first hit, you can use =MID(A1,SEARCH("popul",A1,1),IFERROR(FIND(" ",A1,SEARCH("popul",A1,1)),LEN(A1)+1)-SEARCH("popul",A1,1)) although I cannot claim this is the best way to do this. You really didn't specify where you want the results to appear, how they should look, or if you only have one cell you need to search in. It would also help to know the version of Excel you have. I'll also present a crude way to return all the hits in the string: Cell A1 contains the string, B1 has no formula, and if you run out of "n/a"s you can extend columns B, C, and D by filling down. The formulas are as follows: B3 and below use =IF(C2+1<LEN($A$1),C2+1,"n/a") C2 and below use =IFERROR(FIND(" ",$A$1,SEARCH("popul",$A$1,B2)),LEN($A$1)+1) D2 and below use =IFERROR(MID($A$1,SEARCH("popul",$A$1,B2),C2-SEARCH("popul",$A$1,B2)),"") As you can see, there's little to no error checking except to deal with the match at the end of the string. In the end though, if you're going to use Excel for this you should probably create a user defined function or utilize VBA instead of in-cell formulas.
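Since the answer already mentions Python as an alternative, here is a minimal sketch of the same whole-word extraction using Python's `re` module. The sample text and helper name are just illustrations; note that unlike Excel's SEARCH, the word boundary `\b` here avoids matching the prefix in the middle of a word (e.g. "unpopular"):

```python
import re

def words_with_prefix(text, prefix):
    # \b anchors the prefix to the start of a word, \w* extends the
    # match to the end of that word; matching is case-insensitive,
    # like Excel's SEARCH.
    return re.findall(r'\b' + re.escape(prefix) + r'\w*', text, re.IGNORECASE)

text = "The popular vote will populate the new populist register."
print(words_with_prefix(text, "popul"))  # → ['popular', 'populate', 'populist']
```

Unlike the single-hit SEARCH formula, `re.findall` returns every match in the string at once.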
{ "domain": "datascience.stackexchange", "id": 335, "tags": "dataset, excel" }
Effect of NOT changing filter weights of CNN during backprop
Question: What is the effect of NOT changing filter weights of a CNN during backpropagation? I changed only the fully connected layer weights while training on the MNIST dataset and still achieved almost 99 percent accuracy. Answer: By not changing the weights of the convolutional layers of a CNN, you are essentially feeding your classifier (the fully connected layer) random features (i.e. not the optimal features for the classification task at hand). MNIST is an easy enough image classification task that you can pretty much feed the input pixels to a classifier without any feature extraction and it will still score in the high 90s. Besides that, perhaps the pooling layers help a bit... Try training an MLP (without the conv/pool layers) on the input image and see how it ranks. Here is an example where an MLP (1 hidden & 1 output layer) reached 98+% without any preprocessing/feature extraction. Edit: I'd also like to point out to another answer I wrote, which goes into more detail on why MNIST is so easy as an image classification task.
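The "random features can be good enough" point can be demonstrated away from MNIST with a toy experiment: freeze a random ReLU projection as a stand-in for the untrained convolutional layers and fit only a final linear read-out. Everything here (the data, dimensions, and feature width) is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
# an easy, well-separated two-class problem in 20 dimensions
X = np.vstack([rng.normal(-1.0, 1.0, (200, 20)),
               rng.normal(+1.0, 1.0, (200, 20))])
y = np.array([0] * 200 + [1] * 200)

# frozen random "feature extractor" -- these weights are never trained
W = rng.normal(size=(20, 50))
H = np.maximum(X @ W, 0)                   # random ReLU features

# train only the final linear read-out (least squares to 0/1 targets)
H1 = np.hstack([H, np.ones((len(H), 1))])  # append a bias column
w, *_ = np.linalg.lstsq(H1, y, rcond=None)
acc = np.mean((H1 @ w > 0.5) == y)
print(acc)                                 # near-perfect on this easy task
```

On a genuinely hard image task, the gap between random and trained features would be much larger.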
{ "domain": "datascience.stackexchange", "id": 3573, "tags": "machine-learning, cnn, mnist" }
Doubt regarding the collision map
Question: Hi, I have a few doubts regarding the collision map: is it possible to use our own cloud data as a collision map? Is there a possibility to identify clusters/clouds within the collision map as objects, so that my arm does not consider those as collisions and can go near them? I know these features should be present; I am still looking for them. Until then, if anyone has any idea, do let me know. Thank you :) Originally posted by Navam on ROS Answers with karma: 45 on 2011-11-05 Post score: 0 Answer: Yes. You will have to do the work yourself, such as clustering and deciding what is to be avoided, and then publish on a separate topic to populate the collision map. Originally posted by tfoote with karma: 58457 on 2012-08-03 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 7198, "tags": "ros, collision-map" }
Why does shift push both the terminal symbol and the state onto the stack? Why not push only the state?
Question: Shift means that you need to put the state and the terminal symbol on the stack. But what is the terminal symbol used for? After all, when the reduce command is executed, both the state and the terminal symbol are deleted, and the terminal symbol is never used. Why not push only the state onto the stack? (English is not my native language, so please be kind to my mistakes) Answer: You are talking about the typical description of LR parsing. And you are completely right, the "state" (if you look closely, it is the state of the DFA that recognizes left hand sides of productions when processing the current stack's contents) encodes all information needed. Adding the symbol is redundant. As far as I can tell, it is usually added to clarify what is going on (just a list of state numbers would be completely opaque).
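To make the point concrete, here is a small hand-built LR(0) recognizer for the toy grammar S → aSb | ab, in which the stack holds only state numbers and no grammar symbols are ever stored. The tables were constructed by hand for this single grammar and are purely illustrative:

```python
# ACTION/GOTO tables for the LR(0) automaton of S -> a S b | a b
# (augmented with S' -> S). States: 0 start, 1 accept, 2 after 'a',
# 3 after "ab" (reduce), 4 after "aS", 5 after "aSb" (reduce).
ACTION = {
    (0, 'a'): ('shift', 2),
    (2, 'a'): ('shift', 2),
    (2, 'b'): ('shift', 3),
    (4, 'b'): ('shift', 5),
    (1, '$'): ('accept', None),
}
REDUCE = {3: ('S', 2), 5: ('S', 3)}   # state -> (lhs, |rhs| states to pop)
GOTO = {(0, 'S'): 1, (2, 'S'): 4}

def parse(tokens):
    stack = [0]                        # states only -- no symbols
    toks = list(tokens) + ['$']
    i = 0
    while True:
        state = stack[-1]
        if state in REDUCE:            # reduce never consults stored symbols
            lhs, n = REDUCE[state]
            del stack[-n:]
            stack.append(GOTO[(stack[-1], lhs)])
            continue
        act = ACTION.get((state, toks[i]))
        if act is None:
            return False               # syntax error
        if act[0] == 'accept':
            return True
        stack.append(act[1])           # shift pushes just the next state
        i += 1

print(parse("aabb"))   # → True
print(parse("aab"))    # → False
```

The reduce step pops a fixed number of states (the length of the production's right-hand side) and consults only the exposed state, which is exactly why the symbols are never needed.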
{ "domain": "cs.stackexchange", "id": 15892, "tags": "parsers, lr-k" }
How to calculate the dipole potential in spherical coordinates
Question: I want to calculate the dipole potential in spherical coordinates. I know that the potential can be calculated with $$ \phi = - \int \mathbf E \cdot\mathrm d\mathbf r,$$ but I don't know the electric field. I would say $$ \mathbf E = \frac{1}{4 \pi \epsilon_0 r^3} ( 3p\,\hat{\mathbf {r}} \cos(\theta) -\mathbf p) $$ is the electric field in spherical coordinates, but I'm not sure because I didn't calculate this formula; in fact I got it from a book and don't understand the way it's calculated. Answer: The easiest derivation is probably to start with the potential, and then calculate the electric field as the gradient of that potential. Consider a dipole at the origin aligned along the z-axis, with two point charges of charge $ \pm q $ positioned at $z = \pm\frac{a}{2}$. By symmetry, the potential will be independent of the azimuthal angle, so we only need to consider the potential at the coordinates $\left( r, \theta \right)$. Since we are trying to find the field around a point dipole, we can assume $r \gg a$. The distances from this point to the two charges are given by the cosine rule as $r_\pm^2 = \frac{a^2}{4} + r^2 \mp ar \cos \theta$. The potential at this point is then given by summing the potentials from the two point charges, which is $V\left(r, \theta\right) = \frac{q}{4\pi\varepsilon_0}\left(\left(\frac{a^2}{4} + r^2-ar\cos\theta\right)^{-\frac{1}{2}}-\left(\frac{a^2}{4} + r^2+ar\cos\theta\right)^{-\frac{1}{2}}\right)$. The approximation $r \gg a$ suggests that we can Taylor expand this in $\frac{a}{r}$, and that is what we will do. Taylor expanding gives $V\left(r, \theta\right) = \frac{qa \cos \theta}{4\pi\varepsilon_0 r^2}$. Any higher-order terms can always be ignored, because this is an ideal point dipole. Now all that's left is finding the electric field from this potential. We know that $ \vec E = - \nabla V$. 
Taking the gradient of the dipole potential gives $\vec E = \frac{qa}{4 \pi \varepsilon_0 r^3} \left( 2 \cos \theta \hat r + \sin \theta \hat \theta \right) $. This is nice, but we'd like an expression in terms of the dipole moment; there aren't really any point charges separated by a physical distance, that was just scaffolding to get us this far. Fortunately, the way we've defined the coordinate axes gives us a nice expression for the dipole moment: $\vec p = qa \hat z$, or, in spherical coordinates, $\vec p = qa\left( \cos \theta \hat r - \sin \theta \hat \theta\right)$. So $ \vec p \cdot \hat r = p \cos \theta = qa \cos \theta $. Putting this all into the expression for the electric field gives us $ \frac{1}{4 \pi \varepsilon_0 r^3} \left( 3p \cos \theta \hat r - \vec p \right)$, just as planned.
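As a quick numerical sanity check on the algebra (not part of the original answer), one can differentiate the final potential by central finite differences and compare against the closed-form components $E_r = \frac{2p\cos\theta}{4\pi\varepsilon_0 r^3}$ and $E_\theta = \frac{p\sin\theta}{4\pi\varepsilon_0 r^3}$; the sketch below works in units where $qa/4\pi\varepsilon_0 = 1$:

```python
import math

def V(r, theta):
    # ideal dipole potential, in units where qa/(4*pi*eps0) = 1
    return math.cos(theta) / r**2

def E_numeric(r, theta, h=1e-6):
    # E = -grad V in spherical coordinates, by central differences
    Er = -(V(r + h, theta) - V(r - h, theta)) / (2 * h)
    Et = -(V(r, theta + h) - V(r, theta - h)) / (2 * h * r)
    return Er, Et

def E_closed(r, theta):
    # components read off from (1/r^3)(3 p cos(theta) r_hat - p_vec)
    return 2 * math.cos(theta) / r**3, math.sin(theta) / r**3

for r, theta in [(1.0, 0.3), (2.5, 1.2), (4.0, 2.0)]:
    for num, exact in zip(E_numeric(r, theta), E_closed(r, theta)):
        assert abs(num - exact) < 1e-6
```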
{ "domain": "physics.stackexchange", "id": 51676, "tags": "homework-and-exercises, electrostatics, electric-fields, potential, dipole" }
Soft bremsstrahlung classical computation
Question: On page 177 in Peskin & Schroeder there is a derivation I have a hard time with. They write the current for a charge at rest as $$j^\mu = (1,0)^\mu e \delta(x). $$ I don't understand what the four vector $(1,0)^\mu$ represents. Why is time=1? Answer: Here $(1,0)^\mu$ is shorthand for the 4-vector $(1, \mathbf 0)$: time component equal to 1 and all three spatial components zero. The time component (0th component) of the 4-current is the charge density, so $j^0 = e\,\delta(x)$ describes a point charge $e$ sitting at rest at the origin. The spatial components are the 3-vector current, and they vanish because the charge is not moving.
{ "domain": "physics.stackexchange", "id": 12070, "tags": "quantum-field-theory, tensor-calculus" }
Command sequence executor with error handling
Question: I've recently learned that as a Java developer my methods throw exceptions way too often, even when the reason for throwing is related to a business rule, when returning an error code instead could perhaps be more suitable. I'm trying to learn how to differentiate real exceptional cases which require throwing exceptions from the ones I don't need to. So I'd like this code review's main focus to be on avoiding unnecessary exceptions. The code is in Portuguese, but I'll do my best to explain it in English. What is going on in it: this server-side Java class has a BlockingQueue<JSONObject> holding objects representing a command sequence along with its parameters. Command sequences are taken from the queue in a loop and executed (they send commands to a remote device; a command is a request sent to the device which sends a response back). The executarSequenciaDeComandos() ("executeCommandSequence") method returns a result which updates the command sequence status in the database (it may either succeed, or fail due to a number of reasons). When there is no command sequence to execute, the thread blocks waiting for the next command sequence. As long as there is a connection to the device, the thread loop is kept running, executing command sequences or waiting for the next one. If the connection is lost, an observer is notified which interrupts the thread, causing an InterruptedException to be thrown (I'm replacing it with a ViaDeComunicacaoFechadaException, which translates to ConnectionClosedException). Such an exception causes the loop to end. The executarSequenciaDeComandos() method can also throw such an exception in the middle of the communication with the device, causing the same result. My questions: Should aguardarProximaSequenciaDeComandos() ("waitForNextCommandSequence") really throw ViaDeComunicacaoFechadaException? Is losing the connection a normal, expected scenario in this case or should I consider it an exceptional situation? 
Connections can be lost often, leading the device to attempt to reconnect to the server. Shouldn't this method just catch the InterruptedException and return null instead, and I include a null check after this method's call which causes the execution flow to break out of the loop? If I consider it a normal situation, shouldn't I consider it normal as well regarding the executarSequenciaDeComandos() method? In other words, shouldn't this method favor returning a status code indicating that the connection has been lost over throwing a ViaDeComunicacaoFechadaException? Currently I'm favoring the exception because the command sequences (not shown in the code) are algorithms which send a number of commands to the device and each command has the possibility of losing connection with the device, thus I end up with a simpler algorithm if I allow each one of those commands to throw an exception that will bubble up instead of having the result of each of them checked for a connection lost status, which would really pollute the algorithms. For further clarification of how I communicate with the device from inside a command sequence algorithm, I have for instance a ConnectionWithGivenDevice class with an interface such as: public ResultFromCommandA callCommandA(parameters) throws ViaDeComunicacaoFechadaException, InterruptedException; public ResultFromCommandB callCommandB(parameters) throws ViaDeComunicacaoFechadaException, InterruptedException; public ResultFromCommandC callCommandC(parameters) throws ViaDeComunicacaoFechadaException, InterruptedException; I tried comparing those two scenarios with general I/O handling in Java but I'm not sure this is a good comparison. E.g. trying to read or write through a broken stream in Java throws an IOException. Which means that this is an exceptional condition. Can I say the same for my scenarios? 
Finally, depending on the adopted solution I might be able to break the executarLoopDeExecucaoDeSequenciasDeComandos() ("runCommandSequencesExecutionLoop") method which contains the while loop into smaller methods. Currently I'm finding it a bit difficult due to the break line which prevents turning the inner try block into a new method. Or perhaps that does not depend upon the solution and can be solved in another way. I was going to ask for recommendations on how to deal with the other exceptions, but I'm afraid the review would become unnecessarily long. Any to-the-point comments on them would be welcome though. DeveDerrubarOProcessoException translates to "ProcessMustBeShutdownException"; and InterruptedException is thrown if by any reason (currently none) the wait for a command response is interrupted (I suppose I'd want it to be another side effect of losing connection with the device). public abstract class ExecutorDeSequenciasDeComandos { private final BlockingQueue<JSONObject> filaBloqueanteDeSequenciasDeComandos = new LinkedBlockingQueue<>(); private final AtualizadorDeStatusDeSequenciasDeComandos atualizadorDeStatusDeSequenciasDeComandos; private final Thread threadExecutorDasSequenciasDeComandos = new Thread() { @Override public void run() { executarLoopDeExecucaoDeSequenciasDeComandos(); } }; public ExecutorDeSequenciasDeComandos(AtualizadorDeStatusDeSequenciasDeComandos atualizadorDeStatusDeSequenciasDeComandos) { if (atualizadorDeStatusDeSequenciasDeComandos == null) { throw new IllegalStateException("Os argumentos não podem ser nulos."); } this.atualizadorDeStatusDeSequenciasDeComandos = atualizadorDeStatusDeSequenciasDeComandos; } protected abstract ViaDeComunicacao getViaDeComunicacao(); protected abstract void logarComIdDoPainel(String mensagem); protected abstract void logarComIdDoPainel(Exception e); public void chamarEsteMétodoUmaVezLogoApósInstanciação() { getViaDeComunicacao().adicionarObservadorDeFechamentoDeConexao(new 
ObservadorDeFechamentoDeViaDeComunicacao() { @Override public void onFechado(ViaDeComunicacao via) { logarComIdDoPainel("Conexão fechada."); threadExecutorDasSequenciasDeComandos.interrupt(); } }); threadExecutorDasSequenciasDeComandos.start(); } public final void receberSequenciaDeComandos(JSONObject jsonComando) { if (getViaDeComunicacao().isAberta()) { filaBloqueanteDeSequenciasDeComandos.offer(jsonComando); } } private JSONObject aguardarProximaSequenciaDeComandos() throws ViaDeComunicacaoFechadaException { try { return filaBloqueanteDeSequenciasDeComandos.take(); } catch (InterruptedException e) { throw new ViaDeComunicacaoFechadaException(e); } } protected abstract ResultadoDaSequenciaDeComandos executarSequenciaDeComandos(JSONObject jsonComando) throws ViaDeComunicacaoFechadaException, DeveDerrubarOProcessoException, InterruptedException; private void executarLoopDeExecucaoDeSequenciasDeComandos() throws DeveDerrubarOProcessoException, RuntimeException { while (true) { try { JSONObject jsonComando = aguardarProximaSequenciaDeComandos(); Integer idDoComando = jsonComando.getInt("idComando"); try { atualizadorDeStatusDeSequenciasDeComandos.enviarParaFilaStatusDoComando(idDoComando, StatusDeSequenciaDeComandos.EM_EXECUCAO, null); ResultadoDaSequenciaDeComandos resultado = executarSequenciaDeComandos(jsonComando); atualizadorDeStatusDeSequenciasDeComandos.enviarParaFilaStatusDoComando(idDoComando, resultado.getStatus(), resultado.getDescricao()); } catch (JsonSyntaxException e) { atualizadorDeStatusDeSequenciasDeComandos.enviarParaFilaStatusDoComando(idDoComando, StatusDeSequenciaDeComandos.ERRO, "Erro lendo os parâmetros do JSON: " + e.getMessage()); } catch (ViaDeComunicacaoFechadaException | InterruptedException e) { atualizadorDeStatusDeSequenciasDeComandos.enviarParaFilaStatusDoComando(idDoComando, StatusDeSequenciaDeComandos.TIMEOUT, null); break; } catch (DeveDerrubarOProcessoException e) { throw e; } catch (RuntimeException e) { logarComIdDoPainel(e); 
atualizadorDeStatusDeSequenciasDeComandos.enviarParaFilaStatusDoComando(idDoComando, StatusDeSequenciaDeComandos.ERRO, "Ocorreu um erro ao executar o comando."); } } catch (JSONException e) { logarComIdDoPainel(e); } catch (ViaDeComunicacaoFechadaException e) { break; } } } } Answer: The best tip I can give you regarding this question is: always code in English. English is the main, universal language of coding, and just like all programming languages, and this forum, are in English, so should be your code. I know that you are probably saying right now "but this code is for myself". It doesn't matter. As a programming practice, and also for situations like you asking a question on the internet, you should always keep your code in English. Now that that's settled, we can move on to the actual answer. Well, the short answer is, it depends. There isn't really a "right" way. It just depends on how exactly you want to manage the code. On one hand, it's correct to use an exception. It's a special case that the calling function needs to be notified of. In that case, you should have the calling function catch the exception and deal with it. On the other hand, exceptions are, well, exceptional. They should be used for stating abnormal behavior. And when dealing with wireless/remote networking, disconnects aren't that abnormal. So what can you do instead? Well, a very common programming trick in those cases is to make your method boolean. Then, you can return true to indicate success or false to indicate failure. I also personally think it's the best option for this type of "gray-zone" exception, like your situation. I didn't have the time to completely read the code (or the will to deal with the fact that it's in Portuguese :P), so I'm not sure exactly what's returning what, but if it's important for your method to return a non-boolean value, then instead of the exception, you can return null, but again, make sure the caller deals with it.
{ "domain": "codereview.stackexchange", "id": 21035, "tags": "java, error-handling, exception" }
Post Correspondence Problem variant
Question: This is probably pretty simple, but consider the standard Post Correspondence Problem: Given $\alpha_1, \ldots, \alpha_N$ and $\beta_1, \ldots, \beta_N$, find a sequence of indices $i_1, \ldots, i_K$ such that $\alpha_{i_1}\cdots \alpha_{i_K} = \beta_{i_1}\cdots \beta_{i_K}$. This is, of course, undecidable. Now, I call this a 'variant', but it's not really--it essentially throws away 'correspondence'. Anyway, consider the following variant: Given $\alpha_1, \ldots, \alpha_N$ and $\beta_1, \ldots, \beta_N$, find two sequences of indices $i_1, \ldots, i_K, j_1, \ldots, j_{K}$ such that $\alpha_{i_1}\cdots \alpha_{i_K} = \beta_{j_1}\cdots \beta_{j_{K}}$. What can be said about this variant? If this is trivial, my apologies! Answer: This new version - where $K = K'$ - is decidable. Let's show that the language $L := \bigcup_{k \geq 1} (A^k \ \cap \ B^k)$ is a CFL. Then the decidability follows from the decidability of the emptiness of a CFL. We'll design a PDA to accept $L$. On input $x$, this PDA will try to construct two factorizations of $x$, one using words of $A$, and the other using words of $B$. It will use a counter on the stack to ensure these two factorizations are of the same length. Conceptually I will refer to the $A$-factorization of $x$ so far as sitting on top of $x$ and the $B$-factorization as sitting on the bottom of $x$. Then the stack will contain $n$ counters iff the absolute value of the difference of the number of words matched on the top, minus the number of words on the bottom, is $n$. We need another state of the PDA to record what the appropriate sign is corresponding to $n$ (which tells us if the $A$-factorization is longer than the $B$-factorization, or vice versa). As we scan the letters of $x$, we nondeterministically guess a word $t$ of $A$ and a word $u$ of $B$ which this letter begins. 
Once we guess, we are committed to matching the rest of $t$ and $u$ against $x$; if at any point our match fails, we halt in this nondeterministic choice. So we also maintain, in the state of our PDA, the suffix of $t$ and $u$ that remains to match. As we scan further letters, we continue matching until we hit the end of $t$ or the end of $u$ (or both). When we hit the end of a word, we update the stack appropriately, and then guess a new word to match in either the top or bottom (or both). We accept if the suffixes remaining to be matched are both empty in top and bottom, and the stack contains no counters. We can construct this PDA effectively, so we can effectively decide if it accepts anything or not (for example, by converting effectively to a grammar $G$ and then using the usual method to see if G generates anything). Edit: One can also turn this into an upper bound on how big $k$ can be, in the worst case. I think it should give an upper bound of something roughly like $2^{O(l^2)}$, where $l$ is the sum of the lengths of the words in $A$ and $B$. Edit: I see now that the requirement that $A$ and $B$ be finite sets can also be relaxed, to the requirement that $A$ and $B$ be regular (possibly infinite). In this case, instead of maintaining the suffix remaining to be matched in "top" and "bottom", instead we maintain the states of the respective DFA we are in, after processing the prefix of a possible matched word. If we hit a final state in either "top" or "bottom", we can nondeterministically choose to go back to the initial state for a new guessed word.
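For intuition only — the actual decision procedure is the CFL-emptiness argument above, and a naive search proves nothing when it finds no match — a bounded brute-force search for equal-length index sequences might look like this (the word lists are invented examples):

```python
from itertools import product

def equal_length_match(A, B, max_k=4):
    # Search all pairs of index sequences of equal length K <= max_k
    # whose concatenations agree. Returns the first pair found, else None.
    for k in range(1, max_k + 1):
        for I in product(range(len(A)), repeat=k):
            w = ''.join(A[i] for i in I)
            for J in product(range(len(B)), repeat=k):
                if ''.join(B[j] for j in J) == w:
                    return I, J
    return None

# "ab" + "b"  ==  "a" + "bb"  ==  "abb", with K = 2 on both sides
print(equal_length_match(["ab", "b"], ["a", "bb"]))  # → ((0, 1), (0, 1))
```

The PDA/grammar construction also yields an effective upper bound on $K$, which would turn this sketch into a genuine decision procedure.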
{ "domain": "cstheory.stackexchange", "id": 578, "tags": "computability, fl.formal-languages" }
How much energy does a neuron use for information processing as opposed to just surviving?
Question: That is, as part of a neural network in a brain. Answer: http://quantum-mind.co.uk/anaesthetics-and-brain-energy/ About 80% of the brain’s energy consumption is devoted to neuronal firing with only 20% involved in maintenance activity. The energy for firing is mainly supplied by glia and astrocytes, the maintenance of the neuron itself is done mainly by the cellular machinery in the neuron.
{ "domain": "biology.stackexchange", "id": 5176, "tags": "neuroscience, neurophysiology" }
How to update weights in a neural network using gradient descent with mini-batches?
Question: [I've cross-posted it to cross.validated because I'm not sure where it fits best] How does gradient descent work for training a neural network if I choose mini-batch (i.e., sample a subset of the training set)? I have thought of three different possibilities: Epoch starts. We sample and feedforward one minibatch only, get the error and backprop it, i.e. update the weights. Epoch over. Epoch starts. We sample and feedforward a minibatch, get the error and backprop it, i.e. update the weights. We repeat this until we have sampled the full data set. Epoch over. Epoch starts. We sample and feedforward a minibatch, get the error and store it. We repeat this until we have sampled the full data set. We somehow average the errors and backprop them by updating the weights. Epoch over. Answer: Let us say that the output of one neural network given its parameters is $$f(x;w)$$ Let us define the loss function as the squared L2 loss (in this case). $$L(X,y;w) = \frac{1}{2n}\sum_{i=1}^{n}[f(X_i;w)-y_i]^2$$ In this case the batch size will be denoted as $n$. Essentially what this means is that we iterate over a finite subset of samples, with the size of the subset being equal to your batch size, and use the gradient normalized under this batch. We do this until we have exhausted every data point in the dataset. Then the epoch is over. The gradient in this case is: $$\frac{\partial L(X,y;w)}{\partial w} = \frac{1}{n}\sum_{i=1}^{n}[f(X_i;w)-y_i]\frac{\partial f(X_i;w)}{\partial w}$$ Using mini-batch gradient descent normalizes your gradient, so the updates are not as sporadic as if you had used stochastic gradient descent.
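The procedure the answer describes (the question's option 2: update after every mini-batch, repeat until the data set is exhausted) can be sketched in numpy on a synthetic linear-regression problem — the model, sizes, and learning rate here are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w + 0.01 * rng.normal(size=200)

w = np.zeros(3)                            # parameters to learn
lr, batch = 0.1, 20
for epoch in range(50):
    idx = rng.permutation(len(X))          # reshuffle each epoch
    for start in range(0, len(X), batch):
        b = idx[start:start + batch]       # one mini-batch of indices
        # gradient of the squared loss, normalized by the batch size
        grad = X[b].T @ (X[b] @ w - y[b]) / len(b)
        w -= lr * grad                     # update after every mini-batch
```

After training, `w` should sit close to `true_w`; with a batch size of 1 this loop degenerates to stochastic gradient descent, and with `batch = len(X)` to full-batch gradient descent.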
{ "domain": "datascience.stackexchange", "id": 572, "tags": "machine-learning, neural-network, gradient-descent, backpropagation" }
Is the effect of boiling at reduced pressure the same as boiling at high pressure?
Question: I guess that boiling itself would be the same in both cases, but what about its effects? For example, we boil water to disinfect it. So, would water boiling at a low temperature at low pressure have the same effect? And what are the other possible differences? Answer: Good question. What actually matters, according to the CDC, is moist heat. High humidity and high temperatures together are very effective at sterilizing. At lower pressures, water boils at a lower temperature, and thus: If you want to sterilize the water and make it safe to drink, the CDC recommends boiling it for one minute at elevations under 6,500 feet and for three minutes at elevations over 6,500 feet. The fact that you have to boil it longer at higher altitudes is a good indication that the lower pressure makes sterilization via boiling less effective. Autoclaves, designed for sterilization of surgical equipment, operate hotter than boiling. 121C and 132C are common temperatures to sterilize faster and better. They typically use steam rather than water, for the obvious reason that water will have boiled already. From the best I can discern, there is nothing magical about the act of "boiling" for sterilization. It's the heat and the water that matter. However, boiling has a long history of being a very reliable temperature that can be achieved with great regularity without any scientific equipment. Once it starts bubbling, you can start your timer. Otherwise you would need to carefully monitor the temperatures during the process.
{ "domain": "physics.stackexchange", "id": 74108, "tags": "pressure, temperature" }
No completion of Cartesian Path using MoveIt
Question: Hello everyone, I have a question regarding planning with a UR10 arm and MoveIt. I try to compute a Cartesian Path around an object, and it frequently happens that the robot cannot successfully follow the path to 100%; e.g., it stops after 59%. Looking at the control panel for the robot, I noticed that it could not execute the path completely because a joint limit is reached (-2 pi/2 pi). It is a little bit strange to me because normally MoveIt should know that it will violate that limit, and of course just setting the start state to something around 0 degrees would do the job, but it starts for example sometimes with 250 degrees for that special joint. Could you help me? Thank you very much in advance! Hannes Originally posted by HannesIII on ROS Answers with karma: 27 on 2017-03-01 Post score: 0 Answer: The current cartesian path planning in MoveIt is quite limited. Your specific example confuses me though. To use the GetCartesianPath ROS service, you have to provide the start state. If MoveIt were allowed to decide on a start state, this would most likely end up in a non-cartesian trajectory to reach the start state first of all. If you need further capabilities, have a look at the Descartes project. We are currently discussing integrating this into standard MoveIt. https://github.com/ros-planning/moveit/issues/467 Originally posted by v4hn with karma: 2950 on 2017-03-08 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by HannesIII on 2017-04-11: Sorry for my late response, wasn't able to get in. The problem is that the robot can reach a state using different joint values. E.g. one joint value can be 20 deg or 200 deg (+180 deg). So my solution would be to tell the robot to use a value between [-180,180] for a pose. How can I do that?
{ "domain": "robotics.stackexchange", "id": 27169, "tags": "ros, joint, moveit, ur10, limits" }
Get gmapping robot pose estimation
Question: I'm using gmapping with laser and odometry to perform SLAM. How can I visualize in rviz the estimated robot path? And how can I get the x and y values for each pose? Thanks Originally posted by Antonio on ROS Answers with karma: 53 on 2012-09-26 Post score: 1 Answer: Gmapping does not publish the pose of the robot as a pose per se (within a topic), but as a transform from 'map' to 'odom'. In order to get the pose of the robot in the map that is built, you have to get the current odometry of the robot (read your odometry topic) and then ask for the transform of that odometry (that is in the 'odom' frame) to the 'map' frame. The result will be a pose in the 'map' frame (the coordinates of the robot in the map). Then, in order to visualize the robot path through the map being built, you have to create a node that asks periodically for that transform, stores it somewhere, and publishes all the stored points as a path in a given topic (you decide the name). The key point here is to understand the difference between a 'tf transform' and a 'topic'. Gmapping publishes a topic named '/map' which contains the current map, but it also publishes the tf transform 'map' (to 'odom'). Originally posted by Ricardo Tellez with karma: 114 on 2012-09-26 This answer was ACCEPTED on the original site Post score: 5 Original comments Comment by zkytony on 2017-01-22: shouldn't the actual pose of the robot be from map to base_link? Comment by zkytony on 2017-01-26: There is another potential problem. Getting the path in the way you described may suffer from loop closure or other adjustment of the map, which makes previously obtained poses invalid. Do you know a way to get the gmapping's current estimate of the trajectory? Comment by roberto3 on 2018-02-20: Sorry but I do not understand so much, do you know how I can get a pose with covariance stamped from gmapping??? Comment by Zuhair95 on 2022-08-17: What about using the server of hector trajectory?
{ "domain": "robotics.stackexchange", "id": 11143, "tags": "navigation, rviz, gmapping" }
Intercolumn statistics between columns in a dataframe
Question: I have a df and need to count how many adjacent columns have the same sign as other columns based on the sign of the first column, and multiply by the sign of the first column. What I need to speed up is the calc_df function, which runs like this on my computer: %timeit calc_df(df) 6.38 s ± 170 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) The output of my code is: a_0 a_1 a_2 a_3 a_4 a_5 a_6 a_7 a_8 a_9 0 0.097627 0.430379 0.205527 0.089766 -0.152690 0.291788 -0.124826 0.783546 0.927326 -0.233117 1 0.583450 0.057790 0.136089 0.851193 -0.857928 -0.825741 -0.959563 0.665240 0.556314 0.740024 2 0.957237 0.598317 -0.077041 0.561058 -0.763451 0.279842 -0.713293 0.889338 0.043697 -0.170676 3 -0.470889 0.548467 -0.087699 0.136868 -0.962420 0.235271 0.224191 0.233868 0.887496 0.363641 4 -0.280984 -0.125936 0.395262 -0.879549 0.333533 0.341276 -0.579235 -0.742147 -0.369143 -0.272578 0 4.0 1 4.0 2 2.0 3 -1.0 4 -2.0 My code is as follows, where the generate_data function generates demo data, which is consistent with my actual data volume. import numpy as np import pandas as pd from numba import njit np.random.seed(0) pd.set_option('display.max_columns', None) pd.set_option('expand_frame_repr', False) # This function generates demo data. 
def generate_data() -> pd.DataFrame:
    col = [f'a_{x}' for x in range(10)]
    df = pd.DataFrame(data=np.random.uniform(-1, 1, [280000, 10]), columns=col)
    return df

@njit
def calc_numba(s: np.array) -> float:
    a = s[0]
    b = 1
    for sign in s[1:]:
        if sign == a:
            b += 1
        else:
            break
    b *= a
    return b

def calc_series(s: pd.Series) -> float:
    return calc_numba(s.to_numpy())

def calc_df(df: pd.DataFrame) -> pd.DataFrame:
    df1 = np.sign(df)
    df['count'] = df1.apply(calc_series, axis=1)
    return df

def main() -> None:
    df = generate_data()
    print(df.head(5))
    df = calc_df(df)
    print(df['count'].head(5))
    return

if __name__ == '__main__':
    main()

Answer: Just avoiding switching between numpy array and pandas dataframe so much can grant you an easy x4 speedup:

def calc_df(df: pd.DataFrame) -> pd.DataFrame:
    df1 = np.sign(df)
    df['count'] = df1.apply(calc_series, axis=1)
    return df

%%timeit
df = generate_data()
calc_df(df)

1.6 s ± 75.4 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

With less casting from pandas to numpy:

def optimized_calc_df(df: pd.DataFrame) -> pd.DataFrame:
    array = np.sign(df.to_numpy())
    df['count'] = np.apply_along_axis(calc_numba, 1, array)
    return df

%%timeit
odf = generate_data()
optimized_calc_df(odf)

415 ms ± 16.5 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

Using your solution and with numpy:

import numpy as np
import pandas as pd
from numba import njit

np.random.seed(0)
pd.set_option('display.max_columns', None)
pd.set_option('expand_frame_repr', False)

# This function generates demo data.
def generate_data() -> pd.DataFrame:
    col = [f'a_{x}' for x in range(10)]
    np.random.seed(0)
    df = pd.DataFrame(data=np.random.uniform(-1, 1, [280000, 10]), columns=col)
    return df

# This function generates demo data.
def generate_data_array() -> np.array:
    np.random.seed(0)
    return np.random.uniform(-1, 1, [280000, 10])

%%timeit
df = generate_data()
df1 = np.sign(df)
m = df1.eq(df1.iloc[:, 0], axis=0).cummin(1)
out_df = m.sum(1) * df1.iloc[:, 0]

76.6 ms ± 2.6 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)

%%timeit
array = generate_data_array()
array2 = np.sign(array)
array3 = np.minimum.accumulate(np.equal(array2, np.expand_dims(array2[:, 0], axis=1)), 1)
out_array = array3.sum(axis=1) * array2[:, 0]

44.3 ms ± 1.35 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
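The fastest variant relies on a run-length trick: compare every column against the first one, then take a cumulative minimum along each row, which switches everything off after the first sign change. A minimal, self-contained sketch of that idea (the function name leading_run_value is mine, not from the post):

```python
import numpy as np

def leading_run_value(signs: np.ndarray) -> np.ndarray:
    # Length of the leading run of the first column's sign, times that sign.
    match = np.equal(signs, signs[:, :1])        # True while equal to the first entry
    run = np.minimum.accumulate(match, axis=1)   # stays False after the first mismatch
    return run.sum(axis=1) * signs[:, 0]

signs = np.sign(np.array([[ 0.5,  0.2, -0.1,  0.3],
                          [-0.4, -0.9, -0.2,  0.8]]))
print(leading_run_value(signs))  # [ 2. -3.]
```

Row one has two leading positives (result +2), row two has three leading negatives (result -3), matching what calc_numba computes element by element.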
{ "domain": "codereview.stackexchange", "id": 43784, "tags": "python, performance, numpy, pandas, numba" }
What are the boundary conditions simply?
Question: I don't actually know what boundary conditions are for incompressible fluids (I don't really understand what they are), so could you give me a simple explanation, for incompressible fluids only? Answer: There are boundary conditions for different fluid flow situations, and having an incompressible fluid just makes the situation simpler. Note that the flow must be laminar and not turbulent to have an analytical solution. For example, a simple task would be to calculate the flow profile of a fluid in between two infinite plates, when one of them (let's say the top plate) is moving with velocity V. Looks like this: (figure: planar Couette flow between two plates; source: reading.ac.uk) To answer your question: the boundary conditions in this example case are that the speed of the fluid right by the lower plate is zero, and that the speed of the fluid right by the top plate is the given V. These come from a common fluid-dynamics rule called the no-slip condition, which you should note you have assumed when stating these kinds of boundary conditions. No-slip basically means that the fluid particle that is in direct contact with a solid (i.e. the molecule that touches the solid) is held by that solid by friction. There are of course other kinds of boundary conditions than the stated example, but my guess is that this is the most common one in fluid dynamics. This is a very simple exercise and usually given at the beginning of some fluid-dynamics related courses; judging from your question you might be in a similar situation, perhaps not with this exact case. Without realizing/deducing what these boundary conditions are, the task would be quite hard.
Edit: For further reading a quick google search lead me to this course material that goes first through simple laminar flows with analytical answers and then goes beyond: http://ocw.mit.edu/courses/earth-atmospheric-and-planetary-sciences/12-090-introduction-to-fluid-motions-sediment-transport-and-current-generated-sedimentary-structures-fall-2006/course-textbook/ch4.pdf Edit2: If you're still puzzled then here is an example of how to solve the question I stated above: https://ceprofs.civil.tamu.edu/ssocolofsky/ocen678/Downloads/Lectures/Couette.PDF. To explain a little of the steps taken in this derivation (steps you can also follow in any of the problems with different conditions and geometries) once you understand the handwriting: Stating the deductions and boundary conditions of the situation. Using these insights to eliminate terms and simplify the Navier-Stokes equations. Solving whatever quantities and profiles from the simplified Navier-Stokes.
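For the plate problem above, the no-slip boundary conditions pin the solution down completely: for steady planar Couette flow with no pressure gradient, the Navier-Stokes equations reduce to d²u/dy² = 0, so the velocity profile is linear between the two boundary values. A quick sketch of that solution (variable names are mine):

```python
def couette_velocity(y: float, h: float, V: float) -> float:
    # General solution of d2u/dy2 = 0 is u = A*y + B.
    # Boundary conditions (no-slip): u(0) = 0  -> B = 0
    #                                u(h) = V  -> A = V / h
    return V * y / h

h, V = 0.01, 2.0  # plate gap [m], top-plate speed [m/s] (illustrative numbers)
profile = [couette_velocity(y, h, V) for y in (0.0, h / 2, h)]
print(profile)  # [0.0, 1.0, 2.0]
```

The endpoints of the profile are exactly the two boundary conditions; everything in between follows from the simplified equation.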
{ "domain": "physics.stackexchange", "id": 33251, "tags": "fluid-dynamics, terminology, boundary-conditions, flow" }
Channel estimation
Question: In single-carrier systems, at the receiver, when do we need to do channel estimation and equalization? Do we need to do them before or after timing synchronization and CFO correction? And please let me know how the system behaves if equalization is done before synchronization versus after it. Answer: Usually you do center frequency correction followed by some sort of coarse timing synchronization and then equalization. The equalizer will provide mitigation against ISI and additional timing synchronization.
{ "domain": "dsp.stackexchange", "id": 7268, "tags": "signal-analysis, digital-communications, estimation, fading-channel" }
Is the Hydraulic analogy of a resistor wrong?
Question: According to Wikipedia's Hydraulic analogy page, a resistor is analogous to a constricted pipe such as the one in the photo below. Bernoulli's principle tells us that the pressure would be identical in both of the wide sections of the pipe, and that in the narrow section of the pipe the pressure would be lower; it also tells us that the velocity inside the narrow section would be faster than the velocity in the wide parts. But when a resistor is placed in a circuit, the rate of electron flow gets lower and stays that way throughout the circuit, and this causes a voltage drop across the resistor. So when exiting the resistor, the pressure does not return to the entry pressure like it does in the pipe analogy. So, in what way is this hydraulic analogy of a resistor actually analogous to a resistor? Answer: You're probably aware of this; but just to cover all the bases, in the hydraulic analogy, pressure represents voltage, flow rate represents current, and as you mentioned, a pipe restriction represents a resistor. For starters, there's a comment in your question that should be addressed: "Bernoulli's principle tells us that the pressure would be identical in both of the wide sections of the pipe, and that in the narrow section of the pipe the pressure would be lower." This is true for an ideal flow; but for a real flow, when the effects of viscosity are considered, the flow restriction will actually create a pressure drop, so the pressure at the end will be slightly reduced compared to before the restriction. The more restrictive, the greater the pressure drop. This is analogous to the voltage drop across a resistor. Due to the pressure drop across the restriction, doing this with pipes will decrease the total flow rate through the pipe, the same way a resistor reduces the total current in a line. Because there is a pressure drop in the pipe, that means you either require a greater pressure to get the same outlet flow rate, or, applying the same pressure, the flow at the outlet is less.
This is analogous to what happens with resistors in circuits. The confusion comes from not accounting for the pressure drop across the restriction, and looking too closely at the localized velocity in the restriction, compared to the actual flow rate.
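The viscous pressure drop can be made quantitative. For laminar flow in a circular pipe, the Hagen-Poiseuille law gives ΔP = (8 μ L / π r⁴) Q, i.e. a "hydraulic Ohm's law" ΔP = R_h · Q with hydraulic resistance R_h = 8 μ L / (π r⁴). A sketch showing how strongly a constriction raises that resistance (the numbers are illustrative, not from the answer):

```python
from math import pi

def hydraulic_resistance(mu: float, length: float, radius: float) -> float:
    # Hagen-Poiseuille: delta_P = R_h * Q, analogous to V = R * I
    return 8 * mu * length / (pi * radius ** 4)

mu = 1.0e-3  # viscosity of water [Pa s]
R_wide = hydraulic_resistance(mu, length=0.1, radius=0.01)
R_narrow = hydraulic_resistance(mu, length=0.1, radius=0.005)
print(R_narrow / R_wide)  # ~16: halving the radius raises resistance 2**4 times
```

The r⁴ dependence is why even a modest narrowing of the pipe behaves like a large resistor in the analogy.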
{ "domain": "physics.stackexchange", "id": 53281, "tags": "fluid-dynamics, electricity, electrical-resistance" }
Why do we squint when tasting very sour things?
Question: Sometimes while tasting a very sour thing (like tamarind, lemon, etc.) our eyes squint immediately and involuntarily for a second, but a little later become normal again. Why, and how, does this reflexive movement occur? Answer: Maybe it serves to show others that we may be consuming something poisonous. We cry when we are sad to alert others of our distress. There may be better ways to show something is poisonous, but a child doesn't have any real way to alert whoever is feeding it that it may be poisonous. If I were feeding my baby something and they had that reaction, I would not give that to them again.
{ "domain": "biology.stackexchange", "id": 3640, "tags": "homework, sensation, reflexes" }
Question about epinephrine
Question: In my class we were told that adrenaline (or epinephrine) causes vasoconstriction. My question: I had always thought that people took this via an EpiPen when they were having an allergic reaction. So I thought it would make sense that it would dilate vessels, because this would allow more air through. Or does the constriction help because it tightens the vessels in the throat, allowing a reduction of swelling? Answer: An allergic reaction requiring an EpiPen can be caused by swelling in the throat and/or bronchoconstriction (spasms decreasing the diameter of bronchioles). Bronchoconstriction can occur in the absence of soft tissue swelling in the lips, tongue and throat, and vice versa. Epinephrine reduces pharyngeal edema (swelling) because it is a potent vasoconstrictor in small arterioles and precapillary sphincters in most body organ systems. This is effected through alpha-1 adrenergic receptor activation. While the blood pressure and heart rate increase (due to beta-1 adrenergic effects), the permeability at the capillary level is decreased, relieving swelling. At the same time, epinephrine relaxes bronchial muscle. It has a powerful bronchodilator action when bronchial muscle is contracted because it is a powerful beta2 adrenergic agonist. Finally, epinephrine inhibits antigen-induced release of inflammatory mediators from mast cells, and to a lesser extent reduces bronchial secretions and congestion within the mucosa. Inhibition of mast cell secretion is mediated by beta2 adrenergic receptors. Adapted from Simons, Kemp et al. World Allergy Organization Journal 2008 1(Suppl 2):S18-S26 doi:10.1186/1939-4551-1-S2-S18 Epinephrine: The Drug of Choice for Anaphylaxis--A Statement of the World Allergy Organization
{ "domain": "biology.stackexchange", "id": 3755, "tags": "physiology, endocrinology" }
LRU Cache in ECMAScript
Question: I wrote this for a CodeWars challenge while trying to learn ECMAScript and would really like to have some advice on how it could be improved. What I don't like about this code myself, but am unsure on how to improve: I didn't yet found a way to have a class in ECMAScript that mixes public and private methods in a way, where the public methods (and only them) can still access private methods and vars. (Partly caused by the point above) I think the code is way too verbose and bloated. Too much of it is wasted on language specific implementation details. There must be a smarter way to do this - I feel like I could write the same functionality in PHP using 10% of the lines I needed now. function LRUCache(capacity, init) { this._capacity = capacity; this._size = 0; this._accessMap = []; this._items = []; this._frozenMethods = ['cache', 'delete']; this._functionalProps = ['size', 'capacity']; this.cache = function(key, value) { if( this._accessMap.indexOf(key) == -1 ) { this._addNewItem(key); } this._items[key] = this[key] = value; this._updateAccessMap(key); return this; }, this.delete = function(key) { if(this._frozenMethods.indexOf(key) !== -1 || this._functionalProps.indexOf(key) !== -1) { return false; } if( this._accessMap.indexOf(key) === -1 ) { return true; } delete this[key]; delete this._items[key]; this._size--; this._updateAccessMap(key, true); return true; }, this._removeOldest = function() { var key = this._accessMap[0]; this.delete(key); }, this._updateAccessMap = function(key, remove) { index = this._accessMap.indexOf(key); if( index !== -1) { this._accessMap.splice(index, 1); } if(!remove) { this._accessMap.push(key); } }, this._addNewItem = function(key) { this._size++; if(this._size > this._capacity) { this._removeOldest(); } Object.defineProperty(this, key, { get: function() { this._updateAccessMap(key); return this._items[key]; }, set: function(v) { this._items[key] = v; this._updateAccessMap(key); return this; }, enumerable: true, 
configurable: true }); }; var obj = this; this._frozenMethods.map( function(v) { Object.defineProperty(obj, v, { enumerable: false, writable: false, configurable: false }); }); ['_items', '_capacity', '_accessMap', '_frozenMethods','_size', '_functionalProps', '_removeOldest','_updateAccessMap','_addNewItem'].map( function(v) { Object.defineProperty(obj, v, { enumerable: false, configurable: true }); }); Object.defineProperty(this, 'capacity', { set: function(v) { while(this.size > v) { this._removeOldest(); } this._capacity = v; }, get: function() { return this._capacity; }, configurable: false }); Object.defineProperty(this, 'size', { set: function(v) { return false; }, get: function() { return this._size; }, configurable: false }); if(init) { for(var attr in init) { if( !init.hasOwnProperty(attr) ) { continue; } this.cache(attr, init[attr]); } } } Answer: I didn't yet found a way to have a class in ECMAScript that mixes public and private methods in a way, where the public methods (and only them) can still access private methods and vars. JavaScript ain't classical OOP. Don't force it to be like one. The default inheritance pattern in JS is prototypal, and everything is public and methods are shared across instances. You could emulate private by defining your data not as properties but as vars in the constructor. However, this requires that your methods be defined in the constructor to have access to them. This means that your methods are created on a per-instance basis, rather than shared via the prototype. Anyways, off to a review of your code: this._items = []; I'm pretty sure you wanted a hash {} instead of an array. this._items[key] = this[key] = value; I'm not sure why you want to add the key-value pair as a property of the cache object. I'm pretty sure this is why you have to guard against overriding your methods. I suggest you don't. Make your object just an interface. Don't put your data on it. Just put it in _items. 
this._frozenMethods.map( function(v) { Object.defineProperty(obj, v, { enumerable: false, writable: false, configurable: false }); }); ['_items', '_capacity', '_accessMap', '_frozenMethods','_size', '_functionalProps', '_removeOldest','_updateAccessMap','_addNewItem'].map( function(v) { Object.defineProperty(obj, v, { enumerable: false, configurable: true }); }); map is a special iterator method. It runs on each array item and expects a return value for each item to form the new array it creates. If you just end up using it to loop through, then forEach is a better choice. Less confusing. Additionally, naming variables properly is a must. Explicit is better than implicit, and naming your variables explicitly is better for readability than having to guess what they're for. It takes 5 seconds for a developer to know the purpose of a variable if named properly, rather than 5 minutes because the developer has to read through the implementation. this.delete = function(key) { IIRC, IE8 and some other browsers have issues when using some keywords as property/function names. Best use alternative names like remove. Anyways, here's a simplified implementation.
Should be straightforward:

function LruCache(capacity){
    this._capacity = capacity || 1;
    this._accessMap = [];
    this._items = {};
}

LruCache.prototype.addItem = function(key, value){
    if(this._accessMap.length >= this._capacity)
        this.removeOldest();
    this._accessMap.push(key);
    this._items[key] = value;
};

LruCache.prototype.remove = function(key){
    this._accessMap.splice(this._accessMap.indexOf(key), 1);
    delete this._items[key];
};

LruCache.prototype.removeOldest = function(){
    var oldestKey = this._accessMap.shift();
    delete this._items[oldestKey];
};

LruCache.prototype.getCapacity = function(){
    return this._capacity;
};

LruCache.prototype.getSize = function(){
    return this._accessMap.length;
};

LruCache.prototype.getItems = function(){
    var instance = this;
    return Object.keys(this._items).reduce(function(carry, key){
        carry[key] = instance._items[key];
        return carry;
    }, {});
};

If you really want privates by way of closures and throw away the entire concept of shared methods, you can do this:

function LruCache(capacity){
    var _capacity = capacity || 1;
    var _accessMap = [];
    var _items = {};

    function removeOldest(){
        var oldestKey = _accessMap.shift();
        delete _items[oldestKey];
    }

    this.addItem = function(key, value){
        if(_accessMap.length >= _capacity)
            removeOldest();
        _accessMap.push(key);
        _items[key] = value;
    };

    this.remove = function(key){
        _accessMap.splice(_accessMap.indexOf(key), 1);
        delete _items[key];
    };

    this.getCapacity = function(){
        return _capacity;
    };

    this.getSize = function(){
        return _accessMap.length;
    };

    this.getItems = function(){
        return Object.keys(_items).reduce(function(carry, key){
            carry[key] = _items[key];
            return carry;
        }, {});
    };
}
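In modern JavaScript (ES2015+) the separate access-order array can be dropped entirely, because Map iterates keys in insertion order; re-inserting a key on every access makes the first key in iteration order the least recently used one. A sketch of that approach (my addition, not part of the original answer):

```javascript
// Compact LRU cache built on Map's insertion-order guarantee.
class LruCache {
  constructor(capacity) {
    this.capacity = capacity;
    this.map = new Map();
  }

  get(key) {
    if (!this.map.has(key)) return undefined;
    const value = this.map.get(key);
    this.map.delete(key);      // re-insert so this key becomes most recently used
    this.map.set(key, value);
    return value;
  }

  set(key, value) {
    if (this.map.has(key)) {
      this.map.delete(key);
    } else if (this.map.size >= this.capacity) {
      // Evict least recently used: the first key in insertion order.
      this.map.delete(this.map.keys().next().value);
    }
    this.map.set(key, value);
  }
}
```

This keeps both get and set at O(1) amortized, whereas the array-based bookkeeping above pays O(n) for indexOf/splice on every update.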
{ "domain": "codereview.stackexchange", "id": 15580, "tags": "javascript, object-oriented, cache" }
Solving the Schrödinger equation - dilemma with separation of variables procedure
Question: I haven't formally learned the separation of variables technique to solve partial differential equations, but my lecturer knew this when he wrote his notes, so hopefully a recourse into this is not needed. However, that is hardly the confusion I have. My confusion concerns the following snippet from my notes: We isolate the $x$ & $t$ dependence by dividing both sides of the equation by our trial solution, $$ \frac{1}{\psi_n(x)}\left\{ -\frac{\hbar^2}{2m} \frac{d^2\psi_n}{dx^2} + V(x) \psi_n(x) \right\} = \frac{1}{\theta_n(t)}\left\{ i\hbar \frac{\partial \theta_n}{\partial t} \right\} $$ The next step is crucial, so be sure that you follow the argument completely. It's not that it's so very complicated; it's actually rather obvious once you perceive its truth, but it can be subtle if you are seeing it for the first time. Read through it again if you do not understand, and be sure to ask your tutor (or myself) if you still cannot follow. The left-hand side of this equation depends only on the variable $x$, and the right-hand side is a function only of the variable $t$, and these two variables are independent of one another. In other words, $x$ is not a function of time, because $x$ refers to a particular location in space relative to the origin, and that doesn't change with time if the coordinate axes are fixed. You may be used to seeing $x(t)$ used to represent the location of a particle as a function of time, but it is not being used that way here. That is something strictly for classical physics, because quantum particles do not follow localized trajectories through space. Note also that $t\neq t(x)$; the coordinate time does not depend on position. Here's the key conclusion: because the left- and right- hand sides are functions of two different, independent variables, the only way the equality can hold for all values of $x$ & $t$ is if they are each separately equal to the same constant. 
Imagine fixing our attention on one location in space, so that $x$ is held constant. The left-hand side cannot change with time, because it is only a function of position, and so the right-hand side also cannot change with time, otherwise the equality would not hold. The exact same argument can be made by holding $t$ fixed and allowing $x$ to vary. We must therefore have: $$ -\frac{\hbar^2}{2m} \frac{d^2\psi_n}{dx^2} + V(x) \psi_n(x) = E_n\psi_n(x) \qquad\qquad i\hbar \frac{\partial \theta_n}{\partial t} = E_n\theta_n(t) $$ Here's what I'm getting from this: We have an equality, despite having two functions of two distinct independent variables. For this to be true: The two variables have to be expressed in the same function, i.e $\sin(x) = \sin(t)$ The LHS and RHS have to map $x$ and $t$ separately to the same constant for all $x$ and $t$. I think (2) is what my lecturer is saying hopefully, but in that case, I would expect: $$-\frac{\hbar}{2m} \frac{d^2 \Psi_n}{dx^2} + V(x) \Psi_n = E_n = i\hbar \frac{d\theta_n}{dt}$$ As opposed to what my lecturer wrote: $$-\frac{\hbar}{2m} \frac{d^2 \Psi_n}{dx^2} + V(x) \Psi_n = E_n \psi_n(x)$$ $$i\hbar \frac{d\theta_n}{dt} = E_n \theta_n(t)$$ What am I missing here? I suspect I'm misinterpreting what's going on here. Answer: First of all, the option 1) you're mentioning isn't valid, $sin(x)\neq sin(t)$ in general. Consider for example the case with $x=0$ and $t=1$ to see why. That said, what your lecturer wrote is right. 
You have $$ f(x) = \frac{1}{\psi(x)}\left(-\frac{\hbar^2}{2m} \frac{d^2 \psi}{dx^2}(x) + V(x) \psi(x)\right) \qquad\text{and}\qquad g(t)= \frac{1}{\theta(t)} \left(i\hbar\frac{d\theta}{dt}(t)\right) $$ and the first equation reads $$f(x)=g(t).$$ So it has to be $f(x)=E=g(t)$, that is $$ \frac{1}{\psi(x)}\left(-\frac{\hbar^2}{2m} \frac{d^2 \psi}{dx^2}(x) + V(x) \psi(x)\right) = E $$ or, multiplying both sides by $\psi(x)$, $$ -\frac{\hbar^2}{2m} \frac{d^2 \psi}{dx^2}(x) + V(x) \psi(x) = E \psi(x) $$ and similar for $g(t)=E$. You can see also that your reasoning is wrong by inserting your equations $$\tag{1}-\frac{\hbar}{2m} \frac{d^2 \Psi}{dx^2} + V(x) \Psi = E \qquad \text{and}\qquad i\hbar \frac{d\theta}{dt}=E$$ in the first one. You get $$ \frac{1}{\psi(x)} E = \frac{1}{\theta(t)} E \implies \psi(x) = \theta(t) $$ which by the same reasoning as before implies $\psi(x) = \theta(t) = \text{constant}$. If you insert this in (1) you can verify that it means $\psi(x)=\theta(t)=0$, which can't be since they appear in the denominator of the original equation.
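As a consistency check, each of the separated equations the lecturer wrote is solvable on its own; the time equation in particular gives the familiar phase factor:

$$ i\hbar \frac{d\theta_n}{dt} = E_n \theta_n(t) \quad\Longrightarrow\quad \theta_n(t) = \theta_n(0)\, e^{-iE_n t/\hbar}, $$

so the full separable solution is $\Psi_n(x,t) = \psi_n(x)\, e^{-iE_n t/\hbar}$ (absorbing the constant $\theta_n(0)$ into $\psi_n$), with $\psi_n$ fixed by the time-independent equation.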
{ "domain": "physics.stackexchange", "id": 52050, "tags": "quantum-mechanics, schroedinger-equation" }
SIFT - why s+3 scales per octave?
Question: I have a problem with SIFT that I do not understand. Lowe [1] proposed in his work that s=3 levels of scale are enough for one octave. Afterwards, he mentioned that you need to compute s+3 levels. Why are 3, and not 2, additional levels required? I understand that you require one additional level above and one additional level below the scales, since you search for extrema across neighbouring scales. What is the third additional level of scale for? Thank you very much in advance! [1] Distinctive Image Features from Scale-Invariant Keypoints, D. G. Lowe, Int. Journal of Computer Vision 60(2) (2004), pp. 91-110 Answer: We must produce s + 3 images in the stack of blurred images for each octave, so that the final extrema detection covers a complete octave. For $s=3$ this means you will have $s + 3 = 6$ blurred images (the Gaussian images shown in the paper in Figure 1 on the left). Having $6$ Gaussian images will result in $5$ DoG images (shown in Figure 1 on the right), since each DoG is the difference of two adjacent Gaussians. Extrema detection needs one DoG image above and one below each candidate scale, so $s + 2 = 5$ DoG images allow the extrema detection on exactly $s = 3$ scales (using the method shown in Figure 2); that accounting is where the third extra level comes from.
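The counting in the answer can be written out explicitly; for any s the chain is s + 3 Gaussians → s + 2 DoGs → s scales usable for 3×3×3 extrema detection:

```python
def octave_image_counts(s: int) -> tuple:
    gaussians = s + 3            # blurred images per octave
    dogs = gaussians - 1         # each DoG is a difference of adjacent Gaussians
    extrema_scales = dogs - 2    # need one DoG above and one below each scale
    return gaussians, dogs, extrema_scales

print(octave_image_counts(3))  # (6, 5, 3)
```

So losing one level in the difference step and two more at the borders of the extrema search is exactly why three extra Gaussians, not two, are needed.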
{ "domain": "dsp.stackexchange", "id": 1090, "tags": "sift, scale-space" }
Newspaper Bill Calculator CLI with Python (1 of 3, Core)
Question: Code is posted after explanation. Due to the size of the project, this is being posted in three separate posts. This also ensures each post is more focused.

Post 2 of 3, CLI: Newspaper Bill Calculator CLI with Python (2 of 3, CLI)
Post 3 of 3, Database: Newspaper Bill Calculator CLI with Python (3 of 3, Database)

What is this?
This application helps you calculate monthly newspaper bills. The goal is to generate a message that I can paste into WhatsApp and send to my newspaper vendor. The end result here is a CLI tool that will be later used as a back-end to build GUIs (hence learn about: C#, HTML/CSS/JS, Flutter). In its current form, everything will be "compiled" by PyInstaller into one-file stand-alone executables for the end-user using GitHub Actions. The other important goal was to be a testbed for learning a bunch of new tools: more Python libraries, SQL connectors, GitHub Actions (CI/CD, if I understand correctly), unit tests, CLI libraries, type-hinting, regex. I had earlier built this on a different platform, so I now have a solid idea of how this application is used.

Key concepts
- Each newspaper has a certain cost per day of the week
- Each newspaper may or may not be delivered on a given day
- Each newspaper has a name, and a number called a key
- You may register any dates when you didn't receive a paper in advance using the addudl command
- Once you calculate, the results are displayed and copied to your clipboard

What files exist? (ignoring conventional ones like README and requirements.txt)
- npbc_core.py: Provide the core functionality: the calculation, parsing and validation of user input, interaction with the DB etc. Later on, some functionality from this will be extracted to create server-side code that can service more users, but I have to learn a lot more before getting there. Please review this.
- npbc_cli.py: Import functionality from npbc_core.py and wrap a CLI layer on it using argparse. Also provide some additional validation. Please review this.
- npbc_updater.py: Provide a utility to update the application on the user's end. Don't bother reviewing this (code not included).
- test_core.py: Test the functionality of the core file (pytest). This isn't as exhaustive as I'd like, but it did a good job of capturing many of my mistakes. Please review this.
- data/schema.sql: Database schema. In my local environment, the data folder also has a test database file (but I don't want to upload this online). Please review this if you can (not high priority).

Known problems
- Tests are not exhaustive (please suggest anything you think of).
- Tests are not well commented (working on this right now in a local branch).
- SQL injection is possible in some cases by -k/--key CLI parameters, if you can figure out a way to insert a semicolon in an integer. I will remove this in a future version, once I find a way to improve or remove the generate_sql_query() function.
- A lot of documentation is tied up in the CLI UI and comments, and is not an explicit document.
npbc_core.py from sqlite3 import connect from calendar import day_name as weekday_names_iterable from calendar import monthrange, monthcalendar from datetime import date as date_type, datetime, timedelta from pathlib import Path from re import compile as compile_regex ## paths for the folder containing schema and database files # during normal use, the DB will be in ~/.npbc (where ~ is the user's home directory) and the schema will be bundled with the executable # during development, the DB and schema will both be in "data" DATABASE_DIR = Path().home() / '.npbc' # normal use path # DATABASE_DIR = Path('data') # development path DATABASE_PATH = DATABASE_DIR / 'npbc.db' SCHEMA_PATH = Path(__file__).parent / 'schema.sql' # normal use path # SCHEMA_PATH = DATABASE_DIR / 'schema.sql' # development path ## list constant for names of weekdays WEEKDAY_NAMES = list(weekday_names_iterable) ## regex for validating user input VALIDATE_REGEX = { # match for a list of comma separated values. each value must be/contain digits, or letters, or hyphens. spaces are allowed between values and commas. any number of values are allowed, but at least one must be present. 'CSVs': compile_regex(r'^[-\w]+( *, *[-\w]+)*( *,)?$'), # match for a single number. must be one or two digits 'number': compile_regex(r'^[\d]{1,2}?$'), # match for a range of numbers. each number must be one or two digits. numbers are separated by a hyphen. spaces are allowed between numbers and the hyphen. 'range': compile_regex(r'^\d{1,2} *- *\d{1,2}$'), # match for weekday name. day must appear as "daynames" (example: "mondays"). all lowercase. 'days': compile_regex(f"^{'|'.join([day_name.lower() + 's' for day_name in WEEKDAY_NAMES])}$"), # match for nth weekday name. day must appear as "n-dayname" (example: "1-monday"). all lowercase. must be one digit. 'n-day': compile_regex(f"^\\d *- *({'|'.join([day_name.lower() for day_name in WEEKDAY_NAMES])})$"), # match for real values, delimited by semicolons. 
each value must be either an integer or a float with a decimal point. spaces are allowed between values and semicolons, and up to 7 (but at least 1) values are allowed. 'costs': compile_regex(r'^\d+(\.\d+)?( *; *\d+(\.\d+)?){0,6} *;?$'), # match for seven values, each of which must be a 'Y' or an 'N'. there are no delimiters. 'delivery': compile_regex(r'^[YN]{7}$') } ## regex for splitting strings SPLIT_REGEX = { # split on hyphens. spaces are allowed between hyphens and values. 'hyphen': compile_regex(r' *- *'), # split on semicolons. spaces are allowed between hyphens and values. 'semicolon': compile_regex(r' *; *'), # split on commas. spaces are allowed between commas and values. 'comma': compile_regex(r' *, *') } ## ensure DB exists and it's set up with the schema def setup_and_connect_DB() -> None: DATABASE_DIR.mkdir(parents=True, exist_ok=True) DATABASE_PATH.touch(exist_ok=True) with connect(DATABASE_PATH) as connection: connection.executescript(SCHEMA_PATH.read_text()) connection.commit() ## generate a "SELECT" SQL query # use params to specify columns to select, and "WHERE" conditions def generate_sql_query(table_name: str, conditions: dict[str, int | str] | None = None, columns: list[str] | None = None) -> str: sql_query = f"SELECT" if columns: sql_query += f" {', '.join(columns)}" else: sql_query += f" *" sql_query += f" FROM {table_name}" if conditions: conditions_segment = ' AND '.join([ f"{parameter_name} = {parameter_value}" for parameter_name, parameter_value in conditions.items() ]) sql_query += f" WHERE {conditions_segment}" return f"{sql_query};" ## execute a "SELECT" SQL query and return the results def query_database(query: str) -> list[tuple]: with connect(DATABASE_PATH) as connection: return connection.execute(query).fetchall() return [] ## generate a list of number of times each weekday occurs in a given month # the list will be in the same order as WEEKDAY_NAMES (so the first day should be Monday) def get_number_of_days_per_week(month: int, 
year: int) -> list[int]: main_calendar = monthcalendar(year, month) number_of_weeks = len(main_calendar) number_of_weekdays = [] for i, _ in enumerate(WEEKDAY_NAMES): number_of_weekday = number_of_weeks if main_calendar[0][i] == 0: number_of_weekday -= 1 if main_calendar[-1][i] == 0: number_of_weekday -= 1 number_of_weekdays.append(number_of_weekday) return number_of_weekdays ## validate a string that specifies when a given paper was not delivered # first check to see that it meets the comma-separated requirements # then check against each of the other acceptable patterns in the regex dictionary def validate_undelivered_string(string: str) -> bool: if VALIDATE_REGEX['CSVs'].match(string): for section in SPLIT_REGEX['comma'].split(string.rstrip(',')): section_validity = False for pattern, regex in VALIDATE_REGEX.items(): if (not section_validity) and (pattern not in ["CSVs", "costs", "delivery"]) and (regex.match(section)): section_validity = True if not section_validity: return False return True return False ## parse a string that specifies when a given paper was not delivered # each CSV section states some set of dates # this function will return a set of dates that uniquely identifies each date mentioned across all the CSVs def parse_undelivered_string(string: str, month: int, year: int) -> set[date_type]: dates = set() for section in SPLIT_REGEX['comma'].split(string.rstrip(',')): # if the date is simply a number, it's a single day. so we just identify that date if VALIDATE_REGEX['number'].match(section): date = int(section) if date > 0 and date <= monthrange(year, month)[1]: dates.add(date_type(year, month, date)) # if the date is a range of numbers, it's a range of days. 
we identify all the dates in that range, bounds inclusive elif VALIDATE_REGEX['range'].match(section): start, end = [int(date) for date in SPLIT_REGEX['hyphen'].split(section)] if (0 < start) and (start <= end) and (end <= monthrange(year, month)[1]): dates.update( date_type(year, month, day) for day in range(start, end + 1) ) # if the date is the plural of a weekday name, we identify all dates in that month which are the given weekday elif VALIDATE_REGEX['days'].match(section): weekday = WEEKDAY_NAMES.index(section.capitalize().rstrip('s')) dates.update( date_type(year, month, day) for day in range(1, monthrange(year, month)[1] + 1) if date_type(year, month, day).weekday() == weekday ) # if the date is a number and a weekday name (singular), we identify the date that is the nth occurrence of the given weekday in the month elif VALIDATE_REGEX['n-day'].match(section): n, weekday = SPLIT_REGEX['hyphen'].split(section) n = int(n) if n > 0 and n <= get_number_of_days_per_week(month, year)[WEEKDAY_NAMES.index(weekday.capitalize())]: weekday = WEEKDAY_NAMES.index(weekday.capitalize()) valid_dates = [ date_type(year, month, day) for day in range(1, monthrange(year, month)[1] + 1) if date_type(year, month, day).weekday() == weekday ] dates.add(valid_dates[n - 1]) # bug report :) else: print("Congratulations! 
You broke the program!") print("You managed to write a string that the program considers valid, but isn't actually.") print("Please report it to the developer.") print(f"\nThe string you wrote was: {string}") print("This data has not been counted.") return dates ## get the cost and delivery data for a given paper from the DB # each of them are converted to a dictionary, whose index is the day_id # the two dictionaries are then returned as a tuple def get_cost_and_delivery_data(paper_id: int) -> tuple[dict[int, float], dict[int, bool]]: cost_query = generate_sql_query( 'papers_days_cost', columns=['day_id', 'cost'], conditions={'paper_id': paper_id} ) delivery_query = generate_sql_query( 'papers_days_delivered', columns=['day_id', 'delivered'], conditions={'paper_id': paper_id} ) with connect(DATABASE_PATH) as connection: cost_tuple = connection.execute(cost_query).fetchall() delivery_tuple = connection.execute(delivery_query).fetchall() cost_dict = { day_id: cost for day_id, cost in cost_tuple # type: ignore } delivery_dict = { day_id: delivery for day_id, delivery in delivery_tuple # type: ignore } return cost_dict, delivery_dict ## calculate the cost of one paper for the full month # any dates when it was not delivered will be removed def calculate_cost_of_one_paper(number_of_days_per_week: list[int], undelivered_dates: set[date_type], cost_and_delivered_data: tuple[dict[int, float], dict[int, bool]]) -> float: cost_data, delivered_data = cost_and_delivered_data # initialize counters corresponding to each weekday when the paper was not delivered number_of_days_per_week_not_received = [0] * len(number_of_days_per_week) # for each date that the paper was not delivered, we increment the counter for the corresponding weekday for date in undelivered_dates: number_of_days_per_week_not_received[date.weekday()] += 1 # calculate the total number of each weekday the paper was delivered (if it is supposed to be delivered) number_of_days_delivered = [ 
number_of_days_per_week[day_id] - number_of_days_per_week_not_received[day_id] if delivered else 0 for day_id, delivered in delivered_data.items() ] # calculate the total cost of the paper for the month return sum( cost * number_of_days_delivered[day_id] for day_id, cost in cost_data.items() ) ## calculate the cost of all papers for the full month # return data about the cost of each paper, the total cost, and dates when each paper was not delivered def calculate_cost_of_all_papers(undelivered_strings: dict[int, str], month: int, year: int) -> tuple[dict[int, float], float, dict[int, set[date_type]]]: NUMBER_OF_DAYS_PER_WEEK = get_number_of_days_per_week(month, year) # get the IDs of papers that exist with connect(DATABASE_PATH) as connection: papers = connection.execute( generate_sql_query( 'papers', columns=['paper_id'] ) ).fetchall() # get the data about cost and delivery for each paper cost_and_delivery_data = [ get_cost_and_delivery_data(paper_id) for paper_id, in papers # type: ignore ] # initialize a "blank" dictionary that will eventually contain any dates when a paper was not delivered undelivered_dates: dict[int, set[date_type]] = { paper_id: {} for paper_id, in papers # type: ignore } # calculate the undelivered dates for each paper for paper_id, undelivered_string in undelivered_strings.items(): # type: ignore undelivered_dates[paper_id] = parse_undelivered_string(undelivered_string, month, year) # calculate the cost of each paper costs = { paper_id: calculate_cost_of_one_paper( NUMBER_OF_DAYS_PER_WEEK, undelivered_dates[paper_id], cost_and_delivery_data[index] ) for index, (paper_id,) in enumerate(papers) # type: ignore } # calculate the total cost of all papers total = sum(costs.values()) return costs, total, undelivered_dates ## save the results of undelivered dates to the DB # save the dates any paper was not delivered def save_results(undelivered_dates: dict[int, set[date_type]], month: int, year: int) -> None: TIMESTAMP = 
datetime.now().strftime(r'%d/%m/%Y %I:%M:%S %p') with connect(DATABASE_PATH) as connection: for paper_id, undelivered_date_instances in undelivered_dates.items(): connection.execute( "INSERT INTO undelivered_dates (timestamp, month, year, paper_id, dates) VALUES (?, ?, ?, ?, ?);", ( TIMESTAMP, month, year, paper_id, ','.join([ undelivered_date_instance.strftime(r'%d') for undelivered_date_instance in undelivered_date_instances ]) ) ) ## format the output of calculating the cost of all papers def format_output(costs: dict[int, float], total: float, month: int, year: int) -> str: papers = { paper_id: name for paper_id, name in query_database( generate_sql_query('papers') ) } format_string = f"For {date_type(year=year, month=month, day=1).strftime(r'%B %Y')}\n\n" format_string += f"*TOTAL*: {total}\n" format_string += '\n'.join([ f"{papers[paper_id]}: {cost}" # type: ignore for paper_id, cost in costs.items() ]) return f"{format_string}\n" ## add a new paper # do not allow if the paper already exists def add_new_paper(name: str, days_delivered: list[bool], days_cost: list[float]) -> tuple[bool, str]: with connect(DATABASE_PATH) as connection: # get the names of all papers that already exist paper = connection.execute( generate_sql_query('papers', columns=['name'], conditions={'name': f"\"{name}\""}) ).fetchall() # if the proposed paper already exists, return an error message if paper: return False, "Paper already exists. Please try editing the paper instead." 
# otherwise, add the paper name to the database connection.execute( "INSERT INTO papers (name) VALUES (?);", (name, ) ) # get the ID of the paper that was just added paper_id = connection.execute( "SELECT paper_id FROM papers WHERE name = ?;", (name, ) ).fetchone()[0] # add the cost and delivery data for the paper for day_id, (cost, delivered) in enumerate(zip(days_cost, days_delivered)): connection.execute( "INSERT INTO papers_days_cost (paper_id, day_id, cost) VALUES (?, ?, ?);", (paper_id, day_id, cost) ) connection.execute( "INSERT INTO papers_days_delivered (paper_id, day_id, delivered) VALUES (?, ?, ?);", (paper_id, day_id, delivered) ) connection.commit() return True, f"Paper {name} added." return False, "Something went wrong." ## edit an existing paper # do not allow if the paper does not exist def edit_existing_paper(paper_id: int, name: str | None = None, days_delivered: list[bool] | None = None, days_cost: list[float] | None = None) -> tuple[bool, str]: with connect(DATABASE_PATH) as connection: # get the IDs of all papers that already exist paper = connection.execute( generate_sql_query('papers', columns=['paper_id'], conditions={'paper_id': paper_id}) ).fetchone() # if the proposed paper does not exist, return an error message if not paper: return False, f"Paper {paper_id} does not exist. Please try adding it instead." # if a name is proposed, update the name of the paper if name is not None: connection.execute( "UPDATE papers SET name = ? WHERE paper_id = ?;", (name, paper_id) ) # if delivery data is proposed, update the delivery data of the paper if days_delivered is not None: for day_id, delivered in enumerate(days_delivered): connection.execute( "UPDATE papers_days_delivered SET delivered = ? WHERE paper_id = ? AND day_id = ?;", (delivered, paper_id, day_id) ) # if cost data is proposed, update the cost data of the paper if days_cost is not None: for day_id, cost in enumerate(days_cost): connection.execute( "UPDATE papers_days_cost SET cost = ? 
WHERE paper_id = ? AND day_id = ?;", (cost, paper_id, day_id) ) connection.commit() return True, f"Paper {paper_id} edited." return False, "Something went wrong." ## delete an existing paper # do not allow if the paper does not exist def delete_existing_paper(paper_id: int) -> tuple[bool, str]: with connect(DATABASE_PATH) as connection: # get the IDs of all papers that already exist paper = connection.execute( generate_sql_query('papers', columns=['paper_id'], conditions={'paper_id': paper_id}) ).fetchone() # if the proposed paper does not exist, return an error message if not paper: return False, f"Paper {paper_id} does not exist. Please try adding it instead." # delete the paper from the names table connection.execute( "DELETE FROM papers WHERE paper_id = ?;", (paper_id, ) ) # delete the paper from the delivery data table connection.execute( "DELETE FROM papers_days_delivered WHERE paper_id = ?;", (paper_id, ) ) # delete the paper from the cost data table connection.execute( "DELETE FROM papers_days_cost WHERE paper_id = ?;", (paper_id, ) ) connection.commit() return True, f"Paper {paper_id} deleted." return False, "Something went wrong." ## record strings for date(s) paper(s) were not delivered def add_undelivered_string(paper_id: int, undelivered_string: str, month: int, year: int) -> tuple[bool, str]: # if the string is not valid, return an error message if not validate_undelivered_string(undelivered_string): return False, f"Invalid undelivered string." with connect(DATABASE_PATH) as connection: # check if given paper exists paper = connection.execute( generate_sql_query( 'papers', columns=['paper_id'], conditions={'paper_id': paper_id} ) ).fetchone() # if the paper does not exist, return an error message if not paper: return False, f"Paper {paper_id} does not exist. Please try adding it instead." 
# check if a string with the same month and year, for the same paper, already exists existing_string = connection.execute( generate_sql_query( 'undelivered_strings', columns=['string'], conditions={ 'paper_id': paper_id, 'month': month, 'year': year } ) ).fetchone() # if a string with the same month and year, for the same paper, already exists, concatenate the new string to it if existing_string: new_string = f"{existing_string[0]},{undelivered_string}" connection.execute( "UPDATE undelivered_strings SET string = ? WHERE paper_id = ? AND month = ? AND year = ?;", (new_string, paper_id, month, year) ) # otherwise, add the new string to the database else: connection.execute( "INSERT INTO undelivered_strings (string, paper_id, month, year) VALUES (?, ?, ?, ?);", (undelivered_string, paper_id, month, year) ) connection.commit() return True, f"Undelivered string added." ## delete an existing undelivered string # do not allow if the string does not exist def delete_undelivered_string(paper_id: int, month: int, year: int) -> tuple[bool, str]: with connect(DATABASE_PATH) as connection: # check if a string with the same month and year, for the same paper, exists existing_string = connection.execute( generate_sql_query( 'undelivered_strings', columns=['string'], conditions={ 'paper_id': paper_id, 'month': month, 'year': year } ) ).fetchone() # if it does, delete it if existing_string: connection.execute( "DELETE FROM undelivered_strings WHERE paper_id = ? AND month = ? AND year = ?;", (paper_id, month, year) ) connection.commit() return True, f"Undelivered string deleted." # if the string does not exist, return an error message return False, f"Undelivered string does not exist." return False, "Something went wrong." 
## get the previous month, by looking at 1 day before the first day of the current month (duh) def get_previous_month() -> date_type: return (datetime.today().replace(day=1) - timedelta(days=1)).replace(day=1) ## extract delivery days and costs from user input def extract_days_and_costs(days_delivered: str | None, prices: str | None, paper_id: int | None = None) -> tuple[list[bool], list[float]]: days = [] costs = [] # if the user has provided delivery days, extract them if days_delivered is not None: days = [ bool(int(day == 'Y')) for day in str(days_delivered).upper() ] # if the user has not provided delivery days, fetch them from the database else: if isinstance(paper_id, int): days = [ (int(day_id), bool(delivered)) for day_id, delivered in query_database( generate_sql_query( 'papers_days_delivered', columns=['day_id', 'delivered'], conditions={ 'paper_id': paper_id } ) ) ] days.sort(key=lambda x: x[0]) days = [delivered for _, delivered in days] # if the user has provided prices, extract them if prices is not None: costs = [] encoded_prices = [float(price) for price in SPLIT_REGEX['semicolon'].split(prices.rstrip(';')) if float(price) > 0] day_count = -1 for day in days: if day: day_count += 1 cost = encoded_prices[day_count] else: cost = 0 costs.append(cost) return days, costs ## validate month and year def validate_month_and_year(month: int | None = None, year: int | None = None) -> tuple[bool, str]: if ((month is None) or (isinstance(month, int) and (0 < month) and (month <= 12))) and ((year is None) or (isinstance(year, int) and (year >= 0))): return True, "" return False, "Invalid month and/or year." 
test_core.py from datetime import date as date_type from npbc_core import (SPLIT_REGEX, VALIDATE_REGEX, calculate_cost_of_one_paper, extract_days_and_costs, generate_sql_query, get_number_of_days_per_week, parse_undelivered_string, validate_month_and_year) def test_regex_number(): assert VALIDATE_REGEX['number'].match('') is None assert VALIDATE_REGEX['number'].match('1') is not None assert VALIDATE_REGEX['number'].match('1 2') is None assert VALIDATE_REGEX['number'].match('1-2') is None assert VALIDATE_REGEX['number'].match('11') is not None assert VALIDATE_REGEX['number'].match('11-12') is None assert VALIDATE_REGEX['number'].match('11-12,13') is None assert VALIDATE_REGEX['number'].match('11-12,13-14') is None assert VALIDATE_REGEX['number'].match('111') is None assert VALIDATE_REGEX['number'].match('a') is None assert VALIDATE_REGEX['number'].match('1a') is None assert VALIDATE_REGEX['number'].match('1a2') is None assert VALIDATE_REGEX['number'].match('12b') is None def test_regex_range(): assert VALIDATE_REGEX['range'].match('') is None assert VALIDATE_REGEX['range'].match('1') is None assert VALIDATE_REGEX['range'].match('1 2') is None assert VALIDATE_REGEX['range'].match('1-2') is not None assert VALIDATE_REGEX['range'].match('11') is None assert VALIDATE_REGEX['range'].match('11-') is None assert VALIDATE_REGEX['range'].match('11-12') is not None assert VALIDATE_REGEX['range'].match('11-12-1') is None assert VALIDATE_REGEX['range'].match('11 -12') is not None assert VALIDATE_REGEX['range'].match('11 - 12') is not None assert VALIDATE_REGEX['range'].match('11- 12') is not None assert VALIDATE_REGEX['range'].match('11-2') is not None assert VALIDATE_REGEX['range'].match('11-12,13') is None assert VALIDATE_REGEX['range'].match('11-12,13-14') is None assert VALIDATE_REGEX['range'].match('111') is None assert VALIDATE_REGEX['range'].match('a') is None assert VALIDATE_REGEX['range'].match('1a') is None assert VALIDATE_REGEX['range'].match('1a2') is None assert 
VALIDATE_REGEX['range'].match('12b') is None assert VALIDATE_REGEX['range'].match('11-a') is None assert VALIDATE_REGEX['range'].match('11-12a') is None def test_regex_CSVs(): assert VALIDATE_REGEX['CSVs'].match('') is None assert VALIDATE_REGEX['CSVs'].match('1') is not None assert VALIDATE_REGEX['CSVs'].match('a') is not None assert VALIDATE_REGEX['CSVs'].match('adcef') is not None assert VALIDATE_REGEX['CSVs'].match('-') is not None assert VALIDATE_REGEX['CSVs'].match(' ') is None assert VALIDATE_REGEX['CSVs'].match('1,2') is not None assert VALIDATE_REGEX['CSVs'].match('1-3') is not None assert VALIDATE_REGEX['CSVs'].match('monday') is not None assert VALIDATE_REGEX['CSVs'].match('monday,tuesday') is not None assert VALIDATE_REGEX['CSVs'].match('mondays') is not None assert VALIDATE_REGEX['CSVs'].match('tuesdays') is not None assert VALIDATE_REGEX['CSVs'].match('1,2,3') is not None assert VALIDATE_REGEX['CSVs'].match('1-3') is not None assert VALIDATE_REGEX['CSVs'].match('monday,tuesday') is not None assert VALIDATE_REGEX['CSVs'].match('mondays,tuesdays') is not None assert VALIDATE_REGEX['CSVs'].match(';') is None assert VALIDATE_REGEX['CSVs'].match(':') is None assert VALIDATE_REGEX['CSVs'].match(':') is None assert VALIDATE_REGEX['CSVs'].match('!') is None assert VALIDATE_REGEX['CSVs'].match('1,2,3,4') is not None def test_regex_days(): assert VALIDATE_REGEX['days'].match('') is None assert VALIDATE_REGEX['days'].match('1') is None assert VALIDATE_REGEX['days'].match('1,2') is None assert VALIDATE_REGEX['days'].match('1-3') is None assert VALIDATE_REGEX['days'].match('monday') is None assert VALIDATE_REGEX['days'].match('monday,tuesday') is None assert VALIDATE_REGEX['days'].match('mondays') is not None assert VALIDATE_REGEX['days'].match('tuesdays') is not None def test_regex_n_days(): assert VALIDATE_REGEX['n-day'].match('') is None assert VALIDATE_REGEX['n-day'].match('1') is None assert VALIDATE_REGEX['n-day'].match('1-') is None assert 
VALIDATE_REGEX['n-day'].match('1,2') is None assert VALIDATE_REGEX['n-day'].match('1-3') is None assert VALIDATE_REGEX['n-day'].match('monday') is None assert VALIDATE_REGEX['n-day'].match('monday,tuesday') is None assert VALIDATE_REGEX['n-day'].match('mondays') is None assert VALIDATE_REGEX['n-day'].match('1-tuesday') is not None assert VALIDATE_REGEX['n-day'].match('11-tuesday') is None assert VALIDATE_REGEX['n-day'].match('111-tuesday') is None assert VALIDATE_REGEX['n-day'].match('11-tuesdays') is None assert VALIDATE_REGEX['n-day'].match('1 -tuesday') is not None assert VALIDATE_REGEX['n-day'].match('1- tuesday') is not None assert VALIDATE_REGEX['n-day'].match('1 - tuesday') is not None def test_regex_costs(): assert VALIDATE_REGEX['costs'].match('') is None assert VALIDATE_REGEX['costs'].match('a') is None assert VALIDATE_REGEX['costs'].match('1') is not None assert VALIDATE_REGEX['costs'].match('1.') is None assert VALIDATE_REGEX['costs'].match('1.5') is not None assert VALIDATE_REGEX['costs'].match('1.0') is not None assert VALIDATE_REGEX['costs'].match('16.0') is not None assert VALIDATE_REGEX['costs'].match('16.06') is not None assert VALIDATE_REGEX['costs'].match('1;2') is not None assert VALIDATE_REGEX['costs'].match('1 ;2') is not None assert VALIDATE_REGEX['costs'].match('1; 2') is not None assert VALIDATE_REGEX['costs'].match('1 ; 2') is not None assert VALIDATE_REGEX['costs'].match('1;2;') is not None assert VALIDATE_REGEX['costs'].match('1;2 ;') is not None assert VALIDATE_REGEX['costs'].match('1:2') is None assert VALIDATE_REGEX['costs'].match('1,2') is None assert VALIDATE_REGEX['costs'].match('1-2') is None assert VALIDATE_REGEX['costs'].match('1;2;3') is not None assert VALIDATE_REGEX['costs'].match('1;2;3;4') is not None assert VALIDATE_REGEX['costs'].match('1;2;3;4;5') is not None assert VALIDATE_REGEX['costs'].match('1;2;3;4;5;6') is not None assert VALIDATE_REGEX['costs'].match('1;2;3;4;5;6;7;') is not None assert 
VALIDATE_REGEX['costs'].match('1;2;3;4;5;6;7') is not None assert VALIDATE_REGEX['costs'].match('1;2;3;4;5;6;7;8') is None def test_delivery_regex(): assert VALIDATE_REGEX['delivery'].match('') is None assert VALIDATE_REGEX['delivery'].match('a') is None assert VALIDATE_REGEX['delivery'].match('1') is None assert VALIDATE_REGEX['delivery'].match('1.') is None assert VALIDATE_REGEX['delivery'].match('1.5') is None assert VALIDATE_REGEX['delivery'].match('1,2') is None assert VALIDATE_REGEX['delivery'].match('1-2') is None assert VALIDATE_REGEX['delivery'].match('1;2') is None assert VALIDATE_REGEX['delivery'].match('1:2') is None assert VALIDATE_REGEX['delivery'].match('1,2,3') is None assert VALIDATE_REGEX['delivery'].match('Y') is None assert VALIDATE_REGEX['delivery'].match('N') is None assert VALIDATE_REGEX['delivery'].match('YY') is None assert VALIDATE_REGEX['delivery'].match('YYY') is None assert VALIDATE_REGEX['delivery'].match('YYYY') is None assert VALIDATE_REGEX['delivery'].match('YYYYY') is None assert VALIDATE_REGEX['delivery'].match('YYYYYY') is None assert VALIDATE_REGEX['delivery'].match('YYYYYYY') is not None assert VALIDATE_REGEX['delivery'].match('YYYYYYYY') is None assert VALIDATE_REGEX['delivery'].match('NNNNNNN') is not None assert VALIDATE_REGEX['delivery'].match('NYNNNNN') is not None assert VALIDATE_REGEX['delivery'].match('NYYYYNN') is not None assert VALIDATE_REGEX['delivery'].match('NYYYYYY') is not None assert VALIDATE_REGEX['delivery'].match('NYYYYYYY') is None assert VALIDATE_REGEX['delivery'].match('N,N,N,N,N,N,N') is None assert VALIDATE_REGEX['delivery'].match('N;N;N;N;N;N;N') is None assert VALIDATE_REGEX['delivery'].match('N-N-N-N-N-N-N') is None assert VALIDATE_REGEX['delivery'].match('N N N N N N N') is None assert VALIDATE_REGEX['delivery'].match('YYYYYYy') is None assert VALIDATE_REGEX['delivery'].match('YYYYYYn') is None def test_regex_hyphen(): assert SPLIT_REGEX['hyphen'].split('1-2') == ['1', '2'] assert 
SPLIT_REGEX['hyphen'].split('1-2-3') == ['1', '2', '3'] assert SPLIT_REGEX['hyphen'].split('1 -2-3') == ['1', '2', '3'] assert SPLIT_REGEX['hyphen'].split('1 - 2-3') == ['1', '2', '3'] assert SPLIT_REGEX['hyphen'].split('1- 2-3') == ['1', '2', '3'] assert SPLIT_REGEX['hyphen'].split('1') == ['1'] assert SPLIT_REGEX['hyphen'].split('1-') == ['1', ''] assert SPLIT_REGEX['hyphen'].split('1-2-') == ['1', '2', ''] assert SPLIT_REGEX['hyphen'].split('1-2-3-') == ['1', '2', '3', ''] assert SPLIT_REGEX['hyphen'].split('1,2-3') == ['1,2', '3'] assert SPLIT_REGEX['hyphen'].split('1,2-3-') == ['1,2', '3', ''] assert SPLIT_REGEX['hyphen'].split('1,2, 3,') == ['1,2, 3,'] assert SPLIT_REGEX['hyphen'].split('') == [''] def test_regex_comma(): assert SPLIT_REGEX['comma'].split('1,2') == ['1', '2'] assert SPLIT_REGEX['comma'].split('1,2,3') == ['1', '2', '3'] assert SPLIT_REGEX['comma'].split('1 ,2,3') == ['1', '2', '3'] assert SPLIT_REGEX['comma'].split('1 , 2,3') == ['1', '2', '3'] assert SPLIT_REGEX['comma'].split('1, 2,3') == ['1', '2', '3'] assert SPLIT_REGEX['comma'].split('1') == ['1'] assert SPLIT_REGEX['comma'].split('1,') == ['1', ''] assert SPLIT_REGEX['comma'].split('1, ') == ['1', ''] assert SPLIT_REGEX['comma'].split('1,2,') == ['1', '2', ''] assert SPLIT_REGEX['comma'].split('1,2,3,') == ['1', '2', '3', ''] assert SPLIT_REGEX['comma'].split('1-2,3') == ['1-2', '3'] assert SPLIT_REGEX['comma'].split('1-2,3,') == ['1-2', '3', ''] assert SPLIT_REGEX['comma'].split('1-2-3') == ['1-2-3'] assert SPLIT_REGEX['comma'].split('1-2- 3') == ['1-2- 3'] assert SPLIT_REGEX['comma'].split('') == [''] def test_regex_semicolon(): assert SPLIT_REGEX['semicolon'].split('1;2') == ['1', '2'] assert SPLIT_REGEX['semicolon'].split('1;2;3') == ['1', '2', '3'] assert SPLIT_REGEX['semicolon'].split('1 ;2;3') == ['1', '2', '3'] assert SPLIT_REGEX['semicolon'].split('1 ; 2;3') == ['1', '2', '3'] assert SPLIT_REGEX['semicolon'].split('1; 2;3') == ['1', '2', '3'] assert 
SPLIT_REGEX['semicolon'].split('1') == ['1'] assert SPLIT_REGEX['semicolon'].split('1;') == ['1', ''] assert SPLIT_REGEX['semicolon'].split('1; ') == ['1', ''] assert SPLIT_REGEX['semicolon'].split('1;2;') == ['1', '2', ''] assert SPLIT_REGEX['semicolon'].split('1;2;3;') == ['1', '2', '3', ''] assert SPLIT_REGEX['semicolon'].split('1-2;3') == ['1-2', '3'] assert SPLIT_REGEX['semicolon'].split('1-2;3;') == ['1-2', '3', ''] assert SPLIT_REGEX['semicolon'].split('1-2-3') == ['1-2-3'] assert SPLIT_REGEX['semicolon'].split('1-2- 3') == ['1-2- 3'] assert SPLIT_REGEX['semicolon'].split('') == [''] def test_undelivered_string_parsing(): MONTH = 5 YEAR = 2017 assert parse_undelivered_string('', MONTH, YEAR) == set([]) assert parse_undelivered_string('1', MONTH, YEAR) == set([ date_type(year=YEAR, month=MONTH, day=1) ]) assert parse_undelivered_string('1-2', MONTH, YEAR) == set([ date_type(year=YEAR, month=MONTH, day=1), date_type(year=YEAR, month=MONTH, day=2) ]) assert parse_undelivered_string('5-17', MONTH, YEAR) == set([ date_type(year=YEAR, month=MONTH, day=5), date_type(year=YEAR, month=MONTH, day=6), date_type(year=YEAR, month=MONTH, day=7), date_type(year=YEAR, month=MONTH, day=8), date_type(year=YEAR, month=MONTH, day=9), date_type(year=YEAR, month=MONTH, day=10), date_type(year=YEAR, month=MONTH, day=11), date_type(year=YEAR, month=MONTH, day=12), date_type(year=YEAR, month=MONTH, day=13), date_type(year=YEAR, month=MONTH, day=14), date_type(year=YEAR, month=MONTH, day=15), date_type(year=YEAR, month=MONTH, day=16), date_type(year=YEAR, month=MONTH, day=17) ]) assert parse_undelivered_string('5-17,19', MONTH, YEAR) == set([ date_type(year=YEAR, month=MONTH, day=5), date_type(year=YEAR, month=MONTH, day=6), date_type(year=YEAR, month=MONTH, day=7), date_type(year=YEAR, month=MONTH, day=8), date_type(year=YEAR, month=MONTH, day=9), date_type(year=YEAR, month=MONTH, day=10), date_type(year=YEAR, month=MONTH, day=11), date_type(year=YEAR, month=MONTH, day=12), 
date_type(year=YEAR, month=MONTH, day=13), date_type(year=YEAR, month=MONTH, day=14), date_type(year=YEAR, month=MONTH, day=15), date_type(year=YEAR, month=MONTH, day=16), date_type(year=YEAR, month=MONTH, day=17), date_type(year=YEAR, month=MONTH, day=19) ]) assert parse_undelivered_string('5-17,19-21', MONTH, YEAR) == set([ date_type(year=YEAR, month=MONTH, day=5), date_type(year=YEAR, month=MONTH, day=6), date_type(year=YEAR, month=MONTH, day=7), date_type(year=YEAR, month=MONTH, day=8), date_type(year=YEAR, month=MONTH, day=9), date_type(year=YEAR, month=MONTH, day=10), date_type(year=YEAR, month=MONTH, day=11), date_type(year=YEAR, month=MONTH, day=12), date_type(year=YEAR, month=MONTH, day=13), date_type(year=YEAR, month=MONTH, day=14), date_type(year=YEAR, month=MONTH, day=15), date_type(year=YEAR, month=MONTH, day=16), date_type(year=YEAR, month=MONTH, day=17), date_type(year=YEAR, month=MONTH, day=19), date_type(year=YEAR, month=MONTH, day=20), date_type(year=YEAR, month=MONTH, day=21) ]) assert parse_undelivered_string('5-17,19-21,23', MONTH, YEAR) == set([ date_type(year=YEAR, month=MONTH, day=5), date_type(year=YEAR, month=MONTH, day=6), date_type(year=YEAR, month=MONTH, day=7), date_type(year=YEAR, month=MONTH, day=8), date_type(year=YEAR, month=MONTH, day=9), date_type(year=YEAR, month=MONTH, day=10), date_type(year=YEAR, month=MONTH, day=11), date_type(year=YEAR, month=MONTH, day=12), date_type(year=YEAR, month=MONTH, day=13), date_type(year=YEAR, month=MONTH, day=14), date_type(year=YEAR, month=MONTH, day=15), date_type(year=YEAR, month=MONTH, day=16), date_type(year=YEAR, month=MONTH, day=17), date_type(year=YEAR, month=MONTH, day=19), date_type(year=YEAR, month=MONTH, day=20), date_type(year=YEAR, month=MONTH, day=21), date_type(year=YEAR, month=MONTH, day=23) ]) assert parse_undelivered_string('mondays', MONTH, YEAR) == set([ date_type(year=YEAR, month=MONTH, day=1), date_type(year=YEAR, month=MONTH, day=8), date_type(year=YEAR, month=MONTH, 
day=15), date_type(year=YEAR, month=MONTH, day=22), date_type(year=YEAR, month=MONTH, day=29) ]) assert parse_undelivered_string('mondays, wednesdays', MONTH, YEAR) == set([ date_type(year=YEAR, month=MONTH, day=1), date_type(year=YEAR, month=MONTH, day=8), date_type(year=YEAR, month=MONTH, day=15), date_type(year=YEAR, month=MONTH, day=22), date_type(year=YEAR, month=MONTH, day=29), date_type(year=YEAR, month=MONTH, day=3), date_type(year=YEAR, month=MONTH, day=10), date_type(year=YEAR, month=MONTH, day=17), date_type(year=YEAR, month=MONTH, day=24), date_type(year=YEAR, month=MONTH, day=31) ]) assert parse_undelivered_string('2-monday', MONTH, YEAR) == set([ date_type(year=YEAR, month=MONTH, day=8) ]) assert parse_undelivered_string('2-monday, 3-wednesday', MONTH, YEAR) == set([ date_type(year=YEAR, month=MONTH, day=8), date_type(year=YEAR, month=MONTH, day=17) ]) def test_sql_query(): assert generate_sql_query( 'test' ) == "SELECT * FROM test;" assert generate_sql_query( 'test', columns=['a'] ) == "SELECT a FROM test;" assert generate_sql_query( 'test', columns=['a', 'b'] ) == "SELECT a, b FROM test;" assert generate_sql_query( 'test', conditions={'a': '\"b\"'} ) == "SELECT * FROM test WHERE a = \"b\";" assert generate_sql_query( 'test', conditions={ 'a': '\"b\"', 'c': '\"d\"' } ) == "SELECT * FROM test WHERE a = \"b\" AND c = \"d\";" assert generate_sql_query( 'test', conditions={ 'a': '\"b\"', 'c': '\"d\"' }, columns=['a', 'b'] ) == "SELECT a, b FROM test WHERE a = \"b\" AND c = \"d\";" def test_number_of_days_per_week(): assert get_number_of_days_per_week(1, 2022) == [5, 4, 4, 4, 4, 5, 5] assert get_number_of_days_per_week(2, 2022) == [4, 4, 4, 4, 4, 4, 4] assert get_number_of_days_per_week(3, 2022) == [4, 5, 5 ,5, 4, 4, 4] assert get_number_of_days_per_week(2, 2020) == [4, 4, 4, 4, 4, 5, 4] assert get_number_of_days_per_week(12, 1954) == [4, 4, 5, 5, 5, 4, 4] def test_calculating_cost_of_one_paper(): DAYS_PER_WEEK = [5, 4, 4, 4, 4, 5, 5] COST_PER_DAY: 
dict[int, float] = { 0: 0, 1: 0, 2: 2, 3: 2, 4: 5, 5: 0, 6: 1 } DELIVERY_DATA: dict[int, bool] = { 0: False, 1: False, 2: True, 3: True, 4: True, 5: False, 6: True } assert calculate_cost_of_one_paper( DAYS_PER_WEEK, set([]), ( COST_PER_DAY, DELIVERY_DATA ) ) == 41 assert calculate_cost_of_one_paper( DAYS_PER_WEEK, set([]), ( COST_PER_DAY, { 0: False, 1: False, 2: True, 3: True, 4: True, 5: False, 6: False } ) ) == 36 assert calculate_cost_of_one_paper( DAYS_PER_WEEK, set([ date_type(year=2022, month=1, day=8) ]), ( COST_PER_DAY, DELIVERY_DATA ) ) == 41 assert calculate_cost_of_one_paper( DAYS_PER_WEEK, set([ date_type(year=2022, month=1, day=8), date_type(year=2022, month=1, day=8) ]), ( COST_PER_DAY, DELIVERY_DATA ) ) == 41 assert calculate_cost_of_one_paper( DAYS_PER_WEEK, set([ date_type(year=2022, month=1, day=8), date_type(year=2022, month=1, day=17) ]), ( COST_PER_DAY, DELIVERY_DATA ) ) == 41 assert calculate_cost_of_one_paper( DAYS_PER_WEEK, set([ date_type(year=2022, month=1, day=2) ]), ( COST_PER_DAY, DELIVERY_DATA ) ) == 40 assert calculate_cost_of_one_paper( DAYS_PER_WEEK, set([ date_type(year=2022, month=1, day=2), date_type(year=2022, month=1, day=2) ]), ( COST_PER_DAY, DELIVERY_DATA ) ) == 40 assert calculate_cost_of_one_paper( DAYS_PER_WEEK, set([ date_type(year=2022, month=1, day=6), date_type(year=2022, month=1, day=7) ]), ( COST_PER_DAY, DELIVERY_DATA ) ) == 34 assert calculate_cost_of_one_paper( DAYS_PER_WEEK, set([ date_type(year=2022, month=1, day=6), date_type(year=2022, month=1, day=7), date_type(year=2022, month=1, day=8) ]), ( COST_PER_DAY, DELIVERY_DATA ) ) == 34 assert calculate_cost_of_one_paper( DAYS_PER_WEEK, set([ date_type(year=2022, month=1, day=6), date_type(year=2022, month=1, day=7), date_type(year=2022, month=1, day=7), date_type(year=2022, month=1, day=7), date_type(year=2022, month=1, day=8), date_type(year=2022, month=1, day=8), date_type(year=2022, month=1, day=8) ]), ( COST_PER_DAY, DELIVERY_DATA ) ) == 34 def 
test_extracting_days_and_costs(): assert extract_days_and_costs(None, None) == ([], []) assert extract_days_and_costs('NNNNNNN', None) == ( [False, False, False, False, False, False, False], [] ) assert extract_days_and_costs('NNNYNNN', '7') == ( [False, False, False, True, False, False, False], [0, 0, 0, 7, 0, 0, 0] ) assert extract_days_and_costs('NNNYNNN', '7;7') == ( [False, False, False, True, False, False, False], [0, 0, 0, 7, 0, 0, 0] ) assert extract_days_and_costs('NNNYNNY', '7;4') == ( [False, False, False, True, False, False, True], [0, 0, 0, 7, 0, 0, 4] ) assert extract_days_and_costs('NNNYNNY', '7;4.7') == ( [False, False, False, True, False, False, True], [0, 0, 0, 7, 0, 0, 4.7] ) def test_validate_month_and_year(): assert validate_month_and_year(1, 2020)[0] assert validate_month_and_year(12, 2020)[0] assert validate_month_and_year(1, 2021)[0] assert validate_month_and_year(12, 2021)[0] assert validate_month_and_year(1, 2022)[0] assert validate_month_and_year(12, 2022)[0] assert not validate_month_and_year(-54, 2020)[0] assert not validate_month_and_year(0, 2020)[0] assert not validate_month_and_year(13, 2020)[0] assert not validate_month_and_year(45, 2020)[0] assert not validate_month_and_year(1, -5)[0] assert not validate_month_and_year(12, -5)[0] assert not validate_month_and_year(1.6, 10)[0] # type: ignore assert not validate_month_and_year(12.6, 10)[0] # type: ignore assert not validate_month_and_year(1, '10')[0] # type: ignore assert not validate_month_and_year(12, '10')[0] # type: ignore If you need it, here is a link to the GitHub repo for this project. It's at the same commit as the code above, and I won't edit this so that any discussion is consistent. https://github.com/eccentricOrange/npbc/tree/6020a4f5db0bf40f54e35b725b305cfeafdd8f2b Answer: Once you calculate, the results are displayed and copied to your clipboard This is not a good idea for a CLI. It may be a reasonable (opt-in!) 
option for a GUI, but for a CLI the overwhelming expectation is that the results are printed to stdout, and the user can copy from there if they want.

You say:

"I will be the sole user of this app until it's converted into a server thing as stated in the table. But you're not expected to know that, and it's probably a bad idea for me to learn non-standard practices."

Indeed. For us to get better as practitioners, even if our programs are only intended for us, best practice is to make them approachable for everyone.

Overall this code is pretty good! There are type hints, you're using pathlib, you have reasonable data directories, you have tests, etc.

Don't represent VALIDATE_REGEX as a dictionary where all of the keys are referenced statically. A saner option is to move this to a different file, which will still offer scope but not require the machinery of a dictionary. SPLIT_REGEX is probably best as three separate regex variables.

Having a regex validator for CSV content is concerning. You shouldn't be parsing CSV content yourself. You say:

"This refers to user input, not a 'data' file"

CSV is vaguely a machine-legible format and not expected to be a user-legible format. So if it doesn't come from a file, then it's probably not an appropriate choice for user input of your undelivered string. Your other post shows that you're accepting this as one argument. Instead, consider using argparse and its nargs='+' support to accumulate multiple arguments into a list, which will not require a comma separator.

generate_sql_query is attempting to be too clever. Don't construct your queries like this; just write them out.

What's with query_database returning []? That statement will never be executed, so remove it.

get_number_of_days_per_week is a good candidate for being converted from returning a list to yielding an iterator. You ask:

"it's called only once and the value is saved to a constant. So, why does list vs generator matter?"
It doesn't matter how many times it's called, and it doesn't matter whether it's saved to a constant. An iterator function offers "streamed" calculation instead of in-memory calculation to any callers that want it, and if a caller does actually want a list or tuple, they can cast to it there. Other than the memory representation flexibility, this is mostly a matter of syntactic sugar since no return is necessary and no inner list variable is needed. Your "bug report" here:

print("Congratulations! You broke the program!")
print("You managed to write a string that the program considers valid, but isn't actually.")
print("Please report it to the developer.")
print(f"\nThe string you wrote was: {string}")
print("This data has not been counted.")

is well-intended, but would be better-represented by refactoring out the contents of the loop to its own function; in that function, throwing an exception at this scope - likely a custom exception subclass that can accept your invalid string as a member; back in the loop that calls that function, wrap it in a try/except that catches your (specific) exception and prints something like what you've already shown. Similar for return False, "Something went wrong." which should be replaced with proper exception handling rather than a boolean return. (0 < start) and (start <= end) is better-represented as 0 < start <= end. Similar for 0 < n <= get_number_of_days_per_week(month, year)[WEEKDAY_NAMES.index(weekday.capitalize())]. TIMESTAMP should not be capitalised, since it's a local. Your ','.join([ should not have an inner list and should pass the generator directly. This formatting block:

format_string = f"For {date_type(year=year, month=month, day=1).strftime(r'%B %Y')}\n\n"
format_string += f"*TOTAL*: {total}\n"
format_string += '\n'.join([
    f"{papers[paper_id]}: {cost}"  # type: ignore
    for paper_id, cost in costs.items()
])
return f"{format_string}\n"

is... fine?
Other than the last f-string, which - since you've already iteratively concatenated, should just have a + '\n'. If you're concerned about the O(n^2) cost of successive string concatenation (which you probably shouldn't need to be), then either use StringIO, or do '\n'.join(). The latter option would be nice as an iterator function, something like

def format_total() -> Iterator[str]:
    # ...
    date_type_str = date_type(year=year, month=month, day=1).strftime(r'%B %Y')
    yield f"For {date_type_str}"
    yield f"*TOTAL*: {total}"
    for paper_id, cost in costs.items():
        yield f"{papers[paper_id]}: {cost}"

# ...
'\n'.join(format_total())

bool(int(day == 'Y')) should just be day == 'Y' which is already a boolean.
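The exception-based refactor suggested in the review could look roughly like this sketch (all names below are hypothetical, invented for illustration; they are not from the reviewed project):

```python
# Sketch of the suggested refactor: raise a custom exception instead of
# printing a "bug report" or returning (False, message) tuples.
# All names here are hypothetical.

class InvalidUndeliveredString(Exception):
    """Raised when a string passes validation but cannot be parsed."""

    def __init__(self, string: str):
        super().__init__(f"Unparseable undelivered string: {string!r}")
        self.string = string


def parse_undelivered_entry(string: str) -> tuple:
    """Parse one comma-separated entry of day numbers, e.g. '1,2,15'."""
    try:
        return tuple(int(part) for part in string.split(','))
    except ValueError:
        raise InvalidUndeliveredString(string)


def parse_all(entries: list) -> list:
    parsed = []
    for entry in entries:
        try:
            parsed.append(parse_undelivered_entry(entry))
        except InvalidUndeliveredString as exc:
            # The caller decides how to report; no prints buried in the logic.
            print(f"Skipped invalid entry: {exc.string}")
    return parsed
```

The point is that the parsing function signals failure in-band through the exception type, and only the outermost loop decides what to tell the user.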
{ "domain": "codereview.stackexchange", "id": 43298, "tags": "python, unit-testing, functional-programming, regex, validation" }
What does $n$ represent in "hydrated compound⋅nH2O"?
Question: On Wikipedia it is written that in "hydrated compound⋅nH2O" $n$ is the number of water molecules per formula unit of the salt. I don't understand what that means, and this is indicated by the fact that I can't express it in other terms, for being able to express a statement in other terms is a sign of understanding it. Can you express this in other terms? How do you imagine these $n$ water molecules to be arranged in a formula unit? Answer: $\ce{OrangeApple2 . n Plums}$ means that for each orange and 2 apples, there are $n$ plums, regardless of how the oranges, apples and plums are mutually arranged in the sack. Now more seriously: when there is a chemical formula like $\ce{AB2. n H2O}$, it means there is a substance $\ce{AB2}$ (ionic like $\ce{A^2+}$ + $\ce{2 B-}$, molecular like $\ce{B-A-B}$, or bound at large scale with an A:B ratio of 1:2), which incorporates in its solid structure (crystalline, microcrystalline, amorphous) molecules of water in the ratio 1 $\ce{AB2}$ : $n$ $\ce{H2O}$. Usually, such an $\ce{AB2}$ can exist as an anhydrate (being anhydrous) too, without water (like there is anhydrous citric acid and its monohydrate). Some substances can exist in several variants with different values of $n$. E.g. calcium chloride $\ce{CaCl2}$ can be anhydrous (n=0), a monohydrate (1), dihydrate (2), tetrahydrate (4), or hexahydrate (6). The formula gives generally nothing more nor less than the stoichiometric substance : water ratio, without any hint about the particular structure or molecular/atomic/ionic placement. In the case of salts, some water molecules wrap the cations similarly as in solution, while others may be independent or attached to the anions.
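To make the stoichiometric reading of $n$ concrete, here is a small worked example using the calcium chloride hydrates mentioned above (atomic masses are rounded, so the percentages are approximate):

```python
# What n means quantitatively: the mole ratio of water to salt fixes the
# mass fraction of water in the solid. Atomic masses are rounded.
M = {'Ca': 40.08, 'Cl': 35.45, 'H': 1.008, 'O': 16.00}
M_CACL2 = M['Ca'] + 2 * M['Cl']   # molar mass of CaCl2, ~111 g/mol
M_H2O = 2 * M['H'] + M['O']       # molar mass of water, ~18 g/mol

def water_mass_percent(n):
    """Mass percent of water in CaCl2 . n H2O."""
    return 100 * n * M_H2O / (M_CACL2 + n * M_H2O)

for n in (0, 1, 2, 4, 6):
    print(f"CaCl2 . {n} H2O: {water_mass_percent(n):.1f}% water by mass")
```

The hexahydrate, for instance, is roughly half water by mass, even though nothing in the formula says how those water molecules are arranged in the crystal.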
{ "domain": "chemistry.stackexchange", "id": 16558, "tags": "inorganic-chemistry, nomenclature, water, notation" }
When is something a Deep Neural Network (DNN) and not NN?
Question: When would a neural network be defined as a Deep Neural Network (DNN) and not just a NN? A DNN, as I understand it, is a neural network with many layers, while a simple neural network usually has fewer... but how many counts as "many", and how few as "few"? Or is there some other definition? What about networks trained using TensorFlow, Caffe and such? I haven't (as far as I know) seen anybody manually design a network with many, many layers. These tools seem to be promoted for creating DNNs, but is it actually a DNN if you only make a network with two layers? Answer: You are right. Generally, any network with more than two layers between the input and output is considered a deep neural network. Libraries like TensorFlow provide efficient architectures for deep learning applications such as image recognition or language modelling, using convolutional neural networks and recurrent neural networks. Another thing to keep in mind is that the depth of the network also has to do with the number of units used in each layer. Mainly, as your non-linear hypotheses get more complex, you will need deeper neural networks.
{ "domain": "datascience.stackexchange", "id": 1497, "tags": "neural-network, deep-learning" }
Concentration of solutions
Question: I'm stuck with this problem. If I have 200 grams of a solution at 30%, how much water should I add so that the concentration becomes 25%? Answer: The answer is that for a simple dilution the following formula applies: $$c_1m_1 = c_2m_2$$ $$ m_2 = \frac{c_1m_1}{c_2} = \frac{(200g)(30\text{%})}{25\text{%}} = 240g$$ Therefore the mass to add is $(240g - 200g) = 40g$ of $\ce{H2O}$ (which is 40 ml of $\ce{H2O}$).
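A quick numeric check of the dilution relation used above, in plain Python, just restating the arithmetic:

```python
# Numeric check of c1*m1 = c2*m2 for the dilution above.
c1, m1 = 0.30, 200.0   # 30 % solution, 200 g
c2 = 0.25              # target concentration, 25 %

m2 = c1 * m1 / c2      # total mass after adding water
water_to_add = m2 - m1
print(m2, water_to_add)   # 240.0 40.0
```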
{ "domain": "chemistry.stackexchange", "id": 4486, "tags": "concentration" }
Finding palindromic strings of even length
Question: Given a string of digits. Find the length of the longest substring of even length, i.e. 0, 2, ..., which is a palindrome or which can be rearranged to form a palindrome (see example below). Here is my code:

def is_palindrome(string):
    length=len(string)
    new_d=[' ']*length #For rearranging the substring
    for i in range(length/2):
        new_d[i]=string[i]
        if string[i] in string[i+1:]:
            sindx=length-1-(string[::-1].index(string[i]))
            new_d[-(i+1)]=string[sindx]
    string1=('').join(new_d)
    return string1==string1[::-1]

def substring(number):
    subs=[]
    length=len(number)
    for i in range(length):
        for j in range(i+2,length+1,2):
            #print(number[i:j])
            yield number[i:j]

def count_palindromes(number):
    palindromes=[]
    palindromes.extend(filter(is_palindrome,substring(number)))
    #print(palindromes)
    if len(palindromes):
        lengths=[]
        for i in palindromes:
            lengths.append(len(i))
        lengths.sort(reverse=True)
        return lengths[0]
    else:
        return 0

number=raw_input()
length=count_palindromes(number)
print(length)

Input: String of numbers (0-9) Output: Length of the longest palindromic substring of even length Example Input: 12345354987 Longest palindromic even-length substring: 345354 On rearranging: 345543 Output: 6 Answer: Notes Your is_palindrome function has a bug. is_palindrome('1101') returns True. Be careful with your function names. count_palindromes doesn't count palindromes, it returns the length of the largest one. substring returns a generator of substrings. It should be plural. If palindromes is a list of palindromes, you can use a list comprehension to get their respective lengths: [len(p) for p in palindromes]. You don't need to sort a list to find its max: max(palindromes, key=len). There's also no need to define an empty list and extend it. You can use list(filter(is_palindrome,substring(number))) or [s for s in substring(number) if is_palindrome(s)].
Theory If you write a generator which iterates over every even-length substring of your string in decreasing length, all you need to check is that every digit appears an even number of times. Code

from collections import Counter

def all_even_substrings(string):
    n = len(string)
    even_n = n + (n % 2)
    for l in range(even_n, 0, -2):
        for i in range(n + 1 - l):
            yield string[i:i + l]

def could_be_palindrome(substring):
    return all(count % 2 == 0 for count in Counter(substring).values())

def largest_possible_palindrome(string):
    return next((s for s in all_even_substrings(string) if could_be_palindrome(s)), None)

print(largest_possible_palindrome('12345354987'))  # 345354
print(largest_possible_palindrome('12334455987'))  # 334455
print(largest_possible_palindrome('123456789'))    # None

It's concise and pretty straightforward. If no palindrome is possible at all, the function returns None. Substrings examples Here's an example for all_even_substrings:

print(list(all_even_substrings('1234')))   # ['1234', '12', '23', '34']
print(list(all_even_substrings('12345')))  # ['1234', '2345', '12', '23', '34', '45']

If you ever need every substring:

def all_substrings(string):
    n = len(string) + 1
    for l in range(n, 0, -1):
        for i in range(n - l):
            yield string[i:i + l]

print(list(all_substrings('123')))   # ['123', '12', '23', '1', '2', '3']
print(list(all_substrings('1234')))  # ['1234', '123', '234', '12', '23', '34', '1', '2', '3', '4']
{ "domain": "codereview.stackexchange", "id": 28994, "tags": "python, algorithm, strings, python-2.x, palindrome" }
Why is net torque zero?
Question: A kid of mass $M$ stands at the edge of a platform of radius $R$ which can be freely rotated about its axis. The moment of inertia of the platform is $I$. The system is at rest when a friend throws a ball of mass $m$ and the kid catches it. If the velocity of the ball is $v$ horizontally along the tangent to the edge of the platform when it was caught by the kid, find the angular speed of the platform after the event. The solution says that the net external torque is zero and proceeds as follows. Conserving angular momentum, $mvR = [I+(M+m)R^2]\omega$, and then they found $\omega$ quite easily. My question is: why would the torque be zero? Isn't the guy on the platform catching it? Won't he experience a tangential force (due to which there is a torque on the platform)? Answer: The external torque for the system consisting of the boy, the ball and the platform is $0$. The torque you mentioned is an internal torque. Notice that the angular momentum conservation equation includes the angular momentum of all 3 components mentioned earlier. Internal torques cannot change the angular momentum of a system, as internal torque is provided by internal forces (in this case the action-reaction pair between the boy+ball and the platform), which always come in action-reaction pairs, equal in magnitude and at the same perpendicular distance from the axis of rotation.
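The conservation equation can be solved for $\omega$ mechanically; the numbers below are made up purely to illustrate the algebra:

```python
# Solving m*v*R = [I + (M + m)*R**2] * omega for omega.
# All numbers are made up, just to illustrate the algebra.
m, v, R = 0.5, 10.0, 2.0   # ball mass (kg), speed (m/s), platform radius (m)
M, I = 40.0, 120.0         # kid mass (kg), platform moment of inertia (kg m^2)

L_initial = m * v * R                 # ball's angular momentum about the axis
I_total = I + (M + m) * R ** 2        # platform + kid + ball after the catch
omega = L_initial / I_total
print(omega)
```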
{ "domain": "physics.stackexchange", "id": 80441, "tags": "homework-and-exercises, rotational-dynamics, torque" }
Product of oxymercuration-demercuration reaction
Question: What will be the major product of this reaction? I know that this is the OMDM reaction, where an alkene is converted to an alcohol following the Markovnikov rule. This question is also atypical because all the previous questions used NaBH4 as the demercuration (reducing) agent instead of NaBD4. According to my reasoning, the answer should be: But my answer is incorrect. Please help me out. PS - Can someone suggest a website/app where I can draw chemical structures? Answer: Well, the answer is correct except that instead of -OH it must be -OMe, because the reagent used is Hg(OAc)2/MeOH and not Hg(OAc)2/H2O. Here's an illustration of the reaction using methanol
{ "domain": "chemistry.stackexchange", "id": 17809, "tags": "organic-chemistry, regioselectivity, c-c-addition" }
How to get statics out of a dynamic force concept?
Question: If one defines force as the time derivative of momentum, i.e. by $$ \vec{F}= \frac{d}{dt} \vec{p} $$ how can this include static forces? Is there a generally accepted way to argue in detail how to get from this to the static case? If not, what different solutions are discussed? Edit: I should add that by static forces I mean forces involved in problems where bodies don't move. Answer: I don't exactly know what you mean by static forces. But I am going to take a wild guess here and assume that by that you mean forces involved in problems where bodies don't move. I think you assumed that Newton's second law quantifies a force. This is actually wrong. First of all, realize that a force is an interaction, and it still acts whether the body on which it acts moves or not. Newton's second law quantifies the total effect of all such forces on a body of mass $m$, and not the force itself. For example, Newton's law of gravitation tells you that the force between two masses is: $$\vec{F} = G\frac{Mm}{r^2}$$ Now this is practically useless unless you specify what a force does to a body. That's where Newton's second law comes in. So along with Newton's second law, you have a complete theory of (classical) gravitation. Also, the $\vec{F}$ in Newton's second law is the total force acting on a body having momentum $\vec{p}$. So when bodies don't move, the net force on them is zero. But that does not mean that there are no forces acting on them.
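A minimal numeric restatement of the answer's point that the second law constrains only the net force: a book resting on a table has two nonzero forces acting on it that sum to zero (the numbers are arbitrary):

```python
# The second law constrains the NET force. A body at rest still has
# forces acting on it; they just sum to zero. E.g. a book on a table:
g = 9.81
m = 1.2                  # book mass in kg (arbitrary)

weight = -m * g          # gravity, downward
normal = +m * g          # the table pushes back up
net = weight + normal
print(net)               # 0.0, so dp/dt = 0 and the book stays put
```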
{ "domain": "physics.stackexchange", "id": 1376, "tags": "forces, newtonian-mechanics" }
Why is $\{xww \mid x,w\in(a+b)^*\}$ regular but $\{ww \mid w\in(a+b)^*\}$ not?
Question: I read on this site (example 12) that for $\{xww \mid x,w\in(a+b)^*\}$, the set of strings generated by the language $L$ is $\{\epsilon,a,b,aa,ab,ba,bb,aaa,\dots\}$, obtained by always taking $w$ as $\epsilon$ and $x\in(a+b)^*$. But my question is: why can they always take $w$ as $\epsilon$ to prove it is regular? By the same logic, why isn't $ww$ regular by taking $w$ as $\epsilon$? Answer: If you take $w=\varepsilon$ in the second language, you can only create the word $ww = \varepsilon \varepsilon = \varepsilon$. This is not the case in the first language, because of the $x$ factor that gives you all the freedom of choice you need.
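The argument can be checked by brute force. The sketch below (illustrative, not from the linked site) tests membership in both languages by trying all splits; note that in_L1 accepts every string, because the split with $w=\epsilon$ and $x=s$ always works:

```python
# Brute-force membership tests for both languages.
def in_L1(s):
    """Is s in { x w w | x, w in (a+b)* } ?"""
    for i in range(len(s) + 1):               # x = s[:i]
        rest = s[i:]
        half = len(rest) // 2
        if len(rest) % 2 == 0 and rest[:half] == rest[half:]:
            return True
    return False

def in_L2(s):
    """Is s in { w w | w in (a+b)* } ?"""
    half = len(s) // 2
    return len(s) % 2 == 0 and s[:half] == s[half:]

# Every string is accepted by L1 (take w = '', x = s), but not by L2:
print(all(in_L1(s) for s in ['', 'a', 'ab', 'aba', 'abab']))   # True
print(in_L2('abab'), in_L2('aba'))                             # True False
```

So the first language is simply all of $(a+b)^*$, which is trivially regular, while the second genuinely requires comparing two unbounded halves.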
{ "domain": "cs.stackexchange", "id": 18840, "tags": "turing-machines, regular-languages, finite-automata, context-free, pushdown-automata" }
Communicating losses and imprecision in liquid measurement by analogy
Question: I'm writing a presentation for people and I need an analogy to convey the following problem: Person A wants to sell a liquid, so he fills a tank, measures the volume and sends it to Person B. Now Person B receives the liquid and wants to know if he received the volume he bought from Person A, so since this liquid is now stored in his tank, Person B measures the liquid volume. Person B's measurement won't exactly match Person A's - due to losses and imprecision in both measurements. These are oil tanks and I need to explain this to a really varied audience that contains people unskilled in metrology. Also, Person A and B can be the same person in different places - which is actually the case that I'm studying. I think an analogy for the tanks would be good, but I can't think of anything! How can I explain this problem in a way that a lay audience will understand? Answer: I'm not sure of the specifics of your problem; it seems relatively straightforward to me. I have previously considered addressing the subject of measurement errors with schoolchildren in the following manner - it may be of help to you and your audience, or the change in subject may confuse and distract them from your topic; that's up to you to decide. Step 1 - give all the students a length of string (considerably longer than a metre) and ask them to put two knots in the string exactly a metre apart, based on their own estimation. The smart ones will use a length they already know as a guide - most common is their own height. Step 2 - measure the lengths achieved against a ruler and see who got closest. A leaderboard of some kind can make this more fun. This can lead into a discussion of estimation in general. Plotting a scatter plot of the measurements might also be interesting and lead into a discussion about distributions (with enough students I think I'd expect a roughly normal distribution?)
Step 3 - now change the ruler for a tape measure and get everyone to remeasure someone else's string - they will inevitably get slightly different results. Write these down and look at the differences. Perhaps now also draw out a scatter plot of the differences. The change in results should bring up all sorts of questions to prompt your discussions on metrology. Examples could be: Are the metre ruler and the tape measure the same length? Which of these is the most accurate measure of a metre? How can we test them? Why do two measures of the same thing come out different? Was there an element of judgement by eye on the part of the measurer? Did the string stretch? Did the tape measure stretch? Were the conditions in the room the same? Have the knots tightened or slipped? Was the level of precision used in recording the measurements the same? What is the right level of precision? Did we agree on what part of the knots we were measuring to? Is there a general trend in the difference results? Are they dominated by bias or random errors? I've done steps 1 and 2 with students, along with similar experiments on time and mass. (Generally they are roughly ok at estimating a metre, largely underestimate how long a second is, and are wildly inaccurate at estimating a kilogram). Step 3 is an extension that I've planned but not yet gotten round to trying out. I think you could also carry out similar experiments/demonstrations with your exact problem of liquids in containers as well.
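If it helps to preview what the class data might look like, here is a purely illustrative simulation of the experiment, modelling each estimate and each re-measurement with assumed normal errors (all numbers are invented):

```python
# Purely illustrative simulation of the string experiment: each
# student's "1 metre" estimate has a bias and spread, and the second
# measurement adds its own small error. All numbers are invented.
import random

random.seed(0)
estimates = [random.gauss(0.98, 0.06) for _ in range(30)]       # knot spacing (m)
remeasured = [e + random.gauss(0.0, 0.005) for e in estimates]  # tape-measure pass
differences = [r - e for r, e in zip(remeasured, estimates)]

mean_diff = sum(differences) / len(differences)
print(f"mean difference between the two passes: {mean_diff:+.4f} m")
```

The spread of the estimates mimics the student-to-student variation (bias plus random error), while the much smaller spread of the differences mimics the measurement-to-measurement variation discussed in step 3.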
{ "domain": "engineering.stackexchange", "id": 56, "tags": "education, metrology" }
How to realize Su-Schrieffer-Heeger model in Qiskit
Question: This is a very specific question, in which I try to implement a simple dimerized tight-binding Hamiltonian in Qiskit. The model is one dimensional, and defined as $$ H = \sum_{\langle i,j\rangle} t_{ij} c^\dagger_i c_j $$ where $\langle i,j\rangle$ denotes the nearest-neighbor coupling with strength $t_{ij}$. The important aspect of the model is that the coupling is dimerized in the following sense. If we label the sites as 1,2,3,4..., then the nearest-neighbor coupling $t_{ij}$ reads, for example, $$ t_{1,2} = 1, t_{2,3} = 2 , t_{3,4} = 1 , t_{4,5} = 2, etc. $$ which is alternating. I try to pretend this is a molecular Hamiltonian and realize it in the same way with "FermionicOperator". However, I am not sure how to properly index the "one-body" integral here, which is the nearest-neighbor coupling $t_{ij}$. Answer: I don't think it really matters how you index your $t_{ij}$. As you mentioned, you can use FermionicOperator; all you need to do is define the one-body integral. This of course can be done in many ways. Here is a convenient way:

import numpy as np

def ssh_ham(gamma, lamda, n):
    sigmax = np.array([[0, 1], [1, 0]], dtype=np.complex_)
    sigmay = np.array([[0, -1j], [1j, 0]], dtype=np.complex_)
    op_eye_x = np.eye(n)
    op_cos_x = 1/2 * (np.eye(n, k=1) + np.eye(n, k=-1))
    op_sin_x = 1j/2 * (np.eye(n, k=1) - np.eye(n, k=-1))
    h = np.kron(gamma*op_eye_x + lamda*op_cos_x, sigmax) + np.kron(lamda*op_sin_x, sigmay)
    return h

This function would return the one-body integral for you. In this function, the $\sigma$ degree of freedom represents the two atoms in the unit cell, and $n$ represents how many unit cells you have in the SSH chain; for the dimerized limit you asked for, you can take $\gamma = 1,\ \lambda = 2$. You can use the output of this function: fer_op = FermionicOperator(h1=ssh_ham(1, 2, 10)) That being said, I'm not sure what the benefit would be.
Trying to solve this Hamiltonian using VQE, for example, is overkill, since this Hamiltonian can be readily diagonalized from its one-body Hamiltonian. Putting this on a quantum computer would give you $2n$ qubits, i.e. a Hilbert space of $2^{2n}$ dimensions, whereas you can diagonalize the Hamiltonian by diagonalizing the $2n \times 2n$ matrix $t_{ij}$.
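To make that last point concrete, here is a sketch (not from the answer above) that builds the $2n \times 2n$ hopping matrix with the alternating couplings from the question and diagonalizes it classically with NumPy; the printed check uses the chiral (sublattice) symmetry of the SSH chain, which pairs the single-particle levels as $\pm E$:

```python
# Classical route: build the 2n x 2n hopping matrix with alternating
# couplings t = 1, 2, 1, 2, ... (as in the question) and diagonalize it.
import numpy as np

def ssh_hopping_matrix(n_cells, t1=1.0, t2=2.0):
    n_sites = 2 * n_cells
    t = np.zeros(n_sites - 1)
    t[0::2] = t1                      # intra-cell bonds: t_{1,2}, t_{3,4}, ...
    t[1::2] = t2                      # inter-cell bonds: t_{2,3}, t_{4,5}, ...
    return np.diag(t, 1) + np.diag(t, -1)

energies = np.linalg.eigvalsh(ssh_hopping_matrix(10))
# Chiral (sublattice) symmetry pairs the single-particle levels as +/-E:
print(np.allclose(energies, -energies[::-1]))
```

This is exactly the "diagonalize the $2n \times 2n$ matrix" shortcut: the free-fermion spectrum needs no qubits at all.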
{ "domain": "quantumcomputing.stackexchange", "id": 1975, "tags": "qiskit, programming, ibm-q-experience, hamiltonian-simulation" }
What does Pauli’s exclusion principle mean in atomic or fundamental way?
Question: It means that no two electrons can have the same $n$, $l$ and $m_l$ unless they have two different spin quantum numbers. I want to know why this rule is valid. It means there must be something else happening inside the atom which forces this rule to be true. I learned from the web that the electron's dual behaviour and electromagnetic forces result in the formulation of this law, but what is the real reason? Answer: Pauli's exclusion principle is a consequence of electrons being indistinguishable fermions. Fundamental particles are indistinguishable in that two of the same type differ only in a few properties but otherwise behave identically (compare that, say, to two apples - they are always in principle distinguishable because they differ in so many ways). Fermions exhibit an associated statistical property when you have a collection of the otherwise indistinguishable particles. The statistical property follows from how the wavefunction of the collection behaves under symmetry (exchange) operations. You have two choices when you swap particles in such a wavefunction: retain the sign or switch signs. That is one fundamental distinguishing property between fermions and bosons (all particles can be classified as one or the other). It happens that nature allows both cases. We chose to call one fermions (to honor Enrico Fermi who co-discovered them), and the particles we call electrons are fermions (you can look into the details of the Standard Model to work out exactly how changes to the properties of the electron would change the universe). Fermions have half-integer values of the spin quantum number. Bosons have integer values. If you have multiple indistinguishable fermions, the QM wavefunction changes sign (is antisymmetric) under exchange of any two of the particles.
For the exchange to alter the sign the two exchanged fermions must differ in some property, making the wavefunction antisymmetric, for instance if intrinsic angular momenta (spins) have opposing quantization (quantum numbers $\pm \frac12$). The consequence of this is explained well in the Wikipedia If two fermions were in the same state (for example the same orbital with the same spin in the same atom), interchanging them would change nothing and the total wave function would be unchanged. The only way the total wave function can both change sign as required for fermions and also remain unchanged is that this function must be zero everywhere, which means that the state cannot exist. This reasoning does not apply to bosons because the sign does not change.
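A tiny numerical illustration of that antisymmetry argument (the orbitals below are arbitrary made-up functions): an antisymmetrized two-particle amplitude built from two single-particle states vanishes identically when the two states coincide.

```python
# An antisymmetrized two-particle amplitude on a grid:
# psi(x1, x2) = phi_a(x1) phi_b(x2) - phi_b(x1) phi_a(x2).
# If phi_a == phi_b, psi vanishes identically: no two fermions in one state.
import numpy as np

def antisymmetric_pair(phi_a, phi_b, xs):
    a, b = phi_a(xs), phi_b(xs)
    return np.outer(a, b) - np.outer(b, a)

xs = np.linspace(-1.0, 1.0, 50)
phi0 = lambda x: np.exp(-x ** 2)         # two arbitrary made-up orbitals
phi1 = lambda x: x * np.exp(-x ** 2)

psi_diff = antisymmetric_pair(phi0, phi1, xs)   # distinct states: nonzero
psi_same = antisymmetric_pair(phi0, phi0, xs)   # identical states: exactly zero
print(np.abs(psi_diff).max() > 0, np.abs(psi_same).max() == 0)
```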
{ "domain": "chemistry.stackexchange", "id": 14885, "tags": "quantum-chemistry, electrons, orbitals, atoms, atomic-structure" }
Test to determine if universe is infinite or not
Question: Currently it is not known whether the universe is finite or infinite. Is there any test that can be performed (theoretically) to know whether the universe is infinite or not? I'm still in high school, so it'd be great if the answer provides a more qualitative test rather than a quantitative one, as I doubt I'll be able to understand the mathematics behind it. Answer: Given that we can only interact with or observe the universe out to a certain distance, it isn't possible to test whether the universe is infinite. Even if it is, the parts of it beyond a certain distance are inaccessible to us through any means, because all information is limited to the speed of light, including the travel of gravitational waves and massless particles; so the only way we'll ever be able to tell whether it is finite is if someday an edge shows up somewhere, in which case we'll know it's not infinite.
{ "domain": "physics.stackexchange", "id": 21565, "tags": "cosmology, universe" }
Getting descriptive outcomes from ExePathAction
Question: I am trying to design some recovery behaviors for a bot following a set of waypoints, with velocities issued by TEB. I am using ExePathAction to start the navigation with TEB. I would like to know when the path is blocked (BLOCKED_PATH = 109) in order to trigger either a GetPathAction until the next waypoint, or maybe some other recovery behaviour. Are the descriptive outcomes + messages already implemented in MBF? Or do these descriptive outcomes have to be implemented in the controller (in this case: TEBLocalPlannerROS)? Or should it be implemented by extending some function of the controller_action or abstract_controller_execution? Thank you for your time, any help will be appreciated! :) Originally posted by curi_ROS on ROS Answers with karma: 166 on 2019-05-15 Post score: 0 Answer: Hi, MBF just provides extended interfaces for the plugins, with error code and message. But it's up to the plugins to provide this information. If a plugin doesn't implement the extended MBF interface, we wrap the old nav_core interface and report SUCCESS / FAILURE depending on whether it returns true / false. For TEB, we have implemented MBF's extended interface in our fork. We report the following possible outcomes: SUCCESS NOT_INITIALIZED INTERNAL_ERROR INVALID_PATH NO_VALID_CMD To my understanding, TEB doesn't explicitly recognize situations where the BLOCKED_PATH outcome makes sense, but feel free to PR it if you have already implemented the code! Originally posted by jorge with karma: 2284 on 2019-05-15 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by curi_ROS on 2019-05-15: We don't have any code for the BLOCKED_PATH event yet, but this helps. Thank you! Comment by artemiialessandrini on 2020-07-02: people? @curi_ROS
{ "domain": "robotics.stackexchange", "id": 33021, "tags": "ros, navigation, teb-local-planner, ros-kinetic" }
What is a rigid wave function?
Question: The London equations prior to BCS that describe superconductivity require assuming the wavefunction describing the superconducting pair of electrons to be rigid. I've been looking all over trying to find what it means for a wave function to be rigid. All I have found is that the phase of the wave function is what is actually assumed to be rigid, and even then none of these sources explain what it means but rather just use it or assume the reader knows. What does it mean for a wave function to be rigid? And why is this a necessary condition for the validity of the London equations? Answer: As you might have seen, the London brothers were the first to construct a successful theoretical model of superconductivity, translated into the two famous London equations, of which the second one, $$\textbf{B}=-\mu_0\lambda_L^2 \nabla\times\textbf{j}_s$$ describes the Meissner effect. Now this was more of a phenomenological theory, and I won't go into detail, but the basic idea was that they assumed a spatially constant density of superconducting charge carriers (later we figured out that these were Cooper pairs). However, later this phenomenological model was extended by Ginzburg and Landau. They allowed the variation of the Cooper pair density, which was assumed to be constant in London's theory. Furthermore, they introduced a characteristic length scale over which the Cooper density (order) can change, namely the Ginzburg-Landau coherence length $\xi_{GL}$. This was an important step forward because by introducing this coherence length $\xi_{GL}$ one can distinguish between type I and type II superconductors by the relative length scale of $\xi_{GL}$ w.r.t. $\lambda_L$. Both of these superconductor types can be in the Meissner state below a certain critical temperature and magnetic field (type II can also be in the Shubnikov phase between certain critical temperatures and fields).
We expect the Meissner state to occur when we have a strong expulsion of the external magnetic field, which means that the coherence length $\xi_{GL}$ needs to be sufficiently large compared to the penetration depth $\lambda_L$. This will indeed be the condition for the Meissner state to occur. That is what they mean in your text when they speak about the rigidity of the wave function (or, alternatively, 'superconducting coherence'), namely a large coherence length; a spatially homogeneous Cooper density (see p. 41 of the link: 'the second London equation can only be derived from BCS theory by assuming a spatially homogeneous BCS state'). Note: I would recommend reading the standard text on superconductivity by Michael Tinkham, because in the link you mentioned they are not very careful with the distinction between the different characteristic length scales. In fact, there are three main characteristic length scales: the London penetration length $\lambda_L$, the GL coherence length $\xi_{GL}$, and the Cooper pair length $\xi_0$.
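The type I / type II distinction mentioned above can be made quantitative: in Ginzburg-Landau theory the dividing line is the ratio $\kappa = \lambda_L/\xi_{GL} = 1/\sqrt{2}$. A toy classifier (the numbers in the example calls are illustrative, not measured material constants):

```python
# Ginzburg-Landau classification by kappa = lambda_L / xi_GL.
import math

def classify(lambda_L, xi_GL):
    kappa = lambda_L / xi_GL
    return "type II" if kappa > 1 / math.sqrt(2) else "type I"

# Illustrative numbers only (nm), not measured material constants:
print(classify(50, 1600))   # lambda_L << xi_GL: strong coherence, type I
print(classify(200, 10))    # lambda_L >> xi_GL: vortices allowed, type II
```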
{ "domain": "physics.stackexchange", "id": 61170, "tags": "quantum-mechanics, condensed-matter, wavefunction, terminology, superconductivity" }
Catkin-compiled Code Runs 3x slower
Question: I've been running a slate of tests on the new hydro navigation. I was trying to figure out why updating the costmap in the new hydro version takes so much longer than in the old 'Fuerte' version. (Note that both of these are tested in Groovy) I searched and searched to figure out how the new code was different. But it wasn't the code at all. It was the compiler. The colors of each segment of the bar are irrelevant in this case. The first row shows how long the catkin-compiled version of the Hydro code takes to update the costmap once. The second bar shows the same operation in Fuerte, which is compiled using Rosbuild. The third is the same exact code from the first bar, but compiled using Rosbuild. My question is: what accounts for this 3x slowdown when compiling with Catkin? Links: Catkin CMake Rosbuild CMake Other Edit: The benchmark was performed using locally compiled packages and the timing was measured with repeated gettimeofday calls (since that infrastructure was already in the code for measuring update-time.) Edit 2: Thanks to William's comment, I figured it out. Explanation: The update loop timing is usually only available in the ROS_DEBUG information. Printing the DEBUG logs takes a bit of time, which you can see with the red bars. For the packages I compiled, I also changed it to be printed quicker without ROS_DEBUG. However, I couldn't do that for the debian package. The key result is that Rosbuild code (which, as William mentions, was already built in Release mode) runs as fast as the catkin code with the release flag set. I don't know why the Debian is as slow as it is, but it's not going to bother me immediately. (@William: Put your answer below for some sweet sweet karma) Originally posted by David Lu on ROS Answers with karma: 10932 on 2013-08-09 Post score: 13 Original comments Comment by Dirk Thomas on 2013-08-09: Can you describe how you did the benchmark? Did you compile the packages locally from source?
Did you use the available Debian packages? Comment by Dirk Thomas on 2013-08-09: Can you please post a comparison between the locally self-built package in Hydro vs. the publicly available Debian package? Comment by David Lu on 2013-08-09: Working on the tests, but a quick note: The set(ROS_BUILD_TYPE Release) line isn't mine. It's Eitan's from 2009. https://github.com/DLu/navigation/commit/fa4430e4780453d69150d8a8b40d7f427970950f Comment by William on 2013-08-09: @David Lu, sure it is actually part of the default rosbuild CMakeLists.txt template. Comment by mateus03 on 2014-07-18: When I install ROS Hydro on my PC, it already comes with the navigation stack. Is the code compiled with catkin without the flag set to Release? How can I compile it with the flag? Comment by ahendrix on 2014-07-19: Yes - all packages built on the build farm are compiled in release mode. Comment by mateus03 on 2014-07-20: Ok, thank you Answer: Just to reiterate, be mindful of the ROS_BUILD_TYPE, whether set by default in your CMake or invoked on the command line. Compiling with catkin_make -DCMAKE_BUILD_TYPE=Release will ensure that your Catkin code is as fast as possible. Originally posted by David Lu with karma: 10932 on 2013-08-09 This answer was ACCEPTED on the original site Post score: 11 Original comments Comment by Dirk Thomas on 2013-08-09: Just for the record, these are the default build flags with CMake, and if not defined CMake will use "none": None: *no flags* Debug: -g Release: -O3 -DNDEBUG RelWithDebInfo: -O2 -g MinSizeRel: -Os -DNDEBUG Comment by David Lu on 2013-08-11: As a side note, do you realize how low the probability of me memorizing that exact flag configuration is? For such an important functionality, it seems like a relatively arcane incantation. Comment by Dirk Thomas on 2013-08-11: That is why CMake uses reasonable default flags for the different build types.
And catkin_make provides completion on arguments like "-DCMAKE_BUILD_TYPE" since they are not that memorable. Comment by K_Yousif on 2013-10-13: It works! It took me a week to figure this out. Thanks a lot!
{ "domain": "robotics.stackexchange", "id": 15216, "tags": "navigation, catkin, costmap, costmap-2d" }
Poly-time computability of inversion of poly-time real functions
Question: At pp. 7-8 of Ker-I Ko's Computational Complexity of Real Functions (1991), the following is stated for one dimensional cases: Let $INV_1$ be the operator that maps a one-to-one function $f:[0,1]\rightarrow [0,1]$ to its inverse function $f^{-1}$. Then $INV_1(f)$ is polynomial-time computable for all polynomial-time computable, one-to-one real functions $f$ on $[0,1]$. And for two dimensional cases: Let $INV_2$ be the operator that maps a one-to-one function $f:[0,1]^2\rightarrow [0,1]^2$ to its inverse function $f^{-1}$. Then $P=NP$ implies that for all polynomial computable, one-to-one real functions $f$ on $[0,1]^2$, $INV_2(f)$ is polynomial-time computable, and this in turn implies $P=UP$. How is the one-dimensional case derived? And how can the two-dimensional case be extended to n-dimensions? Answer: In 1-dimensional case, the function has to be strictly-monotone. Given $y_n$ do a binary search to find $x_n$. If $\mathsf{P} = \mathsf{NP}$, given $y_n$ we can guess $x_n$ and then verify that $f(x_n) = y_n$. The dimension doesn't matter for this part. For the other part, that is, if $INV_d$ is polytime then $\mathsf{P} = \mathsf{UP}$, we can reduce it to $INV_2$ by noting that every 2-dimensional function can be extended to a $d$-dimensional one by letting other coordinates map as identity. So for $d'<d$, $INV_d$ being polytime computable implies that $INV_{d'}$ is also polytime. Therefore $INV_d$ being polytime implies that $INV_2$ is polytime and therefore $\mathsf{P}=\mathsf{UP}$.
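The 1-dimensional binary search described in the answer can be sketched in a few lines of Python (a toy illustration, not from Ko's book: `f` stands in for any strictly increasing polynomial-time computable function, and `n` plays the role of the precision parameter):

```python
def invert(f, y, n):
    """Approximate f^{-1}(y) on [0, 1] to within 2**-n by binary search.

    f must be strictly increasing; each step makes one 'oracle' call
    to f, so computing the inverse adds only polynomial overhead.
    """
    lo, hi = 0.0, 1.0
    for _ in range(n):
        mid = (lo + hi) / 2
        if f(mid) < y:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# f(x) = x**2 is one-to-one on [0, 1]; its inverse at y = 0.25 is 0.5.
x = invert(lambda t: t * t, 0.25, 50)
```

In dimension $d \ge 2$ there is no ordering to search over, which is exactly why that case ties into $\mathsf{P}$ vs $\mathsf{NP}$ rather than admitting a direct bisection argument.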
{ "domain": "cs.stackexchange", "id": 3166, "tags": "complexity-theory, computable-analysis" }
Simple technology quiz
Question: This is my first program that uses more than just boring ol' if/else statements - I made a whole 246 line game that's still a work in progress simply using if/else - and I'm going to email this to my teacher later tonight. Please suggest more things I can add and report any bugs (although I've found none so far). # Starting points value points = 0 # correct() calls this def correct(): global points points = points + 1 print("Correct!") print("You have",points,"points!") a = input() # incorrect() calls this def incorrect(): print("Incorrect!") ab = input() print("Welcome to this five question general IT quiz!") print("If you're stuck and need help, type 'Hint'!") a = input() # Question 1 print("Question one:") q1 = input("What does LAN stand for?") if q1 == "Hint": print("The N stands for network...") q1 = input("What does LAN stand for?") elif q1 == "hint": print("The N stands for network...") q1 = input("What does LAN stand for?") if q1 == "Local Area Network": correct() elif q1 == "local area network": correct() elif q1 == "Local area network": correct() elif q1 == "Local area Network": correct() else: incorrect() # Question 2 print("Question 2:") print("Fill in the blank.") q2 = input("A monitor is an example of an ------ device.") if q2 == "Hint": print("Another example would be speakers or headphones.") q2 = input("A monitor is an example of an ------ device.") elif q2 == "hint": print("Another example would be speakers or headphones.") q2 = input("A monitor is an example of an ------ device.") if q2 == "Output": correct() elif q2 == "output": correct() else: incorrect() # Question 3 print("Question 3:") q3 = input("True or false: To connect to the internet, you MUST have an ethernet cable connected from your router to your PC.") if q3 == "Hint": print("Remember, there are two types of internet connection.") q3 = input("True or false: To connect to the internet, you MUST have an ethernet cable connected from your router to your PC.") elif q3 ==
"hint": print("Remember, there are two types of internet connection.") q3 = input("True or false: To connect to the internet, you MUST have an ethernet cable connected from your router to your PC.") if q3 == "False": correct() elif q3 == "false": correct() else: incorrect() # Question 4 print("Question 4:") q4 = input("What is the processor made by Intel that is used in PCs designed for networking/server hosting?") if q4 == "Hint": print("Begins with an X!") q4 = input("What is the processor made by Intel that is used in PCs designed for networking/server hosting?") elif q4 == "hint": print("Begins with an X!") q4 = input("What is the processor made by Intel that is used in PCs designed for networking/server hosting?") if q4 == "Xeon": correct() elif q4 == "xeon": correct() else: incorrect() # Final Question print("Final question:") q5 = input("Radeon, EVGA, XFX and Sapphire are companies that make what computer component?") if q5 == "Hint": print("A better one of these will boost graphical performance and framerate in games.") q5 = input("Radeon, EVGA, XFX and Sapphire are companies that make what computer component?") elif q5 == "hint": print("A better one of these will boost graphical performance and framerate in games.") q5 = input("Radeon, EVGA, XFX and Sapphire are companies that make what computer component?") if q5 == "graphics cards": correct() elif q5 == "Graphics Cards": correct() elif q5 == "GPUs": correct() elif q5 == "gpus": correct() elif q5 == "graphics card": correct() elif q5 == "Graphics Card": correct() elif q5 == "gpu": correct() elif q5 == "GPU": correct() else: incorrect() # End of the quiz print("The quiz is over! You scored",points,"points!") if points >= 4: print("Pretty good!") if points == 3: print("Not bad...") if points == 2: print("Meh.") if points < 2: print("Boo!
Try again!") b = input() Answer: First, you can't possibly check for all possible uppercase/lowercase combinations of the answers, and it is looking messy even as it currently is. This is the perfect place to convert the input to lowercase like this: q1 = input("What does LAN stand for?").lower() Now, all you need to do is check for "hint" and "local area network" for question one: if q1 == "hint": print("The N stands for network...") q1 = input("What does LAN stand for?").lower() if q1 == "local area network": correct() else: incorrect() In question five, you accept multiple answers. Why don't you handle it like this instead of having multiple if/elif/else statements? if q5 == "hint": print("A better one of these will boost graphical performance and framerate in games.") q5 = input("Radeon, EVGA, XFX and Sapphire are companies that make what computer component?").lower() if q5 in ['gpu', 'gpus', 'graphics card', 'graphics cards']: correct() else: incorrect() In this print call, you should put spaces around your string literals and variables to make it easier to read: print("You have",points,"points!") q1 isn't a very descriptive variable name, and it does not hold the question, as it suggests. Also, why are you using one variable name for each question? This code can be cleaned up a lot by creating a method to administer the questions: def administer_question(question, answers, hint): while True: user_answer = input(question) if user_answer == "hint": print(hint) elif user_answer in answers: correct() return else: incorrect() return Now, the user can input "hint" as many times as they wish.
This is called as: administer_question("Radeon, EVGA, XFX and Sapphire are companies that make what computer component?", ["gpu", "gpus", "graphics card", "graphics cards"], "A better one of these will boost graphical performance and frame rate in games.") I would pass a string to the incorrect() function to display the correct answer to the user: def incorrect(correct_answer): print("Incorrect!") print("The correct answer is '", correct_answer, "'") pause = input() Note also that I renamed the variable ab to pause so it demonstrates better that the program is just waiting for the user to input a value to continue. However, with all these pauses for input, maybe you should explain that to the user so they don't wait for the program to respond and wonder what is going on.
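Taking that refactor one step further, the whole quiz can be driven by data, with the input/output functions injected so the logic is testable without a live user (a sketch with abbreviated question data; `run_quiz`, `ask`, and `say` are names I am inventing here, not from the original code):

```python
def run_quiz(questions, ask=input, say=print):
    """Administer (question, answers, hint) triples; return the score."""
    score = 0
    for question, answers, hint in questions:
        while True:
            reply = ask(question).lower()
            if reply == "hint":
                say(hint)            # stay in the loop and ask again
            elif reply in answers:
                say("Correct!")
                score += 1
                break
            else:
                say("Incorrect! The answer was '{}'.".format(answers[0]))
                break
    return score

QUESTIONS = [
    ("What does LAN stand for? ", ["local area network"],
     "The N stands for network..."),
    ("A monitor is an example of an ------ device. ", ["output"],
     "Another example would be speakers or headphones."),
]
```

Adding a question is now a one-line data change instead of another if/elif chain, and the scoring messages live in exactly one place.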
{ "domain": "codereview.stackexchange", "id": 12783, "tags": "python, beginner, homework, quiz" }
Are human eyes interferometers?
Question: It seems like 2 eyes is enough “wetware” to do interferometry inside the brain. Can you definitely see some reason why this could not be happening, or some way to test if it does happen? Answer: To do interferometry in post-processing after detection of radiation, the detector must be able to record the phase of the radiation. The eye cannot do this: the photochemical reactions that record the radiation are insensitive to phase. In instrumentation, radio interferometry may be done post-detection because phase-sensitive radio detectors are practical. Optical interferometry is done pre-detection, using mirrors.
{ "domain": "physics.stackexchange", "id": 91552, "tags": "optics, vision, biology, interferometry, perception" }
Check if one string is a permutation of another using Python
Question: The code below is an attempt at a solution to an exercise from the book "Cracking the Coding Interview." I believe that the worst case time complexity of the code below is \$O(n)\$, where n is the length of each string (they should be the same length since I am checking if their lengths are equal) and the space complexity is \$O(n)\$. Is this correct? In particular, does checking the length of each string take \$O(1)\$ time? def is_permutation(first_string, other_string): if len(first_string) != len(other_string): return False count_first = {} count_other = {} for char in first_string: if char in count_first.keys(): count_first[char] += 1 else: count_first[char] = 1 for char in other_string: if char in count_other.keys(): count_other[char] += 1 else: count_other[char] = 1 for char in count_first.keys(): if char not in count_other.keys(): return False elif count_first[char] != count_other[char]: return False return True Answer: Yes, len(str) should be O(1) in Python. (Good question!) Each of your for loops is O(n), so your whole function is O(n). Your counting loops could be written more compactly as for char in first_string: count_first[char] = 1 + count_first.get(char, 0) The epilogue could be simplified to return count_first == count_other It pays to get familiar with the standard Python library, though. Your entire function could be more simply implemented as from collections import Counter def is_permutation(a, b): return len(a) == len(b) and Counter(a) == Counter(b) … where len(a) == len(b) is an optional optimization. Writing less code simplifies maintenance and tends to create fewer opportunities for introducing bugs (as in Rev 2 of your question).
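A few quick checks of the Counter version from the answer (nothing here beyond the standard library):

```python
from collections import Counter

def is_permutation(a, b):
    # len() on a str is O(1): CPython stores the length in the object header.
    # Each Counter is built in O(n), so the whole check is O(n) time and space.
    return len(a) == len(b) and Counter(a) == Counter(b)

print(is_permutation("listen", "silent"))  # True
print(is_permutation("aab", "abb"))        # False: same length, different counts
```

The length check short-circuits before any counting is done, which is why it is worth keeping even though the Counter comparison alone would give the right answer.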
{ "domain": "codereview.stackexchange", "id": 33845, "tags": "python, strings, python-3.x, complexity" }
Properties of plasmas
Question: In chemistry one can recognize that the four states of matter are solid, liquid, gas and plasma. The first is rigid, and has a definite shape and volume. The second doesn't have a shape, and assumes the shape of its container, but it has a fixed volume. The third doesn't have either a shape or a fixed volume and assumes the volume and shape of its container. What about the fourth one (plasma)? Answer: Plasma is made up of ionized gas, so molecules have a positive electric charge and valence electrons are totally or partially separated from their nuclei. Plasma is different from a gas since it has a high temperature and emits radiation; think of the Sun and other stars, which are made up of plasma and show both these properties. Plasma hasn't got a proper volume, like gases; e.g. stars can expand or contract under the opposing effects of gravity and nuclear fusion. For example, this property is important to comprehend the formation of white dwarfs and neutron stars, a process caused by high pressure due to gravity. Little trivia: there is also a fifth state of matter, whose name is Bose-Einstein Condensate (BEC).
{ "domain": "chemistry.stackexchange", "id": 3952, "tags": "physical-chemistry, phase" }
Table builder pattern
Question: Trying to combine functional style (immutable objects) and flexibility of property setters. For the sake of example, let’s say we have a soil types table with two attributes: Color and Name. I am looking for a way to alternate Names, but not Color. Here is how I solved it: // retrieving: all objects are immutable SoilTypes types = SoilTypes.Default; ISoilType clay1 = types.Clay; ISoilType clay2 = types[3]; // derive an alternated immutable copy SoilTypes altTypes = types .With(tt => { // tt.SensitiveFines.Color is still read only tt.SensitiveFines.Name = "Very sensitive fines!"; tt[2].Name = "Purely Organic soil!"; }); // retrieving: everything is immutable ISoilType sensitiveFines = altTypes.SensitiveFines; Where this interface is immutable: public interface ISoilType { Color Color { get; } string Name { get; } } And this class is mutable: public class SoilType : ISoilType { public static implicit operator SoilType((Color Color, string Name) tuple) => new SoilType(tuple.Color, tuple.Name); internal SoilType(ISoilType source) : this(source.Color, source.Name) { } internal SoilType(Color color, string name) { Color = color; Name = name; } public Color Color { get; } public string Name { get; set; } } And this non-generic class is immutable: public class SoilTypes : SoilTypes<ISoilType> { public static SoilTypes Default = new SoilTypes( (White, "Undefined"), (Red, "Sensitive Fines"), (Green, "Organic Soil"), (Blue, "Clay"), (Orange, "Silty Clay")); public SoilTypes(params SoilType[] types) : base(types) { } public SoilTypes With(Action<SoilTypes<SoilType>> update) { var copy = this .Select(t => new SoilType(t)) .ToArray(); update(new SoilTypes<SoilType>(copy)); return new SoilTypes(copy); } } while this generic base used in both situations: public class SoilTypes<TType> : ReadOnlyCollection<TType> where TType : ISoilType { internal SoilTypes(TType[] types) : base(types) { } public TType Undefined => this[0]; public TType SensitiveFines => this[1]; public 
TType OrganicSoil => this[2]; public TType Clay => this[3]; public TType SiltyClay => this[4]; } Answer: I'm afraid this is not fully immutable because I am able to change the Name with a simple cast: altTypes.Dump(); ((SoilType)altTypes.SensitiveFines).Name = "foo"; altTypes.Dump(); The underlying data type is still SoilType so the interface does not protect the data from being overridden. Consider a user that writes a function like this one because he doesn't like interfaces :-) public static void foo(SoilType bar) { bar.Name = "new name"; } and calls it foo((SoilType)altTypes.SensitiveFines); altTypes.Dump(); Name changed. Unfortunately I have no idea how to prevent it yet.
{ "domain": "codereview.stackexchange", "id": 25369, "tags": "c#, design-patterns, functional-programming" }
open and read Fasta file (raw data)
Question: I have a big fasta.dataset file containing half a million proteins (1.0 GB). I have four lines for each protein code: line 1: the protein code line 2: protein length in amino acids line 3: amino acid sequence line 4: secondary structure Now, I am trying to open and read it in python (Biopython), and it does not work: filename = 'pdb.fasta_qual.dataset' sequences = SeqIO.parse ( filename,'fasta') for record in sequences: example = record break print(example) How can I read it in python and loop through the file to look at line 3 for each protein to count the sequence length and distribution? Here are the first few lines of my file (my file contains 500,000 proteins, each with 4 lines: name, protein length in amino acids, the sequence represented by letters, which is what I want to analyze, and the secondary structure): 4LGTD 247 M S E K L Q K V L A R A G H G T . . E E H H H H H H H T T S S . I want to open and read the file and loop through line 3 for each protein to calculate the length of the sequences and plot a histogram to check the distribution. The output I am expecting is: The len for the first seq is = The len for the second seq is = Until the len of the last sequence, which is number 500,000 = And then I can plot a histogram for the len of the sequences. NOTE: I have opened and read the file's info on Linux, but I could not in Python. Answer: Let me try to explain the issues a bit better. A dataset of 0.5 million at 1 GB will struggle to load into Biopython. Biopython isn't great at handling large numbers of sequences - for a regular desktop/laptop - and what will happen is the parser will freeze or else take an enormous amount of time (many hours). I call this a "RAM bottleneck". A generator is an easy workaround. The other issue with the code is there are lots of bugs combining format and code.
There is a clarity issue as well: it looks like you want to calculate the secondary structure, rather than import the secondary structure alongside the alignment. I could be wrong on this though; certainly secondary structure is present in the ad hoc alignment. There is an alignment format that handles both secondary structure and sequence data. You might think about restructuring the question and your approach into multiple questions. 1. Reformatting the ad hoc alignment to permit import into Biopython on a pilot data set (not 0.5 million sequences). This is close to @terdon's answer. 2. Code that will process very large files in Biopython. This is the answer below. 3. An alignment format via Biopython (probably output, could be input) that integrates linear sequence with secondary structure. 4. Plotting a histogram for 0.5 million sequences. Points (Questions) 1. to 3. can be leveraged by Biopython. Point (Question) 4 can be done via Python too - graphing/histograms are pretty good. There seems to be an underlying issue of project management for coding-based work. If this is a RAM bottleneck then the solution is below. The easiest approach is to use a generator (please see 'Format'). example = (record for record in SeqIO.parse(filename,'fasta')) print (next(example)) # example.id ? A generator will deal with the sequences one by one instead of trying to load the whole thing into memory. It may not be suitable for certain complex calculations, but it is by far the easiest approach to the problem (if the problem is the OP is struggling to load a million sequences into memory). Note I always used yield commands but @Steve introduced the () format and I definitely prefer it. Also note that manipulating the output of a generator can be different to a list, tuple or dictionary (calculation depending). Format The format supplied isn't fasta, so SeqIO will not work in fasta mode and should throw an error/exception. The format you need to leverage Biopython is very different.
I would recommend a standard 'open' command and passing this to a custom dictionary. Also the output of print(next(record)) should produce an object symbol, not the actual data e.g. record.id would be needed to see data: it does however demonstrate the generator is working.
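To make the plain-`open` route concrete, here is a minimal generator for the 4-line records shown in the question (a sketch; it assumes residues and structure symbols are space-separated as in the excerpt, and `records` yields one tuple at a time, so the 1 GB file never has to fit in memory):

```python
import io
from collections import Counter

def records(handle):
    """Yield (name, declared_length, sequence, structure) per 4-line record."""
    while True:
        name = handle.readline().strip()
        if not name:                      # end of file
            return
        declared_len = int(handle.readline())
        seq = handle.readline().split()   # e.g. ['M', 'S', 'E', 'K', ...]
        ss = handle.readline().split()
        yield name, declared_len, seq, ss

# Length distribution, one record at a time (here on a tiny in-memory example):
example = io.StringIO("4LGTD\n4\nM S E K\n. . E E\n")
lengths = Counter(len(seq) for _, _, seq, _ in records(example))
```

With the real file, replace the `StringIO` with `open('pdb.fasta_qual.dataset')`; the resulting `Counter` of lengths feeds straight into a histogram.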
{ "domain": "bioinformatics.stackexchange", "id": 2407, "tags": "fasta, proteins, protein-structure, biopython" }
How to choose the oversampling ratio correctly?
Question: As many of us have noticed, last two months I am working on GMSK modulation and demodulation design. Before continuing my research, I want to be sure I chose the correct oversampling ratio ( how many zeros should I add inside my signal before convolution with the Gaussian filter?). How to choose the correct oversampling ratio for nonlinear modulation? I know, that when I choose the oversampling factor (ovs), it must hold (according to Nyquist, and thus to avoid aliasing) Ts<<T ( T- symbol time, Ts - sampling time). The oversampling factor cannot be too small. I have "understood" from Mr Dan Boschen's explanation the following: for case, BT = 0.25 ovs=8 is sufficient for not introducing aliasing but also acceptable from a complexity point of view (not too large). BUT how should I determine that this oversampling ratio is sufficient? Should it be calculated somehow? Should I implement an additional process? I am sorry if it is the stupidest question you have ever read here...but I want understand this process. Answer: The referenced post does not imply that 8 samples per symbol are required, that is only what was used in order to show the extended spectrum as copied below: What I have also included in this copy of the plot are additional lines showing what would occur if we decreased the number of samples per symbol. At 8 samples per symbol, the normalized frequency shown extends from $f=-0.5$ to $f=+0.5$ corresponding to $\pm f_s/2$ where $f_s$ is the sampling rate, and $f_s/2$ is the "Nyquist boundary" beyond which spectral aliasing will occur. We see how the sidelobes of the spectrum continue to decrease as we approach the Nyquist boundary, so the determination of how many samples per symbol (which set the sampling rate) is set on how much aliasing we will tolerate, and as explained in other posts, allowing for simplification of subsequent filtering such as the analog reconstruction filter after the D/A converter. 
As shown, the actual spectral levels in the plot in proximity to $f = -0.5$ and $f=0.5$ include the aliasing so would be raised slightly at these locations than if we had extended the number of samples even further. With a pulse duration of 4 symbols the overall distortion is nearly -80 dB, so in most cases this would be of no concern. I have also drawn in additional bold lines showing where the Nyquist boundary would be if we were to decrease the sampling rate to just 2 samples per symbol (the new sampling rate would be at the dashed lines indicated by $f=\pm 0.25$ corresponding to decreasing 8 samples per symbol by 4. The Nyquist boundary would then occur at $f=\pm 0.125$ and the spectrum as shown just past this would fold back onto the main passband in the center, contributing additional distortion to our signal. With a symbol duration of 4 symbols, we would have spectral folding at approximately -70 dB (also likely of no consequence). More significantly, the waveform will repeat spectrally at every multiple of the new sampling rate (in this case at $f=\pm 0.25$, $f=\pm 0.5$, $f=\pm 0.75$, etc..) thus imposing much tighter requirements on the D/A converter or subsequent digital interpolation filters if used. In this case, simplifying this filtering would be the prime consideration in choosing the number of samples per symbol, balanced with minimizing the digital sampling rate leading to lower power and lower complexity. Note that in general as $BT$ gets higher the spectral occupancy of the waveform relative to the sampling rate increases. This means that at lower $BT$ values, the amount of oversampling needed decreases to meet a certain waveform quality requirement.
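As a rough numeric check, it is easy to compute where the Gaussian pulse-shaping filter by itself sits at the Nyquist boundary for a candidate oversampling ratio. Note this is only the shaping filter's response, not the GMSK RF spectrum (which must be evaluated from the modulated waveform, as in the plot above); the formula below uses the standard relation that a Gaussian filter with 3 dB bandwidth $B = BT/T$ has $|H(f)|^2 = 2^{-(f/B)^2}$, and that the Nyquist boundary for `ovs` samples per symbol sits at $f = \mathrm{ovs}/(2T)$:

```python
import math

def gaussian_atten_db(bt, ovs):
    """Gaussian-filter attenuation (dB) at the Nyquist frequency ovs/(2T)."""
    f_over_b = ovs / (2.0 * bt)                  # (fs/2) / B with B = BT/T
    return -10.0 * math.log10(2.0) * f_over_b ** 2

for ovs in (2, 4, 8):
    print(ovs, round(gaussian_atten_db(0.25, ovs), 1))
```

The trend is the point: doubling the oversampling ratio quadruples the dB margin of the shaping filter. The aliasing levels that actually matter are those of the modulated waveform's spectrum, which is why the final choice balances tolerable aliasing against D/A and filtering complexity rather than hitting a single hard Nyquist limit.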
{ "domain": "dsp.stackexchange", "id": 11232, "tags": "signal-analysis, digital-communications, nyquist, gmsk, upsampling" }
Showing a problem is NP complete? Reducing CLIQUE to KITE.
Question: I've got an exam next week all about this sort of thing. Ie: Find polynomial certifier for a problem, give a polynomial reduction, prove problem X reduces to Y and etc. The problem is, there doesn't seem to be anyone available in my department to help me until after the test. I really struggle to understand it and get confused all too often when looking at it. So I was wondering if my wonderful chums of the internet could work through a problem with me! (Problem is below if you wanna skip my waffle) I apologise if there is an awful lot of it, I hope this will be useful for me or future lurkers though. Feel free to only answer part of it. I get that $X \leq_p Y$ means that given an oracle for Y, we can solve X using a polynomial number of standard computational operations plus a polynomial number of calls to that oracle for Y. I also get that to show a problem Y is NP complete we show: There exists a polynomial time certifier for problem Y. We give a reduction $X \leq_p Y$ from a known NP problem X. Ie: Showing that X is no harder than Y. But in practise, I really struggle to do both of these things. It all seems a bit hand wavy at times and the slides used by my course don't really help me, they seem to be used all over the place, here's a copy: http://www.cs.princeton.edu/~wayne/kleinberg-tardos/pdf/08IntractabilityI.pdf So, I thought we'd look at a problem that was included on an earlier test for my own module. Given that CLIQUE is NP-complete, show that KITE is NP-complete The CLIQUE decision problem is given a graph G and integer k; decide whether there is an induced complete subgraph of size k. The KITE decision problem is to decide whether there is a kite subgraph of size 2k, which consists of a clique of size k with a k-size tail. Creating the certifier So, how exactly should I write a certifier for KITE? I don't really understand what my certificates should be to be honest. 
I assume I want an input graph $G=(V,E)$ and then maybe a graph $H=(V',E')$ with an integer $k$ such that $H$ is a kite of $2k$. And then check if G contains H as an induced subgraph? But that seems way too easy considering it was worth 20% of the example paper. algo kiteCertifier(Graph G, Graph H, int k) { for each v in V' if v not in V return no for each (u,v) in E' if (u,v) not in E return no return yes } I suppose what really confuses me is that I don't get how to pick a certificate for problems, perhaps I don't really get what a certificate is or what a certifier should actually be doing. Showing $CLIQUE \leq_p KITE$ This is where I often get confused I think. I've seen different reduction examples showing an implication in one direction, then the other, and many just show iff. Again, it can't just be as simple as saying every kite has a clique in it can it? That's almost axiomic! I wondered if perhaps I should show a graph contains a clique iff it contains a kite. Given a clique C of size $2k$ or $2k+1$ (i.e.: odd or even clique) we show there always exists a kite of size $2k$ within it. Consider any k nodes $\{v_1,v_2,...,v_k\}$ in the clique. These nodes also form a clique because every node v in the clique is connected to every other node w by an edge. Now consider the nodes $\{v_{k+1},v_{k+2},...,v_{2k}\}$. Each node $v_j$ in this set is connected by an edge to $v_{j+1}$ since C is a clique. We also have that $v_k$ is connected to $v_{k+1}$ and so we now have a kite of size 2k. Now suppose we have a kite of size 2k, by definition, it contains a clique of size k. But again, I really don't feel confident enough on this, am I even doing anything vaguely correct? :p Any help would be greatly appreciated. Thanks very much! Answer: With regards to the reduction, you need to show that given an oracle for Kite, you can solve Clique in polynomial time. 
It doesn't really help that every Kite has a Clique in it: we're interested in solving Clique, but a graph that has a Clique doesn't necessarily have a Kite (if it has a Kite, then it certainly has a Clique - but that's not enough). You need to modify the graph so that if(f) it has a Clique, then it also has a Kite and pass that modified graph as an input to the oracle. We don't ordinarily believe that an NP-complete problem can be solved in polynomial time. The point of a verifier/certificate is that with help, we can solve it in polynomial time. The certificate is the "help", and you can define the certificate to be almost anything you like. For instance, for Clique, we can ask that the certificate be a list of the vertices that make up the Clique. Verifying then becomes easy: we can check we've indeed been given $k$ vertices, and that there are edges between all of these vertices. If you like, you can think of the certificate as a solution to the problem, and the verifier's job is to check that the solution is correct.
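To make the "modify the graph" step concrete: one standard construction attaches a fresh path ("tail") of $k$ new vertices to every vertex of $G$. If $G$ has a $k$-clique, any clique vertex together with its tail gives a kite of size $2k$; conversely (for $k \ge 3$) the clique half of any kite must lie inside $G$, because tail vertices have degree at most 2 and their two neighbours are never adjacent. Below is a brute-force sanity check of that construction on tiny graphs (the reduction itself is only the `add_tails` transformation, which is clearly polynomial):

```python
from itertools import combinations

def add_tails(vertices, edges, k):
    """Reduction CLIQUE -> KITE: attach a path of k fresh vertices to each vertex."""
    v2, e2 = set(vertices), set(edges)
    for v in vertices:
        prev = v
        for i in range(k):
            tail = (v, i)                      # fresh vertex, distinct from G's
            v2.add(tail)
            e2.add(frozenset((prev, tail)))
            prev = tail
    return v2, e2

def has_kite(vertices, edges, k):
    """Brute force: a k-clique plus a simple k-vertex tail off one clique vertex."""
    def tail_from(u, used, need):
        if need == 0:
            return True
        return any(tail_from(w, used | {w}, need - 1)
                   for w in vertices
                   if w not in used and frozenset((u, w)) in edges)
    for c in combinations(vertices, k):
        if all(frozenset(p) in edges for p in combinations(c, 2)):
            if any(tail_from(v, set(c), k) for v in c):
                return True
    return False

# A triangle plus an isolated vertex: it has a 3-clique but no kite...
V = {1, 2, 3, 4}
E = {frozenset(p) for p in [(1, 2), (2, 3), (1, 3)]}
# ...until tails are attached, after which the 3-clique forces a kite of size 6.
V2, E2 = add_tails(V, E, 3)
print(has_kite(V, E, 3), has_kite(V2, E2, 3))
```

This only sanity-checks the construction on toy inputs; for the exam you would argue both directions of the "G has a k-clique iff G' has a kite of size 2k" equivalence on paper.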
{ "domain": "cs.stackexchange", "id": 5816, "tags": "complexity-theory, np-complete, reductions, clique" }