| anchor | positive | source |
|---|---|---|
Are NP proofs limited to polynomial length? | Question:
In computational complexity theory, NP (nondeterministic polynomial time) is a complexity class used to classify decision problems. NP is the set of decision problems for which the problem instances, where the answer is "yes", have proofs verifiable in polynomial time by a deterministic Turing machine.
The proofs for an NP decision problem are verified in polynomial time.
Does this imply the proofs are at most polynomial length?
"Well you have to read the whole input. If the input is longer than polynomial, then the time is greater than polynomial."
The decision problem "Is the first bit of the input a 0?" can be solved in constant time and space - without reading the whole input.
Therefore, perhaps some NP problem has candidate proofs that are longer than polynomial length but checked in polynomial time.
Answer:
The decision problem "Is the first bit of the input a 0?" can be solved in constant time and space - without reading the whole input.
Given that a Turing machine head moves at most one cell per time step, the head can only read a polynomial-length prefix of the proof in polynomial time.
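As a toy illustration (the function names and step-callback interface are my own invention, not a real Turing-machine model): a verifier limited to a step budget can touch at most that many certificate cells, no matter how long the certificate is.

```python
# Toy sketch of the head-movement bound: a verifier that runs for at most
# `budget` steps can inspect at most `budget` distinct cells of the
# certificate, however long the certificate is.
# `verifier_step` is an assumed callback: (position, symbol) ->
# (new_position, verdict), where verdict is True/False or None (undecided).

def bounded_verify(certificate, verifier_step, budget):
    pos = 0
    touched = set()
    for _ in range(budget):
        symbol = certificate[pos] if pos < len(certificate) else None
        touched.add(pos)
        pos, verdict = verifier_step(pos, symbol)
        if verdict is not None:
            return verdict, touched
    return False, touched  # out of time: reject

# Toy language "is the first certificate bit a 0?": decided in one step.
def first_bit_step(pos, symbol):
    return pos, symbol == "0"

verdict, touched = bounded_verify("0" * 10**6, first_bit_step, budget=8)
# verdict is True, yet only one cell of the million-bit certificate was read
```

The million-bit certificate is legal input, but everything beyond the budgeted prefix is invisible to the verifier.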
While you could define proofs that exceed polynomial length, only a polynomial-length prefix of such a proof could ever be read in polynomial time, assuming the head starts at cell 0 and can move at most one cell to the right per time unit. | {
"domain": "cs.stackexchange",
"id": 16117,
"tags": "complexity-theory, np, decision-problem"
} |
Force Linear Phase for a FIR Filter Synthesized Using Berchin's FDLS? | Question: As a follow-on to this post, how would you force linear phase for a FIR filter you synthesized using Berchin's FDLS?
Answer: Remember that, as its name suggests, Berchin's frequency-domain least squares method (FDLS) is really just a least-squares fit of a desired frequency-domain response. Recall that for an arbitrary transfer function of the form:
$$
H(z) = \frac{\sum_{k=0}^{M} b_k z^{-k}}{1 + \sum_{k=1}^{N} a_k z^{-k}}
$$
you can write the system's frequency response by simply letting $z=e^{j\omega}$:
$$
H(\omega) = \frac{\sum_{k=0}^{M} b_k e^{-j\omega k}}{1 + \sum_{k=1}^{N} a_k e^{-j\omega k}}
$$
This sort of system can be implemented as an IIR filter with $b_k$ as the feedforward (numerator) coefficients and $a_k$ as the feedback (denominator) coefficients. The FDLS method simply requires you to define the desired (complex-valued) frequency response at a collection of frequencies $\{\omega_k\}$. Evaluating the above expression for each frequency $\omega_k$ yields a system of linear equations in the $b_k$ and $a_k$ variables. You can then solve this using least-squares methods to yield the best coefficients for the desired frequency response and filter order (in the LS sense, of course).
If you have the requirement for exact linear phase (i.e. better linear phase response than you might get by just applying the method itself and specifying a desired linear phase response), then you could reformulate the above a bit, if you're willing to accept an FIR-only filter structure.
Recall that linear phase in the frequency domain corresponds to symmetry in the filter's time-domain impulse response. For FIR filters, this impulse response is the same as the filter's coefficients $b_k$.
Let $a_k=0\ \forall\ k$ in the above expression, yielding:
$$
H(\omega) = \sum_{k=0}^{M} b_k e^{-j\omega k}
$$
The symmetry constraint corresponds to a loss in the number of degrees of freedom in the linear system; instead of all of the coefficients being independent variables, the linear-phase constraint requires that coefficients symmetric about the impulse response's midpoint be equal.
Rewrite the above equation taking these constraints into account. For example, for a type I FIR filter (odd length, even symmetry), you would have something like:
$$
H(\omega) = \sum_{k=0}^{\frac{M}{2}-1} b_k e^{-j\omega k} + \sum_{k=\frac{M}{2}+1}^{M} b_{M-k} e^{-j\omega k} + b_{\frac{M}{2}} e^{-j\omega \frac{M}{2}}
$$
where the filter has $M+1$ taps. You would then solve the system for the reduced set of coefficients, then reconstruct the overall filter impulse response using the symmetry constraint:
$$
h[k] =
\begin{cases}
b_k, & 0 \le k \le \frac{M}{2} \\
b_{M-k}, & \frac{M}{2}+1 \le k \le M
\end{cases}
$$
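As a rough numerical sketch of the reduced least-squares system (the function name, the frequency grid `omegas`, and the desired-response input are my own assumptions, not part of the FDLS reference formulation):

```python
import numpy as np

# Sketch: least-squares fit of a type I (odd-length, even-symmetric) FIR
# filter to a desired complex frequency response sampled at `omegas`.
def fdls_linear_phase_fir(omegas, H_desired, M):
    """M must be even; returns the full (M + 1)-tap impulse response."""
    assert M % 2 == 0
    half = M // 2
    # One column per free coefficient b_0 .. b_{M/2}. Symmetric taps share
    # a column: e^{-jwk} + e^{-jw(M-k)} for k < M/2, e^{-jwM/2} for the middle tap.
    A = np.zeros((len(omegas), half + 1), dtype=complex)
    for k in range(half):
        A[:, k] = np.exp(-1j * omegas * k) + np.exp(-1j * omegas * (M - k))
    A[:, half] = np.exp(-1j * omegas * half)
    # Stack real and imaginary parts so the unknowns stay real-valued.
    A_ri = np.vstack([A.real, A.imag])
    d_ri = np.concatenate([H_desired.real, H_desired.imag])
    b_half, *_ = np.linalg.lstsq(A_ri, d_ri, rcond=None)
    # Mirror the reduced coefficients into the symmetric impulse response.
    return np.concatenate([b_half, b_half[-2::-1]])
```

Fitting the sampled response of an already-symmetric filter recovers its taps exactly, which makes a quick sanity check of the setup.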
The result is an FIR filter that will have linear phase. | {
"domain": "dsp.stackexchange",
"id": 2109,
"tags": "matlab, finite-impulse-response, least-squares"
} |
Failed to build package moveit_pr2 under indigo | Question:
Sorry if this is a total noob question.
I am trying to build the moveit_pr2 package under ROS Indigo, and have cloned the following packages into the <catkin_workspace>/src directory:
ivcon pr2_controllers realtime_tools
CMakeLists.txt moveit_pr2 pr2_mechanism ros_control
control_toolbox pr2_common pr2_mechanism_msgs
When I built these packages, I got the following errors:
lzm@ubuntu:~/ROS/catkin_ws$ catkin_make
Base path: /home/lzm/ROS/catkin_ws
Source space: /home/lzm/ROS/catkin_ws/src
Build space: /home/lzm/ROS/catkin_ws/build
Devel space: /home/lzm/ROS/catkin_ws/devel
Install space: /home/lzm/ROS/catkin_ws/install
####
#### Running command: "make cmake_check_build_system" in "/home/lzm/ROS/catkin_ws/build"
####
####
#### Running command: "make -j4 -l4" in "/home/lzm/ROS/catkin_ws/build"
####
[ 0%] Built target ivcon
[ 0%] Built target _controller_manager_msgs_generate_messages_check_deps_UnloadController
[ 0%] [ 0%] Built target _controller_manager_msgs_generate_messages_check_deps_SwitchController
Built target _controller_manager_msgs_generate_messages_check_deps_ListControllerTypes
[ 0%] Built target _controller_manager_msgs_generate_messages_check_deps_LoadController
[ 0%] [ 0%] [ 0%] Built target _controller_manager_msgs_generate_messages_check_deps_ReloadControllerLibraries
Built target std_msgs_generate_messages_py
Built target _controller_manager_msgs_generate_messages_check_deps_ControllerStatistics
[ 0%] Built target _controller_manager_msgs_generate_messages_check_deps_ControllerState
[ 0%] Built target std_msgs_generate_messages_lisp
[ 0%] Built target std_msgs_generate_messages_cpp
[ 0%] Built target _controller_manager_msgs_generate_messages_check_deps_ControllersStatistics
[ 0%] Built target _controller_manager_msgs_generate_messages_check_deps_ListControllers
[ 0%] Built target _pr2_mechanism_msgs_generate_messages_check_deps_SwitchControllerGoal
[ 0%] Built target _pr2_mechanism_msgs_generate_messages_check_deps_ListControllers
[ 0%] [ 0%] Built target _pr2_mechanism_msgs_generate_messages_check_deps_ActuatorStatistics
Built target actionlib_msgs_generate_messages_py
[ 0%] Built target _pr2_mechanism_msgs_generate_messages_check_deps_SwitchControllerFeedback
[ 0%] [ 0%] Built target _pr2_mechanism_msgs_generate_messages_check_deps_SwitchController
Built target _pr2_mechanism_msgs_generate_messages_check_deps_ControllerStatistics
[ 0%] Built target _pr2_mechanism_msgs_generate_messages_check_deps_SwitchControllerActionGoal
[ 0%] Built target _pr2_mechanism_msgs_generate_messages_check_deps_SwitchControllerResult
[ 0%] Built target _pr2_mechanism_msgs_generate_messages_check_deps_SwitchControllerActionFeedback
[ 0%] Built target _pr2_mechanism_msgs_generate_messages_check_deps_ListControllerTypes
[ 0%] Built target _pr2_mechanism_msgs_generate_messages_check_deps_MechanismStatistics
[ 0%] Built target _pr2_mechanism_msgs_generate_messages_check_deps_JointStatistics
[ 0%] Built target _pr2_mechanism_msgs_generate_messages_check_deps_SwitchControllerAction
[ 0%] Built target _pr2_mechanism_msgs_generate_messages_check_deps_LoadController
[ 0%] Built target _pr2_mechanism_msgs_generate_messages_check_deps_ReloadControllerLibraries
[ 0%] Built target actionlib_msgs_generate_messages_lisp
[ 0%] [ 0%] Built target _pr2_mechanism_msgs_generate_messages_check_deps_SwitchControllerActionResult
Built target actionlib_msgs_generate_messages_cpp
[ 0%] Built target geometry_msgs_generate_messages_lisp
[ 0%] Built target _pr2_mechanism_msgs_generate_messages_check_deps_UnloadController
[ 0%] Built target _pr2_msgs_generate_messages_check_deps_LaserTrajCmd
[ 0%] Built target _pr2_msgs_generate_messages_check_deps_LaserScannerSignal
[ 0%] Built target _pr2_msgs_generate_messages_check_deps_DashboardState
[ 0%] Built target _pr2_msgs_generate_messages_check_deps_BatteryServer2
[ 0%] Built target _pr2_msgs_generate_messages_check_deps_BatteryState2
[ 0%] Built target _pr2_msgs_generate_messages_check_deps_PowerBoardState
[ 0%] Built target _pr2_msgs_generate_messages_check_deps_AccelerometerState
[ 0%] Built target _pr2_msgs_generate_messages_check_deps_PressureState
[ 0%] Built target _pr2_msgs_generate_messages_check_deps_PeriodicCmd
[ 0%] Built target _pr2_msgs_generate_messages_check_deps_AccessPoint
[ 0%] Built target _pr2_msgs_generate_messages_check_deps_SetPeriodicCmd
[ 0%] Built target _pr2_msgs_generate_messages_check_deps_GPUStatus
[ 0%] Built target _pr2_msgs_generate_messages_check_deps_BatteryServer
[ 0%] Built target _pr2_msgs_generate_messages_check_deps_BatteryState
[ 0%] Built target geometry_msgs_generate_messages_py
[ 0%] Built target geometry_msgs_generate_messages_cpp
[ 0%] Built target _pr2_msgs_generate_messages_check_deps_PowerState
[ 0%] Built target _pr2_msgs_generate_messages_check_deps_SetLaserTrajCmd
[ 0%] Built target trajectory_msgs_generate_messages_py
[ 0%] Built target _pr2_controllers_msgs_generate_messages_check_deps_Pr2GripperCommandActionResult
[ 0%] Built target _pr2_controllers_msgs_generate_messages_check_deps_Pr2GripperCommandAction
[ 0%] Built target _pr2_controllers_msgs_generate_messages_check_deps_PointHeadActionResult
[ 0%] Built target _pr2_controllers_msgs_generate_messages_check_deps_JointTrajectoryActionFeedback
[ 0%] Built target _pr2_controllers_msgs_generate_messages_check_deps_QueryTrajectoryState
[ 0%] Built target _pr2_controllers_msgs_generate_messages_check_deps_SingleJointPositionAction
[ 0%] Built target _pr2_controllers_msgs_generate_messages_check_deps_PointHeadGoal
[ 0%] Built target _pr2_controllers_msgs_generate_messages_check_deps_SingleJointPositionActionResult
[ 0%] [ 0%] Built target _pr2_controllers_msgs_generate_messages_check_deps_JointTrajectoryAction
Built target _pr2_controllers_msgs_generate_messages_check_deps_SingleJointPositionGoal
[ 0%] Built target _pr2_controllers_msgs_generate_messages_check_deps_JointTrajectoryResult
[ 0%] Built target _pr2_controllers_msgs_generate_messages_check_deps_PointHeadActionFeedback
[ 0%] Built target _pr2_controllers_msgs_generate_messages_check_deps_JointTrajectoryFeedback
[ 0%] Built target _pr2_controllers_msgs_generate_messages_check_deps_Pr2GripperCommandGoal
[ 0%] Built target _pr2_controllers_msgs_generate_messages_check_deps_JointTrajectoryGoal
[ 0%] Built target _pr2_controllers_msgs_generate_messages_check_deps_SingleJointPositionActionFeedback
[ 0%] Built target _pr2_controllers_msgs_generate_messages_check_deps_QueryCalibrationState
[ 0%] Built target _pr2_controllers_msgs_generate_messages_check_deps_JointTrajectoryActionGoal
[ 0%] Built target _pr2_controllers_msgs_generate_messages_check_deps_JointControllerState
[ 0%] Built target _pr2_controllers_msgs_generate_messages_check_deps_PointHeadResult
[ 0%] Built target _pr2_controllers_msgs_generate_messages_check_deps_SingleJointPositionResult
[ 0%] Built target _pr2_controllers_msgs_generate_messages_check_deps_Pr2GripperCommand
[ 0%] Built target _pr2_controllers_msgs_generate_messages_check_deps_SingleJointPositionFeedback
[ 0%] Built target _pr2_controllers_msgs_generate_messages_check_deps_JointTrajectoryActionResult
[ 0%] Built target _pr2_controllers_msgs_generate_messages_check_deps_Pr2GripperCommandActionGoal
[ 0%] Built target _pr2_controllers_msgs_generate_messages_check_deps_SingleJointPositionActionGoal
[ 0%] Built target _pr2_controllers_msgs_generate_messages_check_deps_PointHeadFeedback
[ 0%] Built target _pr2_controllers_msgs_generate_messages_check_deps_Pr2GripperCommandResult
[ 0%] Built target _pr2_controllers_msgs_generate_messages_check_deps_JointTrajectoryControllerState
[ 0%] Built target _pr2_controllers_msgs_generate_messages_check_deps_Pr2GripperCommandActionFeedback
[ 0%] [ 0%] Built target trajectory_msgs_generate_messages_cpp
Built target _pr2_controllers_msgs_generate_messages_check_deps_PointHeadAction
[ 0%] [ 0%] Built target _pr2_controllers_msgs_generate_messages_check_deps_PointHeadActionGoal
Built target trajectory_msgs_generate_messages_lisp
[ 0%] [ 0%] Built target realtime_tools
Built target _pr2_controllers_msgs_generate_messages_check_deps_Pr2GripperCommandFeedback
[ 0%] [ 0%] Built target roscpp_generate_messages_lisp
Built target roscpp_generate_messages_cpp
[ 0%] Built target rosgraph_msgs_generate_messages_lisp
[ 0%] Built target roscpp_generate_messages_py
[ 0%] [ 0%] Built target rosgraph_msgs_generate_messages_cpp
Built target rosgraph_msgs_generate_messages_py
[ 0%] Built target control_toolbox_gencfg
[ 0%] Built target actionlib_generate_messages_lisp
[ 0%] Built target gtest
[ 0%] Built target actionlib_generate_messages_cpp
[ 0%] Built target _control_toolbox_generate_messages_check_deps_SetPidGains
[ 0%] [ 0%] Built target actionlib_generate_messages_py
Built target transmission_interface_parser
[ 0%] [ 0%] Built target tf_generate_messages_cpp
[ 0%] Built target sensor_msgs_generate_messages_cpp
Built target tf_generate_messages_lisp
[ 0%] Built target sensor_msgs_generate_messages_lisp
[ 0%] [ 0%] Built target tf2_msgs_generate_messages_cpp
Built target tf_generate_messages_py
[ 0%] Built target sensor_msgs_generate_messages_py
[ 0%] Built target tf2_msgs_generate_messages_lisp
[ 0%] Built target tf2_msgs_generate_messages_py
[ 1%] Built target pr2_mechanism_model
[ 1%] Built target _ethercat_trigger_controllers_generate_messages_check_deps_MultiWaveform
[ 1%] Built target _ethercat_trigger_controllers_generate_messages_check_deps_MultiWaveformTransition
[ 1%] Built target _ethercat_trigger_controllers_generate_messages_check_deps_SetWaveform
[ 1%] Built target _ethercat_trigger_controllers_generate_messages_check_deps_SetMultiWaveform
[ 1%] Scanning dependencies of target pr2_mechanism_diagnostics
Building CXX object moveit_pr2/pr2_moveit_plugins/pr2_moveit_controller_manager/CMakeFiles/pr2_moveit_controller_manager.dir/src/pr2_moveit_controller_manager.cpp.o
[ 1%] Built target pr2_moveit_arm_kinematics
[ 2%] Building CXX object moveit_pr2/pr2_moveit_plugins/pr2_moveit_sensor_manager/CMakeFiles/pr2_moveit_sensor_manager.dir/src/pr2_moveit_sensor_manager.cpp.o
[ 2%] Built target kinematic_model_tutorial
[ 2%] Built target ros_api_tutorial
[ 2%] Building CXX object pr2_mechanism/pr2_mechanism_diagnostics/CMakeFiles/pr2_mechanism_diagnostics.dir/src/controller_diagnostics.cpp.o
[ 2%] Built target motion_planning_api_tutorial
[ 2%] Built target move_group_interface_tutorial
[ 2%] Built target planning_pipeline_tutorial
[ 2%] Built target planning_scene_ros_api_tutorial
[ 3%] Built target planning_scene_tutorial
[ 3%] Built target state_display_tutorial
[ 3%] Built target attached_body_tutorial
[ 3%] Built target collision_contact_tutorial
[ 4%] Built target interactivity_tutorial
[ 4%] Built target pick_place_tutorial
[ 4%] Building CXX object pr2_mechanism/pr2_controller_manager/CMakeFiles/controller_test.dir/test/controllers/test_controller.cpp.o
In file included from /home/lzm/ROS/catkin_ws/src/pr2_mechanism/pr2_mechanism_diagnostics/src/controller_diagnostics.cpp:36:0:
/home/lzm/ROS/catkin_ws/src/pr2_mechanism/pr2_mechanism_diagnostics/include/pr2_mechanism_diagnostics/controller_diagnostics.h:42:52: fatal error: pr2_mechanism_msgs/MechanismStatistics.h: No such file or directory
#include <pr2_mechanism_msgs/MechanismStatistics.h>
^
compilation terminated.
/home/lzm/ROS/catkin_ws/src/moveit_pr2/pr2_moveit_plugins/pr2_moveit_controller_manager/src/pr2_moveit_controller_manager.cpp:43:48: fatal error: pr2_mechanism_msgs/ListControllers.h: No such file or directory
#include <pr2_mechanism_msgs/ListControllers.h>
^
compilation terminated.
/home/lzm/ROS/catkin_ws/src/moveit_pr2/pr2_moveit_plugins/pr2_moveit_sensor_manager/src/pr2_moveit_sensor_manager.cpp:43:50: fatal error: pr2_controllers_msgs/PointHeadAction.h: No such file or directory
#include <pr2_controllers_msgs/PointHeadAction.h>
^
compilation terminated.
In file included from /home/lzm/ROS/catkin_ws/src/pr2_mechanism/pr2_controller_manager/test/controllers/test_controller.cpp:1:0:
/home/lzm/ROS/catkin_ws/src/pr2_mechanism/pr2_controller_manager/test/controllers/test_controller.h:5:47: fatal error: pr2_mechanism_msgs/LoadController.h: No such file or directory
#include <pr2_mechanism_msgs/LoadController.h>
^
compilation terminated.
make[2]: *** [pr2_mechanism/pr2_mechanism_diagnostics/CMakeFiles/pr2_mechanism_diagnostics.dir/src/controller_diagnostics.cpp.o] Error 1
make[1]: *** [pr2_mechanism/pr2_mechanism_diagnostics/CMakeFiles/pr2_mechanism_diagnostics.dir/all] Error 2
make[1]: *** Waiting for unfinished jobs....
make[2]: *** [moveit_pr2/pr2_moveit_plugins/pr2_moveit_controller_manager/CMakeFiles/pr2_moveit_controller_manager.dir/src/pr2_moveit_controller_manager.cpp.o] Error 1
make[1]: *** [moveit_pr2/pr2_moveit_plugins/pr2_moveit_controller_manager/CMakeFiles/pr2_moveit_controller_manager.dir/all] Error 2
make[2]: *** [moveit_pr2/pr2_moveit_plugins/pr2_moveit_sensor_manager/CMakeFiles/pr2_moveit_sensor_manager.dir/src/pr2_moveit_sensor_manager.cpp.o] Error 1
make[1]: *** [moveit_pr2/pr2_moveit_plugins/pr2_moveit_sensor_manager/CMakeFiles/pr2_moveit_sensor_manager.dir/all] Error 2
make[2]: *** [pr2_mechanism/pr2_controller_manager/CMakeFiles/controller_test.dir/test/controllers/test_controller.cpp.o] Error 1
make[1]: *** [pr2_mechanism/pr2_controller_manager/CMakeFiles/controller_test.dir/all] Error 2
make: *** [all] Error 2
Invoking "make" failed
It seems some header files are missing, although I can find all the source files in the src directory.
So I followed the CreatingMsgAndSrv tutorial and tried to generate the missing header files, but it's not working.
Can anybody give me a hint?
Originally posted by iimmer on ROS Answers with karma: 1 on 2014-12-21
Post score: 0
Original comments
Comment by gvdhoorn on 2014-12-22:
Please use the formatting tools for console output (the little '101010' button on the bar). It makes things much easier to read.
Answer:
It looks like the CMakeLists.txt needs dependencies on catkin_EXPORTED_TARGETS for the executables and libraries.
The targets are getting built before the message headers they depend on are generated.
In file included from /home/lzm/ROS/catkin_ws/src/pr2_mechanism/pr2_mechanism_diagnostics/src/controller_diagnostics.cpp:36:0:
/home/lzm/ROS/catkin_ws/src/pr2_mechanism/pr2_mechanism_diagnostics/include/pr2_mechanism_diagnostics/controller_diagnostics.h:42:52: fatal error: pr2_mechanism_msgs/MechanismStatistics.h: No such file or directory
#include <pr2_mechanism_msgs/MechanismStatistics.h>
That tells you that controller_diagnostics.h is unable to find the MechanismStatistics message header. But the build output tells us
Built target _pr2_mechanism_msgs_generate_messages_check_deps_MechanismStatistics
which means that target was built, so the message definition is there.
If you change these lines inside the pr2_mechanism_diagnostics/CMakeLists.txt file:
add_library(${PROJECT_NAME}
  src/controller_diagnostics.cpp
  src/joint_diagnostics.cpp)
target_link_libraries(${PROJECT_NAME}
  ${catkin_LIBRARIES})
to
add_library(${PROJECT_NAME}
src/controller_diagnostics.cpp
src/joint_diagnostics.cpp)
target_link_libraries(${PROJECT_NAME} ${catkin_LIBRARIES})
add_dependencies(${PROJECT_NAME} ${catkin_EXPORTED_TARGETS} pr2_mechanism_msgs_gencpp)
Then that target will compile fine. make runs several jobs in parallel (-j4) to get the build done faster; when it reaches this target before the message headers have been generated, this failure happens. Since I'm the maintainer of this package, I'll do that for you. You'll have to ask the maintainer of the pr2_moveit_plugins package to fix the other packages, or you can make a pull request with the fix (since you know what's wrong).
Originally posted by DevonW with karma: 644 on 2015-01-19
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 20405,
"tags": "ros-indigo"
} |
Behavior of water at temperatures above 100°C | Question: Assume you have a container with heat insulation (something like a boiler).
You can store water at any temperature below 100°C. Under these conditions, you can drill a hole into the container and store the water at atmospheric pressure.
But when you heat water to a temperature over 100°C, it starts to boil, increasing the pressure in the container. If you drill a hole, water will start to evaporate, and that will cause a temperature drop. If you try to maintain a constant temperature, after some time all the water will evaporate.
Can water at a temperature over 100°C be stored only at high pressure?
If so, will it reach some pressure (depending on temperature) and stop boiling?
Are there any other risks besides high pressure to be aware of when trying to store water under these conditions?
Answer: At high enough pressure you can keep water as a liquid above 100°C. With even more pressure you can even keep ice above 100°C. Similarly, you can boil water at room temperature at sufficiently low pressure.
(https://en.wikipedia.org/wiki/Phase_diagram)
The phase diagram of water shows what state it is in at any given temperature and pressure.
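To make the temperature-pressure relationship concrete, here is a rough numerical sketch using the Antoine equation; the coefficients are a NIST-tabulated set valid roughly between 106 and 300 °C, and the exact numbers should be treated as illustrative, not authoritative.

```python
# Rough sketch: equilibrium (saturation) vapor pressure of water from the
# Antoine equation, log10(P/bar) = A - B / (T/K + C). Coefficients are a
# NIST-tabulated set for roughly 106-300 degrees C; treat as illustrative.
A, B, C = 3.55959, 643.748, -198.043

def saturation_pressure_bar(temp_celsius):
    T = temp_celsius + 273.15
    return 10 ** (A - B / (T + C))

# A sealed, rigid vessel holding water at 150 degrees C settles near the
# saturation pressure for that temperature (on the order of 5 bar), at
# which point net boiling stops.
```

The pressure climbs steeply with temperature, which is exactly the "reaches some pressure and stops boiling" behavior the question asks about.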
edit: To answer Phil's question. The very steep, nearly vertical line between the solid (blue) and liquid (green) regions at 0°C shows why pressure isn't the reason ice skates have a film of water to slide on: you would need to increase the pressure to a few kbar to get liquid at even slightly below freezing. See how far vertically you would have to go to hit the green area starting to the left of the vertical line at 0°C.
It is actually the friction between the blade and the ice that creates heat which melts the surface. | {
"domain": "physics.stackexchange",
"id": 13670,
"tags": "water, temperature, evaporation"
} |
The analytical result for free massless fermion propagator | Question: For a massless fermion, the free propagator in quantum field theory is
\begin{eqnarray*}
& & \langle0|T\psi(x)\bar{\psi}(y)|0\rangle=\int\frac{d^{4}k}{(2\pi)^{4}}\frac{i\gamma\cdot k}{k^{2}+i\epsilon}e^{-ik\cdot(x-y)}.
\end{eqnarray*}
In Peskin & Schroeder's book, An Introduction to Quantum Field Theory (1995 edition, page 660, formula 19.40), they obtained the analytical result for this propagator,
\begin{eqnarray*}
& & \int\frac{d^{4}k}{(2\pi)^{4}}\frac{i\gamma\cdot k}{k^{2}+i\epsilon}e^{-ik\cdot(x-y)}=-\frac{i}{2\pi^{2}}\frac{\gamma\cdot(x-y)}{(x-y)^{4}} .\tag{19.40}
\end{eqnarray*}
Question: Is this analytical result right? Actually, I don't know how to obtain it.
Answer: Yes it is correct. The derivation in P&S is straightforward but I will expand on it a bit. The key observation is that
\begin{equation}
\int\frac{d^4k}{(2\pi)^4}e^{-ik\cdot(y-z)}\frac{i\gamma^{\mu}k_{\mu}}{k^2+i\epsilon}
=-\gamma^{\mu}\partial_{\mu}\int\frac{d^4k}{(2\pi)^4}\frac{1}{k^2+i\epsilon}e^{-ik\cdot(y-z)},
\end{equation}
where the integral on the right hand side is the Feynman propagator for a
massless scalar. Performing the $k$ integrals to get to position space yields
\begin{equation}
\int\frac{d^4k}{(2\pi)^4}\frac{1}{k^2+i\epsilon}e^{-ik\cdot(y-z)}=\frac{i}{4\pi^2}\frac{1}{(y-z)^2-i\epsilon}.
\end{equation}
If you aren't sure about this last step, it is easier to consider the massive case and then take the limit as $m\rightarrow 0$ at the end. Schwinger parameters are also helpful for proving this identity.
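If it helps, here is one way to carry out that check (a sketch, combining a Wick rotation $k^0 = ik_E^4$, so that $k^2 = -k_E^2$ and $(y-z)^2 = -x_E^2$, with a Schwinger parameter and a Gaussian momentum integral):

\begin{equation}
\int\frac{d^4k_E}{(2\pi)^4}\frac{e^{ik_E\cdot x_E}}{k_E^2}
=\int_0^\infty ds\int\frac{d^4k_E}{(2\pi)^4}e^{-sk_E^2+ik_E\cdot x_E}
=\int_0^\infty\frac{ds}{(4\pi s)^2}\,e^{-x_E^2/4s}
=\frac{1}{4\pi^2 x_E^2}.
\end{equation}

Rotating back to Minkowski space (the contour rotation supplies the factor of $i$ and the $i\epsilon$ prescription) reproduces $\frac{i}{4\pi^2}\frac{1}{(y-z)^2-i\epsilon}$ above.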
Once you have transformed to position space you simply act with $-\gamma^{\mu}\partial_{\mu}$ to arrive at the final expression. | {
"domain": "physics.stackexchange",
"id": 35613,
"tags": "homework-and-exercises, quantum-electrodynamics, fermions, greens-functions, propagator"
} |
Why can't two 3° free radicals combine? | Question:
In Kolbe electrolysis, my teacher said that when a 1° or 2° free radical is involved, we get alkanes through dimerisation. Why won't dimerisation occur for a 3° one?
Answer:
Question is: Why can't two $3^\circ$ free radicals combine (during Kolbe Electrolysis)?
The simplest answer is that there is no reason why not. Here is a counterexample showing it is possible, achieved as far back as 1959 (Ref. 1); I attached the available PDF file for convenience. There are several trisubstituted succinic acid derivatives that were obtained by the dimerization of the corresponding disubstituted malonic acid derivatives after decarboxylation during Kolbe electrolysis.
Although the intermediate in this reaction is not technically a $3^\circ$ free radical but a $2^\circ$ free radical with a really bulky group ($\ce{CH3O2C-CH^.-CH2Si(CH3)3}$), I think this paper (Ref. 2) is worth considering. Some examples in a similar vein (e.g., $\ce{CH3O2C-CH^.-CH2C(CH3)3}$) are also discussed in Ref. 1. Eberson continued his work with Kolbe electrolysis, presenting other examples of tetraalkylsuccinic acid derivatives (Ref. 3).
These examples are thoroughly reviewed by Schäfer in 1990 (Ref.4).
Other than Kolbe electrolysis, some other examples of dimerization of $3^\circ$ free radicals have been reported. For instance, the synthesis of 2,2,3,3-tetramethylbutane proceeds by a similar mechanism. The compound can be obtained by the reaction of a Grignard reagent, tert-butylmagnesium bromide, with ethyl bromide, or of ethylmagnesium bromide with tert-butyl bromide, in the presence of manganese(II) ions (Ref. 5). This transformation is believed to proceed through the dimerization of two tert-butyl radicals, which are generated by the decomposition of organomanganese compounds formed in situ, such as:
$$\ce{(CH3)3C-MgBr + MnCl2 -> (CH3)3C-MnCl + MgBrCl}$$
References:
Lennart Eberson, “Studies on Succinic Acids I. Preparation of $\alpha,\alpha'$-Dialkyl- and Tetraalkyl-Substituted Succinic Acids by Kolbe Electrolysis,” Acta Chem. Scand. 1959, 13, 40-49 (DOI: 10.3891/acta.chem.scand.13-0040) (PDF).
Lennart Eberson, “The Synthesis of Some Aliphatic Organosilicon Dicarboxylic Acids. III,” Acta Chem. Scand. 1956, 10, 629-632 (DOI: 10.3891/acta.chem.scand.10-0629) (PDF).
Lennart Eberson, “Studies on Succinic Acids V. The Preparation and Properties of Diastereoisomers of Tetraalkylsuccinic Acids,” Acta Chem. Scand. 1960, 14, 641-649 (DOI: 10.3891/acta.chem.scand.14-0641) (PDF).
Hans-Jürgen Schäfer, “Recent Contributions of Kolbe Electrolysis to Organic Synthesis,” Topics in Current Chemistry: Electrochemistry IV; Vol. 152, Eberhard Steckhan, Editor; Springer-Verlag: Berlin, Germany, 1990, pp. 91-159 (PDF).
M. S. Kharasch, J. W. Hancock, W. Nudenberg, P. O. Tawney, “Factors Influencing the Course and Mechanism of Grignard Reactions. XXII. The Reaction of Grignard Reagents with Alkyl Halides and Ketones in the Presence of Manganous Salts,” J. Org. Chem. 1956, 21(3), 322-327 (DOI: 10.1021/jo01109a016). | {
"domain": "chemistry.stackexchange",
"id": 13813,
"tags": "organic-chemistry, electrochemistry, electrolysis"
} |
mongodb query in waterline,sailsjs | Question: Because it is a lot of code, I don't want to ask in detail what is wrong, but rather ask for a review of what can be changed in this approach.
This is the controller that makes the query:
It grabs data within a time range
It does aggregation so I get the sum of some fields
Because some fields come from different collections (e.g. leader_name), I do other queries
There is a $in: workers fragment in the $match part that also needs workers filtered from a different query
getGroupInRange: function (req, gres) {
console.log(req.params);
Group.find({name: {$in: req.params.id.split(',')}})
.populate('workers')
.exec(function(err, found){
if (err) return gres.serverError(err);
var workers = [];
found.forEach(function(x){
x.workers.forEach(function(xx){
workers.push(xx.id);
});
});
workers = workers.map(function (x) {
return new ObjectId(x);
});
Event.native(function(err, collection) {
if (err) return gres.serverError(err);
var query = [
{
$match:
{
probe_time:
{
$gte: new Date(req.params.from),
$lt: new Date(req.params.to)
},
worker_id:
{
$in: workers
}
}
},
{
$group:
{
_id:{worker_id:"$worker_id", group:"$group"},
"total_logged_time":{$sum: "$duration"},
"total_break_time":{$sum: {$cond: [ {$eq:["$user.work_mode", 'break']},"$duration", 0] }},
"total_nonidle_duration":{$sum: {$cond: [ {$eq:["$user.presence", 'active']},"$duration", 0] }},
"total_idle_time":{$sum: {$cond: [ {$eq:["$user.presence", 'idle']},"$duration", 0] }},
"total_pro_apps_time":{$sum: {$cond: [ {$eq:["$app_category", 'productive']},"$duration", 0] }},
"total_nonpro_apps_time":{$sum: {$cond: [ {$ne:["$app_category", 'productive']},"$duration", 0] }},
"status1":{$sum: {$cond: [ {$eq:["$user.work_mode", 'CUSTOM_1']},"$duration", 0] }},
"status2":{$sum: {$cond: [ {$eq:["$user.work_mode", 'CUSTOM_2']},"$duration", 0] }},
"status3":{$sum: {$cond: [ {$eq:["$user.work_mode", 'CUSTOM_3']},"$duration", 0] }},
"status4":{$sum: {$cond: [ {$eq:["$user.work_mode", 'CUSTOM_4']},"$duration", 0] }},
"leader_name":{$last:"$leader_name"},
}
},
{
$project:
{
"total_logged_time":1,
"total_break_time":1,
"total_nonidle_duration":1,
"total_idle_time":1,
"total_pro_apps_time": 1,
"total_nonpro_apps_time" : 6388,
"print_qty":1,
"medium_print_qty":1,
"medium_download_size":1,
"effectivity":1,
"status1":1,
"status2":1,
"medium_break_time": { $multiply:[3600, {$divide:["$total_break_time", "$total_logged_time"]} ] },
"worker_name":"$_id.worker_id",
"leader_name":1
}
},
{
$group:
{
_id:"$_id.group",
data:{$push: "$$ROOT"}
}
}
];
collection.aggregate(query).toArray(function (err, results) {
if (err) return res.serverError(err);
Worker.find({_id:{$in: workers}})
.populate('leader')
.exec(function (err, res) {
results.map(function(x,y){
x.data.map(function (xx, yy) {
xx.leader_name = res.filter(function (x) {
return xx._id.worker_id.equals(x.id);
})[0].leader.fullname;
})
});
return gres.ok(results);
});
});
});
});
}
Answer: First, you should fix your indentation and styling; I would use a tool like JSFiddle's Tidy function to clean up your code.
err:
The word you're looking for is error; no need to sacrifice readability to save two characters.
.exec(function(err, found) {
Overriding err/error:
In the following code block, you overwrite the parameter err/error. This is bad practice; it'd be better off named something like error_two.
collection.aggregate(query).toArray(function(error, results) {
if (error) return res.serverError(error);
Worker.find({
_id: {
$in: workers
}
})
.populate('leader')
.exec(function(error, res) {
In fact, you actually overwrite it three times!
Overriding x:
You override x in the following code block. You should apply the same fix as before and name it xxx or something (because xx is already taken):
results.map(function(x, y) {
x.data.map(function(xx, yy) {
xx.leader_name = res.filter(function(x) {
y:
In the above point's code, you never actually use y. As it is the second parameter, it can be removed entirely; JavaScript does not require a function to declare every argument it is passed.
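For instance (a minimal illustration, not taken from the code under review):

```javascript
// Callbacks may declare fewer parameters than they are passed;
// the extra arguments (index and array here) are simply ignored.
var doubled = [10, 20, 30].map(function(value) {
    return value * 2;
});
// doubled is [20, 40, 60]
```

So dropping the unused second parameter changes nothing about how map calls the function.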
In total:
Applying all of the above, it would look like:
getGroupInRange: function(req, gres) {
console.log(req.params);
Group.find({
name: {
$in: req.params.id.split(',')
}
})
.populate('workers')
.exec(function(error, found) {
if (error) return gres.serverError(error);
var workers = [];
found.forEach(function(x) {
x.workers.forEach(function(xx) {
workers.push(xx.id);
});
});
workers = workers.map(function(x) {
return new ObjectId(x);
});
Event.native(function(error, collection) {
if (error) return gres.serverError(error);
var query = [{
$match: {
probe_time: {
$gte: new Date(req.params.from),
$lt: new Date(req.params.to)
},
worker_id: {
$in: workers
}
}
}, {
$group: {
_id: {
worker_id: "$worker_id",
group: "$group"
},
"total_logged_time": {
$sum: "$duration"
},
"total_break_time": {
$sum: {
$cond: [{
$eq: ["$user.work_mode", 'break']
}, "$duration", 0]
}
},
"total_nonidle_duration": {
$sum: {
$cond: [{
$eq: ["$user.presence", 'active']
}, "$duration", 0]
}
},
"total_idle_time": {
$sum: {
$cond: [{
$eq: ["$user.presence", 'idle']
}, "$duration", 0]
}
},
"total_pro_apps_time": {
$sum: {
$cond: [{
$eq: ["$app_category", 'productive']
}, "$duration", 0]
}
},
"total_nonpro_apps_time": {
$sum: {
$cond: [{
$ne: ["$app_category", 'productive']
}, "$duration", 0]
}
},
"status1": {
$sum: {
$cond: [{
$eq: ["$user.work_mode", 'CUSTOM_1']
}, "$duration", 0]
}
},
"status2": {
$sum: {
$cond: [{
$eq: ["$user.work_mode", 'CUSTOM_2']
}, "$duration", 0]
}
},
"status3": {
$sum: {
$cond: [{
$eq: ["$user.work_mode", 'CUSTOM_3']
}, "$duration", 0]
}
},
"status4": {
$sum: {
$cond: [{
$eq: ["$user.work_mode", 'CUSTOM_4']
}, "$duration", 0]
}
},
"leader_name": {
$last: "$leader_name"
},
}
}, {
$project: {
"total_logged_time": 1,
"total_break_time": 1,
"total_nonidle_duration": 1,
"total_idle_time": 1,
"total_pro_apps_time": 1,
"total_nonpro_apps_time": 6388,
"print_qty": 1,
"medium_print_qty": 1,
"medium_download_size": 1,
"effectivity": 1,
"status1": 1,
"status2": 1,
"medium_break_time": {
$multiply: [3600, {
$divide: ["$total_break_time", "$total_logged_time"]
}]
},
"worker_name": "$_id.worker_id",
"leader_name": 1
}
}, {
$group: {
_id: "$_id.group",
data: {
$push: "$$ROOT"
}
}
}];
collection.aggregate(query).toArray(function(error_two, results) {
if (error_two) return gres.serverError(error_two);
Worker.find({
_id: {
$in: workers
}
})
.populate('leader')
.exec(function(error_three, res) {
results.map(function(x) {
x.data.map(function(xx) {
xx.leader_name = res.filter(function(xxx) {
return xx._id.worker_id.equals(xxx.id);
})[0].leader.fullname;
})
});
return gres.ok(results);
});
});
});
});
} | {
"domain": "codereview.stackexchange",
"id": 17419,
"tags": "javascript, node.js, mongodb"
} |
Conservation of crystal momentum in time-dependent potential | Question: Consider a time-dependent problem. A particle is placed in a periodic potential with time-dependent amplitude
$$H = -\frac{\hbar^2}{2m}\frac{d^2}{dx^2} + V_0(t)\cos^2(x)$$
Instantaneous eigenstates are labelled by quasi-momentum $k$ and band index $n$ - Bloch states $\phi_{n,k}(x,t)$. I wonder if there are some consequences of the discrete translational symmetry of the lattice for the time evolution?
Does the initial state $\phi_{n,k}(x,t=0)$ evolve into a superposition of different Bloch states $\phi_{m,k}(x,t)$ but with the same quasi-momentum $k$ as the initial state?
UPDATE
I checked some literature and found only one reference with time-dependent potential, but I think it does not answer my question completely.
Consider the translation operator $\hat{T}_{a_j}$ which has the property
$$\hat{T}_{a} \psi(x,t) = \psi(x - a,t).$$
This operator commutes with the Hamiltonian due to periodicity of the potential. We have $[H(x,t), \hat{T}_{a}] = 0$. Consider what happens to the initial state of the Bloch form
$$\phi_0(x) = e^{i q x} u_0(x)$$
where $u_0(x)$ is a periodic function with the same periodicity as the lattice itself. It can be proved that it evolves to a state of the form
$$\psi(x,t) = e^{iqx}u(x,t)$$
where $u(x,t)$ has the same periodicity as $u_0(x)$.
This is what the author calls 'conservation of crystal momentum'. To me it does not mean that if I start with the Bloch eigenstate $\phi_{q}(x) = e^{iqx} u_q(x)$ I will end up with a Bloch eigenstate at some other moment of time, $\phi_q(x,t) = e^{iqx} u_{q}(x,t)$ (I omitted inter-band scattering). That would indicate perfect adiabatic evolution no matter how fast the evolution is. I suppose the overlap between different Bloch states
$$\int dx \phi^{*}_{q}(x,t)\frac{\partial }{\partial t}\phi_{k}(x,t)$$
is non-zero so perfect adiabatic evolution is not possible.
UPDATE 2
In the following review (p. 21), the author mentions what happens to a single atom when the potential amplitude is changed in time.
Importantly, the time dependence enters only in a scale factor for the potential, which remains periodic in space with the same period at all times (the lattice translation operator commutes with H(t) at all times). Therefore Bloch’s theorem applies, and the Bloch wave functions are the eigenstates of the Hamiltonian at all times (including t = 0 when there is no lattice). This also means that the quasi-momentum is conserved. The band index, however, can change.
As I wrote in the first update, the only consequence of translational invariance is that at the end of the ramp time we end up with the state of the Bloch's form
$$\psi(x,t) = e^{iqx} u(x,t)$$
which is not necessarily the eigenstate of the Hamiltonian (translational invariance alone does not impose that). It can be decomposed in the Bloch states basis
$$e^{iqx}u(x,t) = \sum\limits_{n,k}C_{n,k}(t) e^{ikx}u_{n,k}(x,t)$$
where
$$C_{n,k}(t) = \int dx\ e^{ix(q-k)}u_{n,k}^{*}(x,t) u(x,t).$$
This analysis does not imply that all $C_{n,k = q}$ are 0. On the contrary, this is what the paper says.
Answer: Because the time-dependent Hamiltonian $\hat H(t)$ commutes with $\hat T_a$, we find that $\hat T_a$ commutes with the evolution operator $\hat U(t)$ ($\hat U(t)$ solves $i\partial_t \hat U(t)=\hat H(t) \hat U(t)$).
Now, if the initial state $|\psi(0)\rangle$ is an eigenstate of $\hat T_a$, with eigenvalue $e^{-i q a}$ (i.e., it has quasi-momentum $q$), one readily finds that $|\psi(t)\rangle=\hat U(t)|\psi(0)\rangle$ is also an eigenstate of $\hat T_a$ with the same eigenvalue, that is, quasi-momentum is conserved during time-evolution.
Update: The discussion above only ensures that the quasi-momentum stays the same during the evolution. However, nothing prevents the system from having interband transitions, in particular if the driving is fast. In particular, even if the system is prepared in, say, the lowest band of $\hat H(0)$, in general,
$$|\psi(t)\rangle=\sum_n c_n(t) |\phi_{q,n}(t)\rangle,$$
where $|\phi_{q,n}(t)\rangle$ are the instantaneous eigenstates of $\hat H(t)$ with quasimomentum $q$.
"domain": "physics.stackexchange",
"id": 39379,
"tags": "quantum-mechanics, solid-state-physics"
} |
How can I address missing values for LSTM? | Question: I'm a student writing my first paper for submission to a conference, and I have a question.
There is a dataset below; this is a temporal-spatial dataset.
Date Hour City Sensor1 Sensor2 Sensor3 Sensor4 ...
21-06-10 0 Region1 0.12 0.52 0.33 0.44 ...
21-06-10 1 Region2 0.16 0.83 0.34 0.49 ...
21-06-10 2 Region1 0.21 0.44 0.57 0.5 ...
...
My task is anomaly detection for each region.
I want to use LSTM. So, I represent the temporal-spatial data as two time-series datasets. My dataset can be represented below.
City Date Hour Sensor1 Sensor2 Sensor3 Sensor4 ...
Region1 21-06-10 0 0.12 0.52 0.33 0.44 ...
Region1 21-06-10 2 0.21 0.44 0.57 0.5 ...
...
City Date Hour Sensor1 Sensor2 Sensor3 Sensor4 ...
Region2 21-06-10 1 0.16 0.83 0.34 0.49 ...
...
However, then, there is no row with attribute 'Hour=1' in the Region1 dataset
(you can see the table below)
City Date Hour Sensor1 Sensor2 Sensor3 Sensor4 ...
Region1 21-06-10 0 0.12 0.52 0.33 0.44 ...
Region1 21-06-10 1 NaN NaN NaN NaN ...
Region1 21-06-10 2 0.21 0.44 0.57 0.5 ...
...
Can I insert estimated values into the row with attribute 'Hour=1' in Region1 dataset? (for example, I want to insert average between the first row and the third row)
Can I claim to have utilized a real world dataset even with this missing value estimation?
Answer: You can claim to use a real-world dataset, you would just need to specify that some values were interpolated.
Do you have to have the intermediate values, though? By the looks of it, each "region" was only measured every 2 hours, so I would just keep it that way and have the resolution be 2 hours. It doesn't have to be hourly, and probably shouldn't be, since that isn't the resolution of the data.
If it does need to be hourly, then it is fine to just linearly interpolate the data. Additionally, you can try to train the network to accept empty inputs (though it'd definitely be easier to just interpolate your dataset)
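For what it's worth, the linear interpolation itself is close to a one-liner in pandas. The sketch below uses made-up column names mirroring the question and assumes the date and hour can be combined into a single datetime index:

```python
import pandas as pd

# Toy version of the Region1 slice from the question; names are illustrative.
df = pd.DataFrame(
    {
        "timestamp": pd.to_datetime(["2021-06-10 00:00", "2021-06-10 02:00"]),
        "Sensor1": [0.12, 0.21],
        "Sensor2": [0.52, 0.44],
    }
).set_index("timestamp")

# Reindex to an hourly grid (introducing NaN rows), then fill those rows
# by linear interpolation in time.
hourly = df.resample("1h").mean().interpolate(method="time")
print(hourly.loc["2021-06-10 01:00", "Sensor1"])  # midpoint: 0.165
```

The same resample call with a 2-hour frequency would implement the first suggestion (keeping the native resolution), with no NaNs to fill.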
"domain": "ai.stackexchange",
"id": 2915,
"tags": "machine-learning, long-short-term-memory, datasets, data-preprocessing, data-science"
} |
How can a GRU perform as well as an LSTM? | Question: I think I understand both types of units in terms of just the math.
What I don't understand is, how is it possible in practice for a GRU to perform as well as or better than an LSTM (which is what seems to be the norm)? I don't intuitively get how the GRU is able to make up for the missing cell content. The gates seem to be pretty much the same as an LSTM's gates, but with a missing part. Does it just mean that the cell in an LSTM is actually nearly useless?
Edit: Other questions have asked about the differences between GRU and LSTM. None of them (in my opinion) explain well enough why a GRU works as well as an LSTM even without the memory unit, only that the lack of a memory unit is one of the differences that makes a GRU faster.
Answer: GRU and LSTM are two popular RNN variants out of many possible similar architectures motivated by similar theoretical ideas of having a "pass through" channel where gradients do not degrade as much, and a system of sigmoid-based control gates to manage signals passing between time steps.
Even with LSTM, there are variations which may or may not get used, such as adding "peephole" connections between previous cell state and the gates.
LSTM and GRU are the two architectures explored so far that do well across a wide range of problems, as verified by experiment. I suspect, but cannot show conclusively, that there is no strong theory that explains this rough equivalence. Instead we are left with more intuition-based theories or conjectures:
GRU has fewer parameters per "cell", allowing it in theory to generalise better from fewer examples, at the cost of less flexibility.
LSTM has a more sophisticated memory in the form of separating internal cell state from cell output, allowing it to output features useful for a task without needing to memorise those features. This comes at the cost of needing to learn extra gates which help map between state and features.
When considering performance of these architectures in general, you have to allow that some problems will make use of these strengths better, or it may be a wash. For instance, in a problem where forwarding the layer output between time steps is already a good state representation and feature representation, then there is little need for the additional internal state of the LSTM.
In effect the choice between LSTM and GRU is yet another hyperparameter to consider when searching for a good solution, and like most other hyperparameters, there is no strong theory to guide an a priori selection. | {
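As a rough illustration of the first point above, the per-layer parameter counts can be compared directly. The sketch below assumes the common gate layout (four gated blocks for an LSTM, three for a GRU, each with input weights, recurrent weights, and two bias vectors, as in torch.nn.LSTM / torch.nn.GRU):

```python
def rnn_param_count(input_size: int, hidden_size: int, gates: int) -> int:
    """Parameters of one recurrent layer with the given number of gated
    blocks: input weights, recurrent weights, and two bias vectors
    (the layout used by common implementations)."""
    per_gate = (hidden_size * input_size          # input-to-hidden weights
                + hidden_size * hidden_size       # hidden-to-hidden weights
                + 2 * hidden_size)                # two bias vectors
    return gates * per_gate

lstm = rnn_param_count(128, 128, gates=4)  # input, forget, cell, output
gru = rnn_param_count(128, 128, gates=3)   # reset, update, new
print(lstm, gru, gru / lstm)  # GRU uses 3/4 of the LSTM's parameters
```

Under this layout a GRU layer always has exactly 3/4 of the parameters of an equally sized LSTM layer, which is where the "fewer parameters, possibly better generalisation from less data" intuition comes from.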
"domain": "datascience.stackexchange",
"id": 2790,
"tags": "neural-network, deep-learning"
} |
Implementing a Kalman filter for position, velocity, acceleration | Question: I've used Kalman filters for various things in the past, but I'm now interested in using one to track position, speed and acceleration in the context of tracking position for smartphone apps. It strikes me that this should be a textbook example of a simple linear Kalman filter, but I can't seem to find any online links which discuss this. I can think of various ways of doing this, but rather than researching it from scratch, perhaps someone here can point me in the right direction. Does anyone know the best way of setting up this system? For example, given the recent history of position observations, what's the best way of predicting the next point in the Kalman filter state space? What are the advantages and disadvantages of including acceleration in the state space? If all measurements are position, then if speed and acceleration are in the state space can the system become unstable? Etc. Alternatively, does anyone know of a good reference for this application of Kalman filters? Thanks
Answer: This is the best one that I know of
Full derivation with explanation
Kalman
This is a good resource for learning about the Kalman filter. If you are more concerned with getting the smartphone app working, I would suggest looking for a pre-existing implementation of the Kalman filter. Why reinvent the wheel? For example, if you are developing for Android, OpenCV has an implementation of the Kalman filter. See Android OpenCV
Bradski and Kaehler is a good resource on image processing in general and includes a section on the Kalman filter including code examples. | {
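If you do want to roll your own rather than use OpenCV, a minimal constant-acceleration filter with position-only measurements is only a few lines of NumPy. This is an illustrative sketch, not production code; the noise scales q and r are placeholder tuning knobs:

```python
import numpy as np

def kalman_pva(zs, dt, q=1e-2, r=1.0):
    """Minimal constant-acceleration Kalman filter: state [pos, vel, acc],
    position-only measurements. q/r are illustrative noise scales."""
    F = np.array([[1, dt, 0.5 * dt**2],
                  [0, 1, dt],
                  [0, 0, 1]], dtype=float)      # state transition
    H = np.array([[1.0, 0.0, 0.0]])             # we only observe position
    Q = q * np.eye(3)                           # process noise (tuning knob)
    R = np.array([[r]])                         # measurement noise
    x = np.zeros((3, 1))
    P = np.eye(3) * 10.0                        # vague initial uncertainty
    out = []
    for z in zs:
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
        x = x + K @ (np.array([[z]]) - H @ x)
        P = (np.eye(3) - K @ H) @ P
        out.append(x.ravel().copy())
    return np.array(out)

# Noise-free quadratic track p(t) = 0.5 * a * t^2 with a = 2
ts = np.arange(0, 5, 0.1)
est = kalman_pva(0.5 * 2.0 * ts**2, dt=0.1)
print(est[-1])  # final [pos, vel, acc] estimate
```

Whether to include acceleration in the state depends on the motion model: a [pos, vel, acc] state tracks accelerating targets with less lag, at the cost of amplifying measurement noise into the higher derivatives.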
"domain": "dsp.stackexchange",
"id": 877,
"tags": "kalman-filters"
} |
NP-complete decision problems - how close can we come to a solution? | Question: After we prove that a certain optimization problem is NP-hard, the natural next step is to look for a polynomial algorithm that comes close to the optimal solution - preferrably with a constant approximation factor.
After we prove that a certain decision problem is NP-complete, what is the natural next step? Obviously we cannot "approximate" a boolean value...
My guess is that the next step is to look for a randomized algorithm that returns the correct solution with a high probability. Is this correct?
If so, what probability of being correct can we expect to get from such a randomized algorithm?
As far as I understand from Wikipedia, PP contains NP. This means that, if the problem is in NP, it should be easy to write an algorithm that is correct more than $0.5$ of the time.
However, it is not known whether BPP contains NP. This means that it may be difficult (if not impossible) to write an algorithm that is correct more than $0.5+\epsilon$ of the time, for every positive $\epsilon$ independent of the size of the input.
Did I understand correctly?
Answer: You made a crucial change to the question. I've updated my answer to respond to the new question; I'll keep my original answer below for posterity as well.
To answer the latest iteration of the question: If the problem you really want to solve is a decision problem, and you've shown that it is NP-complete, then you might be in a tough spot. Here are some candidate next steps:
Look for heuristics. This means algorithms that work on some problem instances, but not all. Basically, you're hoping that you only come across problem instances that are easy, not any of the worst-case problem instances.
SAT solvers are an example of this approach. They solve a NP-complete decision problem ("does SAT formula $\varphi$ have a satisfying assignment?") using heuristics that happen to work well on many of the formulas we run into in real life, but their worst-case running time remains exponential. Integer linear programming is another example of this.
One very powerful kind of heuristic is to formulate your problem as an instance of SAT, ILP, constraint satisfaction programming, or something like that; and then apply an off-the-shelf SAT/ILP/CSP solver to solve it.
Look for a fixed-parameter tractable algorithm. Sometimes the complexity of a problem can be characterized by multiple parameters, and occasionally one can find an algorithm that is polynomial in one of the parameters, even if it is exponential in the length of the input (and thus is not a polynomial-time algorithm).
An example would be the knapsack problem, which takes as input an integer $W$ (the capacity of the knapsack) and some other inputs describing the items you can select from. The knapsack problem is NP-hard. However, there is a standard dynamic programming algorithm whose running time is $O(nW)$. This algorithm is exponential in the length of the input (because $W$ is specified in binary, so the value of $W$ is exponential in the length of the input), and thus this does not qualify as a polynomial-time algorithm. However, if $W$ is not too large -- as will often be the case in many practical settings -- this algorithm is efficient enough nonetheless.
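That standard dynamic program is short enough to sketch here; this is the usual one-dimensional formulation, with the pseudo-polynomial nature noted in a comment:

```python
def knapsack(values, weights, W):
    """Classic O(n*W) dynamic program: dp[w] = best value using capacity w.
    Pseudo-polynomial: the table size grows with the *value* of W, which is
    exponential in the number of bits used to write W down."""
    dp = [0] * (W + 1)
    for v, wt in zip(values, weights):
        # iterate capacities downwards so each item is used at most once
        for w in range(W, wt - 1, -1):
            dp[w] = max(dp[w], dp[w - wt] + v)
    return dp[W]

print(knapsack([60, 100, 120], [10, 20, 30], 50))  # 220
```

For modest W this runs quickly despite the problem being NP-hard, which is exactly the fixed-parameter point being made above.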
Start looking at algorithms whose running time is exponential, or at least super-polynomial. Sometimes the running time can be optimized so that the exponential running time is (just barely) tolerable.
Change the problem. Start looking for ways in which you can "cheat", i.e., change the assumptions or the information given to you to make the computational problem easier.
Give up. Accept that this is one problem you probably won't be able to solve.
These are standard strategies for coping with NP-completeness. Many textbooks mention these options.
No, randomization alone is not going to take a NP-complete problem and make it easy to solve in polynomial time. Everyone knows that proving $P\ne NP$ is a famous hard problem, and that most complexity theorists expect that $P \ne NP$ but just don't know how to prove it; similarly, most complexity theorists expect that $BPP \ne NP$, but they just don't know how to prove it. Proving that $BPP \ne NP$ looks even harder than proving that $P \ne NP$. That's why it is the $P \ne NP$ problem that is famous, not the $BPP \ne NP$ problem; if we can't prove that $P \ne NP$, we have no hope of proving that $BPP \ne NP$.
Incidentally, many complexity theorists expect that $P = BPP$, but proving that is way beyond our current knowledge. That said, for all practical engineering purposes, you can assume that $P = BPP$. (Complexity theorists, close your eyes here. I'm about to use some engineering reasoning that works well enough for all engineering purposes, but will drive theorists crazy.) If you have an efficient randomized algorithm that solves some natural real-world problem, I bet I can build a deterministic algorithm that will almost surely solve the problem, too; I won't be able to give you a proof of that fact, but I'll gladly take that bet even at \$100 to \$1 odds in your favor. How can I be so confident? Because if we take your randomized algorithm and, instead of feeding it truly random bits, feed it bits from the output of a cryptographic-strength pseudorandom generator, then there's no way your randomized algorithm is going to be able to tell the difference. After all, if it could, it would have found a way to break the crypto. If you find a way to break the crypto, there are people who will pay you a whole lot more than \$100 for the secret. That's part of the reason why many people expect that $P = BPP$ (or some moral equivalent) is probably true.
Your original question said you wanted to find an approximate solution to a NP-complete problem, and asked what performance is possible. Here was my answer to that question:
It depends on the problem. Some NP-hard optimization problems have good approximation algorithms; others don't. There's lots written in textbooks (and in Wikipedia) on approximation algorithms for NP-hard problems; this is a standard topic in undergraduate algorithms. See, e.g., https://en.wikipedia.org/wiki/Approximation_algorithm and https://en.wikipedia.org/wiki/Polynomial_time_approximation_scheme, but make sure to read textbooks too.
BPP is not a relevant concept here. The question of whether $NP \subseteq BPP$ would be relevant if you were looking for a randomized algorithm that produces an exact solution to the NP-complete problem. But that's not what you're looking for. You're looking for a randomized algorithm that produces an approximate solution to the problem. Looking for an approximate solution is a different problem than looking for an exact solution. If there were a polytime randomized algorithm to find an approximate solution, that wouldn't imply that there's a polytime algorithm to find an exact solution, and it wouldn't imply that $NP \subseteq BPP$. Similarly, if there were a polytime deterministic algorithm to find an approximate solution, that wouldn't imply that there's a polytime algorithm to find an exact solution, and it wouldn't imply that $P=NP$.
Similarly, $PP$ is not very relevant here.
If you really want to relate this to complexity classes, the relevant complexity class is $APX$ (see, e.g., https://en.wikipedia.org/wiki/APX).
Incidentally, many approximation algorithms are deterministic algorithms. Randomization is occasionally helpful but often not needed. | {
"domain": "cs.stackexchange",
"id": 2214,
"tags": "np-complete, approximation, randomized-algorithms"
} |
Show/hide checkbox div on change and load | Question: I'm working on a show/hide div with checkbox on change and on load. I've come up with this so far:
jsFiddle
$(document).ready(function() {
var $cbtextbook = $('#in-product_category-14'),
$cbimod = $('#in-product_category-15'),
$mb1 = $('#mbtextbook'),
$mb2 = $('#mbimod');
function tbmb() {
if ($cbtextbook.is(":checked")) $mb1.show();
else $mb1.hide();
}
function immb() {
if ($cbimod.is(":checked")) $mb2.show();
else $mb2.hide();
}
tbmb();
immb();
$cbtextbook.change(tbmb);
$cbimod.change(immb);
})
At the moment, I'm not worrying about dynamically changing elements (although I might in the future as I learn more).
Is there a much cleaner way to do this? I did get pretty simple toggle to work, but that didn't take into account if the box was already checked on page load, and the div I wanted to show/hide could get off cycle (i.e. show when unclicked, hide when clicked) if it was already checked, so I came up with this overly verbose solution. How can I pare this down?
Answer: It could be done with:
$(document).ready(function() {
$("input").change(function() {
var index = $(this).closest("li").index();
$(".metabox").eq(index)[this.checked ? "show" : "hide"]();
}).change();
});
http://jsfiddle.net/bK8EC/118/
Remember to add proper qualifiers in your real code, for example add id="something" to your ul and then do $("#something input:checkbox") instead of just binding this event to every input on the page. The .metabox is also pretty fragile if a container isn't added. | {
"domain": "codereview.stackexchange",
"id": 1660,
"tags": "javascript, jquery"
} |
What is the name of the expression $\cos(\frac{\theta}{2})^2$ to describe probability of measurements on qubits? | Question: I know that the following formula describes the probability of a qubit conforming to a measurement along one axis, at an angle $\theta$ to another axis.
$$\cos(\frac{\theta}{2})^2$$
I have been searching the internet for the past hour for the name of this expression. I believe it was called something similar to Moslow's Law, but I cannot find it.
If someone could help me out, that'd be great!
Answer: You probably mean Malus' law. However, it's important to note that the term is not normally used in that context - it is usually reserved for the transmission characteristics of linear polarizers for classical light.
"domain": "physics.stackexchange",
"id": 35275,
"tags": "quantum-mechanics, quantum-information"
} |
Reaction of octahydroazecine with iodine crystals | Question: I really don’t understand why the following reaction would take place first of all.
So this is a sub part of a question wherein we have to compare the basicity of the compounds formed through some reactions. So, I got the rest 3 of them but I have no clue about this one.
My concerns are:
Why would $\ce{I2}$ react in the absence of any oxidising agent or catalyst?
Should the presence of $\ce{N}$ influence the reaction?
Is this reaction feasible?
Ans -
Answer: Chlorination and bromination of alkenes are very general reactions, and mechanistic study of these reactions provides additional insight into electrophilic addition reactions of alkenes. Although much less detail is known about fluorination and iodination of alkenes, it is believed that iodination follows similar mechanistic steps to the much-studied bromination (Ref.1) to give the corresponding 1,2-diiodoalkane. In certain compounds, if a nucleophilic center is available in the vicinity of the cyclic halonium center, the last step of nucleophilic attack by the halide ion is superseded by a much faster intramolecular cyclization, as depicted by the following mechanism:
This mechanism is well supported by the examples provided in Ref.1 and Ref.2.
References:
Yoshinao Tamaru, Masato Mizutani, Yutaka Furukawa, Shinichi Kawamura, Zenichi Yoshida, Kazunori Yanagi, Masao Minobe, “1,3-Asymmetric induction: highly stereoselective synthesis of 2,4-trans-disubstituted γ-butyrolactones and γ-butyrothiolactones,” J. Am. Chem. Soc. 1984, 106(4), 1079-1085 (https://doi.org/10.1021/ja00316a044).
Yvan Guindon, François Soucy, Christiane Yoakim, William W. Ogilvie, Louis Plamondon, “Diastereoselective Synthesis of 2,3,5-Trisubstituted Tetrahydrofurans via Cyclofunctionalization Reactions. Evidence of Stereoelectronic Effects,” J. Org. Chem. 2001, 66(26), 8992-8996 (https://doi.org/10.1021/jo010873r). | {
"domain": "chemistry.stackexchange",
"id": 13247,
"tags": "organic-chemistry, reaction-mechanism, halogenation"
} |
Why would gamma spectroscopy be a tool for nuclei? | Question: I am very much familiar with Atomic spectroscopy, not much with nuclear spectroscopy.
In atoms (electronic cloud), we have electromagnetic interaction that plays the leading role, whose exchange boson is the photon, and that's why we have photon spectroscopy.
Now, we have the nucleus, where allegedly the strong force plays a crucial role, whose exchange boson is the pion (Yukawa, Nobel Prize 1949), although later on it has been understood that gluons are responsible on a more microscopic level, but we can still use pions in the low energy regime. Having this in mind, I would expect that nuclear spectroscopy be done with pion beams. But that's not generally the case! "... studies of $\gamma$-ray emission have become the standard technique of nuclear spectroscopy", Krane, Introduction to Nuclear Physics.
Why is so?
To conclude, we of course have Coulomb interaction in nuclei, since they contain a bunch of protons. But I expect Coulomb interaction to be of secondary importance, being of much less strength. Furthermore, I would expect the excitations caused by ELM interaction to be independent from the ones caused by Strong interaction, thus leading to two spectroscopy patterns.
Do we have two spectroscopy patterns in nuclear physics? If not, why?
EDITED:
To make myself clearer, I add something taken from the discussion within comments below, since I think it makes better the point:
"Suppose to have a universe with only He nucleus and strong interaction. You would not even know what a photon is, since ELM interaction is switched off. Then, why would you need a photon to excite the He nucleus?"
Answer: 1) In QCD without the electro-weak interaction every nucleus has many excited states (states with energies above the ground state energy) that are absolutely stable. This is because both the total baryon number and the total charge (the number of protons and the number of neutrons) are conserved, and because angular momentum and parity are conserved. This means that any state that differs from the ground state in terms of angular momentum, parity, or isospin, cannot decay to the ground state. If the state has an excitation energy above the pion mass then it could in principle decay by pion emission, but this is irrelevant in practice because nuclear level spacing are much lower than the pion mass.
2) These states become unstable when coupled to the weak and EM interactions, and they can decay by photon (or neutrino etc) emission. This is very useful precisely because the photon is weakly coupled. It implies that the width of the state is small, and that it can be resolved with high precision.
3) You can try to to resolve excited states with hadronic (pion or proton) beams, but this is much more complicated because the probe is now strongly coupled to the nucleus, and the scattering reaction is not only sensitive to the nuclear levels, but also to initial and final state interactions. | {
"domain": "physics.stackexchange",
"id": 53004,
"tags": "electromagnetic-radiation, nuclear-physics, spectroscopy, gamma-rays, pions"
} |
Differences between reconstruction- and generation-level variables in HEP data | Question: I am working on a CMS - related project where the ROOT trees contain both reconstruction-level and generation-level particle variables (like mass). However, I don't know the basic difference between the two or when we prefer using one type of variable over the other?
Answer: Real data goes
$$\text{Collisions} \to \text{Detector} \to \text{Trigger} \to \text{Reconstruction}$$
Simulated data goes
$$\text{Event generator} \to \text{Simulated detector} \to \text{Trigger} \to \text{Reconstruction}$$
Generator-level is the information from the first step, before the events are passed through the simulated detector. This is also sometimes called "MC Truth" or "Truth level".
Reconstruction-level is the information from the last step, after the reconstruction algorithms have been run over the data from the simulated detector.
You can compare the two in order to quantify things like efficiencies and resolutions. | {
"domain": "physics.stackexchange",
"id": 38062,
"tags": "particle-physics, experimental-physics, large-hadron-collider"
} |
What species is this fly? | Question: I am looking for an ID of this fly at genus or preferably species level.
Location: The Netherlands.
Size: approx. 7-8mm
Habitat: indoors, attic.
Timing: usually appears late winter, early spring
Answer: Apparently, this is a Pollenia sp
https://forum.waarneming.nl/smf/index.php?topic=447650.0.
One of the common species from this genus, Pollenia rudis, is often found overwintering in groups in attics and is therefore known as the cluster fly or attic fly. | {
"domain": "biology.stackexchange",
"id": 9590,
"tags": "species-identification, entomology, diptera"
} |
Variant allele frequency versus population frequency | Question: In gnomAD related article
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9160216/#:~:text=Allele%20frequency%20is%20calculated%20by,can%20differ%20substantially%20between%20positions
I read this section:
Allele frequency is calculated by dividing allele count by allele number, hence, allele frequency represents the frequency of confidently sequenced haplotypes that carry the allele in question (because coverage and sequencing quality varies across the genome, the allele number can differ substantially between positions). Allele frequency is not equivalent to the percentage of individuals that carry the allele, but is a suitable value for expressing the frequency of a variant in the general population. The number of individuals carrying a variant will depend on the number of heterozygous and homozygous individuals but can be calculated from the data provided in the variant table.
Does this mean that, to get the frequency of each allele in gnomAD, one could use the Allele Frequency (AF) column, or must one calculate the population frequency?
Answer: gnomAD provides a global Allele Frequency as well as AF values for most subsets. gnomAD does not provide individual genotypes (as far as I know), so calculating custom AF for non-precomputed subsets might not be possible.
Since you're interested in the NFE (non-Finnish European) sub-population, the nfe_*_AF fields might be of interest to you.
As far as calculating the fraction of non-WT samples from num_homozygous_samples, AC, and AN goes, your formula seems spot on. Here's my deconstruction of the formula:
Your formula is: (AC - num_homozygous_samples)/(AN / 2))
AN/2 is the number of samples (assuming all samples are diploid). AC - num_homozygous_samples gives you num_hets + num_homozygous_samples, i.e. the number of samples carrying at least one ALT allele. Dividing this by the number of samples gives you the frequency of samples with at least one ALT allele.
For example, let's take a dataset of 200 diploid samples where 100 are WT, 75 are het and 25 are hom-alt for a biallelic variant.
Since there are 200 diploid samples, AN is 400.
AC for the ALT allele is (75 * 1) + (25 * 2) = 125.
Since 100 are WT, it is known that fraction of samples with mutation = 1 - (100/200) = 0.5. This number can be arrived using your formula:
(AC - num_homozygous_samples) / (AN/2)
= ((125 - 25) / (400/2))
= 100 / 200
= 0.5 | {
"domain": "bioinformatics.stackexchange",
"id": 2641,
"tags": "allele-frequency, gnomad"
} |
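The arithmetic in the worked example above is easy to check with a short Python sketch (the variable names are mine for illustration, not gnomAD field names):

```python
# Toy dataset from the answer above: 200 diploid samples,
# 100 wild-type, 75 heterozygous, 25 homozygous-ALT.
n_samples, n_het, n_hom = 200, 75, 25

AN = 2 * n_samples            # allele number: two alleles per diploid sample
AC = n_het * 1 + n_hom * 2    # allele count for the ALT allele
AF = AC / AN                  # allele frequency, the value gnomAD reports

# Fraction of samples carrying at least one ALT allele:
carrier_fraction = (AC - n_hom) / (AN / 2)

print(AN, AC, AF, carrier_fraction)   # 400 125 0.3125 0.5
```

Note how AF (0.3125) and the carrier fraction (0.5) differ, which is exactly the distinction the documentation excerpt draws.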
Helping diff. drive robot get through doorways | Question:
Hey folks!
I'm working on a differential drive SLAM robot and I have things working pretty well. I'm able to:
Create a map of my apartment using slam_toolbox
Navigate from room to room using Nav2 and Goal Poses in RViz2
First off a big thanks to Steve Macenski and all the folks who maintain these open source packages. I've really been enjoying using them so far.
I'm having a problem where my robot is getting "stuck" in doorways that are plenty big for it to navigate through. It doesn't happen every time, and I'd say 50% of the time the robot just breezes through and passes to the next room with no problem. The other 50% though, the robot will stop at the threshold, start rotating to and fro in a confused manner, and sometimes it works itself out after 30 sec or so, but most of the time it ends up getting totally stuck until I give it a little help with the joystick.
I have noticed that this happens more often when moving to a room that is positioned a U-turn away from the current room. For instance, the robot can usually handle these paths with no problem:
Whereas it almost ALWAYS gets stuck moving between these two rooms:
It doesn't like something about that turn radius, even though practically there is plenty of room to pass (sometimes it manages to).
What I have tried:
Confirming the URDF descriptions are accurate to the robot's physical dimensions
Playing with the wheel separation multiplier in the controller configuration file
But I can't seem to shake this issue. Perhaps there is some setting in Nav2 related to the costmap that I can fine-tune to get this working? Thanks in advance for any suggestions!
Originally posted by coatwolf on ROS Answers with karma: 13 on 2023-04-12
Post score: 0
Original comments
Comment by Mike Scheutzow on 2023-04-13:
Have you installed the plugin in rviz to display the Local Costmap? If you have "fake" obstacles in the Local Costmap, the Local Planner may be unable to find an open path.
Comment by coatwolf on 2023-04-16:
Yes you can see in the images in my post that I have the costmap displayed in rviz2. Unless you meant something else. I have made some progress (see my response in a comment below).
Comment by Mike Scheutzow on 2023-04-16:
The Global Costmap is different from the Local Costmap. It would benefit you to understand the difference.
Comment by coatwolf on 2023-04-16:
Apologies, I missed the "Local" keyword. Yes I have viewed the local costmap as well and there doesn't seem to be an issue of phantom obstacles. Thanks for the reply.
Answer:
I haven't solved this entirely BUT I am closer now that I have discovered the costmap parameters file which I was unaware of prior to my post. You can find info here and the code here.
I had originally installed the nav2 stack using apt which made it inconvenient to access this file, so I apt removed all of the packages and then reinstalled by cloning from the nav2 repository (guide here) to a dedicated nav2_ws and editing / building from there.
By playing with the following values I am slowly but surely improving my robot's mobility through turns and doorways:
footprint_padding
inflation_radius
cost_scaling_factor
Originally posted by coatwolf with karma: 13 on 2023-04-16
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by hunterlineage1 on 2023-04-16:
This is right, I had to adjust inflation_radius and it worked.
Comment by coatwolf on 2023-04-16:
Thanks for the reply! Was it simply a matter of reducing the radius?
Comment by hunterlineage1 on 2023-04-16:
Yes, but you might still have to tune the footprint_padding and cost_scaling_factor. | {
"domain": "robotics.stackexchange",
"id": 38345,
"tags": "ros, ros2, differential-drive"
} |
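For reference, a hedged sketch of where the three parameters named in the answer typically live in a Nav2 parameter file; the structure follows the usual nav2_params.yaml convention, and the numeric values are placeholders to tune, not recommendations:

```yaml
local_costmap:
  local_costmap:
    ros__parameters:
      footprint_padding: 0.01      # extra clearance added around the footprint
      plugins: ["obstacle_layer", "inflation_layer"]
      inflation_layer:
        plugin: "nav2_costmap_2d::InflationLayer"
        cost_scaling_factor: 3.0   # higher -> cost decays faster away from obstacles
        inflation_radius: 0.55     # reduce if doorways look "closed" in the costmap
```

The same three keys can be tuned under `global_costmap` as well.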
Question on cryptographic advantage | Question: In Provably Secure Steganography by Hopper, et al, we have the following definition
Cryptographic notions
Let $F:\{0,1\}^k \times \{0,1\}^L \rightarrow \{0,1\}^l$ denote a family of functions. Let $A$ be an oracle probabilistic adversary:
Define the $prf$-advantage of $A$ over $F$ as
$$Adv^{prf}_F(A)= | Pr_{K \leftarrow U(k), r\leftarrow \{0,1\}^*}[A_r^{F_k(.)}=1]- Pr_{g \leftarrow U(L,l), r\leftarrow \{0,1\}^*}[A_r^{g}=1] |.$$
where $r$ is the string of random bits used by adversary $A$. Define the insecurity of $F$ as
$$InSec^{prf}_{F}(t,q) = max_{A\in \mathcal{A}(t,q)} \{ Adv_F^{prf}(A)\}$$
where $U(k)$ is a uniform distribution over $k$-bit strings, $U(L,l)$ is a uniform distribution on functions from $\{0,1\}^L$ to $\{0,1\}^l$, and $F_K:\{0,1\}^L\to\{0,1\}^l$ is the function obtained by fixing the key $K$ in the family $F$. But honestly, I don't know how I'm supposed to understand or parse the definition. Intuitively, I get the adversary $A$ is outputting $1$ if it decides that $F_K$ is not sufficiently different from a sample drawn from the channel distribution. But can someone walk me through, like I'm an idiot, as to what role each part of the definition plays in the overall mechanism of prf-advantage?
Link to paper: https://www.cs.cmu.edu/~biglou/PSS.pdf
Answer: If you're totally lost, you might not have the needed background to get much use out of the paper. The paper draws on many concepts and foundational ideas from theoretical cryptography (including how to formalize notions of security for various cryptographic tasks), and it assumes familiarity with those ideas. If the definition looks confusing, it's probably because you don't have that familiarity; the paper's intended audience was cryptographers who already have that familiarity.
You might want to spend a few weeks studying theoretical cryptography (foundations of cryptography), e.g., from Lindell-Katz, the Goldreich book, and/or many lecture notes online on the foundations of cryptography. You'll find that this kind of definition is routine, and studying that material will prepare you to be able to understand this material and work through the definitions on your own. | {
"domain": "cstheory.stackexchange",
"id": 3202,
"tags": "cr.crypto-security"
} |
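To make the definition above concrete, here is a toy Monte-Carlo sketch (my own illustration, not from the paper): a deliberately terrible "PRF" that ignores its key, and a distinguisher $A$ that achieves prf-advantage close to 1 against it. The parameter sizes are tiny and purely illustrative.

```python
import random

ELL = 8  # output length l in bits (toy size)

def weak_prf(key, x):
    """A deliberately broken 'PRF': F_K(x) = x, ignoring the key entirely."""
    return x

def prf_oracle():
    key = random.getrandbits(64)          # K <- U(k)
    return lambda x: weak_prf(key, x)

def random_function_oracle():
    table = {}                            # lazily sample g <- U(L, l)
    def g(x):
        if x not in table:
            table[x] = random.getrandbits(ELL)
        return table[x]
    return g

def adversary(oracle):
    """A queries two points and outputs 1 iff the oracle looks like the identity."""
    return 1 if oracle(3) == 3 and oracle(200) == 200 else 0

def pr_output_1(make_oracle, trials=2000):
    return sum(adversary(make_oracle()) for _ in range(trials)) / trials

# Adv^prf_F(A) = | Pr[A^{F_K} = 1] - Pr[A^g = 1] |, estimated by sampling:
advantage = abs(pr_output_1(prf_oracle) - pr_output_1(random_function_oracle))
print(f"estimated prf-advantage: {advantage:.3f}")
```

Against the keyed oracle, $A$ always outputs 1; against a truly random function, both equalities hold only with probability $2^{-2\ell}$, so the gap (the advantage) is essentially 1. A secure PRF is one for which no time- and query-bounded $A$ can make this gap non-negligible, which is what the insecurity $InSec^{prf}_F(t,q)$ quantifies.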
Upsample - filter - downsample | Question: Given a discrete signal $ r(nT_c)$ specified at a rate $T_c$, assume we want to resample with a 3/2 rate change to match the sampling rate of some other signal.
We would like to upsample it by 3, then filter it by filter of length $K$ and then downsample by 2.
Also assume that the following relationship holds $$ \frac{3}{2}T_s = T_c$$
Step 1: Upsample then we have
$r'(n\frac{T_s}{2}) = \begin{cases} r(\frac{n}{3}T_c) & \text{if } n = 0, 3, 6, \ldots \\ 0 & \text{otherwise} \end{cases}$
Step 2: Filter
$\hat{r}(n\frac{T_s}{2})= \sum_{k=0}^{K-1}r'(\tfrac{(n-k)T_s}{2})\,h_{filter}(k)\,\,\,\,\,\,\,n=0,1,\ldots$
Step 3:
$r_o(nT_s)= \hat{r}(n\frac{T_s}{2}-\underbrace{\frac{K-1}{2}\frac{T_s}{2}}_{???}) \,\,\,\,\,n=0,1,\ldots$
I don't understand two of the above steps
1) why do we need to use a filter to resample, and
2) why is the underbraced part there, i.e. why isn't it simply $$\hat{r}(n\frac{T_s}{2}) \,\,\,\,\,n=0,1,\ldots$$
Thank you very much.
Answer:
Your goal is to create a signal with 3 times more samples. The first step is to insert 2 zeros between each sample.
Technically, you now have a signal with 3 times more samples, as you require. However most of the samples are zeros. What you need to do next is to interpolate the 2 samples between each 2 nonzero samples. And this is exactly what the low-pass filter does (or, at least, this is one way to look at it).
Digital filters introduce a delay of half the filter's length. I believe this is the reason for the time shift in step 3. | {
"domain": "dsp.stackexchange",
"id": 2770,
"tags": "sampling, downsampling"
} |
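The three steps can be sketched end-to-end in plain Python (my own illustration; the windowed-sinc low-pass design and tap count are arbitrary choices, not from the question):

```python
import math

def resample_3_2(x, num_taps=63):
    """Rate change by 3/2: zero-stuff by L=3, low-pass FIR, keep every M=2nd sample."""
    L, M = 3, 2
    # Step 1: upsample -- insert L-1 zeros between samples
    up = [0.0] * (L * len(x))
    up[::L] = x
    # Step 2: windowed-sinc low-pass, cutoff pi/L (the tighter of pi/L and pi/M),
    # with gain L so the interpolated amplitude matches the input
    mid = (num_taps - 1) / 2
    def sinc(t):
        return 1.0 if t == 0 else math.sin(math.pi * t) / (math.pi * t)
    h = [sinc((k - mid) / L) *                                        # L * (1/L) * sinc(n/L)
         (0.54 - 0.46 * math.cos(2 * math.pi * k / (num_taps - 1)))   # Hamming window
         for k in range(num_taps)]
    y = [sum(h[k] * up[n - k] for k in range(num_taps) if 0 <= n - k < len(up))
         for n in range(len(up) + num_taps - 1)]
    # Step 3: compensate the (num_taps-1)/2 group delay, then downsample by M
    delay = (num_taps - 1) // 2
    return y[delay::M][: (L * len(x)) // M]

x = [math.sin(2 * math.pi * 0.02 * n) for n in range(120)]
y = resample_3_2(x)
# interior samples should match the same sine read out at the new rate T_s = (2/3) T_c
err = max(abs(y[m] - math.sin(2 * math.pi * 0.02 * (2 * m / 3)))
          for m in range(20, len(y) - 20))
print(f"max interior error: {err:.4f}")
```

This illustrates both points of the answer: the low-pass filter interpolates the zero-stuffed samples (with gain L restoring the amplitude), and the `delay` subtraction is the group-delay compensation that the underbraced term in step 3 performs.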
Closest star system to Alpha Centauri? | Question: The closest star system to our Solar System is Alpha Centauri.
But is our Solar System the closest star system to Alpha Centauri?
If not, which star system is?
Answer: The sun is the nearest star to Alpha Centauri (unless you count Proxima Centauri, which is really part of the same system).
There is a very small and dim pair of brown dwarfs, called Luhman 16, that are closer, at about 3.6 light years from Alpha Centauri. Brown dwarfs are not true stars, but they do glow from their own heat. They were only discovered in 2013. It is possible that there are other very dim objects, but these would not be stars either in the strict sense.
"domain": "astronomy.stackexchange",
"id": 4475,
"tags": "solar-system, distances, star-systems"
} |
Get unique entries from array | Question: For a website I'm working on, I had to get all the unique entries in an array and count their occurrence. It is possible a certain entry is only found once, but it can also be found 20 times. So I designed the following bit of code:
for ($i = 0; $i < count($nodes); $i++)
{
    for ($j = 0; $j < count($nodes[$i]); $j++)
    {
        if (!array_key_exists($nodes[$i][$j], $uniekenodes))
        {
            $uniekenodes[$nodes[$i][$j]] = 1;
        }
        else
        {
            $uniekenodes[$nodes[$i][$j]] += 1;
        }
    }
}
The $nodes array contains the entries returned from the database. And the $uniekenodes array contains the unique entries and how many times they occurred in the $nodes array.
This is my first PHP script (on a Drupal webpage, by the way) and as such I don't know that much about PHP. I'm pretty confident there is probably a more efficient way to do this, using PHP-specific functions. Any and all tips are welcome!
EDIT: I might have to clarify the structure of the arrays:
$nodes has two dimensions. The first dimension is just a key for the second dimension. This one contains an array of drupal nodes for each key.
$uniekenodes uses the nodes from $nodes as a key and the value is how many times the node occurred in $nodes
EDIT 2:
I printed the arrays, as requested by Boris Guéry:
Array
(
[0] => Array //Each of these contains node id's returned by a query
(
[0] => 12
[1] => 11
[2] => 10
[3] => 9
[4] => 8
[5] => 7
)
[1] => Array
(
[0] => 10
[1] => 9
[2] => 8
[3] => 7
)
[2] => Array
(
)
[3] => Array
(
[0] => 11
[1] => 10
[2] => 9
[3] => 8
[4] => 7
)
)
Array //This one uses the node ids from the previous array as keys, the values are the number of occurrences.
(
[12] => 1
[11] => 2
[10] => 3
[9] => 3
[8] => 3
[7] => 3
)
Answer: First off, foreach loops would make this look much nicer. Also, it's always better to use the "not" operator (!) only if you have to. For instance, in your situation you do something either way, so switch it around.
foreach( $nodes AS $node ) {
    foreach( $node AS $value ) {
        if( array_key_exists( $value, $uniekenodes ) ) {
            $uniekenodes[ $value ] += 1;
        } else {//the not is now the else
            $uniekenodes[ $value ] = 1;
        }
    }
}
I had the initial idea that you could just use array_count_values() on each interior array and then combine them. You can still do this, but combining them wasn't as easy as I thought it would be. Maybe I'm missing something... Anyways, I found another solution. It SHOULD work, but I've not tested it, so no promises. I'm not really sure how I feel about the closure, but it seems to be picking up common use. In the end, both the above method and the one below are fine; it's up to you to decide which to use. I'm not even sure where I stand on this one. The bottom one is cleaner, but the top one I'm more familiar with. It's fun to play around with stuff like this and experiment. I'll have to see how this works out in some of my projects. It's always good when you walk away having learned something yourself. Thanks for giving me an excuse to play around with something new :)
array_walk_recursive( $nodes, function( $value, $key ) use( &$uniekenodes ) {
    if( array_key_exists( $value, $uniekenodes ) ) {
        $uniekenodes[ $value ] += 1;
    } else {
        $uniekenodes[ $value ] = 1;
    }
} );
On a final note, I would consider renaming the $uniekenodes array. It is just awkward and I wouldn't have known what was in it if you hadn't told me. $uniqueNodes is good, or $nodeFrequency, or anything else that better describes it. Unless, of course, this is in another language and that is a good description (I don't know, looks like it could be). Hope this helps! | {
"domain": "codereview.stackexchange",
"id": 2209,
"tags": "php, drupal"
} |
Triviality of Yang Mills in $d>4$? | Question: It has been proved that the $\phi^4$ theory is trivial in spacetime dimensions $d>4$. By trivial I mean that the field $\phi$ is a generalized free field, or in other words, its only nonzero connected correlator is the two point correlator. This is a nonperturbative result, which manages to get around the fact that $\phi^4$ is nonrenormalizable in dimensions $d>4$.
Here is the paper which proves this result: https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.47.1
Have there been any results similar to this for Yang-Mills theory? Yang-Mills is nonrenormalizable in dimensions $d>4$ as well, so I imagine that if there were a similar result, $d=4$ should also be the critical dimension.
Answer: Good question. I am not aware of similar results for YM. The $\phi^4$ case uses correlation inequalities for ferromagnetic spin systems. Unfortunately, not many of those are known for gauge theories. YM is an example of model with non-Abelian group symmetry like $SU(N)$. Even for much simpler models with $O(N)$ symmetry like $N$-component $\phi^4$ or spherical spins, not much is known as far as correlation inequalities when $N\ge 3$. | {
"domain": "physics.stackexchange",
"id": 56198,
"tags": "quantum-field-theory, renormalization, spacetime-dimensions, yang-mills, non-perturbative"
} |
request data and print results | Question: On the last test, the code below takes approximately 10 seconds to download and then print the data from 10 URLs. I wish to speed this up as much as possible, as later on I plan to expand this further and use the scraped data as live data in a GUI.
The display_value() function accounts for 95% of the time, which seems like an awful lot considering how little it does. I am thinking it's due to how I've written the function call, but I'm out of ideas.
def live_indices():
    """Acquire stock value from Yahoo Finance using the stock symbol as key. Then assign the relevant variable to the respective value,
    i.e. 'GSPC' equates to the value keyed to 'GSPC' in indices_price_value.
    """
    import sys
    import datetime
    import requests
    import bs4
    start_time = datetime.datetime.now()  # Used to time how long the function takes to complete
    headers = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/94.0.4606.61 Safari/537.36'}
    all_indices_values = {}
    symbols = ['GSPC', 'DJI', 'IXIC', 'FTSE', 'NSEI', 'FCHI', 'N225', 'GDAXI', 'IMOEX.ME', '000001.SS']
    for ticker in symbols:
        url = f'https://uk.finance.yahoo.com/lookup/all?s={ticker}'
        tmp_res = requests.get(url, headers=headers)
        tmp_res.raise_for_status()
        soup = bs4.BeautifulSoup(tmp_res.text, 'html.parser')
        indices_price_value = soup.select('#Main tbody>tr td')[2].text
        all_indices_values[ticker] = indices_price_value
    end_time = datetime.datetime.now() - start_time
    sys.stdout.write(f'DONE - time taken = {end_time}'.upper())
    return all_indices_values
def display_value(live_indices):
    print(live_indices['GSPC'])
    print(live_indices['DJI'])
    print(live_indices['IXIC'])
    print(live_indices['FTSE'])
    print(live_indices['NSEI'])
    print(live_indices['FCHI'])
    print(live_indices['N225'])
    print(live_indices['GDAXI'])
    print(live_indices['IMOEX.ME'])
    print(live_indices['000001.SS'])
display_value(live_indices())
Answer: I think your profiling is potentially misleading. Neither 10 prints nor accessing 10 dictionary keys should take upwards of 10 seconds. Maybe try profiling again with something like
SYMBOLS = ['GSPC', 'DJI', 'IXIC', 'FTSE', 'NSEI', 'FCHI', 'N225', 'GDAXI', 'IMOEX.ME', '000001.SS']

def display_values(live_indices):
    for key in SYMBOLS:
        print(live_indices[key])

all_indices_values = live_indices()
display_values(all_indices_values)
Now, to actually speed up the requests you will have to use parallelism. You are currently doing sequential requests, which understandably takes forever since you have to wait for each scrape to finish before starting the next one.
You can probably also look for some API instead of scraping web pages for the data; that should decrease the payload by quite a bit.
If you do not already know, there are modules for both profiling and for timing built into the standard library - you don't need to manually try to time stuff. | {
"domain": "codereview.stackexchange",
"id": 43896,
"tags": "performance, python-3.x, web-scraping, python-requests"
} |
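To illustrate the parallelism point from the answer, here is a hedged sketch using `concurrent.futures.ThreadPoolExecutor`. The network call is replaced by a `time.sleep` stand-in so the structure is clear; in the real code, `fetch_price` would contain the `requests.get` + BeautifulSoup logic from the question.

```python
import time
from concurrent.futures import ThreadPoolExecutor

SYMBOLS = ['GSPC', 'DJI', 'IXIC', 'FTSE', 'NSEI',
           'FCHI', 'N225', 'GDAXI', 'IMOEX.ME', '000001.SS']

def fetch_price(ticker):
    """Stand-in for the scrape of one ticker; real code would do the
    requests.get + soup.select here and return the parsed price."""
    time.sleep(0.2)                      # simulate ~200 ms of network latency
    return ticker, f'{len(ticker)}.00'   # dummy value for illustration

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=10) as pool:
    all_indices_values = dict(pool.map(fetch_price, SYMBOLS))
elapsed = time.perf_counter() - start
print(f'{len(all_indices_values)} tickers in {elapsed:.2f}s')  # ~0.2 s, not ~2 s
```

With 10 workers the 10 "requests" run concurrently, so the total time is roughly one latency instead of the sum of all ten, which is where the bulk of the original 10 seconds goes.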
Writing multiple if tags that compare two variables | Question: I am a beginner in C# coding, and I was trying to compare two int variables:
void CompareNumber() {
int oneNumber;
int secondNumber;
if (oneNumber > secondNumber)
{
DoSomething();
}
else if (oneNumber < secondNumber)
{
DoSomethingElse();
}
else if (oneNumber == secondNumber)
{
DoSomethingDifferent();
}
}
While this does work, it looks kinda messy, especially because I compare variables in this manner many times. Is there a more concise way of doing this, making it look neater? (Except for omitting the curly brackets)
Here is my actual code for a simple game that thinks of a number, and you need to guess what it said:
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
namespace GuessName
{
class Program
{
static void Main(string[] args)
{
int previousNumber = 0;
AskNumber(previousNumber);
}
static void AskNumber(int previousNumber)
{
Console.Clear();
int numberTyped;
int randomNumber = new Random().Next(1, 11);
while (previousNumber == randomNumber)
randomNumber = new Random().Next(1, 11);
Console.WriteLine("I am thinking of a number between 1 and 10. What do you think it is?");
if (int.TryParse(Console.ReadLine(), out numberTyped) == true)
CheckNumber(numberTyped, randomNumber);
else
InvalidNumber();
}
static void CheckNumber(int numberTyped, int randomNumber)
{
if (numberTyped < 0 || numberTyped > 10)
{
InvalidNumber();
}
else if (numberTyped > randomNumber)
{
Console.Clear();
Console.WriteLine("Sorry, but your number is bigger than what I thought");
Console.WriteLine("Try again, just type what you think:");
}
else if (numberTyped < randomNumber)
{
Console.Clear();
Console.WriteLine("Sorry, but your number is smaller than what I thought");
Console.WriteLine("Try again, just type what you think:");
}
else
{
Console.Clear();
WonGame(randomNumber);
}
if (int.TryParse(Console.ReadLine(), out numberTyped) == true)
CheckNumber(numberTyped, randomNumber);
else
InvalidNumber();
}
static void WonGame(int randomNumber)
{
Console.WriteLine("Great job, You did it!");
Console.WriteLine("Would you like to try again? (y) or would you like to quit? (n)");
string wantToPlay = Console.ReadLine();
if (wantToPlay == "y")
AskNumber(randomNumber);
else
Environment.Exit(0);
}
static void InvalidNumber()
{
Console.Clear();
Console.WriteLine("Your number is invalid. Would you like to try again? (y)");
Console.WriteLine("Or would you like to quit? (n)");
string wantToPlay = Console.ReadLine();
if (wantToPlay == "y")
AskNumber(0);
else
Environment.Exit(0);
}
}
}
Answer: Indentation?
At first I thought it was a mere copy/paste error, but the pattern is throughout the code you submitted, so I have to mention it: it's not the braces you indent, it's what the braces encompass!
static void Main(string[] args)
    {
    int previousNumber = 0;
    AskNumber(previousNumber);
    }
Should be:
static void Main(string[] args)
{
    int previousNumber = 0;
    AskNumber(previousNumber);
}
Curly braces define a scope. In C#, convention is to place the scope-opening brace on the next line, so this:
void CompareNumber() {
Should be:
void CompareNumber()
{
Speaking of braces:
Is there a more concise way of doing this, making it look neater? (Except for omitting the curly brackets)
Omitting the curly braces does not make code look neater. Proof being this very confusing snippet:
if (wantToPlay == "y")
AskNumber(0);
else
Environment.Exit(0);
}
I had to scan the entire method twice to figure out that the last } was in fact closing the scope of the method.
By opposition:
    if (wantToPlay == "y")
    {
        AskNumber(0);
    }
    else
    {
        Environment.Exit(0);
    }
}
Is brutally in-your-face crystal-clear. Notice the position of the last brace, which closes the method scope. The readability problem stems from the extraneous indentation of scope-delimiting braces - this doesn't suffer the exact same issue:
    if (wantToPlay == "y")
        AskNumber(0);
    else
        Environment.Exit(0);
}
You should always delimit scopes with braces. if/else statements denote a scope - that scope wants its braces.
Same with loop constructs:
while (previousNumber == randomNumber)
    randomNumber = new Random().Next(1, 11);
Should be:
while (previousNumber == randomNumber)
{
    randomNumber = new Random().Next(1, 11);
}
The Good
You use descriptive naming. This is often understated - descriptive names are your best friend. They reduce bug-proneness of your code all by themselves, and make code easier and more enjoyable to read. Since programming is 80% reading and 20% writing, enjoyable reading means enjoyable programming.
Good job!
The Bad
The syntax for an if condition goes if ([bool-expression]), where [bool-expression] is any expression that evaluates to true or false. Can you spot the redundancy here?
if (int.TryParse(Console.ReadLine(), out numberTyped) == true)
That's right. TryParse returns a bool that's a perfectly valid [bool-expression] - in fact, anytime you're comparing a Boolean value (or expression) to a Boolean constant for the sake of getting a Boolean expression, you are needlessly repeating yourself... redundantly.
Just do this instead:
if (int.TryParse(Console.ReadLine(), out numberTyped))
Method names should start with a verb. Always. void InvalidNumber() is a bad name, because it doesn't say anything about what it actually does - it doesn't return a number, and doesn't intake one either. Without looking at how it's implemented, it's pretty confusing.
Console applications exit when the Main method executes to its closing brace. I have never used Environment.Exit(0) in any application I've written - in fact, this call is like a "red button", a forceful, ugly way to kill your application (and throw the body in a container in the back alley) rather than letting it gracefully exit through the front door.
You're going to have to restructure things up in order to make that happen though.
CheckNumber shouldn't have the 0 and 10 hard-coded like this; there should be constants defined for the lower and upper bounds, and the AskNumber method should be using them as well - that way if you ever need to change the 10 to a 100, there's only 1 place you'll need to change it.
AskNumber shouldn't be creating a new Random() every time. There should be one single instance of Random that the program uses whenever it needs a random number.
There's a lot more to say about this code, I'll let other reviewers chip in ;) | {
"domain": "codereview.stackexchange",
"id": 11550,
"tags": "c#, number-guessing-game"
} |
rolling window global costmap borders are marked as obstacles | Question:
Hi,
I am trying to get a rolling window global costmap. I know that this is highly uncommon, but in my use case it would be extremely beneficial if the global costmap didn't keep anything forever. (This is for a robot on a street loop with potential obstacles.)
As you can see in the picture the border of the global costmap gets marked as an obstacle and isn't removed by the next update.
https://imgur.com/g2p3106
It gets only partly removed once cost values of the local map are close to the previous borders of the global one.
I tried setting always_send_full_costmap to true, but that didn't change anything.
I would be extremely grateful for any help or tips, thanks in advance
Tristan
The config file of the global costmap is as follows:
global_costmap:
  map_type: costmap
  global_frame: map
  robot_base_frame: base_footprint
  update_frequency: 5.0
  publish_frequency: 5.0
  static_map: false
  rolling_window: true
  always_send_full_costmap: true
  width: 10.0
  height: 10.0
  plugins:
    - {name: obstacle_layer, type: "costmap_2d::ObstacleLayer"}
    - {name: inflation_layer, type: "costmap_2d::InflationLayer"}
Originally posted by Tristan9497 on ROS Answers with karma: 220 on 2021-02-11
Post score: 0
Answer:
I just was able to solve this myself.
For future readers:
This behaviour isn't caused by the costmaps themselves, but by the global_planner of the navigation stack. To fix this you'll need to set its parameter outline_map to false which is by default set to true.
This isn't included in the parameters at http://wiki.ros.org/global_planner#Parameters and I am unsure if you can set it directly from the launch file.
I changed the default value in the planner_core.cpp file located at navigation/global_planner/src/planner_core.cpp at line 149, which definitely isn't the cleanest way possible.
If you configured this you'll be able to use a rolling window for global maps aswell.
This parameter was added by the following PR; see the commit for details:
https://github.com/ros-planning/navigation/commit/57c3cb2357201e956be80324bb9b9282ff0fa051
EDIT:
If someone can change it in the ROS wiki, please do.
EDIT:
I just added it to the wiki.
Unfortunately I can't accept this answer myself.
Originally posted by Tristan9497 with karma: 220 on 2021-02-11
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 36074,
"tags": "navigation, move-base, costmap-2d"
} |
Problems of Rotational Body at Uniform Speed | Question:
In this figure, a $1.5 \;\text{kg}$ mass is connected to a $2 \;\text{kg}$ mass by an inextensible string. The $1.5 \;\text{kg}$ mass is on the surface of a table; the string passes down through a hole in the table, and the $2 \;\text{kg}$ mass hangs from it. The $1.5 \;\text{kg}$ mass is rotating at a uniform speed, and its coefficient of sliding friction is $\mu_k=0.2$.
Questions:
Write down Newton's equation for the $1.5 \;\text{kg}$ mass moving at a uniform speed.
My Attempt: I think they are asking for Equations of Angular Motion (What else could be?). But if something is moving at uniform speed, does it mean it has uniform angular velocity? If so, then the only equation I can think about is $\theta= \theta_0 + \omega t$ where $\theta_0$ and $\theta$ are the initial and final position of the $1.5 \;\text{kg}$ mass respectively. Is my assumption correct? How can I improve it?
What is the work done due to the rotation of the uniform speed of $1.5 \;\text{kg}$ mass?
My Attempt: Now I am a bit confused here. In the original context, they mention the coefficient of sliding friction. That means there is friction present in our consideration. If I only consider work done due to the centripetal force, it would be $0$ because of the $90°$ angle. But if friction is present, should we consider that an extra torque is applied to the $1.5 \;\text{kg}$ mass to keep its speed uniform?
If we want to keep the $2 \;\text{kg}$ mass (which is hanging) stationary in its place, then what should be the speed of the $1.5 \;\text{kg}$ mass?
My Attempt: Here, I considered that for the $2 \;\text{kg}$ mass the resultant motion is zero, which means its acceleration $(a)$ would be zero. So, if we consider the tension force from the $1.5 \;\text{kg}$ mass keeping the $2 \;\text{kg}$ mass from falling to be L, then $2 \times 9.8-L=ma=2 \times 0 \implies L=19.6 \;\text{N}$. But where does the tension force on the $1.5 \;\text{kg}$ mass come from? I thought it's from the centrifugal (or centripetal) force. But should I take into account friction now? If the friction is $f_k$, then can I write $\frac{mv^2}{r}-f_k-L=0$ in the next step to find out the speed of the $1.5 \;\text{kg}$ mass?
If the speed of $1.5 \;\text{kg}$ mass decrease due to friction, then draw the effective Velocity vs. Time graph of the $2 \;\text{kg}$ mass.
My Attempt: Like the 3rd question, I would consider $$2 \times 9.8-L=ma=2 \cdot a$$ for the $2 \;\text{kg}$ mass but here $a$ isn't $0$. Now how can I attempt to draw it?
Please help me solve this problem. I have been struggling with it for hours. Even if you can answer only 1 or 2 questions, please do. I desperately need your help.
Answer: 1: Newton's equation would have two components. To maintain the centripetal acceleration you need a tension in the string T = m$v^2$/R. To counter the friction and maintain a constant speed, you need an external tangential force F = $μ_k$mg.
2. The external force supplies power = Fv.
3. Set T = (2 kg)g to support the weight of the lower mass.
4. Without the external force, the torque from friction would cause a decrease in the angular momentum, a decrease in the tension, a decrease in the radius, and allow the lower mass to accelerate downward. Starting from equilibrium, its downward velocity would start at zero and increase with time (in a complex fashion and ending at zero when the upper mass hits the hole).
I set up a numeric simulation of this system using a fixed xy system with its origin at the hole. I let t = 0 when the external force was removed and Δt = 0.01 sec. The upper mass started at x = 0.5 m and y = 0 with an equilibrium $v_y$ of 2.556 m/s. It followed a spiral path in toward the origin completing one cycle at t = 1.09 sec., and crossing the x axis at 0.3 m with a speed of 1.95 m/s. The downward speed of the hanging mass increased slowly, reaching a maximum of 0.34 m/s just before the first half cycle and then decreased even more slowly after that. | {
"domain": "physics.stackexchange",
"id": 81100,
"tags": "homework-and-exercises, newtonian-mechanics, forces, mass, centripetal-force"
} |
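As a numeric cross-check of points 1-3 above (my own sketch; the radius R = 0.5 m is an assumption not given in the problem, chosen to match the simulation described in the answer):

```python
import math

m, M, g, mu_k = 1.5, 2.0, 9.8, 0.2
R = 0.5                      # assumed radius of the circular path (not given)

T = M * g                    # point 3: the tension must support the hanging mass
v = math.sqrt(T * R / m)     # point 1: T = m v^2 / R  =>  v = sqrt(T R / m)
F = mu_k * m * g             # tangential force balancing kinetic friction
P = F * v                    # point 2: power supplied by the external force

print(f"T = {T:.1f} N, v = {v:.3f} m/s, F = {F:.2f} N, P = {P:.2f} W")
```

With R = 0.5 m this gives v ≈ 2.556 m/s, which agrees with the equilibrium speed quoted in the simulation of point 4.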
Clean way to remove an identical nested object in javascript | Question: can you give me some feedback about my approach to remove an identical element from the following data structure?
const items = {
  'Sun Mar 07 2021': [
    { id: 2 },
    { id: 1 }
  ],
  'Sat Mar 06 2021': [
    { id: 1 } // remove me
  ]
}
const id = 1 // filter by id 1
const newDate = 'Sun Mar 07 2021' // filter by newDate
let oldDate = ''
// Find the duplicate date and
// save it in oldDate to splice it afterwards
Object.keys(items).forEach(date => {
  items[date].forEach(item => {
    const match = item.id === id
    if (match && date !== newDate) {
      oldDate = date
    }
  })
})
const idx = items[oldDate].findIndex(el => el.id === id)
// remove the old item from matched array
if (oldDate) items[oldDate].splice(idx, 1)
I think it can be simplified or solved differently. Unfortunately I can't get any further, do you have any ideas?
Answer:
I think it can be simplified or solved differently.
That's good intuition. That's called a code smell. Something "smells" funny, off, or wrong but not sure what it is. Pay attention to that feeling!!
Unfortunately I can't get any further
This is an even stronger code smell.
do you have any ideas?
Re-examine the date object design.
date and IDs should be in one object
Put the date objects into an array. Array is designed for easy handling of multiple objects.
Using Array's provided methods should eliminate external variables and functions.
This code demonstrates overall structural changes after fixing the objects. Algorithm details not shown.
const dates = [
  { date : 'Sun Mar 07 2021',
    IDs : [ 1, 2 ]
  },
  { date : 'Sat Mar 06 2021',
    IDs : [ 1 ]
  }
]

const dupeDate = { date : 'Sun Mar 07 2021', IDs : [ 1 ] }

dates.forEach( date => {
  if( date.date === dupeDate.date ) return;
  if( date.IDs.includes( dupeDate.IDs[0] )) {
    // algorithm details here. Should be function calls as much as possible,
    // not dozens of lines of nested, nested code.
  }
});
"domain": "codereview.stackexchange",
"id": 40814,
"tags": "javascript"
} |
Meaning of Representation | Question: I am reading Schwartz's book "Quantum Field Theory and The Standard Model", in chapter 8, the author says "A set of objects $\psi$ that mix under a transformation group is called a representation of the group."
I am confused about this. I thought the representation is the representation of the transformation, not the set of states corresponding to those transformations. In the book, it sounds like the set of states is the representation, which doesn't make sense to me: how can a set of states (closed under the transformation) be a representation of the group? Or is it really saying that the "transformation matrix" (assuming we are using a matrix representation) is the representation of the group, and the set of states closed under those transformations is also called a representation of that group?
Answer: A representation $\rho$ of some Lie group $G$ is a group homomorphism $\rho:G\rightarrow GL(V)$, where $GL(V)$ is the general linear group over some vector space $V$. $V$ is often called the representation space.
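To make the homomorphism property concrete, here is a small numeric sketch (Python, purely illustrative): the 2×2 rotation matrices form a representation of the circle group on the vector space $\mathbb{R}^2$, and the defining property $\rho(g)\rho(h)=\rho(gh)$ can be checked directly.

```python
import math

def rho(theta):
    """Represent rotation by angle theta as a 2x2 matrix --
    a representation of the circle group on R^2."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s], [s, c]]

def matmul(A, B):
    """Plain 2x2 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Homomorphism property: rho(a) * rho(b) == rho(a + b)
a, b = 0.7, 1.9
lhs = matmul(rho(a), rho(b))
rhs = rho(a + b)
assert all(abs(lhs[i][j] - rhs[i][j]) < 1e-12
           for i in range(2) for j in range(2))
```

Here $\mathbb{R}^2$ is the representation space; physicists would loosely call the vectors it contains "the representation", in exactly the sense Schwartz uses.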
It is standard parlance both in physics and in mathematics to refer to a representation space simply as a representation when the actual homomorphism in question is otherwise clear from context. | {
"domain": "physics.stackexchange",
"id": 72423,
"tags": "quantum-field-theory, representation-theory"
} |
Matrix of any special unitary transformation in two dimensions | Question: I want to show that every special unitary transformation in two dimensions can be written as the matrix
$$ U = \left(\begin{array}{cc} e^{i(\delta + \varphi)}\cos\theta & i~e^{i(\delta - \varphi)} \sin\theta \\ i~e^{-i(\delta - \varphi)}\sin\theta & e^{-i(\delta + \varphi)}\cos\theta\end{array}\right),$$
where $\delta,\varphi,\theta \in\mathbb{R}$. I want to understand neutrino oscillations and my professor just wrote down this matrix without deriving it. Unfortunately, I cannot find the right way to do it myself. Any help is appreciated.
Answer: For simplicity, suppose $U$ is special unitary so its determinant is $+1$. If you want full unitary simply multiply $U$ by an overall phase $e^{i\zeta}$.
Write
$$
U=\left(\begin{array}{cc}
a&b\\
c&d\end{array}\right)\, ,\qquad
U^{-1}=\left(
\begin{array}{cc}
d & -b \\
-c & a \\
\end{array}
\right)\, ,\qquad U^\dagger =
\left(\begin{array}{cc}
a^*&c^*\\
b^*&d^*\end{array}\right)
$$
where $\hbox{Det}(U)=1$ has been used. Since $U^\dagger=U^{-1}$ this immediately shows
that $d=a^*$, $c=-b^*$ so
$$
U=\left(\begin{array}{cc}
a&b\\
-b^*&a^*\end{array}\right)
$$
The condition on the determinant is now $aa^*+bb^*=1$. In your case you chose
$$
a=e^{i(\delta+\varphi)}\cos\theta\, ,\qquad b=i\,e^{i(\delta-\varphi)}\sin\theta.
$$
The factor of $i$ in the off-diagonal terms and the general form of your solution is typical of the parametrization used in optics.
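As a quick sanity check, the parametrization can be verified numerically (an illustrative Python sketch, building the matrix entry by entry from the general form $\begin{pmatrix}a&b\\-b^*&a^*\end{pmatrix}$):

```python
import cmath
import math

def U(delta, phi, theta):
    """Build the 2x2 matrix from the question; the second row
    follows the general form [[a, b], [-conj(b), conj(a)]]."""
    a = cmath.exp(1j * (delta + phi)) * math.cos(theta)
    b = 1j * cmath.exp(1j * (delta - phi)) * math.sin(theta)
    return [[a, b], [-b.conjugate(), a.conjugate()]]

M = U(0.3, 1.1, 0.7)

# Special: determinant is +1, since aa* + bb* = cos^2 + sin^2 = 1
det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
assert abs(det - 1) < 1e-12

# Unitarity: U U^dagger = identity
UUd = [[sum(M[i][k] * M[j][k].conjugate() for k in range(2))
        for j in range(2)] for i in range(2)]
assert abs(UUd[0][0] - 1) < 1e-12 and abs(UUd[1][1] - 1) < 1e-12
assert abs(UUd[0][1]) < 1e-12 and abs(UUd[1][0]) < 1e-12
```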
Of course this is not the only solution, indeed this is not the "standard" solution. A special unitary $2\times 2$ matrix is often written in the factorized form
$$
e^{-i\alpha\sigma_z/2}e^{-i\beta\sigma_y/2}e^{-i\gamma\sigma_z/2}
=\left(
\begin{array}{cc}
e^{-\frac{1}{2} i (\alpha +\gamma )} \cos \left(\frac{\beta }{2}\right) & -e^{-\frac{1}{2} i (\alpha -\gamma )} \sin \left(\frac{\beta }{2}\right) \\
e^{\frac{1}{2} i (\alpha -\gamma )} \sin \left(\frac{\beta }{2}\right) & e^{\frac{1}{2} i (\alpha +\gamma )} \cos \left(\frac{\beta }{2}\right) \\
\end{array}
\right)\, .
$$ | {
"domain": "physics.stackexchange",
"id": 41444,
"tags": "homework-and-exercises, angular-momentum, group-theory, rotation, linear-algebra"
} |
Why is ferret giving me error that the region is not 2D while plotting | Question: I have downloaded ERA5 data and am trying to plot temperature variable. The problem I am facing is that even after providing a 2D area for plotting, ferret is giving an error saying that the region must be 2D
yes? fill t[k=1,l=1]
**ERROR: dimensions improperly specified: must be a 2D region
CONTOUR/FILL t[k=1,l=1]
Answer: You need to use the following syntax
yes? fill 't'[k=1,l=1]
This is because Ferret treats the variable name t as time; to refer to your own variable of the same name in the file, you have to enclose it in quotes.
"domain": "earthscience.stackexchange",
"id": 1914,
"tags": "reanalysis, era"
} |
Do servo motor specifications take into account the gear ratio inside? | Question: I am looking at buying a servo motor for an application that must be able to lift 4-5 lb at a rotational speed of approximately 1 rpm. The servo motor listed here http://www.robotshop.com/ca/en/hitec-hs755mg-servo.html states a stalling torque of 200 oz-in. Is this torque rating at the horn of the servo motor or the torque rating of the actual motor before any gear reduction is done?
Is this motor sufficiently strong for my application?
Answer:
Is this torque rating at the horn of the servo motor or the torque
rating of the actual motor before any gear reduction is done
For reputable manufactures and vendors, the ratings will include all internal gear reduction, or external in the case of gear-head motors.
Is this motor sufficiently strong for my application?
That depends more on the details of your application. 200 oz-in of torque means the servo should be able to lift up to 200 oz within one inch of its center shaft. That means it can work on up to [ 200 oz / 16 oz/lb ] = 12.5 pounds within that inch.
To get 4 - 5 pounds out of it, your rotational radius must be within:
12.5 lb-in / 4.5 lb = 2.78 inches.
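The torque arithmetic above can be checked in a few lines (illustrative only; it treats torque simply as load × lever arm):

```python
# Units: stall torque in oz-in, load in lb; 16 oz = 1 lb.
stall_torque_oz_in = 200
max_load_lb = stall_torque_oz_in / 16       # 12.5 lb at a 1-inch arm
load_lb = 4.5                               # middle of the 4-5 lb range
max_radius_in = max_load_lb / load_lb       # largest workable lever arm

assert max_load_lb == 12.5
assert round(max_radius_in, 2) == 2.78
```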
However, operating the motor continuously under a full load will most likely cause it to fail fairly quickly. It will also slow down the rotational speed.
Speaking of speed, the servo is rated at 0.23 seconds / 60 degrees. That means it will rotate 60 degrees in 0.23 seconds. A full circle is 360 degrees, or six of those arcs, so 6 * 0.23 = 1.38 seconds for a full rotation.
However, (and correct me if I am wrong) no where in the listing does it mention continuous rotation. A traditional RC servo motor cannot rotate all the way around unless it specifically states it can or is manually altered (voiding any warranties). It will typically only rotate a maximum of 140 to 180 degrees. | {
"domain": "robotics.stackexchange",
"id": 354,
"tags": "motor, servomotor"
} |
BDD in PHP, Testing search in Wikipedia with Behat and Mink (Selenium2 Driver) | Question: I am trying to learn BDD in PHP with Behat and Mink and I am using Selenium2 driver for the same.
The scenario is given on this page and is as follows:
Feature: Search
In order to see a word definition
As a website user
I need to be able to search for a word
Scenario: Searching for a page that does exist
Given I am on "/wiki/Main_Page"
When I fill in "search" with "Behavior Driven Development"
And I press "searchButton"
Then I should see "Behavior-driven development"
Scenario: Searching for a page that does NOT exist
Given I am on "/wiki/Main_Page"
When I fill in "search" with "Glory Driven Development"
And I press "searchButton"
Then I should see "Search results"
The file behat.yml is as follows:
default:
extensions:
Behat\MinkExtension:
base_url: http://en.wikipedia.org/
goutte: ~
selenium2: ~
I have written the below code for FeatureContext.php that I am sure needs improvements. Can anyone suggest me the points for the same?
<?php
use Behat\Behat\Tester\Exception\PendingException;
use Behat\Behat\Context\Context;
use Behat\Behat\Context\SnippetAcceptingContext;
use Behat\Gherkin\Node\PyStringNode;
use Behat\Gherkin\Node\TableNode;
/**
* Defines application features from the specific context.
*/
class FeatureContext implements Context, SnippetAcceptingContext
{
/**
* Initializes context.
*
* Every scenario gets its own context instance.
* You can also pass arbitrary arguments to the
* context constructor through behat.yml.
*/
public function __construct()
{
$this->driver = new \Behat\Mink\Driver\Selenium2Driver('firefox');
$this->session = new \Behat\Mink\Session($this->driver);
$this->session->start();
}
/**
* @Given I am on :url
*/
public function iAmOn($url)
{
$this->session->visit('http://en.wikipedia.org'.$url);
}
/**
* @When I fill in :field with :text
*/
public function iFillInWith($field, $text)
{
$this
->session
->getPage()
->find('css', '[type=' . $field . ']')
->setValue($text);
}
/**
* @When I press :button
*/
public function iPress($button)
{
$this
->session
->getPage()
->find('css', '[id=' . $button . ']')
->press();
}
/**
* @Then I should see :text
*/
public function iShouldSee($text)
{
$title = $this
->session
->getPage()
->find('css', 'h1')
->getText();
if ($title !== $text) {
throw new Exception('Invalid page');
}
}
/**
* @AfterScenario
*/
public function tearDown()
{
$this->session->stop();
}
}
Answer: There is one major point that we can do here. In the FeatureContext.php, if class FeatureContext extends MinkContext, then you do not need to write any of the function definitions at all!
Applying what I just said, will result to have FeatureContext.php to be as:
<?php
use Behat\Behat\Tester\Exception\PendingException;
use Behat\Behat\Context\Context;
use Behat\Behat\Context\SnippetAcceptingContext;
use Behat\Gherkin\Node\PyStringNode;
use Behat\Gherkin\Node\TableNode;
use Behat\MinkExtension\Context\MinkContext; //This line is add
class FeatureContext extends MinkContext // This line is changed
implements Context, SnippetAcceptingContext
{
public function __construct()
{
}
}
Note: To make the point more clear, I have removed the comments that are generated by behat. | {
"domain": "codereview.stackexchange",
"id": 13686,
"tags": "php, bdd"
} |
failure of catkin_make_isolated on FreeBSD | Question:
I'm attempting to install ROS Kinetic on FreeBSD 12.1 (at my peril).
I'm following the 'from source' instructions at:
http://wiki.ros.org/kinetic/Installation/Source
I am performing step 2.1.3 (Building the catkin workspace), which directs invoking catkin_make_isolated as follows:
./src/catkin/bin/catkin_make_isolated --install -DCMAKE_BUILD_TYPE=Release
I get the following output:
==> Processing catkin package: 'catkin'
...
==> make -j4 -l4 in '$PATH/ros_catkin_ws/build_isolated/catkin'
...
<== Failed to process package 'catkin':
Command '['make', '-j4', '-l4']' returned non-zero exit status 2
This seems to be due to FreeBSD make not having the load-average flag -l.
I have tried both setting the MAKEFLAGS and ROS_PARALLEL_JOBS environment variables to avoid setting the -l flag. E.g., set ROS_PARALLEL_JOBS='-j4'.
The result is that it sends a flag -pn to make, as follows:
==> Processing catkin package: 'catkin'
...
==> make -j4 in '$PATH/ros_catkin_ws/build_isolated/catkin'
...
<== Failed to process package 'catkin':
Command '['make', '-pn']' returned non-zero exit status 2
Interestingly, it appears that it's invoking make -j4, but then says that the actual flag sent to make is -pn.
I have also attempted to set the flags when invoking catkin_make_isolated, but get the same result (with -pn). For example:
./src/catkin/bin/catkin_make_isolated --install -DCMAKE_BUILD_TYPE=Release -j4
Originally posted by broomstick on ROS Answers with karma: 111 on 2020-01-17
Post score: 0
Answer:
Found my answer by posting a similar question elsewhere:
https://github.com/ros/catkin/issues/1047
In short, FreeBSD make is not the same as GNU make and has different flags. Installing GNU make doesn't immediately fix the problem because the binary name is gmake in FreeBSD, but make is hardcoded into catkin's builder.py script.
The workaround (modifying the builder.py script) is documented in the github thread cited above.
EDIT: The workaround to use gmake on FreeBSD has been pulled into Catkin, which now supports the --use-gmake argument (https://github.com/ros/catkin/pull/1051)
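With that argument available, the build command from step 2.1.3 would look something like this (a sketch, assuming your Catkin checkout includes the patch and GNU make is installed as gmake from ports):

```sh
# Hypothetical invocation on FreeBSD with patched Catkin
./src/catkin/bin/catkin_make_isolated --install --use-gmake -DCMAKE_BUILD_TYPE=Release
```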
Originally posted by broomstick with karma: 111 on 2020-01-18
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 34284,
"tags": "ros, catkin, ros-kinetic"
} |
When do I apply Significant figures in physics calculations? | Question: I'm a little confused as to when to use significant figures for my physics class. For example, I'm asked to find the average speed of a race car that travels around a circular track with a radius of $500~\mathrm{m}$ in $50~\mathrm{s}$.
Would I need to apply the rules of significant figures to this step of the problem?
$$ C = 2\pi (500~\mathrm{m}) = 3141.59~\mathrm{m} $$
Or do I just need to apply significant figures at this step?
$$ \text{Average speed} = \frac{3141.59~\mathrm{m}}{50~\mathrm{s}} = 62.832~\mathrm{m}/\mathrm{s} $$
Should I round $62.832~\mathrm{m}/\mathrm{s}$ to $63~\mathrm{m}/\mathrm{s}$ since the number with the least amount of significant figures has two?
Answer: You should always find an answer that is a formula, and then only apply significant figures once you get to the one final step of substituting your numbers back into the problem in place of variables.
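The "round only at the very end" advice can be sketched in code (illustrative; the helper is just arithmetic for rounding a final value to n significant figures, not a physics rule):

```python
import math

def round_sig(x, n):
    """Round x to n significant figures."""
    if x == 0:
        return 0.0
    exponent = math.floor(math.log10(abs(x)))
    return round(x, -exponent + (n - 1))

# Keep full precision through intermediate steps; round once at the end.
assert round_sig(3141.592653, 2) == 3100
assert round_sig(0.0123456, 3) == 0.0123
```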
Avoid multiple intermediate steps of substituting numbers at all costs. Not only will this save your pencil a lot of work, but it will also cause your answer to be more accurate, as rounding errors can pile up, even when using a calculator. | {
"domain": "physics.stackexchange",
"id": 77704,
"tags": "error-analysis, estimation"
} |
Efficient solution for Bike Tracker | Question: Hi there,
I installed a SINO 942 tracker in my bike. It works well, but my bike's self-start has stopped working (it works again after I ride the bike for around an hour), so
I think the tracker is draining the battery when the bike is parked for long hours.
I am already thinking about installing a manual switch, or a rechargeable
12 V battery (so it won't need to connect to the main battery), but all of these
solutions are manual and require regular interaction.
Please suggest the best solution to rectify this issue.
Answer: You can go with your second battery, but use a split-charge system so whenever the bike is running the second battery gets charged.
An example of a split charge system / relay is here | {
"domain": "engineering.stackexchange",
"id": 1960,
"tags": "electrical-engineering"
} |
Unsure of how to handle this linker error.. (not really ROS related) | Question:
I'm using https://github.com/PR2/linux_networking/blob/master/wpa_supplicant_node/CMakeLists.txt as the CMakeLists to compile this program. I was wondering if anyone has experience with these sorts of errors. I understand the undefined reference is a missing link to a library.. my major concern would be: how to convert this https://github.com/PR2/linux_networking/blob/master/wpa_supplicant_node/CMakeLists.txt to the Hydro one presented prior. Any suggestions/advice would be appreciated.
/usr/bin/ld: /usr/lib/debug/usr/lib/x86_64-linux-gnu/crt1.o(.debug_info): relocation 0 has invalid symbol index 10
/usr/bin/ld: /usr/lib/debug/usr/lib/x86_64-linux-gnu/crt1.o(.debug_info): relocation 1 has invalid symbol index 11
/usr/bin/ld: /usr/lib/debug/usr/lib/x86_64-linux-gnu/crt1.o(.debug_info): relocation 2 has invalid symbol index 2
/usr/bin/ld: /usr/lib/debug/usr/lib/x86_64-linux-gnu/crt1.o(.debug_info): relocation 3 has invalid symbol index 2
/usr/bin/ld: /usr/lib/debug/usr/lib/x86_64-linux-gnu/crt1.o(.debug_info): relocation 4 has invalid symbol index 10
/usr/bin/ld: /usr/lib/debug/usr/lib/x86_64-linux-gnu/crt1.o(.debug_info): relocation 5 has invalid symbol index 12
/usr/bin/ld: /usr/lib/debug/usr/lib/x86_64-linux-gnu/crt1.o(.debug_info): relocation 6 has invalid symbol index 12
/usr/bin/ld: /usr/lib/debug/usr/lib/x86_64-linux-gnu/crt1.o(.debug_info): relocation 7 has invalid symbol index 12
/usr/bin/ld: /usr/lib/debug/usr/lib/x86_64-linux-gnu/crt1.o(.debug_info): relocation 8 has invalid symbol index 2
/usr/bin/ld: /usr/lib/debug/usr/lib/x86_64-linux-gnu/crt1.o(.debug_info): relocation 9 has invalid symbol index 2
/usr/bin/ld: /usr/lib/debug/usr/lib/x86_64-linux-gnu/crt1.o(.debug_info): relocation 10 has invalid symbol index 11
/usr/bin/ld: /usr/lib/debug/usr/lib/x86_64-linux-gnu/crt1.o(.debug_info): relocation 11 has invalid symbol index 12
/usr/bin/ld: /usr/lib/debug/usr/lib/x86_64-linux-gnu/crt1.o(.debug_info): relocation 12 has invalid symbol index 12
/usr/bin/ld: /usr/lib/debug/usr/lib/x86_64-linux-gnu/crt1.o(.debug_info): relocation 13 has invalid symbol index 12
/usr/bin/ld: /usr/lib/debug/usr/lib/x86_64-linux-gnu/crt1.o(.debug_info): relocation 14 has invalid symbol index 12
/usr/bin/ld: /usr/lib/debug/usr/lib/x86_64-linux-gnu/crt1.o(.debug_info): relocation 15 has invalid symbol index 12
/usr/bin/ld: /usr/lib/debug/usr/lib/x86_64-linux-gnu/crt1.o(.debug_info): relocation 16 has invalid symbol index 12
/usr/bin/ld: /usr/lib/debug/usr/lib/x86_64-linux-gnu/crt1.o(.debug_info): relocation 17 has invalid symbol index 12
/usr/bin/ld: /usr/lib/debug/usr/lib/x86_64-linux-gnu/crt1.o(.debug_info): relocation 18 has invalid symbol index 12
/usr/bin/ld: /usr/lib/debug/usr/lib/x86_64-linux-gnu/crt1.o(.debug_info): relocation 19 has invalid symbol index 12
/usr/bin/ld: /usr/lib/debug/usr/lib/x86_64-linux-gnu/crt1.o(.debug_info): relocation 20 has invalid symbol index 19
/usr/lib/gcc/x86_64-linux-gnu/4.6/../../../x86_64-linux-gnu/crt1.o: In function `_start':
(.text+0x20): undefined reference to `main'
CMakeFiles/wpa_supplicant_node.dir/src/nodes/wpa_supplicant_node.cpp.o: In function `RosApi::init2()':
undefined reference to `eloop_register_read_sock'
CMakeFiles/wpa_supplicant_node.dir/src/nodes/wpa_supplicant_node.cpp.o: In function `RosApi::uninit()':
undefined reference to `eloop_unregister_read_sock'
CMakeFiles/wpa_supplicant_node.dir/src/nodes/wpa_supplicant_node.cpp.o: In function `ros_interface::ros_interface(ros::NodeHandle const&, wpa_supplicant*)':
undefined reference to `eloop_register_timeout'
CMakeFiles/wpa_supplicant_node.dir/src/nodes/wpa_supplicant_node.cpp.o: In function `ros_interface::setCountryCode(char const*)':
undefined reference to `eloop_register_timeout'
CMakeFiles/wpa_supplicant_node.dir/src/nodes/wpa_supplicant_node.cpp.o: In function `ros_interface::scanCompleted(wpa_scan_results*)':
undefined reference to `eloop_cancel_timeout'
CMakeFiles/wpa_supplicant_node.dir/src/nodes/wpa_supplicant_node.cpp.o: In function `ros_interface::assocFailed(unsigned char const*, char const*)':
undefined reference to `eloop_cancel_timeout'
CMakeFiles/wpa_supplicant_node.dir/src/nodes/wpa_supplicant_node.cpp.o: In function `ros_interface::assocSucceeded()':
undefined reference to `eloop_cancel_timeout'
CMakeFiles/wpa_supplicant_node.dir/src/nodes/wpa_supplicant_node.cpp.o: In function `ros_interface::publishNetworkList()':
undefined reference to `wpa_config_get_all'
CMakeFiles/wpa_supplicant_node.dir/src/nodes/wpa_supplicant_node.cpp.o: In function `ros_interface::setDriverCountry()':
undefined reference to `eloop_register_timeout'
CMakeFiles/wpa_supplicant_node.dir/src/nodes/wpa_supplicant_node.cpp.o: In function `ros_interface::delayedPublishFrequencyList(void*, void*)':
undefined reference to `eloop_register_timeout'
CMakeFiles/wpa_supplicant_node.dir/src/nodes/wpa_supplicant_node.cpp.o: In function `ros_interface::stopActiveAssociation()':
undefined reference to `wpa_supplicant_disassociate'
CMakeFiles/wpa_supplicant_node.dir/src/nodes/wpa_supplicant_node.cpp.o: In function `ros_interface::startActiveAssociation(actionlib::ServerGoalHandle<wpa_supplicant_node::AssociateAction_<std::allocator<void> > >&)':
undefined reference to `wpa_bss_get'
undefined reference to `eloop_register_timeout'
undefined reference to `wpa_supplicant_associate'
CMakeFiles/wpa_supplicant_node.dir/src/nodes/wpa_supplicant_node.cpp.o: In function `ros_interface::fillRosBss(wpa_supplicant_node::Bss_<std::allocator<void> >&, wpa_bss&)':
undefined reference to `wpa_bss_get_vendor_ie'
undefined reference to `wpa_parse_wpa_ie'
undefined reference to `wpa_bss_get_ie'
undefined reference to `wpa_parse_wpa_ie'
CMakeFiles/wpa_supplicant_node.dir/src/nodes/wpa_supplicant_node.cpp.o: In function `ros_interface::fillRosResp(wpa_supplicant_node::ScanResult_<std::allocator<void> >&, wpa_scan_results&)':
undefined reference to `wpa_scan_get_ie'
undefined reference to `wpa_scan_get_vendor_ie'
undefined reference to `wpa_parse_wpa_ie'
undefined reference to `wpa_scan_get_ie'
undefined reference to `wpa_parse_wpa_ie'
CMakeFiles/wpa_supplicant_node.dir/src/nodes/wpa_supplicant_node.cpp.o: In function `ros_interface::lockedScanTryActivate()':
undefined reference to `eloop_register_timeout'
undefined reference to `wpa_supplicant_trigger_scan'
collect2: ld returned 1 exit status
make[2]: *** [/home/marco/pr2_hydro_packages/devel/lib/wpa_supplicant_node/wpa_supplicant_node] Error 1
make[1]: *** [linux_networking/wpa_supplicant_node/CMakeFiles/wpa_supplicant_node.dir/all] Error 2
make: *** [all] Error 2
Invoking "make" failed
Originally posted by DevonW on ROS Answers with karma: 644 on 2014-10-02
Post score: 0
Answer:
You need to call pkg_check_modules(WPA_SUPPLICANT REQUIRED libwpa_supplicant) in order to set up the cmake variables for the wpa_supplicant include directories and libraries.
Originally posted by ahendrix with karma: 47576 on 2014-10-02
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by DevonW on 2014-10-02:
Hm, I did that and it didn't work out. checking for module 'libwpa_supplicant'
-- package 'libwpa_supplicant' not found.
Comment by ahendrix on 2014-10-02:
The old package was buiding wpa_supplicant from source. You'll need to replicate that in cmake: https://github.com/PR2/linux_networking/blob/master/wpa_supplicant_node/Makefile
Comment by ahendrix on 2014-10-02:
Or just ditch the whole package. It isn't used anywhere on the PR2. | {
"domain": "robotics.stackexchange",
"id": 19607,
"tags": "ros, cmake"
} |
What's the debate about Newton's bucket argument? | Question: I visited some other QA threads about this topic, and I don't understand why people think it's mysterious that the bucket knows about its rotation.
If a non-rotating bucket is all there is in the universe, then, initially, all the parts of the bucket are at rest wrt to each other.
But if we want to rotate that bucket with an angular velocity $\omega$, then we basically require the different parts of it to have relative acceleration wrt each other. Because if we divide the bottom of the bucket into many concentric rings, then each ring would've an acceleration $\omega^2 r$ towards the center, depending on the radius $r$ of ring. This means that the rings have relative acceleration wrt to each other. Laws of physics would take different forms for people standing on different rings. Hence, a rotating bucket is a collection of non-inertial frames having relative acceleration.
But non-inertial frames are supposed to detect acceleration in Newtonian physics. So what am I missing?
Answer: In Newtonian mechanics (and also relativity and quantum mechanics), a hypothetical physicists sitting in the bucket would definitely be able to do an experiment to detect that the bucket is rotating. I'm not sure why that would be mysterious. It should be noted that the velocities of the particles involved (relative to one another) are a fundamental part of the system. You cannot describe a physical situation using only the mass and position of the particles. You need to include their relative velocities. For this reason, a bucket sitting alone in an empty universe is fundamentally different from a rotating bucket sitting alone in an empty universe. | {
"domain": "physics.stackexchange",
"id": 68348,
"tags": "newtonian-mechanics, reference-frames, inertial-frames"
} |
How does temperature relate to the kinetic energy of molecules? | Question: In ideal gas model, temperature is the measure of average kinetic energy of the gas molecules. If by some means the gas particles are accelerated to a very high speed in one direction, KE certainly increased, can we say the gas becomes hotter? Do we need to distinguish the random vibration KE and KE in one direction?
Furthermore, if we accelerate a block of metal with ultrasonic vibrator so that the metal is vibrating in very high speed with cyclic motion, can we say the metal is hot when it is moving but suddenly become much cooler when the vibration stop?
Answer:
In ideal gas model, temperature is the measure of average kinetic energy of the gas molecules.
In the kinetic theory of gases random motion is assumed before deriving anything.
If by some means the gas particles are accelerated to a very high speed in one direction, KE certainly increased, can we say the gas becomes hotter? Do we need to distinguish the random vibration KE and KE in one direction?
The temperature is still defined by the random motion, subtracting the extra energy imposed. This is answered simply by the first part of @LDC3's answer. Does your hot coffee boil in the cup in an airplane?
Furthermore, if we accelerate a block of metal with ultrasonic vibrator so that the metal is vibrating in very high speed with cyclic motion, can we say the metal is hot when it is moving but suddenly become much cooler when the vibration stop?
This is more complicated, because vibrations may excite internal degrees of freedom and raise the average kinetic energy for that degree of freedom. It would then take time to reach a thermal equilibrium with the surroundings after the vibrations stop. If one supposes that this does not happen, then the answer is the same as for the first part, it is the random motions of the degrees of freedom that define the kinetic energy which is connected to the definitions of temperature. So no heat will be induced by the vibrations. | {
"domain": "physics.stackexchange",
"id": 28203,
"tags": "thermodynamics, statistical-mechanics, temperature"
} |
Why doesn't the Sun produce an emission spectrum? | Question: I have read that the reason why the Sun produces an absorption spectrum is because the temperature drops as you go away from the center, such that as the various layers of the atmosphere of the sun absorb certain wavelengths, the re-emitted light will have a smaller intensity than the absorbed one, causing a dip in the spectrum (i.e., an absorption spectrum). This is consistent with Kirchhoff's laws and Planck's law for blackbody radiation. However, I checked and found that as you go from the photosphere and into the chromosphere and corona, the temperature rises instead. So then, why doesn't the opposite happen where instead of dips in the spectrum, we get peaks in it (i.e., an emission spectrum)?
(I will note that there was a temperature drop inside of the photosphere itself, but to my understanding the photosphere is opaque to all wavelengths, meaning that it can't pick out the specific wavelengths of the absorption spectrum).
Now, I already have some doubts about the logic above applying here. It seems to me that one of the assumptions being made is that the energy that is absorbed by the various layers are first distributed among the various atoms in the layers, such that the re-emitted light that comes out is the ordinary thermal radiation/blackbody radiation. However, considering the fact that the chromosphere and corona is so dilute, it would seem that that wouldn't be the case. Is it more correct to say that the radiation is rather being absorbed and re-emitted (i.e., scattered) so many times on its journey through the Sun's atmosphere that it continually loses small "bits" of energy that once it reaches us, the intensity of the re-emitted light is much smaller than the absorbed one (causing a dip in the spectrum)? Or is the gas perhaps so dilute that even that wouldn't work? If not, then again, why do we see an absorption spectrum?
Answer: The hotter layers above the solar photosphere do have an emission spectrum. The emission spectrum is much fainter than the visible photosphere and so is not easily seen through broadband filters in the optical spectrum, though it can be observed through very narrow filters centred on the emission lines (e.g. H$\alpha$ from the chromosphere) in question.
The hotter gas is much less dense than the photosphere and is essentially transparent to photons from the photosphere.
The situation becomes much easier in the UV and X-ray part of the spectrum, where there is very little radiation from the photosphere, but a more significant amount from the chromosphere and corona, because their hotter temperatures excite higher energy transitions. The Sun has an obvious emission line spectrum in the UV and X-ray range. Here is an example (from del Zanna & Mason 2018).
The (optically) thin chromosphere and corona are not heated by radiation from below. That could not produce the temperature inversion that is seen. Instead the chromosphere and corona are heated by the dissipation of magnetic fields in the form of currents and the acceleration of charged particles.
A shorthand way of thinking about photospheric absorption lines is that we can see to different depths in the Sun at different wavelengths. Where there is a strong atomic transition, light is readily absorbed, we cannot see very far. Thus our sight only penetrates to relatively cool layers (note again that the chromosphere and corona are transparent). In contrast, when we look at a "continuum" wavelength we see to deeper and hotter layers in the Sun. Since the brightness is very temperature dependent this is why we get a continuous spectrum with absorption lines. | {
"domain": "astronomy.stackexchange",
"id": 5624,
"tags": "spectroscopy, spectra"
} |
Does wind chill work in deserts? | Question: Wind chill works by removing the layer of heated air around your body. However in a hot area, say a desert, wouldn't the air around be hotter than your body or at least much closer to it than cold air? In that case, how much would wind chill actually affect your temperature loss?
Answer: As you state, wind chill "is the cooling of a body due to the passing-flow of lower temperature air."
Because of this, wind chill only occurs when the air has the capacity to absorb heat from a body - thus chilling the body. The capacity of air to absorb heat from a body depends on the temperature difference between the air and the body, and on the humidity of the air.
In determining the ability of the air to absorb heat, the wet bulb temperature must be used in conjunction with the dry bulb. Wind chill and [heat index](https://en.wikipedia.org/wiki/Heat_index) are related.
In a hot environment, such as a very hot day in a desert, a wind chill factor will usually not exist. Under such conditions it is possible for a body to overheat and experience heat stress and hyperthermia, which can lead to death. Also, one has to be careful that wind burn doesn't occur.
"domain": "earthscience.stackexchange",
"id": 1640,
"tags": "meteorology"
} |
Units of mass on the atomic scale | Question: What are the systems of recording atomic masses and their units?
I know that the nucleon number is the number of protons and neutrons
I also know that the mass number on the periodic table is a weighted average of the masses of all the isotopes according to their relative abundance (I believe this is called relative atomic mass and is equal in numerical value to the molar mass of that element)
I am not sure what units relative atomic mass is measured in. Some say there are no units as it is a relative scale; others say that it is measured in atomic mass units (AMU), unified mass units (u) or daltons (Da).
I believe (but would like it to be confirmed) that the atomic mass unit (AMU) is an obsolete unit based on the mass relative to oxygen.
I also believe (but would like it to be confirmed) that the unified mass units (u) and dalton (Da) are equivalent units (u=Da=1/12 mass of a carbon-12 atom)
What is the difference between isotopic mass, relative atomic mass and atomic mass, and what units do they all have?
Answer: The quantity atomic mass (quantity symbol: $m_\mathrm{a}$) is defined as rest mass of a neutral atom in the ground state.
The dimension of the atomic mass is
$$\dim m_\mathrm{a} = \mathsf{M}$$
The coherent SI unit for atomic mass is ‘kilogram’ (unit symbol: $\mathrm{kg}$).
The quantity relative atomic mass (quantity symbol: $A_\mathrm{r}$) is defined as the ratio of the average mass per atom of an element to 1/12 of the mass of an atom of the nuclide $\ce{^12C}$; i.e. the relative atomic mass is the ratio of the mass of an atom to the unified atomic mass constant:
$$A_\mathrm{r} = m_\mathrm{a}/m_\mathrm{u}$$
where $m_\mathrm{a}$ is the atomic mass and $m_\mathrm{u}$ is the unified atomic mass constant.
The relative atomic mass is a quantity of dimension one (for historical reasons, a quantity of dimension one is often called dimensionless):
$$\dim A_\mathrm{r} = 1$$
The coherent SI unit for relative atomic mass is the unit one (symbol: $1$).
For historical reasons, the IUPAC accepts the use of the special name ‘atomic weight’ for the quantity relative atomic mass. The use of this traditional name is deprecated.
The value of the unified atomic mass constant (symbol: $m_\mathrm{u}$) is defined as 1/12 of the mass of a neutral atom of the nuclide $\ce{^12C}$ in the ground state at rest.
$$m_\mathrm{u} = \frac{m_\mathrm{a}(\ce{^12C})}{12}$$
The recommended value is
$$m_\mathrm{u} = 1.660\,538\,921(73) \times 10^{-27}\ \mathrm{kg}$$ (source)
Note that the value in SI units is obtained experimentally.
The unit dalton (unit symbol: $\mathrm{Da}$) and the unified atomic
mass unit (unit symbol: $\mathrm{u}$) are non-SI units that are accepted for use with the SI. Actually, ‘dalton’ and ‘unified atomic mass unit’ are alternative names for the same unit, equal to 1/12 times the mass of a free carbon-12 atom, at rest and in its ground state, i.e.
$$1\ \mathrm{Da} = 1\ \mathrm{u} = 1.660\,538\,921(73) \times 10^{-27}\ \mathrm{kg}$$
Thus, the value of the unified atomic mass constant is, by definition, equal to one dalton or one unified atomic mass unit:
$$m_\mathrm{u} = 1\ \mathrm{Da} = 1\ \mathrm{u} = 1.660\,538\,921(73) \times 10^{-27}\ \mathrm{kg}$$
The old relative atomic masses (unfortunately called ‘atomic weights’) and the corresponding atomic mass unit (amu) were originally referred to the relative atomic mass of oxygen, which was taken as 16. However, physicists used the atomic mass of the nuclide $\ce{^16O}$ whereas chemists used the average atomic mass of natural oxygen. This unit became obsolete when IUPAP (1960), IUPAC (1961), ISO, CIPM (1967) and CGPM (1971) agreed to assign the value 12 to the relative atomic mass of the nuclide $\ce{^12C}$. | {
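To make the distinction between the dimensionless relative atomic mass $A_\mathrm{r}$ and the atomic mass $m_\mathrm{a}$ in SI units concrete, here is a small sketch. The isotope masses and abundances for chlorine below are approximate literature values used purely for illustration.

```python
# Sketch: relative atomic mass (dimensionless) from isotopic masses (in Da)
# and fractional abundances, then conversion to SI units via m_u.

DALTON_IN_KG = 1.660_538_921e-27  # unified atomic mass constant, m_u

def relative_atomic_mass(isotopes):
    """isotopes: list of (isotopic mass in Da, fractional abundance)."""
    total_abundance = sum(a for _, a in isotopes)
    if abs(total_abundance - 1.0) > 1e-3:
        raise ValueError("abundances should sum to 1")
    return sum(m * a for m, a in isotopes)

# approximate data for chlorine: Cl-35 and Cl-37
chlorine = [(34.9689, 0.7576), (36.9659, 0.2424)]
A_r = relative_atomic_mass(chlorine)   # dimensionless, A_r = m_a / m_u
m_a = A_r * DALTON_IN_KG               # average atomic mass in kg

print(round(A_r, 2))   # ≈ 35.45, matching the tabulated atomic weight
```

The quantity `A_r` is the ratio discussed above, so it carries the unit one; only after multiplying by $m_\mathrm{u}$ does a mass in kilograms appear.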
"domain": "chemistry.stackexchange",
"id": 3564,
"tags": "atoms, units"
} |
How come a whistling kettle starts whistling only when water boils, and not long before - due to hot air escaping under pressure? | Question: A whistling kettle will start to whistle when the water boils and turns into a jet of steam which then exits the small aperture in the spout.
But why doesn't this happen much earlier - when the air molecules in the kettle get heated up enough, shouldn't they also (due to the increased pressure) forcefully escape through the aperture? (Assume that the kettle was half filled with water, so the upper half was filled with air). This should happen long before boiling (which is when the water molecules gain enough momentum to escape the water surface). Yet in practice, the whistling only starts at the time the water boils - how come?
Answer: Let's assume a one litre $1000{\,\rm W}$ electric kettle, filled with $0.5$ kilograms of water at $20^\circ \mathrm{C}$:
It takes 4.2 joules to warm one gram of water one degree Celsius.
So, to warm the $500$ grams of water $80$ degrees from $20$ to $100$ takes $168,000$ joules. The kettle will supply $1000$ joules per second, so it'll take $168$ seconds for the kettle to come to a boil.
During this time, the $0.5$ litres of air will expand by a factor of $\frac{373}{293}$, to a volume of $0.637$ litres. So in the almost three minutes of heating, only $0.137$ litres of air will be forced out through the whistle spout.
Now we're at the boiling point. It takes $2,280$ joules to vaporize $1$ gram of water. So the kilowatt heater will vaporize $0.439$ grams of water each second!
Those $0.439$ grams of water vapor will occupy around $0.750$ litres at $100^\circ$ Celsius. So this much gas will be forced out the spout each second...
$0.137$ litres of air in three minutes, vs $0.750$ litres of steam each second.. That explains the difference... | {
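The arithmetic above can be reproduced in a few lines; this sketch assumes ideal-gas steam at atmospheric pressure and uses the same rounded constants as the answer.

```python
# Sanity check of the back-of-envelope numbers above.
c_water = 4.2          # J / (g K), specific heat of water
L_vap   = 2280.0       # J / g, latent heat value used above (~2260 in tables)
power   = 1000.0       # W, kettle heating power

heat_to_boil = 500 * 80 * c_water        # J to warm 500 g of water by 80 K
t_boil = heat_to_boil / power            # seconds until boiling starts

# isobaric expansion of the trapped air from 293 K to 373 K
air_out = 0.5 * (373 / 293 - 1)          # litres of air expelled in total

evap_rate = power / L_vap                # grams of steam produced per second
# ideal gas: V = nRT/p with n = m/18 mol, T = 373 K, p = 101325 Pa
steam_litres_per_s = (evap_rate / 18.0) * 8.314 * 373 / 101325 * 1000

print(t_boil, air_out, steam_litres_per_s)
```

Running this gives roughly 168 s of heating, about 0.14 litres of expelled air in total, and about 0.75 litres of steam per second once boiling starts, in line with the numbers quoted above.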
"domain": "physics.stackexchange",
"id": 13154,
"tags": "classical-mechanics, fluid-dynamics, water, pressure"
} |
Can frequency be equal to 0? | Question: Is correct to speak about frequency equal to 0 ?
$$f= \frac{1}{t} $$
If $t\rightarrow\infty$ can I consider that the frequency is equal to 0 ?
Answer: Yes. For example, the frequency of times you go to space is zero. | {
"domain": "physics.stackexchange",
"id": 11840,
"tags": "waves, frequency"
} |
Hard cantiliver beam question (got I = 3.622x10^-6 and maximum load 23.9 KN) is this right I cant find the answere anywhere? Do you agree with me? | Question: A horizontal cantilever with an inverted T cross-section is used as a hoist. The figure below shows
an overview of the beam loading conditions (left) and the beam cross-section (right). One end of the
beam is built in, and the vertical load is applied 1.0 m along the cantilever. If the maximum allowable
stress in the material is 330 MN/m2
, determine the maximum load that can be lifted. Neglect the
weight of the cantilever itself.
Answer: Let's calculate the section properties:
Note: $d_c$ is the offset distance of the center of the segment/block in consideration with respect to the centroid of the cross-section.
From the equation $\sigma = My/I$, we know the larger of $y_t$ and $y_b$ will yield the controlling (maximum) stress. For this case, the topmost face controls,
$M = \sigma_aI/y_t = 330*10^6*1.8*10^{-6}/0.0713 = 8330 N-m$
$Load = M/L = 8330/1 = 8330\ N = 8.33\ kN$
Please check my calculation and verify the results. | {
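As a quick check, the answer's numbers can be reproduced as follows. Note that the section properties $I$ and $y_t$ used here are the values quoted above; they depend on the cross-section dimensions given in the figure, so treat them as the answer's assumptions rather than verified inputs.

```python
# Sketch reproducing the bending calculation sigma = M*y/I above.
sigma_allow = 330e6    # Pa, maximum allowable stress
I   = 1.8e-6           # m^4, second moment of area quoted in the answer
y_t = 0.0713           # m, distance from centroid to the topmost face
L   = 1.0              # m, lever arm of the applied load

M = sigma_allow * I / y_t   # allowable bending moment at the fixed end, N*m
P = M / L                   # maximum load that can be lifted, N

print(round(M), round(P / 1000, 2))   # ~8331 N*m, ~8.33 kN
```

The load comes out in newtons (about 8.33 kN), not newton-metres: dividing the moment by the 1.0 m lever arm leaves a force.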
"domain": "engineering.stackexchange",
"id": 4422,
"tags": "mechanical-engineering"
} |
Stability of transfer function | Question:
Now K(s) is obviously the negative feedback loop which is ${H(s) \over 1+H(s)G(s)}$
When I substitute ${H(s) = {1 \over s-2}}$ I get $K(s) = { 1 \over s-2 +G(s) }$
For the system to be stable I know that the poles have to be in the left half of the complex plane. Not sure how to find G(s).
Answer: Hint:
Since $K(s) = \frac{1}{s-2+G(s)}$, then the roots of $s-2+G(s)$ must lie on the left side of the complex plane. Let's suppose that $G(s)$ is of the form $as + b$. Can you solve it then? | {
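Working out the hint numerically: with $G(s) = as + b$ the closed loop becomes $K(s) = \frac{1}{(1+a)s + (b-2)}$, a first-order system with a single pole at $s = \frac{2-b}{1+a}$. A tiny sketch makes the stability condition explicit:

```python
# With G(s) = a*s + b, K(s) = 1 / ((1 + a) s + (b - 2)).
# The single pole is s = (2 - b) / (1 + a); stability needs it negative.

def closed_loop_pole(a, b):
    if a == -1:
        raise ValueError("denominator degenerates for a = -1")
    return (2 - b) / (1 + a)

def is_stable(a, b):
    return closed_loop_pole(a, b) < 0

print(is_stable(0, 0))   # G(s) = 0: pole at s = 2, unstable (as expected)
print(is_stable(0, 5))   # G(s) = 5: pole at s = -3, stable
```

So, for example, any constant feedback $G(s) = b$ with $b > 2$ already moves the open-loop pole at $s = 2$ into the left half-plane.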
"domain": "dsp.stackexchange",
"id": 1893,
"tags": "homework, transfer-function"
} |
Choosing a balance for home chemistry | Question: A good balance is an important piece of equipment for chemistry. For home chemistry, the price of the balance is unfortunately limiting ..
How should one go about picking a balance (for home chemistry; general guidelines are welcome too)? Specifically, I'm thinking whether to buy a model with 0.01g precision (up to 200g), or one with 0.001g precision (up to 20g) (the latter is a so called "diamond balance" but probably does the job). Which one would serve me better?
Answer: Home chemistry can be difficult sometimes due to the expense of the essential working materials and instruments required. As a mathematician all I need is a sharp pen (which is sometimes still quite difficult to obtain!) and blank paper. I can give you the following general (personal) advice regarding purchasing chemistry lab equipment:
Determine your budget and the needs for the experiments you want to perform. Do you really need a scale that weighs to an accuracy of 0.001g, or is 0.01g sufficient (or even 0.1g)? Ask yourself what you REALLY need for an experiment BEFORE searching for instruments. If you do it in the converse order you often buy something way fancier (and pricier) than what you actually needed in the first place.
Look at specialized chemistry equipment supply websites to orientate on what is available and in what price range certain products are (think of scales, chemicals, glassware, etc.)
A lot of stuff can be bought incredibly cheap on websites where you not expect (at least I did not) people to sell lab supplies/instruments. Think of ebay.com and amazon.com as examples. They sometimes list items at a fifth of the usual price that would have to be paid at a specialized store.
Take a look at local non-specialized stores like the construction market (you know, the place where they sell all kinds of construction material). They sometimes hide hidden (cheap) gems! As an example I was able to obtain a vacuum source (membrane pump) for < 100\$, which would have cost me easily > 400\$ at a chemical supply website.
About your particular question: I really recommend a 0.01g precision 200g (or 600g) scale. It brings the benefit of relatively accurate weighing even for small samples, while still being able to weigh bulk mass. Furthermore, they are most common in the lab/home environment and hence relatively cheap in comparison with professional analytical scales. If you plan to do a lot of microscale experiments (yielding a product < 200 mg) the analytical scale (precision of 0.001g) may suit your needs better.
I wish you much success, think and buy wisely! | {
"domain": "chemistry.stackexchange",
"id": 2609,
"tags": "home-experiment"
} |
Do all forms of energy fall under kinetic and potential energy? | Question: I know that energy is recognized through motion. Even in the mass-energy equivalence a velocity is present even though it is a rest-energy (Not really sure if this would count as a potential energy since there is no 'field' of acceleration that the mass is in)
So does kinetic and potential energy make up all other forms of energy by definition?
Answer: Originally even thermal energy was neither kinetic nor potential. Of course with the acceptance of the mechanical theory of heat we can interpret it as a manifestation of kinetic energy. Roughly, the distinction between kinetic and potential energy in classical mechanics reflected the difference between intrinsic energy of an object, and energy of interaction between objects, with the former reduced to the energy of mechanical motions (macroscopic or microscopic).
In this sense special relativity introduced a new form of energy based on the Einstein's mass-energy equivalence. In hindsight, the energy of electromagnetic field studied earlier by Lorentz and Poincare was a particular case. Another manifestation is energy released in nuclear reactions. One could say that as with the thermal energy this new energy microscopically "reduces" to the energy of subatomic motions and interactions, so it may not be that new. However, there is still a difference with classical statistical mechanics and thermal energy. When excited atom emits a photon we can not say that energy of some motion or interaction "in" the atom got "transferred" to the photon, which did not even "exist" before the emission, such classical parsing simply loses its meaning in quantum theory. Moreover, mass-energy equivalence assigns this internal energy even to truly elementary particles (with no parts that can move or interact) that are at rest. | {
"domain": "physics.stackexchange",
"id": 76664,
"tags": "newtonian-mechanics, energy, energy-conservation, potential-energy"
} |
Bounds on the size of smallest decision tree for a boolean function? | Question: Consider a boolean function $f : V \rightarrow \{0,1\}$ with $m$ true points. Are there any non-trivial bounds in $m$ on the size of the smallest decision tree for $f$?
It seems to me that assuming $f$ has $n$ variables and $m$ true points then any minimal decision tree has at most $$ 2^{\lceil{\log_2 m}\rceil}−m+m⋅(n−\lceil{\log_2 m}\rceil) $$ 0-leaves (obviously, one can take the dual as well). I am wondering if whether this sort of thing has been covered before (I presume it has somewhere).
Answer: I think the number of true points isn't a good measure to say anything about the size of the smallest decision tree.
There is a chapter in Branching programs and binary decision diagrams: theory and applications by Ingo Wegener about the size of decision trees for boolean functions.
The size of the smallest DT is determined up to constant factors by the number of leaves of the tree. There is a lemma in the book that says: If you have a DT for a function $f$ with $s_0$ 0-leaves and $s_1$ 1-leaves then $f$ can be represented by a DNF with $s_1$ monomials and a CNF with $s_0$ clauses. So even if you have a function with only a small number of true points, the size of the smallest DT could be very large. So you need small size of CNF and DNF to have a small DT size.
There is an upper bound on the DT size in terms of the sum of the minimal number of monomials of $f$ and the minimal number of monomials of $\overline{f}$. Let's call this sum $DCNF(f)$. Then you have the following upper bound for the number of leaves $DT(f)$ of the smallest DT for the function $f: \lbrace 0,1 \rbrace^n \rightarrow \lbrace 0,1 \rbrace$
$$ DT(f) \leq n^{O(\log^2 DCNF(f))}. $$
"domain": "cstheory.stackexchange",
"id": 902,
"tags": "cc.complexity-theory, sat, boolean-functions, binary-decision-diagrams, decision-trees"
} |
Initializing a variable with a function reference (PHP) | Question: I'm working on an old PHP website, and NetBeans is complaining about an uninitialized variable. I'm seeing some code like this:
$app->soapAPIVersion($apiVersion);
$_SESSION['apiVersion'] = $apiVersion;
The function looks something like this:
function soapAPIVersion(&$apiVersion)
{
$apiVersion = '';
$result = false;
$soapResult = $client->call('getAPIVersion', array('sessionKey' => $sessionKey), $url);
if (is_string($soapResult))
{
$apiVersion = $soapResult;
$result = true;
}
return $result;
}
I believe it's using this line to initialize the $apiVersion variable:
$app->soapAPIVersion($apiVersion);
For better coding practice, should that really be this:
$apiVersion = '';
$app->soapAPIVersion($apiVersion);
Or is it a valid trick?
Answer: With all due respect, this code really did hurt. Netbeans is complaining because the function soapAPIVersion expects a reference to a variable, and you're passing an undeclared, uninitialized variable to the function. Sort of like a void pointer/null-pointer thing.
It's not really a big deal, but I cannot, for the life of me, see why this function uses a reference as an argument. I think it's better to change it to:
function soapAPIVersion()
{
$soapResult = $client->call('getAPIVersion', array('sessionKey' => $sessionKey), $url);
if (is_string($soapResult))
{
return $soapResult;
}
//implicitly return null, or throw exception
}
And call it like so:
$apiVersion = $app->soapAPIVersion();
if ($apiVersion === null)
{
throw new RuntimeException('unable to get the API version');
}
$_SESSION['apiVersion'] = $apiVersion;
Bottom line: only pass by reference if you need your function to do 2 things at the same time, and you can't split those two things over 2 functions. Such situations are very rare. Since PHP5, I've only had to use a reference argument 5~10 times, I think, so avoid it if at all possible
"domain": "codereview.stackexchange",
"id": 4295,
"tags": "php, php5"
} |
Extending a trained neural network for a larger input | Question: I have a seq2seq conversational model (based on this implementation) trained on the Cornell movie dialogs.
Now I want to fine-tune it on a much smaller dataset. The new data comes with the new words, and I want UNKs for as few new words as possible. So I'm going to create a new network with respect to the new input/output sizes, and I'm going to initialize its submatrices with learned weights I have at hand.
Could you say if this method can cause problems with the resulting model's performance? E.g. are the softmaxes likely to be affected significantly with these new initially untrained weights?
And if it's OK, do you have some examples on how to do it with the least pain in tensorflow's seq2seq setup?
Answer: It's okay as long as the network you are planning to create has the same number of layers and units, i.e. the dimensions of your network must be compatible with the weights that you are borrowing from the trained model. Also, it would be better if you follow the second blog post of suriyadeepan's practical seq2seq, where he trains a conversation model on Twitter chat. The code is much simpler and easier to understand, it is trained on a smaller dataset, and he also mentioned that the bot trained on the Cornell movie dialog corpus wasn't performing so well. Mainly, to use the pre-trained weights all you have to do is load the model, create placeholders for the weights, assign the weights from the loaded model to the placeholders and run a forward pass. This blog and this question might help you with this task.
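For the vocabulary-growth part specifically, here is a framework-agnostic sketch of the idea of initializing a larger embedding (or output projection) matrix from trained weights: rows for words shared with the old vocabulary are copied over, rows for new words get fresh random values at the scale of the trained weights. All names (`grow_embedding`, `old_vocab`, etc.) are illustrative, not part of any seq2seq library.

```python
import numpy as np

def grow_embedding(old_emb, old_vocab, new_vocab, rng=None):
    """Build a (len(new_vocab), dim) matrix reusing trained rows where possible."""
    rng = rng or np.random.default_rng(0)
    dim = old_emb.shape[1]
    scale = old_emb.std() or 0.1            # match the old weights' scale
    new_emb = rng.normal(0.0, scale, size=(len(new_vocab), dim))
    old_index = {w: i for i, w in enumerate(old_vocab)}
    for j, w in enumerate(new_vocab):
        if w in old_index:                  # reuse the trained row
            new_emb[j] = old_emb[old_index[w]]
    return new_emb

old = np.ones((3, 4))                       # stand-in for trained weights
emb = grow_embedding(old, ["a", "b", "c"], ["a", "x", "b", "y"])
print(emb.shape)                            # (4, 4); rows 0 and 2 are reused
```

In TensorFlow this matrix would then be assigned into the new model's variable before fine-tuning; only the freshly initialized rows (and the softmax outputs that touch them) start untrained.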
"domain": "datascience.stackexchange",
"id": 1388,
"tags": "python, neural-network, nlp, deep-learning, tensorflow"
} |
Finding an answer key given tests and scores | Question: Consider multiple students taking a multiple-choice tests. The test has $q$ questions, and each question has $m$ choices. ($q$ can be large, but $m$ is rather small.) You are given $n$ tests, complete with all answer choices and their grades (number of questions correct). Find the set of answers keys which produce these scores.
I and a few friends had a go at it, but I was wondering if there was an efficient algorithm to solve this.
Answer: Solving this problem is NP-hard. In particular, even deciding whether there exists an answer key consistent with the given answer choices and scores is NP-complete. We can prove this by reduction from 1-in-3 SAT.
Given a 1-in-3 SAT formula with $c$ clauses and $n$ variables, we construct an instance of your problem with $n$ questions, each with 3 choices, and $c+1$ sample answer/grade pairs.
For each variable $x$, one of the questions is "what is the value of boolean variable $x$?" and the three possible answers are "True", "False", and "17".
One student answers every question with the answer $17$ and gets a score of zero. The remaining students are each assigned a clause. For every positive literal $x$ occurring in the clause, the student answers the question about $x$ with the answer "True". For every negative literal $\neg x$ occurring in the clause, the student answers the question about $x$ with the answer "False". For every other question, the student answers $17$. Every such student gets a score of one.
An answer key must correspond with an assignment of binary values to the variables (since no question has $17$ as the correct answer as shown by the fact that the student who always answered $17$ got zero points). Furthermore, there must be exactly one true literal in each clause under this assignment in order for the answer key to match the scores given to the students. In other words, the satisfying assignments to the input 1-in-3 SAT formula are in a bijection with the answer keys consistent with the student scores.
Clearly then, deciding whether an answer key consistent with the student scores exists is NP-hard. Similar arguments can be made to show that for example counting answer keys is #P-complete. | {
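To make the reduction concrete, here is a small illustrative sketch that builds the students' answer sheets and scores from a 1-in-3 SAT instance exactly as described above (variables are numbered from 1; a negative integer denotes a negated literal).

```python
# Build the (answers, score) pairs of the reduction from a 1-in-3 SAT formula.

def build_instance(n, clauses):
    tests = []
    # one student answers "17" everywhere and gets a score of zero
    tests.append((["17"] * n, 0))
    for clause in clauses:
        answers = ["17"] * n
        for lit in clause:
            answers[abs(lit) - 1] = "True" if lit > 0 else "False"
        tests.append((answers, 1))   # exactly one literal must be true
    return tests

def score(answers, key):
    return sum(a == k for a, k in zip(answers, key))

# clause (x1 OR x2 OR NOT x3); x1=T, x2=F, x3=T satisfies exactly one literal
tests = build_instance(3, [(1, 2, -3)])
key = ["True", "False", "True"]
print([score(a, key) == s for a, s in tests])   # [True, True]
```

An answer key is consistent with all the scores exactly when the corresponding truth assignment makes exactly one literal true in every clause, as in the proof.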
"domain": "cstheory.stackexchange",
"id": 4071,
"tags": "ds.algorithms, optimization"
} |
Gauge Boson Self-Interactions with covariant derivative | Question: Self-Interactions of the unphysical gauge bosons $W_1, W_2, W_3$ are written within the gauge term
$L_\mathrm{Gauge}=-\frac{1}{4} W_{\mu \nu} W^{\mu \nu}$
with $W_{\mu \nu}= \partial_\mu W_\nu - \partial_\nu W_\mu + i g [W_\mu,W_\nu]$.
Using now the transformations for Mass matrix diagonalization
$W_{3 \mu} = \mathrm{cw} Z_\mu + \mathrm{sw} A_\mu$,
$W_{2 \mu} = \frac{i}{\sqrt{2}}(W^+_\mu - W^-_\mu)$,
$W_{1 \mu} = \frac{1}{\sqrt{2}}(W^+_\mu + W^-_\mu)$,
one is able to derive e.g. the vertex of WWA. It contains terms of the form
$(\partial A) W^+ W^-, A (\partial W^+) W^-, A W^+ (\partial W^-)$. To be precise it should read
\begin{align}
-i e [ \partial^\mu W^{\nu +} (A_\nu W_\mu^- - A_\mu W_\nu^-) + \partial^\mu W^{\nu -} (A_\mu W_\nu^+ - A_\nu W_\mu^+ ) + \partial^\mu A^\nu (W_\nu^+ W_\mu^- - W_\mu^+ W_\nu^-)]
\end{align}
The Photon gradient term can be integrated by parts, then I get
\begin{align}
-i e \partial^\mu A^\nu (W_\nu^+ W_\mu^- - W_\mu^+ W_\nu^-) = i e A^\nu (\partial^\mu W_\nu^+ W_\mu^- - W_\mu^+ \partial^\mu W_\nu^-)
\end{align}
and in total this would mean for the WWA vertex
\begin{align}
-i e &[ \partial^\mu W^{\nu +} (A_\nu W_\mu^- - A_\mu W_\nu^-) + \partial^\mu W^{\nu -} (A_\mu W_\nu^+ - A_\nu W_\mu^+ ) - A^\nu (\partial^\mu W_\nu^+ W_\mu^- - W_\mu^+ \partial^\mu W_\nu^-)] \\
= -i e &[ -\partial^\mu W^{\nu +} A_\mu W_\nu^- + \partial^\mu W^{\nu -} A_\mu W_\nu^+ ]
\end{align}
Now I tried to rewrite the gauge boson self interactions in terms of the covariant derivative using directly the physical fields by writing down
\begin{align}
L_\mathrm{cov} = -\frac{1}{2}(D_\mu W_\nu^+ - D_\nu W_\mu^+) (D^\mu W^{\nu -} - D^\nu W^{\mu -}) - \frac{1}{4}(\partial_\mu A_\nu - \partial_\nu A_\mu)^2 - \frac{1}{4}(\partial_\mu Z_\nu - \partial_\nu Z_\mu)^2
\end{align}
with
$D_\mu W_\nu^+ = (\partial_\mu + i e A_\mu) W_\nu^+$,
$D_\mu W_\nu^- = (\partial_\mu - i e A_\mu) W_\nu^+$.
Collecting all contributions to WWA results in
\begin{align}
-i e [ \partial^\mu W^{\nu +} (A_\nu W_\mu^- - A_\mu W_\nu^-) + \partial^\mu W^{\nu -} (A_\mu W_\nu^+ - A_\nu W_\mu^+ )]
\end{align}
Obviously, this is not yet the same as with $L_\mathrm{Gauge}$, so what am I missing? In the end, both approaches should lead to equivalent vertices...
It already would be helpful if somebody knows literature where these terms are calculated explicitly using the covariant derivative.
Answer: Your covariant action guess is incomplete. It is missing the gauge invariant
$$\propto \bbox[yellow]{(\partial^\mu A^\nu - \partial^\nu A^\mu) (W_\mu^+ W_\nu^- - c.c.)}\\ \propto [D_\mu,D_\nu](W_\mu^+ W_\nu^- - c.c.),$$
giving the Ws their characteristic magnetic moment.
In both cases, EM gauge invariance is paramount, and should obtain. In both cases, the $W\partial W$ current coupling to the photon should be hermitean (with the i; or antihermitean without it!).
The gauge piece, available in good texts, e.g. M Schwartz, (29.9), is proportional to the top expression in
$$
-ieA_\nu [ \partial_\mu (-W^{\mu~+} W^{\nu~-}+ W^{\nu~+}W^{\mu~-}) \\-W^{\mu~+} \partial_\nu W^{\mu~-}+ W^{\mu~-}\partial_\nu W^{\mu~+ } + W^{\mu~+} \partial_\mu W^{\nu~-}- W^{\mu~-}\partial_\mu W^{\nu~+}]\\ =...
$$
You may halve these terms by subtracting the c.c.s!
Proceed to rewrite it to your form.
Where did the extra term come from? you might ask. Schematically, and with conventional (unlike your) normalizations, recall the gauge action is the square of
$$
\vec W_{\mu\nu}= \partial_\mu \vec W_\nu - \partial_\nu \vec W_\mu + g\, \vec W_\mu \times \vec W_\nu \leadsto \\
W_{\mu\nu}^+= \partial_\mu W_\nu^+ - \partial_\nu W_\mu^+ - i g ( W_\mu^3 W_\nu^+ - \hbox{c.c.}),\\
W_{\mu\nu}^3= \partial_\mu W_\nu^3 - \partial_\nu W_\mu^3 + i g ( W_\mu^+ W_\nu^- - \hbox{c.c.}),
$$
where you might adjust potential wrong signs to your liking.
The key point is that for
$$
gW_\mu^3= eA_\mu + g\cos\theta_W ~Z_\mu,
$$
ignoring Z, this yields the now complete EM gauge-invariant action piece,
$$
-ieA_\nu [ (W_\mu^+(\partial^\mu W^{\nu ~ -}- \partial^\nu W^{\mu ~ -} )- \hbox{c.c.})+ W_\nu^+ W_\mu^-(\partial^\mu A^\nu -\partial^\nu A^\mu )]
$$
with the new, formerly missing, term originating in $(W^3_{\mu\nu})^2$, now put in the end. | {
"domain": "physics.stackexchange",
"id": 98675,
"tags": "lagrangian-formalism, field-theory, standard-model, gauge-theory, electroweak"
} |
Microstates of the canonical ensemble | Question: In the microcanonical ensemble, the microstates of a system in an arbitrary macrostate are also eigenstates of the Hamiltonian. Does the same apply to the microstates of the canonical ensemble? Are they eigenstates of the Hamiltonian? I would expect them not to be, since here the energy is not constant. But I am not sure
Answer: When talking about the canonical ensemble, one has to distinguish
the Hamiltonian of the system of interest, $H_S$
the Hamiltonian of the system of interest + bath/thermostat/reservoir, $H_{tot} = H_S + H_B + V_{SB}$
$H_{tot}$ is treated in a microcanonical ensemble framework, and hence we are discussing its eigenstates. Generally it will not commute with $H_S$, since there is some interaction between the system and the bath. The derivation of the canonical ensemble is, however, based on solid reasoning that the interaction energy is smaller than the energy of the system and can be neglected in the thermodynamic limit (roughly speaking, the energy of the system is proportional to its volume, whereas the interaction energy is proportional to its surface, i.e., scales as volume to the power $2/3$).
Hence, once this logic is accepted and we talk about the microcanonical ensemble, the microstates of the system are its eigenstates.
"domain": "physics.stackexchange",
"id": 84690,
"tags": "quantum-mechanics, thermodynamics, statistical-mechanics, hamiltonian"
} |
Infinite plate of charge - questions about previous asked question | Question: I asked the question previously here - Link. Look for the accepted answer.
I have 2 questions and since they both relate to the answer for my previous question, I will include both here. Note that for Q1, I'm looking for the intuitive answer without integral, while in second, integral explanation.
While I almost understood everything, there's something that's bugging me. So on the plate, there are patches where $x>d\sqrt{2}$, and when we step away, those patches contribute to the increase in $E_\perp$ while patches where $x<d\sqrt{2}$ contribute to the decrease in $E_\perp$, so they cancel each other out, leaving $E_\perp$ unchanged everywhere.
Q1: What I wonder is that since the plate is infinite, there will be more patches with $x>d\sqrt{2}$ than with $x<d\sqrt{2}$, so their increasing effect must be bigger than the decreasing effect of the patches with $x<d\sqrt{2}$. This comes from the fact that there are finitely many patches from $0$ to $d\sqrt{2}$ and infinitely many patches from $d\sqrt{2}$ to infinity. What am I missing?
Q2: In the integral calculation, I wonder what the area under $\delta E_\perp$ gives us and why there is an extra $x$ in there?
Answer: What you're missing is that although there are infinitely many patches, the total contribution from these patches is finite. The fact that there are so many more patches that are far away is balanced by the fact that they have a weaker influence on the field.
I think the most compelling case for this is to look at the graph of $\delta E_\perp$:
While the positive region is infinitely wide, it is also infinitely thin and has finite area.
While convincing, this depiction is slightly deceptive. As you noticed (in your Q2), we add a factor of $x$ when integrating. This is because we care about area. The region which has the change in contribution
$$
\delta E_\perp = \frac{x^2-2d^2}{r^5} \delta d
$$
is the infinitely thin ring of radius $x$. Since it's infinitely thin, we can say it's basically a rectangle with sides $dx$ and $2\pi x$ (the circumference of the ring in question). So the actual change in $E_\perp$ includes an (infinitesimal) area:
$$
\delta E_\perp = \frac{x^2-2d^2}{r^5} \delta d~\mathrm dA = \frac{x^2-2d^2}{r^5} \delta d (2\pi x)\mathrm dx
$$
The graph of this one is less extreme, but still gets the point across and is not deceptive (integrating this curve yields the actual change in $E_\perp$ when we take a tiny step away):
The red curve is $\delta E_\perp$. The black curve is $2\pi/x^2$, which is only there to show that the positive region has finite area; the black curve is always greater than the red, and the area under the black curve is finite.
In this graph, the area of the negative region is equal to the area of the positive region, so the change in $E_\perp$ is zero.
When there is a finite region with finite values and an infinite region with infinitesimal values, I'd caution against reasoning about which has greater area without doing an explicit integral.
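Taking that caution seriously, the cancellation claim for the infinite plate can be checked numerically. A sketch (my own manipulation, not from the answer above): setting $d=1$ and substituting $x = \tan\theta$, the total change $\int_0^\infty \frac{x^2-2}{(x^2+1)^{5/2}}\,2\pi x\,\mathrm dx$ becomes $2\pi\int_0^{\pi/2}\sin\theta\,(\sin^2\theta - 2\cos^2\theta)\,\mathrm d\theta$, which should vanish.

```python
import math

# Midpoint-rule check that the positive and negative regions cancel exactly.
def total_change(n=200_000):
    h = (math.pi / 2) / n
    s = 0.0
    for i in range(n):
        t = (i + 0.5) * h
        s += math.sin(t) * (math.sin(t) ** 2 - 2 * math.cos(t) ** 2)
    return 2 * math.pi * s * h

print(abs(total_change()) < 1e-6)   # True: net change in E_perp is zero
```

The two pieces, $\int_0^{\pi/2}\sin^3\theta\,\mathrm d\theta = 2/3$ and $2\int_0^{\pi/2}\sin\theta\cos^2\theta\,\mathrm d\theta = 2/3$, cancel analytically as well, which is exactly the equal-areas statement about the red curve above.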
For an explicit counterexample, if we had a strip of uniform charge $\sigma$ that is $W$ wide and infinitely long, we get to use the same ideas. For a point directly in the middle of the strip ($\frac{W}{2}$ on either side), and $d$ distance away from the strip, we get the same symmetry argument showing that the parallel components all cancel out. We also get the same formulas for the perpendicular component that J. Murray showed:
$$
E_\perp = \frac{dA}{r^2}\cos(\theta)=dA \frac{d}{(x^2+d^2)^{3/2}}
$$
and when $d\to d+\delta d$:
$$
\delta E_\perp = \frac{x^2-2d^2}{r^5} \delta d
$$
so patches with $x > d\sqrt{2}$ increase their contribution when we step away. Also, we still have infinitely many patches with $x > d\sqrt{2}$ and finitely many with $x < d\sqrt{2}$. But in this scenario, the net electric field decreases as we move away from the strip (I'll leave the integration to you as an exercise). | {
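A quick numerical sanity check (my addition, not part of the original answer): integrating the patch contributions over a large but finite disc shows the total barely depends on $d$ and approaches a constant, confirming that the infinitely many far patches contribute only a finite amount.

```python
# Numerical check, in units where the charge prefactor is 1: the total
# field from a disc of radius R is
#   E(d) = integral_0^R 2*pi*x*d / (x^2 + d^2)^(3/2) dx,
# which tends to 2*pi independently of d as R grows.
import math

def field(d, R=1000.0, n=100_000):
    """Composite Simpson integration of the patch contributions (n even)."""
    f = lambda x: 2 * math.pi * x * d / (x * x + d * d) ** 1.5
    h = R / n
    s = f(0.0) + f(R)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(i * h)
    return s * h / 3

E1, E2 = field(1.0), field(2.0)   # both close to 2*pi, about 6.283
```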
"domain": "physics.stackexchange",
"id": 95920,
"tags": "electromagnetism, electrostatics, electric-fields"
} |
High probability events without low probability coordinates | Question: Let $X$ be a random variable taking values in $\Sigma^n$ (for some large alphabet $\Sigma$), which has very high entropy - say, $H(X) \ge (n- \delta)\cdot\log|\Sigma|$ for an arbitrarily small constant $\delta$. Let $E \subseteq \rm{Supp}(X)$ be an event in the support of $X$ such that $\Pr[X \in E] \ge 1 - \varepsilon$, where $\varepsilon$ is an arbitrarily small constant.
We say that a pair $(i,\sigma)$ is a low probability coordinate of $E$ if $\Pr[X \in E | X_i = \sigma] \le \varepsilon$. We say that a string $x \in \Sigma^n$ contains a low probability coordinate of $E$ if $(i, x_i)$ is a low probability coordinate of $E$ for some $i$.
In general, some strings in $E$ may contain low probability coordinates of $E$. The question is can we always find a high probability event $E' \subseteq E$ such that no string in $E'$ contains a low probability coordinate of $E'$ (and not of $E$).
Thanks!
Answer: Here is an example complementing Harry Yuen's answer. For a counter-example, it suffices to define appropriate $X,E$ and show that any large subset $E'\subseteq E$ must have a low probability co-ordinate of $E$ - a low probability co-ordinate of $E$ is necessarily a low probability co-ordinate of $E'$.
Also, I'll ignore the condition about entropy - appending $N$ independent uniformly distributed random variables to $X$ (and taking $E$ to $E\times \Sigma^N$) will increase $H(X)/(n+N)\log|\Sigma|$ to nearly $1$ without affecting whether such an $E'$ exists (I haven't thought this through carefully).
Here's the example. Let $X$ be a random element of $\{0,1\}^n$ such that every vector with Hamming weight $1$ (i.e., vectors of the form $0\dots 010\dots 0$) has probability $(1-\epsilon)/n$ and the all-ones vector $1\dots 1$ has probability $\epsilon$. Let $E$ be the set of vectors with Hamming weight $1$.
Consider a subset $E'\subseteq E$. If $E'$ is not empty, it contains a vector of Hamming weight $1$, say $100\dots 0$ without loss of generality.
But $\Pr[X \in E'|X_i=1]=\frac{(1-\epsilon)/n}{(1-\epsilon)/n + \epsilon}$, which is less than $\epsilon$ if $n$ is about $2/\epsilon^2$. | {
"domain": "cstheory.stackexchange",
"id": 1655,
"tags": "it.information-theory, pr.probability"
} |
Core algorithms deployed | Question: To demonstrate the importance of algorithms (e.g. to students and professors who don't do theory or are even from entirely different fields) it is sometimes useful to have ready at hand a list of examples where core algorithms have been deployed in commercial, governmental, or widely-used software/hardware.
I am looking for such examples that satisfy the following criteria:
The software/hardware using the algorithm should be in wide use right now.
The example should be specific.
Please give a reference to a specific system and a specific algorithm.
E.g., in "algorithm X is useful for image processing"
the term "image processing" is not specific enough;
in "Google search uses graph algorithms"
the term "graph algorithms" is not specific enough.
The algorithm should be taught in
typical undergraduate or Ph.D. classes in algorithms or data structures.
Ideally, the algorithm is covered in typical algorithms textbooks.
E.g., "well-known system X uses little-known algorithm Y" is not good.
Update:
Thanks again for the great answers and links!
Some people comment that it is hard to satisfy the criteria
because core algorithms are so pervasive that it's hard to point to a specific use.
I see the difficulty.
But I think it is worthwhile to come up with specific examples because
in my experience telling people:
"Look, algorithms are important because they are just about everywhere!" does not work.
Answer: Algorithms that are the main driver behind a system are, in my opinion, easier to find in non-algorithms courses for the same reason theorems with immediate applications are easier to find in applied mathematics rather than pure mathematics courses. It is rare for a practical problem to have the exact structure of the abstract problem in a lecture. To be argumentative, I see no reason why fashionable algorithms course material such as Strassen's multiplication, the AKS primality test, or the Moser-Tardos algorithm is relevant for low-level practical problems of implementing a video database, an optimizing compiler, an operating system, a network congestion control system or any other system. The value of these courses is learning that there are intricate ways to exploit the structure of a problem to find efficient solutions. Advanced algorithms is also where one meets simple algorithms whose analysis is non-trivial. For this reason, I would not dismiss simple randomized algorithms or PageRank.
I think you can choose any large piece of software and find basic and advanced algorithms implemented in it. As a case study, I've done this for the Linux kernel, and shown a few examples from Chromium.
Basic Data Structures and Algorithms in the Linux kernel
Links are to the source code on github.
Linked list, doubly linked list, lock-free linked list.
B+ Trees with comments telling you what you can't find in the textbooks.
A relatively simple B+Tree implementation. I have written it as a learning exercise to understand how B+Trees work. Turned out to be useful as well.
...
A trick was used that is not commonly found in textbooks. The lowest values are to the right, not to the left. All used slots within a node are on the left, all unused slots contain NUL values. Most operations simply loop once over all slots and terminate on the first NUL.
Priority sorted lists used for mutexes, drivers, etc.
Red-Black trees are used for scheduling, virtual memory management, and to track file descriptors, directory entries, etc.
Interval trees
Radix trees, are used for memory management, NFS related lookups and networking related functionality.
A common use of the radix tree is to store pointers to struct pages;
Priority heap, which is literally, a textbook implementation, used in the control group system.
Simple insertion-only static-sized priority heap containing pointers, based on CLR, chapter 7
Hash functions, with a reference to Knuth and to a paper.
Knuth recommends primes in approximately golden ratio to the maximum
integer representable by a machine word for multiplicative hashing.
Chuck Lever verified the effectiveness of this technique:
http://www.citi.umich.edu/techreports/reports/citi-tr-00-1.pdf
These primes are chosen to be bit-sparse, that is operations on
them can use shifts and additions instead of multiplications for
machines where multiplications are slow.
Some parts of the code, such as this driver, implement their own hash function.
hash function using a Rotating Hash algorithm
Knuth, D. The Art of Computer Programming,
Volume 3: Sorting and Searching,
Chapter 6.4.
Addison Wesley, 1973
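The multiplicative hashing described in the Knuth quote above can be sketched in a few lines (a hedged illustration of the textbook technique using the kernel's bit-sparse 32-bit constant, not a translation of the kernel's C code):

```python
# Multiply the key by a prime near 2^32/phi and keep the top `bits`
# bits of the 32-bit product as the bucket index.
GOLDEN_RATIO_PRIME_32 = 0x9E370001

def hash_32(key, bits):
    product = (key * GOLDEN_RATIO_PRIME_32) & 0xFFFFFFFF
    return product >> (32 - bits)
```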
Hash tables used to implement inodes, file system integrity checks etc.
Bit arrays, which are used for dealing with flags, interrupts, etc. and are featured in Knuth Vol. 4.
Semaphores and spin locks
Binary search is used for interrupt handling, register cache lookup, etc.
Binary search with B-trees
Depth first search and variant used in directory configuration.
Performs a modified depth-first walk of the namespace tree, starting
(and ending) at the node specified by start_handle. The callback
function is called whenever a node that matches the type parameter is
found. If the callback function returns a non-zero value, the search
is terminated immediately and this value is returned to the caller.
Breadth first search is used to check correctness of locking at runtime.
Merge sort on linked lists is used for garbage collection, file system management, etc.
Bubble sort is amazingly implemented too, in a driver library.
Knuth-Morris-Pratt string matching,
Implements a linear-time string-matching algorithm due to Knuth,
Morris, and Pratt [1]. Their algorithm avoids the explicit
computation of the transition function DELTA altogether. Its
matching time is O(n), for n being length(text), using just an
auxiliary function PI[1..m], for m being length(pattern),
precomputed from the pattern in time O(m). The array PI allows
the transition function DELTA to be computed efficiently
"on the fly" as needed. Roughly speaking, for any state
"q" = 0,1,...,m and any character "a" in SIGMA, the value
PI["q"] contains the information that is independent of "a" and
is needed to compute DELTA("q", "a") [2]. Since the array PI
has only m entries, whereas DELTA has O(m|SIGMA|) entries, we
save a factor of |SIGMA| in the preprocessing time by computing
PI rather than DELTA.
[1] Cormen, Leiserson, Rivest, Stein
Introduction to Algorithms, 2nd Edition, MIT Press
[2] See finite automata theory
Boyer-Moore pattern matching with references and recommendations for when to prefer the alternative.
Implements Boyer-Moore string matching algorithm:
[1] A Fast String Searching Algorithm, R.S. Boyer and Moore.
Communications of the Association for Computing Machinery,
20(10), 1977, pp. 762-772.
http://www.cs.utexas.edu/users/moore/publications/fstrpos.pdf
[2] Handbook of Exact String Matching Algorithms, Thierry Lecroq,
2004
http://www-igm.univ-mlv.fr/~lecroq/string/string.pdf
Note: Since Boyer-Moore (BM) performs searches for matchings from right to left, it's still possible that a matching could be spread over multiple blocks, in that case this algorithm won't find any coincidence.
If you're willing to ensure that such thing won't ever happen, use the Knuth-Pratt-Morris (KMP) implementation instead.
In conclusion, choose the proper string search algorithm depending on your setting.
Say you're using the textsearch infrastructure for filtering, NIDS or any similar security focused purpose, then go KMP. Otherwise, if you really care about performance, say you're classifying packets to apply Quality of Service (QoS) policies, and you don't mind about possible matchings spread over multiple fragments, then go BM.
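For reference, a minimal Python sketch of the KMP algorithm the kernel comments describe, with the precomputed PI failure table (the textbook version, not the kernel's C implementation):

```python
# Build the PI table in O(m), then scan the text in O(n): k tracks the
# length of the longest pattern prefix matching a suffix of the text
# read so far.
def kmp_table(pattern):
    pi = [0] * len(pattern)
    k = 0
    for i in range(1, len(pattern)):
        while k and pattern[i] != pattern[k]:
            k = pi[k - 1]
        if pattern[i] == pattern[k]:
            k += 1
        pi[i] = k
    return pi

def kmp_search(text, pattern):
    pi, k, hits = kmp_table(pattern), 0, []
    for i, c in enumerate(text):
        while k and c != pattern[k]:
            k = pi[k - 1]
        if c == pattern[k]:
            k += 1
        if k == len(pattern):
            hits.append(i - k + 1)   # match ends at i
            k = pi[k - 1]
    return hits
```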
Data Structures and Algorithms in the Chromium Web Browser
Links are to the source code on Google code. I'm only going to list a few. I would suggest using the search feature to look up your favourite algorithm or data structure.
Splay trees.
The tree is also parameterized by an allocation policy
(Allocator). The policy is used for allocating lists in the C free
store or the zone; see zone.h.
Voronoi diagrams are used in a demo.
Tabbing based on Bresenham's algorithm.
There are also such data structures and algorithms in the third-party code included in the Chromium code.
Binary trees
Red-Black trees
Conclusion of Julienne Walker
Red black trees are interesting beasts. They're believed to be simpler than AVL trees (their direct competitor), and at first glance this seems to be the case because insertion is a breeze. However, when one begins to play with the deletion algorithm, red black trees become very tricky. However, the counterweight to this added complexity is that both insertion and deletion can be implemented using a single pass, top-down algorithm. Such is not the case with AVL trees, where only the insertion algorithm can be written top-down. Deletion from an AVL tree requires a bottom-up algorithm.
...
Red black trees are popular, as most data structures with a whimsical name. For example, in Java and C++, the library map structures are typically implemented with a red black tree. Red black trees are also comparable in speed to AVL trees. While the balance is not quite as good, the work it takes to maintain balance is usually better in a red black tree. There are a few misconceptions floating around, but for the most part the hype about red black trees is accurate.
AVL trees
Rabin-Karp string matching is used for compression.
Compute the suffixes of an automaton.
Bloom filter implemented by Apple Inc.
Bresenham's algorithm.
Programming Language Libraries
I think they are worth considering. The programming languages designers thought it was worth the time and effort of some engineers to implement these data structures and algorithms so others would not have to. The existence of libraries is part of the reason we can find basic data structures reimplemented in software that is written in C but less so for Java applications.
The C++ STL includes lists, stacks, queues, maps, vectors, and algorithms for sorting, searching and heap manipulation.
The Java API is very extensive and covers much more.
The Boost C++ library includes algorithms like Boyer-Moore and Knuth-Morris-Pratt string matching algorithms.
Allocation and Scheduling Algorithms
I find these interesting because even though they are called heuristics, the policy you use dictates the type of algorithm and data structure that are required, so one needs to know about stacks and queues.
Least Recently Used can be implemented in multiple ways. A list-based implementation in the Linux kernel.
Other possibilities are First In First Out, Least Frequently Used, and Round Robin.
A variant of FIFO was used by the VAX/VMS system.
The Clock algorithm by Richard Carr is used for page frame replacement in Linux.
The Intel i860 processor used a random replacement policy.
Adaptive Replacement Cache is used in some IBM storage controllers, and was used in PostgreSQL though only briefly due to patent concerns.
The Buddy memory allocation algorithm, which is discussed by Knuth in TAOCP Vol. 1, is used in the Linux kernel, as is the jemalloc concurrent allocator used by FreeBSD and by Facebook.
Core utils in *nix systems
grep and awk both implement the Thompson-McNaughton-Yamada construction of NFAs from regular expressions, which apparently even beats the Perl implementation.
tsort implements topological sort.
fgrep implements the Aho-Corasick string matching algorithm.
GNU grep, implements the Boyer-Moore algorithm according to the author Mike Haertel.
crypt(1) on Unix implemented a variant of the encryption algorithm in the Enigma machine.
Unix diff, implemented by Doug McIlroy based on a prototype co-written with James Hunt, performs better than the standard dynamic programming algorithm used to compute Levenshtein distances. The Linux version computes the shortest edit distance.
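As a baseline for comparison, here is the standard dynamic-programming edit distance that diff-style algorithms improve on for typical inputs (a textbook sketch, not McIlroy's algorithm):

```python
# O(nm) Levenshtein distance with a rolling row; diff solves a related
# shortest-edit-script problem much faster in practice.
def levenshtein(a, b):
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]
```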
Cryptographic Algorithms
This could be a very long list. Cryptographic algorithms are implemented in all software that can perform secure communications or transactions.
Merkle trees, specifically the Tiger Tree Hash variant, were used in peer-to-peer applications such as GTK Gnutella and LimeWire.
MD5 is used to provide a checksum for software packages and is used for integrity checks on *nix systems (Linux implementation) and is also supported on Windows and OS X.
OpenSSL implements many cryptographic algorithms including AES, Blowfish, DES, SHA-1, SHA-2, RSA, etc.
Compilers
LALR parsing is implemented by yacc and bison.
Dominator algorithms are used in most optimizing compilers based on SSA form.
lex and flex compile regular expressions into NFAs.
Compression and Image Processing
The Lempel-Ziv algorithms for the GIF image format are implemented in image manipulation programs, starting from the *nix utility convert to complex programs.
Run length encoding is used to generate PCX files (used by the original Paintbrush program), compressed BMP files and TIFF files.
Wavelet compression is the basis for JPEG 2000 so all digital cameras that produce JPEG 2000 files will be implementing this algorithm.
Reed-Solomon error correction is implemented in the Linux kernel, CD drives, barcode readers and was combined with convolution for image transmission from Voyager.
Conflict Driven Clause Learning
Since the year 2000, the running time of SAT solvers on industrial benchmarks (usually from the hardware industry, though other sources are used too) has decreased nearly exponentially every year. A very important part of this development is the Conflict Driven Clause Learning algorithm that combines the Boolean Constraint Propagation algorithm in the original paper of Davis, Logemann and Loveland with the technique of clause learning that originated in constraint programming and artificial intelligence research. For specific, industrial modelling, SAT is considered an easy problem (see this discussion). To me, this is one of the greatest success stories in recent times because it combines algorithmic advances spread over several years, clever engineering ideas, experimental evaluation, and a concerted communal effort to solve the problem. The CACM article by Malik and Zhang is a good read. This algorithm is taught in many universities (I have attended four where it was the case) but typically in a logic or formal methods class.
Applications of SAT solvers are numerous. IBM, Intel and many other companies have their own SAT solver implementations. The package manager in OpenSUSE also uses a SAT solver. | {
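The Boolean Constraint Propagation step mentioned above is simple enough to sketch (a toy illustration of unit propagation on DIMACS-style clauses; a real CDCL solver adds watched literals, decision heuristics, and clause learning on top):

```python
# Clauses are lists of nonzero ints (positive = variable, negative =
# its negation).  Repeatedly assign the forced literal of any unit
# clause; return the extended assignment, or None on conflict.
def unit_propagate(clauses, assignment):
    assignment = dict(assignment)
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            unassigned, satisfied = [], False
            for lit in clause:
                val = assignment.get(abs(lit))
                if val is None:
                    unassigned.append(lit)
                elif (lit > 0) == val:
                    satisfied = True
                    break
            if satisfied:
                continue
            if not unassigned:
                return None                 # clause falsified: conflict
            if len(unassigned) == 1:        # unit clause: forced literal
                lit = unassigned[0]
                assignment[abs(lit)] = lit > 0
                changed = True
    return assignment
```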
"domain": "cstheory.stackexchange",
"id": 2380,
"tags": "ds.algorithms, big-picture, application-of-theory"
} |
Prove that the Dyson series solves the differential equation | Question: I have a short question about Dyson series.
Prove that $$U\left(t, 0\right) = \sum_{n}^{} \left(-i\right)^{n} \int_{0}^{t} dt_{1} \int_{0}^{t_{1}}dt_{2} ... \int_{0}^{t_{n - 1}} dt_{n} H_{I}\left(t_{1}\right)H_{I}\left(t_{2}\right)...H_{I}\left(t_{n}\right)$$ is a solution of $$ i\frac{\mathrm{d} }{\mathrm{d} t}\left(U\left(t, 0\right)\right) = H_{I}\left(t\right)U\left(t,0\right).$$
My attempt:
We can write: $$ U\left(t, 0\right) = \sum_{n}^{} (-i)^{n} \int_{0}^{t}dt_{1}f\left(t_{1}\right)$$ where $$f\left(t_{1}\right) = \int_{0}^{t_{1}}dt_{2} \int_{0}^{t_{2}}dt_{3} ... \int_{0}^{t_{n-1}}dt_{n}H_{I}\left(t_{1}\right)H_{I}\left(t_{2}\right)...H_{I}\left(t_{n}\right).$$ Calculate the derivative ($\frac{\mathrm{d} }{\mathrm{d} t}\left(U\left(t, 0\right)\right)$):
$$\frac{\mathrm{d} }{\mathrm{d} t}U\left(t,0\right) = \sum_{n}^{}(-i)^n\frac{\mathrm{d}}{\mathrm{d} t}\left(\int_{0}^{t}dt_{1}f\left(t_{1}\right)\right) = \sum_{n}^{}\left(-i\right)^{n}f\left(t\right).$$
In other words:
$$ \frac{\mathrm{d} }{\mathrm{d} t}U\left(t,0\right) = \sum_{n}^{} \left(-i\right)^n \int_{0}^{t}dt_{2} \int_{0}^{t_{2}}dt_{3} ... \int_{0}^{t_{n-1}}dt_{n}H_{I}\left(t\right)H_{I}\left(t_{2}\right)...H_{I}\left(t_{n}\right).$$
So:
$$ i\frac{\mathrm{d} }{\mathrm{d} t}U\left(t,0\right) = \sum_{n}^{} \left(-i\right)^{n - 1} \int_{0}^{t}dt_{2} \int_{0}^{t_{2}}dt_{3} ... \int_{0}^{t_{n-1}}dt_{n}H_{I}\left(t\right)H_{I}\left(t_{2}\right)...H_{I}\left(t_{n}\right).$$
Now calculate $H_{I}\left(t\right)U\left(t, 0\right)$: $$H_{I}\left(t\right)U\left(t, 0\right) =\sum_{n}^{} \left(-i\right)^{n} H_{I}\left(t\right) \int_{0}^{t} dt_{1} \int_{0}^{t_{1}}dt_{2} ... \int_{0}^{t_{n - 1}} dt_{n} H_{I}\left(t_{1}\right)H_{I}\left(t_{2}\right)...H_{I}\left(t_{n}\right).$$
How are these two terms equal?
Answer:
How are these two terms equal?
I think you may have messed up a little bit with your evaluation of $f(t_1)$ at $t_1=t$. Or maybe it is just that your sum over $n$ in one of your equations (the one after taking the derivative) should start at $n=1$ since the derivative kills the $n=0$ term.
Anyways, it is helpful to just write out a few terms explicitly to see what is going on:
$$
U(t,0) = 1 + (-i)\int_0^t dt_1 H_I(t_1) + (-i)^2\int_0^t dt_1 H_I(t_1)\int_0^{t_1} dt_2 H_I(t_2) + \ldots
$$
So, we have:
$$
\frac{dU}{dt} = 0 -iH_I(t) +(-i)^2 H_I(t)\int_0^{t} dt_2 H_I(t_2) + \ldots\;,\tag{1}
$$
where, please note, the upper limit of the $dt_2$ integration is now $t$.
From Eq. (1), just pull out a factor of $-iH$ from each term:
$$
\frac{dU}{dt} = -iH_I(t)\left( 1 + (-i)\int_0^t dt_2 H_I(t_2) + \ldots \right)\;,
$$
but $t_2$ is a dummy variable, so just rename it $t_1$:
$$
\frac{dU}{dt} = -iH_I(t)\left( 1 + (-i)\int_0^t dt_1 H_I(t_1) + \ldots \right)
=-iH_I(t)U(t,0)
$$
Thus:
$$
i\frac{d}{dt}(U(t,0)) = H_I(t)U(t,0)
$$ | {
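As a sanity check of the result (my addition, not part of the original exchange), take the scalar toy case $H_I(t) = t$, where everything commutes and the time-ordered integrals collapse: the $n$-th term becomes $(-i)^n (t^2/2)^n / n!$, so the series should sum to $e^{-it^2/2}$, which indeed satisfies $i\,dU/dt = t\,U$.

```python
# Partial sums of the Dyson series in the commuting (scalar) case
# H_I(t) = t, compared against the closed form exp(-i t^2 / 2).
import cmath, math

def dyson_partial(t, terms=30):
    return sum((-1j) ** n * (t * t / 2) ** n / math.factorial(n)
               for n in range(terms))

t = 1.7
U = dyson_partial(t)
exact = cmath.exp(-1j * t * t / 2)
```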
"domain": "physics.stackexchange",
"id": 98266,
"tags": "quantum-mechanics, homework-and-exercises, operators, schroedinger-equation, time-evolution"
} |
Why is "activity" of gaseous component equal to its partial pressure but that of aqueous component its concentration? | Question: While solving an electrochemistry problem I had to calculate the reaction quotient of this reaction:
$$\ce{2Fe(s) + 4H+(aq) + O2(g) -> 2Fe^{2+} (aq) + 2H2O (l)}$$
It turns out to be: $$\ce{Q=\dfrac{[\ce{Fe}^{2+}]^{2}}{[\ce{H+}]^4p\ce{O2}}}$$
where square brackets mean concentration and $p$ denotes partial pressure. Here's what I do know: activity of solid and liquid components is 1, and that of aqueous components is their concentration.
However, why is the activity of gaseous components their partial pressure? Is this fact experimental or can it be derived?
Answer: Thermodynamics courses usually start by calculating amounts of heat and amounts of work entering a container. And the work is $\pu{p\Delta V}$. No mention of concentration! Just the pressure. Afterwards, enthalpy is introduced, then gas chemistry is developed, always using pressures. Equilibrium constants are then introduced, always with gases and pressures. And suddenly appears the necessity of defining equilibrium constants in liquid phase, where pressures have no obvious meaning. Here the authors of the course state that, thanks to Henry's law, it makes sense to replace the pressure by the concentration (or the activity), which is easier to determine than pressure. Concentration and activity are supposed to be proportional to pressure. Fortunately, this change of parameter does work well in practice. The equilibrium constants determined by using concentration allow nice predictions, and are verified for example in electrochemistry (Nernst law). But it should be known that as soon as a gas intervenes in a chemical reaction, it is safer to use its pressure.
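As a tiny worked example connecting back to the question's reaction (all numbers invented for illustration): the solid and the liquid contribute activity 1, the aqueous ions enter through their concentrations, and O2 through its partial pressure.

```python
# Q = [Fe2+]^2 / ([H+]^4 * p_O2) for the reaction in the question;
# Fe(s) and H2O(l) have activity 1 and drop out.
def reaction_quotient(c_fe2, c_h, p_o2):
    return c_fe2 ** 2 / (c_h ** 4 * p_o2)

Q = reaction_quotient(0.10, 1.0e-3, 0.20)   # = 5e10 with these made-up values
```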
"domain": "chemistry.stackexchange",
"id": 15824,
"tags": "physical-chemistry, electrochemistry, equilibrium, redox"
} |
Lexicographically minimal topological sort of a labeled DAG | Question: Consider the problem where we are given as input a directed acyclic graph $G = (V, E)$, a labeling function $\lambda$ from $V$ to some set $L$ with a total order $<_L$ (e.g., the integers), and where we are asked to compute the lexicographically smallest topological sort of $G$ in terms of $\lambda$. More precisely, a topological sort of $G$ is an enumeration of $V$ as $\mathbf{v} = v_1, \ldots, v_n$, such that for all $i \neq j$, whenever there is a path from $v_i$ to $v_j$ in $G$, then we must have $i < j$. The label of such a topological sort is the sequence of elements of $S$ obtained as $\mathbf{l} = \lambda(v_1), \ldots, \lambda(v_n)$. The lexicographical order on such sequences (which all have length $|V|$) is defined as $\mathbf{l} <_{\text{LEX}} \mathbf{l'}$ iff there is some position $i$ such that $l_i <_L l_i'$ and $l_j = l'_j$ for all $j < i$. Pay attention to the fact that each label in $S$ can be assigned to multiple vertices in $V$ (otherwise the problem is trivial).
This problem can be stated either in a computation variant ("compute the lexicographically minimal topological sort") or in a decision variant ("is this input word the minimal topological sort?"). My question is, what is the complexity of this problem? Is it in PTIME (or in FP, for the computation variant) or is it NP-hard? If the general problem is NP-hard, I am also interested in the version where the set $L$ of possible labels is fixed in advance (i.e., there are only a constant number of possible labels).
Remarks:
Here is a small real-world example to motivate the problem. We can see the DAG as representing tasks of a project (with a dependency relationship between them) and the labels are integers representing the number of days that each task takes. To finish the project, it will take me the same total amount of time no matter the order I choose for the tasks. However, I would like to impress my boss, and to do this I want to finish as many tasks as possible as fast as possible (in a greedy manner, even if it means being very slow at the end because the harder tasks remain). Choosing the lexicographically minimal order optimises the following criterion: I want to choose an order $o$ such that there is no other order $o'$ and a number of days $n$ where after $n$ days I would have finished more tasks with order $o'$ than with order $o$ (i.e., if my boss looks at time $n$, I give a better impression with $o'$), but for all $m < n$ I have finished no fewer tasks with order $o'$ than with order $o$.
To give some insight about the problem: I already know from previous answers that the following related problem is hard: "is there a topological sort which achieves the following sequence"? However, the fact here that I want a sequence which is minimal for this lexicographic order seems to constrain a lot the possible topological orders that may achieve it (in particular the reductions in those other answers no longer seem to work). Intuitively, there are much less situations where we have a choice to make.
Note that there seem to be interesting rephrasings of the problems in terms of set cover (when restricting the problem to DAGs that are bipartite, i.e., have height two): given a set of sets, enumerate them in an order $S_1, \ldots, S_n$ that minimises lexicographically the sequence $|S_1|$, $|S_2 \backslash S_1|$, $|S_3 \backslash (S_1 \cup S_2)|$, $\ldots$, $|S_n \backslash (S_1 \cup \cdots \cup S_{n-1})|$. The problem can also be rephrased on undirected graphs (progressively expand a connected area of the graph following the order that minimizes the lexicographic sequence of the uncovered labels). However, because the sequence has to be greedy at all times by definition of the lexicographic order, I can't get reductions (e.g., of Steiner tree) to work.
Thanks in advance for your ideas!
Answer: With multiple copies of the same label allowed, the problem is NP-hard, via a reduction from cliques in graphs. Given a graph $G$ in which you want to find a $k$-clique, make a DAG with a source vertex for each vertex of $G$, a sink vertex for each edge of $G$, and a directed edge $xy$ whenever $x$ is a vertex of $G$ that forms an endpoint of edge $y$. Give the vertices of $G$ the label value $1$ and the edges of $G$ the label value $0$.
Then, there is a $k$-clique in $G$ if and only if the lexicographically first topological order forms a sequence of $k$ $1$'s and $\tbinom{k}{2}$ $0$'s, with $i-1$ $0$'s following the $i$th $1$. E.g. a six-vertex clique would be represented by the sequence $110100100010000100000$. This is the lexicographically smallest sequence that could possibly begin a topological ordering of a labeled DAG given by this construction (replacing any of the $1$'s by $0$'s would give a sequence with more edges than could be found in a simple graph with that many vertices) and it can only be the beginning of a topological ordering when $G$ contains the desired clique. | {
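The reduction is small enough to brute-force on the smallest interesting case (a hedged sketch of my own, not from the original answer): for $G = K_3$, i.e. $k = 3$, the lexicographically minimal topological sort of the constructed DAG spells out $110100$, matching the pattern described above.

```python
# Brute-force check of the clique reduction for G = K3: vertex nodes
# get label 1, edge nodes label 0, and each edge node must come after
# both of its endpoints in the DAG.
from itertools import permutations

verts = ["a", "b", "c"]
edges = [("a", "b"), ("b", "c"), ("a", "c")]
nodes = verts + edges
label = {n: "1" if n in verts else "0" for n in nodes}

def is_topo(order):
    pos = {n: i for i, n in enumerate(order)}
    return all(pos[u] < pos[e] and pos[v] < pos[e]
               for e in edges for (u, v) in [e])

best = min("".join(label[n] for n in order)
           for order in permutations(nodes) if is_topo(order))
```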
"domain": "cstheory.stackexchange",
"id": 3330,
"tags": "cc.complexity-theory, np-hardness, directed-acyclic-graph, order-theory, topological-sorting"
} |
Is there an efficient beta-equivalence algorithm? | Question: Is there an efficient algorithm to determine if two terms are beta-equivalent? Specifically, I am curious about simply-typed-lambda-calculus, so you can assume both terms are strongly normalizing.
I know a simple algorithm:
Compute the beta normal form (BNF) for each term.
Confirm that the two BNFs are alpha-equivalent.
But it is possible for BNFs to be exponentially larger than the original term. Is it possible to check the equivalence of terms $S$ and $T$ in $O(|S| + |T|)$ time?
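The simple algorithm above can be made concrete (my sketch, using untyped terms in De Bruijn notation, so that the alpha-equivalence check of step 2 becomes plain syntactic equality; step 1 is only guaranteed to terminate for typable terms):

```python
# Untyped lambda terms in De Bruijn notation:
#   ("var", n) | ("lam", body) | ("app", fun, arg)

def shift(t, d, cutoff=0):
    """Add d to every free variable index >= cutoff."""
    if t[0] == "var":
        return ("var", t[1] + d) if t[1] >= cutoff else t
    if t[0] == "lam":
        return ("lam", shift(t[1], d, cutoff + 1))
    return ("app", shift(t[1], d, cutoff), shift(t[2], d, cutoff))

def subst(t, s, j=0):
    """Capture-avoiding substitution of s for variable j in t."""
    if t[0] == "var":
        return s if t[1] == j else t
    if t[0] == "lam":
        return ("lam", subst(t[1], shift(s, 1), j + 1))
    return ("app", subst(t[1], s, j), subst(t[2], s, j))

def step(t):
    """One leftmost-outermost beta step, or None if t is normal."""
    if t[0] == "app":
        f, a = t[1], t[2]
        if f[0] == "lam":                        # beta-redex
            return shift(subst(f[1], shift(a, 1)), -1)
        r = step(f)
        if r is not None:
            return ("app", r, a)
        r = step(a)
        if r is not None:
            return ("app", f, r)
    elif t[0] == "lam":
        r = step(t[1])
        if r is not None:
            return ("lam", r)
    return None

def beta_eq(s, t):
    """Normalize both terms and compare syntactically."""
    while (r := step(s)) is not None: s = r
    while (r := step(t)) is not None: t = r
    return s == t
```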
Answer: The answer is no. An old theorem of Statman states that $\beta$-equivalence in the simply-typed $\lambda$-calculus is not elementary recursive, that is, no algorithm whose running time is bounded by $2^{\vdots^{2^{|S|+|T|}}}$ for a tower of exponentials of fixed height may decide whether two simply-typed terms $S$ and $T$ are $\beta$-equivalent.
The original statement is from
Richard Statman. The typed $\lambda$-calculus is not elementary recursive. Theoret. Comput. Sci. 9:73-81, 1979.
A simpler proof may be found in this paper by Harry Mairson.
Edit: as observed by Martin Berger, Mairson proves that $\beta\eta$-equivalence is not elementary recursive, whereas Statman's result (and the OP's question) concerns $\beta$-equivalence, without $\eta$. However, as pointed out by xavierm02, Mairson's result implies Statman's. Let me fill in the details for those who are not familiar with $\eta$-long forms.
The $\eta$-long form $\eta(x^A)$ of a variable $x^A$ is defined by induction on $A$: observe that $A=A_1\to\cdots\to A_n\to\alpha$ for some $n\in\mathbb N$, some types $A_1,\ldots,A_n$ (smaller than $A$) and some atom $\alpha$, and let
$$\eta(x^A) := \lambda y_1^{A_1}\ldots\lambda y_n^{A_n}.x\eta(y_1^{A_1})\cdots\eta(y_n^{A_n}),$$
where the $\eta(y_i^{A_i})$ are given inductively.
The $\eta$-long form $\eta(M)$ of a simply-typed $\lambda$-term $M$ is defined by replacing every occurrence of variable $x^A$ of $M$ (free or bound) with $\eta(x^A)$. (NB: through Curry-Howard, this corresponds to taking a sequent calculus proof and expanding it so that it has only atomic axioms).
Observe that:
$\eta$-long forms are stable under substitution, and therefore under $\beta$-reduction;
two $\eta$-long $\beta$-normal forms are $\beta\eta$-equivalent iff they are equal (up to $\alpha$-renaming, of course);
computing the $\eta$-long form of a simply-typed $\lambda$-term is elementary recursive (if you don't keep the size of type annotations, the $\eta$-long form of a term may be exponentially bigger, but that is not a problem).
That Mairson's result implies Statman's is a consequence of the following:
Claim. Let $M,N$ be two simply-typed $\lambda$-terms. Then, $M\simeq_{\beta\eta}N$ iff $\eta(M)\simeq_\beta\eta(N)$.
In fact, via point (3) above, an elementary recursive algorithm for deciding $\beta$-equivalence immediately gives an elementary recursive algorithm for deciding $\beta\eta$-equivalence (the one pointed out by xavierm02).
Let us prove the claim. The right-to-left implication is trivial. Conversely, suppose that $M\simeq_{\beta\eta} N$. This obviously implies $\eta(M)\simeq_{\beta\eta}\eta(N)$. Let $P$ and $Q$ be the $\beta$-normal forms of $\eta(M)$ and $\eta(N)$, respectively. By point (1) above, both $P$ and $Q$ are $\eta$-long (because $\eta(M)$ and $\eta(N)$ are). But of course we still have $P\simeq_{\beta\eta} Q$, so by point (2) $P=Q$, which proves $\eta(M)\simeq_\beta\eta(N)$ (they have the same $\beta$-normal form). | {
"domain": "cstheory.stackexchange",
"id": 5376,
"tags": "lambda-calculus, typed-lambda-calculus"
} |
Advice on creating a new environment using OpenAI Gym | Question: I'm looking for some general advice here before I dive in.
I'm interested in creating a new environment for OpenAI gym to provide some slightly more challenging continuous control problems than the ones currently in their set of Classic Control environments. The intended users of this new environment are just me and members of a meetup group I am in, but obviously I want everyone to be able to access, install and potentially modify the same environment.
What's the easiest way to do this?
Can I simply import and sub-class the OpenAI gym.Env module, similar to the way cartpole.py does? Or do I need to create a whole new PIP package, as this article suggests?
Also, before I invest a lot of time on this, has anyone already created a cart-pole system environment where the goal is to stabilize the full state (not just the vertical position of the pendulum)? (I tried googling and couldn't find any variants on the original cart-pole-v1 but I'm suspicious as I can't be the first person to make modified versions of some of these classic control environments).
Thanks, (I realize this question is a bit open-ended) but hoping for some good advice that will point me in the right direction.
Answer: This stackoverflow post provides a good answer. It recommends creating the environment as a new package.
Note the updated link to the instructions on OpenAI.
So that's what I did and it's honestly not that much work and then you can import the environment into your script like this:
import gym
import gym_foo
env = gym.make('foo-v0')
This blog post also describes the process in a little bit more detail. | {
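To make the interface concrete, here is a minimal, self-contained sketch of the two methods a custom environment must implement. The class name FooEnv and its toy dynamics are invented for illustration, and gym itself is deliberately not imported so the sketch stands alone; a real environment would subclass gym.Env and also declare action_space and observation_space:

```python
class FooEnv:
    """Toy stand-in for a gym.Env subclass: integrate an action into a state."""

    def __init__(self):
        self.state = 0.0

    def reset(self):
        """Reset to the initial state and return the first observation."""
        self.state = 0.0
        return self.state

    def step(self, action):
        """Apply an action and return (observation, reward, done, info)."""
        self.state += action
        reward = -abs(self.state)      # best reward when the state sits at 0
        done = abs(self.state) > 10.0  # end the episode if the state drifts away
        return self.state, reward, done, {}

env = FooEnv()
obs = env.reset()
obs, reward, done, info = env.step(1.0)
```

The four-tuple returned by `step` follows the classic gym signature used elsewhere in this answer, so code written against the real `gym.Env` looks the same.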
"domain": "ai.stackexchange",
"id": 1137,
"tags": "python, open-ai, environment"
} |
Which of these commutation relations are correct? | Question: I saw, in two different references, the following two commutation relations for the fermionic field operator:
and
which one of them is correct?
1 "Stefanucci, Gianluca, and Robert Van Leeuwen. Nonequilibrium many-body theory of quantum systems: a modern introduction. Cambridge University Press, 2013." page 88.
2 "An introduction to the GW formalism" International summer School in electronic structure Theory: electron correlation in Physics and Chemistry. June 2021 Virtual lecture, Xavier Blase. Page 39.
Answer: The second expression is simplified as follows:
\begin{align}
\Big[\psi(\mathbf{x}),\psi^\dagger(\mathbf{x}') \psi^\dagger(\mathbf{x}'') \psi(\mathbf{x}'') \psi(\mathbf{x}') \Big]_- &= \bigg(\delta(\mathbf{x}-\mathbf{x}') \psi^\dagger(\mathbf{x}'') - \delta(\mathbf{x}-\mathbf{x}'') \psi^\dagger(\mathbf{x}')\bigg) \psi(\mathbf{x}'') \psi(\mathbf{x}')\\[5pt]
= \delta(\mathbf{x}-\mathbf{x}')& \psi^\dagger(\mathbf{x}'') \psi(\mathbf{x}'') \psi(\mathbf{x}')- \delta(\mathbf{x}-\mathbf{x}'') \psi^\dagger(\mathbf{x}') \psi(\mathbf{x}'') \psi(\mathbf{x}')\\[5pt]
= \delta(\mathbf{x}-\mathbf{x}') &\psi^\dagger(\mathbf{x}'') \psi(\mathbf{x}'') \psi(\mathbf{x})- \delta(\mathbf{x}-\mathbf{x}'') \psi^\dagger(\mathbf{x}') \psi(\mathbf{x}) \psi(\mathbf{x}') \qquad (*)\\[5pt]
= \delta(\mathbf{x}-\mathbf{x}') &\psi^\dagger(\mathbf{x}'') \psi(\mathbf{x}'') \psi(\mathbf{x})- \delta(\mathbf{x}-\mathbf{x}'') \psi^\dagger(\mathbf{x}') \Big(-\psi(\mathbf{x}') \psi(\mathbf{x}) \Big) \quad (**)\\[5pt]
= \delta(\mathbf{x}-\mathbf{x}') &\psi^\dagger(\mathbf{x}'') \psi(\mathbf{x}'') \psi(\mathbf{x})+ \delta(\mathbf{x}-\mathbf{x}'') \psi^\dagger(\mathbf{x}') \psi(\mathbf{x}') \psi(\mathbf{x}),
\end{align}
which is your first expression.
Here's an explanation for each step:
$(*)$ The delta functions are only non-zero if their argument is zero, so $\delta(\mathbf{x} - \mathbf{x}') \psi(\mathbf{x'}) = \delta(\mathbf{x} - \mathbf{x}') \psi(\mathbf{x}) $. Similarly, $\delta(\mathbf{x} - \mathbf{x}'') \psi(\mathbf{x''}) = \delta(\mathbf{x} - \mathbf{x}'') \psi(\mathbf{x}) $.
$(**)$ $\psi(\mathbf{x})$ and $\psi(\mathbf{x}')$ anti-commute. | {
"domain": "physics.stackexchange",
"id": 89009,
"tags": "commutator, anticommutator"
} |
Reading input from the console in F# | Question: In the process of learning F#, I wrote this code that reads lines of input from the console and stores them in a list.
As far as I can tell the code works correctly, but since I'm new to functional programming, I'm not sure if it is written as well as it could have been.
Should I have read characters iteratively instead of recursively?
Is there a better alternative to using a StringBuilder object?
Does my code follow proper functional style?
open System
/// Appends a character to the string builder and returns the new builder.
/// Characters 10 and 13 (newline and carriage return) are ignored.
/// Returns the updated StringBuilder.
let appendChar char (builder : Text.StringBuilder) =
match char with
| 10 | 13 -> builder
| n ->
let c = Convert.ToChar(n)
builder.Append(c)
/// Finishes building a string by clearing the StringBuilder
/// and appending the string to the end of the list of strings.
/// Empty strings are ignored.
/// Returns a tuple containing the lines and the new StringBuilder.
let finishString lines (builder : Text.StringBuilder) =
let s = builder.ToString()
let l = builder.Length
let newBuilder = builder.Clear()
match l with
| 0 -> (lines, newBuilder)
| _ -> (lines @ [s], newBuilder)
/// Handles a character by appending it to the builder and taking an appropriate action.
/// If char is a newline, finish the string.
/// Returns a tuple containing lines and the new builder.
let handleChar char lines builder =
let newBuilder = appendChar char builder
match char with
| 10 -> finishString lines newBuilder
| c -> (lines, newBuilder)
/// Gets all the lines from standard input until end of input (Ctrl-Z).
/// Empty lines are ignored.
/// Returns a list of strings read.
let rec getLines lines builder =
match Console.Read() with
| -1 -> lines
| c ->
let tuple = handleChar c lines builder
let newLines = fst tuple
let newbuilder = snd tuple
getLines newLines newbuilder
[<EntryPoint>]
let main argv =
Text.StringBuilder()
|> getLines []
|> ... and so on
Answer: Some small comments
let tuple = handleChar c lines builder
let newLines = fst tuple
let newbuilder = snd tuple
getLines newLines newbuilder
can become
let newLines, newBuilder = handleChar c lines builder
getLines newLines newBuilder
although, if you aren't using getLines anywhere else, there is a good argument for making it
handleChar c lines builder |> getLines
by making getLines take its argument in tupled form.
The magic number 10 appears a few times, so I would add a literal like
[<Literal>]
let newline = 10
and then you can pattern match against it.
Also,
lines @ [s]
is bad as @ is very slow. You are better to use
s::lines
and then reverse the list at the end.
Your format for reading the characters recursively should be fine as it will get optimised as a tail-call.
Of course, your entire program could be replaced by:
let rec procinput lines=
match Console.ReadLine() with
| null -> List.rev lines
| "" -> procinput lines
| s -> procinput (s::lines) | {
"domain": "codereview.stackexchange",
"id": 5726,
"tags": "functional-programming, f#"
} |
Dropdown form that shows data from 2 different tables | Question: I am very new to this and am using php and MySQLi to create a form which will create a record in a third table. This works fine but I can't help but think that there is a way to do it with a single query rather than a query for each table as I have now. The third table is called tblcars, and I will use this form to build records for that table by selecting data from these dropdowns.
<?php
//Database Login
require_once 'includes/connection.php';
$sql = ("SELECT * FROM tblmake");
$getclass = ("SELECT * FROM tblclass");
$result = $conn->query($sql);
$results = $conn->query($getclass);
?>
<html>
<head>
<title>Car Makes</title>
</head>
<body>
<h3>Car Makes</h3>
<form action="" method="get">
<select name="makeID">
<option value="">Choose Make</option>
<?php
while($make = $result->fetch_assoc()) {
echo "<option value={$make['makeID']}>";
echo $make['carmake'];
echo "</option>";
}
?>
</select>
<select name="classID">
<option value="">Choose Class</option>
<?php
while($class = $results->fetch_assoc()) {
echo "<option value={$class['classID']}>";
echo $class['carclass'];
echo "</option>";
}
?>
</select>
<input type="submit" value="Insert">
</form>
</body>
</html>
Answer: As @tim already pointed out, I don't think there is a way of combining these two SQL queries, since they aren't related to each other. Also, your variable, table, and table-column naming is a bit odd.
What I've done to your code:
Renamed the variables.
Added the HTML5 doctype.
Omitted the form's action attribute.
Disabled the first select options.
Created a short alias function to handle html output to prevent XSS attacks.
Used alternative while() syntax more suited for going in and out of PHP to output HTML.
Used <?= $x (shorthand for <?php echo $x;).
Ended up with the following code:
<?php
require_once 'includes/connection.php';
$carMakeSql = 'SELECT * FROM tblmake';
$carMakeResult = $conn->query($carMakeSql);
$carClassSql = 'SELECT * FROM tblclass';
$carClassResult = $conn->query($carClassSql);
function html($string) {
return htmlspecialchars($string, ENT_QUOTES, 'UTF-8');
}
?>
<!DOCTYPE html>
<html>
<head>
<title>Car Makes</title>
</head>
<body>
<h3>Car Makes</h3>
<form method="GET">
<select name="makeID">
<option selected disabled>Choose Make</option>
<?php while ($result = $carMakeResult->fetch_assoc()): ?>
<option value="<?= html($result['makeID']); ?>"><?= html($result['carmake']); ?></option>
<?php endwhile; ?>
</select>
<select name="classID">
<option selected disabled>Choose Class</option>
<?php while ($result = $carClassResult->fetch_assoc()): ?>
<option value="<?= html($result['classID']); ?>"><?= html($result['carclass']); ?></option>
<?php endwhile; ?>
</select>
<input type="submit" value="Insert">
</form>
</body>
</html>
(not tested) | {
"domain": "codereview.stackexchange",
"id": 13601,
"tags": "php, mysqli"
} |
Using Func instead of helper methods in a class | Question: I really like functional programming in C#, it gives you a lot of flexibility.
I am wondering if it is the right idea to use functions instead of helper methods, and to make the code more readable.
So here is what I want, I have a method that needs to find Saturday and do something with it:
public void SomeMethod()
{
var saturday = DateTime.Today;
while (saturday.DayOfWeek != DayOfWeek.Saturday)
saturday = saturday.AddDays(-1);
//rest of the code
}
The problem I am having with this code is readability. I have to scan through 3 lines of code to understand that it needs to find Saturday. Plus seeing var saturday = DateTime.Today;, it really bugs my eyes.
Of course I can write a helper method
private DateTime GetSaturday()
{
var saturday = DateTime.Today;
while (saturday.DayOfWeek != DayOfWeek.Saturday)
saturday = saturday.AddDays(-1);
return saturday;
}
Now I know exactly what the method does but my problem now is that I know that I won't be calling this method anywhere in the class so it kind of pollutes my class.
So here we are at my favorite:
public void SomeMethod()
{
Func<DateTime> GetSaturdayFunc = () =>
{
var date = DateTime.Today;
while (date.DayOfWeek != DayOfWeek.Saturday)
date = date.AddDays(-1);
return date;
};
var saturday = GetSaturdayFunc();
//rest of the code
}
Although it's a bit more code now my class is not polluted with a lot of utility methods and it is more readable.
What do you think? Is this a good way, are there any better ways?
EDIT
I could also create an extension method which in this case would be appropriate, something like
DateTime.Saturday()
but that's not what I am referring to in my question. I am asking if it is a good practice to wrap a portion of the code as a function inside that method (since that peace of the code will only be used inside that method and might not apply anywhere else), just for readability purposes.
Answer: I actually think I slightly disagree with the other answers in this situation. As you and others have mentioned, I see four possible solutions (although there may be others):
Create a local Func()
Create a method on some sort of utilitly class or on a specific class for dealing with Date manipulation i.e Calendar
Create a private method to the class (as it won't be used anywhere else anyway)
Create an extension method
Now, from your question you state that you do not intend to use the method anywhere else. Your example leads one to think it would be beneficial in other situations, as it is fairly generic. In that situation I would lean towards an extension method, as others have pointed out.
However, I often find myself in the situation where a block of code would otherwise be repeated in a function and only the variables passed in vary, dependent on local switches etc. In this situation I think it's perfectly reasonable to create a local delegate and use that in the function only.
So, to summarize. Yes I think it's acceptable. In your example, probably not, but in situations where the block of code is local only to that method and you know that is the case, then yes. You can always refactor later! | {
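As an aside, the "find the most recent Saturday" search in the question need not loop at all; it can be done with direct date arithmetic. A Python sketch purely for illustration (the helper name is mine; note that `datetime.weekday()` numbers Monday as 0 and Saturday as 5):

```python
from datetime import date, timedelta

def most_recent_saturday(today):
    """Return the most recent Saturday on or before `today`."""
    days_back = (today.weekday() - 5) % 7  # Saturday has weekday() == 5
    return today - timedelta(days=days_back)

# 2021-01-04 was a Monday, so the most recent Saturday was 2021-01-02.
print(most_recent_saturday(date(2021, 1, 4)))  # -> 2021-01-02
```

The modular step replaces the whole while-loop, which also sidesteps the local-function-versus-helper question for this particular computation.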
"domain": "codereview.stackexchange",
"id": 4263,
"tags": "c#, functional-programming"
} |
Ion Drive Propulsion Top Speed | Question: I would like to know if there is some formula / graph which would provide / show the efficiency of a certain type of thruster in space. Specifically, I'm interested in the acceleration attainable at certain speeds.
I'm writing a science fiction book and I'm trying to make it as correct as possible, fact-wise. The thruster I'm talking about is the ion drive.
Now, my book takes place in a world where fusion power is finally ours.
So, please, let us assume that we have unlimited energy so you could power a dozen huge ion drives non stop. OK, there's the question of argon/xenon fuel, let's assume we have 1 year of that.
So... the question is... what speed could you reach?
If a continuous acceleration of $10\,\frac{\mathrm{m}}{\mathrm{s}^2}$ is applied (I put that number because it would also constitute an advantage for my crew - living in Earth's gravity), that would mean that a ship would reach the speed of light in just 347 DAYS
But I know that's impossible because the EFFICIENCY of the ion drive would DECREASE as the ship's speed would approach the exhaust speed of the drive's "nozzle" (well, it doesn't have a nozzle per-se, as you can see in Wiki, but anyway...)
Please do not fear to elaborate on top of my question. Let's suppose for example that maybe the ion drives of the future have a much higher thrust/efficiency/nozzle exhaust speed.
This isn't only about currently POSSIBLE facts but also about THEORETICAL limitations which might be overcome in the future (such as fusion energy).
Answer: The principle of relativity says that we can analyze a physical situation from any reference frame, as long as it moves with some constant speed relative to a known inertial frame. Thus, the ion drive does not find it more difficult to accelerate the ship when the ship is "going fast" because the ion drive cannot physically distinguish going fast from going slow.
However, if the ion drive is going fast in the reference frame of Earth, then when the ion drive burns, say 1 kg of fuel, it picks up less speed in the Earth frame than it does in the rocket frame due to the relativistic velocity addition law.
That velocity addition law is just the angle-addition law for the hyperbolic tangent. So, suppose the ship accelerates by shooting individual ions out the back. Each time it does this, it accelerates the same amount from its own comoving frame. Then from an Earth frame, the $\textrm{arctanh}$ of the rocket's speed increases by the same amount each time.
If, as a function of the proper time $\tau$ experienced on the rocket, the acceleration of the rocket is $a(\tau)$ in a comoving frame, there is a quantity called the rapidity of the rocket which increases the way velocity does in Newtonian mechanics.
The rapidity $\theta$ will be $\theta(\tau) = \int_0^\tau a(\tau')\,d\tau'$, and the velocity is then $v(\tau) = \tanh\theta$. Specifically, if $a = g$, the velocity is
$$v(\tau) = \tanh(g\tau)$$
When one year of time has passed on the rocket, its velocity relative to Earth will be $\tanh(1.05) = 0.78$, or 78% the speed of light. The limit of the $\tanh$ function is one as $\tau \to \infty$, so the rocket never gets to light speed.
A more important limiting factor is the fuel. If the rocket carries all its fuel, then once it burns through it all, it can't go any more. Fusion isn't a way around this because by $E=mc^2$ there is a limited energy you can get from a given mass of fuel.
If a fraction $f$ of the rocket is fuel, when the fuel is all burned, the momentum of the rocket will be $\gamma m (1-f) \beta$, with $m$ the original mass. The energy of the rocket is $\gamma m (1-f)$. Similar relations hold for the fuel. The conservation of momentum and energy give
$$m = \gamma m (1-f) + E_{fuel}$$
$$0 = \gamma m \beta (1-f) + p_{fuel}$$
$E_{fuel}$ and $p_{fuel}$ are the energy and momentum of the fuel after burning. Solving for $\beta$ gives
$$\beta = \frac{-p_{fuel}}{m - E_{fuel}}$$
The minus sign shows that the fuel and rocket go opposite directions. To maximize $\beta$, we want to make $p_{fuel}$ as large as possible subject to a fixed $E_{fuel}$. This means that we want the speed of the fuel as high as possible, so assume the fuel is massless with $\beta_{fuel} = 1$ and $p_{fuel} = -E_{fuel}$. Plugging this into the previous equations and doing some algebra, I got
$$\beta = \frac{1 - (1-f)^2}{1 + (1-f)^2}$$
Even if half the rocket's original mass were fuel, it would only get to 3/5 the speed of light. | {
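Both results are easy to check numerically. A sketch plugging in SI values (I assume $g \approx 9.81\,\mathrm{m/s^2}$ and a Julian year of proper time; the function name is mine):

```python
import math

c = 299_792_458.0          # speed of light, m/s
g = 9.81                   # proper acceleration, m/s^2
year = 365.25 * 24 * 3600  # one year of proper time, s

# Velocity after one ship-year of constant 1 g acceleration: v = tanh(g*tau/c).
beta_one_year = math.tanh(g * year / c)

# Final speed for a photon-exhaust rocket whose initial mass is a fraction f fuel:
# beta = (1 - (1-f)^2) / (1 + (1-f)^2), from the answer above.
def beta_from_fuel_fraction(f):
    return (1 - (1 - f) ** 2) / (1 + (1 - f) ** 2)

print(beta_one_year)                 # ~0.77 of c after one ship-year at 1 g
print(beta_from_fuel_fraction(0.5))  # 0.6, i.e. 3/5 of c for a half-fuel rocket
```

Note that even $f \to 1$ (a rocket that is all fuel) only drives $\beta \to 1$ asymptotically, matching the conclusion that light speed is never reached.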
"domain": "physics.stackexchange",
"id": 13771,
"tags": "special-relativity, kinematics, acceleration, propulsion"
} |
How to find bias for perceptron algorithm? | Question: My question is very basic. I am starting with ML and am working on the perceptron algorithm. I successfully computed the weights for this input data:
X = [[0.8, 0.1], [0.7, 0.2], [0.9, 0.3], [0.3, 0.8], [0.1, 0.7], [0.1, 0.9]]
Y = [-1, -1, -1, 1, 1, 1]
Output_weights = [-0.7, 0.5]
But I didn't take bias into account, i.e. I assumed the discriminator line goes through the origin. Now, let's say I add another point into my training set:
new_X = [4,4]
new_Y = [-1]
How do I proceed if I want to compute the bias as well? In the first iteration, for example, I'd set the default weights to $[0,0]$ and find the first point that is incorrectly classified.
Without bias, it is easy. I compute the dot product
0.8*0 + 0.1*0 = 0
should be $-1$, so it is incorrectly classified. I update the weights to:
[-0.8,-0.1]
However, taking bias into account, I get:
0.8*0 + 0.1*0 + bias
Now, how do I update the weights and the bias? What is the procedure?
I have searched several tutorials like this or this but didn't find an answer. A link to some resource would help, too.
Answer: Rather than viewing your data as
X = [[0.8, 0.1], [0.7, 0.2], [0.9, 0.3], [0.3, 0.8], [0.1, 0.7], [0.1, 0.9]]
Y = [-1, -1, -1, 1, 1, 1]
You could instead treat the problem as
X = [[1, 0.8, 0.1], [1, 0.7, 0.2], [1, 0.9, 0.3], [1, 0.3, 0.8], [1, 0.1, 0.7], [1, 0.1, 0.9]]
Y = [-1, -1, -1, 1, 1, 1]
That is to append a $1$ in every single entry of $X$. The weight corresponding to $1$ would be the bias.
That is $Y=Ax+b$ can be written as $Y=[A, e]\begin{bmatrix}x \\b\end{bmatrix}$ where $e$ is the all one vector. | {
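A runnable sketch of this trick on the question's data (the training loop and variable names are mine, not from any particular tutorial): append a constant 1 to every input, run the ordinary perceptron update, and the weight learned for that constant is the bias.

```python
X = [[0.8, 0.1], [0.7, 0.2], [0.9, 0.3], [0.3, 0.8], [0.1, 0.7], [0.1, 0.9]]
Y = [-1, -1, -1, 1, 1, 1]

# Augment each input with a constant 1; its weight plays the role of the bias b.
Xa = [x + [1.0] for x in X]
w = [0.0, 0.0, 0.0]

for _ in range(100):                 # epoch cap; separable data converges fast
    mistakes = 0
    for x, y in zip(Xa, Y):
        if y * sum(wi * xi for wi, xi in zip(w, x)) <= 0:  # misclassified
            w = [wi + y * xi for wi, xi in zip(w, x)]      # perceptron update
            mistakes += 1
    if mistakes == 0:                # a clean pass: training is done
        break

weights, bias = w[:2], w[2]
```

No separate update rule is needed for the bias: it is updated by exactly the same `w += y * x` step, because its "input" is always 1.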
"domain": "datascience.stackexchange",
"id": 6111,
"tags": "machine-learning, regression, algorithms, perceptron"
} |
How did Mars come to have a 24 hour 39 minute day? | Question: Mercury rotates three times for every two revolutions around the Sun, apparently due to a gravitational resonance with the Sun.
Venus takes about 225 days to rotate, and rotates in the opposite direction of any of the inner planets. Maybe because its extreme nature makes it ornery.
Earth rotates once every 24 hours, a condition caused by the tidal interaction between Earth and its Moon. It's believed that the Earth was rotating about once every 5 hours before the theorized collision with a Mars-sized coorbiting object referred to as Theia.
Mars shows no signs of a similar collision. Its two moons appear to be asteroids that were captured from the asteroid belt. So how did Mars come to have a day so close to the length of an Earth day?
Answer:
"It's believed that the Earth was rotating about once every 5 hours
before the theorized collision with a Mars sized coorbiting object
referred to as Theia."
Almost. Theia did not have to be co-orbiting, just on an intersecting orbit. We have no idea what the Earth's spin was before the collision, but it is theorized that the Earth's rotation had a 5-hour period after the collision with Theia, at the time of the Moon's formation from the debris.
The fact that Mars and Earth have such a similar period is a coincidence; perhaps you are really asking why Mars is spinning so fast? Well, actually Mars is not the odd man out; Mercury and Venus are. Most planets spin fast. Exactly which spin orientation a planet ends up with is somewhat arbitrarily determined by the vagaries of the ways the planetesimals collided to form it. The fact that Venus and Uranus have unusual spin orientations is just the way things turned out.
Both Mercury and Venus used to spin much faster. Mercury's spin was tidally slowed down by the Sun and Mercury's orbit was (and still is being) driven further away by the Sun (just like the Moon and Earth: Why is the Moon receding from the Earth due to tides? Is this typical for other moons?). Eventually Mercury was held in that 2:3 resonance. Which, by the way had a certain amount of luck involved (see: Mercury’s capture into the 3:2 spin-orbit resonance as a result of its chaotic dynamics ). Venus, we are not so sure of.
The tidal force from the sun is much much less for Venus than for Mercury, but much more than for Earth. However Venus has a dense hot massive atmosphere, which can be forced into both gravitational bi-modal (two peaks) tides and thermal uni-modal (one peak) tides. The bulge lags behind the tidal forcing peak, which creates a torque by the sun to slow it down. This is fiendishly complex (See: Long term evolution of the spin of Venus - I. Theory )
P.S. Actually Phobos, and probably Deimos, are thought to have formed fairly recently (millions of years ago) from debris from a collision of Mars with a large asteroid. There is no way to capture a whole asteroid into orbits that close.
"domain": "astronomy.stackexchange",
"id": 1165,
"tags": "planet, mars, rotation, tidal-forces, planetary-formation"
} |
Tiny blue and red paint close to each other result in black or magenta? | Question: Now, if I draw two tiny spots with red and blue paint on paper and they are so close (but they do not cover each other) to each other that human eyes cannot identify there are two spots. To the color that our eyes can perceive, I come across two statement on the web.
The first is the subtractive principle. It says that, in that small area, the red paint absorbs light other than red and the blue paint absorbs light other than blue. As a result, the long and short wavelengths are both absorbed, almost no light comes into our eyes, and we see black.
The second says in that small area, red lights are reflected from red paint spot and blue lights are reflected from blue paint spot. Thus these two lights, shoulder to shoulder, come into our eyes. These two lights are so close to each other and almost one light for our eyes’ ability. So when the light come into our eyes, it is a mixture of red and blue lights, just magenta.
I have made some experiments with my scanner and printer which told me the result of the mixture is black but not magenta. But I really do not know how to refute the second statement, so I come here for help.
Your helpful answers and comments inspired me that the answer is purple and the first statement is wrong because Subtractive color only happens when two inks overlaps. Here is a image I made to show the result (Interpolation is used to compress it, so zoom in will not show you the original pixels).
Answer: Here is the color perception map:
A similar exists in the corresponding wikipedia article.
Note there is no black, black is the absence of radiation, and cannot appear in our perception by the addition of two frequencies.
It says that, in that small area, the red paint absorbs light other than red and the blue paint absorbs light other than blue.As a result, Longwave and Shortwave are absorbed and there is almost no light come into our eyes and we see black.
wherever you read this , it is wrong. We see light, which is composed by zillions of elementary particles called photons, by reflection, and light hitting a red spot to be seen as red is reflected to our retina, the same for blue. The absorbed parts are absorbed by the atoms and molecules in the paint of the spot. What was reflected when at a distance, is reflected when adjacent.
Even if adjacent, the overlap of the reflected light( the dots are assumed adjacent) at most will give a color from the color map above, not the absence of color. My main point is that the behavior of reflected frequencies of the spots is not a function of the distance between them. If they reflect red and blue apart, they will do the same when adjacent. If they overlap it is another story, which will depend on the chemistry and transparency or not of the spots. | {
"domain": "physics.stackexchange",
"id": 49981,
"tags": "visible-light, vision, perception"
} |
If I surrounded a black hole with gravitometers and observed the gravitational field as mass passed the event horizon, what would I observe? | Question: By surrounding a black hole with gravitometers I would be able to get a 3-D map of the gravitational field. If I observed this field as an object approaches the event horizon, what would I see?
Would the gravitational field allow me to track the mass on its journey from the event horizon to the singularity? Would this allow me to observe what is going on inside the black hole?
To clarify: At some point in time there are two distinct gravitational fields, one for the approaching object and one for the singularity. At a later time there is just the gravitational field for the singularity. It is the nature of the transition in the gravitational field that I am specifically interested in.
Answer: This problem can be (and has been) studied numerically. One can simulate the gravitational field of a small massive object dropping radially into a (much larger) Schwarzschild black hole. This was done, for example, in arXiv:1012.2028 by Mitsou. (This is simply the first one I found; there are more, and probably much earlier ones.)
The easiest way to represent the gravitational field is in the form of the Weyl scalar $\psi_4$, which contains (almost) all gauge invariant dynamical information about the gravitational field. Furthermore, it is convenient to write the field as a sum of (spin-weighted) spherical harmonics $Y_{lm}$. (You can think of this as the analog of a Fourier series on the sphere.) This conveniently captures the angular dependence of the gravitational field. Moreover, if one picks coordinates such that the particle falls along the coordinate axis, the field is axisymmetric, meaning that all but the $m=0$ modes are zero.
With this setup one can plot the time dependence of the remaining $l$ modes. The plot below (from arXiv:1012.2028) plots the time dependence of the $l=2$ mode. As the particle approaches the black hole the field grows monotonically (mostly not shown in this plot) until the particle reaches the horizon, after which the field "rings down", decaying exponentially while oscillating with a characteristic frequency known as a quasinormal mode (or QNM).
For higher $l$ modes the picture is qualitatively similar except that the amplitude is (much) smaller and the frequencies of the QNM ringdown higher. For example here is the $l=6$ mode from the same paper.
Note that the time scale for this to happen is quite short. In the plots, the units of time are scaled by the natural time scale for the black hole. For a 10 solar mass black hole one such unit is about 50 microseconds. | {
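That conversion is just the geometrized time unit $GM/c^3$; a quick arithmetic check for a 10 solar-mass hole (SI constants, rounded):

```python
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_sun = 1.989e30     # solar mass, kg

# One unit of "black-hole time" for a 10 M_sun Schwarzschild black hole.
t_unit = G * 10 * M_sun / c**3
print(t_unit)  # ~4.9e-5 s, i.e. about 50 microseconds
```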
"domain": "physics.stackexchange",
"id": 61910,
"tags": "general-relativity, gravity, black-holes"
} |
Getting 3,4-dimethylhex-3-ene from but-2-ene | Question: How can one get 3,4-dimethylhex-3-ene from but-2-ene ?
Using $\ce{Br_2}$ then two equivalents of $\ce{CH_3CH_2MgBr}$, I can get the corresponding alkane (3,4-dimethylhexane) but I don't see how to do it while preserving the double bond...
Answer: Your approach, which uses $\ce{Br2}$ to convert 2-butene into 2,3-dibromobutane and react that with ethylmagnesium bromide $\ce{CH3CH2MgBr}$ has two flaws:
This approach removes the alkene and there is no obvious way to get it back (there is a way)
More seriously, Grignard reagents are pretty terrible for $\text{S}_N 2$ reactions. They are very strong bases, so elimination can compete, but more importantly, they can undergo metathesis:
$$\ce{R-Br + R'-MgBr <=> R-MgBr + R'Br}$$
Assuming your approach worked to generate 3,4-dimethylhexane, you could carefully brominate it under radical conditions to produce 3-bromo-3,4-dimethylhexane, and I am sure you can get back to the alkene from there.
A better approach uses two molecules of 2-butene. Do you see how you can break your target molecule up into two 4-carbon pieces? Convert one into an electrophile and the other into a nucleophile. It might take two reactions in each case. If you choose your electrophile correctly (think about what makes a great combo with a Grignard reagents), you will form the $\ce{C-C}$ bond between the two pieces and you will still have a functional group left that you can use as a handle to direct the elimination reaction to regenerate the alkene. | {
"domain": "chemistry.stackexchange",
"id": 15220,
"tags": "organic-chemistry, synthesis, grignard-reagent, c-c-addition"
} |
Easy decision problem, hard search problem | Question: Deciding whether a Nash equilibrium exists is easy (it always does); however, actually finding one is believed to be difficult (it is PPAD-Complete).
What are some other examples of problems where the decision version is easy but the search version is relatively difficult (compared the the decision version)?
I would be particularly interested in problems where the decision version is non-trival (unlike the case with Nash equilibrium).
Answer: Given an integer, does it have a non-trivial factor? -> Non-trivially in P.
Given an integer, find a non-trivial factor, if there is one -> Not known to be in FP. | {
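For intuition, a toy sketch of the gap (trial division below is just an illustration of the search problem; it runs in time exponential in the bit length of $n$, whereas the decision side, compositeness testing, is in P via the AKS primality test):

```python
def nontrivial_factor(n):
    """Search version: return a non-trivial factor of n, or None if none exists.
    Trial division is exponential in the bit length of n -- fine for a toy,
    nowhere near the polynomial time of the decision (primality) test."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return None

print(nontrivial_factor(91))   # 7  (91 = 7 * 13)
print(nontrivial_factor(97))   # None (97 is prime)
```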
"domain": "cstheory.stackexchange",
"id": 2337,
"tags": "cc.complexity-theory, big-list"
} |
Effect of dilution on titration | Question: When we are titrating an acid/base using a pH meter, we add distilled water to immerse the pH electrode.
Won't this affect the concentration of the acid/base? I mean, isn't this dilution?
Won't this affect our calculation of the concentration at the end?
Answer: What's important to note in this situation is that the number of moles of the titrand isn't changing. This matters because all acid/base reactions go to completion, so every drop of titrant that falls into the solution will react with the titrand until there are no moles of titrand left regardless of concentration, at which point you've reached the equivalence point.
Adding a little water here or there doesn't really change the pH at the equivalence point, because ideally there are no moles of acid or base present in solution.
The reason this doesn't mess up your measurement for the concentration is because you measured the volume of titrand before the titration, and then recorded the volume of the known titrant needed to reach the equivalence point. From this information, you can calculate the moles of titrant needed to reach equivalence, which in most cases equals the number of moles of titrand present, and with the original volume of titrand noted previously, you can calculate the original concentration of the solution the sample was taken from. voila! | {
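In symbols: with $n = C_{titrant} V_{titrant}$ at equivalence and a 1:1 stoichiometry, the original concentration is $n / V_{original}$, and the water added to cover the electrode never enters the formula. A numeric sketch with made-up volumes (all values here are hypothetical):

```python
# Hypothetical monoprotic titration: 25.00 mL of unknown acid, titrated with
# 0.1000 M NaOH; 20.00 mL of titrant is needed to reach the equivalence point.
v_sample = 25.00e-3       # L, measured BEFORE any dilution water is added
c_titrant = 0.1000        # mol/L
v_titrant = 20.00e-3      # L of titrant at equivalence

moles_base = c_titrant * v_titrant  # mol of OH- delivered at equivalence
moles_acid = moles_base             # 1:1 stoichiometry at equivalence
c_sample = moles_acid / v_sample    # original acid concentration

print(c_sample)  # 0.08 mol/L, regardless of how much water was added mid-run
```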
"domain": "chemistry.stackexchange",
"id": 494,
"tags": "acid-base, experimental-chemistry, ph"
} |
How to Translate Forces Between Points? | Question: Consider a force $\vec{F}$ acting at distance $\vec{r}$ from a center of mass $CoM$ . Is it possible to somehow express the forces acting on $CoM$ from the known variables?
The torque will obviously stay the same ($\vec{\tau} = \vec{r} \times \vec{F}$) but can you express $\vec{F_{out}}$ in terms of the known variables so that the two systems — the one with only $\vec{F}$, and the one with only $\vec{F_{out}}$ are equivalent / will behave the same?
The vectors are in three dimensions.
Keep in mind, I do not know if this is a legitimate question.
Answer: Forces (as a concept) act along a line in space called the line of action. But a force vector (as three components) does not carry any location information on its own. To describe a physical force you not only need the force vector, but also the equipollent torque that force has about the measuring point.
Two force systems are equipollent only if their forces are equal and the torques about some point are equal also in the same coordinate frame.
This means that a force $\vec{F}$ acting on a point located at $\vec{r}$ from the center of mass, is equipollent to the same force $\vec{F}$ acting through the center of mass and the torque $\vec{\tau} = \vec{r} \times \vec{F}$ acting on the body.
Let us designate the point where the force acts A and the center of mass C. You can now write
$$ \begin{aligned}
\vec{F}_B & = \vec{F}_A \\
\vec{\tau}_B & = \vec{\tau}_A + \vec{r}_{A/B} \times \vec{F}_A
\end{aligned} \;\tag{1}$$
In your case, $\vec{\tau}_A =0$ since the line of action passes through A.
The above is necessary in order to apply Newton's third law. In this context, think of what force is needed at the support of a beam B when an applied force acts through A.
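A quick numerical sketch of Eq. (1) with hypothetical vectors, taking $\vec{\tau}_A = 0$ since the force's line of action passes through A:

```python
# Numerical check of the transfer formula: F_B = F_A, tau_B = tau_A + r_{A/B} x F_A,
# with tau_A = 0. The vectors are hypothetical, plain tuples with a hand-rolled cross.

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

F_A = (1.0, 2.0, 3.0)        # force acting through point A
r_AB = (0.5, -1.0, 2.0)      # position of A relative to B (the center of mass)

F_B = F_A                    # the force transfers unchanged
tau_B = cross(r_AB, F_A)     # law of moments

print(F_B, tau_B)            # (1.0, 2.0, 3.0) (-7.0, 0.5, 2.0)
```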
It is obvious that the solution has to be $\vec{F}_B = \vec{F}_A$, as well as the torque $\vec{\tau}_B = \vec{r}_{A/B} \times \vec{F}_A$ which means that forces transform between points identically, and torques obey the law of moments. | {
"domain": "physics.stackexchange",
"id": 60102,
"tags": "forces, kinematics"
} |
Understanding renormalizability | Question: I want to understand when a given theory is renormalizable and how to find renormalizable theories for different dimensions (the latter will become clearer later on).
To do so, we work through an example: the following real scalar field theory in $d$ spacetime dimensions
\begin{equation*}
\mathcal{L} = \frac 1 2 \partial_{\mu} \phi \partial^{\mu} \phi - \frac 1 2 m^2 \phi^2 - \frac{\lambda_3}{3!} \phi^3 - \frac{\lambda_4}{4!} \phi^4 - \frac{\lambda_5}{5!} \phi^5 - \frac{\lambda_6}{6!} \phi^6
\end{equation*}
Let us work with $3 \leq n \leq 6$. We first determine the coupling constants
\begin{align*}
[\mathcal{L}] = d, \quad [\partial_{\mu}]=1, \quad [m] = 1,\quad \Rightarrow \quad& [\phi^n] = \frac 1 2 n(d-2), \quad [\lambda_n] = d- \frac 1 2 n(d-2) \\
&[\phi] = \frac 1 2 (d-2), \quad [\phi^2] = d-2
\end{align*}
I learned that a theory with $[\lambda_n] < 0$ is nonrenormalizable (from M. Srednicki's beautiful book, chapter 18). Hence, from $[\lambda_n] = d- \frac 1 2 n(d-2)$, we get the following condition
\begin{equation} [\lambda_n]<0 \iff n> \frac{2d}{d-2} \tag{*} \end{equation}
From $(*)$ we see that the renormalizability of the theory depends on the powers of $\phi$ and the dimension $d$. In particular, we see that our theory is renormalizable if we work with $d = 1$ and $d = 3$. For $d > 3$ our theory becomes nonrenormalizable.
Let me ask some questions
How to check whether our theory is renormalizable when $d=2$ or not? Our formula $n> \frac{2d}{d-2}$ breaks down for such a case.
Let's say we want to find renormalizable theories for higher dimensions. Based on $(*)$, for $d=4$ we see we would need to drop $\phi^5$ and $\phi^6$ terms. For $d=5,6$ we see we would need to drop $\phi^4$, $\phi^5$ and $\phi^6$. For $d=7,8,9,10,..,10^6,...$ (I did not check further than $d=10^6$ but it seems that the relation holds for further dimensions) we see we would need to drop $\phi^3$, $\phi^4$, $\phi^5$ and $\phi^6$. Does this mean that the simplest scalar field theory $\mathcal{L} = \frac 1 2 \partial_{\mu} \phi \partial^{\mu} \phi - \frac 1 2 m^2 \phi^2$ is renormalizable for all dimensions?
PS: Please note that I am aimed at understanding under which conditions is a theory renormalizable and not solving this one in particular.
Answer: Your statement (*) is valid only for $d > 2$ (you have to take care when dividing an inequality by a non-positive number). Based on the original formula for $[\lambda_n]$, renormalizability holds for all $n$ if $d \le 2$.
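The power-counting criterion is easy to check mechanically. A small sketch using the question's formula $[\lambda_n] = d - \frac{1}{2} n(d-2)$ and the criterion $[\lambda_n] \ge 0$:

```python
# Mass dimension of the coupling lambda_n for a phi^n term in d spacetime
# dimensions, using [lambda_n] = d - n(d-2)/2 as derived in the question.
from fractions import Fraction

def coupling_dim(n, d):
    return Fraction(2*d - n*(d - 2), 2)

def renormalizable(n, d):
    # power-counting criterion: [lambda_n] >= 0
    return coupling_dim(n, d) >= 0

# For d <= 2 every power n is fine; n > 2d/(d-2) only applies for d > 2:
assert all(renormalizable(n, 2) for n in range(3, 7))
# In d = 4, phi^4 is the highest renormalizable power ([lambda_4] = 0):
assert renormalizable(4, 4) and not renormalizable(5, 4)
# In d = 3 even phi^6 survives, matching the question:
assert renormalizable(6, 3)
```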
You are correct that the "simplest" (free) scalar field theory is renormalizable for all $d$. | {
"domain": "physics.stackexchange",
"id": 77739,
"tags": "quantum-field-theory, lagrangian-formalism, renormalization, dimensional-analysis, spacetime-dimensions"
} |
"Detailed" exception class hierarchy | Question: I'm designing a modular exception class hierarchy to use in various projects. I intend to inherit from std::exception in order to be maximally compatible with any exception-handling code. A design goal is that each exception's what() method returns a string which contains a base message that depends on the object's most specific class (i.e. equal for all objects of the class), and an optional instance-specific details message which specifies the origin of the exception.
The two main goals are ease of use (as in when throwing the exceptions), as well as ensuring that writing another exception subclass is as simple and repetition-free as possible.
Base class
The base exception class I wrote is the following. It is conceptually an abstract class, but not syntactically, since I don't have any virtual method to make pure virtual. So, as an alternative, I made all constructors protected.
/**
* Base class for all custom exceptions. Stores a message as a string.
* Instances can only be constructed from within a child class,
* since the constructors are protected.
*/
class BaseException : public std::exception {
protected:
std::string message; ///< message to be returned by what()
BaseException() = default;
/**
* Construct a new exception from a base message and optional additional details.
* The base message is intended to be class-specific, while the additional
* details string is intended to be instance-specific.
*
* @param baseMessage generic message for the kind of exception being created
* @param details additional information as to why the exception was thrown
*/
BaseException(const std::string &baseMessage, const std::string &details = "") {
std::ostringstream oss(baseMessage, std::ios::ate);
if (not details.empty()) oss << " (" << details << ')';
message = oss.str();
}
public:
/// `std::exception`-compliant message getter
const char *what() const noexcept override {
return message.c_str();
}
};
The intention of the above design is that any subclass of BaseException defines a constructor that passes a class-specific base message (as the baseMessage parameter) and an optional detail-specifier (as the details parameter) as arguments to BaseException's constructor.
Errors & warnings
Since I want to be able to differentiate between general exception "types", e.g. errors vs. warnings, I've made the following two virtually-inherited bases:
class Error: public virtual BaseException {};
class Warning : public virtual BaseException {};
Examples
Here are some (project-specific) examples of implementing concrete exception subclasses with this design:
/// Indicates that a command whose keyword is unknown was issued
class UnknownCommand : public Error {
public:
static constexpr auto base = "unrecognized command";
UnknownCommand(const std::string &specific = "") : BaseException(base, specific) {}
};
/// Indicates an error in reading or writing to a file
class IOError : public Error {
public:
static constexpr auto base = "unable to access file";
IOError(const std::string &specific = "") : BaseException(base, specific) {}
};
/// Indicates that an algorithm encountered a situation in which it is not well-defined;
/// i.e., a value that doesn't meet a function's prerequisites was passed.
class ValueError : public Error {
public:
static constexpr auto base = "invalid value";
ValueError(const std::string &specific = "") : BaseException(base, specific) {}
};
// [...]
As you can see, the common pattern is
class SomeException : public Error /* or Warning */ {
public:
static constexpr auto base = "some exception's generic description";
SomeException(const std::string &details) : BaseException(base, details) {}
};
Usage example
Taking the previous IOError class as an example:
#include <iostream>
#include <fstream>
#include "exceptions.h" // all of the previous stuff
void cat(const std::string &filename) {
std::ifstream file(filename);
if (not file.is_open()) throw IOError(filename);
std::cout << file.rdbuf();
}
int main(int argc, char **argv) {
for (int i = 1; i < argc; ++i) {
try { cat(argv[i]); }
catch (std::exception &e) { std::cerr << e.what() << '\n'; }
}
}
In case of calling the program with an inaccessible file path, e.g. the path "foo", it should output
unable to access file (foo)
Answer:
Using a std::ostringstream to concatenate strings is like using a DeathStar to kill a sparrow.
Don't use std::string for the stored message. Copying it is not guaranteed to never throw, and you only need a very small sliver of its capabilities anyway.
A std::shared_ptr<const char[]> fits the bill much better, though even that is overkill.
Avoid using std::string at all, so you don't risk short-lived but costly dynamic allocations. Prefer std::string_view for the interface.
BaseException seems to be purely an implementation-help, adding storing of an arbitrary exception-message on top of std::exception. That's fine; it's only a pity it wasn't already in the base.
Still, marking the additions as protected doesn't make any sense, message should really be private, and why shouldn't the ctors be public?
If the aim of that exercise is forbidding objects whose most derived class is BaseException, just make the destructor pure virtual:
// Declaration in the class:
virtual ~BaseException() = 0;
// Definition in the header-file:
inline BaseException::~BaseException() = default;
Applying that:
template <class... Ts>
auto shared_message(Ts... ts)
-> std::enable_if_t<(std::is_same_v<Ts, std::string_view> && ...),
std::shared_ptr<const char[]>> {
auto r = std::make_shared_for_overwrite<char[]>(1 + ... + ts.size()); // C++20 name of make_shared_default_init
auto p = &r[0];
((std::copy_n(&ts[0], ts.size(), p), p += ts.size()), ...);
*p = 0;
return r;
}
template <class... Ts>
auto shared_message(Ts&&... ts)
-> std::enable_if_t<!(std::is_same_v<std::decay_t<Ts>, std::string_view> && ...),
decltype(shared_message(std::string_view{ts}...))>
{ return shared_message(std::string_view{ts}...); }
class BaseException : public std::exception {
decltype(shared_message()) message;
public:
const char* what() const noexcept final override
{ return &message[0]; }
virtual ~BaseException() = 0;
template <class... Ts, class = decltype(shared_message(std::declval<Ts&>()...))>
BaseException(Ts&&... ts)
: message(shared_message(ts...))
{}
};
inline BaseException::~BaseException() = default; | {
"domain": "codereview.stackexchange",
"id": 34431,
"tags": "c++, object-oriented, inheritance, exception"
} |
Doing work against the friction force, temperature, and heat | Question: In Feynman's his introduction to the second law of thermodynamics, he said,
We know that if we
do work against friction, say, the work lost to us is equal to the heat produced.
If we do work in a room at temperature $T$, and we do the work slowly enough,
the room temperature does not change much, and we have converted work into
heat at a given temperature.
I suppose that if the temperature doesn't change, then the work done must go into the potential energy part of the internal energy of the gas in the room, but then why must we do the work slowly?
The last statement is also quite confusing. As I understand, work and heat are essentially the same thing: a mechanism of transferring energy from one system to another. So is it just an abuse of language when he says, "we have converted work into heat"?
Answer: Feynman wants to talk about constant temperature, because that simplifies the physics: "we have converted work into heat at a given temperature"
If friction transfers a lot of energy quickly, the material where friction is acting rises in temperature: Those parts get hot, and it's no longer a constant-T situation.
So you have to have the friction act slowly, giving time for the energy lost to friction to spread out into the room and eventually into the environment of the room. Since those are quite large, their temperature doesn't change significantly when the energy is added across all of that material.
As to work and heat being the same: No, they're not. That's the point. Later, in the book, you'll see more about that.
His "converted work into heat at a given temperature" is saying that work, the mechanical transfer of energy, has been changed into heat, the thermal transfer of energy. (Later you'll learn that, while it can go this way, going the other way is more complicated)
If you find yourself thinking that Feynman is committing an "abuse of language", that's a good clue that you're acting on an incorrect assumption or a misunderstanding. His knowledge of physics and how to explain it were both above average. | {
"domain": "physics.stackexchange",
"id": 63786,
"tags": "thermodynamics, work, friction, dissipation"
} |
Understanding the behavior of light/photons inside a Laser | Question: I am trying to establish a model inside my head of how light behaves but find it hard with all the seemingly contradicting information.
For example, electrons inside a Laser are raised to a higher energetic state and consequently emit photons of a specific wavelength when falling back to their ground state.
Am I not supposed to think of a photon created in such a process as traveling in all possible directions, with the wave function giving me the probability of finding that photon in a specific volume of space at a given time-interval?
In the above description, photons do not have any specific direction at all. So I would like to ask if we can create a single photon in a vacuum which then travels in a specific direction?
If not, then is the direction of a photon merely an "illusion" created by overlapping wave functions which result in very low probabilities of finding a photon in certain volumes of space?
When photons exit a laser, can I treat them as single photons, hence, imagine the wave-function for each photon extending in all directions in space, then overlap all wave-functions of every photon and calculate the probabilities of finding a photon in a certain volume of space at a given time-interval, or is there more going on I have to account for?
Because I cannot see why a laser beam would stay so focused for such long distances just by the cancellation of waves in all directions other than the direction the laser is pointing towards.
Would a single photon exiting the laser have any higher probability to be found on the straight line the laser is pointing towards later than in any other direction? If yes, why?
I hope that someone can help me to create a better model inside my head of how light really works.
Answer: You are correct in one thing: if an atom in an isotropic medium spontaneously emits a photon, it can do so in any direction at all, and the overall emission will be evenly spread over the unit sphere. However, lasers work using stimulated emission, which is slightly different: if an atom is excited, you can induce it to emit its energy by shining an initial photon at it, in which case the atom will emit its photon in the same direction as the original one, and with the same phase (so they will always interfere constructively). The way this works is by having some atoms emit spontaneously, and then by a snowball effect pile the photons of all the other atoms on top of the first photon, so they all have the same direction and phase.
Of course, there's nothing stopping the first photon from going in any old random direction, so here is where the laser cavity comes in. Except in very specific cases, you always put the gain medium in between two different mirrors facing each other. You then engineer the situation so that if a photon is spontaneously emitted, the probability of other atoms emitting on top of it is fairly small for a single pass of the gain medium. If a photon is emitted into the cavity mode, towards one of the mirrors, it will bounce around a large number of times, and it will have multiple opportunities to stimulate further emission. If it's emitted in an unwanted mode, on the other hand, it just leaves the cavity and doesn't excite much further emission, so it doesn't drain the excited atom population by much.
The photons then leave the laser via one of the mirrors, which is not perfectly reflective. The output beam is essentially the same as the cavity mode as it reaches the output mirror, which is a small spot that's been quite carefully collimated. One way to think about this is to 'unfold' the cavity, and think of it as a chain of gain media in a line, separated by apertures the size of the mirrors. Photons emitted by the first gain medium need to pass through all the apertures to go through the final one, so you select on only a small set of directions. This set of directions does have a small spread, of course, and any laser beam will show some spreading after a while, but if you're careful (if you engineer your cavity so that any one photon will stay inside the cavity for a very long time) then this spreading can be very small.
Your final question is very interesting:
Would a single photon exiting the laser have any higher probability to be found on the straight line the laser is pointing towards later than in any other direction? If yes, why?
Photons don't really have wavefunctions of their own. More fundamentally, photons are discrete excitations of the underlying classical wave mode, and it is this classical mode which governs the spatial distribution. For a laser, the classical mode is an initial collimated mode, which then spreads due to diffraction. All of the photons are on this mode! Thus, if you have some other mode and you try to detect photons on it, you won't get anything there (unless your test mode has significant overlap with the laser mode). | {
"domain": "physics.stackexchange",
"id": 21349,
"tags": "optics, visible-light, photons, wavefunction, laser"
} |
distinguishing redshift from star's color | Question: How do scientists find out the true color of a star's light as well as the true Doppler shift (relative speed)? It seems to me you wouldn't know how to separate out those two values.
Answer: Spectroscopy is done on the starlight. Say we look for hydrogen lines. We know where they'll appear in the spectrum of laboratory hydrogen. If the starlight is redshifted then all the hydrogen lines will be shifted by the same amount, as can be seen in an example spectrum.
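This is exactly what makes the separation possible: a redshift multiplies every wavelength by the same factor (1 + z), while the star's intrinsic color does not. A sketch (hypothetical shift; standard Balmer rest wavelengths in nm) showing that every line yields the same z:

```python
# Every spectral line is shifted by the same factor (1 + z), so the z recovered
# from different hydrogen lines must agree; rest wavelengths are standard
# laboratory Balmer values (nm), the shift z_true is hypothetical.

rest = {"H-alpha": 656.28, "H-beta": 486.13, "H-gamma": 434.05}

z_true = 0.10                       # hypothetical shift
observed = {name: lam * (1 + z_true) for name, lam in rest.items()}

# work back: z = lambda_observed / lambda_rest - 1, line by line
z_measured = {name: observed[name] / rest[name] - 1 for name in rest}
print(z_measured)   # the same z from every line -> a shift, not the star's color
```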
Once the shift is quantified, we can work back what the unshifted spectrum of the star would look like. | {
"domain": "physics.stackexchange",
"id": 64796,
"tags": "stars, doppler-effect"
} |
Rolling dice in a method chain | Question: I've been writing code for nearly 40 years now and am still not too old to learn and understand new things. Right now, my focus is a bit of OO with functional programming combined and C# with Linq is an excellent way to fool around with this.
I'm also practicing deferred execution and aggregating to make it really interesting. So I designed a dice roller. And to start, I begin by specifying various dice types:
public enum DiceType { D2 = 2, D3 = 3, D4 = 4, D6 = 6, D8 = 8, D10 = 10, D12 = 12, D20 = 20, D100 = 100 }
Very basic, actually. Each enum value just indicates the number of sides a die has. These values are very common, with the D2 (coin flip) and D6 (regular die) being the most well-known.
Next is a static class that can basically roll any dice, and will do this until infinity:
public static class Dice
{
public static IEnumerable<int> Roll()
{
var rnd = new Random();
while (true) { yield return rnd.Next(1, 7); }
}
public static IEnumerable<int> Roll(this DiceType die)
{
var rnd = new Random();
while (true) { yield return rnd.Next((int)die) + 1; }
}
public static IEnumerable<int> RollX(this IEnumerable<int> rolls, int x = 3)
{
while (true) { yield return rolls.Take(x).Sum(i => i); }
}
}
The Roll() method will be the stereotype die with six sides. Hardcoded, as someone might change the value for D6 to 8 and cause problems. This method will always roll a six-sided die.
But Roll(this DiceType die) is more interesting as it basically extends the DiceType enumeration. And it will roll that specific die into infinity so if I use DiceType.D20.Roll() then I will get an endless list of values between 1 and 20...
The third method RollX() is an even more interesting one. It will use the previous dice enumerator and take an X amount of values to sum them, before taking the next X amount of dice. By default, it will roll 3 times. And DiceType.D6.Roll().RollX(5) will roll 5 dice, each resulting in a value between 1 and 6, and add them to get values between 5 and 30, and a bell curve.
Next, two clock-related aggregators that will limit the amount of time allowed to keep rolling:
public static IEnumerable<T> MaxTime<T>(this IEnumerable<T> source, long totalMs)
{
var sw = new Stopwatch();
foreach (var item in source)
{
if (!sw.IsRunning) { sw.Restart(); }
if (sw.ElapsedMilliseconds < totalMs) { yield return item; }
else { yield break; }
}
}
public static IEnumerable<T> MaxTicks<T>(this IEnumerable<T> source, long totalTicks)
{
var sw = new Stopwatch();
foreach (var item in source)
{
if (!sw.IsRunning) { sw.Restart(); }
if (sw.ElapsedTicks < totalTicks) { yield return item; }
else { yield break; }
}
}
The stopwatch class can be found in System.Diagnostics.
The principle is simple: it will do a deferred execution until the time is up, which can either be a matter of seconds or a matter of clock ticks.
As a final step, I want an aggregator to just add all the results:
public static SumTotal Total(this IEnumerable<int> data)
{
var result = new SumTotal();
foreach (var i in data) { result.Add(i); }
return result;
}
And this introduces this class:
public class SumTotal
{
public long Sum { get; set; } = 0;
public long Total { get; set; } = 0;
public long Min { get; set; } = long.MaxValue;
public long Max { get; set; } = long.MinValue;
public SumTotal Add(long value)
{
Total++;
Sum += value;
if (Max < value) { Max = value; }
if (value < Min) { Min = value; }
return this;
}
public override string ToString() => $"For {Total:#,##0} items, total of {Sum:#,##0}, average of {Sum * 1.0 / Total:#,##0.00} between {Min:#,##0} and {Max:#,##0}.";
}
So I get a total containing the number of rolls, the minimum and maximum values rolled and a total of all values and can calculate the average value rolled.
Now, to use it, all I have to do is define a rolling queue like this:
public static IEnumerable<int> D12Queue = DiceType.D12.Roll();
This generates a static variable from where I can just take an X amount of rolls. Simply using D12Queue.Take(5) will give me 5 random values between 1 and 12. This makes it practical to use in games where you'd need to emulate a dice roll.
So, is this a practical solution? I'm just training and learning new skills so I haven't thought about that aspect...
The latter dice queue is what my main focus will be. I can use this:
var queue = DiceType.D6.Roll(); to generate a queue. And then:
var roll = queue.First(); which will give a new dice roll with every call.
var yahtzee = queue.Take(5); to roll 5 dice and see if I rolled a Yahtzee.
var highRoll = queue.RollX(5).Take(20).Max(); Which will roll 5 dice repeatedly for 20 times and return the highest value.
So basically, I made dice rolling part of Linq queries so I don't need to write loops if I need multiple rolls. I just take the number of required rolls from the queue.
Answer: Let me just repeat what @Ron Klein said: don't put your dice into an enum; the real set of all possible dice is much greater than what your enum offers.
Let me show how I think the Dice class would be better implemented:
class Dice : IEnumerable<int>
{
private int Sides;
private int Shift;
private Random Generator;
public Dice(int sides, int shift, Random generator)
{
Sides = sides;
Shift = shift;
Generator = generator;
}
public Dice(int sides, int shift, int seed) : this(sides, shift, new Random(seed)) {}
public Dice(int sides, int shift) : this(sides, shift, new Random()) {}
public Dice(int sides, Random generator) : this(sides, 1, generator) {}
public Dice(int sides) : this(sides, 1) {}
public Dice(Random generator) : this(6, generator) {}
public Dice() : this(6) {}
public int Roll()
{
return Generator.Next(Sides) + Shift;
}
public IEnumerator<int> GetEnumerator()
{
while (true) yield return Roll();
}
IEnumerator IEnumerable.GetEnumerator()
{
return GetEnumerator();
}
}
Now you can create whatever dice you want:
var d20 = new Dice(20);
var coin = new Dice(2);
var cube = new Dice(6);
var uncommon = new Dice(7);
var d0to5 = new Dice(6, 0);
var d15to30 = new Dice(16, 15);
You can roll once
var roll = d20.Roll(); //1-20
Or you can roll N times
var sum = uncommon.Take(3).Sum(i => i); //3-21
Or you can even roll as many as the CPU can do in a certain amount of time/ticks
var rolls = cube.MaxTicks(250);
Also, as mentioned by others, the Random class needs a seed, and the default constructor seeds from the current time. My Dice class implementation allows this default Random but also allows injecting a Random instance seeded in a way that is out of scope of the Dice class.
var d20 = new Dice(20, new Random(myseed));
I have actually added more constructors to simplify this, and I have also added a Shift property to the Dice, as it seemed a waste to have +1 as a constant when it might be possible to have dice starting with zero or any other number.
var d20 = new Dice(20, 1, myseed);
Anyway I think that the MaxTicks and MaxTime methods are quite controversial. One thing is they break SRP.
Good rule of thumb though is that methods should not instantiate objects and do some work (other that the work needed to instantiate those objects). They should do one or the other, but not both.
Another thing is I'm not sure they will fulfill their purpose, as mentioned in the comments under the other answer.
"domain": "codereview.stackexchange",
"id": 36561,
"tags": "c#, functional-programming, random"
} |
What is the work done when pressure fully changes in thermodynamics? | Question: $$dW=-pdV$$
Here it seems the pressure is taken as constant. But what would change when the pressure is variable?
Answer: Under very small changes in $V$, the pressure can be taken as approximately constant over that interval, so it is possible that this equation will sometimes be used taking $p$ constant.
However, in the general case you are 100% right that $p$ is not constant. The trick, then, is knowing how $p$ depends on $V$ so you can calculate $W$ with the integral $$W = -\int p dV.$$
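For instance, a numerical sketch (hypothetical values) comparing this integral against the closed form for an isothermal ideal gas, where p(V) = nRT/V:

```python
# When p varies with V you must integrate. For an isothermal ideal gas
# p(V) = nRT/V, so W = -nRT ln(V2/V1); check that against a numerical integral.
# The amounts (1 mol, 300 K) are hypothetical.
import math

n, R, T = 1.0, 8.314, 300.0
p = lambda V: n * R * T / V

V1, V2, steps = 1.0, 2.0, 100_000
dV = (V2 - V1) / steps
# midpoint rule for W = -integral of p dV
W_numeric = -sum(p(V1 + (i + 0.5) * dV) * dV for i in range(steps))
W_exact = -n * R * T * math.log(V2 / V1)
print(W_numeric, W_exact)   # both about -1728.85 J (work done ON the gas is negative)
```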
An especially good example of this is for an ideal gas, where you know that $PV = NkT$, so $P = NkT/V$. There are two (simple) cases here: isothermal expansion, where $T$ is held constant as $V$ increases, and adiabatic expansion, where no heat is allowed to enter the system. I suggest looking into how the work is derived in these situations; this is certainly how I came to understand this slightly subtle issue. | {
"domain": "physics.stackexchange",
"id": 74070,
"tags": "thermodynamics"
} |
Acceleration vector for an object moving in an elliptic trajectory | Question: I am interested in deriving what the radial and tangential components of the acceleration vector should be for an object following an elliptical trajectory centered on the origin, in which the speed $v$ (modulus) is related to the distance of the object from the origin ($r$) by $v=kr^\beta$, where $k$ and $\beta$ are constants. I tried to find information online, but much of the information about elliptical motion is devoted to objects subject to gravitation.
I know that the components of acceleration in polar coordinates are
$$a_r = \frac{d^2r}{dt^2}-r \left(\frac{d \theta}{dt}\right)^2 $$
and $$a_\theta = r\frac{d^2\theta}{dt^2}+2 \frac{dr}{dt} \frac{d\theta}{dt}$$ but I do not know how to implement the constraint $v=kr^\beta$ there or whether that is the way to go.
Answer: Take the ansatz
\begin{align}
\vec{r}(\theta(t)) = \begin{pmatrix}
\rho_1\cos(\theta(t))\\
\rho_2\sin(\theta(t))
\end{pmatrix}
\end{align}
with a yet unknown scalar smooth function $\theta$.
Solve the ode
$$
k \bigg(\left|\vec{r}\big(\theta(t)\big)\right|\bigg)^{\beta} = |\vec{r}'(\theta(t))| \cdot\dot\theta(t)
$$
for $\theta$ or explicitely
$$
\dot\theta(t) = \frac{k\cdot\left|\vec{r}(\theta(t))\right|^{\beta}}{|\vec{r}'(\theta(t))|}
$$
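This ODE has no general closed form, but it integrates easily. A stdlib-only sketch (hypothetical semi-axes and constants, RK4 stepping) that enforces the speed constraint $|v| = kr^\beta$ by construction:

```python
# Integrate theta'(t) = k*|r(theta)|^beta / |r'(theta)| with RK4, then confirm
# the speed constraint |v| = k r^beta along the orbit. The ellipse semi-axes
# rho1, rho2 and the constants k, beta are hypothetical.
import math

rho1, rho2 = 2.0, 1.0
k, beta = 0.3, 1.5

def r_mag(th):                   # |r(theta)|
    return math.hypot(rho1 * math.cos(th), rho2 * math.sin(th))

def rp_mag(th):                  # |dr/dtheta|
    return math.hypot(-rho1 * math.sin(th), rho2 * math.cos(th))

def f(th):                       # d(theta)/dt from the ODE above
    return k * r_mag(th) ** beta / rp_mag(th)

th, dt = 0.2, 1e-4
for _ in range(10_000):          # classic RK4 steps
    k1 = f(th); k2 = f(th + 0.5*dt*k1); k3 = f(th + 0.5*dt*k2); k4 = f(th + dt*k3)
    th += dt * (k1 + 2*k2 + 2*k3 + k4) / 6

speed = rp_mag(th) * f(th)       # |v| = |r'(theta)| * theta_dot
print(speed, k * r_mag(th) ** beta)   # equal, by construction of theta_dot
```

The acceleration then follows by differentiating $\vec{r}'(\theta(t))\,\dot\theta(t)$ once more, e.g. with finite differences on the integrated trajectory.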
Then use the found $\theta$ to calculate
$$
\vec{a}(t)=\frac{d}{dt}\left(\vec{r}'(\theta(t)) \dot\theta(t)\right).
$$ | {
"domain": "physics.stackexchange",
"id": 12585,
"tags": "kinematics, satellites"
} |
Organize TSNE data into grid | Question: I have some data reduced by TSNE into a 2D representation, which shows clear spatial features.
However, I'd like to format this into a grid – not just snapping data to the nearest grid square but spreading everything out to fill up a grid, preserving (as much as possible) the existing spatial relationships.
So far, I've only found this article, which might be close to what I need. This process might already have a name and I'm just one step from an easy Google solution, but at the moment I'm stuck!
Answer: There seem to be a few options, but I found rasterfairy which is very easy to install and use. Has the added bonus of being able to fit to a rectangular grid, but also circular and other arbitrary shapes.
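Libraries like rasterfairy solve this properly as an optimization over all point-to-cell assignments. As a very crude stdlib sketch of the basic idea (hypothetical points): sort the embedding into rows by y, then order each row by x, so every point gets its own cell while rough spatial order survives:

```python
# Crude illustration of grid-fitting a 2D embedding: bucket points into rows
# by y, then order each row by x. Real tools (e.g. rasterfairy) do much better
# by treating this as an assignment problem.

def points_to_grid(points, cols):
    by_y = sorted(points, key=lambda p: p[1])
    rows = [by_y[i:i + cols] for i in range(0, len(by_y), cols)]
    grid = {}
    for gy, row in enumerate(rows):
        for gx, p in enumerate(sorted(row, key=lambda p: p[0])):
            grid[p] = (gx, gy)          # each point -> its own grid cell
    return grid

pts = [(0.1, 0.9), (0.8, 0.85), (0.15, 0.2), (0.9, 0.1)]
print(points_to_grid(pts, cols=2))
```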
A very nice IPython notebook example: https://github.com/Quasimondo/RasterFairy/blob/master/examples/Raster%20Fairy%20Demo%201.ipynb
And some example results: | {
"domain": "datascience.stackexchange",
"id": 1412,
"tags": "graphs, tsne"
} |
Why the photocurrent doesn't increase after the saturation limit in the photoelectric effect on increasing voltage | Question: As per what I have learned, current (i) is the rate at which charge flows through a cross section. Let's assume we have 10 balls that reach the other end while passing through some cross section. If 10 balls pass the cross section in 1 sec, let the current be i1, and if 10 balls pass the cross section in 0.1 sec, let the current be i2; then i2 > i1.
I know about the stopping potential and how we reach the saturation current (if even an electron with negligible velocity is attracted toward the collector plate, this maximum possible current is the saturation current, and it is directly proportional to intensity).
And after the saturation limit, if we further increase the potential difference between collector and emitter, the electrons arrive at greater speed (and I also know that the number of photoelectrons won't increase or decrease, as frequency and intensity are kept constant).
Now the question:
Assume 10 photoelectrons emitted by the plate (at the saturation limit) reach the collector. Take the collector plate as the cross section, so that the 10 photoelectrons cross it in 1 sec (just assume) when the potential difference is, say, V1, giving photocurrent i1.
If we increase the potential difference to V2, the 10 photoelectrons now cross the collector plate in 0.1 sec (they have more kinetic energy and reach it earlier than in the former case), giving photocurrent i2.
So it seems V2 > V1 implies i2 > i1 even after saturation.
What did I miss there? I know I am wrong but can't figure out where.
Answer: Imagine buses are running between two cities. Every day 10 buses start from A and reach B. So the 'bus current' is 10 buses per day.
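The analogy is easy to check numerically (a small sketch with made-up numbers): the steady-state arrival rate is fixed by the departure rate, not by the travel speed.

```python
# Buses leave A at a fixed rate; doubling their speed shortens the trip,
# but the interval between consecutive arrivals (hence the 'current') is
# unchanged. All numbers are hypothetical.

def arrival_times(n_buses, departure_interval, distance, speed):
    return [i * departure_interval + distance / speed for i in range(n_buses)]

slow = arrival_times(10, departure_interval=1.0, distance=100.0, speed=50.0)
fast = arrival_times(10, departure_interval=1.0, distance=100.0, speed=100.0)

# current = buses per unit time = 1 / interval between consecutive arrivals
rate_slow = 1.0 / (slow[1] - slow[0])
rate_fast = 1.0 / (fast[1] - fast[0])
print(rate_slow, rate_fast)   # identical; speed only changes when the FIRST bus arrives
```

In the photoelectric case the 'departure rate' is the emission rate set by the light intensity, which the accelerating voltage does not change.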
Suppose the speed of all buses is doubled. The journey time will be reduced. The 'bus current' will remain the same. The separation between buses will increase. | {
"domain": "physics.stackexchange",
"id": 87601,
"tags": "photons, potential, photoelectric-effect"
} |
How to delete a node from a BST with 2 children? | Question: I googled, read several tutorials and watched several BST node deletion algorithm explanations before posting this question.
I've found 4 algorithms to remove the node with 2 children from Binary Search Tree:
1) Find the smallest node from right sub tree and replace it with the node which we want to delete.
2) Find the biggest node from left sub tree and replace it with the node which we want to delete.
3) Find the deepest leftmost node from the right sub tree and replace it with the node which we want to delete.
4) Find the deepest rightmost node from the left sub tree and replace it with the node which we want to delete.
Apparently, none of those algorithms works for the following use case (most likely because I am missing or misunderstanding something). The use case is to remove element 5 from the following tree:
For the first algorithm we would choose element 6 and would lose its right sub tree. For the second algorithm we would choose element 4 and would lose its left sub tree. For the third algorithm we would choose element 7, which would violate the BST rules. For the fourth algorithm we would choose element 3, which would also violate the BST rules.
What is the right algorithm for such a use case?
Answer: You are right: in a BST, the case where a node has two children is usually reduced to the case where a node has at most one child.
Consider a node with two children. Swap this node with its predecessor in the BST (that is, the rightmost node in the left subtree), or, completely symmetrically, swap it with the successor (the leftmost node in the right subtree).
The node to be removed now has at most one child (in case we went for the predecessor it cannot have a right child).
If it has no children we delete the node, and we are done.
In case the node has a single child, we remove the node and replace it by its subtree. We again obtain a BST.
In your example we want to remove 5 and swap it with its predecessor 4. Now 4 is in the root, and 5 has left child 3. Finally we remove 5 and put its child 3 on its place. Now 3 is the right child of 2.
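The deletion-by-copying procedure described above can be sketched as follows (an illustrative Python implementation, not part of the original question; all names are made up):

```python
class Node:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def insert(root, key):
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
    else:
        root.right = insert(root.right, key)
    return root

def delete(root, key):
    if root is None:
        return None
    if key < root.key:
        root.left = delete(root.left, key)
    elif key > root.key:
        root.right = delete(root.right, key)
    elif root.left is None:          # zero or one child: splice the node out
        return root.right
    elif root.right is None:
        return root.left
    else:                            # two children: copy the predecessor's key
        pred = root.left             # predecessor = rightmost node of left subtree
        while pred.right is not None:
            pred = pred.right
        root.key = pred.key
        root.left = delete(root.left, pred.key)  # predecessor has at most one child
    return root

def inorder(root):
    return [] if root is None else inorder(root.left) + [root.key] + inorder(root.right)
```

On the example tree, deleting 5 leaves its predecessor 4 at the root and re-attaches 3 as the right child of 2, exactly as described.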
PS. This method is sometimes called "deletion by copying", because values are copied from one node to another. Some do not approve of this process, for two reasons. First, copying might be an expensive operation, and second, references are lost (if another data structure points to a node with a certain value). Instead, one only changes the pointers. | {
"domain": "cs.stackexchange",
"id": 15218,
"tags": "trees, binary-trees, search-trees"
} |
Prove that there is no $DFA$ with less than $2^n$ states that accepts $L_n =\{w\in\Sigma^*_n\ |\ \exists\sigma\in\Sigma_n\ :\ ⋕_\sigma(w)=0\}$ | Question: I've faced this question in my homework and I couldn't provide an elegant proof for it.
We're given $\Sigma_n=\{1,\dots,n\}$ and a language: $$L_n =\{w\in\Sigma^*_n\ |\ \exists\sigma\in\Sigma_n\ :\ ⋕_\sigma(w)=0\}$$
That is, the language of words over $\Sigma_n$ that do not contain all the letters of the alphabet.
Question: We're asked to prove that there is no $DFA$ with less than $2^n$ states that accepts $L_n$.
Note: It's given that there is $NFA$ with $(n+1)$ states that accepts $L_n$.
Answer: You can use the Myhill-Nerode theorem for the given task. You must provide at least $2^n$ prefixes $p_i\in \Sigma_n^*$ belonging to different equivalence classes, i.e., such that for every two prefixes $p_i$, $p_j$ there exists a suffix $s_{i,j}$ such that $p_i s_{i,j}\in L_n$ and $p_j s_{i,j}\notin L_n$, or vice versa.
Given your task, such prefixes correspond to all possible subsets of the alphabet $\Sigma_n$. There are $2^n$ such subsets, and the strings containing all the letters from the chosen subset $S_i$ and no other letters definitely satisfy the Myhill-Nerode equivalence class criterion. Given two words $p_i$, $p_j$ corresponding to the subsets $S_i$ and $S_j$, s.t. $S_i\not\subseteq S_j$, the suffix $p_k$ corresponding to the set $S_k=\Sigma_n\setminus S_i$ discerns $p_i$ and $p_j$, since $p_i p_k\not\in L_n$ and $p_j p_k\in L_n$. | {
"domain": "cs.stackexchange",
"id": 19764,
"tags": "formal-languages, regular-languages, finite-automata"
} |
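The Myhill-Nerode argument in the preceding answer can be checked by brute force for small $n$. The sketch below (illustrative Python; all names are made up) builds one canonical prefix per subset of $\Sigma_3$ and verifies that any two of the $2^3$ prefixes are separated by the complement of one of their letter-sets, as in the proof:

```python
from itertools import combinations

n = 3
sigma = frozenset(range(1, n + 1))

def in_L(word):
    # a word is in L_n iff at least one letter of the alphabet never occurs in it
    return set(word) != sigma

def all_subsets():
    return [frozenset(c) for r in range(n + 1) for c in combinations(sorted(sigma), r)]

def word_of(subset):
    # canonical prefix containing exactly the letters of `subset`
    return tuple(sorted(subset))

def distinguishing_suffix(p, q):
    # try the complements of the two letter-sets as candidate separating suffixes
    for s in (sigma - set(p), sigma - set(q)):
        suffix = tuple(sorted(s))
        if in_L(p + suffix) != in_L(q + suffix):
            return suffix
    return None
```

Since every pair of the $2^n$ prefixes is distinguishable, any DFA for $L_n$ needs at least $2^n$ states.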
Pi electron stacking, how does it work? | Question: I've come across the term base-pair stacking (with reference to B-DNA) in my school text book, and I had posted a question in that regard on Bio.SE.
I've also seen a similar (albeit brief) version of my question on Chem.SE.
I looked up the term online, and after checking out a couple of links, I'm led to believe that this term stacking refers to the forces of attraction between aromatic rings on account of the de-localized pi-electron clouds on either side of their (the aromatic rings) planes.
Now I'm not sure if I skimmed over some important details when I read up on 'stacking', but I'm unable to find a simple and solid explanation as to how these interactions are brought about.
If 'stacking' is the interaction between pi-electron clouds, then shouldn't these interactions be repulsive (on account of 'like-charges repel')? But I'm told they're attractive interactions.
I thought about it for a bit, and I realized that although the pi-electrons are de-localized all over the plane of the aromatic ring, they can only be found at one particular location at any given instant of time, resulting in a 'partial' negative charge being formed there. The other regions of the electron cloud probably acquire a 'partial' positive charge, since the electron is not present there at that instant. It's these opposite partial charges that result in the attraction. (Essentially the same idea that governs London dispersion forces.)
But the thing is, I'm not sure if my 'hypothesis' is correct in the present case. If this isn't what happens, then I've no idea what makes 'stacking' an attractive interaction.
So my question stands: How exactly does stacking give rise to attractive interactions between aromatic rings? Why don't they result in repulsive interactions?
Answer: You're on the right track. The stacking attraction occurs when two molecules with $\pi$ orbitals come face to face with one another; typically their separation is 0.34 nm. However, for the interaction to be attractive, the two molecules have to be displaced slightly; this prevents direct electron repulsion between the $\pi$ electrons on one molecule and those on the other. The attraction is in general between the $\pi$ electrons on one molecule and the $\sigma$ framework on the other$^1$, and vice versa.
The model assumes that a $\pi$ orbital can be split into three charged parts, separated by a distance $\delta$. The lobes of the orbital have charge -1/2 each, and at the position of the atom's nucleus is placed a charge of +1. Thus there are three types of interaction, $\sigma-\sigma$, $\pi-\sigma$ and $\pi-\pi$, and three positions of charge, making 9 interactions in all for each pair of atoms. The interaction is taken to be electrostatic, i.e. Coulombic; each term has the form $E_{ij} = q_iq_j/r_{ij}$ for atoms i and j, and the total is summed over all atoms.
In addition a van der Waals term is added to the calculation of the total interaction; this has the Buckingham-type form $E_{vdw} = \sum_{ij}\left(A_{ij}\exp(-\alpha_{ij}r_{ij})-C_{ij}/r_{ij}^6\right)$, where A, $\alpha$ and C have standard values$^2$. In solution the value of this energy is much reduced vs. its value in the crystal, making the $\pi-\pi$ interaction the dominant one.
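The three-charge Coulomb sum described above can be sketched numerically. The code below (an illustrative Python toy model, not the actual Hunter-Sanders program; the ring geometry, the lobe offset, and the units are assumptions for the example) places +1 at each nucleus and -1/2 in each $\pi$ lobe, then sums the 9 point-charge terms per atom pair over two idealized six-membered rings:

```python
import math

DELTA = 0.47  # assumed offset of each pi lobe above/below the nucleus (angstrom)

def atom_charges(x, y, z):
    # three-charge atom in the spirit of Hunter-Sanders:
    # +1 at the nucleus, -1/2 in each pi lobe
    return [(x, y, z, 1.0), (x, y, z + DELTA, -0.5), (x, y, z - DELTA, -0.5)]

def ring(z, offset_x=0.0, radius=1.39, n_atoms=6):
    # idealized six-membered carbon ring lying in the plane at height z
    return [(offset_x + radius * math.cos(2 * math.pi * k / n_atoms),
             radius * math.sin(2 * math.pi * k / n_atoms), z)
            for k in range(n_atoms)]

def coulomb_energy(ring_a, ring_b):
    # sum the 9 point-charge interactions for every atom pair (arbitrary units)
    e = 0.0
    for a in ring_a:
        for b in ring_b:
            for x1, y1, z1, q1 in atom_charges(*a):
                for x2, y2, z2, q2 in atom_charges(*b):
                    e += q1 * q2 / math.dist((x1, y1, z1), (x2, y2, z2))
    return e
```

Scanning the lateral offset between the two rings with such a model maps out the kind of offset-versus-energy contour plot discussed below.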
The charges are shown schematically below for carbon atoms. For other atoms, e.g. nitrogen the charge is given as 1.5 with 1.5 electrons associated with them. (All images are from ref 1 )
charge distribution
optimal geometry for Zn porphyrin $\pi-\pi$ interaction. The molecules are displaced (in x and y) relative to one another at a separation of 0.34 nm.
The contour plot shows the $\pi-\pi$ interaction and van der Waals (vdw) energy as a function of the centre-to-centre offset shown above (energies in $\pu{ kJ mol^{-1}}$). The black squares are geometries from $\pi-\pi$ stacking in crystals. The shape of the contour plot with the vdw interaction included is the same as with the $\pi-\pi$ interaction alone, i.e. the $\pi-\pi$ interaction determines the geometry, but the vdw term generally lowers the energy and more clearly shows regions of low energy as negative values.
$^1$ The model was developed by Hunter & Sanders J.Am. Chem. Soc. 1990, v112, p 5525-5534
$^2$ Van der Waals parameters: Caillet & Claverie, Acta Crystallogr. Sect. A 1975, v31, p448 | {
"domain": "chemistry.stackexchange",
"id": 6532,
"tags": "bond, biochemistry, dna-rna"
} |
Flatten JSON to get relevant data in Postgresql | Question: Following is my table:
CREATE TABLE test.individuals
(
id INT PRIMARY KEY,
firstname VARCHAR,
lastname VARCHAR,
phone JSONB,
email JSONB
);
Which contains the following records (say):
INSERT INTO test.individuals (
id,
firstname,
lastname,
phone,
email
) VALUES ( 1,
'Ajay',
'Upadhaya',
'[{"Type": "Mobile", "Number": "9876543210"}, {"Type": "Home", "Number": "23456789"}, {"Type": "Work", "Number": "24356758"}]',
'[{"Type": "Home", "Email": "ajay@xyz.com"},{"Type": "Work", "Email": "ajay@abc.com"}]'
), ( 2,
'Vikas',
'Singh',
'[{"Type": "Mobile", "Number": "8978675612"}, {"Type": "Home", "Number": "21324354"}, {"Type": "Work", "Number": "24256376"}]',
'[{"Type": "Work", "Email": "vikas@xyz.com"}]'
), ( 3,
'Atul',
'Prasad',
'[{"Type": "Mobile", "Number": "7895674563"}]',
'[]'
), ( 4,
'Soumil',
'Roy',
'[{"Type": "Mobile", "Number": "8798765632"}]',
'[{"Type": "Home", "Email": "soumil@xyz.com"}]'
);
My requirement was to get mobile numbers in one field and the rest of the numbers in another field (since there can be multiple numbers, they have to be separated by commas). The same goes for email IDs.
Following is my query:
WITH persons
AS (SELECT id,
firstname,
lastname
FROM test.individuals),
email_ids
AS (SELECT id,
Jsonb_array_elements(email) :: jsonb AS email_object
FROM test.individuals),
email_aggregated
AS (SELECT id,
String_agg(email_object ->> 'Email', ',') AS email_id
FROM email_ids
GROUP BY id),
phone_numbers
AS (SELECT id,
Jsonb_array_elements(phone) :: jsonb phone_object
FROM test.individuals),
mobile_numbers
AS (SELECT id,
String_agg(phone_object ->> 'Number', ',') AS mobile_number
FROM phone_numbers
WHERE phone_object ->> 'Type' = 'Mobile'
GROUP BY id),
other_numbers
AS (SELECT id,
String_agg(phone_object ->> 'Number', ',') AS phone_number
FROM phone_numbers
WHERE phone_object ->> 'Type' <> 'Mobile'
GROUP BY id)
SELECT p.*,
mob.mobile_number mobile_phone_number,
oth.phone_number unformatted_phone_numbers,
eml.email_id email_addresses
FROM persons p
left join email_aggregated eml
ON p.id = eml.id
left join other_numbers oth
ON p.id = oth.id
left join mobile_numbers mob
ON p.id = mob.id
This gives me the required result:
I came up with this query intuitively. I don't know if there is a better way to do this. Any comments will be appreciated.
Answer: This gets you the same output:
SELECT id
, firstname
, lastname
, (SELECT string_agg(p->>'Number', ',') FROM jsonb_array_elements(phone) p WHERE p->>'Type' = 'Mobile') AS mobile_phone_number
, (SELECT string_agg(p->>'Number', ',') FROM jsonb_array_elements(phone) p WHERE p->>'Type' <> 'Mobile') AS unformatted_phone_numbers
, (SELECT string_agg(e->>'Email', ',') FROM jsonb_array_elements(email) e) AS email_addresses
FROM test.individuals
So basically, rather than creating all the CTEs ahead of time with aggregated results and then joining against them, we just go through the records and compute the results of the last three columns by running some functions on the content of phone/email fields as they're read.
This is the query plan for my version:
Seq Scan on individuals (cost=0.00..21.22 rows=4 width=111) (actual time=0.114..0.230 rows=4 loops=1)
SubPlan 1
-> Aggregate (cost=1.51..1.52 rows=1 width=32) (actual time=0.021..0.021 rows=1 loops=4)
-> Function Scan on jsonb_array_elements p (cost=0.00..1.50 rows=1 width=32) (actual time=0.013..0.015 rows=1 loops=4)
Filter: ((value ->> 'Type'::text) = 'Mobile'::text)
Rows Removed by Filter: 1
SubPlan 2
-> Aggregate (cost=2.00..2.01 rows=1 width=32) (actual time=0.012..0.012 rows=1 loops=4)
-> Function Scan on jsonb_array_elements p_1 (cost=0.00..1.50 rows=99 width=32) (actual time=0.008..0.009 rows=1 loops=4)
Filter: ((value ->> 'Type'::text) <> 'Mobile'::text)
Rows Removed by Filter: 1
SubPlan 3
-> Aggregate (cost=1.50..1.51 rows=1 width=32) (actual time=0.009..0.009 rows=1 loops=4)
-> Function Scan on jsonb_array_elements e (cost=0.00..1.00 rows=100 width=32) (actual time=0.005..0.005 rows=1 loops=4)
Planning time: 0.297 ms
Execution time: 0.354 ms
And yours:
Hash Left Join (cost=51.17..56.00 rows=4 width=164) (actual time=0.367..0.386 rows=4 loops=1)
Hash Cond: (p.id = mob.id)
CTE persons
-> Seq Scan on individuals (cost=0.00..1.04 rows=4 width=15) (actual time=0.032..0.034 rows=4 loops=1)
CTE email_ids
-> Seq Scan on individuals individuals_1 (cost=0.00..3.03 rows=400 width=36) (actual time=0.034..0.056 rows=4 loops=1)
CTE email_aggregated
-> HashAggregate (cost=11.00..13.50 rows=200 width=36) (actual time=0.088..0.091 rows=3 loops=1)
Group Key: email_ids.id
-> CTE Scan on email_ids (cost=0.00..8.00 rows=400 width=36) (actual time=0.037..0.061 rows=4 loops=1)
CTE phone_numbers
-> Seq Scan on individuals individuals_2 (cost=0.00..3.03 rows=400 width=36) (actual time=0.030..0.048 rows=8 loops=1)
CTE mobile_numbers
-> GroupAggregate (cost=10.01..10.06 rows=2 width=36) (actual time=0.033..0.041 rows=4 loops=1)
Group Key: phone_numbers.id
-> Sort (cost=10.01..10.02 rows=2 width=36) (actual time=0.025..0.027 rows=4 loops=1)
Sort Key: phone_numbers.id
Sort Method: quicksort Memory: 25kB
-> CTE Scan on phone_numbers (cost=0.00..10.00 rows=2 width=36) (actual time=0.004..0.012 rows=4 loops=1)
Filter: ((phone_object ->> 'Type'::text) = 'Mobile'::text)
Rows Removed by Filter: 4
CTE other_numbers
-> HashAggregate (cost=12.98..15.48 rows=200 width=36) (actual time=0.084..0.086 rows=2 loops=1)
Group Key: phone_numbers_1.id
-> CTE Scan on phone_numbers phone_numbers_1 (cost=0.00..10.00 rows=398 width=36) (actual time=0.042..0.069 rows=4 loops=1)
Filter: ((phone_object ->> 'Type'::text) <> 'Mobile'::text)
Rows Removed by Filter: 4
-> Hash Right Join (cost=4.97..9.76 rows=4 width=132) (actual time=0.298..0.315 rows=4 loops=1)
Hash Cond: (oth.id = p.id)
-> CTE Scan on other_numbers oth (cost=0.00..4.00 rows=200 width=36) (actual time=0.085..0.088 rows=2 loops=1)
-> Hash (cost=4.92..4.92 rows=4 width=100) (actual time=0.193..0.193 rows=4 loops=1)
Buckets: 1024 Batches: 1 Memory Usage: 9kB
-> Hash Right Join (cost=0.13..4.92 rows=4 width=100) (actual time=0.167..0.185 rows=4 loops=1)
Hash Cond: (eml.id = p.id)
-> CTE Scan on email_aggregated eml (cost=0.00..4.00 rows=200 width=36) (actual time=0.090..0.096 rows=3 loops=1)
-> Hash (cost=0.08..0.08 rows=4 width=68) (actual time=0.059..0.059 rows=4 loops=1)
Buckets: 1024 Batches: 1 Memory Usage: 9kB
-> CTE Scan on persons p (cost=0.00..0.08 rows=4 width=68) (actual time=0.040..0.050 rows=4 loops=1)
-> Hash (cost=0.04..0.04 rows=2 width=36) (actual time=0.052..0.052 rows=4 loops=1)
Buckets: 1024 Batches: 1 Memory Usage: 9kB
-> CTE Scan on mobile_numbers mob (cost=0.00..0.04 rows=2 width=36) (actual time=0.036..0.048 rows=4 loops=1)
Planning time: 0.719 ms
Execution time: 0.693 ms
Neither are particularly performant, although my version appears about twice as fast, but the stats can't be trusted with so little data, so it's not very useful right now. | {
"domain": "codereview.stackexchange",
"id": 25835,
"tags": "sql, postgresql"
} |
Pattern in scaled down image of very close lines | Question: Today I was watching a video on Youtube where I saw lines representing the light of a light source. When the image was downscaled like this:
There were some clear patterns on the horizontal and vertical lines going through the source. I have already seen this phenomenon a few times when I received photos of computer monitors. When there were some closely packed lines, I saw patterns resembling those in the screenshot when I was playing around with the scale of the image. As I was increasing or decreasing the size of the image, the pattern was changing continuously.
This is what the actual image looks like when it isn't scaled down:
The patterns seem to disappear.
What causes this phenomenon? I assumed it falls under the category of optics. If I am wrong, I apologize.
source of the video: https://www.youtube.com/watch?v=d-o3eB9sfls
Answer: The moiré fringes are caused by undersampling: it is the difference between properly sampling a small field of view and undersampling a larger field, and it causes spatial aliasing. | {
"domain": "physics.stackexchange",
"id": 47271,
"tags": "optics, geometric-optics"
} |
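The undersampling mechanism in the preceding answer is easy to demonstrate in one dimension. The sketch below (illustrative Python; the function names and the particular frequencies are chosen for the example) builds a fine stripe pattern at 0.45 cycles per pixel and "downscales" it by simply dropping every other pixel, with no low-pass filtering; the stripes reappear at a spurious low frequency of about 0.1 cycles per sample, which is the moiré pattern:

```python
import math

def stripes(freq, n):
    # fine stripe pattern: intensity oscillates at `freq` cycles per pixel
    return [math.sin(2 * math.pi * freq * i) for i in range(n)]

def subsample(signal, stride):
    # naive downscaling: keep every `stride`-th pixel, with no low-pass filter
    return signal[::stride]

def apparent_freq(signal):
    # crude frequency estimate: a sinusoid has two zero crossings per cycle
    crossings = sum(1 for a, b in zip(signal, signal[1:]) if a * b < 0)
    return crossings / (2 * len(signal))
```

At a stride of 2 the stripe frequency becomes 0.9 cycles per sample, above the Nyquist limit of 0.5, so it aliases down to |0.9 - 1| = 0.1; proper downscaling filters the fine detail out first.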
How hard is finding the discrete logarithm? | Question: The discrete logarithm is the same as finding $b$ in $a^b=c \bmod N$, given $a$, $c$, and $N$.
I wonder what complexity groups (e.g. for classical and quantum computers) this is in, and what approaches (i.e. algorithms) are the best for accomplishing this task.
The Wikipedia link above doesn't really give very concrete runtimes. I'm hoping for something more concrete about what the best known methods are.
Answer: Short answer.
If we formulate an appropriate decision problem version of the Discrete Logarithm problem, we can show that it belongs to the intersection of the complexity classes NP, coNP, and BQP.
A decision problem version of Discrete Log.
The discrete logarithm problem is most often formulated as a function problem, mapping tuples of integers to another integer. That formulation of the problem is incompatible with the complexity classes P, BPP, NP, and so forth which people prefer to consider, which concern only decision (yes/no) problems. We may consider a decision problem version of the discrete log problem which is effectively equivalent:
Discrete Log (Decision Problem). Given a prime $N$, a generator $a \in \mathbb Z_N^\times$ of the multiplicative units modulo $N$, an integer $0 < c < N$, and an upper bound $b \in \mathbb N$, determine whether there exists $1 \leqslant L \leqslant b$ such that $a^L \equiv c \pmod{N}$.
This would allow us to actually compute $\log_a(c)$ modulo $N$ by binary search, if we could efficiently solve it. We may then ask to which complexity classes this problem belongs. Note that we've phrased it as a promise problem: we can extend it to a decision problem by suspending the requirements that $N$ be prime and $a \in \mathbb Z_N^\times$ a generator, but adding the condition that these restrictions hold for any 'YES' instance of the problem.
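The binary-search reduction mentioned above can be sketched directly (illustrative Python; the brute-force oracle here merely stands in for any efficient solver of the decision problem):

```python
def dlog_oracle(N, a, c, b):
    # decision version: does there exist 1 <= L <= b with a^L = c (mod N)?
    # (brute force here; any decision-problem solver would do)
    return any(pow(a, L, N) == c for L in range(1, b + 1))

def discrete_log(N, a, c):
    # binary search on the bound b recovers L with O(log N) oracle calls
    if not dlog_oracle(N, a, c, N - 1):
        return None
    lo, hi = 1, N - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if dlog_oracle(N, a, c, mid):
            hi = mid
        else:
            lo = mid + 1
    return lo
```

Each oracle call halves the search interval, so an efficient decision-problem solver would yield an efficient algorithm for the function problem.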
Discrete Log is in BQP.
Using Shor's algorithm for computing the discrete logarithm (Polynomial-Time Algorithms for Prime Factorization and Discrete Logarithms on a Quantum Computer), we may easily contain Discrete Log in BQP. (To test whether or not $a \in \mathbb Z_N^\times$ actually is a generator, we may use Shor's order-finding algorithm in the same paper, which is the basis for the discrete logarithm algorithm, to find the order of $a$ and compare it against $N-1$.)
Discrete Log is in NP ∩ coNP.
If it is actually the case that $N$ is prime and $a \in \mathbb Z_N^\times$ is a generator, a sufficient certificate either for a 'YES' or a 'NO' instance of the decision problem is the unique integer $0 \leqslant L < N-1$ such that $a^L \equiv c \pmod{N}$. So it suffices to show that we can certify whether or not the conditions on $a$ and $N$ hold. Following Brassard's A note on the complexity of cryptography, if it is both the case that $N$ is prime and $a \in \mathbb Z_N^\times$ is a generator, then it is the case that
$$ a^{N-1} \equiv 1 \!\!\!\!\pmod{N} \qquad\text{and}\qquad a^{(N-1)/q} \not\equiv 1 \!\!\!\!\pmod{N} ~~\text{for primes $q$ dividing $N-1$} $$
by definition (using the fact that $\mathbb Z_N^\times$ has order $N-1$).
A certificate that the constraints on $N$ and $a$ both hold would be a list of the prime factors $q_1, q_2, \ldots$ dividing $N-1$, which will allow us to test the above congruence constraints. (We can test whether each $q_j$ is prime using the AKS test if we wish, and test that these are all of the prime factors of $N-1$ by attempting to find the prime-power factorization of $N-1$ with only those primes.)
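Once the prime factors of $N-1$ are in hand, the certificate check itself is just a few modular exponentiations. A minimal sketch (illustrative Python; the function name is made up):

```python
def is_generator(a, N, prime_factors_of_N_minus_1):
    # for prime N, a generates Z_N^* iff a^(N-1) = 1 (mod N) and
    # a^((N-1)/q) != 1 (mod N) for every prime q dividing N-1
    if pow(a, N - 1, N) != 1:
        return False
    return all(pow(a, (N - 1) // q, N) != 1
               for q in prime_factors_of_N_minus_1)
```

For example, with $N = 23$ and $N-1 = 22 = 2 \cdot 11$, the element 5 passes both congruence tests, while 2 has order 11 and fails.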
A certificate that one of the constraints on $N$ or $a$ fail would be an integer $q$ which divides $N-1$, such that $a^{(N-1)/q} \equiv 1 \pmod{N}$. It isn't necessary to test $q$ for primeness in this case; it immediately implies that the order of $a$ is less than $N-1$, and so it is a generator of the multiplicative group only if $N$ fails to be prime. | {
"domain": "cs.stackexchange",
"id": 8481,
"tags": "algorithms, complexity-theory, time-complexity, discrete-mathematics"
} |
Spoj's 1st problem in TDD using Ruby | Question: I am pretty new to TDD and am following problems on spoj.pl for practice. I need help with the following code; it was my first attempt.
Problem: Your program is to use the brute-force approach in order to find the Answer to Life, the Universe, and Everything. More precisely... rewrite small numbers from input to output. Stop processing input after reading in the number 42. All numbers at input are integers of one or two digits. Mentioned here.
Tests:
describe "life universe and everything" do
before(:each) do
@life = Life.new
end
it "should accept input" do
@life.stub(:gets) { "5, 4, 23, 42, 4" }
@life.input.should == [5,4,23,42,4]
end
it "check number validity" do
@life.valid?(23).should be_true
@life.valid?(42).should be_false
end
it "process input" do
array1 = [5,4,23,42,4]
@life.process_input(array1).should == [5,4,23]
array2 = [1,4,76,34,90]
@life.process_input(array2).should == [1,4,76,34,90]
end
end
Code:
class Life
def input
puts "List comma(',') seperated numbers. Press enter when done."
numbers_string = gets
numbers = numbers_string.split(',')
numbers.map {|n| n.strip.to_i}
end
def valid?(number)
(number==42) ? false : true
end
def process_input(numbers)
processed_numbers = []
numbers.each do |number|
break unless valid?(number)
processed_numbers << number
end
processed_numbers
end
end
I would like it to be reviewed and would be glad to know my mistakes and what I can do to improve the quality of the code. Thanks!
Answer: Congratulations. That's a pretty nice first attempt.
I see only a few issues:
The problem statement shows that you will receive one number per line; your code expects one line with multiple numbers, separated by commas.
The problem statement asks you to print the numbers; your program doesn't.
When run, the test writes its prompt to $stdout. Tests should be quiet when passing.
The issue with the prompt can be fixed with an expectation:
@life.should_receive(:puts)\
  .with("List comma(',') seperated numbers. Press enter when done.")
If your intent is to solve a slightly different problem, then we won't worry about the differences between what your program does and what they're asking for. But read below the fold for a simple test and implementation, TDD Style, for the stated problem.
This problem can be solved pretty simply. For example,
loop do
n = gets.chomp.to_i
puts n
break if n == 42
end
What test or tests could you write that would allow you to have code this simple? Let's see if we can get there from nothing:
class Life
def process_input
end
end
describe Life do
it "should copy numbers from $stdin to $stdout" do
life = Life.new
life.should_receive(:gets).with(no_args).and_return('41')
life.should_receive(:puts).with(41)
life.process_input
end
end
Running that, we find that the spec fails because the code never called gets. We add the call to gets, then find that the spec fails because the code never called puts. Adding them, we get a passing test:
class Life
def process_input
puts gets.to_i
end
end
Of course, we've got no loop, so let's tell the test there should be multiple calls to gets and puts. We'll use .ordered to tell rspec that the calls should occur in a certain order.
it "should copy numbers from $stdin to $stdout" do
life = Life.new
life.should_receive(:gets).ordered.with(no_args).and_return('41')
life.should_receive(:puts).ordered.with(41)
life.should_receive(:gets).ordered.with(no_args).and_return('42')
life.should_receive(:puts).ordered.with(42)
life.process_input
end
Now there needs to be a loop. Let's add it:
def process_input
loop do
puts gets.to_i
end
end
Our test neither passes nor fails. Instead, it hangs. That's because we didn't give it any way to get out of the loop. Let's add the termination condition:
def process_input
loop do
n = gets.to_i
puts n
break if n == 42
end
end
So, we can write a test that lets us have pretty simple code. But in a few ways, this test is bothersome. One reason is that it is too picky about the way that the I/O is done. What if we changed the program to use print instead of puts? It would have produce exactly the same output, but the test would fail. Also, having the test hang when the program fails to terminate isn't very good. We want tests that pass or fail quickly and cleanly, with no guessing. So let's rewrite our test (and the program) using dependency injection for I/O.
Starting over, we're going to change the Life program so that it takes an input object and an output object:
class Life
def initialize(input = $stdin, output = $stdout)
@input = input
@output = output
end
def process_input
end
end
And our test:
require 'stringio'
describe Life do
it "should copy numbers from input to output" do
input = StringIO.new("41\n")
output = StringIO.new
life = Life.new(input, output)
life.process_input
output.string.should == "41\n"
end
end
Using StringIO instances as mock I/O objects is very handy, and lets us test the result of the I/O rather than the actions that produce the I/O. That's nice.
Since our program doesn't process anything yet, the test fails:
1) Life should copy numbers from $stdin to $stdout
Failure/Error: output.string.should == "41\n"
expected: "41\n"
got: "" (using ==)
Let's fix that:
def process_input
@output.print @input.gets
end
The test now passes. Let's modify the test to show that it should loop:
it "should copy numbers from $stdin to $stdout" do
input = StringIO.new("41\n42\n")
output = StringIO.new
life = Life.new(input, output)
life.process_input
output.string.should == "41\n42\n"
end
The test fails:
1) Life should copy numbers from $stdin to $stdout
Failure/Error: output.string.should == "41\n42\n"
expected: "41\n42\n"
got: "41\n" (using ==)
Let's add the loop:
def process_input
loop do
@output.print @input.gets
end
end
Now it hangs again! That's because I/O objects like StringIO and $stdin just return nil when they run out of input. Our program is happily printing nil over and over. Let's make the test die nicely when the loop fails to terminate:
it "should copy numbers from $stdin to $stdout" do
input = StringIO.new("41\n42\n43\n")
def input.gets
super.tap { |s| raise "no more input" unless s }
end
output = StringIO.new
life = Life.new(input, output)
life.process_input
output.string.should == "41\n42\n"
end
Here we monkey patch our input objects so that when it runs out of input, it raises an error rather than just returning nil. The result:
1) Life should copy numbers from $stdin to $stdout
Failure/Error: super.tap { |s| raise "no more input" unless s }
RuntimeError:
no more input
Great! let's get the loop to terminate when it encounters that "42":
def process_input
loop do
s = @input.gets
@output.print s
break if s == "42\n"
end
end
Now we've got a robust test that doesn't care about the details of how I/O is done, and code that is pretty simple.
There's just one more enhancement I'd make to this program, and that is to have it exit when it runs out of input. While not strictly required by the problem statement, a program that prints blank lines endlessly upon EOF is obnoxious! Now we need another "it" in our test:
it "should exit on EOF" do
input = StringIO.new("41\n<EOF>")
def input.gets
s = super
case s
when "<EOF>"
nil
when nil
raise "no more input"
else
s
end
end
output = StringIO.new
life = Life.new(input, output)
life.process_input
output.string.should == "41\n"
end
Our monkey-patched input object just got a little more interesting. We want it to return "nil" on end of input, because that's how the loop will detect end of input in real life, but we want it to raise an exception if the code under test tries to get more input after the nil. This test fails:
1) Life should exit on EOF
Failure/Error: raise "no more input"
RuntimeError:
no more input
So let's make the loop exit on end of input:
def process_input
loop do
s = @input.gets
@output.print s
break if s.nil?
break if s == "42\n"
end
end
Now the program meets the stated requirements, and our own requirement of not filling the output with blank lines on end-of-input. | {
"domain": "codereview.stackexchange",
"id": 2405,
"tags": "ruby, rspec"
} |
Cutting of spherical mirrors | Question: Suppose you have a concave mirror and a point object $O$ placed along its principal axis.
If we cut the mirror in half and displace each piece by x units above and below the principal axis,
where is the new principal axis and how many images will be formed?
According to me, the principal axis should remain the same; however, my teacher says I'm wrong :(
Answer: If you were to separate the two pieces by translating them along the circumference of the circle, you'd be roughly correct. But since you only translated them along the y-axis, you have to re-calculate the principal axis for each section separately. The centers of the new circles to which each belongs are different.
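One common textbook way to make this quantitative is to treat each displaced half as part of a complete mirror whose principal axis has been translated with it, so the object sits at transverse height $\mp x$ relative to each half's own axis. The sketch below (illustrative Python; the helper name, the all-distances-positive sign convention, and the numbers are assumptions for the example) then yields two images, symmetric about the original axis:

```python
def half_mirror_image(u, f, d):
    # one half of the mirror, translated transversely by d; relative to that
    # half's own (shifted) principal axis the object sits at height -d
    v = 1.0 / (1.0 / f - 1.0 / u)   # mirror formula: 1/v + 1/u = 1/f
    m = -v / u                      # transverse magnification
    return v, d + m * (-d)          # image distance, image height in lab frame
```

For an object at u = 3f, each half forms an image at v = 1.5f, displaced by 1.5d on opposite sides of the old axis, so two images are formed.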
To get a bit technical, the ideal reflector for optimum focus is a parabola. We use spheres because they are much easier to fabricate and because we can make any point on the sphere (circle) the principal axis point by rotating the sphere. For "small" curvatures the difference between the circle and the parabola shapes is very small. But in this case you haven't rotated either section, so their principal axes are quite different so far as the object is concerned. | {
"domain": "physics.stackexchange",
"id": 79372,
"tags": "geometric-optics"
} |
Magnetic-activated cell sorting vs. FACS | Question: When sorting cell populations it is possible to use either magnetic-activated cell sorting or fluorescence-activated cell sorting (FACS). I am wondering when you would choose either technique and what the pros and cons of each technique are.
As FACS sorts cells one by one, I can imagine magnetic-activated cell sorting is a faster process. So I guess if time is an important factor, this could inform your decision. Also, by sorting cells one at a time, I assume FACS is a lot more precise and will lead to less 'contamination'.
So, are there other factors at play here that can influence the choice of magnetic-activated cell sorting over FACS, and what are the pros and cons of these techniques, respectively?
Answer: Magnetic-activated cell sorting (PDF download), or MACS, is a procedure developed by Miltenyi Biotec to separate cells from complex mixtures using antibody-coated magnetic nanoparticles. The antibodies are specific for certain cell surface markers, either expressed on your population of interest (positive selection), or expressed on undesired cell types (negative selection). After adding the antibody-coated beads to the cell mixture and incubating, the suspension is added to a special single-use separation column affixed to a magnet, to which the beads stick, while unlabeled cells flow through. If performing a negative selection, the flow-through is your population of interest and the bound beads and cells are discarded. If your cells of interest are bound to the beads (positive selection), the column is washed several times to make sure unbound or weakly-bound cells are washed through, then the column is removed from the magnet and the cell-bead complexes are eluted.
In fluorescence-activated cell sorting or FACS, the initial complex cell mixture is first labeled with one or more cell surface marker-specific antibodies that have been conjugated to fluorescent dyes. Cells can also be analyzed that express one or more recombinant fluorescent proteins in conjunction with a gene of interest, allowing for selection of cells without using antibodies. After incubation and wash steps, red blood cells are lysed, if analyzing whole blood. The reason for RBC lysis is shown below:
Without lysis, the RBCs overwhelm the cytometer, as they make up around 95% of the cells in human whole blood. White blood cells (leukocytes), on the other hand, only make up 0.1-0.2% of cells, and lymphocytes make up between about 15% and 50% of leukocytes.
The cell mixture is then analyzed on a cell sorter such as a BD FACSAria.
From: https://commons.wikimedia.org/wiki/File:Fluorescence_Assisted_Cell_Sorting_%28FACS%29_B.jpg
The cells pass in single file past one or more laser beams, which excite the dyes and cause them to fluoresce at a certain wavelength. The user can then use gating to select the combination and intensity of colors they are interested in, and when a cell meets the criteria, it is given an electrical charge, and charged deflection plates direct it into a collection container.
So what are the pros and cons of each technology? MACS separation generally is quicker than FACS staining, and a greater number of cells can be processed at one time (sometimes orders of magnitude more, depending on instrument and protocol). If you are, for example, looking to purify large numbers of CD4+ T cells from whole blood, MACS may be the better choice of the two (although I prefer the RosetteSep system for that particular requirement). MACS doesn't require a rather expensive dedicated instrument, although automatic platforms are available if needed. On the downside, MACS-qualified beads are mostly only available from Miltenyi, so if they do not have beads with your particular marker of interest available, you will need to do the conjugation yourself, at the expense of time and possibly precious antibody while you optimize conditions. Fluorescent-conjugated antibodies are extremely common, and available from practically innumerable companies, so unless you are working with a self-generated antibody it's much more likely that you'll be able to find one for FACS quite easily. There are MACS beads that contain antibodies directed at certain fluorophores, so you can use flow antibodies for MACS, although optimization is likely still necessary. Finally, MACS separation is only an option for cells that have your marker of interest expressed on the cell surface.
One of the major advantages of FACS is multicolor staining (depending on the instrument you're using). This makes it much more straightforward to purify potentially rare populations that are only differentiated by their combined expression of a number of surface markers, necessitating multiple gating steps. If this were to be accomplished by MACS, you would need to go through multiple rounds of purification, something that could take longer than FACS, and not necessarily be feasible, as a certain number of cells are required for each MACS run. With FACS, these rare populations (which may only be tens or hundreds of cells per million) can be collected just as easily as more common ones. Also, as mentioned above, there are more antibodies available for flow/FACS than for direct MACS. Additionally, FACS allows for sorting of cells expressing GFP and relatives, letting you collect populations based on internal markers. | {
"domain": "biology.stackexchange",
"id": 3800,
"tags": "immunology, lab-techniques, flow-cytometry, cell-sorting"
} |
Program for calculating and ranking scores | Question: I've recently completed a vanilla JS challenge. It is a method for calculating and ranking scores.
If possible, I would love to make it better. Any suggestions are welcome.
Challenge Directions:
The input array contains a series of objects that hold user scores. Scores are ranked both individually (within the object) as well as against the other score objects to find overall ranking.
MY CODE:
//STEP 1 - CALCULATE TOTAL SCORE & SCORE RANKS FOR EACH OBJECT
function calculateResults(input) {
return input.map((scoreSeries) => {
let final_rank = 0;
calculatedScoreSeries = {...scoreSeries}
Object.keys(scoreSeries).forEach((key) => {
if (key !== 'category') {
final_rank = final_rank + scoreSeries[key] //Adding up the score.
calculatedScoreSeries[key] = calculateSourceRank(scoreSeries, key) //calculating the rank
}})
calculatedScoreSeries.final_rank = final_rank //adding total_score prop to the new object
return calculatedScoreSeries
})
}
//to calculate the ranking of each source
function calculateSourceRank(scoreSeries, key) {
let rank = 1;
for (let i = 1; i < Object.keys(scoreSeries).length; i++) {
if (scoreSeries[Object.keys(scoreSeries)[i]] > scoreSeries[key]) {
rank++
}
}
return rank
}
//STEP 2 - COMPARE OBJECTS FOR FINAL RANK
function calculateFinalRank(calculatedResults) {
return calculatedResults.map((calculatedScoreSeries) => {
let rankedScoreSeries = {...calculatedScoreSeries}
let rank = 1;
for (let i = 0; i < calculatedResults.length; i++) {
if (calculatedResults[i].final_rank > calculatedScoreSeries.final_rank) {
rank++
}
rankedScoreSeries.final_rank = rank
}
return rankedScoreSeries
})
}
let output = calculateFinalRank(calculateResults(input))
console.log(output)
TEST INPUTS W EXPECTED OUTPUTS:
//each object's score sources are ranked (if two are equal they share the higher ranking).
//then all objects are compared for the overall ranking (if two are equal they share the higher ranking)
input = [ // test input 1
{
"category":"test3",
"source1":50,
"source2":100,
"source3":30,
"source4":10,
"source5":10,
},
{
"category":"test4",
"source1":100,
"source2":30,
"source3":10,
"source4":10,
"source5":50,
},
{
"category":"test5",
"source1":5,
"source2":10,
"source3":10,
"source4":40,
"source5":5,
},
]
output = [
{
"category":"test3",
"source1":2,
"source2":1,
"source3":3,
"source4":4,
"source5":4,
"final_rank":1
},
{
"category":"test4",
"source1":1,
"source2":3,
"source3":4,
"source4":4,
"source5":2,
"final_rank":1
},
{
"category":"test5",
"source1":4,
"source2":2,
"source3":2,
"source4":1,
"source5":4,
"final_rank":3
},
]
input = [ // test input 2
{
"category":"cat1",
"src1":20,
"src2":30,
"src3":40,
"src4":50,
"src5":50
},
{
"category":"cat1",
"src1":10,
"src2":0,
"src3":20,
"src4":20,
"src5":100
},
]
output = [
{
"category":"cat1",
"src1":5,
"src2":4,
"src3":3,
"src4":1,
"src5":1,
"final_rank":1
},
{
"category":"cat1",
"src1":4,
"src2":5,
"src3":2,
"src4":2,
"src5":1,
"final_rank":2
},
]
BUSINESS LOGIC:
1 - Each object will always have at least one 'category'. More than 1 category will be saved as an array under the category key. Category will always be the first key.
2 - each object in the input array will have an equal number of sources (although this number can be any amount)
3 - If a total score is equal to another's, then they will share the higher ranking (same as individual score ranking)
Answer: Good things
I like the functional approach taken with this code, and that some ecmascript-6 features like arrow functions are used.
Suggestions
const vs let
It would be wise to default to using const for any variable that doesn't need to be re-assigned. If you later determine a value should be re-assigned then switch it to using let. This helps prevent accidental re-assignment in the future.
Use consistent indentation
Some lines appear to be indented with two spaces, while others are indented with four. It is wise to use uniform indentation throughout the code.
Use consistent line terminators
Many lines are terminated with a semi-colon but some are not. Unless you completely understand the rules of Automatic Semicolon Insertion or are using a compiler/module bundler it is best to include the semicolons.
multiple calls to Object.keys() in loop
Let us take a look at the following block:
for (let i = 1; i < Object.keys(scoreSeries).length; i++) {
if (scoreSeries[Object.keys(scoreSeries)[i]] > scoreSeries[key]) {
rank++
}
}
For each iteration of the loop, Object.keys() is called twice. That function could be called once if the result is stored in a variable.
const scoreKeys = Object.keys(scoreSeries)
for (let i = 1; i < scoreKeys.length; i++) {
if (scoreSeries[scoreKeys[i]] > scoreSeries[key]) {
rank++
}
}
The syntax could be simplified using a for...of loop:
const scoreKeys = Object.keys(scoreSeries)
for (const scoreKey of scoreKeys) {
if (scoreSeries[scoreKey] > scoreSeries[key]) {
rank++
}
}
The whole function could be simplified with a more functional approach using .filter():
//to calculate the ranking of each source
function calculateSourceRank(scoreSeries, key) {
const scoreKeys = Object.keys(scoreSeries)
return 1 + scoreKeys.filter(scoreKey => scoreSeries[scoreKey] > scoreSeries[key]).length
}
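A quick sanity check of the simplified version against the first object of test input 1 (my own example, not part of the original review; the function is repeated so the snippet is self-contained):

```javascript
// Reviewer's simplified ranking: rank = 1 + count of strictly higher scores.
// The non-numeric "category" value never compares greater than a number
// (string-to-number coercion yields NaN), so it never inflates the rank.
function calculateSourceRank(scoreSeries, key) {
    const scoreKeys = Object.keys(scoreSeries)
    return 1 + scoreKeys.filter(scoreKey => scoreSeries[scoreKey] > scoreSeries[key]).length
}

const series = { category: "test3", source1: 50, source2: 100, source3: 30, source4: 10, source5: 10 }
console.log(calculateSourceRank(series, "source1")) // 2 - only source2 (100) is higher
console.log(calculateSourceRank(series, "source5")) // 4 - shares rank 4 with the tied source4
```

This matches the expected output for "test3" above, including the shared ranking for the tied sources.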
A similar approach could be applied to calculateFinalRank() | {
"domain": "codereview.stackexchange",
"id": 37254,
"tags": "javascript, functional-programming, json, ecmascript-6"
} |
Gravitational potential energy negative? | Question: Can the gravitational potential energy be negative? $PE=mgh$, we kind of have the same fig as this (minus the car)!
http://www.mne.ksu.edu/static/nlc/tiki-download_file.php?fileId=24
and we choose the arbitrary reference point to be the G of the pendulum (point of stability). Now the $(O,k)$ axis points upwards, but the book says $E_p=mgh$ and chooses $h$ to be positive.
My question is: the inverted pendulum is moving upwards but it's still UNDER the arbitrary point ($E_p=0$), so why does the book say $E_p=mgh$ where $h$ is positive, $h=\frac{L}{2}\theta$?
Answer: In general, potential energy is only well-defined up to an additive constant. The physically relevant quantity is the difference in potential energy between two points. So there is nothing wrong with some points having a negative potential energy. For any given problem, you'll define your reference point to make your equations as simple as possible.
In this particular example, it looks like the reference point is defined as the base of the inverted pendulum, where it attaches to the car (or ground, if the car isn't present in your version of the problem). $h$ is then the height above the car (ground). In this case, it's probably easiest to measure the height of the pendulum's bob in terms of that definition. But you could instead define your reference point to be the point of highest reach of the pendulum's bob. $h$ would then be measured as height above that point. Since all the points you're interested in are below that point, $h$ would be negative for all points, and potential energy would always be negative. And that's okay; there's nothing inherently wrong with that. It just might be a bit harder to work with the resulting equations.
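To make the reference-point freedom explicit (a standard identity, added here for illustration; $h_0$ denotes the constant height of whatever new reference you pick):
\begin{align*}
\tilde{E}_p(h) &= mg(h - h_0) = E_p(h) - mgh_0, \\
\tilde{E}_p(h_2) - \tilde{E}_p(h_1) &= mg(h_2 - h_1) = E_p(h_2) - E_p(h_1),
\end{align*}
so any two choices of reference give identical energy differences, which is all that enters the dynamics.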
This is beyond the scope of this particular example, but $PE = mgh$ only works for systems near the surface of the earth, where $h$ is much smaller than the radius of the earth. If you have to work with distances that are similar in size to the radius of the earth, you have to use the full (Newtonian) form $PE = -GMm/r$, where I'm using $r$ to mean the distance between the object and the center of the earth. In cases where you have to use this form, the easiest reference is usually to define the potential energy to be zero when the object is infinitely far from the earth. In this case, all points a finite distance from the earth have a negative potential energy. | {
"domain": "physics.stackexchange",
"id": 14180,
"tags": "newtonian-mechanics, potential-energy, conventions"
} |
Is it possible to make graphite an insulator? | Question: Since graphite has anisotropic electrical conductivity, wouldn't it be possible to make it an insulator instead? Like manufacturing it such that the plates of carbon atoms are mostly parallel to one another.
Answer: Tempting, but no. Graphite is of course less conductive perpendicular to the basal plane but still boasts a conductivity of 330 S/m in that direction, https://en.wikipedia.org/wiki/Electrical_resistivity_and_conductivity. Electrons can "jump" fairly easily from one pi-electron "layer" to the next. | {
"domain": "chemistry.stackexchange",
"id": 9393,
"tags": "physical-chemistry"
} |
Concavity of magnetization for Potts model | Question: For the Ising model the magnetization $ \langle \sigma_x \rangle_{\beta,h} $ is concave in the variable $h$. This means that
\begin{align*}
\frac{\partial^2 \langle \sigma_x \rangle_{\beta,h}}{\partial h^2} \leq 0.
\end{align*}
See for example $[1]$.
As far as I know, this does not hold for Potts models, but is it known to be false?
What about random cluster models for general $1 \leq q \leq \infty$?
Reference:
R. B. Griffiths, C. A. Hurst and S. Sherman, Concavity of Magnetization of an Ising Ferromagnet in a Positive External Field, J. Math. Phys. 11, 790 (1970).
Answer: The question is a bit ambiguous, since the Potts spins are not really scalar quantities (sure, you can map the $q$ states to $\{1,\dots,q\}$, but this does not properly reflect the permutation symmetry of the model). A better representation is as the vertices of the $(q-1)$-simplex, but then you have to say what you precisely mean by the GHS inequality in this model (in particular, what plays the role of the magnetic field and of the magnetization).
I'll interpret the question in the simplest way possible: the magnetic field acts as $-h\sum_i \delta_{\sigma_i,1}$ and the "magnetization" is given by $\langle\delta_{\sigma,1}\rangle_h$.
In this setting, the GHS inequality does not extend to the $q$-state Potts model with $q>2$. If I haven't made a stupid mistake, the simplest counterexample is a $q$-state Potts model with a single spin $\sigma\in\{1,\dots,q\}$ and free boundary condition, that is, the Hamiltonian reads
$$
\mathcal{H}(\sigma) = -h \delta_{\sigma, 1}.
$$
In this case, $\langle\delta_{\sigma,1}\rangle_h = \operatorname{Prob}_{\,h}(\sigma=1) = \frac{e^h}{e^h + (q-1)}$.
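Spelling the verification out (my own computation, added for completeness; write $a = q-1$ and $f(h) = \langle\delta_{\sigma,1}\rangle_h$):
\begin{align*}
f(h) &= \frac{e^h}{e^h + a}, \\
f'(h) &= \frac{a\,e^h}{(e^h + a)^2}, \\
f''(h) &= \frac{a\,e^h\,(a - e^h)}{(e^h + a)^3},
\end{align*}
so $f''(h)$ has the sign of $q - 1 - e^h$, which changes at $h = \ln(q-1)$; this threshold is positive precisely when $q > 2$.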
One easily verifies that, when $q>2$,
$\frac{\mathrm{d}^2}{\mathrm{d}h^2} \langle\delta_{\sigma,1}\rangle_h$ does not have a constant sign when $h>0$. | {
"domain": "physics.stackexchange",
"id": 85505,
"tags": "statistical-mechanics, ising-model"
} |