Khronos: Primary CMake file | Question: So recently I made a large project of mine open source: Khronos. I will be dissecting parts of it so that I can have it reviewed more easily here and so that the project as a whole will be improved. The first part I want to have reviewed is the CMake file involved with kicking off the building process of the project. Please feel free to tear it apart.
CMakeLists.txt:
cmake_minimum_required(VERSION 2.8.7)
include(ExternalProject)
project(Khronos)
set(CMAKE_MODULE_PATH ${CMAKE_MODULE_PATH} "${CMAKE_SOURCE_DIR}/cmake/")
set_directory_properties(PROPERTIES EP_PREFIX ${CMAKE_BINARY_DIR}/library-build)
set(CMAKE_C_FLAGS "-std=gnu11 -O3")
if (GCC_VERSION VERSION_GREATER "4.8")
elseif (GCC_VERSION VERSION_GREATER "4.1.2")
SET (GCC_COMMON_WARNING_FLAGS "-pedantic -Wall -Wextra -Wconversion -Wfloat-equal -Wformat=2 -Winit-self -Winline -Winvalid-pch -Wlogical-op -Wmissing-declarations -Wmissing-include-dirs -Wold-style-cast -Woverloaded-virtual -Wredundant-decls -Wshadow -Wstack-protector -Wstrict-null-sentinel -Wswitch-default -Wswitch-enum")
SET (GCC_COMMON_WARNING_FLAGS "${GCC_COMMON_WARNING_FLAGS} -Wno-unused-parameter")
SET (GCC_CXX_WARNING_FLAGS "-Wctor-dtor-privacy")
else ()
SET(GCC_COMMON_WARNING_FLAGS "-pedantic -Wall -Wextra -Wconversion -Wfloat-equal -Wformat=2 -Winit-self -Winline -Winvalid-pch -Wmissing-include-dirs -Wold-style-cast -Woverloaded-virtual -Wredundant-decls -Wshadow -Wstack-protector -Wstrict-null-sentinel -Wswitch-default -Wswitch-enum")
SET(GCC_COMMON_WARNING_FLAGS "${GCC_COMMON_WARNING_FLAGS} -Wno-unused-parameter")
SET(GCC_CXX_WARNING_FLAGS "-Wctor-dtor-privacy")
endif ()
# add a target to generate API documentation with Doxygen
find_package(Doxygen)
if (DOXYGEN_FOUND)
add_custom_target(doc
${DOXYGEN_EXECUTABLE} ${CMAKE_SOURCE_DIR}/Doxyfile
WORKING_DIRECTORY ${PROJECT_SOURCE_DIR}/src
COMMENT "Generating API documentation with Doxygen" VERBATIM
)
endif(DOXYGEN_FOUND)
find_package(CURL REQUIRED)
find_package(Portaudio REQUIRED)
find_package(Flite REQUIRED)
find_package(LibSndFile REQUIRED)
find_package(PortAudio)
if(${PORTAUDIO_FOUND})
else(${PORTAUDIO_FOUND})
message(STATUS "Could not find PortAudio. This dependency will be downloaded.")
ExternalProject_Add(
PortAudio
SVN_REPOSITORY "https://subversion.assembla.com/svn/portaudio/portaudio/trunk/"
SVN_TRUST_CERT 1
SOURCE_DIR ${CMAKE_CURRENT_SOURCE_DIR}/lib/PortAudio
CONFIGURE_COMMAND ${CMAKE_CURRENT_SOURCE_DIR}/lib/PortAudio/configure --prefix=<INSTALL_DIR>
BUILD_COMMAND ${MAKE}
UPDATE_COMMAND ""
INSTALL_COMMAND ""
LOG_DOWNLOAD ON
LOG_UPDATE ON
LOG_CONFIGURE ON
LOG_BUILD ON
LOG_TEST ON
LOG_INSTALL ON
)
ExternalProject_Get_Property(PortAudio source_dir)
ExternalProject_Get_Property(PortAudio binary_dir)
set(PORTAUDIO_SOURCE_DIR ${source_dir})
set(PORTAUDIO_BINARY_DIR ${binary_dir})
set(PORTAUDIO_LIBRARIES ${PORTAUDIO_BINARY_DIR}/lib/.libs/libportaudio.dylib)
include_directories(${PORTAUDIO_SOURCE_DIR})
set(DEPENDENCIES ${DEPENDENCIES} PortAudio)
endif(${PORTAUDIO_FOUND})
message(STATUS "Could not find parcel. This dependency will be downloaded.")
ExternalProject_Add(
parcel
GIT_REPOSITORY "git://github.com/syb0rg/parcel.git"
GIT_TAG "c2fd447cd2af552021304e64b6bd66c88c170241"
SOURCE_DIR ${CMAKE_CURRENT_SOURCE_DIR}/lib/parcel
UPDATE_COMMAND ""
INSTALL_COMMAND ""
LOG_DOWNLOAD ON
LOG_UPDATE ON
LOG_CONFIGURE ON
LOG_BUILD ON
LOG_TEST ON
LOG_INSTALL ON
)
ExternalProject_Get_Property(parcel source_dir)
ExternalProject_Get_Property(parcel binary_dir)
set(PARCEL_SOURCE_DIR ${source_dir})
set(PARCEL_BINARY_DIR ${binary_dir})
set(PARCEL_LIBRARIES ${PARCEL_BINARY_DIR}/libparcel.a)
set(DEPENDENCIES ${DEPENDENCIES} parcel)
find_package(CURL)
if(${CURL_FOUND})
else(${CURL_FOUND})
message(STATUS "Could not find libcURL. This dependency will be downloaded.")
ExternalProject_Add(
libcurl
GIT_REPOSITORY "git://github.com/bagder/curl.git"
GIT_TAG "1b6bc02fb926403f04061721f9159e9887202a96"
SOURCE_DIR ${CMAKE_CURRENT_SOURCE_DIR}/lib/curl
PATCH_COMMAND ${CMAKE_CURRENT_SOURCE_DIR}/lib/cURL/buildconf
CONFIGURE_COMMAND ${CMAKE_CURRENT_SOURCE_DIR}/lib/cURL/configure --prefix=<INSTALL_DIR>
BUILD_COMMAND ${MAKE}
UPDATE_COMMAND ""
INSTALL_COMMAND ""
LOG_DOWNLOAD ON
LOG_UPDATE ON
LOG_CONFIGURE ON
LOG_BUILD ON
LOG_TEST ON
LOG_INSTALL ON
)
ExternalProject_Get_Property(libcurl source_dir)
ExternalProject_Get_Property(libcurl binary_dir)
set(CURL_SOURCE_DIR ${source_dir})
set(CURL_BINARY_DIR ${binary_dir})
set(CURL_LIBRARIES ${CURL_BINARY_DIR}/lib/.libs/libcurl.dylib)
include_directories(${CURL_SOURCE_DIR})
set(DEPENDENCIES ${DEPENDENCIES} libcurl)
endif(${CURL_FOUND})
find_package(FLAC) # test if FLAC is installed on the system
if(${FLAC_FOUND}) # do something if it is found, maybe tell the user
else(${FLAC_FOUND}) # FLAC isn't installed on the system and needs to be downloaded
ExternalProject_Add(
FLAC
URL "http://downloads.xiph.org/releases/flac/flac-1.3.0.tar.xz"
CONFIGURE_COMMAND ${CMAKE_CURRENT_SOURCE_DIR}/lib/flac/configure --prefix=<INSTALL_DIR>
BUILD_COMMAND ${MAKE}
SOURCE_DIR ${CMAKE_CURRENT_SOURCE_DIR}/lib/flac
UPDATE_COMMAND ""
INSTALL_COMMAND ""
LOG_DOWNLOAD ON
LOG_CONFIGURE ON
LOG_BUILD ON
)
endif(${FLAC_FOUND})
#find_package(LibOgg)
#find_package(LibVorbis)
find_package(LibSndFile)
if(${LIBSNDFILE_FOUND})
else(${LIBSNDFILE_FOUND})
ExternalProject_Add(
LibSndFile
DEPENDS FLAC libogg libvorbis
URL "http://www.mega-nerd.com/libsndfile/files/libsndfile-1.0.25.tar.gz"
CONFIGURE_COMMAND ${CMAKE_CURRENT_SOURCE_DIR}/lib/LibSndFile/configure --prefix=<INSTALL_DIR>
BUILD_COMMAND ${MAKE}
SOURCE_DIR ${CMAKE_CURRENT_SOURCE_DIR}/lib/LibSndFile
UPDATE_COMMAND ""
INSTALL_COMMAND ""
LOG_DOWNLOAD ON
LOG_UPDATE ON
LOG_CONFIGURE ON
LOG_BUILD ON
LOG_TEST ON
LOG_INSTALL ON
)
ExternalProject_Get_Property(LibSndFile source_dir)
ExternalProject_Get_Property(LibSndFile binary_dir)
set(LIBSNDFILE_SOURCE_DIR ${source_dir})
set(LIBSNDFILE_BINARY_DIR ${binary_dir})
set(LIBSNDFILE_LIBRARIES ${LIBSNDFILE_BINARY_DIR}/)
include_directories(${LIBSNDFILE_SOURCE_DIR})
set(DEPENDENCIES ${DEPENDENCIES} LibSndFile)
endif(${LIBSNDFILE_FOUND})
find_package(Flite)
include_directories(src/audio src/web ${PARCEL_SOURCE_DIR})
set(LIBS ${LIBS} ${CURL_LIBRARIES} ${PARCEL_LIBRARIES} ${PORTAUDIO_LIBRARIES} ${FLITE_LIBRARIES} ${LIBSNDFILE_LIBRARY} ${CURL_LIBRARIES})
file(GLOB_RECURSE sources ${PROJECT_SOURCE_DIR}/src/*.c)
add_executable(Khronos ${sources})
add_dependencies(Khronos ${DEPENDENCIES})
target_link_libraries(Khronos ${LIBS})
Answer: Disclaimer: I'm not a CMake user, so this review may be shorter, and mostly focused on style and readability. This question needs some love though, so I'll do my best.
Why do you have comments in the code below?
find_package(FLAC) # test if FLAC is installed on the system
if(${FLAC_FOUND}) # do something if it is found, maybe tell the user
else(${FLAC_FOUND}) # FLAC isn't installed on the system and needs to be downloaded
...
And not in other sections similar to the above? Like this:
find_package(CURL)
if(${CURL_FOUND})
else(${CURL_FOUND})
...
If you really want to add comments to these, then I'd recommend not inlining them, and placing them like this:
# Test if FooBar is installed on the system.
# Do something if it is found, and tell the
# user it needs to be downloaded if it isn't
# found on the system.
find_package(FOOBAR)
if(${FOOBAR_FOUND})
else(${FOOBAR_FOUND})
...
I don't find these comments to really be needed anyway, so I'd just remove them altogether.
Some part of me finds blocks of code like the one below particularly difficult to read:
ExternalProject_Add(
parcel
GIT_REPOSITORY "git://github.com/syb0rg/parcel.git"
GIT_TAG "c2fd447cd2af552021304e64b6bd66c88c170241"
SOURCE_DIR ${CMAKE_CURRENT_SOURCE_DIR}/lib/parcel
UPDATE_COMMAND ""
INSTALL_COMMAND ""
LOG_DOWNLOAD ON
LOG_UPDATE ON
LOG_CONFIGURE ON
LOG_BUILD ON
LOG_TEST ON
LOG_INSTALL ON
)
I'd consider possibly aligning values in a fashion similar to this, if CMake allows it:
ExternalProject_Add(
    parcel
    GIT_REPOSITORY  "git://github.com/syb0rg/parcel.git"
    GIT_TAG         "c2fd447cd2af552021304e64b6bd66c88c170241"
    SOURCE_DIR      ${CMAKE_CURRENT_SOURCE_DIR}/lib/parcel
    UPDATE_COMMAND  ""
    INSTALL_COMMAND ""
    LOG_DOWNLOAD    ON
    LOG_UPDATE      ON
    LOG_CONFIGURE   ON
    LOG_BUILD       ON
    LOG_TEST        ON
    LOG_INSTALL     ON
)
Now it's much clearer to the reader what each value specifically maps to.
You have a couple of indentation issues scattered around various places. For example, this:
add_custom_target(doc
${DOXYGEN_EXECUTABLE} ${CMAKE_SOURCE_DIR}/Doxyfile
WORKING_DIRECTORY ${PROJECT_SOURCE_DIR}/src
COMMENT "Generating API documentation with Doxygen" VERBATIM
)
Should become this:
add_custom_target(doc
    ${DOXYGEN_EXECUTABLE} ${CMAKE_SOURCE_DIR}/Doxyfile
    WORKING_DIRECTORY ${PROJECT_SOURCE_DIR}/src
    COMMENT "Generating API documentation with Doxygen" VERBATIM
)
And this:
ExternalProject_Add(
FLAC
URL "http://downloads.xiph.org/releases/flac/flac-1.3.0.tar.xz"
CONFIGURE_COMMAND ${CMAKE_CURRENT_SOURCE_DIR}/lib/flac/configure --prefix=<INSTALL_DIR>
BUILD_COMMAND ${MAKE}
SOURCE_DIR ${CMAKE_CURRENT_SOURCE_DIR}/lib/flac
UPDATE_COMMAND ""
INSTALL_COMMAND ""
LOG_DOWNLOAD ON
LOG_CONFIGURE ON
LOG_BUILD ON
)
Should become this:
ExternalProject_Add(
    FLAC
    URL "http://downloads.xiph.org/releases/flac/flac-1.3.0.tar.xz"
    CONFIGURE_COMMAND ${CMAKE_CURRENT_SOURCE_DIR}/lib/flac/configure --prefix=<INSTALL_DIR>
    BUILD_COMMAND ${MAKE}
    SOURCE_DIR ${CMAKE_CURRENT_SOURCE_DIR}/lib/flac
    UPDATE_COMMAND ""
    INSTALL_COMMAND ""
    LOG_DOWNLOAD ON
    LOG_CONFIGURE ON
    LOG_BUILD ON
)
While these are minor issues, it's still hard to read when things aren't properly indented.
That's about all I can really cover. If there's anything else you want me to look at, just mention it in the comments, and I'll see what I can do. | {
"domain": "codereview.stackexchange",
"id": 15709,
"tags": "portability, cmake, khronos"
} |
What are the criteria for selecting services or actions? Did they change in ROS2 due to asynchronous services? | Question:
In ROS1, this page says that services
should be used for remote procedure calls that terminate quickly
should never be used for longer running processes
I'm not sure exactly how long "long" is, but I guess this is because services were synchronous in ROS1, am I correct?
Now I think services became asynchronous in ROS2 and this page doesn't say we shouldn't use services for long tasks.
Also, seeing this page, I think the criteria for selecting services or actions are
whether we cancel tasks
whether we need feedback
So I guess we can use services for a bit long tasks now.
For example, suppose that there is a task that takes 30 seconds, and we don't need to know the progress and don't need to cancel it.
Is it okay to use services for the task? Or are there any problems?
Originally posted by Kenji Miyake on ROS Answers with karma: 307 on 2021-05-01
Post score: 1
Answer:
The asynchronicity indeed ensures that your node will not be stalled until a response comes back in ROS 2. If you don't require progress or the option to cancel the request then there will be no issue in your example.
Actually IMO in ROS 2 it's better to think that way about services: you make a request and you don't know how long you will wait for it, you just make sure to handle the response in a callback. As several questions on here show (e.g. this one), trying to stick to the ROS 1 thought model is not well supported.
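The request-then-handle-in-a-callback pattern described here can be sketched without any ROS dependency, using a plain Python future (the `long_service` function and its 0.1 s delay are hypothetical stand-ins for a 30-second service):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def long_service(request):
    """Stand-in for a service call that takes a long time (e.g. 30 s)."""
    time.sleep(0.1)  # shortened for the sketch
    return request * 2

results = []
with ThreadPoolExecutor(max_workers=1) as pool:
    future = pool.submit(long_service, 21)  # make the request
    # register a callback; the caller is not stalled while waiting
    future.add_done_callback(lambda f: results.append(f.result()))
    # ... the "node" is free to do other work here ...
# leaving the with-block waits for completion, so the callback has fired
print(results)  # → [42]
```

In rclpy the shape is the same: the service client returns a future and you attach a done-callback rather than blocking on the response.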
Originally posted by sgvandijk with karma: 649 on 2021-05-02
This answer was ACCEPTED on the original site
Post score: 2 | {
"domain": "robotics.stackexchange",
"id": 36392,
"tags": "ros, ros2, services"
} |
What are typical error rates of quantum computers? | Question: I read in an article that in order to perform error correction on a quantum computer there can only be one error per 10,000 calculations (=unitary transformations).
This sounds like a pretty high bar, but how many errors actually occur typically? Are we close? Are we close to being close?
Article: http://www.cs.virginia.edu/~robins/Computing_with_Quantum_Knots.pdf
Answer: The answer depends on the implementation, as is often the case when asking practical questions about quantum computing. To give you an example of the state of the art in trapped ions, the Lucas group in Oxford can achieve less than one error in 1 million single-qubit gates, which they claim is less than the fault-tolerance threshold. Their error rate for two-qubit gates is more like one in every hundred operations, which will not be sufficient for fault-tolerance. However, it seems likely that error rates for two-qubit gates will continue to decrease as technology develops.
The key issue in all of these implementations is scalability. It is not enough to demonstrate low error rates on just a few qubits. One also needs to be able to design an architecture allowing one to manipulate and store quantum information on tens or hundreds of qubits, while keeping the error rates low, before anything remotely useful can be done with your putative quantum computer. | {
"domain": "physics.stackexchange",
"id": 23466,
"tags": "quantum-information, quantum-computer"
} |
Ampere's law on a finite wire | Question: Consider the circular Amperian loop of radius $r$ in this case. The wire carries a current $I$. Integrating using biot-Savart law gives the field as:
\begin{align}B=\frac{\mu_0 I}{2\pi r}\frac{L}{\sqrt{L^2+4r^2}}\end{align}
However, Using Ampere's Law in line-integral form, the field should be:
\begin{align}B=\frac{\mu_0 I}{2\pi r}\end{align}
As shown in the diagram, the field would always be tangential to the Amperian loop.
I could deduce so far that lengthening the wire would also mean an increasing contribution to the field, and that $\lim_{L\rightarrow\infty}B=\frac{\mu_0 I}{2\pi r}$. Further, the wire does not represent a closed circuit and is thus only logical as a part of a circuit. But as far as I know, Ampere's Law does not require a circuit as it is only about moving charge. So there is a bit uncertainty about that.
Answer: Disclaimer: As pointed out by jensen paull in the comments, there is a problem with this answer. I'm keeping the original version in here because I find it to be pedagogical in terms of understanding that Ampère's Law is only useful in a few situations, but I'll address the flaws in the following section.
While Ampère's Law holds in any magnetostatic situation and can be used if you are assuming the wire to be part of a circuit, it is not always useful. When you compute the field using Ampère's Law, you are assuming implicitly that the field does not depend on the position along the wire (otherwise, you wouldn't be able to compute the necessary integrals that go in using Ampère's Law to compute the field). As a consequence, you are throwing away the dependence on the wire's length. Not surprisingly, your result obtained with this trick is precisely the result for an infinite wire ($L \to \infty$, as you noticed), which is indeed the magnetic field one would obtain for the wire in the case where the field is independent of the position along the wire (i.e., it is the case in which you have cylindrical symmetry, rather than only axial symmetry).
In short, Ampère's Law is always valid (in magnetostatics), but it is not always useful when it comes to computing the magnetic field. It will only be helpful in situations with a lot of symmetry, which this problem lacks. Griffiths' Introduction to Electrodynamics, Sec. 5.3., lists the possible symmetries in which one can employ this trick as being
infinite straight lines;
infinite planes;
infinite solenoids,
toroids.
Erratum: as pointed out by jensen paull in the comments, one does not really need to assume cylindrical symmetry to perform the integrals in Ampère's Law, as I previously stated. The problem is actually more subtle and deeper. This section is my way of rephrasing the point made in the comments.
As pointed out by the OP, the finite wire only makes sense as a part of circuit. This is due to the fact that a finite wire with constant current will fail to satisfy charge conservation: one needs charge coming from nowhere and going to nowhere at the extremities of the wire in order to keep the current constant. However, consider Ampère's Law in differential form. It is given by
$$\nabla\times\mathbf{B} = \mu_0 \mathbf{J},$$
and since the divergence of a curl is always zero, one has a consequence
$$\nabla\cdot\mathbf{J} = 0.$$
Hence, conservation of charge is an integrability condition for Ampère's Law: if the current is not divergenceless, it is impossible to find a magnetic field that respects Ampère's Law.
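The identity underlying this step, that the divergence of a curl vanishes, can be checked numerically with central finite differences (the grid size and the particular smooth test field below are arbitrary choices for illustration):

```python
import numpy as np

# smooth test vector field sampled on a 3D grid
n = 48
x, y, z = np.meshgrid(*[np.linspace(0, 2 * np.pi, n)] * 3, indexing="ij")
Bx, By, Bz = np.sin(y) * np.cos(z), np.sin(z) * np.cos(x), np.sin(x) * np.cos(y)

h = x[1, 0, 0] - x[0, 0, 0]                  # grid spacing
d = lambda f, ax: np.gradient(f, h, axis=ax)  # finite-difference derivative

# curl B = (dBz/dy - dBy/dz, dBx/dz - dBz/dx, dBy/dx - dBx/dy)
Cx = d(Bz, 1) - d(By, 2)
Cy = d(Bx, 2) - d(Bz, 0)
Cz = d(By, 0) - d(Bx, 1)

# div(curl B): the mixed partials cancel pairwise, so this vanishes
# up to floating-point rounding (discrete derivatives along different
# axes commute as linear operators)
div_curl = d(Cx, 0) + d(Cy, 1) + d(Cz, 2)
print(np.abs(div_curl).max())
```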
This explains why one can't use Ampère's Law for the finite wire: it indeed doesn't hold. Maxwell's equations will certainly hold if you consider the entire circuit, but to consider the piece of wire alone is to consider an unphysical situation, which ends up "breaking" the equations.
The Biot–Savart Law, on the other hand, reads
$$\mathbf{B}(\mathbf{x}) = \frac{\mu_0}{4\pi} \int \frac{\mathbf{J}(\mathbf{x}') \times (\mathbf{x} - \mathbf{x}')}{\|\mathbf{x} - \mathbf{x}'\|^3} \mathrm{d}^3{x'}.$$
If we compute the curl of this expression, we'll find it doesn't equal $\mu_0 \mathbf{J}$, but rather has an additional term depending on $\nabla\cdot\mathbf{J}$. If I didn't make any mistakes, it reads
$$\nabla\times\mathbf{B} = \mu_0 \mathbf{J} - \frac{\mu_0}{4\pi} \int \frac{[\nabla'\cdot\mathbf{J}(\mathbf{x}')] (\mathbf{x} - \mathbf{x}')}{\|\mathbf{x} - \mathbf{x}'\|^3} \mathrm{d}^3{x'}.$$
Hence, for divergenceless currents, the Biot–Savart law will yield a solution to Ampère's Law. For more general currents, it yields something else but adding up the pieces of a circuit will cancel out the differences and lead you to a correct result. | {
"domain": "physics.stackexchange",
"id": 89047,
"tags": "electromagnetism"
} |
I Bring 1 kg of Iron to a Flux Density of 1 T. How Much Energy Does That Take? | Question: I'm an EE, not a physicist, so please forgive if this question is dumb.
I learned a bit of magnetics when I took motors 20 years ago, but I don't remember much.
I'm reaching out to the physics community because finding EE's who know the answer to this question can be tough. (Take me, for example.)
I bring 1 kg of iron to a flux density of 1 T. How much energy does that take?
Answer: Usually we can answer questions like this using $ \frac{U}{V} = \frac{B^2}{2 \mu} $ (magnetic energy stored per unit volume), but since iron is nonlinear and ferromagnetic, we need to use its magnetisation curve.
In this case $ \frac{U}{V} = \int_{0}^{B} H \ dB $, so the energy required per unit volume of iron is equal to the area between this curve, the vertical axis, and the line $ B = 1 $. It's quite linear in this region so you could approximate it as a triangle (which is equivalent to just using the original formula above). I estimate it as
$$ \text{energy per unit volume} = \frac{1}{2} \times 1 \times 2000 = 1000 \ \mathrm{J/m^3} = 1 \ \mathrm{kJ/m^3}. $$
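To connect this per-unit-volume figure back to the 1 kg in the question, here is a rough numeric sketch (the linearised B-H slope of 2000 A/m per tesla and the iron density of roughly 7870 kg/m³ are assumptions, not values read off the original curve):

```python
import numpy as np

B = np.linspace(0.0, 1.0, 1001)  # flux density, T
H = 2000.0 * B                   # A/m, linearised magnetisation curve (assumed)

# energy per unit volume: integral of H dB, trapezoidal rule
u = np.sum(0.5 * (H[1:] + H[:-1]) * np.diff(B))  # ≈ 1000 J/m^3

volume = 1.0 / 7870.0  # m^3 occupied by 1 kg of iron (assumed density)
E = u * volume         # energy for the whole kilogram, ≈ 0.13 J
print(u, E)
```

So under these assumptions, magnetising 1 kg of iron to 1 T stores only on the order of a tenth of a joule in the material itself.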
Perhaps this diagram looks familiar - we are looking at the dashed part of the curve (initial charging up) in this analysis. | {
"domain": "physics.stackexchange",
"id": 97697,
"tags": "electromagnetism, magnetic-moment"
} |
ROS pcl run-time error segmenting planes | Question:
Hello all,
I am trying to write a program that subscribes to the openni_launch's pointcloud topic, downsamples the data, and publishes an extracted pointcloud plane. So far the code compiles fine but upon running it produces this error:
/usr/include/boost/smart_ptr/shared_ptr.hpp:412: boost::shared_ptr<T>::reference boost::shared_ptr<T>::operator*() const [with T = sensor_msgs::PointCloud2_<std::allocator<void> >, boost::shared_ptr<T>::reference = sensor_msgs::PointCloud2_<std::allocator<void> >&]: Assertion `px != 0' failed.
Aborted (core dumped)
I am unsure of where the error is coming from and would appreciate any insight. Here is the code I have written thus far:
// Includes and whatnot
ros::Publisher pub;
sensor_msgs::PointCloud2::Ptr downsampled,output;
pcl::PointCloud<pcl::PointXYZ>::Ptr output_p, downsampled_XYZ;
void callback(const sensor_msgs::PointCloud2ConstPtr& input)
{
// Do some downsampling to the point cloud
pcl::VoxelGrid<sensor_msgs::PointCloud2> sor;
sor.setInputCloud (input);
sor.setLeafSize (0.01f, 0.01f, 0.01f);
sor.filter (*downsampled);
// Change from type sensor_msgs::PointCloud2 to pcl::PointXYZ
pcl::fromROSMsg (*downsampled, *downsampled_XYZ);
pcl::ModelCoefficients::Ptr coefficients (new pcl::ModelCoefficients ());
pcl::PointIndices::Ptr inliers (new pcl::PointIndices ());
// Create the segmentation object
pcl::SACSegmentation<pcl::PointXYZ> seg;
// Optional
seg.setOptimizeCoefficients (true);
// Mandatory
seg.setModelType (pcl::SACMODEL_PLANE);
seg.setMethodType (pcl::SAC_RANSAC);
seg.setMaxIterations (1000);
seg.setDistanceThreshold (0.01);
// Create the filtering object
pcl::ExtractIndices<pcl::PointXYZ> extract;
// Segment the largest planar component from the cloud
seg.setInputCloud (downsampled_XYZ);
seg.segment (*inliers, *coefficients);
if (inliers->indices.size () == 0)
{
std::cerr << "Could not estimate a planar model for the given dataset." << std::endl;
}
// Extract the inliers
extract.setInputCloud (downsampled_XYZ);
extract.setIndices (inliers);
extract.setNegative (false);
extract.filter (*output_p);
std::cerr << "PointCloud representing the planar component: " << output_p->width * output_p->height << " data points." << std::endl;
// Create the filtering object
// extract.setNegative (true);
// extract.filter (*cloud_f);
// cloud_filtered.swap (cloud_f);
pcl::toROSMsg (*output_p, *output);
//Publish the results
pub.publish(output);
}
int
main (int argc, char** argv)
{
// INITIALIZE ROS
ros::init (argc, argv, "table");
ros::NodeHandle nh;
ros::Subscriber sub = nh.subscribe("/camera/depth_registered/points", 1, callback);
pub = nh.advertise<sensor_msgs::PointCloud2> ("table", 1);
ros::spin();
return (0);
}
Thank you in advance!
Cheers,
Martin
Originally posted by MartinW on ROS Answers with karma: 464 on 2012-07-25
Post score: 1
Original comments
Comment by MartinW on 2012-07-25:
Found my answer here: http://stackoverflow.com/questions/3541179/shared-ptr-assertion-px-0-failed
Answer:
I am guessing that you should allocate the memory for output_p and downsampled_XYZ when you enter the callback.
Also, one more advice, after
pcl::toROSMsg (*output_p, *output);
You should set the frame_id of output to the same value as input
output.header.frame_id = input->header.frame_id;
otherwise if you try to visualize the point cloud with rviz, it will complain because it would not know the frame of reference for output.
Originally posted by Martin Peris with karma: 5625 on 2012-07-25
This answer was ACCEPTED on the original site
Post score: 3
Original comments
Comment by MartinW on 2012-07-26:
Thanks! I got my program running and I've extracted the pointcloud plane of a table in front of my Kinect! I tried to add your line of code "output.header.frame_id = input->header.frame_id;"
But when I did it said sensor_msgs::pointcloud2 has no member header. But I could see it in Rviz anyway! | {
"domain": "robotics.stackexchange",
"id": 10365,
"tags": "ros"
} |
Crossfade for files vs for speakers | Question: I'm fairly new to crossfading albeit having a fairly good understanding of the mathematics and physical units.
What I can't wrap my head around is as to whether there is a difference between crossfading between files and between physical speakers.
What I want to achieve is a constant loudness/volume level across the fade, leading to the questions:
in order to generate a 3rd mono WAV file C from two other mono WAV files A and B of equal loudness/volume, how do I guarantee that C.wav has the same loudness/volume as A.wav and B.wav individually?
Do I choose equal power fading or equal gain fading?
what is the method of choice for crossfading between two speakers, say the left and right one of a headphone, if I want the sound to "move" between the speakers with equal total loudness, do I have to use equal power fading or equal gain fading?
Answer: The answer depends on the degree of correlation between the signals. Correlated signals sum in amplitude, uncorrelated signals sum in power.
Most wave files are essentially uncorrelated, so for your first case constant-power fading is the best choice.
The second case (which is mainly a balance control) is more complicated. For headphones the two signals do not physically interact at all, but the loudness summing happens perceptually in your brain. Without going into too much detail of how this works: the best choice here is also constant-power fading.
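The two fade laws discussed above can be written down concretely; the sine/cosine pair below is one common constant-power choice (other constant-power curves exist):

```python
import numpy as np

t = np.linspace(0.0, 1.0, 101)  # fade position: 0 = all A, 1 = all B

# equal (constant) gain: amplitudes sum to 1 -- right for correlated signals
gA_gain, gB_gain = 1.0 - t, t

# equal (constant) power: squared gains sum to 1 -- right for uncorrelated signals
gA_pow, gB_pow = np.cos(t * np.pi / 2), np.sin(t * np.pi / 2)

assert np.allclose(gA_gain + gB_gain, 1.0)      # constant amplitude sum
assert np.allclose(gA_pow**2 + gB_pow**2, 1.0)  # constant power sum
```

At the midpoint the constant-power law gives each signal a gain of 1/sqrt(2), about -3 dB, which is where the familiar "3 dB" figure for pan laws comes from.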
Two loudspeakers in a room is another can of worms: At low frequencies the two loudspeaker signals at the listener position will sum in amplitude and at high frequencies they will sum in power. So the correct fading type depends on the frequency.
The "cutoff" frequency between the two states depends on the physical properties of the setup: distance between the speakers, acoustic properties of the room, listening distance etc. In most residential cases, it's fairly low, maybe 80Hz-160Hz. Hence most implementations will use a constant-power fade which gets the bulk of the spectrum correct. However this will result in the Left or Right location having roughly 3 dB less bass than the center position. That can be corrected using a dynamic low-shelf filter that's linked to the fading control. | {
"domain": "dsp.stackexchange",
"id": 11704,
"tags": "discrete-signals, audio"
} |
Inner product in a Hilbert space producing real numbers | Question: If we have some vectors, we know
$$\langle a | b\rangle=\langle b|a \rangle ^*$$
Then if we consider
$$\langle a | a\rangle=\langle a|a \rangle ^*$$
Then this tells us we will always get a real number. But why? In a Hilbert space, are the vectors all imaginary? Meaning that when you take the inner product (dot product) between two imaginary vectors they produce real numbers? Strictly speaking, does this real number need to be positive? My textbook says "The space $\mathcal{H}$ is endowed with a positive-definite scalar product, which makes it a Hilbert space", does positive-definite scalar product just mean the product is real or does it also mean it is positive?
Also why doesnt
$$\langle a | a\rangle=\langle a|a \rangle ^*=0$$
If you take the vector $|a\rangle$ then its bra and ket elements are orthogonal, shouldn't the dot product between two orthogonal vectors give zero? My textbook says the above is only the case when the vector is zero.
Answer: An inner product space is a vector space V over a field F together with an inner product that satisfies the following 3 properties:
$ \langle a | b\rangle=\langle b|a \rangle ^* $
$ \langle l \cdot a + m \cdot b | c\rangle=l \cdot \langle a|c \rangle + m \cdot \langle b|c \rangle $
If a is not 0, then: $ \langle a | a\rangle > 0 $
Now, I will answer your questions:
In a Hilbert space, are the vectors all imaginary?
There is no such thing as an imaginary vector. Did you mean a vector with imaginary components when written in some basis? Then the answer is obviously NO.
the inner product (dot product) between two imaginary vectors they produce real numbers?
Yes, the inner product between 2 vectors with purely imaginary components will be a real number.
Strictly speaking, does this real number need to be positive?
No, it can be any real number.
does positive-definite scalar product just mean the product is real or does it also mean it is positive?
It means that the inner product of a non-zero vector with itself, is a positive real number.
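These properties are easy to check numerically for the standard complex inner product (`np.vdot` conjugates its first argument, matching the physicists' ⟨a|b⟩ convention; the specific vectors below are arbitrary examples):

```python
import numpy as np

a = np.array([1 + 2j, 3 - 1j, 0.5j])
b = np.array([2j, 1 + 1j, -3.0])

ab = np.vdot(a, b)  # <a|b>: conjugates the first argument
ba = np.vdot(b, a)  # <b|a>
assert np.isclose(ab, np.conj(ba))  # <a|b> = <b|a>*

aa = np.vdot(a, a)                  # <a|a>
assert np.isclose(aa.imag, 0.0)     # always real ...
assert aa.real > 0                  # ... and positive for a != 0
```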
If you take the vector |a⟩ then its bra and ket elements are orthogonal
Nope. That is impossible. | {
"domain": "physics.stackexchange",
"id": 86546,
"tags": "quantum-mechanics, hilbert-space"
} |
Help understanding the word 'glycosaminoglycan'? | Question: In my biochemistry course I have to know about various polysaccharides and variants, and I am struggling with remembering them. I think it would help if I could break down their names.
For 'glycosaminoglycan' i think
-glycan just refers to the fact that it is a polysaccharide. Some, but not all, polysaccharides have glycan in their name, e.g. glycogen vs amylose and amylopectin
-amino means there is an amino group somewhere in there?
That's all I can think of...
Answer: Glycosamines (or amino-sugars) are monosaccharide derivatives with an amino group substituting the hydroxyl group at second carbon. Glycosaminoglycans (GAG) usually contain repeats of a disaccharide unit one of the component of which is an amino sugar. GAGs are not strictly polymers of amino sugar monomers as the name might misleadingly suggest. | {
"domain": "biology.stackexchange",
"id": 6378,
"tags": "biochemistry, molecular-biology, terminology"
} |
Counting States in the trim automaton for $(L_1 \cup L_2 \cup \ldots \cup L_p) \circ L'$ | Question: Preliminaries. Let $n,m,p \in \mathbb{N}$ with $n,m,p > 1$. We allow that $p$
could be large but still bounded by a function of $n$: $p = O(2^n)$. Let our alphabet be $\Sigma = \{0,1\}$, with
non-empty languages $ L_1,L_2,...L_p \subseteq \Sigma^n$ and $ L' \subseteq \Sigma^m$. The other preliminaries are the same as the previous question:
We follow
the standard definition
for deterministic finite-state automata except that we allow the state-transition function $\delta$
to be a partial function. In other words, an FSM has a finite number of states with transitions between
them. We define the depth of a state $s$ as the length of the shortest path from the start state (at depth zero) to $s$.
A state $q$ is considered accessible if there is a path from the start state to $q$. A state $q$ is
called co-accessible if there is a path from $q$ to a final state. Finally, an automaton is called trim
if all its states are both accessible and co-accessible.
This is defined here.
Question 1: Consider the minimal trim finite-state automaton $A$ for the language $(L_1 \cup L_2) \circ L'$. We observe
that this language is also finite.
Can we conclude that the number of states in $A$ at level $n$ is 1?
Question 2: Consider the minimal trim finite-state automaton $B$ for the language $(L_1 \cup L_2 \cup \ldots \cup L_p) \circ L'$. We observe
that this language is also finite.
Can we conclude that the number of states in $B$ at level $n$ is 1?
Argument: If we let $L = L_1 \cup L_2 \cup \ldots \cup L_p$, the argument is similar: All the strings in $L$ are in the same equivalence class for the Myhill-Nerode congruence
for $L \circ L'$, since there is no distinguishing extension for any two strings in $L \subseteq \Sigma^n$.
All strings not in $L$ will land in the sink state, which is trimmed out of the minimal trim automaton. Could the proof offered in the previous question work here as well?
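The argument can be sanity-checked on a small instance by counting distinct nonempty residuals, which correspond to states of the minimal trim automaton, after reading $n$ symbols (the toy languages below are arbitrary examples chosen for illustration):

```python
from itertools import product

def residual(lang, w):
    """Left quotient w^{-1}L: suffixes completing prefix w to a word of lang."""
    return frozenset(x[len(w):] for x in lang if x.startswith(w))

n, m = 3, 2
L = {"000", "011", "110"}  # the union L_1 ∪ ... ∪ L_p, a subset of {0,1}^n
Lprime = {"10", "01"}      # L', a subset of {0,1}^m
concat = {u + v for u in L for v in Lprime}  # L ∘ L'

# states at depth n of the minimal trim DFA = distinct nonempty residuals
depth_n = {residual(concat, "".join(w)) for w in product("01", repeat=n)}
depth_n.discard(frozenset())  # empty residual = sink state, trimmed away
print(len(depth_n))  # → 1 (every word of L leads to the same state)
```

Every prefix of length $n$ that lies in $L$ has the same residual, namely $L'$ itself, and every other prefix has the empty residual, which matches the Myhill-Nerode argument above.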
Question 3: How does the number of states at level $n$ change if we drop the stipulation that these automata be trim and let $\delta$ be a total function?
Answer: Yes, and yes. This follows from the result in your prior question, letting $L = L_1 \cup L_2$ or $L = L_1 \cup \dots \cup L_p$. | {
"domain": "cs.stackexchange",
"id": 21154,
"tags": "regular-languages, finite-automata"
} |
Scalar predictor - is it better to have a lot of training data that is less precise? Or fewer training data that is more precise? | Question: I am quite new to this neural network stuff, so please bear with me :)
TL;DR:
I want to train a neural network to predict a scalar score from 32 binary features. Training data is expensive to come by, there is a tradeoff between precision and amount of training samples. Which scenario will likely give me a better result:
Training the network with 100 distinct samples of training data where the output (-1 to 1) is averaged from 100 runs of the same sample, and therefore fairly precise
Training the network with 1000 distinct samples of training data where the output (-1 to 1) is averaged from 10 runs of the same sample, and therefore less precise
Training the network with 10000 distinct samples of training data where the output is just binary (-1 or 1), and therefore very imprecise
Something else?
More context:
I am creating an AI for an imperfect information 4-player card game with 32 cards. I already have implemented a MinMax-based tree search that solves the perfect information version of the game, i.e. this can deliver me the score that is reached at the end of the game, assuming perfect play of all players, for the case that the full card distribution is known to all players. In reality, of course, each player only knows their own hand of cards. For the purposes of the AI I get around this by repeating the perfect information game many times while randomly assigning the unknown cards.
I now want to train a neural network that predicts the win probability that is reached with a given hand of cards (of course, not knowing the cards of the other players). I imagine this would be a value between -1 and 1, where 0 means 50% win probability and 1 means 100% win probability. The input features would be 32 binary values, representing the hand of cards. I want to use my MinMax algorithm to generate the training data for the network.
In a perfect world, I would iterate through 1 million random hands of cards and determine a precise win probability for each of them by playing 1 million randomized perfect information games based on that hand. The reality, however, is that my MinMax algorithm is fairly expensive, and I can't improve it much more. So the total number of perfect information games I can go through is limited.
Now I am wondering: How do I maximize the effectiveness of my training data generation process? I guess the tradeoff is:
If I go through many perfect information iterations for each given hand, the win probability in my training data will be fairly close to the 'real' win probability, so very precise
If I go through fewer (or in extreme case, only 1) perfect information iterations for each given hand, the win probability in my training data will be less precise. However, statistically it should still all even out in the end. Plus, I will have a lot more training samples, covering a much wider range of situations.
In that context I am wondering which side of this spectrum - precision vs. amount - will give me the better tradeoff.
Side note: For my validation data set, of course I will have to determine a fairly precise win probability for at least some samples, where I will probably use more iterations per sample than for the training data.
Answer: super-interesting question!
My approach to the problem would be not to do any preprocessing on the data. That is, feed all the experiments to the network with the target being the 0/1 variable corresponding to lose/win. For example, if you have a dataset like
| hand of cards | game output |
|-------------------|-------------|
| [1, 0, 0, ..., 1] | 1 |
| [1, 0, 0, ..., 1] | 0 |
| [1, 0, 0, ..., 1] | 1 |
| [0, 1, 1, ..., 1] | 1 |
| [0, 1, 1, ..., 1] | 1 |
| [0, 1, 1, ..., 1] | 1 |
instead of training the model with
| hand of cards | winning prob |
|-------------------|--------------|
| [1, 0, 0, ..., 1] | 0.66 |
| [0, 1, 1, ..., 1] | 1 |
I would train the model with the first dataset, and try to predict the game output. That is, use a classification model instead of a regression model. Of course, with this approach, your dataset will have entries with the same features and different targets; however, this is not a problem, since you can interpret the output of the classification model as the probability of winning or losing. From my experience, when I've dealt with similar problems, this approach is the one that gave the best results.
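A quick way to see why the noisy binary targets are not a problem (toy numbers of my own, assuming the same total rollout budget in both scenarios): each 0/1 outcome is an unbiased sample of the win probability, so whether you average inside the labels or let the loss do the averaging, the model converges to the same target.

```python
import numpy as np

rng = np.random.default_rng(42)
true_p = 0.66      # hypothetical 'real' win rate of one hand
# Total budget of 10,000 MinMax rollouts, spent two ways:

# Scenario 1: 100 hands, each labeled with the mean of 100 rollouts (precise labels)
precise_labels = rng.binomial(100, true_p, size=100) / 100
# Scenario 3: 10,000 hands, each labeled with a single 0/1 rollout (noisy labels)
binary_labels = rng.binomial(1, true_p, size=10_000)

# Both label sets are unbiased estimates of the win probability; a model trained
# with cross-entropy on the binary labels recovers the same target in expectation.
print(abs(precise_labels.mean() - true_p) < 0.05,
      abs(binary_labels.mean() - true_p) < 0.05)  # -> True True
```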
On the other hand, I would try an approach using decision trees, such as XGBoost or a simple RandomForest, since they tend to work better with the kind of data you are dealing with. | {
"domain": "datascience.stackexchange",
"id": 10930,
"tags": "neural-network, training"
} |
cmake error upon catkin make | Question:
I'm new to the software, I was doing the following tutorial (http://wiki.ros.org/hector_quadrotor/Tutorials/Quadrotor%20outdoor%20flight%20demo) and got a cmake error, I'm not sure how to " Add the installation prefix of "hardware_interface" to CMAKE_PREFIX_PATH" as it suggests, how can I fix it?
Full terminal text:
user@ubuntu:~$ mkdir ~/hector_quadrotor_tutorial
user@ubuntu:~$ cd ~/hector_quadrotor_tutorial
user@ubuntu:~/hector_quadrotor_tutorial$ wstool init src https://raw.github.com/tu-darmstadt-ros-pkg/hector_quadrotor/hydro-devel/tutorials.rosinstall
Using initial elements from: https://raw.github.com/tu-darmstadt-ros-pkg/hector_quadrotor/hydro-devel/tutorials.rosinstall
Writing /home/user/hector_quadrotor_tutorial/src/.rosinstall
[hector_quadrotor] Fetching https://github.com/tu-darmstadt-ros-pkg/hector_quadrotor.git (version hydro-devel) to /home/user/hector_quadrotor_tutorial/src/hector_quadrotor
Cloning into '/home/user/hector_quadrotor_tutorial/src/hector_quadrotor'...
remote: Counting objects: 2851, done.
remote: Total 2851 (delta 0), reused 0 (delta 0), pack-reused 2851
Receiving objects: 100% (2851/2851), 726.82 KiB | 332.00 KiB/s, done.
Resolving deltas: 100% (1758/1758), done.
Checking connectivity... done.
[hector_quadrotor] Done.
[hector_slam] Fetching https://github.com/tu-darmstadt-ros-pkg/hector_slam.git (version catkin) to /home/user/hector_quadrotor_tutorial/src/hector_slam
Cloning into '/home/user/hector_quadrotor_tutorial/src/hector_slam'...
remote: Counting objects: 1827, done.
remote: Total 1827 (delta 0), reused 0 (delta 0), pack-reused 1827
Receiving objects: 100% (1827/1827), 338.44 KiB | 236.00 KiB/s, done.
Resolving deltas: 100% (1112/1112), done.
Checking connectivity... done.
[hector_slam] Done.
[hector_localization] Fetching https://github.com/tu-darmstadt-ros-pkg/hector_localization.git (version catkin) to /home/user/hector_quadrotor_tutorial/src/hector_localization
Cloning into '/home/user/hector_quadrotor_tutorial/src/hector_localization'...
remote: Counting objects: 2620, done.
remote: Total 2620 (delta 0), reused 0 (delta 0), pack-reused 2620
Receiving objects: 100% (2620/2620), 3.69 MiB | 1.22 MiB/s, done.
Resolving deltas: 100% (1948/1948), done.
Checking connectivity... done.
[hector_localization] Done.
[hector_gazebo] Fetching https://github.com/tu-darmstadt-ros-pkg/hector_gazebo.git (version hydro-devel) to /home/user/hector_quadrotor_tutorial/src/hector_gazebo
Cloning into '/home/user/hector_quadrotor_tutorial/src/hector_gazebo'...
remote: Counting objects: 1499, done.
remote: Total 1499 (delta 0), reused 0 (delta 0), pack-reused 1499
Receiving objects: 100% (1499/1499), 2.08 MiB | 789.00 KiB/s, done.
Resolving deltas: 100% (950/950), done.
Checking connectivity... done.
[hector_gazebo] Done.
[hector_models] Fetching https://github.com/tu-darmstadt-ros-pkg/hector_models.git (version hydro-devel) to /home/user/hector_quadrotor_tutorial/src/hector_models
Cloning into '/home/user/hector_quadrotor_tutorial/src/hector_models'...
remote: Counting objects: 630, done.
remote: Total 630 (delta 0), reused 0 (delta 0), pack-reused 630
Receiving objects: 100% (630/630), 207.65 KiB | 199.00 KiB/s, done.
Resolving deltas: 100% (414/414), done.
Checking connectivity... done.
[hector_models] Done.
update complete.
user@ubuntu:~/hector_quadrotor_tutorial$ catkin_make
Base path: /home/user/hector_quadrotor_tutorial
Source space: /home/user/hector_quadrotor_tutorial/src
Build space: /home/user/hector_quadrotor_tutorial/build
Devel space: /home/user/hector_quadrotor_tutorial/devel
Install space: /home/user/hector_quadrotor_tutorial/install
Creating symlink "/home/user/hector_quadrotor_tutorial/src/CMakeLists.txt" pointing to "/opt/ros/indigo/share/catkin/cmake/toplevel.cmake"
####
#### Running command: "cmake /home/user/hector_quadrotor_tutorial/src -DCATKIN_DEVEL_PREFIX=/home/user/hector_quadrotor_tutorial/devel -DCMAKE_INSTALL_PREFIX=/home/user/hector_quadrotor_tutorial/install -G Unix Makefiles" in "/home/user/hector_quadrotor_tutorial/build"
####
-- The C compiler identification is GNU 4.8.2
-- The CXX compiler identification is GNU 4.8.2
-- Check for working C compiler: /usr/bin/cc
-- Check for working C compiler: /usr/bin/cc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working CXX compiler: /usr/bin/c++
-- Check for working CXX compiler: /usr/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Using CATKIN_DEVEL_PREFIX: /home/user/hector_quadrotor_tutorial/devel
-- Using CMAKE_PREFIX_PATH: /home/user/catkin_ws/devel;/opt/ros/indigo
-- This workspace overlays: /home/user/catkin_ws/devel;/opt/ros/indigo
-- Found PythonInterp: /usr/bin/python (found version "2.7.6")
-- Using PYTHON_EXECUTABLE: /usr/bin/python
-- Using Debian Python package layout
-- Using empy: /usr/bin/empy
-- Using CATKIN_ENABLE_TESTING: ON
-- Call enable_testing()
-- Using CATKIN_TEST_RESULTS_DIR: /home/user/hector_quadrotor_tutorial/build/test_results
-- Looking for include file pthread.h
-- Looking for include file pthread.h - found
-- Looking for pthread_create
-- Looking for pthread_create - not found
-- Looking for pthread_create in pthreads
-- Looking for pthread_create in pthreads - not found
-- Looking for pthread_create in pthread
-- Looking for pthread_create in pthread - found
-- Found Threads: TRUE
-- Found gtest sources under '/usr/src/gtest': gtests will be built
-- Using Python nosetests: /usr/bin/nosetests-2.7
-- catkin 0.6.14
-- BUILD_SHARED_LIBS is on
-- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-- ~~ traversing 37 packages in topological order:
-- ~~ - hector_components_description
-- ~~ - hector_gazebo (metapackage)
-- ~~ - hector_gazebo_worlds
-- ~~ - hector_localization (metapackage)
-- ~~ - hector_models (metapackage)
-- ~~ - hector_quadrotor
-- ~~ - hector_quadrotor_demo
-- ~~ - hector_quadrotor_description
-- ~~ - hector_sensors_description
-- ~~ - hector_sensors_gazebo
-- ~~ - hector_slam (metapackage)
-- ~~ - hector_slam_launch
-- ~~ - hector_xacro_tools
-- ~~ - hector_uav_msgs
-- ~~ - hector_map_tools
-- ~~ - hector_nav_msgs
-- ~~ - hector_geotiff
-- ~~ - hector_geotiff_plugins
-- ~~ - hector_marker_drawing
-- ~~ - hector_quadrotor_controller
-- ~~ - hector_quadrotor_controller_gazebo
-- ~~ - hector_quadrotor_model
-- ~~ - hector_quadrotor_teleop
-- ~~ - hector_compressed_map_transport
-- ~~ - hector_gazebo_plugins
-- ~~ - hector_imu_attitude_to_tf
-- ~~ - hector_imu_tools
-- ~~ - hector_map_server
-- ~~ - hector_pose_estimation_core
-- ~~ - hector_pose_estimation
-- ~~ - hector_quadrotor_gazebo_plugins
-- ~~ - hector_quadrotor_pose_estimation
-- ~~ - hector_trajectory_server
-- ~~ - message_to_tf
-- ~~ - hector_mapping
-- ~~ - hector_gazebo_thermal_camera
-- ~~ - hector_quadrotor_gazebo
-- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-- +++ processing catkin package: 'hector_components_description'
-- ==> add_subdirectory(hector_models/hector_components_description)
-- +++ processing catkin metapackage: 'hector_gazebo'
-- ==> add_subdirectory(hector_gazebo/hector_gazebo)
-- +++ processing catkin package: 'hector_gazebo_worlds'
-- ==> add_subdirectory(hector_gazebo/hector_gazebo_worlds)
-- +++ processing catkin metapackage: 'hector_localization'
-- ==> add_subdirectory(hector_localization/hector_localization)
-- +++ processing catkin metapackage: 'hector_models'
-- ==> add_subdirectory(hector_models/hector_models)
-- +++ processing catkin package: 'hector_quadrotor'
-- ==> add_subdirectory(hector_quadrotor/hector_quadrotor)
-- +++ processing catkin package: 'hector_quadrotor_demo'
-- ==> add_subdirectory(hector_quadrotor/hector_quadrotor_demo)
-- +++ processing catkin package: 'hector_quadrotor_description'
-- ==> add_subdirectory(hector_quadrotor/hector_quadrotor_description)
-- +++ processing catkin package: 'hector_sensors_description'
-- ==> add_subdirectory(hector_models/hector_sensors_description)
-- +++ processing catkin package: 'hector_sensors_gazebo'
-- ==> add_subdirectory(hector_gazebo/hector_sensors_gazebo)
-- +++ processing catkin metapackage: 'hector_slam'
-- ==> add_subdirectory(hector_slam/hector_slam)
-- +++ processing catkin package: 'hector_slam_launch'
-- ==> add_subdirectory(hector_slam/hector_slam_launch)
-- +++ processing catkin package: 'hector_xacro_tools'
-- ==> add_subdirectory(hector_models/hector_xacro_tools)
-- +++ processing catkin package: 'hector_uav_msgs'
-- ==> add_subdirectory(hector_quadrotor/hector_uav_msgs)
-- Using these message generators: gencpp;genlisp;genpy
-- hector_uav_msgs: 21 messages, 0 services
-- +++ processing catkin package: 'hector_map_tools'
-- ==> add_subdirectory(hector_slam/hector_map_tools)
-- Using these message generators: gencpp;genlisp;genpy
-- +++ processing catkin package: 'hector_nav_msgs'
-- ==> add_subdirectory(hector_slam/hector_nav_msgs)
-- Using these message generators: gencpp;genlisp;genpy
-- hector_nav_msgs: 0 messages, 5 services
-- +++ processing catkin package: 'hector_geotiff'
-- ==> add_subdirectory(hector_slam/hector_geotiff)
-- Using these message generators: gencpp;genlisp;genpy
-- Looking for Q_WS_X11
-- Looking for Q_WS_X11 - found
-- Looking for Q_WS_WIN
-- Looking for Q_WS_WIN - not found
-- Looking for Q_WS_QWS
-- Looking for Q_WS_QWS - not found
-- Looking for Q_WS_MAC
-- Looking for Q_WS_MAC - not found
-- Found Qt4: /usr/bin/qmake (found suitable version "4.8.6", minimum required is "4.6")
-- Found PkgConfig: /usr/bin/pkg-config (found version "0.26")
-- checking for module 'eigen3'
-- found eigen3, version 3.2.0
-- Found Eigen: /usr/include/eigen3
-- Eigen found (include: /usr/include/eigen3)
-- +++ processing catkin package: 'hector_geotiff_plugins'
-- ==> add_subdirectory(hector_slam/hector_geotiff_plugins)
-- Using these message generators: gencpp;genlisp;genpy
-- +++ processing catkin package: 'hector_marker_drawing'
-- ==> add_subdirectory(hector_slam/hector_marker_drawing)
-- Eigen found (include: /usr/include/eigen3)
-- +++ processing catkin package: 'hector_quadrotor_controller'
-- ==> add_subdirectory(hector_quadrotor/hector_quadrotor_controller)
-- Using these message generators: gencpp;genlisp;genpy
CMake Error at /opt/ros/indigo/share/catkin/cmake/catkinConfig.cmake:75 (find_package):
Could not find a package configuration file provided by
"hardware_interface" with any of the following names:
hardware_interfaceConfig.cmake
hardware_interface-config.cmake
Add the installation prefix of "hardware_interface" to CMAKE_PREFIX_PATH or
set "hardware_interface_DIR" to a directory containing one of the above
files. If "hardware_interface" provides a separate development package or
SDK, be sure it has been installed.
Call Stack (most recent call first):
hector_quadrotor/hector_quadrotor_controller/CMakeLists.txt:7 (find_package)
-- Configuring incomplete, errors occurred!
See also "/home/user/hector_quadrotor_tutorial/build/CMakeFiles/CMakeOutput.log".
See also "/home/user/hector_quadrotor_tutorial/build/CMakeFiles/CMakeError.log".
Invoking "cmake" failed
user@ubuntu:~/hector_quadrotor_tutorial$ ^C
user@ubuntu:~/hector_quadrotor_tutorial$
Originally posted by c123 on ROS Answers with karma: 1 on 2015-06-04
Post score: 0
Original comments
Comment by gvdhoorn on 2015-06-05:
The tutorial you linked has two options: Install binary packages and Install from source. Any particular reason you cannot install the binary pkgs? That would remove the need to build everything locally with catkin_make.
Comment by gvdhoorn on 2015-06-05:
Btw, the solution to your issue is to make sure you have all dependencies of the hector_quadrotor_controller pkg installed (using rosdep install .. fi). But if you install the binary versions, that would all be a non-issue, as all dependencies are already taken care of.
Answer:
I just used the binary versions, thanks for the help.
Originally posted by c123 with karma: 1 on 2015-06-08
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 21845,
"tags": "ros"
} |
Electricity & Magnetism - Is an electric field infinite? | Question: The inverse square law for an electric field is:
$$
E = \frac{Q}{4\pi\varepsilon_{0}r^2}
$$
Here: $$\frac{Q}{\varepsilon_{0}}$$
is the source strength of the charge. It is the point charge divided by the vacuum permittivity or electric constant, I would like very much to know what is meant by source strength as I can't find it anywhere on the internet. Coming to the point an electric field is also described as:
$$Ed = \frac{Fd}{Q} = \Delta V$$
This would mean that an electric field can act only over a certain distance. But according to the Inverse Square Law, the denominator is the surface area of a sphere and we can extend this radius to infinity and still have a value for the electric field. Does this mean that any electric field extends to infinity but its intensity diminishes with increasing distance? If that is so, then an electric field is capable of applying infinite energy on any charged particle: from the above mentioned equation, if the distance over which the electric field acts is infinite, then the work done on any charged particle by the field is infinite, and therefore the energy supplied by an electric field is infinite. This clashes directly with energy-mass conservation laws. Maybe I don't understand this concept properly, so I was hoping someone would help me understand this better.
Answer: It goes out forever, but the total energy it imparts is finite. The reason is that when things fall off as the square of the distance, the sum is finite. For example:
$$ \sum_n {1\over n^2} = {1\over 1} + {1\over 4} + {1\over 9} + {1\over 16} + {1\over 25} + ... = {\pi^2\over 6} $$
This sum has a finite limit. Likewise, the total energy you gain from moving a positive charge away from another positive charge from position $r$ to infinity is the finite quantity
$$\int_r^{\infty} {Qq\over r^2} dr = {Qq\over r}$$
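A quick numerical check of this integral (my own sketch, taking $Qq = 1$ and $r = 1$) confirms the finite limit even as the upper bound grows very large:

```python
import numpy as np

# Midpoint-rule integral of 1/r^2 from r = 1 out to r = 10^4 (with Qq = 1):
# the work approaches the finite limit Qq/r = 1 rather than diverging.
edges = np.linspace(1.0, 1e4, 1_000_001)
mid = 0.5 * (edges[:-1] + edges[1:])   # midpoint of each subinterval
work = np.sum(np.diff(edges) / mid**2)
print(round(work, 3))  # -> 1.0
```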
So there is no infinity. In two dimensions (or in one), the electric field falls off only like ${1\over r}$ so the potential energy is infinite, and objects thrown apart get infinite speed in the analogous two-dimensional situation. | {
"domain": "physics.stackexchange",
"id": 10599,
"tags": "electromagnetism, electrostatics, electric-fields, potential-energy"
} |
Finding acceleration of wedge | Question: Consider a situation in which there is an object placed above a triangular wedge with angle $\theta$ as shown. The situation is ideal with no friction, etc.
We've to find acceleration of the wedge.
My Working
During the entire course of motion, the block remains attached to the wedge, therefore it applies a normal force of $m_2 g \cos \theta$ on the wedge. Taking its horizontal component, we get $m_2g\cos\theta\sin\theta$. Therefore the acceleration should be:
$$
\frac{m_{2} g \cos\theta\sin\theta}{m_{1}}
$$
But that's incorrect.
Where did I go wrong? Furthermore my teacher uses the wedge frame to arrive at the actual answer. What's the rationale behind this approach?
Answer:
Where did I go wrong ?
When working out the normal force you implicitly assumed that the block has no acceleration perpendicular to the incline, which only holds while the wedge is stationary.
What's the rationale behind this approach (using the frame of reference of the wedge) ?
In the frame of reference of the wedge, the block moves down the wedge. So if the block's vertical and horizontal accelerations in the wedge frame are $a_v'$ and $a_h'$ we have
$\displaystyle \frac {a_v'}{a_h'} = \tan \theta$
Converting these accelerations to an inertial frame of reference, we have to add the wedge's horizontal acceleration $a_w$ to get
$\displaystyle \frac {a_v}{a_h + a_w} = \tan \theta$
where $a_v$, $a_h$ are the block's vertical and horizontal accelerations in the inertial reference frame. So we have
$a_v \cos \theta = (a_h + a_w) \sin \theta$
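Combining this constraint with Newton's second law for the block (horizontal and vertical) and for the wedge (horizontal) gives a linear system that can be checked symbolically. A sketch follows; the sign convention (block accelerating one way, wedge recoiling the other, with $a_w$ its magnitude) is my assumption:

```python
import sympy as sp

a_v, a_h, a_w, N = sp.symbols('a_v a_h a_w N')
m1, m2, g, th = sp.symbols('m_1 m_2 g theta', positive=True)
s, c = sp.sin(th), sp.cos(th)

eqs = [
    sp.Eq(m2 * a_h, N * s),           # block, horizontal (pushed by the incline)
    sp.Eq(m2 * a_v, m2 * g - N * c),  # block, vertical
    sp.Eq(m1 * a_w, N * s),           # wedge, horizontal (reaction, opposite direction)
    sp.Eq(a_v * c, (a_h + a_w) * s),  # block stays on the incline surface
]
sol = sp.solve(eqs, [a_v, a_h, a_w, N], dict=True)[0]

expected = m2 * g * s * c / (m1 + m2 * s**2)   # the standard textbook result
print(sp.simplify(sol[a_w] - expected) == 0)   # -> True
```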
If we introduce an unknown normal force $N$ then we can write down the horizontal and vertical equations of motion for the block and the horizontal equation of motion for the wedge. Together with the above equation, this gives us four equations in four unknown quantities ($a_v, a_h, a_w$ and $N$) and four known quantities ($m_1, m_2, \theta$ and $g$), which we can solve to find $a_w$ as a function of $m_1, m_2, \theta$ and $g$. | {
"domain": "physics.stackexchange",
"id": 96120,
"tags": "homework-and-exercises, newtonian-mechanics, acceleration, free-body-diagram"
} |
Why does a carbon pile work as a rheostat? | Question: This afternoon I opened a sewing machine pedal, and inside it I found a ceramic material containing lots of small and thin black disks. I didn't expect that. I've searched on the Internet and I've found a device called carbon pile that could be the same device I found (write carbon pile disks on Google Images or try this link to see the disks).
It seems that there isn't much information about this device. The only information I've found is that the carbon pile resistance is dependent on the pressure exerted along the pile.
So, now I know why the carbon pile is employed as a rheostat or as a potentiometer. But I don't understand why a pile made of carbon disks varies its resistance depending on the pressure. Any idea?
Perhaps this is a very basic question, but I can't think of any simple explanation.
Answer: If you press a pile of carbon particles together to form a pill-like disc, you will find that the electrical resistance of the pressed disc is higher than that of a similarly-sized disc of solid carbon. This is because those particles of carbon are not all in good contact with one another.
If you then compress that disc, you mash the particles together into better physical contact with one another, building better electrical contact between them, and the resistance of the disc drops. That resistance change can be used as a control signal.
If you mill some soft rubber into the carbon particles before pressing them into discs, then you get a disc that wants to return to its original dimensions after you release the pressure, yielding a pressure-sensitive variable resistor.
If you instead loosely pack the carbon granules into the space between two flexible metal foil discs separated by an insulator, you can make this pressure-sensitive resistance effect so sensitive that the resulting device serves as a microphone, as used in old-style telephones and radio transmitters.
If you form the rubber-loaded carbon into a tiny pointed cone and press its tip into another metal contact, you obtain an electrical switch whose resistance varies smoothly from infinity when the contacts are not touching to a very low value when the cone and the contact are firmly pressed together. This is the principle of the anti-induction or noiseless switch as used in audio circuits. | {
"domain": "physics.stackexchange",
"id": 66132,
"tags": "electricity"
} |
Website code for a client | Question: This is for a client of mine: Enbridge Gas. I had to turn 2 .psd files into HTML and CSS. This client is my first real client by myself and I want to make sure my code is improved as much as possible. I would love feedback on my style, naming conventions, etc. to improve the code.
If you'd like to see the full project files/folder and run it with the images and ect, here is the GitHub link to the project.
Note: IMAGES are, of course, not part of the code snippet. This means the drop-down list won't work. I commented one image out and put in a grey color. Also, one image at the bottom is not going to show.
For the snippet you have to scroll to the right to see the content:
body {
padding: 0px;
margin: 0px;
}
input[type=checkbox] {
cursor: pointer;
}
.main_content {
width: 480px;
padding-left: 618px;
padding-right: 620px;
}
.top_info {
width: 477px;
height: 105px;
padding-bottom: 20px;
}
.top_info h2 {
padding: 0px;
margin: 0px;
margin-top: 25px;
margin-bottom: 13px;
font: 700 13px / 18px Arial;
}
.top_info p {
margin: 0 0 10px;
font: 400 13px / 18px Arial;
}
.form {
width: 479px;
height: 329px;
/*background: url(images/Layer-7.png);*/
background: #bbb;
}
.form_header {
background: url(images/Layer-11.png);
background-repeat: no-repeat;
height: 43px;
width: 479px;
}
.form_header h2 {
font: 700 17px / 19px Arial;
color: #ffb81c;
padding: 13px 35px 15px 35px;
}
.checkbox {
font: 400 12px / 19px Arial;
width: auto;
color: #f7f7f7;
padding-top: 11px;
padding-left: 9px;
}
.left_content {
float: left;
width: 255px;
font: 400 15px / 19px Arial;
height: 175px;
color: #f7f7f7;
}
.left_content ul {
margin: 0px;
padding-left: 28px;
}
.left_content li {
padding-left: 3px;
}
.right_content {
float: right;
width: 189px;
height: 175px;
padding-left: 14px;
padding-right: 19px;
}
.right_content input {
width: 189px;
height: 24px;
margin-bottom: 6px;
}
.select {
background: url(images/MergedLayers.png) no-repeat right #f7f7f7;
overflow: hidden;
width: 189px;
height: 24px;
margin-bottom: 6px;
border: 0 none;
}
.select select {
width: 189px;
height: 24px;
margin-bottom: 6px;
background: transparent;
border: 0 none;
border-radius: 0;
-webkit-appearance: none;
-moz-appearance: none;
cursor: pointer;
}
.right_content input[type=submit] {
width: 189px;
height: 36px;
background: url(images/Layer-8-copy-3.png);
border: 0 none;
font: 700 16.35px / 14.43px Arial;
padding-top: 11px;
padding-bottom: 10px;
padding-left: 7px;
padding-right: 5px;
color: #f7f7f7;
cursor: pointer;
}
.right_content p {
font: 400 12.95px / 16px Arial;
color: #f7f7f7;
padding: 0px;
padding-top: 5px;
margin: 0px;
text-align: center;
}
.form_footer p {
clear: both;
font: 700 17px / 20px Arial;
color: #f7f7f7;
text-align: center;
padding-left: 29px;
padding-right: 31px;
padding-bottom: 22px;
padding-top: 10px;
}
.bottom_info h2 {
margin: 0px;
padding-top: 24px;
padding-left: 9px;
color: #02436b;
font: 700 15px / 16px Arial;
}
.bottom_image {
background: url(images/stock-photo-64105997-generic-hospital-building.png);
height: 211px;
width: 157px;
float: left;
margin: 0;
padding: 0;
margin-top: 19px;
margin-left: 9px;
}
.bottom_right {
width: 285px;
float: right;
height: 198px;
}
.bottom_info p {
color: #252424;
font: 700 13px / 16px Arial;
padding: 0px;
margin: 0px;
padding-top: 19px;
}
.bottom_info ul {
color: #252424;
font: 400 13px / 16px Arial;
padding: 0px;
padding-left: 15px;
padding-top: 5px;
margin: 0px;
}
.bottom_info li {
padding-bottom: 5px;
}
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
<link rel="stylesheet" href="css/main.css" />
</head>
<body>
<div class="main_content">
<div class="top_info">
<h2>Incentives for every building, every budget</h2>
<p>Looking for energy saving solutions for your commercial building? You've come to the right place. Whether retrofitting or building new, we offer free services and financial incentives for buildings and budgets of every size. We'll guide the process
from start to finish, making it easy to reduce energy consumption and improve your bottom line.</p>
</div>
<!-- end top_info -->
<div class="form">
<div class="form_header">
<h2>Earn up to 50% of your project cost! Find out how.</h2>
</div>
<form action="#" method="get" class="checkbox">
<input type="checkbox" name="communicate">Check here to allow us to communicate with you
</form>
<div class="left_content">
<ul>
<li>Energy Solutions Consultant (ESC) will speak with you directly and help you assess your energy efficiency needs.</li>
<li>Fixed and custom incentives to upgrade boilers, water and heating systems, make-up air and ventilation systems, building controls, and more</li>
</ul>
</div>
<div class="right_content">
<form action="#" method="get">
<input type="text" placeholder="First and Last Name">
<input type="text" placeholder="Email Address">
<div class="select">
<select name="sector" id="sector">
<option value="sector">Sector</option>
</select>
</div>
<input type="submit" name="submit" value="Download Case Study">
</form>
<p>Your information is safe with us.</p>
</div>
<div class="form_footer">
<p>Contact your Energy Solutions Consultant today at 1-855-659-0549 or energyservices@enbridge.com</p>
</div>
</div>
<!-- end form -->
<div class="bottom_info">
<h2>Need a reason to take part in the Commercial Energy Solutions program?</h2>
<div class="bottom_image"></div>
<div class="bottom_right">
<p>Top gains include:</p>
<ul>
<li>Retrofit incentives for installing energy-efficient equipment and systems in older buildings</li>
<li>Rebates for installing energy-efficient showerheads in multi-residential buildings</li>
<li>Incentives for installing an ozone laundry system on commercial washing machines</li>
<li>Rebates for renting or purchasing high-efficiency and condensing boilers</li>
<li>Free support and energy saving expertise from one of our Energy Solutions Consultants</li>
</ul>
</div>
</div>
<!-- end bottom_info -->
</div>
<!-- end main_content -->
</body>
</html>
Answer: If there is no specific reason to why you have designed the page to be 480px wide, I would move away from that. It makes it look like a flyer that you can print out. It's all very crammed up and feels kind of like a pop-up ad.
In your code you are doing this, which I assume is to center the main_content div:
.main_content {
width: 480px;
padding-left: 618px;
padding-right: 620px;
}
While in reality it's not centered.
You can do this to center it properly:
.main_content {
width: 480px;
margin: 0 auto;
}
(Top and Bottom = 0 | Left and Right = auto)
When the div has an absolute width, you can use "auto" on both sides to give them equal spacing.
Also, when you are setting widths, paddings and margins with pixels, the page may look very different on a smaller or larger resolution screen.
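For instance (illustrative values of my own, not from the client's PSD spec), a fluid width with a cap keeps the centering while adapting to smaller screens:

```css
.main_content {
  width: 90%;          /* shrink with narrow viewports */
  max-width: 480px;    /* but never exceed the design width */
  margin: 0 auto;      /* keep it horizontally centered */
}
```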
Browse the web and look for some inspiration. Steal some ideas, and implement them with your own twist. | {
"domain": "codereview.stackexchange",
"id": 20951,
"tags": "html, css"
} |
Why are some materials diamagnetic, others paramagnetic, and others ferromagnetic? | Question:
Why are some materials diamagnetic, others paramagnetic, and others ferromagnetic?
Or, put another way, which of their atomic properties determines which of the three forms of magnetism (if at all) the compound will take?
Is paramagnetism to ferromagnetism a continuous spectrum, or is there no grey zone in between?
Answer: There are a few decent rules of thumb for para- and diamagnetism.
A system is paramagnetic if it has a net magnetic moment because it has electrons of like (parallel) spins. These are often called triplet (or higher) states. In atoms and molecules, they occur when the highest occupied atomic/molecular orbital is not full (degeneracy > 2 * # of valence electrons). In this case, Hund's rules suggest that the electrons lower their energy by aligning their spins.
In contrast, a diamagnet has no magnetic moment because all electrons are paired.
Nearly all free atoms are paramagnetic because nearly all atoms have unpaired spins. The exceptions are the last column of the s, p, d, and f blocks (2, 12, and 18). (Any that I'm missing?) For instance, that's an important property for the Stern-Gerlach experiments and magnetic trapping.
Most molecules, however, have fully paired spins. First off, most molecules have an even number of spins, except for free radicals, which are relatively unstable. To figure out if the molecule has a net magnetic moment (paramagnetic) or not (diamagnetic), you need to look at its molecular orbitals. The classical example is oxygen, which has a half-full (or half-empty) $\pi_{2p}^\ast$ orbitals, and nitrogen, which has a full $\pi_{2p}^\ast$ orbital. See: http://www.mpcfaculty.net/mark_bishop/molecular_orbital_theory.htm.
For crystals and solid-state materials, the question is more challenging, but it ends up coming down to the same question: is there a net magnetic moment because of unpaired spins, in which case it's a paramagnetic? or is there no net magnetic moment because all spins are paired, in which case it's a diamagnet?
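When those unpaired spins interact with each other, the simplest toy treatment is a mean-field self-consistency equation $m = \tanh(T_c\, m / T)$ for the magnetization $m$. A minimal sketch of my own (real materials are far more complicated):

```python
import numpy as np

def magnetization(T, Tc=1.0, iters=400):
    """Iterate the mean-field fixed point m = tanh(Tc * m / T)."""
    m = 0.5                      # start from a small polarization
    for _ in range(iters):
        m = np.tanh(Tc / T * m)
    return m

# Below Tc a nonzero magnetization survives (spontaneous ordering: a ferromagnet);
# above Tc the only fixed point is m = 0 (back to a paramagnet).
print(magnetization(T=0.5) > 0.9, round(magnetization(T=2.0), 3))  # -> True 0.0
```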
Of course, in solid-state physics there is a third situation, a ferromagnet. This is rather difficult to predict in real systems and is a major field of research. Some model systems (model system: a much simpler mathematical model of a system) are solvable and give hints of what to look for. For instance, free spins in a lattice create a paramagnet by the argument above: the crystal has a net magnetic moment. In a magnetic field, you expect the spin of one electron to create a magnetic field that can affect its neighbors. Since the system is paramagnetic, you might expect that the neighbors align with their local magnetic field, which is induced by their neighbors, and the whole crystal polarizes itself, creating a ferromagnet. This explanation is a mean-field Ising model. It gives a good intuition even though it's too simple to describe any real system. | {
"domain": "physics.stackexchange",
"id": 4894,
"tags": "electromagnetism, solid-state-physics, material-science"
} |
Implementing a 'TreeGraph' which extends a normal graph | Question: I'm currently trying to build an interactive technology tree for Stellaris, which normally doesn't come with a tree view. Stellaris is like Civilization 5, where the tree isn't actually a tree, as it has cycles.
Take Civilization 5: Writing is a parent to both Philosophy and Drama and Poetry, which are in turn both parents to Theology. This makes the following graph:
┌─ Philosophy ─┬─ Theology
Writing ─┴─ Drama and Poetry ─┘
In the example above, Writing is a root of the graph, which is the first column, and so has a level of 0. Philosophy, and Drama and Poetry are level 1, and Theology is level 2.
And so I wrote the following to build the graph data structure and later display it. It is mainly intended to take an easy-to-read node list, with optional parent and child links, and build the graph for me.
I've not used TypeScript before, and so tried my best with the type system; however, I'm unsure if I've used it correctly, especially with subclassing the graph and the node.
interface IBuildNode<TValue> {
id: string;
data?: TValue;
parents?: Array<string>;
children?: Array<string>;
}
interface INode<TValue> {
id: string;
data: TValue;
parents: {[key: string]: INode<TValue>};
children: {[key: string]: INode<TValue>};
_default: boolean;
Delete(): void;
AddParent(parent: INode<TValue>): void;
AddChild(child: INode<TValue>): void;
}
class GraphNode<TValue> implements INode<TValue> {
id: string;
data: TValue;
parents: {[key: string]: INode<TValue>};
children: {[key: string]: INode<TValue>};
_default: boolean;
constructor(id: string,
data: any = null,
default_: boolean = true) {
this.id = id;
this.data = data;
this.parents = {};
this.children = {};
this._default = default_;
}
AddParent(parent: INode<TValue>): void {
this.parents[parent.id] = parent;
parent.children[this.id] = this;
}
AddChild(child: INode<TValue>): void {
this.children[child.id] = child;
child.parents[this.id] = this;
}
Delete(): void {
for (let parent in this.parents) {
delete this.parents[parent].children[this.id];
}
for (let child in this.children) {
delete this.children[child].parents[this.id];
}
}
}
class Graph<TValue> {
private nodes: {[key: string]: INode<TValue>};
constructor(nodes: IBuildNode<TValue>[] = [],
wipeDefault: boolean = true) {
this.nodes = {};
for (let node of nodes) {
let gNode = this.GetDefaultNode(node.id);
gNode.data = node.data;
gNode._default = false;
for (let parent of (node.parents || [])) {
this.GetDefaultNode(parent).AddChild(gNode);
}
for (let child of (node.children || [])) {
this.GetDefaultNode(child).AddParent(gNode);
}
}
if (wipeDefault) {
for (let nodeId of this.GetDefaultIds()) {
this.DeleteNode(nodeId);
}
}
}
protected BuildNode(nodeId: string) : INode<TValue> {
return new GraphNode<TValue>(nodeId);
}
GetNode(nodeId: string) : INode<TValue> {
if (this.nodes.hasOwnProperty(nodeId)) {
return this.nodes[nodeId];
}
return null;
}
GetDefaultNode(nodeId: string) : INode<TValue> {
if (this.nodes.hasOwnProperty(nodeId)) {
return this.nodes[nodeId];
}
let node = this.BuildNode(nodeId);
this.nodes[nodeId] = node;
return node;
}
DeleteNode(nodeId: string) : void {
this.nodes[nodeId].Delete();
delete this.nodes[nodeId];
}
GetDefaultIds() : Array<string> {
let ret : Array<string> = [];
for (let nodeId in this.nodes) {
let node = this.nodes[nodeId];
if (node._default) {
ret.push(nodeId);
}
}
return ret;
}
GetDefault() : Array<INode<TValue>> {
let ret : Array<INode<TValue>> = [];
for (let nodeId in this.nodes) {
let node = this.nodes[nodeId];
if (node._default) {
ret.push(node);
}
}
return ret;
}
GetRoots() : Array<INode<TValue>> {
let ret : Array<INode<TValue>> = [];
for (let nodeId in this.nodes) {
let node = this.nodes[nodeId];
if (Object.keys(node.parents).length == 0) {
ret.push(node);
}
}
return ret;
}
}
class TreeNode<TValue> extends GraphNode<TValue> {
level: number;
parents: {[key: string]: TreeNode<TValue>};
children: {[key: string]: TreeNode<TValue>};
constructor(id: string,
data: any = null,
default_: boolean = true) {
super(id, data, default_);
this.level = 0;
}
SetLevel(): void {
let level: number = -1;
for (let parentId in this.parents) {
let parent = this.parents[parentId];
level = Math.max(level, parent.level);
}
level++;
this.level = level;
for (let childId in this.children) {
let child = this.children[childId];
if (child.level >= level) {
child.SetLevel();
}
}
}
AddParent(parent: INode<TValue>): void {
super.AddParent(parent);
this.SetLevel();
}
AddChild(child: TreeNode<TValue>): void {
super.AddChild(child);
child.SetLevel();
}
Delete(): void {
for (let parentId in this.parents) {
delete this.parents[parentId].children[this.id];
}
for (let childId in this.children) {
let child = this.children[childId];
delete child.parents[this.id];
child.SetLevel();
}
}
}
class TreeGraph<TValue> extends Graph<TValue> {
protected BuildNode(nodeId: string) : INode<TValue> {
return new TreeNode<TValue>(nodeId);
}
}
let graph : IBuildNode<string>[] = [
{
id: '0',
data: 'Writing'
},
{
id: '1',
data: 'Philosophy',
parents: ['0']
},
{
id: '2',
data: 'Drama and Poetry',
parents: ['0']
},
{
id: '3',
data: 'Theology',
parents: ['1', '2']
}
];
let treeGraph = new TreeGraph(graph);
console.debug(treeGraph.GetNode('0'));
console.debug(treeGraph.GetNode('1'));
console.debug(treeGraph.GetNode('2'));
console.debug(treeGraph.GetNode('3'));
console.debug(treeGraph.GetRoots());
Answer: Calling this a tree isn't quite correct. The term you are looking for is a "directed graph". Accordingly, one would generally not talk about parent and child but about predecessor and successor, or incoming and outgoing edges.
As such, talking about "levels" is also possibly not the best naming here.
What it represents is the maximum number of nodes on a path to the node you see. Maybe longestPath would be better for a general structure...
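That longest-path notion is easy to state on its own. A minimal sketch (in Python rather than the post's TypeScript, using the question's Civilization 5 data, and assuming the graph is acyclic so the recursion terminates):

```python
# parents maps each node to the ids of its direct predecessors
parents = {
    'Writing': [],
    'Philosophy': ['Writing'],
    'Drama and Poetry': ['Writing'],
    'Theology': ['Philosophy', 'Drama and Poetry'],
}

def longest_path(node):
    # number of edges on the longest root-to-node path; 0 for roots
    ps = parents[node]
    return 0 if not ps else 1 + max(longest_path(p) for p in ps)
```

This reproduces the question's levels: Writing is 0, Philosophy and Drama and Poetry are 1, Theology is 2.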
Generally speaking: the code you use looks well structured and clean to me, though not optimized for performance.
That's just fine as is. One thing I personally wouldn't do is the IBuildNode interface. I'd instead expose additional constructors.
Another thing I'd do differently is the type of the node-id. This may be overzealous, but I prefer numerical ids for about everything.
Finally the methods GetRoots(), GetDefaultIds() and GetDefault() all return information that I would "precompute" when modifying the graph. Also there are no comments on this, which makes maintenance needlessly difficult :) | {
"domain": "codereview.stackexchange",
"id": 26756,
"tags": "graph, typescript"
} |
Stripping specified character | Question: Here's some code that removes the specified character, ch, from the string passed in. Is there a better way to do this? Specifically, one that's more efficient and/or portable?
//returns string without any 'ch' characters in it, if any.
#include <string>
using namespace std;
string strip(string str, const char ch)
{
size_t p = 0; //position of any 'ch'
while ((p = str.find(ch, p)) != string::npos)
str.erase(p, 1);
return str;
}
Answer: I'm not entirely sure how the performance will compare, but the standard way to accomplish this would be the erase-remove idiom:
str.erase(std::remove(str.begin(), str.end(), ch), str.end()); // std::remove lives in <algorithm>
Unless the performance proves to be a bottleneck, it's typically better to stick with the C++ style of doing things. I can't imagine that this would be significantly less efficient than the other method. (In fact, I wouldn't be surprised if this is a bit faster for long strings with a high amount of the removed character -- though my assumption of that depends on quite a few non-guaranteed implementation choices, and a rather rough estimation of the cost of different low level operations.) | {
"domain": "codereview.stackexchange",
"id": 2708,
"tags": "c++"
} |
Parsing equations from stdin | Question: The program should read equations from stdin, parse them and generate a matrix of the coefficient which represents the system.
Example:
$ ./eqs
x + y = 4
-y + 4x = 2
1 1 4
4 -1 2
As you can see, the user can input as many equations they want and the program stops reading when an empty line is sent.
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <ctype.h>
#define _M(i, j) (M->elements[(i) * M->cols + (j)])
#define ROWS 10
#define CHUNK 32
typedef struct {
size_t rows;
size_t cols;
double *elements;
} Matrix;
void print_matrix(Matrix *);
Matrix *parse_input();
Matrix *create_matrix(size_t, size_t);
void free_matrix(Matrix *);
int readline(char **, size_t *, FILE *);
void memory_error(void);
int main(void) {
Matrix *M = parse_input();
print_matrix(M);
free_matrix(M);
return 0;
}
void print_matrix(Matrix *M) {
size_t i, j;
for (i = 0; i < M->rows; i++) {
for (j = 0; j < M->cols; j++) {
printf("%3g ", _M(i, j));
}
printf("\n");
}
printf("\n");
}
Matrix *parse_input() {
char *row;
size_t reading_size = CHUNK;
if (!(row = malloc(reading_size))) memory_error();
double coeff;
size_t i, j, k, numrows = ROWS;
double **unknowns;
if (!(unknowns = malloc(numrows * sizeof(*unknowns)))) {
free(row);
memory_error();
}
for (i = 0 ; i < numrows; i++) {
if (!(unknowns[i] = calloc(27, sizeof(**unknowns)))) {
free(unknowns);
memory_error();
}
}
i = 0;
do {
if (i == numrows) {
numrows *= 2;
for (j = i + 1; j < numrows; j++) {
if (!(unknowns[j] = calloc(27, sizeof(**unknowns)))) {
free(unknowns);
memory_error();
}
}
}
readline(&row, &reading_size, stdin);
char *p = row;
unsigned char past_equal = 0;
coeff = 1;
while (*p) {
if (*p == '-') {
coeff *= -1;
p++;
} else if (*p == '=') {
past_equal = 1;
p++;
} else if (isdigit(*p)) {
double val = strtod(p, &p);
if (!past_equal) coeff *= val;
else {
unknowns[i][26] = val;
break;
}
} else if (isalpha(*p)) {
unknowns[i][tolower(*p++) - 'a'] = coeff;
coeff = 1;
} else p++;
}
i++;
} while (row[0] != '\0');
free(row);
i--;
unsigned short int nonzero_unknowns[27] = {0};
nonzero_unknowns[26] = 1;
for (j = 0; j < i; j++) {
for (k = 0; k < 26; k++) {
if (unknowns[j][k]) nonzero_unknowns[k] = 1;
}
}
size_t ncols = 0;
unsigned short int positions[26];
for (j = 0; j < 26; j++) {
if (nonzero_unknowns[j]) {
positions[ncols++] = j;
}
}
ncols++;
if (i + 1 < ncols) {
for (j = 0; j < i; j++) {
free(unknowns[j]);
}
free(unknowns);
puts("The system is underdetermined.");
exit(EXIT_SUCCESS);
}
Matrix *M = create_matrix(i, ncols);
for (j = 0; j < i; j++) {
for (k = 0; k + 1 < ncols; k++) {
_M(j, k) = unknowns[j][positions[k]];
}
_M(j, k) = unknowns[j][26];
}
for (j = 0; j < i; j++) {
free(unknowns[j]);
}
free(unknowns);
return M;
}
Matrix *create_matrix(size_t rows, size_t cols) {
Matrix *M;
if (!(M = malloc(sizeof *M))) memory_error();
M->rows = rows;
M->cols = cols;
if (!(M->elements = calloc(rows * cols, sizeof(double)))) {
free(M);
memory_error();
}
return M;
}
void free_matrix(Matrix *M) {
free(M->elements);
free(M);
}
int readline(char **input, size_t *size, FILE *file) {
char *offset;
char *p;
size_t old_size;
// Already at the end of file
if (!fgets(*input, *size, file)) {
return EOF;
}
// Check if input already contains a newline
if ((p = strchr(*input, '\n'))) {
*p = 0;
return 0;
}
do {
old_size = *size;
*size *= 2;
if (!(*input = realloc(*input, *size))) {
free(*input);
memory_error();
}
offset = &((*input)[old_size - 1]);
} while (fgets(offset, old_size + 1, file) &&
offset[strlen(offset) - 1] != '\n');
return 0;
}
void memory_error(void) {
puts("Could not allocate memory.");
exit(EXIT_FAILURE);
}
The parsing function is far too large, and it's quite complicated. Here's how the parsing is done:
a line is read;
the coefficients are put into the unknowns double array, which is 27 doubles wide to hold the 26 coefficients for the lowercase letters and the free term on the RHS of the equation;
since an equation may not contain all the coefficients (some of them can be zero), the nonzero_unknowns array records which unknowns actually appear;
the matrix is initialized and returned.
I think the parse_input() function can definitely be broken up into multiple pieces, but I'm having a hard time simplifying it.
Answer: Here are some things that may help you improve your code.
Consider using parser generator tools
If I were writing code like this, I would use flex and bison (or equivalently lex and yacc). There are many resources available for these tools. Here is one such resource.
Eliminate "magic numbers"
Instead of hard-coding the constants 26 and 27 in the code, it would be better to use a #define or const and name them.
Fix memory leaks
In the case that there is, say, a single line as input, the program leaks memory. This is because the unknowns[i] or unknowns[j] allocates more than enough space, but the loop at the bottom of parse_input only frees i rows. Better would be to use a separate variable to track the actual number of allocated unknowns and then use that to drive the loop that frees them.
Don't bother allocating more memory than needed
There is no benefit to allocating more unknowns than needed. You could simply add them one at a time as needed, since you're allocating them one at a time in a loop anyway. Just make sure to keep track of the number of allocations so that you can correctly free the memory later.
Consider streamlining error handling
Much of the code does error checking for malloc and then handles the result by freeing memory and then calling memory_error(), which calls exit(). This is all good practice -- you are well ahead of the pack by actually checking that the memory allocation succeeded. That's great, and I would encourage you to continue this excellent habit. However, it clutters up the code quite a bit. An alternative approach would be to create a wrapper function for malloc and calloc that would enter the address into a flexible data structure. Then you could have a corresponding free_all that would free all allocated memory and that could be called either in main or in memory_error. This could be done automatically if you register the function with atexit().
Break apart the parse_input function
Your description of the parse_input function points the way to how you might break apart the overly-long parse_input function. You could have one function that would create and return the unknowns structure. Then another to parse those into the matrix.
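To illustrate the shape of that decomposition, the per-line coefficient extraction could become its own small routine. Sketched here in Python rather than C, just to show the structure; the regex only covers the simple sign/number/letter forms the original loop handles:

```python
import re

def parse_equation(line):
    """Return ({variable: coefficient}, rhs) for a line like '-y + 4x = 2'."""
    lhs, rhs = line.split('=')
    coeffs = {}
    # optional sign, optional number, then a single-letter variable
    for sign, num, var in re.findall(r'([+-]?)\s*(\d*\.?\d*)\s*([a-z])', lhs):
        c = float(num) if num else 1.0
        coeffs[var] = coeffs.get(var, 0.0) + (-c if sign == '-' else c)
    return coeffs, float(rhs)
```

With each line reduced to a coefficient map, a second function can assemble the matrix, which mirrors the two-phase split suggested above.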
Minimize redundant loops
The nonzero_unknowns array could actually instead be the first row of the unknowns array. That way, you would reduce the number of variables (there are a lot!) and would be able to tally the unknown count as the parsing is done, making things much more efficient.
Consider handling malformed input lines
If the user types the line x - 4 = y the program incorrectly parses this as 1 -4 0. Another problem occurs if the user types in x - x = 0. It would be better, I think, to issue a warning to the user or, if you can, perform the algebra to correctly interpret the equation.
Eliminate return 0 at the end of main
For a long time now, the C language has specified that reaching the end of main implicitly generates the equivalent of return 0. For that reason, you can eliminate that line from your code. | {
"domain": "codereview.stackexchange",
"id": 12945,
"tags": "c, parsing, matrix, mathematics, math-expression-eval"
} |
RTnet vs CORBA transport | Question: What is the difference between these two? And which should I choose for hard real-time control purposes? What are their pros and cons relative to one another?
Answer: RTnet and its transport protocol is described at http://www.rtnet.org/. "Hard Real-Time Networking for Real-Time Linux" (That network and web site use one of the numerous imprecise and conflicting "definitions" of the term "hard real-time.")
There are a great many descriptions of the ISO Real-Time CORBA standard on the web (Google). One major difference between RTNet and RTCORBA is that RTCORBA is intended for a wide variety of hard, soft, and non real-time distributed systems. The specification (more carefully) describes what meanings it uses for those terms. To accomplish that range of applicability--and specifically in regard to your question--the RTCORBA spec (not necessarily all vendor implementations) allows for application-specific pluggable transport protocols. Real-time and embedded CORBA products exist which are functionally and performance competitive with RTNet.
Which of the two depends on your application. CORBA in general is much more versatile than you need, unless you want to use a very special application-specific (for example, military aircraft) transport protocol which might not be accommodated by RTNet but which would be accommodated by certain RTCORBA implementations. A detailed answer to your question would require a lot of information about your needs and would probably be a very lengthy answer. Start by reading the web documents, being sure you know exactly what network capabilities you need. | {
"domain": "robotics.stackexchange",
"id": 1637,
"tags": "real-time"
} |
How are the kernels initialized in a convolutional neural network? | Question: I am currently learning about CNNs. I am confused about how filters (aka kernels) are initialized.
Suppose that we have a $3 \times 3$ kernel. How are the values of this filter initialized before training? Do you just use predefined image kernels? Or are they randomly initialized, then changed with backpropagation?
Answer: The kernels are usually initialized at a seemingly arbitrary value, and then you would use a gradient descent optimizer to optimize the values, so that the kernels solve your problem.
There are many different initialization strategies.
Set all values to a constant (for example, zero)
Sample from a distribution, such as a normal or uniform distribution
There are also some heuristic methods that seem to work very well in practice; a popular one is the so-called Glorot initializer, named after Xavier Glorot, who introduced it here. Glorot initializers also sample from a distribution, but they scale the sampling range based on the number of input and output units of the layer.
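To make the Glorot scaling idea concrete, here is a rough NumPy sketch of Glorot-style uniform initialization (frameworks such as Keras ship a tested glorot_uniform; the channel counts below are made-up examples, and for a conv kernel the fan-in/fan-out counts include the spatial extent of the kernel):

```python
import numpy as np

def glorot_uniform(fan_in, fan_out, shape, seed=0):
    # Var(W) = 2 / (fan_in + fan_out)  =>  uniform limit sqrt(6 / (fan_in + fan_out))
    limit = np.sqrt(6.0 / (fan_in + fan_out))
    return np.random.default_rng(seed).uniform(-limit, limit, size=shape)

# a 3x3 kernel with 16 input channels and 32 output channels:
kernel = glorot_uniform(fan_in=3 * 3 * 16, fan_out=3 * 3 * 32, shape=(3, 3, 16, 32))
```

The sampled values then serve as the starting point that gradient descent refines during training.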
For specific types of kernels, there are other defaults that seem to perform well. See for example this paper.
Exploring initialization strategies is something I do when my model is not able to converge (gradient problems) or when the training seems to be stuck for a long time before the loss function starts to decrease. These are signs that there might be a better initialization strategy to look for. | {
"domain": "ai.stackexchange",
"id": 417,
"tags": "deep-learning, convolutional-neural-networks, image-recognition, filters, weights-initialization"
} |
Do discordants dykes ever travel concordantly (are transgressive dykes a thing)? | Question: Transgressive sills "jump" between bedding planes, following joints:
_______________________
_______________________
___________/ _________
_____________/_________
_______________________
But can dykes do the same?
___________________ ___
__________________/ /___
_________________/ /____
____________/ ____/_____
___________/ /__________
__________/ /___________
Answer: Yes, they can. It's not the sills or dykes that "jump", it's the magma. Depending on various parameters such as viscosity, stress, temperature, pressure, and the local availability of joints, the magma will flow either as a sill or a dyke (or some other intrusive body).
Notice that your two sketches are basically the same thing - it's just that in one case the sill is the dominant structure, whereas the dyke is more dominant in the second sketch.
The flow regime can definitely change from a "dyke" to a "sill" and vice versa. Here's a schematic diagram:
(source: USGS)
I also had a quick Internet search and came up with this great example from nature:
You can read more about it at Geotripper blog by Garry Hayes. | {
"domain": "earthscience.stackexchange",
"id": 703,
"tags": "geology, volcanology, structural-geology, magmatism, igneous"
} |
setting min depth in openni_kinect | Question:
How do you set up the minimum acceptable depth in openni_kinect?
The problem arises when trying to simulate a Turtlebot and build maps with SLAM. The minimum depth is set too low, so the simulated laser picks up the Turtlebot itself, resulting in poorly built maps.
Or can I adjust minimum depth somewhere else?
Thank you.
Originally posted by Grega Pusnik on ROS Answers with karma: 460 on 2012-10-08
Post score: 0
Answer:
I have found the solution. Needed to change parameters in turtlebot_description in gazebo.urdf.xacro under <xacro:macro name="turtlebot_sim_laser">
<minRange>0.30</minRange>
I also modified the angle of the simulated laser, because it was way too big:
<minAngle>-57</minAngle>
<maxAngle>57</maxAngle>
Originally posted by Grega Pusnik with karma: 460 on 2012-10-08
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 11273,
"tags": "slam, navigation, kinect, turtlebot, gmapping"
} |
camera_calibration "ImportError" for stereo setup | Question:
Hi all;
I am using camera_calibration node (http://wiki.ros.org/camera_calibration) to calibrate a stereo system based on a VRMagic D3 camera. To access the images of the smartcam I am using vrmagic_ros_bridge_server node.
I am running ROS INDIGO in an Ubuntu 64 bit machine.
After remapping images to a common workspace, I have the following topics:
/img_pub1
/img_pub1_camera_info
/img_pub2
/img_pub2_camera_info
/object_image/left/camera_info
/object_image/left/image_raw
/object_image/left/image_raw/compressed
/object_image/left/image_raw/compressed/parameter_descriptions
/object_image/left/image_raw/compressed/parameter_updates
/object_image/left/image_raw/compressedDepth
/object_image/left/image_raw/compressedDepth/parameter_descriptions
/object_image/left/image_raw/compressedDepth/parameter_updates
/object_image/left/image_raw/theora
/object_image/left/image_raw/theora/parameter_descriptions
/object_image/left/image_raw/theora/parameter_updates
/object_image/right/camera_info
/object_image/right/image_raw
/object_image/right/image_raw/compressed
/object_image/right/image_raw/compressed/parameter_descriptions
/object_image/right/image_raw/compressed/parameter_updates
/object_image/right/image_raw/compressedDepth
/object_image/right/image_raw/compressedDepth/parameter_descriptions
/object_image/right/image_raw/compressedDepth/parameter_updates
/object_image/right/image_raw/theora
/object_image/right/image_raw/theora/parameter_descriptions
/object_image/right/image_raw/theora/parameter_updates
/rosout
/rosout_agg
Following the StereoCalibration tutorial step by step, I call the stereo calibration procedure:
rosrun camera_calibration cameracalibrator.py --size 8x6 --square 0.024 right:=/object_image/right/image_raw left:=/object_image/left/image_raw right_camera:=/object_image/right left_camera:=/object_image/left
And I obtain the following error. It seems to be some kind of dependency problem, but I think I have installed everything correctly:
Traceback (most recent call last):
File "/opt/ros/indigo/lib/camera_calibration/cameracalibrator.py", line 50, in <module>
from camera_calibration.calibrator import MonoCalibrator, StereoCalibrator, ChessboardInfo, Patterns
ImportError: No module named calibrator
I would appreciate any clue to solve this, I need to obtain the camera_info information of both cameras asap !!
Thnak you all in advance,
Alberto
Originally posted by altella on ROS Answers with karma: 149 on 2016-07-04
Post score: 0
Answer:
The problem arises because Python cannot find modules across different paths.
In my installation, "cameracalibrator.py" is in /opt/ros/indigo/lib/camera_calibration/, while "calibrator.py" is in /opt/ros/indigo/lib/python2.7/dist-packages/camera_calibration/
One fast way to solve the problem is to copy "calibrator.py" into a camera_calibration subdirectory next to the script, so that the imports in Python find what they are expected to find:
/opt/ros/indigo/lib/camera_calibration/cameracalibrator.py
/opt/ros/indigo/lib/camera_calibration/camera_calibration/calibrator.py
/opt/ros/indigo/lib/camera_calibration/camera_calibration/calibrator.pyc
/opt/ros/indigo/lib/python2.7/dist-packages/camera_calibration/calibrator.py
/opt/ros/indigo/lib/python2.7/dist-packages/camera_calibration/calibrator.pyc
/usr/share/app-install/desktop/xinput-calibrator:xinput_calibrator.desktop
/usr/share/app-install/icons/xinput_calibrator.svg
/usr/share/apport/package-hooks/source_xinput-calibrator.py
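For what it's worth, the copy works because it satisfies Python's module search: the interpreter only looks in the directories listed on sys.path. A less invasive sketch of the same fix (the path below is taken from the listing above; sourcing the ROS setup.bash normally arranges this via PYTHONPATH):

```python
import sys

# Python resolves 'from camera_calibration.calibrator import ...' by scanning
# sys.path; the ImportError means the dist-packages directory that holds
# calibrator.py is not on the path of the interpreter running the script.
pkg_dir = '/opt/ros/indigo/lib/python2.7/dist-packages'
if pkg_dir not in sys.path:
    sys.path.insert(0, pkg_dir)
```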
Originally posted by altella with karma: 149 on 2016-07-04
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 25134,
"tags": "python, calibration, camera-calibration"
} |
Running ROSJAVA as a thread in a larger app | Question:
How might the ROSJAVA talker example be spawned as a separate thread since it must be passed the NodeConfiguration at startup? I get an error stating that I cannot cast my main file to org.ros.node.NodeMain.
Originally posted by morrowsend on ROS Answers with karma: 56 on 2011-10-18
Post score: 0
Answer:
I figured it out. I have to implement NodeMain in the main java file, save the passed NodeConfiguration to a global volatile, and read it from the runnable() which contains the RosNode. I am sure there is a better way of doing it (such as simply passing the NodeConfiguration to the run() method) but this works for now. If I run into problems with it, I will post an update.
Originally posted by morrowsend with karma: 56 on 2011-10-18
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 6999,
"tags": "ros, threads, rosjava"
} |
Derivation of $\frac{d}{dt}\mathbf L = I \dot{\boldsymbol \omega} = \mathbf M - \boldsymbol \omega \times \mathbf L$ | Question: I would like to know how the above equation is derived (here $\mathbf{M}$ is the rate of change of angular momentum with respect to a non-inertial frame). I tried looking at various sources and couldn't find a derivation. I have no idea where the $\vec{\omega} \times \mathbf{L}$ term comes from in this equation. In what cases does the $\vec{\omega} \times \mathbf{L}$ term vanish? If $\vec{\omega}$ were in the direction of $\mathbf{L}$, wouldn't it not be rotational motion anymore? It would be of great help to me if someone could point out the flaws in my logic and hand me a derivation.
Answer: Given the current form of the equation, I have to make a few assumptions about the currently-missing context:
$\mathbf{L}$ is the angular momentum in a fixed frame.
$\mathbf{M}$ refers to the rate of change of angular momentum in a frame that is rotating with angular velocity vector $\vec{\omega}$.
The $\vec{\omega}$ in the final term of the equation is mostly unrelated to the $\omega$ in $I\dot{\omega}$. The former is the rotational velocity of the rotating reference frame itself, while the latter is the angular velocity of the object in the fixed reference frame. They really should have different symbols.
Given this, the transformation above is a specific example of a very general formula: for any vector $\mathbf{Q}$, its rate of change in a fixed frame and its rate of change in a frame rotating with angular velocity vector $\vec{\omega}$ are related by:
$$\left(\frac{d\mathbf{Q}}{dt}\right)_{fixed}=\left(\frac{d\mathbf{Q}}{dt}\right)_{rot}+\vec{\omega}\times\mathbf{Q}$$
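As a quick numerical sanity check of this relation (not part of the original answer): take a $\mathbf{Q}$ that is constant in the rotating frame, so $(d\mathbf{Q}/dt)_{rot}=0$ and the fixed-frame derivative should reduce to $\vec{\omega}\times\mathbf{Q}$:

```python
import numpy as np

omega = np.array([0.0, 0.0, 2.0])   # frame rotation: 2 rad/s about z
Q0 = np.array([1.0, 0.5, 0.3])      # Q as seen in the rotating frame (constant)

def Q_fixed(t):
    # express Q in the fixed frame: rotate Q0 about z by omega_z * t
    c, s = np.cos(omega[2] * t), np.sin(omega[2] * t)
    R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    return R @ Q0

t, h = 0.7, 1e-6
numeric = (Q_fixed(t + h) - Q_fixed(t - h)) / (2 * h)  # fixed-frame dQ/dt
formula = np.cross(omega, Q_fixed(t))                  # omega x Q, since (dQ/dt)_rot = 0
```

The central-difference derivative and $\vec{\omega}\times\mathbf{Q}$ agree to numerical precision, as the formula predicts.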
A derivation of this general relation can be found in most upper-level undergraduate textbooks, and on Wikipedia as well: https://en.wikipedia.org/wiki/Rotating_reference_frame. | {
"domain": "physics.stackexchange",
"id": 68191,
"tags": "newtonian-mechanics, reference-frames, rotational-dynamics, torque, rigid-body-dynamics"
} |
Navigation of robot to goal point/Turning until robot faces goal-direction | Question:
SOLVED
See Solution at the end of post
Problem:
Hey there,
I'm currently working on a project about navigating a robot to a goal point.
I already wrote some code to tackle this task. The code uses the gazebo/modelstates topic to get the position and orientation of the robot.
The angle_to_goal is calculated by looking at the currentPosition and the goalPosition.
The robot should turn, until the difference between the yaw of the robot and the angle_to_goal is 0 +/- a threshold of 0.1.
If this is the case, the robot should drive towards the goal.
This code is working pretty well in some cases where the robot gets into the acceptable range well and drives straight without any correction needed.
In other cases the robot will turn, then drive a very small distance, then correct its course a little again, and repeat this, which leads to a very slow and jittery path. My guess is that the robot enters the acceptable angle range right at its border and starts driving; the angle difference then grows, so the robot turns again, but not by enough to drive fluently afterwards.
Another case is that the robot seems to overshoot the desired angle range, or immediately leave it, and doesn't stop turning.
In both cases the robot does not really find its way but keeps getting stuck, misses the right angle, and continues in this sub-optimal behavior.
I already tried different angular velocities and other thresholds.
As I am a total beginner, I don't know if my guess about the problem is right.
I don't know what I could do to solve the problem, or at least improve the rate at which the robot finds the right direction on the first try.
Thank you in advance for any help!
Here is the code:
#! /usr/bin/env python
import rospy
from gazebo_msgs.msg import ModelStates
from tf.transformations import euler_from_quaternion
from geometry_msgs.msg import Twist
from geometry_msgs.msg import Point
from math import atan2

#start is x:0, y:0
x = 0.0
y = 0.0
theta = 0.0 #current angle of robot

#import ipdb; ipdb.set_trace()

def callback(msg):
    global x
    global y
    global theta

    x = msg.pose[1].position.x
    y = msg.pose[1].position.y

    rot_q = msg.pose[1].orientation
    (roll, pitch, theta) = euler_from_quaternion([rot_q.x, rot_q.y, rot_q.z, rot_q.w])

rospy.init_node('subscriber')

sub = rospy.Subscriber('/gazebo/model_states', ModelStates, callback)
pub = rospy.Publisher('/cmd_vel', Twist, queue_size=1)

speed = Twist()
r = rospy.Rate(4)

goal = Point()
goal.x = -2
goal.y = -1

while not rospy.is_shutdown():
    inc_x = goal.x - x #distance robot to goal in x
    inc_y = goal.y - y #distance robot to goal in y

    angle_to_goal = atan2(inc_y, inc_x) #angle to goal from the x and y distances
    print abs(angle_to_goal - theta)

    if abs(angle_to_goal - theta) > 0.1: #0.1 threshold, since requiring exact equality is too strict for a robot
        speed.linear.x = 0.0
        speed.angular.z = 0.3
    else:
        speed.linear.x = 0.3 #drive towards goal
        speed.angular.z = 0.0

    pub.publish(speed)
    r.sleep()
Edit: Hey Delb,
thank you a lot for your answer!
It is not directly intended to just turn left, but I thought that it would be easier to get started with only one direction and to add logic about which direction to turn in later.
I will definitely try to implement your suggestion and see if it works.
SOLUTION:
This is the code I ended up with, which seems to work pretty well.
#! /usr/bin/env python
import rospy
from gazebo_msgs.msg import ModelStates
from tf.transformations import euler_from_quaternion
from geometry_msgs.msg import Twist
from geometry_msgs.msg import Point
from math import atan2, sin, cos, pow, sqrt

#start is x:0, y:0
x = 0.0
y = 0.0
theta = 0.0 #current angle of robot
move_forward = False

def callback(msg):
    global x
    global y
    global theta

    x = msg.pose[1].position.x
    y = msg.pose[1].position.y

    rot_q = msg.pose[1].orientation
    (roll, pitch, theta) = euler_from_quaternion([rot_q.x, rot_q.y, rot_q.z, rot_q.w])

rospy.init_node('subscriber')

sub = rospy.Subscriber('/gazebo/model_states', ModelStates, callback)
pub = rospy.Publisher('/cmd_vel', Twist, queue_size=1)

speed = Twist()
r = rospy.Rate(4)

goal = Point()
goal.x = 2
goal.y = 2

while not rospy.is_shutdown():
    inc_x = goal.x - x #distance robot to goal in x
    inc_y = goal.y - y #distance robot to goal in y

    angle_to_goal = atan2(inc_y, inc_x) #angle to goal from the x and y distances
    dist = sqrt(pow(inc_x, 2) + pow(inc_y, 2)) #calculate distance

    #find out which turn direction is better:
    #the bigger the angle, the bigger the turn; negative when clockwise
    turn = atan2(sin(angle_to_goal - theta), cos(angle_to_goal - theta))

    if abs(angle_to_goal - theta) < 0.1: #0.1 threshold, since requiring exact equality is too strict for a robot
        move_forward = True

    speed.angular.z = 0.2 * turn

    if move_forward == True:
        #keep the linear speed between 0.3 and 0.7, proportional to distance
        if 0.1 * dist > 0.3 and 0.1 * dist < 0.7:
            speed.linear.x = 0.1 * dist
        elif 0.1 * dist > 0.7:
            speed.linear.x = 0.7
        else:
            speed.linear.x = 0.3

    pub.publish(speed)
    r.sleep()
Originally posted by SpaceTime on ROS Answers with karma: 50 on 2019-01-21
Post score: 0
Answer:
First note here: is it intended for your robot to only turn left? That could be problematic: if the robot deviates too far to the left of the goal, it would turn almost 2*pi instead of turning a little to the right to correct its orientation.
Anyway, you should be able to achieve what you want with a few small changes:
I would directly set the angular speed equal to the angle difference. That would allow the robot to always correct its orientation when reaching the goal. So your angular speed at any time could be:
speed.angular.z = angle_to_goal - theta
That being said, you might want a function that computes the shortest angle (because your calculation might return 3*pi/2 instead of -pi/2), so the robot doesn't always over-turn, and so you handle the case where the robot is aligned with the goal but the calculation returns 2*pi. If that's not what you want, I would still recommend using the sign of the angular difference so your robot can turn left or right as needed.
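One standard way to implement such a shortest-angle function (an illustration, not part of the original answer; it is the same atan2(sin, cos) trick the asker later used in their solution):

```python
from math import atan2, sin, cos, pi

def shortest_angle(target, current):
    """Smallest signed rotation from `current` to `target`, in (-pi, pi]."""
    diff = target - current
    # atan2(sin(d), cos(d)) maps any angle d onto the equivalent
    # angle in (-pi, pi], which gives the shortest turn direction.
    return atan2(sin(diff), cos(diff))

# A raw difference of almost 2*pi collapses to a small negative (clockwise) turn:
print(shortest_angle(pi - 0.1, -pi + 0.1))  # about -0.2
```

The sign of the result then tells the robot which way to turn: positive for left, negative for right.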
I would change the condition. Instead of choosing between moving forward and turning, you could have a flag that turns true the first time your robot gets aligned with the goal, and from then on allow it to move forward:
move_forward = False  # define this outside of the while loop
...
if abs(angle_to_goal - theta) < 0.1:
    move_forward = True
speed.angular.z = angle_to_goal - theta  # the angular speed is always the angular difference
if move_forward is True:
    speed.linear.x = 0.3  # the linear speed is only set once the flag is True
If the robot has almost reached a goal but is not exactly aligned with it while still moving forward at a constant speed, it might miss the goal (even though I don't know which tolerance you have). To avoid that, you can also set the linear speed proportionally, like the angular speed, but using the distance instead of the angular difference. That way your robot will slow down as it gets closer to the goal, giving you time to adjust its orientation before it overshoots.
EDIT :
So if I understand the values right, if the coordinates are on the robot's right side, I get a value between 0.0 and -pi returned, and if it's on the left it's between 0.0 and pi? How do I effectively get which angle is smaller?
If you get the correct angle then yes, but you won't get it simply with angle_to_goal - theta, because angle_to_goal and theta each return a value in [-pi; pi]: for a small angle x > 0, if angle_to_goal = pi - x and theta = -pi + x, then angle_to_goal - theta = 2*pi - 2*x instead of the shortest rotation -2*x. So you have to modify the result of this angular difference; here's an example of a function (in C++) doing that:
float shortestAngleDifference(float th1, float th2)
{
    float anglediff = fmod( (th1 - th2) , 2*M_PI);
    if( anglediff < 0.0 )
    {
        if( fabs(anglediff) > (2*M_PI + anglediff) )
        {
            anglediff = 2*M_PI + anglediff;
        }
    }
    else
    {
        if( anglediff > fabs(anglediff - 2*M_PI) )
        {
            anglediff = anglediff - 2*M_PI;
        }
    }
    return anglediff;
}
Where angle_to_goal = th1 and theta = th2, so that anglediff > 0 directly means the robot has to turn to the left and anglediff < 0 that it has to turn to the right.
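For reference, here is a direct Python transcription of that C++ function (the name is just carried over), together with a check that it agrees with the atan2(sin, cos) formulation used in the asker's solution:

```python
from math import fmod, fabs, pi, atan2, sin, cos

def shortest_angle_difference(th1, th2):
    # Same logic as the C++ version: reduce the difference mod 2*pi,
    # then fold anything larger than pi in magnitude back the short way around.
    anglediff = fmod(th1 - th2, 2 * pi)
    if anglediff < 0.0:
        if fabs(anglediff) > 2 * pi + anglediff:
            anglediff = 2 * pi + anglediff
    else:
        if anglediff > fabs(anglediff - 2 * pi):
            anglediff = anglediff - 2 * pi
    return anglediff

# Agrees with the atan2(sin, cos) trick for a few sample angle pairs:
for a, b in [(3.0, -3.0), (0.5, 0.2), (-2.5, 2.5)]:
    assert abs(shortest_angle_difference(a, b) - atan2(sin(a - b), cos(a - b))) < 1e-9
```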
Note: in your solution you have multiple conditions to keep your speed between [0.3; 0.7], but you could simply do:
if move_forward == True:
    # keep speed between 0.3 and 0.7
    if dist > 0.7:
        dist = 0.7
    elif dist < 0.3:
        dist = 0.3
    speed.linear.x = dist
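The same clamp can be written even more compactly (a sketch, not from the original answer):

```python
def clamped_speed(dist, lo=0.3, hi=0.7):
    # Clamp the distance-proportional speed into [lo, hi].
    return max(lo, min(hi, dist))

print(clamped_speed(1.5), clamped_speed(0.1), clamped_speed(0.5))  # 0.7 0.3 0.5
```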
Originally posted by Delb with karma: 3907 on 2019-01-22
This answer was ACCEPTED on the original site
Post score: 3 | {
"domain": "robotics.stackexchange",
"id": 32306,
"tags": "ros, gazebo, ros-kinetic, position"
} |
python PointCloud2 read_points() problem | Question:
I'm working on a kinect project. I'm trying to write a pair of files that test out the PointCloud2 sensor message. I don't know how to make the 'listener' python script read the value that I have created in the 'talktest' script. I am using the class defined here:
sensor_msgs/point_cloud2.py (I could not post a link)
This is my code. These are my includes. I use them pretty much the same in both files.
#!/usr/bin/env python
import rospy
import sensor_msgs.point_cloud2 as pc2
from sensor_msgs.msg import PointCloud2, PointField
This is the 'talktest' code.
# talktest
def test():
    rospy.init_node('talktest', anonymous=True)
    pub_cloud = rospy.Publisher("camera/depth_registered/points", PointCloud2)
    while not rospy.is_shutdown():
        pcloud = PointCloud2()
        # make point cloud
        cloud = [[33, 22, 11], [55, 33, 22], [33, 22, 11]]
        pcloud = pc2.create_cloud_xyz32(pcloud.header, cloud)
        pub_cloud.publish(pcloud)
        rospy.loginfo(pcloud)
        rospy.sleep(1.0)

if __name__ == '__main__':
    try:
        test()
    except rospy.ROSInterruptException:
        pass
Then I have written a script that listens for point cloud data. It always throws a AttributeError.
# listener
def listen():
    rospy.init_node('listen', anonymous=True)
    rospy.Subscriber("camera/depth_registered/points", PointCloud2, callback_kinect)

def callback_kinect(data):
    # pick a height
    height = int(data.height / 2)
    # pick x coords near front and center
    middle_x = int(data.width / 2)
    # examine point
    middle = read_depth(middle_x, height, data)
    # do stuff with middle

def read_depth(width, height, data):
    # read function
    if (height >= data.height) or (width >= data.width):
        return -1
    data_out = pc2.read_points(data, field_names=None, skip_nans=False, uvs=[width, height])
    int_data = next(data_out)
    rospy.loginfo("int_data " + str(int_data))
    return int_data

if __name__ == '__main__':
    try:
        listen()
    except rospy.ROSInterruptException:
        pass
I always get this error:
[ERROR] [WallTime: 1389040976.560028] bad callback: <function callback_kinect at 0x2a51b18>
Traceback (most recent call last):
  File "/opt/ros/hydro/lib/python2.7/dist-packages/rospy/topics.py", line 681, in _invoke_callback
    cb(msg)
  File "./turtlebot_listen.py", line 98, in callback_kinect
    left = read_depth (left_x, height, data)
  File "./turtlebot_listen.py", line 126, in read_depth
    int_data = next(data_out)
  File "/opt/ros/hydro/lib/python2.7/dist-packages/sensor_msgs/point_cloud2.py", line 74, in read_points
    assert isinstance(cloud, roslib.message.Message) and cloud._type == 'sensor_msgs/PointCloud2', 'cloud is not a sensor_msgs.msg.PointCloud2'
AttributeError: 'module' object has no attribute 'message'
Can anyone tell me how to use this library correctly?
Originally posted by david.c.liebman on ROS Answers with karma: 125 on 2014-01-06
Post score: 0
Answer:
Try adding the following import statement after the 'import rospy' line:
from roslib import message
--patrick
Originally posted by Pi Robot with karma: 4046 on 2014-01-06
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by david.c.liebman on 2014-01-07:
this worked. Thanks. Also, I had to change the read_points() line like this:
data_out = pc2.read_points(data, field_names=None, skip_nans=False, uvs=[[width, height]])
note the double square braces around width and height!! Thanks again.
Comment by Pi Robot on 2014-01-07:
Excellent! I haven't used the uvs argument (yet) so that is good to know for the future.
Comment by sabruri1 on 2017-07-26:
Hello! Thanks for your help. I have a question: when I test your code, in the read_depth method the variable "int_data" has three fields. What are they?
Comment by Badal on 2021-09-29:
They are the point cloud X, Y, Z data for the center (u, v) pixel correspondence.
"domain": "robotics.stackexchange",
"id": 16581,
"tags": "python, pointcloud"
} |
Is the language $L=\{a^nb^m:n,m\in\mathbb{N}\land n-m=5 \}$ regular or not regular? | Question: I'm trying to understand how to prove a language is regular or not regular, for example this language: $$L=\{a^nb^m:n,m\in\mathbb{N}\land n-m=5 \}$$
Is this language regular or not?
My solution
Using the pumping lemma, I can choose a string with a pumping length $p$ like: $w=a^{5+p}b^p$, then $x = a^j, y=a^l$ and $z=a^kb^p$ such that $j+l+k=5+p$, I will pump with $i=0$, so the string will be $xz=a^{j+k}b^p$, this is not regular because $j+k<p$.
Am I correct about this? Thanks for your help !
Answer: If $L$ is regular, then so is $L\{b\}^5$. You can conclude by studying $L\{b\}^5$ (which is a very classic language).
Also in your proof, you cannot guarantee that $j+k < p$, but $j+k < 5 + p$ is enough. | {
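Spelling out that hint (assuming the "very classic language" meant is $\{a^nb^n\}$): since $n - m = 5$ forces $n = m + 5$, appending five $b$'s to every word of $L$ gives

```latex
L\{b\}^5 \;=\; \{\, a^{m+5}\, b^{m}\, b^{5} : m \in \mathbb{N} \,\}
        \;=\; \{\, a^{n} b^{n} : n \ge 5 \,\}.
```

Regular languages are closed under concatenation, so if $L$ were regular then $L\{b\}^5$ would be too. But $\{a^nb^n : n \ge 5\}$ differs from the standard non-regular language $\{a^nb^n : n \ge 0\}$ only by finitely many strings (and regular languages are closed under union with finite sets), so it cannot be regular, and therefore neither can $L$.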
"domain": "cs.stackexchange",
"id": 19314,
"tags": "formal-languages, regular-languages"
} |
Why don't small stars end up as a black holes? | Question: I have recently done some research into black holes, and realized that big stars form black holes, whilst smaller ones don't. Is this because the gravity isn't strong enough for it to fall in on itself, or something else?
Answer: Gravitational collapse occurs when an object's internal pressure is insufficient to resist the object's own gravity. In the case of stars, it usually happens for one of two reasons:
The star has too little "fuel" left to maintain its temperature
A star that would otherwise have been stable receives extra matter in a way that does not raise its core temperature.
In either case, the star's temperature is no longer high enough to prevent it from collapsing under its own weight (gravity). The collapse may be halted by various effects as the matter condenses into a denser state, and the result is one of the various types of compact star.
As noted above, a star needs a lot of mass to collapse into a black hole or to explode as a supernova, and smaller stars simply don't have enough.
Wikipedia | {
"domain": "astronomy.stackexchange",
"id": 644,
"tags": "star, gravity, black-hole"
} |
Uncertainty when beaker goes by 50s | Question: I am helping my daughter do her chemistry homework and am stumped by this one. I have a beaker that seems to count by 50s.
That makes uncertainty calculations awkward. What is the “next unit down”?
How should one calculate uncertainty for this?
Answer: A real beaker says "approximate volume".
So the volume is approximately between 100 mL and 150 mL. So maybe 125 mL with an uncertainty of 25 mL. Your daughter would know best because she could apply exactly the rules she learned in class.
There is no universally accepted answer because the beaker is not made to measure volume accurately. | {
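As a tiny sketch of the half-interval rule suggested above (assuming graduations every 50 mL, and that the class uses the half-interval convention):

```python
def reading_with_uncertainty(lower_mark, upper_mark):
    """Midpoint estimate, with half the graduation interval as the uncertainty."""
    value = (lower_mark + upper_mark) / 2
    uncertainty = (upper_mark - lower_mark) / 2
    return value, uncertainty

# Level sits between the 100 mL and 150 mL marks:
print(reading_with_uncertainty(100, 150))  # (125.0, 25.0)
```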
"domain": "chemistry.stackexchange",
"id": 17662,
"tags": "units, definitions"
} |
How do I use matrix math in irregular neural networks such as those generated from neuroevolution (NEAT)? | Question: I understand how to structure the matrix when every node in a layer is fully connected to every node in adjacent layers and I understand that in "irregular" neural networks I can just process each node individually. However, there are no explanations or examples online of how to structure a matrix for an "irregular" neural network. How would I handle recurrent connections? Would I just fill in the "gaps" in the matrix with zeroes? Take the irregular neural network in this diagram:
Could I somehow combine (or get the dot-product of):
[i0 i1 i2] and
[[w0 w1 0 w9 0 ]
[0 w2 w3 0 0 ]
[0 0 0 w4 0 ]
[0 0 0 w5 w7]
[0 w8 0 w6 0 ]]
to find [o0 o1 o2]? Would I need to give the input vector an additional two values of 0?
Answer: It looks like I created a new way to use matrices with irregular neural networks in the process of answering my own question. Everyone must now refer to this method as the Capobianco Irregular Neural Network method. The CINN method basically treats the entire irregular neural network as a two-layer fully connected network, although you treat the weights of the missing connections as 0. This results in two sparse matrices. To be clear, you treat the input layer as being connected to EVERY hidden neuron, even if they are several layers away. Similarly, you treat the output layer as being connected to EVERY hidden neuron and input neuron.
The following example, referencing the above picture, uses initial h0 and h1 values of 0, which can be thought of as the values of the hidden states at the beginning of t0. There are two tricks I discovered.
First, just concatenate the [i0 i1 i2] input vector with the hidden state vector which gives us:
[ i0 i1 i2 h0 h1 ]
Next concatenate the weight matrix for the entire neural network except for the output weights:
( i0 i1 i2 h0 h1 )
(i0) [[ 1 0 0 w0 0 ]
(i1) [ 0 1 0 w1 w2 ]
(i2) [ 0 0 1 0 w3 ]
(h0) [ 0 0 0 w9 0 ]
(h1) [ 0 0 0 0 0 ]]
Each row shows the outgoing weight from the value in parentheses to the corresponding column value, except for the identity block where the input vector values would otherwise "interact". We need this identity block to retain the original input values for the last step.
Before that it might be helpful to multiply out what we have using real values. To be clear, you want to find the dot product of:
[[ 1 0 0 w0 0 ]
[ 0 1 0 w1 w2 ]
[ i0 i1 i2 h0 h1 ] . [ 0 0 1 0 w3 ]
[ 0 0 0 w9 0 ]
[ 0 0 0 0 0 ]]
http://matrixmultiplication.xyz/ isn't perfectly accurate, but I really like how it shows you visually how each pair of terms are combined. This will result in a new vector [i0 i1 i2 h0* h1*]. h0* and h1* represent the final values of the hidden states at the end of the original time-step t0 (note that these will be the new h0 and h1 values at the beginning of t1), while the i0, i1, and i2 remain unchanged because we utilized the identity matrix.
Finally all we have to do is multiply this new vector by the matrix containing all the weights connecting to the output layer:
( o0 o1 o2)
(i0 ) [[ 0 0 0]
(i1 ) [ 0 0 w8]
(i2 ) [ 0 0 0]
(h0*) [w4 w5 w6]
(h1*) [ 0 w7 0]]
We can also do this all at once:
[[ 1 0 0 w0 0 ] [[ 0 0 0 ]
[ 0 1 0 w1 w2 ] [ 0 0 w8 ]
[ i0 i1 i2 h0 h1 ] . [ 0 0 1 0 w3 ] . [ 0 0 0 ] = [ o0 o1 o2 ]
[ 0 0 0 w9 0 ] [w4 w5 w6 ]
[ 0 0 0 0 0 ]] [ 0 w7 0 ]]
Note that this is just the output vector at the end of t0. If you're not dealing with time data or using an RNN you don't need to worry about this, but if you're trying to determine outputs at a later time-step, you'll have to do something like this:
[[ 1 0 0 w0 0 ] [[ 1 0 0 w0 0 ] [[ 0 0 0 ]
(input @ t0) [ 0 1 0 w1 w2 ] (input @ t1) [ 0 1 0 w1 w2 ] [ 0 0 w8 ] (output @ t1)
[i0 i1 i2 h0 h1] . [ 0 0 1 0 w3 ] . [i0 i1 i2 h0* h1*] . [ 0 0 1 0 w3 ] . [ 0 0 0 ] = [o0 o1 o2]
[ 0 0 0 w9 0 ] [ 0 0 0 w9 0 ] [w4 w5 w6 ]
[ 0 0 0 0 0 ]] [ 0 0 0 0 0 ]] [ 0 w7 0 ]]
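The single-time-step product above can be sanity-checked numerically. This sketch uses made-up weight and input values and, like the exposition above, applies no activation function:

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=10)  # hypothetical values for w0..w9

i0, i1, i2, h0, h1 = 0.5, -1.0, 2.0, 0.3, -0.7
v = np.array([i0, i1, i2, h0, h1])  # inputs concatenated with hidden state

# Input/hidden transition matrix (the identity block preserves the inputs).
M1 = np.array([
    [1, 0, 0, w[0], 0   ],
    [0, 1, 0, w[1], w[2]],
    [0, 0, 1, 0,    w[3]],
    [0, 0, 0, w[9], 0   ],
    [0, 0, 0, 0,    0   ],
])

# Output matrix: every input and hidden unit "connects" to the outputs,
# with 0 standing in for connections that don't exist in the diagram.
M2 = np.array([
    [0,    0,    0   ],
    [0,    0,    w[8]],
    [0,    0,    0   ],
    [w[4], w[5], w[6]],
    [0,    w[7], 0   ],
])

out = v @ M1 @ M2

# The same thing computed node by node, straight from the diagram:
h0_new = i0 * w[0] + i1 * w[1] + h0 * w[9]
h1_new = i1 * w[2] + i2 * w[3]
expected = np.array([
    h0_new * w[4],
    h0_new * w[5] + h1_new * w[7],
    i1 * w[8] + h0_new * w[6],
])
assert np.allclose(out, expected)
```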
Note that this example is for an INN created using a genetic algorithm, so back-propagation is unnecessary in this example. If you wanted to utilize back-propagation you'd begin that process after calculating your output vector. | {
"domain": "datascience.stackexchange",
"id": 7363,
"tags": "machine-learning, neural-network, rnn, genetic-algorithms, matrix"
} |
Feedback on my Conway's Game of Life | Question: I've been programming for about 4 months now, just trying to learn by myself. I've tried my way with coding the Game of Life here, would like some general feedback as well as some pointers on how I can speed it up because right now it seems to run incredibly slowly. Keep in mind I'm a newbie so I would like some easy ways to optimize it.
Both positive and negative feedback are very welcome.
Here's the complete code:
public partial class MainFom : Form
{
Grid formGrid;
CancellationTokenSource tokenSrc = new CancellationTokenSource();
public MainFom()
{
InitializeComponent();
}
private void MainFom_Load(object sender, EventArgs e)
{
formGrid = new Grid();
}
private void MainFom_Paint(object sender, PaintEventArgs e)
{
e.Graphics.DrawImage(formGrid.toBitmap(), 0, 0);
e.Graphics.Dispose();
}
private void startBtn_Click(object sender, EventArgs e)
{
Task tempTask = Task.Factory.StartNew(
(x) =>
{
while (!tokenSrc.IsCancellationRequested)
{
formGrid.UpdateGrid();
Graphics graphics = this.CreateGraphics();
graphics.Clear(this.BackColor);
graphics.DrawImage(formGrid.toBitmap(), 0, 0);
graphics.Dispose();
}
}, tokenSrc);
startBtn.Hide();
Button stopBtn = new Button() { Text = "Stop", Location = startBtn.Location, Size = startBtn.Size };
this.Controls.Add(stopBtn);
stopBtn.Click += new EventHandler(
(x, y) =>
{
tokenSrc.Cancel();
stopBtn.Hide();
startBtn.Show();
tempTask.Wait();
tokenSrc = new CancellationTokenSource();
});
}
}
class Grid
{
#region Properties/Fields
const int MAX_CELLS_X = 41*2;//41;
const int MAX_CELLS_Y = 35*2;//35;
Random RNG = new Random();
CellCollection cells;
#endregion
public Grid()
{
//Initialize grid (both frontend and backend)
cells = new CellCollection();
for (int x = 0; x < MAX_CELLS_X; x++)
{
int XCord = 10 * (x + 1);
for (int y = 0; y < MAX_CELLS_Y; y++)
{
int YCord = 10 * (y + 1);
Point point = new Point(XCord, YCord);
if (RNG.Next(100) < 7)
{ // 7% chance of the initial seed creating a live cell
cells.Add(new Cell(new Rectangle(point, new Size(10, 10)), point) { isAlive = true });
} else
{
cells.Add(new Cell(new Rectangle(point, new Size(10, 10)), point));
}
}
}
}
public void UpdateGrid()
{
//Create copy of cells since all changes must be done simultaneously
CellCollection copy = cells;
for (int i = 0; i < copy.Count; i++)
{
//Rule 1: Any live cell with fewer than two live neighbours dies, as if caused by under-population.
if (cells[i].isAlive && cells.GetNeighbours(cells[i]).Length < 2)
{
copy[i].Kill();
}
//Rule 2: Any live cell with more than three live neighbours dies, as if by overcrowding.
if (cells[i].isAlive && cells.GetNeighbours(cells[i]).Length > 3)
{
copy[i].Kill();
}
//Rule 3: Any dead cell with exactly three live neighbours becomes a live cell, as if by reproduction.
if (!cells[i].isAlive && cells.GetNeighbours(cells[i]).Length == 3)
{
cells[i].Alive();
}
}
// Now that all cells are changed we can copy those changes simulatenously by copying the copy back to the original
cells = copy;
}
public Bitmap toBitmap()
{
Bitmap gridBmp = new Bitmap(1000, 1000); // TODO: Find optimal size for bmp
using (Graphics gfxObj = Graphics.FromImage(gridBmp))
{
// Draw grid here and Dispose() on Pen, gfxObj is implicitly disposed
Pen myPen = new Pen(Color.LightGray);
SolidBrush myBrush = new SolidBrush(Color.Black);
foreach (var cell in cells)
{
if (!cell.isAlive) {
gfxObj.DrawRectangle(myPen, cell.rect);
} else {
gfxObj.FillRectangle(myBrush, cell.rect);
}
}
myPen.Dispose();
}
return gridBmp;
}
}
class CellCollection : List<Cell>
{
public Cell[] GetNeighbours(Cell cell)
{
List<Cell> neighbours = new List<Cell>();
foreach (Cell entry in this)
{
//Top row
if(entry.point.Y.Equals(cell.point.Y - 10)) {
if (entry.point.X.Equals(cell.point.X-10) || entry.point.X.Equals(cell.point.X) || entry.point.X.Equals(cell.point.X+10))
{
if (entry.isAlive) {
neighbours.Add(entry);
}
}
}
// Middle row
if (entry.point.Y.Equals(cell.point.Y)) {
if (entry.point.X.Equals(cell.point.X - 10) || entry.point.X.Equals(cell.point.X + 10))
{
if (entry.isAlive) {
neighbours.Add(entry);
}
}
}
//Bottom row
if (entry.point.Y.Equals(cell.point.Y + 10))
{
if (entry.point.X.Equals(cell.point.X - 10) || entry.point.X.Equals(cell.point.X) || entry.point.X.Equals(cell.point.X + 10))
{
if (entry.isAlive) {
neighbours.Add(entry);
}
}
}
}
return neighbours.ToArray();
}
}
class Cell
{
public bool isAlive { get; set; }
public Rectangle rect { get; set; }
public readonly Point point;
public Cell(Rectangle rect, Point point)
{
this.rect = rect;
this.point = point;
}
public void Alive()
{
isAlive = true;
}
public void Kill()
{
isAlive = false;
}
}
Answer: It looks well-written.
You're hard-coding 10 in several places. What if you want to change it to 12, or (worse) to 12.33333? Instead of storing Rect and Point in Cell, I'd suggest storing the zero-based x and y grid coordinate of the cell: that makes your UpdateGrid calculation easier. point and rect can be a property of Cell, calculated on-the-fly ...
Point point { get { return new Point(this.x * 10, this.y * 10); } }
... or initialized in the Cell constructor:
Point point;
int x;
int y;

public Cell(int x, int y)
{
    this.x = x;
    this.y = y;
    this.point = new Point(this.x * 10, this.y * 10);
}
In C# the convention is to use PascalCase instead of camelCase: so IsAlive instead of isAlive etc.
To make it faster, currently you are calling the GetNeighbours method several times for each cell, which is a waste:
for (int i = 0; i < copy.Count; i++)
{
    //Rule 1: Any live cell with fewer than two live neighbours dies, as if caused by under-population.
    if (cells[i].isAlive && cells.GetNeighbours(cells[i]).Length < 2)
    {
        copy[i].Kill();
    }
    //Rule 2: Any live cell with more than three live neighbours dies, as if by overcrowding.
    if (cells[i].isAlive && cells.GetNeighbours(cells[i]).Length > 3)
    {
        copy[i].Kill();
    }
    //Rule 3: Any dead cell with exactly three live neighbours becomes a live cell, as if by reproduction.
    if (!cells[i].isAlive && cells.GetNeighbours(cells[i]).Length == 3)
    {
        cells[i].Alive();
    }
}
It would be better to call that method only once:
for (int i = 0; i < copy.Count; i++)
{
    int countNeighbours = cells.GetNeighbours(cells[i]).Length;
    //Rule 1: Any live cell with fewer than two live neighbours dies, as if caused by under-population.
    if (cells[i].isAlive && countNeighbours < 2)
    {
        copy[i].Kill();
    }
    //Rule 2: Any live cell with more than three live neighbours dies, as if by overcrowding.
    if (cells[i].isAlive && countNeighbours > 3)
    {
        copy[i].Kill();
    }
    //Rule 3: Any dead cell with exactly three live neighbours becomes a live cell, as if by reproduction.
    if (!cells[i].isAlive && countNeighbours == 3)
    {
        cells[i].Alive();
    }
}
Your GetNeighbours returns a List but it only needs to return an integer count.
Your GetNeighbours searches the whole grid for neighbours; it could just find them instead:
// position of this cell
int x = cell.x;
int y = cell.y;
return
    // row above
    IsAlive(x - 1, y - 1) +
    IsAlive(x, y - 1) +
    IsAlive(x + 1, y - 1) +
    // left and right on this row
    IsAlive(x - 1, y) +
    IsAlive(x + 1, y) +
    // row below
    IsAlive(x - 1, y + 1) +
    IsAlive(x, y + 1) +
    IsAlive(x + 1, y + 1);

int IsAlive(int x, int y)
{
    // x and/or y might be off the board
    if ((x < 0) || (y < 0) || (x >= MAX_CELLS_X) || (y >= MAX_CELLS_Y))
        // no cell here therefore not alive
        return 0;
    // find the cell at (x,y)
    int index = (y * MAX_CELLS_X) + x;
    Cell found = this[index];
    // return 1 for a live cell so the results can be summed into a count
    return found.isAlive ? 1 : 0;
}
It would be more conventional to model the Grid as a two-dimensional array than as a one-dimensional list.
Grid probably shouldn't be a subclass of (i.e. inherit from) List: at most it should contain a List as a data member.
You should Dispose your SolidBrush as well as your Pen: do that with further using statements.
This doesn't create a copy:
//Create copy of cells since all changes must be done simultaneously
CellCollection copy = cells;
It creates a variable named copy which is a reference to the same CellCollection as cells.
To create a copy, given that CellCollection is a List, define a 'copy constructor' ...
CellCollection(CellCollection copyFrom)
    // invoke this List constructor:
    // http://msdn.microsoft.com/en-us/library/fkbw11z0(v=vs.110).aspx
    : base(copyFrom)
{
}
... and invoke it e.g. like this:
CellCollection copy = new CellCollection(cells);
There's a typo in your rule #3 processing: you set aliveness of cell instead of copy.
Beware making all your properties settable; for example, your API allows callers to set the isAlive and rect properties:
public bool isAlive { get; set; }
public Rectangle rect { get; set; }
Why have Alive and Kill methods if callers can also/instead set the isAlive property directly? And do you want callers to change the rect property after the cell has been constructed?
Some people (e.g. people who use scripting languages) like a permissive API which allows you to do as much as possible; conversely there's also something to be said for a restrictive API which lets you do as little as possible i.e. only what is necessary and no more: for example if I have no need to change the rect after the Cell is constructed then I don't define an API which permits that.
I just noticed that even making a copy of the grid isn't enough, because the copied list would contain the same Cell instances as the original list. You can fix that:
By making a copy (using new Cell) of each Cell as you copy it into the new List
Or by saying that Cell is a struct instead of a class
Or by giving up on the idea of copying Cells, and adding a new property like bool NextGenerationAliveness which you initialize in UpdateGrid: a) walk through the grid using IsAlive to set NextGenerationAliveness b) walk through the list again to set IsAlive = NextGenerationAliveness. | {
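The copying pitfalls above boil down to one rule: compute every next-generation state from the *current* generation before mutating anything. A minimal sketch of that double-buffering idea (in Python rather than the reviewed C#, just to show the structure):

```python
def step(grid):
    """One Game of Life generation; reads only `grid` and writes a new grid."""
    rows, cols = len(grid), len(grid[0])

    def alive(x, y):
        # Off-board coordinates count as dead, like the bounds check above.
        return grid[y][x] if 0 <= x < cols and 0 <= y < rows else 0

    new = [[0] * cols for _ in range(rows)]
    for y in range(rows):
        for x in range(cols):
            n = sum(alive(x + dx, y + dy)
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                    if (dx, dy) != (0, 0))
            # Birth on exactly 3 neighbours; survival on 2 or 3.
            new[y][x] = 1 if n == 3 or (grid[y][x] and n == 2) else 0
    return new

# A horizontal blinker flips to vertical and back every generation.
blinker = [[0, 0, 0, 0, 0],
           [0, 0, 0, 0, 0],
           [0, 1, 1, 1, 0],
           [0, 0, 0, 0, 0],
           [0, 0, 0, 0, 0]]
assert step(step(blinker)) == blinker
assert step(blinker) != blinker
```

Because `step` never writes into the grid it is reading, no copy of the cell objects is needed at all.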
"domain": "codereview.stackexchange",
"id": 6577,
"tags": "c#, object-oriented, beginner, game-of-life"
} |
Is it possible to connect three neural networks in Matlab? | Question: If I have 3 separate feedforward neural networks in Matlab, is it possible to connect them so that, given input data and target data the 3 work in parallel to produce output? If so, how do I do this?
Answer: If you want to combine the results from three different Neural Networks to "boost" the performance :) , you might want to look at the different Ensemble Learning Methods as I mentioned earlier.
Which method you should use, depends on how you share or divide the training data between the three NNs. For example if the NNs are trained on same data but have different parameters, you can look at simple voting ( if you are doing a classification task) or averaging ( if you are using them for regression).
The more advanced methods like AdaBoost divide the training data between the classifiers. You can read about it in Boosting Neural Networks | {
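As a concrete sketch of the two simple combination rules mentioned above (illustrative only, in Python rather than Matlab, with made-up model outputs):

```python
from collections import Counter

def majority_vote(predictions):
    """predictions: one class label per model, e.g. from three networks."""
    return Counter(predictions).most_common(1)[0][0]

def average(outputs):
    """Plain mean of regression outputs from the ensemble members."""
    return sum(outputs) / len(outputs)

print(majority_vote(["cat", "dog", "cat"]))  # cat
print(average([1.0, 2.0, 6.0]))              # 3.0
```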
"domain": "datascience.stackexchange",
"id": 368,
"tags": "neural-network, matlab"
} |
Generic Double Linked List | Question: I am a mathematician attempting to become proficient with C++. At the moment I am learning about data structures. I am trying to write a double linked list now from scratch with some help from online tutorials. I wanted to see if there is anything that I could improve. I have made similar posts with other data structures. With the enormous help everyone has given me I feel more and more confident with my coding.
Here is the header file:
#ifndef DoubleLinkedLists_h
#define DoubleLinkedLists_h
template <class T>
class DoubleLinkedLists {
private:
struct Node {
T data;
Node* next;
Node* previous;
};
Node* head;
Node* tail;
public:
// Constructors
DoubleLinkedLists() : head(nullptr), tail(nullptr) {} // empty constructor
DoubleLinkedLists(DoubleLinkedLists const& value); // copy constructor
DoubleLinkedLists<T>(DoubleLinkedLists<T>&& move) noexcept; // move constuctor
DoubleLinkedLists<T>& operator=(DoubleLinkedLists&& move) noexcept; // move assignment operator
~DoubleLinkedLists(); // destructor
// Overload operators
DoubleLinkedLists& operator=(DoubleLinkedLists const& rhs);
friend std::ostream& operator<<(std::ostream& str, DoubleLinkedLists<T> const& data) {
data.display(str);
return str;
}
// Member functions
void swap(DoubleLinkedLists& other) noexcept;
void createNode(const T& theData);
void createNode(T&& theData);
void display(std::ostream& str) const;
void insertHead(const T& theData);
void insertTail(const T& theData);
void insertPosition(int pos, const T& theData);
void deleteHead();
void deleteTail();
void deletePosition(int pos);
bool search(const T& x);
};
template <class T>
DoubleLinkedLists<T>::DoubleLinkedLists(DoubleLinkedLists const& value) : head(nullptr), tail(nullptr) {
for(Node* loop = value->head; loop != nullptr; loop = loop->next) {
createNode(loop->data);
}
}
template <class T>
DoubleLinkedLists<T>::DoubleLinkedLists(DoubleLinkedLists<T>&& move) noexcept : head(nullptr), tail(nullptr) {
move.swap(*this);
}
template <class T>
DoubleLinkedLists<T>& DoubleLinkedLists<T>::operator=(DoubleLinkedLists<T> &&move) noexcept {
move.swap(*this);
return *this;
}
template <class T>
DoubleLinkedLists<T>::~DoubleLinkedLists() {
while(head != nullptr) {
deleteHead();
}
}
template <class T>
DoubleLinkedLists<T>& DoubleLinkedLists<T>::operator=(DoubleLinkedLists const& rhs) {
DoubleLinkedLists copy(rhs);
swap(copy);
return *this;
}
template <class T>
void DoubleLinkedLists<T>::swap(DoubleLinkedLists<T>& other) noexcept {
using std::swap;
swap(head, other.head);
swap(tail, other.tail);
}
template <class T>
void DoubleLinkedLists<T>::createNode(const T& theData) {
Node* newData = new Node;
newData->data = theData;
newData->next = nullptr;
if(head == nullptr) {
newData->previous = nullptr;
head = newData;
tail = newData;
}
else {
newData = new Node;
newData->data = theData;
newData->previous = tail;
tail->next = newData;
tail = newData;
}
}
template <class T>
void DoubleLinkedLists<T>::createNode(T&& theData) {
Node* newData = new Node;
newData->data = std::move(theData);
newData->next = nullptr;
if(head == nullptr) {
newData->previous = nullptr;
head = newData;
tail = newData;
}
else {
newData = new Node;
newData->data = std::move(theData);
newData->previous = tail;
tail->next = newData;
tail = newData;
}
}
template <class T>
void DoubleLinkedLists<T>::insertHead(const T& theData) {
Node* newNode = new Node;
newNode->data = theData;
newNode->next = head;
head->previous = newNode;
head = newNode;
}
template <class T>
void DoubleLinkedLists<T>::insertTail(const T& theData) {
Node* newNode = new Node;
newNode->data = theData;
newNode->previous = tail;
tail->next = newNode;
tail = newNode;
}
template <class T>
void DoubleLinkedLists<T>::insertPosition(int pos, const T& theData) {
Node* prev = new Node;
Node* current = head;
Node* newNode = new Node;
for(int i = 1; i < pos; i++) {
prev = current;
current = current->next;
}
newNode->data = theData;
prev->next = newNode;
newNode->next = current;
}
template <class T>
void DoubleLinkedLists<T>::display(std::ostream &str) const {
for(Node* loop = head; loop != nullptr; loop = loop->next) {
str << loop->data << "\t";
}
str << "\n";
}
template <class T>
void DoubleLinkedLists<T>::deleteHead() {
Node* old = head;
head = head->next;
delete old;
}
template <class T>
void DoubleLinkedLists<T>::deleteTail() {
Node* prev = nullptr;
Node* current = head;
while(current->next != nullptr) {
prev = current;
current = current->next;
}
tail = prev;
prev->next = nullptr;
delete current;
}
template <class T>
void DoubleLinkedLists<T>::deletePosition(int pos) {
Node* prev = new Node;
Node* current = head;
for(int i = 1; i < pos; i++) {
prev = current;
current = current->next;
}
prev->next = current->next;
}
template <class T>
bool DoubleLinkedLists<T>::search(const T &x) {
Node* current = head;
while(current != nullptr) {
if(current->data == x) {
return true;
}
current = current->next;
}
return false;
}
#endif /* DoubleLinkedLists_h */
I feel like in some of the functions, such as insertPosition() and deletePosition(), I may not have linked the previous node correctly, but I am not entirely sure. Everything compiles and runs as it should.
Here is the main.cpp file:
#include <iostream>
#include "DoubleLinkedLists.h"
int main(int argc, const char * argv[]) {
///////////////////////////////////////////////////////////////////////////////////
///////////////////////////// Double Linked List //////////////////////////////////
///////////////////////////////////////////////////////////////////////////////////
DoubleLinkedLists<int> obj;
obj.createNode(2);
obj.createNode(4);
obj.createNode(6);
obj.createNode(8);
obj.createNode(10);
std::cout<<"\n--------------------------------------------------\n";
std::cout<<"---------------Displaying All nodes---------------";
std::cout<<"\n--------------------------------------------------\n";
std::cout << obj << std::endl;
std::cout<<"\n--------------------------------------------------\n";
std::cout<<"----------------Inserting At Start----------------";
std::cout<<"\n--------------------------------------------------\n";
obj.insertHead(50);
std::cout << obj << std::endl;
std::cout<<"\n--------------------------------------------------\n";
std::cout<<"-----------------Inserting At End-----------------";
std::cout<<"\n--------------------------------------------------\n";
obj.insertTail(20);
std::cout << obj << std::endl;
std::cout<<"\n--------------------------------------------------\n";
std::cout<<"-------------Inserting At Particular--------------";
std::cout<<"\n--------------------------------------------------\n";
obj.insertPosition(5,60);
std::cout << obj << std::endl;
std::cout<<"\n--------------------------------------------------\n";
std::cout<<"----------------Deleting At Start-----------------";
std::cout<<"\n--------------------------------------------------\n";
obj.deleteHead();
std::cout << obj << std::endl;
std::cout<<"\n--------------------------------------------------\n";
std::cout<<"----------------Deleting At End-----------------";
std::cout<<"\n--------------------------------------------------\n";
obj.deleteTail();
std::cout << obj << std::endl;
std::cout<<"\n--------------------------------------------------\n";
std::cout<<"--------------Deleting At Particular--------------";
std::cout<<"\n--------------------------------------------------\n";
obj.deletePosition(4);
std::cout << obj << std::endl;
std::cout << std::endl;
obj.search(8) ? printf("Yes"):printf("No");
return 0;
}
Answer: Memory Leaks
At a first glance the double use of new without any nearby delete is suspicious. Let's look at createNode first:
Node* newData = new Node;
newData->data = theData;
newData->next = nullptr;
This first part is always executed, allocating a new Node.
Now if head is not nullptr you allocate another new Node:
if(head == nullptr) {
newData->previous = nullptr;
head = newData;
tail = newData;
}
else {
newData = new Node;
newData->data = theData;
newData->previous = tail;
tail->next = newData;
tail = newData;
}
You just lost any reference to your first Node and have no way to clean it up anymore.
Also as a user of this function you don't know what it does without looking at the implementation because the name certainly doesn't tell you. Will it insert at the front? The back? In the middle? No idea.
I would scrap the entire function or merge it into insertTail.
Furthermore don't compare to nullptr. Use the implicit conversion and simply do if (head) and for the inverse if (!head).
The next function with double new and no delete in sight is insertPosition.
What is even going on here? Why allocate new memory for the previous node? Why allocate the new node before you found the right spot? What if it fails now? Memory leaks for everyone.
Consider something like the following:
Node* cur_node = head;
int i = 0;
while (cur_node) {
if (i++ == pos) {
// do the deed
}
cur_node = cur_node->next;
}
No need to allocate any memory before you found the right position to insert at. (note: here the postfix operator is intentional but usually you should prefer the prefix version)
No double new but still a problem child: deletePosition
Again, why make a new node for prev? The approach from above applies here as well.
Well those functions didn't work out too well but the others are fine right? No.
Let's look at insertHead as an example of what is wrong with most of the other functions.
Node* newNode = new Node;
newNode->data = theData;
newNode->next = head;
head->previous = newNode;
head = newNode;
See the problem?
Assume the list is empty and head is nullptr
newNode->next = head;
head->previous = newNode;
Now this will crash and burn.
This issue of not checking for valid nodes can be found in other functions as well.
General
Dump the comments, they don't help. In fact they're wrong as they claim some code is constructors when it also includes an overloaded assignment operator and even a destructor.
head and tail can be initialized directly in the class.
Generally you should order your interface from public to private and not the other way around.
Naming is really inconsistent. You have value, move, other, rhs. Not really wrong but confusing. Which one you pick is mostly a matter of personal preference but do pick one and stick with it. Consistency is key.
For the operator<< overload the display function should probably be private. Right now you can do std::cout << obj as well as obj.display(std::cout), kinda weird.
You're missing includes. At least <ostream> and <utility>. | {
"domain": "codereview.stackexchange",
"id": 30888,
"tags": "c++, linked-list, reinventing-the-wheel"
} |
Is there inductance to a DC circuit? | Question: When a DC circuit is carrying current, large or small, is there an induced EMF due to the inductance? Or does this apply only to AC circuits?
Answer: In the limit of long times, the currents are steady, so the magnetic fields they create are steady, and hence there is no induced EMF. This situation is usually tagged "steady state".
That said, there will be a period of time when you have just switched a circuit on or off during which things have not settled down and then there will in general be effects not seen in the steady state (including induced EMFs). This is called the "transient" behavior.
Transient behavior analysis is an important component of electronics design. | {
"domain": "physics.stackexchange",
"id": 58416,
"tags": "electromagnetism, electric-circuits, inductance"
} |
Can someone tell me the data bank which has all the STO-3G basis sets for many-electron atoms? | Question: I am trying to express the Slater orbitals of the helium atom in terms of 3 Gaussian functions (the STO-3G basis set). Is there any data bank or reference table in which I can find the exponents and coefficients?
Also, for primitive cartesian gaussians, it seems like both $2s$ and $2p$ have the same functional form. For $2p$ it is $x\exp(-\alpha x^{2})$, $y\exp(-\alpha y^{2})$ and $z\exp(-\alpha z^{2})$. What is the form for the $2s$ orbital as a primitive gaussian function?
Answer: A wide range of Gaussian basis sets for all elements can be found at the Basis Set Exchange. Alternatively, you can try to extract that data from the basis set library of a quantum chemistry software package.
The functional form of a primitive cartesian gaussian with $l=0$ is $\exp(-\alpha r^2)$, independent of the $n$ quantum number. | {
"domain": "chemistry.stackexchange",
"id": 10952,
"tags": "quantum-chemistry, computational-chemistry"
} |
How does a resistance connected in series with a zener diode but parallel to the load resistance affect the output voltage? | Question:
The breakdown voltage of A is $6\,\text{V}$ and that of B is $4\,\text{V}$.
In this question, let's assume that the input voltage is $f(t)$, a linear function of time. It's quite obvious that $V_{\text{output}}=f(t)$ when the input is less than $4\,\text{V}$, and $V_{\text{output}}$ is independent of $f(t)$ above $6\,\text{V}$. My question is: what will happen between $4\,\text{V}$ and $6\,\text{V}$?
I think $V_{\text{output}}= f(t)$ since the current can still pass through zener diode A, but my textbook says that the slope should decrease within this interval. How is that possible?
Answer: The author of that textbook question did a really bad job (if you didn't omit information or make transcription mistakes).
The way the schematic is given, $V_{in} = V_o$ always holds, as they are directly connected. No matter whether $V_{in}$ is below 4V, above 6V or whatever.
If the components are assumed to be "ideal" (what textbooks typically do), this question leads to physically impossible situations.
If $V_{in}$ were an ideal voltage source, above 6V, you'd get an infinite current through the idealized 6V zener diode, which is physically impossible, and in any real-world experiment would blow up that component.
So, to make this experiment physically possible, you have to assume some non-ideal, real-world properties of the components.
Assuming some finite impedance of the voltage source, which can be understood as a series resistor at the top left, would lead to results like those given in the textbook, but nothing about the non-ideal parameters of the components seems to be given with the question. | {
"domain": "physics.stackexchange",
"id": 81484,
"tags": "homework-and-exercises, electric-circuits, electric-current, voltage, semiconductor-physics"
} |
Extending Polynomial Class with math operations | Question: Background
After seeing a random coding interview question, "write a program that can take the derivative of a polynomial", I decided to solve the problem using an OOP approach in Python. Then, wanting more practice, I built out a "fully" functioning Polynomial class. I am looking both for ways to improve my code and for possible ways to expand this project. So far it can:
compute the value of a given Polynomial for any given float
perform field operations like addition, subtraction, & scalar multiplication
perform standard calculus operations like differentiation and integration
displays instances in a more aesthetic format:
>>> print(Poly(-7, 0, 0, 12, 12, -2, 0))
-7.0x⁶ + 12.0x³ + 12.0x² - 2.0x
Below is the main file and supporting script for the __str__ method
Polynomial.py (main)
from FormattingFuncs import mathformat
class Poly:
"""
Polynomial(*coeffs) -> Polynomial
Supports:
- Computing real-valued polynomials for any given float/int
- Field operations: can add, subtract, & scalar multiply polynomials
- Calculus operations: can differentiate & integrate polynomials
"""
def __init__(self, *coeffs):
"""
Creates a new Polynomial object by passing in floats/ints
Passed in args correspond to coefficients in decreasing degree order
"""
try:
if coeffs[0] == 0 and len(coeffs) > 1:
raise ValueError('First argument of non-constant Polynomial must be non-zero')
self.coeffs = coeffs
# self.coeffs = {degree: coeff for degree, coeff in enumerate(reversed(coeffs))}
self.degree = len(coeffs) - 1
self.isConst = (len(coeffs) == 1) # P(x) = c
self.isLinear = (len(coeffs) == 2) # P(x) = ax + b
except IndexError:
raise IndexError(f'Oops! {type(self).__name__} requires at least 1 float/int arg')
def value(self, x):
"""Computes P(x=float/int)"""
# Makes finding y-intercept a constant time process, since y-int := P(0) = const term
if x == 0:
val = self.coeffs[-1]
else:
val = sum(self.coeffs[i] * (x ** (self.degree - i)) for i in range(self.degree + 1))
return val
def differentiate(self):
"""Computes differentiated Polynomial object: dP/dx"""
if self.isConst:
coeffs = (0,)
else:
coeffs = (self.coeffs[i] * (self.degree - i) for i in range(self.degree))
return Poly(*coeffs)
def integrate(self, const=0):
"""Computes integrated Polynomial object given some integrating constant: ∫P(x)dx + c"""
coeffs = [self.coeffs[i] / (self.degree + 1 - i) for i in range(self.degree + 1)]
coeffs.append(const)
return Poly(*coeffs)
def __add__(self, other):
"""
Polynomial(*coeffs1) + Polynomial(*coeffs2) -> Polynomial(*coeffs3)
Polynomial(*coeffs4) + float/int -> Polynomial(*coeffs5)
"""
try:
# Prepends smaller degree polynomial with leading 0's
diff = self.degree - other.degree
leadingZeros = tuple(0 for _ in range(abs(diff)))
if diff > 0:
p1coeffs = self.coeffs
p2coeffs = leadingZeros + other.coeffs
else:
p1coeffs = leadingZeros + self.coeffs
p2coeffs = other.coeffs
sumcoeffs = [sum(c) for c in zip(p1coeffs, p2coeffs)]
# Finds index of first non-zero term
leadingindex = -1
for c in sumcoeffs:
leadingindex += 1
if c is not 0: break
sumcoeffs = sumcoeffs[leadingindex:]
return Poly(*sumcoeffs)
# Handles case when adding a scalar -> shifts polynomial vertically
except AttributeError:
coeffs = list(self.coeffs)
coeffs[-1] += other
return Poly(*coeffs)
def __radd__(self, other):
"""
Polynomial(*coeffs1) + Polynomial(*coeffs2) -> Polynomial(*coeffs3)
float/int + Polynomial(*coeffs4) -> Polynomial(*coeffs5)
"""
return self + other
def __mul__(self, scalar):
"""Polynomial(coeffs=tuple) * float/int -> Polynomial(coeffs=tuple2)"""
if not isinstance(scalar, (float, int)):
raise TypeError(f"'{type(scalar).__name__}' object cannot be multiplied"
f" - only supports scalar (float/int) multiplication")
elif scalar == 0:
coeffs = (0,)
else:
coeffs = (coeff * scalar for coeff in self.coeffs)
return Poly(*coeffs)
def __rmul__(self, other):
"""float/int * Polynomial(coeffs=tuple) -> Polynomial(coeffs=tuple2)"""
return self * other
def __sub__(self, other):
"""
Polynomial(*coeffs1) - Polynomial(*coeffs2) -> Polynomial(*coeffs3)
Polynomial(*coeffs4) - float/int -> Polynomial(*coeffs5)
"""
return self + (-1 * other)
def __rsub__(self, other):
"""
Polynomial(*coeffs1) - Polynomial(*coeffs2) -> Polynomial(*coeffs3)
float/int - Polynomial(*coeffs4) -> Polynomial(*coeffs5)
"""
return other + (-1 * self)
def __repr__(self):
return f'{type(self).__name__}{self.coeffs}'
def __str__(self):
"""
Displays 'mathematical' representation of polynomial:
i.e. Polynomial(-1, 0, 0, 0, -2, 0, 1) -> -x⁶ - 2x² + 1
"""
poly = ''.join(mathformat(k, self.coeffs) for k in range(self.degree + 1))
return poly
FormattingFuncs.py
# File includes 3 support functions for Poly.__str__ method
def mathformat(term, coeffs):
"""Returns formatted term of Polynomial according to coefficient's parity and value"""
# Booleans used for control flow to exhaust all cases
isLeadingTerm = (term == 0)
isConstTerm = (term == len(coeffs) - 1)
# Handles case where polynomial is constant
if isLeadingTerm and isConstTerm: return str(coeffs[0])
c = float(coeffs[term])
# Handles formatting the highest order term's coefficient
if c == 1:
leadingstr = ''
elif c == -1:
leadingstr = '-'
else:
leadingstr = f'{c:.03}'
# Formats coefficient accordingly; superscripts degree unless it's the linear term
coeff = leadingstr if isLeadingTerm else formatcoeff(term, coeffs)
degree = f'{superscript(len(coeffs) - 1 - term)}'
formattedterm = f'{coeff}x' + degree * (degree != '¹')
polyformat = formatcoeff(term, coeffs) if (isConstTerm or c == 0) else formattedterm
return polyformat
def formatcoeff(term, coeffs):
"""Transforms coefficient into appropriate str"""
c = float(coeffs[term])
isConstTerm = (term == len(coeffs) - 1)
isUnitary = (abs(c) == 1) # checks if c = 1 or -1
coeff = '' if (isUnitary and not isConstTerm) else abs(c)
if c > 0:
formatted = f' + {coeff:.03}'
elif c < 0:
formatted = f' - {coeff:.03}'
else:
formatted = ''
return formatted
def superscript(value, reverse=False):
"""
Returns a str with any numbers superscripted: H2SO4 -> H²SO⁴
Change reverse param to 'True' to subscript: H2SO4 -> H₂SO₄
"""
digits = ('⁰¹²³⁴⁵⁶⁷⁸⁹', '₀₁₂₃₄₅₆₇₈₉')[reverse]
transtable = str.maketrans("0123456789", digits)
formatted = str(value).translate(transtable)
return formatted
Answer:
It looks strange that you are fine with a constant zero polynomial (e.g. differentiate may produce one), yet you don't allow constructing one explicitly.
value can benefit from the Horner scheme, both in time complexity and precision. Along the same line, the special case for x == 0 seems like a premature optimization.
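As a rough sketch of what the Horner-scheme suggestion could look like for this coefficient layout (highest degree first) — the function name here is mine, not part of the reviewed class:

```python
def horner_value(coeffs, x):
    """Evaluate a polynomial with coefficients in decreasing-degree
    order, e.g. (3, 0, 2) -> 3x^2 + 2, using Horner's scheme."""
    result = 0
    for c in coeffs:
        result = result * x + c
    return result

# horner_value((3, 0, 2), 2) evaluates ((0*2 + 3)*2 + 0)*2 + 2 = 14
```

This touches each coefficient exactly once and never calls `**`, which is where both the time and the precision benefits come from.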
You seem to assume that AttributeError in __add__ implies that other is of a numeric type. Unfortunately it only means that other doesn't have coeffs, and you may end up with a very strange-looking coeffs[-1]. | {
"domain": "codereview.stackexchange",
"id": 28300,
"tags": "python, object-oriented, mathematics"
} |
Compute several histograms with multiple filter conditions | Question: I have a numerical dataset of $N$ columns ($\approx 150$) and $K$ rows ($\approx 60000$).
On a user interface, you can apply a filter to one or more columns. Call the number of active filters $Z$ (with $Z \le N$). Those filters are double sliders that set a min/max for the column. For each filter, I would like to compute the histogram obtained by applying all the filters except that one.
The simplest method would be, for each filter, to apply all the other filters to the complete dataset and compute the histogram of the remaining column. That leads to $O(Z^2)$ filter applications over the dataset and poor performance when $Z$ (and also $K$) is large.
Is there a smarter way? I have a "feeling" there is probably a way in $O(Z)$, but I am not yet able to make it clear enough in my mind to write it down. In terms of thinking, it looks quite similar to the "product of array except self" problem.
Maybe a more concrete example may help
On a e-commerce, you are selling RAMs and you have a filter by:
price
number of modules
frequency
memory size
If you filter the price between 100€ and 200€ and the frequency above 3000 MHz, you will mainly have a memory size of 16 GB.
The benefit of the histogram is to see, on the "price" slider, in which direction you may get more results with the smallest change. For example, going below 100€ will not provide many more results, as there is nearly no RAM below 90€. However, if you increase the limit to 250€ you may have a lot more choice in this range of frequency. Similarly, if you reduce the frequency, you will end up with more choice (lower frequencies are cheaper, so maybe 32 GB will be available at 200€).
To see that:
you need to compute the distribution of the price after applying the filter on frequency only
then compute the distribution of the frequency after applying the filter on price, to see the number of products available below 3000 MHz.
The objective is to be able to render something like :
The histogram is shown over the complete range even if there is a filter on this range (blue section). This is quite simple if there is only 1 filter, as you compute the histogram only once, but in my case, depending on the other filters, this histogram varies.
I hope this example makes it clear.
Answer: I'm going to assume that each filter is a predicate on rows that identifies whether that row should be kept or discarded. If so, then yes, this can be solved with many fewer than $O(Z^2)$ applications of the filters. I'll describe two algorithms with different tradeoffs between time and space: the first applies $O(Z)$ filters but uses $O(ZK)$ memory; the second applies $O(Z \lg Z)$ filters and uses just $O(K \lg Z)$ memory. In both, let $F_1,\dots,F_Z$ denote the filters.
Algorithm 1: minimizing computation time
Here is an algorithm that uses $O(Z)$ applications of filters and $O(ZK)$ memory. (This corresponds to something like $O(ZK)$ time and $O(ZK)$ memory.) Roughly speaking, the algorithm computes a "prefix sum" and a "suffix sum", using two linear scans. In more detail:
For each $i$, compute the subset $S_i$ of rows that are retained if you apply filters $F_1,\dots,F_i$.
For each $j$, compute the subset $S'_j$ of rows that are retained if you apply filters $F_j,\dots,F_Z$.
For each $i$, compute the intersection $S_{i-1} \cap S'_{i+1}$; this is the set of rows that are retained if you apply all the filters except for $F_i$.
Note that step 1 can be done in a linear scan using a total of $Z$ applications of filters, as you can obtain $S_{i+1}$ from $S_i$ using a single application of $F_{i+1}$. Similarly step 2 can be done using $Z$ applications of filters, for a total of $2Z=O(Z)$ applications of filters. Unfortunately you have to store all of $S_1,\dots,S_Z,S'_1,\dots,S'_Z$, which takes $O(ZK)$ memory.
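A minimal Python sketch of Algorithm 1, with filters represented as row predicates and the retained subsets as sets of row indices (the names and the set representation are mine, purely illustrative):

```python
def histograms_except_self(rows, filters):
    """For each filter i, return the set of row indices retained by
    applying every filter except filter i (two linear scans)."""
    z = len(filters)
    all_idx = set(range(len(rows)))
    # prefix[i]: rows surviving filters[0..i-1]
    prefix = [all_idx]
    for f in filters:
        prefix.append({i for i in prefix[-1] if f(rows[i])})
    # after the reversal below, suffix[j]: rows surviving filters[j..z-1]
    suffix = [all_idx]
    for f in reversed(filters):
        suffix.append({i for i in suffix[-1] if f(rows[i])})
    suffix.reverse()
    # rows retained by all filters except filter i
    return [prefix[i] & suffix[i + 1] for i in range(z)]
```

Each row is tested against each filter at most twice, matching the $2Z$ filter applications counted above; each returned index set can then be binned into its histogram.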
Algorithm 2: minimizing memory usage
A divide-and-conquer algorithm can solve the problem with $O(Z \lg Z)$ applications of filters and $O(K \lg Z)$ memory. (This corresponds to approximately $O(ZK \lg Z)$ time and $O(K \lg Z)$ memory.) The algorithm works like this:
If $Z=1$ or $Z=2$, the problem is trivial; this is the base case. Otherwise:
Compute the subset of rows that are retained if you apply filters $F_1,\dots,F_{Z/2}$. Then, recursively apply the algorithm to this reduced set of rows and to the filters $F_{Z/2+1},\dots,F_Z$.
Compute the subset of rows that are retained if you apply filters $F_{Z/2+1},\dots,F_Z$. Then, recursively apply the algorithm to this reduced set of rows and to the filters $F_1,\dots,F_{Z/2}$.
Notice that you can compute the subsets of rows in steps 2,3 with $O(Z)$ applications of filters. So, if we let $T(Z)$ denote the number of filter applications used by this algorithm when running on $Z$ filters, we have the recurrence relation
$$T(Z) = 2 T(Z/2) + O(Z),$$
which has the solution $T(Z) = O(Z \lg Z)$. The claimed running time follows; and it's not too hard to see that the amount of memory used is as claimed, too (as there are $O(\lg Z)$ levels to the recursion tree, and you store $O(K)$ records per level of the recursion tree at a time).
Can we do better?
A natural question is to ask whether it is possible to find an algorithm that is optimal in both time and memory usage: e.g., $O(Z)$ applications of filters and $O(ZK)$ memory. I don't know whether that is possible or not. | {
"domain": "cs.stackexchange",
"id": 17488,
"tags": "algorithms, optimization, filtering-problem"
} |
Sum of digits in a^b | Question: I'm trying to solve a problem which requires the sum of digits in \$a^b\$, where \$0 \le a \le 9\$ and \$0 \le b \le 4000\$.
I have seen various other posts similar to this, but I haven't found the best approach. I have created this:
#include<stdio.h>
int main() {
int digits[4000]={0} ;
long long int test,a,b,temp,newVar ;
scanf("%lld",&test) ;
newVar=1 ;
while(test--) {
scanf("%lld %lld",&a,&b) ;
if(b==0) {
printf("Case %lld: 0\n",newVar) ;
newVar++ ;
continue ;
}
b=b-1 ;
digits[0]=a ;
long long int len=1,carry=0,i ;
while(b--) {
carry=0 ;
for(i=0 ; i<len ; i++) {
temp=digits[i]*a + carry ;
digits[i]=temp%10;
carry=temp/10 ;
}
if(carry!=0) {
digits[i]=carry ;
len++ ;
}
}
long long int sum=0 ;
for(i=len-1 ; i>=0 ; i--)
sum+=digits[i] ;
printf("Case %lld: %llu\n",newVar,sum) ;
newVar++ ;
}
return 0 ;
}
I tested my code for large powers, and it gives the right results.
For example:
Input:
3
2 32
3 2
2 60
Output:
Case 1: 58
Case 2: 9
Case 3: 82
Is there any way to increase the speed (i.e. reduce the running time) of this code? I have a time limit of 1 sec and this code is showing Time Limit Exceeded.
This is not a contest that is going to affect my (or anyone else's) ratings. For me, it's a way to learn new things. So any help will be appreciated.
Answer: In the comment to Martin R's answer there is a link to the question on Codechef. In the description they say that there can be at most 36000 test cases. I don't think that is a coincidence, because there are also 9 possible values for a and 4000+1 values for b.
Read the "at most 36000" as "probably that many", and caching comes to mind.
My solution is built on the cleaned up code from Martin R's answer but builds a look-up table first.
On some random input with 36000 entries, my code takes 0.334s while Martin R's code takes 21s. (About half of that time is taken by input/output from the command line.) The best solution on Codechef needs 0.03s but all other solutions are in that 0.3s range.
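As a quick sanity check for a cached table (and for the sample cases in the question), the expected digit sums can be computed directly with Python's big integers; this is just a reference oracle, not part of the submission:

```python
def digit_sum_pow(a, b):
    """Sum of the decimal digits of a**b, using exact big-integer arithmetic."""
    return sum(int(d) for d in str(a ** b))

# The sample cases from the question:
# digit_sum_pow(2, 32) -> 58, digit_sum_pow(3, 2) -> 9, digit_sum_pow(2, 60) -> 82
```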
Code
#include <stdio.h>
const int A = 9; // 1 <= a <= 9
const int B = 4000+1; // 0 <= b <= 4000
const int BASE = 100000000; // 10^8
/** fill a look up table with sums of digits for a^b, result is lut[a-1][b].
*/
void fillLUT(int lut[A][B]) {
for (int a = 1; a <= A; a++) {
int digits[4000] = { 1 }; // TODO way larger than necessary
int len = 1;
for (int b = 0; b < B; b++) {
// Write previous digits-sum into lut
int sum = 0;
for (int i = 0; i < len; i++) {
int d = digits[i];
while (d > 0) {
sum += d % 10;
d /= 10;
}
}
lut[a-1][b] = sum;
// Calculate next sum
int carry = 0;
for(int i = 0; i < len; ++i) {
int temp = digits[i] * a + carry;
digits[i] = temp % BASE;
carry = temp / BASE;
}
if (carry != 0) {
digits[len++] = carry;
}
}
}
}
int main() {
// Build Lookup-table.
int lut[A][B];
fillLUT(lut);
// Read input
int test, a, b;
scanf("%d", &test);
for (int i = 0; i < test; i++) {
scanf("%d %d", &a, &b);
if (a < 1 || a > A) {
printf("Invalid input a:%d (b: %d)\n", a, b);
return 1;
}
if (b < 0 || b >= B) {
printf("Invalid input b:%d (a: %d)\n", b, a);
return 2;
}
//int sum = powerDigitSum(a, b);
int sum = lut[a-1][b];
printf("Case %d: %d\n", i, sum) ;
}
return 0;
} | {
"domain": "codereview.stackexchange",
"id": 10678,
"tags": "c++, algorithm, mathematics, integer, time-limit-exceeded"
} |
Potential energy and frame of reference | Question: I was studying potential energy, and I suddenly wondered: is there any relation between potential energy and the frame of reference?
For example, we say that for an object raised to a height $h$, the potential energy is $mgh$. But this is so when we are talking about its distance from Earth. If a person is holding a suitcase, then, for that person, shouldn't the P.E. of the suitcase be zero?
Thanks in advance.
Answer: The potential energy has a gauge freedom, that is we can define the zero to be anywhere we want without affecting the physics. A side effect of this is that we cannot experimentally measure potential energy, we can only measure differences in potential energy.
So when you say the potential energy of an object raised to a height $h$ is $mgh$, what this really means is that raising an object by a distance $h$ changes the potential energy by $mgh$ i.e. the difference in the potential energy before and after raising was $mgh$.
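A tiny numerical illustration of this (the values are arbitrary): whatever constant offset you pick for the zero of potential energy, the difference between two heights always comes out to $mgh$.

```python
def potential(m, g, height, offset=0.0):
    """Gravitational P.E. near the surface, with an arbitrary zero (gauge offset)."""
    return m * g * height + offset

m, g, h = 2.0, 9.81, 3.0
for offset in (0.0, -50.0, 123.4):
    delta = potential(m, g, h, offset) - potential(m, g, 0.0, offset)
    assert abs(delta - m * g * h) < 1e-9  # same difference in every gauge
```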
The person holding the suitcase can define its potential energy to be zero, but this is just a choice of gauge. Regardless of how the person holding the suitcase defines the potential energy it still changes by $+mgh$ when it is raised by a distance $h$ and $-mgh$ if it is lowered by a distance $h$. | {
"domain": "physics.stackexchange",
"id": 33633,
"tags": "reference-frames, potential-energy, conventions"
} |
How are Mars years counted? | Question: In this diagram about the methane concentration in the Martian atmosphere, there are data points labeled "Mars year 32", "Mars year 33", and "Mars year 34". How are Mars years counted? Is there a "Mars year 0"? What is the event that defines the start of the time scale?
Answer: Given that the methane variations are seasonal, the diagram is labeled in Martian years, the amount of time it takes Mars to travel around the Sun once (about 1.88 Earth years). This makes it much easier to see the cycles than using Earth years would. In particular, the x-axis has a length of one Martian year; the data points show seasonal methane measurements for that year. So, for instance, the red data point was taken early in the spring of Mars year 34, the first blue point was taken at the start of summer of Mars year 32 (earlier!), and so on and so forth.
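Since the numbering convention (explained below) starts Mars Year 1 at the northern spring equinox of April 11, 1955, a rough conversion from Earth dates takes only a few lines. This sketch uses the mean Martian year of about 686.98 Earth days and ignores the exact equinox timing, so dates very close to a year boundary may be off by one:

```python
from datetime import date

MARS_YEAR_DAYS = 686.98        # mean Martian year in Earth days (approximate)
EPOCH = date(1955, 4, 11)      # conventional start of Mars Year 1

def mars_year(d):
    """Approximate Mars year number containing the Earth date d."""
    return 1 + int((d - EPOCH).days // MARS_YEAR_DAYS)

# e.g. mars_year(date(2018, 6, 1)) -> 34
```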
The specific year numbering is an arbitrary convention begun on April 11, 1955 (the northern spring equinox). The solar longitude system is applied to Mars; it's a geometric way of describing where in its orbit a planet is. The solar longitude is denoted $L_s$; on April 11, 1955, $L_s=0^\circ$; it increases to $L_s=360^\circ$, at which point a new Martian year begins. | {
"domain": "astronomy.stackexchange",
"id": 2964,
"tags": "mars, time"
} |
Buildup of white powder where metal has been touching a salt stone | Question: The other day, I opened a box from my cellar that I use to store random junk and found this:
The pink stone is a salt stone from a pet store. I assume it's NaCl with impurities. The ring is some metal jewellery I found on the ground. Maybe copper or some alloy? Some white build-up has appeared where the metal thing is touching the salt stone.
The metal thing seems to be fused to the stone (I didn't pull too hard), but some of the build-up fell into the container as white powder:
Silly me touched the powder to taste it. It tastes like nothing, not salty at all. But it seems the powder did muck around with a brass (?) coin sitting below it. Especially the bottom, which was sitting against the wooden container and not touching the salt stone above it, was hit hard:
There are more of those coins in this container and they're still as shiny as when I put them into the box.
Here are two additional photos of the thing:
The box has been sitting in my cellar for about 1.5 years. It shouldn't be above 60% rel. humidity. There's some non-stainless steel stuff and paper in the same box that don't look like they were able to pick up a lot of humidity.
So, can anyone explain what's going on here? What is the white powder composed of?
Answer: This link https://www.corrosionpedia.com/does-zinc-rust/7/7030 describes, among other air-stability properties of zinc, the conditions under which powdery zinc hydroxide forms. The ring may be a copper-zinc alloy. The conditions in your box match the ones described in the link above: salinity and the lack of free-flowing air.
I am not sure about the black coating on your coin, but I suspect it contains copper.
If the ring seems to be fused to salt, it definitely is because of changing air humidity over time. | {
"domain": "chemistry.stackexchange",
"id": 18050,
"tags": "metal, salt"
} |
ModuleNotFoundError: No module named 'qiskit.circuit.library' | Question: I am importing from qiskit.circuit.library import MCMTVChain in the Python IDLE editor, but it is showing the error ModuleNotFoundError: No module named 'qiskit.circuit.library', although it works fine on Google Colab. Also, why does Qiskit not work properly in Python IDLE? It shows lots of other errors, not just this one.
Answer: You most probably haven't installed the qiskit module. Qiskit can be installed via:
pip install qiskit
Try the following commands in command prompt, one of them should work (if your PATH variables are appropriately set):
py -m pip install qiskit
Or
python -m pip install qiskit
Or if you have more than one python versions, you can try:
py -'version number like 3.7' -m pip install qiskit
without the quotes (for example, py -3.7 -m pip install qiskit).
This should install the qiskit module and then you can import it. | {
"domain": "quantumcomputing.stackexchange",
"id": 1528,
"tags": "qiskit, programming, circuit-construction, ibm-q-experience"
} |
Pressure Difference affecting reversibility of a process | Question: It is said that for a finite pressure difference between the system and the surroundings, the process is irreversible. From the diagram, can you please tell me how the process is irreversible?
Does this look reversible because I have also considered the weight of the piston? Because if I don't consider the weight, then the system will change state to attain mechanical equilibrium, and then, to get the system back to the same position, work will have to be done on the system... so by definition it won't be a reversible process.
Answer: A thermodynamic process is called reversible if an infinitesimal change of the external condition reverses the process.
Consider a gas enclosed by a freely moving piston in a cylinder. Let us say it is in mechanical equilibrium with the atmosphere, that is, the pressures on the piston match. If you increase the external pressure infinitesimally, the piston goes down until the system reaches equilibrium. Then decrease the external pressure infinitesimally and the piston moves upwards. The process is reversible.
Now consider that the gas and the atmosphere are not in mechanical equilibrium; let us say the atmospheric pressure is greater by a finite amount. The piston will always go down, no matter whether you increase or decrease the external pressure by an infinitesimal amount. So this process is, by definition, irreversible.
"domain": "physics.stackexchange",
"id": 32347,
"tags": "thermodynamics, pressure, equilibrium, reversibility"
} |
Restriction Mapping - Homework question | Question: I am having trouble solving this exercise.
Exercise
A circular plasmid of 10,000 base pairs (bp) is digested with two restriction enzymes, A and B, producing 3000 bp and 2000 bp bands when visualised on an agarose gel. When digested with one enzyme at a time, only one band is visible, at 5000 bp. If the first site for enzyme A (A1) is present at the 100th base, what is the order in which the remaining sites (A2, B1 and B2) are present?
Why I struggle with this exercise
How can a 10,000 bp plasmid produce only 3000 bp and 2000 bp bands on an agarose gel? I mean, where are the remaining 5000 bp? I don't know if my concept is right.
Linearised bands on an agarose gel may be doublets. That could clear up the doubts in this problem, but I still don't know how to do a reasonable restriction mapping and find the sites of the different enzymes.
Answer: This is not as hard as it first seems. Let's have a look at the single-enzyme digests first: the digest with enzyme A alone, and likewise with enzyme B alone, only leads to products whose cut sites are 5 kB (5000 bp) apart. Since they are of the same size, both equally sized restriction fragments appear as one band. So each enzyme cuts the plasmid exactly in half.
The double digest is a bit more tricky, but not much. You get two products from it, of 2 kB and 3 kB. This means that enzyme B cuts between the restriction sites for enzyme A, resulting in these two fragments. Have a look at the sketch I made below (only a quick one and not to scale):
A1 and A2 are the cutting sites for enzyme A, B1 and B2 the sites for enzyme B. The numbers on the outside are the positions on the DNA sequence. You can see that a single digest leads to two fragments of 5 kB (it doesn't matter which enzyme) and that both A sites, and both B sites, are located 5 kB from each other.
A1 is at position 100, A2 has to be at position 5100, B1 is located 2kB behind A1 (and therefore at position 2100), and B2 2kB behind position A2 (and therefore at position 7100). To get the same restriction pattern it is also possible that B1 and B2 are located 3kB behind their respective enzyme A position (so at 3100 and 8100) and you still get the same pattern of 2kB and 3kB on the gel. | {
"domain": "biology.stackexchange",
"id": 3123,
"tags": "genetics, molecular-biology, homework"
} |
Rotational motion - torque | Question: I recently read a book of introduction of the rotational motion, but I do not quite understand the way author analysed the problem.
Situation: A light rod rests on a pivot passing through its centre of mass A. Force $F_1$ acts vertically down on the rod to the right of the pivot point at a distance $x_1$ from the pivot. Force $F_2$ acts vertically downwards on the rod at a distance $x_2$ to the left of the pivot. What is the condition, in terms of the $F$'s and $x$'s, that the rod does not rotate about its pivot point?
The author analyses this problem by introducing a fictitious force $f$ at either end of the rod, acting along the length of the rod, saying that whatever the magnitude of $f$, the force that results from adding together the $f$'s and $F$'s must pass through a point $P$ vertically above the pivot point $O$, or otherwise the rod will rotate. He further mentions that the lines of action of the combination of $f$ and $F_1$ and of the combination of $f$ and $F_2$ meet at a height $y$ above $O$. There are a force vector diagram and a position vector diagram in the attached picture.
At this point, I do not understand:
1) Why the fictitious force $f$ acting along the length of rod will cause rotation?
2) Why the lines of action of combination of $f$ and $F_1$ and of the combination of $f$ and $F_2$ meet at a height $y$ above $O$. What does it mean and why?
3) Why there is resultant force between $f$ and $F_2$ in the diagram and so for $f$ and $F_1$?
I am sorry to ask these basic concepts. Thanks a lot.
Answer: Here the author considers two extra, oppositely directed forces along the rod for the sake of calculation. These two forces, being oppositely directed, contribute no net force and so do not cause any rotation of the rod. Now the combination of $f$ and $F_2$ gives the resultant ${F_2}^{'}$, which makes an angle $\theta_1$ with the rod. Similarly, the forces $f$ and $F_1$ give the resultant ${F_1}^{'}$. We also know that a force acting through the point of rotation does not cause any torque, i.e. there will be no rotation.
Now if the rod does not rotate under the two forces, then their resultant ($F$) must pass through $O$, and for this ${F_2}^{'}$ and ${F_1}^{'}$ should intersect at a point on the perpendicular line $OP$. | {
"domain": "physics.stackexchange",
"id": 24751,
"tags": "torque"
} |
When I am on a moving bus and the bus stops abruptly how to cancel the extra motion due to Newtonian Law without holding steel bars in the bus? | Question: Today I was on a City bus.
And Newton's first Law hit me hard.
I was standing. And the bus stopped suddenly. I fell down. Because of Newton's Law of Inertia, which states:
An object at rest stays at rest and an object in motion stays in motion with the same speed and in the same direction unless acted upon by an unbalanced force.
For example, when the bus started to move my body had the tendency to keep moving, so when the bus stopped I swayed forward and fell down.
I want to know: without holding the steel bars in the bus, is it possible to cancel out this extra force so that ${F}_{\text{net}} = 0$? I know you can use friction. But the bus is moving, and the bus is moving you. You have no way to walk inside the bus with industrial boots that have the ultimate friction on the bus floor, because you don't know when and where the bus will stop.
Answer: The industrial boots would not help.
In fact, it was the friction force that caused your fall in the first place. If it was not for the friction, you would just slide forward.
If the friction, even from regular shoes, was acting on your center of mass, it would be sufficient for stopping you from falling or much sliding, but, since the friction acts on your feet, it creates a torque relative to your COM and that causes rotation and fall.
Of course, you can reduce the probability of falling by spreading your feet along the direction of the movement. This will help in several ways.
First, it will lower your COM and thus decrease the magnitude of the torque.
Second, the leg pointing forward will transmit some of the friction force toward your COM, also reducing the net torque.
Third, for your body to go down, it would, first, have to go up, which, would, at a minimum reduce the speed of the fall, or, very likely (along with other factors), prevent the fall altogether. | {
"domain": "physics.stackexchange",
"id": 51133,
"tags": "newtonian-mechanics, everyday-life"
} |
How to verify gauge invariance of an amplitude | Question: I have calculated a tree level amplitude for Compton scattering (${e\left(p\right)+\gamma\left(k\right)\to e\left(p\prime\right)+\gamma\left(k\prime\right)}$):
$${
i\mathcal{M}=M_{\mu\nu}\epsilon^{*\mu}\left(k\prime\right)\epsilon^{\nu}\left(k\right)\textrm{.}
}$$
How should I go about trying to verify that it is gauge invariant?
Answer: Don't forget that the polarization tensors depend on the gauge choice via reference vectors (call them $q$, $q'$). Now you have to check what happens when you change the reference vectors from $q, q'$ to some new vectors $r, r'$. The change of the vectors will lead the new polarization vectors to acquire a term proportional to their momentum $p$.
$$\epsilon(p,r)^\mu \sim \epsilon(p,q)^\mu+p^\mu$$
The contraction of the last term with $M_{\mu\nu}$ vanishes, i.e. you have shown gauge invariance.
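To make that last step explicit (this is the standard QED Ward identity, stated here as a reminder rather than taken from the calculation above): gauge invariance is equivalent to the amplitude vanishing when either polarization vector is replaced by its photon momentum,

$$
k^{\nu} M_{\mu\nu} = 0, \qquad k'^{\mu} M_{\mu\nu} = 0,
$$

so the momentum-proportional shift of $\epsilon$ drops out of $i\mathcal{M}$.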
Do you also have to show that $M_{\mu\nu}$ contracted into one of its momenta vanishes? | {
"domain": "physics.stackexchange",
"id": 9327,
"tags": "quantum-electrodynamics, feynman-diagrams, gauge-invariance"
} |
Mysterious factor of 30 in Coulomb cross section for electron–electron collisions | Question: I’m a bit stuck trying to understand where a numerical constant in an old paper comes from. The number in question is in eq. (A1) in Shull & van Steenberg (1985) (which gives the cross section for Coulomb collisions between electrons),
$$
\sigma_{ee} = (7.82 \times 10^{-11}) (0.05/f) (\ln \Lambda) E^{-2} \;\mathrm{cm}^2
\tag{A1}
$$
specifically $7.82 \times 10^{-11}$. There are two papers cited in the same sentence (Habing & Goldsmith 1971, Shull 1979), and both give the equation
$$
\sigma_{ee} = 40\pi e^4 (0.05/f) (\ln \Lambda) E^{-2} \;\mathrm{cm}^2
$$
(both in turn citing Spitzer & Scott 1969, where they seem to have derived it from eq. (3)), which is also used in more recent papers (e.g. Furlanetto & Stoever 2010, Evoli et al. 2012).
This would seem to suggest that
$$
7.82 \times 10^{-11} \stackrel{?}{=} 40\pi e^4
.
$$
An immediate issue is the units. For (A1), given the context it seems that $E$ should be given in $\mathrm{eV}$ (which is missing in the equation). Apart from that, it looks like the equations are in the cgs unit system, although I think the “$\mathrm{cm}^2$” at the end of the equation with “$40\pi e^4$” is actually wrong, since $e^4/E^2$ in cgs should already have units of $\mathrm{cm}^2$. It’s a mess!
Anyway, given that, I’ve found that
$$
40\pi e^4
= 2.61 \times 10^{-12} \,\mathrm{eV}^2 \mathrm{cm}^2
= \left(7.82 \times 10^{-11} \,\mathrm{eV}^2\,\mathrm{cm}^2\right) / 30
$$
So I can almost get it consistent, except for an extra factor of 30! Where could this be coming from? Maybe there’s a mistake somewhere in the mess with the units?
Answer: I have found another source giving an actual number for the constant factor: The DarkHistory code calculates the prefactor $4\pi e^4$ in these units (citing one of my sources above, Furlanetto & Stoever 2010; note the “missing” factor of 10, since this is for a slightly different quantity). Plugging in the numbers from the code, I get
$$
4\pi e^4 = 2.61 \times 10^{-13} \,\mathrm{eV}^2 \mathrm{cm}^2
$$
or in other words, exact agreement with my own calculation.
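The unit conversion itself can be checked with a few lines (my own sanity check, not from any of the papers; the constants are standard CODATA values):

```python
import math

e_statC    = 4.80320425e-10   # elementary charge in statcoulombs (cgs)
erg_per_eV = 1.602176634e-12  # 1 eV expressed in erg

# In cgs, e^2 carries units of erg*cm, so dividing by (erg per eV) gives eV*cm
# (this is the familiar e^2 ~ 1.44 eV*nm); then e^4/E^2 with E in eV is in cm^2.
e2_eV_cm  = e_statC**2 / erg_per_eV
prefactor = 40 * math.pi * e2_eV_cm**2   # 40*pi*e^4 in eV^2 cm^2

print(f"{prefactor:.3e}")             # ≈ 2.606e-12
print(f"{7.82e-11 / prefactor:.1f}")  # ≈ 30.0
```

which reproduces both the $2.61 \times 10^{-12}$ value and the mysterious factor of 30.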
In light of this, and given that there are no other sources giving the same number as Shull & van Steenberg (1985), as well as the lack of anything that could explain an extra factor of 30 within the paper itself, I can only conclude that this is an error in Shull & van Steenberg (1985). | {
"domain": "astronomy.stackexchange",
"id": 7298,
"tags": "astrophysics"
} |
Can PRNGs be used to magically compress stuff? | Question: This idea occurred to me as a kid learning to program and
on first encountering PRNG's. I still don't know how realistic
it is, but now there's stack exchange.
Here's a 14 year-old's scheme for an amazing compression algorithm:
Take a PRNG and seed it with seed s to get a long sequence
of pseudo-random bytes. To transmit that sequence to another party,
you need only communicate a description of the PRNG, the appropriate seed
and the length of the message. For a long enough sequence, that
description would be much shorter than the sequence itself.
Now suppose I could invert the process. Given enough time and
computational resources, I could do a brute-force search and find
a seed (and PRNG, or in other words: a program) that produces my
desired sequence (Let's say an amusing photo of cats being mischievous).
PRNGs repeat after a large enough number of bits have been generated,
but compared to "typical" cycles my message is quite short so this
doesn't seem like much of a problem.
Voila, an effective (if Rube-Goldbergian) way to compress data.
So, assuming:
The sequence I wish to compress is finite and known in advance.
I'm not short on cash or time (Just as long as a finite amount
of both is required)
I'd like to know:
Is there a fundamental flaw in the reasoning behind the scheme?
What's the standard way to analyse these sorts of thought experiments?
Summary
It's often the case that good answers make clear not only the answer,
but what it is that I was really asking. Thanks for everyone's patience
and detailed answers.
Here's my nth attempt at a summary of the answers:
The PRNG/seed angle doesn't contribute anything, it's no more
than a program that produces the desired sequence as output.
The pigeonhole principle: There are many more messages of
length > k than there are (message generating) programs of
length <= k. So some sequences simply cannot be the output of a
program shorter than the message.
It's worth mentioning that the interpreter of the program
(message) is necessarily fixed in advance. And it's design
determines the (small) subset of messages which can be generated
when a message of length k is received.
At this point the original PRNG idea is already dead, but there's
at least one last question to settle:
Q: Could I get lucky and find that my long (but finite) message just
happens to be the output of a program of length < k bits?
Strictly speaking, it's not a matter of chance since the
meaning of every possible message (program) must be known
in advance. Either it is the meaning of some message
of < k bits or it isn't.
If I choose a message of >= k bits at random (why would I?),
I would in any case have a vanishing probability of being able to send it
using less than k bits, and an almost certainty of not being able
to send it at all using less than k bits.
OTOH, if I choose a specific message of >= k bits from those which
are the output of a program of less than k bits (assuming there is
such a message), then in effect I'm taking advantage of bits already
transmitted to the receiver (the design of the interpreter), which
counts as part of the message transferred.
Finally:
Q: What's all this entropy/kolmogorov complexity business?
Ultimately, both tell us the same thing as the (simpler) pigeonhole
principle tells us about how much we can compress: perhaps
not at all, perhaps some, but certainly not as much as we fancy
(unless we cheat).
Answer: You've got a brilliant new compression scheme, eh? Alrighty, then...
♫ Let's all play, the entropy game ♫
Just to be simple, I will assume you want to compress messages of exactly $n$ bits, for some fixed $n$. However, you want to be able to use it for longer messages, so you need some way of differentiating your first message from the second (it cannot be ambiguous what you have compressed).
So, your scheme is to determine some family of PRNG/seeds such that if you want to compress, say, $01000111001$, then you just write some number $k$, which identifies some precomputed (and shared) seed/PRNG combo that generates those bits after $n$ queries. Alright. How many different bit-strings of length $n$ are there? $2^n$ (you make $n$ choices between two items: $0$ and $1$). That means you will have to compute $2^n$ of these combos. No problem. However, you need to write out $k$ in binary for me to read it. How big can $k$ get? Well, it can be as big as $2^n$. How many bits do I need to write out $2^n$? $\log{2^n} = n$.
Oops! Your compression scheme needs messages as long as what you're compressing!
"Haha!", you say, "but that's in the worst case! One of my messages will be mapped to $0$, which needs only $1$ bit to represent! Victory!"
Yes, but your messages have to be unambiguous! How can I tell apart $1$ followed by $0$ from $10$? Since some of your keys are length $n$, all of them must be, or else I can't tell where you've started and stopped.
"Haha!", you say, "but I can just put the length of the string in binary first! That only needs to count to $n$, which can be represented by $\log{n}$ bits! So my $0$ now comes prefixed with only $\log{n}$ bits, I still win!"
Yes, but now those really big numbers are prefixed with $\log{n}$ bits. Your compression scheme has made some of your messages even longer! And half of all of your numbers start with $1$, so half of your messages are that much longer!
You then proceed to throw out more ideas like a terminating character, gzipping the number, and compressing the length itself, but all of those run into cases where the resultant message is just longer. In fact, for every bit you save on some message, another message will get longer in response. In general, you're just going to be shifting around the "cost" of your messages. Making some shorter will just make others longer. You really can't fit $2^n$ different messages in less space than writing out $2^n$ binary strings of length $n$.
"Haha!", you say, "but I can choose some messages as 'stupid' and make them illegal! Then I don't need to count all the way to $2^n$, because I don't support that many messages!"
You're right, but you haven't really won. You've just shrunk the set of messages you support. If you only supported $a=0000000011010$ and $b=111111110101000$ as the messages you send, then you can definitely just have the code $a\rightarrow 0$, $b\rightarrow 1$, which matches exactly what I've said. Here, $n=1$. The actual length of the messages isn't important, it's how many there are.
"Haha!", you say, "but I can simply determine that those stupid messages are rare! I'll make the rare ones big, and the common ones small! Then I win on average!"
Yep! Congratulations, you've just discovered entropy! If you have $n$ messages, where the $i$th message has probability $p_i$ of being sent, then you can get your expected message length down to the entropy $H = \sum_{i=1}^np_i\log(1/p_i)$ of this set of messages. That's a kind of weird expression, but all you really need to know is that it's biggest when all messages are equally likely, and smaller when some are more common than others. In the extreme, if you know basically every message is going to be $a=000111010101$, then you can use this super efficient code: $a\rightarrow0$, $x\rightarrow1x$ otherwise. Then your expected message length is basically $1$, which is awesome, and that's going to be really close to the entropy $H$. However, $H$ is a lower bound, and you really can't beat it, no matter how hard you try.
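The entropy bound is easy to check numerically (a toy sketch of my own; the distribution and the prefix code are made-up examples, not from this answer):

```python
import math

def entropy(probs):
    """Shannon entropy H = sum_i p_i * log2(1/p_i), in bits."""
    return sum(p * math.log2(1 / p) for p in probs if p > 0)

# Four messages, one much more common than the rest
probs = [0.7, 0.15, 0.1, 0.05]
H = entropy(probs)

# A matching prefix-free code: 0, 10, 110, 111 (short codes for common messages)
lengths = [1, 2, 3, 3]
expected_len = sum(p * l for p, l in zip(probs, lengths))

# Expected length beats the naive log2(4) = 2 bits/message, but cannot beat H
print(f"H = {H:.3f} bits, expected code length = {expected_len:.3f} bits")
```

Here the skewed code averages about 1.45 bits per message against an entropy of about 1.32 bits: better than 2 bits, but still above the bound, exactly as the answer predicts.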
Anything that claims to beat entropy is probably not giving enough information to unambiguously retrieve the compressed message, or is just wrong. Entropy is such a powerful concept that we can lower-bound (and sometimes even upper-bound) the running time of some algorithms with it, because if they run really fast (or really slow), then they must be doing something that violates entropy. | {
"domain": "cs.stackexchange",
"id": 3141,
"tags": "information-theory, randomness, data-compression, pseudo-random-generators, entropy"
} |
Does a photon need to have EXACTLY the right energy to be absorbed by a gas molecule? | Question: From an answer to this question, https://physics.stackexchange.com/questions/281660/how-does-an-electron-absorb-or-emit-light,
Absorption of a photon will occur only when the quantum energy of the photon precisely matches the energy gap between the initial and
final states of the system. (the atom or a molecule as a whole)
i.e., by the absorption of a photon, the system could access to some
higher permissible quantum mechanical energy state. If there is no
pair of energy states such that the photon energy can elevate the
system from the lower to the upper energy state, then the matter will
be transparent to that radiation.
Given that the photon's energy is proportional to the photon's electromagnetic frequency, and that the frequency is subject to tiny Doppler shifts due to differences in velocity between the emitting and absorbing molecules, how can a photon have precisely the same energy as the exact quantum state change for a gas molecule? Does the photon energy only have to be very close to the quantum state change for a molecule to absorb it? If so, what happens to the extra(or lesser) energy absorbed by the molecule? Does this margin contribute to the width of observed absorption spectra in radio astronomy?
Answer: The Physics SE answer (or the part quoted) was incorrect. The photon does not have to have "precisely" the right energy to cause a transition. The reality is that there is a non-zero probability of causing a transition at all photon energies, but the probability distribution is sharply peaked at the energy we calculate to be the energy difference between the two energy eigenstates of the absorbing atom/molecule.
There are a number of effects that cause this broadening of the frequency response of a bunch of atoms/molecules to photon energy.
The transitions have a "natural width" because the transitions take a finite amount of time. This is encapsulated in a form of the famous uncertainty principle, which means that when viewed over a short time period, there is a fuzziness to energy levels in the atom/molecule. A classical analogy is to try and define the frequency of a damped harmonic oscillator. The larger the damping, the quicker the oscillation dies away and the broader is the continuous frequency spectrum.
Related to this is collisional broadening. Transitions can be interrupted/truncated by collisions, again resulting in a broadened frequency response.
A photon of fixed frequency (note that having a group of photons at fixed frequency is equally impossible, for similar reasons at the emitting end) will encounter atoms/molecules travelling with different velocities and as a result their frequency responses will be Doppler shifted by the appropriate amount.
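As a rough illustration of the Doppler point, and of the question's aside about radio astronomy (my own example numbers, not from the answer): the thermal width of the 21 cm hydrogen line follows from $\Delta\nu/\nu_0 \approx v_\mathrm{th}/c$:

```python
import math

k_B = 1.380649e-23      # Boltzmann constant, J/K
m_H = 1.6735575e-27     # mass of a hydrogen atom, kg
c   = 2.99792458e8      # speed of light, m/s
nu0 = 1.4204057e9       # 21 cm line rest frequency, Hz

T = 100.0                            # an assumed cold-gas temperature, K
v_th = math.sqrt(2 * k_B * T / m_H)  # most probable thermal speed
delta_nu = nu0 * v_th / c            # 1/e half-width of the Doppler profile

print(f"v_th ≈ {v_th:.0f} m/s, Doppler width ≈ {delta_nu / 1e3:.1f} kHz")
```

A few kHz on a 1.4 GHz line is tiny fractionally but easily resolved, which is why observed line widths carry temperature and velocity information.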
The application of electromagnetic fields can also broaden and shift the frequency response. | {
"domain": "astronomy.stackexchange",
"id": 5364,
"tags": "light, radio-astronomy, photons, electromagnetic-spectrum"
} |
If we say the universe is expanding, shouldn't it be expanding relative to something? | Question: I don't understand, if everything in this world is relative to something else, then cannot we essentially say that nothing exists independently? We say that the universe is considered to be the ultimate 'background'. However, if we say the universe is expanding, shouldn't it be expanding relative to something?
Answer: The universe is expanding, in the sense that things in it are getting farther apart. It is not expanding into anything because it already is everything. There simply is nowhere else to expand into.
Let's knock it down one dimension. Your universe is the surface of a balloon. The balloon is slowly being inflated. Your universe is getting bigger but nothing else is getting smaller (remember, you are unable to leave, look from, or perceive anything that is not on the surface). The only thing you can measure is that points are further apart than they used to be.
Classical mechanics don't really work at the two extremes: the quantum level and the whole-universe level. If we ever fully understand the whole process I expect we will find that the complete equation applies across the board, but certain factors are negligible at human-perception levels. Motion is a good example here: we don't need relativity to calculate driving times, even though my watch does slow down when I drive to work. | {
"domain": "physics.stackexchange",
"id": 14418,
"tags": "cosmology, spacetime, universe, space-expansion, big-bang"
} |
heightmap for gazebo | Question:
I'm experimenting with using a heightmap in gazebo to produce some nice rolling terrain. So far I have found that it works wonderfully until I add a LIDAR sensor.
When I add such a sensor to the robot I find that the simulation runs smoothly until an object enters the robot's field of view. Adding any such object slows down the sim almost to a standstill.
I read the answer about improving performance in Gazebo and tried improving the speed by reducing the number of LIDAR rays to 10 and chopping back the physics rate to 100Hz. With these changes the simulation runs at about 1/5th normal speed.
Is there any way of making the heightmap geom more Gazebo-friendly? Failing that, is it possible to import mesh-based terrain? I have found at least one project that used both heightmaps and LIDAR under Gazebo.
Originally posted by JonW on ROS Answers with karma: 586 on 2011-05-25
Post score: 6
Original comments
Comment by JonW on 2011-05-28:
OK alternative idea - is there a good method for going from a heightmap image to a collada file? I have experimented with the blender tutorial here: http://en.wikibooks.org/wiki/Blender_3D:_Noob_to_Pro/Making_Landscapes_with_heightmaps but it is a very manual process.
Comment by JonW on 2011-05-26:
I have created a map in blender and exported it as a COLLADA file. I am not sure of the exact number of vertices, but the .xml mesh is about 2MB in size. This map does not exhibit the slowdown that I am seeing with an image based heightmap.
Answer:
I did a little research into ODE heightfields, and it turns out the collider used for heightfields in ODE is not very efficient:
http://groups.google.com/group/ode-users/browse_thread/thread/d883a0e15b1647d5
This has been one of the features in Gazebo which has received little use. So, the near-term solution is to use triangle meshes, like you have done. It may also be more efficient to break your large terrain into sections. The ODE collision engine uses bounding boxes as a first pass to determine if two objects should be checked for collision. So, rather than one large bounding box for the entire terrain mesh, you can have multiple smaller bounding boxes. This will ideally reduce the number of collision checks.
Originally posted by nkoenig with karma: 431 on 2011-06-10
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by JonW on 2011-06-10:
Thanks for following this up. Breaking up terrain into sections sounds like a good idea.
Comment by ChickenSoup on 2012-06-20:
Hey @JonW have you found a solution to your problem? I am experiencing the same thing; gazebo FPS goes down when my rover gets near a mountain.
Comment by JonW on 2012-06-20:
I am converting the heightmap into a collada mesh externally then importing that into gazebo. The release of gazebo in fuerte seems to be much happier dealing with big meshes - I just tried loading a collada heightmap with about 700k vertices (although I haven't collided anything with it yet)
Comment by ChickenSoup on 2012-06-20:
@JonW Thanks for your comment. Currently, I am using the gazebo of ros-electric. I will give a try installing ros-fuerte | {
"domain": "robotics.stackexchange",
"id": 5671,
"tags": "gazebo"
} |
Why does a red object appear dark in yellow light? | Question: "Red objects appear dark in yellow light," as my text says... ------------(1)
But what I think is this:
we see the colour of an object because it reflects that colour of light and absorbs all others, so when yellow light falls on an originally red object it would appear dark (to which I agree).
But then the next statement confuses me:
"the red colour is scattered less, but this is not the reason for phenomenon (1) to happen"
I wondered why that would be so, because if that light was scattered less it was mostly absorbed, and hence we were not able to see it; so shouldn't that be a correct explanation?
Answer: Have a look at Rayleigh scattering. An electromagnetic wave with a longer wavelength scatters less. Red has the longest wavelength in the visible light's spectrum and so it is scattered the least.
Now what your text says is that reflection has nothing to do with the fact that red light scatters the least and thus less scattering of red is not the reason why red objects appear dark in yellow light. Your understanding is right.
We see the color of an object because it reflects that colored light and absorbs all others so when yellow would fall on originally red object it would appear dark (to which I agree)
This is true and you are right, your text is also right. You are just confusing between reflection and scattering perhaps. Read up on scattering and it should be clear. Hope this helps. | {
"domain": "physics.stackexchange",
"id": 47508,
"tags": "optics, visible-light, geometric-optics"
} |
Efficient simulation of an NFA, while preserving the paths to the accept states | Question: The standard way of simulating an NFA on a computer (for implementing regex engines etc) is to construct a DFA that accepts the same language. Otherwise you get problems like exponential blowup.
However, for my purpose I need to know which paths the NFA went through for accepted words. This is obviously not trivial if I simply use the subset construction method. The NFA could also have $\epsilon$ transitions.
Of course, any such simulation could have a bad worst case, in which there is a humongous number of ways that the automaton could accept a given word. However, it'd be nice to have some sort of algorithm that runs in, say, $O(m+n)$ for a word of length $m$ that the NFA has $n$ ways of accepting.
Is there any efficient way to do this?
Answer: There is no need to construct the DFA. Instead, you can construct a dynamic programming table which answers the question "the NFA could be at state $s$ after reading the $k$th prefix of the word" (the parameters are both $s$ and $k$). In order to efficiently fill this table in the presence of $\epsilon$-transitions, you need to compute ahead of time the transitive closure of the $\epsilon$-transitions. As you fill the table, you can include data that will allow you to reconstruct at least some of the accepting paths (or all of them, if you insist, though it will be more tricky to handle the $\epsilon$-transitions). The running time, however, won't be linear but more like $O(q^2m)$, where $q$ is the number of states; perhaps you could optimize this further for sparse NFAs, but probably not to $O(m+n)$. | {
"domain": "cs.stackexchange",
"id": 12672,
"tags": "automata, finite-automata, simulation"
} |
Usage of KL divergence to improve BOW model | Question: For a university project, I chose to do sentiment analysis on a Google Play store reviews dataset. I obtained decent results classifying the data using the bag of words (BOW) model and an ADALINE classifier.
I would like to improve my model by incorporating bigrams relevant to the topic (Negative or Positive) in my features set. I found this paper which uses KL divergence to measure the relevance of unigrams/bigrams relative to a topic.
The only problem is that I am having trouble understanding what C refers to in the equation (2.2). Does it refer to the unique words associated with topic C, the set of documents on a topic, or the words in a document?
Answer: Since those are academic researchers, they framed the problem in the most general way possible. The $C$ term could be any random variable to be modeled. In this specific case, $C$ is the individual tokens (unigrams or bigrams).
I have found empirical improvement by including bigrams highly ranked as collocations, i.e. frequently occurring n-grams. By including common phrases, a model can better capture how language is used in that specific context. Finding collocations is relatively straightforward - rank the occurrence of all n-grams, then set a threshold to limit to only the most popular.
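That ranking step can be sketched in a few lines (a toy example of my own; the corpus and threshold are made up):

```python
from collections import Counter

def top_bigrams(docs, min_count=2):
    """Count all adjacent word pairs, then keep those above a frequency threshold."""
    counts = Counter()
    for doc in docs:
        tokens = doc.lower().split()
        counts.update(zip(tokens, tokens[1:]))
    return [(bg, c) for bg, c in counts.most_common() if c >= min_count]

docs = [
    "not worth the money",
    "great app but not worth the price",
    "worth the download",
]
print(top_bigrams(docs))  # [(('worth', 'the'), 3), (('not', 'worth'), 2)]
```

A bigram like "not worth" is exactly the kind of sentiment-bearing feature that a unigram BOW model splits apart.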
Those authors are looking for unique information which far more complex to model and often not necessary for model lift. | {
"domain": "datascience.stackexchange",
"id": 9813,
"tags": "classification, ngrams, bag-of-words"
} |
What is the name for polynomially solvable optimisation problems? | Question: An optimisation problem that allows one to solve an NP-complete decision problem through a polynomial reduction is called NP-hard. For these optimisation problems no polynomial algorithm is known.
Symmetrically, is there a standard name for all those optimisation problems for which a polynomial algorithm is known?
Answer: An optimization problem is an example of a function problem: i.e., one where the task is to map some input to some output. The class of function problems solvable in polynomial time is FP. See, for example, the Complexity Zoo.
(Note that there is a class OptP but that's not the polynomial-time optimization problems. Perhaps confusingly, it's the optimization analogue of NP: it's the class of functions that can be defined by taking the maximum of the outputs given by accepting paths of a nondeterministic polynomial-time Turing machine.)
"domain": "cs.stackexchange",
"id": 8529,
"tags": "terminology, optimization, polynomial-time"
} |
Why are Killing fields relevant in physics? | Question: I'm taking a course on General Relativity and the notes that I'm following define a Killing vector field $X$ as those verifying:
$$\mathcal{L}_Xg~=~ 0.$$
They seem to be very important in physics but I don't understand why yet because that definition is the only thing that I have so far. I'm not very familiar with the Lie derivative so I don't know how to interpret that equation so far.
What is the meaning of the Lie derivative of the metric being 0? Why are Killing fields relevant physically?
Answer: Killing fields are one of the most important concepts in general relativity both in its classical as well as quantum versions.
Classically, one thing we are always interested in is the world-line/trajectory of a free-falling observer in curved space-times. These world-lines are described as geodesics and satisfy the equation
$$
\frac{d^2 x^\mu}{d\tau^2} + \Gamma^\mu_{\rho\sigma} \frac{dx^\rho}{d\tau} \frac{ dx^\sigma }{ d\tau } = 0
$$
where $\tau$ is the affine parameter along the geodesic.
Now, if $\xi^\mu$ is a Killing vector field, then it is easy to show that
$$
Q_\xi = \xi_\mu \frac{dx^\mu}{d\tau}
$$
is a constant along the geodesic, i.e.
$$
\frac{d}{d\tau} Q_\xi = 0
$$
Thus, we find that every Killing vector field gives rise to a conserved quantity along the world-line of a freely falling observer. This fact can then be used to label geodesics. This is nothing but an instance of Noether's theorem at play in GR.
For instance, in stationary space-times, there exists a Killing vector that is globally time-like (this is the definition of stationary space-times), $k^\mu$. Then, we can define a conserved quantity
$$
E = - k_\mu \frac{dx^\mu}{d\tau}
$$
This is simply the generalization of the definition of energy! (Can you check what this reduces to for flat space-times?)
In quantum field theories, Killing vectors can be used to construct conserved currents (and therefore conclude the existence of symmetries and all the hoopla that accompanies them). For instance, any local quantum field theory has a stress-tensor operator $T_{\mu\nu}$ that is symmetric and conserved. Using Killing vector fields $\xi^\mu$ we can define conserved currents
$$
j^\nu = \xi_\mu T^{\mu\nu}
$$
This current then satisfies various Ward identities, etc.
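To make the defining equation concrete, here is a small sympy sketch (my own toy example, not part of the original answer) that checks $\mathcal{L}_X g = 0$ for the rotation generator on the flat Euclidean plane, using the coordinate formula $(\mathcal{L}_X g)_{\mu\nu} = X^a\partial_a g_{\mu\nu} + g_{a\nu}\partial_\mu X^a + g_{\mu a}\partial_\nu X^a$:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
coords = [x, y]
g = sp.eye(2)   # flat Euclidean metric in Cartesian coordinates
X = [-y, x]     # rotation generator, a known Killing field

def lie_derivative_metric(g, X, coords):
    """(L_X g)_{mu nu} = X^a d_a g_{mu nu} + g_{a nu} d_mu X^a + g_{mu a} d_nu X^a."""
    n = len(coords)
    L = sp.zeros(n, n)
    for mu in range(n):
        for nu in range(n):
            term = sum(X[a] * sp.diff(g[mu, nu], coords[a]) for a in range(n))
            term += sum(g[a, nu] * sp.diff(X[a], coords[mu]) for a in range(n))
            term += sum(g[mu, a] * sp.diff(X[a], coords[nu]) for a in range(n))
            L[mu, nu] = sp.simplify(term)
    return L

print(lie_derivative_metric(g, X, coords))  # zero matrix -> X is Killing
```

Swapping in a non-Killing field such as $X=(x,0)$ yields a nonzero result, as expected.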
Bottomline - symmetries form the basis of almost all the physics that is done today. Killing vector fields are simply manifestations of symmetries in the context of general relativity. | {
"domain": "physics.stackexchange",
"id": 27277,
"tags": "general-relativity, symmetry, differentiation, vector-fields"
} |
Student and Lecturer views | Question: How can I improve the following two views? Should the action listeners stay in the views?
StudentView.java
package com.studentenverwaltung.view;
import com.studentenverwaltung.controller.StudentController;
import com.studentenverwaltung.helpers.MyTableCellRenderer;
import com.studentenverwaltung.model.User;
import com.studentenverwaltung.persistence.PerformanceDAO;
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;
import java.util.Observable;
import java.util.Observer;
import javax.swing.*;
import javax.swing.table.DefaultTableModel;
public class StudentView implements IView {
private final static String WELCOME = "Herzlich Willkommen";
private final static String MEDIAN = "Notendurchschnitt";
private final static Object[] COLUMNS = {"Vorlesung", "Note"};
public JPanel contentPane;
private JLabel lblWelcome;
private JButton btnChangePassword;
private JButton btnLogout;
private JTextField txtId;
private JTextField txtPassword;
private JTextField txtDegreeProgram;
private JLabel lblMedian;
private JTable tblPerformance;
private StudentController studentController;
private User user;
public StudentView(StudentController studentController, User user) {
this.studentController = studentController;
this.user = user;
this.user.addObserver(new ModelObserver());
this.btnChangePassword.addActionListener(new ChangePasswordListener());
this.btnLogout.addActionListener(new LogoutListener());
}
@Override
public JPanel getContentPane() {
return this.contentPane;
}
private double calculateMedian(DefaultTableModel tableModel) {
int i = 0, rows = tableModel.getRowCount();
double total = 0;
while (i < rows) {
total += (Double) tableModel.getValueAt(i, 1);
i++;
}
return total / tableModel.getRowCount();
}
private void updateMedian(StudentView view, DefaultTableModel tableModel) {
view.lblMedian.setText(String.format("%s: %.2f", MEDIAN, calculateMedian(tableModel)));
}
private void updateTable(StudentView view, DefaultTableModel tableModel) {
view.tblPerformance.setDefaultRenderer(Object.class, new MyTableCellRenderer());
view.tblPerformance.setModel(tableModel);
}
private DefaultTableModel createDefaultTableModel(StudentView view) {
PerformanceDAO performanceDAO = new PerformanceDAO("Files/noten.csv");
Object[][] myData = performanceDAO.getPerformance(view.user.getId());
return new DefaultTableModel(myData, StudentView.COLUMNS);
}
private void updateLabelsAndTextFields(StudentView view) {
view.lblWelcome.setText(String.format("%s, %s", WELCOME, view.user.toString()));
view.txtId.setText(view.user.getId());
view.txtPassword.setText(view.user.getPassword());
view.txtDegreeProgram.setText(view.user.getDegreeProgram());
}
private class ChangePasswordListener implements ActionListener {
@Override
public void actionPerformed(ActionEvent e) {
StudentView view = StudentView.this;
view.studentController.changePassword();
}
}
private class LogoutListener implements ActionListener {
public static final String TITLE = "Beenden";
public static final String MESSAGE = "Sollen die Aenderungen gespeichert werden?";
@Override
public void actionPerformed(ActionEvent e) {
StudentView view = StudentView.this;
int result = JOptionPane.showConfirmDialog(view.contentPane, MESSAGE, TITLE,
JOptionPane.YES_NO_OPTION);
if (result == JOptionPane.YES_OPTION) {
// TODO: save data
}
view.studentController.logout();
}
}
private class ModelObserver implements Observer {
@Override
public void update(Observable o, Object arg) {
StudentView view = StudentView.this;
updateLabelsAndTextFields(view);
DefaultTableModel tableModel = createDefaultTableModel(view);
updateTable(view, tableModel);
updateMedian(view, tableModel);
}
}
}
LecturerView.java
package com.studentenverwaltung.view;
import com.studentenverwaltung.controller.LecturerController;
import com.studentenverwaltung.model.User;
import com.studentenverwaltung.persistence.PerformanceDAO;
import com.studentenverwaltung.persistence.UserDAO;
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;
import java.util.Collection;
import java.util.Observable;
import java.util.Observer;
import javax.swing.*;
public class LecturerView implements IView {
public static final String WELCOME = "Herzlich Willkommen";
public JPanel contentPane;
private JLabel lblWelcome;
private JButton btnChangePassword;
private JButton btnLogout;
private JTextField txtId;
private JTextField txtPassword;
private JTextField txtDegreeProgram;
private JComboBox<Object> boxCourses;
private JComboBox<Object> boxUsers;
private JTextField txtGrade;
private JButton btnSave;
private JButton btnCalculateMedian;
private JList lstBadStudents;
private LecturerController lecturerController;
private User user;
public LecturerView(LecturerController lecturerController, User user) {
this.lecturerController = lecturerController;
this.user = user;
this.user.addObserver(new ModelObserver());
this.btnChangePassword.addActionListener(new ChangePasswordListener());
this.btnLogout.addActionListener(new LogoutListener());
this.btnSave.addActionListener(new SaveGradeListener());
}
@Override
public JPanel getContentPane() {
return this.contentPane;
}
private class ChangePasswordListener implements ActionListener {
@Override
public void actionPerformed(ActionEvent e) {
LecturerView view = LecturerView.this;
view.lecturerController.changePassword();
}
}
private class SaveGradeListener implements ActionListener {
@Override
public void actionPerformed(ActionEvent e) {
// TODO: refactoring
String course = LecturerView.this.boxCourses.getModel().getSelectedItem().toString();
User student = (User) LecturerView.this.boxUsers.getSelectedItem();
String id = student.getId();
double grade = Double.parseDouble(LecturerView.this.txtGrade.getText());
PerformanceDAO performanceDAO = new PerformanceDAO("Files/noten.csv");
performanceDAO.createOrUpdatePerformance(id, course, grade);
}
}
private class LogoutListener implements ActionListener {
public static final String TITLE = "Beenden";
public static final String MESSAGE = "Sollen die Aenderungen gespeichert werden?";
@Override
public void actionPerformed(ActionEvent e) {
LecturerView view = LecturerView.this;
int result = JOptionPane.showConfirmDialog(view.contentPane, MESSAGE, TITLE,
JOptionPane.YES_NO_OPTION);
if (result == JOptionPane.YES_OPTION) {
// TODO: save data
}
view.lecturerController.logout();
}
}
private class ModelObserver implements Observer {
@Override
public void update(Observable o, Object arg) {
// TODO: refactoring
LecturerView view = LecturerView.this;
view.lblWelcome.setText(String.format("%s, %s", WELCOME, view.user.toString()));
view.txtId.setText(view.user.getId());
view.txtPassword.setText(view.user.getPassword());
view.txtDegreeProgram.setText(view.user.getDegreeProgram());
Object[]
courses =
view.user.getCourse() == null ? new Object[0] : view.user.getCourse().toArray();
DefaultComboBoxModel
courseModel =
new DefaultComboBoxModel<Object>(courses);
view.boxCourses.setModel(courseModel);
UserDAO userDAO = new UserDAO("Files/stud_info.csv");
Collection<User> studentCollection = userDAO.getUsersByRole("Student");
Object[] students = studentCollection == null ? new Object[0] : studentCollection.toArray();
DefaultComboBoxModel studentModel = new DefaultComboBoxModel<Object>(students);
view.boxUsers.setModel(studentModel);
}
}
}
Answer:
Things that are not view related should be moved elsewhere. A view should not know anything about DAOs etc. It should just call methods on the controller.
For example:
PerformanceDAO performanceDAO = new PerformanceDAO("Files/noten.csv");
performanceDAO.createOrUpdatePerformance(id, course, grade);
should just have been
LecturerView.this.lecturerController.createOrUpdatePerformance(id, course, grade);
You can refer to fields of the outer class from (non-static) inner class directly. The line above can be shortened to:
lecturerController.createOrUpdatePerformance(id, course, grade);
Since many of your ActionListeners should be one-liners, it is more practical to implement them as anonymous inner classes. You can remove the ChangePasswordListener class by changing the following line from:
this.btnChangePassword.addActionListener(new ChangePasswordListener());
to
this.btnChangePassword.addActionListener(new ActionListener() {
public void actionPerformed(ActionEvent e) {
lecturerController.changePassword();
}
});
Because you have a parameter named lecturerController, which refers to the same object as the field lecturerController but hides it, you would have to mark the parameter final. (Which is good practice anyway.)
You should not do anything more complicated than calling a method (or two) in an ActionListener. Look at your LogoutListener. Your specification (user story, whatever) for that case might have been something like:
When the Lecturer logs out, he is prompted to save changes before he is logged out.
You call the logout method on LecturerController, which is excellent. But why don't you call a method promptForSaveChanges for the first part of the spec? It only makes sense. You would not have to dig through the code if the spec changed to "prompt the user only if unsaved changes exist" or "prompt for saving changes when the user tries to close the main window".
You should not instantiate your DAOs and services etc. in listeners. They should be instantiated once in the Main class and passed to other components as constructor arguments. Just as you passed your controllers as constructor arguments to your views, your controllers should get their DAOs etc. in the same way. | {
"domain": "codereview.stackexchange",
"id": 3878,
"tags": "java, swing"
} |
How to calculate diffraction image for a given lattice? | Question: I have seen a lot of diffraction patterns such as this, taken from Wikipedia. I know how these images are measured, but I do not know how you can calculate (predict) a diffraction pattern for a specific lattice.
The structure factor for a monatomic system is given as
$$
S(\mathbf{q}) = \frac{1}{N}\sum\limits_{j=1}^{N}\sum\limits_{k=1}^{N}\mathrm{e}^{-i\mathbf{q}(\mathbf{r}_j - \mathbf{r}_k)},
$$
where $\mathbf{q}$ is the scattering vector and $\mathbf{r}_j$ the position of atom (or lattice point) $j$. The scattering vector $\mathbf{q}$ is given as $\mathbf{q} = \mathbf{k}_2 - \mathbf{k}_1$, where $\mathbf{k}_1$ is the incoming and $\mathbf{k}_2$ is the scattered beam. The amplitude $\lvert\mathbf{q}\rvert = \frac{4\pi}{\lambda}\sin\theta$ depends on the angle $\theta$ between the incoming and scattered beam.
For an isotropic system such as an amorphous solid, a polycrystal or in powder diffraction, one typically averages over all possible directions of $\mathbf{q}$. The so-calculated static structure factor is the Fourier transform of the radial distribution function. However, if I want to calculate the 2d diffraction pattern, I can't average over all possible directions of $\mathbf{q}$.
Which value of $\mathbf{q}$ should be used? Does it matter?
I read that $S(\mathbf{q})$ is the Fourier transform of the lattice (the reciprocal lattice). But the Fourier transform of a 3d lattice is three dimensional. How do I obtain the 2d diffraction pattern?
Related: This question on how to calculate the 1d diffraction pattern.
Answer: Let's first elaborate on your premises, which are not general enough in practice. A crystal is the repeat by translation of a so-called unit cell. In the most general case, this unit cell is a parallelepiped defined by three non-colinear vectors $\renewcommand{\vec}[1]{\mathbf{#1}}(\vec{a},\vec{b},\vec{c})$ and the translations to consider are $\vec{T}_{mnp}=m\vec{a}+n\vec{b}+p\vec{c}$ for integers $m,n,p$. We can then denote $F(\vec{q})$ the complex amplitude of the wave diffracted by one unit cell: if we consider only elastic scattering, this is the Fourier transform of the electron density inside the unit cell. Then the diffraction by the entire crystal, made of $M, N, P$ unit cells along $\vec{a}, \vec{b}, \vec{c}$ respectively reads
$$S(\vec{q})=\sum_{m=-M}^M\sum_{n=-N}^N\sum_{p=-P}^P F(\vec{q})\exp\left(i2\pi\vec{q}\cdot \vec{T}_{mnp}\right).$$
Then introducing
$$\begin{aligned}
h&=\vec{a}\cdot\vec{q}\\
k&=\vec{b}\cdot\vec{q}\\
l&=\vec{c}\cdot\vec{q}\\
\end{aligned}$$
we get
$$S(\vec{q}) = F(\vec{q})
\underbrace{\frac{\sin\pi h (2M+1)}{\sin\pi h}}_{D_M(h)}
\underbrace{\frac{\sin\pi k (2N+1)}{\sin\pi k}}_{D_N(k)}
\underbrace{\frac{\sin\pi l (2P+1)}{\sin\pi l}}_{D_P(l)}
$$
The main characteristic of the function $D_K(r)$ for a large value of $K$ is that it has very strong and sharp peaks for integer values of $r$. As a result, the diffracted wave exhibits sharp peaks for integers $h,k,l$. If we then introduce the basis $(\vec{a}^*, \vec{b}^*, \vec{c}^*)$ dual to $(\vec{a},\vec{b},\vec{c})$,
$$\vec{a}^* = \frac{\vec{b}\times\vec{c}}{V},$$
and circular permutations of $a$, $b$ and $c$, where $V=\det(\vec{a},\vec{b},\vec{c})$ is the volume of the unit cell, then
$$\vec{q}=h\vec{a}^*+k\vec{b}^*+l\vec{c}^*=\vec{q}_{hkl},$$
and therefore the sharp peaks lie on a lattice whose unit cell is defined by $(\vec{a}^*, \vec{b}^*, \vec{c}^*)$, the so-called reciprocal lattice.
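As a quick numerical illustration of these sharp peaks (a toy model of my own, not from the original answer): for a finite 2D lattice of identical point scatterers, $|S(\vec{q})|^2$ is just the squared magnitude of the discrete Fourier transform of the density, and the peaks land exactly on the reciprocal lattice:

```python
import numpy as np

# Finite 2D "crystal": delta functions on a square lattice (hypothetical toy model).
N = 64          # grid points per axis
spacing = 8     # lattice constant in grid units
density = np.zeros((N, N))
density[::spacing, ::spacing] = 1.0

# Diffracted intensity ~ |FT of density|^2; peaks sit on the reciprocal lattice.
intensity = np.abs(np.fft.fftshift(np.fft.fft2(density)))**2

# Sharp peaks appear every N/spacing = 8 pixels, i.e. on the reciprocal lattice.
peak = intensity.max()
print(peak, intensity[N // 2, N // 2])  # center (q=0) peak equals the max
```

Making the lattice larger (more unit cells) sharpens the peaks, exactly as the $D_M(h)D_N(k)D_P(l)$ factors predict.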
Now that the proper framework has been laid out, I can move on to answer your question. Consider a plane passing through the reciprocal lattice. Any $\vec{q}_{hkl}$ close enough to the plane will result in some diffracted intensity around the projection of $\vec{q}_{hkl}$ onto the plane. In a real experiment, the plane would be the surface of a CCD for example. That detector surface would be moved of course but the reciprocal lattice would be moved too, i.e. $(\vec{a}^*, \vec{b}^*, \vec{c}^*)$ would be moved, because the crystal would be rotated, and the incident X-ray beam would also see its direction changed. Thus the position of that plane I was discussing becomes a rather complex function of the relative position of the detector, the crystal and the source but we don't really need to go into that complexity unless you want to model an actual experimental setup. For a general simulation, it suffices to simply move that plane. The most "beautiful" diffraction patterns would of course be obtained by choosing a plane passing through a subset of $\vec{q}_{hkl}$'s. For example, the plane containing all $\vec{q}_{hk0}$ for any integer $h,k$. | {
"domain": "physics.stackexchange",
"id": 42554,
"tags": "condensed-matter, solid-state-physics, scattering, diffraction, x-rays"
} |
Applying same formatting to multiple borders in Excel VBA? | Question: Is there a better way to format the cells with borders than what you get when you record a macro? For example, I want to add borders to a cell range. The recorded code is:
Range("A1:C19").Select
Selection.Borders(xlDiagonalDown).LineStyle = xlNone
Selection.Borders(xlDiagonalUp).LineStyle = xlNone
With Selection.Borders(xlEdgeLeft)
.LineStyle = xlContinuous
.ColorIndex = 0
.TintAndShade = 0
.Weight = xlThin
End With
With Selection.Borders(xlEdgeTop)
.LineStyle = xlContinuous
.ColorIndex = 0
.TintAndShade = 0
.Weight = xlThin
End With
With Selection.Borders(xlEdgeBottom)
.LineStyle = xlContinuous
.ColorIndex = 0
.TintAndShade = 0
.Weight = xlThin
End With
With Selection.Borders(xlEdgeRight)
.LineStyle = xlContinuous
.ColorIndex = 0
.TintAndShade = 0
.Weight = xlThin
End With
With Selection.Borders(xlInsideVertical)
.LineStyle = xlContinuous
.ColorIndex = 0
.TintAndShade = 0
.Weight = xlThin
End With
With Selection.Borders(xlInsideHorizontal)
.LineStyle = xlContinuous
.ColorIndex = 0
.TintAndShade = 0
.Weight = xlThin
End With
Can this be shortened?
Answer: If the question is "how do I apply the same formatting to multiple borders?" then the following is one way:
Dim the_borders As Variant
the_borders = Array(xlEdgeLeft, xlEdgeTop, xlEdgeBottom, xlEdgeRight, xlInsideVertical, xlInsideHorizontal)
' Or whatever xlEdge* constants you want to list
Dim idx As Long
For idx = LBound(the_borders) To UBound(the_borders)
With Selection.Borders(the_borders(idx)) ' Format the current border in the list
.LineStyle = xlContinuous
.ColorIndex = 0
.TintAndShade = 0
.Weight = xlThin
End With
Next idx
The As Variant / Array combo gives you an array of the border IDs you specify, running from LBound(the_borders) to UBound(the_borders). Then, within the loop, borders(idx) is the XlBordersIndex you can pass to Selection.Borders. | {
"domain": "codereview.stackexchange",
"id": 24543,
"tags": "excel, vba, performance"
} |
Math symbol to represent an operator to convert from double-precision to 32-bit integer value? | Question: I'm looking for a good mathematical symbol to represent the conversion from a double-precision floating value to an unsigned 32-bit integer value. Does anyone have suggestions for a good Greek letter or a math symbol to express this?
The best candidate I can think of is to use a symbol for a floor function with a subscript like below:
Is there a more official way to represent this?
Answer: As far as I know, there is no common/standard symbology to represent finite types. Neither floating-point nor unsigned integer. And even less for conversions, taking into account that they are not always possible. You can represent the naturals as the set $\mathbb N$, but there is nothing for the floating-point numbers, a special subset of the rationals.
If you want to stress the data type conversion aspect, better use a typecast-like notation. If you just want to express the numerical value, $\text{floor}$ ($\lfloor x\rfloor$) is good enough. | {
"domain": "cs.stackexchange",
"id": 20784,
"tags": "integers"
} |
Generic callback object, but I need the type parameter inside methods | Question: Inside my android app, I currently have methods that look like the code below. Since they all need a callback object that basically does the same thing, I would like to try to eliminate the duplicated code seen below.
The object that gets posted via the mBus.post(response); statement needs to be the specific type.
@Subscribe
public void onLeave(LeaveRequest event) {
mRailsApi.leave(event.getEmail(), event.getPassword(),
new Callback<LeaveResponse>() {
@Override
public void failure(RetrofitError arg0) {
LeaveResponse response = new LeaveResponse();
response.setSuccessful(false);
mBus.post(response);
}
@Override
public void success(LeaveResponse leaveResponse,
Response arg1) {
// need to create one since the api just returns a
// header with no body , hence the response is null
LeaveResponse response = new LeaveResponse();
response.setSuccessful(true);
mBus.post(response);
}
});
}
@Subscribe
public void onGetUsers(GetUsersRequest event) {
mRailsApi.getUsers(mApp.getToken(), new Callback<GetUsersResponse>() {
@Override
public void failure(RetrofitError arg0) {
GetUsersResponse response = (GetUsersResponse) arg0.getBody();
response.setSuccessful(false);
mBus.post(response);
}
@Override
public void success(GetUsersResponse getUsersResponse, Response arg1) {
getUsersResponse.setSuccessful(true);
mBus.post(getUsersResponse);
}
});
}
Here is what I have come up with so far. It seems to work but I am wondering if there is a better solution. One thing that bothers me is, that I have both the type parameter for the class and I am passing in the class to the constructor. It seems like I should not need to pass in the same info in two different ways.
The method calls become this:
@Subscribe
public void onLeave(LeaveRequest event) {
System.out.println("inside api repo - making leave request");
LeaveResponse response = new LeaveResponse();
mRailsApi.leave(event.getEmail(), event.getPassword(),
new RailsApiCallback<LeaveResponse>(mBus, LeaveResponse.class ));
}
And this is the callback class :
public class RailsApiCallback<T extends BaseResponse> implements Callback<T> {
private Bus mBus;
private Class mType;
public RailsApiCallback(Bus bus, Class type) {
super();
mBus = bus;
mType = type;
}
@Override
public void failure(RetrofitError retrofitError) {
T response = (T) retrofitError.getBody();
response.setSuccessful(false);
mBus.post(mType.cast(response));
}
@Override
public void success(T convertedResponse, Response rawResponse) {
T response = null;
try {
response = (T) (convertedResponse != null ? convertedResponse : mType.newInstance());
} catch (InstantiationException e) {
e.printStackTrace();
} catch (IllegalAccessException e) {
e.printStackTrace();
}
response.setSuccessful(true);
mBus.post(mType.cast(response));
}
}
Answer:
One thing that bothers me is, that I have both the type parameter for the class and I am passing in the class to the constructor. It seems like I should not need to pass in the same info in two different ways.
The reason why you have to do that is Type Erasure. Simply put, the generic type parameter is only known at compile time; it is erased at runtime, so the Class object has to be passed explicitly.
This call is unnecessary in your constructor: super();
All your fields should be marked with final.
Class mType; should be Class<T> mType;
I believe the mType.cast is unnecessary here:
mBus.post(mType.cast(response));
simply mBus.post(response); is enough. The event bus will automatically detect the class by calling obj.getClass() and invoke the appropriate listeners.
In fact, your response variable is already declared as T response; so I don't see what good casting it will do. | {
"domain": "codereview.stackexchange",
"id": 8593,
"tags": "java, android, callback"
} |
absolute speed of bodies as observed by a distant observer | Question: What does a distant observer see if two masses with a given velocity are close enough to each other that time is dilated ?
Scenario (using 2 point masses for simplicities sake):
Mass a: 1.00E+30 kg
Mass b: 2.00E+30 kg
Kinetic energy:
mass a: 1.39037910021726E+46
mass b: 2.78075820043453E+46
this would mean a velocity of
Velocity: 149896229 m/s [= c/2]
for both masses
distance between masses: 3960 m
using the formula for time dilation the time would pass at a rate of ~0.5 at mass a and at a rate of ~0.8 at mass b
if both masses are flying parallel in the same direction, does the observer see mass a flying with a speed of
74948115 m/s [= c/4] and mass b flying with a speed of 119916983 m/s [= c/2 * 0.8]
?
Answer: In general relativity, the notion of space and time is local. As underlined in the OP, it means that at a radius $r$ from a star of mass $M$, time will run at $$\sqrt{1-\frac{2GM}{rc^2}}$$ times the rate at an infinite distance from the object. For example imagine that $1-2GM/rc^2 = 1/4$, so that at a distance $r$, time runs at half the rate compared to an observer at infinity. If you are initially with an observer at infinity, you travel to $r$ (we neglect the time distortion during the travel), stay there an hour and then go back. When you reach the observer once again, your local clock will be one hour late, because 1 hour at $r$ will have passed in two hours at $r=\infty$.
So this distortion of space and time is only local. A massive object like a star only affects space and time in its neighborhood. A distant observer is not impacted by the distortion, so time and distances are unaffected.
To answer your question, an observer far away (for example on Earth) measuring by some meaning the speed of the two stars will actually see $c/2$ for both of them. | {
"domain": "astronomy.stackexchange",
"id": 1843,
"tags": "time-dilation"
} |
Detecting a specific pop in a real time audio signal | Question: I am listening for a very definite pop in a real time audio signal. So far I have managed to get the audio signal in at a sampling rate of 44100.0 Hz and 2048 frames per second. I have visualized the waveform and computed and visualized the FFT. I am currently looking for distinct features to recognize this pop sound (it is the sound of a ping pong ball on the table).
I found a few papers which suggested features such as the zero crossing rate of a percussive sound like this http://www.csl.sony.fr/downloads/papers/2000/gouyon-dafx2000.pdf. However I am struggling a bit to identify the percussive envelope within the signal. I would like to identify a peak in a certain frequency range within my FFT as well, then combine the existence of an envelope with a peak in the correct range to determine whether the sound was heard.
I have two questions: how would I identify an envelope in the time domain (or how should I approach this problem)? Further, how could I identify a peak in the frequency domain when I have a very fine mesh FFT (lots of bins and noise)? I also have a lot of pink noise as this is a real time audio recording, so I have louder frequencies toward the bottom end than the top end, but I haven't managed to get those frequencies out of my FFT (I don't need to worry about converting back to the time domain).
Identify envelopes beside natural noise
Identify FFT peaks beside natural noise
Any experience with noticing percussive transients?
Thanks
Answer: One of the simplest methods to optimally (minimising the error energy while restricted to LTI filters) detect the existance of a known signal inside another is to use a matched filter. A matched filter will have its impulse response equal to the signal being searched in time-reversed form. This resulting filter equivalently computes the correlation of the known signal with the measured signal. And the peaks of this correlator yields the position of the best possible matches.
Let the searched signal be $s[n]$ with $s= \{-2,-1,0,0,1,2\}$ with $s[0]=-2$ and $s[5]=2$ and length of signal is $L=6$.
Assume that this signal is embedded at various locations of another longer, uncorrelated signal x[n] of length M > L. The resulting combined signal (built by adding shifted versions of the test signal s[n] into x[n] at various locations) is called y[n] and has length M.
Now, to optimally find the indices of the locations where the signal s[n] was added, we can use an LTI filter whose impulse response h[n] is the time-reversed version of s[n]: $$h[n] = s[-n]$$
(you can refer to Simon Haykin's Communication Systems for a simple and accessible derivation of this optimal filter, which is called the matched filter. Note that the impulse response h[n] of this optimal filter is the same (except for time reversal) as the signal being searched, hence the name "matched": the optimal detector's impulse response matches the signal being searched)
Following the example of $s[n] = \{-2,-1,0,0,1,2\}$ then $h[n] = \{2,1,0,0,-1,-2\}$
Now, however, $h[-5]=2$ and $h[0]=-2$: this is a non-causal signal. In practice (on a computer system) it is wiser to shift it right just enough to make it causal. This is just a convenience and will not affect the filter's performance. We then work with the same impulse response $h[n]= \{2,1,0,0,-1,-2\}$, now with $h[0]=2$ and $h[5]=-2$, for mathematical convenience.
Now the output of the LTI filter is $$z[n] = h[n]*y[n]$$ where $*$ is the convolution operator, which gives the output of any discrete-time LTI filter and is expressed as $$z[n] = \sum_{k=0}^{k=M-1} {h[n-k]y[k]}$$ The dummy index k ranges over the intersection of the valid signal ranges, which is k=0 to k=M-1 for this particular example. The output index n will also range from 0 to M-1, by convention...
So the output sequence z[n] is expected to include some peaks; these peak locations, after being thresholded, indicate possible detections of the signal s[n] inside y[n]. (Practically, the exact start location will be L-1 samples back from the peak.)
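A minimal numpy sketch of this detection procedure (my own illustration; the signal, noise level, and embedding position are made up):

```python
import numpy as np

s = np.array([-2., -1., 0., 0., 1., 2.])   # the known signal s[n]
rng = np.random.default_rng(0)
y = 0.1 * rng.standard_normal(200)          # background (uncorrelated noise)
true_pos = 57
y[true_pos:true_pos + len(s)] += s          # embed s[n] at a known location

h = s[::-1]                                  # matched filter: time-reversed s[n]
z = np.convolve(y, h)                        # filter output = correlation with s

# The peak of z marks the end of the embedded copy; step back len(s)-1
# samples to recover its start index.
est_pos = int(np.argmax(z)) - (len(s) - 1)
print(est_pos)   # should equal true_pos
```

With low background noise the argmax of z[n] lands at the end of the embedded copy, so stepping back L-1 samples recovers its start.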
What if the signal s[n] is not only shifted but also scaled? Then the peak value will be reduced by the scaling factor. Hence the threshold must be selected very carefully, or you must find a way to estimate the scaling a priori. | {
"domain": "dsp.stackexchange",
"id": 2908,
"tags": "fft, discrete-signals, sound"
} |
How do I fix this limma line? | Question: I'm trying to conduct lmFit on my fread::table matrix:
fit <- lmFit(matchedgeneTPM)[,-c(1,2)]
It's giving me the error that the expression object should be numeric and that there are 2 non-numeric columns (because the first two columns are the gene labels and gene IDs I'm working with). I thought the last portion of my code would allow it to skip over the first two columns, but apparently that's not the right way to do it. Can someone show me how to format the code correctly, please?
Answer: My guess is you need to index matchedgeneTPM before you run lmFit(), not after:
fit <- lmFit(matchedgeneTPM[,-c(1:2)]) | {
"domain": "bioinformatics.stackexchange",
"id": 2504,
"tags": "r, rna, limma"
} |
Strain on eyes when seeing a mirror | Question: Suppose; The strain on my eyes when seeing an object at distance x is a.
The strain on my eyes when seeing an object at distance 2x is b.
Now if I see the image in the mirror (mirror is at distance x) what is the strain in my eye? What I actually see is my VIRTUAL image, so is the strain in my eye a (for seeing the mirror at distance x) OR b (for seeing the VIRTUAL image at distance 2x)?
And similarly, explain for a lens (where the image is real, so the strain on the eye is for seeing the image, I think)
WIKIPEDIA:
The image in a plane mirror is not magnified (that is, the image is the same size as the object) and appears to be as far behind the mirror as the object is in front of the mirror.
Light APPEARS to come from behind, but does not actually; that's why it's called a VIRTUAL IMAGE. (So the strain is for a distance x, for seeing the mirror, i.e. 'a', is what I understood.)
Edit:-
Light is coming from the mirror; it only appears to come from the VIRTUAL image. So the strain should be for seeing the mirror.
Answer: The lens in your eye doesn't know or care where the light is coming from. All it does is change the angle of the light passing through it. So as far as your lens is concerned the light coming from a real object at a distance $2x$ away is the same as the light coming from the virtual object at a distance $2x$ away formed by a mirror. In both cases the angles of the light rays reaching the lens are the same so they will be focussed in exactly the same way.
So the effort needed by your eye to focus is the same for a real object at $2x$ and a virtual image at $2x$. | {
"domain": "physics.stackexchange",
"id": 94963,
"tags": "optics, reflection, lenses, vision"
} |
Electroplating an alloy from separate metal solutions | Question: Similarly to a question asked here, would mixing two solutions containing metallic ions in different proportions allow one to electroplate something with an alloy of the metals? For example, if I take a $\ce{CuSO4}$ solution and mix it with a $\ce{ZnSO4}$ solution, will I be able to electroplate brass? If I want to make 70/30 brass, would mixing a 0.7 mol/L and a 0.3 mol/L solution work?
I know a "brass" solution can be made by reverse electroplating (is that the term?) a brass anode in an acid solution so mixing two solutions should have the same result, copper and zinc ions in a solution.
Answer: In short, yes, it is possible to electroplate alloys, but it isn't as simple as just mixing the ions you want in the ratio you want.
In the simplest approximation, the alloy deposition would behave as two simultaneous, separate depositions at the same voltage and the composition could be controlled by changing the voltage, assuming each component has a different current-voltage response.
In the real world, unfortunately, there are numerous interactions between the ions in solution, the metals on the electrode, etc. and the outcome becomes quite complex to predict.
Here is a decent open-access review of the chemistry and applications of this type of process. | {
"domain": "chemistry.stackexchange",
"id": 10954,
"tags": "electroplating"
} |
Cron expression validator for Apache Quartz | Question: Not too long ago, I had to create a cron expression for a route in Apache Camel. I struggled a bit to find the right expression, so I made a small program to output the next n valid dates returned by the expression. I planned to add more features, like writing the dates to a file instead of making a blind Sysout.
In my application, a cron expression corresponds to the definition of this class. It's used to fire an event for a Camel route. You set a Quartz endpoint on your route and use a cron expression to express at which interval it should start. An example:
0 0/5 12-18 ? * MON-FRI
Translated to a more human format, this cron expression means: every five minutes from 12 pm (noon) to 6 pm on weekdays. (This cron is taken from the Quartz documentation.)
For this particular cron, my program (with numberOfDates set to 5) would output:
Fri Sep 19 12:05:00 EDT 2014
Fri Sep 19 12:10:00 EDT 2014
Fri Sep 19 12:15:00 EDT 2014
Fri Sep 19 12:20:00 EDT 2014
Fri Sep 19 12:25:00 EDT 2014
The application is really basic. I take two arguments: the cron expression and the number of dates you want printed.
Feel free to cover any aspect. I had particular problem with the naming of my classes and variables. There isn't much code at the moment, but my naming and the way I verify arguments worry me a bit.
It's a public repo on GitHub
CronTester
import java.util.Date;
import java.util.List;
public class CronTester {
/**
* The application take two arguments :
* The first argument is the cron expression surrounded by double quotes: "* * * * * *"
* The second argument is the number of dates you want to verify. It's a simple Integer.
* @param args
*/
public static void main(String[] args) {
if(args.length > 2) {
throw new IllegalArgumentException("You should only provide 2 arguments. \"[cronExpression]\" [numberOfDates]");
} else if (args.length != 2) {
throw new IllegalArgumentException("You need to provide 2 arguments in the following form : \"[cronExpression]\" [numberOfDates]");
}
final String expression = args[0];
final int numberOfDates;
try {
numberOfDates = Integer.parseInt(args[1]);
} catch (NumberFormatException e) {
throw new IllegalArgumentException("The second argument needs to be a valid integer.");
}
final CronDateCreator dateCreator = new CronDateCreator(expression);
List<Date> result = dateCreator.createValidTimeDatesFromNow(numberOfDates);
for(Date expected : result) {
System.out.println(expected);
}
}
}
CronDateCreator
import java.text.ParseException;
import java.util.ArrayList;
import java.util.Date;
import java.util.List;
import org.quartz.CronExpression;
public class CronDateCreator {
private CronExpression cron;
public CronDateCreator(String cronExpressionRaw) {
try {
this.cron = new CronExpression(cronExpressionRaw);
} catch (ParseException e) {
throw new IllegalArgumentException("The cron expression supplied is not a valid cron expression.",e);
}
}
public List<Date> createValidTimeDates(Date startingDate, int numberOfDates) {
Date nextValidDate = cron.getNextValidTimeAfter(startingDate);
List<Date> results = new ArrayList<>();
for (int i = 0; i < numberOfDates; i++) {
results.add(nextValidDate);
nextValidDate = cron.getNextValidTimeAfter(nextValidDate);
}
return results;
}
public List<Date> createValidTimeDatesFromNow(int numberOfDates) {
return createValidTimeDates(new Date(), numberOfDates);
}
}
Answer: In terms of overall hierarchy here, the CronTester class is just 'fluff' and does not do anything except provide a main method. The interesting class is CronDateCreator.
CronDateCreator is a very lightweight class. It essentially does nothing that requires it to be a class... it wraps a String value in a CronExpression, is that CronExpression reused for anything?
It strikes me that the standard use-case for this class would be:
CronDateCreator cdc = new CronDateCreator(input);
List<Date> dates = cdc.createValidTimeDatesFromNow(numberOfDates);
and then the cdc will be forgotten.
Is this a case of where a static method is maybe all you need?
Something like:
public static List<Date> createValidTimeDatesFromNow(String expression, int numberOfDates) {
....
}
Just a thought...
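The static-method shape suggested here can also be sketched in Python; any "next valid time after t" callable stands in for Quartz's CronExpression.getNextValidTimeAfter (the every-five-minutes rule below is a toy stand-in, not a real cron parser):

```python
from datetime import datetime, timedelta

def create_valid_time_dates(next_after, starting, number_of_dates):
    """Collect the next `number_of_dates` fire times strictly after `starting`."""
    results = []
    for _ in range(number_of_dates):
        starting = next_after(starting)  # advance first, then record
        results.append(starting)
    return results

def next_five_minute_mark(t):
    """Toy stand-in for getNextValidTimeAfter: next multiple of 5 minutes."""
    t = t.replace(second=0, microsecond=0) + timedelta(minutes=1)
    while t.minute % 5:
        t += timedelta(minutes=1)
    return t

fires = create_valid_time_dates(next_five_minute_mark, datetime(2014, 9, 19, 12, 3), 3)
```

Passing the "next time" rule in as a callable keeps the collecting loop trivially testable, which is the same benefit the static-method refactoring buys in the Java version.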
Date
Now, about the Date return values.
Date is a much maligned class in Java. It has so many nuances, issues, and tweaks that it was initially 'forked' and rewritten as JodaTime in an open source library, then, in Java8, they have pulled the best from JodaTime, and incorporated some other compatibility and functionality changes, and created the new java.time.* package.
Any code written to support Java8 or newer should essentially abandon Date, and work with Instant instead.
You should use your class/static methods to translate the Dates back to Instants, and go from there.
Actual code:
Your method here could be neatened up a lot:
public List<Date> createValidTimeDates(Date startingDate, int numberOfDates) {
Date nextValidDate = cron.getNextValidTimeAfter(startingDate);
List<Date> results = new ArrayList<>();
for (int i = 0; i < numberOfDates; i++) {
results.add(nextValidDate);
nextValidDate = cron.getNextValidTimeAfter(nextValidDate);
}
return results;
}
For a start, you know how large the ArrayList will be, so set the size on the constructor:
List<Date> results = new ArrayList<>(numberOfDates);
Then, your loop logic is off... how about the following?:
public List<Date> createValidTimeDates(Date startingDate, final int numberOfDates) {
List<Date> results = new ArrayList<>(numberOfDates);
for (int i = 0; i < numberOfDates; i++) {
startingDate = cron.getNextValidTimeAfter(startingDate);
results.add(startingDate);
}
return results;
} | {
"domain": "codereview.stackexchange",
"id": 9656,
"tags": "java, datetime, validation"
} |
Async ASP.NET MVC 5 controller method | Question: I'm attempting to correctly convert a synchronous controller method to asynchronous, given that the operation it performs is CPU-intensive.
private MyDbContext db = new MyDbContext();
public async Task<ActionResult> Index()
{
List<DashboardItemViewModel> viewModels = new List<DashboardItemViewModel>();
List<ProductModel> products = db.Products.ToList();
await Task.Run(() =>
{
foreach (ClientModel client in db.Clients)
{
// Constructor uses lots of logic, mostly looping through the list
// many times
viewModels.Add(new DashboardItemViewModel(client, products));
}
});
return View(viewModels);
}
Answer: I think the closures are unnecessary and create overhead (however small) you can avoid. See Using Asynchronous Methods in ASP.NET MVC 4, where specifically the await keyword is placed within the return statement and, additionally (unlike the example), you can return the entire self-contained code block. | {
"domain": "codereview.stackexchange",
"id": 27064,
"tags": "c#, asynchronous, asp.net-mvc"
} |
Angular Momentum Quantum Number and Orbitals | Question: In our lecture I was told the angular momentum QN goes up to $n - 1$, where $n$ is the principal QN, and that it corresponds to the orbital shape. How is this the case? For example, in barium the principal QN is 6, so it should have at least an f orbital, because the angular momentum QN can be 3, which corresponds to the f orbital; but its electron configuration does not contain an f orbital. Why is this?
Answer: It has the f orbitals, they're just empty, and so they get left out of the (ground state) electron configurations.
f orbitals are high-energy orbitals within their shell. The 4f orbitals (in n=4) don't get filled until after the 6s orbital is filled (and kinda sorta one 5d gets filled; f orbitals don't really follow nice patterns when they get filled).
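A compact way to state both facts, the allowed range of $\ell$ and the usual (Madelung) filling order:

```latex
\ell \in \{0, 1, \dots, n-1\},
\qquad
\text{fill by increasing } n+\ell \ \text{(ties: smaller } n \text{ first)}
```

For barium, $6s$ has $n+\ell = 6$ and fills before $4f$ with $n+\ell = 7$, so the $4f$ orbitals exist but are empty in the ground state (with the usual caveat that the rule has exceptions).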
For a good read on the filling pattern of orbitals (which, btw, is very much a generalization - there are plenty of exceptions), here's a good link. | {
"domain": "chemistry.stackexchange",
"id": 10989,
"tags": "orbitals"
} |
What are these little leaves growing on these other leaves? (plant growing near the ocean) | Question: I saw many of these plants while hiking on a hill facing the ocean in northern Taiwan. I've included a snapshot of a nearby part of the same trail to show it's rugged, wet, green, and rocky, and another photo of what looks to be the same kind of plant without these things, and one that has a reddish coloring.
Any idea what these little (~10 mm long) "leaf-shaped" brown/orangish things are, growing from points along two regular rows of bumps that are repeated everywhere? Is this reproduction, and if so, how does it work?
Also curious, any thoughts on what kind of plant to call this? I don't need a species identification necessarily, but is it a kind of fern?
below: Additional photos, click for full size.
Answer: These are really cool, and I am absolutely not certain of this, but I would hazard that they might be some kind of bulbil? Ferns (if I have started my ID correctly) can use them to asexually reproduce. For example the New Zealand endemic Asplenium bulbiferum grows little bulbils on the adaxial surface of its frond. | {
"domain": "biology.stackexchange",
"id": 8691,
"tags": "species-identification, reproduction, seeds"
} |
How fictitious are fictitious forces? | Question: How fictitious are fictitious forces?
More specifically, in a rotating reference frame, i.e. on the surface of the earth, does an object that is 'stationary' and in contact with the ground feel centrifugal and Coriolis forces? Or are these forces purely fictional and used to account for differences in observed behaviour relative to an inertial frame?
To give a practical example a turreted armoured vehicle is sitting stationary and horizontally somewhere in the UK. The turret is continually rotating in an anti-clockwise direction. Do the motors that drive the turret's rotation require more power as the turret rotates from east to west and less power as the turret rotates from west to east? i.e. are the turret motors cyclically assisted and hindered by the earths rotation?
Answer: No, they are not real forces.
Quoting from my answer here
Whenever we view a system from an accelerated frame, there is a "pseudoforce" or "false force" which appears to act on the bodies. Note that this force is not actually a force; it is more something which appears to be acting. A mathematical trick, if you will.
Let's take a simple case. You are accelerating with $\vec{a}$ in space, and you see a little ball floating around. This is in a perfect vacuum, with no electric/magnetic/gravitational/etc fields. So, the ball does not accelerate.
But, from your point of view, the ball accelerates with an acceleration $-\vec{a}$, backwards relative to you. Now you know that the space is free of any fields, yet you see the particle accelerating. You can either deduce from this that you are accelerating, or you can decide that there is some unknown force, $-m\vec{a}$, acting on the ball. This force is the pseudoforce. It mathematically enables us to look at the world from the point of view of an accelerated frame, and derive equations of motion with all values relative to that frame. Many times, solving things from the ground frame gets icky, so we use this. But let me stress once again, it is not a real force.
And here:
The centrifugal force is basically the pseudoforce acting in a rotating frame. Basically, a frame undergoing UCM has an acceleration $\frac{v^2}{r}$ towards the center. Thus, an observer of mass $m$ in that rotating frame will feel a pseudoforce $\frac{mv^2}{r}$ outwards. This pseudoforce is known as the centrifugal force.
Unlike the centripetal force, the centrifugal force is not real. Imagine a ball being whirled around. It has a CPF $=\frac{mv^2}{r}$, and this force is the tension in the string. But, if you shift to the ball's frame (become tiny and stand on it), it will appear to you that the ball is stationary (as you are standing on it; the rest of the world will appear to rotate). But, you will notice something a bit off: the ball still has a tension force acting on it, so how is it steady? This balancing of forces you attribute to a mysterious "centrifugal force". If you have mass, you feel the CFF too (from the ground, it is obvious that what you feel as the CFF is due to your inertia).
What really happens when you "feel" pseudoforces is the following. I'll take the example of spinning on a playground wheel.
From the ground frame, your body has inertia and would not like to accelerate(circular motion is acceleration as the direction of velocity changes).
But, you are holding on to the spinning thingy so you're forced to accelerate. Thus, there is a net inward force--centripetal force--a true force since it's from "holding on". In that frame, though, you don't move forward. So your body feels as if there is a balancing backwards force. And you feel that force acting upon you. It really is your body's "inertia" that's acting.
Yes, the turret's wheels are affected. Again, this is due to inertia from the correct perspective; pseudoforces are just a way to easily explain inertia.
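For reference (a standard result, not spelled out in the answer above), the full pseudoforce on a mass $m$ in a frame rotating with angular velocity $\vec\Omega$ is

```latex
\vec{F}_{\text{pseudo}}
= -\,m\,\vec{\Omega}\times(\vec{\Omega}\times\vec{r})
\;-\; 2m\,\vec{\Omega}\times\vec{v}^{\,\prime}
\;-\; m\,\dot{\vec{\Omega}}\times\vec{r}
```

the centrifugal, Coriolis, and Euler terms respectively. For the turret, it is the Coriolis term, linear in the velocity $\vec{v}^{\,\prime}$ measured in the rotating frame, that reverses sign when the relative motion reverses.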
Remember, Newton's definition of a force is only valid in an inertial frame in the first place. Pseudoforces make Newton's laws valid in non-inertial frames. | {
"domain": "physics.stackexchange",
"id": 39775,
"tags": "forces, inertial-frames, reference-frames"
} |
Passing parameter to singleton | Question: I wrote this factory class in order to pass a parameter to a singleton class. Is it a good design for a multithreaded environment?
public static class LoggingServiceFactory
{
private static string _connectionstring;
private static readonly Lazy<LoggingService> _INSTANCE = new Lazy<LoggingService>(() => new LoggingService(_connectionstring));
public static ILoggingService GetService(string connectionString)
{
_connectionstring = connectionString;
return _INSTANCE.Value;
}
private class LoggingService : ILoggingService
{
private string _connectionstring;
internal LoggingService(string connectionString)
{
_connectionstring = connectionString;
}
public void LogMessage(string msg)
{
// do the logging work
}
}
}
public interface ILoggingService
{
void LogMessage(string msg);
}
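For comparison, the same idea of a lazily created singleton parameterized by its first caller can be sketched language-agnostically in Python with an explicit lock; LoggingService below is a simplified stand-in for the class above:

```python
import threading

class LoggingService:
    """Simplified stand-in for the C# LoggingService."""
    def __init__(self, connection_string):
        self.connection_string = connection_string

    def log_message(self, msg):
        pass  # do the logging work

_lock = threading.Lock()
_instance = None

def get_service(connection_string):
    """First caller's connection string wins; later arguments are ignored,
    mirroring the Lazy<T> behavior of the C# version."""
    global _instance
    with _lock:  # serializes creation so only one instance is ever built
        if _instance is None:
            _instance = LoggingService(connection_string)
    return _instance

a = get_service("db-1")
b = get_service("db-2")  # different argument, same instance returned
```

Worth noting in review: in the C# version, `_connectionstring` is written before `_INSTANCE.Value` is read, so two concurrent first callers could race and the instance may be created with either caller's string; holding a lock around both the check and the construction, as above, avoids that.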
Answer: If connection string matching is a really necessary thing:
static class LoggingServiceFactory
{
static readonly TaskCompletionSource<string> _cs = new TaskCompletionSource<string>();
static ILoggingService _service;
public static ILoggingService GetService(string connectionString)
{
if (_cs.TrySetResult(connectionString))
_service = new LoggingService(connectionString);
else
if (_cs.Task.Result != connectionString)
throw new InvalidOperationException("Connection string redefinition.");
return _service;
}
} | {
"domain": "codereview.stackexchange",
"id": 18181,
"tags": "c#"
} |
Why isn’t light polarization used as a physical realization of a quantum computer? | Question: Constructing a qubit requires something that can be represented as a linear combination of two states. The physical realizations are numerous
https://en.wikipedia.org/wiki/Quantum_computing
But I do not see the use of light polarization in this list. If we let |0> be vertically polarized light and |1> be horizontally polarized light, then a qubit can assume the polarization of light as a sum of these two components. It is easy to read the state of a qubit by measuring the residual brightness after passing the photon through a linear Polaroid filter. The polarization of a qubit can be changed by applying a magnetic field to the photon. The polarization can be stored in a hologram.
https://www.researchgate.net/publication/258369620_Polarization_Holography
Obviously there is a reason no one has done this so perhaps someone can enlighten me.
Answer: Of course this is possible and has been done. For instance, photonic polarization qubits are explicitly listed on https://en.wikipedia.org/wiki/Qubit#Physical_implementations
as a physical implementation. Indeed, it is easy to convert polarization into other encodings of a qubit with photons, such as a "which-way" encoding, by using polarizing beam splitters.
There are some difficulties which are specific to photonic qubits, most importantly the necessity to create a large number of photons at the same time ("on demand"), and the difficulty of coupling qubits, which requires non-linear media. | {
"domain": "physics.stackexchange",
"id": 66445,
"tags": "electromagnetic-radiation, quantum-information, polarization, quantum-optics, quantum-computer"
} |
How can heat dissipation from a resistor have such a simple relationship with resistance? | Question: I understand that when current runs through a resistor in a circuit, Joule heating occurs. I've read that you can think of the energy conversion as happening due to some of the electrons of the current being blocked on their way through the resistor. When you have a collision of course there is energy transfer.
The formula for the rate of dissipation is $I^2R$
$$
\text{Resistance} = \frac{\text{material resistivity} \times \text{length}}{\text{sectional area}}
$$
Say we have 2 wires, both with length 5 but one with resistivity 2 and area 2, and the other with resistivity 1 and area 1. Both have resistance 5. The latter has lower resistivity, but since it also has less area in which the electrons can travel, the resistance remains the same.
If we hook both up to identical 5 V batteries, 1 A will flow in both cases. The heat dissipation will be $1^2 \times 5 = 5$ in both cases. But surely the higher resistivity of the first wire should imply more of the energy-transferring collisions per unit area, which combined with the higher area should result in a higher rate of dissipation?
Answer: They both still have the same current though, so the same charge per unit time is running through them.
If you were applying the same amperage per unit area of resistor, then you should expect the higher-resistivity material would indeed have a higher resistance. In this case you aren't, though; you're sending the same amperage through two different areas, so although resistance per unit area is greater, the applied amperage per unit area is also less, and total resistance is still the same.
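The numbers from the question check out; a quick sketch:

```python
def resistance(rho, length, area):
    """R = rho * L / A for a uniform wire."""
    return rho * length / area

def dissipated_power(current, r):
    """Joule heating: P = I^2 * R."""
    return current ** 2 * r

r1 = resistance(rho=2, length=5, area=2)  # first wire
r2 = resistance(rho=1, length=5, area=1)  # second wire
p1 = dissipated_power(1, r1)              # same 1 A from the 5 V battery
p2 = dissipated_power(1, r2)
```

Both resistances come out to 5 and both powers to 5, matching the answer: the same current through the same total resistance means the same dissipation.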
A simplified analogy: you have more places for the current to travel through along each length, so a thick wire causes less resistance than a thin one. | {
"domain": "physics.stackexchange",
"id": 47406,
"tags": "electric-circuits, electrical-resistance"
} |
How can I apply a fixed and adjustable force to my flexible sample? | Question: I major in microelectronics. I just want to apply a fixed and adjustable force to my flexible sample.
'Fixed' means that the force can be held for a given time without change.
'Adjustable' means that the force can be changed by my setting, or that its value can vary with time according to a rule.
It would be better if the force was horizontal.
My sample can endure the force with value about 10 N. And I hope the force could be adjustable between 0 and 10 N.
I think a stepper motor with a load cell may be the solution, but I don't know how to realize it exactly. The only thing I can think of is to hang different balance weights from one side of my sample. I know some kinds of universal testing machine could also get this done, but they are too expensive.
Answer: With a spring and an (accurate) actuator you can create one.
One end of the spring is the force applicator; the other end is moved by the actuator so that the spring compresses a certain amount.
Then by Hooke's law you can control the force that the spring applies to your test piece.
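A quick sizing sketch for this spring-plus-actuator idea (the spring constant below is an assumed example value, not a recommendation):

```python
def displacement_for_force(force_n, k_n_per_m):
    """Hooke's law x = F / k: actuator travel needed for a target force."""
    return force_n / k_n_per_m

k = 1000.0  # assumed spring constant in N/m
x_max = displacement_for_force(10.0, k)  # travel for the full 10 N scale
```

A softer spring (smaller k) spreads the 0 to 10 N range over more actuator travel, which relaxes the positioning resolution the actuator needs.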
This works best if the test piece doesn't deform. | {
"domain": "engineering.stackexchange",
"id": 297,
"tags": "mechanical-engineering, mechanical-failure"
} |
How to remove file extension using C#? | Question: I am trying to remove file extension of a file with many dots in it:
string a = "asdasdasd.asdas.adas.asdasdasdasd.edasdasd";
string b = a.Substring(a.LastIndexOf('.'), a.Length - a.LastIndexOf('.'));
string c = a.Replace(b, "");
Console.WriteLine(c);
Is there any better way of doing this?
Answer: If you can, just use Path.GetFileNameWithoutExtension
Returns the file name of the specified path string without the extension.
Path.GetFileNameWithoutExtension("asdasdasd.asdas.adas.asdasdasdasd.edasdasd");
With one line of code you can get the same result.
If you want to create one by yourself, why not just use this?
int index = a.LastIndexOf('.');
b = index == -1 ? a : a.Substring(0, index);
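For comparison, Python's standard library exposes both behaviors; os.path.splitext splits at the final dot, just like the LastIndexOf approach:

```python
import os.path

name = "asdasdasd.asdas.adas.asdasdasdasd.edasdasd"

stem, ext = os.path.splitext(name)  # splits at the last dot only

# hand-rolled equivalent, covering the no-dot case like the snippet above
index = name.rfind(".")
stem2 = name if index == -1 else name[:index]
```

(splitext also leaves names like ".bashrc" alone, treating the leading dot as part of the name rather than an extension.)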
P.S. Special thanks to @Anthony and @CompuChip for pointing out some mistakes I made; bad day, maybe.
You take everything from index 0 (the start) up to the last dot, which marks the start of the extension. | {
"domain": "codereview.stackexchange",
"id": 6968,
"tags": "c#"
} |
Gauge invariance of Faddeev-Popov determinant in bosonic string theory | Question: I am, once again, going through an introduction to (bosonic) string theory, following the lecture notes by David Tong on the subject, and once again I am stumbling on technicalities around the Polyakov path integral formulation.
This time it is the claimed gauge invariance of the Faddeev-Popov determinant, defined in Tong's notes in eq. (5.1) on page 110 as:
$$\Delta[g]^{-1}=\int_G\mathcal{D}\xi\delta(g-g_0^\xi)\tag{5.1}$$
where, for simplification, $g$ and $g_0$ are Lorentzian metrics on the cylinder and the integral is over "the Haar measure" on the group $G$ of diffeomorphisms and Weyl transformations. For $\xi$ the diffeomorphism $f$ and Weyl factor $\phi$, $g^\xi=\phi f^*g$ or something along those lines.
Tong claims that this expression is gauge invariant, that is $\forall \epsilon\in G$: $\Delta[g^\epsilon]=\Delta[g]$, and gives a short uncommented proof of it as:
$$\Delta[g^\epsilon]^{-1}=\int_G\mathcal{D}\xi\delta(g^\epsilon-g_0^\xi)=\int_G\mathcal{D}\xi\delta(g-g_0^{\epsilon^{-1}\xi})=\int_G\mathcal{D}\xi\delta(g-g_0^{\xi})=\Delta[g]^{-1}.\tag{p.111}$$
I guess the third equality uses the translation invariance of the Haar measure, but the second step simply seems wrong to me. I think it should be:
$$\int_G\mathcal{D}\xi\delta(g^\epsilon-g_0^\xi)=\int_G\mathcal{D}\xi\delta(g^\epsilon-g_0^{\epsilon\xi})=\int_G\mathcal{D}\xi\delta([g-g_0^\xi]^\epsilon)=\int_G\mathcal{D}\xi\frac{\delta(g-g_0^\xi)}{|\det\frac{\delta h^\epsilon}{\delta h}\vert_{h=0}|}.$$
If we were talking about a representation of a compact topological group it is clear that this determinant is $1$, but in this case I can't see it.
Moreover, there is indirect evidence that the Faddeev-Popov determinant is not gauge invariant: Apparently it can be written as the partition function of a $c=-26$ CFT, but the partition functions of CFTs are only Weyl-invariant for $c=0$ (or a flat background metric, which we can't assume since we are integrating over all background metrics).
The question is: am I overlooking something, and if yes, what? To be clear, I am convinced that treating this non-invariance correctly gives the right expression for the gauge-fixed path integral anyway, but the presentation in Tong's notes seems flawed, even apart from all the assumptions made.
Remark: this would also clear up an earlier question of mine, since the non-invariance of the Faddeev-Popov determinant and that of the string measure would exactly cancel in $26$ dimensions, see my earlier question.
Answer: Let $Z[g]$ be the partition function of a conformal field theory with central charge $c$ on a genus $0$ surface, $F[g]=\ln Z[g]$ the "free energy".
It is a standard result that
\begin{equation}
g^{ab}(p)\frac{\delta}{\delta g^{ab}(p)}F[g]\sim c\sqrt{|g|}R[g](p)\qquad(1)
\end{equation}
where $R[g]$ is the Ricci curvature and the proportionality constant is not zero and independent of $g$. In particular, eq. (1) implies that the partition function can't be Weyl rescaling invariant whenever $c\neq 0$ and the background is curved.
Firstly, the proof of gauge invariance given by Tong and Polchinski is, almost literally cited, this:
\begin{equation}
\Delta[g^\epsilon]^{-1}=\int\mathcal{D}\xi\delta(g^\epsilon-g_0^\xi)=\int\mathcal{D}\xi\delta([g-g_0^{\epsilon^{-1}\xi}]^\epsilon)=\int\mathcal{D}\xi'\delta([g-g_0^{\xi'}]^\epsilon)=\int\mathcal{D}\xi'\delta(g-g_0^{\xi'})=\Delta[g]^{-1}\qquad(2)
\end{equation}
The point where I don't agree is the second-to-last equality in eq. (2): as is well known, there should be a factor of $|\det({\frac{\delta h^\epsilon}{\delta h}\vert_{h=0}})|^{-1}$ appearing. If we were talking about a representation of a compact group I would agree that this is always $1$, but, since we are including Weyl rescalings, the group we are considering is far from compact. In particular, consider the case when $\epsilon$ is a Weyl rescaling $h^\epsilon=\phi h$; then we have to determine $\det('\text{multiplication with }\phi')$, which I highly suspect to not be $1$ for general $\phi$ (even when regularized appropriately).
Secondly, assume that we are on a cylinder such that $\exists \epsilon:g=g_0^\epsilon$. Then following Tong almost word by word we find that
\begin{align*}
\Delta[g]^{-1}&=\int\mathcal{D}\xi\delta(g_0^\epsilon-g_0^\xi)=\int\mathcal{D}\xi\delta(g_0^\epsilon-(g_0^\epsilon)^\xi)\\
&=\int\mathcal{D}\xi\delta(2w(g_0^\epsilon)_{ab}+\nabla_{(a}\nu_{b)})=\ldots\\
&=Z_{\text{bosonic ghosts}}[g_0^\epsilon]
\end{align*}
so that at the end of the day we can write the Faddeev-Popov determinant as the partition function of the ghost CFT:
\begin{equation}
\Delta[g]=Z_{\text{gh}}[g]\qquad(3)
\end{equation}
where the right hand side, as discussed above, is not gauge invariant: Let $\epsilon_\phi$ be the Weyl rescaling by $1+\phi$, gauge invariance must imply that $\frac{\delta \Delta[g^{\epsilon_\phi}]}{\delta \phi(p)}\vert_{\phi=0}=0$, but according to eq. (1) and (3) we have
\begin{align*}
\frac{\delta \Delta[g^{\epsilon_\phi}]}{\delta \phi(p)}\vert_{\phi=0}&=\frac{\delta Z_{\text{gh}}[g^{\epsilon_\phi}]}{\delta \phi(p)}\vert_{\phi=0}=\frac{\delta Z_{\text{gh}}[g+\phi g]}{\delta \phi(p)}\vert_{\phi=0}\\
&=\int\mathrm{d}q\,\frac{\delta Z_{\text{gh}}[g]}{\delta g^{ab}(q)}\frac{\delta \phi(q) g^{ab}(q)}{\delta \phi (p)}\vert_{\phi=0}=\int\mathrm{d} q\,\frac{\delta Z_{\text{gh}}[g]}{\delta g^{ab}(q)}g^{ab}(q)\delta(p-q)\\
&=Z_{\text{gh}}[g]g^{ab}(p)\frac{\delta}{\delta g^{ab}(p)}F_{\text{gh}}[g]\sim \Delta[g]c\sqrt{|g|}R[g](p)
\end{align*}
So, since the ghost CFT in this case has $c=-26\neq0$ and $g$ in general might have nonzero curvature, we find that the Faddeev-Popov determinant can't be gauge invariant.
Finally, I want to remark that this is actually not a problem for our considerations, but makes it possible in the first place:
\begin{align*}
Z_{\text{String}}&=\int\mathcal{D}gZ_{\text{Polyakov}}[g]=\int\mathcal{D}g\Delta[g]\int\mathcal{D}\xi\delta(g-g_0^\xi)Z_{\text{Polyakov}}[g]\\
&=\int\mathcal{D}\xi Z_{\text{gh}}[g_0^\xi]Z_{\text{Polyakov}}[g_0^\xi]
\end{align*}
The combination $Z_{\text{gh}}[g_0^\xi]Z_{\text{Polyakov}}[g_0^\xi]$ has a conformal anomaly given by $c=D-26$, so it is gauge invariant if and only if $D=26$! In that case we can drop the integration over the gauge group and the associated infinite but constant factor to get
\begin{equation*}
Z_{\text{String}}=Z_{\text{gh}}[g_0]Z_{\text{Polyakov}}[g_0]
\end{equation*}
which is our desired result. | {
"domain": "physics.stackexchange",
"id": 66934,
"tags": "string-theory, gauge-theory, conformal-field-theory, path-integral"
} |
Permission Denied: running rosjava pubsub tutorial | Question:
I am trying to run the rosjava_tutorial_pubsub tutorial, but every time I try to "rosmake" it, I get the following error about not being able to create "ros.properties." I'm honestly not sure what to do from here, and I've looked through the rest of the answer base. Thanks in advance!
kevinaboos@ubuntu:~/ros_workspace$ roscd rosjava_tutorial_pubsub/
kevinaboos@ubuntu:/opt/ros/electric/stacks/rosjava_core/rosjava_tutorial_pubsub$ rosmake
[ rosmake ] No package specified. Building ['rosjava_tutorial_pubsub']
[ rosmake ] Packages requested are: ['rosjava_tutorial_pubsub']
[ rosmake ] Logging to directory/home/kevinaboos/.ros/rosmake/rosmake_output-20111115-023614
[ rosmake ] Expanded args ['rosjava_tutorial_pubsub'] to:
['rosjava_tutorial_pubsub']
[ rosmake ] Checking rosdeps compliance for packages rosjava_tutorial_pubsub. This may take a few seconds.
[ rosmake ] rosdep check passed all system dependencies in packages
[rosmake-0] Starting >>> roslib [ make ]
[rosmake-0] Finished <<< roslib ROS_NOBUILD in package roslib
[rosmake-0] Starting >>> std_msgs [ make ]
[rosmake-0] Finished <<< std_msgs ROS_NOBUILD in package std_msgs
[rosmake-0] Starting >>> rosgraph_msgs [ make ]
[rosmake-0] Finished <<< rosgraph_msgs ROS_NOBUILD in package rosgraph_msgs
[rosmake-0] Starting >>> rosbuild [ make ]
[rosmake-0] Finished <<< rosbuild ROS_NOBUILD in package rosbuild
No Makefile in package rosbuild
[rosmake-0] Starting >>> roslang [ make ]
[rosmake-0] Finished <<< roslang ROS_NOBUILD in package roslang
No Makefile in package roslang
[rosmake-0] Starting >>> rosclean [ make ]
[rosmake-0] Finished <<< rosclean ROS_NOBUILD in package rosclean
[rosmake-0] Starting >>> rosgraph [ make ]
[rosmake-0] Finished <<< rosgraph ROS_NOBUILD in package rosgraph
[rosmake-0] Starting >>> rosparam [ make ]
[rosmake-0] Finished <<< rosparam ROS_NOBUILD in package rosparam
[rosmake-0] Starting >>> rospy [ make ]
[rosmake-0] Finished <<< rospy ROS_NOBUILD in package rospy
[rosmake-0] Starting >>> rosmaster [ make ]
[rosmake-0] Finished <<< rosmaster ROS_NOBUILD in package rosmaster
[rosmake-0] Starting >>> cpp_common [ make ]
[rosmake-0] Finished <<< cpp_common ROS_NOBUILD in package cpp_common
[rosmake-0] Starting >>> roscpp_traits [ make ]
[rosmake-0] Finished <<< roscpp_traits ROS_NOBUILD in package roscpp_traits
[rosmake-0] Starting >>> rostime [ make ]
[rosmake-0] Finished <<< rostime ROS_NOBUILD in package rostime
[rosmake-0] Starting >>> roscpp_serialization [ make ]
[rosmake-0] Finished <<< roscpp_serialization ROS_NOBUILD in package roscpp_serialization
[rosmake-0] Starting >>> xmlrpcpp [ make ]
[rosmake-0] Finished <<< xmlrpcpp ROS_NOBUILD in package xmlrpcpp
[rosmake-0] Starting >>> rosconsole [ make ]
[rosmake-0] Finished <<< rosconsole ROS_NOBUILD in package rosconsole
[rosmake-0] Starting >>> roscpp [ make ]
[rosmake-0] Finished <<< roscpp ROS_NOBUILD in package roscpp
[rosmake-0] Starting >>> rosout [ make ]
[rosmake-0] Finished <<< rosout ROS_NOBUILD in package rosout
[rosmake-0] Starting >>> roslaunch [ make ]
[rosmake-0] Finished <<< roslaunch ROS_NOBUILD in package roslaunch
No Makefile in package roslaunch
[rosmake-0] Starting >>> rosunit [ make ]
[rosmake-0] Finished <<< rosunit ROS_NOBUILD in package rosunit
[rosmake-0] Starting >>> rostest [ make ]
[rosmake-0] Finished <<< rostest ROS_NOBUILD in package rostest
[rosmake-0] Starting >>> test_ros [ make ]
[rosmake-0] Finished <<< test_ros ROS_NOBUILD in package test_ros
[rosmake-0] Starting >>> rosjava_bootstrap [ make ]
[rosmake-0] Finished <<< rosjava_bootstrap ROS_NOBUILD in package rosjava_bootstrap
[rosmake-0] Starting >>> apache_commons_util [ make ]
[rosmake-0] Finished <<< apache_commons_util ROS_NOBUILD in package apache_commons_util
[rosmake-0] Starting >>> apache_xmlrpc [ make ]
[rosmake-0] Finished <<< apache_xmlrpc ROS_NOBUILD in package apache_xmlrpc
[rosmake-0] Starting >>> rosjava [ make ]
[rosmake-0] Finished <<< rosjava ROS_NOBUILD in package rosjava
[rosmake-0] Starting >>> rosjava_tutorial_pubsub [ make ]
[ rosmake ] [ rosjava_tutorial_pubsub: 0.0 sec ] [ 1 Active 27/28 Complete ]
{-------------------------------------------------------------------------------
rosrun rosjava_bootstrap generate_properties.py rosjava_tutorial_pubsub > ros.properties
/bin/sh: cannot create ros.properties: Permission denied
-------------------------------------------------------------------------------}
[ rosmake ] Output from build of package rosjava_tutorial_pubsub written to:
[ rosmake ] /home/kevinaboos/.ros/rosmake/rosmake_output-20111115-023614/rosjava_tutorial_pubsub/build_output.log
[rosmake-0] Finished <<< rosjava_tutorial_pubsub [FAIL] [ 0.07 seconds ]
[ rosmake ] Halting due to failure in package rosjava_tutorial_pubsub.
[ rosmake ] Waiting for other threads to complete.
[ rosmake ] Results:
[ rosmake ] Built 28 packages with 1 failures.
[ rosmake ] Summary output to directory
[ rosmake ] /home/kevinaboos/.ros/rosmake/rosmake_output-20111115-023614
Originally posted by kevinaboos on ROS Answers with karma: 1 on 2011-11-14
Post score: 0
Original comments
Comment by tingfan on 2011-11-15:
You installed the tutorial in /opt/ros which require root permission to access. You could probably copy the folder to somewhere in your home directory and then add the directory into ROS_PACKAGE_PATH. Alternatively, you could run rosmake as root. PS. the rosjava package may miss a ROS_NOBUILD file.
Answer:
Copying from the comment:
You installed the tutorial in /opt/ros which require root permission to access. You could probably copy the folder to somewhere in your home directory and then add the directory into ROS_PACKAGE_PATH. Alternatively, you could run rosmake as root. PS. the rosjava package may miss a ROS_NOBUILD file.
Originally posted by damonkohler with karma: 3838 on 2012-01-27
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 7295,
"tags": "rosmake, rosjava"
} |
why is it just one kinect per USB host? | Question:
By definition, a USB 2.0 host offers a max bandwidth of 36 to 40 MB/s.
A Kinect (source: openkinect):
There are 242 packets for one frame for the depth camera together with the header packets. All data packets are of 1760 bytes size. That results in 12672000 bytes/sec for 30 frames per second.
The RGB camera needs 162 packets for one frame -> 9216000 bytes/sec for 30fps.
That sums up to about 21 MB/s for the RGB and IR streams together.
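The quoted byte rates can be cross-checked from frame geometry alone. A quick sketch (the bit depths are my assumption, implied by the numbers: an 11-bit depth image and an 8-bit Bayer RGB image, both 640x480 at 30 fps):

```python
# Cross-check of the quoted Kinect byte rates from frame geometry.
# Assumed formats: 11-bit depth and 8-bit Bayer RGB, 640x480 @ 30 fps.

def stream_bandwidth(width, height, bits_per_pixel, fps):
    """Payload bytes per second for one uncompressed video stream."""
    bytes_per_frame = width * height * bits_per_pixel // 8
    return bytes_per_frame * fps

depth = stream_bandwidth(640, 480, 11, 30)
rgb = stream_bandwidth(640, 480, 8, 30)

print(depth)                 # 12672000 -- the quoted depth figure
print(rgb)                   # 9216000  -- the quoted RGB figure
print((depth + rgb) / 1e6)   # ~21.9 MB/s of a ~35-40 MB/s practical bus
```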
Am I missing something, or why is it not possible to use another device besides the Kinect (at 640x480, and not even at 320x240)?
I hope someone can show me what I'm missing.
Originally posted by dinamex on ROS Answers with karma: 447 on 2013-04-30
Post score: 0
Original comments
Comment by davinci on 2013-04-30:
There is probably also some overhead. But isn't it advised to use one Kinect per host? Perhaps it is possible but not recommended.
Answer:
I don't know if I understood your question correctly. To my knowledge, it is possible to use a Kinect and (one or more) additional device such as mouse, keyboard etc. on the same USB host; at least I've never heard otherwise. If a Kinect is the only connected device, though, you can be sure that no other device can interfere in any way. Which is probably why many people use it this way.
However, it is not possible to use more than one Kinect at the same USB host due to the required bandwidth (even when lowering the resolution as the bandwidth is negotiated when the device is plugged in). For technical details, see also this answer.
Originally posted by Philip with karma: 990 on 2013-04-30
This answer was ACCEPTED on the original site
Post score: 3 | {
"domain": "robotics.stackexchange",
"id": 14007,
"tags": "ros, kinect, usb, rgbd, bandwidth"
} |
What is the chemical reaction in a home-made high-bouncer ball? | Question: I took the kids to the Science Museum in Canberra (Questacon) and one of the toys we brought back was a Home-Made High Bouncing Ball.
Now I did it with the kids and it was amazing. You pour the powder in the mould, hold it underwater for a minute. Dry it in the mould for two minutes and open the mould and let it dry.
The end result was like the high-bouncer balls we used to play with as kids. (Lots of fun).
My question is: What is the chemical reaction in a home-made high-bouncer ball? (I'd also love a picture I can show my kids - but that is not essential).
Answer: I googled it and found a link that explains the process but doesn't show the organic reactions (perhaps because they are complex):
This activity demonstrates an interesting chemical reaction, primarily between the borax and the glue. The borax acts as a “cross-linker” to the polymer molecules in the glue – basically it creates chains of molecules that stay together when you pick them up. The cornstarch helps to bind the molecules together so that they hold their shape better. - See more at: Sciencebob | {
"domain": "chemistry.stackexchange",
"id": 1949,
"tags": "reaction-mechanism, home-experiment"
} |
Does a wave packet have finite size? | Question: Does the “amplitude” of a particle's wave packet decrease to 0 at some finite distance?
Answer: I think a general statement for a wave packet cannot be made. There are wave packets, like those representing a particle in an infinitely deep quantum well, that go to zero outside the barrier, while in other cases the wave packet never becomes zero: for a free particle the wave packet is basically a wave of uniform amplitude, since the probability of finding the particle is uniform over all of space.
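A concrete pair of textbook examples (standard forms, not from the original answer) illustrates the two extremes:

```latex
% Free Gaussian packet: strictly positive amplitude for every x
\psi(x) \propto \exp\!\left(-\frac{x^2}{4\sigma^2}\right), \qquad |\psi(x)|^2 > 0 \quad \forall x
% Infinite square well of width L: identically zero outside the box
\psi_n(x) = \sqrt{\tfrac{2}{L}}\,\sin\!\left(\tfrac{n\pi x}{L}\right) \ \text{for } 0 \le x \le L,
\qquad \psi_n(x) = 0 \ \text{otherwise}
% As \sigma \to 0 the Gaussian tends to a Dirac delta localised at a point
```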
In essence, the more localised the particle is, the more its wave packet resembles a Dirac delta, with the amplitude approaching zero everywhere except at a single point. | {
"domain": "physics.stackexchange",
"id": 70810,
"tags": "quantum-mechanics, wavefunction"
} |
Mechanism for chloromethylation of benzene with formaldehyde and HCl | Question:
What is the mechanism of the above reaction? I have thought of one possibility:
Would this work? How exactly is the chlorine installed on the alkyl chain?
Answer: This reaction is chloromethylation, similar to Blanc chloromethylation, but using $\ce{AlCl3}$ as co-catalyst instead of $\ce{ZnCl2}$. These reactions belong to a group related to Friedel-Crafts reactions, but are characterized by the use of protonation instead of coordination with molecular Lewis acids.
The mechanism of your reaction (not accounting for the influence of the co-catalyst) can be written as:
(source)
$\ce{AlCl3}$ coordinates to $\ce{HCl}$, creating an adduct ($\ce{HAlCl4}$) which is a much stronger acid. | {
"domain": "chemistry.stackexchange",
"id": 4351,
"tags": "organic-chemistry, reaction-mechanism, aromatic-compounds"
} |
Custom model slam mapping issues | Question:
I've had problems creating a well formed map of a city simulation for weeks. Initially, I was using gmapping and I later switched to hector mapping. In both cases the resulting maps were misaligned as shown in the images below:
GMapping:
Hector Mapping:
The highlighted areas of both maps indicate areas that are misaligned during the mapping process.
Because both cases result in similar issues, I was wondering if the problem might be with my xacro file ?
Is this something that can be resolved with costmap configuration files ( which I currently don't have ) OR is a basic but well formed URDF sufficient to correctly map an environment ?
Originally posted by sisko on ROS Answers with karma: 247 on 2021-04-01
Post score: 0
Answer:
Your environment appears to be composed of long, straight corridors, which are difficult for SLAM. Your maps look fairly rectilinear (I assume that's reflective of the environment), so your problem may be that the map "slips" when your robot loses sight of reliable features. Can your lidars always see things other than the parallel walls beside it? Odometry helps, but I assume you're already using that. Some things to try:
Extend the range of your lidars to always be able to see the end of a corridor in any direction.
Add small objects along the sides to provide extra features in your lidar data for SLAM to reference.
Originally posted by tryan with karma: 1421 on 2021-04-02
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by sisko on 2021-04-03:
Thanks @Tryan.
I did not realise SLAM does not work well in an environment of long straight corridors. In fact, I had assumed the opposite. Those lines are actually the kerbs of the sidewalks of a simulated city, as detected by the laser(s) or lidar on my robot model.
As the environment is a city, the lidar does detect buildings and other infrastructure.
So, I wonder, is it possible to combine multiple sensor data to create better slam data ?
My thinking is perhaps combining odometry, laser and lidar would provide better data for slam.
Comment by tryan on 2021-04-03:
Yes, the main problem with such an environment arises when using only a short-range lidar. In a hallway, the scans from the lidar at different points will look very similar (two parallel walls), making it appear as though the robot has not moved.
As you suggest, if you are able to collect more data from the environment (longer range, more sensors, etc.), the SLAM algorithm will be better able to determine how far the robot has moved. In general, the more data, the better. You do have to make sure it's quality data, though. If your odometry is way off for example, it could hurt more than help.
Another option is to make the scans look different by adding objects to the environment, so the SLAM algorithm has more discernible features to use.
Comment by sisko on 2021-04-03:
I'm going to try finding and adding some kind of unique patterns to the kerbs. I'm thinking QR codes to be read by the camera on my model. But how can I integrate such camera data, or lidar etc., into the SLAM algorithm? This is the part I can't figure out yet. Gmapping needs a minimum of laser data. How can I introduce extra data from other sensors?
Comment by tryan on 2021-04-04:
Before adding more complicated sensors, I recommend you confirm what the problem is. If you increase the range of your current lidar(s) or add some small objects along the road and the mapping works, then you'll know that you need more feature-rich data to be successful. If it doesn't help, then you may still have a different problem with your current setup.
I'm not sure about using landmarks in gmapping; I've only used it with odometry and a lidar. A different package may be easier to integrate, but that's beyond the scope of this question.
Comment by sisko on 2021-04-08:
Hi @Tryan.
Thank you for your input. I followed your advice and placed objects all over the streets of the virtual city world, and I found that using Hector mapping the robot model did not lose orientation so much. I managed to complete a far better map than in all my previous attempts. It still lost orientation at some point, but I had saved the map when it was almost complete. I then manually edited the saved map to remove the objects etc.
However when I did the same with gmapping, the map was far inferior.
Comment by tryan on 2021-04-08:
It's good to hear there was improvement! Would you mind adding a picture of your newer map for reference? | {
"domain": "robotics.stackexchange",
"id": 36269,
"tags": "ros-melodic"
} |
Does QFT prevent preparation of an entangled particle pair as in EPR experiment? | Question: This is the claim Tommasini makes in Reality, Measurement and Locality in Quantum Field Theory: "Two spin $1/2$ particles, A and B, are created in coincidence in a spin-singlet state, and are detected by the detectors $O_A$ and $O_B$ in opposite directions... The EPR argument, as described above, and (as far as I know) all the subsequent treatments of the EPR paradox, have assumed that it was actually possible to prepare a system of two entangled particles. However, I have recently proved that this assumption is not correct... In fact, the Standard Model of Particle Physics predicts that it is not possible to produce a state having a definite particle content: given the process that produces A and B alone, QFT theory predicts a nonvanishing and finite probability for the creation of A and B plus additional photons".
She goes on to say that "the EPR+Bell proof of nonlocality is removed" because "for the EPR argument it is crucial that the measurement on A implies a certain prediction for B without disturbing B". But spurious photons potentially produced along with A and B make any prediction uncertain.
Is it true that in QFT one cannot prepare states with a prescribed number of particles? Does it follow that the above analysis of EPR is correct? QFT is manifestly relativistic, so it makes sense that quantum non-locality is "removed", and Tommasini reproduces the usual QM correlations for EPR using a Feynman-integral QFT calculation, so it seems consistent. But this diverges sharply from the usual explanation of EPR.
EDIT: In a companion paper there are some details on computations and agreement with experiments:"the case of the EPR experiments that have been performed up to now the QED prediction for the correlations is very close to that obtained in Quantum Mechanics by ignoring the soft photons, so that it can still agree with the data within the experimental errors. However, even a very small probability for soft photons creation is sufficient to forbid any certain prediction for the measurement on B as a consequence of the measurement on A".
Apparently, soft photons do exist (her source is Weinberg's text), and they do affect QED predictions:"Even though soft photons are not detected, the possibility of their emission must be taken into account in the calculation of the scattering amplitude". Entanglement and the infrared structure of QED discusses QED violations of Bell inequalities:"We might consider that they started with the studies of the effect of the QED spin-spin interactions on the entanglement and the violation of Bell Inequalities due to QED... The objective of this work is... to characterize the effect of soft photons on the entanglement of two charged qubits..."
So I guess the answer to the first question is affirmative. I am still not clear though why small QED corrections to QM correlations entirely "remove" non-locality.
Answer: As is often the case with these sorts of papers, it is sometimes difficult for me to tell if the author is making a trivial statement, or a sophisticated one that I don't understand. So any other viewpoints on the content are welcome. With that disclaimer, here is my understanding:
The author appears to put an extreme emphasis on the following words in the original EPR paper:
If, without in any way disturbing a system we can predict with certainty (i.e. with probability equal to unity) the value of a physical quantity, then there is an element of physical reality corresponding to this physical quantity
Then he (the author is male) goes on to point out something that I can't independently confirm but seems reasonable: one cannot make a perfect preparation of the original two-particle entangled state in QED. There will always be some admixture of other states. So you will never truly predict the outcome of this (or any other!) experiment with unity, and therefore the prerequisite for EPR as stated above is never satisfied. Of course, our uncertainty may get arbitrarily low, but perhaps any uncertainty in principle is enough to ruin this "element of physical reality."
Okay, but how important is this? From my perspective, the answer is "not very," for the following reason:
From my viewpoint, the original EPR paper, while hugely important, is really only of historical significance now. The most important thing it did was to partly inspire Bell's work. Now, the violation of Bell inequalities, and the resulting implications for hidden variable theories of quantum mechanics, do not require any such perfect preparation and as a result are completely unaffected by this issue. Tommasini says this himself. Since this is what our understanding of quantum mechanics is based on, I don't see any reason that this claim should change anything about how we think about the nature of QM.
It is usually thought that the violation of Bell's inequality was a unambiguous refutation of the claims made in the EPR paper. Tommasini says this is not true, and that because of this imperfect preparation both papers are addressing slightly different situations. This is a historical question that might interest some people. But, from my perspective, what Bell experiments say about an 80-year-old paper that may or may not be asking a completely well-posed question is less interesting than what they say quite unambiguously about nature itself.
Finally, the author worries that we might be missing something about the viability of quantum teleportation or quantum computing because of this issue. Quantum teleportation had already been first achieved some years before this was published, and has continued to be used in more and more elaborate ways. Quantum computing is also growing in sophistication, without any disagreement with theories that neglect soft photon emission as far as I know. So, while maybe valid when this paper was written, I would say this worry is basically unfounded by now. | {
"domain": "physics.stackexchange",
"id": 23547,
"tags": "quantum-mechanics, quantum-field-theory, quantum-interpretations"
} |
Calculating the current of an infinitely long plane? | Question: I stumbled across this question and it doesn't seem to have any kind of solution...or am I just not understanding this correctly?
Question
There is a sheet that is infinitely long in length but has a width, $w$. It lies across the $xy$ plane and carries a current density $J = J_0$ Amp/m. What is the current flowing in the sheet?
Does this question make any sense? If it is infinitely long and the current depends on how long the sheet is, how would it be possible to calculate the area of an infinitely long sheet to find the current flowing through it?
Answer: The current depends on the width of the sheet.
Think of an infinitely long wire with a current I in it. The current does not depend on the length of the wire. Current is a measure of electrons per second flowing past a point.
For a sheet, think of a roll of conductive paper towels. The goal is to find how many electrons per second are flowing past the line between two towels.
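The whole calculation reduces to one multiplication, I = J0 · w; a trivial sketch using the paper-towel numbers as an illustration:

```python
def sheet_current(j0_amps_per_m, width_m):
    """Total current in an infinite sheet: only the width matters, I = J0 * w."""
    return j0_amps_per_m * width_m

# 1 A per 1 cm strip is a linear current density of 100 A/m;
# a 20 cm (0.20 m) wide sheet then carries 20 A in total.
total = sheet_current(100.0, 0.20)
print(total)  # 20.0
```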
If the current in a strip 1 cm wide is 1 Amp, and the paper towels are 20 cm wide, the total current is 20 Amp. | {
"domain": "physics.stackexchange",
"id": 39306,
"tags": "electrostatics, electric-fields, electric-current, gauss-law"
} |
Why do we determine the values of λ in regularization as ln λ, such as ln λ=-18 instead of for example λ=0.3? | Question: I'm studying Pattern Recognition and Machine Learning by Christopher Bishop. What I realized is, he defines values of λ as ln λ. For example:
We see that, for a value of ln λ = −18, the over-fitting has been suppressed and we now obtain a much closer representation of the underlying function sin(2πx). If, however, we use too large a value for λ then we again obtain a poor fit, as shown in Figure 1.7 for ln λ = 0
What is the reason for this? Why he doesn't just use λ?
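One point worth making concrete before the answer (an illustrative sketch, not from Bishop's text): values of λ worth comparing span many orders of magnitude, so an evenly spaced grid in ln λ covers them all, whereas an even grid in λ itself wastes almost every point near the top of the range:

```python
import numpy as np

# Evenly spaced in ln(lambda): each step multiplies lambda by a constant
# factor, so a handful of points covers ~8 orders of magnitude.
ln_grid = np.linspace(-18, 0, 7)   # the ln-lambda range Bishop plots
log_lambdas = np.exp(ln_grid)

# Evenly spaced in lambda itself: the second point is already ~0.17, so
# tiny-but-important values like e^-18 are effectively never sampled.
lin_lambdas = np.linspace(np.exp(-18), 1.0, 7)

print(log_lambdas[0], log_lambdas[-1])   # ~1.5e-08 up to 1.0
print(lin_lambdas[1])                    # ~0.17
```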
Answer: I can only suppose that it's because regularization is often considered together with a log-likelihood function, for instance, which makes the log representation more convenient. Working with logs also makes computations easier when minimizing such functions (ln a + ln b = ln(a·b), (ln a)' = 1/a, etc.). | {
"domain": "datascience.stackexchange",
"id": 5816,
"tags": "regularization"
} |
Does the Lorentz force applied to a current carring wire by a magnetic field act in the negative or positive direction of the right hand rule? | Question: If I say that I am calculating the Lorentz force $F$ applied to a wire carrying a current $i$ at a point $P$ in a magnetic field $B$, would the actual force be opposite of that given by the right hand rule since electrons are actually flowing rather than the positive charge suggested by conventional current (i.e. positive charge doesn't actually flow through a wire)?
Answer: The force on each electron is given by the Lorentz force law, which is
$$\mathbf{F}=q(\mathbf{v} \times \mathbf{B})$$
Now, if we try to apply this to electrons moving in a wire, notice that the direction of electron flow is opposite to the direction of (conventional) current flow.
So,
\begin{align} \mathbf{F} &= -e((-\mathbf{v}) \times \mathbf{B})\\ &= e(\mathbf{v} \times \mathbf{B}) \end{align}
which is just the same as for positive charges moving in the direction of the conventional current. | {
"domain": "physics.stackexchange",
"id": 11057,
"tags": "electromagnetism, forces, magnetic-fields"
} |
What is the difference between Sentiment Analysis and Emotion Recognition? | Question: I found Sentiment Analysis and Emotion Recognition as two different categories on paperswithcode.com. Should both be the same as my understanding? If not what's the difference?
Answer: Sentiment in this context refers to evaluations, typically positive/negative/neutral. Sentiment Analysis can be applied to product reviews, to identify if the reviewer liked the product or not. This has (in principle) got nothing to do with emotions as such.
Emotion recognition would typically work on conversational data (eg from conversations with chatbots), and it would attempt to recognise the emotional state of the user -- angry/happy/sad...
Of course the two can overlap: if the user is happy, they will typically express positive sentiments about something.
Also: emotion recognition goes beyond text (eg facial expressions), whereas sentiment analysis mostly works with textual data only. | {
"domain": "ai.stackexchange",
"id": 1896,
"tags": "definitions, comparison, emotional-intelligence, sentiment-analysis"
} |
Why do we go blind for a few seconds after switching off the light? | Question: At night, when I switch off the lights, I always seem to go blind for a while. The room becomes pitch black and I am unable to see anything. After a while, however, my vision slowly recovers and I start to see things around me again. I always have to wait a while before my vision returns to that functional state.
I am interested in knowing the mechanism behind this phenomenon. What do we call it?
Answer: Short answer
The eyes need to adapt to the low lighting condition after you switch off the lights, a process called dark adaptation.
Background
The process behind the reduced visual function when going from bright ambient light to low-lighting conditions is caused by a process called dark adaptation. The visual system works on a huge intensity scale. The only way to do that is by adapting to ambient lighting intensity.
The sensitivity of our eye can be measured by determining the absolute intensity threshold, i.e., the minimum luminance of a stimulus to produce a visual sensation. This can be measured by placing a subject in a dark room, and increasing the luminance of the test spot until the subject reports its presence.
Dark adaptation refers to how the eye recovers its sensitivity in the dark following exposure to bright light. The sensitivity of the visual system increases approximately 35 times after dark adaptation.
Dark adaptation forms the basis of the Duplicity Theory which states that above a certain luminance level (about 0.03 cd/m2), the cone mechanism is involved in mediating vision, called photopic vision. Below this level, the rod mechanism comes into play providing scotopic (night) vision. The range where two mechanisms are working together is called the mesopic range, as there is not an abrupt transition between the two mechanism.
The dark adaptation curve shown below (Fig. 1) depicts this duplex nature of our visual system. The sensitivity of the rod pathway improves considerably after 5-10 minutes in the dark. Because the rod system is still inactive just after you switch off the light, you are unable to perceive much. The rods are inactive because they are said to be photo-bleached. Photo bleaching means the visual pigments in the rods and cones have become used up because of the high light intensities while the light was still on. The pigment needs to be regenerated, and that takes time.
Fig. 1. Dark adaptation curves of rods and cones. Source: Webvision
Reference
- Kolb et al (eds). Webvision. The organization of the retina and the visual system (2012) | {
"domain": "biology.stackexchange",
"id": 5334,
"tags": "vision, eyes, neurophysiology"
} |
Why do F- bacteria still exist? | Question: When an F+ bacteria conjugates with an F-, it makes the other bactaria F+ too. So on the long run, all bactaria should be F+.
Is there any mechanism that converts F+ back to F-? Could it be degradation of the F plasmid?
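The dynamics the question describes can be put into a toy model (purely illustrative rates and a deliberately simple update rule): conjugation converts F- cells to F+, while plasmid loss at division and a small metabolic cost of carrying the plasmid push the other way:

```python
def step(f_plus, p_loss=0.01, p_conj=0.005, cost=0.02):
    """One generation of a well-mixed population; returns the new F+ fraction."""
    converted = (1 - f_plus) * p_conj * f_plus  # F- converted by conjugation
    lost = f_plus * p_loss                      # plasmid not passed to daughter
    f = f_plus + converted - lost
    # Differential growth: F+ cells pay a small cost to replicate the plasmid.
    return f * (1 - cost) / (f * (1 - cost) + (1 - f))

f = 0.99  # start with almost everyone F+
for _ in range(1000):
    f = step(f)
print(f < 0.01)  # True: with these rates, plasmid-free cells take over
```

With these (made-up) rates the loss and cost terms beat conjugation, so the F+ fraction collapses; a high enough conjugation rate would tip the balance the other way.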
Answer: So an easy way to convert a cell from F+ to F- is to divide it without correctly replicating the plasmid and transferring it to a daughter cell, leaving you with one F+ and one F-. The cells that are F- now have a selective pressure advantage over the F+ cells, as they now don't require the energy to produce a plasmid when they divide. If that advantage is enough to overcome the frequency of conjugation, you'll end up having a population of F- cells. | {
"domain": "biology.stackexchange",
"id": 7023,
"tags": "bacteriology, plasmids"
} |
ML in R (caret-package) missing hyperparameters | Question: I have a pretty specific question regarding the caret package; however, I still hope to find help here.
I recently worked with the caret package and trained a multilayer perceptron with method = 'mlp'. I looked up the GitHub page of Max Kuhn (developer of caret), and it says that you only need to tune one hyperparameter: the size (number of neurons in the hidden layer). Which is really convenient.
However, it further states that for the training caret builds on the RSNNS package (by Bergmeier). The mlp model implemented in this RSNNS package has additional tunable parameters beyond just the size hyperparameter (i.e. learnFunc, hiddenActFunc, Std_Backpropagation, maxit).
So I asked myself: what values does caret use for those parameters? Default values, or are they optimized as well?
Answer: It appears that the defaults are used, except for lin, which is inferred from the type of the target variable: [source code]
Note too that you can set any of the other RSNNS parameters through the dots. | {
"domain": "datascience.stackexchange",
"id": 6532,
"tags": "neural-network, r, hyperparameter, hyperparameter-tuning"
} |
Stimulated emission: how can giving energy to electrons make them decay to a lower state? | Question: I have been reading about how lasers function: A photon is used for stimulated emission of electrons from the metastable state to a lower energy state.
What I don't understand is: How can "giving" energy (in the form of photons) to electrons stimulate them to come to a lower energy state? After stimulated emission, the old photon exists along with the new one and with the same energy as earlier, so how did it actually stimulate the electrons to fall?
I know that the same question has been asked before, but the answer was overly simplified. I am looking for a detailed answer.
Answer: Edit: I've edited this answer to add more intuitive explanations, see the end. The electrons don't receive energy from the photons; it's just that the initial presence of $N$ photons makes the probability of the electron emitting another photon more likely. "Dipoles" and "population inversion" are actually irrelevant.
Peter Shor's answer is a nice intuitive sketch, but here's the mathematical presentation he/OP requested.
Quick run-through of quantum electrodynamics, then it will be clear: recall that the interaction between charged fields and the photon is given by
\begin{equation}
\mathscr{V}_{int}=e\int (\hat{j}\hat{A}) d^3x
\end{equation}
We can decompose the free electromagnetic field into a sum of photon creation annihilation operators
\begin{equation}
\hat{A}=\sum_{n}\left(\hat{c}_nA_n(x)+\hat{c}^\dagger_nA^*_n(x)\right)
\end{equation}
As we know from the harmonic oscillator, each operator has matrix elements only for an increase or decrease of the corresponding occupation number $N_n$ (the number of photons of type $n$; by type we mean of a given frequency/wavevector, since we count the number of photons of different frequencies separately) which differ by one. That is, only processes of the emission or absorption of a single photon occur in the first approximation of perturbation theory. (Though again, in analogy with the harmonic oscillator, we know that at the $m$th order in perturbation theory, $m$-photon processes are possible, i.e. matrix elements connecting $N_n$ and $N_n\pm m$.)
\begin{equation}
\langle N_n|c^\dagger_n| N_n-1\rangle=\langle N_n-1|c_n|N_n\rangle=\sqrt{N_n}
\end{equation}
(The convention is that $c_n$ are the usual "$a_n$", but with a factor of $\sqrt{2\pi/\omega}$ absorbed into them).
Investigating the probability of an absorption/emission process requires perturbation theory. Let us assume for simplicity that the initial and final states of the emitting/absorbing system belong to the discrete spectrum. Then the probability rate is given by the Fermi golden rule
\begin{equation}
dw=2\pi |\mathscr{V}_{fi} |^2 \delta\left(E_i-E_f-\omega\right) d\nu
\end{equation}
We have adopted the normalisation of the photon wavefunction so that there is one photon per volume V, and the photon wavefunction is normalised by integrating over $d\nu$. The bottom line here is that the probability rate is proportional to the square of the matrix element of $\mathscr{V}$ between the initial and final state.
Okay so here's the punchline: if the initial state of the field already has a non zero number $N_n$ of the photons in question, the matrix element for the transition is multiplied by
\begin{align}
\langle N_n+1|c^\dagger_n|N_n\rangle=\sqrt{N_n+1}
\end{align}
ie the transition probability, which involves the square of the matrix element, gets multiplied by $N_n+1$. The 1 in this factor corresponds to the $\textbf{spontaneous emission}$ which occurs even if $N_n=0$. The term $N_n$ represents the $\textbf{stimulated or induced emission}$: the presence of photons in the initial state of the field stimulates the further emission of photons of the same kind. The hand waving explanation is exactly that photons are bosons, see Peter Shor's answer. This is also the same "$N+1$" phenomenon cited in a newer answer, which involves the example of a molecular toy Hamiltonian.
Incidentally, we can obtain the Einstein relations from here with minimal effort: the matrix element for the opposite change of state will be proportional to
\begin{align}
\langle N_n-1|c_n| N_n\rangle=\sqrt{N_n}
\end{align}
and so the emission and absorption probabilities for a given pair of states are related by
\begin{equation}
w_e/w_a=(N_n+1)/N_n
\end{equation}
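As a sanity check (not part of the original answer), the $\sqrt{N_n+1}$ and $\sqrt{N_n}$ matrix elements, and hence the $(N_n+1)/N_n$ ratio above, can be verified numerically by representing $c_n$ in a truncated Fock basis:

```python
import numpy as np

# Truncated Fock basis |0>, |1>, ..., |D-1>; a is the annihilation operator.
D = 8
a = np.diag(np.sqrt(np.arange(1, D)), k=1)   # a|n>  = sqrt(n)   |n-1>
adag = a.conj().T                            # a†|n> = sqrt(n+1) |n+1>

N = 3
ket_N = np.zeros(D); ket_N[N] = 1.0
bra_Np1 = np.zeros(D); bra_Np1[N + 1] = 1.0
bra_Nm1 = np.zeros(D); bra_Nm1[N - 1] = 1.0

emission = bra_Np1 @ adag @ ket_N    # <N+1| c† |N> = sqrt(N+1)
absorption = bra_Nm1 @ a @ ket_N     # <N-1| c  |N> = sqrt(N)

print(emission**2)                   # N + 1  (up to float rounding)
print(absorption**2)                 # N
print(emission**2 / absorption**2)   # w_e / w_a = (N+1)/N
```

The squared matrix elements come out as $N_n+1$ and $N_n$, reproducing the emission/absorption ratio.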
$\textbf{Edit:}$ $\textit{Some further questions elaborated.}$
As was stated in Peter Shor's answer, one way of thinking about this is that the factor of $(N_n+1)$ appearing in the probability rate is due to the fact that photons are bosons, and "like to group together" to go see Star Wars movies. Photons of a certain frequency in the initial state encourage there to be another photon of that frequency in the final state, and the electron obliges by emitting this photon. There's an important point here too: the photons of type $n$, ie frequency $\omega_n$, in the initial state encourage there to be more photons of the same type $n$, frequency $\omega_n$, in the final state. So the photon the electron spits out by stimulated emission is $\textit{in phase}$ with the original photons - ie of the same type. All this is simply a consequence of the algebra of bosonic creation/annihilation operators. It's not the case that energy has been "given to" the electrons in any way: clearly, it is the electron that has given up energy to the photon bunch, because it has emitted a photon. What happened is that the probability rate of the electron doing that has been increased.
Steven Sagona asks: $\textit{"why do atoms have such a Hamiltonian"?}$ The $j\cdot A$ Hamiltonian is the Hamiltonian of electromagnetism. All interactions between photons and matter are described by this Hamiltonian, as this is the only Hamiltonian allowed by gauge invariance and Lorentz invariance.
Another question is asking for the role of dipole moments and population inversion. Neither of these are actually necessary to understand the notion of stimulated emission, which is simply our factor of $N_n$, as explained. For completeness we'll give a quick explanation of the role of those terms in laser physics.
The way a laser works is essentially: you put energy into the system - "pumping" - and thereby drive the atoms into excited states. Population inversion is simply the situation when you have more atoms in excited states than in the ground state. Then you expose your excited atoms to photons, and the electrons are stimulated to drop back down to the ground state and spit out photons that are in-phase ("of the same type") as the incident photons, for the reasons explained above. Then those stimulatedly-emitted photons fly around bumping into more electrons, and cause them to undergo stimulated emission, and so on in a snowballing effect of more and more in-phase photons, until you gradually run out of your excited electrons. This gives you a whole bunch of coherent photons. Again, no dipoles necessary here.
If we wanted to calculate the emission rates more exactly, we'd have to calculate $\mathscr{V}_{fi}$. When the wavelength of the photon is large compared to the size of the atom, the dominant contribution to this matrix element is from dipole radiation. There are selection rules that determine whether an initial and final state can be connected by a dipole transition (https://en.wikipedia.org/wiki/Selection_rule). We can calculate $\mathscr{V}_{fi}$ more precisely by expanding our expression for $j\cdot A$ in a multipole expansion. I could step through all these details mathematically but it would be overkill - the basic point is that the symmetries of the states the electron is jumping between determine whether that process is allowed or not. Practically, for the snowball process explained above to work, you want the electrons to stay in their excited states for a long time (ie you want them to be metastable) so that the photons get a chance to reach them and snowball off them. The origin of metastable states is usually that spontaneously jumping from the metastable state to the ground state is forbidden by a selection rule (https://en.wikipedia.org/wiki/Metastability#Atomic_and_molecular_physics), so falling out of the metastable state is unlikely. This means the probability of the electron spontaneously returning to the ground state is low, but the probability of it returning to the ground state via stimulated emission can be high due to that large factor of $N_n$ compensating. This is good: spontaneous emission spits out random out-of-phase photons, but we want stimulated emission so that we can have in-phase photons (that's the point of a laser). So selection rules allow us to choose good metastable states, and that's what allows us to make the most of those excited atoms and get as many stimulated emission events out of them before they all de-excite.
But this is a system dependent detail, and plays no essential role in the phenomenon of stimulated emission per se - it's a practical necessity needed to ensure the electrons in a laser stay excited long enough to undergo stimulated emission. | {
"domain": "physics.stackexchange",
"id": 94907,
"tags": "quantum-mechanics, laser"
} |
roslaunch gazebo_ros mars_world.launch cannot find mars.world | Question:
Hi,
I have a question regarding the directory structure expected by gazebo_ros when launching it using roslaunch.
I am using ROS Groovy, with Gazebo 1.9 and gazebo_ros_pkgs built from source.
I have a catkin workspace with gazebo_ros, where I have copied the following launch file (into ~/gazebo_src/catkin_ws/src/gazebo_ros_pkgs/gazebo_ros/launch).
<launch>
<!-- We resume the logic in empty_world.launch, changing only the name of the world to be launched -->
<include file="$(find gazebo_ros)/launch/empty_world.launch">
<arg name="world_name" value="mars.world"/> <!-- Note: the world_name is with respect to GAZEBO_RESOURCE_PATH environmental variable -->
<arg name="paused" value="false"/>
<arg name="use_sim_time" value="true"/>
<arg name="gui" value="true"/>
<arg name="debug" value="false"/>
</include>
</launch>
I need to keep the mars.world in a separate folder (internal/project repository) - ~/faster_dev/branches/yn/gazebo/plugin - which I have added to GAZEBO_RESOURCE_PATH. However when I try to launch gazebo with the command "roslaunch gazebo_ros mars_world.launch" I get the error that mars.world cannot be found.
I know that the mars.world works when I launch it with gazebo ("gazebo mars.world").
My environment variables are:
env | grep GAZEBO
GAZEBO_MODEL_PATH=:/home/yn/faster_dev/branches/yn/gazebo/models:
GAZEBO_RESOURCE_PATH=/home/yn/local/share/gazebo-1.9:/home/yn/local/share/gazebo_models:/home/yn/faster_dev/branches/yn/gazebo/plugin:/home/yn/faster_dev/branches/yn/gazebo/models
GAZEBO_MASTER_URI=http://localhost:11345
GAZEBO_PLUGIN_PATH=/home/yn/local/lib/gazebo-1.9/plugins:/home/yn/faster_dev/branches/yn/gazebo/plugin/build:/home/yn/faster_dev/trunk/gazebo/plugin/build
GAZEBO_MODEL_DATABASE_URI=http://gazebosim.org/models
The relevant folders are
gazebo_ros: ~/gazebo_source/catkin_ws/src/gazebo_ros_pkgs/gazebo_ros
models: ~/faster_dev/branches/yn/gazebo/models (contains a database.config and a number of models in subfolders)
world file: ~/faster_dev/branches/yn/gazebo/plugin
launch file: ~/gazebo_source/catkin_ws/src/gazebo_ros_pkgs/gazebo_ros/launch
Does anyone have any ideas why this could be giving me an error even though the world file path is correct (relative to GAZEBO_RESOURCE_PATH)?
I guess I could include "~/faster_dev/branches/yn/gazebo/plugin" in my global path, but I'd rather avoid this if possible.
Thanks!
Yasho
Originally posted by ynevatia on Gazebo Answers with karma: 41 on 2013-07-26
Post score: 0
Answer:
I finally figured out the problem: it lies with the scripts in gazebo_ros/scripts. These call the setup.sh from Gazebo, which was overwriting the environment variables I had set in my .bashrc with default values.
By commenting out the relevant lines in gazebo_ros/scripts/gazebo and building it again (for safety), roslaunch gazebo_ros mars_world.launch works.
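For intuition, here is an illustrative Python sketch (not Gazebo's actual lookup code) of why a relative world name stops resolving once setup.sh overwrites GAZEBO_RESOURCE_PATH; the directory created here is a stand-in for ~/faster_dev/branches/yn/gazebo/plugin:

```python
import os
import tempfile

def find_world(world_name, resource_path):
    """Roughly mimic how a relative world file name is resolved against
    GAZEBO_RESOURCE_PATH: return the first match on the colon-separated path."""
    for d in resource_path.split(":"):
        candidate = os.path.join(d, world_name)
        if d and os.path.isfile(candidate):
            return candidate
    return None  # roslaunch would then report that mars.world cannot be found

# A throwaway directory standing in for the custom world-file folder:
plugin_dir = tempfile.mkdtemp()
open(os.path.join(plugin_dir, "mars.world"), "w").close()

# With the custom directory on the path, the world is found:
with_custom_path = find_world("mars.world", "/usr/share/gazebo:" + plugin_dir)

# After setup.sh resets the variable to defaults, the lookup fails:
after_setup_sh = find_world("mars.world", "/usr/share/gazebo")

print(with_custom_path is not None)  # the world resolves
print(after_setup_sh)                # None: the custom directory is gone
```

This matches the observed symptom: "gazebo mars.world" worked in a shell where .bashrc had set the variable, while the roslaunch wrapper scripts re-sourced setup.sh and lost it.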
Originally posted by ynevatia with karma: 41 on 2013-07-30
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 3397,
"tags": "ros"
} |
Why is the moon so large in this picture from Athens | Question:
I don't understand why the moon can be that big a few hundred miles south of where I am in London...
And when I went to Greece, even Africa, the moon wasn't so large...
Answer: The moon is the same size in Athens as in London.
To take a photo like that you go a long way from the Greek temple, so the temple appears to be smaller than your little-fingernail with your arm stretched out. You then get a powerful telephoto lens to zoom in and wait for the moon to rise. If you have done your calculations correctly you have positioned yourself so that the moon appears right behind the temple. If you were standing there, the moon and the temple would both be small and the temple would be in the distance.
If it is the temple of Poseidon, which is about 10m along the side, we can estimate that the photographer must have been standing about 1km away to get this shot, perhaps standing by the road across the bay.
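That ~1 km figure follows from a small-angle estimate; the 10 m temple width and the Moon's ~0.5° angular diameter are the only inputs:

```python
import math

# For the temple to subtend roughly the Moon's angular diameter (~0.5 deg),
# the camera must stand at distance d ~ size / angle_in_radians.
temple_size_m = 10.0
moon_angle_rad = math.radians(0.5)

d = temple_size_m / moon_angle_rad
print(round(d))  # 1146 m, i.e. about a kilometre away
```

The same arithmetic run backwards tells a photographer in London how far from a given building to stand for this trick.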
You can get similar photos in London. You just need a viewpoint from which you can see the buildings of the city and is in the right position (direction and distance) for the moon to rise (or set) in just the right place. Michael Tomas captured such a picture. | {
"domain": "astronomy.stackexchange",
"id": 3269,
"tags": "the-moon"
} |
Why copper wires submerged in salt (or fresh) water oxidize much faster while they conduct electrical current? | Question: On a lot of flooded vehicles (salt/fresh water), I've noticed that wire that carries more current is always the one with most destruction (at the connector/splice), in comparison with lower current carrying wire. Wire turns black on the outer layer, accumulating green/white oxidation(?) on top of it.
How does electricity affect the rate of oxidation?
Answer: Electricity in a wire submerged in water effectively turns it into an electrochemical cell, and then it is no surprise that the anode gets oxidized pretty quickly. A modest potential of a few volts would suffice to oxidize any metal, even gold. This works for AC as well (a wire oxidizes during the positive half-wave and then does nothing during the other half-wave). | {
"domain": "chemistry.stackexchange",
"id": 7975,
"tags": "electrochemistry, electrolysis"
} |