anchor stringlengths 0 150 | positive stringlengths 0 96k | source dict |
|---|---|---|
ROS Answers SE migration: rosbag reverse | Question:
In the C++ API for rosbag, you can open a rosbag and then step forwards through messages using the following loop (taken from the rosbag API website):
BOOST_FOREACH(rosbag::MessageInstance const m, view)
Is there a way to reverse through the messages?
I have tried the following:
for(rosbag::View::iterator iter = view.end(); iter != view.begin(); --iter)
but the compiler complains that ‘class rosbag::View::iterator’ has no member named ‘decrement’
I tried to add a "decrement" function to view.h/cpp in the rosbag API based on the "increment" function, but my C++ isn't good enough to figure out what is going on.
Any suggestions? Thanks, Kyle.
Originally posted by Kyle Strabala on ROS Answers with karma: 186 on 2011-10-05
Post score: 3
Answer:
I found a workaround that works for me. Still, I wish there was a decrement function to step through the messages.
#include <rosbag/bag.h>
#include <rosbag/view.h>
#include <rosbag/query.h>
...
// Open rosbag file
rosbag::Bag input_bag;
input_bag.open(input_bag_path.c_str(), rosbag::bagmode::Read);
// Setup View
std::vector<std::string> topics;
topics.push_back(image_topic_name);
rosbag::View view(input_bag, rosbag::TopicQuery(topics));
// Make a vector with instances of the View::iterator class pointing to each message
std::vector<rosbag::View::iterator> iters;
for(rosbag::View::iterator iter = view.begin(); iter != view.end(); ++iter)
{
rosbag::View::iterator new_iter(iter);
iters.push_back( new_iter );
}
// Reverse iterate through the message pointers
for(std::vector<rosbag::View::iterator>::reverse_iterator r_iter = iters.rbegin(); r_iter != iters.rend(); ++r_iter)
{
sensor_msgs::Image::ConstPtr image_msg = (*(*r_iter)).instantiate<sensor_msgs::Image>();
...
}
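The same buffer-then-reverse idea can be sketched in plain Python; this is illustrative only and does not use the actual rosbag API:

```python
def reversed_messages(forward_iterable):
    """Walk a forward-only message stream backwards.

    Mirrors the C++ workaround above: buffer every position in one
    forward pass, then iterate the buffer in reverse. Costs O(n) memory.
    """
    buffered = list(forward_iterable)
    for item in reversed(buffered):
        yield item
```

In the real C++ code the buffered items are View::iterator copies rather than the messages themselves, which keeps the memory cost to one iterator per message.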
Originally posted by Kyle Strabala with karma: 186 on 2011-10-05
This answer was ACCEPTED on the original site
Post score: 5 | {
"domain": "robotics.stackexchange",
"id": 6876,
"tags": "c++, rosbag"
} |
Radiofrequency acceleration of ions in a collider | Question: Reading about the radiofrequency acceleration in circular colliders (https://home.cern/about/engineering/radiofrequency-cavities) made me wonder: this is how it is performed for particles going in one direction, but at the same time the particles going in the other direction need to be accelerated. Are the beams synchronized in order for one of them to be inside the cylindrical electrodes while the one running in the opposite direction is crossing the cavity and being accelerated?
What about for linear colliders? Is the acceleration performed in the same way?
Answer: The particles going in each direction are in separate beamlines. They both have their own set of RF cavities, as well as dipole and quadrupole magnets.
This needs to be the case, since both beams contain positively charged particles, so they need an opposite direction magnetic field to stay in the beam lines. Since this forces them to be in separate beamlines, they can't share RF cavities.
Linear colliders are the same way (usually, at least), since you need to have two separate beamlines if you want to collide them at the end. | {
"domain": "physics.stackexchange",
"id": 49193,
"tags": "experimental-physics, particle-accelerators"
} |
Factory pattern for image or shape marker | Question: I'm making a photo marker application and need to make a factory pattern for marker. I think it is not very flexible and overall not good.
Would you check my code and suggest what could be improved?
import Foundation
enum MarkerType: String {
case Shape, Image
}
enum MarkerError: ErrorType {
case ImageNoExist
case ShapeNoExist
}
struct ImageMarker {
static func make(type: String) -> UIImageView {
var imageStr: String = ""
switch type {
case "X":
imageStr = "close.png"
break
default: break
}
let image = UIImage(named: imageStr)
let tintedImage = image?.imageWithRenderingMode(.AlwaysTemplate)
let imageView = UIImageView(image: tintedImage)
imageView.tintColor = UIColor.redColor()
return imageView
}
}
struct ShapeMarker {
static func make(type: String) -> UIView {
var shape: UIView = UIView()
switch type {
case "CIRCLE":
//shape = CircleView()
break
default: break
}
return shape
}
}
typealias Factory = (String) -> AnyObject
class MarkerHelper {
class func factoryFor(type: MarkerType) -> Factory {
switch type {
case .Shape:
return ShapeMarker.make
case .Image:
return ImageMarker.make
}
}
}
Answer: In looking at your code, what I see are a series of effectively global functions that "make" things based on a string.
The object that calls factoryFor must know what the string is in order to know which type of maker to return (after all, if the string is going to be "X", then the object that calls factoryFor has to know not to pass Shape.) Also, the caller of the factory function must know what the factory is making in order to correctly cast the AnyObject that is returned.
This leads me to wonder what the MarkerHelper.factoryFor function's responsibility is? I mean, its job just seems to be to return the make global function that the caller already knows it wants, so why not just have the caller use the make function directly?
It seems to me that you could have written:
import UIKit
func tintedImageViewNamed(name: String) -> UIImageView {
let image = UIImage(named: name)
let tintedImage = image?.imageWithRenderingMode(.AlwaysTemplate)
let imageView = UIImageView(image: tintedImage)
imageView.tintColor = UIColor.redColor()
return imageView
}
func make(type: String) -> UIView? /* thanks nhgrif */ {
switch type {
case "CIRCLE":
//return CircleView()
return UIView()
case "X":
return tintedImageViewNamed("close.png")
default:
return nil
}
}
The above accomplishes the same thing your code does, is no less safe and no more testable than what you have, and it's a lot less complex (which means less chance of bugs).
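The same point holds in any language: once the caller knows the type string, a single optional-returning make function is enough. A Python sketch for comparison (the dict stand-ins for the UIKit views are hypothetical):

```python
def make(marker_type):
    """Single collapsed factory: returns a marker for a known type string, else None."""
    if marker_type == "CIRCLE":
        return {"kind": "circle"}  # stand-in for CircleView()
    if marker_type == "X":
        # stand-in for the tinted UIImageView built from close.png
        return {"kind": "image", "name": "close.png", "tint": "red"}
    return None
```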
I'd love to suggest a better solution, but to do that, I would have to better understand what problem you are trying to solve... | {
"domain": "codereview.stackexchange",
"id": 19350,
"tags": "ios, swift, abstract-factory, uikit"
} |
Is the Ra/Dec of Alnitak in Orion's Belt known to be correct? | Question: I've obtained right ascension / declination information about some of stars in the Orion constellation from a few different sources and imported this into an open source video game I'm making. Using (my own possibly wonky) Ra/Dec/Parallax to x,y,z methods, Orion looks like this:
Notice Alnitak in red. It's visibly wrong.
I then manually adjust the declination of Alnitak by 1.5 degrees, and get a more believable result:
To give some background, I initially pulled data for these stars from the SIMBAD Astronomical Database but found that Alnitak had no parallax information. After manually looking up and adding in parallax info based on known distance, I realised the position looked wrong. I checked multiple sources - Star Facts, AstroPixels, Wikipedia, and others - and they all report similar declinations (well within 1 degree).
I find it strange that one star would be off and the others not, so normally I'd blame the data source. However it seems incredibly unlikely that so many (likely peer reviewed) sources would simply be wrong, especially for such a well-known triple star system.
How I create my x,y,z positions:
I create an object in a 3D scene with world coords 0,0,0 and global rotation 0,0,0.
I set its x rotation to exactly match right ascension (converted to decimal degrees).
I then rotate the object along the y axis by the declination amount (in degrees).
I then move the object 'forward' by any arbitrary amount. Doing this for all stars, say, at a fixed distance of 1.5 parsecs, the result is as follows for BSC5P:
But now I don't know if my way of determining position is flawed.
Anyone have any ideas on where I might be going wrong?
Answer: I tried to reproduce your result. I took the values of RA and DEC from Wikipedia: Alnitak, Alnilam, Mintaka.
The values are the following:
Alnitak: Ra = 05h 40m 45.52666s, Dec = −01° 56′ 34.2649″
Alnilam: Ra = 05h 36m 12.8s, Dec = −01° 12′ 06.9″
Mintaka: Ra = 05h 32m 00.40009s, Dec = −00° 17′ 56.7424″
Please, check whether these are about the same values that you have used.
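As a cross-check, here is a minimal sketch of the standard equatorial-to-Cartesian conversion at unit distance, using the three coordinate sets above (function names are illustrative):

```python
import math

def radec_to_unit_xyz(ra_h, ra_m, ra_s, dec_deg, dec_min, dec_sec, south=False):
    """Convert RA (hours/min/sec) and Dec (deg/arcmin/arcsec) to a unit vector."""
    ra = math.radians((ra_h + ra_m / 60 + ra_s / 3600) * 15)  # 1 hour of RA = 15 degrees
    dec = math.radians(dec_deg + dec_min / 60 + dec_sec / 3600)
    if south:
        dec = -dec
    return (math.cos(dec) * math.cos(ra),
            math.cos(dec) * math.sin(ra),
            math.sin(dec))

stars = {
    "Alnitak": radec_to_unit_xyz(5, 40, 45.52666, 1, 56, 34.2649, south=True),
    "Alnilam": radec_to_unit_xyz(5, 36, 12.8, 1, 12, 6.9, south=True),
    "Mintaka": radec_to_unit_xyz(5, 32, 0.40009, 0, 17, 56.7424, south=True),
}

def angular_separation(a, b):
    """Angle (radians) between two unit vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return math.acos(max(-1.0, min(1.0, dot)))
```

With these values the belt stars come out roughly 1.4 degrees from their neighbours and very nearly collinear, so a 1.5 degree gap in a plot points to an input or conversion error rather than the catalogue data.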
I plotted them and this is the result
Looks ok to me, so the problem is not in the publicly known coordinates of the star. Please, check that you have entered the correct values for Alnitak. | {
"domain": "astronomy.stackexchange",
"id": 5854,
"tags": "declination, right-ascension"
} |
Why is there a lone pair in thionyl fluoride? | Question: Why is there a lone pair in $\ce{SOF2}$? I drew its structure, which according to me should look like this:
Why is there a lone pair on sulfur? Isn't its octet complete? If yes, why should it expand its octet and gain more electrons?
Answer: When you want to write a Lewis structure, I suggest you start from considering how many valence electrons each atom has. In your case, you would have:
S = 6
F = 7; total = 14
O = 6
The final total would be 14 + 6 + 6 = 26. The structure you guessed is correct; sulfur is the central atom. You start by drawing a single bond to each atom, getting something like this:
F
|
S–O
|
F
Then, you can draw a double bond for oxygen:
F
|
S=O
|
F
If you do the math, by counting the electrons that you put in the previous structure, you would have 2 + 2 + 4 (two S-F bonds and a S=O) = 8.
Then we will add in the lone pairs: three on each halogen, two for oxygen, and one for sulfur. Therefore we add 3 × 2 × 2 = 12 electrons (three lone pairs × 2 electrons × 2 fluorines) plus 4 electrons on O (the two lone pairs), and we have a total of 24. The final 2, adding up to 26, come from the lone pair on sulfur.
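The electron bookkeeping above can be checked mechanically; a short sketch using the valence counts from this answer:

```python
# Valence electrons per atom, as used in this answer
valence = {"S": 6, "O": 6, "F": 7}

# Total pool available for SOF2: one S, one O, two F
total = valence["S"] + valence["O"] + 2 * valence["F"]  # 6 + 6 + 14 = 26

# Electrons placed in bonds: two S-F single bonds (2 e each) + one S=O double bond (4 e)
bonding = 2 * 2 + 4  # 8

# Lone-pair electrons: 3 pairs on each F, 2 pairs on O, 1 pair on S
lone = (2 * 3 + 2 + 1) * 2  # 9 pairs * 2 electrons = 18

assert bonding + lone == total == 26
```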
This makes sense because sulfur is in period 3, so it is possible for it to have more than 8 electrons: in other words, sulfur can accommodate an expanded octet beyond the usual 8 electrons. | {
"domain": "chemistry.stackexchange",
"id": 11809,
"tags": "bond, molecular-structure"
} |
How to use ROS Mobile for ROS2 | Question:
In ROS1 I used to use my mobile phone as a controller using the ROS Mobile Android application. Can the same application be used for ROS 2? If not, is there a way to use my mobile phone to publish/subscribe and visualise topics in ROS2?
Originally posted by tsadarsh on ROS Answers with karma: 78 on 2022-09-17
Post score: 0
Answer:
Hey,
Currently ROS-Mobile does not support communication with ROS 2. It will (hopefully) in the nearer future.
There are however some good resources to take a look at for interversion communication like the ros1_bridge on Github: https://github.com/ros2/ros1_bridge
Kind regards,
One of the contributors of ROS-Mobile
Originally posted by Schneewittchen with karma: 51 on 2022-11-15
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 37982,
"tags": "ros"
} |
Phenol preparation from Grignard reagent | Question: Reaction: Phenyl magnesium bromide in presence of excess oxygen, on acidification yields phenol, as below.
My instructor told me that the above reaction occurs, and if the reagent is in excess then phenol is the end product. However, I'm not able to figure out the mechanism.
I've tried and got stuck at this point:
I wish to know the complete mechanism of this reaction.
Answer: As requested by the OP - the route that lab chemists use to form phenols from aryl halides (not aryl fluorides) is via formation of the aryl boronic acid or boronate. These may be formed by either metallation of the aryl halides to form the aryl lithium or Grignard and reaction with a trialkyl borate $\ce{B(OR)3}$, or by Pd catalysed reaction with pinacol borane (Suzuki-Miyaura reaction). These boronates or boronic acids may be isolated, but in practice an oxidative workup, most often with basic peroxide, yields the phenol in good to excellent yield.
(read more links: 1, 2, 3)
image from US7678158B2. | {
"domain": "chemistry.stackexchange",
"id": 9745,
"tags": "organic-chemistry, reaction-mechanism, phenols, grignard-reagent"
} |
Why have a placebo control group when testing a new drug if existing drugs can be used? | Question: It is general practice to compare a new treatment against a sham treatment (placebo), and then use those results to compare efficacy of the new treatment (call it B) to an existing treatment (call it A), at least when existing treatments are available.
However, it seems reasonable and more ethical to use A and B together in a study, rather than giving one group a sham treatment. In other words, instead of a placebo group and a treatment group, why aren't there two treatment groups, one with A and one with B? Statistical noise such as regression to the mean, natural progression of the disease, etc would all show up in both groups, so if B gave better results than A, we should be able to say directly from that result that B works better than A, and we would be doing so in a more direct way than comparing the effect sizes of B vs placebo to the effect sizes of A vs placebo.
Answer: You are correct. For serious diseases, it would be unethical to withhold the existing treatment (Treatment A) for a placebo. Treatment A is called "standard of care", and new treatments are compared to this. | {
"domain": "biology.stackexchange",
"id": 8094,
"tags": "medicine, research-design, clinical-trial"
} |
What level of cellular radiation is harmful for humans? | Question: What level of radiation at the frequencies used by the cellular network (1-2 GHz) is harmful for human health?
Answer: Addition to the previous answer
First you need to understand how radiation causes cellular damage.
EM waves like γ-rays, X-rays and high-energy UV (in certain molecules even visible light) can knock out electrons from the atom and create an ion and free radicals by breaking chemical bonds. The term ionizing radiation is not used for low-energy UV and visible light because these waves can only ionize a few types of molecules and are not ionizing in general. Moreover, X-rays and γ-rays can dislodge even inner electrons, which UV and visible light cannot (high-energy γ-rays can also lead to the nuclear reaction of pair production). The ions that are formed are reactive and can attack other molecules (DNA/proteins etc.) around them. DNA absorbs UV, and this can cause DNA lesions such as dimerization of adjacent pyrimidines (via a cyclobutane bridge). Both reactive ions and pyrimidine dimers can cause cancer; the former is most frequent in melanomas which happen because of solar UV. Since UV cannot penetrate very deep, it affects mostly the skin (See here).
IR, with energy lower than visible light does not have sufficient energy to cause electronic transitions. But it can cause vibrational transitions such as bond stretching and bending. Microwaves have even lower energy and can only cause molecular rotation. Both IR and microwaves because of their property to make molecules move, generate heat.
Continuous usage of mobile phones can cause discomfort, but only because of their heating effect. They can by no means cause cancer: even though mobile phone radiation is classified as a possible carcinogen (IARC Group 2B), perhaps because of the pressing allegations of certain factions, it does not have sufficient energy to cause DNA damage.
If you ask me if mobile phone radiation can be harmful to human health, then I would say yes, they can be potentially harmful with prolonged usage but in no way as harmful as to be causing cancer. I would say that it is as harmful as wine :P | {
"domain": "biology.stackexchange",
"id": 3625,
"tags": "human-biology, radiation, health"
} |
EM wave reflection on a perfect conductor | Question: Suppose the region $x>0$ of 3D space is a perfect conductor, and the region $x<0$ is vacuum. You send a monochromatic plane wave $\vec{E_i}=\vec{E_0}e^{i(\omega t-kx)}$ from the left to the conductor. When calculating the total field, people usually suppose that the reflected wave will be of the form $\vec{E_r}=\vec{E_0'}e^{i(\omega't+k'x)}$, then they show that $\omega=\omega'$ and $k=k'$ by saying that $\vec{E_i}+\vec{E_r}=\vec{0}$ for $x=0$ and for all $t$.
I get that, and also that we take an incident wave that is monochromatic and plane, but how can we show that the reflected wave will be a monochromatic plane wave too?
Answer: The reflection coefficient is a constant independent of time, and the amplitude reflection coefficient is basically $-1$. Since all frequencies are reflected exactly the same way, with reflection coefficient $\Gamma=-1$, the form of the pulse is not distorted.
The reflected plane wave only has a single frequency component so it can only remain a plane wave. If it shifted in frequency the boundary condition on the electric field at the interface would not be true for all times.
When there is transmission, i.e when the transmission coefficient $\tau\ne 0$ and $\Gamma\ne -1$; because $\sigma$ and $\epsilon$ are frequency-dependent (often slowly varying functions of $\omega$) not all frequency components of the wave-packet will be equally transmitted or reflected, so the actual shape of the pulse can change (usually slightly) upon reflection or transmission.
An alternative approach is to compute the length of the wave vector $\vec k$: this depends on the properties of the medium in which the wave propagates: basically for vacuum $k=\omega/c$ and for air one might as well take the velocity to be $c$ as well. To match phases in a time-independent way requires $\omega_r=\omega_i$, which in turn implies that the lengths of $\vec k_r$ and $\vec k_i$ are the same. By translational invariance, the component of $\vec k$ parallel to the interface cannot change, which means the component that is normal must reverse its sign: in other words, assuming the interface is the $z=0$ plane, the boundary conditions show that
$$
k_{rx}=k_{ix}\, ,\qquad k_{ry}=k_{iy}\, , \tag{1}
$$
and we know that $k_r=\sqrt{k_{rx}^2+k_{ry}^2+k_{rz}^2}=\sqrt{k_{ix}^2+k_{iy}^2+k_{iz}^2}$. Of course as a vector $\vec k_r\ne \vec k_i$, so this leaves $k_{rz}=-k_{iz}$ as the only possible solution, i.e. the component of $\vec k$ normal to the interface changes sign without changing magnitude, since the reflected and incident waves are in the same medium. | {
"domain": "physics.stackexchange",
"id": 66274,
"tags": "electromagnetism, electromagnetic-radiation, reflection"
} |
Minkowski Diagram for Time-Like Separated Events | Question: A while ago, I asked a question if two events are always simultaneous in some reference frame. I received excellent answers. The point is that if $E_1$ and $E_2$ are time-like separated with time between events $t > 0$ in some frame $S$, then it can easily be shown that for any frame $S'$, these two events cannot be simultaneous in $S'$.
What I'm curious about now, is if someone (I'm a layman) would draw a Minkowski diagram demonstrating this. So, I would like to see a diagram showing that the events $E_1$ and $E_2$ are not on any line where all events on that line are simultaneous.
Answer:
I drew the spacetime diagrams for you. On the l.h.s. you may see two simultaneous events in the unprimed (x,t) frame. The axes of a frame moving in the positive direction (the primed frame) should be drawn into the unprimed frame as I have done. You can find the new space coordinate by drawing a straight line parallel to the new time axis through the event. The point where your line intersects the new spatial axis is the new spatial coordinate. The new time coordinate is found by drawing a line parallel to the new spatial axis and finding its intersection with the new time axis. Green lines indicate the trajectory that light would follow, supposing we are working with units so that $c=1$. | {
"domain": "physics.stackexchange",
"id": 22190,
"tags": "special-relativity, spacetime, inertial-frames"
} |
A system for determination of human language from input text:: robustification | Question: This program works, but it's pretty clunky and fragile. I want to make it more dynamic / solid.
For instance, some of the functionality depends on the order I entered things into an ArrayList, which seems to me to be an example of terribly odious, smelly code.
//These structures hold the word list & corresponding flag indicator for each language
static HashMap<String, Boolean> de_map = new HashMap<String, Boolean>();
static HashMap<String, Boolean> fr_map = new HashMap<String, Boolean>();
static HashMap<String, Boolean> ru_map = new HashMap<String, Boolean>();
static HashMap<String, Boolean> eng_map = new HashMap<String, Boolean>();
public static void main( String[] args ) throws URISyntaxException, IOException
{
//put them all in a list for later processing
List<HashMap<String, Boolean>> lang_maps = new ArrayList<HashMap<String, Boolean>>();
lang_maps.add( de_map );
lang_maps.add( fr_map );
lang_maps.add( ru_map );
lang_maps.add( eng_map );
//this holds all the dictionary files, i.e. word lists garners from language folders
ArrayList<Path> dictionary_files = new ArrayList<Path>();
//THIS SEGMENT IS FOR DYNAMICALLY LOCATING THE DIRECTORY, SO THE PROGRAM WORKS "OUT OF THE BOX"
/*******************************************************************************************************************************************/
File currentDir = new File( "." ); // Read current file location
//System.out.println(currentDir.getAbsolutePath());
File targetDir = null;
if (currentDir.isDirectory())
{
targetDir = new File( currentDir, "word_lists_1" ); // Construct the target directory file with the right parent directory
}
if ( targetDir != null && targetDir.exists() )
{
SearchDirectories.listDirectoryAndFiles( targetDir.toPath(), dictionary_files );
}
/*******************************************************************************************************************************************/
//this populates word presence data structs for each language
for(Path dir : dictionary_files)
{
String word_holding_directory_path = dir.toString().toLowerCase();
BufferedReader br = new BufferedReader( new FileReader( dir.toString() ) );
String line = null;
while ((line = br.readLine()) != null)
{
//System.out.println(line);
if(word_holding_directory_path.toLowerCase().contains("/de/") )
{
de_map.put(line, false);
}
if(word_holding_directory_path.toLowerCase().contains("/ru/") )
{
ru_map.put(line, false);
}
if(word_holding_directory_path.toLowerCase().contains("/fr/") )
{
fr_map.put(line, false);
}
if(word_holding_directory_path.toLowerCase().contains("/eng/") )
{
eng_map.put(line, false);
}
}
}
//print debugging
// for (Map.Entry entry : de_map.entrySet())
// {
// System.out.println(entry.getKey() + ", " + entry.getValue());
// }
/*******************************************************************************************************************************************/
//GET THE USER INPUT
ArrayList<String> input_text = new ArrayList<String>();
Scanner in = new Scanner(System.in);
System.out.println("Please enter a sentence: ");
String [] tokens = in.nextLine().split("\\s");
for (int i = 0; i < tokens.length; i++)
{
input_text.add( tokens[i].toString() );
}
/*******************************************************************************************************************************************/
//iterate over the hashmaps of all the languages we're considering
//and flip the bool if there is a hit
for( int i = 0; i < lang_maps.size(); i++ )
{
HashMap<String, Boolean> working_lang_map = lang_maps.get( i );
Iterator it = working_lang_map.entrySet().iterator();
while (it.hasNext())
{
Map.Entry pair = (Map.Entry)it.next();
//System.out.println(pair.getKey() + ", " + pair.getValue());
for(String word : input_text)
{
if(pair.getKey().toString().toLowerCase().trim().equals( word.toLowerCase().trim() ) )
{
working_lang_map.put(pair.getKey().toString(), true);
}
}
}
}
// !!!!!!!!!!!!!!
// this is fragile. since it depends on order. need to make it
// more robust
int total_de = 0;
int total_fr = 0;
int total_ru = 0;
int total_eng = 0;
//iterate over all the hashmaps and count the hit totals
for( int i = 0; i < lang_maps.size(); i++ )
{
HashMap<String, Boolean> working_lang_map = lang_maps.get( i );
//System.out.println( working_lang_map );
for (Map.Entry entry : working_lang_map.entrySet())
{
if( ((Boolean) entry.getValue()) == true )
{
//this part is really stupid
if( i == 0)
{
total_de++;
}
if( i == 1)
{
total_fr++;
}
if( i == 2)
{
total_ru++;
}
if( i == 3)
{
total_eng++;
}
}
}
}
HashMap< String, Integer > most_hits_lang = new HashMap<String, Integer>();
most_hits_lang.put( "German", total_de );
most_hits_lang.put( "French", total_fr );
most_hits_lang.put( "Russian", total_ru );
most_hits_lang.put( "English", total_eng );
Entry<String,Integer> maxEntry = null;
for(Entry<String,Integer> entry : most_hits_lang.entrySet())
{
if (maxEntry == null || entry.getValue() > maxEntry.getValue())
{
maxEntry = entry;
}
}
System.out.println( maxEntry.getKey() );
}
The full code and everything necessary to run it can be found on my GitHub page.
Answer: You're right about the clunky and fragile.
A lot of improvements are possible.
Most notably,
many unnecessary elements can be rewritten simpler, shorter, cleaner.
Let's go from the top.
Prefer interface types instead of implementation
Instead of this:
static HashMap<String, Boolean> de_map = new HashMap<String, Boolean>();
If you don't have a specific need for a HashMap, declare as a Map:
static Map<String, Boolean> de_map = new HashMap<String, Boolean>();
Do this everywhere.
Map<String, Boolean> or Set<String>
Maps with boolean as value type are always suspicious.
Very often they can be rewritten as Set<> instead.
We'll get to that too a bit later.
Use the diamond operator
Instead of:
static Map<String, Boolean> fr_map = new HashMap<String, Boolean>();
Use the modern diamond operator:
static Map<String, Boolean> fr_map = new HashMap<>();
This is supported as of Java 7, which is the lowest officially supported Java version. Consider migrating to it ASAP.
Another similar example, instead of this:
List<HashMap<String, Boolean>> lang_maps = new ArrayList<HashMap<String, Boolean>>();
Should be:
List<Map<String, Boolean>> lang_maps = new ArrayList<>();
Don't throw exceptions in vain
The main method is declared to throw URISyntaxException,
but I don't see where that can come from.
Perhaps from SearchDirectories.listDirectoryAndFiles ?
I doubt you really need it.
Perhaps related to this (or perhaps not at all) is that SearchDirectories.listDirectoryAndFiles takes a Path as parameter.
I'm wondering if that's necessary at all.
Can't you rewrite it with a simple File ?
Unnecessary stuff
Instead of this:
File currentDir = new File( "." ); // Read current file location
File targetDir = null;
if (currentDir.isDirectory())
{
targetDir = new File( currentDir, "word_lists_1" ); // Construct the target directory file with the right parent directory
}
if ( targetDir != null && targetDir.exists() )
{
// ...
}
This is exactly the same:
File currentDir = new File(".");
File targetDir = new File(currentDir, "word_lists_1");
if (targetDir.exists()) {
// ...
}
Simpler isn't it? I also removed the redundant comments,
and changed the formatting to the style used by major IDEs.
Speaking of comments,
putting comments at the end of lines is not a great idea,
as it makes the lines longer,
and forces the reader to scroll horizontally to the far right.
if or if-else ?
About this code:
if (word_holding_directory_path.toLowerCase().contains("/de/")) {
de_map.put(line, false);
}
if (word_holding_directory_path.toLowerCase().contains("/ru/")) {
ru_map.put(line, false);
}
if (word_holding_directory_path.toLowerCase().contains("/fr/")) {
fr_map.put(line, false);
}
if (word_holding_directory_path.toLowerCase().contains("/eng/")) {
eng_map.put(line, false);
}
Can you have paths like something/fr/.../eng/.../de/?
I suspect that a path will only contain either "fr", "eng", "de", and so on.
In which case these conditions should be chained with else-if.
Also,
instead of evaluating word_holding_directory_path.toLowerCase()
multiple times,
it would be better to evaluate it only once.
Even worse,
there's really no need to re-evaluate any of these conditions for each line of input.
This would be better:
Map<String, Boolean> map;
if (word_holding_directory_path.toLowerCase().contains("/de/")) {
map = de_map;
} else if (word_holding_directory_path.toLowerCase().contains("/ru/")) {
map = ru_map;
} else if (word_holding_directory_path.toLowerCase().contains("/fr/")) {
map = fr_map;
} else if (word_holding_directory_path.toLowerCase().contains("/eng/")) {
map = eng_map;
} else {
continue;
}
while ((line = br.readLine()) != null) {
map.put(line, false);
}
Unnecessary stuff 2
Instead of this:
ArrayList<String> input_text = new ArrayList<String>();
String [] tokens = in.nextLine().split("\\s");
for (int i = 0; i < tokens.length; i++)
{
input_text.add( tokens[i].toString() );
}
This is exactly the same:
List<String> input_text = Arrays.asList(in.nextLine().split("\\s"));
But actually,
you don't need a List<String> at all later in your code.
So you could as well just replace the above with:
String[] input_text = in.nextLine().split("\\s");
And the rest of the code will work the same way.
Unnecessary stuff 3
The nested loop after you get the user input can be simplified using enhanced for-each loops, dropping unnecessary toString calls,
and written better as:
for (Map<String, Boolean> working_lang_map : lang_maps) {
for (Map.Entry<String, Boolean> entry : working_lang_map.entrySet()) {
for (String word : input_text) {
if (entry.getKey().toLowerCase().trim().equals(word.toLowerCase().trim())) {
working_lang_map.put(entry.getKey(), true);
}
}
}
}
... but ... something's awfully wrong here.
Iterating over entries of a map to check if it contains some elements is not normal. Maps are designed for fast access by key, and you're squandering that away.
If you rewrote the maps to contain trimmed and lowercased strings (it seems you can, the rest of the code will work just the same),
then you could simplify the above nested loop and make it more efficient like this:
for (Map<String, Boolean> working_lang_map : lang_maps) {
for (String word : input_text) {
String normalized = word.trim().toLowerCase();
if (working_lang_map.containsKey(normalized)) {
working_lang_map.put(normalized, true);
}
}
}
Simplified implementation
I suggest to replace de_map, eng_map, and the others with Set<String> of words, so that you populate:
Set<String> enWords = new HashSet<>();
Set<String> frWords = new HashSet<>();
Set<String> ruWords = new HashSet<>();
Set<String> deWords = new HashSet<>();
And a map of languages:
Map<String, Set<String>> langMaps = new HashMap<>();
langMaps.put("English", enWords);
langMaps.put("French", frWords);
langMaps.put("Russian", ruWords);
langMaps.put("German", deWords);
And then,
instead of setting flags and then counting them,
you can count directly,
replacing the final fragile part with this simpler and more robust approach:
int maxCount = 0;
String maxLang = null;
for (Map.Entry<String, Set<String>> entry : langMaps.entrySet()) {
int count = 0;
Set<String> words = entry.getValue();
for (String word : input_text) {
String normalized = word.trim().toLowerCase();
if (words.contains(normalized)) {
++count;
}
}
if (count > maxCount) {
maxLang = entry.getKey();
maxCount = count;
}
}
System.out.println(maxLang); | {
"domain": "codereview.stackexchange",
"id": 13656,
"tags": "java, natural-language-processing"
} |
Spatial allocation of road emissions and unit conversion | Question: I have a grid shapefile that has a resolution of 1km x 1km.
This grid is then used to spatially allocate road emissions.
My goal is to get the emissions in the unit μg m-2 s-1 (micrograms per square meter per second), since this is the emission unit for aerosols in WRF/Chem.
I managed to get the emissions for a given grid cell by multiplying the total emissions (in micrograms per second) by the ratio of the road length inside the grid cell to the total road length.
Now I have the emission for that grid cell in μg s⁻¹ (micrograms per second).
My question may be silly or simple, but I am wondering whether I still need to divide my emissions by 1000 (since 1 km = 1000 m) to get the final emission unit I need
(final unit needed: μg m⁻² s⁻¹, micrograms per square meter per second).
Am I right? Hoping someone could help me with this. Thanks!
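For concreteness, the allocation-to-flux step can be sketched in a few lines (the numbers are made up; the key point is that the divisor is the cell area in square meters):

```python
# Made-up example numbers; only the area conversion for one grid cell matters.
cell_area_m2 = 1000.0 * 1000.0           # a 1 km x 1 km cell is 10^6 m^2
cell_emission_ug_per_s = 5.0e9           # road emission allocated to the cell, ug/s

flux_ug_per_m2_per_s = cell_emission_ug_per_s / cell_area_m2   # -> 5000.0
```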
Answer: You have your emissions over a square kilometer. To get them per square meter you divide by 1000000, not 1000, since $1\,\mathrm{km}^2=10^6\,\mathrm{m}^2$. | {
"domain": "earthscience.stackexchange",
"id": 1894,
"tags": "wrf, wrf-chem, emissions"
} |
Does reversing time give parity reversed antimatter or just antimatter? | Question: Feynman's idea states that matter going backwards in time seems like antimatter.
But, since nature is $CPT$ symmetric, reversing time ($T$) is equivalent to $CP$ operation. So, reversing time gives parity reversed antimatter, not just antimatter.
What is happening here? Why does nobody mention this parity thing when talking about reversing time? What am I missing?
Answer: The statement that antimatter is matter going back in time is usually associated with Feynman diagrams in QED, so we're talking about electrons, and electrons have parity +1, so:
$$ CPT = C1T = CT = 1 $$
So the parity part doesn't come into play, but it is required in general. | {
"domain": "physics.stackexchange",
"id": 59261,
"tags": "antimatter, parity, time-reversal-symmetry, cp-violation, cpt-symmetry"
} |
Can the Sun contain degenerate matter? | Question: Degenerate matter (neutronium) is hypothesized to be very dense and, at a certain amount, unstable - in the sense of collapsing on itself and causing fusion. The result would be a massive fusion detonation. Such a detonation could cause the Sun to lose its photosphere and cook the inner Solar System with a wave of radiation. This event is described enjoyably in Robert Sawyer’s book, "The Oppenheimer Alternative". The discovery of the neutron core, however, is made by identifying a transient rise in products of the CNO fusion cycle which is hypothesized to require 20 million Kelvin, whereas the Sun’s core temperature is ‘only’ 15 million Kelvin. These fusion products are detected by solar spectroscopy.
Can such a neutron core exist? If it did exist, how could the by products of the CNO cycle be identified before the actual explosion? I mean, if the neutronium degenerates, fuses, and raises the Sun's temperature to allow CNO fusion, then isn't the explosion going to happen before any of the photons from the explosion have time to heat the Sun up enough to make CNO products detectable in the spectra?
EDIT: Fortunately there are many people here who are smarter than me and still kind enough to show it nicely. I will try to clarify the facts of the novel without spoilers (Rather like describing the phenomena of Red Matter in Star Trek (2009) without spoiling the plot)
Edward Teller presents three spectra of the sun taken in 1929,1938 and 1945 at a colloquium in Los Alamos
Fermi remarks that the second spectrum is not of our sun but of an F class star because of the strong carbon absorption lines
Teller infers that something happened to our sun around 1938 to heat it up a little to spark off C-N-O fusion
Oppenheimer relates von Neumann's weather data that the Earth was indeed statistically warmer during that 'period'. He goes on to say that the problem is not with the spectrograph plates from Bethe nor the math done by Teller. Rather, Oppenheimer says that the sun is indeed having a 'problem'
Oppenheimer recalls publications from Zwicky and Landau that hypothesized a neutron core to our sun.
Oppenheimer recalls a paper he wrote with Robert Serber that refuted Landau's work with calculations that a neutron core greater than 0.1 solar masses would be unstable.
Neutron core instability would manifest as a hotter sun
Oppenheimer starts to say that such instability is transient and Teller interrupts him to blurt out that the unstable neutronium would be ejected from the sun
Teller goes on to say that a neutron core would be formed by an implosion, then compares it to the process that happens in the atomic bomb, Fat Man, where an explosive array is used to implode a plutonium core. The inevitable result, he states, is an explosion (it is unclear if he means fission or perhaps his own demon, fusion)
Hans Bethe then calculates that based on the expected size of a neutron core, the known size of the sun, and the opacity of the sun that the outwardly exploding degenerate matter would hit the photosphere in 90 years.
At this point, I will let the other really knowledgeable people here comment on these points and their veracity. However, I am still haunted by the idea that a small really heavy object (microscopic black hole, white dwarf, etc) could be captured by our sun and then precipitate instability.
Answer: The answer to whether a normal star can contain a core of degenerate neutrons is a yes. Thorne & Zytkow 1977 produced numerically models where a neutron star becomes embedded in the center of a massive giant or supergiant star, surrounded by a large gaseous envelope. In this scenario, the main source of energy becomes not nuclear fusion but instead gravitational contraction from matter flowing from the inner envelope onto the outer core. The energy production ratios are
$$L_{\text{nuc}}/L\approx0.04,\quad L_{\text{grav}}/L=0.96$$
for stars with $M_{\text{tot}}\leq10M_{\odot}$, as above this, convective envelopes form. The models, regardless of mass, predict some degree of shell burning, with hydrogen-, helium-, and carbon- burning layers outside the core. In the inner regions, temperatures are orders of magnitude higher than that required for the CNO cycle, which sets in at around 15 million Kelvin, actually (not 20 million K) and dominates over the p-p chain at about 17 million K.
Thorne & Zytkow did find that for $M_{\text{tot}}<2M_{\odot}$, the envelopes were unstable against radial adiabatic pulsations, implying that it's likely not possible to extend the analysis to the case of the Sun - at that mass, the object is quite vulnerable to instabilities. That said, I am fairly confident that we would be able to tell if there was a degenerate object in the solar core; unlike the red giant or supergiant case, there is no large envelope to hide it, and the mass of the neutron star would at least be comparable to or (much more likely) exceed the mass of the Sun itself.
Some points on the stability of neutron degenerate matter: The problem is actually with small quantities, not large quantities. Small amounts are incapable of remaining bound by their own gravity; the pressures involved are simply too high, and the solar core is nowhere near high-pressure enough to maintain stability. Optimistically, you'd need somewhere in the $0.1M_{\odot}\text{-}0.2M_{\odot}$ range at minimum for stability, though I would be surprised if a mass of degenerate matter this low was produced naturally - a neutron star of a more typical mass would likely have to lose mass somehow. | {
"domain": "astronomy.stackexchange",
"id": 4546,
"tags": "the-sun, neutron-star, degenerate-matter"
} |
Can DQN announce it has things in its hand in a card game? | Question: More information on the card game I'm talking about is in my last question here: DQN input representation for a card game
So I was thinking about the output of the Q-network and, aside from which card to play, I was wondering if the agent can announce things.
Imagine you have the current hand: 2, 4, 11, 2 (the twos are of different card types).
When you're playing the game and you get dealt a hand like this, you have to announce that you have the same number twice (called Ronda) or thrice (called Tringa) before anyone plays a card on the table. Lying about it gets you a penalty.
Could a DQN handle this? I don't know if adding "Announcing a Ronda/Tringa" as an action would actually help. I mean, can this be modeled for the NN, or should I just automate it and spare the agent having to announce it every time?
Answer: The simplest thing to do when you make your first implementation of the agent is to automate decisions like this, in order to keep representations and decisions simple.
However, if you want to explore tactics surrounding declaration, then I think the following applies:
There should be an initial round of actions where the agent may get to decide whether or not to declare a Ronda, based on the cards it holds. These will be different action choices to playing cards, so you would need to alter your action representation to include those choices. Only allow action choices which are valid, so if it is not valid to declare a Ronda or a Tringa when a player does not have one, then the player does not get to make that choice.
You may want to add a state feature "has a Ronda" and "has a Tringa" for the agent's player, to help with the action decision.
You should also add a state feature for each player according to whether they declared a Ronda or a Tringa.
Rather than have the agent learn to detect a lie, given your comment that all cards are played so it is easy to tell (there are only 8 cards total in play by the end), then I would just assume lies are automatically found out and include that in the game engine. In other words, the penalty is always paid.
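The representation suggestions above can be made concrete with a small sketch; the card values, action indices, and hand size below are all invented for illustration:

```python
# Hypothetical sketch of the state features and action masking described above.
from collections import Counter

PLAY_SLOTS = 4                            # one "play card i" action per hand slot
DECLARE_RONDA, DECLARE_TRINGA, PASS = 4, 5, 6
N_ACTIONS = 7

def declaration_features(hand):
    """State features 'has a Ronda' / 'has a Tringa' for the agent's player."""
    counts = Counter(hand).values()
    return any(c == 2 for c in counts), any(c == 3 for c in counts)

def valid_action_mask(hand, declaration_phase):
    """Only truthful declarations are offered; declining is always allowed."""
    mask = [False] * N_ACTIONS
    if declaration_phase:
        has_ronda, has_tringa = declaration_features(hand)
        mask[DECLARE_RONDA] = has_ronda
        mask[DECLARE_TRINGA] = has_tringa
        mask[PASS] = True
    else:
        for i in range(len(hand)):        # any held card may be played
            mask[i] = True
    return mask
```

Masking invalid actions (rather than penalising them) keeps the declaration choice restricted to truthful options while still letting the agent decide whether to declare at all.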
The interesting question is then whether withholding the declaration can give a tactical advantage when the round is being played (because your opponent knows less about your hand), and whether that advantage offsets the inevitable penalty. This might not be true in your card game, but could be true in games with similar choices.
Could a DQN handle this?
A DQN is maybe going to struggle with partial information in this game. The opponent's cards are hidden from the network, but could have a non-random influence on the opponent's choices of action. It is possible that you will need to investigate agents that can solve POMDPs to get the best player.
I don't know for certain though. It depends on how much tactical advantage there is in having concealed cards, or how much that is just luck that plays out much the same whether you know the opponent's cards or not. If there is strategy, and ways to determine/guess what the opponent holds based on your cards and their actions so far, this is more like a POMDP. | {
"domain": "ai.stackexchange",
"id": 600,
"tags": "reinforcement-learning, dqn"
} |
Nonzero spontaneous magnetization in two-dimensional Ising model | Question: The two-dimensional Ising model with nearest-neighbour interactions enjoys a $\mathbb{Z}_2$ symmetry under $S_i\to -S_i$; it displays spontaneous symmetry breaking at a finite temperature $T_C=2J[k_B\ln(1+\sqrt{2})]^{-1}$, and nonzero spontaneous magnetization develops below $T_C$.
Now, the definition of magnetization $$\Big\langle \sum_i S_i\Big\rangle=\frac{1}{Z}\sum\limits_{C}\Big(\sum_i S_i\Big)e^{-\beta H(C)}\tag{1}$$ where the sum is over all configurations $C$ of the spins. However, for any configuration $C$ with $\sum_i S_i=M$, there is
a spin-flipped configuration $C^\prime$ with $\sum_i S_i=-M$ in the sum of Eq.$(1)$, but with exactly the same energy, i.e., $H(C)=H(C^\prime)$ by the $\mathbb{Z}_2$ spin-flip symmetry. Clearly, this argument shows that the magnetization must vanish from $(1)$! But this does not happen.
Question What is the flaw in this argument?
Answer: Your argument only applies to finite systems (otherwise the energy is ill-defined) and there are no phase transitions in finite systems. So, there is no contradiction there.
Moreover, your argument only applies when both $h=0$ (no magnetic field) and you use free or periodic boundary conditions. Indeed, were it not the case, then you would not have symmetry under spin-flip.
Now, consider a system in the box $\{-n,\dots,n\}^d$ with, say, $+$ boundary condition (that is, all spins in the exterior boundary of the box are fixed to $+1$). Let us denote the corresponding probability measure by $\mu_{n,\beta}^+$ and the associated expectation by $\langle\cdot\rangle_{n,\beta}^+$.
Then (assuming that $d\geq 2$), one can (rather easily) show, using for instance Peierls' argument, that at low enough temperatures, the expected value of the central spin $\sigma_0$ is positive: there exist $\epsilon>0$ and $\beta_0$ (both independent of $n$) such that, for all $\beta>\beta_0$,
$$
\langle\sigma_0\rangle_{n,\beta}^+ \geq \epsilon.
$$
In the same way, one shows that, for all $\beta>\beta_0$,
$$
\langle\sigma_0\rangle_{n,\beta}^- \leq - \epsilon,
$$
for a system with $-$ boundary condition.
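Both finite-volume statements (a vanishing magnetization with symmetric boundary conditions, and $\langle\sigma_0\rangle>0$ with the $+$ boundary condition at low temperature) can be checked by brute-force enumeration on a very small system. A sketch with a 3×3 grid of free spins and $J=1$; the grid size and $\beta$ are chosen purely for illustration:

```python
# Exact enumeration of a tiny 2D Ising system (3x3 free spins, J = 1).
# With free boundary conditions the central magnetisation vanishes by the
# spin-flip symmetry; with all boundary spins fixed to +1 it is strictly
# positive at low temperature.
from itertools import product
from math import exp

def avg_center_spin(beta, plus_boundary):
    L = 3
    num = den = 0.0
    for config in product((-1, 1), repeat=L * L):
        s = [config[i * L:(i + 1) * L] for i in range(L)]

        def at(i, j):
            if 0 <= i < L and 0 <= j < L:
                return s[i][j]
            return 1 if plus_boundary else 0  # fixed +1 ring, or nothing

        # sum each bond once: right/down neighbours, plus top/left boundary
        H = 0.0
        for i in range(L):
            for j in range(L):
                H -= s[i][j] * (at(i + 1, j) + at(i, j + 1))
        for k in range(L):
            H -= at(-1, k) * s[0][k] + at(k, -1) * s[k][0]

        w = exp(-beta * H)
        num += s[1][1] * w
        den += w
    return num / den
```

At $\beta=1$, deep in the low-temperature regime, the fixed $+$ boundary pins the central spin close to $+1$, while the free-boundary average vanishes identically, in line with the argument above.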
We now want to define probability measures on the set of all infinite configurations (that is, configurations of all spins in $\mathbb{Z}^d)$. I will not enter into too much detail here. One way to do that is to take the thermodynamic limit. That is, we would like to define a measure $\mu^+_\beta$ as the limit of $\mu^+_{n,\beta}$ as $n\to\infty$. The precise sense in which this limit is taken is the following one: for any local observable $f$ (that is, any observable depending only on the values taken by finitely many spins), we want convergence of the expectation of $f$:
$$
\langle f \rangle_\beta^+ = \lim_{n\to\infty} \langle f \rangle_{n,\beta}^+.
$$
One can show, using correlation inequalities, that the limit indeed exists in this sense. Moreover, in view of the above, for all $\beta>\beta_0$,
$$
\langle \sigma_0 \rangle_\beta^+ = \lim_{n\to\infty} \langle \sigma_0 \rangle_{n,\beta}^+ \geq \epsilon.
$$
One can do the same starting with the $-$ boundary condition and define a measure $\mu^-_\beta$ as the limit of the measures $\mu^-_{n,\beta}$ and we'll have, for all $\beta>\beta_0$,
$$
\langle \sigma_0 \rangle_\beta^- \leq -\epsilon.
$$
In particular, the two measures $\mu^+_\beta$ and $\mu^-_\beta$ cannot coincide (since the expectation of $\sigma_0$ is different under these two measures!). You have thus shown that your system can exist in two different phases when there is no magnetic field and the temperature is low enough. In the phase described by $\mu^+_\beta$, the magnetization is positive, while it is negative in the phase described by $\mu^-_\beta$.
Of course, you might also have considered the limit of measures with free (or periodic) boundary conditions $\mu^\varnothing_\beta$ and have concluded that, for all $\beta$,
$$
\langle \sigma_0\rangle_\beta^\varnothing = 0.
$$
However, the measure $\mu^\varnothing_\beta$ does not describe a pure phase. In fact,
$$
\mu_\beta^\varnothing = \frac12\mu^+_\beta + \frac12\mu^-_\beta .
$$
Pure phases are important for several reasons. First, these are the only ones in which macroscopic observables take on deterministic values. Second, they contain all the interesting physics, since any other Gibbs measure $\mu$ can be written as a convex combination of pure phases (as we did above for $\mu_\beta^\varnothing$). In particular, if you sample a configuration with $\mu$, then you'll obtain a configuration that is typical of one of the pure phases (with the probability given by the corresponding coefficient in the convex decomposition; for instance, using $\mu_\beta^\varnothing$, you'd obtain a configuration typical of $\mu^+_\beta$ with probability $1/2$).
(Pure phases possess additional remarkable properties, but this would take us too far, so I'll only discuss this if requested explicitly.)
Let me briefly describe an alternative way to proceed. Rather than introducing boundary conditions that break the symmetry, you can continue to work with, say, periodic boundary condition, but introduce a magnetic field $h$. Denote the corresponding measure $\mu_{n,\beta,h}^{\rm per}$.
Then, one can again take the limit as $n\to\infty$ and obtain a limiting measure $\mu_{\beta,h}$. This measure can be shown to be unique as long as $h\neq 0$, in the sense that the limit does not depend on the boundary condition used. Moreover, one has that
$$
\lim_{h\downarrow 0} \mu_{\beta,h} = \mu^+_\beta
$$
and
$$
\lim_{h\uparrow 0} \mu_{\beta,h} = \mu^-_\beta.
$$
So, the two measures obtained before, describing the pure phases of the (infinite-volume) Ising model, correspond precisely to the phases you obtain by setting a positive (resp. negative) magnetic field and decreasing (resp. increasing) it to $0$.
Combined with the discussion above, this explains how the magnetization can have a discontinuity at $h=0$ at low temperatures.
To conclude (finally!), let me just mention that it is possible to construct infinite-volume Gibbs measures (such as the measures $\mu_\beta^+$ and $\mu^-_\beta$ described above) directly in infinite-volume, without taking limits of finite-volume measures. This is interesting because this avoids any explicit symmetry breaking! I discussed this in another answer. | {
"domain": "physics.stackexchange",
"id": 67795,
"tags": "statistical-mechanics, phase-transition, symmetry-breaking, ising-model, critical-phenomena"
} |
Efficiently calculating minimum edit distance of a smaller string at each position in a larger one | Question: Given two strings, $r$ and $s$, where $n = |r|$, $m = |s|$ and $m \ll n$, find the minimum edit distance between $s$ for each beginning position in $r$ efficiently.
That is, for each suffix of $r$ beginning at position $k$, $r_k$, find the Levenshtein distance of $r_k$ and $s$ for each $k \in [0, |r|-1]$. In other words, I would like an array of scores, $A$, such that each position, $A[k]$, corresponds to the score of $r_k$ and $s$.
The obvious solution is to use the standard dynamic programming solution for each $r_k$ against $s$ considered separately, but this has the abysmal running time of $O(n m^2)$ (or $O(n d^2)$, where $d$ is the maximum edit distance). It seems like you should be able to re-use the information that you've computed for $r_0$ against $s$ for the comparison with $s$ and $r_1$.
I've thought of constructing a prefix tree and then running the dynamic programming algorithm on $s$ against the trie, but this still has worst case $O(n d^2)$ (where $d$ is the maximum edit distance) as the trie is only optimized for efficient lookup.
Ideally I would like something that has worst case running time of $O(n d)$ though I would settle for good average case running time. Does anyone have any suggestions? Is $O(n d^2)$ the best you can do, in general?
Here are some links that might be relevant though I can't see how they would apply to the above problem as most of them are optimized for lookup only:
Fast and Easy Levensthein distance using a Trie
SO: Most efficient way to calculate Levenshtein distance
SO: Levenshtein Distance Algorithm better than $O(n m)$
An extension of Ukkonen's enhanced dynamic programming ASM algorithm
Damn Cool Algorithms: Levenshtein Automata
I've also heard some talk about using some type of distance metric to optimize search (such as a BK-tree?) but I know little about this area and how it applies to this problem.
Answer: What you are interested in are semi-global and/or local alignments. The usual way to compute those is to adapt the dynamic programming algorithm for the Levenshtein distance:
Initialise the first row/column with $0$ (instead of $i$/$j$) if free deletions/insertions are allowed at the beginning.
Select the minimum value from the last row/column as result if free deletions/insertions are allowed at the end.
Different combinations are possible. In your case, assume that $r$ is the horizontal word, you initialise the first row with $0$ (the first column with $i$, no change here) and the result is the minimum of the last row. The result is then the smallest Levenshtein distance between $s$ and any substring of $r$, computed in time and space $O(nm)$.
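In code, the semi-global variant just described amounts to a small change to the textbook DP; a Python sketch (matching $s$ against the best substring of $r$, with $r$ horizontal):

```python
# Semi-global alignment: deletions of r are free at both ends, so s is
# compared against every substring of r. First row initialised to 0 (free
# start), result is the minimum of the last row (free end).
def min_dist_to_substring(r, s):
    n, m = len(r), len(s)
    prev = [0] * (n + 1)              # row for the empty prefix of s
    for i in range(1, m + 1):
        cur = [i] + [0] * n           # first column: i deletions from s
        for j in range(1, n + 1):
            cost = 0 if s[i - 1] == r[j - 1] else 1
            cur[j] = min(prev[j - 1] + cost,   # match / substitute
                         prev[j] + 1,          # delete from s
                         cur[j - 1] + 1)       # insert into s
        prev = cur
    return min(prev)                  # best match ends anywhere in r
```

The first row of zeros implements the free start and the minimum over the last row the free end; the total cost is $O(nm)$ time and $O(n)$ space.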
Now, in order to get scores for all starting positions, note that the computation of semi-global alignments with allowed deletions/insertions at the end conveniently provides the wanted array in the last row/column -- well, almost: that gives you the best matches against prefixes up to position $k$. So in order to get best matches against suffixes from position $k$, reverse both $r$ and $s$. | {
"domain": "cs.stackexchange",
"id": 313,
"tags": "algorithms, runtime-analysis, strings, dynamic-programming, string-metrics"
} |
Running time of modified BFS algorithm to find shortest path in weighted DAG | Question: While the shortest path can be calculated with $O(V+E)$ time over a weighted directed acyclic graph using topological sort, I wonder about the running time of the following BFS type algorithm I thought of.
BFS_requeue(s):
Initialization()
push(s) //push the source node
while the queue contains at least one vertex:
u = pull()
for all edges u -> v
w = edge_weight(u,v)
if dist(v) > dist(u) + w
dist(v) = dist(u) + w
pred(v) = u
push(v)
Compared to the standard BFS algorithms, this does not mark vertices, and one vertex can be pushed into the queue more than once. Although it seems that popping a vertex more than once can result in a slower running time in general, I am not sure of the exact running time of the algorithm on a (possibly negatively) weighted directed acyclic graph.
Although I suspect that in the worst case this runs like Bellman-Ford, in $O(VE)$ time, I cannot think of a concrete example. All the examples I thought of run in $O(V+E)$ time. So what is the running time of this algorithm on a weighted DAG?
I think regardless of the running time, this does identify the shortest path, essentially by relaxing all edges. However, I might be incorrect.
This algorithm is taken from page 7 of chapter 8 of Jeff Erickson's textbook on algorithms. I have only modified the algorithm so it applies to weighted graphs. Nonetheless, this problem is specifically about DAGs, rather than weighted graphs in general.
Answer: The running time can be exponential, so it is much worse than $O(|V|+|E|)$ or $O(|V| \cdot |E|)$.
Here is an explicit counterexample. Suppose you have a dag with $n$ vertices, numbered $1,2,\dots,n$, and there is an edge $i \to j$ for each $i,j$ with $i > j$, with weight $2^{i-j}-1$. Suppose that the for-loop traverses the neighbors from lowest-numbered vertex to highest-number vertex. The algorithm will start by visiting vertex $n$, then pushing $1,2,\dots,n-1$ onto the queue, visit vertex $1$, then $2,1$, and so forth. In particular, the condition dist(v) > dist(u) + w will always hold true, so the then-branch of the if-statement will always be executed.
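For experimentation, here is a sketch of the modified BFS run on this counterexample family (source $n$, neighbours scanned lowest-numbered first). The exact pop count depends on the queue discipline, but vertices are re-queued repeatedly, and the final distances are the true shortest-path distances: the all-consecutive path gives $\operatorname{dist}(j) = n-j$.

```python
# Modified BFS (re-queue on improvement) on the counterexample graph:
# vertices 1..n, edge i -> j with weight 2^(i-j) - 1 for every i > j.
from collections import deque

def bfs_requeue(n):
    INF = float("inf")
    dist = {v: INF for v in range(1, n + 1)}
    dist[n] = 0                                # source is vertex n
    pops = 0
    queue = deque([n])
    while queue:
        u = queue.popleft()
        pops += 1
        for v in range(1, u):                  # edges u -> v, lowest first
            w = 2 ** (u - v) - 1
            if dist[v] > dist[u] + w:          # relax and re-queue
                dist[v] = dist[u] + w
                queue.append(v)
    return dist, pops

dist, pops = bfs_requeue(8)
```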
What's the running time, on this graph, with this visit order? It satisfies the recurrence relation
$$T(n) = T(1) + T(2) + T(3) + \dots + T(n-1),$$
which solves to $T(n) = \Theta(2^n)$. | {
"domain": "cs.stackexchange",
"id": 21684,
"tags": "algorithms, graphs, algorithm-analysis, graph-traversal, weighted-graphs"
} |
Autocomplete not working anymore | Question:
Hi. When I first installed ROS Groovy, most of its commands had autocomplete and I found it very useful.
For some reason, autocomplete of ros command arguments (topic, service, package and node names) is not working any more. Any ideas how to get it to work again?
Originally posted by r0nald on ROS Answers with karma: 268 on 2013-01-25
Post score: 1
Original comments
Comment by dornhege on 2013-01-25:
Could you try a clean shell where you don't source specific files and set no environment variables, and then just source /opt/ros/groovy/setup.bash (assuming you use bash), and see if it works? (For me, amd64/groovy updated today and completion works.)
Comment by 2ROS0 on 2016-05-25:
What lines in the setup.bash script are responsible for tab completion? In some of my nodes, rosrun doesn't tab complete and I'd like to debug upwards from the root of the problem.
Answer:
Autocompletion is activated using setup.bash instead of setup.sh.
Originally posted by KruseT with karma: 7848 on 2013-01-25
This answer was ACCEPTED on the original site
Post score: 6
Original comments
Comment by r0nald on 2013-01-25:
Thanks, that information helped.
Comment by 2ROS0 on 2016-05-25:
Doesn't setup.bash just source the setup.sh file? | {
"domain": "robotics.stackexchange",
"id": 12576,
"tags": "ros"
} |
Simplifying effect of a hidden Weyl symmetry in a QFT on curved spacetime | Question: We consider AdS$_{d+1}$ in Poincaré coordinates:
$$
ds^2=\frac{1}{z^2}\left(-dt^2+dz^2+dx_{d-1}^2\right),
$$
where we set the AdS radius to unity. We study a scalar in this background with action
$$
S=-\frac{1}{2}\int d^{d+1}x\,\sqrt{-g}\left(\partial_\mu\phi\partial^\mu\phi+m^2\phi^2\right).
$$
I want to understand the following statement:
For $m^2=-\frac{(d-1)(d+1)}{4}$, there is a hidden Weyl symmetry present in the action which simplifies the form of the scalar correlators in the vacuum state.
What I was able to show is the following: If we consider a scalar with a different action
$$
S=-\frac{1}{2}\int d^{d+1}x\,\sqrt{-g}\left(\partial_\mu \phi \partial^\mu \phi+ \frac{d-1}{4d}R\phi^2\right).
$$
i.e. with coupling to the scalar curvature instead of a mass term, then this action is indeed invariant under Weyl transformations
$$
\begin{align}
g_{\mu\nu} &\to \Omega^2 g_{\mu\nu}, \\
\phi &\to \Omega^{\frac{1-d}{2}}\phi,
\end{align}
$$
with a spacetime dependent $\Omega=\Omega(x)$.
For AdS$_{d+1}$ the Ricci scalar is $R=-d(d+1)$, so with a non-dynamical AdS background one can rewrite the action as
$$
S=-\frac{1}{2}\int d^{d+1}x\,\sqrt{-g}\left(\partial_\mu \phi \partial^\mu \phi- \frac{(d-1)(d+1)}{4}\phi^2\right),
$$
so one finds the massive scalar with the $m^2=-\frac{(d-1)(d+1)}{4}$.
Now, when gravity is not dynamical and the Ricci scalar is not explicit in the action any more, how does this Weyl symmetry of the gravitational theory affect the scalar theory?
Does the hidden Weyl symmetry lead to a conformal symmetry of the scalar theory?
Answer: Upshot:
The following two facts can be used to argue that the scalar correlator simplifies in the special case described above.
When the scalar action is Weyl invariant, then the scalar equation of motion is covariant and we can use a Weyl transformation to simplify the equation.
There is a Weyl transformation that maps AdS to the upper half plane of flat Minkowski space. The correlator in flat space can be computed easily and the result can be transformed back to AdS using the inverse of the Weyl transformation above.
More details:
In the special case of $m^2=-\frac{(d-1)(d+1)}{4}$ the scalar action in Weyl invariant, i.e. under
$$
\begin{align}
g_{\mu\nu} &\to \Omega^2 g_{\mu\nu}, \\
\phi &\to \Omega^{\frac{1-d}{2}}\phi,
\end{align}
$$
and the equation of motion
$$
\left(\Box-\frac{d-1}{4d}R\right)\phi=0
$$
is covariant.
For $\Omega=z$, the transformed metric is $z^2g_{\mu\nu}=\eta_{\mu\nu}$, the flat $(d+1)$ dimensional Minkowski metric. Defining the rescaled field $\varphi=z^{\frac{1-d}{2}}\phi$, the equation of motion becomes
$$
0=\eta^{\mu\nu}\partial_\mu\partial_\nu\varphi=\left(-\partial_t^2+\vec{\nabla}^2+\partial_z^2\right)\varphi.
$$
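As a sanity check of this reduction, one can verify numerically that $(\Box-m^2)\phi$ vanishes for $\phi=z^{\frac{d-1}{2}}\psi$ whenever $\psi$ is linear in $z$ (so that $\partial_z^2\psi=0$), restricting to $z$-dependent fields for simplicity. A pure-Python finite-difference sketch, with $d=3$ and an arbitrarily chosen $\psi$:

```python
# Finite-difference check that (Box - m^2) phi = 0 on AdS_{d+1} when
# phi = z^((d-1)/2) * psi with psi linear in z. For z-dependent fields the
# AdS wave operator is  Box phi = z^(d+1) d/dz ( z^(1-d) d phi/dz ).
def box_phi(phi, z, d, h=1e-4):
    def f(zz):  # inner derivative z^(1-d) * phi'(zz), central difference
        return zz**(1 - d) * (phi(zz + h) - phi(zz - h)) / (2 * h)
    return z**(d + 1) * (f(z + h) - f(z - h)) / (2 * h)

d = 3
m2 = -(d - 1) * (d + 1) / 4.0                 # the special mass value

def phi(z):
    return z**((d - 1) / 2.0) * (0.7 + 1.3 * z)   # psi = 0.7 + 1.3 z

residual = max(abs(box_phi(phi, z, d) - m2 * phi(z)) for z in (0.5, 1.0, 2.0))
```

Here $\Box\phi=z^{d+1}\partial_z(z^{1-d}\partial_z\phi)$ follows from $\sqrt{-g}=z^{-(d+1)}$ and $g^{zz}=z^2$; the residual is at the level of the finite-difference error.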
The solution is
$$
u=Ae^{-i\omega t+i\vec{k}\cdot\vec{x}+iqz}+Be^{-i\omega t+i\vec{k}\cdot\vec{x}-iqz},
$$
where $q=\sqrt{\omega^2-\vec{k}^2}$. The modes with $\vec{k}^2>\omega^2$ are forbidden as they are not normalizable in the region $z\to\infty$. Imposing Dirichlet boundary conditions $\left.\varphi\right|_{z=0}=0$ leads to
$$
u=C e^{i{k}\cdot{x}}\sin(qz)
$$
where $k\cdot x=-\omega t+\vec{k}\cdot\vec{x}$. Now, one can expand the field $\varphi$ in the eigenmodes $u$ we found. We separate positive and negative frequency modes:
$$
\varphi(x,z)=\int_{k^0>|\vec{k}|} d^d k\, \left(C a_{k} e^{i{k}\cdot{x}}\sin(qz)+C^* a_k^\dagger e^{-i{k}\cdot{x}}\sin(qz)\right).
$$
From this point on, quantizing and computing the two-point function is straightforward. First, one computes the Klein-Gordon norm of the mode functions to find the normalization $C$. Canonical commutation relations for $a_k$ and $a_k^\dagger$ are imposed and the vacuum two-point function can be found.
The result can be transformed back to $AdS$ using
$$
\langle\phi(x,z)\phi(y,z')\rangle= z^{-\frac{1-d}{2}}{z'}^{-\frac{1-d}{2}}\langle\varphi(x,z)\varphi(y,z')\rangle
$$
The result,
$$
\langle\varphi(x,z)\varphi(x',z')\rangle=\frac{\Gamma\left(\frac{d-1}{2}\right)}{4\pi^{\frac{d+1}{2}}}\left(\frac{1}{[(x-x')^2+(z-z')^2]^{\frac{d-1}{2}}}-\frac{1}{[(x-x')^2+(z+z')^2]^{\frac{d-1}{2}}}\right)
$$
is very interesting. Translational invariance along $x$ dictates that the correlator can only depend on $x-x'$. Because of the Dirichlet boundary conditions at $z=0$, which is where the AdS boundary is mapped under the rescaling, translation invariance in the $z$-direction is broken. In addition to a term depending on $z-z'$, there is a term $z+z'$. This can be compared to electromagnetism, where the Dirichlet boundary conditions can be enforced by introducing mirror charges. $-z'$ is basically a mirror image of $z'$ upon reflection at $z=0$, so there is a second term due to the Dirichlet boundary conditions on the AdS boundary. | {
"domain": "physics.stackexchange",
"id": 21717,
"tags": "quantum-field-theory, gravity, anti-de-sitter-spacetime, qft-in-curved-spacetime, scale-invariance"
} |
What is the difference between gravitation and magnetism? | Question: If you compress a large mass, on the order of a star or the Earth, into a very small space, you get a black hole. Even for very large masses, it is possible in principle for it to occupy a very small size, like that of a golf ball.
I started to think, how would matter react around this golf ball sized Earth? If I let go of a coffee mug next to it, it would go tumbling down toward the "golf ball". Isn't that exactly how magnets work, with paperclips for example?
Magnets are cool because they seem to defy the laws of gravity, on a scale that we can casually see. Clearly, the force carrier particles that produce electromagnetic attraction are stronger than gravity on this scale (or are at least on par: gravity plays some role in the paperclip's path, but so does electromagnetism).
My question is, why do we try to consider gravity as anything different from magnetism? Perhaps "great mass" equates to a positively (or negatively) charged object. Pull enough matter in close and somewhere you've crossed the line between what we call the electromagnetic force and the gravitational force. They are one and the same, no?
Answer: There are several qualitative and quantitative differences between gravity and magnetism.
When you attract 'neutral' bits of metal with a magnet, or attach it to something like a plate of metal, what's happening is that individual atoms of the metal react to the magnetic force. In a ferromagnetic metal, one with a similar electronic structure to Iron or Nickel, the individual atoms work like nanoscopic magnets; but they are very weak, and they are not lined up with one another, so that their fields cancel one another out over any macroscopic distance. But if you bring a "large" magnet (such as a fridge magnet) up to them, the field of the large magnet causes them to align with the field, so that they are pulled towards the magnet — and the magnet is pulled towards them. This is why some metal objects are attracted to magnets.
Other metals, such as aluminum or silver, also react to magnets, but much more weakly (and in some cases repulsively): the way that they react to magnetic fields is described as paramagnetism (for materials which align very weakly with magnetic fields) and diamagnetism (for materials which align very weakly against magnetic fields).
The very fact that different materials react differently to magnetic fields is something that sets magnetism apart from gravity. Gravitation works equally with masses of any sort, and is always attractive (as noted by Nic); magnetism can both attract and repel, and do so with different degrees of force, as between ferromagnetic, paramagnetic, and diamagnetic materials. But of course, quite famously, even a single object can be both attracted and repelled by magnetic forces: the north poles of two magnets repel each other, as do the south poles; only opposite poles attract each other. (This, of course, is the basis on which compasses work.)
The way that these forces operate over distance also varies. Gravity very famously (but only approximately) obeys an inverse-square law; the field far from a bar magnet, however, decreases like the inverse of the cube of the distance from the magnet.
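A quick numeric illustration of that falloff difference (toy values only, not a physical simulation):

```python
# Toy comparison of field falloff: inverse-square (gravity, monopole)
# versus inverse-cube (far field of a magnetic dipole).
def inverse_square(r):
    return 1.0 / r**2

def inverse_cube(r):
    return 1.0 / r**3

# Doubling the distance leaves 1/4 of an inverse-square field,
# but only 1/8 of an inverse-cube field.
print(inverse_square(2.0) / inverse_square(1.0))  # 0.25
print(inverse_cube(2.0) / inverse_cube(1.0))      # 0.125
```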
Finally, moving electric charges produce magnetic forces; whereas they don't cause any gravitational forces which could not be accounted for just by the fact that the charged particles have mass (whether moving or at rest).
So, on both the macroscopic level and on the level of individual atoms, the forces of gravity and magnetism act quite differently. | {
"domain": "physics.stackexchange",
"id": 1693,
"tags": "electromagnetism, gravity, equivalence-principle"
} |
understanding time constant meaning in signal processing | Question: From my DSP book, I am reading that the exponential signal $x(t)=e^{-at}$ has some time constant value $c$: say $e^{-at}=e^{-t/c}$, which means that $a=1/c$. But I want to understand: what is the general formula for finding the time constant of other signals, and what does it express? I mean, what is the real meaning of the time constant of a signal? Clearly it can't be the value at which the signal repeats itself, because in that case we call this number the period. Thanks in advance.
Answer: There is no general formula to calculate time constants for all signals. Time constants are defined for some signals. For your exponential signal
\begin{equation}
e^{-a \cdot t} = e^{-t / c}
\end{equation}
it just means that after every interval of length $c$, the value of the function has decreased by a further factor of $e$. A more common interpretation is half-life, defined as
$$t_{1/2} = \frac{\ln(2)}{a}$$
which is the time after the function has gone from 1 (at time zero) to 1/2 (at time $t_{1/2}$) and so on.
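To make the definitions concrete, here is a small numeric check (the values of $a$ and the sampling grid are illustrative, not from the book):

```python
import numpy as np

# For x(t) = exp(-a t), the time constant is c = 1/a: after each interval
# of length c the signal has decayed by a further factor of e.
a = 2.0
c = 1 / a                    # time constant, 0.5
half_life = np.log(2) / a    # time to fall from 1 to 1/2

t = np.linspace(0, 5, 5001)
x = np.exp(-a * t)

# Numerical check: first sample where the signal has fallen below 1/e
idx = int(np.argmax(x < 1 / np.e))
print(t[idx])  # close to c = 0.5
```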
However, there is no general time constant that can be applied to any signal, because, as you already mentioned, it is unclear what it represents. | {
"domain": "dsp.stackexchange",
"id": 1316,
"tags": "discrete-signals, signal-analysis, continuous-signals"
} |
swapping between sorting algorithms for small input size left over | Question: Is it suggested to swap between sorting algorithms? Merge sort certainly performs better on large input sizes; however, insertion sort performs better on small input sizes.
Analysis based on their graph comparison of the running times of insertion sort and merge sort.
How often do people swap the algorithms they are using? For example, sorting a problem of size 100K with merge sort and then switching to insertion sort when the subproblem size reaches 500 inputs?
Is it advisable to follow such techniques? For small input sizes the constants ($c$) do matter, and insertion sort, having a smaller constant than merge sort, will certainly perform better, I guess.
Answer: People don't usually program their own sorting routines, and I would advise against it, unless you're implementing a non-comparison-based sort such as counting sort. Library sorting routines are optimized beyond what the casual programmer can achieve.
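To illustrate the idea from the question — a merge sort that hands small subproblems to insertion sort — here is a rough sketch (the cutoff of 16 is an arbitrary illustrative choice; libraries tune it empirically):

```python
# Illustrative hybrid sort (not a library implementation): merge sort
# that hands subarrays below a cutoff to insertion sort.
CUTOFF = 16

def insertion_sort(a, lo, hi):
    """Sort a[lo..hi] (inclusive) in place."""
    for i in range(lo + 1, hi + 1):
        key = a[i]
        j = i - 1
        while j >= lo and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key

def hybrid_merge_sort(a, lo=0, hi=None):
    """Sort a[lo..hi] (inclusive) in place."""
    if hi is None:
        hi = len(a) - 1
    if hi - lo + 1 <= CUTOFF:
        insertion_sort(a, lo, hi)  # small subproblem: low constants win
        return
    mid = (lo + hi) // 2
    hybrid_merge_sort(a, lo, mid)
    hybrid_merge_sort(a, mid + 1, hi)
    merged, i, j = [], lo, mid + 1
    while i <= mid and j <= hi:          # standard merge step
        if a[i] <= a[j]:
            merged.append(a[i]); i += 1
        else:
            merged.append(a[j]); j += 1
    merged.extend(a[i:mid + 1])
    merged.extend(a[j:hi + 1])
    a[lo:hi + 1] = merged
```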
If you're interested in the best practices of the field, you'll have to look at library sorting routines. For example, python uses Timsort, which switches to insertion sort for small arrays. The GNU implementation of Quicksort also switches to insertion sort for small arrays. So it seems switching to insertion sort for small arrays is indeed advisable. | {
"domain": "cs.stackexchange",
"id": 21495,
"tags": "algorithms, algorithm-analysis, sorting"
} |
Is there a name for the type of boundary condition where the initial boundary values are known but are not held constant over time? | Question: I'm exploring the heat equation to model a particular 1D scenario, and I understood the Dirichlet and Neumann boundary conditions, but neither are sufficient for my scenario. Assuming a rod of length L, I want the boundaries to have a particular initial value ($U(0,0) = 400$, $U(L,0) = 300$), but the temperatures at the boundaries do not need to be constant across time ($U(0,0) \ne U(0,t)$, $U(L,0) \ne U(L,t)$). Heat does flow in and out of the boundary, but only towards the rod, not the air.
Now, my question is, is there any sort of name for this type of boundary condition, where the initial boundary values are known, and are not held constant over time?
I hope the explanation of my scenario was clear. Please drop a comment in case you need clarification on some point.
Answer: Well, after consulting my professor, it seems it was a Neumann boundary condition with zero flux at both boundaries ($\phi(0,t) = \phi(L,t) = 0$) | {
"domain": "physics.stackexchange",
"id": 85086,
"tags": "thermodynamics, terminology, boundary-conditions, thermal-conductivity, heat-conduction"
} |
Detect abrupt changes in signal | Question: I'm using MATLAB and I'm trying to find the index of the first signal drop as shown in the figure below:
I tried the function findchangepts but it didn't give me the exact index. Any solutions?
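For illustration, here is how such a drop can be located by differencing and thresholding (sketched in Python/NumPy rather than MATLAB; the toy signal and the particular threshold are assumptions, not the question's data):

```python
import numpy as np

# Toy signal with an abrupt drop at sample 50, standing in for the
# data shown in the question's figure.
x = np.ones(100)
x[50:] = 0.2

d = np.abs(np.diff(x))                       # magnitude of first difference
threshold = 0.5 * (x.max() - x.min())        # scale threshold to dynamic range
drop_index = int(np.argmax(d > threshold))   # first index exceeding threshold
print(drop_index)  # 49: the drop happens between samples 49 and 50
```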
Answer: A cheap and simple way is to differentiate the signal numerically and threshold the magnitude of the difference to identify the abrupt changes. While deciding the threshold you should take into account the dynamic range of the signal. | {
"domain": "dsp.stackexchange",
"id": 8627,
"tags": "matlab, signal-analysis"
} |
How to simulate NGS reads, controlling sequence coverage? | Question: I have a FASTA file with 100+ sequences like this:
>Sequence1
GTGCCTATTGCTACTAAAA ...
>Sequence2
GCAATGCAAGGAAGTGATGGCGGAAATAGCGTTA
......
I also have a text file like this:
Sequence1 40
Sequence2 30
......
I would like to simulate next-generation paired-end reads for all the sequences in my FASTA file. For Sequence1, I would like to simulate at 40x coverage. For Sequence2, I would like to simulate at 30x coverage. In other words, I want to control my sequence coverage for each sequence in my simulation.
Q: What is the simplest way to do that? Any software I should use? Bioconductor?
Answer: I am not aware of any software that can do this directly, but I would split the FASTA file into one sequence per file, loop over them in BASH, and invoke ART, the sequence simulator (or another), on each sequence.
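The same split-and-loop idea can be sketched in Python instead of BASH. The `art_illumina` invocation at the end is a placeholder — the flag names are assumptions, so check ART's own documentation before relying on them:

```python
import subprocess

def read_fasta(path):
    """Yield (name, sequence) pairs from a FASTA file."""
    name, chunks = None, []
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if line.startswith(">"):
                if name is not None:
                    yield name, "".join(chunks)
                name, chunks = line[1:].split()[0], []
            elif line:
                chunks.append(line)
    if name is not None:
        yield name, "".join(chunks)

def simulate_all(fasta_path, coverage_path):
    # coverage file: "Sequence1 40" per line, as in the question
    coverage = dict(line.split() for line in open(coverage_path))
    for name, seq in read_fasta(fasta_path):
        single = f"{name}.fa"
        with open(single, "w") as out:
            out.write(f">{name}\n{seq}\n")
        # Placeholder ART call -- verify flags against the real tool:
        subprocess.run(["art_illumina", "-p", "-i", single,
                        "-f", coverage[name], "-o", name + "_reads"])
```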
For more information about ART, please see their paper here: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3278762/ | {
"domain": "bioinformatics.stackexchange",
"id": 2106,
"tags": "fasta, ngs, simulated-data"
} |
Optimizing $\epsilon$ in $\epsilon$-kernel | Question: The notion of $\epsilon$-kernel, as defined by Agarwal et al. ("Approximating extent measures of points"), is the following.
Let $S^{d−1}$ denote the unit sphere centered at the origin in $R^d$. For any set $P$ of points in $R^d$ and any direction $u \in S^{d−1}$, we define the directional width of $P$ in direction $u$, denoted by $\omega(u, P)$, to be
$\omega(u, P)=\max_{p\in P} \langle u, p \rangle - \min_{p\in P} \langle u, p \rangle$
where $\langle \cdot, \cdot\rangle $ is the standard inner product. Let $\epsilon > 0$ be a parameter. A subset $Q \subseteq P$ is called an $\epsilon$-kernel of $P$ if for each $u \in S^{d−1}$, $(1 − \epsilon)\omega(u, P) \leq \omega(u, Q)$.
It was shown that one can compute an $\epsilon$-kernel of $P$ of size $O(1/\epsilon^{(d−1)/2})$ in time $O(n + 1/\epsilon^{d−(3/2)})$. (Chan, "Faster coreset constructions and data stream algorithms in fixed dimensions".)
I am looking at the problem that is the reverse of this: Let $k$ be a parameter. I want to find an $\epsilon$-kernel of size at most $k$ such that $\epsilon$ is minimized. The running time could be exponential in $d$ but should be polynomial in $k$.
My general question is whether there is anything known for this problem. Some specific questions:
1. Is the problem NP-hard when d=2? (Any clue what problem we can reduce from?)
2. Is there any approximation algorithm for d=2?
Any general comments are also welcomed.
Answer: Jeff Philips has a related result showing the stability of kernels: http://www.cs.utah.edu/~jeffp/papers/stable-kernel.pdf.
As for the problem itself, it can be formulated as a hitting set problem. Indeed, you need to pick a point into the kernel for each "complement of a slab" that contains $\geq \epsilon_k$ fraction of the input points. Here $\epsilon_k$ is the minimum $\epsilon$ that has a kernel of size $k$. Since this is just a hitting set problem, it can be approximated up to $O( \log k)$ factor in geometric settings. You need to guess the value of $\epsilon_k$, but naively just do a binary search on epsilon. As such, one can get two results:
A set of size $O( k \log k)$ that is an $\epsilon_k$ kernel.
A set of size k that is an $\epsilon_{O(k/\log k)}$ kernel.
The algorithm for computing the set cover is the reweighting algorithm. In fact, the original paper by Clarkson solves a related problem about polytope approximation:
@inproceedings{c-apca-93,
author = "Kenneth L. Clarkson",
title = "Algorithms for Polytope Covering and Approximation",
booktitle = "Proc. 3rd Workshop Algorithms Data Struct.",
series = LNCS,
volume = 709,
publisher = "Springer-Verlag",
year = 1993,
pages = "246--252"
}
As for what to implement, i would use an incremental algorithm that performs quite well in practice, and add points an incremental fashion. See for example, the paper "Robust Shape Fitting via Peeling and Grating Coresets" and references therein.
it seems believable BTW that one should be able to do constant factor approximation instead of logarithmic.
I don't know of a formal proof that this problem is NP-hard, but it would be very surprising if it were not, IMHO.
The problem is probably solvable exactly in the plane using dynamic programming, I think.... | {
"domain": "cstheory.stackexchange",
"id": 146,
"tags": "cg.comp-geom"
} |
Validating a list of dictionaries of names and tags | Question: I created a Python function to check the types from a list and also check the keys inside an dictionary within that list.
I have the following data:
[
dict(name='Frank', tags=['dog', 'cat']),
dict(name='Manfred', tags=['cat', 'chicken'])
]
My function looks like this:
def _validate_data(self, data):
    if not isinstance(data, list):
        raise TypeError('not a list')
    else:
        for element in data:
            if not isinstance(element, dict):
                raise TypeError('not a dictionary')
            else:
                if not all(key in element for key in ('name', 'tags')):
                    raise ValueError('Keys not inside dictionary')
This works fine, but I don't like the structure and I also think there may be a smarter way to code this function. I hope someone could give me some nice and helpful hints.
Answer: There are some modules that might help you get rid of the structure you're complaining about, like marshmallow or voluptuous, and since you didn't add the reinventing-the-wheel tag I guess that's perfectly okay.
For the sake of example, I'll refer to the former one because IMO it better fits our purpose (and is also probably clearer).
From the docs:
Marshmallow is an ORM/ODM/framework-agnostic library for converting
complex datatypes, such as objects, to and from native Python
datatypes.
In short, marshmallow schemas can be used to:
Validate input data.
Deserialize input data to app-level objects.
Serialize app-level objects to primitive Python types. The serialized objects can then be rendered to standard formats such as
JSON for use in an HTTP API.
First of, you'll need to define your Schema:
class DataSchema(Schema):
    name = fields.String(required=True)
    tags = fields.List(fields.String(), required=True)
In the above, name and tags are the keys of our dictionaries. In our class I've specified each key type (str and list). They're also mandatory, so I added required=True.
Next, to validate our top-level list, we need to instantiate our list item schema with many=True argument and then just load the data we need:
data, errors = DataSchema(many=True).load([
    {'name': 'Frank', 'tags': ['dog', 'cat']},
    {'name': 'Manfred', 'tags': ['dog', 'chicken']}
])
Printing the above:
print(data)
print(errors)
Will have the following output:
[{'name': 'Frank', 'tags': ['dog', 'cat']}, {'name': 'Manfred', 'tags': ['dog', 'chicken']}]
{}
Now, if we try to pass an invalid data to our dict, the errors will warn us about this. For example, passing a str instead of a list in our tags key will result in:
[{'name': 'Frank'}, {'name': 'Manfred', 'tags': ['dog', 'chicken']}]
{0: {'tags': ['Not a valid list.']}}
Full code:
from marshmallow import Schema, fields

class DataSchema(Schema):
    name = fields.String(required=True)
    tags = fields.List(fields.String(), required=True)

data, errors = DataSchema(many=True).load([
    {'name': 'Frank', 'tags': ['dog', 'cat']},
    {'name': 'Manfred', 'tags': ['dog', 'chicken']}
])
Now, IDK if the above will be valid for all the test-cases (e.g. it might allow you to pass an empty list), but it should give you a good overview of what you can achieve with this module. | {
"domain": "codereview.stackexchange",
"id": 26251,
"tags": "python, validation, type-safety"
} |
Lattice pattern | Question: I'm trying to find and outline a non-primitive conventional unit mesh, I'd also like to find any mirrors planes and rotional symmetry axes.
I first thought it would be like this: , but I was told the answer was smaller in a different thread.
This was the solution left by LDC3
Which one is correct? I don't know if his is correct because I thought the conventional unit mesh connected to points that looked the same as the original small pattern I picked(the small hexagon in the middle)
Also tried the axis of symmetry, but am unsure if 8 lines from the topside or 6 for a hexagonal pattern should come out.
https://imgur.com/Bkjwhih
I think for hexagonal structures there should be more lines
like this https://i.stack.imgur.com/YuTVf.png
Any help is much appreciated.
Answer: The Wallpaper group classifies 2D patterns according to their symmetries. This looks like a member of group p6.
There is a 6-fold rotation symmetry about the center of the hexagon, but the blue section shows there is no reflection symmetry. Groups p6 and p6m are the only ones with 6-fold rotation. To be p6m, reflection would be needed as well.
Here is a primitive unit cell that matches the illustration of the p6 group.
And here is a view of the larger scale. | {
"domain": "physics.stackexchange",
"id": 13725,
"tags": "units"
} |
Navigation Stack -Error | Question:
I have been using the Segbot Navigation package to navigate our robot through office work spaces with cubicles and rooms. It sometimes has problem navigating through narrow aisles and doorways (not always very consistent). I have attached a file with some of the error/warning messages I am getting.
After going through navigation stack literature, I feel 'obstacle range' in the Costmap configuration is one of the parameters I could change. Please let me know if you have suggestions/recommendations regarding any other parameters that I could modify to improve the performance.
I feel the map we are getting is pretty good. Will manually modifying it improve the performance?
[ WARN] [1411756064.480605628]: Calculation of Distance between bubble and nearest obstacle failed. Frame 0 of 5 in collision. Plan invalid
[ WARN] [1411756064.480653208]: Failed to add frames to existing band
[ WARN] [1411756064.824642032]: Map update loop missed its desired rate of 5.0000Hz... the loop actually took 0.2956 seconds
[ WARN] [1411756064.880660273]: Calculation of Distance between bubble and nearest obstacle failed. Frame 105 of 130 in collision. Plan invalid
[ WARN] [1411756064.880743612]: Conversion from plan to elastic band failed. Plan probably not collision free. Plan not set for optimization
[ERROR] [1411756064.880786228]: Setting plan to Elastic Band method failed!
[ERROR] [1411756064.880828885]: Failed to pass global plan to the controller, aborting.
[ INFO] [1411756192.792013719]: Global plan set to elastic band for optimization
[ WARN] [1411756201.242162218]: Calculation of Distance between bubble and nearest obstacle failed. Frame 0 of 1 in collision. Plan invalid
[ WARN] [1411756201.242268942]: Failed to add frames to existing band
[ INFO] [1411756201.291943229]: Global plan set to elastic band for optimization
[ WARN] [1411756202.241923475]: Optimization failed - Band invalid - No controls availlable
[ INFO] [1411756202.292122263]: Global plan set to elastic band for optimization
[ WARN] [1411756217.241859529]: Optimization failed - Band invalid - No controls availlable
[ INFO] [1411756217.291952232]: Global plan set to elastic band for optimization
[ WARN] [1411756218.641859635]: Calculation of Distance between bubble and nearest obstacle failed. Frame 0 of 1 in collision. Plan invalid
[ WARN] [1411756218.641931723]: Failed to add frames to existing band
[ INFO] [1411756218.691938193]: Global plan set to elastic band for optimization
[ WARN] [1411756219.591840715]: Calculation of Distance between bubble and nearest obstacle failed. Frame 0 of 1 in collision. Plan invalid
[ WARN] [1411756219.591897692]: Failed to add frames to existing band
[ INFO] [1411756219.641959228]: Global plan set to elastic band for optimization
[ WARN] [1411756223.841820722]: Optimization failed - Band invalid - No controls availlable
[ WARN] [1411756223.891952513]: Calculation of Distance between bubble and nearest obstacle failed. Frame 79 of 96 in collision. Plan invalid
[ WARN] [1411756223.892017305]: Conversion from plan to elastic band failed. Plan probably not collision free. Plan not set for optimization
[ERROR] [1411756223.892063528]: Setting plan to Elastic Band method failed!
[ERROR] [1411756223.892101993]: Failed to pass global plan to the controller, aborting.
Originally posted by pnambiar on ROS Answers with karma: 120 on 2014-09-29
Post score: 1
Original comments
Comment by pnambiar on 2014-09-29:
Conversion from plan to elastic band failed. Plan probably not collision free. Plan not set for optimization
Setting plan to Elastic Band method failed!
Failed to pass global plan to the controller, aborting.
Comment by anonymous60874 on 2022-03-15:
Hi, I have the same problem but couldn't solve it. Can someone help me.
Answer:
I feel the map we are getting is pretty good.
Mapping (gmapping, etc.) doesn't have anything to do with the navigation stack. To experiment with getting a good nav setup, get a static map separately via gmapping and teleoperation, or set up navigation without a static map (using fake_localization instead of amcl to publish the odom->map transform).
Assuming you're running a typical move_base setup, 'Map update loop missed' means that your PC wasn't able to process costmap updates as quickly as it wanted. Since your system is not realtime, it tries to run loops at a certain rate set in the parameters. Specifically, one of the costmaps inside move_base was unable to update at 5 Hz; this is usually due to too much processing load on the system. However, since that message only popped up once, it's likely a corner case and you shouldn't need to reduce/alter any of the rates in your parameter files.
From the remainder of the messages, I'm guessing you're running eband_local_planner. If you're just trying to get the navigation stack up and running, you could try the default base_local_planner instead, since it's a little more battle-tested. Then you can tune the rest of the stack (move_base, costmap, and global_planner parameters) before delving into swapping in a more experimental planner.
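Swapping planners is just a parameter on the move_base node. A sketch of what that might look like in a launch file (the package and file names here are made up; the plugin string for the stock planner is `base_local_planner/TrajectoryPlannerROS`, and eband would be `eband_local_planner/EBandPlannerROS`):

```xml
<!-- Sketch only: hypothetical package/file names for illustration -->
<node pkg="move_base" type="move_base" name="move_base" output="screen">
  <param name="base_local_planner"
         value="base_local_planner/TrajectoryPlannerROS"/>
  <rosparam file="$(find my_robot_nav)/config/costmap_common.yaml"
            command="load" ns="global_costmap"/>
</node>
```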
Originally posted by paulbovbel with karma: 4518 on 2014-09-29
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 19563,
"tags": "ros, navigation, costmap, collision-map"
} |
How&why does change in concentration affect cell potential | Question:
I'm a bit confused about why (in 1c) if the air pressure is lower, and there is a lower concentration of oxygen, why the cell potential decreases.
It checks out fine with the Nernst Equation, but conceptually, voltage is the driving force or energy of the current. Wouldn't there just be less electrons, not a lower voltage?
Like how the standard reduction potential isn't based on moles of reactant?
Answer:
I'm a bit confused about why (in 1c) if the air pressure is lower, and there is a lower concentration of oxygen, why the cell potential decreases.
Think about it this way. Oxygen will diffuse into the cathode regardless of whether current is flowing or not. Obviously an infinite amount of oxygen can't flow into the cathode, or else all the oxygen would be sucked out of Earth's atmosphere. So there has to be some distribution coefficient, D, that gives the limit of how much oxygen the cathode can absorb. The notion here is that if 100% oxygen allows an amount $x$ of oxygen to diffuse into the electrode, then an atmosphere with roughly 20% oxygen and 80% nitrogen would allow $0.20x$ of oxygen to diffuse into the electrode. Thus an expression much akin to the Nernst equation for liquids applies: for the oxygen reduction half-reaction $\mathrm{O_2 + 4H^+ + 4e^- \rightarrow 2H_2O}$ it reads $E = E^\circ - \frac{RT}{4F}\ln\frac{1}{p_{\mathrm{O_2}}[\mathrm{H^+}]^4}$, so a smaller partial pressure $p_{\mathrm{O_2}}$ makes the logarithmic term larger and $E$ smaller. Hence the cell voltage with only 20% oxygen is lower than if the cell were in 100% oxygen. | {
"domain": "chemistry.stackexchange",
"id": 10745,
"tags": "electrochemistry"
} |
How do plant cell vacuoles form? | Question: Does the plant cell vacuole form by the invagination of the cell membrane?
Answer:
Does the plant cell vacuole form by the invagination of the cell membrane?
Yes, in part, but the complete answer is way more complicated than that.
Endosomes (the vesicles formed by "invagination" of the cell membrane, as you said) are responsible for (part) of the biomembrane of the cell vacuole. However, some biomembrane comes from the Golgi-ER system, and almost all proteins come from ER (having passed by the Golgi apparatus).
Have a look at this image (Marty, 1999):
Its legend says:
Seven basic pathways are used for the biogenesis, maintenance, and supplying of vacuoles. Pathway 1: entry and transport in the early secretory pathway (from ER to late Golgi compartments). Pathway 2: sorting of vacuolar proteins in the trans-Golgi network (TGN) to a pre/provacuolar compartment (PVC) via an early biosynthetic vacuolar pathway. Pathway 3: transport from PVC to vacuole via the late biosynthetic vacuolar pathway. Pathway 4: transport from early secretory steps (ER to Golgi complex; pathway 1) to the vacuole via an alternative route with possible material accretion from Golgi (indicated by the asterisk). Pathway 5: endocytotic pathway from the cell surface to the vacuole via endosomes. Pathway 6: cytoplasm to vacuole through autophagy by degradative or biosynthetic pathways. Pathway 7: transport of ions and solutes across the tonoplast. AV, autophagic vacuole; E, early endosome; ER, endoplasmic reticulum; PVC, pre/provacuolar compartment; TGN, trans-Golgi network.
According to the same paper (Marty, 1999):
Experimental evidence suggests that material within the vacuolar system in plants derives confluently from both an intracellular biosynthetic pathway and a coordinated endocytotic pathway. (emphases mine)
Source: Marty, F. (1999). Plant Vacuoles. THE PLANT CELL ONLINE, 11(4), pp.587-600. | {
"domain": "biology.stackexchange",
"id": 7829,
"tags": "cell-biology, organelle"
} |
What do the different grades of chemicals mean? | Question: When purchasing chemicals from Sigma, Fisher, or wherever, there are often -grade's attached to their description like reagent-grade, technical-grade, analytical-grade, or more niche-sounding biotech-grade, HPLC-grade, DNA grade (DNase free perhaps?)
Is there some sort of standard for what these actually mean or are they arbitrary, differing from supplier to supplier or even chemical to chemical? Beyond the 'niche' grades, is there a common order of purity between the others or do they depend on the types of impurities?
Answer: Sigma-Aldrich gives a very useful table outlining what the different purity levels are and suggested applications. I was less successful at finding equal documentation from some of the other suppliers, but the analysis for the purity of the chemicals they sell is in the catalog and on the bottle.
In general, technical grade or laboratory grade are the lowest purity. ACS Reagent grade means that the chemical conforms to specifications defined by the Committee on Analytical Reagents of the American Chemical Society (but Aldrich "ReagentPlus" means >95% pure). So, "ACS Reagent grade" chemicals should be comparable from different suppliers. Analytical grade is generally the most pure.
For some of the other grades, such as HPLC grade solvents, the issue is less about overall purity than about being free of substances that would interfere with a particular application. I've reproduced a piece of the table below to give you an idea of the information available. | {
"domain": "chemistry.stackexchange",
"id": 6865,
"tags": "experimental-chemistry, concentration"
} |
Sentimental Analysis on Twitter Data | Question: What are the best ways to perform sentiment analysis on Twitter data for which I don't have labels?
Answer: You should look at literature on unsupervised sentiment analysis. The paper by Peter Turney could be a good starting point.
Thumbs Up or Thumbs Down? Semantic Orientation Applied to Unsupervised Classification of Reviews (Turney, 2002)
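As a minimal illustration of the lexicon-style unsupervised approach (the tiny word lists are toy assumptions, not Turney's actual PMI-based method):

```python
# Toy unsupervised (lexicon-based) sentiment scorer. Real systems use
# large lexicons or Turney's PMI-based semantic orientation.
POSITIVE = {"good", "great", "love", "excellent", "happy"}
NEGATIVE = {"bad", "terrible", "hate", "awful", "sad"}

def score(tweet):
    words = tweet.lower().split()
    s = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if s > 0:
        return "positive"
    if s < 0:
        return "negative"
    return "neutral"

print(score("I love this great phone"))  # positive
```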
You can also check this if you use R https://datascienceplus.com/unsupervised-learning-and-text-mining-of-emotion-terms-using-r/ | {
"domain": "datascience.stackexchange",
"id": 4053,
"tags": "data-mining, sentiment-analysis, twitter"
} |
Difference between Validation data and Testing data? | Question: I am a bit confused about validation data. What is this data mainly for? In some tutorials I see training images (which I understand), validation images (which I don't), and testing images (which I understand). So what are validation images mainly for?
Answer: There are two uses for the validation set:
1) Knowing when to stop training
Some models are trained iteratively - like neural nets. Sooner or later the model might start to overfit the training data. That's why you repeatedly measure the model's score on the validation set (like after each epoch) and you stop training once the score on the validation set starts degrading again.
From Wikipedia on Overfitting:
"Training error is shown in blue, validation error in red, both as a function of the number of training cycles. If the validation error increases (positive slope) while the training error steadily decreases (negative slope) then a situation of overfitting may have occurred. The best predictive and fitted model would be where the validation error has its global minimum."
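In code, that early-stopping loop might look like this (the training and scoring callables are stand-ins, and the `patience` value is an illustrative choice):

```python
# Sketch of early stopping: train while the validation error keeps
# improving, stop after `patience` epochs without improvement.
def early_stopping_train(train_one_epoch, validation_error,
                         max_epochs=100, patience=3):
    best_err, best_epoch, epochs_since_best = float("inf"), 0, 0
    for epoch in range(max_epochs):
        train_one_epoch()
        err = validation_error()
        if err < best_err:
            best_err, best_epoch, epochs_since_best = err, epoch, 0
        else:
            epochs_since_best += 1
            if epochs_since_best >= patience:
                break  # validation error stopped improving: likely overfitting
    return best_epoch, best_err
```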
2) Parameter selection
Your model needs some hyper-parameters to be set, like the learning rate, what optimizer to use, number and type of layers / neurons, activation functions, or even different algorithms like neural net vs SVM... you'll have to fiddle with these parameters, trying to find the ones that work best.
To do that you train a model with each set of parameters and then evaluate each model using the validation set. Finally you select the model / the set of parameters that yielded the best score on the validation set.
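A bare-bones sketch of that selection loop (the train and evaluation functions are stand-ins for your real pipeline):

```python
# Pick the hyper-parameters whose trained model scores best on the
# validation set. `train` and `evaluate_on_validation` are stand-ins.
def select_params(param_grid, train, evaluate_on_validation):
    best_params, best_score = None, float("-inf")
    for params in param_grid:
        model = train(params)
        score = evaluate_on_validation(model)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score
```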
In both of the above cases the model might have fit the data in the validation set, resulting in a biased (slightly too optimistic) score - which is why you evaluate the final model on the test-set before publishing its score. | {
"domain": "datascience.stackexchange",
"id": 853,
"tags": "machine-learning, data-mining, deep-learning"
} |
Project Euler, Challenge #12 in Swift | Question:
The sequence of triangle numbers is generated by adding the natural numbers. So the 7th triangle number would be 1 + 2 + 3 + 4 + 5 + 6 + 7 = 28. The first ten terms would be:
1, 3, 6, 10, 15, 21, 28, 36, 45, 55, ...
Let us list the factors of the first seven triangle numbers:
1: 1
3: 1,3
6: 1,2,3,6
10: 1,2,5,10
15: 1,3,5,15
21: 1,3,7,21
28: 1,2,4,7,14,28
We can see that 28 is the first triangle number to have over five divisors.
What is the value of the first triangle number to have over five hundred divisors?
Swift Code:
import Foundation
let numberOfRequiredFactors = 500
func nthTriangularNumber( n:Int ) -> Int {
    var number = 0
    for iterator in 0...n {
        number += iterator
    }
    return number
}
func findNumberOfFactors( number:Int ) -> Int {
    var numberOfFactors = 0
    for iterator in 1...number {
        if number % iterator == 0 {
            numberOfFactors++
        }
    }
    return numberOfFactors
}
var countOfTriangularNumbers = 1
var found = false
var foundNumber = 0
while !found {
    var numberOfFactors = findNumberOfFactors(nthTriangularNumber(countOfTriangularNumbers))
    if numberOfFactors > numberOfRequiredFactors {
        found = true
        foundNumber = nthTriangularNumber(countOfTriangularNumbers)
    }
    countOfTriangularNumbers++
}
println("The number is: \(foundNumber)")
Comments on making this code more efficient, making it cleaner, or pointing out violations of best practices would be highly appreciated. It did work for smaller numbers, which is how I tested that the logic is right.
Answer: The spacing on your parenthesis in your function declarations is odd and distracting. Try not to fight what autocomplete will give you. It should look like this:
func nameOfFunction(firstArgumentName: Int) -> Int
Notice the lack of spaces inside the parenthesis but the addition of the space between the argument name and its type? This is the expect formatting style for Swift function declaration.
Memoization
func nthTriangularNumber( n:Int ) -> Int {
    var number = 0
    for iterator in 0...n {
        number += iterator
    }
    return number
}
We can immediately improve upon this approach. We know that for any number n, the nth triangular number will be the n-1th triangular number + n, right?
So we can improve this approach by using some memoization.
Without changing the rules of this function (although, I think the name needs some work), we can use a local struct to hold an array of the previously calculated triangular numbers.
Since Swift doesn't have the same sort of static function variables as Objective-C and other languages have, we have to consult this Stack Overflow question to figure out how to achieve something similar.
So... we're going to want a nested struct that looks like this:
struct Memoizer {
    static var triangularNumbers: [Int] = []
}
Now we'll use this Memoizer.triangularNumbers array to memoize triangular numbers so nothing is ever calculated more than once. Perhaps this makes sense as an extension to Int though?
extension Int {
    func triangularNumber() -> Int {
        struct Memoizer {
            static var triangularNumbers: [Int] = [0]
        }
        // Extend the memo table only as far as needed: T(n) = T(n-1) + n
        if self >= Memoizer.triangularNumbers.count {
            for n in Memoizer.triangularNumbers.count...self {
                Memoizer.triangularNumbers.append(Memoizer.triangularNumbers[n - 1] + n)
            }
        }
        return Memoizer.triangularNumbers[self]
    }
}
So, if we try to find 500's triangular number, it still takes just as long as it would have taken using your approach (assuming we've not previously found any other triangular numbers) (and maybe slightly longer since we have to allocate memory for the array). But once we've calculated the 500th triangular number, anything less than 500 is simply an array lookup. And calculating the 501st triangular number is a matter of grabbing the 500th and adding 501 to it.
That last point is the most important part here though.
In your current implementation, each iteration of your while loop takes progressively longer than the previous one. Your countOfTriangularNumbers is also the count of addition operations you have to do per loop to calculate that particular triangular number.
Using the implementation I just suggested, your number of addition operations to calculate the triangular number stays at a constant one.
The difference between one addition operation and two addition operations may not be measurable. But by the 500th or 1000th or more iteration of your loop, the difference between one addition operation and 500 addition operations starts to become noticeable. (And it's a minor addition to the time it takes... but when we do find the number, we have to recalculate it inside your if branch.)
There is still a lot to work on for this problem, but I think this answer sets you down a very good path. Think about what this memoization is doing for us, and try to see where we can apply it in other places (especially as you work through Project Euler). | {
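The same memoization idea can be sketched outside Swift as well. Here is an illustrative Python version (not the answer's code; names are my own) where a module-level cache grows on demand, so each triangular number is computed exactly once:

```python
# Illustrative sketch of the memoization technique: a growing cache means
# T(n) is built incrementally via T(k) = T(k-1) + k, each value computed once.
_cache = [0]  # _cache[k] holds the kth triangular number; T(0) = 0

def triangular(n):
    # Extend the cache only as far as needed, then look the value up.
    while len(_cache) <= n:
        k = len(_cache)
        _cache.append(_cache[k - 1] + k)
    return _cache[n]
```

After one call to `triangular(500)`, any `triangular(k)` for k <= 500 is a constant-time list lookup, and `triangular(501)` costs a single addition.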
"domain": "codereview.stackexchange",
"id": 15340,
"tags": "programming-challenge, mathematics, time-limit-exceeded, swift"
} |
Does Double Slit Detection Rate Change when Path Is Known vs. Unknown? | Question: Is the overall rate of particle detections you will obtain in a classic double slit experiment dependent on whether the experiment is set up to cause an interference pattern or not?
Another way of asking this is whether you can interpret the interference pattern as the ABSENCE of particles you would have ostensibly detected had their paths been defined?
The motive behind this is that in quantum eraser experiments, the initial detection pattern always appears as random noise; it's only when you filter the results based on the "erasure" of the which-path information for the entangled twins that the pattern emerges from the noise. While often presented as proof of retrocausality, one could (and IMO should) interpret the results merely as a data gathering and filtering exercise.
But of course in the classic double slit scenario, there is no correlating and filtering after the fact; the pattern emerges then and there. But does that pattern arise because the particles we wound up detecting behaved differently due to the presence or absence of path detectors, or - as in the quantum eraser - because we failed (intentionally or not) to detect the ones that would have otherwise landed between the bands? And if the detection rate is unaffected by path knowledge in the classic experiment, does this suggest a different physical mechanism is at play vs a quantum eraser?
Answer:
Is the overall rate of particle detections you will obtain in a classic double slit experiment dependent on whether the experiment is set up to cause an interference pattern or not?
Another way of asking this is whether you can interpret the interference pattern as the ABSENCE of particles you would have ostensibly detected had their paths been defined?
No, regardless of whether the setup is designed to get which-way information (thereby destroying the interference pattern) or not, the same number of photons will hit the screen*. While the total number of photons is the same, however, their distribution in the interference case is of course different from the non-interference case.
While often presented as proof of retrocausality, one could (and IMO should) interpret the results merely as a data gathering and filtering exercise.
I agree with this, and at some point I was also fooled by explanations of the delayed choice double slit which didn't make this clear.
But does that pattern arise because the particles we wound up detecting behaved differently due to the presence or absence of path detectors, or - as in the quantum eraser - because we failed (intentionally or not) to detect the ones that would have otherwise landed between the bands?
The former is correct here and the latter is not.
And if the detection rate is unaffected by path knowledge in the classic experiment, does this suggest a different physical mechanism is at play vs a quantum eraser?
In the quantum eraser the detection rate at each point is still the same regardless of any path knowledge obtained afterwards. So I would see it as the same physical mechanism, unless you had a different idea in mind? In addition to this same physical mechanism, the delayed choice quantum eraser has a filtering effect, but I understand that as a separate effect which results from the entangled photons that are created. | {
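The "same total count" claim can be checked numerically: with two slit amplitudes that oscillate against each other on the screen, the cross term in |ψ1+ψ2|² integrates to essentially zero, so the coherent (interference) and incoherent (which-path) totals agree even though the distributions differ. A toy sketch with a Gaussian envelope and invented parameters, purely for illustration:

```python
import cmath

# Toy far-field amplitudes for two slits: a shared Gaussian envelope times
# plane-wave phases with opposite transverse wavenumbers (all values invented).
N = 20001
xs = [-50 + 100 * i / (N - 1) for i in range(N)]
dx = 100 / (N - 1)

def psi(x, k):
    # envelope width 5, phase exp(ikx)
    return cmath.exp(-x * x / 50) * cmath.exp(1j * k * x)

# "Path known": intensities add, no fringes.
incoherent = sum(abs(psi(x, 10)) ** 2 + abs(psi(x, -10)) ** 2 for x in xs) * dx
# "Path unknown": amplitudes add first, producing fringes.
coherent = sum(abs(psi(x, 10) + psi(x, -10)) ** 2 for x in xs) * dx

# The totals agree; only the spatial distribution differs.
print(abs(coherent - incoherent) / incoherent)  # effectively zero
```

The fringe pattern redistributes detections; it does not remove them.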
"domain": "physics.stackexchange",
"id": 92147,
"tags": "interference, double-slit-experiment, quantum-eraser"
} |
How do rocky objects between 1cm and 1m accrete to form planetesimals? | Question: I am having a hard time gaining an intuitive understanding of some of the middle stages of planetary formation from a protoplanetary accretion disk.
I understand that microscopic dust particles may accrete through something like electrostatic forces. This makes sense on an intuitive level as we often observe very small objects sticking to one another, like clumps of dust 'just forming' in the corner of a dusty basement. I am guessing that gravity doesn't come into play here since the particles are too small.
However, I can't quite visualise how objects that are greater than 1cm and smaller than 1m can accrete, particularly when they are formed from rocky substances. This seems to defy intuition, as electrostatic forces don't act to join objects at this scale - you can't just stick rocks together. It doesn't seem to be a matter of energy either, since smashing rocks together results in either one of them breaking apart rather than sticking together.
Is there a known mechanism by which rocky objects between 1cm and 1m accrete to form larger objects? I vaguely remember reading about how gas clouds may have been involved.
Answer: This is indeed a tricky problem, and the accretion of pebbles to form planetesimals is a big question in planetary science. You are right to say that small particles can stick together through electrostatic forces if collisions are gentle enough. But as particles increase in size, bouncing and fragmentation seem to present a barrier to growth. This is overcome somewhat in the outer disk, where icy particles tend to 'stick' more readily than rocky particles. Nevertheless, planetesimal growth requires a more thorough explanation.
One of the most popular solutions to this problem is a mechanism called the streaming instability. To understand how this works, we first need to know that dust particles within a protoplanetary disk undergo radial drift, due to the drag force exerted on them by gas in the disk. This causes them to move at 'sub-Keplerian' velocities and lose angular momentum, drifting in towards the host star.
The idea behind streaming instability is that particles exert a frictional force back onto the gas, causing the gas to accelerate to near Keplerian velocities. This reduces the headwind on the dust particles, causing them to drift in more slowly. As particles further out in the disk start to 'catch up', the local density increases exponentially, forming clumps that become self-gravitating. These clumps grow very quickly and eventually collapse, enabling planetesimals to form over very short timescales.
Streaming instability can create planetesimals in a variety of sizes, typically of the order ~100km, but even up to the size of objects such as Ceres. It is particularly favoured to take place at snowlines, which are the radial regions of the disk where particular molecular species 'freeze-out'.
For a more detailed explanation I highly recommend this recent review by Raymond & Morbidelli (2020). | {
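A rough back-of-the-envelope illustrates why this intermediate size range is so dangerous. Radial drift is fastest for bodies whose Stokes number (gas-coupling time in units of the orbital time) is near unity, which at 1 AU corresponds very roughly to metre scale. With a typical pressure-gradient parameter the drift timescale comes out near a century, far shorter than disk lifetimes. All values below are standard textbook-style assumptions, not figures from the answer above:

```python
import math

# Back-of-the-envelope radial drift estimate (illustrative disk values):
# drift speed peaks near Stokes number tau = 1,
#   v_drift ~ 2 * eta * v_K * tau / (1 + tau**2)
AU = 1.496e11        # metres
GM_SUN = 1.327e20    # m^3 s^-2
YEAR = 3.156e7       # seconds

r = 1 * AU
v_K = math.sqrt(GM_SUN / r)   # Keplerian speed, ~30 km/s at 1 AU
eta = 2.5e-3                  # assumed sub-Keplerian pressure-gradient parameter

def drift_speed(tau):
    return 2 * eta * v_K * tau / (1 + tau ** 2)

v_max = drift_speed(1.0)            # worst case, roughly metre-sized at 1 AU
t_drift_years = r / v_max / YEAR    # ~decades to a century
```

A metre-scale body spiralling in from 1 AU on this timescale is why growth through this regime must be fast, and why rapid collapse mechanisms like the streaming instability are attractive.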
"domain": "astronomy.stackexchange",
"id": 5910,
"tags": "planetary-formation, accretion-discs"
} |
A CSharpish String.Format formatting helper | Question: A while ago I implemented .net's string.Format() method in VB6; it works amazingly well, but I'm sure there has to be a way to make it more efficient.
I'll start by listing a simple class called EscapeSequence:
Option Explicit
Private Type tEscapeSequence
EscapeString As String
ReplacementString As String
End Type
Private this As tEscapeSequence
Public Property Get EscapeString() As String
EscapeString = this.EscapeString
End Property
Friend Property Let EscapeString(value As String)
this.EscapeString = value
End Property
Public Property Get ReplacementString() As String
ReplacementString = this.ReplacementString
End Property
Friend Property Let ReplacementString(value As String)
this.ReplacementString = value
End Property
'Lord I wish VB6 had constructors!
Public Function Create(escape As String, replacement As String) As EscapeSequence
Dim result As New EscapeSequence
result.EscapeString = escape
result.ReplacementString = replacement
Set Create = result
End Function
...and the actual StringFormat function - there's a global variable PADDING_CHAR involved, which I'd love to find a way to specify and de-globalize:
Public Function StringFormat(format_string As String, ParamArray values()) As String
'VB6 implementation of .net String.Format(), slightly customized.
Dim return_value As String
Dim values_count As Integer
'some error-handling constants:
Const ERR_FORMAT_EXCEPTION As Long = vbObjectError Or 9001
Const ERR_ARGUMENT_NULL_EXCEPTION As Long = vbObjectError Or 9002
Const ERR_ARGUMENT_EXCEPTION As Long = vbObjectError Or 9003
Const ERR_SOURCE As String = "StringFormat"
Const ERR_MSG_INVALID_FORMAT_STRING As String = "Invalid format string."
Const ERR_MSG_FORMAT_EXCEPTION As String = "The number indicating an argument to format is less than zero, or greater than or equal to the length of the args array."
Const ERR_MSG_NUMBER_ARGUMENT_EXCEPTION As String = "Invalid number argument."
'use SPACE as default padding character
If PADDING_CHAR = vbNullString Then PADDING_CHAR = Chr$(32)
'figure out number of passed values:
values_count = UBound(values) + 1
Dim regex As RegExp
Dim matches As MatchCollection
Dim thisMatch As Match
Dim thisString As String
Dim thisFormat As String
Dim useLiteral As Boolean 'when format_string starts with "@", escapes are not replaced (string is treated as a literal string with placeholders)
Dim escapeHex As Boolean 'indicates whether HEX specifier "0x" is to be escaped or not
'validate string_format:
Set regex = New RegExp
regex.pattern = "{({{)*(\w+)(,-?\d+)?(:[^}]+)?}(}})*"
regex.IgnoreCase = True
regex.Global = True
Set matches = regex.Execute(format_string)
'determine if values_count matches number of unique regex matches:
Dim uniqueCount As Integer
Dim tmpCSV As String
For Each thisMatch In matches
If Not StringContains(tmpCSV, thisMatch.SubMatches(1)) Then
uniqueCount = uniqueCount + 1
tmpCSV = tmpCSV & thisMatch.SubMatches(1) & ","
End If
Next
'unique indices count must match values_count:
If matches.Count > 0 And uniqueCount <> values_count Then
Err.Raise ERR_FORMAT_EXCEPTION, _
ERR_SOURCE, ERR_MSG_FORMAT_EXCEPTION
End If
useLiteral = StringStartsWith("@", format_string)
If useLiteral Then format_string = Right(format_string, Len(format_string) - 1) 'remove the "@" literal specifier
If Not useLiteral And StringContains(format_string, "\\") Then _
format_string = Replace(format_string, "\\", Chr$(27))
If matches.Count = 0 And format_string <> vbNullString And UBound(values) = -1 Then
'only format_string was specified: skip to checking escape sequences:
return_value = format_string
GoTo checkEscapes
ElseIf UBound(values) = -1 And matches.Count > 0 Then
Err.Raise ERR_ARGUMENT_NULL_EXCEPTION, _
ERR_SOURCE, ERR_MSG_FORMAT_EXCEPTION
End If
return_value = format_string
'dissect format_string:
Dim i As Integer, v As String, p As String 'i: iterator; v: value; p: placeholder
Dim alignmentGroup As String, alignmentSpecifier As String
Dim formattedValue As String, alignmentPadding As Integer
'iterate regex matches (each match is a placeholder):
For i = 0 To matches.Count - 1
'get the placeholder specified index:
Set thisMatch = matches(i)
p = thisMatch.SubMatches(1)
'if specified index (0-based) > uniqueCount (1-based), something's wrong:
If p > uniqueCount - 1 Then
Err.Raise ERR_FORMAT_EXCEPTION, _
ERR_SOURCE, ERR_MSG_FORMAT_EXCEPTION
End If
v = values(p)
'get the alignment specifier if it is specified:
alignmentGroup = thisMatch.SubMatches(2)
If alignmentGroup <> vbNullString Then _
alignmentSpecifier = Right$(alignmentGroup, LenB(alignmentGroup) / 2 - 1)
'get the format specifier if it is specified:
thisString = thisMatch.value
If StringContains(thisString, ":") Then
Dim formatGroup As String, precisionSpecifier As Integer
Dim formatSpecifier As String, precisionString As String
'get the string between ":" and "}":
formatGroup = mId$(thisString, InStr(1, thisString, ":") + 1, (LenB(thisString) / 2) - 2)
formatGroup = Left$(formatGroup, LenB(formatGroup) / 2 - 1)
precisionString = Right$(formatGroup, LenB(formatGroup) / 2 - 1)
formatSpecifier = mId$(thisString, InStr(1, thisString, ":") + 1, 1)
'applicable formatting depends on the type of the value (yes, GOTO!!):
If TypeName(values(p)) = "Date" Then GoTo DateTimeFormatSpecifiers
If v = vbNullString Then GoTo ApplyStringFormat
NumberFormatSpecifiers:
If precisionString <> vbNullString And Not IsNumeric(precisionString) Then
Err.Raise ERR_FORMAT_EXCEPTION, _
ERR_SOURCE, ERR_MSG_INVALID_FORMAT_STRING
End If
If Not IsNumeric(v) Then
Err.Raise ERR_ARGUMENT_EXCEPTION, _
ERR_SOURCE, ERR_MSG_NUMBER_ARGUMENT_EXCEPTION
End If
If precisionString = vbNullString Then precisionString = 0
Select Case formatSpecifier
Case "C", "c" 'CURRENCY format, formats string as currency.
'Precision specifier determines number of decimal digits.
'This implementation ignores regional settings
'(hard-coded group separator, decimal separator and currency sign).
precisionSpecifier = CInt(precisionString)
thisFormat = "#,##0.$"
If LenB(formatGroup) > 2 And precisionSpecifier > 0 Then
'if a non-zero precision is specified...
thisFormat = _
Replace$(thisFormat, ".", "." & String$(precisionString, Chr$(48)))
Else
thisFormat = CURRENCY_FORMAT
End If
Case "D", "d" 'DECIMAL format, formats string as integer number.
'Precision specifier determines number of digits in returned string.
precisionSpecifier = CInt(precisionString)
thisFormat = "0"
thisFormat = Right$(String$(precisionSpecifier, "0") & thisFormat, _
IIf(precisionSpecifier = 0, Len(thisFormat), precisionSpecifier))
Case "E", "e" 'EXPONENTIAL NOTATION format (aka "Scientific Notation")
'Precision specifier determines number of decimals in returned string.
'This implementation ignores regional settings'
'(hard-coded decimal separator).
precisionSpecifier = CInt(precisionString)
thisFormat = "0.00000#" & formatSpecifier & "-#" 'defaults to 6 decimals
If LenB(formatGroup) > 2 And precisionSpecifier > 0 Then
'if a non-zero precision is specified...
thisFormat = "0." & String$(precisionSpecifier - 1, Chr$(48)) & "#" & formatSpecifier & "-#"
ElseIf LenB(formatGroup) > 2 And precisionSpecifier = 0 Then
Err.Raise ERR_FORMAT_EXCEPTION, _
ERR_SOURCE, ERR_MSG_INVALID_FORMAT_STRING
End If
Case "F", "f" 'FIXED-POINT format
'Precision specifier determines number of decimals in returned string.
'This implementation ignores regional settings'
'(hard-coded decimal separator).
precisionSpecifier = CInt(precisionString)
thisFormat = "0"
If LenB(formatGroup) > 2 And precisionSpecifier > 0 Then
'if a non-zero precision is specified...
thisFormat = (thisFormat & ".") & String$(precisionSpecifier, Chr$(48))
Else
'no precision specified - default to 2 decimals:
thisFormat = "0.00"
End If
Case "G", "g" 'GENERAL format (recursive)
'returns the shortest of either FIXED-POINT or SCIENTIFIC formats in case of a Double.
'returns DECIMAL format in case of a Integer or Long.
Dim eNotation As String, ePower As Integer, specifier As String
precisionSpecifier = IIf(CInt(precisionString) > 0, CInt(precisionString), _
IIf(StringContains(v, "."), Len(v) - InStr(1, v, "."), 0))
'track character case of formatSpecifier:
specifier = IIf(formatSpecifier = "G", "D", "d")
If TypeName(values(p)) = "Integer" Or TypeName(values(p)) = "Long" Then
'Integer types: use {0:D} (recursive call):
formattedValue = StringFormat("{0:" & specifier & "}", values(p))
ElseIf TypeName(values(p)) = "Double" Then
'Non-integer types: use {0:E}
specifier = IIf(formatSpecifier = "G", "E", "e")
'evaluate the exponential notation value (recursive call):
eNotation = StringFormat("{0:" & specifier & "}", v)
'get the power of eNotation:
ePower = mId$(eNotation, InStr(1, UCase$(eNotation), "E-") + 1, Len(eNotation) - InStr(1, UCase$(eNotation), "E-"))
If ePower > -5 And Abs(ePower) < precisionSpecifier Then
'use {0:F} when ePower > -5 and abs(ePower) < precisionSpecifier:
'evaluate the floating-point value (recursive call):
specifier = IIf(formatSpecifier = "G", "F", "f")
formattedValue = StringFormat("{0:" & formatSpecifier & _
IIf(precisionSpecifier <> 0, precisionString, vbNullString) & "}", values(p))
Else
'fallback to {0:E} if previous rule didn't apply:
formattedValue = eNotation
End If
End If
GoTo AlignFormattedValue 'Skip the "ApplyStringFormat" step, it's applied already.
Case "N", "n" 'NUMERIC format, formats string as an integer or decimal number.
'Precision specifier determines number of decimal digits.
'This implementation ignores regional settings'
'(hard-coded group and decimal separators).
precisionSpecifier = CInt(precisionString)
If LenB(formatGroup) > 2 And precisionSpecifier > 0 Then
'if a non-zero precision is specified...
thisFormat = "#,##0"
thisFormat = (thisFormat & ".") & String$(precisionSpecifier, Chr$(48))
Else 'only the "D" is specified
thisFormat = "#,##0"
End If
Case "P", "p" 'PERCENT format. Formats string as a percentage.
'Value is multiplied by 100 and displayed with a percent symbol.
'Precision specifier determines number of decimal digits.
thisFormat = "#,##0%"
precisionSpecifier = CInt(precisionString)
If LenB(formatGroup) > 2 And precisionSpecifier > 0 Then
'if a non-zero precision is specified...
thisFormat = "#,##0"
thisFormat = (thisFormat & ".") & String$(precisionSpecifier, Chr$(48))
Else 'only the "P" is specified
thisFormat = "#,##0"
End If
'Append the percentage sign to the format string:
thisFormat = thisFormat & "%"
Case "R", "r" 'ROUND-TRIP format (a string that can round-trip to an identical number)
'example: ?StringFormat("{0:R}", 0.0000000001141596325677345362656)
' ...returns "0.000000000114159632567735"
'convert value to a Double (chop off overflow digits):
v = CDbl(v)
Case "X", "x" 'HEX format. Formats a string as a Hexadecimal value.
'Precision specifier determines number of total digits.
'Returned string is prefixed with "&H" to specify Hex.
v = Hex(v)
precisionSpecifier = CInt(precisionString)
If LenB(precisionString) > 0 Then 'precision here stands for left padding
v = Right$(String$(precisionSpecifier, "0") & v, IIf(precisionSpecifier = 0, Len(v), precisionSpecifier))
End If
'add C# hex specifier, apply specified casing:
'(VB6 hex specifier would cause Format() to reverse the formatting):
v = "0x" & IIf(formatSpecifier = "X", UCase$(v), LCase$(v))
escapeHex = True
Case Else
If IsNumeric(formatSpecifier) And val(formatGroup) = 0 Then
formatSpecifier = formatGroup
v = Format(v, formatGroup)
Else
Err.Raise ERR_FORMAT_EXCEPTION, _
ERR_SOURCE, ERR_MSG_INVALID_FORMAT_STRING
End If
End Select
GoTo ApplyStringFormat
DateTimeFormatSpecifiers:
Select Case formatSpecifier
Case "c", "C" 'CUSTOM date/time format
'let VB Format() parse precision specifier as is:
thisFormat = precisionString
Case "d" 'SHORT DATE format
thisFormat = "ddddd"
Case "D" 'LONG DATE format
thisFormat = "dddddd"
Case "f" 'FULL DATE format (short)
thisFormat = "dddddd h:mm AM/PM"
Case "F" 'FULL DATE format (long)
thisFormat = "dddddd ttttt"
Case "g"
thisFormat = "ddddd hh:mm AM/PM"
Case "G"
thisFormat = "ddddd ttttt"
Case "s" 'SORTABLE DATETIME format
thisFormat = "yyyy-mm-ddThh:mm:ss"
Case "t" 'SHORT TIME format
thisFormat = "hh:mm AM/PM"
Case "T" 'LONG TIME format
thisFormat = "ttttt"
Case Else
Err.Raise ERR_FORMAT_EXCEPTION, _
ERR_SOURCE, ERR_MSG_INVALID_FORMAT_STRING
End Select
GoTo ApplyStringFormat
End If
ApplyStringFormat:
'apply computed format string:
If thisFormat <> vbNullString Then
formattedValue = Format(v, thisFormat)
Else
formattedValue = v
End If
AlignFormattedValue:
'apply specified alignment specifier:
If alignmentSpecifier <> vbNullString Then
alignmentPadding = Abs(CInt(alignmentSpecifier))
If CInt(alignmentSpecifier) < 0 Then
'negative: left-justified alignment
If alignmentPadding - Len(formattedValue) > 0 Then _
formattedValue = formattedValue & _
String$(alignmentPadding - Len(formattedValue), PADDING_CHAR)
Else
'positive: right-justified alignment
If alignmentPadding - Len(formattedValue) > 0 Then _
formattedValue = String$(alignmentPadding - Len(formattedValue), PADDING_CHAR) & formattedValue
End If
End If
'Replace C# hex specifier with VB6 hex specifier, only if hex specifier was introduced in this function:
If (Not useLiteral And escapeHex) And StringContains(formattedValue, "0x") Then formattedValue = Replace$(formattedValue, "0x", "&H")
'replace all occurrences of placeholder {i} with their formatted values:
return_value = Replace(return_value, thisString, formattedValue, Count:=1)
'reset before reiterating:
thisFormat = vbNullString
Next
checkEscapes:
'if there's no more backslashes, don't bother checking for the rest:
If useLiteral Or Not StringContains(return_value, "\") Then GoTo normalExit
Dim escape As New EscapeSequence
Dim escapes As New Collection
escapes.Add escape.Create("\n", vbNewLine), "0"
escapes.Add escape.Create("\q", Chr$(34)), "1"
escapes.Add escape.Create("\t", vbTab), "2"
escapes.Add escape.Create("\a", Chr$(7)), "3"
escapes.Add escape.Create("\b", Chr$(8)), "4"
escapes.Add escape.Create("\v", Chr$(11)), "5" 'vertical tab
escapes.Add escape.Create("\f", Chr$(12)), "6" 'form feed
escapes.Add escape.Create("\r", Chr$(13)), "7" 'carriage return
For i = 0 To escapes.Count - 1
Set escape = escapes(CStr(i))
If StringContains(return_value, escape.EscapeString) Then _
return_value = Replace(return_value, escape.EscapeString, escape.ReplacementString)
If Not StringContains(return_value, "\") Then _
GoTo normalExit
Next
'replace "ASCII (oct)" escape sequence
Set regex = New RegExp
regex.pattern = "\\(\d{3})"
regex.IgnoreCase = True
regex.Global = True
Set matches = regex.Execute(format_string)
Dim char As Long
If matches.Count <> 0 Then
For Each thisMatch In matches
p = thisMatch.SubMatches(0)
'"p" contains the octal number representing the ASCII code we're after:
p = "&O" & p 'prepend octal prefix
char = CLng(p)
return_value = Replace(return_value, thisMatch.value, Chr$(char))
Next
End If
'if there's no more backslashes, don't bother checking for the rest:
If Not StringContains(return_value, "\") Then GoTo normalExit
'replace "ASCII (hex)" escape sequence
Set regex = New RegExp
regex.pattern = "\\x(\w{2})"
regex.IgnoreCase = True
regex.Global = True
Set matches = regex.Execute(format_string)
If matches.Count <> 0 Then
For Each thisMatch In matches
p = thisMatch.SubMatches(0)
'"p" contains the hex value representing the ASCII code we're after:
p = "&H" & p 'prepend hex prefix
char = CLng(p)
return_value = Replace(return_value, thisMatch.value, Chr$(char))
Next
End If
normalExit:
Set escapes = Nothing
Set escape = Nothing
If Not useLiteral And StringContains(return_value, Chr$(27)) Then _
return_value = Replace(return_value, Chr$(27), "\")
StringFormat = return_value
End Function
I'm looking for a way to factor out the two (quite huge) Select...Case blocks, and to improve readability in general.
Note that this uses StringContains functions, and I should add a disclaimer about the fact that most of this code is already posted as an answer of mine at StackOverflow, although I do not consider it multi-posting, since I'm actually asking for a code review here.
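The heart of the parsing is the placeholder regex. Its behaviour can be sketched in Python, keeping only the core `{index[,alignment][:format]}` grammar (the doubled-brace escapes `{{`/`}}` from the original pattern are omitted for brevity, so this is a simplified illustration, not the VBScript pattern verbatim):

```python
import re

# Simplified sketch of the placeholder grammar: {index[,alignment][:format]}
PLACEHOLDER = re.compile(r"\{(\w+)(,-?\d+)?(:[^}]+)?\}")

def parse_placeholders(fmt):
    # Return (index, alignment or None, format specifier or None) per match.
    out = []
    for m in PLACEHOLDER.finditer(fmt):
        index = int(m.group(1))
        alignment = int(m.group(2)[1:]) if m.group(2) else None
        spec = m.group(3)[1:] if m.group(3) else None
        out.append((index, alignment, spec))
    return out

print(parse_placeholders("Total: {0,-10:C2} on {1:d}"))
# [(0, -10, 'C2'), (1, None, 'd')]
```

Each tuple maps onto the submatches the VB6 code reads from `thisMatch.SubMatches`.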
Answer: Key Points
Each Case block implements formatting functionality for a specific format specifier.
Goto statements indicate the function wants to be broken down into several smaller functions.
Local variables such as alignmentSpecifier, alignmentPadding, precisionString, precisionSpecifier, formatSpecifier and all others, could all be eliminated if there was a concept of a "FormatSpecifier" object that held all these values.
Bringing in escapeHex and the C# hex specifier is a hack easily made useless by correctly encapsulating each format specifier.
escapes collection gets rebuilt every time the function is called, which is inefficient; valid escape sequences don't change from one call to the next.
ASCII (hex & octal) escapes both desperately want to be part of that collection.
Replacing \\ with the ASCII code for Esc works nicely to get backslashes escaped.
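The escape-table point can be made concrete outside VB6. In this Python sketch (my own illustration, with `\q` kept as this code's quote convention), the table and its compiled pattern are built once at import time, and one alternation handles every sequence in a single pass so later replacements can never corrupt earlier ones:

```python
import re

# Escape table built once, at import time, rather than on every call.
# \q (double quote) follows the reviewed code's own convention.
ESCAPES = {
    r"\n": "\n", r"\t": "\t", r"\q": '"',
    r"\a": "\a", r"\b": "\b", r"\v": "\v",
    r"\f": "\f", r"\r": "\r",
}
# One compiled alternation replaces all sequences in a single pass.
_PATTERN = re.compile("|".join(re.escape(k) for k in ESCAPES))

def unescape(text):
    return _PATTERN.sub(lambda m: ESCAPES[m.group(0)], text)
```

Compare this with the original, which rebuilds the `escapes` collection and rescans the string once per escape sequence on every call.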
Warning: the code below is absolute overkill - no one in their right mind would do all this just to format strings in their VB6 or VBA application (I did this just for fun!). However, it shows how the monolithic function can be refactored to remove all Select...Case blocks and Goto statements.
Rewrite
Here's the refactored module-level function - it uses a Private helper As New StringHelper, declared at module level ("declarations" section):
Public Function StringFormat(format_string As String, ParamArray values()) As String
Dim valuesArray() As Variant
valuesArray = values
StringFormat = helper.StringFormat(format_string, valuesArray)
End Function
Escape Sequences
The EscapeSequence class was annoyingly leaving out ASCII escapes, so I tackled this first:
Option Explicit
Private Type tEscapeSequence
EscapeString As String
ReplacementString As String
IsAsciiCharacter As Boolean
AsciiBase As AsciiEscapeBase
End Type
Public Enum AsciiEscapeBase
Octal
Hexadecimal
End Enum
Private this As tEscapeSequence
Public Property Get EscapeString() As String
EscapeString = this.EscapeString
End Property
Friend Property Let EscapeString(value As String)
this.EscapeString = value
End Property
Public Property Get ReplacementString() As String
ReplacementString = this.ReplacementString
End Property
Friend Property Let ReplacementString(value As String)
this.ReplacementString = value
End Property
Public Property Get IsAsciiCharacter() As Boolean
IsAsciiCharacter = this.IsAsciiCharacter
End Property
Friend Property Let IsAsciiCharacter(value As Boolean)
this.IsAsciiCharacter = value
End Property
Public Property Get AsciiBase() As AsciiEscapeBase
AsciiBase = this.AsciiBase
End Property
Friend Property Let AsciiBase(value As AsciiEscapeBase)
this.AsciiBase = value
End Property
The factory Create function gains two optional parameters: one to specify whether the escape sequence indicates an ASCII replacement escape, the other to specify the base (an enum) of the digits representing the ASCII code:
Public Function Create(escape As String, replacement As String, _
Optional ByVal isAsciiReplacement As Boolean = False, _
Optional ByVal base As AsciiEscapeBase = Octal) As EscapeSequence
Dim result As New EscapeSequence
result.EscapeString = escape
result.ReplacementString = replacement
result.IsAsciiCharacter = isAsciiReplacement
result.AsciiBase = base
Set Create = result
End Function
Added an Execute method here - all escape sequences boil down to the same thing: replace the EscapeString with the ReplacementString, so we might as well encapsulate it here. ASCII escapes are a little more complex, so I put them in their own method:
Public Sub Execute(ByRef string_value As String)
If this.IsAsciiCharacter Then
ProcessAsciiEscape string_value, this.EscapeString
ElseIf StringContains(string_value, this.EscapeString) Then
string_value = Replace(string_value, this.EscapeString, this.ReplacementString)
End If
End Sub
Private Sub ProcessAsciiEscape(ByRef format_string As String, _
ByVal regexPattern As String)
Dim regex As RegExp, matches As MatchCollection, thisMatch As Match
Dim prefix As String, char As Long
If Not StringContains(format_string, "\") Then Exit Sub
Set regex = New RegExp
regex.pattern = regexPattern
regex.IgnoreCase = True
regex.Global = True
Select Case this.AsciiBase
Case AsciiEscapeBase.Octal
prefix = "&O"
Case AsciiEscapeBase.Hexadecimal
prefix = "&H"
End Select
Set matches = regex.Execute(format_string)
For Each thisMatch In matches
char = CLng(prefix & thisMatch.SubMatches(0))
format_string = Replace(format_string, thisMatch.value, Chr$(char))
Next
Set regex = Nothing
Set matches = Nothing
End Sub
This puts escape sequences to bed, at least for now.
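The ProcessAsciiEscape logic translates directly to other regex engines. A Python sketch of the two numeric escape forms, `\nnn` (octal) and `\xhh` (hex); note I've tightened the octal pattern from the original `\d{3}` to `[0-7]{3}`, since only octal digits are valid there:

```python
import re

# \nnn = three octal digits, \xhh = two hex digits, mirroring the two
# RegExp patterns handled by ProcessAsciiEscape.
_OCTAL = re.compile(r"\\([0-7]{3})")
_HEX = re.compile(r"\\x([0-9A-Fa-f]{2})")

def replace_ascii_escapes(text):
    # Convert each matched digit group to its character code and substitute.
    text = _OCTAL.sub(lambda m: chr(int(m.group(1), 8)), text)
    return _HEX.sub(lambda m: chr(int(m.group(1), 16)), text)

print(replace_ascii_escapes(r"A=\101 or A=\x41"))  # A=A or A=A
```

The `&O`/`&H` prefix trick in the VB6 code plays the same role as the base argument to `int()` here.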
Format Specifiers
Each match in the main RegEx stands for a placeholder (something potentially looking like "{0,-10:C2}"); if we can call those "format specifiers", they probably deserve their own StringFormatSpecifier class as well. The precision specifier is normally an Integer, but the custom date format also takes a String, so we'll make Precision a get-only property that's set when assigning CustomSpecifier:
Option Explicit
Private Type tSpecifier
Index As Integer
identifier As String
AlignmentSpecifier As Integer
PrecisionSpecifier As Integer
CustomSpecifier As String
End Type
Private this As tSpecifier
Public Property Get Index() As Integer
Index = this.Index
End Property
Public Property Let Index(value As Integer)
this.Index = value
End Property
Public Property Get identifier() As String
identifier = this.identifier
End Property
Public Property Let identifier(value As String)
this.identifier = value
End Property
Public Property Get Alignment() As Integer
Alignment = this.AlignmentSpecifier
End Property
Public Property Let Alignment(value As Integer)
this.AlignmentSpecifier = value
End Property
Public Property Get Precision() As Integer
Precision = this.PrecisionSpecifier
End Property
Public Property Get CustomSpecifier() As String
CustomSpecifier = this.CustomSpecifier
End Property
Public Property Let CustomSpecifier(value As String)
this.CustomSpecifier = value
If IsNumeric(value) And val(value) <> 0 Then this.PrecisionSpecifier = CInt(value)
End Property
All that's missing is a way to put all the pieces back together to perform the actual replacement - either we store the original string or we implement a ToString function:
Public Function ToString() As String
ToString = "{" & this.Index & _
IIf(this.AlignmentSpecifier <> 0, _
"," & this.AlignmentSpecifier, vbNullString) & _
IIf(this.identifier <> vbNullString, _
":" & this.identifier, vbNullString) & _
IIf(this.CustomSpecifier <> vbNullString, _
this.CustomSpecifier, vbNullString) & "}"
End Function
This puts another important piece to bed.
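The specifier-as-object idea, including the ToString reassembly, can be sketched compactly in Python (a hypothetical illustration with invented names, not part of the VB6 code):

```python
from dataclasses import dataclass

# Mirror of the tSpecifier fields; to_string() reassembles the original
# {index[,alignment][:identifier[custom]]} placeholder text.
@dataclass
class FormatSpecifier:
    index: int
    identifier: str = ""
    alignment: int = 0
    custom: str = ""

    def to_string(self):
        s = "{" + str(self.index)
        if self.alignment:
            s += "," + str(self.alignment)
        if self.identifier:
            s += ":" + self.identifier
        if self.custom:
            s += self.custom
        return s + "}"

print(FormatSpecifier(0, "C", -10, "2").to_string())  # {0,-10:C2}
```

Round-tripping back to the placeholder text is what lets the formatter do a plain string replacement without storing the original match.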
VB6 Interface?
If we encapsulated how each format specifier works into its own class, odds are we'd get over a dozen very similar classes. If only we were in .net, we could create an interface for this, right? Very few people know that VB6 also supports interfaces. In fact, any class can be implemented by any other.
So the IStringFormatIdentifier interface/class looks like this:
Option Explicit
'returns a format string suitable for use with VB6's native Format() function.
Public Function GetFormatString(specifier As StringFormatSpecifier) As String
End Function
'returns the formatted value.
Public Function GetFormattedValue(value As Variant, _
specifier As StringFormatSpecifier) As String
End Function
'compares specified format identifier with implementation-defined one,
'returns true if format is applicable.
Public Function IsIdentifierMatch(specifier As StringFormatSpecifier) As Boolean
End Function
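What this Implements-based design buys is the classic strategy pattern: one object per format identifier, and a dispatch loop that asks each strategy whether it applies. A minimal Python sketch of that dispatch (class and dict-key names invented here for illustration):

```python
from abc import ABC, abstractmethod

class StringFormatIdentifier(ABC):
    # Reduced two-method version of the IStringFormatIdentifier contract.
    @abstractmethod
    def is_identifier_match(self, spec): ...
    @abstractmethod
    def get_formatted_value(self, value, spec): ...

class DecimalFormat(StringFormatIdentifier):
    def is_identifier_match(self, spec):
        return spec.get("identifier", "").upper() == "D"
    def get_formatted_value(self, value, spec):
        # Zero-pad to the precision, like the "D" Case in the original.
        return str(int(value)).zfill(spec.get("precision", 0))

FORMATS = [DecimalFormat()]  # one instance per former Case block, built once

def format_value(value, spec):
    for fmt in FORMATS:
        if fmt.is_identifier_match(spec):
            return fmt.get_formatted_value(value, spec)
    raise ValueError("invalid format string")  # the FormatException path

print(format_value(42, {"identifier": "D", "precision": 5}))  # 00042
```

Each new identifier becomes a new class appended to the list, and the giant Select...Case disappears entirely.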
This interface needs an implementation of it for each and every single Case block of the original code - not going to list them all here, but this is GeneralNumericStringFormatIdentifier (the most complicated one); notice that doing this has also eliminated the recursive function calls:
Implements IStringFormatIdentifier
Option Explicit
Private Function IStringFormatIdentifier_GetFormatString(specifier As StringFormatSpecifier) As String
IStringFormatIdentifier_GetFormatString = vbNullString
End Function
Private Function IStringFormatIdentifier_GetFormattedValue(value As Variant, specifier As StringFormatSpecifier) As String
Dim result As String
Dim exponentialNotation As String
Dim power As Integer
Dim exponentialFormat As New ExponentialStringFormatIdentifier
Dim fixedPointFormat As New FixedPointStringFormatIdentifier
Dim decimalFormat As New DecimalStringFormatIdentifier
Dim formatSpecifier As New StringFormatSpecifier
formatSpecifier.Alignment = specifier.Alignment
formatSpecifier.CustomSpecifier = specifier.CustomSpecifier
If StringMatchesAny(TypeName(value), "Integer", "Long") Then
formatSpecifier.identifier = IIf(specifier.identifier = "G", "D", "d")
result = decimalFormat.GetFormattedValue(value, formatSpecifier)
ElseIf TypeName(value) = "Double" Then
formatSpecifier.identifier = IIf(specifier.identifier = "G", "E", "e")
exponentialNotation = exponentialFormat.GetFormattedValue(value, formatSpecifier)
power = exponentialFormat.GetPower(exponentialNotation)
If power > -5 And Abs(power) < specifier.Precision Then
formatSpecifier.identifier = IIf(specifier.identifier = "G", "F", "f")
result = fixedPointFormat.GetFormattedValue(value, formatSpecifier)
Else
result = exponentialNotation
End If
End If
IStringFormatIdentifier_GetFormattedValue = result
Set exponentialFormat = Nothing
Set fixedPointFormat = Nothing
Set decimalFormat = Nothing
Set formatSpecifier = Nothing
End Function
Public Function GetFormattedValue(value As Variant, specifier As StringFormatSpecifier) As String
GetFormattedValue = IStringFormatIdentifier_GetFormattedValue(value, specifier)
End Function
Private Function IStringFormatIdentifier_IsIdentifierMatch(specifier As StringFormatSpecifier) As Boolean
IStringFormatIdentifier_IsIdentifierMatch = UCase$(specifier.identifier) = "G"
End Function
Once every format identifier ("C", "D", "N", etc.) has its implementation of the IStringFormatIdentifier interface, we're ready to initialize everything we need, once.
The StringHelper class
Diving into the StringHelper class, the "declarations" section contains the error-handling constants, the default padding character and a private type that defines the encapsulated properties (I just do that in every class I write):
Option Explicit
Option Base 0
Private Const ERR_FORMAT_EXCEPTION As Long = vbObjectError Or 9001
Private Const ERR_SOURCE As String = "StringHelper"
Private Const ERR_MSG_INVALID_FORMAT_STRING As String = "Invalid format string."
Private Const ERR_MSG_FORMAT_EXCEPTION As String = "The number indicating an argument to format is less than zero, or greater than or equal to the length of the args array."
Private Const PADDING_CHAR As String = " "
Private Type tString
PaddingCharacter As String * 1
EscapeSequences As New Collection
NumericSpecifiers As New Collection
DateTimeSpecifiers As New Collection
End Type
Private this As tString
Method Class_Initialize is where all the one-time stuff happens - this is where escape sequences, numeric and datetime specifiers are initialized:
Private Sub Class_Initialize()
If this.PaddingCharacter = vbNullString Then this.PaddingCharacter = PADDING_CHAR
InitEscapeSequences
InitNumericSpecifiers
InitDateTimeSpecifiers
End Sub
Private Sub InitEscapeSequences()
Dim factory As New EscapeSequence
Set this.EscapeSequences = New Collection
this.EscapeSequences.Add factory.Create("\n", vbNewLine)
this.EscapeSequences.Add factory.Create("\q", Chr$(34))
this.EscapeSequences.Add factory.Create("\t", vbTab)
this.EscapeSequences.Add factory.Create("\a", Chr$(7))
this.EscapeSequences.Add factory.Create("\b", Chr$(8))
this.EscapeSequences.Add factory.Create("\v", Chr$(11))
this.EscapeSequences.Add factory.Create("\f", Chr$(12))
this.EscapeSequences.Add factory.Create("\r", Chr$(13))
this.EscapeSequences.Add factory.Create("\\x(\w{2})", 0, True, Hexadecimal)
this.EscapeSequences.Add factory.Create("\\(\d{3})", 0, True, Octal)
Set factory = Nothing
End Sub
Private Sub InitNumericSpecifiers()
Set this.NumericSpecifiers = New Collection
this.NumericSpecifiers.Add New CurrencyStringFormatIdentifier
this.NumericSpecifiers.Add New DecimalStringFormatIdentifier
this.NumericSpecifiers.Add New GeneralNumericStringFormatIdentifier
this.NumericSpecifiers.Add New PercentStringFormatIdentifier
this.NumericSpecifiers.Add New FixedPointStringFormatIdentifier
this.NumericSpecifiers.Add New ExponentialStringFormatIdentifier
this.NumericSpecifiers.Add New HexStringFormatIdentifier
this.NumericSpecifiers.Add New RoundTripStringFormatIdentifier
this.NumericSpecifiers.Add New NumericPaddingStringFormatIdentifier
End Sub
Private Sub InitDateTimeSpecifiers()
Set this.DateTimeSpecifiers = New Collection
this.DateTimeSpecifiers.Add New CustomDateFormatIdentifier
this.DateTimeSpecifiers.Add New FullDateLongStringFormatSpecifier
this.DateTimeSpecifiers.Add New FullDateShortStringFormatIdentifier
this.DateTimeSpecifiers.Add New GeneralLongDateTimeStringFormatIdentifier
this.DateTimeSpecifiers.Add New GeneralShortDateTimeStringFormatIdentifier
this.DateTimeSpecifiers.Add New LongDateFormatIdentifier
this.DateTimeSpecifiers.Add New LongTimeStringFormatIdentifier
this.DateTimeSpecifiers.Add New ShortDateFormatIdentifier
this.DateTimeSpecifiers.Add New SortableDateTimeStringFormatIdentifier
End Sub
To make the PaddingCharacter configurable, it only needs to be exposed as a property.
So let's recap here, we have:
A collection of escape sequences that know how to process themselves
A collection of numeric specifiers that know how to process themselves
A collection of date/time specifiers that know how to process themselves
All we're missing is a function that will take a format_string, validate it and return a collection of StringFormatSpecifier. The regular expression we're using to do this can also be simplified a bit - unfortunately this doesn't make it run any faster (performance-wise, this function is really where the bottleneck is):
Private Function GetFormatSpecifiers(ByVal format_string As String, valuesCount As Integer) As Collection
'executes a regular expression against format_string to extract all placeholders into a MatchCollection
Dim regex As New RegExp
Dim matches As MatchCollection
Dim thisMatch As Match
Dim result As New Collection
Dim specifier As StringFormatSpecifier
Dim csvIndices As String
Dim uniqueCount As Integer
Dim largestIndex As Integer
regex.pattern = "\{(\w+)(\,\-?\d+)?(\:[^}]+)?\}"
' literal {
' [1] numbered captured group, any number of repetitions (Index)
' alphanumeric, one or more repetitions
' [2] numbered captured group, zero or one repetitions (AlignmentSpecifier)
' literal ,
' literal -, zero or one repetitions
' any digit, one or more repetitions
' [3] numbered captured group, zero or one repetitions (FormatSpecifier)
' literal :
' any character except '}', one or more repetitions
' literal }
regex.IgnoreCase = True
regex.Global = True
Set matches = regex.Execute(format_string)
For Each thisMatch In matches
Set specifier = New StringFormatSpecifier
specifier.Index = CInt(thisMatch.SubMatches(0))
If Not StringContains(csvIndices, specifier.Index & ",") Then
uniqueCount = uniqueCount + 1
csvIndices = csvIndices & specifier.Index & ","
End If
If specifier.Index > largestIndex Then largestIndex = specifier.Index
If Not thisMatch.SubMatches(1) = vbEmpty Then specifier.Alignment = CInt(Replace(CStr(thisMatch.SubMatches(1)), ",", vbNullString))
If Not thisMatch.SubMatches(2) = vbEmpty Then
specifier.identifier = Left(Replace(CStr(thisMatch.SubMatches(2)), ":", vbNullString), 1)
specifier.CustomSpecifier = Replace(CStr(thisMatch.SubMatches(2)), ":" & specifier.identifier, vbNullString)
End If
result.Add specifier
Next
If matches.Count > 0 And ((uniqueCount <> valuesCount) Or (largestIndex >= uniqueCount) Or valuesCount = 0) Then Err.Raise ERR_FORMAT_EXCEPTION, ERR_SOURCE, ERR_MSG_FORMAT_EXCEPTION
Set GetFormatSpecifiers = result
Set regex = Nothing
Set matches = Nothing
End Function
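As an aside, the pattern's capture groups can be sanity-checked quickly outside of VB6. Here is a small Python sketch (purely illustrative; Python's re stands in for VBScript's RegExp, and the format string is a made-up example):

```python
# Sanity-check of the placeholder pattern used above: group 1 is the index,
# group 2 the optional alignment (with its leading comma), and group 3 the
# optional format specifier (with its leading colon).
import re

pattern = re.compile(r"\{(\w+)(,-?\d+)?(:[^}]+)?\}")

matches = pattern.findall("Hello {0,-10:C}, it is {1:d} today. Bye {0}.")
print(matches)  # [('0', ',-10', ':C'), ('1', '', ':d'), ('0', '', '')]
```

The three tuple slots line up with the Index, AlignmentSpecifier and FormatSpecifier fields that GetFormatSpecifiers extracts.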
The actual StringFormat function takes an array of Variant sent from the module function's ParamArray values() parameter; taking a ParamArray here as well would make things more complicated than they already are.
So all the function really needs to do, is loop through all specifiers in format_string, and apply the appropriate format specifier's formatting. Then apply the alignment specifier and execute escape sequences (unless format_string starts with a "@") - with everything properly encapsulated in specialized objects, this should leave a pretty readable implementation:
Public Function StringFormat(format_string As String, values() As Variant) As String
Dim result As String
result = format_string
Dim specifiers As Collection
Dim specifier As StringFormatSpecifier
Set specifiers = GetFormatSpecifiers(result, UBound(values) + 1)
Dim useLiteral As Boolean
'when format_string starts with "@", escapes are not replaced
'(string is treated as a literal string with placeholders)
useLiteral = StringStartsWith("@", result)
'remove the "@" literal specifier from the result string
If useLiteral Then result = Right(result, Len(result) - 1)
'replace escaped backslashes with 'ESC' character [Chr$(27)]
'to optimize escape sequences evaluation:
If Not useLiteral And StringContains(result, "\\") Then _
result = Replace(result, "\\", Chr$(27))
Dim formattedValue As String
Dim alignmentPadding As Integer
Dim identifier As IStringFormatIdentifier
Dim identifierFound As Boolean
For Each specifier In specifiers
formattedValue = values(specifier.Index)
identifierFound = (specifier.identifier = vbNullString)
If IsNumeric(values(specifier.Index)) Then
For Each identifier In this.NumericSpecifiers
If identifier.IsIdentifierMatch(specifier) Then
identifierFound = True
formattedValue = identifier.GetFormattedValue(values(specifier.Index), specifier)
End If
Next
ElseIf TypeName(values(specifier.Index)) = "Date" Then
For Each identifier In this.DateTimeSpecifiers
If identifier.IsIdentifierMatch(specifier) Then
identifierFound = True
formattedValue = identifier.GetFormattedValue(values(specifier.Index), specifier)
End If
Next
End If
If Not identifierFound Then Err.Raise ERR_FORMAT_EXCEPTION, ERR_SOURCE, ERR_MSG_INVALID_FORMAT_STRING
alignmentPadding = Abs(specifier.Alignment)
If specifier.Alignment < 0 Then
'negative: left-justified alignment
If alignmentPadding - Len(formattedValue) > 0 Then _
formattedValue = formattedValue & String$(alignmentPadding - Len(formattedValue), this.PaddingCharacter)
ElseIf specifier.Alignment > 0 Then
'positive: right-justified alignment
If alignmentPadding - Len(formattedValue) > 0 Then _
formattedValue = String$(alignmentPadding - Len(formattedValue), this.PaddingCharacter) & formattedValue
End If
'replace all occurrences of placeholder {i} with their formatted values:
result = Replace(result, specifier.ToString, formattedValue)
Next
Dim escape As EscapeSequence
If Not useLiteral And StringContains(result, "\") Then
For Each escape In this.EscapeSequences
escape.Execute result
Next
End If
If Not useLiteral And StringContains(result, Chr$(27)) Then result = Replace(result, Chr$(27), "\")
StringFormat = result
End Function
Feel free to comment below! :) | {
"domain": "codereview.stackexchange",
"id": 20319,
"tags": "strings, vba, formatting, vb6"
} |
What are the risks of developing geoengineering? | Question: The Wikipedia page for the geoengineering entry lists a lot of proposals to decrease global temperature:
Solar radiation management methods[5] may include:
Surface-based: for example, surface-based mirror infrastructures,[29] protecting or expanding polar sea ice and
glaciers, including the use of insulating blankets or artificial
snow,[30][31] using pale-colors for roofing materials and other
man-made surfaces (i.e. roadways and exterior paints), attempting
to change the oceans' brightness, growing high-albedo crops, or by
distributing hollow glass beads in selected areas to increase ice
coverage and lower temperatures.[32]
Troposphere-based: for example, marine cloud brightening, which would spray fine sea water to whiten clouds and thus increase cloud
reflectivity.
Upper atmosphere-based: creating reflective aerosols, such as stratospheric sulfate aerosols, specifically designed
self-levitating aerosols,[33] or other substances.
Space-based: space sunshade – obstructing solar radiation with space-based mirrors, dust,[34] etc.
Carbon dioxide removal
Creating biochar (i.e. in biomass-fired thermal power plants), for mixing into the soil to create terra preta
Bio-energy with carbon capture and storage to sequester carbon and simultaneously provide energy Carbon air capture to remove carbon
dioxide from ambient air
Afforestation, reforestation and forest restoration to absorb carbon dioxide
Ocean afforestation and ocean fertilization (which includes iron fertilization of the oceans)
Why have none of these methods been implemented?
What are the risks of developing geoengineering?
Answer: We don't know, and that's why we have to develop it.
That's the thing about undeveloped sciences: we don't know what many of the risks are. It's a case of knowing what we don't know, which is one big reason to develop it, so we don't cause harm accidentally.
Let's be clear: we are already geoengineering right now, by driving cars, changing the landscape, and polluting the air; we are just doing it blind. We have to research things to understand the risks, and geoengineering has barely been scratched. | {
"domain": "earthscience.stackexchange",
"id": 2239,
"tags": "climate-change, carbon-cycle, forest, geoengineering, light"
} |
If my displacement is Euler's number $e$ | Question: So, the derivative of $e^x$ (where $e$ is Euler's number) is equal to $e^x$.
Now imagine I put $x=1$ and travel with a displacement $e$ (which is a constant), so we will not be moving anywhere since our displacement is constant. But (and this is where the question starts) the derivative of displacement w.r.t. time is velocity, and it will be $e^x$ where we put $x=1$, so velocity $=e$.
So now we have a constant non-zero velocity but also a constant displacement; that means even though we have a velocity, we are not moving anywhere with the passage of time.
I know I am wrong somewhere; please can you tell me where? And please don't mind if it's a silly mistake.
Answer: The velocity function $v(t)$ is the derivative of the displacement function $x(t)$, so $v(t)= x'(t)$. If $x(t)=e$ (a constant), then $v(t) = x'(t) = 0$.
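Both cases can be checked numerically with a finite difference; a quick illustrative sketch (not part of the original answer):

```python
# Central-difference check: a constant position x(t) = e has zero velocity,
# while x(t) = e**t has velocity e**t.
import math

def velocity(x, t, h=1e-6):
    """Central-difference estimate of the derivative of x at time t."""
    return (x(t + h) - x(t - h)) / (2 * h)

const_x = lambda t: math.e       # displacement stuck at the constant e
moving_x = lambda t: math.e**t   # displacement e^t, which is not constant

print(velocity(const_x, 1.0))                        # 0.0
print(abs(velocity(moving_x, 1.0) - math.e) < 1e-5)  # True
```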
If $x(t) = e^t$ (in some units), then $x'(t)= e^t$ as well, but in that case, $x(t)$ is not constant. | {
"domain": "physics.stackexchange",
"id": 82612,
"tags": "kinematics, mathematics, calculus"
} |
It is possible to get deb from jenkins before debian repo sync? | Question:
I released a new package. The build on jenkins.ros.org completes successfully. Where can I get these package(s) for inspection?
Originally posted by vooon on ROS Answers with karma: 404 on 2014-06-18
Post score: 0
Answer:
For each distro, there is a status page for debs (for indigo, it's http://www.ros.org/debbuild/indigo.html, for others, replace indigo with the distro). This will tell you what version is in what debian repository. The second column under each heading is the "ros-shadow-fixed" repository, which is probably what you want.
If you want to download an individual deb for inspection, you can find them at http://packages.ros.org/ros-shadow-fixed, or you could update your /etc/apt/sources files to point at "ros-shadow-fixed" rather than "ros", which will allow you to install and test (but, of course, things might be broken -- the "fixed" in ros-shadow-fixed is kinda wishful thinking, not a guarantee)
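The sources switch described above amounts to a one-word change in the apt source line; a hypothetical example (the exact deb line depends on your Ubuntu release, so treat this as a sketch, not the canonical entry):

```shell
# Illustrative only: rewrite a hypothetical ROS apt source line so it points
# at the ros-shadow-fixed repository instead of the public ros repository.
line="deb http://packages.ros.org/ros/ubuntu trusty main"
shadow=$(printf '%s\n' "$line" | sed 's|/ros/|/ros-shadow-fixed/|')
printf '%s\n' "$shadow"
```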
Originally posted by fergs with karma: 13902 on 2014-06-19
This answer was ACCEPTED on the original site
Post score: 3 | {
"domain": "robotics.stackexchange",
"id": 18312,
"tags": "ros, deb, bloom-release"
} |
RenderBatch for game engine | Question: Recently, I implemented a RenderBatch class, which renders arbitrary objects using a specific shader (with its configuration). I'd like to improve on the way my render process works.
RenderBatch.h:
#pragma once
#include "alpha/Shader.h"
#include "alpha/RenderComponent.h"
#include <glm/glm.hpp>
#include "alpha/Mesh.h"
#include "alpha/Transform.h"
#include "alpha/Model.h"
class ShaderConfiguration
{
public:
ShaderConfiguration();
virtual ~ShaderConfiguration();
virtual void Update(Shader& shader);
void SetProjectionView(const glm::mat4& projectionView){
m_projectionView = projectionView;
}
glm::mat4 GetProjectionView() const{
return m_projectionView;
}
void SetTransform(const Transform& transform){
m_transform = transform;
}
Transform GetTransform(){
return m_transform;
}
protected:
glm::mat4 m_projectionView;
Transform m_transform;
};
typedef ShaderConfiguration DefaultShaderConfiguration;
///RenderBatch is really low level... It is only advised to use it
///in case you want to experiment with GLSL shaders : for more basic uses,
///ModelBatch is actually integrated in the GameObject system...
class RenderBatch : RenderComponent
{
public:
RenderBatch(){
m_shader.loadFromFile(GL_VERTEX_SHADER, "res/basicShader.glslv");
m_shader.loadFromFile(GL_FRAGMENT_SHADER, "res/basicShader.glslf");
m_shader.CreateAndLink();
}
RenderBatch(Shader shader)
: m_shader(shader){}
virtual ~RenderBatch(){}
void SetShader(Shader shader){this->m_shader = shader;}
Shader GetShader()const {return m_shader;}
///Genereal way of updating -----------------> using polymorphism
void UpdateShaderConfiguration(ShaderConfiguration* config){
m_shader.Bind();
config->Update(m_shader);
m_shader.UnBind();
}
///Update for simple rendering --------------> ProjectionView + Transform ()
///Equivalent of :
///DefaultShaderConfiguration defaultShaderConfiguration
///UpdateShaderConfiguration(&defaultShaderConfiguration);
void Update(glm::mat4 projectionView, Transform transform){
m_shader.Bind();
glUniformMatrix4fv(
m_shader.GetUniformLocation("MVP"),
1,
GL_FALSE,
glm::value_ptr(projectionView * transform.GetModelView())
);
m_shader.UnBind();
}
void Render(Mesh& mesh){
m_shader.Bind();
mesh.Draw();
m_shader.UnBind();
}
glm::mat4 GetProjectionViewMatrix(){
return m_projectionView;
}
void SetProjectionViewMatrix(glm::mat4 projectionView){
this->m_projectionView = projectionView;
}
protected:
Shader m_shader;
glm::mat4 m_projectionView;
};
RenderBatch.cpp
#include "alpha/RenderBatch.h"
ShaderConfiguration::ShaderConfiguration()
: m_projectionView(glm::mat4(1))
{}
ShaderConfiguration::~ShaderConfiguration(){}
void DefaultShaderConfiguration::Update(Shader& shader){
glUniformMatrix4fv(
shader.GetUniformLocation("MVP"),
1,
GL_FALSE,
glm::value_ptr(m_projectionView * m_transform.GetModelView())
);
}
Main.cpp
///example 1 ---->using defaults
RenderBatch batch = RenderBatch();
Mesh mesh("res/Test.obj");
Camera camera;
while(isRunning)
{
/*Update */
batch.Update(camera.Combined(), transform);
/*some more update code...*/
/*Render*/
batch.Render(mesh);
}
///example 2 ---->using custom configuration
RenderBatch batch = RenderBatch();
Mesh mesh("res/Test.obj");
Camera camera;
ShaderConfiguration config;
while(isRunning)
{
config.SetProjectionView(camera.Combined());
config.SetTransform(transform);
/*Update */
batch.UpdateShaderConfiguration(&config);
/*some more update code...*/
/*Render*/
batch.Render(mesh);
}
Answer: There is a bit of a misnomer with RenderBatch. The term "batch rendering" is normally associated with grouping several geometries that share the same rendering attributes (textures/materials/etc) to render them together and avoid expensive state changes. Your class is just a wrapper around a Shader, so I don't see much reason for its existence, when all that could be part of Shader instead.
There is also an excessive amount of shader binding/unbinding going on inside RenderBatch. You definitely don't want to do that, changing shader programs is a very expensive operation.
That's one of the pitfalls of attempting to integrate OpenGL with OOP. The library was just not made to be used like that; it relies heavily on global state. This will change in the future with the new "bindless" OpenGL, but that's for version 4.5 and above.
Another potential issue that I see here is that RenderBatch is taking its Shader parameter by value, thus making a copy of it. How are you dealing with the internal GL shader program handle in this case? Even though you can copy the integer handle given to you by OpenGL, you can't copy the underlying shader object itself. So I'm wondering if you are not winding up with duplicate shader handles through your program that will get deleted multiple times by calls to glDeleteProgram or even leak. Yet another problem of mixing a very C-ish API with C++. You have to be extra careful to avoid these sorts of mistakes. Be very aware that you are mixing two distinct programming languages in the same project. | {
"domain": "codereview.stackexchange",
"id": 14729,
"tags": "c++, opengl"
} |
Can the IQ of an AI program be measured? | Question: Can an AI program have an IQ? In other words, can the IQ of an AI program be measured? Like how humans can do an IQ test.
Answer: Short answer: No.
Longer answer: It depends on what IQ exactly is, and when the question is asked compared to ongoing development. The topic you're referring to is actually more commonly described as AGI, or Artificial General Intelligence, as opposed to AI, which could be any narrow problem solving capability represented in software/hardware.
Intelligence quotient is a rough estimate of how well humans are able to generally answer questions they have not previously encountered, but as a predictor it is somewhat flawed, and has many criticisms and detractors.
Currently (2016), no known programs have the ability to generalize, or apply learning from one domain to solving problems in an arbitrarily different domain through an abstract understanding. (However there are programs which can effectively analyze, or break down some information domains into simpler representations.) This seems likely to change as time goes on and both hardware and software techniques are developed toward this goal. Experts widely disagree as to the likely timing and approach of these developments, as well as to the most probable outcomes.
It's also worth noting that there seems to be a large deficit of understanding as to what exactly consciousness is, and disagreement over whether there is ever likely to be anything in the field of artificial intelligence that compares to it. | {
"domain": "ai.stackexchange",
"id": 205,
"tags": "intelligence-testing, intelligence-quotient"
} |
Isometry in two dimensions | Question:
Let A be an isometry, then $g^{ij}=A^i{}_{k}A^j{}_l\,g^{kl}$ (1)
First, consider an infinitesimal transformation $A=\exp(\epsilon \lambda )$
Then express A by its power series to first order in $\epsilon$, plug the expression into (1), drop all terms with $\epsilon^2$, and find a condition for $\lambda^{ij}$
Also show that in two dimensions with a metric $g_{ij}$ the matrix $\lambda^i{}_k $ is proportional to
$\begin{pmatrix} \lambda^1{}_1 & \lambda^1{}_2 \\ \lambda^2{}_1 & \lambda^2{}_2 \end{pmatrix} \propto \begin{pmatrix} g_{12} & g_{22} \\ -g_{11} & -g_{12} \end{pmatrix}$
$A=\mathbb{E}+\epsilon \lambda \;\Rightarrow\; [A]^i{}_j=\delta ^i_j + \epsilon \lambda ^i{}_j$
$g^{ij}=\delta^i_k \delta^j_lg^{kl}+\delta^i_k\lambda^j{}_l\epsilon g^{kl}+\epsilon\lambda^i{}_k\delta^j_lg^{kl} \;\Rightarrow\;$
$g^{ij}=g^{ij}+\epsilon (\lambda^i{}_kg^{kj}+\lambda^j{}_lg^{il}) \;\Rightarrow\;$
$\lambda^i{}_kg^{kj}=-\lambda^j{}_lg^{il} \;\Rightarrow\;$
$\lambda^{ij}=- \lambda^{ji}$
However, the matrix for two dimensions doesn't fulfill this condition and I don't know what I've done wrong. Did I make a mistake?
Answer: Isometries for any metric in any dimension are found to be solutions of the Killing equation, which can be written for an infinitesimal coordinate transformation $x^i\rightarrow x^i+\xi^i$ in the form:$$ \xi_{i;j}=-\xi_{j;i}$$ where the semi-colon indicates the covariant derivative. So if you interpret this as a matrix the correct relation should be $\lambda^{ij}=-\lambda^{ji}$. The matrix should be anti-symmetric.
For the second part of the question (restricted to two dimensions) one can use the generic form for $\lambda^{ij} = \left(\begin{array}{c c} 0 & f(x,y) \\ -f(x,y) & 0\end{array}\right).$ Lowering on index corresponds to multiplication with the metric tensor $g_{ij}$. One finds $$\left(\begin{array}{c c} 0 & f(x,y) \\ -f(x,y) & 0\end{array}\right)\cdot \left(\begin{array}{c c} g_{11} & g_{12} \\ g_{12} & g_{22}\end{array}\right) = f\cdot \left(\begin{array}{c c} g_{12} & g_{22} \\ -g_{11} & -g_{12}\end{array}\right)$$ | {
"domain": "physics.stackexchange",
"id": 58101,
"tags": "special-relativity, metric-tensor"
} |
Finding the magnetic field given the electric field | Question: I'm reading a book on Electricity and Magnetism. I was solving some problems and found this very interesting one:
WARNING: The next problems don't have to be solved by applying Maxwell's equations; rather, it's required to reason about the physical situation.
There is a vacuum region where an electric field is defined as $\boldsymbol E=\langle A\sin\beta y\sin\omega t ,0,0\rangle$. Determine the magnetic field, evaluate $\beta$ and show that this field is a combination of travelling waves. Also, show the direction of propagation.
So (I lack physical intuition and experience in electrodynamics) I will take into account the equations $$ \nabla\times \boldsymbol B = \mu_0\epsilon_0\frac{\partial \boldsymbol E}{\partial t},~~~~\nabla\times\boldsymbol E = -\frac{\partial \boldsymbol B}{\partial t},~~~~ \nabla\cdot\boldsymbol B = 0.$$
I end up with $\boldsymbol B = \frac{A\beta}{\omega}\sin\beta y\cos\omega t \boldsymbol{\hat k}+\boldsymbol f(x,y,z),$ where $\boldsymbol f:\mathbb R^3\to\mathbb R^3$. So I have a bunch of partial derivatives of $\boldsymbol f$ and of course I can't solve them, as the warning advised. Can you give me any hint that would lead me any further in solving this interesting problem? Thanks a lot!
Answer: This is a nice physical intuition problem. Here's a kick along the intended path.
A plane wave with frequency $\omega$ traveling along a wavevector $\mathbf k$ has the form $\cos\left( \mathbf k \cdot \mathbf x - \omega t \right)$. Working backwards from the identity $$\cos(A+B)=\cos A\cos B - \sin A\sin B$$ we find the electric field
$$
E_x =
\frac A2 \cos( -\beta y - \omega t ) - \frac A2 \cos( +\beta y - \omega t).
$$
This is a superposition of two plane waves traveling along the $\pm y$ directions.
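The product-to-sum identity doing the work here can be spot-checked numerically; a quick illustrative sketch (not part of the original answer):

```python
# sin(a)*sin(b) == 0.5*(cos(a - b) - cos(a + b)): the identity that splits
# the standing wave sin(by)*sin(wt) into two counter-propagating plane waves.
import math
import random

random.seed(0)
ok = all(
    abs(math.sin(a) * math.sin(b)
        - 0.5 * (math.cos(a - b) - math.cos(a + b))) < 1e-12
    for a, b in [(random.uniform(-10, 10), random.uniform(-10, 10))
                 for _ in range(1000)]
)
print(ok)  # True
```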
The Poynting vector $\mathbf S \propto \mathbf E \times \mathbf B$ tells us that for an EM plane wave $\mathbf E, \mathbf B, \mathbf k$ form a right-handed coordinate system with $\mathbf E, \mathbf B$ oscillating in phase. You can use this to find the components of $\mathbf B$ for both of the plane waves, and recombine them using the same trig identity. | {
"domain": "physics.stackexchange",
"id": 22854,
"tags": "electromagnetism"
} |
Net force equation on incline, tension at angle | Question:
Here's the situation: a block is on an incline. A string is attached at a certain angle and applies tension to the block in the direction up the incline. What would the net force equation look like?
My instinct is that $F_\textrm{net} = T_y + T_x - Ff - F_{gx}$ (since $F_n$ and $F_{gy}$ balance out). But, the only force in the $y$ direction is $T_y,$ which seems to indicate the box will be lifted off the incline! Is there another force acting opposite of $T_y$? In real life, even if you pull something at an angle it doesn't necessarily lift off the ground. I feel like there's some part/explanation I'm missing out on.
Would the actual net equation be $F_\textrm{net} = T_x - F_f - F_{gx}$?
Answer: No rule says that $F_n$ and $F_{gy}$ must balance out.
They just often do in simple examples. But never remember this as a rule! Because it isn't.
The normal force $F_n$ is a "holding up"-force. It can adjust. It is only as large as it needs to be.
Put a book on a table, and a normal force appears to hold it up.
Add another book, and the normal force grows to the double.
In your case:
Place a block on the incline, and the normal force appears to hold back the force that pushes against the surface. This is the y component of gravity. It balances it out completely.
Now pull a bit outwards in the box with a string. This pull balances a bit of the gravity, so there is now less left for the normal force to withstand. The normal force is thus smaller than without the extra pull.
The total sum of these three forces is at all times zero. And that's the only rule.
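A minimal sketch of the balance perpendicular to the incline, using the question's own symbols ($F_{gy}$ for the perpendicular component of gravity, $T_y$ for the perpendicular component of tension):

```latex
F_n + T_y - F_{gy} = 0
\quad\Longrightarrow\quad
F_n = F_{gy} - T_y ,
\qquad \text{valid as long as } F_n \ge 0 .
```

As long as $T_y \le F_{gy}$, the surface supplies the (reduced) normal force and nothing moves perpendicular to the incline; only the along-incline equation $F_\textrm{net} = T_x - F_f - F_{gx}$ governs the motion.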
(Except, of course, if the pull in the string is large enough to overcome gravity - then the object will fly off, and the normal force is zero and not needed since there is nothing to hold back against.) | {
"domain": "physics.stackexchange",
"id": 34144,
"tags": "homework-and-exercises, newtonian-mechanics"
} |
How to run proteowizzard on linux using wine? | Question: I am trying to run ProteoWizard using Wine to be able to convert raw mass spectrometry files to mzML format, following these instructions, which seem terribly outdated. I am getting many issues. I am running Ubuntu 16.04.
We want to convert output from a ThermoFisher instrument to mzML or mzXML format.
> ~/software/proteowizzard>wine msconvert.exe
> fixme:heap:HeapSetInformation (nil) 1 (nil) 0
> fixme:heap:HeapSetInformation (nil) 1 (nil) 0
> fixme:process:SetProcessShutdownParameters (00000380, 00000000):
> partial stub. err:module:import_dll Library VCOMP110.DLL (which is
> needed by
> L"Z:\\home\\swacker\\software\\proteowizzard\\baf2sql_c.dll") not
> found err:module:import_dll Library baf2sql_c.dll (which is needed by
> L"Z:\\home\\swacker\\software\\proteowizzard\\msconvert.exe") not
> found err:module:import_dll Library MSVCP140.dll (which is needed by
> L"Z:\\home\\swacker\\software\\proteowizzard\\msconvert.exe") not
> found err:module:import_dll Library VCRUNTIME140.dll (which is needed
> by L"Z:\\home\\swacker\\software\\proteowizzard\\msconvert.exe") not
> found err:module:import_dll Library api-ms-win-crt-runtime-l1-1-0.dll
> (which is needed by
> L"Z:\\home\\swacker\\software\\proteowizzard\\msconvert.exe") not
> found err:module:import_dll Library api-ms-win-crt-heap-l1-1-0.dll
> (which is needed by
> L"Z:\\home\\swacker\\software\\proteowizzard\\msconvert.exe") not
> found err:module:import_dll Library api-ms-win-crt-convert-l1-1-0.dll
> (which is needed by
> L"Z:\\home\\swacker\\software\\proteowizzard\\msconvert.exe") not
> found err:module:import_dll Library api-ms-win-crt-stdio-l1-1-0.dll
> (which is needed by
> L"Z:\\home\\swacker\\software\\proteowizzard\\msconvert.exe") not
> found err:module:import_dll Library api-ms-win-crt-string-l1-1-0.dll
> (which is needed by
> L"Z:\\home\\swacker\\software\\proteowizzard\\msconvert.exe") not
> found err:module:import_dll Library
> api-ms-win-crt-environment-l1-1-0.dll (which is needed by
> L"Z:\\home\\swacker\\software\\proteowizzard\\msconvert.exe") not
> found err:module:import_dll Library
> api-ms-win-crt-filesystem-l1-1-0.dll (which is needed by
> L"Z:\\home\\swacker\\software\\proteowizzard\\msconvert.exe") not
> found err:module:import_dll Library api-ms-win-crt-time-l1-1-0.dll
> (which is needed by
> L"Z:\\home\\swacker\\software\\proteowizzard\\msconvert.exe") not
> found err:module:import_dll Library api-ms-win-crt-locale-l1-1-0.dll
> (which is needed by
> L"Z:\\home\\swacker\\software\\proteowizzard\\msconvert.exe") not
> found err:module:import_dll Library api-ms-win-crt-math-l1-1-0.dll
> (which is needed by
> L"Z:\\home\\swacker\\software\\proteowizzard\\msconvert.exe") not
> found err:module:import_dll Library MSVCP140.dll (which is needed by
> L"Z:\\home\\swacker\\software\\proteowizzard\\msparser.dll") not found
> err:module:import_dll Library VCRUNTIME140.dll (which is needed by
> L"Z:\\home\\swacker\\software\\proteowizzard\\msparser.dll") not found
> err:module:import_dll Library api-ms-win-crt-runtime-l1-1-0.dll (which
> is needed by
> L"Z:\\home\\swacker\\software\\proteowizzard\\msparser.dll") not found
> err:module:import_dll Library api-ms-win-crt-string-l1-1-0.dll (which
> is needed by
> L"Z:\\home\\swacker\\software\\proteowizzard\\msparser.dll") not found
> err:module:import_dll Library api-ms-win-crt-stdio-l1-1-0.dll (which
> is needed by
> L"Z:\\home\\swacker\\software\\proteowizzard\\msparser.dll") not found
> err:module:import_dll Library api-ms-win-crt-filesystem-l1-1-0.dll
> (which is needed by
> L"Z:\\home\\swacker\\software\\proteowizzard\\msparser.dll") not found
> err:module:import_dll Library api-ms-win-crt-time-l1-1-0.dll (which is
> needed by L"Z:\\home\\swacker\\software\\proteowizzard\\msparser.dll")
> not found err:module:import_dll Library
> api-ms-win-crt-convert-l1-1-0.dll (which is needed by
> L"Z:\\home\\swacker\\software\\proteowizzard\\msparser.dll") not found
> err:module:import_dll Library api-ms-win-crt-environment-l1-1-0.dll
> (which is needed by
> L"Z:\\home\\swacker\\software\\proteowizzard\\msparser.dll") not found
> err:module:import_dll Library api-ms-win-crt-locale-l1-1-0.dll (which
> is needed by
> L"Z:\\home\\swacker\\software\\proteowizzard\\msparser.dll") not found
> err:module:import_dll Library api-ms-win-crt-heap-l1-1-0.dll (which is
> needed by L"Z:\\home\\swacker\\software\\proteowizzard\\msparser.dll")
> not found err:module:import_dll Library api-ms-win-crt-math-l1-1-0.dll
> (which is needed by
> L"Z:\\home\\swacker\\software\\proteowizzard\\msparser.dll") not found
> err:module:import_dll Library api-ms-win-crt-utility-l1-1-0.dll (which
> is needed by
> L"Z:\\home\\swacker\\software\\proteowizzard\\msparser.dll") not found
> err:module:import_dll Library msparser.dll (which is needed by
> L"Z:\\home\\swacker\\software\\proteowizzard\\msconvert.exe") not
> found err:module:import_dll Library api-ms-win-crt-utility-l1-1-0.dll
> (which is needed by
> L"Z:\\home\\swacker\\software\\proteowizzard\\msconvert.exe") not
> found err:module:LdrInitializeThunk Main exe initialization for
> L"Z:\\home\\swacker\\software\\proteowizzard\\msconvert.exe" failed,
> status c0000135
Answer: Try this Docker container.
All the vendor formats work except T2D, YEP, and FID. | {
"domain": "bioinformatics.stackexchange",
"id": 833,
"tags": "mass-spectrometry"
} |
Rot13 app - How to tidy up code? | Question: I made a program that lets the user input text into a text box.
The user can choose a number to rotate each alphabetical character by that number,
and the output box will display the converted text.
Here is how it runs:
Among the code is
# if event.key == pygame.K_RETURN:
# print('\n'.join([''.join(row) for row in rot_box.text])+'\n--------------------')
If you remove the hashtags, the converted text will be printed whenever you press Enter.
Here is my entire code:
import pygame
pygame.font.init()
wn = pygame.display.set_mode((600, 600))
font = pygame.font.Font(None, 32)
class TextBox():
def __init__(self, x, y, w, h, title='Text Box', color=(0, 0, 0), default=''):
self.input_box = pygame.Rect(x, y, w, h)
self.w = w
self.h = h
self.color_inactive = color
self.color_active = pygame.Color('purple')
self.color = self.color_inactive
self.default = default
self.text = ['']
self.active = False
self.title = title
def draw(self):
title = font.render(self.title, True, self.color)
wn.blit(title, (self.input_box.x+5, self.input_box.y-self.h))
txts = [font.render(''.join(t), True, self.color) for t in self.text]
width = max(self.w, max(txts, key=lambda x:x.get_width()).get_width()+10)
height = self.h * len(txts)
self.input_box.w = width
self.input_box.h = height
if len(txts) == 1 and txts[0].get_width() == 1:
wn.blit(font.render(self.default, True, self.color), (self.input_box.x+5, self.input_box.y+5))
else:
for i, txt in enumerate(txts):
wn.blit(txt, (self.input_box.x+5, self.input_box.y+5+i*self.h))
pygame.draw.rect(wn, self.color, self.input_box, 2)
def check_status(self, pos):
if self.input_box.collidepoint(pos):
self.active = not self.active
else:
self.active = False
self.color = self.color_active if self.active else self.color_inactive
def type(self, event):
if self.active:
if event.key == pygame.K_RETURN:
self.text.append('')
elif event.key == pygame.K_BACKSPACE:
if self.text[-1]:
self.text[-1] = self.text[-1][:-1]
else:
if len(self.text) > 1:
self.text = self.text[:-1]
else:
self.text[-1] += event.unicode
def rot(alp, num):
while num >= 26:
num -= 26
if alp in 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ':
n = ord(alp) + num
if n > 122 or 97 > n > 90:
n -= 26
alp = chr(n)
return alp
box = TextBox(110, 60, 140, 32, 'Input Text')
rot_num = TextBox(10, 60, 50, 32, 'Rot', default='0')
rot_box = TextBox(110, 300, 140, 32, 'Output Text')
while True:
rt = int(rot_num.text[0]) if rot_num.text[0] else 0
rot_box.text = [[rot(char, rt) for char in row] for row in box.text]
for event in pygame.event.get():
if event.type == pygame.QUIT:
pygame.quit()
if event.type == pygame.MOUSEBUTTONDOWN:
box.check_status(event.pos)
rot_num.check_status(event.pos)
if event.type == pygame.KEYDOWN:
box.type(event)
if (event.unicode.isdigit() or event.key == pygame.K_BACKSPACE) and len(rot_num.text[0]) < 6 or event.key == pygame.K_BACKSPACE:
rot_num.type(event)
# if event.key == pygame.K_RETURN:
# print('\n'.join([''.join(row) for row in rot_box.text])+'\n--------------------')
wn.fill((255, 255, 200))
box.draw()
rot_num.draw()
rot_box.draw()
pygame.display.flip()
Can you show me how to clean up my messy code?
Can you also point me to the parts of my code that should have been in a class or function, and the parts that shouldn't have been?
Answer: Your ROT algorithm is slightly wrong. A real ROT cipher returns the rotated letter in the same case as the input, but yours maps uppercase input (e.g. TUVWXYZ) to lowercase for large rotations. That is not an exact ROT.
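For reference, a case-preserving rotation can be written with modular arithmetic (a minimal sketch, independent of the asker's pygame code):

```python
import string

def rot(ch, num):
    """Rotate a single character by num places, preserving case.

    Non-alphabetic characters are returned unchanged, and any
    rotation amount works because of the modulo.
    """
    if ch in string.ascii_lowercase:
        return chr((ord(ch) - ord('a') + num) % 26 + ord('a'))
    if ch in string.ascii_uppercase:
        return chr((ord(ch) - ord('A') + num) % 26 + ord('A'))
    return ch

print(''.join(rot(c, 13) for c in 'Uryyb, Jbeyq!'))  # -> Hello, World!
```

With this version 'Z' rotated by 13 gives 'M' rather than a lowercase letter, and the `while num >= 26` loop in the original is no longer needed.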
This can be seen from this screenshot
This may be helpful: https://www.geeksforgeeks.org/rot13-cipher/. There, dictionaries are used instead of chr() and ord(), which is efficient and less error-prone. | {
"domain": "codereview.stackexchange",
"id": 39841,
"tags": "python, classes, pygame"
} |
How doesn't combining two eigenvectors that have the same eigenvalue for a specific matrix represent every vector left in the plane? | Question: If we have a 2D plane and the hermitian matrix $L$ where:
$$L|\lambda_1\rangle=\lambda|\lambda_1\rangle$$
$$L|\lambda_2\rangle=\lambda|\lambda_2\rangle$$
Given that $|\lambda_1\rangle$ and $|\lambda_2\rangle$ are linearly independent, we can make any vector
$$|A\rangle=\alpha|\lambda_1\rangle +\beta|\lambda_2\rangle$$
that will be an eigenvector for $L$ with the same eigenvalue.
Can't I, with the last equation, create every vector in that plane and therefore every vector is an eigenvector for the matrix with the same eigenvalue?
I know this question might sound ridiculous and that there is a mistake in my reasoning, but I can't find where the mistake is.
Answer: There is no mistake. If you have a $2 \times 2$ matrix with one eigenvalue $\lambda$ and 2 linearly independent eigenvectors, then the whole plane is the eigenspace and the matrix is equal to $\lambda I$. | {
"domain": "quantumcomputing.stackexchange",
"id": 5375,
"tags": "linear-algebra"
} |
Incremental encoder with memory | Question: I'm working on a robotics project. In this project we need to control the angle position of the wheels. We are going to use an incremental encoder (we didn't find an absolute encoder that was able to fit into the structure of the system). My question is, is there a method that we can use to store the position of the encoder in case of a power shutdown? I was thinking about ways to save the last position in non-volatile memory (internal microcontroller memory or an SD card, for example).
Answer: What if someone pushes the wheel while the power is off? The memory won't be correct.
A typical solution to this problem is to use a "home switch". When the robot starts up, it can rotate the wheel in one direction until it bumps into a switch which you have installed in a certain position. Then, the software knows the wheel position, and can track it from then on.
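As an illustration, the homing logic can be sketched as a small routine (hypothetical names; a simulated motor and switch stand in for the real hardware):

```python
def home_wheel(step_motor, switch_pressed, max_steps=10000):
    """Rotate one step at a time until the home switch closes.

    Returns the number of steps taken. After homing, the position
    is known (zero) and can be tracked incrementally from there.
    """
    for steps in range(max_steps):
        if switch_pressed():
            return steps  # wheel is now at the known home position
        step_motor()
    raise RuntimeError("home switch never triggered")

# Simulated hardware: the switch sits 37 steps away in this run.
position = {"angle": 37}
steps_taken = home_wheel(
    step_motor=lambda: position.update(angle=position["angle"] - 1),
    switch_pressed=lambda: position["angle"] == 0,
)
print(steps_taken)  # -> 37
```

The same pattern works in firmware: run the homing routine once at startup, then count encoder pulses from the known zero.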
You indicated in the comments that this solution works for you. | {
"domain": "engineering.stackexchange",
"id": 4826,
"tags": "applied-mechanics, robotics, wheels"
} |
Why Do Ruminants Require A Multi-Compartment Stomach To Digest Food? | Question: Cows, camels, sheep, goats, etc., being ruminants, must chew their food repeatedly by regurgitating it from their first stomach compartment and chewing their 'cud'. This finer-chewed material then makes its way to the various stomach compartments to be digested.
These animals are eating plant material, the same plant material animals such as elephants, horses and hippos eat as well. However, these animals only have one stomach compartment.
Why does one need a multi-compartment stomach and one does not if they are all eating the same/similar food?
Answer: There is more difference than just the parts of plants that are eaten.
Two of their stomachs - the rumen and reticulum - are not used for digesting food at all. Multiple rounds of chewing and mixing with saliva, which result in very small particles of undigested food, help the bacteria in those stomachs reach, digest and use that cellulose-rich plant material. Cows "eat" the bacteria from their stomachs, and the grass is just food for these bacteria. You could say that cows are not herbivores but secondary consumers.
Look at this picture (from Wikipedia article on ruminants, a good place to check all this in more detail but still simpler than most professional books):
See that the ruminants have the same second and third parts of their digestive system (though the bacteria use up most of the carbohydrates, so there is some difference in absorbed material), but have an additional part in front - where the microbes digest and ferment the food that the ruminant has provided them. The reticulorumen needs to be large in order to provide space for all that plant material and bacteria.
"domain": "biology.stackexchange",
"id": 754,
"tags": "digestive-system"
} |
Where to start on learning Special Theory of Relativity? | Question: I was recently reading articles/posts about the Special Theory of Relativity and I was fascinated by the theory's insights about space and time. My goal is to have an in-depth understanding for STR and so I want to know what books, websites or youtube channels are good for learning the STR. Also, do you have to be an expert at STR to be good in GTR?
My mathematical knowledge is not very high (I am in Pre-Calculus) and I am in AP Physics 1. Just some helpful information.
Answer: Reflections on Relativity by Kevin Brown.
http://mathpages.com/rr/rrtoc.htm
Please pay particular attention to Doppler Effect
http://mathpages.com/home/kmath587/kmath587.htm
Read about the transverse Doppler effect. Ask yourself (or someone else): why does the transverse Doppler effect appear in different colors? What does that mean? Which observation is correct? What does the color depend on? Why did the author omit these reflections in his book?
You may spend years, looking for answer, but you won't be sorry. | {
"domain": "physics.stackexchange",
"id": 36176,
"tags": "special-relativity, resource-recommendations, education"
} |
Angular momentum wavefunctions with respect to different axes | Question: I've been learning about quantum angular momentum, and I have a question about the relationship between quantum mechanical angular momentum wavefunctions with respect to different axes. I know that when constructing angular momentum eigenfunctions, we can only find simultaneous eigenfunctions of $L^2$ and one component of $\vec{L}$ (because the individual component of $\vec{L}$ don't commute with each other), so we choose $L_z$ for mathematical convenience.
Say we have a system that is in some superposition of these eigenfunctions which are simultaneous eigenfunctions of $L^2$ and $L_z$, and then we measure the angular momentum of the system with respect to some arbitrary axis. In order to use the Born probability rule, it will have to end up being the case that whatever superposition of the simultaneous eigenfunctions of $L^2$, and $L_z$ the system was in, this superposition will correspond to a superposition of states that are simultaneous eigenfunctions of $L^2$ and the component of $\vec{L}$ in the arbitrary direction we chose to make the measurement.
But this is not mathematically intuitive, so I just wanted to check whether I was correct: Is it the case that any superposition of simultaneous eigenfunctions of $L^2$ and $L_z$ can be expanded as a superposition of simultaneous eigenfunctions of $L^2$ and the component of $\vec{L}$ in any direction whatsoever? This would seem to be quite remarkable.
Answer: Yes, you're right. One of the postulates of QM is that the set of eigenfunctions of a complete set of operators (where by complete I mean a set of physical quantities whose knowledge completely describes the state of the system) is a complete set of functions, where this time by complete I mean that every solution $\psi$ of the corresponding Schroedinger equation is some linear combination of them. So if you consider the set of eigenfunctions $\psi_k$ of $L^2$ and $L_z$, then you can write:
$$
\psi=\sum_{k=0}^{\infty}c_k\psi_k
$$
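As a concrete finite-dimensional check of this basis-independence (a numpy sketch using spin-1/2 matrices as a stand-in, with the "other direction" taken along $x$): any superposition of $S_z$ eigenstates expands exactly in the $S_x$ eigenbasis, because both are orthonormal bases of the same space.

```python
import numpy as np

# Pauli matrices (units of hbar/2).
Sz = np.array([[1, 0], [0, -1]], dtype=complex)
Sx = np.array([[0, 1], [1, 0]], dtype=complex)

# Orthonormal eigenbases of S_z and S_x (columns of each matrix).
_, z_basis = np.linalg.eigh(Sz)
_, x_basis = np.linalg.eigh(Sx)

# An arbitrary superposition of S_z eigenstates.
psi = 0.6 * z_basis[:, 0] + 0.8j * z_basis[:, 1]

# Expansion coefficients in the S_x eigenbasis: c_k = <x_k|psi>.
c = x_basis.conj().T @ psi

# Reconstructing psi from the S_x eigenbasis is exact.
psi_reconstructed = x_basis @ c
print(np.allclose(psi, psi_reconstructed))  # -> True
```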
Now, if $L^2$ and $L_z$ represent a complete set of operators, then $L^2$ and $L_s$ (where $L_s$ is the component of angular momentum in some other direction) do too. In fact, the choice of direction depends only on the reference frame in which you work. So the simultaneous eigenfunctions of $L^2$ and $L_s$ also constitute a complete set of functions in the space of the Schroedinger equation's solutions, and you can represent your wave function as a linear combination of them. | {
"domain": "physics.stackexchange",
"id": 21672,
"tags": "quantum-mechanics, angular-momentum"
} |
Rate of Evaporation of solution | Question: Why does rate of evaporation of solution decreases on adding a non volatile component?
Also what happens if we add a volatile solute?
Answer: The rate of evaporation decreases because the activity of the solvent in the liquid phase decreases, while the activity in the vapor phase (or solid, for that matter) stays the same. This is also the reason for the increase in boiling point and the decrease in melting point of the solvent.
If we add a volatile solute, it will evaporate too, thus changing the solvent activity in the vapor phase as well. The resulting behavior can't be characterized universally; you'll have to consult the binary phase diagram of boiling for the particular system, and these can be pretty diverse. | {
"domain": "chemistry.stackexchange",
"id": 5550,
"tags": "aqueous-solution"
} |
How to count microstates in quantum statistical mechanics | Question: According to the fundamental postulate of statistical mechanics (as far as I know it), we take the (classical) probability for a system to be in any of its microstates to be equal (if the system is in equilibrium and isolated). My question is, how do we count the number of microstates? From what I can see, the number of microstates is usually taken to be the number of linearly independant energy eigenstates of the Hamiltonian for that particular value of energy. However, I don't see why this is the 'number' of microstates. Are we supposed to imagine the microcanonical ensemble as a macroscopic quantum system for which we have measured the energy? In that case the system could be in any one of an (uncountably) infinite number of linear superpositions of its degenerate energy eigenstates. In other words the state vector could be be any vector at all in the degenerate energy eigenspace.
So why are we allowed to count the number of states as the number of linearly independant energy eigenstates, when we don't know that the system would even be in an eigenstate (but rather only in some linear combination)? And as a direct consequence of this method of counting, it seems we assign a non-zero probability only for the system to be in energy eigenstates, and ignore the possibility that it could be in a superposition.
Answer: Summary: the question "what is the probability of state $|\psi \rangle$" is the wrong question to ask, because it's experimentally unobservable. What you really care about is the results of measurements, and the number of possible different measurement outcomes is equal to the number of microstates.
Consider a bunch of noninteracting spin 1/2 particles in thermal equilibrium, and look at a single spin inside.
On the classical level, at thermal equilibrium, the spin of this particle should be a vector with random direction. Now we might ask, on the quantum level, what's the state of this particle?
However, this is the wrong question to ask. Given any spinor $|\psi \rangle$, it's possible to find an axis $\mathbf{n}$ so that $|\psi\rangle$ is the positive eigenstate of $S_{\mathbf{n}}$, i.e. the spin is definitely up along $\mathbf{n}$. So individual elements of the Hilbert space are insufficient to represent a thermal state.
Instead, we need to do the same thing we did in the classical case: replace the state of a system (i.e. $(\mathbf{x}, \mathbf{p})$ classically and $|\psi \rangle$ in quantum) with a probability distribution over those states. One such probability distribution is
$$50\% \text{ chance } |\uparrow \rangle, \quad 50\% \text{ chance } |\downarrow \rangle.$$
The miraculous thing is that this distribution is rotationally invariant! That is, the distributions
$$50\% \text{ chance } | \mathbf{n} \rangle, \quad 50\% \text{ chance } |-\mathbf{n}\rangle$$
are indistinguishable by measurement for any direction $\mathbf{n}$. Every spin measurement of any $\mathbf{n}$ ensemble, along any direction $\mathbf{m}$, gives a 50/50 result.
This tells us that the correct description of a thermal ensemble of quantum particles can't be an assignment of probabilities to quantum states, because such a construction isn't unique: there are many probability distributions that are experimentally indistinguishable. (The quantity that corresponds to probability distributions "up to distinguishability" is called the density matrix $\rho$.)
With this setup, the answers to your questions are:
Why don't we consider superpositions? We do! You can't directly ask "what is the probability the system is in state $|\psi \rangle$", because this is not an experimentally observable quantity, as explained above. But you can perform a measurement of an operator $\hat{A}$ with eigenvector $|\psi \rangle$ and eigenvalue $\lambda$, and there's some probability you get $\lambda$.
Why do we count microstates? This is the dimension of our Hilbert space, or more physically, the number of different numbers you can get while measuring $\hat{A}$. In the example above, the probability of getting $\lambda$ is $1/2$ where $2$ is the number of microstates. We aren't privileging the spin up or spin down states, because you get a 50/50 chance for any spin operator $S_{\mathbf{n}}$.
Why do we count specifically energy eigenstates? This is just for convenience. What we really want is the dimension of the Hilbert space, and $\hat{H}$ is often easy to deal with/reason about. | {
"domain": "physics.stackexchange",
"id": 48051,
"tags": "quantum-mechanics, statistical-mechanics"
} |
Can we tell how fast bodies are moving away by measuring their frequency? | Question: It's clear the universe is expanding because light from distant bodies, like galaxies, is shifted towards the red side of the spectrum. In other words, the light measured is more red than it should be when compared to laboratory results. This is great for measuring the distance to other bodies, but can't we also measure their velocities of departure by measuring how fast the light from these bodies is becoming more and more red over time? That is, v is directly proportional to the change of frequency divided by the time elapsed?
Answer: Your understanding is correct. The doppler shift observed from a galaxy is the sum of its peculiar velocity with respect to the "Hubble flow" and the redshift due to the Hubble flow, which is caused by the expansion of the universe.
There is no direct way from a spectrum to separate these two components - they have the same qualitative result.
In principle, the expansion of the universe (or a change in the peculiar velocity) could be directly measured by looking for a change in redshift with time, which would depend on the cosmological parameters.
This is an extremely small effect and is confused by the peculiar motions of individual galaxies. Nevertheless, measuring this redshift drift is one of the prime goals of the Codex Instrument on the E-ELT (see Pasquini et al. 2010, http://esoads.eso.org/abs/2010Msngr.140...20P )
using Lyman alpha absorption systems towards distant quasars. This experiment is also planned for the Square Kilometre Array, using the 21cm line (Kloeckner et al. 2015 http://arxiv.org/abs/1501.03822 ).
In both cases, to overcome the experimental uncertainties (e.g. at 21 cm, it amounts to line drifts of 0.1 Hz over a decade), observations of millions of galaxies must be combined.
There is no prospect of measuring this effect in an individual galaxy; furthermore, I fear your understanding of cosmological redshift is flawed. The dependence on distance is a statistical average, not an absolute dependence. Individual galaxies are moving in individual gravitational potentials from objects around them. This gives them their peculiar velocities with respect to the flow. This velocity could increase or decrease as a galaxy got further away, but is never expected to be large enough to be detectable on human timescales for any individual galaxy. In addition, any change in peculiar velocity should average to zero when looking at millions of galaxies, leaving the redshift drift due to the expansion. | {
"domain": "astronomy.stackexchange",
"id": 1481,
"tags": "distances, velocity, doppler-effect"
} |
Question about Nyquist sampling - filtering of high image copies | Question: Suppose I sample an audio signal with a sampling frequency > twice the highest audio frequency.
ie, the frequency transform of the sampled signal looks like the image on top here, where there is no overlap of copies:
Also, suppose that all the high image copies are outside the audible range. Now is there any need for a brickwall filter in this situation? I'd think it was unnecessary since the audible part of the original and sampled signal are identical. As a specific example, say we sampled a signal at 96 kHz. Now is there a need for any filtering to remove high image copies? Thanks.
Answer: In your scenario where an audio analog signal having nonzero spectral components from zero Hz to, say, 15 kHz, is sampled at a rate of 96 kHz the separation between the blue and green curves (in your linked web page picture) will be larger than the separation shown in the picture.
To answer your question of: "I'm wondering if there are some unforeseen side-effects to having those images there.", the answer is "No". And I say "No" because there is no spectral overlap of the sampled x[n] signal's blue and green curves, thus no "aliasing" errors have been caused by sampling the original analog signal at 96 kHz.
Now if you were to perform some processing on the discrete x[n] sampled signal that results in some sort of frequency translation (such as AM modulation, or discrete signal decimation), causing overlap of the blue and green curves, then you'll be introducing errors into your new "processed" discrete signal. The bottom line here is well known: for any discrete lowpass signal the sample rate must be greater than twice the highest-frequency spectral component.
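For instance, decimating without an anti-alias filter folds the images back into band. A numpy sketch with assumed figures: a 15 kHz tone sampled at 96 kHz, decimated by 4 to 24 kHz (Nyquist 12 kHz), aliases to 24 − 15 = 9 kHz:

```python
import numpy as np

fs = 96_000          # original sample rate, Hz
f0 = 15_000          # tone frequency, Hz
n = 9_600            # 0.1 s of samples -> 10 Hz FFT bin spacing
t = np.arange(n) / fs
x = np.sin(2 * np.pi * f0 * t)

# At 96 kHz the spectral peak sits where it should: 15 kHz.
freqs = np.fft.rfftfreq(n, 1 / fs)
peak = freqs[np.argmax(np.abs(np.fft.rfft(x)))]
print(peak)  # -> 15000.0

# Decimate by 4 with no filtering: new fs = 24 kHz, Nyquist = 12 kHz.
y = x[::4]
freqs_d = np.fft.rfftfreq(len(y), 4 / fs)
peak_d = freqs_d[np.argmax(np.abs(np.fft.rfft(y)))]
print(peak_d)  # -> 9000.0  (the 15 kHz tone folded to 24 - 15 kHz)
```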
On that linked web page the page's author wrote: "A brick-wall low-pass filter, H(f), removes the images, leaves the original spectrum, X(f), and recovers the original signal from its samples." That sentence, without further explanation, is super misleading! There is NO digital filter that will eliminate the spectral images (the green curves) from a discrete signal sequence. | {
"domain": "dsp.stackexchange",
"id": 11846,
"tags": "sampling, nyquist"
} |
How to perform a controlled Pauli string rotation gate? | Question: I would like to know some circuit decomposition for an arbitrary controlled Pauli string rotation:
\begin{equation}
|0\rangle\langle 0| \otimes e^{i \theta (P_1\otimes...\otimes P_n)}+ |1\rangle\langle 1| \otimes I
\end{equation}
where $P_i$ are Pauli operators. What's a simple circuit decomposition for that?
Answer: This is nearly a built-in decomposition in cirq. Here's what happens when you decompose a Pauli product:
import cirq
a, b, c, d, e = cirq.LineQubit.range(5)
product = cirq.X(a) * cirq.X(b) * cirq.Y(c) * cirq.Y(d) * cirq.Z(e)
power = product**0.125
ops = cirq.decompose_once(power)
print(cirq.Circuit(ops).to_text_diagram(use_unicode_characters=False))
Which prints:
0: ---Y^-0.5---X---X---X---X---Z^(1/8)---X---X---X--------X--------Y^0.5---
| | | | | | | |
1: ---Y^-0.5---@---|---|---|-------------|---|---|--------@--------Y^0.5---
| | | | | |
2: ---X^0.5--------@---|---|-------------|---|---@--------X^-0.5-----------
| | | |
3: ---X^0.5------------@---|-------------|---@---X^-0.5--------------------
| |
4: ---I--------------------@-------------@---I-----------------------------
It works by conjugating the Pauli product by single qubit gates to turn it into a product of Z observables, then conjugating by CNOTs to reduce that to a single Z observable, then phasing that single observable.
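The key conjugation step can be checked directly in numpy (a sketch of the identity CNOT · e^{iθ I⊗Z} · CNOT = e^{iθ Z⊗Z}: a CNOT pair turns a single-qubit Z phase into a two-qubit Pauli phase):

```python
import numpy as np

theta = np.pi / 8

# Both exponentials are diagonal, so expm is just exponentiating the diagonal.
exp_IZ = np.diag(np.exp(1j * theta * np.array([1, -1, 1, -1])))   # e^{i theta I(x)Z}
exp_ZZ = np.diag(np.exp(1j * theta * np.array([1, -1, -1, 1])))   # e^{i theta Z(x)Z}

# CNOT with qubit 0 as control, qubit 1 as target (basis ordering |q0 q1>).
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

# Conjugating the single-qubit phase by CNOTs yields the two-qubit phase.
print(np.allclose(CNOT @ exp_IZ @ CNOT, exp_ZZ))  # -> True
```

Chaining such CNOT conjugations (plus single-qubit basis changes for X and Y) is what reduces the full Pauli string to the one Z^(1/8) in the middle of the circuit.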
Anyways, the point is that the key operation in this entire thing is that one Z gate in the middle. Everything else is self-undoing conjugation. So...
C: ----------------------------@-------------------------------------------
|
0: ---Y^-0.5---X---X---X---X---Z^(1/8)---X---X---X--------X--------Y^0.5---
| | | | | | | |
1: ---Y^-0.5---@---|---|---|-------------|---|---|--------@--------Y^0.5---
| | | | | |
2: ---X^0.5--------@---|---|-------------|---|---@--------X^-0.5-----------
| | | |
3: ---X^0.5------------@---|-------------|---@---X^-0.5--------------------
| |
4: ---I--------------------@-------------@---I-----------------------------
Controlling that one key gate controls the entire operation. | {
"domain": "quantumcomputing.stackexchange",
"id": 5226,
"tags": "circuit-construction, quantum-circuit, gate-synthesis, pauli-gates, openfermion"
} |
Why are LIGO's beam tubes so wide? | Question: Gravitational wave detectors and particle accelerators have at least one thing in common -- they require long vacuum tubes through which a narrow beam is fired (a laser in the gravitational wave case, a particle beam in the accelerator case). In both cases, the vacuum tube is many orders of magnitude wider than the beam itself. But interestingly, while the LHC's vacuum tubes are 6.3 cm in diameter, LIGO's are about 20 times wider at 1.2 m in diameter.
So my question is: why are LIGO's vacuum tubes so wide? This must have been a conscious design consideration, since it means that a much larger volume of vacuum must be maintained, and more material must be used to construct the tube. The main consideration for tube width that I can think of is that you have to be able to aim your beam within the width allotted, but surely on these grounds LIGO could have gotten away with a much narrower tube. (Actually, I have no idea -- is this even the deciding factor for the tube width at the LHC?)
Answer: The LIGO beam is 200 W as generated at the input mode cleaner; the beam is then recycled multiple times in the arms, increasing the power density significantly. This requires large optics with near perfect coatings in order to avoid "hot spot/cold spot" damage from various types of possible defects.
But there is an additional reason for the large beam size, and I quote from Advanced LIGO, section 2.1: "In order to reduce test mass thermal noise, the beam size on the test masses is made as large as practical so that it averages over more of the mirror surface. The dominant noise mechanism here is mechanical loss in the dielectric mirror coatings, for which the displacement thermal noise scales inversely with beam size. This thermal noise reduction is balanced against increased aperture loss and decreased mode stability with larger beams."
Inspecting LIGO's optics for contaminants.
When I was a grad student in the early 1990s, we worked on extremely sensitive, non-destructive techniques based on non-linear optics which could find the coating defects: location and classification. Our detector scanned the surface, and recorded amplitude and phase changes based on the photothermal effect, so I always take a personal interest in the success of LIGO; after all, they helped pay my way!
See LIGO's laser here.
LIGO Hanford. | {
"domain": "physics.stackexchange",
"id": 31164,
"tags": "experimental-physics, ligo"
} |
What do photons look like? | Question: We have many theories that advocate the particle nature of light. But have we ever observed photons physically?
If so: what do they look like? How big are they?
If not: why not? Is it because they move at the speed of light?
Answer: "We have many theories that advocate the particle nature of light". Let me first re-word that statement to make it more accurate:
We have a wide-ranging and mathematically elegant framework called quantum mechanics, and when applied to electromagnetic phenomena it yields the photon model.
"But have we ever observed photons physically? What do they look like?"
The answer to this is that every observation involving light or other electromagnetic radiation is correctly treated by the photon model. But some observations could also be handled by other models such as classical electromagnetism. So to ask your question more precisely, it could be phrased "which observations support the photon model above other possible models?" We have to ask it this way because we observe pretty much everything by observing its effects. Even when you touch a hard surface with your finger, what you sense is the effect of the surface on your finger. And when you see something, what you sense is the response of the light receptors in your eye.
An example of an effect which strongly suggests the photon model is the photoelectric effect. Here the behaviour of the electrons in a metal in response to light is hard to make sense of using other models, but the photon model makes sense of it quite readily. So in this kind of experiment one is observing the effects of photons. And, as I just remarked, observing the effects is all one can ever hope for.
There is a kind of light detector called a photomultiplier tube which uses the photo-electric effect, and when you shine light on the detector, what is observed is a series of short electrical pulses, rather than a continuous current. This indicates the energy is arriving at the detector in short pulses---in other words, photons. More sophisticated experiments using atoms have been used to map out the spatial distribution of a light field in great detail. In these experiments, one is detecting the shape of the region of space occupied by the photons.
The evidence for the photon model is, ultimately, in the way it is knitted deeply into the whole theoretical framework of modern physics. It is the only way to understand the full range of electromagnetic phenomena, whether stars shining or electrons changing state in atoms, or light detectors, or photosynthesis, or thousands of other observations. It is this wealth of information that makes us confident that the photon description is the right one.
In my lab we use single-photon-sensitive detectors all the time. We have got used to saying, when the detector emits $N$ electrical pulses, "we have detected $N$ photons". This answers your question "have we ever observed photons physically". We can also detect the shape of the light field using cameras; this amounts to observing what the photons "look like", though to get a complete picture you have to accumulate many images of a light field which stays constant over time, so really you are looking at many photons arriving one after another, but all with the same spatial distribution. The distribution gives the probability distribution of where in space the detector (such as a camera) will register some energy. | {
"domain": "physics.stackexchange",
"id": 83124,
"tags": "electromagnetic-radiation, visible-light, photons"
} |
Properly using parameterized Factory.Create() method using DI | Question: My factory is using method injection because I thought this was the best approach so far. However, I doubt it is a good thing now that I have to call its Create method from within a dependent object.
The only way I can think of, whilst continuing to use the parameterized factory Create method, is to inject the dependencies directly into the MainPresenter so that it may provide the dependencies to the method, and I dislike it. I dislike it because it is not the MainPresenter that depends on the ICustomerManagementView and the ICustomerDetailPresenterFactory, it's its dependency. So I would feel like I'm sabotaging my own code by doing so.
MainPresenter
public class MainPresenter : Presenter<IMainView>, IMainViewUiHandler {
public MainPresenter(IMainView view
, ICustomerManagementPresenterFactory customerManagementPresenterFactory)
: base(view) {
this.customerManagementPresenterFactory = customerManagementPresenterFactory;
}
public void ManageCustomers() {
// The following line is causing trouble.
// As you can see per the ICustomerManagementPresenterFactory code sample,
// the Create() method takes two parameters:
// 1. ICustomerManagementView, and
// 2. ICustomerDetailPresenterFactory
// Hence I have to provide the dependencies manually, I guess. Which is
// something to avoid at any cost.
var customerManagementPresenter = customerManagementPresenterFactory.Create();
customerManagementPresenter.ShowView();
}
}
ICustomerManagementPresenterFactory
public interface ICustomerManagementPresenterFactory {
// Here. Though I ask Ninject to inject my dependencies, I need to
// provide values to the parameters when calling the method from within
// the MainPresenter class. The compiler won't let me do otherwise! And
// this makes sense!...
[Inject]
CustomerManagementPresenter Create(ICustomerManagementView view
, ICustomerDetailPresenterFactory factory);
}
IMainView
public interface IMainView : IView, IHasUiHandler<IMainViewUiHandler> {
}
IMainViewUiHandler
public interface IMainViewUiHandler : IUiHandler {
void ManageCustomers();
}
IUiHandler
public interface IUiHandler {
}
IHasUiHandler
public interface IHasUiHandler<H> where H : IUiHandler {
H Handler { set; }
}
MainForm
public partial class MainForm : Form, IMainView {
public MainForm() { InitializeComponent(); }
public IMainViewUiHandler Handler { private get { return handler; } set { setHandler(value); } }
}
CompositionRoot
public class CompositionRoot {
private CompositionRoot() { }
public static IKernel BuildObjectGraph() {
IKernel kernel = new StandardKernel();
BindFactories(kernel);
BindViews(kernel);
return kernel;
}
private static void BindFactories(IKernel kernel) {
kernel.Bind(services => services
.From(AppDomain.CurrentDomain
.GetAssemblies()
.Where(a => !a.FullName.Contains("Tests")))
.SelectAllInterfaces()
.EndingWith("Factory")
.BindToFactory()
);
}
private static void BindViews(IKernel kernel) {
kernel.Bind(services => services
.From(AppDomain.CurrentDomain
.GetAssemblies()
.Where(a => a.FullName.Contains("Windows")
&& !a.FullName.Contains("Tests"))
.SelectAllClasses()
.EndingWith("Form")
.BindSelection((type, baseType) => type
.GetInterfaces()
.Where(iface => iface.Name.EndsWith("View"))
)
);
}
}
So I wonder: is it best to implement the ICustomerManagementPresenterFactory and bind the implementation to it within my CompositionRoot, so that I can provide those dependencies through constructor injection and the Create method no longer needs to take any arguments, or should I do it otherwise?
What I like about writing a simple factory interface is that Ninject does it all for me, and no code is necessary to build an instance of the desired type. Besides, when the constructor of the class to be created uses constructor injection, it seems impossible to have a simple factory interface bound as a factory, and one needs to implement the factory interface by hand.
What did I get right/wrong?
Answer: Finally, no need for parameterized Factory.Create method as per this answer:
Properly using parameterized Factory.Create() method using DI.
In short, because Ninject knows how to resolve your returned type, and also your parameterized types, Ninject shall inject whatever needed to instantiate your class.
public class CustomerManagementPresenter : Presenter<ICustomerManagementView> {
public CustomerManagementPresenter(ICustomerManagementView view
, ICustomerDetailPresenterFactory factory)
: base(view) {
customerDetailPresenterFactory = factory;
}
public void ShowDetailsFor(Customer customer) {
var detailPresenter = customerDetailPresenterFactory.Create();
detailPresenter.ShowDetailsFor(customer);
}
private readonly ICustomerDetailPresenterFactory customerDetailPresenterFactory;
}
And because both ICustomerManagementView and ICustomerDetailPresenterFactory are known to Ninject, a simple ICustomerDetailPresenterFactory looks like this:
public interface ICustomerDetailPresenterFactory {
CustomerDetailPresenter Create();
}
And when the Create method is called, Ninject will know it has to instantiate everything necessary to have a consistent instance of CustomerDetailPresenter, so it will:
Instantiate an ICustomerDetailView to whatever it is bound in your CompositionRoot
Instantiate an ICustomerDetailPresenterFactory using its Factory Extension
Then finally instantiate your CustomerDetailPresenter class so that you may work with it just as if you had instantiated it yourself.
Besides, the ICustomerDetailPresenterFactory needs to be bound using either of two methods:
Using convention binding
var kernel = new StandardKernel();
kernel.Bind(services => services
.From(GetProjectAssemblies())
.SelectAllInterfaces()
.EndingWith("Factory")
.BindToFactory());
Using per-factory binding
var kernel = new StandardKernel();
kernel.Bind<ICustomerManagementPresenterFactory>().ToFactory();
kernel.Bind<ICustomerDetailPresenterFactory>().ToFactory();
...
It's as simple as that! | {
"domain": "codereview.stackexchange",
"id": 9851,
"tags": "c#, dependency-injection, mvp, factory-method, ninject"
} |
Could LIGO have detected R'lyeh if it existed? | Question: In Possible Bubbles of Spacetime Curvature in the South Pacific, it is suggested that R'lyeh might exist within a bubble of spacetime. (It appears to be a joke paper, since some of the references are fictional, but the math appears quite serious.)
If a bubble like the one described in the paper existed somewhere on (or in/near) the Earth, could LIGO have detected it?
The reason I think LIGO might be able to detect it is because it can detect gravitational waves, and a spacetime bubble would presumably cause some "noise" in the signal. That being said, spacetime bubbles don't really change quickly, whereas LIGO is designed to detect speedy gravitational waves.
Answer: The spacetime described in the paper is static, so it presumably does not generate any gravitational waves. LIGO does not detect static effects.
However, a kilometre-sized bubble presumably involves (negative) energy densities comparable to a solar mass (by analogy to the curvature induced in a black hole). That ought to have some effects on the outside since the edge has continuous derivatives. Presumably the edge effects falls off like some $1/r^3$ tidal effect.
The GRACE pair of satellites can detect changes in ice-sheet thickness and sea levels through gravity changes, so they might detect weird South Pacific anomalies. Especially since it uses a microwave beam to measure the distance between the satellites - presumably some residual curvature ought to affect the photons of the beam.
Finally, a bubble would definitely bend neutrinos and gravitational waves passing through. So maybe LIGO or IceCube could detect it that way, as an anomaly in detections from particular directions that move with the Earth. However, the angular resolution might not be enough for a small kilometre-sized entrance.
"domain": "physics.stackexchange",
"id": 46302,
"tags": "spacetime, curvature, gravitational-waves, ligo, bubbles"
} |
1 Ton of Refrigeration | Question: "1 TR is equivalent to the rate of heat transfer needed to produce 1 U.S. ton (2000 lbs) of ice at 32F from water at 32F in one day, i.e., 24 hours."
While defining 1 Ton of Refrigeration (TR), why doesn't some specific value of pressure come into definition?
Isn't freezing water (to ice) related to pressure?
If yes, then why do we get away without it when defining 1 TR?
Answer:
While defining 1 Ton of Refrigeration (TR), why doesn't some specific value of pressure come into definition? Isn't freezing water (to ice) related to pressure? If yes then why do we go away with it while defining 1TR?
Very little, if at all, for the range of pressures likely to be encountered in air-conditioning applications where the term is used. Water and ice are virtually incompressible, so the pressure makes very little difference. (I am unable to find a good graph of this at the moment.) Steam, on the other hand, is compressible and the pressure affects the latent heat of evaporation.
Figure 1. Before mechanical refrigeration was invented, keeping something cool when it was warm outside was the job of ice. Ice would be collected during winter, transported to ice houses and stored until it could be sold for use in ice boxes. Image source: Forward Engineers.
Since ice was sold by weight and people had a concept of how much cooling a given weight would give it made sense to continue to use the "ton of refrigeration" measurement.
1 ton (refrigeration) = 3516.8 J/s (if spread out over 24 hours). That's 3.5 kW of cooling or 84 kWh. So at, say, 10 c/kWh that would cost 840 c per one ton ice cube.
For those not living in the colonies, 1 US ton = 907 kg. 1 tonne (1000 kg) would have a latent energy of fusion (if that's a valid term) of 92.6 kWh. | {
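The arithmetic above is easy to check with a short script. The only assumed inputs are an approximate latent heat of fusion of ice (~334 kJ/kg) and 1 US ton ~ 907 kg, so the result comes out slightly below the official 3516.8 W figure:

```python
# Rough check of the "ton of refrigeration" arithmetic above.
# Assumed constants: latent heat of fusion of ice ~334 kJ/kg, 1 US ton ~907 kg.
LATENT_FUSION_KJ_PER_KG = 334.0
US_TON_KG = 907.0
SECONDS_PER_DAY = 24 * 3600

energy_kj = LATENT_FUSION_KJ_PER_KG * US_TON_KG   # energy to freeze one US ton of water
power_w = energy_kj * 1000 / SECONDS_PER_DAY      # spread over 24 h, in watts
energy_kwh = energy_kj / 3600                     # the same energy in kWh

print(f"1 TR is roughly {power_w:.0f} W, i.e. {energy_kwh:.0f} kWh per day")
```

The same two constants with 1000 kg instead of 907 kg reproduce the ~92.6 kWh figure for a tonne.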
"domain": "engineering.stackexchange",
"id": 2891,
"tags": "thermodynamics, refrigeration"
} |
Self defined messages | Question:
Hi everyone,
I'm a bit new to ROS and I want to know how to create a self-defined message and use it in ROS to exchange information between ROS nodes. I've found this ROS answer 1 and this video 2 about how to create and use your own ROS message. But there are still some questions that remain unanswered or unclear for me.
You have to include a header file to start using your own message, as the video and the answered question show, but what should I put in that header file (my_msg.h) and its related cpp file (my_msg.cpp)? Or must they stay empty, and is that a trick ROS uses to call self-defined messages? If anyone could point me to an easy GitHub example of a self-defined message I would be very pleased.
Do you have to add or export any dependency for that package? If so, how can I do it?
Thanks in advance for your help.
Originally posted by drodgu on ROS Answers with karma: 59 on 2019-09-23
Post score: 0
Answer:
No, you don't have to create a header; when you build your message, the headers are generated for you.
If the message you built is in the same package as your node, then you don't need to add a package dependency, but if the message was built in a different package, then you need to add that package as a dependency.
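For reference, a minimal sketch of the pieces involved in a ROS 1 (catkin) message package; the package name my_msgs and message MyMsg here are placeholders, not from the question:

```
# my_msgs/msg/MyMsg.msg -- the message definition; no .h/.cpp written by hand
string name
float64 value

# my_msgs/package.xml needs, roughly:
#   <build_depend>message_generation</build_depend>
#   <exec_depend>message_runtime</exec_depend>

# my_msgs/CMakeLists.txt needs, roughly:
#   find_package(catkin REQUIRED COMPONENTS message_generation std_msgs)
#   add_message_files(FILES MyMsg.msg)
#   generate_messages(DEPENDENCIES std_msgs)
#   catkin_package(CATKIN_DEPENDS message_runtime std_msgs)

# A node in another package then includes the generated header
#   #include <my_msgs/MyMsg.h>
# and declares my_msgs as a dependency in its own package.xml/CMakeLists.txt.
```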
Originally posted by Choco93 with karma: 685 on 2019-09-23
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by drodgu on 2019-09-23:
Thank you very much! I'm also adding this video, in which it is explained how to set up your own message.
Video | {
"domain": "robotics.stackexchange",
"id": 33807,
"tags": "ros, ros-kinetic"
} |
Is this version of Subset Product Problem NPComplete? | Question: The Subset Product Problem is NP-complete. Given a promise that the target product contains exactly one occurrence of each of its prime factors, is the problem still NP-complete?
Seems to be the case but couldn't find any references. Can someone confirm?
Answer: Yes, by reduction from exact cover: assign a distinct prime to each element of the universe, let each set's number be the product of its elements' primes, and let the target be the product of all the primes. A sub-collection of numbers then multiplies to the target exactly when the corresponding sets form an exact cover, and the target is squarefree, so each of its prime factors occurs exactly once, satisfying the promise.
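A toy instance of the reduction from exact cover is easy to write down; the primes and the set system below are arbitrary choices for illustration:

```python
from math import prod
from itertools import combinations

# Toy reduction from Exact Cover to squarefree Subset Product.
# Universe elements get distinct primes; each set becomes the product of
# its elements' primes; the target is the product of all the primes.
universe = ["a", "b", "c", "d"]
sets = [{"a", "b"}, {"c"}, {"b", "c"}, {"c", "d"}]

prime_of = dict(zip(universe, [2, 3, 5, 7]))
numbers = [prod(prime_of[e] for e in s) for s in sets]   # [6, 5, 15, 35]
target = prod(prime_of.values())                         # 210, squarefree

# Brute force: a subset of `numbers` multiplies to `target`
# exactly when the corresponding sets form an exact cover.
solutions = [c for r in range(1, len(numbers) + 1)
             for c in combinations(range(len(numbers)), r)
             if prod(numbers[i] for i in c) == target]
print(solutions)
```

Here the only solution is the pair of sets {a, b} and {c, d}, which is also the only exact cover.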
"domain": "cs.stackexchange",
"id": 20516,
"tags": "complexity-theory"
} |
Error: building keras model using LSTM | Question: I am trying to build a simple LSTM based model but I am getting a "can't set attribute" error on the line that adds the LSTM layer to the model. I am unable to figure out why this error appears. This is the code I am using.
from keras.models import Sequential
from keras.layers import Dense, LSTM

left = Sequential()
left.add(LSTM(64,activation='sigmoid',batch_input_shape=(10,look_back,dim)))
left.add(Dense(dim, activation='linear'))
left.compile(loss='mean_squared_error', optimizer='rmsprop')
I do not think there is an issue with the code. The Keras version is 2.0.7 with the TensorFlow backend. It is difficult to trace the cause of the error.
Answer: Install the latest Keras version: https://github.com/keras-team/keras/issues/7736#issuecomment-324989522 | {
"domain": "datascience.stackexchange",
"id": 4902,
"tags": "keras, lstm"
} |
Console Ultimate Tic Tac Toe game | Question: Last week I wrote a C++ program on Ultimate Tic Tac Toe. The issue is that the program is a bit lengthy (~230 lines) and it needs to be around 150 lines.
I am absolutely not asking you to go through the whole code. Just skimming through might reveal possible ways to shorten the code, and tips in general for writing short but readable C++.
Code
#include <iostream>
#include <cstdlib>
#include <fstream>
#include <string>
using namespace std;
inline int X (int pos) {
return pos / 3;
}
inline int Y (int pos) {
return pos % 3 - 1;
}
string rule(80, '_');
class Grid
{
private:
char subgrid[3][3];
char left, right;
public:
Grid(){}
void set (int x, int y, char cell);
char get (int x, int y);
};
void Grid::set (int x, int y, char cell) {
subgrid[x][y] = cell;
}
char Grid::get(int x, int y) {
return subgrid[x][y];
}
class Game
{
private:
Grid grid[3][3];
char player;
size_t cur;
public:
Game();
void display();
void play();
void input (int& g);
bool checkWin (Grid grid);
void showScore();
};
Game::Game() {
for (size_t i = 0; i < 3; ++i)
for (size_t j = 0; j < 3; ++j)
for (size_t k = 0; k < 3; ++k)
for (size_t l = 0; l < 3; ++l)
{
grid[i][j].set(k, l, '.');
}
player = 'x';
cur = 0;
}
void Game::play()
{
int g, s;
display();
while (1) {
display();
cout << "\n Player " << player << " - Enter Grid: ";
cin >> g;
if (g > 0 && g < 10) {
break;
}
display();
cout << "\n Try again.";
cin.get();
}
s = g;
while (1) {
display();
if (checkWin(grid[X(cur)][Y(cur)])) {
display();
cout << "\n Player " << player << " won!";
cin.get();
cin.ignore();
break;
}
player = player == 'x' ? 'o' : 'x';
cur = g;
input(g);
}
}
void Game::display()
{
system("cls");
cout << "\n ULTIMATE TIC TAC TOE\n" << rule;
for (size_t i = 0; i < 3; ++i)
{
for (size_t k = 0; k < 3; ++k)
{
cout << "\n";
char left, right;
left = right = ' ';
for (size_t j = 0; j < 3; ++j)
{
if (k == 1)
{
if (3*i + j + 1 == cur) {
left = '>';
right = '<';
}
else {
left = right = ' ';
}
}
cout << " " << left << " ";
for (size_t l = 0; l < 3; ++l) {
cout << grid[i][j].get(k, l) << " ";
}
cout << right;
}
}
cout << "\n\n";
}
cout << "\n";
}
void Game::input(int& g)
{
int s;
while (1) {
display();
cout << "\n Player " << player << " - Enter subgrid: ";
cin >> s;
if (s > 0 && s < 10)
{
if (grid[X(g)][Y(g)].get(X(s), Y(s)) == '.') {
break;
}
}
display();
cout << "\n Try again.";
cin.ignore(); cin.get();
}
grid[X(g)][Y(g)].set(X(s), Y(s), player);
g = s;
}
bool Game::checkWin(Grid grid)
{
char p = player;
int row = 1, col = 1, main_diag = 1, anti_diag = 1;
for (size_t i = 0; i < 3; ++i)
{
row = col = 1;
if (grid.get(i, 3-1-i) != p) {
anti_diag = 0;
}
if (grid.get(i, i) != p) {
row = col = main_diag = 0;
}
else {
for (size_t j = 0; j < 3; ++j)
{
if (grid.get(i, j) != p) {
row = 0;
}
if (grid.get(j, i) != p) {
col = 0;
}
}
}
if (row || col) {
return 1;
}
}
if (main_diag || anti_diag) {
return 1;
}
return 0;
}
int main()
{
Game game;
game.display();
cout << "\n Welcome to Ultimate Tic Tac Toe." <<
"\n Press Enter to start.";
cin.get();
int input, error = 0;
enum menu { play = 1, scores, quit };
do {
game.display();
if (error) {
cout << " Invalid option. Try again.\n";
error = 0;
}
else {
cout << " Select an option: \n";
}
cout << " 1) Play\n 2) Scores\n 3) Quit\n" << "\n> ";
cin >> input;
switch (input) {
case play:
game.play(); break;
case scores:
//showScores();
break;
case quit:
std::exit(0);
default:
error = 1;
}
system("cls");
} while (error);
system("pause");
return 0;
}
Answer: Here are some things that may help you improve your code.
Don't abuse using namespace std
Putting using namespace std at the top of every program is a bad habit that you'd do well to avoid. For this program, I'd advocate removing it everywhere and using the std:: prefix where needed.
Don't use system("cls")
There are two reasons not to use system("cls") or system("pause"). The first is that it is not portable to other operating systems, which you may or may not care about now. The second is that it's a security hole, which you absolutely must care about. Specifically, if some program named cls or pause is defined, your program will execute that program instead of what you intend, and that other program could be anything. First, isolate these into separate functions cls() and pause() and then modify your code to call those functions instead of system. Then rewrite the contents of those functions to do what you want using C++. For example, if your terminal supports ANSI escape sequences, you could use this:
void cls()
{
std::cout << "\x1b[2J";
}
Eliminate unused variables
The variable s in your Game::play() code is defined but never used. Also, left and right within Grid are never used. Since unused variables are a sign of poor code quality, you should seek to eliminate them. Your compiler is probably smart enough to warn you about such things if you know how to ask it to do so.
Use rational default constructors
If you provide a constructor for Grid that initializes its contents to all ., then the constructor for Game is shorter and much more readable.
Grid() : subgrid{'.','.','.','.','.','.','.','.','.'} {}
Eliminate global variables
In this case, the only global variable is rule which is only used once. I'd move it to within Game::display() and declare it like this:
static const std::string rule(80, '_');
Delegate more to the subclass
The Grid object is not doing very much. It could be assisting more in the display() and checkWin() tasks in particular.
Eliminate unused #includes
The cstdlib library is not required if you change std::exit() to simply return in main.
Eliminate unimplemented code
The showScore() code is missing and is never called anyway. It could simply be deleted, along with the associated case statement and menu option.
Use const where practical
Member functions that don't alter the underlying object should be declared const.
Use standard structures and algorithms
One important and useful way to simplify code is to make better use of existing library code. In particular, the Standard Template Library (STL) would be very helpful here. For instance, you could use a std::array instead of a plain C array to represent each grid. Internally, the representation could be std::array<char, 9> and translation from x and y coordinates could be done by member functions. As an example:
class Grid {
private:
std::array<char, 9> subgrid;
public:
Grid() : subgrid{'.','.','.','.','.','.','.','.','.'} {}
void set (int i, char cell) { subgrid[i] = cell; }
char get (int i) const { return subgrid[i]; }
char get (int x, int y) const { return subgrid[x+3*y]; }
bool checkWin(char player) const {
// check for col and row wins
for (int i=0; i < 3; ++i) {
if((player == get(i, 0) &&
player == get(i, 1) &&
player == get(i, 2)) ||
(player == get(0, i) &&
player == get(1, i) &&
player == get(2, i))) {
return true;
}
}
// check diagonals
return (player == get(1,1) &&
((player == get(0,0) && player == get(2,2))
|| (player == get(0,2) && player == get(2,0))));
}
std::string line(int linenum) const {
std::string ret;
if (linenum >= 0 && linenum < 3) {
for (int i=0; i<3; ++i) {
ret += get(i, linenum);
}
}
return ret;
}
};
Now the Game::display() is much neater and smaller:
void Game::display()
{
cls();
static const std::string rule(80, '_');
std::cout << "\n ULTIMATE TIC TAC TOE\n" << rule << '\n';
for (int i=0; i < 9; i += 3) {
for (int line = 0; line < 3; ++line) {
for (int j=0; j < 3; ++j) {
if (line == 1 && (cur-1 == i+j)) {
std::cout << " > " << grid[i+j].line(line) << " < ";
} else {
std::cout << " " << grid[i+j].line(line) << " ";
}
}
std::cout << "\n";
}
std::cout << "\n\n";
}
}
Avoid breaking loops
Rather than use break to exit a loop, it's usually better to simply declare the actual loop exit at the top so that someone reading your code doesn't have to wonder where the actual exit lies. For example, one way to rewrite Game::input is like this:
void Game::input(int& g)
{
int s;
bool badinput = false;
for (s = 0; s < 1 || s > 9 || grid[g-1].get(s-1) != '.'; badinput = true) {
display();
if (badinput) {
std::cout << "Try again";
}
std::cout << "\n Player " << player << " - Enter subgrid: ";
std::cin >> s;
}
grid[g-1].set(s-1, player);
g = s;
}
Omit return 0
When a C or C++ program reaches the end of main the compiler will automatically generate code to return 0, so there is no need to put return 0; explicitly at the end of main.
Note: when I make this suggestion, it's almost invariably followed by one of two kinds of comments: "I didn't know that." or "That's bad advice!" My rationale is that it's safe and useful to rely on compiler behavior explicitly supported by the standard. For C, since C99; see ISO/IEC 9899:1999 section 5.1.2.2.3:
[...] a return from the initial call to the main function is equivalent to calling the exit function with the value returned by the main function as its argument; reaching the } that terminates the main function returns a value of 0.
For C++, since the first standard in 1998; see ISO/IEC 14882:1998 section 3.6.1:
If control reaches the end of main without encountering a return statement, the effect is that of executing return 0;
All versions of both standards since then (C99 and C++98) have maintained the same idea. We rely on automatically generated member functions in C++, and few people write explicit return; statements at the end of a void function. Reasons against omitting seem to boil down to "it looks weird". If, like me, you're curious about the rationale for the change to the C standard read this question. Also note that in the early 1990s this was considered "sloppy practice" because it was undefined behavior (although widely supported) at the time.
So I advocate omitting it; others disagree (often vehemently!) In any case, if you encounter code that omits it, you'll know that it's explicitly supported by the standard and you'll know what it means. | {
"domain": "codereview.stackexchange",
"id": 33991,
"tags": "c++, homework, tic-tac-toe"
} |
Is there a relationship between visitor pattern and DeMorgan's Law? | Question: Visitor Pattern enables mimicking sum types with product types. Where does the "sum"-iness come from?
For example, in OCaml one could define type my_bool = True | False
Or encode with visitor pattern:
type 'a bool_visitor = {
case_true: unit -> 'a;
case_false: unit -> 'a;
}
let t visitor = visitor.case_true ()
let f visitor = visitor.case_false ()
let visitor = {
case_true = (fun () -> "true");
case_false = (fun () -> "false");
}
let () = print_endline (t visitor) (* prints "true" *)
What's the best way of explaining the sum-type-to-visitor-pattern transformation? Is it:
Of course + and * are interdefinable, what did I expect?
Or is it that the left side of -> is the "negative" position and that this leads to a DeMorgan-law-like flip of sum and product?
I also wonder if this question is related to how one can use universally-quantified types to mimic existential types.
Answer: The best way to explain it is
$$\mathsf{Bool} \to C \cong C \times C,$$
which is a special case of
$$(A + B) \to C \cong (A \to C) \times (B \to C).$$
Read the above as follows: a sum is equivalent to a pair of visitors.
(By the way, this is not the de Morgan law. It does not have a name, as far as I know. It's a general consequence of the definition of the concepts involved.) | {
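To see the isomorphism $\mathsf{Bool} \to C \cong C \times C$ concretely, here is a small sketch (in Python rather than the question's OCaml; the names are ad hoc):

```python
# Bool -> C carries exactly the information of a pair C x C:
# tabulating the function gives the pair; the pair rebuilds the function.
def to_pair(f):
    """Tabulate f : Bool -> C as (f(True), f(False))."""
    return (f(True), f(False))

def from_pair(pair):
    """Rebuild the function from its two 'visitor cases'."""
    case_true, case_false = pair
    return lambda b: case_true if b else case_false

f = lambda b: "yes" if b else "no"
pair = to_pair(f)          # ("yes", "no")
g = from_pair(pair)
print(pair, g(True), g(False))
```

The two directions compose to the identity, which is what the isomorphism asserts.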
"domain": "cs.stackexchange",
"id": 18178,
"tags": "data-structures, type-theory"
} |
Norms, rules or guidelines for calculating and showing ETA/ETC for a process | Question: ETC = "Estimated Time of Completion"
I'm counting the time it takes to run through a loop and showing the user some numbers that tell him/her how much time, approximately, the full process will take. I feel like this is a common thing that everyone does on occasion, and I would like to know if you have any guidelines that you follow.
Here's an example I'm using at the moment:
int itemsLeft; //This holds the number of items to run through.
double timeLeft;
TimeSpan TsTimeLeft;
List<double> avrage;
double milliseconds; //This holds the time each loop takes to complete, reset every loop.
//The background worker calls this event once for each item. The total number
//of items are in the hundreds for this particular application and every loop takes
//roughly one second.
private void backgroundWorker1_ProgressChanged(object sender, ProgressChangedEventArgs e)
{
//An item has been completed!
itemsLeft--;
avrage.Add(milliseconds);
//Get an average time per item and multiply it with items left.
timeLeft = avrage.Sum() / avrage.Count * itemsLeft;
TsTimeLeft = TimeSpan.FromSeconds(timeLeft);
this.Text = String.Format("ETC: {0}:{1:D2}:{2:D2} ({3:N2}s/file)",
TsTimeLeft.Hours,
TsTimeLeft.Minutes,
TsTimeLeft.Seconds,
avrage.Sum() / avrage.Count);
//Only using the last 20-30 logs in the calculation to prevent an unnecessarily long List<>.
if (avrage.Count > 30)
avrage.RemoveRange(0, 10);
milliseconds = 0;
}
//this.profiler.Interval = 10;
private void profiler_Tick(object sender, EventArgs e)
{
milliseconds += 0.01;
}
As I am a programmer at the very start of my career I'm curious to see what you would do in this situation. My main concern is the fact that I calculate and update the UI for every loop, is this bad practice?
Are there any do's/don'ts when it comes to estimations like this? Are there any preferred ways of doing it, e.g. update every second, update every ten logs, calculate and update UI separately? Also, when would an ETA/ETC be a good/bad idea?
Answer: Your approach looks reasonable if one makes the assumption that the average time for item processing is roughly the same for each item.
Updating the UI once a second is ok; updating it more often does not make much sense since the typical user cannot follow it visually anyway. If this really becomes a bottleneck, you might consider adding some code like
if(timeWhenUIWasUpdated < currentTime-1.0)
{
this.Text = String.Format(/*...*/);
timeWhenUIWasUpdated = currentTime; // unit: seconds
}
(you will have to introduce those two variables timeWhenUIWasUpdated and currentTime into your code).
Of course, when your progress event gets called more often (for example, tens of thousands of times a second), you will have to think about further optimizations, since even without the UI you will have to do some time-value management or calculations. But as long as this is not the case, keep things simple.
This one here
if (avrage.Count > 30)
avrage.RemoveRange(0, 10);
looks like premature optimization, I guess
avrage.RemoveRange(0, 1);
will give you a more smooth behaviour (and I guess you won't notice the difference in running time in most real-world cases).
Furthermore, for testing purposes, you may consider to refactor the average-time calculation into a SlidingAverageCalculator class, separating the logic fully from the background worker and the UI display. This will make it really easy to write unit tests for that part of the code. | {
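The SlidingAverageCalculator refactoring suggested above could look roughly like this; the sketch is in Python rather than C#, and all names are made up:

```python
from collections import deque

class SlidingAverageCalculator:
    """Keeps only the last `window` per-item durations and estimates time left."""
    def __init__(self, window=30):
        self._samples = deque(maxlen=window)  # old samples drop off one at a time

    def add_sample(self, seconds_per_item):
        self._samples.append(seconds_per_item)

    def average(self):
        return sum(self._samples) / len(self._samples) if self._samples else 0.0

    def eta_seconds(self, items_left):
        return self.average() * items_left

calc = SlidingAverageCalculator(window=3)
for t in [1.0, 2.0, 3.0, 4.0]:  # the 1.0 sample falls out of the 3-sample window
    calc.add_sample(t)
print(calc.average(), calc.eta_seconds(10))
```

Using a bounded deque gives the smooth one-at-a-time eviction recommended above, instead of removing ten samples in a batch, and it keeps the averaging logic separate from the background worker and the UI so it can be unit tested.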
"domain": "codereview.stackexchange",
"id": 3768,
"tags": "c#"
} |
How Hooke's Law and motion equation are related? | Question: $$F(x)= -kx = ma(x) $$ Elastic force equation
I solve this differential equation to find the equation of motion:
$$ -kx =m\frac{d^2x}{dt^2} $$
$$ x'' +\frac{k}{m}x=0 $$
$λ^2 = -\frac{k}{m} $
$ λ = \pm i\sqrt{\frac{k}{m}}$
$ x(t) = c_1\cos(t\sqrt{\frac{k}{m}}) + c_2\sin(t\sqrt{\frac{k}{m}}) $ equation of motion
$ k $ is $\rm[M][T]^{-2} $ so I can write $ ω =\sqrt{\frac{k}{m}}$ and call it angular frequency $[T]^{-1} $
$ x(t) = c_1\cos(tω) + c_2\sin(tω) $
$c_1$ and $c_2$ are $[\rm L]^{1}$, but what do they represent? Should the equation depend only on $x_0$, or am I wrong?
Then I can assume:
$c_1 = A\sin(φ)$
$c_2 = A\cos(φ)$
So using trigonometric addition formulas I can simplify the motion equation to:
$x(t) = A\sin(tω+φ)$, which is simple harmonic motion.
$A$ is $[\rm L]^1$, and I suppose $ A = x_0 $; is that correct, or am I wrong?
$ φ $ should be the phase, but I am not able to understand how it's related to the force equation. How can I determine $φ$? What is the physical meaning of $φ$?
Answer: As you stated correctly, the two equations $$x(t)=c_1 \cos(t\omega)+c_2 \sin(t\omega)$$ and $$x(t)=A\sin(t\omega + \phi)$$ are equivalent and both are the general solution for the problem. In order to get from the general solution to a particular solution, you need to impose boundary conditions, e.g.
\begin{align}
x_0 = x(t=0s) = 1 cm\\
v_0 = v(t=0s) = 0 m/s
\end{align}
Both general solution will yield the same answer if they are subjected to the same boundary conditions.
Coming back to your original questions:
In the first general solution, the coeffs. $c_1$ and $c_2$ are integration constants, which are needed in order to fulfill the boundary conditions. They do not really have any physical meaning. However, if you impose the two boundary conditions stated above, the coeff. $c_1$ becomes $x_0$ and $c_2$ becomes $v_0/\omega$.
In the second general solution, the coeffs. $A$ and $\phi$ are the amplitude and phase of the oscillation. If we take the boundary conditions stated above, we obtain $A=x_0$ and $\phi = 0$. However, nothing prevents us from taking $v_0 \ne 0m/s$. In that case $A \ne x_0 = x(t=0s)$ and $\phi \ne 0$. | {
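To make the relation between the two sets of constants concrete, here is a quick numerical check; the values of $\omega$, $x_0$ and $v_0$ are arbitrary:

```python
import math

# Check that c1 = x(0) and c2 = v(0)/omega, and that x(t) = A*sin(omega*t + phi)
# with A = sqrt(c1^2 + c2^2) and phi = atan2(c1, c2) reproduces the same motion.
omega = 2.0            # arbitrary angular frequency
x0, v0 = 0.01, 0.5     # arbitrary boundary conditions x(0) and v(0)

c1, c2 = x0, v0 / omega
A = math.hypot(c1, c2)        # amplitude, since c1 = A sin(phi), c2 = A cos(phi)
phi = math.atan2(c1, c2)

def x_trig(t):
    return c1 * math.cos(omega * t) + c2 * math.sin(omega * t)

def x_phase(t):
    return A * math.sin(omega * t + phi)

for t in (0.0, 0.3, 1.7):
    assert math.isclose(x_trig(t), x_phase(t), abs_tol=1e-12)
print(A, phi)
```

Note that $A = x_0$ only when $v_0 = 0$, matching the point about boundary conditions above.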
"domain": "physics.stackexchange",
"id": 41323,
"tags": "homework-and-exercises, newtonian-mechanics, forces, harmonic-oscillator, spring"
} |
Diamond in the rough! Finding Gems in the rocks | Question: Recently a question was asked that solved a hackerrank problem: Counting gems amongst the rocks
The description of the challenge is:
John has discovered various rocks. Each rock is composed of various
elements, and each element is represented by a lowercase Latin letter
from 'a' to 'z'. An element can be present multiple times in a rock.
An element is called a 'gem-element' if it occurs at least once in
each of the rocks.
Given the list of rocks with their compositions, you have to print how
many different kinds of gems-elements he has.
Input Format
The first line consists of N, the number of rocks. Each of the next N
lines contain rocks’ composition. Each composition consists of small
alphabets of English language.
Output Format
Print the number of different kinds of gem-elements he has.
Constraints
\$1 ≤ N ≤ 100\$
Each composition consists of only small Latin letters ('a'-'z'). 1 ≤ Length of each composition ≤ 100
Sample Input
3
abcdde
baccd
eeabg
Sample Output
2
Explanation
Only "a", "b" are the two kind of gem-elements, since
these characters occur in each of the rocks’ composition.
I was in the process of writing an answer when the question was closed because the code in the question was failing to produce the right results.
My recommended solution to the problem is likely quite fast, but it is also relatively complicated. I am sure it can be improved, and simplified.
I have included a simple main method that shows how it can be used, and produces the sample output.
public class GemCounter {

    private static final int LETTERCOUNT = 26;

    // Start with 1 bit set for each element/letter.
    private int gemsSoFar = (1 << LETTERCOUNT) - 1;

    public GemCounter() {
        // default constructor. Does nothing.
    }

    public void processRock(final String rock) {
        int gotElement = 0; // no bits are set.
        for (int i = 0; i < rock.length(); i++) {
            int element = rock.charAt(i) - 'a';
            if (element >= 0 && element < LETTERCOUNT) {
                // bitwise OR the bit that represents the element
                gotElement |= 1 << element;
            }
        }
        // only bits that were found in this rock
        // and also that have been seen before
        // will remain set.
        gemsSoFar = gemsSoFar & gotElement;
    }

    public int getGemCount() {
        // count the number of bits that are still set.
        return Integer.bitCount(gemsSoFar);
    }

    public static void main(String[] args) {
        String[] rocks = {"abcdde", "baccd", "eeabg"};
        GemCounter gemcount = new GemCounter();
        for (String rock : rocks) {
            gemcount.processRock(rock);
            System.out.printf("Processed %s, got %d gems still%n", rock, gemcount.getGemCount());
        }
        System.out.printf("%d rocks have %d gems%n", rocks.length, gemcount.getGemCount());
    }
}
Answer: Not much to say.
Bitwise operators are used a bit inconsistently:
gotElement |= 1 << element;
gemsSoFar = gemsSoFar & gotElement;
better use &= in the second line, or spell out the first.
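For comparison, here is a minimal Python sketch of the same bitmask idea (a hypothetical port, not the code under review), written with the compound `&=` form suggested above:

```python
def count_gems(rocks):
    gems = (1 << 26) - 1              # start with one bit set per letter 'a'..'z'
    for rock in rocks:
        seen = 0
        for ch in rock:
            if 'a' <= ch <= 'z':
                seen |= 1 << (ord(ch) - ord('a'))
        gems &= seen                  # keep only elements present in every rock so far
    return bin(gems).count('1')       # count the bits that survived

print(count_gems(["abcdde", "baccd", "eeabg"]))  # sample input -> 2
```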
gotElement is somewhat unclear. rockElements, maybe?
In real life I'd also oppose extensive comments. | {
"domain": "codereview.stackexchange",
"id": 9286,
"tags": "java, programming-challenge, bitwise, rags-to-riches"
} |
what is the physical significance of ∆H=∆U for a chemical reaction? | Question: In the case of a few chemical reactions $\Delta n=0$, so according to the equation $\Delta H= \Delta U+\Delta nRT$ the change in enthalpy equals the change in internal energy. But what does this actually mean? Like, physically, what does this indicate, and do we get any additional information from that equation? I got this query seeing the following equation of formation of hydrogen chloride.
$\ce{H2 +Cl2 =2HCl}$ with $\Delta n=0$.
Like can it be associated to why the reaction is zero order?
Answer: This equation holds for a reaction involving ideal gases at constant T. Under these conditions, $\Delta (pV) = \Delta n RT$. Assume now that a gas phase reaction occurs at constant pressure and temperature. Then $\Delta (pV) = p\Delta V$. This is equal to the negative of the pressure-volume work done by the system during the reaction since $w = -p\Delta V$. Therefore, at constant p and T, $\Delta H$ is equal to the change in energy of the system minus the pV work it did. This is consistent with the fact that we equate the enthalpy change for such a process at constant p and T with the heat exchanged, that is, $\Delta H = \Delta U -w = q_p$. Now if $\Delta n =0$ no expansion occurs at constant p and T, and so $w=0$ and $\Delta H = \Delta U = q$. | {
"domain": "chemistry.stackexchange",
"id": 13584,
"tags": "physical-chemistry, thermodynamics, kinetics"
} |
How do exponents of a unit of physical quantity affect converting between different prefixes of the same unit? | Question: Let's say, I have some quantities in unit $ \rm kg^{-1/2}$, I now want to express the quantity in $\rm g^{-1/2}$. Do I simply multiply the quantity by $\rm k^{-1/2}$ which is approximately $0.0316$ ?
The way I am going about this is that since $\rm kg^{-1/2}$ can be separated into $\rm k^{-1/2}×g^{-1/2}$, and so the quantity has to 'absorb' $\rm k^{-1/2}$.
I'm asking this question because I tend to run into a lot of units with strange exponents when doing data analysis and need a sanity check.
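A quick numerical sanity check of this reasoning in Python, with a made-up value (the `4.0` is hypothetical):

```python
k = 1000.0              # the "kilo" prefix
q_kg = 4.0              # hypothetical quantity measured in kg**(-1/2)

# kg**(-1/2) = (1000 g)**(-1/2) = 1000**(-1/2) * g**(-1/2),
# so the numeric value absorbs a factor of k**(-1/2) ~= 0.0316:
q_g = q_kg * k ** -0.5  # the same quantity expressed in g**(-1/2)
print(q_g)              # ~= 0.1265

# Invariance check: value * (mass of one unit, in kg)**(-1/2) must not
# depend on which unit we chose. 1 kg is 1 in kg; 1 g is 1/1000 in kg.
assert abs(q_kg * 1.0 ** -0.5 - q_g * (1.0 / k) ** -0.5) < 1e-9
```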
Answer: You probably learned the prefixes
Mega = $10^6$
Kilo = $10^3$
Milli = $10^{-3}$
So, all you have to do is use them in the relationship you are interested in. E.g.
$$\pi \frac{(kg)^{1/3} \; \cdot MJ}{(mV)^4}
= \pi \frac{(10^3g)^{1/3} \; \cdot (10^6J)}{(10^{-3}V)^4}
= \pi \frac{10g^{1/3} \; \cdot 10^6J}{10^{-12}V^4} = \ldots
$$ | {
"domain": "physics.stackexchange",
"id": 77598,
"tags": "dimensional-analysis, units"
} |
Does Cute Poison actually work? | Question: For those of you that have watched Season 1 of Prison Break (TV Series), in the "Cute Poison" Episode Micheal Scofield combined $\ce{CuSO4}$ (Copper Sulfate) and $\ce{H3PO4}$ (Phosphoric Acid) to weaken the metal. Is it true?
Answer: The Wikipedia page on "Cute Poison" episode says that it wasn't anhydrous copper (II) phosphate, but Gypsum.
I haven't seen the episode, and the Wikipedia page's reaction seems to be the backwards reaction of yours. But I imagine yours is the one the pro/antagonist is gonna use to escape the prison; as
Gypsum is commonly found and accessible:
Gypsum is a soft sulfate mineral composed of calcium sulfate dihydrate, with the chemical formula $\ce{CaSO4·2H2O}$. It can be used as a fertilizer, is the main constituent in many forms of plaster and in blackboard chalk, and is widely mined. A massive fine-grained white or lightly tinted variety of gypsum, called alabaster, has been used for sculpture by many cultures including Ancient Egypt, Mesopotamia, Ancient Rome, Byzantine empire and the Nottingham alabasters of medieval England.
Calcium phosphate's solubility will decrease with temperature increase and it will precipitate, leaving you with a solution of sulfuric acid.
It's possible for a double displacement reaction to occur in aqueous medium (with a spark):
$$\ce{2H3PO4(aq) + 3CaSO4(aq)·2H2O(l) \leftrightharpoons 3H2SO4(aq) + Ca3(PO4)2(aq) + 6H2O(l)}$$
$\ce{H2SO4}$ is sulfuric acid. It's a very strong acid in water, a diprotic acid. Its $p {\rm K_a}$s are −3 and 1.99 according to Wikipedia.
Its corrosiveness on other materials, like metals, living tissues or even stones, can be mainly ascribed to its strong acidic nature and, if concentrated, strong dehydrating and oxidizing properties.
Later in the same article, you can read how it reacts with different material. It can weaken metal pretty fast and easily, as 3 moles of it will be present in any 7 moles of products, which is high above the dilute concentration (arbitrarily) defined in the Wikipedia page.
Copper sulfate also reacts with phosphoric acid and forms sulfuric acid, but it's hardly accessible in prison, I reckon. | {
"domain": "chemistry.stackexchange",
"id": 4016,
"tags": "acid-base, experimental-chemistry"
} |
Is the ratio Brain Mass/Total Mass still considered a valid indicator of intelligence? | Question: I was reading this(1) and it led me back to ask a very basic question (I'm not a neuroscientist). All the way back to undergrad anthropology and neuroscience courses I remember being taught the general rule of relative intelligence was that one looked at the ratio of the brain mass over the total mass of the animal (or the estimations therein from let's say fossils).
I know a lot of the interesting neuroscience research going on these days does look into bird brains, particularly within Corvidae. It would seem that birds are often much more efficient in the abilities they seem to show with considerably less brain mass. I do realize that birds often weigh very little as well, so perhaps the ratio is preserved?
I also realize that within birds, they see better ratios in more intelligent birds. But what about the comparison from mammals to birds?
Dinosaurs are often predicted to not be intelligent because of the enormous body size and small cavities for a brain. Now I realize that some dinosaurs were actually quite tiny, but this is just an example.
Given birds' close genetic link to dinosaurs, could it simply be that they were just doing more with less? The mammalian brain is a huge caloric burden, so perhaps this would show an efficiency that could be selected for?
Thus as the title suggests, my main question:
Is the ratio of brain mass to body mass still considered to be a valid indication of intelligence of a species in modern (current) neuroscience? Certainly there are exceptions, but is it still considered the rule of thumb?
EDIT: I also wanted to point out the first article(2) I started to read after formulating the question. It made me question the usefulness of encephalization in addressing this issue at all, but I don't/didn't feel adequately trained to evaluate the conclusions of the meta study.
Again, this still leaves us comparing within primates, which still leaves me feeling that Corvidae have some really impressive efficiency going on with their overall brain mass. Which then leads to hope that some dinosaurs could be at least equally intelligent, if not more so (noted that this is complete conjecture).
(1) Front Hum Neurosci. 2013 Jun 6;7:245. doi:10.3389/fnhum.2013.00245. Print 2013.
(2) Brain Behav Evol. 2007;70(2):115-24. Epub 2007 May 18.
Answer: I don't know about brain mass to body size, but ratio of neocortex to brain volume is correlated with primate evolution (Figure 3, here). If you subscribe to the view that humans and more closely related primates are more intelligent than more distantly related primates, then that is an important correlate.
I think the issue with this question is that we're still having trouble defining intelligence. | {
"domain": "biology.stackexchange",
"id": 6861,
"tags": "evolution, neuroscience, anthropology"
} |
Understanding how mass spectroscopy works | Question: I’m trying to get a deeper understanding of how mass spectroscopy works. Most tutorials and textbooks I’ve encountered omit certain details about the process and I’m hoping someone out there who understands the process can fill in the gaps. I’ll start by explaining the way I currently understand the process and then list where I start to get confused.
1 First a particular molecule is bombarded with a beam of electrons (ionization step). This step works to free an electron from the molecule under analysis which generates a cation.
Note: It’s important to remember that the only reason why this molecule was ionized is to make use of a magnetic field to bend its direction. Only ions will be affected by this magnetic field which allows us to pinpoint the molecule in question.
2 Next, these ions are accelerated by an electric field toward a magnetic field that bends the moving ions by a certain amount.
Question 1: What’s the difference between an electric field and a magnetic field? And why does the magnetic field bend the ions while the electric field does not? Is this just a consequence of the shape of the apparatus? Could you use a magnetic field to accelerate a particle and an electric field to do the bending? I’m just really confused as to the difference between electric fields and magnetic fields.
3 The ions that bend will travel around a tube by a certain degree and hit a detector. The amount of bending will tell us about the mass of a particular molecule. Heavier molecules will bend less and lighter molecules will bend more.
Question 2: I’ve come across a tutorial that said that what this detector is actually measuring is the mass to charge ratio. How is this calibrated and how does this step allow us to measure the mass of a single molecule. Won’t there be multiple particles hitting the detector at the same time, will this not affect the mass reading of a particular molecule? I'm just confused about this final step and how we can get an accurate mass reading for a particular molecule.
Any help understanding this concept would be appreciated.
Answer:
Question 1: What’s the difference between an electric field and a magnetic field? And why does the magnetic field bend the ions while the electric field does not? Is this just a consequence of the shape of the apparatus? Could you use a magnetic field to accelerate a particle and an electric field to do the bending? I’m just really confused as to the difference between electric fields and magnetic fields.
Electric fields accelerate charged particles (linear acceleration). Magnetic fields deflect them (angular acceleration).
Charged particles respond to electric fields and magnetic fields differently. Electric fields accelerate charged particles linearly in the direction of the flux. For example, if an electric field exists between two plates, one with a buildup of positive charge and the other with a buildup of negative charge, then positively charged particles will accelerate from the positive plate toward the negative plate. The acceleration of the particle ($\vec{a}$) depends on the strength of the field ($\vec{E}$), the mass of the particle ($m$), and the charge on the particle ($q$). Since the acceleration is a scalar multiple of the electric field, they have the same direction.
$$\vec{F}=q\vec{E}$$
$$\vec{F}=m\vec{a}$$
$$q\vec{E}=m\vec{a}$$
$$\vec{a}=\frac{q}{m}\vec{E}$$
The magnetic force, by contrast, is always perpendicular to the particle's velocity, so magnetic fields change the direction of motion of charged particles (i.e., they deflect them) without changing their speed. Thus, charged particles have a circular trajectory in a magnetic field. The radius of the circular path ($r$) depends on the strength of the magnetic field ($\vec{B}$), the charge of the particle ($q$), the mass of the particle ($m$), and the velocity of the particle ($\vec{v}$). The radius of curvature is proportional to the mass-to-charge ratio ($m/q$).
$$|\vec{F}|=m\frac{|\vec{v}|^2}{r}$$
$$|\vec{F}|=|q\vec{v}||\vec{B}|$$
$$m\frac{|\vec{v}|^2}{r}=|q\vec{v}||\vec{B}|$$
$$r=\frac{m}{q} \frac{|\vec{v}|}{|\vec{B}|}$$
The acceleration of the particle depends on the velocity, magnetic field, radius, and mass. Since acceleration in a magnetic field is a scalar multiple of the cross product of velocity and field strength, the direction of acceleration is always changing.
$$\vec{F}=q(\vec{v}\times\vec{B})$$
$$m\vec{a}=q(\vec{v}\times\vec{B})$$
$$\vec{a}=\frac{q}{m}(\vec{v}\times\vec{B})$$
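Putting the two results together numerically — a sketch with assumed values (none of these numbers come from the question): an electric field first accelerates the ion through a potential $V$, then a magnetic field bends it.

```python
# Hypothetical singly charged ion of m/z 100, accelerated through 2 kV, bent by 0.5 T.
e = 1.602e-19    # elementary charge, C
amu = 1.661e-27  # atomic mass unit, kg

m = 100 * amu    # ion mass, kg
q = 1 * e        # ion charge, C
V = 2000.0       # accelerating potential, V
B = 0.5          # magnetic flux density, T

# The electric field does linear work on the ion: q*V = (1/2)*m*v**2
v = (2 * q * V / m) ** 0.5

# The magnetic field deflects it on a circle: r = (m/q) * v/B
r = m * v / (q * B)

print(f"v = {v:.3e} m/s, r = {r:.3f} m")
```

A heavier ion (larger $m/q$) at the same accelerating potential would give a larger $r$, i.e. less bending, as described above.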
Question 2: I’ve come across a tutorial that said that what this detector is actually measuring is the mass to charge ratio. How is this calibrated and how does this step allow us to measure the mass of a single molecule. Won’t there be multiple particles hitting the detector at the same time, will this not affect the mass reading of a particular molecule? I'm just confused about this final step and how we can get an accurate mass reading for a particular molecule.
Now that we know the relationship between radius of curvature and mass-to-charge ratio, this question is easy. The detector counts the number of particles that hit it. That is all that it can do. That is all that it needs to do. The hard work of separating the ions by m/q is done by the mass analyzer, which is the magnetic field in the curved tube. The magnetic field scans (quickly) over a range of $\vec{B}$ to alter the curvature of the ions by their m/q. Since the tube has a fixed curvature, only the ions with the correct m/q will pass to the detector. All others will hit the walls of the tube since their radius of deflection would be too large or too small. Thus, since the instrument controls the strength of the magnetic field, and physics determines what m/q will have the correct deflection at that field strength, there is no need to calibrate the detector.
More complex mass analyzers exist, for example the quadrupole mass analyzer. Instead of a static magnetic sector, it applies an oscillating quadrupolar electric field to four parallel rods. The physics is more complex, but ion trajectories once again depend on m/q: for a given field setting, only ions in a narrow m/q window follow stable trajectories through the rods to the detector, so the detector again only has to count hits. | {
"domain": "chemistry.stackexchange",
"id": 342,
"tags": "mass-spectrometry"
} |
Alternate to a switch case - Javascript | Question: I have a piece of code which works but I am looking for an optimal solution, if one exists. I have a column value and the value can only range from 2 to 10 including 2 and 10. The base value inside a switch is 18. Whenever the column value increases by 1, I need to decrease the value inside the switch by 0.5.
This is what I have:
const columns = 10;

const getVal = (columns) => {
  switch (columns) {
    case 2:
      return 18
    case 3:
      return 17.5
    case 4:
      return 17
    case 5:
      return 16.5
    case 6:
      return 16
    case 7:
      return 15.5
    case 8:
      return 15
    case 9:
      return 14.5
    case 10:
      return 14
  }
}

console.log(getVal(5))
Is there a better solution to achieve the same thing? Thanks.
Answer: I would suggest simply converting the logic to a mathematical formula, rather than handling each case separately. This results in much shorter code, and clarifies that there is in fact a mathematical relationship between the input and output.
const getVal = (columns) => {
  return 18 - ((columns - 2) * 0.5);
}

for (let x = 2; x <= 10; x++) {
  console.log(x, getVal(x))
}
I would also highly recommend renaming the function getVal. I can't suggest a better name because I don't know what the reason for the function is, but I would think about what the output represents, and rename the function to indicate that.
You could make the getVal function much shorter - it can be done in one line. However, whether or not this is advisable depends (in my opinion) on the experience levels of the developers who will read this code (i.e. yourself and any collaborators). If the shorter form is likely to confuse people then I would suggest sticking to something written out more fully, but if the syntax is familiar then it does save a bit of space.
const getVal = columns => 18 - (columns - 2) * 0.5;

for (let x = 2; x <= 10; x++) {
  console.log(x, getVal(x))
} | {
"domain": "codereview.stackexchange",
"id": 40238,
"tags": "javascript"
} |
The direction of forces in statics problem | Question: I was always confused on how we define the directions of the forces in equilibrium problems.
Let's say I have a simple structure:
In this case, I assumed that Ray and Rcy are pointing up to counter the 10 kN force.
However, when I assume that Ray is pointing down and Rcy is pointing up, it changes the answer for Ray.
I've also heard a lot that if you assume the wrong direction of the force it will come up as a negative value.
As you can see here if I assume a different direction, it actually comes up with a totally different value.
Am I missing something?
Any help will be greatly appreciated!
Answer: Equilibrium means to make the system stay at rest. i.e., to make it stay as it is.
The way you have defined the forces to attain equilibrium is correct (as in the image) as you can attain both linear and angular equilibrium.
Linear equilibrium is obtained if the sum of the forces( i.e., the net force ) in each direction is zero. And angular equilibrium, if the net torque is zero.
But, taking $ Ray $ and $ Rcy $ to be in opposite direction can assume linear equilibrium if :
1) $ Ray + 10= Rcy $ if $ Ray $ is downwards and $ Rcy $ is upwards.
2) $ Rcy + 10 = Ray $ if its vice versa.
But, both the above two cases won't satisfy angular equilibrium as the net torque won't be zero.
Taking both forces to be downwards won't satisfy both equilibriums.
So, you have to define your forces in such a way that they ensure the system to attain overall equilibrium.
So, the only way is as you have taken in the image above.
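Concretely, the two equilibrium equations look like this (an illustrative sketch only — it assumes the 10 kN load acts midway between supports A and C, a span $L$ apart; the actual load position comes from your figure; Ray and Rcy are written $R_{Ay}$ and $R_{Cy}$):

```latex
\begin{aligned}
\sum F_y = 0 &: \quad R_{Ay} + R_{Cy} - 10\,\mathrm{kN} = 0 \\
\sum M_A = 0 &: \quad R_{Cy}\,L - (10\,\mathrm{kN})\,\tfrac{L}{2} = 0
\;\;\Rightarrow\;\; R_{Cy} = 5\,\mathrm{kN},\quad R_{Ay} = 5\,\mathrm{kN}
\end{aligned}
```

If $R_{Ay}$ were instead assumed to point down, the same two equations would return $R_{Ay} = -5\,\mathrm{kN}$: the negative sign flags the wrongly assumed direction while keeping the magnitude consistent.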
Of course you will get different values as you change the assumed direction of each force, since your main concern is to sustain equilibrium: the net force in one direction has to be cancelled out by the force in the opposite direction.
"$Ray$ is pointing down and $Rcy$ is pointing up, it changes the answer for $Ray$."
This is because, in the earlier case, the $10\,\mathrm{kN}$ force was cancelled out by $Ray$ and $Rcy$ together; but when $Ray$ is pointing down and $Rcy$ is pointing up, the net force of the $10\,\mathrm{kN}$ load and $Ray$ has to be cancelled out by $Rcy$ alone, and hence $Rcy$ has to be greater in magnitude.
"domain": "physics.stackexchange",
"id": 40930,
"tags": "homework-and-exercises, forces, free-body-diagram, equilibrium, statics"
} |
Can the (sparse) categorical cross-entropy be greater than one? | Question: I am using the AlexNet CNN to classify my dataset, which contains 10 classes and 1000 data points per class, with 60-30-10 splits for train, validation, and test. I used different batch sizes, learning rates, activation functions, and initializers. I'm using the sparse categorical cross-entropy loss function.
However, while training, my loss value is greater than one (almost equal to 1.2) in the first epoch, but until epoch 5 it comes near 0.8.
Is it normal? If not, how can I solve this?
Answer: Both the sparse categorical cross-entropy (SCE) and the categorical cross-entropy (CCE) can be greater than $1$. By the way, they are the same exact loss function: the only difference is really the implementation, where the SCE assumes that the labels (or classes) are given as integers, while the CCE assumes that the labels are given as one-hot vectors.
Here is the explanation with examples.
Let $(x, y) \in D$ be an input-output pair from the labelled dataset $D$, where $x$ is the input and $y$ is the ground-truth class/label for $x$, which is an integer between $0$ and $C-1$. Let's suppose that your neural network $f$ produces a probability vector $f(x) = \hat{y} \in [0, 1]^C$ (e.g. with a softmax), where $\hat{y}_i \in [0, 1]$ is the $i$th element of $\hat{y}$.
The formula for SCE is (which is consistent with the TensorFlow implementation of this loss function)
$$
\text{SCE}(y, \hat{y}) = - \ln (\hat{y}_{y}) \label{1}\tag{1},
$$
where $\hat{y}_{y}$ is the $y$th element of the output probability vector $\hat{y}$ that corresponds to the probability that $x$ belongs to class $y$, according to $f$.
Actually, the equation \ref{1} is also the definition of the CCE with one-hot vectors as targets (which behave as indicator functions). The only difference between CCE and SCE is really just the representation of the targets, which can slightly change the implementation under the hood. Moreover, note that this is the definition of the CE for only $1$ training pair. If you have multiple pairs, you have to compute the CE for all pairs, then average these CEs (for a reference, see equation 4.108, section 4.3.4 Multiclass logistic regression of Bishop's book PRML).
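In code, the single-pair definition above is one line; here is a minimal stdlib-Python sketch (illustrative, not the TensorFlow implementation):

```python
import math

def sparse_ce(y, y_hat):
    """Cross-entropy of one pair: y is an integer class index, y_hat a probability vector."""
    return -math.log(y_hat[y])

def mean_sparse_ce(ys, y_hats):
    # For a batch of pairs, average the per-pair cross-entropies.
    return sum(sparse_ce(y, y_hat) for y, y_hat in zip(ys, y_hats)) / len(ys)

print(sparse_ce(0, [0.5, 0.5]))  # -ln(0.5) ~= 0.693
```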
Let's have a look at a concrete example with concrete numbers. Let $C=5$, $y = 3$, $\hat{y} = [0.2, 0.2, 0.1, 0.4, 0.1]$, then the SCE is
\begin{align}
\text{SCE}(y, \hat{y})
&=
- \ln (0.4) \approx 0.92,
\end{align}
If $\hat{y} = [0.2, 0.2, 0.2, 0.1, 0.3]$, so $\hat{y}_{y} = 0.1$, and we still have $y = 3$, then the CCE is $2.3 > 1$.
You can execute this Python code to check yourself.
import numpy as np
import tensorflow as tf  # Install TensorFlow 2.3!

y = 1
y_true = [3]  # sparse label (integer)
y_true2 = [0, 0, 0, 1, 0]  # one-hot vector

for y_y in [0.4, 0.1]:
    sce_np = -(y * np.log(y_y))
    print("SCE (NumPy) =", sce_np)

y_preds = [[0.2, 0.2, 0.1, 0.4, 0.1],
           [0.2, 0.2, 0.2, 0.1, 0.3]]

for y_pred in y_preds:
    sce_tf = tf.keras.losses.sparse_categorical_crossentropy(y_true, y_pred)
    cce_tf = tf.keras.losses.categorical_crossentropy(y_true2, y_pred)
    print("SCE (TensorFlow) =", sce_tf)
    print("CCE (TensorFlow) =", cce_tf)
To answer the following question more directly.
However, while training, my loss value is greater than one (almost equal to 1.2) in the first epoch, but until epoch 5 it comes near 0.8. Is it normal? If not, how can I solve this?
Yes. It can happen, as explained above. (However, this does not mean that you do not have mistakes in your code.) | {
"domain": "ai.stackexchange",
"id": 2440,
"tags": "convolutional-neural-networks, training, object-recognition, categorical-crossentropy, alexnet"
} |
Questions about compilers and machine code | Question:
When it is said that machine code is specific to a machine, is it specific to an architecture, or is it specific to a machine? Do two identical processors require different machine code? My guess is that people are referencing instruction sets/architecture when they say machine code is specific to a machine?
If they are referencing instruction sets, how does a compiler know which instruction set to create machine code for?
Ultimately: if I compile a C program on my computer, generating machine code, to what extent is that machine code usable on other machines?
Like, if one of the points of compiled languages is that you don't have to share the source code, how does that even work? Don't you have to, at a minimum, compile for each available architecture?
edit: Sorry if these questions seem easily googleable, believe me I have tried. It's at the point where I need to just start asking people things, which is why I've made this account.
Answer: In principle, identical processors share the same instruction set. Anyway, on modern machines some instructions can be enabled or disabled through the BIOS (UEFI) by the user.
The successive processor generations in a given architecture have an evolving instruction set, usually keeping backward compatibility, but not always.
So in principle, code compiled for an old processor should still run on a newer one.
For compilers to cope, there are two strategies:
compile-time compatibility lets the developers specify the processor family that should be supported, leading to different code generation; a substantial historical example is that of the transition from 32 to 64 bit systems.
run-time compatibility relies on querying the processor itself during program execution to know which instructions are actually supported, and switch to the relevant code to optimize the utilization of the hardware capabilities; this means that the generated code will include several instances of some functions. | {
"domain": "cs.stackexchange",
"id": 19674,
"tags": "compilers, c"
} |
Razor_imu_9dof can't see my Sparkfun IMU (14001)? | Question:
I'm running kinetic on Ubuntu 16.04 (x86).
I've followed the instructions for razor_imu_9dof, for the Razor SEN-14001 M0 board from Sparkfun. I flashed the AHRS firmware, and installed all the relevant ROS packages. I am able to see the IMU spewing data on /dev/ttyACM0, by running cat or screen.
However, when I run
roslaunch razor_imu_9dof razor-pub-and-display.launch
I just get the error:
[ERROR] [1525842655.072800]: IMU not found at port /dev/ttyACM0. Did you specify the correct port in the launch file?
What am I doing wrong? Is there a way to troubleshoot?
Originally posted by roach374 on ROS Answers with karma: 36 on 2018-05-09
Post score: 0
Original comments
Comment by PeteBlackerThe3rd on 2018-05-09:
Just to check you're not trying to run the launch file at the same time as viewing the output using cat or screen are you?
Comment by roach374 on 2018-05-09:
No, I did that separately, add part of my naive attempt to troubleshoot. One thing I did notice though, the /dev/ttyACM0 device is owned by root/dialout. Maybe it's a permissions issue? I tried running sudo roslaunch, but it barfed with "unrecognized command".
Comment by PeteBlackerThe3rd on 2018-05-09:
if cat can read from the port then it should be fine. You can always use chmod to set the permissions of /dev/ttyACM0 . Note they will be reset every time to plug it in.
Comment by PeteBlackerThe3rd on 2018-05-09:
We've got a few of these in our lab, I can check what the output should look like tomorrow so you can check the firmware is correct.
Comment by roach374 on 2018-05-09:
I think I've figured it out. It WAS a permissions issue. I added my user account to the dialout group, and things seem to be working fine now. Thanks!
Answer:
Turned out to be a permissions issue. I added my user account to the dialout group, and everything started working.
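For reference, the usual commands for this fix on Ubuntu (assumed from standard practice, not quoted from the thread; log out and back in before the group change takes effect):

```shell
# Add the current user to the dialout group, which owns /dev/ttyACM0 on Ubuntu
sudo usermod -a -G dialout "$USER"

# Alternatively, a one-off fix that lasts only until the device is re-plugged:
sudo chmod a+rw /dev/ttyACM0
```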
Originally posted by roach374 with karma: 36 on 2018-05-09
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by roach374 on 2018-05-09:
This is the answer, but I can't accept it, since I don't have enough karma.
Comment by PeteBlackerThe3rd on 2018-05-10:
Glad you got this working | {
"domain": "robotics.stackexchange",
"id": 30796,
"tags": "ros, imu, razor-imu-9dof, ros-kinetic"
} |
Arranging the phenol derivative in decreasing order of acidity | Question: I am facing difficulty in the following problem.
What is the decreasing order of acidity of the following phenol derivatives?
In my view benzene-1,2-diol 1 should be the most acidic, because $-\ce{OH}$ is a strong $-I$ group and will thereby stabilise the negative charge on the oxygen formed after removal of a proton.
$\ce{-OCH3}$ is a $+R$ and $-I$ group. Close to the anionic oxygen, the $-I$ effect should be more effective. Hence the second most acidic should be 2-methoxyphenol 2. However, the answer key gives it as 4-methoxyphenol 3.
Answer: The acidity order should be 1 > 2 > 3 as they have pKa values of 9.48, 9.98, and 10.21 respectively.
The $+R$ effect of $\ce{-OCH3}$ is more significant than the $+R$ effect of $\ce{-OH}$.
In the case of 1, both the $-I$ effect and the $+R$ effect operate from the ortho position.
In the case of 2, the $-I$ and $+R$ effects likewise both operate from the ortho position. The net effect is $+R$, since the $+R$ effect dominates the $-I$ effect.
In the case of 3, the $-I$ and $+R$ effects operate from the para position. The net effect is again $+R$, but this net $+R$ effect is stronger than in case 2, because the $-I$ effect is more effective at the ortho position than at the para position, while the $+R$ effect is equally effective at both the ortho and para positions.
As a result, acidity is reduced to a greater extent in the third compound than in the second compound. | {
"domain": "chemistry.stackexchange",
"id": 7087,
"tags": "organic-chemistry, acid-base, aromatic-compounds"
} |
What should the discount factor for the non-slippery version of the FrozenLake environment be? | Question: I was working with FrozenLake 4x4 from OpenAI Gym. In the slippery case, using a discounting factor of 1, my value iteration implementation was giving a success rate of around 75 percent. It was much worse for the 8x8 grid, with success around 50%. I thought in the non-slippery case it should definitely give me 100 percent success, but it turned out to be 0 percent.
After trying to understand what was happening by going through the algorithm on paper (below), I found all values were the same, so greedifying with respect to the value function often just had the agent going around in circles. Reducing the discount factor from 1 to anything below 1 gave me 100 percent success. But going to anything below 0.9 gave me worse results in the slippery case.
Can someone give me some intuition as to why this happened and how to interpret and choose the discount factor for any use case?
Algorithm used (where $\theta = 0.0000001$):
Answer:
After trying to understand what was happening by going through the algorithm on paper (below), I found all values were the same, so greedifying with respect to the value function often just had the agent going around in circles. Reducing the discount factor from 1 to anything below 1 gave me 100 percent success.
This is expected (and optimal, as defined) behaviour with a discount factor of 1 in the deterministic case. With a reward of 1 at the end, no discounting and no negative rewards for taking its time, the agent has infinite time to complete the task. So whatever it does, provided it does not step into a hole at any point, is optimal.
The caveat is that when following the policy as a test, you need to use random choice to resolve ties (this is assumed in the theory behind value iteration, that the agent can reach the high rewards). Most simple argmax functions will not do that, so you will need to add some kind of shuffle or other random selection over the maximum results to ensure you are really following the policy that value iteration discovered the state value function for.
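A common way to implement that random tie-breaking (instead of a plain argmax), sketched in stdlib Python:

```python
import random

def greedy_with_ties(action_values):
    """Pick uniformly among all actions whose value ties the maximum."""
    best = max(action_values)
    return random.choice([a for a, v in enumerate(action_values) if v == best])

# With all-equal values (the gamma = 1 situation described above), a plain
# argmax would always return action 0; this helper reaches all four actions.
picks = {greedy_with_ties([1.0, 1.0, 1.0, 1.0]) for _ in range(200)}
print(sorted(picks))  # almost surely [0, 1, 2, 3]
```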
The last sentence in the pseudocode is ignoring the possibility that you have this kind of situation, so it is not strictly true in all cases. That is, you cannot reliably build a working deterministic policy directly from the optimal value function when $\gamma=1$ for environments like Frozen Lake.
But going to anything below 0.9 gave me worse results in the slippery case.
I am less sure about what is happening in this case. Given that you successfully found an optimal value function in the deterministic environment, then this may actually be optimal too. Whether or not it is may depend on the map - an optimal agent with low discount factor may prefer not to "run a gauntlet" early with risk of failure even if the overall probability of success is better. That may cause it to prefer a safer start that leads to a tougher end, where it is forced to take more risks overall than would be optimal for a goal of completing the most episodes.
how to interpret and choose the discount factor for any use case?
The discount factor is not a free choice for the agent, and it is not a hyperparameter for the learning method in value iteration. Instead, it is part of the definition of what is and isn't optimal. So you should select it based on the problem definition.
In an episodic environment where the agent's goal is to reach a certain state or states as fast as possible, there are two "natural" formulations:
Zero reward for most transitions, "goal" rewards for reaching certain terminal states, and a discount factor below $1$. I think your choice of $0.9$ would be fine, and it is commonly used in the literature for this kind of toy environment.
Negative reward for most transitions, "goal" rewards for reaching certain terminal states. If there is a single "target" state, such as exiting a maze, the "goal" reward can be zero. Usually no discounting, but you could add discounting if it represents something else about the problem.
In more complex systems, the discount factor may also become a hyperparameter of the agent. For instance, it does so when using function approximation with temporal difference learning, because it can dampen positive feedback from bootstrapping. In those cases, where "naturally" there would be no discounting, then you may want to use the highest value that prevents the estimator diverging e.g. $\gamma = 0.99$. | {
"domain": "ai.stackexchange",
"id": 3334,
"tags": "reinforcement-learning, sutton-barto, value-iteration, discount-factor, frozen-lake"
} |
magnitude and phase Fourier coefficients | Question: While solving for Fourier series coefficients in an example, I found a couple of things which confuse me.
How the minus sign changes to plus sign $a_1= 1-\frac{1}{2j} = 1+\frac{1}{2}j$?
After plotting the magnitude and phase part of all the Fourier coefficients, what kind of information we can get from magnitude and phase plot?
Also, how $4je^{j3(2\pi/8)t} - 4je^{-j3(2\pi/8)t} = 8\sin(6\pi/8)t$
Answer: Frankly I don't deserve to get any credit for this answer. That's the reason I was commenting on the question. What you are asking as a question is too trivial.
Part 1
$$a_1=1-\frac{1}{2j}= 1-\frac{j}{2j^2}$$
We know that $j^2=-1$ from complex number arithmetic. So $a_1$ simplifies to $1+\frac{j}{2}$
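You can confirm this directly with Python's complex type (where $j$ is written `1j`); a quick sanity check:

```python
# a1 = 1 - 1/(2j); multiplying top and bottom by j gives 1 + j/2
a1 = 1 - 1 / (2 * 1j)
```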
Part 3
$$4je^{j3\frac{2\pi}{8}t} - 4je^{-j3\frac{2\pi}{8}t}$$
Multiply and divide the above expression by $2j$.
we get $$4j\left(e^{j3\frac{2\pi}{8}t} - e^{-j3\frac{2\pi}{8}t}\right) \frac{2j}{2j}$$
It simplifies to $$8j^2\left(e^{j3\frac{2\pi}{8}t} - e^{-j3\frac{2\pi}{8}t}\right)\frac{1}{2j}$$
which is equal to $$8j^2 \sin\left(\frac{6\pi}{8}t\right) = -8 \sin \left(\frac{6\pi}{8}t\right)$$
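A quick numerical sanity check of that minus sign (a sketch; the sample instant `t` is arbitrary):

```python
import cmath, math

t = 0.37                                  # arbitrary sample instant
theta = 3 * (2 * math.pi / 8) * t
lhs = 4j * cmath.exp(1j * theta) - 4j * cmath.exp(-1j * theta)
rhs = -8 * math.sin(theta)                # note the minus sign: 8 j^2 sin(theta)
```

The left-hand side comes out purely real and equal to $-8\sin(\theta)$, not $+8\sin(\theta)$.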
because from exponential form of definition of $\sin(x)$
$$\sin(x) = \dfrac{e^{jx} - e^{-jx}}{2j}$$ | {
"domain": "dsp.stackexchange",
"id": 3165,
"tags": "continuous-signals, fourier-series"
} |
Is it possible to exclude article types in a PubMed search? | Question: I have a question about searching PubMed. I want to search for original research papers while omitting reviews. Is this possible in PubMed?
Example
If I am interested in the relation between Acute Renal Failure and Salt, I could use the following search string:
("Acute Kidney Injury"[Mesh]) AND ("Sodium Chloride"[Mesh])
Using the column on the left of the article overview list, you can select which article types you are interested in.
Question
But I wondered if it was also possible to exclude a specific article type (e.g., reviews) from the search?
Answer: I believe this will work:
((Acute Kidney Injury[MeSH Terms]) AND (Sodium Chloride[MeSH Terms])) NOT (Review[Publication Type])
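If you end up issuing such queries from a script, the same AND/NOT composition can be built programmatically. A small sketch (the helper name is hypothetical; the resulting string is what you would pass as the `term` parameter to NCBI's E-utilities ESearch endpoint):

```python
def build_query(mesh_terms, exclude_types):
    """Compose a PubMed query: AND together MeSH terms, then NOT out
    unwanted publication types. Helper name is hypothetical."""
    include = " AND ".join(f"({t}[MeSH Terms])" for t in mesh_terms)
    exclude = " OR ".join(f"({t}[Publication Type])" for t in exclude_types)
    return f"({include}) NOT ({exclude})" if exclude else include

query = build_query(["Acute Kidney Injury", "Sodium Chloride"], ["Review"])
```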
Using the advanced search settings on PubMed it was easy enough to set up; here is the link in case you don't have it already:
https://pubmed.ncbi.nlm.nih.gov/advanced/ | {
"domain": "biology.stackexchange",
"id": 11022,
"tags": "literature, research-process, research-tools"
} |
Why is depth-first search an artificial intelligence algorithm? | Question: I'm new to the artificial intelligence field. In our first chapters, there is one topic called "problem-solving by searching". After searching for it on the internet, I found the depth-first search algorithm. The algorithm is easy to understand, but no one explains why this algorithm is included in the artificial intelligence study.
Where do we use it? What makes it an artificial intelligence algorithm? Is every search algorithm an AI algorithm?
Answer: This is fundamentally a philosophical question. What makes AI AI? But first things first: why would DFS be considered an AI algorithm?
In its most basic form, DFS is a very general algorithm that is applied to wildly different categories of problems: topological sorting, finding all the connected components in a graph, etc. It may also be used for searching. For instance, you could use DFS for finding a path in a 2D maze (although not necessarily the shortest one). Or you could use it to navigate through more abstract state spaces (e.g. between configurations of chess or in the Towers of Hanoi). And this is where the connection to AI arises. DFS can be used on its own for navigating such spaces, or as a basic subroutine for more complex algorithms. I believe that in the book Artificial Intelligence: A Modern Approach (which you may be reading at the moment) they introduce DFS and Breadth-First Search this way, as a first milestone before reaching more complex algorithms like A*.
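The maze use mentioned above fits in a few lines. A minimal sketch, where the grid encoding is illustrative (0 is free, 1 is a wall) and DFS returns a path but not necessarily the shortest one:

```python
def dfs_path(grid, start, goal):
    """Depth-first search on a 2D grid; returns *a* path from start to
    goal (not necessarily the shortest), or None if unreachable."""
    rows, cols = len(grid), len(grid[0])
    stack, seen = [(start, [start])], {start}
    while stack:
        (r, c), path = stack.pop()
        if (r, c) == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in seen):
                seen.add((nr, nc))
                stack.append(((nr, nc), path + [(nr, nc)]))
    return None

maze = [[0, 0, 1],
        [1, 0, 1],
        [1, 0, 0]]
path = dfs_path(maze, (0, 0), (2, 2))
```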
Now, you may be wondering why such search algorithms should be considered AI. Here, I'm speculating, but maybe the source of the confusion comes from the fact that DFS does not learn anything. This is a common misconception among new AI practitioners. Not every AI technique has to revolve around learning. In other words, AI != Machine Learning. ML is one of the many subfields within AI. In fact, early AI (around the 50s-60s) was more about logic reasoning than it was about learning.
AI is about making an artificial system behave "intelligently" in a given setting, whatever it takes to reach that intelligent behavior. If what it takes is applying well-known algorithms from computer science like DFS, then so be it. Now, what is it that intelligent means? This is where we enter more philosophical grounds. My interpretation is that "intelligence" is a broad term to define the large set of techniques that we use to approach the immense complexity that reality and certain puzzle-like problems have to offer. Often, "intelligent behavior" revolves around heuristics and proxy methods away from the perfect, provable algorithms that work elsewhere in computer science. While certain algorithms (like DFS or A*) may be proven to give optimal answers if infinitely many resources can be devoted to the task at hand, only in sufficiently constrained settings would such techniques be affordable. Fortunately, we can make them work in many situations (like A* for chess or for robot navigation, or Monte Carlo Tree Search for Go), but only if reasonable assumptions and constraints over the state space are imposed. For all the rest is where learning techniques (like Markov Random Fields for image segmentation, or Neural Nets paired with Reinforcement Learning for situated agents) may come handy.
Funny enough, even if intelligence is often regarded as a good thing, my interpretation can be summed up as imperfect modes of behavior to address immensely complex problems for which no known perfect solution exists (with rare exceptions in sufficiently bounded problems). If we had a huge table that, for each chess position, gives the best possible move you can make, and put that table inside a program, would this program be intelligent? Maybe you'd think so, but in any case it seems more arguable than a program that makes real-time reasoning and spits a decision after some reasonable time, even if it's not the best one. Similarly, do you consider sorting algorithms intelligent? Again, the answer is arguable, but the fact is that algorithms exist with optimal time and memory complexities, we know that we can't do better than what those algorithms do, and we do not have to resort to any heuristic or any learning to do better (disclaimer: I haven't actually checked if there's some madman out in the wild applying learning to solve sorting with better average times). | {
"domain": "ai.stackexchange",
"id": 2238,
"tags": "search, ai-field, depth-first-search"
} |
How can i send the 2d navigation goal to rviz by voice command | Question:
How can I send the 2D navigation goal to rviz by voice command?
I plan to use pocketsphinx
Originally posted by femitof on ROS Answers with karma: 13 on 2020-05-01
Post score: 0
Answer:
pocketsphinx will work. I've done it that way. But of course the goal gets sent to move_base, not to RVIZ.
In my case, I created a node that takes the desired goal location from pocketsphinx as a string ("door", "kitchen", etc.), looks up the pose information from a table, then sends that pose goal to the navigation stack. I discussed sending the goal in an answer to this post:
https://answers.ros.org/question/259418/sending-goals-to-navigation-stack-using-code/#259427
**Update after follow on question: **
In the comment below you ask how to do this in reality, but if you have it working on the turtlebot then you should be able to extend to move_base as described above. It depends on how you want to do the voice commands, or rather what type of commands you want to give. Do you want to:
give it a goal location using coordinates and an orientation, like "go to ten dot 23 X 2 dot 1 Y 2 dot 5 Z"?
give predefined names to places in the map and then simply say the name (as I describe above), like "kitchen"
say "forward", "back", and essentially teleop it around with your voice?
You may approach it differently given what it is you want to do. It sounds like you want autonomous navigation so I think option 3 is out so you'll need to decide between option 1 and 2.
I'll assume option 2, since option 1 really makes no sense to me even though it is what you asked for.
Step 1 - add the names of the predefined locations to pocketsphinx
You'll use the voice_cmd.launch file for starting.
voice_cmd.lm and voice_cmd.dic will both need updating with the new words you need pocketsphinx to recognize. Here is my voice_cmd.dic file to show pocketsphinx how to pronounce the words. This file includes words that did not come with the tutorial.
I got the phonetic spellings from this web page: http://www.speech.cs.cmu.edu/cgi-bin/cmudict
BACK B AE K
BACKWARD B AE K W ER D
FORWARD F AO R W ER D
FULL F UH L
HALF HH AE F
HALT HH AO L T
LEFT L EH F T
MOVE M UW V
RIGHT R AY T
SPEED S P IY D
STOP S T AA P
COFFEE K AA F IY
COFFEE K AO F IY
TABLE T EY B AH L
KITCHEN K IH CH AH N
ROBOT R OW B AA T
DINING D AY N IH NG
CANCEL K AE N S AH L
REBECCA R AH B EH K AH
LASER L EY Z ER
ON AA N
OFF AO F
SING S IH NG
FRONT F R AH N T
DOOR D AO R
THE DH AH
BAR B AA R
REMOTE R IH M OW T
CONTROL K AH N T R OW L
WAKE W EY K
SLEEP S L IY P
The new words need to be added to the .LM file, and I'll tell you right now that the LM file is tricky. I'm just gonna post the entire file as I use it because I can't explain how to set it up fully. I had to play around quite a bit as I added words to the dictionary to keep pocketsphinx working. If you have questions about this file, please google pocketsphinx instead of asking for my help. As you add words to the list you need to update the ngram numbers, but it wasn't always clear to me where to add the words, and never clear what values I should use before and after the words, but the way I did it seemed to work OK.
Language model created by QuickLM on Sat Mar 19 20:15:00 EDT 2011
Copyright (c) 1996-2000
Carnegie Mellon University and Alexander I. Rudnicky
This model based on a corpus of 14 sentences and 13 words
The (fixed) discount mass is 0.5
\data\
ngram 1=33
ngram 2=37
ngram 3=26
\1-grams:
-0.8451 </s> -0.3010
-0.8451 <s> -0.2074
-1.6902 BACK -0.2341
-1.6902 BACKWARD -0.2341
-1.6902 FORWARD -0.2341
-1.9912 FULL -0.2921
-1.9912 HALF -0.2921
-1.9912 HALT -0.2341
-1.6902 LEFT -0.2341
-1.2923 MOVE -0.2543
-1.6902 RIGHT -0.2341
-1.6902 SPEED -0.2341
-1.9912 STOP -0.2341
-1.9912 COFFEE -0.2341
-1.9912 TABLE -0.2341
-1.9912 DINING -0.2341
-1.9912 ROBOT -0.2341
-1.9912 KITCHEN -0.2341
-1.9912 CANCEL -0.2341
-1.9912 REBECCA -0.2341
-1.9912 LASER -0.2341
-1.9912 ON -0.2341
-1.9912 OFF -0.2341
-1.9912 SING -0.2341
-1.9912 REMOTE -0.2341
-1.9912 CONTROL -0.2341
-1.9912 FRONT -0.2341
-1.9912 DOOR -0.2341
-1.9912 THE -0.2341
-1.9912 BAR -0.2341
-1.9912 WAKE -0.2341
-1.9912 SLEEP -0.2341
-1.9912 UP -0.2341
\2-grams:
-1.4472 <s> BACK 0.0000
-1.4472 <s> BACKWARD 0.0000
-1.4472 <s> FORWARD 0.0000
-1.4472 <s> FULL 0.0000
-1.4472 <s> HALF 0.0000
-1.4472 <s> HALT 0.0000
-1.4472 <s> LEFT 0.0000
-0.7482 <s> MOVE 0.0000
-1.4472 <s> RIGHT 0.0000
-1.4472 <s> STOP 0.0000
-1.4472 <s> COFFEE 0.0000
-1.4472 <s> DINING 0.0000
-1.4472 <s> ROBOT 0.0000
-1.4472 <s> KITCHEN 0.0000
-0.3010 BACK </s> -0.3010
-0.3010 BACKWARD </s> -0.3010
-0.3010 FORWARD </s> -0.3010
-0.3010 FULL SPEED 0.0000
-0.3010 HALF SPEED 0.0000
-0.3010 HALT </s> -0.3010
-0.3010 LEFT </s> -0.3010
-1.0000 MOVE BACK 0.0000
-1.0000 MOVE BACKWARD 0.0000
-1.0000 MOVE FORWARD 0.0000
-1.0000 MOVE LEFT 0.0000
-1.0000 MOVE RIGHT 0.0000
-0.3010 RIGHT </s> -0.3010
-0.3010 SPEED </s> -0.3010
-0.3010 STOP </s> -0.3010
-1.0000 COFFEE TABLE 0.0000
-1.0000 DINING TABLE 0.0000
-1.0000 LASER OFF 0.0000
-1.0000 LASER ON 0.0000
-0.3010 REMOTE CONTROL -0.3010
-1.0000 FRONT DOOR 0.0000
-1.0000 THE BAR 0.0000
-1.0000 WAKE UP 0.0000
\3-grams:
-0.3010 <s> BACK </s>
-0.3010 <s> BACKWARD </s>
-0.3010 <s> FORWARD </s>
-0.3010 <s> FULL SPEED
-0.3010 <s> HALF SPEED
-0.3010 <s> HALT </s>
-0.3010 <s> LEFT </s>
-1.0000 <s> MOVE BACK
-1.0000 <s> MOVE BACKWARD
-1.0000 <s> MOVE FORWARD
-1.0000 <s> MOVE LEFT
-1.0000 <s> MOVE RIGHT
-0.3010 <s> RIGHT </s>
-0.3010 <s> STOP </s>
-1.0000 <s> COFFEE TABLE
-1.0000 <s> DINING TABLE
-0.3010 FULL SPEED </s>
-0.3010 HALF SPEED </s>
-0.3010 MOVE BACK </s>
-0.3010 MOVE BACKWARD </s>
-0.3010 MOVE FORWARD </s>
-0.3010 MOVE LEFT </s>
-0.3010 MOVE RIGHT </s>
-0.3010 COFFEE TABLE </s>
-0.3010 DINING TABLE </s>
-0.3010 REMOTE CONTROL </s>
\end\
Pocketsphinx will put the recognized text on the /recognizer/output topic. You need a node that subscribes to that topic, looks up the location, and generates a goal to move_base as I mentioned in the post originally.
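As a ROS-free sketch of that lookup step (the place names and poses below are invented placeholders; in a real node the returned pose would feed a move_base goal via actionlib rather than just being returned):

```python
# Map recognized phrases from /recognizer/output to map-frame poses.
# All names and (x, y, yaw) values are invented placeholders; measure
# real poses in RViz for your own map.
PLACES = {
    "kitchen": (2.1, 0.4, 0.0),
    "coffee table": (3.5, 1.2, 1.57),
    "front door": (0.0, -2.0, 3.14),
}

def goal_from_speech(recognized_text):
    """Return the stored pose for a recognized phrase, or None when the
    phrase is not a known place (e.g. a teleop word like 'stop')."""
    return PLACES.get(recognized_text.strip().lower())
```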
If you need help with that part of the project after looking through the question/answer I linked in the original answer I think maybe that's probably good for another question.
Originally posted by billy with karma: 1850 on 2020-05-01
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by femitof on 2020-05-01:
Thanks @billy
Can you remember the pocketsphinx you used ? cause i am having issues...
Do you have your files in a github repository or something? I could use it as a referendce.
Thanks alot again
much appreciate
Comment by femitof on 2020-05-02:
Hi, Yes to move_base using voice..
I have been able to successfully install pocketsphinx with basic commands like back, forward and all.
I can control turtlebot in the simulation..
How do i use this to actually control my robot in reality? ps: my robot already does autonomous navigation but by sending goals on rviz @billy
Comment by billy on 2020-05-02:
see edited answer above
Comment by femitof on 2020-05-03:
Thank you very much..
This was detailed and helpful.. used all you wrote to crosscheck what i had done so far..
much appreciated...
I am gonna post a link to the new question so you can review my codes for the voice to move_base control..
Thanks...
Comment by femitof on 2020-05-03:
https://answers.ros.org/question/351244/how-to-publish-voice-output-pocketspinx-to-move-base-goal/
Please see the link @billy
Thanks alot for all the help thus far..
Much appreciated
Comment by femitof on 2020-05-03:
Hi @billy
I have been at this all day and night and cant get to be able to use the voice (position 3 and 4) to send goals to the move_base..
Please assist when you do have the time
Here is a link to the question:
https://answers.ros.org/question/351244/how-to-publish-voice-output-pocketspinx-to-move-base-goal/ | {
"domain": "robotics.stackexchange",
"id": 34877,
"tags": "ros, ros-kinetic"
} |
What is the difference between 250W band heater and a 200W one? | Question: What is the difference between 250W band heater and a 200W one? Can they reach the same temperature?
Answer: Heating elements generally have a maximum operating temperature above which they'll start to degrade and eventually break. If these heaters are from the same product line, they are likely made of the same materials and will therefore have the same maximum operating temperature.
Separately, you want to know how hot they can get in your application. The hotter they get, the faster they lose heat to the environment. Once they are losing heat as quickly as they generate it, their temperature will stop rising. From that perspective, the 250 W heater will reach a higher temperature if all other things are the same in your system. Alternatively, you may add insulation to the lower power heater to achieve the same final temp.
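That heat balance can be sketched with a lumped Newton-cooling model; the loss coefficient h·A below is a made-up illustration, not a property of any particular band heater:

```python
def steady_state_temp(power_w, loss_coeff, t_ambient=25.0):
    """Temperature at which heat loss loss_coeff*(T - T_amb) equals the
    electrical input power (lumped Newton-cooling model)."""
    return t_ambient + power_w / loss_coeff

hA = 0.5                                # W/K, hypothetical for this geometry
t_200 = steady_state_temp(200, hA)      # lower final temperature
t_250 = steady_state_temp(250, hA)      # same heater line, runs hotter
```

Raising `loss_coeff` (less insulation) lowers both final temperatures; lowering it (more insulation) lets the 200 W heater match the 250 W one, as described above.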
The last thing to consider is that the higher power heater will heat things up more quickly. | {
"domain": "engineering.stackexchange",
"id": 1189,
"tags": "heat-transfer, power-electronics"
} |
How does this thought experiment not rule out black holes? | Question: How does the following brief thought experiment fail to show that general relativity (GR) has a major problem in regards to black holes?
The full thought experiment is in my blog post. The post claims that GR violates its own equivalence principle at the horizon of a black hole. The principle says that the laws of physics in any sufficiently small, freely falling frame are the same as they are in an inertial frame in an idealized, gravity-free universe. Here's a condensed version of the thought experiment:
In an arbitrarily small, freely falling frame X that is falling through the horizon of a black hole, let there be a particle above the horizon that is escaping to infinity. A free-floating rod positioned alongside the particle and straddling the horizon couldn't be escaping to infinity as well, or else it'd be passing outward through the horizon. However, if instead the rod didn't extend as far down as the horizon, then in principle it could be escaping, possibly faster than the particle beside it. In an inertial frame, unlike in X, a body's freedom of movement (in principle and if only relative to other free objects in the frame) doesn't depend on the body's position or extent. Then a test of the laws of physics can distinguish X from an inertial frame. If X was equivalent to an inertial frame, I wouldn't be able to tell whether the rod could possibly be passing the particle in the outward direction, by knowing only whether the rod extends as far down as an imaginary boundary (the horizon) within the frame. If X was equivalent to an inertial frame, the rod could in principle be passing the particle in the outward direction regardless of its extent within X.
The thought experiment above takes place completely within X, which is arbitrarily small in spacetime (arbitrarily small both spatially and in duration). That is, the experiment is completely local. That the particle is escaping to infinity is a process occurring within X; it tells us that the particle won't cross the horizon during the lifetime of X. The particle needn't reach infinity before the experiment concludes.
It isn't necessary to be able to detect (by some experiment) that a horizon exists within X. It's a given (from the givens in the thought experiment) that a horizon is there. Likewise, I am free to specify the initial conditions of a particle or rod in relation to the horizon. For example, I am free to specify that the rod straddles the horizon, and draw conclusions from that. The laws of physics in X are affected by the presence and properties of the horizon regardless whether an observer in that frame detects the horizon.
It seems to me that the only way the equivalence principle is satisfiable in X is when in principle the rod can be escaping to infinity regardless of its initial position or extent in X, which would rule out black holes in a theory of gravity consistent with the principle. Otherwise, it seems the bolded sentence must be incorrect. If so, how? In other words, how can I not tell whether the rod can possibly be passing the particle in the outward direction, by knowing only whether it extends as far down as the horizon?
I'd appreciate hearing from Ted Bunn or other experts on black holes. A barrier to getting a satisfactory answer to this question is that many people believe the tidal force is so strong at the horizon that the equivalence principle can't be tested there except impossibly, within a single point in spacetime. An equation of GR (see my blog post) shows that a horizon isn't a special place in regards to the tidal force, in agreement with many texts including Ted Bunn's Black Hole FAQ. In fact the tidal force can in principle be arbitrarily weak in any size X. To weaken the tidal force in any given size X, just increase the mass of the black hole. (Or they might believe it's fine to test the principle in numerical approximation in a frame larger than a point, but not fine to test it logically in such frame anywhere. Kip Thorne disagrees, in a reference in my blog post.) Note also that the Chandra X-ray Observatory FAQ tells us that observations of black holes to date aren't confirmations of GR, rather they actually depend on the theory's validity, which is to say the existence of black holes in nature isn't proven.
Edit to add: I put a simple diagram, showing GR's violation of its own EP, at the blog post.
Edit to add: I'm awarding the bounty to dbrane, whose answer will likely retain the lead in votes, even though it's clearly incorrect as I see it. (In short, the correct answer cannot be that an infinitesimally small frame is required to test the EP. It is in fact tested in larger labs. The tidal force need only be small enough that it doesn't affect the outcome. Nor is the horizon a special place in regards to the tidal force, says GR.) I do appreciate the answers. Thanks!
Edit to add: this question hasn't been properly answered. The #1 answer below made a false assumption about the question. I've beefed up the question to address the objections in the answers below. I added my own answer to recap the objections and reach a conclusion. Please read the whole post before answering; I may have already covered your objection. Thanks!
Answer: I just read your blog post and it's clear to me where you've gone wrong.
The equivalence principle only allows you to transform to an inertial frame locally. This means that if your spacetime is curved, then the falling observer can only choose Minkowski coordinates for an infinitesimal region around her.
Think of a curved surface and having to choose a very small patch on it for it to appear flat. Clearly, you can't extend that flat patch indefinitely and call it an inertial frame of infinite extent (which you require in order to argue that the frame would allow you to send signals out to infinity).
The horizon is a global object that you realize exists when you patch together all the infinitesimal coordinate systems and examine its causal structure.
So, yes, the falling observer can do experiments to realize the horizon exists, but this does not violate the Equivalence Principle because such experiments are not done locally in an infinitesimal region. This applies to the rod that you seem to want to send away to infinity after crossing the horizon too. The infinitesimal flat patch in which you're allowed to play with the EP does not include infinity (or anything beyond the horizon), so you can't throw things outside of the horizon once you've crossed. | {
"domain": "physics.stackexchange",
"id": 86311,
"tags": "general-relativity, gravity, black-holes, equivalence-principle"
} |
Predicting and clustering at the same time? | Question: I want to build a segmentation to substitute the existing RFM segmentation which is a basic segmentation based on the Recency, Frequency and Monetary values.
The new segmentation will be used for two purposes:
Rank customers per value
Have homogeneous groups of customers, so there should be an interpretation and meaning for each group.
I think this shouldn't be purely unsupervised learning, but a mix of both supervised and unsupervised learning.
So what I did is define a target variable that takes 1 if the customer is active in the next period and 0 if not. Then I used a decision tree to predict this value, so that the leaves represent the clusters. The percentage of active customers in each group is used to rank these clusters.
Do you think that's a good solution and do you have other ideas ?
Thanks in advance
Answer: Your solution seems interesting. However, it implies that the customers’ value is assessed through whether they are active or not in next period. It might be interesting to look at the purchase by customer for, say future 8 weeks (or future 6 weeks or 12 weeks for that matter – the time frame depends a lot on the particularity of the business). This measurement reflects whether customers are active or not in the next period and also quantifies their 'activity’.
For a somewhat similar project I implemented the following solution:
Start by randomly selecting one breakpoint for each of the three dimensions (RFM); by breakpoint I mean a random value among the unique values of Recency, another randomly selected value for Frequency, and another one for Monetary.
Draw some initial segments based on these three breakpoints by selecting groups of customers that are relatively similar in terms of RFM according to the selected breakpoints. As there are 3 breakpoints (one per variable), you will get 2^3 segments because each variable will be broken down in 2 parts leading to 8 segments in total. In other words, you will have:
Segm 1: Recency < breakpoint_Recency & Frequency < breakpoint_Frequency & Monetary < breakpoint_Monetary
Segm 2: Recency < breakpoint_Recency & Frequency >= breakpoint_Frequency & Monetary < breakpoint_Monetary
Segm 3: Recency < breakpoint_Recency & Frequency < breakpoint_Frequency & Monetary >= breakpoint_Monetary
Segm 4: Recency < breakpoint_Recency & Frequency >= breakpoint_Frequency & Monetary >= breakpoint_Monetary
Segm 5: Recency >= breakpoint_Recency & Frequency < breakpoint_Frequency & Monetary < breakpoint_Monetary
Segm 6: Recency >= breakpoint_Recency & Frequency >= breakpoint_Frequency & Monetary < breakpoint_Monetary
Segm 7: Recency >= breakpoint_Recency & Frequency < breakpoint_Frequency & Monetary >= breakpoint_Monetary
Segm 8: Recency >= breakpoint_Recency & Frequency >= breakpoint_Frequency & Monetary >= breakpoint_Monetary
Compute the mean of future 8 weeks purchases for each of the resulted segments
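The steps above can be sketched as follows. The customer data and breakpoints are invented, and the 8 segments are encoded as a 3-bit number (one bit per RFM dimension), which is just a compact way of writing the Segm 1-8 table:

```python
import random
random.seed(0)

# Invented rows: (recency, frequency, monetary, future_8wk_purchases)
customers = [(random.randint(1, 90), random.randint(1, 20),
              random.uniform(5, 500), random.uniform(0, 300))
             for _ in range(200)]

def segment_of(row, breaks):
    """Encode the 2^3 = 8 segments as a 3-bit number, one bit per
    dimension (bit set when the value is >= its breakpoint)."""
    r_bp, f_bp, m_bp = breaks
    r, f, m, _ = row
    return ((r >= r_bp) << 2) | ((f >= f_bp) << 1) | (m >= m_bp)

def mean_future_value(rows, breaks):
    """Mean future purchases per segment (the quantity a GA fitness
    function would work from)."""
    sums, counts = [0.0] * 8, [0] * 8
    for row in rows:
        s = segment_of(row, breaks)
        sums[s] += row[3]
        counts[s] += 1
    return {s: sums[s] / counts[s] for s in range(8) if counts[s]}

breaks = (30, 5, 100.0)      # one random starting breakpoint per dimension
per_segment = mean_future_value(customers, breaks)
```

A GA would then mutate and recombine `breaks` tuples, scoring each candidate with `mean_future_value`.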
Then, of course, this segmentation was random and is not the optimal one. To get the optimal segmentation you then implement a Genetic Algorithm to re-do the previous procedure and explore the space for better breakpoints. By this you will likely get the optimal segmentation of homogeneous customers in terms of their purchasing behavior (as characterized by RFM) – optimal with respect to their value (as you decide to define their value).
By the way, the fitness function for the GA aims to maximize the mean of future 8 weeks purchases per segment. Or you can also set your own objective on how to measure the value of customers and define the fitness function accordingly.
Hope this helps. Good luck!
Update: This solution allows you to find 'the homogeneous groups of customers' and 'rank segments per value'. I am not sure if this is what you meant by 'rank customers per value'. In my view supervised clustering defeats the idea of individual ranking (but maybe I am wrong).
Moreover, customers change behaviors from one period to the other and it is perhaps better to cluster them in terms of purchasing behavior, not to rank individuals at a point in time. In this way, when a customer changes behavior (in terms of RFM), we know where s/he went (i.e. in which cluster), and thus we can assess whether s/he 'improved' or not. In the latter case, marketing people can take action to incentivize the concerned customer to move back into the better off segment. | {
"domain": "datascience.stackexchange",
"id": 1131,
"tags": "machine-learning, data-mining, clustering, predictive-modeling"
} |
Validity of the leveling effect of water in nitration reactions | Question: Wikipedia gives this as the levelling effect of water
Any acid that is stronger than $\ce{H3O+}$ reacts with $\ce{H2O}$ to form $\ce{H3O+}$. Therefore, no acid stronger than $\ce{H3O+}$ exists in $\ce{H2O}$. For example, aqueous perchloric acid ($\ce{HClO4}$), aqueous hydrochloric acid ($\ce{HCl}$) and aqueous nitric acid ($\ce{HNO3}$) are all completely ionized and are all equally strong acids.
So from what I can understand, if I take an aqueous medium, then $\ce{H2SO4}$ and $\ce{HNO3}$ both should be equally strong.
But then comes the nitration reaction, where the nitronium cation acts as the real attacking electrophile, which means that in this reaction, $\ce{HNO3}$ acted as a base and $\ce{H2SO4}$ acted as an acid.
I am also aware concentrated acids are taken in the nitration reactions, but even the most concentrated acids have some amount of water.
How are these two concepts contradicting, where am I going wrong?
What are the limitations of the levelling effect and where all is it applicable? Is it not applicable to concentrated solutions, and if so, how concentrated?
Answer: Where you have gone wrong in this question is that you completely disregard the fact that the mentioned acids (all stronger than $\ce{H3O+}$) are in aqueous medium, in other words, dissolved in water. Thus, a fast acid-base reaction ($\ce{H2O}$ acts as a base here) happens to give $\ce{H3O+}$ as the only acid in these mixtures (the leveling effect of water).
On the other hand, the concentration of sulfuric acid is $\approx 98\% (w/w)$. That means approximately only $\pu{2 g}$ of $\ce{H2O}$ in $\pu{100 g}$ of concentrated solution. Therefore, the mole ratio of $\ce{H2O : H2SO4}$ would be about $0.1 : 1$. Thus, $\ce{H2SO4}$ is not leveled by water. That extra remaining $\ce{H2SO4}$ can act on $\ce{HNO3}$ in the nitrating mixture as shown in your scheme (vide supra).
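The mole-ratio arithmetic behind that figure, using standard molar masses:

```python
M_H2O, M_H2SO4 = 18.02, 98.08            # g/mol, standard molar masses
mass_acid, mass_water = 98.0, 2.0        # per 100 g of 98% (w/w) solution

mol_water = mass_water / M_H2O           # about 0.111 mol
mol_acid = mass_acid / M_H2SO4           # about 0.999 mol
ratio = mol_water / mol_acid             # about 0.11 : 1 (far below 1:1)
```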
Note: As a rule of thumb, you may consider a $1:1$ mole ratio of water to any acid stronger than $\ce{H3O+}$ as the limiting level for the leveling effect. | {
"domain": "chemistry.stackexchange",
"id": 13139,
"tags": "organic-chemistry, acid-base, electrophilic-substitution"
} |
$R = dV/dI$ for varying temperature | Question: I'm trying to do my prelab for an E&M course, and am asked whether, for a plot of $V$ vs $I$ with varying temperature, I should expect a linear slope. I know that both $V$ and $I$ depend on $R$, and since $R$ is proportional to $T$, I believe I shouldn't.
Trying to show this, I'm having some problems -- probably because my differential equations skills are a bit rusty.
Simply put: if $V = I \,R(T)$, how do I take $\mathrm{d}V/\mathrm{d}I$?
I can find $\mathrm{d}V/\mathrm{d}T$ by the chain rule, but I'm just drawing a blank with $\mathrm{d}V/\mathrm{d}I$, since $I = V/R(T)$.
Answer: The missing piece here is that the temperature of the resistor is a function of the current. Your equation should perhaps read $V = I\,R(T(I))$. Does that help? | {
"domain": "physics.stackexchange",
"id": 14630,
"tags": "electric-circuits, calculus, differential-equations"
} |
Sieve of Eratosthenes using Java 8 streams | Question: I am learning streams and lambdas in Java 8.
I wanted to implement the classic Sieve of Eratosthenes using lambdas and streams. This is my implementation.
/**
* @author Sanjay
*/
class Sieve {
/**
* Implementation of the Sieve of Eratosthenes algorithm using streams and lambdas.
* It's much slower than the version with traditional for loops.
* It consumes more memory than the version with traditional for loops.
* It computes all primes up to 100_000_000 in 2 secs (approx.)
*/
public static List<Integer> sieveOfEratosthenes(int _n) {
// if _n is empty return an empty list
if (_n <= 1) return Arrays.asList();
// create a boolean array each index representing index number itself
boolean[] prime = new boolean[_n + 1];
// set all the values in prime as true
IntStream.rangeClosed(0, _n).forEach(x -> prime[x] = true);
// make all composite numbers false in prime array
IntStream.rangeClosed(2, (int) Math.sqrt(_n))
.filter(x -> prime[x])
.forEach(x -> unsetFactors(x, prime, _n));
// create a list containing primes upto _n
List<Integer> primeList = new ArrayList<>((_n < 20) ? _n : (_n < 100) ? 30 : _n / 3);
// add all the indexes that represent true in prime array to primeList
IntStream.rangeClosed(2, _n).filter(x -> prime[x]).forEach(primeList::add);
// return prime list
return primeList;
}
/*
* makes all the factors of x in prime array to false
* as primes don't have any factors
* here x is a prime and this makes all factors of x as false
*/
private static void unsetFactors(int x, boolean[] prime, int _n) {
IntStream.iterate(x * 2, factor -> factor + x)
.limit(_n / x - 1)
.forEach(factor -> prime[factor] = false);
}
}
What is its efficiency compared to normal for-loops?
Are there any improvements to be made?
Answer: Implementation
You can invert the values for prime and not prime to avoid the initial rangeClosed(0, _n).forEach(x -> prime[x] = true). The innermost loop can start at x*x instead of x*2:
// (1)
public static List<Integer> sieveOfEratosthenes(int n) {
boolean[] notPrime = new boolean[n + 1];
IntStream.rangeClosed(2, (int) Math.sqrt(n))
.filter(x -> !notPrime[x])
.forEach(x -> {
IntStream.iterate(x * x, m -> m <= n, m -> m + x)
.forEach(y -> notPrime[y] = true);
});
List<Integer> list = new ArrayList<>();
IntStream.rangeClosed(2, n)
.filter(x -> !notPrime[x])
.forEach(x -> list.add(x));
return list;
}
This implementation still feels like a literal translation of the for-loop approach. Streams offer options to improve readability, i.e. the inner loop can be flattened:
// (2)
public static List<Integer> sieveOfEratosthenes(int n) {
boolean[] notPrime = new boolean[n + 1];
rangeClosed(2, (int) Math.sqrt(n))
.filter(x -> !notPrime[x])
.flatMap(x -> iterate(x * x, m -> m <= n, m -> m + x))
.forEach(x -> notPrime[x] = true);
return rangeClosed(2, n)
.filter(x -> !notPrime[x])
.boxed().collect(toList());
}
Stream vs Loop
The main problem I see with a stream-based approach is (besides the overhead of the streams) that it is more difficult to optimize compared to a loop approach, as the abstraction level is higher. For example, converting the implementation to a parallel approach is in my opinion way more complicated with a stream approach.
Algorithm improvements (not directly related to the question)
You can use a BitSet (or directly a long/int/byte array) to ensure that each entry in the sieve consumes only one bit instead of the (likely) 8 bits it currently uses. You are storing all values from 2 to n in the sieve; you can save memory by skipping multiples of 2 (and 3, 5, ...) at the cost of some additional calculations to convert between sieve position and value.
For larger values of n it might be advantageous to divide the sieving process in smaller steps and use a small array instead of a large array for all values and sieving the complete range at once.
The sieving process can be converted to a parallel implementation with nearly linear speedup, as two or more values can be sieved simultaneously (the sieving of each value is independent of the others).
Approx. performance for n=100_000_000, included my implementation (which utilizes most of the improvements mentioned above) as reference value:
// 1 Thread
Initial 3406676978ns
Version (1) 2487619662ns
Version (2) 2459434320ns
For loop 2158261344ns
Nevay to List 1436579484ns
Nevay to Sieve 1040829243ns
// 8 Threads
Nevay to List 682085628ns
Nevay to Sieve 172952467ns | {
"domain": "codereview.stackexchange",
"id": 27686,
"tags": "java, algorithm, primes, stream, sieve-of-eratosthenes"
} |
Maximum weight perfect matching in general graphs | Question: Let $G(V,E)$ be a graph (not necessarily bipartite), where edge $e \in E$ has weight $w_e$ (non-negative real). Then one can write the LP relaxation for maximum weight perfect matching as follows
$$
\text{maximize}~~\sum_{e\in E} w_e x_e
\\
\text{subject to}~~\sum_{e\in E: v\in e}x_e=1\text{ for all }v\in V
\\
0\le x_{e}\le 1\text{ for }e\in E.
$$
In the corresponding Integer Program $x_e$ is a binary variable s.t.
$$
x_e =
\begin{cases}
1 \text{ if } e \text{ is in the perfect matching}\\
0 \text{ otherwise}
\end{cases}
$$
I am unable to come up with an example where given $G$ has a perfect matching, the LP does not have an all integral optimal solution $x_e^{\text{*}}$. I tried taking small examples with odd length cycles, though there was a fractional optimal for the LP, there was an all integral optimal as well (in the examples I took).
I am aware that for bipartite graphs if the LP is feasible, then there always exists an all integral optimal for the LP. Does it hold for general graphs as well? If not I would appreciate hints so I can come up with such an example.
Answer: Consider the graph consisting of two triangles $123,456$ connected by an edge $34$. The graph has a unique perfect matching $12,34,56$. Give the edges of the triangles weight $1$, and the edge $34$ weight $\epsilon$. The maximum weight perfect matching has weight $2+\epsilon$. Conversely, the fractional matching giving a weight of $1/2$ to each edge of each triangle has weight $3$. | {
"domain": "cs.stackexchange",
"id": 17944,
"tags": "graphs, linear-programming, integer-programming, matching"
} |
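The triangle example above is small enough to verify by brute force. The Python sketch below (with $\epsilon = 0.1$) enumerates all integral perfect matchings and then evaluates the fractional solution described in the answer:

```python
from itertools import combinations

# Two triangles 1-2-3 and 4-5-6 joined by the edge 3-4.
eps = 0.1
edges = [(1, 2), (1, 3), (2, 3), (4, 5), (4, 6), (5, 6), (3, 4)]
weight = {e: 1.0 for e in edges}
weight[(3, 4)] = eps
vertices = list(range(1, 7))

# Enumerate every subset of edges that forms a perfect matching
# (each vertex covered exactly once).
matchings = []
for k in range(1, len(edges) + 1):
    for subset in combinations(edges, k):
        covered = sorted(v for e in subset for v in e)
        if covered == vertices:
            matchings.append(subset)

assert len(matchings) == 1  # the unique perfect matching: 12, 34, 56
best_integral = sum(weight[e] for e in matchings[0])
print(best_integral)  # 2 + eps = 2.1

# The fractional LP solution: 1/2 on every triangle edge, 0 on edge 3-4.
frac = {e: 0.5 for e in edges}
frac[(3, 4)] = 0.0
for v in vertices:  # feasibility: every vertex has total incident value 1
    assert abs(sum(x for e, x in frac.items() if v in e) - 1.0) < 1e-9
frac_weight = sum(weight[e] * x for e, x in frac.items())
print(frac_weight)  # 3.0, strictly better than any integral solution
```

This confirms the gap: the LP optimum (at least 3) exceeds the best integral perfect matching ($2+\epsilon$), so the LP has no all-integral optimal solution here.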
Which is the second resistor in a voltage divider? | Question: The formula for a voltage divider is:
Voltage out = ( resistance two / ( resistance one + resistance two ) ) * voltage in
or Vo = ( R2 / ( R1 + R2 ) ) * Vi
How do you identify the R1 resistor? Is it the one which conventional current 'goes through' first?
Answer: The voltage divider formula is used where there are two resistors in series and you want to know the voltage across one of the two.
The voltage drop of the resistor number 1 is
$$
V_1 = V_i \frac{R_1}{R_1 + R_2}
$$
The voltage drop of the resistor number 2 is:
$$
V_2 = V_i \frac{R_2}{R_1 + R_2}
$$ | {
"domain": "physics.stackexchange",
"id": 37252,
"tags": "electric-circuits, electrical-resistance, voltage"
} |
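A quick numeric sanity check of the two divider formulas above, as a Python sketch with made-up values ($V_i = 9$, $R_1 = 1\,\mathrm{k\Omega}$, $R_2 = 2\,\mathrm{k\Omega}$):

```python
def divider_drops(v_in, r1, r2):
    """Voltage drops across R1 and R2 for two resistors in series."""
    v1 = v_in * r1 / (r1 + r2)
    v2 = v_in * r2 / (r1 + r2)
    return v1, v2

v1, v2 = divider_drops(v_in=9.0, r1=1000.0, r2=2000.0)
print(v1, v2)  # 3.0 6.0
assert abs((v1 + v2) - 9.0) < 1e-12  # the two drops sum to the input voltage
```

Whichever resistor you call R2 is simply the one whose voltage drop you are computing; swapping the labels just swaps which drop the formula returns.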
A recursive_transform template function for the binary operation cases in C++ | Question: This is a follow-up question for A recursive_transform Template Function with Unwrap Level for Various Type Arbitrary Nested Iterable Implementation in C++ and A recursive_print Function For Various Type Arbitrary Nested Iterable Implementation in C++. Besides the usage with single Ranges input like recursive_transform<1>(Ranges, Lambda), I am attempting to extend recursive_transform function to deal with the binary operation cases recursive_transform<>(Ranges1, Ranges2, Lambda) which Lambda here could take two inputs. For example:
std::vector<int> a{ 1, 2, 3 }, b{ 4, 5, 6 };
auto result1 = recursive_transform<1>(a, b, [](int element1, int element2) { return element1 + element2; });
for (auto&& element : result1)
{
std::cout << element << std::endl;
}
This code can be compiled and the output:
5
7
9
The experimental implementation
recursive_invoke_result2 struct implementation: in order to determine the type of output, recursive_invoke_result2 struct is needed.
template<typename, typename, typename>
struct recursive_invoke_result2 { };
template<typename T1, typename T2, std::invocable<T1, T2> F>
struct recursive_invoke_result2<F, T1, T2> { using type = std::invoke_result_t<F, T1, T2>; };
// Ref: https://stackoverflow.com/a/66821371/6667035
template<typename F, class...Ts1, class...Ts2, template<class...>class Container1, template<class...>class Container2>
requires (
!std::invocable<F, Container1<Ts1...>, Container2<Ts2...>>&&
std::ranges::input_range<Container1<Ts1...>>&&
std::ranges::input_range<Container2<Ts2...>>&&
requires { typename recursive_invoke_result2<F, std::ranges::range_value_t<Container1<Ts1...>>, std::ranges::range_value_t<Container2<Ts2...>>>::type; })
struct recursive_invoke_result2<F, Container1<Ts1...>, Container2<Ts2...>>
{
using type = Container1<typename recursive_invoke_result2<F, std::ranges::range_value_t<Container1<Ts1...>>, std::ranges::range_value_t<Container2<Ts2...>>>::type>;
};
template<typename F, typename T1, typename T2>
using recursive_invoke_result_t2 = typename recursive_invoke_result2<F, T1, T2>::type;
recursive_transform for the binary operation cases implementation:
// recursive_transform for the binary operation cases (the version with unwrap_level)
template<std::size_t unwrap_level = 1, class T1, class T2, class F>
constexpr auto recursive_transform(const T1& input1, const T2& input2, const F& f)
{
if constexpr (unwrap_level > 0)
{
recursive_invoke_result_t2<F, T1, T2> output{};
std::transform(
std::ranges::cbegin(input1),
std::ranges::cend(input1),
std::ranges::cbegin(input2),
std::inserter(output, std::ranges::end(output)),
[&f](auto&& element1, auto&& element2) { return recursive_transform<unwrap_level - 1>(element1, element2, f); }
);
return output;
}
else
{
return f(input1, input2);
}
}
The full testing code
// A recursive_transform template function for the binary operation cases in C++
#include <algorithm>
#include <array>
#include <cassert>
#include <chrono>
#include <complex>
#include <concepts>
#include <deque>
#include <execution>
#include <exception>
#include <functional>
#include <iostream>
#include <iterator>
#include <list>
#include <map>
#include <mutex>
#include <numeric>
#include <optional>
#include <ranges>
#include <stdexcept>
#include <string>
#include <tuple>
#include <type_traits>
#include <utility>
#include <variant>
#include <vector>
// recursive_print implementation
template<std::ranges::input_range Range>
constexpr auto recursive_print(const Range& input, const int level = 0)
{
auto output = input;
std::cout << std::string(level, ' ') << "Level " << level << ":" << std::endl;
std::ranges::transform(std::ranges::cbegin(input), std::ranges::cend(input), std::ranges::begin(output),
[level](auto&& x)
{
std::cout << std::string(level, ' ') << x << std::endl;
return x;
}
);
return output;
}
template<std::ranges::input_range Range> requires (std::ranges::input_range<std::ranges::range_value_t<Range>>)
constexpr auto recursive_print(const Range& input, const int level = 0)
{
auto output = input;
std::cout << std::string(level, ' ') << "Level " << level << ":" << std::endl;
std::ranges::transform(std::ranges::cbegin(input), std::ranges::cend(input), std::ranges::begin(output),
[level](auto&& element)
{
return recursive_print(element, level + 1);
}
);
return output;
}
// recursive_invoke_result_t implementation
template<typename, typename>
struct recursive_invoke_result { };
template<typename T, std::invocable<T> F>
struct recursive_invoke_result<F, T> { using type = std::invoke_result_t<F, T>; };
template<typename, typename, typename>
struct recursive_invoke_result2 { };
template<typename T1, typename T2, std::invocable<T1, T2> F>
struct recursive_invoke_result2<F, T1, T2> { using type = std::invoke_result_t<F, T1, T2>; };
template<typename F, template<typename...> typename Container, typename... Ts>
requires (
!std::invocable<F, Container<Ts...>>&&
std::ranges::input_range<Container<Ts...>>&&
requires { typename recursive_invoke_result<F, std::ranges::range_value_t<Container<Ts...>>>::type; })
struct recursive_invoke_result<F, Container<Ts...>>
{
using type = Container<typename recursive_invoke_result<F, std::ranges::range_value_t<Container<Ts...>>>::type>;
};
// Ref: https://stackoverflow.com/a/66821371/6667035
template<typename F, class...Ts1, class...Ts2, template<class...>class Container1, template<class...>class Container2>
requires (
!std::invocable<F, Container1<Ts1...>, Container2<Ts2...>>&&
std::ranges::input_range<Container1<Ts1...>>&&
std::ranges::input_range<Container2<Ts2...>>&&
requires { typename recursive_invoke_result2<F, std::ranges::range_value_t<Container1<Ts1...>>, std::ranges::range_value_t<Container2<Ts2...>>>::type; })
struct recursive_invoke_result2<F, Container1<Ts1...>, Container2<Ts2...>>
{
using type = Container1<typename recursive_invoke_result2<F, std::ranges::range_value_t<Container1<Ts1...>>, std::ranges::range_value_t<Container2<Ts2...>>>::type>;
};
template<typename F, typename T>
using recursive_invoke_result_t = typename recursive_invoke_result<F, T>::type;
template<typename F, typename T1, typename T2>
using recursive_invoke_result_t2 = typename recursive_invoke_result2<F, T1, T2>::type;
// recursive_transform implementation (the version with unwrap_level)
template<std::size_t unwrap_level = 1, class T, class F>
constexpr auto recursive_transform(const T& input, const F& f)
{
if constexpr (unwrap_level > 0)
{
recursive_invoke_result_t<F, T> output{};
std::ranges::transform(
std::ranges::cbegin(input),
std::ranges::cend(input),
std::inserter(output, std::ranges::end(output)),
[&f](auto&& element) { return recursive_transform<unwrap_level - 1>(element, f); }
);
return output;
}
else
{
return f(input);
}
}
// recursive_transform implementation (the version with unwrap_level, with execution policy)
template<std::size_t unwrap_level = 1, class ExPo, class T, class F>
requires (std::is_execution_policy_v<std::remove_cvref_t<ExPo>>)
constexpr auto recursive_transform(ExPo execution_policy, const T& input, const F& f)
{
if constexpr (unwrap_level > 0)
{
recursive_invoke_result_t<F, T> output{};
std::mutex mutex;
// Reference: https://en.cppreference.com/w/cpp/algorithm/for_each
std::for_each(execution_policy, input.cbegin(), input.cend(),
[&](auto&& element)
{
auto result = recursive_transform<unwrap_level - 1>(execution_policy, element, f);
std::lock_guard lock(mutex);
output.emplace_back(std::move(result));
}
);
return output;
}
else
{
return f(input);
}
}
// recursive_transform for the binary operation cases (the version with unwrap_level)
template<std::size_t unwrap_level = 1, class T1, class T2, class F>
constexpr auto recursive_transform(const T1& input1, const T2& input2, const F& f)
{
if constexpr (unwrap_level > 0)
{
recursive_invoke_result_t2<F, T1, T2> output{};
std::transform(
std::ranges::cbegin(input1),
std::ranges::cend(input1),
std::ranges::cbegin(input2),
std::inserter(output, std::ranges::end(output)),
[&f](auto&& element1, auto&& element2) { return recursive_transform<unwrap_level - 1>(element1, element2, f); }
);
return output;
}
else
{
return f(input1, input2);
}
}
void binary_test_cases();
int main()
{
binary_test_cases();
return 0;
}
void binary_test_cases()
{
// std::vector<int>
std::vector<int> a{ 1, 2, 3 }, b{ 4, 5, 6 };
auto result1 = recursive_transform<1>(a, b, [](int element1, int element2) { return element1 + element2; });
for (auto&& element : result1)
{
std::cout << element << std::endl;
}
// std::vector<std::vector<int>>
std::vector<decltype(a)> c{ a, a, a }, d{ b, b, b };
auto result2 = recursive_transform<2>(c, d, [](int element1, int element2) { return element1 + element2; });
recursive_print(result2);
// std::deque<int>
std::deque<int> test_deque;
test_deque.push_back(1);
test_deque.push_back(1);
test_deque.push_back(1);
auto result3 = recursive_transform<1>(test_deque, test_deque, [](int element1, int element2) { return element1 + element2; });
for (auto&& element : result3)
{
std::cout << element << std::endl;
}
// std::deque<std::deque<int>>
std::deque<decltype(test_deque)> test_deque2;
test_deque2.push_back(test_deque);
test_deque2.push_back(test_deque);
test_deque2.push_back(test_deque);
auto result4 = recursive_transform<2>(test_deque2, test_deque2, [](int element1, int element2) { return element1 + element2; });
recursive_print(result4);
// std::list<int>
std::list<int> test_list = { 1, 2, 3, 4 };
auto result5 = recursive_transform<1>(test_list, test_list, [](int element1, int element2) { return element1 + element2; });
for (auto&& element : result5)
{
std::cout << element << std::endl;
}
// std::list<std::list<int>>
std::list<std::list<int>> test_list2 = { test_list, test_list, test_list, test_list };
auto result6 = recursive_transform<2>(test_list2, test_list2, [](int element1, int element2) { return element1 + element2; });
recursive_print(result6);
return;
}
The output of the above tests:
5
7
9
Level 0:
Level 1:
5
7
9
Level 1:
5
7
9
Level 1:
5
7
9
2
2
2
Level 0:
Level 1:
2
2
2
Level 1:
2
2
2
Level 1:
2
2
2
2
4
6
8
Level 0:
Level 1:
2
4
6
8
Level 1:
2
4
6
8
Level 1:
2
4
6
8
Level 1:
2
4
6
8
A Godbolt link is here.
All suggestions are welcome.
The summary information:
Which question it is a follow-up to?
A recursive_transform Template Function with Unwrap Level for Various Type Arbitrary Nested Iterable Implementation in C++ and
A recursive_print Function For Various Type Arbitrary Nested Iterable Implementation in C++
What changes has been made in the code since last question?
I am attempting to extend recursive_transform function to deal with the binary operation cases in this post.
Why a new review is being asked for?
If there is any possible improvement, please let me know.
Answer: The only remark I have is that you have a lot of code duplication between the version that takes a unary operator and the one that takes a binary operator. I'm sure the implementation of std::transform() has a lot of duplication as well. However, with C++20 it should be easy to make a version that takes an operation with any number of parameters, not just one or two. The only issue is that the order of parameters to your function should be such that all the containers come last. To illustrate, here is a version of std::transform that works with any \$n\$-ary function:
template<typename OutputIt, typename NAryOperation, typename InputIt, typename... InputIts>
OutputIt transform(OutputIt d_first, NAryOperation op, InputIt first, InputIt last, InputIts... rest) {
while (first != last) {
*d_first++ = op(*first++, (*rest++)...);
}
return d_first;
}
We can combine support for \$n\$-ary functions with support for recursion. For example:
template<typename F, typename... Ts>
struct recursive_invoke_result {
using type = std::invoke_result_t<F, Ts...>;
};
template<typename F, typename... Ts>
requires (std::ranges::input_range<Ts> && ...)
struct recursive_invoke_result<F, Ts...> {
using type = recursive_invoke_result<F, std::ranges::range_value_t<Ts>...>::type;
};
So all this together should allow you to create a recursive_transform() that accepts a predicate that takes an arbitrary number of parameters. | {
"domain": "codereview.stackexchange",
"id": 41740,
"tags": "c++, recursion, template, lambda, c++20"
} |
Interpretation of Einstein field equations | Question: I have a beginners question concerning the Einstein field equations.
Although I've read some basic texts about G.R. and understand some of the formula's and derivations I still have the feeling there's something I don't properly understand. Hope my question is not too vague.
The Einstein field equations state: $G_{\mu \nu } =\kappa T_{\mu \nu }$.
I am told that these equations can be justified on the basis of two fundamental properties:
They are coordinate independent, meaning that these are tensors therefore they can be defined without reference to a specific coordinate system. For example 'Gravitation' by Misner, Thorne and Wheeler devotes a paragraph on the importance of 'coordinate independent thinking'.
The beauty of above equations is that they generate a curvature of spacetime on the left side in which the stress energy tensor on the right is automatically conserved: ${T^{\alpha\nu}}_{;\nu}=0$.
Now how does this compare to classical mechanics?
Newtons laws imply that we are describing our system with respect to an inertial frame of reference.
Hamiltonial mechanics gives us a way to use more general coordinates. But still the assumption is that spacetime is flat and we always know how to transform back to our 'normal' inertial coordinates.
A typical approach in classical mechanics would be to specify some distribution of mass at $t=0$ as a boundary condition, then use Newton or Euler-Lagrange to calculate the evolution of this distribution in time thereby arriving at a (conserved) value for the stress-energy tensor at every time and at every point in space.
In general relativity the equation $G_{\mu \nu } =\kappa T_{\mu \nu }$ somehow suggests to me that our 'boundary condition' now requires us to specify some $T_{\mu \nu }$ for all of spacetime at once (meaning at every point and at every time). With that we can now calculate the curvature of our spacetime, which in turn guarantees us that stress-energy is automatically conserved.
I know I'm probably wrong, but it seems to me that in this way we can specify just any stress energy tensor on the right (since it's automatically conserved anyway). After solving the field equations we are done, only to conclude that (since we've calculated that spacetime must now be curved in some way) we don't exactly know in what coordinates we are working anymore.
Also having to specify $T_{\mu \nu }$ over all of spacetime suggests to me that we have to know something about the time evolution of the system before we can even begin to calculate the metric (which in turn should then be able to tell us about the time evolution of the system??).
Is it perhaps that we have to start with postulating some form of $T_{\mu \nu }$ with certain properties, and then solve for $g_{\alpha\beta}$ and work our way back to interpret how we would transform our spacetime coordinates to arrive at acceptable coordinates (local Lorentzian coordinates perhaps)? So that we have now rediscovered our preferred local Lorentzian coordinates after which we can look at our $T_{\mu \nu }$ again to draw some conclusions about it's behaviour?
So my question is:
Is there a straightforward way to explain in what way the Einstein field equations can have the same predictive value as in the classical mechanical case where we impose a boundary condition in a coordinate system we understand and then solve to find the time evolution of that system in that same coordinate system?
I know the question is very vague, I wouldn't ask if I wasn't confused. But I hope that maybe someone recognizes their own initial struggle with the subject and point me in the right direction.
Answer: There are no Lorentzian coordinates in General Relativity. The properties of space-time, such as geodesic lines, conservation laws and singularities, can be described in any suitable coordinate system.
Usually Einstein's equations are solved for a specific type of matter, for example electromagnetic field, for which:
$$
T_{\mu\nu}=\frac{1}{4\pi}\left(F_{\mu\alpha}F_{\nu}{}^{\alpha}-\frac{1}{4}g_{\mu\nu}F_{\alpha\beta}F^{\alpha\beta}\right)
$$
Then the equation $T^{\mu\nu}{}_{;\nu}=0$ leads to Maxwell's equations: $F^{\mu\nu}{}_{;\nu}=0$. Einstein's equations are solved for both metric $g_{\mu\nu}$ and electromagnetic field tensor $F_{\mu\nu}$. Both $g_{\mu\nu}$ and $F_{\mu\nu}$ must be specified on a spacelike hypersurface (such as $t=0$ in Minkowski space; these are initial conditions) and at the spatial infinity (boundary conditions); Einstein equations allow to find the evolution of the system in the future.
Also, as a side remark, equation $T^{\mu\nu}{}_{;\nu}=0$ isn't a conservation law in the common meaning. If a space-time has a symmetry with respect to transformation $x'^{\mu}=x^\mu+\xi^\mu$, then vector $\xi^\mu$ satisfies the Killing equation:
$$
\xi_{\mu;\nu}+\xi_{\nu;\mu}=0
$$
Then, since $(T^{\mu\nu}\xi_{\nu})_{;\mu}=T^{\mu\nu}{}_{;\mu}\,\xi_{\nu}+\tfrac{1}{2}T^{\mu\nu}(\xi_{\nu;\mu}+\xi_{\mu;\nu})$, where the first term vanishes by $T^{\mu\nu}{}_{;\nu}=0$ and the second by the Killing equation (using the symmetry of $T^{\mu\nu}$), we get $(T^{\mu\nu}\xi_{\nu})_{;\mu}=0$, and
$$
J=\int T^{0\nu}\xi_{\nu}\sqrt{-g}d^3x
$$
will be a conserved quantity. If there are no Killing vectors, then there are no conservation laws. | {
"domain": "physics.stackexchange",
"id": 91800,
"tags": "general-relativity, stress-energy-momentum-tensor, time-evolution"
} |
Diffusion velocity | Question: I understand that diffusion is the movement of particles from high concentration areas to low concentration, but what is the cause of that movement atomically? And especially in the case of charge carriers in semi-conductors...
is it related to the charge of the atoms or particles? For example a higher concentration of electrons in a certain area creates a difference in charge relative to its adjacent (we could say "positive") area. Is that what causes the carrier "electrons" in this case to move towards the low concentration area?
Answer: Nothing "causes" diffusion, it is a statistical process. The atoms in any system in thermal equilibrium are constantly moving with velocities of order $\langle v^2\rangle \simeq T/m$. This motion is randomized by collisions on some microscopic time scale $\tau$. The simplest case is a dilute gas, where $\tau\sim 1/(vn\sigma)$, where $n$ is the density of the gas, and $\sigma$ is the collision cross section. Similar formulas hold in all sorts of systems, electrons in metals, phonons in solids, etc.
Now consider a box with high concentration on the left, low concentration on the right. On the left, atoms have a 50-50 chance to move right, and on the right atoms have a 50-50 chance to move left. Since there are more atoms on the left, the net current moves to the right. | {
"domain": "physics.stackexchange",
"id": 25196,
"tags": "electric-current, charge, atoms, diffusion, electrical-engineering"
} |
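The box argument above can be illustrated with a tiny unbiased random-walk simulation, sketched below in Python (all parameters are arbitrary). Each particle flips a fair coin at every step, yet particles still spread from the crowded half into the empty half:

```python
import random

random.seed(0)

BOX = 100.0
# Start with 1000 particles, all in the left half of the box.
positions = [random.uniform(0.0, BOX / 2) for _ in range(1000)]

def step(positions, dx=1.0):
    """Move each particle +-dx with equal probability (reflecting walls)."""
    out = []
    for x in positions:
        x += dx if random.random() < 0.5 else -dx
        out.append(min(max(x, 0.0), BOX))
    return out

for _ in range(2000):
    positions = step(positions)

right = sum(1 for x in positions if x >= BOX / 2)
print(right)  # a sizable fraction has diffused into the initially empty half
```

No force biases any single particle to the right; the net rightward current arises purely because more particles start on the left.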