| anchor | positive | source |
|---|---|---|
Nextflow: how to create many-to-many tuple for process | Question: Using Nextflow, I need to submit x * y jobs where x is the number of input .bam files and y is the number of genome intervals (e.g., <chrom>:<start>-<end>). i.e., for every .bam file, submit a single job for every defined interval.
I have tried a few variations with no luck (see below).
Ultimate question: For each .bam file, how do I submit a single job for every defined genome interval using Nextflow?
Attempts
Attempt 1 (submit both as individual Channels):
Only submits one .bam for all intervals.
sample_bams = Channel.fromPath(sample_input_path + "*.bam")
intervals = Channel.from(['1:10000-20000', '5:55555-77777'])
PROC(sample_bams, intervals)
Attempt 2 (submit as tuple of mapped Channels):
Submits only 10 jobs and fails because the interval Channel is passed as a DataflowBroadcast around DataflowStream[?] object. The PROC was changed to receive a tuple rather than individual arguments.
sample_interval_tuples = Channel.fromPath(sample_input_path + "*.bam")
.map { sample_file -> tuple(sample_file, align_to_ref, DRF_jar, Channel.from(['1:10000-20000', '5:55555-77777'])) }
Attempt 3 (submit as tuple of mapped .bams to non-Channel of intervals):
Submits 10 jobs and fails because the intervals are passed as a single list. The PROC was changed to receive a tuple rather than individual arguments.
sample_interval_tuples = Channel.fromPath(sample_input_path + "*.bam")
.map { sample_file -> tuple(sample_file, align_to_ref, DRF_jar, ['1:10000-20000', '5:55555-77777']) }
Really appreciate your help!
related query: for DSL v2, we wouldn't include from combined_inputs in the input, yeah?
Answer: The combine operator can be used to produce the Cartesian product:
sample_bams = Channel.fromPath( './path/to/bams/*.bam' )
intervals = Channel.of( '1:10000-20000', '5:55555-77777' )
sample_bams
.combine(intervals)
.set { combined_inputs }
process test {
echo true
input:
tuple path(bam), val(interval) from combined_inputs
"""
echo -n "${interval} ${bam}"
"""
}
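For completeness, a DSL2 sketch of the same pipeline (untested; in DSL2 the channel is passed to the process at the call site rather than bound with from):

```nextflow
process test {

    input:
    tuple path(bam), val(interval)

    """
    echo -n "${interval} ${bam}"
    """
}

workflow {
    sample_bams = Channel.fromPath( './path/to/bams/*.bam' )
    intervals = Channel.of( '1:10000-20000', '5:55555-77777' )

    test( sample_bams.combine(intervals) )
}
```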
Regarding the related query: Correct. The from and into 'bind' declarations can be omitted | {
"domain": "bioinformatics.stackexchange",
"id": 2073,
"tags": "nextflow"
} |
How to interpret Einstein Notation across equals sign? | Question: My apologies if this has been asked before. I wasn't able to find an explanation that made sense to me on Google.
Suppose we have the equation below, which is written in Einstein notation:
$$y_{j} = x_iz_ix_j$$
and say that both $j$ and $i$ take on values of $1$ and $2$. Is this notation to be interpreted as
$$y_1 = (x_1z_1+x_2z_2)x_1$$ $$y_2 = (x_1z_1+x_2z_2)x_2$$
or as $$y_1+y_2 = (x_1z_1+x_2z_2)(x_1+x_2)?$$
In other words, does the index across the equals sign imply multiple equations, or adding up terms on each side of a single equation?
Context: I am an undergrad math major studying fluid flow. The conservation of momentum equation was written in Einstein Notation and I am having trouble understanding the meaning.
Answer: The correct interpretation is the first one:
$$ y_j=x_iz_ix_j$$
means that to obtain the $j$-th component of $y$ you have to sum over the repeated index, i.e.
$$ y_j=\sum_{i=1}^2 x_iz_ix_j$$ | {
"domain": "physics.stackexchange",
"id": 38089,
"tags": "tensor-calculus, notation"
} |
Which algorithm does Doc2Vec use? | Question: Like Word2vec is not a single algorithm but a combination of two, namely the CBOW and Skip-Gram models; is Doc2Vec also a combination of such algorithms, or is it an algorithm in itself?
Answer: Word2Vec is not a combination of two models; rather, CBOW and Skip-Gram are both variants of word2vec. Similarly, doc2vec has a Distributed Memory (DM) model and a Distributed Bag of Words (DBOW) model. The variants arise from how the context words and the target word are used.
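In gensim (tagged in this question), the variant is selected with the dm constructor flag; a minimal sketch, assuming gensim is installed (the toy corpus is mine):

```python
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

docs = [TaggedDocument(words=["some", "example", "tokens"], tags=[0]),
        TaggedDocument(words=["a", "few", "more", "tokens"], tags=[1])]

# dm=1 -> Distributed Memory (PV-DM), the default
dm_model = Doc2Vec(docs, dm=1, vector_size=50, min_count=1, epochs=5)

# dm=0 -> Distributed Bag of Words (PV-DBOW)
dbow_model = Doc2Vec(docs, dm=0, vector_size=50, min_count=1, epochs=5)
```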
Note: the names of the models may be confusing
Distributed Bag of Words is similar to the Skip-gram model
Distributed Memory is similar to the Continuous Bag of Words model | {
"domain": "datascience.stackexchange",
"id": 2803,
"tags": "python, nlp, word2vec, gensim, similar-documents"
} |
ROS2 QoS callback function | Question:
Hi everyone,
First of all, I just learned about ROS2, so I am not so familiar with it. Also, I apologize for my bad English.
I am now working on how to implement QoS in my subscription. I have a little experience with eProsima FastRTPS and I did manage to run a publisher and subscriber with QoS policies, so I expected ROS2 to work much the same way. Here I want a deadline policy with a callback function. However, I was lost while trying to set up the callback: in FastRTPS there is a publisher/subscriber listener that handles all the event callbacks, but in ROS2 I do not know where or how to configure the callback function. I had a look at the SubscriptionEventCallbacks class but could not get it to work (maybe I set it up wrongly), and searching the internet has not given me any clue yet.
Below is a part of my code trying to configure QoS (I am also not expert on C++ and I am learning it, so feel free to put any comment):
Listener(): Node("Listener_node") // Constructor
{
rclcpp::QoS qos_profile(10);
std::chrono::milliseconds deadline_time(1000);
qos_profile.deadline(deadline_time);
Subscription = this->create_subscription<std_msgs::msg::String>("topic", qos_profile, std::bind(&Listener::callback, this, std::placeholders::_1));
}
Any recommendation, comment or hint is welcomed!
Thanks
Originally posted by Sapodilla on ROS Answers with karma: 48 on 2020-02-25
Post score: 1
Original comments
Comment by stevemacenski on 2020-02-25:
Can you concisely state what your issue is? Your code sample below looks very reasonable.
Comment by Sapodilla on 2020-02-26:
Hi Steven, sorry for my late response. I found the solution to my issue a few seconds before typing this comment :D. To make it a bit clearer: I want a callback function to run whenever a deadline is missed. After spending some time going through the ROS2 doxygen, it finally works for me :). I will post the answer for those who come across this problem like me.
Answer:
To whoever struggles with the same problem as me: to have a callback function for QoS events, specify the callback function in rclcpp::SubscriptionOptions.
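As an illustration, the wiring might look like this (a sketch based on the questioner's constructor; not compiled here, and the warning message is my own):

```cpp
// Inside the node's constructor (e.g., the Listener node above).
rclcpp::QoS qos_profile(10);
qos_profile.deadline(std::chrono::milliseconds(1000));

rclcpp::SubscriptionOptions options;
// The deadline-missed event callback is attached via SubscriptionOptions.
options.event_callbacks.deadline_callback =
    [this](rclcpp::QOSDeadlineRequestedInfo & event) {
        RCLCPP_WARN(this->get_logger(),
                    "Requested deadline missed (total: %d)", event.total_count);
    };

// Note the argument order: topic, QoS, message callback, then options.
Subscription = this->create_subscription<std_msgs::msg::String>(
    "topic", qos_profile,
    std::bind(&Listener::callback, this, std::placeholders::_1),
    options);
```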
Good luck!
Originally posted by Sapodilla with karma: 48 on 2020-02-26
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by Gunika on 2020-12-18:
Hello Can you let me know how did you use rclcpp::SubscriptionOptions in the code?
Comment by Gunika on 2020-12-19:
Can you please let me know how did you use the callback in rclcpp::SubscriptionOptions?
This is my code for the subscriber:
rclcpp::QoS qos_profile(10);
std::chrono::milliseconds deadline_time(1000);
qos_profile.deadline(deadline_time);
rclcpp::SubscriptionOptions subscription_options;
subscription_options.event_callbacks.deadline_callback =
[](rclcpp::QOSDeadlineRequestedInfo & event) -> void
{
std::cout << "Connection Lost";
};
subscription_ = this->create_subscription<std_msgs::msg::String>(
"topic", 10,subscription_options,qos_profile,std::bind(&MinimalSubscriber::topic_callback, this, _1));
}
Publisher:
{
rclcpp::QoS qos_profile(10);
std::chrono::milliseconds deadline_time(1000);
qos_profile.deadline(deadline_time);
publisher_ = this->create_publisher<std_msgs::msg::String>("topic", 10,qos_profile);
timer_ = this->create_wall_timer(
500ms, std::bind(&MinimalPublisher::t | {
"domain": "robotics.stackexchange",
"id": 34493,
"tags": "ros, ros2, c++, callback"
} |
Using simple HTML template to create HTML formatting rules | Question: One of my tools that renders HTML needs some rules about document formatting. The renderer can format the output so that it is indented and contains appropriate line-breaks. In the first version I used a hardcoded dictionary that looks like this:
public class HtmlFormatting : MarkupFormatting
{
public const int DefaultIndentWidth = 4;
public HtmlFormatting() : this(DefaultIndentWidth)
{
this["body"] = MarkupFormattingOptions.PlaceClosingTagOnNewLine;
this["br"] = MarkupFormattingOptions.IsVoid;
//this["span"] = MarkupFormattingOptions.None;
this["p"] = MarkupFormattingOptions.PlaceOpeningTagOnNewLine;
this["pre"] = MarkupFormattingOptions.PlaceOpeningTagOnNewLine;
this["h1"] = MarkupFormattingOptions.PlaceOpeningTagOnNewLine;
this["h2"] = MarkupFormattingOptions.PlaceOpeningTagOnNewLine;
this["h3"] = MarkupFormattingOptions.PlaceOpeningTagOnNewLine;
this["h4"] = MarkupFormattingOptions.PlaceOpeningTagOnNewLine;
this["h5"] = MarkupFormattingOptions.PlaceOpeningTagOnNewLine;
this["h6"] = MarkupFormattingOptions.PlaceOpeningTagOnNewLine;
this["ul"] = MarkupFormattingOptions.PlaceBothTagsOnNewLine;
this["ol"] = MarkupFormattingOptions.PlaceBothTagsOnNewLine;
this["li"] = MarkupFormattingOptions.PlaceOpeningTagOnNewLine;
this["table"] = MarkupFormattingOptions.PlaceClosingTagOnNewLine;
this["caption"] = MarkupFormattingOptions.PlaceOpeningTagOnNewLine;
this["thead"] = MarkupFormattingOptions.PlaceBothTagsOnNewLine;
this["tbody"] = MarkupFormattingOptions.PlaceBothTagsOnNewLine;
this["tfoot"] = MarkupFormattingOptions.PlaceBothTagsOnNewLine;
this["tr"] = MarkupFormattingOptions.PlaceBothTagsOnNewLine;
this["th"] = MarkupFormattingOptions.PlaceOpeningTagOnNewLine;
this["td"] = MarkupFormattingOptions.PlaceOpeningTagOnNewLine;
}
public HtmlFormatting(int indentWidth)
{
IndentWidth = indentWidth;
}
}
As with everything hardcoded, it's not very maintenance-friendly and doesn't allow me to change or add new formattings without recompiling the application.
In order to fix this I thought: why not derive the formatting from real HTML? That way I can already see the output, so everything starts with a template. This is how I expect the generated HTML to look:
var template = @"
<body>
<h1></h1>
<h2></h2>
<p><br><span></span></p>
<div> </div>
<hr>
<ol>
</ol>
<ul>
<li></li>
</ul>
<table>
<thead>
</thead>
<tbody>
<tr>
<th></th>
<td></td>
</tr>
</tbody>
<tfoot>
</tfoot>
</table>
</body>";
With a few patterns, groupings and conditions I then determine the formatting for each element. Because I'm not interested in parsing the HTML but only in finding the number of tags and their rows and columns, I used regex. A template would never be anything other than the example above. For the sake of this question, let's assume the HTML is always valid.
What the expression does is basically split the template on line breaks and calculate the row and column numbers for each tag. Then, based on that, I can tell:
whether an element is a void element: it occurs only once in the template
whether its opening tag should be placed on a new line: it doesn't have any predecessors on its line (based on the column number)
whether its closing tag should be placed on a new line: its opening and closing tags have different row numbers (or there are simply two different row numbers)
static class MarkupFormattingTemplate
{
public static IDictionary<string, MarkupFormattingOptions> Parse(string template)
{
var tags =
template
.ToLines()
.Parse()
.ToList();
var openingTagOptions = tags.DetermineOpeningTagOptions();
var closingTagOptions = tags.DetermineClosingTagOptions();
return Merge(openingTagOptions, closingTagOptions);
}
private static IEnumerable<string> ToLines(this string template)
{
return
Regex
.Split(template, @"(\r\n|\r|\n)")
// Remove empty lines.
.Where(line => !string.IsNullOrEmpty(line.Trim()));
}
private static IEnumerable<Tag> Parse(this IEnumerable<string> lines)
{
return
lines
.Select((line, lineNumber) =>
ParseLine(line)
// Select tag properties for grouping.
.Select(m => new Tag
{
Name = m.Groups["name"].Value,
Line = lineNumber,
Column = m.Groups["name"].Index
}))
.SelectMany(x => x);
IEnumerable<Match> ParseLine(string line)
{
return
Regex
// Find tag names.
.Matches(line, @"</?(?<name>[a-z0-9]+)>", RegexOptions.ExplicitCapture)
.Cast<Match>();
}
}
private static IEnumerable<KeyValuePair<string, MarkupFormattingOptions>> DetermineClosingTagOptions(this IEnumerable<Tag> tags)
{
// Group elements by name to first find out where to place the closing tag.
foreach (var g in tags.GroupBy(t => t.Name))
{
var closingTagOptions =
// If any tag has more than one row then the closing tag should be placed on a new line.
(g.Select(i => i.Line).Distinct().Count() > 1 ? MarkupFormattingOptions.PlaceClosingTagOnNewLine : MarkupFormattingOptions.None) |
// If any tag occurs only once then it's void.
(g.Count() == 1 ? MarkupFormattingOptions.IsVoid : MarkupFormattingOptions.None);
yield return new KeyValuePair<string, MarkupFormattingOptions>(g.Key, closingTagOptions);
};
}
private static IEnumerable<KeyValuePair<string, MarkupFormattingOptions>> DetermineOpeningTagOptions(this IEnumerable<Tag> tags)
{
foreach (var tagName in tags.Select(t => t.Name).Distinct(StringComparer.OrdinalIgnoreCase))
{
var openingTagOptions =
tags
.GroupBy(t => t.Line)
.Where(g => g.Any(x => x.Name == tagName))
.First()
.Select((item, index) => new { item, index })
.First(x => x.item.Name == tagName).index == 0
? MarkupFormattingOptions.PlaceOpeningTagOnNewLine
: MarkupFormattingOptions.None;
yield return new KeyValuePair<string, MarkupFormattingOptions>(tagName, openingTagOptions);
}
}
private static IDictionary<string, MarkupFormattingOptions> Merge(
IEnumerable<KeyValuePair<string, MarkupFormattingOptions>> options1,
IEnumerable<KeyValuePair<string, MarkupFormattingOptions>> options2)
{
var result = options1.ToDictionary(x => x.Key, x => x.Value, StringComparer.OrdinalIgnoreCase);
foreach (var item in options2)
{
result[item.Key] |= item.Value;
}
return result;
}
private class Tag
{
public string Name { get; set; }
public int Line { get; set; }
public int Column { get; set; }
}
}
Formatting options are defined by an enum:
[Flags]
public enum MarkupFormattingOptions
{
None = 0,
PlaceOpeningTagOnNewLine = 1,
PlaceClosingTagOnNewLine = 2,
PlaceBothTagsOnNewLine =
PlaceOpeningTagOnNewLine |
PlaceClosingTagOnNewLine,
IsVoid = 4,
CloseEmptyTag = 8
}
To visualize the steps here are some intermediate results:
Step one: split on new lines so this is actually the same as the template:
<body>
<h1></h1>
<h2></h2>
<p><br><span></span></p>
<div> </div>
<hr>
<ol>
</ol>
<ul>
<li></li>
</ul>
<table>
<thead>
</thead>
<tbody>
<tr>
<th></th>
<td></td>
</tr>
</tbody>
<tfoot>
</tfoot>
</table>
</body>
Step two: tag names and their row and column numbers:
name row column
body 0 3
h1 1 7
h1 1 12
h2 2 7
h2 2 12
p 3 7
br 3 10
span 3 14
span 3 21
p 3 28
div 4 4
div 4 11
hr 5 7
ol 6 7
ol 7 8
ul 8 7
li 9 11
li 9 16
ul 10 8
table 11 7
thead 12 11
thead 13 12
tbody 14 11
tr 15 15
th 16 10
th 16 15
td 17 19
td 17 24
tr 18 16
tbody 19 12
tfoot 20 11
tfoot 21 12
table 22 8
body 23 4
Step three: finding closing tag options:
body PlaceClosingTagOnNewLine
h1 None
h2 None
p None
br IsVoid
span None
div None
hr IsVoid
ol PlaceClosingTagOnNewLine
ul PlaceClosingTagOnNewLine
li None
table PlaceClosingTagOnNewLine
thead PlaceClosingTagOnNewLine
tbody PlaceClosingTagOnNewLine
tr PlaceClosingTagOnNewLine
th None
td None
tfoot PlaceClosingTagOnNewLine
Step four: finding opening tag options and merging it with the previous step so at the same time this is the final step:
body PlaceBothTagsOnNewLine
h1 PlaceOpeningTagOnNewLine
h2 PlaceOpeningTagOnNewLine
p PlaceOpeningTagOnNewLine
br IsVoid
span None
div PlaceOpeningTagOnNewLine
hr PlaceOpeningTagOnNewLine, IsVoid
ol PlaceBothTagsOnNewLine
ul PlaceBothTagsOnNewLine
li PlaceOpeningTagOnNewLine
table PlaceBothTagsOnNewLine
thead PlaceBothTagsOnNewLine
tbody PlaceBothTagsOnNewLine
tr PlaceBothTagsOnNewLine
th PlaceOpeningTagOnNewLine
td PlaceOpeningTagOnNewLine
tfoot PlaceBothTagsOnNewLine
Answer: Personally, I hate deciphering regex expressions, so I would not use them unless necessary. You should be able to split the string into lines just fine using the String.Split method with StringSplitOptions.RemoveEmptyEntries. Also, string.IsNullOrEmpty(line.Trim()) is basically string.IsNullOrWhiteSpace(line), is it not?
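For example, the ToLines helper could be rewritten without regex along these lines (a sketch, not benchmarked against the original):

```csharp
private static IEnumerable<string> ToLines(this string template)
{
    // Splitting on the individual characters handles \r\n, \r and \n alike;
    // RemoveEmptyEntries drops the empty strings produced between \r and \n.
    return template
        .Split(new[] { '\r', '\n' }, StringSplitOptions.RemoveEmptyEntries)
        .Where(line => !string.IsNullOrWhiteSpace(line));
}
```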
Determine* methods IMHO will look better if you add a couple of local variables:
private static IEnumerable<KeyValuePair<string, MarkupFormattingOptions>> DetermineOpeningTagOptions(this IEnumerable<Tag> tags)
{
var lines = tags.GroupBy(t => t.Line).ToArray();
var tagNames = tags.Select(t => t.Name).Distinct(StringComparer.OrdinalIgnoreCase);
foreach (var tagName in tagNames)
{
var line = lines.First(l => l.Any(t => t.Name == tagName));
//you are only interested in first item
var formatting = line.First().Name == tagName
? MarkupFormattingOptions.PlaceOpeningTagOnNewLine
: MarkupFormattingOptions.None;
yield return new KeyValuePair<string, MarkupFormattingOptions>(tagName, formatting);
}
} | {
"domain": "codereview.stackexchange",
"id": 28235,
"tags": "c#, html, regex, formatting, template"
} |
How can you infer the burning temperature (Celsius, etc.) of wood from a heat-affected specimen? | Question: In metamorphic petrogenesis, you can infer the pressure and temperature a rock experienced from the minerals and other features inside it, using various instrumental techniques.
Is there any scientific method (possibly a forensic method) which gives the temperature of the burning?
A wood specimen is found in the field, and it is known that it was heated to some degree in a volcanic environment; one wants to investigate the heating temperature that the wood encountered.
Answer: There has been a recent study about this: Pensa et al. (2023) used charcoal samples found in the deposits of the 79 CE eruption of Mount Vesuvius to infer the temperature of its pyroclastic density currents. They used reflectance analysis of the charcoal samples, with the maximum reflectance values being associated with the minimum temperature of the peak thermal conditions. You can read the "Charcoal reflectance analysis" section and references therein for more details about this method.
However, bear in mind that you need several charcoal pieces sampled across various sites of your volcanic environment to have a better idea of the heat distribution it experienced while being emplaced. This study found quite a wide range of temperatures depending on the unit considered and/or the location sampled within the same unit. With one wood sample, you'd only have an idea of the temperature this exact location experienced, which is still valuable but cannot be extrapolated to the whole unit. | {
"domain": "earthscience.stackexchange",
"id": 2777,
"tags": "volcanology, volcanic-hazard, metamorphism"
} |
Incorrect/unexpected behavior from Trac-IK with calls to moveit::core::RobotState::setFromIK | Question:
I just switched to Melodic from Kinetic (yay!) and some of our existing code base started generating infeasible trajectories (nooo!). Upon further inspection, it seems that RobotState's setFromIK was returning consecutive IK solves (of a Cartesian trajectory) with joint-configuration flips (e.g., elbow, wrist, etc.). Multiple runs with the SAME trajectory sometimes fail / sometimes pass.
The flow of the trajectory IK routine is as follows:
seed joint state with an initial posture
loop through all Cartesian trajectory points with consecutive calls to setFromIK and save resulting joint positions
The code operates under the assumption that subsequent calls to setFromIK use the previous result as a seed (this has worked always in the past). I also tried explicitly seeding on every loop iteration with setVariablePositions (vs implicitly using the previous solution) with the same random flip results.
Switching to KDL from Trac-IK, however, resolves the issue. With that in mind, it seems to me that somehow either the current state is not making it as a seed to Trac-IK calls, it is being ignored by Trac-IK, or the wrong result is being selected. I don't know enough about the actual plugin communication/functionality to tell what exactly the issue is.
Thoughts? Anyone else seeing this? Can anyone offer insight or some direction to continue my debugging?
Originally posted by BrettHemes on ROS Answers with karma: 525 on 2018-12-05
Post score: 1
Original comments
Comment by BrettHemes on 2018-12-10:
Follow-up posted at https://github.com/ros-planning/moveit/issues/1255
Answer:
Melodic and Kinetic packages build the exact same source code, so this seems to me like it might be an issue with MoveIt. Were you always running the latest ros-kinetic-trac-ik debs in Kinetic? If so, then you should be running the exact same code in Melodic that you've been running in Kinetic for months. Additionally, Trac-IK has had no changes to functionality in the last 2 years other than the move from Boost to C++11 threads.
Might I suggest that you use the trac-IK-examples source and, with your URDF, try to test a seed that you see acting funny in MoveIt!?
Originally posted by pbeeson with karma: 276 on 2018-12-05
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by pbeeson on 2018-12-05:
Let me also suggest you check whether setFromIK() has changed within the last year or so, such that it might differ between Kinetic and Melodic.
Comment by gvdhoorn on 2018-12-06:
Perhaps it would also be worth checking the MoveIt configuration packages that are being used. @BrettHemes: are you configuring the solve_type somewhere?
Comment by BrettHemes on 2018-12-06:
we are using solve_type: Speed (default) and have never had issues in the past
Comment by gvdhoorn on 2018-12-06:
I would really recommend to change that to Distance. Even if you weren't having issues before.
Comment by BrettHemes on 2018-12-06:
I tried out solve_type: Distance and the result is much much slower execution (>10x). I assume this could be optimized via the timeout but I don't really like the idea of a deterministic pre-determined solve time. I tried wrapping the call with a "max_jump_threshold" check and iterate
Comment by BrettHemes on 2018-12-06:
when violated up to a maximum number of iterations and this is now working (arguably I should have done this from the start). I tested the seeding by passing bad/flipped seeds and the solver is indeed taking these into account. It is just for some reason flipping approximately
Comment by BrettHemes on 2018-12-06:
1 time out of 1000 calls when it never did this before.
Comment by BrettHemes on 2018-12-06:
So the current result is me having to call setFromIK twice every so often (reseeding with the previous joint pose) but is working otherwise.
Comment by gvdhoorn on 2018-12-06:
This sounds like something that should at least deserve a mention on the moveit issue tracker.
Comment by BrettHemes on 2018-12-06:
I would really recommend to change that to Distance. Even if you weren't having issues before.
Do you have any suggestions on how to best select the timeout value in this case?
Comment by gvdhoorn on 2018-12-07:
No, sorry. Perhaps @pbeeson can give some guidelines. | {
"domain": "robotics.stackexchange",
"id": 32130,
"tags": "ros, moveit, ros-melodic, ik"
} |
What is the origin of the transferred oxygen dianion in redox reactions? | Question: How come the oxygens transferred in redox reactions are always the $\ce{O^{2-}}$ anion?
For example, I have this set of rules, and the rules are implicitly referring to the $\ce{O^{2-}}$ anion, a potent base (otherwise the rules wouldn't make sense; for example, it makes sense that a base in acidic solution is protonated to water, and that a strong base in basic solution is leveled to hydroxide ion).
In other words, how come, when electrons are transferred, such as in this unbalanced reaction:
$\ce{ClO_{3}^- + 6I^- \leftrightharpoons 3I_2 + Cl^-}$
The $\ce{O^{2-}}$ anion is formed (if only to be consumed again)? I remember my prof would say that specifically 3 $\ce{O^{2-}}$ anions "disappear".
Why not the $\ce{O^{-}}$ anion or simply $\ce{O}$? Does this have to do with stability?
Answer: I think you are misunderstanding the meaning of superoxide and the rules.
Superoxide is $\ce{O_2^-}$
Superoxide is NOT $\ce{O^{2-}}$ as you have written.
The rules are referring to hydroxide which is $\ce{OH^-}$, not superoxide.
The reason hydroxide is used to balance equations in basic aqueous solution is:
$\ce{H_2O \leftrightharpoons H^+ + OH^-}$
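For illustration, balancing the chlorate/iodide reaction from the question in acidic solution uses protons and water, with no free oxide ions ever appearing:
$\ce{ClO_3^- + 6I^- + 6H^+ \leftrightharpoons 3I_2 + Cl^- + 3H_2O}$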
There is hydroxide present in aqueous solution, but not superoxide, peroxide, hydroxyl radical, etc. | {
"domain": "chemistry.stackexchange",
"id": 1303,
"tags": "electrochemistry, redox, ions"
} |
How do we check $x ≠ y$ in $PDA$ for $L = \{xy | x, y \in (0 + 1)^*, |x| = |y|, x ≠ y\}?$ | Question: We know that $L
= \{ xy | x, y \in (0 + 1)^*, |x| = |y|,
x≠y\}$
is context free. But my question is: how do we check $x ≠ y$ in a $PDA$? For example, $x=0^n1^n$ and $y=1^{2n}$. We can easily draw a $PDA$ that checks one $0$ against three $1$s, but how can we check $x ≠ y$?
Answer: If two strings $w_1, w_2$ of the same length are different from each other, then you can find a specific position where they differ:
$$w_1 = \underbrace{\square\ldots \square}_{k\text{ symbols }}\;x\;\underbrace{\square\ldots\ldots \square}_{\ell\text{ symbols }}$$
$$w_2 = \underbrace{\square\ldots \square}_{k\text{ symbols }}\;y\;\underbrace{\square\ldots\ldots \square}_{\ell\text{ symbols }}$$
$$x\neq y$$
You may already know the trick that when you concatenate the two strings, you can re-subdivide them:
$$w_1w_2 = \underbrace{\square\ldots \square}_{k\text{ symbols }}\;x\;\underbrace{\square\ldots\ldots \square}_{\ell\text{ symbols }}\;|\;\underbrace{\square\ldots \square}_{k\text{ symbols }}\;y\;\underbrace{\square\ldots\ldots \square}_{\ell\text{ symbols }}$$
$$w_1w_2 = \underbrace{\square\ldots \square}_{k\text{ symbols }}\;x\;\underbrace{\square\ldots\square}_{k\text{ symbols }}\;|\;\underbrace{\square\ldots\ldots \square}_{\ell\text{ symbols }}\;y\;\underbrace{\square\ldots\ldots \square}_{\ell\text{ symbols }}$$
You can do this because the $\square$ symbols can be anything. When you divide them this way, you can more easily see how a context free grammar can recognize the language.
Based on this trick, here is a definition of a PDA to recognize the language.
The PDA has four states, $P$, $Q_0$, $Q_1$, and $R$. The initial state is $P$.
When in state $P$, the machine will nondeterministically guess the position $k$ where the two strings differ.
Specifically, in state $P$ the machine may read a character from the input (ignoring it), and push the symbol $A$ onto the stack. It may do this as many times as it likes.
When in state $P$, the machine may decide that it will inspect the character in the current position. It reads the character at the current input (what I called $x$ above). If it reads $x=0$, the machine transitions to state $Q_0$. If it reads $x=1$, the machine transitions to state $Q_1$ instead.
In this way, the machine uses its finite state to remember the value of $x$ for later.
When in state $Q_0$ or $Q_1$, the machine first consumes $k$ characters of input. Specifically, it pops the symbol $A$ from the stack and consumes one character of input (ignoring it) until the stack is empty. (If it runs out of characters, the computation fails because the value of $k$ was invalid.)
Next, while in state $Q_i$, the machine nondeterministically guesses the value of $\ell$. As before, it does this by consuming one character of input (ignoring it) and pushing $B$ onto the stack. It may repeat this process any number of times.
When in state $Q_i$, the machine may decide that it will inspect the character in the current position. It reads the character at the current input (what I called $y$ above).
If it is in state $Q_0$ and reads $y=1$, we've found a mismatch!
If it is in state $Q_1$ and reads $y=0$, we've found a mismatch!
Otherwise, there is no mismatch at the chosen position. The machine should fail.
If the machine finds a mismatch, let it transition to state $R$. In state $R$, it should remove all the $B$ symbols from the stack, consuming one character from the input for each one. At the end of this process, it should be exactly at the end of the string and the stack should be empty. (If not, it has picked invalid values for $k$ and $\ell$.)
Overall, if $w_1$ and $w_2$ are different strings of the same length, one of the nondeterministic guesses of this machine will succeed, so the overall PDA will accept. Otherwise, all of the branches will fail, and the PDA will reject. This is the desired behavior. | {
"domain": "cs.stackexchange",
"id": 19992,
"tags": "computability, context-free, pushdown-automata"
} |
Random range with no repeat and efficient for small subset | Question: I want random values from a range with no repeats, efficient when only a small subset is drawn, but still workable if most or all of the range is returned.
This comes up in poker simulation, as you typically only deal about 1/3 of the deck.
The idea is to use Random and a HashSet to track which values have already gone out.
When the first or last value in the range goes out, the range is trimmed to reduce the number of Random.Next results that have already gone out.
I could also remove values that fall outside the trimmed range from the HashSet, but I see little value in that: it would reduce the set's size but at a compute cost. I thought Remove was O(n), but I just looked it up and it is O(1), so it might be good practice.
I ran a timing test: returning all of 1 million values (where this approach is weak) took 4.7 seconds, while returning just 1/4 of the million took 0.2 seconds. Tested against a full Fisher-Yates shuffle, the break-even point is about 1/4 of the range; when returning the full range, the shuffle is about 5x faster.
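For reference, the partial Fisher-Yates variant that makes the shuffle competitive for small subsets only shuffles as many elements as are drawn; a sketch (my own naming, not the code that was benchmarked):

```csharp
// Draw `count` distinct values from [start, start + rangeCount) by running
// Fisher-Yates for only `count` steps instead of shuffling the whole pool.
public static int[] Draw(Random rand, int count, int rangeCount, int start = 1)
{
    var pool = new int[rangeCount];
    for (int i = 0; i < rangeCount; i++)
        pool[i] = start + i;

    var result = new int[count];
    for (int i = 0; i < count; i++)
    {
        int j = rand.Next(i, rangeCount); // pick from the not-yet-drawn tail
        int tmp = pool[i];
        pool[i] = pool[j];
        pool[j] = tmp;
        result[i] = pool[i];
    }
    return result;
}
```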
//test
HashSet<int> testRandomSmallSubsetHash = new HashSet<int>();
RandomSmallSubsetHash randomSmallSubsetHash = new RandomSmallSubsetHash();
int randomSmall;
while ((randomSmall = randomSmallSubsetHash.Next()) != -1)
{
Debug.WriteLine(randomSmall);
if (!testRandomSmallSubsetHash.Add(randomSmall))
Debug.WriteLine("ERROR ");
}
Debug.WriteLine("count " + testRandomSmallSubsetHash.Count);
//end test
public class RandomSmallSubsetHash
{
private Random rand = new Random();
private int rangeCount;
private int range;
private int start;
private HashSet<int> used = new HashSet<int>();
public int CountLeft { get { return rangeCount - used.Count; } }
public bool HasNext { get { return (CountLeft > 0); } }
public int Next()
{
if (!HasNext)
return -1;
int next = rand.Next(start, range + start); //in .NET end is not inclusive
while (used.Contains(next))
{
next = rand.Next(start, range + start);
}
//Debug.WriteLine(next);
if (next == start + range - 1)
{
//can trim down the top to eliminate bad guesses
range--;
while (used.Contains(start + range))
{
range--;
}
}
else if (next == start)
{
//can trim down bottom to eliminate bad guesses
range--;
start++;
while (used.Contains(start))
{
range--;
start++;
}
}
used.Add(next);
return next;
}
public void Reset (int RangeCount = 100, int Start = 1)
{
if (RangeCount< 1)
throw new IndexOutOfRangeException();
rangeCount = RangeCount;
range = RangeCount;
start = Start;
used.Clear();
}
public void Reset()
{
Reset(rangeCount, start);
}
public RandomSmallSubsetHash(int RangeCount = 100, int Start = 1)
{
Reset(RangeCount, Start);
}
}
Answer: Before I start, let me say that this code looks pretty good. You seem to have tested your algorithm pretty well, and I've never read about shuffling algorithms other than Fisher-Yates, so I won't comment on it.
First, this code seems a little squished together. I would put newlines between my methods to make it a little clearer where one unit of responsibility starts and the previous one stops.
Second, please use your braces. And if you aren't going to use braces, then consistently don't use them:
if (!HasNext)
return -1;
while (used.Contains(next))
{
next = rand.Next(start, range + start);
}
Third, parameters are named with camelCase by convention in C#:
public void Reset (int RangeCount = 100, int Start = 1)
Fourth, please put spaces around your operators:
if (RangeCount< 1)
In the if/else if block in Next(), you do range-- at the top of each block. You should move this to just above the conditional if it is going to happen either way. However, I'm not quite sure if it is supposed to use the -- operator in each given that the comment for the first says you are trimming the top of the range and the comment for the other says you are trimming the bottom.
//in .NET end is not inclusive
This comment is not really needed, unless you are reminding yourself about it. Robert C. Martin states in Clean Code that every time you use a comment to explain your code, what the comment is really saying is that you failed to write your code so it explains what it does itself. After reading and writing many comments myself, I have to agree with him.
C# 6 would allow you to use the => expression for your one-line calculated properties:
public bool HasNext => CountLeft > 0;
Instead of:
public bool HasNext { get { return (CountLeft > 0); } } | {
"domain": "codereview.stackexchange",
"id": 31445,
"tags": "c#, .net, random"
} |
How is potential difference maintained in the inductor (in a simple R-C circuit) when the battery is disconnected? | Question: My understanding
In the case of an R-C circuit, when the capacitor is charged and the battery is disconnected, a potential difference is maintained (decreasing exponentially) because of the electric field existing between the plates, and it reduces as charge transfer takes place.
Similarly, in an inductor, when an increasing or decreasing current is passed through it, a potential difference is maintained, given by Faraday's law as $\mathcal{E} = -L\,\frac{\mathrm{d}I}{\mathrm{d}t}$. But how is it still maintained when the current is stopped?
(Please try not going too deep I use University Physics by Freedman and Young)
Thanks in advance.
Answer: When you have an RC circuit with a battery and the capacitor is charged, the capacitor has a store of energy in the electric field equal to $\frac 12 CV^2$.
When the battery is disconnected that energy stored in the capacitor becomes heat as a current is passed through the resistor.
The current is generated because there is a potential difference across the plates of the capacitor.
The current decreases to zero when there is no longer an electric field ie the capacitor is totally discharged.
There is a parallel to this when you consider an inductor, in that there is energy stored in the magnetic field produced by the inductor equal to $\frac 12 LI^2$.
When the battery is disconnected the current cannot collapse to zero instantaneously because that would imply that the magnetic flux linked with the inductor would go to zero instantaneously and thus produce an infinite induced emf (Faraday).
So when you disconnect the battery there is still a current flowing in the circuit but that current is decreasing and thus there is an induced emf in opposition to the flux through the inductor decreasing.
As the current decreases, the energy stored in the magnetic field (which depends on the current) is dissipated as heat in the resistor.
This continues until the magnetic field is no more when the current is zero. | {
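As a sketch of the energy bookkeeping in the decay phase (this is the standard series R-L result, not from the original answer; $I_0$ is the current at the moment the battery is disconnected):

```latex
% Kirchhoff's voltage law with the battery removed:
%   L dI/dt + I R = 0   =>   I(t) = I_0 e^{-Rt/L}.
% Total heat dissipated in the resistor:
\int_0^\infty I^2(t)\, R \,\mathrm{d}t
  = I_0^2 R \int_0^\infty e^{-2Rt/L}\,\mathrm{d}t
  = I_0^2 R \cdot \frac{L}{2R}
  = \tfrac{1}{2} L I_0^2 ,
```

which is exactly the energy initially stored in the magnetic field.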
"domain": "physics.stackexchange",
"id": 42798,
"tags": "electromagnetism, classical-electrodynamics, electromagnetic-induction"
} |
Is magnetic field strength (Teslas) inversely proportional to distance from the magnet squared? | Question: I can't find a solid answer on this, I've read that magnetic force is inversely proportional to distance squared so does that mean it's strength is too?
Answer: The problem is that you have to define what source system you're talking about.
Also, that sort of result usually implies that the system on which the force is applied is point-like, or at least very small. So it's more correct to speak about the field radiated by the source, without mentioning that system.
Even for the electric field, saying that it's proportional to the inverse square of the distance isn't always true. It is, however, the case for the very important, most basic example, the point-like charge (see Coulomb force).
But there's no point-like "magnetic charge", so you don't have that as a reference. The next simplest example is arguably the magnetic dipole. The decrease of its field with the distance varies noticeably, depending on the situation (constant magnetic moment or not, near field or far field, and so on).
The Biot-Savart formula, however, shows that an "element of electrical current" does generate a magnetic field that follows the inverse square law. But be careful, since that law is only valid in magnetostatics (and approximately valid for slowly varying currents).
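To make the two distance dependences explicit (these are the standard magnetostatics results the answer refers to):

```latex
% Biot-Savart: the field of a current element falls off as 1/r^2 ...
\mathrm{d}\vec{B} \;=\; \frac{\mu_0}{4\pi}\,
  \frac{I\,\mathrm{d}\vec{\ell}\times\hat{r}}{r^{2}},
\qquad
% ... while the far field of a magnetic dipole of moment m falls off as 1/r^3:
\left|\vec{B}_{\text{dipole}}\right| \;\sim\; \frac{\mu_0}{4\pi}\,\frac{m}{r^{3}}.
```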
"domain": "physics.stackexchange",
"id": 88955,
"tags": "magnetic-fields"
} |
model.cuda() in pytorch | Question: If I call model.cuda() in pytorch where model is a subclass of nn.Module, and say if I have four GPUs, how it will utilize the four GPUs and how do I know which GPUs that are using?
Answer: model.cuda() by default will send your model to the "current device", which can be set with torch.cuda.set_device(device).
An alternative way to send the model to a specific device is model.to(torch.device('cuda:0')).
This, of course, is subject to the device visibility specified in the environment variable CUDA_VISIBLE_DEVICES.
You can check GPU usage with nvidia-smi. Also, nvtop is very nice for this.
The standard way in PyTorch to train a model in multiple GPUs is to use nn.DataParallel which copies the model to the GPUs and during training splits the batch among them and combines the individual outputs. | {
"domain": "datascience.stackexchange",
"id": 5529,
"tags": "pytorch"
} |
How can recombination lead to photon decoupling if scattering can still occur with neutral particles? | Question: During the recombination era, two things happened:
Electrons and protons bonded to form neutral hydrogen atoms.
As a result of #1, Compton scattering is no longer efficient enough to keep photons and electrons in equilibrium. Thus photons decoupled from other particles and the CMB formed.
However, based on what I gather from this other post here, Compton scattering can still occur between photons and composite neutral particles. A photon can interact with the quarks of a neutron and it can interact with the charged parts of a neutral hydrogen atom.
Even for neutral fundamental particles, there can still be scattering caused by the coupling between the particle and the EM field due to the spin of the particle.
Given this, how can we say #2 above is true? How does the fact that charged particles are combining into neutral particles lead to photon decoupling if Compton scattering can still occur with the charged components of the new particles? And moreover, how can this happen if scattering can occur between neutral particles and photons?
Answer: As a general remark before getting into more detail, you should keep in mind that the fact that a process can occur, is not sufficient to argue that it is relevant for determining equilibrium. One must also consider the reaction rate for that process, compared to other rates in the problem.
CMB photons after recombination do not have nearly enough energy to interact with a nucleus at an appreciable rate. Just to give a rough order of magnitude, nuclear energy scales are of order ${\rm MeV}$ (and probing the quarks within a nucleon requires several orders of magnitude more energy than that), and photons had energy less than $13.6\ {\rm eV}$ after recombination. Actually the average energy of a photon was much less than this; because of the huge value of the photon-to-baryon ratio $\eta\approx 10^{10}$, recombination occurred only when the temperature was small enough that of order $1$ in $10^{10}$ photons (or fewer) had a high enough energy to ionize Hydrogen, at a temperature of around $0.3\ {\rm eV}$.
Photons can interact with bound electrons in Hydrogen, however one must keep in mind two things:
Photons will only strongly interact with Hydrogen at discrete energies, corresponding to transitions between states in Hydrogen, so the blackbody spectrum of the CMB formed after recombination will only be affected at discrete lines (or, a superposition of redshifted lines).
Generally, photons do not have enough energy to cause transitions between the ground state and first excited state of hydrogen ($3.4\ {\rm eV}$, which is a substantial fraction of the ionization energy), so interactions will only be possible for more subtle, lower energy transitions.
In fact, there is a source of background radiation that does occur after recombination due to the hyperfine structure transition in Hydrogen, leading to the 21-cm line (although this radiation has a different source and signature than CMB photons). The 21-cm emission is an important probe of structure formation that is a target of observatories like the Square Kilometer Array (SKA). Because photons will redshift between the time that they participate in the hyperfine transition and the time we observe them, we can use observations of the strength of the 21-cm signal at different frequencies to observe structure formation as a function of redshift, during the "dark ages" when there is no other source of light strong enough to observe. | {
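As a side note to the 21-cm point, the redshift-to-frequency mapping the answer alludes to is just $\nu_{\rm obs} = \nu_{\rm rest}/(1+z)$. A minimal sketch (the function name is my own, not from the answer):

```python
# Rest frequency of the neutral-hydrogen hyperfine (21-cm) transition, in MHz.
REST_FREQ_MHZ = 1420.40575

def redshift_from_observed(nu_obs_mhz):
    """Emission redshift z such that nu_obs = nu_rest / (1 + z)."""
    return REST_FREQ_MHZ / nu_obs_mhz - 1.0

# A signal observed at one tenth of the rest frequency was emitted at z = 9,
# deep in the "dark ages" the answer mentions.
print(round(redshift_from_observed(REST_FREQ_MHZ / 10), 6))  # → 9.0
```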
"domain": "physics.stackexchange",
"id": 87667,
"tags": "cosmology, photons, charge, scattering, cosmic-microwave-background"
} |
Action at distance with charges (Electromagnetism) - help please! | Question: My professor taught us in a theory lecture on electromagnetism , that if two charges are 1 light year apart and the charge on the left moves towards the charge on the right. The force felt on the right charge will "instantaneously" change however the force felt on the left will take 1 light year to change. Why are both forces on both charges not changed by the same amount since the radius is lower for both charges hence both forces should experience increase in force according to coulombs law? Thank you
Answer: Each charge feels a force due to the electric field of the other charge. Changes in this electric field propagate outward at the speed of light.
When you move one charge toward the other, the electric field that the moved charge sees from the stationary charge is different because the moved charge is now at a different distance from the stationary charge. Therefore, the magnitude of the force on the moved charge is different instantaneously, as the field of the stationary charge was always there.
On the other hand, the stationary charge will only feel a different force once the change in the moved charge's electric field propagates to its location. As such, it will experience a change in force later than the moved charge. | {
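The timing argument can be stated compactly with the notion of retarded time: the field at a point depends on what the source was doing one light-travel time earlier,

```latex
t_r \;=\; t - \frac{|\vec{r}-\vec{r}_s|}{c},
```

so the moved charge immediately samples the stationary charge's long-established field at its new position, while the stationary charge only responds after the delay $d/c$ (one year for charges a light year apart).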
"domain": "physics.stackexchange",
"id": 48123,
"tags": "electrical-engineering"
} |
Generic Cache struct | Question: I've came up with the following implementation of a very simple cache:
use std::cmp::Eq;
use std::collections::HashMap;
use std::hash::Hash;
pub struct Cache<I, R, F>
where I: Copy + Eq + Hash, F: Fn(I) -> R
{
calculation: F,
values: HashMap<I, R>,
}
impl<I, R, F> Cache<I, R, F> where I: Copy + Eq + Hash, F: Fn(I) -> R {
pub fn new(calculation: F) -> Cache<I, R, F> {
Cache { calculation, values: HashMap::new() }
}
pub fn value(&mut self, arg: I) -> &R {
// Pop or calculate new value
let result = match self.values.remove(&arg) {
Some(v) => v,
None => (self.calculation)(arg)
};
// Store value
self.values.insert(arg, result);
// Return a reference to it
self.values.get(&arg).unwrap()
}
}
A simple usage example would be:
fn to_lowercase(value: &str) -> String {
println!("Calculating \"{}\".to_lowercase()", value);
value.to_lowercase()
}
fn main() {
let mut cache = Cache::new(to_lowercase);
println!("Calculated: {}", cache.value("ABC"));
println!("Calculated: {}", cache.value("ABC"));
}
Which prints:
Calculating "ABC".to_lowercase()
Calculated: abc
Calculated: abc
However, I feel like my code is much more complicated than it needs to be (specially around the Cache.value implementation), what can be improved here?
Answer: The biggest change you can do is to the value function, because most of what you are building is actually provided by HashMap's entry API. This could reduce the body of your value function to simply this:
match self.values.entry(arg) {
Entry::Occupied(val) => val.into_mut(),
Entry::Vacant(slot) => slot.insert((self.calculation)(arg)),
}
You could also mess with Entry's or_insert_with function, but I personally prefer the approach above.
You may also consider using Clone instead of Copy. The advantage of Clone are that it is more explicit and that it should support more types. The disadvantage obviously is that clone can be arbitrarily expensive. In general I'd say you want Clone for a generic interface such as this.
Finally, I understand it's an exercise, but the names are a bit plain:
Cache is very generic: what do you actually cache? Call it something like FunctionResultCache or Memoizer
value is also a bit weird. What you're doing in essence is wrapping a function, so I'd name this call or invoke or something similar. | {
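Putting the suggestions together, here is a compile-checkable sketch of the entry-based value method. To keep it short it is specialized to u64 -> u64 instead of the generic Cache<I, R, F> from the question; the key point is that the borrow checker accepts the match because self.values and self.calculation are disjoint fields.

```rust
use std::collections::hash_map::Entry;
use std::collections::HashMap;

// Specialized sketch of the cache; the generic version works the same way.
struct Cache<F: Fn(u64) -> u64> {
    calculation: F,
    values: HashMap<u64, u64>,
}

impl<F: Fn(u64) -> u64> Cache<F> {
    fn new(calculation: F) -> Self {
        Cache { calculation, values: HashMap::new() }
    }

    fn value(&mut self, arg: u64) -> &u64 {
        // `entry` borrows `self.values` mutably; calling `self.calculation`
        // only borrows the other field, so both borrows can coexist.
        match self.values.entry(arg) {
            Entry::Occupied(val) => val.into_mut(),
            Entry::Vacant(slot) => slot.insert((self.calculation)(arg)),
        }
    }
}

fn main() {
    let mut cache = Cache::new(|x| x * 2);
    assert_eq!(*cache.value(21), 42); // computed on the first call
    assert_eq!(*cache.value(21), 42); // served from the map on the second
    println!("ok");
}
```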
"domain": "codereview.stackexchange",
"id": 37593,
"tags": "rust"
} |
Adding Octomap data to planning scene in ROS Electric | Question:
I am trying to use data from an octomap for collision avoidance in arm_navigation. Octomap_server publishes a series of collision objects on the octomap_collision_object topic so I was hoping that I could use these but this does not work (I tried remapping to the collision_object topic). Is this something that would have worked in diamondback but doesn't with the electric planning scene architecture? How can I add the octomap data to the arm_navigation planning scene in ROS electric? Thanks.
Originally posted by jwrobbo on ROS Answers with karma: 258 on 2012-03-25
Post score: 0
Answer:
Maybe the version of octomap_server for Electric doesn't publish the collision map. I have a version of octomap_server that was ported from Diamondback to Electric that does publish a collision map on "collision_map_out".
You can see the version here:
https://sbpl.pc.cs.cmu.edu/redmine/projects/full-body-nav/repository/revisions/master/show/octomap_server
You can get the code:
git clone https://sbpl.pc.cs.cmu.edu/redmine/people/mike/full-body-nav.git
I know that that version works and does what you want.
Originally posted by ben with karma: 674 on 2012-03-26
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by jwrobbo on 2012-03-28:
Thanks! Works like a charm. Looks like you're working on a pretty interesting project.
Comment by ben on 2012-03-28:
Happy to hear it. It seems like you are as well :)
Comment by AHornung on 2012-04-03:
Collision map publishing on the same topic has now been backported to octomap_server in octomap_mapping (trunk). | {
"domain": "robotics.stackexchange",
"id": 8715,
"tags": "ros, octomap, planning-scene, arm-navigation, octomap-server"
} |
What is the difference between a Robot and a Machine? | Question: What is the difference between a Robot and a Machine? At what point does a machine begin to be called a robot?
Is it at a certain level of complexity? Is it when it has software etc?.
For instance: A desktop printer has mechanics, electronics and firmware but it is not considered a robot (or is it). A Roomba has the same stuff but we call it a robot. So what is the difference.
I have always believed that a robot is a robot when it takes input from it's environment and uses it to make decisions on how to affect it's environment; i.e. a robot has a feedback loop.
Answer: You asked two (root) questions:
Question: What is the difference between a Robot and a Machine?
and
Question: At what point does a machine begin to be called a robot?
If I may, allow me to present the following text to address the first question:
The six classical simple machines
Reference: https://en.wikipedia.org/wiki/Simple_machine
Lever
Wheel and axle
Pulley
Inclined plane
Wedge
Screw
Any one of these “machines” is a long way from (but may contribute to the construction of) a robot.
Addressing your second question and although fiction, Isaac Asimov presented a line of thought (reference: http://en.wikipedia.org/wiki/Three_Laws_of_Robotics) still discussed today:
The Three Laws of Robotics (often shortened to The Three Laws or Three Laws) are a set of rules devised by the science fiction author Isaac Asimov. The rules were introduced in his 1942 short story "Runaround", although they had been foreshadowed in a few earlier stories. The Three Laws are:
A robot may not injure a human being or, through inaction, allow a human being to come to harm.
A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Since I’m referencing Wikipedia verses presenting any original thought, I might as well continue: (reference: http://en.wikipedia.org/wiki/Robot)
A robot is a mechanical or virtual agent, usually an electro-mechanical machine that is guided by a computer program or electronic circuitry. ... Robotics is the branch of technology that deals with the design, construction, operation, and application of robots, as well as computer systems for their control, sensory feedback, and information processing.
In summary, a machine can be a robot, a robot can be a machine, a robot can be virtual. I agree with the poster who said it would be several doctoral programs defining the difference. :) | {
"domain": "robotics.stackexchange",
"id": 1781,
"tags": "industrial-robot"
} |
How do I decide the electromeric effect on pent-2-ene? | Question: There are two possible case
π electrons shifting to C2
In this case, nucleophile attacks C3
It is stabilized by +I effect of two ethyl groups
π electrons shifting to C3
In this case, nucleophile attacks C2
It is stabilized by +I effect of a methyl group and a propyl group.
How do I decide which one is more stable, and ultimately, which one is favoured?
Answer: Here what you should consider first is hyperconjugation, rather than inductive effect (a brief explanation by me).
You're right that $\ce{Nu-}$ will either attack on $\ce{C-3}$ or $\ce{C-2}$, and hence two carbocations would be formed. Now, we'll analyse the stability of these carbocations on the basis of hyperconjugation which is as follows,
carbocation at $\ce{C-2}$ will have a total of 5 HC structures
carbocation at $\ce{C-3}$ will have a total of 4 HC structures
Based on the above difference, it is easy to conclude that $\ce{Nu-}$ will attack on $\ce{C-2}$ carbon. Whereas based on inductive effect, those carbocations are nearly equally stable.
Note: HC is short for "hyperconjugative structures". | {
"domain": "chemistry.stackexchange",
"id": 13642,
"tags": "organic-chemistry"
} |
CFL Closure Properties prove or disprove for the following languages | Question: I have following statements which I must prove or disprove :
1) Let $L$ be a CFL and $k \in N$ then $L^k$ is also a CFL.
2) $L_1 \subseteq L_2 \subseteq L_3$ are Languages, if $L_1$ and $L_3$ are DCFL, then $L_2$ is also a DCFL
The first one I have no idea how to start or whether it is also a CFL
For the second one I think that this is false, because the CF contains the REG and DCF which means $L_2$ can be a REG-Language or a CFL or DCFL?
Is this right? at least this is how I understand it
Answer: 1) Since the context-free languages are closed under concatenation, a simple induction shows that if $L$ is context-free then so is $L^k$, for all $k \geq 0$.
2) Let $L_2$ be a language over an alphabet $\Sigma$ which is not context-free. Then $\emptyset \subseteq L_2 \subseteq \Sigma^*$, refuting the statement (since $\emptyset$ and $\Sigma^*$ are regular and so deterministic context-free). | {
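Spelled out, the induction in (1) uses $L^0 = \{\varepsilon\}$ as the base case:

```latex
L^0 = \{\varepsilon\}, \qquad L^{k+1} = L^k \cdot L,
```

where $L^0$ is regular (hence context-free), and if $L^k$ and $L$ are context-free then closure under concatenation makes $L^{k+1}$ context-free as well.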
"domain": "cs.stackexchange",
"id": 9300,
"tags": "formal-languages, context-free, closure-properties"
} |
Looking at a Neutron Star | Question: I was recently reading something that said if you looked at a neutron star you would see part of the back side of it as the gravity of neutron star would bend the light.
My question is this: what would a neutron star look like? Would it look like a normal celestial body (i.e. more or less spherical) except that part of what you are seeing is on the other side? Or would it appear to have an oblong shape or flattened shape to it?
Answer: When you look at a spherical body, you don't "see" a sphere, you see a disc.
The same would be true of a neutron star. The difference is that the angular radius of the disc is larger than $R/D$, where $r=R$ is the coordinate radius of the neutron star and $D$ is the distance to the observer.
The "effective" radius is given by $R (1 - R_s/R)^{-1/2}$, where $R_s$ is the Schwarzschild radius for a neutron star of mass $M$.
Here is an example of what a neutron star might look like (by Corbin Zahn, from this website). This is a neutron star with radius twice the Schwarzschild radius (for example, with a radius of 8.4 km, for a $1.4 M_{\odot}$ neutron star). Each patch is a 30 degree by 30 degree square and you can see how both poles are clearly well inside the visible disc.
Should the radius shrink below 1.76 times the Schwarzschild radius (and there are equations of state for a neutron star that would permit this), then the whole of the surface would be visible (e.g. Pechenick, Ftaclas & Cohen 1983).
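To check the numbers quoted above, one can compute the Schwarzschild radius and the apparent radius for a $1.4 M_{\odot}$ star directly. This is plain arithmetic, not part of the original answer; the constants are rounded SI values.

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg

def schwarzschild_radius(mass_kg):
    """R_s = 2GM/c^2, in metres."""
    return 2 * G * mass_kg / C**2

def apparent_radius(r, mass_kg):
    """Effective radius R (1 - R_s/R)^(-1/2) seen by a distant observer."""
    r_s = schwarzschild_radius(mass_kg)
    return r / math.sqrt(1 - r_s / r)

m = 1.4 * M_SUN
r_s = schwarzschild_radius(m)   # ~4.1 km
r = 2 * r_s                     # the R = 2 R_s star shown in the figure
print(round(r_s / 1e3, 1), round(apparent_radius(r, m) / 1e3, 1))  # → 4.1 11.7
```

So the coordinate radius of roughly 8.3 km appears to the observer as a disc of radius close to 11.7 km, consistent with the answer's "effective" radius formula.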
"domain": "physics.stackexchange",
"id": 39152,
"tags": "astrophysics, neutron-stars"
} |
How do self-driving cars construct paths? | Question: I wonder how self-driving cars determine the path to follow. Yes, there's GPS, but GPS can have hiccups and a precision larger than expected. Suppose the car is supposed to turn right at an intersection on the inner lane, how is the exact path determined? How does it determine the trajectory of the inner lane?
Answer: As you say, GPS is not precise enough for the purpose (until recently it was only accurate within 5m or so, since 2018 there are receivers that have an accuracy of about 30cm). Instead, autonomous vehicles have a multitude of sensors, mostly cameras and radar, which record the surrounding area and monitor the road ahead. Due to them being flat, mostly one colour, and often with lines or other markers on them, roads are usually fairly easy to spot, which is why most success has been made driving on roads as opposed to off-road. Once you know exactly where you are and where you want to go, computing the correct trajectory is then just a matter of maths and physics.
For an academic paper on the subject of trajectory planning see Local Trajectory Planning and Tracking of Autonomous Vehicles, Using Clothoid Tentacles Method.
It quickly becomes more complex when other road users and obstacles are taken into account; here machine learning is used to identify stationary and movable objects at high speed from the sensor input. Reacting to the input is a further problem, and one reason why there aren't any self-driving cars on the roads today.
This is all on driving automation level 2 and above; on the lower levels things are somewhat easier. For example, the latest model Nissan LEAF has an automatic parking mode, where the car self-steers, guided by camera images and sonar, but still requires the driver to indicate the final position of the vehicle. Apart from that, it is fully automatic. | {
"domain": "ai.stackexchange",
"id": 962,
"tags": "autonomous-vehicles, path-planning, path-finding"
} |
Energy Bands Question | Question: When an electron passes into the valence band, is it no longer useful for conduction?
Answer: I would answer "no", but it depends what you mean. An electron going from the conduction to the valence band eliminates both a conduction band electron and a valence band hole. Both helped with conduction, and now you've gotten rid of them. Thus, conduction would go down.
This can be explained another way. To quote Ashcroft and Mermin: "Conduction is due only to those electrons that are found in partially filled bands" (Chapter 12, section "Inertness of Filled Bands"). This is because in a filled band, for every electron with a wave vector $\vec{k}$, there will be another electron with wave vector $-\vec{k}$, and the current due to one cancels out the current due to the other. There can only be conduction when one of those two electrons is missing.
In your situation, I assume that you are dealing with a "normal" semiconductor, so the valence band is already nearly full, and by adding another electron to the valence band, you are making the valence band even fuller.** Thus, all things being equal, I would expect that adding an electron to the valence band would reduce the conduction of your system.
The above explanations are equivalent.
** Note that this is a fine point. Adding an electron to the conduction band also makes the conduction band fuller, but conduction goes up because the band is mostly empty. Adding electrons increases conduction for nearly empty bands and decreases it for nearly full bands. This is covered in Ashcroft and Mermin but is outside the scope of the question. | {
"domain": "physics.stackexchange",
"id": 70621,
"tags": "semiconductor-physics, conductors, electronic-band-theory"
} |
What amount of odometry drift would be considered having an accurate base? | Question:
I'm working on some projects and it occurred to me that I've never seen any documents where people specify their robot base odometry drift to benchmark what a typical amount of drift would be and what's the point where I can say I have "good" odometry.
For instance, when I move my base in a 20 meter perimeter square, in forward direction I have about 20cm drift (1%) and in the lateral direction, I have about 50cm (2.5%) drift on a differential drive base.
Can anyone share here their specification or documentation of others?
Originally posted by stevemacenski on ROS Answers with karma: 8272 on 2018-08-30
Post score: 0
Answer:
Arrr, I'd think ye should be lookin fer' some researrrch, mat'ee. Calibration papers be adverrrtisin' all manner o' statistics.
Originally posted by Tom Moore with karma: 13689 on 2018-09-19
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 31685,
"tags": "navigation, odometry, ros-kinetic, robot-localization"
} |
Physical interpretation of 2-forms dual to pseudovectors | Question: Mathematically for every 3D pseudovector $x^i$ there is a 2-form $F_{ij}=\epsilon_{ijk}x^k$ such that the 2-form transforms properly under all orthogonal transformations. Therefore I would expect it would be more natural to write physical quantities such as angular momentum $\textbf{L}$ or magnetic field $\textbf{B}$ in terms of their corresponding 2-forms.
Is there any physical insight as to why these quantities behave the way they do apart from experimental verification. If it is simply the way they are, is there any insightful interpretation of their corresponding 2-forms? I seem to be able to get some intuition from looking at the vectors but none at all by analysing the 2-forms.
Is the way these vectors physically behave related to their pseudoness? For example the rather odd direction of magnetic force.
Answer: Angular momentum is a very instructive example to look at - in particular, to look at how the notion of angular momentum (or, in fact, rotation), changes when you consider more or fewer than the usual three spatial dimensions. The proper notion of angular momentum that generalizes to all dimensions is $L = \vec r\wedge \vec p$, i.e. a 2-form, the wedge product of position and momentum. In three dimensions, the Hodge dual of this form is the ordinary pseudovector of angular momentum, and in fact one might define the cross product in 3d as the Hodge dual of the wedge product. You should think of the 2-form $L$ as describing the plane in which the rotation happens, together with some numbers encoding its direction and speed.
Let's start in one dimension: There is no rotation for an object in one dimension - it can only move forward and backward, and nowhere else. This corresponds to the second exterior power of a one-dimensional vector space being zero - there are no 2-forms, hence no rotation.
In two dimensions, there is clearly rotation, imagine a one-dimensional "rod" spinning in a plane. You might be tempted to describe the rotation as the 3d vector of angular momentum perpendicular to the plane, but this is an extrinstic description. If the world were truly two-dimensional, this description would not be available - but the description by two-forms is available. 2-forms in 2 dimensions are dual to 0-forms, i.e. scalars, so rotation in a fixed plane is fully described not by a vector, but by a number - its magnitude tells you how fast the rotation is and the sign whether it is clockwise or counter-clockwise.
In three dimensions, we get the familiar duality between 2-forms and 1-forms/vectors. But note that there is really nothing about rotation that would force you to describe it as "rotation about an axis" rather than "rotation in a plane" - the two descriptions/interpretations are fully dual, and it is the latter that generalizes to all dimensions.
In four dimensions...well, I get that this is not visual anymore, but think about special relativity, and the Lorentz transformations, which are generalized rotations - they are generated by 2-tensors, not vectors, and their associated conserved quantity is a 2-tensor, the energy-momentum tensor, not a vector.
Note that I have nowhere relied on the "pseudoness" of the angular momentum vector in 3d. It's an artifact of the Hodge dual not commuting with reflections, but it's really not the defining property of a "pseudovector". A "pseudovector" is not a vector at all, it is intrinsically a 2-form, and especially when you generalize to other dimensions you must respect that, as I also pointed out here.
That the magnetic force is a pseudovector and not a vector is something you can only appreciate after switching to the covariant formulation of electromagnetism and recognizing the electric and magnetic fields as certain parts of the electromagnetic field strength tensor - and once again, you will find that going to other dimensions shows that the magnetic field is not fundamentally a "vector" at all - in particular, it is equivalent to a scalar in 2 spatial dimensions, and to a more complicated $d-2$-form in higher dimensions. You might think of the magnetic 2-form as a collection of planes the velocities of charged particles are "dragged along" in the sense that the more parallel their velocity is to these planes, the stronger is the magnetic force that tries to make them describe circles in those planes:
Consider that the vector $\vec B$ is perpendicular to the planes its Hodge dual 2-form describes and that the Lorentz force is $\vec v\times \vec B$. This is maximal when $\vec v$ and $\vec B$ are perpendicular, i.e. when $\vec v$ lies in the plane the dual encodes, and the Lorentz force is also perpendicular to $\vec B$, so it always lies in that plane.
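In components, the dualities used throughout the answer read (3d Euclidean indices, summation implied):

```latex
F_{ij} = \epsilon_{ijk}\, x^{k}
\;\Longleftrightarrow\;
x^{k} = \tfrac{1}{2}\,\epsilon^{kij} F_{ij},
\qquad
L_{ij} = r_i p_j - r_j p_i,
\quad
L_k = \tfrac{1}{2}\,\epsilon_{kij} L_{ij} = (\vec r \times \vec p)_k .
```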
"domain": "physics.stackexchange",
"id": 38024,
"tags": "magnetic-fields, angular-momentum, differential-geometry, tensor-calculus, mathematics"
} |
Checking winning conditions in Tic-Tac-Toe | Question: This code checks winning conditions in Tic-Tac-Toe by checking if there is any row, column or diagonal with the same symbols.
The board is a 2-dimensional array of chars. The character ' ' means that a field is empty.
How can I refactor/simplify this code?
public static bool SomeoneWins(char[][] board)
{
// Check columns
for (var x = 0; x < board.Length; x++)
{
var firstField = board[x][0];
if (firstField == ' ') continue;
bool allFieldsTheSame = true;
for (var y = 1; y < board[x].Length; y++)
{
if (board[x][y] != firstField)
{
allFieldsTheSame = false;
break;
}
}
if (allFieldsTheSame) return true;
}
// Check rows
for (var y = 0; y < board.Length; y++)
{
var firstField = board[0][y];
if (firstField == ' ') continue;
var allFieldsTheSame = true;
for (var x = 1; x < board.Length; x++)
{
if (board[x][y] != firstField)
{
allFieldsTheSame = false;
break;
}
}
if (allFieldsTheSame) return true;
}
// first diagonal
if (board[0][0] != ' ')
{
var allFieldsTheSame = true;
for (var d = 0; d < board.Length; d++)
{
if (board[d][d] != board[0][0])
{
allFieldsTheSame = false;
break;
}
}
if (allFieldsTheSame) return true;
}
// second diagonal
if ( board[board.Length - 1][0] != ' ')
{
var allFieldsTheSame = true;
for (var d = 0; d < board.Length; d++)
{
if (board[d][board.Length-d-1] != board[board.Length - 1][0])
{
allFieldsTheSame = false;
break;
}
}
if (allFieldsTheSame) return true;
}
return false;
}
Answer: What do all winning conditions in Tic-Tac-Toe have in common? They are all straight lines!
The idea of having one method that can be called multiple times is a good one. To do that, we need to pass in the starting position and how much x and y should change with every step.
We also need a way to stop the loop: we can either stop when we notice that we will go out of bounds, or stop after a specific number of checks. In this implementation, I chose to hard-code 3 as the limit for how many tiles to check.
public static bool AllFieldsTheSame(int startX, int startY, char[][] board, int dx, int dy)
{
char firstField = board[startY][startX];
if (firstField == ' ')
{
return false;
}
for (var i = 0; i < 3; i++)
{
int y = startY + dy * i;
int x = startX + dx * i;
if (board[y][x] != firstField)
{
return false;
}
}
return true;
}
Then this method can be called repeatedly as follows:
public static bool SomeoneWins(char[][] board)
{
// Check columns
for (var x = 0; x < board.Length; x++)
{
if (AllFieldsTheSame(x, 0, board, 0, 1))
return true;
}
// Check rows
for (var y = 0; y < board.Length; y++)
if (AllFieldsTheSame(0, y, board, 1, 0))
return true;
// Check diagonals
if (AllFieldsTheSame(0, 0, board, 1, 1))
return true;
if (AllFieldsTheSame(2, 0, board, -1, 1))
return true;
return false;
}
However, I suspect you are also interested in who wins, in which case you could have both AllFieldsTheSame and SomeoneWins return a char instead of a bool.
And by the way, I'd prefer to use an enum for the possible values of each tile. A char can have the value of Q, but I don't believe you want to place a Q tile in your game. Using an enum eliminates any possible risk of invalid characters. | {
"domain": "codereview.stackexchange",
"id": 14787,
"tags": "c#, tic-tac-toe"
} |
Topological Mass Generation Mechanism | Question:
What is the topological mass generation mechanism?
And what is its relation with the Higgs mechanism?
Can we say that after the discovery of Higgs boson, the topological mass generation mechanism is ruled out?
Answer: Topological mass generation is a phenomenon in 2+1 dimensions discovered by
Deser, Jackiw and Templeton, in which Yang-Mills fields acquire a mass,
without breaking the gauge invariance, upon the inclusion of a Chern-Simons term in the action. (Please see a more recent review by Roman Jackiw.) For the Maxwell theory, the action has the form
$$S = S_{M}+S_{CS} = \int d^3x \frac{1}{4}F_{\mu\nu}F^{\mu\nu} - m \int d^3x \frac{1}{4}\epsilon^{\mu\nu\rho}A_{\mu}F_{\nu\rho} $$
The field equations describe a massive gauge field while remaining gauge invariant.
$$\partial_{\mu} F^{\mu\nu} + \frac{m}{2} \epsilon^{\nu\rho\sigma}F_{\rho\sigma} = 0$$
The Maxwell (or, more generally, Yang-Mills)-Chern-Simons theory violates parity
due to the antisymmetric epsilon tensor. Under a gauge transformation with parameter $\alpha$, the Lagrangian changes by a total derivative:
$$ \delta \mathcal{L} \propto \frac{m}{4}\partial_{\mu}(\alpha\,\epsilon^{\mu\nu\rho}F_{\nu\rho})$$
This leads to the quantization of the topological mass in the non-Abelian case on a compact spacetime manifold (by a mechanism similar to the Dirac quantization condition for the magnetic charge).
At low energy, the mass term dominates and this model with sources describes the integer Hall effect.
Topological mass generation models in 3+1 dimensions have been proposed (please see, for example, the following article by Savvidy), but they are not considered attractive because they require the inclusion of additional tensor fields. They do not seem directly relevant to the standard model.
However, we can add a Higgs action in 2+1 dimensions
$$ S_{H}= \int d^3x \big ( |D_{\mu}\Phi|^2 + \frac{\lambda}{4}(|\Phi|^2-1)^2\big ) $$
(to the theory without the Maxwell term). The resulting model is called the Chern-Simons-Higgs model. It exhibits soliton solutions (vortices) with fractional electric charge (please see Paul and Khare), which are used in the explanation of the fractional quantum Hall effect. | {
"domain": "physics.stackexchange",
"id": 10044,
"tags": "quantum-field-theory, mass, topological-field-theory, higgs"
} |
If the earth stopped spinning what would happen to the moon | Question: A question was posed at work today, "if the earth stopped spinning what would happen to the moon"? Ignoring any effects on the Earth, what actually happens to the Moon? Does it continue to orbit the now non-rotating Earth, or does it fly off into space like a massive asteroid, or more likely some other thing that I can't conceive of?
Note, the Earth stops spinning, not orbiting the sun.
For additional curiosity, what would be the differences if this were an extremely gradual slowing vs an almost instantaneous stoppage?
Answer: There is not the slightest chance of the Earth stopping spinning, but if it did there would be hardly any effect on the moon. It would continue to orbit just as it has always done. You ask about a very gradual slowing, and what the result of that would be. We are on very firm ground here, because there has already been a gradual slowing and we know exactly what the results have been.
The tidal interaction of the moon in its orbit around the Earth has considerably slowed the Earth's rotation over the last 4 billion years, and this rotational energy was captured by the moon and has boosted it into a higher orbit. The moon is still moving away from us at the rate of several centimetres per year. Presumably, if the Earth stopped rotating there would be no rotational energy for the moon to capture, but that is not going to happen. | {
"domain": "astronomy.stackexchange",
"id": 3898,
"tags": "the-moon"
} |
About Plane motion | Question: An excerpt from a book I read:
“In reality, objects are moving in a 3-dimensional space. However, if the acceleration of the object is constant, then there must be a certain plane which contains the initial velocity vector and the acceleration vector. (In Geometry two straight lines define a plane. Here the two lines are the acceleration and the initial velocity.) After that, the motion becomes a 2-dimensional problem, because the object has no velocity and acceleration components perpendicular to the plane. In other words, the motion of the object is confined to the plane.”
I really do not understand the concept of a plane. What are “components” perpendicular to a plane? I see the 3D vector in physics as a moving object which has a height, width and length. So I don’t see how the object can be “always within a plane” with respect to initial velocity vector and acceleration vector. By the way, I don’t understand how acceleration and velocity can be a vector either (if they can, what do their components ijk respectively represent?)
Can anyone explain that from the most fundamental to me? I’m very new to this subject. Thanks!
Answer: Physical Quantities are primarily of two types, vectors and scalars, vector quantities need direction and magnitude both to be specified, scalar quantities are sufficiently described by magnitude alone. Distance between two points is fully described by magnitude of length which has to be travelled to reach from one point to another, hence it's a scalar. While Displacement is a vector, suppose you have a body at point A in 3D co-ordinate system, after body underwent motion it was found at point B, the straight vector with it's tail at A and head at B is a displacement vector or simply a position vector. Position vector shows distance of body from tail of vector as well as direction in which the body is located relative to reference point.
Velocity is first derivative of position vector and acceleration the second derivative of it. Since position is a vector, its derivative is also a vector, and it has the same direction as the displacement. Hence velocity is a vector. Magnitude of velocity is called speed. Since velocity is a vector, its derivative is also a vector, and it has the direction of the change in velocity. Hence acceleration too is a vector and it is directed along the direction of change of velocity.
The components of velocity and acceleration vectors specify the rates of change of position and velocity along the coordinate axes, respectively. For example, if the position vector is $\vec{r}=(x,y,z)$, then the velocity vector is $\vec{v}=(v_x,v_y,v_z)$, where $v_x=\frac{dx}{dt}$, $v_y=\frac{dy}{dt}$, and $v_z=\frac{dz}{dt}$. Similarly, the acceleration vector is $\vec{a}=(a_x,a_y,a_z)$, where $a_x=\frac{dv_x}{dt}=\frac{d^2x}{dt^2}$, $a_y=\frac{dv_y}{dt}=\frac{d^2y}{dt^2}$, and $a_z=\frac{dv_z}{dt}=\frac{d^2z}{dt^2}$. The components of these vectors can be positive, negative, or zero, depending on the direction of motion and the choice of coordinate system.
There is no truly 2D interaction possible practically, every physical interaction occur in 3D but for sake of simplicity it's often convenient to simplify the system to 2D plane. To do this we only consider the coplanar vectors neglecting any vector going outside the plane. It's like we are doing analysis of carrom board, every interaction on carrom board is confined to two dimensional plane, pieces generally don't move above or below the plane (expect when they fall in hole or are hit very hard making them topple or fly outside board).
For example, a ball thrown at an angle, a car turning around a curve, or a satellite orbiting the earth are examples of motion in a plane.
To analyze motion in a plane, we need to use vector quantities, such as displacement, velocity, and acceleration, which have both magnitude and direction. We can represent these vectors by their components along the X and Y axes, using trigonometry and the Pythagorean theorem.
If the acceleration of an object is constant, then the object's motion can be simplified by finding a plane that contains both the initial velocity vector and the acceleration vector. This plane is called the trajectory plane of the object. The trajectory plane is unique, because any two non-parallel vectors define a plane. The object will always move within this plane, because it has no component of velocity or acceleration perpendicular to it. This means that we can treat the motion as a two-dimensional problem, and use the equations of motion separately for the X and Y components.
For example, consider a projectile launched at an angle from the ground with an initial velocity of $v_0$ and an acceleration of $-g$ (due to gravity) along the Y axis. The trajectory plane of the projectile is the vertical plane that contains the initial velocity vector and the acceleration vector. The projectile will follow a parabolic path within this plane, and its motion can be described by the following equations:
$$v_x=v_0\cos\theta$$
$$v_y=v_0\sin\theta-gt$$
$$x=(v_0\cos\theta)\, t$$
$$y=(v_0\sin\theta)\, t-\frac{1}{2}gt^2$$
Where $\theta$ is the angle of launch, $t$ is the time, and $x$ and $y$ are the horizontal and vertical displacements of the projectile.
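As a quick numerical check of these equations (a sketch in Python; the launch values are made up for illustration):

```python
import math

def projectile_state(v0, theta_deg, g, t):
    """Position and velocity components at time t, from the equations above."""
    theta = math.radians(theta_deg)
    vx = v0 * math.cos(theta)          # horizontal velocity stays constant
    vy = v0 * math.sin(theta) - g * t  # vertical velocity changes under gravity
    x = v0 * math.cos(theta) * t
    y = v0 * math.sin(theta) * t - 0.5 * g * t**2
    return x, y, vx, vy

# Example: v0 = 20 m/s launched at 45 degrees, sampled at t = 1 s.
x, y, vx, vy = projectile_state(20.0, 45.0, 9.8, 1.0)
```

Since there is no component of velocity or acceleration out of the launch plane, these two coordinates describe the whole motion.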
Working out realistic problems on kinematics for 1D then 2D and then 3D may help you properly grasp the ideas which will take lot of time to understand theoretically. By my experience Problems in general physics by Irodov is very nice book to solve such problems. | {
"domain": "physics.stackexchange",
"id": 98912,
"tags": "kinematics, geometry"
} |
Insufficient memory to run circuit using the statevector simulator | Question: I am a newbie to quantum and have been trying qiskit library for learning quantum computing (in order to explore quantum effects on cryptography). I am basically trying to build Grover Oracle for different symmetric key algorithms. For this I am learning various operations mostly performed by these algos in classical counterparts like XOR, Bit Shift, Addition and Modulo operations etc.
I have created a 4 bit circuit for a Full Adder in qiskit and extended it to 8 qbits. But when I try it for 12 bits, it raises the error below
Simulation failed and returned the following error message:
ERROR: [Experiment 0] Insufficient memory to run circuit circuit-584 using the statevector simulator. Required memory: 67108864M, max memory: 32712M
QiskitError: 'Data for experiment "circuit-584" could not be found.'
I have written a simple quantum implementation of Full Adder and this Error arises when I try to measure the result.
length=12
a = QuantumRegister(length)
b = QuantumRegister(length)
s = QuantumRegister(length)
aux = QuantumRegister(length)
cout = QuantumRegister(1)
cin = QuantumRegister(1)
result = ClassicalRegister(length+1)
input1 = 0xa82
input2 = 0x905
circ=QuantumCircuit(a,b,cin,s,cout,result,aux)
Round_constant_XOR(circ,input1,a,length) # Copying input1 to a
Round_constant_XOR(circ,input2,b,length) # Copying input2 to b
full_adder(circ,a,b,cin,length,s,cout)
#circ.draw(output='mpl')
print("Operations Completed, now measuring qbits\n")
for i in range(length):
circ.measure(s[i],result[i])
circ.measure(cout,result[length])
simulator1 = AerSimulator(method='statevector')
results1 = execute(circ,backend=simulator1).result()
print("Result is: " + str(results1.get_counts(circ)))
plot_histogram(results1.get_counts(circ))
Complete Error is
---------------------------------------------------------------------------
QiskitError Traceback (most recent call last)
C:\PROGRA~1\KMSpico\temp/ipykernel_12724/2309521244.py in <module>
31 simulator1 = AerSimulator(method='statevector')
32 results1 = execute(circ,backend=simulator1).result()
---> 33 print("Result is: " + str(results1.get_counts(circ)))
34 plot_histogram(results1.get_counts(circ))
~\anaconda3\lib\site-packages\qiskit\result\result.py in get_counts(self, experiment)
267 dict_list = []
268 for key in exp_keys:
--> 269 exp = self._get_experiment(key)
270 try:
271 header = exp.header.to_dict()
~\anaconda3\lib\site-packages\qiskit\result\result.py in _get_experiment(self, key)
378
379 if len(exp) == 0:
--> 380 raise QiskitError('Data for experiment "%s" could not be found.' % key)
381 if len(exp) == 1:
382 exp = exp[0]
Does the problem lie in number of qbits simulator is able to handle or anything else. As I eventually want to perform different operations for at least 128 qbits registers in order to simulate practical symmetric algos.
Answer: I changed the simulator provided by IBM according to my current need (qbits=50)
simulator1 = AerSimulator(method='matrix_product_state')
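For context on why the statevector method runs out of memory: it stores one complex amplitude per basis state, so memory grows as 2^n with the qubit count. A rough back-of-envelope sketch (16 bytes per double-precision complex amplitude; the real simulator has extra overhead):

```python
def statevector_bytes(n_qubits):
    # 2**n amplitudes, each a complex double (2 * 8 bytes)
    return 16 * (2 ** n_qubits)

for n in (12, 30, 50):
    print(n, "qubits:", statevector_bytes(n) / 2**30, "GiB")
# 50 qubits already needs about 16 million GiB (16 PiB) of state,
# which is why a method like matrix_product_state is needed instead.
```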
Switching to the matrix_product_state method worked. | {
"domain": "quantumcomputing.stackexchange",
"id": 4146,
"tags": "qiskit, programming, grovers-algorithm, simulation, oracles"
} |
Arduino Conways Game of Life using FastLED | Question: So I attempted to write Conway's game of life on an Arduino and display using the FastLED library. I use a custom bitset class to manage the game board state. I'm looking for feedback on performance, and general code style towards embedded systems.
I should note that my led strip is a little bit weird; see the diagram below, which shows how 4 rows of 4 columns are wired. It kind of snakes back and forth, with zero being in the top right. My actual grid has 8 columns on it, and can be daisy chained to get more rows.
+----+----+----+----+
| 3 | 2 | 1 | 0 |
+----+----+----+----+
| 4 | 5 | 6 | 7 |
+----+----+----+----+
| 11 | 10 | 9 | 8 |
+----+----+----+----+
| 12 | 13 | 14 | 15 |
+----+----+----+----+
/**
Game of Life with LEDS and variable HUE
Assumes a square grid of leds on a 8x8 led matrix.
Controlled with WS2812B led controller.
*/
#include <FastLED.h>
/**
* How long should each frame be displayed roughly
*/
#define FRAME_TIME 500
/**
* Should we draw the red border. If so we reduce the playfield by one on each side.
* Undefine this if we should not draw it
*/
#define DRAW_BORDER
//#undef DRAW_BORDER
/**
* The width of the grid
*/
#define WIDTH 8
/**
* The height of the grid
*/
#define HEIGHT 32
/**
* The initial number of live cells in the grid. They are randomly placed.
*/
#define NUMBER_OF_INITIAL_LIVE_CELLS 16
/**
* WS2812B Data pin
*/
#define DATA_PIN 3
/*
* Computed Values based on above constants
*/
#ifdef DRAW_BORDER
// We provide a spot for the border to go.
#define GRID_X_START 1
#define GRID_X_END (WIDTH - 1)
#define GRID_Y_START 1
#define GRID_Y_END (HEIGHT - 1)
#else
#define GRID_X_START 0
#define GRID_X_END WIDTH
#define GRID_Y_START 0
#define GRID_Y_END HEIGHT
#endif // DRAW_BORDER
#define NUM_LEDS (WIDTH * HEIGHT)
/**************************************************
* Begin Main Code Below
**************************************************/
int computeBitNumber(byte x, byte y) {
return y * WIDTH + x;
}
template<size_t N>
class MyBitset {
public:
MyBitset& operator=(const MyBitset& b) {
memcpy(this->data, b.data, N/8);
}
void setBit(size_t idx, byte val) {
size_t idx2 = idx / 8;
int bit2 = idx % 8;
bitWrite(data[idx2], bit2, val);
}
void zeroArray() {
memset(data, 0, N/8);
}
byte getBit(size_t idx) const {
size_t idx2 = idx / 8;
return bitRead(data[idx2], idx % 8);
}
private:
byte data[N/8];
};
const CRGB BORDER_COLOR = CRGB(255, 25, 25);
const CRGB WAS_LIVE_COLOR = CHSV(115, 82, 60);
const CRGB LIVE_COLOR = CHSV(115, 82, 100);
const CRGB LIVE_AND_WAS_COLOR = CHSV(115, 82, 140);
CRGB leds[NUM_LEDS];
MyBitset<NUM_LEDS> current, prev;
CRGB& getLed(byte x, byte y) {
int xOffset = y & 1 ? (WIDTH - 1) - x : x;
return leds[y * WIDTH + xOffset];
}
void setup() {
// put your setup code here, to run once:
Serial.begin(9600);
FastLED.setBrightness(100);
FastLED.addLeds<WS2812B, DATA_PIN, GRB>(leds, NUM_LEDS);
// Randomize the initial grid everytime on start up
setupBorder();
generateRandomGame();
prev = current;
FastLED.show();
}
void loop() {
int startTime = millis();
setupBorder();
current.zeroArray();
for (int x = GRID_X_START; x < GRID_X_END; ++x) {
for (int y = GRID_Y_START; y < GRID_Y_END; ++y) {
int count = countNeighbors(x, y);
int index = computeBitNumber(x, y);
CRGB& targetLed = getLed(x, y);
if (count == 2 || count == 3) {
current.setBit(index, 1);
targetLed = prev.getBit(index) ? LIVE_AND_WAS_COLOR : LIVE_COLOR;
} else {
current.setBit(index, 0);
targetLed = prev.getBit(index) ? WAS_LIVE_COLOR : CRGB::Black;
}
}
}
prev = current;
int finishTime = millis();
Serial.println(finishTime - startTime);
FastLED.show();
FastLED.delay(FRAME_TIME - (finishTime - startTime));
}
int countNeighbors(byte xCenter, byte yCenter) {
int sum = 0;
for (int x = xCenter - 1; x < xCenter + 2; ++x) {
for (int y = yCenter - 1; y < yCenter + 2; ++y) {
if (x >= GRID_X_END || x < GRID_X_START || y < GRID_Y_START || y >= GRID_Y_END)
continue;
sum += prev.getBit(computeBitNumber(x,y));
}
}
return sum - prev.getBit(computeBitNumber(xCenter, yCenter));
}
/**
* Clears the LED array to black using memset.
*/
void setupBorder() {
memset(leds, 0, sizeof(leds));
#ifdef DRAW_BORDER
for (int i = 0; i < WIDTH; ++i) {
getLed(i, 0) = BORDER_COLOR;
getLed(i, GRID_Y_END) = BORDER_COLOR;
}
for (int i = GRID_Y_START; i < HEIGHT; ++i) {
getLed(0, i) = BORDER_COLOR;
getLed(GRID_X_END, i) = BORDER_COLOR;
}
#endif // DRAW_BORDER
}
void generateRandomGame() {
for (int i = 0; i < NUMBER_OF_INITIAL_LIVE_CELLS; ++i) {
int x, y, v;
do {
x = random(GRID_X_START, GRID_X_END);
y = random(GRID_Y_START, GRID_Y_END);
v = computeBitNumber(x, y);
} while(current.getBit(v) > 0);
current.setBit(v, 1);
getLed(x, y) = LIVE_COLOR;
}
}
Answer: First, given that this is C++, it's surprising that you're still using C-style #defines when you could be using constexpr variables, e.g.
/**
* WS2812B Data pin
*/
#define DATA_PIN 3
could have been done in one line as
constexpr int ws2812b_data_pin = 3;
One place it does still make sense to use #defines is when you have things that could conceivably be configured at build time. For example,
/**
* Should we draw the red border. If so we reduce the playfield by one on each side.
* Undefine this if we should not draw it
*/
#define DRAW_BORDER
//#undef DRAW_BORDER
seems like a reasonable use of the preprocessor. However, it would be much more conventional, and useful, if you permitted the build system to control the border via -DDRAW_BORDER=1 and -DDRAW_BORDER=0, rather than -DDRAW_BORDER and -UDRAW_BORDER. That is, the traditional way to write a macro like this is:
// Should we draw the red border?
// Default to "yes", but let the build system override it with -DDRAW_BORDER=0.
#ifndef DRAW_BORDER
#define DRAW_BORDER 1
#endif
#if DRAW_BORDER
constexpr int grid_x_start = ...
#endif
MyBitset& operator=(const MyBitset& b) {
memcpy(this->data, b.data, N/8);
}
C++20 deprecated providing a user-defined operator= without a user-declared copy constructor. If you provide one, you should provide all three of the "Rule of Three" operations. Fortunately, in this case, you don't need a customized operator= at all. Just eliminate these three useless lines of code.
Also, it should have been four useless lines of code! Did you not receive a warning from your compiler about the missing return *this?
You never define the identifier byte, which makes me a little nervous. Is it just a typedef for unsigned char?
const CRGB WAS_LIVE_COLOR = CHSV(115, 82, 60);
const CRGB LIVE_COLOR = CHSV(115, 82, 100);
const CRGB LIVE_AND_WAS_COLOR = CHSV(115, 82, 140);
You forgot THE_WAS_AND_THE_LIVE_TOKYO_DRIFT...
IIUC, these constants are meant to be the colors of cells that were "live in the previous generation but not now," "live in this generation but not the previous one," and "live in both." For some reason you provide constants for these three, but then hard-code the fourth option ("live in neither generation") as CRGB::Black. I would much prefer to see this as a pure function of the two inputs:
static CRGB computeCellColor(bool prev, bool curr) {
switch (2*prev + 1*curr) {
case 0: return CRGB::Black;
case 1: return CHSV(115, 82, 100);
case 2: return CHSV(115, 82, 60);
case 3: return CHSV(115, 82, 140);
}
__builtin_unreachable();
}
Then you can write your main loop more simply:
for (int x = GRID_X_START; x < GRID_X_END; ++x) {
for (int y = GRID_Y_START; y < GRID_Y_END; ++y) {
int index = computeBitNumber(x, y);
bool isLive = computeLiveness(x, y);
bool wasLive = prev.getBit(index);
current.setBit(index, isLive);
getLed(x, y) = computeCellColor(wasLive, isLive);
}
}
I replaced your countNeighbors function with a computeLiveness function that does exactly what you need it to do — no more. Our main loop does not care about the exact number of neighbors involved; all it wants to know is a single bit of information. So that's all it should be asking for.
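The review mentions computeLiveness without spelling it out; one possible sketch is below. It is made self-contained here with a plain bool grid standing in for the bitset, and it keeps the question's count == 2 || count == 3 rule rather than the standard birth/survival rules:

```cpp
constexpr int W = 4;
constexpr int H = 4;

// Stand-in for the previous generation: a horizontal blinker in row 1.
bool prevGrid[H][W] = {
    {false, false, false, false},
    {true,  true,  true,  false},
    {false, false, false, false},
    {false, false, false, false},
};

int countNeighbors(int cx, int cy) {
    int sum = 0;
    for (int x = cx - 1; x <= cx + 1; ++x)
        for (int y = cy - 1; y <= cy + 1; ++y) {
            if (x < 0 || x >= W || y < 0 || y >= H) continue;
            sum += prevGrid[y][x];
        }
    // The 3x3 loop included the cell itself, so subtract it back out.
    return sum - prevGrid[cy][cx];
}

// All the main loop needs is one bit of information, not the count itself.
bool computeLiveness(int x, int y) {
    int count = countNeighbors(x, y);
    return count == 2 || count == 3;
}
```

As in the original, cells outside the grid are simply skipped when counting.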
It is almost correct to say leds[index] = computeCellColor(...) instead of having to do that weird "assign to the result of a function call" thing. I would suggest looking for a way to eliminate the "assign to function call." For example,
setLed(x, y, computeCellColor(wasLive, isLive));
or
leds[computeLedIndex(x, y)] = computeCellColor(wasLive, isLive);
/**
* Clears the LED array to black using memset.
*/
void setupBorder() {
memset(leds, 0, sizeof(leds));
}
I can write that code in half the number of lines:
void clearLedsToBlack() {
memset(leds, 0, sizeof(leds));
}
Also, I don't even see why you're clearing the LEDs to black on each iteration through the loop. Don't you end up overwriting all of the LEDs' values in the main loop anyway? And who says 0 means "black"? Elsewhere, when you want to set an LED to black, you use the symbolic constant CRGB::Black. You should try to be consistent — if you know black is 0, then just say 0, and if you don't know it, then don't write setupBorders to rely on it.
C++ does also allow you to assert that black is 0 at compile-time:
static_assert(CRGB::Black == 0); | {
"domain": "codereview.stackexchange",
"id": 37810,
"tags": "c++, game-of-life, arduino"
} |
Simple tip calculator code | Question: I have built a super simple tip calculator to help me try and get a hold of the javascript knowledge I've gained. I just wanted to see if there was any feedback I could get for it...
Basic functionality is:
If a letter or incorrect character is input in either the bill or the number of persons input, a message should appear at the bottom advising this.
Otherwise if everything is okay, the tip per person is shown at the bottom, followed by the total cost per person!
Here is the JS:
const button = document.querySelector('button'),
bill = document.querySelector('#amount'),
serviceRating = document.querySelector('#service'),
numberPersons = document.querySelector('#people'),
pTip = document.querySelector('.tip'),
pTotal = document.querySelector('.total'),
p = document.querySelectorAll('p'),
regex = /(^0|[a-zA-Z]+)/i;
let tipAmount = 0,
totalCostPP = 0;
button.addEventListener('click', () => {
if (regex.test(bill.value)) {
pTip.textContent = 'Please give a valid bill value'
} else if (regex.test(numberPersons.value)) {
pTip.textContent = 'Please enter how many people were there'
} else {
tipAmountPP = tipPP(bill.value, serviceRating.value, numberPersons.value);
totalCostPP = totalPP(tipAmountPP, bill.value, numberPersons.value);
for (paragraph of p) {
paragraph.classList.toggle('hidden');
}
pTip.textContent = `Tip per person: $${tipAmountPP.toFixed(2)}`;
pTotal.textContent = `Total per person: $${totalCostPP.toFixed(2)}`;
}
});
const tipPP = (bill, service, people) => {
let tip = (bill / 100) * service;
return (tip / people);
}
const totalPP = (tip, bill, people) => {
return (bill / people + tip);
}
I haven't included the HTML and CSS as I didn't see it necessary as everything is functioning with each other just now.
Anyway, appreciate any help you can give, little or large!
Answer: Overall this is good code, but there are several things that could be improved.
Encapsulation
The code should be encapsulated in an IIFE in order to isolate it from other scripts in the page, which, for example, could be using the same variable names.
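A minimal sketch of the IIFE wrapper (the names inside are illustrative):

```javascript
(() => {
  'use strict';
  // All of the calculator code lives inside this function scope,
  // so its names cannot collide with other scripts on the page.
  const tipRate = 0.15;
  const tip = (bill) => bill * tipRate;
  console.log(tip(100)); // 15
})();

// Out here, tipRate and tip simply do not exist.
```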
Also the HTML should be surrounded by an element with a unique identifier (e.g. <div id="tip-calculator"> ... </div>) to allow selecting common elements, such as button or p, but also .tip or .total, which can appear elsewhere on the page outside the context of the calculator; the selectors should then be adjusted to reflect that.
const button = document.querySelector('#tip-calculator button'),
bill = document.querySelector('#tip-calculator #amount'),
// ...
or
const tipCalculator = document.querySelector('#tip-calculator'),
button = tipCalculator.querySelector('button'),
bill = tipCalculator.querySelector('#amount'),
// ...
Conventions
It is convention to use one const/let declaration per variable.
(Potential) bugs
The regex used to validate the bill value only forbids a leading zero and the letters A to Z. Any other non-digit characters or other errors (such as more than one decimal separator) are not caught. The service rating is not validated at all, and the number of people only gets the same weak check.
You should be explicitly converting the string values from the input fields into numbers. Currently you are lucky that you are only using multiplication and division on the values, so the strings are automatically converted, but if you, for example, used addition, you'd have an error ("1" + "2" === "12", not 3).
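A small sketch of the difference, and of explicit conversion:

```javascript
// With strings, + concatenates instead of adding:
const a = "1";
const b = "2";
console.log(a + b);        // "12"

// * and / coerce to numbers, which is why the current code happens to work:
console.log(a * b);        // 2

// Converting explicitly up front avoids the trap entirely:
console.log(Number(a) + Number(b));          // 3
console.log(Number.isFinite(Number("1x")));  // false, an easy validity check
```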
You are toggling the visibility of the paragraphs on each successful calculation, so that every second calculation does not show a result.
Other possible improvements
You are not using the variables tipAmount and totalCostPP outside the event handler (or even the final else block) and their value doesn't change in there, so it would be better to declare them as const inside that block instead of outside the function.
In order to simplify the validation and conversion of the number values you should be using <input type="number"> with appropriate min, max and step attributes. It automatically forces the user only to enter valid numbers and offers the valueAsNumber property which gives you the value already converted to a number. | {
"domain": "codereview.stackexchange",
"id": 38884,
"tags": "javascript, beginner, regex, calculator"
} |
Database of color wavelengths of minerals | Question: I was wondering about the physics of color, and now am interested in finding out if there are any resources (databases, text files, html tables, or pdf listings) of minerals and their associated colors or (ideally) color wavelength spectrum.
For example, amethyst might have a white, silver, and purple color spectrum. Onyx might have a grey, blue, or black color spectrum. Granite another spectrum. Wondering (a) if any of this information is captured in any form on the web (such as 1 journal article per mineral/rock type), and (b) if it is aggregated into a database, text file, table, or other sort of list which includes lots of types, so it would say the wavelength of visible light that it emits, or some ranges of it, or even a hex color range.
Answer: This type of information is made available through the USGS Spectroscopy Lab.
There is a searchable database of their current spectral library with the appropriate information for each mineral signature (formula, sample_id, type and more). Here is an example of the signature for Hematite:
Further, there is an up to date publication accompanying the database titled USGS Spectral Library Version 7, available here. | {
"domain": "earthscience.stackexchange",
"id": 1774,
"tags": "mineralogy, light"
} |
contact sensor has no member named contacts | Question:
I was following this tutorial, but as I try to build the code I get this error message:
/usr/include/boost/asio/detail/impl/socket_ops.ipp:266:5: note: ‘boost::asio::detail::socket_ops::bind’
int bind(socket_type s, const socket_addr_type* addr,
^
/home/rob/Documents/gazebo_contact_tutorial/ContactPlugin.cc: In member function ‘virtual void gazebo::ContactPlugin::OnUpdate()’:
/home/rob/Documents/gazebo_contact_tutorial/ContactPlugin.cc:43:34: error: ‘class gazebo::sensors::ContactSensor’ has no member named ‘Contacts’
contacts = this->parentSensor->Contacts();
How to remove it? I am using ros indigo.
Originally posted by dinesh on ROS Answers with karma: 932 on 2016-08-29
Post score: 0
Answer:
Ans: enter
Originally posted by dinesh with karma: 932 on 2016-08-30
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by gvdhoorn on 2016-08-30:
I really appreciate you reporting back on the answer to this question, but cross-posting to answers.gazebosim.org is really bad style, especially if your only motivation is "to get answer quickly." (as you wrote in error launching world file in gazebo. | {
"domain": "robotics.stackexchange",
"id": 25632,
"tags": "ros"
} |
What is the difference between a Chemical Formula and Formula Unit? | Question: In my textbook, a chemical formula is described as :
"A formula that shows the kinds and numbers of atoms in the smallest representative unit of a substance"
While a formula unit is described as :
The lowest whole number ratio of ions in an ionic compound. Thus the formula unit for sodium chloride is NaCl.
Though the definitions seem to differ, the written formulas appear identical. A chemical formula appears to be another way to represent a formula unit. For example, NaCl is both the chemical formula and formula unit for Sodium Chloride.
Is there ever a difference between the written terms?
Answer: Molecular compounds such as water exist as discrete particles, molecules. This is due to the forming of covalent bonds where each atom has a specific partner to which it is bonded. Each molecule of water contains one atom of oxygen and two atoms of hydrogen. So, H2O is its formula. This can be more specifically called a molecular formula.
In ionic compounds such as table salt, NaCl, the atoms (as ions) do not bond to specific neighbors. Surrounding each chloride ion in a salt crystal are six sodium ions. Likewise, six chloride ions surround each sodium ion. This attraction of oppositely charged ions extends throughout the entire crystal. There is no discrete bonding of a particular sodium ion to a specific chloride ion. So the formula for sodium chloride is expressed as the smallest whole number ratio between the Na and Cl ions which is 1:1 for sodium chloride. So, NaCl is also a formula but not a molecular formula.
The reason for the term "formula unit" is that it is useful when we talk about how much of one substance is required to combine with a particular amount of another substance. For example: to make the smallest amount of carbonic acid we combine one molecule of water with one molecule of carbon dioxide.
But now suppose we want to combine silver nitrate with sodium chloride. Both of these are ionic compounds, they do not form molecules. In this case you would say that one formula unit (FU) of sodium chloride combines with one FU of silver nitrate.
The one-to-one combining in both these examples is simply coincidence. It may take 3 FUs of substance A to combine exactly with 2 FUs of substance B in a different example. | {
"domain": "chemistry.stackexchange",
"id": 7191,
"tags": "ionic-compounds, terminology"
} |
What is the smallest possible wavelength? | Question: I was thinking about this the other day after a quantum mechanics lecture (unrelated to the lecture I was taking) and pondered "Is there a minimum wavelength for a photon?". Searching online and thinking it over, there didn't seem to be many definitive answers; I came across a few sources that said the Planck length, but they didn't show why, just stated it as such.
I believe that:
$\lambda>0$
since
$$E=\frac{h c}{\lambda}$$
and when $\lambda \to 0$, $E\to\infty$,
which (assuming finite energy within the universe) isn't possible. However, if energy within the universe isn't finite, I guess this could be an answer: the wavelength of a photon could approach/reach 0.
But, upon further thought and inquiry, I considered the Schwarzschild Radius, where:
$$r_s=\frac{2 G M}{c^2}$$
Einstein's mass-energy equivalence formula can be used (I think, even though photons are massless):
$$E=mc^2$$
Where $m=M$ gives:
$$E=Mc^2$$
And from above, $E=\frac{h c}{\lambda}$
$$Mc^2=\frac{h c}{\lambda}$$
Thus giving
$$M=\frac{h} {c\lambda}$$
Substituting $M$ into the Schwarzschild Radius equation, this yields:
$$r_s=\frac{2 G h}{c^3\lambda}$$
Assuming some of the online statements saying the minimum wavelength is the Planck length, substituting this into the equation (and all other values) yields:
$$r_s\approx 2.031\times 10^{-34}\ \mathrm{m}$$
This means the Schwarzschild radius of a photon with a wavelength of one Planck length is larger than the Planck length? I may have assessed this result wrong, but the diameter of this black hole is then $d_s\approx 4.062\times10^{-34}\ \mathrm{m}$, which is an order of magnitude larger than the Planck length, and that doesn't seem possible in my mind. If I have thought this through correctly, this result means that $\lambda\ge 2\times r_s$?
If this is true, then the minimum wavelength of a photon occurs when $\lambda=2\times r_s$; we can denote $r_s$ and $\lambda$ as $\frac{z}{2}$ and $z$ respectively.
This rearranges to:
$$z^2=\frac{4 G h}{c^3}$$
Substituting values in and solving for $z$ when $z\ge0$ gives a minimum wavelength of $\approx 8.1027\times10^{-35}\ \mathrm{m}$; this value of $\lambda=2r_s$ means the Schwarzschild radius is $r_s\approx 4.0514\times 10^{-35}\ \mathrm{m}$.
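As a quick numerical sanity check of $z=\sqrt{4Gh/c^3}$, the arithmetic can be evaluated in a few lines of Python (constants rounded to four figures; this only checks the question's numbers):

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2
h = 6.626e-34  # Planck constant, J s
c = 2.998e8    # speed of light, m / s

z = math.sqrt(4 * G * h / c**3)  # candidate minimum wavelength
r_s = z / 2                      # corresponding Schwarzschild radius

print(f"z   = {z:.4e} m")
print(f"r_s = {r_s:.4e} m")
```

which reproduces $z \approx 8.10\times10^{-35}\ \mathrm{m}$ and $r_s \approx 4.05\times10^{-35}\ \mathrm{m}$.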
This means the Schwarzschild radius is approximately 2.5066 times the Planck length, which differs from the answer scattered around the internet of "the minimum photon wavelength is the Planck length".
So I have arrived at two conclusions/answers to my question.
(1) The minimum wavelength of a photon is $\lambda\to 0$ (assuming the universe is infinite, but this violates the Schwarzschild Radius problem I considered in the second answer to this question)
(2) The minimum wavelength of a photon is $\lambda=2\times r_s$; for this case it is $\lambda\approx 8.1027\times10^{-35}\ \mathrm{m}$
Can someone with a better understanding than me (I am a 2nd-year physics student) please help me understand whether my reasoning is correct?
Answer: Existing comments on the question give links to answers. These are that (1) a wavelength can be arbitrarily small because for any given wavelength you can always get a smaller one by jumping to another inertial reference frame; and (2) nevertheless it is hard to maintain any claim that existing physical theories correctly treat interactions at Planck scale or below in the rest frame of the interacting systems.
The main thing wrong with the argument in the question is that the photon need not have its energy concentrated in a small region. A 'photon' is not necessarily a little thing spatially concentrated at a point or some small region. It can be spread out over huge volumes. Think of it like a plane wave having energy $E = h c/\lambda$, width $a$ and length $L$. So even if its energy is high, the energy per unit volume $E/(L a^2)$ need not be high, so no black hole will form. More generally, to form a black hole you need some stuff which is not moving at light speed. | {
"domain": "physics.stackexchange",
"id": 97692,
"tags": "electromagnetic-radiation, black-holes, wavelength"
} |
Mirror an entire part in solidworks to create a new part | Question: I have a pretty complicated part in solidworks that I need to 3D print. The part is designed to be a support, holding up a gauge on the side of a small pump.
On the other side of the pump, I would like to have the same part but in the opposite orientation to hold up another gauge. Rather than going through the tedious process of building the mirrored part from scratch, it would be easier to mirror the entire part about the relevant face. Is this possible in solidworks?
Answer: Probably Jonathan R Swift will give the best reply; however, I'll give it a try.
It can be done in many ways.
Assembly level
In the assembly, there is a Mirror Components option under the Linear Component Pattern drop-down.
Part level - 1
If the distance and orientation are fixed, you can create a mirrored body within the same part. (This is something people sometimes forget.)
Then you can insert that body as you normally would.
Part Level - 2
You can use Save As to create a new part, then mirror the body in the new part (and delete the original).
Now you will have two mirrored parts that you can insert. | {
"domain": "engineering.stackexchange",
"id": 3728,
"tags": "solidworks"
} |
Is there any difference in the colour of a 620 nm light and a 621 nm wavelength light? | Question: My book says there isn't.
But, it also says that every colour in the Visible portion of the Electromagnetic Spectrum has a range of wavelengths. My doubt is-
Let's say wavelengths from 'A' nm to 'B' nm correspond to yellow light. So
my question is:
Do all wavelengths between A to B correspond to Yellow light? So, (A+1) nm, (B-1) nm, along with A nm & B nm are all exactly of the same colour ? Is there any difference even in their shades?
Does '(B+1)' nm of wavelength straightaway correspond to Orange light ? Or Is the orange colour going to build up as wavelength increases from B ?
Realistically, all wavelengths should differ in shades considering the red hue increases in every colour as wavelength increases. Then how come we say that only seven colours form white light?
PS: Please do clarify all the above three doubts.
Answer: Wikipedia defines colour as follows:
...is the characteristic of human visual perception described through color categories, with names such as red, orange, yellow, green, blue, or purple. This perception of color derives from the stimulation of cone cells in the human eye by electromagnetic radiation in the visible spectrum.
Colour is then defined solely in terms of human visual perception, so whether 620 nm and 621 nm correspond to different colours or hues depends on our capacity to tell any difference between those wavelengths. Also, the visible spectrum of light is said to be divided into seven colours because we can distinguish seven big categories in the visible spectrum, which can be analyzed as a result of the dispersion of white light through a diffraction grating, and it looks as follows (I couldn't add the full red range, but it doesn't really matter):
Realistically, all wavelengths should differ in shades considering the red hue increases in every colour as wavelength increases. Then how come we say that only seven colours form white light?
The important thing here is that this spectrum is a $\textbf{continuum}$, which results in no clearly defined limit or frontier between colours, because of the smooth colour gradient. Looking at this spectrum, the first thing we can tell is that there are seven big categories, each a range (continuum) of wavelengths that we can very easily distinguish: purple, blue, cyan, green, yellow, orange and red. Looking for example at 550 nm and 600 nm in the previous figure, we can say that both wavelengths correspond to the same "bag" of wavelengths which we name $\textbf{green}$, but if we carefully observe those colours we note that we can actually tell that they are $\textit{different}$ hues of green because of the shading.
Do all wavelengths between A to B correspond to Yellow light? So, (A+1) nm, (B-1) nm, along with A nm & B nm are all exactly of the same colour ? Is there any difference even in their shades?
If A and B define the yellow range, then all colours between A and B are said to be a hue of yellow. Take the wavelength that approximately corresponds to A nm, and go 1 nm away from it. Both wavelengths are certainly inside the "yellow bag" due to the range we defined. As this is a continuous spectrum that gradually changes, one can argue that A nm and A+1 nm are in fact $\textit{different}$ in the physical sense that they correspond to different wavelengths, and that is in fact the property that characterizes electromagnetic radiation (besides intensity, which is related to the sensitivity of the eye and is not of much importance in this discussion). But if I hand you a pair of objects that reflect $\textit{only}$ A nm and A+1 nm wavelengths, then it is almost certain that you will not be able to tell the difference between them. This way, in practical terms relative to human perception, both colours are the same kind or hue of yellow. If you go 1 nm away to the left of A nm so you leave the yellow range, one person may not be able to tell the difference, while another may say that it actually corresponds to the green range; this is one of the subjectivities that arise in defining the limits of these colour ranges. This way you see that whether a pair of wavelengths is distinguishable (different) for the human eye is not unequivocally defined.
Does '(B+1)' nm of wavelength straightaway correspond to Orange light ? Or Is the orange colour going to build up as wavelength increases from B ?
This is the same case as the limit between the green and yellow ranges previously discussed. When you $\textit{define}$ the yellow range to correspond to wavelengths between A and B nm, something like $\lambda_{yellow} \in [A,B]$ (it doesn't really matter if it is a closed or open interval), then if you accept to consider seven different categories in the visual spectrum (the colours usually used), you can say that wavelengths immediately after B nm are in the orange range, even though B-0.001 nm and B+0.001 nm are indistinguishable by the human eye.
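To make the "bags of wavelengths" idea concrete, here is a toy Python classifier; the boundary values are illustrative conventions (different sources draw them slightly differently), which is exactly the arbitrariness discussed above:

```python
# Illustrative category boundaries in nm; the limits are conventional,
# not physical, since the visible spectrum is a continuum.
RANGES = [
    (380, 450, "violet"),
    (450, 495, "blue"),
    (495, 570, "green"),
    (570, 590, "yellow"),
    (590, 620, "orange"),
    (620, 750, "red"),
]

def colour_name(wavelength_nm):
    for lo, hi, name in RANGES:
        if lo <= wavelength_nm < hi:
            return name
    return "outside visible range"

# 569 nm and 570 nm are indistinguishable to the eye,
# yet fall into different named "bags":
print(colour_name(569))  # green
print(colour_name(570))  # yellow
```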
Whenever you talk about colours in the usual sense (not making precise light measurements or anything like that), you usually don't care about a 1 nm difference, and that is one of the advantages of defining these categories qualitatively, so there are no problems like those that arise at the frontiers. In the physics realm, however, one might talk about wavelengths or frequencies instead of colours, so again we don't have to worry about ambiguities. | {
"domain": "physics.stackexchange",
"id": 62619,
"tags": "visible-light, electromagnetic-radiation"
} |
Free Electron-Bond Electron Interactions | Question: I am reading an introductory textbook on electronics: Practical Electronics for Inventors by Paul Scherz and Simon Monk. In a section discussing the motion of electrons in circuits, the textbook mentions something called free electron-bond electron interactions:
It is likely that those electrons farther "down in" the circuit will not feel the same level of repulsive force, since there may be quite a bit of material in the way which absorbs some of the repulsive energy flow emanating from the negative terminal (absorbing via electron-electron collisions, free electron-bond electron interactions, etc.).
I have never heard of such bonds/interactions, and a quick Google search reveals nothing of the same description.
The description seems odd to me, since electrons repel rather than attract/bond.
I was wondering if someone could please take the time to explain what these are, and/or please direct me to a source (Wikipedia article?) on this subject.
EDIT: I wonder if this could be in reference to metallic bonds? Metallic bonds are formed by the attraction between metal ions (of which circuitry components are composed) and delocalized, or "free", electrons (which is what is pumped out from the battery as a result of chemical reactions within).
Answer: There are two types of electrons in a metal: free electrons and bond electrons. Free electrons are electrons from the outer shells of an atom; they are delocalized. Bond electrons are electrons in the inner shells, localized around the nucleus. I think "localized" is, in this context, a more popular word than "bond". The Coulomb interaction between localized and delocalized electrons can (through a tricky quantum mechanical mechanism) lead to the delocalized electrons becoming slightly localized as well, and thus decelerated. | {
"domain": "physics.stackexchange",
"id": 74872,
"tags": "electric-circuits, electrons, interactions"
} |
Understanding basic queue and dequeue operations | Question: CLRS gives the following implementation for a queue's enqueue and dequeue operations
head = 1
tail = 1
ENQUEUE(Q, x)
Q[Q.tail] = x
if Q.tail == Q.length
Q.tail = 1
else Q.tail = Q.tail + 1
DEQUEUE(Q)
x = Q[Q.head]
if Q.head == Q.length
Q.head = 1
else Q.head = Q.head + 1
return x
but I'm having trouble understanding why both
if Q.tail == Q.length
Q.tail = 1
and
if Q.head == Q.length
Q.head = 1
are needed. What would be a conceptual (or possibly visual) explanation of these two if-statements?
Answer: CLRS defines a queue using an array which wraps around, i.e. when we can no longer insert/delete at the last position, we move to the first position.
From the book,
The elements in the queue are in locations head[Q], head [Q] + 1, . . . , tail [Q] - 1, where we "wrap around" in the sense that location 1 immediately follows location n in a circular order.
Thus,
if Q.tail == Q.length
This is for the condition when the tail points to the last element.
if Q.head == Q.length
This is for the condition when the head points to the last element.
In both cases, we move to the first position due to the circular nature of the queue.
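The wrap-around can be sketched in Python (a hypothetical 0-indexed adaptation of the CLRS pseudocode, which uses 1-based arrays):

```python
class Queue:
    """Fixed-capacity circular queue, 0-indexed (CLRS uses 1-based arrays)."""

    def __init__(self, length):
        self.q = [None] * length
        self.length = length
        self.head = 0
        self.tail = 0

    def enqueue(self, x):
        self.q[self.tail] = x
        # Wrap around: position 0 immediately follows position length-1.
        if self.tail == self.length - 1:
            self.tail = 0
        else:
            self.tail += 1

    def dequeue(self):
        x = self.q[self.head]
        if self.head == self.length - 1:
            self.head = 0
        else:
            self.head += 1
        return x

q = Queue(3)
q.enqueue('a')
q.enqueue('b')
print(q.dequeue())  # a
q.enqueue('c')
q.enqueue('d')      # tail has wrapped back to index 0
print(q.dequeue(), q.dequeue(), q.dequeue())  # b c d
```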
An example would be this: | {
"domain": "cs.stackexchange",
"id": 12519,
"tags": "data-structures, queues"
} |
how to make Mac detect AVR board using USBasp and burn program to it? | Question: I am new to Embedded, starting with AVR programming using C. I am working on Mac OS 10.9.4, so far I am using avrdude and xCode as IDE. It works very well, for now I am testing my code using Proteus.
But now I want to burn my .hex to AVR ATMega16 board. I have USBasp, which I am able to connect and it lights up the board. Now after searching on the internet, I think Mac is not detecting my board. I have checked /dev directory, but no usb device found.
So I am not sure what to do next: how to make the Mac detect my board and burn my .hex to it. I've found this: http://www.fischl.de/usbasp/ but I have no idea how to use it or whether it's required.
So the question is: how do I make a Mac detect an AVR board using USBasp and burn a program to it?
FYI: I've installed CrossPack on Mac.
Answer: After reading the AVR and CrossPack docs, I've found it really easy to do this.
For AVR development environment setup, do the following thing:
Install xCode on your Mac, choose latest version.
Now install CrossPack.
This will do all the environment setup. CrossPack installs required libs to support USBasp as well.
To create project using xCode:
Create new project
Select xAVR project on next screen
Next, set the MCU you are using, the Programmer to USBasp, and the Frequency as needed.
There you go. The first build may fail, but try again and it will work smoothly.
You can use avrdude directly from Terminal:
avrdude -c usbasp -p <partno> -P usb -v | {
"domain": "robotics.stackexchange",
"id": 445,
"tags": "usb, embedded-systems, avr"
} |
ROS Answers SE migration: ROS Man Pages | Question:
My group and I are currently working on creating man pages for various ROS commands. We are just unsure about how to submit them to the github repository. Since this creates new files, we are unsure what kind of file structure you are looking for.
Originally posted by JWMH on ROS Answers with karma: 1 on 2016-11-21
Post score: 0
Answer:
According to http://unix.stackexchange.com/questions/63573/how-is-the-path-to-search-for-man-pages-set , the default configuration of man on Ubuntu will look for man pages in share/man next to any bin directory in the PATH environment variable.
Probably the best way to integrate this into ROS is to include the man pages within the source code that they document, and then set up install rules to install them into share/man ( ${CATKIN_GLOBAL_SHARE_DESTINATION}/man in the catkin install rules )
This also makes it easy to have different man pages for each version of ROS, if multiple versions of ROS are installed side-by-side.
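A hypothetical catkin install rule for a package shipping a section-1 man page might look like this (the file name and doc/ path are made up for illustration):

```cmake
# In the CMakeLists.txt of the package that owns the command
# (file name and path are illustrative)
install(FILES doc/my_command.1
  DESTINATION ${CATKIN_GLOBAL_SHARE_DESTINATION}/man/man1
)
```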
Originally posted by ahendrix with karma: 47576 on 2016-11-21
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 26298,
"tags": "ros"
} |
Liquid that burns in a vacuum | Question: This may seem like a silly question, but is there a flammable liquid that could oxidise itself, so it could effectively burn in a vacuum?
Answer: Indeed yes, and it's been used in all kinds of airless situations, including torpedoes and rockets and I think even submarines. It's hydrogen peroxide, which when exposed to a catalyst will disproportionate, oxidizing and reducing itself:
2 H2O2(l) -> 2 H2O(g) + O2(g)
You'll notice that the oxygens on the left are in oxidation state -1. Half of them become reduced to the O in water (oxidation state -2) and half become oxidized to the O in O2 (oxidation state 0). | {
"domain": "chemistry.stackexchange",
"id": 10344,
"tags": "materials"
} |
Understanding the numerical complexity in solving a Schrödinger equation | Question: Usually, I read that the numerical complexity of solving a multielectron Schrödinger equation is due to its very big size.
I came across the following explanation:
Say there are $N$ electrons, and we want to solve it numerically; therefore we discretize the $3D$ space on a $K \times K \times K$ grid (in other words, we convert the differential equation to a difference equation, and the equation is solved at discrete points, no longer continuously).
The number of wave-function values to store on the grid is given by $K^{3N}$, which is huge even for $K=2$ and $N=100$. (Source: https://www.diva-portal.org/smash/get/diva2:935561/FULLTEXT01.pdf) I don't understand how they arrive at this.
But my thought process goes like this:
If there are 8 grid points, shouldn't there be $3N\times 8$ wavefunction values and not $8^N$ as the formula indicates? Am I missing something?
Answer: For $f:\mathbb{R} \rightarrow \mathbb{R}$ discretized on a grid with $K$ points, you have $K$ real values to store. More formally, this means that the discretized version is $f: \{1,..,K\} \rightarrow \mathbb{R}$.
For $f:\mathbb{R}^{m} \rightarrow \mathbb{R}$ discretized on a grid with $K^{m}$ points, you have $K^{m}$ real values to store. More formally, this means that the discretized version is $f: \{1,..,K\}\times...\times\{1,..,K\} \rightarrow \mathbb{R}$, where the cartesian product is taken $m$ times.
For $f:\mathbb{R}^{m} \rightarrow \mathbb{R}^w$ discretized on a grid with $K^{m}$ points, you have $wK^{m}$ real values to store, because this is just a collection of $w$ functions $\{f_1,...,f_w\}$ of the kind discussed above, namely $f_i:\mathbb{R}^{m} \rightarrow \mathbb{R}$ for $i=1,...,w$.
Now, consider the complex function $\psi(\mathbf{x}_1, ..., \mathbf{x}_N)$ in $d$ space dimensions, namely
$\Psi:\mathbb{R}^{dN} \rightarrow \mathbb{C}$. Since $\mathbb{C}$ is analogous to $\mathbb{R}^2$ (algebraically they are not the same thing, but in this context they are, practically speaking, the same space), we are in the case $f:\mathbb{R}^{m} \rightarrow \mathbb{R}^w$ with $m=dN$ and $w=2$. Therefore, the discretized version of the wave function requires you to store $2K^{dN}$ real values.
Note: if the wave function can be factorized as $\psi(\mathbf{x}_1, ..., \mathbf{x}_N) =\psi_1(\mathbf{x}_1)...\psi_N(\mathbf{x}_N) $, then the answer is $2NK^d$. If it can be fully factorized as $\psi(\mathbf{x}_1, ..., \mathbf{x}_N) =\psi_1(x_{11})...\psi_{dN}(x_{dN}) $, then you have to store only $2NdK$ real numbers.
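A quick Python evaluation of these counts (just plugging numbers into the formulas above, with $d=3$) shows the gap between the general and the factorized case:

```python
def full_grid_values(K, N, d=3):
    # General (entangled) wave function: one complex number
    # (2 reals) per point of the K**(d*N) configuration-space grid.
    return 2 * K ** (d * N)

def factorized_values(K, N, d=3):
    # Product ansatz psi_1(x_1)...psi_N(x_N): one d-dimensional
    # grid of K**d points per particle.
    return 2 * N * K ** d

K, N = 2, 100
print(full_grid_values(K, N))   # 2 * 2**300, astronomically large
print(factorized_values(K, N))  # 1600
```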
Note: if the wave function is normalized, then you have $2K^{dN}-1$ independent real values in the general case. Further constraints that may be specific to the problem at hand can also lower the number. | {
"domain": "physics.stackexchange",
"id": 91458,
"tags": "quantum-mechanics, schroedinger-equation, computational-physics"
} |
Do gauge bosons really transform according to the adjoint representation of the gauge group? | Question: It's commonly said that gauge bosons transform according to the adjoint representation of the corresponding gauge group. For example, for $SU(2)$ the gauge bosons live in the $3$-dimensional adjoint representation, and the gluons in the $8$-dimensional adjoint of $SU(3)$.
Nevertheless, they transform according to
$$ A_\mu \to A_\mu' = U A_\mu U^\dagger - ig\,(\partial_\mu U)U^\dagger, $$
which is not the transformation law for some object in the adjoint representation. For example the $W$ bosons transform according to
$$ (W_\mu)_i \to (W_\mu)_i' = (W_\mu)_i + \partial_\mu a_i(x) + \epsilon_{ijk}\, a_j(x)\, (W_\mu)_k. $$
Answer: A gauge field transforms in the adjoint of the gauge group, but not in the adjoint (or any other) representation of the group of gauge transformations.
In detail:
Let $G$ be the gauge group, and $\mathcal{G} = \{g : \mathcal{M} \to G \vert g \text{ smooth}\}$ the group of all gauge transformations.
A gauge field $A$ is a connection form on a $G$-principal bundle over the spacetime $\mathcal{M}$, which transforms as
$$ A \mapsto g^{-1}Ag + g^{-1}\mathrm{d}g$$
for any smooth $g : \mathcal{M} \to G$. If $g$ is constant, i.e. not only an element of $\mathcal{G}$, but of $G$ itself, this obviously reduces to the adjoint action, so $A$ does transform in the adjoint of $G$, but not in the adjoint of $\mathcal{G}$. With respect to $\mathcal{G}$, it does not transform in any proper linear (or projective) representation in the usual sense, but like an element of a Jet bundle. | {
"domain": "physics.stackexchange",
"id": 23187,
"tags": "terminology, gauge-theory, group-representations, gauge-symmetry"
} |
Why does Platinum evaporate if left long enough? | Question: I have been reading into research relating to redefining the 1 kg weight, as the current Platinum-Iridium standard is becoming smaller. In this article, here, it mentions that the original metal weight couldn't be pure Platinum as it evaporates over time. This article was referenced from 'The Scientist', which is a respected science magazine.
My questions are:
why does Platinum evaporate over time? As a (relatively soft) solid, I would have assumed it wouldn't lose particles due to evaporation, as the external environment is way below even the melting point of Platinum.
Is this not localised to just Platinum? Are other metals included; indeed, will most solids evaporate over time?
Answer: Say the block of Platinum is in vacuum. At such low pressures, the melting point is not the same as the one at atmospheric pressure: it becomes very close to zero temperature. In other words, everything would tend to become gas under zero pressure.
Indeed, even for a solid in vacuum, atoms sometimes will escape due to fluctuations. It might condense back on the solid later on. This competition between evaporation and condensation leads to an equilibrium at a pressure equal to the vapor pressure.
For a mixture, say Platinum+air, things are slightly different, but the idea is the same: atoms can escape the solid and condense later.
UPDATE:
Following Alexander's comment, I calculated that, at the maximum temperature where Pt is solid, 2000 K, there is still no significant evaporation inside a 1 m$^3$ volume: mass changing by $10^{-8}$. I used data from here and an ideal gas law. So the evaporation does not seem to affect Pt at room temperature. Maybe chemical processes? That would be a question for chemistry.SE. | {
"domain": "physics.stackexchange",
"id": 98047,
"tags": "thermodynamics, metals"
} |
Why are differential equations for fields in physics of order two? | Question: What is the reason for the observation that across the board fields in physics are generally governed by second order (partial) differential equations?
If someone on the street would flat out ask me that question, then I'd probably mumble something about physicists wanting to be able to use the Lagrangian approach. And to allow a positive rotation and translation invariant energy term, which allows for local propagation, you need something like $-\phi\Delta\phi$.
I assume the answer goes in this direction, but I can't really justify why more complex terms in the Lagrangian are not allowed or why higher orders are a physical problem. Even if these require more initial data, I don't see the a priori problem.
Furthermore, you could come up with quantities in the spirit of $F\wedge F$ and $F \wedge *F$, and okay, yes... maybe any made-up scalar just doesn't describe physics or misses valuable symmetries. On the other hand, in the whole renormalization business, they seem to be allowed to use lots and lots of terms in their Lagrangians. And if I understand correctly, supersymmetry theory is basically a method of introducing new Lagrangian densities too.
Do we know the limit for making up these objects? What is the fundamental justification for order two?
Answer: First of all, it's not true that all important differential equations in physics are second-order. The Dirac equation is first-order.
The number of derivatives in the equations is equal to the number of derivatives in the corresponding relevant term of the Lagrangian. These kinetic terms have the form
$$ {\mathcal L}_{\rm Dirac} = \bar \Psi \gamma^\mu \partial_\mu \Psi $$
for Dirac fields. Note that the term has to be Lorentz-invariant – a generalization of rotational invariance for the whole spacetime – and for spinors, one may contract them with $\gamma_\mu$ matrices, so it's possible to include just one derivative $\partial_\mu$.
However, for bosons which have an integer spin, there is nothing like $\gamma_\mu$ acting on them. So the Lorentz-invariance i.e. the disappearance of the Lorentz indices in the terms with derivatives has to be achieved by having an even number of them, like in
$$ {\mathcal L}_{\rm Klein-Gordon} = \frac{1}{2} \partial^\mu \Phi \partial_\mu \Phi $$
which inevitably produce second-order equations as well. Now, what about the terms in the equations with fourth or higher derivatives?
They're actually present in the equations, too. But their coefficients are powers of a microscopic scale or distance scale $L$ – because the origin of these terms are short-distance phenomena. Every time you add a derivative $\partial_\mu$ to a term, you must add $L$ as well, not to change the units of the term. Consequently, the coefficients of higher-derivative terms are positive powers of $L$ which means that these coefficients including the derivatives, when applied to a typical macroscopic situation, are of order $(L/R)^k$ where $1/R^k$ comes from the extra derivatives $\partial_\mu^k$ and $R$ is a distance scale of the macroscopic problem we are solving here (the typical scale where the field changes by 100 percent or so).
Consequently, the coefficients with higher derivatives may be neglected in all classical limits. They are there but they are negligible. Einstein believed that one should construct "beautiful" equations without the higher-derivative terms and he could guess the right low-energy approximate equations as a result. But he was wrong: the higher derivative terms are not really absent.
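To get a feel for how negligible these coefficients are, take an illustrative microscopic scale $L$ (here the Planck length) against an atomic scale $R$; the numbers are only meant to show the order of magnitude of the $(L/R)^k$ suppression:

```python
L = 1.616e-35  # Planck length in metres (illustrative microscopic scale)
R = 1e-10      # typical atomic length scale in metres (illustrative)

for k in (2, 4):
    print(f"(L/R)^{k} = {(L / R) ** k:.1e}")
```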
Now, why don't we encounter equations whose lowest-order derivative terms are absent? It's because their coefficient in the Lagrangian would have to be strictly zero but there's no reason for it to be zero. So it's infinitely unlikely for the coefficient to be zero. It is inevitably nonzero. This principle is known as Gell-Mann's anarchic (or totalitarian) principle: everything that isn't prohibited is mandatory. | {
"domain": "physics.stackexchange",
"id": 89486,
"tags": "field-theory, causality, differential-equations, locality, equations-of-motion"
} |
Paint a Cube by rolling it (Puzzle Algorithm) | Question: I stumbled across this game in Simon Tatham's puzzle app. It's called cube. The description according to the game is:
You have a grid of 16 squares, six of which are blue; on one square rests a cube. Your move is to use the arrow keys to roll the cube through 90 degrees so that it moves to an adjacent square. If you roll the cube on to a blue square, the blue square is picked up on one face of the cube; if you roll a blue face of the cube on to a non-blue square, the blueness is put down again. (In general, whenever you roll the cube, the two faces that come into contact swap colours.) Your job is to get all six blue squares on to the six faces of the cube at the same time.
Attached is a link to a screenshot of the game.
I would like to ask the CS community if there is a known algorithm for solving such a problem as I haven't found anything online.
Answer: As Dmitry noticed, there are very few possible states of the game, so it's reasonable to search for a solution using BFS traversal. But let's start with some numbers.
Starting positions
In terms of possible starting states we have
$$ 16 \cdot {16 \choose 6} = 128128 $$
Choosing the $6$ blue cells of the $4\times4$ grid gives ${16 \choose 6}$ cases, and there are $16$ initial positions for the cube, but you can easily reduce this number by a factor of $16$, because you can always roll the cube to any (white) cell of the grid without changing its coloring pattern (after each move, doing its reverse and then the move again).
Notice: we can use such a trick only for finding any solution, not an optimal one.
Secondly, we aren't interested in starting positions that are symmetric, so let's eliminate them and enumerate the rest (Burnside's lemma will be helpful). The final count equals
$$1051 \; \text{positions}$$
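This count can be reproduced with a short Burnside computation (a sketch: for each of the eight symmetries of the square, a blue pattern is fixed iff it is a union of whole cycles, so we count cycle-subsets whose lengths sum to $6$):

```python
N = 4
CELLS = [(r, c) for r in range(N) for c in range(N)]

def sym_maps():
    """The 8 symmetries of the square as cell -> cell maps."""
    rot = lambda p: (p[1], N - 1 - p[0])  # 90-degree rotation
    ref = lambda p: (p[0], N - 1 - p[1])  # mirror reflection
    maps = []
    for k in range(4):
        for do_ref in (False, True):
            def f(p, k=k, do_ref=do_ref):
                if do_ref:
                    p = ref(p)
                for _ in range(k):
                    p = rot(p)
                return p
            maps.append(f)
    return maps

def cycle_lengths(f):
    seen, lengths = set(), []
    for start in CELLS:
        if start in seen:
            continue
        n, p = 0, start
        while p not in seen:
            seen.add(p)
            p = f(p)
            n += 1
        lengths.append(n)
    return lengths

def fixed_patterns(f, blue=6):
    # Subset-sum DP: number of cycle-subsets with total length `blue`.
    dp = [1] + [0] * blue
    for length in cycle_lengths(f):
        for s in range(blue, length - 1, -1):
            dp[s] += dp[s - length]
    return dp[blue]

# Burnside's lemma: average the fixed-pattern counts over the group.
print(sum(fixed_patterns(f) for f in sym_maps()) // 8)  # 1051
```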
To conclude, we can use this fact and precompute solutions for all possible starting game states without hurting memory.
Algorithm
Let's construct a directed graph of this game. Each node will represent some state of the game (including the current coloring of the grid, the position of the cube, and the colors on the cube itself). Each directed edge from one state to another will represent the possibility of moving between these states (each state has at most $4$ neighbors, so the graph isn't dense).
Then, using BFS on the graph built this way, we can calculate (for example) the distance to each state from the initial one. That's even better, because now we can find an optimal solution! This is a description of the above approach:
Create a queue containing the starting state
Repeat while a solved state hasn't been found
Take the first state from the queue and mark it as visited
For each achievable next state, push it onto the queue (if it hasn't been visited already)
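The loop above can be sketched as a generic Python skeleton (hypothetical helper names: `neighbours(state)` would enumerate the up-to-four roll moves, `is_solved(state)` the goal test):

```python
from collections import deque

def bfs(start, neighbours, is_solved):
    """Return a shortest state sequence from start to a solved state,
    or None if no solved state is reachable."""
    parent = {start: None}  # doubles as the visited set
    queue = deque([start])
    while queue:
        state = queue.popleft()
        if is_solved(state):
            path = []  # walk the parent links back to the start
            while state is not None:
                path.append(state)
                state = parent[state]
            return path[::-1]
        for nxt in neighbours(state):
            if nxt not in parent:
                parent[nxt] = state
                queue.append(nxt)
    return None

# Toy graph standing in for the game's state graph:
toy = {0: [1, 2], 1: [3], 2: [3], 3: []}
print(bfs(0, toy.__getitem__, lambda s: s == 3))  # [0, 1, 3]
```

Since BFS visits states in order of their distance from the start, the first solved state it reaches yields an optimal move sequence.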
Implementation
I have created a github repo with an implementation of this approach. Check it out if you are interested!
Lastly, example solution may look like this one: | {
"domain": "cs.stackexchange",
"id": 20790,
"tags": "algorithms, board-games, group-theory"
} |
Basic Brainfuck interpreter (part 2) | Question: I have this obsession with esoteric programming languages. So I decided to spiff up my previous Brainfuck interpreter.
# Simple BrainF*** interpreter
# Class that stores lang variables
class Lang(object):
step = 0
cell = [0] * 30000
test_cell = [0] * 30000
pos = 0
test_pos = 0
loop = False
loop_ret = 0
# Main interpreter function
def interpreter():
code_input = raw_input('Code: ')
steps = len(code_input)
while Lang.step < steps:
if code_input[Lang.step] == '+':
Lang.cell[Lang.pos] += 1
elif code_input[Lang.step] == '-':
Lang.cell[Lang.pos] -= 1
elif code_input[Lang.step] == '>':
if Lang.pos < 30000:
Lang.pos += 1
elif Lang.pos > 30000:
Lang.pos = 0
elif code_input[Lang.step] == '<':
if Lang.pos > 0:
Lang.cell_pos -= 1
elif Lang.pos < 0:
Lang.pos = 30000
elif code_input[Lang.step] == '[':
if Lang.loop == False:
Lang.loop_ret = Lang.step
Lang.loop = True
elif code_input[Lang.step] == ']':
if Lang.cell[Lang.pos] != 0:
Lang.step = Lang.loop_ret
elif Lang.cell[Lang.pos] == 0:
Lang.loop = False
elif code_input[Lang.step] == '.':
print str(chr(Lang.cell[Lang.pos]))
elif code_input[Lang.step] == ',':
Lang.cell[Lang.pos] = int(raw_input())
elif code_input[Lang.step] == ":":
Lang.test_cell[Lang.test_pos] += 1
elif code_input[Lang.step] == ";":
Lang.test_cell[Lang.test_pos] -= 1
elif code_input[Lang.step] == "}":
if Lang.test_pos > 30000:
Lang.test_pos += 1
elif Lang.test_pos < 30000:
Lang.test_pos = 0
elif code_input[Lang.step] == "{":
if Lang.test_pos > 0:
Lang.test_pos -= 1
elif Lang.test_pos < 0:
Lang.test_pos = 30000
elif code_input[Lang.step] == "$":
if Lang.test_cell[Lang.test_pos] == Lang.cell[Lang.pos]:
print True
elif Lang.test_cell[Lang.test_pos] != Lang.cell[Lang.pos]:
print False
Lang.step += 1
# Running the program
if __name__ == "__main__":
interpreter()
If there are any issues, please mention them. All I'm looking for is general improvements.
Answer: In a word: dictionaries.
You have a class that only has class attributes and lacks any methods; that could just be a dictionary:
lang = dict(
step = 0,
cell = [0] * 30000,
test_cell = [0] * 30000,
pos = 0,
test_pos = 0,
loop = False,
loop_ret = 0
)
(Alternatively, if you really want attribute (foo.bar) rather than key (foo['bar']) access to the values, look into collections.namedtuple.)
You have a whole bunch of elifs; that could also be a dictionary (with some judiciously-named functions):
commands = {
"+": increment_byte,
"-": decrement_byte,
...
}
This makes your interpreter loop:
def interpreter():
lang = dict(...)
commands = {...}
code_input = raw_input('Code: ')
steps = len(code_input)
while lang['step'] < steps:
command = code_input[lang['step']]
if command in commands:
commands[command](lang)
lang['step'] += 1
along with e.g.:
def increment_byte(lang):
"""Increment the byte at the data pointer."""
val = lang['cell'][lang['pos']]
lang['cell'][lang['pos']] = ((val + 1) % 256)
(Note use of % per @user50399's answer.)
This has two advantages:
very simple loop in interpreter; and
commands acts as a syntax guide (covering @Dagg's comment).
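Putting the pieces together, a minimal end-to-end sketch of this dispatch pattern might look like the following (written in Python 3 for illustration and trimmed to four commands; the helper names follow the suggestions above):

```python
def increment_byte(lang):
    """Increment the byte at the data pointer, wrapping at 256."""
    lang['cell'][lang['pos']] = (lang['cell'][lang['pos']] + 1) % 256

def decrement_byte(lang):
    """Decrement the byte at the data pointer, wrapping at 256."""
    lang['cell'][lang['pos']] = (lang['cell'][lang['pos']] - 1) % 256

def move_right(lang):
    """Move the data pointer right, wrapping around the tape."""
    lang['pos'] = (lang['pos'] + 1) % 30000

def move_left(lang):
    """Move the data pointer left, wrapping around the tape."""
    lang['pos'] = (lang['pos'] - 1) % 30000

COMMANDS = {'+': increment_byte, '-': decrement_byte,
            '>': move_right, '<': move_left}

def run(code):
    """Execute a program, returning the final interpreter state."""
    lang = {'step': 0, 'cell': [0] * 30000, 'pos': 0}
    while lang['step'] < len(code):
        handler = COMMANDS.get(code[lang['step']])
        if handler:                 # unknown characters are ignored
            handler(lang)
        lang['step'] += 1
    return lang
```

Adding the remaining commands is then just a matter of writing one small function per symbol and registering it in COMMANDS.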
You could also add some input validation:
def accept_input(lang):
"""Accept one char of input, storing its value in the byte at the data pointer."""
while True:
try:
i = ord(raw_input("Enter char: "))
except TypeError:
pass
else:
if i in range(256):
lang['cell'][lang['pos']] = i
break
print("Not a valid input.")
(Note switch to ord per @user50399's answer.) | {
"domain": "codereview.stackexchange",
"id": 8865,
"tags": "python, python-2.x, interpreter, brainfuck"
} |
Are longer and shorter DNA similarly charged? | Question: A longer DNA molecule would have more phosphate groups, so it should have a greater negative charge, right? It was taught in my class that only terminal ends of DNA are charged and all the phosphates in the middle are not charged.
My teacher said that this is the reason that Electrophoresis separates the fragments. All DNA have same charge and so same force on all of them. Then difference in masses would give different acceleration and thus separation of fragments according to size. But isn’t Gel Electrophoresis more like a sieve where longer molecules move slowly and shorter pass through easily?
Answer: Each phosphate group carries a single negative charge, not just the terminal ones. The reason gel electrophoresis works so well with DNA is that charge is directly proportional to length: the longer the fragment, the more phosphates it carries. A double-stranded DNA molecule of 100 base pairs has about 200 phosphate groups and hence a charge of roughly -200; if it were twice as long it would have twice as many negative charges, and if it were 3.4 times longer it would have 3.4 times as many. Because the charge-to-length ratio is constant, charge differences are effectively cancelled out, and the gel acts as a sieve: DNA migrates at a rate inversely related to its size, so longer fragments move more slowly. | {
"domain": "biology.stackexchange",
"id": 8669,
"tags": "genetics, biochemistry, molecular-biology, gel-electrophoresis"
} |
rosmake rgbdslam | Question:
When I rosmake rgbdslam_freiburg, my CPU usage reaches 100%, which causes my computer to crash, and the make doesn't complete. I use 64-bit Ubuntu and my computer has 4 GB of memory.
Originally posted by 360693047 on ROS Answers with karma: 1 on 2014-05-11
Post score: 0
Original comments
Comment by vdonkey on 2014-05-11:
no idea. but maybe you can try catkin_make. copy rgbdslam_freiburg to your catkin home's src sub dir. run catkin_make at catkin home dir
Comment by tfoote on 2014-05-11:
Do you run out of memory? Are you overheating?
Answer:
Check whether you have parallel compilation enabled, e.g. via the environment variable $ROS_PARALLEL_JOBS (if so, there will be multiple compiler processes in "top" when you run rosmake).
4 GB might not be enough if you do compile in parallel, as some very memory-hungry template libraries are used.
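If parallel compilation turns out to be the culprit, one option (illustrative; adjust to your own shell setup) is to limit rosmake to a single job before building:

```shell
# Restrict rosmake/make to one compiler process to keep peak memory usage low
export ROS_PARALLEL_JOBS="-j1"
# then: rosmake rgbdslam_freiburg
```

With this set, rosmake passes -j1 through to make, trading build time for a much smaller memory footprint.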
Originally posted by Felix Endres with karma: 6468 on 2014-05-26
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 17916,
"tags": "ros, slam, navigation, rgbdslam-freiburg"
} |
openni kinect: build errors | Question:
I have problems installing the openni_kinect stack: I can't install the OpenNI drivers on my system. I get the following errors during 'make':
../../../../../Wrappers/OpenNI.jni/methods.inl:314: warning: deprecated conversion from string constant to ‘char*’
/bin/sh: javac: not found
make[3]: *** [../../../Bin/Release/org.OpenNI.jar] Error 127
make[2]: *** [Wrappers/OpenNI.java] Error 2
failed to execute: make PLATFORM=x86 -C ../Build > /home/yuanwei/drivers/openni/build/openni/Platform/Linux-x86/CreateRedist/Output/BuildOpenNI.txt
Building Failed!
make[1]: *** [installed] Error 1
make[1]: Leaving directory `/home/yuanwei/drivers/openni'
make: *** [openni_lib] Error 2
I've installed java sdk but these errors still stay. Any ideas how to solve this?
Originally posted by ychua on ROS Answers with karma: 71 on 2011-09-08
Post score: 0
Original comments
Comment by Mac on 2011-09-12:
Use the electric install instructions: http://www.ros.org/wiki/electric/Installation/Ubuntu. Then, do sudo apt-get install openni-kinect (I'm remembering that package name off the top of my head; it might be slightly wrong).
Comment by ychua on 2011-09-11:
my system is ubuntu 10.04, kernel linux 2.6.32-33-generic-pae, GNOME 2.30.2. My hardware is Intel Core2 duo CPU E8400. Pre-built packages? How do I install the pre-built packages? any url that you can point me to?
Comment by Mac on 2011-09-09:
We're going to need to know more about your system. What platform? What hardare? What versions? If you're on ubuntu, why not just install using the pre-built packages?
Comment by ychua on 2011-09-08:
I was following the instruction to install the openni_kinect stack. And this is the point where I'm stuck
hg clone https://kforge.ros.org/openni/drivers
cd drivers
make
Answer:
Hi Mac, thanks for your kindness and suggestions. I've managed to resolve the problem. It seems that even though I followed the Java SDK installation instructions from the Oracle website, the default Java version was still OpenJDK.
sudo apt-get install python-software-properties
sudo add-apt-repository ppa:sun-java-community-team/sun-java6
sudo apt-get update
sudo apt-get install sun-java6-jdk
I did the above steps to install the Java 6 SDK, and the OpenNI Kinect driver builds successfully now!
Originally posted by ychua with karma: 71 on 2011-09-12
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by Joy16 on 2017-02-27:
Thanks! This worked! | {
"domain": "robotics.stackexchange",
"id": 6641,
"tags": "openni-kinect"
} |
Is it "mathematically wrong" to ignore dual spaces, 1-forms, and covariant/contravariant indices in classical mechanics? | Question: If everything you are working with is in Euclidean 3-space (or $n$-space) equipped with the dot product, is there any reason to bother with distinguishing between 1-forms and vectors? or between covariant and contravariant tensor components? I'm fairly certain that if you do not, then none of your calculations or relations will be numerically wrong, but are they "mathematically wrong"?
Example: I'll write some basic tensor relations without distinguishing between vectors and 1-forms/dual vectors or between covariant/contravariant components. All indices will be subscripts. Tell me if any of the following is wrong:
I would often describe a rigid transformation of an orthonormal basis (in euclidean 3-space), ${\hat{\mathbf{e}}_i}$, to some new orthonormal basis ${\hat{\mathbf{e}}'_i}$ as
$$
\hat{\mathbf{e}}'_i\;=\; \mathbf{R}\cdot\hat{\mathbf{e}}_i \;=\; R_{ji}\hat{\mathbf{e}}_j
\qquad\qquad (i=1,2,3)
$$
For some proper orthogonal 2-tensor ${\mathbf{R}\in SO(3)}$ (or whatever the tensor equivalent of $SO(3)$ is, if that's a thing). It's then pretty straightforward to show that the components, $R_{ij}$, of ${\mathbf{R}}$ are the same in both bases and $\mathbf{R}$ itself is given in terms of ${\hat{\mathbf{e}}_i}$ and ${\hat{\mathbf{e}}'_i}$ by
$$
\mathbf{R}=R_{ij}\hat{\mathbf{e}}_i\otimes\hat{\mathbf{e}}_j = R_{ij}\hat{\mathbf{e}}'_i\otimes\hat{\mathbf{e}}'_j = \hat{\mathbf{e}}'_i\otimes\hat{\mathbf{e}}_i
\qquad,\qquad\quad
R_{ij}=R'_{ij}=\hat{\mathbf{e}}_i\cdot\hat{\mathbf{e}}'_j
$$
Then, given the basis transformation in the first equation, the components of some vector $\vec{\mathbf{u}}=u_i\hat{\mathbf{e}}_i=u'_i\hat{\mathbf{e}}'_i$ and some 2-tensor $\mathbf{T}=T_{ij}\hat{\mathbf{e}}_i\otimes \hat{\mathbf{e}}_j = T'_{ij}\hat{\mathbf{e}}'_i\otimes \hat{\mathbf{e}}'_j$
would transform as
$$
u'_i = R_{ji}u_j \qquad \text{matrix form: } \qquad [u]'= [R]^{\top}[u]
\\
T'_{ij} = R_{ki}R_{sj}T_{ks} \qquad \text{matrix form: } \qquad
[T]' = [R]^{\top}[T][R]
$$
and for some p-tensor we would have
$$S'_{j_1j_2\dots j_p} \;=\; \big(
R_{ i_1j_1}R_{ i_2j_2} \dots R_{ i_pj_p}
\big) S_{ i_1 i_2\dots i_p}
$$
and if ${\hat{\mathbf{e}}_i}$ is an inertial basis and ${\hat{\mathbf{e}}'_i}$ is some rotating basis, then the skew-symmetric angular velocity 2-tensor of the ${\hat{\mathbf{e}}'_i}$ basis relative to ${\hat{\mathbf{e}}_i}$ is given by
$$
\boldsymbol{\Omega} \;=\; \dot{\mathbf{R}}\cdot\mathbf{R}^{\top}
\qquad,\qquad
\text{components in } \hat{\mathbf{e}}_i \;:
\qquad \Omega_{ij} = \dot{R}_{ik}R_{jk}
$$
Or, in matrix form (in the $\hat{\mathbf{e}}_i$ basis) the above would be $[\Omega]=[\dot{R}][R]^{\top}$. The third equation can be used to convert to the $\hat{\mathbf{e}}'_i$ basis. The familiar angular velocity (pseudo)vector is then given by
$$
\vec{\boldsymbol{\omega}}= -\tfrac{1}{2}\epsilon_{ijk}(\hat{\mathbf{e}}_j\cdot \boldsymbol{\Omega}\cdot \hat{\mathbf{e}}_k)\hat{\mathbf{e}}_i
\qquad,\qquad
\text{components in } \hat{\mathbf{e}}_i \;:
\qquad
\omega_i = -\tfrac{1}{2}\epsilon_{ijk}\Omega_{jk}
$$
where $\epsilon_{ijk}$ are the components of the Levi-Civita 3-(pseudo)tensor, $\pmb{\epsilon}$, which itself may be written in any right-handed orthonormal bases as
$$
\pmb{\epsilon} = \epsilon_{ijk}\hat{\mathbf{e}}_i\otimes\hat{\mathbf{e}}_j \otimes \hat{\mathbf{e}}_k = \tfrac{1}{3!}\epsilon_{ijk}\hat{\mathbf{e}}_i\wedge\hat{\mathbf{e}}_j \wedge \hat{\mathbf{e}}_k
= \hat{\mathbf{e}}_1\wedge\hat{\mathbf{e}}_2 \wedge \hat{\mathbf{e}}_3
\quad,\quad \epsilon_{123}=1
$$
The time-derivative of some vector $\vec{\mathbf{u}}=u_i\hat{\mathbf{e}}_i=u'_i\hat{\mathbf{e}}'_i$ would then be given in terms of the components in the inertial and rotating bases by the familiar kinematic transport equation
$$
\dot{\vec{\mathbf{u}}} = \dot{u}_i\hat{\mathbf{e}}_i = \dot{u}'_i\hat{\mathbf{e}}'_i + \boldsymbol{\Omega}\cdot\vec{\mathbf{u}} \;=\; (\dot{u}'_i + \Omega'_{ij}u'_j )\hat{\mathbf{e}}'_i
$$
where $\boldsymbol{\Omega}\cdot\vec{\mathbf{u}} = \vec{\boldsymbol{\omega}}\times\vec{\mathbf{u}}$.
end example
question: So, I'm pretty sure that none of the above would give me numerically incorrect relations. But I called everything either a vector, 2-tensor, or 3-tensor. Nothing about forms, (1,1)-tensors, (0,2)-tensors, dual vectors, etc. Is the above formulation mathematically ''improper''? For instance, do I need to write ${\mathbf{R}}$ as a (1,1)-tensor, ${\mathbf{R}}=R^{i}_{\,j}\hat{\mathbf{e}}_i\otimes\hat{\boldsymbol{\sigma}}^j$, using the basis 1-forms $\hat{\boldsymbol{\sigma}}^j$? Does the angular velocity tensor need to be written as a 2-form or (0,2)-tensor?
context: My BS is in physics and I am currently a PhD student in engineering. Aside from a graduate relativity course I took in the physics department, I have never once seen raised indices or mention of dual vectors/1-forms in any class I have ever taken or in any academic paper, I have ever read. That was until I recently started teaching myself some differential geometry in hopes of eventually understanding Hamiltonian mechanics from the geometric view. So far, I have mostly only succeeded in destroying my confidence in my knowledge of basic tensor algebra involved in classical dynamics.
Answer: As long as you restrict yourself to orthonormal bases, then that's fine. The reason for this is that indices are "raised" or "lowered" via the metric, and in an orthonormal basis the metric components are $g_{ij}=\delta_{ij}$.
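Concretely, with $g_{ij}=\delta_{ij}$ the lowered components are numerically identical to the raised ones:

```latex
u_i \;=\; g_{ij}\,u^j \;=\; \delta_{ij}\,u^j \;=\; u^i
```

In a non-orthonormal basis $g_{ij}\neq\delta_{ij}$, so $u_i = g_{ij}u^j \neq u^i$ in general, and the covariant/contravariant distinction carries real information.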
As soon as your basis is non-orthonormal, however, this goes out the window. There are many good reasons to use non-orthonormal bases in various circumstances, but since you've explicitly stated that you'd ultimately like to understand Hamiltonian mechanics from a geometrical standpoint, I'll highlight the most glaring problem: in Hamiltonian mechanics on a symplectic manifold, there is no metric, and so the entire concept of orthonormality goes out the window.
It is still useful to define an isomorphism between tangent vectors and their duals on a symplectic manifold, but we need something other than a metric to do so. The structure we use is the symplectic form $\Omega$, which is by definition antisymmetric; this immediately implies that $\Omega_{ij}=\delta_{ij}$ is ruled out as a possibility in any coordinate system. As a result, vectors and their duals always have different components, and distinguishing between them and their transformation behaviors is crucial. | {
"domain": "physics.stackexchange",
"id": 89567,
"tags": "classical-mechanics, differential-geometry, metric-tensor, coordinate-systems, covariance"
} |
Decay pressure altitude | Question: Under the assumptions of an isothermal layer and an ideal gas, derive the equation of
exponential decay of pressure with respect to altitude using the calculus method: cut a
small piece of air of thickness dz and base area A as shown in the figure,
and then integrate all the small pieces together from altitude $z_1$ to $z_2$ with the corresponding
pressures from $p_1$ to $p_2$. Eventually, one can derive the following
$$ p_2=p_1 e^{\frac{z_1-z_2}{h}} $$
[Hint: The figure indicates that the small pressure increment may be written as
dp = −ρg dz. The equation of state of an ideal gas is assumed to hold: pV = nRT.
The density is ρ = n/V.]
Extra: Go through the derivation and find h as a function of R, T and g.
Attempt
$$\begin{aligned}
P(z+\Delta z)A - P(z)A &= \rho_{air}\, g A \Delta z
\\ \frac{P(z+\Delta z)-P(z)}{\Delta z} &= \rho_{air}\, g
\\ \frac{dP}{dz} &= \rho_{air}\, g
\end{aligned} $$
From the differential equation $$\begin{aligned}
p(z)&=P_0 e^{\rho_{air}g*z}
\\ &=p_{0}e^{\frac{M_{air} P_{abs}}{RT}*z}
\end{aligned}$$
I can't see how $$ p_2=p_1 e^{\frac{z_1-z_2}{h}} $$
was derived; I'm thinking different assumptions were made.
(I'll add a hand-drawn picture and free-body diagram in a minute.)
Answer: You are given $dp = − \rho gdz$.
Your error, although you do not realise it, is that the density $\rho$ depends on the pressure.
Use the extra information that you are given to find out how the density depends on the pressure.
Substitute for the density in the equation for $dp$ and do the integration.
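For reference, a sketch of where this leads (using the hint's convention $\rho = n/V$ so that $p = \rho R T$; with a molar mass $M$ the scale height would instead come out as $h = RT/Mg$):

```latex
dp = -\rho g \, dz = -\frac{p\,g}{RT}\,dz
\;\Longrightarrow\;
\int_{p_1}^{p_2}\frac{dp}{p} = -\frac{g}{RT}\int_{z_1}^{z_2} dz
\;\Longrightarrow\;
p_2 = p_1\, e^{\frac{z_1-z_2}{h}},
\qquad h = \frac{RT}{g}
```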
You should now be able to call a constant in your pressure equation $h$ to get the required equation for pressure. | {
"domain": "physics.stackexchange",
"id": 34749,
"tags": "homework-and-exercises, pressure, fluid-statics"
} |
Tutorials on phasing and imputing low-coverage sequencing data | Question: I am new to low-pass whole genome sequencing and have the basic idea of phasing and imputation. I have .vcf file after calling haplotypecaller tools from GATK. After searching the phasing and imputation tools I've encountered some tools like Beagle, Minimac, Shapeit, Glimpse, Eagle. The problem is that I didn't find enough hands-on material on how to process a .vcf from unphased to phased and then to an imputed one.
Can anyone suggest any hands-on tutorials on phasing and imputation?
Answer: This is all relevant for data at 0.5x-1x coverage.
Assuming you have genotype likelihood data, if you want to phase low-coverage data the most suitable options are GLIMPSE, Beagle4 and QUILT. Of the three, GLIMPSE and QUILT are the most recent and similar in performance. shapeit, eagle and minimac aren't suitable for low-coverage data.
I have found GLIMPSE to be fast and memory efficient, but it requires several different steps; breaking the genome into chunks, imputing, reforming the chunks and then sampling from the haplotypes to phase the data. It has a fairly good tutorial here.
QUILT has fewer stages and is faster, but this is unlikely to make much of a difference unless you are working on very large datasets. It has a tutorial here.
Which one you use probably doesn't matter a whole lot, but I would suggest QUILT as it has fewer stages and is slightly more recent. | {
"domain": "bioinformatics.stackexchange",
"id": 2020,
"tags": "vcf, imputation, phasing"
} |
Moving Ros folder | Question:
I would like to change the ROS folder location.
The current directory is: root/opt/ros
but if I just move it (cut and paste), ROS doesn't work.
How can I move it to my desktop?
Originally posted by andreapatri on ROS Answers with karma: 26 on 2012-11-13
Post score: 0
Original comments
Comment by SL Remy on 2012-11-14:
Why would you like it to be placed on your desktop?
Answer:
Usually you would keep the system folder for ROS there and add another folder to your $ROS_PACKAGE_PATH environment variable. I usually like to use ~/ros for my packages or packages I downloaded from other people's repositories.
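For example (a sketch; the ~/ros location is just a convention), you could leave the system install alone and prepend your own package directory in your shell startup file:

```shell
# Leave /opt/ros where it is; put your own packages in ~/ros and
# prepend that directory to the ROS package search path instead
export ROS_PACKAGE_PATH="$HOME/ros:$ROS_PACKAGE_PATH"
```

ROS will then find packages in ~/ros first, without the system folder ever having to move.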
Originally posted by georgebrindeiro with karma: 1264 on 2012-11-13
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 11738,
"tags": "ros"
} |
For semantic segmentation, why am I getting better loss values with binary cross entropy than dice coef? | Question: I'm learning data science and how to train a U-Net to do semantic segmentation.
I have a U-Net with this loss function:
def dice_coef(y_true, y_pred):
y_true_f = K.flatten(float(y_true))
y_pred_f = K.flatten(float(y_pred))
intersection = K.sum(y_true_f * y_pred_f)
return (2 * intersection + 1) // (K.sum(y_true_f) + K.sum(y_pred_f) + 1)
def dice_coef_loss(y_true, y_pred):
return 1-dice_coef(y_true, y_pred)
With the same data for training and validation, the model works better with binary_crossentropy than with dice_coef_loss.
With binary_crossentropy I get this output:
Epoch 1/50
2/698 [..............................] - ETA: 53s - loss: 0.9674 - accuracy: 0.6257WARNING:tensorflow:Callbacks method `on_train_batch_end` is slow compared to the batch time (batch time: 0.0432s vs `on_train_batch_end` time: 0.1084s). Check your callbacks.
698/698 [==============================] - 117s 168ms/step - loss: 0.0661 - accuracy: 0.9848 - val_loss: 0.0379 - val_accuracy: 0.9902
Epoch 2/50
698/698 [==============================] - 115s 165ms/step - loss: 0.0329 - accuracy: 0.9902 - val_loss: 0.0313 - val_accuracy: 0.9901
Epoch 3/50
698/698 [==============================] - 115s 165ms/step - loss: 0.0190 - accuracy: 0.9938 - val_loss: 0.0243 - val_accuracy: 0.9920
Epoch 4/50
698/698 [==============================] - 116s 166ms/step - loss: 0.0154 - accuracy: 0.9948 - val_loss: 0.0105 - val_accuracy: 0.9963
Epoch 5/50
698/698 [==============================] - 116s 166ms/step - loss: 0.0090 - accuracy: 0.9967 - val_loss: 0.0094 - val_accuracy: 0.9966
Epoch 6/50
698/698 [==============================] - 116s 166ms/step - loss: 0.0083 - accuracy: 0.9970 - val_loss: 0.0143 - val_accuracy: 0.9948
Epoch 7/50
698/698 [==============================] - 116s 166ms/step - loss: 0.0122 - accuracy: 0.9958 - val_loss: 0.0073 - val_accuracy: 0.9972
Epoch 8/50
698/698 [==============================] - 115s 165ms/step - loss: 0.0055 - accuracy: 0.9979 - val_loss: 0.0053 - val_accuracy: 0.9979
Epoch 9/50
698/698 [==============================] - 116s 166ms/step - loss: 0.0045 - accuracy: 0.9982 - val_loss: 0.0047 - val_accuracy: 0.9982
Epoch 10/50
698/698 [==============================] - 115s 165ms/step - loss: 0.0047 - accuracy: 0.9981 - val_loss: 0.0044 - val_accuracy: 0.9982
Epoch 11/50
698/698 [==============================] - 116s 166ms/step - loss: 0.0041 - accuracy: 0.9983 - val_loss: 0.0050 - val_accuracy: 0.9980
Epoch 12/50
698/698 [==============================] - 115s 165ms/step - loss: 0.1478 - accuracy: 0.9962 - val_loss: 0.0844 - val_accuracy: 0.9849
Epoch 13/50
698/698 [==============================] - 113s 162ms/step - loss: 0.0478 - accuracy: 0.9872 - val_loss: 0.0290 - val_accuracy: 0.9902
Epoch 14/50
698/698 [==============================] - 113s 162ms/step - loss: 0.0218 - accuracy: 0.9924 - val_loss: 0.0167 - val_accuracy: 0.9941
Epoch 15/50
698/698 [==============================] - 113s 162ms/step - loss: 0.0140 - accuracy: 0.9950 - val_loss: 0.0127 - val_accuracy: 0.9956
Epoch 16/50
698/698 [==============================] - 113s 162ms/step - loss: 0.0103 - accuracy: 0.9961 - val_loss: 0.0122 - val_accuracy: 0.9956
Epoch 17/50
698/698 [==============================] - 113s 162ms/step - loss: 0.0096 - accuracy: 0.9964 - val_loss: 0.0084 - val_accuracy: 0.9970
Epoch 18/50
698/698 [==============================] - 113s 162ms/step - loss: 0.0086 - accuracy: 0.9967 - val_loss: 0.0074 - val_accuracy: 0.9972
Epoch 19/50
698/698 [==============================] - 113s 162ms/step - loss: 0.0066 - accuracy: 0.9975 - val_loss: 0.0080 - val_accuracy: 0.9970
Epoch 20/50
698/698 [==============================] - 113s 162ms/step - loss: 0.0103 - accuracy: 0.9965 - val_loss: 0.0145 - val_accuracy: 0.9951
Epoch 21/50
698/698 [==============================] - 113s 163ms/step - loss: 0.0065 - accuracy: 0.9976 - val_loss: 0.0055 - val_accuracy: 0.9979
Epoch 22/50
698/698 [==============================] - 113s 162ms/step - loss: 0.0051 - accuracy: 0.9981 - val_loss: 0.0057 - val_accuracy: 0.9978
Epoch 23/50
698/698 [==============================] - 113s 162ms/step - loss: 0.0058 - accuracy: 0.9977 - val_loss: 0.0051 - val_accuracy: 0.9981
Epoch 24/50
698/698 [==============================] - 113s 162ms/step - loss: 0.0046 - accuracy: 0.9982 - val_loss: 0.0055 - val_accuracy: 0.9980
Epoch 25/50
698/698 [==============================] - 113s 162ms/step - loss: 0.0044 - accuracy: 0.9983 - val_loss: 0.0051 - val_accuracy: 0.9981
Epoch 26/50
698/698 [==============================] - 113s 162ms/step - loss: 0.0049 - accuracy: 0.9981 - val_loss: 0.0089 - val_accuracy: 0.9968
Epoch 27/50
698/698 [==============================] - 113s 162ms/step - loss: 0.0045 - accuracy: 0.9982 - val_loss: 0.0043 - val_accuracy: 0.9983
Epoch 28/50
698/698 [==============================] - 113s 162ms/step - loss: 0.0038 - accuracy: 0.9985 - val_loss: 0.0044 - val_accuracy: 0.9984
Epoch 29/50
698/698 [==============================] - 113s 161ms/step - loss: 0.0069 - accuracy: 0.9975 - val_loss: 0.0061 - val_accuracy: 0.9978
Epoch 30/50
698/698 [==============================] - 113s 161ms/step - loss: 0.0039 - accuracy: 0.9984 - val_loss: 0.0045 - val_accuracy: 0.9982
Epoch 31/50
698/698 [==============================] - 113s 161ms/step - loss: 0.0033 - accuracy: 0.9986 - val_loss: 0.0038 - val_accuracy: 0.9985
Epoch 32/50
698/698 [==============================] - 112s 161ms/step - loss: 0.0032 - accuracy: 0.9987 - val_loss: 0.0041 - val_accuracy: 0.9984
Epoch 33/50
698/698 [==============================] - 113s 161ms/step - loss: 0.0033 - accuracy: 0.9986 - val_loss: 0.0037 - val_accuracy: 0.9985
Epoch 34/50
698/698 [==============================] - 113s 161ms/step - loss: 0.0032 - accuracy: 0.9987 - val_loss: 0.0038 - val_accuracy: 0.9985
Epoch 35/50
698/698 [==============================] - 112s 161ms/step - loss: 0.0030 - accuracy: 0.9987 - val_loss: 0.0039 - val_accuracy: 0.9985
Epoch 36/50
698/698 [==============================] - 112s 161ms/step - loss: 0.0074 - accuracy: 0.9971 - val_loss: 0.0046 - val_accuracy: 0.9982
Epoch 37/50
698/698 [==============================] - 113s 162ms/step - loss: 0.0031 - accuracy: 0.9987 - val_loss: 0.0033 - val_accuracy: 0.9987
Epoch 38/50
698/698 [==============================] - 113s 162ms/step - loss: 0.0027 - accuracy: 0.9989 - val_loss: 0.0032 - val_accuracy: 0.9987
Epoch 39/50
698/698 [==============================] - 113s 162ms/step - loss: 0.0026 - accuracy: 0.9989 - val_loss: 0.0032 - val_accuracy: 0.9987
Epoch 40/50
698/698 [==============================] - 113s 162ms/step - loss: 0.0131 - accuracy: 0.9960 - val_loss: 0.0041 - val_accuracy: 0.9984
Epoch 41/50
698/698 [==============================] - 113s 162ms/step - loss: 0.0031 - accuracy: 0.9987 - val_loss: 0.0033 - val_accuracy: 0.9987
Epoch 42/50
698/698 [==============================] - 113s 162ms/step - loss: 0.0025 - accuracy: 0.9989 - val_loss: 0.0032 - val_accuracy: 0.9987
Epoch 43/50
698/698 [==============================] - 113s 162ms/step - loss: 0.0025 - accuracy: 0.9990 - val_loss: 0.0032 - val_accuracy: 0.9987
Epoch 44/50
698/698 [==============================] - 113s 162ms/step - loss: 0.0024 - accuracy: 0.9990 - val_loss: 0.0034 - val_accuracy: 0.9986
Epoch 45/50
698/698 [==============================] - 113s 162ms/step - loss: 0.0026 - accuracy: 0.9989 - val_loss: 0.0036 - val_accuracy: 0.9986
Epoch 46/50
698/698 [==============================] - 113s 162ms/step - loss: 0.0025 - accuracy: 0.9989 - val_loss: 0.0031 - val_accuracy: 0.9988
Epoch 47/50
698/698 [==============================] - 113s 162ms/step - loss: 0.0024 - accuracy: 0.9990 - val_loss: 0.0036 - val_accuracy: 0.9987
Epoch 48/50
698/698 [==============================] - 113s 162ms/step - loss: 0.0025 - accuracy: 0.9990 - val_loss: 0.0032 - val_accuracy: 0.9987
Epoch 49/50
698/698 [==============================] - 113s 162ms/step - loss: 0.0024 - accuracy: 0.9990 - val_loss: 0.0030 - val_accuracy: 0.9988
Epoch 50/50
698/698 [==============================] - 113s 161ms/step - loss: 0.0049 - accuracy: 0.9981 - val_loss: 0.0034 - val_accuracy: 0.9987
With dice_coef_loss I get this output:
Epoch 1/50
2/582 [..............................] - ETA: 1:36 - loss: 0.9994 - accuracy: 0.9923WARNING:tensorflow:Callbacks method `on_train_batch_end` is slow compared to the batch time (batch time: 0.0626s vs `on_train_batch_end` time: 0.1113s). Check your callbacks.
582/582 [==============================] - 95s 163ms/step - loss: 0.9160 - accuracy: 0.9862 - val_loss: 0.9218 - val_accuracy: 0.9853
Epoch 2/50
582/582 [==============================] - 93s 161ms/step - loss: 0.8988 - accuracy: 0.9862 - val_loss: 0.9218 - val_accuracy: 0.9853
Epoch 3/50
582/582 [==============================] - 93s 160ms/step - loss: 0.9240 - accuracy: 0.9862 - val_loss: 0.9218 - val_accuracy: 0.9853
Epoch 4/50
582/582 [==============================] - 94s 161ms/step - loss: 0.9027 - accuracy: 0.9862 - val_loss: 0.9218 - val_accuracy: 0.9853
Epoch 5/50
582/582 [==============================] - 94s 161ms/step - loss: 0.8840 - accuracy: 0.9862 - val_loss: 0.9218 - val_accuracy: 0.9853
Epoch 6/50
582/582 [==============================] - 93s 161ms/step - loss: 0.8894 - accuracy: 0.9862 - val_loss: 0.9218 - val_accuracy: 0.9853
Epoch 7/50
582/582 [==============================] - 93s 161ms/step - loss: 0.9052 - accuracy: 0.9862 - val_loss: 0.9218 - val_accuracy: 0.9853
Epoch 8/50
582/582 [==============================] - 93s 161ms/step - loss: 0.8961 - accuracy: 0.9862 - val_loss: 0.9218 - val_accuracy: 0.9853
Epoch 9/50
582/582 [==============================] - 93s 160ms/step - loss: 0.9190 - accuracy: 0.9862 - val_loss: 0.9218 - val_accuracy: 0.9853
Epoch 10/50
582/582 [==============================] - 93s 161ms/step - loss: 0.9085 - accuracy: 0.9862 - val_loss: 0.9218 - val_accuracy: 0.9853
Epoch 11/50
582/582 [==============================] - 93s 161ms/step - loss: 0.9150 - accuracy: 0.9862 - val_loss: 0.9218 - val_accuracy: 0.9853
Epoch 12/50
582/582 [==============================] - 94s 161ms/step - loss: 0.9162 - accuracy: 0.9862 - val_loss: 0.9218 - val_accuracy: 0.9853
Epoch 13/50
582/582 [==============================] - 93s 161ms/step - loss: 0.9103 - accuracy: 0.9862 - val_loss: 0.9218 - val_accuracy: 0.9853
Epoch 14/50
582/582 [==============================] - 93s 160ms/step - loss: 0.9028 - accuracy: 0.9862 - val_loss: 0.9218 - val_accuracy: 0.9853
Epoch 15/50
582/582 [==============================] - 94s 161ms/step - loss: 0.8866 - accuracy: 0.9862 - val_loss: 0.9218 - val_accuracy: 0.9853
Epoch 16/50
582/582 [==============================] - 94s 161ms/step - loss: 0.9127 - accuracy: 0.9862 - val_loss: 0.9218 - val_accuracy: 0.9853
Epoch 17/50
582/582 [==============================] - 93s 160ms/step - loss: 0.9006 - accuracy: 0.9862 - val_loss: 0.9218 - val_accuracy: 0.9853
Epoch 18/50
582/582 [==============================] - 93s 161ms/step - loss: 0.8809 - accuracy: 0.9862 - val_loss: 0.9218 - val_accuracy: 0.9853
Epoch 19/50
582/582 [==============================] - 93s 160ms/step - loss: 0.9080 - accuracy: 0.9862 - val_loss: 0.9218 - val_accuracy: 0.9853
Epoch 20/50
582/582 [==============================] - 93s 160ms/step - loss: 0.8952 - accuracy: 0.9862 - val_loss: 0.9218 - val_accuracy: 0.9853
Epoch 21/50
582/582 [==============================] - 94s 161ms/step - loss: 0.8952 - accuracy: 0.9862 - val_loss: 0.9218 - val_accuracy: 0.9853
Epoch 22/50
582/582 [==============================] - 93s 160ms/step - loss: 0.8969 - accuracy: 0.9862 - val_loss: 0.9218 - val_accuracy: 0.9853
Epoch 23/50
582/582 [==============================] - 94s 161ms/step - loss: 0.8919 - accuracy: 0.9862 - val_loss: 0.9218 - val_accuracy: 0.9853
Epoch 24/50
582/582 [==============================] - 94s 161ms/step - loss: 0.8935 - accuracy: 0.9862 - val_loss: 0.9218 - val_accuracy: 0.9853
Epoch 25/50
582/582 [==============================] - 93s 161ms/step - loss: 0.9035 - accuracy: 0.9862 - val_loss: 0.9218 - val_accuracy: 0.9853
Epoch 26/50
582/582 [==============================] - 93s 161ms/step - loss: 0.9073 - accuracy: 0.9862 - val_loss: 0.9218 - val_accuracy: 0.9853
Epoch 27/50
582/582 [==============================] - 93s 161ms/step - loss: 0.9005 - accuracy: 0.9862 - val_loss: 0.9218 - val_accuracy: 0.9853
Epoch 28/50
582/582 [==============================] - 93s 161ms/step - loss: 0.9041 - accuracy: 0.9862 - val_loss: 0.9218 - val_accuracy: 0.9853
Epoch 29/50
582/582 [==============================] - 94s 161ms/step - loss: 0.8902 - accuracy: 0.9862 - val_loss: 0.9218 - val_accuracy: 0.9853
Epoch 30/50
582/582 [==============================] - 93s 161ms/step - loss: 0.8909 - accuracy: 0.9862 - val_loss: 0.9218 - val_accuracy: 0.9853
Epoch 31/50
582/582 [==============================] - 93s 160ms/step - loss: 0.9097 - accuracy: 0.9862 - val_loss: 0.9218 - val_accuracy: 0.9853
Epoch 32/50
582/582 [==============================] - 93s 160ms/step - loss: 0.9130 - accuracy: 0.9862 - val_loss: 0.9218 - val_accuracy: 0.9853
Epoch 33/50
582/582 [==============================] - 94s 161ms/step - loss: 0.9026 - accuracy: 0.9862 - val_loss: 0.9218 - val_accuracy: 0.9853
Epoch 34/50
582/582 [==============================] - 94s 161ms/step - loss: 0.9002 - accuracy: 0.9862 - val_loss: 0.9218 - val_accuracy: 0.9853
Epoch 35/50
582/582 [==============================] - 93s 161ms/step - loss: 0.9153 - accuracy: 0.9862 - val_loss: 0.9218 - val_accuracy: 0.9853
Epoch 36/50
582/582 [==============================] - 94s 161ms/step - loss: 0.8931 - accuracy: 0.9862 - val_loss: 0.9218 - val_accuracy: 0.9853
Epoch 37/50
582/582 [==============================] - 94s 161ms/step - loss: 0.9148 - accuracy: 0.9862 - val_loss: 0.9218 - val_accuracy: 0.9853
Epoch 38/50
582/582 [==============================] - 94s 161ms/step - loss: 0.9007 - accuracy: 0.9862 - val_loss: 0.9218 - val_accuracy: 0.9853
Epoch 39/50
582/582 [==============================] - 94s 161ms/step - loss: 0.8901 - accuracy: 0.9862 - val_loss: 0.9218 - val_accuracy: 0.9853
Epoch 40/50
582/582 [==============================] - 93s 161ms/step - loss: 0.8930 - accuracy: 0.9862 - val_loss: 0.9218 - val_accuracy: 0.9853
Epoch 41/50
582/582 [==============================] - 94s 161ms/step - loss: 0.8991 - accuracy: 0.9862 - val_loss: 0.9218 - val_accuracy: 0.9853
Epoch 42/50
582/582 [==============================] - 94s 161ms/step - loss: 0.8946 - accuracy: 0.9862 - val_loss: 0.9218 - val_accuracy: 0.9853
Epoch 43/50
582/582 [==============================] - 93s 161ms/step - loss: 0.8978 - accuracy: 0.9862 - val_loss: 0.9218 - val_accuracy: 0.9853
Epoch 44/50
582/582 [==============================] - 93s 160ms/step - loss: 0.9179 - accuracy: 0.9862 - val_loss: 0.9218 - val_accuracy: 0.9853
Epoch 45/50
582/582 [==============================] - 93s 160ms/step - loss: 0.8976 - accuracy: 0.9862 - val_loss: 0.9218 - val_accuracy: 0.9853
Epoch 46/50
582/582 [==============================] - 93s 160ms/step - loss: 0.9051 - accuracy: 0.9862 - val_loss: 0.9218 - val_accuracy: 0.9853
Epoch 47/50
582/582 [==============================] - 93s 160ms/step - loss: 0.9082 - accuracy: 0.9862 - val_loss: 0.9218 - val_accuracy: 0.9853
Epoch 48/50
582/582 [==============================] - 93s 160ms/step - loss: 0.9040 - accuracy: 0.9862 - val_loss: 0.9218 - val_accuracy: 0.9853
Epoch 49/50
582/582 [==============================] - 93s 161ms/step - loss: 0.8989 - accuracy: 0.9862 - val_loss: 0.9218 - val_accuracy: 0.9853
Epoch 50/50
582/582 [==============================] - 93s 160ms/step - loss: 0.9231 - accuracy: 0.9862 - val_loss: 0.9218 - val_accuracy: 0.9853
Any advice about why I am getting better loss values with binary cross-entropy than with the Dice coefficient?
This result makes me doubt whether binary cross-entropy was the best choice of loss function.
Answer: The question asks why different loss functions lead to different error scores.
Broadly, a loss function measures the level of discrepancy between the output of the model and the actual output which we want to get.
Different loss functions formulate this differently, and are therefore, depending on the task itself, more appropriate for some tasks than others.
Binary cross-entropy is particularly helpful for binary classification tasks, where we discriminate between two classes, due to the nature of the binary cross-entropy formula.
The Dice coefficient looks at the level of overlap between the model's output and the desired output. This is particularly useful for semantic segmentation, where we can then evaluate whether the predicted mask for picking out a certain object matches the ground-truth mask. | {
"domain": "datascience.stackexchange",
"id": 8107,
"tags": "cnn, loss-function, image-segmentation, semantic-segmentation"
} |
Parallel Cascaded FIR Filters | Question: I have a relatively simple question. I know that if you connect multiple LTI systems in parallel, the overall system's frequency response is the sum of each LTI system's frequency response. Therefore, what would be the frequency response of the overall system if two FIR low-pass filters are connected in parallel? Let's say one has a cut-off frequency of 50 Hz and the other one has 100 Hz. I'm guessing it would have a frequency response of 1 between 50-100 Hz and 2 below 50 Hz.
Similarly, how would you construct a band-pass filter from a pair of low-pass filters? I am totally stumped with this one.
Answer: As you've mentioned, you can simply add the impulse responses, and, consequently, the frequency responses of LTI systems connected in parallel. In general, frequency responses are complex-valued, so you can't just add their magnitudes. However, if their phase responses are identical, you can add their magnitudes. A special case are two linear-phase FIR filters of the same length, which have the same (linear) phase response, so their magnitudes can be added.
If both filters approximate a magnitude of $1$ in their passbands, and a magnitude of zero in their stopbands, then the resulting filter will approximate a magnitude of $2$ where their passbands overlap, a magnitude of $1$ where the passband of one filter overlaps with the stopband of the other, and a magnitude of zero where both stopbands overlap. Note that FIR filters cannot have a constant magnitude (unless they have only $1$ filter tap), so all constant values are just approximated.
Now that you've figured out the resulting response when the outputs of the two filters are added, maybe you can come up with a creative way to connect the outputs of two linear phase FIR low pass filters (of the same length) with different cut-off frequencies to implement a bandpass filter. | {
"domain": "dsp.stackexchange",
"id": 6742,
"tags": "lowpass-filter, finite-impulse-response, bandpass"
} |
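To make the low-pass-to-band-pass hint in the FIR answer above concrete, here is a small NumPy sketch. The 1 kHz sample rate and 101-tap length are illustrative choices, not from the question: two same-length linear-phase windowed-sinc low-pass filters are added (the parallel connection) and subtracted, and the magnitude response is probed at a few frequencies.

```python
import numpy as np

def lowpass_taps(cutoff_hz, fs_hz, numtaps=101):
    """Linear-phase windowed-sinc FIR low-pass (odd length, Hamming window)."""
    n = np.arange(numtaps) - (numtaps - 1) / 2          # symmetric time index
    h = np.sinc(2.0 * cutoff_hz / fs_hz * n) * np.hamming(numtaps)
    return h / h.sum()                                   # normalize to unity DC gain

def gain_at(h, f_hz, fs_hz):
    """Magnitude of the frequency response at f_hz."""
    w = 2.0 * np.pi * f_hz / fs_hz
    return abs(np.sum(h * np.exp(-1j * w * np.arange(len(h)))))

fs = 1000.0
h50, h100 = lowpass_taps(50.0, fs), lowpass_taps(100.0, fs)

h_parallel = h50 + h100   # parallel connection: add impulse responses
h_bandpass = h100 - h50   # same length, same linear phase -> magnitudes subtract

print(round(gain_at(h_parallel, 10.0, fs), 2))   # ~2 where both passbands overlap
print(round(gain_at(h_parallel, 75.0, fs), 2))   # ~1 where only one passband covers
print(round(gain_at(h_bandpass, 75.0, fs), 2))   # ~1 between the two cutoffs
print(round(gain_at(h_bandpass, 10.0, fs), 2))   # ~0 below the band
```

Because both filters have identical (linear) phase, the subtraction really does subtract magnitudes, leaving a gain of approximately 1 only between the two cutoffs, i.e. a band-pass filter.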
Why doesn't a neon sign seem that hot? | Question: I heard that neon signs contain plasma, why aren't they hot?
is it because the electrons and ions do not hit the lamp's wall?
Is it because it is non thermal plasma and electrons and ions are not in thermal equilibrium?
If that is the case do the electrons and ions and neutral atoms (all of them) hit the lamps wall?
Answer: There is a difference between temperature and energy.
Plasma is, as you said, very hot - but there isn't very much of it. The density of plasma in the tube is very low, so when it does hit the walls of the tube it transfers very little energy, and the temperature of the glass tube rises only very slightly.
It's like a firework sparkler: the sparks are at 2000 °C, but they are very small, have very little mass and contain very little energy - so when one lands on you it transfers much less energy than a hot cup of coffee at 80 °C. | {
"domain": "physics.stackexchange",
"id": 5300,
"tags": "thermodynamics, electricity, everyday-life, plasma-physics"
} |
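The sparkler comparison in the answer above is easy to put numbers on with E = m·c·ΔT. A rough back-of-the-envelope in Python; the spark mass, specific heats and temperatures below are illustrative guesses, not measured values:

```python
c_iron, c_water = 450.0, 4186.0   # J/(kg K), typical specific heat capacities
skin_T = 37.0                     # deg C, body temperature

# energy each could dump into your skin, E = m * c * dT
spark_J = 1e-8 * c_iron * (2000.0 - skin_T)   # ~10 microgram spark at ~2000 deg C
coffee_J = 0.3 * c_water * (80.0 - skin_T)    # 300 g of coffee at 80 deg C

print(f"spark: {spark_J:.1e} J, coffee: {coffee_J:.0f} J")
```

The far hotter spark carries millions of times less energy than the coffee, which is exactly the temperature-versus-energy distinction the answer makes.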
LINQ query to select subjects of interest | Question: Here's the query:
using (var db = CreateContext())
{
// performing the check for HasBeenAdded inline here, is this only one db call?
return db.Subjects.Select(s => new SubjectDto(s){HasBeenAdded = db.Interests.Any(x => x.SubjectId == s.SubjectId)}).ToList();
}
Basically I create a DTO from a subject and then populate a property of that DTO (HasBeenAdded) based on whether or not that entry's foreign key exists in another table. Is this the right way to go?
Answer: In terms of "SQL" what you need is a "Left Outer Join". So I would suggest this:
using (var db = CreateContext())
{
var subjectDtos = from subject in db.Subjects
join interest in db.Interests on subject.SubjectId equals interest.SubjectId into si
from interest in si.DefaultIfEmpty()
select new SubjectDto { Subject = subject, HasBeenAdded = interest != null };
return subjectDtos.Distinct().ToList();
}
And here is why:
I think that LINQ query is more readable in 'SQL like' format.
With the join clause you retrieve only the "subjects" that have an "interest", but with potential duplications if more than one "interest" has the ID of the same "subject". (For this reason I introduced the "Distinct" in the query; you can remove it if the duplicate results interest you.)
As a matter of consistency, I suggest that rather than using the constructor, you initialize the property directly.
For the "DefaultIfEmpty" statement docs, go here. The Left Outer Join is there.
UPDATE My first code does not work. Fixed and tested now :) | {
"domain": "codereview.stackexchange",
"id": 5566,
"tags": "c#, linq"
} |
When is a knowledge base consistent? | Question: I am studying a knowledge base (KB) from the book "Artificial Intelligence: A Modern Approach" (by Stuart Russell and Peter Norvig) and from this series of slides.
A formula is satisfiable if there is some assignment to the variables that makes the formula evaluate to true. For example, if we have the boolean formula $A \land B$, then the assignments $A=\text{true}$ and $B=\text{true}$ make it satisfiable. Right?
But what does it mean for a KB to be consistent? The definition (given at slide 14 of this series of slides) is:
a KB is consistent with formula $f$ if $M(KB \cup \{ f \})$ is non-empty (there is a world in which KB is true and $f$ is also true).
Can anyone explain this part to me with an example?
Answer: I will first recapitulate the key concepts which you need to know in order to understand the answer to your question (which will be very simple, because I will just try to clarify what is given as a "definition").
In logic, a formula is e.g. $f$, $\lnot f$, $f \land g$, where $f$ can be e.g. the proposition (or variable) "today it will rain". So, in a (propositional) formula, you have propositions, i.e. sentences like "today it will rain", and logical connectives, i.e. symbols like $\land$ (i.e. logical AND), which logically connect these sentences. The propositions like "today it will rain" can often be denoted by a single (capital) letter like $P$. $f \land g$ is the combination of two formulae (where formulae is the plural of formula). So, for example, suppose that $f$ is the disjunction of the propositions "today it will rain" (denoted by $P$) and "my friend will visit me" (denoted by $Q$), and $g$ is defined as "I will play with my friend" (denoted by $S$). Then the formula $f \land g = (P \lor Q) \land S$. In general, you can combine formulae in any logically appropriate way.
In this context, a model is an assignment to each variable in a formula. For example, suppose $f = P \lor Q$, then $w = \{ P=0, Q = 1\}$ is a model for $f$, that is, each variable (e.g. $P$) is assigned either "true" ($1$) or "false" ($0$) but not both. (Note that the word model may be used to refer to different concepts depending on the context; again, in this context, you can simply think of a model as an assignment of values to the variables in a formula.)
Suppose now we define $I(f, w)$ to be a function that receives the formula $f$ and the model $w$ as input, and $I$ returns either "true" ($1$) or "false" ($0$). In other words, $I$ is a function that automatically tells us if $f$ is evaluated to true or false given the assignment $w$.
You can now define $M(f)$ to be a set of assignments (or models) to the formula $f$ such that $f$ is true. So, $M$ is a set and not just an assignment (or model). This set can be empty, it can contain one assignment or it can contain any number of assignments: it depends on the formula $f$: in some cases, $M$ is empty and, in other cases, it may contain say $n$ valid assignments to $f$, where by "valid" I mean that these assignments make $f$ evaluate to "true". For example, suppose we have formula $f = A \land \lnot A$. Then you can try to assign any value to $A$, but $f$ will never evaluate to true. In that case, $M(f)$ is an empty set, because there is no assignment to the variables (or propositions) of $f$ which make $f$ evaluate to true.
A knowledge base is a set of formulae $\text{KB} = \{ f_1, f_2, \dots, f_n \}$. So, for example, $f_2 = $ "today it will rain" and $f_3 = $ "I will go to school AND I will have lunch".
We can now define $M(\text{KB})$ to be the set of assignments to the formulae in the knowledge base $\text{KB}$ such that all formulae are true. If you think of the formulae in $\text{KB}$ as "facts", $M(\text{KB})$ is the set of assignments to these formulae under which these facts hold or are true.
In this context, we then say that a particular knowledge base (i.e., a set of formulae as defined above), denoted by $\text{KB}$, is consistent with formula $f$ if $M(\text{KB} \cup \{ f \})$ is a non-empty set, where $\cup$ means the union operation between sets: note that (as we defined it above) $\text{KB}$ is a set, and $\{ f \}$ means that we are making a set out of the formula $f$, so we are indeed performing an union operation on sets.
So, what does it mean for a knowledge base to be consistent? First of all, the consistency of a knowledge base $\text{KB}$ is defined with respect to another formula $f$. Recall that a knowledge base is a set of formulae, so we are defining the consistency of a set of formulae with respect to another formula.
When is then a knowledge base $\text{KB}$ consistent with a formula $f$? When $M(\text{KB} \cup \{ f \})$ is a non-empty set. Recall that $M$ is an assignment to the variables in its input such that its inputs evaluate to true. So, $\text{KB}$ is consistent with $f$ when there is a set of assignments of values to the formulae in $\text{KB}$ and an assignment of values to the variables in $f$ such that both $\text{KB}$ and $f$ are true. In other words, $\text{KB}$ is consistent with $f$ when both all formulae in $\text{KB}$ and $f$ can be true at the same time. | {
"domain": "ai.stackexchange",
"id": 956,
"tags": "definitions, logic, knowledge-representation, norvig-russell, knowledge-base"
} |
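The definitions in the answer above can be checked mechanically for small propositional examples. Here is a minimal Python sketch (the particular KB and formulas are invented for illustration), representing each formula as a function from a world, i.e. an assignment dict, to a truth value:

```python
from itertools import product

def models(formulas, variables):
    """M(S): all assignments (worlds) under which every formula in S is true."""
    result = []
    for values in product([False, True], repeat=len(variables)):
        world = dict(zip(variables, values))
        if all(f(world) for f in formulas):
            result.append(world)
    return result

def consistent(kb, f, variables):
    """KB is consistent with f iff M(KB union {f}) is non-empty."""
    return len(models(kb + [f], variables)) > 0

# KB = { P or Q, S }, e.g. "it will rain or my friend will visit" and "I will play"
kb = [lambda w: w["P"] or w["Q"], lambda w: w["S"]]

print(consistent(kb, lambda w: not w["P"], ["P", "Q", "S"]))  # True: P=0, Q=1, S=1 is a model
print(consistent(kb, lambda w: not w["S"], ["P", "Q", "S"]))  # False: no world satisfies S and not-S
```

Brute-force enumeration of all $2^n$ worlds is only feasible for tiny examples, but it matches the set-theoretic definition of $M(\text{KB} \cup \{f\})$ exactly.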
Calculate a specific $A$, $B$ in the general static spherically symmetric metric using geodesics | Question: The Einstein field equations (EFE), leaving out $\Lambda$ for simplicity, are:
$$R_{\mu\nu} - \frac{1}{2}g_{\mu\nu}R=-\kappa T_{\mu\nu} \tag 1$$
From that, the general static, spherically symmetric metric can be derived:
$$ds^2 = -Bdt^2 + Adr^2 + r^2(d\theta^2 + \sin^2\theta \, d\phi^2) \tag2$$
Now, I want to calculate a special metric ("toy metric") for $r>r_S$ with the following characteristics:
For $r>r_S$, it is Ricci-flat; that is, for $r>r_S$
$$R_{\mu\nu} = 0 \tag3$$
The velocity $v(A,B,r)$ of probe masses at great distance from the centre does not depend on $r$; that is,
$$\frac{\partial v(A,B,r)}{\partial r} = 0 \tag4$$
The metric shall explicitly not be asymptotically flat. $A$ is allowed to approach zero at infinity.
From 1, one can derive that
$$A = \frac{1}{B} \tag5$$
Therefore, the metric is well-defined from equation $(4)$ and equation $(2)$ .
Could you please help me progress further?
I would think that now, as the velocity of the probe masses is looked for, one has to write down the geodesic equations with $(2)$. From those, we can derive a function of the velocity which is dependent on $A$ , $B$ and $r$ . Equation $(4)$, then, using equation $(5)$, is a differential equation for the coefficient $A$. If we can solve this we can use $(5)$ again to derive the toy metric $(2)$ .
Please, help me to calculate through this.
Answer: From (1) and (3), we know that for $r > r_S$ (the region you're interested in)
$$-\kappa T_{\mu\nu} = R_{\mu\nu} - \frac{1}{2}Rg_{\mu\nu} = 0.$$
Hence, for $r > r_S$, we have vacuum. Since you are assuming spherical symmetry, Birkhoff's Theorem ensures the solution is the Schwarzschild solution for $r > r_S$, i.e.,
$$\textrm{d}s^2 = -\left(1 - \frac{2M}{r}\right)\textrm{d}t^2 + \left(1 - \frac{2M}{r}\right)^{-1}\textrm{d}r^2 + r^2 \textrm{d}\theta^2 + r^2 \sin^2 \theta \textrm{d}\phi^2$$
in geometrical units ($G = c = 1$).
Notice that spherical symmetry in GR is quite a strong assumption. Once you chose it, you pretty much fixed your spacetime up to the two functions $A$ and $B$. Notice also that the remaining conditions you listed are either not needed or inconsistent. Asking for spacetime to not be asymptotically flat is certainly inconsistent, due to Birkhoff's theorem. | {
"domain": "physics.stackexchange",
"id": 84591,
"tags": "general-relativity, differential-geometry, metric-tensor, geodesics"
} |
Is it true that under certain conditions, Mg can reduce SiO2? | Question: Is it true that under certain conditions, Mg can reduce $\ce{SiO2}$ and the latter the former? What are those conditions?
Answer: Yes it is true.
The conditions are temperatures in the 650 to 850 °C range.
There is a cool-looking YouTube video of the reaction, not that that makes it true.
For a more serious discussion see Production and Purification of Silicon by Magnesiothermic Reduction of Silica Fume
and Ordered Mesoporous Silicon through Magnesium Reduction of Polymer Templated Silica Thin Films | {
"domain": "chemistry.stackexchange",
"id": 13098,
"tags": "inorganic-chemistry"
} |
If a person were to die on the Moon or Mars, would the body decompose? | Question: There wouldn't be enough oxygen for any bacteria to decompose the body, right? Not to mention, the radiation of space might kill off most organisms on it. So would it decompose, given millions of years?
Answer: Space, as Randall notes, is really dry. Mars, (recent discoveries notwithstanding) is not much moister.
In these conditions, bodies mummify.
The microbes that live in you wouldn't survive the freezing, desiccation and radiation. There is no real upper limit on how long a mummified body could exist in space. There would be nothing to cause the body to change, and so it would remain. There would be a slow breaking down of surface proteins, due to UV light, and eventually micrometeorites would erode the body, but these processes would take many millions of years. | {
"domain": "astronomy.stackexchange",
"id": 1130,
"tags": "the-moon, solar-system, planet, space, mars"
} |
Membership in 1, 5, 2, 13, 10, ... (recursively defined sequence) | Question: Find if a given integer is in the series $1, 5, 2, 13, 10, \dots$ in the most efficient way, where the sequence is given by
$$
f(n) =
\begin{cases}
1 & n=1, \\
2f(\tfrac{n}{2})+3 & n \text{ even}, \\
2f(\tfrac{n-1}{2}) & n>1 \text{ odd}.
\end{cases}
$$
The series is infinite of course, and $x$ can be a number with at most 9 digits. The idea is not to hardcode this, but maybe to find some kind of correlation that will allow you to solve this fast. I would like to say that I have an idea of how to solve it, but I don't.
Answer: Your sequence contains all positive integers not divisible by 3. Let us prove this by induction.
Suppose first that $x$ is divisible by 3, and $f(n) = x$. If $n$ is even then $x = 2f(\tfrac{n}{2}) + 3$, so $f(\tfrac{n}{2}) = \tfrac{x-3}{2}$, which is also divisible by 3; if $n > 1$ is odd then $x = 2f(\tfrac{n-1}{2})$, so $f(\tfrac{n-1}{2}) = \tfrac{x}{2}$, which is again divisible by 3 (and $n = 1$ gives $x = 1$). By induction, this is impossible.
Suppose next that $x>0$ is not divisible by 3. If $x=1$ then $x$ appears in the sequence since $f(1) = 1$, so we can assume that $x \geq 2$. We now consider two cases: $x$ even and $x$ odd.
If $x$ is even then $\tfrac{x}{2}<x$ is positive and not divisible by 3, and so by induction, $\tfrac{x}{2} = f(m)$ for some $m$. Then $f(2m+1) = 2f(m) = x$.
Similarly, if $x$ is odd then $\tfrac{x-3}{2} < x$ is not divisible by 3. Furthermore, since $x \geq 5$, it is positive. By induction, $\tfrac{x-3}{2} = f(m)$ for some $m$. Then $f(2m) = 2f(m) + 3 = x$. | {
"domain": "cs.stackexchange",
"id": 17361,
"tags": "algorithms, recursion"
} |
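The proof above translates directly into a constant-time membership test, which can be sanity-checked against a brute-force enumeration of the recurrence. A Python sketch:

```python
def f(n):
    # the recurrence from the question
    if n == 1:
        return 1
    if n % 2 == 0:
        return 2 * f(n // 2) + 3
    return 2 * f((n - 1) // 2)

def in_sequence(x):
    # O(1) test implied by the proof: exactly the positive integers not divisible by 3
    return x >= 1 and x % 3 != 0

# cross-check the closed-form test against brute-force enumeration of the sequence
generated = {f(n) for n in range(1, 5000)}
assert all(in_sequence(x) == (x in generated) for x in range(1, 200))

print([f(n) for n in range(1, 6)])  # [1, 5, 2, 13, 10]
```

The enumeration bound is generous: following the constructive proof, the index of a value $x$ is below $2^{\lceil \log_2 x \rceil + 1}$, so indices up to 5000 cover every sequence value below 200.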
Why must all natural processes be irreversible? | Question: My thermodynamics lecturer was talking about reversibility and the idea of spontaneous change and he mentioned that all natural processes are irreversible.
Can someone offer some sort of proof or reasoning as to why nature can not produce any reversible change on its own?
Answer:
"all natural processes are irreversible."
Yes, but no.
Most physical processes are reversible when you describe them at the atomic scale. The problem is, any macroscopic system has so many atoms that the number of possible states they all could be in is inconceivably huge. Furthermore, the number of states that are uninteresting (a.k.a., "disordered") is inconceivably huger than the number of interesting (a.k.a., "ordered") states.
When you have a bunch of atoms all jiggling around and bumping into each other (e.g., gas molecules in a box), you will never, ever, ever see the system spontaneously move from a "disordered" state (e.g., same temperature everywhere in the box) to an "ordered" state (e.g., hot on one side, and cold on the other). That's not because it's impossible, but merely because the probability of it happening is so inconceivably small.
To learn more, read about entropy. | {
"domain": "physics.stackexchange",
"id": 64033,
"tags": "thermodynamics, statistical-mechanics, entropy, reversibility, dissipation"
} |
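To see just how small the probabilities in the answer above are, take the simplest toy case: the chance that every one of $N$ independent molecules happens to sit in the same half of the box is $(1/2)^N$. A quick Python illustration:

```python
import math

def log10_prob_all_one_side(n):
    # P = (1/2)^n for n molecules independently in either half of the box
    return n * math.log10(0.5)

for n in (10, 100, 10**20):
    print(f"N = {n:g}: P ~ 10^({log10_prob_all_one_side(n):.3g})")
```

Even for just 100 molecules the probability is about $10^{-30}$; for a macroscopic number of molecules it is, for all practical purposes, zero, which is why the reverse process is never observed.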
Can't import rospy | Question:
Hello,
I have a ROS application in Python and Qt, and when I run it from the terminal all is OK, but when I try to run it with a double click (the script has the appropriate permissions and Nautilus configuration), the application can't import rospy.
I use ROS Indigo and Ubuntu 14.04.
Any idea? thanks!
Originally posted by akb on ROS Answers with karma: 16 on 2016-08-20
Post score: 0
Original comments
Comment by gvdhoorn on 2016-08-21:
rospy is not on the default PYTHONPATH, and so without sourcing the appropriate /opt/ros/$distro/setup.* script it won't be resolvable for Python. It's probable that by 'double clicking', the environment is not properly set up, leading to the problems you describe.
Comment by akb on 2016-08-21:
That's what happened; I was able to discover it with your advice, thank you!
Answer:
As gvdhoorn says, the environment wasn't set up.
This link helped me: http://unix.stackexchange.com/questions/170156/bash-scripts-ran-from-from-gnome-nautilus-dont-have-enviroment-variables
I have put the ROS environment variables in a new file ~/.myenvironmentvariables and I load it from ~/.bashrc (for the terminal) and from the file /etc/X11/Xsession.d/40x11-common_xsessionrc (for the file browser), just with the line source ~/.myenvironmentvariables. And finally rospy can be imported with a double click in Nautilus.
Originally posted by akb with karma: 16 on 2016-08-21
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 25567,
"tags": "ros, python, rospy, ros-indigo"
} |
Using boost::asio::ip::udp from a model plugin | Question:
Hi everyone,
I'm trying to use a custom UDP socket to control a model. The outgoing (from the plugin) messages are received, but the plugin doesn't receive anything from the source. The messages are being queued. It's not the port, because I tried to use the code outside Gazebo and it works.
Any idea why, and how to fix this?
Thx!
XB32Z
Originally posted by XB32Z on Gazebo Answers with karma: 23 on 2016-07-20
Post score: 0
Answer:
The solution is to move the receiver to a thread (tested) or use the io_service of Gazebo (accessed via IOManager) (not tested).
The problem is that the same thread cannot access two different boost::asio::io_service instances.
Originally posted by XB32Z with karma: 23 on 2016-07-21
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 3953,
"tags": "gazebo-plugin"
} |
Formula of decay heat for a fraction of radioactive materials | Question: What is the formula of decay heat for a fraction of spent fuel? (say 50% spent fuel)
The Wigner-Way formula has not any parameter relating to the amount of fuel. Is there any other formula?
Answer: The page you link says
It is also possible to make a rough approximation by using a single half-life that represents the overall decay of the core over a certain period of time. An equation that uses this approximation is the Wigner-Way formula:
$$
P_d(t) = 0.0622 P_0
\left(
t^{-0.2} - (t_0 + t)^{-0.2}
\right)
$$
where
$P_d(t)$ is thermal power generation due to beta and gamma rays,
$P_0$ is thermal power before shutdown,
$t_0$ is time, in seconds, of thermal power level before shutdown,
$t$ is time, in seconds, elapsed since shutdown.
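Evaluated numerically, the formula looks like this; the power level and the time at power below are illustrative assumptions, not values from the question:

```python
def wigner_way(P0, t0, t):
    """Decay heat P_d(t); t0 and t in seconds, result in the units of P0."""
    return 0.0622 * P0 * (t ** -0.2 - (t0 + t) ** -0.2)

P0 = 3000e6               # e.g. 3000 MW(thermal) while operating -- assumed example
t0 = 365 * 24 * 3600.0    # one year at power -- assumed example
for t in (1.0, 3600.0, 86400.0):
    print(f"t = {t:>7.0f} s after shutdown: P_d ~ {wigner_way(P0, t0, t) / 1e6:.0f} MW")
```

Note that $t_0 = 0$ (no operating history, hence no fission-product inventory) gives $P_d = 0$ at all times.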
The emphasis on “rough approximation” is in the original. What happens is that, during fission, short-lived fission products with lots of different lifetimes accumulate towards a secular equilibrium. When power generation stops, the short-lived stuff dies away quickly and the long-lived stuff stays on. If you have unspent fuel ($t_0 = 0$), this approximation says that the decay heat is always zero.
It’s not really correct to refer to this as “a single half-life” as your link does. A single half-life would go like $2^{-t/\tau_\text{single}}$. This relationship says that, when you add up all of the half-lives of all the messy garbage that comes out of uranium fission, a time dependence of $t^{-1/5}$ captures the way that the decay radiation dies off more rapidly at the beginning than later on. It’s an empirical relationship.
You write,
The Wigner-Way formula has not any parameter relating to the amount of fuel.
But it does: the amount of spent fuel is related to the thermal power $P_0$ while the reactor was operating.
The decay heat depends on how much fission product you have, not how much fuel you started with. Possibly related.
In a comment you ask
Can you please tell me what is the 0.006 in the Wigner-Way formula?
There isn’t one. There is a numerical constant that’s ten times bigger than that, however.
A physicist who’s accustomed to doing dimensional analysis would be tempted to look at that factor and conclude that the decay power starts off as 6% of the thermal power. But that’s bogus, because the empirical Wigner-Way formula has screwy units. That factor $0.0622$ has dimension $\text{(seconds)}^{+0.2}$, and the exponent $+0.2$ has more to do with fitting a bunch of exponentials together than with any nice interpretable algebra. If you were to do the same kind of approximation for plutonium fuel, which has a different spectrum of short-lived decay products, you might expect a different exponent. | {
"domain": "physics.stackexchange",
"id": 77408,
"tags": "homework-and-exercises, nuclear-physics, nuclear-engineering"
} |
Isobaric process with non-boundary work? | Question: I came up with a doubt on isobaric processes.
Firstly, in any isobaric process, (irreversible or not) the following hold $$Q=n c_p (T_B-T_A)\tag{1}$$
$$W=p (V_B-V_A)=n R(T_B-T_A)\tag{2}$$
Is that correct?
If so consider the following situation.
A gas in a tank can expand at constant pressure $p$. When it expand it does an amount of work $p \Delta V$. Besides it, in the tank there is a fan that does work on the gas delivering a power $P$.
In total the amount of work is $$W=p\Delta V-P \cdot t$$
So in this case $(2)$ is not complete, but what about $(1)$? Is $(1)$ still valid?
Answer: if there is a fan inside the tank and if the tank's volume is kept constant, what will be happen? The air will circulate and viscosity will convert kinetic energy to thermal energy, i.e. temperature will increase. The electrical energy input will be eventually converted to thermal energy which is $\delta Q$. So $W=pdV-Pt$ is not correct. The equations are,
$$Q+Pt=nc_p(T_B-T_A)$$
$$W=p(V_B-V_A)$$ | {
"domain": "physics.stackexchange",
"id": 32258,
"tags": "homework-and-exercises, thermodynamics, work, power"
} |
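Plugging illustrative numbers into the corrected equations from the answer above (one mole of an ideal diatomic gas and a hypothetical 5 W fan running for a minute; every value here is invented for the example):

```python
R = 8.314                  # J/(mol K)
cp = 3.5 * R               # molar c_p of an ideal diatomic gas
n = 1.0                    # mol
T_A, T_B = 300.0, 320.0    # K
P_fan, t = 5.0, 60.0       # fan power (W) and run time (s) -- hypothetical

# Q + P*t = n*cp*(T_B - T_A)  =>  heat supplied from the surroundings:
Q = n * cp * (T_B - T_A) - P_fan * t
# boundary work at constant pressure is still just p*dV = n*R*dT:
W = n * R * (T_B - T_A)

print(f"Q = {Q:.1f} J, W = {W:.1f} J")
```

The fan's work enters the energy balance (reducing the external heat needed for the same temperature rise), while the boundary work $p\Delta V$ is unchanged.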
SearchActivity to search through an ArrayList | Question: I've coded this class that lets me search through an ArrayList displayed in a RecyclerView. This is the class:
public class SearchActivity extends AppCompatActivity {
ArrayList<Accordo> chords;
RecyclerView rv;
SearchView sv;
ArrayList<Accordo> filteredList;
@Override
protected void onCreate(@Nullable Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.search_layout);
/** handles the advertising */
MobileAds.initialize(getApplicationContext(), "ca-app-pub-3940256099942544/6300978111");
AdView searchBanner = (AdView) findViewById(R.id.search_ad);
AdRequest adRequest = new AdRequest.Builder().build();
searchBanner.loadAd(adRequest);
/**-------------------------------*/
Intent intent = this.getIntent();
Bundle bundle = intent.getExtras();
chords = bundle.getParcelableArrayList("chords");
filteredList = bundle.getParcelableArrayList("chords");
sv = (SearchView) findViewById(R.id.testo_ricerca);
sv.setIconifiedByDefault(false);
rv = (RecyclerView) findViewById(R.id.lista_ricerca);
rv.setLayoutManager(new LinearLayoutManager(SearchActivity.this, LinearLayoutManager.VERTICAL, false));
rv.setHasFixedSize(true);
final SearchAdapter adapter = new SearchAdapter(this, chords);
rv.setAdapter(adapter);
//SEARCH
sv.setOnQueryTextListener(new SearchView.OnQueryTextListener() {
@Override
public boolean onQueryTextSubmit(String query) {
return false;
}
@Override
public boolean onQueryTextChange(String newText) {
//FILTER AS YOU TYPE
List<Accordo> filteredModelList = filter(chords, newText);
adapter.setFilter(filteredModelList);
return true;
}
});
/** handles what happens when a list item is clicked */
ItemClickSupport.addTo(rv).setOnItemClickListener(new ItemClickSupport.OnItemClickListener(){
@Override
public void onItemClicked(RecyclerView recyclerView, int position, View v) {
Intent intent = new Intent(SearchActivity.this, ChordActivity.class);
Bundle bundle = new Bundle();
bundle.putParcelable("selected", filteredList.get(position));
intent.putExtras(bundle);
startActivity(intent);
}
});
}
private List<Accordo> filter(List<Accordo> models, String query) {
query = query.toLowerCase();
filteredList = new ArrayList<>();
for (Accordo model : models) {
final String text = model.getName().toLowerCase();
if (text.contains(query)) {
filteredList.add(model);
}
}
return filteredList;
}
}
As you can see from the code above, it is based on a filter that, every time the user types something in the search box, creates a filteredList and puts it into the RecyclerView. Of course, when the user has not typed anything, the filteredList should contain the whole ArrayList chords.
So I was getting an error (NPE) when I clicked on an item while nothing had been typed into the search box, because the filteredList was null. I managed to solve it by having both:
chords = bundle.getParcelableArrayList("chords");
filteredList = bundle.getParcelableArrayList("chords");
I think this is a huge waste of memory and resources since we are talking about an ArrayList with ca 300 elements, each of which has 5 images, strings and sounds.
Is there a more efficient way to achieve the same result?
Answer: One call to bundle.getParcelableArrayList("chords") is enough
I'm not sure if bundle.getParcelableArrayList creates a new list every time it's called. If it doesn't, then repeated calls won't double the memory used so it won't really be a problem.
But in any case, you don't need to call it twice like this:
chords = bundle.getParcelableArrayList("chords");
filteredList = bundle.getParcelableArrayList("chords");
You can make filteredList reference chords:
filteredList = chords = bundle.getParcelableArrayList("chords");
Use interfaces in declarations
These fields would be better declared as List instead of ArrayList:
ArrayList<Accordo> chords;
ArrayList<Accordo> filteredList;
Just like you used List instead of ArrayList in the filter method.
Also, probably all fields of the activity should be private.
Reduce memory churn
Every time the query text changes,
you recreate a new list,
and re-link the adapter to the new list.
It might be more efficient to reuse the same list,
by clearing and re-adding elements instead of creating a new list.
However,
there is just one tricky point,
of the initial state when there is no filter yet.
The adapter could initially be linked to chords,
and then the first time a query text is entered,
re-link it to filteredList, for example:
@Override
public boolean onQueryTextChange(String newText) {
if (filteredList == null) {
// first time used
filteredList = new ArrayList<>();
adapter.setFilter(filteredList);
}
filter(newText);
return true;
}
private void filter(String query) {
query = query.toLowerCase();
filteredList.clear();
for (Accordo model : chords) {
final String text = model.getName().toLowerCase();
if (text.contains(query)) {
filteredList.add(model);
}
}
}
Notice some other related changes:
I dropped the list parameter of filter: the filtering is always based on chords, which is a field, so the method has direct access to it; there is no need to pass it as a parameter
Made filter return void, as now it modifies filteredList in-place | {
"domain": "codereview.stackexchange",
"id": 21747,
"tags": "java, android"
} |
How to set parameter for std::map structure in launch file | Question:
Hello! As described in the title, my question is how I could input the parameter for a std::map structure in a roslaunch file, which is needed by a node. I use ros-indigo. Thanks in advance!
Originally posted by mikegao88 on ROS Answers with karma: 31 on 2015-11-03
Post score: 1
Answer:
I guess the question is rather how to get some configuration from a launch file into the nodes; that you then go on to store that information in a std::map is just an extra.
You most probably want to use the parameter server which is available to you per default when you run ROS.
You can store name-value pairs on the server in a launch file with the <param> tag; then you can access them in your node as described in the documentation. In brief:
ros::param::param<std::string>("default_param", default_param, "default_value");
Note that the parameter names are global, but you can set private parameters per node by putting the tag inside the <node> tag:
<node ...>
<param name="foo" .../>
</node>
Then you need to use "~foo" to access it.
PS: It is possible to store an entire map with a single name on the parameter server, but I know of no way to use that feature with launch files. You will need to specify a parameter for each map element.
PPS:
Also note the convenient <rosparam> tag that allows:
<rosparam>
my_parameter_1: 768
my_parameter_2: 480
</rosparam>
Originally posted by Dimitri Schachmann with karma: 789 on 2015-11-03
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 22894,
"tags": "rosparam"
} |
Why skyscrapers don't sink into ground? | Question: We shared Mechanics of materials with structural engineers back in college. I learned some basic concept about structural engineering, but besides the basic knowledge i don't know anything about structural engineering.
If I recall correctly, an engineer found a way to distribute the weight of buildings over a wide area to reduce the stress. I think he managed to solve the problem of sinking buildings for the first time in Chicago, and up to this moment his solution has been popular.
If the area has strong bedrock, Manhattan for instance, then the buildings are much more stable, but my question targets areas with very weak bedrock, The Hague for instance.
I can say there is a channel in every street; when I walk in the streets I can actually hear the vibration of water beneath my feet, and if you dig down four or five meters in the city centre you'll reach water.
I wonder how engineers manage to build skyscrapers in this area?
The concentration of tall buildings is not the same as in Manhattan, Chicago or ... . But still, how do those few tall buildings stay on the ground?
Answer: Part of the process of obtaining a building permit is the requirement to investigate the soil strength and competency, and possible elevated subterranean water levels, by a licensed geotechnical engineer, and if needed by a multitude of engineering specialists such as geologists and seismologists, covering any other concerns pertinent to that site as determined by the building department authority.
Geotechnical engineers, after studying the site, tabulating logs of test pits, and doing lab tests on the excavated samples, submit a report with specific recommendations for construction of that particular building on that site, including modifications to the soil by additional foreign materials and borehole drainage, among other things.
The structural engineer, following the geotechnical recommendations and using their approved tables of such things as allowable bearing, shear strength, passive and active soil pressure, and seismic and dynamic overloads, and considering whether there is need of a mat foundation, a deep foundation supported on piles, or subterranean drainage (boreholes) to drain the moisture of the soil to an acceptable level, designs the building closely following the geotechnical engineer's report. Then the structural engineers send their design to the geotechnical engineer, and if he approves it they submit it to the building authority to get the permits.
In cases of weak soil or subterranean water, they may recommend building a big subterranean concrete cube a little bigger than the entire subterranean parking and foundation of the building, like a giant dry swimming pool; waterproofing it and installing emergency sump pumps or alternative means of drainage; and then building the foundation inside that giant concrete-clad excavation. This is similar to the way bridges are built in rivers.
In some other soft incompetent soils the geotechnical engineers may recommend building a system of deep piles tied and integrated with each other, called soldier piles, and then build the building on top of it. | {
"domain": "engineering.stackexchange",
"id": 2455,
"tags": "structural-engineering"
} |
Tweet Classification into topics- What to do with data | Question: Good evening,
First of all, I want to apologize if the title is misleading.
I have a dataset made of around 60000 tweets, their date and time as well as the username. I need to classify them into topics. I am working on topic modelling with LDA getting the right number of topics (I guess) thanks to this R package, which calculates the value of three metrics("CaoJuan2009", "Arun2010", "Deveaud2014"). Since I am very new to this, I just thought about a few questions that might be obvious for some of you, but I can't find online.
I have removed, before cleaning the data (removing mentions, stopwords, weird characters, numbers etc), all duplicate instances (having all three columns in common), in order to avoid them influencing the results of topic modelling. Is this right?
Should I, for the same reason mentioned before, also remove all retweets?
Until now, I thought about classifying using the "per-document-per-topic" probability. If I get rid of so many instances, do I have to classify them based on the "per-word-per-topic" probability?
Do I have to divide the dataset into testing and training? I thought that is a thing only in supervised training, since I cannot really use the testing dataset to measure quality of classification.
Another goal would be to classify twitterers based on the topic they are most passionate about. Do you have any idea about how to implement this?
Thank you all very much in advance.
Answer: As far as I'm aware there is no correct/standard way to apply topic modelling, most decisions depend on the specifics of the case. So below I just give my opinion about these points:
I have removed, before cleaning the data (removing mentions, stopwords, weird characters, numbers etc), all duplicate instances (having all three columns in common), in order to avoid them influencing the results of topic modelling. Is this right?
Should I, for the same reason mentioned before, also remove all retweets?
In general there is no strict need to deduplicate the data, doing it or not would depend on the goal. Duplicate documents would affect the proportion of the words which appear in these documents, and in turn the probability of the topic these documents are assigned to. If you want the model to integrate the notion of popularity/prominence of tweets/words/topics, it would probably make sense not to deduplicate and keep retweets. However if there is large amount of duplicates/retweets the imbalance might cause less frequent tweets/words to be less visible, possibly causing less diverse topics (the smallest topics might get merged together for instance).
Until now, I thought about classifing using the "per-document-per-topic" probability. If I get rid of so many instances, do I have to classify them based on the "per-word-per-topic" probability?
I'm not sure what is called the "per-document-per-topic" probability in this package. The typical way to use LDA in order to cluster the documents is to use the posterior probability of topic given document (this might be the same thing, I'm not sure): for any document $d$, the model can provide the conditional probability of every topic $t$ given $d$. The sum of this value across topics sums to 1 (it's a distribution over topics for $d$), and for classification purposes one can just select the topic which has the highest probability given $d$.
Do I have to divide the dataset into testing and training? I thought that is a thing only in supervised training, since I cannot really use the testing dataset to measure quality of classification.
You're right, you don't need to split into training and test set since this is unsupervised learning.
Another goal would be to classify twitterers based on the topic they are most passionate about. Do you have any idea about how to implement this?
The model gives you the posterior probability distribution over topics for every tweet. From these values I think you can obtain a similar distribution over topics for every tweeter, simply by marginalizing over the tweets by this author $a$: if I'm not mistaken, this probability $p(t|a)$ can be obtained simply by calculating the mean of $p(t|d)$ across all the documents/tweets $d$ by author $a$. | {
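The marginalization described above can be sketched in a few lines of plain Python (all numbers hypothetical): average each author's per-tweet topic distributions p(t|d) to get p(t|a), then take the argmax topic.

```python
# p(topic|tweet) for each tweet; each row sums to 1 (hypothetical values)
doc_topics = {
    "tweet1": [0.7, 0.2, 0.1],
    "tweet2": [0.6, 0.3, 0.1],
    "tweet3": [0.1, 0.1, 0.8],
}
tweets_by_author = {"alice": ["tweet1", "tweet2"], "bob": ["tweet3"]}

def author_topic_dist(author):
    """p(t|a): mean of p(t|d) across the author's tweets."""
    rows = [doc_topics[d] for d in tweets_by_author[author]]
    return [sum(col) / len(rows) for col in zip(*rows)]

for a in tweets_by_author:
    dist = author_topic_dist(a)
    top = max(range(len(dist)), key=dist.__getitem__)   # favourite topic
    print(a, [round(p, 2) for p in dist], "-> topic", top)
```

Here alice's distribution averages to roughly [0.65, 0.25, 0.10], so she would be assigned topic 0, while bob gets topic 2.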
"domain": "datascience.stackexchange",
"id": 8860,
"tags": "machine-learning, nlp, r, topic-model, lda"
} |
What is "Word Sense Disambiguation"? | Question: I recently came across this article which cites a paper, which apparently won the outstanding paper award in ACL 2019. The theme is that it solved a longstanding problem called Word Sense Disambiguation.
What is Word Sense Disambiguation? How does it affect NLP?
(Moreover, how does the proposed method solve this problem?)
Answer: "Word Sense Disambiguation" refers to the idea that words can have different meanings in different contexts. Here are some examples
"I went to the river river bank" vs "I deposited my check at the bank"
"He's mad good at that game" vs "I am so mad at you"
How it affects NLP comes down to the way we process text. This generally includes the steps of tokenizing the text and embedding the tokens into some form of vector space. These embeddings in many cases are trained either through some self-supervised task on some corpora (examples include Word2Vec or GloVe) or trained from scratch on whichever task/dataset is being used.
Now regarding that paper: it does not solve the problem, but it does introduce a new methodology that helps achieve a better learned, generalizable representation for this task. The way I interpreted it is that they don't just use a sense label (which would be a one-hot encoding, what they call discrete) but instead use a continuous sense representation and do the comparison there. This difference allows words with similar but different senses not to be equidistant from words with completely different senses.
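The equidistance point can be shown with a toy sketch using made-up low-dimensional "sense vectors": one-hot sense labels put every pair of distinct senses at the same distance, while continuous representations let related-but-different senses sit closer together than unrelated ones.

```python
import math

def dist(u, v):
    """Euclidean distance between two vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

# Discrete (one-hot) sense labels: all distinct senses are equidistant.
bank_river, bank_money, mad_angry = [1, 0, 0], [0, 1, 0], [0, 0, 1]
print(dist(bank_river, bank_money), dist(bank_river, mad_angry))  # both sqrt(2)

# Continuous sense vectors (hypothetical values): the two senses of "bank"
# can be closer to each other than to an unrelated sense.
bank_river_c, bank_money_c, mad_angry_c = [0.9, 0.1], [0.7, 0.3], [-0.8, 0.6]
print(dist(bank_river_c, bank_money_c) < dist(bank_river_c, mad_angry_c))  # True
```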
"domain": "ai.stackexchange",
"id": 1309,
"tags": "natural-language-processing, terminology"
} |
PHP MySQLi Prepared Statements: Can this select query be hacked/injected? | Question: i want to know can this be hacked/injected?
$stmt = $mysqli->prepare("SELECT * FROM myTable WHERE name = ?");
$stmt->bind_param("s", $_POST['name']);
$stmt->execute();
$result = $stmt->get_result();
if($result->num_rows === 0) exit('No rows');
while($row = $result->fetch_assoc()) {
//do some stuff
}
var_export($ages);
$stmt->close();
Answer: Given that an answer on Stack Overflow suggests almost identical code for protection, using exactly the same principle, you can safely assume that your query is protected.
If you want to know how it works, I also wrote an answer on Stack Overflow, https://stackoverflow.com/a/8265319/285587
Nevertheless, as this site is for code review, let me offer some suggestions: I would suggest using PDO for database interactions instead of mysqli, simply because the PDO API is much more versatile and easier to use. See your snippet rewritten in PDO:
$stmt = $pdo->prepare("SELECT * FROM myTable WHERE name = ?");
$stmt->execute([$_POST['name']]);
if($stmt->rowCount() === 0) exit('No rows');
while($row = $stmt->fetch(PDO::FETCH_ASSOC)) {
//do some stuff
}
As you can see, some nagging operations are just gone. I wrote a tutorial on PDO, which I would quite expectedly recommend.
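The same placeholder-binding principle can be demonstrated outside PHP. Here is a small self-contained sketch using Python's sqlite3 module (table and data are made up): an injection attempt passed as a bound parameter is compared as a literal string, while naive string concatenation lets it rewrite the query.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE myTable (name TEXT, age INTEGER)")
conn.execute("INSERT INTO myTable VALUES ('alice', 30)")

payload = "' OR '1'='1"   # classic injection attempt

# Bound parameter: the payload is treated as data, not SQL -> no rows leak.
safe_rows = conn.execute(
    "SELECT * FROM myTable WHERE name = ?", (payload,)).fetchall()
print(safe_rows)          # []

# Naive concatenation: the payload becomes part of the SQL and matches everything.
unsafe_rows = conn.execute(
    "SELECT * FROM myTable WHERE name = '" + payload + "'").fetchall()
print(unsafe_rows)        # [('alice', 30)]
```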
"domain": "codereview.stackexchange",
"id": 34349,
"tags": "php, mysqli, sql-injection"
} |
ROS2 URDF: How to have multicolored things in URDF? | Question: It seems that only the first material clause in a link is used for the whole link.
For example for a simple Minion Dave with blue overalls and yellow "body":
<!-- DAVE -->
<link name="dave_link">
<visual name="bottom_half_dave" >
<origin xyz="0 0 0" rpy="0 0 0" />
<geometry>
<cylinder length="0.045" radius="0.0375" />
</geometry>
<material name="solid_blue" />
</visual>
<visual name="top_half_dave" >
<origin xyz="0 0 0.045" rpy="0 0 0" />
<geometry>
<cylinder length="0.045" radius="0.0375" />
</geometry>
<material name="solid_yellow" />
</visual>
<visual name="head_dave" >
<origin xyz="0 0 0.625" rpy="0 0 0" />
<geometry>
<sphere radius="0.0375" />
</geometry>
<material name="solid_yellow" />
</visual>
</link>
<joint name="joint_dave" type="fixed">
<parent link="base_link"/>
<child link="dave_link"/>
<origin xyz="-0.04 0 0.250" rpy="0 0 0" />
</joint>
This will always use the solid_blue of the bottom half for the other two visual objects in the dave_link.
Is there a way (other than making each part a separate link with joints)?
Answer: Unfortunately, it looks like this is a bug that was reported in 2015 and is still unresolved. The URDF link documentation here says, in part:
Note: multiple instances of <visual> tags can exist for the same link. The union of the geometry they define forms the visual representation of the link.
I don't know if the rviz bug is truly a bug or if it's supposed to be treating all visuals within a link as one fused object, but your options seem to be to either split out your colored components into their own links or make your own mesh and apply a texture. | {
"domain": "robotics.stackexchange",
"id": 2586,
"tags": "ros2, urdf"
} |
ROS Answers SE migration: Custom message | Question:
Hello,
I have defined this custom message, FloatsStamped.msg :
Header header
float32[] data
The CMakeLists.txt is:
cmake_minimum_required(VERSION 2.8.3)
project(grideye)
find_package(catkin REQUIRED COMPONENTS
roscpp
rospy
std_msgs
message_generation
)
################################################
## Declare ROS messages, services and actions ##
################################################
## Generate messages in the 'msg' folder
add_message_files(
FILES
FloatsStamped.msg
)
## Generate added messages and services with any dependencies listed here
generate_messages(
DEPENDENCIES
std_msgs
)
###################################
## catkin specific configuration ##
###################################
catkin_package(
CATKIN_DEPENDS roscpp rospy std_msgs message_runtime
)
###########
## Build ##
###########
include_directories(
${catkin_INCLUDE_DIRS}
)
My package.xml is :
<?xml version="1.0"?>
<package>
<name>grideye</name>
<version>0.0.0</version>
<description>The grideye package</description>
<buildtool_depend>catkin</buildtool_depend>
<build_depend>roscpp</build_depend>
<build_depend>rospy</build_depend>
<build_depend>std_msgs</build_depend>
<build_depend>message_generation</build_depend>
<run_depend>message_runtime</run_depend>
<run_depend>roscpp</run_depend>
<run_depend>rospy</run_depend>
<run_depend>std_msgs</run_depend>
<build_depend>python-numpy</build_depend>
<run_depend>python-numpy</run_depend>
</package>
And finally the code I use is:
#!/usr/bin/env python
import rospy
import numpy
from grideye.msg import FloatsStamped
from grideye_class import GridEye
def talker():
pub = rospy.Publisher('thermal_pixels', FloatsStamped, queue_size=10)
rospy.init_node('grideye', anonymous=True)
r = rospy.Rate(5)
pix = []
while not rospy.is_shutdown():
pix = mysensor.getPixels()
print(pix)
#feed a FloatsStamped msg
a = FloatsStamped
a.header.stamp = rospy.Time.now()
a.data = numpy.array(pix, dtype=numpy.float32)
pub.publish(a)
r.sleep()
if __name__ == '__main__':
mysensor = GridEye()
rospy.loginfo("Starting data retrieving from Grid Eye sensor Board")
talker()
I get an error message indicating that :
a.header.stamp = rospy.Time.now() AttributeError: 'member_descriptor' object has no attribute 'stamp'
I declare a as FloatsStamped message and this message includes the Header message, so I don't understand why I get this error message.
matt
Originally posted by mattMGN on ROS Answers with karma: 78 on 2018-02-22
Post score: 0
Original comments
Comment by Thomas D on 2018-02-22:
Try a = FloatsStamped(), with the trailing parentheses.
Comment by gvdhoorn on 2018-02-23:
@Thomas D comment should really be an answer, as it is most likely the answer.
@mattMGN: you are assigning to a the type of FloatStamped (ie: the class), not to an instance of it (ie: an object). data is not a field of the class, but of the object.
Answer:
Try a = FloatsStamped(), with the trailing parentheses.
As @gvdhoorn points out in a comment: you are assigning to a the type of FloatStamped (ie: the class), not to an instance of it (ie: an object). data is not a field of the class, but of the object.
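The error can be reproduced without ROS. rospy-generated messages define __slots__, so accessing a field on the class itself yields a member_descriptor object rather than a value, and attribute assignment on it raises AttributeError. A minimal sketch with stand-in classes (Header and FloatsStamped here are simplified mock-ups, not the generated code):

```python
class Header:
    __slots__ = ("stamp",)
    def __init__(self):
        self.stamp = None

class FloatsStamped:                      # stand-in for the generated message
    __slots__ = ("header", "data")
    def __init__(self):
        self.header = Header()
        self.data = []

a = FloatsStamped                         # the class, not an instance
err = None
try:
    a.header.stamp = 1.0                  # a.header is a member_descriptor here
except AttributeError as e:
    err = e
print(err)                                # AttributeError, as in the question

a = FloatsStamped()                       # the fix: call the constructor
a.header.stamp = 1.0
print(a.header.stamp)                     # 1.0
```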
Originally posted by Thomas D with karma: 4347 on 2018-02-23
This answer was ACCEPTED on the original site
Post score: 4
Original comments
Comment by mattMGN on 2018-02-27:
You are right, it now works.
Thanks for your comment | {
"domain": "robotics.stackexchange",
"id": 30117,
"tags": "ros-kinetic, custom-message"
} |
What is a safe programming language? | Question: Safe programming languages (PL) are gaining popularity. What is the formal definition of safe PL? For example, C is not safe, but Java is safe. I suspect that the property “safe” should be applied to a PL implementation rather than to the PL itself. If so, how do we define what is a safe PL implementation?
Answer: There is no formal definition of "safe programming language"; it's an informal notion. Rather, languages that claim to provide safety usually provide a precise formal statement of what kind of safety is being claimed/guaranteed/provided. For instance, the language might provide type safety, memory safety, or some other similar guarantee. | {
"domain": "cs.stackexchange",
"id": 21606,
"tags": "programming-languages"
} |
upgrading from diamondback to electric. problems with turtlebot? | Question:
Hi, is there any other option to upgrade than uninstalling one and installing the other?
If I'm working with the turtlebot and I have electric in the workstation and diamondback in the turtlebot laptop will it work?
Originally posted by apalomer on ROS Answers with karma: 318 on 2011-10-12
Post score: 1
Answer:
Both distributions can be installed together on any system.
However, only one at a time should be present in your $ROS_ROOT and $ROS_PACKAGE_PATH environment variables.
You should not attempt to run nodes built against different distributions together, whether on separate machines or a single one.
Originally posted by joq with karma: 25443 on 2011-10-13
This answer was ACCEPTED on the original site
Post score: 2 | {
"domain": "robotics.stackexchange",
"id": 6956,
"tags": "turtlebot, ros-diamondback, ros-electric"
} |
Objects and instance variables in loops | Question: Is there anything wrong with doing this:
Public Class Form1
Private Sub Form1_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Load
For value As Integer = 0 To 1000000
Dim p As New Person
p.ID = value
p.DoSomething()
p = Nothing
Next
End Sub
Public Class Person
Dim _ID As Integer
Public WriteOnly Property ID() As Integer
Set(ByVal value As Integer)
_ID = value
End Set
End Property
Public Sub DoSomething()
_ID = _ID + 1
'Do more with ID
End Sub
End Class
End Class
As opposed to this:
Public Class Form1
Private Sub Form1_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Load
For value As Integer = 0 To 1000000
Person.DoSomething(value)
Next
End Sub
Public Class Person
Public Shared Sub DoSomething(ByVal id As Integer)
id = id + 1
'Do more with ID
End Sub
End Class
End Class
I have written some code similar to that in sample 1. I have read online that code 1 can cause problems when threading is involved, but I do not understand why? Are there any other problems with sample 1? Sample 1 meets my requirements better.
Answer: In sample1, as you call it, you have an instance field which you change. Therefore, the Person class you are using is mutable. Mutable classes are prone to concurrency issues if a single instance is manipulated by multiple threads, unless an explicit synchronization mechanism is properly involved.
In sample2, if you pass the value in the method, and you are not modifying the state of the Person class as you do your business logic, then the code is thread safe. No other thread can access the local variable id and modify it concurrently, since it is visible to the current thread only. So there is no need to apply a synchronization mechanism.
However, sample2 may not be appropriate for your concrete requirements. If you give more clues about what happens in the method in question, then more accurate advice can be given.
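A sketch of the distinction in Python (hypothetical classes, not your VB code): mutating shared instance state needs explicit synchronization, while a method that only touches its parameter and locals is thread safe without any.

```python
import threading

class Person:
    def __init__(self):
        self._id = 0
        self._lock = threading.Lock()     # explicit synchronization

    def do_something_shared(self):        # sample1 style: mutates shared state
        with self._lock:                  # without the lock, increments can be lost
            self._id += 1

    @staticmethod
    def do_something(value):              # sample2 style: locals only, thread safe
        return value + 1

p = Person()
threads = [threading.Thread(
               target=lambda: [p.do_something_shared() for _ in range(1000)])
           for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(p._id)                              # 4000, thanks to the lock
print(Person.do_something(41))            # 42
```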
"domain": "codereview.stackexchange",
"id": 3566,
"tags": "design-patterns, vb.net"
} |
DataGrid Filter Method Very Slow | Question: I'm having a lot of trouble in my Database program with trying to implement an effective DataGrid filtering method. After taking advice from a previous code review I'm trying to implement a DataModel method of coding. My previous method of searching my DataGrid was this;
SearchGrid (PREVIOUS)
private void SearchGrid(object sender, TextChangedEventArgs e)
{
DataView dv = dataGrid.ItemsSource as DataView;
if (compNameRad.IsChecked == true)
{
dv.RowFilter = "CompanyName LIKE '%" + searchBox.Text + "%'";
}
if (compTownRad.IsChecked == true)
{
dv.RowFilter = "CompanyTown LIKE '%" + searchBox.Text + "%'";
}
if (compPcodeRad.IsChecked == true)
{
dv.RowFilter = "CompanyPcode LIKE '%" + searchBox.Text + "%'";
}
}
This worked fine with the way I bound my DataGrid before. I now bind my DataGrid using a Model class and the previous filter method does not work. I have written a new filter method which is this;
SearchGrid (NEW)
private void SearchGrid(object sender, TextChangedEventArgs e)
{
if (!string.IsNullOrEmpty(searchBox.Text))
{
ICollectionView view = CollectionViewSource.GetDefaultView(dataGrid.ItemsSource);
view.Filter += (obj) =>
{
CompanyModel model = obj as CompanyModel;
if (model == null)
return true;
if (compNameRad.IsChecked == true)
{
return !string.IsNullOrEmpty(model.CompanyName) && model.CompanyName.Contains(searchBox.Text);
}
if (compTownRad.IsChecked == true)
{
return !string.IsNullOrEmpty(model.CompanyTown) && model.CompanyTown.Contains(searchBox.Text);
}
if (compPcodeRad.IsChecked == true)
{
return !string.IsNullOrEmpty(model.CompanyPcode) && model.CompanyPcode.Contains(searchBox.Text);
}
return false;
};
}
}
Now this does work OK, however there are two issues:
it is extremely slow when I type text to search the DataGrid, and
it doesn't deal with the simplest of cases; for example, if the name of the company is "A1" then searching for "a1" will not find the company.
I have been recommended to use Entity Framework however as I am using an OleDB connection (this is a very old Database I am rewriting) I have been unable to get this to work.
Is there a more efficient search method I can use that is quicker and would also find all companies (i.e case insensitive)?
Answer:
private void SearchGrid(object sender, TextChangedEventArgs e)
{
if (!string.IsNullOrEmpty(searchBox.Text))
{
ICollectionView view = CollectionViewSource.GetDefaultView(dataGrid.ItemsSource);
view.Filter += (obj) =>
{
CompanyModel model = obj as CompanyModel;
if (model == null)
return true;
if (compNameRad.IsChecked == true)
{
return !string.IsNullOrEmpty(model.CompanyName) && model.CompanyName.Contains(searchBox.Text);
}
if (compTownRad.IsChecked == true)
{
return !string.IsNullOrEmpty(model.CompanyTown) && model.CompanyTown.Contains(searchBox.Text);
}
if (compPcodeRad.IsChecked == true)
{
return !string.IsNullOrEmpty(model.CompanyPcode) && model.CompanyPcode.Contains(searchBox.Text);
}
return false;
};
}
}
The problem with your code is that each time the text changes you are adding a new Filter to the ICollectionView by using
view.Filter += (obj) =>
this little + is doing all the harm. Every time the TextChanged event is raised a new filter is added and never removed.
If you remove the + you will see that your filtering will become faster.
The problem with A1 vs a1 is that the Contains() method is case sensitive. A better way would be to use IndexOf(string, StringComparison) like so
return !string.IsNullOrEmpty(model.CompanyName)
&& model.CompanyName.IndexOf(searchBox.Text, StringComparison.OrdinalIgnoreCase) > -1; | {
"domain": "codereview.stackexchange",
"id": 16286,
"tags": "c#, beginner, database, wpf"
} |
Is it possible to use a powder-based firearm in space? | Question: A firearm relies upon some kind of explosive powder to drive the slug out of the barrel.
My guess however is that in space (at GEO, or higher) a firearm would be unusable due to the extremes of temperature/pressure. Secondly the powder probably would not ignite when the hammer fell.
Are my assumptions correct? Can a firearm be used in space?
Answer: Consider the environment in which the propellent burns in a firearm. It is cramped space formerly packed tightly with stuff (the propellent, any necessary wadding and the bullet itself). There is damn little room for any atmosphere at all.
Where---especially in a cartridge system---do you think the oxidizer (NB: not necessarily oxygen!) is coming from anyway?
Most explosives do not run on atmospheric oxygen; they run on the oxidizer built into the formulation. The only exceptions that I know of are fuel-air explosives and those are a specialized business.
You should expect cartridge firearms to work perfectly in space unless their parts vacuum weld. I'd be a little concerned about open-pan loose-powder systems (do they initially burn environmental $\mathrm{O}_2$? and in microgravity will they blow the powder away before it starts burning down the hole?), but I'd still take even odds that they work.
"domain": "physics.stackexchange",
"id": 4109,
"tags": "temperature, pressure, space, projectile"
} |
Is it possible to calculate the torsion constant of a rod given the shear modulus, moment of inertia and length? | Question: I am a student interested in conducting an experiment for school on a torsional pendulum.
This is an image of what it would look like:
I was doing some background research, and found a paper experimenting with various single-fibre materials to determine their torsional properties. 99% of what in the paper is completely beyond me, so it may be irrelevant to what I'm trying to do. Here is the link to the paper:
Link to paper
On the bottom of page 6 of the pdf, the paper contains the following equation:
$$K=\frac {GI_p}{l}=\frac {G \pi d^4}{32l}$$
Where,
$K$ = torsion constant (torque per unit twist) of the torsion wire.
$G$ = shear modulus
$d$ = diameter of the torsion wire
$I_p$ = its moment of inertia.
$l$= length of the rod.
If I'm honest, at this point I'm very confused because I keep seeing lots of different terms for the same things, and even the same term for different things,so I wanted to ask whether the constant $K$ in the equation above is the same one as the formula for a period in a torsional pendulum:
$$T=2 \pi\sqrt {\frac {I}{K}}$$
$$T^2=\left [ \frac {4\pi^2}{K}\right]I$$
If not, is there another way for me to know the theoretical relationship between the moment of inertia and the period? I know it will ideally be a square-root function, however does there exist a way for me to evaluate the results beyond that if I know the properties of the wire being used in the pendulum?
Answer: Maybe not a complete answer to your question, but here are a couple of remarks.
The formula for the period of a torsional pendulum in your original post was wrong. It should be
$$T=2\pi\sqrt \frac IK$$
in which $I$ is the moment of inertia of the disc in your diagram and $K$ is the torsional constant of the fibre or rod. The equation is the rotational analogue of the equation for the linear system of a mass on a spring, that is
$T=2\pi\sqrt \frac mk$.
Moment of inertia is the rotational analogue of mass for a rigid body rotating about a given axis. It takes account of how the body's mass is distributed (since for a rotating body different parts have different linear velocities and accelerations). For a uniform disc of mass $m$ and radius $a$ rotating about its usual rotation axis, the moment of inertia is given by $$I=\tfrac 12 m a^2.$$
The first equation that you quote purports to give the torsional constant in terms of the length and diameter of the rod and the shear modulus of the material of which it is made. The second version of that equation is the one to use, namely
$$K=\frac {G\pi d^4}{32 L}.$$
The first version (the one with $I_p$ in it) is dimensionally wrong if $I_p$ is interpreted as an ordinary moment of inertia. In fact $I_p$ is the so-called 'geometrical moment of inertia' of the fibre or rod and it plays a completely different role from the dynamic role of the moment of inertia of the disc. My advice is simply to ignore that version of the $K$ formula, the one with $I_p$ in it.
You now have three usable equations, and there are several relationships you could test and quantities that you could find... Good luck! | {
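Putting the three equations together numerically (all input values are hypothetical, SI units; G is a rough figure for a steel wire):

```python
import math

# Hypothetical inputs, SI units
G = 80e9        # shear modulus of the wire (Pa), roughly steel
d = 0.5e-3      # wire diameter (m)
L = 1.0         # wire length (m)
m = 0.5         # disc mass (kg)
a = 0.05        # disc radius (m)

K = G * math.pi * d**4 / (32 * L)   # torsion constant (N*m per radian)
I = 0.5 * m * a**2                  # moment of inertia of the disc (kg*m^2)
T = 2 * math.pi * math.sqrt(I / K)  # period of the torsional pendulum (s)

print(f"K = {K:.3e} N*m/rad, I = {I:.3e} kg*m^2, T = {T:.2f} s")
```

With these numbers K is about 4.9e-4 N*m/rad and the period comes out near 7 s, a plausible scale for a tabletop torsional pendulum; in the experiment you could vary I (e.g. by adding masses to the disc) and check that T^2 is linear in I with slope 4*pi^2/K.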
"domain": "physics.stackexchange",
"id": 66702,
"tags": "classical-mechanics"
} |
How to read ros log file? | Question:
Recently, I write a .launch file. But when I roslaunch it, I get the info as is shown below.
roslaunch turtlebot3_navigation multiple_navigation.launch
... logging to /home/ise-admin/.ros/log/e4e33fd6-a16d-11e8-841a-509a4c311940/roslaunch-ise-linux-1-5449.log
Checking log directory for disk usage. This may take awhile.
Press Ctrl-C to interrupt
Done checking log file disk usage. Usage is <1GB.
Invalid roslaunch XML syntax: not well-formed (invalid token): line 29, column 83
The traceback for the exception was written to the log file
In order to find out where are the errors, I hope to open the log file.
cd `roslaunch-logs`
ls
first_tb3-amcl-4-stdout.log roslaunch-ise-linux-1-3015.log
map_server-2-stdout.log roslaunch-ise-linux-1-32614.log
master.log roslaunch-ise-linux-1-32683.log
robot1-spawn_minibot_model-2.log roslaunch-ise-linux-1-32749.log
robot1-spawn_minibot_model-3.log roslaunch-ise-linux-1-3598.log
robot2-spawn_minibot_model-4.log roslaunch-ise-linux-1-415.log
robot2-spawn_minibot_model-5.log roslaunch-ise-linux-1-4348.log
robot_state_publisher-1-stdout.log roslaunch-ise-linux-1-791.log
roslaunch-ise-linux-1-23504.log rosout-1-stdout.log
roslaunch-ise-linux-1-23544.log rosout.log
roslaunch-ise-linux-1-23565.log rviz-9-stdout.log
roslaunch-ise-linux-1-23628.log second_tb3-amcl-6-stdout.log
roslaunch-ise-linux-1-24416.log tb3_0-spawn_urdf-4.log
roslaunch-ise-linux-1-24423.log tb3_0-spawn_urdf-4-stdout.log
roslaunch-ise-linux-1-25479.log tb3_1-spawn_urdf-6.log
roslaunch-ise-linux-1-25593.log tb3_1-spawn_urdf-6-stdout.log
roslaunch-ise-linux-1-25811.log tb3_2-spawn_urdf-8.log
roslaunch-ise-linux-1-26965.log tb3_2-spawn_urdf-8-stdout.log
roslaunch-ise-linux-1-27793.log third_tb3-amcl-8-stdout.log
It seems that the log file is temporary, right?
Originally posted by Pujie on ROS Answers with karma: 106 on 2018-08-16
Post score: 1
Answer:
Maybe it is in ~/.ros/log.
To see more information from roslaunch, you can try roslaunch -v <package-name> <launch-name> to request verbose output.
However, when I had the same error, the problem was a non-printing character in the file.
You can try to confirm by looking at a binary dump of the file with hexdump -C mylaunch.launch.
Deleting these characters worked for me.
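A small helper along the same lines (my own sketch, not from the original answer): scan a .launch file byte by byte and report non-printing or non-ASCII bytes using the same "line, column" coordinates that roslaunch's XML error message uses, so you can jump straight to the offending character instead of reading a hexdump.

```python
# Hypothetical helper: report bytes that commonly break XML parsing,
# i.e. anything outside printable ASCII except tab, LF and CR.

def find_bad_chars(path):
    """Yield (line, column, byte_value) for suspicious bytes in the file."""
    with open(path, 'rb') as f:
        for lineno, raw in enumerate(f, start=1):
            for col, byte in enumerate(raw, start=1):
                if byte > 126 or (byte < 32 and byte not in (9, 10, 13)):
                    yield lineno, col, byte

# Usage (hypothetical file name):
# for line, col, b in find_bad_chars('multiple_navigation.launch'):
#     print('line %d, column %d: byte 0x%02x' % (line, col, b))
```

A non-breaking space pasted from a web page, for example, shows up as the byte pair 0xc2 0xa0 and is reported at the exact line and column.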
Originally posted by luc.ac with karma: 26 on 2019-09-13
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 31559,
"tags": "ros-kinetic"
} |
RosCore on Android fails | Question:
Hello. I have put together the most basic project I could to show this error.
https://github.com/FutureHax/BrokenRos
When roscore is started on the Android device, messages fail to be passed through.
Start the app, use the MasterChooser to start a new public master.
Point your computer to the ros core instance on your phone.
Attempt to push a message.
What I see when attempting these steps is the following.
03-16 16:51:16.012 30235-30842/com.cloudspace.coretester E/XmlRpcErrorLogger﹕ No such handler: system.multicall
org.apache.xmlrpc.server.XmlRpcNoSuchHandlerException: No such handler: system.multicall
at org.apache.xmlrpc.server.AbstractReflectiveHandlerMapping.getHandler(AbstractReflectiveHandlerMapping.java:214)
at org.apache.xmlrpc.server.XmlRpcServerWorker.execute(XmlRpcServerWorker.java:45)
at org.apache.xmlrpc.server.XmlRpcServer.execute(XmlRpcServer.java:86)
at org.apache.xmlrpc.server.XmlRpcStreamServer.execute(XmlRpcStreamServer.java:200)
at org.apache.xmlrpc.webserver.Connection.run(Connection.java:208)
at org.apache.xmlrpc.util.ThreadPool$Poolable$1.run(ThreadPool.java:68)
I have seen a few other issues mention this "system.multicall" line, but none have been resolved.
Am I doing something incorrect or is there a bug in the ros code?
Originally posted by r2doesinc on ROS Answers with karma: 11 on 2015-03-16
Post score: 1
Original comments
Comment by gvdhoorn on 2015-03-17:
If you really feel this is a bug, perhaps reporting it directly on the rosjava and/or android_core issue trackers would be more efficient.
Comment by mmore on 2016-02-26:
I have the same problem using roscore on Android. This seems to be a bug in ROS Java: roslaunch uses XML-RPC function system.multicall - which is not supported by rosjava.
Does anybody have an idea how to fix this?
Answer:
This would seem to be fixed with the merge of rosjava/rosjava_core#273.
Originally posted by gvdhoorn with karma: 86574 on 2018-04-18
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 21150,
"tags": "android-core, android"
} |
What role does H2O2 have in copper (II) acetate formation? | Question: Mixing solid copper, 5% vinegar, and hydrogen peroxide ($\ce{H2O2}$) causes copper acetate to form. The process will occur very slowly without hydrogen peroxide. Adding $\ce{H2O2}$ speeds the formation of copper acetate.
How does Hydrogen Peroxide promote the formation of Copper (II) Acetate?
Answer: The redox potentials $E$ for $\mathrm{pH} = 0$ show that $\ce{H+}$ cannot oxidize $\ce{Cu}$ to $\ce{Cu^2+}$:
$$\begin{alignat}{2}
\ce{Cu^2+ + 2e- \;&<=> Cu}\quad &&E^\circ = +0.340\ \mathrm{V}\\
\ce{2H+ + 2e- \;&<=> H2}\quad &&E^\circ = +0.000\ \mathrm{V}
\end{alignat}$$
Thus, non-oxidizing acids such as acetic acid cannot directly oxidize copper.
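As a worked check (my own arithmetic, hedged): combining the two half-reactions above for the hypothetical oxidation $\ce{Cu + 2H+ -> Cu^2+ + H2}$ gives
$$E_\text{cell}=E^\circ_\text{cathode}-E^\circ_\text{anode}=0.000\ \mathrm{V}-0.340\ \mathrm{V}=-0.340\ \mathrm{V}<0,$$
so $\Delta G^\circ=-nFE_\text{cell}>0$ and the reaction is not spontaneous, consistent with the statement above.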
However, $\ce{Cu}$ can be oxidized by $\ce{O2}$:
$$\ce{O2 + 4H+ + 4e- <=> 2H2O}\quad E^\circ = +1.229\ \mathrm{V}$$
Therefore, copper is slowly oxidized in acetic acid in contact with air.
The oxidation can be increased by addition of oxidizing agents such as hydrogen peroxide:
$$\ce{H2O2 + 2H+ + 2e- <=> 2H2O}\quad E^\circ = +1.763\ \mathrm{V}$$ | {
"domain": "chemistry.stackexchange",
"id": 3580,
"tags": "redox"
} |
how can i connect arduino and gazebo | Question:
hello,
I'm trying to make an application with a flex sensor on an Arduino. The flex sensor changes its resistance when it is deflected, and I want to see this change reflected as the movement of an object in Gazebo. Could someone help me with that?
thank you!
Originally posted by joseescobar60 on ROS Answers with karma: 172 on 2012-11-04
Post score: 0
Answer:
You should be able to do that fairly easily by using rosserial. Read the flex sensor on the Arduino and just publish the correct message/service you need to Gazebo. The only problem might be if that message is too large for the Arduino; then you would need to put a node in between.
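One piece of the "node in between" can be sketched independently of ROS: the mapping from the raw 10-bit ADC reading of the flex sensor (0-1023) to a joint angle in radians, ready to be published to Gazebo (e.g. as a JointState position). The calibration endpoints flat_raw and bent_raw below are assumptions for illustration; measure your own sensor to find them.

```python
# Hypothetical calibration helper: linearly interpolate an ADC reading
# to a joint angle, clamping to the calibrated span so noise outside the
# measured range cannot drive the simulated joint past its limits.

def flex_to_angle(raw, flat_raw=300, bent_raw=700,
                  min_angle=0.0, max_angle=1.57):
    """Map a raw flex-sensor reading to an angle in radians."""
    # Clamp the reading to the calibrated span.
    raw = max(flat_raw, min(bent_raw, raw))
    fraction = (raw - flat_raw) / float(bent_raw - flat_raw)
    return min_angle + fraction * (max_angle - min_angle)
```

In a rosserial setup, the Arduino would publish the raw analogRead value and a small node would apply this mapping before forwarding the angle to the simulation.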
Originally posted by dornhege with karma: 31395 on 2012-11-05
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 11620,
"tags": "ros, arduino, gazebo, sensor"
} |
Eigenvalues of the momentum operator in position basis | Question: We know that the definition of the momentum operator $\hat{P_x}$ in a state space $\mathcal{E}$ is:
$$\hat{P_x}|\psi\rangle=P_x|\psi\rangle$$
where $P_x \in \mathbb{R}$. However we also know that the representation of $\hat{P_x}$ in the position basis $\{|\vec{r}\rangle\}$ is:
$$
\hat{P_x}\to \frac{\hbar}{i}\frac{\partial}{\partial x}, \tag{1}
$$
Now, my question is: Is $\frac{\hbar}{i}\frac{\partial}{\partial x}$ still an operator or is it an eigenvalue?
From what I gather I could also write (1) as:
$$
\frac{\hbar}{i}\frac{\partial}{\partial x}\psi(\vec{r})=P_x\psi(\vec{r})
$$
keeping $P_x \in \mathbb{R}$. My doubts get even worse when I define an operator like this:
$$
\hat{A}=\hat{X}+\hat{P}_x
$$
Then I could write:
$$
\hat{A} |\psi \rangle=a|\psi \rangle
$$
where $a$ is the eigenvalue of $\hat{A}$. But in this case $a=x+P_x$ where $x$ is the eigenvalue of the operator $\hat{X}$. Suppose I want to go to the position basis. Then I could write:
$$
\bigg(x+\frac{\hbar}{i}\frac{\partial}{\partial x}\bigg)\psi(\vec{r})=a\psi(\vec{r})=(x+P_x)\psi(\vec{r})
$$
From my reasoning, the $x$ on the left-hand side is to be interpreted as an operator and the one on the right-hand side as a number, but how can this be? It doesn't seem to make sense to me either way; that doesn't seem to be an "equation". What is this representation in the position basis? Is it still an operator, or is it the eigenvalue?
Answer: Stick to one dimension to avoid superfluous complication.
The operator $\hat p$ has eigenvalues, but on specific eigenvectors, labelled as such,
$$ \hat p | p\rangle = p|p\rangle,
$$
But unless your state $|\psi\rangle$ were one such, your first equation will not hold. You may always compute $\langle \psi | \hat p | \psi\rangle$, some sort of momentum of that state, without assuming eigenvectors of the relevant operator, if that is what is confusing you.
In the position basis, the momentum operator resolves as
$$
\hat p = \int\!\! dx ~~|x\rangle \frac {\hbar}{i} \partial_x \langle x| ,
$$
so, manifestly, the $|x\rangle$ s are not its eigenstates.
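The eigenvalue point can be checked numerically (a sketch of my own, in units where $\hbar=1$): apply $(\hbar/i)\,\partial_x$ via a central difference to the plane wave $e^{ikx}$, which is a momentum eigenfunction with eigenvalue $\hbar k$, and to a Gaussian, which is not. For the eigenfunction the ratio $(\hat p\,\psi)/\psi$ is the same constant everywhere; for the Gaussian it depends on $x$, so no single eigenvalue exists.

```python
# Numerical sketch: plane waves are eigenfunctions of p = (hbar/i) d/dx,
# Gaussians are not.  Natural units (hbar = 1) are an assumption here.
import cmath

HBAR = 1.0   # assumption: natural units
K = 2.5      # wavenumber of the plane wave
H = 1e-5     # finite-difference step

def p_op(f, x):
    """Central-difference approximation of (hbar/i) d/dx acting on f at x."""
    return (HBAR / 1j) * (f(x + H) - f(x - H)) / (2 * H)

def plane_wave(x):
    return cmath.exp(1j * K * x)

def gaussian(x):
    return cmath.exp(-x * x)

# Eigenfunction: the ratio (p psi)/psi equals hbar*K at every point.
r1 = p_op(plane_wave, 0.3) / plane_wave(0.3)
r2 = p_op(plane_wave, 0.9) / plane_wave(0.9)

# Non-eigenfunction: the ratio varies with x.
g1 = p_op(gaussian, 0.3) / gaussian(0.3)
g2 = p_op(gaussian, 0.9) / gaussian(0.9)
```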
I'll tweak your next operator to Dirac's celebrated annihilation operator,
$$
\hat a =\hat x + i \hat p ,
$$
and then its eigenvectors are the celebrated coherent states,
$$
\hat a |\alpha \rangle = \alpha | \alpha \rangle \\
|\alpha\rangle \equiv e^{-|\alpha|^2 /2} \exp (\alpha ~~ \hat {a} ^\dagger) |0\rangle ,
$$
where $\hat a^\dagger$ is the creation operator and $\hat a |0\rangle = 0$.
You can connect these Fock states to the $x$-basis via
$$
|x\rangle=\frac{e^{x^2/2}}{\pi^{1/4}} e^{-(a^\dagger-\sqrt{2} x)^2/2} |0\rangle ~.
$$
The takeaway is that you only get eigenvalue equations when acting on eigenstates, which usually define a convenient basis for your operator of choice. | {
"domain": "physics.stackexchange",
"id": 62339,
"tags": "quantum-mechanics, hilbert-space, operators, momentum, eigenvalue"
} |
Bin packing variant | Question: As in classical bin packing problem, this is an algorithm that optimises the number of bins of a certain size used to hold a list of objects of varying size.
In my variant I also work with a second constraint that is the bins must hold a certain minimum size in them. For example :
max_pm = 10, min_pm = 5 ; If we input [8,2,3] then the packing [[8, 2], [3]] is not valid. Some problems also don't hold any solution in which case we should return None.
I implemented this simply as post-processing solution validation; is there a more optimised way to do it? I need an optimal solution if it exists; heuristics are not good enough, which is why I've chosen a recursive branching approach.
Items here are size 4 tuples, last value is weight.
from copy import deepcopy
def bin_pack(items, min_pm, max_pm, current_packing=None, solution=None):
    if current_packing is None:
        current_packing = []
    if not items:
        # Stop conditions: we have no item to fit in packages
        if solution is None or len(current_packing) < solution:
            # If our solution doesn't respect min_pm, it's not returned, return best known solution instead
            for pack in current_packing:
                if sum((item[3] for item in pack)) < min_pm:
                    return solution
            # Solutions must be cleanly copied because we pop and append in current_packing
            return deepcopy(current_packing)
        return solution
    # We iterate by poping items and inserting in a list of list of items
    item = items.pop()
    # Try to fit in current packages
    for pack in current_packing:
        if sum((item[3] for item in pack)) + item[3] <= max_pm:
            pack.append(item)
            solution = bin_pack(items, min_pm, max_pm, current_packing, solution)
            pack.remove(item)
    # Try to make a new package
    if solution is None or len(solution) > len(current_packing):
        current_packing.append([item])
        solution = bin_pack(items, min_pm, max_pm, current_packing, solution)
        current_packing.remove([item])
    items.insert(-1, item)
    return solution
Execution example:
print bin_pack([(0,0,0,1), (0,0,0,5), (0,0,0,2), (0,0,0,6)], 3, 6)
# displays [[(0, 0, 0, 6)], [(0, 0, 0, 2), (0, 0, 0, 1)], [(0, 0, 0, 5)]]
print bin_pack([(0,0,0,1), (0,0,0,5), (0,0,0,2), (0,0,0,6)], 4, 6)
# displays None
Answer: 1. Bugs
If any item has weight greater than max_pm, no solution is possible but the code may return a solution anyway. It would be more robust to raise an exception in this case.
This condition is wrong:
len(current_packing) < solution
Here len(current_packing) is an int but solution is a list so in Python 2.7, where you can compare any two values even if they have different types, this always evaluates to True. This can cause the code to return a worse solution when a better solution was discovered earlier. The condition should be:
len(current_packing) < len(solution)
In Python 3 you couldn't have missed this bug because you would have got an exception:
TypeError: '<' not supported between instances of 'int' and 'list'
2. Review
The use of the print statement suggests that you are using Python 2, but this version will no longer be supported from 2020. It would be better to use Python 3. Even if you are stuck on Python 2 for some reason, it would be better to use from __future__ import print_function so that your code can more easily be ported to Python 3 when the time comes.
There's no docstring. What does bin_pack do? What arguments does it take? What does it return?
Returning None when there is no solution is risky—the caller might forget to check. It is more robust to handle an exceptional case by raising an exception.
Some of the names could be improved—since this is a bin packing, the thing being packed ought to be called bin rather than pack. The names min_pm and max_pm are quite obscure: what does pm stand for? Names like min_weight or min_cost or min_size would be clearer.
Getting the weight of the items using item[3] is not very flexible—it forces the caller to represent items in a particular way. It would be better for bin_pack to take a function that can be applied to the item to get its weight. Then the caller could pass operator.itemgetter(3).
Calling the remove method on a list is not efficient: this method searches along the list to find the first matching item, which takes time proportional to the length of the list. In all the cases where the code uses remove, in fact the item to be removed is the last item in the list and so the pop method could be used instead.
It's not clear why the code restores the item at the next-to-last position in the list of items:
items.insert(-1, item)
Since the item came from the last position in the list, using items.pop(), I would have expected it to be put back at the last position (not the next-to-last) by calling items.append(item).
There are some cases where the same information has to be recomputed over and over again: (i) before returning a solution, the code has to check whether all bins have the minimum weight. But this fact could be remembered as part of the current state of the algorithm, so that it doesn't have to be recomputed all the time. (ii) Before deciding whether an item can go into a bin, the code adds up the weights of all the items in the bin. But again, the current weight of each bin could be remembered.
Making a deep copy of the solution ends up copying out the contents of the items as well as their organization into the solution. This is unnecessary and possibly harmful—in some use cases the items may not be copyable. A two-level copy is all that's needed here.
A bunch of difficulties arise because bin_pack is recursive: (i) passing min_pm and max_pm through all the recursive calls even though these never change; (ii) initializing current_packing on every recursive call even though this ought to only need to be done once; (iii) the best solution has to be passed and returned through all the recursive calls. These difficulties could all be avoided by defining a local function that does the recursion. See below for how you might do this.
There is an easy small speedup if you prune branches of the search that can't get you a better solution. See the revised code for how to do this.
3. Revised code
def bin_pack(items, weight, min_weight, max_weight):
    """Pack items (an iterable) into as few bins as possible, subject to
    the constraint that each bin must have total weight between
    min_weight and max_weight inclusive.

    Second argument is a function taking an item and returning its
    weight.

    If there is no packing satisfying the constraints, raise
    ValueError.

    """
    items = [(item, weight(item)) for item in items]
    if any(w > max_weight for _, w in items):
        raise ValueError("No packing satisfying maximum weight constraint")
    bins = []                    # current packing in the search
    bin_weights = []             # total weight of items in each bin
    best = [None, float('inf')]  # [best packing so far, number of bins]

    def pack():
        if best[1] <= len(bins):
            return  # Prune search here since we can't improve on best.
        if items:
            item, w = item_w = items.pop()
            for i in range(len(bins)):
                bin_weights[i] += w
                if bin_weights[i] <= max_weight:
                    bins[i].append(item)
                    pack()
                    bins[i].pop()
                bin_weights[i] -= w
            if len(bins) + 1 < best[1]:
                bins.append([item])
                bin_weights.append(w)
                pack()
                bin_weights.pop()
                bins.pop()
            items.append(item_w)
        elif all(w >= min_weight for w in bin_weights):
            best[:] = [[bin[:] for bin in bins], len(bins)]

    pack()
    if best[0] is None:
        raise ValueError("No packing satisfying minimum weight constraint")
    return best[0]
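For completeness, here is a runnable Python 3 rendering of the same algorithm, sketched with nonlocal variables in place of the best list (the note below on Python 2.7 explains why the list was needed there), with the question's two examples as a quick check:

```python
# Python 3 sketch of the reviewed algorithm: exhaustive branch-and-bound
# bin packing with both a maximum and a minimum bin weight.

def bin_pack(items, weight, min_weight, max_weight):
    """Pack items into as few bins as possible, each bin's total weight
    between min_weight and max_weight inclusive.  weight is a function
    from an item to its weight.  Raises ValueError if impossible."""
    items = [(item, weight(item)) for item in items]
    if any(w > max_weight for _, w in items):
        raise ValueError("No packing satisfying maximum weight constraint")
    bins = []                 # current packing in the search
    bin_weights = []          # total weight of items in each bin
    best = None               # best packing found so far
    best_bins = float('inf')  # number of bins in best packing

    def pack():
        nonlocal best, best_bins
        if best_bins <= len(bins):
            return  # prune: can't improve on best
        if items:
            item, w = item_w = items.pop()
            for i in range(len(bins)):
                bin_weights[i] += w
                if bin_weights[i] <= max_weight:
                    bins[i].append(item)
                    pack()
                    bins[i].pop()
                bin_weights[i] -= w
            if len(bins) + 1 < best_bins:
                bins.append([item])
                bin_weights.append(w)
                pack()
                bin_weights.pop()
                bins.pop()
            items.append(item_w)
        elif all(w >= min_weight for w in bin_weights):
            best, best_bins = [b[:] for b in bins], len(bins)

    pack()
    if best is None:
        raise ValueError("No packing satisfying minimum weight constraint")
    return best

# The question's examples, assuming the weight is the tuple's last slot:
items = [(0, 0, 0, 1), (0, 0, 0, 5), (0, 0, 0, 2), (0, 0, 0, 6)]
print(bin_pack(items, lambda t: t[3], 3, 6))  # three bins, each weighing 3-6
# bin_pack(items, lambda t: t[3], 4, 6) raises ValueError (was None before)
```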
Because this needs to run in Python 2.7, I had to make best into a list so that it can be updated from inside the locally defined function pack. In Python 3 we'd have two variables:
best = None
best_bins = float('inf')
and then inside pack we could declare these as nonlocal variables:
nonlocal best, best_bins
and just assign to them like any other variables. But this doesn't work in Python 2.7 because there's no equivalent of the nonlocal statement. | {
"domain": "codereview.stackexchange",
"id": 32924,
"tags": "python, python-2.x"
} |
Two interacting particles on sphere drift to sphere poles | Question: Suppose we have two particles which can move on sphere of radius $r$, and they attract to each other so that their potential energy is $g(d)=ad$ where $d$ is distance between them. I've found Lagrangian, it looks like this (in spherical coordinates):
$$L=\frac{r^2}2\left(m_1\left(\dot\theta_1^2+\dot\varphi_1^2\sin^2\theta_1\right)+m_2\left(\dot\theta_2^2+\dot\varphi_2^2\sin^2\theta_2\right)\right)-g\left(l_{arc}\left(\theta_1,\varphi_1,\theta_2,\varphi_2\right)\right),$$
where $l_{arc}$ is arc length between two points on sphere:
$$l_{arc}=2r\arcsin\left(\frac1{\sqrt2}\sqrt{1-\cos\theta_1\cos\theta_2-\cos\left(\varphi_1-\varphi_2\right)\sin\theta_1\sin\theta_2}\right)$$
So, equations of motion for particle $i$ look like:
$$\frac{m_i r^2}2 \frac{\text{d}}{\text{d}t}\begin{pmatrix}2\dot\theta_i\\ 2\dot\varphi_i\sin^2\theta_i\end{pmatrix}=-\begin{pmatrix}\frac{\partial g(l_{arc})}{\partial \theta_i}\\ \frac{\partial g(l_{arc})}{\partial \varphi_i}\end{pmatrix}+\begin{pmatrix}\sin\left(2\theta_i\right)\dot\varphi_i^2\\ 0\end{pmatrix}$$
I solve a Cauchy problem, so here're initial conditions:
$$\theta_1(0)=\frac\pi2+10^{-4}\\ \theta_2(0)=1.05\cdot\frac\pi2\\ \varphi_1(0)=-1.5\\ \varphi_2(0)=-1.45\\
\dot\theta_1(0)=0.003\\ \dot\theta_2(0)=0.003\\ \dot\varphi_1(0)=-0.01\\ \dot\varphi_2(0)=-0.01$$
Now as I solve this problem, it appears that the system drifts to the pole of the sphere whenever the particles interact (here $r=100,\; m_1=m_2=1,\; a=1$):
I've tried multiple methods of solving this problem in Mathematica using NDSolve with big range of steps, but still the trend is the same, it seems to not depend on method of solution. So, I seem to have made a mistake somewhere in formulating the problem.
If, instead of attracting, the particles repel, they again appear attracted to the poles:
Is my derivation correct - i.e. finding Lagrangian and deriving equations of motion? What could be the reason for such strange error?
Answer: My mistake was omitting factor of $\frac{mr^2}2$ in equations of motion near rightmost vector. They should instead look like this:
$$\frac{m_i r^2}2 \frac{\text{d}}{\text{d}t}\begin{pmatrix}2\dot\theta_i\\ 2\dot\varphi_i\sin^2\theta_i\end{pmatrix}=-\begin{pmatrix}\frac{\partial g(l_{arc})}{\partial \theta_i}\\ \frac{\partial g(l_{arc})}{\partial \varphi_i}\end{pmatrix}+\frac{m_i r^2}2\begin{pmatrix}\sin\left(2\theta_i\right)\dot\varphi_i^2\\ 0\end{pmatrix}$$
With this corrected, the result is as expected: | {
"domain": "physics.stackexchange",
"id": 9225,
"tags": "classical-mechanics"
} |
Cleaning up and reformatting imported data in an Excel sheet | Question: The code below was refactored for performance improvements for another user on this site.
Functionality, high level:
Sheet1 - CodeName aIndex: used as the main reference to the structure of the data being processed in 2 other sheets: mapping column headers for incoming data in sheet2, to column headers to be processed for the final result on Sheet3
Sheet2 - CodeName bImport: this is where external (raw) data is imported before processing. Importing of data is not part of this process
Sheet3 - CodeName cFinal: out of a set of about 50 incoming columns, Sheet1 will define a subset of 20 to 30 columns to be processed for the final result
The code is fully functional, without issues, and decent performance (50,000 rows and 44 columns processed in 4 to 5 seconds); it contains more comments than usual for learning purposes, explaining some basic steps, or things that may not be obvious or clear to an inexperienced person.
Notes:
This is not a request that requires understanding of the functionality, or finding inefficiencies (unless there are obvious parts that can be optimized).
It's about self improvement relative to coding practices: I am open to any criticism no matter how harsh, for any mistakes I may have made - I'll easily swallow my pride, as long as I can improve any bad habits I may have picked up along the way.
When I posted the question I intended to make it as relevant to this site as possible: does this code make my ass look fat?
I realize that members of this community are volunteers (like me), and provide feedback out of passion about the subject, so I tried to analyse the question objectively, as a reviewer:
The code is way too long to make me feel it's worth the effort, and this is the reason I didn't bring its functionality into the mix: there is less effort required for analyzing it at a high level (coding style), and not intricacies of functionality
There is nothing I can do to make it shorter: I was curious about its structure: did I modularize it enough, or maybe too much
I wouldn't want to get involved in a long review by attempting to understand its logic and reasons of doing what it does, but just quick feedback about anything obviously bad from a readability and maintainability perspective
That said, I will provide relevant details about functionality for each part as context for the algorithm.
The first Sub controls the start and end of the entire process (after an imported file): turns off all events and calculations in Excel that can slow down execution, starts a timer, starts the main process, captures the total duration, and turns all Excel features back on:
Option Explicit

Public Sub projectionTemplateFormat()
    Dim t1 As Double, t2 As Double

    fastWB True     'turn off all Excel features related to GUI and calculation updates
    t1 = Timer      'start performance timer
    mainProcess
    t2 = Timer      'process is completed
    fastWB False    'turn Excel features back on
    'MsgBox "Duration: " & t2 - t1 & " seconds"  'optional measurement output
End Sub
The next Sub is where the main processing is done, and makes calls to smaller helper functions:
Sets up all references needed during processing: the 3 workbooks, and a set of local variables
Determines the columns and size of imported data (Sheet2)
Determines if there is any previous data on the result sheet (Sheet3) for cleanup
It doesn't remove the headers: these are the columns to be migrated from the imported data
Overwrites the headers in Imported Sheet with a standard set of headers defined on Sheet1
The headers on Sheet1 can be adjusted by the user (added, removed, renamed) relative to the expected headers in the imported data
They are also aligned with the headers on Sheet3 (the final result)
Re-formats the imported data with specific text, number, and date formats
If there is at least 1 row of imported data on Sheet2, it starts the main process
The following steps are the most CPU intensive task:
Start looping over each column on Sheet3 (columns of the final result)
Find the first column to be migrated (based on the header name from Sheet3)
If found, set a reference to the entire column with data (50,000 rows or more)
Set a reference on Sheet3, to an area of the same size as the column of imported data
Copy the data from Sheet2 to Sheet3
Move on to the next column on Sheet3 and repeat the process until all predefined columns on Sheet3 are populated
Overwrite some imported values on Sheet3 with hard-coded data from Sheet1
Reformat the dates on 2 specific columns on Sheet3 to "YYYY" requirement
Reformat other specific columns on Sheet3
Convert all data on Sheet3 to UPPER CASE
Apply cell and font formatting to all data on Sheet3
Zoom all sheets to 85%
Private Sub mainProcess()
    Const SPACE_DELIM As String = " "

    Dim wsIndex As Worksheet
    Dim wsImport As Worksheet   'Raw data
    Dim wsFinal As Worksheet    'Processed data
    Dim importHeaderRng As Range
    Dim importColRng As Range
    Dim importHeaderFound As Variant
    Dim importLastRow As Long
    Dim finalHeaderRng As Range
    Dim finalColRng As Range
    Dim finalHeaderRow As Variant
    Dim finalHeaderFound As Variant
    Dim indexHeaderCol As Range
    Dim header As Variant       'Each item in the FOR loop
    Dim msg As String

    Set wsIndex = aIndex    'This is the Code Name; top-left pane: aIndex (Index)
    Set wsImport = bImport  'Direct reference to Code Name: bImport.Range("A1")
    Set wsFinal = cFinal    'Reference using Sheets collection: ThisWorkbook.Worksheets("Final")

    With wsImport.UsedRange
        Set importHeaderRng = .Rows(1)                      'Import - Headers
        importLastRow = getMaxCell(wsImport.UsedRange).Row  'Import - Total Rows
    End With
    With wsFinal.UsedRange
        finalHeaderRow = .Rows(1)       'Final - Headers (as Array)
        Set finalHeaderRng = .Rows(1)   'Final - Headers (as Range)
    End With
    With wsIndex.UsedRange  'Transpose col 3 from Index (without the header), as column names in Import
        Set indexHeaderCol = .Columns(3).Offset(1, 0).Resize(.Rows.Count - 1, 1)
        wsImport.Range(wsImport.Cells(1, 1), wsImport.Cells(1, .Rows.Count - 1)).Value2 = Application.Transpose(indexHeaderCol)
    End With

    applyColumnFormats bImport  'Apply date and number format to Import sheet

    If Len(bImport.Cells(2, 1).Value2) > 0 Then     'if Import sheet is not empty (excluding header row)
        With Application
            For Each header In finalHeaderRow       'Loop through all headers in Final
                If Len(Trim(header)) > 0 Then       'If the Final header is not empty
                    importHeaderFound = .Match(header, importHeaderRng, 0)  'Find header in Import sheet
                    If IsError(importHeaderFound) Then
                        msg = msg & vbLf & header & SPACE_DELIM & wsImport.Name 'Import doesn't have current header
                    Else
                        finalHeaderFound = .Match(header, finalHeaderRng, 0)    'Find header in Final sheet
                        With wsImport
                            Set importColRng = .UsedRange.Columns(importHeaderFound).Offset(1, 0).Resize(.UsedRange.Rows.Count - 1, 1)
                        End With
                        With wsFinal
                            Set finalColRng = .Range(.Cells(2, finalHeaderFound), .Cells(importLastRow, finalHeaderFound))
                            finalColRng.Value2 = vbNullString   'Delete previous values (entire column)
                        End With
                        finalColRng.Value2 = importColRng.Value2    'Copy Import data in Final columns
                    End If
                End If
            Next
        End With

        setStaticData importLastRow
        extractYears
        applyColumnFormats cFinal   'Apply date and number format to Final sheet
        allUpper wsFinal
        'wsFinal.UsedRange.AutoFilter
        applyFormat wsFinal.Range(wsFinal.Cells(1, 1), wsFinal.Cells(importLastRow, wsFinal.UsedRange.Columns.Count))

        Dim ws As Worksheet
        For Each ws In Worksheets
            ws.Activate
            ActiveWindow.Zoom = 85
            ws.Cells(2, 2).Activate
            ActiveWindow.FreezePanes = True
            ws.Cells(1, 1).Activate
        Next
    Else
        MsgBox "Missing raw data (Sheet 2 - 'Import')", vbInformation, " Missing Raw Data"
    End If
End Sub
Next method is a straight overwrite operation of static data from Sheet1 onto Sheet3
Private Sub setStaticData(ByVal lastRow As Long)
    With cFinal
        .Range("D2:D" & lastRow).Value = aIndex.Range("H2").Value
        .Range("F2:F" & lastRow).Value = aIndex.Range("H9").Value
        .Range("AC2:AC" & lastRow).Value = aIndex.Range("H3").Value
        .Range("X2:X" & lastRow).Value = aIndex.Range("H4").Value
        .Range("Y2:Y" & lastRow).Value = aIndex.Range("H5").Value
        .Range("AE2:AE" & lastRow).Value = aIndex.Range("H6").Value
        .Range("AF2:AF" & lastRow).Value = aIndex.Range("H7").Value
        .Range("AD2:AD" & lastRow).Value = aIndex.Range("H8").Value
    End With
End Sub
Another method of applying a specific text, number, date format to a set of columns (the same set of columns on either Sheet2 (Import), or Sheet3 (final result)
Private Sub applyColumnFormats(ByRef ws As Worksheet)
    With ws.UsedRange
        .Cells.NumberFormat = "@"   'all cells default to Text ("@")
        .Columns(colNum("G")).NumberFormat = "MM/DD/YYYY"
        .Columns(colNum("I")).NumberFormat = "MM/DD/YYYY"
        '.Columns(colNum("A")).NumberFormat = "@"
        '.Columns(colNum("B")).NumberFormat = "@"
        '.Columns(colNum("C")).NumberFormat = "@"
        .Columns(colNum("R")).NumberFormat = "MM/DD/YYYY"
        .Columns(colNum("Q")).NumberFormat = "MM/DD/YYYY"
        .Columns(colNum("T")).NumberFormat = "MM/DD/YYYY"
        .Columns(colNum("W")).NumberFormat = "@"    '"YYYY"
        .Columns(colNum("V")).NumberFormat = "@"    '"YYYY"
        .Columns(colNum("AC")).NumberFormat = "MM/DD/YYYY"
        .Columns(colNum("N")).NumberFormat = "_($* #,##0.00_);_($* (#,##0.00);_($* ""-""??_);_(@_)"
        .Columns(colNum("AM")).NumberFormat = "_($* #,##0.00_);_($* (#,##0.00);_($* ""-""??_);_(@_)"
        .Columns(colNum("AN")).NumberFormat = "_($* #,##0.00_);_($* (#,##0.00);_($* ""-""??_);_(@_)"
        .Columns(colNum("AO")).NumberFormat = "_($* #,##0.00_);_($* (#,##0.00);_($* ""-""??_);_(@_)"
    End With
End Sub
Helper method: Cell, border, and font formatting to all data on Sheet3
Private Sub applyFormat(ByRef rng As Range)
    With rng
        .ClearFormats
        With .Font
            .Name = "Georgia"
            .Color = RGB(0, 0, 225)
        End With
        .Interior.Color = RGB(216, 228, 188)
        With .Rows(1)
            .Font.Bold = True
            .Interior.ColorIndex = xlAutomatic
        End With
        With .Borders
            .LineStyle = xlDot  'xlContinuous
            .ColorIndex = xlAutomatic
            .Weight = xlThin
        End With
    End With
    refit rng
End Sub
Helper method: Converts all data to upper case
The main aspect about all helper methods acting on large ranges of data is that they perform:
Only one interaction with the worksheet to copy all data to memory
Processes each individual value by looping over the memory arrays (unavoidable nested loops for 2 dimensional arrays)
Then, in another single interaction with the sheet, places all the transformed data back in the same area
This is, by far, the most overlooked performance improvement. It requires minimum coding effort, but might be perceived as a somewhat difficult concept to grasp for novice VBA enthusiasts (including myself) who just want to get the job done, without "complicating" things
Private Sub allUpper(ByRef sh As Worksheet)
    Dim arr As Variant, i As Long, j As Long

    If WorksheetFunction.CountA(sh.UsedRange) > 0 Then
        arr = sh.UsedRange
        For i = 2 To UBound(arr, 1)         'each "row"
            For j = 1 To UBound(arr, 2)     'each "col"
                arr(i, j) = UCase(RTrim(Replace(arr(i, j), Chr(10), vbNullString)))
            Next
        Next
        sh.UsedRange = arr
    End If
End Sub
Helper method: converts dates on certain columns to a YYYY format. In retrospect, I should have made it generic to accept a column name, range, letter, or number as a parameter instead of hard-coding 2 columns. The point I was trying to make here was to combine multiple columns within one loop for improved performance, instead of several loops performing the same operation on different columns
Private Sub extractYears()
    Dim arr As Variant, i As Long, j As Long, ur As Range, colW As Long, colV As Long

    Set ur = cFinal.UsedRange   '3rd sheet
    If WorksheetFunction.CountA(ur) > 0 Then
        colW = colNum("W")
        colV = colNum("V")
        arr = ur
        For i = 2 To getMaxCell(ur).Row     'each "row"
            If Len(arr(i, colW)) > 0 Then arr(i, colW) = Format(arr(i, colW), "yyyy")
            If Len(arr(i, colV)) > 0 Then arr(i, colV) = Format(arr(i, colV), "yyyy")
        Next
        ur = arr
    End If
End Sub
Private Sub refit(ByRef rng As Range)
    With rng
        .WrapText = False
        .HorizontalAlignment = xlGeneral
        .VerticalAlignment = xlCenter
        .Columns.EntireColumn.AutoFit
        .Rows.EntireRow.AutoFit
    End With
End Sub
Helper methods: next are 2 generic functions that return:
The column letter from the column number
The column number from the column letter
These are not ideal names, as they're not descriptive enough (not intuitive or self-documenting). My reason (not an excuse): long names don't fit well in the small area provided - that doesn't make it OK
Public Function colLtr(ByVal fromColNum As Long) As String  'get column letter from column number
    'maximum number of columns in Excel 2007, last column: "XFD" (16384)
    Const MAX_COLUMNS As Integer = 16384

    If fromColNum > 0 And fromColNum <= MAX_COLUMNS Then
        Dim indx As Long, cond As Long
        For indx = Int(Log(CDbl(25 * (CDbl(fromColNum) + 1))) / Log(26)) - 1 To 0 Step -1
            cond = (26 ^ (indx + 1) - 1) / 25 - 1
            If fromColNum > cond Then
                colLtr = colLtr & Chr(((fromColNum - cond - 1) \ 26 ^ indx) Mod 26 + 65)
            End If
        Next indx
    Else
        colLtr = 0
    End If
End Function
Public Function colNum(ByVal fromColLtr As String) As Long
'A to XFD (upper or lower case); if the parameter is invalid it returns 0
'maximum number of columns in Excel 2007, last column: "XFD" (16384)
Const MAX_LEN As Byte = 4
Const LTR_OFFSET As Byte = 64
Const TOTAL_LETTERS As Byte = 26
Const MAX_COLUMNS As Integer = 16384
Dim paramLen As Long
Dim tmpNum As Integer
paramLen = Len(fromColLtr)
tmpNum = 0
If paramLen > 0 And paramLen < MAX_LEN Then
Dim i As Integer
Dim tmpChar As String
Dim numArr() As Integer
fromColLtr = UCase(fromColLtr)
ReDim Preserve numArr(paramLen)
For i = 1 To paramLen
tmpChar = Asc(Mid(fromColLtr, i, 1))
If tmpChar < 65 Or tmpChar > 90 Then Exit Function 'make sure it's a letter. upper case: 65 to 90, lower case: 97 to 122
numArr(i) = tmpChar - LTR_OFFSET 'change letter to number indicating place in alphabet (from 1 to 26)
Next
Dim highPower As Integer
highPower = UBound(numArr()) - 1 'the most significant digits occur to the left
For i = 1 To highPower + 1
tmpNum = tmpNum + (numArr(i) * (TOTAL_LETTERS ^ highPower)) 'convert the number array using powers of 26
highPower = highPower - 1
Next
End If
If tmpNum < 0 Or tmpNum > MAX_COLUMNS Then tmpNum = 0
colNum = tmpNum
End Function
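The bijective base-26 mapping these two functions implement can be sketched much more compactly. The following is an illustrative Python model (not part of the reviewed VBA; the function names are mine):

```python
# Illustrative model of the column letter <-> number bijection
# (bijective base-26: A=1 ... Z=26, AA=27, ..., XFD=16384).

def col_letter(n):
    """Column letter from a 1-based column number."""
    letters = ""
    while n > 0:
        n, rem = divmod(n - 1, 26)       # shift to 0-based before each division
        letters = chr(ord("A") + rem) + letters
    return letters

def col_number(s):
    """1-based column number from a column letter string (case-insensitive)."""
    n = 0
    for ch in s.upper():
        n = n * 26 + (ord(ch) - ord("A") + 1)
    return n
```

The key trick is the `n - 1` shift: ordinary base-26 has a zero digit, while column letters do not, so each division step works on a 0-based value.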
For the next method I applied an extra performance improvement over the usual known method of determining the last cell with data:
Normal methods perform an inverse search for the first data value, starting at the last row\column of an Excel sheet (which now has over 1 million rows and 16 thousand columns)
This method operates only on the "UsedRange" - the notoriously inaccurate range that remembers cell formatting, unused formulas, hidden objects, etc. However, this inaccurate range is much smaller than the entire sheet, yet large enough to include all data, so the inverse search runs over only a few excess rows and columns
By my definition, the last used cell can also be empty, as long as it represents the longest row and column with data
Public Function getMaxCell(ByRef rng As Range) As Range
'search the entire range (usually UsedRange)
'last row: find first cell with data, scanning rows, from bottom-right, leftwards
'last col: find first cell with data, scanning cols, from bottom-right, upwards
With rng
Set getMaxCell = rng.Cells _
( _
.Find( _
What:="*", _
SearchDirection:=xlPrevious, _
LookIn:=xlFormulas, _
After:=rng.Cells(1, 1), _
SearchOrder:=xlByRows).Row, _
.Find( _
What:="*", _
SearchDirection:=xlPrevious, _
LookIn:=xlFormulas, _
After:=rng.Cells(1, 1), _
SearchOrder:=xlByColumns).Column _
)
End With
End Function
Helper method: another set of versatile general functions for turning off Excel features that might hinder VBA performance, main ones:
xlCalculationAutomatic - extremely convenient for manual interactions with sheets, but a huge source of performance issues when performing VBA updates to large ranges, as it triggers recalculation of all dependent formulas on the sheet(s)
EnableEvents - can trigger nested events (infinite recursion), which Excel terminates eventually. It may also cause inexplicable or unexpected VBA behavior when not turned back on
ScreenUpdating - well known
DisplayPageBreaks: I've seen an earlier comment referring to this. To me this is insidious - perceived as harmless, when in fact it can cause extra work behind the scenes, especially when re-sizing rows and columns. I never print anything, so I never care about page breaks, but Excel cares about them at every move: re-size 1 column\row and it recalculates the page size for the entire used area. It should be enabled only when printing
Public Sub fastWB(Optional ByVal opt As Boolean = True)
With Application
.Calculation = IIf(opt, xlCalculationManual, xlCalculationAutomatic)
If .DisplayAlerts <> Not opt Then .DisplayAlerts = Not opt
If .DisplayStatusBar <> Not opt Then .DisplayStatusBar = Not opt
If .EnableAnimations <> Not opt Then .EnableAnimations = Not opt
If .EnableEvents <> Not opt Then .EnableEvents = Not opt
If .ScreenUpdating <> Not opt Then .ScreenUpdating = Not opt
End With
fastWS , opt
End Sub
Public Sub fastWS(Optional ByVal ws As Worksheet, Optional ByVal opt As Boolean = True)
If ws Is Nothing Then
For Each ws In Application.ActiveWorkbook.Sheets
setWS ws, opt
Next
Else
setWS ws, opt
End If
End Sub
Private Sub setWS(ByVal ws As Worksheet, ByVal opt As Boolean)
With ws
.DisplayPageBreaks = False
.EnableCalculation = Not opt
.EnableFormatConditionsCalculation = Not opt
.EnablePivotTable = Not opt
End With
End Sub
Public Sub xlResetSettings() 'default Excel settings
With Application
.Calculation = xlCalculationAutomatic
.DisplayAlerts = True
.DisplayStatusBar = True
.EnableAnimations = False
.EnableEvents = True
.ScreenUpdating = True
Dim sh As Worksheet
For Each sh In Application.ActiveWorkbook.Sheets
With sh
.DisplayPageBreaks = False
.EnableCalculation = True
.EnableFormatConditionsCalculation = True
.EnablePivotTable = True
End With
Next
End With
End Sub
Any suggestions to improve readability for ease of maintenance, restructuring functions, naming conventions, etc., will be much appreciated
Answer: This isn't going to be a full-blown, fine-combed review. Just a few points.
Use PascalCase for procedure/member identifiers. Being consistent about this helps readability because it makes it easy to tell members from locals and parameters at a glance, without even reading them.
In general your indenting is fine, except here:
fastWB True 'turn off all Excel features related to GUI and calculation updates
t1 = Timer 'start performance timer
mainProcess
t2 = Timer 'process is completed
fastWB False 'turn Excel features back on
Yes, it's a logical block, a bit like On Error Resume Next {instruction} On Error GoTo 0 would be. But it's not a syntactic code block. A different usage of vertical whitespace does a better job of regrouping the statements, I find:
fastWB True 'turn off all Excel features related to GUI and calculation updates
t1 = Timer 'start performance timer
mainProcess
t2 = Timer 'process is completed
fastWB False 'turn Excel features back on
The comments are annoying more than anything else. Consider using more descriptive identifiers instead:
ToggleExcelPerformance
startTime = Timer
RunMainProcess
endTime = Timer
ToggleExcelPerformance False
Note that the difference between startTime and endTime will be skewed if you run this code a few seconds before midnight on your system, because of how Timer works. Shameless plug, but with a little bit of abuse there are much more precise and reliable ways to time method execution (I co-own the rubberduck project), especially if you don't need the duration to be in your "production code".
This declaration came as a surprise:
Dim ws As Worksheet
For Each ws In Worksheets
Why? Because it's the only declaration in the MainProcess method, that's declared close to usage (as it should). Either stick it to the top of the procedure with the other ones (eh, don't do that), or move the other declarations closer to their first usage (much preferred).
Pretty much the entire procedure's body is wrapped in this If..Else block:
If Len(bImport.Cells(2, 1).Value2) > 0 Then
'wall of code
Else
MsgBox "Missing raw data (Sheet 2 - 'Import')", vbInformation, "Missing Raw Data"
End If
I suggest you invert the condition to reduce nesting:
If Len(bImport.Cells(2, 1).Value2) = 0 Then
MsgBox "Missing raw data (Sheet 2 - 'Import')", vbInformation, "Missing Raw Data"
Exit Sub
End If
'wall of code
This is what I like to call an abuse of the With statement:
With Application
'wall of code
End With
I like that you're making explicitly qualified references to the Application object like this, ...but not like this - a With block should look like this:
With someInstance
foobar = .Foo(42)
.DoSomething
.Bar smurf
End With
If you're merely wrapping a whole method with a With block just to avoid having to type Application the 3-4 times you're referring to the Application object, ...sorry to say, but you're just being lazy - and you've uselessly increased nesting for that reason, too.
IMO this is another abusive/lazy usage of With:
With wsImport
Set importColRng = .UsedRange.Columns(importHeaderFound).Offset(1, 0).Resize(.UsedRange.Rows.Count - 1, 1)
End With
Versus:
Set importColRng = wsImport.UsedRange.Columns(importHeaderFound) _
.Offset(1, 0) _
.Resize(wsImport.UsedRange.Rows.Count - 1, 1)
This is awkward:
With rng
Set getMaxCell = rng.Cells _
( _
.Find( _
What:="*", _
SearchDirection:=xlPrevious, _
LookIn:=xlFormulas, _
After:=rng.Cells(1, 1), _
SearchOrder:=xlByRows).Row, _
.Find( _
What:="*", _
SearchDirection:=xlPrevious, _
LookIn:=xlFormulas, _
After:=rng.Cells(1, 1), _
SearchOrder:=xlByColumns).Column _
)
End With
You open up a With block, but the first statement in it ignores it:
Set getMaxCell = rng.Cells _
Should be
Set getMaxCell = .Cells _
And then After:=rng.Cells(1, 1) is also referring to rng. What do you need that With block for, really?
Now, I really don't like that .Cells call: that 15-liner single instruction is doing way too many things. An instruction should only have as few as possible reasons to fail. If either Find fails, you'll have a runtime error 91, and no clue if it's the row or the column find that's blowing up.
Function GetMaxCell(ByRef rng As Range) As Range
On Error GoTo CleanFail
Const NONEMPTY As String = "*"
Dim foundRow As Long
foundRow = rng.Find(What:=NONEMPTY, _
SearchDirection:=xlPrevious, _
LookIn:=xlFormulas, _
After:=rng.Cells(1, 1), _
SearchOrder:=xlByRows) _
.Row
Dim foundColumn As Long
foundColumn = rng.Find(What:=NONEMPTY, _
SearchDirection:=xlPrevious, _
LookIn:=xlFormulas, _
After:=rng.Cells(1, 1), _
SearchOrder:=xlByColumns) _
.Column
Set GetMaxCell = rng.Cells(foundRow, foundColumn)
CleanExit:
Exit Function
CleanFail:
Set GetMaxCell = Nothing
Resume CleanExit 'break here
Resume 'set next statement here
End Function
That will return Nothing to the caller (for it to handle of course) instead of blowing up if the function is given an empty range, or any other edge case that wasn't accounted for. And as a bonus, all you need to do to find the problem is to place a breakpoint just before the error-handling subroutine finishes.
There's certainly a lot more to say about this code, ...but this answer is already long enough as it is ;-) | {
"domain": "codereview.stackexchange",
"id": 26791,
"tags": "performance, algorithm, vba, excel"
} |
variable in turing machine | Question: Here is a simple Turing machine that accepts only $0$.
To do this, it reads $0$, overwrites it with $x$, then moves right and checks for the end of the input; if it is the end, it accepts.
With the same logic, I can also add the rule 1->x,R; then it accepts all strings of length 1 over {'0','1'}.
What I want to do, instead of writing two rules, is to write a single one: $anything \to another\_thing$, R.
Can I do it?
Answer: A Turing machine diagram is a form of communication. Its goal is to describe a Turing machine — usually defined as a tuple with a specific format — in a form which is easier to grasp. As such, you can use whatever shortcut you want, as long as everybody understands what you mean. This might entail your explaining some non-standard notation which you invented as shorthand.
Your professor might have different ideas about this, but the answer I gave is valid for the working mathematician. | {
"domain": "cs.stackexchange",
"id": 15024,
"tags": "turing-machines"
} |
Is it true that bees and wasps don't sting if struck by hand while they are in flight? | Question: Is it true that wasps don't sting if struck by hand while flying? I know of one person who claims to have done this at least 20 times and has never been stung. And out of curiosity (not that I'd want to kill them), what about bees?
Answer: I think what your friend advised is incorrect, or they are very lucky. According to "Bee and Wasp Stings" from the University of California Davis, they specifically state:
Unless someone accidentally collides quite hard with or swats at a bee or wasp, it is not likely to sting.
Also, if you swat at some species, they are likely to become agitated, making the situation potentially far worse. | {
"domain": "biology.stackexchange",
"id": 8946,
"tags": "entomology"
} |
Troubles with autonomouse mapping with nav2d and turtlebot 2 - StartExploration Error | Question:
Hi,
I just started with ros and don't have much experience.
I already installed the nav2d package.
As described in the tutorial, I started the command rosservice call /StartMapping 3, and the robot drives 1 meter forward and makes a 180 turn. I don't know if that is correct.
Then I start rosservice call /StartExploration 2 and then the following error is given:
Failed to compute odometry pose, skipping scan ("base_laser_link" passed to lookupTransform argument source_frame does not exist. )
[ERROR] [1468626044.630942697]: You must specify at least three points for the robot footprint, reverting to previous footprint.
[ERROR] [1468626142.281016628]: Is the robot out of the map?
[ERROR] [1468626142.281070197]: Exploration failed, could not get current position.
In my tf tree I can't find the base_laser_link.
You can find my rqt_graph here.
I start my TurtleBot with minimal.launch and I also start my 3D sensor. After that, I start the following launch file.
My tutorial3 launch file is:
<launch>
<param name="use_sim_time" value="false" />
<rosparam file="$(find nav2d_tutorials)/param/ros.yaml"/>
<!-- Start the Operator to control the simulated robot -->
<node name="Operator" pkg="nav2d_operator" type="operator" >
<!-- <remap from="scan" to="base_scan"/> -->
<remap from="cmd_vel" to="mobile_base/commands/velocity"/>
<rosparam file="$(find nav2d_tutorials)/param/operator.yaml"/>
<rosparam file="$(find nav2d_tutorials)/param/costmap.yaml" ns="local_map" />
</node>
<!-- Start Mapper to generate map from laser scans -->
<node name="Mapper" pkg="nav2d_karto" type="mapper">
<!-- <remap from="scan" to="base_scan"/> -->
<rosparam file="$(find nav2d_tutorials)/param/mapper.yaml"/>
</node>
<!-- Start the Navigator to move the robot autonomously -->
<node name="Navigator" pkg="nav2d_navigator" type="navigator">
<rosparam file="$(find nav2d_tutorials)/param/navigator.yaml"/>
</node>
<node name="GetMap" pkg="nav2d_navigator" type="get_map_client" />
<node name="Explore" pkg="nav2d_navigator" type="explore_client" />
<node name="SetGoal" pkg="nav2d_navigator" type="set_goal_client" />
<!-- RVIZ to view the visualization -->
<node name="RVIZ" pkg="rviz" type="rviz" args=" -d $(find nav2d_tutorials)/param/tutorial3.rviz" /></launch>
There is also a problem with the rviz visualization. Currently it looks like this: http://imgh.us/Screenshot_28.png.
I hope somebody is able to help me because other questions and answers in this forum didn't help me.
Edit 25. July 16:
The new (error) messages are:
[ERROR] [1469487068.001244894]: You must specify at least three points for the robot footprint, reverting to previous footprint.
[ INFO] [1469487068.047285205]: Will publish desired direction on 'route' and control direction on 'desired'.
[ INFO] [1469487068.061046721]: Initializing LUT...
[ INFO] [1469487068.063258093]: ...done!
[ WARN] [1469487068.430814767]: The scan observation buffer has not been updated for 0.47 seconds, and it should be updated every 0.40 seconds.
The new tf-tree looks like this. The map and offset frame disappeared.
Here you can also find my ros.yaml file:
### TF frames #############################################
laser_frame: laser #base_laser_link
robot_frame: base_link
odometry_frame: odom
offset_frame: offset
map_frame: map
### ROS topics ############################################
map_topic: map
laser_topic: scan
### ROS services ##########################################
map_service: static_map
Edit 26. July 2016
There is no configuration regarding the footprint in my config files.
The footprint error is really strange because it doesn't appear at the first launch of minimal.launch, 3dsensor.launch and tutorial3.launch. It only appears on a second start of tutorial3.launch with the 3D sensor and TurtleBot already running, so I think it's not that bad.
I am getting sensible data from the laser, which you can also see here:
There were new error messages which you can find here. I also added the Parameters of 3Dsensor.launch and tutorial3.launch.
roswtf says that there is a problem with the unconnected Operator nodes, but I don't know whether this is an important error.
Originally posted by anonymous27503 on ROS Answers with karma: 36 on 2016-07-15
Post score: 0
Original comments
Comment by Chrissi on 2016-07-15:
Sadly, your error message is not there.
Comment by anonymous27503 on 2016-07-15:
oh sorry, I added it.
Comment by anonymous27503 on 2016-07-20:
@sebastian kasperski As developer of the nav2d package maybe you can help me.
Answer:
You have to set all frame-names according to your robot setup. In your tf-tree the laser frame is called "laser", so you have to set the parameter "laser_frame" in the ros.yaml parameter file from nav2d to just "laser".
Edit:
It looks like you are defining a footprint somewhere that is not correct. As the operator can only handle robot radius (as defined in costmap.yaml), you should check your yaml files and remove this definition. "rosparam list" might also be helpful.
But actually I don't think that this is really your problem. This all happens within the Operator and from your screenshot it seems to be fine. (blue and green trajectory indicator are there) Are there any other error messages? Have you visualized the /scan topic to see if it contains useful data?
In general, I would suggest to follow the tutorials step by step, so:
Add Operator, check that costmap is there and the robot can move safely.
Add Mapper, check that a global map is built and localization is good
Add Navigator, check navigation and autonomous exploration
Originally posted by Sebastian Kasperski with karma: 1658 on 2016-07-25
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by anonymous27503 on 2016-07-25:
Thanks for your answer @Sebastian Kasperski. I changed the laser_frame to laser but it seems that this kills the map and offset frame. --> I added the new tf-tree to the question.
I think that's the reason for not receiving a map.
Comment by anonymous27503 on 2016-07-26:
@Sebastian Kasperski
I edited my question regarding your edit.
When I run rosservice call /StartMapping 3, the robot drives through the room without stopping. Normally the robot should do a 360 turn, as described in your tutorial.
Comment by anonymous27503 on 2016-07-26:
I started rviz with the rviz config file of the gmapping demo and got a costmap as you can see in the picture.
Comment by Sebastian Kasperski on 2016-07-27:
Almost all your laser points are outside your costmap, this cannot work. You should really setup your system step by step and test everything before you continue with the next one. You cannot debug the whole system at once.
Comment by anonymous27503 on 2016-07-27:
Should I follow the 3 steps you supposed or do you mean to set up the complete robot?
Comment by Sebastian Kasperski on 2016-07-28:
I mean setting up the robot. Add all the components step by step and verify each before you go on. And if you then run into any troubles, you can ask here,
"domain": "robotics.stackexchange",
"id": 25258,
"tags": "ros, nav2d, 2d-mapping"
} |
Simple parser using Flex and C++ | Question: This is an alternative parser based on the specifications from this question. Briefly stated, the input file is a text file which has at least 33 fields separated by semicolons.
If the fourth field begins with either a T or an E, the line is valid and a subset of it written to the output file. Specifically, fields as numbered from \$0\$, should be output in this order: \$ \{0, 2, 3, 4, 5, 6, 10, 9, 11, 7, 32\}\$, each separated by a comma. All other fields are discarded.
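For reference, the required transformation can be modeled independently of any particular parser. The following Python sketch is mine (the function name and constant are invented), but it captures the spec above: validate on field 3, then emit the selected fields in the given order:

```python
# Reference model of the line transformation described above.
# Output order of the kept fields, numbered from 0.
OUTPUT_ORDER = [0, 2, 3, 4, 5, 6, 10, 9, 11, 7, 32]

def transform_line(line):
    """Return the comma-joined output fields, or None if the line is invalid."""
    fields = line.rstrip("\n").split(";")
    # A valid line has at least 33 fields and field 3 starts with 'T' or 'E'.
    if len(fields) < 33 or not fields[3].startswith(("T", "E")):
        return None
    return ",".join(fields[i] for i in OUTPUT_ORDER)
```
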
One of the other answers there suggested that one could use a Flex-based parser instead. My own efforts were not faster, but I'm hoping that someone can review this and show me how to extract more speed from this version.
lexer.l
%{
#include <iostream>
#include <fstream>
#include <vector>
#include <string>
#include <algorithm>
#include <experimental/iterator>
#include <iterator>
#undef YY_DECL
#define YY_DECL int FileLexer::yylex()
class FileLexer : public yyFlexLexer {
public:
FileLexer(std::istream& in, std::ostream& out) :
yyFlexLexer{&in, &out},
out{out}
{}
using FlexLexer::yylex;
/// the yylex function is automatically created by Flex.
virtual int yylex();
private:
/// pointer to the current value
std::vector<std::string> vec;
std::ostream& out;
unsigned fieldcount{0};
bool valid{true};
};
%}
%option warn nodefault batch noyywrap c++
%option yyclass="FileLexer"
FIELD [^;\n]*
DELIM ;
%%
{DELIM} { }
\n {
if (valid && fieldcount >= 33) {
std::copy(vec.begin(), vec.end(), std::experimental::make_ostream_joiner(out, ","));
out << '\n';
}
vec.clear();
fieldcount = 0;
valid = true;
return 1;
}
{FIELD} {
if (valid) {
switch (fieldcount++) {
case 0:
case 1:
case 4:
case 5:
case 6:
case 7:
case 9:
case 32:
vec.push_back(yytext);
break;
case 3:
if (yytext[0] == 'E' || yytext[0] == 'T') {
vec.push_back(yytext);
valid = true;
} else {
valid = false;
}
break;
case 10:
{
auto n{vec.size()};
vec.push_back(yytext);
std::iter_swap(vec.begin()+n, vec.begin()+n-2);
}
break;
case 11:
{
auto n{vec.size()};
vec.push_back(yytext);
std::iter_swap(vec.begin()+n, vec.begin()+n-1);
}
break;
}
}
}
%%
int main(int argc, char *argv[]) {
if (argc >= 3) {
std::ifstream in{argv[1]};
std::ofstream out{argv[2]};
FileLexer lexer{in, out};
while (lexer.yylex() != 0)
{}
}
}
Compile with:
flex -o parsefile.cpp lexer.l
g++ -O2 -std=gnu++17 parsefile.cpp -o parsefile
This works, but it is slow (2.165 s) on my machine, with the same million-line input file as mentioned in my answer to the other question.
I tried it a few different ways, but I was unable to get a version that was faster than the PHP code in the other question. The switch statement logic is arguably a bit overly clever and stores only the needed fields in the desired order, but the speed was about the same as the straightforward implementation.
If it matters, I'm using gcc version 10.1 and flex 2.6.4 on a 64-bit Linux machine.
Answer: I see a few small issues in the C++ code, that probably won't give any large performance benefit. Flex is doing the heavy work of reading the input and parsing it, there's not much you can do about that.
Iterator arithmetic
Instead of:
case 10:
{
auto n{vec.size()};
vec.push_back(yytext);
std::iter_swap(vec.begin() + n, vec.begin() + n - 2);
}
You can also do iterator arithmetic on the end iterator, thereby avoiding the need to get the size of the vector:
case 10:
vec.push_back(yytext);
std::iter_swap(vec.end() - 1, vec.end() - 3);
Don't return 1 after reading a newline character
There is no need to return from yylex() after reading a newline, just remove the return 1 statement. This avoids needing the while-loop in main().
Use emplace_back() instead of push_back()
This avoids having to create a temporary that is being copied into the vector. | {
"domain": "codereview.stackexchange",
"id": 39054,
"tags": "c++, performance, parsing, c++17"
} |
rosmsg show Num error | Question:
I'm following the ROS tutorial "Creating a ROS msg and srv" and ran into a problem.
When I type
$ rosmsg show Num
It shows
Unable to load msg [beginner_tutorials/Num]: /home/xxx/catkin_ws/src/beginner_tutorials/msg/Num.msg: Invalid declaration: int 64 num
can anybody explain this error? thanks in advance
Originally posted by Leo Lee on ROS Answers with karma: 1 on 2019-03-20
Post score: 0
Answer:
Looks like you have an extra space in your Num.msg message definition. It should be int64 num not int 64 num.
Originally posted by jarvisschultz with karma: 9031 on 2019-03-20
This answer was ACCEPTED on the original site
Post score: 3 | {
"domain": "robotics.stackexchange",
"id": 32701,
"tags": "ros, ros-kinetic, rosmsg, tutorials"
} |
Probability measure implies quantum mechanics? | Question: The article "Quantum Logic and Probability Theory," by Wilce, has the following in section 1.4:
1.4 The Reconstruction of QM
From the single premise that the “experimental propositions” associated with a physical system are encoded by projections in the way indicated above, one can reconstruct the rest of the formal apparatus of quantum mechanics. The first step, of course, is Gleason’s theorem, which tells us that probability measures on L(H) correspond to density operators. There remains to recover, e.g., the representation of “observables” by self-adjoint operators, and the dynamics (unitary evolution). The former can be recovered with the help of the Spectral theorem and the latter with the aid of a deep theorem of E. Wigner on the projective representation of groups. See also R. Wright [1980]. A detailed outline of this reconstruction (which involves some distinctly non-trivial mathematics) can be found in the book of Varadarajan [1985]. The point to bear in mind is that, once the quantum-logical skeleton L(H) is in place, the remaining statistical and dynamical apparatus of quantum mechanics is essentially fixed. In this sense, then, quantum mechanics—or, at any rate, its mathematical framework—reduces to quantum logic and its attendant probability theory.
Wilce never seems to define L(H) explicitly, but I think it's probably a lattice L built on the Hilbert space H. What little knowledge I have of this sort of thing comes from Mackey, Mathematical Foundations of Quantum Mechanics.
I'm interested in understanding a little more about the result or set of results Wilce refers to. He references Varadarajan, but that's an old, expensive, two-volume book. Can anyone either (a) expand on Wilce's description in the format of a SE answer, or (b) point me to non-paywalled references that describe this in more depth than Wilce's single paragraph, but without comprising a whole book? I don't particularly care about parsing all the details of the proofs, which Wilce advertises as "deep," but I would like to understand a little more explicitly what result or results are being described, and their interpretation.
Re the list of assumptions, is this approximately right?
There is a Hilbert space H (probably of dimension 3 or more, as in Gleason's theorem?) equipped with some logical apparatus (the lattice L?).
We have a probability measure that satisfies the Kolmogorov axioms (including countable additivity, but without the connotations of boolean logic).
Do we also need to assume some form of the law of large numbers?
Is the following something like the correct list of results?
The probability measure can be described by a density matrix (Gleason's theorem).
Observables must be represented by self-adjoint operators.
Time evolution must be unitary.
Since the assumptions don't refer to or define observables or time evolution, it seems like there must be some additional "glue" that I'm missing.
Related: Does Gleason's Theorem Imply Born's Rule?
Answer:
There is a Hilbert space H (probably of dimension 3 or more, as in Gleason's theorem?) equipped with some logical apparatus (the lattice L?).
Correct, and the lattice $L(H)$ is that of orthogonal projectors/closed subspaces of a separable complex Hilbert space $H$. As a partially ordered set, the partial ordering $P\leq Q$ relation is the inclusion of subspaces: $P(H) \subset Q(H)$.
As a consequence $P \vee Q := \sup\{P,Q\}$ is the projector onto the closure of the sum of $P(H)$ and $Q(H)$ and $P\wedge Q := \inf\{P,Q\}$ is the projector onto the intersection of the said closed subspaces.
This lattice turns out to be orthomodular, bounded, atomic, satisfying the covering law, separable, ($\sigma$-)complete.
You also need not assume that the lattice of elementary propositions of a quantum system is $L(H)$ from scratch, but you can prove it, assuming some general hypotheses (those I wrote above together with a few further technical requirements). However what you eventually find is that the Hilbert space can be real, complex or quaternionic. This result was obtained by Solèr in 1995.
We have a probability measure that satisfies the Kolmogorov axioms (including countable additivity, but without the connotations of boolean logic).
Correct. The lattice is (orthocomplemented and) orthomodular ($A= B \vee (A \wedge B^\perp)$ if $B\leq A$) instead of (orthocomplemented and) Boolean ($\vee$ and $\wedge$ are mutually distributive).
However the story is much longer. The elements of $L(H)$ are interpreted as the elementary propositions/observables of a quantum system, admitting only the outcomes YES and NO under measurement.
In an orthomodular lattice, two elements $P,Q$ are said to commute if the smallest sublattice including both them is Boolean.
It is possible to prove that, for the lattice of orthogonal projectors $L(H)$, a pair of elements $P$ and $Q$ commute if and only if they commute as operators: $PQ=QP$.
A posteriori, this is consistent with the idea that these elementary observables can be measured simultaneously.
If $P$ and $Q$ in $L(H)$ commute, it turns out that $$P\wedge Q = PQ =QP\tag{*}$$ and $$P\vee Q = P+Q-PQ\:.\tag{**}$$
A crucial point is the following one. Having a Boolean sublattice (i.e. made of mutually commuting elements) $\vee$ and $\wedge$ can be equipped with the standard logical meaning of OR and AND respectively. The orthogonal $P^\perp = I-P$ corresponds to the negation NOT $P$.
This is a way to partially recover classical logic from quantum logic.
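To make (*) and (**) concrete, here is a small numerical check with two commuting orthogonal projectors on $\mathbb R^3$ (a toy illustration in plain Python; the matrices and helper names are of course not from the cited literature):

```python
# Toy check of (*) P∧Q = PQ and (**) P∨Q = P + Q - PQ for two commuting
# orthogonal projectors on R^3: P projects onto span{e1, e2}, Q onto
# span{e2, e3}. Both are diagonal, hence they commute as operators.

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

P = [[1, 0, 0], [0, 1, 0], [0, 0, 0]]
Q = [[0, 0, 0], [0, 1, 0], [0, 0, 1]]

PQ = matmul(P, Q)
meet = PQ                                     # projector onto span{e2} = P(H) ∩ Q(H)
join = [[P[i][j] + Q[i][j] - PQ[i][j] for j in range(3)] for i in range(3)]
# join is the identity here: the closure of P(H) + Q(H) is all of R^3.
```
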
Do we also need to assume some form of the law of large numbers?
Actually, at least when you make measurements of observables, you always reduce to a Boolean subalgebra where the probability measure becomes a standard $\sigma$-additive measure of a $\sigma$-algebra and here you can assume standard results on the relation between probabilities - frequencies.
Is the following something like the correct list of results?
The probability measure can be described by a density matrix (Gleason's theorem).
Yes, provided the Hilbert space is separable with dimension $\neq 2$.
In particular, the extremal elements of the convex set of probability Gleason measures (the probability measures which cannot be decomposed into non-trivial convex cominations) turn out to be of the form $|\psi\rangle \langle \psi|$ for every possible $\psi\in H$ with unit norm. This way, extremal measures coincides to pure states, i.e., unit vectors up to phases.
Observables must be represented by self-adjoint operators.
Yes, this is straightforward to prove if one starts by assuming that an observable $A$ is a collection $\{P^{(A)}(E)\}_{E \in B(\mathbb R)}$ of elements of the lattice $L(H)$, that is, projectors $P(E)$ where $E\subset \mathbb R$ is any real Borel set.
The physical meaning of $P^{(A)}(E)$ is "the outcome of the measurement of $A$ lies in (or is) $E$".
Evidently $P^{(A)}(E)$ and $P^{(A)}(E')$ commute and giving the standard meaning to $\wedge$ (= AND), we have from (*) that $$P^{(A)}(E) P^{(A)}(F) = P^{(A)}(E)\wedge P^{(A)}(F) = P^{(A)}(E\cap F)\:.\tag{1}$$
Using completeness, it is not difficult to justify also the property
$$\vee_i P^{(A)}(E_i) = P^{(A)}(\cup_i E_i)$$
where the $E_i$ are a finite or countable class of pairwise disjoint Borel sets.
This requirement, making in particular use of (**), is mathematically equivalent to
$$\sum_i P^{(A)}(E_i) = P^{(A)}(\cup_i E_i)\tag{2}$$
where the $E_i$ are a finite or countable class of pairwise disjoint Borel sets and the sum is computed in the strong operator topology.
Finally since some outcome must be measured in $\mathbb R$, we conclude that $$P^{(A)}(\mathbb R)=I\tag{3}\:,$$
because the trivial projector $I \in L(H)$ satisfies $\mu(I)=1$ for every Gleason state.
Properties (1), (2) and (3) say that $\{P^{(A)}(E)\}_{E \in B(\mathbb R)}$ is a projection valued measure (PVM) so that the self-adjoint operator
$$A = \int_{\mathbb R} \lambda \, dP^{(A)}(\lambda) $$
exists.
The spectral theorem proves that the correspondence between observables and self-adjoint operators is one-to-one.
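As a concrete illustration (assuming, for simplicity, that $A$ has pure point spectrum with orthonormal eigenvectors $\phi_n$ and eigenvalues $\lambda_n$), the PVM and the spectral integral take the familiar form:

```latex
P^{(A)}(E) = \sum_{n \,:\, \lambda_n \in E} |\phi_n\rangle\langle\phi_n|\:,
\qquad
A = \int_{\mathbb R} \lambda\, dP^{(A)}(\lambda)
  = \sum_n \lambda_n\, |\phi_n\rangle\langle\phi_n|\:.
```

Properties (1), (2), (3) can be checked directly in this case: projectors onto disjoint sets of eigenvalues are orthogonal, countable additivity holds in the strong topology, and summing over all eigenvalues gives $I$.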
Given a pure state represented by the unit vector up to phases $\psi$ and a PVM $\{P^{(A)}(E)\}_{E\in B(\mathbb R)}$ describing the observable/self-adjoint operator $A$, the map $$B({\mathbb R}) \ni E \mapsto \mu^{(A)}_\psi(E) := tr(|\psi\rangle \langle \psi| P^{(A)}(E)) = \langle \psi|P^{(A)}(E) \psi\rangle$$
is a standard probability measure over $\sigma(A)$, and standard results of QM arise like this ($\psi$ is supposed to belong to the domain of $A$)
$$\langle \psi |A \psi \rangle = \int_{\sigma(A)}\lambda \,d\mu^{(A)}_\psi(\lambda)\:,$$
justifying the interpretation of the left-hand side as expectation value of $A$ in the state represented by $\psi$, and so on.
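For instance, when $E = \{\lambda_n\}$ contains a single non-degenerate eigenvalue of $A$ with unit eigenvector $\phi_n$ (an assumption made here for illustration), the measure reproduces the familiar Born rule:

```latex
\mu^{(A)}_\psi(\{\lambda_n\})
= \langle \psi | P^{(A)}(\{\lambda_n\})\, \psi\rangle
= \langle \psi | \phi_n\rangle\langle \phi_n|\psi\rangle
= |\langle \phi_n | \psi \rangle|^2\:.
```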
It also turns out that the support of a PVM coincides with the spectrum $\sigma(A)$ of the associated observable.
The elements $P$ of $L(H)$ are self-adjoint operators, and thus the picture is consistent: $P$ is an elementary observable admitting only the two values $0$ (NO) and $1$ (YES). In fact $\{0,1\} = \sigma(P)$, except in the two trivial cases: $P=0$ (the contradiction), where $\sigma(P)= \{0\}$, and $P=I$ (the tautology), where $\sigma(P)= \{1\}$.
Time evolution must be unitary.
Here one has to introduce the notion of symmetry and continuous symmetry.
There are at least 3 possibilities which are equivalent on $L(H)$, one is the well-known Wigner's theorem. The most natural one, in this picture, is however that due to Kadison (one of the two possible versions): a symmetry can be defined as an isomorphism of the lattice $L(H)$, $h: L(H) \to L(H)$.
It turns out that (Kadison's theorem) isomorphisms are all of the form $$ L(H)\ni P \to h(P) = UPU^{-1}$$ for some unitary or antiunitary operator $U$, defined up to a phase, and depending on the isomorphism $h$.
Temporal homogeneity means that there is no preferred origin of time and all time instants are physically equivalent.
So, in the presence of time homogeneity, there must be a relation between physics at time $0$ and physics at time $t$ preserving physical structures. Time evolution from $0$ to $t$ must therefore be implemented by means of an isomorphism $h_t$ of $L(H)$.
Since no origin of time exists, it is also natural to assume that $h_t\circ h_s = h_{t+s}$.
It is therefore natural to assume that, in the presence of temporal homogeneity, time evolution is represented by a one-parameter group of such automorphisms ${\mathbb R} \ni t \mapsto h_t$. (One-parameter group means $h_t\circ h_s = h_{t+s}$ and $h_0= id$.)
It is also natural to assume a continuity hypothesis related to possible measurements and states:
$${\mathbb R} \ni t \mapsto \mu(h_t(P))$$
is continuous for every $P\in L(H)$ and every Gleason state $\mu$.
Notice that Kadison's theorem associates a unitary $U_t$ to every $h_t$ up to phases, so that there is no reason, a priori, to have $U_tU_s = U_{t+s}$, since phases depending on $s$ and $t$ may show up.
Even if one is clever enough to fix the phases so as to obtain the composition rule of a one-parameter group of unitary operators, $U_tU_s = U_{t+s}$ and $U_0=I$, there is no a priori reason to find a continuous map $t \mapsto U_t$ in some natural operator topology.
Actually, under the said hypotheses on $\{h_t\}_{t\in \mathbb R}$, it is possible to prove (the simplest application of Bargmann's theorem, since the second cohomology group of $\mathbb R$ is trivial) that the phases in the correspondence $h_t \to U_t$ via Kadison's theorem can be unambiguously accommodated so that $h_t(P) = U_t P U_t^{-1}$, where $$\mathbb R \ni t \mapsto U_t$$ is a strongly continuous one-parameter group of unitary operators.
Stone's theorem immediately implies that
$U_t = e^{-itH}$ for some self-adjoint operator $H$ (defined up to an additive constant in view of arbitrariness of the phase of $U_t$).
This procedure, extended to other one-parameter groups of unitary operators $e^{-isA}$ describing continuous symmetries, gives rise to the well-known quantum version of Noether's theorem. The continuous symmetry preserves time evolution, i.e., $$e^{-isA} e^{-itH}= e^{-itH}e^{-isA}$$ for all $t,s \in \mathbb R$, if and only if the observable $A$ generating the continuous symmetry is a constant of motion: $$e^{itH}Ae^{-itH}=A\:.$$
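One direction of this equivalence can be checked by a formal differentiation (rigorously, on a suitable dense invariant domain):

```latex
e^{-isA}e^{-itH}=e^{-itH}e^{-isA}
\;\Longleftrightarrow\;
e^{itH}\,e^{-isA}\,e^{-itH}=e^{-isA}
\;\;\overset{\partial_s|_{s=0}}{\Longrightarrow}\;\;
e^{itH}\,A\,e^{-itH}=A\:.
```

Conversely, if $e^{itH}Ae^{-itH}=A$, then the spectral calculus gives $e^{itH}e^{-isA}e^{-itH} = e^{-is\,e^{itH}Ae^{-itH}} = e^{-isA}$, which is the commutation relation.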
"domain": "physics.stackexchange",
"id": 45492,
"tags": "quantum-mechanics, hilbert-space, probability, born-rule"
} |
Waiting on transform from /base_footprint to /map to become available before running costmap, tf error: | Question:
I want to auto navigate my P3AT.
Using ROS Fuerte (on PC1: Ubuntu 12.04 LTS) & USARSim (on PC2: Windows 7).
I have my launch file move_base.launch,
<launch>
<master auto="start"/>
<!-- Run the map server -->
<node name="map_server" pkg="map_server" type="map_server" args="$(find my_robot_name_2dnav)/map/map.pgm 0.05"/>
<!--- You can see original move_base.launch -->
<!--- Run AMCL -->
<include file="$(find usarsim_inf)/launch/usarsim.launch"/>
<include file="$(find amcl)/examples/amcl_omni.launch" />
<node pkg="move_base" type="move_base" respawn="false" name="move_base" output="screen">
<rosparam file="$(find my_robot_name_2dnav)/launch/costmap_common_params.yaml" command="load" ns="global_costmap" />
<rosparam file="$(find my_robot_name_2dnav)/launch/costmap_common_params.yaml" command="load" ns="local_costmap" />
<rosparam file="$(find my_robot_name_2dnav)/launch/local_costmap_params.yaml" command="load" />
<rosparam file="$(find my_robot_name_2dnav)/launch/global_costmap_params.yaml" command="load" />
<rosparam file="$(find my_robot_name_2dnav)/launch/move_base_params.yaml" command="load" />
</node>
</launch>
but when I try,
$ roslaunch my_robot_name_2dnav move_base.launch
I am getting a warning message:
[ WARN] [1413488778.937642524]: Waiting on transform from /base_footprint to /map to become available before running costmap, tf error:
see here the full terminal output
Please help me improve this scenario.
Originally posted by Aarif on ROS Answers with karma: 351 on 2014-10-16
Post score: 1
Answer:
amcl was not getting the laser data from the /scan topic; I fixed that and solved the problem.
Originally posted by Aarif with karma: 351 on 2014-12-21
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by pexison on 2015-03-24:
How did you solve it?
Comment by Aarif on 2015-05-02:
Hi @pexison, my simulator was publishing the laser scan on the lms200 topic instead of /scan. I remapped the publishing from lms200 to /scan and that worked for me...
If your laser data is already being published on the /scan topic, then you might look at the time on both PCs; they should be synchronized.
Comment by Waleed_Shahzaib on 2017-05-15:
Hi @Aarif, Please tell how did you change the topic from lms200 to /scan? | {
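For readers with the same question: the remapping described above would look roughly like this in the launch file. This is a sketch only; the node name, type, and package shown are assumptions, not taken from the original post, so adapt them to whichever node publishes the laser data.

```xml
<!-- Illustrative sketch: node name/type are assumptions, not from the post -->
<node pkg="usarsim_inf" type="usarsim_node" name="usarsim" output="screen">
  <!-- Make the simulator's laser topic appear as /scan, which amcl expects -->
  <remap from="lms200" to="scan"/>
</node>
```

The `<remap>` tag must be placed inside the `<node>` element whose topic you want to rename; a top-level remap would apply to every node in the launch file.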
"domain": "robotics.stackexchange",
"id": 19756,
"tags": "ros, navigation, mapping, p3at, move-base"
} |
ROS Answers SE migration: ros_release | Question:
Hi,
I am trying to build a stack using the ros_release stack. When I build packages with electric, everything works fine. However, when I try to run it with fuerte or groovy, an error occurs:
Global exception caught.
this program assumes that ros is in your variant. Check the console output for test failure details.
Traceback (most recent call last):
File "/home/user/workspace_jenktool/jenkins_scripts/analyze.py", line 106, in analyze
rosdistro_obj = rosdistro.Distro(get_rosdistro_file(ros_distro))
File "/home/user/groovy_workspace/ros_release/rosdistro/src/rosdistro.py", line 444, in __init__
raise DistroException("this program assumes that ros is in your variant")
DistroException: this program assumes that ros is in your variant
That program assumes that ros is part of the variants in the distro file, but it isn't. I checked the versions of the ros_release stack in the ROS wiki and it is only available for electric, not for fuerte/groovy. However, there are different branches in the repo (e.g. fuerte, fuerte_new, groovy, groovy_new) and I am a little bit confused about that.
What stack/tool do I have to use when I want to build a non-catkin stack with fuerte or groovy? The jenkins_tool is just for catkin-based packages, isn't it?
Anyone know?
Cheers,
Johannes
EDIT:
It is not my purpose to do a release or anything. My purpose is to build existing stacks only. Under this link (http://www.ros.org/wiki/release/Releasing/fuerte) you will find following part:
Pre-release: Run a Hudson on-demand build to make sure that your stack
builds! While it is acceptable to
release a stack that does not work on
all distributions and architectures
that are supported by Willow Garage,
your stack will be much more useful to
the community when it works on all
supported platforms.
For alternate ways of running the pre-release, see job_generation
When I use the package job_generation I am able to build a package with electric, but neither with fuerte nor groovy and it ends up with the error above. So I assume that the scripts for the pre-release are not the same for electric and fuerte/groovy, because the webinterface works apparently.
Originally posted by JohannesK on ROS Answers with karma: 38 on 2013-01-09
Post score: 0
Answer:
The procedure for rosbuild based packages for fuerte and groovy are exactly the same as electric.
You need to add entries in fuerte.rosdistro and make sure all the elements of the setup are completed before you do the release.
Originally posted by tfoote with karma: 58457 on 2013-01-10
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by JohannesK on 2013-01-11:
I edit my question above to get a better understanding of what am I doing!
Comment by tfoote on 2013-01-11:
What stack are you running? Is it in the distro file? What commands are you running exactly?
Comment by JohannesK on 2013-01-12:
I use a modified version of the script "run_auto_stack_prerelease.py" with stacks located in the appropriate *.rosdistro file only. Okay, when you're using the same (original) scripts the mistake should be caused from somewhere else, may be in the modified version of the script. Thanks for help!
Comment by JohannesK on 2013-01-23:
I found the problem. For fuerte/groovy you should use the python module "ros-job-generation" instead of the package "job_generation" in the stack ros_release. The "name" of the scripts are the same in both packages though. | {
"domain": "robotics.stackexchange",
"id": 12341,
"tags": "ros, ros-fuerte, ros-groovy, ros-release"
} |