| anchor | positive | source |
|---|---|---|
Preorder, inorder, postorder for tree in C++11 | Question: Please review my code for preorder, inorder and postorder traversal of a tree. I am using C++11 (or at least want to), so let me know if anything deviates from the C++11 standard.
#include <iostream>
#include <memory>
using std::cout;
using std::endl;
using std::shared_ptr;
struct node
{
int data;
shared_ptr<node> left;
shared_ptr<node> right;
};
shared_ptr<node> newNode (int i)
{
shared_ptr<node> n (new node);
n->data = i;
n->left = nullptr;
n->right = nullptr;
return n;
}
void preOrder(shared_ptr<node> n)
{
if(n == nullptr)
return;
cout<<n->data<<" ";
preOrder((n->left));
preOrder((n->right));
}
void inOrder(shared_ptr<node> n)
{
if(n == nullptr)
return;
inOrder((n->left));
cout<<n->data<<" ";
inOrder((n->right));
}
void postOrder(shared_ptr<node> n)
{
if(n == nullptr)
return;
postOrder((n->left));
postOrder((n->right));
cout<<n->data<<" ";
}
int main(int argc, char* argv[])
{
shared_ptr<node> r = newNode(1);
(r->left) = newNode(2);
(r->right) = newNode(3);
(r->left->left) = newNode(4);
(r->left->right) = newNode(5);
preOrder(r);
cout<<endl;
inOrder(r);
cout<<endl;
postOrder(r);
cout<<endl;
return 0;
}
Answer: Memory management
Don't think that is the appropriate smart pointer.
shared_ptr<node> newNode (int i)
Do you have plans to actually share ownership of a node? Or is it more likely that ownership will be held by the parent of the node? I would have gone with unique_ptr<node> if I was going to use smart pointers.
make_X
You should not be using new with smart pointers. There are make_XXX functions that combine all the memory allocation required into a single operation (thus making it more efficient).
std::shared_ptr<node> n = std::make_shared<node>(val); // or make_unique
Using
I suppose this is better than using namespace std;
using std::cout;
using std::endl;
using std::shared_ptr;
But it's still pretty sloppy and lazy. Pull those declarations into the tightest scope possible (i.e. inside a function).
Braces
This looks strange
preOrder((n->left));
preOrder((n->right));
Now I am all for using braces to make expressions easier to read or to force a particular evaluation order. But this just looks strange and does not help the evaluation.
Prefer '\n'
Prefer '\n' to std::endl.
The difference between the two is that std::endl forces a flush after placing the new line on the stream. It is very rare that you actually want to force a flush (as the runtime is much better at making that decision than you). Excessive flushing just makes the stream libraries slow.
“\n” or '\n' or std::endl to std::cout?
Why does endl get used as a synonym for “\n” even though it incurs significant performance penalties? | {
"domain": "codereview.stackexchange",
"id": 11611,
"tags": "c++, c++11, tree"
} |
Compilation issues with PCL and message_filters: ; cannot bind non-const lvalue | Question:
Compiler: gcc9.4.0
Arch: Ubuntu20.04/AMD64
PCL: 1.10
ROS Galactic
I've got two errors which I believe are hindering me from compiling my code here successfully.
The first is affecting my attempt to calculate the 3D centroid using PCL:
cannot bind non-const lvalue reference of type ‘Eigen::Matrix<int, 4, 1>&’ to an rvalue of type ‘Eigen::Matrix<int, 4, 1>’
I've tried a few different types, including pointers/non-pointers, but I'm a bit stumped with this.
The second is an issue with using message_filters effectively. I've tried to copy the approach used by image_pipeline for stereography, but it's throwing a number of errors:
error: no matching function for call to ‘message_filters::MessageEvent<const vision_msgs::msg::Detection2D_<std::allocator<void> > >::MessageEvent(const message_filters::MessageEvent<const sensor_msgs::msg::PointCloud2_<std::allocator<void> > >&, bool)’
error: no matching function for call to ‘message_filters::Signal9<vision_msgs::msg::Detection2D_<std::allocator<void> >, sensor_msgs::msg::PointCloud2_<std::allocator<void> >, message_filters::NullType, message_filters::NullType, message_filters::NullType, message_filters::NullType, message_filters::NullType, message_filters::NullType, message_filters::NullType>::addCallback<const M0ConstPtr&, const M1ConstPtr&, const M2ConstPtr&, const M3ConstPtr&, const M4ConstPtr&, const M5ConstPtr&, const M6ConstPtr&, const M7ConstPtr&, const M8ConstPtr&>(std::_Bind_helper<false, const std::_Bind<void (pose_eval::PoseEval::*(pose_eval::PoseEval*, std::_Placeholder<1>, std::_Placeholder<2>))(sensor_msgs::msg::PointCloud2_<std::allocator<void> >&, vision_msgs::msg::Detection2D_<std::allocator<void> >&)>&, const std::_Placeholder<1>&, const std::_Placeholder<2>&, const std::_Placeholder<3>&, const std::_Placeholder<4>&, const std::_Placeholder<5>&, const std::_Placeholder<6>&, const std::_Placeholder<7>&, const std::_Placeholder<8>&, const std::_Placeholder<9>&>::type)’
[5.100s] 280 | ::_3, std::placeholders::_4, std::placeholders::_5, std::placeholders::_6, std::placeholders::_7, std::placeholders::_8, std::placeholders::_9));
The latter error might be causing the former, but I'm not 100% on that.
Full error log is not attached, so please find it at the bottom here.
The relevant code is as follows:
PoseEval::PoseEval(rclcpp::NodeOptions options) : Node("pose_eval_cpp", options) {
rclcpp::QoS qos = rclcpp::SystemDefaultsQoS();
std::string ns = std::string(this->get_namespace());
this->detect_loc_pub_ = this->create_publisher<PointStamped>(ns + std::string("/detect_pos"), qos);
message_filters::Subscriber<PointCloud2> pc2_sub_(this, ns + "", qos.get_rmw_qos_profile());
message_filters::Subscriber<Detection2D> best_detect_sub_(this, ns + "", qos.get_rmw_qos_profile());
this->synch_subs_.reset(new ApproximateSync(ApproximatePolicy(20), pc2_sub_, best_detect_sub_));
this->synch_subs_->registerCallback(std::bind(&PoseEval::points_callback, this, std::placeholders::_1, std::placeholders::_2));
}
void PoseEval::points_callback(PointCloud2& points, Detection2D& best_match) {
RCLCPP_INFO(this->get_logger(), "Synching detection and cloud");
// Find best result
auto byScore = [&](const ObjectHypothesisWithPose& a, const ObjectHypothesisWithPose& b) {
return a.hypothesis.score < b.hypothesis.score;
};
auto best = std::max_element(best_match.results.begin(), best_match.results.end(), byScore);
auto best_class = best->hypothesis.class_id;
auto best_score = best->hypothesis.score;
// Get ROI points
auto box = best_match.bbox;
auto box_c = box.center;
auto top = box_c.y - box.size_y / 2, bottom = box_c.y + box.size_y / 2, left = box_c.x - box.size_x / 2, right = box_c.x + box.size_x / 2;
pcl::PointCloud<pcl::PointXYZRGB>::Ptr cloud = boost::make_shared<pcl::PointCloud<pcl::PointXYZRGB>>();
pcl::PointCloud<pcl::PointXYZRGB>::Ptr cloud_subset = boost::make_shared<pcl::PointCloud<pcl::PointXYZRGB>>();
pcl::fromROSMsg(points, *cloud);
// Extract ROI
pcl::ExtractIndices<pcl::PointXYZRGB> filter(true);
filter.setInputCloud(cloud);
filter.setIndices(top, left, bottom - top, right - left);
filter.filter(*cloud_subset);
// Find Centroid
auto centroid = Eigen::Vector4f(0, 0, 0, 0);
auto n_pts_used = pcl::compute3DCentroid<pcl::PointXYZRGB, std::int32_t>(*cloud_subset, centroid);
RCLCPP_INFO(this->get_logger(), "Centroid found at %lf,%lf,%lf", centroid.x(), centroid.y(), centroid.z());
// Construct message
auto pt = Point();
pt.x = centroid.x();
pt.y = centroid.y();
pt.z = centroid.z();
auto pt_s = PointStamped();
pt_s.point = pt;
pt_s.header = Header(points.header);
this->detect_loc_pub_->publish(pt_s);
}
And error messages:
This code block was moved to the following github gist:
https://gist.github.com/answers-se-migration-openrobotics/dd081e35037ca8c8493772fc1f2bda24
(Apologies for the verbosity, I cannot attach this log to the post).
Thanks!
Originally posted by Nilaos on ROS Answers with karma: 13 on 2023-06-09
Post score: 0
Answer:
With C++, it is usually best to look at only the first error and ignore the enormous spew that follows. All the info you need is shown near the start:
[4.770s] /home/USERNAME/repos/thesis/ros2_ws/src/ros2_yolo_cpp/src/pose_eval.cpp:55:105: error: no matching function for call to ‘compute3DCentroid<pcl::PointXYZRGB, int32_t>(pcl::PointCloud<pcl::PointXYZRGB>&, Eigen::Matrix<float, 4, 1>&)’
...
[4.771s] /usr/include/pcl-1.10/pcl/common/impl/centroid.hpp:50:1: note: candidate: ‘unsigned int pcl::compute3DCentroid(pcl::ConstCloudIterator<PointT>&, Eigen::Matrix<Scalar, 4, 1>&) [with PointT = pcl::PointXYZRGB; Scalar = int]’
The compiler is telling you that you need to pass a pointcloud iterator, not the pointcloud itself.
Originally posted by Mike Scheutzow with karma: 4903 on 2023-06-11
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by Nilaos on 2023-06-15:
Thanks for the help - it turns out that although that was what the compiler complained about, due to template matching the real problem was the mismatch of passing in a float matrix while setting the second template type in that function call to int32_t.
While I would normally agree re error spew, the second error here that's followed is still proving a thorn in my side. I've tried changing the callback function inputs to be a const MsgTypeXYZ::ConstSharedPtr &msg but that hasn't really changed the error message at all. Do you have any ideas for that? | {
"domain": "robotics.stackexchange",
"id": 38419,
"tags": "ros, pcl, approximatetime, message-filters"
} |
Hill's system alphabetizes elements symbol or name? | Question: Take monosodium glutamate.
Its chemical formula is $$\ce{C5H8NO4Na or C5H8NNaO4}$$
The first way lists nitrogen, oxygen, sodium, which is alphabetical by element name.
The second way lists N, Na, O, which is alphabetical by element symbol.
I have seen it both ways and am hoping someone could clarify. Thanks.
Answer: Go straight to the original source, ON A SYSTEM OF INDEXING CHEMICAL LITERATURE; ADOPTED BY THE CLASSIFICATION DIVISION OF THE U. S. PATENT OFFICE, and you can see that Hill uses the element symbol to determine alphabetical order.
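The rule is mechanical enough to sketch in a few lines of Python (a hypothetical helper, not from the paper): carbon first, hydrogen second, and every other element compared alphabetically by its symbol, as a string.

```python
def hill_formula(counts):
    # Hill order: with carbon present, C first, then H, then every other
    # symbol alphabetically *as a string*; with no carbon, all alphabetical.
    if 'C' in counts:
        order = (['C'] + (['H'] if 'H' in counts else [])
                 + sorted(s for s in counts if s not in ('C', 'H')))
    else:
        order = sorted(counts)
    return ''.join(s + (str(counts[s]) if counts[s] > 1 else '')
                   for s in order)

# Monosodium glutamate: 'N' < 'Na' < 'O' as strings, so N precedes Na
print(hill_formula({'C': 5, 'H': 8, 'N': 1, 'Na': 1, 'O': 4}))  # C5H8NNaO4
```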
In your example $\ce{C5H8NNaO4}$ is correct. | {
"domain": "chemistry.stackexchange",
"id": 6262,
"tags": "elements"
} |
Can the DFT be performed without normalization? | Question: If we have an OFDM system with $N$ sub-carriers, the DFT matrix can be expressed as follows:
$F = dftmtx(N)/sqrt(N)$
My question: is it possible and practical to use the matrix without normalization? That is, use the iDFT as $F = (dftmtx(N))'$, and then at the receiver end multiply with that matrix and divide by $N$ (like this: $dftmtx(N)/N$) to get a diagonal matrix whose diagonal elements are 1?
thank you
Answer: Yes. In fact, depending on your receiver architecture, you may not need the divide by $N$ in the receiver.
Generally, in a receiver you care about the relative phases of the bins (or possibly their strengths with respect to some other part of the same signal). So there's no reason to do that multiply if you don't want to.
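As a quick numerical sketch of that bookkeeping (in NumPy rather than MATLAB; `F` here is the usual unnormalized DFT matrix, i.e. what MATLAB's `dftmtx` produces):

```python
import numpy as np

N = 8
n = np.arange(N)
F = np.exp(-2j * np.pi * np.outer(n, n) / N)  # unnormalized DFT matrix

x = np.random.randn(N)
X = F.conj().T @ x  # transmitter: unnormalized iDFT (the F' of the question)
y = (F @ X) / N     # receiver: multiply by the DFT matrix, then divide by N

assert np.allclose(y, x)  # exact recovery with no sqrt(N) factors
```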
If you're working with fixed-point math, just make sure that you don't experience over- or under-flow. | {
"domain": "dsp.stackexchange",
"id": 9916,
"tags": "digital-communications, ofdm"
} |
PyDOS: Version 2.0 | Question: This is a follow up question to PyDOS shell simulation.
You might remember PyDOS from my other post. I have now updated it and it's better than ever! Post any ideas below that could make it better.
import time
import os
import sys
import random
def splash():
print ()
print ("PyDOS")
print ()
print ("Version 1.9.8")
print ()
time.sleep(3.2)
def calc():
while True:
print ("Welcome to PyCALC")
print ("Please type the number for your suggestion.")
print ("1: Addition\n2: Subtraction\n3: Multiplacation\n4: Division")
suggestion = input("Your Choice: ")
if suggestion == "1":
num1 = int(input("Enter a number: "))
num2 = int(input("Enter a number: "))
print ("Please wait...")
time.sleep(0.6)
answer = num1+num2
print ("Your answer is:")
print (answer)
break
if suggestion == "2":
num1 = int(input("Enter a number: "))
num2 = int(input("Enter a number: "))
print ("Please wait...")
time.sleep(0.6)
answer = num1-num2
print ("Your answer is:")
print (answer)
break
if suggestion == "3":
num1 = int(input("Enter a number: "))
num2 = int(input("Enter a number: "))
print ("Please wait...")
time.sleep(0.6)
answer = num1*num2
print ("Your answer is:")
print (answer)
break
if suggestion == "4":
num1 = int(input("Enter a number: "))
num2 = int(input("Enter a number: "))
print ("Please wait...")
time.sleep(0.6)
answer = num1/num2
print ("Your answer is:")
print (answer)
break
else:
print ("Your operation choice is invalid!")
def get_lines():
print("Enter 'stop' to end.")
lines = []
line = input()
while line != 'stop':
lines.append(line)
line = input()
return lines
def textviewer():
os.system('cls' if os.name == 'nt' else 'clear')
print ("Text Viewer.")
directory = ("/media/GENERAL/Projects/files")
filename = input("Enter a text file to view: ")
file = open(os.path.join(directory, filename), "r")
print ("Loading text...")
time.sleep(0.5)
os.system('cls' if os.name == 'nt' else 'clear')
print(file.read())
edit_text = input("Would you like to edit it? (y for yes, n for no)")
if edit_text == "y":
file = open(os.path.join(directory, filename), "w")
print ("You are now in edit mode.")
lines = get_lines
file.write('\n'.join(lines))
time.sleep(2)
if edit_text == "n":
print ("Press enter to exit")
input()
def edit():
os.system('cls' if os.name == 'nt' else 'clear')
print ("EDIT")
print ("-------------")
print ("Note: Naming this current document the same as a different document will replace the other document with this one.")
directory = ("/media/GENERAL/Projects/files")
filename = input("Plese enter a file name.")
file = open(os.path.join(directory, filename), "w")
print ("FILE: " + filename+".")
lines = get_lines()
file.write('\n'.join(lines))
def cls():
os.system('cls' if os.name == 'nt' else 'clear')
splash()
while True:
os.system('cls' if os.name == 'nt' else 'clear')
print ()
print ("PyDOS VERSION 1.9.5")
shell = input("> ")
if shell == "textviewer":
print ("Loading Text Viewer...")
time.sleep(3)
textviewer()
elif shell == "help":
print ("HELP")
print ("-----------------")
print ("Commands:")
print ("dir - displays directory.")
print ("cls - clears screen")
print ("help - shows help")
print ("textviewer - launches textviewer")
print ("edit - launches edit")
print ("news - launches news")
print ("shutdown - closes PyDOS")
print ("calc - launches calculator")
print ("------------------------------------")
print ("What is PyDOS?")
print ("PyDOS is inspired by MS-DOS made in the 1980's and")
print ("has that feel of it too! For a better experiance")
print ("run PyDOS in the python CMD shell.")
input ("Press enter to close.")
elif shell == "edit":
print ("Loading edit...")
time.sleep(3)
edit()
elif shell == "calc":
calc()
elif shell == "dir":
print ("The drive name is A:")
print ()
print ("NAME: TYPE: MODIFIED:")
print ("SHUTDOWN.EXE .EXE 12/01/15 ")
print ("EDIT.EXE .EXE 09/03/15 ")
print ("TEXTVIEWER.EXE .EXE 09/03/15 ")
print ("NEWS.EXE .EXE 09/01/15 ")
print ("HELP.COM .COM 09/03/15 ")
print ("HANGMAN(BROKEN).EXE .EXE 11/03/15 ")
print ("CALC.EXE .EXE 20/03/15 ")
input ("---------------Press enter to close---------------")
elif shell == "cls":
cls()
elif shell == "hangman":
hangman.main()
elif shell == "news":
print ("PyDOS NEWS")
print ()
print ("New Additions:")
print ()
print ("New calculator app!")
print ("All text documents from edit are now stored in a 'files' directory")
print ()
print ("Tweaks and fixes:")
print ()
print ("BUG023L: Fixed issue with splash screen loop")
print ()
print ("Reported Bugs:")
print ("BUG024T: Hangman returns a traceback error, we are fixing this!")
print ("BUG025T: Pressing 'y' in texteditor when it asks you if you want\nto edit the file returns Type_Error")
input("Press enter to close")
elif shell == 'shutdown':
print ("Shutting down...")
time.sleep(3)
break
else:
print("This command or file: "+shell+ " does not exist. Check for spelling errors and try again.\nIf you are trying to open a textfile, open textviewer.")
time.sleep(5)
os.system('cls' if os.name == 'nt' else 'clear')
Answer: Beautiful is better than ugly
Your calculator is ugly. @Hosch gave some hints but a much more dramatic redesign is needed. Why must the user say a number that is mapped to a symbol that is mapped to an operation only then applied to the numbers? This is user unfriendliness taken to the extreme. I suggest this very primitive but much more sane implementation:
import operator as op
def calc():
print ("Welcome to PyCALC")
while True:
expr = input("> ")
a,oper,b = expr.split()
operations = {'+': op.add,
'-': op.sub,
'*': op.mul,
'/': op.truediv
}
print(operations[oper](int(a),int(b)))
Now the user interface is:
> 2 + 2
4
Readability counts
To a poor average human this is just noise:
os.system('cls' if os.name == 'nt' else 'clear')
You have been told to make a function out of this.
Ironically enough, you defined a (badly named) function cls but you did not use it?!
First class bug
lines = get_lines
get_lines is a function... you are trying to save the function itself rather than its result... calling a function is done by appending parentheses: get_lines()
Be lazy
The good programmer is lazy, he/she never does by hand what could be automated:
print ("dir - displays directory.")
print ("cls - clears screen")
print ("help - shows help")
print ("textviewer - launches textviewer")
print ("edit - launches edit")
print ("news - launches news")
print ("shutdown - closes PyDOS")
print ("calc - launches calculator")
Let's say you add another utility or change the functionality of one, will you remember to update this list? Maybe.
This list can be generated by using a dictionary but I will not give you the fish.
"Please use {}".format(".format()")
print("This command or file: "+shell+ " does not exist. Check for spelling errors and try again.\nIf you are trying to open a textfile, open textviewer.")
Using {}.format on strings is common practice; do not use +.
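For instance, that line could be written like this (a sketch; the value of shell is just a stand-in for the user's input):

```python
shell = "foo"  # stand-in for whatever the user typed at the prompt
msg = ("This command or file: {} does not exist. "
       "Check for spelling errors and try again.\n"
       "If you are trying to open a textfile, open textviewer.").format(shell)
print(msg)
```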
CONSTANTly improving maintainability
You may want to change some of your longer strings, looking for them inside the code is not the best option. Instead define them as constants at the start of your file:
NEWS = """"PyDOS NEWS"
"New Additions:"
"New calculator app!"
"All text documents from edit are now stored in a 'files' directory"
"Tweaks and fixes:"
"BUG023L: Fixed issue with splash screen loop"
"Reported Bugs:"
"BUG024T: Hangman returns a traceback error, we are fixing this!"
"BUG025T: Pressing 'y' in texteditor when it asks you if you want\nto edit the file returns Type_Error"
"""
you will then
print(NEWS)
The master directory
"/media/GENERAL/Projects/files"
Why this directory? Why can't the user change this? He/she may want to change the location where the files are saved.
You are still sleeping
I said this, others said this, but I will repeat it:
sleeping does not make the terminal feel retro, it is seriously annoying!
Dealing with files with with
Repetita iuvant // repeating helps
with is a nicer way of dealing with files; even the official docs recommend it.
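A minimal sketch of what edit()'s file handling could look like with a context manager (save_lines is a hypothetical helper name):

```python
import os
import tempfile

def save_lines(directory, filename, lines):
    # 'with' guarantees the file is closed (and flushed) even if write()
    # raises -- the original edit() never calls close() at all
    with open(os.path.join(directory, filename), "w") as file:
        file.write('\n'.join(lines))

# demo in a temporary directory instead of the hard-coded master directory
tmp = tempfile.mkdtemp()
save_lines(tmp, "demo.txt", ["line one", "line two"])
```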
The commands dictionary
Please go back to my previous answer and re-read this part: it is very important. | {
"domain": "codereview.stackexchange",
"id": 12894,
"tags": "python, python-3.x, console, shell"
} |
Parametric visibility and order of tkinter labels | Question: I'm writing software which allows a user to view data in a number of different formats, and they can switch between formats at any time. I'm wondering if there's a better way to do this than switching between subclasses, and if not, if there's a better way to write this than I have.
import tkinter as tk
from tkinter import ttk
class Display(ttk.Frame):
def __init__(self, master=None):
ttk.Frame.__init__(self, master, relief='sunken', padding='20')
self.widgets = [
ttk.Label(self, text="Data Value #1"),
ttk.Label(self, text="Data Value #2"),
ttk.Label(self, text="Data Value #3"),
ttk.Label(self, text="Data Value #4"),
ttk.Label(self, text="Data Value #5"),
ttk.Label(self, text="Data Value #6"),
ttk.Label(self, text="Data Value #7"),
ttk.Label(self, text="Data Value #8"),
ttk.Label(self, text="Data Value #9"),
ttk.Label(self, text="Data Value #10"),
]
def show(self):
for i in self.widgets:
i.pack()
class Display1(Display):
def show(self):
for i, widget in enumerate(self.widgets):
if i % 2 == 0:
widget.pack()
class Display2(Display):
def show(self):
for i, widget in enumerate(self.widgets):
if i % 2 == 1:
widget.pack()
class Display3(Display):
def show(self):
for i, widget in enumerate(self.widgets):
if i > 4:
widget.pack()
class Display4(Display):
def show(self):
for i, widget in enumerate(self.widgets):
if i < 5:
widget.pack()
class Display5(Display):
def show(self):
for i, widget in enumerate(self.widgets):
self.widgets[-i].pack()
class Window(tk.Tk):
def __init__(self):
tk.Tk.__init__(self)
#
#
# Initialize buttons to select how to view the data.
self.selection = tk.StringVar(self, value="Option 1")
self.display_options = [
ttk.Radiobutton(self, text="Option 1", variable=self.selection, value="Option 1", command=self.change_display),
ttk.Radiobutton(self, text="Option 2", variable=self.selection, value="Option 2", command=self.change_display),
ttk.Radiobutton(self, text="Option 3", variable=self.selection, value="Option 3", command=self.change_display),
ttk.Radiobutton(self, text="Option 4", variable=self.selection, value="Option 4", command=self.change_display),
ttk.Radiobutton(self, text="Option 5", variable=self.selection, value="Option 5", command=self.change_display),
]
for i, button in enumerate(self.display_options):
button.grid(row=i, column=1, padx=20)
#
#
# Initialize the frame holding the data
self.data_frame = Display(self)
self.data_frame.grid(row=0, column=0, rowspan=10)
self.data_frame.show()
def change_display(self):
for i in self.data_frame.pack_slaves():
i.pack_forget()
self.data_frame.grid_forget()
match self.selection.get():
case "Option 1":
self.data_frame = Display1(self)
case "Option 2":
self.data_frame = Display2(self)
case "Option 3":
self.data_frame = Display3(self)
case "Option 4":
self.data_frame = Display4(self)
case "Option 5":
self.data_frame = Display5(self)
case _:
self.data_frame = Display(self)
self.data_frame.show()
self.data_frame.grid(row=0, column=0, rowspan=10)
if __name__ == '__main__':
win = Window()
win.mainloop()
The program initializes a window with a selection of formats to view the data, and shows the data in some default format. Each subclass of Display only overrides the show() method.
This isn't the entire program I'm working on, but the important part is that the subclasses of Display don't actually initialize themselves, and only override certain parent methods. Is there a better way to change the way the data is displayed than this?
Answer: Display and Window should both follow has-a rather than is-a for better loose coupling; that is, they should keep the widgets to themselves and only selectively expose functionality as needed.
Subclassing is not justified in this case.
Your conditional packing loops (1) should not use conditions at all, and (2) should simply iterate over a range of indices instead.
You had the right(ish) idea to use a Var, but it shouldn't be a StringVar - it should be an IntVar, since that's usable as an index.
You need to make use of loops to fill your label and radio lists.
Rather than manually checking every single option, just pull the value of the IntVar and use that as an index into a tuple of ranges.
Rather than command, prefer to trace the variable itself.
After these changes you will not need to save your display_options and the parent Tk as members of your class.
Suggested
import tkinter as tk
from tkinter import ttk
from typing import Sequence
displays = (
range(1, 10, 2),
range(2, 11, 2),
range(6, 11, 1),
range(1, 6, 1),
range(10, 0, -1),
)
class Display:
def __init__(self, parent: tk.Tk) -> None:
self.frame = ttk.Frame(parent, relief='sunken', padding='20')
self.frame.grid(row=0, column=0, rowspan=10)
self.labels = [
ttk.Label(self.frame, text=f'Data Value #{i}')
for i in range(1, 11)
]
def show(self, indices: Sequence[int]) -> None:
for label in self.frame.pack_slaves():
label.pack_forget()
for i in indices:
self.labels[i - 1].pack()
class Window:
def __init__(self):
parent = tk.Tk()
self.mainloop = parent.mainloop
self.selection = tk.IntVar(parent)
self.selection.trace_add('write', self.change_display)
for i in range(5):
button = ttk.Radiobutton(
parent, text=f"Option {i + 1}",
variable=self.selection, value=i,
)
button.grid(row=i, column=1, padx=20)
self.data_frame = Display(parent)
self.selection.set(0)
def change_display(self, name: str, index: str, mode: str) -> None:
display = displays[self.selection.get()]
self.data_frame.show(display)
if __name__ == '__main__':
win = Window()
win.mainloop()
An alternative method involves calls to grid/grid_remove instead of pack:
import tkinter as tk
from tkinter import ttk
from typing import Sequence
DISPLAYS = (
range( 1, 10, 2),
range( 2, 11, 2),
range( 6, 11, 1),
range( 1, 6, 1),
range(10, 0, -1),
)
class Display:
def __init__(self, parent: tk.Tk) -> None:
self.frame = ttk.Frame(parent, relief='sunken', padding='20')
self.frame.grid(row=0, column=0, rowspan=5)
self.labels = [
ttk.Label(self.frame, text=f'Data Value #{i}')
for i in range(1, 11)
]
def show(self, indices: Sequence[int]) -> None:
for y, i in enumerate(indices):
self.labels[i - 1].grid(row=y, column=0)
for i in set(range(1, 11)) - set(indices):
self.labels[i - 1].grid_remove()
class Window:
def __init__(self) -> None:
parent = tk.Tk()
self.mainloop = parent.mainloop
self.selection = tk.IntVar(parent)
self.selection.trace_add('write', self.change_display)
for i in range(5):
button = ttk.Radiobutton(
parent, text=f"Option {i + 1}",
variable=self.selection, value=i,
)
button.grid(row=i, column=1, padx=20)
self.data_frame = Display(parent)
self.selection.set(0)
def change_display(self, name: str, index: str, mode: str) -> None:
display = DISPLAYS[self.selection.get()]
self.data_frame.show(display)
if __name__ == '__main__':
Window().mainloop() | {
"domain": "codereview.stackexchange",
"id": 43742,
"tags": "python, tkinter, polymorphism"
} |
Induced emf over the cuboidal wing of an airplane due to Earth's magnetic field | Question: If the wing of an airplane is cuboidal in shape, and the Earth's magnetic field $\bf{B}$ is uniform in the neighbouring volume of the plane, does that mean the induced emf over the entire 'block' of wing is zero due to
$$\Phi=\oint_S \mathbf{B} \cdot \mathbf{n}\,dS=0 \tag{1}$$
and hence $E=-\frac{d \Phi}{dt}=0$?
Therefore only asymmetrical wings (which are a common feature found in modern aeroplanes) can give rise to an induced emf due to $\bf{B}$?
Answer: (a) Your phrase "the induced emf over the entire block of wing" doesn't make much sense to me. You need to specify a closed path along which you calculate the emf. This path need not be through the metal of the wing, but you may wish it to be.
(b) If the closed path moves with the plane, and the plane moves in a straight line through a uniform magnetic field, then $\Phi$ in your equation will be a constant (not generally zero), but $\mathscr E = -\frac{d\Phi}{dt}=0$, so the emf will still be zero.
(c) This is the case whether or not the wing is symmetrical, as it applies to all closed loops moving with the plane.
(d) Homework and exam questions sometimes ask students to calculate the emf between the wingtips of an aircraft in flight. This emf is zero unless the wingspan of the aircraft is included in a loop, not all of which is moving with the aircraft. One crazy arrangement would be to have the wingtips brushing against metal fences anchored to the ground! You could connect a voltmeter between the fences.
"domain": "physics.stackexchange",
"id": 69872,
"tags": "electromagnetism, magnetic-fields, electromagnetic-induction, aircraft"
} |
Extracting coefficients of polynomials given by straight line programs | Question: Consider a straight line program of length $L$ that takes one input $x \in \mathbb{R}$ and computes a polynomial $p(x)$, using only addition and multiplication (including multiplication by constants). We allow the degree to be very large: potentially $2^{\Theta(L)}$.
Question: Is there an $O(\operatorname{poly}(L)n^\theta)$ algorithm for computing the $n$th coefficient of $p(x)$, with $\theta < 1$?
I roughly want to say "assume exact arithmetic", but there is a subtlety in that sufficiently large exact arithmetic might allow cheating. It's possible Blum-Shub-Smale (BSS) is the right model, but I am not confident.
My guess is that the answer is (sadly) no, since all the straight line program polynomial algorithms I can find either (1) are linear or superlinear in degree or (2) assume $p(x)$ is sparse.
More details: I should add why I think $O(L^{O(1)} n^\theta)$ is the most interesting complexity goal, and unfortunately why I think it’s unobtainable. First, direct evaluation of all coefficients using FFT multiplication gives $O(L n \log n)$, so the goal is a slight reduction in the exponent of $n$. Ignoring dependence on $L$, this is achievable: there are baby step/giant step methods which achieve $O(n^{1/2})$ for any holonomic sequence (Bostan and Yurkevich 2020 is a nice example). However, the complexity of the holonomic recurrence grows badly with $L$, and I believe the total complexity is $2^{O(2^L)}n^{1/2}$. So the question is asking whether one can reduce the exponent on $n$ without blowing up the dependence on $L$.
Unfortunately, my best guess is that this is impossible, and specifically that it would contradict SETH. I don’t know how to do that reduction without losing precision on $\theta$, however.
Answer: This would contradict SETH by using a known hardness result for subset sum: https://arxiv.org/abs/1704.04546.
In this paper it is shown that the subset sum problem with $n$ integers and target $T$ cannot be solved in $T^{1-\varepsilon} \cdot 2^{o(n)}$ time for any $\varepsilon>0$. What you propose would give a $T^\theta \cdot n^{O(1)}$ algorithm as follows:
Let the input numbers be $a_1, \ldots, a_n$. For each $a_i$, we can construct a straight line program that computes the polynomial $x^{a_i} + 1$ and has length $O(\log a_i)$. Then by multiplying these together, we get a straight line program of length linear in the number of bits of the input, with the property that the $T$-th coefficient is nonzero if and only if there is some subset that sums to $T$. | {
"domain": "cstheory.stackexchange",
"id": 5466,
"tags": "polynomials"
} |
How quickly does a small piece of molten steel cool at room temperatures? | Question: Say I have a $(\frac{1}{2}D)^2 \pi \times \ell = (.05)^2 \pi \times .03 \approx 0.000236 \ \text{mm}^3$ piece of molten steel freshly spewed out of a hot nozzle. Now assuming the nozzle moves away quickly enough (because it's hot and radiates heat at close range by convection and radiation), how quickly will the piece of steel solidify? How do I calculate that?
Answer: As always there are three sources of heat transfer. Radiative, Conduction and Convection.
Radiative cooling will be a function of the emissivity ($\epsilon$ - look here - you can probably assume some oxidation, so close to 0.79), the surface area (probably roughly spherical if it's molten, but you can work it out from the geometry), and the temperature (it's also receiving radiation from the surroundings, and the temperature is changing, so it's a dynamic situation that trends exponentially towards room temperature).
However, what else is going on? If it's flying through a vacuum, the above will be fine. If not, you'll need heat transfer coefficients. If it lands on Cu it will cool much faster than if it lands on Alumina. The thermal conductivity and size and temperature of the objects it interacts with will all be relevant.
If it's moving through the air, it will be different from if it's levitating in the air.
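To put a rough number on the radiative term alone, here is a sketch (grey-body Stefan-Boltzmann approximation; the 0.79 emissivity is the oxidized-steel figure mentioned above, while the 1800 K melt temperature and the spherical shape are assumptions):

```python
import math

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def net_radiated_power(eps, area, T, T_env=300.0):
    # Net power exchanged with surroundings at T_env (emission minus absorption)
    return eps * SIGMA * area * (T**4 - T_env**4)

V = 0.000236e-9                         # droplet volume, mm^3 -> m^3
r = (3 * V / (4 * math.pi)) ** (1 / 3)  # radius if it balls up into a sphere
P = net_radiated_power(0.79, 4 * math.pi * r**2, 1800.0)  # watts
```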
Edit based on comments
Presumably you're trying to make a 3D metal printer or something similar. In this scenario almost all of the cooling is going to come from the substrate on which the droplet lands. The wetting angle will also be relevant, because it affects the contact area. This is not a minor detail: if the droplet balls up on the substrate it will cool much more slowly than if it wets the substrate at a very fine angle.
So, let's start with the surface area. Assume that you have a wetting angle of $\theta$. Also assume that you form a droplet that is a spherical cap. Then you know that the volume of the spherical cap is
\begin{equation} V = \frac{1}{3}\pi R^3(2-3\cos\theta+\cos^3\theta) \end{equation}
The area is:
\begin{equation}A = \pi R^2\sin^2\theta\end{equation}
You know the volume ($V$) of your piece, so solve for $R$ using the first equation, and then calculate your contact area ($A$) using the second.
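As an illustrative sketch of those two steps (Python; the wetting angle is in radians, and the function name is mine, invented for the example), the formulas can be combined to go straight from volume and wetting angle to contact area:

```python
import math

def contact_area(volume, theta):
    """Invert V = (pi/3) R^3 (2 - 3 cos(theta) + cos^3(theta)) for R,
    then return the contact area A = pi R^2 sin^2(theta)."""
    shape = 2 - 3 * math.cos(theta) + math.cos(theta) ** 3
    radius = (3 * volume / (math.pi * shape)) ** (1 / 3)
    return math.pi * radius ** 2 * math.sin(theta) ** 2
```

For a hemispherical droplet ($\theta = \pi/2$) the shape factor is 2, so a volume of $\frac{2}{3}\pi R^3$ recovers $A = \pi R^2$, as expected.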
For a small droplet like that, I think you can assume that it is uniform temperature, so now you need to know how the heat is flowing. One challenge is that the little volume where it lands will heat up, slowing the dissipation. This is especially true if you have multiple droplets landing in a row.
However, let's assume that the substrate is a good enough conductor and is big enough that we can treat it like a heat sink with temperature $T_{\mathrm{sub}}\approx300\mathrm{K}$.
The molten droplet has temperature $T(t)$, where $t$ is time, and 0 is the moment it settles onto the substrate.
Now we need to work out how much heat energy there is and how to convert between energy (joules) and temperature (kelvin) for your little droplet. This is heat capacity. For solid steel it's about 0.46 J g$^{-1}$K$^{-1}$ (from Wikipedia), and for molten steel it's about 0.82 J g$^{-1}$K$^{-1}$ (at least that's for iron, from here). You can figure out your mass using the density of steel and the volume you gave. That will give you a number in J/K (joules per kelvin) - call this number $c$.
Next take the heat transfer coefficient ($h$). This all gives you:
\begin{equation} \frac{dT}{dt} = -h\cdot A\cdot(T - T_{\mathrm{sub}})/c \end{equation}
which can be solved in the usual way to give you $T$ as a function of $t$. | {
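Solving that linear ODE gives exponential relaxation towards the substrate temperature, $T(t) = T_{\mathrm{sub}} + (T(0) - T_{\mathrm{sub}})\,e^{-hAt/c}$. A minimal sketch in Python (the numbers used in any call are placeholders you must supply, not real steel data):

```python
import math

def droplet_temperature(t, t_initial, t_sub, h, area, c):
    """Droplet temperature at time t, from dT/dt = -h * area * (T - t_sub) / c.

    t_initial, t_sub in kelvin; h is the heat transfer coefficient,
    area the contact area, c the total heat capacity in J/K.
    """
    tau = c / (h * area)  # thermal time constant in seconds
    return t_sub + (t_initial - t_sub) * math.exp(-t / tau)
```

The time constant $\tau = c/(hA)$ is the useful summary number: after a few multiples of $\tau$ the droplet is effectively at the substrate temperature.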
"domain": "physics.stackexchange",
"id": 25618,
"tags": "thermodynamics, metals, experimental-technology, technology"
} |
Can this question be solved without knowing the Page Table Entry? | Question: I'm preparing for the exams and this question came up -
Consider a machine with $64MB$ physical memory and a $32$-bit virtual address space. If the page size is $4KB$, what is the approximate size of the page table?
(A) 16MB
(B) 8MB
(C) 2MB
(D) 24MB
The way I've solved it -
Physical Address Space = $64MB = 2^{26}B$
Virtual Address = $32$-bits, $\therefore$ Virtual Address Space = $2^{32}B$
Page Size = $4KB=2^{12}B$
Number of pages =$\,\Large\frac{2^{32}}{2^{12}}$$=2^{20}$ pages.
Number of frames =$\,\Large\frac{2^{26}}{2^{12}}$$=2^{14}$ frames.
$\therefore$ Page Table Size = $2^{20}\times 14\,bits \approx 2^{20}\times 16\,bits\approx 2^{20}\times 2B= 2MB.$
Some books claim the answer to be $8MB$ and I don't see why, but that confuses me.
Is this the correct way to solve it? Is the answer correct?
Answer: Your answer and calculation is correct, if we used a one-level page table.
However, a one-level page table is probably not a very likely implementation strategy, so it's more interesting to analyze the space consumption for a more plausible implementation strategy and work out what this would look like.
A two-level page table is probably a more likely implementation strategy. The first 9 bits of the virtual address will be used as an index into the first-level table; the entry found there points to a second-level table. The next 11 bits of the virtual address will be used as an index into that second-level table.
With this strategy, the size of the entire page table is $1+k$ pages, or $(1+k) \times 2^{12}$ bytes, where $k$ is the number of second-level table entries. The worst case is $k=2^9$, in which case the entire page table takes up approximately $2^9 \times 2^{12} = 2^{21}$ bytes, i.e., 2 MB. In other words, the worst case for a two-level page table is approximately the same as the size of a one-level page table. However, the best case is much less.
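The arithmetic above can be checked mechanically; here is a quick sketch in Python (the 2-byte entry size is the rounded-up figure from the question's own calculation):

```python
PAGE = 2 ** 12        # 4 KB page size
VPN_BITS = 32 - 12    # 20-bit virtual page number
ENTRY = 2             # ~14-bit frame number, rounded up to 2 bytes

# one-level table: one entry per virtual page
one_level = (2 ** VPN_BITS) * ENTRY

# two-level worst case: the first-level page plus k = 2^9 second-level pages
two_level_worst = (1 + 2 ** 9) * PAGE

print(one_level, two_level_worst)  # both are roughly 2 MB
```

The two worst-case figures differ only by the single first-level page, which is why the two-level table's worst case matches the one-level size.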
I don't know how the books got 8 MB; one way to arrive at that figure is to assume 8-byte page-table entries ($2^{20} \times 8\,B = 8\,MB$), but that is far larger than the roughly 2 bytes actually needed to hold a 14-bit frame number, so absent further assumptions your answer of 2 MB stands.
"domain": "cs.stackexchange",
"id": 5271,
"tags": "operating-systems, memory-management, paging"
} |
Why are policy gradient methods preferred over value function approximation in continuous action domains? | Question: In value-function approximation, in particular, in deep Q-learning, I understand that we first predict the Q values for each action. However, when there are many actions, this task is not easy.
But in policy iteration we also have to output a softmax vector related to each action. So I don't understand how this can be used to work with continuous action space.
Why are policy gradient methods preferred over value function approximation in continuous action domains?
Answer:
But in policy iteration we also have to output a softmax vector related to each action
This is not strictly true. A softmax vector is one possible way to represent a policy, and works for discrete action spaces. The difference between policy gradient and value function approaches here is in how you use the output. For a value function you would find the maximum output, and choose that (perhaps $\epsilon$-greedily), and it should be an estimate of the value of taking that action. For a policy function, you would use the output as probability to choose each action, and you do not know the value of taking that action.
So I don't understand how this can be used to work with continuous action space?
With policy gradient methods, the policy can be any function of your parameters $\theta$ which:
Outputs a probability distribution
Can be differentiated with respect to $\theta$
So for instance your policy function can be
$$\pi_{\theta}(s) = \mathcal{N}(\mu(s,\theta), \sigma(s,\theta))$$
where $\mu$ and $\sigma$ can be functions you implement with e.g. a neural network. The output of the network is a description of the Normal distribution for the action value $a$ given a state value $s$. The policy requires you to sample from the normal distribution defined by those values (the NN doesn't do that sampling, you typically have to add that in code).
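A minimal sketch of such a Gaussian policy (Python; the linear forms for $\mu$ and $\sigma$ are stand-ins for whatever network you actually use, and the parameter names are invented for the example):

```python
import math
import random

def gaussian_policy_action(state, theta):
    """Sample a 1-D continuous action from pi_theta(s) = N(mu(s), sigma(s)).

    Here mu and sigma are toy linear functions of the state; in practice
    they would be the outputs of a neural network parameterised by theta.
    """
    mu = theta["w_mu"] * state + theta["b_mu"]
    # exponentiate so that sigma is always positive
    sigma = math.exp(theta["w_sigma"] * state + theta["b_sigma"])
    return random.gauss(mu, sigma)  # the sampling step the NN doesn't do
```

In a real policy-gradient method you would also need $\log \pi_\theta(a|s)$ and its gradient with respect to $\theta$ to perform the updates; the sampling shown here is only the action-selection half.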
Why are policy gradient methods preferred over value function approximation in continuous action domains?
Whilst it is still possible to estimate the value of a state/action pair in a continuous action space, this does not help you choose an action. Consider how you might implement an $\epsilon$-greedy policy using action value approximation: It would require performing an optimisation over the action space for each and every action choice, in order to find the estimated optimal action. This is possible, but likely to be very slow/inefficient (also there is a risk of finding local maximum).
Working directly with policies that emit probability distributions can avoid this problem, provided those distributions are easy to sample from. Hence you will often see things like policies that control parameters of the Normal distribution or similar, because it is known how to easily sample from those distributions. | {
"domain": "datascience.stackexchange",
"id": 2282,
"tags": "reinforcement-learning"
} |
Sun Zenith angle and location on earth | Question: Let suppose you are at +3 degrees latitude (3 degrees north of Equator ... for example at Cali in Colombia). At what periods of the year will the sun be at your vertical (at your zenith) at noon?
I know that at Sun is at equator zenith on 21st March and 21st September (spring and fall equinox). So at +3 degree north of equator will it be before or after 21st march? And before or after 21st September?
Answer: The Sun will be at your zenith a few days after 21st March, after it crosses the equator heading northwards* going towards summer, and a few days before 21st September, before it crosses the equator heading southwards* after summer has ended.
* from an Earth-centric viewpoint. We know, of course, that the Earth is rotating about its axis and revolving about the Sun, and that Earth's axial tilt is what causes the seasons and the apparent motion of the Sun across the sky both through the day and through the seasons. | {
"domain": "astronomy.stackexchange",
"id": 2707,
"tags": "the-sun, fundamental-astronomy"
} |
Quick question regarding Wick's theorem | Question: Let $T\{...\}$ denote time-ordering, $N\{...\}$ normal-ordering and $\left<ab\right>$ be the propagator.
Wick's theorem states that
$$ T\{ab\} = N\{ab\} + \left<ab\right>. $$
I now apply time-ordering to both sides of this equation. Because $T$ is idempotent, $T\{T\{ab\}\}=T\{ab\}$. Also $T\{N\{ab\}\}=T\{ab\}$ because we re-order operators inside the $T\{...\}$ anyway. $\left<ab\right>$ is not an operator, so time-ordering acts on it trivially. We obtain:
$$ T\{ab\} = T\{ab\} + \left<ab\right>$$
$$ \left<ab\right> = 0.$$
This result is obviously incorrect. What am I missing?
Answer: I) It is true that operator ordering procedures are idempotent operations
$$\tag{1} T(T(\ldots))~=~T(\ldots)\quad\text{and}\quad N(N(\ldots))~=~N(\ldots). $$
But it is not true that the outermost ordering cancels the effect of the innermost ordering
$$\tag{2} T(N(\ldots))~=~T(\ldots)\quad\text{and}\quad N(T(\ldots))~=~N(\ldots). \quad (\longleftarrow \text{Both Wrong!})$$
In fact, the opposite is true
$$\tag{3} T(N(\ldots))~=~N(\ldots)\quad\text{and}\quad N(T(\ldots))~=~T(\ldots),$$
as a special case of a nested${}^1$ Wick's Theorem, cf. Section II below.
Example. If for two operators $a$ and $b$, we have the relation
$$\tag{4} T(ab) ~=~ N(ab) + \langle ab\rangle {\bf 1},$$
where the contraction $\langle ab\rangle$ is a $c$-number, then
$$T(N(ab))~\stackrel{(4)}{=}~T\left(T(ab)- \langle ab\rangle {\bf 1}\right)
~\stackrel{\text{linearity}}{=}~T(T(ab))- T\left(\langle ab\rangle {\bf 1}\right)$$
$$\tag{5} ~\stackrel{(1)}{=}~T(ab)- \langle ab\rangle {\bf 1}
~\stackrel{(4)}{=}~N(ab)~\neq~ T(ab) .$$
II) More generally, if we want to bring a nested expression of the form
$$ \tag{6} T\left(N(\ldots) \ldots N(\ldots)\right) $$
on normal ordered form, there is a nested${}^1$ Wick's Theorem, which states that we should only include contractions between pairs of operators who belong to different normal order symbols.
Example. In OP's case (5), this means for the lhs. that
$$ \tag{7} T(N(ab))~=~N(ab), $$
since $a$ and $b$ belong to the same normal order symbol $N(ab)$, while the rhs. is
$$\tag{8} T(ab) ~=~ N(ab) + \langle ab\rangle {\bf 1}.$$
[Note that $N(a)=a$ and $N(b)=b$.] See also e.g. this and this Phys.SE posts.
--
${}^1$
A nested Wick's Theorem (between radial order and normal order) is briefly stated on p. 39 in J. Polchinski, String Theory, Vol. 1. Beware that radial order is often only implicitly written in CFT texts. | {
"domain": "physics.stackexchange",
"id": 19450,
"tags": "quantum-field-theory, wick-theorem"
} |
Reading integers while break condition not met in C++ | Question: This is a drill from the Programming Principles and Practice Using C++. I managed to get it to work but somehow it doesn't feel right to me. I 'd appreciate any pointers on how to simplify/improve the solution.
Especially when it comes to best check if '|' was entered. I feel that creating two extra string variables and then converting them to int using std::stoi is very inefficient. I am new to C++ so please excuse the ignorance. Appreciate any feedback.
Question
Write a program that in a while loop reads 2 integers and prints them. Carry on until the user enters '|'.
#include<iostream>
#include<string>
int main(){
int a, b;
std::string _a, _b;
while (true){
std::cout << "Enter two numbers." << std::endl;
std::cin >> _a >> _b;
if (_a == "|" || _b == "|"){break;};
a = std::stoi(_a);
b = std::stoi(_b);
std::cout << "You entered " << a << " and " << b << std::endl;
}
return 0;
}
Answer: std::cout << "Enter two numbers." << std::endl;
You are looking to accept integers, not just numbers (i.e. 1.0, 1e0, 0b1, 0x1).
Avoid std::endl. std::endl inserts a newline character into an output sequence and flushes it as if by calling stream.put(stream.widen('\n')) followed by stream.flush(). If you just want to insert a newline character, then stream '\n'.
std::cout << "Enter two integers:\n"; // can append it on C strings
std::cin >> _a >> _b;
Think about how you want to handle inputs. You are expecting 2 integers, but what if they pass you one integer? No integers (or send end-of-file)? For those cases, you should check if the value was read successfully (see 10.10 in your book).
When a value cannot be extracted from the stream, the streamable types in C++ will typically set std::ios::failbit. When std::string is the type being extracted, a failure to extract leaves the data in the string unchanged. For other types, like the numeric types, the extraction operation will assign 0 on failure.
If the stream enters a fail state, you won't be able to extract from it until you clear() the error flag. Not clearing the error flag would result in an infinite loop once the stream entered the error state.
std::cin.clear();
What if the user enters more than 2 integers? Should you keep reading from the line? Doing this, you'll need to make sure both values are extracted, even if one failed. Could you ignore() the remaining input until end of line? You were only expecting two integers, so this is an option.
std::cin.ignore(std::numeric_limits<std::streamsize>::max(), '\n');
a = std::stoi(_a);
b = std::stoi(_b);
std::cout << "You entered " << a << " and " << b << std::endl;
What happens if the extraction from std::cin fails? Either _a, _b, or both could be empty and std::stoi will throw std::invalid_argument when an empty string is encountered.
std::stoi stops at the first character that cannot be part of an integer (such as a decimal separator) and returns the number parsed up to that point. If you want to validate that the string represents an integer, you need to make sure std::stoi consumed the full buffer.
std::size_t last_pos_read;
a = std::stoi(_a, &last_pos_read);
if (last_pos_read != _a.length()) {
throw std::invalid_argument("Not an integer.");
}
Inexperience aside, the big lesson you should take from this review is to test! Test inputs of type integer (negatives, zero, positives, 16/32/64bit values), floating point (representable/nonrepresentable values), string/containers (empty/nonempty, small/large sizes). Test failures at different times, like an immediate failure or a failure after a successful loop. | {
"domain": "codereview.stackexchange",
"id": 31698,
"tags": "c++, beginner, programming-challenge"
} |
Difference between potential and potential energy mathematically | Question: I search google, quora, and reddit: What is the difference between potential and potential energy?
Potential is the ability to do work. Potential Energy is the amount of energy it acquires.
Potential is the work done. Potential Energy is the energy stored when the work is done.
Potential Energy means only the stored energy due to position and potential means stored energy in any Field.
Please explain the difference between potential and potential energy mathematically; What is the mathematical formula for potential and potential energy? I know two expressions for potential energy but not the derivations; $U = mgh$ and, $U = -G \frac{m_1 M_2}{r}$.
I am learning gravitation in my school; my teacher derived gravitational potential from gravitational potential energy, but I missed the class and class notes are not available. Also this question-answer is not explained and is limited to electromagnetic potential only. I am sorry but please help me. Please explain potential and potential energy with the example of gravitation.
Answer: For the gravitational case, the definition of potential ($V$) is: $$V = \frac{U_g}{m}$$
Mathematically they represent different physical quantities. Potential energy is a property of a system of two masses, while potential is only a property of the source mass.
For a spherical source mass $M$, the potential at a radial distance $r> a$ where $a$ is it's radius, is given by: $$V = \frac{-\frac{GmM}{r}}{m}= -\frac{GM}{r}$$
This result does not apply for $r < a$ as $U_g \ne \frac{-GmM}{r}$ for $r < a$, but the first definition still holds.
For the electrostatic analog, replace mass with charge and $U_g$ with $U_E$.
Hope this helps. | {
"domain": "physics.stackexchange",
"id": 82880,
"tags": "gravity, potential, field-theory, potential-energy, definition"
} |
Is a photon going through a center of mass affected by time dilation more then another going around? | Question: Please don't mark as duplicate. My specific question was not answered in other posts. And this question here Which Photon would win the race?
is also about neutrinos, electrons.
My question now is only about photons, and the shapiro delay.
then if you try to make analogy to the shapiro delay, then comes my question:
according to the shapiro delay, a photon going around the mass arrives later then it should.
there is no experiment where they would somehow shoot a photon also through the center of mass (maybe between two close stars or an artificial mass with a whole through it, or something else)
so shapiro delay only talks about the photon going around, and that photon needed more time to arrive from point A to B, but it's speed had to stay c, so the distance it traveled had to be longer (which it really is in 3D)
so a photon going through a tunnel through the center of mass (or between the two SUNs here) has to travel at speed c too, but its distance is shorter in 3D so it's time need to be shorter too. So that photon has to arrive from A to B faster.
Imagine this set:
We shoot two photons, one around one of the stars, and one inbetween two stars.
We shoot the one going around in an angle that it will cross point B too.
Question:
so was there any experiment like this?
Am I right that time dilation would affect the photon going between two stars more and it would arrive first?
Answer: You would think that a particle moving in the centre of mass experiences the same time as moving through space wthout any mass in it. In both cases no gravity is felt, so you would think time proceeds at it's fastest rate. The pace of time is nevertheless dependent on the gravitational potential, which is different in both cases, so time is moving at different paces for the two photons (coordinate time, because for the photon itself the pace of time is zero). So if you compensate for the differences, the two photons don't arrive at the same time. | {
"domain": "physics.stackexchange",
"id": 34936,
"tags": "special-relativity"
} |
Converting generalized NFAs to NFAs | Question: I came across generalized nondeterministic finite automata (GNFAs) in Sipser's Introduction to the Theory of Computation. These are automata where transitions are labelled with regular expressions, rather than single symbols from the alphabet. I thought he would explain why GNFAs are allowed. I mean, an appropriate explanation would be that GNFAs are equivalent to NFAs, or GNFAs are equivalent to DFAs or some such argument. But I couldn't find any such explanation in the book.
Online, I read in this article that you can convert a GNFA to an NFA as follows:
For each transition arrow in the GNFA, we insert the complete
automaton accepting the language generated by the transition arrow’s
label as a “subautomaton;” this way, we can replace each regular
expression by a set of states and character transitions
How is the automaton inserted?
Let's say we have a GNFA with an arrow going from state A to state B labelled with a regular expression R. To convert this GNFA to an NFA, do we get rid of that arrow, instead, take NFA N that recognizes L(R), and create an arrow from A to the start state of N labelled with the epsilon symbol, then create arrows from the accept states of N to B, each also labelled with the epsilon symbol?
Of course the accept states of N would no longer be accept states in the new machine, would they?
I know that GNFAs are equivalent to NFAs but I need a convincing proof, not just a short paragraph mentioning their equivalence.
Answer: You are correct in your understanding of the construction: For every transition arrow, say from state $A$ to state $B$ in the GNFA labeled by a regular expression $R$, you replace that transition by the NFA, $N$, for $R$, with an $\epsilon$-transition from $A$ to the start state of $N$ and $\epsilon$-transitions from the final states of $N$ (which you then "un-finalize") to state $B$. Do this for every transition in the GNFA and you'll wind up with an ordinary NFA.
To show that the two accept the same language, it's enough to rely on the previous result (Lemma 1.55 in Sipser) that any regular expression denotes a language that is recognized by a NFA. | {
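The splice described above can be sketched concretely (Python; the dictionary-based NFA representation is my own choice for brevity — None stands for $\epsilon$, and state names are assumed disjoint between the two machines):

```python
def splice_regex_edge(nfa, a, b, sub_nfa):
    """Replace a GNFA edge a --R--> b by the NFA for R:
    copy sub_nfa's transitions in, add an epsilon edge from a to its start,
    and epsilon edges from its (now un-finalized) accept states to b.
    NFAs are dicts {"start": s, "finals": set, "delta": {(state, sym): set}}.
    """
    for key, targets in sub_nfa["delta"].items():
        nfa["delta"].setdefault(key, set()).update(targets)
    nfa["delta"].setdefault((a, None), set()).add(sub_nfa["start"])
    for final in sub_nfa["finals"]:
        nfa["delta"].setdefault((final, None), set()).add(b)
    return nfa  # sub_nfa's finals deliberately do NOT join nfa's finals
```

Repeating this for every regex-labeled edge, and then eliminating the $\epsilon$-transitions if desired, yields an ordinary NFA for the GNFA's language.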
"domain": "cs.stackexchange",
"id": 3226,
"tags": "automata, finite-automata, simulation"
} |
Finding the common prefix in a list of strings | Question: Given a list of strings, my task is to find the common prefix.
Sample input: ["madam", "mad", "mast"]
Sample output: "ma"
Sample input: ["question", "method"]
Sample output: ""
Below is my solution, I'd be happy for help improving the algorithm (I'm open to totally different approaches) or general code improvement tips.
Thanks :)
public class PreFixer {
public static void main(String[] args) {
if(args.length < 1) {
System.out.println("invalid arguments");
return;
}
String commongPrefix = getCommonPrefix(args);
System.out.println("Common Prefix for list is : " + commongPrefix);
}
private static String getCommonPrefix (String[] list){
int matchIndex = recursiveChecker(0, list);
return list[0].substring(0, matchIndex);
}
private static int recursiveChecker(int strIndex, String[] list){
for(int x=0; x<list.length; x++) {
if(strIndex >= list[x].length()){
return strIndex;
}
if(list[0].charAt(strIndex) != list[x].charAt(strIndex)) {
return strIndex;
}
}
return recursiveChecker(strIndex + 1, list);
}
}
Answer:
There are some inconsistencies in your code style: sometimes you put a space before an opening curly bracket, sometimes you don't. It is good practice to adhere to one style (in this case, it is conventional to have a whitespace there). It is also conventional to surround all binary operators with whitespace to make the code more readable. For instance, for (int x = 0; x < list.length; x++) looks better than for(int x=0; x<list.length; x++).
In terms of time complexity, your algorithm is optimal (it is linear in the size of the input). However, if it is supposed to work with long strings, I'd use iteration instead of recursion (recursion gets a StackOverflowError when the strings get really big). Here is my iterative solution:
public static String getLongestCommonPrefix(String[] strings) {
int commonPrefixLength = 0;
while (allCharactersAreSame(strings, commonPrefixLength)) {
commonPrefixLength++;
}
return strings[0].substring(0, commonPrefixLength);
}
private static boolean allCharactersAreSame(String[] strings, int pos) {
String first = strings[0];
for (String curString : strings) {
if (curString.length() <= pos
|| curString.charAt(pos) != first.charAt(pos)) {
return false;
}
}
return true;
}
In general, each class should have a single responsibility (that is, you might create two separate classes here: one for computing the longest prefix and the other for checking and parsing command-line arguments). But I think it is fine to have one class here (the entire class is pretty small) as long as the format of arguments is not going to change in the future.
"domain": "codereview.stackexchange",
"id": 12683,
"tags": "java, algorithm, strings"
} |
A continuous optimization problem that reduces to TSP | Question: Suppose I am given a finite set of points $p_1,p_2,..p_n$ in the plane, and asked to draw a twice-differentiable curve $C(P)$ through the $p_i$'s, such that its perimeter is as small as possible. Assuming $p_i=(x_i,y_i)$ and $x_i<x_{i+1}$, I can formalize this problem as:
Problem 1 (edited in response to Suresh's comments) Determine $C^2$ functions $x(t),y(t)$ of a parameter $t$ such that the arclength $L = \int_0^1 \sqrt{x'^2+y'^2}\,dt$ is minimized, with $x(0) = x_1, x(1) = x_n$ and for all $t_i$ with $x(t_i) = x_i$, we have $y(t_i)=y_i$.
How do I prove (or perhaps refute) that Problem 1 is NP-hard?
Why I suspect NP-hardness Suppose the $C^2$ assumption is relaxed. Evidently, the function of minimal arclength is the Travelling Salesman tour of the $p_i$'s. Perhaps the $C^2$ constraint only makes the problem much harder?
Context A variant of this problem was posted on MSE. It didn't receive an answer both there and on MO. Given that it's nontrivial to solve the problem, I want to establish how hard it is.
Answer: The differentiability requirement doesn't change the nature of the problem: requiring $\mathcal{C}^0$ (continuity) or $\mathcal{C}^{\infty}$ (infinite differentiability) gives the same lower bound for the length and the same order of points, and is equivalent to solving the traveling salesman problem.
If you have a solution to the TSP, you have a $\mathcal{C}^0$ curve that goes through all the points. Conversely, suppose you have a $\mathcal{C}^0$ curve of finite length that goes through all the points, and let $p_{\sigma(1)}, \ldots, p_{\sigma(n)}$ be the order in which it traverses the points and $t_1,\ldots,t_n$ the corresponding parameters (if the curve traverses a point more than once, pick any of the possible values of $t$). Then the curve built from $n$ segments $[p_{\sigma(1)},p_{\sigma(2)}], \ldots, [p_{\sigma(n-1)},p_{\sigma(n)}], [p_{\sigma(n)},p_{\sigma(1)}]$ is shorter, because for each segment a straight line is shorter than any other curve that connects the points. Thus for every ordering of the points, the best curve is a TSP solution, and the TSP solution provides the best ordering of the points.
Let's now show that requiring the curve to be $\mathcal{C}^{\infty}$ (or $\mathcal{C}^k$ for any $k$) doesn't change the best ordering of points. For any TSP solution of total length $\ell$ and any $\epsilon > 0$, we can round every corner, i.e. build a $\mathcal{C}^{\infty}$ curve that traverses the points in the same order and has a length of at most $\ell + \epsilon$ (the explicit construction relies on algebraic functions and $e^{-1/t^2}$ to define bump functions and from those smooth connections between curve segments such as $e^{1-1/x^2} (x-e^{-1/(1-x)^2})$ which connects with $y=0$ at $x=0$ and with $y=x$ at $x=1$; it is tedious to make these explicit, but they are computable); hence, the lower bound for a $\mathcal{C}^{\infty}$ curve is the same as for a collection of segments (note that the lower bound is not reached in general). | {
"domain": "cs.stackexchange",
"id": 127,
"tags": "complexity-theory, np-hard, optimization, computable-analysis"
} |
Final Step for ROS installation failing | Question:
Running this command:
~/ros_catkin_ws/src/catkin/bin/catkin_make_isolated --install -DCMAKE_BUILD_TYPE=Release
Crashes with at this point:
cd /Users/Raaj/ros_catkin_ws/build_isolated/pcl_ros && /Users/Raaj/ros_catkin_ws/install_isolated/env.sh cmake -vd /Users/Raaj/ros_catkin_ws/src/perception_pcl/pcl_ros -DCATKIN_DEVEL_PREFIX=/Users/Raaj/ros_catkin_ws/devel_isolated/pcl_ros -DCMAKE_INSTALL_PREFIX=/Users/Raaj/ros_catkin_ws/install_isolated -DCMAKE_BUILD_TYPE=Release
with this :
CMake Error at /usr/local/share/pcl-1.8/PCLConfig.cmake:49 (message):
simulation is required but glew was not found
I've already installed glew via brew install glew. How may I solve this? I am using OSX Mavericks
Originally posted by soulslicer on ROS Answers with karma: 61 on 2014-10-10
Post score: 0
Original comments
Comment by tfoote on 2014-10-10:
Please link to the instructions you are following.
Answer:
Ah it's ok, it seems OSX Mavericks has removed GLEW. I installed it and linked it correctly and it works
Originally posted by soulslicer with karma: 61 on 2014-10-11
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by kpax77 on 2015-01-23:
And how did you do it if it was already installed via brew ? | {
"domain": "robotics.stackexchange",
"id": 19705,
"tags": "ros, installation, macosx, catkin-make-isolated, osx"
} |
Using ROS commands over ssh? | Question:
I am trying to launch a node using a launch file on a remote computer from a local machine but I am getting the error: "error launching on [192.168.1.7-0, uri http://ai2:37824/]: Name or service not known." I have the ROS_MASTER_URI set to the local machine (the machine attempting to launch the file). I also noticed that when I try ssh'ing into the remote computer and running a ROS command on the same line, it does not work.
For example:
ssh ai2@192.168.1.7
rostopic list // works
ssh ai2@192.168.1.7 ls // works
ssh ai2@192.168.1.7 rostopic list // does not work
It seems that something about ssh'ing and executing a ROS command on the same line does not work and I think this problem is related to the problem I am having with the launch file. Has anyone ever dealt with this issue?
Originally posted by pgigioli on ROS Answers with karma: 354 on 2016-03-15
Post score: 1
Original comments
Comment by alee on 2016-03-15:
What exactly are you trying to do? Are you trying to run rostopic list on your own computer after SSHing? Or are you trying to SSH then run rostopic list on the computer you SSHed into?
Comment by pgigioli on 2016-03-15:
I am trying to run rostopic list on the same line as ssh. For whatever reason, if I ssh into the remote and then run rostopic list separately, there are no errors. But if I ssh and rostopic list on the same line, I get "bash: rostopic: command not found"
Answer:
When you are attempting to run ssh ai2@192.168.1.7 rostopic list you are attempting to run it on a non-interactive shell.
I believe the issue you are having is described here: http://stackoverflow.com/questions/940533/how-do-i-set-path-such-that-ssh-userhost-command-works
From the answer there, make sure your source /path/to/setup.bash is before the following lines in your .bashrc:
# If not running interactively, don't do anything
[ -z "$PS1" ] && return
Originally posted by John Hoare with karma: 765 on 2016-03-15
This answer was ACCEPTED on the original site
Post score: 6
Original comments
Comment by pgigioli on 2016-03-16:
Thanks! This is exactly what I needed. | {
"domain": "robotics.stackexchange",
"id": 24123,
"tags": "ros, ros-master-uri, ssh, roslauch"
} |
Solar System model | Question: I have a model that works, as far as I know, but it's so messy! I am very new to Java, so I'd really appreciate some help tidying up. In particular, a lot of my constructors are empty which is probably not good, and I'm not sure if I've made the right choices in terms of public, static, void etc for my methods. It probably doesn't follow best practices.
SolarSim (Main):
import java.util.Scanner;
import java.lang.Math;
import java.util.Arrays;
import java.io.*;
public class SolarSim{
private static final double earthMass = 5.9726*Math.pow(10,24);
private static final double earthRadius = 6371000;
private static final double sunMass = 1.9885*Math.pow(10,30);
private static final double sunRadius = 696342000;
private static final double mercuryMass = 3.301*Math.pow(10,23);
private static final double mercuryRadius = 2.44*Math.pow(10,6);
private static final PhysicsVector zero = new PhysicsVector(0,0);
private static final PhysicsVector zero1 = new PhysicsVector(0,0);
private static final PhysicsVector zero2 = new PhysicsVector(0,0);
private static final PhysicsVector zeroa = new PhysicsVector(0,0);
private static final PhysicsVector zerob = new PhysicsVector(0,0);
private static final PhysicsVector zeroc = new PhysicsVector(0,0);
public static PhysicsVector[] copyArray(PhysicsVector[] a) {
int length = a.length;
PhysicsVector[] copy = new PhysicsVector[length];
System.arraycopy(a, 0, copy, 0, length);
return copy;
}
public static double sumArray(double[] p){
double sum = 0;
for(int z= 0; z < p.length; z++){
sum += p[z];
}
return sum;
}
public static PhysicsVector[] add(PhysicsVector[] sum, PhysicsVector a){
for(int c = 0; c<sum.length; c++){
sum[c].increaseBy(a);
}
return sum;
}
public static PhysicsVector[] subtract(PhysicsVector[] diff, PhysicsVector g){
for (int ab=0; ab<diff.length;ab++){
diff[ab].decreaseBy(g);
}
return diff;
}
public static void main(String[] args) throws IOException{
java.io.File file = new java.io.File("output.txt" );
java.io.PrintWriter n = new PrintWriter(file);
//Initialise variables here
PhysicsVector earthInitialPos = new PhysicsVector();
PhysicsVector earthInitialV = new PhysicsVector();
PhysicsVector sunInitialV = new PhysicsVector();
PhysicsVector sunInitialPos = new PhysicsVector();
PhysicsVector mercuryInitialPos = new PhysicsVector();
PhysicsVector mercuryInitialV=new PhysicsVector();
Scanner scanner = new Scanner(System.in);
System.out.println("Please enter the size of the time step:");
double timeStep = scanner.nextDouble();
//SET PLANETS' INITIAL POSITIONS
earthInitialPos.setVector(1.4960*Math.pow(10,11),0);
earthInitialV.setVector(0,29786.24);
sunInitialPos.setVector(0,0);
sunInitialV.setVector(0,0);
mercuryInitialPos.setVector(5.791*Math.pow(10,10),0);
mercuryInitialV.setVector(0,47873.5);
//CREATE GRAVFIELD OBJECTS
GravField sunGravField = new GravField(sunMass, sunRadius, sunInitialPos);
GravField earthGravField = new GravField(earthMass, earthRadius, earthInitialPos);
GravField mercuryGravField = new GravField(mercuryMass, mercuryRadius, mercuryInitialPos);
//CREATE PARTICLE OBJECTS
Particle earth = new Particle(earthMass, earthInitialPos, earthInitialV);
Particle sun = new Particle(sunMass, sunInitialPos, sunInitialV);
Particle mercury = new Particle(mercuryMass, mercuryInitialPos, mercuryInitialV);
double time = 0;
double finalTime = 31557600; //One earth year(seconds)
PhysicsVector newSunGrav = new PhysicsVector();
PhysicsVector newEarthGrav = new PhysicsVector();
PhysicsVector newMoonGrav = new PhysicsVector();
GravField[] gravityObject = {earthGravField, sunGravField, mercuryGravField};
PhysicsVector[] position = {earthInitialPos, sunInitialPos, mercuryInitialPos};
PhysicsVector[] velocity = {earthInitialV, sunInitialV, mercuryInitialV};
PhysicsVector[] gravField = {zero, zero1, zero2};
double[] planetMass = {earthMass, sunMass, mercuryMass};
Particle[] planets = {earth, sun, mercury};
//Calculate the centre of mass and subtract position from positions of planets, so c.o.m is at origin
PhysicsVector centreOfMass = new PhysicsVector();
centreOfMass = sun.centreOfMass(planetMass, position);
position = SolarSim.subtract(position, centreOfMass);
System.out.println(Arrays.toString(position));
centreOfMass.print();
//Calculate centre of mass velocity and subtract from planet velocities
PhysicsVector centreOfMassVelocity = new PhysicsVector();
centreOfMassVelocity = sun.cOMVel(planetMass, velocity);
centreOfMassVelocity.print();
velocity = SolarSim.subtract(velocity, centreOfMassVelocity);
//Calculate fields of planets
for(int ac=0; ac<gravityObject.length; ac++){
for(int ad=0;ad<gravityObject.length;ad++){
if(ac!=ad){
gravField[ac].increaseBy(gravityObject[ac].aDueToGravity(planetMass[ad], position[ad], position[ac]));
}
else{
//do nothing
}
}
}
PhysicsVector[] newP = new PhysicsVector[position.length];
PhysicsVector[] newGrav = {zeroa,zerob,zeroc};
PhysicsVector[] newVel = new PhysicsVector[velocity.length];
do{
PhysicsVector[] y = new PhysicsVector[gravField.length];
y=copyArray(gravField);
for(int i=0; i<planets.length;i++){
newP[i] = planets[i].updatePosition(position[i], velocity[i], timeStep, gravField[i]);
}
for(int j=0; j<gravityObject.length; j++){
for(int l=0;l<gravityObject.length;l++){
if(j!=l){
newGrav[j].increaseBy(gravityObject[j].aDueToGravity(planetMass[l], newP[l], newP[j]));
}
else{
//do nothing
}
}
}
for(int k=0; k<planets.length; k++){
newVel[k] = planets[k].updateVelocity(velocity[k], timeStep, y[k], newGrav[k]);
}
//Calculate centre of mass velocity and subtract from planet velocities
centreOfMassVelocity = earth.cOMVel(planetMass, newVel);
for (int ab=0; ab<newVel.length;ab++){
newVel[ab].decreaseBy(centreOfMassVelocity);
}
newVel = SolarSim.subtract(newVel, centreOfMassVelocity);
gravField = copyArray(newGrav);
velocity = newVel;
position = newP;
time+=timeStep;
double x = newP[0].getX();
double ap = newP[0].getY();
n.println(x+" "+ap);
}while (time<=1000*finalTime);
System.out.println(Arrays.toString(newP));
n.close();
}
}
Particle:
import java.lang.Math;
//Class that creates a particle object with mass, initial velocity and initial position
public class Particle{
private double mass;
PhysicsVector initialPosition = new PhysicsVector();
PhysicsVector initialVelocity = new PhysicsVector();
PhysicsVector centreOfMass = new PhysicsVector(0,0);
PhysicsVector cOMV=new PhysicsVector(0,0);
//Default constructor
public Particle(){
mass = 1;
initialPosition.setVector(0,0);
initialVelocity.setVector(1,1);
}
//Constructor
public Particle(double mass, PhysicsVector x, PhysicsVector y){
}
//Make it static or not?
public PhysicsVector updatePosition(PhysicsVector initialPosition, PhysicsVector initialVelocity, double timeStep, PhysicsVector aDueToGravity){
PhysicsVector x = new PhysicsVector(initialVelocity);
x.scale(timeStep);
PhysicsVector z = new PhysicsVector(aDueToGravity);
z.scale(0.5*timeStep*timeStep);
initialPosition.increaseBy(x);
initialPosition.increaseBy(z);
return initialPosition;
}
public PhysicsVector updateVelocity(PhysicsVector initialVelocity, double timeStep, PhysicsVector a, PhysicsVector newA){
PhysicsVector z = new PhysicsVector(newA);
PhysicsVector x = new PhysicsVector(a);
z.increaseBy(x);
z.scale(0.5*timeStep);
initialVelocity.increaseBy(z);
return initialVelocity;
}
public PhysicsVector centreOfMass(double[] mass, PhysicsVector[] positions){
//Set origin at centre of sun, so that sunMass*distance = 0
double sum = SolarSim.sumArray(mass);
for(int i=0;i<positions.length;i++){
centreOfMass.increaseBy(positions[i].scale(mass[i],positions[i]));
}
centreOfMass.scale(1/sum);
return centreOfMass;
}
public PhysicsVector cOMVel(double[] mass, PhysicsVector[] velocity){
double total = SolarSim.sumArray(mass);
for(int ae=0;ae<velocity.length;ae++){
cOMV.increaseBy(velocity[ae].scale(mass[ae],velocity[ae]));
}
cOMV.scale(1/total);
return cOMV;
}
}
GravField:
import java.lang.Math;
import java.io.*;
//Class to create a gravity object.
public class GravField{
public static final double G = 6.67408*Math.pow(10,-11);
private double planetMass;
private double planetRadius;
PhysicsVector gravityAcceleration = new PhysicsVector();
/**
*Default constructor that creates a GravField object with the mass and radius of the earth,
*acting on a projectile starting at x=0, y=0, where the x and y axes are on the surface of the planet
*/
public GravField(){
}
/**
*Constructor that creates a GravField object
*@param planetMass Mass of the planet whose field is to be calculated
*@param planetRadius Radius of the planet
*/
public GravField(double mass, double radius, PhysicsVector initialPos){
//can't think of anything to do here
}
//Calculates the acceleration due to the gravitational field of the object
public PhysicsVector aDueToGravity(double planetMass, PhysicsVector sourcePos, PhysicsVector initialPosition){
PhysicsVector a = new PhysicsVector(sourcePos);
PhysicsVector b = new PhysicsVector(initialPosition);
b.decreaseBy(sourcePos);
double distance = b.magnitude();
b.scale(-1*G*planetMass/(distance*distance*distance));
return b;
}
}
Answer: After reading the program, I think it is pretty good for a start, but there is some room for further improvement.
Small improvements
Imports
Consider importing specific classes, instead of using the wildcard import, so that your namespace is not cluttered up. (Although there are also benefits in importing the whole package, see this SO question).
Double constants
Instead of using Math.pow, like this:
5.9726*Math.pow(10,24)
Numbers can be written in scientific notation, as follows:
5.9726e24
Empty else blocks
In general, it is best to avoid empty else blocks, like this one:
if(j!=l){
newGrav[j].increaseBy(gravityObject[j].aDueToGravity(planetMass[l], newP[l], newP[j]));
}
else{
//do nothing
}
Instead, the else-block can just be omitted:
if(j!=l){
newGrav[j].increaseBy(gravityObject[j].aDueToGravity(planetMass[l], newP[l], newP[j]));
}
JavaDoc
The parameters in the documentation should match the parameters of the method.
*@param planetMass Mass of the planet whose field is to be calculated
*@param planetRadius Radius of the planet
*/
public GravField(double mass, double radius, PhysicsVector initialPos){
So, in the above case, you should write "mass" and "radius" into the javadoc, instead of planetMass and planetRadius. Also, it would be nice to describe what "initialPos" means.
Visibility (public, protected, default, private) and staticness
Particle.initialPosition, Particle.initialVelocity, Particle.centreOfMass and Particle.cOMV are used only within Particle, thus they can be made private, instead of default (package-private) access.
The methods updatePosition, updateVelocity, centreOfMass and cOMVel cannot be made static, since they refer to the above mentioned member variables. (The other possibility would be to make those variables static as well, though I'm not sure if that would not break the logic of the program.)
Constructors
Empty body for constructors
As you wrote yourself, this is not a good practice :) In short, those constructors do almost nothing.
By calling public Particle(double mass, PhysicsVector x, PhysicsVector y), Particle.mass, Particle.initialPosition and Particle.initialVelocity are not set (they have the same value they received during initialisation). Probably, you should do something like this in the constructor:
public Particle(double mass, PhysicsVector x, PhysicsVector y){
this.mass = mass;
initialPosition = x;
initialVelocity = y;
}
It is not clear to me, though, what x and y mean in this context, so the actual code needed in your application could be different. Also note that you need to prefix "mass" with "this" if you are accessing the member variable, to differentiate it from the constructor parameter with the same name.
The constructor public GravField(double mass, double radius, PhysicsVector initialPos) also does not do anything more than the default constructor, as it is now. Probably, it should look similar as follows:
public GravField(double mass, double radius, PhysicsVector initialPos){
planetMass = mass;
planetRadius = radius;
// ? = initialPos; // I have no idea for what initialPos should be used.
}
Note that currently both constructors are being used, but they end up constructing three similar objects in each case (i.e., all with the default mass, radius, etc., instead of the parameters that you provide to them).
Default constructor
If you are not planning to use the default constructor (i.e. the one without any parameters), you do not need to provide one for the class. I.e., the following constructors are not really needed and can be removed:
public Particle(){
mass = 1;
initialPosition.setVector(0,0);
initialVelocity.setVector(1,1);
}
public GravField(){
}
Separation of the code
Utility methods
I would suggest moving the methods SolarSim.copyArray, SolarSim.sumArray, SolarSim.add and SolarSim.subtract to a separate class (e.g. a new class called PhysicsVectorUtils, or even PhysicsVector itself, if you have access to its code), because they do not really belong to the logic of SolarSim. Also, in this way, Particle does not need to depend on SolarSim in order to call sumArray.
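As a rough sketch of that extraction (the class name PhysicsVectorUtils is only the suggestion from above; only sumArray is shown, since the PhysicsVector-based helpers would need access to that class):

```java
// Hypothetical utility class; copyArray, add and subtract would move here too.
public final class PhysicsVectorUtils {

    // Private constructor: a utility class is never instantiated.
    private PhysicsVectorUtils() {
    }

    // Same behaviour as SolarSim.sumArray, with an enhanced for-loop.
    public static double sumArray(double[] p) {
        double sum = 0;
        for (double value : p) {
            sum += value;
        }
        return sum;
    }
}
```

Particle.centreOfMass would then call PhysicsVectorUtils.sumArray(mass) instead of SolarSim.sumArray(mass).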
main method
This method is very long and difficult to follow. I suggest splitting up the steps into smaller methods. Besides, the local variables of this method could instead be instance variables of SolarSim. You should end up with something like this:
public static void main(String[] args) throws IOException{
SolarSim solarSim = new SolarSim();
solarSim.openFile();
solarSim.initVariables();
solarSim.readTimeStep();
solarSim.setInitialPositions();
solarSim.createGravFieldObjects();
solarSim.createParticleObjects();
solarSim.initializeTimeAndGravity();
solarSim.calculateCentreOfMass();
solarSim.calculateCentreOfMassVelocity();
solarSim.calculateFields();
do{
solarSim.updateVelocitiesAndPositions();
}while (solarSim.hasMoreSteps());
solarSim.printResult();
solarSim.closeFile();
}
You could take the first part further, and call the methods not dependent on user input (file opening, initialisation of variables, fields, velocities etc.) from the constructor of SolarSim. (Beware, I'm not suggesting to put all the initialisation code into the constructor, because in this way the constructor would become very long. Just call those methods from within the constructor.)
Ideas for further improvements
The output file could be an argument of the program. I.e., you would invoke the program like this:
java SolarSim /path/to/my/customoutput.txt
The path to the file can be read from args[0] in this case (i.e., from the args parameter of main).
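A hedged sketch of that idea (the fallback to the old hardcoded "output.txt" when no argument is given is my assumption, and the class name is hypothetical):

```java
import java.io.File;
import java.io.IOException;
import java.io.PrintWriter;

public class SolarSimArgsDemo {

    // Output path from the command line, or the old default name.
    static String outputPath(String[] args) {
        return (args.length > 0) ? args[0] : "output.txt";
    }

    public static void main(String[] args) throws IOException {
        try (PrintWriter out = new PrintWriter(new File(outputPath(args)))) {
            out.println("0.0 0.0"); // the simulation's x/y samples would go here
        }
    }
}
```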
Also the number of steps (which is currently constant 1000), could be a parameter of the simulation. | {
"domain": "codereview.stackexchange",
"id": 17191,
"tags": "java, beginner, object-oriented, simulation, physics"
} |
How to set arg to a node from launch file? | Question:
Hi everybody,
I can't set the args of my gscam node from the launch file (this node is different from the original, because I modified it).
This is the most important piece of source of my modified gscam node:
int main(int argc, char** argv) {
if (argc != 2)
{
ROS_WARN("WARNING: you should specify camera info properties!");
}else{
ROS_INFO("INFO: you have set camera properties file: %s\n",argv[1]);
camera_info_file=g_string_new(argv[1]);
}
This is a piece of my launch file:
<node pkg="gscam" type="gscam"
name="gscam"
cwd="node"
args="/home/aldo/Projects/4thRC/BergamoComponents/services_caller/camera_parameters_to_set.txt">
<env name="GSCAM_CONFIG"
value="multifilesrc location="/home/aldo/Documents/VISIONE/visual_odometry/libviso2/2010_03_09_drive_0019/I1_%06d.png" index=0 num-buffers=-1 caps="image/png,framerate=\(fraction\)19/10\" ! videorate framerate=19/10 ! pngdec ! ffmpegcolorspace"/>
</node>
This is the message I get when I run "roslaunch my_file.launch --screen":
[ WARN] [1350399215.805837395]: WARNING: you should specify camera info properties!
Note: If I run my gscam node without launch file it works properly (the argument is recognized):
rosrun gscam gscam /home/aldo/Projects/4thRC/BergamoComponents/services_caller/camera_parameters_to_set.txt
[ INFO] [1350399865.086125707]: INFO: you have set camera properties file: /home/aldo/Projects/4thRC/BergamoComponents/services_caller/camera_parameters_to_set.txt
How to fix it?
Originally posted by aldo85ita on ROS Answers with karma: 252 on 2012-10-16
Post score: 2
Original comments
Comment by Ivan Dryanovski on 2012-10-16:
Have you tried printing out how many and what arguments are passed? It might be that you're receiving more than 2 arguments from the launch file.
Answer:
Your argc condition is wrong. You don't account for the additional parameters that roslaunch gives the node.
Either change your condition or call ros::init before your code, if possible. That should filter out ROS arguments.
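For illustration only, a rough sketch of why the count is off: roslaunch appends internal arguments such as __name:=gscam and __log:=/some/path, so argc is larger than expected. ros::init(argc, argv, ...) strips these ROS-specific arguments; the hypothetical helper below mimics that filtering for a plain argv (every name here is an assumption, not gscam code):

```cpp
#include <cassert>
#include <string>
#include <vector>

// Keep only arguments that are not ROS remappings. All roslaunch-injected
// arguments (private names like "__name:=..." and remaps like "a:=b")
// contain ":=", so that substring is used as the filter.
std::vector<std::string> filterRosArgs(int argc, char** argv) {
    std::vector<std::string> plain;
    for (int i = 1; i < argc; ++i) {
        std::string arg(argv[i]);
        if (arg.find(":=") == std::string::npos)
            plain.push_back(arg);
    }
    return plain;
}
```

After such filtering (or after calling ros::init first), checking plain.size() == 1 is the robust version of the argc != 2 test.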
Originally posted by dornhege with karma: 31395 on 2012-10-16
This answer was ACCEPTED on the original site
Post score: 3 | {
"domain": "robotics.stackexchange",
"id": 11392,
"tags": "roslaunch, gscam"
} |
Build a combinatorial matrix from two vectors | Question: I want to define a matrix of values with a very simple pattern. I've achieved it with the code below but it's ugly - very difficult to read (I find) and surely not as optimal, in terms of performance, as it could be. The former is really what I'm trying to address here. I feel this can be done much more elegantly and would love some guidance from the community.
Here is an example of what I am hoping to achieve, from x and y I want to get points:
y =
10 30 50 70
x =
10 30 50
points =
10 10
30 10
50 10
10 30
30 30
50 30
10 50
30 50
50 50
10 70
30 70
50 70
x and y are, of course, not always those exact vectors. They are, however, very simply created. Something like so:
scale = 20;
max_x = 85;
max_y = 62;
y_count = floor(max_x / scale);
x_count = floor(max_y / scale);
y = ((1:y_count) * scale) - scale / 2;
x = ((1:x_count) * scale) - scale / 2;
Here's my brute force approach:
y = repmat(y, x_count, 1);
y = reshape(y, 1, size(y, 1) * size(y, 2));
y = y';
x = repmat(x, y_count, 1);
x = x';
x = reshape(x, 1, size(x, 1) * size(x, 2));
x = x';
points = [x y];
It works, but it isn't very elegant.
Answer: It's called Cartesian product and you can do that easily:
Here's one way:
y = [10 30 50 70]
x = [10 30 50]
[X,Y] = meshgrid(y,x);
result = [Y(:) X(:)];
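If you ever need the same construction outside MATLAB, here is a rough NumPy equivalent of the two lines above (np.meshgrid with its default indexing='xy' mirrors MATLAB's meshgrid, and order='F' mimics the column-major (:) linearization):

```python
import numpy as np

y = np.array([10, 30, 50, 70])
x = np.array([10, 30, 50])

X, Y = np.meshgrid(y, x)  # same shapes/contents as MATLAB's meshgrid(y, x)

# [Y(:) X(:)] -- column-major flattening, hence order="F"
points = np.column_stack([Y.ravel(order="F"), X.ravel(order="F")])
print(points[:3])  # first rows: [10 10], [30 10], [50 10]
```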
Result:
10 10
30 10
50 10
10 30
30 30
50 30
10 50
30 50
50 50
10 70
30 70
50 70 | {
"domain": "codereview.stackexchange",
"id": 3122,
"tags": "matrix, matlab"
} |
Does move_group.plan(sampleJointTargetPose) implement MoveJ by default? | Question:
I am using a real UR10 robot. I am not using Universal_Robots_ROS_Driver for robot control. I am using the moveit move_group to do collision checking for a pair of start_state and goal_state in joint-space. I am simply using the move_group.plan() method to figure out if a collision-free path exists between the two states. It works. But we control the robot using MoveJ commands, so I am trying to figure out whether the collision-free path generated by move_group.plan() and the paths generated in general by the MoveJ command are the same.
If not, what is the correct way of doing this?
if yes, is there a better way to do this?
Originally posted by srujan on ROS Answers with karma: 32 on 2020-12-07
Post score: 1
Answer:
move_group.plan() and the MoveJ command are not the same. MoveJ moves directly to the target in joint space (PTP motion), while move_group.plan() submits a planning request to whichever planning plugin is connected to MoveIt. The default planning plugin is sampling-based (RRTConnect in OMPL), but when there are no obstacles, it returns a direct motion in joint space just like MoveJ, which is probably the source of your confusion. If there are obstacles, the result of move_group.plan() can be very different from MoveJ.
If you want to plan motions like MoveJ (linear in joint space/configuration space) and MoveL (linear in cartesian space), you can use the pilz_industrial_motion_planner plugin which was recently merged into MoveIt. It has not been backported to melodic yet, so you would need to build MoveIt from source. I suggest using an underlay workspace (more details) with MoveIt inside to alleviate build time.
If you send a PTP motion planning request to the pilz_industrial_motion_planner plugin and the plan succeeds, then moving your robot with the MoveJ command to the same target should be safe (if you represented your whole scene correctly and did not make mistakes).
In theory you could also create your own RobotTrajectory by linearly interpolating the joint poses between your start and goal and then check if the path is valid, but I would use the industrial motion planning plugin as described above, to avoid duplicating work. Using PTP/LIN/CIRC motions in MoveIt should soon become a lot easier.
Originally posted by fvd with karma: 2180 on 2020-12-09
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by srujan on 2020-12-14:
Thanks for your response @fvd. It was indeed useful. However, I am unable to use the pilz_industrial_motion_planner plugin. Here's what I did:
created a new workspace with moveit melodic-devel build from source. build successful
created a new workspace (pilz_ws) and underlayed melodic_source_moveit1. I installed pilz_common and pilz_industrial_motion packages, build successful. tried installing pilz_robots . build unsuccessful, hence discarded it for time being
Copied pilz_industrial_motion_planner_planning_pipeline.launch.xml from the prbt_moveit_config/launch to my aris_metra_scan_config/launch.
Created cartesian_limits.yaml file in the aris_metra_scan_config/config
In aris_metra_scan_config/demo.launch, I changed <arg name="pipeline" default="ompl" /> to <arg name="pipeline" default="pilz_industrial_motion_planner" />
when I launch demo.launch, I get MoveGroup running was unable to load pilz_industrial_motion_planner::CommandPlanner. Help
Comment by fvd on 2020-12-14:
Since comments are not meant for new questions, please accept this answer and post a new question with your new problem.
Try following this tutorial: https://ros-planning.github.io/moveit_tutorials/doc/pilz_industrial_motion_planner/pilz_industrial_motion_planner.html
And note that you might have to build from the master branch at the moment.
Comment by srujan on 2020-12-14:
That tutorial didn't help. However my issue is resolved now.
I copied pilz_command_planner_planning_pipeline.launch.xml from the prbt_moveit_config/launch to my aris_metra_scan_config/launch
and made the following change in aris_metra_scan_config/demo.launch
<arg name="pipeline" default="ompl" /> to <arg name="pipeline" default="pilz_command_planner" />
This solved the issue.
@fvd, do I still add a new question for this?
Comment by fvd on 2020-12-14:
If it's solved, then no (although if the tutorial is insufficient you could try fixing it). Just remember to use comments for clarifications for the future and make a new post for new questions, so that others can find it more easily.
Comment by srujan on 2020-12-14:
I agree and will surely make a new post for new questions in future. Thank you! | {
"domain": "robotics.stackexchange",
"id": 35845,
"tags": "ros, moveit, universal-robots, move-group-interface, ur10"
} |
Why does the electric field have non-zero curl for magnetic monopoles? | Question: In Griffiths’ Introduction to Electrodynamics, he asks what changes would need to be made to Maxwell's equations to accommodate the existence of magnetic monopoles. Now, it is clear to me that the Gauss’s law and Ampere’s law must be left untouched. I also understand why the divergence of $\vec{B}$ should be $\alpha_{0}\rho_{m}$, where $\alpha_{0}$ is some constant to be determined experimentally and $\rho_{m}$ is the density of monopoles.
What I don’t understand is why the curl of $\vec{E}$ must be $\beta_{0}\vec{J}_{m}$, where $\beta_{0}$ is some constant.
The idea that moving electric monopoles produce a magnetic field is an experimental fact. Why must we assume that the same holds for magnetic monopoles as well? Is the symmetry of Maxwell's equations enough of a motivation for us to be sure that the electric field will now have a non-zero curl in the statics regime?
I get that such symmetries often motivate the discovery of such properties, but does it guarantee that the electric field curls?
Answer: With the standard caveat that all theory is predicated on experimental validation, I would strongly expect magnetic charge to obey a continuity equation. If you make the modifications you suggest without changing Faraday's law to include magnetic current, then you would find that $\frac{\partial \rho_m}{\partial t}=0$, implying that the total magnetic charge in any given region could never change.
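To spell out the continuity argument (a derivation sketch using the question's constants; the sign convention for the full Faraday law is an assumption): take the divergence of $\nabla\times\vec{E} = -\partial\vec{B}/\partial t - \beta_0\vec{J}_m$ and use $\nabla\cdot\vec{B}=\alpha_0\rho_m$:
$$0 = \nabla\cdot(\nabla\times\vec{E}) = -\frac{\partial}{\partial t}\left(\nabla\cdot\vec{B}\right) - \beta_0\,\nabla\cdot\vec{J}_m = -\alpha_0\frac{\partial \rho_m}{\partial t} - \beta_0\,\nabla\cdot\vec{J}_m$$
so $\alpha_0\,\partial\rho_m/\partial t + \beta_0\,\nabla\cdot\vec{J}_m = 0$, a continuity equation for magnetic charge. Without the $\beta_0\vec{J}_m$ term, the same manipulation yields $\partial\rho_m/\partial t = 0$.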
Note that while Ampere's law is an experimental observation, the absence of the electrical current term similarly implies via Gauss' law for electric charge that the electric charge density of the universe is static, which would be intuitively upsetting were it not so easily demonstrated to be false. | {
"domain": "physics.stackexchange",
"id": 71834,
"tags": "electromagnetism, electrostatics, maxwell-equations, magnetostatics"
} |
What is the difference between "maximally entangled" and "entangled" states? | Question: When we talk about Bell states, we say that these states are maximally entangled. So I just wanted to understand: is there any difference between "entangled" and "maximally entangled"?
Answer: A maximally entangled state is a state that maximises some entanglement measure.
In the case of bipartite states, this generally means a state that maximises the entanglement entropy, that is, the von Neumann entropy of the reduced states. Or if you prefer, the Shannon entropy of the vector of eigenvalues of $\operatorname{Tr}_B(\rho)$.
So, for a bipartite two-qubit state, a state $\rho$ is maximally entangled if $\rho_A\equiv\operatorname{Tr}_B(\rho)$ has eigenvalues $(\frac12,\frac12)$. For a pure bipartite state $|\psi\rangle$, this is equivalent to asking its Schmidt coefficients to be $(\frac1{\sqrt2},\frac1{\sqrt2})$, meaning that the state must have the form
$$|\psi\rangle = \frac1{\sqrt2}(|u_1,v_1\rangle+|u_2,v_2\rangle)$$
for any set of states $|u_i\rangle,|v_i\rangle$ such that $\langle u_1,u_2\rangle=\langle v_1,v_2\rangle=0$.
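A small numerical sketch of this criterion in plain NumPy (my own illustration, not from the answer): for the Bell state $|\Phi^+\rangle$, the reduced state should come out as $I/2$, i.e. eigenvalues $(\frac12,\frac12)$ and entanglement entropy 1.

```python
import numpy as np

# Bell state |Phi+> = (|00> + |11>)/sqrt(2) as a vector in C^4
psi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
rho = np.outer(psi, psi.conj())  # full 4x4 density matrix |psi><psi|

# Partial trace over B: reshape indices to (a, b, a', b') and trace b = b'
rho_A = rho.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)

# Entanglement entropy = Shannon entropy of the eigenvalues of rho_A
eigs = np.linalg.eigvalsh(rho_A)
entropy = -sum(l * np.log2(l) for l in eigs if l > 1e-12)
```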
For multipartite states, things get considerably more complicated, because there isn't a unique "canonical" way to measure entanglement, and thus different states can be "maximally entangled" according to different measures. See e.g. (Plenio, Virmani 2015) for more detailed discussion on this. Note in particular the discussion in the last paragraph in the second column of page 4 (in the arxiv version), which I'll report here for completeness:
Now we have a notion of which states are entangled and are also able, in some cases, to assert that one state is more entangled than another. This naturally raises the question whether there is a maximally entangled state, i.e. one that is more entangled than all others. Indeed, at least in two-party systems consisting of two fixed d-dimensional sub-systems (sometimes called qudits), such states exist. It turns out that any pure state that is local unitarily equivalent to
$$|\psi_d^+\rangle = \frac{|0,0\rangle+\cdots + |d-1,d-1\rangle}{\sqrt d}$$
is maximally entangled. This is well justified, because as we shall see in the next subsection, any pure or mixed state of two d-dimensional systems can be prepared from such states with certainty using only LOCC operations. We shall later also see that the non-existence of an equivalent statement in multi-particle systems is one of the reasons for the difficulty in establishing a theory of multi-particle entanglement.
"domain": "quantumcomputing.stackexchange",
"id": 3833,
"tags": "quantum-state, entanglement, textbook-and-exercises, terminology-and-notation"
} |
Will a body necessarily possess zero kinetic energy at infinity? | Question: I have been studying gravitation for a while, and most of the books (including Fundamentals of Physics), while defining escape velocity, assume that the kinetic energy of a body at infinity is zero (the potential energy is zero too, but that's not my concern here).
Now the question is why we take the K.E. to be zero at infinity. For the gravitational field of the Earth, the escape speed is defined as 11.2 km/s. What will happen if I throw a body with a speed of 90 km/s? Will it still possess zero kinetic energy at infinity?
Hope you all got my question; if you have any query, leave a comment.
Hoping for a simplified answer.
Answer: Escape velocity is the minimum velocity needed to escape. The minimum velocity results in zero kinetic energy at infinity. But velocities greater than the escape velocity are possible, of course, and these result in a velocity at infinity that is greater than zero. | {
"domain": "physics.stackexchange",
"id": 34989,
"tags": "newtonian-mechanics, newtonian-gravity, energy-conservation, potential-energy, projectile"
} |
Can antibiotic-resistant bacteria compete with normal ones in an antibiotic-free environment? | Question: The question is based on the intuition that antibiotic resistance can't come without a cost: the mutation will probably make the bacteria less tenacious. Is there any research on how AR bacteria compete with normal ones in an antibiotic-free environment? Because if they generally lose to normal bacteria, then AR bacteria are not a big threat and cannot spread too much.
Answer: Antibiotic resistance adds to the growth cost. In an antibiotic-free medium, the antibiotic-susceptible strains will outgrow the resistant ones. See this
"domain": "biology.stackexchange",
"id": 1684,
"tags": "bacteriology"
} |
What is "Symmetry of Infinity" in electricity and magnetism? | Question: I have this problem from my E&M textbook:
Two infinitely long wires running parallel to the x axis carry uniform charge densities $+\lambda$ and $-\lambda$ (see photo). Find the potential at any point $(x,y,z)$, using the origin as your reference.
The solution to this uses a random point and solves the problem there:
It's stated that "due to the symmetry of infinity, we need only consider the z-y-plane. We plot an arbitrarily located point, without symmetry."
Once here I could do the math of this just fine, but I don't understand what "due to the symmetry of infinity" means. I tried to look it up online (including stack exchange) and all I could find were journals that were related to this. I could not access them, and even if I could I probably wouldn't understand what was going on anyway.
What is "the symmetry of infinity?" And how is it related to this problem?
Answer: To my knowledge, this is not a standard technical term, but merely a hand-wavy and brief way of pointing out that the charge distribution is independent of x, and so the potential must also be independent of x. "Infinity" is evocative of this fact because if the wires were not infinite in length, then the charge distribution (and the potential) would be dependent on x.
"domain": "physics.stackexchange",
"id": 46860,
"tags": "electromagnetism, charge"
} |
What is swig-wx? | Question:
Hi,
I know what swig is, but what is swig-wx exactly, and why do I need it when I want to compile ROS?
swig-wx version 1.3.29 (2006)
swig version 2.0.7 (2012)
Originally posted by mano on ROS Answers with karma: 141 on 2012-07-01
Post score: 1
Answer:
Thanks for your quick answer!
I found this on the swig-wx site (https://github.com/wg-debs/swig-wx/blob/master/stack.xml):
This is a special version of swig needed by wxPython, which is SWIG version 1.3.29 plus some custom patches (explained at http://wxpython.org/builddoc.php). The patched latest version of wxPython SWIG is available at http://wxpython.wxcommunity.com/tools
... under the Install graphical library dependencies section of the install guide).
Is there a general installation guide which lists the dependencies?
Originally posted by mano with karma: 141 on 2012-07-02
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by Asomerville on 2012-07-02:
It's generally considered to be bad form to ask additional questions in posting an answer. If you did not intend for this to be posted as an answer to your own question, it would be better to move it into your original question and delete this answer post. | {
"domain": "robotics.stackexchange",
"id": 10006,
"tags": "ros"
} |
Periodic signal checking when using $\Sigma$ | Question: Is the following signal periodic?
$$ \sum_{κ=-\infty}^{\infty}\left[\mathrm{rect}\left(\frac{t+2κ}{10}\right)\right]+\cos\left(\frac{π}{75}t\right) $$
where rect is the rectangular signal
Answer: Let's define two new signals:
$$x_1(t)=\cos\left(\frac{π}{75}t\right)$$
$$x_2(t)=\sum_{κ=-\infty}^{\infty}\left[\mathrm{rect}\left(\frac{t+2κ}{10}\right)\right]$$
It's easy to see that the signal in your question can be calculated as
$$x(t)=x_1(t)+x_2(t)$$
The period of $x_1(t)$ is $T_1=150$. That can be seen easily: when $t=T_1$, the argument of the cosine becomes $2\pi$, so the function "resets".
Now let's see what happens with $x_2(t)$. Each term of the summation is a rectangular pulse of width $10$ centered at $t=-2\kappa$. As the magnitude of the shift factor (in this case, $2$) is less than the width, some pulses will overlap with others when performing the sum.
In fact, for almost every $t$, there will be exactly five rectangular pulses adding up. I say "almost" because there are special cases. When $t\in\mathbb{Z}$ and is odd, there will be only four full pulses adding there, but there will also be two "half-pulses" corresponding to the value the $\mathrm{rect}()$ function takes at the discontinuity, which is $0.5$ (every pulse is discontinuous at exactly two odd integers). Because the values of $\kappa$ run over all the integers, it is valid to redefine $x_2(t)$ so that:
$$x_2(t)=5 \ \forall t$$
You can try that using Octave. I ran this script:
kMin = -500;
kMax = 500;
shiftFactor = 2;
tMax = shiftFactor*kMax + 10/2;
tMin = shiftFactor*kMin - 10/2;
t = tMin:tMax;
x2 = zeros(length(t),1);
for k = kMin:kMax;
for ii = 1 : length(t)
if t(ii) > (-shiftFactor*k - 10/2) && t(ii) < (-shiftFactor*k + 10/2)
x2(ii) = x2(ii) + 1;
elseif t(ii) == (-shiftFactor*k - 10/2) || t(ii) == (-shiftFactor*k + 10/2)
x2(ii) = x2(ii) + 0.5;
end
end
end
plot(t,x2,'r','Linewidth',1.5);
xlim([tMin-1 tMax+1]);
ylim([0 5.5]);
grid minor;
The values kMin and kMax are the limit values $\kappa$ takes. Ideally, they should be infinite, but that's obviously not possible on a computer. shiftFactor is the constant that multiplies $\kappa$ in the summation. You can play a little bit with that number to see how the signal changes. I hardcoded the width of the pulses ($10$). I used the convention that the pulse equals $0.5$ in the discontinuity. If you plot the result, you get something like this:
Note that the ramps appearing by the sides do not happen in the infinite case. They show up here because the pulses that should be adding up for those values of $t$ were cut off when letting $\kappa$ take values between $500$ and $-500$ only.
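As a cross-check of the $x_2(t)=5$ claim, here is an equivalent computation in plain Python (a sketch; the function names are mine, and it uses the same convention of $0.5$ at the discontinuity as the Octave script):

```python
def rect(x):
    # rect(x) = 1 for |x| < 1/2, 0.5 at the discontinuity, 0 otherwise
    if abs(x) < 0.5:
        return 1.0
    if abs(x) == 0.5:
        return 0.5
    return 0.0

def x2(t, k_min=-500, k_max=500):
    # Truncated version of the infinite sum in the question
    return sum(rect((t + 2 * k) / 10) for k in range(k_min, k_max + 1))

# Away from the truncation edges, the sum is constant:
for t in (0.0, 0.3, 7.5, -12.25):
    assert abs(x2(t) - 5.0) < 1e-12
print(x2(0.3))  # -> 5.0
```

The same value comes out at the odd integers too: four full pulses plus two half-pulses still sum to five.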
So $x(t)$ is basically a cosine with an offset. Therefore, the sum of the functions we defined at the beginning is periodic. Even more, the period of $x(t)$ is the same as that of $x_1(t)$. Ergo,
$$T=T_1=150$$ | {
"domain": "dsp.stackexchange",
"id": 5913,
"tags": "periodic"
} |
What is the significance of cysteine in a protein sequence? | Question: What is the importance of cysteine-cysteine in amino acid sequence? What can I infer if I get a high percentage of C from a protein sequence?
Answer:
What can I infer if I get a high percentage of C from a protein
sequence?
A highly stable structure that is likely found in the extra-cellular space.
Cysteine can form a disulphide bond with another cysteine. Cysteine can be found as a lone cysteine, but is often paired with another cysteine in the tertiary structure to form these bonds.
Disulphide bonds play important roles in protein folding and stability (around 60 kcal/mol, compared to roughly 1 to 5 kcal/mol for a hydrogen bond, depending on the environment). Notably though, cysteine disulphide bonds are usually only used in extracellular secreted proteins, as they are unstable in the cytoplasm.
As an example, take the structure 2ksk. Look at how the structure is held together by cysteines that are distant in the sequence. If you see cysteines in a sequence, expect interesting folding!
The cartoon is going from blue to red, whilst cysteines are shown with sticks and the S-S bonds are in yellow.
Side note, they are annotated as SSBOND in the PDB. | {
"domain": "biology.stackexchange",
"id": 9069,
"tags": "biochemistry, sequence-analysis, amino-acids, protein-structure"
} |
How does pressure decrease in an isothermal process when heat is transferred? | Question: Let's suppose there is a gas(ideal) inside a cylinder with a frictionless, massless piston. The gas is under a pressure $P_{1}$ and has volume $V_{1}$ at a temperature $T$. We carry out a isothermal process (I have no idea how to conduct one, but i'm assuming I do). Here's a picture. Heat is transferred into the system.
Here's my question. What I understand is that for the piston to start moving, more force is required to overcome the initial pressure. Therefore, the pressure increases and so does the volume. I know I'm wrong because, according to the state equation $PV=RTN$, pressure should decrease. I hope you could provide me a deeper insight into what is happening at the molecular level to understand this process.
Answer: In the picture you've just drawn, pressure is constant for the entire process. Why? Well, the forces on the piston are $PA$ and $-mg$, and they have to sum to zero for the piston not to accelerate off either up or down. $A$, $m$, and $g$ don't change, so $P$ doesn't change either. Then the only way for V to change is for T to increase. So you haven't drawn an isothermal process.
But let's pretend you did draw an isothermal process. Then $T$ is constant, so either $P$ decreases and $V$ increases or vice-versa. Let's consider what has to happen to increase $V$, decrease $P$, and keep $T$ constant.
First: if we're going to increase $V$, the gas is going to do work on the environment. So, we need to supply some heat $Q$ which is exactly equal to the work done. So we're going to heat this container during this process, and carefully control the heat to keep $T$ constant. Alternatively, we're going to perform this process VERY SLOWLY, and allow the gas time to gain heat from the environment. Second, we need the gas to expand. How do we do that? We have to decrease the external force on the piston. We either take some weight off the top of the piston (decrease $m$ in your picture), or grab the piston and pull up. In general, you're correct in saying that the pressure $P$ alone is not going to make the gas expand without us doing something.
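To make the energy bookkeeping concrete, here is a small numerical sketch of the isothermal case (the numbers are arbitrary illustration values, not from the question): since $T$ is constant, $\Delta U = 0$, so the heat supplied must equal the work done by the gas, $Q = W = nRT\ln(V_2/V_1)$.

```python
import math

# Isothermal expansion of an ideal gas: T constant, so dU = 0 and Q = W.
n = 1.0            # mol
R = 8.314          # J/(mol K)
T = 300.0          # K
V1, V2 = 1.0, 2.0  # m^3, doubling the volume

W = n * R * T * math.log(V2 / V1)  # work done BY the gas on the environment
Q = W                              # heat that must be supplied to keep T constant

# Pressure drops by the same factor the volume grows (PV = nRT, T fixed):
P1 = n * R * T / V1
P2 = n * R * T / V2
assert abs(P1 / P2 - V2 / V1) < 1e-12
print(round(W, 1))  # ≈ 1728.8 J
```

Note the pressure falls only because we let the volume grow (by reducing the external force); the heat input merely keeps $T$ from dropping while the gas does work.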
What I'm trying to get across is that these processes don't happen spontaneously. The ideal gas law has a lot of constants that can vary in a lot of different ways; if you want a specific process to happen (e.g. adiabatic, isothermal, isobaric, etc) you have to do something VERY SPECIFIC to the gas. In this case, you need to simultaneously provide heat AND pull on the the piston to create an isothermal process. You asked why the container would expand if $P$ was decreasing; the answer is, $P$ decreased because YOU did something to expand the container. | {
"domain": "physics.stackexchange",
"id": 98161,
"tags": "thermodynamics, pressure"
} |
Understanding electronic band structure diagrams | Question: Currently I'm trying to understand electronic band structures such as depicted below:
band structure http://ej.iop.org/images/1367-2630/14/3/033045/Full/nj413738f1_online.jpg
And the following questions arose.
Why are there multiple lines on the valence side and on the conduction side? Where are the bands and the gaps between them, starting from the lowest energies (inner electrons) up to the valence and conduction bands? How can I distinguish between them in pictures like the one presented above? I just want to see the connection of that picture with the following:
(source: nau.edu)
Why do these different lines intersect each other at some points? (Don't they?) What does it mean?
Why do we choose a path connecting the points of high symmetry in the 1st Brillouin zone? What's wrong with random directions? Does this path cover all possible energy values of an electron in the crystal? If so, how come?
Thanks in advance!
Answer: Your second figure is a simplification of the first one, usually at the $ \Gamma $ point, but it could be any other as well.
Regarding your questions:
There are multiple lines in the valence and conduction bands because there are several allowed bands, i.e. energy eigenstates. Technically there is even an infinite number of allowed bands, but usually you would only plot the lowest ones, which are actually populated.
From this diagram, it seems that the lowest bandgap is at the L point.
These lines can intersect if there's multiple bands, which happen to have the same energy in a certain point.
The fixed paths in the band diagram (e.g. $ \Gamma $ to M or $ \Gamma $ to L) are just simplifications that let you estimate the material behavior. You could move along any path, but since your carriers usually populate one of the valleys, you're only interested in a small region around a local conduction band minimum or valence band maximum.
"domain": "physics.stackexchange",
"id": 21613,
"tags": "quantum-mechanics, solid-state-physics, semiconductor-physics, electronic-band-theory, density-functional-theory"
} |
ROS Answers SE migration: Jade vs Indigo | Question:
Hi,
I'm very new on ROS.
I use Ubuntu 14.04 LTS (Trusty).
I want to install ROS on my Linux.
Which ROS is the best for my OS..??
Jade, or Indigo, or maybe there is something better then those two..??
Thank you.
Regards,
Seano
Originally posted by Seano on ROS Answers with karma: 3 on 2016-04-15
Post score: 0
Answer:
ROS Indigo was made for Ubuntu 14.04 and is recommended for stability and more community support. You could also wait a bit and install ROS Kinetic on Ubuntu 16.04 in May.
Originally posted by Mehdi. with karma: 3339 on 2016-04-15
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by Seano on 2016-04-15:
Get it..
Thanks for your answer.. :)
Seano | {
"domain": "robotics.stackexchange",
"id": 24384,
"tags": "ubuntu-trusty, ubuntu"
} |
Prove or disprove if $L_{1}$ is Turing-recognizable and $L_{2}$ is co-Turing-recognizable then $L_{1}\cap L_{2}$ is decidable | Question: I thought about these languages:
$$L_{1} = A_{TM} = \big\{ \langle M, w \rangle \mid M \text{ is TM and }M \text{ accepts } w \big\}$$
$$L_{2} = \overline{HALT_{TM}} = \big\{ \langle M, w \rangle \mid M \text{ is TM and }M \text{ doesn't halt on input } w \big\}$$
Their intersection:
$$L_{1}\cap L_{2} = \big\{ \langle M, w \rangle \mid M \text{ is TM and }M \text{ accepts and doesn't halt on input } w \big\}$$
If I assume that the intersection is decidable by TM $T$ so I can use $T$ to decide $A_{TM}$ and that's a contradiction.
Is it true?
Answer: Your proof is wrong. Note that if $M$ accepts $w$, it must halt on $w$, so $L_1\cap L_2$ is the empty set, which is of course decidable.
To disprove the statement, you can set $L_2$ to be $\Sigma^*$, i.e. the language containing all strings. Since $\emptyset$ is Turing recognizable, $L_2$ is co-Turing recognizable. However, $L_1\cap L_2=L_1$ is undecidable. | {
"domain": "cs.stackexchange",
"id": 11872,
"tags": "turing-machines, computability"
} |
roscore spawning 3 processes of itself and 5 processes of rosmaster...is this normal | Question:
When I run roscore from command line, I am getting three processes (from ps -AL)
of roscore (3 duplicates). I am getting FIVE rosmaster processes. Any node I spawn will also be duplicated.
Is this normal behavior?
What am I doing wrong if it isn't?
I am using Ros Fuerte (which upgrading isn't an option for me at the moment).
Any help would be great, thanks.
~BM
Originally posted by BlackManta on ROS Answers with karma: 56 on 2013-03-28
Post score: 0
Answer:
You should use a tool better suited for the work you seek. ps -AL will show you tons of stuff, probably having to do with terminals, side processes (like those roslaunch spawns to control the other processes), pipes and ptys and so on.
Either use TOP or HTOP to check on running programs.
I have a launch file I know works correctly and launching it and then checking with ps -AL I too have lots of apparently duplicated processes. Suggestion: when you don't know what is really going on under the hood, don't look under the hood, else you get confused.
Originally posted by Claudio with karma: 859 on 2013-03-29
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by BlackManta on 2013-03-29:
...
Fair enough I will use top or HTOP. That will at least make sure I am running only 1 application. I am still a little concerned about the processes though. If I run a python script I get a process for that python script. Even if I fork, I might get a separate process for python itself but I still only get two duplicates though.
...
And...yes...I do sort of know what is going on under the hood. ;P
Now I am curious why that is happening. Is it because of the inter-process communication?
Comment by ahendrix on 2013-03-29:
I see one PID for the rosmaster with 5 threads, and similar results for other ROS nodes like roscore. I think the ROS libraries spawn extra threads in the background to handle the network traffic.
Comment by Claudio on 2013-04-02:
The core may have to spawn sub-processes for example for: message handling, logging, networking. Same thing for the nodes: maybe the call to ros::init spawns another thread/process to handle the messages.
If you know what's happening under the hood then you shouldn't need to ask ;) | {
"domain": "robotics.stackexchange",
"id": 13590,
"tags": "ros"
} |
Has the optimality of Grover's algorithm been proven in the standard oracle case? | Question: While answering this question, I stumbled upon the fact that the only proof I knew of the optimality of Grover's algorithm (that is, Zalka's) uses a phase oracle. I know that a phase oracle can be built from a standard oracle, but we would need the converse to use this proof in the case of a standard oracle.
The converse is almost true, in that it is possible to build it using a controlled version of the phase oracle, which is not in the problem's statement. Has there been a proof of the optimality of Grover's algorithm in the Standard oracle case? Or is there a reduction from the phase oracle to the standard one without using a controlled version?
Answer: I don't know how to reduce a standard oracle to a phase oracle, but the proof can actually be generalized to the standard oracle model, with a little more work. You can check out the proof on my blog for better math typesetting.
Notations and definitions
Let $S=\{0,1\}^n$ be the search space, and $R\subset S$ be the solutions, we define $N=|S|,M=|R|$. Let $f_R:S\to\{0,1\}$ be a function that checks whether a given solution is valid, i.e. $f_R(x)=1\iff x\in R$. To find a solution, we are allowed to query $f_R$ as an oracle in our algorithm design. For a quantum algorithm, we are given an "oracle operation" $O_R$ such that $O_R|x\rangle|b\rangle=|x\rangle|b\oplus f_R(x)\rangle$
First, we introduce a lemma to bound the sum of vector norms:
Lemma 1
Let $\{\alpha_i\}$ and $\{\beta_i\}$ be some finite sequence of vectors in a inner product space. Then we have
$$\begin{align}
\sum_i\|\alpha_i+\beta_i\|^2&\le\left(\sqrt{\sum_i\|\alpha_i\|^2}+\sqrt{\sum_i\|\beta_i\|^2}\right)^2\\
\sum_i\|\alpha_i+\beta_i\|^2&\ge\left(\sqrt{\sum_i\|\alpha_i\|^2}-\sqrt{\sum_i\|\beta_i\|^2}\right)^2
\end{align}$$
Proof of Lemma 1
By the triangle inequality, we have
$$\begin{align}
\sum_i\|\alpha_i+\beta_i\|^2
&\le\sum_i(\|\alpha_i\|+\|\beta_i\|)^2\\
&=\sum_i\|\alpha_i\|^2+\sum_i\|\beta_i\|^2+2\sum_i\|\alpha_i\|\|\beta_i\|
\end{align}$$
Then we apply Cauchy-Schwarz inequality on the third term:
$$\begin{align}
\sum_i\|\alpha_i+\beta_i\|^2
&\le\sum_i\|\alpha_i\|^2+\sum_i\|\beta_i\|^2+2\sqrt{\sum_i\|\alpha_i\|^2}\sqrt{\sum_i\|\beta_i\|^2}\\
&=\left(\sqrt{\sum_i\|\alpha_i\|^2}+\sqrt{\sum_i\|\beta_i\|^2}\right)^2
\end{align}$$
The other side is similar:
$$\begin{align}
\sum_i\|\alpha_i+\beta_i\|^2
&\ge\sum_i(\|\alpha_i\|-\|\beta_i\|)^2\\
&=\sum_i\|\alpha_i\|^2+\sum_i\|\beta_i\|^2-2\sum_i\|\alpha_i\|\|\beta_i\|\\
&\ge\sum_i\|\alpha_i\|^2+\sum_i\|\beta_i\|^2-2\sqrt{\sum_i\|\alpha_i\|^2}\sqrt{\sum_i\|\beta_i\|^2}\\
&=\left(\sqrt{\sum_i\|\alpha_i\|^2}-\sqrt{\sum_i\|\beta_i\|^2}\right)^2
\end{align}$$
Now we are going to prove the main result:
Theorem 1
All quantum algorithms that find a solution with $O(1)$ probability require $\Omega\left(\sqrt{\frac NM}\right)$ oracle queries.
Proof of Theorem 1
Without loss of generality, we may assume that the quantum algorithm uses $m$ qubits for some $m>n$. It applies $W$ unitary operations, interleaved with $W$ oracle operations. More specifically, let $|\psi\rangle$ be the initial state of the register, we compute $|\psi_W^R\rangle:=U_WO_RU_{W-1}O_R\cdots U_1O_R|\psi\rangle$, and then measure the first $n$ qubits of $|\psi_W^R\rangle$ as an answer. We may assume that the oracle queries are done on the first $n$ qubits, and the result is XORed with the $(n+1)$'th qubit. (Otherwise we can "swap" the queried qubits with the first $(n+1)$ qubits, and then swap them back after the query, as there is no limit on the unitary operations we apply.)
We define
$$\begin{align}
|\psi_k^R\rangle&:=U_kO_RU_{k-1}O_R\cdots U_1O_R|\psi\rangle\\
|\psi_k\rangle&:=U_kU_{k-1}\cdots U_1|\psi\rangle\\
D_k&:=\sum_{R\subset S}\left\|\psi_k^R-\psi_k\right\|^2
\end{align}$$
Upperbound of $D_W$
For the first half of the proof, we upperbound $D_k$ by $4k^2\binom{N-1}{M-1}$ using induction:
$$\begin{align}
D_{k+1}
&=\sum_{R\subset S}\left\|U_{k+1}O_R\psi_k^R-U_{k+1}\psi_k\right\|^2\\
&=\sum_{R\subset S}\left\|O_R\psi_k^R-\psi_k\right\|^2\\
&=\sum_{R\subset S}\left\|O_R(\psi_k^R-\psi_k)+(O_R-I)\psi_k\right\|^2\\
\end{align}$$
Notice that $O_R$ and $I$ can be written as
$$\begin{align}
O_R&=\sum_{x\notin R}|x\rangle\langle x|\otimes I\otimes I+\sum_{x\in R}|x\rangle\langle x|\otimes X\otimes I\\
I&=\sum_{x\notin R}|x\rangle\langle x|\otimes I\otimes I+\sum_{x\in R}|x\rangle\langle x|\otimes I\otimes I
\end{align}$$
Therefore we have
$$\begin{align}
D_{k+1}
&=\sum_{R\subset S}\left\|O_R(\psi_k^R-\psi_k)+\left(\sum_{x\in R}|x\rangle\langle x|\otimes(X-I)\otimes I\right)\psi_k\right\|^2
\end{align}$$
To apply Lemma 1, we upper bound the first term:
$$\begin{align}
\sum_{R\subset S}\left\|O_R(\psi_k^R-\psi_k)\right\|^2
&=\sum_{R\subset S}\left\|(\psi_k^R-\psi_k)\right\|^2\\
&=D_k\\
\end{align}$$
and the second term:
$$\begin{align}
&\sum_{R\subset S}\left\|\left(\sum_{x\in R}|x\rangle\langle x|\otimes(X-I)\otimes I\right)\psi_k\right\|^2\\
&=\sum_{R\subset S}\sum_{x\in R}\langle\psi_k|(|x\rangle\langle x|\otimes(2I-2X)\otimes I)|\psi_k\rangle\\
&=2\binom{N-1}{M-1}\sum_{x\in S}\langle\psi_k|(|x\rangle\langle x|\otimes(I-X))|\psi_k\rangle\\
&=2\binom{N-1}{M-1}(\langle\psi_k|\psi_k\rangle-\langle\psi_k|I\otimes X\otimes I|\psi_k\rangle)\\
&\le4\binom{N-1}{M-1}
\end{align}$$
By induction, $D_k\le4k^2\binom{N-1}{M-1}$, and with Lemma 1 we conclude that
$$\begin{align}
D_{k+1}
&\le\left(2k\sqrt{\binom{N-1}{M-1}}+2\sqrt{\binom{N-1}{M-1}}\right)^2\\
&=4(k+1)^2\binom{N-1}{M-1}
\end{align}$$
Lowerbound of $D_W$
For the second half of the proof, we lowerbound $D_k$ by $\Omega(1)\binom NM$. First, we define the projection matrix onto the subspace spanned by the solutions $R$:
$$\begin{align}
P_R:=\sum_{x\in R}|x\rangle\langle x|\otimes I
\end{align}$$
Then again, we split $D_W$ into two parts, and try to apply Lemma 1:
$$\begin{align}
D_W
&=\sum_{R\subset S}\left\|(I-P_R)\psi_W^R+(P_R\psi_W^R-\psi_W)\right\|^2\\
\end{align}$$
For the first term, we have $\langle\psi_W^R|(I-P_R)|\psi_W^R\rangle\le\frac12$, as we may assume that the probability of success is no less than $\frac12$ for this quantum algorithm. Thus we write down
$$\begin{align}
\sum_{R\subset S}\left\|(I-P_R)\psi_W^R\right\|^2\le\frac12\binom NM
\end{align}$$
For the second term, we have
$$\begin{align}
\left\|P_R\psi_W^R-\psi_W\right\|^2
&=1+\langle\psi_W^R|P_R|\psi_W^R\rangle-2\Re\langle\psi_W|P_R|\psi_W^R\rangle\\
&\ge\frac32-2\|P_R\psi_W\|^2\\
&=\frac32-2\langle\psi_W|P_R|\psi_W\rangle\\
\end{align}$$
Summing over $R$,
$$\begin{align}
\sum_{R\subset S}\left\|P_R\psi_W^R-\psi_W\right\|^2
&\ge\frac32\binom NM-2\sum_{R\subset S}\sum_{x\in R}\langle\psi_W|(|x\rangle\langle x|\otimes I)|\psi_W\rangle\\
&=\frac32\binom NM-2\binom{N-1}{M-1}\sum_{x\in S}\langle\psi_W|(|x\rangle\langle x|\otimes I)|\psi_W\rangle\\
&=\frac32\binom NM-2\binom{N-1}{M-1}\\
&=\left(\frac32-2\frac MN\right)\binom NM
\end{align}$$
We may assume that $\frac MN\le\frac14$. (In fact, for $\frac MN>\frac 14$, the problem is trivial as one only needs less than 4 trials on average using a naive algorithm.) We then apply Lemma 1:
$$\begin{align}
D_W
&\ge\left(\sqrt{\frac32-2\frac MN}-\sqrt{\frac12}\right)^2\binom NM\\
&\ge\left(\frac32-\sqrt2\right)\binom NM
\end{align}$$
Combining the lowerbound and upperbound of $D_W$, we have
$$\begin{align}
&\left(\frac32-\sqrt2\right)\binom NM\le D_W\le4W^2\binom{N-1}{M-1}\\
&\implies W\ge\frac{2-\sqrt2}4\sqrt{\frac NM}\\
&\implies W\ge\Omega\left(\sqrt{\frac NM}\right)
\end{align}$$
Which completes our proof. | {
"domain": "quantumcomputing.stackexchange",
"id": 4608,
"tags": "grovers-algorithm, resource-request"
} |
A* graph search heuristic for pathfinding | Question: A* needs a consistent heuristic to work on a graph.
So I'm not sure if the heuristic of a straight line (bird flight) can be used.
For example: the costs to travel to a neighbors node is always positive.
GOAL
--------------------
| |
| START |
| | where the stripes are obstacles.
Am I correct that the proposed heuristic isn't consistent here, as the path has to travel away from the goal first?
Is there a good heuristic for this kind of situation?
Or should I stick with Dijkstra and forget about A*?
Answer: Three points here:
A$^*$ does not require consistency of the heuristic function (for this, I refer to the definition provided by Klaus Draeger, which is perfect). Instead, A$^*$ requires admissibility of the heuristic function ($h(n)\leq h^*(n), \forall n$ where $h^*(n)$ is the optimal cost to reach the goal from a particular node $n$) or, in plain words, that it never overestimates the effort to reach the goal.
As a matter of fact, it has been proven that inconsistencies (while preserving admissibility) can be very beneficial. There are a number of papers on this issue, but let me refer to the one which summarizes the most important findings: Ariel Felner, Uzi Zahavi, Robert Holte, Jonathan Schaeffer, Nathan Sturtevant, Zhifu Zhang. Inconsistent Heuristics in Theory and Practice. Let me put their main finding this way: if a heuristic $h(n)$ is inconsistent but preserves admissibility, then there are edges where the heuristic difference exceeds the edge cost (again, I am referring here to Klaus Draeger's definition), so either the starting vertex or the ending vertex is better informed than the other. The authors then revisit an old idea, the Pathmax propagation rule, and show that it can be greatly enhanced in undirected state spaces, resulting in the Bidirectional Pathmax propagation rule (BPMX).
Regarding your specific case, the Euclidean (aerial, straight-line) distance is both admissible and consistent, so you should not bother about these issues ---it does not even matter if the optimal solution consists of moving away from the goal first to approach it later; let A$^*$ prove that for you.
Concluding: I would try the Euclidean distance between the current location and the goal location to guide A$^*$ to find optimal solutions in your domain. However, if you ever find an inconsistent heuristic that serves this purpose, do not be afraid at all: just apply BPMX (which is a couple of lines of code, no more than that).
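For completeness, here is a minimal sketch of A$^*$ with the Euclidean heuristic on a grid like the one in the question (the grid layout, names and 4-connected unit-cost moves are my own assumptions). The wall forces the optimal path to move away from the goal first, yet the straight-line heuristic still returns the optimal cost:

```python
import heapq, math

# '#' cells are obstacles; the wall in column 2 forces a detour around it.
GRID = [
    "..........",
    ".G#.......",
    "..#.......",
    "..#..S....",
    "..........",
]

def find(ch):
    for r, row in enumerate(GRID):
        if ch in row:
            return (r, row.index(ch))

def astar(start, goal):
    h = lambda p: math.dist(p, goal)        # straight-line (Euclidean) heuristic
    open_heap = [(h(start), 0, start)]      # entries are (f, g, position)
    best_g = {start: 0}
    while open_heap:
        f, g, p = heapq.heappop(open_heap)
        if p == goal:
            return g                        # cost of an optimal path
        if g > best_g.get(p, math.inf):
            continue                        # stale heap entry, skip
        r, c = p
        for q in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            qr, qc = q
            if 0 <= qr < len(GRID) and 0 <= qc < len(GRID[0]) and GRID[qr][qc] != "#":
                ng = g + 1
                if ng < best_g.get(q, math.inf):
                    best_g[q] = ng
                    heapq.heappush(open_heap, (ng + h(q), ng, q))
    return None

print(astar(find("S"), find("G")))  # -> 8 (Manhattan distance would be 6)
```

Since the Euclidean heuristic is consistent on a unit-cost grid, the first time the goal is popped its $g$-value is already optimal, so no node ever needs re-expansion.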
Hope this helps, | {
"domain": "cs.stackexchange",
"id": 4047,
"tags": "algorithms, graphs"
} |
Do sea oil platforms use the oil they extract to power their engines? | Question: I'm currently studying oil extraction and learned that the engines are powered by electricity or diesel. Probably there are other types of fuel used as well. By engines I mean the pumps that extract the oil from under the sea. My question is: Is the oil platform able to use the oil they are pumping to power their own engines? I believe it would need to be able to "transform" oil into fuel on site and use it, is that possible?
Answer: Platforms are weight- and space-limited, so there is generally no room for processing (and processing would require more crew and their facilities). Gas, oil and water are usually separated; often it is legal to dump the water, which saves pipelining it to shore for processing. So a natural-gas engine should be possible, but I have not heard of them being used on a platform. And it can require a complex of 2 or 3 platforms to separate gas, oil, and water. Especially in the US Gulf, many platforms are unmanned and just send all production to shore. Some very productive deep-water platforms will have a permanently anchored tanker, which gives more room for separation, etc. They might separate out a diesel fraction; gasoline would be out of the question.
"domain": "engineering.stackexchange",
"id": 4345,
"tags": "petroleum-engineering, fuel"
} |
Morse code string - follow-up | Question: This is probably one of the many follow-ups coming. What I have edited:
Added equals() and hashCode()
Added . and , to my Morse code "dictionary"
Used a Pattern for regex checking
Edited method names
Concerns:
Any bad practices in the new code?
Does it "smell"?
import java.util.regex.Pattern;
public class MorseString {
public static final char CHAR_SEPARATOR = ' ';
public static final char WORD_SEPARATOR = '/';
public static final char DOT = '.';
public static final char DASH = '-';
private String string;
private String codeString;
/*
* Constructor that takes the Morse Code as a String as a parameter
*/
public MorseString(String s) {
if(!isValidMorse(s)) {
throw new IllegalArgumentException("s is not a valid Morse Code");
}
if(!s.isEmpty()) {
this.string = translate(s);
} else {
this.string = s;
}
this.codeString = s;
}
/*
* Checks if it is a valid Morse Code
*/
private static final Pattern VALID_MORSE_PATTERN = Pattern.compile(
"(" + Pattern.quote(Character.toString(DASH))
+ "|" + Pattern.quote(Character.toString(DOT))
+ "|" + Pattern.quote(Character.toString(WORD_SEPARATOR)) +
"|\\s)*");
public boolean isValidMorse(CharSequence ch) {
return VALID_MORSE_PATTERN.matcher(ch).matches();
}
/*
* Translates from Morse in a String to a String
* e.g. ".... .." to "hi"
*/
private String translate(String code) {
String[] words = code.split(Character.toString(WORD_SEPARATOR));
StringBuilder result = new StringBuilder(words.length * words[0].length()); // Rough guess of size
for(String word : words) {
String[] letters = word.trim().split(Character.toString(CHAR_SEPARATOR));
for(String letter : letters) {
result.append(MorseCode.decode(letter));
}
result.append(CHAR_SEPARATOR);
}
return result.toString().substring(0, result.length() - 1);
}
public static MorseString parse(String s) {
// ...
}
/*
* Returns the code as a String
* e.g. if the object represents "hi" in Morse, it returns ".... .."
*/
@Override
public String toString() {
return codeString;
}
@Override
public int hashCode() {
final int prime = 31;
int result = 1;
result = prime * result
+ ((codeString == null) ? 0 : codeString.hashCode());
result = prime * result + ((string == null) ? 0 : string.hashCode());
return result;
}
@Override
public boolean equals(Object obj) {
if (this == obj)
return true;
if (obj == null)
return false;
if (!(obj instanceof MorseString))
return false;
MorseString other = (MorseString) obj;
if (codeString == null) {
if (other.codeString != null)
return false;
} else if (!codeString.equals(other.codeString))
return false;
if (string == null) {
if (other.string != null)
return false;
}
return string.equals(other.string);
}
/*
* Returns the result of the translations
* e.g. if the object represents "hi" in Morse, it returns "hi"
*/
public String asText() {
return string;
}
}
enum MorseCode {
A(".-"),
B("-..."),
C("-.-."),
D("-.."),
E("."),
F("..-."),
G("--."),
H("...."),
I(".."),
J(".---"),
K("-.-"),
L(".-.."),
M("--"),
N("-."),
O("---"),
P(".--."),
Q("--.-"),
R(".-."),
S("..."),
T("-"),
U("..-"),
V("...-"),
W(".--"),
X("-..-"),
Y("-.--"),
Z("--.."),
ZERO('0', "-----"),
ONE('1', ".----"),
TWO('2', "..---"),
THREE('3', "...--"),
FOUR('4', "....-"),
FIVE('5', "....."),
SIX('6', "-...."),
SEVEN('7', "--..."),
EIGHT('8', "---.."),
NINE('9', "----."),
PERIOD('.', ".-.-.-"),
COMMA(',', "--..--");
private char character;
private String code;
private MorseCode(char character, String code) {
this.character = character;
this.code = code;
}
private MorseCode(String code) {
this.character = this.name().charAt(0);
this.code = code;
}
// ...
}
Answer: I'm referring to this code with the \\ ... replaced from your previous post.
Where are the tests?
I do not see any tests. Where are the tests? You know, tests come first...
Morse: Usage of '/' to separate words?
That's AFAIK not standard Morse, or is it?
Morse - but which?
There are different Morse alphabets. Which one do you use? Mentioning the corresponding spec in a comment would be nice.
Do you want to support multiple Morse alphabets in the long run?
If construction is ambiguous, use a factory method instead.
Ideally code is self-explanatory. Looking at the constructor MorseString.MorseString(String) I cannot tell whether this constructs a MorseString from a String by translating the String argument into Morse code using the Morse alphabet, or whether this constructs a MorseString from a String which already is translated Morse code.
In such cases it is better to have private constructors and static factory methods. You already did it half way, MorseString.parse() is such a factory method.
You could consider declaring private MorseString(String) and provide another factory method instead.
Use final (partially a matter of taste).
The final keyword communicates that a variable is not going to change. I recommend it strongly for fields which do not change, and I recommend it even for all other variables.
Especially, use
public class MorseString {
private final String string;
private final String codeString;
}
This makes it more obvious that class MorseString is immutable. BTW the fact that class MorseString is immutable is good!
Premature optimization in translate(String)
Guessing the size of the result for the StringBuilder is premature optimization. And in case performance matters, there's a better way. How significant is words[0].length() really for the length? I would say that words[0] is a bad sample because when it comes to most Indo-European languages like English, French, German etc, there's a big chance that the first word of a sentence is very short like 'A', 'I' or 'You'. Wouldn't it be better to use a constant like, say, 7?
Or, if this is really important for performance, how about a self-tuning, conservative mechanism? Track the average length in a double averageWordLength variable, and use a conservative allocation like new StringBuilder((int) (words.length * (averageWordLength + 3))) if you really want to avoid reallocations of the StringBuilder's internal buffer.
But really, only optimize when you know that this is a performance hot spot.
Bug in translate(String).
I believe that translate(String) cannot process "". That's not very convenient. I'd actually call it a bug.
(I didn't test it, though - did I already ask where are the tests?)
This bug actually complicates the constructor MorseString unnecessarily.
Even if the case s.length() == 0 would need special treatment - then that's because of translate() and therefore responsibility of translate(), not the constructor. Checking s.length() in the constructor is misplaced responsibility.
Use meaningful names.
The parameter s in constructor MorseString is assigned to field codeString. Therefore it makes sense to name it codeString as well, not just s. String s is totally meaningless, whereas String codeString carries a lot of information.
How about calling the things consistently plainText and morseText? Just suggesting, and I think that would be consistent with how such stuff is usually called in the context of coding.
Provide more information in exceptions.
In the constructor, when you detect that s is not a valid Morse string, you do
throw new IllegalArgumentException("s is not a valid Morse Code");
That leaves the programmer clueless what exactly was invalid.
There are two steps about how to improve that.
First of all, include s in the exception message, like this:
throw new IllegalArgumentException("\"" + s + "\" is not a valid Morse code");
The second step is a bit bigger: you could change isValidMorse(String) from using a regular expression into a hand-written parser which can give more information.
Over-complicated constructor MorseString
Shouldn't the constructor simply look like this:
public MorseString(String codeString) {
if (!isValidMorse(codeString))
throw new IllegalArgumentException("\"" + codeString + "\" is not a valid Morse code");
string = translate(codeString);
this.codeString = codeString;
}
The fact that in a special case s is assigned to both string and codeString is confusing. It would've been better to use "" instead. But it would be even better, of course, if translate(String) simply accepted a String s with s.length() == 0.
Avoid overly long methods
translate(String) and parse(String) are a bit lengthy. Consider simplifying and splitting them.
For example, parse(String) contains a special case if (s.isEmpty()) return new MorseString("");. Ideally algorithms are written in a way that such special cases are implicit (and if they are developed with TDD/TPP, they usually end up this way automatically).
Look at this sum(int... ops) method:
public static int sum(final int... operands) {
int sum = 0;
for (final int operand : operands)
sum += operand;
return sum;
}
No special case for operands.length == 0.
Handling the special cases specially makes algorithmic functions less robust (if we're not speaking of recursive algorithms): You remove a potential test case from testing the core algorithm.
Redundant null check in equals()
In equals(), the check if (obj == null) is redundant. It is defined, guaranteed and (hopefully) well-known that x instanceof Y with x == null always evaluates false.
EDIT: The equals() could be as simple as this, given that codeString and string are never null:
public boolean equals(final Object obj) {
return obj instanceof MorseString ? equals((MorseString) obj) : false;
}
public boolean equals(final MorseString obj) {
return codeString.equals(obj.codeString) && string.equals(obj.string);
}
Or if you want to go without a second overloaded equals(), like this:
public boolean equals(final Object obj) {
if (!(obj instanceof MorseString))
return false;
final MorseString o = (MorseString) obj;
return codeString.equals(o.codeString) && string.equals(o.string);
}
My personal preference is on the variant with the two equals() because the individual methods are shorter and pure single-statement expression functions.
Just compare this with your isValidMorse(CharSequence) method. This is how functions ideally look. Your isValidMorse(CharSequence) method is really nice.
Inconsistent null check in equals() and hashCode()
The other null check if (codeString == null) actually is inconsistent. The constructor is (currently) written in a way that codeString can never be null. The constructor takes String s and invokes s.isEmpty() without a null check, which means that if this succeeds, s is guaranteed not to be null; it then assigns codeString = s.
Consider using Objects.hashCode() and Objects.hash() for hashCode().
The expression (obj == null ? 0 : obj.hashCode()) can be replaced by Objects.hashCode(obj).
However, your hashCode() could be as simple as this:
@Override
public int hashCode() {
return Objects.hash(codeString, string);
}
enums are for programmers. Plus, put abstraction in code, details in data.
I wouldn't use an enum for the job of translating between plain text and morse text.
Also, decode and encode are unnecessarily slow. They are O(n) with n being the size of the Morse alphabet.
A Map<Character, String> / Map<String, Character> would be O(log(n)), and it would be more maintainable. As soon as someone sees Map they think "Oh, lookup. Clear." Whereas the for-loop has to be read and understood in order to understand that it's a lookup.
If performance matters, you can actually have O(1) by using an array with the character as index and special values like null or -1 to denote invalids / gaps in the array.
This works in both directions. A String which represents a single character from the Morse alphabet and therefore contains only . and - can actually be represented as a binary digit by converting '.' into 0 and '-' into 1. You need a known start bit in order to distinguish whether the code starts with 0 or 1. That way you can convert the String into a small int number and use that for looking up the character.
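A minimal sketch of the Map-based variant, assuming a table populated once at class-load time — the class name and the handful of alphabet entries are hypothetical, not from the reviewed code; the real table would cover the whole alphabet:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative Map-based lookup in both directions.
public class MorseTable {
    private static final Map<Character, String> ENCODE = new HashMap<>();
    private static final Map<String, Character> DECODE = new HashMap<>();
    static {
        put('a', ".-");
        put('b', "-...");
        put('e', ".");
        put('t', "-");
    }
    private static void put(final char c, final String code) {
        ENCODE.put(c, code);
        DECODE.put(code, c);
    }
    // Each direction is a single lookup (O(1) expected for HashMap,
    // O(log n) if a TreeMap were used instead), and the intent is obvious.
    public static String encode(final char c) {
        return ENCODE.get(c);
    }
    public static Character decode(final String code) {
        return DECODE.get(code);
    }
}
```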
EDIT:
Violation of SRP - Single Responsibility Principle regarding plausibility check. Consequence: Inconsistent MorseString.parse(String) vs enum MorseCode.
While your enum MorseCode allows for latin characters, digits, comma and period, MorseString.parse(String) will still reject such strings because of the guard if (!s.matches("[\\s\\dA-Za-z]*")).
The root cause is the violation of the SRP - Single Responsibility Principle. Both classes / methods, MorseString.parse(String) and MorseCode.encode(char), are responsible for the plausibility check. The responsibility of the plausibility check has been duplicated, it's not in a single place, and now it has become inconsistent.
The guard actually is entirely redundant. MorseString.parse(String) could purely take the responsibility of dealing with multiple characters, i.e. taking care of the loop.
Avoid break and continue if possible.
break and continue are goto in disguise, i.e. a violation of structured programming. Okay, sometimes we need them. But in the case of MorseString.parse(String), the continue can easily be avoided by using an else.
Possible violation of the SRP - Single Responsibility Principle between MorseString.parse(String) and MorseCode.encode(char) regarding conversion.
MorseCode seems to be responsible for converting all characters except some, like ' ' (SP).
Consider how much simpler MorseString.parse(String) would be if MorseCode would also take care of ' '.
Actually, MorseString.parse(String) should be as simple as this:
public static MorseString parse(final String plainText) {
final StringBuilder result = new StringBuilder();
for (final char c : plainText.toCharArray())
result.append(MorseCode.encodeForText(c)).append(CHAR_SEPARATOR);
return new MorseString(result.toString().trim());
}
I refer to a not-yet-existent method encodeForText which would return Strings that are one pause length short, so as not to produce overly long pauses for those Morse alphabet code words which contain pauses at their ends.
"domain": "codereview.stackexchange",
"id": 11370,
"tags": "java, regex, morse-code"
} |
Why is the electromagnetic field time-constant in the static case? | Question: At the beginning of chapter 4 in Feynman's book on electromagnetism, he writes down Maxwell's equations:
$$\nabla\cdot E =\frac{\rho}{\epsilon_0}$$
$$\nabla\times E =-\partial_t B$$
$$\nabla\cdot B =0$$
$$c^2\nabla\times B = \partial_t E + \frac{j}{\epsilon_0}$$
The easiest circumstance to treat is one in which nothing depends on the time—called the static case. All charges are
permanently fixed in space, or if they do move, they move as a steady flow in a circuit (so $\rho$ and $j$ are constant in time). In these circumstances, all of the terms in the Maxwell equations which are time derivatives of the field are zero.
That is, if $\rho(t)$ and $j(t)$ are constant at all points in space, then $\partial_t E=\partial_t B=0$. Why is that the case? I can't see it.
Answer: I think you're misinterpreting the passage. The static case is, by definition, the situation where nothing depends on time, so all time derivatives are zero. $\rho$ and $j$ are constant everywhere too.
You're right that the fact that $\rho$ and $j$ are constant does not imply that $\partial_t E = \partial_t B = 0$. As a simple counterexample, consider electromagnetic waves in a vacuum. | {
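To make the counterexample concrete, a vacuum plane wave (standard textbook form, stated here as an illustration) has $\rho = 0$ and $j = 0$ everywhere and at all times, yet:

```latex
\vec E(z,t) = E_0 \cos(kz - \omega t)\,\hat x, \qquad
\vec B(z,t) = \frac{E_0}{c} \cos(kz - \omega t)\,\hat y, \qquad
\omega = ck,
```

so $\partial_t \vec E \neq 0$ and $\partial_t \vec B \neq 0$ even though the sources are constant (identically zero) in time.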
"domain": "physics.stackexchange",
"id": 48728,
"tags": "electromagnetism, electrostatics"
} |
The Vertical Launch of a Rocket | Question: From Q7 on Pg.22 of "Upgrade Your Physics" by BPhO/Machacek
A rocket of initial mass $M_0$ is being launched vertically in a uniform gravitational field of strength $g$.
(a) Calculate the final velocity of the rocket 90 % of whose launch mass is propellant, with a constant exhaust velocity $u$. Assume that the propellant is consumed evenly over one minute.
Attempt:
Let $\alpha$ denote the fuel consumption in $\mathrm{kg\ s^{-1}}$
Then the constant thrust provided by the exhaust is given by:
$$T=\alpha u \tag{1}$$
The acceleration $a(t)$ of the rocket at some time $t$ after the launch:
$$T-M(t)g=M(t)a(t) \tag{2}$$
where $$M(t)=M_0-\alpha t \tag{3}$$
is the mass of the rocket at time $t$.
Using $v=\int a(t)\,\mathrm dt $, I got
$$v(t)=\int\limits_0^t\left(\frac{\alpha u}{M_0-\alpha t}-g\right)\,\mathrm dt=u\ln\left(\frac{M_0}{M_0-\alpha t}\right)-gt \tag{4}$$
since $v_0=0$.
Can $\alpha$ and $t$ somehow be eliminated or do I need more information to answer the question? Any conceptual errors in my working?
Later on, the question also asks for the velocity at main engine cut-off and the greatest height reached (which I think can be obtained by integrating eq. $(4)$ but the notion of time is again needed here?).
Answer: As you say:
$$M(t) = M_0 - \alpha t$$
But you must know that at time $\tau$ after start the object now has $0.1M_0$ mass (since it consumed all of its fuel).
Therefore
$$0.1M_0 = M_0 - \alpha \tau \tag 1$$
$$\Rightarrow \tau = \frac {0.9M_0}{\alpha} \tag 2$$
After substituting $(1)$ and $(2)$ into your equation we get:
$$ v(\tau) = u\ln \left(10 \right)-g\frac {0.9M_0}{\alpha}$$
Now if you know $\alpha$ then you can find $v(\tau)$.
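Note that the question in fact fixes $\alpha$: the propellant is stated to be consumed evenly over one minute, so the burn time is $\tau = 60\ \mathrm{s}$ and, by $(2)$, $\alpha = 0.9M_0/(60\ \mathrm{s})$. Substituting $g\frac{0.9M_0}{\alpha} = g\tau$:

```latex
v(\tau) = u\ln(10) - g\,\tau = u\ln(10) - (60\ \mathrm{s})\,g,
```

which eliminates both $\alpha$ and $t$, as the question asked.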
"domain": "physics.stackexchange",
"id": 65744,
"tags": "homework-and-exercises, newtonian-mechanics, kinematics, conservation-laws, rocket-science"
} |
Java function that blocks until a specific file is deleted | Question: Created for a console application that will be run in the background on a linux system, giving me a nice way to gracefully shut it down by simply deleting a file (which can be done via script/etc.)
public class BlockOnRunFile {
private File watchedFile;
private Path path;
private WatchService watcher;
public void end() {
watchedFile.delete();
}
public BlockOnRunFile(String runFilePath) throws IOException {
watcher = FileSystems.getDefault().newWatchService();
path = FileSystems.getDefault().getPath(
runFilePath.substring(0, runFilePath.lastIndexOf(FileSystems
.getDefault().getSeparator())));
watchedFile = new File(runFilePath);
watchedFile.createNewFile();
}
public void block() {
try {
WatchKey key;
key = path.register(watcher, StandardWatchEventKinds.ENTRY_DELETE);
// stall until the game is supposed to end
// reset key to allow new events to be detected
while (key.reset()) {
// key = watcher.take();
try {
for (WatchEvent<?> event : key.pollEvents()) {
WatchEvent.Kind<?> kind = event.kind();
if (kind == StandardWatchEventKinds.OVERFLOW) {
Common.log.logMessage("File watcher overflow",
LogLevel.INFO);
if (!watchedFile.exists()) {
// do nothing
;
}
break;
}
if (kind == StandardWatchEventKinds.ENTRY_DELETE) {
@SuppressWarnings("unchecked")
WatchEvent<Path> ev = (WatchEvent<Path>) event;
Path filename = ev.context();
if (filename
.toAbsolutePath()
.toString()
.equals(watchedFile.getAbsolutePath()
.toString())) {
watcher.close();
break;
}
}
}
Thread.sleep(1000); // prevent CPU burn, worst-case: 1 second delay on shutdown
} catch (Exception e) {
watcher.close();
Common.log.logMessage(e, LogLevel.INFO);
continue;
}
}// end while loop
} catch (IOException e1) {
Common.log.logMessage(e1, LogLevel.ERROR);
}
}
}
It works, and I don't expect there to be any large amount of file operations in the directory where the file exists, but I would still like to know of any possible issues (I intend to automate it via some scripts/crontab and make sure it's robust enough)
Answer: This is an interesting concept. I would link it in with, say, a pid file, and have the program terminate itself when the pid file is deleted.... and also delete its own pid file if it terminates early. I think that's why you have your end() method....
General
public class BlockOnRunFile {
private File watchedFile;
private Path path;
private WatchService watcher;
The above variables should all be final too. Also, there's an uncomfortable mix of the "old" File-based system, and the "new" Path based one. I recommend that you use Path, and stick to it.
Simplifications
This code is.... ugly.
path = FileSystems.getDefault().getPath(
runFilePath.substring(0, runFilePath.lastIndexOf(FileSystems
.getDefault().getSeparator())));
watchedFile = new File(runFilePath);
watchedFile.createNewFile();
It can be simplified a lot:
Path watchedFile = Paths.get(runFilePath).toAbsolutePath();
Path runDir = watchedFile.getParent();
Files.deleteIfExists(watchedFile);
Files.createFile(watchedFile);
Note, there is no actual File instance in there, all just paths.... Also, there is no Watcher in there. There is no need to create the Watcher in the constructor, it is only used in the block() method, so use it there.
In your block() method, the try/catch is good, but I would add a finally:
} finally {
Files.deleteIfExists(watchedFile);
}
The Watcher should be added as a resource to the top of that try.... and it auto-closes that way...
Also, about simplification, this code should be 1 line:
WatchKey key;
key = path.register(watcher, StandardWatchEventKinds.ENTRY_DELETE);
like:
WatchKey key = runDir.register(watcher, StandardWatchEventKinds.ENTRY_DELETE);
Then, your code body, the guts, is a little more complicated than necessary, and has a bug....
The first thing is that the inner try/catch is unnecessary, and buggy. You cannot close() the watcher and then try to reset the key, and wait for more. That will just fail. The code will simply terminate the loop, yet you have a continue statement. It's unintuitive. I would simply remove the try/catch entirely.
The second bug is in the "OVERFLOW" code. Your code tests for if the file does not exist, but it should only continue if it does exist.
Finally, why do you have the take() commented out... and use a Thread.sleep()? The take() is a blocking operation, and will 'sleep' until there is an event ready. Use the logic in the take(), and trust it...
Oh, and your handling of InterruptedException is not good.....
Finally, you go to a lot of effort to make sure that every event in the directory is 'handled', but, there is no need to handle the events. All we need to do is to make sure our sentry file is still there. If it is gone, then we die. We don't care about any other files.... this essentially removes the need for the Overflow and other conditions....
Suggestions
With the 'finally' block, the above fixes, and a little bit of "return" instead of "break", your code becomes:
import java.io.IOException;
import java.nio.file.FileSystems;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardWatchEventKinds;
import java.nio.file.WatchKey;
import java.nio.file.WatchService;
public class BlockOnRunFile {
private final Path watchedFile;
public BlockOnRunFile(String runFilePath) throws IOException {
watchedFile = Paths.get(runFilePath).toAbsolutePath();
Files.deleteIfExists(watchedFile);
// create entire directory tree, if possible, to create our watch file
// in.
Files.createDirectories(watchedFile.getParent());
Files.createFile(watchedFile);
}
public void end() throws IOException {
Files.deleteIfExists(watchedFile);
}
public void block() {
try (WatchService watcher = FileSystems.getDefault().newWatchService()) {
final WatchKey key = watchedFile.getParent().register(watcher,
StandardWatchEventKinds.ENTRY_DELETE);
// stall until the game is supposed to end
// reset key to allow new events to be detected
while (key.reset()) {
// wait for a file to be deleted (or an overflow....)
if (key != watcher.take()) {
throw new IllegalStateException(
"Only our key is registered, only it should be taken");
}
// now, we know something has changed in the directory, all we
// care about though, is if our file exists.
if (!Files.exists(watchedFile)) {
return;
}
}
} catch (IOException e1) {
Common.log.logMessage(e1, LogLevel.ERROR);
} catch (InterruptedException e) {
// propagate an interrupt... we can't handle it here.....
// just let the file be removed, and we die....
Thread.currentThread().interrupt();
Common.log.logMessage(e, LogLevel.WARN);
} finally {
try {
Files.deleteIfExists(watchedFile);
} catch (IOException e) {
// unable to delete the sentry file.....
Common.log.logMessage(e, LogLevel.WARN);
}
}
}
} | {
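For completeness, the other half of this scheme — the script or crontab entry that triggers the graceful shutdown — is nothing more than a file deletion. The path here is purely hypothetical, not one from the reviewed code:

```shell
# Delete the sentry file the Java process created at startup; the thread
# blocked in block() observes the ENTRY_DELETE event and returns.
rm -f /tmp/myapp.run
```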
"domain": "codereview.stackexchange",
"id": 12722,
"tags": "java, file"
} |
Getting an array from one property of an associative array best practice | Question: I have an array of objects eg.
[
array(
'id' => 1,
'name' => "simon"
),
...
]
and I need to get an array of IDs eg.
[1,2,3,4,5];
Currently I'm doing this:
$entities = $this->works_order_model->get_assigned_entities($id);
$employee_ids = array();
foreach ($entities as $entity) {
array_push($employee_ids, $entity->id);
}
Is there a best practice way of doing this?
Answer: I think array_map is what you are looking for:
php > $aa = array (array ("id" => 1, "name" => 'what'), array('id' => 2));
php > function id($i) { return $i['id'];};
php > print_r(array_map ('id', $aa));
Array
(
[0] => 1
[1] => 2
) | {
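Note that the question's sample data are accessed as objects ($entity->id) while the snippet above indexes arrays. A closure avoids polluting the global namespace with an id() function and works on objects; and as a hedged side note, PHP ≥ 5.5 also ships array_column() for exactly this (object support arrived in PHP 7.0):

```php
<?php
// Hypothetical sample data mirroring the question (objects with an id field).
$entities = [
    (object) ['id' => 1, 'name' => 'simon'],
    (object) ['id' => 2, 'name' => 'anna'],
];

// Closure version of array_map, works on objects:
$ids = array_map(function ($e) { return $e->id; }, $entities);
// $ids is now [1, 2]
```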
"domain": "codereview.stackexchange",
"id": 662,
"tags": "php, array"
} |
Why would a pendulum experiment give $g > 9.8\ \mathrm{m/s^2}$? | Question: I am taking an introductory lab course in which we've done an experiment on the physical pendulum.
We've seen that for small oscillations, the period is
$$T=2\pi\sqrt{\dfrac{I_S}{Mgd_{cm}}}\tag{1}$$
where $S$ is the pivot point, $M$ is the total mass of the object, and $d_{cm}$ is the distance between $S$ and the pendulum's center of mass.
Now, varying $d_{cm}$, I've obtained seven different periods. I've calculated $g$ in terms of $T$ and $d_{cm}$ for those seven distinct values of $T$ and $d_{cm}$. So, $g$ can be expressed as
$$g=4\pi^2\dfrac{I_S}{T^2Md_{cm}}.\tag{2}$$
All the periods I've obtained were always greater than $1$, and the values of $g$ were between $10.15$ and $10.3$. I am trying to understand why $g$ always came out greater than $9.8$, the expected value, and not less than $9.8$. If I consider air friction, I would expect the period to be greater; from what I've said and from equation (2), I would then say that the values of $g$ should be less than $9.8$, contrary to the values I got.
Note that $T$ is in seconds, $[g]=\dfrac{m}{s^2}$, and the angle from which the pendulum was released is approximately $25$ degrees.
I would appreciate if someone could help me to understand the reason why I've obtained these values for $g$.
Answer: I) OP is using the period formula
$$\tag{1} T~=~2\pi\sqrt{\frac{I}{MgR}} $$
for a compound/physical pendulum (in the small amplitude limit) to estimate the gravitational acceleration constant
$$\tag{2} g~=~\left(\frac{2\pi}{T}\right)^2 \frac{I}{MR}. $$
Here $I$ is the moment of inertia around the pivot point; $R$ is the distance from CM to the pivot point; and $M$ is the total mass.
II) After doing the experiment OP finds values for $g$ that are 3-5% too big. (These results are close enough that OP likely did not make any elementary mistakes with units.) A finite amplitude of
$$\tag{3} \theta_0 ~\approx ~25^{\circ}~\approx~ .44~ {\rm rad}$$
makes the pendulum
$$\tag{4} \frac{\theta_0^2}{8}~\approx~ 2\%$$
slower, as compared to the ideal pendulum (1), cf. comment by Prahar. So correcting for a finite amplitude makes OP's estimates worse, 5-7% too big, as Keith Thompson points out in a comment above.
So the discrepancy is caused by something else. The culprit is likely that it is difficult to get a precise estimate for the moment of inertia $I$. All the other quantities $T$, $M$ and $R$ should be fairly easy to measure reliably. So OP's value for $I$ is likely too big. According to Steiner's theorem
$$\tag{5} I~=~MR^2+I_0,$$
where $I_0$ is the moment of inertia around the CM (and the actual quantity which is poorly known).
III) Below follows a suggestion. Plot OP's seven data points in an $(x,y)$ diagram with axes
$$\tag{6} x~:=~R^2 \quad\text{and}\quad y~:=~R\left(\frac{T}{2\pi}\right)^2.$$
Theoretically, the $(x,y)$ data points should then lie on a straight line
$$\tag{7} y~=~ax+b$$
with slope
$$\tag{8} a~=~\frac{1}{g}$$
and $y$-intercept
$$\tag{9} b~=~\frac{I_0}{gM}.$$
In other words, find the best fitting straight line. This method should hopefully produce a good estimate for $g$ without having to know $I_0$ a priori. (By the way, notice that we in principle also don't need to know the mass $M$, cf. the equivalence principle!)
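The straight line (7), with slope (8) and intercept (9), follows by substituting Steiner's theorem (5) into the period formula (1):

```latex
\left(\frac{T}{2\pi}\right)^2 = \frac{I}{MgR} = \frac{MR^2 + I_0}{MgR}
\quad\Longrightarrow\quad
\underbrace{R\left(\frac{T}{2\pi}\right)^2}_{y}
= \frac{1}{g}\,\underbrace{R^{2}}_{x} + \frac{I_0}{gM}.
```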
IV) Finally, as always in experiments, estimate all pertinent uncertainties in the various measurements. | {
"domain": "physics.stackexchange",
"id": 12440,
"tags": "newtonian-mechanics, gravity, experimental-physics"
} |
How come the energy $E$ appears in the Time-Independent Schrödinger Equation, if only energy differences $\Delta E$ are actually physical? | Question: Consider the time-independent Schrödinger equation:
$$\operatorname{\hat H}\vert\Psi\rangle=E\vert\Psi\rangle$$
Is it not true that $E$ doesn't factor into any physically meaningful relation, and only $\Delta E$ does? That we can choose where we want $E = 0$.
Then, why is $E$ here? Where is $E = 0$ in this definition of $E$?
Answer: It is true that only $\Delta E$ matters and, therefore, it is possible to assign different values of $E$ to the same system without changing the meaning behind it. In other words, we can choose different reference points for the energy. However, the reference point is already well defined within the Hamiltonian.
For instance, take the example of a particle in a potential $V(x)=\frac{1}{2}kx^2$. The Hamiltonian will then be:
$$
H=\frac{p^{2}}{2m}+\frac{1}{2}kx^2
$$
Just like in classical physics, we could have defined the potential to be $V(x)=\frac{1}{2}kx^2+ \alpha$, where $\alpha$ is an arbitrary constant, and the physics would have been the same. However, the Hamiltonian now will be
$$
H=\frac{p^{2}}{2m}+\frac{1}{2}kx^2+\alpha
$$
and, as a result, the original eigenvalues $E$ will become $E+\alpha$. | {
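Explicitly, if $H\psi = E\psi$ for the original Hamiltonian, then for the shifted one:

```latex
(H + \alpha)\,\psi = H\psi + \alpha\psi = (E + \alpha)\,\psi,
```

so the eigenstates are unchanged, every eigenvalue shifts by the same constant $\alpha$, and all differences $\Delta E$ are left intact.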
"domain": "physics.stackexchange",
"id": 51430,
"tags": "quantum-mechanics, energy, schroedinger-equation, conventions, hamiltonian"
} |
Why are basis sets needed? | Question: I am not sure whether this question is even reasonable, but here it goes. We are taught about the different types of basis sets (extended, minimal, double-zeta, plane wave), but I do not think it is clear as to why they are needed. After all, it is possible to do computational chemistry without a basis set (see James R. Chelikowsky, N. Troullier, and Y. Saad, Phys. Rev. Lett. 1994, 72, 1240).
From what I understand basis sets are needed because we use LCAO. We find a set of functions (a basis set) that resembles atomic orbitals. Is this true? Or is the picture more complicated?
Answer: Spatial orbitals $\phi_i$ in modern electronic structure calculations are indeed typically expressed as a linear combination of a finite number of basis functions $\chi_k$,
\begin{equation}
\phi_i(1) = \sum\limits_{k=1}^{m} c_{ki} \chi_k(1) \, .
\end{equation}
In the early days, atomic orbitals were built out of basis functions, while molecular orbitals were built out of atomic orbitals, which is where the name of the approach, linear combination of atomic orbitals (LCAO) originated.
Today both atomic and molecular orbitals are built out of basis functions, and while basis functions for molecular calculations are still typically centered on atoms, they usually differ from the exact atomic orbitals due to approximations and simplifications.
Besides, basis functions centered on bonds or lone pairs of electrons, or even plane waves, are also used as basis functions.
Nevertheless, the approach is still commonly referred to as linear combination of atomic orbitals.
Now, to the very question of why do we do things this way?
On the one hand, the LCAO technique made its way into quantum chemistry as just another example of a widely used approach of reducing a complicated mathematical problem to the well-researched domain of linear algebra. To my knowledge, this was proposed first by Roothaan.1 However, and as it was mentioned by Roothaan from the beginning, the LCAO approach in present day quantum chemistry is also attractive from the general chemistry point of view: it is tempting to construct molecular orbitals in modern electronic structure theory from their atomic counterparts, as it was done by Hund, Mulliken and others already in the early days of quantum theory2 and as it is taught in high-school general chemistry courses today.
1) Roothaan, C.C.J. "New developments in molecular orbital theory." Reviews of modern physics 23.2 (1951): 69. DOI: 10.1103/RevModPhys.23.69
2) Pauling and Wilson in Introduction to Quantum Mechanics with Applications to Chemistry refer to the following works in that respect (p. 346):
F. Hund, Z. f. Phys. 51, 759 (1928); 73, 1 (1931); etc.; R. S. Mulliken, Phys. Rev. 32, 186, 761 (1928); 41, 49 (1932); etc.; M. Dunkel, Z. f. phys. Chem. B7, 81; 10, 434 (1930); E. Hückel, Z. f. Phys. 60, 423 (1930); etc.
Hund's papers are unfortunately in German, but Mulliken's ones are quite an interesting read. Especially the second one,
Mulliken, Robert S. "Electronic structures of polyatomic molecules and valence. II. General considerations." Physical Review 41.1 (1932): 49. DOI: 10.1103/PhysRev.41.49
which (again, to my knowledge) introduced the very term "molecular orbital". | {
"domain": "chemistry.stackexchange",
"id": 6693,
"tags": "computational-chemistry, basis-set"
} |
Uses and interpretation of the 'Bowen Ratio' ($B_o=SH/LE$) | Question: The Bowen Ratio is the ratio of sensible heat flux to latent heat flux, so presumably it gives some information about the relative importance of these processes. But it is not clear how this information can be used to make inference about a system (e.g. the land-air interface of a corn field).
What are some uses of the Bowen Ratio?
What are typical values, and what is the range of meaningful values (e.g. under what conditions does $B_o\to\pm \infty$?)
Answer: I haven't seen it for years, but I think it was just a simple assertion to allow some primitive models to be built without solving the microphysics and biology of evaporation/transpiration. I think typical values were about a half. One could presumably go one step further and obtain empirical measurements and build those into models.
"domain": "physics.stackexchange",
"id": 1044,
"tags": "thermodynamics, energy, water, atmospheric-science"
} |
Quantum ideal gas - Canonical ensemble - Occupation number summation notation (Huang) | Question: (Question at the end, in bold, marked with an b))
For the quantum ideal gas, the hamiltonian (operator) of the system is:
\begin{align}
\mathcal{H}=\sum_{i=1}^N H_i=\sum_{i=1}^N \frac{P_i^2}{2m}
\end{align}
where $N$ is the number of particles.
In the canonical ensemble we have
\begin{align}
\rho = e^{-\beta \mathcal{H}}
\end{align}
where $\rho$ is the density operator and $\beta = \frac{1}{K_BT}$.
The entry $jj$ of the matrix that represents this operator in the basis of eigenvectors of $\mathcal{H}$ is, then:
\begin{align}
\rho_{jj} = e^{-\beta E_j}
\end{align}
and thus, the partition function is given by:
\begin{align}
Z_N = Tr[\rho]=\sum_{j=1}^\mathcal{N} e^{-\beta E_j}
\end{align}
where $\mathcal{N}$ is the number of eigenvalues $E_j$ of $\mathcal{H}$ (repeated included).
a)
This is what's written in some Statistical Mechanics books/notes (e.g., Huang):
\begin{align}
Z_N = \sum_{\{n_p\}} e^{-\beta E }
\end{align}
with
\begin{align}
E=\sum_{\vec{p}} \epsilon_\vec{p} n_\vec{p} \quad , \quad N=\sum_{\vec{p}} n_\vec{p}
\end{align}
where $n_\vec{p}$ is the occupation number (number of particles) corresponding to a configuration with momentum $\vec{p}$ (?) and $\epsilon_\vec{p}$ the respective energy.
b)
Is $E_j = E \,$? If so, how can I see that? If not, what exactly does $Z_N$ as written in the books (i.e. the summation as the above) mean?
Addendum:
I thought that if the system is in the state $|\Psi^{(j)} \rangle$, eigenvector of $\mathcal{H}$ associated with $E_j$, then maybe $|\Psi^{(j)} \rangle = |\phi_1^{(j)} \rangle |\phi_2^{(j)} \rangle...|\phi_N^{(j)} \rangle $ (with $|\phi_i^{(j)} \rangle$ being the state of particle $i$ when the system is in the state $|\Psi^{(j)} \rangle$) and, thus,
\begin{align}
\mathcal
H|\Psi^{(j)} \rangle &= \bigg(\sum_{i=1}^N H_i \bigg) |\phi_1^{(j)} \rangle |\phi_2^{(j)} \rangle...|\phi_N^{(j)} \rangle \\
&= \bigg(\sum_{i=1}^N \epsilon_i^{(j)} \bigg) |\Psi^{(j)} \rangle
\end{align}
$\qquad$ with $\epsilon_i^{(j)}$ being the eigenvalue of $H_i$ associated with the eigenvector $|\phi_i^{(j)} \rangle$.
Since $\mathcal{H}|\Psi^{(j)} \rangle = E_j |\Psi^{(j)} \rangle$, we would have:
\begin{align}
\tag{*} E_j = \sum_{i} \epsilon_i^{(j)}
\end{align}
If $n_\vec{p}$ particles have momentum $\vec{p}$, then I guess we could write this as:
\begin{align}
E_j = \sum_{\vec{p}} n_\vec{p}^{(j)} \epsilon_\vec{p}^{(j)}
\end{align}
Finally, instead of doing a sum over $j$, they decide to do a sum over all possible $n_\vec{p}$, thus getting the formula in the books. Is this correct?
EDIT (reply to glance):
This is what I got:
Suppose $E_1$ and $E_2$ are eigenvalues of the total hamiltonian $\mathcal{H}$ with $g_{E_1}=2$ and $g_{E_2}=1$. Then, there are two eigenstates $|E_{1_a} \rangle$ and $|E_{1_b} \rangle$ for which
\begin{align}
\mathcal{H} |E_{1_a} \rangle = E_1|E_{1_a} \rangle \\
\mathcal{H} |E_{1_b} \rangle = E_1|E_{1_b} \rangle
\end{align}
and one state $|E_{2} \rangle$ for which $\mathcal{H} |E_{2} \rangle = E_2|E_{2} \rangle$.
Each of these states may be written in terms of the states of the particles, $|\epsilon_{i} \rangle$ (eigenstates of $H_i$ with eigenvalues $\epsilon_i$). Say $N=3$ and that, e.g.:
\begin{align}
& |E_{1_a} \rangle = |\epsilon_{1},\epsilon_{2},\epsilon_{2} \rangle \rightarrow \text{config} \quad \{n_p\}_{1_a}=\{ p_1,p_2,p_2\} \\
& |E_{1_b} \rangle = |\epsilon_{2},\epsilon_{2},\epsilon_{1} \rangle \rightarrow \text{config} \quad \{n_p\}_{1_b}=\{ p_2,p_2,p_1\} \\
& |E_{2} \rangle = |\epsilon_{5},\epsilon_{5},\epsilon_{2} \rangle \rightarrow \text{config} \quad \{n_p\}_2=\{ p_5,p_5,p_2\}
\end{align}
Then, corresponding to $E_1$, we have occupation numbers $n_{p_1}=1$, $n_{p_2}=2$ and $n_{p_k}=0$ $\forall k>2$ and, for $E_2$, occupation numbers $n_{p_2}=1$, $n_{p_5}=2$ and $n_{p_k}=0$ for $k\neq 2,5$.
$Z$ would then be, according to your (2):
\begin{align}
Z &= e^{-\beta (1 \epsilon_1 + 2 \epsilon_2 + \sum\limits_{k>2} 0 \epsilon_k)} + e^{-\beta (1 \epsilon_1 + 2 \epsilon_2 + \sum\limits_{k>2} 0 \epsilon_k)} + e^{-\beta (1 \epsilon_2 + 2 \epsilon_5 + \sum\limits_{k \neq 2,5} 0 \epsilon_k)} \\
&= 2e^{-\beta (1 \epsilon_1 + 2\epsilon_2)} + e^{-\beta (1\epsilon_2 + 2\epsilon_5)} \\
&= g_{E_1}e^{-\beta E_1} + g_{E_2}e^{-\beta E_2}
\end{align}
Answer: The point is that to get the partition function you have to sum over all states, with the usual Boltzmann weight factor $e^{-\beta E}$.
If you label the energy eigenstates of your system with $| n \rangle$ then the partition function will have the form
$$ \tag{1} Z = \sum_n g_n e^{-\beta E_n},$$
where $E_n$ is the energy of the $n$-th state, and $g_n$ is a possibly present degeneracy factor counting the number of states with energy $E_n$ (which in the non-degenerate case is equal to 1 and thus unnecessary).
If you are dealing with a many-particle system, one way to label the states is by specifying the configuration $\{ n_j\}_j$, i.e. the occupation number $n_j$ of the $j$-th single-particle state, for every state $j$, and the partition function can accordingly be written as
$$ \tag{2} Z= \sum_{\{n_j\}} g(\{n_j\}) e^{-\beta E(\{n_j\})}, $$
where it is important to notice that the energy $E$ depends on the configuration $\{n_j\}$.
However, while (2) is usually more practical in these circumstances, one could equally well express the partition function in the form (1).
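As a toy numerical check of this equivalence (not from the original post — the single-particle energies are arbitrary made-up values, and for simplicity the three particles are treated as distinguishable, which only changes the values of the degeneracy factors, not the equality of the two sums), one can verify that summing $e^{-\beta E}$ over microstates, as in (2), gives the same $Z$ as grouping states by total energy with degeneracy factors, as in (1):

```python
from itertools import product
from collections import Counter
from math import exp

beta = 1.0
eps = [0.0, 0.5, 1.3]   # made-up single-particle energies
N = 3                   # three distinguishable particles

# Z as a sum over every N-particle microstate (one Boltzmann factor each)
microstates = list(product(range(len(eps)), repeat=N))
Z_micro = sum(exp(-beta * sum(eps[i] for i in s)) for s in microstates)

# Z as a sum over total energies E_j with degeneracy factors g_j
g = Counter(round(sum(eps[i] for i in s), 12) for s in microstates)
Z_levels = sum(g_j * exp(-beta * E_j) for E_j, g_j in g.items())

print(abs(Z_micro - Z_levels) < 1e-9)  # True: both ways of counting agree
```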
To try to make it clearer I'll show how we would do this:
let $E_j$ denote the eigenenergies of the total system (so remember that this $j$ is different from the one used above, which labeled one-particle states). Then the partition function has the form
$$ \tag{3} Z = \sum_j g_j e^{-\beta E_j}.$$
Why is this equal to (2)? Because we are still counting all states, just in a different way. Now $g_j$ is the number of states with total energy $E_j$, i.e. in terms of how the single-particle states are distributed:
$$ g_j = \sum_{\{n_{\textbf p} \}\, | \sum_{\textbf p} \! n_{\textbf p} \epsilon_{\textbf p}=E_j} g( \{n_{\textbf p} \}),$$
where $\epsilon_{\textbf p}$ is the energy of an electron in the state $\textbf p$, and I am now labeling single particle states using their momenta $\textbf p$. | {
"domain": "physics.stackexchange",
"id": 19224,
"tags": "statistical-mechanics, ideal-gas, quantum-statistics"
} |
How a pendulum accelerates? | Question: I have learned that $-g\sin\theta$ describes the acceleration of a pendulum. But suppose the pendulum is held from a point, say point $a$, or from another point $b$, where point $a$ is higher than point $b$. Upon releasing whatever object is attached to the pendulum, in both scenarios each ball must reach a given point, and this point is the same whatever point it is dropped from, say point $c$. Therefore the ball dropped from point $a$ must ultimately accelerate more than the one dropped from point $b$, since it is under the influence of gravity for longer.
So I figured, $-g\sin\theta = a$ ( sorry for the ambiguity, here $a$ is acceleration of course ), doesn't necessarily work in this case. Could we view the pendulum motion as individual discrete gains in energy due to gravity and then integrate it to achieve the accumulated energy at point b?
So perhaps something like,
$$\int_a^c -mg\sin\theta d\theta$$
Answer: The tangential acceleration of the mass is solely determined by its angular position $\theta$, by what you say: $a_\bot=-g\sin\theta$. There are no velocity-dependent forces in the scenario you describe, so the acceleration does not depend on the velocity either.
If an object is released from rest at $\theta_A$ and another object is released from rest at $\theta_B<\theta_A$, then when mass $A$ reaches $\theta_B$ it will indeed have a larger velocity than mass $B$ at $\theta_B$, but they both will have the same tangential acceleration at $\theta_B$ because the tangential acceleration is a function only of $\theta$.
A simpler example of this is a ball that is dropped from your hand versus one that is thrown from your other hand. Once both balls are released they will have the same acceleration ($g$ downwards) even though their velocities are different.
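A quick numerical sketch of the same point (the values $g = 9.81\ \mathrm{m/s^2}$ and $L = 1\ \mathrm m$ are made up for the illustration): integrate mass $A$ from $60^\circ$ down to $30^\circ$, where mass $B$ is only just being released, and compare velocities and accelerations at that angle.

```python
import math

g, L = 9.81, 1.0   # made-up values for the sketch (SI units)

def a_tangential(theta):
    # the tangential acceleration depends only on the angle, never on velocity
    return -g * math.sin(theta)

def omega_at(theta0, target, dt=1e-5):
    """Euler-integrate a release from rest at theta0 until reaching target."""
    theta, omega = theta0, 0.0
    while theta > target:
        omega += (a_tangential(theta) / L) * dt   # alpha = a_t / L
        theta += omega * dt
    return omega

omega_A = omega_at(math.radians(60), math.radians(30))   # already moving at 30 deg
omega_B = 0.0                                            # just released at 30 deg

print(abs(omega_A) > abs(omega_B))      # True: the velocities differ
print(a_tangential(math.radians(30)))   # ~ -4.905, identical for both masses
```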
The more general misunderstanding here is that a larger velocity means a larger acceleration must have caused that larger velocity. But this is not the case. Velocity changes over time due to acceleration, so "small" accelerations can cause "large" velocities and vice versa, but it is incorrect to assume that at some instant a large velocity means a large acceleration, or that the acceleration is solely, if at all, determined by the velocity. | {
"domain": "physics.stackexchange",
"id": 76088,
"tags": "newtonian-mechanics, forces, acceleration, free-body-diagram, oscillators"
} |
Modelling a robot using urdf / xacro for Moveit | Question:
Hi there
I'm working with a robot in Moveit: Here you can find the package of the robot.
I have an issue with their articulations, as it's shown in those pictures:
1-
All joints are in their initial position:
2-After moving Joint_2 in Moveit, this is what happens:
The first link_1 comes out of the robot !!
My question, as you probably understand, is: how and where can I fix that issue? :D
Originally posted by ROSkinect on ROS Answers with karma: 751 on 2014-11-19
Post score: 0
Original comments
Comment by gvdhoorn on 2014-11-19:
As the author of the staubli packages in the repository you link, I'm quite sure that the model you show in your screenshot is not part of that repository. You should probably clarify that in your question.
Comment by ROSkinect on 2014-11-19:
Yes gvdhoorn, you're right.
I was thinking about that when writing the question, but I figured it would complicate the question !!
So as you are the author: where is the center of rotation defined, and how can I solve that problem?
Comment by gvdhoorn on 2014-11-19:
Also: this is not really MoveIt related, but is a simple matter of modelling your robot correctly using urdf / xacro. I'd suggest updating your question title (and tags) to reflect this.
Comment by ROSkinect on 2014-11-20:
I changed the title
Answer:
This would appear to be an incorrectly specified center of rotation (origin) for the joint between link_1 and link_2, judging from the screenshot. Are you sure you defined their locations (and orientations) correctly?
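For illustration, a joint in a urdf is positioned by its `<origin>` tag (the joint's pose relative to the parent link's frame) and rotates about its `<axis>` tag. A hypothetical revolute joint between link_1 and link_2 might look like the sketch below — all numbers are placeholders, not the RX160's actual dimensions:

```xml
<joint name="joint_2" type="revolute">
  <!-- pose of the joint frame relative to the parent link's frame
       (placeholder values, not real RX160 dimensions) -->
  <origin xyz="0.15 0 0.55" rpy="0 0 0"/>
  <parent link="link_1"/>
  <child link="link_2"/>
  <!-- unit vector the child link rotates about, in the joint frame -->
  <axis xyz="0 1 0"/>
  <limit lower="-2.27" upper="2.27" effort="100" velocity="1.0"/>
</joint>
```

If the `<origin>` does not match where the mesh's pivot actually is, the link will appear to rotate about the wrong point, exactly as in the screenshots.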
Edit:
when I delete all directories and files except rx160.urdf and meshes; it still works! How can you explain that!
If by 'it still works' you mean you can still visualise the model, then that is not really hard to explain: all you need for that are the urdf and the meshes. All the other files in the package are just for documentation, packaging and convenience (the test_rx160.launch file, for instance).
and in the urdf we can't do a lot of things !
I don't really understand what you're trying to say.
The urdf is just a textual description of the kinematic structure of your robot. In the case of the staubli_rx160_support package, this is based on the dimensions of the physical RX 160 from Stäubli.
All you need to do is update the urdf with the information for your particular robot model, and make sure you create some meshes that match it. As Stäubli provides 3D models of their manipulators on their website, this should not be too difficult (but watch the origins and orientations of your links).
If you are not familiar with urdf, you might want to take a look at urdf/Tutorials. If you are, perhaps Create a URDF for an Industrial Robot could provide you some more insight into how that knowledge was used to create the staubli_rx160_support package.
Originally posted by gvdhoorn with karma: 86574 on 2014-11-19
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by ROSkinect on 2014-11-19:
What I'm doing actually is that I'm using a package for Robot_1 with Robot_2 (mine). This is why I get that error..
I'm looking for where I can get those parameters that you're talking about :D
Comment by gvdhoorn on 2014-11-19:
So you are using the urdf of the RX160, with your own meshes? No, that will not work (most of the time).
There are multiple ways to get details on the kinematic structure: the Staubli Studio can show them, or you can use the General Description section of the Instruction Manual.
Comment by ROSkinect on 2014-11-19:
Yes exactly that's what I'm doing !
I have 3D model of my robot working in Staubli studio so I have all details about it.
But how can I use or edit your package !?
Comment by gvdhoorn on 2014-11-19:
You could use the RX160 urdf as a template, but update the joint transforms with the data for your particular model. Be sure to update the axis definitions as well, otherwise it won't work. The studio should have a textbox somewhere it can show the DH params, which you can use to update the urdf.
Comment by ROSkinect on 2014-11-19:
It's not really clear where I should go.. but I'll look at that :D
Thank you so much :D
Comment by gvdhoorn on 2014-11-19:
First create a copy of the RX160 pkg, and name it after your own model. Then start updating all file-, urdf, launch file and xacro names to match your own model. Now update the urdf with the proper lengths of links and the joint limits of your model. You should end up with a pkg just for your robot.
Comment by ROSkinect on 2014-11-19:
oh that's more clear, I can't thank you enough :D
Comment by ROSkinect on 2014-11-20:
@Gvdhoorn
I worked little bit today on your package but I don't really get any new results, I wanna ask you:
when I delete all directories and files except rx160.urdf and meshes; it still works!
How can you explain that!
and in the urdf we can't do a lot of things !
Comment by ROSkinect on 2014-11-20:
First, when I said it was working I meant the articulations in MoveIt: we can change the different joints using only the urdf + meshes !!?
=> how is it still working, or how does it work?
I didn't work with urdf before, but it was easy for me to understand what is written in it (what each joint or link structure is for, etc.)
Comment by gvdhoorn on 2014-11-20:
You keep saying MoveIt, but I think you mean the joint_state_publisher. Are you using test_rx160.launch? And yes, there is no magic: RViz is just a visualisation tool. If you change joint states, it will update its visualisation.
Comment by ROSkinect on 2014-11-20:
No, I'm working with MoveIt and I do it all with it:
http://i.imgur.com/b7EF9a0.png
Comment by gvdhoorn on 2014-11-20:
Well I don't know what else to tell you: the problem you are seeing is because your urdf is incorrect: the location and orientation of your joints is wrong, or you have specified the wrong axis. I advise you to get some more experience with urdfs, and then just try again. | {
"domain": "robotics.stackexchange",
"id": 20094,
"tags": "ros, moveit"
} |
Half-Filled Shells and Stability explanation | Question: I am reading a book about Advanced Chemistry, and it is discussing the subject of half-filled orbitals.
The book notes that Chromium has an electron structure of $1s^2 2s^2 2p^63s^23p^63d^54s^1$
There is also some added stability when a sub-shell is half-full as this minimises the mutual repulsion between pairs of electrons in full orbitals.
Now, I'm confused by this explanation.
Is it correct to conclude that if a sub-shell is half filled, the repulsion between electrons in the half-filled subshell and other subshells with fully-filled orbitals is reduced (e.g. between the $3d$ and $3p$ subshell in Chromium)?
Answer: I actually just had this in class, and it confused me as well. This site helped me to understand it a bit better:
http://www.chemguide.co.uk/atoms/properties/3d4sproblem.html
The author, a retired chemistry professor, states:
"Many chemistry textbooks and teachers try to explain this by saying that the half-filled orbitals minimise repulsions, but that is a flawed, incomplete argument. You aren't taking into account the size of the energy gap between the lower energy 3d orbitals and the higher energy 4s orbital." | {
"domain": "chemistry.stackexchange",
"id": 8932,
"tags": "inorganic-chemistry, electrons, electronic-configuration"
} |
Would this be a metric? | Question: Would a matrix $M$ with diagonal entries not necessarily all equal to 1, i.e. $\operatorname{diag} M = (a,1,1,1)$, be a metric if $a \neq 1$ and $a \neq 0$? I.e., in this case would this be like some sort of more general Euclidean metric, or just not a metric at all?
Answer: For $a<0$, it's a possible metric for spacetime. For $a>0$, it's a possible metric for a 4-dimensional Euclidean space. For $a=0$, it's degenerate, and in many cases it's not possible to work with a degenerate metric, e.g., the machinery of general relativity requires that the metric be nondegenerate. It doesn't matter whether $a$ has a particular nonzero value $a_1$ or another nonzero value $a_2$ with the same sign; under a change of coordinates, its value can change from one of these to another. For example, in relativity, if you switch between natural units and SI units, you'll pick up factors of $c^2$.
It doesn't matter whether the elements have absolute value 1. In SR, one generally makes this choice for convenience, because it's possible. In GR, you can't make the metric have constant components. | {
"domain": "physics.stackexchange",
"id": 7743,
"tags": "metric-tensor"
} |
C# Code to Find all Divisors of an Integer | Question: I made this so I can feed it integers and return an array with all the divisors of that integer. I put checks in in case the integer is less than 2. Order of integers in the array must be smallest to largest. The code works. What I need is to optimize this code to be as fast as possible. Right now on an online IDE I am getting around 40ms combined against the given test. I need to trim this down as much as possible.
using System.Collections.Generic;
public class Divisors
{
public static bool IsPrime(int n)
{
if (n == 2) return true;
if (n % 2 == 0) return false;
for (int x = 3; x * x <= n; x += 2)
if (n % x == 0)
return false;
return true;
}
public static int[] GetDivisors(int n)
{
List<int> divisors = new List<int>();
if (n < 2)
{
return null;
}
else if (IsPrime(n))
{
return null;
}
else
{
for (int i = 2; i < n; i++)
if (n % i == 0)
divisors.Add(i);
}
return divisors.ToArray();
}
}
namespace Solution
{
using NUnit.Framework;
using System;
[TestFixture]
public class SolutionTest
{
[Test]
public void SampleTest()
{
Assert.AreEqual(new int[] {3, 5}, Divisors.Divisors(15));
Assert.AreEqual(new int[] {2, 4, 8}, Divisors.Divisors(16));
Assert.AreEqual(new int[] {11, 23}, Divisors.Divisors(253));
Assert.AreEqual(new int[] {2, 3, 4, 6, 8, 12}, Divisors.Divisors(24));
}
}
}
Answer: Style
As far as style goes, your code looks clean and readable, and follows the conventions, so good job for that.
You may want to include documentation comments with /// for your class and methods. Although the method names are descriptive enough in this rather simple case, it is a good habit to take on.
Tests
It's nice that you use a test framework to test your class. However, the method names don't match (Divisors vs GetDivisors) and comparing arrays doesn't work that way.
You could also benefit from including more tests for edge cases. What if the given argument is prime (since you make it a special case)? What if it is int.MaxValue? What if it is 0? What if it is negative?
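For instance, a sketch of such tests (using the GetDivisors spelling, and CollectionAssert for element-wise array comparison; the expected values follow the behavior of the posted code):

```csharp
[Test]
public void EdgeCaseTests()
{
    // The posted code returns null for primes and for anything below 2
    Assert.IsNull(Divisors.GetDivisors(13));
    Assert.IsNull(Divisors.GetDivisors(1));
    Assert.IsNull(Divisors.GetDivisors(0));
    Assert.IsNull(Divisors.GetDivisors(-24));

    // CollectionAssert compares the arrays element by element
    CollectionAssert.AreEqual(new int[] { 2, 3, 4, 6, 8, 12 },
                              Divisors.GetDivisors(24));
}
```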
Make your class static
Your Divisors class only has static methods. As such, it should be a static class. It might help the compiler with optimizations and can improve performance a little.
Unexpected behavior
As far as I know, 1 and n are always divisors of n, yet they are omitted from the returned array. You should probably include them, or at least document that they are omitted.
If the number is negative, you return null. While in theory negative numbers have divisors (and positive numbers have negative divisors), I understand why you chose this approach, as well as why you only return positive divisors. I would suggest you either document this behavior, or enforce it by using the uint datatype. I would choose the former approach, as int is much more prevalent, and using uint would most likely imply a lot of casting down the line.
Optimizing the algorithm
You check if the number is prime before looking for divisors. First of all, your algorithm for primality checking is rather naive and can be optimized in various ways. More importantly, this is an optimization only if the argument is prime, but is counterproductive in the vast majority of cases when it isn't prime. I suggest to simply get rid of that check.
Furthermore, you check if every number between 2 and n is a divisor of n; however, you know that if i is a divisor, so is n / i. Therefore, you can loop only on values between 1 and sqrt(n) and add two divisors for every match.
My attempt
public static class Divisors
{
/// <summary>
/// Finds all the divisors of any positive integer passed as argument.
/// Returns an array of int with all the divisors of the argument.
/// Returns null if the argument is zero or negative.
/// </summary>
public static int[] GetDivisorsMe(int n)
{
if (n <= 0)
{
return null;
}
List<int> divisors = new List<int>();
for (int i = 1; i <= Math.Sqrt(n); i++)
{
if (n % i == 0)
{
divisors.Add(i);
if (i != n / i)
{
divisors.Add(n / i);
}
}
}
divisors.Sort();
return divisors.ToArray();
}
}
As for performance, finding all divisors for every integer between 0 and 10,000 takes around 130ms with your solution on my machine vs 12ms with mine, so a performance gain of around 10x.
Finding divisors for int.MaxValue takes around 9s with your solution vs 5ms with mine, a performance gain greater than 1000x!
Finally, finding divisors for 2095133040 – the largest highly composite number that fits in the int datatype, with a total of 1600 divisors – takes around 5s with your solution, vs 13ms with my solution, again a performance gain of around 400x.
Performance can probably be improved further by estimating how many divisors a given input has and passing that estimate to the List<int> constructor, thus limiting how much memory reallocation is done as the list grows. In fact, the upper bound on the number of divisors is known: 1600. You could simply allocate the list as:
List<int> divisors = new List<int>(1600);
This brings the execution time down to 5ms for the highest composite number, but feels like a waste of memory in most cases. | {
"domain": "codereview.stackexchange",
"id": 37446,
"tags": "c#, performance, mathematics, factors"
} |
Google reCAPTCHA Validator | Question: This entire class came out of a chat discussion, and I'm curious on how it looks. (This is like literally 30 minutes of development time.)
The idea is to allow very easy, quick implementations of Google's reCAPTCHA ("I am not a robot") checkbox CAPTCHA algorithm. It only requires minor work from the implementor to make it work properly, and that was the point.
Warning: Minimal implementation effort required.
public class ReCaptchaValidator
{
private const string _HeadScriptInclude = "<script src='https://www.google.com/recaptcha/api.js'></script>";
private const string _ReCaptchaLocationInclude = "<div class=\"g-recaptcha %EXTRACLASSES%\" data-sitekey=\"%SITEKEY%\"></div>";
private readonly string _ReCaptchaSecret;
private readonly string _ReCaptchaSiteKey;
/// <summary>
/// Returns the script to be included in the <code><head></code> of the page.
/// </summary>
public string HeadScriptInclude { get { return _HeadScriptInclude; } }
/// <summary>
/// Use this to get or set any extra classes that should be added to the <code><div></code> that is created by the <see cref="ReCaptchaLocationInclude"/>.
/// </summary>
public List<string> ExtraClasses { get; set; }
/// <summary>
/// Returns the <code><div></code> that should be inserted in the HTML where the reCAPTCHA should go.
/// </summary>
/// <remarks>
/// I'm still not sure if this should be a method or not.
/// </remarks>
public string ReCaptchaLocationInclude { get { return _ReCaptchaLocationInclude.Replace("%SITEKEY%", _ReCaptchaSiteKey).Replace("%EXTRACLASSES%", string.Join(" ", ExtraClasses)); } }
/// <summary>
/// Creates a new instance of the <see cref="ReCaptchaValidator"/>.
/// </summary>
/// <param name="reCaptchaSecret">The reCAPTCHA secret.</param>
/// <param name="reCaptchaSiteKey">The reCAPTCHA site key.</param>
public ReCaptchaValidator(string reCaptchaSecret, string reCaptchaSiteKey)
{
_ReCaptchaSecret = reCaptchaSecret;
_ReCaptchaSiteKey = reCaptchaSiteKey;
}
/// <summary>
/// Determines if the reCAPTCHA response in a <code>NameValueCollection</code> passed validation.
/// </summary>
/// <param name="form">The <code>Request.Form</code> to validate.</param>
/// <returns>A boolean value indicating success.</returns>
public bool Validate(NameValueCollection form)
{
string reCaptchaSecret = _ReCaptchaSecret;
string reCaptchaResponse = form["g-recaptcha-response"];
bool passedReCaptcha = false;
using (WebClient client = new WebClient())
{
byte[] response = client.UploadValues("https://www.google.com/recaptcha/api/siteverify",
new NameValueCollection() { { "secret", reCaptchaSecret }, { "response", reCaptchaResponse } });
string reCaptchaResult = System.Text.Encoding.UTF8.GetString(response);
if (reCaptchaResult.IndexOf("\"success\": true") > 0)
passedReCaptcha = true;
}
return passedReCaptcha;
}
}
Usage:
string reCaptchaSecret = "";
string reCaptchaSiteKey = "";
ReCaptchaValidator rcv = new ReCaptchaValidator(reCaptchaSecret, reCaptchaSiteKey);
bool passedReCaptcha = rcv.Validate(Request.Form);
It should be pretty self-explanatory. You can use the ReCaptchaValidator.HeadScriptInclude to get the entire <script> tag for the head, and ReCaptchaValidator.ReCaptchaLocationInclude to get the <div> element for placement in the body. These aren't demonstrated here, but are easy to implement.
Answer: Some quick shots at the code
by retrieving the ReCaptchaLocationInclude property an ArgumentNullException is thrown, because you didn't initialize the ExtraClasses property. I would also like to suggest changing this property from autoimplemented to a normal one, so you can validate any set value.
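A sketch of what that could look like, keeping the existing member naming style (the null-coalescing setter guards against a caller assigning null, so ReCaptchaLocationInclude can never hand string.Join a null collection):

```csharp
private List<string> _ExtraClasses = new List<string>();

public List<string> ExtraClasses
{
    get { return _ExtraClasses; }
    // Validate on set so the getter always returns a usable list
    set { _ExtraClasses = value ?? new List<string>(); }
}
```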
bool passedReCaptcha = false; is not really needed here, so I wouldn't use this variable at all. Instead I would replace this
if (reCaptchaResult.IndexOf("\"success\": true") > 0)
passedReCaptcha = true;
with
return (reCaptchaResult.IndexOf("\"success\": true") > 0);
and for the IDE's love add a return false; at the end of the method. If you don't want to do this, it is fine too, but you should replace the former if with passedReCaptcha = (reCaptchaResult.IndexOf("\"success\": true") > 0); | {
"domain": "codereview.stackexchange",
"id": 15171,
"tags": "c#, object-oriented, polymorphism, captcha"
} |
Ros installation without root | Question:
I suppose that it happens quite often that people who are working in a closed environment without root access want to use ROS. Cf.
howto install w/o root priviliges
Installation of wstool and rosdep from source
In our case, for ros fuerte for example, we ended after 2 weeks of work with a big shell script downloading all the system dependencies (like for example BOOST, Log4cxx, PCL or OpenCV), installing them to $ROS_ROOT/usr and after that building ros from source while patching many CMakeLists.txt files.
I took a look at groovy and suppose that at least the building of ros with catkin should work quite fine due to the fact that building flags can be passed directly to cmake. Now the question:
Wouldn't it be nice for people working in such an environment to have a package manager that installs packages from source? This could then be called by rosdep. With this it would also be possible to install ros on every Linux, as the native package manager does not need to be called. If there is interest in something like that, I would like to work on such a software package.
Originally posted by 2_socke on ROS Answers with karma: 76 on 2013-01-21
Post score: 1
Answer:
I assume you mean a new custom package manager written for ROS and any Linux. Similar projects exist such as pkgsrc, autoproj, jhbuild. Also e.g. see robotpkg. If you want a solution like that, it would be wise to look at either of these first.
The following is mostly my personal opinion:
The ROS community decided rather to avoid own tooling as much as possible, and rely on existing package managers where possible. The main problem is the effort that it takes to maintain any toolset, dragging down ROS resources that could go into something else instead. The catkin build system is geared towards making it much easier to package software for plenty of different *nix systems, and patches to CMakeLists and such to make a package more compatible will also be accepted.
So if you want to put effort into something, for groovy it would probably be more appreciated if you either created packaging for a not-yet-supported Linux distro or maintained instructions for how to install on a not-yet-supported distro, see
http://www.ros.org/wiki/groovy/Installation
PS: For a larger audience, this could be discussed at ros-users, maybe.
Originally posted by KruseT with karma: 7848 on 2013-01-21
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by 130s on 2013-07-15:
Nice topic. Going on in the email. | {
"domain": "robotics.stackexchange",
"id": 12529,
"tags": "ros, build-from-source"
} |
Saving Arm Trajectory and Replaying it | Question:
Hello
How can I save PR2 arm trajectory and then replay the same trajectory at a later time?
One way I can think of is writing trajectory_msgs::JointTrajectory to a file and then reading the trajectory from the file whenever I need to reproduce the trajectory. But if I follow this method then I am not sure how to save "duration time_to_start" (which has many attributes) and "Header header".
Does anyone knows of any neater way for saving/replaying trajectories?
Thanks!!!
Originally posted by Ashesh on ROS Answers with karma: 16 on 2013-03-02
Post score: 0
Original comments
Comment by RosRos on 2013-03-14:
How can I write trajectory_msgs::JointTrajectory to a file?
Answer:
Hi Ashesh
Apart from using stdio.h to write code that reads a file of strings formatted like trajectory_msgs::JointTrajectory, you could try writing rostopic messages to your node. Or write your own YAML file that contains the information about your trajectory and upload it to the parameter server, then use a node to read the parameter server, format a JointTrajectory msg from the information, and publish it.
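For example, a hypothetical YAML layout for such a file (all joint names and values below are made up; a small node would read these parameters and assemble the JointTrajectory from them):

```yaml
# Hypothetical saved-trajectory file for the parameter server
joint_names: [r_shoulder_pan_joint, r_elbow_flex_joint]
points:
  - positions: [0.10, -0.50]
    time_from_start: 1.0   # seconds; convert with ros::Duration(1.0)
  - positions: [0.35, -0.20]
    time_from_start: 2.5
```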
As for converting an integer or float to a duration type you can save your duration as a float or integer and use:
ros::Duration(24*60*60);
with the time in seconds in the brackets. And your stamp values in your header you just use:
header.stamp = ros::Time::now();
before you publish it.
hope it helps
Originally posted by PeterMilani with karma: 1493 on 2013-03-03
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 13150,
"tags": "ros, jointtrajectory, pr2"
} |
Rearrange data without using Cut and Insert | Question: I'm fairly new to VBA, and this was basically a brute-force solution to a problem I was encountering. I wanted to take data that appeared in two columns, and pull it together into one.
The current code works, but is very slow with large datasets. I've been told to avoid using the clipboard if possible, but I'm not quite sure where to begin with this. I've made a few attempts to use an array, but I'm not quite sure where to start. Any other suggestions would be very welcome.
Private Sub Arra()
Dim Library As Worksheet
Set Library = Sheets("Library")
Dim Rng As Range
Dim i As Long
Dim lastRow As Long
i = 1
lastRow = Library.Range("A1").SpecialCells(xlCellTypeLastCell).row
While i <= lastRow
Set Rng = Library.Range("A" & i)
If Application.WorksheetFunction.CountA(Rng.Offset(0, 1)) = 1 Then
Rng.Offset(0, 1).Cut
Rng.Offset(1, 0).Insert Shift:=xlDown
Rng.Offset(0, 1).Insert Shift:=xlDown
ElseIf Application.WorksheetFunction.CountA(Rng.Offset(0, 1)) = 0 Then
i = i + 1
End If
Wend
End Sub
Answer: To speed it up, I would read the data into arrays: one array for column A and one for column B, then combine them into another array and print that to the sheet.
Option Explicit
Sub Rearrange()
Dim lastRow As Long
lastRow = Library.Cells(Rows.Count, 1).End(xlUp).Row
Dim firstColumn As Variant
firstColumn = Library.Range("A1:A" & lastRow)
Dim secondColumn As Variant
secondColumn = Library.Range("B1:B" & lastRow)
Dim totalCount As Long
totalCount = Application.CountA(firstColumn) + Application.CountA(secondColumn)
Dim combinedArray As Variant
ReDim combinedArray(1 To totalCount)
Dim i As Long
Dim index As Long
index = 1
For i = 1 To lastRow
combinedArray(index) = firstColumn(i, 1)
index = index + 1
If Not IsEmpty(secondColumn(i, 1)) Then
combinedArray(index) = secondColumn(i, 1)
index = index + 1
End If
Next
Library.Range("A1:A" & totalCount) = Application.Transpose(combinedArray)
End Sub
Arrays are fast!
Also, as you can see worksheets have a CodeName property - View Properties window (F4) and the (Name) field (the one at the top) can be used as the worksheet name. This way you can avoid Sheets("Library") and instead just use Library.
I also switched your
lastRow = Library.Range("A1").SpecialCells(xlCellTypeLastCell).row
To the standard.
I also used a For loop instead of Do While. | {
"domain": "codereview.stackexchange",
"id": 20037,
"tags": "vba, excel"
} |
C++ Threaded Logger | Question: What is it?
It's a fairly simple logger which utilises a thread.
How it works
Note: this is a pretty terrible explanation; perhaps it would be easier to look at the code.
The logger class is a singleton, which contains the function log(). Upon log()'s first use a static logger instance is created. The function then returns a logstream object (constructed with a reference to the logger), which is a derivative of std::ostringstream. This object is used to format the message. Upon its destruction it sends the formatted std::string back to the logger using the push() function, which locks a mutex and then appends the std::string to a private std::queue belonging to the logger.
When the logger is constructed it creates a thread running the print_routine() function, which is a loop that locks a mutex, prints all the contents of the std::queue, and then sleeps for a set interval. Upon destruction it tells the routine to finish by setting the bool m_print to false and joins the thread.
Code
log_enum.h
#ifndef ANDROMEDA_LOG_ENUM_H
#define ANDROMEDA_LOG_ENUM_H
namespace andromeda {
enum class log_level {
info,
warning,
severe,
fatal
};
}
#endif
logger.h
#ifndef ANDROMEDA_LOGGER_H
#define ANDROMEDA_LOGGER_H
#include <sstream>
#include <mutex>
#include <queue>
#include <chrono>
#include <thread>
#include "log_enum.h"
namespace andromeda {
class logger;
}
#include "logstream.h"
namespace andromeda {
class logger {
std::queue<std::string> m_q;
std::mutex m_q_mu;
std::mutex m_stdout_mu;
std::mutex m_stderr_mu;
std::thread m_print_thread;
bool m_print = true;
static void print_routine(logger *instance, std::chrono::duration<double, std::milli> interval);
logger();
~logger();
public:
logger(logger const&) = delete;
void operator=(logger const&) = delete;
static logstream log(log_level level = log_level::info) {
static logger m_handler;
return logstream(m_handler, level);
}
void push(std::string fmt_msg);
};
}
#endif
logger.cpp
#include "logger.h"
#include <iostream>
namespace andromeda {
logger::logger()
{
m_print_thread = std::thread(print_routine, this, std::chrono::milliseconds(16));
}
logger::~logger()
{
m_print = false;
m_print_thread.join();
}
void logger::push(std::string fmt_msg)
{
std::lock_guard<std::mutex> lock(m_q_mu);
m_q.push(fmt_msg);
}
void logger::print_routine(logger *instance, std::chrono::duration<double, std::milli> interval)
{
while(instance->m_print || !instance->m_q.empty()) {
auto t1 = std::chrono::steady_clock::now();
{
std::lock_guard<std::mutex> lock(instance->m_q_mu);
while(!instance->m_q.empty()) {
std::cout << instance->m_q.front() << std::endl;
instance->m_q.pop();
}
}
auto t2 = std::chrono::steady_clock::now();
std::chrono::duration<double, std::milli> time_took = t2 - t1;
//sleep
if(time_took < interval && instance->m_print) {
std::this_thread::sleep_for(interval - time_took);
}
}
}
}
logstream.h
#ifndef ANDROMEDA_LOGSTREAM_H
#define ANDROMEDA_LOGSTREAM_H
#include <sstream>
#include "log_enum.h"
namespace andromeda {
class logger;
class logstream : public std::ostringstream {
logger& m_logger;
log_level m_level;
std::string get_level_string();
std::string get_time_string();
public:
logstream(logger& log, log_level);
~logstream();
};
}
#endif
logstream.cpp
#include "logstream.h"
#include <ctime>
#include "logger.h"
namespace andromeda {
logstream::logstream(logger& log, log_level level) : m_logger(log), m_level(level)
{}
logstream::~logstream()
{
//note: not using time yet because it adds 0.015 ms
//m_logger.push(get_time_string() + get_level_string() + str());
m_logger.push(get_level_string() + str());
}
std::string logstream::get_level_string()
{
std::string temp;
switch(m_level) {
case log_level::info: temp = "[INFO]"; break;
case log_level::warning: temp = "[WARNING]"; break;
case log_level::severe: temp = "[SEVERE]"; break;
case log_level::fatal: temp = "[FATAL]"; break;
}
return temp; //copy elision should be guaranteed with a C++17 compiler
}
std::string logstream::get_time_string()
{
std::time_t t = std::time(nullptr);
#ifdef _WIN32
std::tm time;
localtime_s(&time, &t);
#else
std::tm time = *std::localtime(&t);
#endif
char t_str[20];
std::strftime(t_str, sizeof(t_str), "%T", &time);
return ("[" + std::string(t_str) + "]");
}
}
main.cpp
#include "logger/logger.h"
#include <iostream>
int main(int argc, char **argv) {
{
using namespace andromeda;
auto t1 = std::chrono::steady_clock::now();
logger::log() << "Hello World";
auto t2 = std::chrono::steady_clock::now();
/*
auto t3 = std::chrono::steady_clock::now();
std::cout << "Hello World" << std::endl;
auto t4 = std::chrono::steady_clock::now();
*/
std::chrono::duration<double, std::milli> d1 = t2 - t1;
//std::chrono::duration<double, std::milli> d2 = t4 - t3;
logger::log() << "logger took " << d1.count() << "ms";
//std::cout << "cout took " << d2.count() << "ms" << std::endl;
//This line is here to test whether everything is printed before program exit
logger::log(log_level::fatal) << "end of program test: " << 33;
}
return 0;
}
Benchmark
I ran a benchmark of this logger vs std::cout, without using the time string.
run 1: logger = 0.02925ms and cout = 0.007725ms -> log/cout = 3.77
run 2: logger = 0.028469ms and cout = 0.008442ms -> log/cout = 3.37
run 3: logger = 0.027484ms and cout = 0.016155ms -> log/cout = 1.7
run 4: logger = 0.028764ms and cout = 0.007859ms -> log/cout = 3.66
run 5: logger = 0.027457ms and cout = 0.008173ms -> log/cout = 3.36
On average the logger was 3.172 times slower than std::cout.
Is that bad?
What I'm aiming for
I'm aiming for it to be reasonably fast, thread-safe and cross-platform.
What I think could be improved
I think the get_time_string() could be improved. At the moment it worsens performance by about 50%. Another thing is the detail: I think it might be a good idea to include the source and the thread id. One last minor thing is the log_level. I don't have much experience, so I don't know how many different levels are required for bigger projects.
Any feedback is appreciated.
Answer: Benchmarking
It's better to have more datapoints than fewer in a benchmark. So it's best to execute the code multiple times and take an average (or just measure the total), rather than running it once.
Also (as mentioned above) the first time logger::log() is called, the static variable is initialised, creating a new thread. So a better benchmark would be something like:
logger::log(); // get thread creation out of the way...
auto runs = 500;
auto t1 = std::chrono::high_resolution_clock::now();
for (auto i = 0; i != runs; ++i)
logger::log() << "Hello World";
auto t2 = std::chrono::high_resolution_clock::now();
which for me gives 0.812507ms. (The first call to logger::log() takes around 1.31655ms, btw).
Routing the same thing directly to std::cout takes ~500ms!
One extra note: comparing the time taken by the logger in the main thread with the time taken by std::cout is comparing two different things. The logger call creates / copies / concatenates some strings and adds them to a queue, whereas cout is actually sending stuff to standard output.
Since the logger is doing the same work with cout in another thread anyway, we should treat the time taken by the logger::log() calls in the main thread as overhead added on top of cout.
We could isolate that overhead and profile it. (Running under a profiler with the number of runs at 500000, and commenting out a line in logger.cpp: //std::cout << instance->m_q.front() << std::endl; gives decent indications as to what takes the most time).
Beyond checking the overhead on the other thread, or pure curiosity, there's not much point. An overhead of 0.8ms on ~500ms of work is probably fine.
Code - minor inefficiencies
get_level_string() and get_time_string() can both be const.
Since we're focused on performance...
The level strings could be static members of the logstream class, which removes the need to create them every time.
Rather than using a temporary buffer, we can use std::strftime to write directly into a std::string, something like:
std::string result(21, '[');
auto charsWritten = std::strftime(&result[1], result.size() - 1, "%T", &time);
result[1 + charsWritten] = ']';
result.resize(1 + charsWritten + 1);
return result;
Concatenating multiple strings (e.g. m_logger.push(get_time_string() + get_level_string() + str());) can be inefficient, due to the creation of intermediate string objects which may need multiple allocations. This can be avoided by creating an output string object, .reserve()ing the necessary size, and using the += operator to copy each one into the output string.
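That recommendation can be sketched like this (the function and parameter names here are mine, not from the original code):

```cpp
#include <string>

// Sketch of the suggested pattern: reserve the final size once, then append,
// instead of chaining operator+ (which creates intermediate temporaries).
std::string build_message(const std::string& time_s,
                          const std::string& level_s,
                          const std::string& body) {
    std::string out;
    out.reserve(time_s.size() + level_s.size() + body.size());
    out += time_s;
    out += level_s;
    out += body;
    return out;
}
```

With the reserve, the output string allocates at most once, instead of potentially once per intermediate temporary.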
m_q.push(std::move(fmt_msg)); avoids a copy!
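Applied to the push method above, that could look like the following sketch (the mutex is omitted here for brevity; the real method would keep its lock_guard):

```cpp
#include <queue>
#include <string>
#include <utility>

std::queue<std::string> q;  // stand-in for the logger's m_q member

// Take the string by value and move it into the queue: a caller passing an
// rvalue (e.g. a freshly concatenated message) then pays for no extra copy.
void push(std::string fmt_msg) {
    q.push(std::move(fmt_msg));
}
```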
These are all minor things though. A profiler will tell you what actually needs changing. | {
"domain": "codereview.stackexchange",
"id": 30199,
"tags": "c++, performance, multithreading, thread-safety, logging"
} |
In rotational motion, why does the total gravitational force act at the centre of mass (centre of gravity)? | Question: In cases of rigid-body rotation about an axis other than one through the centre of mass, why do we always consider the force of gravity to act on the body entirely at the centre of mass?
Example: consider a rod of mass M and length L pivoted to a wall at a distance L/4 from one end. When it is in the horizontal position, we always take torque = MgL/4. Why are we considering the total mass to be on one side when some is on the other? Why don't we use Mg/4 and 3Mg/4 on each side? I mean, what actually occurs in the system? If the centre of gravity moved away from the centre of mass, this approximation would be useless.
Answer: Because a rigid body acts like its entire mass is contained within a point called center of mass. This greatly simplifies solving problems such as those in your examples. But in principle you can adopt "your approach" given in your example, where you consider that both sides have mass, although this unnecessarily complicates the solution. For example, you could say that forces acting on two sides of your rod are $mg/4$ and $3mg/4$. Now, the distances of those forces to the rotational axis of the rod are $l/8$ and $3l/8$ respectively. And you can see that they give opposite torques, meaning that the total torque is $3mg/4\cdot 3l/8 - mg/4\cdot l/8=mgl/4$, which is the same as you would get if you consider that the entire mass of the rod is contained in its center of mass. | {
"domain": "physics.stackexchange",
"id": 84679,
"tags": "newtonian-mechanics, forces, rotational-dynamics, reference-frames"
} |
What's the relation between output voltage and time to boil water given the same kettle? | Question:
An electric kettle rated 220V, 2000W needed 10 minutes to boil water when it is half filled with water in Singapore where the output voltage is 220V. Estimate the amount of time needed to do the same task if the kettle was brought to the USA where the output voltage is 110V.
Options:
1. 5 minutes
2. 10 minutes
3. 20 minutes
4. 40 minutes
To be able to answer this you have to know how the output voltage is related to the power produced by the kettle. Can someone explain this relationship?
I don't quite understand what it is meant by " An electric kettle rated 220V, 2000W". I get that 2000W means the kettle uses 2000J/s and V is the work done by kettle per unit charge. But what is the difference between the 2 (power and charge of the kettle)? And how does the V of kettle differ from output voltage?
Answer: First, this is a poor question (the question that was asked of you, presumably). That's because it requires you to make an assumption about how the kettle works.
For instance, if the kettle's circuitry strives to maintain a constant power output (2000 W), then it will draw more current and maintain its 2000 W output when operated at 110V, which would mean no change in the time.
That said, 10 minutes is not the intended answer (although if I were grading this test and someone said 10 minutes, I'd accept it, because you are left to assume how the kettle operates). The other natural assumption is that the heating element inside the kettle is a fixed resistance ($R$ = const), so that the power dissipated goes like $V^2 / R$. With half the voltage, you'd have 1/4 the power and, loosely speaking, it would take 4 times as long: 40 minutes.
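Under that fixed-resistor assumption, the numbers work out as follows (a sketch that treats all of the electrical power as heat delivered to the water):

$$
R = \frac{V_1^2}{P_1} = \frac{(220\,\mathrm{V})^2}{2000\,\mathrm{W}} = 24.2\,\Omega,
\qquad
P_2 = \frac{V_2^2}{R} = \frac{(110\,\mathrm{V})^2}{24.2\,\Omega} = 500\,\mathrm{W},
$$
$$
t_2 = t_1\,\frac{P_1}{P_2} = 10\,\mathrm{min} \times \frac{2000\,\mathrm{W}}{500\,\mathrm{W}} = 40\,\mathrm{min}.
$$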
Your teacher should have added, "assume the kettle is nothing more than a resistor that has a voltage applied across it." | {
"domain": "physics.stackexchange",
"id": 16294,
"tags": "homework-and-exercises, electricity, voltage, power"
} |
What is this fluffy bug? | Question: I found this bug (actually he found me) while walking through some bushes in the city (Iași, Romania).
It is a small bug, about a quarter of a fingernail, white with a fluffy tail. It leaves a white fluff on bushes (that is how I met it: I brushed a bush and got the white fluff on me, and when I wanted to clean it off I saw two of them moving on my shirt), and they seem to stay around it. I also found one near the plants, on a stone pavement, but I presume it got lost.
Answer: Based on the image and on the description, it could be a woolly aphid, a kind of insect that produces filaments of a waxy, cotton-like substance that gives them a fluffy appearance.
The images on Wikipedia aren't convincingly similar, but I found a better reference on abundantnature.com: | {
"domain": "biology.stackexchange",
"id": 7389,
"tags": "species-identification, zoology, entomology"
} |
Matching delayed signals | Question: I have pairs of 1d digitised waveform signals which are almost identical - except there are sections which are the same in both but one is delayed slightly.
I need to find sections and the delay with the corresponding section in the other channel.
I can set a maximum delay BUT I need to measure a delay to less than the sample interval - so some sort of convolution/correlation approach rather than a simple feature id.
Any suggestions where to start?
Answer: This may be a good candidate for a two step approach:
Step 1 would be a pretty coarse running cross correlation with a threshold detector to identify what parts in the signals are matching.
Step 2 would then determine the actual delay. There are various ways to get sub-sample resolution:
Upsample to desired resolution and then cross correlate
Short-term Fourier transform and match a linear phase difference with a weighted least-squares approach
Delay-locked loop with a fractional delay filter
Match the two signals with an adaptive filter. Then calculate the Fourier transform of the filter's impulse response and extract the delay through a linear phase match | {
"domain": "dsp.stackexchange",
"id": 191,
"tags": "correlation"
} |
Solving the quantum well gives you eigenenergies $E_n$; are these energies in the conduction band or the valence band? | Question: I wonder if the energies $E_n$ that are derived from solving the SWE for the quantum well can be considered as energies in the conduction band or the valence band. In other words, is $E_1$ the lowest energy level in the conduction band or the highest energy in the valence band?
Answer: A quantum well is a much more general concept that applies to any potential... not just to the energy bands in solids. For example, it could refer to a space between two charged metal plates.
The real potential profile in a crystal is actually rather complicated because it consists of an enormous number of atoms. However, you can use a simple quantum well model to find the energy bands in a semiconductor heterostructure (i.e., a "sandwich" of two different semiconductor materials).
The motion of a "free" electron within an energy band in the crystal appears rather different from an electron in a vacuum. A simple "hack" to the quantum well model solves this by replacing the electron mass with an effective mass depending on which band you're interested in. For example, in GaAs, it is $0.067m_0$ for electrons in the conduction band and $0.62m_0$ for "holes" in the heavy-hole band.
So, the short answer is: the energies for a quantum well can be either the conduction or valence band states. You just need to choose an appropriate effective mass for the equation.
Note also that semiconductor quantum wells tend to be rather shallow (barrier potentials of a few hundred meV) so the infinite quantum well model is quite a poor approximation. It is much better to solve the finite quantum well model for these systems, which unfortunately has to be done computationally rather than by using a simple direct equation. | {
"domain": "physics.stackexchange",
"id": 22887,
"tags": "energy, schroedinger-equation"
} |
Is spin necessary for electromagnetism? | Question: I know that spin is needed for defining the magnetic moment of any particle, and I have also read that the spin actually is the reason why some materials are magnetic. What I want to know is whether spin is necessary for the some interactions in the electromagnetic field.
Let me expound a bit: in classical electromagnetic field theory, the electric and the magnetic fields could be considered as some combinations of partial derivatives of the vector potential ($A_\mu$). Any particle couples with the field and interacts with other particles through it.
Moving on, if we consider the quantum field theory version, we have two particles coupled with the electromagnetic field, which then interact with the exchange of bosons (photons). My question is: how big of a role does spin play in the interactions which happen through the electromagnetic field? Are there some interactions which spinless particles cannot have, but those with spin can?
Answer: The charged pions, for example, $\pi^+$ and $\pi^-$, have zero spin but interact with a magnetic field, as can be seen from the curved tracks they leave in a bubble chamber with a magnetic field. So, to answer your question, spin is not theoretically necessary for electromagnetism. You could have a perfectly good and complete EM theory with spinless particles. But the real world doesn't work like that. | {
"domain": "physics.stackexchange",
"id": 94015,
"tags": "electromagnetism, electromagnetic-radiation, quantum-spin, quantum-electrodynamics, polarization"
} |
What happens to the umbilical cord inside the mother? | Question: After giving birth to a child, the umbilical cord is cut (and stored if they want). The end connected to the child's navel will fall off eventually, but what happens to the end inside the mother?
Will it be removed right after birth by doctors or what happens?
Answer: Labor is typically divided into 3 stages:
Stage 1: From the onset of contractions (true labor pains) to full dilatation of the cervix (which is about 10 cm) - this takes about 12 to 18 hours
Stage 2: From full dilatation of cervix to expulsion of fetus - This takes about ~ 30 minutes
Stage 3: From expulsion of fetus to expulsion of placenta - this takes about ~ 15 minutes. During the third stage, the umbilical cord, which is attached to the placenta, is expelled along with the placenta. This would be the answer to your question.
Source: Hympath.com | {
"domain": "biology.stackexchange",
"id": 3488,
"tags": "pregnancy, children"
} |
What does RSGD stand for? | Question: I'm reading a paper that involves an algorithm for RSGD. It's clearly a form of stochastic gradient descent, but I can't find what the R stands for. The authors provide their own implementation of it, and I would like to find its original formulation to compare.
Answer: It's probably Riemannian stochastic gradient descent (R-SGD), (Stochastic Gradient Descent on Riemannian Manifolds).
You'll find several articles on the subject by searching for this term. | {
"domain": "cs.stackexchange",
"id": 21412,
"tags": "gradient-descent"
} |
What does it mean to be Turing reducible? | Question: I'm confused about what it means to be Turing reducible. I thought I understood what it meant, but apparently not.
$A \leq B $ Means that A is Turing reducible to B. This means that given an oracle for Machine B, A can be solved.
I'm confused in the case of SUPERHALT which decides whether a Turing machine with access to the HALT oracle halts or not. I feel like SUPERHALT is reducible to HALT because I can just put any Turing machine into a HALT oracle and then put that result into another HALT oracle.
So in summary I believe:
Given a Turing machine $M$, input $x \in \{0,1\}^*$, and oracle $HALT$: $$SUPERHALT(M(x)) <_T HALT(M(x))$$ because $$SUPERHALT(M(x)) = HALT(HALT(\langle M(x)\rangle))$$
Which is wrong, but I just want to know what I'm misunderstanding.
Answer: From wikipedia article Turing Reduction, in computability theory, a Turing reduction from a problem $A$ to a problem $B$, is a reduction which solves $A$, assuming the solution to $B$ is already known (Rogers 1967, Soare 1987). It can be understood as an algorithm that could be used to solve $A$ if it had available to it a subroutine for solving $B$. More formally, a Turing reduction is a function computable by an oracle machine with an oracle for $B$. Turing reductions can be applied to both decision problems and function problems.
This potentially allows us, along with a few other things, to solve problem $A$ using a deterministic Turing machine if $B$ is solvable using deterministic Turing machine.
I assume $SUPERHALT = \{ \langle M,x \rangle \ | \ M^{HALT}$ halts on input $x \}$. If so, then your reduction is incorrect.
First of all, a Turing machine $M$, that uses $HALT$ as an oracle, cannot be used as a non-oracle machine and given as an input of $HALT$. So using something like $HALT(\langle M, x \rangle)$ leads nowhere.
We can use relativizing proof to prove that $SUPERHALT$ is unsolvable even with availability of $HALT$ as an oracle.
We simply take Turing's original proof (with the machine given as self-input) that the halting problem is unsolvable, and give all the machines an oracle for the halting problem. Everything in the proof goes through as before.
From the fact that $SUPERHALT$ is unsolvable (even with the availability of $HALT$ as an oracle) we can prove $SUPERHALT \not\leq_T HALT$. Suppose there is a Turing reduction $M'$ for $SUPERHALT$, which uses $HALT$ as a subroutine. Then this $M'$, which uses $HALT$ as an oracle, is able to decide $\langle M^{HALT},x\rangle \in SUPERHALT$. This is a contradiction. Hence $SUPERHALT \not\leq_T HALT$. | {
"domain": "cs.stackexchange",
"id": 6314,
"tags": "computability, reductions, halting-problem"
} |
Where to find publicly available Sanger chromatography data? | Question: I am looking for any public available databases where I can download Sanger chromatography data ideally in Ab1 or SCF file format. I need extremely large amounts of this, since I want to use it for machine learning.
Answer: NCBI Trace Archive
The NCBI Trace Archive is a permanent repository of DNA sequence chromatograms (traces), base calls, and quality estimates for single-pass reads from various large-scale sequencing projects, containing more than 2 billion traces. | {
"domain": "bioinformatics.stackexchange",
"id": 1928,
"tags": "ngs, sequence-analysis, public-databases, data-download"
} |
How to send a vector with Service Client Communication? | Question:
Hi
How can I send a whole vector at once (not one entry at a time) from the client to the service? Is it possible?
Thanks
Originally posted by Developer on ROS Answers with karma: 69 on 2018-06-05
Post score: 0
Answer:
Yes this is possible.
You can define a variable in a service or message definition as a vector by adding empty square brackets after the type, eg:
int32[] myVectorOfInts
So in your case to send a vector of values to a service provider from the client your service definition could look something like this:
string singleVariable
int32[] vectorOfIntegers
std_msgs/ColorRGBA[] vectorOfAnotherMessage
---
int32 variableReturnedToClient
Hope this helps.
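On the C++ client side, the generated request type exposes such an array field as a std::vector, so the whole vector can be assigned in one go. A sketch with a stand-in struct (the real type would come from the header catkin generates for the .srv file; the field names mirror the definition above, the struct name is made up):

```cpp
#include <cstdint>
#include <string>
#include <vector>

// Stand-in for the request half of the service definition above.
struct SendVectorRequest {
    std::string singleVariable;
    std::vector<int32_t> vectorOfIntegers;
};

// Fill the whole vector with one assignment rather than entry by entry.
SendVectorRequest make_request() {
    SendVectorRequest req;
    req.singleVariable = "batch";
    req.vectorOfIntegers = {1, 2, 3, 4, 5};
    return req;
}
```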
Originally posted by PeteBlackerThe3rd with karma: 9529 on 2018-06-05
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by Developer on 2018-06-05:
Thanks! it helped | {
"domain": "robotics.stackexchange",
"id": 30972,
"tags": "ros, communication, ros-kinetic, service"
} |
What happens to the human brain when unconscious? | Question: What part of the brain gets affected, and does it harm the brain? Thank you, I just needed some extra info for a video I'm making.
Answer: We don't know. Disruption of the cortex, thalamus, or brainstem structures (someone mentioned the ascending reticular activating system) can all cause loss of consciousness. Different drugs have different mechanisms of action and some might cause different "types" of loss of consciousness - for example, ketamine and propofol seem very different, and trauma is a completely different issue.
The reason this is such a hard problem is that these structures are all interconnected, and there are likely proximate and distal causes of loss of consciousness. For an analogy, imagine you starve to death. You could make a convincing case that lack of food did it - so conclude food keeps you alive. But you weren't declared dead when you stopped eating; you were probably declared dead when your heart stopped. So conclude your heart keeps you alive. But in the middle somewhere, you had changes in ion concentrations in your blood, loss of glucose. So salts and sugars keep you alive. I can prove these are all true - take out your heart, you die. Stop feeding you, you die. Fill your arteries with KCl, you die. | {
"domain": "biology.stackexchange",
"id": 6163,
"tags": "human-biology, neuroscience, brain"
} |
What happens to the $E$ and $B$ fields at the edge of a laser beam? | Question: In an ideal plane wave, $E$ and $B$ fields run off to infinity in both directions along straight paths. I've always assumed the center of a laser beam looks like an ideal plane wave, with $E$ and $B$ fields oscillating as one would expect from the classical picture of EM waves. But then what happens at the edge of the beam? The "light" stops, but the field lines aren't allowed to—they need to either terminate on charges or form loops. They can't go on forever in a nice uniform way, because then the beam would be infinitely wide. So what do they do?
Answer: This is a good model at the center of the beam,
I've always assumed the center of a laser beam looks like an ideal plane wave,
but it doesn't say anything about what happens near the edges. If you want a full model of the beam, then the basic tool we use is that of a gaussian beam, which is a solution of the Helmholtz equation in the paraxial approximation, whose cross-section is a gaussian with a width $w_0$ that's much larger than the wavelength of the beam.
However, it's also important to point out that gaussian beams are only valid within the limit $w_0\gg \lambda$, and even then they are not fully reliable as vector solutions, because, strictly speaking, they are not solutions of the Gauss law. To see why, consider a beam propagating along the $z$ axis with linear polarization along $x$ (so $E_y\equiv0$ by symmetry): then, as you get into the beam from the negative $x$ axis, the $E_x$ component needs to increase from zero to its maximal value over a finite expanse (however large), which means that
$$
\frac{\partial E_x}{\partial x} \neq 0,
$$
and therefore that if the longitudinal component $E_z$ vanishes, as it does in the gaussian-beam solution, the Gauss law
$$
\frac{\partial E_x}{\partial x} + \frac{\partial E_z}{\partial z} =0
$$
cannot be satisfied.
So, what does this mean? Basically, it is the answer to your question,
But then what happens at the edge of the beam?
and it tells you that at the edge of the beam the vector character of the beam needs to adjust, either by tilting longitudinally or by acquiring a (small) forwards ellipticity.
Of course, if you're deep in the paraxial regime, with $\lambda/w_0\ll 1$, then these effects will be small (and indeed, to leading order, $|E_z|/|E_x|$ is proportional to $\lambda/w_0$) but they're still conceptually important.
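One way to see where that scaling comes from (a sketch, assuming a slowly varying envelope on a carrier $e^{ikz}$, so that $\partial E_z/\partial z \approx ik E_z$): the Gauss law above then gives

$$
E_z \approx \frac{i}{k}\,\frac{\partial E_x}{\partial x},
\qquad\text{so}\qquad
\frac{|E_z|}{|E_x|} \sim \frac{1}{k w_0} = \frac{\lambda}{2\pi w_0},
$$

using the estimate $\partial E_x/\partial x \sim E_x/w_0$ for a transverse profile of width $w_0$.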
However, that said, it's important to point out that this is mostly a misconception:
the field lines aren't allowed to—they need to either terminate on charges or form loops.
The picture of field lines is only valid in the electrostatic regime. For optical fields, we normally consider fields of the form
$$
\mathbf E (\mathbf r,t) = \mathrm{Re}\mathopen{}\left[ \tilde{\mathbf{E}}(\mathbf r) e^{-i\omega t} \right] \mathclose{}
$$
with $ \tilde{\mathbf{E}}(\mathbf r)$ a complex-valued vector amplitude. How do you define a field line then? And, for that definition, what theorem guarantees that those field lines will have any properties at all? (The electrostatic version is no longer applicable.) The answer is, of course, that there isn't one: in the electrostatic regime, those requirements were simply tools to enforce the Gauss law $\nabla \cdot \mathbf E = 0$, but now those tools are gone, and you're reduced to handling that PDE directly, as I've done above. | {
"domain": "physics.stackexchange",
"id": 51627,
"tags": "electromagnetism, optics, electromagnetic-radiation, electric-fields, laser"
} |
Does the Multiverse Theory rely on Superposition? | Question: Note: I'm not asking if the Multiverse Theory and the MWI are the same thing, cf. e.g. this Phys.SE post.
If I understand it correctly, the Multiverse Theory doesn't rely on the Many Worlds Interpretation. Is my understanding correct, or does the Multiverse Theory relies on the Many Worlds Interpretation?
If it doesn't rely on the MWI, does it rely anyhow on Superposition?
If it doesn't, does the theory even "belong" to Quantum Mechanics?
I appreciate the help.
Answer: The linked multiverse theory is just a general supposition that there are other universes that follow different rules.
The many worlds interpretation, a separate idea, is specifically with regard to quantum mechanics, and interprets superposition as indicating that there is a world associated with each possible outcome.
Seeing that there could be different rules of quantum mechanics, there can therefore be multiple universes, each with the "many worlds" of its own version of quantum mechanics. Therefore this general idea of there being other worlds with different laws of physics is sometimes thought to be a bigger multiverse than the many worlds interpretation, as described in the wiki article you linked.
The multiverse idea you have linked could have been supposed before quantum mechanics in the days of classical physics by just supposing there are other worlds with other rules of physics -- and isn't necessarily connected to superposition.
Everett's many worlds interpretation though is a specific interpretation of quantum mechanics and is very much associated with superposition. | {
"domain": "physics.stackexchange",
"id": 88932,
"tags": "quantum-mechanics, quantum-interpretations, superposition, multiverse"
} |
Regex detection for Puppet configuration | Question: I've got a PR to add a feature to a Ruby application (Puppet) to allow Regex detection.
Right now, the method only does comparison on exact string matching:
defaultfor :operatingsystem => :fedora, :operatingsystemmajrelease => ['22', '23', '24']
This is annoying, as we have to add in new values every time a new Fedora release comes out, for example.
I've extended it to match on a Regex:
defaultfor :operatingsystem => :fedora, :operatingsystemmajrelease => /^2[2-9]$/
The old method looks like this:
def self.fact_match(fact, values)
values = [values] unless values.is_a? Array
values.map! { |v| v.to_s.downcase.intern }
if fval = Facter.value(fact).to_s and fval != ""
fval = fval.to_s.downcase.intern
values.include?(fval)
else
false
end
end
This is my current proof-of-concept code, which will match both regex and strings, but it feels off.
def self.fact_match(fact, values)
values = [values] unless values.is_a? Array
values.map! do |v|
if v.is_a? Regexp
v
else
v.to_s.downcase.intern
end
end
if fval = Facter.value(fact).to_s and fval != ""
fval = fval.to_s.downcase.intern
if values.any? {|v| v.is_a? Regexp }
regex_match = Regexp.union(values)
fval =~ regex_match
else
values.include?(fval)
end
else
false
end
end
I feel like there's a much easier way of detecting if it's a Regex or not.
Answer: You're working too hard, trying to distinguish strings from regular expressions. Rather, you should take advantage of the fact that the === operator for a RegExp and the === operator for a String both express the idea of a "match", which is what you want here.
I don't understand why you need to #intern all the strings.
I don't see why the fval = Facter.value(fact).to_s assignment appears inside an if condition. The result of the assignment would only be false if Facter.value(fact).to_s is false or nil, which can never happen. (Even nil.to_s results in an empty string.)
To treat values as an array regardless of whether a scalar or an array was passed in, you can use the [*values] idiom.
def self.fact_match(fact, values)
fact_val = Facter.value(fact).to_s.downcase
fact_val != "" and [*values].any? { |v| v === fact_val }
end
In addition to supporting literal case-insensitive matches and regular expression matches, also consider supporting Ranges. | {
"domain": "codereview.stackexchange",
"id": 23404,
"tags": "ruby, regex, configuration"
} |
does life make or break? | Question: Ok, this question seems like it may be impossible to answer, but would be interesting to see if anyone has an idea.
Throughout the course of a human life, do we make more molecular bonds than we break, or vice versa?
Answer: Metabolism ≈ Anabolism + catabolism
Metabolism is the set of all chemical reactions happening inside a living body. These reactions either break down big molecules or build them up. The set of reactions that breaks down big molecules is called catabolism. The set of reactions that builds up big molecules is called anabolism.
Energy
Generally speaking, anabolism costs energy and catabolism yields energy. The molecule that is directly used to transfer this energy is Adenosine triphosphate (ATP).
Any energy transfer is coupled with some loss of energy from the system (see Energy conversion efficiency).
Entropy
For any chemical reactions, there is always an overall increase of entropy. As a consequence, a living organism increases the entropy in its surrounding. In other words, it breaks more than it builds.
Your question
does life make or break?
Life does both! But on average, it mainly breaks things up (just like any other chemical reaction).
Want to learn more?
The concepts of energy and entropy are studied in physics and chemistry but not so much in biology. I cannot really give you a good source of information.
Khan Academy offers a very good introductory course on metabolism (here), you should have a look! | {
"domain": "biology.stackexchange",
"id": 5273,
"tags": "molecular-biology, digestive-system, systems-biology, biosynthesis"
} |
Threadsafe oneshot-event which fires on subscription if the event was fired in the past | Question: Assumptions
Basically, a Connection class has a "Disconnect" event. Subscribing to this event isn't thread-safe, because the disconnection may fire from another thread right before I subscribe. So checking before the subscription doesn't help.
Checking for the disconnect after subscription doesn't help either because the event may have fired in the meanwhile (2 threads might execute the same "observer" twice).
(My) Solution:
An event that is always fired once (and only once), even if the event itself already happened before. The source is on GitHub as well.
Questions:
Are there other simpler solutions addressing this? (by simpler I mean from an outside or usage perspective)
Do you see any race conditions or things that could go wrong?
Maybe you have optimizations or simplifications to add?
Input is highly appreciated!
/// <summary>
/// Triggers if the event is invoked or was invoked before subscribing to it.
/// <para> Can be accessed safely by multiple threads.</para>
/// </summary>
public class AutoInvokeEvent<Sender, Args>
{
public delegate void EventHandle(Sender sender, Args arguments);
/// <summary>
/// Handle will be invoked if the event was triggered in the past.
/// <para>Unsubscribing happens automatically after the invocation and is redundant if done from the event handle.</para>
/// </summary>
public event EventHandle Event
{
add
{
if (!Subscribe(value))
value(m_sender, m_eventArgs);
}
remove { InternalEvent -= value; }
}
private event EventHandle InternalEvent;
// this is my personal lock implementation. in this case it is used like any other lock(object) so just ignore it
private SafeExecutor m_lock = new SingleThreadExecutor();
private volatile bool m_invoked = false;
Sender m_sender;
Args m_eventArgs;
/// <summary>
/// Invokes all subscribed handles with the given parameters.
/// <para>All calls after the first are ignored.</para>
/// </summary>
public void Invoke(Sender sender, Args args)
{
GetEventHandle(sender, args)?.Invoke(m_sender, m_eventArgs);
}
private EventHandle GetEventHandle(Sender sender, Args args)
{
return m_lock.Execute(() =>
{
if (m_invoked)
return null;
m_sender = sender;
m_eventArgs = args;
m_invoked = true;
EventHandle handle = InternalEvent;
InternalEvent = null;
return handle;
});
}
/// <returns>Returns true if subscription was successful and false if handle needs to be invoked immediately.</returns>
private bool Subscribe(EventHandle handle)
{
return m_lock.Execute(() =>
{
if (!m_invoked)
InternalEvent += handle;
return !m_invoked;
});
}
}
How the class could be used :
class Connection
{
public AutoInvokeEvent<object, EndPoint> OnDisconnect = new AutoInvokeEvent<object, EndPoint>();
public void Disconnect()
{
OnDisconnect.Invoke(this, endpoint);
}
}
void main()
{
Connection connection = new Connection();
connection.OnDisconnect.Event += DoStuffOnDisconnect;
}
void DoStuffOnDisconnect(object sender, EndPoint endpoint) { }
Answer: Thread safety
Both Subscribe and GetEventHandle use the lock, and add+remove on the event field are thread-safe too. You can even remove the volatile from m_invoked because you only ever access it under the lock from those two methods.
API and naming
Event and Invoke are pretty obvious, especially with the description. I would only change the signature of Invoke to return bool (true = invoked, false = not invoked, already done before) to better reflect the fact that it won't throw an exception if you try to invoke it multiple times (it could as well be named TryInvoke, perhaps changing Invoke to throw an exception if invoked multiple times).
GetEventHandle does not sound like a good name to me, because it does much more than just getting the handle: it stores sender+args, takes the current handle and clears it. I would either inline it (and Subscribe) directly where they are used (only one place for each), which also solves the naming problem, or make them local functions.
AutoInvokeEvent hmm, how about OneShotEvent? Although it automatically calls subscribed handlers even after the event was fired, this is more about how it does it under the hood, than what the API is about - it will execute the handler exactly once when it is fired or if it already was fired.
Optimization
I don't think any optimization is necessary. You would have to first specify (and/or test/profile) how often the event is subscribed to compared to how often it is invoked, to even have some direction in which to optimise. I can see you have switched to using a read-write lock in the implementation you currently have on GitHub, which makes sense (concurrent subscriptions, if that is truly important to you). It could as well be implemented with double-checked locking, thanks to the fact that this is a one-shot event, but I don't think it is worth it: too easy to make a mistake, very hard to get right. (The idea is to have a volatile object m_lock and make it null when you fire the event, instead of using that bool m_invoked.) | {
"domain": "codereview.stackexchange",
"id": 32691,
"tags": "c#, multithreading, thread-safety, event-handling, observer-pattern"
} |
Append the first half of the first line of /etc/hosts to a conf file | Question: I have to edit a config automatically before starting a service in a docker container (a storm supervisor), and I want to append something like this in /opt/storm/conf/storm.yaml: storm.local.hostname: 1.1.1.1 (extracted from /etc/hosts)
What I currently do is that:
echo -n "storm.local.hostname: " >> /opt/storm/conf/storm.yaml && head -1 /etc/hosts | awk -F ' ' {'print $1'} | xargs echo >> /opt/storm/conf/storm.yaml
How can I improve on that, on a readability POV? It looks pretty obfuscated to me, but I may just be naive ^^
Disclaimer: this runs inside a docker container, so there's only access to classic bash.
Answer:
How can I improve on that, on a readability POV? It looks pretty obfuscated to me, but I may just be naive ^^
You're not naive, it looks obfuscated ;-)
A couple of general tips first:
When you find yourself writing the same path twice, put it in a variable
to avoid duplicated hard coded strings
to make it easy to change the path later if needed
to give it a meaningful descriptive name
Avoid echo -n. In general, avoid all echo statements that use any flags like -n, -e, because these are not portable. When you need those extra functions, printf is more portable. Plain echo with no flags, just stuff to print is nice, short and sweet
When you see head and awk in the same pipeline, usually you can rewrite with just awk. It saves the execution of one process.
I think head -NUM is deprecated. To be safe, I suggest to use head -n NUM instead.
To avoid extremely long lines, break the line with a \
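As a quick illustration of the head+awk tip (a throwaway sketch using a temporary file rather than the real /etc/hosts; first_field is an invented helper name):

```shell
# head -n 1 file | awk '{print $1}' collapses into a single awk call:
# printing and then calling 'exit' makes awk stop after the first line.
first_field() {
    awk '{print $1; exit}' "$1"
}

tmp=$(mktemp)
printf '1.1.1.1 localhost\n2.2.2.2 other\n' > "$tmp"
first_field "$tmp"    # prints: 1.1.1.1
rm -f "$tmp"
```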
A couple of specific tips in the context of your example:
The one-liner first appends some fixed text to the config file,
and then appends some more text by way of awk.
Since awk can print fixed text too, the echo is not needed at all.
xargs seems pointless: you could just redirect the output of awk
The separator ' ' you used with awk seems both unnecessary and error prone. The hosts file might have entries separated by tab. Simply by not specifying a separator, awk will work with both cases, space or tab separated.
Following the suggestions above, the one-liner could be simplified to:
awk '{print "storm.local.hostname: " $1; exit}' < /etc/hosts >> /opt/storm/conf/storm.yaml | {
"domain": "codereview.stackexchange",
"id": 14153,
"tags": "bash, linux"
} |
Why did Compton use X-rays in his experiment? | Question: Why did Compton use X-rays in his famous experiment? Can it be done using other types of electromagnetic waves?
Answer: The Compton effect is the inelastic scattering of photons by electrons.
Compton's initial experiment used electrons in a graphite crystal to act as scatterers. These electrons are not free, they are bound, but the X-ray energies (17 keV) were large compared with the binding energies, so they approximated to free electrons.
Photons of lower energies (UV to a few keV) may liberate electrons in the graphite via photoelectric absorption, rather than scatter. Lower energy photons would also interact with electrons pseudo-classically - Thomson scattering; any scattering would be very close to elastic. Very high energy photons, above 1.02 MeV are capable of interacting with nuclei and create electron/positron pairs, instead of scattering.
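A back-of-the-envelope sketch (my own numbers, not from the original answer) of why the shift is only measurable with X-rays: the Compton shift $\Delta\lambda = (h/m_ec)(1-\cos\theta)$ is a fixed length of a few picometres, so it is an appreciable fraction of the photon wavelength only when that wavelength is itself X-ray sized.

```python
import math

# Rounded SI constants, adequate for an order-of-magnitude estimate.
H = 6.626e-34    # Planck constant, J s
M_E = 9.109e-31  # electron mass, kg
C = 2.998e8      # speed of light, m/s
EV = 1.602e-19   # joules per electronvolt

def compton_shift(theta_rad):
    """Wavelength shift in metres: (h / m_e c) * (1 - cos theta)."""
    return (H / (M_E * C)) * (1.0 - math.cos(theta_rad))

def wavelength_from_kev(e_kev):
    """Photon wavelength in metres from its energy in keV."""
    return H * C / (e_kev * 1e3 * EV)

shift = compton_shift(math.pi / 2)   # 90-degree scattering: ~2.43 pm
xray = wavelength_from_kev(17)       # Compton's ~17 keV X-rays: ~73 pm
visible = 500e-9                     # green light

print(shift / xray)     # a few percent: easily resolvable
print(shift / visible)  # a few parts per million: unobservable
```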
So, I think your answer is that Compton needed a "thick" target of free electrons, but in the absence of such, improvised using the electrons in a graphite crystal. At low photon energies, the interactions are dominated by photoelectric absorption or elastic scattering. At very high energies, pair-production dominates. It is only at intermediate photon energies (roughly 10-1000 keV for carbon) that Compton scattering dominates (see cross-section plot below, where $\sigma_{inel}$ refers to Compton scattering, $\sigma_{pe}$ is photoelectric absorption, $\sigma_{pp}$ is pair production and $\sigma_{el}$ is Thomson scattering). | {
"domain": "physics.stackexchange",
"id": 24359,
"tags": "special-relativity"
} |
In neutrino oscillation experiment, what are the Feynman diagrams involved for the *detection of the neutrino* in the detector? | Question: In neutrino oscillation experiment, for example NOvA [but it could be other if you prefer], what are the diagrams involved for the detection of the neutrino in the detector ?
I have explored inside many PhD theses to understand a bit, but none are very clear, and the authors seem not really to master the physics of the interaction at the detector (they seem to just copy/paste the explanation on the previous theses, etc.), and none are exhaustive for the involved interactions.
Here are the diagrams that I have created with jaxodraw and that is a synthesis of those diagrams that I found from various documents.
My question is :
for detecting the resulting oscillated neutrino at the far distance detector :
-which of these diagrams (maybe there are additive ones ?) correspond to the signal ?
-which correspond to the background ?
In particular, do elastic scattering contribute to the signal ?
(authors of PhD theses most often skip the purely elastic diagrams, for unknown reason)
Do neutral currents contribute to the signal (for example elastic neutral current)?
By the way : QE=quasi elastic, RES=resonant production of pions, DIS=deep inelastic scattering, COH=coherent production of pions.
Answer: To start with, one has to keep in mind that detectors detect, by recording interactions in a medium. As neutrinos leave no track in the detectors, the final state neutrinos are useless for detecting. In principle, if the neutrino beam were mono-energetic, using conservation of energy and momentum one could fit for a missing neutrino, by just detecting the lower vertex outputs, $Δ^{++}$ or whatever it is, but neutrino masses are very small (only limits are known), so the energy balance, even if possible, would not give accurate enough four-vectors to differentiate the masses.
So what are the experiments for oscillations looking for:
They know the percentages of neutrino flavors in the beam, by construction of the beam
They go many kilometers away and measure the percentages of neutrino flavors, using detectors which are sensitive to electromagnetic scattering of the products of the neutrino interaction with the nuclei of the detector. The detector can record the momenta or energies of charged tracks, and thus identify an electron, a muon or a tau, and from the numbers know the percentages of flavors that hit the detector. The difference between input and output flavor percentages shows the oscillation.
-which of these diagrams (maybe there are additive ones ?) correspond to the signal ?
The signal is to see a lepton and determine its lepton number. This will identify that the incoming neutrino has a specific flavor, which is the objective of the experiment
which correspond to the background ?
The background will be the experimental misidentifications, no need for diagrams. One is not measuring a cross-section.
In particular, do elastic scattering contribute to the signal ? (authors of PhD theses most often skip the purely elastic diagrams, for unknown reason)
As mentioned above neutrinos leave no track in detectors.
Do neutral currents contribute to the signal (for example elastic neutral current)
If the outgoing is a neutrino/antineutrino it cannot be seen in the detector. | {
"domain": "physics.stackexchange",
"id": 63603,
"tags": "particle-physics, experimental-physics, standard-model, feynman-diagrams, neutrinos"
} |
Circular patterns at bottom of rock, is it a fossil? | Question: The image below was taken on a hike. Circular patterns at bottom (it is flatter on this face and round otherwise) appear either man-made or possibly it is a fossil.
Any idea what this could be?
Answer: The face of the rock with the round center portion looks strongly like a Rugosa or Horn Coral. I suggest it is a Rugosa coral because of the center with the radial lines out from the center and the clear edge surrounding the radial lines. See https://en.wikipedia.org/wiki/Rugosa. | {
"domain": "earthscience.stackexchange",
"id": 985,
"tags": "fossils, paleontology"
} |
Which formula for entropy is correct? (OR Is the fundamental thermodynamic equality always right?) | Question: Quick question, in the lecture notes to a thermodynamic course I'm taking,
$$d\underline{S}=\frac{d\underline{U}}{T}+\Big(\frac{\partial P}{\partial T}\Big)_{\underline{V}} d\underline{V}$$
But everywhere else I've looked I've found (by rearranging the fundamental thermodynamic relation:
$$d\underline{S}=\frac{d\underline{U}}{T}+\frac{P}{T}d\underline{V}$$
The two are obviously extremely similar, however I don't believe they're equivalent in general (I believe you'd have to make further assumptions about the system, particularly $P=kT$ at constant specific volume, to go from one to the other.)
Right now I'm just asking if anyone knows which relation is the more general. I was of the opinion that the fundamental thermodynamic relation was always correct, but now I'm doubting myself. Thanks.
Answer: Both seem a little strange. So let us see where they are coming from. What we know is from the laws of thermodynamics that the total differential of the internal energy $U$ can be written as $$dU = \delta Q + dW$$
Here $\delta Q$ is an infinitesimal amount of thermal energy and $dW$ an infinitesimal amount of work. The work is the product of a generalized displacement times a generalized force, which are together a conjugated pair of variables. One such pair is pressure and volume, $P\, dV$. For a reversible process $\delta Q = T dS$, so that we get
$$dU = T\,dS - P\,dV$$
From here we easily arrive at your second equation:
$$ \frac{dU}{T} + \frac{P}{T} dV = dS$$
Now we compare with your first equation, which can only be true if $$\frac{P}{T} = \left(\frac{\partial P}{\partial T}\right)_V$$
which certainly does not hold for all thermodynamic systems. So your first equation is more general and the second one only correct in specific circumstances. We rearrange the condition $$P(T,V) = \left(\frac{\partial P}{\partial T}\right)_V T$$ which can only be satisfied by a general function $f(V)$ and $$P(T,V) = f(V) \,T$$ | {
"domain": "physics.stackexchange",
"id": 3749,
"tags": "thermodynamics"
} |
What is the eye muscle status when you stare at distant view through a glass wall? | Question: The book said when you look at object close to you, the eye muscles contract and vice versa. I wonder what will be the status of the eye muscles when I stare at distance view (such as a mountain) inside an office through a glass wall. The mountain is far away but the glass wall is close to us. In this case, does the eye muscles relax?
Answer: In the situation you describe, the eye would be focused on the distant mountain. This would mean that the lens would be stretched and thin in order to minimize the focussing power of the eye. Therefore the ciliary muscles would be relaxed.
When you are looking out of the window, it is possible to make a conscious decision to focus on the window pane itself (thus adjusting the focus to be more powerful as the ciliary muscles contract), however then the distant object will be out of focus and uncomfortable to look at.
This is because the light rays reflecting from the mountain are barely affected by the pane of glass, hence its transparency. | {
"domain": "biology.stackexchange",
"id": 167,
"tags": "human-biology, eyes"
} |
Heapsort With Full-Scale Genericity | Question: I've written a heapsort implementation. It's generic: generic comparators and sequences with elements of a generic type.
The implementation (follows closely chapter 6 from Introduction to Algorithms 3rd Edition and) consists of
heapify: establishes the heap property for an element:
template<typename BiDirIterator, typename Cmp>
void heapify(BiDirIterator begin, BiDirIterator end,
typename std::iterator_traits<BiDirIterator>::difference_type i, Cmp cmp)
{
const auto dist = std::distance(begin, end);
while (true) {
auto left = 2 * i + 1;
auto right = left + 1;
typename std::iterator_traits<BiDirIterator>::difference_type largest;
if (left < dist && cmp(begin[left], begin[i]))
largest = left;
else
largest = i;
if (right < dist && cmp(begin[right], begin[largest]))
largest = right;
if (largest == i)
break;
std::swap(begin[i], begin[largest]);
i = largest;
}
}
buildHeap: creates a heap from a permutation:
template<typename BiDirIterator, typename Cmp>
void buildHeap(BiDirIterator begin, BiDirIterator end, Cmp cmp)
{
for (auto i = std::distance(begin, end) / 2 - 1; i >= 1; --i)
heapify(begin, end, i, cmp);
heapify(begin, end, 0, cmp);
}
sort: applies Heapsort to retrieve an ordered sequence from the permutation:
template<typename BiDirIterator, typename Cmp>
void sort(BiDirIterator begin, BiDirIterator end, Cmp cmp)
{
buildHeap(begin, end, cmp);
for (auto i = std::distance(begin, end) - 1; i > 0; --i) {
std::swap(*begin, begin[i]);
heapify(begin, std::next(begin, i), 0, cmp);
}
}
Necessary includes are:
#include <algorithm>
#include <iterator>
#include <functional>
In action:
#include <iostream>
#include <array>
int main()
{
std::array<int, 10> perm { 5, 6, 9, 3, 6, 4, 8, 4, 87, 4 };
heap::sort(perm.begin(), perm.end(), heap::max<int>{});
}
where the heap utilities are put into a separate heap namespace and heap::max is simply
template<typename T> using max = std::greater<T>;
Some questions:
Is it OK to use Cmp as a template parameter? That makes it more generic, but the error messages are way more complicated and point to stuff deep down in the implementation. (Granted, you can't say my implementation is anywhere close to "deep," but this is more of a general question.) Passing a Cmp, where Cmp is some alias for a type, would also be possible but not as generic. std::sort also uses the template parameter, though, so I do so as well.
Is my use of iterators alright? I tried to keep the interface clean with iterators for interaction with standard containers, but the implementation also uses numerical indices when it's more convenient.
heapify is ugly: there's an infinite loop with a break condition and it basically finds the largest element out of three while checking that they are within range (i.e., < dist). I have a feeling this can be beautified both syntactically and semantically. Some hint on that would be appreciated.
The implementation is supposed to work with bidirectional iterators. std::list doesn't work because subscripting is not supported, it's not efficient for std::list, etc. Should I replace subscripting with std::next to make it work on std::list at the cost of performance guarantees? std::list would work, but it might be frickin' slow asymptotically.
Answer: left = 2 * i + 1; — for a non-random-access iterator, advancing by this amount is an operation that is linear in i, so it's slow for std::list as well.
Heapify can be adjusted to not need to std::advance an arbitrary amount by going over the list multiple times:
template<typename BiDirIterator, typename Cmp>
void heapify(BiDirIterator begin, BiDirIterator end, Cmp cmp)
{
    bool changed;
    do {
        changed = false;   // reset each pass, otherwise the loop never terminates
        auto parent = begin;
        auto left = begin == end ? end : std::next(begin);
        auto right = left == end ? end : std::next(left);
        while (left != end) {
            // cmp(a, b) == true selects a, matching the question's convention
            auto largest = left;
            if (right != end && cmp(*right, *left))
                largest = right;
            if (cmp(*largest, *parent)) {
                std::swap(*parent, *largest);
                changed = true;
            }
            ++parent;
            left = right == end ? end : std::next(right);
            right = left == end ? end : std::next(left);
        }
    } while (changed);
}
This will loop over the data at most O(log n) times and put the elements in heap order.
However for pulling out the data there is no way to only look at the affected elements in O(log n) time. Which means that you can't sort a linked list with heapsort and still be in O(n log n) time complexity.
Instead, to sort a linked list in O(n log n) time you would use merge sort. This requires that you are able to split the list up into separate linked lists that you can then merge. std::list has this functionality with its splice member function. (It also has a sort member function that will do that built-in) | {
"domain": "codereview.stackexchange",
"id": 24119,
"tags": "c++, algorithm, template, iterator, heap-sort"
} |
Various animation functions | Question: I've been learning web development for 1.5 months now, and I am trying to improve my code. I can achieve what I want, but I think my code is really bad.
For example, I have a bunch of jQuery animations and a bunch of functions to 'stop' those animations when another animation is starting. The code seems very inelegant, so how should I improve it? I've been learning how to write a jQuery plugin, but I don't think it will help in this case.
$(document).ready(function(){
$("#diagonalLine").show(500);
$("#line").show(1300);
$("#start").show(1500);
$("#centerButton").click(function(){
$("#diagonalLine").hide(1500);
$("#line").hide(1300);
$("#start").hide(500);
fadeInWaterloo();
fadeInToronto();
fadeInTop();
$("#selectLine").animate({width:'toggle'},1250);
$("#diagonalSelectLine").show(1450);
$("#select").show(1600);
});
$("#waterloo7").mouseenter(function(){
fadeInTorontoStop();
fadeInTopStop();
fadeOutToronto();
fadeOutTop();
});
$("#toronto5").mouseenter(function(){
fadeInWaterlooStop();
fadeInTopStop();
fadeOutWaterloo();
fadeOutTop();
//$("#flip").show(1000);
//$("#infoBox1").slideDown(2000);
});
$("#campus").mouseenter(function(){
fadeInWaterlooStop();
fadeInTorontoStop();
fadeOutWaterloo();
fadeOutToronto();
});
$("#waterloo7").mouseleave(function(){
fadeOutTorontoStop();
fadeOutTopStop();
fadeInToronto();
fadeInTop();
});
$("#toronto5").mouseleave(function(){
fadeInWaterloo();
fadeInTop();
});
$("#campus").mouseleave(function(){
fadeInToronto();
fadeInWaterloo();
});
$("#log-in-button").click(function(){sendLogin(); return false;});
$("#forgotLogin").click(function(){location.href='signUp.html';});
$("#signUp").click(function(){location.href='signUp.html';});
});
//-------------------------------------Toronto Animation Functions--------------------------
function fadeInToronto(){
for (var i=1;i<=5; i++){
$("#toronto"+i).fadeIn(300+i*200);
}
}
function fadeOutToronto(){
for (var i=1; i<=5; i++){
$("#toronto"+i).fadeOut(1100-i*200);
}
}
function fadeInTorontoStop(){
for (var i=1;i<=5; i++){
$("#toronto"+i).fadeIn().stop();
}
}
function fadeOutTorontoStop(){
for (var i=1; i<=5; i++){
$("#toronto"+i).fadeOut().stop();
}
}
//-----------------------------------End of Toronto Animation---------------------------
//-----------------------------------Waterloo Animation Functions----------------------------
function fadeInWaterloo(){
$("#waterloo1").fadeIn(0);
$("#waterloo2").fadeIn(300);
$("#waterloo3").fadeIn(1300);
$("#waterloo4").fadeIn(1600);
$("#waterloo5").fadeIn(1800);
$("#waterloo6").fadeIn(2000);
$("#waterloo7").fadeIn(2000);
}
function fadeInWaterlooStop(){
for (var i=1;i<=7; i++){
$("#waterloo"+i).fadeIn().stop();
}
}
function fadeOutWaterloo(){
var num = 1100;
for (var i=1; i<=7; i++){
$("#waterloo"+i).fadeOut(num);
num = num-200;
}
}
function fadeOutWaterlooStop(){
for (var i=1;i<=7; i++){
$("#waterloo"+i).fadeOut().stop();
}
}
//----------------------------------------End of Waterloo Animation-------------------
//-------------------------------------Campus Animation Functions--------------------
function fadeInTop(){
for (var i=1;i<=4; i++){
var id = "#top"+i;
$(id).fadeIn(300+i*200);
}
$("#campus").fadeIn(1100);
}
function fadeOutTop(){
$("#top1").fadeOut(1100);
$("#top2").fadeOut(1100);
$("#top3").fadeOut(900);
$("#top4").fadeOut(700);
$("#campus").fadeOut(500);
}
function fadeInTopStop(){
for (var i=1;i<=4; i++){
var id = "#top"+i;
$(id).fadeIn().stop();
}
$("#campus").fadeIn().stop();
}
function fadeOutTopStop(){
$("#top1").fadeOut().stop();
$("#top2").fadeOut().stop();
$("#top3").fadeOut().stop();
$("#top4").fadeOut().stop();
$("#campus").fadeOut().stop();
}
//-------------------------------------End of Campus Animation---------------
Answer: The general rule is: if you're doing something with a static element inside an event handler, you should cache the reference to that element.
$("#button").click(function(){
$("#diagonalLine").hide(1500);
}
or
document.getElementById('button').onclick = function(){
document.getElementById('diagonalLine').style.display='none';
}
Every time the user clicks on #button it traverses the DOM via getElementById to find #diagonalLine. With jQuery there's much more overhead involved: it creates a new jQuery object, parses the string #diagonalLine, finds the element, binds it to the new object and returns it. There are all sorts of things jQuery does under the hood to make it "magic".
var diagonalLine = $("#diagonalLine");
$("#centerButton").click(function(){
diagonalLine.hide(1500);
}
var diagonalLine = document.getElementById('diagonalLine');
document.getElementById('centerButton').onclick = function(){
diagonalLine.style.display='none';
}
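In the same spirit, the near-identical fadeInToronto/fadeInWaterloo/... helpers in the question can be collapsed into one parametrised helper. A sketch (fadePlan and applyFade are invented names, not jQuery APIs):

```javascript
// Build the (selector, duration) pairs that the repeated fade functions share.
function fadePlan(prefix, count, baseMs, stepMs) {
    var plan = [];
    for (var i = 1; i <= count; i++) {
        plan.push({ selector: "#" + prefix + i, duration: baseMs + i * stepMs });
    }
    return plan;
}

// Thin applier; assumes jQuery is loaded, as on the original page.
function applyFade(plan, action) {
    plan.forEach(function (step) {
        $(step.selector)[action](step.duration);
    });
}

// usage: applyFade(fadePlan("toronto", 5, 300, 200), "fadeIn");
```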
You can also use closures for caching references to elements
$('#button').click((function(){
    var diagonalLine = $("#diagonalLine");
    return function(){
        diagonalLine.hide(1500);
    };
})());
document.querySelector('#button').onclick=(function(){
var diagonalLine = document.querySelector('#diagonalLine');
return function(){
diagonalLine.style.display='none';
}
})(); | {
"domain": "codereview.stackexchange",
"id": 4561,
"tags": "javascript, jquery, animation"
} |
Why is trans-cyclooctene chiral? | Question: How does trans-cyclooctene exhibit chirality if there are no stereocenters?
Related follow-on questions:
Are all higher cycloalkenes chiral?
Do more double bonds cause a bigger number of stereoisomers in cycloalkenes?
Answer: Very interesting question! The key word you are looking for is planar chirality. In trans-cyclooctene, the polymethylene bridge can either go "in front of" or go "behind" the plane of the double bond, assuming you fix the double bond and the two hydrogens in place.
As pointed out by @jerepierre, they are considered different molecules due to a high-energy barrier which prevents the interconversion. Cyclooctene is the first cycloalkene to have both stable cis- and trans- isomers. The chain in trans-cyclooctene is not long enough to swing over the double bond. As the chain gets longer, the energy barrier to rotation decreases.
Here are the two mirror images of trans-cyclooctene (image source: own work).
These two molecules are mirror images of each other but are not superimposable. Therefore trans-cyclooctene is chiral despite not having a chiral center.
Edit: this is the first time I learned that a chiral molecule does not have to have a chiral center. After making some searches online, I feel a need to expand the answer to clarify the concept of chirality.
A chiral molecule is one that has a non-superimposable mirror image. Mathematically a molecule is chiral if it is not symmetric under an improper rotation. Chirality arises due to:
point chirality: typically a carbon center with four different substituents;
axial chirality: such as allenes with different substituents on each carbon (see this question);
planar chirality: such as the case of trans-cyclooctene;
inherent chirality: due to the presence of a curvature in a structure that would be devoid of symmetry axes in any bidimensional representation, such as fullerenes. | {
"domain": "chemistry.stackexchange",
"id": 4029,
"tags": "stereochemistry, chirality"
} |
Killing equation in coordinates | Question: In proving that it is possible to write the killing equation in coordinates as $$L_X g=0\iff X_{\alpha;\beta}+X_{\beta;\alpha}=0$$
I have read that the key observation, to write the equation in coordinates as above, is that when we consider the Levi-Civita connection we can replace the partial derivatives with the covariant ones, i.e. $Xg_{\sigma\beta}=\nabla_X g_{\sigma\beta}$.
This is my work, with $X=X^\alpha\partial_\alpha$:
$$(L_X g)_{\sigma\beta}=Xg_{\sigma \beta}-g([X,\partial_\sigma],\partial_\beta)-g(\partial_\sigma, [X,\partial_\beta])$$
But I can't understand why...can you help me?
Answer: The point is that, for the Levi-Civita connection, we can find Riemann normal coordinates around any point P. In those coordinates $\partial_\alpha g_{\mu\nu}=0$ and the Christoffel symbols ${\Gamma^\alpha}_{\mu\nu}$ vanish at P. Now the formula for the Lie derivative
$$
[{\mathcal L}_Xg]_{\mu\nu}= X^\alpha \partial_\alpha g_{\mu\nu} + g_{\mu\alpha}\partial_\nu X^\alpha + g_{\alpha\nu }\partial_\mu X^\alpha
$$
and the covariant derivative expression
$$
\nabla_{\mu}X_\nu + \nabla_\nu X_\mu
$$
coincide if both $\partial_\alpha g_{\mu\nu}=0$ and ${\Gamma^\alpha}_{\mu\nu}=0$. But both of these quantities are doubly-covariant tensors, and if two tensors of the same type coincide in one system of coordinates, they coincide in all coordinate systems.
We conclude that
$$
[{\mathcal L}_Xg]_{\mu\nu} =\nabla_{\mu}X_\nu + \nabla_\nu X_\mu
$$
at the point P --- but P is an arbitrary point, so the two expressions coincide everywhere. | {
"domain": "physics.stackexchange",
"id": 77630,
"tags": "homework-and-exercises, differential-geometry, metric-tensor, differentiation, vector-fields"
} |
Difference between time complexity and computational complexity | Question: For measuring the complexity of an algorithm, is it time complexity, or computational complexity? What is the difference between them?
I used to calculate the maximum (worst) count of basic (most costing) operation in the algorithm.
Answer: Computational complexity is the general subject of using complexity measures to compare programs or algorithms. Time complexity and space complexity are two measures that are commonly used when talking about computational complexity, but there are others.
For example, it's common to look at the number of comparisons performed in a sorting algorithm. | {
"domain": "cs.stackexchange",
"id": 4590,
"tags": "time-complexity, complexity-theory"
} |
Differences between probability density and expectation value of position | Question: The expression $\int | \Psi\left(x\right)|^2dx$ gives the probability of finding a particle in a given range of positions.
If wave function gives the probabilities of positions, why do we calculate "expectation value of position"?
I don't understand the conceptual difference, we already have a wave function of a position. Expectation value is related to probabilities.
So what is the differences between them? And why do we calculate expectation value for position, although we have a function for probability of finding a particle at a given position?
Answer: In position-space (that is, when your functions are functions of x), the function $\int|\Psi|^2$ gives the probability of finding the particle in a given range. The expectation value of x is where you'd expect to find the particle. It is often essentially the weighted average of all the positions where the probability density, $|\Psi|^2$, is the weighting function (that's not exactly what it is, but it's a useful analogy). Similarly, you can find the expectation value for any measurable quantity. In this space, the difference between the two is that the expectation value is a number that represents the expected average position of the particle over many measurements whereas the probability is a number that gives you the probability for finding the particle within the limits of integration.
However, you can use any different basis. For example, you could choose momentum-space, $\left|\Psi\right>$ is $\Psi(p)$ (quantum physicists please don't kill me for that affront to notation). In momentum space, the integral $\int|\Psi|^2$ is now the probability of the particle having a given range of momenta. However, the expectation value of x is still the average measurement of x. What, you ask, is the point? The expectation value is a number that can be found in any basis that represents the "on-average" value of a measurement. The probability found by $\int|\Psi|^2$ is the probability that a particle will be found existing within a specified range of values for the basis you are using.
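A small numerical sketch (my own example, not from the original answer), using the particle-in-a-box ground state to show that the range probability and the expectation value are different numbers computed from the same density $|\Psi|^2$:

```python
import math

L = 1.0       # box length
N = 100_000   # midpoint-rule grid points

def density(x):
    """|psi|^2 for the ground state psi(x) = sqrt(2/L) sin(pi x / L)."""
    psi = math.sqrt(2.0 / L) * math.sin(math.pi * x / L)
    return psi * psi

dx = L / N
xs = [(i + 0.5) * dx for i in range(N)]

norm = sum(density(x) * dx for x in xs)                   # total probability, ~1
p_left = sum(density(x) * dx for x in xs if x < L / 4)    # P(0 < x < L/4), ~0.091
exp_x = sum(x * density(x) * dx for x in xs)              # <x>, = L/2 by symmetry

print(round(norm, 3), round(p_left, 3), round(exp_x, 3))  # 1.0 0.091 0.5
```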
$\int_{x_1}^{x_2}|\Psi|^2dx$ is "there is #% chance that the particle will be found between $x_1$ and $x_2$"
$\left<\Psi\right|x\left|\Psi\right>$ is "the expected average position of the particle over a large number of sample measurements is at $x$=#"
$|\Psi(x)|^2$ is a probability density: "the probability per unit length of finding the particle at this position is #%" | {
"domain": "physics.stackexchange",
"id": 15413,
"tags": "quantum-mechanics, operators, wavefunction, hilbert-space, probability"
} |
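The "weighted average" reading of the expectation value can be made concrete numerically. The sketch below is my own illustration (not part of the answer): for a normalized Gaussian wave packet centred at a chosen $x_0$, a Riemann sum reproduces both a finite-range probability and $\langle x \rangle = \int x\,|\Psi|^2\,dx$.

```python
import math

# Hypothetical example: a normalized Gaussian probability density |Psi(x)|^2
# centred at x0 = 1.5 with width sigma = 0.4 (my own choice of numbers).
x0, sigma = 1.5, 0.4
dx = 0.001
xs = [-5 + i * dx for i in range(int(10 / dx) + 1)]

def density(x):
    # |Psi(x)|^2 for the Gaussian packet (already normalized to 1)
    return math.exp(-((x - x0) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

# Probability of finding the particle in the finite range [1.0, 2.0]:
prob = sum(density(x) * dx for x in xs if 1.0 <= x <= 2.0)

# Expectation value <x> = integral of x * |Psi(x)|^2 dx -- a weighted average,
# which lands back on the packet's centre x0:
expect_x = sum(x * density(x) * dx for x in xs)
```

The two numbers answer different questions: `prob` is a chance of detection in a window, while `expect_x` is the average of many position measurements.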
Does a charged or rotating black hole change the genus of spacetime? | Question: For a Reissner–Nordström or Kerr black hole there is an analytic continuation through the event horizon and back out. Assuming this is physically meaningful (various site members hereabouts think not!) does this change the genus of spacetime?
In the 2D rubber sheet analogy a loop drawn around the event horizon wouldn't be contractible because it would go through the event horizon and back out and still have the black hole in its centre. I don't know if the same argument applies to 4D spacetime.
Answer: To rephrase the question slightly, you are asking for one of the Betti numbers of the (3+1)-dimensional manifold that solves the Einstein field equations for a charged or rotating black hole.
The Betti numbers of a manifold are topological invariants that intuitively represent the number of non-contractible d-dimensional "handles" on that manifold (or formally, the number of generators of the d-th homology group). A D-dimensional manifold has D+1 Betti numbers, $B_0, B_1, \ldots, B_D$ (not all of which need be nonzero). The zeroth, $B_0$, counts the connected components, $B_1$ the one-dimensional handles, and so on.
For example, the Betti numbers of a single torus $\mathbb{T}^2$ are: $B_0=1$, $B_1=2$, and $B_2=1$; and for a sphere $\mathbb{S}^2$ they are $B_0=1$, $B_1=0$, and $B_2=1$. These can be related directly back to simpler invariants you may already have heard of, like the Euler characteristic $\chi$, and the genus $g$, and we can see that for closed two-dimensional manifolds with no boundaries like the above, $g$ is related to the first Betti number:
$B_1 = 2g$
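This relation can be sanity-checked against the Betti numbers quoted above for the sphere and the torus, together with the Euler characteristic $\chi = B_0 - B_1 + B_2 = 2 - 2g$. The snippet below is my own illustrative bookkeeping, not part of the answer:

```python
# Betti numbers (B0, B1, B2) of closed orientable surfaces, as quoted above.
surfaces = {
    "sphere": (1, 0, 1),   # S^2
    "torus":  (1, 2, 1),   # T^2
}

def euler_characteristic(betti):
    # chi = B0 - B1 + B2 (alternating sum of Betti numbers)
    b0, b1, b2 = betti
    return b0 - b1 + b2

def genus(betti):
    # g = B1 / 2 for closed orientable surfaces
    return betti[1] // 2

# Consistency check: chi = 2 - 2g for both surfaces
for name, betti in surfaces.items():
    assert euler_characteristic(betti) == 2 - 2 * genus(betti)
```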
For surfaces where $B_1$ is not an even number, that is, for surfaces which are open or aren't orientable, this definition breaks down, and we talk instead of a non-orientable genus $k=B_1$. For higher dimensional manifolds we can take $k=B_{D-1}$.
So to return to your question, we want to know what the Betti numbers of the spacetime you're describing are. Assuming the topology of timelike curves is simple enough (i.e. no closed timelike curves) or that the solution is independent of time, it's easiest to look at the topology of the purely spacelike part of the solution; $D=3$, so $k=B_2$.
@Siva pointed out that the only relevant non-contractible surface that is introduced by the event horizon is one sphere, $\mathbb{S}^2$, so I would guess* that $B_2 = 1$. This would mean that $k=1$, which is a genus, but because $B_2$ is not an even number, essentially because asymptotically flat spacetimes like these are open, not closed, we can't interpret it in the same sense as the genus of a torus or another 2-d handlebody. But it is a genus in some modified sense, it's a non-orientable genus.
So, I would say that $k=1$ in this case, and clearly $k=0$ for an asymptotically flat spacetime with no black holes in it.
EDIT: To directly answer the question "Does a charged or rotating black hole change the genus of spacetime?"; yes, it increases the non-orientable genus ($k = B_2$) from $0$ to $1$.
*If there is a string theorist or someone else who can compute homology groups better than me, this might need checking. In 2D there's an isomorphism between the first homotopy group (which is really what Siva's sphere argument is pointing to) and the first homology group but this doesn't necessarily hold in higher dimensions IIRC. I did a quick calculation with a cellular homology that looked reasonable but I might be oversimplifying. | {
"domain": "physics.stackexchange",
"id": 8879,
"tags": "general-relativity, black-holes, spacetime, topology"
} |
Is this a possible structural isomer for Butene(C4H8) | Question: Today in school, my teacher was explaining the different types of alkenes. Examples included ethene, propene, and butene. She also showed us their structural formulas.
I tried to draw butene's formula and came up with this:
Is it correct?
Answer: Yes, the formula you drew is an example of isobutylene. Isobutylene has the "official" IUPAC name of 2-methylpropene, and is an important industrial chemical. Because it:
has a formula of $\ce{C4H8}$, and thus contains four carbon atoms;
contains a carbon-carbon double bond;
and doesn't contain any other functional groups
... it is indeed an isomer of butene (sometimes known as butylene). You got this one right! | {
"domain": "chemistry.stackexchange",
"id": 4576,
"tags": "molecular-structure, structural-formula"
} |
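The claim that $\ce{C4H8}$ must contain exactly one double bond (or one ring) can be checked with the standard degree-of-unsaturation formula, $\mathrm{DBE} = (2C + 2 + N - H - X)/2$. A small sketch (my own addition, not part of the answer):

```python
def degrees_of_unsaturation(c, h, n=0, x=0):
    # DBE = (2C + 2 + N - H - X) / 2 for C/H/N/halogen compounds;
    # each degree corresponds to one ring or one pi bond.
    return (2 * c + 2 + n - h - x) // 2

# Butene / isobutylene, C4H8: one degree of unsaturation,
# consistent with a single C=C double bond and no rings.
dbe = degrees_of_unsaturation(4, 8)
```

For comparison, butane ($\ce{C4H10}$) gives 0 degrees (fully saturated) and benzene ($\ce{C6H6}$) gives 4 (three $\pi$ bonds plus one ring).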
Is the dissolution of sodium acetate trihydrate endothermic? | Question: Sodium acetate trihydrate dissolves in water to its constituent ions:
$$\ce{NaOAc.3H2O (s) ->[H2O] Na+ (aq) + OAc- (aq) + 3H2O (l)}$$
The crystallisation of sodium acetate from a supersaturated solution is well-known to be exothermic. Since dissolution is the reverse of crystallisation, should the enthalpy of dissolution be positive (i.e. an endothermic process)?
Answer: The process of dissolving sodium acetate trihydrate $(\ce{NaC2H3O2.3H2O})$ in water is endothermic. The molar enthalpy of solution at $T=25\ \mathrm{^\circ C}$ is $\Delta_\text{sol}H^\circ=19.66\ \mathrm{kJ\ mol^{-1}}$.*
Accordingly, since crystallization is the reverse process of dissolution, the crystallization of sodium acetate trihydrate from aqueous solutions is exothermic. For practical purposes, the enthalpy of solution with a reverse sign is taken as enthalpy of crystallization.
The enthalpy of solution mainly depends on two energy contributions: lattice energy and hydration energy. Lattice energy is the energy released when the crystal lattice of an ionic compound is formed. Conversely, energy equal to the lattice energy has to be supplied to break up the crystal lattice; i.e. this process is endothermic. Hydration energy is released when water molecules hydrate the ions; i.e. when water molecules surround the ions and new attractions form between water molecules and ions. This process is exothermic.
Therefore, whether the process of dissolving a salt in water is exothermic or endothermic depends on the relative sizes of the lattice energy and the hydration energy. If the lattice energy is greater than the hydration energy, the process of dissolving the salt in water is endothermic. Conversely, if the lattice energy is smaller than the hydration energy, the process of dissolving the salt in water is exothermic.
For anhydrous sodium acetate $(\ce{NaC2H3O2})$, the lattice energy is actually smaller than the hydration energy. The molar enthalpy of solution is negative; at $T=25\ \mathrm{^\circ C}$ it is $\Delta_\text{sol}H^\circ=-17.32\ \mathrm{kJ\ mol^{-1}}$.* Thus, the process of dissolving anhydrous sodium acetate in water is exothermic.
However, hydration of ions can also occur in crystalline solids. Sodium acetate trihydrate $(\ce{NaC2H3O2.3H2O})$ contains three water molecules for each $\ce{NaC2H3O2}$ unit in the crystal. The corresponding amount of hydration energy is already released when crystalline anhydrous sodium acetate absorbs water and is converted to crystalline sodium acetate trihydrate. Therefore, the additional amount of hydration energy that is released when sodium acetate trihydrate is dissolved in water and completely hydrated is smaller than the total hydration energy that is released when anhydrous sodium acetate is dissolved in water. Hence, the resulting value for the molar enthalpy of solution is larger for sodium acetate trihydrate than for anhydrous sodium acetate.
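The two handbook values quoted here also fix, via a Hess cycle, the enthalpy of converting the anhydrous salt into the solid trihydrate. The arithmetic below is my own bookkeeping (the values are the ones cited in the answer):

```python
# Molar enthalpies of solution at 25 °C, in kJ/mol (CRC Handbook values
# quoted in the answer):
dH_sol_anhydrous = -17.32    # NaC2H3O2 (s)       -> aqueous ions
dH_sol_trihydrate = 19.66    # NaC2H3O2.3H2O (s)  -> aqueous ions

# Hess cycle: dissolving the anhydrous salt is equivalent to first hydrating
# it to the solid trihydrate, then dissolving the trihydrate:
#   dH_sol_anhydrous = dH_hydration_solid + dH_sol_trihydrate
dH_hydration_solid = dH_sol_anhydrous - dH_sol_trihydrate   # about -36.98 kJ/mol
```

The signs confirm the argument: hydration to the solid trihydrate is strongly exothermic, so that energy is no longer available on dissolution, leaving the trihydrate's dissolution endothermic.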
* “Enthalpy of Solution of Electrolytes”, in CRC Handbook of Chemistry and Physics, 90th Edition (CD-ROM Version 2010), David R. Lide, ed., CRC Press/Taylor and Francis, Boca Raton, FL. | {
"domain": "chemistry.stackexchange",
"id": 4521,
"tags": "physical-chemistry, thermodynamics, aqueous-solution, enthalpy"
} |
What's the purpose of spraying water on rebar coils but not on plain coils? | Question: Recently, I visited industry where they produce steel coils (rebar + plain coils). At the end of the production process, I discover that rebar coils are getting sprayed by water:
Whereas plain coils are not getting sprayed by water. What's the purpose of spraying water on rebar coils but not on plain coils?
Answer: The water spray quenches the rebar, which may harden it depending on temperature and chemical composition. If the coils are hot enough to show any color (roughly 1200 °F), some hardening is certain, and because the transformation from high-temperature austenite takes time (depending on chemistry), some hardening can develop even at lower temperatures. The rebar coils only need to be straightened and have the anchor pattern rolled into them to be finished, so this hardness is acceptable. The plain coils are likely to undergo significant further processing, such as wire drawing or cold heading, so they need to be less hard, i.e. more ductile, and are therefore cooled slowly.
"domain": "engineering.stackexchange",
"id": 3952,
"tags": "rolling"
} |
Position-space representation of momentum operator | Question: I have found 2 different forms of $⟨x|\hat{p}|x′⟩$ and I have no idea which one is the true form. Can anyone help please?
$⟨x|\hat{p}|x′⟩ = (i\hbar)\frac{d\delta(x − x′)}{dx′}$
$⟨x|\hat{p}|x′⟩ = -(i\hbar)\frac{d\delta(x − x′)}{dx}$
Answer: Since $\langle \psi | A | \varphi \rangle = \langle \varphi | A^{\dagger} | \psi \rangle^*$, you have to conjugate your first line.
You get
$$\langle x | p | x' \rangle = -i\hbar \frac{\partial}{\partial x}\delta(x-x')$$
or
$$\langle x | p | x' \rangle = \langle x' | p | x\rangle^* = [-i\hbar \frac{\partial}{\partial x'}\delta(x-x')]^* = i\hbar \frac{\partial}{\partial x'}\delta(x-x') = -i\hbar \frac{\partial}{\partial x}\delta(x-x')$$
so both are the same result now. | {
"domain": "physics.stackexchange",
"id": 64605,
"tags": "quantum-mechanics, hilbert-space, operators, momentum"
} |
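The key identity used in the last step, $\partial_{x'}\delta(x-x') = -\partial_x\delta(x-x')$, follows because $\delta$ depends only on the difference $x-x'$. As a numerical sanity check (my own addition, not part of the answer), one can approximate $\delta$ by a narrow Gaussian and compare the two partial derivatives by finite differences:

```python
import math

# Nascent delta function: a narrow normalized Gaussian of width eps.
eps = 0.05

def nascent_delta(x, xp):
    u = x - xp
    return math.exp(-u * u / (2 * eps ** 2)) / (eps * math.sqrt(2 * math.pi))

def d_dx(x, xp, h=1e-6):
    # central finite difference in x
    return (nascent_delta(x + h, xp) - nascent_delta(x - h, xp)) / (2 * h)

def d_dxp(x, xp, h=1e-6):
    # central finite difference in x'
    return (nascent_delta(x, xp + h) - nascent_delta(x, xp - h)) / (2 * h)

# The two partial derivatives differ only by a sign at every sample point:
samples = [(0.1, 0.12), (0.0, 0.03), (-0.2, -0.15)]
```

Since the Gaussian is a function of $x - x'$ alone, the two derivatives are exact negatives of each other, which is the relation used to show both forms agree.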
Calculating GetHashCode efficiently with unordered list | Question: I'm wondering what would be the best way to calculate the hashcode when the order of a sequence doesn't matter. Here's the custom IEqualityComparer<T> i've implemented for an answer on Stackoverflow.
public class AccDocumentItemComparer : IEqualityComparer<AccDocumentItem>
{
public bool Equals(AccDocumentItem x, AccDocumentItem y)
{
if (x == null || y == null)
return false;
if (object.ReferenceEquals(x, y))
return true;
if (x.AccountId != y.AccountId)
return false;
return x.DocumentItemDetails.Select(d => d.DetailAccountId).OrderBy(i => i)
.SequenceEqual(y.DocumentItemDetails.Select(d => d.DetailAccountId).OrderBy(i => i));
}
public int GetHashCode(AccDocumentItem obj)
{
if (obj == null) return int.MinValue;
int hash = obj.AccountId.GetHashCode();
if (obj.DocumentItemDetails == null)
return hash;
int detailHash = 0;
unchecked
{
var orderedDetailIds = obj.DocumentItemDetails
.Select(d => d.DetailAccountId).OrderBy(i => i);
foreach (int detID in orderedDetailIds)
detailHash = 17 * detailHash + detID;
}
return hash + detailHash;
}
}
As you can see the foreach in GetHashCode needs to order the (nested) sequence before it starts calculating the hashcode. If the sequence is large this seems to be inefficient. Is there a better way to calculate the hashcode if the order of a sequence can be ignored?
Here are the simple classes involved:
public class AccDocumentItem
{
public string AccountId { get; set; }
public List<AccDocumentItemDetail> DocumentItemDetails { get; set; }
}
public class AccDocumentItemDetail
{
public int LevelId { get; set; }
public int DetailAccountId { get; set; }
}
Answer: If the order does not matter, use a commutative operator to combine the hash codes of the elements.
Possible candidates are:
^ binary xor // THIS WOULD BE MY CHOICE
+ addition // problem may be the overflow
* multiplication // problem may be the overflow
| binary or // not recommended: after a few of these operations it is likely that all bits are set, so the same hash code would appear for quite different instances | {
"domain": "codereview.stackexchange",
"id": 4634,
"tags": "c#, linq"
} |
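The trade-offs among these combiners can be sketched in a few lines. This is a Python illustration of the idea, not the C# code under review: XOR- and addition-combining are both order-insensitive, but XOR additionally cancels pairs of duplicate elements, which addition does not.

```python
def xor_hash(items):
    # Commutative combine: iteration order does not matter.
    h = 0
    for item in items:
        h ^= hash(item)
    return h

def sum_hash(items):
    # Also commutative; masking emulates the wrap-around of C#'s `unchecked`.
    h = 0
    for item in items:
        h = (h + hash(item)) & 0xFFFFFFFF
    return h

a = [3, 1, 2]
b = [2, 3, 1]          # same elements, different order
c = [3, 1, 2, 5, 5]    # same as `a` plus a duplicated pair of 5s
```

Here `xor_hash(c) == xor_hash(a)` because `5 ^ 5 == 0`, while `sum_hash` distinguishes them; which behavior is acceptable depends on whether the equality semantics treat the collection as a set or a multiset.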
Why are 'table salt and sugar are both terrible candidates for recrystallization'? | Question: I received a comment on one of my other chemistry questions stating that
table salt and sugar are both terrible candidates for recrystallization, albeit for different reasons.
Why are salt and sugar terrible candidates for recrystallization?
Answer: The solubility of $\ce{NaCl}$ has very poor temperature dependency. If you dissolve it at $100^\circ\rm C$ and precipitate at $20^\circ\rm C$, you'll essentially lose about 35 g to purify 3 g. Even using a fridge, you'll still lose more than half of the product, which is quite a bad business. Industrial process relies on evaporation, I think.
Sugar is another story. It does have a nice temperature dependence, but it dissolves too well, even when cold. Just imagine dealing with that awfully viscous syrup, like molasses. It would take forever to filter, and quite a while to crystallize. Industry can handle that, but for home chemistry I'd suggest something more pleasant. | {
"domain": "chemistry.stackexchange",
"id": 6537,
"tags": "purification, recrystallization"
} |
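The "lose about 35 g to purify 3 g" arithmetic generalizes to a simple recovery estimate from solubility curves. The sketch below is my own addition; the solubilities are rough handbook-style approximations (in g per 100 g of water), not figures from the answer, with KNO3 included as a textbook example of a salt that *does* recrystallize well on cooling:

```python
# Approximate solubilities in g per 100 g of water (my own rough values):
solubility = {
    "NaCl": {"hot_100C": 39.0,  "cold_20C": 36.0},
    "KNO3": {"hot_100C": 246.0, "cold_20C": 32.0},
}

def recovery_fraction(s):
    # Fraction of the dissolved salt that crystallizes out when a solution
    # saturated at 100 C is cooled to 20 C.
    return (s["hot_100C"] - s["cold_20C"]) / s["hot_100C"]
```

For NaCl the recovery is under 10%, matching the answer's point about its flat solubility curve, while for KNO3 it is close to 90%.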
Why don't alloys self-galvanically corrode? | Question: Total chemistry newbie here (so this may be a fairly obvious question). With that said, I've had a niggling question that I can't quite resolve myself;
Given that galvanic corrosion is driven by the different electronegativities of different elements, why don't alloys made of significantly different elements self-galvanically corrode very quickly?
Essentially, if I were to place two electrically connected lumps in a conducting bath, one of copper and one of tin (with an anodic difference of 0.3V), I would expect them to corrode relatively quickly. Yet if I make these lumps small enough (i.e. make an alloy of them) the resulting bronze mass is fairly resilient. At what size do the individual lumps of copper and tin change from being rapidly corroding to something stable?
Answer: They certainly can. In most cases where the alloying element is more noble, this is a hazard. Nickel in iron is a popular example: you can get accelerated corrosion of the iron because reduction happens on the nickel surfaces.
Where the alloying element is less noble you usually have superficial corrosion to begin with but in the end the resulting surface is protected by the main element - which is common for bronze(s). If you just have lumps, then you do not have a bronze and slowly but certainly, nature will reclaim the tin.
One has to care about the homogeneity of the alloy. | {
"domain": "chemistry.stackexchange",
"id": 8279,
"tags": "metal, metallurgy, alloy"
} |
Prepending textbox to Wikipedia thumbnail picture | Question: I'm working on small Javascript code that adds text from a clicked-href to the top right corner of the Wikipedia page. So for example, if I click the "Jamaican" link in Sly and Robbie, a textbox would be appended to the page on the top right.
Note: to test this code, copy it and paste it to your console, then click some links.
/**
* My Name
* November 24th 2015
* Device: Mac OS X Yosemite 10.10.5
* Editor - vim
* Browser - Google Chrome Version 46.0.2490.86 (64-bit)
*/
'use strict';
// Detect click on href
$("a").click(function (event) {
// Capture the href from the selected link
var link = this.href;
$.ajax({
type: 'get',
url: link
}).done(function (data) {
isValidWikiPage(link);
// Find only the body text (not the titles.. or unnecessary text tips)
var bodyText = $(data).find('#bodyContent #mw-content-text p').text();
// Prepend the text only if there is text in the clicked link
if (bodyText.length > 0) {
prependText(bodyText);
} else {
alert("No text found!");
}
});
// Prevent the link from being executed
return false;
});
/**
* Checks the URL to see if the link clicked
* is a valid Wikipedia page
* @param {[type]} link The clicked URL
*/
function isValidWikiPage(link) {
// Check if link clicked is a Wikipedia page
if (link.indexOf('wikipedia.org') <= 0) {
// Show an alert
alert(link + " is not a Wikipedia page!");
// Redirect user to the new page
window.location.href = link;
return;
}
}
/**
* Prepend the thumbnail with clicked body text
* @param {string} text Body text
*/
function prependText(text) {
$(".infobox .vcard .plainlist").addClass($(".infobox tbody").prepend(text));
}
Answer: Nice idea.
Here's a solution that breaks down each of the steps into tiny, single purpose functions, whose names reveal exactly what they do. This removes all the nesting and makes the solution easier to follow.
New logical steps:
Identifying a valid link (this was always broken out in the original solution)
"Upgrading" a link into a preview link (this is no longer coupled to 1., and can be re-used for other links)
Fetching the link's html
Extracting just the body text from the html
Filling the preview window with text
Some ideas that might be nice:
It might be nice to have the preview window vanish if you click the same link again.
Remove old contents from preview window, rather than prepending
Other notes:
I put the each statement that kicks everything off at the bottom, so that you can still copy paste into the console, even though it's more logical to have it at the top, since it's the "main" method, so to speak.
I removed the alerts, since I didn't think they were necessary.
Refactored code:
function turnIntoPreviewLink(link) {
$(link).click(function(e) { e.preventDefault(), previewContents(link.href); })
}
function previewContents(wikiUrl) {
grabHtml(wikiUrl).done(function(html) {
addContentToPreview(bodyText(html));
})
}
function grabHtml(wikiUrl) {
return $.ajax({ type: 'get', url: wikiUrl });
}
function bodyText(html) {
return $(html).find('#bodyContent #mw-content-text p').text();
}
function addContentToPreview(text) {
$(".infobox .vcard .plainlist").addClass($(".infobox tbody").prepend(text));
}
function isValidWikiPage(url) {
return url.indexOf('wikipedia.org') >= 0;
}
$("a").each(function(i, link) {
if (isValidWikiPage(link.href)) turnIntoPreviewLink(link);
}); | {
"domain": "codereview.stackexchange",
"id": 16961,
"tags": "javascript, jquery, userscript, wikipedia"
} |
Python converter: number-to-English - Project Euler 17 | Question: So I wrote this function to convert a given number to its interpretation in the English language as part of the Project Euler exercises. It works fine, but I sense that it's rather sloppy and inelegant, especially for Python where many things can be done quickly in a couple of lines. Any feedback on how to make this code more beautiful/Pythonic is appreciated!
NUMBER_WORDS = {
1 : "one",
2 : "two",
3 : "three",
4 : "four",
5 : "five",
6 : "six",
7 : "seven",
8 : "eight",
9 : "nine",
10 : "ten",
11 : "eleven",
12 : "twelve",
13 : "thirteen",
14 : "fourteen",
15 : "fifteen",
16 : "sixteen",
17 : "seventeen",
18 : "eighteen",
19 : "nineteen",
20 : "twenty",
30 : "thirty",
40 : "forty",
50 : "fifty",
60 : "sixty",
70 : "seventy",
80 : "eighty",
90 : "ninety"
}
def convert_number_to_words(num):
#Works up to 99,999
num = str(num)
analyze = 0
postfix = remainder = None
string = ""
if len(num) > 4:
analyze = int(num[0:2])
remainder = num[2:]
postfix = " thousand "
elif len(num) > 3:
analyze = int(num[0:1])
remainder = num[1:]
postfix = " thousand "
elif len(num) > 2:
analyze = int(num[0:1])
remainder = num[1:]
postfix = " hundred "
if int(remainder) > 0:
postfix += "and "
elif int(num) in NUMBER_WORDS:
analyze = int(num)
else:
analyze = int(num[0:1] + "0")
remainder = num[1:]
postfix = "-"
string = NUMBER_WORDS[analyze]
if postfix is not None:
string += postfix
if remainder is not None and int(remainder) > 0:
return string + convert_number_to_words(remainder)
else:
return string
Answer: Here's one using modulo % and list joining that uses your original NUMBER_WORDS dict:
import math  # needed for math.floor below
def int_to_english(n):
english_parts = []
ones = n % 10
tens = n % 100
hundreds = math.floor(n / 100) % 10
thousands = math.floor(n / 1000)
if thousands:
english_parts.append(int_to_english(thousands))
english_parts.append('thousand')
if not hundreds and tens:
english_parts.append('and')
if hundreds:
english_parts.append(NUMBER_WORDS[hundreds])
english_parts.append('hundred')
if tens:
english_parts.append('and')
if tens:
if tens < 20 or ones == 0:
english_parts.append(NUMBER_WORDS[tens])
else:
english_parts.append(NUMBER_WORDS[tens - ones])
english_parts.append(NUMBER_WORDS[ones])
return ' '.join(english_parts)
It works up to 999,999, but could be extended further with a little customisation. | {
"domain": "codereview.stackexchange",
"id": 5659,
"tags": "python, programming-challenge, python-3.x, converting"
} |
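As a usage sketch (my own addition, not part of the answer), here is the refactored function applied to the actual Project Euler 17 task: counting the letters of the written-out numbers 1 to 1000, ignoring spaces. It is self-contained, rebuilding `NUMBER_WORDS` compactly and using integer division in place of `math.floor`:

```python
# Compact rebuild of the NUMBER_WORDS lookup table.
ONES = ("one two three four five six seven eight nine ten eleven twelve "
        "thirteen fourteen fifteen sixteen seventeen eighteen nineteen").split()
TENS = "twenty thirty forty fifty sixty seventy eighty ninety".split()
NUMBER_WORDS = {i + 1: w for i, w in enumerate(ONES)}
NUMBER_WORDS.update({10 * (i + 2): w for i, w in enumerate(TENS)})

def int_to_english(n):
    parts = []
    ones, tens = n % 10, n % 100
    hundreds, thousands = n // 100 % 10, n // 1000
    if thousands:
        parts += [int_to_english(thousands), 'thousand']
        if not hundreds and tens:
            parts.append('and')
    if hundreds:
        parts += [NUMBER_WORDS[hundreds], 'hundred']
        if tens:
            parts.append('and')
    if tens:
        if tens < 20 or ones == 0:
            parts.append(NUMBER_WORDS[tens])
        else:
            parts += [NUMBER_WORDS[tens - ones], NUMBER_WORDS[ones]]
    return ' '.join(parts)

def letter_count(n):
    # Euler 17 counts letters only, ignoring spaces and hyphens.
    return len(int_to_english(n).replace(' ', ''))

# Total letters for one through one thousand.
total = sum(letter_count(i) for i in range(1, 1001))
```

For example, 342 renders as "three hundred and forty two" (23 letters) and 115 as "one hundred and fifteen" (20 letters), matching the examples in the problem statement.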