| anchor | positive | source |
|---|---|---|
Object Orientated PHP, what's wrong with this implementation of OOPHP and how might it be improved? | Question: I need some advice off the back of a question I posted on StackOverflow. I'm a procedural PHP programmer and I'm looking to make the move into object oriented PHP. I have started a small project that basically acts as a small CMS that allows me to separate my website content from the actual structure of the page. The code I am using to achieve this is:
<?php
class Connection extends Mysqli{
    public function __construct($mysqli_host, $mysqli_user, $mysqli_pass, $mysqli_db) {
        parent::__construct($mysqli_host, $mysqli_user, $mysqli_pass, $mysqli_db);
        $this->throwConnectionExceptionOnConnectionError();
    }
    private function throwConnectionExceptionOnConnectionError(){
        if(!$this->connect_error){
            echo "Database connection established<br/>";
        }else{
            $message = sprintf('(%s) %s', $this->connect_errno, $this->connect_error);
            echo "Error connecting to the database.";
            throw new DatabaseException($message);
        }
    }
}
class DatabaseException extends Exception
{
}
class Page{
    private $con;
    public function __construct(Connection $con) {
        $this->con = $con;
        if(isset($_GET['id'])){
            $id = $_GET['id'];
        }else{
            $id = 1;
        }
        $this->get_headers($id);
        $this->get_content($id);
        $this->get_footer($id);
    }
    private function get_headers($pageId){
        $retrieveHead = $this->con->prepare("SELECT headers FROM pages WHERE page_id=?");
        $retrieveHead->bind_param('i', $pageId);
        $retrieveHead->execute();
        $retrieveHead->bind_result($header);
        $retrieveHead->fetch();
        $retrieveHead->close();
        echo $header;
    }
    private function get_footer($pageId){
        $retrieveFooter = $this->con->prepare("SELECT footer FROM pages WHERE page_id=?");
        $retrieveFooter->bind_param('i', $pageId);
        $retrieveFooter->execute();
        $retrieveFooter->bind_result($footer);
        $retrieveFooter->fetch();
        $retrieveFooter->close();
        echo $footer;
    }
    private function get_content($pageId){
        $retreiveContent = $this->con->prepare("SELECT template_id, section_title, i1, i2 FROM content WHERE page_id=? ORDER BY sequence DESC");
        $retreiveContent->bind_param('i', $pageId);
        $retreiveContent->execute();
        $retreiveContent->bind_result($template_id, $section_title, $i1, $i2);
        $retreiveContent->store_result();
        while ($retreiveContent->fetch()) {
            //Variables will be populated for this row.
            //Update the tags in the template.
            $template = $this->get_template($template_id);
            $template = str_replace('[i1]', $i1, $template);
            $template = str_replace('[i2]', $i2, $template);
            //$template is populated with content. Probably want to echo here
            echo $template;
        }
        $retreiveContent->free_result();
        $retreiveContent->close();
    }
    private function get_template($template_id){
        $retreiveTemplate = $this->con->prepare("SELECT code FROM templates WHERE template_id=?");
        $retreiveTemplate->bind_param('i', $template_id);
        $retreiveTemplate->execute();
        $retreiveTemplate->bind_result($template);
        $retreiveTemplate->fetch();
        $retreiveTemplate->close();
        return $template;
    }
}
?>
I create a page object in my index.php file which is used to output the page by running the functions below in the order listed in the Page constructor. The comments I received on StackOverflow were along the lines of:
This code violates numerous OOP principles. As a result of that, I hope that no newbies attempt to use this as a means of learning OOP, as they will be learning an invalid programming paradigm if they use what you have here as an example. Read up on the single responsibility principle and the other components of SOLID, and perhaps get a book on design patterns.
I've read up on these subjects and I remember a lot of the principles from Java in my first year of Uni (about 6 years ago now), but as far as I can see in my code I am separating concerns as much as possible. It doesn't make sense to me to have the database connection as part of the page, as it isn't really a property of the page, whereas the headers and footers and content are. However, the page requires a database connection to function, so I therefore have to pass a Connection object into the Page class to achieve the connectivity.
I've asked the authors of the comments numerous times to explain their reasoning behind such comments, but all they keep doing is making statements saying this is bad code without providing examples as to why, or what I might do to change it. Therefore I thought I'd ask this question here, as I'll never learn if I'm not given a helping hand along the way.
Answer: What I see at first glance.
Page and Connection are extending Mysqli
Why is a Page a special Mysqli? (Page extends Mysqli)
Why is a Connection a special Mysqli? (Connection extends Mysqli)
A page can use a Mysqli to save stuff, but it is not a Mysqli.
Also the word Connection suggests it's a base-class of Mysqli (since a Connection can be a connection to a OracleDB, a TCP-Server etc.) but in your case Mysqli is the base for Connection.
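That "has-a, not is-a" point can be sketched minimally. The following is an illustrative Python sketch (not the poster's PHP, and all names here are hypothetical) showing composition with an injected connection instead of inheritance:

```python
# Composition over inheritance: a Page *uses* a connection; it is not one.
class Connection:
    """Hypothetical wrapper that *has* a database driver rather than extending it."""
    def query(self, sql, params=()):
        # Stand-in for prepare/bind/execute against a real driver.
        return []

class Page:
    def __init__(self, connection):
        # The connection is injected, so Page carries no inheritance
        # relationship with any database class.
        self.connection = connection

    def headers(self, page_id):
        return self.connection.query(
            "SELECT headers FROM pages WHERE page_id=?", (page_id,))
```

Swapping the driver (MySQL, Oracle, a test stub) then only means passing a different object to `Page`.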
The whole code implies you have no idea of OOP at all. Which is okay; you can read some stuff about it, there is a lot out there :)
Probably most OOP programmers just see the code and don't know where to start improving it, because it just eludes them completely. | {
"domain": "codereview.stackexchange",
"id": 3147,
"tags": "php, object-oriented"
} |
Probable error (Area) | Question: Calculate the area of the triangular tract of land and its most
Answer: The famous ancient Greco-Egyptian mathematician Heron of Alexandria has this formula.
Say $S_1$, $S_2$ and $S_3$ are the sides.
And we call
$$ S_a = \frac{S_1 + S_2 + S_3}{2}, \quad \text{then} \quad \text{Area} = \sqrt{S_a (S_a - S_1)(S_a - S_2)(S_a - S_3)} $$
I'll let you do the calcs.
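As a quick numeric sanity check of the formula (the side lengths here are my own example, since the original problem's figures aren't shown):

```python
import math

def heron_area(s1, s2, s3):
    # Semi-perimeter, then Heron's formula.
    sa = (s1 + s2 + s3) / 2
    return math.sqrt(sa * (sa - s1) * (sa - s2) * (sa - s3))

# Classic 3-4-5 right triangle: area = (3 * 4) / 2 = 6
print(heron_area(3, 4, 5))  # → 6.0
```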
Wikipedia link to Heron | {
"domain": "engineering.stackexchange",
"id": 3745,
"tags": "surveying"
} |
Navigation in semantic environment and reasoning | Question:
Hello
I want to navigate my robot through doors and past other objects using reasoning and semantic maps. I have the map of my environment, but the objects are not labelled on that map. So I was thinking of using more knowledge about my environment and also doing reasoning about it.
I found that using a knowledge base like KnowRob (http://www.ros.org/wiki/knowrob) to store e.g. semantic environment information (in a so-called semantic map) is a good approach. But the wiki is pretty messy, so I do not know, for example, how to generate that semantic map from my already existing yaml map (I could not find that).
I'm using only a laser range sensor and no point cloud data. Any useful help or some examples that can help?
Thanks
Originally posted by Astronaut on ROS Answers with karma: 330 on 2012-08-13
Post score: 2
Answer:
Sorry, the wiki is not necessarily 'a mess' just because there is no tutorial for exactly your use case. All relevant tutorials and more in-depth descriptions are directly linked from the knowrob wiki page on ros.org.
Effectively using the system and adapting it to your needs won't work without some learning from your side. I'd recommend to go through the basic tutorials and have a closer look at those about methods for reasoning about objects or the representation of map data.
Besides, there are quite some publications describing different aspects of the system.
Originally posted by moritz with karma: 2673 on 2012-08-14
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by Astronaut on 2012-08-14:
Ok. But I could not find how to use my environment yaml map in knowrob. So how do I convert it to OWL??
Comment by moritz on 2012-08-14:
There is no direct conversion. Your YAML file does only describe the extension of the map and the conversion from pixel space to metric space. You'll need to create the semantic map yourself because it contains much more information which cannot be extracted automatically from an occupancy grid map.
Comment by Astronaut on 2012-08-14:
OK. And is there a tutorial or example of how to create the semantic map from my occupancy grid map? Because I already have that map of the environment. I'm using gmapping and the map server. So I want to use that information in knowrob and create the semantic map from that. Is that possible?
Comment by Lorenz on 2012-08-15:
As Moritz already said, you will have to do some manual work to construct the map. There is no automatic way right now. There are a lot of examples of semantic maps in the knowrob repository and there are a bunch of publications that teach you the basics which you should understand first.
Comment by ctguell on 2013-09-02:
@moritz I'm trying to detect objects with a Kinect, so I can later use this information in the navigation stack of ROS, and wanted to know if this knowrob package can learn from Kinect data and, if so, whether it can learn with a moving camera?? thanks for the help | {
"domain": "robotics.stackexchange",
"id": 10596,
"tags": "knowrob"
} |
How to convert vertical motion to horizontal? | Question: I am interested in using this miniature motor Squiggle Micro Motor to create very tiny horizontal movements. However, due to very limited space, I can only place it vertically within my project.
Assuming this motor is placed as follows, how can one adapt it to simultaneous movement at a right angle? (Ideally with the X-axis movement matched to the Y-axis movement as well as possible.)
Answer: If this is true linear motion (non-rotational) then you will need some sort of a pivoting linkage in between the two units to transfer one motion to the other. Something like this would probably work:
As the lower link moves vertically, it rotates the red gear which in turn pushes the second link horizontally.
However, given that your image shows more of a screw type link, I feel like the lower link will be rotating (correct me if I'm wrong here). In that case, a different approach would need to be taken - at least, a rotational ball joint would need to be used to attach the rotating unit to any linkage. | {
"domain": "robotics.stackexchange",
"id": 2404,
"tags": "motor, motion, movement, driver, linear-bearing"
} |
Can a hydrogen atom emit characteristic X-ray? | Question: Is it not possible that incoming electron excite the hydrogen atom and then when it de-excites it releases radiation?
Here please don't answer no because hydrogen is light.
My actual query is: can a characteristic X-ray be produced by excitation and then de-excitation? Is knocking out an electron necessary?
Answer:
Is it not possible that incoming electron excite the hydrogen atom and then when it de-excites it releases radiation?
Yes, that is what is happening. Light is a confluence of photons, with frequencies that run from gamma rays down to very low frequency ELF. The spectrum of photons emitted by the hydrogen atom lies in and around the visible frequencies.
Here please don't answer no because hydrogen is light.
Well, do not use it as an example then. Its transitions do not have enough energy to create X-rays.
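A back-of-the-envelope check of that claim, using the standard Rydberg energy (textbook values, not from the original answer): the largest transition energy hydrogen can emit is its 13.6 eV ionization energy, well below the roughly 100 eV where the X-ray band conventionally begins.

```python
RYDBERG_EV = 13.6057  # hydrogen ionization energy in eV

def transition_energy_ev(n_initial, n_final):
    # Energy of the photon emitted when the electron drops from
    # level n_initial to level n_final (n_final < n_initial).
    return RYDBERG_EV * (1 / n_final**2 - 1 / n_initial**2)

print(transition_energy_ev(2, 1))  # Lyman-alpha, ~10.2 eV (ultraviolet)
# Even the series limit is only ~13.6 eV -- far below typical keV X-rays.
```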
My actual query is: can a characteristic X-ray be produced by excitation and then de-excitation? Is knocking out an electron necessary?
Yes, the X-ray lines happen because an electron in an atom is changing energy levels. There are two ways to change energy levels in an atom: 1) absorb a photon and go to a higher energy level, or 2) emit a photon and go to a lower energy level.
The only way to get characteristic x rays is in exciting the atoms, which then deexcite giving the specific frequency. | {
"domain": "physics.stackexchange",
"id": 62499,
"tags": "particle-physics, nuclear-physics, collision, elementary-particles, x-rays"
} |
rosjava groovy install fail | Question:
Hey
I have downloaded rosjava_core into "~/catkin_ws/src/rosjava_core".
I am using Java version:
OpenJDK Runtime Environment (IcedTea6 1.12.3) (6b27-1.12.3-0ubuntu1~12.04.1)
OpenJDK 64-Bit Server VM (build 20.0-b12, mixed mode)
Although trying with JDK 7 still produces the same error.
I am installing rosjava on Groovy after the catkin update and receiving an error when running catkin_make from "~/catkin_ws" (see below for the printout).
Error message in print out
19 errors
:rosjava_test:compileJava FAILED
FAILURE: Build failed with an exception.
What went wrong:
Execution failed for task ':rosjava_test:compileJava'.
Compilation failed; see the compiler error output for details.
Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output.
BUILD FAILED
Has anyone encountered this error in groovy or previous versions? Searching has found some results but no solution to the problem as far as I can find.
EDIT:
After a bit more debugging, it is saying "Package test_ros does not exist".
Originally posted by HenryW on ROS Answers with karma: 140 on 2013-04-30
Post score: 1
Answer:
Ok, I found a solution; it is not a proper fix, although it works.
First I found test_ros was not being generated inside "rosjava_core/rosjava_messages/build/generated-src".
Taking test_ros from a previous compilation of rosjava (different computer running fuerte), I placed this folder inside the generated-src folder.
Running ./gradlew or catkin_make, then ran a successful build.
This is not a proper fix, and I have put an issue under the git repo for rosjava; hopefully someone who knows the system a bit better than I do can solve it.
Henry
Originally posted by HenryW with karma: 140 on 2013-05-02
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by HenryW on 2013-05-26:
solution has been found, details are on the git repo for rosjava: https://github.com/rosjava/rosjava_core/issues/156#issuecomment-18445073 | {
"domain": "robotics.stackexchange",
"id": 14015,
"tags": "ros-groovy, rosjava"
} |
Asking for C++ advice based on a radix sort function | Question: Can someone point out what possible edge cases/leaks/inefficiencies my code has and how they could be avoided/fixed? Basically, I want to know what mistakes I make so I can avoid making them in the future.
I chose an LSD radix sort function as a sample for no particular reason.
The function:
//arrPtr - pointer to the array being sorted; count - its count
//scoreFunction - function that assigns scores to elements based on which they will be sorted in ascending order
//args - pointer to an array of pointers used to pass data to the scoreFunction
template<class T>
void rsort(T *arrPtr, size_t count, int scoreFunction(T val, void **args) = [](T val, void **args) {return val; }, void **args = nullptr) {
    struct elem { //Pairing indices with their scores
        size_t idx;
        int scr;
    };
    struct elemPtr { //Pointer struct to avoid memory leaks in case of an exception
        elem *ptr;
        ~elemPtr() { delete[] ptr; }
        void swap(elemPtr &aux) {
            elemPtr t{ ptr };
            ptr = aux.ptr;
            aux.ptr = t.ptr;
            t.ptr = nullptr;
        }
    };
    elemPtr srt{ new elem[count] }, srtAux{ new elem[count] };
    for (size_t i = 0; i < count; ++i) { //Assigning indices and scores that are converted to unsigned representation + 2^31
        srt.ptr[i].idx = i;
        srt.ptr[i].scr = scoreFunction(arrPtr[i], args) ^ 0x80000000;
    }
    for (unsigned char bShft = 0; bShft < 32; bShft += 8) { // Base 256 LSD radix sort based on the scores
        size_t dCnt[256]{};
        for (size_t i = 0; i < count; ++i)
            ++dCnt[srt.ptr[i].scr >> bShft & 255];
        for (unsigned char i = 0; i < 255; ++i)
            dCnt[i + 1] += dCnt[i];
        for (size_t i = count; i > 0; --i) {
            --dCnt[srt.ptr[i - 1].scr >> bShft & 255];
            srtAux.ptr[dCnt[srt.ptr[i - 1].scr >> bShft & 255]] = srt.ptr[i - 1];
        }
        srt.swap(srtAux);
    }
    for (size_t i = 0; i < count; ++i) //Filling with 0s to reuse as an array of bools that represent if the element is in the right place
        srtAux.ptr[i].idx = 0;
    for (size_t i = 0; i < count; ++i) { //Rearranging the original array by dividing the elements into enclosed loops
        if (srtAux.ptr[i].idx)
            continue;
        T temp = arrPtr[i];
        size_t curIdx = i;
        while (srt.ptr[curIdx].idx != i) {
            srtAux.ptr[curIdx].idx = 1;
            arrPtr[curIdx] = arrPtr[srt.ptr[curIdx].idx];
            curIdx = srt.ptr[curIdx].idx;
        }
        srtAux.ptr[curIdx].idx = 1;
        arrPtr[curIdx] = temp;
    }
}
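As an aside on the `^ 0x80000000` in the score-assignment loop above: flipping the sign bit maps signed 32-bit order onto unsigned order, which is what lets the byte-wise radix passes sort signed scores correctly. A quick sketch of that trick (Python, illustrative only):

```python
BIAS = 0x80000000  # flipping the sign bit biases int32 values into uint32 order

def biased_key(score):
    # Reinterpret the signed 32-bit score as unsigned, then flip the sign bit.
    return (score & 0xFFFFFFFF) ^ BIAS

scores = [3, -1, 0, -2**31, 2**31 - 1]
# Sorting by the biased unsigned key gives the same order as signed comparison.
assert sorted(scores, key=biased_key) == sorted(scores)
```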
I didn't use any libraries to make the function independent-- if that makes sense.
Answer: Use the standard library, or emulate it
I didn't use any libraries to make the function independent-- if that makes sense.
It can indeed be a burden for developers if a library they want to use has lots of dependencies, as they need to incorporate those into their build system, have to deal with licensing issues, and so on. However, on almost any platform you will have a standard library that you can rely on without doing anything special.
Consider your elemPtr, this is basically the same as std::unique_ptr.
So I would strongly recommend using the latter instead. And in case this isn't available on the platform you want to run it on, you could create a drop-in replacement for std::unique_ptr. You should avoid adding things to the std namespace yourself, so a way to do this properly would be:
#if HAVE_UNIQUE_PTR
#include <memory>
using std::unique_ptr;
#else
template<typename T>
class unique_ptr {
    // ...
};
#endif
Passing the score function
You are passing the score function as a regular function pointer, and you want to be able to have a variable number of arguments passed to it. Using void** is not very safe. Again, if you can I would recommend using the standard library here, and use std::function<int(T)> to pass the function. It will allow you to pass lambdas with captures, so you don't have to worry about any extra arguments.
Another solution is to make the template itself variadic, allowing you to really pass an arbitrary number of arguments without resorting to void**, like so:
template<typename T, typename... Args>
void rsort(T *arrPtr, size_t count, int scoreFunction(T val, Args...) = [](T val, Args...) {return val;}, Args... args) {
    // ...
    srt.ptr[i].scr = scoreFunction(arrPtr[i], args...) ^ 0x80000000;
    // ...
}
Pass by const reference where appropriate
Passing val and args by value to scoreFunction() can be inefficient, depending on their types. Prefer to pass them by const reference instead:
int scoreFunction(const T& val, const Args&...)
Use of a score function might be problematic
You are not radix sorting T, you are radix sorting the 32-bit score associated with each element of the array pointed to by arrPtr. However, it might not always be easy to create a score function. Consider having an array of std::strings. It would be easy to radix-sort strings, using char as the radix. However, given a set of arbitrary strings that I want to sort alphabetically, it's very hard to create a scoreFunction() that would map every string to a 32-bit value and still keep them in the same order. | {
"domain": "codereview.stackexchange",
"id": 43372,
"tags": "c++, performance, radix-sort"
} |
Do Microwave oven heating times grow linearly with Wattage? Calculating optimal heating time | Question: So this is a completely random and trivial question that was prompted by looking at my microwave oven and the back of a TV dinner and my google searching failed to produce a meaningful answer so figured I'd ask here.
On my TV dinner box it has different cook times based on Microwave Oven Wattage:
1100 Watts - Cook 2 minutes, stir and cook for another 1.5 minutes. (3.5 minutes total)
700 Watts - Cook 3 minutes, stir and cook for another 2.5 minutes. (5.5 minutes total)
My oven is 900 Watts, which is right in the middle.
Assuming those times listed on the box are the scientifically optimal cook time (which is doubtful, but just go with me), is it fair to assume I should use the linear average for 2.5 minutes, stir and cook for another 2 minutes (4.5 minutes total), or is there a different rate of growth between the 700 watt and 1100 watt ovens that would change the optimal cook time?
Answer: The rate at which a mass absorbs microwave radiation is characterized by the 'Specific Absorption Rate', which is proportional to the electromagnetic field intensity:
Wikipedia has a dedicated article to this phenomenon but in short it says
$$\text{SAR} = \int_\text{sample} \frac{\sigma(\vec{r})|\vec{E}(\vec{r})|^2}{\rho(\vec{r})} d\vec{r}$$
Because the absorption rate is proportional to the EM field intensity, $|\vec{E}(\vec{r})|^2$, which is in turn proportional to power, then the relationship will indeed be linear.
Assuming 100% energy efficiency (which is a wild overestimate: 20% might be more accurate but I do not know the answer to that question) your total 'energy' transferred to your dinner will be:
$$ \text{Energy} = \text{Power} \Delta t$$
i.e.
$$\Delta t = \frac{\text{Energy}}{\text{Power}} $$
The cook time will be inversely proportional to your oven power.
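That inverse relationship can be checked numerically with the two data points from the box (a quick sketch; the function name is mine):

```python
def cook_time_seconds(reference_watts, reference_seconds, oven_watts):
    # Equal absorbed energy: W_ref * t_ref == W_oven * t_oven
    energy_joules = reference_watts * reference_seconds
    return energy_joules / oven_watts

# Both data points on the box imply the same total energy:
assert 1100 * 210 == 700 * 330 == 231_000  # joules

print(cook_time_seconds(1100, 210, 900))  # ~256.7 seconds, about 4.28 minutes
```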
1100 watts for 3.5 minutes computes to
$$ \text{Energy } = 1100 \text{ Watts } \times 210 \text{ seconds } = 231,000 \text{ Joules}$$
700 Watts for 5.5 minutes computes to
$$ \text{Energy } = 700 \text{ Watts } \times 330 \text{ seconds } = 231,000 \text{ Joules}$$
Thus a 900 Watt oven would necessitate
$$\Delta t = \frac{\text{Energy}}{\text{Power}} = 256.66 \text{ seconds } = 4.278 \text{ minutes }$$ | {
"domain": "physics.stackexchange",
"id": 8106,
"tags": "thermodynamics, microwaves, applied-physics"
} |
Animal identification (larva) | Question: I found a tiny, white animal in my mouth on the morning of June 28, 2015, in Toronto, Ontario. Luckily I don't seem to have hurt it. I was eating some sunflower and pumpkin seeds at the time. I believe that's how it got there.
The pictures include a toothpick and a Canadian cent to give an idea of its size.
I also took a video of it crawling around. I will post it if the pictures are not sufficient for identification.
Answer: This animal is a moth larva. The images presented are blurry, but if there are not any dots on the larva, my first suggestion is:
pantry/Indian meal moth larvae -
"They are a common grain-feeding pest found around the world, feeding on cereals and similar products."
(picture source) | {
"domain": "biology.stackexchange",
"id": 4179,
"tags": "zoology, entomology, species-identification"
} |
Rosserial arduino connectivity problem " socket.error: [Errno 111] Connection refused" | Question:
I am using Ubuntu 14.04LTS with Indigo ROS and an Arduino Mega 2560.
When I run the command rosrun rosserial_python serial_node.py _port:=/dev/ttyACM0 _baud:=115200, I face some connection problems.
At first this command was working normally and the serial connection was established successfully with the Arduino.
Suddenly, after restarting my PC, the following error appeared: "socket.error: [Errno 111] Connection refused" and the serial connection wasn't established.
I ran the following commands in sequence:
> sudo usermod -a -G dialout wael
> sudo chmod a+rw /dev/ttyACM0
> rosrun rosserial_python serial_node.py _port:=/dev/ttyACM0 _baud:=115200
The following error appears:
Traceback (most recent call last):
File "/opt/ros/indigo/lib/rosserial_python/serial_node.py", line 46, in <module>
rospy.init_node("serial_node")
File "/opt/ros/indigo/lib/python2.7/dist-packages/rospy/client.py", line 323, in init_node
_init_node_params(argv, name)
File "/opt/ros/indigo/lib/python2.7/dist-packages/rospy/client.py", line 186, in _init_node_params
set_param(rosgraph.names.PRIV_NAME + param_name, param_value)
File "/opt/ros/indigo/lib/python2.7/dist-packages/rospy/client.py", line 504, in set_param
_param_server[param_name] = param_value #MasterProxy does all the magic for us
File "/opt/ros/indigo/lib/python2.7/dist-packages/rospy/msproxy.py", line 148, in __setitem__
self.target.setParam(rospy.names.get_caller_id(), rospy.names.resolve_name(key), val)
File "/usr/lib/python2.7/xmlrpclib.py", line 1233, in __call__
return self.__send(self.__name, args)
File "/usr/lib/python2.7/xmlrpclib.py", line 1587, in __request
verbose=self.__verbose
File "/usr/lib/python2.7/xmlrpclib.py", line 1273, in request
return self.single_request(host, handler, request_body, verbose)
File "/usr/lib/python2.7/xmlrpclib.py", line 1301, in single_request
self.send_content(h, request_body)
File "/usr/lib/python2.7/xmlrpclib.py", line 1448, in send_content
connection.endheaders(request_body)
File "/usr/lib/python2.7/httplib.py", line 975, in endheaders
self._send_output(message_body)
File "/usr/lib/python2.7/httplib.py", line 835, in _send_output
self.send(msg)
File "/usr/lib/python2.7/httplib.py", line 797, in send
self.connect()
File "/usr/lib/python2.7/httplib.py", line 778, in connect
self.timeout, self.source_address)
File "/usr/lib/python2.7/socket.py", line 571, in create_connection
raise err
socket.error: [Errno 111] Connection refused
What can I do to solve this problem, please?
Originally posted by Wael on ROS Answers with karma: 36 on 2016-11-17
Post score: 0
Original comments
Comment by gvdhoorn on 2016-11-17:
Have you set ROS_MASTER_URI or ROS_IP or ROS_HOSTNAME to something yourself? Are you on a DHCP network and could your IP have changed?
Answer:
The problem was with the ROS_MASTER_URI; it was not active.
After activating it, the problem was solved.
Thank you for your support.
Originally posted by Wael with karma: 36 on 2016-11-17
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by shawnysh on 2017-01-12:
Thanks for your comment, it helps.
Comment by himS1234 on 2018-06-12:
How to activate it?
Comment by Carls on 2020-04-28:
Just type 'roscore' | {
"domain": "robotics.stackexchange",
"id": 26269,
"tags": "ros, rosserial"
} |
Equivalent partition sums | Question: I'm looking for feedback on my solution to the following prompt:
Given an array of ints, return the index such that the sum of the
elements to the right of that index equals the sum of the elements
to the left of that index. If there is no such index, return Nothing.
If there is more than one such index, return the left-most index. Example: peak([1,2,3,5,3,2,1]) = 3, because the sum of the elements at indexes 0,1,2 == sum of elements at indexes 4,5,6. We don't sum index 3.
Questions:
Is the solution best defined recursively? Or is there some kind of higher-order function that I could use?
The function peak' takes four arguments. That feels a little unwieldy. Any reasonable way to shorten that parameter list?
Conventional programming wisdom says to avoid "magic numbers", so I introduced some bindings in the let clause. Does that make peak easier to read, or does it just seem like bloat?
peak :: [Int] -> Maybe Int
peak numbers =
    let leftSum = 0
        rightSum = sum numbers
        startingIndex = 0
    in peak' numbers leftSum rightSum startingIndex

peak' :: [Int] -> Int -> Int -> Int -> Maybe Int
peak' [] _ _ _ = Nothing
peak' (x:xs) leftSum rightSum index
    | leftSum + x == rightSum = Just index
    | otherwise = peak' xs (leftSum + x) (rightSum - x) (index + 1)
Prompt source: https://www.codewars.com/kata/5a61a846cadebf9738000076/train/haskell
Answer:
You can use scans to implement this:
import Data.List
peak :: [Int] -> Maybe Int
peak [] = Nothing
peak xs = elemIndex True $ zipWith (==) (scanl1 (+) xs) (scanr1 (+) xs)
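The same prefix-sum/suffix-sum idea, sketched in Python for comparison (not part of the original answer):

```python
from itertools import accumulate

def peak(numbers):
    # Both running sums include the current element, so equality at
    # index i means sum(left of i) == sum(right of i).
    prefix = list(accumulate(numbers))
    suffix = list(accumulate(reversed(numbers)))[::-1]
    for i, (left, right) in enumerate(zip(prefix, suffix)):
        if left == right:
            return i
    return None

print(peak([1, 2, 3, 5, 3, 2, 1]))  # → 3
```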
I don't think there is a way to reduce the number of arguments, but I think you can give them shorter names in the helper function and if you put the helper function in a where clause then you can also leave out the type signature:
peak :: [Int] -> Maybe Int
peak numbers =
    let leftSum = 0
        rightSum = sum numbers
        startingIndex = 0
    in peak' numbers leftSum rightSum startingIndex
  where
    peak' [] _ _ _ = Nothing
    peak' (x:xs) l r i
        | l + x == r = Just i
        | otherwise = peak' xs (l + x) (r - x) (i + 1)
Personally, I believe shorter code is usually more readable than long code. And the length of variable names should be proportional to the size of the scope in which they are used, i.e. local variables should get short names and global or top-level variables and functions should get longer names.
I think it is bloat in this case, again I really like short code. And I think 0 is never really considered a magic number.
And recursive helper functions are usually called go in Haskell. And if you put the list as the last argument in the helper function, the recursive calls read more naturally (a full eta-reduction of peak isn't possible here, because the initial sum still needs the list). So the end result would be:
peak :: [Int] -> Maybe Int
peak xs = go 0 (sum xs) 0 xs
  where
    go _ _ _ [] = Nothing
    go l r i (x:xs)
        | l + x == r = Just i
        | otherwise = go (l + x) (r - x) (i + 1) xs | {
"domain": "codereview.stackexchange",
"id": 41615,
"tags": "beginner, haskell, recursion"
} |
Double Slit Information Destruction | Question: In the double slit experiment, when a detector is added to tell which hole the photon went through the interference pattern disappears. Would there be an interference pattern if the data was sent to a computer, but never recorded? What if it was recorded but destroyed before anyone looked at it?
Answer: Yes and no. "Sending the data to a computer, then destroying it" is probably too complex an operation to let the state of a photon produce the same interference pattern again.
Yet, experiments in the spirit of your idea have indeed been performed, by playing around with entangled photons, sending one through the slit and using the other to obtain information about the path taken. They are called quantum eraser experiments: (quoting from the description of such an experiment from Wikipedia)
First, a photon is shot through a specialized nonlinear optical device: a beta barium borate (BBO) crystal. This crystal converts the single photon into two entangled photons of lower frequency, a process known as spontaneous parametric down-conversion (SPDC). These entangled photons follow separate paths. One photon goes directly to a detector, while the second photon passes through the double-slit mask to a second detector. Both detectors are connected to a coincidence circuit, ensuring that only entangled photon pairs are counted. A stepper motor moves the second detector to scan across the target area, producing an intensity map. This configuration yields the familiar interference pattern.
Next, a circular polarizer is placed in front of each slit in the double-slit mask, producing clockwise circular polarization in light passing through one slit, and counter-clockwise circular polarization in the other slit. This polarization is measured at the detector, thus "marking" the photons and destroying the interference pattern.
Finally, a linear polarizer is introduced in the path of the first photon of the entangled pair, giving this photon a diagonal polarization. Entanglement ensures a complementary diagonal polarization in its partner, which passes through the double-slit mask. This alters the effect of the circular polarizers: each will produce a mix of clockwise and counter-clockwise polarized light. Thus the second detector can no longer determine which path was taken, and the interference fringes are restored.
What happens is that though the paths through the slits are in principle distinguishable, no interaction (in particular, no macroscopic interaction that could cause decoherence or whatever you believe happens during a measurement) is actually taking place that would depend on the path the photon takes after the second polarizer is introduced. "Recording data", as you propose, would change that, and so that cannot work. | {
"domain": "physics.stackexchange",
"id": 17770,
"tags": "double-slit-experiment"
} |
Induction cooking: why ferromagnetic pan? | Question:
In the image above, we have the principle of induction cooking. An alternating current is run through the coil, which causes a change in flux. This change in flux induces eddy currents in the conductive pan, and by Joule heating/resistive heating (P=VI), this causes the pan to heat up.
So the only thing that you need is a pan that can conduct electricity/the eddy currents right? As long as the pan does not have a too low or a too high resistance.
However, why is the effect specifically optimal for ferromagnetic materials/conductors? How does the ability to magnetize somehow enhance the effect of induction heating? Does the magnetization of the pan itself somehow enhance the eddy currents or something?
Answer: Ferromagnetic material is needed for the same reason that transformers working with low frequency alternating current (a.c.) need iron cores. The magnetic field generated by coils in the cooker's hob are supplemented by magnetic fields due to alignment by this field of magnetic domains in the iron. The resulting magnetic flux density may be hundreds of times larger than if there were no iron. Thus as the field changes, because it is a.c. that is passed through the coils, the changes are far greater than if there were no iron, and a much larger voltage is generated. Hence the heating effect is greater, despite the greater resistivity of the iron compared with that of (say) copper or aluminium. | {
"domain": "physics.stackexchange",
"id": 95232,
"tags": "electromagnetism, electromagnetic-induction"
} |
What would be an example of something digital which isn't electronic? | Question: The terms digital and electronic are often used interchangeably but I know that it's not correct because something can be digital but not electronic.
Something can be digital in the sense that it's discrete both in time and magnitude.
A picture of a ruler appearing each second is discrete in time because this timing is countable, and it's also discrete in value/amplitude/magnitude because it is split into countable (in this case, two or more) parts.
A picture like that is likely to be electronic or displayed on an electronic device.
What would be an example of something digital which isn't electronic?
The ruler itself (given all times it appeared in the minds of people)?
Answer: Counting with my fingers.
Also when I was a kid in the '60s, all of the adding machines and cash registers were digital and mechanical. | {
"domain": "dsp.stackexchange",
"id": 12047,
"tags": "terminology"
} |
Could all matter in a black hole be actually only on its surface (Schwarzschild radius)? | Question: Layman here, sorry if I miss something obvious.
Black hole entropy depends linearly on its surface. If all matter in a black hole would actually be located only on its surface (or very near), wouldn't it cause entropy to behave exactly like that?
My idea is that when new matter falls in a black hole all black hole matter gets displaced further from the center of the black hole. Somewhat akin to how gravitational waves can displace matter far away from black hole. In that way black hole interior is just a (topological) hole in fabric of the space time.
Does this idea violate any known physical laws?
Answer:
Black hole entropy depends linearly on its surface. If all matter in a black hole would actually be located only on its surface (or very near), wouldn't it cause entropy to behave exactly like that?
This doesn't really make sense, for a couple of reasons. (1) Once a chunk of matter has fallen into a black hole, we expect the black hole's entropy to remain constant from that time on. Therefore the location of the matter doesn't relate logically to the entropy. (2) In your proposed interpretation, you give no reason why the entropy per unit area should be fixed. In fact, the area of the event horizon is proportional to the square of the amount of matter that has fallen in, so by your interpretation, the entropy should be proportional to the square root of the area.
Another thing you need to realize is that if matter falls into a black hole, it doesn't make sense to discuss where the matter is "now." General relativity doesn't define simultaneity in a way that would make that meaningful. For more on this, see this answer: https://physics.stackexchange.com/a/146852/4552
Somewhat akin to how gravitational waves can displace matter far away from black hole.
Not sure what you mean by this. This sounds wrong.
In that way black hole interior is just a (topological) hole in fabric of the space time.
This is not an interpretation that fits very well with current ideas about relativity. For more details, see this question: Is it possible the space-time manifold itself could stop at a black hole's event horizon? Basically it's the singularity that we describe as a topological hole, not the entire interior. | {
"domain": "physics.stackexchange",
"id": 51124,
"tags": "black-holes, spacetime, entropy, event-horizon, singularities"
} |
nickel-copper fabric: shielding viability after being subjected to weathering | Question: I know the solution can be as simple as a well-sealed garbage can, but I am contemplating making a more malleable fabric-based Faraday-cage-esque contraption. The plan is to use nickel-copper fabric to achieve the shielding; I'll experiment with how many layers/folds create the best seal. It is to be placed outside, exposed to the elements, potentially incurring moderate to heavy rain. Before I get too heavily invested in this, I would like to really think about the shielding viability of the fabric in the long term.
For reference, here is a generic picture of the nickel-copper fabric. Amazon has its fair share too, for those interested.
Question:
Unlike its ferrous counterparts, nickel/copper do not rust but rather develop a patina or tarnish after weathering. Will nickel-copper fabric tarnish like we might expect a penny to tarnish? If so, what compromises to shielding would be sustained? Also, if so, would that compromise the structural integrity of the fabric (more prone to tears, etc.)?
Further Clarifications:
- exact chemical makeup of the fabric is uncertain; the one I bought was simply marketed as "nickel-copper fabric", so I assume it's mostly nickel and copper, but it has a fabric-like feel, so perhaps it's mixed with some kind of substrate. If the consensus is that this information is crucial, I'll try to track down the manufacturer and get specifics, but at the moment I don't have a practical means of doing that (manufacturer is unknown). You are free to speculate about common implementations of nickel-copper fabric and state your assumptions accordingly
shielding: I'm concerned that if the fabric weathers poorly, the shielding will be affected
tear resistance: I'm concerned that if the fabric weathers poorly there may be more points of failure, leading to easier tears in the fabric
health: the fabric will occasionally come in to close contact with humans (me) and I'm not sure if tarnished nickel-copper has any adverse health risks
set up: my design is too hypothetical to put into words at the moment, but you can imagine a tarp-like thing, outside layer is cordura (water-resistant, not water-proof) and inside layer is nickel-copper fabric
placement: outside, exposed to the elements, incurring rain
Answer: Your alloy should be sufficient. For one, cupronickel alloys are very corrosion resistant and are used to make many different coins across the world. Occasionally, yes, a nickel or dime will tarnish, but this is usually from exposure to soda, salt or some other corrosive liquid; otherwise they can sit in a fountain for weeks without tarnishing. As long as the acid rain isn't too severe, you should be okay.
As for the product you are considering, the description on amazon describes it as:
Copper+Nickel+Polyester Plated Fabric
Now plated metals do not necessarily have the same properties as the pure cast alloy, since plating onto a polymer requires electroless plating and does not guarantee the same microstructure as a cast alloy, but corrosion resistance should be unaffected.
See also: Monel | {
"domain": "chemistry.stackexchange",
"id": 10976,
"tags": "everyday-chemistry, metal, corrosion"
} |
Signal to Quantization Noise ratio concept | Question: I was reading Simon S. Haykin's Digital Communications in order to understand the concept of quantization. However, on SQNR, I got stuck over the point where the author mentioned:
With the input $M$ having zero mean and the quantizer assumed to be symmetric, it follows that the quantizer output $V$ and, therefore, the quantization error $Q$ will also have zero mean. Thus, for a partial statistical characterization of the quantizer in
terms of output signal-to-(quantization) noise ratio, we need only find the mean-square value of the quantization error $Q$.
where $M$ is the random variable of the sampled analog inputs, $V$ is the random variable of digitalized outputs and $Q$ is the random variable of the error.
$\star$ Can someone kindly elaborate on this passage: how does the zero mean of $M$, $V$ and $Q$ imply that we only need to find the mean-square value?
I searched for the meaning of zero mean and partial statistical characterization and got them as follows:
https://www.sciencedirect.com/topics/engineering/statistical-characterization
Mean and correlation provide a partial statistical characterization of
a random process in terms of its averages and moments. They are useful
as they offer tractable mathematical analysis, they are amenable to
experimental evaluation and they are well suited to the
characterization of linear operations on random processes.
and Zero mean : https://physics.stackexchange.com/questions/178323/what-does-zero-mean-random-noise-with-standard-deviation-equal-to-1-mean
I just don't know how these concepts come together to imply that we need to find mean square value. Please help!
Answer: Give me an A! Give me a D! Give me a converter! What have we got? An A/D converter! Go Team!
Let $X$ denote a standard Gaussian random variable with pdf $\phi(x)$ and complementary CDF $Q(x)$. Let $Y$ denote a quantized version of $X$. First, for simplicity (and to flex our analytical muscles), suppose that
$$Y = \begin{cases}+\alpha, & X \geq 0,\\
-\alpha, & X < 0.\end{cases}$$
Note that $Y$ is a discrete random variable taking on values $\pm\alpha$ with equal probability $\frac 12$.
If we use $Y$ as a quantized representation of $X$, then the quantization error or quantization noise is $$Z = X-Y = \begin{cases}X-\alpha, & X \geq 0,\\
X+\alpha, & X < 0.\end{cases}$$
Note that $Z$ takes on values in $[-\alpha,\infty)$ when $X\geq 0$ and values in $(-\infty,\alpha)$ when $X < 0$. Furthermore, the mean-square error of our representation is $E[Z^2] = E[(X-Y)^2]$ which we can calculate as
\begin{align}
E[Z^2] &= \int_0^\infty (x-\alpha)^2\phi(x)\, \mathrm dx
+ \int_{-\infty}^0 (x+\alpha)^2\phi(x)\, \mathrm dx\\
&= \int_{-\infty}^\infty (x^2+\alpha^2)\phi(x)\, \mathrm dx
-4\alpha\int_0^\infty x \phi(x)\, \mathrm dx\\
&= 1 + \alpha^2 - 2\sqrt{\frac{2}{\pi}}\alpha.
\end{align}
Note that in the last step, we used the fact that the antiderivative of $x\phi(x)$ is $-\phi(x)$ to evaluate the second integral. From this, we get that the smallest possible value of $E[Z^2]$ is $1 - \frac{2}{\pi}$ and it occurs when we choose $\alpha = \sqrt{\frac{2}{\pi}}$.
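As a sanity check on the closed form, the integral can be evaluated numerically. The Python sketch below is added for illustration and is not part of Haykin's text:

```python
import math

def phi(x):
    # standard normal pdf
    return math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)

def one_bit_mse(alpha, n=200_000, lim=10.0):
    # midpoint-rule approximation of E[(X - Y)^2] for the 1-bit quantizer
    dx = 2.0 * lim / n
    total = 0.0
    for i in range(n):
        x = -lim + (i + 0.5) * dx
        y = alpha if x >= 0 else -alpha  # Y = +/- alpha by the sign of X
        total += (x - y) ** 2 * phi(x) * dx
    return total

alpha_opt = math.sqrt(2.0 / math.pi)
# closed form: E[Z^2] = 1 + a^2 - 2*sqrt(2/pi)*a, minimized at a = sqrt(2/pi)
assert abs(one_bit_mse(alpha_opt) - (1.0 - 2.0 / math.pi)) < 1e-6
assert one_bit_mse(alpha_opt) < one_bit_mse(alpha_opt + 0.1)
```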
Next, suppose that we quantize $X$ into a random variable $W$ taking $7$ integer values from $-3$ to $+3$, mapping the observed value into the nearest of these $7$ integers. Thus, $W$ is a discrete random variable and its pmf is readily calculated as
\begin{align}
p_W(3) &= p_W(-3) = Q(2.5) &= 0.0062,\\
p_W(2) &= p_W(-2) = Q(1.5)-Q(2.5) &= 0.0606,\\
p_W(1) &= p_W(-1) = Q(0.5)-Q(1.5) &= 0.2417,\\
p_W(0) &= Q(-0.5)-Q(0.5) &= 0.3830.
\end{align}
The mean-square error can be worked out using the ideas and methods described above. More fun can be had by using quantization levels ranging from $-3\alpha$ to $+3\alpha$ and then figuring out the optimum choice of $\alpha$ that minimizes the mean-square error. Alternatively, quantize $X$ into $8$ levels $\pm\frac 12, \pm\frac 32, \pm\frac 52, \pm\frac 72$ and do the calculations as above.
But, turning back to integer levels $W$: if $W$ is represented as a $3$-bit two's-complement number $[W_2, W_1, W_0]$, then the $W_i$ are Bernoulli random variables with parameters $p_2 = P(W<0) = 0.3085$, $p_1 = 0.3691$ and $p_0 = 0.4958$. Note that $[W_2,W_1, W_0]$ cannot take on the value $100$, which is $-4$ in three-bit two's-complement notation. Finally, we are at the A/D converter promised in the blurb that begins this answer.
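The pmf and the sign-bit probability can be double-checked numerically with the standard $Q$-function. This is an illustrative Python sketch, not from the book:

```python
import math

def Q(x):
    # Gaussian tail probability, Q(x) = P(X > x)
    return 0.5 * math.erfc(x / math.sqrt(2.0))

pmf = {0: 1.0 - 2.0 * Q(0.5)}
for w, p in ((1, Q(0.5) - Q(1.5)), (2, Q(1.5) - Q(2.5)), (3, Q(2.5))):
    pmf[w] = pmf[-w] = p     # the quantizer is symmetric about 0

assert abs(sum(pmf.values()) - 1.0) < 1e-12   # a valid pmf
p2 = sum(p for w, p in pmf.items() if w < 0)  # sign-bit probability
assert abs(p2 - Q(0.5)) < 1e-12               # P(W < 0) = Q(0.5) ~ 0.3085
```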
How all this fits in with the various claims in Haykin's book is left as an exercise for the reader. | {
"domain": "dsp.stackexchange",
"id": 10575,
"tags": "digital-communications, snr, quantization"
} |
Choosing a solenoidal vector potential in gauge fixing | Question: When finding a potential vector for the $\vec{B}$ field I understand that we have certain freedom because if $\nabla \times \vec{A}=\vec{B}$ then $\vec{A'} = \vec{A} + \nabla \psi$ also satisfies $\nabla \times \vec{A'}=\vec{B}$
What I don't understand is why that gives us the freedom to choose $\nabla \cdot \vec{A}=0$, when you can only choose any scalar function $\psi$.
I thought that maybe it has something to do with the Helmholtz theorem but I got nowhere.
Thank you, in advance
Answer: The proof is like this. Suppose you have some vector potential $\mathbf{A}$, not necessarily satisfying your gauge condition. Now choose some $\psi$ such that
$$\nabla^2 \psi = - \nabla \cdot \mathbf{A}$$
This is Poisson's equation for $\psi$, and it always has a solution (which is unique if you specify boundary conditions). Now, if we define
$$\mathbf{A}' = \mathbf{A} + \nabla\psi$$
Then it holds that
$$\nabla\cdot\mathbf{A}' = 0$$
i.e., we've found an equivalent vector potential that satisfies the gauge condition. | {
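As an illustrative one-dimensional check of this construction (a sketch with a toy potential chosen by us, not taken from the answer):

```python
def Ax(x):
    return x                      # toy potential A = (x, 0, 0), so div A = 1

def psi(x):
    return -x * x / 2.0           # solves Poisson: psi'' = -div A = -1

def Ax_new(x, h=1e-6):
    # A' = A + grad(psi); in 1D just A_x + dpsi/dx (central difference)
    return Ax(x) + (psi(x + h) - psi(x - h)) / (2.0 * h)

def div_A_new(x, h=1e-4):
    # numerical divergence of the gauge-transformed potential
    return (Ax_new(x + h) - Ax_new(x - h)) / (2.0 * h)

# the transformed potential satisfies the Coulomb gauge condition
assert abs(div_A_new(0.7)) < 1e-5
```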
"domain": "physics.stackexchange",
"id": 33163,
"tags": "electromagnetism, potential, classical-electrodynamics, vector-fields, gauge"
} |
Problem with the mapping reduction from $A_{TM}$ to $HALT_{TM}$ | Question: Sipser provided the following proof of the mapping reduction from $A_{TM}$ to $HALT_{TM}$; in fact it builds a mapping function:
My problem is the way this proof works. The function must take $\langle M,w\rangle$ and produce $\langle M',w'\rangle$ such that $\langle M,w\rangle \in A_{TM}$ if and only if $\langle M',w'\rangle \in HALT_{TM}$. Suppose by hypothesis that $\langle M,w\rangle \in A_{TM}$. The machine $M'$ runs $M$ on an input $x$, not the input $w$. Now what if $\langle M,x\rangle \notin A_{TM}$ while $\langle M,w\rangle \in A_{TM}$? Then $M'$ will loop, but since $\langle M,w\rangle \in A_{TM}$ the output must be accept. As a result the output $\langle M',w\rangle$, which is supposed to be in $HALT_{TM}$, does not truly correspond to the input. I would appreciate it if you could explain my problem.
Thanks
Answer: Your conception of a reduction is incorrect. The reduction takes an instance $\langle M, w \rangle$ of $A_{TM}$ and returns an instance $F(\langle M,w \rangle)$ of $HALT_{TM}$ such that $\langle M,w \rangle \in A_{TM}$ iff $F(\langle M,w \rangle) \in HALT_{TM}$.
Second, $M'$ is a Turing machine that accepts a single input $x$. The pair $\langle M',w \rangle$ belongs to $HALT_{TM}$ if $M'$ halts when running on the input $w$. Hence when analyzing whether $\langle M',w \rangle \in HALT_{TM}$ or not, the relevant value of $x$ is $w$.
Here is another way to think of the proof. Given a Turing machine $M$, we construct another Turing machine $M'$ such that $M$ accepts $w$ iff $M'$ halts on $w$.
When describing $M'$, we use $x$ to denote its input. There is no definite value of $x$ as your post seems to assume — it stands for the input to $M'$. For a similar example, consider the function $f(x) = x^2$. What is the value of $x$? It has no definite value; it is just a placeholder. If we plug in $w$ for $x$ then we get $f(w) = w^2$.
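The role of $x$ as a placeholder can be mimicked in code. Below is a hedged Python sketch: machines are modeled as Python predicates, "looping forever" is modeled by an exception so the example stays runnable, and all names are illustrative rather than Sipser's:

```python
class Loops(Exception):
    """Stands in for 'runs forever' so the sketch remains executable."""

def F(M, w):
    # the reduction: build M' from M; note that M' runs M on ITS OWN input x
    def M_prime(x):
        if M(x):
            return "accept"   # M accepts x  ->  M' halts
        raise Loops           # M rejects x  ->  M' "loops"
    return M_prime, w

M = lambda s: len(s) % 2 == 0          # toy decidable "machine"
M_prime, w = F(M, "ab")
# deciding whether <M', w> is in HALT_TM plugs w in for the placeholder x:
assert M_prime(w) == "accept"          # M accepts "ab", so M' halts on w
```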
"domain": "cs.stackexchange",
"id": 11332,
"tags": "turing-machines, computability, reductions"
} |
My question is about stoichiometry, limiting reagent problems | Question: So I'm really struggling with this particular concept in these problems. I know how to convert to molar mass and all that, but I don't know how to do the ratio part. I can do them when the ratio is 1/2 or 1/3, but I don't know what to do when the ratio is 5/3 or another ratio that doesn't include a 1. Any help would be greatly appreciated, thanks
Answer: Let's look at the reaction between permanganate and oxalate in acid as an example.
$\ce{5C2O4^2- + 2MnO4^- + 16H^+(aq) -> 2Mn^2+ + 10CO2(g) + 8H2O}$
So if I have 13 moles of oxalate and 6 moles of permanganate what is the limiting reagent?
An easy way to solve is to divide the chemical equation by 2 to get
$\dfrac{5}{2}\ce{C2O4^2- + MnO4^- + 8H^+(aq) -> Mn^2+ + 5CO2(g) + 4H2O}$
So $\dfrac{\text{moles oxalate}}{\text{moles permanganate}} = \dfrac{5}{2} = 2.5$
Given that we started with
$\dfrac{\text{moles oxalate}}{\text{moles permanganate}} = \dfrac{13}{6} = 2.167$
there is too little oxalate to react with all the permanganate. For a complete reaction we'd need 6 moles of permanganate and $\dfrac{5}{2}(6) = 15$ moles of oxalate.
Also note as a different problem we can look at oxalate and $\ce{H+}$.
$\dfrac{\text{moles } \ce{H+} }{\text{moles oxalate}} = \dfrac{16}{5} = 3.2$
So if we started with 13 moles of oxalate, we'd need (3.2)(13) = 41.6 moles of H+. | {
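The same ratio bookkeeping can be done with exact fractions. This Python sketch is added for illustration:

```python
from fractions import Fraction

stoich = Fraction(5, 2)   # moles oxalate needed per mole permanganate
have = Fraction(13, 6)    # moles oxalate available per mole permanganate
limiting = "oxalate" if have < stoich else "permanganate"
assert limiting == "oxalate"

# consuming all 6 mol permanganate would need (5/2)(6) = 15 mol oxalate
assert stoich * 6 == 15
# and 13 mol oxalate needs (16/5)(13) = 208/5 = 41.6 mol H+
assert Fraction(16, 5) * 13 == Fraction(208, 5)
```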
"domain": "chemistry.stackexchange",
"id": 8998,
"tags": "stoichiometry"
} |
Why is the sinusoidal model classified as absolute positional encoding in some literature? | Question: I am currently reading in depth about positional encodings, and as we know there are two types of positional encodings: Absolute and relative.
My question:
Why is the sinusoidal model classified as absolute positional encoding in some literature, given that in Vaswani's original paper it was said that it captures relative relationships between words, and this has been proven here.
However, while I was reading a research paper, it was mentioned that the projections that occur in the attention layer destroy this:
Indeed, sinusoidal position embeddings exhibit useful properties in theory. Yan et al. (2019) investigate the dot product of sinusoidal position embeddings and prove important properties:
(1) The dot product of two sinusoidal position embeddings depends only on their relative distance. That is, writing $p_t$ for the embedding at position $t$, $p_t^\top p_{t+k}$ is independent of $t$.
(2) $p_t^\top p_{t+k} = p_t^\top p_{t-k}$, which means that sinusoidal position embeddings are unaware of direction. However, in practice the sinusoidal embeddings are projected with two different projection matrices, which destroys these properties.
Is this the reason?
Answer: Absolute position embeddings capture the absolute location of a token. Absolute location refers to, e.g., the 1st, 2nd, 3rd token, and so on. The sinusoidal embeddings in Vaswani's paper capture this absolute position information. If you have absolute position encodings, then you can always derive relative positions, and sinusoidal embeddings make that really easy; but because absolute position is what is encoded, they are considered absolute position encodings.
Contrast that with relative position encoding, where only the relative position between two tokens is used. e.g., in the paper, the embeddings are only used during the attention operation, and they only capture information about the distance between two tokens. | {
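The relative-distance property of the raw sinusoidal encodings is easy to verify numerically. A Python sketch, with an illustrative dimension of 64:

```python
import math

def pe(pos, d=64):
    # sinusoidal encoding from "Attention Is All You Need":
    # pairs (sin, cos) at geometrically spaced frequencies
    v = []
    for i in range(d // 2):
        angle = pos / (10000 ** (2 * i / d))
        v.extend([math.sin(angle), math.cos(angle)])
    return v

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# the dot product depends only on the offset k, not the absolute position t
k = 5
d1 = dot(pe(3), pe(3 + k))
d2 = dot(pe(40), pe(40 + k))
assert abs(d1 - d2) < 1e-9
# ...and it is direction-unaware: offsets +k and -k give the same value
d3 = dot(pe(40), pe(40 - k))
assert abs(d2 - d3) < 1e-9
```

Once the two embeddings pass through two different learned projection matrices, as the quoted passage notes, these equalities no longer hold in general.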
"domain": "ai.stackexchange",
"id": 4115,
"tags": "natural-language-processing, transformer, positional-encoding"
} |
Why does 3-hydroxy-butan-2-one give positive Tollens' test? | Question: Why does 3-hydroxy-butan-2-one give a positive Tollens' test despite the absence of an aldehyde group? What is the mechanism?
Answer: From the Wikipedia entry on Tollens' reagent:
Tollens' reagent is a chemical reagent used to determine the presence of an aldehyde, aromatic aldehyde and alpha-hydroxy ketone functional groups
The substrate you have drawn is an alpha-hydroxy ketone. | {
"domain": "chemistry.stackexchange",
"id": 10295,
"tags": "organic-chemistry, organic-oxidation, reagents"
} |
Measuring a given method's execution time | Question: I have been playing around with some improvements to some sort algorithms like Selection Sort and Merge Sort and I ended up needing some sort of measurement to check if my versions were any faster than the original algorithm form. I also had a need to implement some sort of time measurement before but never did it, so here came the chance. And so I ended up coding the following measurement method:
public static double Measure(Action action, bool print = true)
{
Stopwatch watch = new Stopwatch();
const int precision = 1; //estimated precision of 1 millisecond on my machine
const int error = 1; //max error
const int times = 10;
double min = double.MaxValue;
for (int i = 0; i < times; ++i)
{
int iterations = 0;
watch.Restart();
while (watch.Elapsed.Milliseconds < precision*(100-error))
{
action();
++iterations;
}
watch.Stop();
min = Math.Min(min, watch.Elapsed.Milliseconds*1000.0/iterations);
}
if(print)
Debug.WriteLine("The action takes {0:N4} nanos to complete", min);
return min;
}
Is this a well conducted measurement algorithm? Any suggestions or improvements that I could apply?
Answer: The thing you're timing is:
while (watch.Elapsed.Milliseconds < precision*(100-error))
{
action();
++iterations;
}
The problem is that if action takes a very short time, then most of what you're timing is the time it takes to call the watch.Elapsed.Milliseconds property.
A more accurate timer would be something like:
int iterations = 1000;
watch.Restart();
while (iterations-- > 0)
{
action();
}
watch.Stop();
You would then need to do something to ensure you pick a suitable number of iterations (e.g. try again with 10 times as many iterations if the measured time is too short to be accurate). | {
"domain": "codereview.stackexchange",
"id": 6113,
"tags": "c#, performance, unit-testing"
} |
If a piston has a larger upper surface area than lower surface area experience a force downwards even if that piston is ventilated? | Question: Here I have a system where there is 18 bar of Nitrogen in a valve with a piston. The pressurized nitrogen is below the piston, above the piston, and also in the ventilation bore in the piston. Nitrogen can move freely between these three volumes.
Since the upper surface area of the piston is greater than the lower surface area of the piston, I would expect a difference in "pneumatic force" between the top and bottom of the piston, which would drive the piston downwards (as the upwards pneumatic force is lower than the downwards pneumatic force).
However the small ventilation bore in the piston makes me feel unsure. I can't think of a reason why this ventilation bore would prevent the piston from functioning this way, as pressure should still act on every surface equally, right?
So my question is: will the piston experience a net downwards force proportional to the difference in areas as a result of the difference in surface areas from the top of the piston to the bottom (even if the piston is ventilated)?
Answer: Yes, even if the ventilation bore has an oval or kidney shape or is significantly larger. The difference between the top and bottom areas times approximately 17 bars (18 bars-1 atm) will be the net force acting on the piston.
Despite its size or shape, the 18 bars of pressure acting on the ventilation bore's walls cancel each other out. The remaining forces are the top and bottom forces! | {
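Numerically, with made-up areas (the answer gives no dimensions):

```python
p_gauge = 17e5                        # Pa: ~18 bar absolute minus ~1 atm
a_top, a_bottom = 4.0e-4, 3.0e-4      # m^2, illustrative values only
f_net = p_gauge * (a_top - a_bottom)  # net downward force on the piston
assert abs(f_net - 170.0) < 1e-9      # newtons
```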
"domain": "engineering.stackexchange",
"id": 5085,
"tags": "fluid-mechanics, pressure, pressure-vessel"
} |
Why is kVA not the same as kW? | Question: I thought my electric car charging unit uses 6.6 kW of power. However, I found the label and it actually says 6.6 kVA. When I saw this I thought something along the lines of...
Well, $ P=VI $, therefore kVA must be the same thing as kW... strange, I wonder why it's not labelled in kW.
So a quick Google search later, and I found this page, which has a converter that tells me 6.6 kVA is actually just 5.28 kW. The Wikipedia page for watts confirmed what I thought, that a watt is a volt times an ampere.
So what part of all this am I missing, that explains why kVA and kW are not the same?
Answer: The problem is that the formula $P=I\ V$ is correct when dealing with DC circuits or with AC circuits where there is no lag between the current and the voltage. When dealing with realistic AC circuits, the power is given by
$$
P=I\ V\ \cos(\phi),
$$
where $\phi$ is the phase difference between the current and the voltage. The unit kVA is a unit of what is called 'apparent power' whereas W is a unit of 'real power'. Apparent power is the maximum possible power attainable when the current and voltage are in phase and real power is the actual amount of work which can be done with a given circuit. | {
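Working backwards, the converter's 6.6 kVA to 5.28 kW result implies it assumed a power factor of $\cos(\phi)=0.8$ (our inference; the page does not state it):

```python
apparent_kva = 6.6
power_factor = 0.8                    # cos(phi), assumed
real_kw = apparent_kva * power_factor
assert abs(real_kw - 5.28) < 1e-12
```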
"domain": "engineering.stackexchange",
"id": 4176,
"tags": "electrical-engineering"
} |
The EFE - The 'Mother' of all equations of motion in the universe? | Question: This question addresses the scope of general relativity.
Scientists and engineers solve all sorts of physical problems in all sorts of fields and often to solve these problems a 'system' is defined including inputs, outputs, inter-relationships of variables and constraints, and from this definition the 'equations of motion' are written - specific to that system.
So my question is can the Einstein Field Equations (EFE) of General Relativity lead to the same equations of motion for these specific systems in all cases (discounting for the moment quantum mechanical systems)? In other words:
Are the EFE of General Relativity the 'Mother' of all equations of motion in the universe - outside of quantum mechanical systems? - And therefore a more generalized way of writing equations of motion for all systems?
If this is true, then from the field equations, can one derive Maxwell's equations?
Answer: No.
The Einstein field equations are the equation of motion for the metric (i.e. gravity) in the Einstein-Hilbert action.
If you add other dynamical fields to the action, you not only change the stress-energy tensor appearing in the EFE, but you also have to vary the action with respect to the new fields to obtain e.o.m. for them. | {
"domain": "physics.stackexchange",
"id": 26523,
"tags": "general-relativity, classical-mechanics, gravity, matter"
} |
Controlled Z gate acting on 3 qubits in matrix form | Question: For a controlled Z gate $CZ_{1,2,3}$ acting on 3 qubits, which of the following is correct? If it is the first one then what is the difference between that and a CZ gate acting on qubits 1 and 3?
$$I \otimes I \otimes I − |1⟩⟨1| \otimes I \otimes (Z − I)$$
$$I \otimes I \otimes I − |1⟩⟨1| \otimes |1⟩⟨1| \otimes (Z − I)$$
Answer: The difficulty is that the wording describing what you want is unclear. What your two stated options do is:
$$I \otimes I \otimes I − |1⟩⟨1| \otimes I \otimes (Z − I)$$
This is a standard controlled-phase gate acting between qubits 1 and 3, doing nothing to qubit 2.
$$I \otimes I \otimes I − |1⟩⟨1| \otimes |1⟩⟨1| \otimes (Z − I)$$
This is a controlled-controlled-phase gate acting on all three qubits, so it applies a $-1$ phase if all three are in the state $|1\rangle$ and leaves them unchanged otherwise. The symmetry of the system is perhaps better represented by writing it as
$$I \otimes I \otimes I − 2|1⟩⟨1| \otimes |1⟩⟨1| \otimes |1⟩⟨1|.$$ | {
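Since both forms are diagonal in the computational basis, they can be compared entry by entry. A small Python check, added for illustration:

```python
import itertools

def ccz_symmetric(a, b, c):
    # diagonal entry of I(x)I(x)I - 2|1><1|(x)|1><1|(x)|1><1| at |abc>
    return 1 - 2 * (a & b & c)

def ccz_phase(a, b, c):
    # controlled-controlled-phase: flip the sign only on |111>
    return -1 if (a, b, c) == (1, 1, 1) else 1

for a, b, c in itertools.product((0, 1), repeat=3):
    assert ccz_symmetric(a, b, c) == ccz_phase(a, b, c)
```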
"domain": "quantumcomputing.stackexchange",
"id": 824,
"tags": "quantum-gate, matrix-representation"
} |
How to find longest recurring pattern from large string data set? | Question: I need to find, from a string of 100,000 characters, the substring that is most repeated and as long as possible. For example:
TYUFRIETEYM0SQZLHBCTN0W1KA9HELAT4LTQ14W7ZW484GSK1XTNOBJ2R6AMGW9KU36G7ITMPF315Y7ESYPR1XE2C1953J0DXUNBJLNTDG7IHS63854SGSS7YDEFJYSFP0DLL54GK6NUZ5UU5FRIETEYCPNGHIJOX23QOVSCBYHKE7HRIETEYV0H49I5SX9CW967CDGKX3TYCVNVBNCFGGDGDGDDFIIPGDSDVGDDSRGDGVCZAQRIOPKLMVFGCDGDTYGSDCBGDUSLVAQEFCGDGRIETEYDGDFG
In the above character set there are two repeated substrings: one is GD and the other is RIETEY. The algorithm should identify RIETEY because it is the longer substring. Also, a pattern must occur at least twice to be considered a recurrence, and patterns will not overlap.
I found an algorithm but it only works for fewer than 100,000 characters.
Any suggestions for this problem?
Answer: This problem is well-studied; it's aptly called the longest repeated substring problem.
It can be solved in linear time by creating a suffix tree with Ukkonen's algorithm; the longest repeat corresponds to the labelling of the longest path from the root to an inner node which you find using breadth-first search.
This does not exclude overlapping substrings. Keep track of the smallest ($m$) and largest ($M$) starting indices of the suffixes each node represents while creating the tree; a node at depth $n$ (counted in symbols on the path) represents a non-overlapping repeat if and only if $M - m \geq n$.
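For illustration, here is a much simpler (not linear-time) Python approach: binary-search the repeat length, using the fact that if a repeat of length $k$ exists then so does one of length $k-1$. Like the plain suffix-tree method above, it allows overlapping occurrences unless extra bookkeeping is added:

```python
def longest_repeated_substring(s):
    # has_repeat(k) scans all length-k windows and reports the first
    # window seen twice (overlaps allowed); None if no length-k repeat
    def has_repeat(k):
        seen = set()
        for i in range(len(s) - k + 1):
            w = s[i:i + k]
            if w in seen:
                return w
            seen.add(w)
        return None

    best = ""
    lo, hi = 1, len(s) - 1
    while lo <= hi:            # binary search on the repeat length
        mid = (lo + hi) // 2
        w = has_repeat(mid)
        if w is not None:
            best, lo = w, mid + 1
        else:
            hi = mid - 1
    return best

assert longest_repeated_substring("banana") == "ana"  # overlapping repeat
assert longest_repeated_substring("abc") == ""
```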
"domain": "cs.stackexchange",
"id": 6472,
"tags": "algorithms, algorithm-analysis, strings, longest-common-substring"
} |
Cross-correlation peak | Question: How do I get the cross-correlation peak and, based on it, calculate a correlation score for the similarity of two audio samples? So far I've:
FFT two samples
complex conjugate second
multiply results
IFFT
cross-correlate with itself (autocorrelate)
Thanks for any advice
Answer: As Matt stated, you should use the correlation coefficient!
Points 1 to 4 calculate the cross-correlation. From that you have to find the highest peak (or the lowest, if it has a higher absolute value). This value is the numerator.
The denominator consists of the two autocorrelation values. Those are obtained by using the same algorithm where both signals are equal. Here the peak should be in the middle ($t=0$), as already stated by welcomedungeon. Taking the square root of both autocorrelation values and multiplying them gives the denominator.
Edit:
Maybe this description is more clear:
$\frac{max(abs(ifft(fft(x_1)*fft(x_2)')))}{sqrt(max(abs(ifft(fft(x_1)*fft(x_1)'))))*sqrt(max(abs(ifft(fft(x_2)*fft(x_2)'))))}$
The apostrophe means complex conjugate.
Edit: Two examples with Matlab code:
Using the same signal:
x = rand(1000,1)-0.5;
max(abs(ifft(fft(x).*conj(fft(x)))))/(sqrt(max(abs(ifft(fft(x).*conj(fft(x)))))).*sqrt(max(abs(ifft(fft(x).*conj(fft(x)))))))
gives 1;
Using a sine and a cosine, should also give 1 because they are delayed versions of each other:
x = sin([0:pi/100:10*pi]);
y = cos([0:pi/100:10*pi]);
max(abs(ifft(fft(x).*conj(fft(y)))))/(sqrt(max(abs(ifft(fft(x).*conj(fft(x)))))).*sqrt(max(abs(ifft(fft(y).*conj(fft(y)))))))
gives approximately 1
Using windowing before transformation to frequency domain should improve results. | {
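The whole recipe can also be written directly in the time domain. A Python sketch (equivalent, for real signals, to the FFT-based formula above, but O(n^2)):

```python
import math

def corr_score(x, y):
    # peak of the cross-correlation over all lags, normalized by the
    # square roots of the two signal energies (autocorrelation peaks)
    n = len(x)
    peak = max(
        abs(sum(x[i] * y[i - k] for i in range(n) if 0 <= i - k < n))
        for k in range(-n + 1, n)
    )
    return peak / math.sqrt(sum(v * v for v in x) * sum(v * v for v in y))

x = [math.sin(0.1 * i) for i in range(200)]
y = [math.cos(0.1 * i) for i in range(200)]
assert abs(corr_score(x, x) - 1.0) < 1e-12  # identical signals score 1
assert corr_score(x, y) > 0.85              # delayed copies score high
```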
"domain": "dsp.stackexchange",
"id": 1036,
"tags": "fft, cross-correlation, autocorrelation, waveform-similarity"
} |
Angle of attack on variable pitch propeller? | Question: For a variable-pitch propeller where the blade angle can be adjusted during flight, are the individual blades similar in design to standard propellers, with a low angle of attack at the edge of the propeller (3-6 degrees) that gradually increases towards the root (14-16 degrees); or do the propeller blades have one single angle of attack throughout the entire length of the blade? I'm presuming the blades have a differing angle of attack at the root and tip, but of a lower magnitude, to prevent the AOA at the root of the blade exceeding the critical angle?
Answer: All propellers have to have a gradually increasing twist as we get closer to the hub to maintain the same angle of attack with the relative wind at that particular position.
The reason is the change in the angle of the relative wind as we move from the tip to the root.
The relative wind is the vector sum of the airplane's speed and the speed of the point along the length of the propeller.
Relative wind tilts more toward the airplane's axial speed near the root, as compared to near the tip, where it is more aligned with the plane of the propeller. Just compare the tangential speeds $V_t=\omega R$: the greater $R$ is, the greater $V_t$.
$\vec{V_r}= \vec{V_t}+\vec{V_{axial}}$
$V_r=$ relative wind
$V_t=$ propeller tangential speed
$V_{axial}= $airplane speed | {
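A quick numeric illustration of the relative-wind angle $\arctan(V_{axial}/(\omega r))$ along the blade (the numbers below are made up):

```python
import math

v_axial = 60.0    # m/s, airplane speed (illustrative)
omega = 250.0     # rad/s, propeller angular speed (illustrative)
angles = {r: math.degrees(math.atan2(v_axial, omega * r))
          for r in (0.2, 0.5, 1.0)}   # radial positions in metres
# the relative wind is steeper (more axial) near the root than at the tip,
# which is why the blade must be twisted more near the root
assert angles[0.2] > angles[0.5] > angles[1.0]
```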
"domain": "engineering.stackexchange",
"id": 4748,
"tags": "aerospace-engineering, aerodynamics, propulsion"
} |
How close should you get to speed of light, in order for time to be dilated? | Question: Recently I was watching Carl Sagan's Cosmos: A Personal Voyage. In episode 8 ("Journeys in Space and Time") there is a scene presenting the idea of time dilation, due to traveling close to the speed of light, here is an excerpt.
I understood it as follows: in order of the dilation to happen you have to travel close to or exactly at (which is presumably impossible) the speed of light.
Now, what I would like to know is:
After which speed does time dilation start to happen? (I assume the closer to the speed of light, the greater the dilation.)
How (if at all) this "tipping point" was (or could be) calculated?
Or does the dilation happen only when the object reaches the exact speed of light?
What exactly puzzles me is this: say there is you and a friend of yours, and you are running around your friend in circles at the speed of light (or fairly close to it); now, as shown in the excerpt, time for your friend will go faster than for yourself (i.e. he will age greatly, while you won't).
Now, putting the speed of light aside, let's assume the same situation (i.e. you running around your friend in circles), but not at the speed of light, just "really fast". Say you are just moving 10 times faster than your friend; now it is obvious (at least it seems so to me) that time for your friend will go not faster, but slower (since you can do 10 times more things in a given time-span than your friend can).
Again, putting the speed of light aside, if, as shown in the excerpt, somebody left his friends and went really fast to some other place and returned, it is possible that they won't even notice he was gone. So my question is, basically: after which speed does it stop being true and time dilation kicks in?
Answer: There's no such speed 'limit', unless you count $0\ \rm m/s$. Even if you're moving really, really slow with respect to your friend, you'll measure a different elapsed time.
Look at the canonical formula, $$\Delta t'=\frac{\Delta t}{\sqrt{1-\frac{v^2}{c^2}}}$$
$\Delta t'$ is the time you measure between two events, $v$ is your velocity with respect to the other observer, and $\Delta t$ is the time measured by your friend. If $v=0$, then $\Delta t=\Delta t'$. But if $v$ is really close to zero (whether or not it is positive, because the direction of the velocity isn't relevant), you'll necessarily observe some minute difference, and you're only limited by the accuracy with which you can measure. | {
"domain": "physics.stackexchange",
"id": 53582,
"tags": "special-relativity, spacetime, reference-frames, speed-of-light, time-dilation"
} |
Illuminance of a non-punctual source | Question: When working with a punctual light source, I get the formula (for the normal incidence)
$$ E=\frac{I}{r^2} $$
This is how I got it:
First I identify that the expression for the solid angle is
$$ d\Omega=dS/r^2 $$
Then knowing that $dF=Id\Omega$ and $dE=dF/dS$, I just substitute to get the formula.
However, I can't tell what step is not valid for the case of the source being non-punctual.
Also, in that case, what would the formula be? Or would it be dependent on the exact geometry of the source?
Answer: You would divide your source into small "infinitesimal" punctual sources. If one such small source, of infinitesimal intensity $dI(\vec r_s)$, is located at $\vec r_s$, then the radiance of that source at some point $\vec r_p$ in space is given by
$$
dE= \frac{dI(\vec r_s)}{\vert \vec r_p-\vec r_s\vert^2}
$$
The total integrated radiance is then
$$
E=\int_VdE = \int_V \frac{dI(\vec r_s)}{\vert \vec r_p-\vec r_s\vert^2} \tag{1}
$$
where the integration is over the volume containing your source. The denominator factor can be expanded, resulting in a multipole expansion similar to the expansion of a potential created by a macroscopic charge distribution.
In the case of a single punctual source, there is no integration since everything is concentrated at one point. Assuming your source is at the origin, you get the usual result with $\vec r=\vec r_p$.
The radiance does depend on the distribution through the factor $dI(\vec r_s)$ since this factor is not necessarily constant, i.e. the value $dI(\vec r_{s1})$ at $\vec r_{s1}$ is not necessarily the value of $dI(\vec r_{s2})$ at $\vec r_{s2}$. This factor essentially tells you that the light coming from point $\vec r_s$ can be considered as a point source of strength $dI(\vec r_s)$ located at that point. The integration in Eq.(1) basically adds all the contributions.
Thus, with the assumption that $\vert \vec r_p\vert>\vert \vec r_s\vert$ (the point $r_p$ is outside the source), we have
\begin{align}
\vert \vec r_p-\vec r_s\vert^2&= (\vec r_p-\vec r_s)\cdot (\vec r_p-\vec r_s)\\
&=r_p^2+r_s^2-2r_pr_s\cos\theta_s\\
&=r_p^2\left(1-2\frac{r_s}{r_p}\cos\theta_s+\frac{r_s^2}{r_p^2}\right)\, ,\qquad \frac{r_s}{r_p}<1\, ,
\end{align}
so that
\begin{align}
\frac{1}{\vert \vec r_p-\vec r_s\vert^2}&=\frac{1}{r_p^2}
(1-2\frac{r_s}{r_p}\cos\theta_s+\frac{r_s^2}{r_p^2})^{-1}\\
&\approx \frac{1}{r_p^2}\left(1+2\frac{r_s}{r_p}\cos\theta_s\right)
\end{align}
so the leading term is in $1/r_p^2$ and yields
$$
E\approx \int_V\frac{dI(\vec r_s)}{r_p^2}=\frac{I}{r_p^2}\, ,
$$
recovering the point source result.
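The far-field limit in that last step is easy to check numerically. The sketch below models an extended source as a uniform grid of point sources in a cube (the uniform intensity distribution and all numerical values are illustrative assumptions, not from the question) and compares the summed $dI/\vert\vec r_p-\vec r_s\vert^2$ against the leading $I/r_p^2$ term of Eq. (1):

```python
import numpy as np

# Model a cubic extended source of side L centred at the origin as a grid of
# "infinitesimal" point sources, each carrying an equal share of the total
# intensity I (a uniform dI, assumed for this sketch).
L, n, I = 1.0, 8, 5.0
g = (np.arange(n) + 0.5) / n * L - L / 2          # grid coordinates along one axis
xs, ys, zs = np.meshgrid(g, g, g, indexing="ij")  # source positions
dI = I / n**3                                     # intensity per point source

def E(r_p):
    """Illuminance at observation point r_p: sum of dI / |r_p - r_s|^2."""
    d2 = (r_p[0] - xs)**2 + (r_p[1] - ys)**2 + (r_p[2] - zs)**2
    return np.sum(dI / d2)

for r in (5.0, 50.0):
    full = E(np.array([r, 0.0, 0.0]))
    point = I / r**2                              # leading 1/r_p^2 term of Eq. (1)
    print(r, full, point, abs(full - point) / point)
```

As expected from the expansion, the relative deviation from the point-source formula shrinks roughly like $(r_s/r_p)^2$ as the observation point moves away.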
Note that here I have assumed incoherent sources else there could be interference effects in the same way that constructive or destructive interference effects can occur when combining sound waves at a particular point. | {
"domain": "physics.stackexchange",
"id": 94365,
"tags": "electromagnetism, optics"
} |
Doppler shifts appear to violate conservation of energy | Question: If the relative distance between an infrared source and a spectrometer is shortening at such a rate that the spectrometer detects that radiation at a Doppler- (or blue-) shifted wavelength in the UV band, then the photonic energy the spectrometer detects is by definition at a shorter wavelength than the wavelength emitted from the infrared source.
This seems to violate the thermodynamic law of conservation -- We can't say that the photonic energy was increased, because no additional energy was added. Yet, UV has more photonic energy than infrared.
Moreover, given the propagation of the EM radiation is always C in a vacuum, the relative motion does not add any Newtonian inertia either (you can't add the relative velocity to the propagation velocity -- besides, photons are massless, hence able to propagate at C, no faster).
Likewise, if the distance were growing instead of shrinking (the red-shifted case) there (seemingly) is an equal violation, in that there (seemingly) is a loss of received photonic energy.
Keep in mind that whether we are given an ideal laser beam or a point source doesn't matter, because this has nothing to do with the energy density of the emitter versus the energy density at the spectrometer; the effective change in photonic energy is still the same.
How is this photonic energy delta reconciled?
Answer: No, the energy is conserved. Energy is an extensive (additive) quantity, so when you compare red light to blue light you're comparing the energy density, not the whole energy in the system. And yes, the energy density decreases, but since in this process the wave gets larger due to the "stretching", an equal amount of energy is stored. It's just like if you have a sine wave across, say, 100 m and a sine wave across 1000 m: the wavelength is the same and the energy density is the same, but there is more energy in the system where the wave spans 1000 m. | {
"domain": "physics.stackexchange",
"id": 66754,
"tags": "thermodynamics, speed-of-light, redshift, photonics"
} |
Calculating the mass of acceptable carbon monoxide in a room | Question:
The acceptable concentration of $\ce{CO}$ in the air is $10\:\mathrm{mg/m}^3$. In
a room that is $19\:\mathrm{m\times4.0\:m\times25\:m}$, what is the acceptable mass in
kilograms of $\ce{CO}$?
The room has a volume of $1900\:\mathrm{m}^3$. Since there are $10\:\mathrm{mg}$ of $\ce{CO}$ per $\mathrm{m}^3$, there are $19000\:\mathrm{mg}$ of $\ce{CO}$ in this room. That’s $19\:\mathrm{kg}$.
But the book says the answer is $1.9 \times 10^{-2}\:\mathrm{kg}$. How come?
Answer: You have to be careful here - you are missing a step in your conversions.
You have $19000\:\mathrm{mg}$.
As $1\:\mathrm{mg} = 10^{-3}\:\mathrm{g}$ and
$1\:\mathrm{g} = 10^{-3}\:\mathrm{kg}$,
so:
$19000\:\mathrm{mg} = 19\:\mathrm{g} = 1.9 \times 10^{-2}\:\mathrm{kg}$ | {
"domain": "chemistry.stackexchange",
"id": 3694,
"tags": "physical-chemistry, concentration"
} |
Asymptotic flatness implies existence of rotation axis | Question: Suppose we have an asymptotically flat, globally hyperbolic spacetime $M$ endowed with two one-parameter isometry groups $\sigma_t$ and $\chi_{\phi}$ which commute (i.e. $\sigma_t \circ \chi_{\phi}= \chi_{\phi} \circ \sigma_t.$)
Assume moreover that the orbits of $\sigma_t$ are timelike curves generated by the Killing vector field $\xi^a.$ The orbits of $\chi_{\phi}$ are closed spacelike curves generated by the Killing field $\psi^a$.
In chapter 7.1, p 165 of Wald's GR text, he states that the asymptotic flatness of the spacetime implies that "there must be a rotation axis on which $\psi^a$ vanishes." Why must this be the case?
Answer: Asymptotically, the spacetime metric approaches the Minkowski metric, which has the property that for $\theta = 0$, $\theta=\pi$ (on the rotation axis), $\psi^a \psi_a =0$, hence, since $\psi^a$ is spacelike, it vanishes there. These two points are enough to satisfy the first hypothesis of theorem 7.1.1. | {
"domain": "physics.stackexchange",
"id": 13373,
"tags": "general-relativity, differential-geometry"
} |
Can the Fourier Transform of the unit step be used as a filter? | Question: Using the FT of the step function we have $H(\omega)=\pi\delta(\omega)+\frac{1}{j\omega}$, and its magnitude is $\infty$ at $\omega=0$ and approaches $0$ as $\omega$ goes to both positive and negative infinity. Based on this, is the FT a low pass filter or a bandpass filter?
Answer: The "unit step as a filter" is a filter whose impulse response is the unit step. Its transfer function is then
$$H(s) = 1/s$$
which is a time-domain integration. (Consider: if we presented an impulse to the input of a "filter" constructed as a time-domain integrator, its output would immediately jump to 1 and then stay there for the rest of time -- which is a unit step as the "impulse response".) A time-domain integration has a frequency response given as $1/(j\omega)$, so it is indeed a low-pass filter, with a magnitude going down at -20 dB/decade and a phase of -90° for all frequencies (and infinite gain at DC).
We can approximate this as a digital filter with different mapping techniques from the Laplace domain to the z domain, the most common is a simple accumulator given as:
$$H(z)= \frac{1}{1-z^{-1}}$$
Converting this to the time domain in samples we get:
$$y[n] = x[n] + y[n-1]$$
With the continuous time integrator, if we passed in a unit step as the input, we would get a ramp out. Similarly with the accumulator, if we passed in a series of unit samples, the output would accumulate linearly (a ramp).
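That impulse-in/step-out and step-in/ramp-out behaviour is easy to demonstrate; a minimal Python sketch of the accumulator difference equation (the helper name is hypothetical):

```python
# Run the accumulator y[n] = x[n] + y[n-1] from the text over a short input.
def accumulate(x):
    y, acc = [], 0.0
    for sample in x:
        acc += sample          # y[n] = x[n] + y[n-1]
        y.append(acc)
    return y

impulse = [1.0] + [0.0] * 7    # unit impulse input
step = [1.0] * 8               # unit step input

print(accumulate(impulse))     # -> unit step: [1.0, 1.0, ..., 1.0]
print(accumulate(step))        # -> ramp: [1.0, 2.0, ..., 8.0]
```

So the accumulator's impulse response is the unit step, and its step response is a ramp, matching the continuous-time integrator.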
As a word of caution when dealing with systems in the Laplace or z domains: do not confuse the transform of the waveform at the input or output with the transform of the impulse response of the system when considering the derived frequency response and behavior as a filter. When we concern ourselves with the frequency response of a two-port system, we are specifically interested in the impulse response of that system (what the output would be in time if only an impulse were presented at the input). We then take the Laplace Transform (or z transform if discrete time) of that impulse response to get the transfer function, and in that replace $s$ with $j\omega$ to get the frequency response. For further intuitive insight into why the impulse response is so interesting, see this post. | {
"domain": "dsp.stackexchange",
"id": 11084,
"tags": "filters, fourier-transform, continuous-signals"
} |
topic is already advertised as md5sum [...] and datatype [...] | Question:
I'm trying to get output from a remote node to the /rosout topic and view it with 'rostopic echo rosout' running on the master. I have a roscore running on hostA brought up by starting roslaunch. On hostB, I have a node built from roscpp.
My node's code is as follows:
int func1()
{
if (ros::console::set_logger_level(ROSCONSOLE_DEFAULT_NAME,
ros::console::levels::Debug) ) {
ros::console::notifyLoggerLevelsChanged();
}
map<string, string> remap;
remap.emplace("__ip", get_ip());
remap.emplace("__master", get_master_uri());
ros::init(remap, "test-node");
ros::Time::init();
while (!ros::master::check()) {
ros::Duration(1.0).sleep();
}
nh = boost::make_shared<ros::NodeHandle>();
ros::spinOnce();
return 0;
}
int func2()
{
ROS_INFO("point A");
ros::Publisher pub =
nh->advertise<std_msgs::String>("rosout", 1000);
ROS_INFO("point B");
std_msgs::String msg;
stringstream ss;
ss << "Using stringstream.";
msg.data = ss.str();
pub.publish(msg);
ROS_INFO("point C");
ROS_INFO_STREAM("ss data: " << msg.data);
ROS_INFO_STREAM("ROS Node Namespace: " << ros::this_node::getNamespace());
ROS_INFO_STREAM("ROS Node Name: " << ros::this_node::getName());
ROS_INFO("point 4");
return 0;
}
The only thing I ever see show up on the master node is "point A"; "point B" never prints, and I get the following message on the master node.
Tried to advertise on topic [/rosout] with md5sum [992ce8a1687cec8c8bd883ec973ca4131] and datatype [std_msgs/String], but the topic is already advertised as md5sum [acffd30cd6b6de30f120938c17c593fb] and datatype [rosgraph_msgs/Log]
Nowhere have I used a datatype of rosgraph_msgs, and I'm not sure why the message says so. rqt_console shows me the message is coming from the node "test-node" as an error. Sometimes the "point A" message will come through with severity 'Info' and shows up properly, but it won't go beyond the error above.
Also, how is it that ROS_INFO("point A") shows up on the /rosout of the master, when the topic hasn't even been advertised yet?
Thoughts?
Originally posted by SRD on ROS Answers with karma: 13 on 2017-04-05
Post score: 0
Answer:
The rosout topic is advertised by ros::init as part of the node setup; you don't need to advertise on it yourself.
In addition, ROS_INFO and the associated logging macros already handle publishing to rosout.
The publisher you've created conflicts with the default publisher, which is why you're getting this error message.
Originally posted by ahendrix with karma: 47576 on 2017-04-05
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 27533,
"tags": "rosout, libroscpp, publisher, logging, logger"
} |
Is Hilbert transform not defined for complex signals? | Question: Is the Hilbert transform not defined for complex signals? In MATLAB, the function hilbert ignores the imaginary part if you give a complex sequence as input. Why?
Answer: The Hilbert transform can be applied to complex functions of a real variable. E.g., the Hilbert transform of the complex exponential $e^{j\omega_0t}$, $\omega_0>0$, is given by
$$\mathcal{H}\{e^{j\omega_0t}\}=-je^{j\omega_0t},\qquad\omega_0>0$$
The problem you encounter has to do with Matlab's implementation of the function hilbert.m. It is designed for real-valued input sequences and it will ignore any imaginary part. Note that despite its name this function does not simply return the Hilbert transform of the input vector, but it computes the corresponding analytic signal, i.e. it returns a complex vector, the real part of which is equal to the input vector, and the imaginary part of which is the Hilbert transform of the input vector.
So if for whatever reason you want to compute the Hilbert transform of a complex vector, you need to do the following:
x = ... % some complex vector
xr = real(x);
xi = imag(x);
xr_ = imag(hilbert(xr)); % Hilbert transform of real part
xi_ = imag(hilbert(xi)); % Hilbert transform of imaginary part
x_ = xr_ + 1i * xi_; % Hilbert transform of complex vector x | {
"domain": "dsp.stackexchange",
"id": 3610,
"tags": "hilbert-transform"
} |
Is gazebo a good simulator? Or are there any better ones? | Question:
I came across gazebo while roaming around and trying to learn how ROS runs, and it seemed pretty cool. I want to know if it is a good simulator, or can you suggest any other?
Can I use gazebo for drones such as the AR.Drone2.0? Or do you suggest any other simulators or programs?
P.S.: Beginner
Originally posted by Abdul on ROS Answers with karma: 27 on 2014-04-20
Post score: 1
Answer:
Yes, you can use gazebo for drones. Take a look:
http://www.youtube.com/watch?v=cZWTO_gREb4
http://www.youtube.com/watch?v=_8AhNWKzv2k
You have to create model and simple controller for your model. Take a look here for some examples:
https://github.com/CrocInc/uav-croc-contest-2013
I use standalone gazebo for my tasks.
Don't know if V-Rep is better than gazebo or not. Will try V-Rep in the near future.
Originally posted by Mike Charikov with karma: 123 on 2014-04-20
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 17715,
"tags": "ros, gazebo, ardrone2.0, gazebo-simulator"
} |
Non-linear optics - solve coupled differential equations with the finite difference method | Question: I have these three differential equations which I need to solve numerically:
$$
\frac{dn_0}{dt}= -n_0(t)W_{01}(t) + n_1(t)K_{10}
$$
$$
\frac{dn_1}{dt}= -n_1(t)W_{12}(t) - n_1(t)K_{10} + n_2(t)K_{21} + n_0(t)W_{01}(t)
$$
$$
\frac{dn_2}{dt}= n_1(t)W_{12}(t) - n_2(t)K_{21}
$$
such that
$$ n_0(0)=1 $$
$$ n_0(N)=0 $$
$$ n_1(0)=0 $$
$$ n_1(N)=1 $$
$$ n_2(0)=0 $$
$$ n_2(N)=0 $$
Using the central finite difference formula:
$$\frac{n_{0}(t + \Delta t) - n_{0}(t - \Delta t)}{2\Delta t}=-n_0(t)W_{01}(t) + n_1(t)K_{10}$$
$$\frac{n_{1}(t + \Delta t) - n_{1}(t - \Delta t)}{2\Delta t}=-n_1(t)W_{12}(t) - n_1(t)K_{10} + n_2(t)K_{21} + n_0(t)W_{01}(t)$$
$$\frac{n_{2}(t + \Delta t) - n_{2}(t - \Delta t)}{2\Delta t}=n_1(t)W_{12}(t) - n_2(t)K_{21}
$$
How do I determine the functions $n_0$, $n_1$ and $n_2$, knowing that $n_0 + n_1 + n_2 = 1$, and that the three equations are coupled?
And I could not understand how to calculate the derivatives: how can I determine their value with the finite difference method without knowing the functions?
Can someone please help me?
Answer: You have $3(N-1)$ unknown quantities, $n_0(1) \dots n_0(N-1)$, $n_1(1) \dots n_1(N-1)$, and $n_2(1) \dots n_2(N-1)$.
The central difference equations give you $3(N-1)$ linear equations in those quantities, and also in $n_0(0), n_0(N), n_1(0), n_1(N), n_2(0), n_2(N)$, but you know those 6 values from the boundary conditions.
Set up the equations as a $3(N-1) \times 3(N-1)$ matrix and vectors of length $3(N-1)$ and solve them with MATLAB or whatever software you have.
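As a concrete illustration of that assembly, here is a minimal Python/NumPy sketch. It takes the rates $W_{01}$, $W_{12}$ (time-dependent in the original problem) as constants for simplicity, and all numerical values are illustrative assumptions:

```python
import numpy as np

# Rates (assumed constant here) and grid: N intervals of width h = Delta t.
W01, W12, K10, K21 = 1.0, 0.8, 0.5, 0.3
N, T = 40, 2.0
h = T / N

# Boundary values: n(0) = (1, 0, 0) and n(N) = (0, 1, 0), as in the problem.
n_left = np.array([1.0, 0.0, 0.0])
n_right = np.array([0.0, 1.0, 0.0])

# The right-hand sides are linear: f(n) = J @ n, with J built from the ODEs.
J = np.array([[-W01,          K10,   0.0],
              [ W01, -(W12 + K10),   K21],
              [ 0.0,          W12,  -K21]])

M = 3 * (N - 1)                    # 3(N-1) unknowns: n0, n1, n2 at interior nodes
A = np.zeros((M, M))
b = np.zeros(M)

def idx(k, s):                     # column of unknown n_s at interior node k
    return 3 * (k - 1) + s

# One equation per node k and species s: n_s(k+1) - n_s(k-1) - 2h f_s(n(k)) = 0,
# with known boundary values moved to the right-hand side b.
for k in range(1, N):
    for s in range(3):
        row = idx(k, s)
        A[row, idx(k, 0):idx(k, 0) + 3] -= 2 * h * J[s]
        if k + 1 <= N - 1:
            A[row, idx(k + 1, s)] += 1.0
        else:
            b[row] -= n_right[s]
        if k - 1 >= 1:
            A[row, idx(k - 1, s)] -= 1.0
        else:
            b[row] += n_left[s]

u = np.linalg.solve(A, b)          # all interior values in one solve
n = u.reshape(N - 1, 3)            # row k-1 holds (n0, n1, n2) at node k
print(n[:3])
```

Every central-difference equation is then satisfied simultaneously, which is exactly what "solving the coupled system" means here.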
Since the finite difference equations and the boundary conditions are consistent with the fact that $n_0 + n_1 + n_2 = 1$ everywhere, the numerical solution should have the same property.
You find the derivatives using the central difference approximation in your OP, i.e. $$n'_0(k) \approx \frac{n_0(k+1) - n_0(k-1)}{2h}, \quad k = 1 \dots N-1$$ and similarly for $n'_1$ and $n'_2$, where $h$ is the value of $\Delta t$ in "real world" physical units. | {
"domain": "physics.stackexchange",
"id": 54873,
"tags": "optics, computational-physics, differential-equations, non-linear-optics"
} |
Light waves confusion | Question: This may be a simple question but it's really confusing me, and I can't find an answer for it; please help me. My doubt is: what do we actually mean by the amplitude of a light wave (an EM wave)? Is it the wave produced in the electric field or the magnetic field?
Answer: Both. In the simplest case of a wave traveling through an isotropic medium (vacuum, air, water, glass), the electric field strength and magnetic field strength are proportional to each other: $E = cB$. Here, $E$ is the electric field strength, $B$ is the magnetic field strength, and $c$ is the speed of light. If you know one field strength, you know the other one. | {
"domain": "physics.stackexchange",
"id": 74381,
"tags": "electromagnetic-radiation"
} |
Variations of the Repository Pattern | Question: I've been using this variation of the Repository pattern for over a year now:
public interface IReadOnlyRepository<T, in TId>
where T : AbstractEntity<TId>
{
T Get( TId id );
IEnumerable<T> GetAll();
}
/// <summary>
/// Defines a generic repository interface for
/// classes solely in charge of getting and processing data from a data source
/// </summary>
/// <typeparam name="T"></typeparam>
/// <typeparam name="TId">The type of the id.</typeparam>
public interface IRepository<T, in TId> : IReadOnlyRepository<T, TId> where T : AbstractEntity<TId>
{
/// <summary>
/// Determines whether the specified entity has duplicates.
/// </summary>
/// <param name="entity">The entity.</param>
/// <returns>
/// <c>true</c> if the specified entity has duplicates; otherwise, <c>false</c>.
/// </returns>
bool HasDuplicates(T entity);
/// <summary>
/// Inserts the specified entity.
/// </summary>
/// <param name="entity">The entity.</param>
void Save( T entity );
/// <summary>
/// Inserts the entity or updates it if it already exists.
/// </summary>
/// <param name="entity">The entity.</param>
T SaveOrUpdate( T entity );
/// <summary>
/// Updates the specified entity.
/// </summary>
/// <param name="entity">The entity.</param>
/// <returns></returns>
T Update(T entity);
/// <summary>
/// Deletes the specified entity from the data source.
/// </summary>
/// <param name="entity">The entity.</param>
void Delete(T entity);
/// <summary>
/// Deletes the entity with the specified id.
/// </summary>
/// <param name="id">The id.</param>
void Delete(TId id);
}
but recently, after rereading some books on Design Patterns, I've had this seemingly amazing idea to apply some patterns to my repositories.
public interface IRepository<T, in TId> : IReadOnlyRepository<T, TId> where T : AbstractEntity<TId>
{
void Execute(IRepositoryCommand<T> command);
void Execute(IBatchRepositoryCommand<T> command);
}
public interface IRepositoryCommand<T>
{
void Execute(T entity);
}
public interface IBatchRepositoryCommand<T>
{
void Execute(IEnumerable<T> entities);
}
public class SaveCommand<T> : IRepositoryCommand<T>
{
public void Execute(T entity)
{
// Logic for saving goes here
}
}
public class BatchSaveCommand<T> : IBatchRepositoryCommand<T>
{
public void Execute(IEnumerable<T> entities)
{
// Logic for batch saves go here
}
}
which would then be called like this:
_myRepository.Execute(new SaveCommand());
My reasoning is that the logic for the common data access operations (e.g. saving, deleting) gets to be so repetitive that right now I'm relying on a T4 template to recreate it every time I have a new entity enter the playing field. This way I just define the most commonly used data access operations and then have any of my callers execute whatever action they need to execute.
Can you critique my work? I do have the tendency to overthink and overengineer things.
Answer: Although the command pattern you brought in makes it easy to create a 'flexible' Execute method, I wonder if it is really transparent to the ones who read your code.
Perhaps you can combine your two ideas by implementing a certain ReadOnlyRepository using the command objects, so that the user of your interface keeps using:
repository.Update(someEntity);
While the repository implementation does:
repository.Execute(new UpdateCommand());
Consider: if you had to make a change to your latter interface, where would you need to update that in your code? In my suggestion it simply requires you to change it in one place (since all the other code still uses the 'old' interface).
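A language-agnostic sketch of that combination (Python here for brevity; all class and method names are hypothetical, the C# version would follow the interfaces above): the public repository keeps the familiar CRUD methods while routing them through command objects internally:

```python
class SaveCommand:
    """Encapsulates the save logic once, reusable across repositories."""
    def execute(self, store, entity):
        store[entity["id"]] = entity

class DeleteCommand:
    """Encapsulates the delete logic."""
    def execute(self, store, entity):
        store.pop(entity["id"], None)

class Repository:
    """Callers see save/delete; the command objects stay an internal detail."""
    def __init__(self):
        self._store = {}            # in-memory stand-in for a data source

    def _run(self, command, entity):
        command.execute(self._store, entity)

    def save(self, entity):
        self._run(SaveCommand(), entity)

    def delete(self, entity):
        self._run(DeleteCommand(), entity)

    def get(self, entity_id):
        return self._store.get(entity_id)

repo = Repository()
repo.save({"id": 1, "name": "widget"})
print(repo.get(1))                  # {'id': 1, 'name': 'widget'}
repo.delete({"id": 1})
print(repo.get(1))                  # None
```

If the command interface ever changes, only the repository implementation needs updating; every caller keeps using the stable CRUD surface.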
I hope my post makes sense :) | {
"domain": "codereview.stackexchange",
"id": 460,
"tags": "c#, design-patterns"
} |
Do Earth's layers move at different speeds? | Question: I don't have a background in Geology but this question popped in my head the other day and can't find an answer anywhere else.
If I remember science class correctly, Earth's layers have different element compositions. Would it be correct to assume that they have different densities and different frictions as a result? And if they do, does it follow from it that they rotate at different speeds?
Thanks.
Answer: I am currently doing my master's in geophysics (last semester) and before that I did a bachelor's in geoscience.
I assume by layers you mean the crust, the mantle and the core.
These all have different compositions and also different densities. But the Earth rotates as a whole, not as individual layers: all layers have the same angular velocity. That means they all make one rotation per day.
These layers are also not the perfect boundaries we like to imagine, but more a change in properties around a finite depth. This depth can even change at different places. | {
"domain": "earthscience.stackexchange",
"id": 1727,
"tags": "earth-rotation, geologic-layers"
} |
Grade 12 chemistry question about Schrodinger's standing wave? | Question: this is my first time ever using a website like this, but I have something I'm curious about. My chemistry teacher showed this today (img below) and discussed how the quantum of energy that hits the electron has to be an integer, otherwise decay occurs. What is stopping this from happening more often? Why aren't things decaying a lot, and what are the factors behind this?
Answer: If an electron "falls" on the nucleus, it cannot produce anything. The electron does not have enough mass to produce a neutron, as the mass of the neutron is bigger than the sum of the masses of electron + proton. It is not even absurd to state that in a hydrogen atom, the electron has already fallen on the nucleus. | {
"domain": "chemistry.stackexchange",
"id": 17990,
"tags": "quantum-chemistry"
} |
Determining coefficients for wave function solutions of an electron in a periodic potential | Question: In Kittel's Intro to Solid State Physics, when solving the Schrödinger equation for a periodic potential, we begin by writing the potential and the wave function as Fourier series of the form
$$\psi = \sum_k C(k)\,e^{ikx} \qquad U(x) = \sum_G U_G\,e^{iGx}$$
where $k=2\pi n/L$ and $G$ is a reciprocal lattice vector. We then substitute these into the Schrödinger equation and simplify until we get what Kittel refers to as the central equation, which is
$$\left(\frac{\hbar^2 k^2}{2m}-\epsilon\right)C(k)+ \sum_G U_G\,C(k-G)=0$$
Kittel then goes on to say that the above equation "represents a set of simultaneous linear equations that connect the coefficients for all reciprocal lattice vectors G" and that "there are as many equations as there are coefficients C". This I do not understand. In Ashcroft and Mermin, it is stated that the central equation consists of $N$ equations, where $N$ is the number of vectors within the 1st Brillouin zone. But the sum $\sum_GU_G\,C(k-G)$ runs over all reciprocal vectors $G$ within reciprocal space, and hence there should be an infinite number of coefficients $C(k-G)$. So clearly there are an infinite number of coefficients but not an infinite number of equations. Is Kittel wrong when he states that "there are as many equations as there are coefficients"? After this, Kittel says that "these equations are consistent if the determinant of the coefficients vanishes", and then he writes that a block of the determinant of the coefficients is given by:
How does he get this matrix from the central equation? Any help on this would be most appreciated as it's been driving me crazy.
Answer: I do not have Ashcroft and Mermin for reference, but as far as I can tell the central equation generally describes an infinite set of equations and coefficients. This also makes sense for the following reason: If we were dealing with phonons, we would only need to consider wave vectors $\mathbf{K}$ inside the first Brillouin Zone (BZ) to model the motion, but this is due to elastic waves being described in terms of the lattice constant. An elastic wave "exists" only at the ions as there is nothing in between them which can move, and as such, there is a minimum wavelength and a maximum wave vector required to describe the motion. This maximum wave vector lies at the edge of the first BZ. The wave function of a free electron (its probability distribution) exists also in between atoms, and we generally need to consider all possible wave vectors $\mathbf{k}$, even those outside the first BZ.
As for the determinant in question; it is derived under the simplifying assumption that the potential only has one (real) Fourier component; $U_{g} = U_{-g} \equiv U$ (the constant component $U_0 = 0$). Thus the sum over $G$ in the central equation
$$ \bigg( \frac{\hbar^2k^2}{2m} - \epsilon \bigg)C(k) + \sum_G U_GC(k-G) = 0 $$
contains two terms, $UC(k \pm g)$. Set $\lambda_k \equiv \hbar^2k^2/2m$ and write out the set of equations from $k-2g$ to $k+2g$;
\begin{equation}
\begin{aligned}
&& \vdots \\
& (\lambda_{k-2g} - \epsilon)C(k-2g) && + \; U\big[C(k-g) + C(k-3g)\big] & = 0 \\
& (\lambda_{k-g} - \epsilon)C(k-g) && + \; U\big[C(k) + C(k-2g)\big] & = 0 \\
& (\lambda_{k} - \epsilon)C(k) && + \; U\big[C(k+g) + C(k-g)\big] & = 0 \\
& (\lambda_{k+g} - \epsilon)C(k+g) && + \; U\big[C(k) + C(k+2g)\big] & = 0 \\
& (\lambda_{k+2g} - \epsilon)C(k+2g) && + \; U\big[C(k+g) + C(k+3g)\big] & = 0 \\
&& \vdots
\end{aligned}.
\end{equation}
These are part of an infinite system of equations which can equally well be written as a single matrix equation;
\begin{equation}
\begin{bmatrix}
\ddots & \vdots & \vdots & \vdots & \vdots & \vdots & ⋰ \\
\dots & \lambda_{k-2g} - \epsilon & U & 0 & 0 & 0 & \dots \\
\dots & U & \lambda_{k-g} - \epsilon & U & 0 & 0 & \dots \\
\dots & 0 & U & \lambda_{k} - \epsilon & U & 0 & \dots \\
\dots & 0 & 0 & U & \lambda_{k+g} - \epsilon & U & \dots \\
\dots & 0 & 0 & 0 & U & \lambda_{k+2g} - \epsilon & \dots \\
⋰ & \vdots & \vdots & \vdots & \vdots & \vdots & \ddots
\end{bmatrix}
\begin{bmatrix}
\vdots \\ C(k-2g) \\ C(k-g) \\ C(k) \\ C(k+g) \\ C(k+2g) \\ \vdots
\end{bmatrix} =
\begin{bmatrix}
\vdots \\ 0 \\ 0 \\ 0 \\ 0 \\ 0 \\ \vdots
\end{bmatrix},
\end{equation}
and for non-trivial solutions the determinant of the matrix on the left must then be zero. What is shown by Kittel is just a small central block of this determinant. | {
"domain": "physics.stackexchange",
"id": 79423,
"tags": "quantum-mechanics, condensed-matter, wavefunction, solid-state-physics, schroedinger-equation"
} |
How can an event happening 5.5 billion light years away be witnessed in "real-time"? | Question: This article
https://www.ras.org.uk/news-and-press/2578-cosmic-radio-burst-caught-red-handed
states that
"These bursts were generally discovered weeks or months or even more than a decade after they happened! We’re the first to catch one in real time."
Given the answers to this question
Are we seeing the past when we look at the stars?
how is it that the former article can claim "real-time"?
Answer: There are two different delays at work. One is the 5.5 billion years between when an event happened and when the signal reaches Earth; the other is the weeks/months/years delay between when the first part of the signal reaches Earth and when that signal is appreciated for what it is.
In this case, we are ignoring the first delay and only talking about reducing the second one. The event happened 5.5 billion years ago, and nothing is changing that.
The thing about astronomy is that we take many, many photographs of different parts of the sky every night, in all different wavelengths. Usually any given project is just looking for certain things (large elliptical galaxies, planets around Sun-like stars, comets, etc.), and that's all the data is initially used for. We can't analyze every byte of data for every possible scientific goal, if for no other reason than we don't have an exhaustive list of what to even look for.
But we try not to throw away this data (to an extent; storage isn't free after all). So people can later look at data sets with new criteria for what counts as scientifically noteworthy. Bursts similar to the one in question are often found in this review process.
If instead one looks for bursts "in real time" -- identifying them as soon as the data is gathered -- one can look at the event with other telescopes in other wavelengths, to get a more complete picture of what is happening. (No one telescope can ever see more than a small fraction of the electromagnetic spectrum.) Once the burst has completely faded, it's too late to try to gather more data on it. | {
"domain": "physics.stackexchange",
"id": 19192,
"tags": "general-relativity, observational-astronomy"
} |
Is space — as opposed to space-time — curved by a gravitating mass? | Question: Or is the question in the title fundamentally wrong? We label each point in space-time with four coordinate values, one of which typically is suggestively called $t$ for time. This made me think that I could just fix, say, $t=0$ and look at the reduced-dimension manifold.
Yet a coordinate system is somewhat arbitrary and can, for example, be rotated such that nearly all points are relabeled. I assume that what, after rotation, is then called suggestively the $t'$ for "time" axis may point in a different direction and we don't have $t=0 \Leftrightarrow t'=0$, meaning that what was previously a "pure space point", i.e. one with $\text{time}=0$ need not be one after rotation.
So let me try a question: Can we relatively freely rotate our 4 dimensional coordinate system for the universe's spacetime such that what was space before (time fixed, say at zero) is rotated "into" the time axis to get varying values of the new time coordinate. Or is there some either mathematical or physical obstacle against completely arbitrary rotations that allows us to talk about space alone in some sense?
This answer actually helped me formulate my question and my hunch is that what is called "foliation" there is roughly my setting the time coordinate to zero in differently rotated coordinate systems.
Answer: A few facts:
In a 4-dimensional manifold such as spacetime you can pick any timelike direction and call it time in the vicinity of any given event. Directions orthogonal to this will then make up 'space'. To extend the definitions, you make a 'threading', that is, many timelike lines smoothly displaced from one another (not intersecting), and you have a notion of time for a continuous region of spacetime. The direction orthogonal to this time can be called space.
In the cosmos at large there is a natural choice to make for the timelike lines, owing to the way the matter is moving. You pick the worldlines of freely-falling matter at the largest scales. This is the standard choice made in cosmology, but for the purposes of your question you do not have to make this choice.
If the 4-dimensional manifold is flat, it is always possible to pick the part called 'space' such that it is curved. (This would be an unusual choice, but it is available).
If the 4-dimensional manifold is curved, it may be possible to find spacelike sections which are flat, or more generally submanifolds which are flat. I think this is a less common situation, and maybe not guaranteed to be possible; I'm not sure. But around a Schwarzschild black hole you can find, remarkably enough, a sequence of spacelike regions with Euclidean metric! Check out Gullstrand-Painlevé coordinates. | {
"domain": "physics.stackexchange",
"id": 88933,
"tags": "general-relativity, spacetime, differential-geometry, metric-tensor, curvature"
} |
Execute external file from inside C++ | Question: I am trying to create an application that on the front end presents the user with a text editor to input code into and then I will run that code and return the result. I thought it would be a fun project to try and build my own version of leetcode as a learning project.
Right now this is what I am doing to run the provided code. Let's say we are running python code, because that's all I have implemented right now.
First I take in the code that the user submits and create a file that contains the given code:
std::string python(std::string code){
std::string langCommand = "python3 ";
std::string outFile;
//I am hoping to parallelize this operation so I add threadID to output
outFile = createOutFileName("PythonRunner.py");
std::ofstream output;
output.open(outFile);
output << code;
output.close();
return langCommand + outFile;
}
The next thing I do is create an output file and run the previously created file but I send my stdout/stderr to another outputfile:
std::string Program::run(){
std::string command = createFile(this->lang, this->code);
this->outputFile = createOutFileName("output.txt");
std::stringstream newCommand;
newCommand << command;
newCommand << ">> ";
newCommand << outputFile;
newCommand << " 2>&1";
system(newCommand.str().c_str());
std::string output = getOutputFileData(this->outputFile);
cleanupFiles(command);
return output;
}
Finally I return whatever I got from my output file and that is how I am executing my code.
I gotta think there is an easier way to do this. Especially since I am doing so much writing to a file and then reading from it: is there any way to get rid of that?
I also want to include more than one language in the future so I don't want to use any libraries that are specific to a certain language.
Lastly, this is my first C++ project so I would love any C++ tips!
Edit: I do want to eventually parallelize this code and find some way to encapsulate the program so it can't damage the system it is running on. If there is some external tool that would be good for that and also gives me its stderr/stdout, let me know.
Edit: As someone asked, here is the entire repo
https://github.com/lkelly93/coderunner
Answer: Rather than system() you should use popen().
The difference is that system() runs the command in a subprocess with no access to that subprocess's streams, while popen() runs the command in a subprocess and provides access to the input and output streams of the subprocess.
This will allow you to run the subprocess and stream input to it directly (from the input field you provided for standard input), then read output from it and write it to the output field in your user interface.
// Note: popen() takes a mode argument. On POSIX a popen() stream is
// one-directional ("r" or "w"), so writing and then reading the same
// stream as below needs a platform whose popen() supports "r+"
// (e.g. BSD/macOS); otherwise use pipes and fork() yourself.
FILE* proc = popen(command.c_str(), "r+");
std::string inputFromUser = getUserInputFromUI();
// Using fwrite() correctly left to user.
// You need to check for errors and continue etc.
fwrite(inputFromUser.c_str(), 1, inputFromUser.size(), proc);
char buffer[100];
std::size_t size;
while((size = fread(buffer, 1, 100, proc)) != 0) {
// Check for read errors here.
sendToUserInterface(std::string(buffer, buffer + size));
}
pclose(proc);
Somewhat related: you don't need to save your python script as a file. The python command accepts - as a file name, which means read the script from the standard input rather than from a named file.
So you can run the python command (with popen()) and then write the script you want to execute to the input stream of the FILE produced.
This will remove the need for any intermediate files. | {
"domain": "codereview.stackexchange",
"id": 39211,
"tags": "c++, file"
} |
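The stdin trick from the popen() answer above can be sketched in Python with `subprocess` (an assumption for the sketch: the interpreter is reached via `sys.executable`; any interpreter that reads a script from standard input would do). No intermediate script or output files are needed:

```python
import subprocess
import sys

def run_python_code(code: str, timeout: float = 5.0) -> str:
    """Run user-submitted Python code by piping it to the interpreter's
    stdin ("python -"), capturing stdout and stderr together."""
    result = subprocess.run(
        [sys.executable, "-"],   # "-" = read the script from stdin
        input=code,
        capture_output=True,
        text=True,
        timeout=timeout,         # crude guard against infinite loops
    )
    return result.stdout + result.stderr

print(run_python_code("print(3 + 4)"))  # -> 7
```

Note that `subprocess` alone does not sandbox the child; isolating untrusted code still needs something external (e.g. containers).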
Calculating $E_b/N_0$ from given SNR ratio | Question: I have the following formula:
$$\frac{S}{N}=\frac{E_b \cdot R_b}{N_0 \cdot B }$$
Where $S/N$ is of course SNR in dB, $E_b$ is energy per information bit, $R_b$ is information bit rate in [bit/s], $N_0$ is noise spectral density and $B$ is bandwidth in [Hz].
So let's say the $S/N$ ratio = $70\ \rm dB$
$R_b = 250\ \rm kb/s$
$B = 1\ \rm MHz$
So:
$$70\ \mathrm{dB}=\frac{E_b \cdot 250\,000\ \mathrm{bit/s}}{N_0 \cdot 1\,000\,000\ \mathrm{Hz}}$$
$$70\ \mathrm{dB}=\frac{E_b}{N_0}\cdot\frac{1}{4}$$
Now, of course, I can't simply multiply because this 4 is not on a logarithmic scale. But can I just logarithmize 4 like that? Won't there be a problem with units? How to do it to get $E_b/N_0$ ratio in decibels, because I keep doing something wrong.
Answer: HINT:
$$
\frac{S}{N}=\frac{E_b \cdot R_b}{N_0 \cdot B }
$$
With all units linear:
$$\implies \frac{E_b}{N_0} = \frac{S}{N}\cdot \frac{B}{R_b}\tag{linear}
$$
Now with $S/N$ in $[\rm dB]$ and $R_b$ and $B$ linear, you get $E_b/N_0$ in $[\rm dB]$ as follows:
$$\implies \frac{E_b}{N_0} = \frac{S}{N} +10\log_{10}\left(\frac{B}{R_b}\right)\tag{in [dB]}$$ | {
"domain": "dsp.stackexchange",
"id": 9787,
"tags": "snr, bandwidth, symbol-energy"
} |
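The hint above can be checked numerically with the values from the question; the bandwidth/bit-rate ratio contributes $10\log_{10}(4) \approx 6.02$ dB:

```python
import math

def ebn0_db(snr_db: float, bandwidth_hz: float, bitrate_bps: float) -> float:
    """Eb/N0 in dB from SNR in dB: Eb/N0 = S/N + 10*log10(B/Rb)."""
    return snr_db + 10 * math.log10(bandwidth_hz / bitrate_bps)

print(ebn0_db(70, 1e6, 250e3))  # 70 + 10*log10(4), about 76.02 dB
```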
What causes the black shimmering bands on a sun-lit surface? | Question: I can't figure it out. I thought it was the window causing it, but even without it it happened. It seems to be the motion of the air that causes it, but why shimmering black bands? Sometimes this happens when the surface is not hot at all.
Answer: The Sun heats up the surface that its light touches, and the heat causes the air above it to rise; light passing through the moving hot air is refracted, causing a mirage. http://en.wikipedia.org/wiki/Mirage | {
"domain": "physics.stackexchange",
"id": 21568,
"tags": "thermodynamics, sun, ideal-gas"
} |
Derivation of Overall Mass Transfer Coefficient | Question:
The flux in the gas can be written as:
$$N_{1}=k_{p}(p_{1}-p_{1i})$$
And the flux in the liquid can be written as:
$$N_{1}=k_{x}(x_{1i}-x_{1})$$
And we know that $$x^*_{1}=\frac{p_{1}}{H}$$
How can I use this to find the overall mass transfer coefficient?
I know that the driving force will just be $(p_{1}-Hx_{1})$ but what's $K_{g}$ and more importantly, how is it derived?
EDIT: I'll give the best answer to anyone who can even give a hint
Answer: Regarding $k_p$:
So, you've got $N_1$, a flux in $\mathrm{mol/(s \cdot m^2)}$, and a partial pressure $p_i$ in $\mathrm{Pa}$. $k_p$ has got to connect those, and make the units work out.
Exactly what $k_p$ is depends on the flow of the gas. If the gas is stationary, mass transfer will be by diffusion alone. Unfortunately, that process doesn't have a steady-state solution in a semi-infinite domain. If you had a thickness for the gas layer ($L_x$), $k_p$ would be:
$$ \frac{D_{AB}}{L_x R T} $$
where $D_{AB}$, $R$, and $T$, are the diffusion coefficient, universal gas constant and temperature. $D_{AB}$ is a property of the species that you're tracking ($A$) and the remaining species that make up the gas ($B$). You can look those up. In a pinch, I assume $2 \cdot 10^{-5}$ for air-like gases.
Now, go with the idea that you've got some kind of flow.
With flow over a surface you'll end up with boundary layer, a thin region where velocity and species concentrations vary between their values at the surface and in the free stream. That flow is generally going to enhance the mass transfer, by actually moving things around a lot faster than diffusion (diffusion is wicked slow).
It is possible to cook up some solutions to the boundary layer equations and actually derive $k_p$. But that's tedious, only works in a few situations and the results are still approximate. What you actually want to do is look up a correlation. Like these:
Those'll give you the Sherwood number based on the Reynolds number and Schmidt number.
$$ Sh = \frac{k_g L}{D_{AB}}, Re = \frac{u L}{\nu}, Sc = \frac{\nu}{D_{AB}} $$
If you aren't familiar, the only thing that might be tricky here is choosing $L$. It's the characteristic length of the flow configurations. Usually, it's just the length of the object in the direction of flow. Except inside pipes, use the diameter. | {
"domain": "engineering.stackexchange",
"id": 829,
"tags": "heat-transfer, chemical-engineering, process-engineering, diffusion"
} |
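As an illustration of using such a correlation from the answer above (the specific correlation, $Sh = 0.664\,Re^{1/2}Sc^{1/3}$ for average laminar flat-plate transfer, and all numeric values are assumptions for this sketch), one can back out $k_g$ from the Sherwood number:

```python
import math

def k_g_flat_plate(u, L, nu, D_AB):
    """Average mass transfer coefficient k_g [m/s] over a flat plate,
    from the laminar correlation Sh = 0.664 Re^0.5 Sc^(1/3)."""
    Re = u * L / nu
    assert Re < 5e5, "laminar correlation only valid below transition"
    Sc = nu / D_AB
    Sh = 0.664 * math.sqrt(Re) * Sc ** (1 / 3)
    return Sh * D_AB / L  # from Sh = k_g L / D_AB

# Air-like gas, using the ~2e-5 m^2/s diffusivity mentioned in the answer:
print(k_g_flat_plate(u=1.0, L=0.1, nu=1.5e-5, D_AB=2e-5))  # roughly 1e-2 m/s
```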
A doubt on conservation of linear momentum | Question: Please look at the image given below.
Since the human of mass 'm' is dropped onto the car under the influence of gravity and since the dashed line does not include the earth how is momentum conserved as gravity is now an external force on the system?
P.S:the answer key in my book says that momentum is conserved so can someone give an explanation as to why this is?
Also, since the momentum has to be conserved, does it mean the car is moving with constant velocity ($F_{net}=0$)?
Answer: You're right. There is an external force on the system you have identified, which is gravity. As this force acts purely in the y - direction, the y - component of linear momentum is not conserved.
Notice that there are no external forces in the x - direction. The only force in the horizontal direction is possibly friction between the man's feet and the car, but this is internal to the system. Hence, the x - component of linear momentum is conserved. This can be mathematically written as:
$$ \Delta p_x = 0 \rightarrow m_{car}(v_{xi}) + 0 = (m_{car} + m_{man})v_{xf}$$
assuming the man sticks to the car on contact.
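Numerically (the masses and initial speed here are made-up illustration values, not from the question):

```python
def final_speed(m_car: float, v_car: float, m_man: float) -> float:
    """Horizontal speed after the man lands and sticks, from
    conservation of the x-component of momentum."""
    return m_car * v_car / (m_car + m_man)

# A 1000 kg car at 2 m/s catching an 80 kg man dropped from above:
print(final_speed(1000, 2.0, 80))  # slows to about 1.85 m/s
```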
Hope this helps. | {
"domain": "physics.stackexchange",
"id": 86250,
"tags": "classical-mechanics, momentum"
} |
including multiple copies of the same model in the file.world? | Question:
Hello,
Is there a way to include multiple copies of the same model with different positions in the world file?
For example:
I have the coke_can model, that I want to include multiple copies of it in my world file as following:
<include>
<uri>model://coke_can</uri>
<pose>-0.0 -2 0 0 0 0</pose>
</include>
<include>
<uri>model://coke_can</uri>
<pose>-0.0 -5 1 0 0 0</pose>
</include>
but when I do this, the first model overrides the second one. In other words, only the first model shows up in the world file.
Any ideas or suggestions to solve this are very much appreciated!
Thanks in advance.
Originally posted by Zahra on Gazebo Answers with karma: 122 on 2013-05-30
Post score: 1
Answer:
Use the <name> tag to give different names to each model. e.g.
<include>
<uri>model://coke_can</uri>
<name>coke1</name>
<pose>-0.0 -2 0 0 0 0</pose>
</include>
<include>
<uri>model://coke_can</uri>
<name>coke2</name>
<pose>-0.0 -5 1 0 0 0</pose>
</include>
Originally posted by ThomasK with karma: 508 on 2013-05-30
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by Zahra on 2013-05-31:
Thanks a lot. it works!
Comment by vvyogi on 2017-03-26:
Can something similar be done from the world launch file? | {
"domain": "robotics.stackexchange",
"id": 3325,
"tags": "sdformat"
} |
add column with count by constraint | Question: Could someone please help me out. I'm trying to remove the need to iterate through the dataframe, and I know it is likely very easy for someone with the knowledge.
Dataframe:
id racecourse going distance runners draw draw_bias
0 253375 178 Standard 7.0 13 2 0.50
1 253375 178 Standard 7.0 13 11 0.25
2 253375 178 Standard 7.0 13 12 1.00
3 253376 178 Standard 6.0 12 2 1.00
4 253376 178 Standard 6.0 12 8 0.50
... ... ... ... ... ... ... ...
378867 4802789 192 Standard 7.0 16 11 0.50
378868 4802789 192 Standard 7.0 16 16 0.10
378869 4802790 192 Standard 7.0 16 1 0.25
378870 4802790 192 Standard 7.0 16 3 0.50
378871 4802790 192 Standard 7.0 16 8 1.00
378872 rows × 7 columns
What I need is to add a new column with the count of unique races (id) by the conditions defined below. This code works as expected but it is sooo slow....
df['race_count'] = None
for i, row in df.iterrows():
df.at[i, 'race_count'] = df.loc[(df.racecourse==row.racecourse)&(df.going==row.going)&(df.distance==row.distance)&(df.runners==row.runners), 'id'].nunique()
Answer: Sorry, this is not a complete solution, just an idea.
In Pandas you can split a data frame in subgroups based on one or multiple grouping variables using the groupby method. You can then apply an operation (in this case nunique) to each of the subgroups:
df.groupby(['racecourse', 'going', 'distance', 'runners'])['id'].nunique()
This should give you the number of races with the same characteristics (racecourse, going, ...) but unique values for id.
Most importantly, this should be much faster than looping over the rows, especially for larger data frames.
EDIT:
Here's a complete solution also including the combination with the original data frame (thanks to ojdo for suggesting join/merge)
race_count = df.groupby(['racecourse', 'going', 'distance', 'runners'])['id'].nunique()
race_count.name = 'race_count'
df.merge(race_count, on=['racecourse', 'going', 'distance', 'runners'])
Conveniently, merge broadcasts the values in race_count to all rows of df based on the values in the columns specified by the on parameter.
This outputs:
id racecourse going distance runners draw draw_bias race_count
0 253375 178 Standard 7.0 13 2 0.50 1
1 253375 178 Standard 7.0 13 11 0.25 1
2 253375 178 Standard 7.0 13 12 1.00 1
3 253376 178 Standard 6.0 12 2 1.00 1
4 253376 178 Standard 6.0 12 8 0.50 1
5 4802789 192 Standard 7.0 16 11 0.50 2
6 4802789 192 Standard 7.0 16 16 0.10 2
7 4802790 192 Standard 7.0 16 1 0.25 2
8 4802790 192 Standard 7.0 16 3 0.50 2
9 4802790 192 Standard 7.0 16 8 1.00 2 | {
"domain": "codereview.stackexchange",
"id": 41941,
"tags": "python, pandas"
} |
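A variant of the accepted approach above that skips the merge step: `groupby(...).transform('nunique')` broadcasts the per-group count straight back onto the original rows (same column names as in the question; the demo frame is made up):

```python
import pandas as pd

def add_race_count(df: pd.DataFrame) -> pd.DataFrame:
    """Count unique race ids per (racecourse, going, distance, runners)
    group and broadcast the result onto every row."""
    out = df.copy()
    out["race_count"] = out.groupby(
        ["racecourse", "going", "distance", "runners"]
    )["id"].transform("nunique")
    return out

# Tiny demo frame mimicking the question's columns:
demo = pd.DataFrame({
    "id": [1, 1, 2, 3, 3, 3],
    "racecourse": [178, 178, 178, 192, 192, 192],
    "going": ["Standard"] * 6,
    "distance": [7.0] * 6,
    "runners": [13, 13, 13, 16, 16, 16],
})
counts = add_race_count(demo)["race_count"].tolist()
print(counts)  # two unique ids in the first group, one in the second
```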
Calculate net force on the pendulum | Question: In this image you can see a swinging pendulum and that there's a net force $A$ that causes it to move in circular motion. It is constantly updating velocity vector $v$ direction to be perpendicular with the direction towards the pivot point.
As I understand the net force $A$ should be $A = T - mg$, where $T$ is the tension force and is equal to $T = mg\cos\theta$. I tried to solve for net force $A$ but I'm not getting the correct result. In my calculations when $\theta = 0$, the net force is zero. But that's wrong because there's clearly always a force that affects the pendulum, since its velocity vector constantly changes its direction.
What am I missing?
Answer: You are missing concepts.
In a pendulum, the net force at any instant can be resolved into radial and tangential components (they are in fact mutually perpendicular to each other at each instant). The radial component is along the string, directed towards the centre of circular motion. It provides the required centripetal acceleration. When the string makes angle $\theta$ with the vertical, you can write:
$$T - mg\cos\theta = \frac{mv^2}{l}$$
Note that at the instant of maximum displacement only, the pendulum instantaneously comes to rest. So at maximum angular displacement $\theta_0$, we can write:
$$T_0 = mg\cos\theta_0,$$ where $T_0$ is the tension in the string at that instant.
Note that the tension also varies accordingly throughout the motion.
The tangential component of the net force is $mg\sin\theta$, which provides the required tangential acceleration (or, equivalently, the restoring torque for oscillation). This component is along the motion of the pendulum. It is zero only when the string is vertical.
So now you can see, at the extreme position, only tangential component of force is present.
You can use mechanical energy conservation for further analysis of the pendulum system. | {
"domain": "physics.stackexchange",
"id": 58482,
"tags": "homework-and-exercises, newtonian-mechanics, forces, free-body-diagram"
} |
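The relations in the answer above can be combined numerically. Energy conservation gives $v^2 = 2gl(\cos\theta - \cos\theta_0)$, so the string length cancels out of the tension; the mass $m = 1\ \mathrm{kg}$ and release angle $\theta_0 = 60^\circ$ below are illustration values:

```python
import math

g = 9.8  # m/s^2

def tension(m, theta, theta0):
    """String tension at angle theta, released from rest at theta0:
    T = m g cos(theta) + m v^2 / l with v^2 = 2 g l (cos(theta) - cos(theta0)),
    i.e. T = m g (3 cos(theta) - 2 cos(theta0)); the length l cancels."""
    return m * g * (3 * math.cos(theta) - 2 * math.cos(theta0))

def tangential_force(m, theta):
    """Tangential component of the net force, m g sin(theta)."""
    return m * g * math.sin(theta)

theta0 = math.radians(60)
# At the extreme position, T reduces to m g cos(theta0) (radial net force zero):
print(tension(1, theta0, theta0) - 1 * g * math.cos(theta0))  # ~0
# At the bottom, T = m g (3 - 2 cos(theta0)) = 2 m g for theta0 = 60 deg:
print(tension(1, 0, theta0))  # ~19.6 N
```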
Graph isomorphism in BPP implies it is also in RP | Question: $$L=\{\langle G\rangle \#\langle H\rangle : H, G \text{ are directed isomorphic graphs }\}$$
$\langle G\rangle$ is adjacency matrix written row by row.
Show that if $L\in BPP$ then also $L\in RP$.
Can you help me ?
I have no idea how to do it.
Answer: The important observation here is that $GI$ (the language of isomorphic graphs) is self reducible. This means that you can solve the corresponding search problem (finding an isomorphism) using a solution to the decision problem. You can find such a reduction in these notes of Jonathan Katz. Now, suppose $A$ is a polynomial time machine with access to an oracle for $GI$, such that when given two isomorphic graphs, $A$ produces an isomorphism.
To put $GI$ in $\mathsf{RP}$, you need to be able to answer "yes" only when you are certain the graphs are isomorphic. Suppose $M$ is a probabilistic polynomial time machine which agrees with $GI$ with high probability (its existence follows from $GI\in\mathsf{BPP}$, with some success amplification). Given graphs $G,H$ as input, execute $M$. If $M$ outputs "no", answer "no". However, when $M$ answers "yes", execute $A$ (the algorithm for finding an isomorphism), while replacing oracle calls with calls to $M$. If $A$ produces a valid isomorphism from $G$ to $H$, output "isomorphic", otherwise output "non-isomorphic".
If $G,H$ are isomorphic, then conditioning on the event that $M$ outputs the correct answer in all of the oracle calls raised by $A$, $A$ finds an isomorphism and the above procedure outputs the correct answer. Since $A$ runs in polynomial time, if the error probability of $M$ is small enough (I leave the details for you), you can use the union bound to bound the probability of having an error in at least one of the oracle calls. If $G,H$ are non isomorphic, then you always output no, which puts $GI$ in $\mathsf{RP}$. | {
"domain": "cs.stackexchange",
"id": 9587,
"tags": "complexity-theory, probabilistic-algorithms"
} |
If two languages together cover all words and one is regular, is the other one as well? | Question: If $L_1$$\subseteq$ $\Sigma^*$, $L_2$$\subseteq$ $\Sigma^*$ , $L_1$ is regular and $L_1$$\cup$ $L_2$ = $\Sigma^*$ then is $L_2$ necessarily regular?
I think that the answer is yes, but I'm not sure on my proof.
The reason that I think that $L_2$ is regular is because surely $L_2$ just contains all the words that $L_1$ doesn't? So, to me, that suggests that $L_2$ must be regular as well; I just don't know where to begin on a formal proof.
Any guidance would be appreciated.
Answer: In fact, the answer is no.
If $L_1$$\subseteq$ $\Sigma^*$, $L_2$$\subseteq$ $\Sigma^*$ , $L_1$ is regular and $L_1$$\cup$ $L_2$ = $\Sigma^*$ then is $L_2$ is not necessarily regular.
We can prove this through counter-example.
If we let $L_1$ = $\Sigma^*$, then we can choose any non-regular language over $\Sigma$ for $L_2$.
If we take $\Sigma$ = {a,b} and then let $L_2 = \{a^n b^n : n \geq 0\}$ (a non-regular language), then $L_1 \cup L_2 = \Sigma^*$ and $L_2$ is non-regular, as required.
"domain": "cs.stackexchange",
"id": 5621,
"tags": "formal-languages, regular-languages, closure-properties"
} |
Applying Sci-kit Learn's kNN algorithm to Fresh Data | Question: While I was studying Scikit-learn's kNN algorithm, I realized that if I use sklearn.model_selection.train_test_split, the provided data gets automatically split into the train data and the test data set, according to the proportions provided as parameters.
Then based on the train data, the algorithm looks at the k-nearest neighbor points closest to the test data points to determine whether the test data points belong to a certain criteria or not.
I was wondering whether there was a way to predict the criteria NOT for the test data sets, which were already a part of the provided data set, but brand new data that were not provided during the whole process.
Is there a way to do that using sci-kit learn?
Answer: kNN is not fitted to "the k-nearest neighbor points closest to the test data points". You fit it explicitly on data of your choosing, like:
from sklearn.neighbors import KNeighborsClassifier

neigh = KNeighborsClassifier(n_neighbors=3)
neigh.fit(X, y)
Usually this will be xtrain, ytrain, while you test the model performance using "new" (unseen) data and compare the true targets to the prediction.
neigh.predict(xtest)
or
neigh.predict_proba(xtest)
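A minimal end-to-end sketch (the data is made up for illustration): nothing stops you from calling `predict` on points that were never part of the original dataset; the train/test split is only an evaluation convenience.

```python
from sklearn.neighbors import KNeighborsClassifier

# Toy 1-D training data: class 0 near 0, class 1 near 10.
X_train = [[0.0], [1.0], [2.0], [10.0], [11.0], [12.0]]
y_train = [0, 0, 0, 1, 1, 1]

neigh = KNeighborsClassifier(n_neighbors=3)
neigh.fit(X_train, y_train)

# Brand-new points, never seen during fitting or splitting:
fresh = [[0.5], [10.5]]
print(neigh.predict(fresh))  # [0 1]
```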
See docs: https://scikit-learn.org/stable/modules/generated/sklearn.neighbors.KNeighborsClassifier.html | {
"domain": "datascience.stackexchange",
"id": 9470,
"tags": "machine-learning, scikit-learn, k-nn"
} |
Is 2-oxobicyclo[2.2.1]heptan-1-ide resonance stabilized? | Question: Is the following compound resonance stabilized?
I think it's not resonance stabilized because the p-orbital of the carbanion and the pure p-orbital of the carbonyl carbon are not in the same plane, so they can't overlap. (Pure means that the p-orbital is not a part of the $\ce{sp^2}$ set. One p-orbital of the carbonyl carbon is pure.)
Am I correct?
Answer: I think, in contrast to cyclohexanone, a resonance stabilization by
is less likely to happen, as you already start with a bicyclic compound where the cycles involved are small to moderate. The additional strain then introduced by a (partial) double bond at the bridgehead carbon atom would destabilize the molecule too much. I extrapolate here from Bredt's rule. | {
"domain": "chemistry.stackexchange",
"id": 17755,
"tags": "organic-chemistry, carbonyl-compounds, resonance"
} |
Tension and Newton's Third Law | Question: I have heard many people tell me that the tensional force is bi-directional. Consider the following case where a (mass-less) rope is used to transmit tension.
The rope is being pulled (by hand) with a force of 5 newtons. Thus the mass (along with the rope) will have an acceleration of 5 ms^(-2). (Neglect friction)
1) Considering a point P on the rope, have I represented the tensional force on the rope correctly?
2) By Newton's Third Law, if the rope is pulling on the block, the block must exert an equal and opposite force on the rope. So, shouldn't the body not have any motion? A similar question was asked here: With Newton's third law, why are things capable of moving?. According to the answer provided, it is the force of the muscles that is responsible for the resulting acceleration.
So then what force is transmitted across the rope? It has to be the force of the muscles and not any other force since the tension in the rope is 5 newtons. But if it is so, the force exerted by the block on the rope (reaction) should also be 5 newtons. This means that the object will have no motion! Am I misunderstanding something here?
Answer: There are three parts to the situation you are considering:
Yourself
Rope
Block
You exert a force of 5 N on the rope to the right and by Newton’s third law the rope exerts a force of 5 newton on you to the left.
The rope exerts a force of 5 N on the block to the right and by Newton’s third law the block exerts a force of 5 N on the rope to the left.
The end result is that
the block has on it a net force of 5 N to the right and it will accelerate to the right.
there is a net force of 5 N to the left on you. If you were not anchored to the ground this force would cause you to accelerate to the left.
the rope has a net force of zero on it which is a consequence of the assumption that the rope is massless. You can think of the rope as transferring forces between you and the block and there is no reason why the rope cannot move. | {
"domain": "physics.stackexchange",
"id": 47771,
"tags": "newtonian-mechanics, forces, free-body-diagram, string"
} |
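The "massless rope transmits the force" point in the answer above can be checked by giving the rope a small mass and letting it shrink. The 5 N pull matches the question, and the implied 1 kg block reproduces its 5 m/s² acceleration; the rope masses are illustration values:

```python
def tension_at_block(F, m_block, m_rope):
    """Pull with force F on a rope of mass m_rope attached to m_block.
    Everything accelerates at a = F / (m_block + m_rope); the tension
    at the block end is whatever force accelerates the block alone."""
    a = F / (m_block + m_rope)
    return m_block * a

for m_rope in (1.0, 0.1, 0.001):
    print(m_rope, tension_at_block(5.0, 1.0, m_rope))
# As m_rope -> 0, the tension at the block approaches the full 5 N,
# and the net force on the rope itself goes to zero.
```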
Why does the universe manifest scale? | Question: I'll try outline my question in clear terms, articulating specific aspects that are its primary motivators. I'm just beginning in my exploration of physics as a student, but a persistent question that I've been grappling with is this - why does the universe manifest scale?
More precisely, what is scale, in physical terms? Is it an extension of dimension? To be clear, I recognize that there may be an explanation grounded in dimensionality; however, it seems to me that scale does not equate to dimension. In the case of our universe things are contained within a three-dimensional space (at least at the macro level), but scale implies "levels" of containment within the dimensional containment space.
The way I make sense of it is as being akin to the surface area of dimension, but is this accurate from the standpoint of physics? Has the phenomenon of scale been theoretically defined?
The most significant aspect of this for me, is why it is that physics would work "differently" at different scales. The fact that at the macro level we observe behaviors under the theoretical banner of GR/SR while at the micro level QM becomes the rule makes it seem as if scale has some sort of primacy that extends beyond space in the GR/SR sense, because it seems to be setting distinct contexts in which different physics occur. Has this been explained?
Forgive any naivety that may come across due to my inexperience with physics, and for any of the less than rigorous aspects of my questions. I'm sure certain aspects of my question are probably just due to my dearth of knowledge of physics, but I haven't been confronted with what seems like a clear answer to the essential question of just what scale is in the view of physics.
Thank you in advance for any answers and insight on this question!
Answer: A scale is a level of analysis or observation that looks at a system on a certain length (or time) scale, ignoring phenomena much smaller and larger than this length or time. Note that this is something we humans do, not the world! However, it is very useful since physics at different scales does look fairly different.
One way of looking at why physics is different on different scales is to note that different forces and interactions have particular length scales. The most obvious are the strong and weak nuclear forces, but this is also true for van der Waals forces and surface tension in liquids. The reason for the range of the nuclear forces can be explained using the Yukawa potential. Surface tension occurs because of microscopic interactions (intermolecular forces) and hence acts on larger scales up to the scale where other typical forces like gravity overwhelm it. The relative "mix" of forces at different scales hence tends to vary.
The same is true for interactions: the mean-free path of molecules is one length scale that is different from their physical size. Physics below the molecular size is dominated by the quantum interactions, above it the molecule is largely a classical system, and above the mean-free path molecules can be treated diffusing statistically.
In some sense dimensionality affects this since it affects how strongly different forces fall off with distance. On very large scales electromagnetism and gravity, the forces with unbounded range, tend to dominate, and they fall off as $1/r^{D-1}$. Dipole forces (magnetic fields and tidal forces) fall off as $1/r^D$, so beyond a certain scale (dependent on how intense the dipoles typically are) they will not be relevant.
Things that remain the same when you rescale the problem are often useful or interesting, hence the interest in the renormalization group that is used to develop physical theories for things that are unchanged by rescaling. Many of these phenomena show non-integer dimensionality $D$ and are hence fractal. | {
"domain": "physics.stackexchange",
"id": 57929,
"tags": "renormalization, dimensional-analysis, scaling, order-of-magnitude"
} |
Can I have a sensor_msgs::Image::ConstPtr as data member? | Question:
I have a class that gets an sensor_msgs::Image::ConstPtr and there are a lot of member functions acting to this same image.
void extractData(const sensor_msgs::Image::ConstPtr &image)
I was wondering if it is possible to have a data member instead of sending the pointer from one function to the other.
I've tried adding:
sensor_msgs::Image::ConstPtr &image;
But it says uninitialized reference member.
I also tried removing Const or Ptr in the declaration, with no success.
Any ideas?
Originally posted by kabamaru on ROS Answers with karma: 35 on 2013-02-11
Post score: 0
Original comments
Comment by Ivan Dryanovski on 2013-02-11:
FYI: For some reason the text appears to me as "sensor_msgs::Image::ConstPtr ℑ", when the OP actually entered "sensor_msgs::Image::ConstPtr &image"
Comment by dornhege on 2013-02-11:
Seems fixed now.
Answer:
Yes, you can.
However, you cannot have it as a reference, i.e. remove the & in the declaration.
Originally posted by dornhege with karma: 31395 on 2013-02-11
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by kabamaru on 2013-02-11:
I don't want new objects. I want a reference or else it doesn't worth the loss in speed. I will go with passing around the smart pointer. Thanks anyway.
Comment by dornhege on 2013-02-11:
That's exactly what the smart pointer is intended for. | {
"domain": "robotics.stackexchange",
"id": 12838,
"tags": "ros, sensor-msgs, image"
} |
Intuition for Stress and the Cauchy Stress Tensor | Question: I'm struggling to get an intuitive understanding of what exactly Stress is, particularly the "direction" associated with it.
In the case of a 1 dimensional bar with just uniaxial loading, the way stress was explained to me was just $\pm\frac{F}{A}$ where $F$ is the force applied to either end, A is the cross sectional area, and the sign refers to tension or compression. This explanation is fine as a formula, but I don't see how it relates to "internal forces".
I've found other sources explaining it with an "imaginary cut" through the material, ignoring one side of the cut, and imposing equilibrium on the other piece. Why can either side be "ignored"? If stress is the internal force per unit of surface, why doesn't the neglected part contribute to the stress? (after all, the neglected part shares the surface).
In the more general case using the Stress Tensor,
$$T=\begin{bmatrix}
\sigma_{xx} & \tau_{xy} & \tau_{xz}\\
\tau_{yx} & \sigma_{yy} & \tau_{yz}\\
\tau_{zx} & \tau_{zy} & \sigma_{zz}\\
\end{bmatrix}$$
Do each of the components describe the stress on the surfaces of some infinitesimal volume? If so, which faces do they describe (there are 2 faces normal to each direction) - is it the sum of the stresses on each face?
Any insight on these questions would be greatly appreciated, thanks for reading
Answer:
Do each of the components describe the stress on the surfaces of some
infinitesimal volume?
Essentially, yes.
If so, which faces do they describe (there are 2 faces normal to each
direction) - is it the sum of the stresses on each face?
The assumption is that the volume is in equilibrium, both translational and rotational. On that basis, the diagonal terms are the applied external normal stresses on the faces of the cube. There are six faces, but the normal stress on each pair of opposite faces is equal and opposite for translational equilibrium, so only three are specified in the tensor. If there were only a normal stress on one pair of opposite faces and no applied shear stress on the faces, you would have your uniaxial stress equation. You can see this in the figure in the Wikipedia link supplied by @nicoguaro, except that $\sigma$ is used for shear stress and $e$ is used for normal stress.
The off-diagonal terms are the shear stresses on each face. Six are specified, but three pairs are identical, e.g., $\tau_{xy}=\tau_{yx}$. This needs to be so in order that there is no rotation of the cube. For example, in terms of @nicoguaro's indices, $\sigma_{21}=\sigma_{12}$.
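To make the face/direction bookkeeping concrete, here is a small numpy sketch with made-up stress values: the tensor is symmetric, and Cauchy's relation $t = T\,n$ gives the traction on the face with outward unit normal $n$.

```python
import numpy as np

# A made-up symmetric stress state (e.g. in MPa); symmetry encodes
# tau_xy = tau_yx etc., i.e. rotational equilibrium of the cube.
T = np.array([
    [50.0, 10.0,  0.0],   # sigma_xx, tau_xy, tau_xz
    [10.0, 20.0,  5.0],   # tau_yx, sigma_yy, tau_yz
    [ 0.0,  5.0, 30.0],   # tau_zx, tau_zy, sigma_zz
])
assert np.allclose(T, T.T)  # no net torque on the volume element

# Cauchy's relation: traction (force per area) on the face whose
# outward unit normal is n.  For the +x face:
t_plus_x = T @ np.array([1.0, 0.0, 0.0])
# The -x face carries the opposite traction: translational
# equilibrium, which is why only three normal stresses are listed.
t_minus_x = T @ np.array([-1.0, 0.0, 0.0])
```

The tensor itself is one object; the two faces normal to each axis carry equal and opposite tractions rather than being summed.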
Hope this helps. | {
"domain": "physics.stackexchange",
"id": 83484,
"tags": "stress-energy-momentum-tensor, stress-strain, solid-mechanics"
} |
Supersonic isentropic flow | Question: how to know if a deflection in a supersonic isentropic flow causes compression (oblique shock) or expansion (Prandtl-Meyer expansion)
Thanks in advance.
an example of a problem that is related to my problem
Answer: Really easy: Calculate the deflection angle beta. If the angle is positive, then it's an oblique shockwave, but if it's negative, then you have an expansion. If the angle beta is zero, then there's neither compression nor expansion, and if beta is 90º (π/2) you have a normal shockwave.
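The sign rule above can be captured in a tiny helper (the function name and angle convention are mine, following the answer):

```python
import math

def turn_type(beta):
    """Classify a supersonic turn by its deflection angle beta (radians),
    using the sign convention described in the answer above."""
    if beta == 0:
        return "no turn"                  # neither compression nor expansion
    if beta < 0:
        return "Prandtl-Meyer expansion"  # flow turned away from itself
    if math.isclose(beta, math.pi / 2):
        return "normal shock"             # limiting 90-degree case
    return "oblique shock"                # positive deflection -> compression
```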
An expression that may help you (I did a report on this and demonstrated the formulæ): | {
"domain": "engineering.stackexchange",
"id": 2737,
"tags": "fluid-mechanics"
} |
How does moveout correction work in seismic reflection profiling? | Question: When you take a seismic reflection profile you take multiple shots and then stack them together around a common midpoint. Due to the extra distance travelled by the wide-angle shots there is a moveout on the time profile which you have to account for. Apparently, in order to apply the moveout correction you have to know (or guess) the velocity structure of the layers. I am confused as to why this is. Surely you know the extra distance travelled from the geometry of the shot, and you know the extra time because it is recorded in the arrivals, so you should be able to calculate the average layer velocity after lining up all the arrivals at the same time, not the other way around.
If someone can provide a detailed explanation of how common midpoint stacking and the moveout correction works that would be very helpful.
Answer: The offset is measured in meters, but the depth is still measured in time. The extra distance the wave has traveled to a longer offset is therefore also seen as an increased time in the CMP-gather (Common Mid Point), and the extra time depends on the seismic velocity of the layers.
If the velocities are high, the moveouts are short. The longer the offset, the longer the distance the wave has traveled through the layer, and the larger the moveout correction needed before stacking. Sometimes we mute long moveouts as they tend to be distorted and noisy.
When we stack the traces in the CMP-gather, we need the reflectors to be at the same time and to do that, we perform NMO-correction (Normal MoveOut).
To find the right velocities to use for the NMO-correction, we can do a semblance analysis to get the NMO-velocity. That is to simply try possible velocities and see at what velocity the reflector in the CMP-gather is aligned horizontally and gives the highest peak when summed for all traces.
The figure shows a synthetic CMP-gather to the left. A semblance velocity scan is performed and the peaks are shown in the middle plot. The velocities are picked as shown in the graph and are used to NMO correct the CMP-gather as seen in the seismogram to the right. When this is stacked, the reflectors will add up and the noise will be suppressed. So in theory, we don't need the velocity beforehand to perform NMO-correction. The moveout can be used to calculate a NMO-velocity model.
The figure is from Madagascar development blog.
However, if we know the velocities, e.g. from borehole data or refraction studies, we don't need to rely on compute-heavy semblance velocity scans and uncertain velocity picking. Data are rarely as neat as the synthetic example shown here. Also, it's impossible to derive the gradually increasing velocity in a formation without internal reflectors.
The velocity we use for NMO-correction is not the same as the velocity of the layer, but rather the velocity that best flattens the moveout. To finally estimate the actual velocity of the layer, we can use e.g. the Dix equation to convert the velocities, but more accurate methods are usually used nowadays.
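For a single flat reflector, the moveout follows the hyperbola $t(x) = \sqrt{t_0^2 + x^2/v^2}$, and NMO correction with the right velocity flattens the event. A small sketch with assumed numbers:

```python
import numpy as np

t0 = 1.0          # zero-offset two-way time, s (assumed value)
v_nmo = 2000.0    # NMO velocity, m/s (assumed value)
x = np.array([0.0, 500.0, 1000.0, 1500.0])  # source-receiver offsets, m

# Travel-time hyperbola for a flat reflector:
t_x = np.sqrt(t0**2 + (x / v_nmo) ** 2)

# NMO correction: shift each trace up by the moveout t(x) - t0.
# With the correct velocity the reflector aligns at t0 on all
# traces, ready to be stacked.
t_corr = t_x - (t_x - t0)

# A wrong (too low) velocity leaves residual moveout at far offsets:
t_resid = np.sqrt(t0**2 + (x / 1500.0) ** 2) - (t_x - t0)
```

This is exactly what a semblance scan automates: try velocities until the corrected event is flat.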
This is the basic application for stacking, but there is much more to it, you can read more e.g. here. | {
"domain": "earthscience.stackexchange",
"id": 980,
"tags": "seismology, data-analysis"
} |
Optimal buffer size for output stream | Question: I'm writing a method to save an input stream to an output stream. What's the optimal size for the buffer? Here's my method:
/**
* Saves the given InputStream to a file at the destination. Does not check whether the destination exists.
*
* @param inputStream
* @param destination
* @throws FileNotFoundException
* @throws IOException
*/
public static void saveInputStream(InputStream inputStream, File outputFile) throws FileNotFoundException, IOException {
    try (OutputStream out = new FileOutputStream(outputFile)) {
        byte[] buffer = new byte[2097152]; //This is set to two MB. But I have no idea whether this is optimal
        int length;
        while ((length = inputStream.read(buffer)) > 0) {
            out.write(buffer, 0, length);
        }
        inputStream.close();
    }
}
Answer: It depends on a lot of factors, there's no universally "optimal" size. 512kB is probably good enough.
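The suggested benchmark is easy to sketch. Here is the idea in Python for brevity (the same read/write loop as the Java method; buffer sizes are arbitrary picks); on a real workload you would time actual file or network streams:

```python
import io
import time

def copy_stream(src, dst, buf_size):
    # Same loop as the Java method above: read a chunk, write it, repeat.
    while True:
        chunk = src.read(buf_size)
        if not chunk:
            break
        dst.write(chunk)

payload = bytes(8 * 1024 * 1024)  # 8 MB of zeros as a stand-in payload
for buf_size in (4 * 1024, 64 * 1024, 2 * 1024 * 1024):
    t_start = time.perf_counter()
    copy_stream(io.BytesIO(payload), io.BytesIO(), buf_size)
    print(f"{buf_size:>8} B buffer: {time.perf_counter() - t_start:.4f} s")

# Sanity check: the loop copies bytes faithfully regardless of buffer size.
sink = io.BytesIO()
copy_stream(io.BytesIO(b"abc" * 1000), sink, 256)
```

In-memory streams mostly measure call overhead; with real disks the OS page cache and device block size dominate, which is why there is no universal answer.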
If you want, you can always benchmark it for various buffer sizes: this will let you know what the best option is for your computer and OS. | {
"domain": "codereview.stackexchange",
"id": 1733,
"tags": "java, optimization"
} |
Why does increasing the temperature for a reaction at equilibrium cause the reaction to shift to the endothermic side? | Question: Increasing the temperature would also increase the rate of the exothermic reaction; however, what drives the molecules present in the system to choose the endothermic pathway?
Answer: Think about it this way:
There are two people. One guy likes chocolates and one guy hates them. They decide to transfer the chocolates. If you start throwing chocolates randomly to both of them, in which direction will the reaction go? The guy who hates chocolates would keep giving them to the guy who likes them.
In a similar way, if you provide heat to a reaction, it would move in the direction where the heat is absorbed, i.e. the endothermic pathway.
"domain": "chemistry.stackexchange",
"id": 8339,
"tags": "thermodynamics, equilibrium, temperature"
} |
Prove that the following algorithm has STOP property (number of steps is finite) | Question: Prove that the following algorithm has the STOP property. I am not sure if this term is widely known, so the definition of the STOP property that I got during classes looks as follows:
STOP property (for all input data satisfying $ \alpha $ the computation halts - the number of steps is finite)
$\alpha: x \in N $
void BB(int x)
{
    int y = x;
    int z = 0;
    while ((z != 0) || (y <= 300))
    {
        if (y <= 300)
        {
            y = y + 3;
            z = z + 1;
        }
        else
        {
            y = y - 2;
            z = z - 1;
        }
    }
}
I do not really have any idea how to approach such a problem. I was thinking about using the method of loop counters, but I couldn't success with it.
Answer: You have three possible cases:
$y > 300$. Since $z = 0$ initially, the algorithm terminates.
$y = 300$ and $z = 0$.
You enter the first if and you get $y = 303$ and $z = 1$. Then you enter the second if and you get $y = 301$ and $z = 0$. Since $y > 300$ and $z = 0$ the algorithm stops.
$y < 300$ and $z = 0$.
After $k$ iterations $y = 301$ or $y = 302$ or $y=303$ and $z = k$. By executing the algorithm for each $y$ you can see that $y$ only takes values in a specific range while $z$ is always decreasing and will eventually reach 0 (or it will become less than 0). For example, let $y = 301$ and $z = k$. By executing the algorithm you get the following values:
${\bf y = 301, z = k} \\
y = 299, z=k-1 \\
y=302, z = k\\
y = 300, z = k-1 \\
y = 303, z = k \\
{\bf y = 301, z = k-1}\\
y = 299, z = k-2 \\
y=302, z = k-1\\
y = 300, z = k-2 \\
y = 303, z = k-1 \\
{\bf y = 301, z = k-2}\\$
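The trace above, and the whole case analysis, can also be machine-checked with a direct Python translation of the algorithm; the iteration cap is only a safety guard for the experiment:

```python
def BB(x):
    """Returns the number of loop iterations -- finite iff BB halts on x."""
    y, z, steps = x, 0, 0
    while z != 0 or y <= 300:
        if y <= 300:
            y, z = y + 3, z + 1
        else:
            y, z = y - 2, z - 1
        steps += 1
        assert steps < 1_000_000, "suspected non-termination"
    return steps

# Case 1: y > 300 and z = 0 -> the loop body never runs.
# Case 2: y = 300 -> exactly two iterations, as in the trace.
# Case 3: y < 300 -> halts after finitely many of the cycles shown.
halted = [BB(x) for x in range(0, 400)]
```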
Obviously each time you reach $y = 301$, z is decreased by 1. After showing that the same thing happens for the other two possible values of $y$ you have proved that for every possible starting value, the algorithm stops after a finite number of steps. | {
"domain": "cs.stackexchange",
"id": 13182,
"tags": "algorithms, algorithm-analysis, correctness-proof"
} |
What is the specific role of the cerebellum when it comes to 'coordinating movement'? | Question: In elementary biology (high school level in the UK - A levels), we are told that the cerebellum is the part of the brain that 'coordinates movement'. Literally nobody takes the time to explain what the word 'coordinates' encompasses. Hence, I do not know the specific role of the cerebellum in coordinating movement.
For example, one very daunting thing that we are told is that the frontal lobe contains the 'primary motor cortex', which supposedly contains motor neurons that connect to the spinal cord and brain stem and can send nerve impulses that enable movement in the body. Where is the distinction between such a role and the role of the cerebellum in 'coordinating movement'?
Answer: The principal function of the cerebellum, identified long ago, is to calibrate detailed movements rather than to initiate movements or decide which movements to execute (Ghez et al, 1985). This was concluded by observing the changes which occurred after damaging the cerebellum. Animals and humans with cerebellar dysfunction show, above all, problems with motor control, on the same side of the body as the damaged part of the cerebellum. They continue to be able to generate motor activity, but it loses precision, producing erratic, uncoordinated, or incorrectly timed movements. Thus, the cerebellum helps coordinate fine-tuned movements and inhibits involuntary movements. Apart from this, fMRI studies have indicated that more than half of the cerebellum is intertwined with association areas (Buckner et al, 2011). But since that is out of scope here, I'll leave it at that.
See it like this:
For a more detailed mechanism and circuitry of the cerebellum, you can have a look at this interactive page from the University of Texas. @FilipeRocha has also come up with some fine details in their answer.
References:
Ghez C, Fahn S (1985). "The cerebellum". In Kandel ER, Schwartz JH. Principles of Neural Science, 2nd edition. New York: Elsevier. pp. 502–522.
Buckner RL, Krienen FM, Castellanos A, Diaz JC, Yeo BT (2011). "The organization of the human cerebellum estimated by intrinsic functional connectivity". J. Neurophysiol. 106 (5): 2322–2345. doi:10.1152/jn.00339.2011. | {
"domain": "biology.stackexchange",
"id": 6809,
"tags": "neuroscience, neurophysiology, neuroanatomy, neurology"
} |
Implementing a complex circuit for a Szegedy quantum walk in qiskit | Question: Problem definition
I'm implementing a quantum circuit in qiskit for a Szegedy quantum walk (reference, Fig. 21). It uses two registers, each of dimension $N$ ($N=3$). The challenges I'm facing are:
Multiple controlled gates (2 and 3 controls and targets like $H$, $R_{y}$).
Hermitian conjugate of an operator (Dagger).
Ancilla qubits, that increase the complexity of the circuit.
Here is a main part of the circuit:
For example, we have $K_{b_{2}}$ controlled by zero and one controls.
Main questions
How do I implement the controlled-dagger operator $K^{\dagger}_{b_{2}}$, of operator $K_{b_2}$? Does qiskit provide dagger operators of the principal gates? I have a little insight here. Should I apply tdg to all gates in $K_{b_{2}}$?
For multiple-controlled qubits operations, I base the construction of the multiple-controlled gates in the Nielsen&Chuang book, as Toffoli gates. So, we must use ancilla qubits. For $N$ controls we use $N-1$ ancilla qubits.
Is the following proposal correct?
proposition
So, for $K_{b_{2}}$, I control individually all the gates as follows, is this approximation correct?
How do I control the $R_{y}$ gate? I could not find a controlled rotation around the Y-axis, only one for the Z-axis (crz).
qiskit
I would define for each gate of the above figure, the "compute-copy-uncompute" method here.
def kb2(qw, q0, q1, anc):
    qw.ccx(q0[0], q0[1], anc[0])
    qw.cry(np.pi/2, anc[0], q1[0])
    qw.ccx(q0[0], q0[1], anc[0])
    qw.ccx(q0[0], q0[1], anc[0])
    qw.ccx(anc[0], q1[0], anc[1])
    qw.cry(np.pi/2, anc[1], q1[1])
    qw.ccx(anc[0], q1[0], anc[1])
    qw.ccx(q0[0], q0[1], anc[0])
    qw.ccx(q0[0], q0[1], anc[0])
    qw.ccx(anc[0], q1[0], anc[1])
    qw.ch(anc[1], q1[1])
    qw.ccx(anc[0], q1[0], anc[1])
    qw.ccx(q0[0], q0[1], anc[0])
    #... and so on
    return qw
q0 = QuantumRegister(3, 'q0')
q1 = QuantumRegister(3, 'q1')
anc = QuantumRegister(2, 'a')
qwcirc = QuantumCircuit(q0, q1, anc)
qwcirc.x(q0[1]) # for 0-control
kb2(qwcirc, q0, q1, anc)
qwcirc.x(q0[1]) #for 0-control
# Matplotlib Drawing
qwcirc.draw(output='mpl')
I think half of the Toffoli gates may be avoided... but I really hope to start a conversation. Thanks in advance.
Answer: This is an interesting question.
How do I implement the controlled-dagger operator $K^{\dagger}_{b_{2}}$, of operator $K_{b_2}$? Does qiskit provide dagger operators of the principal gates? I have a little insight here. Should I apply tdg to all gates in $K_{b_{2}}$?
The dagger operation is quite simple to implement and can be seen as a recursive operation.
The only requirement is that the dagger of each gate in your gateset is also in your gateset. This means that if $U$ is a gate in your gateset (i.e. a "primitive" gate), then $U^\dagger$ should also be present in your gateset.
If this requirement is met (and it is for the IBM gateset, which answers your second question), then here is a pseudo-algorithm, in Python syntax, implementing a generic dagger operation:
def dagger(quantum_gate):
    if quantum_gate in gateset:
        return quantum_gate.dagger()  # just returning a gate from the gateset.
    # else, quantum_gate is a sequence of other quantum gates (primitive or not)
    daggerized_gate = []
    for gate in reversed(quantum_gate):
        daggerized_gate.append(dagger(gate))  # recursive call on each sub-gate
    return daggerized_gate
This algorithm is fully generic: it works for all gates, even controlled ones, provided the requirement is met.
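The base fact the recursion relies on is $(AB)^\dagger = B^\dagger A^\dagger$: daggering a sequence means reversing it and daggering each element. A quick numpy check with two arbitrary gates:

```python
import numpy as np

def dagger(U):
    # For a matrix, the dagger is the conjugate transpose.
    return U.conj().T

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard
S = np.array([[1, 0], [0, 1j]])                # phase gate

circuit = S @ H                       # H applied first, then S
reversed_daggered = dagger(H) @ dagger(S)

# (S H)^dagger == H^dagger S^dagger : the recursion's invariant
assert np.allclose(dagger(circuit), reversed_daggered)
# ...and the daggered sequence undoes the original circuit:
assert np.allclose(reversed_daggered @ circuit, np.eye(2))
```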
For multiple-controlled qubits operations, I base the construction of the multiple-controlled gates in the Nielsen&Chuang book, as Toffoli gates. So, we must use ancilla qubits. For $N$ controls we use $N-1$ ancilla qubits.
There are several algorithms that implement a generic $N$-controlled (controlled by $N$ qubits) $X$ gate. As far as I know, there are 3 distinct classes:
The algorithms requiring $N-1$ ancilla qubits, as you mention. These algorithms have the best gate-count and circuit-depth complexity.
The algorithms requiring only $1$ ancilla qubit, at the expense of a bigger gate-count and circuit-depth.
Hybrid algorithms between the 2 previous classes: they need more than one ancilla qubit but less than $N-1$ and also have a gate complexity between the 2 previous classes.
I don't have links right now, but will edit as soon as possible to provide you links to papers describing algorithms in the 3 previous classes.
Note: if someone has access to some papers, please edit my answer if possible or leave the link in comments.
Is the following proposal correct?
So, for $K_{b_{2}}$, I control individually all the gates as follows, is this approximation correct?
How do I control the $R_{y}$ gate? I could not find a controlled rotation around the Y-axis, only one for the Z-axis (crz).
This is not an approximation: it is 100% correct. Controlling a composed-quantum gate is equivalent to controlling all the gates composing it (may be used recursively).
About the implementation of the $R_y$ gate, you can implement it with the circuit given here.
For multiply-controlled $R_y$ gates, you can apply this definition recursively.
A quick observation: you can remove the 3rd control from the last 2 $H$ gates. If the first 2 controls are verified, the $H$ gate will be applied only once to the last qubit, independently of the value of third control-qubit. | {
"domain": "quantumcomputing.stackexchange",
"id": 817,
"tags": "quantum-gate, programming, circuit-construction, qiskit"
} |
Why are ensembles so unreasonably effective | Question: It seems to have become axiomatic that an ensemble of learners leads to the best possible model results - and it is becoming far rarer, for example, for single models to win competitions such as Kaggle. Is there a theoretical explanation for why ensembles are so darn effective?
Answer: For a specific model you feed it data, choose the features, choose hyperparameters, etcetera. Compared to reality, it makes three types of mistakes:
Bias (due to too low model complexity, a sampling bias in your data)
Variance (due to noise in your data, overfitting of your data)
Randomness of the reality you are trying to predict (or lack of predictive features in your dataset)
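The variance item is the one averaging directly attacks, and it is easy to demonstrate numerically; a sketch with synthetic, independent "models" (all numbers arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
truth = 1.0
n_models, n_trials = 25, 20_000

# Each "model" predicts truth plus independent zero-mean noise --
# a stand-in for low-correlation ensemble members.
preds = truth + rng.normal(0.0, 1.0, size=(n_trials, n_models))

single_mse = np.mean((preds[:, 0] - truth) ** 2)           # ~1.0
ensemble_mse = np.mean((preds.mean(axis=1) - truth) ** 2)  # ~1/25
```

With perfectly independent errors the variance of the average drops as 1/M; correlated models give less of a reduction, which is why diversity among members matters.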
Ensembles average out a number of these models. The bias due to sampling bias will not be fixed, for obvious reasons, and averaging can fix only some of the model-complexity bias; however, the variance mistakes differ greatly across your models. Low-correlation models in particular make very different mistakes in this area: certain models perform well in certain parts of your feature space. By averaging out these models you reduce this variance quite a bit. This is why ensembles shine. | {
"domain": "datascience.stackexchange",
"id": 868,
"tags": "machine-learning, data-mining, predictive-modeling"
} |
Inverse Fast Fourier Transform Hard Way | Question: I'm reading "Digital Signal Processing", written by Michael Weeks. In section Representing a Digital Signal as a Sum of Sinusoids, there is this Matlab code that I cannot understand. (code given below)
What I don't get is what does the term inside cos argument 1/Xsize mean (in part %Do the sinusoids as discrete points only, I guess it is fundamental frequency, but how can we conclude it). And why is the whole function divided by Xsize.
Same thing with smooth0, smooth1, smooth2... I also don't get this smoothing process.
Thank you for any explanations, or links to helpful stuff.
%sum_of_sins.m
%Show a random signal as a sum of sins
%
%Make our x signal
x=round(rand(1,8)*10);
Xsize=length(x);
%Get FFT of x
X=fft(x);
Xmag=abs(X);
Xphase=angle(X);
%Show the freq-domain info as sum of sinusoids
%Find the IFFT (the hard way) part 1
n=0:Xsize-1;
%Do the sinusoids as discrete points only
s0=Xmag(1)*cos(2*pi*0*n/Xsize+Xphase(1))/Xsize;
s1=Xmag(2)*cos(2*pi*1*n/Xsize+Xphase(2))/Xsize;
s2=Xmag(3)*cos(2*pi*2*n/Xsize+Xphase(3))/Xsize;
s3=Xmag(4)*cos(2*pi*3*n/Xsize+Xphase(4))/Xsize;
s4=Xmag(5)*cos(2*pi*4*n/Xsize+Xphase(5))/Xsize;
s5=Xmag(6)*cos(2*pi*5*n/Xsize+Xphase(6))/Xsize;
s6=Xmag(7)*cos(2*pi*6*n/Xsize+Xphase(7))/Xsize;
s7=Xmag(8)*cos(2*pi*7*n/Xsize+Xphase(8))/Xsize;
%Redo the sinusoids as smooth curves
t=0:0.05:Xsize-1;
smooth0=Xmag(1)*cos(2*pi*0*t/Xsize+Xphase(1))/Xsize;
smooth1=Xmag(2)*cos(2*pi*1*t/Xsize+Xphase(2))/Xsize;
smooth2=Xmag(3)*cos(2*pi*2*t/Xsize+Xphase(3))/Xsize;
smooth3=Xmag(4)*cos(2*pi*3*t/Xsize+Xphase(4))/Xsize;
smooth4=Xmag(5)*cos(2*pi*4*t/Xsize+Xphase(5))/Xsize;
smooth5=Xmag(6)*cos(2*pi*5*t/Xsize+Xphase(6))/Xsize;
smooth6=Xmag(7)*cos(2*pi*6*t/Xsize+Xphase(7))/Xsize;
smooth7=Xmag(8)*cos(2*pi*7*t/Xsize+Xphase(8))/Xsize;
%Find the IFFT (the hard way) part 2
my_sum_of_sins=s0+s1+s2+s3+s4+s5+s6+s7;
smooth_sum=smooth0+smooth1+smooth2+smooth3+smooth4+smooth5+smooth6+smooth7;
%Show both discrete points and smooth curves together
xaxis1=(0:length(smooth0)-1)/20; %for 8 points
xaxis2=(0:length(s0)-1);
figure(1);
subplot(4,1,1); plot(xaxis1, smooth0, 'g', xaxis2, s0,'r*');
subplot(4,1,2); plot(xaxis1, smooth1, 'g', xaxis2, s1,'r*');
subplot(4,1,3); plot(xaxis1, smooth2, 'g', xaxis2, s2,'r*');
subplot(4,1,4); plot(xaxis1, smooth3, 'g', xaxis2, s3,'r*');
figure(2);
subplot(4,1,1); plot(xaxis1, smooth4, 'g', xaxis2, s4, 'r*');
subplot(4,1,2); plot(xaxis1, smooth5, 'g', xaxis2, s5, 'r*');
subplot(4,1,3); plot(xaxis1, smooth6, 'g', xaxis2, s6, 'r*');
subplot(4,1,4); plot(xaxis1, smooth7, 'g', xaxis2, s7, 'r*');
Answer: Effectively, Xsize allows the sampling frequency to be set.
If you just used 2*pi*k*n+Xphase(k+1) as the argument of $\cos(\cdot)$ or $\sin(\cdot)$ then you'd just get $\cos({\tt Xphase(k+1)})$ or $\sin({\tt Xphase(k+1)})$ for all terms... because then the 2*pi*k*n terms, with k and n both integers, would be integer multiples of 2*pi.
EDIT
With respect to the Xsize terms at the end:
s0=Xmag(1)*cos(2*pi*0*n/Xsize+Xphase(1))/Xsize;
Check out the definition of the inverse DFT, particularly equation 2:
$$
x_n = \frac{1}{N} \sum_{k=0}^{N-1} X_k e^{i2\pi k n / N}
$$
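That definition is exactly what the script implements term by term. A numpy sketch (with an assumed random signal) confirming that the per-bin cosines, including the 1/N factor, sum back to the original real signal:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.round(rng.random(8) * 10)   # same kind of random signal as the script
N = len(x)
X = np.fft.fft(x)
n = np.arange(N)

# One cosine per FFT bin, as in the s0..s7 lines of the Matlab script:
recon = sum(
    np.abs(X[k]) * np.cos(2 * np.pi * k * n / N + np.angle(X[k])) / N
    for k in range(N)
)
# For a real signal the imaginary parts cancel in conjugate pairs,
# so the cosine sum reproduces x exactly.
assert np.allclose(recon, x)
```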
for your code, N = XSize. | {
"domain": "dsp.stackexchange",
"id": 1324,
"tags": "matlab, ifft"
} |
Laws of nature and relative motion | Question: According to the principle of relativity, the laws of nature should be independent of the relative movement of different frames. My doubt is the meaning of "laws of nature".
So, suppose a spaceship starts from Earth with zero velocity and (except for a few minutes after launching) keeps acceleration $g$ until point $x$. Then it inverts the direction of the engines, keeping $-g$ until it stops after reaching $2x$, and returns to $x$. Finally, it inverts the engines again, keeping $g$ until it stops at the Earth.
Even though pendulums with the same length $l$ on the ship and on the Earth have the same period $$T = 2\pi \sqrt{\frac{l}{g}}$$ the total number of oscillations after the trip is different, because less time passed for the ship. It could be argued that the length $l$ changes between frames, but a quartz clock doesn't depend on such a length, and its number of oscillations would also differ.
Apparently number of oscillations is not included in the expression "laws of nature". That means: there are properties that change only because of relative movement, even when the physical environment that should affect the outcome is not different from the point of view of each frame.
It is possible also to compare different ships, one as before, and another going to a distance $y<x$ and repeating the sequence several times, until they meet after some time. Again the number of oscillations are different, even having not only the same acceleration, but also avoiding the difference of potential well (uniform acceleration and gravity).
Answer: You can think of the laws of nature as being rules about the relationships between different physical quantities. The principle of relativity is that the rules should appear to hold regardless of the reference frame you chose to label positions in time and space.
Clearly that doesn't mean that the values of physical properties don't change from one reference frame to another. Measured relative to my chair, I have zero momentum- measured relative to a passing plane I have a large momentum.
Likewise with the passage of time. In my rest frame time passes at a certain rate. On the passing plane time also passes at a certain rate. However, the rate at which time passes on the plane appears reduced when I measure it from my frame.
The relevant law of nature, as far as we know, is that the relationships between times and distances in different inertial reference frames are governed by the Lorentz transformations. It doesn't matter which reference frames you pick - the transformations will always hold.
By a circular argument I can answer your question 'what is a law of nature' by saying that it is any fixed rule governing the relationship between quantities that applies independently of the reference frame chosen to represent the quantities.
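A concrete instance of such a rule: the spacetime interval between events is the same in every inertial frame. A quick check with a Lorentz boost (units with $c=1$; the event coordinates are made up):

```python
import math

def boost(t, x, v):
    """Lorentz boost along x with speed v, in units where c = 1."""
    gamma = 1.0 / math.sqrt(1.0 - v * v)
    return gamma * (t - v * x), gamma * (x - v * t)

t, x = 3.0, 1.5              # an event's coordinates in one frame
t2, x2 = boost(t, x, v=0.6)  # the same event seen from a moving frame

# The coordinates change, but the interval -- the "law" -- does not:
s2_before = t**2 - x**2
s2_after = t2**2 - x2**2
```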
You can then see that you have to be careful with your definitions of the laws if you want to be rigorous. | {
"domain": "physics.stackexchange",
"id": 82749,
"tags": "special-relativity, time"
} |
How can I write unit tests for a pennylane circuit? | Question: I have several mixing unitary circuits written using Pennylane to be used in the QAOA algorithm. Furthermore, I'd like to write unit tests for these mixing circuits to ensure that the code is doing what it is supposed to in the future as changes are made to the codebase. Consider the basic example:
def x_mixer(beta, wires):
    for i in wires:
        qml.RX(beta, wires=i)
Currently, I'm thinking of using assert statements to check that the output of:
dev = qml.device('lightning.qubit', wires=2, shots=10000)
circuit = qml.QNode(mixer_circuit, dev)
result = circuit(0.5, wires=[0, 1])
is a certain value. Now one issue is that the results themselves are probabilistic and change during each run. My first question: What's the best way to get around this? Can you set random_seed in any of the simulator devices?
In general if someone has ideas on how to do unit testing for Pennylane circuits, it would be really appreciated.
Answer: Simulator devices, like 'lightning.qubit' or 'default.qubit', can usually be run analytically. This is the default for most devices, but can be explicitly specified by setting shots=None.
Devices inheriting from QubitDevice, like "default.qubit" and "lightning.qubit" currently rely on numpy.random for their random number generation. So you can also specify the global numpy seed to get reproducible results:
import numpy as np
np.random.seed(1234)
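With the global seed fixed, two identical runs draw identical samples, which is what makes shot-based assertions stable. A minimal illustration with plain numpy standing in for a shot-based device (the helper name is mine):

```python
import numpy as np

def fake_shots(shots=1000):
    # Stand-in for a shot-based simulator: results come from the
    # global numpy RNG, as with QubitDevice-based devices.
    return np.random.binomial(shots, 0.5) / shots

np.random.seed(1234)
first = fake_shots()
np.random.seed(1234)   # re-seed: the "measurement" repeats exactly
second = fake_shots()
assert first == second
```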
Hope that helps :) | {
"domain": "quantumcomputing.stackexchange",
"id": 2850,
"tags": "programming, quantum-circuit, pennylane"
} |
Coulombic potential energy of Hydrogen $1s$ orbital? | Question: Given that $\langle \psi_{1s}|\hat H|\psi_{1s}\rangle=\langle \psi_{1s}|\hat K+\hat V|\psi_{1s}\rangle=-13.6$eV, does any one know the exact value of $\langle \psi_{1s}|\hat V|\psi_{1s}\rangle$? I tried to find it online but didn't get it... Thanks!
Answer: Using Schrodinger-equation wavefunctions for hydrogen, the expectation values of the kinetic energy and the (electrostatic) potential energy are
$$\langle \hat K \rangle=\frac{\mu e^4}{32\pi^2\epsilon_0^2\hbar^2}\frac{1}{n^2}=\frac{\alpha^2}{2n^2}\mu c^2$$
and
$$\langle \hat V \rangle=-\frac{\mu e^4}{16\pi^2\epsilon_0^2\hbar^2}\frac{1}{n^2}=-\frac{\alpha^2}{n^2}\mu c^2.$$
Here $\mu=\frac{m_em_p}{m_e+m_p}$ is the reduced mass of the electron and proton, $e$ is their charge, $\epsilon_0$ is the electric constant, $\hbar$ is the reduced Planck constant, $c$ is the speed of light, $\alpha=\frac{e^2}{4\pi\epsilon_0\hbar c}$ is the dimensionless fine-structure constant, and $n$ is the principal quantum number (1 for the 1s state).
Numerical values for the relevant constants are $\mu c^2=510721$ eV, $\alpha=0.00729735$, and $\alpha^2\mu c^2=27.1966$ eV.
The expectation value of the kinetic energy is of course positive. The expectation value of the potential energy is negative because the electrostatic force between the proton and the electron is attractive.
The factor of two between their magnitudes is a consequence of the virial theorem. The same ratio holds in the Bohr model of hydrogen, and in the Solar System! It comes from the forces being inverse square forces.
The expectation value of the total energy is negative, expressing the fact that the electron is bound to the proton; energy must be added to a hydrogen atom to ionize it.
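Plugging in the quoted constants makes the numbers explicit; a short numerical check (for the 1s state, $n=1$):

```python
# Constants quoted above
alpha = 0.00729735          # fine-structure constant
mu_c2 = 510_721.0           # reduced-mass rest energy, eV

n = 1                       # 1s state
K = alpha**2 * mu_c2 / (2 * n**2)   # <K>  ~ +13.6 eV
V = -alpha**2 * mu_c2 / n**2        # <V>  ~ -27.2 eV
E = K + V                           # <H>  ~ -13.6 eV

assert abs(V / K + 2.0) < 1e-12     # virial theorem: <V> = -2<K>
```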
Note that the three expectation values depend only on one of the three quantum numbers, namely the principal or radial quantum number $n$. They are independent of the angular quantum numbers $l$ and $m$. This can be understood as a consequence of the hydrogenic Schrodinger equation having a non-obvious $SO(4)$ symmetry. | {
"domain": "physics.stackexchange",
"id": 62772,
"tags": "quantum-mechanics, homework-and-exercises, schroedinger-equation, hydrogen, virial-theorem"
} |
Why is there an exponent 4 after the brackets in sp3? | Question: My professor wrote an electron configuration for carbon as: 1s2 (sp3)4
I thought it was just 1s2 sp3 where did the 4 come from?
Answer: The electron configuration of carbon is canonically written as 1s²2s²2p². What you now can do is invoke orbital hybridization and combine the electrons in the 2s and 2p orbitals. Then you get sp³ hybrid orbitals with a total of 4 electrons in them. The 1s electrons stay where they are. To mark the new hybrid orbitals as separate, you put brackets around them.
You then end up with an electronic configuration of 1s²(2sp³)⁴.
"domain": "chemistry.stackexchange",
"id": 2192,
"tags": "orbitals, electronic-configuration"
} |
Is atmospheric pressure acting only on the contents or also on the container? | Question:
$22~\mathrm{g}$ of dry ice is placed in an empty $600~\mathrm{ml}$ closed vessel at $298~\mathrm{K}$. Find the final pressure inside the vessel, if all $\ce{CO2}$ gets evaporated?
Now by applying $pV = n\mathcal{R}T$, the value of $p$ comes out to be $20.4~\mathrm{atm}$, but in the solution shown, it adds $1~\mathrm{atm}$, i.e. atmospheric pressure, so the answer comes out to be $21.4~\mathrm{atm}$. Now why do we need to add atmospheric pressure? The question asks for the total pressure inside the vessel; isn't the atmospheric pressure acting only on the container, and not on its contents?
Answer: According to the way this question is worded, adding 1 atmosphere would be incorrect. The reason is that they specify an empty container - if you have a rigid container and it is truly empty (high vacuum), then the absolute pressure would be zero, and the gauge pressure would be -1 atm (see this wikipedia article for reference to these terms).
Once the $\ce{CO2}$ sublimates, the final absolute pressure would be given by $PV=nRT$ - there is no need to add atmospheric pressure. To find the gauge pressure, you would actually subtract one atmosphere.
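Running the numbers from this reasoning (22 g of CO2 is 0.5 mol; no atmospheric term for the initially evacuated vessel):

```python
# Absolute pressure from pV = nRT; nothing is added for the
# atmosphere, since the rigid vessel started at vacuum.
n = 22.0 / 44.0      # mol of CO2 (molar mass ~44 g/mol)
R = 0.08206          # L atm mol^-1 K^-1
T = 298.0            # K
V = 0.600            # L

p_absolute = n * R * T / V   # ~20.4 atm
p_gauge = p_absolute - 1.0   # ~19.4 atm, relative to the atmosphere
```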
It is possible that the question wants you to assume that "empty" means "filled with air at 1 atm," but that is a guess and I would argue that it's not safe to assume that based on the way the question is written. | {
"domain": "chemistry.stackexchange",
"id": 2594,
"tags": "physical-chemistry, vapor-pressure"
} |
Why are nearby clouds so different in brightness? | Question: I was traveling in the day time from Saint-Petersburg to Sochi and was watching various clouds passing by. After some time I noticed that even though some clouds are very close to each other, they have very different brightness.
See the image below. Here there's a tall white (cumulonimbus?) cloud, intersecting a flat high-altitude (altostratus?) cloud. But in spite of the fact that the latter is much flatter, it appears to reflect much less intense light, and looks dark on the background of the large white cloud.
What is the reason for this difference in brightness? I'd expect both clouds to be of similar color, just the flat one maybe more translucent, but not darker.
Answer: Those flatter, thinner clouds are less opaque.
In general, shadows from the cloud itself or other clouds explain most of the variation in brightness. But in this case I believe you give the answer yourself when you mention that the flatter clouds should be more translucent:
they let more of the sunlight through, i.e., they reflect less light and are thus less bright, especially in contrast to the thicker clouds;
you see them from a shallow angle, with the sun behind you, so most of the light that does get reflected is directed away from your eyes$^1$;
also, you're seeing them from above, and they might let you see more of the darker surface underneath.
$^1$Notice how the brightest parts of the clouds seem to be those facing you. Edit: As can be seen in the picture below, especially in the highlighted selection, the farther away you look from the line of sight to the sun, the less of its light is reflected toward your eyes.
source: https://cauldronsandcupcakes.com/2012/02/15/wanderlust-and-the-universal-ordering-system/ | {
"domain": "physics.stackexchange",
"id": 42208,
"tags": "optics, visible-light, everyday-life, atmospheric-science"
} |
What is the origin of the bactericidal properties of silver in water? | Question: I often hear that water gets purified by being in a silver vessel, which sounds plausible because of bactericidal feature of silver. What doesn't sound plausible, though, is the way it's explained: that silver releases ions into the water. Since silver is a noble metal, why would any "reaction" at all occur with something as neutral as water?
Is the above explanation nonsense? Does the "disinfection" of water happen only on contact with the metal?
Answer: Silver is not as inert as gold. Tarnish is the name we give to the phenomenon when silver metal is oxidized and becomes a salt. Surfaces made of silver tend to disinfect themselves pretty quickly. As for disinfecting water poured into a silver cup, I imagine that would take a little longer since you have to wait for silver to diffuse away from the surface and into the solution. But even very trace levels of silver can have strong antimicrobial effects. | {
"domain": "chemistry.stackexchange",
"id": 11,
"tags": "ions, purification, silver"
} |
Highest man-made voltage | Question: What was the highest voltage achieved and was it produced by electrostatic means or just some transformers and multipliers?
What are the limitations when it comes to producing voltage using electrostatic means?
Answer: Assuming you mean a macroscopic potential difference, the largest I know about was in the Nuclear Structure Facility accelerator at the Daresbury laboratory in the UK, and this was 30MV. The Wikipedia article on electrostatic particle accelerators claims this is about the highest possible in such devices. | {
"domain": "physics.stackexchange",
"id": 37671,
"tags": "electrostatics, potential, voltage"
} |
Showing information about states on a map when clicked | Question: I'm using JQVMap, and I have my code to where depending on what state you click, it'll reveal info about that state within a single div below the map. However, I feel there's a simplified, more efficient way to set up this code so there's not a lot of repeat, especially once I start adding more states. Is there?
switch(code){
case"tx":
$('#info-children').children(':not(#info-tx)').hide("slow");
$('#info-children').children('#info-tx').show("slow");
break;
case"il":
$('#info-children').children(':not(#info-il)').hide("slow");
$('#info-children').children('#info-il').show("slow");
break;
case"fl":
$('#info-children').children(':not(#info-fl)').hide("slow");
$('#info-children').children('#info-fl').show("slow");
break;
case"ga":
$('#info-children').children(':not(#info-ga)').hide("slow");
$('#info-children').children('#info-ga').show("slow");
break;
case"pa":
$('#info-children').children(':not(#info-pa)').hide("slow");
$('#info-children').children('#info-pa').show("slow");
break;
default:
$('#state-name').html(region);
$('#info-children').children(':not(#info-uhoh)').hide("slow");
$('#info-children').children('#info-uhoh').show("slow");
}
Answer: You could have a single if statement covering all your non-default cases, since all your $('#info-children')... sequences are the same, except for the state clicked.
So, you could just say
$('#info-children').children(':not(#info-' + code + ')').hide('slow');
and
$('#info-children').children('#info-' + code).show('slow');
So, you could just have one huge logical statement, checking if the code/state is tx, il, fl, ga, or pa. If that statement is true, run the above code; if not, run the default code. | {
"domain": "codereview.stackexchange",
"id": 12167,
"tags": "javascript, jquery"
} |
What is a "segregating gene"? | Question: What does the term "segregating" mean in references like the following:
"found 42% of a random sample of loci to be segregating"
"15 blood group systems are known to be segregating"
Answer: If, at a locus, you have several alleles, then this locus is said to be polymorphic or segregating.
A segregating locus is just an unintuitive name for a polymorphic locus.
"domain": "biology.stackexchange",
"id": 8176,
"tags": "population-genetics"
} |
Understanding the difference between timelike and spacelike separations | Question: From Woodhouse's General Relativity:
If $A$ is the origin and $B$ is a nearby event with coordinates $dt, dx, dy, dz$, then,
$$ds^2 = dt^2 - dx^2 - dy^2 - dz^2$$ is the same in all local inertial coordinate systems with origin A.
Timelike separation. If $ds^2 >0$, then $ds$ is the time from $A$ to $B$ on a clock travelling between the two events in free-fall
Spacelike separation. If $ds^2 < 0$, then $ds^2=-D^2$, where $D$ is the distance from $A$ to $B$ measured in a frame in free-fall in which $A$ and $B$ are simultaneous.
1) If we are talking about timelike separation why given that $ds^2 >0$ is $ds$ is the time from $A$ to $B$? I cannot see how this works in the equation above
2) If given that $ds^2 < 0$, I get how we can say $ds^2=-D^2$ but why are we now able to say that this is a distance (and not a time)? How can we discard considering the $dt^2$ in the above equation?
I have tried to think about this by writing
$$C^2 = dt^2 - dx^2 - dy^2 - dz^2$$ and
$$-C^2 = dt^2 - dx^2 - dy^2 - dz^2$$
but I cannot get my head around this.
Answer: I'll try to re-describe the same ideas in a different way. This isn't meant to be a quick answer to the question; rather, this is meant to be a resource to help build some intuition.
In this answer, the word "frame" is not used. That's because "frame" might carry connotations of something rigid, something defined by "axes." This answer is expressed using the more general concept of a coordinate system, which doesn't rely on anything like axes or straight lines.
Coordinates are arbitrary labels for the points in spacetime. They should assign a unique 4-tuple of numbers $(w,x,y,z)$ to each point, and they should do this in a smooth way, but otherwise they are arbitrary. Any worldline (curve in spacetime) can be described by giving the four coordinates as functions of some other parameter $\lambda$ that runs along the worldline. Examples will be shown below, after the general principles.
Mathematically, coordinate systems and worldlines are defined without the help of any geometric concepts like time, distance, timelike, or spacelike, and without the help of any dynamic concepts like free-fall. Geometry (including time) and free-fall are both defined instead by the metric. A convenient way to specify the metric is by specifying the line element. The line element takes any $\lambda$-parameterized worldline as input and returns a single function $G(\lambda)$ as output. In special relativity, the line element can be expressed as
$$
G(\lambda) = \dot w^2 - (\dot x^2+\dot y^2+\dot z^2)
\tag{1}
$$
where a dot denotes a derivative with respect to the parameter $\lambda$ along the given worldline. The worldline is called
timelike wherever $G(\lambda)>0$,
spacelike wherever $G(\lambda)<0$,
lightlike wherever $G(\lambda)=0$.
A worldline is called causal if it is either timelike or lightlike. The causality principle says that only a causal worldline can represent the history of a physical object. Proper time is defined only along such a worldline. Given any causal worldline, its proper time $\tau(\lambda)$ is defined by the condition
$$
\dot\tau^2 = G(\lambda) \geq 0.
\tag{2}
$$
This tells us how the proper time $\tau$ progresses along the worldline as a function of the parameter $\lambda$.
In hindsight, now that the line element (1) has been specified, we see that a worldline cannot be timelike unless $w$ changes monotonically along the worldline. In this sense, we can think of $w$ as a "timelike" coordinate — but it is still just a coordinate. Proper time is given by equation (2), and this is what an object actually experiences as time. Proper time is specific to the given worldline, and it is invariant under changes of the coordinate system.
If the quantity (1) is negative, then we have a spacelike worldline. Proper time is undefined along such a worldline. Physical objects, including clocks, cannot move according to such a worldline, so we shouldn't expect to have any invariant notion of the progression of time along such a worldline. What we have instead for such a worldline is proper distance $\ell$, given by the condition
$$
\dot\ell^2 = -G(\lambda) > 0.
\tag{3}
$$
Two points in spacetime are said to be "timelike separated" if they can be connected to each other by some timelike worldline, and they are said to be "spacelike separated" if they cannot be connected to each other by any causal worldline. The concept of "spacelike separated" events is an extension of the concept of "simultaneous" events. Spacelike-separated events cannot be time-ordered in any invariant way.
By the way, even if two points are timelike separated (which means that one of the points is unambiguously in the future of the other), they can still be connected to each other by a spacelike worldline. The following pair of examples illustrates this.
Example 1
Choose constants $A,B,C$ and consider the worldline given by
$$
w(\lambda)=A\lambda
\hskip1cm
x(\lambda)=B\lambda+C
\hskip1cm
y(\lambda)=0
\hskip1cm
z(\lambda)=0.
\tag{4}
$$
Then
$$
\dot w = A
\hskip1cm
\dot x = B
\hskip1cm
\dot y = 0
\hskip1cm
\dot z = 0,
\tag{5}
$$
so $G(\lambda)=A^2-B^2$, which is independent of $\lambda$ in this simple example. This worldline is:
timelike if $A^2>B^2$, and then equation (2) gives $\tau(\lambda) = \sqrt{A^2-B^2}\,\lambda$ for the proper time along this worldline.
spacelike if $A^2<B^2$, and then equation (3) gives $\ell(\lambda) = \sqrt{B^2-A^2}\,\lambda$ for the proper distance along this worldline.
lightlike if $A^2=B^2$, and then the proper time and proper distance are both zero along this worldline.
For the special metric defined by (1), a timelike (or lightlike) worldline corresponds to free-fall if and only if the derivatives $(\dot w,\dot x,\dot y,\dot z)$ are all proportional to each other. In particular:
The worldline defined by (4) represents free-fall if $A^2\geq B^2$.
If $A^2<B^2$, then it does not represent any physically possible motion.
Example 2
Consider the worldline defined by
\begin{align}
w(\lambda) &= \lambda + \lambda^3 \\
x(\lambda) &= \cos(\beta\lambda + \beta\lambda^3) \\
y(\lambda) &= \sin(\beta\lambda + \beta\lambda^3) \\
z(\lambda) &= 0.
\tag{6}
\end{align}
where $\beta$ is a constant. For each value of $\lambda$, these equations specify the coordinates of one point in the four-dimensional manifold, so they define a worldline. Plug (6) into (1) to get
$$
G(\lambda)=(1-\beta^2)(1+3\lambda^2)^2.
\tag{7}
$$
If $\beta^2<1$, then this worldline is timelike, and then equation (2) says that its proper time is given by
$$
\tau(\lambda)=\sqrt{1-\beta^2}\,(\lambda+\lambda^3).
\tag{8}
$$
This tells us how the proper time $\tau$ progresses along the worldline as a function of the parameter $\lambda$. Equation (8) is independent of the coordinates, as it should be; proper time is invariant under coordinate transformations.
If $\beta^2>1$, then this worldline is spacelike. Notice, though, that this spacelike worldline passes through all of the points $(w,x,y,z)=(2\pi\,n/\beta,1,0,0)$ for all integers $n$, and these points are also all contained in the timelike worldline (4) with $A=1$, $B=0$, and $C=1$. (The parameters $\lambda$ of the two worldlines are not the same; the symbol $\lambda$ was recycled.) This shows that the worldline (6) with $\beta^2>1$ is an example of a spacelike worldline that connects some timelike-separated points. | {
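As a sanity check on equation (8), one can integrate $\sqrt{G(\lambda)}$ numerically along the worldline (6) and compare with the closed form. A small sketch, with $\beta$ and the integration range chosen arbitrarily:

```python
import math

beta = 0.5     # any beta with beta^2 < 1 gives a timelike worldline
lam_max = 1.0  # integrate lambda from 0 to lam_max

# dtau/dlambda = sqrt(G(lambda)) = sqrt(1 - beta^2) * (1 + 3*lambda^2)
def dtau(lam):
    return math.sqrt(1.0 - beta ** 2) * (1.0 + 3.0 * lam ** 2)

# simple trapezoidal integration of dtau/dlambda along the worldline
n = 10_000
h = lam_max / n
tau_numeric = sum(0.5 * h * (dtau(i * h) + dtau((i + 1) * h)) for i in range(n))

# closed form from equation (8): tau = sqrt(1 - beta^2) * (lambda + lambda^3)
tau_exact = math.sqrt(1.0 - beta ** 2) * (lam_max + lam_max ** 3)
print(tau_numeric, tau_exact)
```

The two values agree to within the integration error, as expected, since $\int_0^\Lambda \sqrt{1-\beta^2}\,(1+3\lambda^2)\,d\lambda = \sqrt{1-\beta^2}\,(\Lambda+\Lambda^3)$.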
"domain": "physics.stackexchange",
"id": 54463,
"tags": "metric-tensor, relativity, conventions, causality, distance"
} |
What type of visualization is this and what are my options to produce something like it? | Question: I am looking to reproduce the bottom part of the visualization below. What is this type of visualization called? What are my options to reproduce it? Preferably using Python or R, but I'm open to using other tools as well.
Answer: It's similar to a violin plot, which shows the shape of the distribution of a variable. However here the X axis shows only categorical values, it's not clear if the shape is based on some underlying numerical variable (this is a requirement for a violin plot).
Violin plots can be made in Python or in R. | {
"domain": "datascience.stackexchange",
"id": 9684,
"tags": "python, r, visualization"
} |
What is the maximum data rate a wireless link can support using BPSK for given BER and SNR? | Question: What is the maximum data rate a wireless link can support using BPSK for a 10^-6 BER, if the channel bandwidth is 200 kHz and the SNR is 21.335 dB?
As per my understanding, Eb/No can be calculated for BPSK for a given BER using curves, and the value found is 11.29 dB; after that we can use the relation
Eb/No = S/N * W/R
R comes out to be around 2 Mbps! Is it correct? Need your input!
Answer: In BPSK, BER is equal to $Q(\sqrt{2E_b/N_0})$, so you need $E_b/N_0>11.3$ (as a linear ratio, not in dB) to have a BER no larger than $10^{-6}$. Since the bandwidth is 200 kHz, the maximum rate is 400 kb/s. So, since $SNR\approx136$,
$$\frac{E_b}{N_0}=\frac{S}{N}\frac{W}{R}=136\cdot\frac{200\cdot10^3}{400\cdot10^3}=\frac{136}{2}=68$$
Since $68>11.3$, you can be sure that you can transmit at 400 kb/s and meet your BER requirement.
Note that I'm assuming baseband transmission (so that the bit rate is twice the bandwidth). Note also that the question, as posed, does not make a lot of sense, since the bandwidth will essentially determine your bit rate. | {
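The arithmetic above is easy to check numerically, using $Q(x) = \tfrac{1}{2}\operatorname{erfc}(x/\sqrt{2})$; the threshold 11.3 and the 400 kb/s baseband rate are taken from the text above:

```python
import math

def Q(x):
    """Gaussian tail probability: Q(x) = 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

snr_db = 21.335
snr = 10 ** (snr_db / 10)  # linear SNR, roughly 136
bandwidth = 200e3          # channel bandwidth W in Hz
rate = 2 * bandwidth       # R = 400 kb/s under the baseband assumption

ebn0 = snr * bandwidth / rate   # Eb/N0 = (S/N) * (W/R), about 68
ber = Q(math.sqrt(2.0 * ebn0))  # BPSK bit error rate at that Eb/N0

print(ebn0, ber)
```

Since the resulting $E_b/N_0$ is far above 11.3, the computed BER comes out many orders of magnitude below the $10^{-6}$ requirement.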
"domain": "dsp.stackexchange",
"id": 2174,
"tags": "digital-communications, bpsk"
} |
Rubberduck VBA Parser, Episode V: The ANTLR Strikes Back | Question: I changed my mind. I don't want to implement 200-some Node classes. Instead, I'll be working directly with the ANTLR generated classes, to implement the Rubberduck code inspections, unit test method discovery, the "Code Explorer" tree view, and everything else Rubberduck might need a parse tree for.
The Node derived classes already implemented aren't wasted though: I'll eventually use them to expose a high-level view of the VBA code, to VBA itself (through COM interop) - that will enable very cool stuff, like VBA code that can enumerate its modules' members.
I changed the VBParser - I renamed ParseInternal to Parse (it becomes an overload of the public Parse method), and made it public as well:
public class VBParser : IRubberduckParser
{
public INode Parse(string projectName, string componentName, string code)
{
var result = Parse(code);
var walker = new ParseTreeWalker();
var listener = new VBTreeListener(projectName, componentName);
walker.Walk(listener, result);
return listener.Root;
}
public IParseTree Parse(string code)
{
var input = new AntlrInputStream(code);
var lexer = new VisualBasic6Lexer(input);
var tokens = new CommonTokenStream(lexer);
var parser = new VisualBasic6Parser(tokens);
return parser.startRule();
}
}
Now I can have an ANTLR IParseTree wherever I need it. One place I'm going to need it, is to discover test methods in the active VBA project. Rubberduck test methods are always public, parameterless methods, so I wrote this extension method/class:
public static class ParseTreeExtensions
{
/// <summary>
/// Finds all public procedures in specified parse tree.
/// </summary>
public static IEnumerable<VisualBasic6Parser.SubStmtContext> GetPublicProcedures(this IParseTree parseTree)
{
var walker = new ParseTreeWalker();
var listener = new PublicSubListener();
walker.Walk(listener, parseTree);
return listener.Members;
}
private class PublicSubListener : VisualBasic6BaseListener
{
private readonly IList<VisualBasic6Parser.SubStmtContext> _members = new List<VisualBasic6Parser.SubStmtContext>();
public IEnumerable<VisualBasic6Parser.SubStmtContext> Members { get { return _members; } }
public override void EnterSubStmt(VisualBasic6Parser.SubStmtContext context)
{
var visibility = context.visibility();
if (visibility == null || visibility.PUBLIC() != null)
{
_members.Add(context);
}
}
}
}
Here's a unit test for it:
[TestMethod]
public void GetPublicProceduresReturnsPublicSubs()
{
IRubberduckParser parser = new VBParser();
var code = "Sub Foo()\nEnd Sub\n\nPrivate Sub FooBar()\nEnd Sub\n\nPublic Sub Bar()\nEnd Sub\n\nPublic Sub BarFoo(ByVal fb As Long)\nEnd Sub\n\nFunction GetFoo() As Bar\nEnd Function";
var module = parser.Parse(code);
var procedures = module.GetPublicProcedures().ToList();
var parameterless = procedures.Where(p => p.argList().arg().Count == 0).ToList();
Assert.AreEqual(3, procedures.Count);
Assert.AreEqual(2, parameterless.Count);
}
This works, so I'm going to run with it. I don't want to expose the ANTLR generated classes to COM, so I'm moving it all into its own assembly, which Rubberduck will reference.
It looks to me like VBParser is somewhat mixing abstraction levels... but is that much of an issue? Is it a good idea to "walk" the tree like this? I'm not re-parsing the code, but I'll probably end up walking it multiple times when I run code inspections, and I'll have to implement several [simple] tree listeners, to retrieve the "interesting" nodes. Is this how I'm supposed to be doing this?
Anything else strikes you as weird with this approach?
Answer: One would expect that a method of a parser which returns an IParseTree would be named Parse() instead of startRule(), which additionally violates the naming guidelines.
You should declare variables as near as possible to their usage. You could also omit the assignment of the call to Parse()
public INode Parse(string projectName, string componentName, string code)
{
var walker = new ParseTreeWalker();
var listener = new VBTreeListener(projectName, componentName);
walker.Walk(listener, Parse(code));
return listener.Root;
}
The VisualBasic6Parser.SubStmtContext.visibility() method also does not conform to the naming guidelines.
Is it a good idea to "walk" the tree like this?
It is hard to say something about something one can't see.
It looks to me like VBParser is somewhat mixing abstraction levels... but is that much of an issue?
It is a little bit distracting to see inside a Parse() method of a VBParser class which implements an IRubberduckParser interface that a VisualBasic6Parser class is used. But I have no idea how to solve this and if this has to be solved. | {
"domain": "codereview.stackexchange",
"id": 12035,
"tags": "c#, parsing, antlr, rubberduck"
} |
Identification - What kind of animal leaves these tracks? | Question: I live in the suburbs of Washington DC where it recently snowed. I was admiring all of the tracks left behind by wildlife of all kinds when I came across these prints, that I was unable to recognize from charts of common animals prints online.
What is unique about them is that each print has 3 points, and there is only one straight line of prints, about 2-3 feet apart, whereas I would imagine a deer or other 4-legged animal would leave staggered footprints. We have a lot of deer in these parts, but these don't seem to match deer print patterns I've seen online.
Each print is about 5 inches by 4 inches at their widest/longest points (they are longer than they are wide).
Note, the snow has melted somewhat since they were initially made.
Answer: The only candidate I could imagine is a hare.
Sometimes it leaves a single track from its fore-limbs (from a theoretical point of view, I'd say it happens when the snow is relatively deep).
"domain": "biology.stackexchange",
"id": 1619,
"tags": "species-identification"
} |
Why is there a difference between the magnitude of the magnetic field of a finite wire in contrast to an infinite wire? | Question: Besides the two equations (from Ampère's Law & the Biot-Savart Law), why is there a weaker magnetic field produced by a finite wire than by an infinite wire? If we experimented with a wire (A) and another wire that is 100x the length of wire (A), and wanted to measure the magnitude of the magnetic field at an equal point (P), the longer wire would produce a greater magnitude.
If the parameters are set to have an equal current and wire shape (except length), why would they differ?
Answer: Your mistake appears to be to believe that the magnetic field produced by a short piece of current-carrying wire is only non-zero at points lying in a plane perpendicular to the wire, going through the wire segment.
That is not the case. The Biot-Savart law tells us that each current-carrying wire segment contributes a magnetic field, given in vector form by
$$ d\vec{B}(\vec{r}) = \frac{\mu_0 I}{4\pi} \frac{d\vec{l} \times \vec{r'}}{|\vec{r'}|^3},$$
where $\vec{r}$ is the position vector in space where you wish to know the B-field and $\vec{r'}$ is a vector from a point on the wire to that position in space. That is, the magnetic field produced by each wire segment is in a direction given by the vector product of the wire segment's instantaneous direction and a vector between the wire segment and the point in space where you wish to calculate the B-field. Even a long way along the infinite wire from where you have marked point $P$, there will be a contribution, because $d\vec{l} \times \vec{r'}$ only approaches zero asymptotically with distance from $P$.
The total magnetic field then needs to be calculated by summing up (integrating) over the contributions from all wire segments in a vector fashion.
Since the short piece of wire contains wire segments that are just a subset of those making up an infinite piece of wire, then the B-field from the short piece of wire must be smaller. | {
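This can be illustrated numerically: summing Biot-Savart contributions over a finite straight wire (at perpendicular distance $d$ from its midpoint) gives a field strictly below the infinite-wire result $\mu_0 I/2\pi d$. A rough sketch, with arbitrary example values:

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability in T*m/A

def b_finite(current, d, length, segments=100_000):
    """B at perpendicular distance d from the midpoint of a straight finite
    wire, by summing Biot-Savart contributions
    dB = (mu0*I/4pi) * d*dl / (d^2 + l^2)^(3/2)."""
    dl = length / segments
    total = 0.0
    for i in range(segments):
        l = -length / 2 + (i + 0.5) * dl  # midpoint of this segment
        total += d * dl / (d * d + l * l) ** 1.5
    return MU0 * current / (4 * math.pi) * total

def b_infinite(current, d):
    """Textbook result for an infinitely long straight wire."""
    return MU0 * current / (2 * math.pi * d)

I, d = 1.0, 0.1  # 1 A, field point 10 cm from the wire
short = b_finite(I, d, length=0.5)   # 50 cm wire
long_ = b_finite(I, d, length=50.0)  # 50 m wire, nearly "infinite" at this d
print(short < long_ < b_infinite(I, d))
```

The 50 m wire already reproduces the infinite-wire value to within a few parts per million at this distance, while the 50 cm wire falls noticeably short, exactly because it is missing the far-away segments' contributions.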
"domain": "physics.stackexchange",
"id": 36601,
"tags": "electromagnetism, magnetic-fields"
} |
[roscore] Following the tutorials ... unexpected output? | Question:
Hi guys,
Following the Tutorial #6.- "Understanding Topics Basics", at the first step (1.1.1.- roscore), quoting:
Let's start by making sure that we have roscore running, in a new terminal:
$ roscore
If you left roscore running from the last tutorial, you may get the error message:
roscore cannot run as another roscore/master is already running.
Please kill other roscore/master processes before relaunching
However, if I leave one "roscore" process running and launch a new one in a different terminal window, I get the following error:
ERROR: Unable to start XML-RPC server, port 11311 is already in use
Unhandled exception in thread started by <bound method XmlRpcNode.run of <rosgraph.xmlrpc.XmlRpcNode object at 0x8cd3e2c>>
And after the stacktrace:
socket.error: [Errno 98] Address already in use
ERROR: could not contact master [http://pfc-VirtualBox:11311/]
[master] killing on exit
So, my question is: Is this a kind of "bug" in the tutorial, or should I worry about that error being thrown instead of the output suggested in the tutorial?
Thanks in advance for your answer!
Originally posted by 1morelearner on ROS Answers with karma: 57 on 2012-07-13
Post score: 0
Original comments
Comment by LittleFishLove on 2018-02-02:
Hi, bro I met the same problem with you , I want to know if the problem has been solved ? If solved, can you give me some help? Thank you very much!
Comment by ahendrix on 2018-02-02:
This problem was solved; see the answer below. If that solution doesn't work for you, you should ask a new question. Asking other users to contact you directly is against the community spirit of the forum, so I've removed your email address from your comment.
Answer:
Almost certainly a bug in the tutorial. In general, you can't have more than one roscore running; a good rule of thumb is to fire one up, minimize that window, and forget about it.
Originally posted by Mac with karma: 4119 on 2012-07-13
This answer was ACCEPTED on the original site
Post score: 2 | {
"domain": "robotics.stackexchange",
"id": 10190,
"tags": "ros, roscore, tutorials"
} |
What is a mode? | Question: The word mode pops up in many fields of physics, yet I can't remember ever encountering a simple but precise definition.
After having searched fruitlessly on this site as well, an easy to find place with (one or more) good answers seems in order.
Objective
Ideally, answers should give an intuitive and easy-to-remember definition of what a mode is, preferably in a general context. If limitation is necessary for a detailed answer, assume a context of theoretical physics, e.g. mode expansions in quantum field theory.
Answer: In a very mathematical sense, more often than not a mode refers to an eigenvector of a linear equation.
Consider the coupled springs problem
$$\frac{d^2}{dt^2} \left[ \begin{array}{cc} x_1 \\ x_2 \end{array} \right]
=\left[ \begin{array}{cc}
- 2 \omega_0^2 & \omega_0^2 \\
\omega_0^2 & - 2 \omega_0^2
\end{array} \right]
\left[ \begin{array}{cc} x_1 \\ x_2 \end{array} \right]$$
or in basis independent form
$$
\frac{d^2}{dt^2}\lvert x(t) \rangle = T \lvert x(t) \rangle \, .$$
This problem is hard because the equations of motion for $x_1$ and $x_2$ are coupled.
The normal modes are (up to scale factor)
$$\left[ \begin{array}{cc} 1 \\ 1 \end{array} \right]
\quad \text{and} \quad \left[ \begin{array}{cc} 1 \\ -1 \end{array} \right] \, .$$
These vectors are eigenvectors of $T$.
Being eigenvectors, if we expand $\lvert x(t) \rangle$ and $T$ in terms of these vectors, the equations of motion uncouple.
In other words
The set of normal modes is the vector basis which diagonalizes the equations of motion (i.e. diagonalizes $T$).
That definition will get you pretty far.
The situation is the same in quantum mechanics.
The normal modes of a system come from Schrodinger's equation
$$i \hbar \frac{d}{dt}\lvert \Psi(t) \rangle = \hat{H} \lvert \Psi \rangle \, .$$
An eigenvector of $\hat{H}$ is a normal mode of the system, also called a stationary state or eigenstate.
These normal modes have another important property: under time evolution they maintain their shape, picking up only complex prefactors $\exp[-i E t / \hbar]$ where $E$ is the mode's eigenvalue under the $\hat{H}$ operator (i.e. the mode's energy).
This was actually also the case in the classical system too.
If the coupled springs system is initiated in an eigenstate of $T$ (i.e. in a normal mode), then it remains in a scaled version of that normal mode forever.
In the springs case, the scale factor is $\cos(\sqrt{-\lambda}\, t)$, where $\lambda$ is the (negative) eigenvalue of the mode under the $T$ operator.
From the above discussion we can form a very physical definition of "mode":
A mode is a trajectory of a physical system which does not change shape as the system evolves. In other words, when a system is moving in a single mode, the positions of its parts all move with same general time dependence (e.g. sinusoidal motion with a single frequency) but may have different relative amplitudes. | {
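A tiny numeric check of the eigenvector claim, assuming the standard symmetric coupling matrix $T = [[-2, 1], [1, -2]]$ (in units of $\omega_0^2$, for two equal masses coupled by three identical springs): applying $T$ to each mode only rescales it by its eigenvalue.

```python
# Check that the normal modes [1, 1] and [1, -1] are eigenvectors of the
# symmetric coupling matrix T = [[-2, 1], [1, -2]] (units of omega_0^2).
# This particular T is an assumption here: it is the standard matrix for
# two equal masses coupled by three identical springs.
T = [[-2, 1], [1, -2]]

def matvec(m, v):
    return [m[0][0] * v[0] + m[0][1] * v[1],
            m[1][0] * v[0] + m[1][1] * v[1]]

mode_sym = [1, 1]    # in-phase mode, eigenvalue -1 -> frequency omega_0
mode_asym = [1, -1]  # out-of-phase mode, eigenvalue -3 -> frequency sqrt(3)*omega_0

print(matvec(T, mode_sym))   # -> [-1, -1] = -1 * mode_sym
print(matvec(T, mode_asym))  # -> [-3, 3]  = -3 * mode_asym
```

Any other initial shape mixes the two modes, and the two frequencies beat against each other; only these two shapes evolve with a single frequency.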
"domain": "physics.stackexchange",
"id": 25582,
"tags": "terminology, definition, oscillators"
} |
Formula for projection of electric field onto plane | Question: Suppose I have a vector field (for example an electric field $\mathbf{E}(\mathbf{r})$, where $\mathbf{r}$ is a point in space), and also a plane with normal $\hat{n}$ which passes through a point $\mathbf{p}$. How do I find the projection of this vector field onto the plane? I think the answer might simply be $\mathbf{E}(\mathbf{r})\times \hat{n}$ but then this doesn't use the point $\mathbf{p}$ at all. I know this question is maybe better suited for the math stackexchange but it's for an electromagnetic application.
Answer: The component of $\mathbf{E}$ along the normal $\mathbf{\hat{n}}$ is given by $(\mathbf{E} \cdot \mathbf{\hat{n}}) \mathbf{\hat{n}}$. So the component of $\mathbf{E}$ along the plane is simply $\mathbf{E} - (\mathbf{E} \cdot \mathbf{\hat{n}}) \mathbf{\hat{n}}$. The point $\mathbf{p}$ is irrelevant. | {
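A minimal numeric illustration with arbitrary example vectors: the residual $\mathbf{E} - (\mathbf{E}\cdot\mathbf{\hat{n}})\mathbf{\hat{n}}$ has zero component along $\mathbf{\hat{n}}$, so it lies in the plane.

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def project_onto_plane(e, n_hat):
    """Component of vector e lying in the plane with unit normal n_hat:
    e - (e . n_hat) n_hat."""
    c = dot(e, n_hat)
    return [ei - c * ni for ei, ni in zip(e, n_hat)]

# arbitrary example field vector and plane normal (normalized below)
e = [3.0, -1.0, 2.0]
n = [1.0, 2.0, 2.0]
norm = math.sqrt(dot(n, n))
n_hat = [x / norm for x in n]

e_plane = project_onto_plane(e, n_hat)
print(dot(e_plane, n_hat))  # zero up to floating-point rounding
```

Note that only the direction of $\mathbf{\hat{n}}$ enters the formula, which is why the point $\mathbf{p}$ on the plane never appears.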
"domain": "physics.stackexchange",
"id": 67345,
"tags": "electromagnetism, vectors, vector-fields"
} |
Is there a notion of "inevitable reduction?" | Question: I was just working on a semantics paper and realized I needed a notion of inevitable reduction. I came up with this definition:
Let $\rightarrow$ be a binary relation. We say that $a$ inevitably reduces to $b$, or $a \rightarrow^{\forall *} b$, if either $a=b$ or, for all derivations $a \rightarrow x_1 \rightarrow x_2 \rightarrow \dots$ (where each derivation is either infinite or terminates in a normal form), there is some $i$ such that $x_i = b$.
This seems way too simple and fundamental to not already exist, but I've never encountered it before. (I've read "Term Rewriting and All That" in full, and just took a skim through "Advanced Topics in Term Rewriting".) It's clearly related to confluence, and can also be stated in terms of dominators. Still, I have no leads for finding prior uses.
So, anyone know prior work on this?
Answer: I have never heard of this exact concept in rewrite theory, which certainly doesn't prove it hasn't been considered.
However, I will make the point that it may not be quite as useful a concept as it first appears, at least in classical rewrite theory because it behaves poorly under substitution:
If $t\rightarrow t'$ is an inevitable reduction, and $t$ contains a free variable $x$ which also appears in $t'$, then $t[u/x]\rightarrow t'[u/x]$ is never an inevitable reduction if $u$ is not normal (since if $u\rightarrow u'$ then $t[u/x] \rightarrow t'[u'/x]$ can avoid $t'[u/x]$).
It would seem that inevitable reducts are quite rare in general, even for pretty reasonable systems, except for normal forms (which are interesting for other reasons). Obviously this only holds for general notions of reductions; specific strategies might have many more (deterministic strategies only have inevitable reductions).
In general, rewrite theorists are more interested in which redexes need to be reduced (to hit normal forms) rather than which terms need to be hit. This has been studied quite a bit, under the name of needed reductions: a needed redex is one that must be triggered in order to reach a normal form. See e.g. Needed reduction and spine strategies for the lambda calculus or Reduction Strategies for Left-Linear Term Rewriting Systems.
I believe this latter notion is stable under substitution for reasonable systems (say, orthogonal). | {
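For a finite reduction relation, the definition can be checked directly by exhaustive search. A sketch (the representation and names are my own): every maximal derivation from $a$ must hit $b$, where a derivation entering a cycle that avoids $b$ counts as an infinite derivation avoiding $b$.

```python
def inevitable(rel, a, b):
    """True iff a inevitably reduces to b under the finite relation `rel`
    (a dict mapping each element to the list of its one-step reducts)."""
    def every_path_hits_b(x, on_path):
        if x == b:
            return True                 # this branch has reached b
        succs = rel.get(x, [])
        if not succs:
            return False                # normal form other than b
        if x in on_path:
            return False                # cycle avoiding b: infinite derivation
        return all(every_path_hits_b(y, on_path | {x}) for y in succs)
    return every_path_hits_b(a, frozenset())

# diamond: a -> {p, q} -> d; every derivation passes through d but not p
rel = {"a": ["p", "q"], "p": ["d"], "q": ["d"], "d": []}
print(inevitable(rel, "a", "d"))  # True: both branches reach d
print(inevitable(rel, "a", "p"))  # False: a -> q -> d avoids p
print(inevitable({"a": ["a"]}, "a", "b"))  # False: a -> a -> ... never hits b
```

The diamond example also shows the connection to dominators mentioned above: $b$ must dominate every sink and cycle reachable from $a$ in the reduction graph.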
"domain": "cstheory.stackexchange",
"id": 4690,
"tags": "reductions, semantics, term-rewriting-systems"
} |
Merge a set of strings based on overlaps | Question: I have a set of strings that are slices of a single, longer string. It is not guaranteed, however, that any two strings from this set must overlap: only that all of them together overlap. I want to reconstruct the original longer string.
For example, suppose the longer string is LGHFDJGHSJGHDFKSJBSFSHJKADGHS. The slices might be FKSJBSFS, JKADGHS, LGHFDJGHS, SFSHJKA and GHSJGHDFK. I need an algorithm to reconstruct the original LGHFDJGHSJGHDFKSJBSFSHJKADGHS.
I understand that this algorithm may be somewhat probabilistic: given any two strings, the probability that they overlap by pure chance is $\frac{2}{n}$ where $n$ is the size of the alphabet used. I want to find the "best" overlapping longer string.
So far I've asked around and I've been told to look into suffix trees and de Bruijn graphs. I have done so but I can't see how I can apply them to solve this problem. Any help would be appreciated.
Answer: This is the DNA sequence assembly problem, an algorithms problem that has been well studied in the literature. You can apparently find entire courses on the subject. In general, the problem is NP-hard. However, there are many algorithms that seem to work well in practice; greedy assembly is one very simple approach, but there are many others. | {
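For concreteness, here is a minimal greedy-assembly sketch in Python (my own illustration of the greedy approach mentioned above, not a quoted algorithm): repeatedly merge the two pieces with the largest suffix/prefix overlap until one string remains.

```python
# Greedy-assembly sketch: repeatedly merge the two pieces with the
# largest suffix/prefix overlap until a single string remains.
# This is a heuristic: spurious overlaps can lead it away from the
# true original string, but every input piece always ends up as a
# substring of the result.

def overlap(a, b):
    """Length of the longest suffix of a that is a prefix of b."""
    for length in range(min(len(a), len(b)), 0, -1):
        if a.endswith(b[:length]):
            return length
    return 0

def greedy_assemble(pieces):
    pieces = list(pieces)
    while len(pieces) > 1:
        best_o, best_i, best_j = -1, 0, 1
        for i, a in enumerate(pieces):
            for j, b in enumerate(pieces):
                if i != j:
                    o = overlap(a, b)
                    if o > best_o:
                        best_o, best_i, best_j = o, i, j
        merged = pieces[best_i] + pieces[best_j][best_o:]
        pieces = [p for k, p in enumerate(pieces)
                  if k not in (best_i, best_j)] + [merged]
    return pieces[0]
```

On the example slices above this produces a superstring containing all five pieces, but because ties between equal overlaps are broken arbitrarily it is not guaranteed to recover exactly LGHFDJGHSJGHDFKSJBSFSHJKADGHS.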
"domain": "cs.stackexchange",
"id": 11720,
"tags": "algorithms, graphs, strings"
} |
BF interpreter in Python that uses recursion to handle loops | Question: I wrote a Brainfuck interpreter in Python and I'm wondering how to simplify it.
I handle separately loop commands and the others. A recursive function deals with loops.
import sys
array = [0] # byte array
ptr = 0 # data pointer
# Read and execute commands except loops
def readNoLoop(char):
global ptr
# Increment/Decrement the byte at the data pointer
if char=='+':
array[ptr] += 1
elif char=='-':
array[ptr] -= 1
if array[ptr] < 0:
raise ValueError("Negative value in array")
# Increment/Decrement the data pointer
elif char=='>':
ptr += 1
while(ptr>=len(array)-1):
array.append(0)
elif char=='<':
ptr -= 1
if ptr < 0:
raise ValueError("Negative value of pointer")
# Output the byte at the data pointer
elif char=='.':
sys.stdout.write(chr(array[ptr]))
# Store one byte of input in the byte at the data pointer
elif char==',':
array[ptr] = ord(sys.stdin.read(1))
# Recursive function to deal with loops
def interpret(charChain):
it = 0
loopBegin = []
while(it<len(charChain)):
if charChain[it]=='[':
loopBegin.append(it)
elif charChain[it]==']':
subChain = charChain[loopBegin[-1]+1:it]
while(array[ptr]>0):
interpret(subChain)
loopBegin.pop()
else:
readNoLoop(charChain[it])
it+=1
# Main
if __name__ == "__main__":
# Brainfuck program to print "Hello World!"
code = '++++++++[>++++[>++>+++>+++>+<<<<-]>+>+>->>+[<]<-]>>.>---.+++++++..+++.>>.<-.<.+++.------.--------.>>+.>++.'
try:
interpret(code)
except:
raise
My main concern is to know if it's possible to have a simpler code without recursion.
Answer: I much prefer reading @Dex'ter's code with Python coding style applied, so I'll start with that.
There are a number of issues with your code that make your interpreter somewhat buggy:
iterate over characters instead of indexes. The for loop in Python is one of its key features. It lets you easily iterate over the elements of a collection without having to handle indexes to access each of them. Still need indexes for other purposes? enumerate is your friend, then.
def interpret(char_chain):
loop_begin = []
for it, character in enumerate(char_chain):
if character == '[':
loop_begin.append(it)
elif character == ']':
# ... and so on
command '>': since you can only increment data_pointer by one at a time, you'll never have to catch up with a gap between the size of byte_array and the value of data_pointer. At maximum, you’ll be adding 1 extra cell.
elif char == '>':
data_pointer += 1
if data_pointer == len(byte_array):
byte_array.append(0)
except: raise: never use bare excepts, always specify which kind of exceptions you’re expecting. You never know what could happen (ValueErrors that you are raising, KeyboardInterrupt, RuntimeError… the list goes on) but want to handle everything the same way. Moreover, you’re not prepared to handle anything because the only thing you do with your exception is re-raise it… just let the exception be and remove your try .. except instead.
command ',': your interpreter does not handle end of file when reading input. sys.stdin.read(1) can return '' when EOF is reached, and ord('') is a TypeError. See what the issue is about and examples of programs that handle different conventions. The key point here is: whatever convention you choose to apply, you need to handle EOFs.
command '[': your recursion is broken. The completely inefficient '+++++++++++++++++++++++++++++++++.' should output !. Same for '[+]+++++++++++++++++++++++++++++++++.' since the first cell is at 0 when starting the interpreter and thus the loop will be skipped. However, your interpreter does nothing when encountering '[' to signal whether the loop should be skipped, and thus the next character will always be handled by read_no_loop. Recursion can be a powerful tool here to handle going back and forth between the beginning and the end of the loop; or you could handle indexes by hand like you do, but both seem a bit inefficient.
What you'll need to have, though, is a flag telling you if you should skip interpreting the loop. And if you have it manage nested levels of loops, it's even better.
def interpret(char_chain):
skip = 0
for it, character in enumerate(char_chain):
if skip:
skip += character == '[' # Increment the level of nested loops to skip
skip -= character == ']' # Decrement the level of nested loops to skip
continue # Do nothing else, since we have to skip that character
if character == '[':
while byte_array[data_pointer]:
# Recurse to interpret the loop
interpret(char_chain[it+1:])
# Skip the loop when the current cell is 0
skip = 1
elif character == ']':
# End the recursion when reaching the end of the loop
return
else:
read_no_loop(character)
Here I use the fact that booleans are a subclass of integers and that False is 0 and True is 1. This is considered rather bad practice, but I like the conciseness of that form. If it feels more natural to you, you can write if character == '[': skip += 1 instead of skip += character == '['.
One last thing to note: it would be better if the user could provide the BF code on the command line when invoking the program, instead of having to modify the source to do so. The complete program would look like:
import sys
byte_array = [0]
data_pointer = 0
def read_no_loop(char):
global data_pointer
# Increment/Decrement the byte at the data pointer
if char == '+':
byte_array[data_pointer] += 1
elif char == '-':
byte_array[data_pointer] -= 1
if byte_array[data_pointer] < 0:
raise ValueError("Negative value in byte_array")
# Increment/Decrement the data pointer
elif char == '>':
data_pointer += 1
if data_pointer == len(byte_array):
byte_array.append(0)
elif char == '<':
data_pointer -= 1
if data_pointer < 0:
raise ValueError("Negative value of pointer")
# Output the byte at the data pointer
elif char == '.':
sys.stdout.write(chr(byte_array[data_pointer]))
# Store one byte of input in the byte at the data pointer
elif char == ',':
character = sys.stdin.read(1)
if character:
byte_array[data_pointer] = ord(character)
else: # EOF
byte_array[data_pointer] = 0 # choose you convention and document it.
def interpret(char_chain):
skip = 0
for it, character in enumerate(char_chain):
if skip:
skip += character == '['
skip -= character == ']'
continue # Actually skip that character
if character == '[':
while byte_array[data_pointer]:
# Recurse to interpret the loop
interpret(char_chain[it+1:])
# Skip the loop when the current cell is 0
skip = 1
elif character == ']':
# End the recursion when reaching the end of the loop
return
else:
read_no_loop(character)
if __name__ == "__main__":
try:
code = sys.argv[1]
except IndexError:
print("You should provide the BF code as first argument of this program")
else:
interpret(code)
# Be nice with the user and output a newline to not mess up his shell
print() | {
"domain": "codereview.stackexchange",
"id": 19529,
"tags": "python, beginner, recursion, interpreter, brainfuck"
} |
installation fails trying to fix dependencies | Question:
I do not know what to do. Every time I try to use "sudo apt-get install ros-jade-desktop-full", the terminal tells me there were unfulfilled dependencies. However, I did not set any of these and I do not know how to remove them. Nevertheless, I really need to install ROS...
Do you know what I can change in order to make it work?
Thank you very much!
PS:
the problem appears in this step:
sudo apt-get install xserver-xorg-dev-lts-utopic mesa-common-dev-lts-utopic libxatracker-dev-lts-utopic libopenvg1-mesa-dev-lts-utopic libgles2-mesa-dev-lts-utopic libgles1-mesa-dev-lts-utopic libgl1-mesa-dev-lts-utopic libgbm-dev-lts-utopic libegl1-mesa-dev-lts-utopic
OR
sudo apt-get install libgl1-mesa-dev-lts-utopic
(as one of them did not work, I tried the other one, too. My ubuntu version is 15.04).
The error messages:
E: Paket xserver-xorg-dev-lts-utopic kann nicht gefunden werden.
E: Paket mesa-common-dev-lts-utopic kann nicht gefunden werden.
E: Paket libxatracker-dev-lts-utopic kann nicht gefunden werden.
E: Paket libopenvg1-mesa-dev-lts-utopic kann nicht gefunden werden.
E: Paket libgles2-mesa-dev-lts-utopic kann nicht gefunden werden.
E: Paket libgles1-mesa-dev-lts-utopic kann nicht gefunden werden.
E: Paket libgl1-mesa-dev-lts-utopic kann nicht gefunden werden.
E: Paket libgbm-dev-lts-utopic kann nicht gefunden werden.
E: Paket libegl1-mesa-dev-lts-utopic kann nicht gefunden werden.
("kann nicht gefunden werden" means "cannot be found").
I do not know why this happens. In my settings for "Software and Updates", "download from" is set to "main server" (changing it to a German server does not change anything). Of course, I have also done each of the first steps described in the download tutorial. I am sure that I did not type anything wrong.
Originally posted by juwinkler on ROS Answers with karma: 1 on 2015-10-01
Post score: 0
Original comments
Comment by gvdhoorn on 2015-10-01:
Please copy/paste the exact errors into your question (use the edit button/link for that). Without information we cannot help you. Please format error text using the Preformatted text button on the toolbar (it's the one with 101010 on it).
Comment by juwinkler on 2015-10-04:
Okay, thank you, I edited the post :)
Do you have an idea how to solve the problem?
Answer:
You appear to be following these instructions: http://wiki.ros.org/jade/Installation/Ubuntu
The section you're following is for 14.04.2 only. You can skip that on 15.04
Originally posted by tfoote with karma: 58457 on 2015-10-14
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by juwinkler on 2015-10-15:
Yes, I follow the instructions.
If I do not follow them and start to install ROS, it does not work.
Also, I cannot install it by clicking on Install when I use the non-terminal method.
An error message says that there are dependency problems. Thus, I conclude, I do have to fix them... :/
Comment by tfoote on 2015-10-15:
You have provided the console output from the part you don't need to do, but have not provided details on the later message. Please provide enough information for us to reproduce the problem and see exactly what you're typing and getting as an output for us to be able to help you better. | {
"domain": "robotics.stackexchange",
"id": 22727,
"tags": "ros"
} |
Criteria for saving best model during training neural network? | Question: I am doing 4-class semantic segmentation with U-net using generalised dice loss as loss function.
The general approach to saving the best model during training is to monitor the validation loss at each epoch and save the model if the val loss decreases below the previous minimum.
But, I am interested in the "model which gives best the average dice score of 4 classes".
During training in my case, using validation loss as the criterion doesn't lead to the best average dice score. So what should I consider the best model: the one with the lowest validation loss or the one with the highest validation dice score?
Below is my validation loss and avg dice score after each epoch. Out of this which epoch gives the best model?
epoch 1/10 validation loss: 0.95 avg dice score: 0.17
epoch 2/10 validation loss: 0.86 avg dice score: 0.23
epoch 3/10 validation loss: 0.77 avg dice score: 0.34
epoch 4/10 validation loss: 0.74 avg dice score: 0.40
epoch 5/10 validation loss: 0.71 avg dice score: 0.45
epoch 6/10 validation loss: 0.69 avg dice score: 0.34
epoch 7/10 validation loss: 0.79 avg dice score: 0.45
epoch 8/10 validation loss: 0.75 avg dice score: 0.51
epoch 9/10 validation loss: 0.76 avg dice score: 0.36
epoch 10/10 validation loss: 0.75 avg dice score: 0.38
If I go by validation loss as the criterion, epoch 6 gives the best model; if I choose the average dice score as the criterion, epoch 8 gives the best model. How do I choose?
Answer: The loss is mostly a measure that helps the model learn and is not looked at too much when deciding which model to select. A more business oriented measure is often used for this, e.g. accuracy. Since in this case you are mostly interested in the dice score it would make most sense to select the model from epoch 8. | {
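A minimal sketch of that selection rule (illustrative only; the checkpoint-saving call is a hypothetical placeholder, not from the question): keep the model whose validation dice score is highest, rather than the one with the lowest validation loss.

```python
def select_best_epoch(history):
    """history: list of (val_loss, avg_dice) tuples, one per epoch (1-indexed)."""
    best_epoch, best_dice = None, float("-inf")
    for epoch, (val_loss, avg_dice) in enumerate(history, start=1):
        if avg_dice > best_dice:
            best_dice, best_epoch = avg_dice, epoch
            # save_checkpoint(model, f"best_{epoch}.pt")  # hypothetical hook
    return best_epoch, best_dice

# The numbers from the question:
history = [(0.95, 0.17), (0.86, 0.23), (0.77, 0.34), (0.74, 0.40),
           (0.71, 0.45), (0.69, 0.34), (0.79, 0.45), (0.75, 0.51),
           (0.76, 0.36), (0.75, 0.38)]
```

With these numbers it selects epoch 8 (dice 0.51), matching the answer's recommendation.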
"domain": "datascience.stackexchange",
"id": 9533,
"tags": "deep-learning, cnn, cross-validation, loss-function, semantic-segmentation"
} |
Converting an already-uploaded file and saving it to a model's FileField | Question: Please review this:
from os import path, remove
try:
video = Video.objects.get(id=original_video_id)
except ObjectDoesNotExist:
return False
convert_command = ['ffmpeg', '-i', input_file, '-acodec',
'libmp3lame', '-y', '-ac', '2', '-ar',
'44100', '-aq', '5', '-qscale', '10',
'%s.flv' % output_file]
convert_system_call = subprocess.Popen(
convert_command,
stderr=subprocess.STDOUT,
stdout=subprocess.PIPE
)
logger.debug(convert_system_call.stdout.read())
try:
f = open('%s.flv' % output_file, 'r')
filecontent = ContentFile(f.read())
video.converted_video.save('%s.flv' % output_file,
filecontent,
save=True)
f.close()
remove('%s.flv' % output_file)
video.save()
return True
except:
return False
Answer: Clean, easy to understand code. However:
Never hide errors with a bare except. Change
try:
...
except:
return False
into
try:
...
except (IOError, AnyOther, ExceptionThat, YouExpect):
logging.exception("File conversion failed")
Also, unless you need to support Python 2.4 or earlier, you want to use the file's context-manager support:
with open('%s.flv' % output_file, 'r') as f:
filecontent = ContentFile(f.read())
video.converted_video.save('%s.flv' % output_file,
filecontent,
save=True)
remove('%s.flv' % output_file)
That way the file will be closed immediately after exiting the with-block, even if there is an error. For Python 2.5 you would have to from __future__ import with_statement as well.
You might also want to look at using a temporary file from tempfile for the output file. | {
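A short sketch of that tempfile suggestion (my illustration; the ffmpeg call is left commented out): letting tempfile pick a unique output name, and cleaning up in a finally block even if the conversion step fails.

```python
import os
import tempfile

# Reserve a uniquely-named output file for the converter to write to.
# delete=False because another process (ffmpeg) must reopen it by name,
# which is not portable on all platforms while the file is held open.
tmp = tempfile.NamedTemporaryFile(suffix=".flv", delete=False)
tmp.close()
try:
    # subprocess.call(['ffmpeg', ..., '-y', tmp.name])  # hypothetical call
    with open(tmp.name, "rb") as f:
        converted = f.read()
finally:
    os.remove(tmp.name)  # cleaned up even if the conversion step raises
```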
"domain": "codereview.stackexchange",
"id": 41,
"tags": "python, converting, django"
} |
Can $\{a^nb^n\}$ be recognized in poly-time probabilistic sublogarithmic-space? | Question: Consider language $ \mathtt{EQUALITY} = \{ a^nb^n \mid n \geq 0 \} $.
It is known that $ \mathtt{EQUALITY} $ cannot be recognized by any sublogarithmic-space alternating Turing machine (ATM) (Szepietowski, 1994). (There is an ATM using sublogarithmic space for members but not for all non-members!)
On the other hand, Freivalds (1981) showed that bounded-error constant-space probabilistic Turing machines (PTMs) can recognize $ \mathtt{EQUALITY} $ but only in exponential expected time (Greenberg and Weiss, 1986). Later, it was shown that no bounded-error $ o(\log\log n) $-space PTM can recognize a non-regular language in polynomial expected time (Dwork and Stockmeyer, 1990). My question is
whether poly-time sublogarithmic-space PTMs recognize $ \mathtt{EQUALITY} $ with bounded-error.
Answer: I have found an answer to my own question. The result was given in Section 5 of Karpinski and Verbeek, 1987.
For any input of length $ n $, a PTM can construct $ \Theta(\log \log n) $ space with high probability (Section 4). (With a very small probability, the machine can also construct logarithmic space, and this might be seen as a "drawback" of the algorithm.) Then, the PTM can decide whether the numbers of $a$'s ($n$) and $b$'s ($m$) are equal with high probability by using $ O(\log \log n) $ space in polynomial time.
The idea is as follows. If $ n \neq m $, then $ \exists k \leq 4 \log(n+m) $ such that $ n \not\equiv m \mod k $ (Alt and Mehlhorn, 1976). The PTM can pick a random $ k $ by using $ O(\log \log n) $ space. $ O(\log \log n) $ space is also sufficient to keep a counter, and so to try more than half of all possible $k$'s. The case of $ n \neq m $ can then be detected with probability greater than $ 1 \over 2 $.
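To make the fingerprinting step concrete, here is a small sketch (my own illustration, not code from the cited papers) of finding a witnessing modulus; the base-2 logarithm in the bound is my reading of the cited result.

```python
import math

def smallest_witness(n, m):
    """Smallest k >= 2 with n % k != m % k, or None when n == m.
    When n != m a witness exists with k <= 4*log(n+m), so O(log log n)
    bits suffice to hold k and the two counts reduced mod k."""
    if n == m:
        return None
    bound = int(4 * math.log2(n + m)) + 1
    for k in range(2, bound + 1):
        if n % k != m % k:
            return k
    return None  # not reached for n != m within the stated bound
```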
"domain": "cstheory.stackexchange",
"id": 2354,
"tags": "space-bounded, polynomial-time"
} |
Clean code and SOLID principles for a simple Python TicTacToe game | Question: I recently read the book Clean Code and I also did some research on the SOLID principles. I'm looking for general feedback on if I was able to transpose the examples (written in Java) to Python while still maintaining a "Pythonic" program.
I wrote a simple TicTacToe game in Python using Tkinter as the GUI and I tried to strike a balance between readable, clean code and avoiding code cluttering with useless functions (get(), set(), isEmpty() and other 1-liners)
Having a generic BoardGUI class specializing in TicTacToeBoardGUI is my attempt at the open/close principle.
The 3 modules (models, views, game) implement a minimalistic MVC design where GameApplication controls the flow and events.
The part I don't know how to handle is using polymorphism to have different levels of AI (right now the AI only checks a random empty box)
http://www.filedropper.com/tictactoe (~200 lines total)
main.py
#!/usr/bin/env
import Tkinter as tk
from game import GameApplication
def convert_images():
"""Returns a dictionary with the images converted into PhotoImages Tkinter can use"""
img_dict = {}
img_dict.update({'cross': tk.PhotoImage(file='images/cross.gif')})
img_dict.update({'circle': tk.PhotoImage(file='images/circle.gif')})
return img_dict
def main():
root = tk.Tk()
root.title('Tic-Tac-Toe v1.0')
images = convert_images()
GameApplication(root, images)
root.mainloop()
if __name__ == '__main__':
main()
models.py:
#!/usr/bin/env
from collections import namedtuple
class TicTacToeBoard():
"""
Keeps track of the status of the boxes of a standard 3x3 TicTacToe game
I don't particularly like the victory, draw and _find_lines functions, any better solutions?
"""
ROWS = 3
COLUMNS = 3
EMPTY = 0
O = 1
X = 2
def __init__(self):
self.boxes = {(column, row): namedtuple('box', 'value') \
for row in range(TicTacToeBoard.ROWS) \
for column in range(TicTacToeBoard.COLUMNS)}
for __, items in self.boxes.items():
items.value = TicTacToeBoard.EMPTY
self._lines = self._find_all_lines()
def victory(self):
for line in self._lines:
if self.boxes[line[0]].value == self.boxes[line[1]].value == self.boxes[line[2]].value:
if self.boxes[line[0]].value != TicTacToeBoard.EMPTY:
return True
return False
def draw(self):
for __, box in self.boxes.items():
if box.value == TicTacToeBoard.EMPTY:
return False
return True
def _find_all_lines(self):
lines = []
for x in range(3):
lines.append(((0, x), (1, x), (2, x)))
lines.append(((x, 0), (x, 1), (x, 2)))
lines.append(((0, 0), (1, 1), (2, 2)))
lines.append(((0, 2), (1, 1), (2, 0)))
return lines
views.py:
#!/usr/bin/env
import Tkinter as tk
from math import ceil
class BoardGUI(tk.LabelFrame):
"""
Questions:
Does this class respects the open/close principle?
Am I reinventing the wheel by transforming the coordinates to row and
columns or Tkinter.canvas can do that already?
A generic board game of arbitrary size, number of columns and number of rows.
It can be inherited to serve as a point and click GUI for different board games.
The mouse coordinates should be carried by an event passed in parameter to the functions
Boxes are accessed with the tags kw from a Tkinter canvas
They are nothing but drawings; actual rows and columns are computed based on mouse position and size
Needs Tkinter compatible images to draw in boxes
Coordinates for the boxes are either in form of (box_column, box_row) or (box_top_left_x, box_top_left_y)
"""
def __init__(self, root, rows, columns, width, height, **options):
tk.LabelFrame.__init__(self, root, **options)
self._rows = rows
self._columns = columns
self._width = width
self._height = height
self.canvas = tk.Canvas(self, width=self._width, height=self._height)
self.canvas.pack(side=tk.TOP, fill=tk.BOTH, expand=tk.TRUE)
self._draw_boxes()
def get_box_coord(self, event):
return self._convert_pixels_to_box_coord(event.x, event.y)
def draw_image_on_box(self, coord, image):
(x, y) = self._convert_box_coord_to_pixels(coord)
self.canvas.create_image((x, y), image=image, anchor=tk.NW)
def draw_message_in_center(self, message):
self.canvas.create_text(self._find_board_center(), text=message, font=("Helvetica", 50, "bold"))
def _draw_boxes(self):
[self.canvas.create_rectangle(self._find_box_corners((column, row))) \
for row in range(self._rows) \
for column in range(self._columns)]
def _find_box_corners(self, coord):
boxWidth, boxHeight = self._find_box_dimensions()
column, row = coord[0], coord[1]
topLeftCorner = column * boxWidth
topRightCorner = (column * boxWidth) + boxWidth
bottomLeftCorner = row * boxHeight
bottomRightCorner = (row * boxHeight) + boxWidth
return (topLeftCorner, bottomLeftCorner, topRightCorner, bottomRightCorner)
def _find_board_center(self):
return ((self._width / 2), (self._height / 2))
def _find_box_dimensions(self):
return ((self._width / self._columns), (self._height / self._rows))
def _convert_box_coord_to_pixels(self, coord):
boxWidth, boxHeight = self._find_box_dimensions()
return ((coord[0] * boxWidth), (coord[1] * boxHeight))
def _convert_pixels_to_box_coord(self, x, y):
column = ceil(x / (self._height / self._columns))
row = ceil(y / (self._width / self._rows))
return (column, row)
class TicTacToeBoardGUI(BoardGUI):
"""
Allows you to draw circles or crosses on a 3x3 square gaming board
Tkinter needs to keep a reference of the images to loop. It's better to
initiate them elsewhere and pass them in parameters even if they seem like
they should be internal attributes.
"""
ROWS = 3
COLUMNS = 3
def __init__(self, root, width, height, images):
BoardGUI.__init__(self, root, TicTacToeBoardGUI.ROWS, TicTacToeBoardGUI.COLUMNS, width, height)
self.xSymbol = images['cross']
self.oSymbol = images['circle']
def draw_x_on_box(self, coord):
self.draw_image_on_box(coord, self.xSymbol)
def draw_o_on_box(self, coord):
self.draw_image_on_box(coord, self.oSymbol)
game.py:
#!/usr/bin/env
import Tkinter as tk
from views import TicTacToeBoardGUI
from models import TicTacToeBoard
from random import shuffle
class GameApplication(tk.Frame):
"""
Serves as a top level Tkinter frame to display the game and as a controller to connect
the models and the views.
I'm willingly giving this class 2 tasks because the controller as a single event to connect
and the purpose of this exercise isn't the MVC pattern but the SOLID and clean code principles
It simply instantiates the models and views necessary to play a TicTacToe game and interpret a left
mouse click as a move from the Human player
"""
CANVAS_WIDTH = 300
CANVAS_HEIGHT = 300
def __init__(self, root, images, **options):
tk.Frame.__init__(self, root, **options)
self._boardModel = TicTacToeBoard()
self._boardView = TicTacToeBoardGUI(root, GameApplication.CANVAS_WIDTH, GameApplication.CANVAS_HEIGHT, images)
self._boardView.pack()
self._connect_click_event()
if self._ai_goes_first():
self._ai_plays()
def _player_plays(self, event):
coord = self._boardView.get_box_coord(event)
if self._boardModel.boxes[coord].value != self._boardModel.EMPTY:
return
self._boardModel.boxes[coord].value = self._boardModel.X
self._boardView.draw_x_on_box(coord)
self._ai_plays()
def _ai_plays(self):
#Simply checks a random empty box on the board;
#The minmax algorithm could be used to never lose but that's no fun
for coord, box in self._boardModel.boxes.items():
if box.value == self._boardModel.EMPTY:
self._boardModel.boxes[coord].value = self._boardModel.O
self._boardView.draw_o_on_box(coord)
break
self._check_end_game_status()
def _check_end_game_status(self):
if self._boardModel.victory():
self._boardView.draw_message_in_center("Victory!")
elif self._boardModel.draw():
self._boardView.draw_message_in_center("Draw!")
def _ai_goes_first(self):
result = [True, False]
shuffle(result)
return result[0]
def _connect_click_event(self):
self._boardView.canvas.bind("<Button-1>", self._player_plays)
Answer: It looks to me like you want a box with various buttons that lets you select, say, easy, medium, or hard. Return that choice and then say
if AIChoose == "Easy":
getEasyAIMove()
elif AIChoose == "Medium":
GetMedAIMove()
elif AIChoose == "Hard":
GetHardAIMove()
Now, that's obviously the easy part, but a Tic-Tac-Toe AI is actually not at all hard:
def GetHardAIMove():
# first check if you're about to win. If you are, move there
# now check if your opponent is about to win. If he is, block him
# check if any of the corners are free. If they are, go to a random one of them
# check if the middle is free. If it is, go to it
#otherwise, go to a random open spot
def GetMedAIMove():
# here's where we cheat: generate a random number between 1 and 3. If it's <= 2, GetHardAIMove; otherwise, go easy.
def GetEasyAIMove():
# If you're about to win, go there
# If you're about to lose, go there
# Otherwise, go to a random spot.
See? This is a quick and painless AI that can beat decent players who don't know how it's made about 50-60% of the time.
EDIT: Since Tic-Tac-Toe can always be won or tied by the player that moves first, you may wish to add this into the AI by doing something like this:
def GetHardAIMove():
# if you move first:
# after making sure you or your opponent isn't about to win:
# move to a corner
# then move to the corner opposite that
# then move to a random corner
# at this point, you have either won or tied
# first check if you're about to win. If you are, move there
# now check if your opponent is about to win. If he is, block him
# check if any of the corners are free. If they are, go to a random one of them
# check if the middle is free. If it is, go to it
#otherwise, go to a random open spot | {
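For what it's worth, the comment outline above translates into quite little runnable Python. This is one possible sketch (my own, using a dict-based board like the one in the question, with 0 for empty cells; names are mine, not from the post):

```python
import random

# All eight winning lines on a 3x3 board keyed by (column, row).
LINES = ([((0, i), (1, i), (2, i)) for i in range(3)]
         + [((i, 0), (i, 1), (i, 2)) for i in range(3)]
         + [((0, 0), (1, 1), (2, 2)), ((0, 2), (1, 1), (2, 0))])

def winning_move(board, player):
    """Cell that completes a line for `player`, or None."""
    for line in LINES:
        values = [board[cell] for cell in line]
        if values.count(player) == 2 and values.count(0) == 1:
            return line[values.index(0)]
    return None

def get_hard_ai_move(board, ai, human):
    # 1. win if possible, 2. block the opponent, 3. random free corner,
    # 4. centre, 5. random free cell -- the order described above.
    move = winning_move(board, ai) or winning_move(board, human)
    if move:
        return move
    empty = [c for c, v in board.items() if v == 0]
    corners = [c for c in [(0, 0), (0, 2), (2, 0), (2, 2)] if c in empty]
    if corners:
        return random.choice(corners)
    if (1, 1) in empty:
        return (1, 1)
    return random.choice(empty)
```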
"domain": "codereview.stackexchange",
"id": 5754,
"tags": "python, game, tkinter, tic-tac-toe"
} |
Question about an OPE for the free massless scalar CFT | Question: In page 78 of David Tong's notes on CFT https://www.damtp.cam.ac.uk/user/tong/string/four.pdf, he finds that the propagator for a theory of free massless scalars is
$$\langle X(\sigma)X(\sigma')\rangle=-\frac{\alpha'}{2}\text{log}(\sigma-\sigma')^2$$
Then he goes on saying that in equation (4.22) the OPE of $X(\sigma)X(\sigma')$ is
$$X(\sigma)X(\sigma')=-\frac{\alpha'}{2}\text{log}(\sigma-\sigma')^2 + ... \label{eq}$$
My question is: what are the $...$ in the above equation?
Certainly they are there because he is saying that we could consider the path integral with other insertions away from $\sigma$ and $\sigma'$, but then equation (4.20) would turn into
$$\langle \partial^2 X(\sigma)X(\sigma')...\rangle=-2 \pi \alpha' \langle \delta^2(\sigma,\sigma') ... \rangle$$
Implying that
$$\langle X(\sigma)X(\sigma')\,...\rangle =-\frac{\alpha'}{2}\langle \text{log}(\sigma-\sigma')^2\,... \rangle =-\frac{\alpha'}{2}\text{log}(\sigma-\sigma')^2\langle ... \rangle$$
Because all insertions are away from $\sigma$ and $\sigma'$.
Answer: In the free theory the answer is really simple,
$$
X(\sigma)X(\sigma') = - \frac{\alpha'}{2} \log(\sigma - \sigma') ~ + : X(\sigma)X(\sigma'):
$$
where $:~:$ denotes normal ordering. | {
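As a consistency check (standard free-field reasoning, not part of the quoted answer): taking the expectation value of both sides removes the normal-ordered piece, since $\langle :X(\sigma)X(\sigma'):\rangle = 0$, and reproduces the propagator quoted above:

```latex
\langle X(\sigma)X(\sigma')\rangle
  = -\frac{\alpha'}{2}\log(\sigma-\sigma')^2
  + \underbrace{\langle\, :X(\sigma)X(\sigma'):\, \rangle}_{=\,0}
  = -\frac{\alpha'}{2}\log(\sigma-\sigma')^2 .
```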
"domain": "physics.stackexchange",
"id": 66338,
"tags": "quantum-field-theory, conformal-field-theory, path-integral, correlation-functions"
} |