anchor | positive | source |
|---|---|---|
What is the language generated by a given grammar | Question:
Given the grammar
$S \to aSb \mid bSb \mid a \mid b$;
what is the language generated by the grammar over the alphabet $\{a,b\}$?
When I was solving this question I was a bit confused
about the language generated by this grammar. Would it be the set of all palindromes?
Or would the language generated by the above grammar be that of all odd-length palindromes?
Is it possible that a palindrome generated by the above grammar can only be of odd length, since there is no rule $S \to \varepsilon$?
Answer: The language of this grammar is all strings of the form $wb^n$ where $w\in \{a,b\}^*$, and $|w| = n+1$. If I abuse the notation a bit, it is $\Sigma^{n+1}b^n$.
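As a sanity check on this claim (my addition, not part of the original answer), one can enumerate short derivations of the grammar and verify that every generated string has the shape $\Sigma^{n+1}b^n$:

```python
def derive(max_depth):
    """Enumerate all terminal strings derivable from
    S -> aSb | bSb | a | b in at most `max_depth` rule applications."""
    strings = set()

    def expand(s, depth):
        if depth > max_depth:
            return
        if 'S' not in s:
            strings.add(s)
            return
        i = s.index('S')
        for rhs in ('aSb', 'bSb', 'a', 'b'):
            expand(s[:i] + rhs + s[i + 1:], depth + 1)

    expand('S', 0)
    return strings

# every derived string is w followed by b^n with |w| = n + 1 (odd total length)
for s in derive(6):
    n = (len(s) - 1) // 2
    assert len(s) == 2 * n + 1 and s.endswith('b' * n)
```

In particular, 'aab' and 'bab' are generated but the palindrome 'aba' is not, confirming that the language is not the set of odd-length palindromes.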
If perchance you mean $S \rightarrow aSa \ | \ bSb \ | \ a\ | \ b$, then the language contains all palindromic strings of odd length. | {
"domain": "cs.stackexchange",
"id": 13288,
"tags": "formal-languages, context-free, formal-grammars"
} |
Decompose a value as a sum of square numbers | Question: My daughter had a question on her maths homework which was to write a value as a sum of 4 or fewer square numbers.
For example, x = 73 could be 36 + 36 + 1.
I came up with a really brute force algorithm that shows all combinations but when the values get above 1000 it becomes quite slow.
Is there a clever way of achieving this?
Here is my algo in F#
let squares = [ 0.. 100 ] |> Seq.map (fun x -> x * x) |> Seq.toList

let rec calc total attempts squares accu results =
    match (total, attempts, squares) with
    | (0, _, _) -> accu :: results
    | (_, 0, _) -> [] :: results
    | (x, _, _) when x < 0 -> [] :: results
    | (_, _, []) -> [] :: results
    | total, attempts, squares ->
        let filteredSquares = squares |> Seq.filter ((>=) total)
        filteredSquares
        |> Seq.collect (fun sq ->
            calc (total - sq)
                 (attempts - 1)
                 (filteredSquares |> Seq.toList)
                 (sq :: accu)
                 results)
        |> Seq.toList

let res =
    calc 8058 4 squares [] []
    |> Seq.filter (fun lst -> lst <> [])
    |> Seq.sortBy (fun lst -> lst.Length)
    |> Seq.take 1
    |> Seq.toList
Additional comments on better F# code would be appreciated as well.
One thing I was wondering was whether I could make it lazy using sequences.
Answer: let squares = [ 0.. 100 ] |> Seq.map (fun x -> x * x) |> Seq.toList
I don't think limiting the squares this way is a good idea. Instead, if you moved it inside calc, you could write it as:
let squares =
    Seq.initInfinite id
    |> Seq.map (fun x -> x * x)
    |> Seq.takeWhile (fun x -> x <= total)
    |> Seq.toList
calc is a pretty bad name: it doesn't say anything about what the function calculates.
If you limit yourself to returning always only the shortest sum, the signature of the method could be simplified quite a lot: you could get rid of attempts, accu and results. This would also somewhat simplify the implementation.
And even if you don't do that, I think that filtering out empty lists should be done inside calc, not outside.
If you simplify the signature, you could then apply memoization to make the computation much faster. Memoization will help here, because what's slowing down the computation is calculating the result for the same total over and over.
The implementation could look like this:
let rec calculateSquaresSum =
    let calculateSquaresSum' total =
        if total = 0 then
            []
        else
            let squares =
                Seq.initInfinite (fun x -> x + 1)
                |> Seq.map (fun x -> x * x)
                |> Seq.takeWhile (fun x -> x <= total)
                |> Seq.toList
                |> List.rev
            squares
            |> Seq.map (fun sq -> sq :: (calculateSquaresSum (total - sq)))
            |> Seq.minBy (fun sum -> sum.Length)
    memoize calculateSquaresSum'
This will calculate the result for values around 10.000 immediately, but 100.000 takes a few seconds, so there likely is a more efficient way to implement this.
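For comparison (my addition, not part of the original answer), the same memoize-and-take-the-shortest idea can be sketched in Python, where functools.lru_cache plays the role of the memoize helper:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def shortest_square_sum(total):
    """Shortest tuple of positive squares summing to `total`.
    Fine for small inputs; very large totals would need an iterative
    version to avoid hitting the recursion limit."""
    if total == 0:
        return ()
    best = None
    k = 1
    while k * k <= total:
        candidate = (k * k,) + shortest_square_sum(total - k * k)
        if best is None or len(candidate) < len(best):
            best = candidate
        k += 1
    return best
```

For example, shortest_square_sum(73) finds a two-square decomposition (64 + 9), shorter than the 36 + 36 + 1 from the homework example.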
Though if you're okay with finding just some decomposition with at most the given length, not necessarily the shortest one, having the attempts parameter makes sense and makes the computation much faster when the decomposition is possible. (E.g. calculateSquaresSum(1000000 - 1, 4) returns immediately with [998001; 1849; 100; 49], while calculateSquaresSum(100000 - 1, 3) takes a few seconds to return with a negative answer.)
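A hedged Python sketch of this attempts-limited, memoized search (my addition; largest squares tried first) reproduces the quoted result:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def squares_sum(total, attempts):
    """Some decomposition of `total` into at most `attempts` squares,
    or None if no such decomposition exists."""
    if total == 0:
        return ()
    if attempts == 0:
        return None
    k = int(total ** 0.5)  # largest candidate square root
    while k >= 1:
        tail = squares_sum(total - k * k, attempts - 1)
        if tail is not None:
            return (k * k,) + tail
        k -= 1
    return None

print(squares_sum(1000000 - 1, 4))  # (998001, 1849, 100, 49), as in the F# version
```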
The code:
let rec calculateSquaresSum =
    let calculateSquaresSum' (total, attempts) =
        if total = 0 then
            Some []
        elif attempts = 0 then
            None
        else
            let squares =
                Seq.initInfinite (fun x -> x + 1)
                |> Seq.map (fun x -> x * x)
                |> Seq.takeWhile (fun x -> x <= total)
                |> Seq.toList
                |> List.rev
            squares
            |> Seq.map (fun sq ->
                match calculateSquaresSum (total - sq, attempts - 1) with
                | None -> None
                | Some tail -> Some (sq :: tail))
            |> Seq.tryPick id
    memoize calculateSquaresSum' | {
"domain": "codereview.stackexchange",
"id": 10997,
"tags": "f#, combinatorics"
} |
Neutral pions and chromodynamics | Question: $\pi^0$ particles are either up-antiup or down-antidown (or strange-antistrange?) They must be opposite colors to preserve neutrality. Why don't the opposite quarks annihilate?
Answer: Pions in general, and the neutral pion in particular, are not stable particles: the $\pi^0$ has a mean lifetime of $\tau=8.4 \cdot10^{-18}$ s (Pion - Wikipedia). The quark and antiquark do annihilate; that is precisely why the $\pi^0$ is so short-lived, decaying predominantly into two photons. | {
"domain": "physics.stackexchange",
"id": 8228,
"tags": "quantum-mechanics, quantum-chromodynamics"
} |
Photon Polarization | Question: I'm having some issues understanding some concepts related to the polarization of photons, specifically the 'practical' difference between a superposition of two polarizations (let's say $|H\rangle + |V\rangle$) and a single polarization ($|H\rangle$ or $|V\rangle$). I've read a similar topic (Can we determine whether a photon's polarization is fixed or in a superposition), but this is not exactly what I'm trying to understand. My question: is it possible to create an experimental arrangement capable of distinguishing between a superposition state and a single polarized state? Would it be different if I had a source that provides me with both $|H\rangle$ and $|V\rangle$ at a known rate? That is: can we use an arrangement of polarizers, mirrors, beam splitters and detectors to find out whether a source is providing these states? My original idea is to use two orthogonal polarizers at 45 degrees, but I'm having some issues understanding the principles behind the situation.
Answer: The basic idea is you will have to have multiple photons prepared identically, and make multiple polarization measurements (using a polarization filter) to be able to distinguish between $|H\rangle$ and $|H\rangle+|V\rangle$. One way I can do it would be to get a polarization filter and only allow vertically polarized light through. So I'm looking for the presence of a $|V\rangle$ state. If I make 10,000 measurements and I find no component $|V\rangle$ (all my results come back negative, i.e. no photons were let through the polarizer) then I am relatively certain the state was $|H\rangle$ to begin with and not $|H\rangle+|V\rangle$. If the original state was the latter, I would expect 50% of my measurements to come back positive. (If the original state was $|V\rangle$ then all 10,000 of my results should have come back positive). By building out these probabilities, I can figure out what state the photon was in originally.
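The counting argument above can be illustrated with a tiny simulation (my own illustrative sketch, not from the original answer; the transmission probabilities follow the Born rule):

```python
import random

def count_transmitted(state, shots=10_000, seed=1):
    """Simulate `shots` identically prepared photons hitting an ideal
    vertical polarizer and return how many pass through.
    Born-rule transmission probabilities: |H> -> 0, |V> -> 1,
    (|H>+|V>)/sqrt(2) -> 1/2."""
    pass_prob = {"H": 0.0, "V": 1.0, "H+V": 0.5}[state]
    rng = random.Random(seed)
    return sum(rng.random() < pass_prob for _ in range(shots))

print(count_transmitted("H"))  # 0: pure |H> never passes
print(count_transmitted("V"))  # 10000: pure |V> always passes
# the superposition passes roughly half the time (about 5000 of 10000)
```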
This answers the question of "the photon was known to be in some pure state, how can I determine what that pure state was?". Interestingly though, when one does measure the vertical polarization, for photons which do pass through the polarizer, their state is now changed to be exactly $|V\rangle$. So say a photon starts in state $|H\rangle+|V\rangle$ and you measure $V$, then for photons that pass through your polarizer, their state is now $|V\rangle$ and NOT $|H\rangle+|V\rangle$. By measuring the state, you have changed the state (see the joke in Futurama about a "quantum finish" in a horse race). So, if one wants a bunch of $|V\rangle$ state photons, one can merely pass a bunch of randomly prepared photons through an (idealized) vertical polarizer and they will end up in that state. If one wants a bunch of $|H\rangle+|V\rangle$ photons, one can pass it through a (45$^\circ$) diagonally polarized filter to accomplish that. | {
"domain": "physics.stackexchange",
"id": 52815,
"tags": "quantum-mechanics"
} |
Check if two triangles intersect | Question: Problem statement: Write a function which checks if two triangles intersect or not. Complete containment or tangential contact is not considered intersection.
Approach: I have considered each triangle as a collection of three line segments, and then checked whether any line segment from the first triangle intersects any segment from the second triangle. To check line-segment intersection, I have used the rotation direction of one segment's endpoints with respect to the other segment. The assertion is that, in case of intersection, the rotation direction will change (from clockwise to anticlockwise or vice versa). The rotation direction is computed using the cross product.
Review ask: Do you see functional glitches? Missing boundary or error cases? Input around idiomatic use of Python would also be appreciated.
import operator

def compute_direction(common_point, first_endpoint, second_endpoint):
    first_vector = tuple(map(operator.sub, first_endpoint, common_point))
    second_vector = tuple(map(operator.sub, second_endpoint, common_point))
    return first_vector[0]*second_vector[1] - first_vector[1]*second_vector[0]

def does_line_segments_intersect(first_segment, second_segment):
    d1 = compute_direction(first_segment[0], first_segment[1], second_segment[0])
    d2 = compute_direction(first_segment[0], first_segment[1], second_segment[1])
    d3 = compute_direction(second_segment[0], second_segment[1], first_segment[0])
    d4 = compute_direction(second_segment[0], second_segment[1], first_segment[1])
    if d1*d2 < 0 and d3*d4 < 0:
        return True
    pass

def does_triangles_intersect(first_triangle, second_triangle):
    for first_triangle_side in range(3):
        first_side = [first_triangle[first_triangle_side], first_triangle[(first_triangle_side+1)%3]]
        for second_triangle_side in range(3):
            second_side = [second_triangle[second_triangle_side], second_triangle[(second_triangle_side+1)%3]]
            if does_line_segments_intersect(first_side, second_side):
                return True
    return False

print(does_triangles_intersect([(0, 0), (8, 0), (4, 4)], [(0, 0), (8, 0), (8, 4)]))
Answer: Your code looks mostly good to me. Here are a few comments
Style details
Invisible detail, but there are a few trailing whitespaces that should be cleaned up.
Probably a matter of personal preference, but I think that ordinals ("first", "second") lead to variable names which are pretty long and could make things harder to understand at first glance. My suggestion would be to use numbers as suffixes: "segment1", "segment2", etc.
Each function implemented is non-trivial and deserves a bit of explanation regarding what it does, the expected inputs, the algorithm used.
The code seems to follow PEP 8 pretty well except for this particular point:
Be consistent in return statements. Either all return statements in a
function should return an expression, or none of them should. If any
return statement returns an expression, any return statements where no
value is returned should explicitly state this as return None, and an
explicit return statement should be present at the end of the function
(if reachable)
Indeed, does_line_segments_intersect returns either True (explicitly) or None (implicitly). It would be better to return either True or False (explicitly).
if d1*d2 < 0 and d3*d4 < 0:
    return True
return False
Then, it is a bit clearer that we can have a single return statement:
return d1*d2 < 0 and d3*d4 < 0
More tests
Unit-tests could help to:
explain your code
check that it works as expected on various cases, in particular edge-cases
Here is what I wrote using the assert statement but this should probably be done using a proper unit-test framework.
# Tests about compute_direction
###############################
# An endpoint is the origin
assert compute_direction((1, 2), (3, 4), (1, 2)) == 0
# Two endpoints are similar
assert compute_direction((1, 2), (3, 4), (3, 4)) == 0
# Two endpoints in the exact same direction
assert compute_direction((0, 0), (1, 2), (3, 6)) == 0
# More interesting cases
assert compute_direction((4, 4), (0, 0), (8, 4)) > 0
assert compute_direction((8, 0), (8, 4), (4, 4)) > 0
assert compute_direction((8, 0), (4, 4), (8, 4)) < 0
assert compute_direction((8, 4), (0, 0), (4, 4)) < 0
assert compute_direction((0, 0), (8, 0), (4, 4)) > 0
assert compute_direction((0, 0), (8, 0), (8, 4)) > 0
assert compute_direction((4, 4), (0, 0), (8, 0)) > 0
assert compute_direction((8, 0), (4, 4), (0, 0)) > 0
assert compute_direction((8, 0), (8, 4), (0, 0)) > 0
assert compute_direction((8, 4), (0, 0), (8, 0)) > 0
# Tests about triangles_intersect
#################################
# Example provided
assert triangles_intersect([(0, 0), (8, 0), (4, 4)], [(0, 0), (8, 0), (8, 4)])
# No intersection, similar triangle
assert not triangles_intersect([(0, 0), (1, 0), (0, 1)], [(0, 0), (1, 0), (0, 1)])
# No intersection, one common point
assert not triangles_intersect([(0, 0), (1, 0), (0, 1)], [(1, 0), (2, 0), (2, 1)])
# No intersection, two common points
assert not triangles_intersect([(0, 0), (1, 0), (0, 1)], [(1, 1), (1, 0), (0, 1)])
# Intersection, one point in other triangle
assert triangles_intersect([(0, 0), (3, 0), (0, 3)], [(1, 1), (1, 3), (3, 1)])
# Intersection, two points in other triangle
assert triangles_intersect([(0, 0), (4, 0), (0, 4)], [(1, 2), (2, 1), (3, 3)])
# Intersection, three points in other triangle
# assert triangles_intersect([(0, 0), (4, 0), (0, 4)], [(1, 1), (2, 1), (1, 2)]) # WRONG
# Intersection but not point in other triangle
assert triangles_intersect([(0, 1), (4, 1), (2, 3)], [(0, 2), (4, 2), (2, 0)])
# Intersection, two common points, one on side
# assert triangles_intersect([(0, 0), (2, 0), (1, 1)], [(1, 0), (2, 0), (1, 1)]) # WRONG
# Intersection, two common points, one inside
# assert triangles_intersect([(0, 0), (3, 0), (0, 3)], [(1, 1), (3, 0), (0, 3)]) # WRONG
# Intersection, two common points, one outside
assert triangles_intersect([(0, 0), (1, 1), (1, 0)], [(0, 1), (1, 1), (1, 0)])
From what I understand, a few cases are not handled properly (flagged "WRONG" above).
However, I am not too sure whether the code is incorrect or my understanding is incorrect.
My understanding is that two triangles intersect if there are points which are (strictly) inside both triangles, but it looks like the code expects triangles to intersect if and only if they have sides that intersect.
I'll let you see if this is what you want.
Edit: a value was wrong in one of my examples. I noticed this (and fixed it) by using some code to generate the corresponding graph.
Second edit:
Now that I understand better the expected behavior, let's continue the review.
The logic to iterate over the different sides of the triangle is a bit cumbersome. For a start, we could extract it into a function of its own. Also, we could easily write it in a very generic way so that it yields the different sides of any polygon. Using a generator, we could have something like:
def get_sides(polygon):
    last_point = polygon[-1]
    for point in polygon:
        yield (last_point, point)
        last_point = point

def triangles_intersect(triangle1, triangle2):
    for side1 in get_sides(triangle1):
        for side2 in get_sides(triangle2):
            if segments_intersect(side1, side2):
                return True
    return False
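Putting the pieces together, here is a self-contained, runnable version of the reviewed approach (my own sketch, restating the question's compute_direction without the operator module; strict crossings only, so shared endpoints and tangential contact do not count):

```python
def compute_direction(common_point, first_endpoint, second_endpoint):
    """Cross product of the vectors common_point->first_endpoint and
    common_point->second_endpoint (positive means a counterclockwise turn)."""
    v1 = (first_endpoint[0] - common_point[0], first_endpoint[1] - common_point[1])
    v2 = (second_endpoint[0] - common_point[0], second_endpoint[1] - common_point[1])
    return v1[0] * v2[1] - v1[1] * v2[0]

def segments_intersect(segment1, segment2):
    """True if the two segments properly cross each other."""
    d1 = compute_direction(segment1[0], segment1[1], segment2[0])
    d2 = compute_direction(segment1[0], segment1[1], segment2[1])
    d3 = compute_direction(segment2[0], segment2[1], segment1[0])
    d4 = compute_direction(segment2[0], segment2[1], segment1[1])
    return d1 * d2 < 0 and d3 * d4 < 0

def get_sides(polygon):
    """Yield the sides of a polygon given as a list of vertices."""
    last_point = polygon[-1]
    for point in polygon:
        yield (last_point, point)
        last_point = point

def triangles_intersect(triangle1, triangle2):
    return any(segments_intersect(side1, side2)
               for side1 in get_sides(triangle1)
               for side2 in get_sides(triangle2))
```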
Going further, we could use itertools.product and any.
def triangles_intersect(triangle1, triangle2):
return any(segments_intersect(side1, side2)
for side1, side2 in itertools.product(get_sides(triangle1), get_sides(triangle2))) | {
"domain": "codereview.stackexchange",
"id": 41986,
"tags": "python, computational-geometry"
} |
random team distributor | Question: Any idea how to shorten and improve the program?
It's going to take a list of 22 people, split them into two teams, and return the names and teams.
import random

# list of teams
teams = ['A', 'B']

# list of 22 players
players = ['a', 'b', 'c', 'd', 'e',
           'f', 'g', 'h', 'i', 'j',
           'k', 'l', 'm', 'n', 'o',
           'p', 'q', 'r', 's', 't',
           'u', 'v']

# takes the list of names
class Human:
    def __init__(self, name):
        self.name = name

# distribute names and people to teams randomly; inherits name from the Human class
class Player(Human):
    def get_team(self):
        teams = dict()
        random.shuffle(self.name)
        teams['A'] = self.name[0::2]
        teams['B'] = self.name[1::2]
        return teams

my_team = Player(players)
my_team = my_team.get_team()
for team, name in my_team.items():
    print(team, name)
Answer: The concept is good, but there is room for improvement. Here are some problems I see:
Misleading names
Clear names are very important in coding, because they make it easy or hard for other people (including your future self) to understand what you have written. Some of the names you have used do not match what the code actually does, which is confusing.
Based on the name of the Human class, I'd expect an instance of this class to represent one human, but that is not how it is used (see below).
Based on the name of the Player class, and the way it extends Human, I'd expect an instance of this class to represent one player; if so, then there should be 22 instances of the class, or one object for each name in the players list. Instead, it seems to represent all the players at once, which its name does not imply.
I would expect the name attribute of Human to store a single name, but Player stuffs an entire list of player names into that attribute.
At minimum, these names should be made plural (Humans, names, Players, etc.) to reflect their contents. It would be even better to give Player a name that describes what it actually does, or what it is responsible for within the program.
Unnecessary hierarchy, part 1: only one subclass
You mentioned in a comment that you are trying to practice your OOP skills, so "[you] want to force [yourself] to use classes even if it sounds silly". Practice is good, but part of good OOP is understanding why and how to use classes and objects effectively...and that includes knowing when not to use them, or when not to use certain parts of them.
For example: why do you need both a Human class and a Player subclass? In the program that you have now, the only humans are players; therefore there is no advantage to having two classes. You could do everything in the Player class and drop the Human class altogether, or vice versa. On the other hand, if you already have plans to expand your program to include non-player humans (e.g. coaches, referees), then a Human superclass might make sense. The structure of the code should fit your use case.
Unnecessary hierarchy, part 2: superclass does nothing
A subclass is supposed to have a "type of" or "kind of" relationship to its superclass, e.g. a Square is a kind of Rectangle, or a Dog is a kind of Animal. The problem is that your code does not gain much from defining Player as a kind of Human. Since Human has no defined behaviors and barely any data, inheriting from it doesn't provide many benefits. It would be simpler and more flexible to use composition instead of inheritance, by creating or importing a Human object inside the Player object.
Unnecessary use of internal state / object attributes
One of the nice things about objects is that they can create and maintain an internal state--a little chunk of data hidden away from the rest of the program. (Keeping separate things separate is called encapsulation or separation of concerns, and it makes programs much easier to think about and easier to debug.) In Python, internal state is stored in object attributes, like Human's self.name. But you have to know when to use internal state and when not to use it. Keeping multiple copies of the same data in different places, or multiple references to the same data object, can cause confusion and lead to bugs.
Your get_team() method takes a list of players, splits the players between two known teams, and then returns a dict() with that information. This method is called only once in your entire program. So, what is the advantage of having the Player object store the list of player names inside itself? In the current code, there is no advantage. You could easily change get_team() to a class method (which can be called without instantiating an object) that takes the player names as a parameter.
Hardcoded assumptions
# list of teams
teams = ['A', 'B']
Since you defined this teams list at the top of the code, I would assume that editing that list would change the program output. But when I look closer, I can see that this list is not even used. Your get_team() method is explicitly written to split the players into two teams named 'A' and 'B', regardless of what is in the list above.
teams['A'] = self.name[0::2]
teams['B'] = self.name[1::2]
return teams
This is not very flexible, especially if you want to import the team names from outside the program (e.g. from user input, or from a config file). A better method would take a list of teams as an input, count the number of teams, and then split the players accordingly.
Here's how I might write it:
import random

TEAMS = ('A', 'B')  # Replaced mutable list with immutable tuple.
                    # Names of constants should be in all-caps.
PLAYERS = ('a', 'b', 'c', 'd', 'e',
           'f', 'g', 'h', 'i', 'j',
           'k', 'l', 'm', 'n', 'o',
           'p', 'q', 'r', 's', 't',
           'u', 'v')

class TeamAssigner:
    @staticmethod
    def assign_players_to_teams(teams, players):
        players_on_teams = dict()
        # changed to .sample because .shuffle doesn't work on tuples / immutable sequences
        randomized_players = random.sample(players, len(players))
        for i, team in enumerate(teams):
            players_on_teams[team] = randomized_players[i::len(teams)]
        return players_on_teams

my_teams = TeamAssigner.assign_players_to_teams(TEAMS, PLAYERS)
for team, name in my_teams.items():
    print(team, name)
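As a quick sanity check (my own addition, with a standalone copy of the assignment logic so it runs on its own), one can verify that every player lands on exactly one team and that the teams come out balanced:

```python
import random

def assign_players_to_teams(teams, players):
    """Standalone copy of the round-robin assignment logic sketched above."""
    randomized = random.sample(players, len(players))
    return {team: randomized[i::len(teams)] for i, team in enumerate(teams)}

result = assign_players_to_teams(('A', 'B'), tuple('abcdefghijklmnopqrstuv'))
assigned = sorted(p for members in result.values() for p in members)
assert assigned == sorted('abcdefghijklmnopqrstuv')  # everyone assigned exactly once
assert len(result['A']) == len(result['B']) == 11    # 22 players split evenly
```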
Of course, this is only what I would write if I was not planning to expand the program in the future. If you have plans to make the program bigger and make it do more things, then some of these decisions might be the wrong ones. For instance, if you wanted the ability to move players from one team to another after the initial split, then you would need a class that did keep an internal state of who is on which team... | {
"domain": "codereview.stackexchange",
"id": 40866,
"tags": "python, classes"
} |
Sublime Text 2 R Build System (source file, send selection to interactive session) | Question: The following is a description of the R build system I am using (very happily) with Sublime Text 2 on Linux (Crunchbang). With it, I am able:
to source the current file.R to an interactive R session;
to send the current selection to an interactive R session (supports multi selection);
to output plots and *.Rout files (non-interactive);
xsel, xdotool, terminator (or some other terminal that supports custom titles) and a bash script (see below) are prerequisites. Please pay attention to the locations of each file.
The open_and_run_r.sh script is the object of revision (see its commented lines for details). But feel free to give your considerations about the whole picture.
Build File:
.config/sublime-text-2/Packages/User/r.sublime-build
{ // source( $file ) into interactive session
"shell": true,
"cmd":
[ "echo 'source(\"'$file'\")' | xsel -i -b; if xdotool search --name 'Running R' windowactivate --sync; then xdotool key --window 0 ctrl+shift+v; else open_and_run_r.sh; fi; xsel -c -b"
],
"selector": "source.r",
"variants":
[
{ // send current selection of $file to interactive session; supports ST2 multi-selection as well :)
"name": "r_send_selection",
"shell": true,
"cmd":
[ "xdotool getactivewindow key ctrl+c; if xdotool search --name \"Running R\" windowactivate --sync; then xdotool key --window 0 ctrl+shift+v Return; fi"
]
},
{ // generate custom output.Rout file (not the output.Rout from R CMD BATCH), non-interactive
"name": "r_output",
"cmd":
[ "/usr/bin/R --quiet --slave < $file > output.Rout"
]
}
]
}
Let's give proper formatting to the bash commands:
# source file, note:
# escaped \" for JSON compatibility;
# $file is ST2 global variable;
# 'source()' is an R command
echo 'source(\"'$file'\")' | xsel -i -b;
if xdotool search --name 'Running R' windowactivate --sync;
then xdotool key --window 0 ctrl+shift+v;
else open_and_run_r.sh;
fi;
xsel -c -b
# send selection
xdotool getactivewindow key ctrl+c;
if xdotool search --name \"Running R\" windowactivate --sync;
then xdotool key --window 0 ctrl+shift+v Return;
fi
Finally, the open_and_run_r.sh script (with execute permission enabled).
~/bin/open_and_run_r.sh
#!/bin/sh
echo 'Initializing Terminator...'
terminator --title "Running R" --command='R' &
# in this workaround
# I could not find a better way to properly
# catch the terminator ending,
# after it receives the 'Running R' name.
a=0
while [ $a = 0 ]; do
if xdotool search --name 'Running R' windowactivate --sync;
then a=1;
xdotool key --window 0 ctrl+shift+v;
echo 'Initialized.'
fi;
sleep 2; # decreasing the value increases asynchrony behavior.
done
Keymap file, for keyboard shortcuts.
.config/sublime-text-2/Packages/User/Default (Linux).sublime-keymap
[
// R Build / [ctrl+b] see r.sublime-build/source current $file into interactive R
// R build / send current selection to interactive R
{"keys": ["super+'"], "command": "build", "args": {"variant": "r_send_selection"} },
// R build / generate output.Rout and plots files from, non-interactive
{"keys": ["ctrl+shift+b"], "command": "build", "args": {"variant": "r_output"} }
]
Answer: In the main loop of the script,
you don't need the $a variable.
You can use an infinite loop,
and break out of it when the if condition becomes true.
You also don't need the semicolons at the end of the lines.
The script can be simplified to:
while :; do
    if xdotool search --name 'Running R' windowactivate --sync; then
        xdotool key --window 0 ctrl+shift+v
        echo 'Initialized.'
        break
    fi
    sleep 2 # decreasing the value increases asynchrony behavior.
done
As for this comment:
# in this workaround
# I could not find a better way to properly
# catch the terminator ending,
# after it receives the 'Running R' name.
Unfortunately I don't know a better way either. | {
"domain": "codereview.stackexchange",
"id": 11882,
"tags": "json, bash, r"
} |
How can I test my trained network on the next unavailable hour? | Question: I have 695 hours of data. I use the first 694 hours to train the network and the 695th hour to validate it. Now my goal is to predict the next hour.
How I can use my trained network to predict the next hour, that is, the 696th hour (which I do not have access to)?
Answer: You need to have access to the 696th hour (or successive hours), otherwise, you cannot test your model. An alternative would be, for example, to train your model on the first 693 hours, validate it on the 694th hour, and test it on the 695th hour. | {
"domain": "ai.stackexchange",
"id": 1532,
"tags": "neural-networks, machine-learning, prediction, matlab, time-series"
} |
ROC curve for different hyperparameters of `RandomForestClassifier`? | Question: I'm currently trying to train a RandomForestClassifier on a dataset consisting of 5000 instances with 12 (now) encoded features and a binary target label. Through GridSearchCV I found out that
best_parameters = {
    'criterion': 'gini',
    'max_depth': 12,
    'max_features': 'log2',
    'n_estimators': 300
}
works best out of
hyperparameters = {
    "n_estimators": [9, 10, 20, 30, 40, 50, 60, 100, 150, 200, 300, 1000],
    "max_depth": [3, 6, 9, 12, 20],
    "criterion": ["gini", "entropy"],
    "max_features": ["log2", "auto"]
}
This gives a mean_test_score of 0.8546, which is quite good already, I think.
Now I would like to get some kind of visual interpretation like a ROC curve for each parameter. But does it actually make sense in the case of a RandomForestClassifier to create a ROC curve for each hyperparameter? Or are there other ways to tune my classifier?
Answer: I assume you are running classification, and have a binary target variable. If that's the case, it does not make sense to show component ROC curves, because your separation may be based on combinations of 2, 3, or more predictors that individual ROC curves will not reflect. I would show your overall ROC curve, along with perhaps variable importance measures. If you have a handful of predictors that are clear winners, you could re-run your model including only those, and then show that ROC. Otherwise, I don't see what it buys you. | {
"domain": "datascience.stackexchange",
"id": 2317,
"tags": "classification, scikit-learn, random-forest, model-selection"
} |
How to give an upper bound on this bin packing problem? | Question: In the bin packing with fragile objects (BPFO) problem one is given a set of objects $\{1,\ldots,n\}$ where each object $i$ has a weight $w_i$ and a fragility $f_i$, for all $i$ in the set $\{1,\ldots,n\}$. (We assume that $w_i\le f_i$.) Also, we have a set of bins, each of infinite capacity, in which we want to place the objects.
The objective is to assign the objects to the minimum number of bins such that the sum of the weights in each bin is at most the minimum fragility. That is, if we assign the set of objects $S$ to bin $B$ then we must have:
$$\sum_{i\in S}w_i\le\min\limits_{i\in S}f_i.\quad\quad\quad (1)$$
The BPFO problem is given in this paper. Let us denote the optimal value of BPFO by $OPT$.
I am trying to relax BPFO by modifying equation$~(1)$ as follows.
$$\sum_{i\in S}w_i\le f_i,\forall i\in S$$
Let us denote the optimal value of the relaxed problem by $OPTR$.
Now, I would like to upper bound $OPT$ by $OPTR$. How can I achieve this?
It is clear that a solution to BPFO is a solution to the relaxed problem, and hence $OPT\ge OPTR$.
Is there a way to say that $OPT\le \alpha(n)OPTR$ where $\alpha(n)\ge 1$ ?
Answer: Your relaxation is the same as the original problem.
$$\sum_{i\in S}w_i\leq f_j,\quad\forall j\in S$$
is the same condition as
$$\sum_{i\in S}w_i\leq\min\limits_{j\in S}f_j,$$
because
$$\min\limits_{j\in S}f_j \leq f_j,\quad\forall j\in S,$$
and the minimum is itself one of the $f_j$. Hence $OPT = OPTR$, and the bound $OPT\le\alpha(n)\,OPTR$ holds trivially with $\alpha(n)=1$. | {
"domain": "cs.stackexchange",
"id": 6355,
"tags": "approximation, lower-bounds"
} |
Scalar commutation relations with equal times enforced by a delta function | Question: (I'm following these notes by Vadim Kaplunovsky titled "Feynman Propagator of a Scalar Field," specifically asking about equation 10.)
When calculating the time derivative of the Feynman scalar propagator, we get a term $\delta(x^0-y^0)\times\langle 0|[\phi(x),\phi(y)]|0\rangle$. According to Kaplunovsky, the delta function forces the term to be zero at unequal times but the term is zero at equal times also because then the $\phi$'s commute. However, if one just naively plugs in $x^0=y^0$ into the equation, we get an indeterminate form $\infty\times0$. I realize this doesn't make sense because $\delta$ is a distribution, not a function, but is there a mathematical way to show that this term yields zero?
Answer: Because you have a $\delta$-function, the expression you get only makes sense when integrated against a test function. So, consider a test function $g(x^0)$. We have
\begin{align}
\int dx^0\ g(x^0) \langle[\phi(x),\phi(y)]\rangle \delta(x^0-y^0) &= g(y^0)\langle[\phi(y^0,\vec{x}),\phi(y^0,\vec{y})]\rangle\\
&= g(y^0)\times 0\\
&=0\\
\end{align}
where in the first step I used the definition of the delta function, and in the second I used the commutation relation, assuming $\vec{x}\neq\vec{y}$. You're right that naively plugging in $x^0=y^0$ gives an indeterminate form, but that's only because you're dealing with a distribution instead of a function. Since you have a distribution, you learn about it by integrating it against test functions instead of plugging in numbers. Once you accept that, it's pretty easy to see that the distribution is the zero distribution, since it integrates to zero against any test function. | {
"domain": "physics.stackexchange",
"id": 44384,
"tags": "quantum-field-theory, dirac-delta-distributions"
} |
Is there a relation between the mass gap in quantum field theory and the band gap in condensed matter? | Question: I often find papers and textbooks that remark on the resemblance between QFT and condensed matter. I think that this has to do with the fact that both theories use the same tools, namely field theory, renormalization, etc. So, is there any common interpretation or link between the mass gap in the spectrum of a quantum field theory and the band gap of materials in condensed matter?
Answer: The mass of Dirac fermions in high-energy physics is the band gap in condensed matter if we view our universe as a band insulator. Both high-energy physics and condensed matter physics use the same quantum field theory to describe particles/excitations in quantum many-body systems, so many concepts are just the same. However, in condensed matter physics, energy gaps are further divided into different types. For example, the band gap is a gap in the single-particle energy spectrum, and the Mott gap is a gap in the many-body energy spectrum. Both gaps are called "mass" in the high-energy physics. | {
"domain": "physics.stackexchange",
"id": 40100,
"tags": "quantum-field-theory, energy, condensed-matter, mass"
} |
Why does larger permittivity of a medium cause light to propagate slower? | Question: I was wondering about what physically happens when light is transmitted through a non-magnetic medium. Specifically, I’m trying to visualize how materials slow down light as the electromagnetic wave is passing through, and how permittivity affects this. I know that the index of refraction is directly related to relative permittivity, but I’m unclear as to how this parameter affects the speed of propagation.
My understanding of permittivity is that it measures how easily the molecules of the medium can polarize due to the electric field component of light, with larger permittivity meaning easier polarization of the dipole moments. These polarized molecules in turn have a growing/shrinking electric field between the poles that eventually counteracts the initial field that polarized them.
I’m thinking that this time-varying electric field creates a magnetic field, which then creates an electric field, which then creates a magnetic field and so on, and the speed of the light traveling through the medium is dependent on how quickly these fields rise and collapse. This would suggest that my interpretation of a larger permittivity would cause faster propagation, but I know from the equation that a larger permittivity means a larger index of refraction and a slower propagation of light.
My reasoning is flawed, but I’m not sure where I went wrong. I'm thinking that my understanding of permittivity is incorrect. I was hoping someone could shed some light on what physically happens as the waves propagate through a medium, and how this relates to permittivity. If you have any suggestions on websites or links I should look at it would also be greatly appreciated.
Answer: If I can expand a little bit on Sofia's answer: the polarization of the medium opposes time variations in the electric field, thus slowing down the phase velocity of the wave.
This can be seen from Ampere's circuit law (the 4th Maxwell equation) which is central as you stated in arriving at the wave equation describing light. It can be written in vacuum as
$\frac{\partial \mathbf{E}} {\partial t} = \frac{1}{\varepsilon_0\mu_0}\nabla \times \mathbf{B}$.
It says that, physically, the coupling between the time variation of E and the curl of B is inversely proportional to the vacuum permittivity, making it plausible that a larger vacuum permittivity would give a lower phase velocity of the E wave.
To be completely rigorous one still would need to solve the coupled Maxwell equations in the usual way leading to the usual expression of $c$ in terms of $\epsilon_0$ and $\mu_0$ but I believe this gives an argument.
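The scaling argument can be sanity-checked with a few lines of code (the $\varepsilon_r$ values below are round illustrative numbers I'm assuming, not measured material data):

```python
import math

c = 299_792_458.0  # speed of light in vacuum, m/s

def phase_velocity(eps_r, mu_r=1.0):
    # v = 1/sqrt(eps*mu) = c/sqrt(eps_r*mu_r); the index is n = sqrt(eps_r*mu_r).
    return c / math.sqrt(eps_r * mu_r)

# Larger relative permittivity -> larger index n -> slower phase velocity.
for name, eps_r in [("vacuum", 1.0), ("water (optical)", 1.77), ("glass", 2.25)]:
    print(f"{name:16s} n = {math.sqrt(eps_r):.2f}  v = {phase_velocity(eps_r):.3e} m/s")
```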
This can be easily extended to, say, an isotropic medium, in which the medium polarization works in the same way as increasing the vacuum permittivity. In short, in a medium with relative permittivity > 1 the polarization opposes the rate at which the magnetic field causes the electric field to change over time. | {
"domain": "physics.stackexchange",
"id": 67209,
"tags": "electromagnetism, optics, speed-of-light"
} |
Which types of steel would be most appropriate for blunt cutting weaponry on a combat robot? | Question: I'm getting more serious about a combat robot design I've been contemplating for several years now (BattleBots, Robot Wars, etc) and find myself overwhelmed by the sheer variegation of ASTM and AISI grades and alloys. I hope that by detailing the application, those familiar with the nomenclature can steer me in the right direction.
The overall design is a full-body spinner - somewhere around 0.75m in diameter, at least 2500 RPM. (Approximately 205 MPH weapon speed, though this is a minimum target.) The circumference is lined with five or six solid-mount blades approximating the size and shape of forestry mauls, though slightly thinner - while primarily relying upon inertia and brute force, I would like to retain some degree of sharpness/cutting.
Structural integrity is the key attribute, remaining intact (if deformed to some degree) in spite of high-speed impacts with fixed hazards, opponents, and/or similar spinning weaponry. Any significant separation would be of serious concern owing to the high-speed rotation.
The blades will be mounted to the outside of the body with high-strength bolts, anchoring roughly half the length and 60% the mass of each individual blade. (I have no intention of ever welding these pieces as even a fractured blade would be swiftly replaced.)
Withstanding impacts and remaining affixed with minimal distortion is highly desirable. Retaining a sharp edge would be nice, but this is mostly wishful thinking - for the sake of other traits, some sacrifice must be made.
What little I understand of this sort of metallurgy has me looking at ASTM-S1, which seems to be recommended for high-speed impacts and sudden shock-loading. (Also assuming those terms mean what I think they do.)
Any guidance/recommendations in this department would be greatly appreciated.
Answer: Because you are not building thousands of these bots, you need to look at materials that are already heat-treated for strength and toughness. Thinking just of the material and not the form: rear axles (of rear-wheel-drive vehicles), torsion bars, leaf and coil springs. Construction equipment: bulldozer blades (the edge, if it is a separate material from the bulk), bucket teeth (if replaceable), bulldozer treads, and the blades/arms of the big chipper-shredders (hammer mills) used to chop tree trunks (Asplendunk is one name). The chipper is almost what you are making. These are educated guesses; all likely have more carbon than is optimal for toughness, but they give some ideas of where to look. | {
"domain": "engineering.stackexchange",
"id": 1660,
"tags": "materials, steel, robotics"
} |
What is the name of this creature? | Question:
What is the name of this creature?
It looks like either a crustacean or a conch.
Answer: Nautilus.
Nautilus belauensis. Licensed under CC Attribution-ShareAlike 3.0 Unported, via Wikipedia 2022 from user Manuae.
Mollusca -> Cephalopoda (like squid and octopus) -> Nautilaceae -> Nautilidae - comprising six extant species.
They range the 200 m to 700 m depths of the Indo-Pacific, scavenging carrion.
They seem to have been more or less the same over the last 500 million years according to the fossil record - then being up to 2.5 metres in shell diameter; they are more modest now, at a maximum of 22 cm (10 inches) when fully grown.
They move via jet propulsion, drawing water in through their hyponome and then expelling it with force, as seen in the "front view" here:
From Profberger at English Wikipedia, freely licensed. | {
"domain": "biology.stackexchange",
"id": 11996,
"tags": "species-identification, zoology, molluscs"
} |
Effect of isotopes on H-Bonding | Question: The following question was asked in one of the assignments my teacher has given.
Acetone ($\ce{Me2CO^16}$) on treatment with $\ce{H2O^18}$ gives a mixture of $\ce{Me2CO^16}$ and $\ce{Me2CO^18}$, the
latter being in slight excess. This may be explained by
(A) hydration of acetone is an equilibrium process
(B) $\ce{C – O^18}$ bond is slightly stronger than $\ce{C – O^16}$
(C) $\ce{Me2CO^18}$ forms stronger hydrogen bonds than $\ce{Me2CO^16}$ with water
(D)hydration of acetone is irreversible
I have a doubt about option (C). According to me, (C) should be correct, but it is marked incorrect. Why is $\ce{^18O...H}$ bonding weaker than $\ce{^16O...H}$ bonding?
Answer: Hydration of acetone is an equilibrium process.
$$\ce{(CH3)2C^16O + H2^18O <=> (CH3)2C^18O + H2^16O }$$
The equilibrium constant of this reaction will depend on how the bond strengths of the $\ce{O-H}$ bond in water and the $\ce{O=C}$ bond in acetone is affected by isotope exchange. The isotope-dependent strength of hydrogen bonds might be a secondary effect, as well.
The ratio of heavy to light acetone will not only be dependent on the equilibrium constant, but also on the ratio of acetone to heavy water initially present.
So none of the statements are sufficient to explain the observation completely. Statements (a) and (b) are correct. Statement (c) is not directly related to the ratio of the species. You also have to consider the isotope effect on the strength of hydrogen bonds with water (as one acetone changes from light to heavy, one water changes from heavy to light). | {
"domain": "chemistry.stackexchange",
"id": 14513,
"tags": "organic-chemistry, hydrogen-bond"
} |
Merging sorted arrays in Python | Question:
def merge_arrays(list1, list2):
len_list1 = len(list1); len_list2 = len(list2)
merge_len = len_list1 + len_list2
merge_list = []
l1_ptr = 0
l2_ptr = 0
# import pdb; pdb.set_trace()
while(l1_ptr <= len_list1-1 and l2_ptr <= len_list2-1):
if (list1[l1_ptr] <= list2[l2_ptr]):
merge_list.append(list1[l1_ptr])
l1_ptr += 1
elif (list1[l1_ptr] > list2[l2_ptr]):
merge_list.append(list2[l2_ptr])
l2_ptr += 1
if l1_ptr > len_list1-1: #list1 exhausted
for item in list2[l2_ptr:]:
merge_list.append(item)
else:
for item in list1[l1_ptr:]:
merge_list.append(item)
return merge_list
I am trying to merge sorted arrays in Python. How can I improve this? Honestly, it looks like I've written this in C and not in Python.
Answer: various
merge_len is unused
the extra parentheses around the simple checks are unnecessary
l1_ptr <= len_list1-1 can be made clearer as l1_ptr < len_list1
using the variable name l1_ptr to save a few characters while making it harder to guess from the name what it does is not useful
Working with the indices directly is not really pythonic indeed. You can make this more generic, using iter and next, so that it works for all iterables.
typing
add typing information:
import typing
T = typing.TypeVar("T")
def merge_sorted_iterables(
iterable1: typing.Iterable[T], iterable2: typing.Iterable[T]
) -> typing.Iterable[T]:
This is extra explanation for the user of this function (and his IDE).
docstring
Add some explanation on what the method does, expects from the caller, and returns.
def merge_sorted_iterables(
iterable1: typing.Iterable[T], iterable2: typing.Iterable[T]
) -> typing.Iterable[T]:
"""Merge 2 sorted iterables.
The items in the iterables need to be comparable (and support `<=`).
...
"""
iterator
Instead of keeping track of the index you can use iter and next. You don't even need to add the items to a list, you can yield them, so the caller of the method can decide in what way he wants to use this.
done = object()
iterator1 = iter(iterable1)
iterator2 = iter(iterable2)
item1 = next(iterator1, done)
item2 = next(iterator2, done)
while item1 is not done and item2 is not done:
if item1 <= item2:
yield item1
item1 = next(iterator1, done)
else:
yield item2
item2 = next(iterator2, done)
Then all that needs to be done is continue the iterator that is not finished
if item1 is not done:
yield item1
yield from iterator1
if item2 is not done:
yield item2
yield from iterator2
import typing
T = typing.TypeVar("T")
def merge_sorted_iterables(
iterable1: typing.Iterable[T], iterable2: typing.Iterable[T]
) -> typing.Iterable[T]:
"""Merge 2 sorted iterables.
The items in the iterables need to be comparable (and support `<=`).
...
"""
done = object()
iterator1 = iter(iterable1)
iterator2 = iter(iterable2)
item1 = next(iterator1, done)
item2 = next(iterator2, done)
while item1 is not done and item2 is not done:
if item1 <= item2:
yield item1
item1 = next(iterator1, done)
else:
yield item2
item2 = next(iterator2, done)
if item1 is not done:
yield item1
yield from iterator1
if item2 is not done:
yield item2
yield from iterator2
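As an aside (my addition, not something this review relies on): the standard library already provides a lazy merge of sorted iterables, heapq.merge, which also handles the descending case via reverse=True - worth knowing before hand-rolling one:

```python
import heapq

print(list(heapq.merge([0, 2, 4], [1, 3, 5])))                # ascending merge
print(list(heapq.merge([4, 2, 0], [5, 3, 1], reverse=True)))  # descending merge
print("".join(heapq.merge("ace", "bdf")))                     # works on any iterables
```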
testing
You can test the behaviour, starting with the simplest cases:
import pytest
def test_empty():
expected = []
result = list(merge_sorted_iterables([], []))
assert result == expected
def test_single():
expected = [0, 1, 2]
result = list(merge_sorted_iterables([], range(3)))
assert expected == result
result = list(merge_sorted_iterables(range(3), [],))
assert expected == result
def test_simple():
expected = [0, 1, 2, 3, 4, 5]
result = list(merge_sorted_iterables([0, 1, 2], [3, 4, 5]))
assert result == expected
result = list(merge_sorted_iterables([0, 2, 4], [1, 3, 5]))
assert result == expected
result = list(merge_sorted_iterables([3, 4, 5], [0, 1, 2],))
assert result == expected
def test_string():
expected = list("abcdef")
result = list(merge_sorted_iterables("abc", "def"))
assert result == expected
result = list(merge_sorted_iterables("ace", "bdf"))
assert result == expected
result = list(merge_sorted_iterables("def", "abc",))
assert result == expected
def test_iterable():
expected = [0, 1, 2, 3, 4, 5]
result = list(merge_sorted_iterables(iter([0, 1, 2]), iter([3, 4, 5])))
assert result == expected
result = list(merge_sorted_iterables(iter([0, 2, 4]), iter([1, 3, 5])))
assert result == expected
result = list(merge_sorted_iterables(iter([3, 4, 5]), iter([0, 1, 2]),))
assert result == expected
def test_comparable():
with pytest.raises(TypeError, match="not supported between instances of"):
list(merge_sorted_iterables([0, 1, 2], ["a", "b", "c"]))
descending
Once you have these test in place, you can easily expand the behaviour to also take descending iterables:
import operator
def merge_sorted_iterables(
iterable1: typing.Iterable[T],
iterable2: typing.Iterable[T],
*,
ascending: bool = True,
) -> typing.Iterable[T]:
"""Merge 2 sorted iterables.
The items in the iterables need to be comparable.
...
"""
done = object()
iterator1 = iter(iterable1)
iterator2 = iter(iterable2)
item1 = next(iterator1, done)
item2 = next(iterator2, done)
comparison = operator.le if ascending else operator.ge
while item1 is not done and item2 is not done:
if comparison(item1, item2):
yield item1
item1 = next(iterator1, done)
else:
yield item2
item2 = next(iterator2, done)
if item1 is not done:
yield item1
yield from iterator1
if item2 is not done:
yield item2
yield from iterator2
I added the ascending keyword as a keyword-only argument to avoid confusion and keep backwards compatibility
One of its tests:
def test_descending():
expected = [5, 4, 3, 2, 1, 0]
result = list(
merge_sorted_iterables([2, 1, 0], [5, 4, 3], ascending=False)
)
assert result == expected
result = list(
merge_sorted_iterables([4, 2, 0], [5, 3, 1], ascending=False)
)
assert result == expected
result = list(
merge_sorted_iterables([5, 4, 3], [2, 1, 0], ascending=False)
)
assert result == expected | {
"domain": "codereview.stackexchange",
"id": 39276,
"tags": "python"
} |
How to merge .fastq.gz files into a single .fastq.gz with the same id, without losing any content, in parallel | Question: I have a large number of .fastq.gz files from different lanes and reads. I have to merge the files of each read group into a single .fastq.gz file.
e.g.:
1st type
NA24694_GCCAAT_L001_R1_001.fastq.gz
NA24694_GCCAAT_L001_R1_002.fastq.gz
…....
…....
NA24694_GCCAAT_L001_R1_033.fastq.gz
2nd type
NA24694_GCCAAT_L001_R2_001.fastq.gz
NA24694_GCCAAT_L001_R2_002.fastq.gz
…....
…....
NA24694_GCCAAT_L001_R2_027.fastq.gz
3rd type
NA24694_GCCAAT_L002_R1_001.fastq.gz
NA24694_GCCAAT_L002_R1_002.fastq.gz
…....
…....
NA24694_GCCAAT_L002_R1_040.fastq.gz
4th type
NA24694_GCCAAT_L002_R2_001.fastq.gz
NA24694_GCCAAT_L002_R2_002.fastq.gz
…....
…....
NA24694_GCCAAT_L002_R2_040.fastq.gz
so now i need to merge all the corresponding files into single file.
Output:
EA00694_GCCAAT_L001_R1.fastq.gz
EA00694_GCCAAT_L001_R2.fastq.gz
EA00694_GCCAAT_L002_R1.fastq.gz
EA00694_GCCAAT_L002_R2.fastq.gz
I tried "cat" but it's not possible to do for large files. Could anyone help me run a script in parallel in Perl, Python, or shell, or suggest any Linux command which can be used for large files?
Thanks all.
Answer: All you need is cat. You won't find any better tool for a simple job like this. Just run:
cat NA24694_GCCAAT_L001_R1*fastq.gz > EA00694_GCCAAT_L001_R1.fastq.gz
cat NA24694_GCCAAT_L001_R2*fastq.gz > EA00694_GCCAAT_L001_R2.fastq.gz
cat NA24694_GCCAAT_L002_R1*fastq.gz > EA00694_GCCAAT_L002_R1.fastq.gz
cat NA24694_GCCAAT_L002_R2*fastq.gz > EA00694_GCCAAT_L002_R2.fastq.gz
That should work just fine. You say you tried cat and it "didn't work" but since you don't tell us how it failed, I can't really help. The only issues I can think of is that either there are too many files (as in several hundred thousand, whatever the value returned by getconf ARG_MAX on your system which is 2097152 on mine) or, more likely, you are running out of disk space.
If it's a disk space issue, you might be able to get around it by adding each file and then deleting it:
for lane in L001 L002; do
    for read in R1 R2; do
        for file in NA24694_GCCAAT_${lane}_${read}_*fastq.gz; do
            cat "$file" >> EA00694_GCCAAT_${lane}_${read}.fastq.gz && rm "$file"
        done
    done
done
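The reason plain cat works on .gz files at all is that the gzip format permits multiple members back to back, and decompressors read them as one stream. A quick sketch with Python's gzip module (the two fake FASTQ records are made up for illustration):

```python
import gzip

# Two independently gzipped chunks, like two lane/part files...
part1 = gzip.compress(b"@read1\nACGT\n+\nIIII\n")
part2 = gzip.compress(b"@read2\nTTTT\n+\nIIII\n")

# ...concatenated byte-for-byte (exactly what `cat` does) still form
# one valid multi-member gzip stream containing both records.
merged = part1 + part2
print(gzip.decompress(merged).decode())
```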
IMPORTANT: the command above is destructive. It will delete each file after it has been added to the new one. If everything works fine that's not a problem, but if I've made a mistake or your file names are slightly different you might lose your data. So I strongly urge you to simply run the cat commands in the first section on a machine with enough disk space to deal with them. | {
"domain": "bioinformatics.stackexchange",
"id": 987,
"tags": "bash, shell, data-management"
} |
How are processor instructions stored in RAM? | Question: I've recently been designing a simple 8-bit microprocessor, similar to the Intel 8008. It doesn't make use of anything as advanced as pipelining, as my knowledge isn't yet at the level to implement it.
The address length is 14 bits, and instruction lengths range from 1 to 3 bytes.
I've been researching, and I found that instructions generally span across multiple addresses in memory. For example, a 2 byte instruction could store the opcode-byte in one address, and a one-byte data value in the following.
However, when fetching an instruction from memory, does this mean that the processor has to make up to 3 memory calls just for the fetch stage?
I.e. - loading the MAR with the PC address, incrementing the PC, and loading from memory into part of the instruction register, repeated up to three times.
Or, is there a different way to store the instructions to get around this? Just to note, my processor also doesn't use any caching, meaning it has to get instructions and data exclusively from RAM.
Also, how would this work for jump instructions? If I want to jump to a specific instruction in the program, how do I know its address in memory, given all the other variable-length instructions?
I would appreciate feedback on any of these questions.
Answer: Some processors - generally, 32-bit RISC ones - make sure that all instructions fit in one location in memory. This works for them because 32 bits is enough to fit plenty of instructions, and it makes it easy for the processor to load instructions. All the instruction operands are included as part of the instruction. The instructions are often bigger than they need to be, but the simplicity can be worth it.
8-bit processors don't do this, because you can't really fit enough instructions in only 8 bits. You can only fit 256 instructions, in fact. If they have 4-bit operands (not enough) you can only fit 16 instructions (also not enough). So multi-byte instructions are a necessity.
However, 256 is plenty of opcodes. In the 8-bit era it was typical that a CPU would read a 1-byte opcode, and then the operands (if any) would be in the next 1 or 2 bytes.
I.e. - loading the MAR with the PC address, incrementing the PC, and loading from memory into part of the instruction register, repeated up to three times.
Almost. You could do this. But 8-bit CPUs generally do not have 3-byte instruction registers. Instead, the instructions are designed so that only the opcode needs to be stored in the opcode register, and the operand bytes can go directly to their final destination. An instruction like ADD BC, 1234 might process the bytes like this:
load the MAR with the PC address, increment the PC, and load from memory into the opcode register
load the MAR with the PC address, increment the PC, load from memory into the ALU right input, load from register C into the ALU left input, set the ALU mode to ADD, store from the ALU output into register C
load the MAR with the PC address, increment the PC, load from memory into the ALU right input, load from register B into the ALU left input, set the ALU mode to ADC (add-with-carry), store from the ALU output into register B
To keep track of the steps, you might find it helpful to add a step counter "mini-register" which increments each cycle, except at the end of the instruction when it resets. Then you can use the opcode register and the step counter and combinational logic to select what the CPU should do in the current clock cycle. If you use a ROM chip instead of combinational logic, the contents of the ROM are called microcode.
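A toy simulation of that fetch/step scheme (my sketch - the opcode value and register set are made up, not any real ISA):

```python
ADD_BC_IMM16 = 0x01  # made-up opcode for "ADD BC, imm16"

class ToyCPU:
    def __init__(self, ram):
        self.ram, self.pc = ram, 0
        self.b = self.c = 0

    def fetch(self):
        # One memory access per cycle: MAR <- PC, read RAM, PC += 1.
        byte = self.ram[self.pc]
        self.pc += 1
        return byte

    def step(self):
        opcode = self.fetch()          # cycle 1: opcode -> opcode register
        if opcode == ADD_BC_IMM16:
            lo = self.fetch()          # cycle 2: low operand byte -> ALU, ADD
            total = self.c + lo
            self.c, carry = total & 0xFF, total >> 8
            hi = self.fetch()          # cycle 3: high byte -> ALU, add-with-carry
            self.b = (self.b + hi + carry) & 0xFF

cpu = ToyCPU([ADD_BC_IMM16, 0x34, 0x12])  # ADD BC, 0x1234 (little-endian operand)
cpu.step()
print(hex(cpu.b), hex(cpu.c))  # 0x12 0x34
```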
For jump and call instructions specifically (with 16-bit addresses), the Z80 CPU uses an extra, hidden 16-bit register called WZ. Normally you would load 16 bits from memory into a register by loading one half, then the other half. But when you are loading into the program counter, you can't do this, because if you load the first half of the program counter then it messes up the address you load the second half from. So it loads the halves into WZ and then transfers WZ to PC.
To save having a WZ register, a different idea is to make the programmer load the address into a register, then jump to the address in the register. You can still have relative jumps (add an 8-bit value to the PC) and page-relative jumps (loading the lower 8 bits only) without needing any extra registers.
Also, how would this work for jump instructions? If I want to jump to a specific instruction in the program, but don't know the address in memory because of all the other multi-length instructions
You do know the address. Each instruction has an address, and you tell the CPU to jump to the address of the instruction. | {
"domain": "cs.stackexchange",
"id": 20501,
"tags": "memory-management, cpu, memory-access, instruction-set"
} |
Implementing a Treeview component using React | Question: Today after watching a getting started course about React, I tried to code my very first React component, a "treeview" as the title says. It does work, but as a newbie I know there are many improvements that can be made.
It receives a data array like the one below:
[{
descp: 'UNE',
url: '/tree/une',
icon: 'icon-list',
children: [
{
descp: 'ATI',
url: '/tree/une/ati',
icon: 'icon-layers',
children: [{
descp: 'ATI-VC',
url: '/tree/une/ati/ativc',
icon: 'icon-pin',
children: []
},
{
descp: 'ATI-CMG',
url: '/tree/une/ati/aticmg',
icon: 'icon-pin',
children: []
}
]
},
{
descp: 'EMGEF',
url: '/tree/une/emgef',
icon: 'icon-layers',
children: []
},
{
descp: 'ECIE',
url: '/tree/une/ecie',
icon: 'icon-layers',
children: []
},
{
descp: 'GEYSEL',
url: '/tree/une/geysel',
icon: 'icon-layers',
children: []
}
]
}]
Maybe the data format can be improved as well, but the real deal is the component:
const MyTree = (props) => {
const { list } = props;
return (
list.map((child,i)=>
<MyTreeNestedItems key={i} data={child} />
)
);
};
const MyTreeNestedItems = (props) => {
const { data } = props;
const createChildren = (list) => {
return (
list.map((child,i)=>
<MyTreeNestedItems key={i} data={child} />
)
)
}
let children = null;
if (data.children.length) {
children = createChildren(data.children);
}
return (
<li className="nav-item">
<a className="nav-link" href="#">
<span className={data.icon}></span> {" "}
{ data.descp }
</a>
<ul style={{listStyleType:"none"}}>{children}</ul>
</li>
);
};
render(<MyTree list={tree} />,document.querySelector('.js-mytree'));
Any tips would be gratefully appreciated.
Answer: You might notice that createChildren is almost the same as MyTree -- the only difference is that createChildren takes its argument directly instead of as a props object. So with a small change we can remove createChildren entirely:
const MyTreeNestedItems = (props) => {
const { data } = props;
let children = null;
if (data.children.length) {
children = <MyTree list={data.children} />;
}
return (
<li className="nav-item">
<a className="nav-link" href="#">
<span className={data.icon}></span> {" "}
{ data.descp }
</a>
<ul style={{listStyleType:"none"}}>{children}</ul>
</li>
);
}
This can be a bit more compact as so:
const MyTreeNestedItems = ({ data }) => {
return (
<li className="nav-item">
<a className="nav-link" href="#">
<span className={data.icon}></span> {" "}
{ data.descp }
</a>
<ul style={{listStyleType:"none"}}>
{data.children.length ? <MyTree list={data.children} /> : null}
</ul>
</li>
);
};
Currently this code renders invalid HTML -- it renders with a <li> as the top level element but <li> should only be contained in a <ul>, <ol>, or <menu>.
Looking at the code so far, <MyTree> is rendered in two places -- it may be tempting to just add another <ul> around <MyTree list={tree} /> in the render() call. But then we would have two occurrences of a structure like this: <ul><MyTree/></ul>. It'd be easier to just move the <ul> into the <MyTree/>.
With that, the code now looks like this:
const MyTree = ({ list }) => {
return (
<ul style={{listStyleType:"none"}}>
{list.map((child,i)=> <MyTreeNestedItems key={i} data={child} />)}
</ul>
);
};
const MyTreeNestedItems = ({ data }) => {
return (
<li className="nav-item">
<a className="nav-link" href="#">
<span className={data.icon}></span> {" "}
{ data.descp }
</a>
{data.children.length ? <MyTree list={data.children} /> : null}
</li>
);
};
render(<MyTree list={tree} />, document.querySelector('.js-mytree'));
From here it's a matter of taste. Both of the components are just directly returning a value, so we can use short arrow function syntax now: (...) => (value) instead of (...) => { return (value); }
Also, the components are now simple enough that I would just combine the two.
All in, that leaves us with this final version of the code:
const MyTree = ({ list }) => (
<ul style={{listStyleType:"none"}}>
{list.map(({icon, descp, children}, i) => (
<li className="nav-item" key={i}>
<a className="nav-link" href="#">
<span className={icon} /> {descp}
</a>
{children.length ? <MyTree list={children} /> : null}
</li>
))}
</ul>
);
render(<MyTree list={tree} />, document.querySelector('.js-mytree')); | {
"domain": "codereview.stackexchange",
"id": 35714,
"tags": "javascript, react.js, jsx"
} |
What type of animal is this? | Question: There was a leaking fire hydrant across a sidewalk. On the other side was a lake. The fire hydrant water was streaming across the sidewalk.
I saw it swimming across the sidewalk, away from the lake, toward the fire hydrant.
What is it?
Answer: Likely a dragonfly larva, but I don't know the specific type. They live in water and eat anything they catch like small fish. | {
"domain": "biology.stackexchange",
"id": 11604,
"tags": "species-identification, zoology, arthropod"
} |
Multipole expansion of the magnetic energy of a distribution of current density | Question: My professor left me, as an exercise, the multipole expansion of the energy of a distribution of current density in a magnetic field, but I can't even figure out how to start.
The energy of the distribution is:
$$
U = - \int \vec{j} \cdot \vec{A} d^3 x
$$
so I start expanding the potential:
$$
\vec{A} = \vec{A}(0) + [\vec{x}\cdot\nabla] \vec{A} +\frac{1}{6} \sum_{i,j} (x_i x_j - |x|^2\delta_{i,j})\frac{\partial^2\vec{A}}{\partial_{x_i} \partial_{x_j}}
$$
And the energy can be decomposed into:
$$
U = - \int \vec{j} d^3 x \cdot \vec{A_0} -\int[\vec{j}\cdot \vec{x}\cdot\nabla]A (0) + \dots
$$
The task is to obtain the magnetic analogue of the energy of a distribution of charge:
$$
U=qV(0) - \vec{p}\cdot\vec{E}(0) +\frac{1}{6}Q:\nabla\vec{E}(0)
$$
The field produced by the distribution is negligible; only the energy of interaction between the distribution and the external field has to be taken into account.
I don't know how to handle even the second term of the expansion: how can I recover the vector $\vec{B}$ if there isn't any curl in that expression? I searched for vector identities but found nothing useful. I feel this exercise is beyond my current abilities.
Answer: Things get pretty confusing if you attempt to do things in full vectorial notation, so I'll stick exclusively to component notation, with Einstein summations understood.
You want to start with the expression
$$
\newcommand{\ue}{\hat{\mathbf e}}\newcommand{\z}{\mathbf 0}
\mathbf A(\mathbf r)=\ue_jA_j(\z)+\ue_jx_k\frac{\partial A_j}{\partial x_k}(\z)+\ue_j x_kx_l\frac{\partial^2 A_j}{\partial x_k\partial x_l}(\z)+ \cdots,
$$
and introduce it into your integral
$$
U=\int \mathbf j(\mathbf r)\cdot\mathbf A(\mathbf r)\mathrm d\mathbf r
$$
to get a multipole series. I will do the monopole and dipole terms and, since you're playing the fuzzy-definition game with the quadrupoles, leave those to you.
The monopole term is easy, and it comes from the zeroth-order term in the integral, which reads
$$
U_0=\int \mathbf j(\mathbf r)\cdot\mathbf A(\z)\mathrm d\mathbf r=\mathbf A(\z)\cdot\int \mathbf j(\mathbf r)\mathrm d\mathbf r,
$$
and which vanishes because you're in a static situation. To prove that this vanishes, consider the $k$th component of the integral of the current, exactly as in this question, giving you
$$
\int j_k(\mathbf r)\mathrm d\mathbf r
=
\int \ue_k\cdot \mathbf j(\mathbf r)\mathrm d\mathbf r
=
\int \left[\nabla\cdot\left(x_k\mathbf j(\mathbf r)\right)-x_k\nabla\cdot\mathbf j(\mathbf r)\right]\mathrm d\mathbf r;
$$
here the second term vanishes because $\nabla\cdot\mathbf j(\mathbf r)=0$, and the first term goes over into the surface integral $\oint_\infty x_k\mathbf j(\mathbf r) \cdot\mathrm d\mathbf S$ at infinity, through the divergence theorem, and this surface integral vanishes since your current is spatially bounded.
The first nontrivial term is the dipole term, which comes from
$$
U_1
= \int j_k(\mathbf r) x_l\frac{\partial A_k}{\partial x_l}(\z) \mathrm d \mathbf r
= \frac{\partial A_k}{\partial x_l}(\z) \int x_lj_k(\mathbf r) \mathrm d \mathbf r,
$$
and this already separates into a field times a moment of the current, but it's obviously not in the form we want it to be in; instead, we want it in the form $\mathbf B(\z)\cdot \mathbf m$, where $\mathbf B = \nabla\times\mathbf A$ and
$$
\mathbf m = \frac12 \int \mathbf r\times\mathbf j(\mathbf r)\mathrm d\mathbf r.
$$
If you work out what we want to get, in all its glory, it's actually not that different, because it reads
\begin{align}
\mathbf B\cdot\mathbf m
& =
\varepsilon_{ikl}\frac{\partial A_l}{\partial x_k}(\z) \,\frac12 \int \varepsilon_{imn}x_mj_n(\mathbf r)\mathrm d\mathbf r
\\& =
\frac12(\delta_{km}\delta_{ln}-\delta_{kn}\delta_{lm})\frac{\partial A_l}{\partial x_k}(\z) \, \int x_mj_n(\mathbf r)\mathrm d\mathbf r
\\& =
\frac12\left[\frac{\partial A_l}{\partial x_k}(\z) \, \int x_kj_l(\mathbf r)\mathrm d\mathbf r
-\frac{\partial A_l}{\partial x_k}(\z) \, \int x_lj_k(\mathbf r)\mathrm d\mathbf r\right],
\end{align}
and this is pretty close to what we have, except for that pesky term with the switched indices. To get a term of that form, you can use an integration-by-parts term as above, which looks as follows: we pull out of our hat the quantity $x_lx_k \,\mathbf j(\mathbf r)$ (much as above we used $x_k \,\mathbf j(\mathbf r)$) and we calculate the integral of its divergence, giving
\begin{align}
\int \nabla\cdot\left(x_lx_k \,\mathbf j(\mathbf r) \right)\mathrm d \mathbf r
& =
\int x_lj_k(\mathbf r) \mathrm d \mathbf r
+\int x_kj_l(\mathbf r) \mathrm d \mathbf r
+\int x_lx_k\,\nabla\cdot\mathbf j(\mathbf r) \mathrm d \mathbf r
.
\end{align}
Here the left-hand side vanishes, because we can turn it into a surface integral over a surface away from the domain of the current, and the third term on the right also vanishes because the current is static and conserves charge; this means, then, that
$$
\int x_lj_k(\mathbf r) \mathrm d \mathbf r
+\int x_kj_l(\mathbf r) \mathrm d \mathbf r
=0,
$$
and we're essentially done - all that's left to do is to connect the dots.
And, as you can see, trying to do this one order above has nothing but a promise of pain in it, particularly because magnetic quadrupole fields and moments are used relatively rarely and that means that there aren't particularly solid conventions about how exactly one should define the moments, which then makes people a bit more reluctant to use the fields, and it all snowballs down.
To be frank, the magnetic-dipole calculation I've just done (and the electrostatic quadrupole you did in class) is really the highest order to which it makes sense to do the calculation explicitly; if you want to go higher, you really should be setting up a full calculation with arbitrary $l$, with consistent and clear definitions of both the current moments and the external fields at arbitrary multipolarities. Even there, though, it's remarkable that even Jackson forgoes going into those waters, which should tell you something about how worthwhile it is to go down that road. | {
"domain": "physics.stackexchange",
"id": 38703,
"tags": "magnetic-moment, multipole-expansion"
} |
Mathematical (simple?) vs physical pendulums | Question: I've looked long and hard for what a mathematical pendulum is but no site clearly has the name "mathematical" pendulum but it's all over the book "Problems in General Physics" by I.E. Irodov .
Reading a few sites I get the impression that mathematical and simple pendulums are the same thing.
Is this true?
Answer: It’s just a translation issue from common Russian terminology into English terminology. While the literal translation of “математический маятник” is “mathematical pendulum,” the terminologically correct translation is simply “pendulum”. | {
"domain": "physics.stackexchange",
"id": 76661,
"tags": "terminology, definition, oscillators"
} |
How does pressure change with depth in earth? | Question: I've learned in school that pressure in water changes like
$$p(h) = \rho g h$$
where $h$ is depth in meters, $\rho$ is density (e.g. 1000 $\frac{\text{kg}}{\text{m}^3}$ for water) and $g$ is gravitation acceleration ($\approx 9.81 \frac{\text{m}}{\text{s}^2}$) and $p$ is the pressure in Pascal.
I guess there is no similar law for pressure in earth as it is too different, depending on where you are. But is there a rule of thumb? What do engineers who build tunnels / underground stations do?
Answer:
I guess there is no similar law for pressure in earth as it is too different, depending on where you are. But is there a rule of thumb? What do engineers who build tunnels / underground stations do?
I approach this question as an engineer who does a lot of work on buried pipes and occasionally has to qualify buried structures for nuclear power plants. Also, for the sake of brevity, I assume you are talking about only vertical loads on the structure (lateral loads are another complicated topic for foundation engineering).
Soil can act similarly to fluid, depending on the soil type and even the type of structure that is being loaded.
For example, flexible pipes such as PVC, HDPE, and steel can be assumed to be loaded by the soil prism directly above the pipe. Piping is considered flexible if it can sustain a sizable deformation of its cross-section without rupturing. Consider the image below from Moser & Folkman's Buried Pipe Design, 3rd Edition (1):
In this case, since the pipe is considered more flexible than the soil, the pipe deforms under load such that no arching of the soil takes place. As such, the load on the pipe is simply the soil density times the depth of soil, like in your example.
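For a flexible pipe, that prism load takes the same unit-weight-times-depth form as the $p = \rho g h$ formula in the question. A minimal sketch, assuming a typical soil unit weight (the value below is illustrative, not taken from any reference):

```python
# Prism load on a flexible buried pipe: vertical pressure = unit weight x depth,
# the same form as p = rho * g * h.  gamma_soil is an assumed typical value.
gamma_soil = 18_000.0  # N/m^3  (~1835 kg/m^3 times 9.81 m/s^2)
for depth_m in (1.0, 3.0, 10.0):
    p = gamma_soil * depth_m
    print(f"{depth_m:5.1f} m of cover -> {p / 1000:6.1f} kPa")
```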
Things get more complicated for so-called rigid pipes, such as concrete pipe or transite (asbestos-cement) pipe. In this case, the rigidity of the pipe is such that the soil on the sides of the pipe settles more than the diameter of the pipe itself, and the pipe takes extra loading via soil arching. Below I've pasted another image from Moser & Folkman (1) illustrating this phenomenon.
The loading on the pipe depends on how it was buried (positive projection, trench, induced trench, etc.) and is really beyond the scope of this answer. I've included a couple of references at the end of this answer for further reading.
For larger structures such as your tunnels or subway stations, determining the soil load is more complicated. Are there adjacent structures applying load? Has there been something done to stabilize the soil? How are the different soil strata interacting, and how does the relative stiffness of each impact the total load? If tunneling through rock, can the rock support itself without further reinforcement?
All of these considerations and more that I can't think of at the moment come into play when determining the load on a buried structure. There is no true rule of thumb when it comes to designing a buried structure since there are so many considerations when it comes to the actual loading.
Further Reading
1.) Moser, A.P. & Steven Folkman, Buried Pipe Design, 3rd Edition.
2.) Marston, A. & A. O. Anderson, The Theory of Loads on Pipes in Ditches and Tests of Cement and Clay Drain Tile and Sewer Pipe, Feb. 1913.
3.) Clarke, N.W.B., Buried Pipelines: A Manual of Structural Design and Installation, 1968. | {
"domain": "engineering.stackexchange",
"id": 1041,
"tags": "civil-engineering, pressure"
} |
Why is there a size limitation on animals? | Question: Why is there a size limitation on human/animal growth? Assuming the technology exists for man to grow to 200 feet high, it's pretty much a given that the stress on the skeletal structure and joints wouldn't be possible to support the mass or move...but WHY is this? if our current skeletal structures and joints can support our weight as is, wouldn't a much larger skeletal structure do the same assuming it's growing in proportion with the rest of the body? And why wouldn't a giant person be able to move like normal sized humans do? (I'm honestly thinking Ant Man, or even the non-biological sense of mechs/gundams/jaegers)...I'm just having a hard time grasping why if it were possible to grow to gigantic sizes or create giant robots, why it then wouldn't be possible for them to move.
Answer: The following fact lies at the heart of this and many similar issues with sizes of things: Not all physical quantities scale with the same power of linear size.
Some quantities, like mass, go as the cube of your scaling - double every dimension of an animal, and it will weigh eight times as much. Other quantities only go as the square of the scaling. Examples of this latter category include
Muscle strength (a longer muscle can exert no more force than a shorter one of equal cross sectional area),
Heart pumping ability (the heart is not solid but rather hollow, so the amount of muscle powering it goes as the surface area),
The compression/tension that can be safely transmitted by a bone (material strength is intrinsic and independent of size, so the pressure that can be supported is constant, so the force - cross sectional area times pressure - that can be supported goes as the square of size), and
The ability to exchange material and heat with the environment (single cells for example have a hard time growing large because their metabolism goes as the cube of the size, but their ability to transport nutrients across their outer membranes only scales as the area of those membranes),
at least to a first approximation. You could also come up with other quantities that scale differently with size.
As a result, simply scaling up an organism will undo the balance that has been achieved for that particular size. Its muscles will likely be too weak, its bones will likely break, and it will generate so much internal heat (if it is warm blooded) that the only equilibrium achievable given its comparatively small surface area would be at a high enough temperature to denature many proteins.
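The mismatch is easy to put numbers on: scale every linear dimension by a factor $k$ and compare weight ($\propto k^3$) with supportable load ($\propto k^2$); a minimal sketch:

```python
# Square-cube law: weight grows as k**3, bone/muscle cross-section (and so the
# supportable load) only as k**2, so stress on the structure grows linearly in k.
for k in (1, 2, 10, 40):  # 40x is roughly a 200-foot person
    weight, strength = k**3, k**2
    print(f"scale {k:3d}x -> weight {weight:6d}x, strength {strength:5d}x, "
          f"stress {weight / strength:5.1f}x")
```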
For a completely non-biological example, consider the fact that airplanes cannot be made arbitrarily large, and in fact different sizes of planes have very different shapes and engineering requirements. The surface area of the wings does not scale the same way as the total mass, and the stresses and pressures the material needs to withstand will not stay constant as you enlarge the plane. | {
"domain": "physics.stackexchange",
"id": 9676,
"tags": "soft-question, mass, torque, popular-science, biology"
} |
What is the relativistic kinetic energy for an object in a circular motion? | Question: Actually I was listening to a lecture about cyclotrons and the lecturer used
$$KE=(\gamma -1)mc^2$$
For calculating radius of the particle's path. But can we use that equation as kinetic energy equation. Because in a circular motion the force is independent of velocity and thus $\gamma$ (as said by https://physics.stackexchange.com/a/20922/314854).
So while calculating the kinetic energy for this one we should use:
$$F=\gamma ma$$
Or shouldn't we?
(Edit:
If we use the $F=\gamma ma$ for calculating kinetic energy we take $\int \gamma mvdv$ ($adx = vdv$) instead of $\int \gamma^3 mvdv$. My question is whether I should use first method or the second method for calculating kinetic energy?
)
(Edit2:
If we have to use the other one, then does that mean kinetic energy is not $(\gamma -1)mc^2$
)
I am really confused. Please help me in this and tell me whether my assumption is right or not. If not please rectify me.
Answer: The derivative of the momentum with respect to time is the force acting on the particle. If the velocity of the particle varies only in direction, which means that the force is perpendicular to the velocity, we have:$$\frac{d\vec{p}}{dt}=\gamma m\frac{d\vec{v}}{dt} $$
If it is the velocity modulus that is variable, and the direction of the force coincides with the velocity, we have:
$$\frac{d\vec{p}}{dt}=\frac{m}{\left(1-\frac{v^{2}}{c^{2}}\right)^{3/2} }\frac{d\vec{v}}{dt} $$
So this is the 1st formula to use.
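The two viewpoints are consistent: integrating the parallel-force form $\int \gamma^3 m v\,\mathrm dv$ from $0$ to $v$ gives exactly $(\gamma-1)mc^2$. A minimal numerical check, in units where $m = c = 1$:

```python
import numpy as np

# Numerically integrate  ∫ gamma^3 m v dv  (force parallel to velocity) up to
# v = 0.8 c and compare with (gamma - 1) m c^2.
m, c = 1.0, 1.0
v = np.linspace(0.0, 0.8 * c, 200_001)
f = (1.0 - (v / c) ** 2) ** -1.5 * m * v                   # gamma^3 * m * v
work = float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(v)))  # trapezoid rule
gamma = (1.0 - 0.8**2) ** -0.5
print(work, (gamma - 1.0) * m * c**2)                      # both ≈ 0.6667
```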
P.S.: extracted and translated from Theoretical Physics, Volume II (field theory), by L. Landau and E. Lifchitz
"domain": "physics.stackexchange",
"id": 90500,
"tags": "special-relativity, energy"
} |
Convert regex pattern to LL1 parser | Question: Background: I'm trying to solve this leetcode problem: regular-expression-matching. My approach is to implement a LL(1) parser generator, might be overkilling for this problem but it's just a brain exercise.
I only have entry level knowledge of parser theory, so excuse me if this is a silly question that I ask.
So, there's this testcase that I fail. Requirement is regex pattern should match the whole string
Regex: a*a
Input: aaa
I cannot wrap my mind about how to convert this pattern into a LL(1) parser.
The way I see it, a*a pattern can be converted into production rules:
S -> Aa # a*a
A -> aA | ε # a*
Parse table:
         a           $
S        S -> Aa
A        A -> aA     A -> ε
Here's the steps to parse:
0: S$ aaa$ # use [S,a]
1: Aa$ aaa$ # use [A,a]
2: aAa$ aaa$ # eat 'a'
3: Aa$ aa$ # use [A,a]
4: aAa$ aa$ # eat 'a'
5: Aa$ a$ # use [A,a]
6: aAa$ a$ # eat 'a'
7: Aa$ $ # use [A,$]
8: a$ $ # Error!
The correct matching should be:
a* -> aa
a -> a
But what I get instead is:
a* -> aaa
a -> Error!
I don't know which part I'm missing. Is this problem even solvable using LL(1)?
Answer: No, it's not possible in general. There's no guarantee that a regular expression be LL(1), not even the simplified form of regular expressions required by that exercise, and the example you provide is a perfect illustration.
To be LL(1), it must be possible to work out which production to match based only on the parse up to that point and the first symbol in the input. ("First symbol" because it's LL(1).) But that's not possible for your grammar; in general, it's not possible to decide between $A\to\epsilon$ and $A\to a A$, because the fact that the next symbol is an $a$ doesn't give you enough information to decide between them.
In slightly more technical terms, that happens because $A$ is nullable (matches the empty string) and $FIRST(A) \cap FOLLOW(A) \ne \emptyset$.
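That condition can be checked mechanically. A minimal sketch, computing FIRST and FOLLOW for the question's grammar $S \to Aa$, $A \to aA \mid \varepsilon$ (the helper names are illustrative):

```python
# Compute FIRST and FOLLOW for  S -> A a,  A -> a A | ε  and exhibit the
# LL(1) conflict on A: A is nullable and FIRST(A) ∩ FOLLOW(A) ≠ ∅.
EPS = "ε"
grammar = {"S": [["A", "a"]], "A": [["a", "A"], [EPS]]}

first = {"a": {"a"}, EPS: {EPS}, "S": set(), "A": set()}
follow = {"S": {"$"}, "A": set()}

changed = True
while changed:
    changed = False
    for lhs, prods in grammar.items():
        for prod in prods:
            # FIRST(lhs) gains FIRST of the production body.
            for sym in prod:
                before = len(first[lhs])
                first[lhs] |= first[sym] - {EPS}
                changed |= len(first[lhs]) != before
                if EPS not in first[sym]:
                    break
            else:  # every symbol in the body was nullable
                if EPS not in first[lhs]:
                    first[lhs].add(EPS)
                    changed = True
            # FOLLOW propagation through the production.
            for i, sym in enumerate(prod):
                if sym not in grammar:
                    continue
                nullable_tail = True
                for t in prod[i + 1:]:
                    before = len(follow[sym])
                    follow[sym] |= first[t] - {EPS}
                    changed |= len(follow[sym]) != before
                    if EPS not in first[t]:
                        nullable_tail = False
                        break
                if nullable_tail:
                    before = len(follow[sym])
                    follow[sym] |= follow[lhs]
                    changed |= len(follow[sym]) != before

print("FIRST(A)  ==", first["A"])    # {'a', 'ε'} (as a set)
print("FOLLOW(A) ==", follow["A"])   # {'a'}
print("LL(1) conflict on A:", bool((first["A"] - {EPS}) & follow["A"]))  # True
```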
In fact, grammars derived from regular expressions (in the way you show) are not guaranteed to be unambiguous, and an ambiguous grammar cannot be deterministically parsed (by definition). Your grammar is unambiguous, but that wouldn't be the case for $.^*a^*$ (strings ending with one or more $a$s). | {
"domain": "cs.stackexchange",
"id": 19121,
"tags": "regular-expressions, parsers"
} |
Is Venus' north towards Earth's south | Question: Is Venus upside down? Does its north point in the same direction as Earth's south? (roughly) Or does it just spin clockwise and its north points towards Earth's north? (also roughly) It seems that the right-hand rule would point Venus north in the same direction Earth's south (roughly) but some have said Venus is not upside down, which I believe would mean Venus' north points (rougly) in the same direction as Earth's north.
Answer: This is a matter of defintion and convention, not science.
By convention "North pole" is defined to be the pole that lies to the north side of the solar system's "invariable plane" (that is the plane in which the planets orbit). And so, by definition, its North Pole points the same way as Earth's North pole (roughly).
However axial tilt is defined relative to the positive pole, as defined by the right-hand-rule. For Earth and five other planets, the positive pole is the North pole. For Venus and Uranus the positive pole is the South pole, so these planets have inclinations that are greater than 90 degrees.
But remember that these are just arbitrary choices of nomenclature.
See this explanation on astronomy.com | {
"domain": "astronomy.stackexchange",
"id": 7208,
"tags": "venus"
} |
Quicksort implementation seems too slow with large data | Question: I've written a program to compare different sorting algorithms with the array size being 10, 1,000, 100,000, 1,000,000 and 10,000,000. I, of course, expect Insertion to win out on 10 and merge heap and quick to really perform at the upper levels.
However, below are the times I'm getting. (Note that quick sort is growing much faster than heap and merge even though they are all \$\theta(n \log n)\$.)
Things I've considered:
Each algorithm uses the same seed to initialize each array, so the numbers are the same.
I am only timing the algorithm and nothing extra.
My professor approves of the code and doesn't know what's wrong, but maybe we're both missing something.
I moved the program from my flash drive to the desktop to test possible data transfer problems.
The algorithm (except for test 2) has only run at night with nothing else going.
Key: Hours:Minutes:Seconds:Milliseconds:Microseconds
Test 3
n Insertion Merge Heap Quick
10 0:0:0:0:3 0:0:0:0:28 0:0:0:0:35 0:0:0:0:43
1000 0:0:0:11:470 0:0:0:2:358 0:0:0:7:787 0:0:0:3:596
100000 0:0:1:911:865 0:0:0:51:506 0:0:0:24:257 0:0:0:55:519
1000000 0:3:59:769:105 0:0:0:351:129 0:0:0:238:878 0:0:0:916:885
10000000 8:11:44:552:820 0:0:3:521:108 0:0:3:560:178 0:1:13:709:830
Test 2
n Insertion Merge Heap Quick
10 0:0:0:0:5 0:0:0:0:49 0:0:0:0:37 0:0:0:0:50
1000 0:0:0:15:473 0:0:0:2:893 0:0:0:8:402 0:0:0:5:230
100000 0:0:2:518:566 0:0:0:57:845 0:0:0:32:917 0:0:0:71:243
1000000 0:5:38:538:795 0:0:0:460:796 0:0:0:312:66 0:0:1:398:508
10000000 11:48:6:630:639 0:0:3:690:329 0:0:3:518:281 0:1:18:180:11
Test 1
n Insertion Merge Heap Quick
10 2676ns 19626ns 26316ns 33454ns
1000 11504040ns 2298935ns 6835250ns 3741456ns
100000 1849274815ns 47654052ns 23620952ns 52819295ns
1000000 0:3:58ns 0:0:0ns 0:0:0ns 0:0:0ns
10000000 8:10:25ns 0:0:3ns 0:0:3ns 0:1:15ns
Here's my quick sort implementation (35 lines):
public static long quick(int[] list) {
long startTime = System.nanoTime();
quickSort(list, 0, list.length - 1);
long endTime = System.nanoTime();
return endTime - startTime;
}
public static void quickSort(int[] A, int p, int r) {
if(p < r) {
int q = randomizedPartition(A, p, r);
quickSort(A, p, q-1);
quickSort(A, q+1, r);
}
}
public static int randomizedPartition(int[] A, int p, int r) {
Random random = new Random();
int i = random.nextInt((r-p)+1)+p;
swap(A,r,i);
return partition(A,p,r);
}
public static int partition(int[] A, int p, int r) {
int x = A[r];
int i = p-1;
for(int j = p; j < r; j++) {
if(A[j] <= x) {
i++;
swap(A, i, j);
}
}
swap(A, i+1, r);
return i+1;
}
And if needed (267 lines) here's my entire code:
import java.util.Random;
import java.util.concurrent.TimeUnit;
import java.io.*;
public class algComp {
public static void main(String[] args) {
driver(10); // Sort array of length 10
driver(1_000); // Sort array of length 1000
driver(100_000);
/* WARNING: Running program with the below values takes a lot of time!! */
driver(1_000_000);
//driver(10_000_000);
/* You are now leaving the danger zone. */
System.out.println("-----------------------------------------------");
content = String.format(content + "\nKey: Hours:Minutes:Seconds:Milliseconds:Microseconds");
printToFile(); // Prints data to times.txt
}
public static void driver(int n) {
// Insertion sort
int[] list = data(n);
if(n == 10) {
System.out.format("%10s","Unsorted: ");
printList(list);
}
long iTime = insertion(list);
if(n == 10) {
System.out.format("%10s","iSorted: ");
printList(list);
}
// Merge sort
list = data(n);
long mTime = mergeSort(list);
if(n == 10) {
System.out.format("%10s","mSorted: ");
printList(list);
}
// Heap sort
list=data(n);
long hTime = heap(list);
if(n == 10) {
System.out.format("%10s","hSorted: ");
printList(list);
}
// Quick sort
list=data(n);
long qTime = quick(list);
if(n == 10) {
System.out.format("%10s","qSorted: ");
printList(list);
}
if(n == 10) { // This will only print once
// Print prettifying stuff
System.out.println("Data is being written to times.txt...");
content = String.format(content + "%-9s%-17s%-17s%-17s%-17s\n",
"n","Insertion","Merge","Heap","Quick");
}
content = String.format(content + "%-9d%-17s%-17s%-17s%-17s%-1s",n,displayTime(iTime),
displayTime(mTime),displayTime(hTime),displayTime(qTime),"\n");
}
public static long insertion(int[] A) {
long startTime = System.nanoTime();
int i, j, key;
for(j = 1; j < A.length; j++) {
key = A[j];
// If previous is greater than selected (key) swap
for(i = j - 1; (i >= 0) && (A[i] > key); i--) {
A[i+1] = A[i];
}
A[i+1] = key;
}
long endTime = System.nanoTime();
return endTime - startTime;
}
public static long mergeSort(int[] A) {
long startTime = System.nanoTime();
if(A.length > 1) {
// First Half
int[] firstHalf = new int[A.length/2];
System.arraycopy(A, 0, firstHalf, 0, A.length/2);
mergeSort(firstHalf);
// Second Half
int secondHalfLength = A.length - A.length/2;
int[] secondHalf = new int[secondHalfLength];
System.arraycopy(A, A.length/2, secondHalf, 0, secondHalfLength);
mergeSort(secondHalf);
// Merge two arrays
merge(firstHalf,secondHalf,A);
}
long endTime = System.nanoTime();
return endTime - startTime;
}
public static void merge(int[] A1, int[] A2, int[] temp) {
int current1 = 0; // Current index in list 1
int current2 = 0; // Current index in list 2
int current3 = 0; // Current index in temp
// Compares elements in A1 and A2 and sorts them
while(current1 < A1.length && current2 < A2.length) {
if(A1[current1] < A2[current2])
temp[current3++] = A1[current1++];
else
temp[current3++] = A2[current2++];
}
// Merge two arrays into temp
while(current1 < A1.length)
temp[current3++] = A1[current1++];
while(current2 < A2.length)
temp[current3++] = A2[current2++];
}
public static long heap(int[] A) {
long startTime = System.nanoTime();
int temp;
int heapSize = A.length-1;
buildMaxHeap(A);
for(int i = A.length-1; i >= 1; i--) {
swap(A,0,i); // Root is now biggest element, swap to end of array
heapSize--; // Reduce heapSize to ignore sorted elements
maxHeapify(A,0,heapSize);
}
long endTime = System.nanoTime();
return endTime - startTime;
}
public static void buildMaxHeap(int[] A) {
int heapSize = A.length-1;
// Bottom up, check parents children, sort and move up tree
for(int i = (heapSize/2); i >= 0; i--)
maxHeapify(A,i,heapSize);
}
public static void maxHeapify(int[] A, int i, int heapSize) {
int temp,largest;
int l = left(i); // 2i
int r = right(i); // 2i + 1
if(l <= heapSize && A[l] > A[i]) // Check left child (which is largest?)
largest = l;
else largest = i;
if(r <= heapSize && A[r] > A[largest]) // Check right child
largest = r;
if(largest != i) { // If parent is biggest do nothing, else make parent largest
swap(A,i,largest);
maxHeapify(A,largest,heapSize);
}
}
public static int left(int i) {
return 2*i;
}
public static int right(int i) {
return 2*i+1;
}
public static long quick(int[] list) {
long startTime = System.nanoTime();
quickSort(list, 0, list.length - 1);
long endTime = System.nanoTime();
return endTime - startTime;
}
public static void quickSort(int[] A, int p, int r) {
if(p < r) {
int q = randomizedPartition(A, p, r);
quickSort(A, p, q-1);
quickSort(A, q+1, r);
}
}
public static int randomizedPartition(int[] A, int p, int r) {
Random random = new Random();
int i = random.nextInt((r-p)+1)+p;
swap(A,r,i);
return partition(A,p,r);
}
public static int partition(int[] A, int p, int r) {
int x = A[r];
int i = p-1;
for(int j = p; j < r; j++) {
if(A[j] <= x) {
i++;
swap(A, i, j);
}
}
swap(A, i+1, r);
return i+1;
}
public static void swap(int[] list, int i, int j) {
int temp = list[i];
list[i] = list[j];
list[j] = temp;
}
public static String displayTime(long n) {
long hours = TimeUnit.NANOSECONDS.toHours(n);
long minutes = TimeUnit.NANOSECONDS.toMinutes(n) - (TimeUnit.NANOSECONDS.toHours(n)*60);
long seconds = TimeUnit.NANOSECONDS.toSeconds(n) - (TimeUnit.NANOSECONDS.toMinutes(n) *60);
long milliseconds = TimeUnit.NANOSECONDS.toMillis(n) - (TimeUnit.NANOSECONDS.toSeconds(n)*1000);
long microseconds = TimeUnit.NANOSECONDS.toMicros(n) - (TimeUnit.NANOSECONDS.toMillis(n)*1000);
String displayThis = (hours + ":" + minutes + ":" + seconds + ":" + milliseconds + ":" + microseconds);
return displayThis;
}
public static int[] data(int n) {
Random random = new Random(seed); // Random seed stays same for all sorts
int[] list = new int[n];
for(int i = 0; i < list.length; i++) {
list[i] = random.nextInt(1000);
}
return list;
}
public static void printList(int[] list) {
for(int i = 0; i < list.length; i++) {
System.out.format("%5d",list[i]);
}
System.out.println();
}
public static void printToFile() {
// Print to file
try {
File file = new File("times.txt");
if(!file.exists())
file.createNewFile();
FileWriter fw = new FileWriter(file.getAbsoluteFile());
BufferedWriter bw = new BufferedWriter(fw);
bw.write(content);
bw.close();
System.out.println("Done.");
} catch (IOException e) {
e.printStackTrace();
}
}
// Global variables
public static String content = ""; // Used to print data to text file times.txt
public static int seed = (int)(Math.random()*10_000); // Seed for generating lists
}
What do you think? Surely quick sort should be running near 3 seconds at 10mil rather than a minute. What am I doing wrong?
Answer: This implementation of Quicksort has poor performance for arrays with many repeated elements. From Wikipedia, emphasis mine
With a partitioning algorithm such as the one described above (even
with one that chooses good pivot values), quicksort exhibits poor
performance for inputs that contain many repeated elements. The
problem is clearly apparent when all the input elements are equal: at
each recursion, the left partition is empty (no input values are less
than the pivot), and the right partition has only decreased by one
element (the pivot is removed). Consequently, the algorithm takes
quadratic time to sort an array of equal values.
To solve this quicksort equivalent of the Dutch national flag problem, an alternative linear-time partition routine can be used that separates the values into three groups: values less than the pivot, values equal to the pivot, and values greater than the pivot.
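A minimal sketch of such a three-way partition (written in Python for brevity; the variable names are illustrative, not from any particular reference):

```python
import random

# Dutch-national-flag quicksort: a single pass splits A[lo..hi] into
#   A[lo..lt-1] < pivot,  A[lt..gt] == pivot,  A[gt+1..hi] > pivot,
# so the block of keys equal to the pivot is never recursed into again.
def quicksort3(A, lo=0, hi=None):
    if hi is None:
        hi = len(A) - 1
    if lo >= hi:
        return
    pivot = A[random.randint(lo, hi)]
    lt, i, gt = lo, lo, hi
    while i <= gt:
        if A[i] < pivot:
            A[lt], A[i] = A[i], A[lt]
            lt += 1
            i += 1
        elif A[i] > pivot:
            A[i], A[gt] = A[gt], A[i]
            gt -= 1
        else:
            i += 1
    quicksort3(A, lo, lt - 1)   # strictly-less block
    quicksort3(A, gt + 1, hi)   # strictly-greater block

xs = [random.randrange(1000) for _ in range(10_000)]  # many repeated keys
quicksort3(xs)
print(xs == sorted(xs))  # True
```

On an all-equal input this recurses zero times past the first partition, instead of the quadratic behaviour described above.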
You can see this quadratic behaviour by running it on the input new int[10000], for example. In fact, you'll most likely get a StackOverflowError.
Now in your test data, you have \$10{,}000{,}000\$ elements, but you're only picking random values in the range \$[0,1{,}000)\$. So... you have lots of repeated elements!
Let's run it as-is on my computer (I didn't run insertion sort, as it's too slow)
$ java AlgComp && cat times.txt
Unsorted: 323 653 751 33 350 378 913 280 243 792
mSorted: 33 243 280 323 350 378 653 751 792 913
hSorted: 33 243 280 323 350 378 653 751 792 913
qSorted: 33 243 280 323 350 378 653 751 792 913
Data is being written to times.txt...
-----------------------------------------------
Done.
n Insertion Merge Heap Quick
10 0:0:0:0:0 0:0:0:0:27 0:0:0:0:35 0:0:0:0:38
1000 0:0:0:0:0 0:0:0:1:852 0:0:0:0:822 0:0:0:3:224
100000 0:0:0:0:0 0:0:0:28:421 0:0:0:20:784 0:0:0:37:872
1000000 0:0:0:0:0 0:0:0:233:576 0:0:0:202:443 0:0:0:905:65
10000000 0:0:0:0:0 0:0:2:359:261 0:0:2:752:239 0:1:20:914:310
Now let's change this one line in data:
list[i] = random.nextInt(1000);
to this
list[i] = random.nextInt(1000000);
Now the results are more in line with our expectations:
$ java AlgComp && cat times.txt
Unsorted: 166238 900153 550266 529533 24683 561264 610898 889428 23562 420254
mSorted: 23562 24683 166238 420254 529533 550266 561264 610898 889428 900153
hSorted: 23562 24683 166238 420254 529533 550266 561264 610898 889428 900153
qSorted: 23562 24683 166238 420254 529533 550266 561264 610898 889428 900153
Data is being written to times.txt...
-----------------------------------------------
Done.
n Insertion Merge Heap Quick
10 0:0:0:0:0 0:0:0:0:21 0:0:0:0:98 0:0:0:0:56
1000 0:0:0:0:0 0:0:0:1:997 0:0:0:1:14 0:0:0:2:41
100000 0:0:0:0:0 0:0:0:27:223 0:0:0:22:562 0:0:0:21:587
1000000 0:0:0:0:0 0:0:0:283:939 0:0:0:215:551 0:0:0:137:658
10000000 0:0:0:0:0 0:0:2:899:176 0:0:3:681:388 0:0:1:845:255
Of course the real fix is not to change data, but to change the partitioning algorithm. | {
"domain": "codereview.stackexchange",
"id": 10184,
"tags": "java, performance, sorting, quick-sort"
} |
Why are particular laser-related skin products not recommended for darker skin tones? | Question: This is about products such as:
The "Theradome" laser helmet (this claims to stimulate hair growth through lasers)
Theradome™, a biomedical engineering company based in the Silicon Valley, is the proud designer, developer and manufacturer of the Theradome™ LH80 PRO, the first and most powerful FDA cleared OTC wearable laser hair helmet for laser hair growth treatment. At last, millions of people suffering hair loss can enjoy clinically effective laser hair restoration treatments at home, at an affordable price, with a simple push of a button.
Silk'n Flash&Go Hair Removal Device (claims to remove hair using lasers)
Silk'n Flash&Go™ is a revolutionary light-based system for permanent results at home. Now you can remove unwanted hair forever on your body and face – all with gentle pulses of light that disable hair growth. It's the safe way to get smooth, beautiful skin.
From the information on the website, it would appear that the products are safe for all; however, both products seem to be not recommended for the two darkest skin colors on the Fitzpatrick Skin Tone Classification Scale on a variety of different product feedback websites. What is the reasoning behind this?
Answer: The reason for laser hair removal/growth products not being safe for darker skin tones is the ability for darker skin tones to absorb more light from the laser than lighter skin does BEFORE the light is able to reach the depth at which the hair follicle is located. Light from lasers must pass through the epidermis where melanocytes are located before they reach the hair follicle.
More melanin means increased ability to absorb wavelengths as seen in the curve below. Laser products such as Theradome (678 nm) and Silk'N (475-1200 nm) have shorter wavelengths, which are highly absorbed by melanin, and therefore can damage skin in darker skin tone patients. Safer products like Nd:YAG are those that have longer wavelengths at 1064 nm and can penetrate the skin without over-absorption by melanin.
Disclosures: I own no stock in any of these companies.
References:
http://www.theradome.com/press-release/
http://www.amazon.com/Silkn-Flash-Removal-Device-Cartridge/dp/B0093BOR4W
http://www.nature.com/jid/journal/v122/n2/full/5602198a.html | {
"domain": "biology.stackexchange",
"id": 3393,
"tags": "skin, health"
} |
ubuntu 16.04 (xenial) package for gazebo 7.4.0? | Question:
I understand there are several bugs in 7.0.0 (for instance this one) which have been fixed in later versions like 7.4.0 and 7.5.0. Are there any PPA's which include a prebuilt version of 7.4.0 or do I need to build from source? Thanks,
philip
Originally posted by hahnpv on ROS Answers with karma: 3 on 2017-04-23
Post score: 0
Answer:
I'm running 16.04 (in VMWare, even) and the installed version of Gazebo is 7.5:
$ dpkg-query -W | grep gazebo
gazebo7 7.5.0-1~xenial
Originally posted by Geoff with karma: 4203 on 2017-04-23
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by hahnpv on 2017-04-23:
Interesting. Mine shows
$ dpkg-query -W | grep gazebo
gazebo7 7.0.0+dfsg-2
$ lsb_release -a
LSB Version: core-9.20160110ubuntu0.2-amd64:core-9.20160110ubuntu0.2-noarch:security-9.20160110ubuntu0.2-amd64:security-9.20160110ubuntu0.2-noarch
Distributor ID: Ubuntu
Description: Ubuntu 16.04.2
Comment by Geoff on 2017-04-23:
What does apt-cache madison gazebo7 show? (Interestingly, running that just now to test it showed me that I have a pending upgrade to Gazebo 7.6 waiting). Have you previously pinned Gazebo to version 7.0.0?
Comment by hahnpv on 2017-04-23:
I have not pinned versions of anything on my system.
$ apt-cache madison gazebo7
gazebo7 | 7.0.0+dfsg-2 | http://us.archive.ubuntu.com/ubuntu xenial/universe amd64 Packages
gazebo | 7.0.0+dfsg-2 | http://us.archive.ubuntu.com/ubuntu xenial/universe Sources
Comment by hahnpv on 2017-04-23:
This appears to agree with http://packages.ubuntu.com/source/xenial/gazebo
"Source Package: gazebo (7.0.0+dfsg-2) [universe]"
Comment by Geoff on 2017-04-23:
I can see the problem now. My 7.6 package is coming from the OSRF repository. See the "alternative installation" instructions on this page: http://gazebosim.org/tutorials?tut=install_ubuntu
Comment by hahnpv on 2017-04-23:
Excellent. Thank you sir! | {
"domain": "robotics.stackexchange",
"id": 27697,
"tags": "gazebo, ubuntu"
} |
What is the diffraction pattern of the single slit experiment with the circular cylindrical observing screen centered at the slit? | Question:
$\textbf{Question}$ What is the diffraction pattern of the single slit experiment if the observing screen is replaced by a circular cylindrical screen so that the central line of the cylinder contains the slit?
My guess is that the diffraction pattern may look connected, whereas the pattern looked disconnected with the planar screen.
I'm just starting to study physics as a hobby. Please let me know.
Answer: Following the HyperPhysics derivation the angular position $\theta$ of the m$^{\rm th}$ minimum for a single slit diffraction pattern of width $a$ is given by $a\sin \theta = m \lambda$, where $\lambda$ is the wavelength of the light.
Suppose that the first minimum occurred at $10^\circ$ then the positions of the minima on a flat screen and a cylindrical screen would look like this:
I have exaggerated the angles to show an effect, whereas in practice, as per the HyperPhysics derivation, the angles are all less than $5^\circ$.
So for small angles there is very little difference between what is seen on the two types of screen but for larger angles you can see that the linear spacing of the minima on a flat screen increases much more rapidly than for the cylindrical screen. | {
"domain": "physics.stackexchange",
"id": 36134,
"tags": "optics, experimental-physics, double-slit-experiment"
} |
Short gamma ray burst redshift | Question: I would like to know what kind of methods are being used to infer the redshift distance of GRBs.
Some background:
I recently went on two sites: grbweb by IceCube and the Fermi GRB database. On grbweb they give the redshift of GRBs, but I found some weird values among them (a really high peak at 2000 Mpc for short GRBs). On the Fermi website they do not give the redshift distance, and I could not find a source explaining how they would get it.
Answer: Gamma Ray Bursts happen in galaxies, and the ISM in the host galaxy will leave spectral imprints on the GRB afterglow spectrum; measuring how far those absorption lines are shifted from their rest-frame wavelengths gives the redshift. This also means that besides being interesting in their own right, GRB afterglows are also valuable light sources for various cosmological uses, including galaxy evolution and IGM analysis. | {
"domain": "physics.stackexchange",
"id": 31746,
"tags": "astronomy, gamma-rays"
} |
What is the best .net programming language for artificial intelligence programming? | Question: I know that every program has some positive and negative points, and I know maybe .net programming languages are not the best for AI programming.
But I prefer .NET programming languages because of my experience, and would like to know which one is better for an AI program: C, C++, C#, or VB?
Which one of these languages is faster and more stable when running different queries and for self-learning?
To summarize, I think C++ is the best for AI programming in .NET, and C# can also be used in some projects. Python, as recommended by others, is not an option in my view!
because:
It's not a complex language itself, and for every single move you need to find a library and import it into your project (most of the libraries are out of date and/or not working with newly released Python versions), and that's why people say it is an easy language to learn and use! (If you start to create libraries yourself, this language could be the hardest language in the world!)
You do not really create a program yourself when you use those libraries for every single feature of your project (it's just like a Lego game).
I'm not so sure about this, but I think it's a cheap programming language, because I couldn't find any good program created with this language!
Answer: If you're talking about pure speed, C will get you there if you really know C and operating systems, etc. C++ is nicer in terms of user friendliness, and won't be much slower. I don't know much about VB but I don't see many benefits.
Please clarify, at least generally, what the AI program is about. It is extremely difficult to answer the generalized question "What programming language is best for AI?", if you know what I mean. (I vote Python :^])
To add: any programming language of those listed is "stable" if you write things correctly, but C++ and its great IDEs will help you to that point much more nicely than C will. C, to fully utilize its potential, requires much fiddling with delicate and precise systems. It'll go that little bit faster, at the cost of being less stable in the practical "uh oh, now I need to troubleshoot this" sense. | {
"domain": "ai.stackexchange",
"id": 114,
"tags": "intelligent-agent"
} |
Why is the change of temperature $\Delta T$ measured in Kelvins, degrees Celsius, etc.? | Question: Let me start by apologizing if this question seems pedantic and say that I'm not very familiar with physics in general, as I'm a math major instead.
Anyway, say a body changes from temperature $T_1$ to $T_2$, with $T_2 \ge T_1$.
Then the change in temperature is
$$\Delta T = T_2 - T_1$$
Now, it's clear that if $\Delta T = x\text{K}$ then $\Delta T = x \text{°C}$, with $x \ge 0$.
But it's also clear that $x \text{K} \ne x \text{°C}$, which leads to a contradiction.
Then I don't really understand why are units typically written in $\Delta T$? I suppose it could be to illustrate the units used to measure $T_1$ and $T_2$, but is it really necessary?
Answer: You need the units because though $x$ Kelvin is the same as $x$ Celsius, it is not the same as $x$ Fahrenheit.
You can treat $\Delta T$ as a temperature. A temperature scale has a fixed zero point (absolute zero for the Kelvin scale and the freezing point of water for the Centigrade scale) and an interval defining 1 degree. To make $\Delta T$ a temperature you're just specifying that the fixed zero point is $T_1$, that is $\Delta T = 0$ when the temperature is $T_1$.
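A small numeric sketch (mine, not part of the answer) of why the unit on a difference still matters: the kelvin and Celsius intervals coincide, but the Fahrenheit interval does not.

```python
# Illustration (not from the original answer): a temperature *difference*
# has the same numeric value in kelvin and degrees Celsius, but not in
# degrees Fahrenheit, because the interval sizes differ.
def delta_c_to_k(dc):
    return dc          # one Celsius degree is exactly one kelvin

def delta_c_to_f(dc):
    return dc * 9 / 5  # Fahrenheit degrees are smaller, so more of them

dT = 25.0  # a 25 degree-Celsius rise
print(delta_c_to_k(dT))  # 25.0 K
print(delta_c_to_f(dT))  # 45.0 Fahrenheit degrees
```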
This may seem pedantic, but you would no doubt claim your height is six feet (or whatever it is). I'm sure you would not respond to a query by saying instead that the difference in height between the top of your head and the bottom of your feet is six feet. | {
"domain": "physics.stackexchange",
"id": 28009,
"tags": "thermodynamics, temperature, units"
} |
Does a high-speed particle weigh more? | Question: I tried looking in vain (at the LHC site and elsewhere) on the net and could not find out if a $7$-TeV proton weighs more than its rest mass.
Can anyone explain and point me towards experiments that have studied the issue? Does a particle traveling at near light-speed weigh more, given that it has a greater gravitational mass?
I understand it has an inertial mass which is 7,450 times greater than its rest mass. Is there a significant increase in weight?
edit after the comments:
to those who are suggesting that weight should not increase, I remind you that thermal energy increases the gravitational mass of a body, so why not KE?
Answer: This is a really tricky question. A lot of it comes down to what you define as "weight".
TLDR:
Does a particle traveling at near light-speed weigh more, given that it has a greater gravitational mass?
Yes it weighs more, but I don't think that means it has greater "gravitational mass".
I understand it has an inertial mass which is 7,450 times greater than its rest mass. Is there a significant increase in weight?
A particle's mass does not change due to its motion. Relative to gravitational scales on Earth, a $7$ TeV proton has not significantly changed its weight. Any change in weight will be experimentally immeasurable by current methods.
inertial and gravitational mass
In classical mechanics inertial mass, $m_I$, is the mass that appears in Newton's second law, $\vec{F}_\mathrm{net} = m_I \vec{a}$, and gravitational mass, $m_G$, is the mass that appears in the gravitational force, $\vec{F}_g = m_G \vec{g}$. The equivalence principle says these two things are equal. That's why things with different masses fall at the same rate in free fall.
In special relativity there is only one mass $m$ the mass of the object. This is sometimes called the rest mass.
mass and motion
There's an outdated concept of relativistic mass, $m_R =\gamma m$, where $\gamma$ is the Lorentz factor of the object. According to this idea, an observer at rest would say the mass of a moving object is greater than its rest mass. This concept turns out to not be useful, because it leads to the wrong form of Newton's second law.
Relativistic momentum is defined $\vec{p} = \gamma m \vec{v} = m_R \vec{v}$. So far so good. But the general way to define Newton's second law is
$$\vec{F}_\mathrm{net} = \frac{d\vec{p}}{dt},$$
and this does not lead to $F = m_R a$ as you might hope. The Lorentz factor is a function of the object's speed, so it is not a constant. For a force that is parallel to the object's velocity you would find:
$$\vec{F}_\mathrm{net} = \frac{d}{dt}\left(\gamma m \vec{v}\right) = \gamma^3 m \vec{a}. $$
So $\gamma m$ is not the inertial mass of a relativistic object! It gets worse because the scaling in Newton's second law depends on the angle between the force and the velocity. The only useful mass in S.R. is the rest mass.
I'm not even sure inertial mass is well defined in S.R., since there isn't a simple way to write $F = ma$.
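The factor of 7,450 quoted in the question can be checked directly. This is my own sketch (the proton rest energy is an assumption on my part, taken as roughly 0.938 GeV):

```python
# Sketch (mine, not part of the answer): the Lorentz factor of a 7 TeV proton,
# and the different "effective mass" factors that appear depending on whether
# the force is parallel or perpendicular to the velocity.
E = 7000.0      # total proton energy in GeV (7 TeV)
m = 0.938272    # proton rest energy in GeV (approximate)

gamma = E / m                     # roughly 7460, matching the ~7,450 quoted above
longitudinal_factor = gamma ** 3  # F = gamma^3 * m * a for F parallel to v
transverse_factor = gamma         # F = gamma   * m * a for F perpendicular to v

print(f"gamma = {gamma:.0f}")
```

The two different factors are exactly why "relativistic mass" fails as an inertial mass: no single number scales $F = ma$ in every direction.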
mass and gravity
Gravitational mass gets used in two ways.
The first way is about how an object responds to a gravitational field, $\vec{F}_g = m_G \vec{g}$, where $\vec{g}$ is the gravitational field. Near the surface of the Earth $\vec{g} \approx 9.8$ m/s$^2$ down. When you ask if a moving object weighs more, I would say no, because the object responds to an external gravitational field following the equivalence principle.
Even in general relativity, the motion of a very fast object is determined by the geodesic equation, which has the equivalence principle built right in. The concept of relativistic mass is not needed to explain the object's motion.
This example depends on the object being "small" in the sense that it doesn't really affect the background gravitational field. Like Earth is small compared to the Sun, or a baseball is small compared to the Earth. The rest energy ($mc^2$) of the Earth is about $10^{60}$ eV, so a $7$ TeV proton is still small compared to the Earth!
To answer "no", we sort of dodge the question! The proton's motion near the earth is not noticeably changed, mostly because it doesn't weigh noticeably more compared to the Earth.
The second way gravitational mass appears is as a source of gravitational fields. Unlike Newtonian gravity, where only mass matters, in general relativity energy (well, stress-energy) is the source of gravitational fields.
The gravitational field of a point mass is described by the Schwarzschild metric for a black hole. So does the gravity of a black hole change if it moves? One way to state the principle of relativity is that there is no experiment that can distinguish rest from uniform motion. So in the reference frame of the black hole, its gravitational field is identical to it being at rest. This fact is upheld in one of the more common ways to define mass in general relativity, the ADM mass. The ADM mass of a black hole moving at constant velocity is the same as one at rest. The ADM mass is the $M$ that appears in the Schwarzschild metric defining its gravitational field.
Even though the $7$ TeV proton is moving very fast, its gravitational mass (by some definition of mass) does not change! This argument relies on the proton being an isolated system, alone in the universe. So maybe we dodged the question again.
Conclusion
Okay, so it sounds a whole lot like the weight of the proton doesn't change, but I said it did at the very top.
The key here is that "weight" applies to a gravitational interaction between two things. Two $7$ TeV protons moving at the same velocity are in a shared reference frame, so their gravitational attraction is identical to two protons at rest. But if one proton is moving relative to the other, all observers will agree that the total energy of the system is greater than the sum of their masses. The joint gravitational field of both protons will be stronger due to their relative motion. In that sense the moving proton "weighs" more.
In order to observe this extra "weight" we'd need to measure the gravitational attraction between two subatomic particles moving at high speeds relative to each other. And we would need to disentangle the gravitational interaction from all other interactions. That is a very tall order. An experiment that can do that will open up the doors of quantum gravity, and I sure hope somebody figures it out eventually. | {
"domain": "physics.stackexchange",
"id": 89407,
"tags": "general-relativity, gravity, mass, mass-energy"
} |
Shortest walk with alternating colors in a directed graph | Question: Let $G$ be a directed graph such that every edge is colored (red, yellow or green). I want to compute the shortest walk (possibly with repeated vertices) with the restriction that the colors are alternating: red, green, yellow, red, green , ..., etc (the starting color doesn't matter).
So far my idea is to create a new graph $G'$ such that it has a copy of each vertex (one for each color). Then, by adequately placing every colored edge in the new graph, I should be able to just run DFS/BFS in the new graph and get the answer. However, I don't know exactly how to correctly place the edges in the new graph $G'$, or if this idea works.
Answer: Let $G=(V, R \cup Gr \cup Y)$, where $R$, $Gr$, and $Y$ are the sets of red, green, and yellow edges, respectively.
I assume you want to find the shortest walk between a pair of given vertices. Let these vertices be $s$ and $t$.
Create the graph $G'=(V', E')$ where $V' = \{s', t'\} \cup \bigcup\limits_{v \in V} \{v_r, v_g, v_y \}$ and $E'= \{ (u_r, v_g) \mid (u,v) \in Gr \} \cup \{ (u_g, v_y) \mid (u,v) \in Y \} \cup \{ (u_y, v_r) \mid (u,v) \in R \} \cup \{ (s', s_r), (s', s_g), (s', s_y) \} \cup \{(t_r, t'), (t_g, t'), (t_y, t')\}$. Here a state $v_c$ means "at $v$, having just used an edge of color $c$"; connecting $s'$ to all three copies of $s$ lets the walk start with any color, and all three copies of $t$ feed into $t'$.
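A minimal sketch of the same layered idea (my own code, not part of the answer; function and variable names are mine), running BFS over (vertex, last-edge-color) states instead of materializing $G'$:

```python
from collections import deque

# Sketch (mine): BFS over (vertex, last-edge-colour) states. Colours must
# follow the fixed cycle r -> g -> y -> r, and the first edge of the walk
# may have any colour.
NEXT = {'r': 'g', 'g': 'y', 'y': 'r'}

def shortest_alternating_walk(edges, s, t):
    """edges: iterable of (u, v, c) triples with c in 'rgy'.
    Returns the length of the shortest colour-alternating walk, or None."""
    adj = {}
    for u, v, c in edges:
        adj.setdefault((u, c), []).append(v)
    if s == t:
        return 0
    # Seeding s in all three "last colour" states lets the walk begin
    # with a red, green, or yellow edge.
    dist = {(s, c): 0 for c in 'rgy'}
    queue = deque((s, c) for c in 'rgy')
    while queue:
        u, last = queue.popleft()
        c = NEXT[last]
        for v in adj.get((u, c), []):
            if (v, c) not in dist:
                dist[(v, c)] = dist[(u, last)] + 1
                if v == t:
                    return dist[(v, c)]
                queue.append((v, c))
    return None
```

Because BFS explores states in order of distance, the first time any copy of $t$ is reached gives the shortest walk length.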
Let $\ell$ be the length of the shortest color-alternating walk between $s$ and $t$ in $G$, and let $\ell'$ be the length of the shortest path between $s'$ and $t'$ in $G'$. You then have $\ell' = \ell + 2$. Therefore it suffices to find a shortest path from $s'$ to $t'$ in $G'$. This can be done in linear time in the size of $G'$ (and $G$). | {
"domain": "cs.stackexchange",
"id": 19035,
"tags": "graphs, discrete-mathematics, directed-graphs"
} |
Irregularity of L = {a^i b^(j+3)| i!=j } | Question: I have a question asking whether $L = \{a^i b^{j+3}\mid i\ne j \}$ is regular or not. I know that it is not regular. I tried the pumping lemma, but I can only find specific choices of $v$ in $u v^i w$ that violate the language condition, not ALL $v$'s. What are the best choices for $i$ and $j$ to prove with the pumping lemma that this is not a regular language?
Answer: When you get stuck trying to prove a language is not regular by the Pumping Lemma, as in this example, because there doesn't seem to be a good choice of $u,v,w$, it's sometimes helpful to use closure properties first, to transform the problem. This is a good example of when this paradigm is useful.
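As a quick sanity check of the transformation (my own sketch, not part of the answer; `in_L` is a hypothetical membership test for this language), one can enumerate short strings:

```python
# Quick sanity check (mine): enumerate short strings in a*b* and confirm that
# every string of the complement of L = { a^i b^(j+3) : i != j } that has at
# least three b's is of the form a^n b^(n+3).
def in_L(w):
    i = len(w) - len(w.lstrip('a'))       # number of leading a's
    rest = w[i:]
    if rest != 'b' * len(rest) or len(rest) < 3:
        return False                       # not a^i b^(j+3) shaped at all
    return i != len(rest) - 3              # i must differ from j

comp = [f"{'a'*i}{'b'*k}" for i in range(4) for k in range(7)
        if not in_L(f"{'a'*i}{'b'*k}")]
# every complement member (within a*b*, with >= 3 b's) satisfies i = k - 3
assert all(len(w) - len(w.lstrip('a')) == w.count('b') - 3
           for w in comp if w.count('b') >= 3)
```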
We'll do this by contradiction. Suppose that $L$ were regular. Then since regular languages are closed under complement, $\overline{L}$ would be regular. Now $\overline{L}$ isn't particularly easy to describe, since it contains strings like $bababbba$, but what is $\overline{L}\cap a^*bbbb^*$? This is just $a^nb^{n+3}=a^nb^nb^3$ (intersecting with $a^*bbbb^*$ rather than $a^*b^*$ discards the strings of $\overline{L}$ with fewer than three $b$'s). Since regular languages are closed under intersection and $a^*bbbb^*$ is regular, that would mean that $a^nb^nb^3$ would be regular. I'll bet you can now do a pumping lemma proof to show that this language wasn't regular. | {
"domain": "cs.stackexchange",
"id": 4798,
"tags": "formal-languages, regular-languages, pumping-lemma"
} |
Papers, Please - Kata from CodeWars - Python | Question: This is possibly one of the longest katas I've finished so far.
https://www.codewars.com/kata/59d582cafbdd0b7ef90000a0/train/python
I've been working on this one for about an hour a day for about 2 weeks, 90% of the time it's only been debugging. I'm trying to work on my code readability, but also optimizing the code as much as possible.
I'd like to know if YOU can understand something from the code I've written. Would love to hear opinions on how to improve the readability, optimizing some stuff that you notice. I know the code is long, so only check it out if you've got the time.
I had a lot of print statements during debugging, but I've deleted all of them. If you would like the version with all print statements, tell me.
I love constructive criticism, so don't hesitate.
Below, I am going to add the task of the project, just in case the link gets deleted or something:
Objective
Your task is to create a constructor function (or class) and a set of
instance methods to perform the tasks of the border checkpoint
inspection officer. The methods you will need to create are as follows:
Method: receiveBulletin
Each morning you are issued an official bulletin from the Ministry of
Admission. This bulletin will provide updates to regulations and
procedures and the name of a wanted criminal.
The bulletin is provided in the form of a string. It may include one
or more of the following:
Updates to the list of nations (comma-separated if more than one) whose citizens may enter (begins empty, before the first bulletin):
example 1: Allow citizens of Obristan
example 2: Deny citizens of Kolechia, Republia
Updates to required documents
example 1: Foreigners require access permit
example 2: Citizens of Arstotzka require ID card
example 3: Workers require work pass
Updates to required vaccinations
example 1: Citizens of Antegria, Republia, Obristan require polio vaccination
example 2: Entrants no longer require tetanus vaccination
Update to a currently wanted criminal
example 1: Wanted by the State: Hubert Popovic
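As an aside (my own sketch, not part of the kata or the solution below; the function names are hypothetical), bulletin lines like these can be classified with a few regular expressions before dispatching to update handlers:

```python
import re

# Sketch (mine): classify a single bulletin line before dispatching it to an
# update handler. Comma-separated nation lists are split into Python lists.
ALLOW = re.compile(r'^Allow citizens of (.+)$')
DENY = re.compile(r'^Deny citizens of (.+)$')
WANTED = re.compile(r'^Wanted by the State: (.+)$')

def classify(line):
    for kind, pat in (('allow', ALLOW), ('deny', DENY), ('wanted', WANTED)):
        m = pat.match(line)
        if m:
            payload = m.group(1)
            if kind in ('allow', 'deny'):
                return kind, [n.strip() for n in payload.split(',')]
            return kind, payload
    return 'other', line  # document/vaccination rules handled elsewhere
```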
Method: inspect
Each day, a number of entrants line up outside the checkpoint
inspection booth to gain passage into Arstotzka. The inspect method
will receive an object representing each entrant's set of identifying
documents. This object will contain zero or more properties which
represent separate documents. Each property will be a string value.
These properties may include the following:
passport
ID_card (only issued to citizens of Arstotzka)
access_permit
work_pass
grant_of_asylum
certificate_of_vaccination
diplomatic_authorization
The inspect method will return a result based on whether the entrant
passes or fails inspection:
Conditions for passing inspection
All required documents are present
There is no conflicting information across the provided documents
All documents are current (ie. none have expired) -- a document is considered expired if the expiration date is November 22, 1982 or earlier
The entrant is not a wanted criminal
If a certificate_of_vaccination is required and provided, it must list the required vaccination
If entrant is a foreigner, a grant_of_asylum or diplomatic_authorization are acceptable in lieu of an access_permit.
In the case where a diplomatic_authorization is used, it must include
Arstotzka as one of the list of nations that can be accessed.
If the entrant passes inspection, the method should return one of the
following string values:
If the entrant is a citizen of Arstotzka: Glory to Arstotzka.
If the entrant is a foreigner: Cause no trouble.
If the entrant fails the inspection due to expired or missing
documents, or their certificate_of_vaccination does not include the
necessary vaccinations, return Entry denied: with the reason for
denial appended.
Example 1: Entry denied: passport expired.
Example 2: Entry denied: missing required vaccination.
Example 3: Entry denied: missing required access permit.
If the entrant fails the inspection due to mismatching information
between documents (causing suspicion of forgery) or if they're a
wanted criminal, return Detainment: with the reason for detainment
appended.
If due to information mismatch, include the mismatched item. e.g. Detainment: ID number mismatch.
If the entrant is a wanted criminal: Detainment: Entrant is a wanted criminal.
NOTE: One wanted criminal will be specified in each daily bulletin, and must be detained when received for that day only. For example, if an entrant on Day 20 has the same name as a criminal declared on Day 10, they are not to be detained for being a criminal.
Also, if any of an entrant's identifying documents include the name of that day's wanted criminal (in case of mismatched names across multiple documents), they are assumed to be the wanted criminal.
In some cases, there may be multiple reasons for denying or detaining
an entrant. For this exercise, you will only need to provide one
reason.
If the entrant meets the criteria for both entry denial and detainment, priority goes to detaining.
For example, if they are missing a required document and are also a wanted criminal, then they should be detained instead of turned away.
In the case where the entrant has mismatching information and is a wanted criminal, detain for being a wanted criminal.
Test Example
bulletin = """Entrants require passport
Allow citizens of Arstotzka, Obristan"""
inspector = Inspector()
inspector.receive_bulletin(bulletin)
entrant1 = {
"passport": """ID#: GC07D-FU8AR
NATION: Arstotzka
NAME: Guyovich, Russian
DOB: 1933.11.28
SEX: M
ISS: East Grestin
EXP: 1983.07.10"""
}
inspector.inspect(entrant1) #=> 'Glory to Arstotzka.'
Additional Notes
Inputs will always be valid.
There are a total of 7 countries: Arstotzka, Antegria, Impor, Kolechia, Obristan, Republia, and United Federation.
Not every single possible case has been listed in this Description; use the test feedback to help you handle all cases.
The concept of this kata is derived from the video game of the same name, but it is not meant to be a direct representation of the game.
And here is my solution:
class Inspector:
def __init__(self):
self.documents_dict = {
"passport": [],
"ID_card": [],
"access_permit": [],
"work_pass": [],
"grant_of_asylum": [],
"certificate_of_vaccination": [],
"diplomatic_authorization": []
}
self.documents_names =["passport",
"ID card",
"access permit",
"work pass",
"grant of asylum",
"certificate of vaccination",
"diplomatic authorization"]
self.vaccinations_dict = {}
self.actual_bulletin = {
"allowed_nations": [],
"denied_nations": [],
"required_documents": self.documents_dict,
"required_vaccinations": self.vaccinations_dict,
"new_criminal": ""
}
def receive_bulletin(self, bulletin):
nations_List = ["Arstotzka",
"Antegria",
"Impor",
"Kolechia",
"Obristan",
"Republia",
"United Federation"]
bulletin_lines_List = bulletin.split("\n")
def update_allowed_and_denied_nations():
def return_commaless_line(line):
if "," in line:
line = line.split(" ")
for i in range(len(line)):
line[i] = line[i].replace(",", "")
return " ".join(line)
for nation in nations_List:
if nation in line:
return line
def return_line_containing_nations(list):
for line in list:
for nation in nations_List:
if (nation in line and "Allow citizens of " in line) or (nation in line and "Deny citizens of " in line):
return line
def are_multi_lines_w_nations_Boolean(list):
lines_with_nations = 0
for line in list:
for nation in nations_List:
if nation in line:
lines_with_nations +=1
break
continue
if lines_with_nations > 1:
return True
else:
return False
def return_lines_w_nations_List(list):
lines = []
for line in list:
for nation in nations_List:
if (nation in line and "Allow citizens of " in line) or (nation in line and "Deny citizens of " in line):
lines.append(line)
break
return lines
def return_allowed_nations_from_line_List(line):
allowed_nations = [word for word in line.split(" ") if word in nations_List]
if "United Federation" in line and "Allow citizens of " in line:
allowed_nations.append("United Federation")
return allowed_nations
def return_denied_nations_from_line_List(line):
denied_nations = [word for word in line.split(" ") if word in nations_List and "Deny citizens of " in line]
if "United Federation" in line and "Deny citizens of " in line:
denied_nations.append("United Federation")
return denied_nations
def update_actual_bulletin():
for n in new_allowed_nations_List:
if n not in self.actual_bulletin["allowed_nations"]:
self.actual_bulletin["allowed_nations"].append(n)
for n in new_denied_nations_List:
if n in self.actual_bulletin["allowed_nations"]:
self.actual_bulletin["allowed_nations"].remove(n)
new_allowed_nations_List = []
new_denied_nations_List = []
if are_multi_lines_w_nations_Boolean(bulletin_lines_List):
multi_lines_w_nations_List = return_lines_w_nations_List(bulletin_lines_List)
for i in range(len(multi_lines_w_nations_List)):
multi_lines_w_nations_List[i] = return_commaless_line(multi_lines_w_nations_List[i])
single_nations_line_list = [multi_lines_w_nations_List[i] for i in range(len(multi_lines_w_nations_List))]
for line in single_nations_line_list:
if "Allow citizens of " in line:
new_allowed_nations_List.extend(return_allowed_nations_from_line_List(line))
elif "Deny citizens of " in line:
new_denied_nations_List.extend(return_denied_nations_from_line_List(line))
update_actual_bulletin()
else:
if return_lines_w_nations_List(bulletin_lines_List) != []:
single_nations_line = return_commaless_line(return_line_containing_nations(bulletin_lines_List))
if "Allow citizens of " in single_nations_line:
new_allowed_nations_List.extend(return_allowed_nations_from_line_List(single_nations_line))
elif "Deny citizens of " in single_nations_line:
new_denied_nations_List.extend(return_denied_nations_from_line_List(single_nations_line))
update_actual_bulletin()
def update_required_documents():
documents_List = \
["passport",
"ID card",
"access permit",
"work pass",
"grant of asylum",
"certificate of vaccination",
"diplomatic authorization" ]
underlined_documents_List = \
["passport",
"ID_card",
"access_permit",
"work_pass",
"grant_of_asylum",
"certificate_of_vaccination",
"diplomatic_authorization" ]
def update_documents_Dict():
for line in bulletin_lines_List:
if "Workers require work pass" in line:
self.documents_dict["work_pass"].append("Yes")
if "Entrants require " in line:
for document in documents_List:
if "Entrants require " + document in line:
self.documents_dict[underlined_documents_List[documents_List.index(document)]] = nations_List
continue
if "Foreigners require " in line:
for document in documents_List:
if "Foreigners require " + document in line:
self.documents_dict[underlined_documents_List[documents_List.index(document)]] = ["Antegria",
"Impor",
"Kolechia",
"Obristan",
"Republia",
"United Federation"]
if "Citizens of " in line:
if "vaccination" not in line:
for nation in nations_List:
for document in documents_List:
if "Citizens of " + nation + " require " + document in line:
self.documents_dict[underlined_documents_List[documents_List.index(document)]].append(nation)
update_documents_Dict()
def update_required_vaccinations():
vaccine_Dict = {}
for i, line in enumerate(bulletin_lines_List):
if "vaccination" in line:
line_List = line.split(" ")
def return_commaless_line(list):
return [s.replace (",","") for s in list]
line_List = return_commaless_line(line_List)
if "United" in line_List: # Because "United Federation" has a space between
line_List.remove("United")
line_List.remove("Federation")
line_List.append("United Federation")
vaccine_name_List = [w for w in line_List if line_List.index(w) > line_List.index("require") and line_List.index(w) < line_List.index("vaccination")]
vaccine_name = " ".join(vaccine_name_List)
vaccine_Dict[i] = vaccine_name
def bulletin_add_remove_vaccinations():
if "Entrants" in line_List: # Entrants meaning all nations_List
if "no" in line_List:
self.vaccinations_dict.pop(vaccine_Dict[i]) # Remove all nations_List
else:
self.vaccinations_dict[vaccine_Dict[i]] = nations_List # Add all nations_List
elif "Foreigners" in line_List: # Foreigners meaning all nations_List except Arstotzka
if "no" in line_List:
self.vaccinations_dict[vaccine_Dict[i]] = [x for x in self.vaccinations_dict[vaccine_Dict[i]] # Remove all nations_List except Arstotzka
if x not in ["Antegria", "Impor", "Kolechia", "Obristan", "Republia", "United Federation"]]
else:
self.vaccinations_dict[vaccine_Dict[i]] = ["Antegria", "Impor", "Kolechia", # Add all nations_List except Arstotzka
"Obristan", "Republia",
"United Federation"]
elif "Citizens of " in line_List: # Followed by specific nations_List
if "no" in line_List:
self.vaccinations_dict[vaccine_Dict[i]] = [nation for nation in self.vaccinations_dict[vaccine_Dict[i]] if nation not in line_List]
else:
self.vaccinations_dict[vaccine_Dict[i]] = [nation for nation in nations_List if nation in line_List]
bulletin_add_remove_vaccinations()
def update_new_criminal():
for line in bulletin_lines_List:
if "Wanted by the State:" in line:
line_List = line.split(" ")
wanted = line_List[line_List.index("State:") + 1] + " " + line_List[line_List.index("State:") + 2]
self.actual_bulletin["new_criminal"] = wanted
update_allowed_and_denied_nations()
update_required_documents()
update_required_vaccinations()
update_new_criminal()
def inspect(self, entrant):
nations_List = ["Arstotzka", "Antegria", "Impor", "Kolechia", "Obristan", "Republia", "United Federation"]
documents_List = \
["passport",
"ID card",
"access permit",
"work pass",
"grant of asylum",
"certificate of vaccination",
"diplomatic authorization"]
underlined_documents_List = \
["passport",
"ID_card",
"access_permit",
"work_pass",
"grant_of_asylum",
"certificate_of_vaccination",
"diplomatic_authorization"]
def returns_nation():
for document in entrant.values():
for nation in nations_List:
if nation in document:
return nation
def check_nation():
if returns_nation() == "Arstotzka":
return
if returns_nation() not in self.actual_bulletin["allowed_nations"]:
return "Entry denied: citizen of banned nation."
def check_missing_documents():
if entrant == {}: # Missing all documents
return "Entry denied: missing required passport."
documents_req_for_nation_List = [doc for doc in self.documents_dict if returns_nation() in self.documents_dict[doc]]
vaccine_certificates_req_for_nation_List = [vac_doc for vac_doc in self.vaccinations_dict if returns_nation() in self.vaccinations_dict[vac_doc]]
if documents_req_for_nation_List == []:
for doc in self.documents_dict.keys():
if self.documents_dict[doc] == nations_List:
documents_req_for_nation_List.append(doc)
for document in documents_req_for_nation_List:
if document not in entrant.keys():
if document == "access_permit":
if ("grant_of_asylum" in entrant.keys()):
return
if ("diplomatic_authorization" in entrant.keys()):
if "Arstotzka" in entrant.get("diplomatic_authorization"):
return
else:
return "Entry denied: invalid diplomatic authorization."
return "Entry denied: missing required " + documents_List[underlined_documents_List.index(document)] + "."
if len(vaccine_certificates_req_for_nation_List) > 0:
if "certificate_of_vaccination" not in entrant.keys():
return "Entry denied: missing required certificate of vaccination."
if self.documents_dict["work_pass"] != []:
for i in range(len(entrant.values())):
if "WORK" in list(entrant.values())[i]:
if "work_pass" not in entrant.keys():
return "Entry denied: missing required work pass."
def check_mismatching_information():
def check_mismatching_ID():
document_and_ID_Dict = {}
documents_and_keys_List = [key for key in entrant.keys()]
for i, value in enumerate(entrant.values()):
for line in value.split("\n"):
if "ID#:" in line:
ID = line.split(" ")
for x in ID:
if "ID#:" in x:
ID.pop(ID.index(x))
document_and_ID_Dict[documents_and_keys_List[i]] = ID
                expected_value = next(iter(document_and_ID_Dict.values()), None)  # None if no ID fields found
                all_IDs_equal = all(value == expected_value for value in document_and_ID_Dict.values())
if not all_IDs_equal:
return "Detainment: ID number mismatch."
if "Detainment:" in str(check_mismatching_ID()):
return check_mismatching_ID()
def check_mismatching_name():
document_and_name_Dict = {}
documents_and_keys_List = [key for key in entrant.keys()]
for i, value in enumerate(entrant.values()):
for line in value.split("\n"):
if "NAME:" in line:
name_line = line.split(" ")
for x in name_line:
if "NAME:" in x:
name_line.pop(name_line.index(x))
document_and_name_Dict[documents_and_keys_List[i]] = name_line
expected_value = next(iter(document_and_name_Dict.values()))
all_names_equal = all(value == expected_value for value in document_and_name_Dict.values())
if not all_names_equal:
return "Detainment: name mismatch."
if "Detainment:" in str(check_mismatching_name()):
return check_mismatching_name()
def check_mismatching_nation():
document_and_nation_Dict = {}
documents_and_keys_List = [key for key in entrant.keys()]
for i, value in enumerate(entrant.values()):
for line in value.split("\n"):
if "NATION:" in line:
nation_line = line.split(" ")
for x in nation_line:
if "NATION:" in x:
nation_line.pop(nation_line.index(x))
document_and_nation_Dict[documents_and_keys_List[i]] = nation_line
try:
expected_value = next(iter(document_and_nation_Dict.values()))
all_nations_equal = all(value == expected_value for value in document_and_nation_Dict.values())
if not all_nations_equal:
return "Detainment: nationality mismatch."
except StopIteration:
pass
if "Detainment:" in str(check_mismatching_nation()):
return check_mismatching_nation()
def check_mismatching_DOB(): # DOB - Date of Birth
document_and_dob_Dict = {}
for i, value in enumerate(entrant.values()):
for line in value.split("\n"):
if "DOB:" in line:
DOB_line = line.split(" ")
for x in DOB_line:
if "DOB:" in x:
DOB_line.pop(DOB_line.index(x))
document_and_dob_Dict[list(entrant.keys())[i]] = DOB_line
try:
expected_value = next(iter(document_and_dob_Dict.values()))
except Exception:
pass
all_dobs_equal = all(value == expected_value for value in document_and_dob_Dict.values())
if not all_dobs_equal:
return "Detainment: date of birth mismatch."
if "Detainment:" in str(check_mismatching_DOB()):
return check_mismatching_DOB()
def check_missing_vaccination():
nation = returns_nation()
vaccines_req_for_nation_Dict = [vac for vac in self.vaccinations_dict if nation in self.vaccinations_dict[vac]]
for vac in vaccines_req_for_nation_Dict:
for i in range(len(list(entrant.values()))):
if vac not in list(entrant.values())[i]:
continue
else:
return
return "Entry denied: missing required vaccination."
def check_documents_expiration_date():
document_and_exp_date_Dict = {}
document_and_key_List = [key for key in entrant.keys()]
for i, value in enumerate(entrant.values()):
for line in value.split("\n"):
if "EXP:" in line:
date = line.split(" ")
for x in date:
if "EXP:" in x:
date.pop(date.index(x))
date_list = date[0].split(".")
document_and_exp_date_Dict[document_and_key_List[i]] = date_list
for i in range(len(document_and_key_List)):
if (document_and_key_List[i] != "diplomatic_authorization" and document_and_key_List[i] != "ID_card") and document_and_key_List[i] != "certificate_of_vaccination":
if document_and_exp_date_Dict[document_and_key_List[i]][0] < "1982":
return "Entry denied: " + documents_List[underlined_documents_List.index(document_and_key_List[i])]+ " expired."
elif document_and_exp_date_Dict[document_and_key_List[i]][0] == "1982":
if document_and_exp_date_Dict[document_and_key_List[i]][1] < "11":
return "Entry denied: " + documents_List[underlined_documents_List.index(document_and_key_List[i])]+ " expired."
elif document_and_exp_date_Dict[document_and_key_List[i]][1] == "11":
if document_and_exp_date_Dict[document_and_key_List[i]][2] <= "22":
return "Entry denied: " + documents_List[underlined_documents_List.index(document_and_key_List[i])]+ " expired."
def return_criminal_name():
for value in entrant.values():
for line in value.split("\n"):
if "NAME:" in line:
name = line.split(" ")
name.pop(0)
for n in range(len(name)):
if "," in name[n]:
name[n] = name[n].replace(",", "")
return name[1] + " " + name[0]
if len(entrant.keys()) > 1:
if "Detainment:" in str(check_mismatching_information()):
return check_mismatching_information()
if return_criminal_name() == self.actual_bulletin["new_criminal"]:
return "Detainment: Entrant is a wanted criminal."
if "Entry denied: " in str(check_missing_documents()):
return check_missing_documents()
if "Entry denied: " in str(check_documents_expiration_date()):
return check_documents_expiration_date()
if "Entry denied: " in str(check_missing_vaccination()):
return check_missing_vaccination()
if "Entry denied: " in str(check_nation()):
return "Entry denied: citizen of banned nation."
if returns_nation() == "Arstotzka":
return "Glory to Arstotzka."
else:
return "Cause no trouble."
Answer: Inspector! Why did you allow Gregory Arstotzkaya, a Kolechian spy, into our glorious nation? Or Ilyana Dbrova, whose documents were perfect forgeries, except for a gender mismatch in her ID card? And why did you fail to apprehend Karl von Oskowitz?
Treating documents as a single piece of text and just checking whether they contain certain words is problematic. Instead, parse each document into a dictionary, which makes it easy to look at relevant fields only. This also simplifies your code, because you no longer need to have parsing code all over the place.
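As a minimal sketch of that idea (the field layout follows the document text handled in the code above; the helper name is invented):

```python
def parse_document(text):
    """Parse a document of 'FIELD: value' lines into a dict of fields."""
    fields = {}
    for line in text.split("\n"):
        if ":" in line:
            key, _, value = line.partition(":")
            fields[key.strip()] = value.strip()
    return fields

passport = parse_document("NAME: Dbrova, Ilyana\nNATION: Antegria\nID#: XA5-FG2")
# each check is now a plain field lookup, e.g. passport["NATION"]
```

With documents parsed once up front, every mismatch check reduces to comparing one field across documents, and the string-splitting loops disappear.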
Similarly, using string comparisons for dates is not a good idea. Parse dates into a datetime object and compare those instead.
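For example, with the YYYY.MM.DD format your expiry check assumes (the helper name is invented):

```python
from datetime import datetime

CURRENT_DATE = datetime(1982, 11, 22)

def is_expired(exp_text, current_date=CURRENT_DATE):
    """True if an 'EXP' value of the form YYYY.MM.DD is on or before the current date."""
    return datetime.strptime(exp_text, "%Y.%m.%d") <= current_date

is_expired("1982.11.22")  # True, matching the <= comparison in the original
is_expired("1983.01.01")  # False
```

This replaces the three-level year/month/day string comparison with a single, correct comparison.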
You're making things more difficult for yourself by splitting country lists on spaces rather than commas; a space-based split falls apart on multi-word country names.
Today it's just a single hack for the United Federation; tomorrow Antegria splits up into North Antegria and the Republic of South Antegria...
Why hard-code the names of any country (except for the motherland, of course)? If someone claims to be from Nonexististan, well, non-existing countries aren't on our whitelist, so they'll be rejected anyway. And why should you ignore a bulletin that tells you to allow people from a newly formed country?
If you're having difficulties isolating specific parts of a sentence, for simple cases you could use slicing: string[start:end] gives you a new string that only contains the specified part of the original string. Negative offsets count from the end of the string, left-out offsets default to the start/end of the string.
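For instance:

```python
line = "EXP: 1982.11.22"
date_part = line[5:]   # '1982.11.22' -- everything after the 'EXP: ' label
label = line[:3]       # 'EXP' -- a left-out offset defaults to the start
day = line[-2:]        # '22' -- negative offsets count from the end
```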
Parsing bulletins would be much easier with the help of a few well-chosen regular expressions. Consider creating a list of pattern-function tuples, such as ('Allow citizens of (?P<countries>.+)', allow_citizens), where allow_citizens is a function that takes a match object. You can fetch the countries with match.group('countries'), which can then easily be split on commas. Be sure to strip() leading and trailing spaces from each individual country name. Now, for each line in a bulletin, your receive_bulletin function would walk through this list, calling re.match for each pattern until it finds a match, which it passes on to the associated function. To handle the similarities between document and vaccination instructions, you could order the patterns from most to least specific, or use look-behind assertions, or perform additional checks in the associated function.
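A sketch of that pattern-dispatch idea (only one rule is shown; the rule table and handler are illustrative):

```python
import re

allowed_nations = set()

def allow_citizens(match):
    # split the captured group on commas, stripping stray spaces
    for country in match.group('countries').split(','):
        allowed_nations.add(country.strip())

BULLETIN_RULES = [
    # (pattern, handler) pairs, ordered from most to least specific
    (r'Allow citizens of (?P<countries>.+)', allow_citizens),
]

def receive_bulletin(bulletin):
    for line in bulletin.split('\n'):
        for pattern, handler in BULLETIN_RULES:
            match = re.match(pattern, line)
            if match:
                handler(match)
                break

receive_bulletin("Allow citizens of Obristan, United Federation")
# allowed_nations == {'Obristan', 'United Federation'}
```

Adding support for a new bulletin instruction then means adding one (pattern, handler) pair, not another block of ad-hoc string surgery.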
It's probably easier to store document and vaccination requirements per country, rather than per document type. That means that once you know where someone is from, you immediately know which documents are required, instead of having to check every document type for the person's country.
For requirements, I would use a defaultdict, from the collections module, with the set function as default factory. This means you don't need to worry about initialization: you can just do required_documents[country].add(document) without having to check whether required_documents even contains that country key.
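A sketch of that (the country and document names are invented):

```python
from collections import defaultdict

required_documents = defaultdict(set)

# no initialization needed: a missing country key yields a fresh empty set
required_documents['Kolechia'].add('passport')
required_documents['Kolechia'].add('entry_permit')

required_documents['Kolechia']  # {'passport', 'entry_permit'}
required_documents['Obristan']  # set(), no KeyError raised
```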
In inspect, you're calling a lot of functions twice. First to see if the entrant should be detained or rejected, then again for the actual response. That's a waste of work. Just call these functions once and store their result in a local variable, then check that variable and return it if necessary.
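That is, instead of calling a check in the condition and again in the return, the pattern looks like this (with an invented stub standing in for the real check):

```python
def check_mismatching_id(entrant):
    # stub for the real check: returns a message, or None when all is well
    if not entrant.get('ids_match', True):
        return "Detainment: ID number mismatch."
    return None

def inspect(entrant):
    # run each check once and keep the result in a local variable
    result = check_mismatching_id(entrant)
    if result is not None:
        return result
    return "Cause no trouble."

inspect({'ids_match': False})  # 'Detainment: ID number mismatch.'
inspect({'ids_match': True})   # 'Cause no trouble.'
```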
Most nested functions are reasonably named, which makes it easy to get an idea of what your code does. However, a lot of them depend on local state in their 'parent' function, which makes it difficult to understand how they work. There's also a fair amount of (almost) duplicated code and special-case handling that could be removed with more robust general-case code.
A function that takes all of its input via arguments, and returns its output, without modifying any other state - a 'pure' function - is easier to understand and easier to (re)use. For example, compare check_documents_expiration_date() with is_document_expired(document, current_date): with the former, it's not immediately clear which documents it inspects, or against which expiry date. The latter clearly checks the given document against the given date. It's doing less, but can be used as a building block for other functions, or in small expressions like expired_documents = [doc for doc in documents if is_document_expired(doc, current_date)]. | {
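A pure version of that check, using datetime objects as suggested earlier (the field names are illustrative):

```python
from datetime import datetime

def is_document_expired(document, current_date):
    """Pure function: the result depends only on its arguments."""
    return document['EXP'] <= current_date

documents = [
    {'type': 'passport', 'EXP': datetime(1982, 5, 1)},
    {'type': 'entry_permit', 'EXP': datetime(1983, 1, 1)},
]
current_date = datetime(1982, 11, 22)
expired_documents = [d['type'] for d in documents
                     if is_document_expired(d, current_date)]
# expired_documents == ['passport']
```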
"domain": "codereview.stackexchange",
"id": 37642,
"tags": "python, python-3.x, object-oriented, programming-challenge"
} |
Is there anything in the universe that is completely static? | Question: Based on my current, admittedly deficient, understanding of physics, everything in the universe is in constant change and motion.
Is that understanding correct or is there something that is static?
By static I mean that all of that object's properties remain constant from one moment in time to the next. These properties include its position/motion as well as the properties of its internal structure.
Answer: For us to talk about "static," we have to be able to talk about a reference frame in which we are measuring something. As several people have mentioned, it is always possible to construct a reference frame with respect to a particle, and that particle will be static in that frame.
However, if I venture a guess and say that you are talking about a system which is unchanging, the current prevailing opinion is that everything changes. In particular, the second law of thermodynamics says that the entropy of an isolated system never decreases, so there is always energy being converted into heat.
Could there be a system that is truly static? We actually can't disprove it. But from what we have seen, all macroscopic systems exhibit thermodynamic behaviors -- they all increase in entropy.
We can theorize about the existence of static systems, especially on the small scale, but to the best of our current understanding, no system actually earns the title of "static." | {
"domain": "physics.stackexchange",
"id": 45141,
"tags": "spacetime, universe, equilibrium"
} |
Calculating the (expected) kinetic energy of an electron in the ground state of a Coulomb potential? | Question: I've been struggling with this all week to no avail.
I'm asked to calculate the expectation value of kinetic energy for an electron in the ground state of a Coulomb potential. I know that it ought to be $ 13.6 \, \mathrm{eV}$, but I am having a difficult time arriving there.
In general, the expectation value of, say, $Q$, is
$$\langle Q \rangle = \int \psi^* \hat Q \psi \, \mathrm dV $$
over all space. In the case of kinetic energy, $\hat Q$ would be equal to
$$ \frac{-\hbar^2}{2m} \nabla^2 $$
and, in the case of a ground-state electron, we would have
$$ \psi = \sqrt{\frac{1}{4 \pi}} \frac{2}{a^{3/2}} \exp(- r / a) $$
with $a$ being the Bohr radius.
However, for the life of me, I cannot get this integral to work. For a while, I was continually coming up with either 0 or a non-converging integral, until I stumbled on some piece of information (that I can't find convincing proof of, either in my textbook or on the internet) that the square angular momentum (that is, $(\mathrm d^2/ \mathrm d \theta^2 + \mathrm d^2/ \mathrm d \phi^2) \psi$) is equal to $l(l+1)$ - in my case, 0, since $l = 0$ in the ground state $(1,0,0)$. This simplified things and gave me an integral I could get to converge. However, it seems to converge to
$$ \frac{\hbar^2}{a^2} $$
which not only has the wrong units of (energy time per length)^2 but also has the wrong value.
Please help. This homework problem has taken an embarrassingly long time and a lot of scratch paper to do already.
Answer: The factorization into a radial part plus the angular momentum operator is true, but you don't really need it; instead, you can simply use the Laplacian in spherical coordinates,
$$
\nabla^2
= \frac{1}{r^2}\frac{\partial}{\partial r}r^2\frac{\partial}{\partial r}
+\frac{1}{r^2\sin(\theta)}\frac{\partial}{\partial \theta}\sin(\theta)\frac{\partial}{\partial \theta}+ \frac{1}{r^2\sin^2(\theta)}\frac{\partial^2}{\partial \varphi^2},
$$
and note that the angular derivatives vanish with your spherically-symmetric wavefunction.
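As a numerical cross-check of the radial integral worked out below, one can evaluate it directly in units where $\hbar = m = a = 1$, so the expected result is $\hbar^2/2ma^2 = 1/2$ (a pure-Python sketch, not part of the original answer):

```python
import math

def kinetic_expectation(a=1.0, hbar=1.0, m=1.0, rmax=40.0, n=200_000):
    """Riemann-sum evaluation of
    <T> = -(hbar^2/2m)(4/a^3) * Int_0^inf (-2r/a + r^2/a^2) e^(-2r/a) dr."""
    dr = rmax / n
    total = 0.0
    for i in range(1, n):
        r = i * dr
        total += (-2 * r / a + r**2 / a**2) * math.exp(-2 * r / a)
    total *= dr
    return -(hbar**2 / (2 * m)) * (4 / a**3) * total

kinetic_expectation()  # approximately 0.5, i.e. hbar^2 / (2 m a^2)
```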
From here you just need to calculate the action of the kinetic energy operator on the ground state,
\begin{align}
\hat{Q}\psi
&= \frac{-\hbar^2}{2m}\nabla^2\psi
\\& = \frac{-\hbar^2}{2m}\sqrt{\frac{1}{4 \pi}} \frac{2}{a^{3/2}}\nabla^2 \exp(- r / a)
\\& = \frac{-\hbar^2}{2m}\sqrt{\frac{1}{4 \pi}} \frac{2}{a^{3/2}}\frac{1}{r^2}\frac{\partial}{\partial r}\left[r^2\frac{\partial}{\partial r} \exp(- r / a)\right]
\\& = \frac{-\hbar^2}{2m}\sqrt{\frac{1}{4 \pi}} \frac{2}{a^{3/2}}\frac{1}{r^2}\frac{\partial}{\partial r}\left[r^2\frac{-1}{a} \exp(- r / a)\right]
\\& = \frac{-\hbar^2}{2m}\sqrt{\frac{1}{4 \pi}} \frac{2}{a^{3/2}}\frac{1}{r^2}\left[2r\frac{-1}{a} \exp(- r / a)+r^2\frac{1}{a^2} \exp(- r / a)\right]
\\& = \frac{-\hbar^2}{2m}\sqrt{\frac{1}{4 \pi}} \frac{2}{a^{3/2}}\left[\frac{-2}{ar} \exp(- r / a)+\frac{1}{a^2} \exp(- r / a)\right],
\end{align}
and then integrate against the wavefunction itself:
\begin{align}
\langle\psi|\hat{Q}|\psi\rangle
&=\int \psi(\mathbf r)^* \, \hat{Q}\psi(\mathbf r)\mathrm d\mathbf r
\\& = \int_0^{\infty} \psi(r)^* \hat{Q}\psi(r) 4\pi r^2\mathrm dr
\\& = \int_0^{\infty} \left(\sqrt{\frac{1}{4 \pi}} \frac{2}{a^{3/2}}\exp(- r / a)\right)
\\& \qquad\times\left(\frac{-\hbar^2}{2m}\sqrt{\frac{1}{4 \pi}} \frac{2}{a^{3/2}}\left[\frac{-2}{ar} \exp(- r / a)+\frac{1}{a^2} \exp(- r / a)\right]\right) 4\pi r^2\mathrm dr
\\& = \frac{-\hbar^2}{2m} \frac{4}{a^3}\int_0^{\infty} \left(\frac{-2r}{a} e^{- 2r / a}+\frac{r^2}{a^2} e^{-2 r / a}\right) \mathrm dr
\\& = \frac{-\hbar^2}{2m} \frac{4}{a^3} \left(\frac{-a}{4} \right)
\\& = +\frac{\hbar^2}{2ma^2}
\\& = 13.6\:\mathrm{eV},
\end{align}
as required. | {
"domain": "physics.stackexchange",
"id": 38053,
"tags": "quantum-mechanics, homework-and-exercises, energy, wavefunction, coordinate-systems"
} |
Raspberry Pi/PySimpleGUI based resistance test system | Question: Hardware: Raspberry Pi 3B+, Elecrow touchscreen, DFR0660 Barcode Scanner, ADS1115 ADC
I would like any bad practices/possible failure points pointed out specifically in the loop(), read_adc(), and catheter_test() functions. The write_to_report() and measure_single_catheter() functions were removed. The system is fully functional.
import sys
from time import sleep
import RPi.GPIO as GPIO
from os import system
from datetime import datetime
from fpdf import FPDF
from fpdf.enums import XPos, YPos
import pyautogui
import pygame
from itertools import chain
import subprocess
from hw_init import *
from cath_test_init import *
from gui_init import *
sys.path.append('/usr/lib/python39.zip')
# !/usr/bin python3
# VERBOSE LOGGING FOR TROUBLESHOOTING
if sys.argv[1] == 'log':
logfile_date = datetime.now().strftime("%m_%d_%Y")
logfile_name = 'logfile' + logfile_date + '.txt'
sys.stdout = open(logfile_name, 'w')
def log_print(string):
logfile_now = datetime.now().strftime("%m_%d_%Y_%H:%M:%S")
print(logfile_now + "\t" + string)
class PDF(FPDF):
# Page footer
def footer(self):
self.set_y(-40)
self.set_font('Times', 'I', 12)
# Page number {nb} comes from alias_nb_page
self.cell(0, 10, 'Model Selected: ' + MODEL_SELECTED +
                  ' Software Version:' + SOFTWARE_VERSION,
new_x=XPos.LMARGIN, new_y=YPos.NEXT, align='L')
self.cell(0, 10, 'Page ' + str(self.page_no()) + '/{nb}',
new_x=XPos.RIGHT, new_y=YPos.TOP, align='C')
# AUDIO SETUP
FAIL_SOUND = 'fail.mp3'
PASS_SOUND = 'pass.mp3'
PLAY_SOUND = True
pygame.init()
pygame.mixer.init()
'''Loading and playing sounds in order to load dependencies of library'''
pygame.mixer.music.load(FAIL_SOUND)
pygame.mixer.music.play()
while pygame.mixer.music.get_busy():
pygame.time.Clock().tick(10)
pygame.mixer.music.load(PASS_SOUND)
pygame.mixer.music.play()
while pygame.mixer.music.get_busy():
pygame.time.Clock().tick(10)
def get_ip():
ip = subprocess.run(['hostname', '-I'],
stdout=subprocess.PIPE).stdout.decode('utf-8')
return ip
def terminate_script():
window.close()
sys.exit("Terminated at admin request")
def reboot_system():
system('sudo reboot')
def shutdown_system():
system('sudo shutdown -h now')
def set_bias_mux_to_low_range():
log_print("set_bias_mux_to_low_range() called")
GPIO.output(A_BIAS_MUX, GPIO.LOW)
log_print("set_bias_mux_to_low_range() returning")
def set_bias_mux_to_hi_range():
log_print("set_bias_mux_to_hi_range() called")
GPIO.output(A_BIAS_MUX, GPIO.HIGH)
log_print("set_bias_mux_to_hi_range() returning")
def set_dut_mux_to_input_res():
log_print("set_dut_mux_to_input_res() called")
GPIO.output(A_DUT_MUX, GPIO.LOW)
GPIO.output(B_DUT_MUX, GPIO.LOW)
log_print("set_dut_mux_to_input_res() returning")
def set_dut_mux_to_output_res():
log_print("set_dut_mux_to_output_res() called")
GPIO.output(A_DUT_MUX, GPIO.HIGH)
GPIO.output(B_DUT_MUX, GPIO.LOW)
log_print("set_dut_mux_to_output_res() returning")
def reset_mouse_position():
# log_print("reset_mouse_position() called")
pyautogui.moveTo(700, 160)
# log_print("reset_mouse_position() returning")
def no_blank_screen():
log_print("no_blank_screen() called")
cmd_list = ['xset s noblank', 'xset -dpms', 'xset s off']
for command in cmd_list:
system(command)
log_print("no_blank_screen() returning")
def audio_feedback(local_result):
log_print("audio_feedback(%s) called" % local_result)
global PLAY_SOUND
if local_result == 'FAIL':
pygame.mixer.music.load(FAIL_SOUND)
pygame.mixer.music.play()
while pygame.mixer.music.get_busy():
pygame.time.Clock().tick(10)
elif local_result == 'PASS':
pygame.mixer.music.load(PASS_SOUND)
pygame.mixer.music.play()
while pygame.mixer.music.get_busy():
pygame.time.Clock().tick(10)
else:
print('result is invalid. Cant play audio')
if PLAY_SOUND:
PLAY_SOUND = False
log_print("audio_feedback(%s) returning" % local_result)
def gui_frame_msngr_update(frame_to_show,
new_current_process_message='No message'):
user_messager_window.update(new_current_process_message)
current_frame_visibility = [frame1.visible, frame2.visible, frame3.visible,
frame4.visible, frame5.visible,
frame6.visible]
frame_update = [False, True, False, False, False, False, False, False,
False]
if frame_to_show == 1:
frame_update = [True, False, False, False, False, False, False, False,
False]
elif frame_to_show == 2:
frame_update = [False, True, False, False, False, False, False, False,
False]
elif frame_to_show == 3:
reset_mouse_position()
frame_update = [False, False, True, False, False, False, False, False,
False]
keypad_message_box.update(new_current_process_message)
elif frame_to_show == 4:
frame_update = [False, False, False, True, False, False, False, False,
False]
pass_test_text_box.update(new_current_process_message)
elif frame_to_show == 5:
frame_update = [False, False, False, False, True, False, False, False,
False]
fail_test_text_box.update(new_current_process_message)
elif frame_to_show == 6:
frame_update = [False, False, False, False, False, True, False, False,
False]
elif frame_to_show == 7:
frame_update = [False, False, False, False, False, False, True, False,
False]
elif frame_to_show == 8:
frame_update = [False, False, False, False, False, False, False, True,
False]
frame8_message_box.update(new_current_process_message)
elif frame_to_show == 9:
frame_update = [False, False, False, False, False, False, False, False,
True]
frame9_ip_add_box.update(new_current_process_message)
if not (current_frame_visibility == frame_update):
frame1.update(visible=frame_update[0])
frame2.update(visible=frame_update[1])
frame3.update(visible=frame_update[2])
frame4.update(visible=frame_update[3])
frame5.update(visible=frame_update[4])
frame6.update(visible=frame_update[5])
frame7.update(visible=frame_update[6])
frame8.update(visible=frame_update[7])
frame9.update(visible=frame_update[8])
window.refresh()
def show_calcheck_results(measurements, button_text, cal_reset):
log_print("show_calcheck_results([%.3f,%.3f], %s)" % (
measurements[0], measurements[1], button_text))
if button_text == 'CHECK LOW RANGE':
add_string = 'INSERT HIGH-RANGE\n' \
'RESISTANCE FIXTURE AND PRESS\n' \
'\'CHECK HIGH RANGE\''
if cal_reset:
res_message = 'UPPER-END RESISTANCE: %.3f\n' \
'LOWER-END RESISTANCE: %.3f\n%s' % \
(round(measurements[0], 3),
round(measurements[1], 3),
add_string)
else:
res_message = 'INPUT RESISTANCE: %.3f\n' \
'OUTPUT RESISTANCE: %.3f\n%s' % \
(round(measurements[0], 3),
round(measurements[1], 3),
add_string)
elif button_text == 'CHECK HIGH RANGE':
add_string = 'PRESS \'APPROVE & EXIT\'\n' \
'OR \'REDO CAL REF\'\n' \
'IF OUT OF TOLERANCE'
if cal_reset:
res_message = 'UPPER-END RESISTANCE: %.3f\n' \
'LOWER-END RESISTANCE: %.3f\n%s' % \
(round(measurements[0], 3),
round(measurements[1], 3),
add_string)
else:
res_message = 'INPUT RESISTANCE: %.3f\n' \
'OUTPUT RESISTANCE: %.3f\n' \
'%s' % \
(round(measurements[0], 3),
round(measurements[1], 3),
add_string)
else:
res_message = 'ERROR IN show_calcheck_results\nCONTACT ENGINEERING'
log_print("show_calcheck_results() returning")
return res_message
def show_results(cath_test_result, barcode):
log_print("show_results(%s, %s) called" % (cath_test_result, barcode))
if SHOW_LAST_CATH_GUI_MSG:
message = 'JOB FINISHED\nUNPLUG CATHETER\nTO PRINT REPORT\n%s: %s' % (
barcode, cath_test_result)
else:
if REPEATED_CATHETER_DETECTED:
message = 'REPEATED CATHETER\nREADY FOR\nNEXT CATHETER\n%s: %s' % (
barcode, cath_test_result)
else:
message = 'READY FOR\nNEXT CATHETER\n%s: %s' % (
barcode, cath_test_result)
log_print("show_results() returning")
return message
def alrt_rdy_func_enable():
log_print("alrt_rdy_func_enable() called")
i2cbus.write_i2c_block_data(I2C_DEV_ADDR, REG_LOTHRESH_ADDR,
LOTHRESH_CONFIGURATION_DATA)
i2cbus.write_i2c_block_data(I2C_DEV_ADDR, REG_HITHRESH_ADDR,
HITHRESH_CONFIGURATION_DATA)
log_print("alrt_rdy_func_enable() returning")
def configure_adc(config_reg_data):
i2cbus.write_i2c_block_data(I2C_DEV_ADDR, REG_CONFIG_ADDR, config_reg_data)
def isr_enable():
log_print("isr_enable() called")
GPIO.add_event_detect(ADS1115_ALRT_RDY_SIG, GPIO.FALLING)
GPIO.add_event_callback(ADS1115_ALRT_RDY_SIG, read_adc)
log_print("isr_enable() returning")
def isr_disable():
log_print("isr_disable() called")
# GPIO.remove_event_detect(ADS1115_ALRT_RDY_SIG)
log_print("isr_disable() returning")
def adc_decode(adc_codes):
msb = adc_codes[0] << 8
lsb = adc_codes[1]
adc_code = msb | lsb
voltage = adc_code * LSB
return voltage
def volt2res(voltage):
log_print("volt2res(%f) called" % voltage)
high_range_correction_factor = \
(initial_m_high_range * voltage) + initial_b_high_range
low_range_correction_factor = \
(initial_m_low_range * voltage) + initial_b_low_range
dynamic_high_range_correction_factor = \
(dynamic_m_high_range * voltage) + dynamic_b_high_range
dynamic_low_range_correction_factor = \
(dynamic_m_low_range * voltage) + dynamic_b_low_range
if MEASURE_OUTPUT_RESISTANCE:
if CATH_MODEL is MODEL_P16x_SEL:
correction_factor = low_range_correction_factor
dynamic_correction_factor = dynamic_low_range_correction_factor
tfco = TFCO_LOW
elif CATH_MODEL is MODEL_P330_SEL:
correction_factor = high_range_correction_factor
dynamic_correction_factor = dynamic_high_range_correction_factor
tfco = TFCO_HIGH
elif CATH_MODEL is MODEL_P330B_SEL:
correction_factor = high_range_correction_factor
dynamic_correction_factor = dynamic_high_range_correction_factor
tfco = TFCO_HIGH
else:
tfco = TFCO_LOW
correction_factor = low_range_correction_factor
dynamic_correction_factor = dynamic_low_range_correction_factor
print("Something went wrong when selecting the correction "
"factor during output resistance "
"measurement. CATH_MODEL = ", CATH_MODEL)
else:
if CATH_MODEL is MODEL_P16x_SEL:
correction_factor = low_range_correction_factor
dynamic_correction_factor = dynamic_low_range_correction_factor
tfco = TFCO_LOW
elif CATH_MODEL is MODEL_P330_SEL:
correction_factor = high_range_correction_factor
dynamic_correction_factor = dynamic_high_range_correction_factor
tfco = TFCO_HIGH
elif CATH_MODEL is MODEL_P330B_SEL:
correction_factor = low_range_correction_factor
dynamic_correction_factor = dynamic_low_range_correction_factor
tfco = TFCO_LOW
else:
tfco = TFCO_LOW
correction_factor = low_range_correction_factor
dynamic_correction_factor = dynamic_low_range_correction_factor
print("Something went wrong when selecting the correction factor "
"during input resistance measurement. "
"CATH_MODEL = ", CATH_MODEL)
transfer_func = RGAIN1 * ((10 * ((voltage - V_BIAS) * tfco + V_BIAS)) - 1)
r_dut = transfer_func - correction_factor - dynamic_correction_factor
log_print("volt2res() returning")
return r_dut
def configure_measurement():
global MODEL_RES, MODEL_TOL, V_BIAS, RANGE_LIMIT_LOW, RANGE_LIMIT_HIGH
log_print("configure_measurement() called")
if MEASURE_OUTPUT_RESISTANCE:
set_dut_mux_to_output_res()
if CATH_MODEL is MODEL_P16x_SEL:
set_bias_mux_to_low_range()
MODEL_RES = MODEL_P16x_OUTPUT_RES
MODEL_TOL = MODEL_P16x_TOL
V_BIAS = V_BIAS_LOW
RANGE_LIMIT_LOW = LOWRANGE_LIMIT_LOW
RANGE_LIMIT_HIGH = LOWRANGE_LIMIT_HIGH
elif CATH_MODEL is MODEL_P330_SEL:
set_bias_mux_to_hi_range()
MODEL_RES = MODEL_P330_OUTPUT_RES
MODEL_TOL = MODEL_P330_OUTPUT_TOL
V_BIAS = V_BIAS_HIGH
RANGE_LIMIT_LOW = HIGHRANGE_LIMIT_LOW
RANGE_LIMIT_HIGH = HIGHRANGE_LIMIT_HIGH
elif CATH_MODEL is MODEL_P330B_SEL:
set_bias_mux_to_hi_range()
MODEL_RES = MODEL_P330B_OUTPUT_RES
MODEL_TOL = MODEL_P330B_OUTPUT_TOL
V_BIAS = V_BIAS_HIGH
RANGE_LIMIT_LOW = HIGHRANGE_LIMIT_LOW
RANGE_LIMIT_HIGH = HIGHRANGE_LIMIT_HIGH
else:
log_print("InvalidModelError:model index "
"value is invalid (model=%d)" % CATH_MODEL)
else:
set_dut_mux_to_input_res()
if CATH_MODEL is MODEL_P16x_SEL:
set_bias_mux_to_low_range()
MODEL_RES = MODEL_P16x_INPUT_RES
MODEL_TOL = MODEL_P16x_TOL
V_BIAS = V_BIAS_LOW
RANGE_LIMIT_LOW = LOWRANGE_LIMIT_LOW
RANGE_LIMIT_HIGH = LOWRANGE_LIMIT_HIGH
elif CATH_MODEL is MODEL_P330_SEL:
set_bias_mux_to_hi_range()
MODEL_RES = MODEL_P330_INPUT_RES
MODEL_TOL = MODEL_P330_INPUT_TOL
V_BIAS = V_BIAS_HIGH
RANGE_LIMIT_LOW = HIGHRANGE_LIMIT_LOW
RANGE_LIMIT_HIGH = HIGHRANGE_LIMIT_HIGH
elif CATH_MODEL is MODEL_P330B_SEL:
set_bias_mux_to_low_range()
MODEL_RES = MODEL_P330B_INPUT_RES
MODEL_TOL = MODEL_P330B_INPUT_TOL
V_BIAS = V_BIAS_LOW
RANGE_LIMIT_LOW = LOWRANGE_LIMIT_LOW
RANGE_LIMIT_HIGH = LOWRANGE_LIMIT_HIGH
else:
log_print("InvalidModelError:model index "
"value is invalid (model=%d)" % CATH_MODEL)
log_print("configure_measurement() returning")
def jobsize_capture(jobsize_string):
global JOB_SIZE
log_print("jobsize_capture(%s) called" % jobsize_string)
if len(jobsize_string) > 0:
JOB_SIZE = int(jobsize_string)
else:
log_print("NO VALUE ENTERED.")
log_print("jobsize_capture() returning")
return False
if 0 < JOB_SIZE < MAX_CATHETERS_PER_JOB:
log_print("jobsize_capture() returning")
return True
else:
log_print("INVALID JOB SIZE.")
log_print("jobsize_capture() returning")
return False
def validate_job_number_barcode_scan(x):
log_print("validate_job_number_barcode_scan(%s) called" % x.strip())
try:
if len(x) > 0 and (int(x.strip()) > 999 and int(
x.strip()) < 100000000):
code_validity = True
else:
code_validity = False
except ValueError:
log_print("Invalid Job Number")
code_validity = False
return code_validity
log_print("validate_job_number_barcode_scan() returning")
return code_validity
def jobnumber_capture():
global JOB_NUMBER
log_print("jobnumber_capture() called")
for attempts in range(3):
JOB_NUMBER, jobnumber_validity = barcode_scanner()
if len(JOB_NUMBER) > 1:
if validate_job_number_barcode_scan(JOB_NUMBER):
log_print("jobnumber_capture() returning")
return True
else:
log_print("jobnumber_capture() returning")
return False
if not (bool(JOB_NUMBER)):
log_print("jobnumber_capture() returning")
return False
def model_capture():
global CATH_MODEL
log_print("model_capture() called")
if MODEL_SELECTED == 'P16x':
CATH_MODEL = MODEL_P16x_SEL
elif MODEL_SELECTED == 'P330':
CATH_MODEL = MODEL_P330_SEL
elif MODEL_SELECTED == 'P330B':
CATH_MODEL = MODEL_P330B_SEL
else:
log_print("INVALID MODEL SELECTED!")
log_print("model_capture() returning")
def waiting_for_manual_barcode():
global KEYPAD_CATH_BARCODE, MANUAL_BARCODE_CAPTURED
log_print("waiting_for_manual_barcode() called")
while not MANUAL_BARCODE_CAPTURED:
pass
sleep(.1)
MANUAL_BARCODE_CAPTURED = False
local_barcode = KEYPAD_CATH_BARCODE
log_print("waiting_for_manual_barcode() returning")
return local_barcode
def validate_catheter_barcode_scan(x):
log_print("validate_catheter_barcode_scan(%s) called" % x)
try:
if 5 < len(x) < 10:
code_validity = True
else:
code_validity = False
except TypeError:
code_validity = False
log_print("validate_catheter_barcode_scan() returning")
return code_validity
def manual_catheter_barcode_entry():
log_print("manual_catheter_barcode_entry() called")
keypad_back_btn.update('')
gui_frame_msngr_update(3, CATH_KEYPAD_MESSAGE)
while True:
event, values = window.read()
if event in '1234567890':
keys_entered = values['input'] # get what's been entered so far
keys_entered += event # add the new digit
window['input'].update(keys_entered)
elif event == 'Submit':
keys_entered = values['input']
window['input'].update('')
if validate_catheter_barcode_scan(keys_entered):
cath_barcode = keys_entered
break
else:
window['keypad_message'].update(CATH_KEYPAD_INVALID_MESSAGE)
keys_entered = ''
window['input'].update(keys_entered)
elif event == 'Clear': # clear keys if clear button
keys_entered = ''
window['input'].update(keys_entered)
log_print("manual_catheter_barcode_entry() returning")
return cath_barcode
def barcode_scanner():
log_print("barcode_scanner() called")
ser.write(SCNR_TRGR_CMD_BYTES) # Write Hex command to trigger barcode read
x = ser.read_until(
b'\r') # \r is the last character returned by the barcode reader after
# a barcode has been read. This character may change if
# the scanner model changes.
x = x.decode().split("31",
1) # anything to the right of the
# first 31 is the barcode.
# '31' is the last 2-digit code returned by the reader.
# This 2-digit code may have to change (or be removed)
# if the scanner model changes.
code_validity = validate_catheter_barcode_scan(x[1])
log_print("barcode_scanner() returning")
return x[1], code_validity
def print_report(local_report_filename):
log_print("print_report(%s) called" % local_report_filename)
cmd = 'sudo lp ' + local_report_filename
system(cmd)
log_print("print_report() returning")
def sort_data():
log_print("sort_data() called")
keys = list(cath_data_dict_unsorted_buffer.keys())
keys.sort()
catheter_data_dict_sorted = {key: cath_data_dict_unsorted_buffer[key]
for key in keys}
log_print("sort_data() returning")
return catheter_data_dict_sorted
def correction_fctr_calc(xlist, ylist):
log_print("correction_factor_calculation() called")
delta_x_low_range = xlist[0] - xlist[1]
delta_y_low_range = (ylist[0] - CAL_REF_RESISTANCE_LOWRANGE_HIGH) - (
ylist[1] - CAL_REF_RESISTANCE_LOWRANGE_LOW)
m_low_range = delta_y_low_range / delta_x_low_range
b_low_range = \
(ylist[0] - CAL_REF_RESISTANCE_LOWRANGE_HIGH) - m_low_range * xlist[0]
delta_x_high_range = xlist[2] - xlist[3]
delta_y_high_range = (ylist[2] - CAL_REF_RESISTANCE_HIRANGE_HIGH) - (
ylist[3] - CAL_REF_RESISTANCE_HIRANGE_LOW)
m_high_range = delta_y_high_range / delta_x_high_range
b_high_range = \
(ylist[2] - CAL_REF_RESISTANCE_HIRANGE_HIGH) - m_high_range * xlist[2]
log_print("correction_factor_calculation() returning")
return m_low_range, b_low_range, m_high_range, b_high_range
def correction_value(m, b, x):
log_print("correction_value() called")
corr_val = (m * x) + b
log_print("correction_value() returning")
return corr_val
def calibration():
global CAL_PASSED, V_BIAS, CATH_MODEL, CONVERSION_READY, \
ADC_SAMPLE_LATEST, CAL_REF_RESISTANCE_LOWRANGE_LOW, \
CAL_REF_RESISTANCE_LOWRANGE_HIGH, CAL_REF_RESISTANCE_HIRANGE_LOW, \
CAL_REF_RESISTANCE_HIRANGE_HIGH, \
dynamic_m_low_range, dynamic_b_low_range, dynamic_m_high_range, \
dynamic_b_high_range, CAL_FAIL
log_print("calibration() called")
# isr_enable()
temp_cath_model = CATH_MODEL
A = [0, 0]
x_list = []
y_list = []
SAMPLES_TO_TAKE = 120
samples_to_remove_cal = int(SAMPLES_TO_TAKE / 2)
sample_period = 0.004
CAL_FAIL = False
GPIO.output(B_DUT_MUX, GPIO.HIGH)
cal_resistances = [CAL_REF_RESISTANCE_LOWRANGE_HIGH,
CAL_REF_RESISTANCE_LOWRANGE_LOW,
CAL_REF_RESISTANCE_HIRANGE_HIGH,
CAL_REF_RESISTANCE_HIRANGE_LOW]
v_bias_cal = []
recalculate_dynamic_coefficients = False
log_print('\n\n===INITIATING CALIBRATION===\n')
for i in range(4):
voltage_samples = []
vbias_sample_buffer_cal = []
GPIO.output(A_DUT_MUX, A[1])
GPIO.output(CAL_RES_HI_RANGE_MUX, A[0])
GPIO.output(A_BIAS_MUX, A[0])
for sample in range(SAMPLES_TO_TAKE):
configure_adc(SINGLESHOT_VBIAS_CONFIG_REG)
while not CONVERSION_READY:
sleep(sample_period)
CONVERSION_READY = False
vbias_sample_buffer_cal.append(adc_decode(ADC_SAMPLE_LATEST))
vbias_sample_buffer_cal = vbias_sample_buffer_cal[
samples_to_remove_cal:]
V_BIAS = sum(vbias_sample_buffer_cal) / len(vbias_sample_buffer_cal)
log_print("V_BIAS = %f" % V_BIAS)
v_bias_cal.append(V_BIAS)
CATH_MODEL = A[0]
sleep(.35) # to allow the switches to settle
log_print('SAMPLING INTERNAL RESISTANCE...')
for sample in range(SAMPLES_TO_TAKE):
configure_adc(SINGLESHOT_CONFIG_REG)
while not CONVERSION_READY:
sleep(sample_period)
CONVERSION_READY = False
voltage_samples.append(adc_decode(ADC_SAMPLE_LATEST))
voltage_samples = voltage_samples[samples_to_remove_cal:]
avg_voltage = round(sum(voltage_samples) / len(voltage_samples), 3)
x_list.append(avg_voltage)
cal_res_msrmnt = round(volt2res(avg_voltage), 3)
y_list.append(cal_res_msrmnt)
if (abs(cal_res_msrmnt - cal_resistances[i]) /
cal_resistances[i]) > CAL_REF_RESISTANCE_TOL:
recalculate_dynamic_coefficients = True
log_print('avg_voltage:%.3f\navg_res:%.3f' % (
avg_voltage, cal_res_msrmnt))
for j in range(len(A) - 1, -1, -1):
if A[j] == 0:
A[j] = 1
break
A[j] = 0
if recalculate_dynamic_coefficients:
dynamic_m_low_range, dynamic_b_low_range, \
dynamic_m_high_range, dynamic_b_high_range = \
correction_fctr_calc(x_list, y_list)
log_print('dynamic_m_low_range, dynamic_b_low_range, '
'dynamic_m_high_range, dynamic_b_high_range:\n%f %f %f %f' %
(dynamic_m_low_range, dynamic_b_low_range,
dynamic_m_high_range, dynamic_b_high_range))
log_print("DYNAMIC CORRECTION FACTOR CALCULATED!")
# This condition limits how much the system can correct itself
# from the initial correction factors before triggering
# an engineer to recalibrate the system.
if (abs(dynamic_m_low_range) > 10) or (
abs(dynamic_b_low_range) > 10) or (
abs(dynamic_m_high_range) > 10) or (
abs(dynamic_b_high_range) > 10):
CAL_FAIL = True
A = [0, 0]
for i in range(4):
voltage_samples = []
GPIO.output(A_DUT_MUX, A[1])
GPIO.output(CAL_RES_HI_RANGE_MUX, A[0])
GPIO.output(A_BIAS_MUX, A[0])
V_BIAS = v_bias_cal[i]
CATH_MODEL = A[0]
log_print('SAMPLING INTERNAL RESISTANCES '
'WITH DYNAMIC CORRECTION FACTOR...')
for sample in range(SAMPLES_TO_TAKE):
configure_adc(SINGLESHOT_CONFIG_REG)
while not CONVERSION_READY:
sleep(sample_period)
CONVERSION_READY = False
voltage_samples.append(adc_decode(ADC_SAMPLE_LATEST))
voltage_samples = voltage_samples[samples_to_remove_cal:]
avg_voltage = round(sum(voltage_samples) / len(voltage_samples), 3)
error_magnitude_voltage = round(
max(voltage_samples) - min(voltage_samples), 3)
error_plus_minus_voltage = round(error_magnitude_voltage / 2, 3)
cal_res_msrmnt = round(volt2res(avg_voltage), 3)
error_magnitude_res = round(
volt2res(max(voltage_samples)) - volt2res(
min(voltage_samples)), 3)
error_plus_minus_res = round(error_magnitude_res / 2, 3)
cal_resistance_tolerance = round(
cal_resistances[i] * CAL_REF_RESISTANCE_TOL, 3)
log_print('avg_voltage:%.3f +/-%.3f\navg_res:%.3f +/-%.3f\n' % (
avg_voltage, error_plus_minus_voltage,
cal_res_msrmnt, error_plus_minus_res))
if (abs(cal_res_msrmnt - cal_resistances[i]) / cal_resistances[i])\
< CAL_REF_RESISTANCE_TOL:
log_print("Measured calibrated resistance passed\n")
else:
CAL_FAIL = True
log_print("Measured calibrated resistance out of tolerance\n")
log_print(
'Expected Ω: %.3fΩ\nCalculated Ω: %fΩ\n' % (
cal_resistances[i], cal_res_msrmnt))
log_print(
'resistance tolerance: %.3f\n' % cal_resistance_tolerance)
log_print('resistance error measured: %.3f\n\n' % (
cal_res_msrmnt - cal_resistances[i]))
if i == 3 and CAL_FAIL is False:
CAL_PASSED = True
GPIO.output(A_DUT_MUX, GPIO.LOW)
GPIO.output(B_DUT_MUX, GPIO.LOW)
CATH_MODEL = temp_cath_model
print('===CALIBRATION DONE===')
for j in range(len(A) - 1, -1, -1):
if A[j] == 0:
A[j] = 1
break
A[j] = 0
else:
GPIO.output(A_DUT_MUX, GPIO.LOW)
GPIO.output(B_DUT_MUX, GPIO.LOW)
CATH_MODEL = temp_cath_model
CAL_PASSED = True
log_print('Dynamic coefficients were not recalculated. '
'Calibration is still stable.')
def read_adc(gpionum):
global CATH_CONN_EMPTY, RES_SAMPLES_VALS, ACTIVE_SAMPLING, \
CATH_RES_SAMPLES_COLLECTED, TEST_FINISHED, CATH_MODEL, \
ADC_SAMPLE_LATEST, CATH_DETECTED_SAMPLES_COLLECTED, \
CATH_DISCONN_SAMPLES_COLLECTED, end_job, SHOW_LAST_CATH_GUI_MSG, \
CONVERSION_READY, MEASURE_VBIAS, VBIAS_SAMPLES_BUFFER, V_BIAS, \
report_filename
ADC_SAMPLE_LATEST = i2cbus.read_i2c_block_data(I2C_DEV_ADDR,
REG_CONVERSION_ADDR, 2)
CONVERSION_READY = True
avg_voltage = adc_decode(ADC_SAMPLE_LATEST)
if CATH_CONN_EMPTY and CAL_PASSED and (
not ACTIVE_SAMPLING) and not EXTERNAL_CAL_INPROGRESS:
if CATH_DETECTED_SAMPLES_COLLECTED < CATH_DETECTED_SAMPLES_REQUIRED:
if avg_voltage < CATH_DETECTED_VOLTAGE:
CATH_DETECTED_SAMPLES_COLLECTED += 1
else:
CATH_DETECTED_SAMPLES_COLLECTED = 0
else:
configure_measurement()
MEASURE_VBIAS = True
CATH_CONN_EMPTY = False
CATH_DETECTED_SAMPLES_COLLECTED = 0
log_print("CATHETER DETECTED. STARTING TEST...")
elif MEASURE_VBIAS:
if len(VBIAS_SAMPLES_BUFFER) < 100:
VBIAS_SAMPLES_BUFFER.append(avg_voltage)
configure_adc(SINGLESHOT_VBIAS_CONFIG_REG)
# print('collecting vbias samples:',avg_voltage)
else:
MEASURE_VBIAS = False
VBIAS_SAMPLES_BUFFER = VBIAS_SAMPLES_BUFFER[50:]
V_BIAS = sum(VBIAS_SAMPLES_BUFFER) / len(VBIAS_SAMPLES_BUFFER)
VBIAS_SAMPLES_BUFFER = []
ACTIVE_SAMPLING = True
log_print('VBIAS = %f' % V_BIAS)
configure_adc(CONTINUOUS_CONFIG_REG)
elif ACTIVE_SAMPLING:
if CATH_RES_SAMPLES_COLLECTED < SAMPLES_REQUIRED:
RES_SAMPLES_VALS.append(avg_voltage)
CATH_RES_SAMPLES_COLLECTED += 1
else:
log_print("Finished logging test samples")
ACTIVE_SAMPLING = False
catheter_test()
if TEST_FINISHED:
if MEASURE_OUTPUT_RESISTANCE:
TEST_FINISHED = False
MEASURE_VBIAS = True
CATH_RES_SAMPLES_COLLECTED = 0
RES_SAMPLES_VALS = []
configure_measurement()
log_print("Measuring output resistance now...")
else:
if CATH_DISCONN_SAMPLES_COLLECTED < CATH_DETECTED_SAMPLES_REQUIRED:
if avg_voltage > CATH_DETECTED_VOLTAGE:
CATH_DISCONN_SAMPLES_COLLECTED += 1
else:
CATH_DISCONN_SAMPLES_COLLECTED = 0
else:
log_print("CATHETER DISCONNECTED. WRITING REPORT...")
TEST_FINISHED = False
CATH_CONN_EMPTY = True
CATH_RES_SAMPLES_COLLECTED = 0
CATH_DISCONN_SAMPLES_COLLECTED = 0
RES_SAMPLES_VALS = []
if catheters_processed == JOB_SIZE:
report_filename = write_to_report(sort_data())
end_job = True
def catheter_test():
global TEST_FINISHED, RES_SAMPLES_VALS, MEASURE_OUTPUT_RESISTANCE, \
catheters_processed, cath_data_dict_unsorted_buffer, avg_res, \
GUI_CURRENT_BARCODE, GUI_CURRENT_CATH_RESULT, SHOW_LAST_CATH_GUI_MSG, \
REPEATED_CATHETER_DETECTED, MANUAL_CATH_BARCODE_CAPTURE, \
test_result_buffer
log_print("catheter_test() called")
RES_SAMPLES_VALS = RES_SAMPLES_VALS[SAMPLES_TO_REMOVE:]
avg_voltage = sum(RES_SAMPLES_VALS) / len(RES_SAMPLES_VALS)
unrepeated_cath = False
repeated_catheter = True
if MEASURE_OUTPUT_RESISTANCE:
avg_res[1] = round(volt2res(avg_voltage), 2)
avg_res_dut = avg_res[1]
if abs(avg_res_dut - MODEL_RES) < MODEL_TOL:
test_result_buffer[1] = "PASS"
else:
test_result_buffer[1] = "FAIL"
log_print("Test Result:%s" % test_result_buffer[1])
else:
avg_res[0] = round(volt2res(avg_voltage), 2)
avg_res_dut = avg_res[0]
if abs(avg_res_dut - MODEL_RES) < MODEL_TOL:
test_result_buffer[0] = "PASS"
else:
test_result_buffer[0] = "FAIL"
log_print("Test Result:%s" % test_result_buffer[0])
log_print("Average resistance read:%f" % avg_res_dut)
if MEASURE_OUTPUT_RESISTANCE:
barcode, code_val = barcode_scanner()
if test_result_buffer[0] == "FAIL" or test_result_buffer[1] == "FAIL":
test_result = "FAIL"
else:
test_result = "PASS"
if bool(barcode) is False:
# Trigger main thread to change
# GUI to capture catheter SN manually.
MANUAL_CATH_BARCODE_CAPTURE = True
# This loop simply waits for the
# main thread to do the GUI operation.
# Reason is, Tkinter does not like working in multiple threads.
barcode = waiting_for_manual_barcode()
code_val = validate_catheter_barcode_scan(barcode)
if barcode.strip() in cath_data_dict_unsorted_buffer:
current_catheter_data = cath_data_dict_unsorted_buffer[
barcode.strip()]
current_catheter_data[4] = repeated_catheter
REPEATED_CATHETER_DETECTED = True
log_print("REPEATED CATHETER, UPDATING REPORT FOR %s" % barcode)
else:
if (avg_res[0] > RANGE_LIMIT_HIGH) or (
avg_res[0] < RANGE_LIMIT_LOW):
avg_res[0] = 9999
if (avg_res[1] > RANGE_LIMIT_HIGH) or (
avg_res[1] < RANGE_LIMIT_LOW):
avg_res[1] = 9999
cath_data_dict_unsorted_buffer[barcode.strip()] = [test_result,
avg_res[0],
avg_res[1],
code_val,
unrepeated_cath]
catheters_processed += 1
if catheters_processed < JOB_SIZE:
log_print("\n\n---READY FOR NEXT CATHETER---\n\n")
elif catheters_processed == JOB_SIZE:
log_print("LAST CATHETER OF JOB. UNPLUG TO PRINT REPORT.")
SHOW_LAST_CATH_GUI_MSG = True
GUI_CURRENT_BARCODE = barcode.strip()
GUI_CURRENT_CATH_RESULT = test_result
MEASURE_OUTPUT_RESISTANCE = False
else:
MEASURE_OUTPUT_RESISTANCE = True
TEST_FINISHED = True
def loop():
global catheters_processed, CATH_MODEL, \
JOB_SIZE, JOB_NUMBER, CAL_PASSED, \
end_job, cath_data_dict_unsorted_buffer, \
MODEL_SELECTED, CURRENT_PROCESS_MESSAGE, \
SHOW_LAST_CATH_GUI_MSG, REPEATED_CATHETER_DETECTED, \
KEYPAD_CATH_BARCODE, MANUAL_BARCODE_CAPTURED, \
MANUAL_CATH_BARCODE_CAPTURE, CATH_RES_SAMPLES_COLLECTED, \
PLAY_SOUND, PROCESS_MESSENGER_FONT, EXTERNAL_CAL_INPROGRESS, \
EXTERNAL_CAL_MEASURED_RES_VALS, EXTERNAL_CAL_USER_ENTERED_RES_VALS, \
dynamic_m_low_range, dynamic_b_low_range, dynamic_m_high_range, \
dynamic_b_high_range, initial_b_low_range, initial_m_low_range, \
initial_b_high_range, initial_m_high_range, \
CAL_REF_RESISTANCE_LOWRANGE_LOW, CAL_REF_RESISTANCE_LOWRANGE_HIGH, \
CAL_REF_RESISTANCE_HIRANGE_LOW, \
CAL_REF_RESISTANCE_HIRANGE_HIGH, CAL_FAIL
first_iteration = True
jobsize_valid = False
jobnumber_valid = False
reset_mouse_position()
first_job_done = False
res_measurements = []
display_cal_check_measurements = False
x_list = [[], []]
cal_res_fixture_count = 0
admin_logon_attempt = False
notify_user_cath_detected = True
notify_user_test_result = True
cath_data_dict_unsorted_buffer = {}
while True:
while True:
event, values = window.read()
f8_btn1_txt = frame8_button1_text.get_text().strip()
f8_btn2_txt = frame8_button2_text.get_text().strip()
if event in (sg.WINDOW_CLOSED, 'Exit'):
break
elif event == ADMIN_FUNCS_KEY:
log_print("USER PULLED UP ADMIN PASSWORD REQUEST PAGE")
gui_frame_msngr_update(3, ENTER_PW_MESSAGE)
admin_logon_attempt = True
elif event == 'RE-PRINT':
log_print('=USER ATTEMPTED TO RE-PRINT=')
if first_job_done:
reset_mouse_position()
print_report(report_filename)
else:
frame8_button1_text.update('BACK')
frame8_button2_text.update('')
gui_frame_msngr_update(8, RUN_FIRST_JOB_FIRST)
elif event == "TERMINATE SCRIPT":
terminate_script()
elif event == "REBOOT":
reboot_system()
elif event == "SHUTDOWN":
shutdown_system()
elif event == 'CALIBRATE':
CAL_FAIL = False
log_print("=USER PRESSED 'CALIBRATE'=")
reset_mouse_position()
GPIO.output(B_DUT_MUX, GPIO.LOW)
EXTERNAL_CAL_INPROGRESS = True
frame8_button1_text.update('CHECK LOW RANGE')
frame8_button2_text.update('BACK')
log_print("=INSTRUCTING USER TO MEASURE AND "
"INSERT CALIBRATION RESISTANCES=")
gui_frame_msngr_update(8, CAL_INSERT_FIXTURE_MESSAGE)
elif event == BUTTON_MESSAGE_KEY1 and \
f8_btn1_txt == 'CHECK LOW RANGE' and \
not display_cal_check_measurements:
reset_mouse_position()
log_print("=USER PRESSED 'CHECK LOW RANGE'=")
gui_frame_msngr_update(2, CHECK_CAL_MESSAGE)
res_measurements, temp_hold = measure_single_catheter('LOW')
window.write_event_value('display_measurements', '')
elif event == BUTTON_MESSAGE_KEY1 and \
f8_btn1_txt == 'CHECK HIGH RANGE' and \
not display_cal_check_measurements:
log_print("=USER PRESSED 'CHECK HIGH RANGE'=")
gui_frame_msngr_update(2, CHECK_CAL_MESSAGE)
res_measurements, temp_hold = measure_single_catheter('HIGH')
window.write_event_value('display_measurements', '')
elif event == BUTTON_MESSAGE_KEY1 and \
f8_btn1_txt == 'APPROVE & EXIT':
log_print("=USER PRESSED 'APPROVE & EXIT'=")
reset_mouse_position()
EXTERNAL_CAL_INPROGRESS = False
EXTERNAL_CAL_MEASURED_RES_VALS = [[], []]
EXTERNAL_CAL_USER_ENTERED_RES_VALS = [0, 0, 0, 0]
x_list = [[], []]
gui_frame_msngr_update(1)
elif event == BUTTON_MESSAGE_KEY2 and \
f8_btn2_txt == 'REDO CAL REF':
reset_mouse_position()
initial_b_low_range, initial_m_low_range = 0, 0
initial_b_high_range, initial_m_high_range = 0, 0
dynamic_m_low_range, dynamic_b_low_range = 0, 0
dynamic_m_high_range, dynamic_b_high_range = 0, 0
cal_res_count = 0
cal_res_fixture_count = 0
cal_keypad_messager_window.update(
CAL_KEYPAD_PROMPT_MESSAGES[cal_res_count])
gui_frame_msngr_update(7)
elif event == BUTTON_MESSAGE_KEY1 and \
f8_btn1_txt == 'CAPTURE REF RESISTANCES':
gui_frame_msngr_update(2, CAPTURE_CAL_RES_MESSAGE)
EXTERNAL_CAL_MEASURED_RES_VALS[cal_res_fixture_count], x_list[
cal_res_fixture_count] = \
measure_single_catheter(
'LOW' if cal_res_fixture_count == 0 else 'HIGH')
cal_res_fixture_count += 1
if cal_res_fixture_count > 1:
gui_frame_msngr_update(2, RECALCULATE_REF_RES_MESSAGE)
ext_cal_measured_vals_f = list(
chain.from_iterable(EXTERNAL_CAL_MEASURED_RES_VALS))
x_list_f = list(chain.from_iterable(x_list))
CAL_REF_RESISTANCE_LOWRANGE_HIGH = \
EXTERNAL_CAL_USER_ENTERED_RES_VALS[1]
CAL_REF_RESISTANCE_LOWRANGE_LOW = \
EXTERNAL_CAL_USER_ENTERED_RES_VALS[0]
CAL_REF_RESISTANCE_HIRANGE_HIGH = \
EXTERNAL_CAL_USER_ENTERED_RES_VALS[3]
CAL_REF_RESISTANCE_HIRANGE_LOW = \
EXTERNAL_CAL_USER_ENTERED_RES_VALS[2]
initial_m_low_range, initial_b_low_range, \
initial_m_high_range, initial_b_high_range = \
correction_fctr_calc(x_list_f, ext_cal_measured_vals_f)
GPIO.output(B_DUT_MUX, GPIO.HIGH)
sleep(.3)
low_range_recalculated_interal_resistances, temp_hold = \
measure_single_catheter('LOW')
GPIO.output(CAL_RES_HI_RANGE_MUX, GPIO.HIGH)
sleep(.3)
high_range_recalculated_interal_resistances, temp_hold = \
measure_single_catheter('HIGH')
print('%s\n%s\n%s\n%s\n' % (
initial_m_low_range, initial_b_low_range,
initial_m_high_range, initial_b_high_range))
print('%s\n%s\n%s\n%s\n' % (
low_range_recalculated_interal_resistances[1],
low_range_recalculated_interal_resistances[0],
high_range_recalculated_interal_resistances[1],
high_range_recalculated_interal_resistances[0]
))
with open('initial_correction_factor.txt', 'w') as ff:
ff.write('%s\n%s\n%s\n%s\n' % (
initial_m_low_range, initial_b_low_range,
initial_m_high_range, initial_b_high_range))
ff.write('%s\n%s\n%s\n%s\n' % (
low_range_recalculated_interal_resistances[1],
low_range_recalculated_interal_resistances[0],
high_range_recalculated_interal_resistances[1],
high_range_recalculated_interal_resistances[0]
))
CAL_REF_RESISTANCE_LOWRANGE_HIGH = \
low_range_recalculated_interal_resistances[0]
CAL_REF_RESISTANCE_LOWRANGE_LOW = \
low_range_recalculated_interal_resistances[1]
CAL_REF_RESISTANCE_HIRANGE_HIGH = \
high_range_recalculated_interal_resistances[0]
CAL_REF_RESISTANCE_HIRANGE_LOW = \
high_range_recalculated_interal_resistances[1]
GPIO.output(CAL_RES_HI_RANGE_MUX, GPIO.LOW)
GPIO.output(B_DUT_MUX, GPIO.LOW)
frame8_button1_text.update('CHECK LOW RANGE')
frame8_button2_text.update('')
res_measurements = []
reset_mouse_position()
gui_frame_msngr_update(8, CAL_RE_INSRT_LOW_RANGE_FIXTR_MSG)
else:
reset_mouse_position()
gui_frame_msngr_update(8, CAL_CURRENT_PROCESS_MESSAGES[
cal_res_fixture_count])
elif event == 'display_measurements':
log_print("Displaying calibration check values...")
reset_mouse_position()
if f8_btn1_txt == 'CHECK LOW RANGE':
frame8_button1_text.update('CHECK HIGH RANGE')
frame8_button2_text.update('')
elif f8_btn1_txt == 'CHECK HIGH RANGE':
frame8_button1_text.update('APPROVE & EXIT')
frame8_button2_text.update('REDO CAL REF')
gui_frame_msngr_update(8,
show_calcheck_results(
res_measurements, f8_btn1_txt,
cal_res_fixture_count))
display_cal_check_measurements = False
elif event == 'cal_Submit':
EXTERNAL_CAL_USER_ENTERED_RES_VALS[cal_res_count] = float(
values['calinput'])
window['calinput'].update('')
reset_mouse_position()
cal_res_count += 1
if cal_res_count > 3:
gui_frame_msngr_update(8, CAL_CURRENT_PROCESS_MESSAGES[
cal_res_fixture_count])
frame8_button1_text.update('CAPTURE REF RESISTANCES')
frame8_button2_text.update('')
else:
cal_keypad_messager_window.update(
CAL_KEYPAD_PROMPT_MESSAGES[cal_res_count])
elif event == 'cal_Clear':
log_print("=USER PRESSED 'CLEAR' in CAL KEYPAD")
reset_mouse_position()
cal_keys_entered = ''
window['calinput'].update(cal_keys_entered)
elif event == 'cal_BACK':
initial_m_low_range = float(lines[0].strip())
initial_b_low_range = float(lines[1].strip())
initial_m_high_range = float(lines[2].strip())
initial_b_high_range = float(lines[3].strip())
CAL_REF_RESISTANCE_LOWRANGE_LOW = float(lines[4].strip())
CAL_REF_RESISTANCE_LOWRANGE_HIGH = float(lines[5].strip())
CAL_REF_RESISTANCE_HIRANGE_LOW = float(lines[6].strip())
CAL_REF_RESISTANCE_HIRANGE_HIGH = float(lines[7].strip())
log_print("=USER PRESSED 'BACK' WHEN "
"IN THE REDO REF CAL KEYPAD=")
reset_mouse_position()
gui_frame_msngr_update(1)
cal_keys_entered = ''
window['calinput'].update(cal_keys_entered)
elif 'cal_' in event:
event_name = event.split('cal_')
reset_mouse_position()
cal_keys_entered = values[
'calinput'] # get what's been entered so far
cal_keys_entered += event_name[1] # add the new digit
window['calinput'].update(cal_keys_entered)
elif (event == BUTTON_MESSAGE_KEY1 and f8_btn1_txt == 'BACK') or (
event == BUTTON_MESSAGE_KEY2 and f8_btn2_txt == 'BACK'):
log_print("=USER PRESSED 'BACK' IN CALIBRATE=")
admin_logon_attempt = False
reset_mouse_position()
EXTERNAL_CAL_INPROGRESS = False
gui_frame_msngr_update(1)
elif event == KEYPAD_BACK_BTN_KEY:
if admin_logon_attempt:
admin_logon_attempt = False
log_print("=USER PRESSED 'BACK' ON "
"KEYPAD WHEN PROMPTED FOR ADMIN PW")
gui_frame_msngr_update(1)
if JOB_SIZE == 0:
log_print("=USER PRESSED 'BACK' ON KEYPAD WHEN PROMPTED TO"
" ENTER JOB SIZE OR WHEN IN "
"THE REDO REF CAL KEYPAD=")
gui_frame_msngr_update(1)
else:
log_print("=USER PRESSED 'BACK' ON KEYPAD WHEN "
"PROMPTED FOR JOB NUMBER=")
JOB_SIZE = 0
JOB_NUMBER = 0
window['keypad_message'].update(JOBSIZE_KEYPAD_MESSAGE)
keys_entered = ''
window['input'].update(keys_entered)
reset_mouse_position()
elif event == 'P16x' or event == 'P330' or event == 'P330B':
log_print("=USER SELECTED %s AS THE MODEL=" % event)
# window['keypad_message'].update(JOBSIZE_KEYPAD_MESSAGE)
MODEL_SELECTED = event
model_capture()
gui_frame_msngr_update(3, JOBSIZE_KEYPAD_MESSAGE)
elif event in '1234567890':
log_print("=USER PRESSED %s ON THE KEYPAD=" % event)
reset_mouse_position()
keys_entered = values[
'input'] # get what's been entered so far
keys_entered += event # add the new digit
window['input'].update(keys_entered)
elif event == 'Submit':
reset_mouse_position()
keys_entered = values['input']
window['input'].update('')
if admin_logon_attempt:
if keys_entered == ADMIN_PASSWORD:
gui_frame_msngr_update(9, get_ip())
else:
window['keypad_message'].update(INVALID_PW_MESSAGE)
else:
if JOB_SIZE == 0:
log_print("=USER PRESSED 'SUBMIT' TO RECORD JOB SIZE=")
jobsize_valid = jobsize_capture(keys_entered)
if jobsize_valid:
log_print("=USER WAS PROMPTED TO "
"SCAN JOB NUMBER BARCODE=")
gui_frame_msngr_update(6,
JOBNUMBER_SCAN_MESSAGE)
jobnumber_valid = jobnumber_capture()
if not jobnumber_valid:
log_print("USER WAS PROMPTED TO "
"ENTER JOB NUMBER MANUALLY")
keys_entered = ''
window['input'].update('')
gui_frame_msngr_update(3, JOBNUM_KEYPAD_MSG)
else:
window['keypad_message'].update(
JOBSIZE_INVALID_MESSAGE)
keys_entered = ''
JOB_SIZE = 0
if jobsize_valid and jobnumber_valid:
log_print(
"VALID JOB SIZE AND JOB NUMBERS WERE RECORDED")
break
elif not jobnumber_valid:
log_print(
"=USER PRESSED 'SUBMIT' TO RECORD JOB NUMBER=")
jobnumber_valid = validate_job_number_barcode_scan(
keys_entered)
if jobnumber_valid:
log_print(
"VALID JOB SIZE AND JOB NUMBERS WERE RECORDED")
JOB_NUMBER = keys_entered
break
else:
window['keypad_message'].update(
JOBNUMBER_INVALID_MESSAGE)
keys_entered = ''
window['input'].update(keys_entered)
elif event == 'Clear':
log_print("=USER PRESSED 'CLEAR'")
reset_mouse_position()
keys_entered = ''
window['input'].update(keys_entered)
while True:
if jobsize_valid and jobnumber_valid and \
not CAL_PASSED and not CAL_FAIL:
log_print(
"USER WAS NOTIFIED THAT CALIBRATION WAS BEING PERFORMED")
gui_frame_msngr_update(2, CAL_PERFORMING_MESSAGE)
calibration()
elif CAL_FAIL and not CAL_PASSED:
log_print("USER WAS NOTIFIED THAT CALIBRATION FAILED")
dynamic_m_low_range, dynamic_b_low_range, \
dynamic_m_high_range, dynamic_b_high_range = 0, 0, 0, 0
frame8_button1_text.update('')
frame8_button2_text.update('')
gui_frame_msngr_update(8, CAL_FAIL_MESSAGE)
sleep(10)
log_print("Calibration failed. "
"An engineer will come troubleshoot the system.")
SHOW_LAST_CATH_GUI_MSG = False
catheters_processed = 0
cath_data_dict_unsorted_buffer = {}
CATH_MODEL = -1
JOB_SIZE = 0
JOB_NUMBER = 0
CATH_RES_SAMPLES_COLLECTED = 0
CAL_PASSED = False
first_iteration = True
end_job = False
reset_mouse_position()
gui_frame_msngr_update(1)
notify_user_cath_detected = True
notify_user_test_result = True
break
elif CAL_PASSED and first_iteration:
# isr_enable()
configure_adc(CONTINUOUS_CONFIG_REG)
first_iteration = False
log_print("USER WAS PROMPTED TO INSERT "
"THE FIRST CATHETER OF THE JOB")
gui_frame_msngr_update(2, FIRST_CATH_MESSAGE)
elif not CATH_CONN_EMPTY and ACTIVE_SAMPLING and \
CATH_DETECTED_SAMPLES_COLLECTED == 0 and \
notify_user_cath_detected:
notify_user_cath_detected = False
notify_user_test_result = True
REPEATED_CATHETER_DETECTED = False # flag reset purposes only
PLAY_SOUND = True # flag reset purposes only
log_print("USER WAS NOTIFIED THAT THE CATHETER WAS DETECTED")
gui_frame_msngr_update(2, CATH_DETECTED_MESSAGE)
elif MANUAL_CATH_BARCODE_CAPTURE:
KEYPAD_CATH_BARCODE = manual_catheter_barcode_entry()
keypad_back_btn.update('BACK')
MANUAL_BARCODE_CAPTURED = True
MANUAL_CATH_BARCODE_CAPTURE = False
elif ((TEST_FINISHED and not MEASURE_OUTPUT_RESISTANCE and
not SHOW_LAST_CATH_GUI_MSG and not end_job) or
(catheters_processed == JOB_SIZE and
SHOW_LAST_CATH_GUI_MSG and
not end_job)) and notify_user_test_result:
notify_user_test_result = False
frame_to_see = 4 if GUI_CURRENT_CATH_RESULT == "PASS" else 5
log_print(
"USER WAS NOTIFIED THAT %s TEST RESULT IS: %s" % (
GUI_CURRENT_BARCODE, GUI_CURRENT_CATH_RESULT))
gui_frame_msngr_update(frame_to_see, show_results(
GUI_CURRENT_CATH_RESULT, GUI_CURRENT_BARCODE))
# resetting flag for next possible catheter
notify_user_cath_detected = True
if PLAY_SOUND:
audio_feedback(GUI_CURRENT_CATH_RESULT)
pass
elif catheters_processed == JOB_SIZE and end_job:
log_print("Job finished. Resetting flags and printing report.")
SHOW_LAST_CATH_GUI_MSG = False
catheters_processed = 0
cath_data_dict_unsorted_buffer = {}
CATH_MODEL = -1
JOB_SIZE = 0
JOB_NUMBER = 0
CATH_RES_SAMPLES_COLLECTED = 0
CAL_PASSED = False
first_iteration = True
end_job = False
reset_mouse_position()
gui_frame_msngr_update(1)
first_job_done = True
notify_user_cath_detected = True
notify_user_test_result = True
print_report(report_filename)
break
sleep(.05)
window.close()
alrt_rdy_func_enable()
isr_enable()
no_blank_screen()
loop()
Answer: First impressions:
- way too much code
- not enough comments, if any
- needs refactoring to achieve better separation of concerns
Unfortunately I can't test it, and to be honest I don't understand much of it, so I will focus mainly on style.
There are many different things going on in this program, so I think your priority should be to break it up in small pieces to make it more comprehensible and more maintainable. It's difficult to maintain long code, when you have to scroll a lot and don't have a clear overview of code because it spreads along so many lines.
The first thing that is obvious is that you missed the logging module.
I recommend you start using it from now on because it is more flexible and there is no need to reinvent the wheel. I always use it for my projects and I like to write to console and to file at the same time (at different levels eg DEBUG for file and INFO for console), because it's easier to troubleshoot unattended applications when you have a permanent log file available.
But it's already good that you are using some logging in your program because it makes it easier to trace the execution flow.
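To make this concrete, here is a minimal sketch (names are illustrative, not from your code) of what replacing log_print() with the logging module could look like, with DEBUG and above going to a file and INFO and above to the console:

```python
import logging

# One logger, two handlers at different levels: the file keeps
# everything for troubleshooting, the console stays readable.
logger = logging.getLogger("catheter_tester")
logger.setLevel(logging.DEBUG)

file_handler = logging.FileHandler("tester.log", mode="w")
file_handler.setLevel(logging.DEBUG)
console_handler = logging.StreamHandler()
console_handler.setLevel(logging.INFO)

fmt = logging.Formatter("%(asctime)s %(levelname)s %(message)s")
file_handler.setFormatter(fmt)
console_handler.setFormatter(fmt)
logger.addHandler(file_handler)
logger.addHandler(console_handler)

logger.debug("written to the file only")
logger.info("written to both file and console")
```

Every existing log_print(...) call then becomes a logger.debug(...) or logger.info(...) call, and you get timestamps and levels for free.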
The shebang should be on the first line:
#!/usr/bin/env python3
from module import * should be avoided; just import what you need. Why: namespace pollution and possibly overriding existing functions, which may cause nasty bugs.
sys.path.append is something one should never do. This may be convenient when writing some test code but there are ways in Python to load libraries dynamically if the need arises. When you see this it usually means the application is poorly structured (files, directories) or the virtualenv is not setup right.
The PDF class does not seem to be used presently, so remove it from your code along with the unused imports. Declutter your file as much as you can. Anything you don't use is unneeded distraction.
If you really need PDF generation capabilities, then move your class code to a separate file, and import it accordingly.
Then we have a couple of variables related to sound, which is yet another type of functionality:
FAIL_SOUND = 'fail.mp3'
PASS_SOUND = 'pass.mp3'
PLAY_SOUND = True
At this point, it becomes clear that a separate configuration file should be used to contain those settings. Then they can be customized without touching the code itself and risking inadvertently breaking things.
In Python there is the classic configparser module but I prefer YAML files personally. Here you can have a look at some options.
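As a sketch of the configparser approach (the section and key names are just an example), the sound settings could move into an INI file like this:

```python
import configparser

# Contents of a hypothetical settings.ini, inlined here so the
# example is self-contained.
CONFIG_TEXT = """
[sound]
fail_sound = fail.mp3
pass_sound = pass.mp3
play_sound = yes
"""

config = configparser.ConfigParser()
config.read_string(CONFIG_TEXT)  # in the real app: config.read("settings.ini")

FAIL_SOUND = config.get("sound", "fail_sound")
PASS_SOUND = config.get("sound", "pass_sound")
PLAY_SOUND = config.getboolean("sound", "play_sound")  # "yes"/"no" -> bool
```

Now the sound files can be swapped, or sound disabled, by editing the INI file rather than the script.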
Then comes the meat. Next function names are set_bias_mux_to_low_range, set_bias_mux_to_hi_range, set_dut_mux_to_input_res, set_dut_mux_to_output_res etc. I have no idea what they do. You might want to comment these functions using docstrings. If I reason purely in terms of functionality and without even understanding your code, it seems to me that the functions related to GPIO deserve to be put in a separate file.
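A side benefit of moving the GPIO helpers into their own module and passing the GPIO interface in explicitly (a sketch with made-up pin numbers, not your actual wiring) is that they become documentable and testable off-target:

```python
# Hypothetical gpio_control.py: each helper gets a docstring and takes
# the GPIO interface and pin explicitly instead of relying on globals.
def set_bias_mux_to_low_range(gpio, pin):
    """Route the bias multiplexer select line to the low range."""
    gpio.output(pin, gpio.LOW)

class FakeGPIO:
    """Stand-in for RPi.GPIO so the helpers can run without hardware."""
    LOW, HIGH = 0, 1

    def __init__(self):
        self.pins = {}

    def output(self, pin, level):
        self.pins[pin] = level

# Exercise the helper against the fake instead of real hardware.
gpio = FakeGPIO()
set_bias_mux_to_low_range(gpio, 17)
```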
Next function: reset_mouse_position. OK, mouse manipulation. I would externalize this part as well. Possibly into a UI class of some sort.
Next: no_blank_screen. I suspect there is feature creep here. This is something that I would rather do outside Python. Possibly in a .bashrc file. Or straight into the Xorg configuration. If you're on Linux, that is. You could also create a launcher for your program. But if you're running a graphical environment, it seems sensible to adapt your power management options, or session preferences if you're concerned about screen lock. In short: this is outside the purview of your application.
Next:
def show_calcheck_results(measurements, button_text, cal_reset):
log_print("show_calcheck_results([%.3f,%.3f], %s)" % (
measurements[0], measurements[1], button_text))
if button_text == 'CHECK LOW RANGE':
add_string = 'INSERT HIGH-RANGE\n' \
'RESISTANCE FIXTURE AND PRESS\n' \
'\'CHECK HIGH RANGE\''
This does not sound right. If you click on a button, it should call a function with appropriate arguments. Not the other way round. You normally don't write a function that checks which button was clicked by looking at the button caption (which may change). Or it's possible I misunderstand your code because I'm not familiar with pyautogui and you're automating key strokes perhaps. I'd need to look more in depth then. But in GUIs such as GTK, QT etc you work with event handlers that you attach to controls such as buttons.
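One way to get closer to the handler style (a sketch with invented handler names, since I can't test against PySimpleGUI here) is to dispatch events through a table instead of a long elif chain keyed on button captions:

```python
# Each control gets a dedicated handler; the event loop only looks up
# the handler and calls it with the current form values.
def on_calibrate(values):
    return "calibration started"

def on_reprint(values):
    return "re-printing report"

HANDLERS = {
    "CALIBRATE": on_calibrate,
    "RE-PRINT": on_reprint,
}

def dispatch(event, values):
    handler = HANDLERS.get(event)
    return handler(values) if handler else None
```

Adding a new button then means adding one function and one table entry, rather than growing the elif ladder.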
Next: alrt_rdy_func_enable. It was not worth abbreviating the function name - you've saved just 3 characters. Make it more explicit: ideally a function name should already give some clue about its purpose. Admittedly I am a noob in this stuff, but maybe the name could be more intuitive? You're handling I2C here, by the way, so I guess it goes with the GPIO stuff.
I could go on and on but there is misuse of global variables. Use function arguments where appropriate instead. When the number of arguments is unknown or variable you can use **kwargs. But global variables are seldom needed or justified really. And of course they are tricky: when you have global variables that can change value in many code paths, this is going to make debugging difficult. It's something you want to avoid. Variable scope should be limited as much as possible. Function scope is usually sufficient, and arguments can be passed from one function to another.
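As a sketch of the alternative (field names are illustrative, not a drop-in for your globals), the mutable measurement state can live in one object that is passed explicitly:

```python
from dataclasses import dataclass, field

# One state object instead of a dozen module-level globals mutated
# from many code paths.
@dataclass
class MeasurementState:
    v_bias: float = 0.0
    cal_passed: bool = False
    samples: list = field(default_factory=list)

def record_sample(state, voltage):
    # Mutates only the state it was handed, so the data flow is
    # visible at every call site.
    state.samples.append(voltage)

state = MeasurementState(v_bias=1.25)
record_sample(state, 0.73)
record_sample(state, 0.74)
```

Every function signature now documents exactly which state it touches, which is precisely what makes debugging easier.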
Some values are undefined for example RGAIN1 at line 327:
transfer_func = RGAIN1 * ((10 * ((voltage - V_BIAS) * tfco + V_BIAS)) - 1)
So I would guess it's defined in hw_init, where you've used star import.
Then do: from hw_init import RGAIN1 etc. Again, a config file may be more appropriate, if these are not hard values that will seldom change. My IDE (Pycharm) highlights the variables that are undefined or that cannot be resolved and there are quite a few. Declaring them in your imports would be beneficial because you can directly access their definition. This also reduces the risk of typos.
Coming next: barcode stuff. Unsurprisingly I might suggest a dedicated file for this. It looks like you're doing serial read/write but I don't see the relevant imports. You're saying the application works but maybe not in all code paths.
Let's have a look at one barcode function:
def validate_catheter_barcode_scan(x):
log_print("validate_catheter_barcode_scan(%s) called" % x)
try:
if 5 < len(x) < 10:
code_validity = True
else:
code_validity = False
except TypeError:
code_validity = False
log_print("validate_catheter_barcode_scan() returning")
return code_validity
Since you're merely returning a boolean value it can be shortened to:
def validate_catheter_barcode_scan(value):
return 5 < len(value) < 10
I have omitted logging and exception handling. Instead of TypeError I would probably use ValueError. In function barcode_scanner I would just return the raw results, rather than call validate_catheter_barcode_scan within that same function. Because you may want to handle different types of barcodes and apply different forms of validation. Leave room for flexibility.
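A sketch of that separation (the job-number rule here is invented, purely to show the shape): the scanner returns the raw string, and validation is a distinct, swappable step per barcode type:

```python
# Validation rules live apart from the scanning code, one per
# barcode type, so new types just add an entry to the table.
def validate_catheter_barcode(value):
    try:
        return 5 < len(value) < 10
    except TypeError:
        return False

def validate_job_number(value):
    # Illustrative rule only: six digits.
    return value.isdigit() and len(value) == 6

VALIDATORS = {
    "catheter": validate_catheter_barcode,
    "job": validate_job_number,
}

def validate(kind, raw):
    return VALIDATORS[kind](raw)
```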
Coming next: sort_data. The name is too broad to be meaningful. I suppose sort_catheter_data would then be more expressive.
correction_value: what are we correcting and why? No idea. Maybe this is important. But since the function appears to be unused anyway, remove it or stash it somewhere.
When you've done all that, review the remaining functions: calibration, read_adc, catheter_test and loop since they are the biggest chunks of code. A loop shouldn't be that long. There are lots of variables, not always aptly named so this part is a bit overwhelming. Sadly, many variables are just unneeded. They are there merely to solve control flow problems.
In addition to comments, there has to be more line spacing. That would make the code less of a chore to read. Code that is tedious to read and difficult to understand is harder to maintain because it takes more effort.
The other thing that strikes me is the nested ifs - the longer the block, the higher the risk of wrong indentation, causing bugs. Logic errors are prone to happen here. You can "denest" by using the early return pattern instead: illustration (not in Python) but applies to any language.
I think that denesting should be a priority, because you are inevitably going to add more code, and the code will become even more difficult to comprehend. It will take a lot of scrolling. And even a large screen won't suffice to show the whole block so we can have a block-level overview. As said already, indentation is critical in Python. It's very easy to break the logic when you have big chunks of if blocks, nested with other ifs or loops.
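Here is the early-return pattern in Python, applied to a toy version of the kind of check read_adc() does (the function names and logic are simplified stand-ins, not your actual code):

```python
# Nested version: the interesting logic sits three levels deep.
def process_sample_nested(cal_passed, cath_connected, voltage, threshold):
    if cal_passed:
        if cath_connected:
            if voltage < threshold:
                return "count"
            else:
                return "reset"
        else:
            return "idle"
    else:
        return "idle"

# Denested version: guard clauses exit early, so the main logic
# stays at one indentation level.
def process_sample_flat(cal_passed, cath_connected, voltage, threshold):
    if not cal_passed:
        return "idle"
    if not cath_connected:
        return "idle"
    if voltage < threshold:
        return "count"
    return "reset"

# Both versions agree on every input.
for args in [(True, True, 0.1, 0.5), (True, True, 0.9, 0.5),
             (True, False, 0.1, 0.5), (False, True, 0.1, 0.5)]:
    assert process_sample_nested(*args) == process_sample_flat(*args)
```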
At least you've made some effort to modularize by defining 4 distinct functions that have to run in order to start your app. But you need to modularize more.
Use a __main__ guard as well. This is a good habit to follow. One benefit is that you can import your module (or parts of it) without executing it. | {
"domain": "codereview.stackexchange",
"id": 44787,
"tags": "python, embedded, raspberry-pi"
} |
Contradiction between Gödel's Second Incompleteness Theorem and the Church-Rosser's property of CIC? | Question: On one hand, Gödel's Second Incompleteness Theorem states that any consistent formal theory that is strong enough to express any basic arithmetical statements can't prove its own consistency. On the other hand, the Church-Rosser's property of a formal (rewriting) system tells us that it is consistent, in the sense that not all equations are derivable, for example, K$\neq$I, since they don't have the same normal form.
Then the Calculus of Inductive Constructions (CIC) clearly satisfies both conditions. It is strong enough to represent arithmetical propositions (indeed, the $\lambda\beta\eta$-calculus alone is already able to encode the Church numerals and represent all primitive recursive functions). Moreover, CIC also has the confluence or Church-Rosser property. But:
shouldn't CIC be unable to prove its own consistency by the Second Incompleteness theorem?
Or it just states that the CIC can't prove its own consistency inside the system, and somehow the confluence property is a meta-theorem? Or maybe the confluence property of CIC does not guarantee its consistency?
I would highly appreciate if someone could shed some light on those issues!
Thanks!
Answer: First, you are confusing consistency of CIC as an equational theory with consistency of CIC as a logical theory. The first means that not all terms of CIC (of the same type) are $\beta\eta$-equivalent. The second means that the type $\bot$ is not inhabited. CR implies the first kind of consistency, not the second. This, as has been pointed out in the comments, is implied instead by (weak) normalization. The prototypical example of this situation is the pure $\lambda$-calculus: it is equationally consistent (CR holds) but, if you consider it as a logical system (as Alonzo Church originally intended) it is inconsistent (indeed, it does not normalize).
Second, as Emil pointed out, even if CIC has a given property (CR or normalization) it is perfectly possible that CIC cannot itself prove that property. In this case, I do not see any inconsistency in the fact that CIC is able to prove its own CR property, and I guess that this is indeed the case (elementary combinatorial arguments usually suffice for CR, and such arguments definitely fall within the huge logical power of CIC). However, CIC certainly does not prove its own normalization property, precisely because of the second incompleteness theorem. | {
"domain": "cstheory.stackexchange",
"id": 3553,
"tags": "lo.logic, type-theory, lambda-calculus"
} |
Are all genes capable of being switched on or off? | Question: Are all genes capable of being switched on or off or only some genes? Are there some genes that permanently do not have the functionality that enables them to be switched on or off?
Everything I have found in response to this question seems to assume that ALL genes are capable of being switched on or off.
When I have searched for the answer to this question all I find are explanations about how things like epigenetics, gene regulation and expression work. I understand at the basic level how these things work and that there are different ways by which they are accomplished.
I realize the answer may be "as far as we know" or "we don't know" or "it's complicated" and that's fine and definitely understandable.
Answer: I cannot think of a mechanism that would entirely prevent a gene from being regulated. For example, consider mechanisms like histone modification: there is very little about the sequence of a single underlying gene that can itself cause or prevent histone modification, yet those changes regulate the expression of associated genes. However, you can really only provide evidence in science for things that happen; providing evidence that things do not happen is often questionable. If you have some example gene and you'd like to say "this gene is not regulated", the best you can ever get to is "I haven't yet found a circumstance in which this gene is regulated by any manipulation I know of".
In practice, there are some genes that are not typically "switched on/off" and always expressed at fairly constant rates, we call these housekeeping genes. For many of these, the consequence of 'switching them off' would be death of the cell. However, I would not consider that these genes are "not capable" of being regulated, rather, I would say that they are specifically regulated to be always active, and that there is very strong evolutionary pressure for this to occur. To show what I mean by this, consider some quoted lines from Wikipedia:
The housekeeping gene expression levels are fine-tuned to meet the metabolic requirements in various tissues. Biochemical studies on transcription initiation of the housekeeping gene promoters have been difficult, partly due to the less-characterized promoter motifs and transcription initiation process.
Little is known about how the dispersed transcription initiation of housekeeping gene is established. There are transcription factors that are specifically enriched on and regulate housekeeping gene promoters.[12][13] Furthermore, housekeeping promoters are regulated by housekeeping enhancers but not developmentally regulated enhancers.[14]
In summary, steady activity is carefully controlled, and difficult to study. Comparatively, if a gene has very different expression in different environments, you can follow an iterative process to look at cells in each environment and see what is different: if you find another protein is phosphorylated or otherwise modified, or has also changed expression, you might be looking at a transcription factor involved in your gene of interest. On the other hand, if something never changes, where do you start? Trickier problem for an experimentalist. | {
"domain": "biology.stackexchange",
"id": 11942,
"tags": "genetics, human-genetics, gene-expression, gene-regulation"
} |
Advanced Molten Salt Reactor - Is the concept of designing a reactor that uses spent nuclear fuel to generate power technically feasible? | Question: I was going through the concept of designing a Nuclear Reactor that uses Spent Nuclear Fuel(SNF) to generate power as proposed by Transatomic Power .
http://transatomicpower.com/white_papers/TAP_White_Paper.pdf
Is it possible to design such a Nuclear Reactor that uses spent nuclear fuel to generate power ?
Answer: Yes, it's possible and it's been done, in the form of Mixed Oxides (the mix being plutonium and uranium).
Until it went through prolonged shutdowns due to huge technical, safety and economic setbacks (it may have some use up to 2018, if these issues can be resolved), Thorp (THermal Oxide Reprocessing Plant) has been one of several plants that take spent nuclear fuel and process it for use in reactors designed or modified to take it. So, there's an existence proof that Mixed Oxide (MOX) plants can work. Japan was a significant consumer of MOX fuel.
From a theoretical perspective, they ought to be attractive: once-through nuclear fission is only 5% efficient thermally, and less than 2% electrically: that is, of the energy available in the fuel, only 5% is converted to heat, and less than 40% of that heat is converted to electricity: 95% of the available energy is locked up in the spent fuel which is typically destined to be buried as waste, eventually. In reality, the economics and engineering, though technically very clever and supported by very talented staff, suck.
Do bear in mind that the "molten salt" bit is a red herring in this context. Molten salts can be used as the fuel medium, and as the cooling medium, so we need to be careful not to confuse the cooling mechanism with the fuel cycle. In your link, transatomicpower's vapourware is molten-salt-fuel. Reprocessed spent fuel could be solid, molten salt, or liquid salt: real-world MOX fuels are solid. Molten-salt cooling could be used with all sorts of fuels, but designs keep coming back to water cooling because these reactors are complex enough already without adding crazy demands on the cooling side (q.v the gas-cooled reactors). There are proposed fuel cycles that involve the fuel being in the form of molten salts, but that's pretty much orthogonal to whether the fuel is reprocessed spent fuel or not. Several of these molten-salt variants have been tried in experimental reactors, so not all of them are vapourware, but they have been technical and commercial dead ends (see also fast breeders). Conversely, real-world MOX plants operate in several countries today, so have at least had some technical success: the World Nuclear Association estimates that MOX makes up about 2% of nuclear fuel currently, so globally that's about 6GW of electricity from reprocessed spent fuel and material from retired nuclear weapons. | {
"domain": "physics.stackexchange",
"id": 15683,
"tags": "nuclear-physics, nuclear-engineering"
} |
Is eating cooked food an evolved behavior or rather an intelligent one, passed down via culture? | Question: I was just eating a rather rare steak when I started wondering whether eating foods cooked was something I would instinctively want to do if the practice hadn't been taught to me.
So, is cooking food an evolved behavior, inspired by nutritional value or health benefits, or is it a practice learned, and passed down through culture alone?
Answer: Cooked food is a learnt trait rather than an evolutionary one. On a certain psychological level it's a societal norm, so you would be hard pressed to find an individual who wouldn't mind eating completely raw meat/steak.
But, say, if you put a few humans into the wilderness at an early age and never taught them how to cook meat, they might not actually cook their food. When early prehistoric humans discovered fire, they eventually learnt to cook food and passed on the knowledge. You would have to research the history of that since I don't actually know the details.
Cooking food makes more sense on a psychological level, as we cook food based on our cultural upbringing, and our memories and knowledge. A human would prefer cooked meat based on his upbringing or understanding that eating cooked meat is safer, as it means the food contains fewer pathogens. The way we cook food depends on where we are from (e.g. an Indian would cook meat differently from, let's say, an Italian, because their environments provide them with different ingredients and/or devices, pans/pots).
I hope that wasn't too confusing. | {
"domain": "biology.stackexchange",
"id": 11165,
"tags": "evolution, food"
} |
Detect Nao robot in Kinect | Question: I am not sure if this has been tried before but I am trying to use Kinect and detect gestures made by the Nao robot.
I have made a Kinect application, a gesture based picture viewer and it detects humans fine(Obviously it does!) What I wanted to try was (lazy as I am), to see if I could use some (say, voice) command to tell the Nao to do a Swipe Right gesture and have my application identify that gesture. The Nao can easily identify my command and do some gesture. The problem however is, when I put the Nao in front of the Kinect sensor, the Kinect does not track it.
What I want to know is, are there some basics behind Kinect's human body motion tracking that essentially fail when a robot is placed in front of it instead of a human?
PS: I have kept the Nao at the right distance from the sensor. I have also checked if the entire robot is in the field of view of the sensor.
EDIT: This has been posted on stackoverflow and on msdn by me so as to target a large audience as this problem has not been encountered by anyone in the past.
Answer: You should read the paper published by Microsoft research on the actual algorithm behind the human motion tracking.
Real-Time Human Pose Recognition in Parts from a Single Depth Image, Shotton et al.,
http://research.microsoft.com/apps/pubs/default.aspx?id=145347
It relies on large labeled training data from the human body. That is why the Nao cannot just be tracked with the same method out of the box. To achieve that, you would need to re-train the algorithm with labeled data from the Nao in different poses. | {
"domain": "robotics.stackexchange",
"id": 17,
"tags": "kinect"
} |
Correlation length in d>1 Ising model, at zero temperature | Question: I am studying the renormalization group approach to the Ising model using as a reference Cardy's book "Scaling and renormalization in statistical mechanics".
I cannot understand what happens in the zero temperature case (and possibly for $ T < T_c$) to the correlation length $\xi$.
Here's my point:
Since the zero temperature is a fixed point, it should be either $\xi =0$ or $\infty$ in fact $\xi(\{K'\}) = 1/2 \ \xi(\{K\}) $, but at a fixed point $\{K'\} = \{K\}$ (I am using Cardy's notation where $\{K\}$ denotes the set of coupling for the theory).
Now if $\xi$ is finite below the critical temperature (as is stated in some books), say at $T_0 < T_c$ or equivalently $K_0 > K^*$, it must be zero at zero temperature. This can be deduced in a similar fashion as is done at the critical temperature (p. 38). In short, if $n(K)$ is defined as the number of times you have to apply the renormalization group to get from $K_0$ to $K$, then $n(K)$ diverges as $K \to \infty$ (equivalently, as $T \to 0$). Therefore $\xi(K)$ tends to zero (it is halved $n(K)$ times starting from $\xi(K_0)$).
On the other hand, it seems to me that it is possible to evaluate exactly the two-point spins correlation functions in the zero temperature limit as follows:
$\langle\sigma(0) \sigma(r) \rangle = \sum_{\{\sigma\}} \sigma(0) \sigma(r) \exp(-\beta H[\sigma]) = 2$.
In the last passage I have used that at zero temperature only the two spin configurations with the lowest energy contribute to the sum, but those are the ones with all spins up or all spins down, and $\langle\sigma(0) \sigma(r) \rangle = 1$.
Therefore the correlation length is infinite. (and for the argument above it should be infinite for every $T<T_c$).
So where is the error?
Answer: The correlation length below the critical temperature can be defined using the rate of exponential decay of the truncated 2-point function, evaluated in a pure state (I'll choose the one induced by the $+$ boundary condition), namely
$$
\xi_\beta(\vec n)^{-1} = \lim_{k\to\infty} -\frac1k \log \langle \sigma_0 ; \sigma_{[k\vec n]}\rangle^+_\beta,
$$
where $\vec n$ is a unit vector in $\mathbb{R}^d$ and $[k\vec n]$ is the point of $\mathbb{Z}^d$ closest to the point $k\vec n$ in $\mathbb{R}^d$. I used the standard notation
$$
\langle \sigma_i ; \sigma_j\rangle^+_\beta = \langle \sigma_i \sigma_j\rangle^+_\beta - \langle \sigma_i \rangle^+_\beta\langle \sigma_j\rangle^+_\beta
$$
for the truncated 2-point function (the covariance between the spins). Here, $\langle\cdot\rangle^+_\beta$ denotes expectation w.r.t. the (infinite-volume) Gibbs state obtained by taking the thermodynamic limit with $+$ boundary condition and inverse temperature $\beta$.
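To see the mechanics of this definition in a case where everything is computable, here is a small Python sketch using the exactly solvable 1D Ising chain as a stand-in (it has no phase transition, but its two-point function is known in closed form, $\langle\sigma_0\sigma_k\rangle = \tanh(\beta J)^k$, so the exponential decay rate and the corresponding correlation length can be extracted directly):

```python
import math

# 1D Ising chain with coupling J at inverse temperature beta:
# <sigma_0 sigma_k> = tanh(beta*J)**k exactly (free boundary conditions).
beta_J = 0.8

def two_point(k):
    return math.tanh(beta_J) ** k

# Extract the exponential decay rate (the inverse correlation length)
# from the large-k behaviour, then invert it.
k = 200
rate = -math.log(two_point(k)) / k             # decay rate per lattice site
xi_estimate = 1.0 / rate                       # correlation length, lattice units
xi_exact = -1.0 / math.log(math.tanh(beta_J))  # closed form, for comparison
print(xi_estimate, xi_exact)
```

Note that in 1D, increasing $\beta$ sends $\tanh(\beta J)\to 1$ and the correlation length diverges; the point being made here is that in $d\ge 2$, in a pure state below $T_c$, the truncated correlations instead decay faster and faster as $\beta$ grows.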
There are various ways of showing that the correlation length $\xi_\beta$ indeed goes to $0$ as $\beta\uparrow\infty$. Possibly one of the most straightforward (though a bit tedious) is through cluster expansion techniques (see, e.g., section 5.7.4 of this book).
The main point is that the average of the spins becomes nonzero below the critical temperature, but the fluctuations around this average value become completely uncorrelated in the limit. Morally, in order to correlate the fluctuations of two spins at $i$ and $j$, you must have a Peierls contour surrounding both $i$ and $j$, and the probability of this goes to $0$ exponentially fast in $\beta\|j-i\|$ (when $\beta>\beta_{\rm c}$).
There are several reasons why you must work with a pure state (or, more precisely, an extremal Gibbs measure) above. One is that these are the states corresponding to thermodynamic equilibrium: the only ones for which all macroscopic observables take deterministic values, for example. Note that if you were to consider the state obtained with free (or periodic) boundary condition, you would indeed see that the truncated 2-point function reduces to the usual 2-point function,
$$
\langle \sigma_i; \sigma_j \rangle^{\rm free}_\beta
=
\langle \sigma_i \sigma_j \rangle^{\rm free}_\beta
$$
and this quantity does not vanish as $\|j-i\|\to\infty$ when $\beta>\beta_{\rm c}$ (instead, it converges to the square of the spontaneous magnetization $m^*(\beta)$). Note also that, even though $\langle \sigma_i\rangle^{\rm free}_\beta = 0$, there is spontaneous magnetization in typical configurations: the expectation is zero only because this spontaneous magnetization is $m^*(\beta)$ or $-m^*(\beta)$ with probability $1/2$. Actually, a typical configuration under this measure will be typical of either the $+$ state or the $-$ state with probability $1/2$, so any natural way you decide to measure the correlation length from configurations under these pure states should give you the same answer under the free state. The reason you cannot do it through the truncated 2-point function as above is that the expectation starts to mix up the contributions from the two possible macroscopic behaviors described by the pure states, but this does not correspond to anything physically relevant.
"domain": "physics.stackexchange",
"id": 19080,
"tags": "statistical-mechanics, renormalization"
} |
Does the weak force have an attractive/repulsive force observable in everyday life like the other forces? | Question: After the correct comments, this question is not here to compare gravity's and EM's long range forces' energetics and amplitudes to microscopic scattering amplitudes of such forces as weak and strong. I am basically trying to figure out if there is any observable effect in everyday life of the weak force. From the answers it is obvious that the weak force can be attractive or repulsive too. I am just trying to figure out if there is an observable effect of this (for the weak force) that we can see somehow in everyday life. Maybe not obvious, maybe we see it, experience it every day, we just do not know (that it is because of the weak force) it until it is explained in detail (like the strong force).
Maybe my question can be asked as simply as, 1. can the weak force pull/push (attract/repel) particles? 2. does it push/pull (attract/repel) particles in a way observable in everyday life, or is it just a rare thing, like decay? Does it hold something (particles) together or keep something apart in the everyday matter that we live in/around?
I have read these questions:
Weak force: attractive or repulsive?
Do strong and weak interactions have classical force fields as their limits?
Has the weak force ever been measured as a force?
As it is currently known,
EM force is mediated by virtual photons, and can either be attractive or repulsive, and in everyday life it is easily observable, just hold a magnet. You can see the same thing with electricity. Then there is the covalent bond that makes molecules out of atoms. It is observable too that at short distance the EM force is stronger than gravity
gravity, just let something go, and you see it is always attractive, there are obviously observable effects in everyday life, and it is observable that at short distance gravity is weaker than the EM force or the strong force
even the strong force, that keeps quarks confined inside a nucleon, a neutron or proton, and the residual strong force that keeps neutrons and protons inside a nucleus, has an observable effect in everyday life, since without it, nuclei would not exist, they would fall apart. It is attractive at certain distances (between 0.8 fm and 2.5 fm), but it becomes repulsive at short distances (less than 0.7 fm), and that makes sure neutrons and protons do not get too close. This effect, though not commonly known, is responsible in part for giving material its volume. It is observable too that the strong force is stronger than gravity and EM on the short scale.
But what about the weak force? I know it can be repulsive or attractive, see here:
Weak force: attractive or repulsive?
So:
For weak isospin, there are two isospin charges (or flavors), up and down, and their associated anti-charges, anti-up and anti-down.
up repels up (anti-up repels anti-up)
down repels down (anti-down repels anti-down)
up attracts down (anti-up attracts anti-down)
up attracts anti-up (down attracts anti-down)
up repels anti-down (down repels anti-up)
For weak hypercharge, there is just one type of charge and its associated anti-charge.
hypercharge repels hypercharge (anti-hypercharge repels anti-hypercharge)
hypercharge attracts anti-hypercharge
Note that electric charge is a certain mixture of weak isospin and weak hypercharge.
OK, so I know that the weak force can be either attractive or repulsive. But the answers say too, that the weak or strong force does not have a classical field theory. Still, the strong force does have observable (in everyday life) attractive or repulsive effects.
Question:
But what about the weak force, are there any effects that are in everyday life observable where the weak force is attractive or repulsive?
Answer: In everyday life? Like in your kitchen? No. Or if yes, totally not in the way that you're thinking.
If you insist on thinking of the fundamental interactions in terms of attraction and repulsion, one way to do that is to describe them all in terms of the Yukawa potential energy,
$$ U = \pm \alpha \frac{\hbar c}{r} e^{-r/r_0}
$$
where the sign comes from the relative signs of the charges involved and distinguishes attractive from repulsive potentials, the coupling constant $\alpha$ is determined experimentally, and the range parameter
$$ r_0 = \frac{\hbar c}{mc^2}
$$
depends on the mass $m$ of the field which mediates the interaction. For gravitation, electromagnetism, and the QCD color force, this field (graviton, photon, gluon) is massless, so those forces in principle have infinite range. However, in the strong case, the coupling constant $\alpha$ is so large that multi-gluon exchanges are more important than single-gluon exchanges. This strong coupling means that color charges effectively can't be separated from each other, which is known as "color confinement." At low energies and long distances, the effective strong interaction is mediated by a spectrum of massive meson fields, whose own Yukawa potentials conspire to give the nuclei the structure that they have. An attractive force, mediated by pions, acts between nucleons that are separated by a few femtometers, but a repulsive force mediated by heavier mesons makes it expensive for nucleons to approach each other closer than about one femtometer.
For the weak interaction, the charged- and neutral-current bosons both have masses of nearly $100\,\mathrm{GeV}/c^2$. That's three orders of magnitude larger than the pion mass $140\,\mathrm{MeV}/c^2$, which is what mostly defines the size of a nucleon. So in order for nucleons to feel any attraction or repulsion due to the weak force, they would have to be substantially "overlapping" in a way that's forbidden by the hard-core repulsion of the residual strong force. The effects of the strong force are much larger than the effects of the weak force --- partially because the coupling constants are different, but partially because the strong force prevents particles from approaching each other close enough that the weak force can affect them very much directly.
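To put rough numbers on this, here is a quick back-of-the-envelope sketch of the Yukawa range parameter $r_0 = \hbar c / mc^2$ for the two mediator masses mentioned above (constants and masses rounded; an estimate, not a precision calculation):

```python
hbar_c = 197.327  # MeV * fm

def yukawa_range_fm(mass_MeV):
    # r0 = hbar*c / (m c^2): the heavier the mediator, the shorter the range.
    return hbar_c / mass_MeV

r_pion = yukawa_range_fm(139.6)    # pion: ~1.4 fm, sets nuclear length scales
r_W = yukawa_range_fm(80.4e3)      # W boson: ~0.0025 fm
print(r_pion, r_W, r_pion / r_W)   # the weak range is hundreds of times shorter
```

The pion-mediated force reaches out over femtometers, while the weak bosons' range is far shorter than the distance of closest approach allowed by the hard-core repulsion, which is why nucleons never feel the weak force as an everyday push or pull.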
This same feature that makes the weak force mostly-irrelevant in nuclei (and more so in electromagnetically-bound systems, where the length scales are longer than in nuclei, and even more so in the even-larger gravitationally-bound systems) also makes the weak interaction harder to measure. In fact, measurements of the weak interaction would be impossible in strongly-interacting systems if the strong and weak interactions had the same set of symmetries, and we would be limited to patiently waiting for weak decays. However, we can take advantage of the fact that the weak interaction is the only one of the fundamental forces which changes under mirror reflection.
If there's a way that the weak interaction affects life in your kitchen, it's because the weak interaction is parity-violating and the other fundamental interactions aren't. The Vester-Ulbricht hypothesis suggests a way that parity violation may have been important historically. But it's a much more subtle situation than "X is attracted to Y," because in contests of attraction and repulsion the weak interaction always loses to electromagnetism and the strong force. | {
"domain": "physics.stackexchange",
"id": 74928,
"tags": "forces, particle-physics, standard-model, fusion, weak-interaction"
} |
Do cockroaches lay eggs in human flesh when they "bite"? | Question: Recently, I discovered a "bite" by a cockroach, and not only is the "bitten" area red and swallowing, and more specifically, it have a big hole in that area, but when I clean it with hydrogen peroxide solution, something is happening, that is, a yellowish/greenish thing comes out from the holes from each infecting area.
Do cockroaches lay eggs in human flesh when they "bite", and much more importantly, would hydrogen peroxide solution kill the eggs and parasites inside those holes?
Answer: No, cockroaches do not lay eggs into human flesh. You most likely received an infection of some sort from the bug, or it wasn't a cockroach. | {
"domain": "biology.stackexchange",
"id": 461,
"tags": "zoology, entomology"
} |
Is there an R package for Locally Interpretable Model Agnostic Explanations? | Question: One of the researchers, Marco Ribeiro, who developed this method of explaining how black box models make their decisions has developed a Python implementation of the algorithm available through Github, but has anyone developed a R package? If so, can you report on using it?
Answer: I think you're talking about the lime Python package. No, there is no R port for the package. The implementation for the localized model requires enhancements to the existing machine-learning code (explained in the paper); a new implementation for R would be very time-consuming.
You may want to take a look at this for interfacing Python in R.
My suggestion is to stick with Python. The package is only useful for highly complicated non-linear models, for which Python offers better support than R.
"domain": "datascience.stackexchange",
"id": 1704,
"tags": "neural-network, r, predictive-modeling, ensemble-modeling"
} |
Why does the work-energy theorem need to include internal forces? | Question: Can anyone kindly explain me why work energy theorem must also include internal forces?
The proof of work energy theorem is derived from Newton's laws of motion, but Newton's laws of motion don't take internal forces into account, so why should internal forces be taken into account in the work energy theorem?
Answer: This is a bit of a strange question, because Newton's laws do include internal forces.
However, Newton's third law happens to cancel out their overall effect on a center of mass. But, if you want to understand the motions of the constituent parts of the system, then you do have to understand their internal forces.
So let's assume that we have a collection of particles $\{i\}$ with masses $m_i$ and (vector) positions $x_i$ each feeling external forces $F_i$ and internal forces $V_{ij} = -V_{ji}.$
We usually describe the system using the overall mass $M = \sum_i m_i$ at the center-of-mass position $X = \sum_i \frac{m_i}M x_i.$ Newton's laws say that the EOM for the center-of-mass are (with dots as time-derivatives) $$M \ddot X = \sum_i m_i \ddot x_i = \sum_{i}\left(F_i + \sum_j V_{ij}\right) = \sum_i F_i = F.$$Here $F$ is the "effective force" on the center of mass. In the above we found out that the $V_{ij}$ term disappeared, why? In a little more detail the argument looks like this:
We know $V_{ij} = -V_{ji}$, this is called “antisymmetry”.
This means that $V_{ij}+V_{ji}=0$, and since we can add zero to anything without changing it, $$V_{ij} = V_{ij} + k (V_{ij}+V_{ji}) $$ for any $k$. We choose $k = -\tfrac12$, which averages the two equal-and-opposite terms, so that $$V_{ij} = {V_{ij} - V_{ji}\over2}.$$
Then when we calculate $\sum_{ij} V_{ij}$ we expand it out into these two terms, $\left(\sum_{ij} V_{ij} - \sum_{ji} V_{ji}\right)/2.$ In the second term we relabel $i \leftrightarrow j$ simultaneously and we find $\sum_{ij} (V_{ij} - V_{ij})/2 = \sum_{ij} 0 = 0$ directly.
In a paragraph or two I will call this the "antisymmetric cancellation trick."
Similarly we can use the usual work-energy trick and multiply both sides by $\dot X,$ yielding$$
\begin{align}
M\ddot X\cdot \dot X &= F\cdot\dot X\\
\frac d{dt} \left( \frac 12 M \dot X ^2 \right) &= F \cdot \dot X = P,
\end{align}
$$ and then we can define $K=\frac 12 M \dot X ^2$ as "the kinetic energy of the center of mass" and $P$ as "the power of the effective force on the center of mass." Same work-energy trick as for a particle, but now applied to an aggregate of particles.
However there is a bunch of kinetic energy in the system which is not seen in this definition of $K$! The easiest way to think about this is to think of a gyroscope which is spinning but standing still: $K$ as we have defined it is zero, and all of that rotational kinetic energy is being ignored by this picture, because the center of mass isn't moving.
If we instead want the total kinetic energy $T = \sum_i\frac 12 m_i \dot x_i^2$, then we find that the power exerted on it is $$\dot T = \sum_i m_i \ddot x_i \cdot \dot x_i = \sum_i \left( F_i \cdot \dot x_i+ \sum_j V_{ij}\cdot \dot x_i \right).$$The $V_{ij}$ terms here do not vanish via the antisymmetric cancellation trick! That is because when you do it, you get $\sum_{ij} V_{ij} \cdot(\dot x_i - \dot x_j) / 2$ after the relabeling, but there is no guarantee that $\dot x_i = \dot x_j.$
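A tiny numerical sketch of this last point: a single equal-and-opposite internal force pair (Newton's third law) contributes nothing to the net force on the system, yet delivers nonzero power when the two particles move at different velocities.

```python
# Two particles with an equal-and-opposite internal force pair
# (Newton's third law: V_01 = -V_10), moving at different velocities.
V01 = [1.0, 0.0, 0.0]
V10 = [-v for v in V01]

xdot = [[2.0, 0.0, 0.0],   # velocity of particle 0
        [0.5, 0.0, 0.0]]   # velocity of particle 1

# The net internal force on the system vanishes component by component...
net = [a + b for a, b in zip(V01, V10)]

# ...but the internal power does not, so internal forces can change
# the total kinetic energy even though they never move the center of mass.
power = (sum(V01[d] * xdot[0][d] for d in range(3))
         + sum(V10[d] * xdot[1][d] for d in range(3)))
print(net, power)  # [0.0, 0.0, 0.0] and 1.5
```

This is exactly the gyroscope situation: the center-of-mass bookkeeping sees nothing, while the internal terms do work on the constituent parts.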
"domain": "physics.stackexchange",
"id": 26118,
"tags": "newtonian-mechanics, classical-mechanics, energy, work, power"
} |
Vector-transposing function | Question: I profiled a library I'm writing that uses vector transposes and found that I am spending a good bit of time doing the following transpose. I am using a std::vector of std::vector<double>s to represent the column vectors.
What are some ways to optimize this function?
std::vector<double> transpose_vector(const std::vector<std::vector<double>> &column_vec) {
// take a column vector:
// |x1|
// |x2|
// |x3|
// and return a row vector |x1, x2, x3|
std::vector<double> row_vector;
for (auto c : column_vec) {
for (auto r : c) {
row_vector.push_back(r);
}
}
return row_vector;
}
Answer: Change your column loop to use a reference.
for (const auto &c : column_vec)
Without the reference, a copy will be made of each vector. This will involve a memory allocation. Using the reference you avoid all that, which should save a good deal of time since each c will be a single element vector.
auto r can stay since r will be a double.
Combining this with calling reserve on row_vector will eliminate all but one memory allocation.
"domain": "codereview.stackexchange",
"id": 33817,
"tags": "c++, performance, c++11, vectors"
} |
Why do we say the three-dimensional space is flat (in Physics)? | Question: This is quote from Hawking's book:
The surface of the Earth is what is called a two-dimensional space. That is, you can move on the surface of the Earth in two directions at right angles to each other: you can move north–south or east–west. But of course there is a third direction at right angles to these two and that is up or down. In other words the surface of the Earth exists in three-dimensional space. The three-dimensional space is flat."
Answer: After Hawking wrote "The three-dimensional space is flat", he explained what that means: "That is to say it obeys Euclidean geometry." And as an example of what it means to obey Euclidean geometry, he gives the example "The angles of a triangle add up to 180 degrees". In a non-flat space, the angles of a triangle can add up to more than 180 degrees, or less than 180 degrees!
When discussing flat and curved spaces, physicists often think in terms of the "metric" of the space, which determines the distance between infinitesimally close points. In a flat 3D space, the metric is
$$ds^2=dx^2+dy^2+dz^2,$$
which is just a 3D version of Pythagoras' Theorem. But in a curved 3D space, the metric cannot be put into this simple form. | {
"domain": "physics.stackexchange",
"id": 53851,
"tags": "astrophysics, astronomy, curvature, spacetime-dimensions"
} |
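A tiny numerical illustration of the angle-sum claim in the answer above (the specific triangles are my own examples, not from Hawking's text):

```python
# In flat (Euclidean) space, the angles of any triangle sum to 180 degrees.
flat_sum = 60 + 60 + 60      # an equilateral triangle in a flat plane

# On a curved surface such as a sphere this fails: a geodesic triangle
# with one vertex at the north pole and two on the equator, 90 degrees of
# longitude apart, has a right angle at every vertex.
sphere_sum = 90 + 90 + 90

print(flat_sum, sphere_sum)  # 180 270
```

The excess over 180 degrees is a direct measure of the curvature enclosed by the triangle.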
Do color-neutral gluons exist? | Question: If I'm correct a quark can change color by emitting a gluon. For example a blue up quark $u_b$ can change into a red up quark by emitting a gluon:
$$u_b \longrightarrow u_r + g_{b\overline{r}}$$
(Here, the subscript indicates color and $\overline{r}$ means anti-red). This is needed to keep the color balance (left hand side: $b$, right hand side $r+b+\overline{r}=b$).
My question is then, do color-neutral gluons exist? Eg a gluon that is blue-anti-blue?
If it would, it could be created by any quark then:
$$u_b \longrightarrow u_b + g_{b\overline{b}}$$
I'm learning about the Standard Model in school, but the text isn't always that clear.
Answer: Color-neutral gluons that have the component blue-antiblue do exist, much like red-antired and green-antigreen. However, the sum of these three possible kinds of gluons is unphysical, so there are only two "diagonal" types of gluons. Neither of these two types of gluons is "genuinely color-blind" or "completely color-neutral".
This is more manifest if you realize that the color dependence of the gluon field may be written as a traceless $3\times 3$ Hermitian matrix. It is traceless because the gauge group is $SU(3)$ rather than $U(3)$ whose dimension is 8 rather than 9. (There are 3 complex entries strictly above the diagonal, which are copied in the complex conjugate way beneath the diagonal, plus 2 or 3 real entries on the diagonal, depending on whether we require the trace to vanish.)
Completely color-neutral gluons, if they were added, would be proportional to the identity matrix and they would couple to all three colors of the quarks equally. In other words, the interactions mediated by such gluons would only depend on the baryon number of the quarks. Experimentally, this interaction doesn't exist. In beyond-the-Standard-Model physics, one may try to extend $SU(3)$ to $U(3)$ in this way (this is very common in braneworld models) but because no new baryon-charge long-distance interaction is seen, the $U(1)$ in the $U(3)$ has to be spontaneously broken at a pretty high energy scale. | {
"domain": "physics.stackexchange",
"id": 1138,
"tags": "standard-model, quantum-chromodynamics, quarks, strong-force, gluons"
} |
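A small numerical check of the counting argument in the answer above, assuming the standard Gell-Mann basis for the $SU(3)$ generators (my choice of convention, not spelled out in the answer):

```python
import numpy as np

# The two diagonal Gell-Mann matrices: the color structure of the two
# physical "diagonal" gluon states.
lam3 = np.diag([1.0, -1.0, 0.0])
lam8 = np.diag([1.0, 1.0, -2.0]) / np.sqrt(3)

# Both are traceless, as SU(3) requires.
print(np.trace(lam3), np.trace(lam8))

# The identity matrix (an equal mix of r-rbar, g-gbar and b-bbar,
# i.e. a truly color-neutral gluon) has trace 3, so it is excluded
# from SU(3); it would belong to the extra U(1) inside U(3).
print(np.trace(np.eye(3)))
```

Counting real parameters of a traceless Hermitian 3x3 matrix gives 6 off-diagonal plus 2 diagonal, reproducing the 8 gluons.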
Finding the molarity of the combination of two solution reacting with each other | Question: I have a question that gives two concentrations and asks for the mass of $\ce{HCl}$ formed by the reaction.
$$\ce{H2SO4 + NaCl ->Na2SO4 + HCl}$$
I have two concentrations:
$\pu{250 mL}$ of $\pu{4.00 M}$ $\ce{H2SO4}$, and $\pu{250 mL}$ of $\pu{1.00 M}$ $\ce{NaCl}$.
Here is the balanced reaction equation:
$$\ce{H2SO4 + 2NaCl ->Na2SO4 + 2HCl}$$
I know how to find the mass once I find the moles, molecular weights and then grams by multiplying the two.
However, how do I add those two concentrations? I assume the mixture will be at least $\pu{500 mL}$, but how do I add the molarity?
Answer: Your working equation is correct.
$$\ce{H2SO4 + 2NaCl ->Na2SO4 + 2HCl}$$
Find the amount of substance of $\ce{H+}$ and of $\ce{Cl-}$ in each solution via $n=c\cdot V$.
$n(\ce{H+})=2~\mathrm{mol}$, $n(\ce{Cl-})=0.25~\mathrm{mol}$
What is the limiting agent?
Chloride ($\ce{Cl-}$)
How many moles of hydrogen chloride can be formed at most?
$n(\ce{HCl})=0.25~\mathrm{mol}$
Calculate the mass of hydrogen chloride via $m = n\cdot M$
$M(\ce{HCl})=36.5~\mathrm{g/mol}$, $m(\ce{HCl})=9.1~\mathrm{g}$ | {
"domain": "chemistry.stackexchange",
"id": 1442,
"tags": "aqueous-solution, solutions, concentration"
} |
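The arithmetic in the answer above can be sketched in a few lines (a plain check; the 36.5 g/mol molar mass is taken from the answer):

```python
V = 0.250               # L, volume of each solution
n_H = 2 * 4.00 * V      # mol of H+ available (H2SO4 is diprotic): 2.00 mol
n_Cl = 1.00 * V         # mol of Cl- available: 0.25 mol

# Cl- is the limiting reagent, and each Cl- yields one HCl
n_HCl = min(n_H, n_Cl)  # 0.25 mol
M_HCl = 36.5            # g/mol
m_HCl = n_HCl * M_HCl

print(m_HCl)            # 9.125, i.e. about 9.1 g
```

Note that the combined 500 mL volume never enters: only the amounts of substance matter for the mass of product.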
How do you install octomap on Fuerte in linux? | Question:
Looking at the octomap instructions, it is not clear to me what I have to do to install the "pre-built packages" on linux -- sorry I don't use linux much so this maybe very clear to others.
What are the commands to install the correct packages?
Originally posted by Kevin on ROS Answers with karma: 2962 on 2012-05-20
Post score: 1
Answer:
For just installing octomap as a stand-alone library, run
sudo apt-get install ros-fuerte-octomap
To install octomap, ROS integration, and octomap_server run
sudo apt-get install ros-fuerte-octomap ros-fuerte-octomap-mapping
Originally posted by AHornung with karma: 5904 on 2012-05-20
This answer was ACCEPTED on the original site
Post score: 2 | {
"domain": "robotics.stackexchange",
"id": 9466,
"tags": "ros, linux, octomap, octomap-mapping, ros-fuerte"
} |
Marginal Stability based on Poles | Question: We know that a discrete-time system with a (Z-transform) transfer function that has a pole of magnitude 1 (i.e. $|z|=1$ is a pole of the transfer function) is marginally stable if the pole at $z=1$ is a single pole and unstable on every RoC otherwise.
My question is: Suppose a transfer function that has two simple poles at $z=1, z=-1$. Is it marginally stable or not? And, therefore, what is the most general form of the stability theorem based on poles of the (Z-)transfer function?
Answer: A causal discrete-time LTI system is marginally stable if none of its poles has a radius greater than $1$, and if it has one or more distinct poles with radius $1$. So a system with poles at $z=1$ and $z=-1$ is marginally stable (if there are no other poles outside the unit circle).
A causal discrete-time system with all its poles strictly inside the unit circle is called asymptotically stable. If at least one of the poles is outside the unit circle, the system is unstable.
For anti-causal systems just replace 'outside' by 'inside' and 'greater than $1$' by 'smaller than $1$' in the above definitions.
For non-causal systems with two-sided impulse responses the region of convergence (ROC) is an annulus centered at $z=0$. Such a system is asymptotically stable if the ROC contains the unit circle. If the ROC is limited by the unit circle and if one or more distinct poles are on the unit circle, then the system is marginally stable.
In sum, we get the following ROCs for marginally stable systems:
causal: $1<|z|<\infty$
anti-causal: $0\le |z|<1$
non-causal (two-sided): $1<|z|<R,\; (R>1)\quad\text{or}\quad R<|z|<1,\; (R<1)$
where in all cases it is assumed that all poles on the unit circle are distinct. | {
"domain": "dsp.stackexchange",
"id": 6014,
"tags": "z-transform, transfer-function, stability"
} |
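The classification in the answer above can be sketched for the causal case as follows (a minimal helper, assuming any poles on the unit circle are simple):

```python
import numpy as np

def classify_causal(poles, tol=1e-9):
    """Classify a causal discrete-time LTI system from its pole locations.
    Assumes any poles on the unit circle are simple (non-repeated)."""
    radii = np.abs(np.asarray(poles, dtype=complex))
    if np.any(radii > 1 + tol):
        return "unstable"
    if np.any(np.abs(radii - 1) <= tol):
        return "marginally stable"
    return "asymptotically stable"

print(classify_causal([1, -1]))       # the system from the question
print(classify_causal([0.5, 0.9j]))
print(classify_causal([1.1, 0.3]))
```

The first call prints "marginally stable", confirming that simple poles at $z=1$ and $z=-1$ together are still only marginal.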
Scope of $E=mc^2$ | Question: As far as I can see, Einstein's $E=mc^2$ is most often mentioned in the context of nuclear physics, even though it is more generally applicable. I understand that this is due to the large nuclear binding energies that are involved.
In what other situations (outside of nuclear physics) is this mass-energy equivalence
relevant?
measurable?
Answer: I'll take measurability first. Particle physics and nuclear physics you already mention. In other areas:
modern mass measurements are precise enough to detect the impact of electron binding energies in atoms and molecules.
Tests of the equivalence principle are sensitive to many different contributions to the total mass-energy of whatever objects are used.
Precision spectroscopy in atoms can detect tiny contributions to the energy levels; many of these are, as we say, 'relativistic corrections', which amounts to saying they are closely related to $E=mc^2$.
Now on relevance. The relationship is relevant whenever speeds reach a significant fraction of the speed of light, or whenever high precision is available and needed. The former includes very hot plasmas in astrophysics and laser physics, as well as nuclear and subatomic particle physics. The latter includes precision measurements in atoms, including the determination of the fine structure constant and the gyromagnetic ratio of the electron. These present precision tests of fundamental physics.
But one could argue that the relation has a wider relevance because it is a central part of the ability of physics to be logically and mathematically consistent. | {
"domain": "physics.stackexchange",
"id": 58141,
"tags": "special-relativity, nuclear-physics, mass-energy, binding-energy, applied-physics"
} |
Looking for alternatives to the microstrain_3dmgx2_imu? | Question:
I'm looking for alternative IMUs for our autonomous robot. However, the driver must support ROS. It has been pretty difficult to find different ones since I am new to building robots but I'm trying to research alternatives.
Originally posted by faceinthegrass on ROS Answers with karma: 1 on 2014-07-17
Post score: 0
Answer:
You can look for Xsens IMU's.
http://www.xsens.com/products/mti/
http://www.xsens.com/products/mti-g-700/
http://www.xsens.com/products/mti-10-series/
http://www.xsens.com/products/mti-100-series/
ROS Driver for XSens MT/MTi/MTi-G devices is available at http://wiki.ros.org/xsens_driver
Originally posted by sai with karma: 1935 on 2014-07-17
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 18665,
"tags": "ros, imu, driver, microstrain"
} |
Estimate prevalence of ʻOumuamua-style interstellar objects inside Jupiter orbit now that C/2019 Q4 (Borisov) is found? | Question: Now we know two of them, so maybe it's not an outlier.
We have two points so we can draw a line. How many similar macroscopic objects should be zipping through space at ~30 km/sec so that we would see one of these passing inside the orbit of Jupiter once every two years? I don't really know what units this should be expressed with. Maybe it's 1 / (km^2 * year) - the frequency of such a body coming through any random square kilometer of empty space in a year of time.
What's the "capture cross section" of intra-Jupiter orbit? I imagine that since the Sun affects trajectories of bodies coming near it, drawing them closer as they pass by, intra-Jupiter orbit will see many more bodies than a circle of the same area in empty space. How large is this effect for ʻOumuamua-like bodies? I expect this to be a unitless coefficient. Is it negligible? Is it 5x? 10x? 1000x?
Sorry if my question is not clear enough.
Answer: The enhancement of a cross-section due to gravitational focusing is given by
$$ \sigma_{\rm eff} = \pi a_J^2 \left(1 + \frac{2GM_{\odot}}{a_J\ v^2}\right),$$
where $a_J$ is the semi-major axis of Jupiter's orbit (assumed circular), $v$ is the relative velocity (at infinity) and I have ignored the mass of Jupiter.
Thus, using $v=30$ km/s (as specified in the question) the term in brackets is the gravitational focusing enhancement of the cross-section and equals 1.38.
We now assume an isotropic velocity distribution and a homogeneous density of such objects of $n$ per unit volume. The flux of such particles is $f =n v \sigma_{\rm eff}$.
If $f = 0.5$ yr$^{-1}$, then $n \sim 5 \times 10^{12}$ per cubic parsec. | {
"domain": "astronomy.stackexchange",
"id": 3976,
"tags": "solar-system, interstellar, modeling"
} |
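Plugging numbers into the answer above (a sketch; the SI constants and Jupiter's orbital radius are standard values I have assumed):

```python
import numpy as np

G = 6.674e-11          # m^3 kg^-1 s^-2
M_sun = 1.989e30       # kg
a_J = 5.2 * 1.496e11   # Jupiter's orbital radius, m
v = 30e3               # relative speed at infinity, m/s

focus = 1 + 2 * G * M_sun / (a_J * v**2)   # gravitational focusing factor
sigma_eff = np.pi * a_J**2 * focus         # effective cross-section, m^2

f = 0.5 / 3.156e7      # one detection every two years, in s^-1
n = f / (v * sigma_eff)                    # number density, m^-3

pc = 3.086e16          # metres per parsec
print(f"focusing factor: {focus:.2f}")          # 1.38
print(f"n = {n * pc**3:.1e} per cubic parsec")  # about 5e12, as in the answer
```

So at 30 km/s the Sun's focusing only boosts the cross-section by about 38 percent; the enhancement grows for slower objects since the term scales as $1/v^2$.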
Need help on conservation of momentum (liquid jet) Problem! | Question: I had a quick question regarding this homework problem: I can't seem to understand why they chose to use conservation of momentum in the vertical direction and not the horizontal direction, as you would get a different end result if you chose the horizontal direction to work with. Would anyone kindly explain this to me? Is it because the question states that there can be no tangential force on the plate surface, which means horizontal forces, and thus working in the horizontal direction, are neglected?
Thank you very much in advance!
Answer: The water hits the surface and the moving directions of the water particles get changed. Their speeds are assumed to stay the same.
Let's follow what happens when a particle with mass M hits the surface and splits to pieces. Piece 1 with mass M1 continues to the +X direction in the solution drawing and piece 2 which has the rest of the mass M2 = (M-M1) turns to the negative X direction.
Piece 1 has momentum V * M1 and it's directed to positive X.
Piece 2 has momentum -V(M - M1) if we calculate it to the +X direction.
The total momentum of the incoming particle is not conserved. The Y-direction momentum is used to make the normal force on the surface. The sum of the X-direction momenta of the particles stays the same because the plate exerts no force on the particles in the X-direction (frictionless).
The X-direction momentum of the incoming particle is V * M *sinθ
The X-direction conservation equation:
V *M * sinθ = V * M1 -V(M - M1)
V can be dropped off. After reordering we get M1 = M(1 + sinθ)/2 or equivalently
(M1/M) = (1 + sinθ)/2
If we assume the water is non-compressible the jet thicknesses must have the same ratio and that's the same result as presented in the original solution.
Your "working in the horizontal direction" idea is a misconception. The conservation of momentum is only true for the along-the-surface component. The rest of the momentum is lost because the surface stops the motion in the surface-normal direction. | {
"domain": "engineering.stackexchange",
"id": 4151,
"tags": "fluid-mechanics"
} |
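A quick numerical check of the result in the answer above (the 30 degree jet angle is an arbitrary example):

```python
import numpy as np

theta = np.deg2rad(30.0)            # jet angle, chosen for illustration
M = 1.0                             # incoming mass flow, normalized
M1 = M * (1 + np.sin(theta)) / 2    # mass continuing in the +X direction
M2 = M - M1                         # mass turned to the -X direction

# x-momentum balance: V*M*sin(theta) = V*M1 - V*M2  (V cancels)
assert abs(M * np.sin(theta) - (M1 - M2)) < 1e-12

print(round(M1 / M, 6))             # 0.75 for theta = 30 degrees
```

For incompressible flow the jet thicknesses split in the same 3:1 ratio as the mass.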
Longitudinal magnification | Question: I want to prove that if an object is small in length and lying along the principal axis then
$$M = -\frac{dv}{du} = -\left(\frac{v}{u}\right)^2$$
Where, $M$ is the longitudinal magnification.
Answer: In geometrical optics the following relation between the longitudinal positions of object and image (respectively $u$ and $v$) together with the focal length $f$ is valid:
$$\frac{1}{u} + \frac{1}{v} = \frac{1}{f}$$
If the object is small and it has one of its ends at $u_1$, with the corresponding image at $v_1$, we can calculate the position of the image of the other end, $v_2$, in an approximate way using derivatives:
$$v_2 \approx v_1 + \frac{dv}{du} (u_2 - u_1)$$.
where $\frac{dv}{du}$ is the derivative calculated at $u_1$.
The longitudinal magnification is the ratio between the length of the image and the length of the object:
$$ M = \left|\frac{v_2 - v_1}{u_2 - u_1}\right|, $$
and using the approximate equation for the position in terms of the derivatives that we wrote above we see that
$$M \approx \left| \frac{dv}{du} \right|.$$
An easy way to calculate the derivative is considering that the variation of the quantity $\frac{1}{u} + \frac{1}{v}$ is zero for any variation of the position of the object $u$ and the corresponding variation of the position of the image $v$. So:
$$d \left (\frac{1}{u} + \frac{1}{v}\right) = 0$$.
We can express the variation using the variations of $u$ ($du$) and $v$ ($dv$) as
$$d \left (\frac{1}{u} + \frac{1}{v}\right) = -\frac{1}{u^2}\,du -\frac{1}{v^2}\,dv$$
still equal to zero. From $-\frac{1}{u^2}\,du -\frac{1}{v^2}\,dv = 0$ we obtain the expression for $\frac{dv}{du}$:
$$\frac{dv}{du} = - \frac{v^2}{u^2}$$
The longitudinal magnification is then
$$M_{\mathrm{long.}} = \frac{v^2}{u^2}$$
Since the transverse magnification is
$$M_{\mathrm{transv.}} = \frac{v}{u}$$
then
$$M_{\mathrm{long.}} = M_{\mathrm{transv.}}^2$$ | {
"domain": "physics.stackexchange",
"id": 57230,
"tags": "homework-and-exercises, optics, geometric-optics"
} |
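A quick numerical check of the answer above, with $f = 10$ and $u = 30$ chosen arbitrarily:

```python
u, f = 30.0, 10.0
v = 1 / (1/f - 1/u)             # thin-lens equation gives v = 15.0

# central-difference numerical derivative dv/du
h = 1e-6
v_plus = 1 / (1/f - 1/(u + h))
v_minus = 1 / (1/f - 1/(u - h))
dv_du = (v_plus - v_minus) / (2 * h)

print(dv_du, -(v / u) ** 2)     # both are approximately -0.25
```

The numerical slope matches $-(v/u)^2$, and its magnitude $0.25 = 0.5^2$ is the square of the transverse magnification $v/u = 0.5$.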
Searching repositories for files with forbidden strings | Question: I wrote a small tool for searching git repositories for forbidden strings that I don't want to commit publicly. It's still something basic but it does the job.
The project contains three files.
config.py - this is the file where I store the secrets like which directories to scan and which strings to look for in a long regex (here redacted). This is committed to a VSTS repository because it is private.
import re
paths = [
"c:\\..",
"c:\\.."
]
re_forbidden = re.compile(r"(class)")
main.py - this is the core file. It uses os.walk to list all files and directories. Not all directories and files are processed. Paths that don't get committed like .git or bin and files like dll are skipped. Paths and the re_forbidden regex are imported from the above config.py. When it finds something suspicious it just outputs the path of the affected file.
import os
import time
import itertools
import shutil
import re
import importlib.util
config_spec = importlib.util.spec_from_file_location("config", "c:\\home\\projects\\classified\\python\\sanipyzer\\config.py")
config = importlib.util.module_from_spec(config_spec)
config_spec.loader.exec_module(config)
from pprint import pprint
ignore_dirs = [".git", "bin", "obj", ".vs"]
ignore_files = ["dll", "exe", "pdb", "map"]
def format_filemtime(path):
filemtime = os.path.getmtime(path)
return time.strftime('%Y-%m-%d', time.gmtime(filemtime))
def ignore_dir(dirpath):
for dir in ignore_dirs:
pattern = r"\\" + re.escape(dir) + r"(\\|$)"
if re.search(pattern, dirpath):
return True
return False
def ignore_file(file_name):
for ext in ignore_files:
pattern = r"\." + ext + "$"
if re.search(pattern, file_name):
return True
return False
def sanitize(path):
start = time.perf_counter()
for (dirpath, dirnames, filenames) in os.walk(path):
if ignore_dir(dirpath):
continue
searchable_filenames = [filename for filename in filenames if not ignore_file(filename)]
for filename in searchable_filenames:
full_name = os.path.join(dirpath, filename)
# without 'ignore' it throws for some files the UnicodeDecodeError 'utf-8' codec can't decode byte XXX in position XXX: invalid start byte
with open(full_name, 'r', encoding="utf8", errors="ignore") as searchable:
text = searchable.read()
if config.re_forbidden.search(text):
pprint(full_name)
end = time.perf_counter()
elapsed = round(end - start,2)
print(f"elapsed: {elapsed} sec")
# --- --- ---
def main():
for path in config.paths:
sanitize(path)
if __name__ == '__main__':
main()
test.py - this file is a very simple unittest. It tests the two ignore methods.
import unittest
from main import ignore_dir
from main import ignore_file
class MainTest(unittest.TestCase):
def test_ignore_dir(self):
self.assertTrue(ignore_dir("c:\\temp\\.git"))
self.assertTrue(ignore_dir("c:\\temp\\.git\\abc"))
def test_ignore_file(self):
self.assertTrue(ignore_file("c:\\temp\\project\\program.dll"))
self.assertFalse(ignore_file("c:\\temp\\project\\program.cs"))
# tests the test
#self.assertTrue(ignore_file("c:\\temp\\project\\program.cs"))
if __name__ == "__main__":
# use False to avoid the traceback "Exception has occurred: SystemExit"
unittest.main(exit=False)
What do you think of it? Is this an acceptable solution or are there still many beginner mistakes? I wasn't sure about the two ignore lists whether they are const and should be UPPER_CASE or are they just simple variables?
Answer: In general, it is a good code. However, I have a few remarks. Note: I am assuming that the code works perfectly as it should and will comment only on the style. Also, I assume that you are an experienced programmer and hence will not be explaining the basics, although if something is not clear please let me know.
Configuration
You might want to keep the real paths separate in a configuration file. I'd advise you to create a *.cfg file that would store all the configuration data. I can imagine this being used on different machines with different paths each. I don't think you would like to allow users to modify the code; it is better to give them a config file.
In case you would like to keep the configuration as it is, I'd encourage you to move the lists to config.py and name them (as you mentioned) in UPPER_CASE. Additionally, you might also want to encapsulate them in functions like this:
def get_ignored_dirs():
return ignored_dirs
This allows you to keep the API of config.py constant regardless of what you decide to do with the inners of the configuration.
Lastly, you might consider moving your functions like ignore_dir to a separate file (e.g. files_utils.py) and treating it as a proxy to your configuration.
Method naming
In general I tend to explicitly say that some function returns a boolean value by sticking is at the beginning. For example, it is clear (from the code) that your function ignore_dir checks if the directory should be ignored. This is, however, not clear from the name; one might think that this function ignores some directory. I'd suggest changing the name to is_ignored_dir or something similar. The same goes for ignore_file.
Separation of concerns and responsibilities
It seems to me that your sanitize function is not very SRP-like. Inside, you do many things, including deciding whether to ignore a directory (or file), reading the file and measuring time. I'd suggest dividing those responsibilities among more than one function. You might notice that if you try to write a unit test for this function, it will in reality be an e2e test.
Below is an example of how you can fix it.
def measured(func):
def wrapper(path):
start = time.perf_counter()
func(path)
end = time.perf_counter()
elapsed = round(end - start,2)
print(f"elapsed: {elapsed} sec")
return wrapper
def searchable_files_from_list(filenames):
    for filename in filenames:
        if not is_ignored_file(filename):
            yield filename
def searchable_files_containing_forbidden_text(dirpath, filenames):
    for filename in searchable_files_from_list(filenames):
        full_name = os.path.join(dirpath, filename)
        if contains_forbidden_text(full_name):
            yield full_name
def contains_forbidden_text(full_name):
    with open(full_name, 'r', encoding="utf8", errors="ignore") as searchable:
        text = searchable.read()
    return config.re_forbidden.search(text) is not None
def files_to_sanitize(path):
    for (dirpath, dirnames, filenames) in os.walk(path):
        if is_ignored_dir(dirpath):
            continue
        yield from searchable_files_containing_forbidden_text(dirpath, filenames)
@measured
def print_files_to_sanitize(path):
for file in files_to_sanitize(path):
pprint(file)
Explanation:
Now, all of your methods can be separately unit tested and validated. Let's take look at each function at a time, starting with the last one.
print_files_to_sanitize
This is what is left of your sanitize function. After the refactoring it becomes apparent that the function does not sanitize anything, it just prints a file name. This fact wasn't so clear before because of how many things were happening.
I've added a decorator to the function; this is a very common practice for measuring the running time of a method (you might want to generalize it by using *args and **kw).
files_to_sanitize is a generator that should give you all of the files that should be sanitized. It iterates through your directories, ignoring some of them and getting the files from a second generator.
searchable_files_containing_forbidden_text Given a list of files, it iterates over those with an acceptable file name (provided by the searchable_files_from_list generator) and yields the full path of each one that contains forbidden text. | {
"domain": "codereview.stackexchange",
"id": 32435,
"tags": "python, beginner, python-3.x, regex, file-system"
} |
Simple DBLayer class review | Question: I have created a simple DBLayer class. But I am not sure whether this class has any bugs or issues for use in production.
public class DBLayer
{
private static string connectionString = ConfigurationManager.AppSettings["ConnectionString"];
public static DataTable GetDataTable(string strSQL, CommandType type)
{
var objCmd = new SqlCommand(strSQL);
objCmd.CommandType = type;
return GetDataTable(objCmd);
}
public static DataTable GetDataTable(SqlCommand objCmd)
{
SqlConnection objConn = null;
try
{
objConn = new SqlConnection(connectionString);
objCmd.Connection = objConn;
var objAdapter = new SqlDataAdapter(objCmd);
var ds = new DataSet();
objAdapter.Fill(ds);
return ds.Tables[0];
}
finally
{
objConn.Close();
}
}
public static int ExecuteNonQuery(SqlCommand objCmd)
{
int r = -1;
SqlConnection objConn = null;
try
{
objConn = new SqlConnection(connectionString);
objCmd.Connection = objConn;
objConn.Open();
r = objCmd.ExecuteNonQuery();
}
finally
{
objConn.Close();
}
return r;
}
}
Please review this code.
Answer: Looks good. You may also want some other things like getScalar (a single object or string, as in one cell), getDataRow (just the top row), and getDataRowCollection. I frequently only use a DataTable to get the rows and forget about the columns, so I can iterate over it like so:
DataRowCollection drc = getDataRowCollection("SELECT * FROM tblTest");
foreach (DataRow r in drc) {
Console.WriteLine(r["name"]);
}
So you could have methods like NonQuery as well. You may want to test these before using them:
public DataTable getDataTable(SqlCommand objCmd)
{
SqlConnection objConn = null;
try
{
objConn = new SqlConnection(connectionString);
objCmd.Connection = objConn;
var objAdapter = new SqlDataAdapter(objCmd);
var ds = new DataSet();
objAdapter.Fill(ds);
return ds.Tables[0];
}
finally
{
objConn.Close();
}
}
public DataRowCollection getDataRowCollection(SqlCommand objCmd)
{
return getDataTable(objCmd).Rows;
}
public DataRow getDataRow(SqlCommand objCmd){
return getDataRowCollection(objCmd)[0];
}
public string getScalar(SqlCommand objCmd)
{
SqlConnection objConn = null;
try
{
objConn = new SqlConnection(connectionString);
objCmd.Connection = objConn;
return objCmd.ExecuteScalar().ToString();
}
finally
{
objConn.Close();
}
}
public int NonQuery(SqlCommand objCmd)
{
SqlConnection objConn = null;
try
{
objConn = new SqlConnection(connectionString);
objCmd.Connection = objConn;
return objCmd.ExecuteNonQuery();
}
finally
{
objConn.Close();
}
}
I would recommend writing quite a few variations of the methods, and implementing enums for each connection string so you don't mistype anywhere and have them set in stone.
Take a look at my getScalar variations....
public static string getScalar(string query)
{
return getScalar(query, defaultConnection);
}
public static string getScalar(string query, ConnectionString cs)
{
return getScalar(query, new { }, cs);
}
public static string getScalar(string query, object param, ConnectionString cs)
{
return getScalar(query, param, cs, defaultTimeout);
}
public static string getScalar(string query, object param, int to)
{
return getScalar(query, param, defaultConnection, to);
}
public static string getScalar(string query, object param, ConnectionString cs, int to)
{
return getScalar(ConnectionStringToDb(cs), query, param, to);
}
public static string getScalar(MySqlConnection db, string query)
{
return getScalar(db, query, defaultTimeout);
}
public static string getScalar(MySqlConnection db, string query, int to)
{
return getScalar(db, query, null, to);
}
public static string getScalar(MySqlConnection db, string query, object param, int to)
{
string result = "";
if (db.State != ConnectionState.Open)
OpenConnection(db);
MySqlParameter[] p = ObjectToParameters(param);
using (db)
{
try
{
MySqlCommand command = new MySqlCommand();
command.Connection = db;
command.CommandText = query;
command.CommandTimeout = to;
foreach (MySqlParameter pm in p)
{
command.Parameters.Add(pm);
}
result = command.ExecuteScalar().ToString();
}
catch { }
}
return result;
}
Also, if your class works as expected it should be fine... Looks okay. I use mine day to day, but I use MySql so I'm not too sure about the differences between the two classes; they should be relatively the same. | {
"domain": "codereview.stackexchange",
"id": 3397,
"tags": "c#, sql-server"
} |
How can I determine the frequency of a sine wave signal with gradually increasing frequency? | Question: I have been trying to solve the following problem. I have a DSP algorithm where these variables occur:
$$x(k) = \cos[\theta(k)],\quad y(k) = \sin[\theta(k)]$$
I have recorded both of them and I would like to determine their frequency using offline processing.
The first way I was able to come up with until this moment is to use following approach.
At first determine the argument:
$$\theta(k) = \arctan\left[\frac{y(k)}{x(k)}\right]$$
Then based on that determine the angular speed:
$$\omega_s(k) = \frac{\theta(k) - \theta(k-1)}{T}.$$
The problem with this method is the derivative.
The second approach, which seems better to me, is to use the so-called synchronous reference frame phase-locked loop (SFPLL). This method avoids the derivative, but there will be some transient at the beginning until the control loop inside the SFPLL "finds" the correct frequency.
How can I determine the frequency of my signal?
Answer: Instantaneous frequency is the time derivative of phase. Since the OP already has the analytic signal representation of the chirp, every sample can easily be used for the frequency estimate (unlike zero crossing frequency estimators for real signals, where the better solution in that case is to use the Hilbert Transform to arrive at this point that the OP already has).
The concern with using the derivative that was raised in the comments under the question is the discontinuity in phase. This is resolved by unwrapping the phase first and then using the derivative to compute the instantaneous frequency. Rick Lyons has a nice blog post on alternate discrete-time derivative computations that can have better noise performance (considering the high-frequency noise enhancement of a true derivative).
If using MATLAB, Octave or Python phase unwrapping is a built in function as np.unwrap(phase) in Python or unwrap(phase) in MATLAB/Octave. Otherwise phase unwrapping can be done manually by detecting the discontinuity (easily done since the step change far exceeds any other sample to sample variation).
Also note that for small phase steps and normalized amplitude, the frequency can be well approximated using a cross product frequency discriminator, where $freq \propto \text{IM}\{x[n-1]^*x[n]\}$ for complex samples due to the small angle approximation $\sin(\theta) \approx \theta$. We can see how $\sin(\theta)$ is the imaginary component for the conjugate product of two successive complex samples from the equations developed below where the prior sample $x[n-1]$ is the complex signal represented as $I_1+jQ_1$ and the current sample $x[n]=I_2+jQ_2$:
$$(I_1-jQ_1)(I_2+jQ_2)$$
$$= (I_1I_2+Q_1Q_2) + j(I_1Q_2-Q_1I_2)$$
Given the frequency as a change in phase over a change in time, knowing the time step between the complex samples, if sufficiently oversampled (to maintain the small angle approximation) the frequency is given as:
$$f = \frac{\Delta \theta}{\Delta T} \approx f_s(I_1Q_2-Q_1I_2) $$
How this works is very intuitive when you consider the complex tone as a rotating phasor on the complex (IQ) plane as given in the diagram below. $V_1$ is the sample of the waveform at a particular sample in time and $V_2$ represents the subsequent sample due to the frequency of the waveform. When you multiply phasors, the angles add, thus by conjugating the first phasor, the resulting product will be the difference between the absolute angles of each phasor. This will work regardless of position in the IQ plane (no discontinuity) and for small angles, the angle solution reduces to the imaginary component of the result. If the amplitude is not normalized, the result must be divided by the product of each amplitude (which is normalization). | {
"domain": "dsp.stackexchange",
"id": 11599,
"tags": "discrete-signals, frequency, pll"
} |
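Both approaches from the answer above, demonstrated on a synthetic linear chirp (the sample rate and sweep range are arbitrary choices for illustration):

```python
import numpy as np

fs = 1000.0                      # sample rate, Hz (assumed)
t = np.arange(0, 1.0, 1 / fs)
f0, f1 = 10.0, 50.0              # chirp sweeps 10 Hz -> 50 Hz over 1 s
theta = 2 * np.pi * (f0 * t + 0.5 * (f1 - f0) * t**2)
z = np.cos(theta) + 1j * np.sin(theta)   # x(k) + j*y(k) as in the question

# Method 1: unwrap the arctan phase, then take a first difference
phase = np.unwrap(np.angle(z))
f_unwrap = np.diff(phase) * fs / (2 * np.pi)

# Method 2: cross-product discriminator (small-angle approximation,
# valid here because the phase step per sample stays well below 1 rad)
f_disc = np.imag(np.conj(z[:-1]) * z[1:]) * fs / (2 * np.pi)

print(round(f_unwrap[500], 1))   # roughly 30 Hz at mid-sweep
```

Both estimators track the 10 to 50 Hz sweep; the discriminator's small bias grows with the per-sample phase step, so it benefits from oversampling.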
Weakly normalizing + confluent = strongly normalizing? | Question: I was reading this abstract and saw that they prove weak normalization and confluence. My limited understanding suggests that those two properties should provide strong normalization, which then leaves me confused about why they'd leave that out in the abstract, thus suggesting I might be wrong.
Is it the case that a confluent weakly normalizing system is strongly normalizing? Why/why not?
Answer:
Is it the case that a confluent weakly normalizing system is strongly normalizing? Why/why not?
No, it's not necessarily the case.
Weak normalization means: there exists a reduction strategy that will lead to a normal form.
People often use confluence to mean what rewriting people call "local confluence": this is the property that if $a \leadsto b$ and $a \leadsto c$, then $b$ and $c$ can be joined -- i.e., there exists a $d$ such that $b \leadsto^\ast d$ and $c \leadsto^\ast d$.
This does not rule out the existence of infinite reduction sequences.
As an example, start with the STLC and add a new base type $X$ and constant $c,d : X$ with the reduction rule $c \leadsto c$ and $c \leadsto d$.
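This counterexample can be sketched as a tiny executable rewriting system (the function names and the bounded-loop demonstration are mine; the reduction relation is exactly the two rules above):

```python
# Reduction rules of the counterexample: c ~> c and c ~> d (d is normal).
RULES = {"c": ["c", "d"]}

def reducts(term):
    """All one-step reducts of a term (empty list means the term is normal)."""
    return RULES.get(term, [])

def normalize(term):
    """A weak-normalization strategy: always take the step to 'd' if available."""
    while reducts(term):
        term = "d" if "d" in reducts(term) else reducts(term)[0]
    return term

def lazy_loop(term, steps):
    """A different strategy that keeps choosing c ~> c; it would run forever,
    so here we take only a bounded number of steps to show it makes no progress."""
    for _ in range(steps):
        if "c" in reducts(term):
            term = "c"
    return term

print(normalize("c"))        # 'd': a normal form is reachable (weakly normalizing)
print(lazy_loop("c", 100))   # still 'c': an infinite reduction exists (not strongly normalizing)
```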
There is a weak normalization strategy: use a weak normalization strategy for the STLC and immediately reduce any $c$ to $d$. The calculus is also Church-Rosser, since it inherits this property from the STLC. But it isn't strongly normalizing since you can reduce a $c$ forever. | {
"domain": "cstheory.stackexchange",
"id": 3250,
"tags": "lambda-calculus"
} |
Using a GAN discriminator as a standalone classifier | Question: The goal of the discriminator in a GAN is to distinguish between real inputs and inputs synthesized by the generator.
Suppose I train a GAN until the generator is good enough to fool the discriminator much of the time. Could I then use the discriminator as a classifier that tests whether an input belongs to a single class?
For instance, if I train StyleGAN to be able to synthesize photorealistic cats, could I use the trained discriminator to detect whether an image is a cat or not?
My thinking is that perhaps the discriminator would be more accurate than other classifier models because it has effectively trained on many, many more inputs thanks to the generator.
On the other hand, perhaps the discriminator is somehow worse because it has been trained overwhelmingly on cat-like images (assuming the generator has gotten pretty good), and hasn't seen a wide variety of negative examples. It is concerned less with "is this a cat?" than "what are the tell-tale signs of this being synthetic?"
Answer: Yes, we can use the discriminator of a GAN to classify images. But we should make sure that the images produced by the generator look realistic.
If you have trained your GAN on a large number of images and it is performing well on the dataset, then I suggest you treat the discriminator as a pretrained model (as we do in transfer learning) and train it further on images that were not used to train the GAN earlier. The model is thus fine-tuned on a dataset the GAN wasn't trained on before.
Another similar way could be to only use the weights of the CNN layers and load them in our new model.
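The weight-reuse idea can be sketched without any particular framework (everything here is a hypothetical stand-in, not a real GAN: a "layer" is just a weight matrix, and the point is only that the trained trunk is copied into the new model while the classification head is freshly initialized):

```python
import random

random.seed(0)

def make_layer(n_in, n_out):
    """A stand-in for a trained layer: just a weight matrix."""
    return [[random.random() for _ in range(n_in)] for _ in range(n_out)]

# Pretend this is the trained GAN discriminator: a feature trunk plus its own head.
discriminator = {
    "trunk": [make_layer(8, 16), make_layer(16, 16)],  # conv-like feature layers
    "head": make_layer(16, 1),                          # real-vs-fake output
}

# Build a classifier that reuses (copies) the trunk weights and gets a new head.
classifier = {
    "trunk": [[row[:] for row in layer] for layer in discriminator["trunk"]],
    "head": make_layer(16, 1),  # freshly initialized "is it a cat?" head
}

# The reused layers start out with identical weights; the heads do not.
print(classifier["trunk"] == discriminator["trunk"])  # True
print(classifier["head"] == discriminator["head"])    # False
```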
Suppose I train a GAN until the generator is good enough to fool the
discriminator much of the time. Could I then use the discriminator as
a classifier that tests whether an input belongs to a single class?
You should definitely try reusing a Discriminator from a GAN and share your results too :-). | {
"domain": "datascience.stackexchange",
"id": 6736,
"tags": "neural-network, classification, gan"
} |
How does sound propagate up a tall building? | Question: I live in a flat in a 22-storey tall building. It is a common observation that loud people at level 1 can still be audibly heard even at this height.
Could someone suggest a reason for this counter-intuitive observation? Doesn't sound intensity decrease with the square of the distance from the source? Is it possible that the building is acting as a surface for the sound to bounce and echo off?
Thanks.
Answer: I think that the echo idea (with a bit of finesse) is the best bet. As the original sound propagates as a sphere, as you suggested, every place it touches becomes a transmission point for another spherical wave front; this is basically Huygens' principle. The sound bounces back and forth among all the surfaces in the area creating echoes ad infinitum - these are called multiples. Fresnel explained how interference of these waves could produce a coherent signal at another location. Because there are so many multiples, constructive interference compensates for some of the spherical energy loss - thus you can still hear it on your balcony.
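As a rough sanity check, even before counting the multiples, the inverse-square falloff alone does not make the sound inaudible (a sketch with assumed numbers: a ~90 dB shout measured at 2 m, and a listener 22 storeys, roughly 66 m, up):

```python
import math

source_level_db = 90.0   # a loud voice at the reference distance (assumed)
r_ref = 2.0              # reference measurement distance in metres (assumed)
r_listener = 66.0        # ~22 storeys at ~3 m each (assumed)

# Inverse-square law: intensity ~ 1/r^2, i.e. a level drop of 20*log10(r2/r1) dB
drop_db = 20 * math.log10(r_listener / r_ref)
level_at_listener = source_level_db - drop_db

print(round(drop_db, 1))            # ≈ 30.4 dB of spreading loss
print(round(level_at_listener, 1))  # ≈ 59.6 dB: still clearly audible
```

The multiples then partially offset this spreading loss, as described above.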
Sound and light are sufficiently analogous at this level for the principles to work for both.
This article has some great illustrations and more information:
https://en.wikipedia.org/wiki/Huygens-Fresnel_principle | {
"domain": "physics.stackexchange",
"id": 34798,
"tags": "waves"
} |
Does a black hole have any kind of mass? | Question: Currently in my academics I am studying Gravitation. In the chapter I came across a term called the Escape Velocity (the minimum velocity an object needs to escape a celestial body's gravitational field without any further propulsion). When I was going through the chapter I came to know that the escape velocity of black holes is greater than $c$, and that's why even light can't escape their gravitational field. So proceeding toward my question,
From the information I know,
$$v_{es}=\sqrt\frac{2GM}{R}=\sqrt{2gR}$$
where $v_{es}$ is the escape velocity, $G$ is the universal gravitational constant, $g$ is the acceleration due to gravity of the celestial body, $M$ is the mass of the celestial body and $R$ is the distance between the object and the center of gravitation of the celestial body.
So my question is: for a black hole to have that enormous escape velocity, it must either have an exceptionally large value of $M$ (i.e. its mass) or a very small value of $R$, and I don't know how $R$ can even be defined for a black hole.
So do black holes have enormous mass, resulting in a very large value of $v_{es}$?
Also, I want to know how $R$ affects the value of $v_{es}$ in the case of black holes.
Answer: The radius of a non-rotating black hole is $$r_s = \frac{2GM}{c^2} \tag{1}$$
where $M$ is the mass, $G$ is Newton's constant, and $c$ is the speed of light. This is the distance from the center of the black hole to the event horizon. The event horizon is the surface that traps light and objects; it separates the inside of the black hole from the outside. Anything that passes inside the event horizon can never escape, not even light. This is why it is said that the escape velocity at the surface of a black hole is $c$.
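Plugging numbers into (1) for one solar mass (a sketch; SI constants rounded):

```python
G = 6.674e-11       # Newton's constant, m^3 kg^-1 s^-2
c = 2.998e8         # speed of light, m/s
M_sun = 1.989e30    # one solar mass, kg

r_s = 2 * G * M_sun / c**2           # Schwarzschild radius from eq. (1)
v_es = (2 * G * M_sun / r_s) ** 0.5  # Newtonian escape velocity at r = r_s

print(round(r_s / 1000, 1))  # ~3 km: a solar-mass black hole is tiny
print(v_es / c)              # ≈ 1: the escape velocity at the horizon is c
```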
It is also the case that any object compressed within its Schwarzschild radius ($r < r_s$) is a black hole. But since the mass should be roughly proportional to the volume, and the volume is proportional to $r^3$, while $r_s$ grows in direct proportion to $M$, anything heavy enough will form a black hole. Therefore from (1) the proper answer to your question is: because they are very massive, not because they are small. | {
"domain": "physics.stackexchange",
"id": 17502,
"tags": "general-relativity, gravity, black-holes, mass, escape-velocity"
} |
Responsive web design using HTML5/PHP/Bootstrap3.3.4 | Question: I am trying to understand the basics of responsive web design using bootstrap. So, is this a good way of structuring the layout of a responsive web page or are there any other/better alternatives?
Here's the Gist of the entire file(login page), just in case if you want to download and test.
HTML:
<body>
<!--START-PAGE-HEADER-->
<div class="container-fluid">
<div class="header-image"><img src="../../images/header.jpg"></div>
</div>
<!--END-PAGE-HEADER-->
<div class="container">
<!--START-LOGIN-FORM-->
<section id="loginBox">
<h1>Login:</h1>
<form method="post" action="login_handler.php" class="minimal">
<label for="username"> Username:
<input type="text" name="username" id="username" placeholder="Username"/>
</label>
<label for="password"> Password:
<input type="password" name="password" id="password" placeholder="Password" required="required"/>
</label>
<button type="submit" class="btn-minimal">
Sign in
</button>
</form>
</section>
<!--END-LOGIN-FORM-->
</div>
<!--START-PAGE-FOOTER-->
<div id="footer">
<div class="container-fluid">
<p class="center-text">
Copyright © 2015 Sandeep Chatterjee.
</p>
</div>
</div>
<!--END-PAGE-FOOTER-->
<script src="js/jquery/2.1.3/jquery.min.js"></script>
<script src="js/bootstrap/bootstrap.js"></script>
</body>
CSS:
<link type="text/css" href="css/bootstrap/bootstrap.css" rel="stylesheet">
<style type="text/css">
html {
height: 100%;
}
* {
margin: 0;
padding: 0;
}
body {
font: normal .80em 'trebuchet ms', arial, sans-serif;
background: #FFF;
color: #555;
}
p {
padding: 0 0 20px 0;
line-height: 1.7em;
}
img {
border: 0;
}
.header-image img {
/* Avoid using !important as it breaks natural cascading of stylesheets. See: http://tinyurl.com/asn2mzo */
width: 100% !important;
}
.center-text {
text-align: center;
vertical-align: middle;
}
/* LOGIN-FORM */
section#loginBox {
background-color: rgb(255, 255, 255);
border: 1px solid rgba(0, 0, 0, .15);
border-radius: 4px;
box-shadow: 0 1px 0 rgba(255, 255, 255, 0.2) inset, 0 0 4px rgba(0, 0, 0, 0.2);
margin: 40px auto; /*aligns center*/
padding: 24px;
width: 350px;
}
form.minimal label {
display: block;
margin: 6px 0;
}
form.minimal input[type="text"], form.minimal input[type="email"], form.minimal input[type="number"], form.minimal input[type="search"], form.minimal input[type="password"], form.minimal textarea {
background-color: rgb(255, 255, 255);
border: 1px solid rgb(186, 186, 186);
border-radius: 2px;
-webkit-box-shadow: inset 0 1px 2px rgba(0, 0, 0, 0.08);
-moz-box-shadow: inset 0 1px 2px rgba(0, 0, 0, 0.08);
box-shadow: inset 0 1px 2px rgba(0, 0, 0, 0.08);
display: block;
font-size: 14px;
margin: 6px 0 12px 0;
padding: 8px;
text-shadow: 0 1px 1px rgba(255, 255, 255, 1);
width: 90%;
-webkit-transition: all 0.1s linear;
-moz-transition: all 0.1s linear;
-o-transition: all 0.1s linear;
transition: all 0.1s linear;
}
form.minimal input[type="text"]:focus, form.minimal input[type="email"]:focus, form.minimal input[type="number"]:focus, form.minimal input[type="search"]:focus, form.minimal input[type="password"]:focus, form.minimal textarea:focus, form.minimal select:focus {
border-color: #4195fc;
-webkit-box-shadow: inset 0 1px 2px rgba(0, 0, 0, 0.1), 0 0 8px #4195fc;
-moz-box-shadow: inset 0 1px 2px rgba(0, 0, 0, 0.1), 0 0 8px #4195fc;
box-shadow: inset 0 1px 2px rgba(0, 0, 0, 0.1), 0 0 8px #4195fc;
color: rgb(0, 0, 0);
}
</style>
Answer: I think a good responsive design does not start with downloading 247,387 Bytes of jQuery and 67,546 Bytes of Bootstrap JS.
It only requires some simple CSS. Instead of trying to decipher what Bootstrap is doing with jQuery, take that learning curve and use that time to learn CSS.
Bootstrap is a band-aid. A band-aid that is not needed. And definitely a cure worse than the disease. When something is not working correctly, how do you remedy the problem? It's a CSS problem. You didn't take the time to learn CSS but put on the Bootstrap band-aid. Felt good at first. Now you have a problem. What to do? Is it a Bootstrap problem? A jQuery problem? A CSS problem? It's a CSS problem buried under 300+KB of Bootstrap and jQuery. Good luck with that.
The only reason I know Bootstrap exists, is because so many issues are posted on Stackoverflow. Basically you will at some point have to put yourself at the mercy of others to fix your problem. Will someone be there for you? No it's not that bad. You just have to give up on that creative genius idea that got you stuck.
Bootstrap, Mediocre at Best
If you are going to use 3rd party tools use something designed by those that know what they are doing. Not Bootstrap
Number One, I think a good design has no HTML or CSS errors. I have run bootstrap's pages through the W3C HTML, CSS and Mobile OK Validators. As I recall the page I ran had hundreds of CSS errors. I just now picked a page at random http://getbootstrap.com/javascript/ and 44 CSS and 14 HTML Errors.
Their pages also score 0% (zero) on W3C MobileOK. Mobile experts? I think not.
So when the CSS is not working correctly, what do you do? CSS can be difficult. Now add 300KB of JavaScript and you end up creating work arounds for Bootstrap and jQuery because they are too troublesome to figure out how they work.
I find many (e.g. Bootstrap) associate responsive with mobile, which is not the case. Non-responsive is when the text goes off the right side of the screen and you have to scroll over to read. The horizontal scroll bar will never appear in a responsive design.
Whereas mobile is about font size and readability, viewport, and usability. If the buttons that need to be clicked are too close together and or too small, usability suffers. When on a mobile device have you ever had to zoom in on something to click it or read it? Ever need to zoom and the view port is set to not allow zoom? Why would anyone think they should take away zoom from the user? Keeping that answer to myself.
Another requirement is the Web Server be configured correctly.
Bootstrap has 9 JS files in their <head>; these 6 are served from their server.
http://getbootstrap.com/dist/js/bootstrap.min.js
http://getbootstrap.com/assets/js/docs.min.js
http://getbootstrap.com/assets/js/ie10-viewport-bug-workaround.js
http://getbootstrap.com/assets/js/ie8-responsive-file-warning.js
http://getbootstrap.com/assets/js/ie-emulation-modes-warning.js
These JS files are cached for only 10 minutes, requiring the browser to download them again every time you linger more than 10 minutes. They should also be combined into a single file to reduce HTTP requests.
One other JS file: https://platform.twitter.com/widgets.js is cached for only 30 minutes.
The above are just a few things I found wrong spending about two minutes. There is no excuse for HTML or CSS errors or 0% MobileOK. MobileOK is tough, 20% is OK, 80% is very good. Zero is too typical.
These guys are supposed to be experts. Why can't they get the simple stuff correct?
Too many dysfunctional sites.
The Web is a mess. Too many dysfunctional sites. jQuery is a factor. The problem being that a Web Designer will want some functionality and search for a solution. Many of these published "solutions" use jQuery. The Designer copies and pastes with no knowledge about what they are doing. The result is a hack job.
I assume Bootstrap is supposed to do responsive mobile. Responsive design is one thing and mobile is another. I guess that's why when they say
"Bootstrap is the most popular HTML, CSS, and JS framework for
developing responsive, mobile first projects on the web."
And their pages score a ZERO with the W3C.
jQuery
Lots and lots of sites use it. Does that mean it's great stuff? Not necessarily.
This site uses jQuery. I just right-clicked this page and went to the console tab, where JavaScript errors are reported. This is what I see.
17:37:23.775 The connection to wss://qa.sockets.stackexchange.com/ was interrupted while the page was loading. full.en.js:1:0
17:37:24.341 SyntaxError: test for equality (==) mistyped as assignment (=)? jquery.min.js:2:24785
17:37:24.341 SyntaxError: test for equality (==) mistyped as assignment (=)? jquery.min.js:2:26520
17:37:24.342 SyntaxError: test for equality (==) mistyped as assignment (=)? jquery.min.js:3:13810
17:37:24.342 SyntaxError: test for equality (==) mistyped as assignment (=)? jquery.min.js:3:15975
17:37:24.343 SyntaxError: test for equality (==) mistyped as assignment (=)? jquery.min.js:3:20231
17:37:24.343 SyntaxError: test for equality (==) mistyped as assignment (=)? jquery.min.js:3:20319
17:37:24.343 SyntaxError: test for equality (==) mistyped as assignment (=)? jquery.min.js:3:26137
17:37:24.345 SyntaxError: test for equality (==) mistyped as assignment (=)? jquery.min.js:4:11153
17:37:24.345 SyntaxError: test for equality (==) mistyped as assignment (=)? jquery.min.js:4:11206
17:37:24.345 SyntaxError: test for equality (==) mistyped as assignment (=)? jquery.min.js:4:12076
17:37:24.370 ReferenceError: reference to undefined property f.valHooks[this] jquery.min.js:2:31690
17:37:24.377 SyntaxError: test for equality (==) mistyped as assignment (=)? stub.en.js:2:137
17:37:24.379 ReferenceError: reference to undefined property parent.WebPlayer stub.en.js:1:41
17:37:25.178 SyntaxError: test for equality (==) mistyped as assignment (=)? MathJax.js:19:20446
17:37:25.186 ReferenceError: reference to undefined property args.execute MathJax.js:19:4657
17:37:25.187 ReferenceError: reference to undefined property this.head MathJax.js:19:12656
17:37:25.286 ReferenceError: reference to undefined property a[j] jquery.min.js:2:21397
17:37:25.298 ReferenceError: reference to undefined property a.selector jquery.min.js:2:7919
17:37:25.298 ReferenceError: reference to undefined property a.nodeType jquery.min.js:3:64
17:37:25.298 ReferenceError: reference to undefined property r.fullPostfix stub.en.js:1:7187
17:37:25.301 ReferenceError: reference to undefined property f.event.triggered jquery.min.js:3:1980
17:37:25.462 SyntaxError: mistyped ; after conditional? full.en.js:1:30118
17:37:25.462 TypeError: variable e redeclares argument full.en.js:3:8639
17:37:25.463 TypeError: variable e redeclares argument full.en.js:3:21906
17:37:25.464 ReferenceError: reference to undefined property a[A][o] stub.en.js:2:1136
17:37:25.468 ReferenceError: reference to undefined property h[j] jquery.min.js:4:4241
17:37:25.477 ReferenceError: reference to undefined property a[f.expando] jquery.min.js:2:21216
17:37:25.506 ReferenceError: reference to undefined property MathJax.OutputJax.NativeMML TeX-AMS_HTML-full.js:43:1
17:37:25.513 ReferenceError: reference to undefined property f.isCallback MathJax.js:19:3295
17:37:25.519 ReferenceError: reference to undefined property this.items[t].name[v] TeX-AMS_HTML-full.js:44:6182
17:37:25.523 ReferenceError: reference to undefined property j.require MathJax.js:19:49814
17:37:25.523 ReferenceError: reference to undefined property j.extensions MathJax.js:19:49899
17:37:25.525 ReferenceError: assignment to undeclared variable SETTINGS TeX-AMS_HTML-full.js:46:119
17:37:25.547 ReferenceError: reference to undefined property n.noStyleChar TeX-AMS_HTML-full.js:52:1953
17:37:25.843 ReferenceError: reference to undefined property d[a][2] MathJax.js:19:17403
17:37:25.944 ReferenceError: reference to undefined property f.event.triggered jquery.min.js:3:278
17:37:25.944 ReferenceError: reference to undefined property a.returnValue jquery.min.js:3:6513
17:37:25.944 Use of getPreventDefault() is deprecated. Use defaultPrevented instead. jquery.min.js:3:0
17:37:25.945 ReferenceError: reference to undefined property o[s] jquery.min.js:3:3868
17:37:26.188 ReferenceError: reference to undefined property c[0] jquery.min.js:3:28159
17:37:26.202 ReferenceError: reference to undefined property e.noCode wmd.en.js:2:14819
17:37:26.212 ReferenceError: reference to undefined property this[("s_" + e)] wmd.en.js:1:631
17:37:28.531 ReferenceError: reference to undefined property (intermediate value).value jquery.min.js:2:32273
17:37:30.578 ReferenceError: reference to undefined property d.traditional jquery.min.js:4:12761
17:37:30.578 ReferenceError: reference to undefined property f.ajaxSettings.traditional jquery.min.js:4:14154
17:37:30.867 ReferenceError: reference to undefined property f[0] jquery.min.js:2:1522
17:37:58.891 ReferenceError: reference to undefined property f[1] prettify-full.en.js:1:1056
17:38:03.755 KeyboardEvent.key value "Up" is obsolete and will be renamed to "ArrowUp". For more help https://developer.mozilla.org/en-US/docs/Web/API/KeyboardEvent.key jquery.min.js:3:0
17:42:43.439 ReferenceError: reference to undefined property e.readonly snippet-javascript.en.js:1:444
17:42:43.789 SyntaxError: mistyped ; after conditional? snippet-javascript-codemirror.en.js:1:12589
17:42:43.789 TypeError: variable t redeclares argument snippet-javascript-codemirror.en.js:1:29375
17:42:43.789 TypeError: variable n redeclares argument snippet-javascript-codemirror.en.js:4:6309
17:42:43.833 ReferenceError: reference to undefined property e[(e.length - 1)] snippet-javascript-codemirror.en.js:3:20333
17:42:43.833 ReferenceError: reference to undefined property i.clearRedo snippet-javascript-codemirror.en.js:3:17520
17:42:43.863 ReferenceError: reference to undefined property e.display.blinker snippet-javascript-codemirror.en.js:2:20417
17:42:43.869 ReferenceError: reference to undefined property e.innerMode snippet-javascript-codemirror.en.js:4:7543
17:42:43.970 ReferenceError: reference to undefined property o.resize snippet-javascript.en.js:1:4720
17:42:52.120 ReferenceError: reference to undefined property a[h] jquery.min.js:2:21903
I find 200+ errors on one page to be UNACCEPTABLE.
And therefore I find jQuery to be Unacceptable.
It's not just this site, it's every site using jQuery that has these errors.
The number of errors keeps piling up; I now have many hundreds of errors on just this one page. I had to remove many of the errors as this post allows a max of 30,000 characters.
Responsive Design 101:
Whenever possible, all horizontal CSS widths should be in em instead of px or a percentage. All font-size values should be specified in em. Height should not be specified, so that when the width is reduced the height will increase automatically.
The basic structure of the page:
<body><div id="page">
</div></body>
The page width should not be 100%. Even in a mobile design.
Where the "experts" (e.g. Google PageSpeed Insights) say a mobile design should always have this meta tag:
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
With this tag the width of #page would be the device width. A responsive design cannot have a fixed width.
A desktop responsive #page will have a max-width and a margin:0 auto; to center it in larger windows.
For mobile the max width should be used in the viewport meta tag.
If the max-width is 60em, then the viewport is 16 times that in pixels, or 60 x 16 = 960.
<meta name="viewport" content="width=960, initial-scale=1.0" />
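The em-to-pixel arithmetic as a quick sketch (assuming the browser-default 16px per em):

```python
base_font_px = 16     # browser default font size (assumed)
max_width_em = 60     # the page's max-width in em

# The pixel width to use in the viewport meta tag
viewport_px = max_width_em * base_font_px
print(viewport_px)    # 960
```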
Sometimes specifying the content width in pixels works better than "device-width". Even when it does work better, Google's Insights deducts points. However it is still a good validation tool. It does find mistakes.
Your page:
This is bad:
<div class="header-image"><img src="../../images/header.jpg"></div>
It is not a good idea to have an image without width and height specified. It creates problems with the browser's "first paint" rendering of the page. What happens is that the browser does not know the image's dimensions until the image is retrieved, requiring a "re-paint". First paint comes before, or simultaneously with, the downloading of content. Then Bootstrap likely comes along after page load and changes the dimensions again, requiring another "re-paint". Not to mention the CPU cycles unnecessarily expended to scale the image to a size other than the actual image dimensions.
If you want to scale images, use SVG, as in Scalable Vector Graphics.
While the following CSS passes CSS Level 3, it is totally unnecessary to use rgb
background-color: rgb(255, 255, 255);
It should be:
background-color: #fff;
When I see something like this I have to wonder what the person that did this was thinking.
html {
height: 100%;
}
And it's not because of all the unnecessary bandwidth wasting white space either. 100% of what? Just by the nature of HTML it is going to always expand to 100% of the content. 100% of the Browser window? Why? If there is not enough content to fill the window, why fill it with 100% of nothing?
Your page no Bootstrap or jQuery:
Basically I changed the #loginBox CSS from width: 350px; to max-width:20em;
I use Firefox with the Responsive Design Plug-In to check responsiveness.
I narrowed the browser window to 125px and the above is the result.
With a Browser window width of 800px below is the result:
<!DOCTYPE html>
<html><head><title>Login</title>
<style type="text/css">
body {
font: normal .80em 'trebuchet ms', arial, sans-serif;
background: #FFF;
color: #555;
}
p {
padding: 0 0 20px 0;
line-height: 1.7em;
}
img {
border: 0;
}
.center-text {
text-align: center;
vertical-align: middle;
}
/* LOGIN-FORM */
#loginBox {
background-color: #fff;
border: 1px solid rgba(0, 0, 0, .15);
border-radius: 4px;
box-shadow: 0 1px 0 rgba(255, 255, 255, 0.2) inset, 0 0 4px rgba(0, 0, 0, 0.2);
margin: 2.5em auto; /*aligns center*/
padding: 1.5em;
max-width: 20em;
}
form.minimal label {
display: block;
margin: 6px 0;
}
form.minimal input[type="text"], form.minimal input[type="email"], form.minimal input[type="number"], form.minimal input[type="search"], form.minimal input[type="password"], form.minimal textarea {
background-color: rgb(255, 255, 255);
border: 1px solid rgb(186, 186, 186);
border-radius: 2px;
-webkit-box-shadow: inset 0 1px 2px rgba(0, 0, 0, 0.08);
-moz-box-shadow: inset 0 1px 2px rgba(0, 0, 0, 0.08);
box-shadow: inset 0 1px 2px rgba(0, 0, 0, 0.08);
display: block;
font-size: .9em;
margin: 6px 0 12px 0;
padding: 8px;
text-shadow: 0 1px 1px rgba(255, 255, 255, 1);
width: 90%;
-webkit-transition: all 0.1s linear;
-moz-transition: all 0.1s linear;
-o-transition: all 0.1s linear;
transition: all 0.1s linear;
}
form.minimal input[type="text"]:focus, form.minimal input[type="email"]:focus, form.minimal input[type="number"]:focus, form.minimal input[type="search"]:focus, form.minimal input[type="password"]:focus, form.minimal textarea:focus, form.minimal select:focus {
border-color: #4195fc;
-webkit-box-shadow: inset 0 1px 2px rgba(0, 0, 0, 0.1), 0 0 8px #4195fc;
-moz-box-shadow: inset 0 1px 2px rgba(0, 0, 0, 0.1), 0 0 8px #4195fc;
box-shadow: inset 0 1px 2px rgba(0, 0, 0, 0.1), 0 0 8px #4195fc;
color: rgb(0, 0, 0);
}
</style></head><body>
<!--START-PAGE-HEADER-->
<div class="container-fluid">
<div class="header-image"><img src="../../images/header.jpg"></div>
</div>
<!--END-PAGE-HEADER-->
<div class="container">
<!--START-LOGIN-FORM-->
<section id="loginBox">
<h1>Login:</h1>
<form method="post" action="login_handler.php" class="minimal">
<label for="username"> Username:
<input type="text" name="username" id="username" placeholder="Username"/>
</label>
<label for="password"> Password:
<input type="password" name="password" id="password" placeholder="Password" required="required"/>
</label>
<button type="submit" class="btn-minimal">
Sign in
</button>
</form>
</section>
<!--END-LOGIN-FORM-->
</div>
<!--START-PAGE-FOOTER-->
<div id="footer">
<div class="container-fluid">
<p class="center-text">
Copyright © 2015 Sandeep Chatterjee.
</p>
</div>
</div>
<!--END-PAGE-FOOTER-->
</body></html> | {
"domain": "codereview.stackexchange",
"id": 12736,
"tags": "html, css, html5"
} |
Is there any way to align ChIP-seq reads to telomeres? | Question: I know that telomeres are highly repeated sequences, but is there any way to retain any reads that map to these regions (on HG38)?
I recently managed to find some protein binding to centromeres, which are also mainly repeats. Therefore I wondered if there was any way to pick up signal on the telomeres.
My team and I are investigating binding of a protein complex to DNA and the experimental team reported that they saw it a bit on telomeres, so we were wondering if there was any way to measure this through bioinformatics. If not then it's no problem, we would just like to know.
Answer: This paper https://www.biorxiv.org/content/10.1101/728519v1.full examines the mapping of some human telomeric sequences, and indicates that there is something like 130kb missing at the telomeres in the HG38 assembly. One option might be to take the HG38 assembly, and add some missing telomere sequences as an additional pseudochromosome to the FASTA file. Then if you used this modified reference to align your reads, you should soon get a feel for the level of telomeric reads in your data set. | {
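A minimal sketch of that approach in Python (the file name `telomere_extras.fa` is a hypothetical placeholder for whatever extra telomeric records you obtain):

```python
def append_pseudochromosome(reference_fa, extra_fa, out_fa):
    """Concatenate the HG38 FASTA with an extra-pseudochromosome FASTA so
    that an aligner index built from out_fa includes the telomeric records."""
    with open(out_fa, "w") as out:
        for path in (reference_fa, extra_fa):
            with open(path) as fh:
                for line in fh:
                    out.write(line)

# e.g. append_pseudochromosome("hg38.fa", "telomere_extras.fa", "hg38_plus_telo.fa")
```

You would then rebuild the aligner index from the combined FASTA before mapping your reads.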
"domain": "bioinformatics.stackexchange",
"id": 1135,
"tags": "sequence-alignment, genome, chip-seq, repeat-elements, telomere"
} |
How do you turn a minimal CIF description into a complete one? | Question: I have a CIF file that I downloaded from the PDB, but if I try to use it in Coot, it complains that it is not a complete CIF definition. This page provides a batch script that I may be able to tease apart to suit my needs, but I thought there was a way to do this from within the CCP4 Program Suite GUI/job manager.
Maddeningly, I see in my project job history a couple "dictionary" jobs where I did this, but I utterly forgot how I set them up (forgive the asinine names):
When I try to use the Monomer Library Sketcher (under Refinement/Restraint Preparation) and open a CIF ("mmCIF") file, nothing happens.
Also under the sketcher, I can severely abuse the "Load Monomer from Library" and it looks like I get the right thing:
However, when setting up the above, it throws "Error: can't read "monomer_lib(code,n..." warnings, then when running it generates the Geometry file (seemingly successfully) then says that Libcheck failed when trying to create the coordinate file.
If there are other solutions, I'd be interested as well. I've used PRODRG to create CIF files before with its draw tool, but it can be a bit clumsy, it ignores how I drew my double bonds, and it creates new names for all the atoms.
Answer: Not quite as ideal as I had hoped, but it works:
Get and install CCP4.
Obtain minimal CIF file from PDB or elsewhere. Save to some folder easy to get at with the console.
Launch console and cd to folder with the minimal definition.
Run libcheck by typing it and hitting return
If it complains it's not found, the CCP4 bin directory may not be in your PATH environment variable. You can get around this by specifying the full path to the program, e.g.: c:\CCP4-Packages\ccp4-6.1.13\bin\libcheck.exe (may vary with version)
Say "Y" or "A" at the _DOC: prompt for some more files, "N" if you don't care (I generally don't)
At the --> prompts, specify:
the input file, e.g.: file_l: ABC.cif
the monomer: mon: ABC
output file name: file_o: ABC_out
Leave a prompt blank and hit return to start processing.
If it runs successfully it might look something like this:
C:\Users\Nick>libcheck
--- LIBCHECK --- /Vers 4.2.8 ; 02.06.2009/
Do you want to have FILE-DOCUMENT /libcheck.doc/ ? /<N>/Y/A :
N - means without DOC-file
Y - with new contents
A - means to keep old contents and add new information
with DOC-file program creates batch file: libcheck.bat
_DOC:n
#
# Keywords:
#
#FILE_L: < > - additional library, " " means without this file
#MON: < > - give info about this monomer
... (shortened) ...
#SRCH: <N>/Y/0 - Y - global search, 0 - for MON from PDB_file
# (only with NODIST = N)
#--- type "keyword parameters" and/or ---
#--- press key "CR" to run program ---
-->file_l: ABC.cif
-->mon: ABC
-->file_o: ABC_out
-->
MON : ABC
-------------
Output file :ABC_comp
Input user lib:ABC.cif
_chem_comp.name "DRUG INTERMEDIATE"
17 55
"DRUG INTERMEDIATE"
-------------
Keywords:
HFLAG : Y
COOR : N
LCOOR : Y
SRCH : 0
REF : Y
NODIST: Y
ERROR ==> In the loop containing the item_chem_comp.three_letter_code
ERROR ==> The number of expected items and the number existing items do not match
ERROR ==> The number of expected items is 7
NUMBER OF MONOMERS IN THE LIBRARY : 1
with complete description : 0
NUMBER OF MODIFICATIONS : 0
NUMBER OF LINKS : 0
I am reading libraries. Please wait.
- energy parameters
- monomer"s description (links & mod )
I am reading library. Please wait.
- monomer"s description
WARNING : monomer:ABC - has the minimal description.
now monomer:ABC - has complete description.
I will check it.
* CIFile : ABC_out_ABC.cif
* PDBfile : ABC_out_ABC.pdb
* Plotfile: ABC_out_ABC.ps
I am writing new description to
file: ABC_out.lib
C:\Users\Nick>
It yields the following output files:
(file_o): looks like a CIF file with lots of whitespace and records stripped out
(file_o).lib: the desired, complete definition file
(file_o)_(mon).cif: an even more minimal CIF file with only XYZ coords
(file_o)_(mon).pdb: a PDB file
(file_o)_(mon).ps: a document with a stereo labeled diagram, and lists of bonds, angles, chirality (centers?) and planes. Can convert it to the easier-to-use PDF format using this http://www.ps2pdf.com/convert.htm, or other tools. | {
"domain": "biology.stackexchange",
"id": 134,
"tags": "structural-biology, coot, ccp4"
} |
How do tightly packed plates move in the theory of plate tectonics? | Question: Here are two questions I had ever since I first heard about plate-tectonics.
How can the plates move? Before you suggest me some page to read about plate movement mechanics, let me clarify that I am not asking about the mechanics behind plate movement here. But I am talking about the physical possibility of plate movement.
When we look at the map of plates, all of them are tightly packed, with not an inch of space or gap to move. The complex shape of surrounding plates, blocks all freedom of movement. Then how can the theory say that plates can move?
If you can't follow my explanation, consider a jigsaw puzzle in which the blocks are correctly set up. Can we move a block from the middle now? We cannot, because the blocks are tightly interlocked, and because of their complex shape. The situation here is the same, in fact much more complex! Each plate is tightly surrounded by other complex-shaped plates, making any movement in any direction impossible!
See the map of tectonic plates given below, for example. It is evident the plates are actually interlocked so tightly that it is impossible to move in any direction, because of the presence of another plate in the opposite direction. There is no freedom of movement available in any direction! The complex shape of the plates makes movements impossible!
My second question is this: It is said that, all continents were a single large continent millions of years ago. And then much later, continents 'drifted away', reaching current shape and locations.
If tectonic plates do not even have hundreds of miles of gaps between them to move, how can continents drift away so far, even thousands of miles away? Certainly plates can't move this far, because the whole surface of the Earth is divided into plates, with no space left for plates to move thousands of miles.
So how did the continents drift away this far? Or do we have to assume that continents are simply floating over the tectonic plates?
Is the plate tectonics theory a complete hoax?
Answer:
isn't the plate tectonics theory a complete hoax?
No, it isn't. It's a perfectly valid theory that has much supporting evidence from all disciplines of earth sciences, and it explains features that would otherwise remain unexplained. It is so widely and universally accepted that questioning it (particularly in the way you are doing) is likely to elicit a very hostile response from the scientific community, similar to how people would react to questioning Newton's laws of gravity.
How can the plates move? (before you suggest me some page to read about plate movement mechanics, let me clarify that i am not asking about the mechanics behind plate movement here. But i am talking about the physical possibility of plate movement)
Since you are not asking about the mechanics I will spare you from explanation about mantle convection and similar processes.
all of them are tightly packed, with not an inch of space or gap to move...
You are entirely correct. They don't have any gap to move. This is why we have mountain ranges where continental plates collide (or converge):
In places where an oceanic plate converges, one plate can go underneath the other in a process called subduction. This causes melting around the subducting plate, causing the formation of volcanic arcs:
Where plates pull apart and diverge you can have impressive rift valleys:
Or submarine trenches:
And of course, let's not forget about the earthquakes that occur when all of these things form. While you may doubt it, it is very real to the people affected by such natural disasters.
Everything I just mentioned happens because the plates are so tight and have nowhere to move. Just because you think a theory is incorrect because it doesn't fit your extremely simplistic way of how things should work, does not mean it is in fact incorrect. Nature does its thing regardless of what you, me, and everyone else thinks.
(Photographs either public domain or mine) | {
"domain": "earthscience.stackexchange",
"id": 1201,
"tags": "geology, plate-tectonics"
} |
2d CFT versus higher dimensions | Question: I am starting a reading course this quarter in conformal field theory. Is it necessary to do CFT in 2 dimensions first in order to understand it in 3 or greater dimensions, or would I be able to just start off in 3 or greater? Or is there a good reference that treats them simultaneously?
Answer: In my opinion, any course in specifically 2d CFT will not be particularly useful if your aim is 3d and higher. At least I am not aware of any counterexamples.
The reason is that in 2d the central topic is Virasoro symmetry, which is not available in higher dimensions. You could think that a 2d text would address stuff useful to higher dimensions before going to Virasoro symmetry, but that doesn't happen, since the questions which are highly non-trivial in higher dimensions are completely trivial in 2d (e.g. operators with spin, correlators of operators with spin, $SO(d+1,1)$ conformal blocks, etc.)
So, answering the first part of your question, if you understand higher dimensional CFT's, you will certainly understand the non-Virasoro aspects of 2d CFT's, but going the other way around is probably not going to help. If your goal is higher dimensional CFT's, you might not want to waste your time on exclusively 2d texts (which is not to say that the full 2d story is not worth learning).
For higher-dimensional CFT references, I would suggest David Simmons-Duffin's TASI lectures on conformal bootstrap and Slava Rychkov's EPFL lectures and references therein. These are perhaps mostly oriented towards conformal bootstrap applications. Joshua D. Qualls' lectures treat both 2d and higher-d, so may be a good start if you are not familiar with the 2d story.
It is also worth stressing that the higher-dimensional CFT's are a topic of active research, and a lot of important material exists only in papers (for example, most of the story for operators with spin). The references in the above lecture notes can hopefully serve as a good guide. | {
"domain": "physics.stackexchange",
"id": 36804,
"tags": "resource-recommendations, education, conformal-field-theory, spacetime-dimensions"
} |
Infinitely many soft photons from pushing an electron? | Question: I have been looking at some of the archives here and seen it quoted by Ron Maimon that pushing an electron with a classical field means the electron will produce infinitely many soft photons should the universe be flat. Is this to be taken literally, or could it be explained in more detail to me?
Answer: I will try interpreting the answer by Ron to the question since he is not allowed to talk for himself.
How-many-photons-can-an-electron-absorb-and-why
answer by Ron, broken down by concept
Unboundedly many, because the photon number is not conserved.
This is a correct general statement. The photon is a boson and the numbers are not conserved, just energy-momentum and angular momentum
Every time you push an electron with a classical field, you produce infinitely many soft photons (if the universe is flat at infinity)
Push and pull are not standard physics terminology, I suppose he means accelerates.
and conversely, any long-range field which pushes the electron has infinitely many soft photons getting absorbed in a sense, although you can't tell photons apart, so you can't distinguish the ones that were absorbed from the ones that were emitted.
Electrons in an electric field are attracted or repulsed (the field pushes the electron according to these simple-minded terms).
He is actually speaking of synchrotron radiation
The electromagnetic radiation emitted when charged particles are accelerated radially is called synchrotron radiation.
and bremsstrahlung.
Bremsstrahlung (from bremsen, "to brake", and Strahlung, "radiation", i.e. "braking radiation" or "deceleration radiation") is electromagnetic radiation produced by the deceleration of a charged particle when deflected by another charged particle, typically an electron by an atomic nucleus. The moving particle loses kinetic energy, which is converted into a photon because energy is conserved. The term is also used to refer to the process of producing the radiation. Bremsstrahlung has a continuous spectrum,
A continuous spectrum extended to infinity will have an infinite number of photons, but the great majority will be too soft to be measurable. I do not know where flat space enters in the argument.
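That divergence can be made quantitative (a standard result, sketched here for completeness): at low frequencies the bremsstrahlung energy spectrum is approximately flat, $dE/d\omega \approx \text{const}$, so the photon number spectrum behaves as
$$\frac{dN}{d\omega} = \frac{1}{\hbar\omega}\frac{dE}{d\omega} \propto \frac{1}{\omega}, \qquad N = \int_{\omega_{\min}}^{\omega_{\max}}\frac{dN}{d\omega}\,d\omega \sim \ln\frac{\omega_{\max}}{\omega_{\min}} \to \infty \quad\text{as } \omega_{\min}\to 0,$$
so the photon number diverges logarithmically at the soft end while the total radiated energy stays finite. This is the infrared divergence behind the "infinitely many soft photons" statement.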
In these semiclassical situations, the electron continually emits and absorbs photons and energy/momentum conservation is balanced up by the field.
Hope this helps. | {
"domain": "physics.stackexchange",
"id": 10989,
"tags": "quantum-mechanics, newtonian-mechanics"
} |
Why does Fourier sampling allow to efficiently recover hidden subgroups? | Question: The hidden subgroup problem is often cited as a generalisation of many problems for which efficient quantum algorithms are known, such as factoring/period finding, the discrete logarithm problem, and Simon's problem.
The problem is the following: given a function $f$ defined over a group $G$ such that, for some subgroup $H\le G$, $f$ is constant over the cosets of $H$ in $G$, find $H$ (through a generating set).
In this context, $f$ is given as an oracle, which means that we don't care about the cost of evaluating $f(x)$ for different $x$, but we only look at the number of times $f$ must be evaluated.
It is often stated that a quantum computer can solve the hidden subgroup problem efficiently when $G$ is abelian. The idea, as stated for example in the wiki page, is that one uses the oracle to get the state
$\lvert gH\rangle\equiv\sum_{h\in H} \lvert gh\rangle$
for some $g\in G$, and then the QFT is used to efficiently recover from $\lvert gH\rangle$ a generating set for $H$.
Does this mean that sampling from $\operatorname{QFT}\lvert gH\rangle$ is somehow sufficient to efficiently reconstruct $H$, for a generic subgroup $H$? If yes, is there an easy way to see how/why, in the general case?
Answer: The question is whether taking the Fourier transform $\operatorname{QFT}|gH\rangle$ followed by sampling allows one to efficiently recover generators of the hidden subgroup $H\leq G$. While the problem is wide open for non-abelian groups (see this paper for a discussion of the limitations of the Fourier sampling method for instances in case of $G=S_n$, $G=PSL(2,\mathbb{F}_q)$ and other non-abelian groups), for abelian groups $G$ the OP is correct that Fourier sampling solves the abelian hidden subgroup problem.
The basic idea is to measure $\operatorname{QFT}|gH\rangle$ several times and to note that a result $z$ can be sampled if and only if $z\in H^\perp$ holds, where $H^\perp = \{ g \in G : \chi_g(h)=1 \; \forall h \in H\}$. Here we (non-canonically) identified the characters $\chi \in \hat{G}$ with the elements of $G$ (which is possible if and only if $G$ is abelian). One can prove that repeating this procedure $\log^2(|G|)$ times will with constant probability uniquely characterize $H$ from the measurement results $z_1, z_2, \ldots$.
What is more, if the group $G$ is explicitly known (i.e., one knows an isomorphism to a direct product of cyclic groups), then one can efficiently compute $H$ from $H^\perp$ using classical post-processing which essentially is linear algebra. A good reference for this is Brassard and Hoyer. If the group $G$ is abelian, but the structure is not known, then one can first discover the structure of $G$ and then find $H$ in a subsequent step. This was described in Cheung and Mosca. However, all this assumes at a minimum that $G$ has the structure of a black-box group with unique encoding. As shown by Ivanyos et al, even in case of non-unique encodings, one can recover the hidden subgroup, provided certain additional assumptions hold such as the existence of oracles for identity and membership test. | {
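The classical post-processing can be illustrated concretely in the simplest abelian case, $G = (\mathbb{Z}_2)^n$ (Simon's problem): the sampled strings lie in $H^\perp$, and $H$ is recovered as the nullspace of the sample matrix over $\mathrm{GF}(2)$. A minimal sketch of just that linear-algebra step (the quantum sampling itself is assumed):

```python
def gf2_nullspace(rows, n):
    """Return a basis of {x in GF(2)^n : M x = 0}, where M has the given rows.
    In the hidden subgroup setting the rows are sampled elements of H-perp,
    and the nullspace is a generating set of H itself."""
    rows = [r[:] for r in rows]          # work on a copy
    pivots, r = [], 0
    for c in range(n):                   # Gauss-Jordan elimination over GF(2)
        pivot = next((i for i in range(r, len(rows)) if rows[i][c]), None)
        if pivot is None:
            continue
        rows[r], rows[pivot] = rows[pivot], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][c]:
                rows[i] = [a ^ b for a, b in zip(rows[i], rows[r])]
        pivots.append(c)
        r += 1
    basis = []
    for f in (c for c in range(n) if c not in pivots):
        v = [0] * n
        v[f] = 1
        for i, c in enumerate(pivots):   # back-substitute the pivot entries
            v[c] = rows[i][f]
        basis.append(v)
    return basis
```

For example, samples $[1,1,0]$ and $[0,0,1]$ from $H^\perp$ yield the single generator $[1,1,0]$, i.e. $H=\{000, 110\}$.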
"domain": "quantumcomputing.stackexchange",
"id": 561,
"tags": "quantum-algorithms, quantum-fourier-transform, fourier-sampling, hidden-subgroup-problem"
} |
Refactoring the following methods to make it easier to read | Question: Would it be possible to refactor the following methods?
If so - how would I go about best doing it?
public void PlotListView_SelectionChanged(object sender, SelectionChangedEventArgs e)
{
if (PlotListView.SelectedItems.Count == 0) return;
var selectedItem = (PlotComponent.PlotList) PlotListView.SelectedItems[0];
_focusPlotReference = Convert.ToInt32(selectedItem.PlotId);
}
public void WatchListView_SelectionChanged(object sender, SelectionChangedEventArgs e)
{
if (WatchListView.SelectedItems.Count == 0) return;
var selectedItem = (PlotComponent.PlotList) WatchListView.SelectedItems[0];
_focusWatchReference = Convert.ToInt32(selectedItem.PlotId);
}
public void PositionListView_SelectionChanged(object sender, SelectionChangedEventArgs e)
{
if (PositionListView.SelectedItems.Count == 0) return;
var selectedItem = (PlotComponent.PlotList) PositionListView.SelectedItems[0];
_focusPositionReference = Convert.ToInt32(selectedItem.PlotId);
}
private void FocusPlotItem(int focusPlotReference)
{
Dispatcher.Invoke(
(() =>
{
PlotComponent.PlotList plotList =
PlotListView.Items.OfType<PlotComponent.PlotList>()
.FirstOrDefault(p => Convert.ToInt32(p.PlotId) == focusPlotReference);
if (plotList == null) return;
//get visual container
var container = PlotListView.ItemContainerGenerator.ContainerFromItem(plotList) as ListViewItem;
if (container == null) return;
container.IsSelected = true;
container.Focus();
}));
}
private void FocusWatchItem(int focusWatchReference)
{
Dispatcher.Invoke(
(() =>
{
PlotComponent.PlotList watchList =
WatchListView.Items.OfType<PlotComponent.PlotList>()
.FirstOrDefault(p => Convert.ToInt32(p.PlotId) == focusWatchReference);
if (watchList == null) return;
//get visual container
var container = WatchListView.ItemContainerGenerator.ContainerFromItem(watchList) as ListViewItem;
if (container == null) return;
container.IsSelected = true;
container.Focus();
}));
}
private void FocusPositionItem(int focusPositionReference)
{
Dispatcher.Invoke(
(() =>
{
PlotComponent.PlotList positionList =
PositionListView.Items.OfType<PlotComponent.PlotList>()
.FirstOrDefault(p => Convert.ToInt32(p.PlotId) == focusPositionReference);
if (positionList == null) return;
//get visual container
var container =
PositionListView.ItemContainerGenerator.ContainerFromItem(positionList) as ListViewItem;
if (container == null) return;
container.IsSelected = true;
container.Focus();
}));
}
Answer: Those three event handlers repeat a lot of code, which should be extracted into a method:
private int GetPlotId(object sender)
{
var listView = sender as ListView;
if(listView == null || listView.SelectedItems.Count == 0)
return int.MinValue;
return Convert.ToInt32(((PlotComponent.PlotList) listView.SelectedItems[0]).PlotId);
}
(You might even want to add even more checks, or perhaps remove some.)
They can then be rewritten like this:
public void PlotListView_SelectionChanged(object sender, SelectionChangedEventArgs e)
{
var plotId = GetPlotId(sender);
if(plotId == int.MinValue)
return;
_focusPlotReference = plotId;
}
Perhaps this could be improved further, but I don't have a VS ready at the moment. Which is also why I won't be tackling the second set of methods right now.
I don't know what a PlotList is, but since it seems to be a single item, I don't feel List is a good name for it.
I'm also not sure why you'd need to add the namespace to the class explicitly: PlotComponent.PlotList. That's usually a red flag for me.
Could you also indicate what kind of project this is? I'm guessing Windows Forms, but it would be useful to get a little more context. | {
"domain": "codereview.stackexchange",
"id": 11996,
"tags": "c#"
} |
Virial theorem for momentum fluctuations | Question: In the paper https://journals.aps.org/pr/abstract/10.1103/PhysRev.109.1464 Lebowitz attempted to derive an estimation of the magnitude of center of mass momentum fluctuations in the presence of external confining potential (i.e walls). He considers $N$ classical particles interacting via a pair potential $U(x_{i}, x_{j})$ subject to confining potential (thought of as walls) $V(x_{i})$, thus from
$\dot{p_{i}} = - \sum_{j} \partial_{x_{i}}U(r_{ij}) - \partial_{x_{i}}V(x_{i})\tag{1}$
he arrives (after several clear, simple and understandable steps) at the relation
$\frac{<P^{2}>}{2M} = \sum_{ij} <x_{i}\partial_{x_{j}}U(x_{j})>\tag{2}$
where <> indicates the time average and $P$ indicates the center of mass momentum. Then he mentions the equivalence of time averages and phase space average but writes without further arguments the following equation
$\frac{<P^{2}>}{2M} = \int dx_{1}\, \rho(x_{1})\, x_{1}\partial_{x_{1}}U(x_{1}) + \int \int dx_{1} dx_{2}\, x_{1}\partial_{x_{2}}U(x_{2})\,\rho(x_{1}, x_{2})\tag{3}$
I struggle to understand the step from (2) to (3) and I would appreciate hints. A further question is how to use equation (3) to explicitly derive the relation $\sqrt{<(V_{z})^{2}>}=\sqrt{\frac{k_{B}T}{mN}}$ that is mentioned in the appendix of the paper.
To add more context, my interest is in finding estimations of the fluctuations of center of mass momentum in systems such as classic particles forming liquid or gas and confined between walls (i.e exchanging momentum with the boundaries). This is asked in more detail here Fluctuations of the center of mass of a confined fluid?
Answer: I will only consider the initial question about formula (3). Let $w_N(x_1,\ldots,x_N)$ be the $N$-particle probability density function of the Gibbs distribution. The explicit form of this function is not relevant for this discussion. We only need to use its symmetry property: the function $w_N$ does not change under any permutations of the coordinates of pairs of particles $x_i \leftrightarrow x_j$. Let us also define the number density $\rho(x_1)$ and the pair density $\rho(x_1,x_2)$ as follows
$$
\rho(x_1) = N\int dx_2\ldots dx_N\ w_N(x_1,x_2,\ldots,x_N),
$$
$$
\rho(x_1,x_2) = N(N-1)\int dx_3\ldots dx_N\ w_N(x_1,x_2,\ldots,x_N).
$$
Now consider the average value of the quantity on the right side of equality (2). We have
$$
\overline{\sum_{i,j}x_i\partial_{x_j}U(x_j)} = \sum_{i}\overline{x_i\partial_{x_i}U(x_i)} + \sum_{i\neq j}\overline{x_i\partial_{x_j}U(x_j)},\tag{I}
$$
where
$$
\overline{x_i\partial_{x_j}U(x_j)} = \int dx_1\ldots dx_N\ w_N(x_1,\ldots,x_N)\ x_i\partial_{x_j}U(x_j).
$$
Due to the symmetry of $w_N$ and the definition of densities, we further have
$$
\overline{x_i\partial_{x_i}U(x_i)} = \overline{x_1\partial_{x_1}U(x_1)} =
\frac1N \int dx_1\ \rho(x_1)\ x_1\partial_{x_1}U(x_1)\tag{II}
$$
and for $i\neq j$
$$
\overline{x_i\partial_{x_j}U(x_j)} = \overline{x_1\partial_{x_2}U(x_2)} =
\frac1{N(N-1)} \int dx_1dx_2\ \rho(x_1,x_2)\ x_1\partial_{x_2}U(x_2)\tag{III}
$$
Thus, the first sum on the right side of equality (I) has $N$ terms of the form (II), and the second sum has $N(N-1)$ terms of the form (III), from here we get what we need
$$
\overline{\sum_{i,j}x_i\partial_{x_j}U(x_j)} = \int dx_1\ \rho(x_1)\ x_1\partial_{x_1}U(x_1) + \int dx_1dx_2\ \rho(x_1,x_2)\ x_1\partial_{x_2}U(x_2).
$$ | {
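Regarding the second question in the post (not treated above), the appendix relation follows from a standard equipartition argument: in the canonical ensemble the momenta of different particles are uncorrelated, so
$$
\langle P_z^2\rangle = \sum_{i=1}^{N}\langle p_{i,z}^2\rangle = N m k_B T,
\qquad
V_z = \frac{P_z}{Nm}
\quad\Rightarrow\quad
\sqrt{\langle V_z^2\rangle} = \sqrt{\frac{k_B T}{mN}},
$$
which is the quoted estimate for the center-of-mass velocity fluctuations.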
"domain": "physics.stackexchange",
"id": 98592,
"tags": "newtonian-mechanics, statistical-mechanics, virial-theorem, fluctuation-dissipation"
} |
Angle of reflection of an object colliding on a moving wall | Question: Consider an object colliding elastically with an inclined wall (with much bigger mass than the object) as in picuture. The wall is at $45°$ and it is moving.
$(A)$: frame of reference of the wall (the wall is steady)
$(B)$: frame of reference with respect to which the wall is moving towards the right with velocity $V$
The ball collides on the wall in both cases with an angle $\theta_1=45°$ with respect to the normal to the wall. In $(A)$, using conservation of momentum, the ball will be reflected at an angle $\theta_2=\theta_1=45°$.
But what is the value of $\theta_2$ in $(B)$?
My guess is that the conservation of momentum implies the equality of the angles of incidence and reflection only in the frame where the wall is steady, while in the other reference the ball is reflected at an other angle.
But, thinking about it, it seems strange that the ball gets momentum in the direction of the motion of the wall just being reflected on it. So is the situation really like the picture $(B)$, i.e. the angle of reflection is bigger?
Answer: The answer "what is the value of $\theta$ in B" is answered by simple vector summation: you go back to the frame of reference where the wall is stationary, determine the magnitude and direction of the velocity after the impact, then add the velocity of the frame of reference back in.
That's really not so strange. If you imagine the situation where a ball is moving slowly at an angle to a wall that is coming rapidly towards it, you know intuitively that the ball will be moving roughly in the direction of the motion of the wall afterwards (think of what happens with a tennis racket and a ball that was tossed in the air to be served... the ball flies across the net, doesn't it?)
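That vector-summation recipe can be made concrete with a small 2D sketch (the 45° normal and the incoming velocities below are illustrative values, not taken from the original figure):

```python
import math

def reflect_off_moving_wall(v, wall_velocity, normal):
    """Elastic bounce off a heavy moving wall: transform to the wall's rest
    frame, reflect specularly about the unit normal, transform back."""
    # Velocity in the wall's rest frame.
    u = (v[0] - wall_velocity[0], v[1] - wall_velocity[1])
    # Specular reflection: u' = u - 2 (u . n) n.
    dot = u[0] * normal[0] + u[1] * normal[1]
    ur = (u[0] - 2 * dot * normal[0], u[1] - 2 * dot * normal[1])
    # Back to the original frame.
    return (ur[0] + wall_velocity[0], ur[1] + wall_velocity[1])

n45 = (1 / math.sqrt(2), 1 / math.sqrt(2))             # 45-degree wall normal
still = reflect_off_moving_wall((0, -1), (0, 0), n45)   # approx (1.0, 0.0)
moving = reflect_off_moving_wall((0, -1), (2, 0), n45)  # approx (3.0, 2.0)
```

With the wall at rest, the 45° incidence gives the familiar mirror reflection (a falling ball bounces out horizontally); setting the wall in motion adds a component along the wall's velocity, which is exactly the extra momentum the questioner found surprising.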
"domain": "physics.stackexchange",
"id": 43191,
"tags": "newtonian-mechanics, classical-mechanics, reference-frames, reflection, relative-motion"
} |
When reading serial data between Arduino and a node, it hangs. How can I properly send and read data? | Question: I'm using ROS2 foxy with no libraries installed. It runs on an Ubuntu 21.04 x64 machine with kernel 5.8.0-59-generic.
I created a project with the goal of controlling an Arduino using python serial in a workspace with the following structure:
- workspace
- src
- arduino_comm
- serial_server.py
- serial_client.py
- motor_interface
- other_interface
The motor interface provides the action that is used by the arduino_comm package to send serial data. I have an old Arduino Nano that I've had for about 7 years, which uses the old legacy bootloader. It is still operational, but it sadly seems incompatible with rosserial, and I've checked the micro-ROS packages and they also seem incompatible with my old Arduino. Also, I wanted to understand the code I'm working with at its bare basics before adding more complexity with an external library.
So I created the following code.
Arduino code
//Variable to store the bot command.
String botCommand;
void setup() {
//Initialize Serial port.
Serial.begin(9600);
//Initialize BUILTIN LED to light up for feedback.
pinMode(LED_BUILTIN, OUTPUT);
}
void loop() {
if(Serial.available()){
botCommand = Serial.readString();
if(botCommand == "forward"){
//Light up the led.
digitalWrite(LED_BUILTIN, HIGH);
delay(10);
}
Serial.println(botCommand);
}
//Shut down the LED.
digitalWrite(LED_BUILTIN, LOW);
delay(10);
}
The Server and Client use the following action, which is based on the tutorial
##################################################
#Action that sends robot towards a specific spatial position.
##################################################
#Goal of achieving a specific position.
int32 botgoalposition
---
#Resulting position that we're aiming to attain.
int32[] botendposition
---
#Feedback message about current position.
int32[] botcurrposition
The arduino_comm serial_server.py dedicated to sending commands to the Arduino to follow.
import time
#Import the Serial library to send commands to Arduino.
import serial
#Elements required to generate an ActionServer.
import rclpy
from rclpy.action import ActionServer
#Library required to generate a ROS2 Node.
from rclpy.node import Node
##################################################
# ACTIONS
##################################################
#Action to calculate Fibonacci code. FOR DEBUGGING.
from code_interfaces.action import Fibonacci
#Action to send commands to the Arduino until it reaches the desired coordinates.
from bot_move.action import BotMove
#Class that encapsulates the Action Server.
class SerialCommServer(Node):
def __init__(self):
#Declare the Node with the name "serial server".
super().__init__('serial_server')
#Declare it'll be an ActionServer that publishes to the topic 'botmove'.
self._action_server = ActionServer(
self,
BotMove,
#Action name.
'botmove',
#Callback for executing accepted goals.
self.executeCallback
)
#When it has been initialized, it'll start executing the following callback.
def executeCallback(self, goalHandle):
#Set the logger to indicate we're starting to execute our goal.
self.get_logger().info('Executing goal...')
#Get feedback to see if we're close to our goal.
feedbackMsg = BotMove.Feedback()
feedbackMsg.botcurrposition = [0,1]
goalHandle.publish_feedback(feedbackMsg)
#Send a small Hello to Arduino.
arduino = serial.Serial(port="/dev/ttyUSB0")
for i in range(1, 10):
arduino.write(bytes(str("forward"), 'utf-8'))
feedbackMsg.botcurrposition.append(i)
self.get_logger().info('Feedback: {0}'.format(i))
#Read back response from the Arduino.
#TODO: THIS SECTION HANGS, NEED TO FIGURE HOW TO FIX IT.
# response = arduino.readline()
# print("Response: " + str(response))
time.sleep(1)
# for i in range(1, goalHandle.request.botgoalposition):
# feedbackMsg.botcurrposition.append(
# feedbackMsg.botcurrposition[i] + feedbackMsg.botcurrposition[i-1])
# #Show the position that has been achieved so far.
# self.get_logger().info('Feedback: {0}'.format(feedbackMsg.botcurrposition))
# goalHandle.publish_feedback(feedbackMsg)
goalHandle.succeed()
result = BotMove.Result()
result.botendposition = feedbackMsg.botcurrposition
#Returns result of the run. All Actions MUST return a result.
return result
def main(args=None):
rclpy.init(args=args)
serialCommServer = SerialCommServer()
try:
print("Initializing Serial Communication Server.")
rclpy.spin(serialCommServer)
except KeyboardInterrupt:
print("Keyboard interrupt command received")
if __name__ == '__main__':
main()
And finally the client serial_client.py that doesn't do much of interest, later I want to send commands from another node but still need to fix this first.
import rclpy
from rclpy.action import ActionClient
#Library needed to generate a ROS2 Node.
from rclpy.node import Node
#Imports Action to solve a Fibonacci sequence. FOR DEBUGGING.
from code_interfaces.action import Fibonacci
#Action to manage position of the robot using Serial.
from bot_move.action import BotMove
#This is the Class that makes serial requests to the server.
class SerialCommClient(Node):
def __init__(self):
super().__init__('serial_client')
#Declare a new Action Client.
self._action_client = ActionClient(self, BotMove, 'botmove')
#This function waits for the action server to be available, then sends the new goal position the robot must occupy.
def sendGoal(self, botgoalposition):
    #Declare the BotMove action so we can start requesting a new robot position.
goalMsg = BotMove.Goal()
#Set a new goal for the action.
goalMsg.botgoalposition = botgoalposition
#Wait for the server to respond to the request.
self._action_client.wait_for_server()
self._send_goal_future = self._action_client.send_goal_async(
goalMsg,
feedback_callback=self.feedbackCallback
)
#When the goal has been set, execute this callback.
self._send_goal_future.add_done_callback(self.goalResponseCallback)
#Goal Handle - Does different things depending on whether the server has accepted or rejected the goal.
def goalResponseCallback(self, future):
goalHandle = future.result()
if not goalHandle.accepted:
#Log that the goal has been rejected.
self.get_logger().info('Goal rejected :/')
return
#Otherwise, log that it has been accepted.
self.get_logger().info('Goal accepted')
self._get_result_future = goalHandle.get_result_async()
#Callback to do something with the results we get after the server is done executing code.
self._get_result_future.add_done_callback(self.getResultCallback)
#This callback function does things with the result.
def getResultCallback(self, future):
result = future.result().result
#Print the final position achieved.
self.get_logger().info('Result: {0}'.format(result.botendposition))
#Shutdown the Client.
rclpy.shutdown()
#Gets the partial feedback portion of the message and prints it.
def feedbackCallback(self, feedbackMsg):
feedback = feedbackMsg.feedback
self.get_logger().info('Received feedback: {0}'.format(feedback.botcurrposition))
def main(args=None):
rclpy.init(args=args)
actionClient = SerialCommClient()
#Establish the goal that will be reached by the server. In this case to reach some specific coordinates.
actionClient.sendGoal(50)
rclpy.spin(actionClient)
if __name__ == '__main__':
main()
So far it works and properly sends the serial data to the Arduino which makes it blink. But when I try to read back the response it hangs. What could be a proper way to modify the code so the server can read serial data back? And is there a more proper way to build this? Thanks a lot for your help!
Answer: I found what I believe is a relevant answer here, and I think it's relevant because of the following lines in your serial_server.py file:
#When it has been initialized, it'll start executing the following callback.
def executeCallback(self, goalHandle):
# <other code>
#Send a small Hello to Arduino.
arduino = serial.Serial(port="/dev/ttyUSB0")
The way I read this makes it look like you're starting a serial connection repeatedly. The post I linked states,
Establishing a serial connection to an arduino causes it to reset, so it was never online at the moment the pi was sending data, and so never replied.
There's a longer explanation about the behavior over at Arduino.SE that says, in part,
The Arduino uses the RTS (Request To Send) (and I think DTR (Data Terminal Ready)) signals to auto-reset.
The answer there also links here for instructions on disabling the auto-reset-on-serial-connection feature. Links can rot, so in the interest of future visitors I'll quote a subset of the page here, describing how to use a pull-up resistor on the reset pin to disable the auto-reset:
The simple way that doesn't require any permanent modifying of your hardware or software configuration changes:
Stick a 120 ohm resistor in the headers between 5v and reset (you can find these on the isp connector too). 120 is hard to find so just combine resistors. Don't go below 110 ohms or above 124 ohms, and don't do this with an isp programmer attached. You can just pull out the resistor when you want auto-reset back.
In summary, it looks like you're restarting your serial connection in every execute callback. Establishing the serial connection is resetting your arduino, and then it's resetting while you perform the serial write so it doesn't realize you're expecting a response when you get to the serial read.
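To make that concrete, here is a minimal sketch (my addition, not from the linked posts) of the pattern: open the port once, keep the handle, and give reads a timeout so `readline()` returns instead of blocking forever. The helper below only assumes an object exposing `write()`/`readline()`, so the same code works against a real `serial.Serial` or a stand-in; the port name, baud rate, and timeout value are assumptions.

```python
# Open the serial port ONCE (e.g. in the node's __init__), not in every
# execute callback -- re-opening the port resets the Arduino mid-conversation:
#
#     import serial  # pyserial
#     self.arduino = serial.Serial("/dev/ttyUSB0", baudrate=9600, timeout=2.0)
#
# With a read timeout set, readline() returns b"" instead of hanging.

def exchange(port, command):
    """Write one command and read back one newline-terminated reply.

    `port` is anything exposing write()/readline(), e.g. a serial.Serial
    opened once with a timeout, or a fake object in tests.
    """
    port.write(command.encode("utf-8"))
    reply = port.readline()  # b"" on timeout rather than blocking forever
    return reply.decode("utf-8").strip()
```

In `executeCallback` you would then call something like `exchange(self.arduino, "forward")` and log the result, instead of constructing a new `serial.Serial` for every goal.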
You can avoid this by either starting the serial connection and storing the handle to the connection as a member variable or taking it as a callback parameter (preferred) or by disabling the auto-restart function. | {
"domain": "robotics.stackexchange",
"id": 2396,
"tags": "arduino, python, serial, communication, ros2"
} |
How can I feed BERT to neural machine translation? | Question: I am trying to feed the input and target sentences to an NMT model. I am trying to use BERT here, but I don't have any idea how to feed it to my model.
Before that, I was using one-hot encoding; I ran into issues there, so I want to use BERT.
Also, I have to note that I am new to TensorFlow and deep learning. So please share your opinion with me about the use of BERT in NMT.
My goal is only to use BERT in my model for translation purposes.
my model definition:
def build_model(in_vocab, out_vocab, in_timesteps, out_timesteps, units):
model = Sequential()
model.add(Embedding(in_vocab, units, input_length=in_timesteps, mask_zero=True))
model.add(LSTM(units))
model.add(RepeatVector(out_timesteps))
model.add(LSTM(units, return_sequences=True))
model.add(Dense(out_vocab, activation='softmax'))
return model
Answer: I found this way of using BERT in my translation system, and it allows me to load and use more data to train my model.
I got a memory error when I wanted to use more data, like 100k sentences, for my task, and I realized that my tokenizer was part of the problem: building a tokenizer over such a huge volume of data takes a lot of memory. Pre-trained models like BERT are the solution here, letting you feed more data, like 200k sentences or more, to your model without worrying too much about memory errors.
Also, in my task I was worried about words that do not appear in the training phase but do appear in the test phase; BERT solved this problem for me too, because it was trained on a large corpus.
let's dive in and find out how I used BERT to fix my problem here.
Here I am going to make English to English translation system.
Loading pre-trained BERT for English (if your source and target languages differ, you have to load a model for each; you can look at tfhub.dev for them)
max_seq_length = 50 # I need to test BERT, so I will keep this small for now
input_word_ids = tf.keras.layers.Input(shape=(max_seq_length,), dtype=tf.int32,name="input_word_ids")
input_mask = tf.keras.layers.Input(shape=(max_seq_length,), dtype=tf.int32,name="input_mask")
segment_ids = tf.keras.layers.Input(shape=(max_seq_length,), dtype=tf.int32,name="segment_ids")
#this is the path to pre-trained bert model
bert_layer = hub.KerasLayer("https://tfhub.dev/tensorflow/bert_en_uncased_L-12_H-768_A-12/1",trainable=True)
then I call my tokenizer
FullTokenizer = bert.bert_tokenization.FullTokenizer
vocab_file = bert_layer.resolved_object.vocab_file.asset_path.numpy()
do_lower_case = bert_layer.resolved_object.do_lower_case.numpy()
tokenizer = FullTokenizer(vocab_file, do_lower_case)
here is an examples to use the tokenizer:
tokenizer.convert_tokens_to_ids(['is'])
tokenizer.convert_ids_to_tokens([2003])
tokenizer.convert_ids_to_tokens([30521])
the output is: ([2003], ['is'], ['##~'])
then I used my train data to get my sequence
s = "This is a nice sentence."
stokens = tokenizer.tokenize(s)
stokens = ["[CLS]"] + stokens + ["[SEP]"]
input_ids = get_ids(stokens, tokenizer, max_seq_length)
the output for me was:
input Ids: [101, 2023, 2003, 1037, 3835, 6251, 1012, 102, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
This method is also needed for getting the token ids, and you should add it to your code:
def get_ids(tokens, tokenizer, max_seq_length):
"""Token ids from Tokenizer vocab"""
token_ids = tokenizer.convert_tokens_to_ids(tokens)
input_ids = token_ids + [0] * (max_seq_length-len(token_ids))
return input_ids
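One caveat about `get_ids` as written (my observation, not part of the original answer): if a sentence tokenizes to more than `max_seq_length` tokens, the padding count goes negative, so nothing is padded and the returned list is longer than `max_seq_length`. A defensive variant truncates first:

```python
def get_ids_safe(tokens, tokenizer, max_seq_length):
    """Token ids from the tokenizer vocab, truncated/padded to max_seq_length."""
    token_ids = tokenizer.convert_tokens_to_ids(tokens)
    token_ids = token_ids[:max_seq_length]  # guard against over-length input
    return token_ids + [0] * (max_seq_length - len(token_ids))
```

(A fuller version would truncate the inner tokens and keep the trailing "[SEP]"; this sketch just guarantees a fixed-length output.)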
For your model vocab size you can use this value,
len(tokenizer.vocab)
which in my case is: 30522
And after these steps, you can get the sequences for your inputs and outputs and then feed them to your model.
I hope it helps. And please share your opinion if I am wrong in this case.
thanks | {
"domain": "datascience.stackexchange",
"id": 6515,
"tags": "keras, tensorflow, sequence-to-sequence, bert, machine-translation"
} |
To be or not to be Catkin? | Question:
Hi,
I really need help sorting a rather large question.
As ROS continues, are you going to stick with Catkin or move to something else? And what would that something else be? I happen to like Catkin, so I would prefer the former. But someone said to someone at some conference that you will be moving away from Catkin, which is why I can't upgrade all of our packages, in our project, to be Catkin (which is what I was really hoping to do).
Much appreciated.
Originally posted by kleinash on ROS Answers with karma: 56 on 2015-05-21
Post score: 0
Answer:
If your current packages are using rosbuild you should really consider updating to catkin.
While the goal is to keep rosbuild working, there are issues, e.g. on Ubuntu 15.04 and newer, with the way it works. Changes in upstream code broke it, and there will only be a partial workaround to keep it working in most use cases. rosbuild is not actively maintained anymore, and if for some reason it breaks again in the future it might be beyond repair.
Another reason might be that you won't ever be able to release your packages when they are based on rosbuild.
Now to your main question:
ROS 1 is using catkin since Groovy and there is no plan to change that again.
What you might have heard are information about the ROS 2 development. That indeed uses something which is called differently: ament. But ament is basically an updated version of catkin - you could call it catkin 2.
A different name was chosen for multiple reasons:
The so called devel space in catkin was almost impossible to use when scaling a workspace up to hundreds of packages. Therefore that feature has been removed from ament. But the advantage of having a devel space (by not copying resources when running your code) has been preserved using a different approach (namely symlinked installs).
Python packages containing a setup.py require to be wrapped in CMake in catkin. In ament they do not anymore and are being processed with the "native" Python tools directly.
Some APIs changed slightly: e.g. some functions in catkin specify implicit target names (which could be considered magic), and ament is clearer about that by specifying them explicitly. Some of the catkin API was developed with rosbuild in mind, and some of those design choices would have been better revised.
The order of some function calls needed to be changed to support use cases that are currently not possible. One example is the package catkin_simple, which was an attempt to ease writing CMake code for ROS packages. It was never announced since it had fundamental flaws which couldn't easily be addressed with the way catkin works.
But ultimately catkin and ament are very similar. catkin will be used for ROS 1 and ament for ROS 2. There is no plan to "backport" ament to ROS 1 and also no plan to support catkin for ROS 2.
Therefore I would highly recommend updating your code base from rosbuild to catkin. A potential step towards ROS 2 in the future will require you to update more of your code. The changes to the CMake files should be very simple at that point.
Originally posted by Dirk Thomas with karma: 16276 on 2015-05-21
This answer was ACCEPTED on the original site
Post score: 3 | {
"domain": "robotics.stackexchange",
"id": 21747,
"tags": "catkin"
} |
Derivation of Squared Angular Momentum in Spherical Coordinates | Question: While reading my textbook, I found the following:
I tried to prove the above equation by doing the following.
Knowing that : $$(\vec{A}\times\vec{B}).(\vec{C}\times\vec{D})=(\vec{A}.\vec{C})(\vec{B}.\vec{D})-(\vec{A}.\vec{D})(\vec{B}.\vec{C})$$
And making the appropriate substitutions I'm left with this:
$$\nabla^2-(\hat{r}.\vec{\nabla})(\vec{\nabla}.\hat{r})$$
I am working in spherical coordinates so we have:
$$\hat{r}.\vec{\nabla}=\frac{\partial}{\partial r}$$
$$\vec{\nabla}.\hat{r}=?$$
where in place of the question mark I found $\frac{2}{r}$, but I know it must be wrong. I used the following properties of the partial derivatives of the unit vectors in spherical coordinates:
Also Knowing the gradient for spherical coordinates:
I know I must be wrong because the quadruple product holds for vectors, but it might not be the case for operators. I tried taking into consideration the order of the products; as a last resort I tried to take into account the commutator of the unit vector and the gradient, but failed to find an expression because I'm still green in this area. I would greatly appreciate it if someone could clear my doubts. I also know I could've taken the cross products directly, but is there a way of doing this by taking this path? It's always enlightening to be able to do computations in different ways.
Answer: You must remember that $\textbf{r}$ is an operator, and to compute $\nabla\cdot\hat r$ you must let it act on a function of coordinates. Here is how I derived it.\begin{equation}
\textbf{L}^2=(\textbf{r}\times\textbf{p})\cdot(\textbf{r}\times\textbf{p})
\end{equation} Using the formula $\textbf{A}\cdot(\textbf{B}\times\textbf{C})=\textbf{C}\cdot(\textbf{A}\times\textbf{B})$ twice, we get, \begin{equation}
\textbf{L}^2=\textbf{r}\cdot(\textbf{p}\times(\textbf{r}\times\textbf{p}))
\end{equation} Using the formula for vector triple product we get,
\begin{equation}
\textbf{L}^2=\textbf{r}\cdot(p^2\textbf{r}-\textbf{p}(\textbf{p}\cdot\textbf{r}))
\end{equation} Using $[\textbf{r},p^2]=2i\hbar\textbf{p}$ and $\textbf{r}\cdot\textbf{p}-\textbf{p}\cdot\textbf{r}=3i\hbar 1$, (to prove these commutation relations just use $[x_i,p_j]=i\hbar\delta_{ij}$) we get
\begin{equation}
\textbf{L}^2=r^2p^2+i\hbar\textbf{r}\cdot\textbf{p}-(\textbf{r}\cdot\textbf{p})(\textbf{r}\cdot\textbf{p})
\end{equation} Now use the fact that $\hat{r}\cdot\nabla f(\textbf{r})=\frac{\partial f}{\partial r}$ to obtain\begin{equation}
\textbf{L}^2=r^2p^2+\hbar^2\frac{\partial }{\partial r}(r^2\frac{\partial}{\partial r})=-\hbar^2r^2[\nabla^2-\frac{1}{r^2}\frac{\partial }{\partial r}(r^2\frac{\partial}{\partial r})]
\end{equation} | {
"domain": "physics.stackexchange",
"id": 46337,
"tags": "quantum-mechanics, homework-and-exercises, angular-momentum, coordinate-systems"
} |
What are 'acid stable' amino acids? | Question: I tend to see terms amino acid, acid stable amino acid, and free amino acids used often in the field of nutrition, but they are sometimes used interchangeably which confuses me.
I know that:
amino acid is a general term to describe organic compounds that encompass both essential and non-essential amino acids.
free amino acids is a general term for amino acids that are not broken down from a protein source; instead they exist in their original form, 'freely'.
acid stable amino acid ... no idea
Is that right (correct me if I'm wrong)? What is the larger relationship amongst all three terms? Would consuming a free amino acid be better than one from a protein, or from an acid stable amino acid? Why would one choose to consume a particular class of amino acids over another?
Answer: The amino acids asparagine and glutamine have hydrolysable amide groups on their R groups, as shown here:
Note the leftmost amide group on both amino acids. When exposed to acid, these groups would hydrolyse, releasing ammonia.
This was of interest when people used to determine amino acid compositions by acid hydrolysing purified proteins (example paper which does this) and then running them through HPLC or mass spectroscopes. This method, however, fails to distinguish asparagine/aspartate and glutamine/glutamate due to their acid-unstable nature.
So to answer your question, an acid-stable amino acid is any amino acid that does not degrade under acid treatment.
Additional amino acids that degrade under such treatment can be found here, and includes serine, threonine, tyrosine, tryptophan and cysteine in addition to the two above amino acids. These amino acids degrade by alternative pathways distinct from the one by which the above amino acids degrade.
Finally, assuming you are talking about amino acids in dietary supplements, the reason acid-stable amino acids are reported is because acid hydrolysis is usually the method used to hydrolyse the proteins into amino acids, and therefore the manufacturer would not be able to produce acid-unstable amino acids by this method, leading to that label on your supplements. | {
"domain": "biology.stackexchange",
"id": 4645,
"tags": "biochemistry, nutrition, amino-acids"
} |
Force on a magnetic dipole in an external magnetic field | Question: I want to find an expression for the force acting upon a magnetic dipole with dipole moment $\mathbf{m}$ if that dipole is positioned in a stationary, external magnetic field $\mathbf{B}$. The expression given for the force is the following (assuming that $\nabla \times\mathbf{B}=0$):
$$\mathbf{F}=(\mathbf{m}\cdot\nabla)\mathbf{B}\quad(1)$$
My question is mostly whether the expression above is equivalent to:
$$\begin{bmatrix}
\frac{\partial \mathbf{B}}{\partial x} & \frac{\partial \mathbf{B}}{\partial y}& \frac{\partial \mathbf{B}}{\partial z}
\end{bmatrix}\mathbf{m} \quad (2)$$
or equivalent to:
$$\begin{bmatrix}
\frac{\partial \mathbf{B}}{\partial x} & \frac{\partial \mathbf{B}}{\partial y}& \frac{\partial \mathbf{B}}{\partial z}
\end{bmatrix}^T\mathbf{m} \quad (3)$$
I basically found these two expressions ($(2)$ $(3)$) for the force from two different sources, so one of them must be wrong. I derived the first expression in the following way:
$$(\mathbf{m}\cdot\nabla)\mathbf{B}=(m_1\frac{\partial }{\partial x}+m_2\frac{\partial }{\partial y}+m_3\frac{\partial }{\partial z})\begin{bmatrix}
B_1\\
B_2\\
B_3
\end{bmatrix}=\begin{bmatrix}
m_1\frac{\partial B_1}{\partial x}
+ m_2\frac{\partial B_1}{\partial y}
+ m_3\frac{\partial B_1}{\partial z}
\\
m_1\frac{\partial B_2}{\partial x}
+ m_2\frac{\partial B_2}{\partial y}
+ m_3\frac{\partial B_2}{\partial z}
\\
m_1\frac{\partial B_3}{\partial x}
+ m_2\frac{\partial B_3}{\partial y}
+ m_3\frac{\partial B_3}{\partial z}
\end{bmatrix}$$
The last expression can be interpreted as the matrix product $(2)$. Is that correct, or am I missing something obvious?
Thanks!
Answer: When in doubt, use coordinates and index notation; the expression
$$\mathbf{F}=(\mathbf{m}\cdot\nabla)\mathbf{B}~~~(*)
$$
can be written in cartesian coordinates in this way:
$$
F_i = m_k \partial_k B_i
$$
If you're wondering how we know that the right-hand side of (*) expands this way, it is actually the definition of the shorthand $\mathbf m \cdot \nabla$.
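Since $F_i = m_k \partial_k B_i$ is exactly the question's matrix form $(2)$ (the Jacobian with columns $\partial\mathbf B/\partial x_j$ acting on $\mathbf m$), this is easy to sanity-check numerically. The sketch below is my addition: it uses an assumed curl-free sample field $\mathbf B = \nabla(xyz) = (yz,\,xz,\,xy)$ and compares the directional derivative $(\mathbf m\cdot\nabla)\mathbf B$ with the column-built Jacobian product, both via central differences.

```python
def B(x, y, z):
    # assumed sample field: B = grad(x*y*z), hence curl-free
    return (y * z, x * z, x * y)

def m_dot_grad_B(r, m, h=1e-6):
    # (m . grad) B: central difference of B along the direction m
    p = [r[i] + h * m[i] for i in range(3)]
    q = [r[i] - h * m[i] for i in range(3)]
    Bp, Bq = B(*p), B(*q)
    return [(Bp[i] - Bq[i]) / (2 * h) for i in range(3)]

def jacobian_times_m(r, m, h=1e-6):
    # expression (2): stack the columns dB/dx_j into a matrix and apply it to m
    F = [0.0, 0.0, 0.0]
    for j in range(3):
        p, q = list(r), list(r)
        p[j] += h
        q[j] -= h
        Bp, Bq = B(*p), B(*q)
        for i in range(3):
            F[i] += (Bp[i] - Bq[i]) / (2 * h) * m[j]
    return F
```

At $\mathbf r=(1,2,3)$ with $\mathbf m=(0.5,-1,2)$ both routines agree with the exact value $(1,\,3.5,\,0)$. Note that for a curl-free field the Jacobian is symmetric, so forms $(2)$ and $(3)$ coincide; they would differ only when $\nabla\times\mathbf B \neq 0$.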
If force coordinates are put into a row $\mathbf F^T$, then this row can be obtained as left multiplication of the matrix $\mathbf G$ with coordinates $G_{ki} = \partial_k B_i$ by the magnetic moment row $\mathbf m^T$:
$$
\mathbf F^T = \mathbf m^T \cdot \mathbf G.
$$ | {
"domain": "physics.stackexchange",
"id": 55924,
"tags": "electromagnetism, magnetic-fields, magnetic-moment"
} |
Complements of Linear Bounded Automata? | Question: Would switching the accept and reject states of an LBA A create a new LBA we'll say A' in which the language of A' is the complement of the language of A? I believe the answer is yes just by working out an example...but I'm not sure on a solid proof...nor am I sure if the fact that I am working with an LBA vs a regular turing machine makes a difference in this case.
Answer: I agree with Hendrick Jan; I don't think the currently accepted answer is correct. Even though $A_{LBA}$ is decidable, that doesn't mean the LBA itself doesn't loop.
As a counterexample, consider an LBA $A$ over $\Sigma = \{0, 1\}$, where $A$ accepts $0$ but loops on $1$. Then $L(A) = \{ 0 \}$. The LBA with swapped states, $A'$, would reject $0$ and still loop on $1$, so $L(A') = \{ \}$. This should be a sufficient counterexample as $\overline{L(A)} = \{1\}$, which is not equal to $L(A')$. | {
"domain": "cs.stackexchange",
"id": 2028,
"tags": "formal-languages, turing-machines, linear-bounded-automata"
} |
Can pure rolling be reduced to pure rotation? | Question: Can pure rolling be reduced to pure rotation?
What I concluded is that
'In pure rolling, the rigid body is in pure rotation about the instantaneous axis of rotation.'
So in this context it can be said that pure rolling can be reduced to pure rotation.
Am I correct?
Answer:
Can pure rolling be reduced to pure rotation?
Yes
In which circumstances can pure rolling be reduced to pure rotation?
In a circumstance where we can apply an external force.
Let a disc on a frictionless surface be translating with velocity $v$ and rotating with angular velocity $\omega$, with the condition $v=\omega R$, where $R$ is the radius.
Initially the disc is purely rolling; now an external force is applied to the disc to bring its translation to a complete stop. Now the disc is purely rotating.
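The momentum bookkeeping behind this scenario can be made explicit (my addition, using the symbols above, with $M$ the disc's mass):
$$p_i = Mv = M\omega R \neq 0, \qquad p_f = 0 \quad\Rightarrow\quad \Delta p = -M\omega R,$$
so an external impulse must supply $\Delta p$; if that impulse acts through the centre of mass it exerts no torque about it, leaving the spin angular momentum $L = I\omega$ unchanged.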
This is one such scenario that comes to mind. The reason an external force is required is that the disc would violate conservation of momentum if it stopped translating in the absence of an external force. | {
"domain": "physics.stackexchange",
"id": 76039,
"tags": "newtonian-mechanics, rotational-dynamics"
} |
Unload a plugin with pluginlib | Question:
Hi everyone,
I'm trying to manage a project built around plugins: I created a node that launches multiple plugins with different parameters, and I can't find a function in the API to unload a plugin dynamically. I'd like not to use nodelets because there are some specifications that I don't understand (the bond connection between every nodelet and all).
So is there a simple way to manage the plugins dynamically?
Thx
Edit 1:
I tried to use the unloadLibraryForClass. It does not delete the object, it just tries to unload the library which is not what I want.
What I'm trying to do, for example, is to instantiate a plugin, delete it, and instantiate it again with a different parameter. The destructor of the base class is virtual, so I created a destructor for each of my plugins, but it cannot be used because the plugin is of the base type. Same problem with the delete function.
Originally posted by bulgrozer on ROS Answers with karma: 75 on 2017-08-22
Post score: 0
Answer:
The C++ API docs list a int pluginlib::ClassLoader<T>::unloadLibraryForClass(const std::string &lookup_name) method:
Decrements the counter for the library containing a class with a given name and attempts to unload it if the counter reaches zero.
That should do what you want, but only in the case that no other users of the library still hold on to it.
Originally posted by gvdhoorn with karma: 86574 on 2017-08-22
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 28668,
"tags": "ros, pluginlib"
} |
A function determining intervals of values greater than threshold | Question: I wonder if there exists a shorter/more elegant functional programming way than listing all the possible cases. Here, a function that determines positions of beginning/end of subintervals greater than threshold is coded. The idea behind the listed code is to mark and retain the beginning of such an interval, then to push a tuple of (beginning,ending) as soon as the interval ends. Feel free to choose any other approach if needed.
-- | Determines the intervals greater than threshold.
--
-- Examples:
-- >>> intervals 0.5 [0, 0, 0, 1, 1, 0, 0, 0, 1, 1, 1, 0]
-- [(3,4),(8,10)]
-- >>> intervals 0.5 [1,0,0,0,1,1,0,0,0,1,1,1,0]
-- [(0,0),(4,5),(9,11)]
-- >>> intervals 0.5 [1,0,0,0,1,1,0,0,0,1,1,1,0,1,1,1]
-- [(0,0),(4,5),(9,11),(13,15)]
intervals :: Ord a => a -> [a] -> [(Int, Int)]
intervals threshold ys = f False 0 p
where p = zip [0..] . map (> threshold) $ ys
f :: Bool -> Int -> [(Int, Bool)] -> [(Int, Int)]
f _ _ [] = []
f True startPos ((bPos,b):[]) | b = [(startPos, bPos)]
| otherwise = [(startPos, startPos)]
f False _ ((bPos,b):[]) | b = [(bPos, bPos)]
| otherwise = []
f True startPos ((aPos,a):(bPos,b):as) | a && b = f True startPos ((bPos,b):as)
| a && (not b) = ((startPos, aPos)) : (f False 0 as)
| otherwise = (startPos, startPos) : (f False 0 ((bPos,b):as))
f False _ ((aPos,a):as) | a = f True aPos as
| otherwise = f False 0 as
Answer: (You can skip right to TL;DR for a simpler approach)
Your function actually determines the indices of list elements that are above a threshold. In Haskell, when you have a list, an index is not the idiomatic way to represent its items. What do you want with those indices?
Agreed, your version is hard to read. For another approach, I start with
intervalsT :: [Bool] -> [(Int, Int)]
and notice that the group function might come in handy to collect subsequent equal elements.
*Main> group [True,True,False,False,True]
[[True,True],[False,False],[True]]
mapping length will result in [2,2,1], which is a step closer to the indices. To turn [a,b,c] into [0, a, a+b, a+b+c], the function scanl' is perfect:
*Main> scanl' (+) 0 [2,2,1]
[0,2,4,5]
which we can zip with its own tail. But wait! We lost information whether something is above or below threshold.
zip it again with the grouped Bools, filter based on the bools, throw away the bools. This yields:
TL;DR
intervals p = intervalsT . map (>p)
intervalsT :: [Bool] -> [(Int,Int)]
intervalsT xs = let grouped = group xs
idx = scanl' (+) 0 . map length $ grouped
ivs = zip idx (map (subtract 1) $ tail idx)
in map snd $ filter fst $ zip (map head grouped) ivs | {
"domain": "codereview.stackexchange",
"id": 18621,
"tags": "haskell"
} |
How to change ImageConstPtr data? | Question:
Hi all!
Please explain to me how I can change the raw data in an ImageConstPtr& object?
I have a function like:
void filterImage (const sensor_msgs::ImageConstPtr& rgb_msg)
{
// something like rgb_msg->data[100] = 0;
}
So, I should change image data, and then pass rgb_msg to other functions.
I see the deprecated functions get_data_vec and set_data_vec in the Image.h file, but it looks like I can't use them because I have a const modifier on the object, and set_data_vec is a non-const function.
Thank you.
Originally posted by Konstantin Cherenkov on ROS Answers with karma: 1 on 2012-02-21
Post score: 0
Answer:
You might want to try something like below:
void filterImage (const sensor_msgs::ImageConstPtr& rgb_msg)
{
sensor_msgs::Image current_rgb_msg;
current_rgb_msg = *rgb_msg;
}
You can further manipulate the value from current_rgb_msg variable.
Originally posted by alfa_80 with karma: 1053 on 2012-02-21
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by alfa_80 on 2012-02-21:
It's better to put the declaration of "current_rgb_msg" in class scope.
Comment by Konstantin Cherenkov on 2012-02-22:
As I understood, "current_rgb_msg" is a copy of object, referenced by "rgb_msg". Ok, that means, that all changes would be actually for "current_rgb_msg" object. Could you please explain, how to convert "current_rgb_msg" to "const sensor_msgs::ImageConstPtr&" after all changes? Thanks.
Comment by alfa_80 on 2012-02-22:
You cannot convert/overwrite the incoming "rgb_msg", because it is a constant value(it's read-only variable). I'm not so into openCV, I think, you can try what dlaz and Eddit suggested, that's by using straightaway cv_bridge for further processing. | {
"domain": "robotics.stackexchange",
"id": 8333,
"tags": "sensor-msgs"
} |
How to evolutionarily explain menstrual cycle synchronization? | Question: Sorry, I cannot indicate the source, but I read that the menstrual cycles of women who live in the same house sync over time. Is it true? How does it work?
Answer: This is a myth which most likely came from methodological flaws in the original study. It could never be reproduced. This paper ("Darwin’s Legacy: An Evolutionary View of Women’s Reproductive and Sexual Functioning") deals with it. The interesting part starts from page 30 and says (the references can be resolved in the article):
Critiques of MSH Studies
Wilson (1992) and Yang and Schank (2006),
among others, have criticized the study design, methods, and
statistics used by McClintock (1971) and others who have claimed
evidence of MS. For example, McClintock
incorrectly used the Page test for ordered hypotheses with multiple
treatments (she used the same groups of women repeatedly instead of
independent treatments), making it impossible to evaluate the true
level of significance of her reported findings. Likewise, reports of
greater estrous synchrony in chimpanzees caged together than in those
caged apart (Wallis, 1985) and synchronization of estrogen peaks in a
sample of five golden lion tamarins (French & Stribley, 1987) are
rendered moot by the use of unsuitable statistical tests (Schank,
2001). Furthermore, computer simulations suggested that the null
hypothesis of no synchronization could not be rejected in either the
chimpanzee or tamarin samples (Schank, 2001). | {
"domain": "biology.stackexchange",
"id": 3147,
"tags": "human-biology, adaptation"
} |
How is energy in the hairspring restored in a mechanical watch? | Question: I've read through the excellent guide to how mechanical watches work by Ciechanowski, but it leaves some questions unanswered.
The main spring of the watch, when it loses energy, can of course be re-wound using the watch's crown, and the guide above explains in detail how that happens.
But the hairspring of the watch, the one that drives the balance wheel, must lose energy at some point as well through friction, with the balance wheel oscillations dying down as a result. How is energy restored to that spring, i.e. how is it re-wound?
On a related note, the guide above also shows that the balance wheel is stopped, through friction, by a special lever during the process of time adjustment. If the balance wheel is thus stopped at a particularly bad time (i.e. when it's exactly at its midpoint position), wouldn't all of the energy in the hairspring be dissipated when the balance wheel is stopped? How does it re-start once the watch is out of time adjustment mode?
Answer: I enjoyed the detailed and beautifully animated diagrams you linked.
The source of energy for the balance wheel is the end of the fork, which is pushed intermittently through the jewels by the escapement gear.
As soon as the balance wheel is released, it is accelerated by the force of the fork, either clockwise or anticlockwise.
"domain": "engineering.stackexchange",
"id": 4719,
"tags": "mechanical-engineering, mechanisms, springs, coil-spring, watch"
} |
Why do we need 3 variables to parametrize $\mathscr{I}^\pm$ in a Penrose diagram? | Question: In the figure we can see the Penrose diagram for Minkowski space
If I understand correctly, $i^-$ and $\mathscr{I}^-$ have coordinates $r=\infty$ and $t=-\infty$ while $i^+$ and $\mathscr{I}^+$ have coordinates $r=\infty$ and $t=+\infty$. I would think, then, that they can be parametrized with two variables, namely, the angles on a two-sphere. However, in many textbooks (like this review by Strominger https://arxiv.org/abs/1703.05448, on page 13) they claim that, for example, $\mathscr{I}^-$ is a three-dimensional surface that can be thought of as the product of a two-sphere and a null line. Where is this extra degree of freedom to parametrize the null infinity coming from?
My attempt at an answer: The only thing I can think of is that, although $r$ and $t$ are infinite at the boundaries, there's still the degree of freedom to choose the ratio between them. I'm thinking that the worldline of a massive particle can end up at $r=\infty$ and $t=\infty$ as well as the worldline of a massless particle, but the relation between $r$ and $t$ will be different in each case, since the massless particle followed a null geodesic all the way to infinity while the massive particle followed a timelike geodesic. Is this correct?
Answer: I'm just answering myself to close this thread, but the answer is basically what people said in the comments:
the regions $\mathscr{I}^\pm$ are reached by travelling on a light ray (i.e. a null geodesic) which satisfies $r=\pm ct + r_0$, where $c$ is the speed of light and $r_0$ is just the initial position at $t=0$. This means that geodesics going towards $\mathscr{I}^+$ will have a constant $u=r-ct$ while geodesics going towards $\mathscr{I}^-$ will have a constant $v=r+ct$. These null variables basically label all the different null geodesics and are good coordinates to parametrize the null infinity.
In conclusion, in order to fully parametrize $\mathscr{I}^\pm$ you need to give the angular variables on the 2-sphere plus $u$ (if it's $\mathscr{I}^+$) or $v$ (if it's $\mathscr{I}^-$) to label which null geodesic you used to get there.
"domain": "physics.stackexchange",
"id": 63097,
"tags": "general-relativity, special-relativity, causality"
} |
Proof that $\vec{E}$-field is constant inside cylindrical resistor | Question: I am reading a proof that the $\vec{E}$-field is constant inside a cylindrical resistor, and I don't understand one of the steps. It is stated that since the surrounding medium is non-conductive the flow of charge at the surface has no component along the normal of the surface. From this the conclusion is drawn that the $\vec{E}$-field along the normal must be zero too.
This I don't understand. Since the conductivity of the surrounding medium is assumed to approach zero couldn't the $\vec{E}$-field be nonzero without causing charge to flow?
Answer: I think it would be better to include the actual figure of the resistor in question. I will do that below (I have also added the normal vector the example is referring to):
Since there is no current along the $\hat n$ direction, it must be that $\mathbf J\cdot\hat {\mathbf n}=0$, and since $\mathbf E$ is proportional to $\mathbf J$, it must be that $\mathbf E\cdot\hat {\mathbf n}=0$ as well.
One issue you are having seems to be with the specification that the surrounding medium is non-conducting. This is specified as an argument for why $\mathbf J\cdot\hat {\mathbf n}=0$ is true, not for why $\mathbf E\cdot\hat {\mathbf n}=0$ is true.
Since the conductivity of the surrounding medium is assumed to approach zero couldn't the $\vec E$-field be nonzero without causing charge to flow?
I suppose you are right here, but the example is concerned with the field just inside the resistor, since we want to solve Laplace's equation inside that region of space. What the field is doing outside is of no concern to us here (the footnote somewhat discusses the field outside).
"domain": "physics.stackexchange",
"id": 53271,
"tags": "electrostatics, electric-fields, electrical-resistance, voltage, conductors"
} |
How to solve this partial order reduction in $O(n^2)$? | Question: There are two orderings of numbers from the same set. Number $a$ is "immediately before" $b$ iff $a$ appears before $b$ in both sequences and there is no other number that appears between them in both sequences.
So in this example:
Seq 1: 1 2 3 4 5 6
Seq 2: 6 2 1 3 5 4
1 is not immediately before 2 because they appear in opposite orders.
1 is not immediately before 4 because 3 appears between them in both sequences.
2 is immediately before 3 and 3 is immediately before 4.
The problem is to find all pairs $a$, $b$ such that $a$ is immediately before $b$ in $O(n^2)$ time. How can this be done?
I understand that the naive solution can work in $O(n^3)$: For each pair in the first sequence ($n^2$ pairs) verify it using the second sequence ($O(n)$ time).
Answer: I think I am able to solve it.
First, have a lookup array for each sequence where array[element] = element's position in sequence [O(1)].
Phrased another way, this algorithm will find all "successors" for each of the elements in the first sequence. Finding successors for each element will take O(n) time.
For i in range(0, n):
    initialize most_recent_successor to None
    For j in range(i+1, n):
        # seq_1[j] already follows seq_1[i] in sequence 1, so check sequence 2:
        if seq_1[j] comes after seq_1[i] in seq. 2
           and (most_recent_successor is None
                or seq_1[j] comes before most_recent_successor in seq. 2):
            most_recent_successor = seq_1[j]
            add (seq_1[i], seq_1[j]) to result
Essentially, if a valid pair (seq_1[i], seq_1[j]) exists, then any pair (seq_1[i], seq_1[k]) will not be valid if the position of seq_1[k] comes after the position of seq_1[j] in both sequences.
So, for the example in the question, (1, 3) is a valid pair. Therefore, (1,4) and (1, 5) are not valid pairs since 4 and 5 come after 3 in both sequences.
Edit: Please look at ruakh's insightful answer on lookup arrays I used here. | {
"domain": "cs.stackexchange",
"id": 8288,
"tags": "algorithms, partial-order"
} |
Should the 'a' in the acid dissociation constant (Ka or pKa) be capitalised? | Question: In science, the negative logarithm of the acid dissociation constant is denoted $\mathrm{p}K_\mathrm{a}$ or $\mathrm{p}K_\mathrm{A}$ depending on the source (lowercase "a" or uppercase "A"). Since it is related to the acid dissociation constant, defined as
$$K_\mathrm{A} = \frac{[\ce{A-}][\ce{H3O+}]}{[\ce{AH}]}$$
I would write $\mathrm{p}K_\mathrm{A}$ (the LaTeX package chemmacros renders it so, but doesn't give any source for it in its documentation). It seems to me that $\mathrm{p}K_\mathrm{a}$ is the "modern" version of pKa, which was written like this in the last century because of the lack of subscript characters on typesetting machines.
I cannot find any source or styling guide saying explicitly which one should be used today (like the IUPAC Green Book), and why.
Answer: Good go-to references for this kind of problem are the IUPAC books (Gold, Green, etc.). I haven't found $\mathrm{p}K_\mathrm{a}$ directly but here you can find their convention for the acidity constant. They use a lower-case letter, i.e. $K_\mathrm{a}$. But I don't think using an upper-case letter would be wrong; I'd say it's a matter of taste. | {
"domain": "chemistry.stackexchange",
"id": 8907,
"tags": "acid-base, equilibrium, ph, notation"
} |
What constellations touch the 9-degree wide Zodiac? | Question: Say Barry, could you abuse the answer-your-own-question feature of
this site, and answer the following question:
As noted in How many constellations in the Zodiac? the
ecliptic itself touches 13 constellations, but the Encyclopedia
Britannica defines the Zodiac as "a belt around the heavens extending
9 [degrees] on either side of the ecliptic":
http://www.britannica.com/topic/zodiac
The Old Farmer's Almanac for 2014 (and previous years) notes (page
114) that the Moon occasionally crosses into 5 "non-Zodiac"
constellations:
So, exactly which constellations are "in the Zodiac" if the Zodiac
extends 9 degrees on either side of the ecliptic?
And why 9 degrees? I know Venus can be as much as 8.25 degrees from
the ecliptic,
Answer: While I can't answer the 9 degrees part, as
http://www.space.com/5417-ecliptic-zodiac-work.html notes, the band of
the zodiac actually passes through 22 constellations:
In fact, as pointed out by the well-known astronomical calculator,
Jean Meeus, along with Ophiuchus, there are nine other constellations
that occasionally can be visited by the Moon and planets: Auriga, the
Charioteer; Cetus, the Whale; Corvus, the Crow; Crater, the Cup;
Hydra, the Water Snake; Orion, the Hunter; Pegasus, the Flying Horse;
Scutum, the Shield; and Sextans, the Sextant.
So in truth, there really aren't twelve zodiacal constellations, but
twenty-two! | {
"domain": "astronomy.stackexchange",
"id": 1287,
"tags": "constellations, ecliptic"
} |
About $\chi^2_\text{adjusted}$ | Question: I am reading "An Introduction to Error Analysis: The Study of Uncertainties in Physical Measurements" by J. R. Taylor, and I read the following formula in an exercise:
$$\chi^2_\text{adjusted}=\sum_{k=1}^2\frac{\left (|O_k-E_k|- \frac{1}{2}\right )^2}{E_k}$$
but I don't know when to use it! In particular:
Is $\chi^2_\text{adjusted}$ used when $d=\text{degrees of freedom}=1$ and $k=2$?
Is $\chi^2_\text{adjusted}$ used when $d=\text{degrees of freedom}=1$?
The exercise is:
Answer: The $\chi^2$ statistic is independent of the number of degrees of freedom. But converting that statistic to some type of $p$-value does depend on the degrees of freedom. That is, you calculate $\chi^2$, then with that number and the degrees of freedom you look up the $p$-value in a $\chi^2$ table.
As for the "corrected" version of this test, you may find the WP page for Yates continuity correction helpful. The extra 0.5 term in the numerator is there to compensate for modeling a discrete distribution with a continuous one. Some claim it overcompensates in certain circumstances. True, but I've found it gives closer-to-exact results in nearly all cases.
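As a concrete sketch, the corrected statistic is straightforward to compute; the function below is generic over the number of cells (the function name and any counts fed to it are illustrative, not from the exercise):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Yates-corrected chi-squared statistic:
//   chi2_adjusted = sum_k (|O_k - E_k| - 0.5)^2 / E_k
double adjustedChi2(const std::vector<double>& observed,
                    const std::vector<double>& expected)
{
    double chi2 = 0.0;
    for (std::size_t k = 0; k < observed.size(); ++k) {
        const double d = std::fabs(observed[k] - expected[k]) - 0.5;
        chi2 += d * d / expected[k];
    }
    return chi2;
}
```

The resulting number, together with the degrees of freedom, is then looked up in a $\chi^2$ table to obtain the $p$-value.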
"domain": "physics.stackexchange",
"id": 14770,
"tags": "experimental-physics, error-analysis"
} |
Uniqueness of the definition of Noether current | Question: On page 28 of Pierre Ramond's Field Theory: A Modern Primer the following is written:
"we remark that a conserved current does not have a unique definition since we can always add to it the four-divergence of an antisymmetric tensor [...] Also since $j$ [the Noether current] is conserved only after use of the equations of motion we have the freedom to add to it any quantity which vanishes by virtue of the equations of motion".
I do not understand what he means by saying, any quantity which vanishes by virtue of the equations of motion.
Answer: In Noether's first theorem, the continuity equation$^1$
$$ d_{\mu} J^{\mu}~\approx~0 \tag{*}$$
is an on-shell equation, i.e. it holds if the EOMs [= Euler-Lagrange (EL) equations] are satisfied. It does not necessarily hold off-shell.
Hence we can modify the Noether current $J^{\mu}$ with
terms that vanish on-shell, and/or
terms of the form $d_{\nu}A^{\nu\mu}$, where $A^{\nu\mu}=-A^{\mu\nu}$ is an antisymmetric tensor,
without spoiling the continuity eq. (*).
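As a quick check in this notation (here $E^{\mu}$ is my shorthand for a generic term that vanishes on-shell), define the modified current and take its divergence:

$$ J'^{\mu} ~=~ J^{\mu} + d_{\nu}A^{\nu\mu} + E^{\mu}, \qquad E^{\mu}~\approx~0, $$

$$ d_{\mu}J'^{\mu} ~=~ d_{\mu}J^{\mu} + d_{\mu}d_{\nu}A^{\nu\mu} + d_{\mu}E^{\mu} ~\approx~ 0, $$

since $d_{\mu}d_{\nu}A^{\nu\mu}=0$ identically (a symmetric pair of derivatives contracted with an antisymmetric tensor), and $d_{\mu}E^{\mu}\approx 0$ because the EOMs, and hence their derivatives, hold at every point.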
--
$^1$ The $\approx$ symbol means equality modulo EOMs. | {
"domain": "physics.stackexchange",
"id": 80717,
"tags": "lagrangian-formalism, symmetry, field-theory, noethers-theorem, classical-field-theory"
} |