Converting Data in Rows and Columns to Rows in VBA for Excel | Question: I have a working VBA script for Excel that converts a matrix of data with multiple records in per row to multiple rows with one record per row.
A StackOverflow user told me that the code could use significant improvement, specifically mentioning implicit variants (not quite sure where I went wrong there), difficult to read code, splitting responsibilities and something about GoTo, the 1980's and raptors...
The script takes data like this:
Materials Person1 Person2
--------- --------- ---------
563718 20 40
837563 15 35
And can convert it to this:
Person Materials Data
--------- --------- ---------
Person1 563718 20
Person1 837563 15
Person2 563718 40
Person2 837563 35
Data is supplied by a third party. Each record/transaction (ex. quantity purchased, for each materials type, by customer) needs to be formatted in a separate row.
The script asks the user about this data. In this example the user specifies 1 "Header Column" (one column, beginning on the left, that will remain as is), then gives a name of "Person" for a new field made from the values in the headers of the remaining columns to the right. The values (amounts) below these headers are also made into a new field, called "Data" by default.
I am open to any advice, but I am most interested in (1) writing better code in general, and (2) making this script adaptable and easier for others to use.
The script below was originally written by Peter T Oboyski. I extensively modified it.
Option Explicit
Sub MatrixConverter2_3()
'--------------------------------------------------
' This section declares variables for use in the script
Dim book, head, cels, mtrx, dbase, v, UserReady, columnsToCombine, RowName, DefaultRowName, DefaultColName1, DefaultColName2, ColName As String
Dim defaultHeaderRows, defaultHeaderColumns, c, r, selectionCols, ro, col, newro, newcol, rotot, coltot, all, rowz, colz, tot As Long
Dim headers(100) As Variant
Dim dun As Boolean
'--------------------------------------------------
' This section sets the script defaults
defaultHeaderRows = 1
defaultHeaderColumns = 2
DefaultRowName = "MyColumnName"
'--------------------------------------------------
' This section asks about data types, row headers, and column headers
UserReady = MsgBox("Have you selected the entire data set (not the column headers) to be converted?", vbYesNoCancel)
If UserReady = vbNo Or UserReady = vbCancel Then GoTo EndMatrixMacro
all = MsgBox("Exclude zeros and empty cells?", vbYesNoCancel)
If all = vbCancel Then GoTo EndMatrixMacro
' UN-COMMENT THIS SECTION TO ALLOW FOR MULTIPLE HEADER ROWS
rowz = 1
' rowz = InputBox("How many HEADER ROWS?" & vbNewLine & vbNewLine & "(Usually 1)", "Header Rows & Columns", defaultHeaderRows)
' If rowz = vbNullString Then GoTo EndMatrixMacro
colz = InputBox("How many HEADER COLUMNS?" & vbNewLine & vbNewLine & "(These are the columns on the left side of your data set to preserve as is.)", "Header Rows & Columns", defaultHeaderColumns)
If colz = vbNullString Then GoTo EndMatrixMacro
'--------------------------------------------------
' This section allows the user to provide field (column) names for the new spreadsheet
selectionCols = Selection.Columns.Count ' get the number of columns in the selection
For r = 1 To selectionCols
headers(r) = Selection.Cells(1, r).Offset(rowOffset:=-1, columnOffset:=0).Value ' save the column headers to use as defaults for user provided names
Next r
colz = colz * 1
columnsToCombine = "'" & Selection.Cells(1, colz + 1).Offset(rowOffset:=-1, columnOffset:=0).Value & "' to '" & Selection.Cells(1, selectionCols).Offset(rowOffset:=-1, columnOffset:=0).Value & "'"
Dim Arr(20) As Variant
newcol = 1
For r = 1 To rowz
If r = 1 Then RowName = DefaultRowName
Arr(newcol) = InputBox("Field name for the fields/columns to be combined" & vbNewLine & vbNewLine & columnsToCombine, , RowName)
If Arr(newcol) = vbNullString Then GoTo EndMatrixMacro
newcol = newcol + 1
Next
For c = 1 To colz
ColName = headers(c)
Arr(newcol) = InputBox("Field name for column " & c, , ColName)
If Arr(newcol) = vbNullString Then GoTo EndMatrixMacro
newcol = newcol + 1
Next
Arr(newcol) = "Data"
v = newcol
'--------------------------------------------------
' This section creates the new spreadsheet, names it, and color codes the new worksheet tab
mtrx = ActiveSheet.Name
Sheets.Add After:=ActiveSheet
dbase = "DB of " & mtrx
'--------------------------------------------------
' If the proposed worksheet name is longer than 28 characters, truncate it to 29 characters.
If Len(dbase) > 28 Then dbase = Left(dbase, 28)
'--------------------------------------------------
' This section checks if the proposed worksheet name
' already exists and appends adds a sequential number
' to the name
Dim sheetExists As Variant
Dim Sheet As Worksheet
Dim iName As Integer
Dim dbaseOld As String
dbaseOld = dbase ' save the original proposed name of the new worksheet
iName = 0
sheetExists = False
CheckWorksheetNames:
For Each Sheet In Worksheets ' loop through every worksheet in the workbook
If dbase = Sheet.Name Then
sheetExists = True
iName = iName + 1
dbase = Left(dbase, Len(dbase) - 1) & " " & iName
GoTo CheckWorksheetNames
' Exit For
End If
Next Sheet
'--------------------------------------------------
' This section notifies the user if the proposed
' worksheet name is already being used and the new
' worksheet was given an alternate name
If sheetExists = True Then
MsgBox "The worksheet '" & dbaseOld & "' already exists. Renaming to '" & dbase & "'."
End If
'--------------------------------------------------
' This section creates and names a new worksheet
On Error Resume Next 'Ignore errors
If Sheets("" & Range(dbase) & "") Is Nothing Then ' If the worksheet name doesn't exist
ActiveSheet.Name = dbase ' Rename newly created worksheet
Else
MsgBox "Cannot name the worksheet '" & dbase & "'. A worksheet with that name already exists."
GoTo EndMatrixMacro
End If
On Error GoTo 0 ' Resume normal error handling
Sheets(dbase).Tab.ColorIndex = 41 ' color the worksheet tab
'--------------------------------------------------
' This section turns off screen and calculation updates so that the script
' can run faster. Updates are turned back on at the end of the script.
Application.Calculation = xlCalculationManual
Application.ScreenUpdating = False
'--------------------------------------------------
'This section determines how many rows and columns the matrix has
dun = False
rotot = rowz + 1
Do
If (Sheets(mtrx).Cells(rotot, 1) > 0) Then
rotot = rotot + 1
Else
dun = True
End If
Loop Until dun
rotot = rotot - 1
dun = False
coltot = colz + 1
Do
If (Sheets(mtrx).Cells(1, coltot) > 0) Then
coltot = coltot + 1
Else
dun = True
End If
Loop Until dun
coltot = coltot - 1
'--------------------------------------------------
'This section writes the new field names to the new spreadsheet
For newcol = 1 To v
Sheets(dbase).Cells(1, newcol) = Arr(newcol)
Next
'--------------------------------------------------
'This section actually does the conversion
tot = 0
newro = 2
For col = (colz + 1) To coltot
For ro = (rowz + 1) To rotot 'the next line determines if data are nonzero
If ((Sheets(mtrx).Cells(ro, col) <> 0) Or (all <> 6)) Then 'DCB modified ">0" to be "<>0" to exclude blank and zero cells
tot = tot + 1
newcol = 1
For r = 1 To rowz 'the next line copies the row headers
Sheets(dbase).Cells(newro, newcol) = Sheets(mtrx).Cells(r, col)
newcol = newcol + 1
Next
For c = 1 To colz 'the next line copies the column headers
Sheets(dbase).Cells(newro, newcol) = Sheets(mtrx).Cells(ro, c)
newcol = newcol + 1
Next 'the next line copies the data
Sheets(dbase).Cells(newro, newcol) = Sheets(mtrx).Cells(ro, col)
newro = newro + 1
End If
Next
Next
'--------------------------------------------------
'This section displays a message box with information about the conversion
book = "Original matrix = " & ActiveWorkbook.Name & ": " & mtrx & Chr(10)
head = "Matrix with " & rowz & " row headers and " & colz & " column headers" & Chr(10)
cels = tot & " cells of " & ((rotot - rowz) * (coltot - colz)) & " with data"
'--------------------------------------------------
' This section turns screen and calculation updates back ON.
Application.Calculation = xlCalculationAutomatic
Application.ScreenUpdating = True
MsgBox (book & head & cels)
'--------------------------------------------------
' This is an end point for the macro
EndMatrixMacro:
End Sub
Answer: Declaring variables at the top of a procedure was a recommended practice in 90's VB code, because "it makes it easy to see everything that the procedure needs at once". When a procedure would fit on a single screen, it wasn't too bad - but when a procedure scrolls several screens down, and uses 30-40 local variables, that "wall of declarations" actually made it harder to read (/maintain) the code, because you'd constantly need to scroll back up to see the declaration of a given variable, and then you'd waste considerable time locating which line you were looking at on your way back down. Been there, done that.
So to avoid the "wall of declarations" you could make a single instruction to declare a list of variables, like this:
Dim book, head, cels, mtrx, dbase, v, UserReady, columnsToCombine, RowName, DefaultRowName, DefaultColName1, DefaultColName2, ColName As String
Dim defaultHeaderRows, defaultHeaderColumns, c, r, selectionCols, ro, col, newro, newcol, rotot, coltot, all, rowz, colz, tot As Long
There's a trap though: Dim foo, bar, baz As String declares the last one (baz) as a String, and leaves foo and bar implicitly Variant - which incurs useless overhead and requires more storage/memory than needed (not that that is a problem nowadays though).
'--------------------------------------------------
' This section declares variables for use in the script
Dim book
Dim head
Dim cels
Dim mtrx
Dim dbase
Dim v
Dim UserReady
Dim columnsToCombine
Dim RowName
Dim DefaultRowName
Dim DefaultColName1
Dim DefaultColName2
Dim ColName As String
Dim defaultHeaderRows
Dim defaultHeaderColumns
Dim c
Dim r
Dim selectionCols
Dim ro
Dim col
Dim newro
Dim newcol
Dim rotot
Dim coltot
Dim all
Dim rowz
Dim colz
Dim tot As Long
Dim headers(100) As Variant
Dim dun As Boolean
Fun fact: the first three variables are actually Strings that are concatenated into a message for a MsgBox that's displayed at the very end of the procedure.
So we have that wall of declarations, and that banner comment telling us that we're looking at a wall of declarations. Comments shouldn't state the obvious like that; good comments tell us what the code can't say all by itself: they tell us why the code does something.
But back to these variables: DefaultColName1, DefaultColName2 and defaultHeaderRows are never used! Actually DefaultColName1 and DefaultColName2 aren't even assigned, and never referred to, not even in dead/commented-out code - but who could have known? That's why declaring variables closer to where they're used is a much better practice: no wall of declarations, and it's much harder to declare a variable that's left unused, without noticing.
'--------------------------------------------------
' This section asks about data types, row headers, and column headers
In other words, this section is collecting user input - it should be a separate procedure!
UserReady = MsgBox("Have you selected the entire data set (not the column headers) to be converted?", vbYesNoCancel)
If UserReady = vbNo Or UserReady = vbCancel Then GoTo EndMatrixMacro
That UserReady variable should have been declared like this:
Dim UserReady As VbMsgBoxResult
Actually, since the only thing we're using it for is effectively to cancel the whole thing, might as well not declare it at all and do this instead:
If MsgBox("Have you selected the entire data set (not the column headers) to be converted?", vbYesNoCancel) <> vbYes Then Exit Sub
...And we just eliminated a GoTo jump!
all = MsgBox("Exclude zeros and empty cells?", vbYesNoCancel)
If all = vbCancel Then GoTo EndMatrixMacro
Same thing here: all should have been declared As VbMsgBoxResult, and there's no need to GoTo EndMatrixMacro either. The name isn't ideal, either: vbYes stands for "exclude zeros and empty cells", and vbNo stands for "include zeros and empty cells" - which means the true meaning of all is the exact opposite of what it appears to be! I'd rename it to IsExcludingZeroAndEmpty, and declare it As Boolean, because we don't really care about the MsgBox result here; all that matters is whether or not we're to include zeros and empty values.
That all variable is used here:
If ((Sheets(mtrx).Cells(ro, col) <> 0) Or (all <> 6)) Then
What's that magic number 6? If the variable would have been declared As VbMsgBoxResult, the VBE's IntelliSense would have suggested to use vbYes instead of its underlying numeric value. But that's all moot with a proper Boolean:
If Sheets(mtrx).Cells(ro, col) <> 0 Or Not IsExcludingZeroAndEmpty Then
(note, I might have gotten confused with the reversed "all" logic here... but you get the point I'm sure - which one is easier to understand?)
Next the script prompts for how many header columns we're looking at:
colz = InputBox("How many HEADER COLUMNS?" & vbNewLine & vbNewLine & "(These are the columns on the left side of your data set to preserve as is.)", "Header Rows & Columns", defaultHeaderColumns)
If colz = vbNullString Then GoTo EndMatrixMacro
Again, Exit Sub makes the GoTo jump unnecessary. But there's a problem with using vbNullString with an InputBox - not in this context, because here it doesn't matter, but that condition will be true regardless of whether the user entered an empty string or hit the Cancel button; in a case where you would need to differentiate these inputs, you'd be stuck here.
If StrPtr(colz) = 0 Then Exit Sub
If StrPtr(InputBoxResult) returns a non-zero value, then there was a user input. If it's zero, then the user cancelled out.
There's a worse problem though. colz is a Variant, so it will happily accept potato and nothing will happen until execution reaches this line:
colz = colz * 1
And then boom, runtime error 13 Type Mismatch strikes. The funny part is that it seems this no-op multiplication is there to prevent the next line from blowing up ...with the exact same runtime error:
columnsToCombine = "'" & Selection.Cells(1, colz + 1).Offset(rowOffset:=-1, columnOffset:=0).Value & "' to '" & Selection.Cells(1, selectionCols).Offset(rowOffset:=-1, columnOffset:=0).Value & "'"
A (much) better way would have been to validate the user's input:
If Not IsNumeric(colz) Then 'user is playing smartypants
This is an interesting comment:
'--------------------------------------------------
' If the proposed worksheet name is longer than 28 characters, truncate it to 29 characters.
If Len(dbase) > 28 Then dbase = Left(dbase, 28)
Which one is wrong? Is the typo in the comment or in the code? We'll never know... but this is why comments shouldn't rephrase what the code is already saying: when the code changes, the comments don't always get updated, and are left there dangling half-truths that no one dares fixing. This would have been much better:
' Maximum length allowed for a sheet name is 31 characters
If Len(dbase) > 28 Then dbase = Left(dbase, 28)
...which begs the question, why aren't we seeing this?
Private Const SHEETNAME_MAXLENGTH As Integer = 28 ' actually it's 31, but we're keeping a little buffer to append a digit if needed
And then do we need a comment to explain this line?
If Len(dbase) > SHEETNAME_MAXLENGTH Then dbase = Left(dbase, SHEETNAME_MAXLENGTH)
Every time there's one such "banner comment":
'--------------------------------------------------
' This section checks if the proposed worksheet name
' already exists and appends adds a sequential number
' to the name
This is how I read it:
'--------------------------------------------------
' This section belongs in its own procedure or function
Might be a bit wrong - I haven't gone into the nitty-gritty details of how the procedure actually does its thing. But usually when a comment says "this chunk of code does XYZ", it can very well be moved into a procedure with a name that says "this procedure does XYZ".
I'll let other reviewers tackle the actual meat of the subject =) | {
"domain": "codereview.stackexchange",
"id": 21697,
"tags": "vba, excel, matrix"
} |
Standalone ROS apps on macos | Question:
Hi All
is there an easy way I can convert a given roslaunch into a standalone Mac .app?
Originally posted by Yogi on ROS Answers with karma: 411 on 2011-12-07
Post score: 0
Answer:
There's no generic way.
This is something that the OS X SIG has discussed; it's in the long-term plan. It isn't particularly near-term, though; OS X support is improving rapidly, but isn't yet at a point where just boxing things up makes sense.
Originally posted by Mac with karma: 4119 on 2011-12-07
This answer was ACCEPTED on the original site
Post score: 2 | {
"domain": "robotics.stackexchange",
"id": 7561,
"tags": "ros, roslaunch, rosrun, osx"
} |
Sort Characters By Frequency Java | Question: For context, I worked on the LeetCode May 2020 Challenge Week 3 Day 1. The challenge was described as this:
Given a string, sort it in decreasing order based on the frequency of
characters.
Example 1:
Input: "tree"
Output: "eert"
Explanation: 'e' appears twice while 'r' and 't' both appear once. So
'e' must appear before both 'r' and 't'. Therefore "eetr" is also a
valid answer.
Example 2:
Input: "cccaaa"
Output: "cccaaa"
Explanation: Both 'c' and 'a' appear three times, so "aaaccc" is also
a valid answer. Note that "cacaca" is incorrect, as the same
characters must be together.
Example 3:
Input: "Aabb"
Output: "bbAa"
Explanation: "bbaA" is also a valid answer, but "Aabb" is incorrect.
Note that 'A' and 'a' are treated as two different characters.
Anyways, I looked up some popular solutions. One was to get the frequency of each character and sort, and the other was to use a heap. I liked both of these solutions, but I wanted to make one where there was no sorting.
My solution involved an idea of an ArrayList of "tiers," where the index of the tier represents the frequency. Each tier consists of an ArrayList containing the characters with the corresponding frequency. As letters increase in frequency, they move up to higher frequency tiers. I also used a HashMap to keep track of which frequency tier each character was in. Upon finishing iterating through the whole string, I simply use a StringBuilder to append the letters starting at the bottom tier, reverse the StringBuilder, then return the String. I was hoping someone could give me pointers (ha, code pun) on optimizing/modifying this approach without including any kind of sorting. Below is the functional code:
public static String frequencySort(String s) {
if (s.length() <= 1) return s;
ArrayList<ArrayList<Character>> tieredFreq = new ArrayList<>(); // stores characters at their proper frequency "tier"
HashMap<Character, Integer> tierOfChars = new HashMap<>(); // maps the characters to their current frequency tier
tieredFreq.add(null); // tier 0
for (char c : s.toCharArray()) {
tierOfChars.put(c, tierOfChars.getOrDefault(c, 0) + 1); // add char or increment the tier of the character
int i = tierOfChars.get(c); // i = tier of the character
if (tieredFreq.size() <= i) tieredFreq.add(new ArrayList<>()); // if not enough tiers, add a new tier
if (i > 1) tieredFreq.get(i - 1).remove(new Character(c)); // if c exists in previous tier, remove it
tieredFreq.get(i).add(c); // add to new tier
}
StringBuilder result = new StringBuilder();
for (int i = 1; i < tieredFreq.size(); i++) { // iterate through tiers
ArrayList<Character> tier = tieredFreq.get(i); // get tier
for (Character c : tier) { // for each char in tier, append to string a number of times equal to the tier
for (int j = 0; j < i; j++) result.append(c);
}
}
result.reverse(); // reverse, since result is currently in ascending order
return result.toString();
}
Answer: You have conceived a theoretical model that works. And avoids sorting.
Tiers by frequency
Every tier contains letters of that frequency
It will come as no surprise that moving a char from one frequency's bin to the next frequency's bin will cost at least as much as sorting. But it is a nice mechanism that one sees too rarely, and it might have applications in vector operations, GPUs or whatever.
The names could be improved. One is inclined to like "tier", and it might be apt, but does the term help in understanding the code?
Where possible, use the more general interfaces implemented by specific classes, like List<T> list = new ArrayList<>();. This is more flexible when passing to methods or reimplementing with another class.
The one comment worth keeping is the one for adding null for frequency 0.
For the characters in a tier, use a Set. As the implementation I used a TreeSet, which is sorted, to give nicer output.
Use freq rather than i as the index name.
Moving from one frequency to the next higher one can be done in two separate steps (remove from old, add to new). That makes the code more readable.
so:
public static String frequencySort(String s) {
if (s.length() <= 1) return s;
List<Set<Character>> charsByFrequency = new ArrayList<>(); // stores characters at their proper frequency "tier"
Map<Character, Integer> frequencyMap = new HashMap<>(); // maps the characters to their current frequency tier
charsByFrequency.add(null); // entry for frequency 0 is not used
for (char c : s.toCharArray()) {
Character ch = c; // autoboxing: same as ch = Character.valueOf(c)
int oldFreq = frequencyMap.getOrDefault(c, 0);
if (oldFreq != 0) {
charsByFrequency.get(oldFreq).remove(ch);
}
int freq = oldFreq + 1;
if (freq >= charsByFrequency.size()) {
charsByFrequency.add(new TreeSet<>());
}
charsByFrequency.get(freq).add(ch);
frequencyMap.put(ch, freq);
}
StringBuilder result = new StringBuilder();
for (int i = 1; i < charsByFrequency.size(); i++) { // iterate through tiers
Set<Character> tier = charsByFrequency.get(i); // get tier
for (Character c : tier) { // for each char in tier, append to string a number of times equal to the tier
for (int j = 0; j < i; j++) result.append(c);
}
}
result.reverse(); // reverse, since result is currently in ascending order
return result.toString();
} | {
"domain": "codereview.stackexchange",
"id": 38395,
"tags": "java, programming-challenge"
} |
Publication on Step Completion? | Question:
We've written a custom Gazebo (6.5.1) plugin so that we can single-step through time. That is, we start gazebo -u paused and then proceed through time step-by-step. A step is taken each time our plugin receives an external message to take a step.
Currently, we wait to receive a WorldStats publication from gazebo to consider a step completed. However, it appears these messages may be sent at regular time intervals, regardless of the step having completed.
Does Gazebo publish a message that's more suitable for knowing that the step has been completed?
Originally posted by xanderai on Gazebo Answers with karma: 5 on 2016-03-18
Post score: 0
Answer:
There is a command line program that you could use:
gz world -s
In your plugin you can compare the iterations value to see if a step has occurred.
Originally posted by nkoenig with karma: 7676 on 2016-03-21
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by xanderai on 2016-04-05:
Thanks. We were able to simplify our situation by either not caring about the WorldStats message and carrying on as usual, or have some guarantees by having the plugin deal with iterations value. | {
"domain": "robotics.stackexchange",
"id": 3891,
"tags": "gazebo-plugin, gazebo-6.5"
} |
I need clarification on how pressure is measured | Question: I have a vessel that has a pressure measuring device. The value it shows as the pressure inside the vessel is 6 inches of water column. Is inches of water column meant to always be used as gauge pressure, or does one have to specify gauge vs atmospheric?
Answer: Inches of water is conventionally used as gauge pressure, not absolute. You could conceivably specify it as inches of water absolute, but that would be an unusual use - sub-atmospheric pressures are more commonly measured in inches or mm of mercury. | {
"domain": "engineering.stackexchange",
"id": 1451,
"tags": "pressure, unit"
} |
Can computer vision process 3D “images” directly? | Question: Most CV algorithms deal with 2D images produced by cameras. However, sometimes we need to process 3D “images” (I don’t know how to call them). For example, CT and MRI produce 3D radiographs. Traditionally these radiographs are cut into many slices for doctors to read. Because human eyes can only see 2D images (and most people lack the ability to comprehend complex 3D structures), doctors need to analyze them slice by slice, which is very time consuming. Computers on the other hand can process 3D patterns as efficiently as 2D. So can we design a CV algorithm to process 3D images directly? For example, instead of cutting the radiographs into many slices and mapping the edges (which marks the cross sections of blood vessels), the algorithm maps the surface of blood vessels directly. Will such direct 3D processing be more efficient and more accurate than 2D processing?
Answer: This is a very general question, so I'll give some ideas.
The short answer is yes. If you're talking about the "processing" as some function, then yes, we can use some function $f_3: \mathbb{R}^3 \to A$ in our computer algorithm rather than a $f_2: \mathbb{R}^2 \to B$. In fact, there are many computer algorithms that already do this. In fact, you can give any $n$-dimensional input, and have some algorithm process the $n$ inputs into anything you want (e.g. lots of econ models do this).
In terms of efficiency, it depends what you consider efficiency. But I think what you're trying to get at is, "Is there a 2D algorithm that can be more efficient if done in 3D?", and I would say no, because 2D algorithms are designed for 2D, and 3D are designed for 3D. E.g. if you want to do edge detection in 2D, it doesn't even make sense to use it in the context of 3D (there would be 3D collision or edge detection instead, a different version).
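To make the "designed for 2D vs. designed for 3D" point concrete, here is a minimal sketch (my own illustration, not from the answer; class and method names are made up) of a 3×3×3 box blur over a voxel volume. It is the direct 3D analogue of a 2D 3×3 blur - the neighborhood operation extends to the third dimension just by adding one loop and one axis:

```java
public class Box3D {
    // Naive 3x3x3 box blur over a volume indexed as vol[z][y][x],
    // analogous to a 2D 3x3 blur with one extra loop for the z axis.
    static double[][][] blur(double[][][] vol) {
        int d = vol.length, h = vol[0].length, w = vol[0][0].length;
        double[][][] out = new double[d][h][w];
        for (int z = 0; z < d; z++)
            for (int y = 0; y < h; y++)
                for (int x = 0; x < w; x++) {
                    double sum = 0;
                    int n = 0; // count in-bounds neighbors for edge handling
                    for (int dz = -1; dz <= 1; dz++)
                        for (int dy = -1; dy <= 1; dy++)
                            for (int dx = -1; dx <= 1; dx++) {
                                int zz = z + dz, yy = y + dy, xx = x + dx;
                                if (zz >= 0 && zz < d && yy >= 0 && yy < h
                                        && xx >= 0 && xx < w) {
                                    sum += vol[zz][yy][xx];
                                    n++;
                                }
                            }
                    out[z][y][x] = sum / n; // average of the 3D neighborhood
                }
        return out;
    }
}
```

The same pattern carries over to 3D edge/surface detection: the kernel simply gains a third index, which is why a 2D algorithm is not reused in 3D but rather rewritten as its 3D version.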
However, to entertain your question about efficiency, there are ideas of "creating features" from lower dimensional data, moving it to upper dimensional data and then doing analysis. PCA and neural networks are some examples that do that, and are sometimes very good at it. E.g. the classic is number digit recognition, where the 2D problem is quite difficult, but you might be able to find more patterns in higher dimensions and recognize the numbers more easily (neural networks are great for this).
Also, even though you might think of your vision as 2D (because that's how cinema portrays it), you have depth perception and many other senses that help you perceive 3D, which actually would make your visual senses a 3D rather than 2D function as you might think.
Doctors look at CAT scans in 2D because it might help them target certain areas they want to look at more. There is of course software that can process these slice-by-slice images into 3D, and I would bet doctors use those as well. Also, you have to remember that CAT scans are "looking inside" of something in the first place, so it might be more helpful to see cross-sections rather than the whole thing. | {
"domain": "cs.stackexchange",
"id": 20159,
"tags": "computer-vision, pattern-recognition"
} |
Transferring Images through rosbridge | Question:
I am using the rosbridge websocket server to establish communication between ROS and an environment simulation tool. I can easily exchange scalar values like the object list and the control signals.
Now I want to transfer the images from the camera of the environment simulation tool to ROS, also through the rosbridge.
Is there a small example?
Originally posted by aks on ROS Answers with karma: 667 on 2019-01-21
Post score: 0
Answer:
roslibpy works exactly the same as roslibjs, so you could -most likely- check this example using roslibjs for video publishing (using message type sensor_msgs/CompressedImage) and adapt it to roslibpy, all the subscribing part should be almost identical, then the decoding will be python-specific, but it should not be any problem.
Originally posted by gonzalocasas with karma: 180 on 2019-04-05
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by aks on 2019-09-18:
@gonzalocasas thank you for this example. One question here... is it necessary to use sensor_msgs/CompressedImage as the message type? Can't I use the normal sensor_msgs/Image?
Comment by gonzalocasas on 2019-09-18:
you can also use image, of course, but you'd need to adapt it to do the corresponding decoding. | {
"domain": "robotics.stackexchange",
"id": 32305,
"tags": "ros, ros-melodic, rosbridge-server, rosbridge"
} |
Find The Duplicates in two arrays | Question: Find The Duplicates
Given two sorted arrays arr1 and arr2 of passport numbers, implement a function findDuplicates that returns an array of all passport numbers that are both in arr1 and arr2. Note that the output array should be sorted in an ascending order.
Let N and M be the lengths of arr1 and arr2, respectively. Solve for two cases and analyze the time & space complexities of your solutions: M ≈ N - the array lengths are approximately the same M ≫ N - arr2 is much bigger than arr1.
My approach:
import java.io.*;
import java.util.*;
import java.lang.*;
class Solution {
static int[] findDuplicates(int[] arr1, int[] arr2) {
// your code goes here
int i = 0, j = 0, k = 0;
ArrayList<Integer> ans = new ArrayList<>();
while(( i < arr1.length) && (j < arr2.length))
{
if(arr1[i] < arr2[j])
i++;
else if(arr1[i] == arr2[j])
{
ans.add(arr1[i]);
k++;
i++;
j++;
}
else
j++;
}
int[] arr = new int[ans.size()];
for(int l = 0; l < ans.size(); l++) {
if (ans.get(l) != null) {
arr[l] = ans.get(l);
}
}
return arr;
}
//N = arr1.length, M = arr2.length
//Time = O(M+N)
//Space = O(min(M,N))
I have the following questions regarding my code:
1) How can I improve the time and space complexity of my code?
2) Is there a better way(lesser lines of code, better data structures) that can be used to improve the code?
Source: Mock interview
Answer: I'd say the code suffers from readability more than from the efficiency aspects you (or your interviewer) had in mind.
The problem with such performance-centric code is that the time spent creating and optimizing the solution usually exceeds, by orders of magnitude, the time possibly saved during the runtime of the program summed up over its lifetime.
And in the end it doesn't tell us much about your skills as a Java programmer. But that's your interviewer's problem...
1) How can I improve the time and space complexity of my code?
Removing unused variables would be a first step. You declare and increment k, but you never read it. But agreed, this is not a "killer"...
2) Is there a better way(lesser lines of code, better data structures) that can be used to improve the code?
The pure LoC metric has no meaning. The most important property of code (after correctness) is readability. Structuring your code into small methods with distinct responsibilities and well-chosen names is of much higher value than a low line count, but it's not measurable...
So here is my suggestion. Especially look at the content of the for loop, and how it reads as a description of my intention - of what the code is supposed to do:
import java.util.ArrayList;
import java.util.List;
class Solution {
static int[] findDuplicates(int[] arr1, int[] arr2) {
List<Integer> ans = selectDuplicates(//
arr1.length < arr2.length ? arr1 : arr2, // shorter
arr1.length < arr2.length ? arr2 : arr1);// longer
return convertToIntArray(ans);
}
private static List<Integer> selectDuplicates(int[] shorter, int[] longer) {
ArrayList<Integer> duplicates = new ArrayList<>();
int indexInLonger = 0;
for (int current : shorter) {
indexInLonger = seekToEqualOrBiggerIn(longer, current, indexInLonger);
if (indexInLonger == longer.length)
break; // past the end of longer: no further matches possible
addMatchTo(duplicates, current, longer[indexInLonger]);
}
return duplicates;
}
private static int seekToEqualOrBiggerIn(int[] longer, int current, int indexInLonger) {
while (longer.length > indexInLonger && current > longer[indexInLonger])
indexInLonger++;
return indexInLonger;
}
private static void addMatchTo(ArrayList<Integer> duplicates, int current, int possibleMatch) {
if (current == possibleMatch)
duplicates.add(current);
}
private static int[] convertToIntArray(List<Integer> ans) {
return ans.stream().mapToInt(item -> item.intValue()).toArray();
}
}
selectDuplicates() forces the caller to check the length of the arrays? – Sharon Ben Asher
selectDuplicates() is a private method, an implementation detail not meant to be called by anyone else. So no caller is "forced" to check the length.
why? – Sharon Ben Asher
It is an optimization.
The idea is that I do things fastest when I don't do them.
With this optimization the length of the shorter array is implicitly checked by the foreach loop. It saves me from incrementing and checking both indices during the iteration.
It also reduces the iterations to the size of the smaller array since the bigger array must have some entries not in the smaller array.
this is an internal implementation that is meaningful only to the method and not the caller. – Sharon Ben Asher
Exactly! That's why in my implementation it is not exposed to the outside.
and it doesn't even check if it got the correct arguments. – Sharon Ben Asher
what would a "correct" argument look like in your opinion?
it should be able to accept any two arrays
sort out which is shorter than which. you could have shorter() and longer() methods that do the actual checking and have selectDuplicates() call them.... – Sharon Ben Asher
The method findDuplicates() which is the public interface does exactly that.
I also don't like the name of the methods. seekToEqualOrBiggerIn() is way too implementation-specific.
But that is the main purpose of a private method's name: describe the current behavior of that method as detailed as possible.
what if the input is ordered in reverse? Sure, you will have to modify the code inside the method, but the intention of the method remains the same: skip elements that have "lower" order according to the ordering of the arrays.
– Sharon Ben Asher
The OP's requirement explicitly mentioned ascending order. According to the YAGNI principle I should not provide anticipated but not required flexibility. IMHO this applies to naming too.
In fact, if the requirement changed, this name would actually make it easier to find the part to change while skimming over the code. Of course it would be renamed then.
also, addMatchTo() does not hint that the match is not yet certain and that a condition is applied. – Sharon Ben Asher
At least to me it does. And no question: The receiver of a message determines its content.
and I don't understand the reason for this particular breaking up of the code into methods. why do you have findDuplicates() and selectDuplicates() ? this seems arbitrary separation.
findDuplicates() is part of the public API (fixed by the interviewer I guess).
The OP's implementation consists of two logical parts:
find and collect the duplicates in a list
convert that list into an array as expected by the method's signature.
To me these are two different responsibilities, which should live in their own methods according to the single responsibility principle.
Since Java does not allow method signatures to differ only in the return type, I needed a different but similar name. | {
"domain": "codereview.stackexchange",
"id": 29364,
"tags": "java, performance, interview-questions, complexity, memory-optimization"
} |
Moon vs Sun size and distance 400 times | Question: I have seen below statement, and it doesn't sound right:
The Sun and Moon seem to have the same size because of this amazing coincidence:
the moon is 400 times smaller than the Sun and 400 times closer than the Sun.
Checking 400 from this source:
The Moon's distance from the Earth: 384,000 km and diameter: 3,480 km
The Sun's distance from the Earth: 149,000,000 km and diameter of 1,392,000 km
Distance: 149,000,000 / 384,000 = 388.02, OK almost 400
Size: 1,392,000 / 3,480 = 400 spot on.
Does being X times further make it look X times smaller?
I have seen this post but not sure it answer my question:
Can the apparent equal size of sun and moon be explained or is this a coincidence?
Note: First time post, let me know if the post is suitable.
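The arithmetic above can be double-checked in a few lines; a minimal sketch using the small-angle approximation (angular size ≈ diameter / distance):

```python
# Small-angle approximation: angular size (radians) ~ diameter / distance.
moon_angle = 3_480 / 384_000          # Moon: diameter km / distance km
sun_angle = 1_392_000 / 149_000_000   # Sun: diameter km / distance km

print(moon_angle)              # ~ 0.00906 rad (about half a degree)
print(sun_angle)               # ~ 0.00934 rad
print(sun_angle / moon_angle)  # ~ 1.03, so the apparent sizes nearly match
```

Since the Sun is roughly 400 times larger and roughly 400 times farther, the two factors cancel and the angular sizes agree to within about 3 percent.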
Answer:
Yes: for small angles, the apparent (angular) size of an object scales as diameter divided by distance, so being X times farther away does make it look X times smaller. This follows from similar triangles, also called the intercept theorem. | {
"domain": "physics.stackexchange",
"id": 37056,
"tags": "earth, sun, moon, distance"
} |
Changing look and feel for all current frames | Question: I am using JGoodies Looks to change the look and feel of the application I'm currently working on.
I've just created a JMenu with some different theme options for the user to choose from.
The thing is, I want to make sure the changes affect the program instantly, on all current windows, so I did this:
@Override
public void doActionPerformed(ActionEvent event) {
PlasticLookAndFeel lookAndFeel = new PlasticXPLookAndFeel();
lookAndFeel.setPlasticTheme(new SkyBlue());
try {
UIManager.setLookAndFeel(lookAndFeel);
for(Frame f: Frame.getFrames()) {
SwingUtilities.updateComponentTreeUI(f);
}
} catch (UnsupportedLookAndFeelException e) {
e.printStackTrace();
}
}
It works perfectly, like magic. But I don't know if it is a good practice or not, or if there is a better or easier (if possible) way.
I didn't even know that Frame had that method. I've just tried it (thanks to IntelliJ's magic code completion) and it worked so easily on the first try. So maybe that's why I'm concerned about it.
Also, what is the difference between the code above and this one (as they appear to work the exact same way):
for(Window window : JFrame.getWindows()) {
SwingUtilities.updateComponentTreeUI(window);
}
I mean, when and why should I be using Frame over JFrame, Frame over Window, or vice versa?
Answer: Java GUIs: Swing & AWT
In standard Java there are two ways to make GUIs: Swing and AWT.
To be short, Swing is platform-independent and AWT is platform-dependent. Typically you pick either AWT or Swing and stick to it. I believe Swing is the more popular choice today as it is easier to work with.
You can tell Swing code from AWT code by the fact that Swing component classes are prefixed with a J. For example Frame is AWT and JFrame is the Swing equivalent.
The Swing components inherit from the AWT components so a JFrame is a Frame.
Windows and Frames
Now the difference between Window and Frame (and JWindow and JFrame respectively) is that a Window is a plain window without any decorations; no borders, no title bar, no window management buttons. A Frame is a Window with all these decorations. And I put emphasis on is a in the above as Frame actually inherits Window.
So the inheritance tree looks like this:
java.lang.Object
        |
java.awt.Component
        |
java.awt.Container
        |
java.awt.Window
        |
    +---+----------------+
    |                    |
javax.swing.JWindow   java.awt.Frame
                         |
                 javax.swing.JFrame
To answer your question about which of the methods is better: note that getWindows() is defined in Window and inherited by JWindow, Frame and JFrame. Calling getWindows() will get all windows, even owner-less dialogues and system windows associated with the application, regardless of whether they have decorations. On the other hand, getFrames() is defined in Frame, so calling getFrames() will get only the windows with decorations (frames). If your application doesn't have any frame-less windows, the two pieces of code you posted will be equivalent.
The Code
Your code is likely fine with either approach, as plain Windows are kind of rare and often transient. The approach you're using is the standard one.
If you want to be absolutely sure you get every window there is and be picky about it, this is how I would write it:
for(Window window: Window.getWindows()) {
SwingUtilities.updateComponentTreeUI(window);
}
For your reference, see the Java Docs for JWindow and JFrame and their super classes. | {
"domain": "codereview.stackexchange",
"id": 14318,
"tags": "java, swing"
} |
Curious natural patterns on the surface of basalt blocks that make up the sidewalk | Question: During one of my walks through the city streets, I noticed that some basalt blocks that make up the sidewalk have in their surface some very curious natural patterns:
The photos above was taken at the Porto Alegre city, Brazil. (Coordinates: 30°01′59″S 51°13′48″W)
What are these fractal-like patterns? How and when are they formed? Is it some kind of organic formation? Is it a living being or some kind of fossil?
Answer: These are most likely manganese dendrites. It is not a fossil, and not organic. These usually form in cracks in rocks, and most likely this slab was broken along an existing crack.
You can read more about it here: https://www.mindat.org/min-26645.html
One comment: this is not basalt. Basalt is black. Most likely some form of limestone. | {
"domain": "earthscience.stackexchange",
"id": 1904,
"tags": "rocks, fossils, igneous"
} |
E: Unable to locate package ros-electric-pr2-arm-navigation | Question:
I am trying to proceed with this tutorial but when I run sudo apt-get install ros-electric-pr2-arm-navigation I get the following error:
Reading package lists... Done
Building dependency tree
Reading state information... Done
E: Unable to locate package ros-electric-pr2-arm-navigation
Help would be appreciated. :)
Originally posted by meep on ROS Answers with karma: 1 on 2016-04-24
Post score: 0
Answer:
In the package name that you're trying to install, you should probably replace electric with your version of ROS (probably indigo or jade).
According to the wiki page for pr2_arm_navigation, the last version of ROS that it was documented or released for was Groovy.
You may be able to do the tutorial without using pr2_arm_navigation, or you may want to look for an equivalent tutorial from MoveIt, which is the current motion-planning library that is commonly used on the PR2.
Originally posted by ahendrix with karma: 47576 on 2016-04-24
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 24452,
"tags": "ros, inverse-kinematics, pr2"
} |
Direction of current detected by ammeter for ions | Question:
If the pipe contains Na+ and Cl- ions, will the ammeter have a reading of $enAV_{Na}$+$enAV_{Cl}$
or $enAV_{Na}$-$enAV_{Cl}$ (as the ions move in opposite directions?)
Answer: Superposition tells you that it is an addition.
A flow of negative charges in a given direction is equivalent to a flow of positive charges in the opposite direction. | {
"domain": "physics.stackexchange",
"id": 81216,
"tags": "electric-circuits, electric-current, batteries, electrochemistry"
} |
If wet skin has a higher coefficient of friction than dry skin, why is it easier to remove a ring from a wet finger than a dry one? | Question: Simply from personal experience it's clear that removing a ring from a finger that is wet is easier than when it is dry (I can pull a ring over the knuckle without even twisting it while wet but have to twist while the skin is dry).
However, from this article https://core.ac.uk/download/pdf/159150092.pdf, it's shown wet skin has a higher coefficient of friction with materials rings are commonly made of. Is there a factor besides the friction coefficient that's relevant here?
Answer: Let me distinguish between damp skin and wet skin.
Make your hands wet and allow the water to soak into your skin. You can speed up the process of water soaking into your skin with a bit of warmth. Rub your wet hands together, vigorously.
As long as there is still plenty of water the layer of water on your skin is acting as a lubricant.
(Water is actually a good lubricant, it's just that for obvious reasons we don't use it as lubricant for metal parts.)
As you are rubbing, you arrive at a point where there is no more water on your skin, but your skin is damp now. If you need grip (say, to screw the lid off an unopened jar), this is the moment to seize.
Continue rubbing and the water evaporates out of your skin, and the amount of grip you have goes down again. | {
"domain": "physics.stackexchange",
"id": 79234,
"tags": "friction"
} |
My minesweeper implementation in Javascript | Question: I'm a beginner coder and I would like to request a critique/review of my minesweeper implementation. The game was made using Bootstrap 3, and the game board itself is a simple HTML DOM table. The game is complete and working, but I feel it could be written in a simpler and more elegant manner. What in my code can be improved or written in a different/better way? On the game page you can choose the size of the board, then you have to click "start" to generate the board and start the game. Ctrl + click on a cell to flag it and prevent an accidental click.
Main function responsible for revealing cells:
function reveal(ev){
var x = parseInt(this.getAttribute('x')),
y = parseInt(this.getAttribute('y'));
if(ev.ctrlKey == false){
if(cells[getFieldId(x,y)].markedByPlayer == false){
if(cells[getFieldId(x,y)].hasBomb == true){
document.getElementById(x+'x'+y).innerHTML = '*';
alert('Bomb! You have lost.');
gameState = 'loss';
revealMap();
}else if(cells[getFieldId(x,y)].neighbourNumber > 0 && cells[getFieldId(x,y)].hasBomb != true){
document.getElementById(x + 'x' +y).innerHTML = cells[getFieldId(x,y)].neighbourNumber;
cells[getFieldId(x,y)].hasBeenDiscovered = true;
pointColor(x,y,'midnightblue');
document.getElementById(x+'x'+y).style.color = 'silver';
revealFields(x,y);
}else{
revealFields(x,y);
}
if(checkVictoryCondition() == bombsNumber){
alert('You have won!');
revealMap();
}
}
}else if(ev.ctrlKey == true){
if(cells[getFieldId(x,y)].markedByPlayer == false){
document.getElementById(x+'x'+y).innerHTML = '!';
cells[getFieldId(x,y)].markedByPlayer = true;
document.getElementById(x+'x'+y).style.color = 'red';
}else if(cells[getFieldId(x,y)].markedByPlayer == true){
document.getElementById(x+'x'+y).innerHTML = '';
cells[getFieldId(x,y)].markedByPlayer = false;
}
}
}
function revealFields(x,y){
if(x<0 || y<0 || x>boardHeight - 1 || y>boardWidth - 1){
return;
}
if(cells[getFieldId(x,y)].neighbourNumber > 0){
document.getElementById(x+ 'x' +y).innerHTML = cells[getFieldId(x,y)].neighbourNumber;
cells[getFieldId(x,y)].hasBeenDiscovered = true;
pointColor(x,y,'midnightblue');
document.getElementById(x+ 'x' +y).removeEventListener('click', reveal, true);
}
if(cells[getFieldId(x,y)].hasBeenDiscovered == true){
return;
}
cells[getFieldId(x,y)].hasBeenDiscovered = true;
pointColor(x,y,'midnightblue');
document.getElementById(x+ 'x' +y).removeEventListener('click', reveal, true);
setTimeout(function(){revealFields(x-1,y);}, 200);
setTimeout(function(){revealFields(x+1,y);}, 200);
setTimeout(function(){revealFields(x,y-1);}, 200);
setTimeout(function(){revealFields(x,y+1);}, 200);
setTimeout(function(){revealFields(x-1,y-1);}, 200);
setTimeout(function(){revealFields(x-1,y+1);}, 200);
setTimeout(function(){revealFields(x+1,y-1);}, 200);
setTimeout(function(){revealFields(x+1,y+1);}, 200);
}
JS fiddle link: https://jsfiddle.net/pL1n8zwj/1/
Answer: Here are my personal thoughts:
Naming:
it's probably better to have placeBombs instead of initBombs, because you have a function called placeBomb.
Sometimes you use i and j, other times you use x and y; it's better to be consistent.
Structure:
You should have a state field with constants indicating the state of the cell instead of various fields set to true or false.
You don't really need to look up the value inside a hash to get the value of a cell. You can have a two-dimensional array and reference the cell directly. This also means that you can remove the getFieldId function.
Something like this, for example:
var states = {
MARKEDBYPLAYER: 1,
DISCOVERED: 2,
UNTOUCHED: 3
};
var boardSize = 16;
var cells = new Array(boardSize);
for(var x=0; x < boardSize; x++) {
cells[x] = new Array(boardSize);
for(var y=0; y < boardSize; y++) {
cells[x][y] = {};
cells[x][y].state = states.UNTOUCHED;
cells[x][y].hasBomb = true;
cells[x][y].neighbourNumber = null;
}
}
if (cells[0][1].hasBomb) {
console.log('Player has lost')
}
In generateBoard you have a bunch of instructions that would read better if you had just a call to a function that sets the required values.
Something like this:
function setDomCell(domCell, x, y, cellSize, cellFontSize) {
domCell.id = x+'x'+y;
domCell.style.width = cellSize;
domCell.style.height = cellSize;
domCell.style.fontSize = cellFontSize;
domCell.setAttribute('x', x);
domCell.setAttribute('y', y);
domCell.addEventListener('click', reveal, true);
}
function generateBoard(){
var domRow, domCell;
for(var y=0; y<boardHeight; y++){
domRow = document.createElement('tr');
domBoard.appendChild(domRow);
for(var x=0; x<boardWidth; x++){
domCell = document.createElement('td');
setDomCell(domCell, x, y, cellSize, cellFontSize);
domRow.appendChild(domCell);
}
}
}
It would be even better if you had an object to take care of that, but take it one step at a time.
In the start function you have a switch to lookup values based on another value.
I'd say that's what a hash is for:
var sizeVars = {
'small': {
boardWidth: 10,
boardHeight: 10,
cellSize: '48px',
cellFontSize: '32px',
bombsNumber: 16
},
'medium': {
boardWidth: 20,
boardHeight: 20,
cellSize: '32px',
cellFontSize: '16px',
bombsNumber: 70
},
'large': {
boardWidth: 30,
boardHeight: 30,
cellSize: '16px',
cellFontSize: '8px',
bombsNumber: 160
}
}
var chosenSize = document.getElementById('size').value;
console.log(sizeVars[chosenSize].boardWidth); | {
"domain": "codereview.stackexchange",
"id": 21112,
"tags": "javascript, beginner, game, twitter-bootstrap, minesweeper"
} |
Calculating the system output using frequency response | Question: Given an input signal $$x(n)=\cos(6\pi n +\frac{\pi}{6})$$ and system $$y(n)=0.5x(n)-0.1x(n-1).$$ In this case, the coefficients of the difference equation are $a_0=1$, $b_0=0.5$, and $b_1=-0.1$. The frequency response of this system is $$H(\omega)= b_0 + b_1e^{-j\omega}= 0.5 - 0.1 \cos(\omega)+0.1 j \sin(\omega)$$
Since the input $x(n)$ is a sinusoid of the form $$x(n)=A \cos(\omega n +\theta_0)$$, we can express the system's output as
$$y(n)=A|H(\omega_0)|\cos\left(\omega_0 n+\theta_0+angle(H(\omega_0))\right)$$
I can see that $A=1$, $\omega_0=6\pi$, and $\theta_0=\frac{\pi}{6}$ in this formula. Hence, I have deduced that
$$|H(\omega_0)|= |0.5 - 0.1 \cos(6\pi)+0.1j \sin(6\pi)| = 0.4$$
Furthermore, since $H(\omega_0) = 0.4$ (a positive real number), I conjecture that $$angle(H(\omega_0))=0$$
Have I made the correct assumptions/conclusions?
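As a numeric sanity check of the steps above, a short sketch evaluating $H(\omega)$ directly (with $b_1 = -0.1$ taken from the difference equation):

```python
import cmath
import math

# H(w) = b0 + b1 * e^{-jw}, with b0 = 0.5 and b1 = -0.1
def H(w):
    return 0.5 - 0.1 * cmath.exp(-1j * w)

w0 = 6 * math.pi  # the input frequency; note 6*pi is a multiple of 2*pi
print(abs(H(w0)))          # ~ 0.4
print(cmath.phase(H(w0)))  # ~ 0.0 (H(w0) is a positive real number)
```

Both values agree with the conjecture: a gain of 0.4 and zero phase shift.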
Answer: Yes, you are correct. The system input is rather strange, though, since $\cos(6\pi n + \frac{\pi}{6})$ is equal to $\cos(\frac{\pi}{6})$ for every integer $n$. | {
"domain": "dsp.stackexchange",
"id": 490,
"tags": "fourier-transform, linear-systems, frequency-response"
} |
Change in wavefunction due to adiabatic potential in time dependent perturbation theory | Question: I've been puzzled equation (2.2) of this paper,
I've looked into time-dependent perturbation theory and the adiabatic theorem and the closest I could come to deriving this was showing that
$$
i \hbar \dot{c_k} = (E_k - i \hbar < \psi_k | \dot{\psi}_n > ) c_k - i \hbar \sum_{n \neq k} \frac{\dot{H}_{kn}}{E_n-E_k} c_n,
$$
where $| \Psi_n(t)> = \sum_n c_n(t) | \psi_n (t) >$ and $H_{kn} = <\psi_k| H |\psi_n>$.
I was wondering if someone could direct me to a derivation of equation (2.2) as I am struggling to connect perturbation theory with this result.
Answer: The adiabatic expansion is not the same as that of time-dependent perturbation theory. In traditional time-dependent perturbation theory we expand in a complete set of states of the original time-independent Hamiltonian. In the adiabatic perturbation series we expand in terms of the eigenstates of the current time-dependent Hamiltonian.
Start from the time dependent Schroedinger equation
$$
i\partial_t|\psi_0(t)\rangle =\hat H(t)|\psi_0(t)\rangle
$$
and expand
$$
|\psi_0(t)\rangle=\sum_{n=0}^\infty a_n|n,t\rangle \exp\left\{-i\int_0^t E_0(t')dt'\right \}.
$$
Choose the complete orthonormal set of states $|n,t\rangle$ to be eigenstates of
the "snapshot" Hamiltonian, $\hat H(t)$,
$$
\hat H(t)|n,t\rangle=E_n(t)|n,t\rangle.
$$
Insert the expansion into the Schroedinger equation, take components, and assume that
$|\psi_0(t)\rangle $ stays close to $|0,t\rangle$, so that $|a_0|$ is close to unity and the other coefficients are small. This leads to
$$
\dot a_0+a_0\langle 0,t|\partial_t|{0},t\rangle \approx 0\\
a_m\approx ia_0\langle{m,t}|{\partial_t}|{0,t}\rangle \frac1{(E_m-E_0)}.
$$
Up to first order in time-derivatives of the states, we find
$$
|\psi_0(t)\rangle =e^{-i\int_0^t E_0(t)dt+i\gamma_{Berry}}\left\{|{0,t}\rangle+
i\sum_{m\ne 0}\frac{|{m,t}\rangle\langle{m,t}|{\partial_t}|{0,t}\rangle}{E_m-E_0}+\ldots\right\}.
$$
Berry's phase is the solution $a_0=\exp \{i\gamma_{Berry}\}$ to the first of the two equations above. It is a phase because $|{0},t\rangle$ being normalized means that $\langle 0,t|\partial_t|{0},t\rangle$ is pure imaginary. This phase factor is needed to take up the
slack between the arbitrary phase choice made when defining
$|0,t\rangle$, and the specific phase selected by the Schroedinger equation
as it evolves the state. | {
"domain": "physics.stackexchange",
"id": 85460,
"tags": "quantum-mechanics, homework-and-exercises, wavefunction, hamiltonian, perturbation-theory"
} |
Almost universal string hashing in $Z_{2^n}$ and sublinear space | Question: Here are two families of hash functions on strings $\vec{x} = \langle x_0 x_1 x_2 \dots x_m \rangle$:
For $p$ prime and $x_i \in \mathbb{Z_p}$, $h^1_{a}(\vec{x}) = \sum a^i x_i \bmod p$ for $a \in \mathbb{Z}_p$. Dietzfelbinger et al. showed in "Polynomial Hash Functions Are Reliable" that $\forall x \neq y, P_a(h^1_a(x) = h^1_a(y)) \leq m/p$.
For $x_i \in \mathbb{Z}_{2^b}$, $h^2_{\vec{a} = \langle a_0 a_1 a_2 \dots a_{m+1}\rangle}(\vec{x}) = (a_0 + \sum a_{i+1} x_i \bmod 2^{2b}) \div 2^b$ for $a_i \in \mathbb{Z}_{2^{2b}}$. Lemire and Kaser showed in "Strongly universal string hashing is fast" that this family is 2-independent. This implies that $\forall x \neq y, P_\vec{a}(h^2_\vec{a}(x) = h^2_\vec{a}(y)) = 2^{-b}$
$h^1$ uses only $\lg p$ bits of space and bits of randomness, while $h^2$ uses $2 b m + 2 b$ bits of space and bits of randomness. On the other hand, $h^2$ operates over $\mathbb{Z}_{2^{2b}}$, which is fast on actual computers.
I'd like to know what other hash families are almost-universal (like $h^1$), but operate over $\mathbb{Z}_{2^b}$ (like $h^2$), and use $o(m)$ space and randomness.
Does such a hash family exist? Can its members be evaluated in $O(m)$ time?
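For reference, $h^1$ is evaluable in $O(m)$ time with Horner's rule; a minimal sketch (the prime and inputs below are only illustrative):

```python
# h^1_a(x) = sum(a^i * x_i) mod p, evaluated in O(m) time via Horner's rule.
# For x != y of length m, Pr_a[h(x) == h(y)] <= m/p (Dietzfelbinger et al.).
def h1(x, a, p):
    acc = 0
    for xi in reversed(x):  # Horner: (...(x_m * a + x_{m-1}) * a + ...) + x_0
        acc = (acc * a + xi) % p
    return acc

p = 101  # illustrative prime; in practice p is chosen much larger than m
print(h1([1, 2, 3], a=1, p=p))  # a = 1 degenerates to a plain sum: 6
print(h1([1, 2, 3], a=2, p=p))  # 1 + 2*2 + 3*4 = 17
```

This uses only the lg p random bits of a and O(lg p) working space, which is exactly the sublinear budget the question asks about.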
Answer: Yes. Wegman and Carter's "New hash functions and their use in authentication and set equality" (mirror) shows a scheme meeting the requirements stated (almost universal, over $\mathbb{Z}_{2^b}$, sublinear space and randomness, linear evaluation time) based on a small number of hash functions drawn from a strongly universal family.
This is sometimes called "tree hashing", and it is used in "Badger - A Fast and Provably Secure MAC" by Boesgaard et al. | {
"domain": "cstheory.stackexchange",
"id": 2066,
"tags": "ds.data-structures, randomness, hash-function"
} |
How can this code for applying filters to data by date and size be improved? | Question: I am going to undergo a code review shortly at work. I feel that my code is too verbose, could log more sensibly, and that my if/else statements are somewhat duplicative/repetitive and could be done better. But I can't seem to put my finger on what I am missing. How would you clean this up so that the code isn't verbose, logs sensible stuff, and doesn't look kiddish? I would also like to make it simpler, in case additional filters need to be added in the future.
FYI - the getDetails method used in the following code merely contains code for constructing a customer-specific "Details" object that contains just two things: the name of the customer and a corresponding schedule for invocation.
/* Filters out invokes that havent reached a size threshold.*/
private boolean filterBySize(CustomerUnit Customer) {
CustomerData customerData = new CustomerData(Customer);
if (customerData.getUsage() > USAGE_THRESHOLD) {
LOGGER.info("Customer's data usage of " + customerData.getUsage()
+ "is higher than threshold of " + USAGE_THRESHOLD + " percent for " + Customer.getId());
return true;
} else {
LOGGER.info("Customer's data usage of " + customerData.getUsage()
+ "is lower than threshold of " + USAGE_THRESHOLD + " percent for " + Customer.getId());
return false;
}
}
/* Filters out invokes that aren't slated for today.*/
private boolean filterByDate(CustomerUnit Customer) {
if (getDetails(Customer).getDOW().equalsIgnoreCase(LocalDate.now().getDayOfWeek().name())) {
LOGGER.info("Customer : " + getDetails(Customer).getName() + " is invoked to run import today");
return true;
} else {
LOGGER.info("Customer : " + getDetails(Customer).getName() + " isn't invoked to run import today, ");
return false;
}
}
/* Applies aforesaid filters */
private void invokeRequestByCustomer(CustomerUnit Customer) throws InterruptedException, UnexpectedDateOrderException {
if (filterByDate(Customer) && filterBySize(Customer)) {
invoker.invoke(getDetails(Customer));
LOGGER.info("Scheduled Customer : " + getDetails(Customer).getName() + " to run import operations at" + getDetails(Customer).getDOW());
} else {
LOGGER.info("Customer : " + getDetails(Customer).getName() + " was filtered, no invoke will be executed for it");
}
}
As of now, I have thought about using an IfThenElse function, somewhat similar to the following:
IfThenElse example
Answer: The comments seem redundant; I'm assuming they're just for this review, but if not, I would remove them. They basically just duplicate what the code says.
Your function names seem misleading. filterBySize seems to be filtering by data usage. filterByDate seems to be filtering by the day of the week. It would be better if they said what they did.
Your filtration functions contain two branches, both of which return. Generally you would only wrap the if side. Everything that doesn't trigger the if would be the else...
CustomerData customerData = new CustomerData(Customer);
if (customerData.getUsage() > USAGE_THRESHOLD) {
return true;
}
// The else is implicit... so just noise
return false;
For me, there's an awful lot of logging across these methods. Maybe that's what's required by your processes, but it seems excessive. We don't have the context for how the code is going to be called; however, I would expect that USAGE_THRESHOLD is the same for every invocation, so do you really need to log it for each customer to say whether they are above or below it? The current day of the week is the same for each invocation; again, do you really need to output it for every customer that's selected? Each of your filters logs the specific reason that the customer has been rejected, so do you need to log the generic message at the top level saying they are being skipped? If you're looking for reuse, then maybe your filterByDate method shouldn't mention the import; that way it could be reused to filter other things (export, perhaps) in the future...
As an aside, I'm not sure about this:
CustomerData customerData = new CustomerData(Customer);
It makes me wonder what the constructor for CustomerData is doing... If the information needed for the filtration isn't in the Customer, then is it reading that extra data in the constructor for CustomerData? That seems wrong...
Another small issue is naming.... variables should start with lower case in Java.... CustomerUnit Customer should be CustomerUnit customer; you can see it's confusing the code preview...
It would also be interesting to see how invokeRequestByCustomer is being called... It looks a lot like it could fit into a stream something like:
allCustomers.stream()
.filter(CustomerFilters::byUsage)
.filter(CustomerFilters::byDay)
.foreach(c->invoke(c)); | {
"domain": "codereview.stackexchange",
"id": 37428,
"tags": "java"
} |
Hydro for Ubuntu Precise armhf on new kernel | Question:
Hi,
I'm intending to test the following: run Ubuntu Precise armhf with ROS Hydro on kernel 3.10. Having read that Hydro is supported up to Raring (kernel 3.8), I think that my combination will result in a non-working situation. Any suggestions?
Thanks
Originally posted by hvn on ROS Answers with karma: 72 on 2014-03-17
Post score: 0
Original comments
Comment by hvn on 2014-03-18:
Ok, just did this and roscore (hydro) reports that it's "unable to contact my own server at [http://:59299]".
Answer:
My first assumption here is that this should work. ROS depends very strongly on libraries such as boost, and really doesn't care which kernel it's running on top of. In fact, most of Ubuntu cares very little about the kernel.
The error you're seeing about "cannot contact my own master" is a network/loopback configuration issue and not related to the kernel. Try pinging your machine's hostname to confirm, then try adding your hostname to one of the loopback addresses in /etc/hosts
Originally posted by ahendrix with karma: 47576 on 2014-03-18
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by hvn on 2014-03-18:
My hostname is added to /etc/hosts. The ping is successful, although it delivers only 1 ping. After this it does nothing but hanging in there:
64 bytes from (127.0.1.1): icmp_req=1 ttl=64 time=0.135 ms
^C
--- ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms
The ^C is given by me after waiting more than 5 minutes.
What about the server address? When I ran Groovy, the address was 11311, now it is 59299. Is this usual ?
Comment by hvn on 2014-03-18:
Tested roscore/ping localhost again. Via ssh, it only pings once which is successful. Direct on system in X terminal, ping goes fine. Via ssh, roscore starts but after message "Usage < 1GB" nothing. Only way to stop = kill PID. Via X terminal, after long wait it again "cannot contact own server".
Comment by ahendrix on 2014-03-19:
It sounds like you have some network driver or network setup problems on you machine. You'll have to get those sorted out before anything that depends on networking (including ROS) will work.
Comment by hvn on 2014-03-20:
which python version goes with either groovy or hydro ? The running version is 2.7.3
Comment by ahendrix on 2014-03-20:
Python 2.7.3 is fine.
Comment by ahendrix on 2014-03-20:
As a side note, roscore should be starting up on port 11311; not 59299. Are you sure you have your ROS_MASTER_URI set correctly?
Comment by hvn on 2014-03-20:
Yes, the port number is what I was wondering about as well. Was used to 11311. Good question..will look at that.
Comment by hvn on 2014-03-26:
Tried the same combination (kernel 3.10.18 and hydro) on an x86 (actually old AthlonXP). No problem, either with or without real-time (xenomai). Standard installation Ubuntu 12.04. On armhf still a no go.
Comment by ahendrix on 2014-03-26:
This could still be a kernel issue with the specific network drivers for your board. The realtime kernels tend to move a number of interrupt handlers to preemptible threads, and this can occasionally cause additional overhead or locking problems, particularly on less-tested platforms such as ARM.
Comment by ahendrix on 2014-03-26:
I would suggest (1) making sure that your hostname and /etc/hosts are set up properly so that your hostname resolves to a loopback address, and that it's pingable, and (2) try to figure out why the ROS core is running on a non-standard port. | {
"domain": "robotics.stackexchange",
"id": 17328,
"tags": "ros, ros-hydro, ubuntu, ubuntu-precise, armhf"
} |
What is the most polar covalent bond? | Question: Which of the following is the most polar covalent bond:
Cl-F
S-O
P-N
C-Cl
It's trivia - I guess. And I got it wrong.
Electronegativity list was given for the following:
F, O, Cl, N, S, C, H, P
My wrong reasoning was:
Why would two highly electronegative atoms bond with each other? Even if they did, given their high electronegativities, both would pull the electron cloud toward their own side (a tug of war, so a weak dipole moment), forming a weak bond. I refused to consider the difference between the electronegativities of the species. (Even if I had, S-O would have been the highest.)
I applied the same to S-O and P-N as well.
C-Cl was my pick, and it was wrong!
C-Cl has an appreciable dipole moment, given C's lower electronegativity (2.55) and Cl's higher one (3.16). In terms of atomic size, the Cl-F bond is shorter than C-Cl, which is questionable.
I checked few other related questions and answers, such as this peculiar one and this.
As I read ron's answer (the latter), the concept is not about a compound like CCl4 which negates the polarity between C-Cl bonds in the tetrahedral geometry. Thus C-Cl is a polar covalent bond. (isn't it?)
Where did I go wrong in reasoning, what did I miss?
PS: Not homework but some theory revision questions. Trying to reinforce/expand my understanding of the concepts.
Answer: There are various electronegativity scales.
Electronegativity values from Wikipedia:
$$\newcommand{\d}[2]{#1.&\hspace{-1em}#2}
\begin{array}{lrl}
\hline
\text{Element} & \text{Pauling} & \text{Allen}& \\
\hline
\text{H} & 2.20 & 2.30 \\
\text{C} & 2.55 & 2.544 \\
\text{N} & 3.04 & 3.066 \\
\text{O} & 3.44 & 3.610 \\
\text{F} & 3.98 & 4.193 \\
\text{P} & 2.19 & 2.253 \\
\text{S} & 2.58 & 2.589\\
\text{Cl} & 3.16 & 2.896 \\
\hline
\end{array}
$$
The compounds are all binaries, so the polarity, $\mu$, is equal to the difference in charge, $\delta$, times the distance, $d$, between ions ($\mu = \delta \cdot d$). Given only a table of electronegativities, the first level of analysis would be to assume that all the bond lengths are equal and that the polarity is simply a function of the difference in electronegativity. The differences are thus:
$$\newcommand{\d}[2]{#1.&\hspace{-1em}#2}
\begin{array}{lrl}
\hline
\text{Element} & \text{Pauling} & \text{Allen}& \\
\hline
\Delta\text{Cl-F} & 0.82 & 1.297 \\
\Delta\text{S-O} & 0.86 & 1.021 \\
\Delta\text{P-N} & 0.85 & 0.813 \\
\Delta\text{C-Cl} & 0.61 & 0.352 \\
\hline
\end{array}
$$
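The arithmetic behind these differences is easy to reproduce (a quick sketch using the values from the table above):

```python
# Electronegativity values from the table above: (Pauling, Allen)
en = {
    "H": (2.20, 2.30), "C": (2.55, 2.544), "N": (3.04, 3.066),
    "O": (3.44, 3.610), "F": (3.98, 4.193), "P": (2.19, 2.253),
    "S": (2.58, 2.589), "Cl": (3.16, 2.896),
}

bonds = [("Cl", "F"), ("S", "O"), ("P", "N"), ("C", "Cl")]

def diffs(scale):
    """Absolute electronegativity difference per bond (scale 0 = Pauling, 1 = Allen)."""
    return {f"{a}-{b}": round(abs(en[a][scale] - en[b][scale]), 3) for a, b in bonds}

pauling, allen = diffs(0), diffs(1)
# The "most polar" bond on each scale is the one with the largest difference
most_pauling = max(pauling, key=pauling.get)
most_allen = max(allen, key=allen.get)
```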
So on the Pauling scale S-O should be the most polar, but on the Allen scale Cl-F should be the most polar. | {
"domain": "chemistry.stackexchange",
"id": 7766,
"tags": "bond, electronegativity, polarity"
} |
how does ros2 implement its network design? | Question:
In Why ROS 2.0, there is a design goal for the ROS 2 network:
we want ROS to behave as well as is possible when network connectivity degrades due to loss and/or delay, from poor-quality WiFi to ground-to-space communication links.
I am curious about how ros achieves this goal? If the network is poor, what will ros2 do, discard messages?
Originally posted by huchaohong on ROS Answers with karma: 47 on 2019-03-22
Post score: 1
Answer:
I am curious about how ros achieves this goal?
The issue with lossy networks and ROS 1 was that it used TCP almost exclusively, and if you lost data, TCP would try to resend it, which would further stress the network and you could end up saturating the network and not even keeping up at all. Especially since the common use case for this was streaming some sensor data over wifi to a workstation to visualize it in rviz, in which case you don't care if you miss a few messages. ROS 1 does have a UDP transport, but it had several issues, for example being unreliable for large data and not being supported uniformly (python never supported it).
DDS has unreliable and reliable communication and graceful degradation, i.e. a reliable publisher can send data to an unreliable subscriber (but not the other way around). But more importantly, DDS's reliable communication happens over UDP with a custom protocol on top (DDSI-RTPS), which has the advantage over TCP that you can control things like how long it will retry to send data, how long it will wait for a NAK, how it will buffer data before sending (like Nagle's algorithm), etc...
Basically, the idea is that DDS's configuration options allow it to be many things between TCP and simple UDP, including a more flexible version of TCP, which in turn allows you to fine tune your communication settings to better work on lossy networks.
This comes at the cost of complexity and some performance (TCP on the local host is really good), but should allow knowledgeable users to get good results in more situations.
If the network is poor, what will ros2 do, discard messages?
To answer this more directly, I'll cop-out and say "it depends". If you're using unreliable the messages will be discarded. If you're using reliable then just like ROS 1 and TCP it will try to send them until your system resource limits are reached, at which point it will discard them. The only difference, as I mentioned above, is that with DDS you can know when they are discarded and have more control over when they will be discarded and how it will try to resend them.
Hope that answers your questions somewhat.
Originally posted by William with karma: 17335 on 2019-03-22
This answer was ACCEPTED on the original site
Post score: 7
Original comments
Comment by huchaohong on 2019-03-22:
Thanks for you detailed explanation, i can understand the design more clearly now.
Comment by lucasw on 2019-03-24:
Is there anything in the works that could make future ROS2 versions achieve that really good ROS1 TCP performance level for inter-process localhost communications?
Comment by Geoff on 2019-03-25:
You can use the intraprocess transport, which is zero-copy for nodes in the same process, or you could direct your DDS implementation to use TCP if it has support for it (Connext DDS does, for example). Depending on the implementation you may still have marshalling (again, Connext does), but it should use the loopback interface. If you must have no marshalling and local-only maximum performance, then intraprocess is the way to go.
Comment by lucasw on 2019-03-26:
It would be nice if TCP could be selected on a topic by topic basis (not clear from looking at http://community.rti.com/docs/html/tcp_transport/main.html#configuring - is a node a 'domain participant'? Or a topic?). Or even better use tcp locally by default and udp when needed to communicate with other systems even on the same topic.
I suppose if the tools (especially python ones) for intra-process become nearly seamless it's less of an issue. Making the command line, rviz, and gui tools work intra process seems challenging- each one would spawn nodes inside of every local process they are trying to interact with to get at the data without having to use the udp dds?
Comment by William on 2019-03-28:
Yes, there are "locators" for TCP in addition to UDP or UDP multicast or even shared memory, it just depends on the implementation if it supports it. I think Fast-RTPS also has a TCP option https://eprosima-fast-rtps.readthedocs.io/en/latest/advanced.html#tcp-transport. However, it's not as awesome as you'd like because it still has to do the RTPS framing which is redundant with a lot of what is in the TCP headers. Depending on the implementation and the size of the messages, it might be much better or only marginally better. More work needs to be done here to see what the benefit might be. | {
"domain": "robotics.stackexchange",
"id": 32718,
"tags": "ros, ros2, network"
} |
Dynamic publishers, subscribers and callbacks | Question:
I am trying to create a multi-robot system (MRS), where multiple robots are communicating with each other over ROS2, Galactic, Ubuntu 20.04. Currently, I'm only running this in simulation, but I have multiple robots in Gazebo, each running on their own independent custom version of a navigation stack.
One of the nodes that I'm working on, is collision avoidance between the robots. Here, based on the number of robots, then I'm trying to ensure collision free operation between the robots. For starters this is simple start-stop mechanisms based on priority, but this isn't really relevant for this.
Based on the number of robots, then I need to subscribe to e.g. the position of each of the robots, and of course have a callback for each of the subscribers. Currently, I've just created N hardcoded subscribers and callbacks, assuming that I am spawning at most N robots. If I spawn more than N robots, then I'm unable to listen to the position of the N+i'th robot.
Currently, the topics of the different robots are encapsulated within a namespace, e.g. robot1/..
Is there a way for me to dynamically create the publisher, subscribers, callback, etc. that I need based on the number of robots that I have in the system? I'm essentially just trying to create identical publishers/subscribers that run on different namespaces. I'm very interested in a general solution to this problem, assuming one exists, as I have other nodes in which I would like to use the same principles.
I've attached a piece of sample code, presenting the way that I'm doing it right now.
self.declare_parameter('number_of_robots', 1)
self.param_number_of_robots = self.get_parameter('number_of_robots').get_parameter_value().integer_value
# Creating publishers
for i in range(self.param_number_of_robots):
if i == 0:
self.pause_pub_r1= self.create_publisher(Bool, '/robot1/pause_or_resume', 10)
if i == 1:
self.pause_pub_r2= self.create_publisher(Bool, '/robot2/pause_or_resume', 10)
if i == 2:
self.pause_pub_r3= self.create_publisher(Bool, '/robot3/pause_or_resume', 10)
if i == 3:
self.pause_pub_r4= self.create_publisher(Bool, '/robot4/pause_or_resume', 10)
# Creating subscribers
for i in range(self.param_number_of_robots):
if i == 0:
#Create subscribers - ROBOT 1
self.subscription_heading_r1 = self.create_subscription(
DebugTrajectoryController,
'/robot1/odom',
self.heading_callback_r1,
qos.qos_profile_sensor_data
)
self.subscription_heading_r1 # prevent unused variable warning
if i == 1:
#Create subscribers - ROBOT 2
self.subscription_heading_r2 = self.create_subscription(
DebugTrajectoryController,
'/robot2/odom',
self.heading_callback_r2,
qos.qos_profile_sensor_data
)
self.subscription_heading_r2 # prevent unused variable warning
if i == 2:
#Create subscribers - ROBOT 3
self.subscription_heading_r3 = self.create_subscription(
DebugTrajectoryController,
'/robot3/odom',
self.heading_callback_r3,
qos.qos_profile_sensor_data
)
self.subscription_heading_r3 # prevent unused variable warning
if i == 3:
#Create subscribers - ROBOT 4
self.subscription_heading_r4 = self.create_subscription(
DebugTrajectoryController,
'/robot4/odom',
self.heading_callback_r4,
qos.qos_profile_sensor_data
)
self.subscription_heading_r4 # prevent unused variable warning
EDIT (Solution):
# Include library
from functools import partial
# Declare input variables
self.declare_parameter('number_of_robots', 1)
self.param_n_robots = self.get_parameter('number_of_robots').get_parameter_value().integer_value
# Declare variables
self.current_robots_poses = []
self.pause_publishers = []
# Dynamically append robots to arrays
for robot in range(self.param_n_robots):
self.current_robots_poses.append([np.nan, np.nan])
self.pause_publishers.append('pause_pub_r'+str(robot+1))
# Subscribers
for i in range(self.param_n_robots):
self.subscription_odom = self.create_subscription(
Odometry,
'/robot'+str(i+1)+'/odom/filtered',
partial(self.odom_callback, index=i),
qos.qos_profile_sensor_data
)
self.subscription_odom # prevent unused variable warning
# Publishers
self.pause_publishers[i] = self.create_publisher(Bool, '/robot'+str(i+1)+'/trajectory_controller/pause_or_resume', 50)
# Callbacks
def odom_callback(self, msg, index):
self.current_robots_poses[index][0] = round(msg.pose.pose.position.x, 2)
self.current_robots_poses[index][1] = round(msg.pose.pose.position.y, 2)
def some_example_function_for_pause_pub(self):
msg = Bool()
msg.data = True
self.pause_publishers[0].publish(msg)
Originally posted by KimJensen on ROS Answers with karma: 55 on 2022-04-05
Post score: 2
Answer:
I think you should use a modified launcher similar to multi_tb3_simulation_launch.py from Nav2:
https://github.com/ros-planning/navigation2/tree/main/nav2_bringup/launch
or Neobotix's (branch: feautre/multi_robot_navigation):
https://github.com/neobotix/neo_simulation2/tree/feature/multi_robot_navigation/launch
or base it on this one:
https://www.theconstructsim.com/spawning-multiple-robots-in-gazebo-with-ros2/
Use ROS2 parameters in your nodes so you can set them up in the launchers of N robots as in these examples (either N nodes launched, or one node with N publishers/subscribers driven by the setup parameter). You can try to use a system variable too.
In general, self.create_subscription and self.create_publisher can use a parameterized topic name (e.g. '/robot'+str(i)+'/odom') so it can be easily written in a loop.
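A plain-Python sketch of this loop pattern, combined with `functools.partial` to reuse one callback per robot index (all names here are illustrative; in a real node each (topic, callback) pair would be passed to `self.create_subscription`):

```python
from functools import partial

n_robots = 3
poses = [[None, None] for _ in range(n_robots)]  # one pose slot per robot
topics = []

def odom_callback(msg, index):
    """One shared callback; `index` says which robot's pose slot to update."""
    poses[index][0] = round(msg["x"], 2)
    poses[index][1] = round(msg["y"], 2)

# In a real node each pair below would go to self.create_subscription(...)
subscriptions = []
for i in range(n_robots):
    topic = '/robot' + str(i + 1) + '/odom'      # parameterized topic name
    callback = partial(odom_callback, index=i)   # same function, pre-bound index
    topics.append(topic)
    subscriptions.append((topic, callback))

# Simulate a message arriving on /robot2/odom
subscriptions[1][1]({"x": 1.234, "y": -0.5})
```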
Originally posted by ljaniec with karma: 3064 on 2022-04-05
This answer was ACCEPTED on the original site
Post score: 3
Original comments
Comment by KimJensen on 2022-04-12:
Thank you for your answer.
For launching the different nodes, then I've made a system based on a combination of the Launch files from Nav2, and those from The Construct. Within this system, then each of the independent nodes are all spawned within their separate namespaces. In this scenario one node contains all relevant information for only one robot.
My problem arises when I have one node, that then has to receive/send information from/to multiple different robots, hence it will have to create pubs, subs and callbacks that correspond to the different namespaces of the different robots.
I'm already using the ROS2 parameters, which you're referring to. However, I've only used one setup parameter being the number of robots, and then I'm trying to create the dynamic pub/sub/callbacks within the node. Do you believe there's another, better, way to do this?
Comment by KimJensen on 2022-04-12:
Didn't know you could make parameterized topic names for pub/sub, that's nice! How do you do with dynamic callbacks then?
I can't really see how one would make the declaration of the callbacks dynamic, and I can't seem to pass an index to the callback, hence using one callback for all of the subscribers seems to not be possible.
Comment by ljaniec on 2022-04-12:
To make sure I understand - do you want parameterized number of subscribers/publishers in your node and want to do different things with topic (and thus different callbacks each time)? First I would rather check if it is 100% needed - maybe you can group these callbacks based on their work (e.g. odometry callback, camera feedback callback etc.) and have a different callback function for each group
E.g.
self.odometry_subscriber = self.create_subscription(
Odometry,
self.topic_prefix + "/odom",
self.odometry_callback,
10,
)
Comment by KimJensen on 2022-04-12:
Assuming I have 4 robots, encapsulated with /robot1, .., namespaces.
What I then in essence want, is to have e.g. 4 subs, one to each of the robot's odom, and then inside the callback store the position of each of the robots.
If I then want to suddenly spawn 5 robots instead, then I'd like for my code to create 5 subs, and store the pos of 5 different robots, using the setup parameter (param_number_of_robots = 5) - so do it without more hardcoding.
My callbacks now:
def odom_callback_r1(self, msg):
self.current_robots_poses[0][0] = round(msg.pose.pose.position.x, 2)
self.current_robots_poses[0][1] = round(msg.pose.pose.position.y, 2)
def odom_callback_r2(self, msg):
self.current_robots_poses[1][0] = round(msg.pose.pose.position.x, 2)
self.current_robots_poses[1][1] = round(msg.pose.pose.position.y, 2)
How would you handle the msgs if four different odom topics all are referring to the same odom_callback?
Comment by ljaniec on 2022-04-12:
I think I would use these suggestions here: https://answers.ros.org/question/346810/ros2-python-add-arguments-to-callback/ to create a callback with argument (functools.partial or lambda function), then rewrite the code to work with loops over elements and an array of size param_number_of_robots.
Comment by KimJensen on 2022-04-13:
This combination of parameterized topic names and passing index to only one callback completely resolved my issue around subscribers and callbacks. Thank you so much!
For those interested, here is how I ended up doing it:
# Subscribers
for i in range(self.param_n_robots):
self.subscription_odom = self.create_subscription(
Odometry,
'/robot'+str(i+1)+'/odom/filtered',
partial(self.odom_callback, index=i),
qos.qos_profile_sensor_data
)
self.subscription_odom # prevent unused variable warning
# Callbacks
def odom_callback(self, msg, index):
self.current_robots_poses[index][0] = round(msg.pose.pose.position.x, 2)
self.current_robots_poses[index][1] = round(msg.pose.pose.position.y, 2)
Comment by KimJensen on 2022-04-13:
When it comes to the publishers, then I get that I can now use parameterized topic names, which partly solves my problem with dynamic publishers. When defining the publishers, then I have to declare a variable name (in example below self.pause_pub_r1 and self.pause_pub_r2), which is later used to publish using that publisher, e.g. "node.pause_pub_r1.publish(msg)". Using the parameterized topic names doesn't allow me to create different publisher variables names, but instead assigns different topics to the same publisher variable. Is it possible to create the pub variable name dynamically as well? So I basically want to ensure, that when I use e.g. a robot1 publisher, then I only publish to one topic, the /robot1/..., robot 2 publisher only publish on /robot2/.., etc.
for i in range(self.param_number_of_robots):
self.pause_pub_r1= self.create_publisher(Bool, '/robot'+str(i+1)+'/trajectory_controller/pause_or_resume', 50)
# How do I get a "self.pause_pub_r2"?
Comment by ljaniec on 2022-04-13:
I think you can use an array of publishers (e.g. self.pause_publishers = []) for it, maybe?
Comment by KimJensen on 2022-04-14:
You did it Łukasz, thank you so much! :D
I'll edit the question with the final version of the code.
Comment by ljaniec on 2022-04-14:
I am glad to help you :) | {
"domain": "robotics.stackexchange",
"id": 37560,
"tags": "ros, ros2, callback, publisher, multi-robot"
} |
Fourier Transform of Kernel Density Estimation - Convolution Theorem? | Question: I am reading this paper about density estimation (Appendix A), where the authors apply a Fourier transform to the estimated probability density (the $X_j$ are a sample of $N$ data points drawn independently from an unknown probability density function $f(x)$):
$$
\hat{f}(x) = \frac1N \sum^N_{j=1} K(x-X_j)
$$
The authors now note that they apply the convolution theorem and the Fourier transform (where the Fourier transform of the function $g(x)$ is defined by $\Phi_g(t) = \int \exp\lbrace\mathrm{i}xt\rbrace g(x) \mathrm{d}x$), and obtain:
$$
\Phi_{\hat{f}}(t) = \kappa(t) \frac1N \sum^N_{j=1} \exp\lbrace\mathrm{i}tX_j\rbrace
$$
where $\kappa(t)$ is the Fourier transform of the Kernel $K$.
Now, I am a bit stuck on how they obtained this form and how exactly they used the convolution theorem here. In particular, what happened to the argument $X_j$ in the kernel?
EDIT: Ok, I think I got it. Is this the right approach?
We apply the Fourier transform and obtain:
$$
\Phi_{\hat{f}}(t) = \frac1N \sum^N_{j=1} \int K(x-X_j) \exp\lbrace\mathrm{i}xt\rbrace\mathrm{d}x
$$
We can easily rewrite this equation as an integral over a Dirac delta distribution:
$$
\Phi_{\hat{f}}(t) = \frac1N \sum^N_{j=1} \iint K(y) \delta(y-(x-X_j)) \exp\lbrace\mathrm{i}xt\rbrace\mathrm{d}x\mathrm{d}y
$$
Integrating over $x$ and using the symmetry of $\delta$ we obtain:
$$
\Phi_{\hat{f}}(t) = \frac1N \sum^N_{j=1} \int K(y) \exp\lbrace\mathrm{i}(y+X_j)t\rbrace\mathrm{d}y
$$
We see that we can move factor involving $X_j$ outside the integral, and are left with the result:
$$
\Phi_{\hat{f}}(t) = \frac1N \sum^N_{j=1} \exp\lbrace\mathrm{i}X_jt\rbrace \int K(y) \exp\lbrace\mathrm{i}yt\rbrace\mathrm{d}y
$$
where the first factor is $\Delta(t) = \frac1N \sum^N_{j=1} \exp\lbrace\mathrm{i}X_jt\rbrace$, and the second factor is $\kappa(t) = \int K(y) \exp\lbrace\mathrm{i}yt\rbrace\mathrm{d}y$, exactly as in the paper.
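As a numerical cross-check of this identity (a sketch, not from the paper: a Gaussian kernel with bandwidth $h$, for which the Fourier transform is $\kappa(t)=\mathrm{e}^{-h^2t^2/2}$ under the convention above):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=50)    # sample X_j drawn from some density
h, t = 0.4, 1.3            # kernel bandwidth and a test frequency

# KDE with a Gaussian kernel K
x = np.linspace(-10.0, 10.0, 4001)

def K(u):
    return np.exp(-u**2 / (2 * h**2)) / (h * np.sqrt(2 * np.pi))

f_hat = K(x[:, None] - X[None, :]).mean(axis=1)

# Left-hand side: Phi_fhat(t) = integral of exp(i x t) f_hat(x) dx (Riemann sum)
dx = x[1] - x[0]
lhs = np.sum(f_hat * np.exp(1j * x * t)) * dx

# Right-hand side: kappa(t) times the empirical characteristic function
kappa = np.exp(-h**2 * t**2 / 2)   # Fourier transform of the Gaussian kernel
rhs = kappa * np.mean(np.exp(1j * t * X))

assert abs(lhs - rhs) < 1e-6
```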
I am still not sure to what extent they applied the convolution theorem here?
Answer: It's a bit contrived, but observe that by the "sifting property" of Dirac-delta function:
$$
K(x-X_j) = K(x) * \delta(x-X_j)
$$
where $*$ denotes convolution and $\delta$ is the Dirac-delta function.
Now you can apply Fourier convolution theorem and write:
$$
\mathcal{F}\{K(x-X_j)\}(t) = \kappa(t) e^{\mathrm{i} t X_j}
$$
where $\mathcal{F}\{g(x)\}(t)$ denotes the Fourier transform of a function $g(x)$.
Finally using the linearity property of the Fourier transform:
$$
\mathcal{F}\{\hat{f}(x)\}(t) = \frac{1}{N} \sum_{j=1}^N \mathcal{F}\{K(x-X_j)\}(t) = \kappa(t) \frac{1}{N} \sum_{j=1}^N e^{\mathrm{i} t X_j}.
$$ | {
"domain": "dsp.stackexchange",
"id": 4785,
"tags": "fourier-transform, convolution, proof, kernel"
} |
What is the fringe in the context of search algorithms? | Question: What is the fringe in the context of search algorithms?
Answer: In English, the fringe is (also) defined as the outer, marginal, or extreme part of an area, group, or sphere of activity.
In the context of AI search algorithms, the state (or search) space is usually represented as a graph, where nodes are states and the edges are the connections (or actions) between the corresponding states. If you're performing a tree (or graph) search, then the set of all nodes at the end of all visited paths is called the fringe, frontier or border.
In the picture below, the grey nodes (the lastly visited nodes of each path) form the fringe.
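In code, the fringe is simply the container of discovered-but-not-yet-expanded nodes: a FIFO queue for breadth-first search, a stack for depth-first search, a priority queue for A*. A minimal breadth-first sketch on a toy graph (hypothetical example):

```python
from collections import deque

graph = {
    "A": ["B", "C"],
    "B": ["D", "E"],
    "C": ["F"],
    "D": [], "E": [], "F": [],
}

def bfs(start, goal):
    """Breadth-first search; `fringe` holds the frontier of unexpanded nodes."""
    fringe = deque([start])        # the fringe / frontier / border
    visited = {start}
    while fringe:
        node = fringe.popleft()    # expand the oldest fringe node
        if node == goal:
            return True
        for nbr in graph[node]:
            if nbr not in visited:
                visited.add(nbr)
                fringe.append(nbr) # newly discovered nodes join the fringe
    return False
```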
The video Example Route Finding by Peter Norvig also gives some intuition behind this concept. | {
"domain": "ai.stackexchange",
"id": 487,
"tags": "terminology, search, definitions"
} |
Is it possible to break the second law of thermodynamics regarding entropy? | Question: The motivation behind my question is that it seems very unlikely that a chunk of metal would "randomly" reach escape velocity and fly away from the Earth, but it happens thanks to NASA and other space programs. If the second law of thermodynamics regarding entropy is statistical, can it be broken on a larger scale?
Answer: Entropy may decrease locally, but if you look at the complete system, entropy is strictly non-decreasing. Living organisms are an excellent example of a subsystem that produces order from the environment, organizing molecules into fantastically complex structures. But although they can reduce entropy locally, they must increase the entropy of their surroundings by at least as much in the process.
So, in a sense, your intuition is actually opposite to the truth - you can decrease entropy of a non-isolated system on a small scale, but at a large enough scale that includes all components in the system, entropy never decreases. Anything that seems to result in a decrease in entropy must increase entropy somewhere else. | {
"domain": "physics.stackexchange",
"id": 71528,
"tags": "thermodynamics, statistical-mechanics, entropy"
} |
Coefficients values in filter in Convolutional Neural Networks | Question: I'm starting to learn how convolutional neural networks work, and I have a question regarding the filters. Are these chosen manually or are they generated by the network in training? If it's the latter, are the coefficients in the filters chosen at random, and then as the network is trained they are "corrected"?
Any help or insight you might be able to provide me in this matter is greatly appreciated!
Answer: The values in the filters are parameters that are learned by the network during training. When creating the network the values are initialized randomly according to some initialization scheme (e.g. Kaiming He initialization) and then during training are updated to achieve a lower loss (i.e. the learning process). | {
"domain": "datascience.stackexchange",
"id": 10828,
"tags": "neural-network, convolutional-neural-network, training"
} |
Noether current: commute or not commute? | Question: I have two related questions.
1) Before promoting the fields in a theory (for example complex scalar $\mathcal{L}=\partial_{\mu}\phi^{\dagger}\partial^{\mu}\phi$) to operators one can commute the fields freely, for instance in the Noether current
\begin{equation}
j^{\mu}=i\left(\phi\partial^{\mu}\phi^{\dagger}-(\partial^{\mu}\phi)\phi^{\dagger}\right)=i\left((\partial^{\mu}\phi^{\dagger})\phi-\phi^{\dagger}\partial^{\mu}\phi\right)
\end{equation}
Why doesn't this affect the operator $j^{\mu}$ after quantization?
If I wanted to determine the current for a Lagrangian already written in terms of operators (e.g. when taking a condensed matter Hamiltonian, an operator, and Legendre transforming it), then the order of the operators suddenly matters, and I get conflicting results for the current.
2) In the explicit scenario of a non-relativistic free particle $H=\frac{p^2}{2m}$ i.e. in second quantized form
\begin{equation}
L=\int \textrm{d}x\mathcal{L} =\int \textrm{d}x\ \Psi^{\dagger}(x,t)(i\partial_t+\frac{1}{2m}\partial_x^2)\Psi(x,t)=\frac{-1}{2m}\int \textrm{d}x\ \partial_x\Psi^{\dagger}(x)\partial_x\Psi(x)
\end{equation}
where I dropped the time dependence because it is not relevant to this question, can I use
\begin{equation}
j(x)=\frac{\partial\mathcal{L}}{\partial(\partial_x\Psi(x))}\Delta\Psi(x)+\frac{\partial\mathcal{L}}{\partial(\partial_x\Psi^{\dagger}(x))}\Delta\Psi^{\dagger}(x)=\frac{-i}{2m}\left[(\partial_x\Psi^{\dagger}(x))\Psi(x)-(\partial_x\Psi(x))\Psi^{\dagger}(x)\right]
\end{equation}
to define the spatial component of the Noether current? This seems to be conflicting with the literature in terms of order of the operators and signs (see e.g. Mahan page 24).
Answer: Yes, the quantisation process has ordering ambiguities, not only in QFT but in standard QM too. You have to postulate some ordering, and the resulting theory usually depends (though rather trivially) on the order of factors.
Due to the trivial nature of the canonical commutation relations, different orderings for Noether charges are usually the same modulo a constant shift:
$$
Q_\mathrm{ordering 1}=Q_\mathrm{ordering\ 2}+\text{constant}
$$
so the standard procedure is to determine this arbitrary constant by declaring that the vacuum has zero charge:
$$
Q|0\rangle\equiv 0
$$
which, in effect, just means to normal order the Noether charge.
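As a minimal illustration of that constant shift (my example, not from the question: a single mode with $[a,a^\dagger]=1$, standing in for each momentum mode of the field), the two orderings of the same classical quantity $a^*a$ differ only by a c-number:
$$
a\,a^\dagger = a^\dagger a + [a,a^\dagger] = a^\dagger a + 1,
$$
and demanding that the charge annihilate the vacuum selects the normal-ordered form $a^\dagger a$.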
I don't own a copy of Mahan, but I don't see anything wrong about your current, except perhaps for the global coefficient (which is not really fixed by Noether's theorem, inasmuch as if $j$ is conserved, so is $cj$ for any $c\in\mathbb C$). You can check if your normalisation of $j$ is the standard one by checking whether
$$
[Q,\Psi]=\Psi
$$
is satisfied as is, or if it comes with some funny coefficient in front of $\Psi$. | {
"domain": "physics.stackexchange",
"id": 36595,
"tags": "quantum-field-theory, lagrangian-formalism, operators, noethers-theorem, quantization"
} |
PR2: basestation network manager no longer valid | Question:
Hi, I'm trying to recover a PR2 in my research lab. It hasn't seen a lot of use in the past few years. I am having issues with the basestation regarding the network manager no longer being valid, and I'm trying to find the recovery ISO (v0.6.0>); it was last upgraded to Ubuntu precise running diamondback according to records, and I don't want to mess with the configurations without a recovery option. As it stands, the basestation can't communicate with the network. I have tried using CloneZilla to make a copy of its present state, but it runs into bitmap errors.
Thanks in advance for any suggestions on recovery.
Originally posted by shane113h on ROS Answers with karma: 3 on 2020-09-11
Post score: 0
Original comments
Comment by gvdhoorn on 2020-09-11:
@fergs seems like someone with some experience getting old "forgotten" robots to work again, perhaps he has an idea.
Answer:
I don't know if there was ever an ISO image for the PR2 or basestation - I think we might have just installed regular old Ubuntu (probably server edition) and then installed a bunch of custom debians. I'm not sure where those debians might be hosted today - but the sources are here: https://github.com/pr2-debs/pr2-basestation. That whole organization has a bunch of deb sources for the robot as well.
Originally posted by fergs with karma: 13902 on 2020-09-15
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by shane113h on 2020-09-16:
Thank you very much for this! | {
"domain": "robotics.stackexchange",
"id": 35524,
"tags": "ros-diamondback, ubuntu-precise, ubuntu, pr2"
} |
Height of data structures | Question: Why does the height complexity of a data structure, generally expressed in terms of $\log n$, not contain a ceiling or floor?
Answer: Asymptotically, $\log n$ and $\lfloor \log n \rfloor$ (or $\lceil \log n \rceil$) are the same: a function is $\Theta(\log n)$ iff it is $\Theta(\lfloor \log n \rfloor)$, because $\log n - 1 < \lfloor \log n \rfloor \le \log n \le \lceil \log n \rceil < \log n + 1$, so the three expressions differ by less than an additive constant. Therefore there is no need for floor or ceiling. | {
"domain": "cs.stackexchange",
"id": 8102,
"tags": "data-structures, balanced-search-trees"
} |
What does it actually mean by Task Planning? | Question: According to the research paper Parallel process decomposition of a dynamic manipulation task: robotic sewing (D. Gershon, DOI: 10.1109/70.56654):
Abstract - ... The task planner approach, as promoted by the Al community, is unsuited to tasks involving interaction with a dynamic environment. ....
... ... ...
Task planning is essentially an off-line activity, based on a "snap-shot" of the world, and is therefore incompatible with dynamic tasks and environments. Task planners require a single model of the world, whereas different models may be appropriate for different objectives, e.g., an octree representation is efficient for obstacle avoidance, whereas a RAPT-style model [36] is better suited to planning compliant motion tasks. Attempts to develop experimental task planner systems revealed additional difficulties, such as image understanding, sensor fusion, error recovery, and the potential for catastrophic failures [29]....
..... ..... ....
Many control schemes, such as adaptive control [15], [27] and
sliding-mode control [41], can accommodate bounded uncertainties in the model of the controlled system. However, these controllers are not generally robust to disturbances generated by a dynamic interacting environment.
Now, Multi-arm robot control system for manipulation of flexible materials in sewing operation seems to implement a multi-arm robot system on the basis of the "Task Planning" approach. And this paper implemented a robot system using adaptive control.
So, what is the catch here? Why does the 1st article say that the use of task planning and adaptive control isn't possible in the case of dynamic tasks?
Answer: I think what you are seeing here is a decade advancement in microprocessor and robotic control technologies.
By the time the second and third papers were written, in 2000 & 1998, the definition of 'task planning' had switched from static pre-planning to dynamic reactive planning.
The difference in microcomputer speeds between 1990 and 1998 is enormous. In 1990 typical CPUs included the 80486 & i860. They were clocked at double digit MHz, and didn't have the power to do the complex floating point operations required for doing task planning in real time.
By the late 90's microprocessors were significantly faster, GHz chips were going into production and CPUs were doing more per cycle. Significantly, SIMD (Single Instruction Multiple Data) pipelines like SSE were being implemented so the big matrix manipulations needed to do dynamic task planning were starting to become efficient enough to use in near real time.
This all made enough difference that suddenly things which were inconceivable at the start of the decade were now viable. | {
"domain": "robotics.stackexchange",
"id": 1920,
"tags": "first-robotics, planning"
} |
Fourth Equation of Motion | Question: When I was studying motion, my teacher asked us to derive the equations of motion. I too ended up deriving the fourth equation of motion, but my teacher said this is not an equation. Is this derivation correct?
\begin{align}
v^2-u^2 &= 2ax \\
(v+u)(v-u) &= 2 \left(\frac{v-u}{t}\right)x \\
(v+u)t &= 2x \\
vt+ut &= 2x \\
vt+(v-at)t &= 2x \\
2vt-at^2 &= 2x \\
x &= vt- \frac{at^2}{2}
\end{align}
And why is it wrong to say that this is the fourth equation of motion?
Given the 3 equations of motion:
\begin{align}
v&=u+at \\
x&=ut+ \frac{at^2}{2} \\
v^2-u^2&=2ax
\end{align}
Answer: The problem of perception as to "What is a new Equation of Motion?" seems to originate with the dogmatic teaching of the Three Equations Of Motion as a Set of Results to be Learned.
They are in fact three results derived from the distillation of Newton's Laws:
$$\mathbf f = \dfrac {\mathrm d} {\mathrm d t} (m \mathbf v)$$
This differential equation is then solved with $\mathbf f$ set to a constant (and $m$ taken for granted as being constant also).
This of course can't be done at the elementary level at which the SUVAT equations are initially introduced. So the three convenient "equations of motion" are introduced instead, in a way that the students can get their heads round them.
Whether an equation is given an official Name to Be Remembered is not all that important. What is important is the ability to use them. Working out that fourth equation from the given three is actually a worthy exercise in its own right. Granted it is not a particularly profound equation, as it can be obtained from the other three. But -- get this -- each of the other three has also merely been derived from other equations.
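That observation is easy to verify mechanically. A quick symbolic sketch (using sympy) confirms that both the given third equation and the derived fourth follow from the first two:

```python
import sympy as sp

u, a, t = sp.symbols('u a t')
v = u + a * t              # first equation of motion: v = u + at
x = u * t + a * t**2 / 2   # second equation of motion: x = ut + at^2/2

# third equation: v^2 - u^2 = 2 a x
assert sp.simplify(v**2 - u**2 - 2 * a * x) == 0
# derived fourth equation: x = v t - a t^2 / 2
assert sp.simplify(x - (v * t - a * t**2 / 2)) == 0
```

Both assertions pass, so the "fourth equation" is an identity on the same family of constant-acceleration trajectories as the other three.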
For your teacher to dismiss it as "not an equation" is appalling.
It may be the case that the teacher is teaching from the book, and not from his or her own expertise in the subject. It can be disheartening to be taught by teachers who do not understand the subject they are teaching, but hang on in there, it gets better as you go on in your schooling. | {
"domain": "physics.stackexchange",
"id": 76930,
"tags": "newtonian-mechanics, kinematics"
} |
Is the square-root-of-SWAP for a pair of 4-dimensional qudits isomorphic to two square-root-of-SWAPs for two pairs of qubits? | Question: This may be a very naïve question indicative of a lot of confusion, but I am trying to understand more about Hamiltonian simulation. I'm starting to intuit that the $n^{th}$-root-of-SWAP acting on a single pair of qubits somehow corresponds to what's meant by Hamiltonian simulation of a SWAP gate (much as a Lie algebra is to a Lie group). But what about the $n^{th}$-root-of-SWAP for qutrits or qudits, with $d=4$?
For example, consider a pair of SWAP gates acting on four qubits; the first SWAP gate swaps the first two qubits, and the second SWAP gate swaps the last two qubits. That is, consider the four-qubit gate $\mathsf{SWAP}\otimes\mathsf{SWAP}$.
The $16\times 16$ matrix of $\mathsf{SWAP}\otimes\mathsf{SWAP}$ is as below:
$$\mathsf{SWAP}\otimes\mathsf{SWAP}=\begin{pmatrix}
1 & & & & & & & & & & & & & & & &\\
& & 1 & & & & & & & & & & & & & &\\
& 1 & & & & & & & & & & & & & & &\\
& & & 1 & & & & & & & & & & & & &\\
& & & & & & & & 1 & & & & & & & &\\
& & & & & & & & & & 1 & & & & & &\\
& & & & & & & & & 1 & & & & & & &\\
& & & & & & & & & & & 1 & & & & &\\
& & & & 1 & & & & & & & & & & & &\\
& & & & & & 1 & & & & & & & & & &\\
& & & & & 1 & & & & & & & & & & &\\
& & & & & & & 1 & & & & & & & & &\\
& & & & & & & & & & & & 1 & & & &\\
& & & & & & & & & & & & & & 1 & &\\
& & & & & & & & & & & & & 1 & & &\\
& & & & & & & & & & & & & & & 1 &\\
\end{pmatrix}.$$
Notice that $\mathsf{SWAP}\otimes\mathsf{SWAP}$ is unitary (by virtue of it being a permutation matrix) and also hermitian (by virtue of it being symmetric around the diagonal). This I believe is isomorphic to a SWAP gate acting to swap a pair of qudits ($d=4$).
I'd like to see if I can somehow do a local Hamiltonian simulation to simulate such a gate, which may be part of a larger simulation. For example, I'd like to act locally on one pair of qubits, and also act locally on the other pair of qubits; but I'm not sure if I'm missing something. The matrix $\mathsf{SWAP}\otimes\mathsf{SWAP}$ does not seem to be composed of a sum of two separate hermitian matrices.
Nonetheless, it might make sense to simulate such a matrix with repeated applications of an "$n^{th}$-root-of-SWAP" on the first pair of qubits and an "$n^{th}$-root-of-SWAP" on the second pair of qubits?
Is $\sqrt {\mathsf{SWAP}}$ acting on a pair of 4-dimensional qudits isomorphic to $\sqrt {\mathsf{SWAP}}\otimes\sqrt{\mathsf{SWAP}}$ acting on two pairs of qubits?
Answer: Swapping 4-level qudits is equivalent to swapping pairs of qubits. Because you can encode a 4-level qudit into a pair of qubits. Similarly, swapping 8-level qudits will be equivalent to swapping triplets of qubits. The swap gate is convenient enough that this should hold regardless of how you map the 4-level qudit into the qubits (e.g. whether you map qudit |2> to big endian qubits |10> or little endian qubits |01>).
That being said, in general it is not the case that $\sqrt{U \otimes U}$ is defined to be $\sqrt{U} \otimes \sqrt{U}$ even though $(\sqrt{U} \otimes \sqrt{U})^2 = U \otimes U$, so you can't go from "swapping qubit pairs is the same as individual qubit swaps" to "the square root of swapping qubit pairs is the same as individual square roots of swapping qubits".
Let the principle square root of $U$ be what you get by computing its eigendecomposition, halving the angles of the eigenvalues in polar coordinates, then putting the matrix back together. Under this definition $\sqrt{SWAP \otimes SWAP}$ looks like this:
Whereas $\sqrt{SWAP} \otimes \sqrt{SWAP}$ looks like this:
You can see they're not the same. In particular, the former entangles qubits that the latter does not. | {
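The difference is easy to check numerically. A small plain-numpy sketch: `principal_sqrt` implements exactly the eigendecompose-and-halve-the-angles definition above, and the asserts confirm that both candidates square to $\mathsf{SWAP}\otimes\mathsf{SWAP}$ while being different matrices:

```python
import numpy as np

def principal_sqrt(U):
    # eigendecompose, halve the eigenvalue angles, reassemble
    w, V = np.linalg.eig(U)
    sqrt_w = np.sqrt(np.abs(w)) * np.exp(0.5j * np.angle(w))
    return V @ np.diag(sqrt_w) @ np.linalg.inv(V)

SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]], dtype=complex)

big = np.kron(SWAP, SWAP)                # swap of two 4-level qudits
A = principal_sqrt(big)                  # sqrt(SWAP (x) SWAP)
B = np.kron(principal_sqrt(SWAP),
            principal_sqrt(SWAP))        # sqrt(SWAP) (x) sqrt(SWAP)

assert np.allclose(A @ A, big)   # both are square roots of SWAP (x) SWAP...
assert np.allclose(B @ B, big)
assert not np.allclose(A, B)     # ...but they are different matrices
```

The spectra already show why: the principal square root `A` only has eigenvalues $1$ and $i$, while `B` also has $i\cdot i = -1$ as an eigenvalue.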
"domain": "quantumcomputing.stackexchange",
"id": 3112,
"tags": "quantum-gate, hamiltonian-simulation"
} |
Effect of different graph operations at algebraic connectivity of graph laplacian? | Question: The algebraic connectivity of a graph G is the second-smallest eigenvalue of the Laplacian matrix of G. This eigenvalue is greater than 0 if and only if G is a connected graph. The magnitude of this value reflects how well connected the overall graph is.
For example, "adding self-loops" does not change the Laplacian eigenvalues (in particular the algebraic connectivity) of a graph, because laplacian(G) = D - A is invariant with respect to adding self-loops.
My question is:
Has anyone studied the effect of different operations (such as edge contraction) on the spectrum of the Laplacian?
Do you know good references?
Remark: the exact definition of the algebraic connectivity depends on the type of Laplacian used. For this question I prefer to use Fan Chung's definition in Spectral Graph Theory. In this book Fan Chung uses a rescaled version of the Laplacian, eliminating the dependence on the number of vertices.
Answer: Intuitively operations that preserve connectivity will not decrease the eigenvalues. For example, adding edges to the graph does not decrease the connectivity.
In general, if H is a subgraph of a graph G, by interlacing we know that the i-th largest Laplacian eigenvalue of H is no larger than the i-th largest Laplacian eigenvalue of G. A proof can be found in Proposition 3.2.1 of the book "Spectra of graphs" by Brouwer and Haemers. Note that the definition of Laplacian used in the book is not normalized; it has node degrees on the diagonal and -1 (or 0 if there is no edge) in the off-diagonal entries. | {
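As a concrete sketch of the edge-adding case (plain numpy, using the unnormalized Laplacian $L = D - A$ as in Brouwer and Haemers): adding the edge $(3,0)$ to the path $P_4$, turning it into the cycle $C_4$, raises the algebraic connectivity from $2-\sqrt{2}$ to $2$:

```python
import numpy as np

def laplacian(n, edges):
    # unnormalized Laplacian L = D - A
    A = np.zeros((n, n))
    for i, j in edges:
        A[i, j] = A[j, i] = 1.0
    return np.diag(A.sum(axis=1)) - A

def algebraic_connectivity(L):
    # second-smallest eigenvalue of the (symmetric) Laplacian
    return np.sort(np.linalg.eigvalsh(L))[1]

path = [(0, 1), (1, 2), (2, 3)]                        # P4
lam_path = algebraic_connectivity(laplacian(4, path))  # 2 - sqrt(2)
lam_cycle = algebraic_connectivity(laplacian(4, path + [(3, 0)]))  # C4 -> 2

assert abs(lam_path - (2 - 2**0.5)) < 1e-9
assert abs(lam_cycle - 2.0) < 1e-9
assert lam_cycle >= lam_path   # adding an edge did not decrease connectivity
```

This is the interlacing statement in miniature: $P_4$ is a subgraph of $C_4$, and each of its Laplacian eigenvalues is bounded by the corresponding eigenvalue of $C_4$.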
"domain": "cstheory.stackexchange",
"id": 743,
"tags": "graph-theory, co.combinatorics, spectral-graph-theory"
} |
Ginzburg-Landau boundary condition in the 1D no fields case | Question: It is commonly seen that in finding the coherence length from Ginzburg-Landau, that the following equation is found:
$\frac{\partial^2 f}{\partial \eta^2} + f(1-f^2) = 0$
which is for a superconductor filling the infinite half space of $x$ from 0 to infinity. This is the normalized version where $\eta = x/\xi$. Now usually one boundary condition used is that $f(x=0) = 0$, which seems reasonable, you would expect the order parameter might go to zero at the boundary. The solution given is
$f(\eta) = \tanh \left(\frac{\eta}{\sqrt{2}} \right)$
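A quick symbolic sketch (sympy) confirms that this profile satisfies the normalized equation and the Dirichlet condition at the boundary:

```python
import sympy as sp

eta = sp.symbols('eta', real=True)
f = sp.tanh(eta / sp.sqrt(2))

# f'' + f (1 - f^2) should vanish identically
assert sp.simplify(sp.diff(f, eta, 2) + f * (1 - f**2)) == 0
# f(0) = 0 at the surface, and f -> 1 deep inside the superconductor
assert f.subs(eta, 0) == 0
assert sp.limit(f, eta, sp.oo) == 1
```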
However if one considers that usual boundary condition
$ (-i \hbar \nabla \psi - \frac{e^*}{c}{\bf A} \psi)|_{n} = 0$
Then this means surely that $\frac{\partial f}{\partial x}|_{x = 0} = 0$ (1) at the boundary. This means that the previously given solution cannot be correct, and the solution must just be $f = 1$.
However I can see that the current across the surface is still zero for the usual tanh solution. I would like to know, why is the boundary condition (1) disregarded here?
Answer: Both the $\psi=\psi^*=0$ Dirichlet boundary condition and the Neumann-like condition ${\bf n}\cdot (\nabla-2ie{\bf A}/\hbar)\psi=0$ make mathematical sense for the Landau-Ginzburg free energy functional, in that either makes the integrated-out boundary term vanish. It's really physics that selects the one you should use. If something (a change of material, say) forces the order parameter to zero at the surface then Dirichlet wins. If the order parameter is non-zero at the surface then Neumann makes sense. As the coherence length is defined by bending the order parameter, I think Dirichlet is OK.
"domain": "physics.stackexchange",
"id": 51175,
"tags": "mathematical-physics, superconductivity"
} |
Scala mastermind solver, which is the most scala-ish | Question: I am trying to express the classical TDD kata of mastermind in the most idiomatic scala I can. Here is a scalatest, test suite :
package eu.byjean.mastermind
import org.junit.runner.RunWith
import org.scalatest.junit.JUnitRunner
import org.scalatest.FlatSpec
import org.scalatest.matchers.ShouldMatchers
@RunWith(classOf[JUnitRunner])
class MastermindSpec extends FlatSpec with ShouldMatchers {
behavior of "Mastermind"
it should "find no good and no misplaced for a wrong guess" in {
Mastermind.check (guess='red)(secret='blue) should equal (0,0)
}
it should "find all good if guess equals secret of one" in {
Mastermind.check('blue)('blue) should equal (1,0)
}
it should "find one good if guess has one good in a secret of two" in {
Mastermind.check('blue,'red)('blue,'green) should equal (1,0)
Mastermind.check('green,'red)('blue,'red) should equal (1,0)
}
it should "find two good if guess has two good in a secret of three" in {
Mastermind.check('blue,'red,'red)('blue,'green,'red) should equal (2,0)
}
it should "find one misplaced blue in a guess of two" in {
Mastermind.check('blue,'red)('green,'blue) should equal (0,1)
}
it should "find two misplaced in a guess of four" in {
Mastermind.check('blue,'blue,'red,'red)('green,'yellow,'blue,'blue) should equal (0,2)
}
it should "find two misplaced colors in a guess of four" in {
Mastermind.check('green,'blue,'yellow,'red)('green,'yellow,'blue,'blue) should equal (1,2)
}
}
and my current implementation
package eu.byjean.mastermind
object Mastermind {
def check(guess: Symbol*)(secret: Symbol*): (Int, Int) = {
val (goodPairs, badPairs) = guess.zip(secret).partition { case (x, y) => x == y }
val(guessMiss,secMiss)=badPairs.unzip
val misplaced=guessMiss.sortBy(_.toString).zip(secMiss.sortBy(_.toString())).filter{case (x,y)=>x==y}
(goodPairs.size, misplaced.size)
}
}
In the imperative form, some people employ a map, counting for each color the number of occurrences of the color in the secret and in the guess (only as far as misplaced pegs go, of course).
I tried an approach using foldLeft on the badPairs list of tuples, which led me to variable naming problems and to calling map.updated twice in the foldLeft closure, which didn't seem right.
Any thoughts on how to improve the above code? It doesn't feel perfect. How would you improve it along the following axes:
bugs: have I missed something obvious?
performance: this version with zip, unzip, sort, zip again most likely isn't the fastest
readability: how would I define an implicit Ordering for Symbol so I don't have to pass _.toString?
idiomatic scala: is there globally a more "scala-ish" way of solving this?
Thanks
Answer: All this zipping and unzipping looks really convoluted. I think the problem here is getting too much enamored with what you can do with a Scala collection instead of focusing on the problem at hand.
One way of decreasing zip cost is to do it on a view. This way, no intermediary collection is created -- though that depends a bit on the methods being called. At any rate, zip is one of these methods.
It wouldn't work on your case, however, because you then go on to partition and unzip -- I'd be wary of calling such methods on a view, because they may end up being more expensive, not less.
I think the map approach is better, because it avoids the O(n log n) sorting. A HashMap can be built in O(n), and looked up in constant time. This is how I'd do it:
object Mastermind {
def check(guess: Symbol*)(secret: Symbol*): (Int, Int) = {
val numOfGoodPairs = guess.view zip secret count { case (a, b) => a == b }
val guessesForSymbol = guess.groupBy(identity).mapValues(_.size)
val secretsForSymbol = secret.groupBy(identity).mapValues(_.size)
val unorderedGuesses = for {
symbol <- guessesForSymbol.keys.toSeq
secrets <- secretsForSymbol get symbol
guesses = guessesForSymbol(symbol)
} yield secrets min guesses
val numOfMisplaced = unorderedGuesses.sum - numOfGoodPairs
(numOfGoodPairs, numOfMisplaced)
}
}
The first line should be pretty fast -- it will iterate once through both guess and secret, counting each time they are equal.
Computing both maps is more expensive, but it should be cheaper than sorting. Note that mapValues return a view of the values -- it does not copy any data. This is particularly interesting because it will only compute the size of map elements for keys present in both guess and secret. It is a small optimization, but you gain it almost for free.
We avoid most of the complexity by simply subtracting the number of correct guesses from the number of unordered guesses. | {
"domain": "codereview.stackexchange",
"id": 507,
"tags": "functional-programming, scala"
} |
Fast approximate optical flow / image shift | Question: I need to detect how fast a camera is panning (either horizontal/vertical) to give a warning to the operator to slow down.
The entire image is moving as a block, I don't need an actual direction (although H or V would be a bonus) and I only need an approximate magnitude - ie. trigger if more than 'N' pixels shift between frames.
Images are large and generally uniform low contrast scenes, I don't have any obvious highlights to track. I need to do it in realtime (60fps) and without using all of the CPU.
Naive solution: pick an RoI in the center, find edges, calculate similarity between pairs of frames, shift one of the frames left/right/up/down by a pixel, repeat - find minima.
I wondered if there was a smarter solution?
Answer: Probably if you are looking for a simple method, it is to apply the standard motion estimation algorithms which are very mature in the MPEG class of compression codecs. They are easy to understand and I guess you will find a lot of ready-to-use code. These algorithms produce motion vectors on a block-by-block basis - you can then find the most prominent cluster and take the average motion vector direction and magnitude.
MPEG-4 has another key concept called "global motion compensation", a technique which first attempts to estimate and compensate for camera motion and panning. The beauty is that such methods can be simple or exhaustive depending on complexity. Here is one example paper and another paper for the same.
In general, camera panning and motion estimation is quite an established research domain. Here is a reference: paper and another paper.
On this subject you will find rigorous and accurate algorithms as well as simple and fast ones.
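As a concrete alternative to the exhaustive shift-and-compare search from the question, phase correlation recovers a global translation from a single FFT pair, with no features to track. A minimal numpy sketch (the frame size and the small regularization constant are arbitrary choices):

```python
import numpy as np

def estimate_shift(ref, moved):
    """Estimate the global (row, col) translation between two frames
    by phase correlation: one FFT pair, no feature tracking needed."""
    R = np.conj(np.fft.fft2(ref)) * np.fft.fft2(moved)
    R /= np.abs(R) + 1e-12                      # keep only the phase
    corr = np.abs(np.fft.ifft2(R))
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # map the circular peak position to a signed shift
    return tuple(p - n if p > n // 2 else p for p, n in zip(peak, ref.shape))

# synthetic check: pan a random low-contrast frame by (5, -3) pixels
rng = np.random.default_rng(0)
frame = 0.1 * rng.random((64, 64))              # uniform low-contrast scene
panned = np.roll(frame, (5, -3), axis=(0, 1))
dy, dx = estimate_shift(frame, panned)          # recovers (5, -3)
```

The magnitude `abs(dy) + abs(dx)` can be compared against the "more than N pixels" threshold directly, and the two FFTs dominate the cost, so running this on a downsampled ROI should stay well within a 60 fps budget.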
"domain": "dsp.stackexchange",
"id": 241,
"tags": "image-processing, optical-flow, motion-detection"
} |
Why does the state-action value function, defined as an expected value of the reward and state value function, not need to follow a policy? | Question: I often see that the state-action value function is expressed as:
$$q_{\pi}(s,a)=\color{red}{\mathbb{E}_{\pi}}[R_{t+1}+\gamma G_{t+1} | S_t=s, A_t = a] = \color{blue}{\mathbb{E}}[R_{t+1}+\gamma v_{\pi}(s') |S_t = s, A_t =a]$$
Why does expressing the future return at time $t+1$ as the state value function $v_{\pi}$ change the expected value under the policy into a general expected value?
Answer: Let's first write the state-value function as
$$q_{\pi}(s,a) = \mathbb{E}_{p, \pi}[R_{t+1} + \gamma G_{t+1} | S_t = s, A_t = a]\;,$$
where $R_{t+1}$ is the random variable that represents the reward gained at time $t+1$, i.e. after we have taken action $A_t = a$ in state $S_t = s$, while $G_{t+1}$ is the random variable that represents the return, the sum of future rewards. This allows us to show that the expectation is taken under the conditional joint distribution $p(s', r \mid s, a)$, which is the environment dynamics, and future actions are taken from our policy distribution $\pi$.
As $R_{t+1}$ depends on $S_t = s, A_t = a$ and $p(s', r \mid s, a)$, the only random variable in the expectation that is dependent on our policy $\pi$ is $G_{t+1}$, because this is the sum of future reward signals and so will depend on future state-action values. Thus, we can rewrite again as
$$q_{\pi}(s,a) = \mathbb{E}_{p}[R_{t+1} + \gamma \mathbb{E}_{\pi}[ G_{t+1} |S_{t+1} = s'] | S_t = s, A_t = a]\;,$$
where the inner expectation (coupled with the fact its inside an expectation over the state and reward distributions) should look familiar to you as the state value function, i.e.
$$\mathbb{E}_{\pi}[ G_{t+1} |S_{t+1} = s'] = v_{\pi}(s')\;.$$
This leads us to get what you have
$$q_{\pi}(s,a) = \mathbb{E}_{p}[R_{t+1} + \gamma v_{\pi}(s') | S_t = s, A_t = a]\;,$$
where the only difference is that we have made clear what our expectation is taken with respect to.
The expectation is taken with respect to the conditional joint distribution $p(s', r \mid s, a)$. We usually also include the $\pi$ subscript to denote that the expectation is taken with respect to the policy as well, but here the policy does not affect the first term, since we have conditioned on knowing $A_t = a$; it applies only to the future reward signals.
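The last identity is easy to make concrete. A tiny sketch with a made-up two-state MDP (the dynamics `p`, the values `v`, and the discount are invented numbers, not a standard example): $q_\pi(s,a)$ is computed as an average over the dynamics alone, with the policy entering only through $v_\pi$:

```python
# p[s][a] lists (probability, next_state, reward) triples, i.e. p(s', r | s, a)
gamma = 0.9
p = {
    0: {0: [(0.8, 0, 1.0), (0.2, 1, 0.0)],
        1: [(1.0, 1, 5.0)]},
    1: {0: [(1.0, 1, 0.0)],
        1: [(1.0, 0, 2.0)]},
}
v = {0: 10.0, 1: 4.0}   # state values v_pi, assumed given for some policy pi

def q(s, a):
    # q_pi(s,a) = E_p[ R_{t+1} + gamma * v_pi(S_{t+1}) | S_t = s, A_t = a ]
    # the expectation is over the dynamics p only; pi is hidden inside v_pi
    return sum(prob * (r + gamma * v[s2]) for prob, s2, r in p[s][a])

q01 = q(0, 1)   # 1.0 * (5.0 + 0.9 * 4.0) = 8.6
q00 = q(0, 0)   # 0.8 * (1.0 + 9.0) + 0.2 * (0.0 + 3.6) = 8.72
```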
"domain": "ai.stackexchange",
"id": 2057,
"tags": "reinforcement-learning, value-functions, bellman-equations, expectation"
} |
Trying to understand this Quicksort Correctness proof | Question: This proof is a proof by induction, and goes as follows:
P(n) is the assertion that "Quicksort correctly sorts every input array of length n."
Base case: every input array of length 1 is already sorted (P(1) holds)
Inductive step: fix n ≥ 2. Fix some input array of length n.
Need to show: if P(k) holds for all k < n, then P(n) holds as well
He then draws an array A partitioned around some pivot p. So he draws p, calls the part of the array that is < p the 1st part, and the part that is > p the 2nd part. The length of part 1 is k1, and the length of part 2 is k2. By the correctness proof of the Partition subroutine (proved earlier), the pivot p winds up in the correct position.
By inductive hypothesis: 1st, 2nd parts get sorted correctly by recursive calls. (Using P(K1),P(k2))
So: after recursive calls, entire array is correctly sorted.
QED
My confusion: I have a lot of problem seeing exactly how this proves the correctness of it. So we assume that P(k) does indeed hold for all natural numbers k < n.
Most of the induction proofs I had seen so far go something like: Prove base case, and show that P(n) => P(n+1). They usually also involved some sort of algebraic manipulation. This proof seems very different, and I don't understand how to apply the concept of Induction to it. I can somewhat reason that the correctness of the Partition subroutine is the key. So is the reasoning for its correctness as follows: We know that each recursive call, it will partition the array around a pivot. This pivot will then be in its rightful position. Then each subarray will be further partitioned around a pivot, and that pivot will then be in its rightful position. This goes on and on until you get an subarray of length 1, which is trivially sorted.
But then we're not assuming that P(k) holds for all k < n....we are actually SHOWING it does (since the Partition subroutine will always place one element in its rightful position.) Are we not assuming that P(k) holds for all k
Answer: We are indeed assuming $P(k)$ holds for all $k < n$. This is a generalization of the "From $P(n-1)$, we prove $P(n)$" style of proof you're familiar with.
The proof you describe is known as the principle of strong mathematical induction and has the form
Suppose that $P(n)$ is a predicate defined on $n\in \{1, 2, \dotsc\}$. If we can show that
$P(1)$ is true, and
$(\forall k < n \;[P(k)])\Longrightarrow P(n)$
Then $P(n)$ is true for all integers $n\ge 1$.
In the proof to which you refer, that's exactly what's going on. To use quicksort to sort an array of size $n$, we partition it into three pieces: a first subarray of $k$ elements, the pivot (which will be in its correct place), and the remaining subarray of size $n-k-1$. By the way partition works, every element in the first subarray will be less than or equal to the pivot and every element in the other subarray will be greater than or equal to the pivot, so when we recursively sort the first and last subarrays, we will wind up having sorted the entire array.
We show this is correct by strong induction: since the first subarray has $k<n$ elements, we can assume by induction that it will be correctly sorted. Since the second subarray has $n-k-1<n$ elements, we can assume that it will be correctly sorted. Thus, putting all the pieces together, we will wind up having sorted the array. | {
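The inductive structure maps directly onto code. A short functional sketch (not the in-place Partition subroutine of the original lecture, just the same recursive shape, with comments marking where the base case and inductive hypothesis enter):

```python
def quicksort(a):
    # base case P(1): arrays of length <= 1 are already sorted
    if len(a) <= 1:
        return a
    pivot = a[0]
    left = [x for x in a[1:] if x <= pivot]    # k1 elements, k1 < n
    right = [x for x in a[1:] if x > pivot]    # k2 = n - k1 - 1 elements
    # inductive hypothesis P(k1), P(k2): the smaller recursive calls
    # sort their inputs correctly, so gluing around the pivot sorts a
    return quicksort(left) + [pivot] + quicksort(right)
```

Each recursive call acts on a strictly smaller array, which is exactly why strong induction (assume P(k) for all k < n) is the right proof shape here, rather than plain P(n-1) ⟹ P(n).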
"domain": "cs.stackexchange",
"id": 7338,
"tags": "correctness-proof, induction, quicksort"
} |
How do strings (in string theory) get created or destroyed? | Question: Something has been bothering me about string theory. From what I can tell, string theory is a first quantized theory of the position and momenta along a string, and that the fields of QFT are not really fundamentally needed anymore. However, we allowed for creation of particles in QFT by "exciting" a fundamental field. If there is no corresponding "string field" which we can disturb to create new strings, how would strings get created or destroyed? If they are not created or destroyed, how does first quantized string theory handle processes where particle number is seen to change (e.g. decays, etc)?
Answer: First- and second-quantization
In quantum theories, there are two possible formulations: first- and second-quantized. This is true irrespective of the type of theory concerned (this can be particles - i.e. standard QFT -, strings, etc.). The difference between the two is the following:
First-quantized: all the space(time) positions $x^\mu$ of the classical theory are treated as the dynamical variables. They are mapped to quantum operators: $x^\mu \to \hat X^\mu$. In order to define "time" evolution one needs to define a time, that can be the proper-time $\tau$ (or more generally proper-positions), the "real" time $x^0$, etc. The variables are thus interpreted as describing the "target spacetime", the parameters describe the worldvolume (worldline for a particle, worldsheet for a string, etc.). Since the positions are given in the action as variables, only the objects at these positions exist: there is no annihilation or creation (even though this can be introduced via the path integral).
Second-quantized: all the spacetime positions are mapped to labels of a field. The classical fields $\phi(x^\mu)$ are mapped to quantum fields $\hat\Phi(x^\mu)$. The fields contain creation and annihilation operators, each associated with a first-quantized state.
Which perspective you choose is a matter of choice, of computability and of the problem you want to address. Formally it is always possible to get a field theory from a first-quantized one.
String theory
In standard worldsheet string theory, one uses the first-quantized approach because it is technically simpler: there is not much freedom in the interactions (compared to the ones possible for a particle) and conformal field theory techniques make many computations tractable. For a particle sometimes it is also simpler to work with a first-quantized approach (to compute anomalies à la Bastianelli…).
But there is also a second-quantized approach, called string field theory. In this framework, one can describe the creation and annihilation of strings (there is not much easy introductory material on the topic; see for example section 4 of arXiv:hep-th/9411028). Some problems of great importance cannot be treated in a first-quantized approach (off-shell amplitudes and renormalization, background independence, some non-perturbative effects…). Unfortunately, there are still huge technical problems in building a useful string field theory (even if a lot of progress has been achieved, especially in recent years), but in my opinion, it is important to achieve such a construction.
So to conclude both approaches exist for particles, and strings (and formally for any objects), the question is which is one is simpler to use. By chance, most of the computations in string theory can be performed in a first-quantized approach, so it is logical to stick with it. For a particle, the converse holds, that fields are much simpler to handle.
Computing scattering amplitudes
I am not an expert in the world line approach so I may not be fully correct.
First one should specify the theory: it is defined by the following two structures:
the possible interaction vertices (eg. cubic, quartic, etc.) and the corresponding couplings;
an action (often free) for a single particle from which the propagator is derived.
The fact that one specifies the interactions by hand explains why this approach is not very useful for particles if you don't already know the interactions from a QFT: there are too many possibilities.
Then a scattering amplitude is computed from the following data:
in- and out-states (which can be in different numbers);
the interaction graph (and in particular its topology), i.e. what is the full graph constructed from the fundamental vertices (for example $12 → X → 3Y, Y → 45$ for a process $12 → 345$ with only cubic interactions).
Then the path integral will connect the various in- and out-states together with the propagator, the path being prescribed by the diagram, which serves as a kind of boundary condition.
The point is that at each step one is specifying what are the states which are created/annihilated, so you can handle these processes, but only by hand and not "dynamically".
For a reference, I would suggest looking at the article on nLab, which links to more references. | {
"domain": "physics.stackexchange",
"id": 42818,
"tags": "string-theory, second-quantization, string-field-theory"
} |
Handling EventListeners in JS - Calendar in JS | Question: Good night. It is me again (last post about this code).
Summary: This project is meant to be a mobile calendar made with HTML, CSS and JS. I'm using a <table> to show all the days of the month, and each one of the days is of course a <td>. Currently they all receive an eventListener that opens a pop-up when clicked. The problem, I believe, is that I'm using them wrongly. More details below.
Main issue: After the last post here, regarding this code, many improvements were made. Now, I'm facing a hard time dealing with event listeners. I have:
A 7x7 <table>. The first <tr> contains the <th>s, so I have a 6x7 grid containing the days of the month. Each one of these days is a <td>. I added an eventListener to each one of these days. This event triggers a pop-up div that contains a form to add a new schedule event. Once the form is filled in and the 'Confirm' button is clicked, another eventListener creates a div within the data_display div. So, each time an event is added, this div receives a new item.
The problem here is the number of times the parameter 'day' is changed/run; as you can see in the script, I ended up having to get the value another way, not directly from the function.
In conclusion: I'm not finished with all the validations yet; I plan to only show the scheduled events in the month they belong to, etc... However, I've been burning the midnight oil trying to solve this issue with no success at all. If you have any suggestions on how to improve the code in general, feel free to share them!
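One common pattern that removes the need for one listener per <td> (and sidesteps the repeatedly re-bound 'day' parameter) is event delegation: attach a single click listener to the <table id="days"> and read the day off the clicked cell. A hypothetical sketch, assuming each generated <td> carries its day in a data-day attribute (that attribute and the openPopup callback are illustrative names, not from the project):

```javascript
// One handler for the whole table; the clicked cell carries its own day,
// so no per-cell closures over a changing `day` variable are needed.
function makeDelegatedHandler(openPopup) {
  return function (event) {
    const cell = event.target.closest && event.target.closest("td");
    if (cell && cell.dataset && cell.dataset.day) {
      openPopup(cell.dataset.day);
    }
  };
}

// Browser-only wiring: attach once, instead of one listener per <td>.
if (typeof document !== "undefined") {
  document.getElementById("days").addEventListener(
    "click",
    makeDelegatedHandler(function (day) {
      document.getElementById("schedule_day").textContent = day;
      document.getElementById("add_schedule").classList.remove("hide_pop_up");
    })
  );
}
```

Because the listener lives on the table, rebuilding the grid for a new month never orphans or duplicates handlers; only the data-day attributes of the freshly created cells need to be set.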
I'll be posting the code below, also the git link in the correct branch if you rather it.
Github project link
HTML:
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>Calendar</title>
<script src="../script/script.js"></script>
<link rel="stylesheet" href="../css/style.css" />
</head>
<body>
<div id="add_schedule" class="hide_pop_up">
<div class="close_button">
<span id="schedule_day"></span>
<span id="close_pop_up">X</span>
</div>
<form id="pop_up">
<div class="schedule_time_div">
<div class="schedule_time_div_2">
<label for="schedule_initial_time">Starting at:</label>
<input id="schedule_initial_time" type="time" value="00:00" />
</div>
<div class="schedule_time_div_2">
<label for="schedule_final_time">Ending at:</label>
<input id="schedule_final_time" type="time" value="23:59" />
</div>
</div>
<div class="schedule_title_div">
<label for="schedule_title">Title</label>
<input
id="schedule_title"
placeholder="My title..."
type="text"
required
/>
</div>
<div class="schedule_description_div">
<label for="schedule_description">Description</label>
<input
id="schedule_description"
placeholder="My description..."
type="text"
/>
</div>
<div class="schedule_button_div">
<button id="save_schedule" form="pop_up" type="button">
Confirm
</button>
</div>
</form>
</div>
<div class="main">
<div class="title">
<span class="year_title" id="year_title"></span>
<span class="month_title" id="month_title"></span>
</div>
<div class="calendar">
<div id="month_days" class="month_days">
<table id="days">
<tr>
<th>Sun</th>
<th class="even">Mon</th>
<th>Tue</th>
<th class="even">Wed</th>
<th>Thu</th>
<th class="even">Fri</th>
<th>Sat</th>
</tr>
</table>
</div>
<div id="data_display" class="data_display"></div>
</div>
<div class="buttons">
<button id="back_button">
<img class="arrow_img" alt="back" />
</button>
<button id="t_button">T</button>
<button id="next_button">
<img class="arrow_img" alt="next" />
</button>
</div>
</div>
</body>
</html>
CSS:
* {
padding: 0px;
margin: 0px;
font-family: Verdana, Geneva, Tahoma, sans-serif;
font-weight: lighter;
}
body {
height: 100vh;
display: flex;
justify-content: center;
align-items: center;
background-image: url(../../media/grass.jpg);
/* Blurring the background. Applies behind the element... */
backdrop-filter: blur(9px);
background-size: cover;
}
@keyframes display_data {
0% {
transform: scale3d(0, 1, 1);
}
100% {
transform: scale3d(1, 1, 1);
}
}
@keyframes opacity {
from {
opacity: 0%;
}
to {
opacity: 100%;
}
}
@keyframes display_button_back {
0% {
right: 25px;
transform: scale3d(0.75, 0.75, 1);
}
100% {
right: 0px;
transform: scale3d(1, 1, 1);
}
}
@keyframes display_button_next {
0% {
left: 25px;
transform: scale3d(0.75, 0.75, 1);
}
100% {
left: 0px;
transform: scale3d(1, 1, 1);
}
}
@keyframes display_opacity_zoom {
from {
opacity: 0%;
transform: scale3d(0.5, 0.5, 1);
}
to {
opacity: 100%;
transform: scale3d(1, 1, 1);
}
}
@keyframes display_schedule {
from{
opacity: 0%;
transform: scale3d(.25,1,1);
}
to{
opacity: 100%;
transform: scale3d(1,1,1);
}
}
@keyframes close_schedule {
from{
opacity: 100%;
transform: scale3d(1,1,1);
}
to{
opacity: 0%;
transform: scale3d(.25,1,1);
}
}
.main {
width: 100vw;
height: 100vh;
display: flex;
flex-direction: column;
justify-content: space-between;
align-items: flex-start;
color: white;
background-color: rgba(0, 0, 0, 0.65);
}
.title {
margin-top: 7%;
height: 80px;
width: 100%;
display: flex;
flex-direction: column;
align-items: center;
justify-content: space-evenly;
/* animation: display_opacity_zoom 1s ease-out; */
}
.year_title {
margin-left: 5px;
font-size: 40px;
letter-spacing: 5px;
color: lightsalmon;
text-align: center;
}
.month_title {
margin-left: 15px;
font-size: 25px;
letter-spacing: 15px;
text-align: center;
}
.calendar {
height: 75%;
width: 100vw;
display: flex;
flex-direction: column;
justify-content: space-between;
align-items: center;
}
.month_days {
margin-top: 10px;
width: 100%;
height: 50%;
/* animation: opacity 1s ease-in-out; */
}
table {
margin-top: 20px;
width: 100%;
font-size: 22px;
}
tr,
th,
td {
background-color: transparent;
}
th {
width: 14%;
text-align: center;
color: white;
}
th:first-child,
th:last-child {
color: lightsalmon;
}
td {
width: 2.38em;
height: 2.38em;
color: white;
text-align: center;
border-radius: 50%;
}
td:hover {
background-color: rgba(112, 203, 255, 0.349);
}
.data_display {
width: 95%;
height: 30%;
display: flex;
flex-direction: column;
justify-content: space-between;
align-items: center;
background-color: white;
border: none;
border-radius: 5px;
overflow-y: scroll;
/* animation: display_data 2s ease; */
}
.data_display_item{
width: 100%;
}
.data_display_div_title{
display: flex;
width: 100%;
justify-content: space-between;
align-items: center;
color: black;
margin-top: 5px;
margin-bottom: 5px;
font-size: 20px;
}
.data_display_div_title :first-child{
margin-left: 5px;
}
.data_display_div_title :last-child{
margin-right: 10px;
background-color: lightsalmon;
border-radius: 5px;
}
.data_display_div_description {
display: flex;
width: 100%;
flex-wrap: wrap;
font-size: 17px;
color: black;
}
.data_display_div_description span{
margin-left: 10px;
}
.schedule_day{
background-color: rgba(112, 203, 255, 0.349);
}
.other_month {
background: none;
color: rgba(175, 175, 175, 0.45);
}
.buttons {
width: 100vw;
height: 70px;
display: flex;
justify-content: space-around;
align-items: flex-start;
}
button {
width: 60px;
height: 60px;
display: flex;
justify-content: center;
align-items: center;
background: none;
border: none;
font-size: 35px;
font-weight: 400;
color: white;
}
button:hover{
cursor: pointer;
}
button:first-child{
/* animation: display_button_back 1s ease; */
position: relative;
}
button:first-child img{
content: url(../../media/left-arrow-line-symbol.svg);
}
/*
button:not(:first-child):not(:last-child){
animation: display_opacity_zoom 1s ease-out;
} */
button:last-child{
/* animation: display_button_next 1s ease; */
position: relative;
}
button:last-child img{
content: url(../../media/right-arrow-angle.svg);
}
.arrow_img{
width: 35px;
height: 35px;
}
.hide_pop_up{
display: none;
}
.schedule_display{
display: flex;
width: 97vw;
height: 80vh;
position: absolute;
flex-direction: column;
border-radius: 5px;
background: white;
justify-content: space-between;
align-items: flex-start;
/* animation: display_schedule .3s ease; */
}
/* .schedule_close{
animation: close_schedule .3s ease;
animation-fill-mode: forwards;
#87FFA7 <= Color for schedules
} */
.close_button{
width: 100%;
height: 40px;
display: flex;
align-items: center;
justify-content: space-between;
}
.close_button span{
font-size: 25px;
margin-right: 10px;
}
.close_button span:hover{
cursor: pointer;
}
form{
width: 100%;
height: 100%;
display: flex;
flex-wrap: wrap;
align-items: center;
justify-content: space-between;
font-size: 25px;
}
.schedule_button_div, .schedule_time_div, .schedule_title_div, .schedule_description_div {
width: 100%;
height: 50px;
padding: 20px;
display: flex;
flex-direction: column;
justify-content: space-evenly;
align-items: flex-start;
}
input{
width: 100%;
font-size: 22px;
border: 2px black solid;
border-top: none;
border-right: none;
border-left: none;
}
.schedule_time_div{
height: 15%;
flex-direction: row;
}
.schedule_time_div input{
width: 150px;
height: 50px;
}
.schedule_time_div_2{
display: flex;
flex-direction: column;
justify-content: space-between;
align-items: flex-start;
}
.schedule_button_div{
justify-content: center;
align-items: center;
}
.schedule_button_div button{
font-size: 20px;
color: black;
border: 2px black solid;
width: 30%;
}
@media only screen and (min-width: 1279px){
.title{
margin-top: 2%;
}
.data_display{
margin-top: 35px;
height: 70vh;
}
.calendar{
width: 97vw;
flex-direction: row;
align-items: flex-start;
}
.month_days{
height: fit-content;
}
td{
border-radius: 0%;
}
.buttons{
width: 50vw;
}
}
JS:
// Returns the amount of days in a month.
const amount_of_days = (year, month) => new Date(year, month + 1, 0).getDate();
// Returns the day of the week in which the month starts.
const first_day_week_for_month = (year, month) =>
new Date(year, month, 1).getDay();
// When given the name, it returns the month number (0-11).
function month_name_in_number(month_name) {
const month_names = [
"January",
"February",
"March",
"April",
"May",
"June",
"July",
"August",
"September",
"October",
"November",
"December",
];
  return month_names.indexOf(month_name);
}
// Returns a date object, with more properties.
const date_object = (date_year, date_month) => {
const month_names = [
"January",
"February",
"March",
"April",
"May",
"June",
"July",
"August",
"September",
"October",
"November",
"December",
];
const date_object = new Date(date_year, date_month);
const date = {
year: date_object.getFullYear(),
month: date_object.getMonth(),
month_name: month_names[date_object.getMonth()],
amount_of_days: amount_of_days(
date_object.getFullYear(),
date_object.getMonth()
),
get_first_Day: first_day_week_for_month(
date_object.getFullYear(),
date_object.getMonth()
),
};
return date;
};
// Returns a date object based on the table data.
function get_table_date() {
const table_year = parseInt(document.getElementById("year_title").innerText);
const table_month = month_name_in_number(
document.getElementById("month_title").innerText
);
return date_object(table_year, table_month);
}
// Prints year + month on the html.
function print_year_and_month(date_year, date_month) {
const date = date_object(date_year, date_month);
document.getElementById("year_title").innerText = date.year;
document.getElementById("month_title").innerText = date.month_name;
}
// Creates the table.
function create_table() {
const table = document.getElementById("days");
// Creates 6 rows.
for (let i = 0; i < 6; i++) {
let current_row = table.insertRow(1 + i);
// Creates 7 cells.
for (let x = 0; x < 7; x++) {
current_row.insertCell(x);
}
}
}
// Resets the 'td' data style properties.
function reset_table_data_style() {
const table = document.getElementById("days");
for (let i = 1; i < 7; i++) {
for (let x = 0; x < 7; x++) {
table.rows[i].cells[x].style.color = "";
table.rows[i].cells[x].style.background = "";
table.rows[i].cells[x].classList.remove("td");
table.rows[i].cells[x].classList.remove("other_month");
}
}
}
// Changes the background color of the current month cell if it is a weekend.
const change_background_color_if_weekend = (row_number) => {
const table = document.getElementById("days");
if (table.rows[row_number].cells[6].classList == "td") {
table.rows[row_number].cells[6].style.color = "lightsalmon";
}
if (table.rows[row_number].cells[0].classList == "td") {
table.rows[row_number].cells[0].style.color = "lightsalmon";
}
};
// Changes the background color of the current month cell if it is today's day.
const change_background_color_if_today = (row_number) => {
const table = document.getElementById("days");
const table_date_object = get_table_date();
if (
table_date_object.year === new Date().getFullYear() &&
table_date_object.month === new Date().getMonth()
) {
for (let i = 0; i < 7; i++) {
if (
table.rows[row_number].cells[i].innerText == new Date().getDate() &&
table.rows[row_number].cells[i].className === "td"
) {
table.rows[row_number].cells[i].style.background = "black";
}
}
} else {
return;
}
};
// Applies the background + today style. + loads schedules
function load_table_style() {
for (let x = 1; x < 7; x++) {
change_background_color_if_weekend(x);
change_background_color_if_today(x);
}
}
// Populates a row.
function populate_row(
execution_number,
row_number,
first_cell,
first_value,
cell_class
) {
if (execution_number <= 7) {
var table = document.getElementById("days");
for (let i = 0; i < execution_number; i++) {
table.rows[row_number].cells[first_cell + i].innerText = first_value + i;
table.rows[row_number].cells[first_cell + i].classList.add(cell_class);
}
} else {
console.log("Alert on populate_row function.");
}
}
// Populates the table.
function populate_table(date_year, date_month) {
// AD = Amount of Days. AC = Amount of cells. CM = Current Month.
const date = date_object(date_year, date_month);
const AC_CM_1_row = 7 - date.get_first_Day;
const AC_last_month = 7 - AC_CM_1_row;
const AD_last_month = amount_of_days(date.year, date.month - 1);
let AD_next_month = 42 - date.amount_of_days - AC_last_month;
let day_counter = AC_CM_1_row;
let lasting_days = date.amount_of_days - day_counter;
// Populates the first row.
if (AC_CM_1_row < 7) {
populate_row(
7 - AC_CM_1_row,
1,
0,
AD_last_month - (7 - AC_CM_1_row) + 1,
"other_month"
);
}
populate_row(AC_CM_1_row, 1, date.get_first_Day, 1, "td");
// Populates the other rows.
let i = 2;
while (day_counter < date.amount_of_days) {
populate_row(7, i, 0, day_counter + 1, "td");
day_counter += 7;
lasting_days = date.amount_of_days - day_counter;
i++;
// If lasting days won't fill a whole row, fill the rest of the table.
if (lasting_days <= 7 && lasting_days !== 0) {
populate_row(lasting_days, i, 0, day_counter + 1, "td");
while (AD_next_month !== 0) {
populate_row(7 - lasting_days, i, lasting_days, 1, "other_month");
AD_next_month -= 7 - lasting_days;
if (AD_next_month > 0) {
populate_row(7, i + 1, 0, 1 + (7 - lasting_days), "other_month");
AD_next_month -= 7;
}
}
day_counter = date.amount_of_days;
}
}
load_table_style();
}
function open_pop_up() {
const pop_up = document.getElementById("add_schedule");
pop_up.classList.remove("schedule_close");
pop_up.classList.add("schedule_display");
}
function close_pop_up() {
const pop_up = document.getElementById("add_schedule");
pop_up.classList.add("schedule_close");
pop_up.classList.remove("schedule_display");
}
function add_schedule_event_to_cells() {
const table = document.getElementById("days");
for (let i = 1; i < 7; i++) {
for (let x = 0; x < 7; x++) {
table.rows[i].cells[x].addEventListener("click", () => {
add_new_schedule_event(table.rows[i].cells[x].innerText);
});
}
}
}
function add_new_schedule_event(day) {
open_pop_up();
const confirm_button = document.getElementById("save_schedule");
const exit_button = document.getElementById("close_pop_up");
const date = document.getElementById("schedule_day");
date.innerText = day;
  // ADD a list system that starts on the smallest day, and shows the information of the
// current month.
confirm_button.addEventListener("click", () => {
console.log(this.schedule_day.innerText);
const input_title = document.getElementById("schedule_title");
if (input_title.value !== "") {
// Create
const data_display = document.getElementById("data_display");
const input_init_time = document.getElementById("schedule_initial_time");
const input_final_time = document.getElementById("schedule_final_time");
const input_description = document.getElementById("schedule_description");
const data_item = document.createElement("div");
const title_div = document.createElement("div");
const span_title = document.createElement("span");
const span_time = document.createElement("span");
const description_div = document.createElement("div");
const span_description = document.createElement("span");
// Add class
data_item.classList.add("data_display_item");
title_div.classList.add("data_display_div_title");
description_div.classList.add("data_display_div_description");
// Append child
data_display.appendChild(data_item);
data_item.appendChild(title_div);
data_item.appendChild(description_div);
title_div.appendChild(span_title);
title_div.appendChild(span_time);
description_div.appendChild(span_description);
// Values
span_title.innerText = "⬤ " + this.schedule_day.innerText + ": " + input_title.value;
span_time.innerText =
input_init_time.value + " - " + input_final_time.value;
span_description.innerText = input_description.value;
// Clean fields
input_title.value = "";
input_init_time.value = "00:00";
input_final_time.value = "23:59";
input_description.value = "";
close_pop_up();
return;
}
input_title.style.borderBottom = "2px red solid";
});
exit_button.addEventListener("click", () => {
close_pop_up();
return;
});
}
// Loads today's data.
function main() {
print_year_and_month(new Date().getFullYear(), new Date().getMonth());
create_table();
populate_table(new Date().getFullYear(), new Date().getMonth());
add_schedule_event_to_cells();
}
// Loads buttons.
function load_buttons() {
const back_button = document.getElementById("back_button");
const t_button = document.getElementById("t_button");
const next_button = document.getElementById("next_button");
let table_date = get_table_date();
back_button.addEventListener("click", () => {
reset_table_data_style();
table_date.month -= 1;
print_year_and_month(table_date.year, table_date.month);
populate_table(table_date.year, table_date.month);
});
t_button.addEventListener("click", () => {
reset_table_data_style();
table_date = date_object(new Date().getFullYear(), new Date().getMonth());
print_year_and_month(new Date().getFullYear(), new Date().getMonth());
populate_table(new Date().getFullYear(), new Date().getMonth());
});
next_button.addEventListener("click", () => {
reset_table_data_style();
table_date.month += 1;
print_year_and_month(table_date.year, table_date.month);
populate_table(table_date.year, table_date.month);
});
}
// Loads main function as soon as the raw html loads.
function trigger_script() {
document.addEventListener("DOMContentLoaded", () => {
main();
load_buttons();
});
}
// Triggers the code.
trigger_script();
Thanks in advance, all help is welcome!
Answer: Found a solution.
It may have been obvious all along, but at least now I see it. Every time that I triggered the function add_new_schedule_event(), I was adding a new event listener, hence I would have multiple values.
My solution:
Divided the main function into sub-functions:
open_pop_up() // Display pop up
close_pop_up() // Hide pop up
load_button_close_pop_up()
load_button_confirm_pop_up()
The last two functions go in the load_buttons() function. By doing so, the events are now only registered once.
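The failure mode is easy to reproduce outside the browser. Here is a plain-Python sketch (an assumption: the `Button` class is only a minimal stand-in for a DOM element's addEventListener/removeEventListener, since the original is JavaScript) of why registering a handler on every open makes one click fire several times, and how add-on-open / remove-on-close fixes it:

```python
class Button:
    """Minimal stand-in for a DOM element with add/removeEventListener."""
    def __init__(self):
        self.listeners = []
    def add_listener(self, fn):
        self.listeners.append(fn)
    def remove_listener(self, fn):
        self.listeners.remove(fn)
    def click(self):
        for fn in list(self.listeners):
            fn()

saved = []

# Buggy flow: a fresh handler is registered every time the pop-up opens,
# as in the original add_new_schedule_event().
confirm = Button()
for _ in range(3):                                    # pop-up opened 3 times
    confirm.add_listener(lambda: saved.append("s"))
confirm.click()                                       # a single click...
print(len(saved))                                     # ...fires 3 handlers

# Fixed flow: add the listener on open, remove it on close.
saved.clear()
confirm = Button()
def on_confirm():
    saved.append("s")
for _ in range(3):
    confirm.add_listener(on_confirm)      # open the form
    confirm.click()                       # user confirms once
    confirm.remove_listener(on_confirm)   # close the form
print(len(saved))                         # exactly one handler run per click
```

Note that the fixed flow only works because the same function object is passed to both add and remove; an anonymous function cannot be removed later, which is another reason to avoid them here.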
Edit: I was also adding event listeners by passing function invocations instead of just the function names, and sometimes even wrapping them in anonymous functions for no particular reason. Now, a better fix would be:
Once the form is opened: Add the listeners.
Once the form is closed: Remove the listeners. | {
"domain": "codereview.stackexchange",
"id": 39833,
"tags": "javascript"
} |
Color and circle detection in image | Question: I am trying to detect and count circles in an image (for example Smarties).
I use the HSL color space, but I am not able to distinguish circles of the same color if they are touching. I tried to erode and dilate the picture, but the result is the same: I have only some blobs (connected components) of the same color. Do you have some general algorithm for this problem?
(I try to do that with EMGUCV library)
Thanks in advance.
Answer: If I understand you correctly, you erode, then dilate, then look for the colour of your blobs. Erosion then dilation is an opening operator. As you can hopefully see, that's not going to do too much to separate the blobs.
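To see concretely what erosion alone does to touching blobs, here is a plain-Python sketch (an assumption: the question uses EmguCV, whose Erode and connected-component routines would replace these helpers). Two blobs joined by a thin bridge count as one component before erosion and two after it:

```python
# Build a 5x9 binary mask: two 3x3 blobs joined by a 1-pixel-wide bridge.
H, W = 5, 9
img = [[False] * W for _ in range(H)]
for r in range(1, 4):
    for c in range(0, 3):
        img[r][c] = True   # left blob
    for c in range(6, 9):
        img[r][c] = True   # right blob
for c in range(3, 6):
    img[2][c] = True       # thin bridge joining them

def erode(a):
    # A pixel survives only if its full 3x3 neighbourhood is set.
    out = [[False] * W for _ in range(H)]
    for r in range(1, H - 1):
        for c in range(1, W - 1):
            out[r][c] = all(a[r + dr][c + dc]
                            for dr in (-1, 0, 1) for dc in (-1, 0, 1))
    return out

def count_blobs(a):
    # 8-connected flood fill to count connected components.
    seen, n = set(), 0
    for r in range(H):
        for c in range(W):
            if not a[r][c] or (r, c) in seen:
                continue
            n += 1
            stack = [(r, c)]
            while stack:
                y, x = stack.pop()
                if (y, x) in seen or not (0 <= y < H and 0 <= x < W) or not a[y][x]:
                    continue
                seen.add((y, x))
                stack += [(y + dy, x + dx)
                          for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
    return n

print(count_blobs(img))          # the bridge merges the blobs into one
print(count_blobs(erode(img)))   # erosion breaks the bridge: two blobs
```

Once the blobs are separated, a marked pixel in each surviving component gives you a location at which to sample the colour in the original image.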
I would suggest that you do not really need to dilate. Just erode enough so that the blobs separate. You can then mark a pixel in each blob (the centre is probably a good bet), which should now be separated, and find the colour of that pixel in the original image. | {
"domain": "dsp.stackexchange",
"id": 1780,
"tags": "image-processing, image-segmentation"
} |
Why mini batch size is better than one single "batch" with all training data? | Question: I often read that for Deep Learning models the usual practice is to apply mini-batches (generally small ones, 32/64) over several training epochs. I cannot really fathom the reason behind this.
Unless I'm mistaken, the batch size is the number of training instances seen by the model during a training iteration, and an epoch is a full turn in which each of the training instances has been seen by the model. If so, I cannot see the advantage of iterating over an almost insignificant subset of the training instances several times, in contrast with applying a "max batch" by exposing all the available training instances to the model in each turn (assuming, of course, enough memory). What is the advantage of this approach?
Answer: The key advantage of using minibatches as opposed to the full dataset goes back to the fundamental idea of stochastic gradient descent [1].
In batch gradient descent, you compute the gradient over the entire dataset, averaging over potentially a vast amount of information. It takes lots of memory to do that. But the real handicap is that the batch gradient trajectory can land you in a bad spot (a saddle point).
In pure SGD, on the other hand, you update your parameters by adding (minus sign) the gradient computed on a single instance of the dataset. Since it's based on one random data point, it's very noisy and may go off in a direction far from the batch gradient. However, the noisiness is exactly what you want in non-convex optimization, because it helps you escape from saddle points or local minima (Theorem 6 in [2]). The disadvantage is that it's terribly inefficient and you need to loop over the entire dataset many times to find a good solution.
The minibatch methodology is a compromise that injects enough noise into each gradient update, while achieving a relatively speedy convergence.
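The compromise can be seen on a toy problem. The following plain-Python sketch (standard library only; the 1-D regression task, learning rate, and epoch count are illustrative assumptions, not taken from the cited papers) fits a slope by gradient descent using the full batch versus minibatches of 32. Both converge, but the minibatch run performs many noisy updates per epoch instead of one smooth one:

```python
import random

random.seed(0)
X = [random.uniform(-1, 1) for _ in range(1000)]
Y = [3.0 * x + 0.1 * random.gauss(0, 1) for x in X]   # true slope = 3

def sgd(batch_size, lr=0.3, epochs=50):
    w, n = 0.0, len(X)
    idx = list(range(n))
    for _ in range(epochs):
        random.shuffle(idx)
        for s in range(0, n, batch_size):
            b = idx[s:s + batch_size]
            # gradient of the mean squared error on this (mini)batch
            g = sum(2 * (w * X[i] - Y[i]) * X[i] for i in b) / len(b)
            w -= lr * g
    return w

print("full batch :", sgd(1000))   # one smooth update per epoch
print("minibatch  :", sgd(32))     # many noisy updates per epoch
```

On a convex toy problem like this both land near the true slope; the point of the answer above is that on non-convex losses the extra per-update noise of the minibatch run is what helps escape saddle points.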
[1] Bottou, L. (2010). Large-scale machine learning with stochastic gradient descent. In Proceedings of COMPSTAT'2010 (pp. 177-186). Physica-Verlag HD.
[2] Ge, R., Huang, F., Jin, C., & Yuan, Y. (2015, June). Escaping From Saddle Points-Online Stochastic Gradient for Tensor Decomposition. In COLT (pp. 797-842).
EDIT:
I just saw this comment on Yann LeCun's facebook, which gives a fresh perspective on this question (sorry don't know how to link to fb.)
Training with large minibatches is bad for your health.
More importantly, it's bad for your test error.
Friends dont let friends use minibatches larger than 32.
Let's face it: the only people have switched to minibatch sizes larger than one since 2012 is because GPUs are inefficient for batch sizes smaller than 32. That's a terrible reason. It just means our hardware sucks.
He cited this paper which has just been posted on arXiv few days ago (Apr 2018), which is worth reading,
Dominic Masters, Carlo Luschi, Revisiting Small Batch Training for Deep Neural Networks, arXiv:1804.07612v1
From the abstract,
While the use of large mini-batches increases the available computational parallelism, small batch training has been shown to provide improved generalization performance ...
The best performance has been consistently obtained for mini-batch sizes between m=2 and m=32, which contrasts with recent work advocating the use of mini-batch sizes in the thousands. | {
"domain": "datascience.stackexchange",
"id": 2829,
"tags": "machine-learning, deep-learning"
} |
ZX calculus: What do diamond and loop mean? | Question: Recently, I started to study the practical application of ZX-calculus, but I am confused by the meaning of "diamond" and "loop".
Issue no. 1:
There are these rules:
B-rule
and D-rule
But this example seems to use the rules wrongly:
In the middle of the diagram, the B-rule is used; however, I do not see any loops or diamonds justifying this step (i.e. a disconnection of nodes).
Similar situation occurs in this example:
Why is it possible to ignore loop and diamonds?
Issue no. 2:
The interpretation of a diamond in Hilbert space is this:
Diamond = $\sqrt{2}$
What does it mean that the diamond is $\sqrt{2}$? Is it a normalization constant?
The interpretation of a loop in Hilbert space is this:
A loop represents the dimension of the underlying Hilbert space
Assuming the D-rule, a loop should represent two diamonds, hence $\sqrt{2}\sqrt{2} = 2$, which is the dimension of the Hilbert space describing single-qubit states. But ZX-calculus can be used for any number of qubits. What does it mean that a loop represents a dimension? How is the dimension of a multi-qubit Hilbert space represented?
Answer:
If you agree to treat diagrams up to a constant factor, then you can ignore loops and diamonds. As you correctly guessed, it's a normalization constant.
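The arithmetic behind that constant can be checked directly. A small plain-Python sketch (the explicit matrix is only there to make the trace concrete): a closed wire is the trace of an identity, so one qubit loop evaluates to $2 = (\sqrt{2})^2$, matching the D-rule, and $n$ traced wires give $2^n$:

```python
import math

def identity_trace(n_qubits):
    # A loop is a traced identity wire; n wires -> identity on a 2**n-dim space.
    dim = 2 ** n_qubits
    I = [[1.0 if r == c else 0.0 for c in range(dim)] for r in range(dim)]
    return sum(I[i][i] for i in range(dim))

diamond = math.sqrt(2.0)
print(identity_trace(1), diamond ** 2)   # D-rule: one loop = two diamonds
for n in range(1, 5):
    print(n, identity_trace(n))          # n disjoint loops <-> dimension 2**n
```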
For a multi-qubit system, you represent an identity operator with several wires. If you trace them, you get dimension equal to $2^n$, and in the diagram you represent this dimension as $n$ disjoint loops each contributing a factor of 2. | {
"domain": "quantumcomputing.stackexchange",
"id": 1298,
"tags": "zx-calculus"
} |
Simple script to insert data | Question: This is a simple script which inserts data into a couple of tables. I'm very new to SQL so any feedback, of any kind, would be much appreciated!
DECLARE
@Model_ID INT
, @ModelVersion VARCHAR(10)
, @ExistsInDM INT
, @IsActive BIT
SET @ModelVersion = '2.46.7' -- new version number
SET @IsActive = '1'
SELECT @Model_ID = Model_ID
FROM [sch_AM].[tblDMModelVersion] WITH (NOLOCK)
WHERE [ModelVersion] = @ModelVersion
IF @Model_ID IS NULL
BEGIN
INSERT INTO [sch_AM].[tblDMModelVersion]
( [ModelVersion]
,[IsActive]
)
VALUES ( @ModelVersion
,@IsActive
)
EXECUTE sch_AM.usp_GetActiveModel_ID @ModelVersion, @Model_ID OUTPUT
SELECT @ExistsInDM = COUNT(Model_ID)
FROM [sch_AM].[tblDMModelToTable] WITH (NOLOCK)
WHERE Model_ID = @Model_ID
IF @ExistsInDM = 0
BEGIN
INSERT INTO [sch_AM].[tblDMModelToTable]
SELECT
@Model_ID
,[Table_ID]
,[TableVersion]
,[UserViewable]
,[TableRequiredByModel]
FROM [sch_AM].[tblDMModelToTable] WITH (NOLOCK)
WHERE Model_ID = 1
END
END
SELECT * FROM [sch_AM].[tblDMModelVersion] WHERE Model_ID = @Model_ID
SELECT * FROM [sch_AM].[tblDMModelToTable] WHERE Model_ID = @Model_ID
Answer: Just 1 will work here; it will be implicitly cast to BIT:
SET @IsActive = 1
You can also assign in the declare
DECLARE
@Model_ID INT
, @ModelVersion VARCHAR(10) = 'lasdf'
, @ExistsInDM INT
, @IsActive BIT = 1
SELECT @Model_ID = Model_ID
FROM [sch_AM].[tblDMModelVersion] WITH (NOLOCK)
WHERE [ModelVersion] = @ModelVersion
The SELECT assignment above will leave the variable holding the last value read if the query matches more than one row.
with (nolock)
is typically not advised, as it permits dirty reads. | {
"domain": "codereview.stackexchange",
"id": 30761,
"tags": "beginner, sql, t-sql"
} |
Sub-LightSpeed travel according to special relativity vs AP physics | Question: For background, I'm a high school physics student who was recently looking at special relativity out of personal interest. I read a summarized statement, "Time slows down as you approach the speed of light", so I went googling around and saw an answer like this on this Stack Exchange. I've also seen other answers which state things like "Time only appears slow for the stationary observer", and for me this seems at odds with the basic idea of being able to see velocity.
Take the example of a kid kicking a ball versus a railgun shot. Even if you were standing stationary, you would still see the speed difference. Yet if the railgun shot were approaching 0.9999c or a similar value, would it appear slower?
Does one theory win over the other, or is my thinking flawed in some way? What happens?
Answer: Both statements are mostly useless without context because they don't explicitly tell you what you can expect to measure. The natural language phrasing you want is:
The clock which measures the shortest time between two events is the clock for which those two events are in the same place.
And the formal statement you want is:
Given:
$\Delta t$ is the time measured by a clock in some frame of reference $A$ between two events in the same place
$\Delta t'$ is the time measured by a clock in some other frame of reference $B$ between those two events
$v$ is the velocity of $B$ with respect to $A$, hence the two events which are in the same place in the $A$ frame are separated in space by $v \Delta t'$ in the $B$ frame
Then:
$\Delta t' = \dfrac{\Delta t}{\sqrt{1-v^2/c^2}}$
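Plugging numbers into this formula makes the magnitudes concrete. A small Python sketch (the 0.9999c figure is the railgun value from the question):

```python
import math

def gamma(beta):
    # time-dilation factor: dt' = gamma * dt, with beta = v/c
    return 1.0 / math.sqrt(1.0 - beta ** 2)

for beta in (0.0, 0.1, 0.5, 0.9, 0.9999):
    print(f"v = {beta}c  ->  dt'/dt = {gamma(beta):.4f}")
```

At everyday speeds the factor is indistinguishable from 1, which is why AP-level kinematics never needs it; at 0.9999c a clock riding with the projectile ticks roughly 70 times slower as judged from the stationary frame.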
Observe that we reproduce the natural language statement: $\Delta t'$ (the time measured by a clock for which the events are not in the same place) is always bigger than $\Delta t$ (the time measured by a clock for which the events are in the same place) for all nonzero $v$, and the amount by which it is bigger increases with increasing $|v|$, with an infinite ratio in the limit as $|v| \to c$. It may be illustrative to plot the curve with the time ratio on the vertical axis and velocity on the horizontal axis to be able to visualize the relationship. | {
"domain": "physics.stackexchange",
"id": 95158,
"tags": "special-relativity, speed-of-light, velocity, observers"
} |
Implementing views on a std::vector | Question: Goal:
Wrap a vector. The wrapper can create views on parts of the vector.
Support const_view and view.
view allows modification of elements while const_view does not.
The views are ranges (has begin() and end() methods).
Problem:
Code duplication between the const_view and view.
The actual code implements a 2D sparse array that saves more memory than usual. So I want to learn how to solve this problem.
Simplified Code (implements a dense 2D array)
#include <vector>
class A_csub;
class A_sub;
typedef std::vector<int> V;
typedef typename V::iterator It;
typedef typename V::const_iterator Cit;
class A {
// 2D dense array with _r rows and _c columns.
public:
A(const int r, const int c);
A_csub cview(const int n) const;
A_sub view(const int n); // view nth row
It sub_begin(const int n); // iterator at begin of nth row
Cit sub_cbegin(const int n) const;
int _r;
int _c;
private:
V _v;
};
class A_csub {
public:
A_csub(const A& a, const int n);
Cit begin() const;
Cit end() const;
const int size() const;
private:
const A& _a;
const int _n;
};
class A_sub {
public:
A_sub(A& a, const int n);
It begin();
It end();
const int size() const;
private:
A& _a;
const int _n;
};
// -- A -- //
A::A(const int r, const int c) : _r(r), _c(c), _v(r*c) {}
A_csub A::cview(const int n) const { return A_csub(*this, n); }
A_sub A::view(const int n) { return A_sub(*this, n); }
It A::sub_begin(const int n) { return _v.begin() + n*_c; }
Cit A::sub_cbegin(const int n) const { return _v.cbegin() + n*_c; }
// -- A_csub -- //
A_csub::A_csub(const A& a, const int n) : _a(a), _n(n) {}
Cit A_csub::begin() const { return _a.sub_cbegin(_n); }
Cit A_csub::end() const { return begin() + _a._c; }
const int A_csub::size() const { return _a._c; }
// -- A_sub -- //
A_sub::A_sub(A& a, const int n) : _a(a), _n(n) {}
It A_sub::begin() { return _a.sub_begin(_n); }
It A_sub::end() { return begin() + _a._c; }
const int A_sub::size() const { return _a._c; }
int main() {
A a(10,5);
a.cview(4);
A_sub b = a.view(4);
for (auto && e:b) {e=1;}
}
Answer: The variable names you chose are quite bad. I can't seem to figure out what r and c are as constructor input.
Creating such views only for vectors (of ints) is a bit wasteful. You are basically wrapping an iterator pair, so why not make the code generic so that it works with iterators of any type of container?
Boost, for example, provides Boost.Range for doing just this.
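The iterator-pair/view idea is language-agnostic. Here is a minimal sketch in Python (an assumption: the question is C++, where Boost.Range, or in later standards `std::span` and C++20 ranges, plays this role) of one non-owning view type over rows of a flat row-major buffer:

```python
class View:
    """A non-owning window over a slice of a flat buffer."""
    def __init__(self, buf, start, stop):
        self._buf, self._start, self._stop = buf, start, stop
    def __len__(self):
        return self._stop - self._start
    def __iter__(self):
        return iter(self._buf[i] for i in range(self._start, self._stop))
    def __getitem__(self, i):
        return self._buf[self._start + i]
    def __setitem__(self, i, v):
        self._buf[self._start + i] = v

class Array2D:
    def __init__(self, rows, cols):
        self.rows, self.cols = rows, cols
        self._v = [0] * (rows * cols)
    def row(self, n):
        # One View type serves both read-only and mutable use; Python has
        # no const-ness, which is exactly the duplication C++ forces on you.
        return View(self._v, n * self.cols, (n + 1) * self.cols)

a = Array2D(10, 5)
for i in range(len(a.row(4))):
    a.row(4)[i] = 1
print(sum(a.row(4)), sum(a.row(3)))   # row 4 modified, row 3 untouched
```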
When you implement a specialized container, you create specialized iterators that know how to traverse it. The view/range stays the same; its job is still to wrap a pair of iterators. | {
"domain": "codereview.stackexchange",
"id": 24162,
"tags": "c++, vectors, interval"
} |
How is the differential form of Gauss Law used? | Question: How is the differential form of Gauss's Law, $\mathbf{\nabla}\cdot{\bf E} = \dfrac{\rho}{\varepsilon_0}$, used? What I mean is, where am I measuring $\mathbf{E}$ and what is $\rho$ in this context?
Answer: Let me clarify what divergence actually is.
Consider an arbitrary finite volume $V$, whose surface is $S$, in a vector field $\bf h$. Then the total flux emerging from $S$ is given by $$\text{Net flux from the surface} = \int_S \mathbf{h}\cdot ~\mathrm d{\bf a}.$$
Divide $V$ into two parts: $$V = v_1 +v _2\,.$$ The flux out of volume $V$ is then given by $\displaystyle{\int \mathbf{h}\cdot~\mathrm d{\bf a}_V = \int \mathbf{h} \cdot ~\mathrm d{\bf a}_{v_1} + \int \mathbf{h}\cdot ~\mathrm d{\bf a}_{v_2}}\;,$ since the fluxes through the shared internal surface cancel. Keep dividing the volume into $N$ parts so that $V = \displaystyle \sum_{i=1}^N v_i\,.$ So, the net flux out of volume $V$ is $$\int \mathbf{h} \cdot ~\mathrm d{\bf a}_V= \sum_{i=1}^N \int \mathbf{h} \cdot ~\mathrm d{\bf a }_{v_i}.$$ This is a macroscopic quantity. However, we want to find some microscopic property associated with a certain point along with a neighbourhood of infinitesimal radius; in order to do this, we let $N \to \infty$ so that $v_i \to 0$. Let the flux out of such an infinitesimal volume $v_i$ be $\int\mathbf{h}\cdot \mathrm d{\bf a}_i$. This quantity surely approaches $0$. But if we take the ratio of the flux to the volume it encloses, we get a finite quantity. This is what we call divergence: $$\text{div} \mathbf{h}_i \equiv \lim_{v_i \to 0} \frac{1}{v_i} \int \mathbf{h}\cdot ~\mathrm d{\bf a}_{v_i} .$$ So, the divergence of a vector field is a local property that measures, at a point, the flow of $\bf h$ per unit volume in the neighbourhood of that point.
So, $$\int_S \mathbf{E}\cdot ~\mathrm d{\bf a} = \lim_{N \to \infty}\sum_{i=1}^N \int_{S_i}\mathbf{E}\cdot ~\mathrm d{\bf a_i} = \int_V \text{div} {\bf E} ~\mathrm dv\,. $$ But we already know, $$\int_S \mathbf{E}\cdot ~\mathrm d{\bf a} = \int_V \frac{\rho(v)}{\varepsilon_0}~\mathrm dv\,.$$ Comparing the two equations, we get $$\text{div} \mathbf{E} =\mathbf{\nabla}\cdot{\bf E} = \dfrac{\rho}{\varepsilon_0}$$ (in Cartesian coordinates).
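The statement can be sanity-checked numerically. A plain-Python sketch (the toy units $\rho = \varepsilon_0 = 1$ and the uniformly-charged-ball field $\mathbf E = \rho\,\mathbf r/(3\varepsilon_0)$ are illustrative assumptions): a finite-difference divergence of $\mathbf E$ reproduces $\rho/\varepsilon_0$ at an interior point:

```python
rho, eps0 = 1.0, 1.0      # toy units (assumption)
h = 1e-5                  # finite-difference step

def E(p):
    # field inside a uniformly charged ball: E = rho * r / (3 eps0)
    return [rho * x / (3 * eps0) for x in p]

def div_E(p):
    # central finite differences: sum_i dE_i/dx_i
    total = 0.0
    for i in range(3):
        plus = p[:]
        minus = p[:]
        plus[i] += h
        minus[i] -= h
        total += (E(plus)[i] - E(minus)[i]) / (2 * h)
    return total

print(div_E([0.1, 0.2, -0.05]), "vs", rho / eps0)
```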
Tl;dr:
What does $\text{div} \mathbf{E} = \dfrac{\rho}{\varepsilon_0}$ mean?
It means that the flux of the electric field out of an infinitesimal volume around a point, per unit volume, is equal to the charge density $\rho$ at that point divided by $\varepsilon_0$. | {
"domain": "physics.stackexchange",
"id": 24980,
"tags": "electrostatics, gauss-law"
} |
Contradiction between complex baseband and real-valued baseband | Question: Suppose a direct-conversion transmitter+receiver (ideal transmitter) along with its filters. In complex baseband, the filters in the signal chain can be modeled as a complex-valued FIR filter ($y$, $c$, $x$ are complex-valued!):
$$
y = \sum_{k=0}^K c_k x[n-k]
$$
Rewriting this in cartesian coordinates gives:
$$
y_i + j y_q = \underbrace{\sum_{k=0}^K \left(a_k x_i[n-k] - b_k x_q[n-k]\right)}_{y_i} + j \sum_{k=0}^K \left( b_k x_i[n-k] + a_k x_q[n-k] \right)
$$
Now only considering the real part, it can be seen that it consists of two different filters with which the real and imaginary input signals are filtered.
Now consider the same system from a practical perspective: After the DAC and upconversion, the RF signal looks as follows:
$$
x_{rf} = x_i \cos \omega_c t + x_q \sin \omega_c t
$$
The output sequence $y_i$ is obtained by multiplying $x_{rf}$ with $\cos(\omega_c t +\phi)$ and filtering it with a filter called $H$:
$$
y_i = H( x_{rf} \cos(\omega_c t +\phi) ) \\
\approx H(x_i/2 \cos\phi + x_q/2 \sin\phi) \\
= \frac{\cos\phi}{2} H(x_i + x_q \tan\phi)
= G(x_i + x_q \tan\phi)
$$
The filter $H$ (or $G$) is modeled as a FIR filter:
$$
y_i = \sum_{k=0}^K \left( g_k x_i[n-k] + g_k \tan\phi x_q [n-k] \right)
$$
From the equation above it can be seen that the real and imaginary part are filtered through only one filter (they differ just by the constant $\tan\phi$ !).
Where does this contradiction come from?
PS: I know that the first approach is the correct one because it gives the correct results. I do not understand why it is not consistent with my second approach.
Answer: OK, after studying for hours, I will attempt an answer to my own question. It does make sense now, but I hope that it is correct.
Let us start with the second form:
$$
y_i = \sum_{k=0}^K \left( g_k x_i[n-k] + g_k \tan\phi x_q[n-k] \right) = \sum_{k=0}^K \left( a_{i,k} x_i[n-k] + b_{i,k} x_q[n-k] \right)
$$
Yes, from this it is evident that $a_{i,k}=\beta b_{i,k}$, i.e. the I-output is a linear combination of the two I/Q inputs. This, however, falls apart as soon as there is any filter in the RF path. Let us consider a filter $h$ at RF:
$$
y_{rf} = \int_{-\infty}^{\infty} x_{rf}(\tau) h(t-\tau)d\tau
$$
Since the filter acts on a bandpass signal (from which the definition of complex baseband arises), its impulse response also has the following canonical representation:
$$
h_c(t) = 2 h_i(t) \cos\omega_c t + 2 h_q(t)\sin\omega_c t \\
= 2\,\mathrm{Re}\left\{ h_z(t) \exp(-j\omega_c t) \right\}
$$
and
$$
h_z(t) = h_i(t) + j h_q(t)
$$
From this it is clear that the I and Q parts are filtered with two different filters, and hence the I-output is no longer a linear combination of the inputs.
Back to the first form: indeed, solving only for the I channel gives $2K$ real-valued coefficients in the general case!
Solving for the Q channel gives again $2K$ coefficients ($4K$ in total). However, if I and Q are perfectly balanced, the coefficients obtained from the I channel and the Q channel will be identical. So there are really only $2K$ real-valued or $K$ complex valued coefficients, as expected.
However, if there is an I/Q imbalance, the input/output relationship can still be faithfully described but now in general all $4K$ real-valued coefficients are required!
Interestingly, the difference between the coefficients obtained from the I channel and the Q channel corresponds to the (frequency dependent) I/Q imbalance. | {
"domain": "dsp.stackexchange",
"id": 6721,
"tags": "modulation, demodulation, quadrature"
} |
Relation between Electric Potential and Electric Field Intensity | Question: Suppose we have been given a curve $y = V(x)$, where $V(x)$ represents the electric potential at $x$. Now, if over some range the curve is a horizontal straight line, can we say that the electric field intensity in that range will be equal to $0$, because $dV = -\vec{E}\cdot \mathrm d\vec{r}$?
Answer: The equation you're looking for is:
$$\vec{E}=-\vec{\nabla}V$$
If $V$ depends on $x$ only, the gradient becomes:
$$\vec{E}=-V'(x)\vec{e}_x$$
so $\vec{E}$ is indeed zero in that interval. | {
"domain": "physics.stackexchange",
"id": 89133,
"tags": "electromagnetism, electrostatics, potential"
} |
Why does Bellman-Ford algorithm use < rather than ≤? | Question: The Bellman-Ford Algorithm uses a less-than symbol rather than a less-than-or-equal-to symbol. How does this identify that there is a negative cycle?
For instance, say I have the below example going from initialization state through the first iteration and on to the second.
Initial state
(0)S----(1)----B(inf)
\ /
(1) (-1)
\ /
\ /
C(inf)
First phase
...Check edges...
S->B
D[S] + 1 < D[B] = 0 + 1 < inf => True => Update D[B]
B->C
D[B] + (-1) < D[C] = 1 + (-1) < inf => True => Update D[C]
(0)S----(1)----B(1)
\ /
(1) (-1)
\ /
\ /
C(0)
Second phase when checking for negative cycles
...Check edges...
...Skipping to check B->C...
B->C
D[B] + (-1) < D[C] = 1 + (-1) < 0 => False
(0)S----(1)----B(1)
\ /
(1) (-1)
\ /
\ /
C(0)
Shouldn't the last check between B->C be true to detect the negative cycle? i.e. Shouldn't we use less-than-or-equal-to (≤) rather than (<)?
Answer: First of all, let me clarify a few points about your example:
Bellman-Ford-Moore consists of two phases, stages or steps, each one consisting of a number of iterations as they are implemented with loops. The first phase computes the cost of the shortest-path to reach each vertex from the start vertex and it consists of two nested loops; the second one consists of a single loop which checks whether there are negative cycles or not. I mention this because it seems to me that you refer to the phases as iterations.
Let me also note that in the first phase you skip the evaluation of reaching $S$ from $C$. I assume you just simply avoid making the computation because it is pointless for exemplifying your case, but let me just highlight that Bellman-Ford-Moore would also perform that evaluation. By the way, it is unclear whether you are considering a directed or undirected graph. I just assumed it is directed: $S\rightarrow B\rightarrow C\rightarrow S$.
As noted in the comments by xskxzr, there is no negative cycle in your example. A negative cycle is defined as a path which starts and ends in the same vertex and whose overall cost (defined as the sum of all edge costs) is strictly negative. In your case, there is only one cycle from $S$ back to itself, and it has a cost which is strictly positive: $c(S, B) + c(B, C) + c(C,S)=1-1+1=1>0$.
I mention these points only to make the answer clearer (if possible) and more useful to other readers.
I will try now to address your question. Once the first phase is over, the distance of the shortest-path from the start vertex $S$ to all other vertices in your graph has been already computed. Let $d_i$ denote the cost of the shortest path from $S$ to the $i$-th vertex.
Consider now any vertex of your graph, $v_i$, and all incident edges to it. Clearly, at least one should be in the shortest-path from $S$, whereas all the others should not. Take any vertex $u$ which has an edge from it to $v_i$:
If the edge $\langle u, v_i\rangle$ is part of the optimal path from $S$ to $v_i$ then clearly $d_u + c(u, v_i)=d_{v_i}$
Otherwise, $d_u + c(u, v_i)>d_{v_i}$ as this is not part of the optimal path.
Thus, when starting the second phase (for detecting negative cycles), there is at least one edge for each vertex for which equality holds without the presence of negative cycles! This is just a consequence of the definition of shortest-path as shown above. This observation precludes the usage of $\leq$ for detecting cycles, as equality holds without negative cycles.
Let us now consider the case of negative cycles. Their essential property is that they decrease the value of reaching its final vertex indefinitely (as crossing it again would reduce the cost to reach its end vertex), i.e., negative cycles strictly decrease the cost of reaching its end vertex and hence, the only way $d_u + c(u, v_i) < d_{v_i}$ is because there is at least one negative cycle.
In your specific example, make $c(C,S)=-1$. Now you have a negative cycle from $S$ to itself. Clearly, once the first phase is over, the cost of the shortest distances computed by Bellman-Ford-Moore would be: $d_S = -2, d_B = 0, d_C = -1$. I assume the following order of edges: $\langle S, B\rangle$, $\langle B, C\rangle$ and $\langle C, S\rangle$, and recall that the first phase is run as many times as vertices minus 1, i.e., twice in your example. So far, after the first iteration (which considers updating the target vertex of all edges in the order given above) of the first phase, $d_S = -1, d_B = 1, d_C = 0$. Because Bellman-Ford-Moore dictates running the first phase as many times as vertices minus one (the reason being that the costs computed in the previous iteration should be propagated along the longest path, whose number of edges is upper bounded by the number of vertices minus 1), a second iteration of the first phase is conducted and the cost of the shortest-path to all vertices gets decremented again: $d_S = -2, d_B = 0, d_C = -1$.
Next, when running the second phase it turns out that $d_S + c(S,B) = -2 + 1 = -1 < d_B = 0$ which reveals the presence of the negative cycle using $<$.
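The numbers above can be reproduced with a minimal Python sketch of Bellman-Ford-Moore (the vertex encoding S=0, B=1, C=2 and the edge order are my own choices): with $c(C,S)=1$ the second phase finds no strict improvement, while with $c(C,S)=-1$ it does, flagging the negative cycle with `<` alone.

```python
INF = float("inf")

def bellman_ford(n, edges, source):
    d = [INF] * n
    d[source] = 0
    # First phase: relax every edge n-1 times, using strict <
    for _ in range(n - 1):
        for u, v, w in edges:
            if d[u] + w < d[v]:
                d[v] = d[u] + w
    # Second phase: any further strict improvement reveals a negative cycle
    has_negative_cycle = any(d[u] + w < d[v] for u, v, w in edges)
    return d, has_negative_cycle

S, B, C = 0, 1, 2

# Original example: cycle cost 1 - 1 + 1 = 1 > 0, no negative cycle
print(bellman_ford(3, [(S, B, 1), (B, C, -1), (C, S, 1)], S))   # ([0, 1, 0], False)

# Modified example with c(C,S) = -1: cycle cost 1 - 1 - 1 = -1 < 0
print(bellman_ford(3, [(S, B, 1), (B, C, -1), (C, S, -1)], S))  # ([-2, 0, -1], True)
```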
Hope this helps, | {
"domain": "cs.stackexchange",
"id": 14640,
"tags": "algorithms, graphs, algorithm-analysis"
} |
SAT formulation of the condition that an even number of a given set of variables must be set to true | Question: Let's say I have a SAT problem with variables $x_1,...,x_n$. For a given subset $S$ of the variables, I want to create clauses which force an even number of the variables in $S$ to be true. Of course there is the brute force solution, but I would like a more efficient solution with as few clauses/variables as possible.
Answer: Let $\oplus$ be XOR, then your question is asking for $\bigoplus_{k=1}^n x_k = 0$. We can encode this efficiently without an exponential explosion in clauses by introducing new variables.
The basic premise is the following:
$$a = b \oplus c \quad\iff\quad a \oplus b \oplus c = 0$$
Assuming $a$ is the new variable this is equivalent to adding the following CNF clauses:
$$(\neg a \vee \neg b \vee \neg c) \,\wedge\, (\neg a \vee b \vee c) \,\wedge\, (a \vee \neg b \vee c) \,\wedge\, (a \vee b \vee \neg c)$$
Using the above construction we add the following clauses:
$$y_1 = x_1$$
$$y_2 = y_1 \oplus x_2$$
$$y_3 = y_2 \oplus x_3$$
$$\cdots$$
Finally we add the clause $\neg y_n$.
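A minimal sketch of this chain construction in Python (DIMACS-style signed integers, where a negative number means a negated literal; it uses the slightly better base case of taking $x_1$ itself as the first running value, so it needs one fewer auxiliary variable and XOR gate than the count below):

```python
# Encode a = b XOR c as the four ternary clauses given above.
def xor_gate(a, b, c):
    return [[-a, -b, -c], [-a, b, c], [a, -b, c], [a, b, -c]]

def even_parity_cnf(xs, first_aux):
    """CNF forcing an even number of the variables in xs to be true.
    xs: variable numbers; auxiliary variables start at first_aux."""
    clauses = []
    y = xs[0]          # running XOR so far (base case: just x_1)
    aux = first_aux
    for x in xs[1:]:
        clauses += xor_gate(aux, y, x)  # aux = y XOR x
        y = aux
        aux += 1
    clauses.append([-y])  # the final running XOR must be false
    return clauses

cnf = even_parity_cnf([1, 2, 3], first_aux=4)
print(len(cnf))  # 2 XOR gates * 4 clauses + 1 unit clause = 9
```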
Total cost is $n$ additional variables and $4n + 1$ ternary CNF clauses. You can shave off a couple variables / clauses with a better base case for the recursion - I could not be bothered. | {
"domain": "cs.stackexchange",
"id": 21765,
"tags": "logic, satisfiability"
} |
Hall Conductivity of Maxwell Action | Question: I am currently reading David Tong's notes on Chern-Simon's theory and below equation (5.5), he makes the statement:
"The action (5.5) has no Hall conductivity because this is ruled out in d = 3+1 dimensions on rotational grounds."
Why is this true?
Answer: Besides calculating the Hall conductivity directly from the Maxwell action and showing it to be zero, the more general argument is the following.
The important point is that conductivity is a rank-2 tensor, i.e. a tensor with two indices and transforms appropriately under spatial rotations. Once you see this, there are two ways to argue that Hall conductivity is zero.
You want a tensor that is invariant under rotations due to the rotational symmetry in this problem. Such a tensor must be proportional to the identity tensor, therefore it cannot have off-diagonal components.
The Hall conductivity is the anti-symmetric part of the conductivity tensor. In three dimensions, there are three independent parameters for the antisymmetric part. You can show that the antisymmetric part is closed under rotations and it transforms as a pseudovector (other examples of pseudovectors in three dimensions are angular momentum and magnetic field). You cannot have a non-zero pseudovector that is rotationally symmetric, just like vectors. | {
"domain": "physics.stackexchange",
"id": 98613,
"tags": "lagrangian-formalism, condensed-matter"
} |
Guessing words from scrambled letters | Question: How could I possibly shorten / clean this up? I suppose mainly clean up the loop at the start asking whether they would like to scramble the code.
def userAnswer(letters):
print("Can you make a word from these letters? "+str(letters)+" :")
x = input("Would you like to scrample the letters? Enter 1 to scramble or enter to guess :")
while x == '1':
print(''.join(random.sample(letters,len(letters))))
x = input("Would you like to scrample the letters? Enter 1 to scramble or enter to guess :")
word = input("What is your guess? :")
word = word.lower()
if checkSubset(word, letters) == True and checkWord(word) == True:
print("Yes! You can make a word from those letters!")
else:
print("Sorry, you cannot make that word from those letters")
userAnswer("agrsuteod")
Answer: Before presenting my version, I would like to make a couple of comments:
Use descriptive names. The name userAnswer gives me an impression of just getting the user's answer, nothing else. I like to suggest using names such as startScrambleGame, runScrambleGame or the like.
Avoid 1-letter names such as x -- it does not tell you much.
I don't know which version of Python you are using, but mine is 2.7 and input() gave me troubles: it thinks my answer is the name of a Python variable or command. I suggest using raw_input() instead.
Your code calls input() 3 times. I suggest calling raw_input() only once in a loop. See code below.
The if checkSubset()... logic should be the same, even if you drop == True.
Here is my version of userAnswer, which I call startScrambleGame:
def startScrambleGame(letters):
print("Can you make a word from these letters: {}?".format(letters))
while True:
answer = raw_input('Enter 1 to scramble, or type a word to guess: ')
if answer != '1':
break
print(''.join(random.sample(letters, len(letters))))
word = answer.lower()
if checkSubset(word, letters) and checkWord(word):
print("Yes! You can make a word from those letters!")
else:
print("Sorry, you cannot make that word from those letters")
startScrambleGame("agrsuteod") | {
"domain": "codereview.stackexchange",
"id": 7751,
"tags": "python, beginner, game, python-3.x"
} |
Determine the strain energy of a bar | Question: I have a Structures question about determining strain energy that I am unable to solve. I have attempted it and I have the answer, I would appreciate if if someone can just tell me where I am going wrong.
Q: A solid conical bar of circular cross-section is suspended vertically. The length of the bar is $L$, the diameter at the base is $D$ and the weight per unit volume is $\gamma$ (equivalent to density x gravity). Determine the strain energy of the bar due to its own weight.
$$\text{Answer} = U = \dfrac{\pi D^2 \gamma^2 L^3}{360E}$$
My work is as follows:
$$\begin{align}
U &= \dfrac{\gamma((1/3)\pi r^2 L)^2)L}{2E\pi r^2} \\
U &= \dfrac{\gamma(\pi^2r^4(1/9)L^2)L}{2E\pi r^2} \\
U &= \dfrac{\gamma\pi r^2L^3}{9\cdot2E} \\
U &= \dfrac{\gamma\pi(d^2/4)L^3}{18E} \\
U &= \dfrac{\gamma\pi d^2L^3}{18\cdot4E} \\
U &= \dfrac{\pi d^2 \gamma L^3}{72E} \\
\end{align}$$
I am incorrect by a factor of 1/5. Where in my working/logic did I go wrong?
I know:
$$U = \dfrac{(\rho gAL)^2L}{2EA}$$
I assume:
$$AL = \text{Volume (of a cone)} = (1/3)\pi r^2 h$$
I am confused as to whether the $A$ in the denominator is the base of the cone or the volume divided by the height.
I can derive the correct numerator; however, I am not sure how my lecturer arrived at $360E$ in the denominator.
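For what it's worth, the book answer can be checked by integrating the strain-energy density numerically. This sketch assumes the cone hangs tip-down, so at height $z$ above the tip the stress is $\sigma(z)=\gamma z/3$ and the cross-sectional area is $A(z)=\pi D^2 z^2/(4L^2)$; the material constants are arbitrary values of mine:

```python
import math

# Numerical check of U = integral_0^L sigma(z)^2/(2E) * A(z) dz
# against the closed form U = pi D^2 gamma^2 L^3 / (360 E).
gamma, E, D, L = 2.0, 3.0, 1.5, 4.0   # arbitrary material/geometry constants

def integrand(z):
    sigma = gamma * z / 3                     # stress: weight below z over area
    A = math.pi * D**2 * z**2 / (4 * L**2)    # cross-sectional area at height z
    return sigma**2 / (2 * E) * A

# Midpoint-rule integration over [0, L]
n = 20000
dz = L / n
U_numeric = sum(integrand((i + 0.5) * dz) for i in range(n)) * dz
U_exact = math.pi * D**2 * gamma**2 * L**3 / (360 * E)
print(U_numeric, U_exact)  # agree to many digits
```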
Answer: We assume that the wide end of the cone is up, of course—otherwise the stress would go to infinity as the full weight of the cone is applied to a cross-sectional area that tapers to zero. (I incorrectly specified the opposite configuration in my comment above.)
Measure the coordinate $z$ from the bottom (i.e., from the tip of the cone). Then, the diameter at any location $z$ is $$D(z)=\frac{Dz}{L}.$$ Therefore, the corresponding area is $$A(z)=\frac{\pi D(z)^2}{4}.$$ The volume hanging underneath location $z$ is $$V(z)=\frac{\pi D(z)^2z}{12},$$ with corresponding weight $W=\gamma V(z)$ applied across the cross section. Therefore, the stress on any horizontal infinitesimal slice of thickness $dz$ at location $z$ is $$\sigma(z)=\frac{W}{A(z)},$$ and the volumetric strain energy within that slice is thus $$dU=\left(\frac{\sigma(z)^2}{2E}\right)A(z)dz,$$ where I've applied the linear elasticity result that the differential strain energy per unit volume can always be expressed as $du=\sigma\,d\epsilon=\frac{\sigma}{E}d\sigma$ by Hooke's Law, or, integrated, $u=\frac{\sigma^2}{2E}$. Now, integrate $dU$ from $z=1$ to $L$. | {
"domain": "engineering.stackexchange",
"id": 2002,
"tags": "mechanical-engineering, structural-engineering, civil-engineering, structures, energy"
} |
a prototype of finding the (almost) best learning rate and initial weights so that a perceptron converges with the minimal iteration | Question: First of all, I chose the nearest data points/training examples
import numpy as np
import copy
nearest_setosa = np.array([[1.9, 0.4],[1.7, 0.5]])
nearest_versicolour = np.array([[3. , 1.1]])
and then I labeled negative examples as -1, and kept the label for the positive example.
x_train = np.concatenate((nearest_setosa, nearest_versicolour), axis=0)
y_train = [-1, -1, 1]
This is a simplified version of the sign function.
def predict(x):
if np.dot(model_w, x) + model_b >= 0:
return 1
else:
return -1
I decided to update the weights once the model makes a wrong prediction.
def update_weights(idx, verbose=False):
global model_w, model_b, eta
model_w += eta * y_train[idx] * x_train[idx]
model_b += eta * y_train[idx]
if verbose:
print(model_b)
print(model_w)
The following code tests a bunch of learning rates (eta) and initial weights to find one which makes the model converge with the minimal number of iterations.
eta_weights = []
for w in np.arange(-1.0, 1.0, .1):
for eta in np.arange(.1, 2.0, .1):
model_w = np.asarray([w, w])
model_b = 0.0
init_w = copy.deepcopy(w)
for j in range(99):
indicator = 0
for i in range(3):
if y_train[i] != predict(x_train[i]):
update_weights(i)
else:
indicator+=1
if indicator>=3:
break
eta_weights.append([j, eta, init_w, model_w, model_b])
I'm not sure if some classic search algorithms, e.g. binary search, are applicable to this particular case.
Is it common to nest so many loops? Is there a better way to do the job?
Answer: There are certainly things you could improve
This use of global is quite confusing: you are using eta, model_w and model_b as local variables of the for eta in np.arange(.1, 2.0, .1) loop, not as global state. A cleaner way to do that would be to pass them as parameters to update_weights.
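For illustration, a hedged sketch of what passing the state as parameters could look like (the function names mirror the original script; the sample point is one of the negative examples from the question's data):

```python
# Hypothetical refactor: the model state is passed in and returned,
# instead of being mutated through `global`.
def predict(model_w, model_b, x):
    activation = sum(w * xi for w, xi in zip(model_w, x)) + model_b
    return 1 if activation >= 0 else -1

def update_weights(model_w, model_b, eta, x, y):
    new_w = [w + eta * y * xi for w, xi in zip(model_w, x)]
    new_b = model_b + eta * y
    return new_w, new_b

w, b, eta = [0.5, 0.5], 0.0, 0.1
x, y = [1.9, 0.4], -1          # a negative training example
if predict(w, b, x) != y:      # misclassified, so update
    w, b = update_weights(w, b, eta, x, y)
print(w, b)  # weights nudged away from the misclassified point
```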
The way j is used outside of the loop is not very clear (even if it should work as intended). What I suggest is to use another variable for that (why not call it n_iter?). It would look like:
n_iter = 99
for j in range(n_iter):
indicator = 0
...
if indicator>=3:
n_iter = j
break
eta_weights.append([n_iter, eta, init_w, model_w, model_b])
As a bonus now you can tell if your loop broke because of the last point (n_iter=98) or if it ran until the end (n_iter=99)
Why are you using a copy, let alone a deepcopy, here? w is only a number (immutable), so you can simply do: init_w = w
Layers of loop are not an issue as long as the code stays readable. Here I'd say it is ok. If you still want to shorten it you can use itertools.product:
import itertools
for w, eta in itertools.product(np.arange(-1.0, 1.0, .1), np.arange(.1, 2.0, .1)):
print(f"w={w}, eta={eta}")
I am not sure where you want to use search algorithms. Yes, in the worst case you have to loop 300 times but there is no easy fix here. | {
"domain": "codereview.stackexchange",
"id": 41612,
"tags": "python, algorithm, machine-learning"
} |
Maxwell speed distribution of a mixture of gases vs. of one gas | Question: How is it qualitatively justifiable that, in a mixture of molecules of different kinds in complete equilibrium, each kind of molecule has the same Maxwellian distribution in speed that it would have if the other kinds were not present?
Answer: At the base of the Maxwellian velocity distribution, there are three key ingredients:
the equipartition theorem, connecting the average value of each component of the velocity with the temperature of the system (it is important to connect the average value of each Cartesian component of velocity to the temperature);
the central limit theorem (CLT), which ensures a Gaussian distribution of each component of the velocity of one particle, since it may be considered as the sum of a huge number of variations due to the other particles (notice that the CLT holds under very mild assumptions on the distribution of the velocity variations);
the statistical independence of each component of the velocity of one particle on any other component of the velocity of the same or other particles. Such a hypothesis is certainly satisfied for a system at equilibrium with a thermostat at temperature $T$ since, under such conditions, there is no constraint on the velocities.
All these three points together are enough to imply the Maxwellian distribution. Consequently, each kind of molecule has the same Maxwellian distribution in speed that it would have if the other species were not present.
"domain": "physics.stackexchange",
"id": 81533,
"tags": "thermodynamics, statistical-mechanics, physical-chemistry"
} |
Populate database of web articles | Question: What I have here is a relatively simple endpoint for a small site I'm making. Idea is to take:
title
content
tags
and create an entry in the database. It works properly.
I threw this together as something that should work and tried to make it clean (like with the error list that gets returned) but clearly I'm doing this all by hand. Are there best practices for writing these API-style endpoints that I'm not following?
I looked under the hood at a few sites (such as SO) and noticed they are using JSON as encoding for the response. That's why I used it too.
On the input validation front, is my system decently robust or is it too flimsy?
I also tried to make the code safe by catching any exceptions that might pop up. If there's something I've overlooked please let me know.
@main.route('/createpost', methods=['POST'])
@login_required
def createpost():
resp = {
'success': False
}
err = []
u = current_user.id # 2458017363
title = request.values.get('title')
_tags = request.values.get('tags') # JSON btw
content = request.values.get('content')
# _attachments = request.files.getlist('file')
# attachments = []
# for f in _attachments:
# if f.filename.rsplit('.', 1)[1].lower() not in ALLOWED_EXTENSIONS:
# filepath = os.path.join(UPLOAD_DIR, secure_filename(f.filename))
# f.save(filepath)
# attachments.append(filepath)
# else:
# err.append('File ' + f.filename + "is not permitted!")
if not title or len(title) > 100:
err.append('Your title must exist and be less than 100 characters.')
try:
tags = json.loads(_tags)
if not tags or len(tags) > 3:
err.append('Choose between 1-3 tags so people know what your post is about!')
except Exception:
err.append('Choose between 1-3 tags so people know what your post is about!')
if not content or len(content) < 50:
err.append('Your content must be at least 50 characters.')
if err:
resp['error'] = err
print('err')
return Response(json.dumps(resp), mimetype='text/json')
# PROVIDED EVERYTHING IS CORRECT
while True:
try:
dbentry = Post(id=snowflake(),
author_id=u,
title=bleach.clean(str(title)),
tags=bleach.clean(str(_tags)),
content=bleach.clean(str(content)).encode(),
)
db.session.add(dbentry)
db.session.commit()
except IntegrityError:
continue
break
resp['success'] = True
return Response(json.dumps(resp), mimetype='text/json')
imports are as follows:
# main.py
import json
import os
import bleach
from sqlalchemy.exc import IntegrityError
from .models import Post # sqlalchemy model for posts
from flask import Blueprint, render_template, request, Response
from flask_login import login_required, current_user
from werkzeug.utils import secure_filename
from . import db
from .utils import snowflake # random number generator
# (hence why i have a while loop around the db entry creation since there is a
# miniscule chance it will give the same number again)
Answer: Some quick remarks:
a well-behaved API should return specific HTTP codes depending on the outcome, that is:
200 for success, or 201 for resource created. 204 is a possibility for an empty but successful response too but I would not advise using the full panoply available. Just be aware of what codes exist and when to use them.
likewise, use 401/403 for permission issues or authentication errors but here you have the @login_required that should take care of that
accordingly, an error should return a meaningful status code to the client - 400 would be a good candidate for an invalid request
the bottom line, always return an appropriate status code, never return HTTP/200 if the request failed. API calls are routinely automated, and the client will rely on the status code returned by the server to determine if the call was successful or not.
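To make the point concrete, here is a hedged sketch (plain functions rather than a full Flask app, so names like build_response are my own; in Flask, the returned tuple maps directly onto (jsonify(body), status)):

```python
# Validate first, then return one consistent payload shape plus a
# meaningful HTTP status code, instead of HTTP/200 with a success flag.
def validate_post(title, tags, content):
    errors = []
    if not title or len(title) > 100:
        errors.append("Your title must exist and be less than 100 characters.")
    if not tags or len(tags) > 3:
        errors.append("Choose between 1-3 tags.")
    if not content or len(content) < 50:
        errors.append("Your content must be at least 50 characters.")
    return errors

def build_response(errors, created=None):
    if errors:
        return {"response": errors}, 400   # invalid request
    return {"response": created}, 201      # resource created

body, status = build_response(validate_post("", [], "short"))
print(status)  # 400

body, status = build_response(validate_post("Hi", ["a"], "x" * 60),
                              created={"id": 1})
print(status)  # 201
```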
useless variable: u = current_user.id # 2458017363. You could just write: author_id=current_user.id, instead of author_id=u. u is not an intuitive variable name anyway
I don't understand the logic of generating a random ID in your code, the database backend should generate an incremented record ID for you (simply update your data model). I don't see why you need a random value here...
When you decide to handle certain exceptions like except IntegrityError, you can indeed decide to ignore them or hide them to the client but you should still log them for later review. IntegrityError suggests there is something wrong with your handling of incoming data. It should not happen if data is inspected and sanitized properly.
except Exception is too broad when all you're doing is json.loads. The error that you can expect here should be ValueError. Try submitting empty JSON and malformed JSON to verify.
To sum up, I don't think you should have resp['error'] and resp['success']; just use the same name in all cases, e.g. 'response'. It is the status code that will make the difference between success and failure. Whatever you do, the API should respond in a way that is consistent and predictable.
"domain": "codereview.stackexchange",
"id": 40810,
"tags": "python, json, api, flask"
} |
Find events associated with users on a certain date in MongoDB | Question: I have this script which works perfectly but I experienced some delays because of these 2 for loops [i][j]. Is there any way to do the same function but with a better and more effective process like foreach or other?
User.find({}).lean(true).exec((err, users) => {
let getTEvent = [];
//nested loops() //callbacks
for (let i =0 ; i < users.length; i++) {
if(users[i].events && users[i].events.length) {
const dt = datetime.create();
dt.offsetInDays(0);
const formatted = dt.format('d/m/Y');
// console.log(formatted)
for (let j = 0; j < users[i].events.length; j++) {
if(users[i].events[j].eventDate === formatted) {
getTEvent.push({events: users[i].events[j]});
}
}
}
}
return res.json(getTEvent)
});
The main role of this code is:
find the data in MongoDB: find all users, with or without events
loop through the data and select events made by any user
push the results into an array, getTEvent, to be used later on the client side
Details:
The User is the model: const User = require('../models/users.model');
and events is an array ([]).
This is the MongoDB structure of those models:
Answer: I think the main cause of your delays is not a nested for but rather the fact that you extract all data from your MongoDB collection into memory. What you can do is:
calculate the formatted date once
query just those users that contain the given event date, instead of loading all users into memory. Here's the guide.
This will look roughly like the following:
const dt = datetime.create();
dt.offsetInDays(0);
const formatted = dt.format('d/m/Y');
User.find({
    // dot notation matches array elements by field, regardless of other fields
    "events.eventDate": formatted
}).lean(true).exec((err, users) => {
let getTEvent = [];
//nested loops() //callbacks
for (let i = 0 ; i < users.length; i++) {
if(users[i].events && users[i].events.length) {
for (let j = 0; j < users[i].events.length; j++) {
if(users[i].events[j].eventDate === formatted) {
getTEvent.push({events: users[i].events[j]});
}
}
}
}
return res.json(getTEvent)
}); | {
"domain": "codereview.stackexchange",
"id": 43648,
"tags": "javascript, datetime, node.js, ecmascript-6, mongodb"
} |
Spoofs MAC Address to manufacturer of user's choice, Linux or MacOS | Question: I've made a Bash script for Linux/macOS that allows you to search for a MAC manufacturer, generate a MAC address from that company, and then spoof your address to it using macchanger.
You can download it from GitHub, or the code is put below this paragraph. To run it, chmod +x mac-camo and sudo ./mac-camo.
#!/bin/bash
# MAC-Camo
# Disguise your MAC Address as that of any manufacturer as you want
# Made by Keegan Kuhn
# v0.1
# Defines foreground colors
black='tput setaf 0'
red='tput setaf 1'
green='tput setaf 2'
yellow='tput setaf 3'
blue='tput setaf 4'
pink='tput setaf 5'
skyBlue='tput setaf 6'
white='tput setaf 7'
grey='tput setaf 8'
# Defines background colors
bgBlack='tput setab 0'
bgRed='tput setab 1'
bgGreen='tput setab 2'
bgYellow='tput setab 3'
bgBlue='tput setab 4'
bgPink='tput setab 5'
bgSkyBlue='tput setab 6'
bgWhite='tput setab 7'
bgGrey='tput setab 8'
# Defines other special effects
cols='tput cols'
lines='tput lines'
bold='tput bold'
reverseColor='tput rev'
underlineStart='tput smul'
underlineFinish='tput rmul'
standoutStart='tput smso'
standoutFinish='tput rmso'
stopAllFX='tput sgr0'
# Defines hexadecimal colors
hexchars="0123456789ABCDEF"
function start() {
# Renders ANSI art
$stopAllFX; $bgBlack; $bold; $red; clear
sleep .2; echo " ▉╗ ╔▉ ╔▉▉▉╗ ╔▉▉▉▉▉ ╔▉▉▉▉▉ ╔▉▉▉╗ ▉╗ ╔▉ ╔▉▉▉╗"
sleep .2; echo " ▉▉╗ ╔▉▉ ▉ ▉ ▉ ▉ ▉ ▉ ▉▉╗ ╔▉▉ ▉ ▉"
sleep .2; echo " ▉ ▉▉▉ ▉ ▉▉▉▉▉ ▉ $($white)☷☷☷☷☷$($red) ▉ ▉▉▉▉▉ ▉ ▉▉▉ ▉ ▉ ▉"
sleep .2; echo " ▉ ▉ ▉ ▉ ▉ ▉ ▉ ▉ ▉ ▉ ▉ ▉ ▉ ▉"
sleep .2; echo " ▉ ▉ ▉ ▉ ▉ ╚▉▉▉▉▉ ╚▉▉▉▉▉ ▉ ▉ ▉ ▉ ▉ ╚▉▉▉╝"
sleep .2; echo
sleep .2; $green; echo " Disguise your MAC Address as that of any manufacturer as you want ($($red)MAC-Camo$($green))"
sleep .2; $skyBlue; echo " Made by $($green)Keegan Kuhn ($($red)keeganjk$($green))"
sleep .2; $skyBlue; echo " Version: $($green)1.0"
sleep .2; echo
sleep .2; $green; echo " Please report all issues to: $($yellow)https://github.com/keeganjk/mac-camo/issues"
$stopAllFX; $bgBlack; echo
# Checks if user is root
override="1"
if [[ $(whoami) == "root" ]]; then
sleep .2; $green; echo " Root successful."
elif [[ $override == "1" ]]; then
sleep .2; $green; echo " Root successful."
else
sleep .2; $white; echo " Sorry, this script must be run as $($bold)root"
sleep .4; $stopAllFX; $bgBlack; $yellow; echo " Try using $($bold)sudo"
sleep .4; $red; $bold; echo " [!] $($stopAllFX; $bgBlack; $red)EXITING!"
$stopAllFX; exit
fi
function spoofBrowse() {
echo
$blue; awk -F '#' '{printf("%10d %s\n", NR, ":" $1 )}' oui.txt
read -p "[*] $($stopAllFX; $bgBlack; $white)Enter the number code for manufacturer: >>> " num
echo "[-] Generating MAC Address"
num=$( expr $num - 1 )
declare -a array
while read -r; do
array+=( "$REPLY" )
done < addr.txt
end=$( for i in {1..6} ; do echo -n ${hexchars:$(( $RANDOM % 16 )):1} ; done | sed -e 's/\(..\)/:\1/g' )
macAddress=${array[$num]}$end
echo "[-] MAC Address generated !"
$skyBlue; $bold; echo
ifconfig -l
$white
read -p "[*] $($stopAllFX; $bgBlack; $white)Select an interface: >>> " iface
$yellow; echo "[-] $iface selected !"
$white; echo "[-] Disabling $iface ..."
ifconfig $iface down
macchanger -m $macAddress $iface
echo "[-] MAC Address spoofed !"
echo "[-] Enabling $iface ..."
ifconfig $iface up
echo "[-] $iface enabled !"
echo "[-] Using address: $macAddress"
exit
}
function spoofUseMAC() {
echo
read -p "[*] $($stopAllFX; $bgBlack; $white)Enter the number code for manufacturer: >>> " num
echo "[-] Generating MAC Address"
num=$( expr $num - 1 )
declare -a array
while read -r; do
array+=( "$REPLY" )
done < addr.txt
end=$( for i in {1..6} ; do echo -n ${hexchars:$(( $RANDOM % 16 )):1} ; done | sed -e 's/\(..\)/:\1/g' )
macAddress=${array[$num]}$end
echo "[-] MAC Address generated !"
echo "[-] Using address: $macAddress"
$skyBlue; $bold; echo
ifconfig -l
$white
read -p "[*] $($stopAllFX; $bgBlack; $white)Select an interface: >>> " iface
$yellow; echo "[-] $iface selected !"
$white; echo "[-] Disabling $iface ..."
ifconfig $iface down
macchanger -m $macAddress $iface
echo "[-] MAC Address spoofed !"
echo "[-] Enabling $iface ..."
ifconfig $iface up
exit
}
function spoofSearchAgainOrNot() {
$bold; $white; echo
echo "[*] $($stopAllFX; $bgBlack; $white)Please select an option from the list below:"
$yellow; $bold; echo " [$($white)0$($yellow)] $($stopAllFX; $bgBlack; $white)Search again"
$yellow; $bold; echo " [$($white)1$($yellow)] $($stopAllFX; $bgBlack; $white)Use one of these"
read -p ">>> " searchAgainOrNot
if [[ "$searchAgainOrNot" == "0" ]]; then
spoofSearch
elif [[ "$searchAgainOrNot" == "1" ]]; then
spoofUseMAC
else
spoofSearchAgainOrNot
fi
}
# Spoof, search option selected
function spoofSearch() {
$bold; echo
read -p "[*] $($stopAllFX; $bgBlack; $white)Search for a manufacturer: >>> " search
$blue; awk -F '#' '{printf("%10d %s\n", NR, ":" $1 )}' oui.txt | grep -i --colour="always" $search
spoofSearchAgainOrNot
}
# Spoof
function spoof() {
echo; $white
echo "[*] $($stopAllFX; $bgBlack; $white)Please select an option from the list below:"
$yellow; $bold; echo " [$($white)0$($yellow)] $($stopAllFX; $bgBlack; $white)Search for a manufacturer"
$yellow; $bold; echo " [$($white)1$($yellow)] $($stopAllFX; $bgBlack; $white)Browse for a manufacturer through $($bold)long $($stopAllFX; $bgBlack; $white)list"
read -p ">>> " searchOrBrowse
if [[ $searchOrBrowse == "0" ]]; then
spoofSearch
elif [[ $searchOrBrowse == "1" ]]; then
spoofBrowse
else
spoof
fi
}
# Install complete
function installComplete() {
$white; $bold; echo
echo "[-] Install complete !"
echo "[*] $($stopAllFX; $bgBlack; $white)Restart or exit?"
$yellow; $bold; echo " [$($white)0$($yellow)] $($stopAllFX; $bgBlack; $white)Restart"
$yellow; $bold; echo " [$($white)1$($yellow)] $($stopAllFX; $bgBlack; $white)Exit"
read -p ">>> " restartOrExit
if [[ "$restartOrExit" == "0" ]]; then
start
elif [[ "$restartOrExit" == "1" ]]; then
$yellow; $bold; echo "[!] $($stopAllFX; $bgBlack; $yellow)EXITING !"
$stopAllFX; exit
else
installComplete
fi
}
# Install cancelled
function installCancelled() {
echo; $red; $bold; echo "[!] $($stopAllFX; $bgBlack; $red)INSTALL CANCELLED !"
$white; $bold; echo
echo "[*] $($stopAllFX; $bgBlack; $white)Restart or exit?"
$yellow; $bold; echo " [$($white)0$($yellow)] $($stopAllFX; $bgBlack; $white)Restart"
$yellow; $bold; echo " [$($white)1$($yellow)] $($stopAllFX; $bgBlack; $white)Exit"
read -p ">>> " restartOrExit
if [[ "$restartOrExit" == "0" ]]; then
start
elif [[ "$restartOrExit" == "1" ]]; then
$red; $bold; echo "[!] $($stopAllFX; $bgBlack; $red)ABORT !"
$red; $bold; echo "[!] $($stopAllFX; $bgBlack; $red)EXITING !"
$stopAllFX; exit
else
installCancelled
fi
}
# Linux install, Pt. 3
function linuxInstallPt3() {
$white; $bold; echo
echo "[-] Installing macchanger ..."
if [[ "$pacman" == "pacman" ]]; then
$pacman -S install macchanger
elif [[ "$paman" == "IDK" ]]; then
apt-get install macchanger
apt install macchanger
rpm install macchanger
yum install macchanger
dnf install macchanger
pacman -S install macchanger
else
$pacman install macchanger
fi
if macchanger --help > /dev/null; then
echo "[-] macchanger installed !"
else
$red; $bold; echo "[!] $($stopAllFX; $bgBlack; $red)ERROR!"
$red; $bold; echo "[!] $($stopAllFX; $bgBlack; $red)MACCHANGER NOT INSTALLED!"
$red; $bold; echo "[!] $($stopAllFX; $bgBlack; $red)EXITING!"
$stopAllFX; exit
fi
echo "[-] Copying files ..."
cp mac-camo /usr/bin/
echo "[-] Files copied !"
echo "[-] Changing file permissions ..."
chmod +x /usr/bin/mac-camo
echo "[-] File permissions changed !"
installComplete
}
# Linux install, Pt. 2
function linuxInstallPt2() {
$green; $bold; $standoutStart; echo
echo " OS: $os "
echo " macchanger installed: $macchangerInstalled "
echo " Package Manager: $pacman "
$standoutFinish; $white; $bold; echo
echo "[*] ^--- $($stopAllFX; $bgBlack; $white)Install with these settings?:"
$yellow; $bold; echo " [$($white)0$($yellow)] $($stopAllFX; $bgBlack; $white)Yes"
$yellow; $bold; echo " [$($white)1$($yellow)] $($stopAllFX; $bgBlack; $white)No"
read -p ">>> " installWithTheseSettings
if [[ "$installWithTheseSettings" == "0" ]]; then
linuxInstallPt3
elif [[ "$installWithTheseSettings" == "1" ]]; then
installCancelled
else
macInstallPt2
fi
}
# Linux install, Pt. 1
function linuxInstallPt1() {
$white; $bold; echo
echo "[*] $($stopAllFX; $bgBlack; $white)Which package manager do you use on this device?:"
$yellow; $bold; echo " [$($white)0$($yellow)] $($stopAllFX; $bgBlack; $white)apt-get"
$yellow; $bold; echo " [$($white)1$($yellow)] $($stopAllFX; $bgBlack; $white)apt"
$yellow; $bold; echo " [$($white)2$($yellow)] $($stopAllFX; $bgBlack; $white)rpm"
$yellow; $bold; echo " [$($white)3$($yellow)] $($stopAllFX; $bgBlack; $white)yum"
$yellow; $bold; echo " [$($white)4$($yellow)] $($stopAllFX; $bgBlack; $white)dnf"
$yellow; $bold; echo " [$($white)5$($yellow)] $($stopAllFX; $bgBlack; $white)pacman"
$yellow; $bold; echo " [$($white)6$($yellow)] $($stopAllFX; $bgBlack; $white)IDK"
read -p ">>> " pacman
if [[ "$pacman" == "0" ]]; then
pacman="apt-get"
linuxInstallPt2
elif [[ "$pacman" == "1" ]]; then
pacman="apt"
linuxInstallPt2
elif [[ "$pacman" == "2" ]]; then
pacman="rpm"
linuxInstallPt2
elif [[ "$pacman" == "3" ]]; then
pacman="yum"
linuxInstallPt2
elif [[ "$pacman" == "4" ]]; then
pacman="yum"
linuxInstallPt2
elif [[ "$pacman" == "5" ]]; then
pacman="pacman"
linuxInstallPt2
elif [[ "$pacman" == "6" ]]; then
pacman="IDK"
linuxInstallPt2
else
macInstallPt1
fi
}
# Linux install, Pt. 0
function linuxInstallPt0() {
$white; $bold; echo
echo "[*] $($stopAllFX; $bgBlack; $white)Is $($bold)macchanger$($stopAllFX; $bgBlack; $white) installed on this device?:"
$yellow; $bold; echo " [$($white)0$($yellow)] $($stopAllFX; $bgBlack; $white)Yes"
$yellow; $bold; echo " [$($white)1$($yellow)] $($stopAllFX; $bgBlack; $white)No"
$yellow; $bold; echo " [$($white)2$($yellow)] $($stopAllFX; $bgBlack; $white)IDK"
read -p ">>> " macchangerInstalled
if [[ "$macchangerInstalled" == "0" ]]; then
macchangerInstalled="Yes"
linuxInstallPt1
elif [[ "$macchangerInstalled" == "1" ]]; then
macchangerInstalled="No"
linuxInstallPt1
elif [[ "$macchangerInstalled" == "2" ]]; then
macchangerInstalled="IDK"
linuxInstallPt1
else
macInstallPt0
fi
}
# MacOS install, Pt. 3
function macInstallPt3() {
$white; $bold; echo
echo "[-] Installing Homebrew ..."
/usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
if brew ls --versions myformula > /dev/null; then
echo "[-] Homebrew installed !"
echo "[-] Updating Homebrew ..."
brew update
echo "[-] Homebrew updated !"
else
$red; $bold; echo "[!] $($stopAllFX; $bgBlack; $red)ERROR!"
$red; $bold; echo "[!] $($stopAllFX; $bgBlack; $red)HOMEBREW NOT INSTALLED!"
$red; $bold; echo "[!] $($stopAllFX; $bgBlack; $red)EXITING!"
$stopAllFX; exit
fi
echo "[-] Installing macchanger ..."
brew install acrogenesis/macchanger/macchanger
if macchanger --help > /dev/null; then
echo "[-] macchanger installed !"
else
$red; $bold; echo "[!] $($stopAllFX; $bgBlack; $red)ERROR!"
$red; $bold; echo "[!] $($stopAllFX; $bgBlack; $red)MACCHANGER NOT INSTALLED!"
$red; $bold; echo "[!] $($stopAllFX; $bgBlack; $red)EXITING!"
$stopAllFX; exit
fi
echo "[-] Copying files ..."
cp mac-camo /usr/bin/
echo "[-] Files copied !"
echo "[-] Changing file permissions ..."
chmod +x /usr/bin/mac-camo
echo "[-] File permissions changed !"
installComplete
}
# MacOS install, Pt. 2
function macInstallPt2() {
$green; $bold; $standoutStart; echo
echo " OS: $os "
echo " Homebrew installed: $brewInstalled "
echo " macchanger installed: $macchangerInstalled "
$standoutFinish; $white; $bold; echo
echo "[*] ^--- $($stopAllFX; $bgBlack; $white)Install with these settings?:"
$yellow; $bold; echo " [$($white)0$($yellow)] $($stopAllFX; $bgBlack; $white)Yes"
$yellow; $bold; echo " [$($white)1$($yellow)] $($stopAllFX; $bgBlack; $white)No"
read -p ">>> " installWithTheseSettings
if [[ "$installWithTheseSettings" == "0" ]]; then
macInstallPt3
elif [[ "$installWithTheseSettings" == "1" ]]; then
installCancelled
else
macInstallPt2
fi
}
# MacOS Install, Pt. 1
function macInstallPt1() {
$white; $bold; echo
echo "[*] $($stopAllFX; $bgBlack; $white)Is $($bold)macchanger$($stopAllFX; $bgBlack; $white) installed on this device?:"
$yellow; $bold; echo " [$($white)0$($yellow)] $($stopAllFX; $bgBlack; $white)Yes"
$yellow; $bold; echo " [$($white)1$($yellow)] $($stopAllFX; $bgBlack; $white)No"
$yellow; $bold; echo " [$($white)2$($yellow)] $($stopAllFX; $bgBlack; $white)IDK"
read -p ">>> " macchangerInstalled
if [[ "$macchangerInstalled" == "0" ]]; then
macchangerInstalled="Yes"
macInstallPt2
elif [[ "$macchangerInstalled" == "1" ]]; then
macchangerInstalled="No"
macInstallPt2
elif [[ "$macchangerInstalled" == "2" ]]; then
macchangerInstalled="IDK"
macInstallPt2
else
macInstallPt1
fi
}
# MacOS install, Pt. 0
function macInstallPt0() {
$white; $bold; echo
echo "[*] $($stopAllFX; $bgBlack; $white)Is $($bold)Homebrew$($stopAllFX; $bgBlack; $white) installed on this device?:"
$yellow; $bold; echo " [$($white)0$($yellow)] $($stopAllFX; $bgBlack; $white)Yes"
$yellow; $bold; echo " [$($white)1$($yellow)] $($stopAllFX; $bgBlack; $white)No"
$yellow; $bold; echo " [$($white)2$($yellow)] $($stopAllFX; $bgBlack; $white)IDK"
read -p ">>> " brewInstalled
if [[ "$brewInstalled" == "0" ]]; then
brewInstalled="Yes"
macInstallPt1
elif [[ "$brewInstalled" == "1" ]]; then
brewInstalled="No"
macInstallPt1
elif [[ "$brewInstalled" == "2" ]]; then
brewInstalled="IDK"
macInstallPt1
else
macInstallPt0
fi
}
# Installation and Configuration
function installAndConfig() {
$white; $bold; echo
echo "[*] $($stopAllFX; $bgBlack; $white)Please select your OS from the list below:"
$yellow; $bold; echo " [$($white)0$($yellow)] $($stopAllFX; $bgBlack; $white)MacOS"
$yellow; $bold; echo " [$($white)1$($yellow)] $($stopAllFX; $bgBlack; $white)Linux"
read -p ">>> " os
if [[ "$os" == "0" ]]; then
os="MacOS"
macInstallPt0
elif [[ "$os" == "1" ]]; then
os="Linux"
linuxInstallPt0
else
installAndConfig
fi
}
# Menu 0
function menu0() {
$white; $bold; echo
echo "[*] $($stopAllFX; $bgBlack; $white)Please select an option from the list below:"
$yellow; $bold; echo " [$($white)0$($yellow)] $($stopAllFX; $bgBlack; $white)Install and configure"
$yellow; $bold; echo " [$($white)1$($yellow)] $($stopAllFX; $bgBlack; $white)Spoof"
read -p ">>> " spoofOrInstall
if [[ "$spoofOrInstall" == "0" ]]; then
installAndConfig
elif [[ "$spoofOrInstall" == "1" ]]; then
spoof
else
menu0
fi
}; menu0
}; start
How can I make this work better?
Answer: One small thing - you can reduce the number of invocations of tput by storing the output of the tput commands rather than the command lines themselves, and then interpolating those output values when needed.
Example:
bgBlack=$(tput setab 0)
white=$(tput setaf 7)
stopAllFX=$(tput sgr0)
read -p "[*] $stopAllFX$bgBlack${white}Select an interface: >>> " iface | {
"domain": "codereview.stackexchange",
"id": 25728,
"tags": "security, bash, linux, unix, macos"
} |
Size of dispersed phase particles in colloid? | Question: I am really confused about what the size of the particles of the dispersed phase in a colloid is: one of my textbooks says it's between $1 - 100nm$, another says it's between $1-1000nm$, and Wikipedia says it's between $2-500nm$. Is there any reliable source giving the exact information?
Answer: It's between $1-1000nm$, as stated on Wikipedia in the paragraph named "IUPAC definition".
Colloid: Short synonym for colloidal system.
Note: Quotation from refs.[3][4]
Colloidal: State of subdivision such that the molecules or polymolecular particles dispersed in a medium have at least one dimension between approximately 1 nm and 1 μm, or that in a system discontinuities are found at distances of that order.[5]
Note: Quotation from refs. [3],[4]
Edit: Since it is mentioned under the IUPAC definition that the size lies between $1nm-1000nm$, this must be the correct size. | {
"domain": "chemistry.stackexchange",
"id": 562,
"tags": "mixtures"
} |
Proof that L(M) = {accepts the string 1100} is undecidable | Question: Let $$L = \{\langle M\rangle \mid M \text{ is a Turing Machine that accepts the string 1100}\}\, .$$
To prove that the language $L$ is undecidable I should reduce something to $L$, right?
I tried with the classic $A_{TM}$, but I could not figure out how to do the reduction properly.
How can I prove that $L$ is undecidable?
Answer: The usual reduction from the halting problem: for example, the same reduction that shows the zero-input halting problem to be undecidable.
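The construction can be sketched in code (an informal illustration — Python functions stand in for Turing machines, with non-halting modelled as an infinite loop; this is not a formal encoding):

```python
def make_M_x(M, x):
    """Given (a stand-in for) machine M and a fixed input x, build a machine
    M_x that ignores its own input and just simulates M on x."""
    def M_x(_ignored_input):
        M(x)              # runs forever exactly when M does not halt on x
        return "accept"   # reached only if the simulation halted
    return M_x
```

M_x accepts the magic string 1100 (indeed, every string) precisely when M halts on x, so a decider for "accepts 1100" would give a decider for the halting problem.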
Suppose you can determine whether a machine $M$ accepts some magic string (such as $1100$). You can then decide whether an arbitrary machine $M$ halts when given input $x$ as follows. Produce a machine $M_x$ that ignores its input and simulates $M$ running on input $x$. If $M$ halts, $M_x$ accepts whatever input it was given; otherwise, $M$ doesn't halt so nor does the simulation $M_x$. So $M_x$ accepts the magic string if, and only if, $M$ halts on input $x$. Hence, if you could decide whether a machine accepts the magic string, you could decide the halting problem. | {
"domain": "cs.stackexchange",
"id": 3459,
"tags": "turing-machines, reductions, undecidability"
} |
Effect of redshift on energy conservation | Question: Light coming from galaxies that are moving away from us is redshifted. Since the energy of a photon depends purely on its frequency, one may conclude that the energy of these photons decreases. The same light coming from the same star in the same galaxy will be seen by a planet in that galaxy as it "actually" is. So, depending on the relative motion, the energy will be seen differently. How does this not conflict with the conservation of energy? Or does the "total" energy of the universe depend on the frame?
I was thinking that the light coming to us is spread over a larger range since its period is larger. So the light carries the same energy for both observers. Even if its average energy is smaller, the total energies are the same.
However, I could not convince myself and I think I am making wrong interpretations.
Answer: I prefer to look at it based on certain Laws and observations.
The First and second Law of Thermodynamics are true
Energy is conserved and the universe is moving towards increasing entropy.
The Hubble constant is accurate and the universe is expanding at an exponential rate given by $a(t) = e^{Ht}$, where the constant $H$ is the Hubble expansion rate and $t$ is time. As in all FLRW spaces, $a(t)$, the scale factor, describes the expansion of physical spatial distances.
Redshift is the displacement of spectral lines toward longer wavelengths (the red end of the spectrum) in radiation from distant galaxies and celestial objects. This is interpreted as a Doppler shift that is proportional to the velocity of recession and thus to distance.
Blueshift a shift toward shorter wavelengths of the spectral lines of a celestial object, caused by the motion of the object toward the observer.
You say -
Light coming from galaxies that are moving away from us is redshifted. Since the energy of a photon depends purely on its frequency, one may conclude that the energy of these photons decreases. The same light coming from the same star in the same galaxy will be seen by a planet in that galaxy as it "actually" is. So, depending on the relative motion, the energy will be seen differently. How does this not conflict with the conservation of energy?
OK, let's say that is true and we are experiencing a redshift. At the same time, consider another observer who is moving at the same speed as our recession speed but towards the photon's source: he is experiencing a blueshift. Each photon will have a redshift and a corresponding blueshift depending on the relative frame of reference (one observer moving towards the source and one moving away). The following picture makes it clearer.
"Dopplerfrequenz" by Charly Whisky 18:20, 27 January 2007 (yyy) - Own work. Licensed under CC BY-SA 3.0 via Commons.
An animation illustrating how the Doppler effect causes a car engine or siren to sound higher in pitch when it is approaching than when it is receding. The pink circles represent sound waves.
The energy of the siren's waves doesn't change, but depending on where you are you will hear a higher or lower pitch.
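To make the siren analogy quantitative for light, here is a small numerical sketch of the relativistic longitudinal Doppler factor (made-up values; $\beta = v/c$):

```python
import math

def doppler_energy(E_emit, beta):
    """Photon energy seen by an observer; beta > 0 means the source recedes."""
    return E_emit * math.sqrt((1 - beta) / (1 + beta))

E0, beta = 2.0, 0.5                   # emitted photon energy (eV) and recession speed v/c
E_red  = doppler_energy(E0,  beta)    # observer the source recedes from: redshifted
E_blue = doppler_energy(E0, -beta)    # observer the source approaches: blueshifted
```

The red- and blueshift factors are exact reciprocals (here $E_{red}\,E_{blue} = E_0^2$), so the photon's energy is frame-dependent bookkeeping rather than something being created or destroyed.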
Now, on a cosmological scale, the exponentially expanding universe with local Hubble horizons, and the question of whether the universe is a closed or an open system, lead to a cluster of different methods for trying to verify conservation of energy for the universe. But a good way to think about this is to imagine open local systems exchanging matter and energy with each other across the universe, with the sum of their energy differences being zero,
thereby conserving energy. | {
"domain": "physics.stackexchange",
"id": 25977,
"tags": "electromagnetic-radiation, energy-conservation, space-expansion, relative-motion, redshift"
} |
Is Bernoulli’s equation applicable for systems with pumps? | Question: Please have patience with me as I have not looked at this sort of theory in many years (I am also very new to StackExchange).
A question asks the following:
A sloping swimming pool two-thirds full of sea water provides the supply for a fixed installation protecting a risk situated above the pool.
The pool measures 12m long, 6m wide and 1m at the shallow end sloping to 2.5m at the deep
end. A pipe is placed so that its inlet is at the bottom of the deep end of the pool. A pump
imparts pressure to the water in the pipe such that a mercury manometer indicates a pressure
difference of 681mm Hg between inlet and outlet. The inlet of the pipe is 80mm diameter; the
outlet nozzle is 45mm in diameter and is 5m above the inlet. The inlet velocity of the water is
5m/s, the density of sea water is 1050kgm-3 and the density of mercury is 13600kgm-3.
Use Bernoulli’s theorem to calculate the velocity of the water through the outlet nozzle?
By applying the continuity equation the following is found:
$v_1A_1=v_2A_2$
$v_2=15.8m/s$
However, the solution is given as $v_2=10m/s$
If I apply Bernoulli’s equation:
$$
\frac{P_2 -P_1}{ρ}+\frac{1}{2}(v^2_2-v^2_1)+g(z_2-z_1)=0
$$
$$
\frac{-90792.5}{1050}+\frac{1}{2}(v^2_2-25)+9.81(5)=0
$$
$$
v_2=10m/s
$$
This is the answer that the solution gives but continuity is not met as shown above. Is this question ill-posed?
I am very confused why continuity is not met when Bernoulli’s equation is used and why the two methods produce different results. A similar question was asked here but this was for a system without a pump and I am not sure if it is applicable.
My question is: Can Bernoulli’s equation be used when the system has a pump or is it no longer valid due to energy not being conserved?
Answer: No, there is a problem with the question. If the flow rate is set at the inlet, then that is the flow through the pipe, period. The pump doesn't add mass, just a motive force. Bernoulli would apply if you were doing the flow calcs yourself. In this case, they gave you the answer at the beginning. The flow rate through the exit will be equal to the flow rate through the inlet no matter what the pump did or how high the exit is.
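To see the mismatch numerically, here is a quick check of both routes using the question's figures (a sketch; $g = 9.81\,\mathrm{m/s^2}$ assumed):

```python
import math

rho_w, rho_hg, g = 1050.0, 13600.0, 9.81   # sea water, mercury densities (kg/m^3); gravity (m/s^2)
d1, d2 = 0.080, 0.045                      # inlet / outlet diameters (m)
v1, dz = 5.0, 5.0                          # inlet velocity (m/s), elevation gain (m)

# Continuity: v2 = v1 * (A1 / A2) = v1 * (d1 / d2)^2
v2_continuity = v1 * (d1 / d2) ** 2        # ~15.8 m/s

# Bernoulli with the pump head from the 681 mm Hg manometer reading
dP = 0.681 * rho_hg * g                    # pressure difference in Pa
v2_bernoulli = math.sqrt(v1**2 + 2 * (dP / rho_w - g * dz))   # ~10.0 m/s
```

Continuity fixes the exit velocity at about 15.8 m/s, while the Bernoulli route with the quoted manometer reading returns about 10 m/s — the two cannot both hold with these numbers, which is the inconsistency in the problem statement.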
Bernoulli probably doesn't work because they made up the pump head. | {
"domain": "engineering.stackexchange",
"id": 4200,
"tags": "mechanical-engineering, fluid-mechanics, bernoulli"
} |
Electric Magnetic duality | Question: In this paper http://arxiv.org/abs/hep-th/9705122 Section 2
We have $$S_A = \frac{1}{4g^2} \int{d^4x F_{\mu\nu}(A)F^{\mu\nu}(A)}$$
where $F_{\mu\nu}(A) = \partial_{[\mu}A_{\nu]}$. Its Bianchi identity is $\partial_\mu *F^{\mu\nu} = 0$ (note that $*$ represents the Hodge dual).
Great. Now the author went to parent action:
$$S_{F,\Lambda} = \int{d^4x(\frac{1}{4g^2} F_{\mu\nu} F^{\mu\nu} +a \Lambda_\mu \partial_\nu *F^{\nu\mu}} )$$
He first varied it w.r.t $\Lambda_\mu$ and then w.r.t $F_{\mu\nu}$.
1)He got in the first case, $\partial_\mu *F^{\mu\nu} = 0$ and thus he mentioned that our parent action reduces to $S_A$.
2)He got in the second case, $$\frac{1}{2g^2} F^{\mu\nu} = \frac{a}{2} \partial_\rho \Lambda_\sigma \epsilon^{\rho \sigma \mu \nu}= \frac{a}{2} *G^{\mu\nu}$$ and thus he mentioned that now plugging this back into the action:
$$S_{F,\Lambda} \rightarrow S_{\Lambda} = \frac{-g^2a^2}{4} \int{d^4x *G_{\mu\nu} *G^{\mu\nu}}$$
He then said knowing that $*G_{\mu\nu} *G^{\mu\nu}=-2G_{\mu\nu} G^{\mu\nu}$ We obtain perfectly $$S_\Lambda = \frac{g^2}{4} \int{d^4x G_{\mu\nu}(\Lambda)G^{\mu\nu}(\Lambda)}$$
And so this is the duality, with the coupling constant inverted. Perfect.
My questions:
A) When he said plugging this back into the action (in italics above), he plugged it into the first term of the parent action. What about the second term? Did he throw it away?
B) $\frac{1}{2g^2} F^{\mu\nu} = \frac{a}{2} \partial_\rho \Lambda_\sigma \epsilon^{\rho \sigma \mu \nu}= \frac{a}{2} *G^{\mu\nu}$ Where did this relation come from (The first and the second equality)?
Answer: We start with the action
$$
S_{\Lambda,F} = \int d^4 x \left[ \frac{1}{4g^2} F_{\mu\nu} F^{\mu\nu} + a \Lambda_\mu \partial_\nu \ast F^{\mu\nu}\right]
$$
This action is equivalent to $S_A$ in the sense that the equation of motion for $\Lambda_\mu$ when plugged into $S_{\Lambda,F}$ gives $S_A$. This is legit since $\Lambda_\mu$ appears linearly and without derivatives in the action.
On the other hand, we can find the equations of motion for $F_{\mu\nu}$. For this, we note
$$
\ast F^{\mu\nu} = \frac{1}{2} \varepsilon^{\mu\nu\rho\sigma} F_{\rho\sigma}
$$
Varying the action, we then get
\begin{equation}
\begin{split}
\delta S_{\Lambda,F} &= \int d^4 x \left[ \frac{1}{2g^2} \delta F_{\mu\nu} F^{\mu\nu} + a \frac{1}{2} \Lambda_\mu \varepsilon^{\mu\nu\rho\sigma} \partial_\nu \delta F_{\rho\sigma}\right] \\
&= \frac{1}{2} \int d^4 x \left[ \frac{1}{g^2} F^{\rho\sigma} - a \varepsilon^{\mu\nu\rho\sigma}\partial_\nu \Lambda_\mu \right] \delta F_{\rho\sigma}
\end{split}
\end{equation}
Thus, the equations of motion are
$$
\frac{1}{g^2} F^{\rho\sigma} = a \varepsilon^{\mu\nu\rho\sigma}\partial_\nu \Lambda_\mu
= \frac{a}{2} \varepsilon^{\mu\nu\rho\sigma} \left( \partial_\nu \Lambda_\mu - \partial_\mu \Lambda_\nu \right) \\
= -\frac{a}{2} \varepsilon^{\rho\sigma\nu\mu} \left( \partial_\nu \Lambda_\mu - \partial_\mu \Lambda_\nu \right)
$$
We now define
$$
G_{\nu\mu} \equiv \partial_\nu \Lambda_\mu - \partial_\mu \Lambda_\nu
$$
Using the definition of hodge star, we find
$$
F^{\rho\sigma} = - a g^2 \ast G^{\rho\sigma}
$$
We can now plug this answer back into the action to get
\begin{equation}
\begin{split}
S_{\Lambda,F} &= \int d^4 x \left[ \frac{1}{4g^2} (-ag^2)^2 (\ast G)_{\mu\nu} (\ast G)^{\mu\nu} - \frac{1}{2} a (-ag^2) G_{\nu\mu} \ast \ast G^{\mu\nu}\right] \\
&= \int d^4 x \left[ \frac{a^2 g^2}{4} (\ast G)_{\mu\nu} (\ast G)^{\mu\nu} - \frac{1}{2} a^2 g^2 G_{\mu\nu} \ast \ast G^{\mu\nu}\right]
\end{split}
\end{equation}
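Before using the double-dual identity below, here is a quick numerical sanity check that $\ast\ast G = -G$ for a 2-form in 4D Minkowski space (my assumed conventions, which may differ from the paper's: mostly-plus signature, $\varepsilon_{0123} = +1$, and the dual normalized as $\ast G^{\mu\nu} = \tfrac12 \varepsilon^{\mu\nu\rho\sigma} G_{\rho\sigma}$):

```python
import itertools
import numpy as np

# Levi-Civita symbol with lower indices, eps[0,1,2,3] = +1
eps_dn = np.zeros((4, 4, 4, 4))
for perm in itertools.permutations(range(4)):
    inversions = sum(1 for i in range(4) for j in range(i + 1, 4) if perm[i] > perm[j])
    eps_dn[perm] = (-1) ** inversions

eta = np.diag([-1.0, 1.0, 1.0, 1.0])      # Minkowski metric, mostly-plus
# Raise all four indices: eps^{0123} = det(eta) * eps_{0123} = -1
eps_up = np.einsum('ma,nb,rc,sd,abcd->mnrs', eta, eta, eta, eta, eps_dn)

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
G_dn = A - A.T                            # random antisymmetric G_{mu nu}
G_up = eta @ G_dn @ eta                   # raise both indices

def dual_up(G_lower):
    """(*G)^{mu nu} = (1/2) eps^{mu nu rho sigma} G_{rho sigma}"""
    return 0.5 * np.einsum('mnrs,rs->mn', eps_up, G_lower)

star_G_up = dual_up(G_dn)
star_G_dn = eta @ star_G_up @ eta
star_star_G_up = dual_up(star_G_dn)

assert np.allclose(star_star_G_up, -G_up)  # ** = -1 on 2-forms in Lorentzian 4D
```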
Now, we use the fact that
$$
(\ast G)_{\mu\nu} (\ast G)^{\mu\nu} = - 2 G_{\mu\nu} G^{\mu\nu},~~~~ \ast \ast G^{\mu\nu} = - G^{\mu\nu}
$$
we find
$$
S_{\Lambda,F} = \int d^4 x \left[ - \frac{a^2 g^2}{4} G_{\mu\nu} G^{\mu\nu} + \frac{1}{2} a^2 g^2 G_{\mu\nu} G^{\mu\nu}\right] = \int d^4 x \frac{a^2 g^2}{4} G_{\mu\nu} G^{\mu\nu}
$$ | {
"domain": "physics.stackexchange",
"id": 18028,
"tags": "homework-and-exercises, electromagnetism, lagrangian-formalism, tensor-calculus, duality"
} |
Why is SWAN dimming? | Question: The SWAN C/2020 F8 comet is passing near the Earth, but from Wikipedia
The comet has dimmed a small amount since May 3.
It was expected to possibly reach 3rd magnitude in May, but is now expected to hover closer to magnitude 5.
What is the cause of this dimming?
Answer: SWAN seems to be on a parabolic orbit. This means that it is the first time that it is interacting with the Sun, making it a "new" comet. New comets are covered in a layer of very volatile elements that will vaporize pretty fast when the comet is still far away from the Sun. This causes a surge in brightness. But once those very volatile elements are gone, it stops getting brighter and the comet's brightness can even decrease.
In addition to this, it is mostly losing gas, so its tail isn't as visible as if it were a tail of dust.
(From this article, in French) | {
"domain": "astronomy.stackexchange",
"id": 4491,
"tags": "observational-astronomy, comets, magnitude"
} |
What is the physical significance of the imaginary part when plane waves are represented as $e^{i(kx-\omega t)}$? | Question: I've read that plane wave equations can be represented in various forms, like sine or cosine curves, etc. What is the part of the imaginary unit $i$ when plane waves are represented in the form
$$f(x) = Ae^{i (kx - \omega t)},$$
using complex exponentials?
Answer: It doesn't really play a role (in a way), or at least not as far as physical results go. Whenever someone says
we consider a plane wave of the form $f(x) = Ae^{i(kx-\omega t)}$,
what they are really saying is something like
we consider an oscillatory function of the form $f_\mathrm{re}(x) = |A|\cos(kx-\omega t +\varphi)$, but:
we can represent that in the form $f_\mathrm{re}(x) = \mathrm{Re}(A e^{i(kx-\omega t)})=\frac12(A e^{i(kx-\omega t)}+A^* e^{-i(kx-\omega t)})$, because of Euler's formula;
everything that follows in our analysis works equally well for the two components $A e^{i(kx-\omega t)}$ and $A^* e^{-i(kx-\omega t)}$;
everything in our analysis is linear, so it will automatically work for sums like the sum of $A e^{i(kx-\omega t)}$ and its conjugate in $f_\mathrm{re}(x)$;
plus, everything is just really, really damn convenient if we use complex exponentials, compared to the trigonometric hoop-jumping we'd need to do if we kept the explicit cosines;
so, in fact, we're just going to pretend that the real quantity of interest is $f(x) = Ae^{i(kx-\omega t)}$, in the understanding that you obtain the physical results by taking the real part (i.e. adding the conjugate and dividing by two) once everything is done;
and, actually, we might even forget to take the real part at the end, because it's boring, but we'll trust you to keep it in the back of your mind that it's only the real part that physically matters.
This looks a bit like the authors are trying to cheat you, or at least like they are abusing the notation, but in practice it works really well, and using exponentials really does save you a lot of pain.
That said, if you are careful with your writing it's plenty possible to avoid implying that $f(x) = Ae^{i(kx-\omega t)}$ is a physical quantity, but many authors are pretty lazy and they are not as careful with those distinctions as they might.
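A tiny numerical check of that bookkeeping (NumPy, with made-up values for $k$, $\omega$ and the complex amplitude $A = |A|e^{i\varphi}$):

```python
import numpy as np

k, w = 2.0, 5.0                          # wavenumber and angular frequency (arbitrary)
A = 3.0 * np.exp(1j * 0.7)               # complex amplitude: |A| = 3, phase = 0.7
x, t = np.linspace(0.0, 10.0, 200), 0.3

f = A * np.exp(1j * (k * x - w * t))     # the "pretend" complex wave

via_real_part = f.real
via_conjugate = 0.5 * (f + np.conj(f))               # (A e^{i.} + A* e^{-i.}) / 2
via_cosine    = np.abs(A) * np.cos(k * x - w * t + np.angle(A))

assert np.allclose(via_real_part, via_conjugate)
assert np.allclose(via_real_part, via_cosine)
```

All three expressions agree pointwise, which is exactly the equivalence the bulleted argument relies on.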
(As an important caveat, though: this answer applies to quantities which must be real to make physical sense. It does not apply to quantum-mechanical wavefunctions, which must be complex-valued, and where saying $\Psi(x,t) = e^{i(kx-\omega t)}$ really does specify a complex-valued wavefunction.) | {
"domain": "physics.stackexchange",
"id": 38430,
"tags": "quantum-mechanics, waves, complex-numbers, linear-systems, plane-wave"
} |
Doubly linked list in C | Question: While I'm training to get better at C, I tried my hand at making a doubly linked list implementation.
linkedlist.h
#ifndef LINKED_LIST_H_
#define LINKED_LIST_H_
#include <stdlib.h>
typedef struct node {
void* data;
struct node *next;
struct node *prev;
} node;
typedef node* pnode;
typedef struct List {
pnode head, tail;
} *List;
List init_list();
void push_back(List, void*);
void* pop_back(List);
void push_front(List, void*);
void* pop_front(List);
void foreach(List, void (*func)(void*));
void free_list(List);
void clear(List);
int size(List);
#endif /* LINKED_LIST_H_ */
linkedlist.c
#include "linkedlist.h"
List init_list() {
List list = (List)malloc(sizeof(struct List));
list->head = NULL;
list->tail = NULL;
return list;
}
void push_back(List list, void* data) {
pnode temp = (pnode)malloc(sizeof(struct node));
temp->data = data;
temp->next = NULL;
temp->prev = NULL;
if(!(list->head)) {
list->head = temp;
list->tail = temp;
} else {
list->tail->next = temp;
temp->prev = list->tail;
list->tail = list->tail->next;
}
}
void push_front(List list, void* data) {
pnode temp = (pnode)malloc(sizeof(struct node));
temp->data = data;
temp->next = NULL;
temp->prev = NULL;
if(!(list->tail)) {
list->head = temp;
list->tail = temp;
} else {
list->head->prev = temp;
temp->next = list->head;
list->head = list->head->prev;
}
}
void* pop_front(List list) {
if(!(list->tail)) return NULL;
pnode temp = list->head;
if(list->head == list->tail) {
list->head = list->tail = NULL;
} else {
list->head = list->head->next;
list->head->prev = NULL;
}
void* data = temp->data;
free(temp);
return data;
}
void* pop_back(List list) {
if(!(list->head)) return NULL;
pnode temp = list->tail;
if(list->tail == list->head) {
list->tail = list->head = NULL;
} else {
list->tail = list->tail->prev;
list->tail->next = NULL;
}
void* data = temp->data;
free(temp);
return data;
}
void free_list(List list) {
while(list->head) {
pop_back(list);
}
free(list);
}
void clear(List list) {
while(list->head) {
pop_back(list);
}
}
void foreach(List list, void(*func)(void*)) {
pnode temp = list->head;
if(temp) {
while(temp) {
(*func)(temp->data);
temp = temp->next;
}
}
}
int size(List list) {
int i = 0;
pnode temp = list->head;
while(temp) {
i++;
temp = temp->next;
}
return i;
}
Can you please point out any flaws, errors, stupid mistakes, general improvements, which can make this code better and "C'ish"?
Answer: Welcome to Code Review
Generalities
Includes
You don't need #include <stdlib.h> in linkedlist.h, so remove it and instead, just add it in linkedlist.c right after #include linkedlist.h.
Name collisions
I won't discuss this subject further, but be aware that names like node, list, size or clear, for example, are very common and subject to collisions. Consider using more robust names, maybe with a prefix.
Usable interface
Since users look at your header file to learn how to use your functions, a good habit is to name the arguments; that makes the interface more explicit. Furthermore, adding documentation as comments about your interface helps them use it correctly.
Use the const keyword
Read this and this to know when and how.
Assertions and error-checking
You could use assertions to check preconditions, postconditions and invariants. It makes your code more explicit, and you avoid accidentally breaking your code when modifying it.
How to use assertions in C
How and When to Use C's assert() Macro
Why should I use asserts? (for C++ but still applicable)
struct node
[typedef] struct [tag] { ... } [alias];
Personally, I try to avoid using the same ''struct tag'' and ''typedef name'' if I have to typedef a struct. It's completely a matter of taste, since "modern" compilers (for more than 20 years) can handle this easily, but this way the user knows whether they are working with the tag or the alias.
But there are many others "conventions":
Some prefer "untagged aliased struct" (if there are no self-referencing members).
Where others says to never typedef a struct.
...
Consistency
No matter where you place asterisk for pointers, try to be consistent through your code:
void* data;
^ LEFT-ALIGNED
struct node *next;
^ RIGHT-ALIGNED
(here is another talk on this endless debate)
So IMHO, this is cleaner:
typedef struct node node_t;
typedef node_t* node_ptr;
struct node {
void* data;
node_ptr next;
node_ptr prev;
};
struct List
Consistency
The first letter of the struct node is lowercase, while the first of the struct List is uppercase.
Variables declaration
Try to don't declare several variables on the same line, it will avoid you many problems.
typedef struct list list_t;
typedef list_t* list_ptr;
struct list {
node_ptr head;
node_ptr tail;
};
init_list(), push_back(), push_front()
There's no need to cast the return type of malloc from void* since it's implicitly converted to any pointer type.
You can also omit the struct as you aliases it.
Optionally, you could make more concise using calloc and rid of the manual initialization of next and prev (with few costs).
Since you have a free_list function, consider renaming init_list to alloc_list
For push_* function, you should use sizeof *list->head instead of using the type. It's easier to maintain. (e.g. if you modify the node type latter, the change here is automatic)
For the push_* functions, you could return an int, an enum or a bool (via <stdbool.h>) to indicate whether the insert succeeded (since malloc can fail), or a pointer to the newly created node (and so NULL on failure). Either way, you should always handle errors when using malloc.
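Putting those points together, a push_front might look like this (a sketch only; the types follow the aliases suggested above, not necessarily the original code):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdlib.h>

/* Hypothetical types, following the aliases discussed above. */
typedef struct node node_t;
struct node {
    void   *data;
    node_t *next;
    node_t *prev;
};
typedef struct list { node_t *head; node_t *tail; } list_t;

/* calloc zeroes next/prev for us, sizeof *n stays correct if the node
 * type changes later, and the bool return reports allocation failure. */
static bool push_front(list_t *list, void *data)
{
    node_t *n = calloc(1, sizeof *n);   /* no cast needed in C */
    if (n == NULL)
        return false;                   /* caller handles the failure */
    n->data = data;
    n->next = list->head;
    if (list->head != NULL)
        list->head->prev = n;
    else
        list->tail = n;                 /* first node is also the tail */
    list->head = n;
    return true;
}
```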
pop_back(), pop_front()
I find multiple assignments on the same line less readable, but that's my opinion.
For info: on some not-widely-used platforms (PalmOS, 3BSD, and other non-ANSI-C-compatible systems), you should check for NULL before freeing to avoid crashes.
clear()
You could rewrite the cleaning algorithm to be more efficient, since you discard the return values anyway.
free_list()
Simply use clear before freeing, to avoid code duplication. | {
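For instance (a sketch with the same hypothetical types as above; it assumes clear frees every node and resets head/tail, and that the list object itself was heap-allocated):

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical types, as in the sketches above. */
typedef struct node node_t;
struct node { void *data; node_t *next; node_t *prev; };
typedef struct list { node_t *head; node_t *tail; } list_t;

/* Frees every node; the list stays valid and empty afterwards. */
static void clear(list_t *list)
{
    node_t *cur = list->head;
    while (cur != NULL) {
        node_t *next = cur->next;   /* save before freeing */
        free(cur);
        cur = next;
    }
    list->head = NULL;
    list->tail = NULL;
}

/* Reuses clear instead of duplicating the traversal, then frees the
 * list object itself. */
static void free_list(list_t *list)
{
    clear(list);
    free(list);
}
```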
"domain": "codereview.stackexchange",
"id": 32586,
"tags": "c, linked-list"
} |
Map a hierarchy of nested DTO objects to a list of business objects | Question: Consider I download my data into this data structures:
public class AThingDTO
{
int AID { get; set; }
string AName { get; set;}
Dictionary<int, BThingDTO> BThingsColl { get; set;}
}
public class BThingDTO
{
int BID { get; set; }
string BName { get; set;}
int AID { get; set; }
List<CThingDTO> CThingsColl { get; set;}
}
public class CThingDTO
{
int CID { get; set; }
string CName { get; set;}
int BID { get; set; }
int AID { get; set; }
}
Consider I want to map it into this data structures:
public class AThing
{
int AID { get; set; }
string AName { get; set;}
List<BThing> BThingsColl { get; set; } = new List<BThing>();
}
public class BThing
{
int BID { get; set; }
string BName { get; set;}
List<CThing> CThingsColl { get; set;} = new List<CThing>();
}
public class CThing
{
int CID { get; set; }
string CName { get; set;}
}
I did like this:
public List<AThing> MapFromDTO(Dictionary<int, AThingDTO> dtoThingsHierarchy)
{
var mappedThings = new List<AThing>();
foreach (var aThingDto in dtoThingsHierarchy)
{
var aThing = new AThing();
aThing.AID = aThingDto.AID;
aThing.AName = aThingDto.AName;
foreach (var bThingDto in BThingsColl.Values)
{
var bThing = new BThing();
bThing.BID = bThingDto.BID;
bThing.BName = bThingDto.BName;
foreach (var cThingDto in CThingsColl)
{
var cThing = new CThing();
cThing.CID = cThingDto.CID;
cThing.CName = cThingDto.CName;
bThing.CThingsColl.Add(cThing);
}
aThing.BThingsColl.Add(bThing);
}
mappedThings.Add(aThing);
}
return mappedThings;
}
Is there a more elegant way / more expressive way to express this mapping? Maybe with LINQ?
Answer: The first thing you can do is assign all the properties for a thing in one line using an object initializer. The following:
var cThing = new CThing();
cThing.CID = cThingDto.CID;
cThing.CName = cThingDto.CName;
would become:
var cThing = new CThing {CID = cThingDto.CID, CName = cThingDto.CName};
You can create a list before creating the parent thing so you can assign in the object initializer. I don't think your code works currently because you're not specifying the DTO in the foreach loops and not getting the values from the dictionary that's passed in. If the object initializer is too long, you can expand it to multiple lines or create and use a constructor for each class.
public List<AThing> MapFromDTO(Dictionary<int, AThingDTO> dtoThingsHierarchy)
{
var mappedThings = new List<AThing>();
foreach (var aThingDto in dtoThingsHierarchy.Values)
{
var aThing = new AThing {AID = aThingDto.AID, AName = aThingDto.AName};
foreach (var bThingDto in aThingDto.BThingsColl.Values)
{
var bThing = new BThing {BID = bThingDto.BID, BName = bThingDto.BName};
foreach (var cThingDto in bThingDto.CThingsColl)
{
var cThing = new CThing {CID = cThingDto.CID, CName = cThingDto.CName};
bThing.CThingsColl.Add(cThing);
}
aThing.BThingsColl.Add(bThing);
}
mappedThings.Add(aThing);
}
return mappedThings;
}
Here's one way to rewrite your code using LINQ's method syntax:
public List<AThing> MapFromDTO(Dictionary<int, AThingDTO> dtoThingsHierarchy)
{
return dtoThingsHierarchy.Values
.Select(a => new AThing {AID = a.AID, AName = a.AName, BThingsColl = a.BThingsColl.Values
.Select(b => new BThing{ BID = b.BID, BName = b.BName, CThingsColl = b.CThingsColl
.Select(c => new CThing {CID = c.CID, CName = c.CName}).ToList()}).ToList()}).ToList();
}
Here's LINQ's query syntax:
public List<AThing> MapFromDTO(Dictionary<int, AThingDTO> dtoThingsHierarchy)
{
return
(from a in dtoThingsHierarchy.Values
let bThings =
(from b in a.BThingsColl.Values
let cThings = b.CThingsColl.Select(c => new CThing {CID = c.CID, CName = c.CName}).ToList()
select new BThing{ BID = b.BID, BName = b.BName, CThingsColl = cThings }).ToList()
select new AThing {AID = a.AID, AName = a.AName, BThingsColl = bThings}).ToList();
}
Both LINQ methods are quite a bit shorter, but the tradeoff in this case is they're kind of tough to read. Moving logic to the constructors could help. | {
"domain": "codereview.stackexchange",
"id": 19711,
"tags": "c#, linq, dto"
} |
If I boil water at room temp using a vacuum, will the water instantly liquify if I reintroduce air to the system? | Question: If I boil water at room temperature in a vessel using a vacuum pump, will the water instantly liquefy and fall back into a liquid pool if I reintroduce air into the system?
Basically, if this is a large system, would I see a cloud form and fall? Would this happen all at once or over time?
Answer:
Let's call it a small volume of water in a large volume of vacuum. And
let's say the vacuum is created instantly, and then the system is
sealed.
Once the air is removed, the water evaporates and becomes gas, so some of the pressure returns. There's an equilibrium of liquid water, vacuum, sealed container and temperature where some, or all of the water would evaporate. With a small amount of water and a large room, all of it would probably evaporate unless the temperature was close to freezing. To calculate how much you can look at the vapor pressure of water.
As water evaporates it cools, so some of the water would turn to ice in your scenario and the temperature in your container would drop some but if we assume the temperature remains above freezing, the ice would eventually all sublimate.
Video of water boiling, then freezing in a 60 degree F vacuum chamber.
When the air returns, what would happen depends on the initial humidity of the air and if the combined humidity of the boiled water and the initial water was over 100% relative humidity - called super-saturation. If it was low relative humidity to begin with, the water-vapor might remain below 100% relative humidity and remain in the air as invisible gas. If the relative humidity is over 100%, when the air returns, the water-vapor from the boiled water should condense out of the air back to water fairly quickly. Some could get absorbed in any absorbing material and it wouldn't necessarily return to the floor; it would condense on any surface including walls and ceiling. (Like the drops on the wall and ceiling after a hot shower - which is a very similar principle of temporarily super-saturated air when the water from the shower is warmer than the air in the bathroom.)
You might get some fog or mist in the air like a hot shower too, foggy mirrors for example. Related question on why showers have this effect.
Will the water instantly liquefy and fall back into a liquid pool if I
reintroduce air into the system?
Basically, if this is a large system, would I see a cloud form and
fall? Would this happen all at once or over time?
Instantly is a funny word in physics, but no, it wouldn't be instant. Water likes to stick to other water. It would take some time (but not a lot of time) for the water to recombine. I think it would happen fairly quickly but not "instantly". The trick with seeing it is that most of the water could condense on surfaces in tiny droplets. Observing how quickly they form would take some careful looking, but I think it would be fairly fast - a few seconds perhaps, but that's a guess. That's a good question on the timing that might need to be clarified or re-asked for a better answer.
Would it fall back into the pool? Probably not, especially with a relatively small amount of water to begin with - as noted above, you'd get drops on the walls, floor and ceiling.
Would you see a cloud form? Probably not. Clouds are mostly ice not water, and require updrafts and falling air pressure. A sudden increase in air pressure would be quite different. You might see some mist, but I don't think it would resemble a cloud and it would all happen fairly quickly. I'd guess over a couple seconds. | {
"domain": "physics.stackexchange",
"id": 44640,
"tags": "thermodynamics, water, vacuum, evaporation, condensation"
} |
Gaia: What is the difference between CCDs used for astrometry, photometry, and spectroscopy? | Question: My knowledge of CCDs is that these are sensors which collect photoelectrons. That's about it.
What is a difference between CCDs used for astrometry, spectroscopy, and photometry? As an example, each type of CCD is currently used by Gaia.
Recommendations of where else to read about this at a detailed engineering level is appreciated. Thanks.
Answer: CCDs are optimized for a certain wavelength range, and for a certain expected signal level. In astronomy, we tend to be short of light, so here we almost always want them to be as sensitive as possible (an exception may be observations of the Sun, which I don't know much about). But for instance, the Nordic Optical Telescope has a CCD which is optimized for blue wavelengths, but has quite a lot of fringing in the near-infrared. And further out in the IR, CCDs aren't even used; instead one uses devices which are just called "detectors".
However, whether the CCD is used for imaging (photometry and astrometry) or spectroscopy does not have anything to do with the CCD; it's just a matter of inserting a grism or not. I'm not really into the instruments of Gaia, but I assume that differences in the CCDs are due to different wavelength regions being probed. There may be a difference in how its sub-parts are positioned (it's actually an array of CCDs; for instance, for spectroscopy you don't in principle need a large field of view, and can do with a long array rather than a more square one), but the design of the individual CCDs is the same.
"domain": "astronomy.stackexchange",
"id": 1125,
"tags": "spectroscopy, astrometry, photometry"
} |
Why is ribosomal RNA difficult to remove even with Poly(A) selection? | Question: In this answer (actually in a comment), it is stated that:
As you've noticed from your own analysis, the ribosomal genes have quite variable expression across cells. They're expressed everywhere, and quite difficult to completely remove from a sample (even with polyA selection), with success of removal depending on things like the amount of mechanical damage that the RNA has been exposed to.
Which are the most relevant issues that make Poly(A) selection fail? Looking online, Poly(A) is suggested often but I didn't find an explanation for the different ways in which it can fail.
Answer: We've found ribosomal RNA to be less of a problem with sequencing that depends on polyA, which suggests the issue might be in the library preparation, rather than the selection.
Many polyA RNA library preparation methods involve amplification, rather than selection, which means that existing transcripts that are present in very high abundance (such as rRNA) will still show up in the mixture.
But polyA-tailed RNA isn't the only RNA present in a sample. For a comprehensive look at RNA, the whole-RNA sample preparation methods can't use any form of selection to identify target molecules. In this case, it's typically recommended to use a form of ribosomal RNA removal, where a probe to ribosomal RNA is attached to magnetic beads (or a column, or similar) and used to fish out ribosomal RNA, leaving the remainder behind. Unfortunately those probes don't work perfectly, either due to the ribosomal RNA being damaged at the probe site, or the probe not matching the ribosomal RNA molecule, or the RNA just not getting into the right physical state to bind with the probe. All these things mean incomplete removal of ribosomal RNA, ending up with a situation where the highly-abundant ribosomal RNA still pokes through in the reads.
"domain": "bioinformatics.stackexchange",
"id": 433,
"tags": "rna-seq, scrnaseq, rna, ribosomal"
} |
How to reformulate my problem as a mixed-integer quadratic problem | Question: I have an unknown $n$-dimensional vector $x$ whose analytical expression depends on the following sum $x = z + Ba$ where the vector $z$ and the matrix $B\in \mathbb{R}^{n\times s}$ are given. So the $s$-dimensional vector $a$ is to be computed to find $x$.
The only assumption that we have is $x=0$ when we project $x$ onto the space spanned by $s$ different rows (that we don’t know their indices) of the matrix $B$ which has $n$ rows. To do this projection we can use $P_s\in \mathbb{R}^{n\times n}$ which is $1$ on the diagonal entries that correspond to the $s$ selected rows of $B$ and $0$ elsewhere. Hence, $P_s x= P_s z + P_s Ba=0 \implies a=-(P_sB)^{-1}P_sz$.
The main issue is that we don’t know the positions of these $s$ rows, so the problem is combinatorial and we need to go through all possible $n\choose s$ projections to find the exact $x$ which corresponds to the least cost $f(x)=\|y-Ax\|_2$ where $\|v\|_2=\big(\sum_iv_i^2\big)^{1/2}$, $y\in \mathbb{R}^{m\times 1}$ and the matrix $A\in \mathbb{R}^{m\times n}$ are given.
So my question is how I can reformulate my problem as a mixed-integer quadratic programming to go through all possible $n\choose s$ submatrices of $B$ formed by the $s$ selected rows and finally find the set of rows which corresponds to the least $f(x)$.
Answer: So it sounds like the problem is the following:
Given $A,B,y,z$, I want to find the vector $a \in \mathbb{R}^s$ that minimizes $||y-Az-ABa||_2$, subject to the constraint that there must exist some matrix $P_s$ with exactly $s$ 1's on its diagonal and 0's everywhere else, satisfying $P_s (z+Ba)=0$.
Mixed-integer QCQP
This can be formulated as a mixed-integer quadratically constrained quadratic programming problem, as follows.
We have $s$ unknowns $a_1,\dots,a_s$, so $a=(a_1,\dots,a_s)$. Also introduce $n$ integer zero-or-one unknowns $p_1,\dots,p_n$, with the intention that $P_s$ will be the diagonal matrix whose diagonal is $p_1,\dots,p_n$. Add the constraint that $p_1+\dots + p_n=s$, to ensure that exactly $s$ of the entries on the diagonal are $1$. Note that each entry of $z+Ba$ is a linear expression in the unknowns, so the constraint $P_s (z+Ba)=0$ can be expressed as $n$ quadratic constraints, each of the form $p_i \cdot e_i=0$ where $e_i$ is linear in $a_1,\dots,a_s$.
Now note that each entry of $y-Az-ABa$ is a linear expression in the unknowns $a_1,\dots,a_s$, so the objective function is a quadratic function of the unknowns.
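Collecting the pieces, the whole mixed-integer program can be written out explicitly (this is just the formulation above, with $a \in \mathbb{R}^s$ as in the question):

```latex
\begin{aligned}
\min_{a \in \mathbb{R}^{s},\; p \in \{0,1\}^{n}} \quad & \|y - Az - ABa\|_2^{2} \\
\text{subject to} \quad & \sum_{i=1}^{n} p_i = s, \\
& p_i \,(z + Ba)_i = 0, \qquad i = 1, \dots, n.
\end{aligned}
```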
Unfortunately, mixed-integer QCQP is a pretty challenging form of non-linear programming, so I don't know how well this will work in practice. You might have to try it out with some solvers.
This is a pretty brain-dead approach. There may be smarter approaches that are more effective than this, by taking advantage of the structure of your problem. | {
"domain": "cs.stackexchange",
"id": 2494,
"tags": "optimization, combinatorics, parallel-computing, integer-programming"
} |
Stewart platform | Question:
Has there been any work to support 6-DOF Stewart–Gough platforms or similar parallel geometries?
Originally posted by hojalot on ROS Answers with karma: 1 on 2013-12-19
Post score: 0
Answer:
Hello,
See here:
http://answers.ros.org/question/9050/using-ros-for-a-delta-robot/
Originally posted by David Galdeano with karma: 357 on 2013-12-19
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by hojalot on 2013-12-19:
This discussion was a few years ago, and I got the sense that at that time support for parallel mechanisms required workarounds. Have any of the proposed workaround been incorporated in to the base ros or are there any specific examples?
Comment by David Galdeano on 2013-12-19:
Kinematic loops created in parallel robots contain physical constraints which must be solved at a high rate; maybe ROS is not adapted for these constraints and can't separate the command algorithms from the motor control.
You can write a small controller (IK) for the robot and take advantage of the ROS framework by using its own communication mechanisms.
"domain": "robotics.stackexchange",
"id": 16499,
"tags": "ros"
} |
What is the real position of the tidal bulge? | Question: The wikipage on "tidal accelaeration" has this picture of the tidal bulge:
but says that:
The average tidal bulge is synchronized with the Moon's orbit, and
Earth rotates under this tidal bulge in just over a day. However,
Earth's rotation drags the position of the tidal bulge ahead of the
position directly under the Moon. As a consequence, there exists a
substantial amount of mass in the bulge that is offset from the line
through the centers of Earth and the Moon.
On the other hand, this academic article
states the opposite:
[as we move with respect to both tidal bulge and moon, the moon
crosses our meridian before we experience the highest tide.] How
early? Some books show misleading diagrams with the symmetry axis of
the tidal bulges making an angle of 30° or more with the moon. In
fact, the angle is only 3°, so the tides are late by about 24(3/360)60
= 12 minutes
Can you say what is the real position of the bulge? it is surely a hard fact not open to interpretations. Do you have access to sites that say where is now the moon and where is the tidal bulge?
According to the basic laws of conservation of momentum and energy, a mass moving to a greater radius should be slowed down, and if we add friction the lag of 12 minutes or more makes sense. If the highest point of the bulge is forward with respect to the vertical of the moon, can you explain what stronger force pushes the water in the direction of the spin?
I found this site that gives the position of sun and moon, but doesn't show tides
This animation, on the other hand, shows that there is no bulge
The red areas experience a rise over 1 m, the blue one a depression of over 1 m:
Answer: There is no great contradiction. The offset of the tidal bulge is about 3 degrees. It is exaggerated in diagrams for clarity. The diagram is correct, but not to scale.
This causes the tides to be slightly late. Imagine a person standing on the Earth of the diagram, with the moon directly overhead. The tidal bulge is on their left. The rotation of the Earth will take them towards the left (the moon is also orbiting but its motion is much slower), so a little later (12 min later) they will reach the maximum of the tide. The maximum is delayed by about 12 min.
Actual flows of water around the coast are driven by this tidal bulge, but are complex effects of local topography. The actual flows of water are highly non-linear, including multiple locations at which there is no tide. | {
"domain": "astronomy.stackexchange",
"id": 1727,
"tags": "tidal-forces"
} |
Reflection At Speed of Light | Question: I have looked online to no avail. There is two competing answers and I am curious to know which one is right.
Someone asked me this question. If you are traveling at the speed of light can you see your reflection in a mirror in front of you?
My answer to the question is no, I would figure that in order for that to happen the light reflecting off you that would appear in the mirror must travel faster than the speed of light to actually reach the mirror (which we all know is impossible).
He says the answer is yes, that it is all relative to the current frame of reference.
Can anyone validate the correct answer with possible references?
Answer: This question cannot really be answered because you cannot travel at the speed of light. See Accelerating particles to speeds infinitesimally close to the speed of light?
If you were massless, you would always travel at the speed of light. However, in that case you would not perceive the passing of time. In relativity, the time that passes for an observer depends on the proper time. The proper time for a light-like trajectory is always zero, so photons themselves do not experience the passage of time.
If you travel very near to the speed of light - perhaps 99.9% light speed relative to Earth, you would still be able to view yourself normally in a mirror you carried with you. That is ensured by the principle of relativity, which states that all physical processes work the same way at any constant speed. | {
"domain": "physics.stackexchange",
"id": 59081,
"tags": "special-relativity, speed-of-light, reflection"
} |
Dirichlet distribution: posteriors and priors of distribution | Question: Let $|\psi\rangle \in \mathbb{C}^{2^n}$ be a random quantum state such that $ |\langle z| \psi \rangle|^{2} $ is distributed according to a $\text{Dirichlet}(1, 1, \ldots, 1)$ distribution, for $z \in \{0, 1\}^{n}$.
Let $z_{1}, z_{2}, \ldots, z_{k}$ be $k$ samples from this distribution (not all unique). Choose a $z^{*}$ that appears most frequently.
I am trying to prove:
$$\underset{|\psi\rangle}{\mathbb{E}}\big[|\langle z^{*}| \psi \rangle|^{2}\big] = \underset{|\psi\rangle}{\mathbb{E}}\bigg[\underset{m}{\mathbb{E}}\big[|\langle z^{*}| \psi \rangle|^{2} ~| ~m\big]\bigg] = \mathbb{E}\bigg[\frac{1+m}{2^{n}+k}\bigg],$$
where $m$ is a random variable that denotes the frequency of $z^{*}$.
I am also trying to prove that for the collection $z_{1}, z_{2}, \ldots, z_{k}$
$$\sum_{i \neq j}\mathrm{Pr}[z_{i} = z_{j}] = {k \choose 2}\frac{2}{2^{n} + 1}. $$
Basically, I am trying to trace the steps of Lemma $13$ (page 10) of this quantum paper. I realize that my questions have to deal with posteriors and priors of the chosen distribution (though I do not understand how they have been explicitly derived or used here. An explicit derivation will be helpful). Is there any resource where I can find quick formulas for calculating these for other distributions, like the Binomial distribution?
Answer: Let $ p_i = |\langle i | \psi \rangle|^2 $, with $ (p_1, \dots, p_{2^n}) \sim \mathrm{Dir}(a_1, \dots, a_{2^n}) = \mathrm{Dir}(1, \dots, 1) $, and let $ m_i $ be the number of occurrences of outcome $ |i\rangle $ among the samples $z_1, \dots, z_k$.
Since the Dirichlet distribution is the conjugate prior of the categorical (see here), meaning
$ \mathbf{p} \mid z_1, \dots, z_k \sim \mathrm{Dir}(m_1 + 1, \dots, m_{2^n} + 1) $
and using the formula for the mean value of Dirichlet we get
$ \mathbb{E}[p_{z*} | m] = \frac{m+1}{\sum_{j=1}^{2^n} (m_j + 1)} = \frac{m+1}{2^n + k} $
For the second claim, take $ i \neq j $ and compute
\begin{align*}
\mathbb{P}[z_i = z_j]
&= \int \mathbb{P}[z_i = z_j \mid (p_1, \dots, p_{2^n})] \cdot f(p_1, \dots, p_{2^n}) \\
&= \int \sum_{k=1}^{2^n} p_k^2 \cdot f(p_1, \dots, p_{2^n}) \\
&= \sum_{k=1}^{2^n} \mathbb{E}[p_k^2] \\
&= \sum_{k=1}^{2^n} \frac{2}{2^n(2^n + 1)} = \frac{2}{2^n + 1}
\end{align*}
(the last equality holds since $ \mathbf{x} \sim \mathrm{Dir}(\mathbf{a}) \implies \mathbb{E}[x_i^2] = \frac{a_i(a_i + 1)}{a_0(a_0 + 1)} $, with $ a_0 = \sum_{i=1}^{N} a_i $.)
This means that $ \sum_{i \neq j} \mathbb{P}[z_i = z_j] = {k \choose 2} \frac{2}{2^n + 1} $. | {
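For completeness, the second-moment formula used in the last step follows from the fact that the marginal of a Dirichlet is a Beta distribution, $ x_i \sim \mathrm{Beta}(a_i, a_0 - a_i) $, so that

```latex
\mathbb{E}[x_i^2] = \operatorname{Var}(x_i) + \mathbb{E}[x_i]^2
= \frac{a_i (a_0 - a_i)}{a_0^2 (a_0 + 1)} + \frac{a_i^2}{a_0^2}
= \frac{a_i (a_i + 1)}{a_0 (a_0 + 1)}.
```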
"domain": "quantumcomputing.stackexchange",
"id": 1985,
"tags": "complexity-theory, quantum-advantage, probability, linear-algebra"
} |
Telescopes to avoid as a beginner? | Question: I've heard people talk about "department store scopes" or "trash scopes". How do I know what to avoid in a beginner scope? How can I know that I'm not getting something we will be more frustrated with than excited about?
Answer: Avoid small refractors and reflectors on skinny tripod mounts. Don't buy from a discount or department store. Don't buy from eBay, CraigsList, or Amazon.
Here are a few web pages with good information on beginner's telescopes:
http://www.gaherty.ca/tme/TME0702_Buying_a_Telescope.pdf
http://www.scopereviews.com/begin.html
http://observers.org/beginner/j.r.f.beginner.html
For more advanced information, read Phil Harrington's Star Ware, 4th edition (Wiley).
You'll get the greatest value for your money with a Newtonian reflector on a Dobsonian mount, such as these:
http://www.telescope.com/Telescopes/Dobsonian-Telescopes/pc/1/12.uts
http://www.skywatchertelescope.net/swtinc/product.php?class1=1&class2=106
Buy from a store which specializes in telescopes and astronomy, either locally or online; don't buy from department stores, discount stores or eBay as mostly what they sell is junk. Find your local astronomy club and try out different telescopes at one of their star parties:
http://www.skyandtelescope.com/community/organizations
I strongly recommend that beginners steer clear of astrophotography until they have learned their way around the sky. Astrophotography is by far the most expensive and difficult area of amateur astronomy.
Many people who buy telescopes have no idea how to find interesting things to observe. A good introduction to finding things is NightWatch by Terence Dickinson (Firefly). A more advanced book is Star Watch by Phil Harrington (Wiley). | {
"domain": "physics.stackexchange",
"id": 2980,
"tags": "astronomy, telescopes"
} |
Create a program that will take user input, reverse it, and then check if it is a palindrome | Question: Assuming there is no punctuation. It should remove spaces and bring all characters down to lowercase.
My aim was to write code that directly represented my idea of how I would have done things.
This is relevant because there were many ways I could have implemented the cleanString() method and what you see below is not the first way. This is the way that I actually found logical and understand the code for. The other way was to use #include <algorithm> and transform which did not make sense to me despite working.
Here is my solution:
#include <iostream>
#include <vector>
#include <locale>
std::vector <char> stringToVec(std::string inputString) {
std::vector <char> letters;
for (int a = 0; a < inputString.size(); a++) {
letters.push_back(inputString.at(a));
}
return letters;
}
std::string vecToString(std::vector <char> inputVec) {
std::string reversedString("");
for (int a = 0; a < inputVec.size(); a++) {
reversedString.push_back(inputVec.at(a));
}
return reversedString;
}
std::string cleanString(std::string inputString) {
std::locale loc;
for (std::string::size_type i = 0; i < inputString.length(); i++) {
inputString[i] = std::tolower(inputString[i], loc);
}
for (std::string::size_type i = 0; i < inputString.length(); i++) {
if (inputString[i] == ' ') {
inputString.erase(i, 1);
}
}
return inputString;
}
std::string checkPalindrome(std::string reversedString, std::string originalString) {
if (cleanString(originalString) == cleanString(reversedString)) {
return "Yes";
}
return "No";
}
std::vector <char> reverseVec(std::vector <char> inputVec) {
std::vector <char> reversedReturn;
for (int a = inputVec.size() - 1; a >= 0; a--) {
reversedReturn.push_back(inputVec.at(a));
}
return reversedReturn;
}
int main() {
std::cout << "Enter the string you would like reversed: ";
std::string userInput("");
std::getline(std::cin, userInput, '\n');
std::string reversedInput(vecToString(reverseVec(stringToVec(userInput))));
std::cout << "Reversed: " << reversedInput << std::endl;
std::cout << "Palindrome: " << checkPalindrome(reversedInput, userInput) << std::endl;
return 0;
}
Answer: I think you have a classic case of not knowing the standard.
What you are writing is generally already available via either a normal constructor or a standard algorithm. In a few cases I would have just implemented a minor wrapper class that could be used by these facilities.
Let's have a look at a few and see how you could have utilized the standard:
std::vector <char> stringToVec(std::string inputString) {
std::vector <char> letters;
for (int a = 0; a < inputString.size(); a++) {
letters.push_back(inputString.at(a));
}
return letters;
}
The first thing to point out is that you are passing the parameter by value (std::string inputString). This means the argument is copied into the function, which can be expensive (especially with strings and vectors). Since you do not mutate it, it is much better to pass by const reference.
std::vector <char> stringToVec(std::string const& inputString) {
// ^^^^^^^
std::vector <char> letters;
for (int a = 0; a < inputString.size(); a++) {
letters.push_back(inputString.at(a));
}
return letters;
}
This means you can read the original parameter but cannot modify it.
The at() method is a bounds-checked accessor. If you know that your index is going to be in range then prefer operator[]. In the case above you always know that a is in the correct range, because you actively check it with a < inputString.size().
std::vector <char> stringToVec(std::string const& inputString) {
std::vector <char> letters;
for (int a = 0; a < inputString.size(); a++) {
letters.push_back(inputString[a]); // Don't need to valid range
// When your loop is already
// guaranteeing the range.
}
return letters;
}
Most algorithms we use tend to use iterators, so get used to using them. So I would re-write the above loop in terms of iterators:
std::vector <char> stringToVec(std::string const& inputString) {
std::vector <char> letters;
for (auto loop = std::begin(inputString); loop != std::end(inputString); ++loop) {
letters.push_back(*loop);
}
return letters;
}
This is such a common pattern that in C++11 they introduced the range based for expression. Basically this is a for() loop that works with an object that can accept std::begin() and std::end() being called on it. It is basically syntactic sugar to make the above loop simpler to write:
std::vector <char> stringToVec(std::string const& inputString) {
std::vector <char> letters;
for (auto letter: inputString) {
letters.push_back(letter);
}
return letters;
}
But if we look at std::vector<> we see that it also has a constructor that accepts two iterators so you can simply use this to construct the vector:
std::vector <char> stringToVec(std::string const& inputString) {
std::vector <char> letters(std::begin(inputString), std::end(inputString));
return letters;
}
Or more simply:
std::vector <char> stringToVec(std::string const& inputString) {
return std::vector<char>(std::begin(inputString), std::end(inputString));
}
We can go through the same processes in reverse of the next function vecToString().
std::string vecToString(std::vector<char> const& inputVec) {
return std::string(std::begin(inputVec), std::end(inputVec));
}
Just a little style niggle: the separation of the template type from the class looks strange.
std::vector <char>
^^^^^ That space just looks strange.
Lets move to cleaning the string:
std::string cleanString(std::string inputString) {
std::locale loc;
for (std::string::size_type i = 0; i < inputString.length(); i++) {
inputString[i] = std::tolower(inputString[i], loc);
}
for (std::string::size_type i = 0; i < inputString.length(); i++) {
if (inputString[i] == ' ') {
inputString.erase(i, 1);
}
}
return inputString;
}
Again looking at your loops. I would convert them to using iterators and then range based for. But these are also classic algorithms supported by the standard.
Converting to lower case:
for (std::string::size_type i = 0; i < inputString.length(); i++) {
inputString[i] = std::tolower(inputString[i], loc);
}
This is a standard transform:
std::transform(std::begin(inputString), std::end(inputString), // Source
std::begin(inputString), // Destination (can be source) as long as it it is already large enough.
[&loc](unsigned char c){ return std::tolower(c, loc); }
);
Removing a particular character:
for (std::string::size_type i = 0; i < inputString.length(); i++) {
if (inputString[i] == ' ') {
inputString.erase(i, 1);
}
}
This is a standard remove/erase
auto newEnd = std::remove(std::begin(inputString), std::end(inputString), ' ');
inputString.erase(newEnd, std::end(inputString));
This is so common we usually write it as a single line:
inputString.erase(std::remove(std::begin(inputString), std::end(inputString), ' '), std::end(inputString));
Commonly referred to as the erase/remove pattern.
Simple tests can sometimes be replaced by the ternary operator. BUT be careful: overused or used in bad places, this technique can make the code harder to read (so use it sparingly). In this instance, though, I think it makes the code easier to read:
if (cleanString(originalString) == cleanString(reversedString)) {
return "Yes";
}
return "No";
You can simplify this to:
return (cleanString(originalString) == cleanString(reversedString))
? "Yes"
: "No";
Reversing a container:
std::vector <char> reverseVec(std::vector <char> inputVec) {
std::vector <char> reversedReturn;
for (int a = inputVec.size() - 1; a >= 0; a--) {
reversedReturn.push_back(inputVec.at(a));
}
return reversedReturn;
}
Like above you can improve your code by using iterators.
std::vector <char> reverseVec(std::vector<char> const& inputVec) {
std::vector <char> reversedReturn;
for (auto loop = std::rbegin(inputVec); loop != std::rend(inputVec); ++loop) {
reversedReturn.push_back(*loop);
}
return reversedReturn;
}
The rbegin() and rend() functions provide reverse iterators. They go in the opposite direction to your standard iterators but are otherwise basically the same.
We can expand this (like above to just constructing the container with the iterators).
std::vector <char> reverseVec(std::vector<char> const& inputVec) {
return std::vector<char>(std::rbegin(inputVec), std::rend(inputVec));
}
There is also a standard algorithm, std::reverse(), that reverses a container in place.
Using the above:
int main()
{
std::cout << "Enter the string you would like reversed: ";
std::string userInput;
std::getline(std::cin, userInput, '\n');
auto newEnd = std::remove(std::begin(userInput), std::end(userInput), ' ');
userInput.erase(newEnd, std::end(userInput));
std::transform(std::begin(userInput), std::end(userInput),
std::begin(userInput),
[](unsigned char c){ return std::tolower(c); });
std::string reversedInput(std::rbegin(userInput), std::rend(userInput));
std::cout << "Reversed: " << reversedInput << "\n"
<< "Palindrome: "
<< ((userInput == reversedInput) ? "Yes" : "No")
<< "\n";
} | {
"domain": "codereview.stackexchange",
"id": 26735,
"tags": "c++, beginner"
} |
Carroll's derivation of the geodesic equations | Question: In Carroll's derivation of the geodesic equations (page 69, http://preposterousuniverse.com/grnotes/grnotes-three.pdf), he starts with $$\tau=\int\left(-g_{\mu\nu}\frac{dx^{\mu}}{d\lambda}\frac{dx^{\nu}}{d\lambda}\right)^{1/2}d\lambda$$
and arrives at$$\delta\tau=\int\left(-g_{\mu\nu}\frac{dx^{\mu}}{d\lambda}\frac{dx^{\nu}}{d\lambda}\right)^{-1/2}\left(-\frac{1}{2}\partial_{\sigma}g_{\mu\nu}\frac{dx^{\mu}}{d\lambda}\frac{dx^{\nu}}{d\lambda}\delta x^{\sigma}-g_{\mu\nu}\frac{dx^{\mu}}{d\lambda}\frac{d\left(\delta x^{\nu}\right)}{d\lambda}\right)d\lambda.$$
He then changes the curve parametrization from arbitrary $\lambda$
to proper time $\tau$
by plugging $$d\lambda=\left(-g_{\mu\nu}\frac{dx^{\mu}}{d\lambda}\frac{dx^{\nu}}{d\lambda}\right)^{-1/2}d\tau$$
into the above to obtain
$$\delta\tau=\int\left(-\frac{1}{2}\partial_{\sigma}g_{\mu\nu}\frac{dx^{\mu}}{d\tau}\frac{dx^{\nu}}{d\tau}\delta x^{\sigma}-g_{\mu\nu}\frac{dx^{\mu}}{d\tau}\frac{d\left(\delta x^{\nu}\right)}{d\tau}\right)d\tau.$$
I cannot see how that substitution works. I've been told it uses the chain rule, but I just can't see it. Can anyone help? Thanks.
Answer: Basically think of it this way. Take the original equation
$$\tau = \int f(x) \,\mathrm{d}\lambda \tag{1}$$
which in differential form becomes
$$d\tau = f(x) \,\mathrm{d}\lambda \tag{2}$$
after a little rearranging gives
$$\frac{d\lambda}{d\tau} = (f(x))^{-1} \tag{3}$$
with the function $f(x)$ in this case being equal to
$$f(x) = \left(-g_{\mu\nu}\frac{dx^\mu}{d\lambda}\frac{dx^\nu}{d\lambda}\right)^{1/2} \tag{4}$$
as was demonstrated
EDIT:
Using eq (3)
$$\frac{d\lambda}{d\tau} = (f(x))^{-1} = \left(-g_{\mu\nu}\frac{dx^\mu}{d\lambda}\frac{dx^\nu}{d\lambda}\right)^{-1/2}$$
Substitute into
$$\delta\tau = \int\left(-g_{\mu\nu}\frac{dx^\mu}{d\lambda}\frac{dx^\nu}{d\lambda}\right)^{-1/2}\left(-\frac{1}{2}g_{\mu\nu,\sigma}\frac{dx^\mu}{d\lambda}\frac{dx^\nu}{d\lambda}\,\delta x^{\sigma}-g_{\mu\nu}\frac{dx^\mu}{d\lambda}\frac{d(\delta x^\nu)}{d\lambda}\right)d\lambda$$
gives
$$\delta\tau = \int\frac{d\lambda}{d\tau}\left(-\frac{1}{2}g_{\mu\nu,\sigma}\frac{dx^\mu}{d\lambda}\frac{dx^\nu}{d\lambda}\,\delta x^{\sigma}-g_{\mu\nu}\frac{dx^\mu}{d\lambda}\frac{d(\delta x^\nu)}{d\lambda}\right)d\lambda$$
$$\delta\tau = \int\frac{d\lambda}{d\tau}\left(-\frac{1}{2}g_{\mu\nu,\sigma}\frac{dx^\mu}{d\tau}\frac{dx^\nu}{d\tau}\,\delta x^{\sigma}-g_{\mu\nu}\frac{dx^\mu}{d\tau}\frac{d(\delta x^\nu)}{d\tau}\right)\frac{d\tau}{d\lambda}\frac{d\tau}{d\lambda}\,d\lambda$$
Use the chain rule to get
$$\delta\tau = \int\left(-\frac{1}{2}g_{\mu\nu,\sigma}\frac{dx^\mu}{d\tau}\frac{dx^\nu}{d\tau}\,\delta x^{\sigma}-g_{\mu\nu}\frac{dx^\mu}{d\tau}\frac{d(\delta x^\nu)}{d\tau}\right)d\tau$$
Here I use , to represent the partial derivative with respect to $x^\sigma$. | {
"domain": "physics.stackexchange",
"id": 22051,
"tags": "homework-and-exercises, general-relativity, differential-geometry, variational-principle, geodesics"
} |
Implement Gyrosensor in ROS | Question:
Hello everybody,
I would like to use the gyro sensor MPU 6000, which is used on the Robotino. Is it possible to integrate this gyro sensor in ROS?
If so, is there a manual?
Regards,
Markus
Originally posted by MarkusHHN on ROS Answers with karma: 54 on 2020-06-20
Post score: 0
Answer:
I use the following API from the Robotino: here
I also wrote my own header and .cpp file and included it in the robotino_node:
#include "GyroROS.h"
GyroROS::GyroROS()
{
gyroscope_pub_ = nh_.advertise<sensor_msgs::Imu>("gyroscope",1, true);
}
GyroROS::~GyroROS()
{
gyroscope_pub_.shutdown();
}
void GyroROS::setTimeStamp(ros::Time stamp)
{
stamp_ = stamp;
}
void GyroROS::gyroscopeEvent(float angle, float anglevel)
{
XAngle = 0.0;
YAngle = 0.0;
ZAngle = angle;
quat = Eigen::AngleAxisf(XAngle, Eigen::Vector3f::UnitX())
* Eigen::AngleAxisf(YAngle, Eigen::Vector3f::UnitY())
* Eigen::AngleAxisf(ZAngle, Eigen::Vector3f::UnitZ());
gyro_msg_.orientation.x = quat.x();
gyro_msg_.orientation.y = quat.y();
gyro_msg_.orientation.z = quat.z();
gyro_msg_.orientation.w = quat.w();
gyro_msg_.angular_velocity.z = -anglevel;
gyro_msg_.header.stamp = stamp_;
gyro_msg_.header.frame_id = "base_link";
// Covariance Matrix in Orientation
gyro_msg_.orientation_covariance[0] = 0.0;
gyro_msg_.orientation_covariance[4] = 0.0;
gyro_msg_.orientation_covariance[8] = 0.0;
// Covariance Matrix in Angular Velocity
gyro_msg_.angular_velocity_covariance[0] = 0.0;
gyro_msg_.angular_velocity_covariance[4] = 0.0;
gyro_msg_.angular_velocity_covariance[8] = 0.0;
gyroscope_pub_.publish(gyro_msg_);
}
Originally posted by MarkusHHN with karma: 54 on 2020-07-20
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 35170,
"tags": "ros, gyro, robotino, ros-kinetic"
} |
Baryons annihilation | Question: I was wondering if there is a way of calculate the annihilation cross section for two baryons, say $p\bar p\to\pi\pi$ or $p\bar p\to\gamma\gamma$. The problem here is that we cannot use the usual chiral theory because the energy transfer is of the same order of the cutoff $\Lambda\sim 1 \text{Gev}$.
In almost all the articles I have found, those cross sections are derived from experimental observation. Do you know possible ways to derive them theoretically? Or can you suggest an article or review about that?
Answer: This is non-perturbative QCD, so at low energy you cannot do much more than phenomenology or use experimental data. Both of the processes are somewhat rare. There is a lot of energy available, and the mean number of produced pions is 5 or 6, even at threshold (data like this can be found in the Particle Data Group tables).
At high energy these processes are exclusive reactions that can be studied using perturbative QCD. This is easier for $\bar{p}p\to\gamma\gamma$, which can be expressed (I think) in terms of a single non-perturbative parameter (so the energy dependence is predicted, and the parameter can be related to other processes), see http://journals.aps.org/prd/abstract/10.1103/PhysRevD.24.1808. Work that extends this formalism to somewhat lower energies can be found in http://arxiv.org/abs/hep-ph/0206288. | {
"domain": "physics.stackexchange",
"id": 26998,
"tags": "quantum-field-theory, nuclear-physics, resource-recommendations, scattering-cross-section, baryons"
} |
Where exactly is 'Colloid' with regards to synthesis of thyroid hormones? | Question: I've researched colloid and it seems to be a substance of microfibres and thin films in which thyroid hormones may be synthesised, but I was wondering where this exactly is... I think it could be in the inter membrane space but I have no evidence of that, or is it just surrounding all cells?
Answer: Colloid is found in thyroid follicles and is composed of the protein from which thyroid hormones are made.
This link below states: "Follicles are filled with colloid, a proteinaceous depot of thyroid hormone precursor."
That precursor is called thyroglobulin.
The image below shows micrographs of the thyroid.
In the image, the colloid is surrounded by thyroid epithelial cells, which collectively form a follicle. The lower mag on the left shows many follicles filled with colloid. The image on the right shows one follicle. The white areas inside the follicle near the epithelial cells show uptake of colloid by the epithelial cells, which are actively synthesizing thyroid hormone.
http://arbl.cvmbs.colostate.edu/hbooks/pathphys/endocrine/thyroid/anatomy.html | {
"domain": "biology.stackexchange",
"id": 4908,
"tags": "cell-biology, endocrinology"
} |
Where can I find pre-trained language models in English and German? | Question: Where can I find (more) pre-trained language models? I am especially interested in neural network-based models for English and German.
I am aware only of Language Model on One Billion Word Benchmark and TF-LM: TensorFlow-based Language Modeling Toolkit.
I am surprised not to find a greater wealth of models for different frameworks and languages.
Answer: Of course now there has been a huge development:
Huggingface published pytorch-transformers, a library for the so successful Transformer models (BERT and its variants, GPT-2, XLNet, etc.), including many pretrained (mostly English or multilingual) models (docs here). It also includes one German BERT model. SpaCy offers a convenient wrapper (blog post).
Update: Now, Salesforce published the English model CTRL, which allows for use of "control codes" that influence the style, genre and content of the generated text.
For completeness, here is the old, now less relevant version of my answer:
Since I posed the question, I found this pretrained German language model:
https://lernapparat.de/german-lm/
It is an instance of a 3-layer "averaged stochastic gradient descent weight-dropped" LSTM which was implemented based on an implementation by Salesforce.
"domain": "ai.stackexchange",
"id": 677,
"tags": "neural-networks, natural-language-processing, bert, gpt, language-model"
} |
Reporting the highest satisfied poker hand in F# | Question: I'm learning F# by doing various small projects. One of them is a problem where the program reads Poker hands and rates them. It's Texas Hold'Em, so for each player it tries every five card selection from seven cards.
When it's time for scoring, I have made a set of functions that each evaluate a hand and produce a Score option indicating whether that hand matches or not:
type Score =
{ Points: int
Desc: string }
val tryStraightFlush: PokerHand -> Score option
val tryFourOfAKind: PokerHand -> Score option
// more functions ..
val getHighCard: PokerHand -> Score option
type PokerHand =
class
new : cards:Card list -> PokerHand
member Cards : Card list
member HighCard : Card option
end
But now I have a problem, because I need to use the options that these functions produce, so I cannot do a match like this:
//Non-working code.
//Something roughly similar to this would be nice.
//But I need to use the values from the "try"-functions
//and they get thrown away here:
let scoreCards (hand: PokerHand) =
match hand with
| r when (tryStraightFlush hand) <> None -> //prod. Score
| r when (tryFourOfAKind hand) <> None -> //prod. Score
// more code
Instead my code ends up looking like this, and I have to say I don't like this deep nesting. How can I have a small function for testing each type of configuration while avoiding deeply nested if-expressions?
let scoreCards (hand: PokerHand) =
// Too much nesting!
let r = tryStraightFlush hand
if r <> None then
r.Value
else
let r = tryFourOfAKind hand
if r <> None then
r.Value
else
let r = tryFullHouse hand
if r <> None then
r.Value
else
let r = tryFlush hand
if r <> None then
r.Value
else
// nesting continues..
Answer: The solution here is to use an active pattern, which enables you to create a custom matching function.
You can even reuse your existing functions.
It would look like
let (|StraightFlush|_|) hand = tryStraightFlush hand
then you can use it like this
match hand with
| StraightFlush(score) -> ...
| .... | {
"domain": "codereview.stackexchange",
"id": 22283,
"tags": "beginner, functional-programming, f#, playing-cards"
} |
Thermodynamics - Partial Derivatives | Question: I just need help to solve a problem:
$\left(\frac{∂\overline{E}}{∂V}\right)_{β,N} + β\left(\frac{∂\overline{p}}{∂β}\right)_{N,V} = - \overline{p}$
PS: The bar over E and over p (this in both sides) means that is an average.
I don't know how to start, so any help will be amazing. I'm not a physicist, so I'm having a bad time trying to solve this.
Thank you very much!
Answer: If you very slowly increase the volume of an isolated system, then the internal energy will drop. The ratio between the drop in energy and the volume increase is, by definition, the pressure. The system can be in one of many possible energy levels, these energy levels decrease if we increase the volume (the larger the volume of a system the more closely packed will the energy levels be). If you very slowly increase the volume then the system will remain in whatever energy level it was, the energy will then drop simply because the energy level itself is going down in energy.
A system that is kept at some temperature with temperature parameter $\beta = \frac{1}{k_B T}$. will have a probability $P_r$ of being in a state with energy $E_r$ of:
$$P_r = \frac{\exp(-\beta E_r)}{Z}$$
where $Z$, the so-called partition function is the normalization to make the sum of all the probabilities equal to 1:
$$Z = \sum_r \exp(-\beta E_r)$$
It then follows that the expectation value of the energy is given by:
$$\bar{E} = -\frac{\partial \log{Z}}{\partial\beta}$$
and the pressure is:
$$\bar{p} = \frac{1}{\beta}\frac{\partial \log{Z}}{\partial V}$$
Note that the energy levels $E_r$ are functions of the volume, the above formula yields precisely the expectation value of minus the partial derivative of the energy that defines the pressure.
If you substitute these expressions in the left hand side of the equation in your problem, you see that you get second derivatives, you can then use the symmetry of second derivatives to change the order of differentiation and then simplify the expression. | {
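As a sketch of that final step (this completion is mine, not part of the original answer, but it uses only the two expressions just derived):
$$\left(\frac{\partial\bar{E}}{\partial V}\right)_{\beta} = -\frac{\partial^2\log Z}{\partial V\,\partial\beta},\qquad \beta\left(\frac{\partial\bar{p}}{\partial\beta}\right)_{V} = \beta\,\frac{\partial}{\partial\beta}\left(\frac{1}{\beta}\frac{\partial\log Z}{\partial V}\right) = -\frac{1}{\beta}\frac{\partial\log Z}{\partial V} + \frac{\partial^2\log Z}{\partial\beta\,\partial V}$$
Adding the two, the mixed second derivatives cancel by the symmetry of second derivatives, leaving exactly $-\frac{1}{\beta}\frac{\partial\log Z}{\partial V} = -\bar{p}$.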
"domain": "physics.stackexchange",
"id": 32302,
"tags": "thermodynamics, differentiation"
} |
Storing words from an input stream into a vector | Question: I'm extremely new to C++ and am doing the exercises on the book Accelerated C++. Here is one of the exercises:
4-5. Write a function that reads words from an input stream and stores
them in a vector. Use that function both to write programs that count
the number of words in the input, and to count how many times each
word occurred.
How can I make this program better by only using vectors?
#include <iostream>
#include <algorithm>
#include <vector>
#include<string>
#include <stdexcept>
using std::cout; using std::cin; using std::vector; using std::sort; using std::string; using std::endl; using std::domain_error; using std::cerr;
//it said "count the number of words in the input, and to count how many times each word occurred". I cannot think of any return value if a function does two things, so I used void.
void run(const vector<string>& v) {
if (v.size() == 0) throw domain_error("Nothing to count");
cout << "words count: " << v.size() << endl;
string unique = v[0];
int count = 1;
for (vector<string>::size_type i = 1; i < v.size(); ++i) {
if (v[i-1] == v[i]) {
count++;
} else {
cout << unique << ": " << count << endl;
unique = v[i];
count = 1;
}
}
cout << unique << ": " << count << endl;
}
int main() {
cout << "Enter all words followed by end of file." << endl;
vector<string> words;
string word;
while (cin >> word) {
words.push_back(word);
}
sort(words.begin(), words.end());
try {
run(words);
} catch (domain_error e) {
std::cerr << e.what() << endl;
}
return 0;
}
Answer:
Minor inconsistency here:
#include<string>
Add a space in between them, like all the others:
#include <string>
Just get rid of this:
using std::cout; using std::cin; using std::vector; using std::sort; using std::string; using std::endl; using std::domain_error; using std::cerr;
Even if you put them onto separate lines, it'll still be lengthy. Just put std:: where necessary and try to keep it consistent.
Exception objects usually should not be passed by value:
catch (domain_error e)
They should be passed by const&:
catch (domain_error const& e)
Consider separating this into additional lines:
if (v.size() == 0) throw domain_error("Nothing to count");
While it's okay to put short statements on a single line, it may be a little harder to read and maintain longer lines, also with the lack of curly braces.
In addition, just call empty() instead of comparing size() to 0.
if (v.empty())
{
throw domain_error("Nothing to count");
}
Instead of getting the first element via []:
string unique = v[0];
use front():
string unique = v.front();
I'd also like to add a compliment:
Thank you for using std::size_type instead of int. I hardly see that done by others. It's good to see that you're paying attention to return types and compiler mismatch warnings. | {
"domain": "codereview.stackexchange",
"id": 8202,
"tags": "c++, beginner, c++11, io, vectors"
} |
Weight lifted by cable wound on a cone | Question: This problem is from Hounser & Hudson, Applied Mechanics: Dynamics (1959):
The answer given by the book is:
$$a = \frac{Dd \omega^2}{4 \pi h}$$
I was trying to solve it using polar coordinates (solution (2) ahead). First I'll present a typical solution, as it has been given to me by colleagues and then ask what's wrong with solution (2):
Solution 1: no cylindrical polar coordinates
An infinitesimal displacement of cable is
$$ds = r d \phi$$
Differentiating twice, which is the acceleration of the cable, one gets:
$$\ddot{s} = \dot{r} \omega \tag{1}$$
The relationship between horizontal displacement $z$ and $r$ is (triangle geometry):
$$r = \frac{Dz}{2h}$$
and the relationship between $z$ and $\phi$ is given by pitch:
$$\Delta z = \frac{\phi d}{2 \pi}$$
Now we have:
$$\Delta r = \frac{dD \phi}{4 \pi h} \Rightarrow \dot{r} = \frac{dD \omega}{4 \pi h}$$
Replace it in $(1)$ and there it is.
Solution 2: cylindrical polar coordinates
We assume an inertial coordinate system lying on the surface of the cone in the position where the cable starts bending from the linear lifting. Using polar coordinates:
$$\mathbf{r} = r \hat{\mathbf{e_r}} + z \hat{\mathbf{e_k}} \Rightarrow \mathbf{\dot{r}} = \dot{r} \hat{\mathbf{e_r}} + r \omega \hat{\mathbf{e_\phi}} + \dot{z} \hat{\mathbf{e_k}} \Rightarrow \mathbf{\ddot{r}} = -r \omega ^2 \hat{\mathbf{e_r}} + 2 \dot{r} \omega \hat{\mathbf{e_\phi}} + \ddot{z} \hat{\mathbf{e_k}}$$
Where it has been assumed $\omega$ (as given by the exercise) and $\dot{r}$ (from the expression given in Sol. 1) are constant.
Now the point is: could I assume the tangential acceleration ($\hat{\mathbf{e_\phi}}$) is the same as the one lifting the cable linearly ? There's no curvature immediately before contact, therefore no centripetal acceleration. Also, the horizontal acceleration should be neglected, as instructed. That is:
$$\ddot{s} = 2 \dot{r} \omega$$
If that is so, this result is exactly double that given as the answer (see $(1)$).
How can I reconcile these results?
I've noticed that the second "parcel" of this value is given by the differentiation of the coordinate vector ($\hat{\mathbf{e_r}}$) on an inertial reference frame. I'm unable to decide why it should be discarded, if the textbook answer is correct.
Answer: The second method is incorrect. If you calculate how fast the constantly changing contact point is moving, it is only moving horizontally, and the horizontal acceleration can be ignored. If you follow the instantaneous motion of the current contact point, you should get only the centripetal acceleration (it is rotating, so the acceleration is orthogonal to the tangential direction), which shows that you made a sign mistake and obtained $2\dot{r}\omega$ instead. The acceleration of the weight really comes from the increased speed of the shortening of the cable.
"domain": "physics.stackexchange",
"id": 74438,
"tags": "homework-and-exercises, newtonian-mechanics, rotational-kinematics, string"
} |
What can this circuit be useful for? | Question:
I have calculated the boolean functions for $r$ and $f$:
$f = \overline{s_1} \cdot s_0 + s_1 \cdot \overline{s_0}$.
$r = \overline{s_0 \cdot s_1 \cdot s_2 \cdot s_3}$.
Do you have an idea what an application for this circuit can be? I don't know where we would use it.
Answer: Good question. It looks like some kind of ripple counter. It repeats after 15 steps. The values (reading the outputs as 8,4,2,1) cycle as 8,4,2,9,12,6,11,5,10,13,14,15,7,3,1.
Zero occurs only for one step after reset and is not repeated.
"domain": "cs.stackexchange",
"id": 16488,
"tags": "circuits, digital-circuits"
} |
Can a particle have no instantaneous velocity at all points of the path taken but a finite average velocity? | Question: I have a question on kinematics.
Say the path traced by a particle is given by a Koch curve or Koch snowflake.
Now consider the particle starts from some arbitrary point $A$ on the curve and continues moving with some acceleration. It moves a finite distance on the curve and reaches another point $B$ which is different from $A$ and the particle has not crossed the same point twice.
So there is a net finite displacement covered in a finite time. Hence the particle has a finite average velocity.
But the curve is not differentiable at any point, by definition of the curve. So the particle has no instantaneous velocity at all points of the path taken.
QUESTION: Can a particle have no instantaneous velocity at all points of the path taken but still a finite average velocity?
Is this possible? Can anyone explain this?
Answer: No, it's not possible, because one of the underlying assumptions of kinematics is that all paths are at least twice differentiable. Before you complain about this requirement, remember that physics is about building models that can be used to describe and predict measurements. Measurements always have some amount of uncertainty, and even if you suppose that it is possible for a particle to travel along a nondifferentiable path $x(t)$, it is still always possible to construct a twice-differentiable path that matches $x(t)$ to any desired level of precision. That twice-differentiable path is what you use for the model.
Even beyond that, make sure not to mix up "no instantaneous velocity" with "zero instantaneous velocity". Usually we use these terms interchangeably in physics, but we have the luxury of doing so because (we normally assume) paths are always differentiable and thus there is not really any such thing as, literally, having no instantaneous velocity. If you want to work with nondifferentiable paths, then you have to be more careful. It's conceivable that in such a model, a particle could have a perfectly well defined average velocity between any two points in time and yet never have an instantaneous velocity. This is still fine (if useless) because no physical process actually measures instantaneous velocity. The closest you get is an exceedingly short-time average, e.g. over roughly a period of oscillation of an EM wave when using the Doppler effect. | {
"domain": "physics.stackexchange",
"id": 35900,
"tags": "kinematics, velocity, mathematics, differentiation, fractals"
} |
What is the difference between the rot_star and nrotstar class in LORENE (Numerical Relativity) | Question: I'm doing my final degree project on neutron star simulations. I need to compute some simulations for different EoS. Among the classes, I find two different ones:
nrotstar-> I think the n means neutron (it can mean Newtonian or 'non' too), but my teacher said he has no idea
rot_star-> the class he told me to start reading about.
After searching the documentation and source files I can't extract the conclusion about what is the class what I should use. What are the differences between them?
Answer: I am not sure if it is of use any more, but to me it seems like Nrotstar is New-rot_star.
If you visit the file Lorene/Codes/Rot_star/README, there the variable rot_star is said to be obsolete and the user is suggested to use nrotstar instead. | {
"domain": "physics.stackexchange",
"id": 68559,
"tags": "general-relativity, computational-physics, simulations, neutron-stars"
} |
Instantaneous phase calculation problems | Question: I am currently struggling with a problem when calculating the instantaneous phase of a wavefield using a 1D Hilbert transformation.
The scheme is as follows.
1D Hilbert transformation of wave field $U(x,z)$
$$
q_z(x,z) = H_z[U(x,z)] = \int_R U(x,z-\xi)\,\frac{d\xi}{\xi}
$$
Calculation of the instantaneous phase $\phi(x,z)$
$$
\phi(x,z) = \arctan\left(\frac{q_z(x,z)}{U(x,z)}\right),
$$
where $x$ and $z$ are the coordinates and $H_z$ is the 1D Hilbert transformation over the vertical direction.
My problem is, that $\phi(x,z)$ which is implemented using the intrinsic function $atan2()$ contains phase shifts [$\pi \rightarrow -\pi$].
Using the instantaneous phase for further calculations in which its gradient is derived hence leads to imaging artifacts as shown in the attached pictures:
TOP LEFT: input wavefield $U(x,z)$
TOP RIGHT: Hilbert transformed wavefield $q_z(x,z) = H_z[U(x,z)]$
BOTTOM LEFT: instantaneous phase $\phi(x,z)$
BOTTOM RIGHT: derived propagation angle with artifacts
My question: I am able to overcome this problem using Gaussian filtering or median filtering, yet this does not work properly for more complicated wavefields and higher frequencies. Is there another way to overcome those sharp transitions in the area of my wavefield (not the noise around it)?
Thanks a lot!
Answer: One way to overcome this problem is phase unwrapping, which is also implemented by the Matlab function unwrap. But I believe that the best option for obtaining the derivative of the phase is to avoid a direct computation, but instead start out from the complex function itself. To simplify notation, let's use $U$ (without arguments):
$$U=re^{j\phi}=a+ib$$
where $r$ is the magnitude and $\phi$ is the phase. The derivative of $U$ is given by
$$U'=r'e^{j\phi}+rj\phi'e^{j\phi}\\
\frac{U'}{U}=\frac{r'}{r}+j\phi'$$
Since $r'/r$ is real-valued, we get
$$\phi'=\text{Im}\left\{\frac{U'}{U}\right\}=\frac{ab'-a'b}{a^2+b^2}\tag{1}$$
where $a$ and $b$ are the real and imaginary parts of $U$, respectively. Using (1) instead of a direct derivative of the phase should solve the problem. | {
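As an illustration (this sketch is not from the original answer; it assumes NumPy and uses a synthetic single-tone signal purely for demonstration), formula (1) can be implemented with finite differences:

```python
import numpy as np

def phase_derivative(u):
    """Instantaneous-phase derivative of a complex signal u via
    Im{u'/u} = (a*b' - a'*b) / (a^2 + b^2), computed without ever
    evaluating (and unwrapping) the phase itself."""
    a, b = u.real, u.imag
    da, db = np.gradient(a), np.gradient(b)  # finite-difference derivatives
    return (a * db - da * b) / (a ** 2 + b ** 2)

# Synthetic analytic signal of a pure tone; its true phase is omega * n.
n = np.arange(2048)
omega = 0.3
u = np.exp(1j * omega * n)

dphi = phase_derivative(u)                        # formula (1)
dphi_naive = np.gradient(np.unwrap(np.angle(u)))  # unwrap, then differentiate
```

Both routes agree up to finite-difference error, but formula (1) never produces the $\pi \rightarrow -\pi$ jumps in the first place, so no unwrapping or smoothing step is needed.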
"domain": "dsp.stackexchange",
"id": 2125,
"tags": "phase, wave, hilbert-transform"
} |
I run into an error when using move_base package of navigation | Question:
Hi, everyone! When I was setting up my robot for navigation, I followed the navigation tutorial like this. But when I roslaunch the move_base.launch, I ran into an error. It says:
You must specify at least three points for the robot footprint,reverting to previous footprint.
I know maybe something is wrong with the costmap_common_params.yaml setting "footprint: [[x0,y0],[x1,y1]...[xn,yn]]", but I do not know how to solve the problem. Can anyone help me?
I add the costmap_common_params.yaml:
obstacle_range: 2.5
raytrace_range: 3.0
footprint: [[x0, y0], [x1, y1], ... [xn, yn]]
inflation_radius: 0.55
observation_sources: laser_scan_sensor
laser_scan_sensor: {sensor_frame: base_laser, data_type: LaserScan, topic: scan, marking: true, clearing: true}
Originally posted by Yuichi Chu on ROS Answers with karma: 148 on 2014-03-07
Post score: 4
Original comments
Comment by ahendrix on 2014-03-07:
Can you add your costmap_common_params.yaml to the question?
Comment by Yuichi Chu on 2014-03-10:
OK, I added my costmap_common_params.yaml to the question. Maybe I should fill in x0, y0, x1 ... xn exactly. But I do not know what footprint and inflation_radius mean in this file and how much they matter to the algorithm. I have read the tutorial but cannot catch the point. Can you help me?
Answer:
The footprint parameter describes the shape and size of your robot, in meters. 0,0 is assumed to be the turning center of your robot, and you define the corners of your robot relative to that center. +X is forward, +Y is to the left, as defined in REP-103
For example, if you have a 1m square robot that turns about its center, your footprint would be:
footprint: [ [0.5, 0.5], [-0.5, 0.5], [-0.5, -0.5], [0.5, -0.5] ]
However, if your robot turns about its front edge, your footprint would be:
footprint: [ [0.0, 0.5], [0.0, -0.5], [-1.0, -0.5], [-1.0, 0.5] ]
If you have a triangular robot, your footprint might look like:
footprint: [ [1.0, 0.0], [0.0, -0.5], [0.0, 0.5] ]
Originally posted by ahendrix with karma: 47576 on 2014-03-10
This answer was ACCEPTED on the original site
Post score: 8
Original comments
Comment by Yuichi Chu on 2014-03-10:
It helps a lot.Thank you very much,ahendrix!
Comment by jxl on 2015-06-15:
@ahendrix, your reply is very clear,but in the costmap_common_params.yaml ,turtlebot is set robot_radius: 0.18,how many points to define it's circle footprint ,does it use the four corners of it's inscribed square?
Comment by jxl on 2015-06-15:
yes,i want to figure out costmap_2d::Costmap2DROS::getRobotFootprint(), when it is turtlebot ,it will return how many points ?:)
Comment by David Lu on 2015-06-15:
Actually its a 16-sided polygon (hexadecagon).
https://github.com/ros-planning/navigation/blob/jade-devel/costmap_2d/src/costmap_2d_ros.cpp#L371
Comment by jxl on 2015-06-15:
I think i found it, as if in the costmap_2d::Costmap2DROS::setFootprintFromRadius,it set the footprint by 16 points.
Comment by jxl on 2015-06-15:
@David Lu and @ahendrix,thank you ,all :)) | {
"domain": "robotics.stackexchange",
"id": 17205,
"tags": "ros, navigation, move-base, footprint"
} |
Counting occurrences of "hi" in a string, except where followed by "x" | Question: I already solved this one from CodingBat, but I'm unsatisfied with my solution and I feel I could have made it much shorter. Most of the other recursion tasks could be solved by a one line conditional for the base case and a one line return statement usually using ternary operator. So, could I have made this any shorter or more readable?
Given a string, compute recursively the number of times lowercase "hi" appears in the string; however, do not count "hi" that have an 'x' immediately before them.
public int countHi2(String str) {
if (str.length() <= 1) {
return 0;
}
if (str.startsWith("x") && str.charAt(1) != 'x') {
return countHi2(str.substring(2));
}
else if (str.startsWith("hi")) {
return 1 + countHi2(str.substring(2));
}
else {
return countHi2(str.substring(1));
}
}
Answer: It doesn't get much shorter than that. But some other improvements are possible.
The current algorithm uses the startsWith method to check for "x" and "hi", and as such it advances by only 1 or 2 characters at a time. This is inefficient.
Another performance issue is creating many temporary strings in the process (because of substring).
It would be better to use indexOf instead of startsWith, which will allow you to jump multiple characters at a time. Another big improvement would be to avoid temporary string generation by creating a helper function that tracks the position to check. Something like this:
public int countHi2(String str) {
return countHi2(str, 0);
}
public int countHi2(String str, int start) {
start = str.indexOf("hi", start);
if (start == -1) {
return 0;
}
int count = 0;
if (start == 0 || str.charAt(start - 1) != 'x') {
count++;
}
return count + countHi2(str, start + 2);
} | {
"domain": "codereview.stackexchange",
"id": 19485,
"tags": "java, strings, recursion"
} |
Speed unit converter | Question: I'm basically a newbie in Android app development, so I'm not sure if this is the right way to write this code. I have made an app called "Zconverter". It has 9 fragments, and the code I'm showing here is the smallest of them all. All the fragments have the same type of code. This app works just fine, but I'm not sure if this is the best way to write it.
public class Speed extends Fragment {
public View onCreateView (LayoutInflater inflater, ViewGroup container , Bundle savedInstanceState){
final ViewGroup rootView = (ViewGroup) inflater.inflate(R.layout.speed_fragment, container, false);
final EditText editMilesHour = (EditText) rootView.findViewById(R.id.editMilesHour);
final EditText editfeetsec = (EditText) rootView.findViewById(R.id.editfeetsec);
final EditText editMeterSec = (EditText) rootView.findViewById(R.id.editMeterSec);
final EditText editKmHour = (EditText) rootView.findViewById(R.id.editKmHour);
final EditText editKnot = (EditText) rootView.findViewById(R.id.editKnot);
TextView textSpeed = (TextView) rootView.findViewById(R.id.textSpeed);
TextView textKmHour = (TextView) rootView.findViewById(R.id.textKmHour);
TextView textKnot = (TextView) rootView.findViewById(R.id.textKnot);
TextView textMilesHour = (TextView) rootView.findViewById(R.id.textMilesHour);
TextView textfeetsec = (TextView) rootView.findViewById(R.id.textfeetsec);
TextView textMeterSec = (TextView) rootView.findViewById(R.id.textMeterSec);
Typeface font = Typeface.createFromAsset(getActivity().getAssets(), "CoffeeHouse.ttf");
textMeterSec.setTypeface(font);
textMilesHour.setTypeface(font);
textKnot.setTypeface(font);
textKmHour.setTypeface(font);
textfeetsec.setTypeface(font);
textSpeed.setTypeface(font);
editMeterSec.setOnFocusChangeListener(new View.OnFocusChangeListener() {
@Override
public void onFocusChange(View v, boolean hasFocus) {
if(hasFocus){
Button buttonConvertSpeed = (Button) rootView.findViewById(R.id.buttonConvertSpeed);
buttonConvertSpeed.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View v) {
double MeterSec = Double.valueOf(editMeterSec.getText().toString());
double MilesHour = MeterSec * 2.23694;
editMilesHour.setText(String.valueOf(MilesHour));
double feetsec = MeterSec * 3.28084;
editfeetsec.setText(String.valueOf(feetsec));
double KmHour = MeterSec * 3.6;
editKmHour.setText(String.valueOf(KmHour));
double Knot = MeterSec * 1.94384;
editKnot.setText(String.valueOf(Knot));
}
}
);
}else{
}
}
});
editMilesHour.setOnFocusChangeListener(new View.OnFocusChangeListener() {
@Override
public void onFocusChange(View v, boolean hasFocus) {
if(hasFocus){
Button buttonConvertSpeed = (Button) rootView.findViewById(R.id.buttonConvertSpeed);
buttonConvertSpeed.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View v) {
double MilesHour6 = Double.valueOf(editMilesHour.getText().toString());
double MeterSec6 = MilesHour6 * 0.44704;
editMeterSec.setText(String.valueOf(MeterSec6));
double feetsec6 = MilesHour6 * 1.46667;
editfeetsec.setText(String.valueOf(feetsec6));
double KmHour6 = MilesHour6 * 1.60934;
editKmHour.setText(String.valueOf(KmHour6));
double Knot6 = MilesHour6 * 0.868976;
editKnot.setText(String.valueOf(Knot6));
}
}
);
}else{
}
}
});
editfeetsec.setOnFocusChangeListener(new View.OnFocusChangeListener() {
@Override
public void onFocusChange(View v, boolean hasFocus) {
if(hasFocus){
Button buttonConvertSpeed = (Button) rootView.findViewById(R.id.buttonConvertSpeed);
buttonConvertSpeed.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View v) {
double feetsec7 = Double.valueOf(editfeetsec.getText().toString());
double MeterSec7 = feetsec7 * 0.3048;
editMeterSec.setText(String.valueOf(MeterSec7));
double MilesHour7 = feetsec7 * 0.681818;
editMilesHour.setText(String.valueOf(MilesHour7));
double KmHour7 = feetsec7 * 1.09728;
editKmHour.setText(String.valueOf(KmHour7));
double Knot7 = feetsec7 * 0.592484;
editKnot.setText(String.valueOf(Knot7));
}
}
);
}else{
}
}
});
editKmHour.setOnFocusChangeListener(new View.OnFocusChangeListener() {
@Override
public void onFocusChange(View v, boolean hasFocus) {
if(hasFocus){
Button buttonConvertSpeed = (Button) rootView.findViewById(R.id.buttonConvertSpeed);
buttonConvertSpeed.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View v) {
double KmHour8 = Double.valueOf(editKmHour.getText().toString());
double MeterSec8 = KmHour8 * 0.277778;
editMeterSec.setText(String.valueOf(MeterSec8));
double MilesHour8 = KmHour8 * 0.621371;
editMilesHour.setText(String.valueOf(MilesHour8));
double feetsec8 = KmHour8 * 0.911344;
editfeetsec.setText(String.valueOf(feetsec8));
double Knot8 = KmHour8 * 0.539957;
editKnot.setText(String.valueOf(Knot8));
}
}
);
}else{
}
}
});
editKnot.setOnFocusChangeListener(new View.OnFocusChangeListener() {
@Override
public void onFocusChange(View v, boolean hasFocus) {
if(hasFocus){
Button buttonConvertSpeed = (Button) rootView.findViewById(R.id.buttonConvertSpeed);
buttonConvertSpeed.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View v) {
double Knot9 = Double.valueOf(editKnot.getText().toString());
double MeterSec9 = Knot9 * 0.514444;
editMeterSec.setText(String.valueOf(MeterSec9));
double MilesHour9 = Knot9 * 1.15078;
editMilesHour.setText(String.valueOf(MilesHour9));
double feetsec9 = Knot9 * 1.68781;
editfeetsec.setText(String.valueOf(feetsec9));
double KmHour9 = Knot9 * 1.852;
editKmHour.setText(String.valueOf(KmHour9));
}
}
);
}else{
}
}
});
return rootView;
}
Answer: There are several things that can be improved:
Error handling. Your code does not handle the case when the text does not represent a valid double. It would be nice to have some error handling (for instance, you could show an error message in this case). This line of code:
double MeterSec = Double.valueOf(editMeterSec.getText().toString());
will fail with NumberFormatException if editMeterSec.getText().toString() is not a valid representation of a double number. You can handle this by catching the exception:
try {
double MeterSec = Double.valueOf(editMeterSec.getText().toString());
...
} catch (NumberFormatException ex) {
// Show an error message or do something else.
}
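To avoid repeating that try/catch in every listener, the parsing could be pulled into one small helper. A sketch (the class and method names are my own illustration, not from the original app):

```java
// Hypothetical helper: returns null instead of throwing on invalid input,
// so the caller can decide how to report the error to the user.
public class SafeParse {
    public static Double parseDoubleOrNull(String text) {
        try {
            return Double.valueOf(text.trim());
        } catch (NumberFormatException | NullPointerException ex) {
            return null; // invalid or missing input
        }
    }
}
```

A listener would then check the result for null once and show a message, instead of wrapping every conversion in its own try/catch.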
"Magic" constants. It is not good to use such numbers as 3.28084 or 3.6 directly in your code because their meaning is not clear. You should create variables with meaningful names to hold these constants. For instance, you could have a
private static final double METERS_PER_SECOND_TO_KILOMETERS_PER_HOUR = 3.6;
and then use it for the conversion between different units of speed.
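Expanded into a tiny sketch (the class and method names are my own, the values come from the question's code):

```java
// Named constants instead of bare "magic" numbers like 3.6 or 2.23694.
public class SpeedConstants {
    public static final double METERS_PER_SECOND_TO_KILOMETERS_PER_HOUR = 3.6;
    public static final double METERS_PER_SECOND_TO_MILES_PER_HOUR = 2.23694;

    public static double toKmPerHour(double metersPerSecond) {
        return metersPerSecond * METERS_PER_SECOND_TO_KILOMETERS_PER_HOUR;
    }

    public static double toMilesPerHour(double metersPerSecond) {
        return metersPerSecond * METERS_PER_SECOND_TO_MILES_PER_HOUR;
    }
}
```

Now the intent of each multiplication is readable at the call site, and a wrong factor only needs fixing in one place.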
Empty else blocks are redundant. Just get rid of them.
if (predicate) {
// Do something.
} else {
// An empty block.
}
should become
if (predicate) {
// Do something.
}
Whitespace. It is conventional to have a space after the if, for and while keywords, and before and after curly brackets.
Variable naming. Non-static, non-final variables' names should start with a lowercase letter (and different words are separated using camelCase). This is not consistent in your code: textSpeed, for instance, follows this convention, but MeterSec and feetsec violate it.
Design of your class. Having one huge method that does everything is a bad practice. Moreover, the entire Speed class does too many things: it handles the GUI and the conversion logic at the same time. I would recommend creating a separate utility class that converts speed units into each other. The current design makes your code very hard to test.
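One possible shape for such a utility class, using a canonical base unit so every pair of units converts through meters per second (the enum design and factor values here are my own sketch, not from the original code):

```java
// GUI-free speed conversion: each unit stores its factor to the base unit (m/s).
public enum SpeedUnit {
    METERS_PER_SECOND(1.0),
    KILOMETERS_PER_HOUR(1 / 3.6),
    MILES_PER_HOUR(0.44704),
    FEET_PER_SECOND(0.3048),
    KNOTS(0.514444);

    private final double toMetersPerSecond;

    SpeedUnit(double toMetersPerSecond) {
        this.toMetersPerSecond = toMetersPerSecond;
    }

    /** Converts a value expressed in this unit to the target unit. */
    public double convertTo(SpeedUnit target, double value) {
        return value * toMetersPerSecond / target.toMetersPerSecond;
    }
}
```

A class like this has no Android dependencies, so it can be unit-tested in isolation, which addresses the testability complaint.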
Avoiding code duplication. Several event listeners contain the same code:
new View.OnFocusChangeListener() {
@Override
public void onFocusChange(View v, boolean hasFocus) {
if (hasFocus) {
Button buttonConvertSpeed = (Button) rootView.findViewById(R.id.buttonConvertSpeed);
buttonConvertSpeed.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View v) {
// Do something.
}
});
}
}
}
Instead of using local anonymous classes and duplicated code, you can create a class that implements the View.OnFocusChangeListener interface so that you can reuse it (it can have a constructor that takes a Runnable to customize the behavior of the onClick method). In general, you should try to improve the modularity of your code. | {
"domain": "codereview.stackexchange",
"id": 12316,
"tags": "java, beginner, android, converting, event-handling"
} |
Why is Cannabis Sativa considered a distinct form of Cannabis? | Question: My understanding is that the word Sativa is Latin and means "cultivated." See here: https://en.wikipedia.org/wiki/Sativum
Since all cannabis consumed by people is cultivated and grown from seed, shouldn't all Cannabis be called Cannabis Sativa? And so isn't Cannabis Indica also Cannabis Sativa? If taking a cutting of a plant grown from seed and rooting it means that "clone" isn't considered to have been grown from seed then maybe those plants shouldn't be called Sativa, although this would still seem to qualify as cultivation. But either way I fail to see what bearing it has on phenotype or alkaloid profile. How has it come to mean having thin leaves and high THC content?
Answer: C. sativa was originally named by Linnaeus in 1753, long before the plant was commonly used for recreation (in Europe, at least), so the name probably reflects the fact that that species was cultivated for fiber (hemp), just as many plants have the Latin name Officinalis or officinale, meaning that they were used in medicine. Later botanists considered other varieties to be sufficiently different to be different species or subspecies. (Including some interesting reasons based in the laws of the 1970s: https://en.wikipedia.org/wiki/Cannabis#Taxonomy )
It's also not true that all Cannabis is cultivated. It can be found growing wild, either native or as an introduced plant, in many parts of the world. | {
"domain": "biology.stackexchange",
"id": 9744,
"tags": "taxonomy, nomenclature"
} |
Speech Recognition Project - HMM | Question: I am looking for advice on a project I am thinking about completing (for experience, etc.).
My project is triggering a set of traffic lights (simulation) using speech recognition. In theory, I want to use an HMM (Hidden Markov Model) to determine whether someone wants to cross the road or not: this can be determined by someone saying "Yes". If there is only white noise, or the phones of "Yes" are not present, it can infer that no one wants to cross and therefore there is no need to trigger the lights.
I'm looking for advice on whether this can be done using an HMM. I have some experience with both of these; I just need to know whether it would be a good idea to use an HMM / the Viterbi algorithm.
I hope someone can help me,
Thanks :)
Answer: I learnt to use HTK based on this very basic tutorial
http://www.info2.uqam.ca/~boukadoum_m/DIC9315/Notes/Markov/HTK_basic_tutorial.pdf
It details how to make a yes/no recogniser. It should get you off to a good start at least. There is also some HTK code on the main HTK site that allows you to perform real time speech recognition.
To improve robustness, you might also consider training the recogniser on speech tokens mixed with the kinds of noise you'd expect to hear around your receiver. You could place a recording device near the traffic light and record a day's worth of background material, then randomly select sample windows to add to your speech corpus at various levels. | {
"domain": "dsp.stackexchange",
"id": 481,
"tags": "algorithms, speech-recognition, speech"
} |
Discontinuity in gravitational potential | Question: If you raise an object, you increase its potential energy (in absolute value).
But if you exceed a specific height, suddenly gravity can't pull the object back, and it can be interpreted as if the potential turned to zero! (The intuition for potential energy is how much energy can be released if you let the object go.)
Does this mean that the potential energy's function is discontinuous? I guess not, but why? It's against my intuition!
Answer: The problem here is that many introductory physics courses say that gravitation potential energy is
$$U = mgh.$$
This is an approximate expression that only works if your mass $m$ is much lighter than the Earth and relatively close to the Earth. However, it breaks down the second you start considering more complicated situations.
For example, suppose we had an object between the Earth and the moon. Then when it's close to the moon, we have
$$U \approx m g_{\text{moon}} h_{\text{moon}}$$
and when it's close to the Earth, we have
$$U \approx m g_{\text{Earth}} h_{\text{Earth}}.$$
However, these two expressions cannot be combined consistently everywhere. In introductory courses, they sometimes gloss over this point by saying that you should only consider the nearest / most important heavy body, like the one you would fall towards if you let the object go. But that implies that the potential discontinuously changes from the first option to the second option, as you move closer from the moon to the Earth, like the moon's potential suddenly "turns off".
This problem appears because both of the expressions above are only approximate. The true potential energy is
$$U = -\frac{GM_{\text{moon}}m}{r_{\text{moon}}} - \frac{GM_{\text{Earth}}m}{r_{\text{Earth}}}$$
where the $r$'s are the distances to the center of the moon and Earth. Using some calculus, you can show that this reduces to the previous two expressions when the object is very close to either the moon or the Earth. | {
"domain": "physics.stackexchange",
"id": 33149,
"tags": "forces, newtonian-gravity, work, potential-energy"
} |
Higher order filters | Question: I know how a higher order filter affects the system, theoretically: the slope of the frequency response becomes steeper as its order increases. I had some speed signals from work and wanted to investigate this further, so I designed an $N^{th}$-order $PT1^n$ filter (multiple PT1 filters concatenated) and an $N^{th}$-order Butterworth filter. I just cannot see how the stuff I read theoretically applies here.
1st to 3rd order $PT1^n$ filter:
1st to 3rd order Butterworth filter:
These are real signals. It seems to me that as the order increases, the phase shift increases along with the noise reduction. I want to figure out if increasing the order of the filter is worth the hassle: as the order increases, the complexity increases and it needs more computational power too.
The main purpose of my filtering is to smooth the signal and remove those steps. I want to know how much benefit I will get by increasing the order of the filter, mathematically and practically.
Answer:
The main purpose of my filtering is to smooth the signal and remove those steps. I want to know how much benefit I will get by increasing the order of the filter, mathematically and practically.
There are basically two parameters for filters like a Butterworth lowpass: Its order and the cutoff frequency. Both have influence on the response.
Setting the cutoff frequency lower will make the response smoother. Setting it higher will make the response follow the steps more closely, because more of the frequencies are retained and fewer are attenuated.
The filter order controls how selective the filter is / how "sharply" it separates the passband (the frequencies that are preserved) from the stopband (the frequencies that are rejected). In between the passband and the stopband you have the transition band, which gets narrower the higher the order you use. In the time domain, higher order filters tend to make the response more wavy: the filter's impulse response oscillates more and its envelope decays more slowly. Choosing the filter order is kind of a trade-off between frequency selectivity and time selectivity. By time selectivity I mean that a tiny pop in your source signal will affect a larger window of time in the filtered signal.
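For concreteness, the question's $PT1^n$ structure (identical first-order stages in series) can be sketched as follows; the smoothing coefficient and class shape are illustrative only, not tuned to the poster's data:

```java
// Minimal PT1^n sketch: n identical first-order lowpass stages in series.
public class CascadedLowpass {
    private final double[] state; // one state value per stage
    private final double alpha;   // smoothing factor in (0, 1]

    public CascadedLowpass(int order, double alpha) {
        this.state = new double[order];
        this.alpha = alpha;
    }

    /** Feeds one sample through all stages and returns the filtered value. */
    public double step(double input) {
        double x = input;
        for (int i = 0; i < state.length; i++) {
            state[i] += alpha * (x - state[i]); // y += a * (x - y)
            x = state[i];
        }
        return x;
    }
}
```

Each added stage smooths more but also delays more, and with identical stages the effective cutoff drifts downward as the order grows, so in practice the coefficient would be re-tuned per order.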
Without knowing exactly what you are shooting for, it's hard to suggest anything. I hope this explanation of how the two tuning knobs affect the response will enable you to move in the right direction.
If you want to do noise reduction it's generally good to know what the PSD of your signal and the noise part looks like. In your case, the noise is highly correlated and depends too much on the signal. You basically have truncation without any prior dithering which would otherwise decouple the noise error from the signal. So, in your case it's hard to predict the PSD of the error. But given both spectra, Wiener filtering could get you closer to the solution. Then, a good filter's cutoff frequency would depend on the point where the signal PSD crosses the noise's PSD. The order would depend on the difference of the PSD slopes between signal and noise in a loglog domain. So, for example, given a brownish signal mixed with white noise, the difference of the PSD slopes in the loglog domain would be 2. Divide this by two (--> 1) and use that as a filter order. The most appropriate cutoff frequency setting would be to make the -6 dB point the frequency where the signal PSD curve intersects the noise PSD curve.
If you care about the phase response, you could apply the IIR filter bidirectionally: once in the forward direction and then in the backward direction. This cancels the phase response and squares the magnitude response. To compensate for this you'd reduce the filter order by about a factor of two. This might be a good approach if you do this kind of filtering offline. The phase and group delay of higher order IIR filters are higher, as you observed. | {
"domain": "dsp.stackexchange",
"id": 4293,
"tags": "filters, signal-analysis"
} |
GPG class for AJAX calls | Question: I want to know if this GPG class in PHP is up to snuff as a professional-level class. I did my best to include everything and make it easy to use. I'm calling the object via __invoke(). If anyone can point out anything I've missed, or how I can do better, I'd really appreciate it.
This is a very simple codebase I made to interact with the GPG installation. I'm creating Filters for a JS package I created. The Filters are PHP and the direct codebase is JS.
<?PHP
class GPG {
private $id;
function __invoke(string $command, $param1 = "", $param2 = "", $param3 = "")
{
if (!isset($this->id))
$this->id = gnupg_init();
$tempFuncCall = 'gnupg_'.$command;
$one_string = ["addencryptkey","decrypt","encrypt","encryptsign",
"export","gettrustlist","import","keyinfo","listsignatures","setarmor","seterrormode","setsignmode","sign"];
$two_strings = ["adddecryptkey","addsignkey","decryptverify"];
$three_strings = ["verify"];
if (in_array($command,$one_string))
{
return $tempFuncCall($this->id, $param1);
}
else if (in_array($command,$two_strings))
{
return $tempFuncCall($this->id, $param1, $param2);
}
else if (in_array($command,$three_strings))
{
return $tempFuncCall($this->id, $param1, $param2, $param3);
}
else if ($command != "init")
{
try
{
return $tempFuncCall($this->id);
}
catch (e)
{
echo "Command does not exist";
}
}
}
}
?>
Here's an Example:
$gpg = new GPG();
$gpg('addencryptkey', "credentials");
$r = $gpg('encrypt',"this is just some text.");
$gpg('adddecryptkey',"credentials","");
echo $gpg('decrypt', $r);
// $r = $gpg('')
echo $r;
Answer: My opinion is that variable variables and variable function/method names should be avoided as much as possible (I nearly want to say: "NEVER use them"). These features often have a negative impact on the developer experience in IDEs.
I think you would benefit from reading the PSR-12 coding guidelines. Things that stand out to me are:
missing __invoke() method visibility declaration https://www.php.net/manual/en/language.oop5.magic.php
if without curly braces
type declarations on all __invoke() arguments
missing spaces after commas inside array declarations
placement of opening curly brace on if blocks
there should not be a space between else if -- it is one word in PHP
Beyond that, here are few other thoughts:
$one_string, $two_string, and $three_string variable names are NOT awesome/intuitive/meaningful. I see that these are whitelists and I don't really have an alternative to offer, but it doesn't strike me as "professional". Should these even be variables at all? Would they be better as constants? Why are there three arguments after command anyhow? Can the variable length parameters be simplified using the spread operator in the signature?
What about:
public function __invoke(string $command, string ...$params): mixed {
//...
return $tempFuncCall($this->id, ...$params);
//...
}
I recommend strict comparisons unless logical to do otherwise. $command !== "init" | {
"domain": "codereview.stackexchange",
"id": 44515,
"tags": "php, object-oriented, classes"
} |