| anchor | positive | source |
|---|---|---|
Is there any benefit to write a driver node based on driver_base package | Question:
Hi,
I'm using a customized 2D laser range finder (not SICK/HOKUYO).
This laser range finder has its own protocol so I must write my own driver node to get the 2D scan and publish it in "sensor_msgs/LaserScan".
I'm wondering if I should write my own driver node based on "driver_base" package, like "hokuyo_node" package.
Is there any benefit to use the "driver_base" package?
Thanks.
Originally posted by Curtis Fu on ROS Answers with karma: 3 on 2015-08-10
Post score: 0
Answer:
I have never used it myself, but having a look at the WikiPage, it does not seem to be a good idea to use it, as it is deprecated.
A framework for writing drivers that
helps with runtime reconfiguration,
diagnostics and self-test. This
package is deprecated.
API Stability This package is for
internal use only. Its API is stable,
but not recommended for use by new
packages.
Originally posted by mgruhler with karma: 12390 on 2015-08-11
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 22420,
"tags": "ros"
} |
What should pwd be replaced with? | Question:
In the page
http://wiki.ros.org/image_transport/Tutorials/PublishingImages
$ ln -s `pwd`/image_common/image_transport/tutorial/ ./src/image_transport_tutorial
command appears. Here 'pwd' should be replaced with something else. What could it be?
Thanks
Originally posted by jbpark03 on ROS Answers with karma: 31 on 2016-03-08
Post score: 0
Answer:
I don't think you need to replace pwd in the command you referred to. On Ubuntu (and other Linux systems, I assume), text surrounded by the backquote "`" symbol is replaced with the output of that command; this is called command substitution.
So if you follow the tutorial line-by-line, you should be at ~/image_transport_ws/, which pwd command will return and that's what the tutorial you linked to expects.
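To illustrate the backtick mechanism the answer describes (the paths below are only for demonstration, not from the tutorial):

```shell
# Command substitution demo: the shell replaces `pwd` (equivalently
# $(pwd)) with the command's standard output before the enclosing
# command runs, so nothing needs to be replaced by hand.
here=`pwd`
echo "The tutorial's ln -s would expand to:"
echo "ln -s $here/image_common/image_transport/tutorial ./src/image_transport_tutorial"
```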
Originally posted by 130s with karma: 10937 on 2016-03-08
This answer was ACCEPTED on the original site
Post score: 4 | {
"domain": "robotics.stackexchange",
"id": 24041,
"tags": "linux"
} |
Schrödinger-Pauli Equation Solutions | Question: The Schrödinger-Pauli equation is the non-relativistic limit of the Dirac equation, and therefore describes spin-1/2 particles in an external electromagnetic field. It is given by:
$$\left[\frac{1}{2m}(\boldsymbol{\sigma} \cdot (\boldsymbol{p}-q\boldsymbol{A}))^2+q\phi\right]|\psi\rangle=i \hbar\frac{\partial}{\partial t}|\psi\rangle.$$
Are there any analytical solutions to this equation? I have searched online but have unfortunately been unable to find any.
Answer: I cannot be sure, but I suspect that you can get analytical solutions of the Pauli equation by taking a non-relativistic limit of analytical solutions of the Dirac equation. The latter can be found in many books, say Bagrov, Vladislav G. / Gitman, Dmitry, The Dirac Equation and its Solutions (http://www.degruyter.com/view/product/177851) (you can find a Google preview). One example of an analytical solution of the Pauli equation can be found in http://arxiv.org/abs/physics/9807019 . | {
"domain": "physics.stackexchange",
"id": 31248,
"tags": "quantum-mechanics, electromagnetism, wavefunction, quantum-spin, spinors"
} |
Fierz identity (supersymmetry) | Question: So basically I have two Fierz identities involving spinors:
$$\psi^a \psi^b = -\frac{1}{2} \epsilon^{ab} \psi \psi$$
And
$$\overline{\psi}^{\dot{a}} \overline{\psi}^{\dot{b}} = \frac{1}{2} \epsilon^{\dot{a} \dot{b}} \overline{\psi} \overline{\psi}$$
The first one is immediate to solve: the expression is antisymmetric, and therefore
$$\psi^a \psi^b = c \epsilon^{ab}.$$
But $$\psi \psi = \psi^a \epsilon_{ab} \psi^b = c \epsilon^{ab} \epsilon_{ab} = -2 c.$$
So $$\psi^a \psi^b = -\frac{1}{2} \epsilon^{ab} \psi \psi.$$
The second one is the problem. Certainly, I am missing something in the definition. Using the same logic as above, we have
$$ \overline{\psi}^{\dot{a}} \overline{\psi}^{\dot{b}} = c \epsilon^{\dot{a} \dot{b}}.$$
So $$\overline{\psi} \overline{\psi} = \overline{\psi}_{\dot{a}} \overline{\psi}^{\dot{a}} = \overline{\psi}^{\dot{b}} \epsilon_{\dot{b} \dot{a}} \overline{\psi}^{\dot{a}} = c \epsilon_{\dot{b} \dot{a}} \epsilon^{\dot{b} \dot{a}}.$$
Obtaining $$ c = -\frac{1}{2} \overline{\psi} \overline{\psi}.$$
But this is not the second Fierz identity! What am I missing?
Answer: We define
\begin{align}
\psi \chi &\equiv \psi_a \chi^a = - \epsilon_{ab} \psi^a \chi^b , \qquad {\bar \psi} {\bar \chi} \equiv {\bar \psi}^{\dot a} {\bar \chi}_{\dot a} = \epsilon_{{\dot a}{\dot b}} {\bar \psi}^{\dot a} {\bar \chi}^{\dot b} .
\end{align}
Expanding the sum out explicitly, we find
$$
\psi \psi = - 2 \psi^1 \psi^2 , \qquad {\bar \psi} {\bar \psi} = 2 {\bar \psi}^{\dot 1} {\bar \psi}^{\dot 2} . \tag{1}
$$
We have
$$
\psi^a \psi^b = c_1 \epsilon^{ab} \psi \psi , \qquad {\bar \psi}^{\dot a} {\bar \psi}^{\dot b} = c_2 \epsilon^{{\dot a}{\dot b}} {\bar \psi} {\bar \psi}
$$
For some constants $c_1$ and $c_2$.
We can now set $ab={\dot a}{\dot b}=12$ in the equation above; using $\epsilon^{12} = \epsilon^{{\dot 1}{\dot 2}} = 1$ and matching to (1), we find
$$
c_1 = - \frac{1}{2} , \qquad c_2 = \frac{1}{2}.
$$ | {
"domain": "physics.stackexchange",
"id": 100430,
"tags": "definition, conventions, fermions, spinors, grassmann-numbers"
} |
Pet hotel system in Python | Question: I am doing a simple application for a pet hotel. I have almost finished but I'm still new on Python and I would like to see if there is a more efficient way to write this, whilst supporting both Python 2 and 3.
My next steps would be to write a searching algorithm (search by booking ID) and sorting algorithm (merge sort/selection sort etc.) to sort out the different pet types.
import datetime
staffID = 'admin'
password = 'admin'
petName = []
petType = []
bookingID = []
roomID = []
boardedPets = []
history = []
roomInUse = []
roomToUse = []
roomRates = {'dogs':50, 'cats':45, 'birds':30, 'rodents':25}
dogcatRoomsAvailable = 60
birdRoomsAvailable = 80
rodentRoomsAvailable = 100
totalPriceStr = ""
# Login Function
# Requests user for staffID and password to gain access to the menu system
def loginFunction(s, p):
# Login inputs
staffID = input("Enter Staff ID: ")
password = input("Password: ")
# Check if staffID and password is correct;
# If input is not valid, it informs user that ID and password is invalid and requests again
loginTrust = False
while (loginTrust is False):
if (staffID == 'admin') and (password == 'admin'):
print("Successfully logged in")
loginTrust = True
else:
print("Wrong ID or Password. Please enter again. ")
loginTrust = False
staffID = input("Enter Staff ID: ")
password = input("Password: ")
# Check In Function
# Allows user to check in customers' pets
def checkIn(petNm, petTy, bookID, roomuse):
global dogcatRoomsAvailable
global birdRoomsAvailable
global rodentRoomsAvailable
# Pet Name Input
petName= input("Enter pet name: ")
petNm.append(petName)
#Pet Type Input
petType= input("\n'Dog', 'Cat', 'Bird', 'Rodent'\n Enter pet type: ")
# Check if petType is valid
petTyCheck = False
while petTyCheck == False:
if (petType.lower() == 'dog' or petType.lower() == 'cat' or petType.lower() == 'bird' or petType.lower() == 'rodent'):
# Check if rooms are still available
if (dogcatRoomsAvailable != 0):
petTy.append(petName)
petTyCheck = True
elif (birdRoomsAvailable != 0):
petTy.append(petName)
petTyCheck = True
elif (rodentRoomsAvailable != 0):
petTy.append(petName)
petTyCheck = True
else:
print("Rooms for dogs & cats are not available anymore. ")
print(boardedPets)
petTyCheck = True
FrontDeskMenu()
else:
print("Pet type must be only from the list")
petTyCheck = False
petType= input("\n'Dog', 'Cat', 'Bird', 'Rodent'\n Enter pet type: ")
# Check In Date Allocators
checkInDate = datetime.datetime.now()
cIdString = str(checkInDate)
bookingID = str(cIdString[0:4] + cIdString[5:7] + cIdString[8:10] + cIdString[11:13] + cIdString[14:16] + cIdString[17:19])
bookID.append(bookingID)
# Check Out Date Default
checkOutDate = 'Nil'
# Room Allocators
# Pet type input
print("\nRules when assigning rooms: \nFor dogs: 'D' + any numbers \nFor cats: 'C' + any numbers \nFor birds: 'B' + any numbers \nFor rodents: 'R' + any numbers")
print("Remember to insert letter and number plates in front of the kennel after bring the pets in! ")
roomToUse = input('\nAssign a room for the pet: ')
roomCheck = False
rIU = roomToUse[0]
print(rIU)
# Check if rooms are assigned accordingly for the animal
if (petType.lower() == 'dog'):
# Check if input starts with 'D' and is not in use
while roomCheck == False:
if (rIU.lower() == 'd' and (roomInUse.count(roomToUse.upper()) == 0)):
roomInUse.append(roomToUse.upper())
dogcatRoomsAvailable = dogcatRoomsAvailable - 1
print("Rooms left: ", dogcatRoomsAvailable)
roomCheck = True
# If input does not start with 'D'
elif (rIU.lower() != 'd'):
print("Room Number is either invalid or the room may be in use. Make sure the first letter starts with a 'D'. ")
roomCheck = False
roomToUse = input('Assign a room for the pet: ')
rIU = roomToUse[0]
# If room is in use
elif (roomInUse.count(roomToUse.upper()) != 0):
print("Room Number is either invalid or the room may be in use. Make sure the first letter starts with a 'D'. ")
roomCheck = False
roomToUse = input('Assign a room for the pet: ')
rIU = roomToUse[0]
else:
None
if (petType.lower() == 'cat'):
# Check if input starts with 'C' and is not in use
while roomCheck == False:
if (rIU.lower() == 'c' and (roomInUse.count(roomToUse.upper()) == 0)):
roomInUse.append(roomToUse.upper())
dogcatRoomsAvailable = dogcatRoomsAvailable - 1
print("Rooms left: ", dogcatRoomsAvailable)
roomCheck = True
# If input does not start with 'C'
elif (rIU.lower() != 'c'):
print("Room Number is either invalid or the room may be in use. Make sure the first letter starts with a 'C'. ")
roomCheck = False
roomToUse = input('Assign a room for the pet: ')
rIU = roomToUse[0]
# If room is in use
elif (roomInUse.count(roomToUse.upper()) != 0):
print("Room Number is either invalid or the room may be in use. Make sure the first letter starts with a 'C'. ")
roomCheck = False
roomToUse = input('Assign a room for the pet: ')
rIU = roomToUse[0]
else:
None
if (petType.lower() == 'bird'):
# Check if input starts with 'C' and is not in use
while roomCheck == False:
if (rIU.lower() == 'b' and (roomInUse.count(roomToUse.upper()) == 0)):
roomInUse.append(roomToUse.upper())
birdRoomsAvailable = birdRoomsAvailable - 1
print("Rooms left: ", birdRoomsAvailable)
roomCheck = True
# If input does not start with 'C'
elif (rIU.lower() != 'b'):
print("Room Number is either invalid or the room may be in use. Make sure the first letter starts with a 'C'. ")
roomCheck = False
roomToUse = input('Assign a room for the pet: ')
rIU = roomToUse[0]
# If room is in use
elif (roomInUse.count(roomToUse.upper()) != 0):
print("Room Number is either invalid or the room may be in use. Make sure the first letter starts with a 'C'. ")
roomCheck = False
roomToUse = input('Assign a room for the pet: ')
rIU = roomToUse[0]
else:
None
if (petType.lower() == 'rodent'):
# Check if input starts with 'R'
while roomCheck == False:
if (rIU.lower() == 'r' and (roomInUse.count(roomToUse.upper()) == 0)):
roomInUse.append(roomToUse.upper())
rodentRoomsAvailable = rodentRoomsAvailable - 1
print("Rooms left: ", rodentRoomsAvailable)
roomCheck = True
# If input does not start with 'R'
elif (rIU.lower() != 'r'):
print("Room Number is either invalid or the room may be in use. Make sure the first letter starts with a 'R'. ")
roomCheck = False
roomToUse = input('Assign a room for the pet: ')
rIU = roomToUse[0]
# If room is in use
elif (roomInUse.count(roomToUse.upper()) != 0):
print("Room Number is either invalid or the room may be in use. Make sure the first letter starts with a 'R'. ")
roomCheck = False
roomToUse = input('Assign a room for the pet: ')
rIU = roomToUse[0]
else:
None
# Put information into boardedPets
boardedPets.append([bookingID, petName.title(), petType.title(), cIdString, roomToUse.title(), checkOutDate])
print(boardedPets)
print(roomInUse)
print(len(roomInUse))
print(petName)
# Call back the menu after finishing task
FrontDeskMenu()
def CheckOut():
# Requests for bookingID to checkout
cObid = str(input("Please enter booking ID: "))
counter = 0
outCheck = False
# Misc
cBidLenC = [cObid[i:i+1] for i in range(0, len(cObid), 1)]
print(cBidLenC)
boardNum = len(boardedPets)
print("Boarded pets left: ", boardNum)
# Check out date to be assigned
checkOutDate = datetime.datetime.now()
cOdString = str(checkOutDate)
if (len(cBidLenC) > 14):
print("Invalid booking ID")
cObid = str(input("Please enter booking ID: "))
elif (len(cBidLenC) < 14):
print("Invalid booking ID")
cObid = str(input("Please enter booking ID: "))
elif (len(cBidLenC) == 14):
print("Correct booking ID: ")
# Check out the pets
# Remove pet to check out from boardedPets list
# Insert the pet into history list
while outCheck == False:
for e in boardedPets: # for each list in boardedpets
print('xyz')
for element in e: # for each element in list
print('abc')
if cObid in element:
print('qwe')
# Payment
checkInDay = int(e[3][8:10])
checkOutDay = int(cOdString[8:10])
daysStayed = checkOutDay - checkInDay
if (e[2] == 'Dog'):
# Assume same day checkout rate is also the rate of one day
if (daysStayed == 0):
totalPrice = roomRates['dogs'] * daysStayed + roomRates['dogs']
print("Total days stayed: ", daysStayed)
print("Total: ", totalPrice)
totalPriceStr = ("$" + str(totalPrice))
elif (daysStayed >= 1):
totalPrice = roomRates['dogs'] * daysStayed
print("Total days stayed: ", daysStayed)
print("Total price: $", totalPrice)
elif (e[2] == 'Cat'):
# Assume same day checkout rate is also the rate of one day
if (daysStayed == 0):
totalPrice = roomRates['cats'] * daysStayed + roomRates['cats']
print("Total days stayed: ", daysStayed)
print("Total: ", totalPrice)
totalPriceStr = ("$" + str(totalPrice))
elif (daysStayed >= 1):
totalPrice = roomRates['birds'] * daysStayed
print("Total days stayed: ", daysStayed)
print("Total price: $", totalPrice)
elif (e[2] == 'Bird'):
# Assume same day checkout rate is also the rate of one day
if (daysStayed == 0):
totalPrice = roomRates['birds'] * daysStayed + roomRates['birds']
print("Total days stayed: ", daysStayed)
print("Total: ", totalPrice)
totalPriceStr = ("$" + str(totalPrice))
elif (daysStayed >= 1):
totalPrice = roomRates['birds'] * daysStayed
print("Total days stayed: ", daysStayed)
print("Total price: $", totalPrice)
elif (e[2] == 'Rodent'):
# Assume same day checkout rate is also the rate of one day
if (daysStayed == 0):
totalPrice = roomRates['rodents'] * daysStayed + roomRates['rodents']
print("Total days stayed: ", daysStayed)
print("Total: ", totalPrice)
totalPriceStr = ("$" + str(totalPrice))
elif (daysStayed >= 1):
totalPrice = roomRates['rodents'] * daysStayed
print("Total days stayed: ", daysStayed)
print("Total price: $", totalPrice)
# Data manipulations
outCheck = True
e.pop(5)
e.insert(5, cOdString)
e.append(totalPriceStr)
history.append(e)
boardedPets.pop(counter)
print("Checked out. Remaining: ", len(boardedPets))
print(boardedPets)
print("History length: ", len(history))
print(history)
counter += 1
if outCheck == True:
print("Finished checkout. ")
else:
print("Booking ID not found. Please enter again. ")
cObid = str(input("Please enter booking ID: "))
# Call back the menu after finishing task
FrontDeskMenu()
# Room Availability
# Check for availability of rooms
def roomAvailability():
print("\nRoom Availability\n")
print("Dogs: ", dogcatRoomsAvailable)
print("Birds: ", birdRoomsAvailable)
print("Rodents: ", rodentRoomsAvailable)
FrontDeskMenu()
# History function
# Reads history of pets boarded
def History():
print(history)
FrontDeskMenu()
# Search function
# note: the booking ID is ALWAYS sorted
def SearchFunction():
boardedIDList = []
count = 0
search = str(input("Enter booking ID: "))
while (count < len(boardedPets)):
bc = boardedPets[count][0]
boardedIDList.append(bc)
count = count + 1
search = ("Enter booking ID: ")
for el in boardedIDList:
print(el)
print(boardedIDList)
FrontDeskMenu()
# Menu
# Menu used for calling functions
def FrontDeskMenu():
print("\nTaylor's Pet Hotel\nFront Desk Admin")
print("A. Check in pets")
print("B. Check out pets")
print("C. Rooms Availability")
print("D. History")
print("E. Binary Search")
print("F. Exit\n")
# Input for calling functions
userInput = input("What would you like to do today?: ")
# Check if userInput is valid; if input is not valid, it continues to ask for a valid input
inputCheck = False
while (inputCheck is False):
# Checks userInput and executes the function requested by the user
if (userInput.lower() == 'a'):
checkIn(petName, petType, bookingID, roomInUse)
inputCheck = True
elif (userInput.lower() == 'b'):
CheckOut()
inputCheck = True
elif (userInput.lower() == 'c'):
roomAvailability()
inputCheck = True
elif (userInput.lower() == 'd'):
History()
inputCheck = True
elif (userInput.lower() == 'e'):
SearchFunction()
inputCheck = True
elif (userInput.lower() == 'f'):
quit()
else:
print("Invalid value! Please try again.")
userInput = input("What would you like to do today?: ")
inputCheck = False
loginFunction(staffID, password)
FrontDeskMenu()
print(boardedPets)
Answer: password = 'admin'
You may have guessed this already, but this is not a secure way to store a password. It should be hashed, and stored in a file that has restrictive permissions. This is only a start - you can do more advanced things like using the OS keychain, etc.
while (loginTrust is False):
can be
while not loginTrust:
The same applies to while petTyCheck == False.
This:
if (petType.lower() == 'dog' or petType.lower() == 'cat' or petType.lower() == 'bird' or petType.lower() == 'rodent'):
can be:
if petType.lower() in ('dog', 'cat', 'bird', 'rodent'):
Even better, if you de-pluralize your key names in roomRates, you can write:
if petType.lower() in roomRates.keys():
When you write this:
petType= input("\n'Dog', 'Cat', 'Bird', 'Rodent'\n Enter pet type: ")
You shouldn't hard-code those pet names. Instead, use a variable you already have, such as roomRates:
print(', '.join(roomRates.keys()))
input('Enter pet type: ')
This:
bookingID = str(cIdString[0:4] + cIdString[5:7] + cIdString[8:10] + cIdString[11:13] + cIdString[14:16] + cIdString[17:19])
should not be done this way. As far as I can tell, you're using a custom date format. Read about using strftime for this purpose.
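For illustration (my sketch; the 14-character YYYYMMDDHHMMSS format is inferred from the question's slicing), strftime produces the same booking ID directly:

```python
import datetime

# Build a 14-character booking ID (YYYYMMDDHHMMSS) in one call,
# instead of slicing str(datetime.datetime.now()) by hand.
check_in = datetime.datetime(2015, 8, 10, 14, 30, 59)  # fixed time for the example
booking_id = check_in.strftime("%Y%m%d%H%M%S")
print(booking_id)  # 20150810143059
```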
This:
print("\nRules when assigning rooms: \nFor dogs: 'D' + any numbers \nFor cats: 'C' + any numbers \nFor birds: 'B' + any numbers \nFor rodents: 'R' + any numbers")
should have you iterating over the list of pet type names, taking the first character and capitalizing it. Similarly, any other time that you've hard-coded a pet type name, you should attempt to get it from an existing variable.
This:
if (len(cBidLenC) > 14):
print("Invalid booking ID")
cObid = str(input("Please enter booking ID: "))
elif (len(cBidLenC) < 14):
print("Invalid booking ID")
cObid = str(input("Please enter booking ID: "))
elif (len(cBidLenC) == 14):
print("Correct booking ID: ")
should be:
if len(cBidLenC) != 14:
print('Invalid booking ID')
else:
print('Valid booking ID.')
Also, that logic needs to be adjusted so that you loop until the ID is valid.
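For instance (my sketch, not the answer's code; the prompt function is passed in so the loop can be exercised without real keyboard input):

```python
def read_booking_id(prompt=input):
    # Keep asking until the booking ID has exactly 14 characters.
    booking_id = prompt("Please enter booking ID: ")
    while len(booking_id) != 14:
        print("Invalid booking ID")
        booking_id = prompt("Please enter booking ID: ")
    return booking_id
```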
These:
checkInDay = int(e[3][8:10])
checkOutDay = int(cOdString[8:10])
should not be using string extraction for date components. You should be using actual date objects and getting the day field from them.
This:
count = count + 1
should be
count += 1
You should also consider writing a main function rather than having global code. | {
"domain": "codereview.stackexchange",
"id": 32998,
"tags": "python, beginner"
} |
The distance square in the Newton's law of universal gravitation is really a square? | Question: When I was in the university (in the late 90s, circa 1995) I was told there had been research investigating the $2$ (the square of distance) in the Newton's law of universal gravitation.
$$F=G\frac{m_1m_2}{r^2}.$$
Maybe a model like
$$F=G\frac{m_1m_2}{r^a}$$
with $a$ slightly different from $2$, let's say $1.999$ or $2.001$, fits some experimental data better?
Is that really true? Or did I misunderstand something?
Answer: This was suggested by Asaph Hall in 1894, in an attempt to explain the anomalies in the orbit of Mercury. I retrieved the original article in http://adsabs.harvard.edu/full/1894AJ.....14...49H
Interestingly, he mentions in the introduction that Newton himself had already considered in the Principia what happens if the exponent is not exactly 2, and had concluded that the observations available to him strongly supported the exact power 2!
The story is retold, e.g., on p.356 of
N.R. Hanson, Isis 53 (1962), 359-378.
See also Section 2 of
http://adsabs.harvard.edu/full/2005MNRAS.358.1273V | {
"domain": "physics.stackexchange",
"id": 57192,
"tags": "gravity, experimental-physics, newtonian-gravity"
} |
Merge sort implementation in Python | Question: def mergesort( array ):
# array is a list
#base case
if len(array) <= 1:
return array
else:
split = int(len(array)/2)
#left and right will be sorted arrays
left = mergesort(array[:split])
right = mergesort(array[split:])
sortedArray = [0]*len(array)
#sorted array "pointers"
l = 0
r = 0
#merge routine
for i in range(len(array)):
try:
#Fails if l or r excede the length of the array
if left[l] < right[r]:
sortedArray[i] = left[l]
l = l+1
else:
sortedArray[i] = right[r]
r = r+1
except:
if r < len(right):
#sortedArray[i] = right[r]
#r = r+1
for j in range(len(array) - r-l):
sortedArray[i+j] = right[r+j]
break
else:
#sortedArray[i] = left[l]
#l = l+1
for j in range( len(array) - r-l):
sortedArray[i+j] = left[l+j]
break
return sortedArray
Answer: First of all, the code suffers a very typical problem. The single most important feature of merge sort is stability: it preserves the order of the items which compare equal. As coded,
if left[l] < right[r]:
sortedArray[i] = left[l]
l = l+1
else:
sortedArray[i] = right[r]
r = r+1
of two equal elements the right one is merged first, and stability is lost. The fix is simple:
if left[l] <= right[r]:
(or if right[r] < left[l]: if you prefer).
I don't think that try/except on each iteration is a way to go. Consider
try:
while i in range(len(array)):
....
except:
....
Of course here i is not known in the except clause. Again, the fix is simple. Notice that the loop is never terminated by its condition: either left or right is exhausted before i reaches the limit. It means that testing the condition is pointless, and i is an index on the same footing as l and r:
l = 0
r = 0
i = 0
try:
while True:
....
except:
....
Naked except are to be avoided. Do except IndexError: explicitly. | {
"domain": "codereview.stackexchange",
"id": 21960,
"tags": "python, mergesort"
} |
Getting HRTF from 3-D scan + acoustic simulation | Question: This question follows from Modelling propagation of sound wave by particle simulation
This last fortnight I have started experimenting with Binaural audio; it really blows everything else out of the water.
The main reason it hasn't taken off, I think, is that every individual needs to get their own HRTF calculated, which consists of sitting in a soundproof room with microphones in your ear canals, while something makes popping noises at thousands of locations around you.
I'm only aware of one service provider that may calculate your HRTF: http://www.physiol.usyd.edu.au/~simonc/hrtf_rec.htm and they are in Australia!
I'm wondering whether a more practical method may emerge, which would involve taking a 3-D scan of one's head (maybe using something like http://www.david-3d.com/) and shipping it off for a heavy dose of distributed computing which would return HRTFs for that individual.
I wonder if we may see a day where people have their own HRTF data stored in the cloud, and can enjoy binaural sound on their iPods.
How far away is such a technology? And are there any other contenders for measuring HRTF?
π
PS could we possibly have some more tags like "binaural", "hrtf/hrir"
Answer: In the end I managed to do this myself, working in collaboration with http://ir-ltd.net/ to get the scan of my head, Blender to refine the mesh and position a cloud of microphone points, http://www.waveller.com/Waveller_Cloud/ to compute the frequency responses for these points, and finally some Python/NumPy scripting to convert these into impulse responses.
We have collaborated on a joint paper which is being presented at the forthcoming AES conference.
Please leave a message after July '14 if you would like me to link the paper. There is currently no link as it is still in draft form.
I'm using the results of the simulation and I'm happy with them.
EDIT: http://www.aes.org/e-lib/browse.cfm?elib=17365 | {
"domain": "dsp.stackexchange",
"id": 1873,
"tags": "audio, 3d, spatial, hrtf"
} |
Primitive Twitch.tv IRC Chat Bot | Question: So currently I have this basic little chat bot that can read commands and can timeout users if their message contains a banned word or phrase. I was wondering how I can improve on this bot to be able to !add "word" to the set of banned words and overall general flaw improvements.
import string
from Read import getUser, getMessage
from Socket import openSocket, sendMessage
from Initialize import joinRoom
s = openSocket()
joinRoom(s)
readbuffer = ""
banned_set = {"badword1", "badword2"}
while True:
readbuffer = readbuffer + s.recv(1024)
temp = string.split(readbuffer, "\n")
readbuffer = temp.pop()
for line in temp:
print(line)
if "PING" in line:
s.send(line.replace("PING", "PONG"))
break
user = getUser(line)
message = getMessage(line)
print user + " typed :" + message
if not banned_set.isdisjoint(message.lower().split()):
sendMessage(s, "/timeout " + user)
break
if "!guitars" in message:
sendMessage(s, "Ibanez RG920QM Premium")
break
Answer: In addition to the issues @zondo has pointed out (i.e. PEP 8, some better operators, and the string features), I would also like to point out a few things.
1) Variable names
Variable names such as temp are to be avoided. A much better name for this variable would be something like lines, messages, stack, messageStack^, etc.
^ Note: non PEP 8 camelCasing used to be consistent with existing code as posted. Obviously you would make this message_stack when fixing that issue.
2) Don't PONG everybody!
In your code, it should be noted, that lines 17 - 19 inclusive (shown below for brevity) introduce some (probably?) undesired behaviour...
if "PING" in line:
s.send(line.replace("PING", "PONG"))
break
Consider that a user in the chat says "PING". Your bot will replace it with PONG and send the message back to the room.
This would be particularly bad given that this if-statement occurs before the banned words checking code (and break's out of the loop). Users can now use bad words to their heart's content, provided they include the word "PING" (in uppercase) in their message! Furthermore, the bot will repeat these bad words back to the room!! (This is how security bugs get created)
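A minimal guard (my sketch; the exact line shapes are assumed from typical Twitch IRC traffic) is to answer only lines that actually begin with the PING command, rather than any line containing the word:

```python
def is_server_ping(line):
    # Treat the line as a server keep-alive only if it actually
    # starts with the PING command, not merely contains "PING".
    return line == "PING" or line.startswith("PING ")

# A real server keep-alive is answered...
print(is_server_ping("PING :tmi.twitch.tv"))                          # True
# ...but a chat message that merely mentions PING is not.
print(is_server_ping(":user!user@host PRIVMSG #chan :PING me pls"))   # False
```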
Note, if you do end up implementing an !add command to insert items into banned_set, PLEASE ensure you have successfully protected your adding code from injection!
3) Decide your case-consistency and stick with it.
On line 23 you include a call to message.lower() (the result of which is not stored anywhere). Then on line 26 you compare message to a lower-case command string ("!guitars"). Do you want "!Guitars" to work just like "!guitars"? If so, you may want to make message lowercase before you split it (as you're already doing for the bad-words check).
Furthermore, with your current logic the message "I've just added the !guitars command to my bot" will trigger the same response as just saying "!guitars". This is because your current logic (using the in operator,) disregards the position of the command string within the message. | {
"domain": "codereview.stackexchange",
"id": 19327,
"tags": "python, security, chat"
} |
Names of IBM Q backends | Question: IBM Q backends have many different names, see for example this link. We have for example processors called Melbourne, Tokyo, Armonk etc.
I am curious where these names come from? For example, I know that IBM headquarter is placed in Armonk, NY. But what about others? Is there any special logic behind naming IBM processors?
Answer: The documentation states that "All quantum systems are given a city name, e.g., ibmq_johannesburg. This name does not indicate where the actual quantum system is hosted."
https://quantum-computing.ibm.com/docs/cloud/backends/configuration
Some cities (e.g., Yorktown) host IBM Research centers. | {
"domain": "quantumcomputing.stackexchange",
"id": 1620,
"tags": "ibm-q-experience, history"
} |
Algorithm that receives a dictionary, converts it to a GET string, and is optimized for big data | Question: I found this question online as an example from a technical interview and it seems to be a flawed question in many ways. It made me curious how I would answer it. So, If you were on a technical Python interview and asked to do the following:
Write an algorithm that receives a dictionary, converts it to a GET string, and is optimized for big data.
Which option would you consider the best answer? Any other code related comments are welcome.
Common:
import requests
base_url = "https://api.github.com"
data = {'per_page': 10}
node = 'users/arctelix/repos'
Option 1:
My first thought was just answer the question in the simplest form and use pagination to control the size of the data returned.
def get_query_str(node, data=None):
# base query
query_str = "%s/%s" % (base_url, node)
# build query params dict
query_params = "&".join(["%s=%s" % (k,str(v))
for k, v in data.items()])
if query_params:
query_str += "?%s" % query_params
return query_str
print("\n--Option 1--\n")
url = get_query_str(node, data)
print("url = %s" % url)
Option 2:
Well, that's not really optimized for big data and the requests library will convert a dict to params for me. Secondly, a generator would be a great way to keep memory in check with very large data sets.
def get_resource(node, data=None):
url = "%s/%s" % (base_url, node)
print("geting resource : %s %s" % (url, data))
resp = requests.get(url, params=data)
json = resp.json()
yield json
print("\n--Option 2--\n")
results = get_resource(node, data)
for r in results:
print(r)
Option 3:
Just in case the interviewer was really looking to see if I knew how join() and a list comprehension could be used to convert a dictionary to a string of query parameters. Let's put it all together and use a generator for not only the pages, but the objects as well. get_query_str is totally unnecessary, but again the task was to write something that returned a "GET string"..
class Github:
base_url = "https://api.github.com"
def get_query_str(self, node, data=None):
# base query
query_str = "%s/%s" % (self.base_url, node)
# build query params dict
query_params = "&".join(["%s=%s" % (k,str(v))
for k, v in data.items()])
if query_params:
query_str += "?%s" % query_params
return query_str
def get(self, node, data=None):
data = data or {}
data['per_page'] = data.get('per_page', 50)
page = range(0,data['per_page'])
p=0
while len(page) == data['per_page']:
data['page'] = p
query = self.get_query_str(node, data)
page = list(self.req_resource(query))
p += 1
yield page
def req_resource(self, query):
print("geting resource : %s" % query)
r = requests.get(query)
j = r.json()
yield j
gh = Github()
pages = gh.get(node, data)
print("\n--Option 3--\n")
for page in pages:
for repo in page:
print("repo=%s" % repo)
Answer: There are a bunch of things that are not said or rendered implicit by the question so I’m going to assume that the optimized for big data part is about the GitHub API response. So I’d go with the third version. But first, some general advices:
Document your code. Docstrings are missing all around your code. You should describe what each part of your API is doing or no-one will make the effort to figure it out and use it.
Don't use %, sprintf-like formatting. These are things of the past and have been superseded by the str.format function. You may also want to try and push newer features such as formatted string literals (or f-strings) of Python 3.6: query_str = f'{self.base_url}/{node}'.
You should use a generator expression rather than a list comprehension in your '&'.join as you will discard the list anyway. It will save you some memory management. Just remove the brackets and you're good to go.
You shouldn't use f"{k}={v}" for k, v in data.items(): what if a key or a value contains a '&' or an '='? You should encode the values in your dictionary before joining them. urllib.parse.urlencode (which is called by requests for you) is your friend.
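To make the encoding point concrete, here is a quick sketch (the parameter values are made up) contrasting a hand-joined query string with `urlencode`:

```python
from urllib.parse import urlencode

# a value containing ':', '&' and '=' would corrupt a hand-joined query string
params = {"per_page": 50, "q": "language:python&sort=stars"}

naive = "&".join(f"{k}={v}" for k, v in params.items())
safe = urlencode(params)  # percent-encodes the reserved characters

print(naive)  # per_page=50&q=language:python&sort=stars
print(safe)   # per_page=50&q=language%3Apython%26sort%3Dstars
```

The naive string is ambiguous to the server (it now looks like three parameters), while the encoded one round-trips cleanly.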
Now about handling the response:
page = list(self.req_resource(query)) defeats the very purpose of having a generator in the first place. Consider using yield from self.req_resource(query) instead.
Pagination of the Github API should be handled using the Link header instead of manually incrementing the page number. Use the request's headers dictionary on your response to easily get them.
Consider using the threading module to fetch the next page of data while you are processing the current one. | {
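A minimal illustration of the yield from point, with the request stubbed out so it runs offline (the node name and repo names are placeholders, not real GitHub calls):

```python
def req_resource(query):
    # stand-in for requests.get(query).json(): one decoded page of results
    return [{"name": "repo-%d" % i} for i in range(3)]

def get(node):
    for page_number in range(2):
        # re-yield every item of the page without materialising an outer list
        yield from req_resource("%s?page=%d" % (node, page_number))

repos = list(get("orgs/python/repos"))
print(len(repos))  # 6
```

Compare this with page = list(self.req_resource(query)) in the reviewed code, which forces the whole page into memory before yielding it.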
"domain": "codereview.stackexchange",
"id": 22525,
"tags": "python, interview-questions, comparative-review"
} |
LIGO interferometer vs. holographic interferometer, is there a difference? | Question: I understand they are used for different purposes, but does anyone know why you couldn't use LIGO in place of the holographic interferometer? The holographic interferometer was used to test for a holographic universe by trying to measure "noise" in the fabric of space-time, but since a laser was used I assume it does this by measuring displacement of the laser beams, which is essentially what LIGO does to measure the distortion of space as it flexes and relaxes.
By the way the test with the Holometer was done in Fermilab in 2015 by Hogan and the results did not support the holographic theory of the universe. Thank you. ( I was not sure of what team to ask this. )
Answer: The Fermilab holometer consists of two Michelson interferometers (with ~40 meter long arms) sitting right next to each other. The idea is that there may be a fundamental jiggling of space-time (not necessarily gravitational waves) that will move the two splitter mirrors in the same direction. This would then cause a correlated change in the fringe patterns of the two interferometers. In a Dec 2015 paper (arXiv) the measured correlation function is shown between 0-6 MHz. The data above 1 MHz (above environmental influences) is used to rule out a particular model of Planck scale space-time jiggling. At some level they must also be able to set a limit on high frequency gravitational waves causing a simultaneous change in output from the two interferometers.
The difference between the LIGO interferometers and the holometer is that the Hanford, Washington and Livingston, Louisiana splitter mirrors are 3000 km apart while the two splitter mirrors at Fermilab are right next to each other. Also, the LIGO arms are 4000 meters long versus 40 meters for the holometer.
Presumably, the outputs of the two LIGO interferometers could be cross-correlated to study the simultaneous jiggling of splitter mirrors that are 3000 km apart. Because of the longer arms, LIGO would do the correlation at lower frequencies than Fermilab. Are there any hypothesized Planck scale fluctuations that might cause simultaneous jiggling in mirrors 3000 km apart?
In fact, LIGO has seen a simultaneous (shifted by 7 msec) change in fringe patterns of the two interferometers (e.g. GW150914). A correlation function (shifted by 7 msec) would show power in the 30-150 Hz region. LIGO has interpreted this signal as a gravitational wave.
"domain": "physics.stackexchange",
"id": 33344,
"tags": "classical-mechanics, experimental-physics, interferometry"
} |
Why do predatory mites have to be introduced multiple times? | Question: I'm combating spider mite infestation using either Phytoseiulus persimilis or Amblyseius californicus. After extensive study of the literature, I'm still unsure why the producers of these predatory mites suggest that they have to be introduced several times (2-3 times depending on manufacturer). I know that adult predatory mites are very agile and scout for new prey. Phytoseiulus is also known to wipe out spider mite populations. So as long as spider mites are present, why should one introduce them at intervals rather than just introduce a large number of predatory mites at the beginning?
Answer: You are right that ideally it should be enough to apply them once, but predatory mites seem to be quite sensitive: If it gets too cold or too hot or the moisture is too low, they might die.
I applied several rounds of predatory mites in non-optimal conditions without any success. They just kept disappearing without any effect whatsoever. Funnily, some small type of Heteroptera came to save my plants (arriving naturally, without my doing).
Next time I would try some Chryson instead. | {
"domain": "biology.stackexchange",
"id": 11145,
"tags": "ecology, predation"
} |
Stata-style replace in Python | Question: In Stata, I can perform a conditional replace using the following code:
replace target_var = new_value if condition_var1 == x & condition_var2 == y
What's the most pythonic way to reproduce the above on a pandas dataframe? Bonus points if I can throw the new values, and conditions into a dictionary to loop over.
To add a bit more context, I'm trying to clean some geographic data, so I'll have a lot of lines like
replace county_name = new_name_1 if district == X_1 and city == Y_1
....
replace county_name = new_name_N if district == X_N and city == Y_N
What I've found so far:
pd.replace, which lets me do stuff like the following, but doesn't seem to accept logical conditions:
replacements = { 1: 'Male', 2: 'Female', 0: 'Not Recorded' }
df['sex'].replace(replacements, inplace=True)
Answer:
df.where(condition, replacement, inplace=True)
Condition is assumed to be boolean Series/Numpy array. Check out where documentation - here is an example. | {
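Note that DataFrame.where keeps values where the condition is True and substitutes where it is False (its mirror image is DataFrame.mask). Another common way to reproduce the Stata pattern, including the dictionary loop asked about, is boolean masking with df.loc. A sketch (the column names and values below are invented for illustration):

```python
import pandas as pd

df = pd.DataFrame({
    "district": ["X1", "X1", "X2"],
    "city": ["Y1", "Y2", "Y1"],
    "county_name": ["old_a", "old_b", "old_c"],
})

# (district, city) -> replacement county name
fixes = {("X1", "Y1"): "new_1", ("X2", "Y1"): "new_2"}

for (district, city), new_name in fixes.items():
    mask = (df["district"] == district) & (df["city"] == city)
    df.loc[mask, "county_name"] = new_name

print(df["county_name"].tolist())  # ['new_1', 'old_b', 'new_2']
```

Each loop iteration is the direct analogue of one Stata `replace ... if` line.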
"domain": "datascience.stackexchange",
"id": 3226,
"tags": "python, pandas, stata"
} |
Are quantum simulators like Microsoft Q# actually using quantum mechanics in their chips? | Question: Unlike Google's Bristlecone or IBM's qubit computer, do simulators like Q# or Alibaba really use quantum mechanics anywhere in their physical chips? Are they just defining properties using a classical computer and trying to achieve quantum simulations?
Answer: There is a distinction between what you use to write a program (the SDK), and what you use to run it (the backend).
The SDK can be either a graphical interface, like the IBM Q Experience or the CAS-Alibaba Quantum Computing Laboratory. It could also be a way of writing programs, like Q#, QISKit, Forest, Cirq, ProjectQ, etc.
The backend can either be a simulator that runs on a standard computer, or an actual quantum device.
Simulators use our knowledge of quantum theory to construct the simulation program, but no actual quantum computing happens. It is just the standard chips of your own computer, or of a supercomputer they let you use, running standard classical programs.
This approach is something we can do for small quantum programs, but the runtime will become unfeasibly long for large ones. So if you notice that your job takes longer and longer to run as you add more qubits, you know that it is being classically simulated rather than run on a real device.
The only actual quantum devices that can be used are those by IBM, Rigetti and Alibaba. To write programs for these you can use the Q Experience, QISKit or ProjectQ for the IBM devices, Rigetti's Forest for their devices, or the Alibaba graphical interface for their device.
Microsoft are making hardware, and they hope that it will one day be used as a backend in Q#. But they have not yet gotten a single qubit, so we might have to wait a while. Until then it will be only simulators that can be used (or other companies hardware). | {
"domain": "quantumcomputing.stackexchange",
"id": 324,
"tags": "experimental-realization, simulation"
} |
Derivation of stress-energy tensor in curved space-time | Question: I have a problem with calculating the components of the stress-energy tensor in general relativity. I've learned that if we have an action of the form:
$$
S=\int (R+\mathcal L_{m})\sqrt{-g}d^4x
$$
then we can find the S-E tensor by varying the matter term with respect to either $g_{\mu \nu}$ or $g^{\mu \nu}$. In fact, if we vary with respect to $g^{\mu \nu}$ then we can write it as:
$$
T_{\mu \nu} \sim \frac{1}{\sqrt{-g}}\frac{\delta \mathcal L_{m}\sqrt{-g}}{\delta g^{\mu \nu}}
$$
Here I may have neglected some minus signs, which aren't important for my question.
And also by variation with respect to $g_{\mu \nu}$ we get:
$$
T^{\mu \nu} \sim \frac{1}{\sqrt{-g}}\frac{\delta \mathcal L_{m}\sqrt{-g}}{\delta g_{\mu \nu}}
$$
And now my question: can we prove that the (co/contra)variant S-E tensors which we defined above are related to each other by the usual index-raising method, i.e.:
$$
T_{\mu \nu}=g_{\mu \alpha}g_{\nu \beta}T^{\alpha \beta}
$$
For an arbitrary Lagrangian density, which may be a functional of the metric or its derivatives?
Answer: We can but we must use the fact that
$$\delta g^{\mu\nu} = -g^{\mu\alpha} g^{\nu\beta} \delta g_{\alpha\beta}$$
$$\delta g_{\mu\nu} = -g_{\mu\alpha} g_{\nu\beta} \delta g^{\alpha\beta}$$
Then by using the chain rule
\begin{eqnarray}
T_{\mu\nu} &=& - \frac{2}{\sqrt{-g}} \frac{\delta S_m}{\delta g_{\alpha\beta}} \frac{\delta g_{\alpha\beta}}{\delta g^{\mu\nu}}\;,\\
&=& - \frac{2}{\sqrt{-g}} \frac{\delta S_m}{\delta g_{\alpha\beta}} \times \bigg( -g_{\mu\alpha} g_{\nu\beta} \bigg)\;,\\
&=& g_{\mu\alpha} g_{\nu\beta} \frac{2}{\sqrt{-g}} \frac{\delta S_m}{\delta g_{\alpha\beta}}\;,\\
&=:& g_{\mu\alpha} g_{\nu\beta} T^{\alpha \beta}\;,
\end{eqnarray}
where we have defined
$$T^{\alpha \beta} = \frac{2}{\sqrt{-g}} \frac{\delta S_m}{\delta g_{\alpha\beta}}$$ | {
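The key identity $\delta g^{\mu\nu} = -g^{\mu\alpha} g^{\nu\beta} \delta g_{\alpha\beta}$ is just the matrix statement $\delta(g^{-1}) = -g^{-1}(\delta g)\,g^{-1}$, obtained by varying $g^{-1}g = \mathbb{1}$. A quick numerical sketch (a random symmetric matrix stands in for the metric here):

```python
import numpy as np

rng = np.random.default_rng(0)
# a random symmetric, well-conditioned "metric" and a small symmetric perturbation
A = rng.normal(size=(4, 4))
g = A + A.T + 8 * np.eye(4)
dA = 1e-7 * rng.normal(size=(4, 4))
dg = dA + dA.T

g_inv = np.linalg.inv(g)
# first-order identity: delta(g^{-1}) = -g^{-1} (dg) g^{-1}
lhs = np.linalg.inv(g + dg) - g_inv
rhs = -g_inv @ dg @ g_inv
print(np.allclose(lhs, rhs, atol=1e-12))  # True
```

The agreement holds to first order in the perturbation, which is all the variational derivation needs.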
"domain": "physics.stackexchange",
"id": 37707,
"tags": "general-relativity"
} |
Gradient in cylindrical coordinate using covariant derivative | Question: I'm reading a little pdf book as an introduction to tensor analysis ("Quick introduction to tensor analysis", by R. A. Sharipov). I've reached the last section, where it is explained how it is possible to differentiate a tensor field in curvilinear coordinates. The author derives the formula for the covariant derivative of a general tensor:
$$ \nabla_p X^{i_1, \cdots, i_r}_{j_1, \cdots, j_s} = {{\partial X^{i_1, \cdots, i_r}_{j_1, \cdots, j_s}} \over {\partial y^p}} + \sum_{\alpha=1}^r \sum_{m_\alpha} \Gamma^{i_\alpha}_{pm_\alpha}X^{i_1, \cdots, m_\alpha, \cdots, i_r}_{j_1, \cdots, j_s} - \sum_{\alpha=1}^s \sum_{n_\alpha} \Gamma^{n_\alpha}_{pj_\alpha}X^{i_1, \cdots, i_r}_{j_1, \cdots, n_\alpha, \cdots, j_s} $$
I then used the formula (which is explained and derived inside the article) for the Christoffel symbol:
$$ \Gamma^k_{ij} = {{\partial y^k} \over {\partial x^q}} {{\partial^2 x^q} \over {\partial y^i \partial y^j}} $$
to calculate the Christoffel symbol for cylindrical coordinates.
The author leaves it as an exercise to the reader to derive the expression for the gradient of a function $f$ in cylindrical coordinates starting from the covariant derivative. I've tried to do what was asked; this is my attempt:
$$ \nabla f = (\nabla_\mu f) \hat{e}^\mu $$
which I've expanded into:
$$ \nabla f = \nabla_r f \hat{r} + \nabla_\varphi f \hat{\varphi} + \nabla_h f \hat{h} $$
where $r = \sqrt{(x^2 + y^2)}$, $\varphi = tan^{-1} {y \over x}$, $h = z$. Then I used the linearity of the derivation operation and used $f^r = f\hat{r}$, $f^\varphi = f\hat{\varphi}$, $f^h = f\hat{h}$. Hence the previous expansion can be calculated as:
$$ \nabla f = (\partial_r f^r + \Gamma^r_{rr} f^r + \Gamma^r_{r\varphi} f^\varphi + \Gamma^r_{rh} f^h) + (\partial_\varphi f^\varphi + \Gamma^\varphi_{\varphi r} f^r + \Gamma^\varphi_{\varphi \varphi} f^\varphi + \Gamma^\varphi_{\varphi h} f^h) + (\partial_h f^h + \Gamma^h_{h r} f^r + \Gamma^h_{h \varphi} f^\varphi + \Gamma^h_{h h} f^h)$$
where $\Gamma^r_{rr} = \Gamma^r_{r \varphi} = \Gamma^r_{rh} = \Gamma^\varphi_{\varphi r} = \Gamma^\varphi_{\varphi h} = \Gamma^h_{h r} = \Gamma^h_{h \varphi} = \Gamma^h_{hh} = 0$ and $\Gamma^\varphi_{\varphi \varphi} = {1 \over r}$
Henceforth:
$$ \nabla f = \partial_r f^r + \left (\partial_\varphi f^\varphi + {1 \over r} f^\varphi \right ) + \partial_h f^h $$
But from here I don't know how should I go forth, since the correct expression for gradient in cylindrical coordinates is:
$$ \nabla f = \partial_r f \hat{r} + {1 \over r} \partial_\varphi f \hat{\varphi} + \partial_h f \hat{h} $$
(which I've taken from wikipedia)
Any advice on how I shall go on to derive the correct gradient formula?
P.S. Excuse my poor English, I'm still practising it. Anyway, thanks in advance for your answer
Answer: The $\nabla$ in differential geometry is NOT the same $\nabla$ that you learn about in your vector calculus courses. In a typical vector calculus course, when one considers a function $f:\Bbb{R}^n\to\Bbb{R}$ and introduces $\nabla f$ as the gradient vector field, this is what I will henceforth refer to as $\text{grad}(f)$. This is a vector field i.e a tensor field of type $(1,0)$. In differential geometry, $\nabla f$ is a $(0,1)$ tensor field, i.e a covector field. By definition,
\begin{align}
\nabla f&:= df=\sum_{i=1}^n\frac{\partial f}{\partial x^i}\,dx^i\tag{$*$}
\end{align}
Even from your first equation, you can see that because $f$ is a smooth function, it is a $(0,0)$ tensor field, so that $r=s=0$. So, if $r$ and $s$ are $0$, there shouldn't even be any $\Gamma$ symbols. Just look at the formula you wrote:
\begin{align}
\nabla_{p}f&=\frac{\partial f}{\partial y^p}
\end{align}
(slightly sloppy notation, but this is exactly what $(*)$ says).
From here, if you want to recover the familiar vector-calculus formula, then it's not the $\Gamma$'s which matter, but rather the metric tensor itself, for which you need to know how one can convert between vectors and covectors using the musical isomorphism. By definition, if $g$ refers to the metric tensor on our manifold then
\begin{align}
\text{grad}(f)&:= g^{\sharp}(df)\\
&=g^{\sharp}\left(\sum_{i=1}^n\frac{\partial f}{\partial x^i}\,dx^i\right)\\
&=\sum_{i=1}^n\frac{\partial f}{\partial x^i}g^{\sharp}(dx^i)\\
&=\sum_{i,j=1}^n\frac{\partial f}{\partial x^i}g^{ij}\frac{\partial }{\partial x^j}\tag{$**$}
\end{align}
Here, $g_{ij}:=g\left(\frac{\partial}{\partial x^i},\frac{\partial}{\partial x^j}\right)$, and $(g^{ij})$ is the inverse matrix of $(g_{ij})$.
Now, $\frac{\partial}{\partial x^j}$ is the $j^{th}$ coordinate basis-vector, but it is not necessarily normalized. If we let $\mathbf{e}_j$ denote the normalized version, then the relationship is that
\begin{align}
\frac{\partial}{\partial x^j}&=\left\|\frac{\partial}{\partial x^j}\right\|\cdot \mathbf{e}_j =\sqrt{g_{jj}}\mathbf{e}_j\tag{$***$}
\end{align}
(i.e the length of a vector is the square root of its inner/dot-product with itself). Plugging $(***)$ into $(**)$ yields
\begin{align}
\text{grad}(f)&=\sum_{i,j=1}^ng^{ij}\sqrt{g_{jj}}\frac{\partial f}{\partial x^i}\,\mathbf{e}_j.
\end{align}
This is the general formula for the gradient vector field of a smooth function on any (Pseudo)-Riemannian manifold $(M,g)$.
In the case where $M=\Bbb{R}^3$ and $g$ is the standard metric tensor field on $\Bbb{R}^3$, the components relative to the cylindrical coordinates are
\begin{align}
[g_{ij}]&=
\begin{pmatrix}
g_{rr}& g_{r\phi}&g_{rz}\\
g_{\phi r}&g_{\phi\phi}&g_{\phi z}\\
g_{zr}&g_{z\phi}&g_{zz}
\end{pmatrix}
=
\begin{pmatrix}
1&0&0\\
0& r^2 & 0\\
0&0&1
\end{pmatrix}
\end{align}
Since the matrix is diagonal, the inverse matrix is simply the matrix whose entries are reciprocals. So, plugging this into the above expression, it actually simplifies a lot:
\begin{align}
\text{grad}(f)&=\sum_{i,j=1}^ng^{ij}\sqrt{g_{jj}}\frac{\partial f}{\partial x^i}\mathbf{e}_j\\
&=\sum_{i=1}^n\frac{1}{g_{ii}}\sqrt{g_{ii}}\frac{\partial f}{\partial x^i}\,\mathbf{e}_i\tag{due to diagonal matrix}\\
&=\sum_{i=1}^n\frac{1}{\sqrt{g_{ii}}}\frac{\partial f}{\partial x^i}\mathbf{e}_i
\end{align}
For the specific case of cylindrical coordinates, we thus get
\begin{align}
\text{grad}(f)&=\frac{1}{\sqrt{g_{rr}}}\frac{\partial f}{\partial r}\mathbf{e}_r+
\frac{1}{\sqrt{g_{\phi\phi}}}\frac{\partial f}{\partial \phi}\mathbf{e}_{\phi}+
\frac{1}{\sqrt{g_{zz}}}\frac{\partial f}{\partial z}\mathbf{e}_z\\
&=\frac{\partial f}{\partial r}\mathbf{e}_r + \frac{1}{r}\frac{\partial f}{\partial \phi}\mathbf{e}_{\phi}+\frac{\partial f}{\partial z}\mathbf{e}_z,
\end{align}
which is precisely the formula you quote. So, just to reiterate, the $\frac{1}{r}$ comes from the metric tensor itself, not the $\Gamma$ (the $\Gamma$'s only appear if you're covariantly differentiating tensor fields of rank $\geq 1$ i.e $r+s\geq 1$).
Take a look at this math answer of mine for a similar calculation for polar coordinates in the plane (it's almost exactly the same calculation), and see the various links there.
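As a quick numerical sanity check of the final formula, one can compare a central-difference Cartesian gradient against the cylindrical expression (the test function and evaluation point below are arbitrary choices):

```python
import numpy as np

# test point in cylindrical coordinates
rho, phi, z = 1.3, 0.8, 0.5
x, y = rho * np.cos(phi), rho * np.sin(phi)

def f_cart(x, y, z):
    # f = rho^2 * sin(phi) + z, expressed in Cartesian coordinates
    return (x**2 + y**2) * np.sin(np.arctan2(y, x)) + z

# numerical Cartesian gradient by central differences
h = 1e-6
grad_cart = np.array([
    (f_cart(x + h, y, z) - f_cart(x - h, y, z)) / (2 * h),
    (f_cart(x, y + h, z) - f_cart(x, y - h, z)) / (2 * h),
    (f_cart(x, y, z + h) - f_cart(x, y, z - h)) / (2 * h),
])

# cylindrical formula: df/drho e_rho + (1/rho) df/dphi e_phi + df/dz e_z
e_rho = np.array([np.cos(phi), np.sin(phi), 0.0])
e_phi = np.array([-np.sin(phi), np.cos(phi), 0.0])
e_z = np.array([0.0, 0.0, 1.0])
grad_cyl = 2 * rho * np.sin(phi) * e_rho + rho * np.cos(phi) * e_phi + 1.0 * e_z

print(np.allclose(grad_cart, grad_cyl, atol=1e-5))  # True
```

Here $\partial f/\partial\rho = 2\rho\sin\phi$ and $\frac{1}{\rho}\partial f/\partial\phi = \rho\cos\phi$, so the two gradients agree, as the derivation above predicts.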
Comments
YOu write
.... which I've expanded into
\begin{align}
\nabla f&=(\nabla_rf)\hat{r}+(\nabla_{\phi}f)\hat{\phi}+(\nabla_hf)\hat{h}
\end{align}
Well, this is just wrong; on the LHS you have a covector field (i.e a tensor field of type $(0,1)$) while on the RHS you have a vector field (a tensor field of type $(1,0)$) so of course they cannot be equal. This is also not the definition of $\nabla f$ as given in your book (which is the same as $(*)$ which I wrote above). Actually, even the $\hat{e}^{\mu}$ notation is terrible, because the $\hat{}$ somehow suggests you're talking about a unit covector field, which is just wrong. The correct equation is $(*)$ which I wrote above, and surely that equation is super easy to remember. | {
"domain": "physics.stackexchange",
"id": 80592,
"tags": "coordinate-systems, tensor-calculus, differentiation"
} |
tensorflow beginner demo, is it possible to train an int-num counter? | Question: I'm new to TensorFlow and deep learning, and I wish to get a general understanding through a beginner's demo, i.e. training an (int-)number counter to indicate the most repeated number in a set (if the most repeated number is not unique, the smallest one is chosen).
e.g.
if seed = [0,1,1,1,2,7,5,3] (an int-num-set as input), then most = 1 (the most repeated num here is 1, which is repeated 3 times);
if seed = [3,3,6,5,2,2,4,1], then most = 2 (both 2 and 3 repeated most/twice, then the smaller 2 is the result)
Here I didn't use the widely used demos like an image classifier on the MNIST data-set, for a more customized perspective and an easier way to get a data-set. So if this is not an appropriate problem for deep learning, please let me know.
The following is my code, and apparently the result is not as expected; may I have some advice?
like:
is this kind of problem suitable for deep learning to solve?
is the network-struct appropriate for this problem?
is the input/output data(or data-type) right for the network?
import random
import numpy as np
para_col = 16 # each (num-)set contains 16 int-num
para_row = 500 # the data-set contains 500 num-sets for trainning
para_epo = 100 # train 100 epochs
# initial the size of data-set for training
x_train = np.zeros([para_row, para_col], dtype = int)
y_train = np.zeros([para_row, 1], dtype = int)
# generate the data-set by random
for row in range(para_row):
seed = []
for col in range(para_col):
seed.append(random.randint(0,9))
most = max(set(seed), key = seed.count) # most repeated num in seed(set of 16 int-nums between 0~9)
# fill in data for trainning-set
x_train[row] = np.array(seed,dtype = int)
y_train[row] = most
# print(str(most) + " @ " + str(seed))
# define and training the network
import tensorflow as tf
# a simple network according to some tutorials
model = tf.keras.models.Sequential([
tf.keras.layers.Flatten(input_shape=(para_col, 1)),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
# train the network
model.fit(x_train, y_train, epochs = para_epo)
# test the network
seed_test = [5,1,2,3,4,5,6,7,8,5,5,1,2,3,4,5]
# seed_test = [1,1,1,3,4,5,6,7,8,9,0,1,2,3,4,5]
# seed_test = [9,0,1,9,4,5,6,7,8,9,0,1,2,3,4,5]
x_test = np.zeros([1,para_col],dtype = int)
x_test[0] = np.array(seed_test, dtype = int)
most_test = model.predict_on_batch(x_test)
print(seed_test)
for o in range(10):
print(str(o) + ": " + str(most_test[0][o]*100))
the training result looks converged, according to
...
Epoch 97/100
16/16 [==============================] - 0s 982us/step - loss: 0.1100 - accuracy: 0.9900
Epoch 98/100
16/16 [==============================] - 0s 1ms/step - loss: 0.1139 - accuracy: 0.9900
Epoch 99/100
16/16 [==============================] - 0s 967us/step - loss: 0.1017 - accuracy: 0.9860
Epoch 100/100
16/16 [==============================] - 0s 862us/step - loss: 0.1082 - accuracy: 0.9840
but the printed output looks unreasonable and random; the following is a result from one of the training runs
[5, 1, 2, 3, 4, 5, 6, 7, 8, 5, 5, 1, 2, 3, 4, 5]
0: 0.004467500184546225
1: 0.2172523643821478
2: 2.9886092990636826
3: 1.031165011227131
4: 69.71694827079773
5: 12.506482005119324
6: 1.0543939657509327
7: 0.2930430928245187
8: 8.086799830198288
9: 4.100832715630531
actually 5 is the right answer (repeated five times and most), but is the printed output indicating 4 is the answer (at a probability of 69.7%)?
Answer: This type of problem is not really suited to deep learning. Each node in the neural network expects numeric input, applies a linear transformation to it, followed by a non-linear transformation (the activation function), so your inputs need to be numeric. While your inputs are numbers, they are not being used numerically, as the inputs could just as well be letters or symbols. Also, your network looks like it is overfitting. It is very large for the number of inputs and so is probably just memorising the training data, which is why you appear to get good results on your training data.
Tensorflow has a tensorflow-datasets package (installed separately from the main TF package) which provides easy access to a range of datasets (see https://www.tensorflow.org/datasets for details). Maybe look here to find a suitable dataset to use. | {
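For comparison, the counting task itself has a one-line classical solution; np.argmax returns the first (i.e. smallest) index among tied maxima, which matches the tie-breaking rule in the question (most_repeated is simply a name chosen here):

```python
import numpy as np

def most_repeated(seed):
    # bincount tallies occurrences of each digit 0-9; argmax returns the
    # first (smallest) index among tied maxima
    return int(np.argmax(np.bincount(seed, minlength=10)))

print(most_repeated([5, 1, 2, 3, 4, 5, 6, 7, 8, 5, 5, 1, 2, 3, 4, 5]))  # 5
print(most_repeated([3, 3, 6, 5, 2, 2, 4, 1]))                          # 2
```

This reproduces both examples from the question, which makes it a useful oracle for checking any learned model.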
"domain": "datascience.stackexchange",
"id": 10958,
"tags": "deep-learning, tensorflow"
} |
Is it possible to calculate some kind of friction substitute for a fast moving object sliding on water? | Question: An object is accelerated on land, parallel to the surface of a tank of water. The object is then released onto the perfectly still water, making it slide on top of the water.
Is it possible to calculate some kind of substitute for friction between these two surfaces, or is the problem too complex?
Answer: There are two sources of drag when an object travels though a fluid like air or water.
The first is viscosity. To travel through a fluid, you must stir it. Fluid elements near the object travel near the speed of the object. Fluid elements farther away travel near the background speed of the fluid. When one fluid element slides past another, there is friction (except in superfluids). The object must exert a force on nearby fluid elements to make them overcome this friction. The fluid elements exert a reaction force back on the object, slowing the object. This force is called viscosity.
The second is that an object must push fluid out of the way in front of it, and fluid flows in behind to fill the space where the object was. Fluid must be accelerated to move out of the way, giving it kinetic energy. The object must exert a force on the fluid to accelerate it. There is a reaction force from the fluid on the object, slowing the object. This is called an inertial force.
Both forces are present. But in most motion, one is overwhelmingly bigger than the other. I show how to estimate the ratio in my answer to Shooting two projectiles at the same time with different mass. The ratio is called the Reynolds number.
In most everyday motion, the Reynold's number is large, indicating that inertial forces dominate. In that case, you typically ignore viscosity forces. This is especially true when the object is larger than an insect and moves faster than say 1 m/s.
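As a rough order-of-magnitude sketch of that estimate (the speed and size below are assumed values, not taken from the question):

```python
# Reynolds number Re = rho * v * L / mu for an object sliding on water
rho_water = 1000.0   # kg/m^3, density of water
mu_water = 1.0e-3    # Pa*s, dynamic viscosity of water near 20 C
v = 5.0              # m/s, assumed speed of the object
L = 0.1              # m, assumed characteristic size

Re = rho_water * v * L / mu_water
print(Re > 1e5)  # True: inertial forces dominate by far
```

A Reynolds number of order 10^5 confirms that, for an everyday-sized object at this speed, viscous drag is negligible compared to the inertial reaction force.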
In your scenario, the object is moving through air, which exerts drag because of inertial forces. It is also sliding over the water. It starts to dig into the water because of the downward gravity force. Water pushes back upward. There are different ways this can happen.
Sometimes the object skips like a stone. The reaction force is enough to push it entirely out of the water. It leaves expanding rings at places where it touches down and is relaunched.
Sometimes the object slides over the water like a water skier. In this case, the reaction force is big enough to keep the object from sinking. It pushes a furrow in the water.
Sometimes the object floats like a boat. Water pressure is greater than the weight of the object, and this holds the object up. The object leaves a wake.
In all three cases, the object pushes water out of the way, and the inertial reaction force slows the object down. | {
"domain": "physics.stackexchange",
"id": 90360,
"tags": "fluid-dynamics, friction, water"
} |
Limit on velocity in Minkowski Spacetime geometry | Question: Let A be a rocket moving with velocity v.
Then the slope of its worldline in a spacetime diagram is given by c/v.
Since it is a slope, c/v = tan(theta) for some theta > 45 and theta < 90.
Does this impose a mathematical limit on v?
If so what is it?
As in, we know tan(89.9999999999) = 572957795131.
And c = 299792458.
Using tan(89.9999999999) as our limit of precision, the smallest v we can use is:
c/v = tan(89.9999999999)
=> 299792458 / v = 572957795131
Therefore, v = c / 572957795131 ≈ 5.23 × 10⁻⁴ m/s
What is the smallest non zero value of v? Is there a limit on this?
Answer: Since a worldline along the time axis of a Minkowski diagram is at rest, it is more intuitive to measure angles from that axis instead, as then the 'slope' is (space)/(time), i.e., a velocity. Then we have the trigonometric relationship:
$$\frac{v}{c} = \tanh\alpha$$
where Minkowski spacetime follows hyperbolic trigonometry because of the sign difference in the Minkowski metric/distance formula compared to the Euclidean metric/Pythagorean theorem.
The hyperbolic angle $\alpha$ can be any real number, and the limit this imposes on speed under the restriction to real numbers is that $|v|<c$.
A lot of STR formulas become rather intuitive in this form, e.g., the Lorentz transformation is just a rotation in hyperbolic trigonometry, and the velocity addition formula is:
$$\begin{eqnarray*}u\oplus v = \frac{u+v}{1+uv/c^2}
&\Longleftrightarrow& \tanh(\alpha+\beta) = \frac{\tanh\alpha+\tanh\beta}{1+\tanh\alpha\tanh\beta}\text{,}\end{eqnarray*}$$
and so forth.
Note that in Euclidean space, the corresponding question is 'if you have three lines intersecting at a point, and the first makes a slope $m$ with the second, while the second makes a slope $l$ with the third, what slope does the first line make with the third?', and the answer to that also follows the pattern of the normal tangent addition formula.
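A numerical illustration of the rapidity picture (the two speeds are arbitrary choices): because hyperbolic angles add linearly, composing velocities via tanh of summed rapidities must agree with the usual addition formula, and the result always stays below c.

```python
import math

c = 299792458.0  # m/s

def add_velocities(u, v):
    """Relativistic velocity addition: u (+) v = (u + v) / (1 + u*v/c^2)."""
    return (u + v) / (1 + u * v / c**2)

u, v = 0.6 * c, 0.7 * c

# rapidities alpha = artanh(v/c) simply add
alpha, beta = math.atanh(u / c), math.atanh(v / c)
w = c * math.tanh(alpha + beta)

print(math.isclose(add_velocities(u, v), w))  # True
print(w < c)                                  # True: still below c
```

Naively adding 0.6c and 0.7c would exceed c; the rapidity sum automatically produces the correct sub-luminal result.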
"domain": "physics.stackexchange",
"id": 7645,
"tags": "spacetime, geometry, special-relativity"
} |
Zero Ohmic resistance in superconductors is a little bit too enthusiastic? | Question: According to this article:
Superconductors contain tiny tornadoes of supercurrent, called vortex filaments, that create resistance when they move.
Does this mean that our description of zero Ohmic resistance in superconductors is a little bit too enthusiastic?
Answer: I think that, since superconductors were originally discovered because they exhibited electrical resistances indistinguishable from zero, zero electrical resistivity may have originally been the defining characteristic of a superconductor. But as more was learned about superconductors and the superconducting state, it was realized that superconductors can exist in a mixed or vortex state consisting of normal-state vortices in a superconducting medium, and such a system can exhibit energy dissipation due to the movement of the vortices, which results in an electrical resistance that is not strictly zero. So, yes, I guess you could say that the use of the term "superconductor" when these materials were first discovered could have been a bit "too enthusiastic" since it suggested that absolute zero resistance was an essential, defining characteristic of superconductors when it's not.
Not an expert in superconductivity and maybe someone else will chime in, but from my perspective as an experimentalist I would say that an operational definition of superconductivity is a very low electrical resistance combined with the Meissner Effect (i.e., magnetic flux exclusion). | {
"domain": "physics.stackexchange",
"id": 49778,
"tags": "electromagnetism, superconductivity"
} |
Mass change during phase transition | Question: When you fill up a balloon about a quarter way with liquefied butane fuel and let it sit at room temperature it will turn into gas. But why does the gas weigh the same as the liquified butane?
The liquefied butane liquid weighs just about as much as water.
Answer: Assuming you fill the balloon only with liquid butane, the answer is very simple - the gas in the balloon at room temperature IS the liquid in the balloon initially. If you put some amount of liquid butane into the balloon and seal it at t=0, then allow it to warm until the liquid has evaporated, there is essentially no transport of other matter into or out of the balloon during that time. The only thing that has happened is that the initial charge of liquid has absorbed thermal energy from the surroundings and undergone a phase change from liquid to gas.
Because there is no change in the amount of material in the balloon - there are the same number of butane molecules in the gas phase as there were initially in the liquid phase - there can be no change in mass. Liquid and gas phase contain the same number of molecules, at some fixed mass per molecule.
Note that while the mass does not change, the density does. | {
"domain": "chemistry.stackexchange",
"id": 6298,
"tags": "everyday-chemistry, phase, evaporation"
} |
Derivation of angular momentum in cylindrical coordinates | Question: I tried to derive the formula for angular momentum ($\vec{l} = m\rho^2 \dot\phi \vec{e_z}$ in the case of motion restricted to the x-y plane) in cylindrical coordinates directly from the vector cross product $m (\vec{r} \times \dot{\vec{r}})$.
Taking $\vec{r} = (\rho, \phi, z)$ and $\dot{\vec{r}} = (\dot \rho - \phi \dot\phi, \rho\dot\phi + \dot\phi, \dot z)$ as $d(\rho\vec{e_\rho})/dt = \dot\rho \vec{e_\rho} + \rho \dot\phi \vec{e_\phi}$ and $d(\phi\vec{e_\phi})/dt = \dot\phi \vec{e_\phi} - \phi \dot\phi \vec{e_\rho}$
However when taking the cross product (and taking $z = 0$ as we are moving in strictly the x-y plane), I get $\vec{l} = m(\rho \phi^2 + \rho\dot\phi - \dot\rho\phi + \phi^2\dot\phi)\vec{e_z}$ and I am unsure how to deal with the other three terms.
Is this actually the formula for angular momentum and is there just some intial assumption I am missing that gets rid of the last three terms, or have I misrepresented my vectors somehow? In the literature given to us, the derivation of the formula isn't given at all although intuitively it clearly makes sense.
Answer: Your expression for the velocity is just wrong.
$$\tag1\dot{\vec r}=\frac{\mathrm d\vec r}{\mathrm dt}=\frac{\mathrm d\ }{\mathrm dt}\left(\varrho\hat{\vec\varrho}+z\hat{\vec z}\right)=\dot\varrho\hat{\vec\varrho}+\varrho\dot\varphi\hat{\vec\varphi}+\dot z\hat{\vec z}$$
so that the angular momentum is obtained by
$$\tag2\vec\ell=m\left(\varrho\hat{\vec\varrho}+z\hat{\vec z}\right)\times\left(\dot\varrho\hat{\vec\varrho}+\varrho\dot\varphi\hat{\vec\varphi}+\dot z\hat{\vec z}\right)=m\left[-z\varrho\dot\varphi\hat{\vec\varrho}+\left(z\dot\varrho-\varrho\dot z\right)\hat{\vec\varphi}+\varrho^2\dot\varphi\hat{\vec z}\right]$$ | {
"domain": "physics.stackexchange",
"id": 96038,
"tags": "homework-and-exercises, rotational-dynamics, angular-momentum, coordinate-systems, vectors"
} |
Are galaxies really structured the way they look in pictures? | Question:
Are real galaxies really structured the way they are in pictures online?
I'm wondering this because the speed limit of the universe is light speed, which means everything we see in the sky is delayed. Therefore,
shouldn't galaxies look extremely distorted and not structured like what we see?
Or some clever tricks are taken to make it look correct?
Answer: Although a galaxy may recede from us at arbitrarily high velocities (even superluminally) because space expands, its rotation and motion through space happen at non-relativistic speeds, of the order of a few 100 km/s, or a few 1000 km/s at most. Hence, every part of a galaxy moves with roughly the same speed with respect to the observer, and the image is thus not distorted.
However, there is another effect that may distort the image of a galaxy, namely gravitational lensing: If you observe a distant galaxy lying behind a massive cluster of galaxies, then the huge mass of the cluster curves space in such a way as to make the light from the background galaxies take slightly different paths toward you. This distorts the look of the background galaxies (and may even cause them to appear at multiple locations on the sky).
In the image of the cluster Abell S1063 below (from APOD), you see this effect. In fact, by measuring the "banana-shaped-ness" of the background galaxies, it is possible to calculate the mass of the foreground cluster; one of the ways to infer the presence of dark matter. | {
"domain": "physics.stackexchange",
"id": 51746,
"tags": "optics, visible-light, astrophysics, speed-of-light, galaxies"
} |
Classifying text documents using linear/incremental topics | Question: I'm attempting to classify text documents using a few different dimensions. I'm trying to create arbitrary topics to classify such as size and relevance, which are linear or gradual in nature. For example:
size: tiny, small, medium, large, huge.
relevance: bad, ok, good, excellent, awesome
I am training the classifier by hand. For example, this document represents a 'small' thing, this other document is discussing a 'large' thing. When I try multi-label or multi-class SVM for this it does not work well and it also logically doesn't make sense.
Which model should I use that would help me predict this linear type of data? I use scikit-learn presently with a tfidf vector of the words.
Answer: If you want these output dimensions to be continuous, simply convert your size and relevance metrics to real-valued targets. Then you can perform regression instead of classification, using any of a variety of models. You could even attempt to train a multi-target neural net to predict all of these outputs at once.
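To make that first suggestion concrete, here is a minimal pure-Python sketch (the label scores, the toy one-dimensional feature, and the closed-form fit are all illustrative assumptions; in practice you would fit e.g. scikit-learn's Ridge on the full tf-idf matrix):

```python
# Hypothetical ordinal scale -- the numeric mapping is the modeling choice.
SIZE_SCALE = {"tiny": 1, "small": 2, "medium": 3, "large": 4, "huge": 5}
LEVELS = sorted(SIZE_SCALE, key=SIZE_SCALE.get)

def encode(labels):
    """Turn ordinal string labels into real-valued regression targets."""
    return [float(SIZE_SCALE[l]) for l in labels]

def decode(y):
    """Snap a real-valued prediction back to the nearest label."""
    k = min(max(round(y), 1), len(LEVELS))
    return LEVELS[int(k) - 1]

def fit_1d(xs, ys):
    """Closed-form least squares for one feature (stand-in for Ridge)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return b, my - b * mx

# Toy feature: imagine a single tf-idf-like score per training document.
xs = [0.1, 0.3, 0.5, 0.7, 0.9]
ys = encode(["tiny", "small", "medium", "large", "huge"])
b, a = fit_1d(xs, ys)
assert decode(b * 0.72 + a) == "large"
```

The point of the round-trip in decode is that the regressor's continuous output respects the ordering of the labels, which a multi-class SVM cannot.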
Additionally, you might consider first using a topic model such as LDA as your feature space.
Based on the values, it sounds like the "relevance" might be a variable best captured by techniques from sentiment analysis. | {
"domain": "datascience.stackexchange",
"id": 433,
"tags": "classification, scikit-learn"
} |
"Burger" lanes: What are they, where are they found, and what do they look like? | Question: What and where are they, and what do they look like?
Do all transportations with roundabouts use them in some form?
This is different from What's the purpose of a 'burger' lane in a roundabout?, which is very specific to why. This is an open floor to describe where and what they are, since not all transportation systems will have these.
Answer: If your question is what they look like, then:
figure: source hulldailymail
or
figure: source openstreet wiki
They are also known as "through roundabouts"
As to their name, I suppose it comes from the fact that the layout looks like a hamburger, i.e. the two buns are the green islands and the two lanes of the road are the cheese and bacon slices. | {
"domain": "engineering.stackexchange",
"id": 4558,
"tags": "civil-engineering, traffic-intersections"
} |
How to disable gravity only for a specific model in Gazebo? | Question:
A similar question has been answered. http://answers.ros.org/question/65991/how-to-disable-gravity-of-a-model-in-the-world-only-disable-one-model/
However, this only works for the .sdf model files. It does not work for urdf. Kindly help.
<link name="base_link" gravity="0 0 0">
<gravity>0</gravity>
</link>
<joint name="base_joint" type="fixed">
<origin xyz="0 0 0" rpy="0 0 0" />
<parent link="base_link" />
<child link="body_link" />
</joint>
<link name="body_link">
<gravity>0</gravity>
<inertial>
<mass value="0.1" />
<origin xyz="0 0 0" />
<inertia ixx="1" ixy="0" ixz="0" iyy="1" iyz="0" izz="1" />
</inertial>
<visual name="base_visual">
<origin xyz="0 0 0" rpy="0 0 0" />
<geometry name="pioneer_geom">
<mesh filename="package://rotors_description/meshes/simple_airplane1.dae" />
</geometry>
</visual>
<collision>
<origin xyz="0 0 0" rpy="0 0 0" />
<geometry>
<mesh filename="package://rotors_description/meshes/simple_airplane1.dae" />
</geometry>
</collision>
</link>
Originally posted by webvenky on Gazebo Answers with karma: 23 on 2016-05-26
Post score: 0
Answer:
It looks like your file is in the URDF format (you use origin instead of pose for example).
<gravity> within a link is specified in SDF, but not in URDF. Luckily, the conversion from URDF to SDF is possible. In order to use SDF tags within your URDF link, use the <gazebo> tag, as explained in this tutorial. It should look something like this:
<gazebo reference="base_link">
<gravity>0</gravity>
</gazebo>
Originally posted by chapulina with karma: 7504 on 2016-05-26
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by webvenky on 2016-05-26:
Thanks a lot! It works. :-)
Comment by hbaqueiro on 2018-11-05:
Does it remain static in the world even if something collides with it? I was looking for something more like the <static> tag.
Comment by chapulina on 2018-11-05:
No, it will not remain static, it will just not be affected by gravity. There's no way to set only one link to be static, that tag works only on the model level. You can however make a model static and connect it to a non-static model with a joint.
Comment by BrumBrum on 2020-01-07:
For me the <gravity> tags did not work; this did work however:
<gazebo reference="base_link">
<turnGravityOff>true</turnGravityOff>
</gazebo>
Comment by myboyhood on 2020-05-04:
For me <gravity> or <turnGravityOff> could not stop the model from dropping down to the ground;
in order to keep the model in the air, I fixed it to the world link:
<link name="world"/>
<joint name="world_joint" type="fixed">
<parent link="world"/>
<child link="link1"/>
</joint>
Comment by val on 2020-08-07:
I tried both <gravity> and <turnGravityOff> but neither of them turned gravity off. When I check the generated sdf file it shows that there are two tags defining gravity. Adding the tag in the URDF file doesn't overwrite the default value; it just adds another tag.
<gravity>0</gravity>
<gravity>1</gravity>
Does anyone know how i can overwrite the original tag?
Comment by th123 on 2021-01-21:
Hi val,
have you found a solution for this problem by now? I assume I am facing the same issue.
Comment by danzimmerman on 2021-02-17:
Yep I'm hitting this now too (ROS Melodic + Gazebo9)
Related to https://github.com/osrf/sdformat/issues/71 ?
Comment by danzimmerman on 2021-02-17:
@th123 <turnGravityOff>1</turnGravityOff> (case-sensitive) is working for me in ROS Melodic. <turngravityoff> all-lowercase doesn't work. | {
"domain": "robotics.stackexchange",
"id": 3924,
"tags": "gazebo"
} |
Equalization : using the spectral or the temporal signal? | Question: I'm trying to equalize a sound signal - using a Java program - and I'm using this process:
1/ Conversion of the temporal signal to a spectral signal, using a FFT
2/ Applying a coefficient to each frame of the spectral signal to equalize it
3/ Conversion of the spectral signal modified to a temporal signal
4/ Reading of that temporal signal
If I'm applying that process to a "pure" signal (e.g. a 400 Hz sinusoidal signal generated "on the fly" by a Java class), it seems to work. But if I'm applying it to a "real" wav signal (a song, for example), the result is unusable. I hear a kind of "sliced" sound, even if all the equalization coefficients are equal to "1" (= no modification).
So, to equalize a sound, is the process I describe above the right solution?
Or should I avoid that double conversion and apply a convolution product to the temporal signal instead?
If I have to apply a convolution product, how do I do it? I have no clue about calculating it with "random" signals.
Thank you for all your answers.
Answer: If you are trying to make a proper filter, you would want to use FFT convolution, which is like OLA, but not the same. OLA is more of a synthesis technique for reducing noise between frames caused by incoherent phase mangling from directly manipulating frequency data.
You also need to resolve the phase of the bins for your filter. You are only defining the magnitudes and phase at those discrete frequencies, not the frequencies in between, which can be radically different from what you have in your head. The easiest thing to do is to use linear phase to center the impulse of the filter in the window. This can be done by multiplying each bin magnitude by e^(jπk).
If you are trying to realize an analog prototype, this is a bad way to do it. Your frequency bins are linearly spaced, but analog filters are logarithmic. The effect will be that your lower bands have a lower “Q” and vice versa. You can still do it, but it may not do what you wanted. | {
"domain": "dsp.stackexchange",
"id": 6815,
"tags": "signal-analysis, frequency-spectrum, convolution"
} |
Activated complex theory vs. consecutive reactions | Question: Activated complex theory, tells us that due to the collision between the molecules of the reactants, they form a transition specie before the product is formed, which is called active complex. On the other hand we have consecutive reactions on which is also formed a intermediate product before forming the actual product we're interested on. My question is :
Where is the difference between the activated complex and the intermediate product, since they are both formed before the actual product ?
Answer: The key difference is that transition states occur at a maximum of the potential energy curve for the reaction whereas intermediates occur at a local minimum. Take this example of the reaction profile for an $\ce{S_{N}1}$ reaction:
You will see that the products and reactants occur at minima on the curve and the intermediate also occurs at a minimum, albeit a higher energy one. In between the minima are located the maxima where you find the transition states. Unless the activation energy is zero (see this question for rare examples) there will always be a transition state located between any two minima.
Transition states are usually represented using dashed bonds to show bonds in the process of being broken or formed as opposed to being fully formed as in intermediates or products. Additionally the Hammond postulate says that the transition state will most resemble the stable species closest to it in energy, in this case the carbocation, and this can be used to help predict the structures of transition states.
Transition states are very short lived because they immediately 'roll downhill' on the potential energy curve to reach an intermediate or product. By contrast some intermediates are actually quite stable and can be isolated. | {
"domain": "chemistry.stackexchange",
"id": 3339,
"tags": "physical-chemistry, kinetics"
} |
How can I keep a smaller water reservoir's water level at half available when being fed from a larger reservoir? | Question: I'm trying to create my own ultrasonic humidifier. I ordered the misting part which works great but it only functions correctly in shallow water. So I'd like to feed from a large water reservoir to a smaller one. My question is how can I fill the smaller reservoir to a desired water level? Will I have to use a closing/opening valve or is there a simpler way? (I was thinking a small balloon hooked up to a pulley that opens and closes a latch much like a toilet but I am trying to avoid complexity.)
Answer: I recently saw an auto pet waterer. The intent of the device is to keep the same water level as the pet drinks the water. This is accomplished with basically a bottle of water turned upside down and the top of it submerged under the water level. If the water level falls below the top of the water bottle, then air bubbles make their way up to the top of the bottle, exchanging air for water and keeping the level the same.
This is a very simple solution, and this type of approach may be appropriate for what you're trying to do, but it differs from the other solutions proposed. There are some drawbacks:
The pressure of the water reservoir has to adjust to accommodate its level
The device can only make up for lost water - if you add water the level won't remain the same
If you're building a humidifier I doubt the second point would be a problem. You're only going to be removing water, right? The first point may actually be more troublesome. For one, you refilling it isn't trivial. You need to actually close up the water reservoir, turn it rightside up again, then fill it. If you just opened a plug, then it would all fall out and make a big mess. | {
"domain": "physics.stackexchange",
"id": 5825,
"tags": "fluid-dynamics, water, pressure"
} |
Why don't choir voices destructively interfere so that we can't hear them? | Question: Sound is propagated by waves. Waves can interfere.
Suppose there are two tenors standing next to each other and each singing a continuous middle-C.
Will it be the case that some people in the audience cannot hear them because of interference?
Would it make a difference if they were two sopranos or two basses and singing a number of octaves higher or lower?
How does this generalize to an array of n singers?
Given a whole choir, to what extent are their voices less than simply additive because of this? Is it possible that, for some unfortunate member of the audience, the choir appears to be completely silent--if only for a moment?
Answer: The main issue in the setting of an orchestra or choir is the fact that no two voice or instruments maintain exactly the same pitch for any length of time. If you have two pure sine wave source that differ by just one Hertz, then the interference pattern between them will shift over time - in fact at any given point you will hear a cycle of constructive and destructive interference which we recognize as beats, but the exact time when each member of the audience will hear the greatest or least intensity will vary with their position.
Next let's look at the angular distribution of signal. If two tenors are singing a D3 of 147 Hz (near the bottom of their range) the wavelength of the sound is 2 m: if they stand closer together than 1 m there will be no opportunity to create a 180 degree phase shift anywhere. If they sing near the top of their range, the pitch is closer to 600 Hz and the wavelength 0.5 m. But whatever interference pattern they generate, a tiny shift in frequency would be sufficient to move the pattern - so no stationary observer would experience a "silent" interference - even of the fundamental frequency.
Enter vibrato: most singers and instruments deliberately modulate their frequency slightly - this makes the note sound more appealing and allows them to make micro corrections to the pitch. It also makes the voice stand out more against a background of instruments and tends to allow it to project better (louder for less effort on the part of the singer). This is used by soloists but more rarely by good choirs - because in the choir you want to blend voices, not have them stand out.
At any rate, the general concept here is incoherence: the different source of sound in a choir or orchestra are incoherent, meaning that they do not maintain a fixed phase relationship over time. And this means they do not produce a stationary interference pattern.
A side effect of interference is seen in the volume of a choir: if you add the amplitudes of two sound sources that are perfectly in phase, your amplitude doubles and the energy / intensity quadruples. A 32-man choir would be over 1000 times louder than a solo voice - and this would be achieved in part because the voices could only be heard "right in front" of the choir (perfectly coherent voices would act like a phased array). But since the voices are incoherent, there is no focusing, no amplification, and they can be heard everywhere.
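For a rough feel of the numbers in this answer (the 343 m/s speed of sound is an assumed round figure, so the wavelengths come out slightly above the rounded values quoted):

```python
c = 343.0                          # m/s, approximate speed of sound in air
wl_low = c / 147.0                 # D3, near the bottom of a tenor's range
wl_high = c / 600.0                # near the top of the range
assert 2.0 < wl_low < 2.5          # roughly the "2 m" quoted above
assert 0.5 < wl_high < 0.65        # roughly the "0.5 m" quoted above

# Two sources detuned by 1 Hz beat once per second at any fixed listener.
beat_period = 1.0 / abs(441.0 - 440.0)
assert beat_period == 1.0

# 32 perfectly coherent voices: amplitude x32, so intensity x32^2 > 1000.
assert 32 ** 2 == 1024
```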
Note that incoherence is a function of phase and frequency - every note is a mix of frequencies, and although a steady note will in principle contain just a fundamental and its harmonics, their exact relationship is very complicated. Even if you took a single singer's voice, and put it into two speakers with a delay line feeding one of the speakers, I believe you would still not find interference because of the fluctuations in pitch over even a short time. Instead, your ear would perceive this as two people singing.
And finally - because a voice (or an instrument) is such a complex mix of frequencies, there is in general no geometric arrangement of sources and receiver in which all frequencies would interfere destructively at the same time. And the ear is such a complex instrument that it will actually "synthesize" missing components in a perceived note - leading to the strange phenomenon where for certain instruments, the perceived pitch corresponds to a frequency that is not present - as is the case with a bell, for example. | {
"domain": "physics.stackexchange",
"id": 25966,
"tags": "waves, acoustics, interference"
} |
rosrun executable or found error | Question:
hi, I use the hydro distribution of ROS. I get the following error, and although I have left no stone unturned, I unfortunately couldn't fix it. How can I correct this error?
Thank you in advance.
$ rosrun my_pcl_tutorial example input:=/narrow_stereo_textured/points2
[rosrun] Couldn't find executable named example below /home/esetron/catkin_ws/src/my_pcl_tutorial
[rosrun] Found the following, but they're either not files,
[rosrun] or not executable:
[rosrun] /home/esetron/catkin_ws/src/my_pcl_tutorial/src/example
Originally posted by hamdi on ROS Answers with karma: 73 on 2014-07-03
Post score: 0
Answer:
I've usually seen users have this problem in a few cases:
Your package isn't built - run catkin_make in your workspace
You haven't added your executable to your CMakeLists.txt - follow the tutorial for adding your executable to your CMakeLists.txt
Your executable is built, but it's in the wrong place - make sure you're calling catkin_package() before add_executable() in your CMakeLists.txt
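A minimal CMakeLists.txt sketch with that ordering, using the `example` target from the question (the exact catkin component list is an assumption):

```cmake
find_package(catkin REQUIRED COMPONENTS roscpp)
catkin_package()                        # call this before add_executable()
include_directories(${catkin_INCLUDE_DIRS})
add_executable(example src/example.cpp)
target_link_libraries(example ${catkin_LIBRARIES})
```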
Originally posted by ahendrix with karma: 47576 on 2014-07-03
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by hamdi on 2014-07-04:
thank you Ahendrix, I forgot to do the third suggestion. | {
"domain": "robotics.stackexchange",
"id": 18504,
"tags": "rosrun"
} |
Why is the amu of chlorine-35 less than 35? | Question: My book says that a proton weighs 1.0073u, a neutron weighs 1.0087u, and an electron weighs 0.00055u.
Now, why is the mass of chlorine-35 equal to 34.969? Are there not 17 protons, 18 neutrons, and 17 electrons? I calculated it and it sums to around 35.29. Where did I go wrong?
Answer: You need to account for the energy released when nucleons and electrons come together and form a Cl-35 atom. It's called the Binding Energy.
This sort of equation can help to explain :
(Rest Mass Energy of Individual Nucleons and Electrons*) - (Various Binding Energies) = (Rest Mass Energy of the Neutral Atom)
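A quick numerical check (the 931.494 MeV/u mass-energy conversion factor is the standard value; the masses are the ones quoted in the question):

```python
# Reproducing the question's arithmetic and the binding-energy explanation.
m_p, m_n, m_e = 1.0073, 1.0087, 0.00055   # u, as given in the question
parts = 17 * m_p + 18 * m_n + 17 * m_e    # sum over the free constituents
measured = 34.969                         # u, tabulated Cl-35 atomic mass
defect = parts - measured                 # mass converted to binding energy
binding_energy_MeV = defect * 931.494     # E = (delta m) * c^2

assert abs(parts - 35.29005) < 1e-9       # the "around 35.29" in the question
assert 295 < binding_energy_MeV < 305     # ~300 MeV, consistent with Cl-35
```

So the "missing" 0.32 u is roughly 300 MeV of binding energy, which is why the tabulated mass is below the naive sum.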
*edit | {
"domain": "chemistry.stackexchange",
"id": 13673,
"tags": "atoms"
} |
Merger happening tangentially, but dark matter at both sides? | Question: According to this news,
The expectation of "unaccounted energy" comes from the fact the merger
of galaxy clusters is occurring tangentially to the observers'
line-of-sight. This means they are potentially missing a good fraction
of the kinetic energy of the merger because their spectroscopic
measurements only track the radial speeds of the galaxies.
Read more at: http://phys.org/news/2014-04-hubble-team-monster-el-gordo.html#jCp
What strikes as a surprising is that the picture shows the dark matter distribution (inferred from weak lensing) in blue hue, and it shows a similar pattern that Bullet Cluster, even though in the Bullet Cluster case, the collision happen perpendicular to the line-of-sight
Any idea why so much discrepancy between normal matter and dark matter distribution along that axis?
Answer: According to the original paper from the Atacama Telescope team, the collision axis is somewhere between 15° and 30° to the line of sight. So the claim that the axis is tangential to the line of sight is misleading (since the line of sight is a straight line, wouldn't the tangent to it be the same straight line?).
The velocity component normal to the line of sight is estimated at 586 km/s (page 15 of the paper), so we'd expect to see some separation of the dark and baryonic matter distributions, even though it wouldn't be as great as for the bullet cluster. For comparison, the collision speed in the bullet cluster is estimated to be 4500 km/s and the axis is roughly normal to the line of sight. | {
"domain": "physics.stackexchange",
"id": 12941,
"tags": "dark-matter, gravitational-lensing"
} |
Lunar Eclipse - Total darkness? | Question: Can a lunar eclipse completely 'turn off' the mooon, i.e. like a New Moon? Or is it always just a 'shading' of the moon (sometimes red in color)?
Answer: No.
Lunar eclipses are caused when the Moon is in opposition to the Sun. Normally this produces a full moon, but if the Moon is in exact opposition (considering incline of the Moon's orbital plane), all direct sunlight will be blocked from the Moon:
So if all the sunlight is blocked, how can we see the Moon? Well, the main reason is that sunlight is often dispersed in the atmosphere, and so it can reach the Moon. In addition, there's airglow, which is when sunlight hits Earth's upper atmosphere and causes multiple chemical reactions, scattering light throughout the night sky.
Thus, all this light from Earth, called earthshine, reflects off the Moon and illuminates it. In addition, Earth removes and blocks parts of the sunlight's spectrum, leaving only the longer wavelengths. This causes the Moon to appear red. Lastly, because the Earth blocks off all the direct sunlight from the Sun (only diffracted sunlight and airglow reach the Moon), we can actually see Earth's shadow on the Moon. | {
"domain": "astronomy.stackexchange",
"id": 1846,
"tags": "lunar-eclipse"
} |
How to derive the formula for total impedance, $Z$, in an $RLC$ circuit? | Question: Where this is an AC circuit, how can we derive the below formula for impedance, $Z$?
$R = $ resistance, $X_{L} = $ inductive reactance, and $X_{c} = $ capacitive reactance.
Answer: The impedance is actually a complex quantity. It has a magnitude which describes the ratio between current and voltage magnitude, but also a phase which gives the phase difference between the current and the voltage. So $|U|=|Z|\cdot|I|$ and $Phase(U) - Phase(I) = Phase(Z)$.
For a resistor, the impedance is equal to the resistance $Z_R = R$ because there is no phase difference between the current and the voltage.
For a capacitor $Z_C = \frac{1}{j \cdot \omega C}$, with $j \cdot j = -1$. It is imaginary because the phase of the current is 90 degree greater than the phase of the voltage for capacitors.
For an inductance $Z_L = j \cdot \omega L$. It is imaginary because the phase of the current is 90 degree less than the phase of the voltage.
Impedances are really nice to work with, because you can apply the same rules as you do for resistors when putting several impedances in series or in parallel.
So if all impedances are in series, like in your picture, then the total impedance is simply $$Z_{tot} = Z_R+Z_C+Z_L = R + \frac{1}{j \cdot \omega C} + j \cdot \omega L$$
If the impedances were in parallel you would have $$\frac{1}{Z_{tot}} = \frac{1}{Z_R}+ \frac{1}{Z_C}+\frac{1}{Z_L}$$
The formula you showed in your question is not really a formula for the impedance. It is a formula for the absolute value of the impedance. In other words, in your formula you have $$Z = |Z_{tot}| = \sqrt{\Re(Z_{tot})^2 + \Im(Z_{tot})^2}$$
For $Z_{tot} = Z_R+Z_C+Z_L = R + \frac{1}{j \cdot \omega C} + j \cdot \omega L$, the real part is $\Re(Z_{tot})=R$ and the imaginary part is $\Im(Z_{tot})=\omega L - \frac{1}{\omega C}$. So if we define $X_L = \omega L$ and $X_C = \frac{1}{\omega C}$ we get :
$$Z = |Z_{tot}| = \sqrt{\Re(Z_{tot})^2 + \Im(Z_{tot})^2} = \sqrt{R^2 +(\omega L - \frac{1}{\omega C})^2} = \sqrt{R^2 + (X_L-X_C)^2}$$
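The same calculation can be checked numerically with Python's built-in complex numbers (the component values below are arbitrary illustrations):

```python
# Numerical check of the series-RLC impedance formulas.
import math

R, L, C = 10.0, 1e-3, 1e-6        # ohms, henries, farads (assumed values)

def Z_total(w):
    """Z_R + Z_L + Z_C for a series circuit at angular frequency w."""
    return R + 1j * w * L + 1 / (1j * w * C)

# At resonance w0 = 1/sqrt(LC), X_L = X_C and the impedance is purely R.
w0 = 1 / math.sqrt(L * C)
assert abs(Z_total(w0) - R) < 1e-6

# |Z| of the complex sum matches the sqrt(R^2 + (X_L - X_C)^2) formula.
w = 2 * math.pi * 1000.0
mag = math.sqrt(R**2 + (w * L - 1 / (w * C))**2)
assert abs(abs(Z_total(w)) - mag) < 1e-6
```

Note that the complex value also carries the phase information that the magnitude-only formula discards.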
This $Z$ give you an idea of the ratio of the magnitudes of the current and the voltage, but gives no information about the phase. I guess that if you are using it you don't know much about complex numbers and why they are useful to describe the magnitude and phase of oscillating quantities. But trying to avoid them makes your life only more difficult, so I suggest you to learn some basics about complex numbers and understand where the Euler formula comes from.
I also suggest you to watch this video to understand what I mean by "the phase of the current is 90 degree greater than the phase of the voltage in a capacitor".
Here is a small video to understand how to take the absolute value of a complex number, so you understand where the square root comes from. Also notice that when taking the imaginary part of $Z_{tot}$ I used the fact that $\frac{1}{j} = -j$. | {
"domain": "physics.stackexchange",
"id": 90454,
"tags": "homework-and-exercises, electric-circuits, electrical-resistance, capacitance, inductance"
} |
How to interpret correlation functions in QFT? | Question: I'm fairly new to the subject of quantum field theory (QFT), and I'm having trouble intuitively grasping what a n-point correlation function physically describes. For example, consider the 2-point correlation function between a (real) scalar field $\hat{\phi}(x)$ and itself at two different space-time points $x$ and $y$, i.e. $$\langle\hat{\phi}(x)\hat{\phi}(y)\rangle :=\langle 0\rvert T\lbrace\hat{\phi}(x)\hat{\phi}(y)\rbrace\lvert 0\rangle\tag{1}$$ where $T$ time-orders the fields.
Does this quantify the correlation between the values of the field at $x=(t,\mathbf{x})$ and $y=(t',\mathbf{y})$ (i.e. how much the values of the field at different space-time points covary, in the sense that, if the field $\hat{\phi}$ is excited at time $t$ at some spatial point $\mathbf{x}$, then this will influence the "behaviour" of the field at later time $t'$ at some spatial point $\mathbf{y}$)? Is this why it is referred to as a correlation function?
Furthermore, does one interpret $(1)$ as physically describing the amplitude of propagation of a $\phi$-particle from $x$ to $y$ (in the sense that a correlation of excitations of the field at two points $x$ and $y$ can be interpreted as a "ripple" in the field propagating from $x$ to $y$)?
Answer: Yes, in scalar field theory, $\langle 0 | T\{\phi(y) \phi(x)\} | 0 \rangle$ is the amplitude for a particle to propagate from $x$ to $y$. There are caveats to this, because not all QFTs admit particle interpretations, but for massive scalar fields with at most moderately strong interactions, it's correct. Applying the operator $\phi({\bf x},t)$ to the vacuum $|0\rangle$ puts the QFT into the state $|\delta_{\bf x},t \rangle$, where there's a single particle whose wave function at time $t$ is the delta-function supported at ${\bf x}$. If $x$ comes later than $y$, the number $\langle 0 | \phi({\bf x},t)\phi({\bf y},t') | 0 \rangle$ is just the inner product of $| \delta_{\bf x},t \rangle$ with $| \delta_{\bf y},t' \rangle$.
However, the function $f(x,y) = \langle 0 | T\{\phi(y) \phi(x)\} | 0 \rangle$ is not actually a correlation function in the standard statistical sense. It can't be; it's not even real-valued. However, it is a close cousin of an honest-to-goodness correlation function.
If you make the substitution $t=-i\tau$, you'll turn the action
$$iS = i\int dtd{\bf x} \{\phi(x)\Box\phi(x) - V(\phi(x))\}$$
of scalar field theory on $\mathbb{R}^{d,1}$ into an energy function
$$-E(\phi) = -\int d\tau d{\bf x} \{\phi(x)\Delta\phi(x) + V(\phi(x))\}$$
which is defined on scalar fields living on $\mathbb{R}^{d+1}$. Likewise, the oscillating Feynman integral $\int \mathcal{D}\phi e^{iS(\phi)}$ becomes a Gibbs measure $\int \mathcal{D}\phi e^{-E(\phi)}$.
The Gibbs measure is a probability measure on the set of classical scalar fields on $\mathbb{R}^{d+1}$. It has correlation functions $g(({\bf x}, \tau),({\bf y},\tau')) = E[\phi({\bf x}, \tau)\phi({\bf y},\tau')]$. These correlation functions have the property that they may be analytically continued to complex values of $\tau$ having the form $\tau = e^{i\theta}t$ with $\theta \in [0,\pi/2]$. If we take $\tau$ as far as we can, setting it equal to $i t$, we obtain the Minkowski-signature "correlation functions" $f(x,y) = g(({\bf x},it),({\bf y},it'))$.
So $f$ isn't really a correlation function, but it's the boundary value of the analytic continuation of a correlation function. But that takes a long time to say, so the terminology gets abused. | {
"domain": "physics.stackexchange",
"id": 62165,
"tags": "quantum-field-theory, greens-functions, correlation-functions, propagator"
} |
CodeChef Fusing Weapons in a circular list | Question: I am currently trying to solve this problem on Codechef:
Before the start of each stage, N weapons appear on the screen in circular order. Each weapon has an integer associated with it, which represents its level. The chef can choose two adjacent weapons of the same level and fuse them into a single weapon of level A+1, where A is the level of the weapons before fusing. Both the old weapons will disappear and the new weapon will be placed in the place of the old weapons, shrinking the circle.
Chef can fuse as many times as he wants, and in each stage, he wants to make a weapon with as high a level as possible. Each stage is independent of other stages.
Please help Chef by figuring out the maximum level of a weapon that he can get in each stage.
However, my code seems to exceed the time limit. Can someone please tell me how to optimize this code to prevent it from exceeding the time limit?
#include <iostream>
using namespace std;
class Set
{
public:
int data[200000];
int length;
};
int findMax(int data[], int size)
{
int max = data[0];
for (int i = 1; i < size; i++)
{
if (max < data[i])
{
max = data[i];
}
}
return max;
}
void mergeData(int data[], int &size)
{
for (int i = 0; i < size - 1; i++)
{
for (int j = i + 1; j < size; j++)
{
if (data[i] == data[j])
{
data[i]++;
for (int k = j; k < size - 1; k++)
{
data[k] = data[k + 1];
}
size--;
mergeData(data, size);
}
}
}
}
//Main function
int main()
{
int numSets;
cin >> numSets;
Set* sets = new Set[100];
for (int i = 0; i < numSets; i++)
{
cin >> sets[i].length;
for (int j = 0; j < sets[i].length; j++)
{
cin >> sets[i].data[j];
}
}
for (int i = 0; i < numSets; i++)
{
mergeData(sets[i].data, sets[i].length);
cout << findMax(sets[i].data, sets[i].length) << endl;
}
return 0;
}
Answer: Looking at your mergeData() function, you'll notice that as soon as it finds 2 matching values, it combines them and recurses. What happens during and after the recursion? Let's take a look.
Suppose we have this set:
3,8,4,2,2,7,14
We first get 3. Next we start walking the rest of the list. We compare against 8. No match. We compare against 4. No match. We do this until we run out of things to compare. Then we increment i. We check the rest of the array. Etc. Eventually we come to the 2. We compare to the next value and there is a match! We combine them and recurse. What happens now? We start by comparing 3 to 8, which we've already done. Then again with 4, which we've also already done. So we're doing a bunch of work over and over again.
Then, eventually we return from our recursion. Now we go on to process the rest of the array. There's 2 problems here: 1) At this point, the recursion we're returning from has already processed the rest of the array, and 2) the array is no longer the same size, but we're going to process to the end of the original array anyway!
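One way to avoid both kinds of repeated work — a sketch only, not a full accepted solution — is to fuse adjacent equal values with a stack in a single left-to-right pass, so nothing is ever rescanned from the beginning. Note this treats the list as linear (it ignores the circular wrap-around) and always fuses greedily, so it illustrates the idea rather than solving the CodeChef task outright:

```cpp
#include <algorithm>
#include <vector>

// Fuse adjacent equal levels left-to-right with a stack. Each element is
// pushed and popped O(1) amortized times, so this is linear instead of
// re-walking the whole array after every fuse.
int fuseLinear(const std::vector<int>& levels) {
    std::vector<int> st;
    for (int v : levels) {
        // a newly placed value may cascade through several fusions
        while (!st.empty() && st.back() == v) {
            st.pop_back();
            ++v;  // two level-v weapons fuse into one level v+1
        }
        st.push_back(v);
    }
    return *std::max_element(st.begin(), st.end());
}
```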
I haven't worked out a full solution, but hopefully the above is enough to help you see what's wrong with your current solution. | {
"domain": "codereview.stackexchange",
"id": 28286,
"tags": "c++, programming-challenge, time-limit-exceeded, circular-list"
} |
pointcloud_to_laserscan parameter min_height | Question:
hey guys,
short question:
Is the pointcloud_to_laserscan package capable of using negative values as min_height?
Background information:
I'm desperately trying to get a scan out of the PointCloud2 data that my Kinect generates - as it happens, the Kinect is not at ground level, which means that some obstacles have negative height values...
that somehow seems to be a problem for the node - the obstacles are part of the pcl, but the laserscan does not see them...
I hope you get what my problem is, for sure I can't be the first to try this :(
Edit:
Update
As a workaround I've changed some part of the cloud_to_scan.cpp file and
manually increased the height - this works, even though I don't understand why
-y+1 < min_height with min_height =0
works, when
-y < min_height with a value of -1 fails...
I mean, if y is like 0.5 in the first case it is:
-0.5+1 <0 => 0.5 < 0 => false
in the second case:
-0.5 < -1 => false
Edit:
Thanks for your answer, that indicates that somehow I messed up my package.
I'm sure that I set the parameters correct in the launchfile, and just to check that I changed the default values - with no result.
Originally posted by Flowers on ROS Answers with karma: 342 on 2012-10-08
Post score: 0
Answer:
Yes, the min_height parameter can be negative (anything between -10.0 and 10.0 meters). The only thing to look out for is that pointcloud_to_laserscan assumes the input PointCloud to be in the Kinect optical frame (x to the right, y down, z forward), while most other coordinate systems in ROS have x forward, y to the left, z up. That means that -y is up in pointcloud_to_laserscan.
BTW, you can use the following command to adjust these parameters dynamically with a GUI; that makes parameter tuning much easier:
rosrun dynamic_reconfigure reconfigure_gui
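For reference, a minimal launch-file fragment setting a negative min_height could look like the following (the node type, topic names and values here are assumptions — adjust them to your own setup):

```xml
<launch>
  <node pkg="pointcloud_to_laserscan" type="pointcloud_to_laserscan_node"
        name="cloud_to_scan" output="screen">
    <!-- remember: heights are measured in the Kinect optical frame, where -y is up -->
    <param name="min_height" value="-1.0"/>
    <param name="max_height" value="0.5"/>
    <remap from="cloud_in" to="/camera/depth/points"/>
  </node>
</launch>
```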
Originally posted by Martin Günther with karma: 11816 on 2012-10-10
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by Flowers on 2012-10-10:
hmm, I'm pretty sure my launchfile works and sets the parameters right(tried some few extrema, which led to the results I expected)...still negative values won't work :( | {
"domain": "robotics.stackexchange",
"id": 11284,
"tags": "ros, kinect, pointcloud-to-laserscan"
} |
Is the four-jerk time-like or space-like? | Question: In the paper Dynamics of a Charged Particle the author claims after equation (10):
However, this equation is mathematically inconsistent because both $\dot v^\mu$ and $\dot F^\mu$ are spacelike fourvectors, i.e. are perpendicular to the velocity $v^\mu$, while $\ddot v^\mu$ is not.
I don't believe this is correct since the time component of the four-acceleration is zero in the proper frame. Differentiating this wrt the proper time will again give a zero time component for the proper four-jerk giving another space-like four vector.
Answer: The answer to the question "Is the four-jerk time-like or space-like?" is addressed by a paper by Russo and Townsend ("Relativistic kinematics and stationary motions", 9 October 2009,
Journal of Physics A: Mathematical and Theoretical, Volume 42, Number 44 - https://doi.org/10.1088/1751-8113/42/44/445402 - preprint at https://arxiv.org/abs/0902.4243 ).
In view of these observations, it seems remarkable that the relativistic generalization of
jerk, snap, etc. has attracted almost no attention in more than a century since the foundation
of special relativity. It might be supposed that this is because there is little new to relativistic
kinematics once one has defined the D-acceleration A, in a D-dimensional Minkowski spacetime,
as the proper-time derivative of the D-velocity U:
$$A =\frac{dU}{d\tau}=\gamma \frac{dU}{dt},\qquad \gamma= \frac{1}{\sqrt{1 - v^2}} . \qquad(1.2) $$
In particular, it is natural to suppose that one should define the relativistic jerk as $J = \frac{dA}{d\tau}$.
However, J is not necessarily spacelike.
This was pointed out in our previous paper [3] and it
led us to define the relativistic jerk as
$$\Sigma = J - A^2U , \qquad J = dA/d\tau.\qquad (1.3)$$
Observe that $U\cdot\Sigma \equiv 0$, which implies that $\Sigma$ is spacelike if non-zero.
Following the posting in the archives of the original version of this paper, it was brought to
our attention that relativistic jerk arises naturally in the context of the Lorentz-Dirac equation,... | {
"domain": "physics.stackexchange",
"id": 39383,
"tags": "special-relativity, jerk"
} |
Dot product in cylindrical coordinates | Question: I'm given the vector:
$$\vec{V}{(r,θ,z)}=\frac{1}{r}\hat{e_r} + (r\cosθ)\hat{e_θ}+\frac{z^2}{r^2}\hat{e_z}$$
I want the scalar product ${\vec{\nabla}}\cdot{\vec{V}}$
We know that in cylindrical coordinates : $$\vec{\nabla}=\left<\frac{\partial}{\partial r},\frac{1}{r}\frac{\partial}{\partial θ},\frac{\partial}{\partial z} \right>$$
So , the product should be
$${\vec{\nabla}}\cdot{\vec{V}} =\frac{\partial}{\partial r}\left(\frac{1}{r}\right) + \frac{1}{r}\frac{\partial}{\partial θ}(r\cosθ)+\frac{\partial}{\partial z}\left(\frac{z^2}{r^2}\right) = -\frac{1}{r^2}-\sinθ +\frac{2z}{r^2}$$
However , in the answers , the answer given is this :
$${\vec{\nabla}}\cdot{\vec{V}}=\frac{1}{r}\Big\{\frac{\partial}{\partial r}(1)+\frac{\partial}{\partial θ}(r\cosθ)+\frac{1}{r}\frac{\partial}{\partial z}(z^2)\Big\}=-\sinθ+\frac{2z}{r^2}$$
I don't understand why $\frac{1}{r}$ was factored out and how is that possible. I understand you can factor it out for the partial derivative with respect to $θ$ and $z$ but in the first one, which is with respect to $r$, it shouldn't be factored out, it should be differentiated. Any thoughts? Am I missing something or is there a typo in the answers?
Answer: The divergence operator in cylindrical coordinates is actually different from what you believe it to be:
$$
\nabla\cdot\mathbf A=\frac{1}{r}\frac{\partial}{\partial r}\left(r A_r\right)+\frac{1}{r}\,\frac{\partial A_\theta}{\partial\theta}+\frac{\partial A_z}{\partial z}
$$
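As a quick sanity check, this corrected formula applied to the vector field in the question reproduces the book's answer (a SymPy sketch):

```python
import sympy as sp

r, theta, z = sp.symbols('r theta z', positive=True)

# components of V in cylindrical coordinates, from the question
V_r = 1 / r
V_theta = r * sp.cos(theta)
V_z = z**2 / r**2

# div V = (1/r) d(r V_r)/dr + (1/r) dV_theta/dtheta + dV_z/dz
div_V = (sp.diff(r * V_r, r) / r
         + sp.diff(V_theta, theta) / r
         + sp.diff(V_z, z))
print(sp.simplify(div_V))  # equals -sin(theta) + 2*z/r**2
```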
You seem to be confusing it with the gradient operator, which has the form you specify:
$$
\nabla f=\frac{\partial f}{\partial r}\hat{r}+\frac{1}{r}\,\frac{\partial f}{\partial \theta}\hat{\theta}+\frac{\partial f}{\partial z}\hat{z}
$$
(though obviously you're ignoring the unit vectors). | {
"domain": "physics.stackexchange",
"id": 52726,
"tags": "vectors, differentiation, calculus"
} |
Class diagram of Tic-Tac-Toe Game | Question: I wrote a basic tic toe game. See, https://jsfiddle.net/shai1436/Lgy1u84s/4/
I am not satisfied with the way I have designed the classes and how I have implemented the undo feature. Please give me your feedback.
//player0 is O and player1 is X
let board;
class Game {
constructor() {
this.player = 0;
this.setTurnText(this.player + 1);
this.setResultText();
}
togglePlayer() {
if (this.player === 1)
this.player = 0;
else
this.player = 1;
this.setTurnText(this.player + 1);
}
setTurnText(player) {
const ele = document.getElementById('turn-text');
ele.innerText = 'Player ' + player + ' turn';
}
setResultText() {
const ele = document.getElementById('result-text');
ele.innerText = ' ';
}
declareWinner(player) {
const ele = document.getElementById('result-text');
ele.innerText = 'Player ' + player + ' won';
console.log("player " + player + " won ");
}
declareDraw() {
const ele = document.getElementById('result-text');
ele.innerText = ' Draw ';
}
}
class Board {
constructor() {
this.gameBoard = new Array(new Array(3), new Array(3), new Array(3));
this.gameStatus = null; // 0: player0 wins, 1: player1 wins, 2: draw, null: undecided
this.cellsFilled = 0;
this.findGameStatus = this.findGameStatus.bind(this);
this.game = new Game();
this.boardCanvas = new BoardCanvas('canvas');
this.gameHistory = new Array();
}
updateBoard(indices) {
if (!this.canDraw(indices))
return;
this.gameBoard[indices.x][indices.y] = this.game.player;
this.gameHistory.push(indices);
this.cellsFilled++;
this.updateBoardCanvas();
this.findGameStatus(indices);
if (this.gameStatus === 0 || this.gameStatus === 1)
this.game.declareWinner(this.gameStatus + 1);
else if (this.gameStatus === 2)
this.game.declareDraw();
this.game.togglePlayer();
}
updateBoardCanvas() {
this.boardCanvas.drawBoard(this.gameBoard);
}
undo() {
const indices = this.gameHistory.pop();
this.gameBoard[indices.x][indices.y] = undefined;
this.updateBoardCanvas();
this.game.togglePlayer();
this.cellsFilled--;
}
canDraw(indices) {
const iscellEmpty = this.gameBoard[indices.x][indices.y] === undefined;
const isGameInProgress = this.gameStatus === null;
return iscellEmpty && isGameInProgress;
}
findGameStatus(indices) {
if (this._checkRow(indices) ||
this._checkColumn(indices) ||
this._checkDiagonal() ||
this._checkReverseDiagonal()) {
this.gameStatus = this.game.player;
}
else if (this.cellsFilled === 9) {
this.gameStatus = 2;
}
}
_checkRow(indices) {
const row = indices.x;
for (let i = 0; i < 3; i++) {
if (this.gameBoard[row][i] !== this.game.player)
return false;
}
return true;
}
_checkColumn(indices) {
const col = indices.y;
for (let i = 0; i < 3; i++) {
if (this.gameBoard[i][col] !== this.game.player)
return false;
}
return true;
}
_checkDiagonal() {
for (let i = 0; i < 3; i++) {
if (this.gameBoard[i][i] !== this.game.player)
return false;
}
return true;
}
_checkReverseDiagonal() {
for (let i = 0; i < 3; i++) {
if (this.gameBoard[i][2 - i] !== this.game.player)
return false;
}
return true;
}
}
class BoardCanvas {
constructor(id) {
this.canvas = document.getElementById(id);
this.ctx = this.canvas.getContext('2d');
this.drawBoard();
this.addClickListener();
}
mapIndicesToCanvasCells(x, y) {
var bbox = this.canvas.getBoundingClientRect();
const loc = {
x: x - bbox.left * (canvas.width / bbox.width),
y: y - bbox.top * (canvas.height / bbox.height)
};
loc.x = Math.floor(loc.x / 100) * 100;
loc.y = Math.floor(loc.y / 100) * 100;
return loc;
}
drawCross(y, x) {
this.ctx.save();
this.ctx.translate(x, y);
this.ctx.beginPath();
this.ctx.moveTo(20, 20);
this.ctx.lineTo(80, 80);
this.ctx.moveTo(80, 20);
this.ctx.lineTo(20, 80);
this.ctx.stroke();
this.ctx.restore();
}
drawCircle(y, x) {
this.ctx.save();
this.ctx.translate(x, y);
this.ctx.beginPath();
this.ctx.arc(50, 50, 30, 0, Math.PI * 2, true);
this.ctx.stroke();
this.ctx.restore();
}
drawBoard(board) {
this.clearBoard();
for (let i = 0; i < 3; i++) {
for (let j = 0; j < 3; j++) {
this.ctx.strokeRect(100 * i, 100 * j, 100, 100);
if (board && board[i][j] === 0)
this.drawCircle(100 * i, 100 * j);
else if (board && board[i][j] === 1)
this.drawCross(100 * i, 100 * j);
}
}
}
addClickListener() {
this.canvas.onclick = (e) => {
const loc = this.mapIndicesToCanvasCells(e.clientX, e.clientY);
const indices = {};
let temp = loc.x;
indices.x = Math.floor(loc.y / 100);
indices.y = Math.floor(temp / 100);
board.updateBoard(indices);
}
}
clearBoard() {
this.ctx.clearRect(0, 0, this.canvas.width, this.canvas.height);
}
}
const init = () => {
board = new Board();
}
init();
const undo = () => {
board.undo();
}
window.init = init;
window.undo = undo;
Answer: Player is confusing
Only one player is defined. This spawns the need for confusing code that reads like this.player is 0, then 1, then 2, then 3, and so on. And player-value incrementing is spread over many methods, which makes my spidey sense say "uh-oh, player disconnects ahead!".
changePlayer() { this.player = this.player === 0 ? 1 : 0; }
currentPlayer() { return this.player; }
Personally, when I write a second method for a given thing I start to consider making a separate class. Classes should be about exposing related functionality. Good classes expose functionality and hide state.
Array Iterator Functions
Read up on Array.map, Array.every, Array.some, et cetera. These will really clean up the array looping.
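For example, the four _check* loops in the reviewed code could collapse into one-liners (a sketch with standalone functions, assuming a 3x3 board of 0/1/undefined cells as in your gameBoard):

```javascript
// board is assumed to be a 3x3 array whose cells are 0, 1 or undefined
const rowIsWon = (board, row, player) =>
  board[row].every(cell => cell === player);

const colIsWon = (board, col, player) =>
  board.every(row => row[col] === player);

const diagIsWon = (board, player) =>
  board.every((row, i) => row[i] === player);

const antiDiagIsWon = (board, player) =>
  board.every((row, i) => row[2 - i] === player);
```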
Class decoupling
Class purpose needs to be more precisely nailed down conceptually. Then the existing coupling will be more evident. Mixing UI functionality into many classes seems universal in my coding experiences. It just too easy to do when updating the screen is a simple one liner.
Game sounds like it should be the overall game manager. It should be coordinating the other objects through their own APIs, but is directly manipulating raw state that should be in other objects such as display.
Board is only the board and should only be aware of its own state - which squares are occupied. But it is also handling display details. gameHistory sounds like high level functionality that belongs in a conceptually higher level class.
BoardCanvas sounds like the place for display functions, but it is not the only class touching them. The DOM and Canvas are conceptually display components for tic-tac-toe and only BoardCanvas should have to use them. BoardCanvas needs a tic-tac-toe-appropriate API. addClickListener() is a spot-on example of good decoupling.
Board contains a Game or vice versa?
As a general rule, have higher level classes contain lower level classes. Board is a low-level and thus "stupid" class. Keep it stupid. It should not be coordinating Game - BoardCanvas interaction, which will happen if you invert the containment hierarchy.
undo
const undo = () => { board.undo(); }
You'll end up naturally writing lots of these "pass through" functions with decoupled classes. This invisible hand of OO, so to speak, will make high level classes read appropriately high level and classes at all levels will be able to "mind their own business".
game flow logic
In the spirit of expressing high level logic, at the highest level, I imagine the game as a loop. Whether this logic is in Game or a new class is a design decision but the overall point is "layers of abstraction" in the application.
// initialize variables, create objects, etc.
var noWinner = true;
...
while (noWinner) {
...
// testing for a winner or tie game should be somewhere in the
// method we're calling (or method chain it might be calling).
// An if-else in this game loop takes away part of the "who won?" logic
// from its proper place.
noWinner = !this.hasWon(currentPlayer());
}
boardCanvas.displayWinner(this.winner);
// I suppose the winner could be "its a tie" | {
"domain": "codereview.stackexchange",
"id": 35525,
"tags": "javascript, tic-tac-toe"
} |
Is the velocity a scalar or a vector in one dimensional Lorentz transformations? | Question: Is the sign of velocity v important in the one dimensional Lorentz transformations?
My question arises because the length contraction and the time dilation effects will work out in exactly the same way independent of the direction of motion of the moving body.
Answer: The sign has no effect on the factors. The direction only matters to length contraction because lengths are contracted in that direction. | {
"domain": "physics.stackexchange",
"id": 62270,
"tags": "special-relativity, relativity, lorentz-symmetry, relative-motion"
} |
Proper way to compare two Dictionaries | Question: I am implementing an IEqualityComparer for Dictionary objects and am looking for input on a couple of different approaches. I define equality in this case to be that both dictionaries contain the same set of KeyValuePair's as defined by equality of the hash value for the respective keys and values.
The first generates a hash value by XORing all of the keys and values in both dictionaries and comparing them. The other uses the HashSet collection and its SymmetricExceptWith method. Are these functionally equivalent, and are there pros/cons to either approach, or better ways to accomplish this? Both approaches are working for my test cases.
GetHashCode approach:
class DictionaryComparer<TKey, TValue> : IEqualityComparer<IDictionary<TKey, TValue>>
{
public DictionaryComparer()
{
}
public bool Equals(IDictionary<TKey, TValue> x, IDictionary<TKey, TValue> y)
{
// fail fast if count are not equal
if (x.Count != y.Count)
return false;
return GetHashCode(x) == GetHashCode(y);
}
public int GetHashCode(IDictionary<TKey, TValue> obj)
{
int hash = 0;
foreach (KeyValuePair<TKey, TValue> pair in obj)
{
int key = pair.Key.GetHashCode(); // key cannot be null
int value = pair.Value != null ? pair.Value.GetHashCode() : 0;
hash ^= ShiftAndWrap(key, 2) ^ value;
}
return hash;
}
private int ShiftAndWrap(int value, int positions)
{
positions = positions & 0x1F;
// Save the existing bit pattern, but interpret it as an unsigned integer.
uint number = BitConverter.ToUInt32(BitConverter.GetBytes(value), 0);
// Preserve the bits to be discarded.
uint wrapped = number >> (32 - positions);
// Shift and wrap the discarded bits.
return BitConverter.ToInt32(BitConverter.GetBytes((number << positions) | wrapped), 0);
}
}
HashSet approach:
class DictionaryComparer<TKey, TValue> : IEqualityComparer<IDictionary<TKey, TValue>>
{
public DictionaryComparer()
{
}
public bool Equals(IDictionary<TKey, TValue> x, IDictionary<TKey, TValue> y)
{
if (x.Count != y.Count)
return false;
HashSet<KeyValuePair<TKey, TValue>> set = new HashSet<KeyValuePair<TKey, TValue>>(x);
set.SymmetricExceptWith(y);
return set.Count == 0;
}
}
Answer: A 32-bit hash returned by GetHashCode has 2^32 possible values, with a probability distribution dependent on the hashing function. If there are more than 2^32 possible input values then you will get collisions (see here). And while we like to think collisions are rare, they turn up a lot more frequently than we like to think. It gets worse when people are actively attacking you through your hashing function.
@svick is correct that you can't use a hash code to compare objects for equality. All you can be certain of (assuming a consistent hash implementation) is that two objects with different hashes are not equal. No other guarantee is given.
Depending on the cost of generating the hashes, you might actually be better off not using them in this instance.
The only really guaranteed equality test for a pair of Dictionary instances is to examine their contents.
The simple shortcuts you can implement:
Check if either instance is null (it happens)
Check if both input Dictionary instances are the same instance
Check if the counts differ
The other slight speed improvement is to check the keys first. Often checking the keys is a faster operation than checking the values.
Something like:
public bool Equals<TKey, TValue>(IDictionary<TKey, TValue> x, IDictionary<TKey, TValue> y)
{
// early-exit checks
if (null == y)
return null == x;
if (null == x)
return false;
if (object.ReferenceEquals(x, y))
return true;
if (x.Count != y.Count)
return false;
// check keys are the same
foreach (TKey k in x.Keys)
if (!y.ContainsKey(k))
return false;
// check values are the same
foreach (TKey k in x.Keys)
if (!x[k].Equals(y[k]))
return false;
return true;
}
Adding a loop to check for hash inequality might improve the speed. Try it and see. | {
"domain": "codereview.stackexchange",
"id": 4410,
"tags": "c#, .net, hash-map"
} |
Making sense out of the visual representation of transcription | Question: Most people are familiar with the following diagram. Some genomic DNA with a promoter region, exons and introns. This is transcribed into RNA that is then translated into a polypeptide.
When we look closer at the strand that is being transcribed we can distinguish between the two as the sense and anti-sense strands.
So the transcription factors and RNA polymerase bind and begin transcribing mRNA in the 5' to 3' direction, thus reading the anti-sense strand in the 3' to 5' direction; the mRNA has the same sequence as the sense DNA strand, substituting U for T.
My question would be, shouldn't the exons be numbered in the reverse order as shown in the first picture I provided. So instead of Promoter -> Exon1 -> Intron -> Exon 2, should it be, Promoter -> Exon N -> Intron -> Exon N-1?
Also, in bioinformatic sites are the gene sequences listed in the sense or anti-sense strand? I have noticed in some bioinformatic tools, to determine what polypeptide will result from a DNA sequence, one must input the sense strand in 5'to 3' orientation and not the anti-sense strand.
Answer: All visual representations and nearly all coordinate systems are based on the sense strand. The polymerase machinery has no clue about what is sense and what is antisense, because each is the antisense of the other.
For visual representation this makes much more sense and conveys the information more clearly, as it removes an extra, complicated layer of information. And, in most cases the gene structures are declared in the order of the reference genome, which is always the positive or sense strand.
Next, coming to the bioinformatics part: most of your databases, such as UCSC, Ensembl and NCBI, maintain gene coordinates on the reference genome. But there's a catch when reporting the information through a BED file.
Negative-stranded genes are reported as chromosome stop start by NCBI (last I used it was a year and a half ago), while UCSC provides the chromosome start stop; both report the strandedness. UCSC expects that you, the bioinformatician, will create the reverse complement when you find the strand information, while NCBI expects that your program will fail a sanity check because stop - start will come out negative, implying that you cannot make a mistake while parsing NCBI BED files. Furthermore, UCSC indexes are maintained as 0-based, while NCBI's are 1-based.
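As an aside, the reverse-complement step mentioned above is essentially a one-liner in most languages (a sketch, assuming plain upper-case ACGT sequences):

```python
# translation table for Watson-Crick base pairing
COMPLEMENT = str.maketrans("ACGT", "TGCA")

def reverse_complement(seq):
    """Complement each base, then reverse: the 5'->3' read of the other strand."""
    return seq.translate(COMPLEMENT)[::-1]

print(reverse_complement("AACGTG"))  # CACGTT
```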
I would urge you to validate this information
But why not just keep a negative-strand reference as well, while keeping the gene coordinates and elements for the antisense strand in the format you just mentioned?
Because, speaking from a computational point of view, it just makes more sense not to: such a system would consume more storage (please remember the entire system was formulated before storage became as cheap as it is today) and would consume more memory during tasks (exactly double of what it consumes today). So it's just better to have a positive-strand reference genome with all genes and elements based on that.
Just an example of how alignment of sequencing reads works:
You align your read to the reference genome
Aligns? If yes it has mapped to the positive strand
No? Reverse complement the read and align back
Aligns? If yes it has mapped to the negative strand
No? Possibly an erroneous read or other artefacts. | {
"domain": "biology.stackexchange",
"id": 5909,
"tags": "transcription"
} |
Divergent sum in lightcone quantization of bosonic string theory | Question: I had the following question regarding lightcone quantization of bosonic strings - The normal ordering requirement of quantization gives us this infinite sum $\sum_{n=1}^\infty n$. This is regularized in several ways, for example by writing
$$
\sum_{n=1}^\infty e^{- n \epsilon } n = \frac{1}{\epsilon^2} - \frac{1}{12} + {\cal O}(\epsilon^2)
$$
Most texts now simply state that the divergent part can be removed by counterterms. David Tong's notes (chapter 2, page 29) specifically state that this divergence is removed by the counterterm that restores Weyl invariance in the quantized theory (in dimensional regularization).
I would like to see this explicitly. Is there any note regarding this? Or if you have any other idea how one would systematically remove the divergence above, it would be great!
Answer: Note that $n$ is really the momentum in the $\sigma$ direction so it has the units of the world sheet mass. The exponent $-n\epsilon$ in the regulator has to be dimensionless so $\epsilon$ has the units of the world sheet distance.
Consequently, the removed term $1/\epsilon^2$ has the units of the squared world sheet mass. These are the same units as the energy density in 1+1 dimensions. If you just redefine the stress energy tensor on the world sheet as
$$T_{ab} \to T_{ab} + \frac{C}{\epsilon^2} g_{ab}$$
where $C$ is a particular number of order one you may calculate (that depends on conventions only), it will redefine your Hamiltonian so that the ground state energy is shifted in such a way that the $1/\epsilon^2$ term is removed.
This "cosmological constant" contribution to the stress-energy tensor may be derived from the cosmological constant term in the world sheet action, essentially $C\int d^2\sigma\sqrt{-h}$. Classically, this term violates the Weyl symmetry. However, quantum mechanically, there are also other loop effects that violate this symmetry – your regulated calculation of the ground state energy is a proof – and this added classical counterterm is needed to restore the Weyl (scaling) symmetry.
It's important that this counterterm and all the considerations above are unable to change the value of the finite leftover, $-1/12$, which is the true physical finite part of the sum of positive integers. This is the conclusion we may obtain in numerous other regularization techniques. The result is unique because it really follows from the symmetries we demand – the world sheet Weyl symmetry or modular invariance. | {
"domain": "physics.stackexchange",
"id": 11634,
"tags": "string-theory, renormalization, regularization"
} |
Properties of steel and aluminum alloys | Question: I have been trying to compare 304 stainless steel with 7075 aluminum for some personal research. This steel alloy has a specific heat capacity of 500 J/kg-C, while for aluminum the value is 960 J/kg-C. Does this mean it is harder to heat up aluminum than steel? Then, steel has a thermal conductivity of 16.2 W/m-K, while for aluminum the value is about 130 W/m-K. Does this mean that aluminum conducts heat away better than steel? Thank you.
Answer: Heat capacity (specific heat) varies inversely with atomic mass, the Dulong–Petit law. Al is about 27 amu, and Fe is about 56 amu, so as you noted, it would be expected that aluminum stores more heat. An extreme example is lead solder, which has such low specific heat that a calloused plumber's hand can wipe a solder joint with little discomfort.
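That inverse relationship is easy to check: by the Dulong–Petit law, the molar heat capacity (specific heat times molar mass) should land near 3R for both metals. A rough check using the question's numbers, treating the alloys as pure Al and Fe (an approximation):

```python
R = 8.314  # gas constant, J/(mol*K)

# name: (specific heat in J/(kg*K) from the question, molar mass in kg/mol)
metals = {
    "Al (7075 alloy ~ pure Al)": (960, 0.0270),
    "Fe (304 steel ~ pure Fe)": (500, 0.0558),
}

for name, (c, molar_mass) in metals.items():
    molar_c = c * molar_mass
    print(f"{name}: c*M = {molar_c:.1f} J/(mol*K)  vs  3R = {3*R:.1f}")
# both come out within roughly 12% of 3R ~ 24.9 J/(mol*K)
```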
As for thermal conductivity, that property is associated with the rigidity of crystal lattices, where sound travels as phonons, as in diamond, and with electrical conductivity, where "freely moving valence electrons transfer not only electric current but also heat energy." Aluminum and copper are among the best metallic conductors, so are used for cooking-pan bottoms to spread heat evenly. Since stainless steel has many additions to Fe, such as Ni and Cr, these inclusions provide discontinuities at grain boundaries that further impede heat transfer. Dewar flasks are made of stainless steel or glass, rather than Al, because they are poor conductors of heat. | {
"domain": "chemistry.stackexchange",
"id": 6318,
"tags": "physical-chemistry"
} |
Refactored game of Snake | Question: A week ago I requested a review of my code for a game of Snake.
First game of Snake
I made some changes based on your answers and now I want to show you present code. Something else to modify here?
GameMain.java
import javax.swing.*;
public class GameMain extends JFrame{
public static void main(String[] args) {
JFrame frame = new GameInstant();
frame.setTitle("Snake Game");
frame.setSize(1000,800);
frame.setResizable(false);
frame.setLocationRelativeTo(null);
frame.setDefaultCloseOperation(JFrame.DISPOSE_ON_CLOSE);
frame.setVisible(true);
}
}
GameInstance.java
import javax.swing.*;
import java.awt.*;
import java.awt.event.KeyEvent;
import java.awt.event.KeyListener;
import java.util.concurrent.ScheduledThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
public class GameInstant extends JFrame {
private JPanel scorePanel;
SnakeGame snakeGame = new SnakeGame();
public GameInstant() {
addKeyListener(new KeyListener() {
@Override
public void keyTyped(KeyEvent e) {
}
@Override
public void keyPressed(KeyEvent e) {
if (e.getKeyCode() == KeyEvent.VK_LEFT) {
snakeGame.storeDirectionOfSnake(Direction.LEFT);
} else if (e.getKeyCode() == KeyEvent.VK_UP) {
snakeGame.storeDirectionOfSnake(Direction.UP);
} else if (e.getKeyCode() == KeyEvent.VK_RIGHT) {
snakeGame.storeDirectionOfSnake(Direction.RIGHT);
} else if (e.getKeyCode() == KeyEvent.VK_DOWN) {
snakeGame.storeDirectionOfSnake(Direction.DOWN);
}
}
@Override
public void keyReleased(KeyEvent e) {
}
});
DrawingTheBoard gamePanel = new DrawingTheBoard();
this.add(gamePanel, BorderLayout.CENTER);
scorePanel = new JPanel();
scorePanel.add(gamePanel.scoreLabel, BorderLayout.CENTER);
this.add(scorePanel, BorderLayout.PAGE_END);
ScheduledThreadPoolExecutor executor = new ScheduledThreadPoolExecutor(5);
executor.scheduleAtFixedRate(new RepaintTheBoard(this), 0, snakeGame.getGameSpeed(), TimeUnit.MILLISECONDS);
}
}
class RepaintTheBoard implements Runnable {
private GameInstant theGame;
public RepaintTheBoard(GameInstant theGame) {
this.theGame = theGame;
}
public void run() {
theGame.repaint();
}
}
class DrawingTheBoard extends JComponent {
public JLabel scoreLabel;
private boolean inGame = false;
private int score = 0;
CellData[][] board;
SnakeGame snakeGame = new SnakeGame();
GameBoard gameBoard = new GameBoard();
public DrawingTheBoard() {
board = gameBoard.getBoard();
scoreLabel = new JLabel("Score: " + score);
scoreLabel.setFont(new Font("Serif", Font.PLAIN, 40));
}
public void paint(Graphics g) {
Graphics2D g2D = (Graphics2D) g;
g2D.setBackground(Color.BLACK);
g2D.fillRect(0, 0, getWidth(), getHeight());
update();
for (int i = 0; i < gameBoard.getxCells(); i++) {
for (int j = 0; j < gameBoard.getyCells(); j++) {
if (board[i][j] == CellData.APPLE || board[i][j] == CellData.SNAKE) {
g2D.setPaint(Color.WHITE);
g2D.fillRect(i * 10, j * 10, 10, 10);
} else if (board[i][j] == CellData.WALL) {
g2D.setPaint(Color.RED);
g2D.fillRect(i * 10, j * 10, 10, 10);
}
}
}
if (snakeGame.hasEatenApple()) {
score += 10;
scoreLabel.setText("Score: " + Integer.toString(score));
} else if (snakeGame.isDead()) {
score = 0;
scoreLabel.setText("Score: " + Integer.toString(score));
}
}
public void update() {
if (inGame == false) {
snakeGame.initializeGame();
inGame = true;
}
snakeGame.changeSnakeDirection();
snakeGame.updateSnake();
if (snakeGame.snakeIsDead()) {
snakeGame.removeSnake();
snakeGame.initializeGame();
}
snakeGame.updateApple();
snakeGame.updateBoard();
}
}
SnakeGame.java
import java.util.LinkedList;
public class SnakeGame {
private int gameSpeed = 100;
private LinkedList<Point> body;
private Point head;
private static boolean eatenApple = false;
private static boolean isDead = false;
private static Direction snakeDirection;
Snake theSnake = new Snake();
Apple theApple = new Apple();
GameBoard board = new GameBoard();
public SnakeGame() {
}
public void initializeGame() {
board.cleanBoard();
theSnake.createSnake(board.getxCells() / 2, board.getyCells() / 2);
theApple.createNewApple();
addAppleToGameBoard();
}
public boolean collidesWith(CellData cellData) {
body = theSnake.getBody();
head = body.get(0);
CellData cell = board.getBoard()[head.getX()][head.getY()];
return (cell == cellData);
}
public boolean snakeIsDead() {
if (collidesWith(CellData.WALL)
|| collidesWith(CellData.SNAKE)) {
isDead = true;
return true;
} else {
isDead = false;
return false;
}
}
public void takeAppleFromGameBoard() {
board.setDataCell(theApple.getRandomXPos(), theApple.getRandomYPos(), CellData.EMPTY);
}
public void addAppleToGameBoard() {
board.setDataCell(theApple.getRandomXPos(), theApple.getRandomYPos(), CellData.APPLE);
}
public void updateApple() {
if (collidesWith(CellData.APPLE)) {
takeAppleFromGameBoard();
theSnake.eat();
theApple.createNewApple();
eatenApple = true;
} else {
eatenApple = false;
}
}
public void storeDirectionOfSnake(Direction direction) {
snakeDirection = direction;
}
public void changeSnakeDirection(){
if (snakeDirection != null) {
theSnake.changeDirection(snakeDirection);
}
}
public void addSnakeToBoard() {
body = theSnake.getBody();
for (int i = 0; i < body.size(); i++) {
board.setDataCell(body.get(i).getX(), body.get(i).getY(), CellData.SNAKE);
board.setDataCell(theSnake.getTailCell().getX(), theSnake.getTailCell().getY(), CellData.EMPTY);
}
}
public void updateSnake() {
theSnake.update();
}
public void updateBoard(){
addAppleToGameBoard();
addSnakeToBoard();
}
public void removeSnake() {
body = theSnake.getBody();
theSnake.clearBody();
for (int i = 0; i < body.size(); i++) {
board.setDataCell(body.get(i).getX(), body.get(i).getY(), CellData.EMPTY);
}
}
public int getGameSpeed() {
return gameSpeed;
}
public boolean hasEatenApple() {
return eatenApple;
}
public boolean isDead() {
return isDead;
}
}
GameBoard.java
public class GameBoard {
private int boardWidth = 1000;
private int boardHeight = 700;
private int xCells = boardWidth / 10;
private int yCells = boardHeight / 10;
private static CellData board[][];
public GameBoard() {
board = new CellData[xCells][yCells];
}
public void cleanBoard() {
for (int i = 0; i < xCells; i++) {
board[i][0] = CellData.WALL;
}
for (int i = 0; i < xCells; i++) {
board[i][yCells - 1] = CellData.WALL;
}
for (int j = 0; j < yCells; j++) {
board[0][j] = CellData.WALL;
}
for (int j = 0; j < yCells; j++) {
board[xCells - 1][j] = CellData.WALL;
}
for (int i = 1; i < xCells - 1; i++) {
for (int j = 1; j < yCells - 1; j++) {
board[i][j] = CellData.EMPTY;
}
}
}
public void setDataCell(int x, int y, CellData cellData) {
board[x][y] = cellData;
}
public CellData[][] getBoard() {
return board;
}
public int getxCells() {
return xCells;
}
public int getyCells() {
return yCells;
}
}
Apple.java
import java.util.Random;
public class Apple {
private int randomXPos;
private int randomYPos;
Random r = new Random();
GameBoard board = new GameBoard();
public Apple(){
}
public void createNewApple(){
randomXPos = r.nextInt(board.getxCells()-2)+1;
randomYPos = r.nextInt(board.getyCells()-2)+1;
}
public int getRandomXPos(){
return randomXPos;
}
public int getRandomYPos(){
return randomYPos;
}
}
Snake.java
import java.awt.*;
import java.util.LinkedList;
public class Snake{
private LinkedList<Point> body; // list holding points(x,y) of snake body
private Point head;
private static Direction headDirection;
private static Point tailCell;
private static boolean hasEatenApple = false;
public Snake() {
body = new LinkedList<>();
}
public void createSnake(int x, int y) {
//creating 3-part starting snake
body.addFirst(new Point(x,y));
body.add(new Point(x - 1, y));
body.add(new Point(x - 2, y));
headDirection = Direction.RIGHT;
tailCell = body.getLast();
}
public void clearBody(){body.clear();
}
public void changeDirection(Direction theDirection) {
if (theDirection != headDirection.opposite())
this.headDirection = theDirection;
}
//updating localisation of snake
public void update() {
addPartOfBody(headDirection.getX(), headDirection.getY());
}
private void addPartOfBody(int x, int y) {
head = body.get(0);
body.addFirst(new Point(head.getX() + x, head.getY() + y));
tailCell = body.getLast();
if (hasEatenApple == false) {
body.removeLast();
} else {
hasEatenApple = false;
}
}
public LinkedList<Point> getBody() {
return (LinkedList<Point>) body.clone();
}
public Point getTailCell(){return tailCell;}
public void eat() {
hasEatenApple = true;
}
}
Point.java
public class Point {
private int x;
private int y;
public Point(int x, int y) {
this.x = x;
this.y = y;
}
public int getX() {
return x;
}
public int getY() {
return y;
}
}
Direction.java
public enum Direction {
LEFT {
Direction opposite() {
return RIGHT;
}
int getX(){
return -1;
}
int getY(){
return 0;
}
},
RIGHT {
Direction opposite() {
return LEFT;
}
int getX(){
return 1;
}
int getY(){
return 0;
}
},
UP {
Direction opposite() {
return DOWN;
}
int getX(){
return 0;
}
int getY(){
return -1;
}
},
DOWN {
Direction opposite() {
return UP;
}
int getX(){
return 0;
}
int getY(){
return 1;
}
};
abstract Direction opposite();
abstract int getX();
abstract int getY();
}
CellData.java
public enum CellData {
EMPTY, SNAKE, APPLE, WALL;
}
Answer: Suggestions
Currently, it seems as if you would like the game to run in full-screen mode, but what about the few people in the world still having 1280x720 or lower displays on their systems? 1000x800 will run out of vertical screen space on such displays.
If you want to make a proper full-screen display of your JFrame, try the following code from this SO answer: JFrame in full screen Java:
frame.setExtendedState(JFrame.MAXIMIZED_BOTH);
frame.setUndecorated(true);
Use this just before the .setVisible(true) call.
You should also take a look at this, the Java Exclusive Full-Screen mode API. That should help when you want to get fullscreen properly (which the previous suggestion essentially is not, as it's not exclusive).
You already import java.awt. Why not use its Point class instead of rolling your own? It works pretty much the same, so it should be a drop-in replacement at this stage.
Get your boardWidth and boardHeight parameters from the host JFrame as parameters to your GameBoard constructor, and move all field initialization there. This should make your code more flexible against different resolutions.
You don't use the java.awt.* import in Snake.java, so you can safely get rid of it.
Any reason why eatenApple and isDead are static in SnakeGame? I don't think that they need to be.
Division of responsibility:
I feel that spawning an apple on the board should be the responsibility of the board, not the apple. Also, maintaining the state of a snake should be the responsibility of the snake, not the game logic. So, createNewApple() should belong to GameBoard and isDead() should belong to Snake, along with the previously mentioned variables (point 5). If you absolutely need to, you could expose these values using getters in SnakeGame.
The next is a tricky point, what you've done is correct, just letting you know why you shouldn't change it in the future.
Instead of the LinkedList you use for the points of the snake's body, never switch to a java.util.ArrayList, even if you do an ensureCapacity(xCells*yCells-2*(xCells+yCells)) call on the ArrayList object when initializing it to prevent reallocations (xCells*yCells-2*(xCells+yCells) is the maximum length of the snake).
A linked list takes O(n) time to remove its last element if it's a singly linked list with only a head pointer, whereas for an array it is always an O(1) operation. Now java.util.LinkedList is a doubly linked list, for which deletion of the last element can be done in O(1) time, so in this case the time complexity is not an obvious saving. However, when it comes to adding an element to the head of the list, the story is completely different: then ArrayList takes O(n) time while LinkedList takes O(1).
TL;DR Keep using LinkedList.
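The same asymptotic argument can be sketched outside Java. A small Python experiment (an illustration, not your code: Python's deque plays the role of Java's LinkedList, Python's list the role of ArrayList) makes the head-insertion gap visible:

```python
from collections import deque
import timeit

N = 20000

def grow_deque():
    d = deque()
    for i in range(N):
        d.appendleft(i)      # O(1): deque supports constant-time appends at both ends
    return d

def grow_list():
    lst = []
    for i in range(N):
        lst.insert(0, i)     # O(n): every existing element is shifted right
    return lst

t_deque = timeit.timeit(grow_deque, number=3)
t_list = timeit.timeit(grow_list, number=3)
# both produce the same sequence, but the array-backed version is far slower
print(f"deque: {t_deque:.3f}s  list: {t_list:.3f}s")
```

Running it shows the linked structure finishing in a few milliseconds while the array-backed one takes orders of magnitude longer, which is exactly the O(1) vs O(n) head-insertion difference described above.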
Style
Indentation
I'm sure this point was raised in answers to your previous question, but your indentation, linebreaks and braces are inconsistent. Try to use an editor or IDE capable of autoformatting to help you in this. Try to follow one style of indentation and braces and be consistent, it greatly improves the readability of your code.
Naming
Maybe you're autogenerating getters and setters, but take care: the API they expose is not evident from their names. In Apple.java, getRandomXPos() & getRandomYPos() seem to return only one particular x (or y, respectively) position, which is predetermined. Drop the Random from their names; it makes no sense as part of the API. Similarly for jFrame: that name is not really representative of the variable's purpose. You have gotten away with GameBoard and SnakeGame thanks to the class names, but try to indicate the purpose of a variable via its name. | {
"domain": "codereview.stackexchange",
"id": 23072,
"tags": "java, beginner, object-oriented, snake-game"
} |
Why does the graph of the electrical conductivity of sulfuric acid/water solutions have this knee in the ~85%-~92% range? | Question: This answer to an earlier question regarding the electrical conductivity of sulfuric acid provides a graph showing the conductivity of sulfuric acid/water mixtures ranging from 0% to 100% sulfuric acid:
(Image by Horace E. Darling in "Conductivity of sulfuric acid solutions" [Journal of Chemical & Engineering Data 9.3 (1964): 421-426.], via M. Farooq here at ChemSE.)
As can be seen, the conductivity of the solution rises smoothly from 0% to a peak at approximately 30% sulfuric acid, and declines thereafter. However, at approximately 85% sulfuric acid, conductivity reaches a local minimum, after which it actually rises slightly with increasing sulfuric-acid concentration until reaching a local maximum at approximately 92% sulfuric acid, before again dropping off, more steeply, as the concentration of sulfuric acid in the solution continues to increase to 100%.
Why does the trend of decreasing conductivity with increasing sulfuric-acid concentration temporarily reverse in the ~85%-~92% range?
Answer: The comment by Vikki made me dig up even older papers. Since conductance (not conductivity; note that Darling uses terminology that is incorrect by today's standards) is inversely related to viscosity, I thought there must be a sharp change in the viscosity of sulfuric acid solutions as a function of concentration. This guess is not bad at all. The following is from a 1923 paper: Rhodes, F. H., and C. B. Barbour. "The viscosities of mixtures of sulfuric acid and water." Industrial & Engineering Chemistry 15.8 (1923): 850-852.
There is a sharp increase in viscosity at 85%, which indicates a major structural change in the sulfuric acid solution in the range 85-92%. Sulfuric acid forms a hydrate in this range. When the viscosity is high, the conductance goes down and there is a depression in the curve. This viscosity jump is causing the double hump. Once we are past the high-viscosity range, conductance goes up again.
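As a toy illustration only (the functional forms below are entirely made up, not Darling's or Rhodes' data), dividing a smoothly decreasing carrier term by a viscosity curve with a bump near 85% reproduces a dip-and-recovery of the same shape:

```python
import math

def carriers(pct):
    # hypothetical, smoothly decreasing free-charge-carrier factor
    return 100.0 - pct

def viscosity(pct):
    # hypothetical smooth baseline plus a Gaussian bump centred near 85%
    return 1.0 + 3.0 * math.exp(-((pct - 85.0) / 4.0) ** 2)

# conductance ~ carriers / viscosity
cond = {pct: carriers(pct) / viscosity(pct) for pct in range(70, 100)}

# local minimum near the viscosity peak, partial recovery a few points later,
# then the final fall-off as the carrier term goes to zero near 100%
assert cond[85] < cond[91] < cond[80]
```

The point is only qualitative: a sharp viscosity peak superimposed on a declining carrier count is enough to produce the knee, whatever the true microscopic details.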
It is amazing how simple molecules never stop surprising us! | {
"domain": "chemistry.stackexchange",
"id": 15738,
"tags": "aqueous-solution, conductivity"
} |
Finding the direction of the magnetic field acting on protons in a cyclotron? | Question: I have been trying to answer part b) of the question below:
(The image shows the cyclotron in question.)
To find the direction of the magnetic field acting on the protons, I tried using the right hand rule, treating the direction of current as "upwards" and the force on the protons as "to the left", which gives me the answer "into the page". This is the right answer; however, I am uncertain as to whether my method of finding it is correct. So, I am wondering if my method is indeed correct or not.
Answer: Your reasoning is correct: you're using the equation for the Lorentz force $\vec{F} = q\vec{v}\times\vec{B}$ where you know the direction of the force and the current.
What may be a little confusing about your approach is that you're applying the Lorentz force formula at the point where the proton has already been deflected a bit, and is travelling upwards (with the force pointing to the left). Try applying it for the moment where the proton is just entering the D (what are the directions of $\vec{v}$ and $\vec{F}$ here?).
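To make that concrete, here is a small Python check of $\vec{F} = q\vec{v}\times\vec{B}$. The coordinate convention is my assumption for illustration: x to the right, y up, z out of the page.

```python
def cross(a, b):
    """Right-handed cross product of two 3-vectors."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

# Assumed axes: x right, y up, z out of the page.
q = 1.0                 # positive charge (proton); magnitude is irrelevant for direction
v = (0.0, 1.0, 0.0)     # proton moving "up"
B = (0.0, 0.0, -1.0)    # field "into the page"
F = tuple(q * c for c in cross(v, B))
print(F)                # (-1.0, 0.0, 0.0): the force points left, as expected
```

Swapping in the velocity at the moment the proton enters the D gives the corresponding centripetal force there, which is the exercise suggested above.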
The result will of course be the same, but it will help you understand where the initial deflection comes from. | {
"domain": "physics.stackexchange",
"id": 29601,
"tags": "homework-and-exercises, electromagnetism, accelerator-physics, particle-accelerators"
} |
Does Rviz have collision detection like gazebo? | Question:
I am wondering if rviz has a collision detection system similar to gazebo implemented. I see that there is an option to enable collision, but I have not found and information online about what it does exactly.
Ubuntu 14.04 LTS
ROS: Indigo
Originally posted by justinkgoh on ROS Answers with karma: 25 on 2016-06-21
Post score: 2
Answer:
rviz only renders data, so it does not have collision detection. The option you're seeing is to visualize the collision geometry, which can be different from the visual geometry. A common example is to have a high polygon and detailed mesh for visualization of your robot, but for collision detection you might just use a box or a sphere so it is faster. rviz can render these shapes, but does not do any collision detection with them.
Originally posted by William with karma: 17335 on 2016-06-21
This answer was ACCEPTED on the original site
Post score: 4
Original comments
Comment by justinkgoh on 2016-06-30:
Thanks for the info. | {
"domain": "robotics.stackexchange",
"id": 25016,
"tags": "ros, gazebo, rviz, collision"
} |
Why are lithium and beryllium such good conductors but not chlorine? | Question: Why are lithium and beryllium so conductive? The $2s$ band has a much different energy range from the $2p$ band, so I guess the only explanation is that $N$ states are empty. But if that were the case, why isn't chlorine also an amazing conductor, since chlorine has $N$ empty states as well in the valence band?
Answer: The conductivity of a material depends on several factors, such as the number of valence electrons, the band structure, the crystal structure, the temperature, and the presence of impurities.
Lithium and beryllium are metals with one and two valence electrons, respectively. They have a simple hexagonal crystal structure that allows their electrons to form overlapping bands. This means that there is no band gap between the 2s and 2p orbitals, and the electrons can move freely across both bands. Therefore, lithium and beryllium have high conductivity.
Chlorine is a nonmetal with seven valence electrons. It has a complex orthorhombic crystal structure that creates a large band gap between the 3s and 3p orbitals. This means that the electrons are confined to their respective orbitals and cannot move across the band gap. Therefore, chlorine has low conductivity. | {
"domain": "physics.stackexchange",
"id": 99035,
"tags": "solid-state-physics, atomic-physics, conductors, orbitals, elements"
} |
Checking if characters in a string can be rearranged to make a palindrome | Question: Can I please have some advice on optimizing, cleaning up my code, and places where I could save space/time?
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <stdbool.h>
bool pal_perm(char*);
int main()
{
printf("The output is %sa palindrome.\n", pal_perm("abbas")? "": "not "); //Output: The output is a palindrome.
printf("The output is %sa palindrome.\n", pal_perm("deeds")? "": "not "); //Output: The output is a palindrome.
printf("The output is %sa palindrome.\n", pal_perm("dead")? "": "not "); //Output: The output is not a palindrome.
return 0;
}
bool pal_perm(char* str)
{
char alpha[256];
int oddCount =0;
int size = strlen(str);
memset(alpha, 0, sizeof(alpha));
//see how many occurances of each letter
for(char ch = 'a'; ch <= 'z'; ch++)
{
for(int i=0; i < size; i++)
{
if(str[i] == ch)
alpha[str[i]]++;
}
}
//count the number of times a letter only appears once
for(int j=0; j<256; j++)
{
if(alpha[j] == 1 || (alpha[j]%2==1))
oddCount++;
}
//if there is more than one letter that only occurs, then it
//cannot be a palindrome.
if(oddCount <= 1)
return true;
else
return false;
}
Answer: Strange output
What is a user to think when seeing such output of a program?
The output is a palindrome.
The output is not a palindrome.
I wouldn't know what this program is trying to tell me.
Consider this alternative:
void print_result(char * s)
{
printf("The characters of \"%s\" %s be rearranged into a palindrome.\n", s, pal_perm(s) ? "can" : "cannot");
}
int main()
{
print_result("abbas");
print_result("deeds");
print_result("dead");
}
Output:
The characters of "abbas" can be rearranged into a palindrome.
The characters of "deeds" can be rearranged into a palindrome.
The characters of "dead" cannot be rearranged into a palindrome.
Though actually I would prefer something much simpler than that:
printf("\"%s\" -> %s\n", s, pal_perm(s) ? "true" : "false");
Producing output:
"abbas" -> true
"deeds" -> true
"dead" -> false
Usability
It would be more interesting if the program took the strings from the command line, instead of using hardcoded values, for example:
int main(int argc, char ** argv) {
for (int i = 1; i < argc; i++) {
print_result(argv[i]);
}
}
For the record, @Law29 suggested another alternative in a comment:
You can also read from standard input. This lets you either type in words as they come to mind, or use a whole file (there are files of dictionary words, for example). Example:
#define MAX_WORD_SIZE 50
int main(int argc, char ** argv) {
char buf[MAX_WORD_SIZE];
while (fgets (buf, MAX_WORD_SIZE, stdin)) {
print_result(buf);
}
}
Testing
Getting the implementation right can be tricky.
You revised your post 3-4 times to fix bugs pointed out in comments.
It's good to automate your tests so that they can be repeated easily,
for example by adding methods like these:
void check(char * s, bool expected)
{
if (pal_perm(s) != expected) {
printf("expected \"%s\" -> %s but got %s\n", s, expected ? "true" : "false", expected ? "false" : "true");
exit(1);
}
}
void run_tests()
{
check("a", true);
check("aa", true);
check("aba", true);
check("abba", true);
check("aabb", true);
check("aabbs", true);
check("deeds", true);
check("ab", false);
check("abc", false);
check("dead", false);
}
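If you want an independent oracle to generate expected values for check(...), the same at-most-one-odd-count rule is a few lines in another language. For example, a Python sketch (my assumption: any scripting language at hand would do):

```python
from collections import Counter

def pal_perm(s: str) -> bool:
    # a string's characters can be rearranged into a palindrome iff
    # at most one character occurs an odd number of times
    return sum(count % 2 for count in Counter(s).values()) <= 1

# matches the expected results of the C program
assert pal_perm("abbas") and pal_perm("deeds") and not pal_perm("dead")
```

Cross-checking the C output against such a reference implementation catches the kind of off-by-one bugs that forced the earlier revisions.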
Use boolean expressions directly
Instead of this:
if(oddCount <= 1)
return true;
else
return false;
You can simply return the boolean expression itself:
return oddCount <= 1;
Excessive looping
As @DarthGizka explained, instead of this:
for(char ch = 'a'; ch <= 'z'; ch++)
{
for(int i=0; i < size; i++)
{
if(str[i] == ch)
alpha[str[i]]++;
}
}
This is identical, but without unnecessary looping:
for(int i=0; i < size; i++)
{
alpha[str[i]]++;
}
Unnecessary conditions
The first condition is unnecessary:
if(alpha[j] == 1 || (alpha[j]%2==1))
This is exactly the same:
if(alpha[j]%2==1)
Too compact writing style
Instead of this:
if(alpha[j]%2==1)
I suggest to put spaces around operators, and before ( in if statements:
if (alpha[j] % 2 == 1)
Stop iterating when you already know the result
Once you find two characters with odd number of occurrences,
you can stop iterating and return false.
As such, you don't even need an int oddCount, but a bool seenOdd.
So instead of this:
int oddCount = 0;
//count the number of times a letter only appears once
for (int j = 0; j < 256; j++)
{
if (alpha[j] % 2 == 1) oddCount++;
}
//if there is more than one letter that only occurs, then it
//cannot be a palindrome.
return oddCount <= 1;
You could write:
bool seenOdd = false;
// scan for odd number of occurrences, stop after seeing two
for (int j = 0; j < 256; j++)
{
if (alpha[j] % 2 == 1) {
if (seenOdd) return false;
seenOdd = true;
}
}
// fewer than 2 letters with an odd number of occurrences, must be true
return true; | {
"domain": "codereview.stackexchange",
"id": 30451,
"tags": "beginner, c, strings"
} |
Structure of function that describes sine signal | Question: I need to create a vector containing a sine signal.
So I'm trying to figure out: what is the structure of a function that describes a sine signal?
For example, does the function $\sin 2x$ meet the requirements?
If the answer is no, what is the reason?
Answer: I think you posted a similar question 3 days ago, regarding your teacher claiming that $\sin 2x$ is not a sinusoidal function. Nevertheless, this function is definitely sinusoidal. Otherwise, how could we say that signals can be decomposed into sums of sinusoids (and cosinusoids)? You have plenty of orthogonal waves: $\sin x, \ \sin 2x, \ \sin 3x, \ldots$, and all of them are sinusoids.
The only thing that comes to my mind is this:
Discrete-time sinusoid is periodic only if its fundamental frequency
$f_0$ is a rational number
For a sinusoid with frequency $f_0$ to be periodic, we should have:
$$ \sin[2\pi f_0 (N+n)+\theta] = \sin[2\pi f_0 n + \theta]$$
This relation is true if and only if there exists an integer $k$ such that:
$$2\pi f_0N=2\pi k $$
or, equivalently:
$$f_0 = \dfrac{k}{N} $$
To determine the fundamental frequency of a periodic sinusoid, we express its frequency as above and cancel common factors so that $k$ and $N$ have no common divisors. Then the fundamental period of the sinusoid is equal to $N$.
So for example:
$f_0 = \dfrac{31}{60}$ implies that $N=60$
$f_0 = \dfrac{30}{60}$ implies that $N=2$
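The rule is easy to verify numerically. A short Python sketch searches for the smallest period $N$ such that $f_0 N$ is an integer:

```python
import math

def fundamental_period(f0, max_N=1000, tol=1e-9):
    # x[n] = sin(2*pi*f0*n + theta) repeats with period N exactly when
    # f0 * N is an integer; return the smallest such N up to max_N, or None
    for N in range(1, max_N + 1):
        if abs(f0 * N - round(f0 * N)) < tol:
            return N
    return None

print(fundamental_period(31 / 60))           # 60, matching the first example
print(fundamental_period(30 / 60))           # 2, matching the second example
print(fundamental_period(math.sqrt(2) / 4))  # None: irrational f0, never periodic
```

The search bound and tolerance are arbitrary choices for illustration; the point is that only rational $f_0$ ever yields a finite period.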
Thus if you define your discrete sinusoid to be:
$\sin[2 f_0 t]$, or
$\sin[\sqrt{2} f_0 t]$
then these are no longer periodic in the digital domain. Why? Think of it this way: each sample at the start of a new period is shifted slightly, since the argument is not a rational multiple of $\pi$. Below is a plot of two signals:
$\sin [2 \pi t]$
$\sin [2 \pi \sqrt{2} t]$
Sampling frequency is $10 \ \mathtt{Hz}$ and upper time limit is $8 \ \mathtt{s} $
You can see the first sinusoid is periodic: every 10 samples (1 second) you get a repeating pattern. On the contrary, for the second one, whose fundamental frequency cannot be represented as a rational number (no way to decompose $\sqrt{2}$), the periods are not the same, so the signal has no period. Check the figure below for an overlay of 11 periods: | {
"domain": "dsp.stackexchange",
"id": 1902,
"tags": "wave"
} |
What is normal force and when does it act? | Question: What are contact forces? According to https://www.physicsclassroom.com/class/newtlaws/Lesson-2/Types-of-Forces,
there are 6 types of contact forces. I am having doubts about applied force and normal force, because both act when there is contact between two bodies. When we push a book kept on a table, we say the table surface exerts a normal force, but what about the contact between my hand and the book? We call that an applied force. When we push a wall horizontally, we say its reaction force is a normal force, but how? When we kick a ball, we say we applied a force, but where is the normal force between my foot and the ball's surface during the kick? When we hit a ball with a bat, we say we apply a force with the bat on the ball, but what about the normal force between the bat and the ball's surface? When we pull a string attached to the ceiling, we say we applied a force on the string, but what about the contact between my hand and the string's surface? Where is the normal force between them? I am very confused; please, somebody, help me with this.
Answer: It seems like you are misinterpreting the words 'applied' and 'normal'. You are thinking that applied force is some type of force different from normal force, and that there is some 'applied' force plus a 'normal' force acting at the same time between your foot and the ball; that's where you are confused.
In reality, the 'normal' force is the actual 'applied' force, and it is the normal force that causes the motion of the ball. The site you provided seems to be wrong, because I cannot think of any type of contact 'applied' force that is distinct from the other contact forces it mentions.
Your foot applies a normal force on the ball, and in return the ball applies the same amount of normal force on your foot, which you feel when kicking. The normal force applied on the ball delivers an impulse and the ball starts moving. | {
"domain": "physics.stackexchange",
"id": 98080,
"tags": "newtonian-mechanics, forces, terminology, definition"
} |
Elliptic orbits and why sun located at focal point acts like at the center of the ellipse? | Question: In the book "Classical Mechanics: Point Particles and Relativity" by Greiner,
we calculate forces for motion on an ellipse as follows:
we first parametrize the ellipse $$\vec r(t)=\langle a\cos(\omega t),\,b\sin(\omega t)\rangle$$ and take the second derivative to find $$\vec F=m\vec a(t)=-m\omega^2 \vec r(t)$$
which points toward the center of the ellipse.
But then he continues: "The planets also move around the sun along elliptic orbits. The sun as the center of attraction located in one of the focal points of the ellipse..."
with formula $$\vec F_G=-\gamma \dfrac{mM}{r^2}\dfrac{\vec r}{r}$$
Question: If the force required to hold the particle in an elliptic orbit points toward the center, yet the sun sits at a focal point, what is the extra piece that makes the logic complete?
Answer: That parametrization of the ellipse corresponds to a body held to a point by a linear elastic device. That is the meaning of $\vec F = -m\omega^2 \vec r(t)$.
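A quick numerical cross-check (a Python sketch; the semi-axes and $\omega$ are arbitrary choices for illustration) confirms that the acceleration of this parametrization points at the center, not at a focus:

```python
import math

A, B, OMEGA = 2.0, 1.0, 3.0   # arbitrary semi-axes and angular frequency

def r(t):
    # the ellipse parametrization from the question
    return (A * math.cos(OMEGA * t), B * math.sin(OMEGA * t))

def acceleration(t, h=1e-4):
    # central second difference: a(t) ~ (r(t+h) - 2 r(t) + r(t-h)) / h^2
    rp, r0, rm = r(t + h), r(t), r(t - h)
    return tuple((rp[i] - 2.0 * r0[i] + rm[i]) / h ** 2 for i in range(2))

t = 0.7
acc = acceleration(t)
expected = tuple(-OMEGA ** 2 * c for c in r(t))   # -omega^2 r(t): toward the center
assert all(abs(x - y) < 1e-3 for x, y in zip(acc, expected))
```

So the acceleration is always antiparallel to $\vec r(t)$ measured from the center; it is never the inverse-square, focus-directed acceleration of gravity.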
If a or b is zero, it is simple harmonic motion. But the gravitational force is proportional to $\frac{1}{r^2}$ and is not described by that parametrization. | {
"domain": "physics.stackexchange",
"id": 71609,
"tags": "newtonian-mechanics, gravity"
} |
Can you explain gyroscopic precession using only Newton's three linear laws without applying their angular cousins? | Question: Is there an intuitive approach to understand gyroscopic motion based on Newton's laws without passing through angular momentum conservation?
Answer: "Intuitive" is a tricky word. Most people find gyroscopic effects unintuitive no matter what we do. And by far the most intuitive way to understand gyroscopic effects is through angular momentum conservation. That reduces these effects to a handful of straightforward equations.
Fundamentally the motion of gyroscopes is based on momentum. You wont be able to make sense of them without it. Momentum can be viewed two major ways: linear and angular. They're actually describing the same concept, but with different symmetries. You can try to understand a gyro using linear momentum, but because it isn't good at leveraging rotational symmetries, you will have a large number of integrals and sines and cosines involved. Maybe that qualifies as intuitive for you, but my guess is it does not. Gyros are not easy to understand in a linear sense. We teach them in a rotational world with angular momentum because they are far easier to understand that way. | {
"domain": "physics.stackexchange",
"id": 67797,
"tags": "newtonian-mechanics, rotational-dynamics, rigid-body-dynamics, gyroscopes, precession"
} |
n*log n and n/log n against polynomial running time | Question: I understand that $\Theta(n)$ is faster than $\Theta(n\log n)$ and slower than $\Theta(n/\log n)$. What is difficult for me to understand is how to actually compare $\Theta(n \log n)$ and $\Theta(n/\log n)$ with $\Theta(n^f)$ where $0 < f < 1$.
For example, how do we decide $\Theta(n/\log n)$ vs. $\Theta(n^{2/3})$ or $\Theta(n^{1/3})$
I would like to have some directions towards proceeding in such cases. Thank you.
Answer: If you just draw a couple of graphs, you'll be in good shape. Wolfram Alpha is a great resource for these kinds of investigations:
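Before (or instead of) plotting, a quick numerical check of the ratio also works. Since $(n/\log n)\,/\,n^{2/3} = n^{1/3}/\log n$, the ratio grows without bound, so $\Theta(n/\log n)$ eventually dominates $\Theta(n^{2/3})$. A Python sketch (the sample points are arbitrary):

```python
import math

def ratio(n):
    # (n / log n) / n^(2/3) simplifies to n^(1/3) / log n
    return (n / math.log(n)) / n ** (2.0 / 3.0)

for n in (10, 10 ** 3, 10 ** 6, 10 ** 9):
    print(n, ratio(n))

# the ratio keeps increasing with n, i.e. n/log n grows faster than n^(2/3)
assert ratio(10) < ratio(10 ** 3) < ratio(10 ** 6) < ratio(10 ** 9)
```

The same trick (reduce the comparison to a single simplified ratio and watch its limit) settles any pairing of $n\log n$, $n/\log n$, and $n^f$.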
Generated by this link. Note that in the graph, log(x) is the natural logarithm, which is the reason the one graph's equation looks a little funny. | {
"domain": "cs.stackexchange",
"id": 472,
"tags": "asymptotics, mathematical-analysis, landau-notation"
} |
SQL Query generator, round 2 | Question:
This is the second round of reviews. The first round can be found in this question.
This is a project I have been working on. This is one of my first experiences with Python and OOP as a whole. I have written a GUI that handles the inputs for these classes, but I will ask for a separate review for that, since the question would be rather bulky when including both.
The goal of this program is to create standard SQL (SQL server) queries for everyday use. The rationale behind this is that we regularly need similar queries, and would like to prevent common mistakes in them. The focus on this question is on the Python code however.
The information about the tables and their relation to each-other is provided by a JSON file, of which I have attached a mock-up version.
The code consists of three parts:
A universe class which handles the JSON file and creates the context of the tables.
A query class, which handles the specifications of which tables to include, which columns to take, how to join each table and optional where statements.
A PyQT GUI that handles the inputs. This is excluded in this post and will be posted separately for another review. It can be found here on Github
The JSON:
{
"graph": {
"table1": {
"tag": ["table1"],
"DBHandle": ["tables.table1"],
"Priority": [1],
"Columns": ["a", "b", "c"],
"Joins": {
"table2": ["on table2.a = table1.a", "inner"],
"table3": ["on table1.c = table3.c", "inner"]
}
},
"table2": {
"tag": ["table2"],
"DBHandle": ["tables.table2"],
"Priority": [2],
"Columns": ["a", "d", "e"],
"Joins": {
"table3": ["on table2.d=table3.d and table2.e = table3.e", "inner"]
}
},
"table3": {
"tag": ["table3"],
"DBHandle": ["tables.table3"],
"Priority": [4],
"Columns": ["c", "d", "e"],
"Joins": []
}
},
"presets": {
"non empty b": {
"table": ["table1"],
"where": ["table1.b is not null"]
}
}
}
The reviewed Python code:
# -*- coding: utf-8 -*-
"""
Created on Thu Aug 3 14:33:44 2017
@author: jdubbeldam
"""
from json import loads
class Universe:
"""
The Universe is a context for the Query class. It contains the information
of the available database tables and their relation to each other. This
information is stored in a JSON file.
"""
def __init__(self, filename):
"""
Reads the JSON and separates the information into a presets dictionary and
a graph dictionary. The latter contains the information of the nodes in
the universe/graph, including relational information.
"""
with open(filename, encoding='utf-8') as file:
self.json = loads(str(file.read()))
self.presets = self.json['presets']
self.json = self.json['graph']
self.tables = self.json.keys()
self.connections = self.get_edges()
def get_edges(self):
"""
Creates a dictionary with for each node a list of nodes that join on
that node.
"""
edges = {}
for table in self.tables:
edges[table] = []
try:
edges[table] += [connected_tables
for connected_tables in self.json[table]['Joins']]
except AttributeError:
pass
for node in edges:
for connected_node in edges[node]:
if node not in edges[connected_node]:
edges[connected_node].append(node)
return edges
def shortest_path(self, start, end, path_argument=None):
"""
Calculates the shortest path in a graph, using the dictionary created
in get_edges. Adapted from https://www.python.org/doc/essays/graphs/.
"""
if path_argument is None:
old_path = []
else:
old_path = path_argument
path = old_path + [start]
if start == end:
return path
if start not in self.connections:
return None
shortest = None
for node in self.connections[start]:
if node not in path:
newpath = self.shortest_path(node, end, path)
if newpath:
if not shortest or len(newpath) < len(shortest):
shortest = newpath
return shortest
def join_paths(self, nodes):
"""
Extension of shortest_path to work with multiple nodes to be connected.
The nodes are sorted based on the priority, which is taken from the JSON.
shortest_path is called on the first two nodes, then iteratively on each
additional node and one of the existing nodes returned by shortest_path,
selecting the one that takes the fewest steps.
"""
sorted_nodes = sorted([[self.json[node]['Priority'][0], node] for node in nodes])
paths = []
paths.append(self.shortest_path(sorted_nodes[0][1], sorted_nodes[1][1]))
for next_node_index in range(len(sorted_nodes) - 2):
shortest = None
flat_paths = [item for sublist in paths for item in sublist]
old_path = len(flat_paths)
for connected_path in flat_paths:
newpath = self.shortest_path(connected_path,
sorted_nodes[next_node_index+2][1],
flat_paths)
if newpath:
if not shortest or len(newpath[old_path:]) < len(shortest):
shortest = newpath[old_path:]
paths.append(shortest)
return paths
class Query:
"""
Query contains the functions that allow us to build an SQL query based on
a universe object. It maintains a list of the names of activated tables
and, in a dictionary, which of their columns are active. Implicit tables
are tables that are included only to bridge joins from one table to another.
Since they are not explicitly called, we don't want their columns in the query.
how_to_join is a dictionary that allows setting joins (left, right, inner, full)
other than the defaults imported from the JSON.
"""
core = 'select\n\n{columns}\n\nfrom {joins}\n\n where {where}'
def __init__(self, universum):
self.graph = universum
self.active_tables = []
self.active_columns = {}
self.implicit_tables = []
self.join_strings = {}
for i in self.graph.tables:
self.join_strings[i] = self.graph.json[i]['Joins']
self.how_to_join = {}
self.where = []
def add_tables(self, tablename):
"""
Sets given tablename to active. GUI ensures that only valid names
will be given.
"""
if tablename not in self.active_tables:
self.active_tables.append(tablename)
self.active_columns[tablename] = []
def add_columns(self, table, column):
"""
Sets given columnname from table to active. GUI ensures that only valid names
will be given.
"""
if column not in self.active_columns[table]:
self.active_columns[table].append(column)
def add_where(self, string):
"""
Adds any string to a list to be input as where statement. This could be
vulnerable for SQL injection, but the scope of this project is in-house
usage, and the generated SQL query isn't directly passed to the server.
"""
self.where.append(string)
def find_joins(self):
"""
Calls the join_paths function from Universe class. Figures out which joins
are needed and which tables need to be implicitly added. Returns a list
of tuples with tablenames to be joined.
"""
tags = [self.graph.json[table]['tag'][0]
for table in self.active_tables]
join_paths = self.graph.join_paths(tags)
join_sets = [(table1, table2)
for join_edge in join_paths
for table1, table2 in zip(join_edge[:-1], join_edge[1:])]
for sublist in join_paths:
for item in sublist:
if item not in self.active_tables:
self.add_tables(item)
self.implicit_tables.append(item)
return join_sets
def generate_join_statement(self, table_tuple):
"""
Creates the join statement for a given tuple of tablenames. The second
entry in the tuple is always the table that is joined. Since the string
is stored in a dictionary with one specific combination of the two table
names, the try statement checks which way around it needs to be. how contains
the default way to join. Unless otherwise specified, this is used to generate
the join string.
"""
added_table = table_tuple[1]
try:
on_string, how = self.graph.json[table_tuple[0]]['Joins'][table_tuple[1]]
except TypeError:
table_tuple = (table_tuple[1], table_tuple[0])
on_string, how = self.graph.json[table_tuple[0]]['Joins'][table_tuple[1]]
if table_tuple not in self.how_to_join:
self.how_to_join[table_tuple] = how
join_string = (self.how_to_join[table_tuple]
+ ' join '
+ self.graph.json[added_table]['DBHandle'][0]
+ ' '
+ self.graph.json[added_table]['tag'][0]
+ '\n')
return join_string + on_string
def generate_select_statement(self, table):
"""
Creates the column specification. If no columns of an active table are
specified, it assumes all the columns are wanted.
"""
if not self.active_columns[table]:
self.active_columns[table] = ['*']
return ',\n'.join([(self.graph.json[table]['tag'][0]
+ '.'
+ i)
for i in self.active_columns[table]])
def compile_query(self):
"""
Handles compilation of the query. If more than one table is activated,
joins need to be handled. First the required joins are found, then the
strings that handle them are generated. The column statement is created.
If no where statement is specified, '1 = 1' is added. The relevant
statements are inserted into the core query and the result is returned.
"""
if len(self.active_tables) == 1:
base_table = self.active_tables[0]
join_statement = []
else:
joins = self.find_joins()
base_table = joins[0][0]
join_statement = [self.generate_join_statement(i) for i in joins]
join_statement = ([self.graph.json[base_table]['DBHandle'][0]
+ ' '
+ self.graph.json[base_table]['tag'][0]]
+ join_statement)
completed_join_statement = '\n\n'.join(join_statement)
column_statement = [self.generate_select_statement(table)
for table in self.active_tables
if table not in self.implicit_tables]
completed_column_statement = ',\n'.join(column_statement)
if self.where:
where_statement = '\nand '.join(self.where)
else:
where_statement = '1 = 1'
query = Query.core.replace('{columns}', completed_column_statement)
query = query.replace('{joins}', completed_join_statement)
query = query.replace('{where}', where_statement)
return query
if __name__ == "__main__":
graph = Universe('example.JSON')
query = Query(graph)
query.add_tables('table1')
query.add_tables('table2')
query.add_tables('table3')
print(query.compile_query())
Answer: I have been refactoring this code myself as well in the meantime, so I thought I'd post some of the insights I have gained.
Class inheritance
Instead of passing a Universe instance when creating a Query, by making Query a subclass of Universe, I was able to reduce the amount of information that was stored in both classes. This makes accessing the attributes and methods of Universe in Query's methods shorter as well.
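A minimal sketch of that inheritance refactoring (simplified: the graph is passed in as an already-parsed dict, file loading and edge symmetrization are omitted, and the attribute set is trimmed to the essentials):

```python
class Universe:
    """Holds the table graph parsed from the JSON configuration."""
    def __init__(self, graph):
        # 'graph' is the already-parsed contents of json['graph']
        self.tables = graph
        # adjacency lists; 'Joins' may be null in the JSON, hence the 'or {}'
        self.connections = {name: list(info.get('Joins') or {})
                            for name, info in graph.items()}

class Query(Universe):
    """A Query is built on a Universe, so inheriting gives its methods
    direct access to self.tables and self.connections."""
    def __init__(self, graph):
        super().__init__(graph)
        self.active_tables = []
        self.active_columns = {}

    def add_tables(self, tablename):
        if tablename not in self.active_tables:
            self.active_tables.append(tablename)
            self.active_columns[tablename] = []
```

With this, Query methods can refer to self.tables and self.connections directly, instead of reaching through self.graph.json[...] everywhere.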
Query.join_strings does nothing
self.join_strings = {}
for i in self.graph.tables:
self.join_strings[i] = self.graph.json[i]['Joins']
self.join_strings is defined but never used anywhere else. The choice of i as the loop variable is also poor (an oversight).
Indirectly still iterating over .keys()
self.json = self.json['graph']
self.tables = self.json.keys()
in Universe.__init__() stores the keys (tablenames). This is only used to iterate later:
edges = {}
for table in self.tables:
edges[table] = []
try:
edges[table] += [connected_tables
for connected_tables in self.json[table]['Joins']]
except AttributeError:
pass
We might as well have iterated over self.json. However, for naming purposes, I prefer the following:
self.tables = self.json['graph']
That improves the naming and removes the need to keep the json attribute around, so it can become a regular local variable without the self.
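With that rename in place, get_edges can iterate over the mapping directly; a sketch (written as a standalone function for illustration, and the except clause assumes a table with no joins has 'Joins' set to null in the JSON, rather than missing):

```python
def get_edges(tables):
    """Build an undirected adjacency dict straight from the 'graph' mapping."""
    edges = {}
    for name, info in tables.items():  # no stored key list needed
        try:
            edges[name] = list(info['Joins'])  # keys of the Joins mapping
        except TypeError:                      # 'Joins' is null for this table
            edges[name] = []
    # mirror each edge so it can be found from either endpoint
    for node, neighbours in list(edges.items()):
        for connected in neighbours:
            if node not in edges[connected]:
                edges[connected].append(node)
    return edges
```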
Expand the add_* methods to also allow removal of that item.
This is mostly relevant with the GUI in mind. It contained a bit of a workaround to be able to remove tables and columns from the Query.
So I added an argument to the add_* methods to be able to set to remove instead.
def add_tables(self, tablename, add_or_remove=True):
"""
Toggles active setting of given tablename. GUI ensures that only valid names
will be given.
"""
if add_or_remove:
if tablename not in self.active_tables:
self.active_tables.append(tablename)
self.active_columns[tablename] = []
else:
self.active_tables.remove(tablename) | {
"domain": "codereview.stackexchange",
"id": 35440,
"tags": "python, beginner, python-3.x, sql"
} |
Proving that the IDTFT is the inverse of the DTFT? | Question: The DTFT is given by:
$$X(e^{j\omega}) = \sum_{n=-\infty}^{\infty}x[n]e^{-j\omega n}$$
The IDTFT is given by:
$$x[n]=\frac{1}{2\pi}\int_{0}^{2\pi}X(e^{j\omega})e^{j\omega n}d\omega$$
I have been able to show by substitution of the DTFT into the IDTFT that the transform and a subsequent inverse transform return $x[n]$:
$$\begin{align}
x[n]&=\frac{1}{2\pi}\int_{0}^{2\pi}X(e^{j\omega})e^{j\omega n}d\omega\\
&=\frac{1}{2\pi}\int_{0}^{2\pi} \left( \sum_{k=-\infty}^{\infty}x[k]e^{-j\omega k} \right)e^{j\omega n}d\omega\\
\end{align}$$
Swap the order of integration and summation:
$$x[n]=\frac{1}{2\pi}\sum_{k=-\infty}^{\infty}\int_{0}^{2\pi}x[k]e^{j\omega (n-k)}d\omega$$
Since the integral of $e^{j\omega (n-k)}$ over $[0, 2\pi]$ spans a whole number of periods of the complex exponential, it evaluates to zero unless $k=n$, in which case the integrand equals 1 and the integral is $2\pi$. That is, $\int_{0}^{2\pi}e^{j\omega (n-k)}d\omega = 2\pi\,\delta[n-k]$, so:
$$\begin{align}
x[n]&=\frac{1}{2\pi}\sum_{k=-\infty}^{\infty}x[k]\,2\pi\,\delta[n-k]\\
&=\frac{2\pi}{2\pi}x[n]\\
&=x[n]
\end{align}$$
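That direction can also be sanity-checked numerically by approximating the inversion integral with a Riemann sum (the dtft/idtft names, step count, and short test sequence here are all illustrative):

```python
import cmath
import math

def dtft(x, w):
    # X(e^{jw}) = sum_n x[n] e^{-jwn}, for a finite-length x starting at n = 0
    return sum(xn * cmath.exp(-1j * w * n) for n, xn in enumerate(x))

def idtft(x, n, steps=4096):
    # (1/2pi) * integral over [0, 2pi] of X(e^{jw}) e^{jwn} dw, as a Riemann sum
    dw = 2 * math.pi / steps
    total = sum(dtft(x, k * dw) * cmath.exp(1j * k * dw * n) for k in range(steps))
    return total * dw / (2 * math.pi)

x = [1.0, -2.0, 3.0]
recovered = [idtft(x, n) for n in range(len(x))]  # should reproduce x[n]
```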
However, I have been unable to show the dual case: that the inverse transform (IDTFT) substituted into the forward transform (DTFT) gives $X(e^{j\omega})$. How can we show this?
Answer: $$\begin{align}X(e^{j\omega})&=\sum_{n=-\infty}^{\infty}x[n]e^{-jn\omega}\\&=\sum_{n=-\infty}^{\infty}\left[\frac{1}{2\pi}\int_{0}^{2\pi}X(e^{j\Omega})e^{jn\Omega}d\Omega\right]\;e^{-jn\omega}\\&=\int_{0}^{2\pi}X(e^{j\Omega})\left[\frac{1}{2\pi}\sum_{n=-\infty}^{\infty}e^{jn(\Omega-\omega)}\right]d\Omega\\&=\int_{0}^{2\pi}X(e^{j\Omega})\delta(\Omega-\omega)d\Omega\\&=X(e^{j\omega})\end{align}$$
where I've used the identity
$$\delta(\omega)=\frac{1}{2\pi}\sum_{n=-\infty}^{\infty}e^{jn\omega}$$ | {
"domain": "dsp.stackexchange",
"id": 7244,
"tags": "fourier-transform, dft, dtft"
} |
How do I use PCL from ROS Hydro? | Question:
I am trying to understand how PCL is integrated into ROS Hydro. I installed ROS in Ubuntu 12.04 using the ros-hydro-desktop-full package. From "rospack list" I can see that it comes with 4 PCL packages:
pcl
pcl_conversions
pcl_msgs
pcl_ros
What is the functionality of these 4 packages, especially pcl_ros and pcl? There is also a pcl-1.7 folder in my /opt/ros/hydro/share folder with some cmake config files. There is no package.xml file though. What does this folder do?
Also, I seem to have 2 copies of the pcl-1.7 libraries. I have it in /usr/lib and also in /opt/ros/hydro/lib. So it seems like I have a standalone pcl library (I am not sure how I got this) and one that is integrated with ROS. Is this going to be a problem?
Finally, and this is the biggest source of my confusion, the wiki page for hydro/migration says:
pcl is no longer packaged by the ROS community as a catkin package, so any packages which directly depend on pcl should instead use the new rosdep rules libpcl-all and libpcl-all-dev and follow the PCL developer's guidelines for using PCL in your CMake.
So, why is there a pcl package in ROS Hydro in the first place with libraries in /opt/ros/hydro/lib?
As you can see I am quite confused, any help will be greatly appreciated!
Originally posted by munnveed on ROS Answers with karma: 77 on 2013-09-11
Post score: 3
Answer:
The question was answered in the PCL Users mailing list.
http://www.pcl-users.org/How-do-I-use-PCL-from-ROS-Hydro-td4029613.html
Originally posted by munnveed with karma: 77 on 2013-09-12
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by aknirala on 2013-09-15:
If possible, please enumerate the exact steps needed for this. I was following the tutorial : http://wiki.ros.org/pcl/Tutorials, but was not able to run it. While creating package I needed to remove pcl dependency, then also I was not able to compile code using voxel_grid.
Comment by munnveed on 2013-09-15:
Can specify exactly what error you are getting? And are you using ROS Hydro?
Comment by ndepalma on 2013-09-24:
I'm confirming that the tutorials should be updated.
Comment by aknirala on 2013-10-12:
Hi, I was able to run it, and pointed out the changes at : http://answers.ros.org/question/90176/running-pcl-in-hydro/ kindly let me know if some correction needs to be done.
Comment by Athoesen on 2013-11-03:
Did you happen to update the tutorial or should I follow the changes on the above link you just put?
Comment by aknirala on 2013-11-03:
I have updated the tutorial (quite sometime), you can find comments in tutorial saying for hydro users kindly use... Let me know if things are fine. | {
"domain": "robotics.stackexchange",
"id": 15500,
"tags": "pcl, ros-hydro"
} |
Does a home cooking induction stove produce any harmful (to humans) electrical/magnetic fields? | Question: An induction cooker, or stove, is based on the principle of electromagnetic induction and is widely used for cooking food nowadays.
My stove has a thick circular iron plate on the top.
Assuming it has the same configuration as any induction cooker, does it emit any strong electric or magnetic waves/radiation, or anything similar, that could damage our bodies? Is there any possibility?
I know that only ionising radiation, such as X-rays or alpha, beta, and gamma rays, can penetrate skin and cause cell damage, and also cancer. But I am unsure about the induction stove, because it may produce strong induced currents/magnetic fields.
Answer: The simple answer is, until now, we haven't found any negative effects on health, and we've looked quite deep.
In order to cause any damage from electromagnetic radiation, one of three things has to happen. Either the radiation is high-frequency enough that it can ionize atoms, which leads to ionization damage, or you have to get electrocuted, or cooked.
Let us discuss the last two conditions.
Human bodies are susceptible to electrocution only at AC frequencies that are low. When the frequency becomes high enough, no uncontrolled depolarization happens, so you don't get electrocuted.
At high AC frequencies, the only way of suffering any damage would be through ohmic heating of tissue, i.e. current heating you up.
Both of these effects, though, require a high enough potential difference between any two points of your body to happen; the fields radiated by such induction heaters, or even the leakage fields from big fat transformers, are nowhere near that (though I wouldn't recommend touching the output of one).
The electric fields are simply not that large in magnitude. | {
"domain": "physics.stackexchange",
"id": 76766,
"tags": "electromagnetic-radiation, magnetic-fields, electric-fields, estimation, biology"
} |
Membrane potential after exposure to glutamate | Question:
Neurons were kept in a physiological solution. During the resting
phase, the membrane potential in the axoplasm of neurons was negative
compared to the extracellular space and a potential difference of -70
mV was observed in this phase. Neurons were then treated in two
different experiments with either gamma-amino butyric acid (GABA; an
inhibitory neurotransmitter) or glutamate (an excitatory
neurotransmitter) and the membrane potentials were recorded. Choose
the correct statement/s:
(A) The resting membrane potential of -70 mV would not change with
either GABA or glutamate treatments.
(B) The membrane potential would be even more negative than resting
phase with GABA treatment.
(C) The membrane potential would be positive when the neuron was
exposed to glutamate.
(D) The membrane potential would be more negative than resting
potential after glutamate treatment.
I feel that, since glutamate is excitatory, the membrane potential should become less negative when exposed to glutamate and more negative when exposed to GABA. So (A) and (D) are automatically eliminated and (B) is a correct answer. I am confused about (C).
Answer: It is more correct to call it "resting potential difference" (like your question), because electrical potential is relative, not absolute.
That phrasing exposes a crucial point: The difference of what? Cell cytoplasms are negatively charged (to remember this, it helps to remember that protons are usually pumped out of the cytosol either into the periplasm, vesicles, mitochondrial intermemberane space, or outside of the cell). If you subtract the potential of the outside from the inside, you'll get +70. If you do it the other way around, you'll get -70. By convention, the (+) probe of the voltmeter is stuck inside the cell, and the (-) is stuck outside, so we end up with the "official" figure of -70 mV.
Nature obviously does not like this potential difference, and wants to neutralize it by pushing current across the membrane. Luckily the membrane is not very conductive, and the cell can expend energy to undo the effects of any leaking and prevent the potential [difference] from drifting toward 0 (it would actually drift toward a number above 0 because potential difference isn't the only factor, there is also the concentration difference).
So the cell is like a battery that keeps itself charged. It also has a threshold, and it will only empty itself if it has discharged at least to a certain point. That point depends on the cell, but a typical value is -55 mV (so closer to equilibrium point than the resting -70 mV).
What will an inhibitory chemical like GABA do? It will pull the cell further away from the threshold, so it's harder to overcome it. Bringing -70 mV to -90 mV would be lowering it (because the number goes down).
What will an excitatory chemical like glutamate do? It will bring the cell closer to the threshold, so it's easier to overcome it. Bringing -70 mV to -55 mV would be raising it (because the number goes up).
The question, unfortunately, does not specify whether the action potential fires. Typically, the peak of the AP is +40 mV. In theory, you could have a threshold say, at +20 mV, and then perhaps the potential could go positive (eg. +10 mv) and stay there. But I really, really doubt you could find a cell with a threshold above 0. If the threshold is below zero, then the cell will reach a positive potential difference (if you use enough glutamate to elicit the AP), but it will only stay there momentarily before collapsing back to -80 mV (the refractory state). | {
"domain": "biology.stackexchange",
"id": 2674,
"tags": "neuroscience, homework"
} |
Continually refresh game data using AJAX | Question: I've been working with JavaScript and AJAX a lot in the past, and now I'm moving towards the backend and working with databases more. I want to update my game data in as close to real-time as possible in JavaScript.
Here is what I have been doing to update the data as frequently as possible (I simplified it):
function updateData() {
var xmlhttp = new XMLHttpRequest();
xmlhttp.ontimeout = function (e) {
// XMLHttpRequest timed out. Try sending another request
updateData();
};
xmlhttp.onreadystatechange = function() {
if (this.readyState == 4 && this.status == 200) {
// only keep the data after the leading { (in case errors are outputted) (SHOULD NOT HAPPEN)
var data = this.responseText.slice(this.responseText.indexOf('{'));
try {
data = JSON.parse(data); // we're expecting JSON data -- make this an object for us
} catch (error) {
console.error("Error parsing data");
}
// handle data...
// and repeat:
updateData();
}
};
xmlhttp.open("GET", "data_getter.php", true); // "?t=" + getTime() to ensure that the data is not cached
xmlhttp.send();
}
updateData();
I call the function updateData() once, and then each time the previous request is received, the function gets run again.
I am wondering if there is a better way to continually refresh data (or make the data on the game website as close to realtime as possible) than to send an AJAX request every time the previous one is received? This method means that there will be a delay of the time it takes for the server to load, but that isn't too much.
Is this the best practice? Or can you somehow open a connection to a PHP script that communicates with the server in realtime and not close the connection for the duration of the game?
Answer: Main question
Is this the best practice? Or can you somehow open a connection to a PHP script that communicates with the server in realtime and not close the connection for the duration of the game?
A better approach would be to use web sockets. That way the function doesn't need to run continuously but instead the front-end code can respond to data coming back from the server. PHP supports sockets and there are a few examples on the web - e.g. in PHP documentation, this chat application (which actually I don't recommend parts of - e.g. the global variables).
For example:
$(document).ready(function(){
var websocket = new WebSocket("ws://exampleDomain.com/data_getter.php");
websocket.onmessage = function(event) {
var data = JSON.parse(event.data);
//handle data
};
});
The PHP code would likely need to utilize socket_create() && socket_send().
Other review points (about current code)
The current code sets ontimeout:
xmlhttp.ontimeout = function (e) {
// XMLHttpRequest timed out. Try sending another request
updateData();
};
this could be simplified to
xmlhttp.ontimeout = updateData;
Bear in mind that the event target e would be passed as the first argument to updateData so if a different set of arguments was needed it would require additional work - e.g. using Function.bind().
With the ready statechange handler, the function updateData() only gets called when the status code is 200
xmlhttp.onreadystatechange = function() {
if (this.readyState == 4 && this.status == 200) {
what if there is a different status code? Perhaps it wouldn't be wise to keep making requests to the server, but instead to show an error message, e.g. invalid input (4xx) or server error (5xx).
"domain": "codereview.stackexchange",
"id": 38023,
"tags": "javascript, php, game, ajax, server"
} |
Dropping connection | Question:
I am trying to run the joy Tutorials/WritingTeleopNode on Ubuntu 11.04 with ROS Electric. I am using a Cordless Rumble Pad 2. Every time, I get an [ERROR] [1323140529.624261794]: Client [/teleop] wants topic /joy to have datatype/md5sum [joy/Joy/e3ef016fcdf22397038b36036c66f7c8], but our version has [sensor_msgs/Joy/5a9ea5f83505693b71e785041e67a8bb]. Dropping connection. The joystick is working up to that point.
Any help would be appreciated,
Morpheus
Originally posted by Morpheus on ROS Answers with karma: 111 on 2011-12-05
Post score: 2
Answer:
That error indicates that some node is publishing a sensor_msgs/Joy message on /joy but your node is expecting a joy/Joy message.
It appears the tutorial was not updated for Electric. In Electric, the Joy message was migrated to the sensor_msgs package and saw the addition of a Header field.
The quick fix for your teleop code should be change from joy::Joy to sensor_msgs::Joy, updating the includes and manifest dependencies appropriately.
Originally posted by Eric Perko with karma: 8406 on 2011-12-05
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 7535,
"tags": "ros, joystick, turtlesim-node"
} |
Multiplayer bowling in Ruby (follow-up: injection, single responsibility) | Question: This is a multiplayer bowling simulator in Ruby. It allows for variable skill levels, and produces weighted random results for each player's rolls based on those skill settings.
This is a complete rewrite of code I first posted for review here. My first solution was entirely procedural (all methods, no classes). I got some great pointers on OOP basics in the response, plus a referral to Sandi Metz, and worked out a new solution on that basis.
Everything works correctly. I'm posting the core classes, but the complete code (with scoring procedures, user input/screen output, tests) is in this gist.
For review: I'm looking for critiques/advice regarding the PlayerGame class in particular (3rd block below), where I've located the primary game logic.
Two basic questions:
Does this PlayerGame class qualify as 'single responsibility'? If not, what should go?
For a program this small, are the direct dependencies (see .bowl and .score_game - instance creation in each case) worth injecting or otherwise minimizing?
The game logic had to go somewhere, and PlayerGame seemed like the best place. The remaining classes are fairly well isolated/dumb. That means PlayerGame is directing instance creation; the dependencies are at least isolated in private methods, but it seems like I could/should do better.
I'm wondering about a) Moving instance creation outside the class entirely (I think this would involve learning more about factory patterns. In practice, though, would you typically bother with the extra layer in a program this small?); b) Removing the turn-by-turn score data from PlayerGame and encapsulating it in a different class (Or do frames-played and scores really belong in a common object?); and/or c) Rethinking the modeling entirely (Trying to get away from gameplay as instance creation?).
Design/modeling considerations:
I'm defining the problem around a game that's scored in progress (as
needed by the display at an alley, essentially), so this differs from the
'kata'/Robert Martin version of the problem.
I'm modeling a Game as a collection of PlayerGame(s), and modeling an individual
player-game as a series of consecutive Frame instances with corresponding subtotals.
Frame results are generated by Player instances (which store and apply skill levels). Scores are calculated within one-time-use ScoringWindow instances that are only aware of a given player's last three frames. Keeping Frame as a class means .strike? (etc.) is available both during gameplay and during scoring.
I'm modeling gameplay itself in terms of instance creation. Unlike card games (where you can generate a deck in advance, and gameplay is just re/distributing cards), it would seem odd to generate a bowler's frames in advance rather than on a turn-by-turn basis.
Player:
# This class creates arrays of weighted-random rolls (1 or 2, from 10 pins).
class Player
attr_reader :skill
def initialize(skill = 0)
raise RangeError unless skill >= 0
@skill = skill
end
def roll
reset_lane
weighted_roll(@pins_standing)
@results
end
private
def reset_lane
@roll_no = 1
@pins_standing = 10
@results = []
end
def update_lane(pins_hit)
@roll_no += 1
@pins_standing -= pins_hit
end
def weighted_roll(pins)
pins_hit = apply_skill(pins)
@results << pins_hit
update_lane(pins_hit)
weighted_roll(@pins_standing) unless (pins_hit == 10 || @roll_no > 2)
end
def apply_skill(pins)
picks = []
(@skill + 1).times { picks << rand(0..pins)
break if picks.max == pins }
picks.max
end
end
Frame:
# This class stores and evaluates a single array of rolls.
class Frame
attr_reader :results
def initialize(player_roll)
@results = player_roll
end
def first_roll
@results[0]
end
def second_roll
@results[1]
end
def total
@results.reduce(:+)
end
def strike?
@results[0] == 10
end
def spare?
strike? == false && total == 10
end
end
PlayerGame:
# This class generates and stores Frame objects, then sends for scoring.
class PlayerGame
attr_reader :player, :frames, :scores
def initialize(player)
@player = player
@frames = []
@scores = []
end
def take_turn
@frames.length == 9 ? bowl_tenth : bowl
score_turn
end
def frames_played
@frames.map { |fr| fr.results }
end
def scores_posted
@scores.flatten
end
private
def bowl
player_frame = Frame.new(@player.roll)
@frames << player_frame
end
def bowl_tenth
base_frame = Frame.new(@player.roll)
if base_frame.strike? || base_frame.spare?
tenth_frame = generate_bonus(base_frame)
@frames << tenth_frame
else
@frames << base_frame
end
end
# Covers all possible cases starting from strike or spare (including 10-10-10).
def generate_bonus(base_frame)
first_bonus = Frame.new(@player.roll)
second_bonus = Frame.new(@player.roll)
source_rolls = [base_frame.results, first_bonus.results, second_bonus.results]
three_rolls = source_rolls.flatten.shift(3)
Frame.new(three_rolls)
end
def current_frame
@frames.length
end
def last_known_score
@scores.compact.empty? ? 0 : @scores.compact[-1]
end
def score_turn
active_frames = (current_frame <= 3) ? @frames : @frames[-3..-1]
window = ScoringWindow.new(active_frames, last_known_score)
@scores << window.return_scores[-1]
@scores[-2] ||= window.return_scores[-2] if window.return_scores.length >= 2
@scores[-3] ||= window.return_scores[-3] if window.return_scores.length == 3
end
end
ScoringWindow:
Omitted here (see link above), but each instance generates/returns an array of 1-3 elements, used by PlayerGame at the end of the preceding block.
Game:
# This class creates a single game and directs player(s) to bowl in sequence.
class Game
attr_reader :players, :player_games
def initialize(players, turn_recorder)
@players = players
@turn_recorder = turn_recorder
@player_games = []
@players.each { |player| @player_games << PlayerGame.new(player) }
play_game
end
private
def play_game
10.times { play_turn; record_turn }
end
def play_turn
@player_games.each { |curr_player| curr_player.take_turn }
end
def record_turn
@turn_recorder.record(@player_games)
end
end
Answer: Ok, I had to review this :)
First of all: Wow. This is leaps and bounds beyond the first version. Heck, it's not even in the same category (literally; the previous version was procedural, this is object oriented. And has tests!). I am very impressed!
To try to answer your specific questions up front:
Does this PlayerGame class qualify as 'single responsibility'? If not, what should go?
Not quite. It and ScoringWindow are sort of stepping on each other's toes, but so are Player and Frame. It's tough to say what specifically should go, though, without knowing where it should go to. You can refactor things in thousands of ways, so I'd rather leave it open. Perhaps you'll get some refactoring ideas from the stuff below, though.
For a program this small, are the direct dependencies (see .bowl and .score_game [you meant score_turn, right?] - instance creation in each case) worth injecting or otherwise minimizing?
Size ain't got nothing to do with it. But really, don't worry about creating instances; that's what the new method is for. If your code is intended to create some frames, then by all means let it create some frames! You can't perfectly decouple everything - in fact what makes much of any code work is that it relies on other code. So it's about picking your battles. And in this case, I'd say you've picked well: turn_recorder is injected, while you create instances of Frame and ScoringWindow as needed. The former isn't integral or core to your model, so yeah, inject that. The latter two are integral to your model, so it makes sense depend firmly on those.
Review time.
I looked at the code in the gist, just to get a complete picture, so I've included a few (superficial) notes on ScoringWindow too, but I've left out the TurnRecorder class. But kudos on separating such user interaction code from the rest!
I've intentionally kept my notes either super low-level (syntax stuff) or more high-level (class interactions etc.). What's in between is refactoring, but that's an exercise left to the reader.
Overall notes
There's quite a smattering of "magic numbers" across the classes. Most notably, of course, is the number 10. Constants or methods should help clean this up.
As a trivial style note: It's funny that you've indented private one extra space. Most often it's just at the same level of indentation as whatever's around it, though other prefer to outdent it like you would else in a if..else..end statement. Personally, I do the former, but as I said: Trivial style note. There's a lot of code to get to.
Player
The methods reset_lane and update_lane smell a bit as though Player has multiple responsibilities. It probably is overkill to make a separate Lane class, but from a semantics standpoint, it's perhaps a little strange that a player determines how many pins are standing. The player is solely in charge of saying when "its" turn is over.
Use do..end for multiline blocks. In other words, please don't do this:
(@skill + 1).times { picks << rand(0..pins)
break if picks.max == pins }
You could also use a semicolon instead of a line break, but this isn't the place for that either.
Trivial small things:
Use @results.first instead of @results[0] in #strike?
I'd write the expression in #spare? using !strike? rather than the direct comparison with false
Frame
If you have accessors (attr_* declarations, or custom accessor methods) it's often a good idea to rely on them within a class too. For instance, you could use the accessor for results in a couple of places, in place of the "raw" instance variable. This decouples your code internally.
Addendum to the above: Take care when using auto-generated accessors; sometimes it's best to write your own, and make sure it returns a duplicate of your instance variable. For instance, right now, I could call some_frame.results and then modify the returned array. Since that array is the very same object as @results inside your class, I'm modifying the frame's internal data. There's no need to be paranoid about this, though (Ruby doesn't have a hardline approach to data/method access like some other languages do, so you can go crazy trying to lock everything down). Still, writing a method that returns @results.dup isn't too terrible, and it would avoid instance variables being modified by accident outside the instance.
PlayerGame
You can use a short-hand syntax for your mapping in #frames_played:
@frames.map(&:results)
which reads as "call results on each item in the array". Same as how you use inject(:+) elsewhere to calculate a sum without writing out the entire block.
I see Frame.new(@player.roll) in a bunch of places. This could be extracted into a method, and DRY your code (and, as a side-benefit, ease testing).
Or, perhaps it's Player that should use Frame. Right now, Frame knows stuff like #strike? and #spare? - logic which is almost duplicated in Player as pins_hit == 10. Compared to the duplication of Frame.new(...) mentioned above, this is more troublesome, as the (near-)duplication spans multiple classes.
In the tenth frame, you're actually playing 3 frames, then possibly discarding some rolls. While it certainly works, it does seem a little inelegant. I think you're perhaps conflating the concept of "frame" with that of a "roll". A game is divided into successive frames, but scoring is actually based on successive rolls.
I mentioned magic numbers above, and while that usually pertains to actual numbers, a method name like #bowl_tenth is also a magic number in a sense. Or, at any rate, it's "magically numbered". But it's not horrible. In my own code, I hand-waved my own use of magic numbers by saying "well, I'm only interested in regular 10-pin/10-frames bowling". Which is a fair reason, but it's also fair to wag a finger at it.
#score_turn (see below)
ScoringWindow
This class and PlayerGame#score_turn have a tangled little web of intrigue. #score_turn digs into ScoringWindow's data a bit, and ScoringWindow, in turn, is created based on data in a PlayerGame. It's not terrible, and it's not quite a mutual dependency, but it is a little... intimate, for lack of a better word. I do like the concept of a "scoring window", though.
Overall, this class' methods smell a little complicated to me (almost every method has if..else branches, and/or early returns), and #update_two_prev and #update_one_prev seem to share some code that could be extracted.
Game
Good use of dependency injection (turn_recorder)!
You could define @player_games as simply @players.map { |player| PlayerGame.new(player) } instead of creating an array, and then pushing items to it in an each-block. Doing the latter is almost an anti-pattern in Ruby, when you have all of Ruby's lovely Array and Enumerable methods on your side.
A quick note on tests
Firstly: Yay, tests! Awesome job! ... yeah, that's basically my whole point here.
Oh, okay, one note: You have this comment on your tests for PlayerGame
Seems tough to test - game logic is here, with non-deterministic results, and requires contact with all classes except Game.
Indeed. It's not quite a code smell, since PlayerGame is a class that ties a lot of other classes together, and provided you can test those other classes fairly independently, you're doing ok. However, the non-deterministic part is a bit tricky. It makes it hard to trust your own tests, which isn't a good feeling.
It's great if you can somehow write your code to avoid such situations (without, of course, writing your code specifically for your tests; it should be the other way around). Also great if you can selectively stub out methods, and replace them with deterministic versions.
This, incidentally, is another reason why you'll often want to rely on accessors, rather than raw instance variables, inside your classes: It lets you stub out the accessor in your tests. This approach could perhaps have let you do some further testing of PlayerGame without nasty randomness.
With all that out of the way, I'm back to my original statement: I'm impressed. There are probably an infinite number of ways to approach this; you reasoned about yours, and the reasoning is sound. Implementation has some rough edges, though, but it ain't half bad. Keep up the good work! | {
"domain": "codereview.stackexchange",
"id": 10106,
"tags": "object-oriented, ruby, game, simulation"
} |
Relativistic collisions (elastic) - stationary target, equal mass | Question: A moving electron with energy $E$ hits a stationary electron. Question is to find the scattering angle (the angle between the paths of the two electrons after collision) in terms of the energy $E$ and the electron mass $m$.
What can we say about the angle between the electrons after collision?
Are both making the same angle to the horizontal axis?
Is the angle between them $\pi/2$ ?
EDIT 3: By squaring and adding equations (2) and (3) written below, and using (1), we can show that
$$\cos\phi = \frac{\gamma_1\gamma_2 - \gamma}{\sqrt{\gamma_1^2-1}\sqrt{\gamma_2^2-1}} $$
EDIT 2: In Newtonian mechanics, this information is sufficient to show that the angle between them after collision is $\pi/2$. I can show the proof also. I just can't be sure if this information is not enough in relativistic mechanics.
What extra information, if necessary, can be added to find the angle between them after collision?
EDIT 1:
In the center of mass frame, all I can do is find the total energy, and from there, find the equal and opposite momenta of the two particles along the vertical axis.
$E_{com}$ is $m\sqrt{2(\gamma + 1)}$. Equal mass and equal momentum gives equal energy for both ($E^2 = p^2 + m^2$).
$E = E_{com}/2 = m\sqrt{(\gamma + 1)/2}$. So $p$ along the $y$-axis for each is $\sqrt{E^2 - m^2} = m\sqrt{(\gamma + 1)/2 - 1} = m\sqrt{(\gamma - 1)/2}$.
I am not sure how I can proceed further to get my angle. For that I would require individual momenta along horizontal axis.
My approach:
So here the red solid line represents the path between the two electrons before collision, the blue and green lines show the paths of the two moving electrons after collision. Total angle between them, which we need to determine, let's call it $\phi$. And the angle that the blue electron makes with the horizontal axis be $\alpha$.
I am working in natural units, so my $c=1$. I assume my energy $E$ is $\gamma m$ and the energies of the electrons after collision are $\gamma_1 m$ for blue and $\gamma_2 m$ for green electron. I have following constraint equations.
For momentum conservation equations, I used $\gamma v = \sqrt{\gamma^2 -1}$
Energy conservation:
$$\gamma + 1 = \gamma_1 + \gamma_2 \tag1$$
Horizontal momentum conservation:
$$\sqrt{\gamma^2-1} = \sqrt{\gamma^2_1 - 1}\cos(\phi-\alpha) + \sqrt{\gamma^2_2 -1} \cos\alpha \tag2$$
Vertical momentum conservation:
$$0 = \sqrt{\gamma^2_1-1}\,\sin(\phi - \alpha) - \sqrt{\gamma^2_2 -1}\,\sin\alpha.\tag3$$
And for my four unknowns $\gamma_1$, $\gamma_2$, $\alpha$ and $\phi$, I need a fourth constraint equation. I believe that should be:
$P_\mu P^\mu$ before and after collision: $${(\gamma+1)^2}m^2 - \gamma^2 m^2 v^2 = 2m^2 + 2\gamma_1 \gamma_2 m^2 - 2\gamma_1\gamma_2 m^2 v_1 v_2 \cos\phi$$
Simplifying
$${(\gamma+1)^2}- (\gamma^2 -1) = 2 + 2\gamma_1 \gamma_2 - 2\sqrt{\gamma^2_1 -1}\sqrt{\gamma^2_2 -1} \cos\phi$$
$$\gamma = \gamma_1 \gamma_2 - \sqrt{\gamma^2_1 -1}\,\sqrt{\gamma^2_2 -1}\,\cos\phi.\tag4$$
I am ready with 4 constraint equations needed to solve for 4 unknowns. But, this looks quite complicated and maybe I missed some simpler trick to do this the easy way. Can anyone help me out here?
Answer: I've edited your equations to improve readability. I also added tags I'll make use of in the following.
Your main error is in believing that your data may determine the final state. It's not so, not even in Newtonian mechanics. The reason is that the assumption of an exactly central collision isn't tenable
(otherwise you'd have all momenta aligned). So there's an unsaid unknown: the misalignment between the balls' centres.
Energy and momentum conservation still do hold, but are unable to completely determine the outcome: there is an equation lacking.
You've tried to obtain a fourth equation by conservation of $P_\mu P^\mu$ but it's an illusion. Since you already used up conservation of $P_\mu$ your last eq. (4) must be a consequence of the preceding three. Try it, by computing $(1)^2 - (2)^2 - (3)^2$.
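You can check that dependence symbolically. The sketch below (using sympy; the symbol names are mine) verifies that $(1)^2 - (2)^2 - (3)^2$ collapses, via the angle-addition identity, to exactly equation (4): the combination of right-hand sides reduces to $2 + 2\gamma_1\gamma_2 - 2\sqrt{\gamma_1^2-1}\sqrt{\gamma_2^2-1}\cos\phi$, while the same combination of left-hand sides is $(\gamma+1)^2 - (\gamma^2-1) = 2\gamma + 2$.

```python
import sympy as sp

# Post-collision Lorentz factors and the two angles, as free symbols.
g1, g2, phi, alpha = sp.symbols('gamma1 gamma2 phi alpha', positive=True)
A = sp.sqrt(g1**2 - 1)   # |p_1| / m
B = sp.sqrt(g2**2 - 1)   # |p_2| / m

# Right-hand sides of the momentum equations (2) and (3).
px = A * sp.cos(phi - alpha) + B * sp.cos(alpha)
py = A * sp.sin(phi - alpha) - B * sp.sin(alpha)

# (RHS of 1)^2 - (RHS of 2)^2 - (RHS of 3)^2, minus the expression that,
# once equated with 2*gamma + 2, becomes equation (4):
combo = (g1 + g2)**2 - px**2 - py**2
target = 2 + 2*g1*g2 - 2*A*B*sp.cos(phi)
diff = sp.simplify(sp.expand(sp.expand_trig(combo - target)))
print(diff)  # -> 0, so (4) carries no information beyond (1)-(3)
```

Since the difference vanishes identically, a fourth independent equation really is missing, which is the point made above.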
The really interesting thing is to prove $\phi<\pi/2$. Would you like to engage in the proof?
Edit
$\let\g=\gamma$
Using (1), i.e. $\g = \g_1 + \g_2 - 1$, to eliminate $\g$ from (4), you get
$$\cos\phi = {(\g_1-1)\,(\g_2-1) \over
\sqrt{\g_1^2-1}\,\sqrt{\g_2^2-1}} > 0,$$
hence $\phi < \pi/2$.
"domain": "physics.stackexchange",
"id": 56566,
"tags": "homework-and-exercises, special-relativity, kinematics, collision"
} |
For change in entropy dS = dq/T, is T the temperature of system or surrounding or both? | Question: For change in entropy $dS = dq_{rev}/T$, is $T$ the temperature of the system, of the surroundings, or of both?
I am confused about $T_{hot}$, $T_{cold}$, $T_{sys}$ and $T_{surr}$.
If $q_{rev}$, are we talking about a reversible cycle such as the Carnot engine?
Answer: If the heat is transferred reversibly, the temperatures of the two bodies have to be the same. Transfer of heat from hotter to colder body is irreversible. | {
"domain": "physics.stackexchange",
"id": 16562,
"tags": "thermodynamics, entropy"
} |
OOP Battleship console game in Java | Question: My Second take on this can be found here
I wanted to make a simple console game in order to practice OOP. I would really appreciate a review that looks at readability, maintenance, and best practices.
What annoys me a little bit with this code is I don't use interfaces, abstract classes, or inheritance, but I couldn't find a good use case for them here.
Board.java
package com.tn.board;
import com.tn.constants.Constants;
import com.tn.ship.Ship;
import com.tn.utils.Position;
import com.tn.utils.Utils;
import java.awt.Point;
import java.util.Scanner;
public class Board {
private static final Ship[] ships;
private char[][] board;
/**
* Initialize ships (once).
*
*/
static {
ships = new Ship[]{
new Ship("Carrier", Constants.CARRIER_SIZE),
new Ship("Battleship", Constants.BATTLESHIP_SIZE),
new Ship("Cruiser", Constants.CRUISER_SIZE),
new Ship("Submarine", Constants.SUBMARINE_SIZE),
new Ship("Destroyer", Constants.DESTROYER_SIZE)
};
}
/**
* Constructor
*/
public Board() {
board = new char[Constants.BOARD_SIZE][Constants.BOARD_SIZE];
for(int i = 0; i < Constants.BOARD_SIZE; i++) {
for(int j = 0; j < Constants.BOARD_SIZE; j++) {
board[i][j] = Constants.BOARD_ICON;
}
}
placeShipsOnBoard();
}
/**
* Target ship ship.
*
* @param point the point
* @return ship
*/
public Ship targetShip(Point point) {
boolean isHit = false;
Ship hitShip = null;
for(int i = 0; i < ships.length; i++) {
Ship ship = ships[i];
if(ship.getPosition() != null) {
if(Utils.isPointBetween(point, ship.getPosition())) {
isHit = true;
hitShip = ship;
break;
}
}
}
final char result = isHit ? Constants.SHIP_IS_HIT_ICON : Constants.SHOT_MISSED_ICON;
updateShipOnBoard(point, result);
printBoard();
return (isHit) ? hitShip : null;
}
/**
* Place ships on board.
*/
private void placeShipsOnBoard() {
System.out.printf("%nAlright - Time to place out your ships%n%n");
Scanner s = new Scanner(System.in);
for(int i = 0; i < ships.length; i++) {
Ship ship = ships[i];
boolean isShipPlacementLegal = false;
System.out.printf("%nEnter position of %s (length %d): ", ship.getName(), ship.getSize());
while(!isShipPlacementLegal) {
try {
Point from = new Point(s.nextInt(), s.nextInt());
Point to = new Point(s.nextInt(), s.nextInt());
while(ship.getSize() != Utils.distanceBetweenPoints(from, to)) {
System.out.printf("The ship currently being placed on the board is of length: %d. Change your coordinates and try again",
ship.getSize());
from = new Point(s.nextInt(), s.nextInt());
to = new Point(s.nextInt(), s.nextInt());
}
Position position = new Position(from, to);
if(!isPositionOccupied(position)) {
drawShipOnBoard(position);
ship.setPosition(position);
isShipPlacementLegal = true;
} else {
System.out.println("A ship in that position already exists - try again");
}
} catch(IndexOutOfBoundsException e) {
System.out.println("Invalid coordinates - Outside board");
}
}
}
}
private void updateShipOnBoard(Point point, final char result) {
int x = (int) point.getX() - 1;
int y = (int) point.getY() - 1;
board[y][x] = result;
}
/**
*
* @param position
* @return
*/
private boolean isPositionOccupied(Position position) {
boolean isOccupied = false;
Point from = position.getFrom();
Point to = position.getTo();
outer:
for(int i = (int) from.getY() - 1; i < to.getY(); i++) {
for(int j = (int) from.getX() - 1; j < to.getX(); j++) {
if(board[i][j] == Constants.SHIP_ICON) {
isOccupied = true;
break outer;
}
}
}
return isOccupied;
}
/**
*
* @param position
*/
private void drawShipOnBoard(Position position) {
Point from = position.getFrom();
Point to = position.getTo();
for(int i = (int) from.getY() - 1; i < to.getY(); i++) {
for(int j = (int) from.getX() - 1; j < to.getX(); j++) {
board[i][j] = Constants.SHIP_ICON;
}
}
printBoard();
}
/**
* Print board.
*/
private void printBoard() {
System.out.print("\t");
for(int i = 0; i < Constants.BOARD_SIZE; i++) {
System.out.print(Constants.BOARD_LETTERS[i] + "\t");
}
System.out.println();
for(int i = 0; i < Constants.BOARD_SIZE; i++) {
System.out.print((i+1) + "\t");
for(int j = 0; j < Constants.BOARD_SIZE; j++) {
System.out.print(board[i][j] + "\t");
}
System.out.println();
}
}
}
Constants.java
package com.tn.constants;
public class Constants {
private Constants() {}
public static final int PLAYER_LIVES = 17; //sum of all the ships
public static final int CARRIER_SIZE = 5;
public static final int BATTLESHIP_SIZE = 4;
public static final int CRUISER_SIZE = 3;
public static final int SUBMARINE_SIZE = 3;
public static final int DESTROYER_SIZE = 2;
public static final char SHIP_ICON = 'X';
public static final char BOARD_ICON = '-';
public static final char SHIP_IS_HIT_ICON = 'O';
public static final char SHOT_MISSED_ICON = 'M';
public static final char[] BOARD_LETTERS = {'A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J'};
public static final int BOARD_SIZE = 10;
}
Player.java
package com.tn.player;
import com.tn.board.Board;
import com.tn.constants.Constants;
import com.tn.ship.Ship;
import java.awt.Point;
import java.util.HashMap;
import java.util.Map;
import java.util.Scanner;
public class Player {
private int id;
private int lives;
private Board board;
private Map<Point, Boolean> targetHistory;
private Scanner scanner;
/**
* Instantiates a new Player.
*
* @param id the id
*/
public Player(int id) {
System.out.printf("%n=== Setting up everything for Player %s ====", id);
this.id = id;
this.lives = Constants.PLAYER_LIVES;
this.board = new Board();
this.targetHistory = new HashMap<>();
this.scanner = new Scanner(System.in);
}
/**
* Gets id.
*
* @return the id
*/
public int getId() {
return id;
}
/**
* Gets lives.
*
* @return the lives
*/
public int getLives() {
return lives;
}
/**
* Decrement live by one.
*/
public void decrementLiveByOne() {
lives--;
}
/**
* Turn to play.
*
* @param opponent the opponent
*/
public void turnToPlay(Player opponent) {
System.out.printf("%n%nPlayer %d, Choose coordinates you want to hit (x y) ", id);
Point point = new Point(scanner.nextInt(), scanner.nextInt());
while(targetHistory.get(point) != null) {
System.out.print("This position has already been tried");
point = new Point(scanner.nextInt(), scanner.nextInt());
}
attack(point, opponent);
}
/**
* Attack
*
* @param point
* @param opponent
*/
private void attack(Point point, Player opponent) {
Ship ship = opponent.board.targetShip(point);
boolean isShipHit = (ship != null) ? true : false;
if(isShipHit) {
ship.shipWasHit();
opponent.decrementLiveByOne();
}
targetHistory.put(point, isShipHit);
System.out.printf("Player %d, targets (%d, %d)",
id,
(int)point.getX(),
(int)point.getY());
System.out.println("...and " + ((isShipHit) ? "HITS!" : "misses..."));
}
}
Ship.java
package com.tn.ship;
import com.tn.utils.Position;
public class Ship {
private String name;
private int size;
private int livesLeft;
private boolean isSunk;
private Position position;
public Ship(String name, int size) {
this.name = name;
this.size = size;
this.livesLeft = size;
this.isSunk = false;
}
public String getName() {
return name;
}
public int getSize() {
return size;
}
public int getLivesLeft() {
return livesLeft;
}
public boolean isSunk() {
return isSunk;
}
public void setSunk(boolean sunk) {
isSunk = sunk;
}
public Position getPosition() {
return position;
}
public void setPosition(Position position) {
this.position = position;
}
public void shipWasHit() {
if(livesLeft == 0) {
isSunk = true;
System.out.println("You sunk the " + name);
return;
}
livesLeft--;
}
}
Position.java
package com.tn.utils;
import com.tn.constants.Constants;
import java.awt.Point;
public class Position {
private Point from;
private Point to;
/**
* Instantiates a new Position.
*
* @param from the from
* @param to the to
*/
public Position(Point from, Point to) {
if(from.getX() > Constants.BOARD_SIZE || from.getX() < 0
|| from.getY() > Constants.BOARD_SIZE || from.getY() < 0
|| to.getX() > Constants.BOARD_SIZE || to.getX() < 0
|| to.getY() > Constants.BOARD_SIZE || to.getY() < 0) {
throw new ArrayIndexOutOfBoundsException();
}
this.from = from;
this.to = to;
}
/**
* Gets from.
*
* @return the from
*/
public Point getFrom() {
return from;
}
/**
* Gets to.
*
* @return the to
*/
public Point getTo() {
return to;
}
}
Utils.java
package com.tn.utils;
import java.awt.Point;
public class Utils {
private Utils() {
}
/**
* Distance between points double.
*
* @param from the from
* @param to the to
* @return the double
*/
public static double distanceBetweenPoints(Point from, Point to) {
double x1 = from.getX();
double y1 = from.getY();
double x2 = to.getX();
double y2 = to.getY();
return Math.sqrt(Math.pow(x1-x2, 2) + Math.pow(y1-y2, 2)) + 1;
}
/**
* Is point between boolean.
*
* @param point the point
* @param position the position
* @return the boolean
*/
public static boolean isPointBetween(Point point, Position position) {
Point from = position.getFrom();
Point to = position.getTo();
return from.getY() <= point.getY()
&& to.getY() >= point.getY()
&& from.getX() <= point.getX()
&& to.getX() >= point.getX();
}
}
Game.java
package com.tn.game;
import com.tn.player.Player;
public class Game {
private Player[] players;
/**
* Instantiates a new Game.
*/
public Game() {
players = new Player[]{
new Player(1),
new Player(2)
};
}
/**
* Start.
*/
public void start() {
int i = 0;
int j = 1;
int size = players.length;
Player player = null;
while(players[0].getLives() > 0 && players[1].getLives() > 0) {
players[i++ % size].turnToPlay(players[j++ % size]);
player = (players[0].getLives() < players[1].getLives()) ?
players[1] :
players[0];
}
System.out.printf("Congrats Player %d, you won!",player.getId());
}
}
Main.java
package com.tn;
import com.tn.game.Game;
public class Main {
public static void main(String[] args) {
Game game = new Game();
game.start();
}
}
Answer: Thanks for sharing your code.
What annoys me a little bit with this code is I don't use interfaces, abstract classes, or inheritance,
Doing OOP means that you follow certain principles which are (amongst others):
information hiding / encapsulation
single responsibility
separation of concerns
KISS (Keep it simple (and) stupid.)
DRY (Don't repeat yourself.)
"Tell! Don't ask."
Law of demeter ("Don't talk to strangers!")
Interfaces, abstract classes, or inheritance support these principles and should be used as needed. They do not "define" OOP.
IMHO the main reason why your approach fails OOP is that your "Model" is an array of the primitive type char. This ultimately leads to a procedural approach for the game logic.
I would think of an interface like this:
interface GameField{
char getIcon();
Result shootAt();
}
where Result would be an enum:
enum Result{ NO_HIT, PARTIAL_HIT, DESTROYED }
And I would have different implementations of the interface:
public class BorderField implements GameField{
private final char borderName;
public BorderField(char borderName){
this.borderName = borderName;
}
@Override
public char getIcon(){
return borderName;
}
@Override
public Result shootAt(){
return Result.NO_HIT;
}
}
public class WaterField implements GameField{
private boolean isThisFieldHit = false;
@Override
public char getIcon(){
return isThisFieldHit?'M': ' ';
}
@Override
public Result shootAt(){
isThisFieldHit = true; // record the miss so getIcon() can render 'M'
return Result.NO_HIT;
}
}
public class ShipField implements GameField{
private final Ship ship;
private boolean isThisFieldHit = false;
public ShipField(Ship ship){
this.ship = ship;
}
@Override
public char getIcon(){
Result shipState = ship.getState();
switch(shipState){
case NO_HIT:
return ' ';
case PARTIAL_HIT:
return isThisFieldHit ? 'O' : ' ';
case DESTROYED:
default:
return '#';
}
}
@Override
public Result shootAt(){
isThisFieldHit = true;
ship.hit();
return ship.getState();
}
}
This should be enough, hope you get the idea...
Formal issues
Naming
Finding good names is the hardest part in programming. So always take your time to think about your identifier names.
On the bright side you follow the Java naming conventions.
But you should have your method names start with a verb in the present tense. E.g.: shipWasHit() should be named hit().
Or distanceBetweenPoints() should be calculateDistanceBetween(). Here the parameters reveal that the distance is between points, so no need to put that in the method name.
Be verbose in your variable names. Instead of
double x1 = from.getX();
double y1 = from.getY();
double x2 = to.getX();
double y2 = to.getY();
these variables should rather be named like this:
double startPointX = from.getX();
double startPointY = from.getY();
double endPointX = to.getX();
double endPointY = to.getY();
Take your names from the problem domain, not from the technical solution.
E.g.: SHIP_ICON should be just SHIP, unless you have another conflicting constant within the Ship class.
Comments
Comments should explain why the code is like it is. Remove all other comments.
Comments should only be used on interface or abstract methods, where they contain the contract that the implementer must fulfill.
Constants class
Put things together that belong together. Define constants in the class that uses them. | {
"domain": "codereview.stackexchange",
"id": 25397,
"tags": "java, object-oriented"
} |
Why ReLU is better than the other activation functions | Question: Here the answer refers to vanishing and exploding gradients that has been in sigmoid-like activation functions but, I guess, Relu has a disadvantage and it is its expected value. there is no limitation for the output of the Relu and so its expected value is not zero. I remember the time before the popularity of Relu that tanh was the most popular amongst machine learning experts rather than sigmoid. The reason was that the expected value of the tanh was equal to zero and and it helped learning in deeper layers to be more rapid in a neural net. Relu does not have this characteristic, but why it is working so good if we put its derivative advantage aside. Moreover, I guess the derivative also may be affected. Because the activations (output of Relu) are involved for calculating the update rules.
Answer: The biggest advantage of ReLu is indeed non-saturation of its gradient, which greatly accelerates the convergence of stochastic gradient descent compared to the sigmoid / tanh functions (paper by Krizhevsky et al).
But it's not the only advantage. Here is a discussion of sparsity effects of ReLu activations and induced regularization. Another nice property is that compared to tanh / sigmoid neurons that involve expensive operations (exponentials, etc.), the ReLU can be implemented by simply thresholding a matrix of activations at zero.
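A tiny plain-Python illustration of both points (the function names are mine): the sigmoid gradient saturates for large inputs while the ReLU gradient stays at 1 on the positive side, and the ReLU itself is a bare threshold with no exponentials.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def d_sigmoid(x):
    s = sigmoid(x)
    return s * (1.0 - s)      # at most 0.25; vanishes for large |x|

def relu(x):
    return max(0.0, x)        # a bare threshold, cheap to compute

def d_relu(x):
    return 1.0 if x > 0 else 0.0

for x in (0.5, 5.0, 50.0):
    print(f"x={x:5}  d_sigmoid={d_sigmoid(x):.3e}  d_relu={d_relu(x)}")
```

Already at x = 5 the sigmoid gradient has shrunk by orders of magnitude, which is exactly what slows stochastic gradient descent down in deep stacks of saturating units.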
But I'm not convinced that great success of modern neural networks is due to ReLu alone. New initialization techniques, such as Xavier initialization, dropout and (later) batchnorm also played very important role. For example, famous AlexNet used ReLu and dropout.
So to answer your question: ReLu has very nice properties, though not ideal. But it truly proves itself when combined with other great techniques, which by the way solve non-zero-center problem that you've mentioned.
UPD: ReLu output is not zero-centered indeed and it does hurt the NN performance. But this particular issue can be tackled by other regularization techniques, e.g. batchnorm, which normalizes the signal before activation:
We add the BN transform immediately before the nonlinearity, by
normalizing $x = Wu+ b$. ... normalizing it is likely to produce
activations with a stable distribution. | {
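As a minimal sketch of that normalization step (plain Python, one feature across a batch, without the learnable scale/shift of real batchnorm): even when every pre-activation is positive (the non-zero-centered situation described above), the normalized batch is re-centered at zero before the ReLU is applied.

```python
def batch_norm(xs, eps=1e-5):
    # Normalize one feature across a batch: zero mean, near-unit variance.
    mean = sum(xs) / len(xs)
    var = sum((x - mean) ** 2 for x in xs) / len(xs)
    return [(x - mean) / (var + eps) ** 0.5 for x in xs]

pre_acts = [3.0, 5.0, 7.0, 9.0]            # all positive: not zero-centered
normed = batch_norm(pre_acts)              # mean of the batch is now 0
relu_out = [max(0.0, x) for x in normed]   # nonlinearity applied afterwards
print(normed)
```

So the non-zero-center problem of the raw ReLU output is handled one layer downstream, by normalizing the next layer's pre-activations.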
"domain": "datascience.stackexchange",
"id": 10902,
"tags": "machine-learning, neural-network, deep-learning, gradient-descent, activation-function"
} |
How does the half-integer spanning-tree problem contain the TSP? | Question: I am trying the understand the following statement from the book of Grotschel, Lovasz and Schrijver:
Here, $\delta(W)$ is the set of edges incident to a set of vertices $W$.
They define an optimization problem whose solution is a nonnegative real vector. If we add a constraint that all elements of the vector must be integers, then the problem is equivalent to the minimum spanning tree problem, and therefore is solvable in polynomial time.
However, if the constraint says that all elements of the vector must be half-integers (that is, either an integer or an integer plus $1/2$), the problem becomes NP-complete, as it includes the symmetric travelling salesman problem.
I do not see the connection: why does the problem with half-integer constraints contain the TSP?
Answer: Reduction from Hamiltonian Cycle (which is just a special case of TSP anyway):
Take an unweighted graph $G$ and make into a complete weighted graph $G'$ by adding a heavy edge (say, $\ell(e) = 2n$) for every $e\not\in E(G)$, and keeping $\ell(e)=1$ for $e\in E(G)$.
For any vector $\mathbf{x}$ that is a solution to the half-integer spanning tree problem, the subset of edges $F$ with $\mathbf{x}(e) > 0$ for $e\in F$ must obviously make a connected subgraph of $G$. This implies that $|F| \geq n-1$. If $|F| = n-1$, then $(V(G), F)$ is a tree and contains at least two leaves $u,v$. The edges $e_u,e_v$ incident on these leaves must have a weight of 1 in $\mathbf{x}$. Therefore the combined weight $\sum_{e\in F} \mathbf{x}(e)$ is at least $\frac{n+1}{2}$. Also, if $|F| \geq n+1$, then of course the combined weight is at least $\frac{n+1}{2}$.
On the other hand, if $|F| = n$ and every edge has a weight of $\frac{1}{2}$ in $\mathbf{x}$, then the combined weight is only $\frac{n}{2}$. This is only possible if $(V(G), F)$ is a cycle.
Therefore, $\sum_{e\in F} \mathbf{x}(e)\ell(e)$ can achieve a minimum value of $\frac{n}{2}$ if and only if $G$ contains a Hamiltonian cycle. | {
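To make the counting concrete, here is a brute-force sketch (my own construction, assuming the feasible set is the cut condition $x(\delta(W)) \ge 1$ for every proper nonempty $W$, matching the $\delta(W)$ notation in the excerpt). On $K_4$ with unit lengths it finds the minimum $n/2 = 2$, attained by $x(e) = 1/2$ on a Hamiltonian cycle; restricting entries to $\{0, 1/2, 1\}$ is harmless here, since larger weights only increase the objective.

```python
from itertools import combinations, product

n = 4
vertices = range(n)
edges = list(combinations(vertices, 2))   # K_4, every edge of length 1

def feasible(x):
    # Cut condition: x(delta(W)) >= 1 for every proper nonempty subset W.
    for r in range(1, n):
        for W in combinations(vertices, r):
            Wset = set(W)
            crossing = sum(x[i] for i, (u, v) in enumerate(edges)
                           if (u in Wset) != (v in Wset))
            if crossing < 1:
                return False
    return True

best = min(sum(x) for x in product((0.0, 0.5, 1.0), repeat=len(edges))
           if feasible(x))
print(best)  # 2.0, i.e. n/2: a half-weight Hamiltonian cycle is optimal
```

Any tree-shaped solution, by contrast, costs at least $(n+1)/2 = 2.5$ here, because its leaf edges are forced up to weight 1.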
"domain": "cs.stackexchange",
"id": 21904,
"tags": "reductions, minimum-spanning-tree, traveling-salesman"
} |
What is an observer in quantum mechanics? | Question: My question is not about (pseudo) philosophical debate; it concerns mathematical operations and experimental facts.
What is an observer? What are the conditions required to be qualified of observer, both mathematically and experimentally?
Answer: Are we talking quantum mechanics? Then I'd say that a "measurement" is any operation that entangles orthogonal states of the system under consideration with orthogonal states of the environment. "Measurement" is the important thing in most formulations of QM. Colloquially speaking, an observer is something that performs measurements.
The only other place in physics I can think of where "observer" shows up is in the oft-used phrase "This is obvious to the casual observer". This is just shorthand for "I can't be bothered to write out the mathematical proof". | {
"domain": "physics.stackexchange",
"id": 67965,
"tags": "quantum-mechanics, measurement-problem, observers"
} |
Do transcription factors bind to both strands of DNA? | Question: Do transcription factors (or generally proteins) bind to only single strand of DNA or both strands? Since it can have non covalent bonds to both strands in theory. I would like to know the mechanism. Any reference books, papers or links will be helpful.
Answer: The short summary is that typical TFs bind and read both strands together, as a basepair sequence. Some proteins instead recognise a site on the helix by its shape and flexibility. ssDNA-binding proteins obviously bind one strand but they do this in a non-specific manner.
RNA-binding proteins recognise the sequence on a single strand by inserting intercalating planar residues between bases! All of this binding is non-covalent.
Transcription factors recognise sites in dsDNA, with DNA-binding domains. The rest of the protein might surround (partially, to varying degree) the negative outer surface of the dsDNA double helix with positively-charged surface, in order to hold it on to DNA as it scans (perhaps) along its length.
DNA-binding domains: major groove
The following domains are found in many transcription factors, and they all recognise both strands. More correctly, they recognise basepairs and their orientation. The first 5 pages of this lecture slideshow demonstrate that the chemical groups on the side of basepairs, accessible in the major groove, allow proteins to distinguish A:T, T:A, C:G & G:C by the order of hydrogen-bond donors, acceptors, and a methyl group.
Hence, TFs recognise a sequence of basepairs - oriented such that one strand is (e.g.) pTpCpApG, and the complementary strand is pCpTpGpA; and the bulk of the protein may 'sit' on one strand or the other - or a nearby gene may locally define one strand or the other as the coding strand but this does not mean that this one strand is read.
Zinc fingers probe the major groove with reading helices.
Helix-turn-helix motifs do much the same.
Leucine zippers also do much the same.
These are common domains that all recognise basepairs in the major groove by interactions with residues on a probing alpha-helix.
TATA-binding protein: minor groove
TATA-binding protein (TBP) is a different, interesting case. It binds the 'TATA-box' via the minor groove, where the exposed chemical groups only distinguish [A/T] from [C/G], but not their orientation. This means that the sequences on each strand cannot be easily read from the minor groove. TBP instead recognises the shape and flexibility of the double-helix at the TATA-box, 'grips' it by the minor groove and bends the DNA, which aids the melting of the strands to the transcription 'bubble'.
The TATA-box sequence is usually pTpApTpApApA on the coding strand upstream of the transcriptional start. This is the convention when giving the sequence of a TF-binding site, but you couldn't say that TBP actually reads TATAAA - it doesn't!
Here is another, similar set of lecture slides.
Even better, here is the same material covered in a popular textbook. | {
"domain": "biology.stackexchange",
"id": 3613,
"tags": "genetics, dna, protein-binding, transcription-factor"
} |
What prevents two particles that made a black hole to unmake it? | Question: Assume you have two high energy particles approaching each other and forming a black hole even before colliding (but before a singularity is formed, which I am not sure that is possible). If the laws of physics are time reversible, then I could start my problem with these two same particles with their momentums reversed, and the solution should be a black hole that splits into the two particles. Is this picture correct? I suspect it is not for some reason I am missing. Or is this the way a black hole evaporates?
Note: actually, we can restrict ourselves to analyze two classical (but relativistic) particles that do not interact, let us forget about quantum mechanics here, so there should be no black hole evaporation or entropy, I believe.
Answer:
If the laws of physics are time reversible, then I could start my problem with these two same particles with their momentums reversed, and the solution should be a black hole that splits into the two particles.
There is a really subtle, and in my opinion beautiful, detail in this statement: the definition of black hole is not time-reversible.
From the moment you say "black hole", you gave up on reversibility. By definition, a black hole is the region of spacetime which no observer that goes to infinity in infinite time can see (see this wonderful PBS Spacetime video for more details). This definition assumes that the black hole region can't be viewed from the future, and it is not symmetric with respect to time reversal. An analogue definition is that of a white hole.
This asymmetry in the very definition of black hole is what allows, for example, the result that a black hole's area can only increase over time, which explicitly distinguishes past from future. The answer to your question is then essentially the same: since the very definition of a black hole already distinguishes past and future, no, you can't find a black hole splitting into two particles by attempting to invoke time reversal symmetry.
Notice that this result that the area of a black hole always increases is fairly similar to the Second Law of Thermodynamics. In modern days, it in fact is interpreted as the application of the Second Law of Thermodynamics to systems involving black holes, as I discussed in this a bit more technical post. | {
"domain": "physics.stackexchange",
"id": 90687,
"tags": "black-holes, event-horizon, causality"
} |
Can hydrogen plasma react with oxygen? | Question: What if you put hydrogen in a vacuum and turned it into a plasma? There is no oxygen in the vacuum, but once you eject the plasma would it react drastically with the oxygen?
Would it explode?
Would it be a bigger explosion than if it wasn't plasma? Could this be a propulsion mechanism?
Thanks!
Answer: The solar wind is primarily made of protons and electrons (i.e., the constituent parts of $^{1}H$) with roughly ~1-5% alpha-particles and then much much smaller fractions of heavier ions (e.g., see https://iopscience.iop.org/article/10.3847/1538-4365/aab71c/ and references therein). That is, the solar wind is a quasi-neutral plasma.
What if you put hydrogen in a vacuum and turned it into a plasma?
No, neutral hydrogen atoms exposed to vacuum (in the absence of other effects) will not spontaneously ionize due to the vacuum alone.
There is no oxygen in the vacuum, but once you eject the plasma would it react drastically with the oxygen? Would it explode?
I think you are asking whether shooting a plasma into Earth's atmosphere would cause an explosive reaction, right? The answer is most likely not. The recombination rate would be so high that the plasma would not live long, though it would heat up the local atmosphere and cause some secondary electron generation, at least temporarily.
The question may be better posed if you asked whether there is a critical limit beyond which a runaway ignition would occur. This was a serious concern for the scientists initially working on the first atomic weapons (e.g., see https://blogs.scientificamerican.com/cross-check/bethe-teller-trinity-and-the-end-of-earth/). It turned out to not be a concern for any of the bombs tested, obviously (and thankfully).
Would it be a bigger explosion than if it wasn't plasma?
I am not sure what you are asking. Are you asking whether an explosion would be bigger if neutral hydrogen were released into our atmosphere? Neutral hydrogen would not cause an explosion but would combine with other hydrogen to form diatomic molecules, if not already in that form, and maybe some $OH^{+}$ ions temporarily. However, most of this would occur very rapidly as there are ~$10^{23}$ particles per mole sitting around waiting to react.
Could this be a propulsion mechanism?
I assume you mean something similar/analogous to rocket propulsion? I suppose in principle, one could create thrust if the plasma were ejected from one side of an object. The problem is that it would be extremely inefficient, energy-wise. It takes a lot of energy to ionize particles and then you need to generate enough energy to accelerate said particles so momentum balance produces a net thrust on the object. This is a lot of energy that is not free, i.e., it would need to be supplied by something within the propelled object. Thus, I would guess this would not make a feasible propulsion system. | {
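To put a rough number on "it takes a lot of energy to ionize" (my own back-of-envelope, not part of the original answer, using rounded literature values: 13.6 eV to ionize a ground-state hydrogen atom versus roughly 286 kJ/mol released by burning H2):

```python
# Back-of-envelope comparison of ionization cost vs. combustion yield
# for hydrogen.  All figures are rounded literature values.
EV_TO_J = 1.602176634e-19      # joules per electronvolt
AVOGADRO = 6.02214076e23       # particles per mole

ionization_eV_per_atom = 13.6  # hydrogen ground-state ionization energy
ionization_kJ_per_mol = ionization_eV_per_atom * EV_TO_J * AVOGADRO / 1000.0

combustion_kJ_per_mol_H2 = 286.0            # approx. heat of combustion of H2
combustion_kJ_per_mol_H = combustion_kJ_per_mol_H2 / 2.0

# How many times more energy it costs to ionize a hydrogen atom than
# you get back by burning it:
ratio = ionization_kJ_per_mol / combustion_kJ_per_mol_H
```

With these numbers, ionizing hydrogen costs on the order of 1300 kJ/mol — roughly nine times the chemical energy you could recover per atom by combustion — which is one way to see why plasma-based chemical thrust would be energetically lopsided.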
"domain": "physics.stackexchange",
"id": 76381,
"tags": "plasma-physics, hydrogen, propulsion, explosions"
} |
Rock-Paper-Scissors game in Clojure | Question: I followed braveclojure book and built this little command line Rock, Paper, Scissors game. The game works fine, but I was wondering if there is a better / more elegant / more clojure-y way to deal with side effects?
For example, the body of my play-round function is just a bunch of printlns and it doesn't return anything (well, nil by default).
And I tried to keep such functions to a minimum, but still...
Or perhaps I'm being paranoid, because after all, an application without side-effects is useless.
(ns rps.core
(:gen-class))
(defn get-input
"Waits for user to enter text and hit enter, then cleans the input"
([] (get-input ""))
([default]
(let [input (clojure.string/trim (read-line))]
(if (empty? input) default input))))
(defn get-random-choice
"Let the computer pick a random choice"
[choices]
(-> choices keys rand-nth))
(defn update-player-choice
"Add r / p / s to the list of choices"
[players player choice]
(update-in players [player :choices] conj choice))
(defn get-round-winner
"This function returns a keyword of the
winning player or nil if it is a draw"
[user-choice computer-choice]
(cond
(= user-choice computer-choice) nil
(or (and (= user-choice :r) (= computer-choice :s))
(and (= user-choice :p) (= computer-choice :r))
(and (= user-choice :s) (= computer-choice :p))) :user
:else :computer))
(defn increment-player-score
"Increment the winner's score"
[players winner]
(update-in players [winner :score] inc))
(defn update-player-scores
"If there is a winner, update the winner's score
otherwise return the original state of players"
[players winner]
(if (not (nil? winner))
(increment-player-score players winner)
players))
(defn get-round-winner-name
"Display the name of the round winner"
[players winner]
(get-in players [winner :name]))
(defn game-is-on
"Determine if the game is still on by
checking that both scores are < 3"
[players]
(every? #(-> % :score (< 3)) (vals players)))
(defn generate-players
"Return a simple object of players in the game"
([user-name] (generate-players user-name "Computer"))
([user-name computer-name]
{:user {:score 0
:choices []
:name user-name}
:computer {:score 0
:choices []
:name computer-name}}))
(defn display-scores
"Display the scores and end the game"
[players]
(let [user-score (get-in players [:user :score])
comp-score (get-in players [:computer :score])
user-won? (> user-score comp-score)
user-name (get-in players [:user :name])
comp-name (get-in players [:computer :name])]
(println (format "%s won the game with the score of %s to %s"
(if user-won? user-name comp-name)
(if user-won? user-score comp-score)
(if user-won? comp-score user-score)))))
(defn display-round-intro
"A helper function that displays i.e. Rock vs Scissors"
[choices user-choice computer-choice]
(println (format "%s vs %s" (get choices user-choice) (get choices computer-choice))))
(defn display-question
"Display the key question - Rock, Paper, Scissors?"
[choices]
(let [question (->> choices
(map #(format "%s(%s)" (second %) (-> (first %) name)))
(interpose ", ")
(apply str))]
(println (str question "?"))))
(defn play-round
"The core game logic"
[players choices]
(display-question choices)
(let [user-choice (-> (get-input) keyword)
computer-choice (get-random-choice choices)
round-winner (get-round-winner user-choice computer-choice)
updated-players (-> players
(update-player-choice :user user-choice)
(update-player-choice :computer computer-choice)
(update-player-scores round-winner))]
(display-round-intro choices user-choice computer-choice)
(if (nil? round-winner)
(println "Draw")
(println (format "%s has won the round" (get-round-winner-name players round-winner))))
(if (game-is-on updated-players)
(play-round updated-players choices)
(display-scores updated-players))))
(defn ask-for-name
"Get the name from the user"
[]
(println "What is your name?")
(let [user-name (get-input)
players (generate-players user-name)
choices {:r "Rock"
:p "Paper"
:s "Scissors"}]
(play-round players choices)))
(defn -main
"Start the game"
[& args]
(println "Let the games begin")
(ask-for-name))
Answer: This is fairly good code. I don't really have much bad to say about it. Most of my suggestions will be stylistic, or based on little things I've learned that have helped me.
First though:
I was wondering if there is a better / more elegant / more clojure-y way to deal with side effects?
Really, you only have a few impure aspects of your program:
get-input: You need to get input from the user, and have sectioned the functionality off into a single function that you use everywhere. That's pretty much the best you can do.
The display- functions: These functions are arguably doing too much. They're compiling the data together into a String, and displaying the String. What if you made this program networked in the future, and wanted to use the same functionality? I'd prefer to create format- functions, and println the returns from them. You could also pass in a Stream (like *out* to print to the stdout), and print into the Stream. That way, the user can use a StringStream if they just want the formatted String.
... and consequently play-round: Of course, where you tie everything together, you're going to have some side effects. Even the strictest Haskell programs have to have a procedure somewhere. The idea is to create pure functions wherever possible, and section the side-effect causing functions off and test them separately. Here, it could be argued that using println is forcing the user to only print to the console, but you'd likely need to rewrite the procedure for another circumstance anyway, so this isn't a huge problem.
Some more general observations:
You never do any input validation! I got some real funky results by entering nonsense. My personal library function that I use for getting simple console input is:
(defn ask-for-input
"Prompts the user for input, checks it using the validation function, and displays the error if the validation fails. Newlines aren't added after the messages."
[prompt-message error-message validate-f]
(print prompt-message)
(flush)
(let [result (read-line)]
(if (validate-f result)
result
(do
(print error-message)
(flush)
(recur prompt-message error-message validate-f)))))
Yes, this has a lot of side effects, but I've found that any time I needed to ask for, and validate user input, this is essentially what I come up with, so I decided to just wrap that common code in a function. I nearly always need a prompt, a way to validate, and to display an error message, so this has proved very helpful. Using this, you could ask for a player's move in a manner more like:
(ask-for-input (format-question choices) ; As mentioned above
"Invalid choice!"
choices) ; Maps return nil (falsey) for an invalid key
That way, you know the data you're getting is definitely valid, and you can go ahead and use it.
The keywords you're using to represent things can be difficult to find. I've began to explicitly "declare" the keywords that I'm using at the top of my file, like I was declaring an Enum. You have two main "Enums": :user+:computer, and :r+:p+:s. I would explicitly write these at the top of the file, and, I would use namespaced keywords (::) instead:
(def valid-moves #{::rock, ::paper, ::scissors})
(def valid-player-types #{::user, ::computer})
This has multiple benefits:
You don't need to go searching through the file if you come back to the project later to see what keywords you're using; everything is explicit at the top of the file. If you're using an intelligent IDE, this also gives it a heads up of what you'll be using so it can give completion hints easier.
By putting them in a global set, you can easily check if a move is valid:
(valid-moves "Some invalid nonsense") ; nil - Falsely on a bad lookup
(valid-moves ::rock) ; ::rock - A truthy value, so it's valid
By namespacing the keywords and again, using a good IDE, all you have to do is write ::, and it can immediately suggest the correct keywords to use. This is convenient, prevents keyword spelling errors, and helps prevent you from using the wrong keyword that was used elsewhere or a potential previously misspelled keyword.
Really, it seems like you should have a Player record:
(defrecord Player [score choices user-name])
and then create a pseudo-constructor to reduce redundancy:
(defn new-player [username]
(->Player 0 [] username))
Then you can simplify generate-players to basically:
(defn generate-players2 [user-name computer-name]
{::user (new-player user-name)
::computer (new-player computer-name)})
Be very careful returning nil from functions. If you forget to handle the nil somewhere, you're likely to get a NullPointerException, which doesn't give very helpful hints about what might have gone wrong. I'd go one of two ways:
In the event of a tie, return something like ::tie. This at least has a chance of giving you more information down the road in case something goes wrong. You'll at least know where the bad data originated from.
Return nil, but make it very clear that the function may return nil. Documentation is great, but I've started taking it a step further ending the names of such functions with a ?. This is dancing with Hungarian Notation, but I like the reminder that a function returns nil. Using the function always involves the question of failure, so I think that should be heavily reflected. This also allows you to make use of when-let and if-let. These seem useless when you first encounter them, but they've grown on me. I tried writing your play-round to make use of if-let, but it got quite messy unfortunately. In this specific case, I'd use option 1.
play-round is susceptible to a Stack Overflow! You're using recursion without using recur, which is "dangerous". If somehow the players manage to tie over and over again, it will crash. Change the recursive call to:
(if (game-is-on updated-players)
(recur updated-players choices) ; Here
(display-scores updated-players))))
My brain's fried from a long day of work, and Edge is starting to lag, this is getting so long. I'll post back if I think of anything else, but these were the main things I noticed.
"domain": "codereview.stackexchange",
"id": 29741,
"tags": "clojure, rock-paper-scissors"
} |
Newton's shell theorem in 2d | Question: I was wondering how to prove the analog of Newton's shell theorem for 2 dimensions, in which gravity obeys an inverse-linear law. Meaning:
that anywhere inside a circle, the gravitational field due to the circle is 0
that outside the circle, the gravitational field from the circle is the same as if the mass from the circle was concentrated at the origin
I'm pretty sure these things are true, but when I tried to do the integral by imitating the 3-dimensional case, I failed. (I tried to imitate this.)
UPDATE 1: CuriousOne asked to show what I did. So here goes, using the same variable names and the same diagram as in the wikipedia page. In the 3D case, namely, it boils down to doing the one-variable integral
$$
\int_{0}^{\pi} \frac{\sin\theta}{s^2}\cos\phi\,d\theta.
$$
The idea, then, is to use $s$ as the variable of integration. (Assuming the second subquestion, $s$ then goes from $r - R$ to $r + R$.) One has
$$
\cos \phi = \frac{r^2 + s^2 - R^2}{2rs}
$$
by the law of cosines, where $r$ and $R$ are constants. (Yay.) To substitute $\sin\theta\,d\theta$, one starts with the fact that
$$
\cos\theta = \frac{R^2 + r^2 - s^2}{2Rr}
$$
(law of cosines again), which upon differentiating becomes
$$
-\sin\theta\,d\theta = \frac{-sds}{Rr}
$$
so that our integral becomes
$$
\int_{r-R}^{r+R} \frac{1}{s^2}\cdot \frac{s}{Rr}\cdot \frac{r^2+s^2-R^2}{2rs}ds
$$
which simplifies and is easy to integrate. In the 2D case, the diagram and the relationships between the variables are the same, but the integral boils down to
computing
$$
\int_0^\pi \frac{1}{s}\cos\phi\,d\theta.
$$
(Unless I'm already mistaken.) I presumed that switching to $s$ as the variable of integration is the right thing to do (again), but I don't know how to get rid of $d\theta$ anymore, now that $\sin\theta$ is missing. [The rest of my failed efforts are removed due to lack of interest. For the record, this section previously ended with "can someone at least check my work?". (See comments below.)]
UPDATE 2: Qmechanic gives a solution below; basically, one should do the integral using $\theta$ as the variable of integration, not $s$.
I also just discovered this preprint, which gives a simple geometric solution that works for all dimensions. (Point outside the sphere.) It's very nice, and I also wonder if it's possible to "algebrize" the solution (i.e., turn it into a sequence of equalities involving integrals).
Answer: Hints to the case $r>R$:
The specific gravitational potential reads $$ U~=~GM\int_{0}^{2\pi} \frac{d\theta}{2\pi} ~\ln s, \qquad s^2 ~=~R^2+r^2 -2Rr \cos\theta, \tag{1} $$
where we use the same notation as on the Wikipedia page.
From symmetry we know the gravitational field must be central/radial
$$ -g_r~=~ \frac{\partial U}{\partial r}
~\stackrel{(1)}{=}~\frac{GM}{r}\int_{0}^{2\pi} \frac{d\theta}{4\pi} \left[ 1 + \frac{r^2-R^2}{R^2+r^2 -2Rr \cos\theta} \right]$$
$$~\stackrel{z=e^{i\theta}}{=}~
\frac{GM}{2r} +\frac{GM}{2r} \oint_{|z|=1} \frac{dz}{2\pi i}\frac{r^2-R^2}{(R^2+r^2)z -Rr (z^2+1)} $$
$$~=~
\frac{GM}{2r} -\frac{GM}{2r} \oint_{|z|=1} \frac{dz}{2\pi i}\frac{r^2-R^2}{Rr (z-r/R)(z-R/r)} ~=~\ldots~=~\frac{GM}{r}.\tag{2} $$ | {
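As a numerical sanity check of both bullet points in the question (my own addition, not part of the original answer, assuming $GM = 1$ and ring radius $R = 1$), a simple midpoint-rule quadrature of the radial field works well:

```python
import math

# Inward radial field at radius r from a uniform ring in 2D, where the
# force from a mass element falls off as 1/s:
#   g(r) = (GM / 2pi) * Integral_0^{2pi} (r - R cos t) / s^2 dt,
#   s^2  = R^2 + r^2 - 2 R r cos t.
def ring_field(r, R=1.0, GM=1.0, n=20000):
    dt = 2.0 * math.pi / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * dt  # midpoint rule
        s2 = R * R + r * r - 2.0 * R * r * math.cos(t)
        total += (r - R * math.cos(t)) / s2
    return GM * total * dt / (2.0 * math.pi)
```

For $r > R$ this returns $GM/r$ essentially to machine precision, and for $r < R$ it returns $0$ — the midpoint rule converges geometrically on smooth periodic integrands, so even modest $n$ suffices.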
"domain": "physics.stackexchange",
"id": 25044,
"tags": "homework-and-exercises, newtonian-gravity, gauss-law"
} |
Basic Concepts on Blocks and spring | Question:
Let there be two blocks $m_1$ and $m_2$, both attached to the same spring, moving with velocities $v_1$ and $v_2$ respectively.
Now I have some doubts related to it.
With respect to the centre of mass, how are these blocks doing SHM?
And in the ground frame, can we use energy conservation like this:
$$(1/2)m_1v_1^2+(1/2)m_2v_2^2=(1/2)kx^2$$
where $x$ is the maximum compression of the spring and $k$ is the spring constant.
Can we use the concept of reduced mass in such situations?
Answer: Regarding point 1: if you consider the center of mass stationary (observe the two blocks in their center of mass frame), then the two blocks will move towards each other and away again. The velocity of one ($m_1$) will always be $\frac{m_2}{m_1}$ of the other (conservation of momentum). Considering the c.o.m. as the origin, that origin will be fixed and each mass will move as though it only "sees" the bit of spring on its side of the c.o.m. (dashed line = location of center of mass):
It follows that you can use reduced mass (although I prefer, from the visual above, to use the normal mass and scale the $k$ of the spring. Same result, mathematically). And if you look for conservation of energy, at any moment the sum of kinetic energies of the two blocks plus the elastic energy stored will be constant. However, it's not clear that your expression in (2) would be correct - there is no reason that the velocity of the two blocks would be a maximum at the same time (except in the c.o.m. frame). | {
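As a numerical aside (my own check, not part of the original answer): at maximum compression the two blocks share a common velocity, so in the ground frame the correct relation involves the reduced mass, $(1/2)\mu v_{rel}^2 = (1/2)k x_{max}^2$ with $\mu = m_1 m_2/(m_1+m_2)$ — not the total kinetic energy as in the question's expression. A rough sketch that checks this against a direct simulation:

```python
import math

def max_compression_sim(m1, m2, v1, v2, k, dt=1e-4, t_max=1.0):
    """Velocity-Verlet simulation of two masses joined by a spring of
    natural length 1; returns the largest compression reached."""
    L0 = 1.0
    x1, x2 = 0.0, L0
    best = 0.0

    def accel(x1, x2):
        f1 = k * (x2 - x1 - L0)   # spring force on m1 (tension > 0 pulls +x)
        return f1 / m1, -f1 / m2

    a1, a2 = accel(x1, x2)
    for _ in range(int(t_max / dt)):
        x1 += v1 * dt + 0.5 * a1 * dt * dt
        x2 += v2 * dt + 0.5 * a2 * dt * dt
        na1, na2 = accel(x1, x2)
        v1 += 0.5 * (a1 + na1) * dt
        v2 += 0.5 * (a2 + na2) * dt
        a1, a2 = na1, na2
        best = max(best, L0 - (x2 - x1))
    return best

# Reduced-mass prediction: (1/2) mu v_rel^2 = (1/2) k x_max^2
m1, m2, v1, v2, k = 1.0, 2.0, 1.0, -1.0, 10.0
mu = m1 * m2 / (m1 + m2)
x_pred = abs(v1 - v2) * math.sqrt(mu / k)
x_sim = max_compression_sim(m1, m2, v1, v2, k)
```

The simulated maximum compression agrees with the reduced-mass prediction; plugging the same numbers into the question's expression (2) instead would overestimate $x$, because the centre-of-mass kinetic energy never goes into the spring.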
"domain": "physics.stackexchange",
"id": 37468,
"tags": "homework-and-exercises, newtonian-mechanics, harmonic-oscillator, spring"
} |
vlookup in python | Question: I wrote a little function that does, I think, what the Excel function does:
Given a value and a table (matrix), search for the row that has value closest to but not greater than the given value. Returns the value of a column in the matrix.
Is there something I'm missing here? It works but seems too easy.
def vlookup(self, key, table, column):
value = table[0][column]
for row in table:
if row[0] >= key:
break
else:
value = row[column]
return value
Answer: I had to look up the definition of VLOOKUP to exactly understand what it meant, because I didn't get it quite right when first reading at your code:
The VLOOKUP function performs a vertical lookup by searching for a value in the first column of a table and returning the value in the same row in the index_number position.
(In your function index_number is called column and I think it is clearer.)
Doing that, I found the exact signature of the excel function: VLOOKUP( value, table, index_number, [approximate_match] ) which you don't quite match here. Also, as stated in the comments, there is no mention of sorted columns in the documentation, so we’ll try to get rid of that.
And last thing to mention: you seem to define this function as a method of a class but never use the self parameter. You'd be better off turning that into a simple function or turning it into a staticmethod.
Handling unsorted columns
So we want to extract a column out of a table (the first one, actually), filter out values that are too high and taking the maximum of what is left. Simple enough in Python:
def demo_function(key, table):
extracted_column = (row[0] for row in table)
interesting_values = filter(lambda x: x < key, extracted_column)
return max(interesting_values)
We can do even better by removing the lambda because we know that any numerical object has a bunch of dunder methods dedicated to comparison:
def demo_function(key, table):
extracted_column = (row[0] for row in table)
interesting_values = filter(key.__gt__, extracted_column)
return max(interesting_values)
or, as a one-liner:
def demo_function(key, table):
return max(filter(key.__gt__, (row[0] for row in table)))
Handling the return value from another column
Since you are not actually interested in the values contained in the first column, we need to work on row items. The max function will happily use the first value of the tuple/list to determine which item is the bigger. But the filter function will need to be aware that we are working with tuples/lists:
def vlookup_approximate(key, table, column):
return max(filter(lambda x: x[0] < key, table))
This function returns the whole row of interest. To actually achieve the desired effect, you only need to return the value at index column:
def vlookup_approximate(key, table, column):
return max(filter(lambda x: x[0] < key, table))[column]
Handling exact matches
The original function has an optional fourth argument to toggle between exact and approximate matches. This means that we have to adapt our filter rule to look at exactly the key value or less than the key value in our first column. This also means that we need to use key.__ge__ (x <= key) instead of key.__gt__ (x < key) for the comparison function:
def vlookup(key, table, column, approximate_match=True):
compare = key.__ge__ if approximate_match else key.__eq__
return max(filter(lambda row: compare(row[0]), table))[column]
Handling lack of results
Looking at the example usages of VLOOKUP, we can see that in case of no match found in the first column, the #N/A value is returned. In our case, the use of the max function raise an exception if the selection is empty:
>>> max([])
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: max() arg is an empty sequence
>>> max(filter((12).__eq__, range(10)))
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: max() arg is an empty sequence
So we need to take that into account and return None instead of the exception in case no match is found to match this behaviour:
def vlookup(key, table, column, approximate_match=True):
compare = key.__ge__ if approximate_match else key.__eq__
try:
return max(filter(lambda row: compare(row[0]), table))[column]
except ValueError:
return None
Visual clutter
The filter function, especially using a lambda can also be written as a comprehension to improve the comprehension at first glance. I am not sure of which way is faster, though. If speed matters to you, time both and keep the best one:
def vlookup(key, table, column, approximate_match=True):
compare = key.__ge__ if approximate_match else key.__eq__
try:
return max(row for row in table if compare(row[0]))[column]
except ValueError:
return None
Python 2
As stated in the comments, ints or floats in Python 2 does not provide the __ge__ method. You can still get the same behaviour using the operator module, but you’ll need to explicitly provide key as the first parameter:
from operator import __ge__, __eq__
def vlookup(key, table, column, approximate_match=True):
compare = __ge__ if approximate_match else __eq__
try:
return max(row for row in table if compare(key, row[0]))[column]
except ValueError:
return None
This excerpt works the same in both Python 2 and Python 3 | {
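For a quick sanity check of that final version, here is a self-contained usage example (the grade table is a made-up example of mine, not from the question):

```python
from operator import __ge__, __eq__

def vlookup(key, table, column, approximate_match=True):
    compare = __ge__ if approximate_match else __eq__
    try:
        return max(row for row in table if compare(key, row[0]))[column]
    except ValueError:
        return None

# Hypothetical grade table: first column is the lower score bound,
# second column is the letter grade.
grades = [(0, "F"), (60, "D"), (70, "C"), (80, "B"), (90, "A")]
```

An approximate lookup of 85 lands on the 80 row ("B"), an exact lookup of 85 finds nothing and returns None, and a key below every first-column value also returns None — matching the #N/A behaviour discussed above.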
"domain": "codereview.stackexchange",
"id": 19412,
"tags": "python"
} |
llibrostime i386 uses illegal instruction (fucomip) | Question:
Hi,
I am trying to run ROS Hydro on an Intel Galileo running Debian. I've installed ROS, the i386 version, and it's giving me an Illegal Instruction error when running the listener cpp tutorial. It appears the offending code is in librostime:
$ gdb devel/lib/beginner_tutorials/talker
...
Program received signal SIGILL, Illegal instruction.
0xb7931195 in ros::DurationBase<ros::WallDuration>::fromSec(double) () from /opt/ros/hydro/lib/librostime.so
(gdb) backtrace
#0 0xb7931195 in ros::DurationBase<ros::WallDuration>::fromSec(double) () from /opt/ros/hydro/lib/librostime.so
#1 0xb7e64d1a in ros::WallDuration::WallDuration(double) () from /opt/ros/hydro/lib/libroscpp.so
#2 0xb7ea1ee0 in ?? () from /opt/ros/hydro/lib/libroscpp.so
#3 0xb7ea1f82 in ?? () from /opt/ros/hydro/lib/libroscpp.so
#4 0xb7ff0202 in ?? () from /lib/ld-linux.so.2
#5 0xb7ff02d9 in ?? () from /lib/ld-linux.so.2
#6 0xb7fe287f in ?? () from /lib/ld-linux.so.2
(gdb) display/i $pc
1: x/i $pc
=> 0xb7931195 <_ZN3ros12DurationBaseINS_12WallDurationEE7fromSecEd+37>: fucomip %st(1),%st
It looks like it's using the fucomip instruction, which was added with the Pentium Pro.
I'll try to figure out how to build this from source... but is this a configuration error in the build farm?
Originally posted by Jon Stephan on ROS Answers with karma: 837 on 2014-02-01
Post score: 0
Answer:
This is not a configuration error on the build farm. The build farm has to make a decision about the minimum supported instruction set and uses the defaults defined on the compiling system. It is very likely that a Pentium Pro is not covered by that due to its age.
You should be easily able to build the packages you need from source following the instructions on the wiki: http://wiki.ros.org/hydro/Installation/Source
Originally posted by Dirk Thomas with karma: 16276 on 2014-02-01
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by Jon Stephan on 2014-02-01:
Dirk,
Aptitude tells me this is an i386 package. Doesn't that mean it should only include 386 instructions? If that's true, fucomip should not occur, because that's not an i386 instruction. Or am I misunderstanding something?
Comment by Dirk Thomas on 2014-02-01:
"i386" does not stand for the 386 instruction set. It indicates 32-bit binaries. The opposite one is "amd64" for 64-bit binaries.
Comment by Jon Stephan on 2014-02-01:
Hmmm... in gcc it does: http://gcc.gnu.org/onlinedocs/gcc-3.3.6/gcc/i386-and-x86_002d64-Options.html, there are i386 through i686 flags
If i386 is actually i686 in ROS, that's unfortunate. The Galileo board can support i586, but not i686, i'd hate to have to build from source when it's so close.
Comment by fergs on 2014-02-01:
Ubuntu officially dropped support for i386, i486, or i586, with 10.10. I can't find the actual press release, but this is noted for the 12.04 notes: https://help.ubuntu.com/12.04/installation-guide/i386/hardware-supported.html
Comment by Jon Stephan on 2014-02-02:
Ok, I see, that's too bad. | {
"domain": "robotics.stackexchange",
"id": 16848,
"tags": "roscpp"
} |
About learning a single Gaussian in total-variation distance | Question: I am looking for the proof of this following result which I saw as being claimed as a "folklore" in a paper. It would be helpful if someone can share a reference where this has been shown!
Let $G$ be an $n$-dimensional Gaussian and let $\delta >0$. Then there
exists a polynomial time algorithm that given $O(\frac {n^2}{\delta^2})$ independent samples from $G$ returns a probability
distribution ${\bf P}$ so that with probability at least $\frac 2 3$
we have, $d_{\text{TV}}(G,{\bf P}) < \delta$
To clarify the notation of "TV" used above :
Given two distributions $\bf P$ and $\bf Q$ with p.d.fs (which we denote by the same symbols) we define the "Total-Variation" distance between them as,
$ d_{\text{TV}}({\bf P},{\bf Q}) := \frac {1}{2} \int_{\bf x \in \mathbb{R}^n} \vert {\bf P}(\bf x) - {\bf Q}(\bf x) \vert d{\bf x} =: \frac {1}{2} ||{\bf P} - {\bf Q}||_1 $
Answer: Essentially, this follows from three facts:
learning a Gaussian in total variation distance $\delta$ is equivalent to learning its two parameters, $\mu,\Sigma$, to (respectively) $\ell_2$ and relative Frobenius norms $O(\delta)$. (Since then the "empirical Gaussian" with the mean and covariance you estimated will be $\delta$-close to the true Gaussian).
learning the mean $\mu$ to $\ell_2$ distance $\delta$ can be done with $O(\frac{n}{\delta^2})$ samples. (This is tight)
learning the covariance to relative Frobenius distance $\delta$ can be done with $O(\frac{n^2}{\delta^2})$ samples. (This is tight)
I suggest you try to prove these yourself. The first one follows from relatively "standard" facts about Gaussians [1], the second two are good exercises.
(If you really want a specific reference, I can try to dig some up.)
[1] Theorem 1.3 in this recent paper is overkill, but will do the job. https://arxiv.org/abs/1810.08693 | {
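To illustrate facts 2 and 3 in the simplest setting, here is a hedged one-dimensional sketch of the plug-in estimators (my own illustration, not from the answer; it only shows the $1/\sqrt{N}$ behaviour in 1-D — in $d$ dimensions the covariance has $\sim d^2$ entries, which is where the $n^2/\delta^2$ sample count comes from):

```python
import random
import statistics

random.seed(0)
MU, SIGMA = 3.0, 2.0  # the "unknown" Gaussian parameters

def estimate(n):
    """Plug-in estimates of mean and standard deviation from n samples."""
    xs = [random.gauss(MU, SIGMA) for _ in range(n)]
    return statistics.fmean(xs), statistics.stdev(xs)

mu_hat, sigma_hat = estimate(100_000)
```

With $N = 10^5$ samples both estimates land within a few hundredths of the truth, consistent with the $\sigma/\sqrt{N}$ error scaling the sample-complexity bounds describe.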
"domain": "cstheory.stackexchange",
"id": 4941,
"tags": "ds.algorithms, machine-learning, lg.learning, st.statistics"
} |
What are the "inexact differentials" in the first law of thermodynamics? | Question: The first law of thermodynamics states that
$$dU=\delta Q - \delta W$$
I have only just graduated high school and I am finding the above form of the equation rather difficult to understand due to the fact that I don't understand what inexact differentials are. Is it possible for anybody to please explain this to me? (I have taken an A.P course in calculus in school).
Answer: The mostly math-free explanation:
The internal energy $U$ is a function of state. It depends only on the state of the system and not how it got there. The notions of heat $Q$ and work $W$ are no such functions - they are properties of a process, not of a state of the thermodynamic system. This means that we can compute the infinitesimal change $\mathrm{d}U$ as the actual change $U$ of the function between two infinitesimally close points, but the infinitesimal changes in heat and work $\delta Q,\delta W$ depend on the way we move from one such point to the other.
More formally:
Now, you should imagine the state space of thermodynamics, and the system taking some path $\gamma$ in it. We call the infinitesimal change in internal energy $\mathrm{d}U$, which is formally a differential 1-form. It's the object that when integrated along the path gives the total change in internal energy, i.e. $U_\text{end}-U_\text{start} = \int_\gamma \mathrm{d} U$. You may think of this as completely analogous to other potentials in physics: If we have a conservative force $F = -\nabla U$, then integrating $F$ along a path taken gives the difference between the potential energies of the start and the end of the path. This is why $U$ is sometimes called a "thermodynamic potential", and this means that the $\mathrm{d}U$ is an actual differential - it is the derivative of the state function $U$.
Since $W$ and $Q$ are not state functions, there are no differentials $\mathrm{d}W$ or $\mathrm{d}Q$. However, along any given path $\gamma$, we can compute the infinitesimal change in work and heat, and also the total change $\Delta W[\gamma]$ and $\Delta Q[\gamma]$, so heat and work are functionals on paths. It turns out that, together with linearity - the work along two paths is the sum of work along each of them - this is enough to know that there are two differential 1-forms representing heat and work on the entire state space (for a formal derivation of this claim, see this excellent answer by joshphyiscs). These forms we call $\delta W$ and $\delta Q$, where we use $\delta$ instead of $\mathrm{d}$ to remind us that these are not differentials of state functions. | {
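To make the path dependence concrete, here is a small worked example of my own (not from the original answer). Take an ideal gas between the same two states $(P_0, V_0)$ and $(P_0/2,\, 2V_0)$ along two different paths. Path A: expand at constant pressure $P_0$ to volume $2V_0$, then cool at constant volume down to pressure $P_0/2$:
$$W_A = \int P\,\mathrm{d}V = P_0(2V_0 - V_0) = P_0 V_0.$$
Path B: cool at constant volume to $P_0/2$ first, then expand at constant pressure:
$$W_B = \frac{P_0}{2}(2V_0 - V_0) = \frac{1}{2}P_0 V_0.$$
Both paths connect the same endpoints, and since $P_2 V_2 = P_0 V_0$, the temperature — and hence $U$ — is the same at both ends, so $\Delta U = 0$ either way. Yet $W_A \neq W_B$, and by the first law $Q_A \neq Q_B$ as well: $\delta W$ and $\delta Q$ depend on the path taken, while $\mathrm{d}U$ integrates to the same $\Delta U$ regardless.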
"domain": "physics.stackexchange",
"id": 98194,
"tags": "thermodynamics, energy"
} |
When converting a Context-Free Grammar to Chomsky Normal Form why is a new start state added? | Question: I'm taking a theoretical computer science class and we just went over the steps to rewrite a context-free grammar in Chomsky Normal Form. The steps we were told to complete are:
Add a new start state pointing to the old start state
Eliminate Epsilon Rules
Eliminate Unit Rules
Change Long Rules into Short Ones
I think I understand how to do each rule, but I'm not seeing the reason for step 1, as in the examples we did in class the new start state always ended up equal to the old start state after step 3, when the unit rules were eliminated. Perhaps I'm misunderstanding something, so I'll give the example that was given in class.
So for example we were told to convert the following:
$S \rightarrow AbA\;|\;B$
$B \rightarrow a\;|\;b$
$A \rightarrow \epsilon\;|\;a$
Step 1 adds the following production rule:
$S_0 \rightarrow S$
Step 2 makes the production rules become:
$S_0 \rightarrow S$
$S \rightarrow AbA\;|\;Ab\;|\;bA\;|\;b\;|\;B$
$B \rightarrow a\;|\;b$
$A \rightarrow a$
Step 3 makes the rules the following
$S_0 \rightarrow AbA\;|\;Ab\;|\;bA\;|\;b\;|\;a$
$S \rightarrow AbA\;|\;Ab\;|\;bA\;|\;b\;|\;a$
$B \rightarrow a\;|\;b$
$A \rightarrow a$
Step 4 makes the rules the following:
$S_0 \rightarrow A U_1\;|\;U_2 A\;|\;A U_2\;|\;b\;|\;a$
$S \rightarrow A U_1\;|\;U_2 A\;|\;A U_2\;|\;b\;|\;a$
$B \rightarrow a\;|\;b$
$A \rightarrow a$
$U_1 \rightarrow U_2 A$
$U_2 \rightarrow b$
$S_0$ just ends up being the same production rule as $S$ so why did we need it in the first place? Is there a case where $S_0$ won't produce the same output as $S$? Also since no state ever goes to S when it starts from the initial state $S_0$ is it okay to get rid of $S$?
Answer: If $G$ is a grammar with start symbol $S$, then $G'$, the augmented grammar for $G$, is $G$ with a new start symbol $S'$ and production $S' \rightarrow S$. The purpose of this new starting production is to indicate to the parser when it should stop parsing and announce acceptance of the input. That is, acceptance occurs when and only when the parser is about to reduce by $S' \rightarrow S$. In the CNF construction specifically, the fresh start symbol also guarantees that the start symbol never appears on the right-hand side of any rule, which keeps the later elimination steps safe. | {
"domain": "cstheory.stackexchange",
"id": 1397,
"tags": "fl.formal-languages, grammars, context-free"
} |
Download and unzip an XML document | Question: I am working on integration software. I will need to download a list of products and import it into my website. The list of products will be downloaded from Constants.EndPoint which contains a zipped xml document.
I will be downloading this file periodically (once every 4 hours)
I have written UnzipClient, which downloads and extracts the file and returns the XML document. The consumer is responsible for deserializing the XML and importing its content (I have not included the consumer code here)
UnzipClient.cs
public class UnzipClient
{
private static readonly HttpClient _httpClient;
private static readonly Uri _endpointUri;
static UnzipClient()
{
_httpClient = new HttpClient();
_endpointUri = new Uri(Constants.EndPoint);
}
public async Task<(XmlDocument xmlDocument, string error)> GetXml()
{
try
{
var response = await _httpClient.SendAsync(new HttpRequestMessage(HttpMethod.Get, _endpointUri));
var error = IsValidResponse(response, "Get");
if (!string.IsNullOrEmpty(error))
{
return (null, error);
}
var xml = await LoadXml(response);
return (xml, "");
}
catch (Exception ex)
{
string error = $"Exception sending a Get request. Message: '{ex.Message}', InnerException: '{ex.InnerException}'";
return (null, error);
}
}
private string IsValidResponse(HttpResponseMessage response, string RequestType)
{
if (response == null)
{
return $"{RequestType} response is null";
}
else if (response.StatusCode != HttpStatusCode.OK)
{
return $"Invalid response to {RequestType} request. StatusCode: '{response.StatusCode}', Reason: '{response.ReasonPhrase}'.";
}
else if (response.Content == null)
{
return $"{RequestType} request, response.Content is null";
}
return string.Empty;
}
private async Task<XmlDocument> LoadXml(HttpResponseMessage response)
{
using (var zipStream = await response.Content.ReadAsStreamAsync())
using (ZipArchive archive = new ZipArchive(zipStream))
{
if (archive.Entries != null && archive.Entries.Count >= 1)
{
using (var unzipStream = archive.Entries[0].Open())
{
var xml = new XmlDocument();
xml.Load(unzipStream);
return xml;
}
}
}
return null;
}
}
The code works, but I am a little confused about the using blocks. Stream and ZipArchive are both disposable, so I would like to dispose of them as soon as possible... however, the consumer still needs to work with the xml document, so I'm not sure whether disposing of the unzipStream would have any impact on the xml document.
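On the disposal question itself: XmlDocument.Load reads the whole stream into an in-memory DOM before returning, so disposing unzipStream (and the archive) afterwards does not affect the loaded document. A quick Python analogue of the same unzip-and-parse flow illustrates the point (an in-memory zip stands in for the HTTP response body):

```python
import io
import xml.etree.ElementTree as ET
import zipfile

# Build an in-memory zip holding one XML file, standing in for the
# downloaded response body.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("products.xml", "<products><product id='1'/></products>")

# Parse the XML out of the archive, then let every stream be closed.
buf.seek(0)
with zipfile.ZipFile(buf) as archive:
    with archive.open(archive.namelist()[0]) as entry:
        tree = ET.parse(entry)

# The parsed document is a plain in-memory tree; it stays fully usable
# after the archive and streams have been closed.
root = tree.getroot()
assert root.tag == "products"
assert root[0].get("id") == "1"
```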
Answer: Current Code Notes :
UnzipClient is not descriptive enough. Since the class serves one specific purpose and provider, it would be better to name it after the provider name and the relevant section of the provider's API, if any. The goal here is to let anyone know the class's purpose without the need to dig inside the class code.
The static constructor is unneeded, along with the static Uri.
The Task<(XmlDocument xmlDocument, string error)> is fine; however, I would prefer a user-defined class. It will give you more maintainability, readability, and extendibility. Or you can just return Task<XmlDocument> and throw exceptions when needed.
XmlDocument - if this is your own choice, I would suggest using XDocument instead (AKA LINQ to XML). It is more readable and easier to work with.
RequestType you can use HttpMethod instead.
LoadXml and IsValidResponse should be moved inside the main method, because it does not do anything outside that scope.
When using HttpClient it is a good idea to make use of BaseAddress instead of passing the full path on each request. This would be useful if for some reason the host is changed, then you only need to update BaseAddress
HttpClient has GetAsync; why not use it instead of the current SendAsync?
Here is an example that demonstrates the above notes :
public class XmlUnzipClientResult
{
public int StatusCode { get; }
public bool IsSuccess { get; }
public XmlDocument Result { get; }
public string Message { get; }
public Exception ExceptionError { get; }
public XmlUnzipClientResult(int statusCode, bool isSuccess, XmlDocument result, string message, Exception exception)
{
StatusCode = statusCode;
IsSuccess = isSuccess;
Result = result;
Message = message;
ExceptionError = exception;
}
public static XmlUnzipClientResult Success(XmlDocument result)
{
return new XmlUnzipClientResult(200, true, result, null, null);
}
public static XmlUnzipClientResult Failure(int statusCode, string message, Exception exception = null)
{
return new XmlUnzipClientResult(statusCode, false, null, message, exception);
}
}
public class XmlUnzipClient : IDisposable
{
private readonly HttpClient _httpClient = new HttpClient();
private readonly Uri _endpointUri = new Uri(Constants.EndPoint);
public async Task<XmlUnzipClientResult> GetRequestResult()
{
try
{
var response = await _httpClient.GetAsync(_endpointUri);
if(response.StatusCode != HttpStatusCode.OK)
{
return XmlUnzipClientResult.Failure((int)response.StatusCode, $"Invalid response to {HttpMethod.Get} request. StatusCode: '{response.StatusCode}', Reason: '{response.ReasonPhrase}'.");
}
using (var zipStream = await response.Content.ReadAsStreamAsync())
using (var archive = new ZipArchive(zipStream))
{
var entry = archive.Entries?.FirstOrDefault();
if(entry != null)
{
using (var unzipStream = entry.Open())
{
var xml = new XmlDocument();
xml.Load(unzipStream);
return XmlUnzipClientResult.Success(xml);
}
}
}
}
catch (Exception ex)
{
return XmlUnzipClientResult.Failure(500, $"'{ex.Message}'", ex);
}
return XmlUnzipClientResult.Failure(500, $"Unexpected Error");
}
#region IDisposable
private bool _disposed;
public void Dispose()
{
Dispose(true);
GC.SuppressFinalize(this);
}
private void Dispose(bool disposing)
{
if (!_disposed)
{
if (disposing)
{
_httpClient.Dispose();
}
_disposed = true;
}
}
#endregion
}
the usage would be something like :
using(var client = new XmlUnzipClient())
{
var results = await client.GetRequestResult();
if(results.IsSuccess)
{
// do something.
}
}
If there is multiple calls, you can declare a private static XmlUnzipClient in the class that will do that calls.
These are just to give some insights on what you've already done. However, from the given context, I don't believe you need a class for that; it would be better to use extension methods for ZipArchive, which would add more usability to your project over a wider scope.
So, you need an extension method on HttpResponseMessage that returns a ZipArchive, and another one on ZipArchiveEntry that returns an XmlDocument.
Example :
public static class ZipArchiveExtensions
{
public static async Task<ZipArchive> ReadAsZipArchiveAsync(this HttpResponseMessage response)
{
if (response == null) throw new ArgumentNullException(nameof(response));
try
{
// Don't dispose here - returning a disposed ZipArchive would be useless.
// The caller's using block disposes the archive, which in turn
// disposes the underlying response stream.
var zipStream = await response.Content.ReadAsStreamAsync();
return new ZipArchive(zipStream);
}
catch (Exception)
{
// handle exceptions
}
return null;
}
public static XmlDocument ToXmlDocument(this ZipArchiveEntry entry)
{
if(entry == null) throw new ArgumentNullException(nameof(entry));
try
{
using (var stream = entry.Open())
{
var xml = new XmlDocument();
xml.Load(stream);
return xml;
}
}
catch(Exception)
{
// handle exceptions
}
return null;
}
}
with that you can do this :
XmlDocument xml = null;
using (var archive = await response.ReadAsZipArchiveAsync())
{
var entry = archive?.Entries?.FirstOrDefault();
if(entry != null)
{
xml = entry.ToXmlDocument();
}
}
if(xml != null)
{
// success do something
} | {
"domain": "codereview.stackexchange",
"id": 42409,
"tags": "c#, memory-management, stream"
} |
Python + selenium scraper to grab results using reverse search | Question: I've written some code in python in combination with selenium to scrape populated result from a website after performing a reverse search.
My scraper opens that site, clicks on the "search by address" button, and then takes the street number and address from the "original.csv" file, puts them in the search box, and hits the search button.
Once the result is populated my scraper grabs it and writes the result to a new csv file, creating new columns in it alongside the previous columns from the "Original Csv" file.
It is necessary to switch between two iframes to get to the result. To get results for all searches it is necessary to write complex xpaths which can grab data by searching two different locations, because sometimes the data are not in a particular location.
I've used a try/except block in my script so that it can take care of results with no value. I tried to write all the data into the "Number" and "City" columns, but as I'm very weak in handling try/except functionality, I created extra columns named "Number1" and "City1" so that no data are missing. "Number1" and "City1" both fall under different xpaths, though!
However, my script runs without errors and fetches the desired results. Any input on this will be highly appreciated.
Here is what I've written to get the job done:
import csv
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
def get_info(driver, wait):
with open("Original.csv", "r") as f, open('Updated.csv', 'w', newline='') as g:
reader = csv.DictReader(f)
newfieldnames = reader.fieldnames + ['Number','City','Number1','City1']
writer = csv.DictWriter(g, fieldnames = newfieldnames)
writer.writeheader()
for item in reader:
driver.get('http://hcad.org/quick-search/')
driver.switch_to_frame(driver.find_element_by_tag_name("iframe"))
driver.find_element_by_id("s_addr").click()
wait.until(EC.presence_of_element_located((By.NAME, 'stnum')))
driver.find_element_by_name('stnum').send_keys(item["Street"])
driver.find_element_by_name('stname').send_keys(item["Address"])
driver.find_element_by_xpath("//input[@value='Search']").click()
try:
driver.switch_to_frame(driver.find_element_by_id("quickframe"))
try:
element = driver.find_element_by_xpath("//td[@class='data']/table//th")
name = driver.execute_script("return arguments[0].childNodes[10].textContent", element).strip() or driver.execute_script("return arguments[0].childNodes[12].textContent", element).strip()
except:
name = ""
try:
element = driver.find_element_by_xpath("//td[@class='data']/table//th")
pet = driver.execute_script("return arguments[0].childNodes[16].textContent", element).strip() or driver.execute_script("return arguments[0].childNodes[18].textContent", element).strip()
except:
pet = ""
try:
name1 = driver.find_element_by_xpath("//table[@class='bgcolor_1']//tr[2]/td[3]").text
except Exception:
name1 = ""
try:
pet1 = driver.find_element_by_xpath("//table[@class='bgcolor_1']//tr[2]/td[4]").text
except Exception:
pet1 = ""
item["Number"] = name
item["City"] = pet
item["Number1"] = name1
item["City1"] = pet1
print(item)
writer.writerow(item)
except Exception as e:
print(e)
if __name__ == '__main__':
driver = webdriver.Chrome()
wait = WebDriverWait(driver, 10)
try:
get_info(driver, wait)
finally:
driver.quit()
Here is the link to the csv file which I used to search the result. "https://www.dropbox.com/s/etgj0bbsav4ex4y/Original.csv?dl=0"
Answer:
bare exception clauses, generally speaking, should be avoided
I would apply "Extract Method" refactoring method to, at least, move the complexity of getting numbers and cities into a separate function.
I also don't really like these extra Number1 and City1 and, I think, you can still use just Number and City, but provide multiple ways to locate them on a page and fall back to an empty string only after all of them have failed.
You can replace:
driver.switch_to_frame(driver.find_element_by_tag_name("iframe"))
with just:
driver.switch_to_frame(0)
This will switch to the first frame in the HTML tree.
f and g are not descriptive variable names, how about input_file and output_file?
Alternative Solution
You can avoid using a real browser and all the related overhead and switch to requests and BeautifulSoup - this should dramatically improve the overall performance.
Here is a sample working code for a single search:
import requests
from bs4 import BeautifulSoup
search_parameters = {
'TaxYear': '2017',
'stnum': '15535',
'stname': 'CAMPDEN HILL RD'
}
with requests.Session() as session:
session.headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.90 Safari/537.36'}
session.post('https://public.hcad.org/records/QuickSearch.asp', data={'search': 'addr'},
headers={'Content-Type': 'application/x-www-form-urlencoded',
'Referer': 'https://public.hcad.org/records/quicksearch.asp'})
response = session.post('https://public.hcad.org/records/QuickRecord.asp', data=search_parameters,
headers={'Content-Type': 'application/x-www-form-urlencoded',
'Referer': 'https://public.hcad.org/records/QuickSearch.asp'}, allow_redirects=True)
soup = BeautifulSoup(response.content, "lxml")
print(soup.select_one("td.data > table th")) | {
"domain": "codereview.stackexchange",
"id": 26934,
"tags": "python, performance, python-3.x, web-scraping, selenium"
} |
Error at the beginner_tutorials package - build error (ROS tutorials) | Question:
Hello,
I am a beginner at ROS and I was doing the following tutorial:
http://wiki.ros.org/ROS/Tutorials/WritingPublisherSubscriber%28c%2B%2B%29 .
I did exactly what the tutorial instructed me to do (copy-pasted the code in use) and then tried to build my project
with catkin_make. I also checked the directories of the package and everything was in accordance with the tutorial's instructions. I also sourced every environment variable needed. I am on Ubuntu 14.04 LTS (dual install with Windows).
However, I got the following errors :
/opt/ros/jade/include/ros/time.h:180:31: required from here
/usr/include/boost/format/feed_args.hpp:248:84: error: no matching function for call to ‘boost::io::too_many_args::too_many_args(int&, int&)’
boost::throw_exception(too_many_args(self.cur_arg_, self.num_args_));
^
/usr/include/boost/format/feed_args.hpp:248:84: note: candidates are:
In file included from /usr/include/boost/format.hpp:44:0,
from /usr/include/boost/math/policies/error_handling.hpp:31,
from /usr/include/boost/math/special_functions/round.hpp:14,
from /opt/ros/jade/include/ros/time.h:58,
from /opt/ros/jade/include/ros/serialization.h:34,
from /opt/ros/jade/include/std_msgs/String.h:14,
from /home/patrchri/catkin_ws/src/beginner_tutorials/src/talker.cpp:2:
/usr/include/boost/format/exceptions.hpp:66:15: note: boost::io::too_many_args::too_many_args()
class too_many_args : public format_error
^
/usr/include/boost/format/exceptions.hpp:66:15: note: candidate expects 0 arguments, 2 provided
/usr/include/boost/format/exceptions.hpp:66:15: note: boost::io::too_many_args::too_many_args(const boost::io::too_many_args&)
/usr/include/boost/format/exceptions.hpp:66:15: note: candidate expects 1 argument, 2 provided
make[2]: *** [beginner_tutorials/CMakeFiles/talker.dir/src/talker.cpp.o] Error 1
make[1]: *** [beginner_tutorials/CMakeFiles/talker.dir/all] Error 2
make: *** [all] Error 2
Invoking "make -j4 -l4" failed
Could you please tell me what I am missing here and what this error means?
In case more info is needed for you to help me, please tell me and I will edit my question.
Thank you in advance for your help.
Originally posted by patrchri on ROS Answers with karma: 354 on 2016-04-28
Post score: 0
Original comments
Comment by mgruhler on 2016-04-28:
This output just shows that there is an error, but not which one.
Scroll up in the terminal and please post the real error message.
Comment by jarvisschultz on 2016-04-28:
What OS are you on? What version of boost do you have installed on your system?
Comment by patrchri on 2016-04-28:
I am in Ubuntu 14.04 LTS
Comment by patrchri on 2016-04-29:
I found the error .Thank you for your help anyway :)
Comment by mgruhler on 2016-04-29:
@patrchri we typically don't close questions on ROS answers. If you figured this out, maybe you could answer your own question, so others can find this solution?
Comment by patrchri on 2016-04-29:
@mig I need more points to reopen the question so I will answer it here .I apologize for closing it without posting the solution, but it was not something major .Due to character's limitations the answer is posted at my next comment .
Comment by patrchri on 2016-04-29:
The error was generated because of bad mistyping of the "ros/ros.h" inclusion .This happened because at start I attempted to write the code and then copy pasted the rest of the code .The strange part is that the terminal didn't indicate me the error at the file.cpp .Thanks again for your answers .
Comment by mgruhler on 2016-04-29:
@patrchri, open now
Answer:
The error was generated because of a bad mistyping of the #include "ros/ros.h" statement. This happened because at the start I attempted to write the code on my own, but then decided to copy-paste the rest of the code. This wrong copy-paste left a stray ros.h" fragment instead of the proper inclusion as written above. The strange part is that the terminal didn't flag the error in the .cpp file as a syntax error, since there was no #include statement.
Thanks again for your answers.
The wrong syntax was like that :
ros.h"
#include "std_msgs/String.h"
#include <sstream>
(The rest of the code in talker.cpp)
Originally posted by patrchri with karma: 354 on 2016-04-29
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 24505,
"tags": "ros, beginner-tutorials"
} |
Why do different nondimensionalizations give different results, although the results should be the same? | Question: I have some problems with the non-dimensionalization of the Hamiltonian of motion in a Coulomb field.
The Hamiltonian has a following form:
$$H=-\frac{\hbar^2}{2\mu^*} \Delta_r-\frac{e^2}{\epsilon_0 r}$$
where $\hbar$, $\mu$, $e$, $\epsilon_0$ are Planck's constant, mass, charge and dielectric constant respectively.
I would like to nondimensionalize the Hamiltonian in two ways:
1)$\quad$ $E_{01}=\frac{\mu^* e^4}{{\epsilon_0}^2 \hbar^2}$, $\quad$ $a_1=\frac{\hbar^2 \epsilon_0}{\mu^* e^2}$, $\quad$ $E_{01}=\frac{e^2}{a_1\epsilon_0}$
Now I divided the Hamiltonian by $E_{01}$ and after some simple expression transformations I got:
$$\tilde{H}=-\frac{1}{2} \Delta_\tilde{r}-\frac{1}{ \tilde{r}}$$ where $\tilde{r}=\frac{r}{a_1}$
2)$\quad$ $E_{02}=\frac{\mu e^4}{{\epsilon_0}^2 \hbar^2}$, $\quad$ $a_2=\frac{\hbar^2 \epsilon_0}{\mu e^2}$, $\quad$ $E_{02}=\frac{e^2}{a_2\epsilon_0}$
I again divided the Hamiltonian by $E_{02}$ and after some simple expression transformations I got:
$$\tilde{\tilde{H}}=-\frac{1}{2} \frac{\mu}{\mu^*}\Delta_\tilde{\tilde{r}}-\frac{1}{ \tilde{\tilde{r}}}$$ where $\tilde{\tilde{r}}=\frac{r}{a_2}$
The eigenfunctions and eigenenergies for this problem are known from theory. Consider the ground state; it has the following wave function: $\psi=2e^{-r}$
Now I will give the code in Wolfram Mathematica, where I try to calculate the energy of the ground state:
In the code I use the symbol $\mu bar$ instead of $\mu^*$.
ClearAll["Global`*"]
hbar = 1054571/1000000*10^(-27);(*Planck constant*)
eV = 1602176/1000000*10^(-12);
ee = 4803204/1000000*10^(-10);(*e charge*)
meV = 10^(-3)*eV;
ϵ0 = 30;(*dielectric constant*)
μ = 9.277*^-29;
μbar = 1.261*^-28;
E01 = (μbar*ee^4)/(ϵ0^2*hbar^2);
E02 = (μ*ee^4)/(ϵ0^2*hbar^2);
Psi[r_] := 2 E^-r;
(*dimensionless Hamiltonian 1*)
(*kinetic energy*)
KK1 = -(1/2)*
NIntegrate[
Psi[r]*Laplacian[Psi[r], {r, θ, ϕ}, "Spherical"]*
r^2, {r, 0, ∞}];
(*potential energy*)
PP1 = Integrate[Psi[r]*(-1/r)*Psi[r]*r^2, {r, 0, ∞}];
EE1 = KK1 + PP1(*energy in dimensionless units*)
Out[620]= -0.5
EEE1 = EE1*E01/meV (*energy in meV*)
Out[621]= -2.09269
(*dimensionless Hamiltonian 2*)
(*kinetic energy*)
KK2 = -(1/2)*μ/μbar*
NIntegrate[
Psi[r]*Laplacian[Psi[r], {r, θ, ϕ}, "Spherical"]*
r^2, {r, 0, ∞}];
(*potential energy*)
PP2 = Integrate[Psi[r]*(-1/r)*Psi[r]*r^2, {r, 0, ∞}];
EE2 = KK2 + PP2(*energy in dimensionless units*)
Out[627]= -0.632157
EEE2 = EE2*E02/meV (*energy in meV*)
Out[628]= -1.94649
Please explain to me what I am doing wrong. The energies EEE1 and EEE2 should be the same, because the results should not depend on the nondimensionalization.
Answer: In the two unit systems you are using, the unit of length is different. That means that the wavefunction is only proportional to $e^{-r}$ in (at most) one of them; in the other, it should be $e^{-(\mu^*/\mu) r}$ or something like that, where the factor arises to "convert" the length units from one system to the other. (Remember that these two wave functions have to "mean" the same thing; if $r = 1 $ (say) means two physically different things in the two systems then $e^{-r}$ corresponds to two physically different wavefunctions.)
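The point can be verified numerically. A minimal Python sketch (a crude Riemann-sum quadrature standing in for Mathematica's NIntegrate; masses taken from the question) evaluates the Rayleigh quotient $\langle\psi|H|\psi\rangle/\langle\psi|\psi\rangle$, and shows that once the trial function in the second system is rescaled to $e^{-(\mu^*/\mu)\,\tilde{\tilde r}}$, both nondimensionalizations agree on the physical energy:

```python
import math

def energy(alpha, c, h=1e-4, rmax=40.0):
    """<H> for the trial function psi(r) = exp(-alpha * r) with
    H = -(c/2) * Laplacian - 1/r, via a crude radial Riemann sum.
    Dividing by the norm makes the normalization of psi irrelevant."""
    kin = pot = norm = 0.0
    r = h
    while r < rmax:
        psi = math.exp(-alpha * r)
        lap = (alpha * alpha - 2.0 * alpha / r) * psi  # Laplacian of exp(-alpha r)
        w = h * r * r                                  # radial volume weight
        kin += -0.5 * c * psi * lap * w
        pot += -(psi * psi / r) * w
        norm += psi * psi * w
        r += h
    return (kin + pot) / norm

mu, mu_star = 9.277e-29, 1.261e-28

# System 1: H = -(1/2)Δ - 1/r, psi = exp(-r)  ->  E = -1/2 in units of E01.
e1 = energy(1.0, 1.0)

# System 2: H = -(1/2)(mu/mu*)Δ - 1/r; the SAME physical state must be
# written as exp(-(mu*/mu) r) because the length unit has changed.
e2 = energy(mu_star / mu, mu / mu_star)

# E02 = (mu/mu*) * E01, so equal physical energies mean e1 == e2 * mu/mu*.
assert abs(e1 + 0.5) < 1e-3
assert abs(e1 - e2 * (mu / mu_star)) < 1e-3
```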
Also, I will reiterate my comment from your previous question that your wavefunctions must be properly normalized before you calculate the energies. I'm not sure that they are, but it would be easy to check; simply issue the command
Integrate[Psi[r]^2*r^2, {r, 0, ∞}];
and see if the result is equal to 1. Importantly, if you correct the wavefunction as I've described above, you will also need to change the normalization of that wavefunction. | {
"domain": "physics.stackexchange",
"id": 93797,
"tags": "quantum-mechanics, orbital-motion, integration, coulombs-law, hydrogen"
} |
What methods exist for distance calculation in clustering, and when should we use each of them? | Question: What methods exist for distance calculation in clustering, like Manhattan, Euclidean, etc.?
Plus, I don't know when I should use them. I always use Euclidean distance.
Answer: Well, there is a book called
Deza, Michel Marie, and Elena Deza. Encyclopedia of distances. Springer Berlin Heidelberg, 2009. ISBN 978-3-642-00233-5
I guess that book answers your question better than I can...
Choose the distance function most appropriate for your data.
For example, on latitude and longitude, use a distance like Haversine. If you have enough CPU, you can use better approximations such as Vincenty's.
On histograms, use a distribution-based distance: Earth mover's distance (EMD), divergences, histogram intersection, quadratic form distances, etc.
On binary data, for example Jaccard, Dice, or Hamming make a lot of sense.
On non-binary sparse data, such as text, various variants of tf-idf weights and cosine are popular.
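For intuition, here are minimal pure-Python versions of a few of these distances (illustration only; in practice a library such as SciPy's scipy.spatial.distance or ELKI provides tested implementations):

```python
import math

# Hand-rolled versions of a few distances, to show how the "right"
# distance depends on the data type.
def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def manhattan(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

def cosine_distance(a, b):
    # Suitable for sparse high-dimensional data such as tf-idf vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / (na * nb)

def jaccard_distance(a, b):
    # For binary vectors: 1 - |intersection| / |union|.
    inter = sum(1 for x, y in zip(a, b) if x and y)
    union = sum(1 for x, y in zip(a, b) if x or y)
    return 1.0 - inter / union

assert euclidean([0, 0], [3, 4]) == 5.0
assert manhattan([0, 0], [3, 4]) == 7
assert abs(cosine_distance([1, 0], [0, 1]) - 1.0) < 1e-12
assert abs(jaccard_distance([1, 1, 0], [1, 0, 1]) - 2 / 3) < 1e-12
```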
Probably the best tool to experiment with different distance functions and clustering is ELKI. It has many, many distances, and many clustering algorithms that can be used with all of these distances (e.g. OPTICS). For example, Canberra distance worked very well for me. That is probably what I would choose as "default". | {
"domain": "datascience.stackexchange",
"id": 776,
"tags": "clustering, distance"
} |
Vector algebra in tetrad formalism | Question: I am working with a general relativistic model of a binary NS system and came upon the usage of tetrads. My team uses an orthonormal Schwarzschild tetrad, so:
$\boldsymbol{\gamma_{\hat{i}}}\cdot\boldsymbol{\gamma_{\hat{j}}}=\eta_{\hat{i}\hat{j}}$
Here, hatted indices are tetrad indices and $\eta$ is the Minkowski metric tensor.
Does this mean that the explicit componentwise forms of the dot and cross products will be the same as in the Minkowski metric?
(e.g., $\boldsymbol{A}\cdot\boldsymbol{B}=-A^{\hat{0}}B^{\hat{0}}+A^{\hat{1}}B^{\hat{1}}+...$)
Answer: Yes indeed, that's the point of using the tetrad. It's an orthonormal basis, so you trade convenience when taking derivatives and the like (because you don't have a coordinate basis anymore) for convenience in dot products. | {
"domain": "physics.stackexchange",
"id": 44234,
"tags": "general-relativity, differential-geometry, notation"
} |
PCA vs tSNE in single cell RNA-seq | Question: What makes tSNE the preferred dimensionality reduction for visualization in single cell RNA-seq over PCA?
I am aware that tSNE works better at showing local structures and fails to capture global structures of the data.
But I don't fully get why this is an advantage. Does it offer better resolution, or better separation of the cells compared to PCA?
Answer: tSNE often offers better visual representation (separation) on such complicated data than PCA. As Micheal pointed out, computing a tSNE embedding over 20,000 gene dimensions is computationally infeasible, so a number of PCs are normally calculated and these are used as input for calculating the tSNE. They are used in tandem.
As for global vs. local: we are much more interested in seeing the similarity of cells to a limited number of neighbours (indicating a cell type) and grouping these close together. This is more important than the distance between such cell types (assuming the separation in your tSNE is driven by a biologically meaningful factor such as cell type and not some confounder).
Edit:
Since I just made these images for myself anyway I might as well post them. The first 2 PCs show some separation by celltype, tSNE computed over 100 PCs gets very nice separation. | {
"domain": "bioinformatics.stackexchange",
"id": 994,
"tags": "scrnaseq, ngs, single-cell"
} |
A puzzle related to nested loops | Question: For a given input $N$, how many times does the enclosed statement execute?
for $i$ in $1\ldots N$ loop
$\quad$for $j$ in $1\ldots i$ loop
$\quad$$\quad$for $k$ in $i\ldots j$ loop
$\quad$$\quad$$\quad$$sum = sum + i$ ;
$\quad$$\quad$end loop;
$\quad$end loop;
end loop;
Can anyone figure out an easy way or a formula to do this in general. Please explain.
Answer: You need to solve simple formula
$\sum_{i=1}^N\sum_{j=1}^i\sum_{k=i}^j1$
this will give you overall result of
$\frac{1}{6}N(N+1)(N+2)$
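A quick brute-force check confirms this closed form. One caveat worth hedging: the formula corresponds to reading for $k$ in $i\ldots j$ as covering the inclusive span between $j$ and $i$; under strict Ada-style range semantics, $i\ldots j$ with $i > j$ is simply empty, and the statement would then execute only $N$ times in total.

```python
def count(n):
    # Brute-force count of how often the innermost statement runs,
    # with "for k in i..j" read as the inclusive span between j and i
    # (the interpretation the closed form corresponds to).
    total = 0
    for i in range(1, n + 1):
        for j in range(1, i + 1):
            for k in range(j, i + 1):
                total += 1
    return total

# Agrees with N(N+1)(N+2)/6 for small N.
for n in range(1, 10):
    assert count(n) == n * (n + 1) * (n + 2) // 6
```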
Math is easy to do here but I used Wolfram Alpha | {
"domain": "cs.stackexchange",
"id": 729,
"tags": "algorithm-analysis, loops"
} |
DRCSIM: hand bits with too big moments of inertia | Question:
drcsim-2.0: There seem to be lots of parts of the Sandia hand which are quite light yet have the default moments of inertia:
inertia ixx="0.01" ixy="0" ixz="0" iyy="0.01" iyz="0" izz="0.01"
right_f0_base mass value="0.35"
right_f0_fixed_accel mass value="0.001"
right_f0_0 mass value="0.05"
right_f0_1 mass value="0.05"
right_f0_1_accel mass value="0.001"
right_f0_2 mass value="0.05"
right_f0_2_accel mass value="0.001"
right_f1_base mass value="0.35"
...
When all the parts are combined, this gives a hand about double the moment of inertia of the pelvis or uleg, and second only to the utorso. This causes simulations with the hands attached to be quite unrealistic. Essentially the hands act as substantial flywheels. It would be useful to get a more realistic set of moments of inertia for the hand bits.
I realize this may be a disaster in terms of simulation stability, since the range of size of the parts will be much larger.
In case you are wondering, something with a moment of inertia of 0.01 which weighs only 0.001kg needs to be at least about 6 meters across.
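That estimate follows from a lower bound on size: for a given mass, a thin hoop ($I = m r^2$, all mass at radius $r$) is the most inertia-efficient shape, so any body realizing inertia $I$ with mass $m$ must have radius at least $\sqrt{I/m}$. A quick check with the numbers from the post:

```python
import math

# Default inertia (kg*m^2) and the 1 g link mass (kg) from the URDF above.
I, m = 0.01, 0.001
r_min = math.sqrt(I / m)      # minimum radius: all mass on a thin hoop
diameter = 2.0 * r_min
assert round(diameter, 1) == 6.3   # "at least about 6 meters across"
```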
Can you tell me the total weight of a Sandia hand?
Thanks,
Chris
Originally posted by cga on Gazebo Answers with karma: 223 on 2013-02-04
Post score: 0
Answer:
This issue is ticketed.
Originally posted by gerkey with karma: 1414 on 2013-02-04
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 3004,
"tags": "gazebo"
} |
What do the square brackets $[ ]$ and $\mid$ mean in $[G_t \mid S_t=s]$? | Question: Here is the formula of the state-value function in Reinforcement Learning: $v_\pi(s) = \mathbb{E}_\pi[G_t \mid S_t = s]$.
What do the square brackets $[ ]$ and $\mid$ mean in $[G_t \mid S_t=s]$? Why use square brackets? Why use $\mid$?
Why do mathematical formulas use all kinds of ambiguous symbols, rather than unique ones like the symbols of a programming language?
Answer: The square brackets are part of the expectation operator (i.e. a function of a random variable, which in this case is $G_t$). This is common notation for the expectation. So, it's not $\left[ X \right]$, but rather $\mathbb{E}\left[ \cdot \right]$ or $\mathbb{E}\left[ X \right]$, for some random variable $X$. This notation is similar to the notation of a function $f(\cdot)$ or $f(x)$, but we use square brackets because expectations are taken with respect to random variables, which are actually functions (if this is too confusing, just ignore these details for now).
The $\mid$ is also common notation and means that we condition on knowing $S_t = s$ (an event). If you are not familiar with conditional expectations and probability distributions, you can take a look at them e.g. here.
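To make the conditional expectation concrete, here is a toy Monte Carlo sketch (the two-state chain and its rewards are invented purely for illustration): the state value $v(s)=\mathbb{E}[G_t \mid S_t = s]$ is estimated by averaging sampled returns over many episodes that start in $s$.

```python
import random

def sample_return(state, gamma=0.9, steps=50):
    # One sampled (truncated) return G_t from a toy two-state chain:
    # reward 1 in state "good", 0 in state "bad", uniform random transitions.
    g, discount = 0.0, 1.0
    for _ in range(steps):
        reward = 1.0 if state == "good" else 0.0
        g += discount * reward
        discount *= gamma
        state = random.choice(["good", "bad"])
    return g

random.seed(0)
# Conditioning on S_t = "good" means: only average episodes started in "good".
returns = [sample_return("good") for _ in range(5000)]
v_good = sum(returns) / len(returns)  # Monte Carlo estimate of E[G_t | S_t = "good"]
assert 5.2 < v_good < 5.8             # analytic value for this chain is about 5.47
```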
So, you can read your formula as
the conditional expectation of $G_t$ (the return, i.e. the sum of future rewards), given that we know that the state at time $t$ is $s$ (i.e. the condition).
This is indeed the definition of the state value function. | {
"domain": "ai.stackexchange",
"id": 3246,
"tags": "reinforcement-learning, math, notation"
} |
What is the universe 'expanding' into? | Question: We say the universe is expanding, and by expanding we mean the distance between objects gets larger over time. We call that "Metric Expansion of the Universe". So far so good. I kind of get the idea about of distances getting larger.
Now, I think of a balloon's surface: the distance between two arbitrary points on the surface gets larger as the metric expansion happens. But in order for metric expansion to happen, doesn't the universe really expand INTO something? The balloon's surface expands into the air, so there's no problem imagining it, but what about the universe itself?
Also, do we mean the whole universe or observable universe when we say the universe is expanding? Both maybe?
Edit: Also, I know some multiverse theories that try to explain it, but the idea that the universe is expanding was there before the multiverse was even considered, so I guess it can be explained without multiverse theories.
Answer: The balloon analogy is useful in some respects, but it is misleading in one important respect. In the balloon analogy, the curvature of the balloon surface is extrinsic, while in GR the curvature of the universe is intrinsic.
Extrinsic curvature is easy to understand. The surface of a balloon, or the hills and valleys on a landscape, or (to make a 1D analogy) a railway line are extrinsically curved because there is another dimension external to the surface that allows the surface to curve. We say that our surface is embedded in a manifold with a dimensionality one greater than the surface.
Intrinsic curvature is much harder to understand because it's counter intuitive. I described intrinsic curvature in my answer to Universe being flat and why we can't see or access the space "behind" our universe plane? but let me try a simpler example.
Suppose you watch an ant walking along an elastic rope, and you see the ant changing speed. You would assume the ant is accelerating. But suppose we had stretched some bits of the rope and compressed others:
The dotted lines show equally spaced divisions on the unstretched rope, so when we compress the rope the dotted lines get closer together and when we stretch the rope the dotted lines get farther apart.
The key feature of intrinsic curvature (and GR) is that the ant sees all the divisions as equally spaced no matter how much we stretch the rope. So if the ant crawls one division per second on the unstretched rope it still crawls at one division per second on the stretched rope. So we see the ant moving more slowly at the left end of the rope than at the right end, and we might explain this by saying the ant is being accelerated by some force (like gravity). But actually the ant is moving in an intrinsically curved space.
This is what happens in GR. The curvature of spacetime is like some bits of spacetime being compressed and other bits being stretched, and this is what causes the acceleration that we describe as gravity. There is no external dimension that the universe is being curved in.
You started off by asking about the metric expansion of space. Well this is like the elastic rope being continuously stretched, but the rope is infinite and has no ends. So the rope isn't being stretched into anything - all the stretching is internal. Likewise the universe isn't expanding into anything. | {
"domain": "physics.stackexchange",
"id": 11755,
"tags": "universe, space-expansion"
} |
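The rope picture can be sketched numerically. In this toy illustration (all numbers invented), the ant's "proper" speed is fixed at one division per second, while the speed an outside observer assigns depends only on how much each division has been locally stretched or compressed:

```python
# Lab-frame length of each rope division: compressed divisions on the
# left (0.5), unstretched in the middle (1.0), stretched on the right (2.0).
stretch = [0.5, 0.5, 1.0, 2.0, 2.0]

def lab_speed(division):
    """Speed the outside observer measures while the ant crosses a division.
    The ant always crawls at one division per second; the observer's
    number just scales with the local stretch factor."""
    proper_speed = 1.0  # divisions per second, the same everywhere
    return proper_speed * stretch[division]
```

The ant's own motion never changes; only the observer's coordinate description does, which is the intrinsic-curvature point the answer is making.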
Were there any images of Sanduleak -69 202 (progenitor of SN1987A) before it exploded? | Question: We all know about SN1987A, the closest observed supernova since Kepler's time. Its progenitor was Sanduleak -69 202, a magnitude 12 blue supergiant, catalogued in 1970. Were there any images of this star before it exploded, and how did we know that this was the star that produced said supernova?
Answer: Yes, it is clearly visible in photographs of the LMC from the 1980s. It is shown in this comparison from the Australian Astronomical Observatory.
We know it was this star in the most simple possible way: The supernova occurred exactly at the same location as this star, and when we look now, that star has gone (replaced by a little bipolar supernova remnant) | {
"domain": "astronomy.stackexchange",
"id": 5556,
"tags": "star, observational-astronomy, supernova"
} |
Hybridization of Na in [Na(H2O)6]+ | Question: In the complex ion $\ce{[Na(H2O)6]+}$, the sodium cation forms 6 coordinate bonds with water ligands. Typically this octahedral form is associated with $\mathrm{sp^3d^2}$ hybridization as far as I know, but in the case of sodium the $d$ orbitals aren't readily available, so how can this be described as $\mathrm{sp^3d^2}$ hybridization?
I would guess that its hybridization is $\mathrm{s^2p^4}$?
On further thought, I think the d orbitals probably are accessible and that it is indeed $\mathrm{sp^3d^2}$, but this has a high hybridisation energy, so we don't see $\ce{[Na(H2O)6]+}$ in large amounts. Instead we see the ion solvated by partial charges and electrostatic rather than dative covalent interactions.
Answer: Solvation is the process by which a species is dissolved in a solvent. The most classic example of this is when a metal ion is dissolved in water. The electronegative oxygens in the water molecules are attracted electrostatically to the positive charge on the metal ion. A solvation shell of water molecules result.
According to this article, which was written about the hydration of alkali metal ions:
In spite of many conducted studies the knowledge of the structures and bonding properties of the hydrated alkali metal ions in aqueous solution is scarce and deviating.
Also, according to another letter, which reports research done on the stability of the different possible forms of the ion, there is no consensus on a single form. This letter discusses three possible forms of the ion, all written as $n_1 + n_2$, where $n_1$ is the number of molecules in the first shell of hydration and $n_2$ is the number of molecules in the second shell of hydration. The forms discussed demonstrate that the water molecules do not necessarily have to be coordinately bonded to the metal itself; in fact, the second shell of hydration can include water molecules interacting with the water molecules in the first shell of hydration.
In short, the hydration of sodium (and the other alkali metals) is not yet well understood, though it is under study.
"domain": "chemistry.stackexchange",
"id": 3451,
"tags": "water, aqueous-solution, ions, hybridization"
} |
Would you be weightless at the center of the Earth? | Question: If you could travel to the center of the Earth (or any planet), would you be weightless there?
Answer: Correct. If you split the earth up into spherical shells, then the gravity from the shells "above" you cancels out, and you only feel the shells "below" you. When you are in the middle there is nothing "below" you.
Reference: Wikipedia on Gauss's law and the shell theorem.
{I am using some simplistic terms, but I don't want to break out surface integrals and radial flux equations}
Edit: Although the inside of the shell has zero gravity classically, it can have non-zero gravity relativistically. At the perfect center the forces may balance out, yielding an unstable solution, meaning that a small perturbation in position will result in forces that exaggerate this perturbation.
"domain": "physics.stackexchange",
"id": 22607,
"tags": "gravity, newtonian-gravity, earth, planets, geophysics"
} |
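A worked example under the usual simplifying assumption of a uniform-density Earth (the constants are standard values; the function name is invented): by the shell theorem, only the mass enclosed below radius $r$ pulls on you, so the field $g(r) = GMr/R^3$ falls linearly to zero at the centre. A minimal sketch:

```python
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def gravity_inside_uniform_sphere(r, M, R):
    """Field magnitude at radius r <= R inside a uniform sphere of
    mass M and radius R.  Shells above r cancel, so only the
    enclosed mass M*(r/R)**3 contributes; equivalently
    g(r) = G*M*r/R**3, which vanishes at the centre."""
    if r == 0:
        return 0.0
    enclosed = M * (r / R) ** 3
    return G * enclosed / r**2

M_earth, R_earth = 5.972e24, 6.371e6  # kg, m
```

At the surface this reduces to the familiar $GM/R^2 \approx 9.8\ \mathrm{m/s^2}$, and at the centre it is exactly zero.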
Intuitive Explanation for Inelastic Collisions? | Question: How is it possible that momentum is conserved in a collision while the total kinetic energy goes down? Intuitively it seems as if the total kinetic energy in the system goes down, momentum should go down as well.
Answer: Keep in mind that kinetic energy is a scalar, which cannot be negative, while momentum is a vector. There is only one way for the total kinetic energy of a system to be zero: all the parts must have a kinetic energy of zero, i.e. nothing is moving. On the other hand, since momentum is a vector, it's quite possible a system could have a total momentum of zero while two or more of the parts are moving.
Consider a simple setup with two objects of equal mass and speed moving towards each other. The total momentum of this system is zero. How can it go down? | {
"domain": "physics.stackexchange",
"id": 35076,
"tags": "classical-mechanics, energy, momentum, collision"
} |
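A quick numerical check of the answer's setup — two equal masses approaching at equal speeds and sticking together (a perfectly inelastic collision, in one dimension; the function name is invented): momentum stays at zero while kinetic energy drops all the way to zero. A minimal sketch:

```python
def perfectly_inelastic(m1, v1, m2, v2):
    """1D perfectly inelastic collision: the bodies stick together.
    Momentum is conserved by construction; kinetic energy is not."""
    v_final = (m1 * v1 + m2 * v2) / (m1 + m2)   # common final velocity
    p_before = m1 * v1 + m2 * v2
    p_after = (m1 + m2) * v_final
    ke_before = 0.5 * m1 * v1**2 + 0.5 * m2 * v2**2
    ke_after = 0.5 * (m1 + m2) * v_final**2
    return p_before, p_after, ke_before, ke_after
```

With equal masses at ±2 m/s, the vector sum of momenta is zero before and after, yet the (always non-negative) kinetic energy falls from 4 J to 0 J.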
How is the Gould Belt younger than the sun? | Question: Our sun is located in the Gould Belt, a group of stars which is thought to be 30 - 50 million years old. However, the sun formed 4.6 billion years ago.
Did the belt form around us?
Answer: Everything is in motion in our Galaxy. The Sun has executed some 20 laps of the Galaxy since it was born and may have migrated inwards or outwards to some extent. The Sun's location has nothing to do with the Gould Belt, or vice versa.
The Gould Belt stars formed just 30 million years or so ago. The position of the Sun relative to the Gould Belt is a coincidence, and it will not be in the Gould Belt in another 30 million years or so, because it has motion relative to it.
"domain": "astronomy.stackexchange",
"id": 4218,
"tags": "solar-system, time"
} |
A regular expression for an automaton which accepts strings with no more than 3 consecutive zeros | Question: This is the automaton I want to find the regular expression for:
As you see, states Q1 to Q4 are accepting and Q5 is a trap state. This automaton accepts strings that have no more than 3 consecutive zeros.
Can anybody help finding a regular expression for this?
Note: What I tried ...
I defined A:=(0+00+000)
So the automaton accepts the strings which contain no consecutive A's. But I don't know how to find that kind of regular expression.
Answer: Hint: A string with no more than 3 consecutive zeroes has the general form
$$0^{p_1} 1 0^{p_2} 1 0^{p_3} 1 \ldots,$$
where $p_i \leq 3$. Try expressing that structure as a regular expression. | {
"domain": "cs.stackexchange",
"id": 6368,
"tags": "regular-languages, finite-automata, regular-expressions"
} |
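Following the hint, the structure "blocks of at most three zeros separated by 1s" can be written down and tested directly. Here is a sketch using Python's `re` syntax (the formal-language version would use the same pattern with union and star; the names below are invented):

```python
import re

# Blocks of at most three zeros, each terminated by a 1, followed by a
# final block of at most three zeros.  fullmatch accepts exactly the
# binary strings with no run of four or more consecutive zeros.
NO_QUAD_ZERO = re.compile(r"(0{0,3}1)*0{0,3}")

def accepts(s):
    return NO_QUAD_ZERO.fullmatch(s) is not None
```

The empty string and strings of up to three zeros match via the trailing block alone; any string containing `0000` fails, since no decomposition into the allowed blocks exists.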