Is there any benefit to writing a driver node based on the driver_base package?
Question: Hi, I'm using a customized 2D laser range finder (not SICK/HOKUYO). This laser range finder has its own protocol so I must write my own driver node to get the 2D scan and publish it in "sensor_msgs/LaserScan". I'm wondering if I should write my own driver node based on "driver_base" package, like "hokuyo_node" package. Is there any benefit to use the "driver_base" package? Thanks. Originally posted by Curtis Fu on ROS Answers with karma: 3 on 2015-08-10 Post score: 0 Answer: I have never used it myself, but having a look at the WikiPage, it does not seem to be a good idea to use it, as it is deprecated. A framework for writing drivers that helps with runtime reconfiguration, diagnostics and self-test. This package is deprecated. API Stability This package is for internal use only. Its API is stable, but not recommended for use by new packages. Originally posted by mgruhler with karma: 12390 on 2015-08-11 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 22420, "tags": "ros" }
What should pwd be replaced with?
Question: In the page http://wiki.ros.org/image_transport/Tutorials/PublishingImages the command $ ln -s `pwd`/image_common/image_transport/tutorial/ ./src/image_transport_tutorial appears. Here 'pwd' should be replaced with something else. What could it be? Thanks Originally posted by jbpark03 on ROS Answers with karma: 31 on 2016-03-08 Post score: 0 Answer: I don't think you need to replace pwd in the command you referred to. On Ubuntu (and other Linux I assume), when a command is surrounded by backquote ("`") characters, its output is substituted in place. So if you follow the tutorial line-by-line, you should be at ~/image_transport_ws/, which is what the pwd command will return and what the tutorial you linked to expects. Originally posted by 130s with karma: 10937 on 2016-03-08 This answer was ACCEPTED on the original site Post score: 4
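A minimal shell sketch of the substitution (using a throwaway directory under /tmp as a stand-in for the tutorial's ~/image_transport_ws):

```shell
# Backquotes substitute the enclosed command's output into the command line,
# so `pwd` expands to the absolute path of the current directory.
mkdir -p /tmp/image_transport_ws
cd /tmp/image_transport_ws
echo `pwd`    # /tmp/image_transport_ws -- the path the ln -s command receives
```

Because the substitution produces an absolute path, the symlink created by the tutorial's `ln -s` command keeps working no matter which directory you later run it from.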
{ "domain": "robotics.stackexchange", "id": 24041, "tags": "linux" }
Schrödinger-Pauli Equation Solutions
Question: The Schrödinger-Pauli equation is the non-relativistic limit of the Dirac equation, and therefore describes spin-1/2 particles in an external electromagnetic field. It is given by: $$\left[\frac{1}{2m}(\boldsymbol{\sigma} \cdot (\boldsymbol{p}-q\boldsymbol{A}))^2+q\phi\right]|\psi\rangle=i \hbar\frac{\partial}{\partial t}|\psi\rangle.$$ Are there any analytical solutions to this equation? I have searched online but have unfortunately been unable to find any. Answer: I cannot be sure, but I suspect that you can get analytical solutions of the Pauli equation by taking a non-relativistic limit of analytical solutions of the Dirac equation. The latter can be found in many books, say Bagrov, Vladislav G. / Gitman, Dmitry, The Dirac Equation and its Solutions (http://www.degruyter.com/view/product/177851) (you can find a Google preview). One example of an analytical solution of the Pauli equation can be found in http://arxiv.org/abs/physics/9807019 .
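Worth noting (a standard identity, not from the question): using $(\boldsymbol{\sigma}\cdot\boldsymbol{a})(\boldsymbol{\sigma}\cdot\boldsymbol{b})=\boldsymbol{a}\cdot\boldsymbol{b}+i\,\boldsymbol{\sigma}\cdot(\boldsymbol{a}\times\boldsymbol{b})$, the kinetic term expands as
$$\frac{1}{2m}(\boldsymbol{\sigma} \cdot (\boldsymbol{p}-q\boldsymbol{A}))^2=\frac{1}{2m}(\boldsymbol{p}-q\boldsymbol{A})^2-\frac{q\hbar}{2m}\,\boldsymbol{\sigma}\cdot\boldsymbol{B},$$
so the Pauli equation is the magnetic Schrödinger equation plus a spin magnetic-moment coupling. Analytic solvability is therefore closely tied to that of the corresponding spinless problem; for a uniform field, for instance, one recovers Landau levels shifted by the spin term.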
{ "domain": "physics.stackexchange", "id": 31248, "tags": "quantum-mechanics, electromagnetism, wavefunction, quantum-spin, spinors" }
Fierz identity (supersymmetry)
Question: So basically I have two Fierz identities involving spinors: $$\psi^a \psi^b = -\frac{1}{2} \epsilon^{ab} \psi \psi$$ And $$\overline{\psi}^{\dot{a}} \overline{\psi}^{\dot{b}} = \frac{1}{2} \epsilon^{\dot{a} \dot{b}} \overline{\psi} \overline{\psi}$$ The first one is immediate: the expression is antisymmetric, and therefore $$\psi^a \psi^b = c \epsilon^{ab}.$$ But $$\psi \psi = \psi^a \epsilon_{ab} \psi^b = c \epsilon^{ab} \epsilon_{ab} = -2 c.$$ So $$\psi^a \psi^b = -\frac{1}{2} \epsilon^{ab} \psi \psi.$$ The second one is the problem. Certainly, I am missing something in the definition. Using the same logic as above, we have $$ \overline{\psi}^{\dot{a}} \overline{\psi}^{\dot{b}} = c \epsilon^{\dot{a} \dot{b}}.$$ So $$\overline{\psi} \overline{\psi} = \overline{\psi}_{\dot{a}} \overline{\psi}^{\dot{a}} = \overline{\psi}^{\dot{b}} \epsilon_{\dot{b} \dot{a}} \overline{\psi}^{\dot{a}} = c \epsilon_{\dot{b} \dot{a}} \epsilon^{\dot{b} \dot{a}}.$$ Obtaining $$ c = -\frac{1}{2} \overline{\psi} \overline{\psi}.$$ But this is not the second Fierz identity! What am I missing? Answer: We define \begin{align} \psi \chi &\equiv \psi_a \chi^a = - \epsilon_{ab} \psi^a \chi^b , \qquad {\bar \psi} {\bar \chi} \equiv {\bar \psi}^{\dot a} {\bar \chi}_{\dot a} = \epsilon_{{\dot a}{\dot b}} {\bar \psi}^{\dot a} {\bar \chi}^{\dot b} . \end{align} Expanding the sums out explicitly, we find $$ \psi \psi = - 2 \psi^1 \psi^2 , \qquad {\bar \psi} {\bar \psi} = 2 {\bar \psi}^{\dot 1} {\bar \psi}^{\dot 2} . \tag{1} $$ We have $$ \psi^a \psi^b = c_1 \epsilon^{ab} \psi \psi , \qquad {\bar \psi}^{\dot a} {\bar \psi}^{\dot b} = c_2 \epsilon^{{\dot a}{\dot b}} {\bar \psi} {\bar \psi} $$ for some constants $c_1$ and $c_2$. We can now set $ab={\dot a}{\dot b}=12$ in the equations above; using $\epsilon^{12} = \epsilon^{{\dot 1}{\dot 2}} = 1$ and matching to (1), we find $$ c_1 = - \frac{1}{2} , \qquad c_2 = \frac{1}{2}. $$
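The relative sign traces back to the ε-contractions. A tiny numeric check of the bookkeeping (purely illustrative, using the standard two-component convention that the lower-index epsilon is minus the upper-index one, with ε^{12} = 1):

```python
# Hypothetical index bookkeeping: epsilon with upper indices has eps_up[(1, 2)] = 1;
# with lower indices, eps_dn = -eps_up (the standard two-component convention).
eps_up = {(1, 2): 1, (2, 1): -1, (1, 1): 0, (2, 2): 0}
eps_dn = {k: -v for k, v in eps_up.items()}

# The contraction that produced the factor of -1/2 for undotted indices:
contraction = sum(eps_up[(a, b)] * eps_dn[(a, b)]
                  for a in (1, 2) for b in (1, 2))
print(contraction)  # -2
```

This reproduces the question's $\epsilon^{ab}\epsilon_{ab} = -2$; the dotted-index sign flip then comes entirely from the opposite contraction order in the definition of ${\bar \psi}{\bar \chi}$.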
{ "domain": "physics.stackexchange", "id": 100430, "tags": "definition, conventions, fermions, spinors, grassmann-numbers" }
Pet hotel system in Python
Question: I am doing a simple application for a pet hotel. I have almost finished but I'm still new on Python and I would like to see if there is a more efficient way to write this, whilst supporting both Python 2 and 3. My next steps would be to write a searching algorithm (search by booking ID) and sorting algorithm (merge sort/selection sort etc.) to sort out the different pet types. import datetime staffID = 'admin' password = 'admin' petName = [] petType = [] bookingID = [] roomID = [] boardedPets = [] history = [] roomInUse = [] roomToUse = [] roomRates = {'dogs':50, 'cats':45, 'birds':30, 'rodents':25} dogcatRoomsAvailable = 60 birdRoomsAvailable = 80 rodentRoomsAvailable = 100 totalPriceStr = "" # Login Function # Requests user for staffID and password to gain access to the menu system def loginFunction(s, p): # Login inputs staffID = input("Enter Staff ID: ") password = input("Password: ") # Check if staffID and password is correct; # If input is not valid, it informs user that ID and password is invalid and requests again loginTrust = False while (loginTrust is False): if (staffID == 'admin') and (password == 'admin'): print("Successfully logged in") loginTrust = True else: print("Wrong ID or Password. Please enter again. 
") loginTrust = False staffID = input("Enter Staff ID: ") password = input("Password: ") # Check In Function # Allows user to check in customers' pets def checkIn(petNm, petTy, bookID, roomuse): global dogcatRoomsAvailable global birdRoomsAvailable global rodentRoomsAvailable # Pet Name Input petName= input("Enter pet name: ") petNm.append(petName) #Pet Type Input petType= input("\n'Dog', 'Cat', 'Bird', 'Rodent'\n Enter pet type: ") # Check if petType is valid petTyCheck = False while petTyCheck == False: if (petType.lower() == 'dog' or petType.lower() == 'cat' or petType.lower() == 'bird' or petType.lower() == 'rodent'): # Check if rooms are still available if (dogcatRoomsAvailable != 0): petTy.append(petName) petTyCheck = True elif (birdRoomsAvailable != 0): petTy.append(petName) petTyCheck = True elif (rodentRoomsAvailable != 0): petTy.append(petName) petTyCheck = True else: print("Rooms for dogs & cats are not available anymore. ") print(boardedPets) petTyCheck = True FrontDeskMenu() else: print("Pet type must be only from the list") petTyCheck = False petType= input("\n'Dog', 'Cat', 'Bird', 'Rodent'\n Enter pet type: ") # Check In Date Allocators checkInDate = datetime.datetime.now() cIdString = str(checkInDate) bookingID = str(cIdString[0:4] + cIdString[5:7] + cIdString[8:10] + cIdString[11:13] + cIdString[14:16] + cIdString[17:19]) bookID.append(bookingID) # Check Out Date Default checkOutDate = 'Nil' # Room Allocators # Pet type input print("\nRules when assigning rooms: \nFor dogs: 'D' + any numbers \nFor cats: 'C' + any numbers \nFor birds: 'B' + any numbers \nFor rodents: 'R' + any numbers") print("Remember to insert letter and number plates in front of the kennel after bring the pets in! 
") roomToUse = input('\nAssign a room for the pet: ') roomCheck = False rIU = roomToUse[0] print(rIU) # Check if rooms are assigned accordingly for the animal if (petType.lower() == 'dog'): # Check if input starts with 'D' and is not in use while roomCheck == False: if (rIU.lower() == 'd' and (roomInUse.count(roomToUse.upper()) == 0)): roomInUse.append(roomToUse.upper()) dogcatRoomsAvailable = dogcatRoomsAvailable - 1 print("Rooms left: ", dogcatRoomsAvailable) roomCheck = True # If input does not start with 'D' elif (rIU.lower() != 'd'): print("Room Number is either invalid or the room may be in use. Make sure the first letter starts with a 'D'. ") roomCheck = False roomToUse = input('Assign a room for the pet: ') rIU = roomToUse[0] # If room is in use elif (roomInUse.count(roomToUse.upper()) != 0): print("Room Number is either invalid or the room may be in use. Make sure the first letter starts with a 'D'. ") roomCheck = False roomToUse = input('Assign a room for the pet: ') rIU = roomToUse[0] else: None if (petType.lower() == 'cat'): # Check if input starts with 'C' and is not in use while roomCheck == False: if (rIU.lower() == 'c' and (roomInUse.count(roomToUse.upper()) == 0)): roomInUse.append(roomToUse.upper()) dogcatRoomsAvailable = dogcatRoomsAvailable - 1 print("Rooms left: ", dogcatRoomsAvailable) roomCheck = True # If input does not start with 'C' elif (rIU.lower() != 'c'): print("Room Number is either invalid or the room may be in use. Make sure the first letter starts with a 'C'. ") roomCheck = False roomToUse = input('Assign a room for the pet: ') rIU = roomToUse[0] # If room is in use elif (roomInUse.count(roomToUse.upper()) != 0): print("Room Number is either invalid or the room may be in use. Make sure the first letter starts with a 'C'. 
") roomCheck = False roomToUse = input('Assign a room for the pet: ') rIU = roomToUse[0] else: None if (petType.lower() == 'bird'): # Check if input starts with 'C' and is not in use while roomCheck == False: if (rIU.lower() == 'b' and (roomInUse.count(roomToUse.upper()) == 0)): roomInUse.append(roomToUse.upper()) birdRoomsAvailable = birdRoomsAvailable - 1 print("Rooms left: ", birdRoomsAvailable) roomCheck = True # If input does not start with 'C' elif (rIU.lower() != 'b'): print("Room Number is either invalid or the room may be in use. Make sure the first letter starts with a 'C'. ") roomCheck = False roomToUse = input('Assign a room for the pet: ') rIU = roomToUse[0] # If room is in use elif (roomInUse.count(roomToUse.upper()) != 0): print("Room Number is either invalid or the room may be in use. Make sure the first letter starts with a 'C'. ") roomCheck = False roomToUse = input('Assign a room for the pet: ') rIU = roomToUse[0] else: None if (petType.lower() == 'rodent'): # Check if input starts with 'R' while roomCheck == False: if (rIU.lower() == 'r' and (roomInUse.count(roomToUse.upper()) == 0)): roomInUse.append(roomToUse.upper()) rodentRoomsAvailable = rodentRoomsAvailable - 1 print("Rooms left: ", rodentRoomsAvailable) roomCheck = True # If input does. not start with 'R' elif (rIU.lower() != 'r'): print("Room Number is either invalid or the room may be in use. Make sure the first letter starts with a 'R'. ") roomCheck = False roomToUse = input('Assign a room for the pet: ') rIU = roomToUse[0] # If room is in use elif (roomInUse.count(roomToUse.upper()) != 0): print("Room Number is either invalid or the room may be in use. Make sure the first letter starts with a 'R'. 
") roomCheck = False roomToUse = input('Assign a room for the pet: ') rIU = roomToUse[0] else: None # Put information into boardedPets boardedPets.append([bookingID, petName.title(), petType.title(), cIdString, roomToUse.title(), checkOutDate]) print(boardedPets) print(roomInUse) print(len(roomInUse)) print(petName) # Call back the menu after finishing task FrontDeskMenu() def CheckOut(): # Requests for bookingID to checkout cObid = str(input("Please enter booking ID: ")) counter = 0 outCheck = False # Misc cBidLenC = [cObid[i:i+1] for i in range(0, len(cObid), 1)] print(cBidLenC) boardNum = len(boardedPets) print("Boarded pets left: ", boardNum) # Check out date to be assigned checkOutDate = datetime.datetime.now() cOdString = str(checkOutDate) if (len(cBidLenC) > 14): print("Invalid booking ID") cObid = str(input("Please enter booking ID: ")) elif (len(cBidLenC) < 14): print("Invalid booking ID") cObid = str(input("Please enter booking ID: ")) elif (len(cBidLenC) == 14): print("Correct booking ID: ") # Check out the pets # Remove pet to check out from boardedPets list # Insert the pet into history list while outCheck == False: for e in boardedPets: # for each list in boardedpets print('xyz') for element in e: # for each element in list print('abc') if cObid in element: print('qwe') # Payment checkInDay = int(e[3][8:10]) checkOutDay = int(cOdString[8:10]) daysStayed = checkOutDay - checkInDay if (e[2] == 'Dog'): # Assume same day checkout rate is also the rate of one day if (daysStayed == 0): totalPrice = roomRates['dogs'] * daysStayed + roomRates['dogs'] print("Total days stayed: ", daysStayed) print("Total: ", totalPrice) totalPriceStr = ("$" + str(totalPrice)) elif (daysStayed >= 1): totalPrice = roomRates['dogs'] * daysStayed print("Total days stayed: ", daysStayed) print("Total price: $", totalPrice) elif (e[2] == 'Cat'): # Assume same day checkout rate is also the rate of one day if (daysStayed == 0): totalPrice = roomRates['cats'] * daysStayed + 
roomRates['cats'] print("Total days stayed: ", daysStayed) print("Total: ", totalPrice) totalPriceStr = ("$" + str(totalPrice)) elif (daysStayed >= 1): totalPrice = roomRates['birds'] * daysStayed print("Total days stayed: ", daysStayed) print("Total price: $", totalPrice) elif (e[2] == 'Bird'): # Assume same day checkout rate is also the rate of one day if (daysStayed == 0): totalPrice = roomRates['birds'] * daysStayed + roomRates['birds'] print("Total days stayed: ", daysStayed) print("Total: ", totalPrice) totalPriceStr = ("$" + str(totalPrice)) elif (daysStayed >= 1): totalPrice = roomRates['birds'] * daysStayed print("Total days stayed: ", daysStayed) print("Total price: $", totalPrice) elif (e[2] == 'Rodent'): # Assume same day checkout rate is also the rate of one day if (daysStayed == 0): totalPrice = roomRates['rodents'] * daysStayed + roomRates['rodents'] print("Total days stayed: ", daysStayed) print("Total: ", totalPrice) totalPriceStr = ("$" + str(totalPrice)) elif (daysStayed >= 1): totalPrice = roomRates['rodents'] * daysStayed print("Total days stayed: ", daysStayed) print("Total price: $", totalPrice) # Data manipulations outCheck = True e.pop(5) e.insert(5, cOdString) e.append(totalPriceStr) history.append(e) boardedPets.pop(counter) print("Checked out. Remaining: ", len(boardedPets)) print(boardedPets) print("History length: ", len(history)) print(history) counter += 1 if outCheck == True: print("Finished checkout. ") else: print("Booking ID not found. Please enter again. 
") cObid = str(input("Please enter booking ID: ")) # Call back the menu after finishing task FrontDeskMenu() # Room Availability # Check for availability of rooms def roomAvailability(): print("\nRoom Availability\n") print("Dogs: ", dogcatRoomsAvailable) print("Birds: ", birdRoomsAvailable) print("Rodents: ", rodentRoomsAvailable) FrontDeskMenu() # History function # Reads history of pets boarded def History(): print(history) FrontDeskMenu() # Search function # note: the booking ID is ALWAYS sorted def SearchFunction(): boardedIDList = [] count = 0 search = str(input("Enter booking ID: ")) while (count < len(boardedPets)): bc = boardedPets[count][0] boardedIDList.append(bc) count = count + 1 search = ("Enter booking ID: ") for el in boardedIDList: print(el) print(boardedIDList) FrontDeskMenu() # Menu # Menu used for calling functions def FrontDeskMenu(): print("\nTaylor's Pet Hotel\nFront Desk Admin") print("A. Check in pets") print("B. Check out pets") print("C. Rooms Availability") print("D. History") print("E. Binary Search") print("F. Exit\n") # Input for calling functions userInput = input("What would you like to do today?: ") # Check if userInput is valid; if input is not valid, it continues to ask for a valid input inputCheck = False while (inputCheck is False): # Checks userInput and exccute function as requested by user if (userInput.lower() == 'a'): checkIn(petName, petType, bookingID, roomInUse) inputCheck = True elif (userInput.lower() == 'b'): CheckOut() inputCheck = True elif (userInput.lower() == 'c'): roomAvailability() inputCheck = True elif (userInput.lower() == 'd'): History() inputCheck = True elif (userInput.lower() == 'e'): SearchFunction() inputCheck = True elif (userInput.lower() == 'f'): quit() else: print("Invalid value! 
Please try again.") userInput = input("What would you like to do today?: ") inputCheck = False loginFunction(staffID, password) FrontDeskMenu() print(boardedPets) Answer: password = 'admin' You may have guessed this already, but this is not a secure way to store a password. It should be hashed, and stored in a file that has restrictive permissions. This is only a start - you can do more advanced things like using the OS keychain, etc. while (loginTrust is False): can be while not loginTrust: The same applies to while petTyCheck == False. This: if (petType.lower() == 'dog' or petType.lower() == 'cat' or petType.lower() == 'bird' or petType.lower() == 'rodent'): can be: if petType.lower() in ('dog', 'cat', 'bird', 'rodent'): Even better, if you de-pluralize your key names in roomRates, you can write: if petType.lower() in roomRates.keys(): When you write this: petType= input("\n'Dog', 'Cat', 'Bird', 'Rodent'\n Enter pet type: ") You shouldn't hard-code those pet names. Instead, use a variable you already have, such as roomRates: print(', '.join(roomRates.keys())) input('Enter pet type: ') This: bookingID = str(cIdString[0:4] + cIdString[5:7] + cIdString[8:10] + cIdString[11:13] + cIdString[14:16] + cIdString[17:19]) should not be done this way. As far as I can tell, you're using a custom date format. Read about using strftime for this purpose. This: print("\nRules when assigning rooms: \nFor dogs: 'D' + any numbers \nFor cats: 'C' + any numbers \nFor birds: 'B' + any numbers \nFor rodents: 'R' + any numbers") should have you iterating over the list of pet type names, taking the first character and capitalizing it. Similarly, any other time that you've hard-coded a pet type name, you should attempt to get it from an existing variable. 
This: if (len(cBidLenC) > 14): print("Invalid booking ID") cObid = str(input("Please enter booking ID: ")) elif (len(cBidLenC) < 14): print("Invalid booking ID") cObid = str(input("Please enter booking ID: ")) elif (len(cBidLenC) == 14): print("Correct booking ID: ") should be: if len(cBidLenC) != 14: print('Invalid booking ID') else: print('Valid booking ID.') Also, that logic needs to be adjusted so that you loop until the ID is valid. These: checkInDay = int(e[3][8:10]) checkOutDay = int(cOdString[8:10]) should not be using string extraction for date components. You should be using actual date objects and getting the day field from them. This: count = count + 1 should be count += 1 You should also consider writing a main function rather than having global code.
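To make a couple of these points concrete, here is a hypothetical sketch (names are illustrative, not from the original program) of the de-pluralized rate table and the strftime-based booking ID:

```python
import datetime

# Illustrative rate table with de-pluralized keys, so the same dict can
# validate pet types, drive prompts, and look up prices.
room_rates = {'dog': 50, 'cat': 45, 'bird': 30, 'rodent': 25}

def is_valid_pet_type(raw):
    # One membership test replaces the long chain of == comparisons.
    return raw.lower() in room_rates

def make_booking_id(when):
    # strftime replaces the six hand-sliced substrings of str(datetime).
    return when.strftime('%Y%m%d%H%M%S')
```

For example, `make_booking_id(datetime.datetime(2015, 8, 10, 12, 30, 5))` gives `'20150810123005'`, the same 14-character ID the original slicing built.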
{ "domain": "codereview.stackexchange", "id": 32998, "tags": "python, beginner" }
Is the distance in Newton's law of universal gravitation really squared?
Question: When I was at university (in the late 90s, circa 1995) I was told there had been research investigating the $2$ (the square of the distance) in Newton's law of universal gravitation. $$F=G\frac{m_1m_2}{r^2}.$$ Maybe a model like $$F=G\frac{m_1m_2}{r^a}$$ with $a$ slightly different from $2$, say $1.999$ or $2.001$, fits some experimental data better? Is that really true? Or did I misunderstand something? Answer: This was suggested by Asaph Hall in 1894, in an attempt to explain the anomalies in the orbit of Mercury. I retrieved the original article at http://adsabs.harvard.edu/full/1894AJ.....14...49H Interestingly, he mentions in the introduction that Newton himself had already considered in the Principia what happens if the exponent is not exactly 2, and had concluded that the observations available to him strongly supported the exact power 2! The story is retold, e.g., on p.356 of N.R. Hanson, Isis 53 (1962), 359-378. See also Section 2 of http://adsabs.harvard.edu/full/2005MNRAS.358.1273V
{ "domain": "physics.stackexchange", "id": 57192, "tags": "gravity, experimental-physics, newtonian-gravity" }
Merge sort implementation in Python
Question: def mergesort( array ): # array is a list #base case if len(array) <= 1: return array else: split = int(len(array)/2) #left and right will be sorted arrays left = mergesort(array[:split]) right = mergesort(array[split:]) sortedArray = [0]*len(array) #sorted array "pointers" l = 0 r = 0 #merge routine for i in range(len(array)): try: #Fails if l or r exceed the length of the array if left[l] < right[r]: sortedArray[i] = left[l] l = l+1 else: sortedArray[i] = right[r] r = r+1 except: if r < len(right): #sortedArray[i] = right[r] #r = r+1 for j in range(len(array) - r-l): sortedArray[i+j] = right[r+j] break else: #sortedArray[i] = left[l] #l = l+1 for j in range( len(array) - r-l): sortedArray[i+j] = left[l+j] break return sortedArray Answer: First of all, the code suffers from a very typical problem. The single most important feature of merge sort is stability: it preserves the order of the items which compare equal. As coded, if left[l] < right[r]: sortedArray[i] = left[l] l = l+1 else: sortedArray[i] = right[r] r = r+1 of two equal items the right one is merged first, and the stability is lost. The fix is simple: if left[l] <= right[r]: (or if right[r] < left[l]: if you prefer). I don't think that try/except on each iteration is the way to go. Consider try: while i in range(len(array)): .... except: .... Of course here i is not known in the except clause. Again, the fix is simple. Notice that the loop is never terminated by its condition: either left or right is exhausted before i reaches the limit. It means that testing the condition is pointless, and i is an index with the same rights as l and r: l = 0 r = 0 i = 0 try: while True: .... except: .... Naked excepts are to be avoided. Do except IndexError: explicitly.
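Putting the review's points together, one possible rewrite (a sketch, not the only way): `<=` keeps the sort stable, the loop runs until one side is exhausted, and only IndexError is caught.

```python
def mergesort(array):
    # Sketch applying the review's suggestions to the original recursive shape.
    if len(array) <= 1:
        return array
    split = len(array) // 2
    left = mergesort(array[:split])
    right = mergesort(array[split:])
    merged = []
    l = r = 0
    try:
        while True:
            if left[l] <= right[r]:   # <= keeps equal items in order (stable)
                merged.append(left[l])
                l += 1
            else:
                merged.append(right[r])
                r += 1
    except IndexError:
        # One side ran out; append whichever remainder is non-empty.
        merged.extend(left[l:] or right[r:])
    return merged
```

Note that `left[l:] or right[r:]` works because exactly one of the two remainders is non-empty when the IndexError fires.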
{ "domain": "codereview.stackexchange", "id": 21960, "tags": "python, mergesort" }
Getting HRTF from 3-D scan + acoustic simulation
Question: This question follows from Modelling propagation of sound wave by particle simulation This last fortnight I have started experimenting with binaural audio; it really blows everything else out of the water. The main reason it hasn't taken off, I think, is that every individual needs to get their own HRTF calculated, which consists of sitting in a soundproof room with microphones in your ear canals, while something makes popping noises at thousands of locations around you. I'm only aware of one service provider that may calculate your HRTF: http://www.physiol.usyd.edu.au/~simonc/hrtf_rec.htm and they are in Australia! I'm wondering whether a more practical method may emerge, which would involve taking a 3-D scan of one's head (maybe using something like http://www.david-3d.com/) and shipping it off for a heavy dose of distributed computing, which would return HRTFs for that individual. I wonder if we may see a day where people have their own HRTF data stored in the cloud, and can enjoy binaural sound on their iPods. How far away is such a technology? And are there any other contenders for measuring HRTF? PS: could we possibly have some more tags, like "binaural" and "hrtf/hrir"? Answer: In the end I managed to do this myself, working in collaboration with http://ir-ltd.net/ to get the scan of my head, Blender to refine the mesh and position a cloud of microphone points, http://www.waveller.com/Waveller_Cloud/ to compute the frequency responses for these points, and finally some Python/NumPy scripting to convert these into impulse responses. We have collaborated on a joint paper which is being presented at the forthcoming AES conference. Please leave a message after July '14 if you would like me to link the paper. There is currently no link as it is still in draft form. I'm using the results of the simulation and I'm happy with them. EDIT: http://www.aes.org/e-lib/browse.cfm?elib=17365
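The last step mentioned, converting simulated frequency responses into impulse responses, is essentially an inverse Fourier transform (presumably done with NumPy's FFT routines in the author's scripts). A toy stdlib-only illustration of the idea:

```python
import cmath

def inverse_dft(spectrum):
    # Toy inverse DFT: a complex frequency response of length n becomes an
    # n-sample impulse response. Real pipelines use an FFT, not this O(n^2) sum.
    n = len(spectrum)
    return [sum(spectrum[k] * cmath.exp(2j * cmath.pi * k * t / n)
                for k in range(n)) / n
            for t in range(n)]

# A flat (all-ones) frequency response corresponds to a single unit impulse at t = 0.
h = inverse_dft([1, 1, 1, 1])
```

Here `h` comes out as `[1, 0, 0, 0]` up to rounding, the impulse response of an ideal "do nothing" filter.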
{ "domain": "dsp.stackexchange", "id": 1873, "tags": "audio, 3d, spatial, hrtf" }
Primitive Twitch.tv IRC Chat Bot
Question: So currently I have this basic little chat bot that can read commands and can timeout users if their message contains a banned word or phrase. I was wondering how I can improve on this bot to be able to !add "word" to the set of banned words and overall general flaw improvements. import string from Read import getUser, getMessage from Socket import openSocket, sendMessage from Initialize import joinRoom s = openSocket() joinRoom(s) readbuffer = "" banned_set = {"badword1", "badword2"} while True: readbuffer = readbuffer + s.recv(1024) temp = string.split(readbuffer, "\n") readbuffer = temp.pop() for line in temp: print(line) if "PING" in line: s.send(line.replace("PING", "PONG")) break user = getUser(line) message = getMessage(line) print user + " typed :" + message if not banned_set.isdisjoint(message.lower().split()): sendMessage(s, "/timeout " + user) break if "!guitars" in message: sendMessage(s, "Ibanez RG920QM Premium") break Answer: In addition to the issues @zondo has pointed out (i.e. PEP 8, some better operators, and the string features), I would also like to point out a few things. 1) Variable names Variables names such as temp are to be avoided. A much better name for this variable would be something like lines, messages, stack, messageStack^, etc. ^ Note: non PEP 8 camelCasing used to be consistent with existing code as posted. Obviously you would make this message_stack when fixing that issue. 2) Don't PONG everybody! In your code, it should be noted, that lines 17 - 19 inclusive (shown below for brevity) introduce some (probably?) undesired behaviour... if "PING" in line: s.send(line.replace("PING", "PONG")) break Consider that a user in the chat says "PING". Your bot will replace it with PONG and send the message back to the room. This would be particularly bad given that this if-statement occurs before the banned words checking code (and break's out of the loop). 
Users can now use bad words to their heart's content, provided they include the word "PING" (in uppercase) in their message! Furthermore, the bot will repeat these bad words back to the room!! (This is how security bugs get created) Note, if you do end up implementing an !add command to insert items into banned_set, PLEASE ensure you have successfully protected your adding code from injection! 3) Decide your case-consistency and stick with it. On line 23 you include a call to message.lower() (the result of which is not stored anywhere). Then on line 26 your compare message to a lower-case command string ("!guitars"). Do you want "!Guitars" to work just like "!guitars"? If so, you may want to make message lowercase before you split it (as you're already doing for the bad-words check). Furthermore, with your current logic the message "I've just added the !guitars command to my bot" will trigger the same response as just saying "!guitars". This is because your current logic (using the in operator,) disregards the position of the command string within the message.
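On the proposed !add command, here is a hypothetical guarded handler (the function name, the moderator check, and the single-token rule are assumptions, not part of the original bot) that only accepts one alphanumeric word, which addresses the injection concern above:

```python
banned_set = {"badword1", "badword2"}

def handle_add(user, message, moderators):
    # Hypothetical !add handler: only moderators may add words, and only a
    # single alphanumeric token is accepted, so chat input cannot inject
    # commands or whitespace-separated payloads into the banned set.
    if user not in moderators:
        return False
    parts = message.split()
    if len(parts) == 2 and parts[0] == "!add" and parts[1].isalnum():
        banned_set.add(parts[1].lower())
        return True
    return False
```

The `isalnum()` check is deliberately strict; relaxing it would need a careful review of what the timeout command downstream does with the stored word.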
{ "domain": "codereview.stackexchange", "id": 19327, "tags": "python, security, chat" }
Names of IBM Q backends
Question: IBM Q backends have many different names; see for example this link. There are, for example, processors called Melbourne, Tokyo, Armonk etc. I am curious where these names come from. For example, I know that IBM's headquarters is in Armonk, NY. But what about the others? Is there any special logic behind naming IBM processors? Answer: The documentation states that "All quantum systems are given a city name, e.g., ibmq_johannesburg. This name does not indicate where the actual quantum system is hosted." https://quantum-computing.ibm.com/docs/cloud/backends/configuration Some cities (e.g., Yorktown) host IBM Research centers.
{ "domain": "quantumcomputing.stackexchange", "id": 1620, "tags": "ibm-q-experience, history" }
Algorithm that receives a dictionary, converts it to a GET string, and is optimized for big data
Question: I found this question online as an example from a technical interview and it seems to be a flawed question in many ways. It made me curious how I would answer it. So, if you were on a technical Python interview and asked to do the following: Write an algorithm that receives a dictionary, converts it to a GET string, and is optimized for big data. Which option would you consider the best answer? Any other code-related comments are welcome. Common: import requests base_url = "https://api.github.com" data = {'per_page': 10} node = 'users/arctelix/repos' Option 1: My first thought was just to answer the question in the simplest form and use pagination to control the size of the data returned. def get_query_str(node, data=None): # base query query_str = "%s/%s" % (base_url, node) # build query params dict query_params = "&".join(["%s=%s" % (k,str(v)) for k, v in data.items()]) if query_params: query_str += "?%s" % query_params return query_str print("\n--Option 1--\n") url = get_query_str(node, data) print("url = %s" % url) Option 2: Well, that's not really optimized for big data, and the requests library will convert a dict to params for me. Secondly, a generator would be a great way to keep memory in check with very large data sets. def get_resource(node, data=None): url = "%s/%s" % (base_url, node) print("geting resource : %s %s" % (url, data)) resp = requests.get(url, params=data) json = resp.json() yield json print("\n--Option 2--\n") results = get_resource(node, data) for r in results: print(r) Option 3: Just in case the interviewer was really looking to see if I knew how join() and a list comprehension could be used to convert a dictionary to a string of query parameters. Let's put it all together and use a generator for not only the pages, but the objects as well. get_query_str is totally unnecessary, but again the task was to write something that returned a "GET string". 
class Github: base_url = "https://api.github.com" def get_query_str(self, node, data=None): # base query query_str = "%s/%s" % (self.base_url, node) # build query params dict query_params = "&".join(["%s=%s" % (k,str(v)) for k, v in data.items()]) if query_params: query_str += "?%s" % query_params return query_str def get(self, node, data=None): data = data or {} data['per_page'] = data.get('per_page', 50) page = range(0,data['per_page']) p=0 while len(page) == data['per_page']: data['page'] = p query = self.get_query_str(node, data) page = list(self.req_resource(query)) p += 1 yield page def req_resource(self, query): print("geting resource : %s" % query) r = requests.get(query) j = r.json() yield j gh = Github() pages = gh.get(node, data) print("\n--Option 3--\n") for page in pages: for repo in page: print("repo=%s" % repo) Answer: There are a bunch of things that are left unsaid or implicit in the question, so I'm going to assume that the optimized for big data part is about the GitHub API response. So I'd go with the third version. But first, some general advice: Document your code. Docstrings are missing all around your code. You should describe what each part of your API is doing or no-one will make the effort to figure it out and use it. Don't use %, sprintf-like formatting. These are things of the past and have been superseded by the str.format function. You may also want to try newer features such as formatted string literals (or f-strings) of Python 3.6: query_str = f'{self.base_url}/{node}'. You should use a generator expression rather than a list comprehension in your '&'.joins as you will discard the list anyway. It will save you some memory management. Just remove the brackets and you're good to go. You shouldn't use f"{k}={v}" for k, v in data.items(): what if a key or a value contains a '&' or an '='? You should encode the values in your dictionary before joining them. 
`urllib.parse.urlencode` (which is called by requests for you) is your friend. Now about handling the response:

- `page = list(self.req_resource(query))` defeats the very purpose of having a generator in the first place. Consider using `yield from self.req_resource(query)` instead.
- Pagination of the GitHub API should be handled using the Link header instead of manually incrementing the page number. Use the headers dictionary on your response to easily get them.
- Consider using the threading module to fetch the next page of data while you are processing the current one.
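As an illustrative sketch of the advice above (f-strings plus `urllib.parse.urlencode` for safe escaping); the endpoint and parameters are just the question's example values, not a definitive rewrite:

```python
from urllib.parse import urlencode

base_url = "https://api.github.com"

def get_query_str(node, data=None):
    # urlencode escapes '&', '=', spaces, etc. inside keys and values
    query_str = f"{base_url}/{node}"
    if data:
        query_str += "?" + urlencode(data)
    return query_str

print(get_query_str("users/arctelix/repos", {"per_page": 10}))
print(get_query_str("search", {"q": "a&b"}))  # the '&' in the value is escaped as %26
```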
{ "domain": "codereview.stackexchange", "id": 22525, "tags": "python, interview-questions, comparative-review" }
LIGO interferometer vs. holographic interferometer, is there a difference?
Question: I understand they are used for different purposes but does anyone know why you couldn't use the LIGO in place of the holographic interferometer? The Holographic interferometer was used to test for a holographic universe by trying to measure "noise" in the fabric of space time, but since a laser was used I assume it does this by measuring displacement of the laser beams which is essentially what LIGO does to measure the distortion of space as it flexes and relaxes. By the way, the test with the Holometer was done at Fermilab in 2015 by Hogan and the results did not support the holographic theory of the universe. Thank you. (I was not sure of which team to ask this.) Answer: The Fermilab holometer consists of two Michelson interferometers (with ~40 meter long arms) sitting right next to each other. The idea is that there may be a fundamental jiggling of space-time (not necessarily gravitational waves) that will move the two splitter mirrors in the same direction. This would then cause a correlated change in the fringe patterns of the two interferometers. In a Dec 2015 paper arxiv the measured correlation function is shown between 0-6 MHz. The data above 1 MHz (above environmental influences) is used to rule out a particular model of Planck scale space-time jiggling. At some level they must also be able to set a limit on high frequency gravitational waves causing a simultaneous change in output from the two interferometers. The difference between the LIGO interferometers and the holometer is that the Hanford, Washington and Livingston, Louisiana splitter mirrors are 3000 km apart while the two splitter mirrors at Fermilab are right next to each other. Also, the LIGO arms are 4000 meters long versus 40 meters for the holometer. Presumably, the output of the two LIGO interferometers could be cross-correlated to study the simultaneous jiggling of splitter mirrors that are 3000 km apart.
Because of the longer arms, LIGO would do the correlation at lower frequencies than Fermilab. Are there any hypothesized Planck scale fluctuations that might cause simultaneous jiggling in mirrors 3000 km apart? In fact, LIGO has seen a simultaneous (shifted by 7 msec) change in fringe patterns of the two interferometers (e.g., GW150914). A correlation function (shifted by 7 msec) would show power in the 30-150 Hz region. LIGO has interpreted this signal as a gravitational wave.
{ "domain": "physics.stackexchange", "id": 33344, "tags": "classical-mechanics, experimental-physics, interferometry" }
Why do predatory mites have to be introduced multiple times?
Question: I'm combating spider mite infestation using either Phytoseiulus persimilis or Amblyseius californicus. After extensive study of the literature, I'm still unsure why the producers of these predatory mites suggest that they have to be introduced several times (2-3 times depending on manufacturer). I know that adult predatory mites are very agile and scout for new prey. Phytoseiulus is also known to wipe out spider mite populations. So as long as spider mites are present, why should one introduce them at intervals rather than just introduce a large number of predatory mites at the beginning? Answer: You are right that ideally it should be enough to apply them once, but predatory mites seem to be quite sensitive: If it gets too cold or too hot or the moisture is too low, they might die. I applied several rounds of predatory mites in non-optimal conditions without any success. They just kept disappearing without any effect whatsoever. Funnily, some small type of Heteroptera came to save my plants. (from nature without my doing) Next time I would try some Chryson instead.
{ "domain": "biology.stackexchange", "id": 11145, "tags": "ecology, predation" }
Stata-style replace in Python
Question: In Stata, I can perform a conditional replace using the following code:

```
replace target_var = new_value if condition_var1 == x & condition_var2 == y
```

What's the most pythonic way to reproduce the above on a pandas dataframe? Bonus points if I can throw the new values and conditions into a dictionary to loop over. To add a bit more context, I'm trying to clean some geographic data, so I'll have a lot of lines like

```
replace county_name = new_name_1 if district == X_1 and city == Y_1
....
replace county_name = new_name_N if district == X_N and city == Y_N
```

What I've found so far: pd.replace, which lets me do stuff like the following, but doesn't seem to accept logical conditions:

```python
replacements = {
    1: 'Male',
    2: 'Female',
    0: 'Not Recorded'
}
df['sex'].replace(replacements, inplace=True)
```

Answer:

```python
df.where(condition, replacement, inplace=True)
```

Condition is assumed to be a boolean Series/NumPy array. Check out the where documentation - here is an example.
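A small sketch of the dictionary-driven loop the question asks for, assuming pandas is available and using boolean `.loc` indexing (the district/city values are the question's placeholders). One caveat worth noting: `DataFrame.where(cond, other)` keeps values where the condition is True and replaces them where it is False, so the closest analogue of Stata's `replace ... if` is `df.mask(cond, value)` or the `.loc` assignment below.

```python
import pandas as pd

df = pd.DataFrame({
    'district':    ['X_1', 'X_1', 'X_2'],
    'city':        ['Y_1', 'Y_2', 'Y_2'],
    'county_name': ['old', 'old', 'old'],
})

# (district, city) -> new county name, looped over like the Stata lines
rules = {('X_1', 'Y_1'): 'new_name_1',
         ('X_2', 'Y_2'): 'new_name_2'}

for (district, city), new_name in rules.items():
    cond = (df['district'] == district) & (df['city'] == city)
    df.loc[cond, 'county_name'] = new_name

print(df['county_name'].tolist())  # ['new_name_1', 'old', 'new_name_2']
```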
{ "domain": "datascience.stackexchange", "id": 3226, "tags": "python, pandas, stata" }
Are quantum simulators like Microsoft Q# actually using quantum mechanics in their chips?
Question: Unlike Google's Bristlecone or IBM's Qbit computer, do simulators like Q# or Alibaba really use quantum mechanics anywhere in their physical chips? Are they just defining properties using a classical computer and trying to achieve quantum simulations? Answer: There is a distinction between what you use to write a program (the SDK), and what you use to run it (the backend). The SDK can be either a graphical interface, like the IBM Q Experience or the CAS-Alibaba Quantum Computing Laboratory. It could also be a way of writing programs, like Q#, QISKit, Forest, Cirq, ProjectQ, etc. The backend can either be a simulator that runs on a standard computer, or an actual quantum device. Simulators use our knowledge of quantum theory to construct the simulation program, but no actual quantum computing happens. It is just the standard chips of your own computer, or of a supercomputer they let you use, running standard classical programs. This approach is something we can do for small quantum programs, but the runtime will become unfeasibly long for large ones. So if you notice that your job takes longer and longer to run as you add more qubits, you know that it is being classically simulated rather than run on a real device. The only actual quantum devices that can be used are those by IBM, Rigetti and Alibaba. To write programs for these you can use the Q Experience, QISKit or ProjectQ for the IBM devices, Rigetti's Forest for their devices, or the Alibaba graphical interface for their device. Microsoft are making hardware, and they hope that it will one day be used as a backend in Q#. But they have not yet gotten a single qubit, so we might have to wait a while. Until then it will be only simulators that can be used (or other companies' hardware).
{ "domain": "quantumcomputing.stackexchange", "id": 324, "tags": "experimental-realization, simulation" }
Derivation of stress-energy tensor in curved space-time
Question: I have a problem calculating the components of the stress-energy tensor in general relativity. I've learned that if we have an action of the form: $$ S=\int (R+\mathcal L_{m})\sqrt{-g}\,d^4x $$ then we can find the S-E tensor by varying the matter term with respect to either $g_{\mu \nu}$ or $g^{\mu \nu}$. In fact, if we vary with respect to $g^{\mu \nu}$ then we can write it as: $$ T_{\mu \nu} \sim \frac{1}{\sqrt{-g}}\frac{\delta (\mathcal L_{m}\sqrt{-g})}{\delta g^{\mu \nu}} $$ Here maybe I neglected some minus signs, which aren't important for my question. And also, by variation with respect to $g_{\mu \nu}$ we get: $$ T^{\mu \nu} \sim \frac{1}{\sqrt{-g}}\frac{\delta (\mathcal L_{m}\sqrt{-g})}{\delta g_{\mu \nu}} $$ And now my question: can we prove that the (co/contra)variant S-E tensors which we defined above are related to each other by the usual raising-indices method, i.e.: $$ T_{\mu \nu}=g_{\mu \alpha}g_{\nu \beta}T^{\alpha \beta} $$ for an arbitrary Lagrangian density, which may be a functional of the metric or its derivatives?
{ "domain": "physics.stackexchange", "id": 37707, "tags": "general-relativity" }
Gradient in cylindrical coordinate using covariant derivative
Question: I'm reading a little pdf book as an introduction to tensor analysis ("Quick introduction to tensor analysis", by R. A. Sharipov). I've reached the last section, where it is explained how it is possible to differentiate a tensor field in curvilinear coordinates. The author derives the formula for the covariant derivative for a general tensor: $$ \nabla_p X^{i_1, \cdots, i_r}_{j_1, \cdots, j_s} = {{\partial X^{i_1, \cdots, i_r}_{j_1, \cdots, j_s}} \over {\partial y^p}} + \sum_\alpha^r \sum_{m_\alpha} \Gamma^{i_\alpha}_{pm_\alpha}X^{i_1, \cdots, m_\alpha, \cdots, i_r}_{j_1, \cdots, j_s} - \sum_\alpha^s \sum_{n_\alpha} \Gamma^{n_\alpha}_{pj_\alpha}X^{i_1, \cdots, i_r}_{j_1, \cdots, n_\alpha, \cdots, j_s} $$ I then used the formula (which is explained and derived inside the article) for the Christoffel symbol: $$ \Gamma^k_{ij} = {{\partial y^k} \over {\partial x^q}} {{\partial^2 x^q} \over {\partial y^i \partial y^j}} $$ to calculate the Christoffel symbols for cylindrical coordinates. The author leaves it as an exercise to the reader to derive the expression of the gradient of a function $f$ in cylindrical coordinates starting from the covariant derivative. I've tried to do what I was asked; this is my attempt: $$ \nabla f = (\nabla_\mu f) \hat{e}^\mu $$ which I've expanded into: $$ \nabla f = \nabla_r f \hat{r} + \nabla_\varphi f \hat{\varphi} + \nabla_h f \hat{h} $$ where $r = \sqrt{(x^2 + y^2)}$, $\varphi = \tan^{-1} {y \over x}$, $h = z$. Then I used the linearity of the derivation operation and used $f^r = f\hat{r}$, $f^\varphi = f\hat{\varphi}$, $f^h = f\hat{h}$.
Hence the previous expansion can be calculated as: $$ \nabla f = (\partial_r f^r + \Gamma^r_{rr} f^r + \Gamma^r_{r\varphi} f^\varphi + \Gamma^r_{rh} f^h) + (\partial_\varphi f^\varphi + \Gamma^\varphi_{\varphi r} f^r + \Gamma^\varphi_{\varphi \varphi} f^\varphi + \Gamma^\varphi_{\varphi h} f^h) + (\partial_h f^h + \Gamma^h_{h r} f^r + \Gamma^h_{h \varphi} f^\varphi + \Gamma^h_{h h} f^h)$$ where $\Gamma^r_{rr} = \Gamma^r_{r \varphi} = \Gamma^r_{rh} = \Gamma^\varphi_{\varphi r} = \Gamma^\varphi_{\varphi h} = \Gamma^h_{h r} = \Gamma^h_{h \varphi} = \Gamma^h_{hh} = 0$ and $\Gamma^\varphi_{\varphi \varphi} = {1 \over r}$. Hence: $$ \nabla f = \partial_r f^r + \left (\partial_\varphi f^\varphi + {1 \over r} f^\varphi \right ) + \partial_h f^h $$ But from here I don't know how I should go on, since the correct expression for the gradient in cylindrical coordinates is: $$ \nabla f = \partial_r f \hat{r} + {1 \over r} \partial_\varphi f \hat{\varphi} + \partial_h f \hat{h} $$ (which I've taken from wikipedia). Any advice on how I should go on to derive the correct gradient formula? P.S. Excuse my poor English, I'm still practising it. Anyway, thanks in advance for your answer. Answer: The $\nabla$ in differential geometry is NOT the same $\nabla$ that you learn about in your vector calculus courses. In a typical vector calculus course, when one considers a function $f:\Bbb{R}^n\to\Bbb{R}$ and introduces $\nabla f$ as the gradient vector field, this is what I will henceforth refer to as $\text{grad}(f)$. This is a vector field, i.e. a tensor field of type $(1,0)$. In differential geometry, $\nabla f$ is a $(0,1)$ tensor field, i.e. a covector field. By definition, \begin{align} \nabla f&:= df=\sum_{i=1}^n\frac{\partial f}{\partial x^i}\,dx^i\tag{$*$} \end{align} Even from your first equation, you can see that because $f$ is a smooth function, it is a $(0,0)$ tensor field, so that $r=s=0$. So, if $r$ and $s$ are $0$, there shouldn't even be any $\Gamma$ symbols.
Just look at the formula you wrote: \begin{align} \nabla_{p}f&=\frac{\partial f}{\partial y^p} \end{align} (slightly sloppy notation, but this is exactly what $(*)$ says). From here, if you want to recover the familiar vector-calculus formula, then it's not the $\Gamma$'s which matter, but rather the metric tensor itself, for which you need to know how one can convert between vectors and covectors using the musical isomorphism. By definition, if $g$ refers to the metric tensor on our manifold then \begin{align} \text{grad}(f)&:= g^{\sharp}(df)\\ &=g^{\sharp}\left(\sum_{i=1}^n\frac{\partial f}{\partial x^i}\,dx^i\right)\\ &=\sum_{i=1}^n\frac{\partial f}{\partial x^i}g^{\sharp}(dx^i)\\ &=\sum_{i,j=1}^n\frac{\partial f}{\partial x^i}g^{ij}\frac{\partial }{\partial x^j}\tag{$**$} \end{align} Here, $g_{ij}:=g\left(\frac{\partial}{\partial x^i},\frac{\partial}{\partial x^j}\right)$, and $(g^{ij})$ is the inverse matrix of $(g_{ij})$. Now, $\frac{\partial}{\partial x^j}$ is the $j^{th}$ coordinate basis-vector, but it is not necessarily normalized. If we let $\mathbf{e}_j$ denote the normalized version, then the relationship is that \begin{align} \frac{\partial}{\partial x^j}&=\left\|\frac{\partial}{\partial x^j}\right\|\cdot \mathbf{e}_j =\sqrt{g_{jj}}\mathbf{e}_j\tag{$***$} \end{align} (i.e the length of a vector is the square root of its inner/dot-product with itself). Plugging $(***)$ into $(**)$ yields \begin{align} \text{grad}(f)&=\sum_{i,j=1}^ng^{ij}\sqrt{g_{jj}}\frac{\partial f}{\partial x^i}\,\mathbf{e}_j. \end{align} This is the general formula for the gradient vector field of a smooth function on any (Pseudo)-Riemannian manifold $(M,g)$. 
In the case where $M=\Bbb{R}^3$ and $g$ is the standard metric tensor field on $\Bbb{R}^3$, the components relative to the cylindrical coordinates are \begin{align} [g_{ij}]&= \begin{pmatrix} g_{rr}& g_{r\phi}&g_{rz}\\ g_{\phi r}&g_{\phi\phi}&g_{\phi z}\\ g_{zr}&g_{z\phi}&g_{zz} \end{pmatrix} = \begin{pmatrix} 1&0&0\\ 0& r^2 & 0\\ 0&0&1 \end{pmatrix} \end{align} Since the matrix is diagonal, the inverse matrix is simply the matrix whose entries are reciprocals. So, plugging this into the above expression, it actually simplifies a lot: \begin{align} \text{grad}(f)&=\sum_{i,j=1}^ng^{ij}\sqrt{g_{jj}}\frac{\partial f}{\partial x^i}\mathbf{e}_j\\ &=\sum_{i=1}^n\frac{1}{g_{ii}}\sqrt{g_{ii}}\frac{\partial f}{\partial x^i}\,\mathbf{e}_i\tag{due to diagonal matrix}\\ &=\sum_{i=1}^n\frac{1}{\sqrt{g_{ii}}}\frac{\partial f}{\partial x^i}\mathbf{e}_i \end{align} For the specific case of cylindrical coordinates, we thus get \begin{align} \text{grad}(f)&=\frac{1}{\sqrt{g_{rr}}}\frac{\partial f}{\partial r}\mathbf{e}_r+ \frac{1}{\sqrt{g_{\phi\phi}}}\frac{\partial f}{\partial \phi}\mathbf{e}_{\phi}+ \frac{1}{\sqrt{g_{zz}}}\frac{\partial f}{\partial z}\mathbf{e}_z\\ &=\frac{\partial f}{\partial r}\mathbf{e}_r + \frac{1}{r}\frac{\partial f}{\partial \phi}\mathbf{e}_{\phi}+\frac{\partial f}{\partial z}\mathbf{e}_z, \end{align} which is precisely the formula you quote. So, just to reiterate, the $\frac{1}{r}$ comes from the metric tensor itself, not the $\Gamma$ (the $\Gamma$'s only appear if you're covariantly differentiating tensor fields of rank $\geq 1$, i.e. $r+s\geq 1$). Take a look at this math answer of mine for a similar calculation for polar coordinates in the plane (it's almost exactly the same calculation), and see the various links there. Comments: You write ....
which I've expanded into \begin{align} \nabla f&=(\nabla_rf)\hat{r}+(\nabla_{\phi}f)\hat{\phi}+(\nabla_hf)\hat{h} \end{align} Well, this is just wrong; on the LHS you have a covector field (i.e a tensor field of type $(0,1)$) while on the RHS you have a vector field (a tensor field of type $(1,0)$) so of course they cannot be equal. This is also not the definition of $\nabla f$ as given in your book (which is the same as $(*)$ which I wrote above). Actually, even the $\hat{e}^{\mu}$ notation is terrible, because the $\hat{}$ somehow suggests you're talking about a unit covector field, which is just wrong. The correct equation is $(*)$ which I wrote above, and surely that equation is super easy to remember.
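The final formula can also be sanity-checked numerically: compute $(\partial_r f,\ \tfrac{1}{r}\partial_\varphi f,\ \partial_z f)$ by finite differences, rotate $(\mathbf{e}_r, \mathbf{e}_\varphi, \mathbf{e}_z)$ back to Cartesian axes, and compare with the Cartesian gradient. The test function below is an arbitrary choice; nothing depends on it:

```python
import math

def f_cart(x, y, z):
    # arbitrary smooth test function
    return x**2 * y + math.sin(y) * z

def f_cyl(r, phi, z):
    return f_cart(r * math.cos(phi), r * math.sin(phi), z)

def d(g, args, i, h=1e-6):
    """Central finite difference of g with respect to argument i."""
    a_plus, a_minus = list(args), list(args)
    a_plus[i] += h
    a_minus[i] -= h
    return (g(*a_plus) - g(*a_minus)) / (2 * h)

x, y, z = 1.0, 2.0, 0.5
r, phi = math.hypot(x, y), math.atan2(y, x)

# cylindrical gradient components: (df/dr, (1/r) df/dphi, df/dz)
g_r = d(f_cyl, (r, phi, z), 0)
g_phi = d(f_cyl, (r, phi, z), 1) / r
g_z = d(f_cyl, (r, phi, z), 2)

# rotate (e_r, e_phi, e_z) back to Cartesian axes
gx = g_r * math.cos(phi) - g_phi * math.sin(phi)
gy = g_r * math.sin(phi) + g_phi * math.cos(phi)
gz = g_z

# compare with the Cartesian gradient of f
for num, ref in [(gx, d(f_cart, (x, y, z), 0)),
                 (gy, d(f_cart, (x, y, z), 1)),
                 (gz, d(f_cart, (x, y, z), 2))]:
    assert abs(num - ref) < 1e-5
print("cylindrical formula matches the Cartesian gradient")
```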
{ "domain": "physics.stackexchange", "id": 80592, "tags": "coordinate-systems, tensor-calculus, differentiation" }
tensorflow beginner demo, is that possible to train a int-num counter?
Question: I'm new to tensorflow and deep-learning and wish to get a general concept from a beginner's demo, i.e. training an (int-)number counter to indicate the most repeated number in a set (if the most repeated number is not unique, the smallest one is chosen). E.g. if seed = [0,1,1,1,2,7,5,3] (int-num-set as input), then most = 1 (the most repeated num here is 1, which is repeated 3 times); if seed = [3,3,6,5,2,2,4,1], then most = 2 (both 2 and 3 are repeated most/twice, so the smaller 2 is the result). Here I didn't use the widely used demos like an image classifier or the MNIST data-set, for a more customized perspective and an easier way to get a data-set. So if this is not an appropriate problem for deep-learning, please help me know it. The following is my code and apparently the result is not as expected; may I have some advice, like: is this kind of problem suitable for deep-learning to solve? Is the network-struct appropriate for this problem? Is the input/output data (or data-type) right for the network?
```python
import random
import numpy as np

para_col = 16   # each (num-)set contains 16 int-nums
para_row = 500  # the data-set contains 500 num-sets for training
para_epo = 100  # train 100 epochs

# initialize the size of the data-set for training
x_train = np.zeros([para_row, para_col], dtype=int)
y_train = np.zeros([para_row, 1], dtype=int)

# generate the data-set by random
for row in range(para_row):
    seed = []
    for col in range(para_col):
        seed.append(random.randint(0, 9))
    # most repeated num in seed (set of 16 int-nums between 0~9)
    most = max(set(seed), key=seed.count)
    # fill in data for the training set
    x_train[row] = np.array(seed, dtype=int)
    y_train[row] = most
    # print(str(most) + " @ " + str(seed))

# define and train the network
import tensorflow as tf

# a simple network according to some tutorials
model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(para_col, 1)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# train the network
model.fit(x_train, y_train, epochs=para_epo)

# test the network
seed_test = [5,1,2,3,4,5,6,7,8,5,5,1,2,3,4,5]
# seed_test = [1,1,1,3,4,5,6,7,8,9,0,1,2,3,4,5]
# seed_test = [9,0,1,9,4,5,6,7,8,9,0,1,2,3,4,5]
x_test = np.zeros([1, para_col], dtype=int)
x_test[0] = np.array(seed_test, dtype=int)
most_test = model.predict_on_batch(x_test)
print(seed_test)
for o in range(10):
    print(str(o) + ": " + str(most_test[0][o] * 100))
```

The training result looks converged, according to ...
```
Epoch 97/100
16/16 [==============================] - 0s 982us/step - loss: 0.1100 - accuracy: 0.9900
Epoch 98/100
16/16 [==============================] - 0s 1ms/step - loss: 0.1139 - accuracy: 0.9900
Epoch 99/100
16/16 [==============================] - 0s 967us/step - loss: 0.1017 - accuracy: 0.9860
Epoch 100/100
16/16 [==============================] - 0s 862us/step - loss: 0.1082 - accuracy: 0.9840
```

but the printed output looks unreasonable and random; the following is a result after one of the trainings:

```
[5, 1, 2, 3, 4, 5, 6, 7, 8, 5, 5, 1, 2, 3, 4, 5]
0: 0.004467500184546225
1: 0.2172523643821478
2: 2.9886092990636826
3: 1.031165011227131
4: 69.71694827079773
5: 12.506482005119324
6: 1.0543939657509327
7: 0.2930430928245187
8: 8.086799830198288
9: 4.100832715630531
```

Actually 5 is the right answer (repeated five times, the most), but the printed output is indicating 4 is the answer (at a probability of 69.7%)?
{ "domain": "datascience.stackexchange", "id": 10958, "tags": "deep-learning, tensorflow" }
Is it possible to calculate some kind of friction substitute for a fast moving object sliding on water?
Question: An object is accelerated on land, parallel to the surface of a tank of water. The object is then released onto the perfectly still water, making it slide on top of the water. Is it possible to calculate some kind of substitute for friction between these two surfaces, or is the problem too complex? Answer: There are two sources of drag when an object travels through a fluid like air or water. The first is viscosity. To travel through a fluid, you must stir it. Fluid elements near the object travel near the speed of the object. Fluid elements farther away travel near the background speed of the fluid. When one fluid element slides past another, there is friction (except in superfluids). The object must exert a force on nearby fluid elements to make them overcome this friction. The fluid elements exert a reaction force back on the object, slowing the object. This force is called viscosity.
Water pushes back upward. There are different ways this can happen. Sometimes the object skips like a stone. The reaction force is enough the push it entirely out of the water. It leaves expanding rings at places where it touches down and is relaunched. Sometimes the object slides over the water like a water skier. In this case, the reaction force is big enough to keep the object from sinking. It pushes a furrow in the water. Sometimes the object floats like a boat. Water pressure is greater than the weight of the object, and this hold the object up. The object leaves a wake. In all three cases, the object pushes water out of the way, and the inertial reaction force slows the object down.
{ "domain": "physics.stackexchange", "id": 90360, "tags": "fluid-dynamics, friction, water" }
Limit on velocity in Minkowski Spacetime geometry
Question: Let A be a rocket moving with velocity v. Then the slope of its worldline in a spacetime diagram is given by c/v. Since it is a slope, c/v = tan(theta) for some theta > 45 and theta < 90. Does this impose a mathematical limit on v? If so, what is it? As in, we know tan(89.9999999999) = 572957795131, and c = 299792458. Using tan(89.9999999999) as our limit of precision, the smallest v we can use is: c/v = tan(89.9999999999) => 299792458 / v = 572957795131 Therefore, v ≈ 5.23 × 10^-4 m/s. What is the smallest non-zero value of v? Is there a limit on this?
{ "domain": "physics.stackexchange", "id": 7645, "tags": "spacetime, geometry, special-relativity" }
Zero Ohmic resistance in superconductors is a little bit too enthusiastic?
Question: According to this article: Superconductors contain tiny tornadoes of supercurrent, called vortex filaments, that create resistance when they move. Does this mean that our description of zero Ohmic resistance in superconductors is a little bit too enthusiastic? Answer: I think that since superconductors were originally discovered because they exhibited electrical resistances indistinguishable from zero that zero electrical resistivity may have originally been the defining characteristic of a superconductor. But as more was learned about superconductors and the superconducting state it was realized that superconductors can exist in a mixed or vortex state consisting of normal-state vortices in a superconducting medium, and such a system can exhibit energy dissipation due to the movement of the vortices which results in an electrical resistance that is not strictly zero. So, yes, I guess you could say that the use of the term "superconductor" when these materials were first discovered could have been a bit "too enthusiastic" since it suggested that absolute zero resistance was an essential, defining characteristic of superconductors when it's not. Not an expert in superconductivity and maybe someone else will chime in, but from my perspective as an experimentalist I would say that an operational definition of superconductivity is a very low electrical resistance combined with the Meissner Effect (i.e., magnetic flux exclusion).
{ "domain": "physics.stackexchange", "id": 49778, "tags": "electromagnetism, superconductivity" }
Mass change during phase transition
Question: When you fill up a balloon about a quarter way with liquefied butane fuel and let it sit at room temperature it will turn into gas. But why does the gas weigh the same as the liquified butane? The liquefied butane liquid weighs just about as much as water. Answer: Assuming you fill the balloon only with liquid butane, the answer is very simple - the gas in the balloon at room temperature IS the liquid in the balloon initially. If you put some amount of liquid butane into the balloon and seal it at t=0, then allow it to warm until the liquid has evaporated, there is essentially no transport of other matter into or out of the balloon during that time. The only thing that has happened is that the initial charge of liquid has absorbed thermal energy from the surroundings and undergone a phase change from liquid to gas. Because there is no change in the amount of material in the balloon - there are the same number of butane molecules in the gas phase as there were initially in the liquid phase - there can be no change in mass. Liquid and gas phase contain the same number of molecules, at some fixed mass per molecule. Note that while the mass does not change, the density does.
{ "domain": "chemistry.stackexchange", "id": 6298, "tags": "everyday-chemistry, phase, evaporation" }
Derivation of angular momentum in cylindrical coordinates
Question: I tried to derive the formula for angular momentum ($\vec{l} = m\rho^2 \dot\phi \vec{e_z}$ in the case of motion restricted to the x-y plane) in cylindrical coordinates directly from the vector cross product $m (\vec{r} \times \dot{\vec{r}})$. Taking $\vec{r} = (\rho, \phi, z)$ and $\dot{\vec{r}} = (\dot \rho - \phi \dot\phi, \rho\dot\phi + \dot\phi, \dot z)$ as $d(\rho\vec{e_\rho})/dt = \dot\rho \vec{e_\rho} + \rho \dot\phi \vec{e_\phi}$ and $d(\phi\vec{e_\phi})/dt = \dot\phi \vec{e_\phi} - \phi \dot\phi \vec{e_\rho}$. However, when taking the cross product (and taking $z = 0$ as we are moving strictly in the x-y plane), I get $\vec{l} = m(\rho \phi^2 + \rho\dot\phi - \dot\rho\phi + \phi^2\dot\phi)\vec{e_z}$ and I am unsure how to deal with the other three terms. Is this actually the formula for angular momentum and is there just some initial assumption I am missing that gets rid of the last three terms, or have I misrepresented my vectors somehow? In the literature given to us, the derivation of the formula isn't given at all, although intuitively it clearly makes sense.
{ "domain": "physics.stackexchange", "id": 96038, "tags": "homework-and-exercises, rotational-dynamics, angular-momentum, coordinate-systems, vectors" }
Are galaxies really structured the way they look in pictures?
Question: Are real galaxies really structured the way they are in pictures online? I'm wondering this because the speed limit of the universe is light speed, which means what we see in the sky is delayed. Therefore, shouldn't galaxies look extremely distorted and not structured like what we see? Or are some clever tricks taken to make them look correct? Answer: Although a galaxy may recede from us at arbitrarily high velocities (even superluminally) because space expands, their rotation and motion through space happen at non-relativistic speeds, of the order of a few 100 km/s, or a few 1000 km/s at most. Hence, every part of a galaxy moves with roughly the same speed with respect to the observer, and the galaxy is thus not distorted. However, there is another effect that may distort the image of a galaxy, namely gravitational lensing: If you observe a distant galaxy lying behind a massive cluster of galaxies, then the huge mass of the cluster curves space in such a way as to make the light from the background galaxies take slightly different paths toward you. This distorts the look of the background galaxies (and may even cause them to appear at multiple locations on the sky). In the image of the cluster Abell S1063 below (from APOD), you see this effect. In fact, by measuring the "banana-shaped-ness" of the background galaxies, it is possible to calculate the mass of the foreground cluster; one of the ways to infer the presence of dark matter.
{ "domain": "physics.stackexchange", "id": 51746, "tags": "optics, visible-light, astrophysics, speed-of-light, galaxies" }
Classifying text documents using linear/incremental topics
Question: I'm attempting to classify text documents using a few different dimensions. I'm trying to create arbitrary topics to classify such as size and relevance, which are linear or gradual in nature. For example: size: tiny, small, medium, large, huge. relevance: bad, ok, good, excellent, awesome I am training the classifier by hand. For example, this document represents a 'small' thing, this other document is discussing a 'large' thing. When I try multi-label or multi-class SVM for this it does not work well and it also logically doesn't make sense. Which model should I use that would help me predict this linear type of data? I use scikit-learn presently with a tfidf vector of the words. Answer: If you want these output dimensions to be continuous, simply convert your size and relevance metrics to real-valued targets. Then you can perform regression instead of classification, using any of a variety of models. You could even attempt to train a multi target neural net to predict all of these outputs at once. Additionally, you might consider first using a topic model such as LDA as your feature space. Based on the values, it sounds like the "relevance" might be a variable best captured by techniques from sentiment analysis.
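A minimal sketch of the answer's first suggestion, mapping the ordinal labels to real-valued targets and fitting a regressor. Here it is a one-feature least-squares line in plain Python; the label-to-number scales and the toy training data are made up for illustration, and in practice you would regress on the tf-idf feature vectors with a scikit-learn regressor:

```python
# Hypothetical ordinal scales, mapped to real-valued regression targets
SIZE = {"tiny": 0.0, "small": 1.0, "medium": 2.0, "large": 3.0, "huge": 4.0}
RELEVANCE = {"bad": 0.0, "ok": 1.0, "good": 2.0, "excellent": 3.0, "awesome": 4.0}

def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b (single feature)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Toy feature: e.g. one tf-idf weight per document (made-up values)
features = [0.1, 0.4, 0.5, 0.8, 0.9]
labels = ["tiny", "small", "medium", "large", "huge"]
targets = [SIZE[label] for label in labels]

a, b = fit_line(features, targets)
predicted = a * 0.45 + b  # a fractional prediction between "small" and "medium"
```

The point is that the regressor's continuous output respects the ordering of the labels, which a multi-class SVM treats as unrelated categories.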
{ "domain": "datascience.stackexchange", "id": 433, "tags": "classification, scikit-learn" }
"Burger" lanes: What are they, where are they found, and what do they look like?
Question: What and where are they, and what do they look like? Do all transportation systems with roundabouts use them in some form? This is different from What's the purpose of a 'burger' lane in a roundabout?, which is very specific to why. This is an open floor to describe where and what they are, since not all transportation systems will have these. Answer: If your question is what they look like, then: figure: source hulldailymail or figure: source openstreet wiki They are also known as "through roundabouts". As to their name, I suppose that the name comes from the fact that it looks like a hamburger. I.e. the two buns are the green islands and the two lanes of the road are the cheese and bacon slices.
{ "domain": "engineering.stackexchange", "id": 4558, "tags": "civil-engineering, traffic-intersections" }
How to disable gravity only for a specific model in Gazebo?
Question: A similar question has been answered. http://answers.ros.org/question/65991/how-to-disable-gravity-of-a-model-in-the-world-only-disable-one-model/ However, this only works for the .sdf model files. It does not work for urdf. Kindly help. <link name="base_link" gravity="0 0 0"> <gravity>0</gravity> </link> <joint name="base_joint" type="fixed"> <origin xyz="0 0 0" rpy="0 0 0" /> <parent link="base_link" /> <child link="body_link" /> </joint> <link name="body_link"> <gravity>0</gravity> <inertial> <mass value="0.1" /> <origin xyz="0 0 0" /> <inertia ixx="1" ixy="0" ixz="0" iyy="1" iyz="0" izz="1" /> </inertial> <visual name="base_visual"> <origin xyz="0 0 0" rpy="0 0 0" /> <geometry name="pioneer_geom"> <mesh filename="package://rotors_description/meshes/simple_airplane1.dae" /> </geometry> </visual> <collision> <origin xyz="0 0 0" rpy="0 0 0" /> <geometry> <mesh filename="package://rotors_description/meshes/simple_airplane1.dae" /> </geometry> </collision> </link> Originally posted by webvenky on Gazebo Answers with karma: 23 on 2016-05-26 Post score: 0 Answer: It looks like your file is in the URDF format (you use origin instead of pose for example). <gravity> within a link is specified in SDF, but not in URDF. Luckily, the conversion from URDF to SDF is possible. In order to use SDF tags within your URDF link, use the <gazebo> tag, as explained in this tutorial. It should look something like this: <gazebo reference="base_link"> <gravity>0</gravity> </gazebo> Originally posted by chapulina with karma: 7504 on 2016-05-26 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by webvenky on 2016-05-26: Thanks a lot! It works. :-) Comment by hbaqueiro on 2018-11-05: Does it remain static in the world even if something collide with it? I was looking for something more like the tag. Comment by chapulina on 2018-11-05: No, it will not remain static, it will just not be affected by gravity. 
There's no way to set only one link to be static, that tag works only on the model level. You can however make a model static and connect it to a non-static model with a joint. Comment by BrumBrum on 2020-01-07: For me the tags did not work, this did work however: <gazebo reference="base_link"> <turnGravityOff>true</turnGravityOff> </gazebo> Comment by myboyhood on 2020-05-04: For me or can not stop the model from dropping down to the ground; in order to keep the model in the air, I fixed it to the world link <link name="world"/> <parent link="world"/> <child link="link1"/> </joint> Comment by val on 2020-08-07: I tried both and but none of them turned gravity off. When I check the generated sdf file it shows that there are two tags defining gravity. Adding the tag in the URDF file doesn't overwrite the default value. It just adds another tag. <gravity>0</gravity> <gravity>1</gravity> Does anyone know how I can overwrite the original tag? Comment by th123 on 2021-01-21: Hi val, have you found a solution for this problem by now? I assume I am facing the same issue. Comment by danzimmerman on 2021-02-17: Yep I'm hitting this now too (ROS Melodic + Gazebo9) Related to https://github.com/osrf/sdformat/issues/71 ? Comment by danzimmerman on 2021-02-17: @th123 <turnGravityOff>1</turnGravityOff> (case-sensitive) is working for me in ROS Melodic. <turngravityoff> all-lowercase doesn't work.
{ "domain": "robotics.stackexchange", "id": 3924, "tags": "gazebo" }
Equalization: using the spectral or the temporal signal?
Question: I'm trying to equalize a sound signal - using a JAVA program - and I'm using this process: 1/ Conversion of the temporal signal to a spectral signal, using an FFT 2/ Applying a coefficient to each frame of the spectral signal to equalize it 3/ Conversion of the modified spectral signal to a temporal signal 4/ Reading of that temporal signal If I'm applying that process to a "pure" signal (e.g. a 400 Hz sinusoidal signal generated "on the fly" by a Java class), it seems to work. But if I'm applying it to a "real" wav signal (a song, for example), the result is unusable. I hear a kind of "sliced" sound, even if all the equalization coefficients are equal to "1" (=no modification). So, to equalize a sound, is the process I describe above the right solution? Or shall I avoid that double conversion and apply a convolution product on the temporal signal? If I have to apply a convolution product, how do I do it? I have no clue about calculating it with "random" signals. Thank you for all your answers. Answer: If you are trying to make a proper filter, you would want to use FFT convolution, which is like OLA, but not the same. OLA is more of a synthesis technique for reducing noise between frames caused by incoherent phase mangling from directly manipulating frequency data. You also need to resolve the phase of the bins for your filter. You are only defining the magnitudes and phase at those discrete frequencies, not the frequencies in between, which can be radically different than what you have in your head. The easiest thing to do is to use linear phase to center the impulse of the filter in the window. This can be done by multiplying each bin magnitude by e^(jπk). If you are trying to realize an analog prototype, this is a bad way to do it. Your frequency bins are linearly spaced, but analog filters are logarithmic. The effect will be that your lower bands have a lower "Q" and vice versa. You can still do it, but it may not do what you wanted.
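A sketch of the FFT-convolution idea the answer recommends, written with a naive O(N²) DFT in plain Python for clarity (a real implementation would use a fast FFT library and overlap-add to process long signals frame by frame):

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform (illustrative, O(N^2))."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    """Inverse DFT, same naive form."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

def fft_convolve(x, h):
    """Linear convolution of signal x with filter impulse response h
    via spectrum multiplication.

    Zero-padding both to len(x) + len(h) - 1 avoids the circular
    wrap-around that produces audible artifacts when frames are
    filtered naively in the frequency domain.
    """
    N = len(x) + len(h) - 1
    X = dft(x + [0.0] * (N - len(x)))
    H = dft(h + [0.0] * (N - len(h)))
    return [v.real for v in idft([a * b for a, b in zip(X, H)])]

def direct_convolve(x, h):
    """Time-domain convolution, used here only to check the result."""
    y = [0.0] * (len(x) + len(h) - 1)
    for i, xi in enumerate(x):
        for j, hj in enumerate(h):
            y[i + j] += xi * hj
    return y
```

Both routes give the same output samples, which is the whole point: a well-formed frequency-domain filter is equivalent to a time-domain convolution.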
{ "domain": "dsp.stackexchange", "id": 6815, "tags": "signal-analysis, frequency-spectrum, convolution" }
Activated complex theory vs. consecutive reactions
Question: Activated complex theory tells us that due to the collision between the molecules of the reactants, they form a transition species before the product is formed, which is called the activated complex. On the other hand we have consecutive reactions, in which an intermediate product is also formed before forming the actual product we're interested in. My question is: Where is the difference between the activated complex and the intermediate product, since they are both formed before the actual product? Answer: The key difference is that transition states occur at a maximum of the potential energy curve for the reaction whereas intermediates occur at a local minimum. Take this example of the reaction profile for an $\ce{S_{N}1}$ reaction: You will see that the products and reactants occur at minima on the curve and the intermediate also occurs at a minimum, albeit a higher energy one. In between the minima are located the maxima where you find the transition states. Unless the activation energy is zero (see this question for rare examples) there will always be a transition state located between any two minima. Transition states are usually represented using dashed bonds to show bonds in the process of being broken or formed as opposed to being fully formed as in intermediates or products. Additionally the Hammond postulate says that the transition state will most resemble the stable species closest to it in energy, in this case the carbocation, and this can be used to help predict the structures of transition states. Transition states are very short-lived because they immediately 'roll downhill' on the potential energy curve to reach an intermediate or product. By contrast some intermediates are actually quite stable and can be isolated.
{ "domain": "chemistry.stackexchange", "id": 3339, "tags": "physical-chemistry, kinetics" }
How can I keep a smaller water reservoir's water level at half available when being fed from a larger reservoir?
Question: I'm trying to create my own ultrasonic humidifier. I ordered the misting part which works great but it only functions correctly in shallow water. So I'd like to feed from a large water reservoir to a smaller one. My question is how can I fill the smaller reservoir to a desired water level? Will I have to use a closing/opening valve or is there a simpler way? (I was thinking a small balloon hooked up to a pulley that opens and closes a latch much like a toilet but I am trying to avoid complexity.) Answer: I recently saw an auto pet waterer. The intent of the device is to keep the same water level as the pet drinks the water. This is accomplished with basically a bottle of water turned upside down and the top of it submerged under the water level. If the water level falls below the top of the water bottle, then air bubbles make their way up to the top of the bottle, exchanging air for water and keeping the level the same. This is a very simple solution, and this type of approach may be appropriate for what you're trying to do, but it differs from the other solutions proposed. There are some drawbacks: The pressure of the water reservoir has to adjust to accommodate its level The device can only make up for lost water - if you add water the level won't remain the same If you're building a humidifier I doubt the second point would be a problem. You're only going to be removing water, right? The first point may actually be more troublesome. For one, you refilling it isn't trivial. You need to actually close up the water reservoir, turn it right side up again, then fill it. If you just opened a plug, then it would all fall out and make a big mess.
{ "domain": "physics.stackexchange", "id": 5825, "tags": "fluid-dynamics, water, pressure" }
Why don't choir voices destructively interfere so that we can't hear them?
Question: Sound is propagated by waves. Waves can interfere. Suppose there are two tenors standing next to each other and each singing a continuous middle-C. Will it be the case that some people in the audience cannot hear them because of interference? Would it make a difference if they were two sopranos or two basses and singing a number of octaves higher or lower? How does this generalize to an array of n singers? Given a whole choir, to what extent are their voices less than simply additive because of this? Is it possible that, for some unfortunate member of the audience, the choir appears to be completely silent--if only for a moment? Answer: The main issue in the setting of an orchestra or choir is the fact that no two voices or instruments maintain exactly the same pitch for any length of time. If you have two pure sine wave sources that differ by just one Hertz, then the interference pattern between them will shift over time - in fact at any given point you will hear a cycle of constructive and destructive interference which we recognize as beats, but the exact time when each member of the audience will hear the greatest or least intensity will vary with their position. Next let's look at the angular distribution of signal. If two tenors are singing a D3 of 147 Hz (near the bottom of their range) the wavelength of the sound is 2 m: if they stand closer together than 1 m there will be no opportunity to create a 180 degree phase shift anywhere. If they sing near the top of their range, the pitch is closer to 600 Hz and the wavelength 0.5 m. But whatever interference pattern they generate, a tiny shift in frequency would be sufficient to move the pattern - so no stationary observer would experience a "silent" interference - even of the fundamental frequency. Enter vibrato: most singers and instruments deliberately modulate their frequency slightly - this makes the note sound more appealing and allows them to make micro corrections to the pitch.
It also makes the voice stand out more against a background of instruments and tends to allow it to project better (louder for less effort on the part of the singer). This is used by soloists but more rarely by good choirs - because in the choir you want to blend voices, not have them stand out. At any rate, the general concept here is incoherence: the different sources of sound in a choir or orchestra are incoherent, meaning that they do not maintain a fixed phase relationship over time. And this means they do not produce a stationary interference pattern. A side effect of interference is seen in the volume of a choir: if you add the amplitudes of two sound sources that are perfectly in phase, your amplitude doubles and the energy / intensity quadruples. A 32-man choir would be over 1000 times louder than a solo voice - and this would be achieved in part because the voices could only be heard "right in front" of the choir (perfectly coherent voices would act like a phased array). But since the voices are incoherent, there is no focusing, no amplification, and they can be heard everywhere. Note that incoherence is a function of phase and frequency - every note is a mix of frequencies, and although a steady note will in principle contain just a fundamental and its harmonics, their exact relationship is very complicated. Even if you took a single singer's voice, and put it into two speakers with a delay line feeding one of the speakers, I believe you would still not find interference because of the fluctuations in pitch over even a short time. Instead, your ear would perceive this as two people singing. And finally - because a voice (or an instrument) is such a complex mix of frequencies, there is in general no geometric arrangement of sources and receiver in which all frequencies would interfere destructively at the same time.
And the ear is such a complex instrument that it will actually "synthesize" missing components in a perceived note - leading to the strange phenomenon where for certain instruments, the perceived pitch corresponds to a frequency that is not present - as is the case with a bell, for example.
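The beat cycle described at the start of the answer can be checked numerically: two unit-amplitude tones at f and f+Δf sum to a carrier at the mean frequency multiplied by a slow envelope, sin(2π f t) + sin(2π (f+Δf) t) = 2 sin(2π (f+Δf/2) t) cos(π Δf t). A small sketch (the frequencies are arbitrary examples, not from the text):

```python
import math

def two_tone(t, f=440.0, df=1.0):
    """Sum of two unit tones one beat-frequency apart."""
    return math.sin(2 * math.pi * f * t) + math.sin(2 * math.pi * (f + df) * t)

def beat_form(t, f=440.0, df=1.0):
    """Equivalent product form: carrier at the mean frequency
    times a slow cos(pi * df * t) envelope - the audible 'beat'."""
    return 2 * math.sin(2 * math.pi * (f + df / 2) * t) * math.cos(math.pi * df * t)
```

The envelope completes one intensity cycle per second for Δf = 1 Hz, which is the shifting constructive/destructive cycle the answer describes.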
{ "domain": "physics.stackexchange", "id": 25966, "tags": "waves, acoustics, interference" }
rosrun executable not found error
Question: Hi, I am using the Hydro distribution of ROS. I get the following error; I have left no stone unturned but unfortunately could not correct it. How can I correct this error? Thank you in advance. **"$rosrun my_pcl_tutorial example input:=/narrow_stereo_textured/points2 [rosrun] Couldn't find executable named example below /home/esetron/catkin_ws/src/my_pcl_tutorial [rosrun] Found the following, but they're either not files, [rosrun] or not executable: [rosrun] /home/esetron/catkin_ws/src/my_pcl_tutorial/src/example"** Originally posted by hamdi on ROS Answers with karma: 73 on 2014-07-03 Post score: 0 Answer: I've usually seen users have this problem in a few cases: Your package isn't built - run catkin_make in your workspace You haven't added your executable to your CMakeLists.txt - follow the tutorial for adding your executable to your CMakeLists.txt Your executable is built, but it's in the wrong place - make sure you're calling catkin_package() before add_executable() in your CMakeLists.txt Originally posted by ahendrix with karma: 47576 on 2014-07-03 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by hamdi on 2014-07-04: thank you Ahendrix, I forgot to do the third suggestion.
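For the second and third points, the relevant part of a catkin CMakeLists.txt looks roughly like this (a sketch only - the target and source names match the question's package layout but are otherwise placeholders):

```cmake
# Order matters: catkin_package() must be called before add_executable()
catkin_package()

add_executable(example src/example.cpp)
target_link_libraries(example ${catkin_LIBRARIES})
```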
{ "domain": "robotics.stackexchange", "id": 18504, "tags": "rosrun" }
Why is the amu of chlorine-35 less than 35?
Question: My book says that a proton weighs 1.0073u, a neutron weighs 1.0087u, and an electron weighs 0.00055u. Now, why is the mass of chlorine-35 equal to 34.969? Are there not 17 protons, 18 neutrons, and 17 electrons? I calculated it and it sums to around 35.29. Where did I go wrong? Answer: You need to account for the energy released when nucleons and electrons come together and form a Cl-35 atom. It's called the Binding Energy. This sort of equation can help to explain: (Rest Mass Energy of Individual Nucleons and Electrons) - (Various Binding Energies) = (Rest Mass Energy of the Natural Cl-35 Atom)
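Putting numbers to this (a sketch using the question's values and the standard conversion 1 u ≈ 931.494 MeV/c², which is not stated in the original post):

```python
# Particle masses from the question, in atomic mass units (u)
M_PROTON, M_NEUTRON, M_ELECTRON = 1.0073, 1.0087, 0.00055
U_TO_MEV = 931.494  # energy equivalent of 1 u, in MeV

# Cl-35: 17 protons, 18 neutrons, 17 electrons
constituents = 17 * M_PROTON + 18 * M_NEUTRON + 17 * M_ELECTRON
mass_defect = constituents - 34.969          # measured atomic mass of Cl-35
binding_energy_mev = mass_defect * U_TO_MEV  # energy released on assembly
```

The sum of the free constituents is about 35.290 u, so the "missing" 0.321 u corresponds to roughly 299 MeV of binding energy carried away when the atom forms.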
{ "domain": "chemistry.stackexchange", "id": 13673, "tags": "atoms" }
Merger happening tangentially, but dark matter at both sides?
Question: According to this news, The expectation of "unaccounted energy" comes from the fact the merger of galaxy clusters is occurring tangentially to the observers' line-of-sight. This means they are potentially missing a good fraction of the kinetic energy of the merger because their spectroscopic measurements only track the radial speeds of the galaxies. Read more at: http://phys.org/news/2014-04-hubble-team-monster-el-gordo.html#jCp What strikes me as surprising is that the picture shows the dark matter distribution (inferred from weak lensing) in blue hue, and it shows a similar pattern to the Bullet Cluster, even though in the Bullet Cluster case, the collision happens perpendicular to the line-of-sight. Any idea why there is so much discrepancy between the normal matter and dark matter distribution along that axis? Answer: According to the original paper from the Atacama Telescope team the collision axis is somewhere between 15° and 30° to the line of sight. So the claim that the axis is tangential to the line of sight is misleading (since the line of sight is a straight line, wouldn't the tangent to it be the same straight line?). The velocity component normal to the line of sight is estimated at 586 km/s (page 15 of the paper), so we'd expect to see some separation of the dark and baryonic matter distributions even though it wouldn't be as great as for the bullet cluster. For comparison, the collision speed in the bullet cluster is estimated to be 4500 km/s and the axis is roughly normal to the line of sight.
{ "domain": "physics.stackexchange", "id": 12941, "tags": "dark-matter, gravitational-lensing" }
Lunar Eclipse - Total darkness?
Question: Can a lunar eclipse completely 'turn off' the moon, i.e. like a New Moon? Or is it always just a 'shading' of the moon (sometimes red in color)? Answer: No. Lunar eclipses are caused when the Moon is in opposition to the Sun. Normally this produces a full moon, but if the Moon is in exact opposition (considering the inclination of the Moon's orbital plane), all direct sunlight will be blocked from the Moon: So if all the sunlight is blocked, how can we see the Moon? Well, the main reason is that sunlight is often dispersed in the atmosphere, and so it can reach the Moon. In addition, there's airglow, which is when sunlight hits Earth's upper atmosphere and causes multiple chemical reactions, scattering light throughout the night sky. Thus, all this light from Earth, called earthshine, reflects off the Moon and illuminates it. In addition, Earth removes and blocks parts of the sunlight's spectrum, leaving only the longer wavelengths. This causes the Moon to appear red. Lastly, because the Earth blocks off all the direct sunlight from the Sun (only diffracted sunlight and airglow reach the Moon), we can actually see Earth's shadow on the Moon.
{ "domain": "astronomy.stackexchange", "id": 1846, "tags": "lunar-eclipse" }
How to derive the formula for total impedance, $Z$, in an $RLC$ circuit?
Question: Given that this is an AC circuit, how can we derive the formula below for impedance, $Z$? $R = $ resistance, $X_{L} = $ inductive reactance, and $X_{c} = $ capacitive reactance. Answer: The impedance is actually a complex quantity. It has a magnitude which describes the ratio between current and voltage magnitude, but also a phase which gives the phase difference between the current and the voltage. So $|U|=|Z|\cdot|I|$ and $Phase(U) - Phase(I) = Phase(Z)$. For a resistor, the impedance is equal to the resistance $Z_R = R$ because there is no phase difference between the current and the voltage. For a capacitor $Z_C = \frac{1}{j \cdot \omega C}$, with $j \cdot j = -1$. It is imaginary because the phase of the current is 90 degrees greater than the phase of the voltage for capacitors. For an inductance $Z_L = j \cdot \omega L$. It is imaginary because the phase of the current is 90 degrees less than the phase of the voltage. Impedances are really nice to work with, because you can apply the same rules as you do for resistors when putting several impedances in series or in parallel. So if all impedances are in series, like in your picture, then the total impedance is simply $$Z_{tot} = Z_R+Z_C+Z_L = R + \frac{1}{j \cdot \omega C} + j \cdot \omega L$$ If the impedances were in parallel you would have $$\frac{1}{Z_{tot}} = \frac{1}{Z_R}+ \frac{1}{Z_C}+\frac{1}{Z_L}$$ The formula you showed in your question is not really a formula for the impedance. It is a formula for the absolute value of the impedance. In other words, in your formula you have $$Z = |Z_{tot}| = \sqrt{\Re(Z_{tot})^2 + \Im(Z_{tot})^2}$$ For $Z_{tot} = Z_R+Z_C+Z_L = R + \frac{1}{j \cdot \omega C} + j \cdot \omega L$, the real part is $\Re(Z_{tot})=R$ and the imaginary part is $\Im(Z_{tot})=\omega L - \frac{1}{\omega C}$.
So if we define $X_L = \omega L$ and $X_C = \frac{1}{\omega C}$ we get: $$Z = |Z_{tot}| = \sqrt{\Re(Z_{tot})^2 + \Im(Z_{tot})^2} = \sqrt{R^2 +(\omega L - \frac{1}{\omega C})^2} = \sqrt{R^2 + (X_L-X_C)^2}$$ This $Z$ gives you an idea of the ratio of the magnitudes of the current and the voltage, but gives no information about the phase. I guess that if you are using it you don't know much about complex numbers and why they are useful to describe the magnitude and phase of oscillating quantities. But trying to avoid them makes your life only more difficult, so I suggest you learn some basics about complex numbers and understand where the Euler formula comes from. I also suggest you watch this video to understand what I mean by "the phase of the current is 90 degrees greater than the phase of the voltage in a capacitor". Here is a small video to understand how to take the absolute value of a complex number, so you understand where the square root comes from. Also notice that when taking the imaginary part of $Z_{tot}$ I used the fact that $\frac{1}{j} = -j$.
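As a numeric check that the magnitude of the complex sum matches the square-root formula (the component values below are arbitrary examples chosen to sit at resonance, not from the question):

```python
import cmath
import math

def series_rlc_impedance(R, L, C, omega):
    """Complex impedance of a series RLC circuit: Z = R + 1/(jwC) + jwL."""
    return R + 1 / (1j * omega * C) + 1j * omega * L

# Arbitrary example values: R = 50 ohm, L = 0.1 H, C = 10 uF, w = 1000 rad/s
R, L, C, omega = 50.0, 0.1, 1e-5, 1000.0
Z = series_rlc_impedance(R, L, C, omega)

XL = omega * L            # inductive reactance (100 ohm here)
XC = 1 / (omega * C)      # capacitive reactance (also 100 ohm here)
magnitude_formula = math.sqrt(R**2 + (XL - XC)**2)
phase = cmath.phase(Z)    # the extra information |Z| alone discards
```

With these values X_L = X_C, so the reactances cancel (resonance), |Z| reduces to R, and the phase is zero - exactly what the square-root formula predicts.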
{ "domain": "physics.stackexchange", "id": 90454, "tags": "homework-and-exercises, electric-circuits, electrical-resistance, capacitance, inductance" }
How to interpret correlation functions in QFT?
Question: I'm fairly new to the subject of quantum field theory (QFT), and I'm having trouble intuitively grasping what an n-point correlation function physically describes. For example, consider the 2-point correlation function between a (real) scalar field $\hat{\phi}(x)$ and itself at two different space-time points $x$ and $y$, i.e. $$\langle\hat{\phi}(x)\hat{\phi}(y)\rangle :=\langle 0\rvert T\lbrace\hat{\phi}(x)\hat{\phi}(y)\rbrace\lvert 0\rangle\tag{1}$$ where $T$ time-orders the fields. Does this quantify the correlation between the values of the field at $x=(t,\mathbf{x})$ and $y=(t',\mathbf{y})$ (i.e. how much the values of the field at different space-time points covary, in the sense that, if the field $\hat{\phi}$ is excited at time $t$ at some spatial point $\mathbf{x}$, then this will influence the "behaviour" of the field at later time $t'$ at some spatial point $\mathbf{y}$)? Is this why it is referred to as a correlation function? Furthermore, does one interpret $(1)$ as physically describing the amplitude of propagation of a $\phi$-particle from $x$ to $y$ (in the sense that a correlation of excitations of the field at two points $x$ and $y$ can be interpreted as a "ripple" in the field propagating from $x$ to $y$)? Answer: Yes, in scalar field theory, $\langle 0 | T\{\phi(y) \phi(x)\} | 0 \rangle$ is the amplitude for a particle to propagate from $x$ to $y$. There are caveats to this, because not all QFTs admit particle interpretations, but for massive scalar fields with at most moderately strong interactions, it's correct. Applying the operator $\phi({\bf x},t)$ to the vacuum $|0\rangle$ puts the QFT into the state $|\delta_{\bf x},t \rangle$, where there's a single particle whose wave function at time $t$ is the delta-function supported at ${\bf x}$. If $x$ comes later than $y$, the number $\langle 0 | \phi({\bf x},t)\phi({\bf y},t') | 0 \rangle$ is just the inner product of $| \delta_{\bf x},t \rangle$ with $| \delta_{\bf y},t' \rangle$.
However, the function $f(x,y) = \langle 0 | T\{\phi(y) \phi(x)\} | 0 \rangle$ is not actually a correlation function in the standard statistical sense. It can't be; it's not even real-valued. However, it is a close cousin of an honest-to-goodness correlation function. If you make the substitution $t=-i\tau$, you'll turn the action $$iS = i\int dtd{\bf x} \{\phi(x)\Box\phi(x) - V(\phi(x))\}$$ of scalar field theory on $\mathbb{R}^{d,1}$ into an energy function $$-E(\phi) = -\int d\tau d{\bf x} \{\phi(x)\Delta\phi(x) + V(\phi(x))\}$$ which is defined on scalar fields living on $\mathbb{R}^{d+1}$. Likewise, the oscillating Feynman integral $\int \mathcal{D}\phi e^{iS(\phi)}$ becomes a Gibbs measure $\int \mathcal{D}\phi e^{-E(\phi)}$. The Gibbs measure is a probability measure on the set of classical scalar fields on $\mathbb{R}^{d+1}$. It has correlation functions $g(({\bf x}, \tau),({\bf y},\tau')) = E[\phi({\bf x}, \tau)\phi({\bf y},\tau')]$. These correlation functions have the property that they may be analytically continued to complex values of $\tau$ having the form $\tau = e^{i\theta}t$ with $\theta \in [0,\pi/2]$. If we take $\tau$ as far as we can, setting it equal to $i t$, we obtain the Minkowski-signature "correlation functions" $f(x,y) = g(({\bf x},it),({\bf y},it'))$. So $f$ isn't really a correlation function, but it's the boundary value of the analytic continuation of a correlation function. But that takes a long time to say, so the terminology gets abused.
{ "domain": "physics.stackexchange", "id": 62165, "tags": "quantum-field-theory, greens-functions, correlation-functions, propagator" }
CodeChef Fusing Weapons in a circular list
Question: I am currently trying to solve this problem on Codechef: Before the start of each stage, N weapons appear on the screen in circular order. Each weapon has an integer associated with it, which represents its level. The chef can choose two adjacent weapons of the same level and fuse them into a single weapon of level A+1, where A is the level of the weapons before fusing. Both the old weapons will disappear and the new weapon will be placed in the place of the old weapons, shrinking the circle. Chef can fuse as many times as he wants, and in each stage, he wants to make a weapon with as high a level as possible. Each stage is independent of other stages. Please help Chef by figuring out the maximum level of a weapon that he can get in each stage. However, my code seems to exceed the time limit. Can someone please tell me how to optimize this code to prevent it from exceeding the time limit? #include <iostream> using namespace std; class Set { public: int data[200000]; int length; }; int findMax(int data[], int size) { int max = data[0]; for (int i = 1; i < size; i++) { if (max < data[i]) { max = data[i]; } } return max; } void mergeData(int data[], int &size) { for (int i = 0; i < size - 1; i++) { for (int j = i + 1; j < size; j++) { if (data[i] == data[j]) { data[i]++; for (int k = j; k < size - 1; k++) { data[k] = data[k + 1]; } size--; mergeData(data, size); } } } } //Main function int main() { int numSets; cin >> numSets; Set* sets = new Set[100]; for (int i = 0; i < numSets; i++) { cin >> sets[i].length; for (int j = 0; j < sets[i].length; j++) { cin >> sets[i].data[j]; } } for (int i = 0; i < numSets; i++) { mergeData(sets[i].data, sets[i].length); cout << findMax(sets[i].data, sets[i].length) << endl; } return 0; } Answer: Looking at your merge_data() function, you'll notice that as soon as it finds 2 matching values, it combines them and recurses. What happens during and after the recursion? Let's take a look. 
Suppose we have this set: 3,8,4,2,2,7,14 We first get 3. Next we start walking the rest of the list. We compare against 8. No match. We compare against 4. No match. We do this until we run out of things to compare. Then we increment i. We check the rest of the array. Etc. Eventually we come to the 2. We compare to the next value and there is a match! We combine them and recurse. What happens now? We start by comparing 3 to 8, which we've already done. Then again with 4, which we've also already done. So we're doing a bunch of work over and over again. Then, eventually we return from our recursion. Now we go on to process the rest of the array. There are two problems here: 1) At this point, the recursion we're returning from has already processed the rest of the array, and 2) the array is no longer the same size, but we're going to process to the end of the original array anyway! I haven't worked out a full solution, but hopefully the above is enough to help you see what's wrong with your current solution.
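One way to avoid the restart-from-scratch rescans described above is a single left-to-right pass with a stack, merging a new value into the top for as long as they are equal. This is only a sketch (written in Python for brevity) of the adjacent-merge pass itself: it assumes greedy adjacent merging and does not handle the circular arrangement or prove the merge order maximizes the level, so it is not a full solution to the contest problem.

```python
def merge_pass(values):
    """Collapse adjacent equal neighbors in one pass.

    Each element is pushed and then popped at most once per level it
    gains, so there is no need to restart the scan after every merge.
    """
    stack = []
    for v in values:
        # Two equal neighbors fuse into one weapon of level v + 1,
        # which may in turn fuse with the new top (cascading merges).
        while stack and stack[-1] == v:
            stack.pop()
            v += 1
        stack.append(v)
    return stack
```

On the answer's example set, only the adjacent pair of 2s merges; with a run like 1,1,2,3 the merges cascade all the way down to a single value.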
{ "domain": "codereview.stackexchange", "id": 28286, "tags": "c++, programming-challenge, time-limit-exceeded, circular-list" }
pointcloud_to_laserscan parameter min_height
Question: hey guys, short question: Is the pointcloud_to_laserscan package capable of using negative values as min_height? Background information: I'm desperately trying to get a scan out of the PointCloud2 data that my Kinect generates - as it happens, the Kinect is not at ground level, which means that some obstacles have negative height values... that somehow seems to be a problem for the node - the obstacles are part of the pcl, but the laserscan does not see them... I hope you get what my problem is; surely I can't be the first to try this :( Edit: Update As a workaround I've changed some part of the cloud_to_scan.cpp file and manually increased the height - this works, even though I don't understand why -y+1 < min_height with min_height = 0 works, while -y < min_height with a value of -1 fails... I mean, if y is like 0.5, in the first case it is: -0.5+1 < 0 => 0.5 < 0 => false; in the second case: -0.5 < -1 => false Edit: Thanks for your answer, that indicates that somehow I messed up my package. I'm sure that I set the parameters correctly in the launchfile, and just to check that I changed the default values - with no result. Originally posted by Flowers on ROS Answers with karma: 342 on 2012-10-08 Post score: 0 Answer: Yes, the min_height parameter can be negative (anything between -10.0 and 10.0 meters). The only thing to look out for is that pointcloud_to_laserscan assumes the input PointCloud to be in the Kinect optical frame (x to the right, y down, z forward), while most other coordinate systems in ROS have x forward, y to the left, z up. That means that -y is up in pointcloud_to_laserscan.
BTW, you can use the following command to adjust these parameters dynamically with a GUI; that makes parameter tuning much easier: rosrun dynamic_reconfigure reconfigure_gui Originally posted by Martin Günther with karma: 11816 on 2012-10-10 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by Flowers on 2012-10-10: hmm, I'm pretty sure my launchfile works and sets the parameters right(tried some few extrema, which led to the results I expected)...still negative values won't work :(
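If parameter setting via launch file is the issue, a minimal sketch may help for comparison. This is hedged: the nodelet manager and topic names below are placeholders for your own setup, and min_height/max_height follow the package's dynamic_reconfigure parameter names.

```xml
<launch>
  <!-- Placeholder manager/topic names; adapt to your own Kinect pipeline. -->
  <node pkg="nodelet" type="nodelet" name="cloud_to_scan"
        args="load pointcloud_to_laserscan/CloudToScan openni_manager">
    <!-- Negative values are allowed (range -10.0 .. 10.0 m). -->
    <param name="min_height" value="-0.5"/>
    <param name="max_height" value="0.5"/>
    <remap from="cloud" to="/camera/depth/points"/>
  </node>
</launch>
```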
{ "domain": "robotics.stackexchange", "id": 11284, "tags": "ros, kinect, pointcloud-to-laserscan" }
Is the four-jerk time-like or space-like?
Question: In the paper Dynamics of a Charged Particle the author claims after equation (10): However, this equation is mathematically inconsistent because both $\dot v^\mu$ and $\dot F^\mu$ are spacelike four-vectors, i.e. are perpendicular to the velocity $v^\mu$, while $\ddot v^\mu$ is not. I don't believe this is correct, since the time component of the four-acceleration is zero in the proper frame. Differentiating this wrt the proper time will again give a zero time component for the proper four-jerk, giving another space-like four-vector. Answer: The answer to the question "Is the four-jerk time-like or space-like?" is addressed by a paper by Russo and Townsend ("Relativistic kinematics and stationary motions", 9 October 2009, Journal of Physics A: Mathematical and Theoretical, Volume 42, Number 44 - https://doi.org/10.1088/1751-8113/42/44/445402 - preprint at https://arxiv.org/abs/0902.4243 ). In view of these observations, it seems remarkable that the relativistic generalization of jerk, snap, etc. has attracted almost no attention in more than a century since the foundation of special relativity. It might be supposed that this is because there is little new to relativistic kinematics once one has defined the D-acceleration A, in a D-dimensional Minkowski spacetime, as the proper-time derivative of the D-velocity U: $$A =\frac{dU}{d\tau}=\gamma \frac{dU}{dt},\qquad \gamma= \frac{1}{\sqrt{1 - v^2}} . \qquad(1.2) $$ In particular, it is natural to suppose that one should define the relativistic jerk as $J = \frac{dA}{d\tau}$. However, J is not necessarily spacelike. This was pointed out in our previous paper [3] and it led us to define the relativistic jerk as $$\Sigma = J - A^2U , \qquad J = dA/d\tau.\qquad (1.3)$$ Observe that $U\cdot\Sigma \equiv 0$, which implies that $\Sigma$ is spacelike if non-zero.
$\qquad$ Following the posting in the archives of the original version of this paper, it was brought to our attention that relativistic jerk arises naturally in the context of the Lorentz-Dirac equation,...
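The paper's claim that $U\cdot\Sigma \equiv 0$ is a two-line check. Assuming the mostly-plus signature with $c=1$, so that $U\cdot U = -1$:

$$U\cdot U = -1 \;\Rightarrow\; U\cdot A = 0 \;\Rightarrow\; 0 = \frac{d}{d\tau}\left(U\cdot A\right) = A\cdot A + U\cdot J \;\Rightarrow\; U\cdot J = -A^2,$$

hence

$$U\cdot\Sigma = U\cdot J - A^2\,(U\cdot U) = -A^2 + A^2 = 0.$$

So $J = dA/d\tau$ generally has a component of size $A^2$ along $U$, which is why the plain derivative is not guaranteed spacelike; the $-A^2U$ term in (1.3) projects it out. This also locates the flaw in the proper-frame argument of the question: $A^0$ vanishes only instantaneously in the momentarily comoving frame, so its $\tau$-derivative there is $J^0 = A^2 \neq 0$.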
{ "domain": "physics.stackexchange", "id": 39383, "tags": "special-relativity, jerk" }
Dot product in cylindrical coordinates
Question: I'm given the vector: $$\vec{V}(r,\theta,z)=\frac{1}{r}\hat{e}_r + (r\cos\theta)\hat{e}_\theta+\frac{z^2}{r^2}\hat{e}_z$$ I want the scalar product $\vec{\nabla}\cdot\vec{V}$. We know that in cylindrical coordinates: $$\vec{\nabla}=\left<\frac{\partial}{\partial r},\frac{1}{r}\frac{\partial}{\partial \theta},\frac{\partial}{\partial z} \right>$$ So, the product should be $$\vec{\nabla}\cdot\vec{V} =\frac{\partial}{\partial r}\left(\frac{1}{r}\right) + \frac{1}{r}\frac{\partial}{\partial \theta}(r\cos\theta)+\frac{\partial}{\partial z}\left(\frac{z^2}{r^2}\right) = -\frac{1}{r^2}-\sin\theta +\frac{2z}{r^2}$$ However, in the answers, the answer given is this: $$\vec{\nabla}\cdot\vec{V}=\frac{1}{r}\Big\{\frac{\partial}{\partial r}(1)+\frac{\partial}{\partial \theta}(r\cos\theta)+\frac{1}{r}\frac{\partial}{\partial z}(z^2)\Big\}=-\sin\theta+\frac{2z}{r^2}$$ I don't understand why $\frac{1}{r}$ was factored out and how that is possible. I understand you can factor it out for the partial derivatives with respect to $\theta$ and $z$, but in the first one, which is with respect to $r$, it shouldn't be factored out; it should be differentiated. Any thoughts? Am I missing something, or is there a typo in the answers? Answer: The divergence operator in cylindrical coordinates is actually different from what you believe it to be: $$ \nabla\cdot\mathbf A=\frac{1}{r}\frac{\partial}{\partial r}\left(r A_r\right)+\frac{1}{r}\,\frac{\partial A_\theta}{\partial\theta}+\frac{\partial A_z}{\partial z} $$ You seem to be confusing it with the gradient operator, which has the form you specify: $$ \nabla f=\frac{\partial f}{\partial r}\hat{r}+\frac{1}{r}\,\frac{\partial f}{\partial \theta}\hat{\theta}+\frac{\partial f}{\partial z}\hat{z} $$ (though obviously you're ignoring the unit vectors).
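Applying the correct divergence formula to the given field shows term by term where the textbook's $\frac{1}{r}$ comes from:

$$\nabla\cdot\vec V = \frac{1}{r}\frac{\partial}{\partial r}\!\left(r\cdot\frac{1}{r}\right) + \frac{1}{r}\frac{\partial}{\partial \theta}\left(r\cos\theta\right) + \frac{\partial}{\partial z}\!\left(\frac{z^2}{r^2}\right) = \frac{1}{r}\,\frac{\partial(1)}{\partial r} - \sin\theta + \frac{2z}{r^2} = -\sin\theta + \frac{2z}{r^2}.$$

The first term vanishes because $rA_r = r\cdot\frac{1}{r} = 1$ is constant, which is precisely why the book's solution writes $\frac{\partial}{\partial r}(1)$ rather than differentiating $\frac{1}{r}$.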
{ "domain": "physics.stackexchange", "id": 52726, "tags": "vectors, differentiation, calculus" }
Class diagram of Tic-Tac-Toe Game
Question: I wrote a basic tic toe game. See, https://jsfiddle.net/shai1436/Lgy1u84s/4/ I am not satisfied with the way I have designed classes and how I have implemented the undo feature. Please suggest feedback. //player0 is O and player1 is X let board; class Game { constructor() { this.player = 0; this.setTurnText(this.player + 1); this.setResultText(); } togglePlayer() { if (this.player === 1) this.player = 0; else this.player = 1; this.setTurnText(this.player + 1); } setTurnText(player) { const ele = document.getElementById('turn-text'); ele.innerText = 'Player ' + player + ' turn'; } setResultText() { const ele = document.getElementById('result-text'); ele.innerText = ' '; } declareWinner(player) { const ele = document.getElementById('result-text'); ele.innerText = 'Player ' + player + ' won'; console.log("player " + player + " won "); } declareDraw() { const ele = document.getElementById('result-text'); ele.innerText = ' Draw '; } } class Board { constructor() { this.gameBoard = new Array(new Array(3), new Array(3), new Array(3)); this.gameStatus = null; // 0: player0 wins, 1: player1 wins, 2: draw, null: undecided this.cellsFilled = 0; this.findGameStatus = this.findGameStatus.bind(this); this.game = new Game(); this.boardCanvas = new BoardCanvas('canvas'); this.gameHistory = new Array(); } updateBoard(indices) { if (!this.canDraw(indices)) return; this.gameBoard[indices.x][indices.y] = this.game.player; this.gameHistory.push(indices); this.cellsFilled++; this.updateBoardCanvas(); this.findGameStatus(indices); if (this.gameStatus === 0 || this.gameStatus === 1) this.game.declareWinner(this.gameStatus + 1); else if (this.gameStatus === 2) this.game.declareDraw(); this.game.togglePlayer(); } updateBoardCanvas() { this.boardCanvas.drawBoard(this.gameBoard); } undo() { const indices = this.gameHistory.pop(); this.gameBoard[indices.x][indices.y] = undefined; this.updateBoardCanvas(); this.game.togglePlayer(); this.cellsFilled--; } canDraw(indices) { const 
iscellEmpty = this.gameBoard[indices.x][indices.y] === undefined; const isGameInProgress = this.gameStatus === null; return iscellEmpty && isGameInProgress; } findGameStatus(indices) { if (this._checkRow(indices) || this._checkColumn(indices) || this._checkDiagonal() || this._checkReverseDiagonal()) { this.gameStatus = this.game.player; } else if (this.cellsFilled === 9) { this.gameStatus = 2; } } _checkRow(indices) { const row = indices.x; for (let i = 0; i < 3; i++) { if (this.gameBoard[row][i] !== this.game.player) return false; } return true; } _checkColumn(indices) { const col = indices.y; for (let i = 0; i < 3; i++) { if (this.gameBoard[i][col] !== this.game.player) return false; } return true; } _checkDiagonal() { for (let i = 0; i < 3; i++) { if (this.gameBoard[i][i] !== this.game.player) return false; } return true; } _checkReverseDiagonal() { for (let i = 0; i < 3; i++) { if (this.gameBoard[i][2 - i] !== this.game.player) return false; } return true; } } class BoardCanvas { constructor(id) { this.canvas = document.getElementById(id); this.ctx = this.canvas.getContext('2d'); this.drawBoard(); this.addClickListener(); } mapIndicesToCanvasCells(x, y) { var bbox = this.canvas.getBoundingClientRect(); const loc = { x: x - bbox.left * (canvas.width / bbox.width), y: y - bbox.top * (canvas.height / bbox.height) }; loc.x = Math.floor(loc.x / 100) * 100; loc.y = Math.floor(loc.y / 100) * 100; return loc; } drawCross(y, x) { this.ctx.save(); this.ctx.translate(x, y); this.ctx.beginPath(); this.ctx.moveTo(20, 20); this.ctx.lineTo(80, 80); this.ctx.moveTo(80, 20); this.ctx.lineTo(20, 80); this.ctx.stroke(); this.ctx.restore(); } drawCircle(y, x) { this.ctx.save(); this.ctx.translate(x, y); this.ctx.beginPath(); this.ctx.arc(50, 50, 30, 0, Math.PI * 2, true); this.ctx.stroke(); this.ctx.restore(); } drawBoard(board) { this.clearBoard(); for (let i = 0; i < 3; i++) { for (let j = 0; j < 3; j++) { this.ctx.strokeRect(100 * i, 100 * j, 100, 100); if (board && board[i][j] 
=== 0) this.drawCircle(100 * i, 100 * j); else if (board && board[i][j] === 1) this.drawCross(100 * i, 100 * j); } } } addClickListener() { this.canvas.onclick = (e) => { const loc = this.mapIndicesToCanvasCells(e.clientX, e.clientY); const indices = {}; let temp = loc.x; indices.x = Math.floor(loc.y / 100); indices.y = Math.floor(temp / 100); board.updateBoard(indices); } } clearBoard() { this.ctx.clearRect(0, 0, this.canvas.width, this.canvas.height); } } const init = () => { board = new Board(); } init(); const undo = () => { board.undo(); } window.init = init; window.undo = undo; Answer: Player is confusing Only 1 player is defined. This spawns the need for confusing code that looks like this.player is 0 then 1 then 2 then 3 and so on. And player-value incrementing is spread over many methods, which makes my spidey sense say "uh-oh, player disconnects ahead!". changePlayer() { this.player = this.player === 0 ? 1 : 0; } currentPlayer() { return this.player; } Personally, when I write a second method for a given thing I start to consider making a separate class. Classes should be about exposing related functionality. Good classes expose functionality and hide state. Array Iterator Functions Read up on Array.map, Array.every, Array.some, et cetera. These will really clean up the array looping. Class decoupling Class purpose needs to be more precisely nailed down conceptually. Then the existing coupling will be more evident. Mixing UI functionality into many classes seems universal in my coding experiences. It's just too easy to do when updating the screen is a simple one liner. Game sounds like it should be the overall game manager. It should be coordinating the other objects through their own APIs, but is directly manipulating raw state that should be in other objects such as display. Board is only the board and should only be aware of its own state - which squares are occupied. But it is also handling display details.
gameHistory sounds like high level functionality that belongs in a conceptually higher level class. BoardCanvas sounds like the place for display functions, but it is not. The DOM and Canvas are conceptually display components for tic-tac-toe and only BoardCanvas should have to use them. BoardCanvas needs a tic-tac-toe-appropriate API. addClickListener() is a spot-on example of good decoupling. Board contains a Game or vice versa? As a general rule have higher level classes contain lower level classes. Board is a low level and thus "stupid" class. Keep it stupid. It should not be coordinating Game - BoardCanvas interaction; which will happen if you invert the containment hierarchy. undo const undo = () => { board.undo(); } You'll end up naturally writing lots of these "pass through" functions with decoupled classes. This invisible hand of OO, so to speak, will make high level classes read appropriately high level and classes at all levels will be able to "mind their own business". game flow logic In the spirit of expressing high level logic, at the highest level, I imagine the game as a loop. Whether this logic is in Game or a new class is a design decision but the overall point is "layers of abstraction" in the application. // initialize variables, create objects, etc. var noWinner = true; ... while (noWinner) { ... // testing for a winner or tie game should be somewhere in the // method we're calling (or method chain it might be calling). // An if-else in this game loop takes away part of the "who won?" logic // from its proper place. noWinner = !this.hasWon(currentPlayer()); } boardCanvas.displayWinner(this.winner); // I suppose the winner could be "it's a tie"
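To make the Array-iterator suggestion concrete, here is a sketch of how the three hand-rolled _check* loops collapse with Array.prototype.every. The function names are illustrative, not a drop-in patch for the posted class.

```javascript
// Win checks via Array.prototype.every instead of index loops.
// `board` is a 3x3 array of cell values, as in the posted gameBoard.
function rowWon(board, row, player) {
  return board[row].every(cell => cell === player);
}

function columnWon(board, col, player) {
  return board.every(row => row[col] === player);
}

function diagonalWon(board, player) {
  // Main diagonal; use row[2 - i] for the reverse diagonal.
  return board.every((row, i) => row[i] === player);
}
```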
{ "domain": "codereview.stackexchange", "id": 35525, "tags": "javascript, tic-tac-toe" }
Is the velocity a scalar or a vector in one dimensional Lorentz transformations?
Question: Is the sign of velocity v important in the one dimensional Lorentz transformations? My question arises because the length contraction and the time dilation effects will work out in exactly the same way independent of the direction of motion of the moving body. Answer: The sign has no effect on the factors. The direction only matters to length contraction because lengths are contracted in that direction.
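Concretely, the kinematic factor depends only on $v^2$ and is therefore even in $v$:

$$\gamma(v) = \frac{1}{\sqrt{1-v^2/c^2}} = \gamma(-v),$$

so time dilation $\Delta t = \gamma\,\Delta\tau$ and length contraction $L = L_0/\gamma$ are unchanged under $v \to -v$. The sign does matter in the mixing terms of the transformation itself, $x' = \gamma(x - vt)$ and $t' = \gamma(t - vx/c^2)$, which distinguish the two directions of relative motion.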
{ "domain": "physics.stackexchange", "id": 62270, "tags": "special-relativity, relativity, lorentz-symmetry, relative-motion" }
Proper way to compare two Dictionaries
Question: I am implementing an IEqualityComparer for Dictionary objects and am looking for input on a couple of different approaches. I define equality in this case to be that both dictionaries contain the same set of KeyValuePair's as defined by equality of the hash value for the respective keys and values. The first generates a hash value by XORing all of the keys and values in both dictionaries and comparing them. The other uses the HashSet collection and its SymmetricExceptWith method. Are these functionally equivalent, and are there pros/cons to either approach, or better ways to accomplish this? Both approaches are working for my test cases. GetHashCode approach: class DictionaryComparer<TKey, TValue> : IEqualityComparer<IDictionary<TKey, TValue>> { public DictionaryComparer() { } public bool Equals(IDictionary<TKey, TValue> x, IDictionary<TKey, TValue> y) { // fail fast if count are not equal if (x.Count != y.Count) return false; return GetHashCode(x) == GetHashCode(y); } public int GetHashCode(IDictionary<TKey, TValue> obj) { int hash = 0; foreach (KeyValuePair<TKey, TValue> pair in obj) { int key = pair.Key.GetHashCode(); // key cannot be null int value = pair.Value != null ? pair.Value.GetHashCode() : 0; hash ^= ShiftAndWrap(key, 2) ^ value; } return hash; } private int ShiftAndWrap(int value, int positions) { positions = positions & 0x1F; // Save the existing bit pattern, but interpret it as an unsigned integer. uint number = BitConverter.ToUInt32(BitConverter.GetBytes(value), 0); // Preserve the bits to be discarded. uint wrapped = number >> (32 - positions); // Shift and wrap the discarded bits.
return BitConverter.ToInt32(BitConverter.GetBytes((number << positions) | wrapped), 0); } } HashSet approach: class DictionaryComparer<TKey, TValue> : IEqualityComparer<IDictionary<TKey, TValue>> { public DictionaryComparer() { } public bool Equals(IDictionary<TKey, TValue> x, IDictionary<TKey, TValue> y) { if (x.Count != y.Count) return false; HashSet<KeyValuePair<TKey, TValue>> set = new HashSet<KeyValuePair<TKey, TValue>>(x); set.SymmetricExceptWith(y); return set.Count == 0; } } Answer: A 32-bit hash returned by GetHashCode has 2^32 possible values, with a probability distribution dependent on the hashing function. If there are more than 2^32 possible input values then you will get collisions (see here). And while we like to think collisions are rare, they turn up a lot more frequently than we like to think. It gets worse when people are actively attacking you through your hashing function. @svick is correct that you can't use a hash code to compare objects for equality. All you can be certain of (assuming a consistent hash implementation) is that two objects with different hashes are not equal. No other guarantee is given. Depending on the cost of generating the hashes, you might actually be better off not using them at all in this instance. The only really guaranteed equality test for a pair of Dictionary instances is to examine their contents. The simple shortcuts you can implement: Check if either instance is null (it happens) Check if both input Dictionary instances are the same instance Check if the counts differ The other slight speed improvement is to check the keys first. Often checking the keys is a faster operation than checking the values.
Something like: public bool Equals<TKey, TValue>(IDictionary<TKey, TValue> x, IDictionary<TKey, TValue> y) { // early-exit checks if (null == y) return null == x; if (null == x) return false; if (object.ReferenceEquals(x, y)) return true; if (x.Count != y.Count) return false; // check keys are the same foreach (TKey k in x.Keys) if (!y.ContainsKey(k)) return false; // check values are the same foreach (TKey k in x.Keys) if (!x[k].Equals(y[k])) return false; return true; } Adding a loop to check for hash inequality might improve the speed. Try it and see.
{ "domain": "codereview.stackexchange", "id": 4410, "tags": "c#, .net, hash-map" }
Making sense out of the visual representation of transcription
Question: Most people are familiar with the following diagram. Some genomic DNA with a promoter region, exons and introns. This is transcribed into RNA that is then translated into a polypeptide. When we look closer at the strand that is being transcribed, we can distinguish between the two strands as the sense and anti-sense strands. So the transcription factors and RNA polymerase bind and begin transcribing mRNA in the 5' to 3' direction, thus reading the anti-sense strand in the 3' to 5' direction, so the mRNA has the same sequence as the sense DNA strand, substituting U for T in the mRNA. My question would be: shouldn't the exons be numbered in the reverse order from what is shown in the first picture I provided? So instead of Promoter -> Exon 1 -> Intron -> Exon 2, should it be Promoter -> Exon N -> Intron -> Exon N-1? Also, on bioinformatic sites, are the gene sequences listed as the sense or anti-sense strand? I have noticed that in some bioinformatic tools, to determine what polypeptide will result from a DNA sequence, one must input the sense strand in 5' to 3' orientation and not the anti-sense strand. Answer: All visual representations and nearly all coordinate systems are based on the sense strand. The polymerase machinery has no clue about what is sense and what is antisense, because each is the antisense of the other. For visual representation this makes much more sense and conveys more information, as it removes an extra layer of complexity. And, in most cases the gene structures are declared in the order of the reference genome, which is always the positive or sense strand.
Next, coming to the bioinformatics part: most databases such as UCSC, Ensembl and NCBI maintain gene coordinates on the reference genome. But there's a catch when reporting the information through a BED file. Negative-stranded genes are reported as chromosome stop start by NCBI (last I used it was about one and a half years ago), while UCSC will provide the chromosome start stop, and both will report the strandedness. UCSC expects that you, the bioinformatician, will create the reverse complement when you find the strand information, while NCBI expects that your program will fail a sanity check because stop - start will come out negative, implying that you cannot make a mistake while parsing NCBI BED files. Furthermore, UCSC indexes are maintained as 0-based, while NCBI's are maintained as 1-based. I would urge you to validate this information. But why not just keep a negative strand as well, keeping the gene coordinates and elements for the antisense in the format you just mentioned? Because, speaking from a computational point of view, it just makes more sense: that system would consume more storage (remember the entire system was formulated before storage became as cheap as it is today) and more memory during tasks (exactly double what it consumes today). So it's just better to have a positive-strand reference genome and base all genes and elements on that. Just an example of how alignment of sequencing reads works: You align your read to the reference genome. Aligns? If yes, it has mapped to the positive strand. No? Reverse complement the read and align back. Aligns? If yes, it has mapped to the negative strand. No? Possibly an erroneous read or other artefacts.
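The strand bookkeeping described above is easy to mirror in code. This is a hedged sketch: the function name and the choice of UCSC-style 0-based, half-open coordinates are my own framing, not the API of any particular toolkit.

```python
# Coordinates are stored on the sense (+) strand of the reference genome;
# a negative-strand gene's transcript sequence is recovered by taking the
# +-strand slice and reverse-complementing it.
COMPLEMENT = str.maketrans("ACGTacgt", "TGCAtgca")

def gene_sequence(plus_strand, start, end, strand):
    """start/end are 0-based, half-open (UCSC-style) +-strand coordinates."""
    if end < start:
        # The NCBI-style "stop start" ordering fails this sanity check.
        raise ValueError("stop < start: looks like an NCBI-style record")
    seq = plus_strand[start:end]
    return seq.translate(COMPLEMENT)[::-1] if strand == "-" else seq
```

For example, `gene_sequence("AACGTT", 1, 4, "-")` returns "CGT", the reverse complement of the +-strand slice "ACG".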
{ "domain": "biology.stackexchange", "id": 5909, "tags": "transcription" }
Divergent sum in lightcone quantization of bosonic string theory
Question: I had the following question regarding lightcone quantization of bosonic strings - The normal ordering requirement of quantization gives us this infinite sum $\sum_{n=1}^\infty n$. This is regularized in several ways, for example by writing $$ \sum_{n=1}^\infty e^{- n \epsilon } n = \frac{1}{\epsilon^2} - \frac{1}{12} + {\cal O}(\epsilon^2) $$ Most texts now simply state that the divergent part can be removed by counterterms. David Tong's notes (chap. 2 page 29) specifically state that this divergence is removed by the counterterm that restores Weyl invariance in the quantized theory (in dimensional regularization). I would like to see this explicitly. Is there any note regarding this? Or if you have any other idea how one would systematically remove the divergence above, it would be great! Answer: Note that $n$ is really the momentum in the $\sigma$ direction so it has the units of the world sheet mass. The exponent $-n\epsilon$ in the regulator has to be dimensionless so $\epsilon$ has the units of the world sheet distance. Consequently, the removed term $1/\epsilon^2$ has the units of the squared world sheet mass. These are the same units as the energy density in 1+1 dimensions. If you just redefine the stress energy tensor on the world sheet as $$T_{ab} \to T_{ab} + \frac{C}{\epsilon^2} g_{ab}$$ where $C$ is a particular number of order one you may calculate (that depends on conventions only), it will redefine your Hamiltonian so that the ground state energy is shifted in such a way that the $1/\epsilon^2$ term is removed. This "cosmological constant" contribution to the stress-energy tensor may be derived from the cosmological constant term in the world sheet action, essentially $C\int d^2\sigma\sqrt{-h}$. Classically, this term violates the Weyl symmetry.
However, quantum mechanically, there are also other loop effects that violate this symmetry – your regulated calculation of the ground state energy is a proof – and this added classical counterterm is needed to restore the Weyl (scaling) symmetry. It's important that this counterterm and all the considerations above are unable to change the value of the finite leftover, $-1/12$, which is the true physical finite part of the sum of positive integers. This is the conclusion we may obtain in numerous other regularization techniques. The result is unique because it really follows from the symmetries we demand – the world sheet Weyl symmetry or modular invariance.
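For reference, the quoted expansion of the regulated sum is a one-line computation with the geometric series:

$$\sum_{n=1}^\infty n\,e^{-n\epsilon} = -\frac{d}{d\epsilon}\sum_{n=1}^\infty e^{-n\epsilon} = -\frac{d}{d\epsilon}\,\frac{1}{e^\epsilon - 1} = \frac{e^\epsilon}{(e^\epsilon-1)^2} = \frac{1}{4\sinh^2(\epsilon/2)} = \frac{1}{\epsilon^2} - \frac{1}{12} + \frac{\epsilon^2}{240} + \mathcal{O}(\epsilon^4).$$

The counterterm proportional to $\int d^2\sigma\sqrt{-h}\,/\epsilon^2$ removes only the first piece; the finite part $-\frac{1}{12}$ is scheme-independent (it equals $\zeta(-1)$) and survives as the physical normal-ordering constant.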
{ "domain": "physics.stackexchange", "id": 11634, "tags": "string-theory, renormalization, regularization" }
Properties of steel and aluminum alloys
Question: I have been trying to compare 304 stainless steel with 7075 aluminum for some personal research. This steel alloy has a specific heat capacity of 500 J/kg-C, while for aluminum the value is 960 J/kg-C. Does this mean it is harder to heat up aluminum than steel? Then, steel has a thermal conductivity of 16.2 W/m-K, while for aluminum the value is 130 W/m-K. Does this mean that aluminum conducts heat away better than steel? Thank you. Answer: Heat capacity (specific heat) varies inversely with atomic mass, the Dulong–Petit law. Al is about 27 amu, and Fe is about 56 amu, so as you noted, it would be expected that aluminum stores more heat. An extreme example is lead solder, which has such low specific heat that a calloused plumber's hand can wipe a solder joint with little discomfort. As for thermal conductivity, that property is associated with the rigidity of crystal lattices, where sound travels as phonons, as in diamond, and with electrical conductivity, where "freely moving valence electrons transfer not only electric current but also heat energy." Aluminum and copper are among the best metallic conductors, so are used for cooking-pan bottoms to spread heat evenly. Since stainless steel has many additions to Fe, such as Ni and Cr, these inclusions provide discontinuities at grain boundaries that further impede heat transfer. Dewar flasks are made of stainless steel or glass, rather than Al, because they are poor conductors of heat.
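The Dulong–Petit estimate makes the specific-heat comparison quantitative: the molar heat capacity of a simple solid is roughly $3R \approx 24.9\ \mathrm{J\,mol^{-1}\,K^{-1}}$, so per unit mass

$$c \approx \frac{3R}{M}:\qquad c_{\mathrm{Al}} \approx \frac{24.9}{0.0270\ \mathrm{kg/mol}} \approx 920\ \mathrm{J\,kg^{-1}\,K^{-1}},\qquad c_{\mathrm{Fe}} \approx \frac{24.9}{0.0558\ \mathrm{kg/mol}} \approx 450\ \mathrm{J\,kg^{-1}\,K^{-1}},$$

within about 10% of the quoted alloy values (960 and 500 J/kg-C); the alloying elements shift the numbers slightly but not the trend.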
{ "domain": "chemistry.stackexchange", "id": 6318, "tags": "physical-chemistry" }
Refactored game of Snake
Question: A week ago I requested a review of my code for a game of Snake. First game of Snake I made some changes based on your answers and now I want to show you present code. Something else to modify here? GameMain.java import javax.swing.*; public class GameMain extends JFrame{ public static void main(String[] args) { JFrame frame = new GameInstant(); frame.setTitle("Snake Game"); frame.setSize(1000,800); frame.setResizable(false); frame.setLocationRelativeTo(null); frame.setDefaultCloseOperation(JFrame.DISPOSE_ON_CLOSE); frame.setVisible(true); } } GameInstance.java import javax.swing.*; import java.awt.*; import java.awt.event.KeyEvent; import java.awt.event.KeyListener; import java.util.concurrent.ScheduledThreadPoolExecutor; import java.util.concurrent.TimeUnit; public class GameInstant extends JFrame { private JPanel scorePanel; SnakeGame snakeGame = new SnakeGame(); public GameInstant() { addKeyListener(new KeyListener() { @Override public void keyTyped(KeyEvent e) { } @Override public void keyPressed(KeyEvent e) { if (e.getKeyCode() == KeyEvent.VK_LEFT) { snakeGame.storeDirectionOfSnake(Direction.LEFT); } else if (e.getKeyCode() == KeyEvent.VK_UP) { snakeGame.storeDirectionOfSnake(Direction.UP); } else if (e.getKeyCode() == KeyEvent.VK_RIGHT) { snakeGame.storeDirectionOfSnake(Direction.RIGHT); } else if (e.getKeyCode() == KeyEvent.VK_DOWN) { snakeGame.storeDirectionOfSnake(Direction.DOWN); } } @Override public void keyReleased(KeyEvent e) { } }); DrawingTheBoard gamePanel = new DrawingTheBoard(); this.add(gamePanel, BorderLayout.CENTER); scorePanel = new JPanel(); scorePanel.add(gamePanel.scoreLabel, BorderLayout.CENTER); this.add(scorePanel, BorderLayout.PAGE_END); ScheduledThreadPoolExecutor executor = new ScheduledThreadPoolExecutor(5); executor.scheduleAtFixedRate(new RepaintTheBoard(this), 0, snakeGame.getGameSpeed(), TimeUnit.MILLISECONDS); } } class RepaintTheBoard implements Runnable { private GameInstant theGame; public 
RepaintTheBoard(GameInstant theGame) { this.theGame = theGame; } public void run() { theGame.repaint(); } } class DrawingTheBoard extends JComponent { public JLabel scoreLabel; private boolean inGame = false; private int score = 0; CellData[][] board; SnakeGame snakeGame = new SnakeGame(); GameBoard gameBoard = new GameBoard(); public DrawingTheBoard() { board = gameBoard.getBoard(); scoreLabel = new JLabel("Score: " + score); scoreLabel.setFont(new Font("Serif", Font.PLAIN, 40)); } public void paint(Graphics g) { Graphics2D g2D = (Graphics2D) g; g2D.setBackground(Color.BLACK); g2D.fillRect(0, 0, getWidth(), getHeight()); update(); for (int i = 0; i < gameBoard.getxCells(); i++) { for (int j = 0; j < gameBoard.getyCells(); j++) { if (board[i][j] == CellData.APPLE || board[i][j] == CellData.SNAKE) { g2D.setPaint(Color.WHITE); g2D.fillRect(i * 10, j * 10, 10, 10); } else if (board[i][j] == CellData.WALL) { g2D.setPaint(Color.RED); g2D.fillRect(i * 10, j * 10, 10, 10); } } } if (snakeGame.hasEatenApple()) { score += 10; scoreLabel.setText("Score: " + Integer.toString(score)); } else if (snakeGame.isDead()) { score = 0; scoreLabel.setText("Score: " + Integer.toString(score)); } } public void update() { if (inGame == false) { snakeGame.initializeGame(); inGame = true; } snakeGame.changeSnakeDirection(); snakeGame.updateSnake(); if (snakeGame.snakeIsDead()) { snakeGame.removeSnake(); snakeGame.initializeGame(); } snakeGame.updateApple(); snakeGame.updateBoard(); } } SnakeGame import java.util.LinkedList; public class SnakeGame { private int gameSpeed = 100; private LinkedList<Point> body; private Point head; private static boolean eatenApple = false; private static boolean isDead = false; private static Direction snakeDirection; Snake theSnake = new Snake(); Apple theApple = new Apple(); GameBoard board = new GameBoard(); public SnakeGame() { } public void initializeGame() { board.cleanBoard(); theSnake.createSnake(board.getxCells() / 2, board.getyCells() / 2); 
theApple.createNewApple(); addAppleToGameBoard(); } public boolean collidesWith(CellData cellData) { body = theSnake.getBody(); head = body.get(0); CellData cell = board.getBoard()[head.getX()][head.getY()]; return (cell == cellData); } public boolean snakeIsDead() { if (collidesWith(CellData.WALL) || collidesWith(CellData.SNAKE)) { isDead = true; return true; } else { isDead = false; return false; } } public void takeAppleFromGameBoard() { board.setDataCell(theApple.getRandomXPos(), theApple.getRandomYPos(), CellData.EMPTY); } public void addAppleToGameBoard() { board.setDataCell(theApple.getRandomXPos(), theApple.getRandomYPos(), CellData.APPLE); } public void updateApple() { if (collidesWith(CellData.APPLE)) { takeAppleFromGameBoard(); theSnake.eat(); theApple.createNewApple(); eatenApple = true; } else { eatenApple = false; } } public void storeDirectionOfSnake(Direction direction) { snakeDirection = direction; } public void changeSnakeDirection(){ if (snakeDirection != null) { theSnake.changeDirection(snakeDirection); } } public void addSnakeToBoard() { body = theSnake.getBody(); for (int i = 0; i < body.size(); i++) { board.setDataCell(body.get(i).getX(), body.get(i).getY(), CellData.SNAKE); board.setDataCell(theSnake.getTailCell().getX(), theSnake.getTailCell().getY(), CellData.EMPTY); } } public void updateSnake() { theSnake.update(); } public void updateBoard(){ addAppleToGameBoard(); addSnakeToBoard(); } public void removeSnake() { body = theSnake.getBody(); theSnake.clearBody(); for (int i = 0; i < body.size(); i++) { board.setDataCell(body.get(i).getX(), body.get(i).getY(), CellData.EMPTY); } } public int getGameSpeed() { return gameSpeed; } public boolean hasEatenApple() { return eatenApple; } public boolean isDead() { return isDead; } } GameBoard.java public class GameBoard { private int boardWidth = 1000; private int boardHeight = 700; private int xCells = boardWidth / 10; private int yCells = boardHeight / 10; private static CellData board[][]; 
public GameBoard() { board = new CellData[xCells][yCells]; } public void cleanBoard() { for (int i = 0; i < xCells; i++) { board[i][0] = CellData.WALL; } for (int i = 0; i < xCells; i++) { board[i][yCells - 1] = CellData.WALL; } for (int j = 0; j < yCells; j++) { board[0][j] = CellData.WALL; } for (int j = 0; j < yCells; j++) { board[xCells - 1][j] = CellData.WALL; } for (int i = 1; i < xCells - 1; i++) { for (int j = 1; j < yCells - 1; j++) { board[i][j] = CellData.EMPTY; } } } public void setDataCell(int x, int y, CellData cellData) { board[x][y] = cellData; } public CellData[][] getBoard() { return board; } public int getxCells() { return xCells; } public int getyCells() { return yCells; } } Apple.java import java.util.Random; public class Apple { private int randomXPos; private int randomYPos; Random r = new Random(); GameBoard board = new GameBoard(); public Apple(){ } public void createNewApple(){ randomXPos = r.nextInt(board.getxCells()-2)+1; randomYPos = r.nextInt(board.getyCells()-2)+1; } public int getRandomXPos(){ return randomXPos; } public int getRandomYPos(){ return randomYPos; } } Snake.java import java.awt.*; import java.util.LinkedList; public class Snake{ private LinkedList<Point> body; // list holding points(x,y) of snake body private Point head; private static Direction headDirection; private static Point tailCell; private static boolean hasEatenApple = false; public Snake() { body = new LinkedList<>(); } public void createSnake(int x, int y) { //creating 3-part starting snake body.addFirst(new Point(x,y)); body.add(new Point(x - 1, y)); body.add(new Point(x - 2, y)); headDirection = Direction.RIGHT; tailCell = body.getLast(); } public void clearBody(){body.clear(); } public void changeDirection(Direction theDirection) { if (theDirection != headDirection.opposite()) this.headDirection = theDirection; } //updating localisation of snake public void update() { addPartOfBody(headDirection.getX(), headDirection.getY()); } private void 
addPartOfBody(int x, int y) { head = body.get(0); body.addFirst(new Point(head.getX() + x, head.getY() + y)); tailCell = body.getLast(); if (hasEatenApple == false) { body.removeLast(); } else { hasEatenApple = false; } } public LinkedList<Point> getBody() { return (LinkedList<Point>) body.clone(); } public Point getTailCell(){return tailCell;} public void eat() { hasEatenApple = true; } } Point.java public class Point { private int x; private int y; public Point(int x, int y) { this.x = x; this.y = y; } public int getX() { return x; } public int getY() { return y; } } Direction.java public enum Direction { LEFT { Direction opposite() { return RIGHT; } int getX(){ return -1; } int getY(){ return 0; } }, RIGHT { Direction opposite() { return LEFT; } int getX(){ return 1; } int getY(){ return 0; } }, UP { Direction opposite() { return DOWN; } int getX(){ return 0; } int getY(){ return -1; } }, DOWN { Direction opposite() { return UP; } int getX(){ return 0; } int getY(){ return 1; } }; abstract Direction opposite(); abstract int getX(); abstract int getY(); } CellData.java public enum CellData { EMPTY, SNAKE, APPLE, WALL; } Answer: Suggestions Currently, it seems as if you would like the game to run in full-screen mode, but what about the few people in the world still having 1280x720 or lower displays on their systems? 1000x800 will run out of vertical screen space on such displays. If you want to make a proper full-screen display of your JFrame, try the following code from this SO answer: JFrame in full screen Java: frame.setExtendedState(JFrame.MAXIMIZED_BOTH); frame.setUndecorated(true); Use this just before the .setVisible(true) call. You should also take a look at this, the Java Exclusive Full-Screen mode API. That should help when you want to get fullscreen properly (which the previous suggestion essentially is not, as it's not exclusive). You already import java.awt. Why not use it's Point class instead of rolling your own? 
It works pretty much the same, so it should be a drop-in replacement at this stage. Get your boardWidth and boardHeight parameters from the host JFrame as parameters to your GameBoard constructor, and move all field initialization there. This should make your code more flexible against different resolutions. You don't use the java.awt.* you imported for Snake.java; you can safely get rid of it. Any reason why eatenApple and isDead are static in SnakeGame? I don't think that they need to be. Division of responsibility: I feel that spawning an apple on the board should be the responsibility of the board, not the apple. Also, maintaining the state of a snake should be the responsibility of the snake, not the game logic. So, createNewApple() should belong to GameBoard and isDead() should belong to Snake, along with the previously mentioned variables (point 5). If you absolutely need to, you could expose these values using getters in SnakeGame. The next is a tricky point: what you've done is correct; I'm just letting you know why you shouldn't change it in the future. For representing the points of the body of the snake, never replace the LinkedList with a java.util.ArrayList, even if you do an ensureCapacity(xCells*yCells-2*(xCells+yCells)) call on the ArrayList object when initializing it to prevent reallocations (xCells*yCells-2*(xCells+yCells) is the maximum length of the snake). A linked list takes O(n) time to remove its last element if it's a singly linked list with only a head pointer, whereas for an array it is always an O(1) operation. Now java.util.LinkedList is a doubly linked list for which deletion of the last element can be done in O(1) time, so in this case the time complexity is not an obvious saving; however, when it comes to adding an element to the head of the list, the story is completely different. Then, ArrayList takes O(n) time while LinkedList takes O(1). TL;DR Keep using LinkedList. 
Style Indentation I'm sure this point was raised in answers to your previous question, but your indentation, linebreaks and braces are inconsistent. Try to use an editor or IDE capable of autoformatting to help you in this. Try to follow one style of indentation and braces and be consistent, it greatly improves the readability of your code. Naming Maybe you're autogenerating getters and setters, but take care, the API they expose is not evident from their names. In Apple.java, getRandomxPos() & getRandomyPos() seem to return only a particular x (or y, respectively) position, which is predetermined. Drop the Random in their names, it makes no sense as part of the API. Similarly for jFrame. That's not really representative of what jFrame's purpose is. You have gotten away with GameBoard and SnakeGame thanks to the class names, but try to indicate the purpose of a variable via its name.
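The head-append/tail-pop pattern from point 7 can be sketched compactly outside Java as well. Here is a minimal illustration in Python, whose collections.deque is doubly linked in spirit and, like Java's LinkedList, O(1) at both ends; the coordinates and the step function are made up for the example:

```python
from collections import deque

# Hypothetical mini-version of the snake update, illustrating point 7:
# deque, like Java's LinkedList, is O(1) at both the head and the tail.
def step(body, direction, grew):
    """Advance one cell: push a new head, pop the tail unless the snake ate."""
    hx, hy = body[0]
    dx, dy = direction
    body.appendleft((hx + dx, hy + dy))  # O(1) head insertion
    if not grew:
        body.pop()                       # O(1) tail removal
    return body

body = deque([(5, 5), (4, 5), (3, 5)])
step(body, (1, 0), grew=False)
```

A Python list, like ArrayList, would make the head insertion O(n), which is why the review recommends keeping the linked list.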
{ "domain": "codereview.stackexchange", "id": 23072, "tags": "java, beginner, object-oriented, snake-game" }
Why does the graph of the electrical conductivity of sulfuric acid/water solutions have this knee in the ~85%-~92% range?
Question: This answer to an earlier question regarding the electrical conductivity of sulfuric acid provides a graph showing the conductivity of sulfuric acid/water mixtures ranging from 0% to 100% sulfuric acid: (Image by Horace E. Darling in "Conductivity of sulfuric acid solutions" [Journal of Chemical & Engineering Data 9.3 (1964): 421-426.], via M. Farooq here at ChemSE.) As can be seen, the conductivity of the solution rises smoothly from 0% to a peak at approximately 30% sulfuric acid, and declines thereafter. However, at approximately 85% sulfuric acid, conductivity reaches a local minimum, after which it actually rises slightly with increasing sulfuric-acid concentration until reaching a local maximum at approximately 92% sulfuric acid, before again dropping off, more steeply, as the concentration of sulfuric acid in the solution continues to increase to 100%. Why does the trend of decreasing conductivity with increasing sulfuric-acid concentration temporarily reverse in the ~85%-~92% range? Answer: The comment by Vikki made me dig into even older papers. Since conductance (not conductivity; note that Darling is using terminology that is incorrect by today's standards) is inversely related to viscosity, I thought there must be a sharp change in the viscosity of sulfuric acid solution as a function of concentration. This guess is not that bad. This is from a 1923 paper: Rhodes, F. H., and C. B. Barbour. "The viscosities of mixtures of sulfuric acid and water." Industrial & Engineering Chemistry 15.8 (1923): 850-852. There is a sharp increase in viscosity at 85%, which indicates there is a major structural change in sulfuric acid solution in the range 85-92%. Sulfuric acid forms a hydrate in this range. When the viscosity is high, the conductance goes down, so there is a depression in the curve. This viscosity jump is causing the double hump. Once we are past the high viscosity range, conductance goes up again. 
It is amazing how simple molecules never stop surprising us!
{ "domain": "chemistry.stackexchange", "id": 15738, "tags": "aqueous-solution, conductivity" }
Finding the direction of the magnetic field acting on protons in a cyclotron?
Question: I have been trying to answer part b) of the question below: (The image is the image of the cyclotron in question) To find the direction of the magnetic field acting on the protons I tried using the right-hand rule, treating the direction of current as "upwards" and the force on the protons as "to the left", which gives me the answer "into the page". This is the right answer; however, I am uncertain whether my method of finding it is correct. So, I am wondering if my method is indeed correct or not. Answer: Your reasoning is correct: you're using the equation for the Lorentz force $\vec{F} = q\vec{v}\times\vec{B}$ where you know the direction of the force and the current. What may be a little confusing about your approach is that you're applying the Lorentz force formula at the point where the proton has already been deflected a bit, and is travelling upwards (with the force pointing to the left). Try applying it for the moment where the proton is just entering the D (what are the directions of $\vec{v}$ and $\vec{F}$ here?). The result will of course be the same, but it will help you understand where the initial deflection comes from.
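The right-hand-rule check in the answer can also be written out numerically as a cross product. A small sketch, where the axis conventions and unit magnitudes are assumptions for illustration ($+y$ is "up the page", $-x$ is "to the left", $-z$ is "into the page"):

```python
# Verify F = q v x B: with v upward and B into the page,
# the force on a positive charge comes out pointing left.
def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

q = 1.0                  # positive charge (arbitrary units)
v = (0.0, 1.0, 0.0)      # velocity upwards (+y)
B = (0.0, 0.0, -1.0)     # candidate field: into the page (-z)
F = tuple(q * c for c in cross(v, B))
# F points along -x, i.e. to the left, matching the deflection in the problem
```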
{ "domain": "physics.stackexchange", "id": 29601, "tags": "homework-and-exercises, electromagnetism, accelerator-physics, particle-accelerators" }
Does Rviz have collision detection like gazebo?
Question: I am wondering if rviz has a collision detection system similar to gazebo implemented. I see that there is an option to enable collision, but I have not found and information online about what it does exactly. Ubuntu 14.04 LTS ROS: Indigo Originally posted by justinkgoh on ROS Answers with karma: 25 on 2016-06-21 Post score: 2 Answer: rviz only renders data, so it does not have collision detection. The option you're seeing is to visualize the collision geometry, which can be different from the visual geometry. A common example is to have a high polygon and detailed mesh for visualization of your robot, but for collision detection you might just use a box or a sphere so it is faster. rviz can render these shapes, but does not do any collision detection with them. Originally posted by William with karma: 17335 on 2016-06-21 This answer was ACCEPTED on the original site Post score: 4 Original comments Comment by justinkgoh on 2016-06-30: Thanks for the info.
{ "domain": "robotics.stackexchange", "id": 25016, "tags": "ros, gazebo, rviz, collision" }
Why are lithium and beryllium such good conductors but not chlorine?
Question: Why are lithium and beryllium so conductive? The $2s$ band has a much different energy range from the $2p$ band, so I guess the only explanation is that $N$ states are empty. But if that were the case, why isn't chlorine also an amazing conductor, since chlorine has $N$ empty states as well in the valence band?
{ "domain": "physics.stackexchange", "id": 99035, "tags": "solid-state-physics, atomic-physics, conductors, orbitals, elements" }
Checking if characters in a string can be rearranged to make a palindrome
Question: Can I please have some advice on optimizing, cleaning up my code, and places where I could save space/time? #include <stdio.h> #include <stdlib.h> #include <string.h> #include <stdbool.h> bool pal_perm(char*); int main() { printf("The output is %sa palindrome.\n", pal_perm("abbas")? "": "not "); //Output: The output is a palindrome. printf("The output is %sa palindrome.\n", pal_perm("deeds")? "": "not "); //Output: The output is a palindrome. printf("The output is %sa palindrome.\n", pal_perm("dead")? "": "not "); //Output: The output is not a palindrome. return 0; } bool pal_perm(char* str) { char alpha[256]; int oddCount =0; int size = strlen(str); memset(alpha, 0, sizeof(alpha)); //see how many occurances of each letter for(char ch = 'a'; ch <= 'z'; ch++) { for(int i=0; i < size; i++) { if(str[i] == ch) alpha[str[i]]++; } } //count the number of times a letter only appears once for(int j=0; j<256; j++) { if(alpha[j] == 1 || (alpha[j]%2==1)) oddCount++; } //if there is more than one letter that only occurs, then it //cannot be a palindrome. if(oddCount <= 1) return true; else return false; } Answer: Strange output What is a user to think when seeing such output of a program? The output is a palindrome. The output is not a palindrome. I wouldn't know what this program is trying to tell me. Consider this alternative: void print_result(char * s) { printf("The characters of \"%s\" %s be rearranged into a palindrome.\n", s, pal_perm(s) ? "can" : "cannot"); } int main() { print_result("abbas"); print_result("deeds"); print_result("dead"); } Output: The characters of "abbas" can be rearranged into a palindrome. The characters of "deeds" can be rearranged into a palindrome. The characters of "dead" cannot be rearranged into a palindrome. Though actually I would prefer something much simpler than that: printf("\"%s\" -> %s\n", s, pal_perm(s) ? 
"true" : "false"); Producing output: "abbas" -> true "deeds" -> true "dead" -> false Usability It would be more interesting if the program took the strings from the command line, instead of using hardcoded values, for example: int main(int argc, char ** argv) { for (int i = 1; i < argc; i++) { print_result(argv[i]); } } For the record, @Law29 suggested another alternative in a comment: You can also read from standard input. This lets you either type in words as they come to mind, or use a whole file (there are files of dictionary words, for example). Example: #define MAX_WORD_SIZE 50 int main(int argc, char ** argv) { char buf[MAX_WORD_SIZE] while (fgets (buf, MAX_WORD_SIZE, stdin)) { print_result(buf); } } Testing Getting the implementation right can be tricky. You revised your post 3-4 times to fix bugs pointed out in comments. It's good to automate your tests so that they can be repeated easily, for example by adding methods like these: void check(char * s, bool expected) { if (pal_perm(s) != expected) { printf("expected \"%s\" -> %s but got %s\n", s, expected ? "true" : "false", expected ? 
"false" : "true"); exit(1); } } void run_tests() { check("a", true); check("aa", true); check("aba", true); check("abba", true); check("aabb", true); check("aabbs", true); check("deeds", true); check("ab", false); check("abc", false); check("dead", false); } Use boolean expressions directly Instead of this: if(oddCount <= 1) return true; else return false; You can simply return the boolean expression itself: return oddCount <= 1; Excessive looping As @DarthGizka explained, instead of this: for(char ch = 'a'; ch <= 'z'; ch++) { for(int i=0; i < size; i++) { if(str[i] == ch) alpha[str[i]]++; } } This is identical, but without unnecessary looping: for(int i=0; i < size; i++) { alpha[str[i]]++; } Unnecessary conditions The first condition is unnecessary: if(alpha[j] == 1 || (alpha[j]%2==1)) This is exactly the same: if(alpha[j]%2==1) Too compact writing style Instead of this: if(alpha[j]%2==1) I suggest putting spaces around operators, and before ( in if statements: if (alpha[j] % 2 == 1) Stop iterating when you already know the result Once you find two characters with an odd number of occurrences, you can stop iterating and return false. As such, you don't even need an int oddCount, but a bool seenOdd. So instead of this: int oddCount = 0; //count the number of times a letter only appears once for (int j = 0; j < 256; j++) { if (alpha[j] % 2 == 1) oddCount++; } //if there is more than one letter that only occurs, then it //cannot be a palindrome. return oddCount <= 1; You could write: bool seenOdd = false; // scan for odd number of occurrences, stop after seeing two for (int j = 0; j < 256; j++) { if (alpha[j] % 2 == 1) { if (seenOdd) return false; seenOdd = true; } } // fewer than 2 letters with an odd number of occurrences, must be true return true;
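For comparison, the odd-count criterion that the whole review revolves around (a string can be rearranged into a palindrome iff at most one character occurs an odd number of times) can be stated very compactly in another language. A Python sketch, not part of the C code under review:

```python
from collections import Counter

def pal_perm(s):
    """True iff the characters of s can be rearranged into a palindrome,
    i.e. at most one character has an odd count."""
    odd = sum(count % 2 for count in Counter(s).values())
    return odd <= 1
```

The early-exit idea from the last section maps onto this version too: stop summing as soon as two odd counts have been seen.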
{ "domain": "codereview.stackexchange", "id": 30451, "tags": "beginner, c, strings" }
Structure of function that describes sine signal
Question: I need to create a vector containing a sine signal, so I'm trying to figure out the structure of a function that describes a sine signal. For example, does the function $\sin 2x$ meet the requirements? If the answer is no, then what is the reason? Answer: I think you posted a similar question 3 days ago, regarding your teacher claiming that $\sin 2x$ is not a sinusoidal function. Nevertheless this function is definitely sinusoidal. Otherwise how come we can say that signals can be decomposed into sums of sinusoids (and cosinusoids)? You have plenty of orthogonal waves: $\sin x, \ \sin 2x, \ \sin 3x, \ldots$, and all of them are sinusoids. The only thing that comes to my mind is this: A discrete-time sinusoid is periodic only if its fundamental frequency $f_0$ is a rational number For a sinusoid with frequency $f_0$ to be periodic, we should have: $$ \sin[2\pi f_0 (N+n)+\theta] = \sin[2\pi f_0 n + \theta]$$ This relation is true if and only if there exists an integer $k$ such that: $$2\pi f_0N=2\pi k $$ or, equivalently: $$f_0 = \dfrac{k}{N} $$ To determine the fundamental frequency of a periodic sinusoid, we express its frequency as above and cancel common factors so that $k$ and $N$ have no common divisors. Then the fundamental period of the sinusoid is equal to $N$. So for example: $f_0 = \dfrac{31}{60}$ implies that $N=60$ $f_0 = \dfrac{30}{60}$ implies that $N=2$ Thus if you define your discrete sinusoid to be: $\sin[2 f_0 t]$, or $\sin[\sqrt{2} f_0 t]$ then these are no longer periodic in the digital domain. Why? Think of it in the following way: each sample at the start of a new period will be shifted slightly, since it's not a rational multiple of $\pi$. Below is a plot of two signals: $\sin [2 \pi t]$ $\sin [2 \pi \sqrt{2} t]$ Sampling frequency is $10 \ \mathtt{Hz}$ and upper time limit is $8 \ \mathtt{s} $ You can see the first sinusoid is periodic: every 10 samples (1 second) you get a repeating pattern. 
The second one, by contrast, whose fundamental frequency cannot be represented as a rational number ($\sqrt{2}$ cannot be written as a ratio of integers), never repeats exactly; thus the signal has no period. Check the figure below for an overlay of 11 periods:
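The rationality criterion from the answer can be checked numerically: $\sin[2\pi f_0 n]$ is periodic with period $N$ exactly when $f_0 N$ is an integer. A small sketch, where the search bound and tolerance are arbitrary choices for illustration:

```python
import math

def fundamental_period(f0, max_N=1000, tol=1e-9):
    """Return the smallest N for which f0*N is (numerically) an integer,
    i.e. the fundamental period of sin(2*pi*f0*n); None if no N <= max_N works."""
    for N in range(1, max_N + 1):
        if abs(f0 * N - round(f0 * N)) < tol:
            return N
    return None
```

This reproduces the answer's examples: $f_0 = 31/60$ gives $N = 60$, $f_0 = 30/60$ gives $N = 2$, and an irrational $f_0$ such as $\sqrt{2}$ never yields a period.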
{ "domain": "dsp.stackexchange", "id": 1902, "tags": "wave" }
What is normal force and when does it act?
Question: What are contact forces? According to https://www.physicsclassroom.com/class/newtlaws/Lesson-2/Types-of-Forces, there are 6 types of contact forces. I am having doubts about applied force and normal force, because both act when there is contact between two bodies. When we push a book kept on a table, we take the normal force from the table surface, but what about the contact between my hand and the book, which we call an applied force? When we push a wall horizontally, we say its reaction force is a normal force, but how? When we kick a ball, we say we applied a force, but where is the normal force between my foot and the ball surface that came into contact during the kick? When we hit a ball with a bat, we say we apply a force with the bat on the ball, but what about the normal force between the bat and ball surfaces? When we pull a string attached to a ceiling, we say we applied a force on the string, but what about the contact between my hand and the string surface; where is the normal force between them? I am very confused; please, somebody help me with this.
{ "domain": "physics.stackexchange", "id": 98080, "tags": "newtonian-mechanics, forces, terminology, definition" }
Elliptic orbits and why sun located at focal point acts like at the center of the ellipse?
Question: In the book "Classical Mechanics: Point Particles and Relativity" by Greiner, we calculate the force for motion on an ellipse as follows: we first parametrize the ellipse $$\vec r(t)=\langle a\cos(\omega t),\,b\sin(\omega t)\rangle$$ take the second derivative, and find $$\vec F=m\vec a(t)=-m\omega^2 \vec r(t)$$ which points toward the center of the ellipse. But then he continues: "The planets also move around the sun along elliptic orbits. The sun as the center of attraction located in one of the focal points of the ellipse..." with the formula $$\vec F_G=-\gamma \dfrac{mM}{r^2}\dfrac{\vec r}{r}$$ Question: If the force required to hold the particle in an elliptic orbit points toward the center, and the sun is at a focal point, then what is the extra force that makes the logic complete? Answer: That parametrization of the ellipse corresponds to a body held to a point by a linear elastic device. That is the meaning of $\vec F = -m\omega^2 \vec r(t)$. If $a$ or $b$ is zero, it is simple harmonic motion. But the gravitational force is proportional to $\frac{1}{r^2}$, and is not described by that parametrization.
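The claim can be checked numerically: differentiating the given parametrization twice yields an acceleration proportional to $-\vec r(t)$, directed at the ellipse's center rather than a focus. A sketch using a central second difference; the semi-axes and $\omega$ are arbitrary values chosen for illustration:

```python
import math

a, b, w = 3.0, 2.0, 1.3          # assumed semi-axes and angular rate

def r(t):
    """Ellipse parametrization r(t) = (a cos wt, b sin wt)."""
    return (a * math.cos(w * t), b * math.sin(w * t))

def accel(t, h=1e-3):
    # central second difference approximates the second derivative of r
    return tuple((p - 2.0 * c + n) / h**2
                 for p, c, n in zip(r(t - h), r(t), r(t + h)))

t = 0.7
ax, ay = accel(t)
rx, ry = r(t)
# ax ~ -w**2 * rx and ay ~ -w**2 * ry: the force points at the center,
# like a linear spring, not like an inverse-square pull from a focus
```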
{ "domain": "physics.stackexchange", "id": 71609, "tags": "newtonian-mechanics, gravity" }
Can you explain gyroscopic precession using only Newton's three linear laws without applying their angular cousins?
Question: Is there an intuitive approach to understand gyroscopic motion based on Newton's laws without passing through angular momentum conservation? Answer: "Intuitive" is a tricky word. Most people find gyroscopic effects unintuitive no matter what we do. And by far the most intuitive way to understand gyroscopic effects is through angular momentum conservation. That reduces these effects to a handful of straightforward equations. Fundamentally the motion of gyroscopes is based on momentum. You won't be able to make sense of them without it. Momentum can be viewed in two major ways: linear and angular. They're actually describing the same concept, but with different symmetries. You can try to understand a gyro using linear momentum, but because it isn't good at leveraging rotational symmetries, you will have a large number of integrals and sines and cosines involved. Maybe that qualifies as intuitive for you, but my guess is it does not. Gyros are not easy to understand in a linear sense. We teach them in a rotational world with angular momentum because they are far easier to understand that way.
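To illustrate how short the angular-momentum route is: for a fast top with a horizontal spin axis, $\tau = \mathrm{d}L/\mathrm{d}t$ gives a precession rate $\Omega = \tau/L = mgr/(I\omega)$. A sketch where every number is made up for illustration (SI units assumed):

```python
# Precession of a fast top from tau = dL/dt:
# gravity's torque rotates L at rate Omega = tau / L.
m, g, r = 0.5, 9.81, 0.05    # top's mass, gravity, pivot-to-CM distance
I, omega = 2e-4, 300.0       # spin moment of inertia and spin rate (rad/s)

tau = m * g * r              # gravitational torque about the pivot
L = I * omega                # spin angular momentum
Omega = tau / L              # precession angular rate (rad/s)
```

Deriving the same $\Omega$ from the linear Lorentz-free Newtonian picture would require integrating forces over every mass element of the wheel, which is the "large number of integrals and sines and cosines" the answer alludes to.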
{ "domain": "physics.stackexchange", "id": 67797, "tags": "newtonian-mechanics, rotational-dynamics, rigid-body-dynamics, gyroscopes, precession" }
n*log n and n/log n against polynomial running time
Question: I understand that $\Theta(n)$ is faster than $\Theta(n\log n)$ and slower than $\Theta(n/\log n)$. What is difficult for me to understand is how to actually compare $\Theta(n \log n)$ and $\Theta(n/\log n)$ with $\Theta(n^f)$ where $0 < f < 1$. For example, how do we decide $\Theta(n/\log n)$ vs. $\Theta(n^{2/3})$ or $\Theta(n^{1/3})$? I would like some direction on how to proceed in such cases. Thank you. Answer: If you just draw a couple of graphs, you'll be in good shape. Wolfram Alpha is a great resource for these kinds of investigations: Generated by this link. Note that in the graph, log(x) is the natural logarithm, which is the reason one graph's equation looks a little funny.
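Besides graphing, the comparison can be checked numerically: if $f$ grows asymptotically faster than $g$, the ratio $f(n)/g(n)$ keeps increasing without bound. A quick sketch for $n/\log n$ versus $n^{2/3}$ (the sample points $10^3$, $10^6$, $10^9$ are arbitrary):

```python
import math

def n_over_log(n):
    return n / math.log(n)

def n_two_thirds(n):
    return n ** (2.0 / 3.0)

# n/log n beats n^(2/3): the ratio grows as n grows
r1 = n_over_log(10**3) / n_two_thirds(10**3)
r2 = n_over_log(10**6) / n_two_thirds(10**6)
r3 = n_over_log(10**9) / n_two_thirds(10**9)
```

The same probe shows $n/\log n$ also outgrows $n^{1/3}$, and that $n\log n$ outgrows every $n^f$ with $f < 1$, since $n\log n / n^f = n^{1-f}\log n \to \infty$.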
{ "domain": "cs.stackexchange", "id": 472, "tags": "asymptotics, mathematical-analysis, landau-notation" }
SQL Query generator, round 2
Question: This is the second round of reviews. The first round can be found in this question. This is a project I have been working on. This is one of my first experiences with Python and OOP as a whole. I have written a GUI that handles the inputs for these classes, but I will ask for a separate review for that, since the question would be rather bulky when including both. The goal of this program is to create standard SQL (SQL server) queries for everyday use. The rationale behind this is that we regularly need similar queries, and would like to prevent common mistakes in them. The focus on this question is on the Python code however. The information about the tables and their relation to each-other is provided by a JSON file, of which I have attached a mock-up version. The code consists of three parts: A universe class which handles the JSON file and creates the context of the tables. A query class, which handles the specifications of which tables to include, which columns to take, how to join each table and optional where statements. A PyQT GUI that handles the inputs. This is excluded in this post and will be posted separately for another review. 
It can be found here on Github The JSON: { "graph": { "table1": { "tag": ["table1"], "DBHandle": ["tables.table1"], "Priority": [1], "Columns": ["a", "b", "c"], "Joins": { "table2": ["on table2.a = table1.a", "inner"], "table3": ["on table1.c = table3.c", "inner"] } }, "table2": { "tag": ["table2"], "DBHandle": ["tables.table2"], "Priority": [2], "Columns": ["a", "d", "e"], "Joins": { "table3": ["on table2.d=table3.d and table2.e = table3.e", "inner"] } }, "table3": { "tag": ["table3"], "DBHandle": ["tables.table3"], "Priority": [4], "Columns": ["c", "d", "e"], "Joins": [] } }, "presets": { "non empty b": { "table": ["table1"], "where": ["table1.b is not null"] } } } The reviewed Python code: # -*- coding: utf-8 -*- """ Created on Thu Aug 3 14:33:44 2017 @author: jdubbeldam """ from json import loads class Universe: """ The Universe is a context for the Query class. It contains the information of the available Database tables and their relation to eachother. This information is stored in a JSON file. """ def __init__(self, filename): """ Reads the JSON and separates the information in a presets dictionary and a graph dictionary. The latter contains the information of the nodes in the universe/graph, including relational information. """ with open(filename, encoding='utf-8') as file: self.json = loads(str(file.read())) self.presets = self.json['presets'] self.json = self.json['graph'] self.tables = self.json.keys() self.connections = self.get_edges() def get_edges(self): """ Creates a dictionary with for each node a list of nodes that join on that node. 
""" edges = {} for table in self.tables: edges[table] = [] try: edges[table] += [connected_tables for connected_tables in self.json[table]['Joins']] except AttributeError: pass for node in edges: for connected_node in edges[node]: if node not in edges[connected_node]: edges[connected_node].append(node) return edges def shortest_path(self, start, end, path_argument=None): """ Calculates the shortest path in a graph, using the dictionary created in getEgdes. Adapted from https://www.python.org/doc/essays/graphs/. """ if path_argument is None: old_path = [] else: old_path = path_argument path = old_path + [start] if start == end: return path if start not in self.connections: return None shortest = None for node in self.connections[start]: if node not in path: newpath = self.shortest_path(node, end, path) if newpath: if not shortest or len(newpath) < len(shortest): shortest = newpath return shortest def join_paths(self, nodes): """ Extension of shortest_path to work with multiple nodes to be connected. The nodes are sorted based on the priority, which is taken from the JSON. shortest_path is called on the first two nodes, then iteratively on each additional node and one of the existing nodes returned by shortest_path, selecting the one that takes the fewest steps. """ sorted_nodes = sorted([[self.json[node]['Priority'][0], node] for node in nodes]) paths = [] paths.append(self.shortest_path(sorted_nodes[0][1], sorted_nodes[1][1])) for next_node_index in range(len(sorted_nodes) - 2): shortest = None flat_paths = [item for sublist in paths for item in sublist] old_path = len(flat_paths) for connected_path in flat_paths: newpath = self.shortest_path(connected_path, sorted_nodes[next_node_index+2][1], flat_paths) if newpath: if not shortest or len(newpath[old_path:]) < len(shortest): shortest = newpath[old_path:] paths.append(shortest) return paths class Query: """ Query contains the functions that allow us to build an SQL query based on a universe object. 
It maintains lists with the names of activated tables and, if applicable, which of their columns in a dictionary. Implicit tables are tables that are called, only to bridge joins from one table to another. Since they are not explicitly called, we don't want their columns in the query. how_to_join is a dictionary that allows setting joins (left, right, inner, full) other than the defaults imported from the JSON. """ core = 'select\n\n{columns}\n\nfrom {joins}\n\n where {where}' def __init__(self, universum): self.graph = universum self.active_tables = [] self.active_columns = {} self.implicit_tables = [] self.join_strings = {} for i in self.graph.tables: self.join_strings[i] = self.graph.json[i]['Joins'] self.how_to_join = {} self.where = [] def add_tables(self, tablename): """ Sets given tablename to active. GUI ensures that only valid names will be given. """ if tablename not in self.active_tables: self.active_tables.append(tablename) self.active_columns[tablename] = [] def add_columns(self, table, column): """ Sets given columnname from table to active. GUI ensures that only valid names will be given. """ if column not in self.active_columns[table]: self.active_columns[table].append(column) def add_where(self, string): """ Adds any string to a list to be input as where statement. This could be vulnerable for SQL injection, but the scope of this project is in-house usage, and the generated SQL query isn't directly passed to the server. """ self.where.append(string) def find_joins(self): """ Calls the join_paths function from Universe class. Figures out which joins are needed and which tables need to be implicitly added. Returns a list of tuples with tablenames to be joined. 
""" tags = [self.graph.json[table]['tag'][0] for table in self.active_tables] join_paths = self.graph.join_paths(tags) join_sets = [(table1, table2) for join_edge in join_paths for table1, table2 in zip(join_edge[:-1], join_edge[1:])] for sublist in join_paths: for item in sublist: if item not in self.active_tables: self.add_tables(item) self.implicit_tables.append(item) return join_sets def generate_join_statement(self, table_tuple): """ Creates the join statement for a given tuple of tablenames. The second entry in the tuple is always the table that is joined. Since the string is stored in a dictionary with one specific combination of the two table names, the try statement checks which way around it needs to be. how contains the default way to join. Unless otherwise specified, this is used to generate the join string. """ added_table = table_tuple[1] try: on_string, how = self.graph.json[table_tuple[0]]['Joins'][table_tuple[1]] except TypeError: table_tuple = (table_tuple[1], table_tuple[0]) on_string, how = self.graph.json[table_tuple[0]]['Joins'][table_tuple[1]] if table_tuple not in self.how_to_join: self.how_to_join[table_tuple] = how join_string = (self.how_to_join[table_tuple] + ' join ' + self.graph.json[added_table]['DBHandle'][0] + ' ' + self.graph.json[added_table]['tag'][0] + '\n') return join_string + on_string def generate_select_statement(self, table): """ Creates the column specification. If no columns of an active table are specified, it assumes all the columns are wanted. """ if not self.active_columns[table]: self.active_columns[table] = ['*'] return ',\n'.join([(self.graph.json[table]['tag'][0] + '.' + i) for i in self.active_columns[table]]) def compile_query(self): """ Handles compilation of the query. If there are more than one activated table, joins need to be handled. First the required joins are found, then the strings that handle this are generated. The column statement is created. 
        If there is no where statement specified, '1=1' is added.
        The relevant statements are added into the core query and returned.
        """
        if len(self.active_tables) == 1:
            base_table = self.active_tables[0]
            join_statement = []
        else:
            joins = self.find_joins()
            base_table = joins[0][0]
            join_statement = [self.generate_join_statement(i) for i in joins]
        join_statement = ([self.graph.json[base_table]['DBHandle'][0] + ' '
                           + self.graph.json[base_table]['tag'][0]]
                          + join_statement)
        completed_join_statement = '\n\n'.join(join_statement)
        column_statement = [self.generate_select_statement(table)
                            for table in self.active_tables
                            if table not in self.implicit_tables]
        completed_column_statement = ',\n'.join(column_statement)
        if self.where:
            where_statement = '\nand '.join(self.where)
        else:
            where_statement = '1 = 1'
        query = Query.core.replace('{columns}', completed_column_statement)
        query = query.replace('{joins}', completed_join_statement)
        query = query.replace('{where}', where_statement)
        return query


if __name__ == "__main__":
    graph = Universe('example.JSON')
    query = Query(graph)
    query.add_tables('table1')
    query.add_tables('table2')
    query.add_tables('table3')
    print(query.compile_query())

Answer:
I have been refactoring this code myself as well in the meanwhile, so I thought I'd post some of the insights I have gained.

Class inheritance

Instead of passing a Universe instance when creating a Query, by making Query a subclass of Universe I was able to reduce the amount of information that was stored in both classes. This also makes accessing the attributes and methods of Universe in Query's methods shorter.

Query.join_strings does nothing

    self.join_strings = {}
    for i in self.graph.tables:
        self.join_strings[i] = self.graph.json[i]['Joins']

self.join_strings is defined, but used nowhere else. Also the use of i is bad (it was an oversight).

Indirectly still iterating over .keys()

    self.json = self.json['graph']
    self.tables = self.json.keys()

in Universe.__init__() stores the keys (tablenames).
This is only used to iterate later:

    edges = {}
    for table in self.tables:
        edges[table] = []
        try:
            edges[table] += [connected_tables for connected_tables in self.json[table]['Joins']]
        except AttributeError:
            pass

We might as well have iterated over self.json. However, for naming purposes, I prefer the following:

    self.tables = self.json['graph']

Since that improves the naming and removes the need to keep the json attribute around, we can turn json into a regular variable without the self.

Expand the add_* methods to also allow for removing of that item

This is mostly relevant with the GUI in mind. It contained a bit of a workaround to be able to remove tables and columns from the Query. So I added an argument to the add_* methods to be able to set them to remove instead.

    def add_tables(self, tablename, add_or_remove=True):
        """
        Toggles active setting of given tablename.
        GUI ensures that only valid names will be given.
        """
        if add_or_remove:
            if tablename not in self.active_tables:
                self.active_tables.append(tablename)
                self.active_columns[tablename] = []
        else:
            self.active_tables.remove(tablename)
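The subclassing refactor and the add/remove toggle described above can be sketched as follows. This is my own minimal illustration, not code from the original post: the stripped-down Universe here is hypothetical, keeping only a tables attribute so the example runs on its own.

```python
class Universe:
    """Stripped-down stand-in for the original Universe class (hypothetical)."""
    def __init__(self, tables):
        # In the real code this would be parsed from the JSON file.
        self.tables = tables


class Query(Universe):
    """Query as a subclass of Universe: no separate instance to pass around."""
    def __init__(self, tables):
        super().__init__(tables)
        self.active_tables = []
        self.active_columns = {}

    def add_tables(self, tablename, add_or_remove=True):
        """Toggles active setting of given tablename."""
        if add_or_remove:
            if tablename not in self.active_tables:
                self.active_tables.append(tablename)
                self.active_columns[tablename] = []
        else:
            self.active_tables.remove(tablename)
            del self.active_columns[tablename]


query = Query({'table1': {}, 'table2': {}})
query.add_tables('table1')
query.add_tables('table2')
query.add_tables('table2', add_or_remove=False)
print(query.active_tables)  # → ['table1']
```

Because Query now is a Universe, its methods can refer to self.tables directly instead of self.graph.tables, which is what shortens the attribute access in the refactored methods.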
{ "domain": "codereview.stackexchange", "id": 35440, "tags": "python, beginner, python-3.x, sql" }
Proving that the IDTFT is the inverse of the DTFT?
Question:
The DTFT is given by:

$$X(e^{j\omega}) = \sum_{n=-\infty}^{\infty}x[n]e^{-j\omega n}$$

The IDTFT is given by:

$$x[n]=\frac{1}{2\pi}\int_{0}^{2\pi}X(e^{j\omega})e^{j\omega n}d\omega$$

I have been able to show by substitution of the DTFT into the IDTFT that the transform and a subsequent inverse transform return $x[n]$:

$$\begin{align} x[n]&=\frac{1}{2\pi}\int_{0}^{2\pi}X(e^{j\omega})e^{j\omega n}d\omega\\ &=\frac{1}{2\pi}\int_{0}^{2\pi} \left( \sum_{k=-\infty}^{\infty}x[k]e^{-j\omega k} \right)e^{j\omega n}d\omega \end{align}$$

Swap the order of integration and summation:

$$x[n]=\frac{1}{2\pi}\sum_{k=-\infty}^{\infty}x[k]\int_{0}^{2\pi}e^{j\omega (n-k)}d\omega$$

By the orthogonality of the complex exponentials, the integral of $e^{j\omega (n-k)}$ over one full period vanishes unless $k=n$, i.e. $\int_{0}^{2\pi}e^{j\omega (n-k)}d\omega=2\pi\,\delta[n-k]$, so:

$$\begin{align} x[n]&=\frac{1}{2\pi}\sum_{k=-\infty}^{\infty}x[k]\,2\pi\,\delta[n-k]\\ &=\frac{2\pi}{2\pi}x[n]\\ &=x[n] \end{align}$$

However, I have been unable to show the dual case: that the inverse transform (IDTFT) substituted into the forward transform (DTFT) gives $X(e^{j\omega})$. How can we show this?

Answer:
$$\begin{align}X(e^{j\omega})&=\sum_{n=-\infty}^{\infty}x[n]e^{-jn\omega}\\&=\sum_{n=-\infty}^{\infty}\left[\frac{1}{2\pi}\int_{0}^{2\pi}X(e^{j\Omega})e^{jn\Omega}d\Omega\right]\;e^{-jn\omega}\\&=\int_{0}^{2\pi}X(e^{j\Omega})\left[\frac{1}{2\pi}\sum_{n=-\infty}^{\infty}e^{jn(\Omega-\omega)}\right]d\Omega\\&=\int_{0}^{2\pi}X(e^{j\Omega})\delta(\Omega-\omega)d\Omega\\&=X(e^{j\omega})\end{align}$$

where I've used the identity

$$\delta(\omega)=\frac{1}{2\pi}\sum_{n=-\infty}^{\infty}e^{jn\omega}$$
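The round trip can also be sanity-checked numerically for a finite-length signal. This is not a proof, just an illustration; the signal values and grid size below are arbitrary choices of mine, and the IDTFT integral is approximated by a Riemann sum over one period:

```python
import cmath
from math import pi

# Arbitrary short test signal: x[n] for n = 0..4, zero elsewhere.
x = [1.0, -2.0, 0.5, 3.0, 0.25]

def dtft(signal, w):
    """X(e^{jw}) = sum_n x[n] e^{-jwn} for a finite-length signal."""
    return sum(signal[n] * cmath.exp(-1j * w * n) for n in range(len(signal)))

def idtft(X_samples, w_samples, n):
    """x[n] = (1/2pi) * integral of X(e^{jw}) e^{jwn} over [0, 2pi),
    approximated by a Riemann sum on a uniform frequency grid."""
    dw = 2 * pi / len(w_samples)
    total = sum(Xk * cmath.exp(1j * wk * n) for Xk, wk in zip(X_samples, w_samples))
    return total * dw / (2 * pi)

M = 1024  # frequency samples over one period
w_grid = [2 * pi * k / M for k in range(M)]
X_grid = [dtft(x, wk) for wk in w_grid]
x_rec = [idtft(X_grid, w_grid, n) for n in range(len(x))]

# The reconstruction error is pure numerical round-off.
print(max(abs(xr - xo) for xr, xo in zip(x_rec, x)))
```

On a uniform grid the Riemann sum of $e^{j\omega(n-k)}$ reproduces the $2\pi\,\delta[n-k]$ behaviour exactly (up to floating point), as long as the signal length is much smaller than the number of frequency samples.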
{ "domain": "dsp.stackexchange", "id": 7244, "tags": "fourier-transform, dft, dtft" }
How do I use PCL from ROS Hydro?
Question:
I am trying to understand how PCL is integrated into ROS Hydro. I installed ROS in Ubuntu 12.04 using the ros-hydro-desktop-full package. From "rospack list" I can see that it comes with 4 PCL packages:

pcl
pcl_conversions
pcl_msgs
pcl_ros

What is the functionality of these 4 packages, especially pcl_ros and pcl?

There is also a pcl-1.7 folder in my /opt/ros/hydro/share folder with some cmake config files. There is no package.xml file though. What does this folder do?

Also, I seem to have 2 copies of the pcl-1.7 libraries. I have it in /usr/lib and also in /opt/ros/hydro/lib. So it seems like I have a standalone pcl library (I am not sure how I got this) and one that is integrated with ROS. Is this going to be a problem?

Finally, and this is the biggest source of my confusion, the wiki page for hydro/migration says:

pcl is no longer packaged by the ROS community as a catkin package, so any packages which directly depend on pcl should instead use the new rosdep rules libpcl-all and libpcl-all-dev and follow the PCL developer's guidelines for using PCL in your CMake.

So, why is there a pcl package in ROS Hydro in the first place with libraries in /opt/ros/hydro/lib? As you can see I am quite confused, any help will be greatly appreciated!

Originally posted by munnveed on ROS Answers with karma: 77 on 2013-09-11
Post score: 3

Answer:
The question was answered in the PCL Users mailing list.
http://www.pcl-users.org/How-do-I-use-PCL-from-ROS-Hydro-td4029613.html

Originally posted by munnveed with karma: 77 on 2013-09-12
This answer was ACCEPTED on the original site
Post score: 1

Original comments

Comment by aknirala on 2013-09-15:
If possible, please enumerate the exact steps needed for this. I was following the tutorial http://wiki.ros.org/pcl/Tutorials, but was not able to run it. While creating the package I needed to remove the pcl dependency, and even then I was not able to compile code using voxel_grid.
Comment by munnveed on 2013-09-15:
Can you specify exactly what error you are getting? And are you using ROS Hydro?

Comment by ndepalma on 2013-09-24:
I'm confirming that the tutorials should be updated.

Comment by aknirala on 2013-10-12:
Hi, I was able to run it, and pointed out the changes at http://answers.ros.org/question/90176/running-pcl-in-hydro/. Kindly let me know if any correction needs to be done.

Comment by Athoesen on 2013-11-03:
Did you happen to update the tutorial, or should I follow the changes in the link you just posted?

Comment by aknirala on 2013-11-03:
I updated the tutorial quite some time ago; you can find comments in the tutorial saying "for hydro users kindly use..." Let me know if things are fine.
{ "domain": "robotics.stackexchange", "id": 15500, "tags": "pcl, ros-hydro" }
Does home cooking induction stove produces any harmful (to humans) electrical/magnetic fields?
Question:
An induction cooker, or stove, is based on the principle of electromagnetic induction and is widely used for cooking food nowadays. My stove has a thick circular iron plate on top. Assuming it has the same configuration as any induction cooker, does it emit any strong electric or magnetic waves/radiation or similar things that could damage our bodies? Is there any possibility?

I know that only ionising radiation, such as X-rays or alpha, beta, and gamma rays, can penetrate skin and cause cell damage, and also cancer. But I have doubts about the induction stove, because it may produce strong induced currents/magnetic fields.

Answer:
The simple answer is that, so far, we haven't found any negative effects on health, and we've looked quite deeply.

In order to cause any damage from electromagnetic radiation, one of three things has to happen: either the radiation is of high enough frequency that it can ionize atoms, which leads to ionization damage, or you have to get electrocuted, or cooked. Let us discuss the last two conditions.

Human bodies are susceptible to electrocution only at low AC frequencies. When the frequency becomes high enough, no uncontrolled depolarization happens, so you don't get electrocuted. At high AC frequencies, the only way of suffering any damage would be through ohmic heating of tissue, i.e. the current heating you up. Both of these effects, though, require a high enough potential difference between two points of your body to happen, nowhere near what is radiated away by such induction heaters, or even the leakage fields from big fat transformers (though I wouldn't recommend touching the output of one). The electric fields are simply not that large in magnitude.
{ "domain": "physics.stackexchange", "id": 76766, "tags": "electromagnetic-radiation, magnetic-fields, electric-fields, estimation, biology" }