Why can't we use Google Translate for every translation task?
Question: Once a book is published in a language, why can't the publishers use Google Translate AI or some similar software to immediately render the book in other languages? Likewise for Wikipedia: I'm not sure I understand why we need editors for each language. Can't the English Wikipedia be automatically translated into other languages? Answer: Google has achieved significant progress in AI translation, but it's still nowhere near a qualified human translator. Natural-language translation is already very challenging, and adding domain knowledge to the equation is too much even for Google. I don't think we have the technology to reliably translate an arbitrary book from one language to another.
{ "domain": "ai.stackexchange", "id": 866, "tags": "neural-networks, machine-translation, google-translate" }
How was the core temperature of the Sun estimated?
Question: It is estimated that the temperature inside the core of the Sun is around 15 000 000 °C, an enormous value. How did scientists estimate this value? Answer: The composition can be determined by taking spectra. Additionally, the mass can be determined through dynamics. If you combine these two, under the assumption that the star is in a state of hydrostatic equilibrium (which means that the outward thermal pressure of the star due to fusion of hydrogen into helium is in balance with the inward tug of gravity), you can make statements about what the temperature and density must be in the core. You need high densities and high temperatures in order to fuse hydrogen into helium. Remember what's happening: temperatures are hot enough for hydrogen in the core to be completely ionized, meaning that in order to fuse these protons into helium nuclei, you need to overcome the electromagnetic repulsion as two protons come close (like charges repel). One particular type of fusion is the proton-proton chain reaction. The other fusion reaction which occurs at the cores of stars is the carbon-nitrogen-oxygen (CNO) cycle, which is the dominant source of energy for stars more massive than about 1.3 solar masses. Edit: Somebody pointed out that this doesn't actually answer the question at hand, which is true. Having forgotten how to do some of the basic back-of-the-envelope calculations myself (I admit, stellar astrophysics is definitely not my specialty), I've stumbled across a very crude and simple estimate of how to calculate the central pressure and temperature of the Sun. The calculation does, however, arrive at roughly the correct values and shows what one would need to know in order to get the details right.
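A minimal sketch of the kind of back-of-the-envelope estimate the answer alludes to: for a star in hydrostatic equilibrium modeled as an ideal gas of protons, the thermal energy per particle must be comparable to the gravitational energy per particle, giving $k_B T_c \sim G M m_p / R$. (The one-line argument and the constants below are my own, not from the original answer.)

```python
# Order-of-magnitude estimate of the Sun's central temperature from
# hydrostatic equilibrium: k_B * T_c ~ G * M * m_p / R.
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30   # solar mass, kg
R_sun = 6.957e8    # solar radius, m
m_p = 1.673e-27    # proton mass, kg
k_B = 1.381e-23    # Boltzmann constant, J/K

T_core = G * M_sun * m_p / (k_B * R_sun)
print(f"T_core ~ {T_core:.1e} K")  # ~2e7 K, the same order as the accepted 1.5e7 K
```

The crude estimate lands within a factor of two of the accepted value; a proper calculation integrates the equation of hydrostatic equilibrium with a realistic equation of state.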
{ "domain": "astronomy.stackexchange", "id": 3756, "tags": "the-sun, temperature, core" }
Extraction of nickel from its ores
Question: How is nickel mined and extracted from its ores? What are the appropriate word and chemical equations for this process? Answer: Nickel can be extracted directly from its ore by reduction with hydrogen or carbon monoxide at elevated temperatures of $600\ \mathrm{^\circ C}$ to $650\ \mathrm{^\circ C}$: $$\ce{NiO + H2 -> Ni + H2O}$$ $$\ce{NiO + CO -> Ni + CO2}$$ Also, it can be extracted by treatment with dilute sulfuric acid followed by precipitation: $$\ce{NiO + 2H+ -> Ni^2+ + H2O}$$
{ "domain": "chemistry.stackexchange", "id": 977, "tags": "metal, transition-metals, extraction" }
Makefile for a C++ project using Boost, Eigen, and htslib
Question: I had a Makefile but it doesn't meet the "industry-standard expectation". That was feedback from my client. The old Makefile was rejected. Thus, I'm making a new one. Here is the repository for my project. My old Makefile: # Boost C++ library BOOST = /usr/local/include/boost_1_64_0 # Linear-algebra library EIGEN = /usr/local/Cellar/eigen/3.2.8/include/eigen3 # HTSLIB library for BAM files HTSLIB = /Users/tedwong/Sources/QA/htslib # Where the header are INCLUDE = src EXEC = anaquin SOURCES = $(wildcard src/*.cpp src/tools/*.cpp src/analyzers/*.cpp src/RnaQuin/*.cpp src/VarQuin/*.cpp src/MetaQuin/*.cpp src/data/*.cpp src/parsers/*.cpp src/writers/*.cpp src/stats/*.cpp src/cufflinks/*.cpp) OBJECTS = $(SOURCES:.cpp=.o) OBJECTS_TEST = $(SOURCES_TEST:.cpp=.o) SOURCES_LIB = $(wildcard src/htslib/cram/*.c) OBJECTS_LIB = $(SOURCES_LIB:.c=.o) $(EXEC): $(OBJECTS) $(OBJECTS_TEST) $(OBJECTS_LIB) g++ $(OBJECTS) $(OBJECTS_TEST) $(OBJECTS_LIB) -DBACKWARD_HAS_BFD -g -lpthread -lz -lhts -L $(HTSLIB) -o $(EXEC) %.o: %.c gcc -g -c -DBACKWARD_HAS_BFD -I src/htslib -I $(INCLUDE) -I $(EIGEN) -I ${BOOST} $< -o $@ %.o: %.cpp g++ -g -DK_HACK -DBACKWARD_HAS_BFD -c -std=c++11 -I src/htslib -I src/stats -I $(INCLUDE) -I $(EIGEN) -I ${BOOST} $< -o $@ clean: rm -f $(EXEC) $(OBJECTS) $(OBJECTS_TEST) My new Makefile: # # Please modify only BOOST, EIGEN and HTSLIB. You should be able to leave all other options intact. C++ compiler with C++11 support is mandatory. 
# # Boost C++ library BOOST = /usr/local/include/boost_1_64_0 # Linear-algebra library EIGEN = /usr/local/Cellar/eigen/3.2.8/include/eigen3 # HTSLIB library for reading BAM files HTSLIB = /Users/tedwong/Sources/QA/htslib CC = g++ CFLAGS = -g -O2 CPPFLAGS = -c -std=c++11 DFLAGS = #DFLAGS = -DBACKWARD_HAS_BFD # https://github.com/bombela/backward-cpp LIBS = -lpthread -lz -lhts # Where the headers are (no need to modify this) INCLUDE = src EXEC = anaquin SOURCES = $(wildcard src/*.cpp src/tools/*.cpp src/analyzers/*.cpp src/RnaQuin/*.cpp src/VarQuin/*.cpp src/MetaQuin/*.cpp src/data/*.cpp src/parsers/*.cpp src/writers/*.cpp src/stats/*.cpp src/cufflinks/*.cpp) OBJECTS = $(SOURCES:.cpp=.o) OBJECTS_TEST = $(SOURCES_TEST:.cpp=.o) SOURCES_LIB = $(wildcard src/htslib/cram/*.c) OBJECTS_LIB = $(SOURCES_LIB:.c=.o) $(EXEC): $(OBJECTS) $(OBJECTS_TEST) $(OBJECTS_LIB) $(CC) $(OBJECTS) $(OBJECTS_TEST) $(OBJECTS_LIB) $(CFLAGS) $(DFLAGS) $(LIBS) -L $(HTSLIB) -o $(EXEC) %.o: %.c $(CC) $(CFLAGS) -c $(DFLAGS) -I $(INCLUDE) -I $(EIGEN) -I ${BOOST} $< -o $@ %.o: %.cpp $(CC) $(CFLAGS) $(DFLAGS) $(CPPFLAGS) -I $(HTSLIB) -I src/stats -I $(INCLUDE) -I $(EIGEN) -I ${BOOST} $< -o $@ clean: rm -f $(EXEC) $(OBJECTS) $(OBJECTS_TEST) Answer: A couple of things here don't match standard conventions. CXX is usually the C++ compiler; CC is usually the C compiler. CXXFLAGS are the flags applied to the C++ compiler. CPPFLAGS are the flags applied for the pre-processor; these are usually passed to both the C and C++ compilers. CFLAGS are the flags passed to the C compiler. LDFLAGS are the flags passed to the linker. Why are you building the tests into your main executable? The test application should be generated as a separate application that includes the normal objects (apart from main) and the test objects. Why do you have libs outside your lib variable? $(LIBS) -L $(HTSLIB) You should have a line earlier in your makefile like: # This adds more libs to the current set of libs.
LIBS += -L $(HTSLIB) I am confused why you are including -I ${BOOST} for the C compiler but not the C++ compiler. I did not think there was anything in Boost that could be used by C (I could be wrong, but it seems strange). The default rules for C/C++/linking are: $(CC) $(CPPFLAGS) $(CFLAGS) -c $(CXX) $(CPPFLAGS) $(CXXFLAGS) -c $(CC) $(LDFLAGS) n.o $(LOADLIBES) $(LDLIBS) Unless you need to change these, I would not. This means adding values to the existing variables. Something like this (untested): # Additional Packages. SRC_PACKAGES = BOOST EIGEN LIB_PACKAGES = HTSLIB # Boost C++ library ROOT_BOOST = /usr/local/include/boost_1_64_0 # Linear-algebra library ROOT_EIGEN = /usr/local/Cellar/eigen/3.2.8/include/eigen3 # HTSLIB library for reading BAM files ROOT_HTSLIB = /Users/tedwong/Sources/QA/htslib # # Turn on BFD by compiling with # make HASBFD=1 DFLAGS_1 = -DBACKWARD_HAS_BFD # https://github.com/bombela/backward-cpp DFLAGS = $(DFLAGS_$(HASBFD)) EXTRA_INCLUDE_DIR = $(foreach loop, $(SRC_PACKAGES) $(LIB_PACKAGES), -I$(ROOT_$(loop))) EXTRA_LIB_DIR = $(foreach loop, $(LIB_PACKAGES), -L$(ROOT_$(loop))) # Compiler CXX = g++ CC = $(CXX) CPPFLAGS += -g -O2 $(EXTRA_INCLUDE_DIR) $(DFLAGS) -Isrc -Isrc/stats CFLAGS += CXXFLAGS += -std=c++11 # Linker LDFLAGS += $(EXTRA_LIB_DIR) LDLIBS += -lpthread -lz -lhts # Application SRC_DIR = src src/tools src/analyzers src/RnaQuin src/VarQuin src/MetaQuin src/data src/parsers src/writers src/stats src/cufflinks EXEC = anaquin SOURCES = $(wildcard $(foreach loop, $(SRC_DIR), $(loop)/*.cpp)) OBJECTS = $(patsubst %.cpp, %.o, $(SOURCES)) $(EXEC): $(OBJECTS) .PHONY: clean clean: $(RM) $(EXEC) $(OBJECTS) $(OBJECTS_TEST)
{ "domain": "codereview.stackexchange", "id": 27986, "tags": "c++, comparative-review, makefile" }
Why is electric flux due to a point charge placed at the face of a hemisphere, cone and cube the same?
Question: In all these 3 cases, the net electric flux is found to be $q/2ε_0$. I think this has something to do with the integral of $\left(\mathbf{E}\boldsymbol{\cdot}\mathrm d\mathbf{S}\right)$ which appears on the left side of Gauss's law. We are somehow exploiting the symmetry to arrive at the result where the $\mathbf{E}$ (or electric field) in the formula comes out of the integral. But I don't know how. Secondly, does the electric flux remain the same when we place the point charge on the face of an unsymmetrical 3D object instead of these 3 objects (cube, cone and hemisphere)? If anyone would be kind enough to help me out in the simplest way possible, that would make my understanding clear! Answer: By Gauss's law, we have $\displaystyle \int \vec{E} \cdot \mathrm{d} \vec{S}=\dfrac{Q_{\text{enc}}}{\epsilon_0}$. If, for example, a charge is kept on the face of a hemisphere, the charge is not enclosed. We want to enclose it fully in a body that preserves the symmetry so that Gauss's law can be applied. So, we complete the other half by adding another hemisphere; now the flux through the whole sphere is $\dfrac{Q_\text{enc}}{\epsilon_0}$, and the flux through the hemisphere is just half of it, by symmetry. The same goes for the cone and cube.
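The symmetry argument generalizes via solid angles (a sketch added here, not part of the original answer): the flux through any surface is proportional to the solid angle $\Omega$ it subtends at the charge.

```latex
\Phi = \frac{q}{\varepsilon_0}\,\frac{\Omega}{4\pi},
\qquad
\Omega_{\text{flat face}} = 2\pi
\quad\Longrightarrow\quad
\Phi = \frac{q}{2\varepsilon_0}
```

This also addresses the second part of the question: any closed body whose surface is locally flat where the charge sits subtends a solid angle of $2\pi$ at the charge, so the flux is $q/2\varepsilon_0$ even for an unsymmetrical object.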
{ "domain": "physics.stackexchange", "id": 75422, "tags": "electricity, electric-fields, symmetry, gauss-law" }
Using the SSIM method for large images
Question: I'm trying to implement the SSIM method. This method has already been implemented in Python (source code), but my goal is to implement it using only Python and NumPy. My goal is also to use this method on big images (1024x1024 and above). But filter2 runs very slowly (approx. 62 s for a 1024x1024 image). cProfile tells me that _methods.py:16(_sum), fromnumeric.py:1422(sum), and method 'reduce' of 'numpy.ufunc' objects consume most of the run time. import numpy from numpy.lib.stride_tricks import as_strided def filter2(window, x): range1 = x.shape[0] - window.shape[0] + 1 range2 = x.shape[1] - window.shape[1] + 1 res = numpy.zeros((range1, range2), dtype=numpy.double) x1 = as_strided(x,((x.shape[0] - 10)/1 ,(x.shape[1] - 10)/1 ,11,11), (x.strides[0]*1,x.strides[1]*1,x.strides[0],x.strides[1])) * window for i in xrange(range1): for j in xrange(range2): res[i,j] = x1[i,j].sum() return res def ssim(img1, img2): window = numpy.array([\ [0.0000, 0.0000, 0.0000, 0.0001, 0.0002, 0.0003, 0.0002, 0.0001, 0.0000, 0.0000, 0.0000],\ [0.0000, 0.0001, 0.0003, 0.0008, 0.0016, 0.0020, 0.0016, 0.0008, 0.0003, 0.0001, 0.0000],\ [0.0000, 0.0003, 0.0013, 0.0039, 0.0077, 0.0096, 0.0077, 0.0039, 0.0013, 0.0003, 0.0000],\ [0.0001, 0.0008, 0.0039, 0.0120, 0.0233, 0.0291, 0.0233, 0.0120, 0.0039, 0.0008, 0.0001],\ [0.0002, 0.0016, 0.0077, 0.0233, 0.0454, 0.0567, 0.0454, 0.0233, 0.0077, 0.0016, 0.0002],\ [0.0003, 0.0020, 0.0096, 0.0291, 0.0567, 0.0708, 0.0567, 0.0291, 0.0096, 0.0020, 0.0003],\ [0.0002, 0.0016, 0.0077, 0.0233, 0.0454, 0.0567, 0.0454, 0.0233, 0.0077, 0.0016, 0.0002],\ [0.0001, 0.0008, 0.0039, 0.0120, 0.0233, 0.0291, 0.0233, 0.0120, 0.0039, 0.0008, 0.0001],\ [0.0000, 0.0003, 0.0013, 0.0039, 0.0077, 0.0096, 0.0077, 0.0039, 0.0013, 0.0003, 0.0000],\ [0.0000, 0.0001, 0.0003, 0.0008, 0.0016, 0.0020, 0.0016, 0.0008, 0.0003, 0.0001, 0.0000],\ [0.0000, 0.0000, 0.0000, 0.0001, 0.0002, 0.0003, 0.0002, 0.0001, 0.0000, 0.0000, 0.0000]\ ], dtype=numpy.double) K = [0.01, 0.03] L = 65535 C1 = (K[0] * L) ** 2 C2 = (K[1] * L) ** 
2 mu1 = filter2(window, img1) mu2 = filter2(window, img2) mu1_sq = numpy.multiply(mu1, mu1) mu2_sq = numpy.multiply(mu2, mu2) mu1_mu2 = numpy.multiply(mu1, mu2) sigma1_sq = filter2(window, numpy.multiply(img1, img1)) - mu1_sq sigma2_sq = filter2(window, numpy.multiply(img2, img2)) - mu2_sq sigma12 = filter2(window, numpy.multiply(img1, img2)) - mu1_mu2 ssim_map = numpy.divide(numpy.multiply((2*mu1_mu2 + C1), (2*sigma12 + C2)), numpy.multiply((mu1_sq + mu2_sq + C1),(sigma1_sq + sigma2_sq + C2))) return numpy.mean(ssim_map) def calc_ssim(): img1 = numpy.array(numpy.zeros((1024,1024)),dtype=numpy.double) img2 = numpy.array(numpy.zeros((1024,1024)),dtype=numpy.double) return ssim(img1, img2) Answer: In your strided filter2, x1 is (1014, 1014, 11, 11). You are iterating over the first two dimensions in order to sum over the last two. Let sum do all the work for you: res = x1.sum((2,3)) def filter2(window, x): range1 = x.shape[0] - window.shape[0] + 1 range2 = x.shape[1] - window.shape[1] + 1 x1 = as_strided(x,((x.shape[0] - 10)/1 ,(x.shape[1] - 10)/1 ,11,11), (x.strides[0]*1,x.strides[1]*1,x.strides[0],x.strides[1])) * window res = x1.sum((2,3)) return res In my tests this gives a 6x speed improvement. With NumPy, explicit iteration, especially nested loops over large dimensions like 1014, is a speed killer. You want to vectorize this kind of thing as much as possible. Traditionally Matlab had the same speed problems, but newer versions recognize and compile loops like yours. That's why your NumPy version is so much slower.
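On current NumPy (1.20 or later) the same valid-mode windowed correlation can be written without hand-rolled as_strided arithmetic, using sliding_window_view plus einsum. A sketch (the function body is mine, not from the thread):

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def filter2(window, x):
    # View x as an array of overlapping window-sized patches (no copy),
    # shape (H - wh + 1, W - ww + 1, wh, ww).
    patches = sliding_window_view(x, window.shape)
    # Weight each patch by the window and sum over the two patch axes.
    return np.einsum('ijkl,kl->ij', patches, window)

x = np.arange(16.0).reshape(4, 4)
w = np.ones((2, 2))
res = filter2(w, x)  # shape (3, 3); res[0, 0] sums x[0:2, 0:2] -> 10.0
```

Because sliding_window_view returns a view, the only large temporary is the einsum output itself, and the window size no longer needs to be hard-coded as 11.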
{ "domain": "codereview.stackexchange", "id": 4598, "tags": "python, numpy" }
Timetable app for myself
Question: My aim was to build a small, simple program that would show me my school timetable for the present day as well as the following day. I am new to PyQt(5) and programming in general. I started programming about 5 months ago on and off just as a fun side hobby. I started with basic text-based programs which gradually got more complex and this is my first (useful) GUI program. Any thoughts/suggestions? import datetime import sys from PyQt5.QtCore import Qt from PyQt5.QtWidgets import (QHBoxLayout, QApplication, QWidget, QLabel) from PyQt5.QtGui import (QIcon, QPixmap) #My weekly school timetable Mon = '\n1 Spanish\n2 Spanish\n3 Music\n4 Music\n5 English Literature\n6 English Literature\n7 History\n8 History' Tue = '\n1 Social Studies\n2 Social Studies\n3 Spanish\n4 Spanish\n5 Information Technology\n6 Information Technology\n7 English Literature\n8 English Language' Wed = '\n1 English Literature\n2 English Literature\n3 Information Technology\n4 Information Technology\n5 HSB\n6 HSB\n7 Mathematics\n8 Mathematics' Thu = '\n1 History\n2 History\n3 English Language\n4 English Language\n5 HSB\n6 HSB\n7 Mathematics\n8 Mathematics' Fri = '\n1 Music\n2 Music\n3 English Language\n4 English Language\n5 Mathematics\n6 Mathematics\n7 Social Studies\n8 Social Studies' #Returns day in terms of Monday = 0, Tuesday = 1... 
day = datetime.datetime.today().weekday() if day==0: the_day = Mon tomorrow = Tue if day==1: the_day = Tue tomorrow = Wed if day==2: the_day = Wed tomorrow = Thu if day==3: the_day = Thu tomorrow = Fri if day==4: the_day = Fri tomorrow = '' elif day==5 or day==6: the_day = '' tomorrow = Mon if day in range(5): today = '<b>Today\'s Timetable:<\b>' next_day = '<b>Tomorrow\'s Timetable:<\b>' else: today = '' next_day = '<b>Monday\'s Timetable:<\b>' class timetable(QWidget): def __init__(self): super().__init__() self.initUI() def initUI(self): if day==4: lbl = QLabel(self) pixmap = QPixmap('weekend.jpg') smaller_pixmap = pixmap.scaled(160, 300, Qt.KeepAspectRatio, Qt.FastTransformation) lbl.setPixmap(smaller_pixmap) lbl.move(-3, 162) lbl.show() elif day==5 or day==6: lbl = QLabel(self) pixmap = QPixmap('weekend.jpg') smaller_pixmap = pixmap.scaled(160, 300, Qt.KeepAspectRatio, Qt.FastTransformation) lbl.setPixmap(smaller_pixmap) lbl.move(-3, 0) lbl.show() title0 = QLabel(today, self) title0.move(5,5) schedule0 = QLabel(the_day, self) schedule0.move(5, 16) title1 = QLabel(next_day, self) title1.move(5,150) schedule1 = QLabel(tomorrow, self) schedule1.move(5, 160) self.setGeometry(7, 30, 150, 287) self.setFixedSize(self.size()) self.setWindowTitle('Timetable') self.setWindowIcon(QIcon('icon.png')) self.show() if __name__ == '__main__': app = QApplication(sys.argv) ex = timetable() ex.show() sys.exit(app.exec_()) #created with PyQt5 using Python 3.5 Answer: I would change your setup code to take a list of lists, where the inner lists are lists of subjects on that day and the outer list collects all these lists: schedule = [['Spanish', 'Spanish', 'Music', 'Music', 'English Literature', 'English Literature', 'History', 'History'], ...] 
def format_day(day_schedule): return "\n" + "\n".join("{} {}".format(i, subject) for i, subject in enumerate(day_schedule, 1)) today = datetime.datetime.today().weekday() try: the_day = format_day(schedule[today]) except IndexError: the_day = '' try: tomorrow = format_day(schedule[today + 1]) except IndexError: tomorrow = format_day(schedule[0]) This could actually be moved into the class, which could just take the schedule and day as parameters (which would make it a lot easier to run this, e.g. for two different schedules): import datetime import sys from PyQt5.QtCore import Qt from PyQt5.QtWidgets import (QHBoxLayout, QApplication, QWidget, QLabel) from PyQt5.QtGui import (QIcon, QPixmap) def format_day(day_schedule): return "\n" + "\n".join("{} {}".format(i, subject) for i, subject in enumerate(day_schedule, 1)) class Timetable(QWidget): def __init__(self, schedule, day): super().__init__() self.schedule = schedule self.day = day self.init_UI() def init_UI(self): try: today = format_day(self.schedule[self.day]) except IndexError: today = '' try: tomorrow = format_day(self.schedule[self.day + 1]) except IndexError: tomorrow = format_day(self.schedule[0]) if self.day < 5: today_header = '<b>Today\'s Timetable:</b>' tomorrow_header = '<b>Tomorrow\'s Timetable:</b>' else: today_header = '' tomorrow_header = '<b>Monday\'s Timetable:</b>' if self.day in (4, 5, 6): lbl = QLabel(self) pixmap = QPixmap('weekend.jpg') smaller_pixmap = pixmap.scaled(160, 300, Qt.KeepAspectRatio, Qt.FastTransformation) lbl.setPixmap(smaller_pixmap) if self.day == 4: lbl.move(-3, 162) else: lbl.move(-3, 0) lbl.show() title0 = QLabel(today_header, self) title0.move(5, 5) schedule0 = QLabel(today, self) schedule0.move(5, 16) title1 = QLabel(tomorrow_header, self) title1.move(5, 150) schedule1 = QLabel(tomorrow, self) schedule1.move(5, 160) self.setGeometry(7, 30, 150, 287) self.setFixedSize(self.size()) self.setWindowTitle('Timetable') self.setWindowIcon(QIcon('icon.png')) self.show() if __name__ 
== '__main__': app = QApplication(sys.argv) schedule = [['Spanish', 'Spanish', 'Music', 'Music', 'English Literature', 'English Literature', 'History', 'History'], ...] today = datetime.datetime.today().weekday() ex = Timetable(schedule, today) ex.show() sys.exit(app.exec_()) #created with PyQt5 using Python 3.5 Note that I also renamed some of your variables (there is now a today and a today_header, and a tomorrow and a tomorrow_header).
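As a quick, Qt-free sanity check of the format_day helper suggested above (the example subjects are arbitrary):

```python
def format_day(day_schedule):
    # Number the subjects starting at 1, one per line, matching the
    # original '\n1 Spanish\n2 Spanish...' string layout.
    return "\n" + "\n".join("{} {}".format(i, subject)
                            for i, subject in enumerate(day_schedule, 1))

print(format_day(['Spanish', 'Music']))  # blank line, then "1 Spanish", "2 Music"
```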
{ "domain": "codereview.stackexchange", "id": 25396, "tags": "python, beginner, python-3.x, pyqt" }
Spherical Conducting Shells, Potential
Question: When we have a spherical conducting shell and charge on the outer surface of the shell, the potential inside remains constant, i.e. kQ/R (R = radius). But say the inner surface of the shell is charged rather than the outer: does the potential still remain constant? Will the electric field still be 0 inside, or will the charges move towards the outer surface? What would be the result? A situation similar to the one in the image below: Answer: If the grey and blue parts of your diagram are the conductors and the magnitudes of the two sets of charges are the same, then you have drawn a correct diagram, with an electric field present only in the region between the two sets of charges. The field in that region is the same as if there were a $-Q$ charge at the centre of the arrangement. In all other regions, including outside the outer sphere, the electric field will be zero and there will be no charge resident on the outside of the outer sphere. Application of Gauss's law will show that this is so.
{ "domain": "physics.stackexchange", "id": 32176, "tags": "electrostatics, electric-fields, potential, capacitance" }
Does a correct application of the Lorentz force to find induced emf need resistance?
Question: The emf between two points is defined as the work done per unit charge on the charge when moved (along a given and stated path) between those two points. To find the emf due to a magnetic field, the usual integral is (considering a closed coil): $$\text{emf}=\int{ \vec v \times \vec B \cdot d \vec r}$$ But should it not be: $$\text{emf}=\int{ (\vec v \times \vec B +\vec F) \cdot d \vec r}$$ where $\vec F$ is the force on the particles due to resistance? Or is this taken care of in the self-inductance of the coil? Answer: B fields cannot create an emf except when there is changing magnetic flux, per Faraday's law. In practice a resistance would be needed unless the material was superconducting, in which case all the B field (i.e. magnetic flux density) is expelled from the conductor. The E field times a charge, integrated over a path, will give the emf or voltage, and unless superconductivity is present, a resistance is present. If superconducting, an impedance from Lenz's law will limit the current.
{ "domain": "physics.stackexchange", "id": 21557, "tags": "electromagnetism" }
Coexistence of a static electric and magnetic dipole
Question: I have been trying to construct a charged object (not a conductor carrying any current) that can behave simultaneously as an electric and magnetic dipole and then calculate the electric and magnetic forces of interaction between two copies of this object at a large distance, and possibly associate an energy of interaction with this system as well (just the minimum energy required to assemble the system starting with all elements at infinity). However, so far I have been quite unsuccessful. My first thought was to construct a ring with a charge distribution of $\lambda=\lambda_0 \cos(\theta)$ and make this ring rotate with a certain angular velocity $\omega$ to establish an "effective" current. While this behaves as an electric dipole, I calculated its magnetic dipole moment to be $M=\dfrac{\lambda_0 \omega R^3}{2}\int_0^{2\pi} \cos(\theta) \,d\theta =0$. Similar issues occur when I tried to consider a disc, spherical shell and solid sphere with a charge distribution dependent on the angle a ring-like element subtended at the center. Next, I thought of considering a rotating rod with a charge distribution varying linearly from $-Q$ at one end to $+Q$ at the other (basically $\lambda(x)=-Q+\dfrac{2Q}{l}x$), rotating about one end, but two issues arise herein; firstly, it is a rotating electric dipole and I have never dealt with such a scenario, and I do not know how it may affect the force of interaction between two such rods.
Secondly, I'm having a little trouble calculating the magnetic moment; considering an element $dx$ at a distance $x$ from the one end, it appears that $di_{eq}=\dfrac{dq\,\omega}{2\pi}=\dfrac{\lambda(x)\,\omega\, dx}{2\pi}$ and hence, since $A=\pi x^2$, $$M=\int A\,di=\dfrac{\omega}{2}\int_{0}^{l}x\lambda(x)\,dx=\dfrac{Q \omega l^2}{12}$$ But this seems to break the well-known rule about the gyromagnetic ratio for a system with circular symmetry (where ring-like elements may be taken), stating that $\dfrac{M}{L}=\dfrac{Q}{2m}$, since it gives the ratio to be $\dfrac{Q}{4m}$ (Reference: https://en.wikipedia.org/wiki/Gyromagnetic_ratio#For_a_classical_rotating_body) where $L$ is the angular momentum. So, to summarize, I have 3 queries associated with this one problem: Can there exist a system that can be modeled as a stationary electric and magnetic dipole (i.e. no dipole vector is translating or rotating) simultaneously? If so, can one give a concrete example of the same? Why is the rotating rod violating the rule regarding the gyromagnetic ratio? Does the rotation of the electric dipole vector in the rod scenario render the formula $F=\dfrac{6 p_1 p_2}{4\pi \epsilon_0 r^4}$ inapplicable? If so, why? What would be the modified formula for the electric force of interaction between two such identical rotating rods at a distance $r\gg l$? Answer: If you want to look at interaction energy, it is a good idea to start with a Hamiltonian for the system, i.e. something that would be conserved in time. With some work you can show that if the electromagnetic Lagrangian does not have explicit time dependence, and there is no dissipation ($\mathbf{E}.\mathbf{J}=0$), then the following quantity is conserved in time: $$ H=\int_V d^3r\left[\frac{\epsilon_0}{2}E^2+\frac{1}{2\mu_0}B^2\right] $$ Where $\mathbf{E}$ is the electric field, $\mathbf{B}$ is the magnetic field, $\mathbf{J}$ is the current density, $\epsilon_0$ is the permittivity of free space and $\mu_0$ is the permeability of free space.
This is your conserved energy; $V$ is the volume over which you integrate. If the current density $\mathbf{J}$ and charge density $\rho$ are constant in time, and therefore so are the scalar potential $\phi$ and the vector potential $\mathbf{A}$, we can re-write it in terms of the potentials: $$ H=\frac{1}{2}\int_V d^3 r\left[\phi\rho+\mathbf{J}.\mathbf{A}\right] $$ This is the expression you can use to evaluate all your quantities. For example, a combination of electric dipole $\mathbf{p}$ and magnetic dipole $\mathbf{m}$ at point $\mathbf{r_1}$ has the charge-current configuration: $$ \begin{align} \rho&=\mathbf{p}.\boldsymbol{\nabla}\delta^{(3)}\left(\mathbf{r}-\mathbf{r}_1\right)\\ \mathbf{J}&=\boldsymbol{\nabla}\times\mathbf{m}\delta^{(3)}\left(\mathbf{r}-\mathbf{r}_1\right) \end{align} $$ You can evaluate the energy using the normal rules for delta functions: $$ H=\frac{1}{2}\left[-\mathbf{p}.\boldsymbol{\nabla}\phi_1+\mathbf{m}.\boldsymbol{\nabla}\times\mathbf{A}_1\right] $$ where $\boldsymbol{\nabla}\phi_1$ means evaluated at point $\mathbf{r}_1$. To look at the interaction energy between two dipole configurations at different points, you would find the potentials ($\phi$, $\mathbf{A}$) generated by the second configuration at point $\mathbf{r}_2\neq\mathbf{r}_1$, and substitute into the above expression.
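As a quick symbolic check of the rod's magnetic-moment integral exactly as stated in the question (a sketch; the symbol names are mine):

```python
import sympy as sp

x, l, Q, omega = sp.symbols('x l Q omega', positive=True)
lam = -Q + 2*Q*x/l                 # the question's linear charge profile lambda(x)
# M = (omega/2) * integral of x*lambda(x) from 0 to l, as written in the question
M = sp.Rational(1, 2) * omega * sp.integrate(x * lam, (x, 0, l))
print(sp.simplify(M))              # equals Q*omega*l**2/12, matching the stated result
```

This confirms the arithmetic; the gyromagnetic-ratio puzzle the question raises is about the physics of the setup, not the integral itself.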
{ "domain": "physics.stackexchange", "id": 99941, "tags": "electromagnetism, electrostatics, magnetic-moment, dipole, dipole-moment" }
What program could I use to create a protein model from scratch?
Question: I would like to create protein models such as the ones in the Protein Data Bank. I have GROMACS, but I don't know if it can be used to model macromolecules. Answer: You have several options: smaller peptides can be generated using the peptide builders in Avogadro or PyMol.
{ "domain": "chemistry.stackexchange", "id": 4268, "tags": "biochemistry, computational-chemistry, software" }
Question regarding the number of alleles
Question: Why is it that in biology we often say that a gene has two alleles? When we analyze allele frequencies (e.g. using the Hardy-Weinberg equilibrium), formulas are often generalized for two alleles of a gene. Given mutations, isn't it quite likely that a population will have more than two alleles for a gene? Answer: The two-allele scenario is often used in genetics teaching because of its simplicity. However, quite a few genes have more than two alleles. Some examples that readily come to mind are: The ABO gene in humans: this determines ABO blood group, and has six common alleles. Many more rare alleles have been described [1]. The human leukocyte antigen (HLA) genes: these are well known for their allelic diversity. One of them, HLA-A, alone has over 6,000 alleles [2]. References: 1. Seltsam A, Hallensleben M, Kollmann A, Blasczyk R. The nature of diversity and diversification at the ABO locus. Blood 2003; 102(8):3035–3042. https://doi.org/10.1182/blood-2003-03-0955 2. Robinson J, Halliwell JA, Hayhurst JD, et al. The IPD and IMGT/HLA database: allele variant databases. Nucleic Acids Research 2015; 43:D423–431. http://hla.alleles.org/nomenclature/stats.html
{ "domain": "biology.stackexchange", "id": 10658, "tags": "evolution, allele, hardy-weinberg" }
Will the current travel through the 4 ohms lamps if a resistor is put there?
Question: This is a part of a question I was solving; the question asked what will happen if the switch is in position 2. My answer was that all the lamps are going to turn on, as I thought the current would travel the way I drew in the picture. I learned later, though, that my answer was wrong and that the current will only flow through the 12 ohm lamps and then straight to the battery. I thought the reason I was wrong is that, at point x, the current would "prefer" to take the route which has no resistors and go straight to the battery rather than take the route with the two 4 ohm resistors. However someone told me my answer is wrong because of this: Current only flows from a higher voltage to a lower voltage. What you highlighted cannot happen because current cannot flow towards a lamp unless there is a lower voltage on the other side of it. The current will continue to the lower voltage of the battery instead. But I wasn't convinced, and to know which of us is right I want to know what will happen if we put a resistor here. Will the current still not travel to the 4 ohm lamps? Answer: When we say "current will flow" and "current won't flow" we are using phrasings which let us simplify the circuit. By using logic we can deduce that current will not flow through the 4 ohm lamps. However, if we are not so sure, we can just calculate. We can assume some current $i$ goes through the 4 ohm lamps, and solve the equations. If it turns out that $i$ equals zero, that just means that current didn't flow through those lamps. In this case, we can use the equation for series resistors to find the resistance of the loop through the two 4 ohm lamps: 4 ohm + 4 ohm = 8 ohm. Thus we have an 8 ohm resistance with those two lamps put together. Since they are resistive loads, we can use $I=\frac{\Delta V}{R}$ to determine how much current flows through them. Since they're attached to the same wire, the voltage difference between them is 0 V.
Thus, the current is $I=\frac{0\ \text{V}}{8\ \Omega} = 0\ \text{A}$. That is to say, the current through that loop is 0, which is the same as you would get if you said "no current flows through the 4 ohm lamps," only I was able to calculate this directly rather than using logic. I calculated the current through the lamps, and found it was 0 amps. So if you're ever unsure about whether current is flowing through a loop, you can always test it by calculating in these ways. In later courses, you will find this thinking useful because you'll start to deal with non-ideal wires, with parasitic capacitance and inductance. All sorts of interesting stuff can happen at high frequencies; for example, a disconnected stub of wire can disrupt a signal. If you remember that you can always just calculate the currents through the wires, and show that they are zero, you won't have to remember all the rules of current flow perfectly. Then you can focus on the rules which are most useful. Rules like "always take off your wedding ring when working on a car." Your ring is a non-ideal wire, and shorting a car battery across that non-ideal wire can get hot in a jiffy!
{ "domain": "physics.stackexchange", "id": 72523, "tags": "electric-circuits, electric-current" }
How much energy would be required to increase the velocity of an electron from 0 to 0.9c?
Question: I looked it up on the web and found a few different equations such as E = ymc^2 K.E = 0.5mv^2 E^2 = (mc^2)^2 + (pc)^2 where p = ymv From what I know, the closer an object gets to the speed of light, the more energy it has and the heavier it becomes. The 2nd equation doesn't seem to deal with relativistic speeds so I'm unsure about that one. But the other two seem to be the energy possessed by the electron and not the energy I need to put into the electron to increase its speed. When I inserted the values, I got really small numbers which I didn't expect, but when I assumed the electron was going at the speed of light, they seemed to work (they gave me infinity, y = 1/0). So I was wondering if there is another equation that describes the energy needed to increase the electron's speed close to the speed of light. Answer: It depends a little bit on how you add the energy, because an accelerating electron will radiate power (according to the Larmor formula) and this needs to be replenished. What this means is that if you put your electron in a synchrotron, and slowly increase the energy, then it will lose a relatively large amount of energy; in a linear accelerator, the loss will be smaller. If we ignore these losses, then the kinetic energy of the electron is given by $$E = mc^2 - m_0 c^2 = (\gamma-1) m_0 c^2 = \left(\frac{1}{\sqrt{1-\frac{v^2}{c^2}}}-1\right)m_0 c^2$$ It's not hard to see that this reduces to $\frac12 mv^2$ when $v\ll c$. I will leave it up to you to substitute numbers.
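To actually substitute the numbers for $v = 0.9c$ (a quick sketch, not part of the original answer; the electron rest energy 0.511 MeV is a standard value):

```python
import math

M0C2_MEV = 0.511  # electron rest energy in MeV (standard value)

def kinetic_energy_mev(beta, rest_energy_mev=M0C2_MEV):
    """Relativistic kinetic energy (gamma - 1) * m0 c^2 for speed v = beta * c."""
    gamma = 1.0 / math.sqrt(1.0 - beta ** 2)
    return (gamma - 1.0) * rest_energy_mev

ke = kinetic_energy_mev(0.9)
print(round(ke, 3), "MeV")  # ~0.661 MeV, i.e. about 1.06e-13 J
```

At $\beta = 0.9$, $\gamma \approx 2.29$, so the required energy is about 0.66 MeV — small in everyday units, as the questioner noticed.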
{ "domain": "physics.stackexchange", "id": 47548, "tags": "homework-and-exercises, special-relativity, energy, velocity" }
How did Einstein calculate the wavelength of photons?
Question: Einstein's Photoelectric equation states that $$h\nu = h\nu_0 + \frac{1}{2}mv^2$$ which uses frequency. But if he assumed light to be a stream of particles how would he calculate its frequency? de Broglie's hypothesis came much after Einstein's theory and used it as a basis, so how did Einstein calculate the wavelength and frequency of photons? NOTE: This question doesn't actually solve my doubt since I wanted to know how the frequency was derived, not what the frequency actually represents. Answer: He didn't need to calculate the wavelength. Measuring the wavelength of light had been routine physics since the time of Fraunhofer. He did have to use the speed of light to calculate frequency given wavelength.
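For concreteness (an illustrative calculation, not from the original answer), the conversion he needed is just $\nu = c/\lambda$:

```python
C = 2.998e8  # speed of light in m/s

def frequency_from_wavelength(wavelength_m):
    """nu = c / lambda: frequency from a measured wavelength."""
    return C / wavelength_m

# e.g. light of wavelength 500 nm (an illustrative value):
print(f"{frequency_from_wavelength(500e-9):.3e} Hz")  # ~5.996e+14 Hz
```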
{ "domain": "physics.stackexchange", "id": 90507, "tags": "photons, frequency, wavelength, photoelectric-effect" }
Questions about Yukawa Potential for Strong Force
Question: Here is what I have understood about the strong force: This is a close range force which acts on two charges. The way it works is that a charge releases a meson which attracts the two charges together, and this force is created by a potential given by $$V_{\text{Yukawa}}(r) = -g^2\frac{e^{-\alpha m r}}{r}.$$ (This is from the Wikipedia page). Please correct me if my understanding is wrong. Now my first question is: I know that $g^2$ and $\alpha$ are scaling constants, but what values should I put in for them to accurately describe the strong force? My second question is: what is $r$? On the Wikipedia page it says, "$r$ is the radial distance to the particle", but what is "the particle"? Are they referring to the meson? If so, I thought the meson is moving, so won't the radial distance be changing? Answer: Well, nuclear binding is a complex, multifaceted, business, and the Yukawa potential is a conceptual cornerstone of the framework for it. For the virtual (~metaphorical; unreal; computational; quantum mechanical) meson which intermediates the "force", normally a pion, with mass m = 138 MeV, α = 1, as follows from the pion propagator, outlined in WP. The mass in the exponent typifies the intermediating meson, not the two nucleons bound by the potential, at a distance r from each other. But as you see from the linked SP article, there are actually several mesons involved, involving several masses and coupling strengths g, resulting in a complex picture, with lots of parameters, masses, signs, etc, all to be determined by a plethora of experiments. They are normally of order 1, but can vary substantially with the nucleus in question. The takeaway is that the picture to be emulated, in the back of the reader's mind, is the EM Coulomb potential, with a massless photon, m = 0, so infinite range, and coupling strength e, the elementary electric charge. 
This morphs to the Yukawa with its range shortened, and its coupling strength enhanced enormously, but I doubt specific values of such parameters, available in reviews, would help you much. Edit in response to comment questions: As always in HEP and nuclear physics, the natural units employed in the nondimensionalization set $\hbar \to 1$, $c\to 1$, so that masses are measured in MeV, while distances, such as r, are measured in 1/MeV. You may readily deduce then that $\frac{1}{\hbox{MeV}}= 197.3 \hbox{ fm} \sim 2\cdot 10^{-13}$ m. In your conventions, α = 1 and g ~ 10 are dimensionless.
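A small numerical sketch of that takeaway (not from the original answer; g² = 1 is a placeholder coupling, since the answer stresses that real couplings vary with the nucleus):

```python
import math

HBARC_MEV_FM = 197.3  # hbar*c in MeV*fm, the unit conversion used above

def yukawa(r_fm, g2=1.0, m_mev=138.0):
    """Yukawa potential -g^2 exp(-m r / (hbar c)) / r, with r in fm and m in MeV."""
    return -g2 * math.exp(-m_mev * r_fm / HBARC_MEV_FM) / r_fm

def coulomb(r_fm, g2=1.0):
    """The massless (m = 0) limit: a Coulomb-like -g^2 / r potential."""
    return -g2 / r_fm

# the pion mass sets the range hbar*c / m ~ 1.4 fm:
print(round(HBARC_MEV_FM / 138.0, 2), "fm")
# a few fm out, the Yukawa potential is exponentially suppressed vs Coulomb:
print(yukawa(5.0) / coulomb(5.0))  # exp(-5 * 138 / 197.3), about 0.03
```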
{ "domain": "physics.stackexchange", "id": 82992, "tags": "nuclear-physics, potential, strong-force, mesons" }
Replacing tokens in a text file with replacement text
Question: This code parses a text file, replacing special tokens marked as $(token) with some replacement text. The $ symbol can be escaped by entering a double $$. In the example a user enters a text file containing tokens to replace and a separate mapping file with, for instance, colour:red to denote how to replace a token. One remark I have is that it is not really using any OO. Does it need to be? #include <fstream> #include <iostream> #include <vector> #include <map> #include <string> static void readToken(std::istream& strm, std::vector<char>& token) { while(strm) { int c (strm.get()); token.push_back(c); if(')' == c) break; } } static void lookup_replacement(const std::map<std::string,std::string>& token_map, const std::vector<char>& key, std::vector<char>& value) { if(key.size() > 3) { //3 chars min are '$', '(' and ')' std::map<std::string,std::string>::const_iterator it = token_map.find(std::string(key.begin()+2, key.end()-1)); if(it != token_map.end()) value.insert(value.end(), it->second.begin(), it->second.end()); } } static bool readTo(std::istream& strm, std::vector<char>& buf, const char term[]) { bool ret(false); while(strm) { int c (strm.peek()); if(strchr(term, c)) { strm.get(); int c2(strm.peek()); if(c != c2) { strm.unget(); ret = true; break; } } strm.get(); buf.push_back(c); } return ret; } static void create_mapping(std::ifstream& strm, std::map<std::string,std::string>& tokens) { if(strm.good()) { std::string line; while(std::getline(strm, line)) { std::string::size_type pos = line.find_first_of(':'); if(pos != std::string::npos) tokens.insert(std::make_pair<std::string, std::string>(line.substr(0, pos), line.substr(pos+1))); } } } int main() { std::map<std::string,std::string> token_map; typedef std::map<std::string,std::string>::iterator map_iter; std::cout << "Enter path to text file to replace\nTokens to be replaced should be of form: $(token)\n"; std::string file; std::cin >> file; std::cout << "Enter path to file with target token <-> 
replacement token in format:\n" "<toreplace1>:<replacement1>\n<toreplace2>:<replacement2>\netc\n"; std::string mapfile; std::cin >> mapfile; std::ifstream strmmap(mapfile.c_str()); create_mapping(strmmap, token_map); // replaced text std::vector<char> replaced; std::ifstream strm(file.c_str()); if(!strm.bad()) { while(!strm.eof()) { if(readTo(strm, replaced, "$")) { std::vector<char> token; readToken(strm, token); std::vector<char> replace_tok; lookup_replacement(token_map, token, replace_tok); if(!replace_tok.empty()) replaced.insert(replaced.end(), replace_tok.begin(), replace_tok.end()); } } } //print out replaced text std::cout << "your replaced text file\n"; std::vector<char>::iterator pit = replaced.begin(); while(pit != replaced.end()) std::cout << *pit++; return 0; } The example input file I was using: Once upon a $(place1) there was a $(adjective1) $(colour) $(animal). Do you have a $$ You have enemies? Good. That means you've stood up for something, sometime in your life. by $(author) on $(date) A fanatic is one who can't change his mind and won't change the subject. By $(author) And the mapping file: place1:time adjective1:cunning colour:brown animal:fox author:Winston Churchill date:06/06/2013 Answer: It's a little heavy on the code side. Your definition of a token still leaves possibilities that you have not defined. $$ : Replace with $ Fine $(<X>) : Replace with the mapped `<X>` Fine. $(<stuff>\n : No closing ')' Error $() : What happens with no identifier. Error $<X> : Some other character after '$' that is not '(' or '$' Error This especially applies to your stream handling. It's usually bad practice to test for .good(), .bad() or .eof() during normal processing. You want to test these after something goes wrong, to generate the appropriate error message. 
Usually stream code looks like this: while(std::getline(stream, line)) { // correctly read a line } // or while(stream >> value >> value2 >> value3 >> etc) { // correctly read a value from the stream } You can use OO to compartmentalize your mapping. Personally I would combine your mapper and mapped class into a single class. class Mapper { // Class contains a map from tokens => value // used via replace() to replace all tokens $(<X>) in string // with the values contained in the map. std::map<std::string, std::string> replaceMap; public: Mapper(std::string const& fileName) { // Read all the mapping values from the file. std::ifstream mapFile(fileName.c_str()); std::string line; while(std::getline(mapFile, line)) { // Each line in the file contains a token/value mapping std::stringstream linestream(line); std::string token; std::string value; std::getline(linestream, token, ':'); std::getline(linestream, value); // Save the token and value replaceMap[token] = value; } } std::string replace(std::string const& line) { std::string result; std::size_t last = 0; // Search for the '$' char repeatedly. for(std::size_t find = line.find('$');find != std::string::npos;last=find, find=line.find('$', find)) { // Copy the inert text from line // into the result result += line.substr(last, (find-last)); // If the next character is a '(' then search for the closing ')' // // If we don't find a token `$(<X>)` then ignore the '$' and treat // it like a normal character. Note: if <X> has no mapping then // we replace it with the empty string. Note: if <X> is empty it is // still valid. bool hit = false; if ((find + 1 < line.size()) && line[find + 1] == '(') { std::size_t end = line.find(')', find); if (end != std::string::npos) { std::string key = line.substr(find+2, end-find-2); result += replaceMap[key]; find = end + 1; hit = true; } } if (!hit) { // Token was not found. // Put the '$' on the output and move on. 
result += '$'; find++; } } // Add the rest of the inert string to the output result += line.substr(last); return result; } }; This makes it easy to use: int main(int argc, char* argv[]) { Mapper mapper(argv[1]); std::ifstream input(argv[2]); std::string line; while(std::getline(input, line)) { std::string result = mapper.replace(line); std::cout << result << "\n"; } }
{ "domain": "codereview.stackexchange", "id": 3917, "tags": "c++, parsing" }
When is $|E| = \Theta(|V|^2)$ even possible (even for dense graphs)?
Question: Wikipedia says that "a dense graph is a graph in which the number of edges is close to the maximal number of edges." and "The maximum number of edges for an undirected graph is $|V|(|V|-1)/2$". Then why do we even use $|E| = \Theta (|V|^2)$? I understand that $\Theta$ is the correct (tightest) bound in asymptotic notation. It seems to me that $|E| = \Theta (|V|^2)$ can never happen, so why do we use it? Answer: Note that asymptotic bounds only apply to infinite sequences. In this case, $|E| = \Theta(|V|^2)$ applies to an implicit infinite sequence of graphs $G_i=(V_i,E_i)$, meaning that there are two positive constants $c,c'$ such that $c\cdot|V_i|^2 \leq |E_i| \leq c'\cdot|V_i|^2$ whenever $i$ is large enough. This constraint can be satisfied. For every $i\in \mathbb N$, take $G_i$ to be the complete graph on $\{1,\ldots, i\}$. Hence, $G_i$ has exactly $i\cdot(i-1)/2$ edges. For large enough $i$, we have $$ \frac{1}{4}i^2 \leq \frac{i\cdot(i-1)}{2} \leq \frac{1}{2}i^2 $$ So, we can say that $|E| = \Theta(|V|^2)$. Another sequence could be constructed by taking "almost complete" graphs, where we remove one edge from each complete graph $G_i$ in the previous sequence. This would still satisfy the bound. We could even remove, say, $100i$ edges from each $G_i$ (when possible) and still satisfy the bound. This is because we only care about $|E_i|$ growing with "quadratic speed".
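The bound in the answer is easy to check numerically (an illustrative sketch, not part of the original answer):

```python
def complete_graph_edges(n):
    """Number of edges |E| = n(n-1)/2 of the complete graph on n vertices."""
    return n * (n - 1) // 2

# verify (1/4) n^2 <= n(n-1)/2 <= (1/2) n^2 for every n >= 2:
for n in range(2, 5000):
    e = complete_graph_edges(n)
    assert 0.25 * n * n <= e <= 0.5 * n * n
print(complete_graph_edges(10))  # 45 edges on 10 vertices
```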
{ "domain": "cs.stackexchange", "id": 13038, "tags": "graphs, asymptotics" }
Where to find common inorganic/organometallic molecules?
Question: I'm working on a project to create a large open repository of quantum calculations, largely for teaching purposes. I can get thousands of common organic compounds easily from sources like PubChem or drug databases. I'm looking for sources of VSEPR examples or other types of inorganic and organometallic molecules, for teaching or other uses. Ideally I'd want 100s or 1000s with good coverage of the periodic table. Any ideas? To be clear, there are lots of databases of solids. I know most of those, although feel free to suggest some for archival. I really want isolated complex ions, molecules, etc. I don't think there are many, since I haven't turned anything up besides the Cambridge database, which is restrictive about reuse. Thanks! Answer: I appreciate the suggestions, but some digging turned up a few possibilities: As I mentioned in the question, there's the Cambridge Structural Database, which includes over 700,000 compounds (both organic and inorganic/organometallic). It's decidedly not free, but available at many universities. There is the "Teaching Subset" of 733 compounds from CCDC which offers a Java web view of interesting compounds, including plenty of VSEPR examples and inorganic and organometallic species. This is free to access, but as far as I can tell, not free to redistribute. Cool Molecules from St. Olaf college, including 900+ structures from experimental data, and lots of inorganic and organometallic species (most elements are reflected), including lots of "cool" or unusual shapes (e.g., hexagonal bipyramidal). Crystallography Open Database as mentioned by @permeakra. I was aware of this for solid-state materials, but it seems as if they've merged in data from CrystalEye, a resource which grabbed crystallography data from journal articles, including many molecular species now, and can be browsed by journal (e.g., Organometallics). The last two resources seem the best, since the data can be freely distributed. 
They also don't require using Java to view the structures.
{ "domain": "chemistry.stackexchange", "id": 2023, "tags": "inorganic-chemistry, organometallic-compounds" }
get Yaw Pitch Roll values
Question: How can I get the yaw pitch roll values between a joint and the world? Originally posted by omeranar1 on ROS Answers with karma: 31 on 2021-06-28 Post score: 0 Answer: On the command line: rosrun tf tf_echo source_frame target_frame In Python: import tf import geometry_msgs.msg listener = tf.TransformListener() p_joint = geometry_msgs.msg.PoseStamped() p_joint.header.frame_id = "your_joint" p_joint_in_world = listener.transformPose("world", p_joint) q = p_joint_in_world.pose.orientation # Quaternion rpy = tf.transformations.euler_from_quaternion([q.x, q.y, q.z, q.w]) In C++, I don't have boilerplate code on hand, but you could convert the geometry_msgs::Quaternion to a tf::Quaternion, convert it to a tf::Matrix3x3 and call getRPY. This answer assumes that you meant "Roll Pitch Yaw", because this is the standard in robotics. If you really need the YPR Euler angles, use the euler_from_quaternion function's axes parameter in Python, or the getEulerYPR function in C++. Originally posted by fvd with karma: 2180 on 2021-06-28 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by omeranar1 on 2021-06-28: it worked, thank you
{ "domain": "robotics.stackexchange", "id": 36595, "tags": "ros, gazebo, rviz, moveit, ros-melodic" }
Why is copper (II) oxide used in quantitative analysis of carbon?
Question: What is the significance of using cupric oxide in the quantitative analysis of carbon? As far as I can tell from my research, only $\ce{CuO}$ is used to oxidize carbon to $\ce{CO2}$. Why can't we use other oxidants like $\ce{HNO3}$ or other metal oxides such as hematite? Is the use of $\ce{CuO}$ prevalent because of low cost or does it has some other special property? Answer: You got the crux of it: low cost. And it works, regardless of what the starting compound was. And it's not just low cost to purchase, it is easy to regenerate the used cupric oxide multiple times by heating it under oxygen.
{ "domain": "chemistry.stackexchange", "id": 7312, "tags": "experimental-chemistry, analytical-chemistry, elemental-analysis" }
Deriving Hamilton's Principle from Lagrange's Equations
Question: I'm trying to derive Hamilton's Principle from Lagrange's Equations, as I've heard they're logically equivalent statements, and am stuck on a final step. For simplicity, assume we're dealing with a system with only one generalized coordinate $q$, so that there is only one equation of motion of the system: $$\frac{d}{dt} \left( \frac{\partial L_q}{\partial \dot{q}} \circ \Lambda_t \right) - \frac{\partial L_q}{\partial q} \circ \Lambda_t = 0 \quad,$$ where $L_q:\mathbb{R}^3 \to \mathbb{R}$ is the Lagrangian of the system expressed in terms of $q$, $\dot{q}$ and time $t$, $\Lambda_t(t) = (q_t(t), \dot{q_t}(t),t)$, where $q_t:\mathbb{R} \to \mathbb{R}$ is an actual trajectory, so a solution to the differential equation. Working back from the steps taken to arrive at this equation from the variational principle, I did the following: let $\delta q_t:\mathbb{R} \to \mathbb{R}$ be a deviation from the actual trajectory, so a smooth function that satisfies $\delta q_t (t_1) = 0, \delta q_t(t_2) = 0$, for some given time instants $t_1$ and $t_2$. 
Multiplying the differential equation through by $\delta q_t$ and integrating from $t_1$ to $t_2$, we get $$\int_{t_1}^{t_2} \left[ \frac{d}{dt} \left( \frac{\partial L_q}{\partial \dot{q}} \circ \Lambda_t \right) \delta q_t - \frac{\partial L_q}{\partial q} \circ \Lambda_t \phantom{,} \delta q_t \right] = 0 \quad.$$ Since by hypothesis $\delta q_t (t_1) = 0, \delta q_t(t_2) = 0$, the following holds $$\int_{t_1}^{t_2} \frac{d}{dt} \left( \frac{\partial L_q}{\partial \dot{q}} \circ \Lambda_t \phantom{,} \delta q_t \right) = 0$$ $$\Rightarrow \int_{t_1}^{t_2} \frac{d}{dt} \left( \frac{\partial L_q}{\partial \dot{q}} \circ \Lambda_t \right) \delta q_t = \int_{t_1}^{t_2} \left[ \frac{d}{dt} \left( \frac{\partial L_q}{\partial \dot{q}} \circ \Lambda_t \phantom{,} \delta q_t \right) - \frac{\partial L_q}{\partial \dot{q}} \circ \Lambda_t \phantom{,} \dot{\delta q_t} \right] = -\int_{t_1}^{t_2} \frac{\partial L_q}{\partial \dot{q}} \circ \Lambda_t \phantom{,} \dot{\delta q_t} \quad.$$ So the full integral becomes $$\int_{t_1}^{t_2} \left[ \frac{\partial L_q}{\partial q} \circ \Lambda_t \phantom{,} \delta q_t + \frac{\partial L_q}{\partial \dot{q}} \circ \Lambda_t \phantom{,} \dot{\delta q_t} \right] = 0 \quad,$$ which we can recognize as being the statement that the variation of some functional of the form $$A(q_t) = \int_{t_1}^{t_2} L_q(q_t(t), \dot{q_t}(t), t) \phantom{,} dt$$ must be equal to zero for all admissible $\delta q_t$, so $$\delta A(q_t, \delta q_t) = 0 \quad.$$ So starting from Lagrange's Equation for the system, we logically arrive at the implication that the variation of the functional $A$ (the action functional) must be zero at the system's true trajectory, for such $\delta q_t$. But this doesn't on its own imply that $q_t$ is in fact an extremum of $A$, which is what Hamilton's Principle states, and from which $\delta A(q_t, \delta q_t) = 0$ follows. 
Where's the missing piece that lets us conclude that $q_t$ is in fact an extremum of $A$, and not only a point at which its variation vanishes? Is it a mathematical fact about the form of the functional itself that allows us to conclude that? Answer: Hamilton’s principle is often sloppily stated. It is not a “least-action” principle, nor is it a “principle of extremal action”. It is a principle of “stationary” action. For example, on the round sphere, take two non-antipodal points, and consider the longer arc of the great circle which joins these two points (i.e the longer geodesic). This path is a saddle point for the length functional on the sphere (the length functional is the action in this context).
{ "domain": "physics.stackexchange", "id": 95648, "tags": "classical-mechanics, lagrangian-formalism, variational-principle, variational-calculus" }
References for cosmological perturbation theory in the ADM formalism
Question: What would you consider the best online resources for learning the 3+1 ADM formalism and gauge invariant perturbation theory in cosmology? (Assuming intermediate level GR and QFT familiarity) Answer: Hah, I just studied this a while ago with James Bardeen, so I would say he is the best resource for learning this! Since you probably don't have access to the physical Bardeen, you can check out: Physical Review D, Vol 22 no 8 (1980) "Gauge-invariant cosmological perturbations" and Physical Review D, Vol 40 no 6 (1989) "Designing density fluctuation spectra in inflation" There is also a set of lecture notes I have sitting on my desk by him that claim they are "to be published in Particle Physics and Cosmology" which are dated 1988, so presumably they were published within the next year or so. If you can find them, the talks are probably the easier of the three, and the first PrD article is the second easiest. The third paper is very nice, but more technically difficult.
{ "domain": "physics.stackexchange", "id": 308, "tags": "general-relativity, cosmology, resource-recommendations, hamiltonian-formalism, perturbation-theory" }
Snake game in Pygame
Question: This is my first game, and I'm looking for some help to improve the current code because I've identified a lot that I think could be written more efficiently, particularly the segment that checks which key has been pressed, but I'm not sure how to improve it. import pygame from pygame.locals import * import random import sys pygame.init() FPS = 30 fpsClock = pygame.time.Clock() WIN_WIDTH = 680 #width of window WIN_HEIGHT = 500 #height of window DISPLAY = (WIN_WIDTH, WIN_HEIGHT) #variable for screen display DEPTH = 32 #standard FLAGS = 0 #standard BLACK = (0, 0, 0) #black RED = (255, 0, 0) #red GOLD = (255, 215, 0) LOL = (14, 18, 194) YOLO = (155, 98, 245) WHITE = (255, 255, 255) screen = pygame.display.set_mode(DISPLAY, FLAGS, DEPTH) pygame.display.set_caption('Snaek') collision_coords = [1] snake_parts = [1] Score = 0 speed = 12 snakex = 125 snakey = 70 size = 20 # --- classes --- class Snake(pygame.Rect): def __init__(self, x, y, screen, size, colour): pygame.Rect.__init__(self, x, y, size, 20) self.screen = screen self.colour = colour self.x = x self.y = y def draw(self, screen): pygame.draw.rect(self.screen, self.colour, self) def coordinates(self): return self.x, self.y class Food(pygame.Rect): def __init__(self, x, y, screen): pygame.Rect.__init__(self, x, y, 20, 20) self.screen = screen def draw(self, screen): pygame.draw.rect(self.screen, GOLD, self) class Barrier(pygame.Rect): def __init__(self, x, y, screen): pygame.Rect.__init__(self, x, y, 40, 20) self.screen = screen def draw(self, screen): pygame.draw.rect(self.screen, LOL, self) class GameMenu(): def __init__(self, screen, options): self.screen = screen self.options = options # --- functions --- def get_food_pos(WIN_WIDTH, WIN_HEIGHT): WIN_WIDTH = random.randint(100, WIN_WIDTH-150) WIN_HEIGHT = random.randint(100, WIN_HEIGHT-150) return WIN_WIDTH, WIN_HEIGHT def texts(score): font=pygame.font.Font(None,30) scoretext=font.render("Score:"+' ' + str(score), 1,(255,255,255)) screen.blit(scoretext, (500, 
15)) eaten = True pressed_right = True pressed_left = False pressed_up = False pressed_down = False pygame.key.set_repeat(10,10) level = [ "PPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPP", "PPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPP", "P P", "P P", "P P", "P P", "P P", "P P", "P P", "P P", "P P", "P P", "P P", "P P", "P P", "P P", "P P", "P P", "P P", "P P", "P P", "P P", "P P", "P P", "PPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPP", ] def display_menu(): while True: screen.fill(BLACK) for event in pygame.event.get(): if event.type == QUIT: pygame.quit() sys.exit() elif event.type == pygame.MOUSEBUTTONUP: pos = pygame.mouse.get_pos() if 385 > pos[0] > 275: if 202 > pos[1] > 185: return elif 293 > pos[1] > 275: pygame.quit() sys.exit() else: pass font = pygame.font.Font(None, 30) play_game = font.render("Play Game", 1, WHITE) quit_game = font.render("Quit Game", 1, WHITE) screen.blit(play_game, (275, 185)) screen.blit(quit_game, (275, 275)) pygame.display.update() fpsClock.tick(FPS) display_menu() while True: screen.fill(BLACK) for event in pygame.event.get(): if event.type == QUIT: pygame.quit() sys.exit() elif event.type == pygame.KEYDOWN: # check for key presses if event.key == pygame.K_LEFT: if pressed_right: pressed_right = True# left arrow turns left else: pressed_left = True pressed_right = False pressed_up = False pressed_down = False elif event.key == pygame.K_RIGHT: if pressed_left: pressed_left = True# right arrow turns right else: pressed_right = True pressed_left = False pressed_up = False pressed_down = False elif event.key == pygame.K_UP: if pressed_down:# up arrow goes up pressed_down = True else: pressed_up = True pressed_right = False pressed_left = False pressed_down = False elif event.key == pygame.K_DOWN: if pressed_up: break else: pressed_down = True pressed_right = False pressed_up = False pressed_left = False x = snakex y = snakey collision_coords = [1] if pressed_left: snakex -= speed elif pressed_right: snakex += speed elif pressed_up: 
snakey -= speed elif pressed_down: snakey += speed snake_parts[0] = Snake(snakex, snakey, screen, int(size), RED) collision_coords[0] = snake_parts[0].coordinates() snake_parts[0].draw(screen) if eaten: foodx, foody = get_food_pos(WIN_WIDTH, WIN_HEIGHT) eaten = False my_food = Food(foodx, foody, screen) my_food.draw(screen) if snake_parts[0].colliderect(my_food): eaten = True screen.fill(BLACK) a_snake = Snake(snakex, snakey, screen, int(size), RED) snake_parts.append(a_snake) Score += 1 for i in range(1, len(snake_parts)): tempx, tempy = snake_parts[i].coordinates() snake_parts[i] = Snake(x, y, screen, int(size), RED) collision_coords.append(snake_parts[i].coordinates()) snake_parts[i].draw(screen) x, y = tempx, tempy platform_x = 0 platform_y = 0 for row in level: for col in row: if col == "P": col = Barrier(platform_x, platform_y, screen) col.draw(screen) if snake_parts[0].colliderect(col): pygame.quit() sys.exit() platform_x += 15 platform_y += 20 platform_x = 0 for i in range(2, len(collision_coords)): if int(collision_coords[0][1]) == int(collision_coords[i][1]) and int(collision_coords[0][0]) == int(collision_coords[i][0]): pygame.quit() sys.exit() texts(Score) pygame.display.update() fpsClock.tick(FPS) Answer: There is quite a lot of code, so I'll point out the first few things that I notice. Naming: you have some constants all upper case which is good, but you also have constants that are lower case and one (Score) which is neither. I'd say stick to all upper case for constants. A small typo (Snaek) x and y don't really say much, but you're using them as the old position of the snake, so maybe rename them to orig_x and orig_y pygame.key.set_repeat(10, 10) is useless here, you can remove it. get_food_pos gets as arguments the width and height of the screen, but no need to name them the same way, which is actually quite misleading. Name them simply width and height and return food_x and food_y, not the same variables. Now for the implementation. 
Yes, the key press handler can be written with less code. You can have an array of key presses and use the values to determine where you're supposed to go. You can initialize it like this: (LEFT, RIGHT, UP, DOWN) = (0, 1, 2, 3) pressed = [0, 1, 0, 0] And in the main loop use it like this: for event in pygame.event.get(): if event.type == QUIT: pygame.quit() sys.exit() elif event.type == pygame.KEYDOWN: # check for key presses if event.key == pygame.K_LEFT and not pressed[RIGHT]: pressed = [-1, 0, 0, 0] elif event.key == pygame.K_RIGHT and not pressed[LEFT]: pressed = [0, 1, 0, 0] elif event.key == pygame.K_UP and not pressed[DOWN]: pressed = [0, 0, -1, 0] elif event.key == pygame.K_DOWN and not pressed[UP]: pressed = [0, 0, 0, 1] snakex += speed * (pressed[LEFT] + pressed[RIGHT]) snakey += speed * (pressed[UP] + pressed[DOWN]) What happens here is that if one of the keys is pressed, the array will contain all zero values, except for the pressed key. The value there will be negative or positive depending on the direction, so you can simply sum and multiply the result by your speed. The display menu can also be written differently. First of all I'd rather have that return a value and use that to determine if the users want to quit or not. Then I'd rename it to something like get_menu_choice. There's no need to repaint continuously if you're not changing anything, so your drawing code can be outside of that while loop. If you want to detect collisions between anything and a rectangle there's a specific method for that. You get your mouse position and check if it's collided with a rectangle. 
To sum up, something like this: def get_menu_choice(): screen.fill(BLACK) font = pygame.font.Font(None, 30) play_game = font.render("Play Game", 1, WHITE) quit_game = font.render("Quit Game", 1, WHITE) screen.blit(play_game, (275, 185)) screen.blit(quit_game, (275, 275)) pygame.display.update() fpsClock.tick(FPS) while True: for event in pygame.event.get(): if event.type == QUIT: pygame.quit() sys.exit() pos = pygame.mouse.get_pos() (mouse_clicked, _, __) = pygame.mouse.get_pressed() start_game_rect = pygame.Rect(275, 185, 110, 27) quit_game_rect = pygame.Rect(275, 275, 110, 27) if mouse_clicked: if start_game_rect.collidepoint(pos): return 1 if quit_game_rect.collidepoint(pos): return 0 if get_menu_choice() == 0: pygame.quit() sys.exit()
{ "domain": "codereview.stackexchange", "id": 24297, "tags": "python, pygame" }
Entropy of Van der Waals fluid
Question: Given the thermal equation of state of a van der Waals gas as $$(p+an^2/V^2)(V-nb)=nRT$$ how can its entropy be calculated? Answer: Start with $TdS=dU+pdV$ and substitute $p=\frac{nRT}{V-nb}-\frac{an^2}{V^2}$; rearranging, you get: $$dU=TdS-\left(\frac{nRT}{V-nb}-\frac{an^2}{V^2}\right)dV$$ Let the caloric equation be $\delta Q = C_vdT$ where $C_v=C_v(T,V)$, so that $$dU=C_vdT+\left(T\frac{\partial S}{\partial V}-\frac{nRT}{V-nb}+\frac{an^2}{V^2}\right)dV\tag{**}$$ but you also have in general that $dU=C_vdT+\left(T\frac{\partial p}{\partial T}-p\right)dV$ and because $dU$ is a perfect differential $$\frac{\partial C_v}{\partial V}=\frac{\partial}{\partial T}\left(T\frac{\partial p}{\partial T} -p\right)=T\frac{\partial^2p}{\partial T^2}$$ But this is $0$ for the van der Waals gas, that is $\frac{\partial C_v}{\partial V}=0$, therefore $C_v=C_v(T)$ and $$\delta Q=TdS=dU+pdV=C_vdT+\frac{nRT}{V-nb}dV$$ or $$dS=\frac{C_v}{T}dT+\frac{nR}{V-nb}dV$$ and $$S-S_0=\int_{T_0}^T\frac{C_v}{T}dT+nR\int_{V_0}^V\frac{1}{V-nb}dV$$ and $$S(T,V)=S_0+\int_{T_0}^T\frac{C_v(T)}{T}dT+nR \ln \left( \frac {V-nb}{V_0-nb} \right) $$ ** thanks to @Chemomechanics for pointing out that the 2nd term $T\frac{\partial S}{\partial V}$ was missing; it was correctly shown in the next equation as a consequence of Maxwell's relation. Addendum: A similar result can be derived for a somewhat more general thermal equation of state such as $p(T,V)=Tf(V)+g(V)$. 
(for the vdW fluid $f(V)=\frac{nR}{V-nb}$ and $g(V)=\frac{an^2}{V^2}$) Let $S=S(T,V)$ then $$dU=TdS-pdV=T\left(\frac{\partial S}{\partial T}dT+\frac{\partial S}{\partial V}dV\right)-pdV\\ =T\frac{\partial S}{\partial T}dT+\left(T\frac{\partial S}{\partial V}-p \right)dV$$ Now use Maxwell's equation $\frac{\partial S}{\partial V}=\frac{\partial p}{\partial T}$ and write $C_v=T\frac{\partial S}{\partial T}$: $$dU=C_vdT+\left(T\frac{\partial p}{\partial T}-p \right)dV$$ For this gas $\frac{\partial p}{\partial T}=f(V)$ and also $\frac{\partial^2 p}{\partial T^2}=0$. Now use the equality of mixed partial derivatives for $dU$ to be exact differential, that is $$\frac{\partial C_v}{\partial V} = \frac{\partial}{\partial T}\left(T\frac{\partial p}{\partial T}-p \right) =\frac{\partial T}{\partial T}\frac{\partial p}{\partial T} + T \frac{\partial^2 p}{\partial T^2}-\frac{\partial p}{\partial T}=0$$ Wee that $C_v=C_v(T)$ independently of $V$. We can integrate the entropy by noting that $T\frac{\partial p}{\partial T} = Tf$, therefore $dU=C_vdT+(Tf-p)dV=C_vdT-gdV$: $$dS=\frac{1}{T}dU+\frac{p}{T}dV\\ \frac{C_v}{T}dT+\frac{p-g}{T}dV=\frac{C_v}{T}dT+fdV$$ and $$S-S_0=\int_{T_0}^T \frac{C_v(T)}{T}dT + \int_{V_0}^V f(V)dV$$
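The two facts the derivation rests on — $T\,\partial^2 p/\partial T^2 = 0$ (so $C_v$ depends only on $T$) and the Maxwell relation $\partial S/\partial V = \partial p/\partial T = nR/(V-nb)$ — can be verified symbolically. A quick sketch with sympy (the symbol names are my own):

```python
# Symbolic check of the two facts the derivation rests on, using sympy.
import sympy as sp

T, V, n, R, a, b = sp.symbols('T V n R a b', positive=True)

# van der Waals pressure: p = nRT/(V - nb) - a n^2 / V^2
p = n*R*T/(V - n*b) - a*n**2/V**2

# T * d^2p/dT^2 = 0, hence dCv/dV = 0 and Cv is a function of T only
assert sp.simplify(T*sp.diff(p, T, 2)) == 0

# Maxwell relation: (dS/dV)_T = (dp/dT)_V = nR/(V - nb),
# which integrates to the nR*ln(V - nb) term in the entropy
dS_dV = sp.diff(p, T)
assert sp.simplify(dS_dV - n*R/(V - n*b)) == 0
print("checks passed")
```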
{ "domain": "physics.stackexchange", "id": 92833, "tags": "thermodynamics, entropy" }
Algorithm to transform a string according to occurrence count of each character
Question: I have this algorithm that counts the frequency of character occurrence in a string, and outputs a new string based on that. For example,

    input = 'aabbcccaaa'
    output = 'a5b2c3'

Here is my implementation in Python:

    def compression(string):
        string = string.lower()
        freq_count = {}
        for index, char in enumerate(string):
            if char not in freq_count:
                freq_count[char] = 1
            else:
                freq_count[char] += 1
        return_string = ''
        for key in freq_count:
            return_string += key + str(freq_count[key])
        print(return_string)
        return return_string

    compression('aabccccaaa')

My question is, am I making this algorithm less efficient by using a dict to memoize values? Also, I know that creating a new string takes up memory allocation, so is there a way to improve on that?

Answer: 

    def compression(string):

Naming can be hard, but it is important to get right. If I were to call compression('abcd'), I would expect the result length to be at most the length of the input string. "compression" doesn't really describe what is happening within the function. So what exactly is your function doing? From your description:

I have this algorithm that counts the frequency of character occurrence in a string, and outputs a new string based on that.

A lot of nice verbs in that description you can use for a function name (serialize_frequencies?).

    string = string.lower()

Does case-sensitivity have anything to do with your stated goals of calculating and serializing the frequency of characters? It depends on the context in which this function is used. Case-sensitivity isn't always required. If you really want to provide a mechanism for case-insensitive frequency generation, consider a toggle parameter or another function that transforms the input then calls this function.

    def serialize_frequencies(string, case_insensitive=False):
        if case_insensitive:
            string = string.lower()

    freq_count = {}
    for index, char in enumerate(string):
        if char not in freq_count:
            freq_count[char] = 1
        else:
            freq_count[char] += 1

A function that performs a single operation is simpler to understand, test, and reuse. Don't be afraid to break functions up into suitable logical parts and parameterize. enumerate is a nice utility when you need to iterate through a sequence but also want to know the index. Since you don't need the index, you can just iterate through the string itself.

    for char in string:
        if char not in freq_count:
            freq_count[char] = 1
        else:
            freq_count[char] += 1

With that said, Python's collections includes a dictionary sub-class to count frequencies (Counter).

    freq_count = Counter(string)

    return_string = ''
    for key in freq_count:
        return_string += key + str(freq_count[key])

If you want to iterate a dictionary by its key-value pairs, Python's built-in dictionary includes the method items().

    return_string = ''
    for key, value in freq_count.items():
        return_string += key + str(value)

You can write the loop that appends each pair using the string method join.

    return_string = ''.join(k + str(v) for k, v in freq_count.items())

    print(return_string)

Debugging artifact?

My question is, am I making this algorithm less efficient by using dict to memoize values.

No. But as 200_success has noted, calling compression('abcd') might result in 'a1b1c1d1' or 'c1d1b1a1' depending on the implementation. Ordering for the built-in dictionary is arbitrary and could change between implementations, versions, or possibly application executions. If ordering matters, then you should use a sorted container (OrderedDict, SortedDict) or manually sort the resulting dictionary before serializing.
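Pulling these suggestions together, one possible final version (a sketch; note that plain dict/Counter preserves insertion order only on Python 3.7+, which makes the output order the first-seen order):

```python
from collections import Counter

def serialize_frequencies(string, case_insensitive=False):
    """Serialize character frequencies as char+count pairs, in first-seen order."""
    if case_insensitive:
        string = string.lower()
    freq_count = Counter(string)  # insertion-ordered on Python 3.7+
    return ''.join(k + str(v) for k, v in freq_count.items())

print(serialize_frequencies('aabbcccaaa'))  # a5b2c3
```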
{ "domain": "codereview.stackexchange", "id": 26082, "tags": "python, compression" }
Predicting next element of a sequence given small amount of data
Question: I have data of bank branches and the amount of revenue they have generated in a month. The data looks like this: I am tasked to find the expected revenue for the branch for the next month using machine learning. Initially I was planning to use LSTM networks for such analysis, but I doubt it's possible with such a small amount of data. I personally think machine learning is overkill for such a task. What would be the most appropriate way to predict the revenue for next month? I thought about increasing the amount of data by treating every branch as equal and using the row corresponding to each branch as a separate instance for training (but I doubt that is a correct approach). Any advice would be appreciated. Answer: You might find this link helpful: https://towardsdatascience.com/how-to-model-time-series-data-with-linear-regression-cd94d1d901c0
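The idea in that link boils down to fitting a trend against time and extrapolating. A minimal sketch with numpy (the revenue numbers below are illustrative, not from the question's data):

```python
import numpy as np

# Five months of history for one branch (illustrative numbers)
months = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
revenue = np.array([10.0, 12.0, 13.5, 15.0, 17.0])

# Fit revenue = slope*month + intercept and extrapolate to month 6
slope, intercept = np.polyfit(months, revenue, 1)
forecast = slope * 6 + intercept
print(round(forecast, 2))  # 18.6
```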
{ "domain": "datascience.stackexchange", "id": 7809, "tags": "lstm, prediction, sequence" }
Scattering from a box potential of width $L$ doesn't reproduce a step potential in the limit $L \rightarrow \infty$
Question: Consider the scattering of a quantum particle in one dimension, caused by a step in the potential (this appears in many undergrad level QM books): $$ V(x) = \begin{cases} V_1 & x<0 \\ V_2 & x>0\end{cases}. $$ The particle is incident from the left, so its wavefunction is: $$ \psi(x) = \begin{cases} e^{i k_1 x} + r e^{-i k_1 x} & x<0 \\ t e^{i k_2 x} & x>0\end{cases}, $$ where $k_i =\sqrt{2m(E-V_i)}/\hbar$. Matching the wavefunction and its derivative at $x=0$ gives: $$ r = \frac{k_1-k_2}{k_1+k_2} ~~~;~~~ t = \frac{2 k_1}{k_1+k_2}.$$ Now we put another step in the potential at some distance $L$, which makes it a box potential: $$ V(x) = \begin{cases} V_1 & x<0 \\ V_2 & 0<x<L \\ V_1 & L<x\end{cases}. $$ We solve this in a similar manner as before, with the wavefunction: $$ \psi(x) = \begin{cases} e^{i k_1 x} + r e^{-i k_1 x} & x<0 \\ a e^{i k_2 x} + b e^{-i k_2 x} & 0<x<L \\ t e^{i k_1 x} & L<x \end{cases}. $$ Matching the wavefunction and its derivative at $x=0,L$ gives: $$ r = \frac{k_1^2-k_2^2}{k_1^2+k_2^2+2 i k_1 k_2 \cot{(k_2 L)} } ~~~;~~~ t = \text{(something)}.$$ How come the second scattering problem doesn't reproduce the first scattering problem in the limit $L \rightarrow \infty$? I'm looking only at the value of $r$. I send a particle in, it scatters, and I get something back with an amplitude $r$. It seems unphysical that if the potential changed at $x=L$, it changes the scattering at $x=0$, no matter how far $L$ is. Answer: In 1D particles propagate without decay. That is, the propagator of a free particle (or of a particle under constant potential lower than its energy, see comment below) does not decay with distance. Therefore, the particle will "sense" any change in the potential at any distance. Think of the ripples created by a stone thrown into a lake. If the lake's surface is the usual 2D, the waves decay with distance and therefore you'll see a different pattern for different sizes of lakes. 
If, on the other hand, the lake is 1D (a wave guide), then the waves do not decay with distance and you'll feel the boundary of your lake no matter how far it is (the only thing that'll change is the phase with which the reflected waves will return - exactly like in your calculation). Comment BTW - If $E<V_2$ then $k_2$ is imaginary, and therefore $\cot(k_2L)\to -i$ for $L\to\infty$ and the two expressions coincide. This is because in this case the propagator does decay with distance.
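The limit described in the comment is easy to check numerically. A sketch with illustrative values (my own choices, with $\hbar = m = 1$), showing that for $E < V_2$ the box reflection amplitude converges to the step result as $L$ grows:

```python
# Numerical check (illustrative values, hbar = m = 1) that the box result
# approaches the step result for E < V2 as L grows, since cot(k2*L) -> -i.
import cmath

hbar = m = 1.0
E, V1, V2 = 1.0, 0.0, 2.0                 # E < V2, so k2 is imaginary
k1 = cmath.sqrt(2*m*(E - V1))/hbar
k2 = cmath.sqrt(2*m*(E - V2))/hbar        # purely imaginary

def r_box(L):
    cot = cmath.cos(k2*L)/cmath.sin(k2*L)
    return (k1**2 - k2**2)/(k1**2 + k2**2 + 2j*k1*k2*cot)

r_step = (k1 - k2)/(k1 + k2)
print(abs(r_box(30) - r_step))            # ~0: the two amplitudes agree
```

In this regime the step amplitude has modulus 1 (total reflection), so only the phase of the finite-$L$ result differs from it, and that difference dies off exponentially with $L$.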
{ "domain": "physics.stackexchange", "id": 6215, "tags": "quantum-mechanics, scattering" }
How do I find an expectation value for an electron's magnetic moment?
Question: Given a spin state: $|s\rangle$ = some linear combination of $|\uparrow\rangle$ and $|\downarrow\rangle$, possibly with an imaginary component. How do you get from the definition of a magnetic moment operator $\hat{\mu}_e = g\mu_B\hat{\sigma}$ to the expectation value of the electron spin magnetic moment? $g$ is the gyromagnetic factor and is approximately 2.0023. $\mu_B =\frac{e\hbar}{2m_o}$ is the Bohr magneton. $\hat{\sigma}$ is the Pauli spin matrix. I feel like this is the operation $\langle s| \hat{\mu}_e |s\rangle$. If it is, I need an example walk-through with some arbitrary complex $|s\rangle$. Answer: Let $$|s\rangle = \alpha|\uparrow\rangle + \beta|\downarrow\rangle$$ We assume that $s$ is normalized i.e. $\langle s | s\rangle = 1 \implies |\alpha|^2+|\beta|^2 = 1$. Then the expectation value of $\hat{\mu}_e$ is: $$\langle\hat{\mu}_e\rangle = \langle s|\hat{\mu}_e|s\rangle$$ $$\implies \langle\hat{\mu}_e\rangle = |\alpha|^2\langle\uparrow| \hat{\mu}_e |\uparrow\rangle + |\beta|^2\langle\downarrow| \hat{\mu}_e |\downarrow\rangle + \alpha^{\ast}\beta\langle\uparrow| \hat{\mu}_e |\downarrow\rangle + \alpha\beta^{\ast}\langle\downarrow| \hat{\mu}_e |\uparrow\rangle$$ Now, $\hat{\mu}_e = g\mu_B\hat{\sigma}$. We use this together with $\langle \uparrow|\hat{\sigma}|\uparrow\rangle = 1$, $\langle \downarrow|\hat{\sigma}|\downarrow\rangle = -1$, and $\langle \uparrow|\hat{\sigma}|\downarrow\rangle = \langle \downarrow|\hat{\sigma}|\uparrow\rangle = 0$, to get: $$\langle\hat{\mu}_e\rangle = g\mu_B(|\alpha|^2 - |\beta|^2)$$ Note that this expression for the expectation value is consistent with interpreting $|\alpha|^2$ and $|\beta|^2$ as probabilities of finding the spin to be $\uparrow$ and $\downarrow$ respectively, as required.
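As a concrete check, the same expectation value can be computed numerically with the Pauli $z$ matrix. A sketch with an arbitrary (partly imaginary) normalized state, setting $g\mu_B = 1$ for simplicity (both choices are mine):

```python
import numpy as np

g_muB = 1.0  # g * mu_B, set to 1 for simplicity
sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)

alpha, beta = 0.6, 0.8j                 # |alpha|^2 + |beta|^2 = 1
s = np.array([alpha, beta])

# <s| mu |s> = g*mu_B * (|alpha|^2 - |beta|^2)
expectation = (s.conj() @ (g_muB * sigma_z) @ s).real
print(expectation)  # ≈ 0.36 - 0.64 = -0.28
```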
{ "domain": "physics.stackexchange", "id": 21436, "tags": "quantum-mechanics, angular-momentum, quantum-spin, bloch-sphere" }
What will happen to a man if he hangs from the power cable of a train using one hand, and what if he uses two hands?
Question: If a man holds and hangs from the power cable of a train with one hand, will he survive? And what if he holds and hangs from the electric wire using two hands, without touching the ground? I think in the first case he will survive, as electric current does not pass through him if he holds it with one hand. But in the second case, current passes through him if he holds it with two hands, and he may not survive. Is the second case right? Answer: An easy way to determine whether there's current passing through the person is to look at the voltage difference between the two points where the person connects to the circuit, because a difference in voltage is what drives current through (same as no water pressure difference, no water flow). When touching a single wire with two hands, because the resistance of the wire is close to zero, the voltage difference is also negligible. Therefore the person is safe, because there's no current passing through him. As an extension, if the person touches two different wires with two hands at the same time instead, he will most likely get an electric shock, because the voltage difference is not zero. A grounded person who touches a wire with one hand will get shocked for the same reason. Hope it helps :)
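To put rough numbers on this (all values below are illustrative assumptions, not from the question): the voltage drop along the short stretch of cable between the two hands is tiny, so almost no current is driven through the body.

```python
# Illustrative estimate of hand-to-hand current when gripping one wire
I_wire = 100.0      # A, current flowing in the cable (assumed)
R_per_m = 1e-4      # ohm per metre of a thick conductor (assumed)
span = 0.5          # m, distance between the hands (assumed)
R_body = 1000.0     # ohm, hand-to-hand body resistance (assumed)

dV = I_wire * R_per_m * span   # voltage across the grip: ~5 mV
I_body = dV / R_body           # current through the body: ~5 microamps
print(dV, I_body)
```

Five microamps is far below the threshold of perception, which is why the single-wire grip is harmless.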
{ "domain": "physics.stackexchange", "id": 30482, "tags": "electricity, electric-current" }
Energy and force relation
Question: In a simple machine, we apply less force over more displacement to deliver the same energy that the load needs. If energy is related to tiredness, i.e. the more energy you lose the more tired you feel, then why don't we feel as tired when applying the smaller force? Answer: The energy used by your body and muscles is not the same as the work done on an object you manipulate. The body takes more energy to do the work because it is an engine with less than 100% efficiency. Also, being "out of breath" or feeling muscle fatigue is a complicated biological response, and not a good measure of physical quantities like work. The relationship between force $F$, displacement $d$ and work $W$ (in a straight line) is $$ W=Fd $$ So you can apply a lower force for a longer distance and output the same work.
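For instance (illustrative numbers of my own): halving the force while doubling the distance leaves the work done on the load unchanged.

```python
# Same work done on the load, two ways: W = F * d
F_direct, d_direct = 100.0, 1.0   # lift directly: 100 N over 1 m
F_lever,  d_lever  = 50.0,  2.0   # via a simple machine: 50 N over 2 m

W_direct = F_direct * d_direct
W_lever = F_lever * d_lever
print(W_direct, W_lever)  # 100.0 100.0 -- both 100 J
```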
{ "domain": "physics.stackexchange", "id": 88147, "tags": "newtonian-mechanics, forces, energy, biology, displacement" }
Why does openni_tracker not show in Ubuntu software center
Question: Why does openni_tracker not show in Ubuntu software center sudo apt-get install openni_tracker [sudo] password for viki: Reading package lists... Done Building dependency tree Reading state information... Done E: Unable to locate package openni_tracker Originally posted by rnunziata on ROS Answers with karma: 713 on 2014-01-05 Post score: 0 Answer: I believe there are no binaries for this feature. Originally posted by rnunziata with karma: 713 on 2014-01-06 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 16577, "tags": "ros" }
Why does Joule heating not occur when no current flows through a conductor?
Question: Joule heating happens whenever the conduction electrons transfer kinetic energy to the conductor's atoms through collisions, causing the conductor's atoms to increase their kinetic and vibrational energy, which manifests as heat. Then why doesn't it happen when no current is flowing through the conductor? When there is no current, the electrons are still moving randomly at a speed of $\sim 10^5~\mathrm{m/s}$, but with a zero average velocity. Then why don't these electrons collide with the atomic ions making up the system and transfer energy to them, causing them to heat up, even when no net current is flowing? Answer: When no current is flowing, the system is in thermal equilibrium. The electrons do transfer kinetic energy to the atoms through collisions, but the atoms also transfer kinetic energy to the electrons, and these two processes happen at the same rate, so there's no net energy transfer and the system neither heats up nor cools down. This is just the same as any other case of thermal equilibrium: effectively, the electrons and the atoms are at the same temperature, and that's why there's no heat flow. However, when you switch the voltage on there is an electric field accelerating the electrons, which increases their kinetic energy. Now they have, on average, more kinetic energy to give to the atoms than the atoms have to give to them. This means that there is a net transfer of energy from the electrons to the atoms. Moving an electron in an electric field changes its potential energy, and this is where the energy for the heating ultimately comes from.
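This is consistent with the usual dissipation formula $P = I^2 R$: zero net current means zero net heating, regardless of how fast the electrons move randomly. A trivial numeric illustration (values assumed):

```python
# Joule heating power P = I^2 * R for a resistor (illustrative values)
R = 10.0                      # ohm
for I in (0.0, 2.0):          # no net current, then 2 A
    P = I**2 * R
    print(I, P)               # 0 A -> 0 W; 2 A -> 40 W
```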
{ "domain": "physics.stackexchange", "id": 9543, "tags": "thermodynamics, electricity, electrons" }
60GB file loading domain to base domain + tld parser
Question: I have a 60GB file of domain names. Some of them are repeated several times and some of them have subdomains that need stripping out too. For example, this might give you a better idea of what the file looks like:

    email.example.com
    test.staging.lol.uni.ac.uk
    hello.test.com
    email.example.com
    test.com

After parsing the file I would like to create another file that contains:

    example.com
    uni.ac.uk
    test.com

As you can see, duplicate base domains are removed and we're left with just the base domain and the top-level domain, with no duplicates. Here is my attempt so far, and while it works, it's dog slow. string[] topLevelDomains = new string[] { ".travelersinsurance", ".accountants", ".lamborghini", ".progressive", ".productions", ".theguardian", ".blackfriday", ".engineering", ".enterprises", ".photography", ".investments", ".motorcycles", ".blockbuster", ".contractors", ".rightathome", ".schaeffler", ".foundation", ".swiftcover", ".apartments", ".associates", ".supersport", ".immobilien", ".industries", ".healthcare", ".africa.com", ".technology", ".eurovision", ".nationwide", ".consulting", ".restaurant", ".republican", ".accountant", ".protection", ".properties", ".university", ".directory", ".community", ".education", ".montblanc", ".accenture", ".microsoft", ".melbourne", ".marshalls", ".marketing", ".financial", ".lifestyle", ".frontdoor", ".lancaster", ".solutions", ".barcelona", ".insurance", ".institute", ".analytics", ".police.uk", ".homegoods", ".amsterdam", ".security", ".partners", ".computer", ".services", ".telecity", ".lighting", ".showtime", ".football", ".clothing", ".pharmacy", ".training", ".cleaning", ".attorney", ".budapest", ".observer", ".holdings", ".brussels", ".supplies", ".broadway", ".guardian", ".graphics", ".download", ".capetown", ".discover", ".discount", ".uconnect", ".diamonds", ".ventures", ".business", ".builders", ".democrat", ".pictures", ".feedback", ".plumbing", ".exchange", ".mortgage", ".esurance", ".software", 
".engineer", ".catering", ".property", ".delivery", ".genting", ".gallery", ".hosting", ".hoteles", ".holiday", ".hangout", ".hamburg", ".guitars", ".watches", ".weather", ".website", ".wedding", ".whoswho", ".windows", ".winners", ".youtube", ".zuerich", ".flowers", ".florist", ".flights", ".fitness", ".fishing", ".finance", ".markets", ".fashion", ".farmers", ".express", ".exposed", ".network", ".domains", ".cruises", ".organic", ".courses", ".coupons", ".country", ".cooking", ".contact", ".company", ".cologne", ".college", ".citadel", ".pioneer", ".caravan", ".capital", ".recipes", ".brother", ".rentals", ".booking", ".science", ".singles", ".surgery", ".systems", ".theatre", ".tickets", ".trading", ".academy", ".kitchen", ".com.vc", ".aip.ee", ".org.ee", ".org.uz", ".net.uz", ".com.uz", ".net.lr", ".fie.ee", ".org.uy", ".net.uy", ".mil.uy", ".gub.uy", ".edu.uy", ".com.uy", ".com.eg", ".nsn.us", ".med.ee", ".isa.us", ".fed.us", ".dni.us", ".edu.eg", ".org.lr", ".plc.uk", ".org.uk", ".nhs.uk", ".net.uk", ".gov.lr", ".ltd.uk", ".gov.uk", ".edu.lr", ".com.lr", ".eun.eg", ".org.ug", ".com.ug", ".edu.bh", ".com.ar", ".com.bh", ".grp.lk", ".gov.bf", ".ltd.lk", ".gov.eg", ".web.lk", ".soc.lk", ".com.vi", ".k12.vi", ".net.vi", ".org.vi", ".lib.ee", ".com.vn", ".ngo.lk", ".net.vn", ".org.vn", ".edu.vn", ".edu.lk", ".gov.vn", ".org.lk", ".int.vn", ".com.lk", ".int.lk", ".biz.vn", ".net.lk", ".sch.lk", ".pro.vn", ".com.vu", ".edu.vu", ".gov.lk", ".net.vu", ".org.vu", ".gov.ee", ".edu.ee", ".cn.com", ".com.ws", ".gov.lc", ".net.ws", ".org.ws", ".edu.lc", ".org.lc", ".gov.ws", ".mar.it", ".net.lc", ".edu.ws", ".com.lc", ".photos", ".mil.ec", ".gob.ec", ".org.lb", ".alt.za", ".net.lb", ".edu.za", ".gov.za", ".gov.lb", ".law.za", ".edu.lb", ".com.lb", ".mil.za", ".net.za", ".ngo.za", ".nis.za", ".org.hu", ".org.la", ".nom.za", ".com.la", ".per.la", ".gov.la", ".org.za", ".edu.la", ".web.za", ".gov.ec", ".edu.ec", ".abarth", ".org.ua", ".net.ua", ".physio", ".gov.ua", 
".edu.ua", ".com.ua", ".mil.eg", ".net.la", ".int.la", ".piaget", ".com.kz", ".org.ec", ".mil.tz", ".mil.kz", ".abbott", ".abbvie", ".gov.kz", ".net.kz", ".edu.kz", ".pro.ec", ".med.ec", ".k12.ec", ".active", ".idv.tw", ".org.tw", ".net.tw", ".com.tw", ".mil.tw", ".gov.tw", ".edu.tw", ".net.eg", ".org.eg", ".edu.tt", ".gov.tt", ".org.kz", ".fin.ec", ".net.ec", ".com.de", ".net.ky", ".com.ec", ".africa", ".int.tt", ".pro.tt", ".agency", ".biz.tt", ".net.tt", ".org.tt", ".com.tt", ".org.ky", ".sci.eg", ".travel", ".art.dz", ".com.ky", ".kep.tr", ".edu.tr", ".k12.tr", ".mil.tr", ".pol.tr", ".bel.tr", ".gov.tr", ".tel.tr", ".pol.dz", ".bbs.tr", ".gov.ky", ".edu.ky", ".com.se", ".gen.tr", ".web.tr", ".org.tr", ".net.tr", ".biz.tr", ".airbus", ".com.tr", ".com.es", ".nom.es", ".mil.to", ".edu.to", ".org.to", ".net.to", ".gov.to", ".com.to", ".org.es", ".airtel", ".alipay", ".edu.dz", ".alsace", ".rnu.tn", ".rns.tn", ".alstom", ".gov.dz", ".pictet", ".de.com", ".anquan", ".org.tn", ".net.tn", ".nat.tn", ".net.dz", ".ind.tn", ".gov.tn", ".fin.tn", ".ens.tn", ".com.tn", ".gob.es", ".edu.tm", ".mil.tm", ".gov.tm", ".nom.tm", ".net.tm", ".org.tm", ".org.bb", ".com.tm", ".edu.es", ".gov.tl", ".com.et", ".gov.et", ".web.tj", ".org.dz", ".org.tj", ".nic.tj", ".net.tj", ".net.bb", ".mil.tj", ".int.tj", ".gov.tj", ".lom.it", ".edu.tj", ".com.tj", ".eu.com", ".biz.tj", ".gov.bb", ".org.et", ".edu.bb", ".net.th", ".com.bb", ".lig.it", ".gb.com", ".laz.it", ".fvg.it", ".edu.et", ".biz.et", ".aramco", ".emr.it", ".net.et", ".com.dz", ".org.sz", ".biz.bb", ".gb.net", ".com.fr", ".org.sy", ".com.sy", ".mil.sy", ".net.sy", ".gov.sy", ".edu.sy", ".web.do", ".gov.sx", ".nom.fr", ".red.sv", ".org.sv", ".gob.sv", ".edu.sv", ".com.sv", ".prd.fr", ".sld.do", ".org.do", ".hu.com", ".author", ".net.do", ".hu.net", ".mil.do", ".spb.su", ".gov.do", ".cam.it", ".gob.do", ".nov.su", ".edu.do", ".com.do", ".art.do", ".bayern", ".gov.dm", ".edu.dm", ".org.dm", ".net.dm", ".com.dm", ".jp.net", 
".cal.it", ".berlin", ".pro.cy", ".bharti", ".mil.kr", ".org.cy", ".net.cy", ".kr.com", ".bas.it", ".blanco", ".ltd.cy", ".gov.cy", ".com.cy", ".org.st", ".net.st", ".mil.st", ".gov.st", ".biz.cy", ".edu.st", ".no.com", ".com.st", ".qc.com", ".ru.com", ".abr.it", ".org.so", ".net.so", ".com.so", ".gov.cx", ".org.cw", ".net.cw", ".org.sn", ".edu.it", ".edu.sn", ".com.sn", ".art.sn", ".tra.kp", ".bostik", ".org.sl", ".gov.sl", ".edu.sl", ".net.sl", ".com.sl", ".cci.fr", ".edu.cw", ".broker", ".rep.kp", ".mil.sh", ".org.sh", ".gov.sh", ".net.sh", ".com.sh", ".com.cw", ".per.sg", ".edu.sg", ".gov.sg", ".org.sg", ".net.sg", ".com.sg", ".inf.cu", ".gov.cu", ".net.cu", ".org.cu", ".edu.cu", ".com.cu", ".org.kp", ".camera", ".gov.kp", ".com.ge", ".edu.kp", ".com.kp", ".com.ba", ".edu.ge", ".org.se", ".gov.ge", ".gov.kn", ".org.ge", ".mil.ge", ".edu.kn", ".net.ge", ".career", ".org.kn", ".net.kn", ".pvt.ge", ".gov.it", ".net.gg", ".org.gg", ".fhv.se", ".quebec", ".sa.com", ".com.gh", ".edu.gh", ".gov.gh", ".org.gh", ".casino", ".racing", ".mil.gh", ".mil.ba", ".com.gi", ".ltd.gi", ".gov.ba", ".gov.sd", ".realty", ".med.sd", ".edu.sd", ".org.sd", ".net.sd", ".com.sd", ".gov.gi", ".edu.sc", ".org.sc", ".net.sc", ".gov.sc", ".com.sc", ".mod.gi", ".org.sb", ".net.sb", ".gov.sb", ".edu.sb", ".com.sb", ".edu.gi", ".sch.sa", ".edu.sa", ".pub.sa", ".med.sa", ".gov.sa", ".org.sa", ".net.sa", ".com.sa", ".org.gi", ".web.co", ".mil.rw", ".int.rw", ".edu.ba", ".com.rw", ".int.is", ".edu.rw", ".net.rw", ".gov.rw", ".com.km", ".rec.co", ".mil.ru", ".gov.ru", ".org.co", ".nom.co", ".center", ".snz.ru", ".net.co", ".mil.co", ".int.co", ".chanel", ".nkz.ru", ".ass.km", ".gov.co", ".mil.km", ".edu.km", ".chrome", ".church", ".kms.ru", ".circle", ".org.is", ".cmw.ru", ".prd.km", ".edu.co", ".claims", ".gov.km", ".clinic", ".nom.km", ".com.co", ".org.km", ".vrn.ru", ".gov.is", ".coffee", ".comsec", ".condos", ".coupon", ".credit", ".com.ki", ".reisen", ".udm.ru", ".gov.ki", ".org.ki", 
".net.ki", ".biz.ki", ".tsk.ru", ".edu.ki", ".net.ba", ".tom.ru", ".mil.kg", ".dating", ".datsun", ".stv.ru", ".gov.kg", ".spb.ru", ".edu.kg", ".com.kg", ".net.kg", ".dealer", ".org.kg", ".degree", ".rnd.ru", ".ptz.ru", ".org.ba", ".edu.is", ".dental", ".edu.an", ".design", ".com.is", ".nsk.ru", ".net.is", ".nov.ru", ".org.an", ".direct", ".net.an", ".msk.ru", ".se.com", ".com.an", ".org.al", ".biz.az", ".repair", ".doosan", ".report", ".mil.jo", ".gov.jo", ".sch.jo", ".edu.jo", ".net.jo", ".dunlop", ".org.jo", ".khv.ru", ".dupont", ".durban", ".com.jo", ".pro.az", ".org.je", ".net.je", ".sch.ir", ".review", ".emerck", ".energy", ".jar.ru", ".org.ir", ".net.ir", ".gov.ie", ".net.al", ".estate", ".gov.ir", ".events", ".expert", ".mil.az", ".us.org", ".mil.al", ".family", ".cbg.ru", ".rocher", ".web.id", ".bir.ru", ".mil.cn", ".org.cn", ".se.net", ".net.cn", ".gov.cn", ".edu.cn", ".rogers", ".viajes", ".org.ru", ".net.ru", ".int.ru", ".edu.ru", ".com.ru", ".net.iq", ".com.gl", ".gov.al", ".gov.rs", ".edu.az", ".edu.rs", ".org.rs", ".viking", ".edu.gl", ".www.ro", ".com.cn", ".org.az", ".net.cm", ".rec.ro", ".flickr", ".nom.ro", ".villas", ".edu.al", ".org.ro", ".com.ro", ".net.gl", ".nom.re", ".gov.cm", ".com.re", ".org.gl", ".sch.qa", ".org.qa", ".net.qa", ".com.cm", ".mil.qa", ".gov.qa", ".edu.qa", ".com.qa", ".gov.az", ".org.py", ".net.py", ".mil.py", ".gov.py", ".edu.py", ".com.al", ".com.py", ".com.gn", ".mil.cl", ".int.az", ".net.az", ".ryukyu", ".com.az", ".safety", ".edu.gn", ".virgin", ".com.pt", ".org.ai", ".int.pt", ".edu.pt", ".org.pt", ".gov.pt", ".net.pt", ".gov.gn", ".net.ps", ".org.ps", ".com.ps", ".plo.ps", ".sec.ps", ".gov.ps", ".edu.ps", ".org.gn", ".sakura", ".gob.cl", ".gov.cl", ".futbol", ".vision", ".org.iq", ".gallup", ".net.gn", ".com.aw", ".net.ai", ".est.pr", ".int.ci", ".garden", ".biz.pr", ".pro.pr", ".sanofi", ".edu.pr", ".gov.pr", ".org.pr", ".net.pr", ".com.pr", ".com.gp", ".net.gp", ".net.pn", ".edu.pn", ".org.pn", ".school", 
".gov.pn", ".net.ci", ".edu.gp", ".com.iq", ".george", ".schule", ".edu.ci", ".mil.iq", ".edu.iq", ".giving", ".gov.iq", ".com.ai", ".uk.com", ".global", ".com.io", ".waw.pl", ".off.ai", ".com.ci", ".voting", ".org.ci", ".gov.cd", ".vic.au", ".nom.ag", ".google", ".uk.net", ".tas.au", ".eu.int", ".gratis", ".qld.au", ".us.com", ".voyage", ".uy.com", ".mil.in", ".vuelos", ".gov.in", ".walter", ".nsw.au", ".res.in", ".health", ".warman", ".edu.in", ".hermes", ".shouji", ".hiphop", ".act.au", ".sch.id", ".hockey", ".nic.in", ".ind.in", ".webcam", ".gov.bz", ".asn.au", ".edu.bz", ".org.bz", ".net.bz", ".hughes", ".gov.au", ".com.bz", ".gen.in", ".com.by", ".mil.by", ".gov.by", ".org.bw", ".net.ag", ".org.in", ".imamat", ".net.in", ".org.bt", ".net.bt", ".gov.bt", ".insure", ".org.ag", ".intuit", ".edu.ac", ".edu.bt", ".com.bt", ".gov.bs", ".edu.bs", ".jaguar", ".org.bs", ".net.bs", ".com.bs", ".zlg.br", ".net.id", ".vet.br", ".com.ag", ".tur.br", ".joburg", ".trd.br", ".tmp.br", ".teo.br", ".juegos", ".kaufen", ".srv.br", ".slg.br", ".rec.br", ".kinder", ".kindle", ".edu.af", ".qsl.br", ".psi.br", ".psc.br", ".pro.br", ".kyknet", ".za.com", ".elk.pl", ".lancia", ".ppg.br", ".org.br", ".latino", ".odo.br", ".lawyer", ".ntr.br", ".net.af", ".lefrak", ".biz.id", ".not.br", ".edu.au", ".net.br", ".mus.br", ".soccer", ".mil.br", ".org.af", ".med.br", ".com.af", ".mat.br", ".social", ".lel.br", ".living", ".org.au", ".leg.br", ".gov.af", ".locker", ".net.au", ".jus.br", ".jor.br", ".london", ".mil.id", ".gr.com", ".inf.br", ".ind.br", ".imb.br", ".gov.br", ".org.im", ".ggf.br", ".luxury", ".com.au", ".madrid", ".g12.br", ".maison", ".makeup", ".fst.br", ".net.im", ".market", ".mattel", ".fot.br", ".fnd.br", ".ven.it", ".vda.it", ".far.br", ".eti.br", ".etc.br", ".esp.br", ".vao.it", ".eng.br", ".emp.br", ".edu.br", ".eco.br", ".ecn.br", ".gov.pl", ".com.ac", ".mobily", ".cnt.br", ".mil.ae", ".cng.br", ".cim.br", ".sos.pl", ".bmd.br", ".monash", ".sex.pl", ".rel.pl", 
".gov.ae", ".in.net", ".mormon", ".com.im", ".nom.pl", ".moscow", ".mil.pl", ".bio.br", ".xihuan", ".ato.br", ".art.br", ".gsm.pl", ".arq.br", ".edu.pl", ".biz.pl", ".gov.as", ".atm.pl", ".mutual", ".aid.pl", ".org.pl", ".net.pl", ".com.pl", ".org.gp", ".sch.ae", ".gos.pk", ".gop.pk", ".gon.pk", ".gok.pk", ".gob.pk", ".gov.pk", ".web.pk", ".biz.pk", ".fam.pk", ".org.pk", ".edu.pk", ".net.pk", ".com.pk", ".nagoya", ".com.gr", ".mil.ph", ".ngo.ph", ".edu.ph", ".gov.ph", ".org.ph", ".net.ph", ".com.ph", ".edu.gr", ".edu.pf", ".org.pf", ".com.pf", ".net.gr", ".net.pe", ".com.pe", ".org.pe", ".mil.pe", ".nom.pe", ".gob.pe", ".edu.pe", ".org.gr", ".nom.pa", ".med.pa", ".abo.pa", ".ing.pa", ".net.pa", ".edu.pa", ".sld.pa", ".org.pa", ".com.pa", ".gob.pa", ".xperia", ".gov.gr", ".com.gt", ".pro.om", ".org.om", ".net.om", ".natura", ".med.om", ".gov.om", ".edu.om", ".com.om", ".studio", ".edu.gt", ".agr.br", ".adv.br", ".org.nz", ".net.nz", ".adm.br", ".mil.nz", ".org.ae", ".mil.bo", ".iwi.nz", ".net.bo", ".yachts", ".gen.nz", ".org.bo", ".cri.nz", ".com.br", ".gob.gt", ".ind.gt", ".com.nr", ".net.nr", ".org.nr", ".edu.nr", ".gov.nr", ".int.bo", ".biz.nr", ".mil.gt", ".net.gt", ".net.ae", ".org.gt", ".gob.bo", ".mil.ng", ".gov.ng", ".sch.ng", ".org.ng", ".net.ng", ".gov.bo", ".edu.ng", ".com.ng", ".co.com", ".edu.bo", ".umb.it", ".nom.ad", ".nissan", ".supply", ".web.nf", ".rec.nf", ".per.nf", ".net.nf", ".com.nf", ".com.gy", ".net.gy", ".com.hk", ".edu.hk", ".gov.hk", ".org.na", ".com.na", ".com.bo", ".sydney", ".tos.it", ".taa.it", ".target", ".org.ac", ".mil.ac", ".pro.na", ".org.bm", ".idv.hk", ".net.bm", ".mil.my", ".edu.my", ".gov.my", ".org.my", ".net.my", ".com.my", ".net.hk", ".net.mx", ".edu.mx", ".gob.mx", ".org.mx", ".com.mx", ".org.hk", ".org.mw", ".net.mw", ".gov.bm", ".int.mw", ".gov.mw", ".edu.mw", ".edu.bm", ".com.mw", ".tattoo", ".biz.mw", ".tur.ar", ".com.hn", ".pro.mv", ".org.mv", ".net.mv", ".com.bm", ".office", ".mil.mv", ".int.mv", ".olayan", 
".gov.mv", ".edu.mv", ".org.ar", ".com.mv", ".biz.mv", ".org.bi", ".edu.hn", ".museum", ".sic.it", ".net.ac", ".sex.hu", ".gov.mu", ".org.mu", ".net.mu", ".com.mu", ".org.hn", ".org.mt", ".net.mt", ".edu.mt", ".com.mt", ".net.hn", ".org.ms", ".net.ms", ".gov.ms", ".edu.ms", ".com.ms", ".mil.hn", ".gov.mr", ".gob.hn", ".net.ar", ".mil.ar", ".online", ".gov.mo", ".edu.mo", ".org.mo", ".net.mo", ".com.mo", ".com.hr", ".org.mn", ".edu.mn", ".gov.mn", ".com.ht", ".edu.bi", ".org.ml", ".net.ml", ".gov.ml", ".com.bi", ".edu.ml", ".com.ml", ".oracle", ".orange", ".inf.mk", ".gov.mk", ".edu.mk", ".net.mk", ".org.mk", ".com.mk", ".tennis", ".otsuka", ".int.ar", ".gov.ar", ".com.mg", ".mil.mg", ".edu.mg", ".gob.ar", ".prd.mg", ".gov.mg", ".nom.mg", ".org.mg", ".net.ht", ".gov.bh", ".its.me", ".gov.me", ".sar.it", ".edu.me", ".org.me", ".net.me", ".gov.ac", ".pro.ht", ".org.ht", ".org.bh", ".tienda", ".med.ht", ".pug.it", ".edu.ar", ".org.ma", ".gov.ma", ".net.ma", ".pmn.it", ".art.ht", ".ae.org", ".org.ly", ".med.ly", ".sch.ly", ".edu.ly", ".plc.ly", ".gov.ly", ".net.ly", ".com.ly", ".net.bh", ".ar.com", ".asn.lv", ".net.lv", ".br.com", ".mil.lv", ".org.lv", ".gov.lv", ".edu.lv", ".com.lv", ".pol.ht", ".mol.it", ".gov.lt", ".edu.ht", ".org.ls", ".msk.su", ".total", ".ge.it", ".tours", ".fr.it", ".pb.ao", ".co.ao", ".fm.it", ".trade", ".trust", ".fi.it", ".fg.it", ".og.ao", ".gv.ao", ".fe.it", ".fc.it", ".ed.ao", ".en.it", ".tunes", ".cz.it", ".ct.it", ".cs.it", ".cr.it", ".co.it", ".cn.it", ".cl.it", ".ci.it", ".ch.it", ".vegas", ".ce.it", ".cb.it", ".video", ".vista", ".ca.it", ".co.uz", ".co.vi", ".me.uk", ".co.uk", ".ac.uk", ".ne.ug", ".go.ug", ".sc.ug", ".ac.ug", ".or.ug", ".co.ug", ".zt.ua", ".zp.ua", ".vn.ua", ".uz.ua", ".te.ua", ".sm.ua", ".ac.vn", ".sb.ua", ".rv.ua", ".pl.ua", ".od.ua", ".mk.ua", ".lv.ua", ".lt.ua", ".lg.ua", ".kv.ua", ".ks.ua", ".kr.ua", ".km.ua", ".ac.za", ".co.za", ".kh.ua", ".if.ua", ".dp.ua", ".dn.ua", ".cv.ua", ".cr.ua", ".cn.ua", ".ck.ua", 
".tm.za", ".in.ua", ".tv.tz", ".sc.tz", ".or.tz", ".ne.tz", ".me.tz", ".go.tz", ".co.tz", ".ac.tz", ".actor", ".adult", ".aetna", ".co.tt", ".nc.tr", ".dr.tr", ".av.tr", ".tv.tr", ".co.tm", ".go.tj", ".co.tj", ".ac.tj", ".or.th", ".mi.th", ".in.th", ".go.th", ".co.th", ".ac.th", ".archi", ".ac.sz", ".co.sz", ".audio", ".autos", ".azure", ".baidu", ".beats", ".tm.cy", ".bible", ".bingo", ".black", ".boats", ".co.st", ".tm.fr", ".ac.cy", ".boots", ".bosch", ".build", ".tm.se", ".sa.cr", ".pp.se", ".or.cr", ".cards", ".go.cr", ".fi.cr", ".ed.cr", ".co.gg", ".co.cr", ".fh.se", ".bd.se", ".ac.se", ".ac.cr", ".tv.sd", ".co.rw", ".ac.rw", ".co.gl", ".chase", ".cheap", ".chloe", ".cisco", ".citic", ".click", ".cloud", ".coach", ".codes", ".crown", ".tw.cn", ".mo.cn", ".cymru", ".hk.cn", ".dabur", ".zj.cn", ".dance", ".yn.cn", ".xz.cn", ".xj.cn", ".tj.cn", ".sx.cn", ".deals", ".sn.cn", ".sh.cn", ".sd.cn", ".sc.cn", ".qh.cn", ".nx.cn", ".nm.cn", ".ln.cn", ".dodge", ".jx.cn", ".js.cn", ".jl.cn", ".drive", ".hn.cn", ".hl.cn", ".dubai", ".hi.cn", ".he.cn", ".hb.cn", ".ha.cn", ".earth", ".gx.cn", ".edeka", ".email", ".epost", ".epson", ".gz.cn", ".gs.cn", ".gd.cn", ".fj.cn", ".cq.cn", ".faith", ".bj.cn", ".ah.cn", ".fedex", ".final", ".pp.ru", ".ac.ru", ".in.rs", ".ac.rs", ".co.rs", ".ac.cn", ".nt.ro", ".tm.ro", ".ac.gn", ".co.cm", ".go.pw", ".ed.pw", ".or.pw", ".ne.pw", ".co.pw", ".forex", ".forum", ".co.cl", ".md.ci", ".gallo", ".ac.pr", ".games", ".go.ci", ".co.pn", ".ac.ci", ".ed.ci", ".gifts", ".gives", ".glade", ".glass", ".co.ci", ".globo", ".gmail", ".or.ci", ".gc.ca", ".yk.ca", ".sk.ca", ".qc.ca", ".pe.ca", ".green", ".gripe", ".group", ".gucci", ".on.ca", ".guide", ".nu.ca", ".nt.ca", ".ns.ca", ".nl.ca", ".nf.ca", ".nb.ca", ".mb.ca", ".bc.ca", ".ab.ca", ".homes", ".horse", ".house", ".of.by", ".iinet", ".ikano", ".co.bw", ".intel", ".irish", ".jetzt", ".tv.br", ".koeln", ".kyoto", ".av.it", ".at.it", ".weber", ".weibo", ".ar.it", ".aq.it", ".ap.it", ".ao.it", ".an.it", 
".al.it", ".works", ".ag.it", ".world", ".ac.ae", ".xerox", ".yahoo", ".co.ae", ".zippo", ".id.ir", ".co.ir", ".ac.ir", ".ac.in", ".za.bz", ".co.in", ".tv.im", ".tt.im", ".co.im", ".ac.im", ".or.id", ".my.id", ".go.id", ".co.id", ".ac.id", ".co.ca", ".co.nl", ".nokia", ".co.na", ".ws.na", ".tv.na", ".cc.na", ".in.na", ".ca.na", ".mx.na", ".us.na", ".dr.na", ".or.na", ".nowtv", ".co.mw", ".ac.mw", ".omega", ".or.mu", ".co.mu", ".ac.mu", ".iz.hr", ".or.bi", ".osaka", ".co.bi", ".co.mg", ".tm.mg", ".ac.me", ".co.me", ".tm.mc", ".paris", ".ac.ma", ".co.ma", ".id.ly", ".parts", ".id.lv", ".party", ".co.ls", ".ac.lk", ".photo", ".co.hu", ".co.lc", ".ac.be", ".tm.hu", ".tv.bb", ".pizza", ".place", ".poker", ".co.bb", ".praxi", ".press", ".prime", ".rs.ba", ".sc.kr", ".re.kr", ".pe.kr", ".or.kr", ".ne.kr", ".ms.kr", ".kg.kr", ".hs.kr", ".go.kr", ".es.kr", ".co.kr", ".ac.kr", ".promo", ".co.ba", ".quest", ".rehab", ".tm.km", ".reise", ".or.jp", ".ne.jp", ".lg.jp", ".gr.jp", ".go.jp", ".ed.jp", ".co.jp", ".ad.jp", ".ac.jp", ".co.je", ".vv.it", ".vt.it", ".vs.it", ".vr.it", ".ricoh", ".vi.it", ".pp.az", ".rocks", ".rodeo", ".ve.it", ".vc.it", ".vb.it", ".va.it", ".ud.it", ".tv.it", ".ts.it", ".tr.it", ".tp.it", ".to.it", ".tn.it", ".salon", ".te.it", ".ta.it", ".sv.it", ".wa.au", ".ss.it", ".sr.it", ".sp.it", ".sener", ".so.it", ".seven", ".si.it", ".sa.au", ".sa.it", ".nt.au", ".ro.it", ".rn.it", ".rm.it", ".shoes", ".ri.it", ".rg.it", ".oz.au", ".id.au", ".re.it", ".rc.it", ".ra.it", ".pz.it", ".pv.it", ".pu.it", ".pt.it", ".skype", ".pr.it", ".sling", ".smart", ".po.it", ".pn.it", ".smile", ".pi.it", ".pg.it", ".solar", ".pe.it", ".pd.it", ".pc.it", ".or.at", ".space", ".gv.at", ".co.at", ".pa.it", ".ot.it", ".ac.at", ".or.it", ".stada", ".store", ".og.it", ".nu.it", ".study", ".no.it", ".style", ".sucks", ".na.it", ".mt.it", ".ms.it", ".swiss", ".mo.it", ".mn.it", ".tatar", ".mi.it", ".me.it", ".mc.it", ".mb.it", ".lu.it", ".lt.it", ".lo.it", ".li.it", ".tires", ".tirol", 
".le.it", ".lc.it", ".tmall", ".kr.it", ".today", ".is.it", ".tokyo", ".im.it", ".tools", ".gr.it", ".it.ao", ".go.it", ".toray", ".legal", ".lexus", ".mp.br", ".lilly", ".linde", ".lipsy", ".loans", ".locus", ".lotte", ".lotto", ".lupin", ".macys", ".mango", ".fm.br", ".media", ".miami", ".tm.pl", ".money", ".mopar", ".pc.pl", ".movie", ".am.br", ".nadex", ".ac.pa", ".co.om", ".tv.bo", ".nexus", ".co.nz", ".ac.nz", ".bv.nl", ".co.gy", ".nikon", ".ninja", ".bz.it", ".bt.it", ".bs.it", ".vodka", ".br.it", ".bo.it", ".bn.it", ".bl.it", ".bi.it", ".bg.it", ".wales", ".co.ag", ".ba.it", ".watch", ".co.ve", ".lease", ".prod", ".prof", ".able", ".surf", ".adac", ".yoga", ".talk", ".raid", ".ally", ".read", ".army", ".work", ".auto", ".reit", ".kiwi", ".baby", ".band", ".bank", ".taxi", ".land", ".rent", ".rest", ".info", ".rich", ".beer", ".best", ".life", ".like", ".jobs", ".bike", ".bing", ".limo", ".blog", ".blue", ".link", ".live", ".loan", ".bofa", ".loft", ".zero", ".bond", ".love", ".aero", ".team", ".luxe", ".buzz", ".room", ".cafe", ".rsvp", ".call", ".camp", ".tech", ".care", ".cars", ".casa", ".cash", ".meet", ".meme", ".mobi", ".menu", ".cbre", ".safe", ".mini", ".mint", ".cern", ".co.no", ".chat", ".city", ".club", ".cool", ".name", ".sale", ".tips", ".date", ".deal", ".moto", ".diet", ".dish", ".docs", ".b.br", ".save", ".duck", ".duns", ".post", ".navy", ".vote", ".fail", ".town", ".fans", ".farm", ".fast", ".fiat", ".film", ".fire", ".fish", ".toys", ".news", ".next", ".scot", ".ford", ".a.se", ".seat", ".b.se", ".c.se", ".d.se", ".e.se", ".f.se", ".seek", ".g.se", ".h.se", ".i.se", ".fund", ".nico", ".k.se", ".l.se", ".m.se", ".n.se", ".o.se", ".p.se", ".r.se", ".nike", ".zone", ".game", ".tube", ".s.se", ".t.se", ".u.se", ".w.se", ".x.se", ".y.se", ".z.se", ".sexy", ".gent", ".gift", ".show", ".gold", ".silk", ".site", ".golf", ".skin", ".xbox", ".goog", ".sohu", ".open", ".guge", ".guru", ".song", ".help", ".here", ".sony", ".weir", ".page", ".host", 
".pars", ".spot", ".c.la", ".wiki", ".pics", ".imdb", ".zara", ".asia", ".immo", ".star", ".ping", ".pink", ".play", ".plus", ".wine", ".java", ".porn", ".gap", ".sap", ".run", ".fly", ".rio", ".fan", ".ren", ".esq", ".qvc", ".dwg", ".pru", ".dot", ".pin", ".dnp", ".pet", ".dev", ".ott", ".day", ".net", ".csc", ".ong", ".cfd", ".one", ".ceo", ".obi", ".cbs", ".ntt", ".edu", ".nra", ".xxx", ".now", ".abb", ".ngo", ".aco", ".new", ".aeg", ".nba", ".cba", ".mtn", ".anz", ".mov", ".car", ".moi", ".axa", ".gov", ".zip", ".mma", ".bbt", ".mlb", ".bcn", ".meo", ".bzh", ".med", ".bot", ".mba", ".bet", ".ltd", ".bio", ".lol", ".bmw", ".law", ".cal", ".yun", ".you", ".xyz", ".int", ".xin", ".wtf", ".wtc", ".wme", ".win", ".wed", ".vip", ".vet", ".ups", ".trv", ".top", ".thd", ".tci", ".tax", ".tab", ".stc", ".srt", ".srl", ".soy", ".sky", ".ski", ".sex", ".tel", ".mil", ".sas", ".foo", ".rip", ".fit", ".ril", ".eus", ".red", ".eat", ".pub", ".pnc", ".dog", ".pid", ".ooo", ".dad", ".onl", ".crs", ".biz", ".off", ".ceb", ".nyc", ".cbn", ".nrw", ".com", ".org", ".aaa", ".nhk", ".abc", ".nfl", ".ads", ".nec", ".msd", ".app", ".mom", ".bar", ".pro", ".bbc", ".mit", ".cab", ".men", ".buy", ".man", ".bid", ".bnl", ".lat", ".joy", ".jnj", ".jmp", ".iwc", ".itv", ".ist", ".ink", ".ing", ".ice", ".how", ".hiv", ".hbo", ".got", ".goo", ".cat", ".gmx", ".bom", ".sew", ".li", ".lc", ".ar", ".aq", ".lb", ".la", ".kz", ".ao", ".ky", ".kr", ".kp", ".kn", ".an", ".am", ".km", ".ki", ".kg", ".jp", ".jo", ".al", ".ai", ".je", ".ag", ".it", ".is", ".af", ".ir", ".iq", ".io", ".in", ".im", ".ae", ".ie", ".ad", ".cx", ".cz", ".de", ".dj", ".cw", ".cv", ".dk", ".dm", ".do", ".cu", ".dz", ".ec", ".cr", ".yt", ".ee", ".ws", ".wf", ".vu", ".vn", ".vi", ".co", ".vg", ".ve", ".vc", ".va", ".uz", ".eg", ".uy", ".us", ".uk", ".ug", ".ua", ".tz", ".tw", ".tv", ".tt", ".es", ".cn", ".tr", ".tp", ".to", ".cm", ".tn", ".tm", ".cl", ".et", ".tl", ".tk", ".tj", ".th", ".tg", ".tf", ".td", ".ci", ".ch", ".cg", 
".cf", ".cd", ".cc", ".eu", ".fi", ".tc", ".fm", ".fo", ".fr", ".sz", ".ca", ".sy", ".sx", ".sv", ".bz", ".su", ".st", ".by", ".sr", ".bw", ".bv", ".so", ".sn", ".sm", ".bt", ".sl", ".sk", ".sj", ".si", ".bs", ".sh", ".sg", ".ga", ".gb", ".gd", ".ge", ".gf", ".gg", ".gh", ".gi", ".se", ".sd", ".sc", ".sb", ".sa", ".gl", ".rw", ".ru", ".rs", ".ro", ".re", ".gm", ".gn", ".qa", ".py", ".pw", ".pt", ".ps", ".gp", ".pr", ".pn", ".pm", ".pl", ".pk", ".gq", ".gr", ".ph", ".br", ".pf", ".pe", ".pa", ".gs", ".gt", ".bo", ".om", ".nz", ".nu", ".nr", ".no", ".bm", ".nl", ".gw", ".bj", ".gy", ".ng", ".nf", ".hk", ".ne", ".bi", ".nc", ".bh", ".bg", ".na", ".bf", ".be", ".my", ".mx", ".hm", ".bb", ".hn", ".mw", ".mv", ".mu", ".mt", ".ba", ".ms", ".mr", ".hr", ".mq", ".mp", ".az", ".ax", ".aw", ".mo", ".ht", ".mn", ".ml", ".mk", ".mh", ".mg", ".me", ".md", ".mc", ".ma", ".au", ".ly", ".lv", ".lu", ".at", ".as", ".lt", ".ls", ".lr", ".lk", ".hu", ".id", ".ac" }; HashSet<string> domainHashList = new HashSet<string>(); int lineNumber = 0; double pc = 0; double totalLines = 1906663905; var t = Task.Run(() => { using (var fileStream = File.OpenRead("F:\\domains-final\\domains\\domains-final.csv")) { using (var reader = new StreamReader(fileStream)) { while (reader.Peek() >= 0) { lineNumber++; pc = Math.Round((lineNumber / totalLines) * 100, 10); Dispatcher.Invoke(() => { lineLabel.Content = "Line number: " + lineNumber.ToString() + " Percentage: " + pc.ToString(); }); try { string line = reader.ReadLine(); if (!line.Contains('.')) continue; foreach (string topLevelDomain in topLevelDomains) { string domain = line.Trim(); if (domain.EndsWith(topLevelDomain)) { string cleanedDomainTemp = line.Replace(topLevelDomain, ""); if (!cleanedDomainTemp.Contains('.')) { string cleanedDomain = cleanedDomainTemp + topLevelDomain; if (domainHashList.Contains(domain)) break; domainHashList.Add(domain); File.AppendAllText("F:\\domains-final\\domains\\doms.txt", domain + Environment.NewLine); } else { 
string cleanedDomain = cleanedDomainTemp.Split('.').Last() + topLevelDomain; if (domainHashList.Contains(cleanedDomain)) break; domainHashList.Add(cleanedDomain); File.AppendAllText("F:\\domains-final\\domains\\doms.txt", cleanedDomain + Environment.NewLine); } break; } } } catch { } } } } }); I'm looking at my computer's resources while running this, and the disk access is at 0-1% and the CPU usage is at 30%, so I'm assuming I'm maxing out just one core. I have some ideas but I'm not 100% sure how to implement them correctly as C# isn't my main language. Spawn more threads and use all CPU cores Attempt to load 1/3 of the file into my 32GB of memory at a time Buffer up the disk writing and write 1000 domains into the results file at a time See if I can find a more efficient way of converting to base + TLD (but I can't find one so far) What improvements can I make to speed this code up? Answer: I would separate out the reading of lines from a file into its own method. A nice way to report back progress is the IProgress interface. That way we can just calculate the progress and report it back. You can make this more complex than just the percentage, but I just passed back the percentage. If you want a line count you can add the code for that. But just beware that reporting back to the dispatcher can eat up time. It's why I only push back when the percentage changed. Also, streams are going to be buffered, so the percentage will be a tad off, but with large files it's close enough from my experience. I personally like the extra brackets on using statements, but I also always put brackets on single-line ifs. That to me is a personal preference and you should base it on your coding guidelines. This method is similar to File.ReadLines but since we needed progress I created this method.
private static IEnumerable<string> FileReadLines(string file, IProgress<int> progress) { var fileSize = new FileInfo(file).Length; using (var fileStream = File.OpenRead(file)) { using (var reader = new StreamReader(fileStream)) { int? previousProgress = null; while (!reader.EndOfStream) { var line = reader.ReadLine(); if (line != null) { if (progress != null) { var percentDone = (int)Math.Round((reader.BaseStream.Position * 100d) / fileSize, 0); if (previousProgress != percentDone) { progress.Report(percentDone); previousProgress = percentDone; } } yield return line; } } } } } Now I moved the array of top-level domains into a static field and changed it to a hash set. Plus I calculated the depth of the domains. private static readonly HashSet<string> _topLevelDomains = new HashSet<string>(new[] { ".travelersinsurance", ".accountants", "...."}); // did not include them all here to save ones and zeros private static int _maxDomainLevel = _topLevelDomains.Max(d => d.Count(x => x == SplitChar)); private const char SplitChar = '.'; Now I created a method to take a line from the file and return key-value pairs based on the potential matches in the top-level domain hash set.
private static IEnumerable<KeyValuePair<string, string>> GetDomains(string domain) { var domainParts = domain.Split(SplitChar); int start; if (domainParts.Length <= _maxDomainLevel) { start = 1; } else { // only need to match on part of the string since we can eliminate any that have more parts than in the top level domain start = domainParts.Length - _maxDomainLevel; } for (var i = start; i < domainParts.Length; i++) { var range = domainParts.Length - i; // build up the domain from the subparts var key = SplitChar + string.Join(SplitChar, Enumerable.Range(i, range) .Select(x => domainParts[x])); var value = domainParts[i - 1] + key; yield return new KeyValuePair<string, string>(key, value); } } Now that we have all the pieces we can write some PLINQ code to process it all in parallel: public static void ParseFile(string inputfile, string outputFile, IProgress<int> progress) { var domains = FileReadLines(inputfile, progress) .AsParallel() .Where(x => x.Contains('.')) .Select(x => GetDomains(x).FirstOrDefault(kv => _topLevelDomains.Contains(kv.Key))) .Select(kv => kv.Value) .Where(x => x != null) .Distinct(); File.AppendAllLines(outputFile, domains); } Now you can call it like so. In my example I was writing progress to the console, but you would push it to the dispatcher instead. ParseFile(@"c:\temp\source.txt", @"c:\temp\output.txt", new Progress<int>(l => Console.WriteLine(l))); I, obviously, don't have a 60-gig file, but I believe that using PLINQ and changing the top-level domains to a hash set will make it quicker. If you want total control then I would use a producer/consumer like TPL Dataflow, but I think it's overkill for this.
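The core of the speedup is the suffix lookup in GetDomains: build candidate suffixes from the dot-separated parts, longest first, and probe a hash set instead of scanning every TLD per line. The same idea, sketched language-agnostically in Python with a hypothetical three-entry TLD set (the real list has hundreds of entries):

```python
# Sketch of the GetDomains idea: only suffixes no deeper than the
# deepest known TLD need to be tried, and each probe is an O(1)
# hash-set lookup.
TLDS = {".co.uk", ".com", ".org"}
MAX_LEVEL = max(t.count(".") for t in TLDS)

def registered_domain(host):
    parts = host.split(".")
    # skip prefixes that can't possibly match a known TLD
    start = max(1, len(parts) - MAX_LEVEL)
    for i in range(start, len(parts)):
        suffix = "." + ".".join(parts[i:])
        if suffix in TLDS:
            return parts[i - 1] + suffix
    return None

assert registered_domain("mail.example.co.uk") == "example.co.uk"
assert registered_domain("www.example.com") == "example.com"
```

The dictionary probe is what removes the per-line loop over all TLDs that the original code paid for.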
{ "domain": "codereview.stackexchange", "id": 36956, "tags": "c#, performance" }
November 25th 2011 partial solar eclipse visibility from Christchurch, New Zealand
Question: Where can I find a site that will give me an indication of coverage for the November 25th 2011 partial solar eclipse as viewed from Christchurch, New Zealand tonight, as well as local times for the stages of the event? Planning on viewing from 43.780 S, 172.660 E (just east of Lake Ellesmere, I will be looking west over the lake) in the hope of getting a reflection photograph over the lake, if the water is still enough, unless anyone can see any flaw in this plan. UPDATE Success. Despite the national media reporting it at 7.30pm, they had the wrong time too! Was an hour out, and just before sunset. However it was better than an hour earlier, if we'd missed it! One of my shots below: Answer: The RASC Observer's Handbook contains detailed predictions for Christchurch. In UT eclipse begins at 7:07, max eclipse at 7:42. Magnitude of eclipse 0.278, obscuration 0.170. Slightly more eclipse (0.306) in Dunedin. These extreme partial eclipses aren't of much interest to astronomers, and most ordinary people would hardly notice such a slight covering of the Sun. My article shows the coverage as seen from Dunedin, as well as Cape Town, Hobart and the South Pole: http://www.space.com/13725-partial-solar-eclipse-viewing-tips.html
{ "domain": "physics.stackexchange", "id": 3180, "tags": "astronomy, eclipse" }
What is an example for a decidable language not in P?
Question: I'm having trouble showing that $P\neq R$. Obviously $P\subseteq R$, but is there a decidable language which is definitely not in $P$, regardless of how open questions such as $P=NP$ or $NP=PSPACE$ are resolved? Answer: Yes, there are decidable languages that are definitely not in P. The time hierarchy theorem says that P$\,\neq\,$EXP, so P$\,\neq\,$R, independently of the P vs NP problem. Any EXP-complete problem is definitely not in P: for example, determining whether white has a winning strategy from a position in generalized chess ("generalized" in the sense of allowing a board of any dimensions, with any arrangement of any number of pieces, but otherwise following all the rules of standard chess).
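To see why such game problems are decidable at all, here is a toy brute-force decider in Python: exhaustive minimax over tic-tac-toe. The same procedure on an n-by-n generalized game has exponentially many positions, which is the intuition behind the EXP-hardness; this sketch proves nothing about complexity, it only shows the brute-force decision procedure.

```python
# Players are +1 (X) and -1 (O); a board is a list of 9 cells.
def winner(b):
    lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
             (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]
    for i, j, k in lines:
        if b[i] != 0 and b[i] == b[j] == b[k]:
            return b[i]
    return 0

def value(b, player):
    # +1 if X wins under optimal play, -1 if O wins, 0 if drawn
    w = winner(b)
    if w:
        return w
    moves = [i for i in range(9) if b[i] == 0]
    if not moves:
        return 0
    vals = []
    for m in moves:
        b[m] = player
        vals.append(value(b, -player))
        b[m] = 0
    return max(vals) if player == 1 else min(vals)

# X to move can win immediately by completing the top row:
assert value([1, 1, 0, -1, -1, 0, 0, 0, 0], 1) == 1
```

This always terminates, so the language of "positions white wins" is decidable; the running time, not the decidability, is where EXP enters.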
{ "domain": "cs.stackexchange", "id": 6942, "tags": "complexity-theory, polynomial-time" }
osrf/ros2:testing docker image not working
Question: Running: docker pull osrf/ros2:testing docker run -it osrf/ros2:testing Throws an error: /ros_entrypoint.sh: line 5: /opt/ros/rolling/setup.bash: No such file or directory Am I doing something wrong, or is the docker image broken? Originally posted by ijnek on ROS Answers with karma: 460 on 2021-08-11 Post score: 0 Answer: EDIT: This was intended behaviour. Quoting from a response to the issue, While the osrf/ros2:testing does include the same ros_entrypoint.sh script to keep it interchangeable with other ros image tags, it does not pre-install any ros specific packages, as it is intended to be used as a base image for child Dockerfiles that would install user specified ros dependencies. If you do want to spawn containers directly from the osrf/ros2:testing tag, you could override the default entrypoint by specifying a different one via the docker run arg: docker run -it --rm --entrypoint="" osrf/ros2:testing ORIGINAL: I've raised an issue in osrf/docker_images about this. Originally posted by ijnek with karma: 460 on 2021-08-11 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 36795, "tags": "docker" }
Why is the vanadium(3+) ion paramagnetic?
Question: I know that the electron configuration of vanadium is $[\ce{Ar}]\mathrm{4s^2 3d^3}$. None of the electrons in the 3d subshell are paired. Once it loses these three electrons, shouldn't the remainder of the electrons be paired? How can $\ce{V^{3+}}$ be paramagnetic if it loses all its unpaired electrons? Answer: In addition to the general rules of how electronic configurations of atoms and ions are calculated, the elements from the $\mathrm{d}$-block (a.k.a. the transition metals) obey one special rule: In general, electrons are removed from the valence-shell $\mathrm{s}$-orbitals before they are removed from valence $\mathrm{d}$-orbitals when transition metals are ionized. (I took this formulation from these online lecture notes, but you will find equivalent statements in your textbooks.) So, what that does mean is that if you remove electrons from vanadium(0), you will remove the $\mathrm{4s}$ electrons before you remove the $\mathrm{3d}$-electrons. So, you have the following electronic configurations: $\ce{V}$ is $\ce{[Ar]} \mathrm{4s^2 3d^3}$ $\ce{V^2+}$ is $\ce{[Ar]} \mathrm{4s^0 3d^3}$ $\ce{V^3+}$ is $\ce{[Ar]} \mathrm{4s^0 3d^2}$ $\ce{V^4+}$ is $\ce{[Ar]} \mathrm{4s^0 3d^1}$ $\ce{V^5+}$ is $\ce{[Ar]} \mathrm{4s^0 3d^0}$ And thus, $\ce{V^3+}$ is paramagnetic, because it has two unpaired $\mathrm{3d}$-electrons. In fact, all the ions above are paramagnetic, except $\ce{V^5+}$.
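The counting rule in the answer can be sketched with a couple of hypothetical helper functions (not from any chemistry library): remove 4s electrons before 3d on ionization, then count unpaired 3d electrons via Hund's rule in the free ion.

```python
# Assumption: first-row transition-metal cation, 4s removed before 3d.
def d_electrons(neutral_4s, neutral_3d, charge):
    removed_from_4s = min(charge, neutral_4s)
    removed_from_3d = charge - removed_from_4s
    return neutral_3d - removed_from_3d

def unpaired_3d(n_d):
    # five 3d orbitals: fill singly up to 5 electrons, then pair up
    return n_d if n_d <= 5 else 10 - n_d

# V is [Ar] 4s2 3d3, so:
assert d_electrons(2, 3, 3) == 2              # V3+ is [Ar] 3d2
assert unpaired_3d(2) == 2                    # two unpaired -> paramagnetic
assert unpaired_3d(d_electrons(2, 3, 5)) == 0 # V5+ is diamagnetic
```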
{ "domain": "chemistry.stackexchange", "id": 9, "tags": "inorganic-chemistry, ions, electronic-configuration, transition-metals" }
Gravity on supermassive black hole's event-horizon
Question: $M =$ black hole mass. Gravitation scales as $r^{-2}$, and the Schwarzschild radius $r_{\text{S}}$ is $\propto M$. So, more massive black holes have weaker gravitation at their event horizon. Consider a black hole so enormous that the gravitation at its event horizon is negligible. Person A is 1 meter 'outside' the horizon, and Person B is inside (1 meter from the horizon as well). Person B throws a ball to Person A. Both have just started accelerating towards the black hole very slowly, so why won't person A catch the ball? And why won't person A ever see person B, granted that A will somehow escape later on? Reference: https://mathpages.com/rr/s7-03/7-03.htm Answer: Person A will not see anybody beyond the event horizon, even a meter ahead. That is because one meter in flat coordinates (which I suppose you mean) corresponds to an infinite distance in the co-moving coordinates of observer A. Observer A will be able to see large objects (larger than 1 meter) ahead of him which are still outside the event horizon. At the same time, an observer at infinity will see observer A shortened in the radial direction, becoming like a flat disk on the surface of the black hole. Crossing the horizon, for observer A (if it happened), would look not like crossing a spatial surface, but like crossing a moment of time: now he is before the horizon, and now he is inside. All objects around him, ahead and behind, cross the horizon nearly simultaneously (with a difference only of the time it takes for light to travel between them). Something a meter ahead of him in flat coordinates corresponds to a thing that crossed the horizon an infinite time before he did, so he would not be able to see observer B. Even if observer B is also outside the horizon, the distance between them would be so large that they could hardly see each other. If you meant that the observers were 1 meter from each other in co-moving coordinates, then they are both either outside the horizon or inside it. 
They cannot be a meter apart yet separated by the horizon, because the horizon is a null surface, not a spatial surface. Two friends travelling in one spaceship will cross the horizon nearly simultaneously, even if spatially separated (for a distant observer the length of their spaceship will shrink to zero at the horizon).
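The question's premise, that "surface gravity" at the horizon weakens with mass, can be checked with a crude back-of-the-envelope estimate (a Newtonian sketch only; Newtonian gravity is not actually valid near a horizon): evaluating $g = GM/r^2$ at $r_{\text{S}} = 2GM/c^2$ gives $g = c^4/(4GM)$, which falls off as $1/M$.

```python
G = 6.674e-11        # m^3 kg^-1 s^-2
c = 2.998e8          # m/s
M_sun = 1.989e30     # kg

def schwarzschild_radius(M):
    return 2 * G * M / c**2

def newtonian_g_at_horizon(M):
    # = c**4 / (4*G*M): halves per doubling of mass
    return G * M / schwarzschild_radius(M)**2

# A billion-solar-mass black hole has a far weaker Newtonian pull at
# its horizon than a 10-solar-mass one:
assert newtonian_g_at_horizon(1e9 * M_sun) < newtonian_g_at_horizon(10 * M_sun) / 1e7
```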
{ "domain": "physics.stackexchange", "id": 60650, "tags": "general-relativity, gravity, black-holes, event-horizon, causality" }
Transmission for electric motor
Question: I want to build an electric motorcycle (more of an electric bike, actually), but I don't want to use a conversion kit. I want to take out the pedals and put a motor there (either 12 V DC or 220 V AC). My problem is how to transfer the power from the motor to the rear wheel: Do I need some sort of gearbox? Let's say I would use a 0.7 kW, 3000 rpm, 220 V AC motor: How do I connect the spinning shaft of the motor to a chain or belt (it doesn't necessarily have to be chain driven)? Is this a bad idea? Can the motor drive the wheel directly? The whole thing (including the rider) would weigh less than 150 kg. 0.7 kW is enough for the speed I desire. (I am interested in building a dirt bike, actually...) Excuse my English, I am not a native speaker. Answer: A transmission's main job is to manage torque. What you have to do is look at the motor spec sheet and see how much torque it generates. Ensure it is sufficient to get you moving, calculate the friction coefficient with your weight, and make sure the torque isn't so high that you get wheel spin. Finally, tune it to make it comfortable. But really, this would be more for a "finely tuned machine". Sounds like you want something fun and easy!!! In reality you should just get a motor with a metal rotor (the part that spins), weld/bolt it onto the sprocket, and pump some current through it. If your bike already has multiple gears I would say leave them on, as you can adjust the torque ratio to accommodate a wider range of electric motors.
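The torque bookkeeping the answer describes can be sketched numerically (all numbers below are illustrative assumptions, not measurements): motor torque from power and speed, wheel torque through a gear ratio, and the traction limit beyond which the tyre spins instead of moving the bike.

```python
from math import pi

def motor_torque(power_w, rpm):
    return power_w / (rpm * 2 * pi / 60)       # T = P / omega, in N*m

def wheel_torque(motor_t, gear_ratio, efficiency=0.95):
    return motor_t * gear_ratio * efficiency

def traction_limit(mu, mass_kg, wheel_radius_m, g=9.81):
    # max wheel torque before wheel spin, assuming all weight on the
    # driven wheel (pessimistic simplification)
    return mu * mass_kg * g * wheel_radius_m

t_motor = motor_torque(700, 3000)              # ~2.2 N*m at the shaft
t_wheel = wheel_torque(t_motor, gear_ratio=10)
limit = traction_limit(mu=0.7, mass_kg=150, wheel_radius_m=0.3)
assert t_wheel < limit                         # this ratio won't spin the wheel
```

A 0.7 kW motor at 3000 rpm only makes a couple of newton-metres at the shaft, which is why some reduction (gears, sprocket ratio, or belt pulleys) is needed to get 150 kg moving from a standstill.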
{ "domain": "engineering.stackexchange", "id": 1437, "tags": "motors" }
PointCloud2 access data
Question: I have a PointCloud2 topic and I need to access the x, y and z of the points. I have found: pcl::PointCloud<pcl::PointXYZRGB>::ConstPtr The problem is that I don't know how to use it. Do you know where I can find some example code describing how to get coordinates in PCL2? [EDIT] Now, I am using this code but it is not working properly void pcl2_to_scan::callback(const sensor_msgs::PointCloud2ConstPtr &pPCL2) { for (uint j=0; j < pPCL2->height * pPCL2->width; j++){ float x = pPCL2->data[j * pPCL2->point_step + pPCL2->fields[0].offset]; float y = pPCL2->data[j * pPCL2->point_step + pPCL2->fields[1].offset]; float z = pPCL2->data[j * pPCL2->point_step + pPCL2->fields[2].offset]; // Some other operations } } Thank you. Originally posted by arenillas on ROS Answers with karma: 223 on 2014-08-26 Post score: 1 Answer: A good place to start is the API documentation. For example, the pcl::PointCloud doc shows you that you can get any point by indexing into the cloud with the [] operator, like an array. The PointXYZRGB description isn't as clear, but it does tell you it's a struct (meaning all members are public), and you'll find you can access the values via point.x, .y, .z, etc.
For working with the data, the pcl type provides a better interface since the sensor_msgs type just contains a blob of data. Comment by paulbovbel on 2014-08-27: When you subscribe to a message, you get a ConstPtr which means it's a boost shared_pointer to a const piece of data. That is not important, since you can just treat it as a regular pointer, and remember not to try to modify the data. Comment by paulbovbel on 2014-08-27: For general C++ help, I would point you to stackoverflow (no pun intended) Comment by paulbovbel on 2014-08-27: And here is somewhere you can see how individual points get accessed from a pcl::PointCloud callback Comment by kodplayer on 2020-12-25: The above-mentioned website address docs.pointclouds.org is removed
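For reference, the reason the question's byte-indexing loop fails is that the data field is a byte array: indexing it yields single bytes, while x, y, z are 4-byte float32 values stored at the field offsets and must be reinterpreted. A language-agnostic sketch of the decoding in Python (hypothetical buffer layout: three packed little-endian float32 fields per point; this is not the ROS API):

```python
import struct

def read_xyz(data, point_step, index, offsets=(0, 4, 8)):
    # reinterpret 4 bytes at each field offset as a little-endian float
    base = index * point_step
    return tuple(struct.unpack_from("<f", data, base + off)[0]
                 for off in offsets)

buf = struct.pack("<fff", 1.0, 2.0, 3.0) + struct.pack("<fff", 4.0, 5.0, 6.0)
assert read_xyz(buf, 12, 1) == (4.0, 5.0, 6.0)
```

In C++ the equivalent fix is a reinterpret_cast (or memcpy) of the bytes at `data + j * point_step + field.offset`, rather than reading one byte into a float, which is exactly what converting to a pcl::PointCloud does for you.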
{ "domain": "robotics.stackexchange", "id": 19192, "tags": "pcl, pointcloud" }
Joint::GetLinkForce() gets segmentation fault
Question: Hi, Has anybody used Joint::GetLinkForce() or Joint::GetLinkTorque() in the Gazebo simulator? I am using them in a Gazebo plugin and these functions give me a segmentation fault. Thanks, Originally posted by Zara on ROS Answers with karma: 99 on 2012-08-22 Post score: 0 Original comments Comment by mcevoyandy on 2012-08-23: yeah, I made a post about this awhile back, http://answers.ros.org/question/40678/gazebo-plugin-get-total-force-on-joint/ follow the link in that post to another question where HSU talks about some of this. I assume everything he said there is still valid. Answer: Ticketed here. Originally posted by hsu with karma: 5780 on 2012-08-31 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 10722, "tags": "ros, gazebo, joint, gazebo-plugin" }
Constructor Injection, new dependency and its impact on code
Question: This is my interface; all of my transport concrete classes will be implementing this interface. interface ITransport { void Travel(); } These are my existing implementation classes: class Horse : ITransport { public Horse() { } public void Travel() { Console.WriteLine("I am travelling on a horse"); } } class Camel : ITransport { public Camel() { } public void Travel() { Console.WriteLine("I am travelling on a Camel"); } } class Ship : ITransport { public Ship() { } public void Travel() { Console.WriteLine("I am travelling on a Ship"); } } This is my Factory class; I create instances of my transport classes from this factory class. class TransportFactory { public TransportFactory() { } ITransport ProvideTransport(string transportType) { switch(transportType) { case "camel": return new Camel(); break; case "horse": return new Horse(); break; case "ship": return new Ship(); break; } } } This is my creational class, meaning I configure my client classes or consumer classes through this class: class ConfigurationBuilder { ITransport camel; ITransport ship; ITransport horse; public ConfigurationBuilder() { TransportFactory tFactory = new TransportFactory(); horse = tFactory.ProvideTransport("horse"); camel = tFactory.ProvideTransport("camel"); ship = tFactory.ProvideTransport("ship"); } Human ConfigureHuman() { return new Human(camel, ship, horse); } } This is my client code, the actual caller of these transport classes: class Human { ITransport camel; ITransport ship; ITransport horse; public Human(ITransport camel, ITransport ship, ITransport horse) { this.camel = camel; this.ship = ship; this.horse = horse; } void Travel(string ground) { //TransportFactory tFactory = new TransportFactory(); if(ground =="plain") { horse.Travel(); } if(ground =="desert") { camel.Travel(); } if(ground =="sea") { ship.Travel(); } } } Now my question is: what will happen when I have a new means of traveling, such as a Plane for air and then a Rocket for space, etc.?
My constructor will change and will keep on changing as more and more means of traveling are added throughout the application's lifetime. Is this the correct design, or is there a better way? As far as the question is concerned, I couldn't come up with more intuitive words to explain my problem :) Update: Commented TransportFactory out of the Human class. Update2: I guess I now understand how it will be done. I will be needing two factories now: TransportFactory, which is already implemented above; the second one will be HumanFactory, which will create a human but also has the additional responsibility of calling the Travel method of the appropriate ITransport. public class HumanFactory { ITransport camel; ITransport ship; ITransport horse; Human _human; Dictionary<string, ITransport> _availableTransports; event Action<Human, string> transportRequested; public HumanFactory(TransportFactory tFactory) { horse = tFactory.ProvideTransport(TransportTypes.Horse); camel = tFactory.ProvideTransport(TransportTypes.Camel); ship = tFactory.ProvideTransport(TransportTypes.Ship); } public Human ConfigureHuman() { if (_availableTransports == null) { _availableTransports = new Dictionary<string, ITransport>(); _availableTransports.Add(GroundTypes.Desert.ToString(), camel); _availableTransports.Add(GroundTypes.Sea.ToString(), ship); _availableTransports.Add(GroundTypes.Plains.ToString(), horse); } transportRequested += new Action<Human, string>(_human_transportRequested); _human = new Human(transportRequested); return _human; } void _human_transportRequested(Human human, string groundType) { if (_availableTransports.ContainsKey(groundType)) { ITransport suitableTransport = _availableTransports[groundType]; suitableTransport.Travel(); } else { //code for handling below conditions goes here //I don't know what to do for this type of plain? 
} } } Here is the Human class now: public class Human { Action<Human, string> _transportRequested; public Human(Action<Human, string> transportRequested) { _transportRequested = transportRequested; } public void Travel() { if (_transportRequested != null) { var ev = _transportRequested; ev.Invoke(this, GroundTypes.Plains.ToString()); } } } We will call the Human class now as: TransportFactory tFactory = new TransportFactory(); HumanFactory humanFactory = new HumanFactory(tFactory); Human human = humanFactory.ConfigureHuman(); human.Travel(); One more thing: I guess I should have a lock in the HumanFactory class's _human_transportRequested method, in case there is a multithreaded scenario? I guess now my code follows the Law of Demeter :) Special thanks to Nik (for that marvelous solution) and Gleb (for those wonderful videos) Answer: I assume that this is just an example and I don't need to tell you that you should use enums instead of strings, remove breaks after returns, indent your code, etc. :) So, to answer your question: I think you should refactor your Human class: class Human { private readonly TransportFactory _tFactory; public Human(TransportFactory tFactory) { if (tFactory == null) { throw new ArgumentNullException("tFactory"); } _tFactory = tFactory; } void Travel(string ground) { var transport = _tFactory.ProvideTransport(ground); transport.Travel(); } } class TransportFactory { ITransport ProvideTransport(string groundType) { switch(groundType) { case "desert": return new Camel(); case "plain": return new Horse(); case "sea": return new Ship(); default: return new ArmsAndLegs(); } } } You will still need to modify the factory method when you add a new transport though. This can be avoided by using reflection.
Edit: here is another example: class Human { public event Action<Human, GroundTypes> TransportRequested; void Travel(GroundTypes ground) { var ev = TransportRequested; if (ev != null) { ev(this, ground); } } } interface ITransport { void Transport(object cargo); } class TravelAgency { private readonly TransportFactory _tFactory; public TravelAgency(TransportFactory tFactory) { if (tFactory == null) { throw new ArgumentNullException("tFactory"); } _tFactory = tFactory; } //this method at some point subscribes to TransportRequested event private void OnTransportRequested(Human human, GroundTypes ground) { var transport = _tFactory.ProvideTransport(ground); transport.Transport(human); } } Edit2: Subscription using your TransportFactory logic: public class HumanFactory { Dictionary<GroundTypes, ITransport> _availableTransports; public HumanFactory(TransportFactory tFactory) { _availableTransports = new Dictionary<GroundTypes, ITransport>(); _availableTransports.Add(GroundTypes.Desert, tFactory.ProvideTransport(TransportTypes.Camel)); _availableTransports.Add(GroundTypes.Sea, tFactory.ProvideTransport(TransportTypes.Ship)); _availableTransports.Add(GroundTypes.Plains, tFactory.ProvideTransport(TransportTypes.Horse)); } public Human CreateHuman() { var h = new Human(); h.TransportRequested += OnHumanTransportRequested; return h; } //you can encapsulate this logic in some other object //you should also consider implementing an Events Aggregator //or those subscriptions might become quite complicated as their number increases void OnHumanTransportRequested(Human human, GroundTypes groundType) { if (_availableTransports.ContainsKey(groundType)) { ITransport suitableTransport = _availableTransports[groundType]; suitableTransport.Travel(); } else { //code for handling below conditions goes here //I don't know what to do for this type of plain? } } }
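The dictionary-dispatch idea at the heart of both the answer and the question's updates, which replaces the growing if-chain in Travel, can be sketched language-agnostically in Python (names illustrative, not from the post):

```python
# Adding a transport means adding a dictionary entry,
# not editing control flow in the client class.
class Transport:
    def __init__(self, name):
        self.name = name

    def travel(self):
        return f"I am travelling on a {self.name}"

TRANSPORT_FOR_GROUND = {
    "desert": Transport("Camel"),
    "sea": Transport("Ship"),
    "plain": Transport("Horse"),
}

def travel(ground):
    transport = TRANSPORT_FOR_GROUND.get(ground)
    if transport is None:
        raise ValueError(f"no transport for ground {ground!r}")
    return transport.travel()

assert travel("desert") == "I am travelling on a Camel"
```

This is why the refactored Human's constructor stops changing: new transports are registered with the lookup (or factory), and the client never enumerates them.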
{ "domain": "codereview.stackexchange", "id": 4132, "tags": "c#, object-oriented, dependency-injection" }
How to determine approximability of a problem when we don't know how good a solution is?
Question: As far as I have learned, an approximation algorithm for an optimization problem runs in polynomial time, and its cost can be bounded by a function of the input in terms of distance from the optimal cost. If we consider the optimization version of the subset-sum problem, it says: Given a set $A$ of integers, what is the maximum value of $\mathrm{sum}(B) \le t$, where $B \subset A$, $\mathrm{sum}(B) = \sum_{b\in B} b$ and $t \in Z^+$? In this problem, we know that we search for a maximum value that is smaller than or equal to $t$. So, a solution $B_1$ is better than another solution $B_2$ if $\dfrac{\mathrm{sum}(B_1)}{\mathrm{sum}(OPT)} > \dfrac{\mathrm{sum}(B_2)}{\mathrm{sum}(OPT)}$, $OPT$ being the optimal subset. Now, let us consider the WSN localization problem, which is defined as: We have a set $A = \{1,2,\dots,n\}$ of $n$ points in $d$ dimensions. We only know the coordinates of some points; let us denote them with $B = \{b_1, b_2, \dots, b_m\}$ where $B \subset A$. By using the positions of the nodes in $B$, we aim to assign positions to the nodes in $A$. While doing this, we use the Euclidean distance graph $G = \langle V,E,W\rangle$ that is given as input. In this graph, each node $i \in V$ corresponds to a point $i \in A$ and each edge $\{i,j\} \in E$ means that there is a distance measurement between the point $i$ and point $j$. The weight of an edge corresponds to the Euclidean distance between the two points. We use a unit disk graph (UDG) model. In a UDG, an edge $\{i,j\}$ exists if and only if the Euclidean distance $\delta_{ij}$ between $i$ and $j$ is smaller than or equal to a specific value $R$. That means that if there does not exist an edge $\{i,j\}$, then $\delta_{ij} > R$. We aim to assign coordinates to each node $v \in V$ considering their Euclidean distances. This process is called localization. As the motivation is wireless sensor nodes, we assume that we cannot measure distances with 100% accuracy. Therefore, each distance $\delta_{ij}$ is altered by adding a value $\epsilon$ to model the environmental noise.
For any two points $i,j$ whose pairwise distance is $\hat{\delta}_{ij} < R$, the given distance is $\delta_{ij} = \hat{\delta}_{ij} + \epsilon$. $\epsilon$ is up to $\pm P\%$ of the wireless range $R$, selected from a uniform random distribution. Let us assume that we can always localize 100% of the nodes, i.e. the input graph is localizable. Our objective is to minimize the average localization error. This is computed by $\dfrac{\sum\limits_{v \in V}||v_{est} - v_{act}||}{|V|}$, where $v_{est}$ is the estimated position of node $v$, $v_{act}$ is the actual position of node $v$ and $||v_{est} - v_{act}||$ denotes the Euclidean distance between the two. The optimal solution for any instance is clearly $\forall v \in V, v_{est} = v_{act}$ and the cost is $cost(OPT) = 0$. In subset sum, we know how close we are to the value $t$ and can compare two solutions. In the traveling salesman problem, we can compare solutions by their costs. However, in the localization problem, we cannot compare two solutions without knowing the actual positions, which is impossible by the definition of the problem. My question is: Can we prove whether the WSN localization problem is approximable, given that we cannot know how good a solution is? Or is the question simply not applicable? For further reading, here is the NP-hardness proof of the localization problem with noisy distances. This paper defines the formal theory of the same problem. Answer: Definition of a well-formed optimization problem It seems to me the core problem you have here is that the problem you have defined is not what I'd call a well-formed optimization problem. Normally, an optimization problem looks something like this: Input: $x$ Goal: find $y$ that maximizes $\Phi(x,y)$ where $\Phi$ is some objective function that's specified as part of the problem statement. The key point here is that the objective function has to be computable, as a function of the input and the proposed output.
Your proposed formalization of the WSN localization problem does not have this form: the objective function is not computable as a function of the information known to us. Therefore, it is not a well-formed optimization problem. This explains the problem you're having. The reason you can't compare how good different solutions are is that you don't have a computable way to measure how good a single solution is, given the information available to you. Consequently, your problem does not fit into the standard framework of optimization problems and approximation algorithms. This means you can't take advantage of all that theory, and strange things can happen. So, at this point you have two options. First, you can try to formulate your problem in a way that does qualify in this framework, i.e., where you do have a computable objective function. The other option is to accept that you don't fit into this framework and you'll have to develop everything from scratch without being able to use the existing framework as is. Another way to put this: if your objective function depends on values that aren't known (aren't part of the input), then what you have is not purely an algorithmic problem. To make this an algorithmic task, you need a way to measure success. One candidate way to do this would be to establish some probability distribution on the locations of the sensors, then frame this as a statistical inference problem, where you want to find the maximum likelihood estimate, or something like that. Another approach would be to choose a different objective function that is computable. For instance, maybe you could evaluate the accuracy of a solution in terms of how closely it matches the givens: for what fraction of edges $\{v,v'\}$ do we have $||v_\text{est}-v'_\text{est}|| < R+\varepsilon$, and for what fraction of non-edges $v,v'$ do we have $||v_\text{est}-v'_\text{est}|| > R-\varepsilon$; maybe the objective function is the sum of these two fractions, or something.
Of course, this is a different objective function, so it's maximizing something different, and the solution to that optimization problem might or might not match what you're actually looking for. For posterity, here's some feedback on an older version of the problem, which has now been addressed by the revised question. I see some misconceptions in this question, both about how we measure the quality of approximations, and about when approximation algorithms are applicable. The best answer I can give is to clear up these confusions/misconceptions; that will help you think about this kind of problem more clearly. Definition of the approximation ratio You have a confusion about how we normally measure the quality of an approximation, in the standard theory of approximation algorithms. This shows up in your discussion of approximation algorithms for subset sum. With subset-sum, the objective function we are maximizing is $\text{sum}(B)$. In the standard theory of approximability, we measure the quality of an approximation by how good the value of $\text{sum}(B)$ is, compared to the optimal value of that for the optimal choice of $B$ -- not compared to $t$. When we ask whether there is a $2$-approximation for subset-sum, we ask whether there's an algorithm that outputs a set $B_\text{approx}$ such that $${\text{sum}(B_\text{approx}) \over \text{sum}(B_\text{opt})} \ge 1/2,$$ where $B_\text{opt}$ is the optimal solution (the one that maximizes $\text{sum}(B)$). The ratio on the left-hand-side is called the approximation ratio. Notice how $t$ doesn't appear in this quantity. The approximation ratio is defined with reference to the optimal solution, not with reference to $t$. When approximation algorithms are applicable Approximation algorithms are only applicable to optimization problems. The "WSN localization problem", as you have defined it, is not an optimization problem: it is a decision problem. 
Since it's not an optimization problem, the entire concept of an approximation algorithm doesn't enter into it. Asking "is it approximable?" is a category error: it's like asking "Is 'flying' a mammal?" -- 'flying' isn't even a noun, let alone an animal. You then talk about two metrics. If you pick one of those metrics, you could define an optimization problem that is based upon the "WSN localization problem", where you ask to find the placement of node positions that complies with all the constraints and that maximizes (or minimizes) that metric. Then you would have an optimization problem. However, when you have two metrics, you don't get a well-defined optimization problem, since it's not clear what would count as optimal. Indeed, there might be multiple solutions that each achieve a different tradeoff between these two metrics, each of which is "Pareto-optimal". (See also What is a bicriteria approximation algorithm?.) So, to count as an optimization problem, you have to be maximizing/minimizing a single objective function; the standard theory of approximation algorithms is only applicable when you have a problem of this form. If your problem isn't of this form, it probably doesn't make sense to ask if it is approximable, as standard definitions of approximability don't apply.
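The computable surrogate objective suggested in the answer -- score a candidate placement by how well it reproduces the edge/non-edge structure of the given unit disk graph -- can be sketched concretely. The following is an illustration only: the point sets, the radius, and the simplified no-noise threshold are hypothetical, and the true positions appear solely to generate the input graph that an instance would provide.

```cpp
#include <cmath>
#include <vector>

struct Pt { double x, y; };

double dist(Pt a, Pt b) { return std::hypot(a.x - b.x, a.y - b.y); }

// Fraction of node pairs whose edge/non-edge status in the unit disk graph
// (radius R) is reproduced by the estimated placement. The actual positions
// are used only to *generate* the graph, mimicking the given instance.
double consistency(const std::vector<Pt>& est, const std::vector<Pt>& act, double R) {
    int ok = 0, total = 0;
    for (size_t i = 0; i < act.size(); ++i)
        for (size_t j = i + 1; j < act.size(); ++j) {
            bool edge      = dist(act[i], act[j]) <= R;  // what the graph says
            bool predicted = dist(est[i], est[j]) <= R;  // what the placement implies
            ok += (edge == predicted);
            ++total;
        }
    return static_cast<double>(ok) / total;
}

// Tiny demo instances with hypothetical coordinates.
double demo_perfect_score() {
    std::vector<Pt> act = {{0,0},{1,0},{0,1},{3,3}};
    return consistency(act, act, 1.5);       // the true placement
}
double demo_scrambled_score() {
    std::vector<Pt> act = {{0,0},{1,0},{0,1},{3,3}};
    std::vector<Pt> bad = {{0,0},{5,0},{0,5},{0.1,0.1}};
    return consistency(bad, act, 1.5);       // a placement that breaks edges
}
```

Note that this objective is computable from the input alone, so optimizing it is a well-formed problem -- but, as the answer says, it is a different problem, and a high score need not mean a small true localization error.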
{ "domain": "cs.stackexchange", "id": 5179, "tags": "algorithm-analysis, proof-techniques, approximation" }
What does it mean by a nonrenormalizable operator being induced in a Lagrangian?
Question: I have heard that nonrenormalizable operators (i.e., mass dimension greater than 4) can be "induced" in the Lagrangian (that we started with) via loop effects. However, I do not understand what it means for a new operator or term in the Lagrangian to be induced. In QED, loops generally correspond to self-energy diagrams, vacuum polarization diagrams and vertex correction diagrams which modify the bare mass, bare charge, and bare fields to the corresponding renormalized quantities by adding counterterms (or by splitting the original Lagrangian into a renormalized part and a counterterm part). Counterterms must be included, as I understand, to get rid of various divergences (or cut-off dependence). $\bullet$ But why should one include nonrenormalizable terms in the original Lagrangian (in which none of the terms did resemble the induced operator)? $\bullet$ Can they be thought of as counterterms? $\bullet$ How many of them can/should be included? $\bullet$ Can one clarify the concept of inducing non-renormalizable operators in the context of a simple field theory? Answer: Oddly enough, non-renormalizable operators appear when renormalizing: In the Wilsonian viewpoint, every QFT is defined as an effective theory with an intrinsic momentum cutoff $\Lambda_0$. Renormalizing the theory corresponds to lowering this cutoff by integrating out the Fourier modes of the fields above the new cutoff $\Lambda$ in the path integral. It turns out, if you (formally) compute this integral, that the result is a new partition function which looks like the partition function for a different action, the Wilsonian effective action, and that it includes all terms not forbidden by symmetry, no matter their renormalizability.
Effectively, you should think of this as the original action including all those terms with a coupling constant of 0, and when renormalizing by lowering the scale towards the infrared cutoff, they start to appear again because the coupling gets renormalized away from 0 unless protected by symmetry. An illustrative example of how non-renormalizable terms can appear in the infrared is the Fermi theory of beta decay, which has a non-renormalizable interaction part $\propto \bar\psi_p\psi_n\bar\psi_e\psi_\nu$ where $n,p,e,\nu$ denote the spinor field of the proton, neutron, electron and neutrino, respectively. This theory is amazingly predictive in the low-energy regime, but this is just an effective vertex one gets from integrating out the weak interaction - in the full Standard model, this four-fermion vertex is resolved into the fermions interacting not directly, but via vertices involving W- and Z-bosons of the weak interaction.
{ "domain": "physics.stackexchange", "id": 35453, "tags": "quantum-field-theory, renormalization, effective-field-theory" }
What exactly does Ohm's law say?
Question: Ohm's law states that the current through a conductor between two points is directly proportional to the voltage across the two points. Introducing the constant of proportionality, the resistance $R$, one arrives at the usual mathematical equation that describes this relationship: $$V=IR.$$ Then why is the assertion that $V=IR$ is a statement of Ohm's law said to be untrue? Answer: $V = RI$ is a statement of Ohm's law, provided the resistance $R$ is a constant, i.e. independent of the voltage $V$ or the current $I$. Ohm's law is valid to a good accuracy for a wide range of materials (called ohmic materials), but does not apply to all materials.
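A small numerical sketch of the distinction: for an ohmic element the ratio V/I is the same at every operating point, while for a non-ohmic element (here an illustrative diode-like exponential I-V law, not taken from the answer, with hypothetical constants) no single constant R exists.

```cpp
#include <cmath>

// Ohmic element: R is constant (100 ohms here), so V/I is the same everywhere.
double ohmic_current(double V) { return V / 100.0; }

// Non-ohmic element: an illustrative diode-like exponential I-V law.
// The constants are hypothetical, chosen only to show the curvature.
double diode_current(double V) { return 1e-12 * (std::exp(V / 0.026) - 1.0); }

// Effective resistance V/I at a given operating point.
double effective_R(double V, double I) { return V / I; }
```

For the ohmic element, effective_R is 100 ohms at every voltage; for the diode-like element it changes by orders of magnitude between operating points, which is exactly why V=IR with a constant R is not a law there.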
{ "domain": "physics.stackexchange", "id": 71759, "tags": "electric-current, electrical-resistance, voltage, conductors, approximations" }
Force on a line
Question: Say you have a rigid line of mass $m$ and length $\ell$ along the $x$ axis and you apply a constant force $f$ at one end in a direction that is always perpendicular to the line, starting in the $y$ direction. Assume there are no other external forces other than the applied force. How would you find the position and rotation of the center of the line at time $t$ (or $dt$)? Would any parts of this line remain motionless? How does the answer change as you move the force towards the center of mass? Answer: So here are the equations of motion: You have the geometry, so you know the mass moment of inertia about the center of mass to be $I_C=\frac{m}{12}\ell^2$. The applied force $f$ creates a torque about the center of mass equal to $\tau=\frac{\ell}{2}\,f$. The center of mass will accelerate by $a_C$ in the direction of $f$ with $$ m a_C = f $$ The body will accelerate rotationally by $\alpha$ with $$ \tau = I_C \alpha \bigg\} \frac{\ell}{2}\,f = \frac{m}{12} \ell^2 \alpha $$ Given the motion $$a_C = \frac{f}{m} \\ \alpha = \frac{6 f}{m \ell}$$ let's find if this corresponds to a rotation about a point. If a point exists, it will lie on the other side of the rod from where the force is applied; let's assume a distance $c$ from the center of mass. If the linear acceleration of that point is zero, but the rotation is not, then the linear acceleration of the center of mass would be $a_C = c\, \alpha$. Find the point by equating $$ \frac{f}{m} = c \, \frac{6 f}{m \ell} \bigg\} c = \frac{\ell}{6} $$ FYI - The point of rotation is called a pole and the line of action of the force a polar. These show up in the center of percussion calculation. In your case, the point of the force $f$ is the point of percussion of the rod about the rotation point. You can reverse the situation and say that a point located at $c=\frac{\ell}{6}$ is the center of percussion when rotating about the end point.
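The chain of results -- $a_C = f/m$, $\alpha = 6f/(m\ell)$, and the pole at $c = \ell/6$ -- can be checked numerically. A quick sketch with hypothetical values for $m$, $\ell$, $f$:

```cpp
#include <cmath>

// Kinematic quantities for a uniform rod of mass m, length l, pushed at one
// end by a perpendicular force f (the numbers used below are hypothetical).
struct RodMotion { double aC, alpha, pole; };

RodMotion rod_under_end_force(double m, double l, double f) {
    RodMotion r;
    r.aC    = f / m;              // linear acceleration of the center of mass
    r.alpha = 6.0 * f / (m * l);  // angular acceleration from tau = I_C * alpha
    r.pole  = l / 6.0;            // claimed stationary point behind the COM
    return r;
}

// Instantaneous acceleration of a point a distance c behind the center of
// mass, on the side away from the force: a = a_C - c * alpha.
double point_accel(const RodMotion& r, double c) { return r.aC - c * r.alpha; }
```

Plugging the pole distance into point_accel gives zero acceleration, while the center of mass itself (c = 0) accelerates, confirming the instantaneous rotation point at $\ell/6$.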
{ "domain": "physics.stackexchange", "id": 12240, "tags": "homework-and-exercises, newtonian-mechanics, rotational-dynamics, rigid-body-dynamics" }
Neutrinos and anti-neutrinos in the Standard Model
Question: In the Standard Model, the neutrino and the left-handed electron form an SU(2) doublet. What about the anti-neutrinos in the Standard Model? Do they also form some doublet? If neutrinos have tiny masses, will it not imply indirectly and conclusively that right-handed neutrinos must exist in nature? EDIT : Neutrinos will have a Majorana mass term if they are Majorana fermions. Is that right? Now, if neutrinos are Majorana fermions, will they have definite handedness? For example, does $\nu_M=\begin{pmatrix}\nu_L\\ i\sigma^2\nu_L^*\end{pmatrix}$ have definite handedness? Therefore, doesn't it imply that if neutrinos are massive then a right-handed component of it $\begin{pmatrix} 0\\ i\sigma^2\nu_L^*\end{pmatrix}$ must exist? Although we are not using $\nu_R$ to construct this column, does it imply that $\nu_M$ does not have a right-handed component? It is the column $\nu_M$ which we should call a neutrino. Then it has both components. However, one can say that a purely right-handed neutrino need not exist if the neutrino is a Majorana fermion. Therefore, it seems that if neutrinos are massive a right-handed component of it must exist (be it a Dirac or a Majorana particle). Correct me if I am wrong. Answer: The antineutrinos do indeed form a doublet. The particle-antiparticle conjugation operator is usually denoted by $\hat{C}$ and is defined through: \begin{equation} \hat{ C}: \psi \rightarrow \psi ^c = C \bar{\psi} ^T \end{equation} where $ C \equiv i \gamma _2 \gamma _0 $. So given a neutrino you can always get its complex conjugate with this operator: \begin{equation} \nu _L ^{\,\,c } = i \gamma _2 \gamma _0 ( \overline{\nu _L} ) ^T \end{equation} It's easy to check that this antineutrino is actually right-handed, by applying a left projector onto it.
The antineutrino forms a doublet with the antileptons: \begin{equation} \left( \begin{array}{c} \nu _L ^{\,c } \\ e _L ^{ \, c} \end{array} \right) \end{equation} With regards to your second question, no, having neutrino masses does not imply that there exist right-handed neutrinos. This is because neutrinos could have Majorana masses ($\frac{m}{2} \nu _L \nu _L +h.c. $) as well as Dirac masses $m( \overline{\nu_L} \nu_R + h.c.)$. Majorana masses could arise if, for example, there exists a heavy Higgs which is a triplet under $SU(2)_L$ (which can give rise to what's known as a type 2 see-saw mechanism).
{ "domain": "physics.stackexchange", "id": 12168, "tags": "particle-physics, neutrinos, antimatter" }
Assembler-program which reverses the content of EAX
Question: Consider the following exercise: Write a program that takes a number (of size 4 bytes) x as input, and then reverses all the bits of x, and outputs the result. By reversing all bits we mean that the bit with original location i will move to location 31-i. Small example (for the 8 bit case): if x == {01001111}_2, then the output is {11110010}_2. In this example we reversed only 8 bits. Your program will be able to reverse 32 bits. The full exercise description can be seen here: XORPD GitHub I tinkered out the following idea. Please take into account my comments too. format PE console entry start include 'win32a.inc' ; =============================================== section '.text' code readable executable start: mov eax, 0x4f ; 0x4f is equal to 01001111 (from the exercise-description example). mov cl, 0x1f ; cl becomes the control variable. 0x1f == 31 decimal xor edx, edx ; edx will accumulate the different states of ebx during the runtime. process_bit: shr eax, 0x1 ; Kick the right-most bit out ... jc add_one ; If it was a 1 jump to 'add_one' ... mov ebx, 0x0 ; ... otherwise write a 0 ... jmp now_rotate add_one: mov ebx, 0x1 now_rotate: rol ebx, cl ; The right-most bit has been freshly written. Now move it n positions to the left. or edx, ebx ; "Save" or "Add" the current positive bits (1-bits) of ebx to edx. loop process_bit mov eax, edx call print_eax_binary ; Exit the process: push 0 call [ExitProcess] include 'training.inc' I guess it works right. Please compare the result on the screenshot with the example value from the exercise description. What do you think of my solution? Is it valid? Or does it have to be improved? Is there a better way to solve the described task? Looking forward to reading your hints and comments.
Your code can be written without the jump to add_one: xor ebx, ebx shr eax, 1 adc ebx, 0 These three instructions replace the whole jumping. The adc (add with carry) instruction's original purpose is to support chained addition, but it can be used creatively for many other purposes. Note that I had to move the xor ebx, ebx to the top since it updates the carry flag. There are also the rcr and rcl instructions that efficiently use the carry flag as a one-bit register. You can just repeat 32 times: rcr eax, 1 rcl ebx, 1 And you're done. When you inline this loop, you have a constant-time operation that finishes in 64 instructions. Each of them probably takes a single cycle, therefore 64 cycles. That's already acceptable, but there are probably better ways. On the electrical, hardware level, the bitswap operation can be implemented by just swapping the wires, therefore chances are high that there is some machine instruction that makes this task more efficient. Just read through the whole processor manual to see if there is anything related. Have a look for the keywords "swap", "shift", "mask". If you want to get a really fast program, you should use the bswap eax instruction, followed by code that reverses the bit order. This can probably be found in the excellent book Hacker's Delight. The basic idea is to take every second bit and shift it to the left. At the same time, take the remaining bits and shift them to the right. Like this: bits0 = (x & 0x55555555) << 1 bits1 = (x >> 1) & 0x55555555 x = bits0 | bits1 Then do the same thing with groups of 2 bits, and then once more with groups of 4 bits. The groups of 8 and 16 bits are done by the bswap instruction, you don't need to implement them on your own.
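The mask-and-shift idea sketched at the end, combined with a byte swap, gives a complete 32-bit reversal. Here is a C++ rendering following the Hacker's Delight approach the answer alludes to; the final statement does in code what the bswap instruction does in hardware:

```cpp
#include <cstdint>

// Full 32-bit bit reversal via divide and conquer: swap adjacent bits,
// then 2-bit groups, then nibbles, then reverse the byte order.
uint32_t reverse_bits(uint32_t x) {
    x = ((x & 0x55555555u) << 1) | ((x >> 1) & 0x55555555u); // swap bit pairs
    x = ((x & 0x33333333u) << 2) | ((x >> 2) & 0x33333333u); // swap 2-bit groups
    x = ((x & 0x0F0F0F0Fu) << 4) | ((x >> 4) & 0x0F0F0F0Fu); // swap nibbles
    x = (x << 24) | ((x & 0x0000FF00u) << 8) |
        ((x >> 8) & 0x0000FF00u) | (x >> 24);                // bswap equivalent
    return x;
}
```

This is branch-free and runs in a fixed handful of instructions, compared with the 32-iteration loop in the original program; reversing 0x4F (the 01001111 example, widened to 32 bits) yields 0xF2000000.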
{ "domain": "codereview.stackexchange", "id": 36798, "tags": "algorithm, bitwise, assembly" }
Unable to locate package ros-groovy-desktop-full
Question: My Ubuntu Linux is 12.04, and I want to install ROS, but when I followed the instructions on ros.org, at the step "sudo apt-get install ros-groovy-desktop-full", I got the error "Unable to locate package ros-groovy-desktop-full". I tried other versions of ROS, such as hydro and diamondback, and another version of Ubuntu (12.10), but the problem persisted. Originally posted by rosmichael on ROS Answers with karma: 1 on 2014-03-12 Post score: 0 Answer: Have you checked whether your Ubuntu has the "restricted," "universe," and "multiverse" repositories enabled? If not, the easiest way is to install the Synaptic Package Manager, run it, and check Settings->Repositories; in the first tab, "Ubuntu Software", it is all written down. Then you have to add the right repository for your Ubuntu, as in the tutorial, and you should not forget to run sudo apt-get update. Then you can check whether apt-get finds something (just type sudo apt-get install ros- and press Tab). Originally posted by BP with karma: 176 on 2014-03-12 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by rosmichael on 2014-03-12: thanks solved by goagent
{ "domain": "robotics.stackexchange", "id": 17269, "tags": "ros, package, ros-groovy" }
rqt_graph not shown in window
Question: Hi, I am new to ROS. I am using Ubuntu 12.04 LTS with ROS Hydro. I want to work through tutorial 6 about understanding ROS topics. I can run the first command, but when I run rosrun rqt_graph rqt_graph, I can't see the graph in the window that opens; the window is empty apart from some buttons. sara@sara:~$ rosrun rqt_graph rqt_graph Couldn't import dot_parser, loading of dot files will not be possible. PluginHandlerDirect._restore_settings() plugin "rqt_graph/RosGraph#0" raised an exception: Traceback (most recent call last): File "/opt/ros/hydro/lib/python2.7/dist-packages/qt_gui/plugin_handler_direct.py", line 116, in _restore_settings self._plugin.restore_settings(plugin_settings_plugin, instance_settings_plugin) File "/opt/ros/hydro/lib/python2.7/dist-packages/rqt_graph/ros_graph.py", line 202, in restore_settings self._refresh_rosgraph() File "/opt/ros/hydro/lib/python2.7/dist-packages/rqt_graph/ros_graph.py", line 226, in _refresh_rosgraph self._update_graph_view(self._generate_dotcode()) File "/opt/ros/hydro/lib/python2.7/dist-packages/rqt_graph/ros_graph.py", line 259, in _update_graph_view self._redraw_graph_view() File "/opt/ros/hydro/lib/python2.7/dist-packages/rqt_graph/ros_graph.py", line 292, in _redraw_graph_view same_label_siblings=True) File "/opt/ros/hydro/lib/python2.7/dist-packages/qt_dotgraph/dot_to_qt.py", line 228, in dotcode_to_qt_items graph = pydot.graph_from_dot_data(dotcode.encode("ascii", "ignore")) File "/usr/lib/pymodules/python2.7/pydot.py", line 199, in graph_from_dot_data return dot_parser.parse_dot_data(data) NameError: global name 'dot_parser' is not defined What should I do? Thanks in advance Sarah Originally posted by sara.ershadi on ROS Answers with karma: 1 on 2014-02-25 Post score: 0 Original comments Comment by Veerachart on 2014-03-03: I also got the same problem as you. I am using Ubuntu 13.10 and ROS Hydro, installed from source a week ago.
Answer: I know that this is an older topic and that there was an answer given. But in my case that answer did NOT work. I'm running indigo installed on Linux Mint 17.1 (based on Ubuntu 14.04 Trusty). I was having the same error as listed above and all the solutions I found didn't help. Until I did some digging. I found that due to having more than one Python version installed the pip that was running by default was updating a different version of Python than ROS was using. So when I tried un-installing pyparsing and pydot and then installing newer versions there was no apparent change in behavior. Trying to downgrade pyparsing gave an error and would not install the 1.5.7 version. What I did find was that I had to explicitly run the version of pip from where the version of Python 2.7 being used by ROS was running on my system. In my case this was sudo /usr/bin/pip install --upgrade pydot This caused pyparsing and pydot to be updated. Note that I got a message about not uninstalling pydot due to it being owned by OS, but this didn't impact the fix Once the update completed rqt_graph is now working on my system. You can do: which python To locate the directory where python is running from and double check where your pip is by doing: which pip Both pip and python should be on the same path (e.g. /usr/bin in my case) but the default pip binary was being found at /usr/local/bin and not /usr/bin. So I thought I would add this note for anyone who happens to be tearing their hair out with a similar issue. Burt Originally posted by burtbick with karma: 201 on 2017-04-15 This answer was ACCEPTED on the original site Post score: 7
{ "domain": "robotics.stackexchange", "id": 17082, "tags": "ros, rqt-graph" }
Implementation of a queue
Question: It's been nearly a year that I've been using C++, and I have just implemented this queue. I've tested it in many different scenarios, and it seems to work completely fine. Would you mind telling me what can be improved? What techniques might and might not be used in a professional scenario? In general, could you please give me your opinions so that I can improve my skills? // --- Implementation of an exception class class E: public std::exception{ const char * _msg = "Default Exception."; E(){}; public: E(const char * message) throw() { this->_msg = message; } const char * what() const throw(){ return this->_msg; } }; // --- Implementation of a queue. template <typename T> class Queue { static const int _defaultSize = 10; static const int _maxSize = 1000; int _size; int _currentSize = 0; T * _queuePointer; int _firstInQueue = -1; // Holds the index of the first item in the queue (the item that should be popped). int _lastIndex = -1; // Holds the index that we have just pushed a new element to. ("last index to have an element being added to") public: // Constructors and Destructors Queue(int sz=_defaultSize); // Default/Int Constructor Queue(const Queue & other); // Copy Constructor ~Queue(); // Destructor // Overloaded Assignment Operator Queue & operator = (const Queue rhs); // To implement the copy-and-swap idiom // Utility Functions void swap(Queue & rhs); T enqueue(const T & node); T dequeue(); bool isFull() const { return (this->getCurrentSize() == this->getSize()); }; bool isEmpty() const { return (!this->getCurrentSize()); }; // Getters/Accessors int getCurrentSize() const { return this->_currentSize; } int getSize() const { return this->_size; } }; // Implementation of Constructors and Destructors template <typename T> Queue<T>::Queue(int sz){ if (sz < 1 || sz > _maxSize){ // Invalid 'sz' argument value.
throw E("Queue Exception: Invalid size argument value."); }else { std::cout << "Created Object (Default/Int Constructor)" << std::endl; this->_size = sz; this->_queuePointer = new T[this->_size]; } } template <typename T> Queue<T>::Queue(const Queue<T> & other){ this->_size = other._size; this->_currentSize = other._currentSize; this->_lastIndex = other._lastIndex; this->_queuePointer = new T[this->_size]; for(int i=0; i < this->_size; i++){ this->_queuePointer[i] = other._queuePointer[i]; } } template <typename T> Queue<T>::~Queue(){ delete [] this->_queuePointer; } // Implementation Of The Overloaded Assignment Operator template <typename T> Queue<T> & Queue<T>::operator = (Queue<T> rhs){ // So that I can use the copy-and-swap idiom. this->swap(rhs); return *this; } // Implementation of Utility Functions template <typename T> void Queue<T>::swap(Queue<T> & rhs){ std::swap(this->_size, rhs._size); std::swap(this->_currentSize, rhs._currentSize); std::swap(this->_lastIndex, rhs._lastIndex); /* As I am assigning, it means that dynamic memory was allocated for the lhs object. So, before copying the content of the rhs to the lhs object, let's delete the allocated memory from the lhs object and allocate again based on the new size. */ delete [] this->_queuePointer; this->_queuePointer = new T[this->_size]; for(int i=0; i < this->_size; i++){ this->_queuePointer[i] = rhs._queuePointer[i]; } } template <typename T> T Queue<T>::enqueue(const T & node){ if(this->isFull()){ // The queue is full. throw E("Queue Exception: Your queue is full! You can't push anymore until you pop something."); }else { // The queue is not full. if(this->_firstInQueue == -1){ // If it is the first item being pushed to the queue. this->_firstInQueue++; // The first in queue is now index 0. } // This if statement will just be executed if I push another node, and the last // node I added was at the last position of the queue. 
if(this->_lastIndex == (this->getSize() - 1)){ // If the last index is at the last position of the queue, // set the last index to -1 again. this->_lastIndex = -1; } // Increasing index to the index number that we should add the new element. this->_lastIndex++; // Pushing element here (with respect to/using lastindex)... this->_queuePointer[this->_lastIndex] = node; // Increasing the current size of the queue. this->_currentSize++; } return (this->_queuePointer[this->_lastIndex]); } template <typename T> T Queue<T>::dequeue(){ if(this->isEmpty()){ // The queue is empty. throw E("Queue Exception: Your queue is empty. There is nothing to pop!"); } // The queue is not empty. T value_to_be_returned = this->_queuePointer[this->_firstInQueue]; if(this->_currentSize == 1){ // If the queue has just one element and this element is at index 0. // Setting the index of the first in the queue to -1, because I am popping the // last element of the queue. Now, if I push a new element after popping this last one, // the element being pushed will be first in the queue, and its index will be 0. this->_firstInQueue = -1; // The first in queue is now back to index -1. // Returning the last index to -1, so that when the new item is pushed, the last index will be 0. this->_lastIndex = -1; // OBS: fiq and the li must ALWAYS go back to their initial values, -1, if all // the values are popped from the queue. }else { // Increasing index. // This if statement will just be executed if the first element in the queue // is at the last position of the queue. If so, we need to set the first in queue // variable to -1 again, and then increase it to 0, so that the next element first // in the queue is at index 0. if (this->_firstInQueue == (this->getSize() - 1)){ this->_firstInQueue = -1; } this->_firstInQueue++; } // Decreasing queue's current size. this->_currentSize--; return value_to_be_returned; } Answer: I see a number of things that may help you improve your code. 
Use the required #includes The code uses std::exception, std::cout and std::swap but the corresponding headers are not listed. It was not difficult to infer, but it helps reviewers if the code is complete. The code should have these three lines: #include <exception> #include <iostream> #include <utility> Fix the bugs There are some problems with the operator = implementation and friends. First, the copy does not initialize the _firstInQueue member. Since its default value is -1, subsequent calls to functions such as dequeue will attempt to access out-of-bounds memory which is undefined behavior. Second, the loop that copies pointers marches through the indices up to this->_size, but fails to account for the possibility that, say, Q1 is larger than Q2. Don't use this-> everywhere Within member functions, this-> is implied, so writing it out everywhere just needlessly clutters the code. Every instance of this-> in this code can safely be deleted. Don't use std::endl if '\n' will do Using std::endl emits a \n and flushes the stream. Unless you really need the stream flushed, you can improve the performance of the code by simply emitting '\n' instead of using the potentially more computationally costly std::endl. With that said, in this code, I think the entire line of code should be deleted since it appears to simply be debugging help. return is not a function Since return is a keyword and not a function, it does not need parentheses for any argument that may follow. So instead of this: bool isEmpty() const { return (!getCurrentSize()); }; I would write this: bool isEmpty() const { return !getCurrentSize(); } Note also that the trailing semicolon is not necessary.
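To make the two bug fixes concrete, here is a minimal self-contained sketch of the corrected copy-and-swap pieces. The class is deliberately simplified and renamed relative to the original (members like `first_` stand in for `_firstInQueue`), so this is an illustration of the technique rather than the poster's full class. The key points: the copy constructor and `swap` handle every member, including the first-in-queue index, and `swap` exchanges the raw pointers instead of deleting and re-copying, which also makes the size-mismatch problem disappear.

```cpp
#include <cassert>
#include <cstddef>
#include <utility>

// Simplified sketch, not the poster's full class.  swap() exchanges every
// member, including the first-in-queue index that the original forgot, and
// it swaps the raw pointers: the old storage migrates into the by-value
// parameter of operator= and is released by its destructor.
template <typename T>
class Queue {
public:
    explicit Queue(std::size_t sz)
        : size_(sz), count_(0), first_(-1), last_(-1), data_(new T[sz]) {}

    Queue(const Queue& other)
        : size_(other.size_), count_(other.count_),
          first_(other.first_), last_(other.last_),  // copy ALL members
          data_(new T[other.size_]) {
        for (std::size_t i = 0; i < size_; ++i) data_[i] = other.data_[i];
    }

    ~Queue() { delete[] data_; }

    Queue& operator=(Queue rhs) {  // by-value parameter: copy-and-swap
        swap(rhs);
        return *this;
    }

    void swap(Queue& rhs) {
        std::swap(size_,  rhs.size_);
        std::swap(count_, rhs.count_);
        std::swap(first_, rhs.first_);  // was missing in the original
        std::swap(last_,  rhs.last_);
        std::swap(data_,  rhs.data_);   // no delete/new, no copy loop
    }

    bool enqueue(const T& v) {
        if (count_ == size_) return false;
        last_ = (last_ + 1) % static_cast<int>(size_);
        if (first_ == -1) first_ = 0;
        data_[last_] = v;
        ++count_;
        return true;
    }

    T dequeue() {
        T v = data_[first_];
        first_ = (first_ + 1) % static_cast<int>(size_);
        if (--count_ == 0) { first_ = -1; last_ = -1; }
        return v;
    }

    std::size_t count() const { return count_; }

private:
    std::size_t size_;
    std::size_t count_;
    int first_;
    int last_;
    T* data_;
};
```

Assigning a three-element queue to a two-element queue now simply hands the smaller queue the larger buffer, and the dequeue position survives the copy.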
{ "domain": "codereview.stackexchange", "id": 30605, "tags": "c++, c++11, queue" }
Why would a Boltzmann brain be transient?
Question: The Boltzmann brain idea as I understand it: suppose the universe has an infinite lifetime. Once heat death is achieved, there are no more large-scale structures to the universe -- everything is just particles randomly floating around. But, just by chance these particles will sometimes randomly form themselves into structures. One possible structure is a human brain, in exactly the same state as your brain right now, e.g. with neurons firing in a way appropriate to the memories + perceptions you have right now. Given that the heat-death state lasts indefinitely, we would expect to be one of these randomly-formed brain structures, as opposed to being a human that exists because of being born, etc. Usually, when I see this argument, it's asserted that a Boltzmann brain would vanish almost instantaneously. My question is, why? If atoms randomly arrange themselves into something like a brain, why wouldn't this structure persist like an actual brain? (This is the position I understand this blog post to be taking). I think there's something I don't understand about entropy, because from this perspective I don't even see why there would be a heat death in the first place. Given infinite time, wouldn't the uniform sea of randomly-moving atoms just by chance happen to arrange itself into a universe (that is, a self-perpetuating state) and continue from there? Answer: A Boltzmann brain is not that different from what might be called Boltzmann cheese. Given enough time a set of atoms or particles might arrange themselves by statistical fluctuations into a big wheel of cheese. If that happens there is no reason to think the cheese would then rapidly be demolished unless it formed in a star, or while falling into a black hole, or in similar circumstances that would demolish it, say on my table where I start to eat it. The Boltzmann brain would appear to require far fewer special circumstances for its occurrence than an entire universe.
The number of microstates, or the Hilbert space of states, for a brain is smaller than that of an entire cosmos. So the difficulty that is raised is that it would statistically be more plausible for a Boltzmann brain to spontaneously occur and from there dream up all that I or the rest of us observe. It is similar to the philosopher's question of how we know that we are not just a brain in a vat with input stimuli that give us the conscious appearance of an exterior world that actually does not exist. There is though one key difference between the spontaneous generation of a cosmos and the appearance of a Boltzmann brain. We have some reason to think that a cosmology emerges from a high energy vacuum, called a false vacuum, that by various means tunnels or transitions into a lower energy vacuum. This idea originated with Coleman and de Luccia as bubble nucleation and was found to work well with inflationary cosmology. This gives an open direction for the generation of a cosmology, and as an open spacetime it is not subject to the constraints of closed thermodynamics. The Boltzmann brain occurs by pure statistical fluctuations with no “gradient” or direction given by energy potential differences, say from a high energy vacuum of complete symmetry (false vacuum) to a physical vacuum at low energy. A comparison between the spontaneous quantum generation of a cosmology and the thermodynamic occurrence of a Boltzmann brain therefore suggests these are categorically different problems. Sean Carroll is concerned about Boltzmann brains (BB). I read quite some time ago an estimate of the occurrence of a BB, and as I recall the time is comparable to or longer than the estimated stability of the de Sitter vacuum of $10^{10^{70}}$ years. In a multiverse situation with something like $10^{1000}$ cosmologies on the landscape (and F-theory estimates are on the order of $10^{10^5}$), there is maybe no way that we can prove that BBs do not exist.
On the other hand, the purpose of science is not to cover ground like that and to prove that something cannot exist in some absolute sense. On the whole I would say the BB problem is one that is not really worth being overly concerned about.
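As a heuristic supplement (not part of the original answer): the relative likelihood of the two fluctuations can be estimated with the standard Einstein-Boltzmann fluctuation formula,

```latex
P(\text{fluctuation}) \;\sim\; e^{\Delta S / k_B}, \qquad \Delta S < 0,
```

so a fluctuation that only needs to assemble a brain (small $|\Delta S|$) is overwhelmingly more probable than one that assembles an entire low-entropy universe (enormous $|\Delta S|$), which is exactly why the brain is considered the "cheaper" of the two fluctuations.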
{ "domain": "physics.stackexchange", "id": 31659, "tags": "statistical-mechanics, entropy" }
Robotino Mapping, Navigation, Obstacle avoidance
Question: Hello, I am a student and new to Robotino and ROS. I am using Robotino with ROS Indigo. My goal is to move the robot from point A to B and avoid obstacles. I have already done obstacle detection using the distance sensors and the Kinect. I can move the robot manually using teleop. I wrote a simple node to move the robot in a straight line, make a u-turn, and return to the source point: geometry_msgs::Twist cmd_vel_msg_; if(count>100 && count<140) { ROS_INFO("U turn ..........%s", msg.data.c_str()); vel_x = 0.0; vel_y = 0.0; vel_omega = 0.6; } else { vel_x = .17; vel_y = 0.0; vel_omega = 0.0; ROS_INFO("straight------------- %s", msg.data.c_str()); } cmd_vel_msg_.linear.x = scale_linear_ * vel_x; cmd_vel_msg_.linear.y = scale_linear_ * vel_y; cmd_vel_msg_.angular.z = scale_angular_ * vel_omega; pub.publish(cmd_vel_msg_); ---------------------------------------------------------//and the publisher is pub = n.advertise<geometry_msgs::Twist>("/cmd_vel", 1, true); Well, I wrote the above node just for test purposes, and successfully tested that I can detect obstacles while moving. But my target is to move the robot in a defined map, and if any obstacle is detected, to avoid the collision. I need to create a map for that, but unfortunately the link is removed: http://wiki.ros.org/robotino_navigation/Tutorials/Mapping%20with%20Robotino I am not finding any help or clues for creating a map with Robotino. I do not have a laser scanner; I have a Kinect. Can anyone please suggest what approach I should take to reach my goal? Originally posted by rasoo on ROS Answers with karma: 43 on 2016-07-22 Post score: 0 Answer: Since you are using a Kinect, you might look at the Turtlebot tutorials (http://learn.turtlebot.com/) to see if there is any info that could be applicable to your robot (launch files, a package for reading the Kinect – or how to publish distance info into ROS from the Kinect, etc.).
Originally posted by Mark Rose with karma: 1563 on 2016-07-22 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by rasoo on 2016-08-09: Thank you so very much for your response. I have visited the link you provided. Now I have some specific questions. For Robotino I use the steps from the link http://wiki.ros.org/ja/robotino_navigation/Tutorials/Mapping%20with%20Robotino I was trying to run those commands from my laptop. [continue] Comment by rasoo on 2016-08-09: Can you please tell me which commands I should run on the laptop and which on the Robotino? Normally I connect to the Robotino using robotino_node: roslaunch robotino_node robotino_node.launch hostname:=172.26.1.1 Thank you for your time. Comment by Mark Rose on 2016-08-09: As long as the communication is good, it doesn't make that much difference what you run on the robot and what you run on your laptop. On my robot (not a Robotino), I try to run everything on the robot except rviz and whatever I'm using for teleop. (I often use the teleop panel in rviz). Comment by Mark Rose on 2016-08-09: You can also run a single launch file on your laptop that spawns nodes on both your laptop and the robot, using the <machine> tag to define the hosts and the machine= attribute on the <node> tag. (Caveat: you must use RSA tokens with SSH, not SHA-1 tokens.) Comment by rasoo on 2016-08-09: Thanks a lot. Actually I was running everything on my laptop. And the map_saver command was not executing; it was waiting for the map. My question: do you use the Kinect to create a map? If I load a sample map then I can do the navigation using rviz. But how will the obstacle detection work? I was trying Comment by rasoo on 2016-08-09: to write my own node for obstacle detection using the Kinect and distance sensors. Because the distance sensors can only detect within a 40 cm range from the base of the Robotino. And the Kinect will detect objects in the 40-75 cm range.
But I am now confused: will writing this code be reinventing the wheel? Comment by Mark Rose on 2016-08-09: There is obstacle avoidance built into the local planner provided by move_base, so yes, if you write your own obstacle avoidance you may be reinventing the wheel. (But if you write a better planner, if you can, and use the plugin architecture, then others can use your code.) Comment by Mark Rose on 2016-08-09: To use the built-in obstacle detection, you need to publish a PointCloud, PointCloud2, or LaserScan using the Kinect and/or other distance sensors. The turtlebot packages include nodes to publish a LaserScan from Kinect data. Comment by rasoo on 2016-08-16: Thanks once again. But I am using Robotino and I do not have laser scan hardware. But I can get data from /odom odometry, /distance_sensor PointCloud and /camera/depth/points PointCloud2 type msgs. I need to create a map first. I am trying to use slam_gmapping. For that I need LaserScan data... Comment by rasoo on 2016-08-16: I have tried to convert the PointCloud data to a laser scan using pointcloud_to_laserscan and it publishes data over the topic /scan. But rostopic echo /scan shows nothing. I created the bag for /scan and tf. But I am still unable to create the map; it shows the message "waiting for map". Comment by rasoo on 2016-10-06: Thanks for your suggestion. Somehow I managed to load the map into my navigation module. In the simulation in RViz, it works well. But when I connect to the Robotino, the localization is not working. The robot position in the map is blinking in different places. Do you have any idea why this happens?
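For reference, the PointCloud-to-LaserScan conversion discussed in the comments is usually wired up with a small launch file. The sketch below is an assumption-laden illustration, not a tested configuration: the input topic /camera/depth/points and the height band are typical values for a Kinect mount and must be adapted, and the node and parameter names are those of the ROS 1 pointcloud_to_laserscan package.

```xml
<launch>
  <!-- Sketch: fake a planar laser scan from the Kinect depth cloud.
       Topic names and height limits below are assumptions. -->
  <node pkg="pointcloud_to_laserscan" type="pointcloud_to_laserscan_node"
        name="cloud_to_scan">
    <remap from="cloud_in" to="/camera/depth/points"/>
    <remap from="scan"     to="/scan"/>
    <param name="min_height" value="0.10"/>
    <param name="max_height" value="1.00"/>
    <param name="range_min"  value="0.45"/>
    <param name="range_max"  value="4.0"/>
  </node>
</launch>
```

With /scan published, slam_gmapping can consume it to build the map, provided the tf tree links the scan frame to the robot base.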
{ "domain": "robotics.stackexchange", "id": 25327, "tags": "ros, navigation, mapping, robotino, ros-indigo" }
Running many instances of Rviz[for example] - ROS over Multiple machines
Question: Hey guys, So, I am working on a project where different robots need to communicate with each other and I am using ROS over multiple machines. My question is: if I run more than one instance of Rviz, say on different robots, then all of the instances of Rviz will be publishing messages on the same topic. Also, different nodes will be subscribed to receive messages from the topics where their corresponding Rviz is publishing. But all this will create a mess because all the instances of Rviz will publish on the same topic and all corresponding nodes are subscribed to the same topic. What is the best way to separate the topics on which different instances of Rviz are publishing so that corresponding nodes can be subscribed to corresponding topics? I hope I have made myself clear. Please ask in case of any confusion. Thanks in advance. Naman Originally posted by Naman on ROS Answers with karma: 1464 on 2014-01-24 Post score: 0 Answer: It sounds like you should be using a separate topic for each receiving node, and configuring the corresponding rviz instance to publish to that topic. I've seen this done in the past by assigning a namespace to each robot. Rviz has some support for configuring tool properties. For example, you can change the topic that navigation goals are published to. Originally posted by ahendrix with karma: 47576 on 2014-01-24 This answer was ACCEPTED on the original site Post score: 1
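The namespace-per-robot approach the answer mentions can be sketched in a launch file like the one below (the robot names are hypothetical; each rviz's published topics then live under their own namespace, so they never collide):

```xml
<launch>
  <!-- Sketch: one rviz per robot, isolated by namespace. -->
  <group ns="robot1">
    <node pkg="rviz" type="rviz" name="rviz"/>
  </group>
  <group ns="robot2">
    <node pkg="rviz" type="rviz" name="rviz"/>
  </group>
</launch>
```

Nodes launched inside each group subscribe to their namespaced topics (e.g. /robot1/move_base_simple/goal) instead of one shared global topic.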
{ "domain": "robotics.stackexchange", "id": 16760, "tags": "ros" }
Is a single photon emitted as a spherical EM wavefront?
Question: If yes, could the same photon hit multiple targets as it expands? If not, how does the photon acquire its wave-ness if it is not born as a spherical wave? Also, in the second case, how can multiple photons synchronize to make up a single wavefront? Answer: The only way to induce electromagnetic radiation is to disturb subatomic particles. For the emission of a photon it's enough that in an atom an (excited) electron falls back to a lower level. Once emitted, the photon travels through empty space as a quantum of energy. The photon is indivisible during its travel. Hence a single photon couldn't have a spherical wavefront. The sum of the emitted photons - say from an electric bulb - is called electromagnetic radiation. The emission from a laser is very strongly directed; from a bulb it is distributed much more spherically. So the radiation could be spherical, whether it comes from a bulb or a star. Being far enough away from the source, one would receive single photons. But this is not a wavefront. How does the photon acquire its wave-ness? A wavefront can be produced with radio waves. Radio waves are produced by periodic acceleration of electrons in the antenna rod. How can multiple photons synchronize to make up a single wavefront? Since this acceleration happens nearly synchronously for all the electrons involved, the number of emitted photons follows the frequency of the antenna generator. So for radio waves one really could measure wave properties (which is not possible for a bulb powered by a DC current). But again, being far enough from the antenna, one will receive single photons.
{ "domain": "physics.stackexchange", "id": 38147, "tags": "visible-light, waves, photons" }
Teleop control not working in Turtlebot Simulator package and Gazebo
Question: currently following the tutorial @ http://www.ros.org/wiki/turtlebot_simulator/Tutorials/hydro/Explore%20the%20Gazebo%20world Everything else seems to work fine. The model loads into Gazebo and the pointcloud image can be seen in Rviz. But I cannot control the turtlebot using the teleop keyboard configuration. I'm using the newest Gazebo (1.9) and ROS Hydro is installed. If I try to use the $ roslaunch kobuki_keyop keyop.launch command it tells me Keyop can not connect. I'm not sure if they've started on different servers and are unable to communicate. Originally posted by Alkaros on ROS Answers with karma: 103 on 2013-08-26 Post score: 0 Answer: Assuming you are using Hydro sources, I committed a fix for the turtlebot teleop on Monday, so a git pull on turtlebot_simulator should fix it. Kobuki keyop also should work if you have all the latest code. The "could not connect" warning appears because before, the enable/disable commands were not simulated on Gazebo, but they are now. Please also ensure you use the latest Gazebo and SDF versions. If you don't want to mess with the code, all these changes will be released together with the official Hydro release. Originally posted by jorge with karma: 2284 on 2013-08-27 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by bit-pirate on 2013-08-27: No, don't wait! :-) Get the sources, test it and let us know what's not working. This will help us polish the code for the release! Comment by Alkaros on 2013-08-27: I'll definitely give it a go. How hard can it be, right? :)
{ "domain": "robotics.stackexchange", "id": 15365, "tags": "gazebo, simulation, rviz, turtlebot, ros-hydro" }
Converting UTC time to date in d3
Question: I intend to convert strings that are UTC timestamps into a date format, say day + date number (e.g. 'Mon 27'). My current solution is: Parse the UTC timestamp into a date object using d3.utcParse("%Y-%m-%dT%H:%M:%S") Convert the date object into a timestamp without hours (i.e. just the date information), using d3.timeFormat("%Y-%m-%d"). Parse the new timestamp again, using d3.timeParse("%Y-%m-%d"). The motivation behind this is that when passing the data into d3.js for plotting purposes, the dates are slightly offset due to (1) the timezone and (2) the daylight saving settings in the locality. This offset causes the dates to be shifted slightly with respect to the axis ticks (see figure 1), and I have therefore used this approach to strip hours, minutes, and seconds information out (see figure 2, with fixed dates). Figure 1 (above): Dates before fixing, where data points are slightly offset to the right due to daylight saving settings Figure 2 (above): Fixed dates, with hours, minutes, and seconds stripped out using the procedure stated above This is achieved by chaining the output of three functions, which I find really clunky and chatty, and I am wondering if there is a better approach to that: var data = [ ["2017-03-18T01:00:00", 20], ["2017-03-19T01:00:00", 10], ["2017-03-20T01:00:00", 5], ["2017-03-21T01:00:00", 0], ["2017-03-22T01:00:00", 1], ["2017-03-23T01:00:00", 12], ["2017-03-24T01:00:00", 23], ["2017-03-25T01:00:00", 65], ["2017-03-26T01:00:00", 78], ["2017-03-27T01:00:00", 123] ]; // Functions to parse timestamps var parseUTCDate = d3.utcParse("%Y-%m-%dT%H:%M:%S"); var formatUTCDate = d3.timeFormat("%Y-%m-%d"); var parseDate = d3.timeParse("%Y-%m-%d"); // Iterate through data for (let i in data) { var timestamp = data[i][0]; console.log(parseDate(formatUTCDate(parseUTCDate(timestamp)))); } <script src="https://cdnjs.cloudflare.com/ajax/libs/d3/4.7.4/d3.min.js"></script> // Just a normal response from the server var response = { "status": "Ok",
"data": [ ["2017-03-18T01:00:00+00:00", 20], ["2017-03-19T01:00:00+00:00", 10], ["2017-03-20T01:00:00+00:00", 5], ["2017-03-21T01:00:00+00:00", 0], ["2017-03-22T01:00:00", 1], ["2017-03-23T01:00:00+00:00", 12], ["2017-03-24T01:00:00", 23], ["2017-03-25T01:00:00+00:00", 65], ["2017-03-26T01:00:00+00:00", 78], ["2017-03-27T01:00:00+00:00", 123] ] }; // Parse the response var chart = new Vue({ el: '#visitors7days', data: function() { return { layout: { width: 800, height: 400, margin: { left: 50, top: 50, right: 50, bottom: 50 } }, plot: { points: [] } } }, // Computed functions computed: { // Return dimensions of SVG chart svgViewBox: function() { return '0 0 ' + (this.layout.width + this.layout.margin.left + this.layout.margin.right) + ' ' + (this.layout.height + this.layout.margin.top + this.layout.margin.bottom); }, // Stage stageTransform: function() { return { 'transform': 'translate(' + this.layout.margin.left + 'px,' + this.layout.margin.top + 'px)' } } }, // Initialisation mounted: function() { // Update plot this.update(); }, // Methods methods: { // Update elements in chart update: function() { // Internal variables var _w = this.layout.width; var _h = this.layout.height; // Date parser var parseUTCDate = d3.utcParse("%Y-%m-%dT%H:%M:%S"); var formatUTCDate = d3.timeFormat("%Y-%m-%d"); var parseDate = d3.timeParse("%Y-%m-%d"); var getDate = function(d) { return parseDate(formatUTCDate(parseUTCDate(d))); }; // Compute scale this.plot.scale = { x: d3.scaleTime().range([0, _w]), y: d3.scaleLinear().range([_h, 0]) }; var scale = this.plot.scale; // Generate area this.plot.area = d3.area() .x(function(d) { return scale.x(d.date); }) .y1(function(d) { return scale.y(d.count); }); // Generate line this.plot.line = d3.line() .x(function(d) { return scale.x(d.date); }) .y(function(d) { return scale.y(d.count); }); // Push individual points into data var _d = response.data; for (let i in _d) { this.plot.points.push({ date: getDate(_d[i][0].split('+')[0]), // Clean up 
dates with trailing GMT offsets count: _d[i][1] }) } // Set extend of data this.plot.scale.x.domain(d3.extent(this.plot.points, function(d) { return d.date; })); this.plot.scale.y.domain([0, d3.max(this.plot.points, function(d) { return d.count; })]); this.plot.area.y0(this.plot.scale.y(0)); // Draw axes d3.select(this.$refs.xAxis) .attr('transform', 'translate(0,' + this.layout.height + ')') .call( d3.axisBottom(scale.x) .ticks(7) .tickFormat(d3.timeFormat("%a, %b %d")) ); d3.select(this.$refs.yAxis) .call( d3.axisLeft(scale.y) ); // Draw area var $area = d3.select(this.$refs.area); $area .datum(this.plot.points) .attr('d', this.plot.area) .attr('fill', '#1ABC9C') .attr('fill-opacity', 0.5); // Draw line var $line = d3.select(this.$refs.line); $line .data([this.plot.points]) .attr('d', this.plot.line); // Draw points var $g = d3.select(this.$refs.points); $g.selectAll('circle.point').data(this.plot.points) .enter() .append('circle') .attr('r', 5) .attr('class', 'point') .attr('cx', function(d) { return scale.x(d.date); }) .attr('cy', function(d) { return scale.y(d.count); }); } } }); svg { background-color: #eee; display: block; width: 100%; } svg g.axis text { fill: #555; } svg .line { fill: none; stroke: #159078; stroke-width: 2px; } svg circle.point { fill: #fff; stroke: #159078; stroke-width: 2px; } <script src="https://cdnjs.cloudflare.com/ajax/libs/d3/4.5.0/d3.min.js"></script> <script src="https://cdnjs.cloudflare.com/ajax/libs/vue/2.2.6/vue.min.js"></script> <div id="visitors7days"> <svg :view-box.camel="svgViewBox" preserveAspectRatio="xMidYMid meet"> <g :style="stageTransform"> <g class="axis x" ref="xAxis"></g> <g class="axis y" ref="yAxis"></g> <path class="area" ref="area"></path> <path class="line" ref="line"></path> <g class="points" ref="points"></g> </g> </svg> </div> Answer: I don't understand why you need to parse the date twice. d3.utcParse() will return a Date object. 
You really don't gain anything from then re-parsing that Date from a different format. You can use that Date object as is to output whatever format you want from it. If you want to remove the time component from the original date provided, you can use the Date.setHours() method to do so (note that setHours() mutates the Date in place and returns a millisecond timestamp, so keep the Date variable rather than the return value). So, if you wanted to map your original data array to an array where the UTC string values have been replaced by Date objects with time values stripped out (i.e. set to 00:00:00.000), that might look like this: var dataWithDates = data.map(function(el) { var d = parseUTCDate(el[0]); d.setHours(0,0,0,0); el[0] = d; return el; }); This also might mean your data parser code looks like: // Date parser var parseUTCDate = d3.utcParse("%Y-%m-%dT%H:%M:%S"); var getDate = function(d) { var date = parseUTCDate(d); date.setHours(0,0,0,0); return date; }; So you could pick one of these options depending on where you want to make the conversion.
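A tiny self-contained illustration of the setHours() approach (plain JavaScript, no d3; stripTime is a made-up helper name). The caveat worth keeping in mind is that Date.prototype.setHours mutates the Date and returns a millisecond timestamp, so the helper returns the Date variable, not the return value of setHours:

```javascript
// Sketch: return a copy of `date` with the local time set to 00:00:00.000.
// stripTime is a hypothetical helper name, not part of d3.
function stripTime(date) {
  const d = new Date(date.getTime()); // copy, so the caller's Date is untouched
  d.setHours(0, 0, 0, 0);             // mutates d; its return value is a number
  return d;                           // so return the Date itself
}
```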
{ "domain": "codereview.stackexchange", "id": 24958, "tags": "javascript, datetime, d3.js" }
C++ UTF-8 decoder
Question: While writing simple text rendering I found a lack of utf-8 decoders. Most decoders I found required allocating enough space for the decoded string. In the worst case that would mean that the decoded string would be four times as large as the original string. I just needed to iterate over characters in a decoded format so I would be able to render them on the screen, so I wrote a simple function that would allow me to do that: #include <cstdint> // unsigned integer types typedef uint64_t U64; typedef uint32_t U32; typedef uint16_t U16; typedef uint8_t U8; // signed integer types typedef int64_t I64; typedef int32_t I32; typedef int16_t I16; typedef int8_t I8; U32 NextUTF8Char(const char* str, U32& idx) { // https://en.wikipedia.org/wiki/UTF-8 U8 c1 = (U8) str[idx]; ++idx; U32 utf8c; if (((c1 >> 6) & 0b11) == 0b11) { // at least 2 bytes U8 c2 = (U8) str[idx]; ++idx; if ((c1 >> 5) & 1) { // at least 3 bytes U8 c3 = (U8) str[idx]; ++idx; if ((c1 >> 4) & 1) { // 4 bytes U8 c4 = (U8) str[idx]; ++idx; utf8c = ((c4 & 0b00000111) << 18) | ((c3 & 0b00111111) << 12) | ((c2 & 0b00111111) << 6) | (c1 & 0b00111111); } else { utf8c = ((c3 & 0b00001111) << 12) | ((c2 & 0b00111111) << 6) | (c1 & 0b00111111); } } else { utf8c = ((c1 & 0b00011111) << 6) | (c2 & 0b00111111); } } else { utf8c = c1 & 0b01111111; } return utf8c; } Usage: const char* text = u8"ta suhi škafec pušča"; U32 idx = 0; U32 c; while ((c = NextUTF8Char(text, idx)) != 0) { // c is our utf-8 character in unsigned int format } I'm currently mostly concerned about the following: Readability: The intent of every piece of code is clear to the reader. Correctness: Everything is working as it should (I think it's clear what should happen). Performance: Can anything be done to improve the performance of this code?
Answer: // unsigned integer types typedef uint64_t U64; typedef uint32_t U32; typedef uint16_t U16; typedef uint8_t U8; // signed integer types typedef int64_t I64; typedef int32_t I32; typedef int16_t I16; typedef int8_t I8; This has instantly made the code harder to read (as well as being incorrect, since <cstdint> declares those names in the std namespace). I'm not sure why we declare so many types, when we use just two of them anyway. U32 NextUTF8Char(const char* str, U32& idx) Why not return a standard std::wchar_t? Or perhaps a char32_t? Similarly, str ought to be a const char8_t* (so that the example code compiles). I'd use a std::size_t for the index (or more likely get rid of idx altogether, and pass a reference to a pointer instead). The whole thing seems like reinventing a lot of work that's already done for you: #include <cwchar> char32_t NextUTF8Char(const char8_t*& str) { static const int max_utf8_len = 5; auto s = reinterpret_cast<const char*>(str); wchar_t c; std::mbstate_t state{}; auto len = std::mbrtowc(&c, s, max_utf8_len, &state); if (len > max_utf8_len) { return 0; } str += len; return c; } #include <iostream> int main() { std::locale::global(std::locale{"en_US.utf8"}); const auto* text = u8"ta suhi škafec pušča"; char32_t c; std::size_t i = 0; while ((c = NextUTF8Char(text)) != 0) { std::cout << '[' << i++ << "] = " << (std::uint_fast32_t)c << '\n'; // c is our utf-8 character in unsigned int format } } I think that std::codecvt<char32_t, char8_t, std::mbstate_t> could easily do much the same: #include <locale> char32_t NextUTF8Char(const char8_t*& str) { if (!*str) { return 0; } auto &cvt = std::use_facet<std::codecvt<char32_t, char8_t, std::mbstate_t>>(std::locale()); std::mbstate_t state{}; char32_t c; char32_t* p = &c+1; auto result = cvt.in(state, str, str+6, str, &c, p, p); switch (result) { case std::codecvt_base::ok: return c; case std::codecvt_base::partial: return c; case
std::codecvt_base::noconv: return 0; } return c; } Either is better than writing your own UTF-8 decoder.
{ "domain": "codereview.stackexchange", "id": 45058, "tags": "c++, utf-8" }
How are real particles created?
Question: The textbooks about quantum field theory I have seen so far say that all talk in popular science literature about particles being created spontaneously out of vacuum is wrong. Instead, according to QFT those virtual particles are unobservable and are just a mathematical picture of the perturbation expansion of the propagator. What I have been wondering is, how did the real particles, which are observable, get created? How does QFT describe pair production, in particular starting with vacuum and ending with a real, on-shell particle-antiparticle pair? Can anybody explain this to me and point me to some textbooks or articles elaborating on this question (no popular science, please)? Answer: ''The textbooks about quantum field theory I have seen so far say that all talk in popular science literature about particles being created spontaneously out of vacuum is wrong.'' And they are right to do so. See also my essay https://www.physicsforums.com/insights/physics-virtual-particles/ ''How does QFT describe pair production, in particular starting with vacuum and ending with a real, on-shell particle-antiparticle pair?'' It doesn't. There are no such processes. Pair production is always from other particles, never from the vacuum or from a single stable particle. ''I cannot find a calculation for an amplitude <0|e+e-> or something like that.'' Because this amplitude always vanishes. All nonzero amplitudes must respect the conservation of 4-momentum, which is impossible for <0|e+e->. You can see this from the delta-function which appears in the S-matrix elements. It follows from this formula that the requested amplitude vanishes, since delta(q)=0 when q is nonzero.
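The delta-function argument can be written out explicitly (a sketch of the standard kinematic reasoning, not quoted from the answer): the S-matrix element for vacuum to e+e- is proportional to a four-momentum-conserving delta function,

```latex
\langle 0 \,|\, e^+ e^- \rangle \;\propto\; \delta^4\!\left(p_{e^+} + p_{e^-}\right),
\qquad\text{but}\qquad
E_{e^+} + E_{e^-} \;\ge\; 2\, m_e c^2 \;>\; 0,
```

so the time component of the delta function's argument can never vanish, and the amplitude is identically zero.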
{ "domain": "physics.stackexchange", "id": 48693, "tags": "quantum-field-theory, particle-physics" }
Ruby script on-all-nodes-run not only for teaching
Question: I wrote the following Ruby script several years ago and have been using it often ever since on a Bioinformatics computer cluster. It pulls out a list of hosts from the Torque queuing system qnodes. It ssh'es and runs a command on all the nodes. Then it prints the output and/or errors in a defined order (alphabetical sort of the hostnames). A nice feature: results are printed immediately for the host that is next in the order. I would like to use it as an example for a Ruby workshop. Could you please suggest best-practice and design pattern improvements? #!/usr/bin/ruby EXCLUDE = [/girkelab/, /biocluster/, /parrot/, /owl/] require "open3" # Non-interactive, no password asking, and reasonable timeouts SSH_OPTIONS = ["-o PreferredAuthentications=publickey,hostbased,gssapi,gssapi-with-mic", "-o ForwardX11=no", "-o BatchMode=yes", "-o SetupTimeOut=5", "-o ServerAliveInterval=5", "-o ServerAliveCountMax=2" ].join(" ") SSH = "/usr/bin/ssh #{SSH_OPTIONS}" MKDIR = "/bin/mkdir" raise "Please give this command at least one argument" if ARGV.size < 1 COMMAND = ARGV[0..-1].join(' ') output_o = {} output_e = {} IO_CONNECTIONS_TO_REMOTE_PROCESSES = {} def on_all_nodes(&block) nodes = [] Kernel.open('|qnodes | grep -v "^ " | grep -v "^$"') do |f| while line = f.gets i = line.split(' ').first nodes.push(i) if EXCLUDE.select{|x| i =~ x}.empty? end end nodes.sort.each {|n| block.call(n)} end # Create processes on_all_nodes do |node| stdin, stdout, stderr = Open3.popen3("#{SSH} #{node} \"#{COMMAND}\"") IO_CONNECTIONS_TO_REMOTE_PROCESSES[node] = [stdin, stdout, stderr] end has_remote_errors = false # Collect results on_all_nodes do |node| stdin, stdout, stderr = IO_CONNECTIONS_TO_REMOTE_PROCESSES[node] stdin.close e_thread = Thread.new do while line = stderr.gets line.chomp! STDERR.puts "#{node} ERROR: #{line}" has_remote_errors = true end end o_thread = Thread.new do while line = stdout.gets line.chomp!
puts "#{node} : #{line}" end end # Let the threads finish t1 = nil t2 = nil while [t1, t2].include? nil if t1.nil? t1 = e_thread.join(0.1) # Gives 1/10 of a second to STDERR end if t2.nil? t2 = o_thread.join(0.1) # Give 1/10 of a second to STDOUT end end end exit(1) if has_remote_errors Answer: First of all, take a look at the Net::SSH library. I don't have much experience with it, so I don't know whether it supports all the options you need. But if it does, using it might be more robust than using the command line utility (you wouldn't have to worry about whether ssh is installed in the place you expect (or at all) and you wouldn't have to worry about escaping the arguments). Assuming you can't (or won't) use Net::SSH, you should at least replace /usr/bin/ssh with just ssh, so at least it still works if ssh is installed in another location in the PATH. nodes = [] Kernel.open('|qnodes | grep -v "^ " | grep -v "^$"') do |f| while line = f.gets i = line.split(' ').first nodes.push(i) if EXCLUDE.select{|x| i =~ x}.empty? end end When you initialize an empty array and then append to it in a loop, that is often a good sign you want to use map and/or select instead. line = f.gets is a bit of an anti-pattern in ruby. The IO class already has methods to iterate a file line-wise. To find out whether none of the elements in an array meet a condition, negating any? seems more idiomatic than building an array with select and check whether it's empty. nodes = Kernel.open('|qnodes | grep -v "^ " | grep -v "^$"') do |f| f.lines.map do |line| line.split(' ').first end.reject do |i| EXCLUDE.any? {|x| i =~ x} end end nodes.sort.each {|n| block.call(n)} I would recommend that instead of taking a block and yielding each element, you just return nodes.sort and rename the function to all_nodes. This way you can use all_nodes.each to execute code on all nodes, but you could also use all_nodes.map or all_nodes.select when it makes sense. 
Open3.popen3("#{SSH} #{node} \"#{COMMAND}\"") Note that this will break if COMMAND contains double quotes itself. Generally trying to escape command line arguments by surrounding them with quotes is a bad idea. system and open3 accept multiple arguments exactly to avoid this. If you make SSH an array (with one element per argument) instead of a string, you can use the multiple-arguments version of popen3 and can thus avoid the fickle solution of adding quote around COMMAND to escape spaces, i.e.: Open3.popen3(*(SSH + [node, COMMAND])) IO_CONNECTIONS_TO_REMOTE_PROCESSES = {} # ... on_all_nodes do |node| stdin, stdout, stderr = Open3.popen3("#{SSH} #{node} \"#{COMMAND}\"") IO_CONNECTIONS_TO_REMOTE_PROCESSES[node] = [stdin, stdout, stderr] end If you heeded my above advice about all_nodes you can simplify this using map. I also would suggest not using a Hash here. If you use an array instead, the nodes will stay in the order in which you inserted them, which will mean that you can iterate over that array instead of invoking all_nodes again. has_remote_errors = false all_nodes.map do |node| [node, Open3.popen3(*(SSH + [node, COMMAND]))] end.each do |node, (stdin, stdout, stderr)| stdin.close ethread = # ... # ... end This way you removed the complexity of first putting everything into a hash and then getting it out again. while line = stderr.gets Again, this can be written more idiomatically as stderr.each_line do |line|. Same with stdout. first = true This is never used. I can only assume that it's a left over of previous iterations of the code, which is no longer necessary. Obviously it should be removed. # Let the threads finish t1 = nil t2 = nil while [t1, t2].include? nil if t1.nil? t1 = e_thread.join(0.1) # Gives 1/10 of a second to STDERR end if t2.nil? t2 = o_thread.join(0.1) # Give 1/10 of a second to STDOUT end end I don't see any benefit of doing it this way. 
Just do: e_thread.join o_thread.join Note that joining on one thread does not mean that the other threads stop running - only the main thread does, but that's perfectly okay as you want that anyway.
{ "domain": "codereview.stackexchange", "id": 15237, "tags": "ruby, multithreading, networking, child-process" }
ROS Answers SE migration: ROS on windows
Question: Hi! I'm a Korean college student. I succeeded in communicating between Windows and Ubuntu using the example in hydro / win_ros (Windows - Hydro, Ubuntu - Indigo). If I try to run the ROS code using Visual Studio 2015, I get a link error. I also applied the various settings shown on the wiki. Is there a way to solve this? Also, I would like to compile the ROS code together with open-source code; would that be a problem? Thanks. Originally posted by fhggty2017 on ROS Answers with karma: 1 on 2017-06-30 Post score: 0 Original comments Comment by gvdhoorn on 2017-06-30: win_ros has not been maintained for a very long time (and Hydro has been EOL for quite some time now too). Do you really need to use win_ros, or do you just need to be able to communicate with a ROS node graph from Windows? If the latter, you could look at rosbridge_suite, .. Comment by gvdhoorn on 2017-06-30: .. ros.net, rosserial_windows or perhaps even ROS on Win10/WSL (#q238646). Answer: ROS 2 does support Windows! You write your Windows code in ROS 2, then use the ros1_bridge on your Linux system to communicate with ROS 1. Originally posted by allenh1 with karma: 3055 on 2017-07-04 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 28259, "tags": "ros" }
Product of deltas in kinetic second quantization hamiltonian
Question: I am trying to derive the result for a kinetic hamiltonian in second quantization in term of the fields, that is: $\hat{H} = \int - \Psi^\dagger (r) \frac{\hbar^2\hat{\nabla}^2}{2m} \Psi(r)$ I start with a single particle Hamiltonian, like: $$\hat{h} = -\frac{\hbar^2\hat{\nabla}_1^2}{2m}$$ and I want to obtain an analogous formula in second quantization, given a basis $|k\rangle$, applying the recipe: $$\hat{H} = \sum_{k,l} \langle k | \hat{h} | l \rangle a^\dagger_k a_l$$ However when the basis is $|r\rangle$, I encounter some (formal) problems. What is $\langle s | \hat{h} | r \rangle$? Ignoring the constants, I know that $\nabla^2 | r \rangle$=$\nabla^2\delta(x-r)$, so the "matrix element" $\langle s | \hat{h} | r \rangle$ should be something along $$\int dx\ \delta(x-s) \nabla^2\delta(x-r)$$ I understand the $\delta$ as a linear functional, so mathematically I can't really define this integral. I don't know how to solve that, or to show that, combined with the field $\Psi(r)$ it represents the operator $\Psi(r) \rightarrow\nabla^2 \Psi(r)$. I'd really appreciate an explanation of this passage, intuitive or rigorous (better both). Answer: $\newcommand{\bra}[1]{\langle #1 \vert}$ $\newcommand{\ket}[1]{\vert #1 \rangle}$ In the position basis $\hat{p}^2$ acts as $$ -\nabla^2 \psi(x) = \bra{x}\hat{p}^2\ket{\psi}$$ From \begin{align} \bra{\psi}\hat{p}^2\ket{\psi}&=\int\mathrm{dr}\,\bra{\psi}\ket{r}\bra{r}\hat{p}^2\ket{\psi}\\ &=\int\mathrm{dr}\,\psi(r)^*(-\nabla_r^2\psi(r))\\ &=\int\mathrm{drdr'}\,\psi(r')^*(-\delta(r-r')\nabla_r^2)\psi(r)\\ &=\int\mathrm{drdr'}\,\bra{\psi}\ket{r'}\bra{r'}\hat{p}^2\ket{r}\bra{r}\ket{\psi} \end{align} it follows by comparison that $$\bra{r'}\hat{p}^2\ket{r} =-\delta(r-r')\nabla_r^2$$ The $\delta$ simply tells you, that $\hat{p}^2$ is diagonal in the position basis. This argument certainly lacks mathematical rigor. One has to accept the fact that the position eigenstates can be used in this fashion. 
Or learn some functional analysis. Addendum: The meaning of $\hat{p}$ being diagonal in the pos. basis is locality. The integral displayed above does not mix wavefunctions at different points. That should be expected from a derivative operation. It only cares about an infinitesimal neighborhood of a point. The expectation value for some other non-diagonal operator $A$ would be $$ \langle A \rangle = \int\mathrm{drdr'}\psi^*(r')A(r,r')\psi(r)$$ "mixing" the wavefunction at different point in space.
{ "domain": "physics.stackexchange", "id": 20872, "tags": "homework-and-exercises, quantum-field-theory, mathematical-physics, second-quantization" }
How to cut off the initial disturbance automatically?
Question: I would like to find a way to cut off the initial disturbance (e.g. the part tf_n<20). Though it is easy to do manually for a single figure, I need a way to automatically identify the steady point, as I would otherwise need to do it 100 times. I would prefer to do it in Matlab. Answer: I just came up with a solution while writing the question... I can compare the amplitude of each period with that of the last steady periods (in my case there is a fixed response frequency). When the relative error is small enough for 5 or 10 consecutive periods, that point can be considered a good cutoff. Below is a Matlab script I used; it worked well for my case. function [t1,y1,t2,y2]=autocut(f1,t1,y1,t2,y2) errlim=10e-2; cut1=findcut(f1,t1,y1,errlim); cut2=findcut(f1,t2,y2,errlim); cutp=min([cut1 cut2]); t1=t1(cutp:end); t2=t2(cutp:end); y1=y1(cutp:end); y2=y2(cutp:end); end function cutnt=findcut(f1,t,y,errlim) dt=t(20)-t(19); T1n=ceil(1/f1/dt)*2;%T1n points in one period, number of periods for A=(max-min)/2 tn=length(t); nperiod=floor(tn/(T1n)); %% Ast is the amplitude of the last 5 periods with steady vibration tstartn=(nperiod-5-1)*T1n+1;tendn=nperiod*T1n; Ast=(max(y(tstartn:tendn))-min(y(tstartn:tendn)))*0.5; %% if the relative error is continuously small for 5 periods, s=0;%count consecutive periods that satisfy the condition; 5 satisfying periods define the steady state for i1=1:nperiod % tstartn=(i1-1)*T1n+1; tendn=i1*T1n; yamp(i1)=(max(y(tstartn:tendn))-min(y(tstartn:tendn)))*0.5; err(i1)=abs((yamp(i1)-Ast)/yamp(i1));%relative error if err(i1)<errlim s=s+1; if s==5 break end else s=0; end end cutnt=tendn; end
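For readers not using Matlab, the same amplitude-comparison idea can be sketched in plain Python; the synthetic test signal, sample rate, and the 5-consecutive-period criterion below are illustrative assumptions, not a line-by-line translation of the script above.

```python
import math

def find_cutoff(y, points_per_period, err_lim=0.1, needed=5):
    """Return the sample index after which the signal is steady.

    The half peak-to-peak amplitude of each period is compared with
    the amplitude of the last five periods (assumed steady); the cut
    is placed after `needed` consecutive periods fall within
    `err_lim` relative error of that reference amplitude.
    """
    n_periods = len(y) // points_per_period
    tail = y[(n_periods - 5) * points_per_period : n_periods * points_per_period]
    a_steady = (max(tail) - min(tail)) / 2
    streak = 0
    for i in range(n_periods):
        chunk = y[i * points_per_period : (i + 1) * points_per_period]
        amp = (max(chunk) - min(chunk)) / 2
        if abs(amp - a_steady) / amp < err_lim:
            streak += 1
            if streak == needed:
                return (i + 1) * points_per_period
        else:
            streak = 0
    return 0

# Synthetic signal: a decaying transient on top of a steady sine.
dt = 0.01                                  # sample spacing [s]
f1 = 2.0                                   # steady response frequency [Hz]
t = [k * dt for k in range(5000)]
y = [math.sin(2 * math.pi * f1 * tk) + 3 * math.exp(-tk) for tk in t]
points_per_period = round(1 / (f1 * dt))   # 50 samples per period
cut = find_cutoff(y, points_per_period)
print("cutoff at t =", cut * dt, "s")      # everything before this is discarded
```

The idea is identical to the script above: the last few periods define the steady amplitude, and the cutoff lands after the first run of periods whose amplitude stays within the tolerance.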
{ "domain": "dsp.stackexchange", "id": 4746, "tags": "matlab" }
Average distance between an object and the body it's orbiting over time
Question: I became interested in finding the average distance of an orbiting object over time from its parent, expressed mathematically as $\frac{\int_{0}^{T}r\left(t\right)dt}{T}$, where $T$ is the orbital period and $r(t)$ is the orbital distance at a given point in time. After googling didn't work, I wrote some code to find this number at different orbital apoapsides, and plotted it with the orbital apoapsis (the periapsis was always 1), hoping the equation would reveal itself that way. Alas, it did not. Does anyone here know the answer? Answer: The time-averaged distance over the orbital period $T$ is $$\langle r \rangle \equiv \frac1T \int_0^T r(t)\,dt = a \left( 1+\frac12 e^2 \right),\tag1$$ where $a$ is the semi-major axis of the orbit and $e$ the eccentricity. Evaluating this integral is non-trivial. First, there is no simple expression for $r(t)$ to use! But there is a simple expression for $r(\theta)$, where $\theta$ is the angle around the orbit, namely $$r = \frac{a(1-e^2)}{1+e\cos\theta}.\tag2$$ (This is Kepler's First Law). So the trick for evaluating the integral for the time-average is to use $$dt=\frac{d\theta}{\dot\theta}\tag3$$ to convert it from an integral over $t$ to an integral over $\theta$: $$\langle r \rangle = \frac1T \int_0^{2\pi} \frac{r(\theta)}{\dot\theta}\,d\theta.\tag4$$ Clearly for this to work, we need to be able to express $\dot\theta$ in terms of $\theta$. We can do this using Kepler's Second Law, which says that the rate at which the orbital body sweeps out area is constant: $$\frac{dA}{dt} = \text{const} = \frac{A}{T}.\tag5$$ Geometrically, the area of the infinitesimal triangle formed by the orbital segment $r\,d\theta$ at distance $r$ is $$dA=\frac12 r^2 d\theta,\tag6$$ so $$\dot\theta = \frac{2}{r^2}\frac{dA}{dt} = \frac{2}{r^2}\frac{A}{T}.\tag7$$ Furthermore, the geometrical area of the elliptical orbit is $$A = \pi ab = \pi a^2(1-e^2)^{1/2}\tag8$$ where $$b = a(1-e^2)^{1/2}\tag9$$ is the semi-minor axis. 
Thus we have $$\dot\theta = \frac{2}{r^2}\frac{\pi a^2(1-e^2)^{1/2}}{T}.\tag{10}$$ Since we know $r$ as a function of $\theta$, we now also know $\dot\theta$ as a function of $\theta$. Putting this together, we have $$\frac{r}{\dot\theta} = \frac{Tr^3}{2\pi a^2(1-e^2)^{1/2}}\tag{11}$$ and $$\langle r \rangle = \frac{1}{2\pi a^2(1-e^2)^{1/2}}\int_0^{2\pi}r(\theta)^3\,d\theta = \frac{a(1-e^2)^{5/2}}{2\pi}\int_0^{2\pi}\frac{d\theta}{(1+e\cos\theta)^3}.\tag{12}$$ The integral can be done by contour integration (or, more simply, by a computer algebra system) and evaluates to $$\int_0^{2\pi}\frac{d\theta}{(1+e\cos\theta)^3} = \frac{\pi(2+e^2)}{(1-e^2)^{5/2}},\tag{13}$$ giving $$\langle r \rangle =a \left( 1+\frac12 e^2 \right).\tag{14}$$
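As a numerical sanity check of the closed form in equation (14) (not part of the original answer; the midpoint-rule integrator below is an illustrative sketch), one can evaluate the theta-integral of equation (12) directly and compare with a(1 + e²/2):

```python
import math

def time_averaged_distance(a, e, n=20000):
    """<r> = a (1-e^2)^(5/2) / (2 pi) times the integral of
    dtheta / (1 + e cos(theta))^3 over [0, 2 pi]  (eq. 12),
    evaluated with a midpoint rule (very accurate for smooth
    periodic integrands)."""
    h = 2 * math.pi / n
    integral = h * sum(1.0 / (1.0 + e * math.cos((k + 0.5) * h)) ** 3
                       for k in range(n))
    return a * (1.0 - e * e) ** 2.5 * integral / (2.0 * math.pi)

# Compare the numerical average with the closed form a(1 + e^2/2).
for e in (0.0, 0.2, 0.5, 0.9):
    print(e, time_averaged_distance(1.0, e), 1.0 + 0.5 * e * e)
```

Even at e = 0.9, where the integrand is strongly peaked near theta = pi, the two agree to many digits, confirming the contour-integral result of equation (13).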
{ "domain": "physics.stackexchange", "id": 89259, "tags": "newtonian-gravity, orbital-motion" }
UDP image livestream from Android device to C# desktop application
Question: After searching a lot, on how to do it and not finding any good solutions, I implemented my own UDP livestream from an Android device to a C#/WPF desktop application. It works, however, since I get complete JPEG images from the camera2 API each frame in android, I need to wait until I have data for the complete image before I can display it (instead of updating image areas). All in all, slicing up the images and putting them together on the other end is a pain. I ended up buffering up to 4 images, always discarding the oldest one as new ones come along. I still have to implement connection losses and so on... Anyway, here is the Android side (kotlin): // gets called as soon as a new image is available //livestreamSocket is a DatagramSocket, connection is basically set up at this point. fun sendLivestreamImage(img: Image) { if(livestreamSocket.isClosed) return /** Protocol: [8bytes timestamp][4byte imagesize][4bytes startindex][4bytes payloadlength][x bytes payload] */ var buffer = img.planes[0].buffer var imgSize = buffer.remaining() var payloadSize = liveStreamPacketSize - livestreamHeaderSize var numbPackets = (imgSize / payloadSize) + 1 //Log.d(TAG, "size: ${imgSize}, numbPackets: ${numbPackets}") for(i in 1..numbPackets){ if(i < numbPackets){ var bytes = ByteArray(liveStreamPacketSize) var startIndex = buffer.position() var headbuffer = ByteBuffer.allocate(livestreamHeaderSize).putLong(img.timestamp).putInt(8, imgSize).putInt(12, startIndex).putInt(16, payloadSize) headbuffer.rewind() headbuffer.get(bytes, 0, livestreamHeaderSize) buffer.get(bytes, livestreamHeaderSize, payloadSize) var packet = DatagramPacket(bytes, bytes.size, udpLivestreamAddress, udpLivestreamPort) livestreamSocket.send(packet) } else { payloadSize = buffer.remaining() var startIndex = buffer.position() var bytes = ByteArray(payloadSize + livestreamHeaderSize) var headbuffer = ByteBuffer.allocate(livestreamHeaderSize).putLong(img.timestamp).putInt(8, imgSize).putInt(12, 
startIndex).putInt(16, payloadSize) headbuffer.rewind() headbuffer.get(bytes, 0, livestreamHeaderSize) buffer.get(bytes, livestreamHeaderSize, payloadSize) var packet = DatagramPacket(bytes, bytes.size, udpLivestreamAddress, udpLivestreamPort) livestreamSocket.send(packet) } } } On the C# side I have the following function listening to the UDP connection: /// <summary> /// waits and receives livestream images /// </summary> public async Task ReceiveLivestream(IProgress<BitmapImage> imageReceivedProgress) { var udpEndpoint = new IPEndPoint(IPAddress.Any, GenDefInt.UdpLiveStreamPort); liveStreamReceiver = new UdpClient(udpEndpoint); var imgBuilder = new LivestreamImageBuilder(); while (true) { var recv = await liveStreamReceiver.ReceiveAsync(); long timestamp = IPAddress.NetworkToHostOrder(BitConverter.ToInt64(recv.Buffer, 0)); int imageSize = IPAddress.NetworkToHostOrder(BitConverter.ToInt32(recv.Buffer, 8)); int startIndex = IPAddress.NetworkToHostOrder(BitConverter.ToInt32(recv.Buffer, 12)); int payloadLength = IPAddress.NetworkToHostOrder(BitConverter.ToInt32(recv.Buffer, 16)); var img = imgBuilder.CreateImageFromUdpPackets(timestamp, imageSize, startIndex, payloadLength, recv.Buffer); if (img == null) continue; else imageReceivedProgress.Report(img); //reports the updated image back to the viewmodel of the application, where it will update the GUI } } The important parts are happening in the LivestreamImageBuilder class: public class LivestreamImageBuilder { #region Private Fields private byte[][] imageBuffers = new byte[4][]; private int[] copiedBytes = new int[4]; private long[] timeStamps = new long[4]; #endregion #region Public Properties #endregion #region Constructors #endregion #region Private Methods /// <summary> /// returns a BitmapImage from a full imagebuffer bytearray /// </summary> /// <param name="imgBuffer"></param> /// <returns></returns> private BitmapImage GetImageFromBuffer(byte[] imgBuffer) { var img = new BitmapImage(); using (MemoryStream 
ms = new MemoryStream(imgBuffer)) { img.BeginInit(); img.CacheOption = BitmapCacheOption.OnLoad; img.StreamSource = ms; img.EndInit(); } return img; } #endregion #region Public Methods /// <summary> /// Manages and fills up to four image buffers with data from the latest udp packet /// if the respective image buffer is full, returns a BitmapImage, else returns null /// </summary> /// <param name="timeStamp"></param> /// <param name="imageSize"></param> /// <param name="startIndex"></param> /// <param name="payloadLength"></param> /// <param name="buffer"></param> /// <returns></returns> public BitmapImage CreateImageFromUdpPackets(long timeStamp, int imageSize, int startIndex, int payloadLength, byte[] buffer) { for (uint i = 0; i < 4; i++) { if (timeStamp == timeStamps[i]) { Buffer.BlockCopy(buffer, 20, imageBuffers[i], startIndex, payloadLength); copiedBytes[i] += payloadLength; if (copiedBytes[i] >= imageSize) { return GetImageFromBuffer(imageBuffers[i]); } else return null; } } //find oldest buffer int oldest = 0; for (int i = 0; i < timeStamps.Length; i++) { if (timeStamps[i] < timeStamps[oldest]) { oldest = i; } } timeStamps[oldest] = timeStamp; imageBuffers[oldest] = new byte[imageSize]; copiedBytes[oldest] = 0; Buffer.BlockCopy(buffer, 20, imageBuffers[oldest], startIndex, payloadLength); copiedBytes[oldest] += payloadLength; if (copiedBytes[oldest] >= imageSize) { return GetImageFromBuffer(imageBuffers[oldest]); } else return null; } #endregion } Again, it works. 
I just found it weird that I couldn't find solutions to this (I'm sure somebody has already solved this in a better, more efficient way), so I would appreciate all kinds of suggestions for improvements, errors I overlooked or any other kind of input :) Obviously, if there are questions concerning the code, please just ask away -> I'm a bit lazy with comments^^ Answer: I have no experience with Kotlin, so I'll purely be focusing on the C# part. Empty #regions: Delete them, they just take up unnecessary space. In the same vein, I personally add an empty line after the #region and just before #endregion; it makes it more readable (IMO). Naming: Microsoft's own naming conventions aren't complete, so I follow what everyone else, and even Microsoft themselves, does. Namely: Prefix private members in a class with an underscore. This turns your LivestreamImageBuilder member declarations into this: private byte[][] _imageBuffers = new byte[4][]; private int[] _copiedBytes = new int[4]; private long[] _timeStamps = new long[4]; This then allows you to use the proper name as the parameter in GetImageFromBuffer, i.e. private BitmapImage GetImageFromBuffer(byte[] imageBuffer) Even though MS's naming conventions are sparse, they tell us: DO NOT use abbreviations or contractions as part of identifier names. So it depends on how pedantic you want to be. Is img an abbreviation? Yes. Does basically everyone understand that it means image? Also yes, but you should still technically not call it img. Documentation: You are using XMLDocs, but they're not quite complete. For example, you have zero documentation concerning parameters, which I think is quite important. Especially when they could be confusing, like long timeStamp. Why is it a long and not a DateTime? What are acceptable values? Or is anything not acceptable? Also there is no documentation for what a method can return.
In your CreateImageFromUdpPackets method I think it's especially important, because it can return null, a simple Returns the newest Image received, if none are completely received returns null would suffice. Comments: I know, I know, they're a PITA, but important. Some code blocks are quite difficult to understand, for example: Buffer.BlockCopy(buffer, 20, imageBuffers[i], startIndex, payloadLength); copiedBytes[i] += payloadLength; if (copiedBytes[i] >= imageSize) { return GetImageFromBuffer(imageBuffers[i]); } // ... Could use a comment, something like Copies newly received image into the local buffer, and returns the image if it's complete For loop: Currently you're iterating through your timeStamps in a for loop with a fixed max, this should be replaced by this: for (int i = 0; i < timeStamps.Length; i++) { if (timeStamp == timeStamps[i]) // ... } Because what if you decide you want to keep 5 images around? Or 3? Or however many. If else statements: You have multiple unnecessary else statements, they should be removed. So turn if (copiedBytes[i] >= imageSize) { return GetImageFromBuffer(imageBuffers[i]); } else return null; Into if (copiedBytes[i] >= imageSize) return GetImageFromBuffer(imageBuffers[i]) // The curly braces can optionally be left out return null; Custom network data class: Why not use a custom network data class instead of passing 5 parameters to your CreateImageFromUdpPackets? You could even implement methods on it construct it directly from a byte array. 
Like so: public class LivestreamImagePacket { public long Timestamp {get; set;} public int ImageSize {get; set;} public int StartIndex {get; set;} public int PayloadLength {get; set;} public LivestreamImagePacket(long timestamp, int imageSize, int startIndex, int payloadLength) { Timestamp = timestamp; ImageSize = imageSize; StartIndex = startIndex; PayloadLength = payloadLength; } public static LivestreamImagePacket FromBytes(byte[] bytes) { long timestamp = IPAddress.NetworkToHostOrder(BitConverter.ToInt64(bytes, 0)); int imageSize = IPAddress.NetworkToHostOrder(BitConverter.ToInt32(bytes, 8)); int startIndex = IPAddress.NetworkToHostOrder(BitConverter.ToInt32(bytes, 12)); int payloadLength = IPAddress.NetworkToHostOrder(BitConverter.ToInt32(bytes, 16)); return new LivestreamImagePacket(timestamp, imageSize, startIndex, payloadLength); } } Then you can use it like so: while (true) { var recv = await liveStreamReceiver.ReceiveAsync(); var packet = LivestreamImagePacket.FromBytes(recv.Buffer); var img = imgBuilder.CreateImageFromUdpPackets(packet, recv.Buffer); } Of course you'd need to change the CreateImageFromUdpPackets signature to this: CreateImageFromUdpPackets(LivestreamImagePacket packet, byte[] buffer) and replace the usages of timestamp with packet.Timestamp
{ "domain": "codereview.stackexchange", "id": 41113, "tags": "c#, android, kotlin, udp" }
Why are isomers difficult to separate?
Question: I recently learned that attempts to compare the spectra of two isomers of $\ce{C_3H_3^+}$ were frustrated by the difficulty of separating the two species. What makes these isomers difficult to separate? Answer: It would be important to know what kind of "separation" you are talking about -- spectral separation, which would just mean suppression of the spectroscopic response from undesired species, or physical separation, where you actually remove those species from your sample? The problem is that not only may your sample contain different isomers, but these may actually interconvert through rearrangement reactions after ionization. This is a very common phenomenon in mass spectrometry, for example. If you want to single out the spectral signals of a certain isomer in a bulk sample, the simplest method would be to take the spectra of all unwanted components (from the literature or, preferably, separate measurements) and subtract them from your "mixed" spectrum. You would need information about the composition of your sample, though, or at least some indicator which tells you how much and what you have to subtract. Physically separating isomers may also be possible, e.g. if one of the isomers can be selectively ionized by laser irradiation, then separated from the remaining non-ionized sample. Such techniques are actually somewhat common in physical chemistry (not so much in everyday analytics, I think), although their applicability does depend heavily on the compounds and isomers under investigation. And there certainly are cases where different isomers are just too difficult to separate, or interconvert so readily that one structure virtually doesn't exist without the other.
{ "domain": "chemistry.stackexchange", "id": 220, "tags": "carbocation" }
How many photons exist in other-dimensional spaces?
Question: As I understand, there are 2 types of photons in our (3+1) space with photon helicity $\pm 1$. How many photons exist in other spaces, such as (2+1) or (1+1)? Can we apply the same to gravitons? Answer: In the space (2+1) we introduce the tensor \begin{equation} F^{\mu \nu}=\left(\begin{array}{ccc} 0 & -E^x / c & -E^y / c \\ E^x / c & 0 & -B \\ E^y / c & B & 0 \end{array}\right) \end{equation} As a consequence, the magnetic field is no longer a vector, but a scalar. Maxwell's equations are then written in the form \begin{equation} \begin{aligned} & \partial_\nu F^{\mu \nu}=\mu_0 j^\mu, \\ & \partial_\nu \tilde{F}^\nu=0, \end{aligned} \end{equation} where $j^\mu$ is the surface current. By differentiating the first equation we can obtain an analogue of the continuity equation $$ \partial_\mu j^\mu=0. $$ Maxwell's complete equations would look like: \begin{equation} \begin{aligned} & \boldsymbol{\nabla} \cdot \boldsymbol{E}=\frac{\sigma}{\epsilon_0}, \\ & \boldsymbol{\nabla}_{\perp} \cdot \boldsymbol{E}=\frac{\partial B}{\partial t}, \\ & \boldsymbol{\nabla}_{\perp} B=\mu_0 \boldsymbol{j}+\frac{1}{c^2} \frac{\partial \boldsymbol{E}}{\partial t}, \end{aligned} \end{equation} where \begin{equation} \nabla_{\perp}=\left(\partial_y,-\partial_x\right). \end{equation} From this, in a similar way as in our space, the wave equations are obtained: \begin{equation} \nabla^2 \boldsymbol{E}-\frac{1}{c^2} \partial_{t t}^2 \boldsymbol{E}=\frac{1}{\epsilon_0} \nabla \sigma+\mu_0 \partial_t \boldsymbol{j}, \end{equation} \begin{equation} \nabla^2 B-\frac{1}{c^2} \partial_{t t}^2 B=\mu_0 \nabla_{\perp} \cdot j \end{equation} If there are no charges, then the equations are independent of each other and you can get a whole set of photons with different phase shifts of the two waves.
In space (1+1), the EM field tensor will look like: \begin{equation} F^{\mu \nu}=\left(\begin{array}{cc} 0 & -E \\ E & 0 \end{array}\right) \end{equation} and Maxwell's equations will be \begin{equation} \frac{\partial E}{\partial x}=0 \end{equation} The wave equation will be \begin{equation} -\frac{\partial^2 E}{\partial t^2}+\frac{\partial^2 E}{\partial x^2}=0 \end{equation} Given the constant field, photons will not exist in (1+1) space.
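A complementary way to state these results, which also addresses the graviton part of the question, is the standard little-group counting (an added assumption here, not derived from the Maxwell equations above): a massless photon in d-dimensional spacetime has d − 2 transverse polarizations, and a massless graviton has d(d − 3)/2. This reproduces two photon helicity states in (3+1), one polarization in (2+1), and none in (1+1), and for gravitons gives two states in (3+1) and no propagating states in (2+1) or (1+1). Hypothetical helper functions, for illustration only:

```python
def photon_polarizations(d):
    """Transverse polarizations of a massless vector field in d spacetime dims."""
    return max(d - 2, 0)

def graviton_polarizations(d):
    """Physical polarizations of a massless graviton: d(d-3)/2, floored at 0."""
    return max(d * (d - 3) // 2, 0)

for d in (2, 3, 4):   # i.e. (1+1), (2+1), (3+1)
    print(d, photon_polarizations(d), graviton_polarizations(d))
```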
{ "domain": "physics.stackexchange", "id": 99721, "tags": "general-relativity, special-relativity, photons, curvature, helicity" }
Nuclear fusion reactors and neutrons
Question: The majority of the energy produced by nuclear fusion is carried away by neutrons or protons ejected from the reaction products. Given that the dominant fusion reaction today is deuterium + tritium, which produces He and a neutron (a neutron that carries most of the energy from the fusion), what do current experimental fusion reactors do to harness the energy from said neutrons? This question is asked in the context that neutrons cannot be controlled using electromagnetic forces. Hence, the energy contained in neutrons would (in my mind, at least) be difficult to capture without resorting to some sort of fission technique. Answer: This is known as the "first wall" problem of fusion: what do you wrap a fusion reactor with (the first wall), so as to capture the neutron energy without being destroyed by the intense neutron flux? More specifically, the objective of the first wall is to rattle the neutron flux around so as to "thermalize" the neutrons (transfer their kinetic energy into lattice vibrations which show up as heat, which can then be carried off by some heat transfer medium to boil water into steam, etc.) without being ruined (from a materials science standpoint) by damage from the neutrons. Doing so is essential from an energy balance standpoint to make the fusion reaction products all "pay their way" towards breakeven by harvesting their kinetic energy before they zoom right out of the reactor volume and escape. This remains an unsolved problem in fusion technology. For example, superalloy metals get their constituent atoms knocked out of their lattice positions by neutron impacts, which interferes with ductility mechanisms (rendering the metal incapable of exhibiting resistance to thermal and mechanical shock). In addition, neutron capture leads to transmutation of the alloy constituents into new elements which lack high-temperature corrosion resistance, while also generating hydrogen atoms within the lattice which lead to swelling and embrittlement.
This is an extremely difficult business!
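The premise of the question, that the D-T neutron carries most of the released energy, follows from momentum conservation: the two products fly apart back-to-back from (nearly) rest, so the kinetic energy splits in inverse proportion to mass. A rough sketch, using the textbook Q-value of 17.6 MeV and integer mass numbers as approximations:

```python
# D + T -> He-4 + n releases Q of about 17.6 MeV. With the reactants
# nearly at rest the products carry equal and opposite momentum p, and
# since E = p^2 / (2m), the energies split as E_n / E_He = m_He / m_n.
Q = 17.6                    # MeV, approximate textbook value
m_n, m_he = 1.0, 4.0        # mass numbers, a crude approximation

e_neutron = Q * m_he / (m_he + m_n)   # ~14.1 MeV: about 80% rides on the neutron
e_helium = Q * m_n / (m_he + m_n)     # ~3.5 MeV stays with the charged alpha
print(e_neutron, e_helium)
```

The ~14.1 MeV neutron is exactly the particle the first wall must thermalize, while the charged ~3.5 MeV alpha can at least be confined magnetically.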
{ "domain": "physics.stackexchange", "id": 91661, "tags": "electromagnetism, energy, nuclear-physics, fusion, neutrons" }
Check if user selects correct Spanish word article
Question: All Spanish nouns have a gender (masculine [el] or feminine [la]; e.g. la casa, el perro). I'm using some javascript and jquery to display a random word from a list of nouns (2D array), as well as buttons corresponding to the two Spanish articles ("el" or "la"). If the user clicks on the correct article for the word, a new one loads; else, the user sees an alert and is prompted to choose the correct article before loading a new word. I made this work on codepen, but I wasn't able to get it to work correctly without reloading the page. All feedback will be well received. $(document).ready(function() { setWord(); function setWord() { //returns either 0 or 1 var elOrLa = Math.round(Math.random()); //wordList[0] contains 'la' nouns; wordList[1] contains those 'el' nouns var wordList = [["casa", "comida", "chica"],["perro", "amigo", "muchacho"]]; // Return random int between min (included) and max (excluded) function getRandomInt(min, max) { min = Math.ceil(min); max = Math.floor(max); return Math.floor(Math.random() * (max - min)) + min; } //maximum length of each subarray var laLength = wordList[0].length; var elLength = wordList[1].length; //random index for subarrays var randomLaIndex = getRandomInt(0, laLength); var randomElIndex = getRandomInt(0, elLength); var article1 = "la "; var article2 = "el "; var word; var articlePlusWord; //sets random word and articlePlusWord if (elOrLa === 0) { word = wordList[0][randomLaIndex]; articlePlusWord = article1.concat(word); } else { word = wordList[1][randomElIndex]; articlePlusWord = article2.concat(word); } $("#test-word").text(word); if (elOrLa === 0) { $("#la-btn").on("click", function() { location.reload(true); }); $("#el-btn").on("click", function() { alert("TOMA NOTA: ".concat(articlePlusWord)); $("#el-btn").fadeOut(); $("#la-btn").css({"background":"limegreen"}); $("#test-word").addClass('bg-danger').removeClass('bg-warning'); }); } else { $("#el-btn").on("click", function() { location.reload(true); }); 
$("#la-btn").on("click", function() { alert("TOMA NOTA: ".concat(articlePlusWord)); $("#la-btn").fadeOut(); $("#el-btn").css({"background":"limegreen"}); $("#test-word").addClass('bg-danger').removeClass('bg-warning'); }); } }; }); <script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.1.0/jquery.min.js"></script> <h1 class="text-center"><span class="bg-warning" id="test-word"></span></h1> <br> <div class="container"> <button type="button" class="btn btn-primary btn-lg" id="el-btn">EL</button> <button type="button" class="btn btn-primary pull-right btn-lg" id="la-btn">LA</button> </div> Answer: Message for moderators As the OP already noticed, there is a strange issue when executing this in the context of the SO snippet: uncaught exception: unknown (can't convert to string) I experienced the same with my totally refactored version, while it works without error in the usual context of my browser. I can't see a good way to inquire about that, so TIA if one of the SO developers could look at it and explain it! Now the review This is properly written, and I see no big flaws. But the code might be significantly reduced, by merely avoiding useless parts and repetition. Useless parts consist of preparing data for both "la" and "el" cases, while each execution will only use one of those two articles. So I suggest to prepare data for the involved one only, as soon as it was chosen. Repetition appears in the click events handling, where you precisely define what must happen for each button, and in both cases depending it's the right one or not. There too, we can compact this by acting for the two buttons at once, and evaluating if it's right during the flow. 
Here is a working snippet based on yours, where I included these suggestions: function setWord() { // Return random int between min (included) and max (excluded) function getRandomInt(min, max) { min = Math.ceil(min); max = Math.floor(max); return Math.floor(Math.random() * (max - min)) + min; } const words = [ ["casa", "comida", "chica"], ["perro", "amigo", "muchacho"] ]; const articles = ['la', 'el']; var $buttons = $('.btn'); var elOrLa = Math.round(Math.random()); var article = articles[elOrLa]; var $goodBtn = $('#' + article + '-btn'); var $word = $('#test-word'); var word = words[elOrLa][getRandomInt(0, words[elOrLa].length)]; $word.text(word); $buttons.click(function() { if (this.id == $goodBtn[0].id) { location.reload(true); } else { alert('TOMA NOTA: ' + article + ' ' + word); $buttons.not($goodBtn).fadeOut(); $goodBtn.css({'background': 'limegreen'}); $word.addClass('bg-danger').removeClass('bg-warning'); } }); } setWord(); <script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script> <h1 class="text-center"><span class="bg-warning" id="test-word"></span></h1> <br> <div class="container"> <button type="button" class="btn btn-primary btn-lg" id="el-btn">EL</button> <button type="button" class="btn btn-primary pull-right btn-lg" id="la-btn">LA</button> </div> Apart from the changes exposed above, you can see that: I dropped the $(document).ready() wrapper: in most cases (and sure here) it's not needed, as soon as the script appears after the HTML description part I also moved the getRandomInt() function, which is now the 1st thing encountered in the setWord() function: this is a good habit to take now, since it becomes mandatory if you're using "use strict"; (see this page)
{ "domain": "codereview.stackexchange", "id": 23474, "tags": "javascript, jquery, html, quiz" }
How strong is spider silk?
Question: Spider silk is pretty darn strong and all sorts of comparisons are made to steel. I'm more curious about the various moduli of spider silk and how it compares to other materials. What is the Young's modulus of spider silk? What is the bulk modulus of spider silk? What is the shear modulus of spider silk? In general, how do those moduli describe the material properties of spider silk? A simpler way to ask the question: what does it mean when it is said that spider silk is strong? Answer: Wikipedia has a remarkably well-cited article on the subject of silks and their various biological isoforms and mechanical properties. With respect to tensile strength, spider's silk is comparable to high-grade steel. Explicitly, dragline silk was measured by Pérez-Rigueiro et al. to be 600 ± 50 MPa, with a comparison to silkworm silk. Reference: Pérez-Rigueiro J, Elices M, Llorca J, Viney C. 2001. Tensile properties of Argiope trifasciata drag line silk obtained from the spider’s web. Journal of Applied Polymer Science 82: 2245–2251 [pdf]
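For a feel of what a tensile strength of 600 MPa means, here is a quick back-of-the-envelope calculation in Python (the 1 mm thread diameter is a hypothetical choice for illustration; real dragline fibers are only microns thick):

```python
import math

sigma = 600e6     # tensile strength of dragline silk, Pa (Perez-Rigueiro et al.)
diameter = 1e-3   # hypothetical 1 mm thread, m (real fibers are micron-scale)

area = math.pi * (diameter / 2) ** 2  # cross-sectional area, m^2
breaking_force = sigma * area         # maximum force before failure, N
mass_held = breaking_force / 9.81     # equivalent hanging mass, kg

print(f"breaking force = {breaking_force:.0f} N (about {mass_held:.0f} kg hanging mass)")
```

So a (hypothetical) millimeter-thick strand could hold roughly the weight of an adult human before snapping, which is what "strong" means here: high stress at failure, independent of the moduli, which instead describe stiffness.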
{ "domain": "biology.stackexchange", "id": 197, "tags": "zoology" }
Why is this not a regular language
Question: So I recently had a problem where I had to create a regular language. After consulting my professor on my solution, he told me it was close to correct but to check my definition of a regular language. I am somewhat lost as to why my solution is not regular. From my understanding it meets the requirements of being a right-regular grammar, since there is only one non-terminal on the right side of the expression and the non-terminal is the rightmost symbol on the right side of the expression. Are pipe symbols not allowed? Answer: The production $T_1 \to U_1$ is not right-regular. It does not fall into the three allowed categories of rules. See Wikipedia on right-regular grammars. If you have a non-terminal on the right-hand side, it needs to be preceded by a terminal.
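To make the three allowed rule shapes concrete, here is a small Python sketch with an illustrative grammar (note that a pipe, as in A -> a | bB, is just shorthand for several separate productions, so each alternative gets checked on its own):

```python
def is_right_regular(production, nonterminals):
    """Check one production (lhs, rhs) against the three strict
    right-regular forms: A -> a, A -> aB, or A -> epsilon."""
    lhs, rhs = production
    if lhs not in nonterminals:
        return False                 # left side must be a single non-terminal
    if rhs == "":
        return True                  # A -> epsilon
    if len(rhs) == 1:
        return rhs not in nonterminals              # A -> a (a lone non-terminal fails)
    if len(rhs) == 2:
        return rhs[0] not in nonterminals and rhs[1] in nonterminals  # A -> aB
    return False

nts = {"S", "T", "U"}
print(is_right_regular(("S", "aT"), nts))  # terminal then non-terminal: allowed
print(is_right_regular(("T", "U"), nts))   # bare non-terminal, like T1 -> U1: rejected
```

The second call is exactly the situation in the question: a rule whose right-hand side is a non-terminal alone fails the check because no terminal precedes it.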
{ "domain": "cs.stackexchange", "id": 2085, "tags": "regular-languages, context-free, automata" }
Find if a car with booster and varying throttle acceleration can reach target
Question: This is a personal hobby project; I'm in control of all decisions made regarding my bot, but I can't change anything about the physics of the game (offline-only, and it's approved by the game devs - so don't worry!) I have a car, and the car is facing its target. All it needs to do is drive forwards; however, this is much more complicated than just a constant forwards acceleration. The problem: The car can only gain acceleration via the throttle when it's on the ground (kind of, more on that in a bit) but it also has A FREAKIN' ROCKET on the back, which propels it forwards at a constant rate in the air and on the ground. The rocket needs fuel, which goes from 0-100 (easiest to think of this as a percent - however, if the car has more than 100% rocket fuel, then due to the systems in place this signifies that the car has unlimited rocket fuel). The throttle acceleration is dependent on the speed of the car. The faster it's going, the less acceleration pressing the throttle provides, until the acceleration reaches 0. At a certain point, the car jumps and can no longer accelerate using the throttle, but due to some magic, the car actually gets a bit of acceleration (idk, it's one of those bugs-turned-features and it needs to be accounted for). The rocket (if there's rocket fuel) can also speed up the car in the air, and by a lot. I don't care about the z axis, as that's handled elsewhere. With this knowledge at hand, I want to figure out ahead of time if the car can travel a certain distance in a certain time with a certain jump time. Initially, the car may or may not be at rest, it may be traveling forwards (positive velocity) or backwards (negative velocity), and the car may or may not have rocket fuel. The jump time may also be 0, which signifies that there is no jump for the car to make. Program includes: #include <math.h> In my program, rocket fuel is referred to as 'boost'. 
Here are my constants - I've added comments to clear some things up: static const double simulation_dt = 1. / 20.; // Delta time that the simulation runs at static const double physics_dt = 1. / 120.; // Delta time that the actual game runs at static const double boost_consumption = 100. * (1. / 3.); static const double max_boost = 100; // all distance measurements are depicted in centimeters static const double aerial_throttle_accel = 100. * (2. / 3.); // due to some magic, the car actually gets a bit of acceleration - don't question it static const double brake_force = 3500; // when trying to accelerate in the opposite direction that you're traveling, your speed decreases by this amount every second static const double min_simulation_distance = 25; // If we get closer to the target than this, we consider the simulation done static const double max_speed = 2300; static const double max_speed_no_boost = 1410; // the relationship between velocity and throttle acceleration is mostly linear static const double start_throttle_accel_m = -36. / 35.; static const double start_throttle_accel_b = 1600; static const double end_throttle_accel_m = -16; static const double end_throttle_accel_b = 160; Converting velocity to the acceleration given by the throttle: double throttle_acceleration(double *car_velocity_x) { double x = fabs(*car_velocity_x); if (x >= max_speed_no_boost) return 0; // use y = mx + b to find the throttle acceleration if (x < 1400) return start_throttle_accel_m * x + start_throttle_accel_b; x -= 1400; // there's a very sharp dropoff here that brings the acceleration to 0 by the time the velocity is 1410 return end_throttle_accel_m * x + end_throttle_accel_b; } Here's my code for solving this problem, which is horribly inefficient but it does get the job done... it works by doing a sort of simulation, processing changes tick-by-tick with no optimization. 
_Bool can_reach_target_forwards(double *max_time, double *jump_time, double *boost_accel, double *distance_remaining, double *car_speed, unsigned char *car_boost, double *max_speed_reduction) { double v = *car_speed; double t = 0; double b = *car_boost; double d = *distance_remaining; double ba_dt = *boost_accel * simulation_dt; double ms = max_speed - ceil(*max_speed_reduction); double ms_ba_dt = ms - ba_dt; double bc_dt = boost_consumption * simulation_dt; double bk_dt = brake_force * simulation_dt; while (d > min_simulation_distance && t <= *max_time && (v <= 0 || d / v > *jump_time)) { // if we're going backwards, then apply the braking acceleration... otherwise, apply the throttle acceleration v += (v < 0) ? bk_dt : throttle_acceleration(&v) * simulation_dt; // if we have boost & we're at less than max speed if (b > bc_dt && v < ms_ba_dt) { // apply velocity from boost and reduce boost amount accordingly v += ba_dt; if (b <= max_boost) b -= bc_dt; } // subtract the proper amount of distance and add the simulation delta time to the total time d -= v * simulation_dt; t += simulation_dt; } double th_dt = aerial_throttle_accel * simulation_dt; double ms_th_dt = ms - th_dt; // this is basically the same as above, but it's for after the car jumps (if it does at all) while (d > min_simulation_distance && t <= *max_time) { // yes, this IS max_speed, NOT max_speed_no_boost! if (v <= ms_th_dt) v += th_dt; if (b > bc_dt && v < ms_ba_dt) { v += ba_dt; if (b <= max_boost) b -= bc_dt; } d -= v * simulation_dt; t += simulation_dt; } return d <= min_simulation_distance; } I'm not very familiar with Calculus (I haven't gotten to that in school and it seems hard -_-), but I'm still wondering if there's anything that I can do that will speed up the function. Currently, it's speed is acceptable, but I would rather develop my programming/math skills and have it be exceptional. 
My bot runs at 120 tps, so this function, along with a LOT of other things, needs to run in 8 milliseconds or (preferably much) less. Above all else, the true purpose of this post is that I want to learn where I need to improve. I'm relatively new to C (only a few months of experience, and I'm self-taught) and I would also love feedback on any C standards or C programming conventions that I'm violating. If there are any tips or tricks that would make using C easier, I'm all ears! Also, no, I'm not switching away from plain C (yet, at least.) Answer: Here are some things that may help you improve your program. Eliminate unused variables Unused variables are a sign of poor quality code, and you don't want to write poor quality code. In this code, physics_dt is unused. Your compiler is smart enough to tell you about this if you ask it nicely. Pass by value rather than by pointer This code has a peculiar quirk that it uses pointers for every passed value. Unless you're planning on altering the value, plain old data like this really should just be passed by value rather than by pointer, so instead of this: double throttle_acceleration(double *car_velocity_x) { double x = fabs(*car_velocity_x); Write this: double throttle_acceleration(double car_velocity_x) { double x = fabs(car_velocity_x); The same is true for all of the other arguments. Provide complete code to reviewers This is not so much a change to the code as a change in how you present it to other people. Without the full context of the code and an example of how to use it, it takes more effort for other people to understand your code. This affects not only code reviews, but also maintenance of the code in the future, by you or by others. One good way to address that is by the use of comments. Another good technique is to include test code showing how your code is intended to be used. Consider using structures There are a great many variables and constants in this program. 
The constants are generally named well, which is good. However, I think I'd approach it a bit differently. For the throttle_acceleration, I'd make that simply acceleration and write it like this: double acceleration(double car_velocity_x) { // the relationship between velocity and throttle acceleration // is expressed as three piecewise linear equations static const struct { double min_x; // minimum x value for this equation double m; // slope of line double b; // intercept of line } equation[3] = { { 1410, 0 , 0 }, { 1400, -16 , 22560 }, { 0, -36.0 / 35.0, 1600 }, }; for (int i = 0; i < 3; ++i) { if (car_velocity_x > equation[i].min_x) { return car_velocity_x * equation[i].m + equation[i].b; } } // if the velocity is negative, apply the brakes return brake_force; } This also has the effect of simplifying the code that uses it: v += acceleration(v) * simulation_dt; Use const where practical Many of the calculated values in can_reach_target_forwards are constants. It may help the compiler create better code if you declare them as const. Even if it doesn't it makes the code more understandable to human readers. Use a real bool If you #include <stdbool.h> you can use a real bool as well as the constants true and false which can make your code a bit clearer. Don't repeat yourself As the comments in the code note, the loops before and after the jump are nearly identical. I think it would make sense to combine them. 
Here's one way to do that: bool on_ground = true; for (double t = 0; d > min_simulation_distance && t <= max_time; t += simulation_dt) { if (on_ground &= (v <= 0 || d/v > jump_time)) { v += acceleration(v) * simulation_dt; } else { // no longer on the ground if (v <= ms_th_dt) v += th_dt; } // if we have boost & we're at less than max speed if (b > bc_dt && v < ms_ba_dt) { // apply boost v += ba_dt; if (b <= max_boost) b -= bc_dt; } // subtract the proper amount of distance d -= v * simulation_dt; } Note that I have also converted while into for to make it clear which variable is being incremented. Consider a mathematical solution As you suspected, there is a more mathematical way to approach this problem. First, let's slightly reframe the problem as the question "how much time would it take to get to the target?" We can designate that time as \$t_{f}\$ where the \$f\$ signifies final. First, let's consider only deceleration from braking and acceleration from the throttle and ignore boost and jump time for the moment. \$a(v) = \left\{ \begin{array}{ll} 3500 & v < 0 \\ -\frac{36}{35}v + 1600 & 0\leq v < 1400 \\ -16v +22560 & 1400 \leq v < 1410 \\ 0 & 1410 \leq v \\ \end{array} \right. \$ Note that this is not exactly how the code is currently implemented, because at zero velocity, the current implementation actually gets acceleration due to braking which makes no physical sense. I've taken the liberty of altering this, but there is likely little noticable difference. Because only one of these is used at a time, we can break it into steps. First, consider a start with a negative velocity so that we're moving away from the target. In this phase we're just braking, so at any time \$t\$ during this phase, the velocity is: $$ v(t) = v_0 + 3500t $$ It's easy to figure out how long it will take to get to zero velocity using algebra. 
$$ 0 = v_0 + 3500t $$ $$ 3500t = -v_0 $$ $$ t = \frac{-v_0}{3500} $$ Since we've been moving away from the target, how far away is the target now? This is where calculus comes in handy, but I'll try to explain in a way that doesn't assume you already know calculus to whet your appetite for learning it (it's really not as hard as you might think). Taking \$d\$ as the distance remaining to the target, for each step in time we have \$\Delta d = -v(t)\Delta t\$ (the remaining distance shrinks while we move toward the target), and so if we also account for the initial distance \$d_0\$ we have: $$ d(t) = d_0 - \sum v(t) \Delta t $$ Expressing this in calculus notation is very similar: $$ d(t) = d_0 - \int v(t) \delta t $$ Substituting the equation above, that expands to this: $$ d(t) = d_0 - \int (v_0 + 3500t) \delta t $$ Now evaluating this integral is probably not something you've learned yet, but it's actually quite simple to evaluate integrals of simple polynomials. The answer in this case is: $$ d(t) = d_0 - v_0 t - 1750 t^2 $$ So given that the time to zero velocity is \$v_0/-3500\$ we can evaluate: $$ d\left(\frac{v_0}{-3500}\right) = d_0 - v_0\left(\frac{v_0}{-3500}\right) - 1750 \left(\frac{v_0}{-3500}\right)^2 $$ $$ = d_0 + \frac{v_0^2}{7000} $$ So now we are at zero velocity at a time and distance we can easily calculate. The next phase is where \$0\leq v < 1400\$ and we can perform a similar analysis, but because in this phase, our equation has the velocity depending on itself, the solution uses an ordinary differential equation which is often taught after introductory calculus. Without showing the derivation, the solution of \$\frac{dv}{dt} = 1600 - \frac{36}{35}v\$ is: $$ v(t) = \frac{14000}{9} + \left(v_0 - \frac{14000}{9}\right)e^{-36t/35} $$ Since we want to know when \$v(t) = 1400\$, a little algebra yields: $$ t = \frac{35}{36}\ln\left(\frac{14000 - 9v_0}{1400}\right) $$ We can do a similar exercise for each kind of acceleration and also then add in boost and jump time and eventually come up with one big equation that incorporates all of these and yields the time required to get to the destination given all of the variables. 
In short, while it's a bit tedious and involves some mathematics you might not yet have learned, there is definitely a way to solve this analytically without resorting to simulation. I hope this inspires you to keep learning more mathematics as well as helping you improve your software.
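The braking-phase results above (time \$-v_0/3500\$ to come to rest, leaving distance \$d_0 + v_0^2/7000\$) are easy to sanity-check numerically. Here is a small Python sketch, not part of the bot, with made-up numbers in the question's units, comparing the closed forms against a tiny fixed-step simulation:

```python
BRAKE = 3500.0  # braking deceleration, matching brake_force in the question

def braking_closed_form(v0, d0):
    """Time for a backwards-moving car (v0 < 0) to brake to rest, and the
    distance remaining to the target at that moment: d0 + v0**2 / 7000."""
    t = -v0 / BRAKE
    d = d0 + v0 * v0 / (2.0 * BRAKE)
    return t, d

def braking_simulated(v0, d0, dt=1e-5):
    """Step the same physics numerically: v climbs toward 0, d loses v*dt."""
    v, d, t = v0, d0, 0.0
    while v < 0.0:
        v += BRAKE * dt
        d -= v * dt
        t += dt
    return t, d

t_cf, d_cf = braking_closed_form(-500.0, 1000.0)
t_sim, d_sim = braking_simulated(-500.0, 1000.0)
print(t_cf, d_cf, t_sim, d_sim)
```

Shrinking dt makes the simulated numbers converge to the closed forms, which is the whole appeal of the analytic route: one formula replaces thousands of ticks.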
{ "domain": "codereview.stackexchange", "id": 40217, "tags": "beginner, c" }
Does audio upsampling create noise/artifacts or degrade the signal?
Question: I've read that upsampling performed in digital music playback can color the sound, produce artifacts, etc. For example, an audio file ripped from a CD might be 44.1 kHz/16-bit, and then upconverted to 48 kHz/16-bit and played via an optical digital audio output. Audiophiles say this is bad because the signal needs to be "bit perfect" to be reproduced correctly. Is this correct? My vague knowledge of DSP from grad school leads me to think that all upsampling should do is increase the bandwidth of the signal. But, since the source signal is bandlimited, there shouldn't be anything new in the added bandwidth. And I don't see why the process would color the sound. What am I missing? Answer: The up-sampling process will always change the signal in some measurable way. However, if it's done properly the changes are negligible and don't result in any audible difference. Most commercial sample rate converters (hardware or software implementations) do a really good job at this. Of course, if done badly, upsampling can result in clearly audible signal degradation. I'm not familiar with Apple's implementation but I would assume that they got this right.
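To put numbers on "negligible", here is a toy numpy sketch (not a production resampler) that converts a band-limited tone from 44.1 kHz to 48 kHz by direct Whittaker-Shannon (sinc) interpolation and measures the error against the same tone sampled natively at 48 kHz; the edges are trimmed because a finite sinc sum is least accurate there:

```python
import numpy as np

fs1, fs2, f0 = 44100, 48000, 1000.0        # source rate, target rate, tone (Hz)
n = np.arange(4000)
x = np.sin(2 * np.pi * f0 * n / fs1)       # band-limited tone sampled at 44.1 kHz

# Whittaker-Shannon interpolation: y(t) = sum_n x[n] * sinc(fs1 * t - n)
m = np.arange(int(len(n) * fs2 / fs1))
t = m / fs2                                # target-rate sampling instants
y = np.array([np.dot(x, np.sinc(fs1 * ti - n)) for ti in t])

ideal = np.sin(2 * np.pi * f0 * t)         # what sampling at 48 kHz directly gives
err = np.max(np.abs(y - ideal)[1000:-1000])  # trim edge effects of the finite sum
print(f"max interior error: {err:.2e}")
```

The interior error is tiny for a signal well inside the passband; practical converters get similar quality far more cheaply with polyphase FIR filters, and bad converters (cheap filters, truncation instead of dithering) are where the audible degradation comes from.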
{ "domain": "dsp.stackexchange", "id": 12560, "tags": "audio, sampling, music, resampling, supersampling" }
Oort cloud shape
Question: In the episode of Cosmos dedicated to cometary objects and other things, the Oort cloud was graphically rendered as a spherical distribution of dots. This surprised me as I thought that it should be somehow a disk or a torus, given the solar system's rotational momentum. Why is the Oort ensemble thought to be spherical? Is it deduced from the orbits of the various comets? I think so. Why does it not spread into a more disk-like object? Answer: I didn't really want to answer this one because there may be some new information that I'm not aware of and, to my understanding, the shape, mass and content of the Oort cloud are the subject of some ongoing study, so I invite correction. The formation of a Solar-System is pretty complicated and there may still be significant unknowns on the process and with the Oort cloud, uncertainty on the precise shape, mass, density distribution and origin, whether it formed with the solar-system or whether much of it is captured. I wanted to begin with the unknowns. We can't see the Oort cloud so we can't measure it directly. Estimates of the contents of the Oort cloud can be made by observing very long period comets that fly into the inner solar system and extrapolating their orbital periods. One problem is, that estimate is based on very eccentric orbits only, because those are the only ones we see that enter our telescope's range. This is one of my favorite minute physics videos for its simplicity and the fun diagrams they use. The gist of the simple answer is that a cloud or nebulous mass of debris and dust has a fixed angular momentum and a fixed orbital plane, so as the cloud of debris gets a push (usually from a not too far away supernova), and the debris begins to condense into a proto-solar-system, it maintains the orbital plane and angular momentum and as it condenses, it rotates very rapidly. 
The rapid rotation isn't all that relevant to your question, but that's why not all the matter can fall into the star, because there's usually too much angular momentum. The same happens with gas giant planets, which is why Jupiter, for example, has 4 large formation moons (whereas the Earth - somewhat smaller region where it formed, no formation moons, but it has an impact moon). I'm making this more wordy than it should be, but the point is, stuff orbits the sun because of angular momentum. The sum of this angular momentum has an orbital plane, and each individual object has an inclination to that orbital plane. When planets absorb the matter in their orbital regions, the up and down directions, or inclinations of the objects in orbit, tend to cancel out, but the net angular momentum and orbital plane remain constant. (This is mostly, but not entirely true - for example, Planet 9, if it's eventually discovered, might explain why the 8 inner planets, on average, don't line up with the sun's rotational plane. Planet 9 may have taken some of the inclination in one direction with it as it was thrown far outside the solar-system, leaving the inner 8 planets with an inclination in the other direction relative to the sun. When (if) planet 9 is discovered, then we can check if it balances out our solar system's tilted inclination.) But it's the collisions that help the planets line up along the orbital plane of the solar system, because the ups and downs mostly cancel out. If there are no collisions, then there's no cancelling out of the ups and downs and the objects in orbit remain in a kind of nebulous blob assortment - which over time, is probably best represented as a sphere. That's not the entire answer, however. Take the asteroid belt for example. There were (I think) probably not sufficient collisions to make the asteroid belt flat, and Ceres is probably not a failed planet, because it probably came later. 
Based on Ceres' density it probably came from further out in the solar-system. It may have started out as a moon (perhaps kicked out of Neptune's orbit by the misbehaving Triton) or a dwarf-planet originally in the Kuiper belt. Its density is too low to have formed by collisions in the asteroid belt. So the asteroid belt is flat (er, mostly), likely due to gravitational shepherding from Jupiter and perhaps a very strong magnetic field from our young sun. A proper answer about the estimated shape of the Oort cloud would be based on a survey of all the long-period (or highly eccentric) comets that we've observed and I'm not that interested in doing the research on that, but I would guess that there's enough variety of orbital inclination to support the generally circular shape - because I don't think the circular shape would be used so often if it didn't reflect the observation of long period comets. The Kuiper belt, for example (and I couldn't find a really specific answer to this), appears to be somewhat flat, kind of donut or torus shaped (Pluto has a higher than usual inclination). The proper term, if you want to get technical, is inclination distribution. No distribution = flat. Full or high distribution = sphere. The Kuiper belt's relative/somewhat flatness may be driven mostly by Neptune shepherding rather than collisions (again I'm not precisely sure). In fact it was the regularity of some inclinations of the most eccentric Kuiper belt objects (just passing through in other words) that led to the planet-9 theory in the first place. If there are large planets around, they assist in the shepherding and flattening of smaller objects in their orbital vicinity. In the case of the Galaxy, to my knowledge, there are insufficient collisions to explain the flatness of the Milky-way (it's basically pizza shaped, with maybe a small ping pong ball or large marble in the middle). 
It's my understanding that the Galaxy was flattened by a very strong magnetic field more than by collision (somebody correct me if I'm wrong). That's the extent of my knowledge at least. I invite someone smarter than me to answer this as well.
{ "domain": "astronomy.stackexchange", "id": 7152, "tags": "solar-system, oort-cloud" }
How to get accuracy, F1, precision and recall, for a keras model?
Question: I want to compute the precision, recall and F1-score for my binary KerasClassifier model, but can't find any solution. Here's my actual code: # Split dataset in train and test data X_train, X_test, Y_train, Y_test = train_test_split(normalized_X, Y, test_size=0.3, random_state=seed) # Build the model model = Sequential() model.add(Dense(23, input_dim=45, kernel_initializer='normal', activation='relu')) model.add(Dense(1, kernel_initializer='normal', activation='sigmoid')) # Compile model model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy']) tensorboard = TensorBoard(log_dir="logs/{}".format(time.time())) time_callback = TimeHistory() # Fit the model history = model.fit(X_train, Y_train, validation_split=0.3, epochs=200, batch_size=5, verbose=1, callbacks=[tensorboard, time_callback]) And then I am predicting on new test data, and getting the confusion matrix like this: y_pred = model.predict(X_test) y_pred =(y_pred>0.5) list(y_pred) cm = confusion_matrix(Y_test, y_pred) print(cm) But is there any solution to get the accuracy-score, the F1-score, the precision, and the recall? (If not complicated, also the cross-validation-score, but not necessary for this answer) Thank you for any help! Answer: Metrics have been removed from Keras core. You need to calculate them manually. They removed them in version 2.0. Those metrics are all global metrics, but Keras works in batches. As a result, it might be more misleading than helpful. 
However, if you really need them, you can do it like this from keras import backend as K def recall_m(y_true, y_pred): true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1))) possible_positives = K.sum(K.round(K.clip(y_true, 0, 1))) recall = true_positives / (possible_positives + K.epsilon()) return recall def precision_m(y_true, y_pred): true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1))) predicted_positives = K.sum(K.round(K.clip(y_pred, 0, 1))) precision = true_positives / (predicted_positives + K.epsilon()) return precision def f1_m(y_true, y_pred): precision = precision_m(y_true, y_pred) recall = recall_m(y_true, y_pred) return 2*((precision*recall)/(precision+recall+K.epsilon())) # compile the model model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['acc',f1_m,precision_m, recall_m]) # fit the model history = model.fit(Xtrain, ytrain, validation_split=0.3, epochs=10, verbose=0) # evaluate the model loss, accuracy, f1_score, precision, recall = model.evaluate(Xtest, ytest, verbose=0)
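These formulas are easy to cross-check outside Keras with plain numpy; here is a sketch for the binary case (assuming y_pred has already been thresholded to 0/1, with eps standing in for K.epsilon()):

```python
import numpy as np

def binary_prf(y_true, y_pred, eps=1e-7):
    """Precision, recall and F1 for binary labels, mirroring the
    Keras-backend formulas above (eps plays the role of K.epsilon())."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    tp = np.sum(y_true * y_pred)               # true positives
    precision = tp / (np.sum(y_pred) + eps)    # tp / predicted positives
    recall = tp / (np.sum(y_true) + eps)       # tp / actual positives
    f1 = 2 * precision * recall / (precision + recall + eps)
    return precision, recall, f1

p, r, f1 = binary_prf([1, 1, 0, 0, 1], [1, 0, 0, 1, 1])
print(p, r, f1)
```

Computing them once over the full test set like this avoids the batch-averaging bias that motivated their removal from Keras core in the first place.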
{ "domain": "datascience.stackexchange", "id": 7756, "tags": "machine-learning, neural-network, deep-learning, classification, keras" }
Work done in accelerating an object in circular motion
Question: In a hammer throw competition, an athlete spins a “hammer” before releasing it. The “hammer” used in this sport is a metal ball of mass M = 8 kg and diameter D = 30 cm, which is attached to a string of length l = 1.2 m. You may neglect the weight of the string. A world-class athlete can spin the ball up to the speed of v = 30 m/s. Calculate the corresponding work done by the athlete to spin up the “hammer”, assuming that the “hammer” moves in a horizontal circle of radius r throughout the entire process. My issue is that I don't see why the answer isn't just $(1/2) mv^2$. Answer: You're missing an extra term because every time you spin the hammer around yourself the hammer itself has rotated once around its own axis (think of our moon, which always faces us because it's tidally locked). Thus you have your kinetic energy: $$E_{k}=\frac{1}{2}Mv^{2}$$ But you also have some rotational energy for your hammer about an axis through its center, which will be of the form: $$E_{r}=\frac{1}{2}I\omega^{2}$$ The moment of inertia for a solid spherical mass of constant density is $I=\frac{2}{5}MR^{2}$ and your angular rotational speed $\omega$ is one rotation per revolution of the hammer around the person spinning it, so in radians per second it equals the orbital angular speed: $$\omega=\frac{v}{r}$$ Since it's a homework question, I'm sure you can take it from there (:
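As a quick numeric sketch of the two terms (my own assumptions, not from the problem statement: the spin rate about the ball's own axis is taken equal to the orbital angular speed in radians per second, and the circle radius is taken as the string length plus the ball radius):

```python
M = 8.0   # ball mass, kg
D = 0.30  # ball diameter, m
l = 1.2   # string length, m
v = 30.0  # release speed, m/s

R = D / 2     # ball radius
r = l + R     # assumed radius of the horizontal circle (assumption)
omega = v / r # spin rate about the ball's own axis, rad/s (assumption)

E_k = 0.5 * M * v**2             # translational kinetic energy
I = (2.0 / 5.0) * M * R**2       # solid-sphere moment of inertia
E_r = 0.5 * I * omega**2         # rotational energy about the ball's center

print(f"E_k = {E_k:.1f} J, E_r = {E_r:.1f} J, total = {E_k + E_r:.1f} J")
```

Under these assumptions the rotational term is only a small correction (a few tenths of a percent of the translational term), but it is exactly the piece that $(1/2)mv^2$ alone misses.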
{ "domain": "physics.stackexchange", "id": 54622, "tags": "homework-and-exercises, newtonian-mechanics, energy, acceleration, work" }
What is the Point of Monocular SLAM
Question: I am not a V-SLAM expert yet, but as far as I understand with monocular V-SLAM there is a scale ambiguity introduced by the fact that a camera essentially is an azimuth sensor that maps the 3D world to the 2D world via a projectivity. In particular I was reading/using orb_slam_2_ros and came across the following: This is the ROS implementation of the ORB-SLAM2 real-time SLAM library for Monocular, Stereo and RGB-D cameras that computes the camera trajectory and a sparse 3D reconstruction (in the stereo and RGB-D case with true scale). So my question is, from a practical perspective, if with monocular SLAM you don't have "true scale" then what is the point, what useful information would monocular V-SLAM give to a mobile robot trying to localize in a space? Thanks! Cheers Answer: I would say true scale actually doesn't really matter. A good way to reason about this is actually video games. If you play a racing game (or really any kind of 3D game) do you care that the world has the proper scale? No, you don't. If I went in and modified the video game to double the size/scale of all the models you would still be able to play it. So it is possible to set up your system to navigate a world without knowing the true scale. It is what you do when you play a video game. A SLAM map still has a use in this world as it can be used to localize your position/build a map. The reason true scale is used in the real world is that it unifies the units of your other sensors, and devices like motors, encoders, etc. E.g. if your motors make you move at 1 m/s you know exactly how it will interact with some other sensor. There are other tasks you can do that also don't require knowing the scale, e.g. 3D reconstruction or structure from motion (computer vision terminology for monocular SLAM). We only care about the quality of the 3D reconstruction. The scale doesn't really matter. 
This scaleless 3D model can then be used in things like video games, 3D printing, and more, which all don't care about the final scale of the model. If you do need true scale, you can also always scale it later to the correct size. E.g. I know this object is 1 meter tall, so scale the whole scene by the appropriate amount. If you do need scale then another alternative is to fuse the monocular SLAM results in a loosely coupled fashion with an IMU or other sensor. So your SLAM algorithm may not be aware of the true scale, but your fused estimator algorithm is. A lot of visual SLAM work on drones was implemented this way before tightly coupled implementations became the standard. Finally, I would argue the lack of true scale is not the main problem of monocular SLAM algorithms, it is scale drift. As long as your map has the same consistent scale then everything works out. Problems start to occur when different parts of the map have different scales. This means reasoning in one area doesn't work in another.
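The scale ambiguity itself is easy to demonstrate numerically: scaling the whole scene and the camera translation by the same factor leaves every pixel measurement unchanged, so no monocular observation can recover the factor. A small numpy sketch with a made-up pinhole camera and random points:

```python
import numpy as np

def project(points, t, f=500.0):
    """Pinhole projection of 3-D points seen from a camera translated by t
    (identity rotation for simplicity): (u, v) = f * (X/Z, Y/Z)."""
    p = points - t                   # express points in the camera frame
    return f * p[:, :2] / p[:, 2:3]  # perspective divide

rng = np.random.default_rng(0)
pts = rng.uniform(1.0, 5.0, size=(10, 3))  # made-up scene points in front of camera
t = np.array([0.2, -0.1, 0.0])             # camera translation between two views

s = 3.7                                    # arbitrary unknown scale factor
img_true = project(pts, t)
img_scaled = project(s * pts, s * t)       # scaled world, scaled baseline

print(np.allclose(img_true, img_scaled))   # projections are identical
```

The two image sets match to floating-point precision: a world 3.7 times bigger traversed with a 3.7 times longer baseline is indistinguishable through a single camera, which is exactly why stereo (known baseline) or an IMU is needed to pin the scale down.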
{ "domain": "robotics.stackexchange", "id": 2290, "tags": "ros, slam" }
Arithmetic Coding for Blocks of Images
Question: I'm trying to understand how to use arithmetic coding on images. For this, I'm coding in MATLAB. I'll describe my understanding of arithmetic coding; if I've misunderstood the algorithm, please correct me. After that, I share my MATLAB code and its error. Slice the image into 8*8 macroblocks. Define a range using every pixel value and fit it between 0 and 1. The low and high range values are updated for every pixel value. That's how I understand it. How should the result be represented in floating point? For example, if my range is 0.32423, how do I represent this? Another question: how is this method better than Huffman coding? Here is my code: clear all clc I = [128 75 72 105 149 169 127 100; ... 122 84 83 84 146 138 142 139; ... 118 98 89 94 136 96 143 188; ... 122 106 79 115 148 102 127 167; ... 127 115 106 94 155 124 103 155; ... 125 115 130 140 170 174 115 136; ... 127 110 122 163 175 140 119 87; ... 146 114 127 140 131 142 153 93]; Image = I(:); prob = zeros(255,1); comp = arithenco(Image,prob) Here is the error: Answer: First, let's try to understand how to work on an array for encoding and then move to an image and blocks of an image. By looking at MATLAB's arithenco() function, you need to supply a stream of values in the range [1, 2, ..., N] where N is the number of symbols. You also need to supply it a prior about the probability of each symbol. If you have an image, mI, with values in the range {0, 1, ..., 255} and you have no prior about the data, you should use its own empirical histogram: vC = zeros(256, 1); %<! Counts vS = mI(:) + 1; %<! The stream for ii = 1:length(vS) vC(vS(ii)) = vC(vS(ii)) + 1; %<! You can do it faster with histcounts() end Now you can do something like: vCode = arithenco(vS, vC). In images, since we have highly correlated pixels within a small window, we can use that for better encoding. This means that instead of using a global vC for the whole image, we can use it per small neighborhood. So you can do the above trick per window of 8x8. 
The efficient way to do this in MATLAB, given you have access to the Image Processing Toolbox, is to use im2col() with the 'distinct' option, then operate on each column efficiently.
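The interval-subdivision step the question describes ("define a range and fit it between 0 and 1") can also be sketched outside MATLAB. Below is a minimal pure-Python illustration that uses an empirical histogram as the probability model, as the answer suggests; the function names and the crude 2-bit quantizer are purely illustrative and are not part of MATLAB's arithenco().

```python
# Toy sketch of arithmetic coding's interval-subdivision step, driven by
# an empirical histogram (the "prior" the answer builds in vC).
# All names here are illustrative; this is not MATLAB's arithenco().

def histogram(stream, n_symbols):
    counts = [0] * n_symbols
    for s in stream:
        counts[s] += 1
    return counts

def encode_interval(stream, counts):
    """Return the final [low, high) interval that encodes the stream."""
    total = sum(counts)
    cum = [0]                       # cum[k] = number of symbols < k
    for c in counts:
        cum.append(cum[-1] + c)
    low, high = 0.0, 1.0
    for s in stream:
        width = high - low          # subdivide the current interval
        high = low + width * cum[s + 1] / total
        low = low + width * cum[s] / total
    return low, high

# First row of the question's 8x8 block, crudely quantized to 4 symbols
block = [128, 75, 72, 105, 149, 169, 127, 100]
symbols = [p // 64 for p in block]
counts = histogram(symbols, 4)
low, high = encode_interval(symbols, counts)
```

Any number in [low, high) identifies the stream given the same histogram; a real coder emits just enough bits to pin down such a number, which is why skewed per-block histograms (as in smooth 8x8 blocks) compress well.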
{ "domain": "dsp.stackexchange", "id": 10943, "tags": "image-processing, matlab, compression" }
Why does light in a room not form constructive and destructive interference patterns?
Question: This is something that I have wondered about for a long time. When I walk around, why do I not see random black spots where light has collided destructively and bright spots where it has collided constructively? Answer: There are two answers to your question. Actually, now that I read your question again, I also see two questions. To answer the first question: light does not collide - it can sail right through any other light beam - so light does not interact with itself, and one light beam does not bend another light beam. BUT light can form interference patterns, since the waves do add up wherever they overlap. Now for the two answers to the slightly modified question: "When I walk around, why do I not see random black spots where light has destructively or constructively added together?" The first is that there is destructive interference - but the regions where it occurs are too small to see, since the region of destructive interference is typically about the wavelength of the waves in size - a result of the waves coming in at all angles and, furthermore, having different frequencies. Since the wavelength of light is just under a millionth of a meter - less than 0.001 mm - it is too small to see. Also, light oscillates hundreds of trillions of times per second, and in general comes from billions of places at once, so the whole effect gets washed out by our eyes and brain, which operate far slower than that. The second answer is that despite all these obstacles, you can observe the phenomenon - usually in physics class experiments, but also in 'the wild'. For example, if you are walking at night under monochromatic street lights and see an oil slick on a puddle, you can see constructive and destructive interference.
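The "washing out" argument can be illustrated numerically: adding many waves with random, uncorrelated phases gives an average intensity equal to the incoherent sum of the individual intensities, with no stable dark or bright pattern. A small Python sketch (the trial count is arbitrary):

```python
# Numerical illustration of the "washed out" argument: the intensity of a
# sum of N unit waves with random, uncorrelated phases averages to N (the
# incoherent sum), not the N**2 that fully constructive interference
# would give, so no stable dark/bright pattern survives.
import cmath
import random

random.seed(0)  # deterministic for the sketch

def mean_intensity(n_waves, n_trials=2000):
    total = 0.0
    for _ in range(n_trials):
        # superpose n_waves unit phasors with random phases
        field = sum(cmath.exp(1j * random.uniform(0.0, 2.0 * cmath.pi))
                    for _ in range(n_waves))
        total += abs(field) ** 2
    return total / n_trials

I10 = mean_intensity(10)  # close to 10, i.e. the incoherent sum
```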
{ "domain": "physics.stackexchange", "id": 4511, "tags": "electromagnetism, electromagnetic-radiation, interference" }
Deriving special relativity using alternative axioms
Question: Einstein’s paper on SR takes as axioms that (1) the laws of physics are the same in all inertial frames, and (2) the vacuum speed of light is the same for all observers, independent of the relative speed of the source. I saw a reference once to an alternative derivation, dating to around 1907 (after Einstein’s), that instead used (2') any two observers would see each other moving at the same speed. This derivation essentially replaces $\gamma = (\sqrt{1-\frac{v^2}{c^2}}) ^{-1}$ with $\gamma = (\sqrt{1-\alpha v^2})^{-1}$ and changes the velocity sum formula to $\frac{v+u}{1+\alpha vu}$. If $\alpha = 0$, you get Galilean invariance and Newton's mechanics. If $\alpha > 0$, you get Lorentz invariance and SR. Unfortunately I failed to record the details of the citation, so I have not been able to find the paper or modern treatments of this approach. Can anybody explain this derivation, or point me to a source? Answer: I think you are looking for "Einige allgemeine Bemerkungen über das Relativitätsprinzip," Physikalische Zeitschrift 11 (1910), pp. 972–976, by Vladimir Ignatowski https://de.wikisource.org/wiki/Einige_allgemeine_Bemerkungen_%C3%BCber_das_Relativit%C3%A4tsprinzip translated as "Some general remarks on the relativity principle" by Vladimir Ignatowski https://en.wikisource.org/wiki/Translation:Some_General_Remarks_on_the_Relativity_Principle It begins When Einstein introduced the relativity principle some time ago, he simultaneously assumed that the speed of light $c$ shall be a universal constant, i.e. it maintains the same value in all coordinate systems. Also Minkowski started from the invariant $r^{2}-c^{2}t^{2}$ in his investigations, although it is to be concluded from his lecture "Space and Time"[1], that he attributed to $c$ the meaning of a universal space-time constant rather than that of the speed of light. 
Now I've asked myself the question, at which relations or transformation equations one arrives when only the relativity principle is placed at the top of the investigation, and whether the Lorentzian transformation equations are the only ones at all, that satisfy the relativity principle. The passage of interest follows Eq. (24) $$p=\frac{1}{\sqrt{1-q^{2}n}}\tag{24}$$ From (24) it follows, that $n$ (which we can denote as a universal space-time constant) is the reciprocal square of a velocity, thus an absolute-positive quantity. We see that we obtained transformation equations similar to those of Lorentz, except that $n$ is used instead of $\frac{1}{c^{2}}$. However, the sign is still undetermined, because we could have set the positive sign under the square root in (24) as well. Now, in order to determine the numerical value and the sign of $n$, we have to look at the experiment... ... ... This gives $$n=\frac{1}{c^{2}}\tag{25}$$ And only from that it follows, that $c$ is constant for all coordinate systems. At the same time we see that the universal space-time constant $n$ is determined by the numerical value of $c$. Now it is clear that optics lost its special position with respect to the relativity principle by the previous derivation of the transformation equations. By that, the relativity principle itself gains more general importance, because it doesn't depend on a special physical phenomenon any more, but on the universal constant $n$. Nevertheless we can grant optics or the electrodynamic equations a special position, though not in respect to the relativity principle, but in respect to the other branches of physics, namely in so far as it is possible to determine the constant $n$ from these equations. 
For more early papers on Relativity, this is a useful starting point: https://en.wikisource.org/wiki/Portal:Relativity FOOTNOTE: Here is another reference by Ignatowski. This reference is a more thorough introduction to the idea. "Das Relativitätsprinzip" Archiv der Mathematik und Physik 17: 1-24, 18: 17-40 (1910) https://de.wikisource.org/wiki/Das_Relativit%C3%A4tsprinzip_(Ignatowski) I had to run it through Google Translate.
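The composition law quoted in the question, $\frac{v+u}{1+\alpha vu}$, with $\alpha$ playing the role of Ignatowski's universal constant $n = 1/c^2$, is easy to check numerically: it reduces to Galilean addition for $\alpha = 0$, never exceeds $c$ for $\alpha = 1/c^2$, and is associative. A quick Python sketch in units where $c = 1$:

```python
# Sketch of the one-parameter family of composition laws from the
# question, with alpha as Ignatowski's universal constant n = 1/c^2.
# Units are chosen so that c = 1 when alpha = 1.

def add_vel(v, u, alpha):
    """Compose collinear velocities: (v + u) / (1 + alpha*v*u)."""
    return (v + u) / (1.0 + alpha * v * u)

galilean = add_vel(0.3, 0.4, 0.0)  # plain sum when alpha = 0
lorentz = add_vel(0.9, 0.9, 1.0)   # stays below c = 1
```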
{ "domain": "physics.stackexchange", "id": 62135, "tags": "special-relativity, lorentz-symmetry" }
Bash function to dynamically move N directories above the current directory
Question: up moves N directories above the current directory (executes "cd .." N times). When executed without input values, it moves only one directory above the current directory. When executed with input values, up uses the first input value (checking that it is a positive integer) to move up the specified number of directories. up() { if [ -z "$1" ] then eval "cd .." else if [ "$1" -eq "$1" ] then if [ "$1" -gt "0" ] then for i in `seq 1 $1`; do eval "cd .." done eval "pwd" else echo "First argument is not a positive integer" fi else echo "First argument is not an integer" fi fi } I did consider building a cd ../../.. etc. chain, which would execute after the for loop builds the required string and would also preserve the usefulness of cd -, but this version was slightly cleaner. The [ "$1" -eq "$1" ] thing is one way to check if a value is an integer. Answer: If you're writing this as a shell function, then you don't need eval. The indentation could be done more idiomatically. By using elif, you can eliminate one level of indentation and make the code structure clearer. up() { if [ -z "$1" ]; then cd .. elif [ "$1" -eq "$1" ]; then if [ "$1" -le 0 ]; then echo "First argument is not a positive integer" else for i in `seq 1 $1`; do cd .. done pwd fi else echo "First argument is not an integer" fi } Your question is very similar to this one. The main problem, as you have already noted in your own comment, is that by doing multiple cd .. in sequence instead of one cd ../../.., you would be inserting many hops into the directory history. Then, cd - or referencing $OLDPWD wouldn't work as expected.
{ "domain": "codereview.stackexchange", "id": 17002, "tags": "bash" }
Converting a Date to Hexadecimal Word
Question: I have two functions, one to convert a datetime to a hex word, and one to convert a word to a datetime. I was just wondering if there was a more efficient way to convert back and forth. /// <summary> /// Converts a date time to a hexadecimal word /// </summary> /// <param name="date">the date time</param> /// <returns>returns the Hex value of the date</returns> private int convertToWordFromDate(DateTime date) { int m, d, y; int month = date.Month; int day = date.Day; int year = int.Parse(date.Year.ToString().Substring(2, 2)); int yearShift = 9; int monthShift = 5; int word; y = year << yearShift; m = month << monthShift; d = day; if (year == 0) word = m + d + (y * 100); if (year < 10) word = (m + d + y) * 16; else word = m + d + y; return word; } /// <summary> /// Converts a hexadecimal word to a valid dateTime /// </summary> /// <param name="word">the word</param> /// <returns>the date time</returns> private DateTime convertToDateFromWord(int word) { int month, day, year; int monthMask = 0x01E0; int dayMask = 0x001F; int yearMask = 0xFE00; int monthShift = 32; int yearShift = 512; DateTime date = new DateTime(); try { year = (word & yearMask) / yearShift; month = (word & monthMask) / monthShift; day = (word & dayMask); if (year >= 80 && year <= 99) year += 1900; else if (year >= 0 && year <= 79) year += 2000; else if (year < 0 || month <= 0 || day <= 0) { year = 1; month = 1; day = 1; } else { year = 1; month = 1; day = 1; } date = new DateTime(year, month, day); } catch { date = new DateTime(1, 1, 1); } return date; } Answer: I note that your functions are not doing what they say they're doing. For example, you don't convert a date to a hexadecimal word, you convert to an integer value, a value that also happens to disregard the century and the time component entirely. In getting the date back, you also perform some business rules against the date, by treating certain years as last century, other years as current century. 
What I would expect when seeing these functions is that I would certainly get the same date I passed into one function as the output from the other. That is not necessarily going to be the case. You also have issues with a blanket catch that swallows any exception. If you are going to catch an exception, be specific and catch what you truly can handle, something you might expect. Work towards eliminating those exceptions entirely, so instead of catching them, you are preventing them by validating against their causes beforehand, if possible. You have an if/else chain where an else if and else have the exact same code in their code blocks. Eliminate the redundancy. There are issues with shifts and masks not being consistent, in terms of where they are defined and the methodology being used. If you change something in one place, you have to change it in another. More than that, you have to change it in a different way. To be blunt, it's basically not clear what the code is doing at an initial glance, nor is it necessarily clear why it is doing it. I don't know what your business case is, but if I were writing a class to convert a date to a "hexadecimal word" and then back again, I would probably expect to write something that takes a date and returns a string, and then takes a string and returns a date. The quick, not completely tested version might be something simple like this public class DateConverter { public string ConvertToHexString(DateTime date) { return date.Ticks.ToString("X2"); } public DateTime ConvertFromHexString(string hexInput) { long ticks = Convert.ToInt64(hexInput, 16); return new DateTime(ticks); } } Which you could validate with DateTime originalDate = new DateTime(1955, 11, 11, 22, 4, 0); DateConverter converter = new DateConverter(); string hexValue = converter.ConvertToHexString(originalDate); DateTime returnedDate = converter.ConvertFromHexString(hexValue); Debug.Assert(originalDate == returnedDate);
{ "domain": "codereview.stackexchange", "id": 877, "tags": "c#" }
Is there such a thing as an "or" case in a dependency graph?
Question: Suppose $A$ depends on $B$ or $C$, but not necessarily exclusively. Is it possible to model this in a dependency graph? Can we just link $A$ to $B$ and $C$ via a diamond, as in UML (though I don't see how this could be represented mathematically)? Or do we have to condense $B$ and $C$ into a single node $D$? Answer: It's possible to represent a general boolean function as a directed acyclic graph. Wikipedia has examples of this on the binary decision diagram and propositional directed acyclic graph pages. In the case of the PDAG, the up triangle represents AND, the down triangle OR, and the diamond represents NOT. The PDAG seems to be the closest to an AND/OR dependency graph, with the topmost element only having its requirements satisfied if the function would yield a 1. To represent this mathematically, you could use Boolean algebra.
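A minimal evaluator for such an AND/OR graph, in the spirit of the PDAG described in the answer, can be sketched as follows; the tuple-based node encoding is an illustrative assumption, not a standard library representation:

```python
# Minimal evaluator for an AND/OR dependency graph, in the spirit of a
# PDAG: internal nodes combine children with AND/OR/NOT, leaves are named
# dependencies looked up in a dict of facts. The tuple encoding here is
# an illustrative assumption, not a standard representation.

def satisfied(node, facts):
    kind = node[0]
    if kind == "leaf":
        return facts[node[1]]
    if kind == "not":
        return not satisfied(node[1], facts)
    results = [satisfied(child, facts) for child in node[1]]
    if kind == "and":
        return all(results)
    if kind == "or":
        return any(results)
    raise ValueError("unknown node kind: %r" % kind)

# "A depends on B or C, but not necessarily exclusively":
A = ("or", [("leaf", "B"), ("leaf", "C")])
```

The OR node is satisfied when either leaf holds (including both at once), which is exactly the non-exclusive dependency from the question.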
{ "domain": "cs.stackexchange", "id": 11980, "tags": "graphs" }
isPalindrom-function in TypeScript
Question: I have written an isPalindrom function in TypeScript. const isPalindrom = (word: String): boolean => { const mid = Math.floor(word.length / 2); for (let i = 0; i < mid; i++) { const left = word[i].toLocaleLowerCase(); const right = word[word.length - 1 - i] .toLocaleLowerCase(); if (left != right) { return false; } } return true } ["Anna", "AXnna", "Kayak", "kayakx", "LEVEL", "LEVExL", "rotor", "roxtor", "wow", "wxow", "mom", "Ymom", "rEpaPer", "repXaper"] .forEach(word => { console.log(`Is ${word} a Palindrom? => ${isPalindrom(word)}`); }); Is my TypeScript usage correct? What could be improved concerning the algorithmic approach? I have used the double-equals operator, because the values can't be something other than strings. Is it still necessary to use the triple-equals operator in TypeScript? Answer: I have used the double-equals operator, because the values can't be something other than strings. Is it still necessary to use the triple-equals operator in TypeScript? This is backwards. Necessary? No. Recommended? Yes. If you know the type is always going to be the same, you should use the triple-equals so as to save the extra checks to see if they are the same. I.e. the triple-equals is simpler than double-equals. Both have to check if the types are the same. With triple-equals, you can stop processing if not. With double-equals, differing types start a more complicated round of type coercion. More discussion: You Don't Know JS: Loose vs. Strict Equals You use the double-equals when the types can be different but you want to coerce them into the same type for equality testing. For example, 1 == '1' is true. All that said, this is not a requirement. Functionally, your code will work with the double-equals. But if you're asking about best practices, then best practice is to use the triple-equals whenever you don't need the type-coercion. It's the double-equals that you should only use when needed. 
You should default to triple-equals unless it won't work for your use case (because you require type coercion). Note that even if you do require type coercion, sometimes it is better to do the type coercion explicitly rather than use the loose rules of double-equals. But that won't make a difference here. Personally, I would prefer to use two index variables rather than calculate one index value from the other. const normalized = word.toLocaleLowerCase(); for (let left = 0, right = normalized.length - 1; left < right; left++, right--) { if (normalized[left] !== normalized[right]) { return false; } } return true; I find this simpler and easier to follow. Also, it is more easily expandable to cover more complicated normalization. E.g. the removal of punctuation and spaces.
{ "domain": "codereview.stackexchange", "id": 43292, "tags": "algorithm, typescript, palindrome" }
Velocity addition for tachyons
Question: How does the velocity of a tachyon transform under a Lorentz boost? Suppose we only consider motion along the $x$ direction for simplicity. If the velocity of the tachyon is $u$ in the lab frame, what is the velocity $u'$ of the tachyon in a frame moving with velocity $v$ (slower than $c$) relative to the lab frame? Can we still use the formula $$ u' = \frac{u-v}{1-\frac{uv}{c^2}} $$ Answer: The short answer is that it is valid, but that misses a lot of subtleties. Using three-vectors for velocity in special relativity is fairly unnatural even for ordinary speeds, and it's worse for tachyons, since their three-velocity can be "infinite" and, in a certain sense, "time reversed" (see below), and those situations can't be described properly with three-vectors. It's better to use a four-vector to represent speed. The four-velocity at a point is just a tangent vector to the worldline at that point. A three-velocity $(u_x,u_y,u_z)$ is equivalent to an unnormalized four-velocity of $(1,u_x,u_y,u_z)$, or any positive scalar multiple of that. All four-vectors transform in the same way. Under a Lorentz boost in the $x$ direction by $v, |v|<1$ (using a three-velocity $v$ in anticipation of recovering the formula quoted in the question), that four-velocity becomes $(\gamma(1 - vu_x),\; \gamma(u_x - v),\; u_y,\; u_z)$. You can renormalize that to make the time component equal to $1$ again, getting $$\left( 1,\; \frac{u_x-v}{1-vu_x},\; \frac{u_y}{\gamma(1-vu_x)},\; \frac{u_z}{\gamma(1-vu_x)} \right)$$ and if you drop the $1$, and add explicit factors of $c$, you get a generalization of the formula from the question (which is the special case where $u_y=u_z=0$). Nothing in that derivation depends on the speed $u$ being light speed or less as such. However, when you renormalize the boosted four-vector, you have to divide by the time component, and when the speed is tachyonic (and only when it's tachyonic), that time component may be zero. 
In that case, you can represent the speed by a "formally infinite" three-velocity that has an infinite magnitude and a direction like that of any nonzero vector. However, this is mathematically suspect, and really the only reason it makes sense is because it's equivalent to the four-vector form. There is also no natural reason for the velocity to be treated as "infinite" in the first place. Whether a velocity is "infinite" in this sense is frame-dependent and hence not really a property of the velocity. The time component can also be negative, and because of that, when you divide by it, you lose sign information. (This is when "the tachyon appears to travel backwards" as you observed in a comment.) Whether the sign information matters depends on the nature of the hypothesized tachyons. If tachyons have an arrow of time, and the four-velocity points in the direction of increasing proper time (as one normally imagines it does for sublight speeds), then the direction of the arrow of time is lost in the conversion. However, if tachyons have an arrow of time, then you can use them to send signals into your own past (using a "tachyonic antitelephone"), so perhaps they can't have one and the loss of sign information doesn't matter. Ultimately, though, the three-velocity is just a poor way of thinking about the velocity of tachyons, even though it can be adapted to that purpose. It's better to stick with the four-velocity for both subluminal and superluminal speeds.
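The four-velocity bookkeeping in the answer is easy to check numerically. The Python sketch below boosts the unnormalized four-velocity $(1, u, 0, 0)$ with $c = 1$ and recovers the velocity-addition formula from the question, including for a tachyonic $u > 1$; the function names are illustrative only:

```python
# Numerical check of the answer's bookkeeping (c = 1): boost the
# unnormalized four-velocity (1, u, 0, 0), renormalize by the time
# component, and compare with the question's velocity-addition formula.
# Works unchanged for tachyonic u > 1. Function names are illustrative.
import math

def boost_x(four_u, v):
    """Lorentz boost along x applied to an (un)normalized four-vector."""
    g = 1.0 / math.sqrt(1.0 - v * v)
    t, x, y, z = four_u
    return (g * (t - v * x), g * (x - v * t), y, z)

u, v = 2.0, 0.4                      # tachyonic speed u > c = 1
t1, x1, _, _ = boost_x((1.0, u, 0.0, 0.0), v)
ux_prime = x1 / t1                   # renormalized three-velocity
expected = (u - v) / (1.0 - u * v)   # the formula from the question
# Note: when u*v = 1 the boosted time component vanishes -- the
# "formally infinite" three-velocity case discussed in the answer.
```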
{ "domain": "physics.stackexchange", "id": 92558, "tags": "special-relativity, inertial-frames, velocity, faster-than-light, tachyon" }
QPSK Data Rate vs Bandwidth at Passband: Why can it only transmit 1 bit per Hz and not 2 bits per Hz?
Question: 1) This is a follow-up question to the most up-voted answer to this question. 2) The accepted answer is that "QPSK transmits 1 bit per Hz at passband". 3) The theory found in books seems to agree with this answer. 4) But this defies my logic and my experience. 5) My logic and experience tell me that QPSK transmits 2 bits per Hz at passband. 6) From experience, I know that in order to acquire 1 MHz of complex bandwidth (at baseband), I need to set the IQ rate of my receiver to 1 megasample per second. (i.e. IQ rate = complex bandwidth, and keep in mind that the IQ rate usually needs to be slightly higher to account for the roll-off of an anti-aliasing filter. But let's assume that there is no filter for simplicity.) 7) At baseband, this is the equivalent of 500 kHz of bandwidth if you don't consider the negative frequencies. 8) And at passband, the total bandwidth will be 1 MHz because the negative frequencies "appear" as the frequencies on the "lower" side of the carrier frequency and they are now taken into consideration in the amount of bandwidth, effectively "doubling" the bandwidth. 9) With an IQ rate of 1 megasample per second, I can send 2 bits on each sample (because it is QPSK). 10) So the effective data rate is 2 Mbps for a bandwidth of 1 MHz at passband (in my experience). 11) Refer to the image below, describing the relationship between data rate and bandwidth in my experience. 12) Other people seem to have obtained similar results to mine: A) Refer to the last video on these student experiments: they show a transmission of 2 Mbps with a 1 MHz QPSK signal. B) In this video a 4 kHz data rate signal results in a 2 kHz QPSK 3 dB spectrum. Answer: Let us assume you have a bandwidth $W = 1\,\text{Hz}$ in passband. This means that you have a bandwidth $B = W/2 = 0.5\,\text{Hz}$ in baseband. 
In baseband, using sinc pulses, we can transmit at a symbol rate $R_p = 2B = 1\,\text{Bd}$, where $\text{Bd}$ (baud) is the unit for "symbols (or pulses) per second". Using BPSK modulation, we can transmit one bit per symbol, for a bit rate $R_b = 1\,\text{b/s}$. Now, QPSK is essentially two BPSK signals in parallel. In other words, we have two BPSK baseband signals $s_I(t)$ and $s_Q(t)$, each having a bandwidth of 0.5 hertz and transmitting one bit per second. By using quadrature, we transmit both $s_I(t)$ and $s_Q(t)$ at the same time in passband, using the signal $$ s(t) = s_I(t)\cos(2\pi f_c t) - s_Q(t)\sin(2\pi f_c t). $$ The QPSK signal $s(t)$ has a bandwidth of 1 hertz and transmits 2 bits per second (one over the in-phase component and one over the quadrature component). However, note that the relationship $R_p = 2B$ is valid only for sinc pulses, which are almost never used in practice. If you use other pulses, then the answer will be different. Many introductory textbooks are not explicit about this. Let us say that you wish to use a pulse $p(t)$. Furthermore, $p(t)$ is a Nyquist pulse. Then, the spectrum of the pulse determines the pulse rate and subsequently the bit rate. As an example, let us repeat the exercise above for pulses shaped like a half-sine wave. In this case, the pulse rate is $R_p = B/2.5$. Then, if $B=0.5\,\text{Hz}$, $R_p = 0.2\,\text{Bd}$ and, using QPSK, you will only be able to transmit at a bit rate $R_b = 0.4\,\text{b/s}$ per hertz in passband. Note that the $2.5$ figure I used above is my (conservative) personal measurement and reflects what I'm comfortable with. Many introductory textbooks are extremely optimistic about the actual bandwidth of different pulses. In particular, it is common for textbooks to claim that $R_p = B$ for rectangular pulses, which I believe borders on the dishonest (Stallings and Tomasi do this, among others). 
This is what is behind the claim that "QPSK transmits 1 b/s/Hz" that you (very sensibly) suspect is false. Note also that the answer you linked to is correct, for the specific pulse shape that is assumed. My answer is more general and allows you to calculate the bit rate for any Nyquist pulse you choose to use.
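The arithmetic in the answer can be condensed into a short sketch. The pulse_bw_factor parameter is a name introduced here, purely for illustration, for the ratio $R_p/B$ (2 for ideal sinc pulses, $1/2.5$ for the answer's conservative half-sine figure):

```python
# The answer's arithmetic in one place: passband bandwidth W, baseband
# bandwidth B = W/2, symbol rate R_p = pulse_bw_factor * B, and 2 bits
# per QPSK symbol. "pulse_bw_factor" is an illustrative name introduced
# here, not standard terminology.

def qpsk_bit_rate(passband_bw_hz, pulse_bw_factor=2.0):
    baseband_bw = passband_bw_hz / 2.0        # B = W / 2
    symbol_rate = pulse_bw_factor * baseband_bw
    return 2.0 * symbol_rate                  # QPSK: 2 bits per symbol

sinc_rate = qpsk_bit_rate(1.0)                # 2 b/s per Hz of passband
half_sine_rate = qpsk_bit_rate(1.0, 1 / 2.5)  # 0.4 b/s per Hz
```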
{ "domain": "dsp.stackexchange", "id": 7420, "tags": "power-spectral-density, bandwidth, qpsk, bpsk, baseband" }
What is "super" in superphosphate?
Question: This question is inspired by a previous question (marked unclear). I don't know the context of that question, but I was intrigued by a statement: Superphosphate is used instead of just phosphate because superphosphate is a compound whereas phosphate is an ion. How does the name "superphosphate" describe a compound? Since it contains the suffix -ate, shouldn't it be considered an ion just like phosphate? I googled "superphosphate" and it gave results about it being a fertilizer, its various types and its suppliers. Is it a type of phosphate fertilizer or is it just a trademark name? What is the significance of the word "super" in this context? On some more searching, I came to know that the other name of calcium dihydrogenphosphate is calcium superphosphate. Is it the same superphosphate that we are talking about? Does IUPAC recommend its usage? If there are superoxide and superphosphate, are there any other ions containing the name "super", like supersulfate or supernitrate? The names seem too absurd/obsolete to be used. Searching for "supernitrate" gave 2 results: alibaba and super calcium nitrate, which seem to be nitrogenous fertilizers. Searching for "supersulfate" gave me results about a type of cement (one example here). So, I think that the name "super" isn't bound to fertilizers only. To clarify my questions: What is the significance of "super" in superphosphate? Is it a real chemical name or a trademark name? Is it the same as in superoxide? Are there any other ions containing the name "super"? Answer: The term superphosphate is really old, dating from well before Dalton proposed the concept of atoms. Therefore it is difficult to rationalize the choice of this terminology. In the unabridged version of the Oxford English Dictionary, you can see the earliest usage dates back to 1798 Chemistry. A phosphate containing an excess of phosphoric acid; an acid phosphate. Now disused except in superphosphate of lime, calcium superphosphate: cf. sense 2. 1798 Philos. 
Trans. (Royal Soc.) 88 17 It was..Scheele who discovered, that the urine of healthy persons contains superphosphate, or acidulous phosphate, of lime. Further information from the OED on the usage of "super" in chemical names confirms its use since antiquity. See antique examples (b) Denoting the highest proportion of a component, esp. owing to a high oxidation state. Now chiefly archaic or hist., except in superoxide n. and in the names of certain substances used in industry and commerce, as superphosphate n., supersulphate n. Cf. sub- prefix 4b(b). (i) [1788 J. St. John tr. L. B. Guyton de Morveau et al. Method Chym. Nomencl. 107 New names... Acetite of lead. Ancient names... Sugar of lead, Super-acetated lead.] 1811 Jrnl. Nat. Philos. June 78 The aqua lithargyri acetati is a saturated solution of the proper acetate of lead,..it is an essentially different salt from the super-acetate of lead. 1913 Brit. Med. Jrnl. 4 Oct. 875/1 Dr. Latham..used the superacetate of lead in consumption. 1979 A. J. Youngson Sci. Revol. Victorian Med. i. 18 Acetate or superacetate of lead combined with opium was prescribed for haemorrhage of the lungs.
{ "domain": "chemistry.stackexchange", "id": 12468, "tags": "inorganic-chemistry, everyday-chemistry, nomenclature, reference-request" }
Linear Programming Problem - what is feasible size for solution on a PC
Question: I need to get a feeling for the feasible size of an LPP that can be solved on a PC. Say it's a good one (8 cores @ 3+ GHz, 64 GB RAM). We also assume that the number of variables is close to the number of additional constraints. I know it depends on the solver, but what size is feasible for solution in several hours / a few days? A thousand variables? A million? An order of magnitude would be enough. Update: let's consider the general form: the matrix is not sparse/diagonal/block-structured or whatever. Let's say the matrix's rows are filled with many hundreds of values (at thousands of variables). Answer: There is no simple answer. The running time depends not only on the number of variables, but also on the "difficulty" of the system of inequalities. The best you can do is to benchmark on representative problem instances.
{ "domain": "cs.stackexchange", "id": 16547, "tags": "time-complexity, linear-programming" }
Why do doors turn?
Question: I really think that I might be overthinking this, but I was thinking about a door. When you try to open it with a force, the force will produce a translational AND a rotational effect on the door. Any good door probably doesn't translate, so the hinges must be applying a force to oppose this translational motion (Newton's 3rd law). But if the hinges are applying a force, and not at the center, shouldn't it produce a torque too? Shockingly, as the forces must be equal, the torques must be equal too, right? So how can doors turn if they have equal but opposite torques applied to them? Is this similar to how objects can fall at constant speed despite having no net force (air resistance balancing out gravity)? Answer: Torque depends on the force and on the distance between the hinge and the point where the force is applied. When you pull on the door at the handle, you apply a force and there is a nonzero distance between handle and hinges, so you get torque and, as a consequence, rotation around the hinges. The hinges apply their force at the rotational center, so they can't produce any torque. There are no "equal but opposite torques" because there is only one. Torques change rotations just as forces change translations. Ongoing rotations or translations don't require acting torques or forces.
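The answer's point - the hinge force has zero lever arm, so it contributes no torque - can be checked with a two-line cross-product calculation. The 0.8 m handle distance and 50 N force below are made-up illustrative numbers:

```python
# Check of the answer's point: torque is r x F, and the hinge reaction
# acts at zero distance from the axis, so only the pull at the handle
# produces a torque. The 0.8 m and 50 N figures are illustrative only.

def torque_z(r, f):
    """z-component of the 2-D cross product r x f."""
    return r[0] * f[1] - r[1] * f[0]

handle = (0.8, 0.0)        # handle 0.8 m from the hinge axis
pull = (0.0, 50.0)         # 50 N applied perpendicular to the door
hinge_point = (0.0, 0.0)   # hinge reaction acts on the axis itself
reaction = (0.0, -50.0)    # opposes translation (door doesn't translate)

tau_handle = torque_z(handle, pull)          # nonzero: the door turns
tau_hinge = torque_z(hinge_point, reaction)  # zero lever arm, zero torque
```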
{ "domain": "physics.stackexchange", "id": 54315, "tags": "newtonian-mechanics, forces, rotational-dynamics, everyday-life, torque" }
Abstraction for multiple connection methods
Question: Coding to the interface, rather than the implementation. Here is what I'm doing in simple terms. Note: Although written using PHP, this is more of a general design / abstraction question that developers using any language could help answer. I'm writing an application that can handle different types of connection to gather its data. The connection could be: a server somewhere abroad, localhost, or a machine on a local network. All connections must handle their data retrieval using SFTP, as the data may be sensitive. Therefore, I coded an interface as follows: The Interface interface ConnectionInterface { /** * __construct instantiates connection object with settings */ /** * connect() attempts connection using settings set up within constructor */ public function connect(); /** * runCommand() executes a cmd and retrieves a string using the connection resource */ public function runCommand($command); /** * ping() checks that the actual host exists, before trying to connect */ public function ping($ip); } Above is the interface I created before coding anything else. Here is one concrete implementation that requires SSH2 via PECL to be installed. SSH2 Implementation class SSHConnection implements ConnectionInterface { public $config; public $conn; /** * Instantiates SSHConnection Object with settings */ public function __construct($ip, $port, $username, $password) { // Check SSH2 extension loaded. 
If not, throw exception / exit // Set class variables to that of those passed into constructor } /** * Attempt SSH connection using settings set up within constructor * * @return mixed True on success, false on exception * @throws \Exception if can't find server or can't connect */ public function connect() { extract($this->config); if (!$this->ping($ip)) { return false; } if (@!$conn = ssh2_connect($ip, $port)) { throw new \Exception('Unable to connect to the server'); return false; } if (@!ssh2_auth_password($conn, $username, $password)) { throw new \Exception('Incorrect server login username / password'); return false; } $this->conn = $conn; return true; } /** * Simple server ping using exec(), used in $this connect() */ public function ping($ip) { // Ping.. obviously.. } /** * Execute a command and retrieve a string using the connection resource * * @param string $command The command to run (example: 'ln -s') * @return mixed An object containing arrays data */ public function runCommand($command) { // Use $this->conn to run a shell command } /** * If there is a connection, runs exit on it then unset */ public function __destruct() { // If connection, ssh_exec 'exit' then unset($this->conn) } } My questions Apart from giving a load of general suggestions about why and how to improve this code, I have a few more specific questions that I hope could be added to your answer: Is it wrong to perform any sort of calculations within the constructor? For example, if the SSH2 extension isn't loaded, isn't the constructor the clearest place to put the check for this? Someone suggested creating a "test if you can run this class" command line script, but... honestly I think that someone trying to create this object should not be allowed to if they don't have the required dependency. Should the ping() function be here? Ping is only required when using a server abroad, not really if on localhost. Should there be a check here? Should this be moved somewhere else entirely? 
How should I go about it? What about the actual data retrieval? This is just the connection. Should I insert the data retrieval within this class? Or create a new DataRetrieval object which uses the connection object? Your help will be greatly appreciated. This is not a work project, just a personal one and I asked this question to learn - I know it'll work, but I want it to work well. So what would you do and why? Answer: Is it wrong to perform any sort of calculations within the constructor? Opinion is somewhat divided on this matter. I think it is generally accepted that a constructor should not contain business logic because it makes your code much more difficult to mock for testing, however this particular case you show comes down to how you define that term "business logic". Personally I believe that checking for global dependencies (in this case, checking for the existence of the SSH2 extension) is acceptable. Ideally you would not need to be checking for any global environmental state, you would simply inject a state object, but because of the way PHP's extension system works that's not really possible. Obviously injecting state object is possible, but that would still need to check global state, so you haven't really gained anything, except possibly in terms of SRP - but this would require an added layer of abstraction to separate the protocol from the connection. It's up to you whether you think this is worthwhile. The alternative is to put this check in a separate method and require that the consumer call it explicitly. However, to me this is at odds with the idea of the interface and is defining and exposing the underlying implementation. There are only two cases that need to be mocked: the dependencies exist, or they don't. There's no real logic that needs to be tested here. What you definitely should not do is automatically connect in the constructor, but I think that dependency validation in the constructor is harmless. 
However, I know there are others who would disagree with me on this point.

Should the ping() function be here?

No, the ping function should not be there. We have already had a conversation about this in chat, but to sum it up in a sentence: the ping function is there to validate that the host is connectable; this should be done internally by the connect() method and not by a separate external API call. This is exposing part of your implementation in your interface, exactly what you are trying to avoid.

What about the actual data retrieval?

It depends. If the class contains a send() mechanism, it should also contain the retrieve() mechanism. But it may be that this should be divided up a bit more:

/**
 * Value object that just holds the connection parameters like host, port, protocol
 *
 * It may be that this is simply a concrete implementation and an interface isn't required
 */
interface ConnectionParameters
{
    // ...
}

interface Connector
{
    /**
     * Uses a parameters object to create a connection object
     *
     * @param ConnectionParameters $parameters The parameters to use
     *
     * @return Connection The created connection
     *
     * @throws \RuntimeException When the connect operation fails
     */
    public function connect(ConnectionParameters $parameters);
}

/**
 * Represents an active connection
 */
interface Connection
{
    /**
     * Get the connection parameters used to create the connection
     *
     * This is optional, but personally I believe it makes sense to carry this
     * information with the connection. Obviously in order for this to be implemented
     * the object will need to be passed in by the Connector.
     *
     * Some may say this is inviting LoD violations and that the association, if
     * required, should be carried by the consumer.
     *
     * @return ConnectionParameters Parameters used to create the connection
     */
    public function getParameters();

    /**
     * Send data from a buffer
     *
     * @param DataBuffer $buffer Buffer that holds data to send
     * @param int $length Number of bytes to send (<0: drain buffer)
     *
     * @return int Number of bytes sent
     */
    public function send(DataBuffer $buffer, $length = -1);

    /**
     * Receive data into a buffer
     *
     * @param DataBuffer $buffer Buffer to populate with received data
     * @param int $length Number of bytes to receive (<0: all pending data)
     *
     * @return int Number of bytes received
     */
    public function recv(DataBuffer $buffer, $length = -1);

    /**
     * Close the connection
     */
    public function close();
}

/**
 * Represents a store of data that can be transmitted via the connection
 *
 * You may wish to add other methods to this interface, for example an fgets()
 * equivalent. Arguably though, that might be a case for extending this interface:
 * This assumes all data is binary, you might want to have TextBuffer extends DataBuffer
 */
interface DataBuffer
{
    /**
     * Read some data from the buffer
     *
     * @param int $length Number of bytes to read (<0: drain buffer)
     *
     * @return string Data from buffer
     */
    public function read($length = -1);

    /**
     * Write some data to the buffer
     *
     * @param string $data Data to write
     * @param int $length Number of bytes to write (<0: all pending data)
     *
     * @return int Number of bytes written
     */
    public function write($data, $length = -1);
}
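Since the OP framed this as a language-agnostic design question, here is the same Connector / Connection split sketched in Python (an editor's sketch, not part of the original answer; the class names mirror the interfaces above, and the actual SSH work is stubbed out so the example runs on its own):

```python
# Minimal sketch of the Connector / Connection separation described above.
# The SSH layer is faked: connect() would normally open a session and raise on failure.
from dataclasses import dataclass


@dataclass(frozen=True)
class ConnectionParameters:
    """Value object holding the connection settings."""
    host: str
    port: int = 22


class Connection:
    """Represents an active connection; carries its parameters with it."""

    def __init__(self, parameters):
        self._parameters = parameters
        self._open = True

    @property
    def parameters(self):
        return self._parameters

    def run_command(self, command):
        if not self._open:
            raise RuntimeError("connection closed")
        # A real implementation would execute the command over SSH.
        return f"ran {command!r} on {self._parameters.host}"

    def close(self):
        self._open = False


class Connector:
    """Validates dependencies once, then hands out Connection objects."""

    def connect(self, parameters):
        # Real code would check the extension / open the session here.
        return Connection(parameters)


conn = Connector().connect(ConnectionParameters("example.org"))
print(conn.run_command("ls"))  # -> ran 'ls' on example.org
conn.close()
```

The point of the split is the same as in the PHP sketch: the parameters are an inert value object, the Connector owns the act of connecting (and its failure modes), and the Connection only exists in a usable state.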
{ "domain": "codereview.stackexchange", "id": 35010, "tags": "php, object-oriented" }
colcon build moveit2 failure because of moveit_ros_warehouse collect2 error
Question: I'm building moveit2 from this tutorial: moveit2_tutorials.picknik.ai/doc/getting_started/getting_started.html I'm on Ubuntu 20.04 and my ros2 version is foxy, which was built from debian. Rosdep returns all rosdeps installed successfully and moveit2's repos have been successfully cloned. I'm getting this when building: --- stderr: moveit_ros_warehouse /usr/bin/ld: libmoveit_warehouse.so.: undefined reference to `MD5' collect2: error: ld returned 1 exit status Here is the code from colcon build: :~/robotics/moveo_ws/ws$ colcon build --event-handlers desktop_notification- status- >--cmake-args -DCMAKE_BUILD_TYPE=Release Starting >>> geometric_shapes [Processing: geometric_shapes] [Processing: geometric_shapes] [Processing: geometric_shapes] [Processing: geometric_shapes] Finished <<< geometric_shapes [2min 1s] Starting >>> moveit_msgs Finished <<< moveit_msgs [7.18s] Starting >>> srdfdom Finished <<< srdfdom [0.52s] Starting >>> moveit_common Finished <<< moveit_common [0.27s] Starting >>> controller_manager_msgs Finished <<< controller_manager_msgs [2.03s] Starting >>> ros2_control_test_assets Finished <<< ros2_control_test_assets [0.25s] Starting >>> hardware_interface Finished <<< hardware_interface [0.98s] Starting >>> controller_interface Finished <<< controller_interface [0.66s] Starting >>> controller_manager Finished <<< controller_manager [1.18s] Starting >>> warehouse_ros Finished <<< warehouse_ros [0.45s] Starting >>> moveit_resources_panda_description Finished <<< moveit_resources_panda_description [0.26s] Starting >>> moveit_resources_panda_moveit_config Finished <<< moveit_resources_panda_moveit_config [0.28s] Starting >>> moveit_resources_fanuc_description Finished <<< moveit_resources_fanuc_description [0.26s] Starting >>> moveit_resources_fanuc_moveit_config Finished <<< moveit_resources_fanuc_moveit_config [0.27s] Starting >>> forward_command_controller Finished <<< forward_command_controller [0.63s] Starting >>> 
moveit_resources_pr2_description Finished <<< moveit_resources_pr2_description [0.30s] Starting >>> moveit_core Finished <<< moveit_core [4.59s] Starting >>> moveit_ros_occupancy_map_monitor Finished <<< moveit_ros_occupancy_map_monitor [0.48s] Starting >>> moveit_ros_planning Finished <<< moveit_ros_planning [2.06s] Starting >>> moveit_kinematics Finished <<< moveit_kinematics [0.98s] Starting >>> moveit_ros_warehouse --- stderr: moveit_ros_warehouse /usr/bin/ld: libmoveit_warehouse.so.: undefined reference to `MD5' collect2: error: ld returned 1 exit status make[2]: *** [warehouse/CMakeFiles/moveit_warehouse_broadcast.dir/build.make:346: warehouse/moveit_warehouse_broadcast] Error 1 make[1]: *** [CMakeFiles/Makefile2:189: warehouse/CMakeFiles/moveit_warehouse_broadcast.dir/all] Error 2 make: *** [Makefile:141: all] Error 2 Failed <<< moveit_ros_warehouse [1.37s, exited with code 2] Summary: 20 packages finished [2min 27s] 1 package failed: moveit_ros_warehouse 1 package had stderr output: moveit_ros_warehouse 33 packages not processed I see this and look closer at ws/build/moveit_ros_warehouse/CMakeFiles and cat CMakeError.log. 
Here is the cat: Performing C SOURCE FILE Test CMAKE_HAVE_LIBC_PTHREAD failed with the >following output: Change Dir: /home/forrest/robotics/moveo_ws/ws/build/moveit_ros_warehouse/CMakeFiles/CMakeTmp Run Build Command(s):/usr/bin/make cmTC_6c77e/fast && /usr/bin/make -f CMakeFiles/cmTC_6c77e.dir /build.make CMakeFiles/cmTC_6c77e.dir/build make[1]: Entering directory '/home/forrest/robotics/moveo_ws/ws/build/moveit_ros_warehouse/CMakeFile/CMakeTmp' Building C object CMakeFiles/cmTC_6c77e.dir/src.c.o /usr/bin/cc -DCMAKE_HAVE_LIBC_PTHREAD -o CMakeFiles/cmTC_6c77e.dir/src.c.o -c /home/forrest/robotics/moveo_ws/ws/build/moveit_ros_warehouse/CMakeFiles/CMakeTmp/src.c Linking C executable cmTC_6c77e /usr/bin/cmake -E cmake_link_script CMakeFiles/cmTC_6c77e.dir/link.txt --verbose=1 /usr/bin/cc -DCMAKE_HAVE_LIBC_PTHREAD CMakeFiles/cmTC_6c77e.dir/src.c.o -o cmTC_6c77e /usr/bin/ld: CMakeFiles/cmTC_6c77e.dir/src.c.o: in function main': src.c:(.text+0x46): undefined reference to pthread_create' /usr/bin/ld: src.c:(.text+0x52): undefined reference to pthread_detach' /usr/bin/ld: src.c:(.text+0x63): undefined reference to pthread_join' collect2: error: ld returned 1 exit status make[1]: *** [CMakeFiles/cmTC_6c77e.dir/build.make:87: cmTC_6c77e] Error 1 make[1]: Leaving directory '/home/forrest/robotics/moveo_ws/ws/build/moveit_ros_warehouse/CMakeFile/CMakeTmp' make: *** [Makefile:121: cmTC_6c77e/fast] Error 2 Source file was: #include <pthread.h> void* test_func(void* data) { return data; } int main(void) { pthread_t thread; pthread_create(&thread, NULL, test_func, NULL); pthread_detach(thread); pthread_join(thread, NULL); pthread_atfork(NULL, NULL, NULL); pthread_exit(NULL); return 0; } Determining if the function pthread_create exists in the pthreads failed with the >following output: Change Dir: /home/forrest/robotics/moveo_ws/ws/build/moveit_ros_warehouse/CMakeFiles/CMakeTmp Run Build Command(s):/usr/bin/make cmTC_9cb30/fast && /usr/bin/make -f 
>CMakeFiles/cmTC_9cb30.dir/build.make CMakeFiles/cmTC_9cb30.dir/build make[1]: Entering directory '/home/forrest/robotics/moveo_ws/ws/build/moveit_ros_warehouse/CMakeFiles/CMakeTmp' Building C object CMakeFiles/cmTC_9cb30.dir/CheckFunctionExists.c.o /usr/bin/cc -DCHECK_FUNCTION_EXISTS=pthread_create -o >CMakeFiles/cmTC_9cb30.dir/CheckFunctionExists.c.o -c /usr/share/cmake-3.16/Modules/CheckFunctionExists.c Linking C executable cmTC_9cb30 /usr/bin/cmake -E cmake_link_script CMakeFiles/cmTC_9cb30.dir/link.txt --verbose=1 /usr/bin/cc -DCHECK_FUNCTION_EXISTS=pthread_create >CMakeFiles/cmTC_9cb30.dir/CheckFunctionExists.c.o -o cmTC_9cb30 -lpthreads /usr/bin/ld: cannot find -lpthreads collect2: error: ld returned 1 exit status make[1]: *** [CMakeFiles/cmTC_9cb30.dir/build.make:87: cmTC_9cb30] Error 1 make[1]: Leaving directory '/home/forrest/robotics/moveo_ws/ws/build/moveit_ros_warehouse/CMakeFiles/CMakeTmp' make: *** [Makefile:121: cmTC_9cb30/fast] Error 2 Is this the error: /usr/bin/ld: cannot find -lpthreads? I checked online and people recommend not to use -l when working with pthreads. I'm new to c and don't know much about pthreads. I also don't know where to go about finding CMakeFiles/cmTC_9cb30.dir and changing -lpthreads to -pthread Is the error the top half that says there are 3 undefined referrences to pthread functions? Any guidance on this would be wonderfully appreciated. Originally posted by Daggrosh on ROS Answers with karma: 28 on 2021-07-03 Post score: 0 Answer: I just tried building it from source and I didn't have any error, can you share more info, which commits hash are you using for each package .? what's the output of echo $AMENT_PREFIX_PATH .? what's the output for the following command apt policy ros-foxy-warehouse-ros ros-foxy-warehouse-ros-mongo .? 
Originally posted by jafar_abdi with karma: 221 on 2021-07-05
This answer was ACCEPTED on the original site
Post score: 1

Original comments

Comment by Daggrosh on 2021-07-05: The ament prefix path output is: /opt/ros/foxy

The output for the apt policy is:

ros-foxy-warehouse-ros:
  Installed: 2.0.1-1focal.20210601.182449
  Candidate: 2.0.1-1focal.20210601.182449
  Version table:
 *** 2.0.1-1focal.20210601.182449 500
        500 http://packages.ros.org/ros2/ubuntu focal/main amd64 Packages
        100 /var/lib/dpkg/status
ros-foxy-warehouse-ros-mongo:
  Installed: 2.0.2-1focal.20210618.005428
  Candidate: 2.0.2-1focal.20210618.005428
  Version table:
 *** 2.0.2-1focal.20210618.005428 500
        500 http://packages.ros.org/ros2/ubuntu focal/main amd64 Packages
        100 /var/lib/dpkg/status

More information: I have built moveit2 up until the packages depending on moveit_ros_warehouse (which total 11 packages) using the --continue-on-error colcon command, so I have successfully installed warehouse_ros and warehouse_ros_mongo.

Comment by Daggrosh on 2021-07-05: I have since tried to build moveit2 again in a fresh ws.
Here are the exact steps, which ended with the same error:

mkdir moveit2_test
cd moveit2_test
git clone git@github.com:ros-planning/moveit2 #successful
mv moveit2 src
rosdep install -i -r --from-paths src --ignore src --rosdistro foxy -y #successful
colcon build --event-handlers desktop_notification- status- --cmake-args -DCMAKE_BUILD_TYPE=Release #successful until moveit_ros_warehouse error:

--- stderr: moveit_ros_warehouse
/usr/bin/ld: libmoveit_warehouse.so.: undefined reference to `MD5'
collect2: error: ld returned 1 exit status
make[2]: *** [warehouse/CMakeFiles/moveit_warehouse_broadcast.dir/build.make:346: warehouse/moveit_warehouse_broadcast] Error 1
make[1]: *** [CMakeFiles/Makefile2:189: warehouse/CMakeFiles/moveit_warehouse_broadcast.dir/all] Error 2
make: *** [Makefile:141: all] Error 2

I also got a stderr output for moveit_core, but it was a warning and I don't think it's a big deal

Comment by jafar_abdi on 2021-07-06: After debugging it further, I think this is due to a bug in ament_cmake where it prefers the system-installed package rather than the source one; if you remove these packages ros-foxy-warehouse-ros ros-foxy-warehouse-ros-mongo it should pass

Comment by Daggrosh on 2021-07-07: It worked! You're magic, man, thanks so much
{ "domain": "robotics.stackexchange", "id": 36640, "tags": "ros, ros2, moveit, colcon" }
Check if it is possible to get to a point that is represented by a number
Question: Problem description from an Iranian online course, translated with the help of Google Translate:

Tired of coding, Mehdi has gone on to his childhood games. But because he doesn't know who to play with, he has to change the rules of the game and play solitaire. To begin with, he wants to play solitaire "Walnut, Break Out". Mehdi is standing n cm from the wall and wants to reach the wall. To do this, he can extend his leg forward, or transverse his leg forward. The goal is for him to stretch his legs and move forward so that he can tangle with the wall at the end. But Mehdi doesn't code anymore, so you need to help him figure out how to win this game. That is, tell him to stretch his leg a few times and cross a few times to get the exact distance.

The problem is about checking whether we can get to a wall that is represented by a number, using horizontal and vertical moves. If we can, print one of the correct answers; if we can't, print -1. We have three input values: the distance to the wall \$ n \$ , the foot length \$ x\$ , and the foot width \$ y\$ . The values are restricted by $$ 1 \le n,x,y \le 100\,000 \, . $$ We have to check if \$ n \$ is a multiple of \$ x \$ plus a multiple of \$ y \$. For example if \$n = 10 \$ and \$x = 2 , y=3\$ , we can reach \$10\$ as \$2 \cdot 2 + 3 \cdot 2 \$, or \$2 \cdot 5 + 3 \cdot 0 \$.

I wrote this code for it. I get correct answers for most of the test cases except one, and 2 errors for a time limit exceeded. I am looking for an optimized and faster solution.

#include<iostream>
using namespace std;
int main()
{
    int n,x,y;
    int x1,y1;
    cin>>n>>x>>y;
    bool flag = false;
    for(int i=0;i<n;i++)
    {
        for(int j=0;j<n+1;j++)
        {
            if (x * i + j * y == n)
            {
                flag = true;
                x1=i;
                y1=j;
            }
        }
        if(flag)
            break;
    }
    if(flag == false)
        cout<<"-1";
    else
        cout<<x1<<" "<<y1;
    return 0;
}

Answer: General remarks

This using namespace std; is considered bad practice, see for example Why is “using namespace std” considered bad practice?
on Stack Overflow.

Consistent indenting and spacing increases the legibility of the code.

Use curly braces for if/else blocks even if they consist only of a single statement.

Enable all compiler warnings and fix them, such as

std::cout<<x1<<" "<<y1; // Variable 'x1' may be uninitialized when used here
                        // Variable 'y1' may be uninitialized when used here

Choose better variable names: bool flag = false; does not indicate what the flag is used for.

Testing boolean values: This may be opinion-based, but I prefer

if (!flag) { ... }

over

if (flag == false) { ... }

The return statement in main() is optional, and can be omitted.

Program structure

Separating the actual computation from the I/O makes the main method short, increases the clarity of the program, and allows you to add unit tests easily. In addition, you can “early return” from the function if a solution is found, so that the flag, x1, y1 variables become obsolete. As of C++17 you can return an optional which contains a value (the solution as a pair) or not. With these suggestions, the program could look like this:

#include <iostream>
#include <optional>

std::optional<std::pair<int, int>> solveSteps(int x, int y, int n)
{
    for (int i = 0; i <= n; i++) {
        for (int j = 0; j <= n; j++) {
            if (x * i + j * y == n) {
                // Return solution:
                return std::make_optional(std::make_pair(i, j));
            }
        }
    }
    // No solution found:
    return std::nullopt;
}

int main()
{
    int n, x, y;
    std::cin >> n >> x >> y;

    auto solution = solveSteps(x, y, n);
    if (solution) {
        std::cout << solution->first << " " << solution->second << "\n";
    } else {
        std::cout << "-1\n";
    }
}

Increasing the performance

First you can increase i and j in steps of x and y, respectively. That reduces the number of iterations and saves the multiplications:

std::optional<std::pair<int, int>> solveSteps(int x, int y, int n)
{
    for (int i = 0; i <= n; i += x) {
        for (int j = 0; j <= n; j += y) {
            if (i + j == n) {
                // Return solution:
                return std::make_optional(std::make_pair(i/x, j/y));
            }
        }
    }
    // No solution found:
    return std::nullopt;
}

The next improvement is to get rid of the inner loop: After moving i steps of width x you only have to check if the remaining distance is a multiple of y:

std::optional<std::pair<int, int>> solveSteps(int x, int y, int n)
{
    for (int i = 0; i <= n; i += x) {
        if ((n - i) % y == 0) {
            // Return solution:
            return std::make_optional(std::make_pair(i/x, (n-i)/y));
        }
    }
    // No solution found:
    return std::nullopt;
}

Another improvement would be to check if y > x. In that case it is more efficient to iterate in steps of width y and check if the remaining distance is a multiple of x.

Mathematics

Some final remarks on how this can be solved mathematically, with links for further reading. What you are looking for is a solution \$ (i, j) \$ to the equation $$ n = i x + j y $$ with non-negative integers \$ i, j \$. This is related to Bézout's identity. In particular, a solution can only exist if \$ n \$ is a multiple of the greatest common divisor \$ \gcd(x, y) \$, which is efficiently determined with the euclidean algorithm. In that case it is easy to check if a solution with non-negative numbers exists, compare e.g. Finding positive Bézout coefficients on Mathematics Stack Exchange.
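To illustrate the mathematical remark, here is a quick sketch (in Python for brevity; an editor's addition, not part of the original answer) combining the gcd precheck with the single-loop search from the last C++ version:

```python
from math import gcd


def solve_steps(x, y, n):
    """Return (i, j) with i*x + j*y == n and i, j >= 0, or None if impossible."""
    # By Bezout's identity, a solution can only exist when gcd(x, y) divides n.
    # The precheck is necessary but not sufficient (non-negativity is not
    # guaranteed), so the loop below still makes the final decision.
    if n % gcd(x, y) != 0:
        return None
    # Walk distances i = 0, x, 2x, ... and check the remainder against y.
    for i in range(0, n + 1, x):
        if (n - i) % y == 0:
            return (i // x, (n - i) // y)
    return None


print(solve_steps(2, 3, 10))  # -> (2, 2), i.e. 2*2 + 2*3 == 10
```

For inputs where gcd(x, y) does not divide n, the function returns immediately without scanning, which is exactly the shortcut the Bézout remark above buys you.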
{ "domain": "codereview.stackexchange", "id": 35981, "tags": "c++, algorithm, programming-challenge, time-limit-exceeded, search" }
ROS Answers SE migration: ROS Real Time
Question: Hi everybody. I just controlled two motors using ROS, however it is not a real time process. I am just wondering what is the best easy way to make a pthread that runs in real time, which controls the motors and can easily send messages to other ROS nodes. Any help will be appreciated.

Originally posted by acp on ROS Answers with karma: 556 on 2012-05-23
Post score: 1

Answer: Do you really need realtime control? At what rate are you trying to control your motors? As Bence said, a good architecture for doing realtime control in a ROS environment is OROCOS/RTT. If you just care about publishing from your realtime loops, there is a realtime publisher in the realtime_tools package (http://www.ros.org/wiki/realtime_tools), but I've never used these. If you create an RTT component, you can make a given input/output port use some ROS message type, and then use the tools in the rtt_ros_integration package to connect that port up to a ROS topic ( http://www.ros.org/wiki/rtt_ros_integration ). The default way to interact with an RTT component is through the OCL Deployer interface, and you can write interpreted scripts for this interface that specify how to connect RTT ports up to ROS topics. The line of script to connect an RTT port to a ROS topic looks something like this:

stream("YourComponentName.YourRTTPortName", ros.topic("/topic_name"))

Originally posted by jbohren with karma: 5809 on 2012-05-25
This answer was ACCEPTED on the original site
Post score: 5

Original comments

Comment by acp on 2012-05-28: Hi, in advance thank you for your reply. Yes, I need real time control, however I have some questions. 1) Do I need to install a real time tool, for instance RTAI, and then use this tool to create an RTT component? 2) What do you think about Boost?
Comment by acp on 2012-05-28: I am a bit confused, I have installed orocos_toolchain_ros which contains rtt_ros_integration, is that enough to create an RTT component?

Comment by jbohren on 2012-05-29: If you need realtime control, then you need a kernel that supports some sort of realtime scheduling. If you don't have a realtime kernel, then any RTT components that you build will not have realtime precision.

Comment by acp on 2012-06-05: http://www.orocos.org/wiki/orocos/ I recommend reading the RTT wiki and toolchain pages and following the examples in http://www.orocos.org/wiki/orocos/toolchain/getting-started/toolchain-tutorials (RTT 2.x Exercises) to get a good insight into RTT components and real time processes
{ "domain": "robotics.stackexchange", "id": 9522, "tags": "ros, real-time, multi-thread" }
Angular momentum with respect to the centre of mass
Question: I have been told [Warning: I leave this because it's what I asked and allows one to understand the dialogues in the comments, but Azad, whom I thank, has pointed out that the formula does not hold in general in the form it is expressed] that the angular momentum of a rigid body with respect to any point $P$ can always be expressed as $$\mathbf{L}_{P}=\mathbf{r}_{cm}\times M\mathbf{v}_{cm}+\big(\sum_im_iR_i^2\big)\boldsymbol{\omega}$$ where $\mathbf{r}_{cm}$ is the position of the centre of mass with respect to $P$, $M$ the mass of the body, $R_i$ the distance of the $i$-th point, having mass $m_i$, composing the body, and $\sum_im_iR_i^2=I$ its moment of inertia with respect to the instantaneous axis of the rotation around the centre of mass of angular velocity $\boldsymbol{\omega}$. I know that the velocity $\mathbf{v}_i$ of each point $P_i$, having mass $m_i$, of a rigid body of mass $M$ can be seen as the sum of a translation velocity of one of its points $C$ plus a rotation velocity around that point: $\mathbf{v}_i=\mathbf{v}_{C}+\boldsymbol{\omega}\times\overrightarrow{CP_i}$. If we choose $C$ as the centre of mass I see that $$\mathbf{L}_{cm}=\sum_i \overrightarrow{CP_i}\times m_i\mathbf{v}_{i}=\sum_i \overrightarrow{CP_i}\times m_i\mathbf{v}_{cm}+\sum_i \overrightarrow{CP_i}\times m_i(\boldsymbol{\omega}\times\overrightarrow{CP_i})$$$$=\sum_i \overrightarrow{CP_i}\times m_i(\boldsymbol{\omega}\times\overrightarrow{CP_i}) $$because, if I am not wrong, $\sum_i \overrightarrow{CP_i}\times m_i\mathbf{v}_C=(\sum_i m_i\overrightarrow{CP_i})\times\mathbf{v}_C=\mathbf{0}$ since $\sum_i m_i\overrightarrow{CP_i}$ is $M$ times the position of the centre of mass with respect to itself, which is $\mathbf{0}$. How can it be proved that $\sum_i \overrightarrow{CP_i}\times m_i(\boldsymbol{\omega}\times\overrightarrow{CP_i})=(\sum_im_iR_i^2)\boldsymbol{\omega}$? I have searched a lot on the Internet and in books, but I find nothing.
To give some background of mine, I have studied nothing of analytical mechanics. I find the formula very, very interesting both in itself and because, if the moment of inertia does not depend upon time, $\forall t\quad I(t)= I(t_0)$, the above expression can be differentiated to get the formula of the resultant torque with respect to the centre of mass $\sum\boldsymbol{\tau}_{cm}=\frac{d\mathbf{L}_{cm}}{dt}=I\boldsymbol{\alpha}_{cm}$ where $\boldsymbol{\alpha}$ is the angular acceleration around the centre of mass. I heartily thank you for any answer! Some unfruitful trials: by using the "BAC CAB identity" as suggested by Azad, whom I heartily thank, $\mathbf{a}\times(\mathbf{b}\times\mathbf{c})=(\mathbf{a}\cdot\mathbf{c})\mathbf{b}-(\mathbf{a}\cdot\mathbf{b})\mathbf{c}$, I can see that$$\sum_i \overrightarrow{CP_i}\times m_i(\boldsymbol{\omega}\times\overrightarrow{CP_i})=\sum_im_i\|\overrightarrow{CP_i}\|^2\boldsymbol{\omega}-m_i(\overrightarrow{CP_i}\cdot\boldsymbol{\omega})\overrightarrow{CP_i}$$which, by decomposing every $\overrightarrow{CP_i}$ into an axial component $\mathbf{A}_i$ and a radial component $\mathbf{R}_i$, whose norms respectively are $A_i$ and $R_i$, with $R_i$ as the distance from $i$ to the axis of rotation, becomes $$\sum_im_iR_i^2\boldsymbol{\omega}+\sum_i m_i A_i^2\boldsymbol{\omega}-m_i(\mathbf{A}_i\cdot\boldsymbol{\omega})\overrightarrow{CP_i}$$but I cannot prove that $\sum_i m_i A_i^2\boldsymbol{\omega}-m_i(\mathbf{A}_i\cdot\boldsymbol{\omega})\overrightarrow{CP_i}=\mathbf{0}$. Answer: I think you are overcomplicating this. Consider an arbitrary point P moving with linear speed $\mathbf{v}_A$. Linear momentum is $$\mathbf{P} = m \mathbf{v}_{cm}$$ Angular momentum at the center of mass is $$\mathbf{L}_{cm} = I_{cm} \mathbf{\omega}$$ Linear velocity of the center of mass is $$\mathbf{v}_{cm} = \mathbf{v}_A + \mathbf{\omega} \times \mathbf{r}_{cm}$$ where $\mathbf{r}_{cm}$ is the location of the center of mass relative to A. 
Linear momentum in terms of the motion of A is $$\mathbf{P} = m (\mathbf{v}_A + \mathbf{\omega} \times \mathbf{r}_{cm})$$ $$\boxed{ \mathbf{P} = m \mathbf{v}_A - m \mathbf{r}_{cm} \times \mathbf{\omega} }$$ Angular momentum at A is $$\mathbf{L}_A =\mathbf{L}_{cm} +\mathbf{r}_{cm} \times \mathbf{P}$$ which is expanded as $$\mathbf{L}_A =I_{cm} \mathbf{\omega} +\mathbf{r}_{cm} \times m \mathbf{v}_{cm} = I_{cm} \mathbf{\omega} +\mathbf{r}_{cm} \times m (\mathbf{v}_A + \mathbf{\omega} \times \mathbf{r}_{cm}) $$ $$\boxed{ \mathbf{L}_A = I_{cm} \mathbf{\omega} - m\, \mathbf{r}_{cm} \times (\mathbf{r}_{cm} \times \mathbf{\omega}) + m\, \mathbf{r}_{cm} \times \mathbf{v}_{A} }$$ Combined, the spatial momentum at A yields the 6×6 spatial inertia matrix at A $$ \hat{\ell}_A = I_A \hat{v}_A $$ $$ \begin{Bmatrix} \mathbf{P} \\ \mathbf{L}_A \end{Bmatrix} = \begin{bmatrix} m & -m [\mathbf{r}_{cm}\times] \\ m [\mathbf{r}_{cm}\times] & I_{cm}-m\,[\mathbf{r}_{cm}\times][\mathbf{r}_{cm}\times] \end{bmatrix} \begin{Bmatrix}\mathbf{v}_{A} \\ \mathbf{\omega} \end{Bmatrix}$$ NOTE: For the weird $[\mathbf{r}\times]$ notation that seems to be missing a vector see What is the Vector/Cross Product? The mass moment of inertia at A is thus defined as $$I_A = I_{cm}-m\,[\mathbf{r}_{cm}\times][\mathbf{r}_{cm}\times]$$ This is a vector representation of the parallel axis theorem. Finally you need to differentiate the momentum expressions to arrive at the 6 Newton-Euler equations of motion (See https://physics.stackexchange.com/a/80449/392)
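As a numerical aside (an editor's addition, not part of the original answer): for the special case the question singles out — all points in a plane through $C$ perpendicular to $\boldsymbol{\omega}$, so the axial components $\mathbf{A}_i$ vanish — the identity $\sum_i \overrightarrow{CP_i}\times m_i(\boldsymbol{\omega}\times\overrightarrow{CP_i})=(\sum_i m_iR_i^2)\boldsymbol{\omega}$ can be checked directly in a few lines of plain Python (points and masses below are arbitrary made-up values):

```python
# Check that sum_i m_i r_i x (w x r_i) == (sum_i m_i R_i^2) w for a planar set
# of points perpendicular to w (here w is along z), where the axial terms of
# the BAC-CAB expansion vanish identically.

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

w = (0.0, 0.0, 3.0)                                   # angular velocity along z
points = [(1.0, 2.0, 0.0), (-0.5, 0.7, 0.0), (2.0, -1.0, 0.0)]
masses = [1.5, 2.0, 0.25]

L = (0.0, 0.0, 0.0)
for m, r in zip(masses, points):
    term = cross(r, cross(w, r))                      # r x (w x r)
    L = tuple(a + m * b for a, b in zip(L, term))

I = sum(m * (r[0]**2 + r[1]**2) for m, r in zip(masses, points))  # sum m R^2
expected = tuple(I * c for c in w)
assert all(abs(a - b) < 1e-12 for a, b in zip(L, expected))
```

For a general (non-planar) body the leftover term $-\sum_i m_i(\mathbf{A}_i\cdot\boldsymbol{\omega})\overrightarrow{CP_i}$ does not vanish, which is exactly why the formula in the question only holds in special cases.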
{ "domain": "physics.stackexchange", "id": 22007, "tags": "newtonian-mechanics, rotational-dynamics, rigid-body-dynamics" }
ASCII Paint Bucket
Question: In MS Paint, if we choose the paint bucket and click on a certain spot, it gets filled with a new chosen color along with its neighboring pixels that are the same color until it reaches certain limitations such as a different color. This program, using recursion, does the same thing except to a flat ASCII surface:

xxxx                  xxxx
0000                  0000
0xx0  ---> 2, 2, p -> 0pp0
xxxx                  pppp

And here's the code in question:

def findchar(pattern, posx, posy):
    pattern_list = pattern.splitlines()
    return pattern_list[posy][posx]

def fill(pattern, posx, posy, char):
    oldchar = findchar(pattern, posx, posy)
    pattern_list = pattern.splitlines()
    line_split = list(pattern_list[posy])
    line_split[posx] = char
    pattern_list[posy] = ''.join(line_split)
    new_pattern = '\n'.join(pattern_list)
    if posx >= 0 and posx+1 < len(pattern_list[0]) and posy >= 0 and posy+1 < len(pattern_list):
        for i in [-1, 0, 1]:
            if pattern_list[posy+i][posx+1] == oldchar:
                new_pattern = fill(new_pattern, posx+1, posy+i, char)
            elif pattern_list[posy+i][posx-1] == oldchar:
                new_pattern = fill(new_pattern, posx-1, posy+i, char)
            elif pattern_list[posy+1][posx+i] == oldchar:
                new_pattern = fill(new_pattern, posx+i, posy+1, char)
            elif pattern_list[posy-1][posx+i] == oldchar:
                new_pattern = fill(new_pattern, posx+i, posy-1, char)
    return new_pattern

print(fill("xxxx\n0000\n0xx0\nxxxx", 2, 2, 'p'))

Thoughts?

Answer: I would also suggest doing the conversion to a list once at the beginning and back to a string at the end. In addition I would suggest using a different algorithm. Your algorithm will fail if the image becomes too big (where too big, for a usual setup, is when the number of cells to fill exceeds 1000, the default recursion limit of Python).
You can easily write this as an iterative algorithm in this way:

def flood_fill(image, x, y, replace_value):
    image = [list(line) for line in image.split('\n')]
    width, height = len(image[0]), len(image)
    to_replace = image[y][x]
    to_fill = set()
    to_fill.add((x, y))
    while to_fill:
        x, y = to_fill.pop()
        if not (0 <= x < width and 0 <= y < height):
            continue
        value = image[y][x]
        if value != to_replace:
            continue
        image[y][x] = replace_value
        to_fill.add((x-1, y))
        to_fill.add((x+1, y))
        to_fill.add((x, y-1))
        to_fill.add((x, y+1))
    return '\n'.join(''.join(line) for line in image)

This uses a set to hold all points which need to be replaced by the char, adding all adjacent points to the set if a point was replaced. It loops and processes each point in the set until it is empty.
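For completeness, here is a quick self-contained check of the iterative approach (an editor's addition; the function below is a lightly condensed copy of the one above so the snippet runs on its own). It reproduces the expected fill from the question, and a 100×100 image — 10,000 cells, well past Python's default recursion limit of 1000 frames — fills without trouble:

```python
def flood_fill(image, x, y, replace_value):
    # Convert once to a list of lists, fill iteratively, convert back once.
    image = [list(line) for line in image.split('\n')]
    width, height = len(image[0]), len(image)
    to_replace = image[y][x]
    to_fill = {(x, y)}
    while to_fill:
        x, y = to_fill.pop()
        if not (0 <= x < width and 0 <= y < height):
            continue
        if image[y][x] != to_replace:
            continue
        image[y][x] = replace_value
        to_fill.update([(x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)])
    return '\n'.join(''.join(line) for line in image)


# The example from the question: fill at (2, 2) with 'p'.
print(flood_fill("xxxx\n0000\n0xx0\nxxxx", 2, 2, 'p'))

# A 100x100 single-color region would overflow the recursive version,
# but the iterative one handles it fine.
big = '\n'.join('x' * 100 for _ in range(100))
filled = flood_fill(big, 0, 0, 'p')
assert filled == '\n'.join('p' * 100 for _ in range(100))
```

Note that the row of 'x' at the top of the example image is not touched: it is separated from the clicked region by the row of '0', which is exactly the behavior a paint-bucket fill should have.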
{ "domain": "codereview.stackexchange", "id": 21623, "tags": "python, algorithm, recursion, ascii-art" }
JSON Input validator and Parser
Question: There are probably a lot better ways to do it, but take it as a learning exercise. Basically below is the JSON InputValidation and parsing using nlohmann::json which takes expected fields, objects arrays and verifies its presence and (optionally) parses them into an appropriate c++ structure. inputvalidation.hpp: namespace iv { template<typename _Tp> class Field; template<typename... _Ts> class Object; template<typename _Tp> class Array; template<typename _Old, typename _New> class Deprecated; namespace detail { template<class _Tp, template<class...> class Template> struct is_specialization : ::std::false_type {}; template<template<class...> class Template, class... Args> struct is_specialization<Template<Args...>, Template> : ::std::true_type {}; template<typename _Tp> struct remove_opt { using type = _Tp; }; template<typename _Tp> struct remove_opt<::std::optional<_Tp>> { using type = _Tp; }; template<typename _Tp> using remove_opt_t = typename remove_opt<_Tp>::type; template<typename _Tp> using decay_t = ::std::decay_t<remove_opt_t<_Tp>>; #define _CONSTEVAL constexpr template<typename _pack, std::size_t N> _CONSTEVAL std::size_t elem_size(std::size_t& ref, std::array<std::size_t, std::tuple_size_v<_pack>>& offsets) noexcept { using _Tp = std::conditional_t< is_specialization<std::tuple_element_t<N, _pack>, std::optional>{}, std::optional<typename decay_t<std::tuple_element_t<N, _pack>>::value_type>, typename decay_t<std::tuple_element_t<N, _pack>>::value_type>; while (ref % alignof(_Tp) != 0) ++ref; offsets[N] = ref; ref += sizeof(_Tp); return alignof(_Tp); } template<typename _pack, typename std::size_t... 
Indices> _CONSTEVAL const std::tuple< const size_t, const size_t, const std::array<std::size_t, std::tuple_size_v<_pack>>> structure_type_helper(std::index_sequence<Indices...>) { std::size_t size = 0; std::array<std::size_t, std::tuple_size_v<_pack>> offsets = {}; auto pad = (elem_size<_pack, Indices>(size, offsets) | ...); std::size_t padding = 1; while (pad >>= 1) padding *= 2; return std::make_tuple(size, padding, offsets); } template<typename _Tp> struct structure_type { static constexpr const auto _storage = structure_type_helper<_Tp>(std::make_index_sequence<std::tuple_size_v<_Tp>>()); using type = typename std::aligned_storage_t<std::get<0>(_storage), std::get<1>(_storage)>; static constexpr const std::array<std::size_t, std::tuple_size_v<_Tp>>& offsets = std::get<2>(_storage); }; template<typename _Tp> using structure_type_t = typename structure_type<_Tp>::type; #undef _CONSTEVAL template<typename _pack, typename std::size_t... Indices> inline bool typeCheck(const nlohmann::json& j, const _pack& tuple, std::index_sequence<Indices...>) noexcept; template<typename _pack, typename std::size_t... Indices> inline void fromTuple(const _pack& tuple, const nlohmann::json& j, uint8_t* where, std::index_sequence<Indices...>); } template<typename _Tp> class Field { static_assert(!std::is_reference_v<_Tp> && !std::is_pointer_v<_Tp>, "Field type can not have a reference or a pointer type"); static_assert(!detail::is_specialization<_Tp, Field>{}, "Field type can not have field as a value type"); public: using value_type = _Tp; using comparator_type = bool(const value_type&); constexpr Field() = default; constexpr explicit Field(const char* tp) : _name(tp) {} constexpr explicit Field(const char* tp, comparator_type f) : _name(tp), _comp(f) {} bool check(const nlohmann::json& j) const noexcept { try { auto value = j.get<value_type>(); if (_comp) { return _comp(value); } return true; } catch (...) 
{ return false; } } value_type parse(const nlohmann::json& j) const { return j.get<value_type>(); } constexpr const char* name() const noexcept { return _name; } private: const char* _name = nullptr; comparator_type* _comp = nullptr; }; template<typename... _Ts> class Object { static_assert(sizeof...(_Ts), "Object must have at least one field"); public: using tuple_type = std::tuple<_Ts...>; using value_type = typename detail::structure_type_t<tuple_type>; constexpr Object() = default; constexpr explicit Object(const char* tp, tuple_type&& fields) : _name(tp), _pack(std::move(fields)) {} constexpr explicit Object(const char* tp, const Object& ref) : _name(tp), _pack(ref._pack) {} bool check(const nlohmann::json& j) const noexcept { if (j.is_object() != true) { return false; } if constexpr (sizeof...(_Ts) != 0) { return detail::typeCheck(j, _pack, std::make_index_sequence<std::tuple_size_v<tuple_type>>()); } } value_type parse(const nlohmann::json& j) const { value_type storage; uint8_t* ptr = reinterpret_cast<uint8_t*>(&storage); detail::fromTuple(_pack, j, ptr, std::make_index_sequence<std::tuple_size_v<tuple_type>>()); return storage; } constexpr const char * name() const noexcept { return _name; } constexpr const tuple_type& pack() const noexcept { return _pack; } private: const char* _name = nullptr; tuple_type _pack; }; template<typename _Tp> class Array { static_assert(!std::is_reference_v<_Tp> && !std::is_pointer_v<_Tp>, "Can not create an array of pointers or references"); static_assert(!detail::is_specialization<_Tp, std::optional>{}, "Can not create an array of optionals"); public: using value_type = std::vector<typename _Tp::value_type>; constexpr Array() = default; constexpr explicit Array(const char* tp) : _name(tp) {} constexpr explicit Array(const char* tp, std::size_t limit) : _name(tp), _lim(limit) {} constexpr explicit Array(const char* tp, const _Tp& check, std::size_t limit = 0) : _name(tp), _comp(check), _lim(limit) {} bool check(const 
nlohmann::json& j) const noexcept { if (j.is_array() != true) { return false; } if (_lim && j.size() > _lim) { return false; } for (const auto& elem : j) { if (_comp.check(elem) != true) { return false; } } return true; } value_type parse(const nlohmann::json& j) const { value_type ret; ret.reserve(16); for (const auto& elem : j) { ret.push_back(_comp.parse(elem)); } return ret; } constexpr const char * name() const noexcept { return _name; } constexpr std::size_t limit() const noexcept { return _lim; } private: const char* _name = nullptr; _Tp _comp; std::size_t _lim = 0; }; template<typename _Old, typename _New> class Deprecated { static_assert(!detail::is_specialization<_Old, Deprecated>{} && !detail::is_specialization<_New, Deprecated>{}, "Deprecation of deprecated type is not allowed"); public: using depr_type = _Old; using new_type = _New; using value_type = std::variant<typename depr_type::value_type, typename new_type::value_type>; constexpr Deprecated() = default; constexpr explicit Deprecated(_Old&& depr, _New&& replacement) : _old(depr), _new(replacement) {} bool check(const nlohmann::json& j) const noexcept { return _new.check(j) || _old.check(j); } value_type parse(const nlohmann::json& j) const { return _new.check(j) ? _new.parse(j) : _old.parse(j); } constexpr const char * name() const noexcept { return _new.name(); } _Old _old; _New _new; }; namespace detail { #define _RUNTIME inline template<std::size_t N, class... _Ts> _RUNTIME const decay_t<std::tuple_element_t<N, std::tuple<_Ts...>>>& getVal(const std::tuple<_Ts...>& tuple) noexcept { if constexpr (is_specialization<std::decay_t<std::tuple_element_t<N, std::tuple<_Ts...>>>, std::optional>{}) { return std::get<N>(tuple).value(); } else { return std::get<N>(tuple); } } template<std::size_t N, class... 
_Ts> _RUNTIME bool typeCheckHelper(const nlohmann::json& j, const std::tuple<_Ts...>& tuple) noexcept { auto it = j.find(getVal<N>(tuple).name()); if (it == j.end() || it->is_null()) // element not found { if constexpr (is_specialization<std::decay_t<std::tuple_element_t<N, std::tuple<_Ts...>>>, std::optional>{}) { return true; } //TODO: Handle error - field not found return false; } if (getVal<N>(tuple).check(*it) == false) { //TODO: handle error - invalid field type return false; } return true; } template<typename _pack, typename std::size_t... Indices> _RUNTIME bool typeCheck(const nlohmann::json& j, const _pack& tuple, std::index_sequence<Indices...>) noexcept { return (typeCheckHelper<Indices>(j, tuple) && ...); } template<typename _Tp> _RUNTIME const decay_t<_Tp>& getVal(const _Tp& ref) { if constexpr (is_specialization<std::decay_t<_Tp>, std::optional>{}) { return ref.value(); } else { return ref; } } template<typename _Tp> _RUNTIME void fromTupleImpl(_Tp&& element, const nlohmann::json& data, uint8_t* where) { using _Ty = std::conditional_t< is_specialization<_Tp, std::optional>{}, std::optional<typename decay_t<_Tp>::value_type>, typename decay_t<_Tp>::value_type>; new (where) _Ty(getVal(element).parse(data[getVal(element).name()])); } template<typename _pack, typename std::size_t... Indices> _RUNTIME void fromTuple(const _pack& tuple, const nlohmann::json& j, uint8_t* where, std::index_sequence<Indices...>) { ((void)fromTupleImpl(std::get<Indices>(tuple), j, where + structure_type<_pack>::offsets[Indices]), ...); } #undef _RUNTIME } template<typename... _Ts> constexpr Object<_Ts...> make_object(const char* name, _Ts&& ...args) { return Object<_Ts...>{name, std::make_tuple(std::forward<decltype(args)>(args)...)}; } template<typename... _Ts> constexpr std::optional<Object<_Ts...>> make_nullable_object(const char* name, _Ts&& ...args) { return Object<_Ts...>{name, std::make_tuple(std::forward<decltype(args)>(args)...)}; } template<typename _Tp, typename... 
_Ts> constexpr _Tp get(const Object<_Ts...>& ref, const nlohmann::json& j)
    {
        static_assert(alignof(detail::structure_type_t<std::tuple<_Ts...>>) == alignof(_Tp) &&
                      sizeof(detail::structure_type_t<std::tuple<_Ts...>>) == sizeof(_Tp),
                      "Invalidly calculated structure alignment and/or size.");
        auto _storage = ref.parse(j);
        return *reinterpret_cast<_Tp*>(&_storage);
    }
    }

Usage:

    // this is 'read' from the file
    nlohmann::json j;
    j["first"] = 1;
    j["second"] = "string";
    j["third"]["subfield1"] = "asdf";
    j["third"]["subfield2"] = 1954;
    j["third"]["subfield3"].push_back(1);
    j["third"]["subfield3"].push_back(8);
    j["third"]["subfield3"].push_back(27);

    // structure metadata - tell the validator what you expect in the JSON
    auto obj = make_object("",
        Field<int>{"first"},
        Field<std::string>{"second"},
        make_object("third",
            Field<std::string>{"subfield1"},
            Field<int>{"subfield2"},
            Array<Field<double>>{"subfield3"}
        )
    );

    // create a structure that reflects the JSON layout
    struct s1 {
        int a;
        std::string b;
        struct {
            std::string a;
            int b;
            std::vector<double> c;
        } c;
    };

    // verify that it has everything you're expecting and parse it
    if (obj.check(j)) {
        s1 s = get<s1>(obj, j);
        // do whatever you want with the structure
    }

You can also have an array of objects if you want. Go ahead and experiment if you want.

Side note: At the moment, having a std::vector of a structure containing a std::string has unexpected effects when accessing the string on clang and gcc. It works with MSVC, though. I don't know what the problem is, unfortunately; I've tracked it to the std::vector itself so far.

Answer: Observation

I don't really have much to say on this code. It looks good. If this was at work (and it had unit tests) I would say it is fine to check in. The comments below are very minor.

Code Review

Please stop using the leading underscore. Identifiers with a leading underscore are usually reserved.
The rules are not obvious (you break them), but because they are not obvious you should avoid putting the _ at the beginning of an identifier. Note: at the end is fine. See: What are the rules about using an underscore in a C++ identifier?

I very rarely see the leading :: used to specify an absolute namespace:

    ::std::false_type

Sure, that works.

Good use of template metaprogramming.

Not sure I like these:

    #define _CONSTEVAL constexpr
    #define _RUNTIME inline

Since they are always defined, why have them at all? Also, inside the class you don't need inline; it's redundant when used in the class. The general rule is: don't use it unless you must. The only time you must is for out-of-class definitions in the header file.

I find this hard to read:

    using _Tp = std::conditional_t<
        is_specialization<std::tuple_element_t<N, _pack>, std::optional>{},
        std::optional<typename decay_t<std::tuple_element_t<N, _pack>>::value_type>,
        typename decay_t<std::tuple_element_t<N, _pack>>::value_type>;

When I build types I do it over a couple of lines so it is easy to read (by the next person to look at the code):

    using NthElement       = std::tuple_element_t<N, _pack>;
    using DecayNthElement  = typename decay_t<NthElement>::value_type;
    using IsSpecNthElement = is_specialization<NthElement, std::optional>;

    using Type = std::conditional_t<
        IsSpecNthElement{},
        std::optional<DecayNthElement>,
        DecayNthElement
    >;

I would simplify this:

    if (_comp) {
        return _comp(value);
    }
    return true;

    // This is just as easy.
    // But now I think about it, yours is fine.
    return _comp ? _comp(value) : true;
{ "domain": "codereview.stackexchange", "id": 38477, "tags": "c++, json, c++17" }
RViz displays "No messages received", what could be the reason?
Question: Hi there, I have code that goes like this and is very similar to an example:

    #include <ros/ros.h>
    //#include <sensor_msgs/PointCloud2.h>
    #include <pcl/ros/conversions.h>
    #include <pcl_ros/point_cloud.h>
    #include <pcl/point_types.h>
    #include <iostream>

    using namespace std;

    ros::Publisher pub;

    void cloud_cb (const sensor_msgs::PointCloud2ConstPtr& input){
        float x=0,y=0,z=0;
        pcl::PointCloud<pcl::PointXYZ>::Ptr msg (new pcl::PointCloud<pcl::PointXYZ>);
        msg->header.frame_id = "some_frame";
        msg->height = msg->width = 1;
        msg->points.push_back (pcl::PointXYZ(x, y, z));
        pub.publish (msg);
    }

    int main (int argc, char** argv){
        ros::init (argc, argv, "some_init");
        ros::NodeHandle nh;
        ros::Subscriber sub = nh.subscribe ("output", 100, cloud_cb);
        pub = nh.advertise<sensor_msgs::PointCloud2> ("output", 100);
        ros::Rate loop_rate(1);
        while (nh.ok()){
            ros::spin ();
            loop_rate.sleep ();
        }
    }

Don't be concerned about the function's input argument. I just need this later, as soon as I am able to process some incoming data, publish it and then visualize it with RViz. So far, I'd like to see one single point when I'm using RViz. However, it indicates "No messages received". Does anyone know what the fault is? Kevin

Originally posted by tordalk on ROS Answers with karma: 1 on 2012-10-11. Post score: 0

Original comments

Comment by Lorenz on 2012-10-11: If possible, could you please edit your post and add the complete source code? As far as I can see, everything looks ok.

Comment by tordalk on 2012-10-12: Hello Martin, thanks for your reply. I finally got your point with the triggering. Unfortunately, when I run your code a problem with the transformation appears, stating: Transform [sender=/some_init] For frame [some_frame]: Frame [some_frame] does not exist. How can I fix that?

Comment by Lorenz on 2012-10-12: Don't use the frame id some_frame. That was just an example. Instead, use something that is present in your tf tree, for instance input->header.frame_id.
Also, don't forget to set the stamp, e.g.: msg->header.stamp = input->header.stamp Answer: Hi Kevin, I think that the problem is that you are subscribing to the topic "output" and then you are publishing to the topic "output" again. But, you publish to the topic "output" inside the callback of the subscriber to the topic "output" (cloud_cb). So, unless there is an external source pouring data into the topic "output", you will never trigger the callback (cloud_cb) and therefore, you will never publish anything. It is like a dog trying to bite its own tail :P Even if you have an external source of content for the topic "output", I have never tried this kind of scenario, so I don't really know what behavior to expect. If you trigger the callback with a single external message to the topic "output" then you will get what you want, but maybe if there is an external continuous source of data for the topic "output" the amount of messages published and read again would grow exponentially? Not sure about this. 
Probably this would work:

    #include <ros/ros.h>
    //#include <sensor_msgs/PointCloud2.h>
    #include <pcl/ros/conversions.h>
    #include <pcl_ros/point_cloud.h>
    #include <pcl/point_types.h>
    #include <iostream>

    using namespace std;

    ros::Publisher pub;

    void cloud_cb (const sensor_msgs::PointCloud2ConstPtr& input){
        float x=0,y=0,z=0;
        pcl::PointCloud<pcl::PointXYZ>::Ptr msg (new pcl::PointCloud<pcl::PointXYZ>);
        msg->header.frame_id = "some_frame";
        msg->height = msg->width = 1;
        msg->points.push_back (pcl::PointXYZ(x, y, z));
        pub.publish (msg);
    }

    int main (int argc, char** argv){
        ros::init (argc, argv, "some_init");
        ros::NodeHandle nh;
        ros::Subscriber sub = nh.subscribe ("output", 100, cloud_cb);
        pub = nh.advertise<sensor_msgs::PointCloud2> ("output", 100);

        //Publish the first message so we can trigger cloud_cb for the first time
        float x=0,y=0,z=0;
        pcl::PointCloud<pcl::PointXYZ>::Ptr msg (new pcl::PointCloud<pcl::PointXYZ>);
        msg->header.frame_id = "some_frame";
        msg->height = msg->width = 1;
        msg->points.push_back (pcl::PointXYZ(x, y, z));
        pub.publish (msg);

        ros::spin ();
    }

Originally posted by Martin Peris with karma: 5625 on 2012-10-11. This answer was ACCEPTED on the original site. Post score: 0

Original comments

Comment by Lorenz on 2012-10-12: Small side note: you don't need to put ros::spin inside a loop. It won't return until the node is shut down anyway.

Comment by Martin Peris on 2012-10-12: @Lorenz totally true! I overlooked that detail, thanks for the note. I will change ros::spin for ros::spinOnce to keep consistency on the answer.

Comment by Lorenz on 2012-10-12: I believe that using ros::spin is a much better solution than calling ros::spinOnce() once a second. The reason is that with spinOnce, subscriber callbacks are only executed when you call that method, i.e. once a second you'll get all subscribers executed, then nothing happens for one second, ...
Comment by Martin Peris on 2012-10-12: Well, depending on what you are trying to accomplish that can be argued, but this discussion would not be really relevant to the question at hand, so I will just modify my answer again :) Comment by Lorenz on 2012-10-12: True, there are some cases when spinOnce is well-suited, mainly to prevent multi-threading issues. The main problem is to chose an appropriate spin rate.
{ "domain": "robotics.stackexchange", "id": 11327, "tags": "rviz" }
Using word embeddings with additional features
Question: I have a set of queries for a classification task using scikit-learn's Gradient Boosting Classifier. I want to enrich the model by feeding it additional features along with the GloVe vectors. How should I approach scaling in this case? GloVe is already well scaled; however, the additional features are not. I have tried StandardScaler, but this reduced performance compared with using GloVe alone. The problem may be with the feature itself; however, I need your opinion on scaling strategies in the case of GloVe vectors plus dummy variables.

Answer: My first comment would be that you have to remember that tree-based models are not scale-sensitive, and therefore scaling should not affect the model's performance; so, as you mention, it is most likely a problem with the feature itself. If you want to scale all your features anyway, you could use MinMaxScaler with the min and max values being the min and max of the GloVe vectors, so that all the features are on the same scale.
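The MinMaxScaler idea suggested in the answer can be sketched in plain Python without scikit-learn; the feature values (query lengths) and the target range (-1.5, 1.5) below are made-up stand-ins for the actual min/max of your GloVe vectors:

```python
def minmax_scale(column, lo, hi):
    """Linearly map a list of values onto the range [lo, hi]."""
    cmin, cmax = min(column), max(column)
    span = (cmax - cmin) or 1.0  # guard against constant columns
    return [lo + (v - cmin) * (hi - lo) / span for v in column]

# Hypothetical extra feature (e.g. query length), rescaled to a GloVe-like range
# before being concatenated with the embedding dimensions.
query_lengths = [3, 8, 15, 42, 7]
scaled = minmax_scale(query_lengths, -1.5, 1.5)
print(scaled)
```

With scikit-learn this corresponds to `MinMaxScaler(feature_range=(glove.min(), glove.max()))`, though as noted above, for tree-based models the rescaling is mainly cosmetic.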
{ "domain": "datascience.stackexchange", "id": 8002, "tags": "machine-learning, scikit-learn, nlp, feature-scaling" }
What is Allelic Imbalance
Question: Can anyone help explain what allelic imbalance is, briefly if possible? Surprisingly, we cannot find any introduction online. Answer: It might be used differently in different contexts, but generally speaking, in my world, allelic imbalance is when there's a difference in the level of gene expression between the different alleles, usually through genetic mechanisms (e.g. a variant in a promoter) or epigenetic mechanisms (e.g. one copy silenced, as in imprinted regions). There are a few blog posts here with some more information, and plenty of papers (google "allelic imbalance") if you want to dig deeper.
{ "domain": "bioinformatics.stackexchange", "id": 796, "tags": "phylogenetics" }