anchor | positive | source |
|---|---|---|
Which dicarboxylic acid has the most acidic hydrogen? | Question:
Which of the following acids (maleic, fumaric, succinic, or malonic) has the most acidic hydrogen?
I think that malonic acid should have the most acidic hydrogen due to the presence of an active methylene group.
The extraction of a hydrogen atom from malonic acid would lead to high resonance stabilization, which is not the case in the other acids.
However, the answer key says that maleic acid has the most acidic hydrogen. Can someone please explain to me why?
Answer: The $\mathrm pK_\mathrm a$ of the $\ce{CH2}$ group in a dicarbonyl compound is roughly $10$, whereas the $\mathrm pK_\mathrm a$ of a carboxylic acid is roughly $5$. For example, dimethyl malonate has a $\mathrm pK_\mathrm a$ of $13$ whereas acetic acid has a $\mathrm pK_\mathrm a$ of $4.7$. So, the C–H bond acidity is not likely to be under consideration in any of your compounds.
Let's draw up some $\mathrm pK_\mathrm a$ data (only first ionisations, all data from Wikipedia):
$$\begin{array}{cc} \hline
\text{Carboxylic acid} & \mathrm pK_\mathrm a \\ \hline
\text{Maleic} & 1.9 \\
\text{Fumaric} & 3.03 \\
\text{Succinic} & 4.2 \\
\text{Malonic} & 2.83 \\ \hline
\end{array}$$
Clearly, all of these relate to ionisation of the $\ce{CO2H}$ group; if the acidity of dimethyl malonate is anything to go by, the $\ce{CH2}$ group in malonic acid is less acidic than the $\ce{CO2H}$ group by a factor of $10^{10}$.
What remains is to analyse this trend. Why is maleic acid the most acidic? The answer is – at least partly – intramolecular hydrogen bonding. When the first carboxylic acid group is deprotonated, the resultant monoanion is stabilised as such:
You are probably familiar with the mantra that a more stable conjugate base implies a more acidic compound. This is precisely what happens here.
Obviously, for fumaric acid, this is not possible because the double bond holds the two carboxyl groups apart. One could, however, argue that similar stabilisation could be derived for malonic and succinic acids. Why is it not a factor, then? It has to do with conformation: for the malonate or succinate anions to enjoy the benefits of intramolecular hydrogen bonding, they have to twist themselves into one specific conformation where the carboxyl groups are close to each other. This results in a loss of conformational entropy (the molecule is not as "free" as it would like to be to explore different conformations). For maleic acid, though, this is not a problem because the (Z)-double bond plonks those two carboxyl groups next to each other, and they don't have a choice about it. This bears some similarity to the concept of preorganisation, where two or more groups are "organised" in a fashion which alters their properties.
That's not the only reason why maleic acid is especially acidic, however. Part of the reason is also because an $\mathrm{sp^2}$ carbon is more electronegative than an $\mathrm{sp^3}$ carbon. An $\mathrm{sp^2}$ orbital contains 33% s-character, compared to an $\mathrm{sp^3}$ orbital which has 25% s-character. s-Orbitals are closer to the nuclei than p-orbitals, and therefore, an electron in an $\mathrm{sp^2}$ orbital experiences a greater effective nuclear charge from carbon; this translates into a greater electronegativity. (For similar reasons, alkenes are more acidic than alkanes.)
So, relative to the succinate anion, both the maleate and fumarate anions are stabilised by electron withdrawal via the inductive effect. This explains why fumaric acid is more acidic than succinic acid; the intramolecular hydrogen bonding explains why maleic acid is more acidic than fumaric acid.
Lastly, of course, malonic acid is more acidic than succinic acid because the electron-withdrawing $\ce{CO2H}$ substituent is fewer bonds away. In fact, succinic acid is barely more acidic than acetic acid, which doesn't have an electron-withdrawing substituent on it. | {
"domain": "chemistry.stackexchange",
"id": 9414,
"tags": "organic-chemistry, acid-base, hydrogen-bond"
} |
include ROS into external CMake project (OpenTLD) | Question:
Hi,
I want to extend the OpenTLD project to use it with ros. I'm using the C++ port from Georg Nebehay ( https://github.com/gnebehay/OpenTLD ).
This project is built with CMake and multiple custom CMakeLists.txt files.
Unfortunately I am not sure how I have to change my own package's CMakeLists.txt to build OpenTLD with the needed CMake settings.
Any help would be appreciated.
Kind regards,
Markus
Originally posted by bajo on ROS Answers with karma: 206 on 2012-08-02
Post score: 2
Original comments
Comment by Kevin on 2012-09-25:
Did you ever get this working?
Comment by bajo on 2012-10-08:
Yes, but I only have it running with a hardcoded bounding box. If you want, I will upload it to github so you can use and modify it.
Comment by Kevin on 2012-12-31:
yes, if you could post it on github I would like that ... thanks!
Answer:
Thank you for your answer.
In the end I followed another way.
I adapted my CMakeLists.txt to reflect the custom CMakeLists.txt files from the OpenTLD package.
I didn't know before that you could do it this way. This site helped me a lot.
http://www.ros.org/wiki/rosbuild/CMakeLists
Originally posted by bajo with karma: 206 on 2012-08-08
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 10456,
"tags": "ros, external-libraries, build, cmake"
} |
Is there an industry consensus standard for storing sensor data? | Question: I would like to know what standard(s) may exist for formatting log data from sensors that record information such as:
temperature
pressure
humidity
vibration
acceleration vector
magnetic field vector
location vector (ex: GPS)
etc.
I have been an end-user of SCADA software products which record various data fields for each stream of log data they receive into a MySQL database. However, I have never looked into determining how exactly that data was stored. I would like to know if there exists an industry consensus standard for storing sensor data that facilitates information interchange. For example, the Internet Engineering Task Force (IETF) publishes documents called "RFCs" for the purpose of standardizing implementations of commonly used protocols like IMAP (email) and TLS (secure web browsing).
For example, such a standard might indicate to me the best practices for:
formatting units of measurement (ex: degF vs. °F)
formatting of timestamps of each measurement (ex: ISO-8601)
file format type (CSV files vs. SQL database)
delimiters for measurement data (ex: , in .csv files)
indicating origin and license of data (ex: end device owner/operator, equipment tag number, CC RDFa tags)
encrypting measurements (esp. in a way compatible with sudden power-loss, ex: DTLS)
While researching this question I noticed that the IETF says the Internet of Things (IoT) is a topic of study for several of its working groups. Its definition of IoT is
The Internet of Things is the network of physical objects or "things"
embedded with electronics, software, sensors, actuators, and
connectivity to enable objects to exchange data with the manufacturer,
operator and/or other connected devices.
However, I did not find any IETF standards document describing standard methods for storing log data. Is there an ISA standard covering methods for logging data? I really like their ISA 5.1 standard for P&ID symbols, "Instrumentation Symbols and Identification".
Thank you for any direction you may provide.
Edit-Update (2021-02-02)
After several months of off-and-on research, the closest standard that defines what I was looking for is the Semantic Sensor Network (SSN) Ontology published by the W3C. It is an expanded version of the Sensor, Observation, Sample, and Actuator (SOSA) Ontology. It is capable of associating metadata about an observation (i.e. units, time, location, site-specific names, relationships to other entities in a data collection hierarchy) in an RDF graph data structure. RDF graphs can be "serialized" into JSON (specifically, JSON-LD). The JSON graph data can be formatted using the JSON lines specification so each observation can be appended to a newline-delimited log file in a streaming fashion. Because the logs are machine-readable, an appropriate software package could later import the readings into a database or whatever a user requires.
Search results aren't clear on whether SSN will become an industry standard for general encoding of sensor information, but it seems to me the best method for storing sensor data in a self-documenting way. For now, I'm happy to have discovered the method of recording measurements + metadata as an RDF graph using the SSN ontology, serializing the graph as JSON-LD, and formatting the JSON-LD as JSON Lines that are streamed and appended to a continuously growing log file.
Answer: My experience is there is no particular standards for saving sensor data. Usually one just picks a format that is useful to the task at hand. If, for instance one wants to do post processing analysis in Excel then CSV formatted text files work pretty well. If you do use CSV, then just save numerical values. Don't add text indicating units to each number as this would have to be stripped to do any processing. You can provide units in the header row. Commas or tabs are fine as delimiters. Please use the extension ".csv" for comma delimited files. I've seen ".tsv" for tab delimited files. Python and Pandas provide very nice functions for handling CSV files as do many other languages.
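As a minimal sketch of that advice (the column names here are invented for illustration, not taken from any standard), the units live once in the header row while the data rows stay purely numeric:

```python
import csv
import io

# Units appear once, in the header; rows carry only numbers,
# so nothing has to be stripped before processing.
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["timestamp", "temperature_degC", "pressure_kPa"])
writer.writerow(["2021-02-02T10:15:00Z", 21.4, 101.3])
writer.writerow(["2021-02-02T10:16:00Z", 21.5, 101.2])

buf.seek(0)
rows = list(csv.DictReader(buf))
temps = [float(r["temperature_degC"]) for r in rows]  # clean numeric parse
print(temps)  # [21.4, 21.5]
```

The same DictReader pattern works unchanged on a real `.csv` file opened with `open(path, newline="")`.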
I would suggest looking into using the JSON file format. From JSON.org:
JSON (JavaScript Object Notation) is a lightweight data-interchange
format. It is easy for humans to read and write. It is easy for
machines to parse and generate.
It is still human readable like CSV, but provides a greater ability to encode useful meta information with the data and more complex data structures. Many, many programming languages have libraries for reading and writing JSON files. I find JSON much less verbose than XML.
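A sketch of what that could look like for a sensor log (the record fields are invented for illustration, not taken from any standard), writing one JSON object per line so new readings can simply be appended as they stream in:

```python
import json

readings = [
    {"time": "2021-02-02T10:15:00Z", "sensor": "TT-101",
     "quantity": "temperature", "value": 21.4, "unit": "degC"},
    {"time": "2021-02-02T10:15:00Z", "sensor": "PT-205",
     "quantity": "pressure", "value": 101.3, "unit": "kPa"},
]

# Append one self-describing record per line ("JSON Lines" style).
with open("sensor.log", "a") as f:
    for r in readings:
        f.write(json.dumps(r) + "\n")

# Read back record by record; units and metadata travel with each value.
with open("sensor.log") as f:
    for line in f:
        rec = json.loads(line)
        print(rec["sensor"], rec["value"], rec["unit"])
```

Because each line is an independent JSON document, a crash or power loss can at worst truncate the final line; every earlier record remains parseable.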
As for units, I highly encourage the use of metric units unless there is some particularly customary unit involved. | {
"domain": "engineering.stackexchange",
"id": 3367,
"tags": "sensors, instrumentation, iot"
} |
Bosonisation of two non-interacting Fermions | Question: Assume we have 2 sets of non-interacting fermions which I show by $\psi^{\pm}$ and $\chi^{\pm}$ where we have $\left< \psi^{+}(z) \psi^{-}(0) \right>=\frac{1}{z}$ and similarly for $\chi$. Now we bosonise the fermions by setting $\psi^{\pm}=e^{\pm i X}$ and $\chi^{\pm}=e^{\pm i Y}$ where $X$ and $Y$ are free bosons. I can see this procedure captures that $\psi^{+}$ and $\psi^{-}$ anticommute with each other; however, I cannot see that it captures that $\psi$ and $\chi$ also anticommute. For example, if I set $j=\psi^{+} \chi^{+}=e^{iX} e^{iY}$, the OPE of $j$ with $\psi^{-}$ equals $-\frac{\chi^{+}}{z}$; however, using the bosons $X$ and $Y$, I get $+\frac{e^{iY}}{z}$. Another issue I have is that I do not see why $\psi^+$ and $\chi^+$ anticommute. Do you see what I am missing?
Answer: You need to add extra Klein factors to the bosonized Fermi fields to make $\psi =\exp\{ iX\} $ and $\chi=\exp \{iY\} $ anticommute. You can, for example, multiply $\chi$ by $\exp\{i\pi N(\psi)\}$ where $N(\psi)$ is the number operator for the $\psi$ fermions. Then each time a $\psi$ is commuted through $\chi \exp\{i\pi N(\psi)\}$ you get a minus sign. These factors are ugly, and can often be ignored, as is explained in the lectures I linked to above. | {
"domain": "physics.stackexchange",
"id": 73601,
"tags": "quantum-mechanics, conformal-field-theory, fermions, bosonization"
} |
Why bother studying SARS-CoV-2 in mice? Why not start with rats? | Question: Mice, hamsters, ferrets, monkeys. Which lab animals can help defeat the new coronavirus? | Science | AAAS
Other SARS-CoV-2 researchers are turning to rats. They are no more susceptible to COVID-19 than mice, but their larger size is an advantage. “You often want to do repetitive bleeding in an experiment, and you can’t do that with mice,” says Prem Premsrirut of Mirimus, whose company is collaborating with an academic group that’s using CRISPR to create a rat model with a human ACE2 receptor. Vaccine studies, for example, often assess how different doses affect antibody responses over several days. Premsrirut notes that “most toxicology studies” of drugs also start in rat. “If you can study a drug directly in rats, you’re a step ahead.”
I'm assuming that you can repetitively bleed rats, but not mice, because rats' "larger size is an advantage", because
"A rat weighing 400 g would therefore have a total blood volume (TBV) of approximately 64 ml/kg x 0.4 kg = 25.6 ml." But a mouse weighing 25 g would therefore have a total blood volume (TBV) of approximately 58.5 ml/kg x 0.025 kg = 1.46 ml.
the same percentage of TBV yields more blood in rats than mice.
Answer: Because the knowledge base about mice is larger than that of rats.
Knockout mice were produced in 1989, knockout rats in 2003 (https://en.wikipedia.org/wiki/Knockout_mouse)
The immunology of mice has been studied more than any other animal except humans. Correlations have been made between many components of immunology between mice and humans. | {
"domain": "biology.stackexchange",
"id": 10550,
"tags": "zoology"
} |
VBuzz all fizzed up | Question: I don't normally write vb.net code, and I would rather maintain a monstrous vb6 project than a simple vb.net one.
There's something about fizzbuzz going on this weekend, that pushed me to write this:
Namespace CR.Sandbox.VB.FizzBuzz
Module Sandbox
Public Sub Main()
For index = 1 To 100
Console.WriteLine(FizzBuzzer.Convert(index))
Next
Console.ReadLine()
End Sub
End Module
Module FizzBuzzer
Public Function Convert(ByVal value As Integer) As String
If value Mod 15 = 0 Then
Return "FizzBuzz"
ElseIf value Mod 3 = 0 Then
Return "Fizz"
ElseIf value Mod 5 = 0 Then
Return "Buzz"
Else
Return value.ToString()
End If
End Function
End Module
End Namespace
I put Convert in its own module because my understanding is that a module in VB is analogous to a static class in C#, which is what I wanted to have here - idea being to keep the fizzbuzz-specifics in a fizzbuzz-specialized class.
Is there something I should know about vb.net that would make this code better?
Answer: Things I like:
You defined the requirements correctly.
Module was probably the right call.
You used index instead of i for your loop. That's Awesome-sauce.
Ok, so what could be done better?
3, 5, and 15 are all magic numbers, so it's a good time to introduce some constants. However, one of them is not like the others. 3 and 5 are conditions; 15 is the lowest common multiple of those conditions.
So, we have three choices:
Define 3 constants.
Const fizzDivisor As Integer = 3
Const buzzDivisor As Integer = 5
Const fizzbuzzDivisor As Integer = 15
Note that this requires that any future developer understand that 15 is the lowest common multiple of 3 and 5. Also, to update either condition, we have to update their lcm by hand. I don't know about you, but I'm lazy and forgetful. I only want to make changes in one place.
Use two constants and calculate the lowest common multiple.
This sounds perfect. Until you realize that in order to be efficient, we would want to calculate it once; outside of the loop. That would bind together Main and FizzBuzzer in a way I wouldn't be comfortable with. In fact, they become so tightly bound that there would be no need for FizzBuzzer at all. You would just implement the whole program right in Main.
Accept that the "ugly" way to write it may be the best way.
Public Function Convert(ByVal value As Integer) As String
Const fizzDivisor As Integer = 3
Const buzzDivisor As Integer = 5
If (value Mod fizzDivisor = 0) And (value Mod buzzDivisor = 0) Then
Return "FizzBuzz"
ElseIf value Mod fizzDivisor = 0 Then
Return "Fizz"
ElseIf value Mod buzzDivisor = 0 Then
Return "Buzz"
Else
Return value.ToString()
End If
End Function
This way, there are no more magic numbers, and the cost (both CPU and real life) of computing a lowest common multiple has been wiped away completely.
What the heck, let's go one step further and run all kinds of different FizzBuzz programs. Forget the constants; let's use parameters.
Public Function Convert(ByVal value As Integer, Optional ByVal fizzDivisor As Integer = 3, Optional ByVal buzzDivisor As Integer = 5) As String
If (value Mod fizzDivisor = 0) And (value Mod buzzDivisor = 0) Then
Return "FizzBuzz"
ElseIf value Mod fizzDivisor = 0 Then
Return "Fizz"
ElseIf value Mod buzzDivisor = 0 Then
Return "Buzz"
Else
Return value.ToString()
End If
End Function
We're using Optional parameters with default values, so your existing implementation won't break. Anyone familiar with FizzBuzzer still gets the expected results just by passing a value to it, but now it's easy to implement many different variations of Fizzbuzz.
Public Sub Main()
For index = 1 To 100
Console.WriteLine(FizzBuzzer.Convert(index))
Next
For index = 1 To 100
Console.WriteLine(FizzBuzzer.Convert(index, 3, 4))
Next
For index = 1 To 100
Console.WriteLine(FizzBuzzer.Convert(index, 4, 7))
Next
Console.ReadLine()
End Sub | {
"domain": "codereview.stackexchange",
"id": 8501,
"tags": "vb.net, fizzbuzz"
} |
How to Do Convolution a 2D Signal (Image) and Result of Convolution of Two 1D Filters? | Question: Assume a 2-D signal (i.e., some image). Load image and assume it to be signal x. Next assume that instead of having a 2-D filter you have two one D filters
$$f_1[n] =\begin{bmatrix}0.25 && 0.5 && 0.25\end{bmatrix}$$ and
$$f_2[n]=\begin{bmatrix}0.25 \\ 0.5 \\ 0.25\end{bmatrix}$$
Assume that the convolution
$$f_1[n]* f_2[n] = f_3[n]= \begin{bmatrix}0.0625 && 0.125 && 0.0625\\ 0.125 && 0.25 && 0.125\\0.0625 && 0.125 && 0.0625\end{bmatrix}$$
Using this information and the output at each stage, verify that the associative property holds.
My code:
f3 = [0.0625 0.125 0.0625; 0.125 0.25 0.125; 0.0625 0.125 0.0625]
x=imread('img.jpg')
conv2(x, f3)
But this gives error saying x is not a vector. How do I fix this?
Answer: You should show that conv2(f3,x) can be implemented in a separable way. In other words, computing x1 = conv2(f1,x), then applying f2 on the results, x12 = conv2(f2,x1) gives the same result. Or f12 = conv2(f2,f1) then x12 = conv2(f12,x).
You should be aware of the correct "dimensions" for f1 and f2: transpose of each other, so that one works on "rows" and the other on columns.
This is shown in the following Matlab code:
dataImage = zeros(25,25);
% Create a delta-like image
dataImage(13,13)=1;
% Create a random-kernel image
dataImage(11:13,11:13)=rand(3,3);
imagesc(dataImage);colormap gray
f1 = [0.25 0.5 0.25]'; f2 = [0.25 0.5 0.25];
disp(f1*f2);
f3 = [0.0625 0.125 0.0625; 0.125 0.25 0.125; 0.0625 0.125 0.0625];
disp(f3);
dataImagef1f2 = conv2(f2,conv2(f1,dataImage));
dataImagef12 = conv2(conv2(f2,f1),dataImage);
dataImagef3 = conv2(f3,dataImage);
subplot(2,2,1)
imagesc(dataImage);
xlabel('data')
subplot(2,2,2)
imagesc(dataImagef1f2);
xlabel('(data*f1)*f2')
subplot(2,2,3)
imagesc(dataImagef12);
xlabel('(f1*f2)*data')
subplot(2,2,4)
imagesc(dataImagef3);
xlabel('f3')
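If Matlab isn't at hand, the same separability check can be sketched in Python with plain NumPy; `conv2_full` below is a hand-rolled stand-in for Matlab's two-argument `conv2`, and all names are mine:

```python
import numpy as np

def conv2_full(a, b):
    """Full 2-D convolution, a stand-in for Matlab's conv2(a, b)."""
    out = np.zeros((a.shape[0] + b.shape[0] - 1,
                    a.shape[1] + b.shape[1] - 1))
    for i in range(b.shape[0]):
        for j in range(b.shape[1]):
            # shift-and-add: each kernel tap scales a shifted copy of a
            out[i:i + a.shape[0], j:j + a.shape[1]] += b[i, j] * a
    return out

rng = np.random.default_rng(0)
x = rng.random((25, 25))                  # stand-in for the image
f1 = np.array([[0.25, 0.5, 0.25]])        # 1x3 row filter
f2 = f1.T                                 # 3x1 column filter
f3 = conv2_full(f2, f1)                   # outer product = the 3x3 kernel

lhs = conv2_full(x, f3)                   # filter once with the 2-D kernel
rhs = conv2_full(conv2_full(x, f1), f2)   # filter twice with the 1-D kernels
print(np.allclose(lhs, rhs))              # True
```

The two results agree to floating-point precision, which is exactly the associativity the exercise asks you to verify.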
Suggestions for tutorials:
The Matlab Image Processing toolbox guide (900+ pages)
An Introduction to Digital Image Processing with Matlab | {
"domain": "dsp.stackexchange",
"id": 5013,
"tags": "image-processing, matlab, filters, convolution, continuous-signals"
} |
Stability and lifetime of soap bubbles formed with light gases like helium or hydrogen? | Question: A friend asked me if it would be possible to make soap bubbles out of a gas like hydrogen and if you did, would they float higher, faster. Due to the lower mass of light gases (compared to the air) I know they will float up faster (buoyancy). I wonder how stable they would be though.
My question is, will bubbles filled with a light gas like helium or hydrogen be stable for similar time periods to bubbles filled with air? I suspect there are complicated effects between the gas and the surface tension of the soap. Will the smaller molecules of the light gases be able to breach the soap bubble faster?
I found a video that shows bubbles filled with air having helium added to them. They seem stable but I wonder if they'd be stable if they didn't have the air mixed in (pure helium).
Answer: I have seen bubbles made with hydrogen. This is a popular trick with the various lecturers who do fireworks related lectures because the bubbles make a satisfying pop if you ignite them.
A bubble is mainly stabilised by layers of surfactant adsorbed at the gas/water interface. As the bubble wall thins, the adsorbed surfactant layers at the opposite gas/water surfaces come into contact and prevent further thinning. This is a purely kinetic barrier as the gas/water surface tension is still greater than zero (i.e. you'd reduce the overall energy by reducing the surface area) but the rate of desorption of surfactant from the surface is slow.
In principle the gas will affect the adsorption of the surfactant at the gas/water interface and possibly affect the stability, but in practice all common gases are so different from water that the relatively minor differences between gases make little difference. Bubbles can even be blown with steam as long as you keep the gas phase temperature above 100 °C.
However, over medium timespans the gas inside the bubble will diffuse out through the water film and cause the bubble to shrink. The rate at which this happens will depend on the solubility of the gas and its diffusion rate in water. I'm sure there will be differences between hydrogen and air, though I don't know of anyone who has actually measured it. I found papers reporting diffusion rates here and here, though both are behind paywalls. I had better luck with solubility figures. Both hydrogen and helium are about a factor of ten less soluble in water than nitrogen, which would make their bubbles more stable than air bubbles though their greater diffusion rates will counteract this to some extent. | {
"domain": "physics.stackexchange",
"id": 8153,
"tags": "surface-tension, stability, bubbles"
} |
Why do linear bounded automata require a nondeterministic Turing machine? Why not a deterministic Turing machine? | Question: Going through the topic of LBAs, i.e., linear bounded automata, I found that an LBA is defined using an NTM with some constraints on the tape. I found the same information in several sources, but none of them explained why an NTM is needed. Why do I need an NTM for an LBA? We also know that any NTM can be converted to a DTM. Therefore, I could take a DTM, impose those tape constraints, and build an LBA. I think that with the tape constraints a DTM would be converted to an LBA! Please correct me, and kindly tell me why I need an NTM for an LBA.
Answer: The conversion from nondeterministic Turing machine to deterministic Turing machines doesn't conserve space. The best known construction, known as Savitch's theorem, converts a nondeterministic Turing machine using space $s(n)$ to a deterministic one using space $O(s(n)^2)$, and this is suspected to be tight in general; see for example this question on cstheory.
Linear-bounded automata correspond to a class of grammars, context-sensitive grammars, in the following strong sense: a language can be described by a context-sensitive grammar iff it is accepted by some linear-bounded automaton. We only know how to prove this result if the linear-bounded automata are allowed to be nondeterministic. Indeed, assuming that $\mathsf{DSPACE}(n) \neq \mathsf{NSPACE}(n)$ (a conjecture which is much weaker than the tightness of Savitch's theorem), there exists a context-sensitive language which cannot be decided by a deterministic linear-bounded automaton. | {
"domain": "cs.stackexchange",
"id": 17028,
"tags": "turing-machines, automata, context-sensitive, linear-bounded-automata"
} |
Swapping two integer numbers with no temporary variable | Question: I tried to swap two integers without using the additional temporary variable of a traditional swap.
Is it legal in C++? My VC compiler doesn't complain or give any warning about it. If so, how can I improve this script?
#include <iostream>
int main()
{
int a = 20;
int b = 66;
// before swapping
std::cout << a << ' ' << b << '\n';
// swap
a ^= b ^= a ^= b;
// after swapping
std::cout << a << ' ' << b << '\n';
}
For this code:
int a = 20;
int b = 66;
a ^= b ^= a ^= b;
Assembler output for VC++ 2013:
_b$ = -20 ; size = 4
_a$ = -8 ; size = 4
mov DWORD PTR _a$[ebp], 20 ; 00000014H
mov DWORD PTR _b$[ebp], 66 ; 00000042H
mov eax, DWORD PTR _a$[ebp]
xor eax, DWORD PTR _b$[ebp]
mov DWORD PTR _a$[ebp], eax
mov ecx, DWORD PTR _b$[ebp]
xor ecx, DWORD PTR _a$[ebp]
mov DWORD PTR _b$[ebp], ecx
mov edx, DWORD PTR _a$[ebp]
xor edx, DWORD PTR _b$[ebp]
mov DWORD PTR _a$[ebp], edx
For this code:
int a = 20;
int b = 66;
int t = a;
a = b;
b = t;
Assembler output for VC++ 2013:
_t$ = -32 ; size = 4
_b$ = -20 ; size = 4
_a$ = -8 ; size = 4
mov DWORD PTR _a$[ebp], 20 ; 00000014H
mov DWORD PTR _b$[ebp], 66 ; 00000042H
mov eax, DWORD PTR _a$[ebp]
mov DWORD PTR _t$[ebp], eax
mov eax, DWORD PTR _b$[ebp]
mov DWORD PTR _a$[ebp], eax
mov eax, DWORD PTR _t$[ebp]
mov DWORD PTR _b$[ebp], eax
Answer: You make assumptions which may not be true. Why do you believe that
int tmp = a;
a = b;
b = tmp;
actually is compiled down to using an actual variable? It is likely just a register used on the CPU.
Have you inspected it?
Further, why do you assume that:
a ^= b ^= a ^= b;
uses fewer registers than a swap?
Really, what you should do is:
#include <algorithm>
#include <iostream>
int main() {
int a = 20;
int b = 66;
// before swapping
std::cout << a << ' ' << b << '\n';
// swap
std::swap(a,b);
// after swapping
std::cout << a << ' ' << b << '\n';
}
Which is also a reminder that having the a ^= b ^= a ^= b; 'naked' in your code is not good practice. Something like that should be embedded in a function, not directly in the main method.
Update - assembler output
For the code:
int a = 20;
int b = 66;
int t = a;
a = b;
b = t;
return a;
you get the assembler output:
movl $20, -12(%rbp)
movl $66, -8(%rbp)
movl -12(%rbp), %eax
movl %eax, -4(%rbp)
movl -8(%rbp), %eax
movl %eax, -12(%rbp)
movl -4(%rbp), %eax
movl %eax, -8(%rbp)
movl -12(%rbp), %eax
popq %rbp
For the code:
int a = 20;
int b = 66;
a ^= b ^= a ^= b;
return a;
you get
movl $20, -8(%rbp)
movl $66, -4(%rbp)
movl -4(%rbp), %eax
xorl %eax, -8(%rbp)
movl -8(%rbp), %eax
xorl %eax, -4(%rbp)
movl -4(%rbp), %eax
xorl %eax, -8(%rbp)
movl -8(%rbp), %eax
popq %rbp
What does that show?
It shows that both systems run in 12 instructions, including the copy to the stack (%rbp)
that both systems use the single register %eax
both systems use the stack as a temp store for the result (the XOR reuses -8 and -4 offsets in the stack, the tmp uses -12(%rbp)
Net result? Both systems use less than 16 bytes of the stack, they both use 1 register in addition to the stack, and they both have the same number of instructions.
I know which one is more readable....
Of course, with the above code, if I add -O2 to the optimization, I get the assembler:
movl $66, %eax
ret
which, as you can imagine, is fast. | {
"domain": "codereview.stackexchange",
"id": 27590,
"tags": "c++, optimization, integer"
} |
My cross entropy loss gradient calculation is wrong according to the answer key | Question: Given a neural network model for Covid-19 classification with $C=1$ for positive and $C=0$ for negative
Let $x_1 = 6$ and $x_2=2$ find
Probability if the patient got Covid-19 $p\left(C=1 | x; w,b\right)$
Probability if the patient didn’t get Covid-19 $p\left(C=0 | x; w,b\right)$
Find the gradient of $CE_{Loss}$
My attempt
For the first problem
$$
\begin{aligned}
O_1 &= \text{ReLU} \left( b_1 + \sum{x_iw_i} \right) \\
&= \text{ReLU} \left( 0.2 + 6 \cdot 0.3 - 0.2 \cdot 2 \right) \\
&= \text{ReLU} \left( 1.6 \right) \\
&= 1.6 \\
\end{aligned}
$$
$$
\begin{aligned}
O_2 &= \text{Sig} \left( -0.6 - 0.1 \cdot 2 - 0.2 \cdot 6 \right) \\
&= \text{Sig} \left( -2 \right) \\
&\approx 0.8808 \\
\end{aligned}
$$
$$
\begin{aligned}
O_3 &= \text{Sig} \left( 0.6 - 0.3 \cdot 0.8808 + 1.6 \cdot 0.5 \right) \\
&\approx 0.75690 \\
\end{aligned}
$$
$$
\begin{aligned}
p\left(C=1 | x; w,b\right) = O_3 \approx 0.75690
\end{aligned}
$$
For the second problem
$$
\begin{aligned}
p\left(C=0 | x; w,b\right) = 1 - p\left(C=1 | x; w,b\right) = 0.2431
\end{aligned}
$$
For the third problem
Since it is a lot of things to calculate I'll take $\frac{\delta L}{\delta w_6}$ as example
$$
\begin{aligned}
\frac{\delta L}{\delta w_6} &= \frac{\delta L}{\delta \hat{y}} \cdot \frac{\delta \hat{y}}{\delta w_6} \\
&= \left(\hat{y}-y\right)O_2\left(O_3 \left(1-O_3\right)\right)\\
&= \left(O_3-y\right)O_2\left(O_3 \left(1-O_3\right)\right)
\end{aligned}
$$
The answer key says it should be $\left(O_3-y\right)O_2$. Where did I go wrong?
Answer: For the first problem your $O_2$ is mistaken and should be corrected as below
$$
\begin{aligned}
O_2 &= \text{Sig} \left( -0.6 - 0.1 \cdot 2 + 0.2 \cdot 6 \right) \\
&= \text{Sig} \left( 0.4 \right) \\
&\approx 0.5987 \\
\end{aligned}
$$
$$
\begin{aligned}
O_3 &= \text{Sig} \left( 0.6 - 0.3 \cdot 0.5987 + 1.6 \cdot 0.5 \right) \\
&\approx 0.7721 \\
\end{aligned}
$$
$$
\begin{aligned}
p\left(C=1 | x; w,b\right) = O_3 \approx 0.7721
\end{aligned}
$$
For the second problem thus corrected to:
$$
\begin{aligned}
p\left(C=0 | x; w,b\right) = 1 - p\left(C=1 | x; w,b\right) = 0.2279
\end{aligned}
$$
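Both the corrected probabilities and the answer key's gradient $(O_3-y)O_2$ can be sanity-checked numerically. A small Python sketch (function names are mine; the network weights are read off the forward-pass expressions above, and the label $y=1$ is an assumption made just for the check) compares the analytic gradient with a central finite difference of the cross-entropy loss with respect to $w_6$:

```python
import math

def sig(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(w6, x1=6.0, x2=2.0):
    o1 = max(0.0, 0.2 + 0.3 * x1 - 0.2 * x2)   # ReLU hidden unit
    o2 = sig(-0.6 + 0.2 * x1 - 0.1 * x2)       # corrected sigmoid hidden unit
    o3 = sig(0.6 + w6 * o2 + 0.5 * o1)         # output unit; w6 weights O2
    return o2, o3

def ce_loss(y_hat, y):                         # binary cross-entropy
    return -(y * math.log(y_hat) + (1 - y) * math.log(1 - y_hat))

w6, y = -0.3, 1.0                              # y = 1 assumed for the check
o2, o3 = forward(w6)
analytic = (o3 - y) * o2                       # the answer key's gradient

eps = 1e-6                                     # central finite difference
numeric = (ce_loss(forward(w6 + eps)[1], y)
           - ce_loss(forward(w6 - eps)[1], y)) / (2 * eps)
print(round(o3, 4), abs(analytic - numeric) < 1e-6)  # 0.7721 True
```

The finite difference matches $(O_3-y)O_2$, confirming that the sigmoid factors $O_3(1-O_3)$ really do cancel.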
For the third problem: since $w_6$ is one of the weights connecting into the output layer, and we assume $\hat{y}=g(z^{(3)})$, where $z^{(3)}$ is the net input for the output unit and $g$ is its sigmoid transfer function, we arrive at:
$$
\begin{aligned}
\frac{\delta L}{\delta w_6} &= \frac{\delta L}{\delta \hat{y}} \cdot \frac{\delta \hat{y}}{\delta z^{(3)}} \cdot \frac{\delta z^{(3)}}{\delta w_6} \\
\end{aligned}
$$
and since $L$ is the binary cross-entropy loss function, we know $\frac{\delta L}{\delta \hat{y}} = \frac{(\hat{y}-y)}{(1-\hat{y})\hat{y}}$. And from the transfer function above we know $\frac{\delta \hat{y}}{\delta z^{(3)}}=(1-\hat{y})\hat{y}$, and $\frac{\delta z^{(3)}}{\delta w_6}=O_2$. Thus finally we simplify to $\frac{\delta L}{\delta w_6}=(O_3-y)O_2$ | {
"domain": "ai.stackexchange",
"id": 3608,
"tags": "neural-networks, homework"
} |
Calculate daily average for selected period in PHP | Question: I would like to calculate the average for the same days (e.g. Monday, Tuesday, etc) in a selected period:
$data = array(
'2016-05-01' => 100,
'2016-05-02' => 150, // monday
'2016-05-03' => 5,
'2016-05-04' => 5,
'2016-05-05' => 25,
'2016-05-06' => 25,
'2016-05-07' => 25,
'2016-05-08' => 25,
'2016-05-09' => 55, // monday again
'2016-05-10' => 25,
'2016-05-11' => 35,
'2016-05-12' => 25,
'2016-05-13' => 125,
'2016-05-14' => 25,
'2016-05-15' => 225,
'2016-05-16' => 25, // and again...
'2016-05-17' => 25,
'2016-05-18' => 25,
'2016-05-19' => 25,
'2016-05-20' => 25,
'2016-05-21' => 25,
);
// Store occurrence of same days
$countDays = array('Mon' => 0, 'Tue' => 0, 'Wed' => 0, 'Thu' => 0, 'Fri' => 0, 'Sat' => 0, 'Sun' => 0);
// Store values
$output = array('Mon' => 0, 'Tue' => 0, 'Wed' => 0, 'Thu' => 0, 'Fri' => 0, 'Sat' => 0, 'Sun' => 0);
// Count days
foreach($data as $key => $value) {
$dayName = date('D', strtotime($key));
$countDays[$dayName]++;
}
// Calculate average
foreach($data as $key => $value) {
$dayName = date('D', strtotime($key));
$output[$dayName] += round($value / $countDays[$dayName]);
}
For example, I'd expect the following output for all the available (three) Mondays:
(150 + 55 + 25) / 3 = 230 / 3 ≈ 76.7, which my per-value rounding prints as 76
Answer: The only change I would make is in your two foreach loops. If you see below, I have made the change I would recommend making. This will make the loop a bit shorter and more accurate.
// Store occurrence of same days
$countDays = array('Mon' => 0, 'Tue' => 0, 'Wed' => 0, 'Thu' => 0,
'Fri' => 0, 'Sat' => 0, 'Sun' => 0);
// Store values
$output = array('Mon' => 0, 'Tue' => 0, 'Wed' => 0, 'Thu' => 0,
'Fri' => 0,'Sat' => 0, 'Sun' => 0);
// Count days
foreach($data as $key => $value) {
$dayName = date('D', strtotime($key));
$countDays[$dayName]++;
//add up all the qty when you go through the array the first time
// IE all the Tuesdays in the array get added up.
$output[$dayName] += $value;
}
// Calculate average
foreach($output as $key => $value) {
//now take all the totals you just made in the other loop, and avg them
$output[$key] = round($value / $countDays[$key]);
}
This is the main improvement I can see. This way, you have a 'shorter' second foreach loop, which may improve the speed of your code a bit. You could also take $value by reference and change it on the fly, but I think the way I have it now is clearer. This way you aren't doing the math more than once per array element, so you only compute the average for something like Tuesday once.
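The accumulate-then-divide idea is language-agnostic. A minimal Python sketch of the same logic (the sample data and names are illustrative, not the original PHP):

```python
from collections import defaultdict
from datetime import date

# Hypothetical subset of the date -> value data from the question.
data = {
    "2016-05-02": 150,  # Monday
    "2016-05-09": 55,   # Monday
    "2016-05-16": 25,   # Monday
    "2016-05-03": 5,    # Tuesday
    "2016-05-10": 25,   # Tuesday
}

totals = defaultdict(int)
counts = defaultdict(int)

# Single pass: accumulate totals and counts per weekday name...
for day, value in data.items():
    name = date.fromisoformat(day).strftime("%a")  # 'Mon', 'Tue', ...
    totals[name] += value
    counts[name] += 1

# ...then divide once per weekday, not once per element.
averages = {name: round(totals[name] / counts[name]) for name in totals}
print(averages["Mon"])  # (150 + 55 + 25) / 3 = 76.67 -> 77
```

Rounding once at the end, rather than rounding each term, is what makes this version more accurate than the original.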
As Pinoniq stated you can drop the second foreach loop and add a function, and use array_map to do the rounding. Leave the first foreach loop the same and then add this, instead of the other foreach loop.
function average($a, $b){
return round($a/$b);
}
$output = array_map("average",$output,$countDays);
//var_dump($output); //<-only if you want to test results.
Downside: You lose your index keys if you do it like this. However, you can easily fix that by just keeping the keys in an array, and then print them out at the same time, or use array_combine($keys,$values); to get your output array back. | {
"domain": "codereview.stackexchange",
"id": 20049,
"tags": "php, datetime"
} |
What is the correct explanation of the null result of Michelson-Morley experiment? | Question: Does the null result of the Michelson-Morley experiment rule out the existence of the ether medium?
My understanding is that if instead of Galilean transformation, we use Lorentz transformation (i.e. the velocity of light is the same both downstream and cross-stream), the fringe shift will also be zero whether or not there is an ether medium.
Answer: The null result of the Michelson-Morley experiment rules out a rigid aether. The Michelson-Gale experiment rules out a dragged aether. Currently only the Lorentz aether, which is an aether that is designed to be experimentally indistinguishable from no aether, is compatible with experiment. | {
"domain": "physics.stackexchange",
"id": 85249,
"tags": "special-relativity, speed-of-light, aether"
} |
Problem related to relative motion | Question: The speed of a boat is 1.5 m/s in still water. One needs to cross a river of width 500 m with this boat. The stream flows along the river at 0.9 m/s, so although the boat is oared straight toward the opposite shore, the current carries it downstream. Find the speed of the boat relative to the shore, the distance between its starting point and the point where it reaches the opposite shore, and the time it takes to cross the river.
Answer:
Think of the velocity of the boat as a vector. $1.5$ m/s in the x-direction (to the shore), $0.9$ m/s in the y-direction (in the direction of the river)
As such, you can find the magnitude of the velocity vector, which we routinely call the speed of the boat
The 500 meters in the x-direction will be covered in $500/1.5$ seconds.
Using this time interval, multiply by $0.9$ m/s to get the distance covered in the y-direction.
Now using Pythagoras' theorem, you can find the distance between point A and point B.
Lesson you should take away from this: vectors can be really useful concepts.
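Putting numbers to these hints (a quick check using the values from the question):

```python
import math

v_boat = 1.5   # m/s, across the river (x-direction)
v_drift = 0.9  # m/s, along the river (y-direction)
width = 500.0  # m

speed = math.hypot(v_boat, v_drift)   # |velocity| = sqrt(1.5^2 + 0.9^2) ~ 1.75 m/s
t = width / v_boat                    # crossing time ~ 333.3 s
drift = v_drift * t                   # downstream displacement = 300 m
distance = math.hypot(width, drift)   # straight-line A-to-B distance ~ 583.1 m

print(round(speed, 2), round(t, 1), round(drift), round(distance, 1))
```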
N.B. I don't think you're supposed to put your homework questions here, but seeing how you asked a similar question before (from your profile), I figured I'd help you out anyway. | {
"domain": "physics.stackexchange",
"id": 19532,
"tags": "homework-and-exercises, time, speed, relative-motion"
} |
Remove repeated words from a 2D character array | Question: This code removes repeated words from a word array (2D char array). I want to optimize this as much possible for speed.
#include <stdio.h>
#include <string.h>
int main(void)
{
char words[2000][20];
/* This array is populated by some other module. A possible example of this array is given below, but in reality it would have far more elements.
char words[6][20] =
{
{"barrack"},
{"david"},
{"John"},
{"david"},
{"benjamin"},
{"barrack"}
};*/
int names_found = 10;
char out_words[6][20];
int i,j;
int act_names=0;
for (i = 0; i < names_found; i ++)
{
j = i + 1;
while (j < names_found)
{
if (strcmp(words[i], words[j]) == 0)
{
memmove(words + j, words + (names_found - 1), sizeof(words[0]));
-- names_found;
}
else
++ j;
}
}
for (i = 0; i < names_found; i++)
{
printf("%s\n", words[i]);
}
return 0;
}
How can the 2 for loops for i and j above be optimized?
As you see, the above logic currently outputs to same input buffer (in-place processing). Can it be optimized further if we allow a second buffer say char out_words[2000][20]?
Answer:
Your algorithm is \$O(n^2)\$. You can reduce this to \$O(n)\$ on average by first inserting the strings into a hash table that uses separate chaining (closed addressing, i.e. bucket chains).
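To illustrate the hash-table idea, here is a sketch in Python rather than C for brevity (a C version would hand-roll the table); a hash set gives amortized O(1) membership tests:

```python
# Sample input mirroring the commented-out array in the question.
words = ["barrack", "david", "John", "david", "benjamin", "barrack"]

seen = set()
out_words = []
for w in words:
    if w not in seen:        # hash lookup instead of a linear strcmp scan
        seen.add(w)
        out_words.append(w)  # keep the first occurrence, preserving order
print(out_words)  # ['barrack', 'david', 'John', 'benjamin']
```

Note this variant preserves the original order of first occurrences, unlike the memmove-from-the-end approach in the question.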
I'd also optimise by copying just pointers to the original strings rather than the whole items (unless you intend to alter their contents later on). | {
"domain": "codereview.stackexchange",
"id": 7769,
"tags": "optimization, c, strings, array"
} |
Is an electron an ideal magnetic dipole? | Question: Spin (for example of an electron) is described as an intrinsic form of angular moment, and we often say the electron has therefore magnetic moment, due to this angular moment. Well, I suppose when we say "magnetic moment" we actually mean magnetic dipole moment. In that case, is that dipole ideal? That is, are quadrupole, octupole, etc, components of magnetic moment zero for an elementary particle with spin? If not, why (since an elementary particle is as close as I think we can get to the definition of an ideal dipole)?
Answer:
Is an electron an ideal magnetic dipole?
Theoretically, yes. I'll explain the theoretical prediction. If any experiment ever disagrees with this prediction, please let me know. Please let everybody know!
The theoretical answer is based on the Wigner-Eckart theorem, as explained in an existing answer to the related question Why do spin-$\frac{1}{2}$ nuclei have zero electric quadrupole moment? The answer I'm posting here is really just a more pedagogically-worded version of that existing answer, tailored for this new question.
An electron has intrinsic angular momentum equal to $\hbar/2$. To the degree that an electron can be localized at a point, this is a statement about how a single-electron state transforms under rotations about that point. Namely, it transforms as a spin-$1/2$ representation of the (covering group of the) rotation group. The electron can also have non-intrinsic (also called "orbital") angular momentum that depends on its motion through space, and that can have associated higher multipole moments, like it does in excited states of a hydrogen atom. The question asks about the electron's intrinsic properties, though, and for that, we can treat the electron as being localized at a point.
Multipole moments, by definition, also transform in specific ways under the rotation group: a monopole moment transforms as a spin-$0$ representation, a dipole moment transforms as a spin-$1$ representation, a quadrupole moment transforms as a spin-$2$ representation, and so on. Again, this is by definition.
Consider any multipole observable $M$ that transforms as a spin-$j$ representation, and let $|e\rangle$ be a single-electron state. The expectation value of $M$ in this state is proportional to $\langle e|M|e\rangle$, the inner product of $|e\rangle$ with $M|e\rangle$. If we rotate everything, including all states and all observables, then the theory's predictions should be unchanged. In particular, $\langle e|M|e\rangle$ should be unchanged. This is possible only if $|e\rangle$ and $M|e\rangle$ both transform the same way, so that their inner product remains unchanged under arbitrary rotations. The state $|e\rangle$ has spin $1/2$ and the observable $M$ has spin $j$, so the state $M|e\rangle$ is a linear combination of spins $j+ 1/2$ and $j-1/2$. We need $|e\rangle$ and $M|e\rangle$ to transform the same way if we want their inner product $\langle e|M|e\rangle$ to be invariant, so we must have either $j+ 1/2=1/2$ or $j-1/2=1/2$, which implies $j\in\{0,1\}$. In other words, an electron can have a monopole moment (and we know that it does have an electric monopole moment) and a dipole moment (and we know that it does have a magnetic dipole moment), but it cannot have any higher multipole moments, neither electric nor magnetic.
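The selection-rule bookkeeping in that paragraph can be checked mechanically. The sketch below (Python; `dims` is an illustrative helper, not a standard API) enumerates the Clebsch-Gordan ladder for spin-$j$ $\otimes$ spin-$1/2$ and keeps only the multipole ranks $j$ for which spin $1/2$ reappears:

```python
from fractions import Fraction as F

def dims(j1, j2):
    """Spins appearing in j1 (x) j2 = |j1-j2| (+) ... (+) (j1+j2)."""
    J = abs(j1 - j2)
    ladder = []
    while J <= j1 + j2:
        ladder.append(J)
        J += 1
    return ladder

half = F(1, 2)
# M (spin j) acting on a spin-1/2 state yields spins j-1/2 and j+1/2;
# <e|M|e> can be rotation-invariant only if 1/2 appears among them.
allowed = [j for j in range(0, 5) if half in dims(F(j), half)]
print(allowed)  # only j = 0 (monopole) and j = 1 (dipole) survive
```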
By the way, this argument is not limited to elementary particles. It also works for composite particles, such as nuclei (which is what the linked question is about). A composite particle may have various excited states with different angular momenta, so a generic state of the composite particle may be a superposition of several different intrinsic angular momenta, allowing it to have several different intrinsic multipole moments in such a state. But in its ground state, it transforms as a specific representation of the (covering group of the) rotation group, so the preceding argument still applies to a composite particle in its ground state. In this context, the only special thing about elementary particles is that they're always in their ground state. I suppose that's one way to define what "elementary particle" means. | {
"domain": "physics.stackexchange",
"id": 74696,
"tags": "quantum-mechanics, electrons, atomic-physics, quantum-spin, elementary-particles"
} |
How does the collision of two protons forming a hydrogen-2 atom produce energy if a neutron is more massive than a proton? | Question: Wouldn't the transformation of a proton to a neutron require energy since the neutron is more massive?
Answer: Deuteron (nucleus of hydrogen-2) mass is $$M_{\text{deuteron}}<2M_{\text{proton}}~~~(\text{and of course}~M_{\text{deuteron}}<M_{\text{proton}}+M_{\text{neutron}})$$ because of binding energy, which is always subtracted from a total mass of constituent particles (proton and neutron in this case).
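Plugging in standard rest masses (rounded values in MeV/c²; treat the exact digits as approximate) makes both inequalities concrete:

```python
# Approximate rest masses in MeV/c^2 (rounded; for illustration only).
m_proton = 938.272
m_neutron = 939.565
m_deuteron = 1875.613

binding = m_proton + m_neutron - m_deuteron
print(round(binding, 3))  # ~2.224 MeV released when p and n bind
assert m_deuteron < 2 * m_proton < m_proton + m_neutron
```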
The proton-to-neutron transformation occurs via a weak-interaction ($\beta^+$-type) process; the energy cost of the heavier neutron is covered by the binding energy released. | {
"domain": "physics.stackexchange",
"id": 44372,
"tags": "nuclear-physics, astronomy, neutrinos, stars, protons"
} |
If we introduce any disease or deadly effect to a large group of living things, will such a group be able to develop a resistance? | Question: Like for instance, introducing cancer or radiation,
to a large group of insects/animals that can reproduce very quickly,
would it result in the surviving population of these living things to be more resistant to whatever adverse condition that was subjected upon it?
Answer: Is there selection against cancer?
What is selection?
Let's start by explaining what selection is. Selection is a differential in fitness among different genotypes. In other words, consider a phenotypic trait for which there is some variance in the population. If, in the population, there is a correlation between this trait and fitness, and a correlation between this trait and genetics, then there is selection.
The three ingredients for selection are
Variance for the phenotypic trait
Correlation between the phenotypic trait and fitness
Correlation between the phenotypic trait and genetics
These three bullet points are known as Lewontin's recipe. The last bullet point could be rephrased into "the phenotypic trait is heritable" and all bullet points put together could be rephrased into "fitness is heritable". Have a look at the post Why is a heritability coefficient not an index of how “genetic” something is? to understand the concept of heritability.
Is there selection on cancer?
Cancer is fundamentally a genetic disease. If this is unclear to you, you might want to read a bit more about cancer.
There is variance for cancer in the population (not everybody gets cancer, but some people do), so the first bullet point is satisfied. Cancer is a disease and therefore affects fitness (cancers kill), so the second bullet point is satisfied. Cancer is a genetic trait, so the third bullet point is satisfied. In short, yes, there is selection on cancer.
How does increasing the amount of carcinogens in our environment will affect selection on cancer?
From wiki
A carcinogen is any substance, radionuclide, or radiation that promotes carcinogenesis, the formation of cancer.
If you increase the amount of carcinogens in our environment, you will increase the prevalence of the disease, which will increase the variance for the disease (unless prevalence became extremely high) and therefore increase selection against cancer.
Does selection work?
I just wanted to know if it were possible [, by the means of] natural selection [to] gradually evolve some group of living things to be more resistant to any harsh condition, such as extreme heat/cold, intense radiation, any disease, etc.
Yes, it is possible. Natural selection does work.
Note that if you are interested in a short intro course to evolutionary biology, then you may enjoy having a look at Evo101 by UC Berkeley. | {
"domain": "biology.stackexchange",
"id": 7824,
"tags": "evolution, bioinformatics"
} |
cv_bridge: sensor_msgs::Image to sensor_msgs::ImageConstPtr | Question:
Hi there!
I've been having some real trouble with cv_bridge (and, I guess c++ in general); I'm really stumped as to how to solve this issue but I suspect it's down to a fundamental misunderstanding of c++.
I'm trying to write a very basic ros package which will capture a video from a webcam, publish it frame by frame to a topic, and then have a subscriber take the data and display the image. However, I am following an API which stresses that the message must be passed as a sensor_msgs/Image inside another message containing some other information.
message definition:
sensor_msgs/Image frame
uint8 camera_id
My publisher uses cv_bridge in the following manner:
cv::Mat cv_frame;
cap >> cv_frame; //I have a VideoCapture called cap.
vision_ros::img_msg imsg;
sensor_msgs::Image::Ptr ros_frame;
std_msgs::Header head;
ros_frame = cv_bridge::CvImage(head, "", cv_frame).toImageMsg();
imsg.frame = *ros_frame;
Now, my understanding of c++ and pointers is limited (I reference http://www.cplusplus.com/doc/tutorial/pointers/ constantly...) but my aim here was to capture a new frame from the webcam, convert it using cv_bridge (which leaves me with a sensor_msgs::Image::Ptr), and then add this to my img_msg as a sensor_msgs::Image by using the dereference operator (*). It seems to work, but if it's wrong, please say something!
I then want my subscriber callback function to work along the lines of:
void streamCallback(vision_ros::img_msg imsg)
{
sensor_msgs::Image ros_frame = imsg.frame;
cv::Mat cv_frame = <some conversion from sensor_msgs::Image to cv::Mat>
}
The way I feel it should be done (after reading the documentation) is to convert from sensor_msgs::Image to cv_bridge::CvImage and then extract the CvImage.image, property which should be of type cv::Mat.
However, all of the functions return a CvImagePtr for which I thought I could use the dereference operator again to get the "data stored at...".
Finally, all of the functions to convert from a sensor_msgs::Image to cv_bridge::CvImagePtr must take a const sensor_msgs::ImageConstPtr as an argument.
So my question is, how do I get from my sensor_msgs::Image to a sensor_msgs::ImageConstPtr?
I tried:
sensor_msgs::Image ros_frame;
sensor_msgs::ImageConstPtr ros_frame_ptr = &ros_frame;
to no avail.
Originally posted by Pufpastry on ROS Answers with karma: 3 on 2015-06-03
Post score: 0
Answer:
Your callback could look like:
void streamCallback(vision_ros::img_msg imsg)
{
cv::Mat cv_frame = cv_bridge::toCvCopy( imsg.frame, "mono8" /* or other encoding */ )->image;
}
Originally posted by Wolf with karma: 7555 on 2015-06-03
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by Pufpastry on 2015-06-03:
Thank you! That fixed the cv_bridge based errors, now I just need to figure out why I can't get imshow to show anything... Thanks again! | {
"domain": "robotics.stackexchange",
"id": 21828,
"tags": "ros, opencv, message, sensor, image"
} |
Superoperator cannot increase relative entropy | Question: Note: Cross-posted on Physics SE.
So I have to show that a superoperator $\$$ cannot increase relative entropy using the monotonicity of relative entropy:
$$S(\rho_A || \sigma_A) \leq S(\rho_{AB} || \sigma_{AB}).$$
What I have to prove:
$$S(\$\rho|| \$ \sigma) \leq S(\rho || \sigma).$$
Now the hint is that I should use the unitary representation of the superoperator $\$$. I know that we can represent $ \$ \rho = \sum_i M_i \rho M_i^{\dagger} $ with $\sum_i M_i^{\dagger} M_i = I$. Now I am able to write out $S(\$\rho || \$\sigma)$ in this notation, but that doesn't bring me any further.
Does anyone have any idea how to show this in the way that the questions hints to? I already read the original paper of Lindblad but this doesn't help me (he does it another special way). Any clues or how to do this?
Answer: I'm not an expert with this sort of thing (i.e. there may be imperfections in this argument), but hopefully this will set you in the right direction...
Consider $\rho_{AB}=\rho_A\otimes |0\rangle\langle 0|$ and $\sigma_{AB}=\sigma_A\otimes |0\rangle\langle 0|$. It must be that $S(\rho_A\|\sigma_A)=S(\rho_{AB}\|\sigma_{AB})$.
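As a sanity check (separate from the quantum proof): in the classical, mutually diagonal case, relative entropy reduces to the Kullback-Leibler divergence and the partial trace to marginalization, so the monotonicity inequality quoted in the question can be verified numerically:

```python
import math

def kl(p, q):
    """Classical relative entropy D(p||q) = sum_i p_i log(p_i / q_i)."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Joint distributions on A x B (rows = A, columns = B); numbers are arbitrary.
p_ab = [[0.5, 0.1], [0.1, 0.3]]
q_ab = [[0.25, 0.25], [0.25, 0.25]]

# "Tracing out" B is just marginalization in the classical case.
p_a = [sum(row) for row in p_ab]
q_a = [sum(row) for row in q_ab]

d_joint = kl([x for row in p_ab for x in row], [x for row in q_ab for x in row])
d_marg = kl(p_a, q_a)
assert d_marg <= d_joint + 1e-12  # discarding a subsystem never increases D
```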
Now, your superoperator can be described by a unitary $U$ over a larger space:
$$
\$\rho=\text{Tr}_B\left(U(\rho\otimes|0\rangle\langle 0|)U^\dagger\right)
$$
So, let
$$
\tilde\rho_{AB}=U\rho_{AB}U^\dagger\qquad\tilde\sigma_{AB}=U\sigma_{AB}U^\dagger.
$$
Since it's unitary,
$$
S(\rho_A\|\sigma_A)=S(\rho_{AB}\|\sigma_{AB})=S(\tilde \rho_{AB}\|\tilde\sigma_{AB}),
$$
but from your original statement (I assume, you've not quite stated it precisely enough)
$$
S(\tilde \rho_{AB}\|\tilde\sigma_{AB})\geq S(\$\rho_A\|\$\sigma_A)
$$ | {
"domain": "quantumcomputing.stackexchange",
"id": 738,
"tags": "entanglement, information-theory, entropy"
} |
Receiving data from a network stream | Question: I basically have a couple of clients built with C#, using System.Net.Sockets TCP connections connecting to a node server backend.
I've looked at tons of examples and everyone seems to receive data in a different way. I copied methods I liked and thought would work for me and put together this method.
This is the method I use for receiving ALL data from the server. Is this method acceptable? Is there anything I can do better here?
ReceiveData Method
/// <summary>
/// This should be started in its own thread as it blocks like crazy
/// This will keep running while the socket is connected, waiting for
/// data to arrive and then handle it (via calling the action you pass)
/// </summary>
/// <param name="tcpNetwork">the tcp connection which contains the open socket and stream reader</param>
/// <param name="handleFunction">Action in which you want to run to handle your packets received</param>
public static void ReceiveData(TcpNetwork tcpNetwork, Action<Packet> handleFunction)
{
//grab the streamReader from the tcp object
var streamReader = tcpNetwork.GetStreamReader();
//while the socket is still connected
while (tcpNetwork.Connected)
{
try
{
//Create a new header object which is a static size
//(Set Via TcpNetwork)
//the server will always send a header packet before any
//other packet
var header = new char[TcpNetwork.HeaderLength];
//attempt to read the header, this will find data, or
//throw an IOException when we timeout (if no data is being sent)
//in which case we just hide it and start looking for more data
TcpNetwork.ReadWholeArray(streamReader, header);
//if we got this far that means we have a header packet (in json form)
//convert it to our HeaderPacket object so we can determine the length of the
//content packet
var headerPacket = JsonConvert.DeserializeObject<HeaderPacket>(new string(header));
//create a new char array for our content packet using the size that was sent
//with our header packet
var contentPacket = new char[headerPacket.Length];
//attempt to read the whole contentPacket, we will keep reading until we get the right amount of data
//or we reach the end of the stream in which case we'll throw an exception cause something bad happened
TcpNetwork.ReadWholeArray(streamReader, contentPacket);
//convert our character array to a string
var json = new string(contentPacket);
//make sure the string is json
if (Packet.IsJson(json))
{
//convert it from json to our packet object
var packet = Packet.ConvertFromServerPacket(json);
//call the action we passed in to handle all the packets
handleFunction(packet);
}
else
{
throw new FormatException("Received non-json response from server");
}
}
catch (IOException ioException)
{
//natural timeout if we don't receive any messages
//keep chugging
}
catch (ThreadAbortException abortingException)
{
// -- let it abort
}
catch (Exception x)
{
throw x;
}
}
}
Relevant parts of TcpNetwork.cs
/// <summary>
/// Reads data into a complete array, throwing an EndOfStreamException
/// if the stream runs out of data first, or if an IOException
/// naturally occurs.
/// </summary>
/// <param name="reader">The stream to read data from</param>
/// <param name="data">The array to read bytes into. The array
/// will be completely filled from the stream, so an appropriate
/// size must be given.</param>
public static void ReadWholeArray(StreamReader reader, char[] data)
{
var offset = 0;
var remaining = data.Length;
while (remaining > 0)
{
var read = reader.Read(data, offset, remaining);
if (read <= 0)
throw new EndOfStreamException
(String.Format("End of stream reached with {0} bytes left to read", remaining));
remaining -= read;
offset += read;
}
}
/// <summary>
/// Gets a StreamReader which can be used to read from the connected socket
/// </summary>
/// <returns>StreamReader of a network stream</returns>
public StreamReader GetStreamReader()
{
if (_tcpClient == null)
return null;
return _streamReader ?? (_streamReader = new StreamReader(_tcpClient.GetStream()));
}
HeaderPacket.cs
public class HeaderPacket
{
public int Length { get; set; }
}
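The header-then-content framing used above is language-agnostic. Below is a minimal Python sketch of the same read-exactly-N, length-prefixed idea (the 32-byte header size and the function names are assumptions for illustration, not the original C# API):

```python
import io
import json

HEADER_LEN = 32  # hypothetical fixed header size, like TcpNetwork.HeaderLength

def read_exact(stream, n):
    """Keep reading until exactly n bytes arrive, or fail at end-of-stream."""
    buf = b""
    while len(buf) < n:
        chunk = stream.read(n - len(buf))
        if not chunk:
            raise EOFError(f"stream ended with {n - len(buf)} bytes left")
        buf += chunk
    return buf

def read_message(stream):
    # Fixed-size JSON header carrying the content length, then the content.
    header = json.loads(read_exact(stream, HEADER_LEN).decode().strip())
    body = read_exact(stream, header["Length"])
    return json.loads(body.decode())

# Build a fake wire: padded JSON header, then the JSON content packet.
content = json.dumps({"msg": "hello"}).encode()
header = json.dumps({"Length": len(content)}).encode().ljust(HEADER_LEN)
wire = io.BytesIO(header + content)
print(read_message(wire))  # {'msg': 'hello'}
```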
Clarifications
TcpNetwork is a wrapper class around System.Net.Sockets.TcpClient
Server is node.js so sending JSON across the wire seemed reasonable. I'm not sure if that's something I should be doing or not.
Using Json.net to convert between my POCOs and JSON objects
Answer: Just one minor issue: You shouldn't rethrow an exception with throw x;, because this messes up the stack trace. It is recommended to use just throw; instead.
Out of personal preference I wouldn't write that many comments, actually. Sure, it looks nice, but you are explaining twice what ReadWholeArray does. I think the name is chosen well enough to give an idea of what the method does, and for details you have the documentation at the definition of the method (e.g. shown via IntelliSense). Comments shouldn't explain how exactly you are doing something, they should tell the reader what you are doing, and why. | {
"domain": "codereview.stackexchange",
"id": 4063,
"tags": "c#, networking, tcp, json.net"
} |
Why does the classic wave equation for a non-relativistic string look like the Klein-Gordon equation? | Question: There is a very old equation known as the "wave equation". It's an ordinary classical non-relativistic differential equation which applies to just about every kind of ordinary wave you can imagine (fluid dynamics, acoustics, mechanical waves, etc). It can be derived from Hooke's law simply by imagining a sequence of beads on a string, where each bead has the same mass. In the continuum limit of a string with constant tension and mass per unit length, such as that on a violin, you get this:
$$
\displaystyle {\frac {\partial ^{2}u}{\partial t^{2}}}=c^{2}{\frac {\partial ^{2}u}{\partial x^{2}}}
$$
where $c = \sqrt{T/\rho}$ is the velocity of traveling waves on the string, T is the tension of the string, and $\rho$ is the mass per unit length of the string.
This equation was discovered by d’Alembert in 1746 (see, eg. https://en.wikipedia.org/wiki/Wave_equation)
The 3D version of this, for ordinary fluids, mechanical waves, etc. is:
$$
\displaystyle {\frac {\partial ^{2}}{\partial t^{2}}}u -c^2\mathbf {\nabla } ^{2}u =0.
$$
Fast forward 150+ years, and we have Einstein's theory of special relativity, which says that the energy of any body in motion is: $$\displaystyle E = \sqrt{(pc)^2 + (mc^2)^2}$$
Moving forward a few more decades, we can combine this with the postulate from quantum mechanics that momentum should not be viewed as an ordinary real number, but instead as an operator on a complex Hilbert space, $ \displaystyle p = -i\hbar \frac{d}{dx} $. This gives us the Klein-Gordon equation, the relativistic counterpart of the Schrödinger equation:
$$
\displaystyle {\frac {1}{c^{2}}}{\frac {\partial ^{2}}{\partial t^{2}}}\psi -\mathbf {\nabla } ^{2}\psi +{\frac {m^{2}c^{2}}{\hbar ^{2}}}\psi =0
$$
For the ultra-relativistic case when $E \gg mc^2$ (equivalently, in the massless limit), this reduces to exactly the classic wave equation we derived from Hooke's law:
$$
\displaystyle {\frac {\partial ^{2}\psi}{\partial t^{2}}}=c^{2}\nabla ^{2}\psi
$$
My question is: why does the non-relativistic non-quantum wave equation for a string on a violin look more like the wave equation for a relativistic quantum particle than a non-relativistic quantum particle? Is that purely coincidence, or could it be seen as an early hint of the equations of relativity showing up in the 1700's? Without working out the math, a priori I would have guessed that the wave equation for a non-relativistic string would look more like the Schrodinger Equation than Klein-Gordon, since relativity is not involved, ie there is no reason to expect it to have a Lorentz invariant form:
$$
\displaystyle i\hbar {\frac {\partial \psi }{\partial t}}=-{\frac {\hbar ^{2}}{2m}}\nabla ^{2}\psi.
$$
Also, it seems interesting that this equation was first derived when thinking about the motion of a 1-dimensional string. This is a long shot, but... does this by any chance relate to the fact that quantum gravity turned out to have a consistent embedding in a quantum mechanics of strings, but not in a quantum mechanics of particles?
Answer: With one time dimension and one space dimension, the wave equation's most general solution is $f(t-x/c)+g(t+x/c)$ with $f,\,g$ twice-differentiable, i.e. each solution is a sum of two speed-$c$ waves, one moving right, the other left. Equivalently, the former's value at a time $t$ depends on the behaviour of the wave's source a time $x/c$ earlier, so is a "retarded" solution, while the latter is an "advanced" solution.
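A quick numerical check of this general solution (arbitrary smooth choices $f=\sin$, $g=\cos$; central finite differences for the second derivatives):

```python
import math

c, h = 2.0, 1e-4  # wave speed and finite-difference step, both arbitrary

def u(t, x):
    # Any f(t - x/c) + g(t + x/c) should satisfy the wave equation.
    return math.sin(t - x / c) + math.cos(t + x / c)

def second(f, a, h):
    """Central second-difference approximation of f''(a)."""
    return (f(a + h) - 2 * f(a) + f(a - h)) / h**2

t0, x0 = 0.7, 1.3  # an arbitrary sample point
u_tt = second(lambda t: u(t, x0), t0, h)
u_xx = second(lambda x: u(t0, x), x0, h)
assert abs(u_tt - c**2 * u_xx) < 1e-5  # u_tt = c^2 u_xx holds numerically
```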
I've moved from the maths to the physics, but we can just as easily go the other way. If we only want the retarded solutions, a solution is of the form $f(t-x/c)$, and the PDE this fits is $u_t=-u_x/c$. But as soon as we work in multiple space dimensions, e.g. because we're thumping a drum skin, the requirement that both sides are scalars effects a generalization such as $u_t=-\hat{e}\cdot\nabla u/c$, with $\hat{e}$ an arbitrary unit vector. Too arbitrary. It picks out a privileged direction in space. (We can fix that with a spacetime-unifying idea like $\gamma^\mu u_\mu=0$, but let's put that aside for a moment.) If we use second-order derivatives too we avoid this problem, viz. $u_{tt}=c^2u_{ii}$ (summing over $i$). The only downside is there are now advanced solutions too. (The general solution in multiple space dimensions looks a fair bit more complicated, but the principle is similar; we just need a $k$-space integral instead of a $2$-term sum.)
But one of the most important ideas in science is there's more than one place you can start your theory. We're not choosing axioms, then seeing which theorems they give us, like a mathematician with a preferred set theory. The universe just "is", all at once. So let's do something very ahistorical: suppose you tried to write down a relativistic quantum field theory without first knowing what classical relativity looks like. I know that sounds crazy, but bear with me.
Uniting time and space means actions become spacetime integrals of Lagrangian densities expressible in terms of fields and spacetime derivatives, rather than just time integrals of Lagrangians expressible in terms of coordinates and their time derivatives. Even before you work out ideas like Lorentz-invariance, you already realize a field ought to vibrate in a similar retarded-plus-maybe-advanced way. This quickly gives you the Klein-Gordon equation before you know Einstein's energy-momentum relation! Or if you work with a spinor with enough components, the privileged-direction idea comes off as more workable, because of a $\gamma^\mu\partial_\mu$ operator. (Hilariously, though, it actually doesn't get rid of the advanced solutions; or on a related point, it doesn't get rid of $E<0$ solutions to $E^2=m^2c^4+p^2c^2$.) But Lagrangians tend to give us even-order time derivatives in our EOMs, although there's a way round that.
So the real question is why does plucking a string look like Klein-Gordon rather than Dirac? (Schrödinger is out of the running because of the $f(t-x/c)$ Ansatz; we need $\nabla$ to have the same degree as $\partial_t$.) Well, the unhelpful answer is "because this derivation says so". The intuitive summary of that unhelpful answer is the string's behaviour is built on top of Newton's second law, so you need a second-order time derivative. If you think we should make our inference in the opposite direction, that won't satisfy you, in which case I'll need to motivate why the string amplitude shouldn't be a spinor with at least $4$ components. However, I suspect you don't find that negative fact surprising enough to warrant an "explanation". | {
"domain": "physics.stackexchange",
"id": 84271,
"tags": "waves, schroedinger-equation, string, oscillators, klein-gordon-equation"
} |
Field momentum of Klein-Gordon Lagrangian | Question: Given the Lagrangian $L$ of the field $\phi$ the field momentum $\Pi$ reads:
$$L_{KG}=-\frac{1}{2}\partial^\mu\phi\partial_\mu\phi-\frac{1}{2}m^2\phi^2$$
$$\Pi=\frac{\partial L}{\partial(\partial_\mu\phi)}=\partial_\mu\phi$$
I dont see how the derivative above gives this result. How do we perform this derivative? The wrong way I thought is doing it like this:
$$\Pi=\frac{\partial L}{\partial(\partial_\mu\phi)}=-\frac{1}{2}\frac{\partial }{\partial\dot{\phi}}\dot{\phi}^2=-\dot{\phi}=-\partial_\mu\phi$$
PS: The Minkowski signature convention is $(-,+,+,+)$.
Answer: First there is an issue with your definition. The canonically conjugate momentum is
$$\pi=\dfrac{\partial \mathcal{L}}{\partial(\partial_0 \phi)}$$
In fact, notice that in your equation the LHS carries no indices while the RHS carries one, which should already indicate that something is wrong.
So you must differentiate $\mathcal{L}$ with respect to $\dot{\phi}=\partial_0\phi$.
Why is so? Well, this is a straightforward generalization of the canonically conjugate momentum from classical mechanics, where the momentum conjugate to $q$ is
$$p = \dfrac{\partial L}{\partial \dot{q}}$$
Now how do we compute this for the KG lagrangian? Well the Lagrangian is
$$\mathcal{L}[\phi,\partial_\mu\phi]=\frac{1}{2}\partial_\mu \phi \partial^\mu\phi-\frac{1}{2}m^2\phi^2=\frac{1}{2}\eta^{\mu\nu}\partial_\mu \phi\partial_\nu \phi-\frac{1}{2}m^2\phi^2$$
Hence it is a function of $\phi$ and $\partial_\mu \phi$ for $\mu=0,1,2,3$.
When differentiating, you should regard $\phi,\partial_0\phi,\partial_1\phi,\partial_2\phi,\partial_3\phi$ as five different and independent coordinates!
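That independence is easy to verify numerically: treat $\mathcal{L}$ as an ordinary function of five slots and differentiate the $\partial_0\phi$ slot by finite differences (a sketch with arbitrary sample values, using the $(+,-,-,-)$ signature):

```python
# L as a plain function of five independent slots (phi, d0, d1, d2, d3),
# with the (+,-,-,-) signature: L = (d0^2 - d1^2 - d2^2 - d3^2)/2 - m^2 phi^2 / 2.
def lagrangian(phi, d0, d1, d2, d3, m=1.3):
    return 0.5 * (d0**2 - d1**2 - d2**2 - d3**2) - 0.5 * m**2 * phi**2

phi, d0, d1, d2, d3 = 0.4, 0.7, -0.2, 0.9, 0.1  # arbitrary sample values
h = 1e-6
pi_num = (lagrangian(phi, d0 + h, d1, d2, d3)
          - lagrangian(phi, d0 - h, d1, d2, d3)) / (2 * h)
assert abs(pi_num - d0) < 1e-8  # pi = dL/d(d0 phi) = d0 phi, i.e. phi-dot
```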
So we compute
$$\frac{\partial \mathcal{L}}{\partial (\partial_0 \phi)}=\frac{1}{2}\eta^{\mu\nu}\frac{\partial}{\partial(\partial_0\phi)}(\partial_\mu \phi \partial_\nu\phi)$$
where the mass term $-\frac{1}{2}m^2\phi^2$ has dropped out because it doesn't depend on $\partial_0\phi$.
Next we have
$$\frac{\partial \mathcal{L}}{\partial (\partial_0 \phi)}=\frac{1}{2}\eta^{\mu\nu}\left[\frac{\partial (\partial_\mu\phi)}{\partial(\partial_0\phi)}\partial_\nu\phi + \frac{\partial(\partial_\nu\phi)}{\partial(\partial_0\phi)}\partial_\mu \phi\right] =\frac{1}{2}\eta^{\mu\nu}\left[\delta_{\mu 0}\partial_\nu\phi+\delta_{\nu 0}\partial_\mu \phi\right]$$
Using the Kronecker deltas this is
$$\frac{\partial \mathcal{L}}{\partial (\partial_0 \phi)}=\frac{1}{2}\eta^{0\nu}\partial_\nu\phi+\frac{1}{2}\eta^{\mu 0}\partial_\mu \phi.$$
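As a sanity check (an illustrative Python sketch of my own, not part of the derivation), one can treat $\phi$ and the four derivatives $\partial_\mu\phi$ as independent variables, exactly as stressed above, and take a finite difference with respect to $\partial_0\phi$ in the $(+,-,-,-)$ signature:

```python
def lagrangian(phi, d, m=1.3):
    # L = 1/2 eta^{mu nu} (d_mu phi)(d_nu phi) - 1/2 m^2 phi^2, signature (+,-,-,-)
    d0, d1, d2, d3 = d
    return 0.5 * (d0**2 - d1**2 - d2**2 - d3**2) - 0.5 * m**2 * phi**2

phi = 0.7
d = [0.4, -1.1, 0.9, 0.2]   # arbitrary values for the four derivatives
h = 1e-6

# central difference with respect to d0 = phi-dot only, holding the rest fixed
up = lagrangian(phi, [d[0] + h, d[1], d[2], d[3]])
dn = lagrangian(phi, [d[0] - h, d[1], d[2], d[3]])
pi = (up - dn) / (2 * h)

assert abs(pi - d[0]) < 1e-8   # pi = d0 phi, as derived
```

The spatial derivatives and the mass term drop out of the difference, leaving exactly $\partial_0\phi$.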
Finally if one works in the $(+,-,-,-)$ signature, $\eta^{\mu 0}= \delta_{\mu 0}$ and hence you get
$$\pi = \partial_0 \phi = \dot{\phi}.$$ | {
"domain": "physics.stackexchange",
"id": 57081,
"tags": "lagrangian-formalism, momentum, field-theory, definition, differentiation"
} |
Finding all partitions of a grid into k connected components | Question: I am working on floor planning on small orthogonal grids. I want to partition a given $m \times n$ grid into $k$ (where $k \leq nm$, but usually $k \ll nm$) connected components in all possible ways so that I can compute a fitness value for each solution and pick the best one. So far, I have the fitness evaluation at the end of the algorithm, with no branch-and-bound or other type of early termination, since the fitness function is specified by users and usually requires the complete solution.
My current approach to listing all possible grid partitions into connected components is quite straightforward, and I am wondering what optimizations can be added to avoid listing duplicate partitions. There must be a better way than what I have right now. I know the problem is NP-hard, but I would at least like to push my algorithm from brute force to a smart and efficient approach.
Overview
For better visualization and description I will reformulate the task into an equivalent one: paint the grid cells using $k$ colors so that each color forms a single connected component (with respect to the 4-neighborhood) and, of course, the whole grid is completely painted.
My approach so far:
Generate all seed scenarios. A seed scenario is a partial solution where each color is applied to a single cell only, the remaining cells are yet empty.
Collect all possible solutions for each seed scenario by expanding the color regions in a DFS manner.
Filter out duplicate solutions with help of a hash-table.
Seed scenarios
I generate the seed scenarios as permutations of $k$ unique colors and $nm-k$ void elements (without repetition among the voids). Hence, the total number is $(nm)! / ((nm-k)!)$. For example, for a $1 \times 4$ grid and colors $\{0, 1\}$ with void denoted as $\square$, the seed scenarios are:
$[0 1 \square \square]$
$[0 \square 1 \square]$
$[0 \square \square 1]$
$[1 0 \square \square]$
$[1 \square 0 \square]$
$[1 \square \square 0]$
$[\square 0 1 \square]$
$[\square 0 \square 1]$
$[\square 1 0 \square]$
$[\square 1 \square 0]$
$[\square \square 0 1]$
$[\square \square 1 0]$
Seed growth / multicolor flood-fill
I assume the painting to be performed in a fixed ordering of the colors. The seed scenario always comes with the first color set as the current one. New solutions are then generated either by switching to the next color or by painting empty cells with the current color.
//PSEUDOCODE
buffer.push(seed_scenario with current_color:=0);
while(buffer not empty)
{
partial_solution := buffer.pop();
if (partial_solution.available_cells.count == 0)
result.add(partial_solution);
else
{
buffer.push(partial_solution.nextColor()); //copy solution and increment color
buffer.pushAll(partial_solution.expand()); //kind-of flood-fill produces new solutions
}
}
partial_solution.expand() generates a number of new partial solutions. All of them have one additional cell colored by the current color. It examines the current region boundary and tries to paint each neighboring cell by the current color, if the cell is still void.
partial_solution.nextColor() duplicates the current partial solution but increments the current painting color.
This simple seed growth enumerates all possible solutions for the seed setup. However, a pair of different seed scenarios can produce identical solutions. There are indeed many duplicates produced. So far, I do not know how to take care of that. So I had to add the third step that filters duplicates so that the result contains only distinct solutions.
Question
I assume there should be a way to get rid of the duplicates, since that is where the efficiency suffers the most. Is it possible to merge the seed generation with the painting stage? I started to think about some sort of dynamic programming, but I have no clear idea yet. In 1D it would be much easier, but the 4-connectivity in a 2D grid makes the problem much harder. I tried searching for solutions or publications, but didn't find anything useful yet. Maybe I am searching with the wrong keywords. So any suggestions to my approach or pointers to literature are very much appreciated!
Note
I found Grid Puzzle Split Algorithm, but not sure if the answers can be adapted to my problem.
Further thoughts (update #1)
I started to think in the following direction. If there are two connected components, their union will be connected as well. So I could proceed in a divide-and-conquer way:
Generate all distinct 2-partitions (connectivity condition must hold of course).
For each solution from (1) paint one component with one of the available colors and recursively apply (1) to the second component using the remaining colors. Terminate each branch once all colors have been used for at least one cell.
This is a very rough idea, but I believe it should avoid duplicates. I will investigate further if I can prove it. But still, how to generate all distinct 2-partitions of a 2D grid efficiently remains an open question for me.
Answer: Two components
Here is an algorithm you can use to enumerate the ways to partition any graph (including a grid) into $k=2$ connected components, say a red component and a blue component. Let's define the "size" of such a partition to be the number of vertices in the red component.
The algorithm: generate all partitions of size 1; then generate all partitions of size 2; then all partitions of size 3; and so on.
Given all partitions of size $s$, you can enumerate all partitions of size $s+1$ as follows: pick a partition of size $s$, let $R$ denote the set of red vertices in that partition, pick a blue vertex $v \notin R$ that is adjacent to some red vertex in $R$; construct a new partition of size $s+1$ with red vertices $R' = R \cup \{v\}$ and blue vertices $V \setminus R'$; and test whether the blue vertices form a connected component.
The base case is that you can easily enumerate all partitions of size 1: each is just a single vertex, and you can choose any one vertex you want. (If you consider two partitions equivalent if one can be obtained from the other by swapping colors, and you only want to enumerate partitions up to equivalence, then it suffices to enumerate only those partitions where the first vertex is colored red. In that case, there is only one partition of size 1: the first vertex is red and the rest are blue.)
To make this run efficiently, I suggest you store a hashtable of all partitions of size $1$, a hashtable of all partitions of size $2$, and so on. The hash function will map a partition to a unique hashcode, by hashing the set of red vertices. This way you can efficiently test whether a partition generated using the above procedure is new or a duplicate, and avoid adding any partition more than once. This will also allow you to enumerate all partitions of a given size.
To help make it more efficient to enumerate all vertices that are adjacent to some red vertex, I suggest you store each subset as a set of red vertices on the fringe and a set of red vertices on the interior. (The "fringe" are the red vertices in $R$ that are adjacent to at least one blue vertex; the interior are the rest of the red vertices.) This makes it easy to enumerate over all blue vertices that are adjacent to some red vertex in $R$ (by traversing the fringe and enumerating their neighbors).
With these methods, you can enumerate all such connected components in at most $O(N^2 2^N)$ time, where $N$ is the number of vertices in the graph ($N=mn$ in your example), and probably substantially less in practice.
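A minimal sketch of this size-growing enumeration for the grid case (Python, with names of my own choosing; a set of frozensets stands in for the hashtable that filters duplicates):

```python
from itertools import product

def two_partitions(m, n):
    """Enumerate red sets R of an m x n grid such that both R and its
    complement are connected; each R determines one ordered (red, blue)
    partition. Red sets are grown one frontier cell at a time."""
    cells = frozenset(product(range(m), range(n)))

    def neighbours(c):
        y, x = c
        return [p for p in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                if p in cells]

    def connected(s):
        if not s:
            return False
        seen, stack = set(), [next(iter(s))]
        while stack:
            c = stack.pop()
            if c not in seen:
                seen.add(c)
                stack.extend(p for p in neighbours(c) if p in s and p not in seen)
        return seen == set(s)

    layer = {frozenset({c}) for c in cells}   # all red sets of size 1
    found = []
    while layer:
        found += [R for R in layer if connected(cells - R)]
        # grow every red set by one adjacent cell; the set comprehension
        # deduplicates red sets reached along different growth orders
        layer = {R | {p} for R in layer if len(R) < len(cells) - 1
                 for c in R for p in neighbours(c) if p not in R}
    return found

assert len(two_partitions(2, 2)) == 12   # 4 singles + 4 pairs + 4 triples
assert len(two_partitions(1, 4)) == 6    # red must be a prefix or suffix
```

Grown sets are connected by construction, so only the blue side needs the explicit connectivity test.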
Multiple components
We can generalize this to handle an arbitrary number $k\ge 2$ of connected components, as follows. First, pick a partition of the graph into 2 components, one red and one blue. Then, pick a way to partition the blue component into two connected components, say one violet and one cyan. Repeat $k-1$ times, in each step partitioning the last component, until you have $k$ components. If at each stage you enumerate over all choices, at the end you will have enumerated all ways to partition the graph into $k$ connected components.
(If you consider two partitions equivalent if one can be obtained from the other by permuting the colors, then it suffices to enumerate the ways to partition into two components up to equivalence.)
The running time of this algorithm will be very bad: it might be close to $k^N$. So, this will only be useful when the number $N$ of vertices in the graph is very small. | {
"domain": "cs.stackexchange",
"id": 16682,
"tags": "graphs, dynamic-programming, partitions"
} |
How can a sample that is in two different states be at a uniform temperature? | Question: In my chemistry course, the professor has discussed how a sample that contains both solid and liquid water would be at 0 degrees Celsius no matter what the proportion of solid to liquid water is. I find this concept to be confusing, because wouldn't the areas of the sample that had already changed from solid to liquid begin increasing in temperature even while the other parts of the sample continue to be in the process of fusion?
The other part of this idea that confuses me is that if the sample was at a uniform temperature, then why do bodies of water (ex. lakes that freeze in the wintertime) continue to have different temperatures of water within them?
Answer: What is confusing you is the thermodynamic concept of equilibrium. A system at equilibrium has no external source of energy flowing into it and is perfectly mixed internally. These are not common circumstances in the real world.
In a system that is in equilibrium, a solid and its corresponding liquid can exist at one temperature. Any extra energy added will, when equilibrium is restored, serve to convert the solid to liquid (this takes energy). Since, at equilibrium, we have perfect mixing this will not result in a temperature change. For water it takes a lot of energy to convert ice to liquid water.
Of course, in the real world, nothing is at equilibrium and there is always a flow of energy in or out of the system. And for an isolated system at equilibrium, when energy is added there will be a period where parts of the system will be hotter than others. But, since being at equilibrium means the system will be perfectly mixed, this is temporary and the system will regain equilibrium with a constant temperature but a different mix of water and ice.
So your professor is right if he specifies that the system is in equilibrium, but you are right in the real world for most systems. Equilibrium is a useful idea for understanding the limiting behaviour of systems, even if it is a little unrealistic as a description of the real world.
"domain": "chemistry.stackexchange",
"id": 4753,
"tags": "water, equation-of-state"
} |
How many nodes can publish to a single topic? | Question:
Hello everybody. I'm new to ROS and I have a simple question.
How many nodes can publish to a single topic?
thank you.
Originally posted by hamed ghasemi on ROS Answers with karma: 3 on 2018-10-07
Post score: 0
Answer:
Theoretically? An infinite number of publishers can publish to the same topic.
Practically? Until you run out of CPU, memory or network bandwidth. But even then there is no real limit, things will just get really, really, really slow and unreliable.
Originally posted by gvdhoorn with karma: 86574 on 2018-10-07
This answer was ACCEPTED on the original site
Post score: 10 | {
"domain": "robotics.stackexchange",
"id": 31874,
"tags": "ros, ros-kinetic, topic, node"
} |
Change code to use functional style | Question: I tried to solve a programming contest problem in Scala, but I have a feeling that it could be done in a more functional way. I compared it with the solution written in imperative style and it is not shorter or less complex at all. Do you have any idea how to improve it or approach the problem in a more functional way?
Here is my solution:
object Sunshine extends App {
def tanning(bedNumber:Int, part1 : List[Char], part2: List[Char], inBed: List[Char], tanned:Int, walkedAway: Int) : Int = {
val actCustomer =part2.head
val tanningTuple=(part1.indexOf(actCustomer),inBed.indexOf(actCustomer),bedNumber>inBed.length,part2.tail)
tanningTuple match {
case(_,_,_,Nil) => walkedAway
// enter,can be tanned
case (-1,-1,true,_) =>
tanning(bedNumber,actCustomer::part1,part2.tail,actCustomer::inBed,tanned,walkedAway)
// enter, can not be tanned
case (-1,-1,false,_) =>
tanning(bedNumber,actCustomer::part1,part2.tail,inBed,tanned,walkedAway)
//leave, not in bed
case (x ,-1,_,_) if x>=0 =>
tanning(bedNumber,actCustomer::part1,part2.tail,inBed,tanned,walkedAway+1)
//leave, in bed
case (x,y,_,_) if x>=0 && y>=0 =>
tanning(bedNumber,actCustomer::part1,part2.tail,inBed-actCustomer,tanned+1,walkedAway)
}
}
println(tanning(2,List(),"ABBAJJKZKZ".toList,List(),0,0))
println(tanning(3,List(),"GACCBDDBAGEE".toList,List(),0,0))
println(tanning(3,List(),"GACCBGDDBAEE".toList,List(),0,0))
println(tanning(1,List(),"ABCBCA".toList,List(),0,0))
}
Answer: A good way to think about this problem is to list the different cases when a customer is encountered in the list:
The person arrives and there is a bed available => the person gets a tan and the number of available bed decreases by one
The person arrives and there is not a bed available => the person won't tan
The person leaves and they got a tan => the number of available beds increases by one
The person leaves and they didn't get a tan => nothing special happens
When there are no more customers then we need to return the number of people who didn't tan.
This is embodied by the following function:
def step(avail: Int, tanned: Set[Char], not_tanned: Set[Char], customers: List[Char]): Int = {
customers match {
// case 3 above
case c::cs if tanned(c) => step(avail+1, tanned, not_tanned, cs)
// case 4 above
case c::cs if not_tanned(c) => step(avail, tanned, not_tanned, cs)
// case 1 above
case c::cs if avail > 0 => step(avail-1, tanned+c, not_tanned, cs)
// case 2 above
case c::cs if avail == 0 => step(avail, tanned, not_tanned+c, cs)
// exit condition, no more customers
case Nil => not_tanned.size
}
}
Since the problem statement says that no customer visits more than once and everyone who doesn't get a tan leaves before the people getting tans do, it isn't necessary to remove people from the sets or maintain a list of waiting customers in case a bed opens up.
The step function above needs to be called with an initial state which, assuming nbeds is the number of beds in the salon and customers is the string specified in the problem, is
step(nbeds, Set(), Set(), customers.toList)
It is possible to transform the step function into a fold which would remove the explicit list traversal, but in my opinion that would hurt the readability of the code.
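For anyone who wants to check the case analysis independently, here is a quick transliteration of step into Python (iterative rather than recursive; the expected counts for the four sample inputs below are worked out by hand from the rules above, not given in the problem statement):

```python
def howmanyleft(nbeds, customers):
    avail, tanned, not_tanned = nbeds, set(), set()
    for c in customers:
        if c in tanned:            # case 3: leaves after tanning, frees a bed
            avail += 1
        elif c in not_tanned:      # case 4: leaves without having tanned
            pass
        elif avail > 0:            # case 1: arrives and a bed is available
            avail -= 1
            tanned.add(c)
        else:                      # case 2: arrives and no bed is free
            not_tanned.add(c)
    return len(not_tanned)

assert howmanyleft(2, "ABBAJJKZKZ") == 0
assert howmanyleft(3, "GACCBDDBAGEE") == 1   # only D walks away
assert howmanyleft(3, "GACCBGDDBAEE") == 0
assert howmanyleft(1, "ABCBCA") == 2         # B and C walk away
```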
A complete version of the code is
object tanning {
def howmanyleft(nbeds: Int, customers: String): Int = {
def step(avail: Int, tanned: Set[Char], not_tanned: Set[Char], customers: List[Char]): Int = {
customers match {
case c::cs if tanned(c) => step(avail+1, tanned, not_tanned, cs)
case c::cs if not_tanned(c) => step(avail, tanned, not_tanned, cs)
case c::cs if avail > 0 => step(avail-1, tanned+c, not_tanned, cs)
case c::cs if avail == 0 => step(avail, tanned, not_tanned+c, cs)
case Nil => not_tanned.size
}
}
step(nbeds, Set(), Set(), customers.toList)
}
def main(args: Array[String]) {
println(howmanyleft(2, "ABBAJJKZKZ"))
println(howmanyleft(3, "GACCBDDBAGEE"))
println(howmanyleft(3, "GACCBGDDBAEE"))
println(howmanyleft(1, "ABCBCA"))
}
} | {
"domain": "codereview.stackexchange",
"id": 2083,
"tags": "scala, functional-programming"
} |
2D FFT how is it done? | Question: I have a texture that I want to perform the 2DFFT on, and I am trying to understand the "Row/Column" or "Column/Row" idea, but I am unsure if I have understood it correctly. I'm hoping someone can explain it a bit better, because my current understanding seems to require a lot of FFTs, which does not seem right to me.
Say I have a 256 by 256 texture and I perform a 2DFFT on the image. Let's say I chose column/row order, so I do columns first. Does that mean I first perform:
256 1DFFT's... one for each column "image.x", containing 256 samples:
1DFFT for x = 0 -> Samples: image[0,0], image[0,1] ... image[0,255]
1DFFT for x = 1 -> Samples: image[1,0], image[1,1] ... image[1,255]
...
1DFFT for x = 255 -> Samples: image[255,0], image[255,1] ... image[255,255]
Then I have to do another 256 1DFFT's along the rows "image.y", containing 256 samples:
1DFFT for y = 0 -> Samples: image[0,0], image[1,0] ... image[255,0]
1DFFT for y = 1 -> Samples: image[0,1], image[1,1] ... image[255,1]
...
1DFFT for y = 255 -> Samples: image[0,255], image[1,255] ... image[255,255]
So in total I do 256 + 256 = 512 1DFFTs? Is that correct?
Answer: Yes, you are right. It is a lot of FFTs, but remember it is a lot of samples as well.
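To convince yourself of the separability, you can compare the column-pass-then-row-pass scheme from the question against the 2D DFT computed straight from its definition. This sketch uses a direct $O(n^2)$ 1D DFT in place of an FFT, since only the equality of the two routes matters here:

```python
import cmath

def dft1(v):
    # direct O(n^2) 1-D DFT; stands in for an FFT (the outputs are identical)
    n = len(v)
    return [sum(v[k] * cmath.exp(-2j * cmath.pi * j * k / n) for k in range(n))
            for j in range(n)]

def dft2_direct(img):
    # 2-D DFT straight from the double-sum definition, for comparison
    M, N = len(img), len(img[0])
    return [[sum(img[y][x] * cmath.exp(-2j * cmath.pi * (u * y / M + v * x / N))
                 for y in range(M) for x in range(N))
             for v in range(N)]
            for u in range(M)]

def dft2_separable(img):
    # pass 1: one 1-D DFT per column; pass 2: one 1-D DFT per row
    cols_done = [list(row) for row in zip(*(dft1(list(c)) for c in zip(*img)))]
    return [dft1(row) for row in cols_done]

img = [[(3 * y + x) % 5 for x in range(4)] for y in range(4)]  # tiny 4x4 "texture"
a, b = dft2_direct(img), dft2_separable(img)
assert all(abs(a[u][v] - b[u][v]) < 1e-9 for u in range(4) for v in range(4))
```

For the 256 by 256 case this is exactly the 256 + 256 = 512 one-dimensional transforms described in the question.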
If you have a FFT algorithm with complexity $O(n \cdot log(n))$, then for the 2D scenario you will have an algorithm with complexity $O(M \cdot (N \cdot log(N)) + N\cdot (M \cdot log(M)))$
but if you express this in terms of $M \cdot N$, what you get is.
$O(M \cdot N \cdot (log(N) + log(M)))$ and finally with the log sum identity you get $O(M \cdot N \cdot log( M \cdot N ))$
The complexity in terms of number of samples is the same | {
"domain": "dsp.stackexchange",
"id": 9905,
"tags": "fft"
} |
Efficient lookup when key is made of multiple elements and elements can be empty | Question: I want to create a map where the key contains multiple elements and the elements can be empty/null. The empty values are treated as "anything". I want the lookup function to match when the stored key equals the lookup value or is a generalised version of it - the index key has empties where the lookup value has values. I think the formalisation would be "the lookup value logically subsumes the index key". I also want the lookup function to return the most specific index key, that is, the key with the fewest empties.
For example, if the data is stored in a (<key>, <value>) tuple with the key being a tuple of the elements and ? representing the empty set/null value:
((1, ?, 6, 3), "hey")
((1, 5, 6, 3), "hi")
((2, ?, ?, ?), "hello")
So lookup((2, 4, 5, 6)) -> "hello". And lookup((1, 5, 6, 3)) -> "hi" because (1, 5, 6, 3) is more specific than (1, ?, 6, 3).
A simple solution is to store them as shown above and simply scan through them. This would take $O(nm)$ where $n$ is the number of entries and $m$ is the number of elements in the key. Checking in most-to-least-specific order would mean the first match can be returned immediately.
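For concreteness, the brute-force scan might look like this (an illustrative Python sketch of my own; None plays the role of the empty element $?$):

```python
def lookup(index, query):
    """Return the value for the most specific key matching `query`.
    Keys are tuples; None acts as a wildcard matching anything."""
    # most specific first = fewest wildcards first
    for key, value in sorted(index, key=lambda kv: kv[0].count(None)):
        if all(k is None or k == q for k, q in zip(key, query)):
            return value
    return None

index = [((1, None, 6, 3), "hey"),
         ((1, 5, 6, 3), "hi"),
         ((2, None, None, None), "hello")]

assert lookup(index, (2, 4, 5, 6)) == "hello"
assert lookup(index, (1, 5, 6, 3)) == "hi"    # beats the (1, ?, 6, 3) entry
assert lookup(index, (1, 9, 6, 3)) == "hey"
assert lookup(index, (3, 3, 3, 3)) is None
```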
Is there an approach that could improve on this?
Thank you
Answer: This is the problem of building a data structure for storing rectangles in $m$-dimensional space, so you can answer stabbing queries efficiently.
For instance, if your keys have 4 elements, then we are in $m=4$ dimensions. The key $(1,5,6,3)$ corresponds to a single point in 4-dimensional space. The key $(1,?,6,3)$ corresponds to a line in 4-dimensional space. The key $(2,?,?,?)$ corresponds to an (infinite) box in 4-dimensional space.
So, in general, each key can be thought of as an (axis-aligned) box in $m$-dimensional space. A query is a point in $m$-dimensional space, and you want to find all boxes that contain that point. That is sometimes known in the literature as a "stabbing query" (particularly when we're talking about intervals, i.e., $m=1$).
There are many data structures for this problem. Take a look at R-trees, for example. You can also look at $m$-dimensional segment trees. It is known that you can build a $m$-dimensional segment tree in $O(n (\lg n)^m)$ time and $O(n (\lg n)^m)$ space; each lookup can be answered in $O(k + (\lg n)^m)$ time, where $k$ is the number of entries that match the query. Thus, segment trees will likely be a good representation if the dimension $m$ is small. (Optimizations: The exponent can be reduced slightly from $m$ to $m-1$. There are also special algorithms that achieve even better performance in the case of $m=2$, $m=3$, or $m=4$.)
See also Data structure to hold list of rectangles?. | {
"domain": "cs.stackexchange",
"id": 2720,
"tags": "time-complexity"
} |
What is the value of a quantum field? | Question: As far as I'm aware (please correct me if I'm wrong) quantum fields are simply operators, constructed from a linear combination of creation and annihilation operators, which are defined at every point in space. These then act on the vacuum or existing states. We have one of these fields for every type of particle we know. E.g. we have one electron field for all electrons. So what does it mean to say that a quantum field is real or complex valued. What do we have that takes a real or complex value? Is it the operator itself or the eigenvalue given back after it acts on a state? Similarly when we have fermion fields that are Grassmann valued what is it that we get that takes the form of a Grassmann number?
The original reason I considered this is that I read boson fields take real or complex values whilst fermionic fields take Grassmann variables as their values. But I was confused by what these values actually tell us.
Answer: When physicists say that a quantum field $\phi(x)$ is real-valued, they are usually referring to Feynman's path integral formulation of quantum field theory, which is equivalent to Schwinger's operator formulation.
The values of a field $\phi(x)$ in the path integral formulations are numbers. E.g.:
If the numbers are real, we say that the field $\phi(x)$ is real-valued. (Such a field $\phi(x)$ typically corresponds to a Hermitian field operator $\hat{\phi}(x)$ in the operator formalism.)
If the numbers are complex, we say that the field $\phi(x)$ is complex-valued.
If the numbers are Grassmann-odd, we say that the field $\phi(x)$ is Grassmann-odd. (The numbers in this case are so-called supernumbers. See also this Phys.SE post.) | {
"domain": "physics.stackexchange",
"id": 5882,
"tags": "quantum-field-theory, operators, path-integral, fermions, grassmann-numbers"
} |
Three numbers calculator | Question: Edit: I made a new calculator after taking your input New Calculator
I made this three-function calculator. I have been learning C for a couple of months, and I learned C++ through the internet.
This is my first "real" program in C++. It started really messy, but I did a lot of refining, debugging, and deleting of unnecessary code, and this is the final result.
If you see any way to improve it or something I did wrong please tell me.
// Three numbers Calculator
#include "stdafx.h"
#include<iostream>
using namespace std;
int main()
{
cout << "Enter action as # to exit program" << endl;
cout << "Possible actions:+,-,*,/\n" << endl;
while (1) //loop
{
long double num1, num2, num3, total;
char action1, action2;
cin >> num1 >> action1 >> num2 >> action2 >> num3;
if (action1 == '/' && num2 == 0 || action2 == '/' && num3 == 0)
cout << "You can't divide by zero" << endl;
else if ((action2 == '*' || action2 == '/') && (action1 == '-' || action1 == '+')) //action2 will be performed before action1 (Order of operation)
{
switch (action2) //I didn't include the options for '+' or '-' because the if statement requires action2 to be '*' or '/'.
{
case('/'):
total = num2 / num3;
break;
case('*'):
total = num2*num3;
break;
default:
cout << "Input not recognized";
break;
}
switch (action1) //I didn't include the options for '*' or '/' because the if statement requires action1 to be '+' or '-'.
{
case('+'):
cout << num1 << "+" << num2 << action2 << num3 << "=" << total + num1;
break;
case('-'):
cout << num1 << "-" << num2 << action2 << num3 << "=" << num1 - total;
break;
default:
cout << "Input not recognized";
break;
}
}
else //action1 will be performed before action2 (Order of operation)
{
switch (action1)
{
case('+'):
total = num1 + num2;
break;
case('-'):
total = num1 - num2;
break;
case('/'):
total = num1 / num2;
break;
case('*'):
total = num1*num2;
break;
case('#'):
system("PAUSE");
return 0;
break;
default:
cout << "Input not recognized";
break;
}
switch (action2)
{
case('+'):
cout << num1 << action1 << num2 << "+" << num3 << "=" << total + num3;
break;
case('-'):
cout << num1 << action1 << num2 << "-" << num3 << "=" << total - num3;
break;
case('/'):
cout << num1 << action1 << num2 << "/" << num3 << "=" << total / num3;
break;
case('*'):
cout << num1 << action1 << num2 << "*" << num3 << "=" << total*num3;
break;
case('#'):
system("PAUSE");
return 0;
break;
default:
cout << "Input not recognized";
break;
}
}
cout << "\n\n";
}
}
Update:
Input: 7+8/2
Output: 7+8/2=11
Answer: I see a number of things that may help you improve your code.
Fix your formatting
The code as posted has inconsistent indenting which makes it hard to read. Choose a particular style and apply it consistently to make your code easier to read.
Don't abuse using namespace std
Putting using namespace std at the top of every program is a bad habit that you'd do well to avoid.
Validate the input
The code does not seem to be able to handle negative numbers as in the expression:
-3+2
If it's not intended to handle negative numbers, it would be good to let the user know. If it is, then there's a bug that should be fixed.
Don't use std::endl if you don't really need it
The difference between std::endl and '\n' is that '\n' just emits a newline character, while std::endl actually flushes the stream. This can be time-consuming in a program with a lot of I/O and is rarely actually needed. It's best to only use std::endl when you have some good reason to flush the stream and it's not very often needed for simple programs such as this one. Avoiding the habit of using std::endl when '\n' will do will pay dividends in the future as you write more complex programs with more I/O and where performance needs to be maximized.
Don't use system("PAUSE")
There are two reasons not to use system("cls") or system("PAUSE"). The first is that it is not portable to other operating systems, which you may or may not care about now. The second is that it's a security hole, which you absolutely must care about. Specifically, if some program is defined and named PAUSE or pause, your program will execute that program instead of what you intend, and that other program could be anything. First, isolate this into a separate function pause() and then modify your code to call that function instead of system. Then rewrite the contents of that function to do what you want using C++. For example:
void pause() {
getchar();
}
General portability
This code could be made portable if you omit the Windows-only include files #include "stdafx.h". If you must have stdafx.h, consider wrapping it so that the code is portable:
#ifdef WINDOWS
#include "stdafx.h"
#endif
Separate input, output and calculation
To the degree practical it's usually good practice to separate input, output and calculation for programs like this. By putting them in separate functions, it isolates the particular I/O for your platform (which is likely to be unique to that platform or operating system) from the logic of the program (which does not depend on the underlying OS). So for example, the main program might call a function to fetch and validate the input, a second function to actually perform the mathematical operations and then a third to emit the results.
Omit spurious parentheses
The argument to a case statement or return statement does not need parentheses, so instead of this:
case('#'):
One could write this:
case '#':
Consolidate strings
Right now, the string "Input not recognized" appears four times in the program. I'd recommend creating a const variable instead and then using that instead of repeating the string. | {
"domain": "codereview.stackexchange",
"id": 23264,
"tags": "c++, calculator"
} |
Could there be any reason to prefer convolution-based calculation of autocorrelation? | Question: Theoretically both ways of calculating the autocorrelation function are identical: straightforward convolution and the Fourier-based method, where in practice we use the FFT/IFFT. As is well known, the computational complexity is $\mathcal O(N^2)$ and $\mathcal O(N\log N)$ for them respectively.
So the question is: are there any reasons why one could prefer convolution over the FFT-based method?
I can only imagine the memory-related argument (e.g. when we don't have any extra memory for calculations), but thinking of regular PCs, by the time memory constraints start to play a role, the time complexity of $N^2$ will already be a killer for the first method.
Are there any particular numerical issue of the FFT that one needs to be aware of here (and choose convolution to avoid it)?
Answer:
And as it is well known, the computational complexity is $\mathcal{O}(N^2)$ and $\mathcal{O}(N \log N)$ for them respectively.
$\mathcal{O}$ notation ignores any constants that determine exactly how fast those functions run. Depending on the constants out front, it may be faster to compute the convolution (or correlation) directly.
For example, the Fourier transform method to convolve may take $10^5 N \log N$ seconds while the direct method may take $10^{-3} N^2$ seconds. If $N = 100$, the direct method is faster.
In general for large $N$, the Fourier method tends to be faster, often by a large factor. For more detail on computing which method is faster, see scipy PR #5608.
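Whichever route is faster, the two must agree numerically. Here is a pure-Python sketch (illustrative only, with a direct DFT standing in for the FFT) comparing the $O(N^2)$ lag-by-lag sum against the zero-padded Wiener-Khinchin route:

```python
import cmath

def dft(v):
    n = len(v)
    return [sum(v[k] * cmath.exp(-2j * cmath.pi * j * k / n) for k in range(n))
            for j in range(n)]

def idft(V):
    n = len(V)
    return [sum(V[j] * cmath.exp(2j * cmath.pi * j * k / n) for j in range(n)) / n
            for k in range(n)]

x = [0.5, -1.0, 2.0, 0.25, -0.75, 1.5]
N = len(x)

# direct autocorrelation for lags -(N-1) .. N-1: the O(N^2) route
direct = [sum(x[i] * x[i + lag] for i in range(N) if 0 <= i + lag < N)
          for lag in range(-(N - 1), N)]

# transform route: zero-pad to 2N-1 so the circular correlation equals the
# linear one, multiply by the conjugate spectrum, transform back
n = 2 * N - 1
X = dft(x + [0.0] * (n - N))
r = idft([Xj * Xj.conjugate() for Xj in X])        # lags 0..N-1, then -(N-1)..-1
via_fft = [r[(k + N) % n].real for k in range(n)]  # reorder to -(N-1)..N-1

assert all(abs(d - f) < 1e-9 for d, f in zip(direct, via_fft))
```

The zero-padding step is the one numerical pitfall of the transform route: without it, the circular correlation wraps around and differs from the direct sum.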
Below is a graph that times the Fourier transform method of convolution as fftconvolve and the direct method as convolve. We see that the direct method is faster for many arrays of practical size.
Note: this graph is when convolving 1D signals. For more higher dimensional signals (2D, 3D, etc) fftconvolve tends to be faster more often
To time these graphs, I used scipy.signal.fftconvolve and np.convolve with the default parameters. As of scipy version 0.19, scipy.signal.convolve will choose between the direct and Fourier transform methods to choose the faster method. | {
"domain": "dsp.stackexchange",
"id": 4657,
"tags": "fft, convolution, autocorrelation, fast-convolution"
} |
Why does a ball roll? | Question: I searched for my question and I found many results but all of them eventually try to solve something specific (like solving for the kinetic energy or finding when does the ball stop).
what I want is why does a ball roll at all?
Pretend I shot a ball and the ball started slipping on the ground; pretend it doesn't leave the ground and that the ground isn't inclined.
Now I thought of friction force exerting a torque on the ball, if the force is $F_c$ then the torque should be:
$$\tau = u \cdot F_c \cdot r$$ (there is no $\sin$ factor because $F_c$ is perpendicular to $r$)
The existence of $u$ is due to the fact that the force doesn't just rotate the ball but also slows it down.
What's missing in my assumption is the speed of the ball, it seems to me that the faster the ball the faster it rotates, I don't have an explanation for this because $F_c$ doesn't relate to velocity, $F_c$ is given by :
$$F_c=u_k\times N$$
where $N$ is the normal force and $u_k$ is the kinetic friction coefficient.
So what's a better explanation of this phenomenon?
Edit
Something sparked in my mind, the ball doesn't just slip, in reality it bounces, we just don't see the bounces.
But I don't know how to put this into a good formula or explanation.
PS
By "the faster the ball" I mean its initial velocity (right after shooting it), because the ball slows down.
Answer: Consider two different regimes.
First - when the ball is still slipping, the relative motion of the ground and ball causes a force on the ball which (a) slows down the center of mass and (b) increases the angular speed. Once the ball rolls without slipping this force disappears.
Second - when the ball does not slip, the contact point must be stationary. This means that the angular velocity $\omega=\frac{v}{r}$. Now in that expression $v$ is the velocity of the rolling ball which is less than the velocity you launched it with because of what I wrote above. The time it takes to stop slipping depends on the initial velocity - as you said the torque is independent of the speed but the ball will just keep slipping (and rotating faster) as long as it takes to reach the point of no slip.
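A minimal numerical sketch of the first regime (assumed values for $\mu_k$, $r$, $v_0$; solid sphere with $I = \frac{2}{5}mr^2$): friction decelerates the center of mass while its torque spins the ball up, and slipping stops at $t = 2v_0/(7\mu_k g)$ — consistent with the point above that a larger initial velocity just means the ball slips (and spins up) for longer:

```python
import numpy as np

mu, g, r, v0 = 0.3, 9.81, 0.1, 5.0   # assumed friction coeff, gravity, radius, launch speed

dt = 1e-5
v, w, t = v0, 0.0, 0.0
while v > w * r:                      # still slipping: contact point moves relative to ground
    v -= mu * g * dt                  # dv/dt = -mu*g  (friction slows the center of mass)
    w += (5 * mu * g) / (2 * r) * dt  # dw/dt = tau/I = mu*m*g*r / (2/5 m r^2)
    t += dt

# Analytic result: slipping stops at t = 2*v0/(7*mu*g) with v = 5*v0/7
print(t, 2 * v0 / (7 * mu * g))
print(v, 5 * v0 / 7)
```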
I wrote some earlier answers analyzing this motion in more detail - see for example here or there | {
"domain": "physics.stackexchange",
"id": 21357,
"tags": "rotational-dynamics, friction"
} |
How or where I could experience sonoluminescence in daily life? | Question: Sonoluminescence is a known phenomenon, even if a conclusive explanation for its origin is not given. I wonder if I could ever hear and see sonoluminescence somehow or somewhere in daily life, or if it's a very rare event that only a lab can reproduce.
Thanks in advance for your answers.
Answer: Since the effect only happens under very high energies, one would think that this
or if it's a very rare event that only a lab can reproduce
is the most likely alternative. However, sonoluminescence has been observed at least in one case in nature:
Pistol shrimps can snap a specialized claw shut to create a cavitation bubble that generates acoustic pressures of up to 80 kPa at a distance of 4 cm from the claw, strong enough to kill small fish.
The snap can also produce sonoluminescence from the collapsing cavitation bubble. As it collapses, the cavitation bubble reaches temperatures of over 5,000 K (the surface temperature of the sun is around 5,800 K), although the intensity is still too low to be seen by the naked eye.
Another group of crustaceans, the mantis shrimp, contains species whose club-like forelimbs can strike so quickly and with such force as to induce sonoluminescent cavitation bubbles upon impact. The intensity and the duration of the light are also, however, too weak and too short to be detected without equipment. | {
"domain": "physics.stackexchange",
"id": 56104,
"tags": "visible-light, acoustics"
} |
Computational methods to obtain relative energies of electronic configurations of atoms | Question: When one learns the Aufbau principle to “predict” electronic configurations, and the $(n + \ell)$ ordering rule (or Madelung rule), one also learns of the exceptions to the rules… some of which (Cr, Cu, Mo, Ag, Au) can be half-explained by adding another “rule”, some of which (Nb, Ru, Rh, …) cannot. We have on this site our fair share of electronic-configuration questions dealing with these: Nb, Pt, Pd, Cu, Th, etc.
My question here is not about rules or theories to rationalize and understand these complicated electronic configurations, but rather about computational tools to predict them. What computational methods (and software) would one use to compute the different energies of several electronic configurations of an atom?
For example, take niobium. How would I answer, with computational quantum chemistry methods, the following question:
what is the energy difference between the 5s1 4d4 and 5s2 4d3 electronic configurations of niobium?
And would that method also give some interpretation in terms of what these energy differences arise from?
Answer: The different occupations you suggest correspond to different spin states. Therefore, they are quite simple to distinguish.
On the other hand, the ground states are often degenerate (in terms of the 5 equivalent d orbitals available). Therefore the lowest (and sufficient) level of theory is CAS-SCF with inclusion of the s- and d-orbitals.
I tried several calculations using ORCA in def2-svp basis and the results correspond to what one would expect, the time to perform the calculations is in order of seconds.
Zr (should be $4d^2$ $5s^2$, ie. MULT=3)
CAS-SCF STATES FOR BLOCK 1 MULT= 5 NROOTS= 1
ROOT 0: E= -46.4247189198 Eh
0.62280 [ 0]: 211110
0.37690 [ 2]: 211011
CAS-SCF STATES FOR BLOCK 2 MULT= 3 NROOTS= 1
ROOT 0: E= -46.4308999710 Eh
0.49082 [ 0]: 221100
0.47988 [ 2]: 221001
0.01125 [ 18]: 210111
0.00276 [ 27]: 201120
Cr (should be $3d^5$ $4s^1$, ie. MULT=7)
CAS-SCF STATES FOR BLOCK 1 MULT= 7 NROOTS= 1
ROOT 0: E= -1043.1949498785 Eh
1.00000 [ 0]: 111111
CAS-SCF STATES FOR BLOCK 2 MULT= 5 NROOTS= 1
ROOT 0: E= -1043.1559672632 Eh
0.40000 [ 15]: 111111
0.30000 [ 0]: 211110
0.30000 [ 30]: 011112
Nb (should be $4d^4$ $5s^1$, ie. MULT=6)
CAS-SCF STATES FOR BLOCK 1 MULT= 6 NROOTS= 1
ROOT 0: E= -56.3066169626 Eh
0.96220 [ 0]: 111110
0.03771 [ 2]: 111011
CAS-SCF STATES FOR BLOCK 2 MULT= 4 NROOTS= 1
ROOT 0: E= -56.2843641219 Eh
0.89474 [ 0]: 211100
0.04255 [ 2]: 211001
The lower the energy the better.
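To put the gaps above on a more familiar scale, the hartree differences can be converted to eV (a small sketch; the root energies are copied from the ORCA output above, and 1 Eh ≈ 27.2114 eV):

```python
EH_TO_EV = 27.211386  # hartree -> eV conversion factor

# (first listed multiplicity, second listed multiplicity) root energies in hartree,
# taken from the ORCA output above
states = {
    "Zr": (-46.4247189198, -46.4308999710),     # MULT=5 vs MULT=3
    "Cr": (-1043.1949498785, -1043.1559672632), # MULT=7 vs MULT=5
    "Nb": (-56.3066169626, -56.2843641219),     # MULT=6 vs MULT=4
}
for atom, (e1, e2) in states.items():
    gap_ev = (e2 - e1) * EH_TO_EV  # positive => the first listed state is lower (more stable)
    print(f"{atom}: {gap_ev:+.3f} eV")
```

The signs confirm the expected ground states: positive for Cr and Nb (high-spin state lower), negative for Zr (MULT=3 lower).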
From the numbers I'd conclude that it is possible to obtain the ground state configuration by reasonably simple quantum chemical calculations. Nevertheless, I'd expect even less elaborate theories to predict such fundamental properties.
Edit: I added the multiplicity to the expected ground state configuration | {
"domain": "chemistry.stackexchange",
"id": 6171,
"tags": "computational-chemistry, atoms, electronic-configuration"
} |
JSON fetcher for Eclipse plugin | Question: This is a client module of an Eclipse plugin. I am planning to use this code as a "good exception handling" code example in one of my papers. How does it look?
HttpURLConnection httpconn=null;
BufferedReader breader=null;
try{
URL url=new URL(this.web_service_url);
httpconn=(HttpURLConnection)url.openConnection();
httpconn.setRequestMethod("GET");
System.out.println(httpconn.getResponseMessage());
if(httpconn.getResponseCode()==HttpURLConnection.HTTP_OK)
{
breader=new BufferedReader(new InputStreamReader(httpconn.getInputStream()));
String line=null;
while((line=breader.readLine())!=null)
{
jsonResponse+=line;
}
//System.out.println(jsonResponse);
//display_json_results(jsonResponse);
}
}catch(MalformedURLException e){
MessageDialog.openError(Display.getDefault().getShells()[0], "Invalid URL", e.getMessage());
}catch(ProtocolException e){
MessageDialog.openError(Display.getDefault().getShells()[0], "Invalid Protocol", e.getMessage());
}
catch(IOException e2){
Log.info("Failed to access the data"+e2.getMessage());
}
finally{
try{
breader.close();}catch(IOException e){
Log.info("Failed to release resources"+e.getMessage());
}
}
Answer: General
As example code, there is a fair amount to comment on. For an eclipse plugin, I would at least expect you to select-all and Ctrl-Shift-F ....
consistent use of braces (on the end of the line, not start of the new line)
consistent spacing between values and operators jsonResponse+=line; to jsonResponse += line;
The formatted code looks like:
HttpURLConnection httpconn = null;
BufferedReader breader = null;
try {
URL url = new URL(this.web_service_url);
httpconn = (HttpURLConnection) url.openConnection();
httpconn.setRequestMethod("GET");
System.out.println(httpconn.getResponseMessage());
if (httpconn.getResponseCode() == HttpURLConnection.HTTP_OK) {
breader = new BufferedReader(new InputStreamReader(
httpconn.getInputStream()));
String line = null;
while ((line = breader.readLine()) != null) {
jsonResponse += line;
}
// System.out.println(jsonResponse);
// display_json_results(jsonResponse);
}
} catch (MalformedURLException e) {
MessageDialog.openError(Display.getDefault().getShells()[0],
"Invalid URL", e.getMessage());
} catch (ProtocolException e) {
MessageDialog.openError(Display.getDefault().getShells()[0],
"Invalid Protocol", e.getMessage());
} catch (IOException e2) {
Log.info("Failed to access the data" + e2.getMessage());
} finally {
try {
breader.close();
} catch (IOException e) {
Log.info("Failed to release resources" + e.getMessage());
}
}
Working with the formatted code now:
Why do you have an active System.out.println(httpconn.getResponseMessage()); in the code? That should be commented out. Use the Log for that.
Why are the HttpURLConnection and BufferedReader declared outside the try-block? There is no need.
The BufferedReader should be opened with a try-with-resource block to perform the auto-close.
you are losing newlines on the BufferedReader's readLine(). You should use a different system, or alternatively add the newline back in... unless you are using some other mechanism to reformat the JSON.
you should be appending to a StringBuilder, not doing String concatenation (jsonResponse += line; == BAD)
why have commented-out code in example code? Get rid of the println's and the display_json_results
Exception Handling
This exception-handling has a lot of problems I can see.
you are throwing away all stack traces.... you do not report them! Why?
you have different forms of exception naming in your handlers. Two of them call the exception e. The third is called e2. Use meaningful names, or use consistent names. The mix is.... mixed up.
Calling Log.info("Failed to access the data" + e2.getMessage()); is ... poor. If you have the Log available, it should at least be a warning! Also, you should pass the full exception to the Log, and log the full trace. Finally, you do not have a space between the 'data' and the e2.getMessage() in the output... .... access the data" should be ... access the data: "
When there is an exception, you should help the user by indicating what data was causing the exception. In this case, the errors/dialogs should contain this.web_service_url since that was the source of the problem.
Since you have the log, not only should you be outputting the exception message for the URL formats to the Display, but also to the Log.
Quick reformat
I messed with the code, and got the following:
try {
URL url = new URL(this.web_service_url);
HttpURLConnection httpconn = (HttpURLConnection) url.openConnection();
httpconn.setRequestMethod("GET");
//System.out.println(httpconn.getResponseMessage());
if (httpconn.getResponseCode() == HttpURLConnection.HTTP_OK) {
try (BufferedReader breader = new BufferedReader(new InputStreamReader(
httpconn.getInputStream()));) {
String line = null;
while ((line = breader.readLine()) != null) {
jsonResponse.append(line).append("\n");
}
}
// System.out.println(jsonResponse);
// display_json_results(jsonResponse);
}
} catch (MalformedURLException mue) {
Log.warn("Invalid URL " + this.web_service_url, mue);
MessageDialog.openError(Display.getDefault().getShells()[0],
"Invalid URL " + this.web_service_url, mue.getMessage());
} catch (ProtocolException pe) {
Log.warn("Protocol Exception " + this.web_service_url, pe);
MessageDialog.openError(Display.getDefault().getShells()[0],
"Invalid Protocol " + this.web_service_url, pe.getMessage());
} catch (IOException ioe) {
Log.warn("Failed to access the data " + this.web_service_url, ioe);
}
} | {
"domain": "codereview.stackexchange",
"id": 36120,
"tags": "java, error-handling, http, eclipse"
} |
Unitary transform using displacement operator to get time-independent Hamiltonian? | Question: I am considering a driven cavity field with Hamiltonian
$$H = \hbar\omega a^{\dagger}a + f(t)(a + a^{\dagger})$$
where $f(t) = \epsilon e^{-i\omega_{d}t} + \epsilon^* e^{i\omega_{d}t}$ is a classical harmonic driving force with frequency $\omega_d$.
My goal is to make this Hamiltonian time-independent as to be able to use QUTIP's steady-state solver.
For that, I have to use the right unitary transform, i.e. choose the right rotating frame.
I can first choose $U = e^{-i\hbar\omega_{RF} a^{\dagger}a}$, yielding
$$\tilde{H}= \epsilon ae^{-i(\omega_{RF}-\omega_{d})t} + \epsilon a^{\dagger}e^{i(\omega_{RF}+\omega_{d})t} + \epsilon^* ae^{-i(\omega_{RF}+\omega_{d})t} + \epsilon^* a^{\dagger}e^{i(\omega_{RF}-\omega_{d})t}$$
Performing the RWA and using $\Delta = \omega_d - \omega_{RF}$, we get
$$\tilde{H} = \epsilon ae^{i\Delta t} + \epsilon^*a^{\dagger}e^{-i\Delta t}$$
Now, the thing that confuses me:
I can do another unitary transform using a displacement operator $D_{\beta}$, such that $UaU^{\dagger} \Rightarrow a - \beta$. If I then choose $\beta = -\frac{\epsilon^*e^{-i\Delta t}}{\hbar \Delta}$, I indeed get a time-independent Hamiltonian:
$$\tilde{H} = \frac{2|\epsilon|^2}{\hbar \Delta}$$
But now I don't have an operator anymore, just a number, so there must be something wrong.
Answer: Start with the Schrodinger equation, given by
\begin{align*}
i\hbar\frac{d}{dt}\left\lvert \psi \right\rangle = \left(\hbar\omega a^{\dagger}a + f(t)(a + a^{\dagger})\right)\left\lvert \psi \right\rangle\,.
\end{align*}
We insert the identity $U^{\dagger}U$, where
$$
U = e^{i\omega_{RF} ta^{\dagger}a}\,,
$$
in front of the kets and multiply both sides by $U$. On the left-hand side, we get
\begin{align*}
U\left(i\hbar\frac{d}{dt}U^{\dagger}U\left\lvert \psi \right\rangle\right)
&=
U\left(
U^{\dagger}i\hbar\frac{d}{dt}\lvert \tilde{\psi} \rangle
+i\hbar\frac{dU^{\dagger}}{dt}\left\lvert \psi \right\rangle
\right)
\\
&=
UU^{\dagger}i\hbar\frac{d}{dt}\lvert \tilde{\psi} \rangle
+
Ui\hbar\frac{dU^{\dagger}}{dt}\left\lvert \psi \right\rangle\,,
\end{align*}
where
\begin{align*}
\lvert \tilde{\psi} \rangle = U\left\lvert {\psi} \right\rangle\,,
\end{align*}
and since
\begin{align*}
\frac{dU^{\dagger}}{dt} = \frac{d}{dt} e^{-i\omega_{RF} ta^{\dagger}a}
=e^{-i\omega_{RF} ta^{\dagger}a}(-i\omega_{RF} a^{\dagger}a)
=-U^{\dagger}i\omega_{RF} a^{\dagger}a
\,,
\end{align*}
this becomes
\begin{align*}
U\left(i\hbar\frac{d}{dt}U^{\dagger}U\left\lvert \psi \right\rangle\right)
&=
UU^{\dagger}i\hbar\frac{d}{dt}\lvert \tilde{\psi} \rangle
+
Ui\hbar\left(-U^{\dagger}i\omega_{RF} a^{\dagger}a\right)
\left\lvert \psi \right\rangle\\
&=
i\hbar\frac{d}{dt}\lvert \tilde{\psi} \rangle
+\hbar\omega_{RF} a^{\dagger}a
\left\lvert \psi \right\rangle\,.
\end{align*}
On the right-hand side, noting that
\begin{align*}
UaU^{\dagger}\left\lvert n \right\rangle
&=
e^{i\omega_{RF} ta^{\dagger}a} a e^{-i\omega_{RF} ta^{\dagger}a}\left\lvert n \right\rangle
=e^{i\omega_{RF} ta^{\dagger}a} a e^{-i\omega_{RF} tn}\left\lvert n \right\rangle
=e^{i\omega_{RF} ta^{\dagger}a}e^{-i\omega_{RF} tn}\sqrt{n}\left\lvert n-1 \right\rangle\\
&=e^{i\omega_{RF} t(n-1)}e^{-i\omega_{RF} tn}\sqrt{n}\left\lvert n-1 \right\rangle
=e^{-i\omega_{RF} t}a\left\lvert n \right\rangle\,,
\end{align*}
so that $UaU^{\dagger} = e^{-i\omega_{RF} t}a$ and $Ua^{\dagger}U^{\dagger} = e^{i\omega_{RF} t}a^{\dagger}$, we get
\begin{align*}
\hbar\omega a^{\dagger}a + f(t)(a + a^{\dagger}) \to
\hbar\omega a^{\dagger}a + f(t)\left( e^{-i\omega_{RF} t}a + e^{i\omega_{RF} t}a^{\dagger}\right)\,.
\end{align*}
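The operator identity $UaU^{\dagger} = e^{-i\omega_{RF} t}a$ derived above is easy to check numerically on a truncated Fock space (a minimal numpy sketch; the truncation is harmless here because $U$ is diagonal in the number basis):

```python
import numpy as np

N = 10                                    # Fock-space truncation
n = np.arange(N)
a = np.diag(np.sqrt(n[1:]), k=1)          # annihilation operator: <n-1|a|n> = sqrt(n)
w_rf, t = 1.3, 0.7                        # arbitrary test frequency and time
U = np.diag(np.exp(1j * w_rf * t * n))    # U = exp(i*w_rf*t*a†a), diagonal in |n>

lhs = U @ a @ U.conj().T
rhs = np.exp(-1j * w_rf * t) * a
assert np.allclose(lhs, rhs)              # U a U† = e^{-i w t} a holds element-wise
```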
Combining the two transformations, we arrive at the transformed Hamiltonian
\begin{align*}
\tilde{H}
&=\hbar(\omega - \omega_{RF})a^{\dagger}a
+ \epsilon e^{-i(\omega_{d}+\omega_{RF})t}a
+\epsilon^* e^{i(\omega_{d}-\omega_{RF})t}a
+ \epsilon e^{-i(\omega_{d}-\omega_{RF})t}a^{\dagger}
+\epsilon^* e^{i(\omega_{d}+\omega_{RF})t}a^{\dagger}\,.
\end{align*}
Assuming that $\Delta =\omega_d - \omega_{RF}$ is ``small'', we make the RWA, leading to
\begin{align*}
\tilde{H}
&\approx\hbar(\omega - \omega_{RF})a^{\dagger}a
+\epsilon^* e^{i\Delta t}a
+ \epsilon e^{-i\Delta t}a^{\dagger}\,.
\end{align*}
Finally, applying the transformation
$$
U_2 = e^{i\Delta ta^{\dagger}a}\,,
$$
so that the derivative on the left-hand side creates a term $-\hbar\Delta a^{\dagger}a$ on the right-hand side, and
the right-hand side transforms according to $U_2aU_2^{\dagger} = e^{-i\Delta t}a$ and $U_2a^{\dagger}U_2^{\dagger} = e^{i\Delta t}a^{\dagger}$, the approximate Hamiltonian in RWA transforms into the time-independent form
\begin{align*}
\tilde{H}
&\approx\hbar(\omega - \omega_{RF}-\Delta)a^{\dagger}a
+\epsilon^* a
+ \epsilon a^{\dagger}\,.
\end{align*}
Since $\omega_{RF} + \Delta = \omega_{d}$, this is equivalent to
\begin{align*}
\tilde{H}
&\approx\hbar(\omega - \omega_{d})a^{\dagger}a
+\epsilon^* a
+ \epsilon a^{\dagger}\,.
\end{align*} | {
"domain": "physics.stackexchange",
"id": 88023,
"tags": "quantum-mechanics, homework-and-exercises, operators, hamiltonian, quantum-computer"
} |
Easter date calculator Android application | Question: Since Easter holidays are close, I have decided to develop my Android skills by writing an Android app that calculate that date for the Western and Easter calendars.
The formulas were gotten from the Internet.
It is a simple app where you enter the year and get the dates for both calendars in a TextView. I've used EditText views as labels and made them unclickable; they appear when the dates are visible and get hidden when the dates are not shown.
I do not have any professional or internship experience whatsoever. Any input would help.
It was built and compiled on Android Studio 4.1.2 and ran on my device.
Here is the code for the XML.
<?xml version="1.0" encoding="utf-8"?>
<androidx.constraintlayout.widget.ConstraintLayout xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:app="http://schemas.android.com/apk/res-auto"
xmlns:tools="http://schemas.android.com/tools"
android:layout_width="match_parent"
android:layout_height="match_parent">
<TextView
android:id="@+id/textView"
android:layout_width="459dp"
android:layout_height="56dp"
android:layout_marginTop="8dp"
android:layout_marginBottom="8dp"
android:text="@string/welcome"
android:textSize="22sp"
app:layout_constraintBottom_toTopOf="@+id/editTextDate"
app:layout_constraintEnd_toEndOf="parent"
app:layout_constraintHorizontal_bias="0.5"
app:layout_constraintStart_toStartOf="parent"
android:gravity="center"
app:layout_constraintTop_toTopOf="parent" />
<EditText
android:id="@+id/editTextDate"
android:layout_width="wrap_content"
android:layout_height="46dp"
android:layout_marginTop="16dp"
android:ems="10"
android:gravity="center"
android:maxLength="4"
android:digits="0123456789"
android:inputType="date"
android:textSize="22sp"
app:layout_constraintEnd_toEndOf="parent"
app:layout_constraintHorizontal_bias="0.5"
app:layout_constraintStart_toStartOf="parent"
app:layout_constraintTop_toBottomOf="@+id/textView" />
<TextView
android:id="@+id/textViewWestern"
android:layout_width="150dp"
android:layout_height="48dp"
android:layout_marginTop="60dp"
android:gravity="center"
android:text=""
android:textSize="22sp"
app:layout_constraintEnd_toEndOf="parent"
app:layout_constraintHorizontal_bias="0.045"
app:layout_constraintStart_toStartOf="parent"
app:layout_constraintTop_toBottomOf="@+id/editTextDate" />
<TextView
android:id="@+id/textViewEastern"
android:layout_width="150dp"
android:layout_height="48dp"
android:layout_marginTop="60dp"
android:gravity="center"
android:text=""
android:textSize="22sp"
app:layout_constraintEnd_toEndOf="parent"
app:layout_constraintHorizontal_bias="0.969"
app:layout_constraintStart_toStartOf="parent"
app:layout_constraintTop_toBottomOf="@+id/editTextDate" />
<EditText
android:id="@+id/western"
android:layout_width="150dp"
android:layout_height="44dp"
android:layout_marginStart="12dp"
android:layout_marginTop="16dp"
android:ems="10"
android:focusable="false"
android:focusableInTouchMode="false"
android:gravity="center"
android:inputType="textPersonName"
android:background="@android:color/transparent"
android:text="Western"
app:layout_constraintStart_toStartOf="parent"
app:layout_constraintTop_toBottomOf="@+id/editTextDate" />
<EditText
android:id="@+id/eastern"
android:layout_width="150dp"
android:layout_height="44dp"
android:layout_marginTop="16dp"
android:ems="10"
android:focusable="false"
android:focusableInTouchMode="false"
android:inputType="textPersonName"
android:text="Eastern"
android:background="@android:color/transparent"
android:gravity="center"
app:layout_constraintEnd_toEndOf="parent"
app:layout_constraintHorizontal_bias="0.969"
app:layout_constraintStart_toStartOf="parent"
app:layout_constraintTop_toBottomOf="@+id/editTextDate" />
</androidx.constraintlayout.widget.ConstraintLayout>
The MainActivity.java code.
package com.example.easterdate;
import androidx.appcompat.app.AppCompatActivity;
import android.os.Bundle;
import android.text.Editable;
import android.text.TextWatcher;
import android.view.Gravity;
import android.view.View;
import android.widget.EditText;
import android.widget.TextView;
public class MainActivity extends AppCompatActivity {
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
EditText Edit = findViewById(R.id.editTextDate);
final TextView text = findViewById(R.id.textViewWestern);
final TextView textEastern = findViewById(R.id.textViewEastern);
final EditText eastern = findViewById(R.id.eastern);
final EditText western = findViewById(R.id.western);
eastern.setVisibility(View.INVISIBLE);
western.setVisibility(View.INVISIBLE);
Edit.addTextChangedListener(new TextWatcher() {
@Override
public void beforeTextChanged(CharSequence s, int start, int count, int after) {
}
@Override
public void onTextChanged(CharSequence s, int start, int before, int count) {
if (s.toString().trim().length() == 0 || Integer.parseInt(s.toString()) < 1800) {
text.setText("");
textEastern.setText("");
eastern.setVisibility(View.INVISIBLE);
western.setVisibility(View.INVISIBLE);
return;
}
EditText Edit = findViewById(R.id.editTextDate);
int AN = Integer.parseInt(Edit.getText().toString());
int G = AN % 19;
int C = AN / 100;
int H = (C - C / 4 - (8 * C + 13) / 25 + 19 * G + 15) % 30;
int I = H - (H / 28) * (1 - (H / 28) * (29 / (H + 1)) * ((21 - G) / 11));
int J = (AN + AN / 4 + I + 2 - C + C / 4) % 7;
int L = I - J;
int MP = 3 + (L + 40) / 44;
int JP = L + 28 - 31 * (MP / 4);
int mon, day;
int A = AN % 19;
int b = AN % 7;
int ce = AN % 4;
int d = (19 * A + 16) % 30;
int e = (2 * ce + 4 * b + 6 * d) % 7;
int f = (19 * A + 16) % 30;
int key = f + e + 3;
if (key > 30)
mon = 5;
else
mon = 4;
if (key > 30)
day = key - 30;
else day = key;
String[] month = {"January", "February", "March", "April", "May", "June", "July", "August", "September", "October", "November", "December"};
eastern.setVisibility(View.VISIBLE);
western.setVisibility(View.VISIBLE);
text.setText("Sunday " + JP + " " + month[MP - 1]);
textEastern.setText("Sunday " + day + " " + month[mon - 1]);
}
@Override
public void afterTextChanged(Editable s) {
}
});
}
}
How professional is this code? Should I have used functions instead of imperative programming?
The TextWatcher is used to prevent the app from crashing and to have the dates display on input, any better solution?
Answer: Small review
How professional is this code?
Scant documentation
Dates of Easter are not easy to discern and rely on some arcane calculations. "The formulas were gotten from the Internet." is insufficient documentation. Consider a programmer after you having to fix or extend this code. Better to include, in code, more details - at least a URL or citation to the source algorithm. Example below.
I am not sure of OP's source.
Anonymous Gregorian algorithm: Meeus/Jones/Butcher
Dates of Easter
Astronomical Algorithms 1991
Jean Meeus
https://en.wikipedia.org/wiki/Computus#Anonymous_Gregorian_algorithm
Meeus's Julian algorithm
https://en.wikipedia.org/wiki/Computus#Meeus's_Julian_algorithm
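For reference, the anonymous Gregorian (Meeus/Jones/Butcher) algorithm cited above is compact enough to show in full — a sketch in Python for brevity, valid for any Gregorian-calendar year:

```python
def gregorian_easter(year):
    """Anonymous Gregorian (Meeus/Jones/Butcher) algorithm; returns (month, day)."""
    a = year % 19                 # position in the 19-year Metonic cycle
    b, c = divmod(year, 100)      # century and year-within-century
    d, e = divmod(b, 4)
    f = (b + 8) // 25             # century-based corrections for the lunar cycle
    g = (b - f + 1) // 3
    h = (19 * a + b - d - g + 15) % 30   # "epact": age of the moon
    i, k = divmod(c, 4)
    l = (32 + 2 * e + 2 * i - h - k) % 7  # days to the next Sunday
    m = (a + 11 * h + 22 * l) // 451
    month, day = divmod(h + l - 7 * m + 114, 31)
    return month, day + 1

print(gregorian_easter(2000))   # (4, 23) -> 23 April 2000
```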
Range
Code deserves to document the range of years in which it is valid.
Precision in description
"... calculate that date for the Western and Easter calendars."
By "Western", I suspect you mean Gregorian calendar.
By "Easter", I suspect you mean "Eastern" or Julian calendar. | {
"domain": "codereview.stackexchange",
"id": 40737,
"tags": "beginner, android"
} |
Uncertainty principle for $\Delta\frac{1}{x}\Delta p$ | Question: I've tried to determine uncertainty for operators $\xi=\frac{1}{x}$ and $p$. To do so, I used relation that $B^{-1}[A, B]B^{-1} = -[A, B^{-1}]$ and so I got:
$$
\frac{1}{x}[p, x]\frac{1}{x} = \frac{-i \hbar}{x^2}
$$
which means, the uncertainty is:
$$
\Delta\xi\Delta p \geq \frac{\hbar}{2x^2}
$$
I'm not sure if the uncertainty might be dependent on the x value (for example, uncertainty for operators $x$ and $p$ is greater or equal to $\frac{\hbar}{2}$ which is constant value). Please tell me if my result is correct.
Answer: Your result is almost right, but not quite. Since $x$ is an operator, the right-hand side does not make sense as you've written it, and it needs to be replaced by the expectation value of $\hbar/x^2$.
In that form, it is a special case of the Robertson uncertainty relation,
$$
\sigma _{A}\sigma _{B}\geq{\frac {1}{2}}\left|\langle [{\hat {A}},{\hat {B}}]\rangle \right|
$$
(where the $\langle\cdot\rangle$ represent expectation value, as usual). | {
"domain": "physics.stackexchange",
"id": 59005,
"tags": "quantum-mechanics, homework-and-exercises, operators, heisenberg-uncertainty-principle"
} |
Calculating partial pressure with 3 equilibrium equations | Question: So my friend had sent this question to me some days ago and it seems I still am not able to solve it. I am confused whether the amount of A that dissociates in the first equation would account for the third one and how they inter relate.
The correct answer is 8.
Answer:
I am confused whether the amount of A that dissociates in the first equation would account for the third one and how they inter relate.
All atoms of A are in the form of $\ce{A2}$ initially. After equilibrium is established, some are in the form of atomic $\ce{A}$, and some are part of $\ce{AB}$, while the rest remain as $\ce{A2}$. If you had a dial you could turn, increasing the dissociation constant of $\ce{A2}$ (leaving everything else constant), you would lower the concentration of $\ce{A2}$, which would disturb the equilibrium of the third reaction, consequently lowering the concentration of $\ce{AB}$ (and increasing that of $\ce{B2}$). So it is all linked, and you have to solve the entire set of equations simultaneously.
Solution
It is possible to calculate all five partial pressures using the mass balance of A and B (two equations) and the third equilibrium constant (one equation) together with the given partial pressure of AB and the total pressure (two equations). Initially, the partial pressure of $\ce{A2}$ was 1 atm (same for $\ce{B2}$).
Let $x$ be the partial pressure of $\ce{A2}$ and $y$ the partial pressure of $\ce{B2}$ at equilibrium. From the stoichiometry of reactants and products, we get the following expressions of the partial pressure of A and B (2 atm is the partial pressure if all of A were present in atomic form, starting with 1 atm of $\ce{A2}$):
$$P_A = \pu{2 atm} - P_{AB} - 2x = \pu{1.5 atm} - 2x$$
$$P_B = \pu{2 atm} - P_{AB} - 2y = \pu{1.5 atm} - 2y$$
All five species together have to account for the equilibrium pressure:
$$ x + y + P_A + P_B + P_{AB} = \pu{2.75 atm}$$
Substituting the expressions for $P_A$ and $P_B$, we get:
$$x + y + (\pu{1.5 atm} - 2x) + (\pu{1.5 atm} - 2y) + \pu{0.5 atm} = \pu{2.75 atm}$$
Simplifying, we get:
$$x + y = \pu{0.75 atm}\tag{*}$$
We can express $y$ by (using the equilibrium constant for the third reaction):
$$ y = \frac{\pu{0.125 atm^2}}{x}$$
and use that in equation (*), simplifying to this quadratic:
$$ x^2 + \pu{0.125 atm^2} = \pu{0.75 atm} \cdot x$$
The two solutions for $x$ are $\pu{0.5 atm}$ and $\pu{0.25 atm}$, where corresponding values for $y$ are $\pu{0.25 atm}$ and $\pu{0.5 atm}$. The two solutions reflect the symmetry of the problem (A and B are interchangeable without changing anything). From this, you can calculate partial pressures of A and B, and the equilibrium constants. Their ratio is 8 for one solution and 1/8 for the other. The last sentence in the problem breaks the symmetry, and the answer is 8, as stated.
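The algebra is easy to verify numerically. A small sketch using only the quantities stated above (and assuming the first two equilibria are the dissociations of $\ce{A2}$ and $\ce{B2}$, so $K_1 = P_A^2/P_{\ce{A2}}$ and $K_2 = P_B^2/P_{\ce{B2}}$, as the answer's algebra implies):

```python
import numpy as np

# Quadratic from the answer: x^2 - 0.75*x + 0.125 = 0  (pressures in atm)
x_small, x_large = sorted(np.roots([1.0, -0.75, 0.125]).real)
print(x_small, x_large)          # the two solutions, 0.25 and 0.5

ratios = []
for x in (x_small, x_large):
    y = 0.125 / x                # third equilibrium constant: x*y = 0.125 atm^2
    p_a, p_b = 1.5 - 2 * x, 1.5 - 2 * y
    assert abs(x + y + p_a + p_b + 0.5 - 2.75) < 1e-9   # total-pressure check
    ratios.append((p_a**2 / x) / (p_b**2 / y))          # K(A2 diss.) / K(B2 diss.)
print(ratios)                    # [8.0, 0.125] -- the stated 8 and 1/8
```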
This was a brute force answer - I'm sure there is a more elegant approach. | {
"domain": "chemistry.stackexchange",
"id": 12911,
"tags": "physical-chemistry, equilibrium"
} |
Are small car engines designed to tolerate a higher duty cycle than large car engines? | Question: For simplicity we're going to ignore aerodynamics and vehicle mass for a moment...
Are small (around 1 litre) car engines designed to tolerate a higher duty cycle than large (2+ litre) car engines?
Cars are usually all travelling at roughly the same speed, which means the engines will be putting out similar amounts of power; however, a small engine would be operating at a greater proportion of its maximum output than a large one to maintain the same speed.
For some background to this, I have a 1.1 l car and I spend quite a lot of time with my foot completely flat on the floor, especially going uphill on a high speed road (60 or 70 mph limit) to keep up with the traffic. I think my record's somewhere around 2 minutes on full throttle.
Answer: Most (modern) small and large car engines are designed for 100% duty cycle. This means that at 100% rated power(gas pedal all the way down) the engine can run continuously. Heat dissipation is the limiting factor like Dave Tweed stated. Cars that are not designed to continuously dissipate 100% of the heat generated at max power require the driver to watch the temperature gauge to limit the power use.
Modern engines do not have this problem because the engine is governed (speed regulated) below the cooling capacity of the radiator. Most modern engines use electric fans on the radiators that are independent of engine rpm; greatly increasing continuous cooling capacity.
Older cars and "high performance" cars may have power that exceeds the cooling capacity. Any engine that has had the maximum engine speed regulation removed or any engine that can be "red lined" can also overheat. An engine boosting system such as nitrous oxide also exceeds the cooling capacity and thus must be used intermittently.
You will often see both large and small cars pulled over for overheating along a steep hill on a hot day. In this instance the "duty cycle" under these operating conditions was not continuous (100%). However, duty cycle is typically not used to describe this behavior, because it is a design expectation that it can operate continuously. The engine was simply operating outside of its designed range.
Duty cycle is not influenced by the size of the engine, but rather, duty cycle is a design parameter when designing an engine system. Most cars would be designed for continuous duty, while race cars would be designed for intermittent. | {
"domain": "engineering.stackexchange",
"id": 574,
"tags": "automotive-engineering, engines"
} |
Does breathing on your glasses help with cleaning? | Question: I noticed people who wear glasses clean the lenses by first breathing on them and then wiping it with something like a tissue or cloth etc.
Does this actually help?
Answer: It depends. Your exhaled breath is about 37 degree C and 100% relative humidity. If the temperature of the lenses are less than 37 degrees then water from your breath will condense on the lenses. This water is nearly pure (distilled purity) and can help dissolve any dirt, film that has attached to the lens.
But if the lenses are hotter than 37 degrees, you get no condensation, so breathing on the lens doesn't help. In this case just lick the lens. | {
"domain": "physics.stackexchange",
"id": 42384,
"tags": "optics, soft-question, everyday-life, vision"
} |
Can rosserial set parameters? | Question:
The rosserial parameter documentation shows getting parameters from the parameter server, but not setting parameters. Is there a way to set parameters from rosserial?
I'm guessing not, since the NodeHandle class only has parameter getters, not setters, but I'm wondering if anything's in the works or why this hasn't been a priority yet.
Originally posted by nckswt on ROS Answers with karma: 539 on 2017-03-08
Post score: 0
Original comments
Comment by ahendrix on 2017-03-08:
Parameters are usually set during startup/launch, and aren't normally used to communicate information from hardware to other nodes, so that's probably why there hasn't been an effort to set parameters from rosserial.
Comment by nckswt on 2017-03-08:
I was looking for a good way to provide several 32-byte constant values from my microcontroller into my ROS environment (unique IDs for all my devices, read on boot). rosserial doesn't seem to have a latching mechanism, so I supposed my best option is to use service calls through rosserial.
Answer:
There isn't an enumeration in the TopicInfo message for setting parameters: https://github.com/ros-drivers/rosserial/blob/jade-devel/rosserial_msgs/msg/TopicInfo.msg , and this is the message type that defines the wire protocol for setting up topics and services and retrieving parameters, so the definitive answer is no; rosserial cannot set parameters.
Originally posted by ahendrix with karma: 47576 on 2017-03-08
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 27249,
"tags": "ros, rosserial-arduino, rosserial, parameter, parameter-server"
} |
Maximum Q value for new state in Q-Learning never exists | Question: I'm working on implementing a Q-Learning algorithm for a 2 player board game.
I encountered what I think may be a problem. When it comes time to update the Q value with the Bellman equation (above), the last part states that for the maximum expected reward, one must find the highest q value in the new state reached, s', after making action a.
However, it seems like I never have Q values for state s'. I suspect s' can only be reached by P2 making a move. It may be impossible for this state to be reached as a result of an action from P1. Therefore, the board state s' is never evaluated by P2, thus its Q values are never being computed.
I will try to paint a picture of what I mean. Assume P1 is a random player, and P2 is the learning agent.
P1 makes a random move, resulting in state s.
P2 evaluates board s, finds the best action and takes it, resulting in state s'. In the process of updating the Q value for the pair (s,a), it finds maxQ'(s', a) = 0, since the state hasn't been encountered yet.
From s', P1 again makes a random move.
As you can see, state s' is never encountered by P2, since it is a board state that appears only as a result of P2 making a move. Thus the last part of the equation will always result in 0 - current Q value.
Am I seeing this correctly? Does this affect the learning process? Any input would be appreciated.
Thanks.
Answer: Your problem is with how you have defined $s'$.
The next state for an agent is not the state that the agent's action immediately puts the environment into. It is the state when it next takes an action. For some, more passive, environments, these are the same things. But for many environments they are not. For instance, a robot that is navigating a maze may take an action to move forward. The next state does not happen immediately at the point that it starts to take the action (when it would still be in the same position), but at a later time, after the action has been processed by the environment for a while (and the robot is in a new position), and the robot is ready to take another action.
So in your 2-player game example using regular Q learning, the next state $s'$ for P2 is not the state immediately after P2's move, but the state after P1 has also played its move in reaction. From P2's perspective, P1 is part of the environment and the situation is no different to having a stochastic environment.
Once you take this perspective on what $s'$ is, then Q learning will work as normal.
However, you should note that optimal behaviour against a specific opponent - such as a random opponent - is not the same as optimal play in a game. There are other ways to apply Reinforcement Learning ideas to 2-player games. Some of them can use the same approach as above - e.g. train two agents, one for P1 and one for P2, with each treating the other as if it were part of the environment. Others use different ways of reversing the view of the agent so that it can play versus itself more directly - in those cases you can treat each player's immediate output as $s'$, but you need to modify the learning agent. A simple modification to Q learning is to alternate between taking $\text{max}_{a'}$ and $\text{min}_{a'}$ depending on whose turn you are evaluating (and assuming P1's goal is to maximise their score while P2's goal is to minimise P1's score - and by extension maximise their own score in any zero-sum game) | {
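The alternating max/min backup described in the last paragraph can be sketched in a few lines of Python (a minimal illustration; the function name, the `p1_to_move` flag, and the tabular `defaultdict` representation are our choices, not from the answer):

```python
# Sketch of a tabular Q-learning backup for a two-player zero-sum game.
# P1 maximises and P2 minimises, so the bootstrap term alternates
# between max and min depending on whose turn follows state s'.
from collections import defaultdict

def q_update(Q, s, a, r, s_next, next_actions, p1_to_move, alpha=0.5, gamma=1.0):
    """One backup of Q(s, a) toward r + gamma * max/min over Q(s', a')."""
    if not next_actions:  # terminal state: no bootstrap term
        target = r
    else:
        values = [Q[(s_next, a2)] for a2 in next_actions]
        target = r + gamma * (max(values) if p1_to_move else min(values))
    Q[(s, a)] += alpha * (target - Q[(s, a)])
    return Q[(s, a)]
```

Whether `p1_to_move` is true or false for a given backup depends on whose turn it is in state $s'$, which is exactly the bookkeeping the answer describes.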
"domain": "ai.stackexchange",
"id": 1084,
"tags": "q-learning"
} |
Wavelets from a wavefront interfere to form original rays outside a slit but not inside? | Question: If the wavelets interfere to form the original rays outside the slit, then why in slits so they get spread as per Huygens principle? Why don't the wavelets emerging from wavefront incident on the slit interface to form the original rays the same way as they interfere outside the slit resulting in fringes? And what is the intensity variation of wavelets as an angle between the wavelet and direction of propagation ?
Answer: They can always be seen as elementary waves (that is what "wavelets" refers to), and they interfere so that straight wavefronts result before the slit. (At the border of your wide beam they do the same thing as at the slit.) In the slit there are fewer "wavelets", so you can see separate interference. If you make the slit wider and wider, you approach the same situation as on the other side of the slit: the minima and maxima get closer and closer together until you don't have any.
"domain": "physics.stackexchange",
"id": 67148,
"tags": "optics, waves"
} |
The shortest cycle containing two given points | Question: I am given an edge-weighted (multi)graph $G$ and two of its vertices, $u, v\in V(G)$. I want to find two edge-disjoint paths that connect $u$ and $v$ while minimizing the sum of the lengths of the paths.
What is the complexity of this problem?
(We can reduce Hamiltonian Cycle to it if we ask for vertex-disjoint paths and allow negative weights.)
Is anything known about approximation algorithms in general? In planar graphs?
(If we are allowed to cheat by using an edge twice and paying for it twice, then finding a shortest path gives a $2$-approximation.)
More generally, what if I now ask for $k$ edge-disjoint paths from $u$ to $v$?
Remark: this problem looks somehow related to subset TSP
Answer: The problem of finding the shortest simple cycle through two vertices in a weighted undirected graph can be solved in the same time as Dijkstra's algorithm for a single shortest path, by applying Suurballe's algorithm for finding disjoint $s$–$t$ shortest paths in a directed graph.
It doesn't quite work to turn your given undirected graph directed by replacing each undirected edge by two directed edges and to apply Suurballe to the resulting directed graph: you will get two paths that do not use the same directed edge, but they may share vertices or even use the same undirected edge (in opposite directions by the two paths).
Instead, first replace each undirected edge by two directed edges to form a directed graph. Then, in the resulting directed graph, split each vertex into two, one for the incoming adjacencies and one for the outgoing adjacencies, connected by an edge from incoming to outgoing. Finally, apply Suurballe to this graph. The result will be two directed paths that translate back to two undirected paths of minimum total edge length.
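The edge-doubling and vertex-splitting steps can be written out directly (this is a sketch of the transformation only, not of Suurballe's algorithm; the tuple-based node labels are our convention):

```python
# Build the arc list for the transformed directed graph: each undirected
# edge becomes two directed arcs, and each vertex v is split into
# (v, 'in') and (v, 'out') joined by a zero-weight internal arc, so that
# arc-disjoint paths in the new graph correspond to vertex-disjoint
# paths in the original undirected graph.
def split_transform(vertices, undirected_edges):
    """undirected_edges: iterable of (u, v, weight) triples."""
    arcs = []
    for v in vertices:
        arcs.append(((v, 'in'), (v, 'out'), 0))   # internal splitting arc
    for u, v, w in undirected_edges:
        arcs.append(((u, 'out'), (v, 'in'), w))   # u -> v direction
        arcs.append(((v, 'out'), (u, 'in'), w))   # v -> u direction
    return arcs
```

Running Suurballe from the out-copy of $u$ to the in-copy of $v$ in the resulting graph then yields the two paths.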
"domain": "cstheory.stackexchange",
"id": 4906,
"tags": "cc.complexity-theory, approximation-algorithms, approximation-hardness, planar-graphs"
} |
Virtual robot speed reduce when approaching to corner | Question:
Hi everyone
I have a problem related to my simulation; I am currently running a simulation with teb_local_planner.
When I run my simulation, I found that my robot becomes very slow in situations where it has to go around a corner,
like this video
https://www.youtube.com/watch?v=bf8igGb1U_M
here is my paste bin for the teb_local_planner_param.yaml file http://pastebin.com/7VVJ0kSL
It report warning like below
[ WARN] [1473995722.552885072, 92.600000000]: Map update loop missed its desired rate of 5.0000Hz... the loop actually took 0.6000 seconds
[ WARN] [1473995723.131613002, 93.100000000]: Control loop missed its desired rate of 5.0000Hz... the loop actually took 0.5000 seconds
[ WARN] [1473995723.132625078, 93.100000000]: Map update loop missed its desired rate of 5.0000Hz... the loop actually took 0.5000 seconds
[ WARN] [1473995724.218327309, 94.200000000]: Control loop missed its desired rate of 5.0000Hz... the loop actually took 1.1000
Originally posted by samping on ROS Answers with karma: 33 on 2016-09-15
Post score: 0
Answer:
Can you try to check if the issue still occurs with the most recent version in the kinetic-devel branch? You can also compile it on Indigo.
Originally posted by croesmann with karma: 2531 on 2016-09-23
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by samping on 2016-09-26:
Thank you very much croesmann, do you mind put your comments in answer so I can close this post? | {
"domain": "robotics.stackexchange",
"id": 25771,
"tags": "ros, rviz, teb-local-planner"
} |
RGTensor (Mathematica package) Print and Plot3D functions | Question: I plot a function for a spherically symmetric metric using the RGTensor package written for Mathematica, and for the pressure profile I obtain a complicated plot.
But then when I Print[] the corresponding pressure profile, the value is zero, which contradicts the resulting plot. Does anyone know why this happens?
Answer: This is essentially impossible to answer conclusively without more context, but unless you actually intended to plot a quantity that was of the order of a few times $10^{-16}$ (which is roughly where the machine-error threshold normally sits), then your plot is essentially the zero function, and there is no contradiction there. | {
"domain": "physics.stackexchange",
"id": 40373,
"tags": "computational-physics, software"
} |
Given a complete, weighted and undirected graph $G$, complexity of finding a path with a specific cost | Question: Given a fully connected graph $G$, suppose that we are searching for a simple path $P$ with a specific cost $c$.
Is answering that problem yes or no equivalent to the subset-sum problem?
What would be the complexity of finding such path?
I have made a reduction from subset-sum problem:
If each number in a set $S$ is a vertex of $G$ and the weight of edge $\langle i,j\rangle$ is $|i-j|$, then answering the question above yes or no is the same as solving the subset-sum problem.
P.S. The initial vertex I have visited is added to the cost.
Edit: Edge weights
Answer: Given an instance $\langle \{s_1,\ldots,s_n\}, T \rangle$ of subset sum, construct the following weighted graph. The vertices are
$$ v_0,v_1,\ldots,v_n,u_1,\ldots,u_n. $$
Connect $v_{i-1}$ and $v_i$ with an edge of weight $s_i$. Connect $v_{i-1}$ to $v_i$ via $u_i$ (i.e., add the edges $\{v_{i-1},u_i\},\{u_i,v_i\}$) with zero weight edges. There is a simple path of total cost $T$ iff there is a subset of $\{s_1,\ldots,s_n\}$ summing to $T$. This shows that your problem is NP-hard, and in fact NP-complete, since it's clearly in NP. | {
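The gadget from the answer is easy to build and sanity-check in code (the `path_costs` enumeration helper is our addition for illustration):

```python
# Gadget from the answer: the direct edge (v_{i-1}, v_i) costs s_i, and
# the detour v_{i-1} -> u_i -> v_i costs 0, so every simple v_0 -> v_n
# path cost is exactly one subset sum of {s_1, ..., s_n}.
def build_gadget(s):
    edges = {}
    for i, si in enumerate(s, start=1):
        edges.setdefault(('v', i - 1), []).append((('v', i), si))  # take s_i
        edges.setdefault(('v', i - 1), []).append((('u', i), 0))   # skip s_i
        edges.setdefault(('u', i), []).append((('v', i), 0))
    return edges

def path_costs(s):
    """Total costs of all v_0 -> v_n paths in the gadget (a DAG search)."""
    edges, n = build_gadget(s), len(s)
    out, stack = set(), [(('v', 0), 0)]
    while stack:
        node, cost = stack.pop()
        if node == ('v', n):
            out.add(cost)
        for nxt, w in edges.get(node, []):
            stack.append((nxt, cost + w))
    return sorted(out)
```

For example, `path_costs([3, 5])` returns `[0, 3, 5, 8]`, the four subset sums of $\{3, 5\}$.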
"domain": "cs.stackexchange",
"id": 2771,
"tags": "algorithms, graphs, decision-problem"
} |
Clarifying what it means to be Strongly NP-Complete | Question: Wikipedia defines strongly NP-Complete as:
A problem is said to be strongly NP-complete, if it remains so even when all of its numerical parameters are bounded by a polynomial in the length of the input.
What I interpret this to mean is this:
Let's consider the 3-partition problem, with our numerical parameter being $\sum_{x \in S} x = B$. Since this problem is strongly NP-Complete, there exists a polynomial $p(N)$, such that if we restrict ourselves to only sets $S$ such that $B < p(|S|)$, this problem is still NP-Complete. (i.e. finding an algorithm with complexity polynomial in $|S|$ for this restriction would prove P=NP)
Is this the correct interpretation? If so, where could I find upper bounds of such polynomials $p$ for famous problems, such as the 3-partition problem? Also, if I'm not mistaken, this implies that the 3-partition problem is NP-complete in terms of $B$, as well as $|S|$, right?
Answer: Yes, that's correct. You'll probably have to figure out the polynomial yourself. The way to find the polynomial is to look at the reduction, and from that you should be able to deduce a polynomial.
In your case, I believe there is a reduction from 3D-MATCHING to 3PARTITION, so you would analyze that reduction to find the polynomial. Given an instance of 3D-MATCHING, the reduction explicitly constructs an instance of 3PARTITION whose solution can be used to solve the 3D-MATCHING instance; so you'd analyze how large the parameter of 3PARTITION instance could possibly be, as a function of the size of the set, and that would tell you the polynomial $p$ you are looking for. | {
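As a side note (our illustration, not part of the answer): the reason subset-sum-like problems are only weakly NP-complete is that a pseudo-polynomial dynamic program becomes genuinely polynomial once the numeric parameter is bounded by a polynomial in the input length:

```python
# Reachable-sums dynamic program for subset sum, O(n * B) in the worst
# case. The running time is polynomial in n whenever the target B is
# bounded by a polynomial in n, which is why subset sum is not strongly
# NP-complete (unless P = NP).
def subset_sum(values, target):
    reachable = {0}
    for v in values:
        reachable |= {s + v for s in reachable if s + v <= target}
    return target in reachable
```

By contrast, 3PARTITION stays NP-complete even under such a bound on its numerical parameter, which is exactly what "strongly NP-complete" asserts.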
"domain": "cs.stackexchange",
"id": 15286,
"tags": "complexity-theory, np-complete, reference-request"
} |
Could one argue that h (Planck constant) and $\hbar$/2 (Dirac constant) are in fact independent constants? | Question: My question is very naive and could sound strange but it seems to me natural in so far as the Planck constant is related to the first quantization (of newtonian particle mechanics/galilean relativity) while the Dirac one shows up in the second quantization (of fields theory/special relativity)... Anyway I think the conceptual difference between the two constants is rarely emphasized.
Answer: "First quantization" and "second quantization" are widely used names for the same procedure applied to two different classical systems – classical mechanics and classical field theory. In both cases, the operation introduces Planck's constant and it's always the same constant. For example, the angular momentum $J_z$ is a multiple of $\hbar/2=h/4\pi$ both in non-relativistic quantum mechanics for a few particles as well as quantum field theory – this fact results both from "first quantization" as well as "second quantization". It is a universal fact of quantum mechanics. In the same way, Schrödinger's equation contains the factor of $\hbar$ and this equation holds for non-relativistic quantum mechanics as well as quantum field theory (with an appropriate Hamiltonian that encodes the total energy).
The reason why $\hbar$ is more often found in quantum field theory and $h$ is more often found in simpler discussions of quantum mechanics is that $h$ is associated with frequency $f$ which is the quantity chosen by physics beginners while the advanced physicists usually consider the angular frequency $\omega=2\pi f$ to be more natural, and that's why they also talk about $\hbar$. Note that the energy of a photon (or another quantum) is $E=hf=\hbar\omega$; the factors of $2\pi$ cancel.
The constant $\hbar$ is more fundamental because it naturally appears in the Heisenberg equations, Schrödinger's equations, Feynman's path integral, commutators of $x,p$, and so on. These are the fundamental equations and it would be a waste of time to write an extra $1/2\pi$ in all of them. A physics beginner doesn't really understand these fundamental equations well. He prefers to look at things like $E=hf$ which may be written in a simple way using $h$ as long as we express the frequency by $f$.
But $\hbar=h/2\pi$ always holds – they're not independent at all. They're just two constants associated with two conventions and the more one knows about quantum theories, the more likely he is to switch to $\hbar$.
Let me also mention that the Dirac constant is $\hbar$ and not $\hbar/2$. | {
"domain": "physics.stackexchange",
"id": 6119,
"tags": "quantum-mechanics, quantum-field-theory, physical-constants, second-quantization"
} |
Heterotic string as worldvolume theory of two coincident 9-branes in 27 dimensions? | Question: The heterotic string is a combination of right-moving excitations from a $\mathrm{D} =10$ superstring and left-moving excitations from a $\mathrm{D} =26$ bosonic string, with the left-movers behaving as if the extra 16 dimensions are compactified. The heterotic string is also derived from $\mathrm{D} =11$ M -theory , as an open 2-brane stretched between two "end-of-the-world" 9-branes (spatial boundaries; this is M-theory compactified on a line segment, and the 9-branes lie at the ends). So I am led to imagine a 27-dimensional theory, containing branes. We compactify 16 dimensions, and consider the worldvolume theory of two parallel 9-branes. When they are coincident, we get the heterotic string; when they are slightly separate, we get "heterotic M-theory".
A 27-dimensional fundamental theory has been discussed before (hep-th/9704158 section 4; hep-th/0012037; arXiV:0807.4899), but I don't see this particular line of thought discussed.
Answer: Dear Mitchell, this is a very nice research project - at least judging by the fact that I have made a similar proposal. ;-)
In this very form, however, it can't be right because any hypothetical 27-dimensional theory fails to be supersymmetric and the supersymmetry breaking can't be quite undone. However, brave souls have played with the transmutation of string theories that are very different on the world sheet, see e.g. some of the papers by Simeon Hellerman and Ian Swanson:
http://www.slac.stanford.edu/spires/find/hep/www?rawcmd=find+a+swanson+and+a+hellerman&FORMAT=WWW&SEQUENCE=
In fact, my specific version of the proposal had one mathematical piece of evidence that was much more specific than yours. You could imagine some $E_8$ group already in 27 dimensions. And a funny feature of $E_8$ is its homotopy groups. The first nontrivial one is $\pi_3$, which is $\mathbb{Z}$, and the next nontrivial one is $\pi_{15}$. In normal M-theory, with the 3-form described following Diaconescu-Moore-Witten as the Chern-Simons form of an $E_8$ gauge field, $\pi_{3}$ is what allows fivebranes (codimension 5) to exist.
Similarly, $\pi_{15}$ of $E_8$ may create codimension 17 objects, and 27-17 = 10 which is the spacetime dimension of the Hořava-Witten domain wall. Very natural. So I would actually propose you modify the proposal so that $E_8$ already exists in the bulk of 27 dimensions and you create a variation of the DMW paper at the same moment.
Otherwise, you will face a lot of trouble. The quantities are unstable, unprotected by supersymmetry, so even if the instabilities can be survived, you won't be able to match the precise numbers on both sides of a duality.
Moreover, non-fermionic theories don't carry any gauginos and they have no anomalies, so you will be able to show no nontrivial anomaly cancellation that would be similar to the anomaly cancellation of heterotic M-theory, and so on. It is simply very hard to make a convincing story of a 27-dimensional origin of the heterotic string.
Note that even the ordinary bosonic M-theory remains highly inconclusive. So far, we have only presented some analogous construction for another string vacuum - an appendix to the papers you mentioned that are not terribly important (or famous) at this moment themselves.
Best wishes
Lubos | {
"domain": "physics.stackexchange",
"id": 97588,
"tags": "string-theory, research-level, kaluza-klein"
} |
Custom data type - "Binary" Float (0-1.0) | Question: In my current game I have the need for values ranging from 0 to 1, or from -1 to 1. The values should never exceed this threshold, so instead of constantly validating the values I made a custom data type which essentially clamps a float to that threshold. I have been researching various ways of making it better, and I feel like what I have now is just about perfect for my uses. I decided against implementing IConvertible since all I'm really doing is wrapping a float with code I need; the value is still a float, and can be extracted and converted in the unlikely event it would need to be.
Is there anything I can improve on? Anything I'm doing wrong? I'd love to know!
P.S. You'll notice there's mention of a "SBloat", that's the same thing as a Bloat, just signed and ranging from -1.0 to 1.0. The code is almost identical.
using System;
using UnityEngine;
namespace ProjectBleak {
/// <summary>
/// Represents an unsigned binary float ranging from 0 to 1.0. If the value exceeds this limit, it will be clamped to stay in range.
/// </summary>
[System.Diagnostics.DebuggerDisplay("{value}")]
public struct Bloat : IEquatable<Single>, IFormattable {
/// <summary> 0 </summary>
public static readonly float MinValue = 0f;
/// <summary> 1.0 </summary>
public static readonly float MaxValue = 1f;
private float value;
/// <summary> Should an error be thrown when the value would be lower than 0 or higher than 1.0? </summary>
public bool ErrorOnExceed { get; set; }
/// <summary>
/// Represents an unsigned binary float ranging from 0 to 1.0. If the value exceeds this limit, it will be clamped to stay in range.
/// </summary>
/// <param name="value">Value ranging from 0 to 1.0</param>
/// <param name="errorOnExceed">Should an error be thrown when the value would be lower than 0 or higher than 1.0?</param>
public Bloat(float value, bool errorOnExceed = false) {
ErrorOnExceed = errorOnExceed;
if (value < MinValue) {
value = MinValue;
if (ErrorOnExceed) {
Debug.LogError($"Bloat:: {value} exceeds minimum allowed value.");
}
}
else if (value > MaxValue) {
value = MaxValue;
if (ErrorOnExceed) {
Debug.LogError($"Bloat:: {value} exceeds maximum allowed value.");
}
}
this.value = value;
}
public override int GetHashCode() {
unchecked {
int hash = 17;
hash = hash * 23 + value.GetHashCode();
if (ErrorOnExceed) {
hash = hash |= 1 << 19;
}
else {
hash = hash |= 1 << 13;
}
return hash;
}
}
public bool Equals(float other) {
return value == other;
}
public override bool Equals(object obj) {
if (obj is SBloat || obj is Bloat || obj is Single) {
return Equals((float)obj);
}
return false;
}
public override String ToString() {
return value.ToString();
}
public String ToString(IFormatProvider provider) {
return value.ToString(provider);
}
public String ToString(String format) {
return value.ToString(format);
}
public String ToString(String format, IFormatProvider provider) {
return value.ToString(format, provider);
}
// Keep the Signed-to-Unsigned conversion explicit since we will be losing any negative number
public static explicit operator Bloat(SBloat s) => new Bloat(s);
public static implicit operator Bloat(Single f) => new Bloat(f);
public static implicit operator float(Bloat b) => b.value;
public static Bloat operator +(Bloat left, Single right) {
return new Bloat(left.value + right, left.ErrorOnExceed);
}
public static Bloat operator -(Bloat left, Single right) {
return new Bloat(left.value - right, left.ErrorOnExceed);
}
public static Bloat operator +(Bloat left, Bloat right) {
return new Bloat(left.value + right.value, left.ErrorOnExceed);
}
public static Bloat operator -(Bloat left, Bloat right) {
return new Bloat(left.value - right.value, left.ErrorOnExceed);
}
public static bool operator ==(Bloat left, Single right) {
return left.value == right;
}
public static bool operator !=(Bloat left, Single right) {
return left.value != right;
}
public static bool operator <(Bloat left, Single right) {
return left.value < right;
}
public static bool operator >(Bloat left, Single right) {
return left.value > right;
}
public static bool operator <=(Bloat left, Single right) {
return left.value <= right;
}
public static bool operator >=(Bloat left, Single right) {
return left.value >= right;
}
public static bool operator ==(Single left, Bloat right) {
return left == right.value;
}
public static bool operator !=(Single left, Bloat right) {
return left != right.value;
}
public static bool operator <(Single left, Bloat right) {
return left < right.value;
}
public static bool operator >(Single left, Bloat right) {
return left > right.value;
}
public static bool operator <=(Single left, Bloat right) {
return left <= right.value;
}
public static bool operator >=(Single left, Bloat right) {
return left >= right.value;
}
public static bool operator ==(Bloat left, Bloat right) {
return left.value == right.value;
}
public static bool operator !=(Bloat left, Bloat right) {
return left.value != right.value;
}
public static bool operator <(Bloat left, Bloat right) {
return left.value < right.value;
}
public static bool operator >(Bloat left, Bloat right) {
return left.value > right.value;
}
public static bool operator <=(Bloat left, Bloat right) {
return left.value <= right.value;
}
public static bool operator >=(Bloat left, Bloat right) {
return left.value >= right.value;
}
public static bool operator ==(Bloat left, SBloat right) {
return left.value == right;
}
public static bool operator !=(Bloat left, SBloat right) {
return left.value != right;
}
public static bool operator <(Bloat left, SBloat right) {
return left.value < right;
}
public static bool operator >(Bloat left, SBloat right) {
return left.value > right;
}
public static bool operator <=(Bloat left, SBloat right) {
return left.value <= right;
}
public static bool operator >=(Bloat left, SBloat right) {
return left.value >= right;
}
}
}
Answer: All in all it seems OK to me. I have these comments:
This test:
Bloat a = new Bloat(0.5f);
object b = new Bloat(0.3f);
Console.WriteLine(a.Equals(b));
Throws an InvalidCastException
in:
public override bool Equals(object obj)
{
if (obj is SBloat || obj is Bloat || obj is float)
{
return Equals((float)obj); // Invalid Cast
}
return false;
}
The solution seems to be to cast to Bloat instead of float:
return Equals((Bloat)obj);
I don't understand the implementation of GetHashCode(). What do you gain from the factors and adds? Why not just return value.GetHashCode()?
You implement IEquatable<float> but not IEquatable<Bloat>? Without the latter, this example results in an unnecessary call to the Bloat-to-float operator:
Bloat a = new Bloat(0.5f);
Bloat b = new Bloat(0.3f);
Console.WriteLine(a.Equals(b));
In the constructor you have the parameter errorOnExceed, and the comment says that an error will be thrown if errorOnExceed == true and the value is out of range. I don't see that happening; Debug.LogError only logs a message, it doesn't throw. I'm not sure, but IMO it seems that ErrorOnExceed should be a static member valid for all instances in an application.
In order to sort a List<Bloat> by calling List<Bloat>.Sort() or to use IEnumerable<Bloat>.OrderBy(b => b) you'll have to implement IComparable<Bloat>.
You implement for instance the +-operator for Bloat and float:
public static Bloat operator +(Bloat left, float right)
{
return new Bloat(left.value + right, left.ErrorOnExceed);
}
this is fine and is used in this situation:
Bloat a = new Bloat(0.5f);
float b = 0.3f;
Bloat c = a + b;
But if you do this:
Bloat c = b + a;
it works through the Bloat + Bloat operator with an overhead of calling the implicit float-to-Bloat operator for b. For optimization and symmetry you should consider to implement the float + Bloat operator. The same applies to the other operators for float as well as for SBloat.
You alternate between Single and float. For readability consider to use only one of them. | {
"domain": "codereview.stackexchange",
"id": 31757,
"tags": "c#, performance, unity3d, unit-conversion"
} |
Does every online algorithm have an offline counterpart? | Question: According to the Wikipedia page for online algorithms, it states:
"Not every online algorithm has an offline counterpart."
At the time of asking this question there is no citation for this claim.
How is it possible to not have an offline counterpart? What is an example of an algorithm that is online only?
Answer: This statement seems false to me because of the notion of competitive ratio that is used to analyze virtually all online algorithms. Heuristically the competitive ratio measures how well the online algorithm performs on a sequence of inputs versus an adversary that knows the sequence in advance. In particular, the adversary is solving an offline problem.
If you look at any of the online problems that are studied, they all use the competitive ratio as a benchmark. To back up this claim, look at the abstracts of any paper in a well known journal (such as SODA 18: https://archive.siam.org/meetings/da18/da18_abstracts.pdf) and search for abstracts that study online algorithms. They all use some type of competitive ratio in their analysis. | {
"domain": "cstheory.stackexchange",
"id": 4573,
"tags": "online-algorithms"
} |
Adaptive equalization vs inverse of transfer function | Question: I have the following equalization problem as shown in the figure below:
Now I can compute the coefficients for my adaptive FIR filter c (dim(c) = N) as follows:
$\mathbf{c_{opt}} = (\mathbf{H}^T\mathbf{H})^{-1}\mathbf{H}~\mathbf{h_{ideal}}$
where $\mathbf{H}$ is a convolution matrix with shifted vectors of $\mathbf{h}$ and $\mathbf{h_{ideal}}$ is chosen such that $x[n]=d[n]$ (delay-free equalizer).
The channel impulse response is given as
$\mathbf{h} = [1, 0.5]^T$
$\Rightarrow H(z) = 1+0.5 z^{-1}$ so the inverse of the system would be IIR:
$1/H(z) = \frac{z}{z+0.5}$
Now the question is the following: What is the difference between the LS-solution with an adaptive filter and direct inversion of the system? Is it just that one filter is FIR and the other one IIR? Therefore with the FIR-filter we cannot reach full equalization and a residual error stays?
Answer: Inverting a channel can only be done when the channel is a minimum phase system (trailing echos only). A minimum phase system is characterized as having all zeros in the left half plane (for the s plane, or equivalently in a sampled system and the z plane all zeros inside the unit circle). Inverting such a channel results in poles where every zero exists, and a causal system that has any poles in the right half plane (outside the unit circle) is not stable. So a minimum phase system has a stable causal inverse, while a mixed phase or maximum phase system does not. | {
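A quick numerical illustration of the residual error mentioned in the question (this uses the truncated series expansion of $1/H(z)$ as a stand-in equalizer for simplicity, not the LS solution $\mathbf{c_{opt}}$ itself):

```python
# The exact inverse of H(z) = 1 + 0.5 z^-1 is IIR with impulse response
# (-0.5)^n. Truncating it to N taps and convolving with the channel
# leaves a single residual tap of size 0.5^N instead of a clean impulse.
def convolve(a, b):
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def truncated_inverse(n):
    """First n taps of 1 / (1 + 0.5 z^-1)."""
    return [(-0.5) ** k for k in range(n)]

h = [1.0, 0.5]                 # channel impulse response
c = truncated_inverse(4)       # length-4 FIR "equalizer"
combined = convolve(h, c)      # [1.0, 0.0, 0.0, 0.0, -0.0625]
```

The residual shrinks geometrically with the FIR length $N$, matching the intuition that a finite FIR filter can only approximate the IIR inverse, so a residual error stays.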
"domain": "dsp.stackexchange",
"id": 8382,
"tags": "finite-impulse-response, infinite-impulse-response, homework, adaptive-filters, equalization"
} |
Discrete Langevin Equation | Question: We have the Langevin equation, that describes the motion of a particle in a viscous medium, given by
\begin{equation}\label{Langevin}
\frac{dv}{dt} = -\gamma v + \zeta(t)
\end{equation}
With the conditions that
\begin{equation}
\langle \zeta(t) \rangle = 0
\end{equation}
\begin{equation}
\langle \zeta(t)\zeta(t') \rangle = \Gamma \delta(t-t')
\end{equation}
And, if we make the time discrete, by putting $t = n\tau$ we can obtain the relation
\begin{equation}
v_{n+1} = av_n + \sqrt{\tau \Gamma}\xi_n
\end{equation}
where $a = (1 - \tau \gamma)$ with the conditions
\begin{equation}
\langle \xi_i \rangle = 0
\end{equation}
\begin{equation}
\langle \xi_i \xi_j \rangle = \delta_{ij}
\end{equation}
My question is that I don't know how to obtain the discrete equation from the continuous one. I understand the $a$, but why does the square root appear? What transformation between $\xi$ and $\zeta$ should I apply?
Answer: The short answer to your question is $\zeta(t) \rightarrow \sqrt{\frac{\Gamma}\tau}\xi_n$. The easiest reason to give for the square root is dimensional analysis. $\zeta$ is dimensionful, but $\xi$ is dimensionless, so using dimensional analysis in the variance equations will give you the square root.
In order to deduce this, you should think about how one discretizes differential equations. Working out this procedure, one gets $v(t) \rightarrow v_n$, $\zeta(t) \rightarrow \zeta_n$, $\frac{dv}{dt} \rightarrow \frac{v_{n+1} - v_n}{\tau}$, and $\delta(t-t') \rightarrow \frac{1}{\tau}\delta_{ij}$. The $\frac{1}{\tau}$ can be remembered based on dimensional considerations, or you can integrate/sum both sides in order to properly derive it. Now, simply plug these in and find the relationship between $\zeta_n$ and $\xi_n$. | {
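A short simulation confirms the scaling (the parameter values here are arbitrary illustrative choices): with $\gamma = 1$ and $\Gamma = 2$, the stationary variance of $v$ should approach the continuum value $\Gamma/2\gamma = 1$ for small $\tau$.

```python
# Simulate v_{n+1} = a v_n + sqrt(tau * Gamma) * xi_n with a = 1 - tau*gamma
# and xi_n ~ N(0, 1), then estimate the stationary variance of v.
import random

def stationary_variance(gamma=1.0, Gamma=2.0, tau=0.01, steps=200_000, seed=1):
    rng = random.Random(seed)
    a = 1.0 - tau * gamma
    sigma = (tau * Gamma) ** 0.5
    v, acc, count = 0.0, 0.0, 0
    for n in range(steps):
        v = a * v + sigma * rng.gauss(0.0, 1.0)
        if n > steps // 10:        # discard the initial transient
            acc += v * v
            count += 1
    return acc / count             # should be close to Gamma / (2 * gamma)
```

If the noise amplitude were $\tau\Gamma$ instead of $\sqrt{\tau\Gamma}$, the estimated variance would vanish as $\tau \to 0$, which is one quick way to see why the square root must be there.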
"domain": "physics.stackexchange",
"id": 35057,
"tags": "statistical-mechanics, discrete, non-equilibrium, stochastic-processes"
} |
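The substitution in the answer above can be sanity-checked by simulation (parameter values here are arbitrary): iterating $v_{n+1} = av_n + \sqrt{\tau\Gamma}\,\xi_n$ gives a stationary variance $\tau\Gamma/(1-a^2) \approx \Gamma/(2\gamma)$, the value expected from the continuous Langevin equation.

```python
import numpy as np

rng = np.random.default_rng(0)

gamma, Gamma, tau = 1.0, 2.0, 0.01   # arbitrary illustration values
a = 1.0 - tau * gamma                 # a = (1 - tau*gamma)
sigma = np.sqrt(tau * Gamma)          # from zeta(t) -> sqrt(Gamma/tau) xi_n, times tau

N = 200_000
v = np.zeros(N)
xi = rng.standard_normal(N)           # <xi_i> = 0, <xi_i xi_j> = delta_ij
for n in range(N - 1):
    v[n + 1] = a * v[n] + sigma * xi[n]

var_emp = v[5_000:].var()             # discard the transient, then measure the variance
var_theory = Gamma / (2.0 * gamma)    # stationary variance of the continuous equation
```

The empirical variance agrees with $\Gamma/(2\gamma)$ up to the $O(\tau\gamma)$ discretization bias and statistical noise.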
gazebo standalone version cannot be started as usual after installing gazebo ros packages | Question:
Hello
I have installed gazebo 1.9 and gazebo_ros_pkgs successfully following the instructions in the tutorials both installed from source and I am running ros groovy. And I tested gazebo using this command :
rosrun gazebo_ros gazebo, and it ran successfully as well.
But the problem is when I want to run gazebo standalone with my world file as usual, without ros integration, as follows: "gazebo file.world"
Even though there is no other gazebo running, I get this error: Exception [Master.cc:50] Unable to start server[Address already in use]. There is probably another Gazebo process running.
Any solutions for this error!
Thanks in advance!
Originally posted by Zahra on Gazebo Answers with karma: 122 on 2013-08-12
Post score: 1
Answer:
Did you try?
killall gzserver
Originally posted by Arn-O with karma: 316 on 2013-08-12
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by Zahra on 2013-08-12:
I just did, and it works. thanks a lot!
Comment by _sparkle_eyes on 2019-05-30:
For some reason this is not working for me? | {
"domain": "robotics.stackexchange",
"id": 3423,
"tags": "ros, gazebo"
} |
Form Validation Evaluation | Question: I've been trying several styles of validation code for a form. I was wondering if this is good or needs some improvement or just trash it and use jQuery validator code?
$("#registration-form").on("submit", function(e) {
e.preventDefault();
var error_field = "";
$(this).find(".input-field").each(function() {
var field_id = $(this).attr("id");
var value = $.trim($(this).val());
var length = value.length;
var error_value = "";
// Do field checks
if (length == 0) {
error_value = "You missed this field. D:";
} else if (length > 0 && length < 7) {
error_value = capitalize(field_id) + " too short";
} else if (length > 30 && field_id == "username") {
error_value = capitalize(field_id) + " too long";
} else if (length > 50 && field_id == "password") {
error_value = capitalize(field_id) + " too long";
} else if (passwordCombination(value) === false && field_id == "password") {
error_value = capitalize(field_id) + " must contain at least one string, number, and symbol.";
} else if (emailValidate(value) === false && field_id == "email") {
error_value = "Invalid email! :(";
}
// Do show error labels
if (error_value != null) {
$("#registration-form .content-row."+ field_id + " .box-error").text(error_value);
error_field += error_value;
}
});
if (error_field == "") {
var username = $.trim($("#username").val());
var email = $.trim($("#email").val());
var password = $.trim($("#password").val());
$.ajax({ type: "POST", url: "process/app_usage.php",
data: { username: username, email: email, process: "validate_user_email" },
success: function(data) {
// Do stuff
}
});
}
});
Answer:
I was wondering if this is good
Well, are you just looking to prevent the normal user from messing up during registration? I think it would be fine for that.
I'll see if I can point out flaws in your validation...
First, you check to see if length is 0. If it is, you warn them. Cool. But right after, you check to see if it's greater than 0. This is redundant? It would have to be longer than 0 characters to pass the first conditional!
Now you check length < 7. But what if my email is a@a.ca? Eeek! But I was lucky and set up my domain www.a.ca with email support and registered with my first initial!
Why can't the username be longer than 30 characters? Is your database not capable of such long strings? If it truly just cannot handle those awfully long names, fine. If it can, I'd suggest you support long names! Besides, I'd hate to find a new username if I've always used my most favorite 32 characters username!
Why can't my password be longer than 50 characters either? I might be one of those users who have funky passwords that always have 9 words in them.
What exactly is passwordCombination()? If I'm one of those funky users I mentioned in point 4, I'd be upset with this rule.
Use <input type="email" /> to validate emails on the client-side, not emailValidate(). Then check it again on the server with something such as PHP's filter_var().
":(" should really be "):" to be consistent with eye placement in "D:".
There's some critique on your validation. So maybe it would be best to find an established validation library and make slight alterations to suit you! Google has some validation standards, and W3C also has some tips and tricks up their sleeve!
But always remember:
Servers should not rely on client-side validation. Client-side validation can be intentionally bypassed by hostile users, and unintentionally bypassed by users of older user agents
Now I'm not JavaScript maniac, but I might be able to help a little!
I believe you could generalize $(this).find(".input-field") into something such as $(this).find("input[type=text], input[type=password]"). This way you don't rely on having that class.
You just check if error_field is empty, so why not make it simpler and turn it into a counter? Each error increments it. In the end, if its value is equal to 0, it's all good! | {
"domain": "codereview.stackexchange",
"id": 8817,
"tags": "javascript, jquery, validation, form"
} |
Are the modal participation factors bounded for shock response spectrum analysis? | Question: Recently dug myself into the theory of shock response spectrum analysis, but one thing is not clear for me.
The theory says that the peak response of the structure can be calculated as the product of the participation factor and the pertaining point of the shock response spectrum.
The participation factor is calculated as:
$$
\Gamma_{kj}=\phi_{j}^{T}\textbf{M}\textbf{1}_k
$$
where $\phi$ is the eigenvector, $\textbf{M}$ is the mass matrix and $\textbf{1}$ is the corresponding rigid body motion vector.
Since the participation factor calculation includes the eigenvectors, which are not unique as they are scalable, it would mean that the response of the structure would depend on the eigenvector normalization.
So my question would be that does it mean that it is assumed that for this type of analysis the eigenvectors are mass normalized and if so are they bounded to a specific value?
Reference:
https://www.comsol.co.in/multiphysics/response-spectrum-analysis
Answer: In the section "The Multiple DOF System" the Comsol document says
It has been assumed that the mass matrix normalization of the eigenmodes is used and that the damping matrix can be diagonalized by the eigenmodes.
Eigenvectors are assumed to be mass normalized in any mathematical derivation using them, unless somebody wants to be deliberately perverse.
The sensible way to define modal participation factors makes them dimensionless quantities. For mass normalized vectors, $\phi_j^{T}\mathbf{M}\phi_j = 1$ so $\phi_j^{T}\mathbf{M}$ has the dimension 1/length, and that is multiplied by a length (the rigid body motion vector) in your formula.
Note the participation factors can be negative, because the sign of a mass normalized eigenvector is arbitrary even though its magnitude is fixed by the normalization. | {
"domain": "engineering.stackexchange",
"id": 3563,
"tags": "vibration, modal-analysis, shock, eigenvalue-analysis"
} |
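A small numerical sketch of the normalization point above (the three-mass chain and its stiffness values are made up for illustration): once the eigenvectors are mass normalized, the participation factors $\Gamma_j = \phi_j^T \mathbf{M}\mathbf{1}$ are fixed up to sign, and the effective modal masses $\Gamma_j^2$ sum to the total mass, independent of any arbitrary eigenvector scaling.

```python
import numpy as np

# Hypothetical 3-DOF lumped-mass chain, fixed at one end.
m = np.array([2.0, 1.0, 1.5])
M = np.diag(m)
k = 1000.0
K = k * np.array([[ 2.0, -1.0,  0.0],
                  [-1.0,  2.0, -1.0],
                  [ 0.0, -1.0,  1.0]])

# Reduce the generalized problem K phi = w^2 M phi to a symmetric standard one.
Mih = np.diag(1.0 / np.sqrt(m))        # M^{-1/2} (diagonal mass matrix)
w2, V = np.linalg.eigh(Mih @ K @ Mih)  # orthonormal columns in the reduced problem
Phi = Mih @ V                          # mass normalized: Phi.T @ M @ Phi = I

r = np.ones(3)                         # rigid-body motion vector 1_k (unit translation)
gamma = Phi.T @ M @ r                  # participation factors (signs are arbitrary)
eff_mass = gamma**2                    # effective modal masses
```

The sum of the effective modal masses equals the total mass `m.sum()`, which is a common sanity check in response spectrum analysis.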
Understanding the probability of measurement w.r.t. density matrix | Question: I am told that the probability of measuring $\lambda$ is $$p_\lambda = Tr(\hat{P}_\lambda\hat{\rho}) = Tr(\hat{P}_\lambda\hat{\rho}\hat{P}_\lambda)$$ where $\hat{P}_\lambda = \sum_{n:\lambda_n = \lambda}|n\rangle\langle n|$ is the projection operator for eigenstates $n$ with an eigenvalue $\lambda$.
I have no idea how this is derived or why $$Tr(\hat{P}_\lambda\hat{\rho}) = Tr(\hat{P}_\lambda\hat{\rho}\hat{P}_\lambda).$$
Answer:
This is not derived, this is the Born rule or the Lüders-von Neumann measurement rule. Either way, it is a fundamental axiom of quantum mechanics that measurements work this way. See e.g. this, this or this question for more discussion of possible motivations.
If you are fine with the Born rule for pure states and wonder where this rule for mixed states comes from, write the mixed state as a mixture of pure states $\rho = \sum_i p_i \lvert \psi_i\rangle\langle \psi_i\rvert$ and consider that the probability to measure the eigenvalue $\lambda$ for some observable $\Lambda$ with eigenstates $\lvert \lambda\rangle$ in the state $\lvert \psi_i\rangle$ would be $\lvert \langle \lambda\vert \psi_i\rangle \rvert^2$ by the Born rule and
$$ \mathrm{tr}(\lvert \lambda\rangle \langle \lambda\vert \psi_i\rangle\langle \psi_i\rvert) = \lvert\langle \lambda\vert \psi_i \rangle\rvert^2,$$
so
$$\mathrm{tr}(P_\lambda \rho) = \sum_i p_i \lvert\langle \lambda\vert \psi_i \rangle\rvert^2,$$
is just the probability to measure $\lambda$ in each of the $\psi_i$, weighted by their mixed probability $p_i$.
$\mathrm{tr}(P\rho P) = \mathrm{tr}(P^2\rho) = \mathrm{tr}(P\rho)$, where the first equality is due to the cyclicity of the trace ($\mathrm{tr}(ABC) = \mathrm{tr}(CAB)$ for any operators $A,B,C$) and the second because projections are idempotent ($P^2 = P$). | {
"domain": "physics.stackexchange",
"id": 95175,
"tags": "quantum-mechanics, quantum-information, probability, density-operator, born-rule"
} |
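Both identities in the answer above are easy to verify numerically (the states, weights, and projector here are arbitrary illustrations): $\mathrm{tr}(P\rho) = \mathrm{tr}(P\rho P)$, and both equal the mixture-weighted sum of pure-state Born probabilities.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 4

# A mixed state rho = sum_i p_i |psi_i><psi_i| built from random normalized pure states.
psis = rng.standard_normal((3, d)) + 1j * rng.standard_normal((3, d))
psis /= np.linalg.norm(psis, axis=1, keepdims=True)
p = np.array([0.5, 0.3, 0.2])
rho = sum(pi * np.outer(psi, psi.conj()) for pi, psi in zip(p, psis))

# Projector onto the two-dimensional eigenspace spanned by basis states |0>, |1>.
P = np.zeros((d, d)); P[0, 0] = P[1, 1] = 1.0

p_born    = np.trace(P @ rho).real        # tr(P rho)
p_lueders = np.trace(P @ rho @ P).real    # tr(P rho P): equal by cyclicity and P^2 = P
# Weighted sum of pure-state Born probabilities sum_i p_i (|<0|psi_i>|^2 + |<1|psi_i>|^2)
p_mix     = sum(pi * np.sum(np.abs(psi[:2])**2) for pi, psi in zip(p, psis))
```

The probabilities for the projector and its complement also add up to 1, as they must for a normalized density matrix.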
Is numbness the absence of one or all touch sensations? | Question: I am studying the effects of tetrodotoxin and its symptoms when consumed. Numbness is one of the first sensations reported.
But I googled numbness and I couldn't find information about whether this means all touch sensations (e.g. thermal, mechano, etc) have to be absent for numbness or just one specific kind.
My Question:
Can someone
Define numbness
Explain whether numbness is the absence of one or more than one or all touch sensations.
Answer: The definition of numbness has been answered by yourself, and I will focus on the second part of the question.
In terms of the underlying physiological mechanism behind numbness I think it's good to narrow the question down and focus on local anesthetics, which numb a local area of the body for minor surgical procedures. Cocaine and related compounds have been shown to block not only pain, but also warmth, cold and touch perception, as well as blocking motor function. Cocaine and related compounds are basically voltage-gated Na+ channel blockers, but also block K+ channels to some extent. Their affinity to various different channels differs, explaining the differential effects. E.g., pain is blocked more effectively than motor functions (Scholz, 2002).
We all know the feeling of numbness when our limbs are asleep due to restriction of blood flow. Likely all neural activity is blocked here too, as the cutting off of oxygen affects all neurons.
Given the broad range of effects of local anesthetics and restriction of blood flow, I would tentatively conclude that numbness can be interpreted as a reduction of sensitivity of all the senses.
Reference
- Scholz, Br J Anaesth (2002); 89; 52-61 | {
"domain": "biology.stackexchange",
"id": 4316,
"tags": "neuroscience, homework, neurophysiology, touch"
} |
Geodesics of anti-de Sitter space | Question: It is said that (p. 9), given the anti-de Sitter space $\text{AdS}_2$, let's say in the static coordinates
$$ds^2 = -(1 + x^2) dt^2 + \frac{1}{(1+x^2)} dx^2$$
Every timelike geodesic will cross the same point after a time interval of $\pi$. That is, if $(x_0, t_0) \in \gamma$, then $(x_0, t_0 + \pi) \in \gamma$.
So I've been trying to find out how to show it. The non-zero Christoffel symbols are
$${\Gamma^x}_{xx} = - \frac{x}{1+x^2},\ {\Gamma^x}_{tt} = x + x^3, {\Gamma^t}_{xt} = {\Gamma^t}_{tx} = \frac{x}{1+x^2}$$
So that the geodesic equation is
\begin{eqnarray}
\ddot{x}(\tau) &=& \frac{x}{1+x^2} \dot{x}^2 - \dot{t}^2 (x + x^3)\\
\ddot{t}(\tau) &=& -2 \frac{x}{1+x^2} \dot{x} \dot{t}\\
\end{eqnarray}
We also have the two following equality : the timelike geodesic is such that $g(u,u) = -1$
$$\frac{1}{(1+x^2)} \dot{x}^2 -(1 + x^2) \dot{t}^2 = -1$$
and since the metric is static, there is a timelike Killing vector $\xi$ such that $g(\xi, u)$ is a constant.
$$(1 + x^2) \dot{t} = E$$
or
$$\dot{t} = \frac{E}{(1 + x^2)}$$
This gives us
$$\dot{x}^2 = -(1 + x^2) + E^2$$
And so
\begin{eqnarray}
\ddot{x}(\tau) + x &=& 0\\
\ddot{t}(\tau) &=& -2 x \dot{x} \frac{E}{(1 + x^2)^2}\\
\end{eqnarray}
Which gives us for a start that $x(\tau) = A \sin(\tau) + B \cos(\tau)$. Not quite periodic in $\pi$ (it should be $2\pi$ here), but more importantly this periodicity is in $\tau$ only and not in $t$, and it doesn't seem that $t = \tau$ in this scenario. Is there something wrong here or did I commit a mistake, either in interpreting the statement or the derivation here?
Given $x(\tau) = \sin(\tau)$, Wolfram Alpha gives out the following solution for $t(\tau)$, for instance :
$$t(\tau) = c_1 \tau + c_2 - \frac{1}{2\sqrt{2}} \arctan(2 \sqrt{2} \tan(\tau))$$
which doesn't seem to be particularly helpful here.
Answer: "Every timelike geodesic will cross the same point after a time interval of $\pi$" will be true if the half-period is $\pi$. You found the general solution for $x(\tau)$, namely
$$x(\tau)=A\sin\tau+B\cos\tau$$ or, alternately, $$x(\tau)=A\sin{(\tau-\tau_0)}.$$ When $\tau$ increases by $\pi$, $x$ does come back to what it was, after a half-period.
But we want to show that, when $x$ comes back, $t$, and not just $\tau$, has increased by $\pi$. So what is $t$ doing?
When you substitute $x(\tau)=A\sin{(\tau-\tau_0)}$ into
$$\frac{\dot{x}^2}{1+x^2}-(1+x^2)\dot{t}^2=-1$$
and solve for $t$, you get
$$t(\tau)=\tan^{-1}{[\sqrt{A^2+1}\tan{(\tau-\tau_0)}]}+t_0.$$
To see what is going on here, let's take $\tau_0$ and $t_0$ to be zero (since they just represent uninteresting time translations) and look at the function $\tan^{-1}{(\sqrt{A^2+1}\tan{\tau})}$. Here is a plot of it when $A=\sqrt{3}$ (just an arbitrary value as an example):
But $t$ isn't really discontinuous like this. The arctangent function is multivalued, and we have to take the appropriate branch of it so that t increases continuously with $\tau$. This means we move up the second blue curve by $\pi$, the third blue curve by $2\pi$, etc. to get a continuous function $t(\tau)$ that looks like this:
The result is that whenever $\tau$ increases by $\pi$, so does $t$!
So, to summarize, the timelike geodesics are
$$\begin{align}
x&=A\sin\tau \\
t&=\tan^{-1}{[\sqrt{A^2+1}\tan{\tau}]}
\end{align}$$
where we have dropped the uninteresting time-translation constants.
When $\tau$ increases by $\pi$, $t$ also increases by $\pi$, and $x$ comes back to what it was. This is what you were trying to show. | {
"domain": "physics.stackexchange",
"id": 53444,
"tags": "general-relativity, geodesics, anti-de-sitter-spacetime"
} |
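The key step in the answer above can be checked numerically without the closed-form arctangent (a sketch; the amplitudes are arbitrary): integrating $\dot{t} = \sqrt{A^2+1}/(1+A^2\sin^2\tau)$ over a half-period $\Delta\tau = \pi$ of $x = A\sin\tau$ gives $\Delta t = \pi$ for every amplitude $A$.

```python
import numpy as np

def coord_time_over_half_period(A, n=200_000):
    """Midpoint-rule integral of dt/dtau = sqrt(A^2+1)/(1 + A^2 sin^2 tau) over [0, pi]."""
    dtau = np.pi / n
    taus = (np.arange(n) + 0.5) * dtau
    tdot = np.sqrt(A**2 + 1.0) / (1.0 + A**2 * np.sin(taus)**2)
    return np.sum(tdot) * dtau

# The elapsed coordinate time is pi regardless of the geodesic's amplitude A.
deltas = [coord_time_over_half_period(A) for A in (0.5, np.sqrt(3.0), 5.0)]
```

That the result is exactly $\pi$ follows from the standard integral $\int_0^\pi d\tau/(1+A^2\sin^2\tau) = \pi/\sqrt{1+A^2}$, so every timelike geodesic accumulates the same coordinate time per half-period.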
How to ship ROS nodes? | Question:
Hi,
What is the recommended way to ship custom ROS nodes? Is it through debian packages? If yes, where can I find some tutorials on best practices to ship custom ROS code?
Originally posted by tushar on ROS Answers with karma: 26 on 2017-09-11
Post score: 0
Answer:
I do not know of a 'recommended way'. I would say it depends on your business model and your client's preferences and capabilities.
If you want to give your clients the most flexibility, provide the source code of your package in a standard ROS directory structure (http://wiki.ros.org/Packages). With this, your client can modify / reuse your code more easily, even under a newer ROS version. The best way to provide source code is probably uploading it to a version control server like github. You could even handle updates this way.
If you would like to hide your source code and provide your client with an easy way to install your package, you might want to provide debian packages and look at question #q192419
If you do not want to deal too much with your client's Ubuntu / ROS setup, then Docker might be an option for you, see: http://wiki.ros.org/docker/Tutorials
Originally posted by Andre Volk with karma: 775 on 2017-09-12
This answer was ACCEPTED on the original site
Post score: 2 | {
"domain": "robotics.stackexchange",
"id": 28826,
"tags": "ros, compile"
} |
AI Governor to Play a Strategy Game | Question: I had the idea the other day to implement an artificial intelligence that could play my strategy game. This could be useful in a variety of different ways. For example, the player could "lose contact" with one of the worlds in the game, and the AI could take over in that event. Or perhaps the player could elect a worker as governor to lead one of the worlds. Or maybe there could be opposing players that are artificial intelligence working on their own worlds competing with the player.
I've devised just such an artificial intelligence, and after watching it play my game for a little while, I think it's actually doing pretty well! I know that there are many possible problems with the code, so I hope this one will be a good one for code review.
I have tried to reduce duplicate code as much as possible, and I think I have succeeded. Where I am not happy is with the extensibility and changeability of the code. For example, at the moment the logic is hard coded to match the rules of the game, and so if major aspects of those rules change the AI will need to be changed as well. I tried hard to think of a way to better abstract this away, but it seemed like I had to pass on a large amount of information for something like that to work, and some of the logic contained in this class would have to be repeated in those classes anyway.
The other thing I am not happy about is that I have to pass the Tower itself to the AI in order for it to choose the proper move. I could package up the list of floors and the list of workers, but I also need some job validation that is contained inside the Tower object. Is this a major problem in this case? After all, for an AI to make a good decision it is always going to need a lot of information. Maybe I should have a subclass of the Tower that I instantiate and pass to the AI? But then I would need to have the Game configure this properly before passing it to the AI, and copy the values from the real tower to this mock tower object anyway.
The code is comprised of a GamePlayingAI and the GameAIJob classes. The GameAIJob has the bits of information relevant to the Game to actually send the real job to a tower. I am a bit concerned that this is a data only object, but I can't think of a better way to pass those values along.
DTGamePlayingAI.h
#import <Foundation/Foundation.h>
#import "DTTower.h"
#import "DTGamePlayingAIState.h"
#import "DTGameAIJob.h"
@interface DTGamePlayingAI : NSObject <NSCoding>
-(void) activate;
-(void) update;
@property GamePlayingAIState state;
@property DTGameAIJob *moveToPerform;
@property DTTower *tower;
@property int commonResources;
@property int rareResources;
@property int foodResources;
@end
DTGamePlayingAI.m
#import "DTGamePlayingAI.h"
#import "DTTowerFloor.h"
#import "DTJobCost.h"
#import "DTGamePlayingAIJobType.h"
#import "DTJobType.h"
#import "DTRoomType.h"
/*
AI starts in NoMove state, which counts down until it reaches ReadyForInput
The game then inserts the tower data and sets the state to received input
On the next tick, the AI will choose a general needs case
This will either immediately choose a job, or will select another case
If another case, on the next tick the AI will select a job
Once a job is chosen, state is changed to MoveDecided and the Game collects the move
If the state ends up back in NoMove, the countdown starts over
This allows for time to pass between AI move selections
*/
static const int kMinimumResources = 200;
static const int kMinimumDwarves = 6;
static const int kMinimumFood = 325;
static const int kAgeForOld = 165;
@implementation DTGamePlayingAI {
NSMutableArray *_revealedFloors;
DTCountdown *_countdownBetweenJobs;
int _countdownBetweenJobsAmount;
}
#pragma mark - Initialization
-(id) init {
self = [super init];
if (self) {
_state = GamePlayingAIStateNotStarted;
_revealedFloors = [[NSMutableArray alloc]init];
//the reason this is a property and not a constant is because it could be
//optionally lowered to make the AI make decisions faster or vice versa
_countdownBetweenJobsAmount = 50;
}
return self;
}
#pragma mark - Activation
-(void) activate {
_countdownBetweenJobs = [[DTCountdown alloc]initWithCount:_countdownBetweenJobsAmount];
[_countdownBetweenJobs startCountdown];
self.state = GamePlayingAIStateNoMove;
}
#pragma mark - Update Loop
-(void) update {
switch (self.state) {
//the countdown only counts down in this case
case GamePlayingAIStateNoMove:
[self updateCountdown];
break;
//immediately after the game loads with the tower data
case GamePlayingAIStateReceivedInput:
[self calculateBestMove];
break;
//big picture cases that sometimes cause supporting buildings to be built
case GamePlayingAIStateNeedMoreDwarves:
[self addMoveToGetDwarves];
break;
case GamePlayingAIStateNeedMoreFood:
[self addMoveToGetFood];
break;
case GamePlayingAIStateNeedMoreResources:
[self changeStateToGetMoreResources];
break;
case GamePlayingAIStateExpandTower:
[self changeStateToExpandTower];
break;
//these are supporting buildings
//arranged in the order they must be built underground
case GamePlayingAIStatePlaceLadders:
[self addMoveToBuildLadder];
break;
case GamePlayingAIStateMineFloors:
[self addMoveToMineFloor];
break;
case GamePlayingAIStateBuildWalls:
[self addMoveToBuildWalls];
break;
case GamePlayingAIStateBuildBottom:
[self addMoveToBuildBottom];
break;
case GamePlayingAIStateBuildRoom:
[self addMoveToBuildRoom];
break;
//activated by resource need case
case GamePlayingAIStateHaulItems:
[self addMoveToHaulItems];
break;
default:
break;
}
}
#pragma mark - Countdown Update
-(void) updateCountdown {
[_countdownBetweenJobs update];
if (_countdownBetweenJobs.state == CountdownFinished) {
self.tower = nil;
self.state = GamePlayingAIStateReadyForInput;
[_countdownBetweenJobs restartCountdown];
}
}
#pragma mark - Move Update
-(void) calculateBestMove {
//this is the method that will need to be changed if the game changes significantly in any way
[self calculateRevealedFloors];
//first check if more resources are needed
if ([self needMoreCommonResources:self.commonResources]) {
self.state = GamePlayingAIStateNeedMoreResources;
//then check if more dwarves are needed
} else if ([self needMoreDwarves]) {
self.state = GamePlayingAIStateNeedMoreDwarves;
//then check if more food is needed
} else if ([self needMoreFood:self.foodResources]) {
self.state = GamePlayingAIStateNeedMoreFood;
//if no needed jobs are found, set to this state
} else {
self.state = GamePlayingAIStateExpandTower;
}
//failsafe to deactivate working rooms once they are not needed
//this will override a previously chosen job
if (![self needMoreDwarves]) {
if ([self floorsWithRunningRoomUpgradesOfType:RoomTypeBirther].count > 0) {
[self endBirtherJob];
}
} else if (![self needMoreFood:self.foodResources]) {
if ([self floorsWithRunningRoomUpgradesOfType:RoomTypeFarm].count > 0) {
[self endFarmJob];
}
}
}
-(void) calculateRevealedFloors {
[_revealedFloors removeAllObjects];
for (id key in self.tower.towerDict) {
DTTowerFloor *floor = [self.tower.towerDict objectForKey:key];
if (floor.isRevealed) {
[_revealedFloors addObject:floor];
}
}
}
#pragma mark - Needs Checking
-(BOOL) needMoreDwarves {
//count up the number of dwarves under a certain age
NSMutableArray *youngEnoughDwarves = [[NSMutableArray alloc]init];
for (DTDwarf *dwarf in self.tower.dwarfListForRender) {
if (dwarf.age < kAgeForOld) {
[youngEnoughDwarves addObject:dwarf];
}
}
if (youngEnoughDwarves.count < kMinimumDwarves) {
//make sure that no birther rooms are already active
if ([self floorsWithRunningRoomUpgradesOfType:RoomTypeBirther].count > 0) {
return NO;
}
return YES;
}
return NO;
}
-(BOOL) needMoreFood:(int)currentFood {
if (currentFood < kMinimumFood) {
//allows for up to three farms to be activated
int maxActiveFarms = 2; //change this to a method that increases for the number of dwarves or revealed floors
if ([self floorsWithRunningRoomUpgradesOfType:RoomTypeFarm].count > maxActiveFarms) {
return NO;
}
return YES;
}
return NO;
}
-(BOOL) needMoreCommonResources:(int)commonResources {
return commonResources < kMinimumResources;
}
#pragma mark - State Changing Moves
-(void) changeStateToGetMoreResources {
//haul items if available, if not dig out floors if available, if not build new ladders to reveal
if ([self floorsNeedingHauling].count > 0) {
self.state = GamePlayingAIStateHaulItems;
} else if ([self floorsForDigging].count > 0) {
self.state = GamePlayingAIStateMineFloors;
} else if ([self floorsWithoutLadders].count > 0) {
self.state = GamePlayingAIStatePlaceLadders;
} else {
self.state = GamePlayingAIStateNoMove;
}
}
-(void) changeStateToExpandTower {
//either build a room, or a bottom, or walls, or mine, or place a ladder
if ([self floorsWithValidPermissionsForJob:RoomBuildJob].count > 0) {
self.state = GamePlayingAIStateBuildRoom;
} else if ([self floorsWithValidPermissionsForJob:BottomBuildJob].count > 0) {
self.state = GamePlayingAIStateBuildBottom;
} else if ([self floorsWithValidPermissionsForJob:WallBuildJob].count > 0) {
self.state = GamePlayingAIStateBuildWalls;
} else if ([self floorsWithValidPermissionsForJob:JobTypeMining].count > 0) {
self.state = GamePlayingAIStateMineFloors;
} else if ([self floorsWithValidPermissionsForJob:LadderJob].count > 0) {
self.state = GamePlayingAIStatePlaceLadders;
} else {
self.state = GamePlayingAIStateNoMove;
}
}
#pragma mark - Specific Job Moves
-(void) addMoveToGetDwarves {
//if there are birthers around that are inactive, start one
NSMutableArray *floorsWithInactiveBirthers = [self floorsWithInactiveRoomUpgrades:RoomTypeBirther];
if (floorsWithInactiveBirthers.count > 0) {
DTTowerFloor *floor = [floorsWithInactiveBirthers firstObject];
_moveToPerform = [[DTGameAIJob alloc]initWithFloor:floor.floorNumber AIJobToDo:GamePlayingAIJobTypeFlipRoomUpgradeSwitch jobToDo:0 roomType:0];
//otherwise if there are rooms available to upgrade, upgrade one to birther
} else if ([self floorsWithRooms].count > 0) {
DTTowerFloor *floor = [[self floorsWithRooms] firstObject];
_moveToPerform = [[DTGameAIJob alloc]initWithFloor:floor.floorNumber AIJobToDo:GamePlayingAIJobTypeRoomUpgrade jobToDo:0 roomType:RoomTypeBirther];
//otherwise expand the tower
} else {
//eventually this will not always happen
self.state = GamePlayingAIStateExpandTower;
}
[self finalizeStateForConcreteMove];
}
-(void) addMoveToGetFood {
//if there are inactive farms, start one
NSMutableArray *floorsWithInactiveFarms = [self floorsWithInactiveRoomUpgrades:RoomTypeFarm];
if (floorsWithInactiveFarms.count > 0) {
DTTowerFloor *floor = [floorsWithInactiveFarms firstObject];
_moveToPerform = [[DTGameAIJob alloc]initWithFloor:floor.floorNumber AIJobToDo:GamePlayingAIJobTypeFlipRoomUpgradeSwitch jobToDo:0 roomType:0];
//otherwise if one can be built, build one
} else if ([self floorsWithRooms].count > 0) {
DTTowerFloor *floor = [[self floorsWithRooms] firstObject];
_moveToPerform = [[DTGameAIJob alloc]initWithFloor:floor.floorNumber AIJobToDo:GamePlayingAIJobTypeRoomUpgrade jobToDo:0 roomType:RoomTypeFarm];
//otherwise expand the tower
} else {
//eventually this will not always happen
self.state = GamePlayingAIStateExpandTower;
}
[self finalizeStateForConcreteMove];
}
-(void) addMoveToMineFloor {
NSMutableArray *floorsNeedingMining = [self floorsForDigging];
if (floorsNeedingMining.count > 0) {
DTTowerFloor *floor = [floorsNeedingMining firstObject]; //add method to pick the closest to 0 or closest to a stockpile
_moveToPerform = [[DTGameAIJob alloc]initWithFloor:floor.floorNumber AIJobToDo:GamePlayingAIJobTypeBasicJob jobToDo:JobTypeMining roomType:0];
}
[self finalizeStateForConcreteMove];
}
-(void) addMoveToBuildLadder {
NSMutableArray *floorsNeedingLadders = [self floorsWithoutLadders];
if (floorsNeedingLadders.count > 0) {
DTTowerFloor *floor = [floorsNeedingLadders firstObject]; //add method to pick the deepest floor
_moveToPerform = [[DTGameAIJob alloc]initWithFloor:floor.floorNumber AIJobToDo:GamePlayingAIJobTypeBasicJob jobToDo:LadderJob roomType:0];
}
[self finalizeStateForConcreteMove];
}
-(void) addMoveToHaulItems {
NSMutableArray *floorsNeedingHauling = [self floorsNeedingHauling];
if (floorsNeedingHauling.count > 0) {
DTTowerFloor *floor = [floorsNeedingHauling firstObject]; //add method to choose closest to zero, gateway, or stockpile
_moveToPerform = [[DTGameAIJob alloc]initWithFloor:floor.floorNumber AIJobToDo:GamePlayingAIJobTypeBasicJob jobToDo:JobTypeHaulItem roomType:0];
}
[self finalizeStateForConcreteMove];
}
-(void) addMoveToBuildRoom {
NSMutableArray *floorsThatCanBuildRooms = [self floorsWithValidPermissionsForJob:RoomBuildJob];
if (floorsThatCanBuildRooms.count > 0) {
DTTowerFloor *floor = [floorsThatCanBuildRooms firstObject];
_moveToPerform = [[DTGameAIJob alloc]initWithFloor:floor.floorNumber AIJobToDo:GamePlayingAIJobTypeBasicJob jobToDo:RoomBuildJob roomType:0];
}
[self finalizeStateForConcreteMove];
}
-(void) addMoveToBuildBottom {
NSMutableArray *floorsThatCanBuildBottoms = [self floorsWithValidPermissionsForJob:BottomBuildJob];
if (floorsThatCanBuildBottoms.count > 0) {
DTTowerFloor *floor = [floorsThatCanBuildBottoms firstObject];
_moveToPerform = [[DTGameAIJob alloc]initWithFloor:floor.floorNumber AIJobToDo:GamePlayingAIJobTypeBasicJob jobToDo:BottomBuildJob roomType:0];
}
[self finalizeStateForConcreteMove];
}
-(void) addMoveToBuildWalls {
NSMutableArray *floorsThatCanBuildWalls = [self floorsWithValidPermissionsForJob:WallBuildJob];
if (floorsThatCanBuildWalls.count > 0) {
DTTowerFloor *floor = [floorsThatCanBuildWalls firstObject];
_moveToPerform = [[DTGameAIJob alloc]initWithFloor:floor.floorNumber AIJobToDo:GamePlayingAIJobTypeBasicJob jobToDo:WallBuildJob roomType:0];
}
[self finalizeStateForConcreteMove];
}
#pragma mark - Failsafe Room Deactivation
-(void) endBirtherJob {
NSMutableArray *floorsWithRunningBirthers = [self floorsWithRunningRoomUpgradesOfType:RoomTypeBirther];
if (floorsWithRunningBirthers.count > 0) {
DTTowerFloor *floor = [floorsWithRunningBirthers firstObject];
_moveToPerform = [[DTGameAIJob alloc]initWithFloor:floor.floorNumber AIJobToDo:GamePlayingAIJobTypeFlipRoomUpgradeSwitch jobToDo:0 roomType:0];
}
[self finalizeStateForConcreteMove];
}
-(void) endFarmJob {
NSMutableArray *floorsWithRunningFarms = [self floorsWithRunningRoomUpgradesOfType:RoomTypeFarm];
if (floorsWithRunningFarms.count > 0) {
DTTowerFloor *floor = [floorsWithRunningFarms firstObject];
_moveToPerform = [[DTGameAIJob alloc]initWithFloor:floor.floorNumber AIJobToDo:GamePlayingAIJobTypeFlipRoomUpgradeSwitch jobToDo:0 roomType:0];
}
[self finalizeStateForConcreteMove];
}
#pragma mark - Finalize State
-(void) finalizeStateForConcreteMove {
//if a concrete move was not decided, or not expanding, go to nomove
if (_moveToPerform) {
self.state = GamePlayingAIStateMoveDecided;
} else if (self.state != GamePlayingAIStateExpandTower) {
self.state = GamePlayingAIStateNoMove;
}
}
#pragma mark - Complex Floor Collections
-(NSMutableArray *) floorsWithInactiveRoomUpgrades:(RoomType)roomType {
NSMutableArray *floorsWithInactiveRoomUpgrades = [[NSMutableArray alloc]init];
for (DTTowerFloor *floor in _revealedFloors) {
if ((floor.floorBuildState & FloorHasRoomUpgrade) && floor.room.roomType == roomType) {
if (![floor isRoomUpgradeJobCreationCountdownRunning]) {
[floorsWithInactiveRoomUpgrades addObject:floor];
}
}
}
return floorsWithInactiveRoomUpgrades;
}
-(NSMutableArray *) floorsWithRunningRoomUpgradesOfType:(RoomType)roomType {
NSMutableArray *floorsWithRunningRoomUpgrades = [[NSMutableArray alloc]init];
for (DTTowerFloor *floor in _revealedFloors) {
if (floor.room.roomType == roomType) {
if ([floor isRoomUpgradeJobCreationCountdownRunning]) {
[floorsWithRunningRoomUpgrades addObject:floor];
}
}
}
return floorsWithRunningRoomUpgrades;
}
-(NSMutableArray *) floorsWithValidPermissionsForJob:(JobType)jobType {
NSMutableArray *validFloors = [[NSMutableArray alloc]init];
for (DTTowerFloor *floor in _revealedFloors) {
if ([_tower checkIfFloor:floor.floorNumber isValidForJob:jobType]) {
[validFloors addObject:floor];
}
}
return validFloors;
}
#pragma mark - Simple Floor Collections
-(NSMutableArray *) allFloorsWithInactiveRooms {
NSMutableArray *allFloorsWithInactiveRooms = [[NSMutableArray alloc]init];
for (DTTowerFloor *floor in _revealedFloors) {
if ([floor hasRoomWithWorkOption]) {
if (![floor isRoomUpgradeJobCreationCountdownRunning]) {
[allFloorsWithInactiveRooms addObject:floor];
}
}
}
return allFloorsWithInactiveRooms;
}
-(NSMutableArray *) floorsWithRooms {
NSMutableArray *floorsWithRooms = [[NSMutableArray alloc]init];
for (DTTowerFloor *floor in _revealedFloors) {
if ((floor.floorBuildState & FloorHasRoom) && ((floor.floorBuildState & FloorHasRoomUpgrade) == 0)) {
[floorsWithRooms addObject:floor];
}
}
return floorsWithRooms;
}
-(NSMutableArray *) floorsWithoutLadders {
NSMutableArray *floorsWithoutLadders = [[NSMutableArray alloc]init];
for (DTTowerFloor *floor in _revealedFloors) {
if (((floor.floorBuildState & FloorHasLadder) == 0) && [floor getActiveJobType] != LadderJob && [self.tower checkIfFloor:floor.floorNumber isValidForJob:LadderJob]) {
[floorsWithoutLadders addObject:floor];
}
}
return floorsWithoutLadders;
}
-(NSMutableArray *) floorsForDigging {
NSMutableArray *floorsForDigging = [[NSMutableArray alloc]init];
for (DTTowerFloor *floor in _revealedFloors) {
if (floor.groundBlocks.count > 0 && [floor getActiveJobType] != JobTypeMining && (floor.floorBuildState & FloorHasLadder)) {
[floorsForDigging addObject:floor];
}
}
return floorsForDigging;
}
-(NSMutableArray *) floorsNeedingHauling {
NSMutableArray *floorsNeedingHauling = [[NSMutableArray alloc]init];
for (DTTowerFloor *floor in _revealedFloors) {
if (floor.groundBlocks.count == 0 && floor.itemsNeedingHauling.count > 0 && [floor getActiveJobType] != JobTypeHaulItem) {
[floorsNeedingHauling addObject:floor];
}
}
return floorsNeedingHauling;
}
#pragma mark - NSCoding
-(id) initWithCoder:(NSCoder *)aDecoder {
self = [super init];
if (self) {
_state = [aDecoder decodeIntegerForKey:@"state"];
_moveToPerform = [aDecoder decodeObjectForKey:@"moveToPerform"];
_revealedFloors = [aDecoder decodeObjectForKey:@"revealedFloors"];
_tower = [aDecoder decodeObjectForKey:@"tower"];
_commonResources = [aDecoder decodeIntForKey:@"commonResources"];
_rareResources = [aDecoder decodeIntForKey:@"rareResources"];
_foodResources = [aDecoder decodeIntForKey:@"foodResources"];
_countdownBetweenJobs = [aDecoder decodeObjectForKey:@"countdownBetweenJobs"];
_countdownBetweenJobsAmount = [aDecoder decodeIntForKey:@"countdownBetweenJobsAmount"];
}
return self;
}
-(void) encodeWithCoder:(NSCoder *)aCoder {
[aCoder encodeInteger:self.state forKey:@"state"];
[aCoder encodeObject:self.moveToPerform forKey:@"moveToPerform"];
[aCoder encodeObject:_revealedFloors forKey:@"revealedFloors"];
[aCoder encodeObject:self.tower forKey:@"tower"];
[aCoder encodeInt:self.commonResources forKey:@"commonResources"];
[aCoder encodeInt:self.rareResources forKey:@"rareResources"];
[aCoder encodeInt:self.foodResources forKey:@"foodResources"];
[aCoder encodeObject:_countdownBetweenJobs forKey:@"countdownBetweenJobs"];
[aCoder encodeInt:_countdownBetweenJobsAmount forKey:@"countdownBetweenJobsAmount"];
}
@end
DTGameAIJob.h
#import <Foundation/Foundation.h>
#import "DTGamePlayingAIJobType.h"
#import "DTJobType.h"
#import "DTRoomType.h"
@interface DTGameAIJob : NSObject <NSCoding>
-(id) initWithFloor:(int)floorNumber AIJobToDo:(GamePlayingAIJobType)AIJobType jobToDo:(JobType)jobType roomType:(int)roomType;
@property int floorToDoMove;
@property GamePlayingAIJobType jobTypeToDo;
@property JobType jobToDo;
@property RoomType roomToBuild;
@end
DTGameAIJob.m
#import "DTGameAIJob.h"
@implementation DTGameAIJob
-(id) initWithFloor:(int)floorNumber AIJobToDo:(GamePlayingAIJobType)AIJobType jobToDo:(JobType)jobType roomType:(int)roomType {
self = [super init];
if (self) {
_floorToDoMove = floorNumber;
_jobTypeToDo = AIJobType;
_jobToDo = jobType;
_roomToBuild = roomType;
}
return self;
}
-(id) initWithCoder:(NSCoder *)aDecoder {
self = [super init];
if (self) {
_floorToDoMove = [aDecoder decodeIntForKey:@"floorToDoMove"];
_jobTypeToDo = [aDecoder decodeIntegerForKey:@"jobTypeToDo"];
_jobToDo = [aDecoder decodeIntegerForKey:@"jobToDo"];
_roomToBuild = [aDecoder decodeIntegerForKey:@"roomToBuild"];
}
return self;
}
-(void) encodeWithCoder:(NSCoder *)aCoder {
[aCoder encodeInt:self.floorToDoMove forKey:@"floorToDoMove"];
[aCoder encodeInteger:self.jobTypeToDo forKey:@"jobTypeToDo"];
[aCoder encodeInteger:self.jobToDo forKey:@"jobToDo"];
[aCoder encodeInteger:self.roomToBuild forKey:@"roomToBuild"];
}
@end
Here is where the AI is updated inside the Game class:
-(void) processAI {
//once a second tower is added the ai will start running
if (_towerArray.count > 1) {
[_gameAI update];
if (_gameAI.state == GamePlayingAIStateReadyForInput) {
_gameAI.commonResources = self.currentCommonResources;
_gameAI.rareResources = self.currentRareResources;
_gameAI.foodResources = self.currentFoodResources;
//there was a bug with doing it this way in the prototype which was fixed by instantiating a new tower and copying relevant values over, then passing it in
//after fleshing out the real class, switched back to this way and the bug was gone
_gameAI.tower = [_towerArray firstObject];
_gameAI.state = GamePlayingAIStateReceivedInput;
} else if (_gameAI.state == GamePlayingAIStateMoveDecided) {
if (_gameAI.moveToPerform) {
switch (_gameAI.moveToPerform.jobTypeToDo) {
case GamePlayingAIJobTypeBasicJob:
[self addJobToQueueOfType:_gameAI.moveToPerform.jobToDo forFloor:_gameAI.moveToPerform.floorToDoMove];
break;
case GamePlayingAIJobTypeFlipRoomUpgradeSwitch:
[self flipRoomUpgradeCountdownSwitch:_gameAI.moveToPerform.floorToDoMove];
break;
case GamePlayingAIJobTypeRoomUpgrade:
[self buildRoomUpgradeOfType:_gameAI.moveToPerform.roomToBuild forFloor:_gameAI.moveToPerform.floorToDoMove];
break;
default:
break;
}
}
_gameAI.moveToPerform = nil;
_gameAI.state = GamePlayingAIStateNoMove;
}
}
}
Answer: DTGamePlayingAI
For this post, I'm only looking at the DTGamePlayingAI class. This is a big question, and this is a big answer. I don't want the lack of commentary on other posted classes to be taken as a sign that they're flawless. And some of the comments from this post may be applicable to the other classes as well, but as a full disclaimer, I'm writing this post before having even looked at the other classes.
for (id key in self.tower.towerDict) {
DTTowerFloor *floor = [self.tower.towerDict objectForKey:key];
if (floor.isRevealed) {
[_revealedFloors addObject:floor];
}
}
When iterating over all the objects in a dictionary and the key itself is irrelevant, we can iterate over the objects as an array:
for (DTTowerFloor *floor in [self.tower.towerDict allValues]) {
if (floor.isRevealed) {
[_revealedFloors addObject:floor];
}
}
if (youngEnoughDwarves.count < kMinimumDwarves) {
//make sure that no birther rooms are already active
if ([self floorsWithRunningRoomUpgradesOfType:RoomTypeBirther].count > 0) {
return NO;
}
return YES;
}
return NO;
Having BOOL returns where the logic works out like this is quite common. You use it more than once. I've ended up with structures like this myself before as well.
However, I find a different pattern to be slightly more pleasing.
BOOL needMoreDwarves = (youngEnoughDwarves.count < kMinimumDwarves);
BOOL canBirthMoreDwarves = ([self floorsWithRunningRoomUpgradesOfType:RoomTypeBirther].count == 0);
return needMoreDwarves && canBirthMoreDwarves;
-(BOOL) needMoreFood:(int)currentFood;
-(BOOL) needMoreCommonResources:(int)commonResources;
These two methods are curious. They're not exposed publicly, so they're only used privately. The only argument they seem to take is a property of this class... a variable the method can already know about without it being passed. Why are we passing a variable? Just make the methods take no arguments and access the appropriate variable within the method.
You have a lot of methods which build and return a mutable array and in a lot of places, you're calling these methods only to call count on the array to get a number. And probably the worst part here is that you're not even actually concerned with the count. You're only concerned with whether or not an object exists in the array.
Maybe I've missed a spot, but it seems that your methods build these arrays to completeness, then the most that is ever done with the return value is to check whether its count is greater than zero, and in some cases, do something with the first object in the array.
I'm suggesting we change that. These methods shouldn't build and return arrays. Instead, they should iterate through _revealedFloors and return the first object they would otherwise have added to the array they're building. If we make it through the entirety of the array and haven't found a match, we return nil.
Now, we've saved a bit of time and a bit of space. And we can still do everything we'd want to do.
In the cases where we'd just check if the count is greater than 0, instead we can just check the return to see whether it's nil or not.
In the cases where we'd check if the count is greater than 0 then grab firstObject, we can instead just use the return value (after checking it for nil).
And by the way, you don't need to check an array's count before calling firstObject or lastObject. These two convenience methods are relatively new, and Apple designed them so that they return nil if the array is empty, which removes the need to check the array's size to guard against an index-out-of-bounds exception. Just grab firstObject or lastObject and check if it's nil. | {
"domain": "codereview.stackexchange",
"id": 8818,
"tags": "game, objective-c, ai"
} |
I need to find the Ka of a weak acid in titration with a strong base | Question: I have been working on this lab forever and have not been able to get a $K_\text{a}$ anywhere close to the accepted value. I’m sure I’m missing something really simple.
We titrated $\ce{CH3COOH}$ with $\ce{NaOH}$.
I used $10.00\ \mathrm{mL}$ of $\ce{CH3COOH}$. Volume of $\ce{NaOH}$ at the equivalence point was $10.14\ \mathrm{mL}$, and the concentration of $\ce{NaOH}$ was $0.1530\ \mathrm{mol \cdot L^{-1}}$.
First I found the concentration of $\ce{CH_3COOH}$ by using $c_\text{a} \cdot V_\text{a} = c_\text{b} \cdot V_\text{b}$, so $c_\text{a} \cdot 10.00\ \mathrm{mL} = 0.1530\ \mathrm{mol \cdot L^{-1}} \cdot 10.14\ \mathrm{mL}$
$c_\text{a} = 0.1551\ \mathrm{mol \cdot L^{-1}}$
Then I used the equation: $\ce{CH3COOH + H2O <=> H3O+ + CH3COO-}$
and the pH (at the midpoint of $5.07\ \mathrm{mL}\ \ce{NaOH}$) which was $4.87$
$$K_\text{a}= \frac{[\ce{H3O+}] [\ce{CH3COO-}]}{[\ce{CH3COOH}]}, \qquad [\ce{H3O+}]=[\ce{CH3COO-}]= 10^{-4.87}= 1.36 \times 10^{-5}
$$
$$
K_\text{a}= \frac{(1.36 \times 10^{-5})^2}{ 0.1551} = 1.19 \times 10^{-9}
$$
Any help on what I am doing wrong would be greatly appreciated!
Answer: I’m not sure if I’m missing your question, but I thought that perhaps you could use the relation $\mathrm{pH}=\mathrm{p}K_\text{a}$ at the half-equivalence point? The answer would still be incorrect, but closer?
Otherwise, you could probably use stoichiometry to solve for $K_\text{a}$ at the beginning or the end of the titration by constructing an ICE table or otherwise. Considering the deprotonation at the start, the concentration of protons would be equal to the concentration of the conjugate base. Then, instead of inserting the pH at the half-equivalence point, insert the pH before the titration.
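As a quick numerical check (my own sketch, not part of the original answer), here are both routes computed from the numbers in the question; the half-equivalence route gives a sensible $K_\text{a}$, while squaring $[\ce{H3O+}]$ and dividing by the full acid concentration reproduces the anomalous value:

```python
pH_half = 4.87      # measured pH at the half-equivalence point (from the data above)
c_acid = 0.1551     # mol/L, acid concentration found in the question

# At half-equivalence [CH3COOH] = [CH3COO-], so Ka = [H3O+], i.e. pKa = pH:
Ka = 10 ** (-pH_half)
print(Ka)           # ~1.3e-5, near the accepted 1.8e-5 for acetic acid

# The route taken in the question squares [H3O+] and divides by the full
# acid concentration, which is only valid for the pH of the initial solution:
wrong_Ka = Ka ** 2 / c_acid
print(wrong_Ka)     # ~1.2e-9, reproducing the anomalous value above
```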
If you need help doing the algebra, I could try...
But I think what you did wrong was inserting the concentration at the half equivalence point, rather than before the titration. | {
"domain": "chemistry.stackexchange",
"id": 3154,
"tags": "inorganic-chemistry, acid-base, titration"
} |
Far-field distance of antenna and a wave phase | Question: In the following reference [David M. Pozar 2012, 4ed, Microwave Engineering, p 661], it is stated that the far-field distance of a relatively large antenna is given by the formula
\begin{equation}
R_{ff}=\frac{2D^2}{\lambda}
\end{equation}
where $D$ is the maximum dimension of the antenna and $\lambda$ is the wavelength. The text states: "This result is derived from the condition that the actual spherical wave front radiated by
the antenna departs less than $\pi/8 =22.5^\circ$ from a true plane wave front over the maximum
extent of the antenna". Does this mean that we have to compare the "far-field" component of the electric field with a "near-field" component, and that at phase $\pi/8$ they are equal, or something else? I think I am missing the point.
Answer: The condition on the far field in free space is obtained from the wave equations. First of all, the equation for the retarded potential is
\begin{equation} \label{eq:dAlembert}
\Box{\vec{A}=-\mu_0 \vec{j}}
\end{equation}
where the temporal part of the potential is $e^{j \omega t}$, so we can get the following equation assuming $k = \omega /c$
\begin{equation} \label{eq:Helmholtz}
\Delta\vec{A}+ k^{2}\vec{A} = -\mu_0\vec{j}
\end{equation}
Next we solve this equation using the Green's function formalism.
\begin{equation}
G(\vec{r}) = \frac {-e^{\pm ikr}}{4\pi r}
\end{equation}
After a long calculation, and using the Lorenz gauge, we can obtain exact equations for the electric and magnetic fields. For example, for the magnetic field we have
\begin{equation}
\vec{H} = -\frac{1} {4 \pi}
\int \ e^{ik|\vec{r}-\vec{r}\,'|} \frac {1} {|\vec{r}-\vec{r}\,'|^2} (-\frac{\vec{r}-\vec{r}\,'} {|\vec{r}-\vec{r}\,'|} + ik (\vec{r}-\vec{r}\,') ) \times \vec{j}(\vec{r}\,') \, d^3\vec{r}\,'
\end{equation}
We are interested in the term $ e^{ik|\vec{r}-\vec{r}\,'|} $, where the Taylor expansion of the exponent's argument is
\begin{equation} \label{Eq:Abs_r-r'}
|\vec{r}-\vec{r}\,'| = \sqrt{(\vec{r}-\vec{r}\,')\cdot(\vec{r}-\vec{r}\,')}\approx r \left(1-\vec{e}_r \cdot \vec{r}\,'/r +\frac{{\vec{r}\,'}^2} {2 r^2}\right)
\end{equation}
In order for the last term in the previous equation to be negligible, we impose the far-field criterion:
\begin{equation}
\frac{{r\,'}^2} {2 r} < \frac{\lambda}{16}
\end{equation}
or equivalently, $$\text{phase} = k\frac{{r\,'}^2} {2 r} < \frac{\pi}{8}.$$ With $r\,' = D/2$ (half the maximum antenna dimension), this reduces to $r > 2D^2/\lambda$, the far-field distance quoted in the question.
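A quick numerical sanity check in Python (my own sketch; the 1 m dish at 10 GHz is a made-up example): at $r = 2D^2/\lambda$ with $r' = D/2$, the neglected phase term $k r'^2/(2r)$ comes out to exactly $\pi/8$:

```python
import math

# Hypothetical numbers: a D = 1 m antenna at 10 GHz (wavelength 3 cm).
D, lam = 1.0, 0.03
k = 2 * math.pi / lam

R_ff = 2 * D**2 / lam                 # the textbook far-field distance
# Largest neglected phase term across the aperture, at r = R_ff, r' = D/2:
phase_err = k * (D / 2) ** 2 / (2 * R_ff)

print(R_ff)                           # ~66.7 m
print(phase_err, math.pi / 8)         # the two agree
```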
It is well explained in [p83, A.B. Smolders, H.J. Visser, U. Johannsen, Modern Antennas and Microwave Circuits, September 2020, Eindhoven University of Technology]. | {
"domain": "physics.stackexchange",
"id": 88675,
"tags": "electromagnetism, antennas"
} |
How does instantaneous velocity cause displacement in just one point? | Question: I have a question.
The graph of a falling object is a curve, right?
And the instantaneous velocity is the slope of the tangent line, but how does this velocity produce a displacement? Suppose the instantaneous velocity is 10 m/s: it belongs to just one point, one instant of time, so it seems it cannot produce any displacement. A displacement would require $v \cdot t$ with a time $t$ greater than zero, but on the graph it is just one point.
Answer: Your question echoes Zeno's paradox, if I understand you correctly. Basically you are asking how a derivative can have a non-zero value if it is defined at a point. From elementary calculus, we learn that it is not quite true that the derivative depends only on a single point; it also depends on that point's neighborhood. For the case of a velocity, assuming $y(t)$ is the function that relates time $t$ to the height of a falling object:
$$
v(t) = y'(t) = \lim_{\Delta{t}\to 0}\frac{y(t+\Delta{t})-y(t)}{\Delta{t}}
$$
So you see, for $y'(t)$ to be defined at any point $t$, the difference quotient must have a well-defined limit as $\Delta{t}\to 0$, which is saying something about the behaviour of $y$ in the neighborhood of $t$.
There's a lot more that can be said about this, but this is only to give you a feel that the derivative doesn't really depend only on a single point.
Now the next thing to realize is that a displacement of the falling object which occurs between nearby times $t=t_1$ and $t=t_2$ is the velocity (speed in this case, say at $t_1$) times this small time interval: $y'(t_1)\cdot(t_2-t_1)$, which also explains how such a displacement is generated by this instantaneous speed.
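To make this concrete, here is a small numerical sketch in Python (my own illustration, not part of the original answer): the instantaneous speed is computed as a difference quotient, and summing $v(t)\,\Delta t$ over many tiny intervals rebuilds the finite displacement:

```python
G = 9.81  # m/s^2, assumed constant gravitational acceleration

def y(t):
    """Distance fallen after time t, starting from rest."""
    return 0.5 * G * t * t

def v(t, dt=1e-6):
    """Instantaneous speed as (the limit of) a difference quotient."""
    return (y(t + dt) - y(t)) / dt

# Rebuild the displacement over [0, 1] s from instantaneous speeds:
# each tiny interval contributes v(t) * dt, and the Riemann sum tends
# to the exact displacement y(1) - y(0) as dt -> 0.
dt = 1e-4
displacement = sum(v(k * dt) * dt for k in range(int(1.0 / dt)))
print(displacement, y(1.0) - y(0.0))  # both are close to 4.905 m
```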
Note: This is only a good approximation for when the speed doesn't vary significantly (or at all) between times $t_1$ and $t_2$. Otherwise you will need to integrate the speed (or velocity for the general case) function across the time interval to calculate the precise displacement! | {
"domain": "physics.stackexchange",
"id": 93339,
"tags": "kinematics, differentiation, projectile, calculus, free-fall"
} |
How to intuitively come up with an example for an ambiguous grammar and how to make that grammar unambiguous? | Question: I don't get how to intuitively come up with an example for an ambiguous grammar.
Let's take as an example this grammar:
Declaration ::= Type D ;
Type ::= "int" | "char"
D ::= "*" D
| D "[" number "]"
| D "(" Type ")"
| "(" D ")"
| name
I am told outright that this grammar is ambiguous. What is expected of me is to find one example that proves that it is. What I'm interested in is the thought process that allows you to find such an example. Our teacher just gave us one example showing that we would obtain two different derivation trees:
int *foo[5]; has two derivation tree
Declaration Declaration
/ | \ / | \
Type D ; Type D ;
| / \ | / \____
int * D int / \ \ \
/ \____ D [ 5 ]
/ \ \ \ / \
D [ 5 ] * D
| |
foo foo
However, I have no idea how he figured out that int*foo[5] would be the example before drawing the trees. It all boils down to: how did they do it without trial and error?
How to make that grammar unambiguous?
I was also given the task to make the above grammar unambiguous. However I don't know yet again what is the intuition behind making it unambiguous.
They gave us this solution:
Declaration ::= Type D ;
Type ::= "int" | "char"
D ::= "*" D
| "(" D ")" D'
| name D'
D' ::= "[" number "]" D'
| "(" Type ")" D'
| empty <== empty string
There must be a pattern in all of this. What it is? What is the general method to solve this type of problem regardless of which grammar is given?
Answer: The simple way of seeing this class of ambiguities is to observe that if two right-hand sides for the same non-terminal overlap, then they could be applied in either order:
D ::= "*" D
| D "[" number "]"
The overlap is pretty clear:
"*" D "[" number "]"
Another example, where a right-hand side can overlap with itself:
E ::= E "+" E
Overlap:
E "+" E
E "+" E
-------------
E "+" E "+" E
The solution is to remove the overlap by introducing a new non-terminal for one of the overlapping uses. Here's my solution to the ambiguity in your grammar:
Declaration ::= Type D ;
Type ::= "int" | "char"
D ::= "*" D
    | E
E ::= E "[" number "]"
| E "(" Type ")"
| "(" D ")"
| name
Which productions you decide to shift to the new non-terminal is not arbitrary; it will define which of the competing parses is correct for the language you are designing. There is no universal answer to that question, so it cannot be done algorithmically.
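One way to confirm the ambiguity mechanically (my own sketch, not part of the original answer) is to count parse trees for the declarator non-terminal D with a memoized brute-force recursion over token spans; for the token string corresponding to *foo[5] it finds exactly the two trees shown in the question:

```python
from functools import lru_cache

def count_parse_trees(tokens):
    """Count distinct parse trees deriving tokens[i:j] from D."""
    @lru_cache(maxsize=None)
    def d(i, j):
        if i >= j:
            return 0
        total = 0
        if j - i == 1 and tokens[i] == "name":            # D ::= name
            total += 1
        if tokens[i] == "*":                              # D ::= "*" D
            total += d(i + 1, j)
        if (j - i >= 4 and tokens[j - 3] == "["
                and tokens[j - 2].isdigit() and tokens[j - 1] == "]"):
            total += d(i, j - 3)                          # D ::= D "[" number "]"
        if (j - i >= 4 and tokens[j - 3] == "("
                and tokens[j - 2] in ("int", "char") and tokens[j - 1] == ")"):
            total += d(i, j - 3)                          # D ::= D "(" Type ")"
        if tokens[i] == "(" and tokens[j - 1] == ")":     # D ::= "(" D ")"
            total += d(i + 1, j - 1)
        return total
    return d(0, len(tokens))

# The declarator of "int *foo[5];", with the identifier abstracted to "name":
print(count_parse_trees(("*", "name", "[", "5", "]")))  # 2 -- ambiguous
print(count_parse_trees(("name",)))                     # 1
```

A count greater than 1 for any token string is a proof of ambiguity, which is exactly what the overlapping right-hand sides predict.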
Note: The clumsy solution you have been provided with is probably because your lesson is being mixed with a discussion of top-down parsing. My grammar, though unambiguous, is not suitable for top-down parsing because it is left-recursive. However, the resulting parse tree reflects the syntactic structure of the declaration. | {
"domain": "cs.stackexchange",
"id": 14869,
"tags": "formal-grammars, programming-languages, tree-grammars"
} |
Isn't the depth of a convolutional layer, the number of colors (or colorspace size)? | Question: I have been going through a CNN tutorial and noticed that depth of a convolutional layer is equal to the number of filters. But, shouldn't the depth be the number of colors in the image? I mean, if it's RGB then, depth is 3 right? Am I missing something here?
Answer: Yes, the depth of an image is equal to the color channels (1 for gray-scale images, 3 for RGB).
However, that is only the case for the input layer of the CNN. During the first convolution layer, the image can be passed through as many filters as we select. This number becomes the depth of the first layer. The subsequent layers can have any depth we want.
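Here is a minimal pure-Python sketch (my own illustration; the 28x28 input, 5x5 kernels, and 6 then 16 filters are LeNet-style numbers assumed for the demo) showing that the output depth of each convolutional layer equals its number of filters:

```python
def conv2d(image, filters):
    """Valid (no padding, stride 1) convolution on nested lists.
    image: H x W x C; each filter: k x k x C."""
    h, w, c = len(image), len(image[0]), len(image[0][0])
    k = len(filters[0])
    return [[[sum(image[i + di][j + dj][ch] * kern[di][dj][ch]
                  for di in range(k) for dj in range(k) for ch in range(c))
              for kern in filters]               # one output channel per filter
             for j in range(w - k + 1)]
            for i in range(h - k + 1)]

def const_kernels(n, k, c, value=0.1):
    """n dummy k x k x c kernels filled with a constant."""
    return [[[[value] * c for _ in range(k)] for _ in range(k)] for _ in range(n)]

gray = [[[1.0] for _ in range(28)] for _ in range(28)]  # 28x28 image, depth 1
layer1 = conv2d(gray, const_kernels(6, 5, 1))           # 6 filters
layer2 = conv2d(layer1, const_kernels(16, 5, 6))        # 16 filters
print(len(layer1), len(layer1[0]), len(layer1[0][0]))   # 24 24 6
print(len(layer2), len(layer2[0]), len(layer2[0][0]))   # 20 20 16
```

The spatial size shrinks by k - 1 per layer, while the depth is set entirely by how many filters we choose.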
For example in the CNN depicted below:
The image input to the CNN initially has a depth of 1 (because it is gray-scale). The first convolutional layer passes it through 6 filters, so the depth of the first layer becomes 6. The second layer passes the output of the first through 16 filters, meaning that the second layer has a depth of 16, etc. | {
"domain": "datascience.stackexchange",
"id": 3559,
"tags": "cnn, convolution"
} |
How is Newton's third law not valid in special relativity while momentum is still conserved? | Question: I'm assuming questions about the third law being invalid in relativity have already been asked on this site, but I'm specifically asking about how momentum is still conserved inspite of it.
Consider only two particles in the universe. In some reference frame, they're moving with velocities $v_1(t)$ and $v_2(t)$ at time $t$.
The relativistic momentum is:
$$\gamma (v_1(t))m_1v_1(t)+\gamma (v_2(t))m_2v_2(t)$$
The relativistic momentum after a small time $dt$ is:
$$\gamma (v_1(t+dt))m_1v_1(t+dt)+\gamma (v_2(t+dt))m_2v_2(t+dt)$$
If both these quantities are the same, we can set them equal. After setting them equal, we get:
$$\gamma (v_1(t+dt))m_1v_1(t+dt)-\gamma (v_1(t))m_1v_1(t)=-(\gamma (v_2(t+dt))m_2v_2(t+dt)-\gamma (v_2(t))m_2v_2(t))$$
Dividing both sides by $dt$ and letting $dt\rightarrow 0$, we get:
$$\frac{d(\gamma(v_1(t))m_1v_1(t))}{dt}=-\frac{d(\gamma(v_2(t))m_2v_2(t))}{dt}$$
$$F_{12}=-F_{21}$$
So what's wrong with the above?
Edit- That other question is about General Relativity and the answers do not address the computations I've provided here. I want to know what's wrong with this specifically
Edit- After reading the other links, what I got is that momentum conservation, as I've stated it in this post, is incorrect. This is because I did not account for the momentum of the field. So that means that Newton's third law is also correct if we extend the notion of force from particle-particle interactions to particle-field interactions. Is this correct? And in what sense do fields carry momentum? What are the mass and velocity of a field?
Answer: Good question, and the answer is that Newton's third law remains valid in Special Relativity as long as one applies it the right way, and that means it has to be applied locally at each event where forces are acting, not non-locally by comparing a force
at some location $A$ with another force at some other location $B$. The third law applies to forces acting in a pair at any one location, say $A$.
When a force acts at the boundary between solid objects, this is straightforward. Each object pushes on the other.
When a force acts throughout a solid, one can analyse it the same way; for the details you need to invoke the concept of pressure and/or tension and stress. This is done in full via the stress-energy tensor.
The case where people say the third law breaks down is for example when charged objects attract or repel one another at a distance. It is true that in such cases the force on one object is not necessarily of equal size and opposite direction to the force on the other object. But one should ask: how is the force arising? It arises by an interaction between the charge on any given body and the electromagnetic field right there at the body. If the force causes the body to accelerate, for example, then, by conservation of momentum, one must find that momentum is moving out of the field and into the accelerating body. Force is, by definition, rate of change of momentum. One concludes that there is a pair of forces: one acting on the charged body, and the other, equal and opposite, acting on the electromagnetic field. These forces cause momentum to go into the charged body, and an equal and opposite momentum to go into the electromagnetic field. They are both present at the same location. They are equal and opposite.
It might seem odd to think of a force acting on a field, but the stress energy tensor makes no distinction between matter and field. Anything that can carry momentum can in principle have a force act on it.
P.S. The calculation offered in the original question is ok if the two particles are colliding at a single event, but if they are at different places and the momenta are changing with time (owing to the force from a field, for example) then you can't assume each particle's momentum will change by the same amount during some small time. In this case the sum of the two particles' momenta is not constant, because they are interacting with a third party, namely the field. | {
"domain": "physics.stackexchange",
"id": 71288,
"tags": "newtonian-mechanics, special-relativity, forces, momentum, conservation-laws"
} |
Why correlation functions? | Question: While this concept is widely used in physics, it is really puzzling (at least for beginners) that you just have to multiply two functions (or the function by itself) at different values of the parameter and then average over the domain of the function, keeping the difference between those parameters fixed:
$$C(x)=\langle f(x'+x)g(x')\rangle$$
Is there any relatively simple illustrative examples that gives one the intuition about correlation functions in physics?
Answer: The correlation function you wrote is a completely general correlation of two quantities,
$$\langle f(X) g(Y)\rangle$$
You just use the symbol $x'$ for $Y$ and the symbol $x+x'$ for $X$.
If the environment - the vacuum or the material - is translationally invariant, it means that its properties don't depend on overall translations. So if you change $X$ and $Y$ by the same amount, e.g. by $z$, the correlation function will not change.
Consequently, you may shift by $z=-Y=-x'$ which means that the new $Y$ will be zero. So
$$\langle f(X) g(Y)\rangle = \langle f(X-Y)g(0)\rangle = \langle f(x)g(0) \rangle$$
As you can see, for translationally symmetric systems, the correlation function only depends on the difference of the coordinates i.e. separation of the arguments of $f$ and $g$, which is equal to $x$ in your case.
So this should have explained the dependence on $x$ and $x'$.
Now, what is a correlator? Classically, it is an average over the probability distribution
$$\langle S \rangle = \int D\phi\,\rho(\phi) S(\phi)$$
This holds for $S$ being the product of several quantities, too. The integral goes over all possible configurations of the physical system and $\rho(\phi)$ is the probability density of the particular configuration $\phi$.
In quantum mechanics, the correlation function is the expectation value in the actual state of the system - usually the ground state and/or a thermal state. For a ground state which is pure, we have
$$\langle \hat{S} \rangle = \langle 0 | \hat{S} | 0 \rangle$$
where the 0-ket-vector is the ground state, while for a thermal state expressed by a density matrix $\rho$, the correlation function is defined as
$$\langle \hat{S} \rangle = \mbox{Tr}\, (\hat{S}\hat{\rho})$$
Well, correlation functions are functions that know about the correlation of the physical quantities $f$ and $g$ at two points. If the correlation is zero, it looks like the two quantities are independent of each other. If the correlation is positive, it looks like the two quantities are likely to have the same sign; the more positive it is, the more they're correlated. They're correlated with the opposite signs if the correlation function is negative.
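As a toy numerical illustration (my own sketch, using a one-dimensional AR(1) random process as a stand-in for a translationally invariant system), the estimated correlator $\langle f(x'+x) f(x')\rangle$, obtained by averaging over the translation $x'$, indeed depends only on the separation and decays with it:

```python
import random

random.seed(1)

# A discretised stationary "field": an AR(1) process whose exact
# normalised autocorrelation at separation s is phi**s, i.e. it
# depends only on the separation, as translation invariance demands.
phi, n = 0.8, 30_000
x = [0.0]
for _ in range(n):
    x.append(phi * x[-1] + random.gauss(0.0, 1.0))

def corr(s):
    """Estimate <f(x' + s) f(x')> by averaging over the translation x'."""
    return sum(x[t + s] * x[t] for t in range(n - s)) / (n - s)

c0, c5, c10 = corr(0), corr(5), corr(10)
print(c5 / c0, phi ** 5)   # the estimate is roughly the exact 0.8**5
```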
In quantum field theory, the correlation function of two operators - just like the one you wrote - is known as the propagator, and it is the mathematical expression that replaces all internal lines of Feynman diagrams. It tells you the probability amplitude that the corresponding particle propagates from the point $x+x'$ to the point $x'$. It is usually nonzero inside the light cone only and depends on the difference of the coordinates only.
An exception to this is the Feynman Propagator in QED. It is nonzero outside the light cone as well, but invokes anti-particles, which cancel this nonzero contribution outside the light cone, and preserve causality.
Correlation functions involving an arbitrary positive number of operators are known as the Green's functions or $n$-point functions if a product of $n$ quantities is in between the brackets. In some sense, the $n$-point functions know everything about the calculable dynamical quantities describing the physical system. The fact that everything can be expanded into correlation functions is a generalization of the Taylor expansions to the case of infinitely many variables.
In particular, the scattering amplitude for $n$ external particles (the total number, including incoming and outgoing ones) may be calculated from the $n$-point functions. The Feynman diagrams mentioned previously are a method to do this calculation systematically: a complicated correlator may be rewritten into a function of the 2-point functions, the propagators, contracted with the interaction vertices.
There are many words to physically describe a correlation function in various contexts - such as the response functions etc. The idea is that you insert an impurity or a signal into $x'$, that's your $g(x')$, and you study how much the field $f(x+x')$ at point $x+x'$ is affected by the impurity $g(x')$. | {
"domain": "physics.stackexchange",
"id": 28419,
"tags": "field-theory, greens-functions, correlation-functions, propagator"
} |
Is there benefit to extra stabilizers in a rotated surface code? | Question: I'm reading Horsman et al. "Surface code quantum computing by lattice surgery" and I'm wondering about the rotated surface code.
Consider Figure 13:
This is supposed to have distance 5. But in (c), an $X$ error on the top left qubit and the qubit beneath it would be undetectable (brown plaquettes are $X$-stabilizer measurements). If there were a stabilizer measurement on every outside edge, this problem wouldn't occur. What am I misunderstanding?
Answer:
an X error on the top left qubit and the qubit beneath it would be undetectable
That error would be detected by the flipping of the four-body $Z$ stabilizer adjacent to the lower qubit you operated on:
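The detection claim can be checked with the binary-symplectic commutation rule (a sketch of mine; the qubit indices below are hypothetical labels, not the paper's numbering): an error flips a stabilizer measurement precisely when the two Pauli strings anticommute, i.e. when their X and Z supports overlap an odd number of times.

```python
# Pauli strings represented as a pair of qubit-index sets: (X support, Z support).
def commute(p, q):
    """Two Pauli strings commute iff their X/Z supports overlap evenly."""
    px, pz = p
    qx, qz = q
    return (len(px & qz) + len(pz & qx)) % 2 == 0

z_plaquette = (set(), {1, 2, 3, 4})   # a four-body Z stabilizer

x_error_odd = ({0, 1}, set())    # X pair touching the plaquette on one qubit
x_error_even = ({1, 2}, set())   # X pair touching it on two qubits

print(commute(x_error_odd, z_plaquette))   # False: stabilizer flips, detected
print(commute(x_error_even, z_plaquette))  # True: this stabilizer sees nothing
```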
If you or I got X and Z mixed up, then the error is undetectable, but it corresponds to a topologically trivial cycle from a boundary to itself, so it has no effect on the logical qubit: | {
"domain": "quantumcomputing.stackexchange",
"id": 2921,
"tags": "error-correction, surface-code"
} |
How to determine from the formula that there is a phenyl group present in the molecule? | Question: Why is the organic compound $\ce{C6H5-CH=CH-CH2Cl}$ named 3-chloro-1-phenylprop-1-ene and not 9-chloronon-5,7-dien-1,3-yne even though there is no specification whatsoever that there is a phenyl group in the organic molecule?
$$\ce{C6H5-CH=CH-CH2Cl}$$
can also be written as
$$\ce{CH#C-C#C-CH=CH-CH=CH-CH2-Cl}$$
and the name for this organic compound would be 9-chloronon-5,7-dien-1,3-yne.
How do we surely know that there exists a phenyl group in the molecule?
Also why is phenyl treated as a substituent and cannot be considered in the longest chain?
Answer: It's a matter of convention.
$\ce{C6H5}$ is conventionally used to represent phenyl because it is a common moiety. Your long, highly unsaturated, chain is also $\ce{C6H5}$, but it wouldn't conventionally be written like that without expansion, unless you're setting out to confuse someone. | {
"domain": "chemistry.stackexchange",
"id": 6118,
"tags": "organic-chemistry, nomenclature, molecular-structure"
} |
Efficient way to find intersections | Question: I have an Euclidean graph: each vertex is a point on the 2D plane, so the weight of each edge is the Euclidean distance between the vertices.
I am randomly creating a path thru all the vertices and I want to know if there is any efficient way to find all the intersections on my path.
Answer: A path of length $n$ consists of $n$ line segments in the plane. You want to find all intersections between these line segments. This is a standard problem that has been studied in depth in the computational geometry literature.
A simple algorithm is the following: for each pair of line segments, check whether they intersect (using a standard geometric algorithm). The running time of this algorithm is $O(n^2)$. In terms of worst-case complexity, this is the best we can hope for, as in the worst case, there can be $\Theta(n^2)$ intersections, so obviously it'll take at least $\Theta(n^2)$ time to output them all.
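The pairwise check is easy to implement with the standard orientation (cross-product) test; here is a self-contained sketch (the function names are my own, not from any particular library):

```python
def orient(p, q, r):
    # Sign of the cross product (q-p) x (r-p): >0 left turn, <0 right turn, 0 collinear.
    v = (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])
    return (v > 0) - (v < 0)

def segments_intersect(a, b, c, d):
    # Proper or improper intersection of segment ab with segment cd.
    o1, o2 = orient(a, b, c), orient(a, b, d)
    o3, o4 = orient(c, d, a), orient(c, d, b)
    if o1 != o2 and o3 != o4:
        return True
    def on_seg(p, q, r):
        # r is collinear with pq: check that it lies inside the bounding box.
        return (min(p[0], q[0]) <= r[0] <= max(p[0], q[0])
                and min(p[1], q[1]) <= r[1] <= max(p[1], q[1]))
    return ((o1 == 0 and on_seg(a, b, c)) or (o2 == 0 and on_seg(a, b, d))
            or (o3 == 0 and on_seg(c, d, a)) or (o4 == 0 and on_seg(c, d, b)))

def path_intersections(points):
    # O(n^2): test every pair of non-adjacent edges of an open path.
    segs = list(zip(points, points[1:]))
    hits = []
    for i in range(len(segs)):
        for j in range(i + 2, len(segs)):  # adjacent edges share a vertex, skip them
            if segments_intersect(*segs[i], *segs[j]):
                hits.append((i, j))
    return hits
```

Adjacent segments of the path always share an endpoint, so they are deliberately skipped; for a closed tour the first/last pair would need the same treatment.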
However, in many cases, we have an arrangement of line segments where there are only a few intersections, say at most $O(n)$ intersections. In this case, there are algorithms whose running time is $O(n \lg n)$. See, e.g., the sweep line technique; there are others as well. In general, if there are $k$ intersections, the total running time for the sweep line algorithm to find all $k$ intersections is $O((n+k) \lg n)$ time. | {
"domain": "cs.stackexchange",
"id": 1909,
"tags": "algorithms, graphs, graph-traversal, weighted-graphs"
} |
Rate of catalytic hydrogenation of alkenes | Question: What does the rate of catalytic hydrogenation of alkenes depend upon? What'd be the increasing order of rate towards catalytic hydrogenation of the following alkenes, A (2-methylpropene), B (cis-but-2-ene), and C (trans-but-2-ene)?
I suppose this has something to do with sterics and orientation - as catalytic hydrogenation is a surface phenomenon (occurs on surface of metal catalyst with adsorbed hydrogen), but I'm not sure about the above order in particular (I don't think a clear comparison can be made without looking at data, as the alkenes are quite similar), and the factors on which rate of hydrogenation of hydrocarbons depends in general.
Answer: Catalytic hydrogenation is more or less a surface phenomenon so sterics play the deciding role in the rate of hydrogenation.
From Wikipedia:
Here's a Newman projection of the three alkenes.
A = cis-but-2-ene, B = trans-but-2-ene, C = 2-methylpropene
From here it is easy to see that cis-but-2-ene has the least hindered approach towards the surface of the catalyst, as both methyl groups can be made to face away from the surface.
The rates of hydrogenation of trans-but-2-ene and 2-methylpropene are decided by which of the two is more strained.
2-methylpropene is more strained than trans-but-2-ene.
So, we can now establish the order.
$$\text{A > C > B}$$
References
Catalytic Hydrogenation: A Core Technology in Synthesis; Ryoji Noyori
Wikipedia article on Hydrogenation | {
"domain": "chemistry.stackexchange",
"id": 10511,
"tags": "organic-chemistry, reaction-mechanism, hydrocarbons, catalysis"
} |
Computing modular exponent given order | Question: I want to compute $g^{mn}$ mod $n^2$ where $n=pq$ and I know that $g$ has order $kn$ mod $n^2$ where $m<k$. Is there any clever way of doing it utilizing the order? I have tried other methods of computing $g^{mn}$ directly.
My question stems from an implementation of the Paillier cryptosystem that I'm working on. Right now, I'm following the original paper "Public-key cryptosystems based on composite degree residuosity classes", scheme 3 with the recommendation of generating $g$ using DSA. I have looked at different methods of computing modular exponentiation directly such as square and multiply, k-ary, window sliding, etc. I want an algorithm for computing $g^{mn} \bmod n^2$ that does not rely upon knowledge of the factorization of $n$.
Answer: There's a small chance you might be able to get a speed-up by using properties of arithmetic modulo $n^2$, depending on how $g$ was chosen.
Suppose $g=a(1+bn)$. Then note that, by the binomial theorem,
$$g^x = a^x (1+bn)^x = a^x \sum_i {x \choose i} (bn)^i = a^x (1 + bx n + n^2 \times \text{stuff})$$
so we see that
$$g^x = a^x (1 + bx n) \pmod{n^2}$$
Thus if you can compute $a^x \bmod n^2$ efficiently, you can compute $g^x \bmod n^2$ with just a tiny bit more work. Here you want to take $x=mn$. The simplification is that now $g$ can always be expressed in the form $g=a(1+bn)$ where $0 \le a < n$.
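A quick numerical check of this identity, with tiny made-up parameters (a real Paillier instance would of course use a large $n = pq$):

```python
# Toy check of g^x = a^x (1 + b*x*n) (mod n^2) for g = a(1 + bn).
# All parameters below are illustrative only, far too small to be secure.
n = 15                      # n = pq with p = 3, q = 5
n2 = n * n
a, b, x = 7, 2, 4
g = a * (1 + b * n) % n2    # g = 217

lhs = pow(g, x, n2)                          # direct exponentiation
rhs = pow(a, x, n2) * (1 + b * x * n) % n2   # via the binomial identity
# lhs == rhs: one full exponentiation of g is traded for an exponentiation
# of a plus a single multiplication, which is where the speed-up comes from.
```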
What can we say about $a$? Well, for one thing we know that $g^{kn}=1 \pmod{n^2}$, so it follows that $a^{kn}=1 \pmod{n^2}$. Thus, the order of $a$ modulo $n^2$ has to be a divisor of $kn$.
If the order of $a$ is exactly $kn$, we haven't gotten anywhere.
But if the order of $a$ is a strict divisor of $kn$, we can get some speedup. We first reduce $mn$ modulo the order of $a$, then compute $a^{mn} \bmod n^2$, and finally use the relationship above to get the value of $g^{mn} \bmod n^2$. (For instance, if you get lucky, it is possible that the order of $a$ might be $k$, and then you'll get a non-trivial speedup.)
Whether this yields a speedup in practice will depend on the value of $g$ and how $g$ was chosen; since that wasn't specified in your question, it's not possible to evaluate whether this is likely to help in practice, but it might. | {
"domain": "cs.stackexchange",
"id": 4011,
"tags": "algorithms, cryptography, modular-arithmetic"
} |
Derivative of angular momentum of rigid body | Question: I found this equation that describes the change in angular momentum $\vec{L}$ of a rigid body rotating about a fixed point $O$. $I_o$ is the moment of inertia of the body with respect to the axis of rotation and I marked with $⊥$ the component of the angular momentum perpendicular to the axis of rotation.
$$\vec{L_o} = I_o \vec{ω} + \vec{L_o}_⊥ \implies
\frac{d\vec{L_o}}{dt}= I_o \frac{d\vec{ω}}{dt}+ \frac{1}{ω}
\frac{dω}{dt} \vec{L_o}_⊥ + \vec{ω} \times \vec{L_o}_⊥ =
\vec{\tau}_{ext}$$
In the derivation the following was used: $ \vec{L_o}_⊥ = \omega A \hat{L_o}_⊥ $, with $A$ some constant depending on the distribution of mass.
How does the middle term $\frac{1}{ω} \frac{dω}{dt} \vec{L_o}_⊥$ come out and what does it means?
It is a part of the derivative of $\vec{L_o}_⊥$ but I don't understand how and why it is there.
Answer: As you mentioned, $\vec {L_o}_⊥$ is proportional to $\omega$. So the middle term means $\vec {L_o}_⊥$ varies by varying angular velocity.
You can also derive this formula directly:
$$\frac{d \vec {L_o}_⊥ }{dt}= \frac {d\omega}{dt}A \hat{L_o}_⊥+ \omega A \frac{d\hat{L_o}_⊥}{dt}= \frac{1}{\omega}
\frac{d \omega }{dt} \vec{L_o}_⊥ + \vec{ \omega } \times \vec{L_o}_⊥ $$ | {
"domain": "physics.stackexchange",
"id": 30088,
"tags": "angular-momentum, rotational-dynamics, rotation, rigid-body-dynamics"
} |
Free Expansion in Szilard Engine? | Question: A crucial step in the definition of Szilard Engine is computing the work done by the single molecule in moving the piston from the middle to one end (left or right, depending on the location of the molecule). The textbook treatment calculates the work as resulting from a reversible isothermal expansion. I don't understand this step.
The second half of the engine (not containing the molecule) is vacuum, so shouldn't the expansion be a free expansion? I'm thinking on the similar lines as this answer, but the gas in one half of the engine contains just one molecule. If the expansion is indeed free, then the molecule doesn't lose any energy in pushing the piston, and no heat flows into the engine from the heat bath, and there is no decrease in entropy anywhere.
What am I missing here?
Answer: It confused me too when I first saw it.
The thing is, if you want to extract any work, as you noted, pushing against the vacuum alone won't be very useful (you won't extract anything), so you have to add a "system" that would permit you to extract work from the molecule (otherwise it wouldn't be called an engine in the first place!). And this is not really clearly emphasized in a lot of papers.
Here is a good illustration (taken here)
The important part here is the mass attached to the pulley and linked to the piston. This way, the single molecule has to do work against gravity, and since we supposed that the transformation is reversible, we say that $p_{molecule}=p_{pulley + gravity}$. And indeed, to go from $V/2$ to $V$, you have to extract energy from the particle (since the mass has moved upward in the gravitational field and thus gained energy).
In most of the figures we can find online, the engine "lacks" anything that can permit the single molecule to do work, but this is essential! | {
"domain": "physics.stackexchange",
"id": 80711,
"tags": "thermodynamics, work, adiabatic, thought-experiment"
} |
When does the product automaton of two NFAs A and B not decide L(A) U L(B)? | Question: We are given the following task:
Let $\mathcal{A} = {}(Q_A,\Sigma, \delta_A, s_A, F_A)$ and $\mathcal{B} = {}(Q_B,\Sigma, \delta_B, s_B, F_B)$ be two NFAs and let their product automaton be defined as:
$\mathcal{A} \times \mathcal{B} = {}(Q_A \times Q_B,\Sigma, \delta, (s_A,s_B), F)$ with
$\delta = {}\{ ((p,q),\sigma,(p',q')) \mid (p,\sigma,p') \in \delta_A, (q,\sigma,q') \in \delta_B \}$ and
$F= {}(Q_A \times F_B) \space \cup \space (F_A \times Q_B)$
(So the accepting (combined) states are the ones where at least one of the two individual states of $\mathcal{A}$ or $\mathcal{B}$ is accepting.)
Give an example for two NFAs $\mathcal{A}$ and $\mathcal{B}$ so that their product automaton $\mathcal{A} \times \mathcal{B}$ does not decide the language $L(\mathcal{A}) \space \cup \space L(\mathcal{B})$.
Now the one thing that is by far confusing me the most (and what I want to focus on in this question) is the following tip that is given as well:
Tip: There is an example in which $\mathcal{A}$ only has one state.
I can't see how that could be possible, with the following reasoning:
If $\mathcal{A}$ only has one state - let's call it $q$ - this means two things:
$q$ is either accepting or it is not accepting
All of the transitions of $\mathcal{A}$ start at and directly lead back to $q$.
Based on 1. we have two cases.
Let's first consider the case that $q$ is accepting ($q \in F_\mathcal{A}$). Because of 2. it follows that $\mathcal{A}$ accepts every word $w \in \Sigma ^*$ (so $L(\mathcal{A}) = \{w \in \Sigma^*\}$).
Let $\mathcal{B}$ be an arbitrary NFA. Based on the definition of $F$ and the fact that $q$ is accepting it is clear that all of the states of $\mathcal{A} \times \mathcal{B}$ will be accepting as well, which implies that the product automaton also accepts every $w \in \Sigma ^*$.
This means that $\space L(\mathcal{A} \times \mathcal{B}) = \{w \in \Sigma^*\} = L(\mathcal{A}) \space \cup \space L(\mathcal{B})$.
Now let $q$ be non-accepting ($q \notin F_\mathcal{A})$. Again, based on the definition of $F$ we can see that the accepting states of $\mathcal{A} \times \mathcal{B}$ are going to correspond exactly to the accepting states of $\mathcal{B}$.
Point 2 implies that the transitions of $\mathcal{A}$ will "not have any impact on the transitions of $\mathcal{A} \times \mathcal{B}$", by which I mean that the product automaton is going to have the same transitions as $\mathcal{B}$ and therefore be isomorphic to $\mathcal{B}$, which again results in
$L(\mathcal{A} \times \mathcal{B}) = L(\mathcal{B}) = L(\mathcal{B}) \space \cup \space \varnothing = L(\mathcal{B}) \space \cup \space L(\mathcal{A})$
(since $\mathcal{A}$ doesn't accept any input, which means $L(\mathcal{A}) = \varnothing$)
This proof is trying to show that if $\mathcal{A}$ only has one state, then $\mathcal{A} \times \mathcal{B}$ will always decide $L(\mathcal{B}) \space \cup \space L(\mathcal{A})$.
Obviously, there has to be a logic fault in my thinking and in the proof above somewhere, which I can't seem to be able to figure out, so I'd really appreciate if anyone could point out what I'm missing.
Answer: The culprit is the fact that, in the presence of nondeterminism, an automaton need not have a transition out of a state for every input symbol.
Consider two NFA over the input alphabet $\{a,b\}$:
$$N_1 = (\{q_1\},\{a,b\},q_1,\delta_1,\{q_1\})$$
where $\delta_{1}(q_1,a) = \{ q_1 \}$ and $\delta_{1}(q_1,b) = \emptyset$
and
$$N_2 = (\{q_2\},\{a,b\},q_2,\delta_2,\{q_2\})$$
where $\delta_{2}(q_2,a) = \emptyset$ and $\delta_{2}(q_2,b) = \{ q_2 \}$
– that is, each of these automata has a single state, which is accepting, and a single transition, which is a loop on $a$ or $b$, respectively.
Clearly, $L(N_1)= \{ a^n \mid n \geq 0 \}$ and $L(N_2)= \{ b^n \mid n \geq 0 \}$. But the product construction will give us an automaton
$$N_{12} = (\{ (q_1,q_2) \}, \{ a,b \}, (q_1,q_2), \delta_{12}, \{ (q_1,q_2)\})$$
where $\delta_{12}((q_1,q_2),a) = \emptyset$ and $\delta_{12}((q_1,q_2),b) = \emptyset$, since there is no symbol $x$ such that both $N_1$ and $N_2$ admit an $x$-transition from their sole state.
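The failure is easy to reproduce mechanically; here is a toy sketch of the construction (my own minimal encoding, not a library API):

```python
# Toy NFA encoding: (states, delta, start, accepting), where delta maps
# (state, symbol) -> set of successor states (a missing key means "no move").
def product(nfa1, nfa2, sigma):
    q1, d1, s1, f1 = nfa1
    q2, d2, s2, f2 = nfa2
    delta = {}
    for p in q1:
        for q in q2:
            for a in sigma:
                succ = {(pp, qq)
                        for pp in d1.get((p, a), set())
                        for qq in d2.get((q, a), set())}
                if succ:
                    delta[((p, q), a)] = succ
    states = {(p, q) for p in q1 for q in q2}
    accepting = {(p, q) for p in q1 for q in q2 if p in f1 or q in f2}
    return (states, delta, (s1, s2), accepting)

def accepts(nfa, word):
    states, delta, start, accepting = nfa
    current = {start}
    for a in word:
        current = set().union(*(delta.get((s, a), set()) for s in current))
    return bool(current & accepting)

n1 = ({"q1"}, {("q1", "a"): {"q1"}}, "q1", {"q1"})   # accepts a^n
n2 = ({"q2"}, {("q2", "b"): {"q2"}}, "q2", {"q2"})   # accepts b^n
n12 = product(n1, n2, {"a", "b"})
# n12 ends up with an empty transition relation: there is no symbol on which
# both component automata can move, so it accepts only the empty word.
```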
Therefore $L(N_{12}) = \{ \varepsilon \} \neq L(N_1) \cup L(N_2)$. | {
"domain": "cs.stackexchange",
"id": 8953,
"tags": "regular-languages, automata, finite-automata"
} |
Ampere's Circuital law proof for closed circular loop | Question: How do we prove that the magnetic field at the centre of a circular loop is $\frac{\mu_0I}{2r}$ using Ampere's Circuital law?
I proved it using the Biot-Savart law, but I am getting $\mu_0I/2\pi r$ instead of $\frac{\mu_0I}{2r}$ when using Ampere's law.
Please help me see where I am going wrong.
Answer: You probably misapplied Ampere's law. This law is usually used to find the magnetic field only in special cases where, by symmetry, the contour integral can be expressed in terms of a single field value.
Magnetic field of a circular current loop is not so simple and Ampere's law cannot be easily used to find it. In such cases, the method of choice is to use the Biot-Savart law (integrate the contributions to the field due to elements of the circuit) or find vector potential as a function of position and then derive magnetic field from it. | {
"domain": "physics.stackexchange",
"id": 53538,
"tags": "electric-current"
} |
Validity of Ohm's Law due to Induced Electric Fields | Question: If we have a conducting loop of resistance R in a region of varying external magnetic field, how can we determine the current through the loop?
First, if we consider Ohm's Law, then we get that
$$\epsilon = -\frac{d\phi}{dt} = IR$$
However, if we treat it like an inductor, we get
$$\epsilon = -\frac{d\phi}{dt} = -L\frac{dI}{dt}$$
I am confused as to which is true and why. Is Ohm's Law even valid here? I found several videos and books that use Ohm's Law which really confused me as I thought Ohm's Law wasn't valid in the case of varying electric fields as it only holds in a condition of steady state.
Answer: Your first equation, which I shall write as
$$\mathscr E=-\frac{d\Phi}{dt},$$
is correct, as long as we interpret $\Phi$ as the total magnetic flux through the loop. Thus
$$\Phi=\Phi_{ext}+LI.$$
As John Rennie has remarked, the self-inductance term (the second term on the right) will no doubt be negligible compared with the external field (the first term on the right). Whether or not this is the case can be assessed by substituting for $\Phi$ from the second equation into the first, giving
$$\mathscr E=-\frac{d\Phi_{ext}}{dt}-L\frac{d I}{dt}\ \ \ \ \ \ \text{that is} \ \ \ \ \ \mathscr E=-\frac{d\Phi_{ext}}{dt}-\frac LR\frac{d \mathscr E}{dt}$$
A copper ring of diameter 50 mm and wire radius 0.5 mm has an inductance of $2.94\times 10^{-7}$ H and a resistance of $3.44\times 10^{-3}\Omega$, so you can estimate the rate of change of emf at which the inductance term would be significant. | {
"domain": "physics.stackexchange",
"id": 96934,
"tags": "electromagnetism, electric-current, electrical-resistance, electromagnetic-induction"
} |
model selection in clustering | Question: I am working on a mall customer segmentation dataset (5 features, 200 rows) using clustering. This dataset does not have any ground truth labels. I had a few doubts regarding clustering:
Can I use model selection in clustering using the silhouette score? - Since my dataset does not have any ground truth labels, I read on the sklearn documentation that you can use Silhouette score to evaluate the performance of the model. Can I use different clustering techniques (like K Means, DBSCAN, Mean shift, etc.) and select the model with the highest silhouette score? The idea is sort of similar to how we do model selection in supervised learning except in the latter we use cross validation.
How do I detect overfitting in clustering? Since the dataset has no labels, I cannot think of a way to identify if the model is overfitting the data.
How do I plot the final clusters when my dataset has more than 2 dimensions? I have seen a lot of visualizations around clustering (like the one below):
Should I use PCA to reduce the features to 2 and then plot the clusters? or is there another way to do this?
Answer: To answer your initial question, yes you can use silhouette score with different clustering methods. You could also use the Davies-Bouldin Index or the Dunn Index.
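For intuition, the silhouette coefficient is simple enough to compute by hand; here is a library-free 1-D sketch (in practice you would just call sklearn.metrics.silhouette_score; this toy version assumes every cluster has at least two points):

```python
# Mean silhouette coefficient: s(i) = (b - a) / max(a, b), where a is the mean
# distance from point i to its own cluster and b is the smallest mean distance
# to any other cluster. Toy 1-D version; use Euclidean distances in general.
def silhouette(points, labels):
    def mean_dist(p, members):
        return sum(abs(p - q) for q in members) / len(members)
    scores = []
    for i, (p, lab) in enumerate(zip(points, labels)):
        same = [q for j, (q, l) in enumerate(zip(points, labels))
                if l == lab and j != i]
        a = mean_dist(p, same)                                   # cohesion
        b = min(mean_dist(p, [q for q, l in zip(points, labels) if l == other])
                for other in set(labels) if other != lab)        # separation
        scores.append((b - a) / max(a, b))
    return sum(scores) / len(scores)

# Two well-separated clusters give a score close to 1.
score = silhouette([0.0, 0.1, 0.2, 10.0, 10.1, 10.2], [0, 0, 0, 1, 1, 1])
```

A model (or number of clusters) with a higher mean score separates the data more cleanly, which is exactly what you would compare across K-Means, DBSCAN, etc.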
Regarding over-fitting (this is my personal suggestion), you could fit the model n times on different samples of the same data to see whether the clustering stays the same even though the values change. Short example: if you have to cluster 5 apples and 6 oranges, the clusters should be the same for 10 apples and 12 oranges. You can find a bit more detail on this here: https://datascience.stackexchange.com/a/20292/103857
For your third query:
Calculate distances between data points, as appropriate to your problem.
Then plot your data points in two dimensions instead of fifteen, preserving distances as far as possible. This is probably the key aspect of your question. Read up on multidimensional scaling (MDS) for this. Finally, color your points according to cluster membership.
(source for third query: https://stats.stackexchange.com/a/173823)
Regarding PCA, it's subjective. PCA works well with high correlation. If your dimensions are like apples and oranges, then you're directly affecting your model's performance, so do keep that in check. A bit of EDA would help before you dive into that. | {
"domain": "datascience.stackexchange",
"id": 8187,
"tags": "clustering, model-evaluations, overfitting, model-selection"
} |
Only sea water appears blue in color, why this is not happening in river water? | Question: Is the salt in the water the reason for scattering sunlight into blue?
Answer: The reason is not salt water. Large masses of water seem to be blue, both oceans and swimming pools.
Water absorbs most of the red wavelengths. Most materials get their color from absorption and re-emission by their surface-level valence electrons. Water is not like that, because most of its color is caused by vibration of the molecules. Because the photons get inelastically scattered (instead of just being absorbed), part of their energy is transformed into the vibrational energy of the molecules.
Now when a photon interacts with an atom, three things can happen:
elastic scattering, Rayleigh scattering, when the photon keeps its energy and changes the angle
inelastic scattering, the photon gives part of its energy to the atom and changes the angle
absorption, the photon gives all its energy to the atom, and the absorbing electron moves to a higher energy level.
In the case of large masses of water, three things cause the color:
elastic scattering, Rayleigh scattering, on the surface of the water, sunlight is reflected and appears to be blue from certain angles
inelastic scattering, where part of the photons' energy transfers into the vibrational energy of the molecules. This means that, because of hydrogen bonding, red-wavelength photons will be shifted toward blue wavelengths.
water can absorb and re-emit light too, but because of the high ratio of refraction and reflection in water, the ratio of photons absorbed is low.
This gives a body of water when looking through it a blue color. But then comes the question, why is a body of water when looking at it (not through) blue?
Light scattering by suspended matter is required for the color in this case, and the blue light needs to return to the surface to be visible. Such suspended matter can also shift the scattered light to green, as often seen in rivers. Rivers appear green instead of blue because those three factors have different effects on different types of bodies of water.
Pools usually appear more colorless, but still blue, and not green. This is because the effects come in this order of importance:
reflection of the bottom of the pool is dominant
reflected light from the surface (blue) is less dominant, and vibrational shifts are less dominant than the color of the bottom
reflected light from suspended matter (green) is less dominant
angle of observation less dominant
Oceans appear bluer because:
reflected light from the surface and vibrational shifts are dominant
angle of observation (and depth) is important because oceans look blue from far away, when deep enough (so that the bottom is not reflecting)
reflected light from suspended matter is not so dominant because oceans tend to be clear
Rivers, shallow lakes:
Reflection from suspended matter (green) is important because shallow lakes and rivers tend to have more sand and dust than oceans and pools
Reflection of the bottom is important because they are shallow, and the bottom is not blue (more gray like sand)
angle of reflection is important because lakes from far away and certain angles appear bluer when the bottom is not visible and surface reflection becomes dominant
Vibrational shift become less dominant because the suspended matter is more dominant and the water is not deep enough | {
"domain": "physics.stackexchange",
"id": 51180,
"tags": "visible-light, water, material-science, scattering, atmospheric-science"
} |
What is the equation for the electric field made by a capacitor? | Question: In a capacitor, how can I calculate its electric field?
Answer: The electric field is geometry dependent. Depending on the geometry of interest, different techniques may be used. But in general one would solve Laplace's equation $\nabla^2\phi=0$ (Poisson's equation in the charge-free region between the conductors), where $\phi$ is the potential, using the boundary conditions on the capacitor plates. Then the electric field is $\vec E=-\nabla\phi$.
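As a sanity check, a crude 1-D relaxation between two plates (with made-up parameters) recovers a uniform interior field:

```python
# 1-D Jacobi relaxation of Laplace's equation between two plates held at
# 0 V and V volts, a distance d apart. Parameters are illustrative only.
V, d, N = 10.0, 1.0, 51
h = d / (N - 1)
phi = [0.0] * (N - 1) + [V]          # boundary conditions: phi(0) = 0, phi(d) = V
for _ in range(20000):               # plenty of sweeps for this small grid
    phi = [phi[0]] + [(phi[i - 1] + phi[i + 1]) / 2
                      for i in range(1, N - 1)] + [phi[-1]]
# E = -dphi/dx, via central differences at the interior points.
E = [-(phi[i + 1] - phi[i - 1]) / (2 * h) for i in range(1, N - 1)]
# The converged potential is linear, so the field is uniform with magnitude V/d.
```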
For the simple case of large (infinite) parallel plates this gives $E=\frac{V}{d}$ where $V$ is the potential difference between the plates and $d$ is the perpendicular plate separation. | {
"domain": "physics.stackexchange",
"id": 80696,
"tags": "electrostatics, electric-fields, capacitance"
} |
Data structure for querying maximum of a rolling total | Question: Let $A = \{a_1, \ldots, a_n\}$ be a set of numbers and $w$ be some number. Define $p_w^{A}: \mathbb{R} \rightarrow \mathbb{N}$ as $$p_w^{A}(r) = |\{i \in \{1, \ldots, n\} | r \leq a_i \leq r + w\}|.$$ Note that $p_w^{A}(\mathbb{R})$, the image of the whole $\mathbb{R}$ under $p_w^A$, is finite. Define $MRT_w(A) = \max p_w^{A}(\mathbb{R})$.
Let's fix a number $w$. The task is to implement a data structure which can store a set of values and has the following interface:
Add value.
Remove value.
Compute $MRT_w(A)$, where $A$ is the set of all values which are currently stored in a data structure.
Can you get an $O(\log n)$ complexity for all 3 operations?
Answer: Consider a data structure (call it $D$), which can store pairs of $(key, value)$ (where $key$ and $value$ are some numbers) and has the following interface:
Put $(key, value)$ pair to a data structure (if such key is already in a data structure then ignore this operation). We will refer to this operation as $D.put(key, value)$.
Remove $key$ from a data structure. Denotation: $D.remove(key)$.
Given a pair of numbers $(l, r)$ and number $v$, consider all keys, which are currently stored in a data structure such that $key \in [l, r]$. Add number $v$ to all values, which are paired with those keys. Denotation: $D.add(l, r, v)$.
Given a pair of numbers $(l, r)$, consider all keys, which are currently stored in a data structure such that $key \in [l, r]$. Return a maximum of all values, which are paired with those keys. Denotation: $D.get\_max(l, r)$.
We can implement such a data structure so that all operations will have a worst-case time complexity $O(\log n)$. For a detailed discussion about this data structure you can refer to this question.
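As an aside, for a static set $MRT_w(A)$ can also be computed offline with a sort and a two-pointer sweep; this makes a useful correctness check for the dynamic structure (a sketch, separate from the structure described in this answer):

```python
def mrt(values, w):
    # MRT_w(A) for a static set: sort, then for each value v count how many
    # values fall in the window [v - w, v]. Some optimal window can be slid so
    # that its right endpoint sits on a data point, so this maximum is exact.
    vs = sorted(values)
    best = left = 0
    for right, v in enumerate(vs):
        while vs[left] < v - w:      # shrink until the window has width <= w
            left += 1
        best = max(best, right - left + 1)
    return best
```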
Now let's describe how to implement the data structure from the current question. First of all, if the data structure currently contains numbers $A = \{a_1, \ldots, a_n\}$, then to compute $MRT_w(A)$ it is sufficient to consider the numbers $p_w^A(a_1), \ldots, p_w^A(a_n)$. So let's create a data structure $D$ (from the first part of the answer) and store there pairs $(a_i, p_w^A(a_i))$. There are 2 main questions that should be answered:
How to compute $p_w^A(a)$ when we try to put a new number $a$ into our data structure?
How to update values in $D$ during put and remove operations?
Data structure $D$ is implemented on the basis of a binary search tree. So when we put a new number $a$, we can easily compute the number of keys in the data structure which lie in the segment $[a-w, a+w]$, and from this number we obtain $p_w^A(a)$. Moreover, the keys whose values are affected by the operation $D.put(a, p_w^A(a))$ form a contiguous interval: they lie in the segment $[a-w,a]$. So before we actually call $D.put(a, p_w^A(a))$, we should call $D.add(a-w, a, 1)$.
The same idea works when we remove some key $a$ from our data structure. It affects only a contiguous interval of keys, which lie in $[a-w,a]$, and it decrements their values. So before we call $D.remove(a)$, we should call $D.add(a-w, a, -1)$.
Computation of $MRT_w(A)$ is done by calling $D.get\_max(min, max)$, where $min$ and $max$ are minimal and maximal keys, which are currently stored in a data structure. | {
"domain": "cs.stackexchange",
"id": 21178,
"tags": "algorithms, data-structures, arrays"
} |
Java String search and replace | Question: I would like to search a String, and for a certain word, I would like to change only the first 3 letters
For example:
This is a java institute which insures that you are in that institute. ins!
I want to change the first 3 letter of institute (ins) to JSE, so the result will be:
This is a java JSEtitute which insures that you are in that JSEtitute. ins!
I have the following, but it is NOT doing the job perfectly:
public class ExistanceAndReplace {
public static void main(String[] args) {
String s = "This is a java institute of insurance and insu.";
if (s.contains("institute")) {
String s1 = s.replaceFirst("ins", "JSS");
System.out.println(s1);
}else{
System.out.println("not found!");
}
}
}
Any suggestions?
Answer: This code of yours is not working, despite your assurances that it is. It replaces just the first occurrence of 'ins', and that should be obvious since you call the replaceFirst method. The example sentence you use in the code is not the same as the sentence in your description. The one in the description will fail your code:
This is a java institute which insures that you are in that institute. ins!
Your code will produce:
This is a java JSStitute which insures that you are in that institute. ins!
and it should produce:
This is a java JSEtitute which insures that you are in that JSEtitute. ins!
There are three problems
the specification says to replace the ins in institute with JSE; your code, though, replaces just the first ins, not them all
you replace the ins with JSS, and not JSE.
you don't check for the beginning of the word... you will find ins in "dustbins".
These small details are things you have to fix, and show a low attention to detail.
Of course, a more advanced solution would be:
s.replaceAll("\\bins(?=titute\\b)", "JSE");
You can see that in action here in Ideone | {
"domain": "codereview.stackexchange",
"id": 12499,
"tags": "java, strings, search"
} |
How does a qubit reset affect the probabilities of the other qubits in Qiskit? | Question: I'm looking at a 2 qubit system, with a reset on the second qubit (qubit=1).
I would have expected that after circ.reset(qubit=1), the distribution over qubit 0 equals state.probabilities(qargs=[0]).
But that's not the case.
For example, a maximally entangled state:
qc = QuantumCircuit(2)
qc.h(0)
qc.cx(0,1)
i_state = Statevector.from_label('00')
t_state = i_state.evolve(qc)
plot_histogram(data=t_state.probabilities_dict())
plot_histogram(data=t_state.probabilities_dict(qargs=[0]))
results in:
50% $|00\rangle$, 50% $|11\rangle$ and 50% $|0\rangle$, 50% $|1\rangle$
qc.reset(qubit=1)
results sometimes in: 100% $|00\rangle$ and 100% $|0\rangle$
OR sometimes in: 100% $|01\rangle$ and 100% $|1\rangle$.
What does .reset(qubit=1) actually do?
Does it sample from the probability of qubit=1 and collapse the other, entangled qubit=0 depending on how qubit=1 was sampled?
Answer: According to IBM Research Blog[1]:
Internally, these reset instructions are composed of a mid-circuit measurement followed by an x-gate conditioned on the outcome of the measurement.
So, your circuit is equivalent to the following circuit.
This should explain to you the result you get:
If measurement result is $1$ the state will collapse to $|11\rangle$ and $X$ gate will be applied resulting in $|01\rangle$
If measurement result is $0$ the state will collapse to $|00\rangle$ and $X$ gate will not be applied. Final state remains $|00\rangle$
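The two branches can be reproduced with a toy statevector calculation in plain Python (this mimics the behaviour, it is not the Qiskit API; basis states are indexed as (q1, q0)):

```python
from math import sqrt

# Bell state (|00> + |11>)/sqrt(2), amplitudes keyed by (q1, q0).
bell = {(0, 0): 1 / sqrt(2), (1, 1): 1 / sqrt(2)}

def reset_q1(state, outcome):
    # One branch of reset-as-measure-then-flip: project qubit 1 onto the given
    # mid-circuit measurement outcome, renormalise, then set qubit 1 back to 0.
    branch = {k: v for k, v in state.items() if k[0] == outcome}
    norm = sqrt(sum(abs(v) ** 2 for v in branch.values()))
    return {(0, q0): v / norm for (q1, q0), v in branch.items()}

zero_branch = reset_q1(bell, 0)   # collapses to |00>
one_branch = reset_q1(bell, 1)    # collapses to |11>, then the X flip gives |01>
```

Note that in the outcome-1 branch, qubit 0 ends up in $|1\rangle$: the reset really does collapse the entangled partner, which is exactly the behaviour observed in the question.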
In general, the state will collapse to one of the possible states. Then the resetted qubit state will change to become zero if it is not zero already. | {
"domain": "quantumcomputing.stackexchange",
"id": 3695,
"tags": "qiskit"
} |
Faraday's Law Induced Electric Field | Question: I have been going through A Student's Guide to Maxwell's Equations and am looking at Faraday's Law but am confused about why there always seems to be an assumption that the induced field is tangent to a given loop. I included a diagram to illustrate. The left hand side of the equation is the dot product of the induced electric field and the differential element dL but to me this only specifies that the parallel component is summed and that the actual direction of the field is free to point in any direction so long as there is some non zero component tangent to the ring.
Also another issue I see is that if we consider this same ring in two different positions it seem like we can create contradictory fields where the electric field can take on more than one value at a point.
The overarching question here is how is the induced electric field actually calculated? Is it possible with just Faraday's law or is there additional information required such as the tangency assumption?
Answer: If you use Jefimenko's equations (https://en.wikipedia.org/wiki/Jefimenko%27s_equations), the only sources of electric and magnetic fields at a point are the charges, currents, and their time derivatives on the point's past light cone. The equations ensure Maxwell's equations are satisfied.
In this viewpoint, a changing magnetic field doesn't induce an electric field (even though this is a handy viewpoint in designing electric motors, metal detectors, etc.); rather, the sources ensure that $\partial B/\partial t$ is proportional to $\nabla \times E$. The curl of $E$ is not $E$, of course.
In your example, all that is ensured is the integral of $E$ around the loop, not the value at any point. | {
"domain": "physics.stackexchange",
"id": 93210,
"tags": "electromagnetism, maxwell-equations, vector-fields"
} |
Clarification on jQuery global variables in external files and scope | Question: Now that I've used it a lot, I think I'm starting to get the hang of jQuery and its capabilities. One thing I'm still unclear about is how global variables are initialized and used, and how this works when the variable refers to a jQuery object, especially in and with external JavaScript files.
For example, if we had this test website:
HTML file
<!DOCTYPE html>
<html>
<head>
<link rel="stylesheet" type="text/css" href="someCSSreset.css">
<link rel="stylesheet" type="text/css" href="external.css">
<style>
.header {
color: orange;
background: blue;
font-size: 1.5em;
}
.content {
color: purple;
background: pink;
}
h1, h2 {
position: relative;
margin: 0 auto;
}
</style>
<script src="jQuery.min.js"></script>
<script src="example.js"></script>
<script>
// Global undefined variable. Is this allowed?
window.globalUndefined;
// Global variable referencing a number?
window.globalNumber = 42;
// Global undefined jQuery object?
window.$someElement;
// Global jQuery object. Do I need window. in front of the right-hand statement?
window.$allP = $("p");
$(document).ready(function() {
// Using global variable in document
$allP.click(function() {
$(this).css("color", "red");
});
// Using global variable from external file internally. Is this correct?
$allHeaders.click(function() {
$(this).css("color", "green");
});
// Am I passing these correctly?
doSomething($allP, $allHeaders);
});
</script>
</head>
<body>
<div class="header wrap">
<div class="header main">
<h1>EXAMPLE PAGE</h1>
<p>hi guys lol</p>
</div>
</div>
<div class="content wrap">
<div class="content main">
<h2>Paragraph 1</h2>
<p>Lorem ipsum</p>
</div>
</div>
</body>
</html>
And then, our two relevant external files:
external.css
body {
background-color: light-blue;
font-family: sans-serif;
width: 100%;
}
.wrap {
width: 50%;
border: 2px solid black;
}
example.js
// Does an external jQuery file need $(document).ready wrappers around it?
// Global undefined variable. Is this allowed?
window.extGlobalVar;
//Global variable referencing a number?
window.extGlobalNumber = 117;
// Global undefined jQuery object?
window.$extElementVar;
// Global jQuery object. Do I need a window. in front of the right-hand statement?
window.$allHeaders = $("h1");
function doSomething(p, h) {
$(p).mouseOver(function() {
// Using the internal global number variable?
$(this).css("width", globalNumber + "px");
});
// Using internal global jQuery
$allHeaders.mouseOver(function() {
// Using the external global number variable?
$(this).css("width", extGlobalNumber + "px");
});
// Can I later set the two undefined jQuery objects in the external file, or do I
// have to do it internally? Or does either work?
$someElement = $(".wrap");
$extElementVar = $(".main");
}
That's a lot of questions randomly strewn throughout some shitty code, so I'll reiterate them here:
In the above examples, am I using the various global variables and global jQuery objects correctly (in both the internal and external cases)?
In the above examples, am I passing the two global variables into that (externally-held) jQuery function correctly (in both the internal and external cases)?
Am I calling that (externally-held) jQuery function correctly?
Is there a way to be able to call an externally-held jQuery function on an object internally? E.g. $("div").doSomething(), but doSomething is defined in an external file. If so, how?
If we're allowed to have undefined global variables, can I later set them in the external file, or do they have to be set internally? Or can I do either?
Can you declare global variables in a function or in the $(document).ready() wrapper if you format it as window.$(etc)?
Answer: Let me first try to clear up what I expect to be some confusion:
All JavaScript code on a page runs in the same environment. Whether a piece of code is defined in an external file or in the page itself makes little difference, as long as things are loaded in the proper order.
So if you had HTML like
<script>
/* chunk of code 1 */
/* chunk of code 2 */
/* chunk of code 3 */
</script>
It's the same as:
<script> /* chunk of code 1 */ </script>
<script> /* chunk of code 2 */ </script>
<script> /* chunk of code 3 */ </script>
Which, in turn is the same as:
<script src="chunk1.js"></script>
<script src="chunk2.js"></script>
<script src="chunk3.js"></script>
From the browser's point of view, all JS - whether part of the HTML or in a separate file - is just part of one big JS file. So in your case, that file starts with all of jQuery's code, then the code from "example.js" and then the code in the HTML.
So when you say "externally-held" and "internally-held", I say "same difference". Functions, wherever they're defined, have their own internal scope (and access to any "outer" scope) - files do not automatically constitute a scope.
To answer your questions directly:
In the above examples, am I using the various global variables and global jQuery objects correctly (in both the internal and external cases)?
Yes and no. Yes in the sense that you're doing it right, but no in the sense that you may not be doing the right thing :)
Global variables are useful ($ aka jQuery is a global variable), but the fewer you have, the better. JavaScript is the wild west: Browsers and libraries may define different things in the global scope, so it's easy to step on something's toes - or have your own toes stepped on, if some library accidentally overwrites your global variables.
So the usual guideline for JavaScript is to avoid "polluting the global scope". Hence why jQuery "hides" (almost) all its functionality behind the $ variable: It limits jQuery's footprint to a single variable (even so, there were libraries before jQuery that used the global $ for something, so things can still get weird).
So, again, yes, those are global variables, and - since everything is essentially one big file - you're doing that part right. But even so, I wouldn't recommend it.
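As a concrete illustration (a hedged sketch; the names `MyApp`, `widen`, and `magicNumber` are made up, not from your code), the common alternative is to wrap everything in an immediately-invoked function expression (IIFE) and expose at most one global:

```javascript
// Sketch of the "single namespace" pattern: instead of many globals,
// expose one object. (Hypothetical names throughout.)
var MyApp = (function () {
  // Private to the IIFE -- invisible from the outside, so no pollution.
  var magicNumber = 42;

  function widen(px) {
    return (px + magicNumber) + "px";
  }

  // Only what's returned becomes visible.
  return { widen: widen };
})();

console.log(MyApp.widen(100));   // "142px" -- uses the private magicNumber
console.log(typeof magicNumber); // "undefined" outside the IIFE
```

In a browser you would attach `MyApp` to `window` explicitly if other scripts need it; everything else stays private to the wrapper.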
In the above examples, am I passing the two global variables into that (externally-held) jQuery function correctly (in both the internal and external cases)?
Again: Yes and no. It certainly makes sense to pass them - but it's not necessary to make them global to begin with.
Am I calling that (externally-held) jQuery function correctly?
Sure - again: One great big file.
Is there a way to be able to call an externally-held jQuery function on an object internally? E.g. $("div").doSomething(), but doSomething is defined in an external file. If so, how?
Yes: Plugins. Plugins are how you extend jQuery with custom functionality. It's slightly more involved than just writing a function, but it's not too bad.
Still, consider if it's worth it. The functionality you're adding should be somewhat generic to really make sense. For instance, jQuery UI defines a bunch of plugins to animate elements and such. This works for any element, on any page. The same way jQuery itself makes sense on any page, anywhere.
But if your functionality is more specific (i.e. only works in your specific circumstances) don't bother with plugins. Functions work just fine too. Sure, it may not be quite as neat as writing $("div").doSomething(), but it's a lot more flexible. You can define tons of plugins, but... eh, not worth it.
In your case, your doSomething function relies on global variables, and its functionality is very specific (in other words, it's tightly coupled to your circumstances), so making it a plugin would be complicating matters.
Still, as an example, here's a plugin that makes the background of an element red (that is, a fairly generic - if useless - piece of functionality that doesn't rely on a specific context):
jQuery.fn.paintItRed = function () {
this.each(function () {
$(this).css('background', 'red');
});
return this;
};
You can then use $("div").paintItRed(), and it'll make all the DIVs red! Yay?
If we're allowed to have undefined global variables, can I later set them in the external file, or do they have to be set internally? Or can I do either?
Well, undefined means just that: Not defined. At all. So if you do nothing at all, the variable will be undefined. It also means that doing this:
window.anUndefinedVariable;
doesn't do anything really. You're not really declaring anything (at best, JavaScript will attempt to read the variable, which isn't defined).
And again, we're back to why global scope pollution is a problem: Any piece of code can define or overwrite a global variable. So for instance, jQuery defines $ as a global variable, but your code can easily re-define it as null - and then you can't use $ for anything.
Can you declare global variables in a function or in the $(document).ready() wrapper if you format it as window.$(etc)?
Sure. In browsers, window is the global object. And, as mentioned, any piece of code can run code like window.someGlobal = 23;.
That's because window is just another object, and any object your code can access, it can mess with.
This is getting pretty long-winded (sorry), but you've got a few more question in your code, and I'll quickly want to answer one of those:
// Does an external jQuery file need $(document).ready wrappers around it?
Yes, if your code needs to access elements in the page. If you load your file at the top of the HTML, it starts running before the page itself has actually been parsed. Again: Just the same as if that file's content was actually embedded in the page itself.
So whenever you want to make sure that the page has actually been fully downloaded and parsed, you have to use $(document).ready(function () {...}) - or its shortcut: $(function () {...});
It's no different from a click event. You want some code to run when something's clicked - not before! So you use $("#something").click(function () {...}), and you know that that function's code will only run when the "something"-element is clicked. Similarly, $(document).ready is just another event listener, which waits for the ready event to fire before running the function.
Phew - sorry for the verbiage. | {
"domain": "codereview.stackexchange",
"id": 10608,
"tags": "javascript, jquery"
} |
Electromagnetic wave associated with a single photon | Question: To the question regarding the relation between the energy of a photon (E = hf) and the energy of the associated electromagnetic wave, I read somewhere that the energy of an electromagnetic wave is Nhf, N representing the number of photons per second that pass through a unit area. How could the electromagnetic wave associated with a single photon produce several photons? If I read it wrong and it isn't, how can we represent the electromagnetic wave associated with a single photon and what would be its energy as a function of its frequency?
Regards
Answer: The energy of a single photon is $E=hf$ which also corresponds to an electromagnetic wave of frequency $f$. If the beam of light consists of multiple photons, then the total energy of the beam of light equals the number of photons in the light beam $\times$ the energy of a single photon: $E_{tot}= Nhf$. In the wave framework, this can be seen as the superposition of all the $N$ electromagnetic waves of frequency $f$. This resulting superposition is also a wave.
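In numbers -- a small stdlib-only Python sketch using the exact SI value of Planck's constant (the frequency is just an assumed example for green light):

```python
h = 6.62607015e-34  # Planck constant in J*s (exact by SI definition)

def photon_energy(f):
    """Energy E = h*f of a single photon of frequency f (Hz), in joules."""
    return h * f

def beam_energy(n, f):
    """Total energy N*h*f carried by n photons of frequency f."""
    return n * photon_energy(f)

f_green = 5.45e14                  # ~green light, assumed example value
print(photon_energy(f_green))      # ~3.6e-19 J for one photon
print(beam_energy(1e18, f_green))  # ~0.36 J for 10^18 such photons
```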
So, it can be that the total electromagnetic wave consists of the superposition of $N$ "fundamental" electromagnetic waves. Here, the $N$ "fundamental" electromagnetic waves correspond to the $N$ photons. | {
"domain": "physics.stackexchange",
"id": 70254,
"tags": "energy, waves, photons"
} |
Do low frequency sounds really carry longer distances? | Question: It is a common belief that low frequencies travel longer distances. Indeed, the bass is really what you hear when the neighbor plays his HiFi loud (Woom Woom). Try asking people around, a lot of them believe that low sounds carry longer distances.
But my experience isn't as straightforward. In particular:
When I stand near someone who's listening loud music in headphones, it is the high pitched sounds that I hear (tchts tchts), not the bass.
When I sit next to an unamplified turntable (the disc is spinning but the volume is turned off), I hear high pitched sounds (tchts tchts), not the bass.
So with very weak sounds, high frequencies seem to travel further?
This makes me think that perhaps low frequencies do not carry longer distances, but the very high amplitude of the bass in my neighbor's speakers compensates for that. Perhaps also the low frequencies resonate with the walls of the building? Probably also the medium the sound travels through makes a difference? Or perhaps high frequencies are reflected more by walls than low frequencies?
I found this rather cute high school experiment online, which seems to conclude that low and high frequencies travel as far, but aren't there laws that physicist wrote centuries ago about this?
Answer: Do low frequencies carry farther than high frequencies? Yes. The reason has to do with what's stopping the sound. If it weren't for attenuation (absorption) sound would follow an inverse square law.
Remember, sound is a pressure wave vibration of molecules. Whenever you give molecules a "push" you're going to lose some energy to heat. Because of this, sound is lost to heating of the medium it is propagating through. The attenuation of sound waves is frequency dependent in most materials. See Wikipedia for the technical details and formulas of acoustic attenuation.
Here is a graph of the attenuation of sound at different frequencies (accounting for atmospheric pressure and humidity):
As you can see, low frequencies are not absorbed as well. This means low frequencies will travel farther. That graph comes from this extremely detailed article on outdoor sound propagation.
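To make the two competing effects concrete, here is a stdlib-only Python sketch; the absorption coefficients are rough assumed values for illustration, not measured data:

```python
import math

# Assumed, illustrative atmospheric absorption in dB per 100 m of travel.
absorption_db_per_100m = {100: 0.03, 1000: 0.5, 10000: 10.0}

def level_at_distance(l_ref, d, f, d_ref=1.0):
    """Sound level (dB) at distance d, given level l_ref at d_ref.

    Combines inverse-square spreading (20*log10 of the distance ratio)
    with frequency-dependent absorption along the path.
    """
    spreading = 20 * math.log10(d / d_ref)
    absorption = absorption_db_per_100m[f] * d / 100.0
    return l_ref - spreading - absorption

# The same 90 dB source heard from 500 m away:
for f in (100, 1000, 10000):
    print(f, "Hz:", round(level_at_distance(90, 500, f), 1), "dB")
```

The spreading loss is identical for all frequencies; the absorption term is what lets the low frequencies win at long range.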
Another effect that affects sound propagation, especially through walls, headphones, and other relative hard surfaces is reflection. Reflection is also frequency dependent. High frequencies are better reflected whereas low frequencies are able to pass through the barrier:
This, together with frequency-based attenuation, is why low-frequency sounds are much easier to hear through walls than high-frequency ones.
Frequency Loudness in Headphones:
The above descriptions apply to sounds that either travel long distances or are otherwise highly attenuated. Headphone sound starts off at such low intensities that it doesn't travel far enough for attenuation to be a dominant factor. Instead, the frequency response curve of the human ear plays a big role in perceived loudness.
The curves that show human hearing frequency response are called Fletcher–Munson curves:
The red lines are the modern ISO 226:2003 data. All the sound along a curve is of "equal loudness" but as you can see, low frequencies must be much more intense to sound equally as loud as higher frequency sounds. Even if the low frequencies are reaching your ear, it's harder for you to hear them.
Headphone sound is doubly compounded by the difficulty of making headphones with good low-frequency response. With loudspeakers you can split the job of producing frequencies among a subwoofer, a midrange speaker, and a tweeter. For low frequencies subwoofers are large and have a resonating chamber which simply isn't an option with headphones that must produce a large range of sound frequencies in a small space. Even a good pair of headphones like Sennheiser HD-650 struggle with lower frequencies:
So if it sounds like high frequencies travel farther with headphones, it's because headphones are poor at producing low frequencies and your ear is poor at picking them up. | {
"domain": "physics.stackexchange",
"id": 51282,
"tags": "acoustics, everyday-life, frequency"
} |
Derive Hamiltonian from equations of motion | Question: Is there a method for deriving the Hamiltonian given that you know the equations of motion?
For example, given the equation (equation 5 in the paper linked), they simply derive the Hamiltonian in equations 6-8. What is the method behind this?
Paper link: https://arxiv.org/abs/1902.01344
Answer: If you have this kind of differential equation:
$$\vec{\ddot{r}}=-\vec{F}(\vec{r})\tag 1$$
you can get the Hamiltonian.
multiply equation (1) from the left with $\vec{\dot{r}}$
$$\vec{\dot{r}}\cdot \vec{\ddot{r}}=-\vec{\dot{r}}\cdot\vec{F}(\vec{r})$$
thus:
$$\frac{1}{2}\frac{d}{dt}(\vec{\dot{r}}\cdot \vec{\dot{r}})=
-\vec{\dot{r}}\cdot\vec{F}(\vec{r})$$
or
$$\frac{1}{2}\int d(\vec{\dot{r}}\cdot \vec{\dot{r}})=
-\int\vec{F}(\vec{r})\cdot d\vec{r}$$
$\Rightarrow$
$$\underbrace{\frac{1}{2}\vec{\dot{r}}\cdot \vec{\dot{r}}}_{T}+\underbrace{\int\vec{F}(\vec{r})\cdot d\vec{r}}_{U}=\text{const.}$$
with the Lagrangian $L=T-U$ you can obtain the Hamiltonian
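(A quick way to sanity-check the resulting $H$: Hamilton's equations $\dot{r}=\partial H/\partial p$ and $\dot{p}=-\partial H/\partial r$ must reproduce the original equation of motion. A stdlib-only Python sketch for the gravitational case, with the assumed value $M=1$:)

```python
M = 1.0

def H(r, p):
    # Candidate Hamiltonian: kinetic term plus gravitational-type potential.
    return 0.5 * p**2 - M / r

def dH_dp(r, p, eps=1e-6):
    # Central finite difference in p.
    return (H(r, p + eps) - H(r, p - eps)) / (2 * eps)

def dH_dr(r, p, eps=1e-6):
    # Central finite difference in r.
    return (H(r + eps, p) - H(r - eps, p)) / (2 * eps)

r0, p0 = 2.0, 0.3
print(dH_dp(r0, p0))   # rdot = dH/dp = p = 0.3
print(-dH_dr(r0, p0))  # pdot = -dH/dr
print(-M / r0**2)      # the force -M/r^2 from the equation of motion
```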
Example:
$$\ddot{r}=-\underbrace{\frac{M}{r^2}}_{F(r)}$$
$\Rightarrow$
$$T=\frac{1}{2}\dot{r}^2\quad,U=-\frac{M}{r}$$
with $L=T-U$ you get the Hamiltonian
$$H=\frac{1}{2}\,p^2-\frac{M}{r}=T+U$$
where $p=\frac{dL}{d(\dot{r})}=\dot{r}$ | {
"domain": "physics.stackexchange",
"id": 65813,
"tags": "lagrangian-formalism, orbital-motion, hamiltonian-formalism, hamiltonian, binary-stars"
} |
Does the environment matter (area outside the box) in tensorflow's object detection algorithm? | Question: I am exploring tensorflow's object detection algorithm.
Prior to training I had to mark boxes around my items in the training dataset images. This was fed into training. Does the environment (surroundings) outside of the box marking matter in tensorflow's object detection algorithm? Or is the training based only on the contents inside of the marked box?
Answer: The training is based on the boxes' content only, but during the detection process, the algorithm has to scan the whole image.
Consequently, there is no learning of the environment outside the box.
Such algorithms only focus on detecting specific objects, independently from their surrounding environment.
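To illustrate the point, here is a toy, framework-free Python sketch (the box format `(x0, y0, x1, y1)` is an assumption for illustration; real detection pipelines handle regions far more elaborately):

```python
def crop(image, box):
    """Return only the pixels inside box = (x0, y0, x1, y1)."""
    x0, y0, x1, y1 = box
    return [row[x0:x1] for row in image[y0:y1]]

# Toy 4x6 single-channel "image": pixel value encodes (row, column).
image = [[c + 10 * r for c in range(6)] for r in range(4)]

patch = crop(image, (1, 1, 4, 3))
print(patch)  # [[11, 12, 13], [21, 22, 23]] -- the surroundings never appear
```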
However, tensorflow could be used to apply contextual object recognition, but it requires additional components such as an attention mechanism.
"domain": "datascience.stackexchange",
"id": 11035,
"tags": "tensorflow, object-detection"
} |
Why does it have a constant val_loss? | Question:
I am working on some dataset and am implementing a deep neural network. There are some typos that I am not familiar with.
Answer: Change the last layer's neuron count to 2.
model.add(keras.layers.Dense( 2, activation="softmax"))
OR
Change your last layer's activation to sigmoid and keep y_train as a single column
model.add(keras.layers.Dense( 1, activation="sigmoid")) | {
"domain": "datascience.stackexchange",
"id": 8120,
"tags": "deep-learning, keras"
} |
"AI will kill us all! The machines will rise up!" - what is being done to dispel such myths? | Question: Science Fiction has frequently shown AI to be a threat to the very existence of mankind. AI systems have often been the antagonists in many works of fiction, from 2001: A Space Odyssey through to The Terminator and beyond.
The Media seems to buy into this trope as well. And in recent years we have had people like Elon Musk warn us of the dangers of an impending AI revolution, stating that AI is more dangerous than nukes.
And, apparently, experts think that we will be seeing this AI revolution in the next 100 years.
However, from my (albeit limited) study of AI, I get the impression that they are all wrong. I am going to outline my understanding below, please correct me if I am wrong:
Firstly, all of these things seem to be confusing Artificial Intelligence with Artificial Consciousness. AI is essentially a system to make intelligent decisions, whereas AC is more like the "self-aware" systems that are shown in science fiction.
Not AI itself, but intelligence and intelligent decision-making algorithms are something we've been working with and enhancing since before computers have been around. Moving this over to an artificial framework is fairly easy. However, consciousness is still something we are learning about. My guess is we won't be able to re-create something artificially if we barely understand how it works in the real world.
So, my conclusion is that no AI system will be able to learn enough to start thinking for itself, and that all our warnings of AI are completely unjustified.
The real danger comes from AC, which we are a long, long way from realizing because we are still a long way off from defining exactly what consciousness is, let alone understanding it.
So, my question is, assuming that my understanding is correct, are any efforts are being made by companies or organizations that work with AI to correct these popular misunderstandings in sci-fi, the media, and/or the public?
Or are the proponents of AI ambivalent towards this public fear-mongering?
I understand that the fear mongering is going to remain popular for some time, as bad news sells better than good news. I am just wondering if the general attitude from AI organizations is to ignore this popular misconception, or whether a concerted effort is being made to fight against these AI myths (but unfortunately nobody in the media is listening or cares).
Answer: Nothing.
It's in almost everyone's favor for it to stay that way financially. Having non-technical individuals associate AI with terminators creates a perception that the field has greater capabilities than it does $\rightarrow$ this leads to grants, funding, etc...
Is there any negative? Yes. Misconceptions always have drawbacks. We see the creation of dumb ethics boards and such cough cough Elon Musk.
But if history has anything to say about this, as the field gains popularity (which it is doing dangerously quickly), information will spread by definition, and eventually misconceptions will be laid to rest.
Note that this answer is biased and based upon my own opinions | {
"domain": "ai.stackexchange",
"id": 1439,
"tags": "social, artificial-consciousness"
} |
Bhabha scattering Energy conservation | Question: Griffiths Ex. 2.4 in his book "Elementary particles" says:
"Determine the mass of the virtual photon in each of the lowest-order
diagrams for Bhabha scattering (assume the electron and positron are
at rest). What is its velocity? (Note that these answers would be
impossible for real photons)"
I know that the mass in the Annihilation Diagram is $m=2m_e$ using the Center of Momentum system and the formula $E^2 - p^2c^2 = m^2c^4$. I'm considering it as three different steps
($e^-e^+,\gamma,e^-e^+$),
and conservation over each of them (each vertex before and after).
(Time goes horizontally).
But what about the Scattering diagram
Is the virtual photon part of the "before interaction" or "after interaction"? Is it part of both? That would make $m=0$. If not, it would be $m=2m_e$, considering the virtual photon as a middle step.
Answer: (2nd diagram) Since the photon is virtual, its exchange is the process that happens during interaction, neither before nor after.
Now, energy and 3-momentum is conserved at each vertex. Also keep in mind that there is a total [initial(i) or final(f)] energy-momentum 4-vector conservation delta function in any scattering process: $\delta(\sum p_i^\mu - \sum p_f^\mu)$.
Using these, in the 2nd diagram, for the topmost vertex (with photon energy/momentum flowing into the vertex), energy conservation gives $E_{e^+(in)} + E_\gamma = E_{e^+(out)}$. Since $E_{e^+} = m$ for both, $E_\gamma=0$. Similarly, $\vec{p}_{e^+(in)} + \vec{p}_\gamma = \vec{p}_{e^+(out)}$ gives $\vec{p}_\gamma = 0$, and therefore, photon velocity $=0$. Now, using the energy-momentum dispersion relation for these values of $E_\gamma$ and $\vec{p}_\gamma$, we find that $m_\gamma=0$.
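A quick numeric restatement of the two cases -- a stdlib-only Python sketch in natural units ($c=1$), with an assumed electron mass of 0.511 MeV:

```python
import math

m_e = 0.511  # electron mass in MeV (assumed value, c = 1 units)

def inv_mass(E, px, py, pz):
    """Invariant mass from m^2 = E^2 - |p|^2 (signed sqrt for clarity)."""
    m2 = E**2 - (px**2 + py**2 + pz**2)
    return math.copysign(math.sqrt(abs(m2)), m2)

# Annihilation diagram: the virtual photon carries the whole CM energy
# of the e+ e- pair at rest, with zero net momentum.
print(inv_mass(2 * m_e, 0, 0, 0))  # 1.022 -> m = 2 m_e

# Scattering diagram: vertex conservation gave E_gamma = 0, p_gamma = 0.
print(inv_mass(0, 0, 0, 0))        # 0.0 -> massless
```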
As Griffiths points out, this is not possible for real photons: they can never have zero velocity. | {
"domain": "physics.stackexchange",
"id": 50622,
"tags": "quantum-field-theory, elementary-particles"
} |
D3 Matrix table | Question: I am able to develop this matrix but I think code can be improved. I am creating a map on which each rectangle may have zero through many list items with their titles (still need to add code for adding titles):
var width = 600,
height = 600;
var margin = {top: -5, right: -5, bottom: -5, left: -5};
var zoom = d3.behavior.zoom()
.scaleExtent([1, 15])
.on("zoom", zoomed);
var svgContainer = d3.select("body").append("svg")
.attr("width", width)
.attr("height", height)
.style("background-color", "black")
.append("g")
.attr("transform", "translate(" + margin.left + "," + margin.right + ")")
.call(zoom);
var zoomed = function () {
svgContainer.attr("transform", "translate("+ d3.event.translate + ")scale(" + d3.event.scale + ")");
};
var zoom = d3.behavior.zoom()
.scaleExtent([1, 8])
.on("zoom", zoomed)
.size([width, height]);
svgContainer.call(zoom);
var rectangle1 = svgContainer.append("rect")
.attr("x", 0)
.attr("y", 0)
.attr("width", 100)
.attr("height", 100)
.attr("fill", "red");
var rectangle2 = svgContainer.append("rect")
.attr("x", 100)
.attr("y", 0)
.attr("width", 100)
.attr("height", 100)
.attr("fill", "yellow");
var rectangle3 = svgContainer.append("rect")
.attr("x", 200)
.attr("y", 0)
.attr("width", 100)
.attr("height", 100)
.attr("fill", "red");
var rectangle4 = svgContainer.append("rect")
.attr("x", 0)
.attr("y", 100)
.attr("width", 100)
.attr("height", 100)
.attr("fill", "yellow");
var rectangle5 = svgContainer.append("rect")
.attr("x", 100)
.attr("y", 100)
.attr("width", 100)
.attr("height", 100)
.attr("fill", "red");
var rectangle6 = svgContainer.append("rect")
.attr("x", 200)
.attr("y", 100)
.attr("width", 100)
.attr("height", 100)
.attr("fill", "yellow");
var rectangle7 = svgContainer.append("rect")
.attr("x", 0)
.attr("y", 200)
.attr("width", 100)
.attr("height", 100)
.attr("fill", "red");
var rectangle8 = svgContainer.append("rect")
.attr("x", 100)
.attr("y", 200)
.attr("width", 100)
.attr("height", 100)
.attr("fill", "yellow");
var rectangle9 = svgContainer.append("rect")
.attr("x", 200)
.attr("y", 200)
.attr("width", 100)
.attr("height", 100)
.attr("fill", "red");
<script src="https://cdnjs.cloudflare.com/ajax/libs/d3/3.4.11/d3.min.js"></script>
Answer: What do you think about this?
var rects = [
[0, 0, "#C0FC3E"],
[0, 200, "#60FC60"],
[0, 400, "#64FE2E"],
[0, 600, "#00FF00"],
[200, 0, "#F6FF33"],
[200, 200, "#AFFC3B"],
[200, 400, "#00FF00"],
[200, 600, "#64FE2E"],
[400, 0, "#FDB500"],
[400, 200, "#8DB723"],
[400, 400, "#AFFC3B"],
[400, 600, "#60FC60"],
[600, 0, "red"],
[600, 200, "#FDB500"],
[600, 400, "#F6FF33"],
[600, 600, "#C0FC3E"]
];
var width = 800,
height = 800,
boxWidth = 200,
boxHeight = 200;
var svgContainer = d3.select("body").append("svg")
.attr("width", width)
.attr("height", height);
var len = rects.length;
// CreateRect was called but not defined in the original; a minimal version:
function CreateRect(x, y, color) {
    svgContainer.append("rect")
        .attr("x", x)
        .attr("y", y)
        .attr("width", boxWidth)
        .attr("height", boxHeight)
        .attr("fill", color);
}
for (var i = 0; i < len; i++) {
    CreateRect(rects[i][0], rects[i][1], rects[i][2]);
}
svgContainer.append("text")
.attr("x", 85)
.attr("y", 125)
.attr("font-size", 55)
.text("3")
.attr("onclick", "alert('You clicked A');"); | {
"domain": "codereview.stackexchange",
"id": 12982,
"tags": "javascript, matrix, d3.js"
} |
Ammonium sulphate precipitation assay pH dependence | Question: In general, does the pH affect the precipitation? E.g., would a pH of 6 cause less precipitation than pH 7.5, or are they unrelated?
Answer: I think it would affect some proteins, but ammonium sulfate precipitation usually requires such large amounts of salt that you would have to add a lot of base to adjust the pH.
When I have done ammonium sulfate precipitation, it's to take a rough cut of proteins out of a whole cell/whole organ lysate - it's more like a hammer than a pair of tweezers as far as protein purification is concerned. I would rather use a charged plastic resin or a sizing column if I were to care about the isoelectric purification.
"domain": "biology.stackexchange",
"id": 625,
"tags": "biochemistry, proteins"
} |
What is the purpose of electronic braking in motors? | Question: I have a Micro Magician v2 micro controller. It has a A3906 Dual FET “H” bridge motor driver built in.
In the manual it states "Electronic braking is possible by driving both inputs high."
My first question is, what is the purpose of these brakes? If I set the left/right motor speed to 0, the robot stops immediately anyway. What advantage is there to using these brakes, or am I taking the word "brake" too literally?
My second question is, the driver has "motor stall flags that are normally held high by pullup resistors and will go low when a motor draws more than the 910mA current limit. Connect these to spare digital inputs so your program will know if your robot gets stuck." But when my robot hits a wall, the wheels just keep on spinning (slipping if you will), I take it these stall flags can be used on a rough surface where the wheels have more friction?
Answer: Question 1: If you are satisfied with your robot's stopping distance, then you don't need electronic braking. With a heavier or faster robot, it's quite useful. The larger the machine, the more inertia becomes an important design factor. For example, a form of electronic braking is used by open-pit mining dump trucks. Ordinary dump trucks, being much smaller, make do with ordinary brakes.
Question 2: The stall flags are useful to detect that the wheel is not moving despite full power being applied. I'm presuming we're still talking about a small robot here - what would happen if the wheel got tangled in something like a long hair or thread?
I had a two-inch long robot once that got its powered wheels jammed on a tiny piece of toothpick it encountered while crossing a carpet. Freak luck, but it does happen. If you only run your robot on clean surfaces, and it'll spin the tires when it bumps into something, then you probably don't need the flags. Personally, though, I would use them -- but I'm a belt AND suspenders kinda guy.
"domain": "robotics.stackexchange",
"id": 284,
"tags": "motor, h-bridge"
} |
What happens when the collision check sampling rate is set too low? | Question:
I follow moveit wizard tutorial.
If I only set the collision check sampling rate to 1000, what bad situation will happen?
Is the PR2 more likely to be in collision when executing a trajectory, or is the planning more likely to fail to generate a collision-free trajectory?
Thank you~
Originally posted by sam on ROS Answers with karma: 2570 on 2014-07-01
Post score: 0
Answer:
That number only affects the wizard -- once generated, it is never used again -- thus, the easy answer is "don't change it" since it won't improve performance in anything except the wizard (which is hopefully a one time thing)
For a better understanding of what is going on, that sampling basically helps decide which links should not be collision checked later on (and this information is put in the exported SRDF). For instance, two links that are adjacent and always touch cannot be checked for collision or all plans would fail to be collision free. The wizard may also be able to determine when two links are never in contact, because they are always too far apart, and thus there is no reason to collision check those links as the result will always be false.
Originally posted by fergs with karma: 13902 on 2014-07-01
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by sam on 2014-07-02:
Thank you~ After the test, I found that if the rate is too low, it is more likely to output "No motion plan found. No execution attempted", and then the PR2 will not execute anything~
"domain": "robotics.stackexchange",
"id": 18468,
"tags": "moveit"
} |