ETH based lower bound for $k$-COLORING of bounded degree graphs | Question: It is known that there is no $2^{o(n)}$-time algorithm for 3-COLORABILITY of graphs of maximum degree four, unless ETH fails [1]. Is there a similar result for $k$-COLORABILITY assuming only ETH (not SETH)?
Emden-Weinert et al. [2] proved that for all $k\geq 3$, $k$-COLORABILITY is NP-complete for graphs of maximum degree $k-1+\sqrt{k}$. By an alternate reduction, I can show that there is no $2^{o(n)}$-time algorithm for $k$-COLORABILITY in graphs of maximum degree $k-1+\sqrt{k}$ (assuming ETH). I suppose this is not a new result. Could you please point me to a paper that gives this result? (I hate to brand results as folklore). Thanks in advance.
References
[1] Cygan, Marek; Fomin, Fedor V.; Golovnev, Alexander; Kulikov, Alexander S.; Mihajlin, Ivan; Pachocki, Jakub; Socała, Arkadiusz, Tight bounds for graph homomorphism and subgraph isomorphism, Krauthgamer, Robert (ed.), Proceedings of SODA 2016, Arlington, VA, USA, January 10–12, 2016. Philadelphia, PA: SIAM; New York, NY: ACM. 1643-1649 (2016). ZBL1409.68209.
[2] Emden-Weinert, Thomas; Hougardy, Stefan; Kreuter, Bernd, Uniquely colourable graphs and hardness of colouring graphs of large girth, Comb. Probab. Comput. 7, No. 4, 375-386 (1998). ZBL0918.05051.
Answer: The result mentioned in the question can be obtained by a chain of two standard reductions. The simplest reduction for $k$-COLORABILITY $\leq_p$ $(k+1)$-COLORABILITY (namely, adding a universal vertex) is clearly a linear reduction.
Also, the reduction $k$-COLORABILITY $\leq_p$ $k$-COLORABILITY($\Delta\leq k-1+\lceil \sqrt{k} \rceil$) given by Emden-Weinert et al. [2] is a linear reduction.
From these observations, it follows that, unless ETH fails, there is no $2^{o(n)}$-time algorithm for $k$-COLORABILITY($\Delta\leq k-1+\lceil \sqrt{k} \rceil$).
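In symbols, the chain of reductions can be sketched as follows (here $\le_{\ell}$ denotes a polynomial-time reduction with only linear blow-up in instance size):

```latex
3\text{-COL}(\Delta\le 4)
\;\le_{\ell}\;
k\text{-COL}
\;\le_{\ell}\;
k\text{-COL}\bigl(\Delta\le k-1+\lceil\sqrt{k}\,\rceil\bigr)
```

Since composing two linear reductions yields a linear reduction, a $2^{o(n)}$-time algorithm for the rightmost problem would give a $2^{o(n)}$-time algorithm for the leftmost one, contradicting ETH.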
Detailed Explanation:
Let $G$ be a graph of maximum degree $4$ as constructed in the hardness result of Cygan et al [1].
Add $k-3$ new vertices to $G$ and connect them to all vertices of $G$ as well as to each other. The resulting graph $G'$ has unbounded maximum degree, but all except $k-3$ of its vertices have degree at most $k$. By construction, $G'$ is $k$-colorable if and only if $G$ is $3$-colorable. The number of newly added edges and vertices is $O(n)$ for constant $k$, thus any $2^{o(n)}$-time algorithm for $k$-coloring $G'$ would falsify ETH.
The second step is to reduce the maximum degree of $G'$ by the method of Emden-Weinert et al. [2]. Roughly speaking, their method is as follows: take a high-degree vertex $u$, remove $k$ of its edges, and add $k-1$ new edges connecting $u$ to a specific gadget with $O(k)$ vertices. Each application of this procedure reduces the degree of $u$ by one, so repeating it sufficiently often brings the degree of $u$ down to at most $k$. They showed that the graph produced by this step admits essentially the same colorings as the original graph (apart from the gadgets). The degree reduction procedure is repeated as long as a high-degree vertex remains. Additionally, by their construction, the maximum degree of vertices inside the gadgets is at most $k-1+\lceil \sqrt{k} \rceil$, thus once the procedure ends, the resulting graph has the claimed maximum degree.
Observe that we have to perform the aforementioned procedure only on the newly added high-degree vertices of $G'$, and since each of them has degree $n+k-4$, the procedure stops after at most $O(kn)$ iterations. Additionally, each iteration adds at most $O(k)$ new vertices, which means that by the end of the procedure, the constructed graph has at most $O(k^2n)$ vertices. Since $k$ is a fixed constant, any $2^{o(n)}$-time algorithm for $k$-coloring this graph would therefore yield a $2^{o(n)}$-time algorithm for $3$-coloring the input graph $G$, falsifying ETH. | {
"domain": "cstheory.stackexchange",
"id": 5239,
"tags": "lower-bounds, graph-colouring, complexity-assumptions"
} |
Batch alignment of inconsistently named Fastq files | Question: I have numerous gzip-compressed paired-end Fastq files with ChIP-seq data that I would like to align with bowtie2. I confirmed that bowtie2 takes .gz files, although I do see this warning when I run for two samples:
Warning: gzbuffer added in zlib v1.2.3.5. Unable to change buffer size
from default of 8192.
My files are named like so.
10_S10_L002_R1_001.fastq.gz 12_S12_L002_R1_001.fastq.gz 2_S2_L001_R1_001.fastq.gz 4_S4_L001_R1_001.fastq-003.gz 6_S6_L001_R1_001.fastq-002.gz 8_S8_L002_R1_001.fastq.gz
10_S10_L002_R2_001.fastq.gz 12_S12_L002_R2_001.fastq.gz 2_S2_L001_R2_001.fastq.gz 4_S4_L001_R2_001.fastq-002.gz 6_S6_L001_R2_001.fastq-003.gz 8_S8_L002_R2_001.fastq.gz
11_S11_L002_R1_001.fastq.gz 1_S1_L001_R1_001.fastq.gz 3_S3_L001_R1_001.fastq.gz 5_S5_L001_R1_001.fastq-002.gz 7_S7_L002_R1_001.fastq.gz 9_S9_L002_R1_001.fastq.gz
11_S11_L002_R2_001.fastq.gz 1_S1_L001_R2_001.fastq.gz 3_S3_L001_R2_001.fastq.gz 5_S5_L001_R2_001.fastq-003.gz 7_S7_L002_R2_001.fastq.gz 9_S9_L002_R2_001.fastq.gz
The bash commands I tried are below. When I run this test command:
for i in $(ls *.fastq*.gz | grep -v 6 | rev | cut -c 16- | rev | uniq); do
echo $i;
done
...I get this output.
12_S12_L002_R1_001.fastq.gz 12_S12_L002_R2_001.fastq.gz
When I run this:
for i in $(ls *.fastq*.gz | rev | cut -c 16- | rev | uniq); do
echo $i
bowtie2 -q -p20 -x /run/media/punit/data1/BowtieIndex/hg38 -1 ${i}R1_001.fastq.gz -2 ${i}R2_001.fastq.gz | samtools view -bS - >${i%}.bam
done
I do see one BAM for my paired end files. But as I see the second set of files which are named:
6_S6_L001_R1_001.fastq-002.gz 6_S6_L001_R2_001.fastq-003.gz
if I run the same script over them, I would get an error as the loop can't read the paired end file.
So the tedious way is I can make a separate folder for these sets of data with fastq-002.gz and fastq-003.gz and can run the script by tweaking the cut parameter.
But I would like to know what is the programmatic way of resolving the issue.
Answer: A very simple approach, but one which will work with the names you show, is to strip the trailing -00N (where N is any digit) from right before the .gz when building the list of sample prefixes, and then glob for the actual file names when invoking bowtie2. To do so, edit your script to:
for i in $(ls *.fastq*.gz | sed 's/-00[0-9]\.gz/.gz/' | rev | cut -c 16- | rev | uniq); do
    bowtie2 -q -p20 -x /run/media/punit/data1/BowtieIndex/hg38 \
        -1 ${i}R1_001.fastq*.gz \
        -2 ${i}R2_001.fastq*.gz |
        samtools view -bS - > ${i}.bam
done | {
"domain": "bioinformatics.stackexchange",
"id": 924,
"tags": "read-mapping, fastq, shell"
} |
What are the effects of combining rapamycin with dietary restriction? | Question: Are the effects additive or subadditive? In many ways, rapamycin acts like a CR mimetic, but even CR can only go so far.
Answer: To clarify; administration of rapamycin (a drug) to lab organisms (including mice [1]) extends lifespan. Similarly, restricting the intake of nutrients to the minimum without causing malnutrition also extends lifespan in lab animals (including primates [2]).
Rapamycin inhibits the mTOR pathway (mammalian Target Of Rapamycin) - specifically mTORC1 (Complex 1) - which influences protein synthesis, autophagy and inflammation (among others). Upstream factors of mTOR include nutrient availability and insulin signaling (see "Deconvoluting mTOR biology" for good review [3]).
It has been hypothesized that the lifespan-extending effects of caloric restriction (CR) are mediated by mTOR (one can see why - mTOR is affected by nutrient availability). In fact it may depend on the method of CR;
Greer et al. [4] report that different methods of CR in C. elegans, for instance feeding them a diluted food source, or conversely feeding them only on alternate days, do not necessarily require the same genetic pathways. Not only this, but CR combined with a genetic mutant (eat-2) has additive lifespan-enhancing effects.
So whilst the evidence is not concrete, and I look forward to other studies in mammals similar to the one by Greer et al, it looks as though rapamycin and CR have similar but not exactly the same effects on lifespan; rapamycin specifically inhibits an individual pathway which is involved in many processes, and some of its effects are not necessarily desirable (e.g. rapamycin inhibits the immune system [5]). On the other hand, CR (most of the different types) seems to be mediated by mTOR - this difference is critical: mTOR is not necessarily inhibited by CR, it is just required for its effect.
Therefore, combining rapamycin and CR is unlikely to have an additive effect, as rapamycin may override any influence CR has on mTOR signaling, but I have not seen a study in which this has been tried. Combining different methods of CR (or developing drugs to do just that) may well have additive lifespan-enhancing effects.
Harrison DE, Strong R, Sharp ZD, et al. Rapamycin fed late in life extends lifespan in genetically heterogeneous mice. Nature. 2009;460(7253):392-5.
Colman RJ, Anderson RM, Johnson SC, et al. Caloric restriction delays disease onset and mortality in rhesus monkeys. Science (New York, N.Y.). 2009;325(5937):201-4.
Weber JD, Gutmann DH. Deconvoluting mTOR biology. Cell cycle (Georgetown, Tex.). 2012;11(2):236-48.
Greer EL, Brunet A. Different dietary restriction regimens extend lifespan by both independent and overlapping genetic pathways in C. elegans. Aging cell. 2009;8(2):113-27.
Thomson AW, Turnquist HR, Raimondi G. Immunoregulatory functions of mTOR inhibition. Nature reviews. Immunology. 2009;9(5):324-37. | {
"domain": "biology.stackexchange",
"id": 333,
"tags": "senescence, pharmacology"
} |
What does volume parameter for a real gas equation mean for a fluid flowing in a pipe? | Question: I want to understand what the volume means in a $(p,v,T)=0 \ $, for a system of real gases flowing in a pipe. Will it be that since the fluid is flowing, $V$, will be the local volume in a given region of the pipe?
Actually, I was expecting volumetric rate to be present in the EOS (Equation of State) (also here) of real gases for flowing fluid, not volume, since the pipe is open ended it has no definite volume for the fluid.
Answer: Lowercase $v$ in this situation is usually either molar volume ($V/n$) or specific volume ($V/M = 1/\rho$). Because it's a ratio of two extensive quantities in either case $v$ is an intensive quantity that can vary with position, like pressure $p$ and temperature $T$.
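As a quick numeric illustration in the ideal-gas limit (the numbers below, for air at roughly sea-level conditions with a mean molar mass of 0.029 kg/mol, are assumptions chosen for the example, not values from the question):

```python
# Molar volume, specific volume, and density in the ideal-gas limit.
R = 8.314     # J/(mol K), universal gas constant
p = 101325.0  # Pa, assumed sea-level pressure
T = 300.0     # K, assumed temperature
M = 0.029     # kg/mol, approximate mean molar mass of air (assumption)

v_molar = R * T / p       # m^3/mol  (P v = R T)
v_specific = v_molar / M  # m^3/kg   (P v = R T / M, i.e. v = 1/rho)
rho = 1.0 / v_specific    # kg/m^3

print(f"molar volume    {v_molar:.4f} m^3/mol")
print(f"specific volume {v_specific:.3f} m^3/kg")
print(f"density         {rho:.3f} kg/m^3")
```

Both $v$'s vary from point to point in the pipe, which is exactly why they, and not a total volume, appear in the equation of state for a flowing fluid.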
So, for example, the usual $PV=nRT$ becomes $Pv = RT$ for molar volume, and $Pv = RT/M$, with $M$ the (mean) molar mass, when $v$ is the specific volume. | {
"domain": "physics.stackexchange",
"id": 91238,
"tags": "thermodynamics, fluid-dynamics, statistical-mechanics"
} |
Transition smach to same final state | Question:
Hi all,
I want to execute a self-transition, in which the initial state is the same as the final state:
STATE_A -----> STATE_A
However, when the smach.execute() is called, it continuously executes the transition in an infinite loop.
My main condition is that I need to do a self-transition.
Please suggest any suitable techniques to do so.
Thanks and Cheers !!
Originally posted by amarbanerjee23 on ROS Answers with karma: 5 on 2017-09-07
Post score: 0
Original comments
Comment by knxa on 2017-09-10:
Not sure what you are trying to do. If you at the end enter STATE_A again with the same input data as the first time, wouldn't you expect the state machine execution to loop forever?
Comment by amarbanerjee23 on 2017-09-12:
Hi @knxa. The problem I am facing need to have a self transition from initial state A to final state A. But behaviour of initial state A is different to final state A. I wish to use the concept of entry and exit actions to create behaviour for transitions and not states.
Hope I was able to explain.
Comment by gvdhoorn on 2017-09-12:
I believe I agree with @knxa: if the behaviour is different, why are you trying to reuse the state?
Comment by amarbanerjee23 on 2017-09-12:
Hi @gvdhoorn, I see what you are saying, but I want to reuse the state but with a different action behaviour. E.g. A->B is a transition T1 having some actions, while A->A is another transition T2 with a different action. I want to associate the behaviour with the transition not the state.
Answer:
A state represents a task being done. If the task represented by A should be performed again, it makes sense to me to reenter A. And it makes sense that the behaviour of A might depend on its input.
But a state is not aware of how the state was entered, whether the transition came from B or from itself or something else.
You will have to provide this difference as userdata if you need it.
Consider if A really represents the same task both in the beginning and the end.
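To make the userdata suggestion concrete, here is a minimal pure-Python sketch of the idea (illustrative only — this is not the actual SMACH API, and all names are made up): the executor threads a userdata dict through each execution of A, so A can behave differently on re-entry and the self-loop terminates.

```python
# Sketch: a state's outcome depends on userdata, so a self-transition A -> A
# does not loop forever. (Not real SMACH; names are illustrative.)

class StateA:
    def execute(self, userdata):
        # Count how many times A has been entered; behave differently on re-entry.
        userdata["visits_a"] = userdata.get("visits_a", 0) + 1
        return "repeat" if userdata["visits_a"] < 2 else "done"

def run(machine, start, userdata, max_steps=10):
    """Tiny executor: follow outcome -> next-state transitions until DONE."""
    state, trace = start, []
    for _ in range(max_steps):
        trace.append(state)
        outcome = machine[state]["obj"].execute(userdata)
        nxt = machine[state][outcome]
        if nxt == "DONE":
            return trace
        state = nxt
    raise RuntimeError("no termination: unconditional self-loop?")

machine = {"A": {"obj": StateA(), "repeat": "A", "done": "DONE"}}
print(run(machine, "A", {}))  # A runs twice, then the machine terminates
```

The key point is that the terminating condition lives in the data passed to the state, not in the transition table itself.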
Originally posted by knxa with karma: 811 on 2017-09-12
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by amarbanerjee23 on 2017-09-13:
Hi, @knxa,I tried doing that but still, the statemachine seems to run in an infinite loop.
However,when I try the self transition A->A in a Concurrence container,it runs only once as per requirement. Would that be a good option ? Else, could you please give an example for A->A transition in smach ?
Comment by knxa on 2017-09-13:
I am not yet too experienced with SMACH myself, so take my input with caution. Unfortunately I don't have time to construct an example for you. Good luck.
Comment by amarbanerjee23 on 2017-09-13:
No problem ! Thanks for your inputs !!
It seems that Concurrent containers do not get into an infinite loop for a self transition and userdata does not help in changing behaviour of a state at different conditions.
Thanks and Cheers !! | {
"domain": "robotics.stackexchange",
"id": 28783,
"tags": "smach"
} |
What is a Seyfert galaxy? | Question: I know Seyfert galaxies are types of active galaxies, but I do not understand what separates them from other types of active galaxies. Are they brighter/not as bright as other types of active galaxies? Do they emit in different wavelengths relative to other active galaxies? I've read that 10% of all galaxies are Seyfert galaxies, so it seems to me that understanding what these galaxies are is very important to understanding galaxies overall.
Answer: Seyfert galaxies differ from other active galaxies (most notably quasars) in that their active nuclei are of lower luminosity relative to the rest of the galaxy. Quasars have nuclei that easily outshine the rest of their host galaxy. Seyfert galaxies, on the other hand, host active nuclei that do not outshine the rest of the galaxy to the same degree.
Interestingly enough, in his original analysis, Seyfert focused on emission lines, noting that in this class of galaxies, there are strong high-ionization emission lines present in certain parts of the spectrum. This alternate definition is also used today. LINERS (Low Ionization Nuclear Emission line RegionS) are very similar to Seyfert galaxies if this definition is used; they are differentiated because LINERS also contain low-ionization emission lines.
References:
Seyfert (1943)
Popping (2008)
A Knowledgebase for Extragalactic Astronomy and Cosmology: Seyfert Galaxies | {
"domain": "astronomy.stackexchange",
"id": 1065,
"tags": "galaxy, active-galaxy, seyfert-galaxy"
} |
Lots of DOM selectors and lots of click events | Question: My problem is that when I try to make an app with a lot of buttons, I end up with a ton of global variables, a ton of DOM selectors, and an equal number of click handlers that, while similar, I don't understand how to merge into a single function; surely there has to be a much better way. I'll post 2 examples. I achieve the result I want, but my code is not pretty, at all:
http://codepen.io/Hyde87/pen/egOzLg?editors=1010
let count = 0;
const output = document.getElementById("output");
const gameResult = document.getElementById("gameResult");
const numbers = document.querySelectorAll(".each-number");
const numArray = Array.from(numbers);
const binaries = document.querySelectorAll(".binary-number");
const randomizer = document.getElementById("randomizer");
const oneHundredTwentyEight = document.getElementById("128");
const sixtyFour = document.getElementById("64");
const thirtyTwo = document.getElementById("32");
const sixteen = document.getElementById("16");
const eight = document.getElementById("8");
const four = document.getElementById("4");
const two = document.getElementById("2");
const one = document.getElementById("1");
oneHundredTwentyEight.addEventListener("click", function() {
document.getElementById("binary-128").textContent = "1";
addMyValue(128);
})
sixtyFour.addEventListener("click", function() {
document.getElementById("binary-64").textContent = "1";
addMyValue(64);
})
thirtyTwo.addEventListener("click", function() {
document.getElementById("binary-32").textContent = "1";
addMyValue(32);
})
sixteen.addEventListener("click", function() {
document.getElementById("binary-16").textContent = "1";
addMyValue(16);
})
eight.addEventListener("click", function() {
document.getElementById("binary-8").textContent = "1";
addMyValue(8);
})
four.addEventListener("click", function() {
document.getElementById("binary-4").textContent = "1";
addMyValue(4);
})
two.addEventListener("click", function() {
document.getElementById("binary-2").textContent = "1";
addMyValue(2);
})
one.addEventListener("click", function() {
document.getElementById("binary-1").textContent = "1";
addMyValue(1);
})
for (let i = 0; i < numArray.length; i++) {
numArray[i].addEventListener("click", function() {
this.classList.add("light");
})
}
function getRandom() {
return Math.floor(Math.random() * (128 - 1 + 1)) + 1;
}
randomizer.addEventListener("click", () => {
for (let i = 0; i < numArray.length; i++) {
numArray[i].classList.remove("light");
}
for (let i = 0; i < binaries.length; i++) {
binaries[i].textContent = "0";
}
gameResult.textContent = "";
count = 0;
output.textContent = getRandom();
})
const addMyValue = (num) => {
count += num;
console.log(parseInt(output.textContent));
if (count > parseInt(output.textContent)) {
gameResult.textContent = "Wrong value, you went over it."
count = 0;
output.textContent = "";
} else if (count === parseInt(output.textContent)) {
gameResult.textContent = "You got it right!";
output.textContent = "";
}
}
Another example of this:
http://codepen.io/Hyde87/pen/ObgadP
var outputArr = [];
var firstValue;
var secondValue;
var resetValues;
var totalNumber = document.getElementById("totalNumber");
var buttons = document.getElementsByClassName("col-xs-3");
var one = document.getElementById("one");
var two = document.getElementById("two");
var three = document.getElementById("three");
var four = document.getElementById("four");
var five = document.getElementById("five");
var six = document.getElementById("six");
var seven = document.getElementById("seven");
var eight = document.getElementById("eight");
var nine = document.getElementById("nine");
var zero = document.getElementById("zero");
var divide = document.getElementById("divide");
var multiply = document.getElementById("multiply");
var subtract = document.getElementById("subtract");
var comma = document.getElementById("comma");
var add = document.getElementById("add");
var equals = document.getElementById("equals");
var C = document.getElementById("C");
var back = document.getElementById("back");
/**************************************************************
EVENTLISTENERS
***********************************************************/
zero.addEventListener("click", function(){
getValue(0);
});
one.addEventListener("click", function(){
getValue(1);
});
two.addEventListener("click", function(){
getValue(2);
})
three.addEventListener("click", function(){
getValue(3);
})
four.addEventListener("click", function(){
getValue(4);
})
five.addEventListener("click", function(){
getValue(5);
})
six.addEventListener("click", function(){
getValue(6);
})
seven.addEventListener("click", function(){
getValue(7);
})
eight.addEventListener("click", function(){
getValue(8);
})
nine.addEventListener("click", function(){
getValue(9);
})
comma.addEventListener("click", function(){
getValue(".");
})
/*****************************************************
OPERANDS AND SPECIAL KEYS
****************************************************/
add.addEventListener("click", function(){
operation("+");
})
multiply.addEventListener("click", function(){
operation("*");
})
subtract.addEventListener("click", function(){
operation("-");
})
divide.addEventListener("click", function(){
operation("/");
})
equals.addEventListener("click", function(){
equalResult();
})
C.addEventListener("click", function(){
clear();
})
back.addEventListener("click", function(){
backs();
})
/* Function getValue() pushes the value of each click into the outputArr and displays in the totalNumber(which is the calculator display) the chain of numbers pressed*/
function getValue(value){
outputArr.push(value);
totalNumber.innerHTML += value;
}
/*The operation function is triggered by pressing +, -, *, /, it creates a value variable that gets the numbers inside the outputArr and joins them into a string (removing then the commas and creating a single value), we then empty the outputArr, we display the operand sign in the display and store the value in the firstValue global variable.*/
function operation(operand){
var value = outputArr.join("");
outputArr = [];
totalNumber.innerHTML = operand;
return firstValue = Number(value)
}
/* Function clear (C key) simply resets everything */
function clear (){
totalNumber.innerHTML = " ";
outputArr = [];
return firstValue = 0;
}
/* The back function pops the last value we added and displays the outputArr as a joined string */
function backs (){
outputArr.pop();
totalNumber.innerHTML = outputArr.join("");
}
/* The equal result function assigns the value of the outputArr into the secondValue var, it then empties the outputArr and then it turns the string stored in secondValue into a number. Depending on the operand that is present in the display it performs one of the if/else possibilities. After that, the result in the display is stored in the outputArr as a number, also in the firstValue global var we store whatever number is in the display. Basically the result of firstValue and secondValue is stored as a firstValue again, so we re-use it. */
function equalResult(){
var secondValue = outputArr.join("");
outputArr = [];
secondValue = Number(secondValue);
if (totalNumber.innerHTML.indexOf("+") > -1) {
totalNumber.innerHTML = firstValue + secondValue;
} else if (totalNumber.innerHTML.indexOf("*") > -1){
totalNumber.innerHTML = firstValue * secondValue;
} else if (totalNumber.innerHTML.indexOf("/") > -1){
totalNumber.innerHTML = firstValue / secondValue;
} else if (totalNumber.innerHTML.indexOf("-") > -1){
totalNumber.innerHTML = firstValue - secondValue;
}
outputArr.push(Number(totalNumber.innerHTML))
console.log(outputArr)
return firstValue = totalNumber.innerHTML;
}
Answer: Yes, you can do it using HTML data attributes. The div elements don't even need classes. E.g. HTML:
<div class="binaries">
<p>
<div data-id="128">0</div>
<div data-id="64">0</div>
<div data-id="32">0</div>
<div data-id="16">0</div>
<div data-id="8">0</div>
<div data-id="4">0</div>
<div data-id="2">0</div>
<div data-id="1">0</div>
</p>
</div>
<div class="numbers">
<p>
<div data-id="128">128</div>
<div data-id="64">64</div>
<div data-id="32">32</div>
<div data-id="16">16</div>
<div data-id="8">8</div>
<div data-id="4">4</div>
<div data-id="2">2</div>
<div data-id="1">1</div>
</p>
</div>
then the binaries and numbers variables will look like this:
const binaries = document.querySelectorAll('.binaries div');
const numbers = document.querySelectorAll('.numbers div');
then all you have to do is add a click handler to each of the numbers elements → read the dataset inside of it → find the binary element with that data-id and change its textContent. Something like that:
const binaries = document.querySelectorAll('.binaries div');
const decNumbers = document.querySelectorAll('.numbers div');
const numArray = Array.from(decNumbers);
for (let i = 0; i < numArray.length; i++) {
numArray[i].addEventListener('click', function() {
var id = this.dataset.id;
document.querySelector('.binaries div[data-id="' + id +'"]').textContent = "1";
addMyValue(Number(id)); // dataset values are strings, so convert before summing
});
}
Also, you can dynamically generate all of your divs, e.g.
let binariesWrap = document.querySelector('.binaries p');
for (let i = 0; i < 8; i++) {
var pow = Math.pow(2,i);
var div = document.createElement('div');
div.dataset.id = pow;
div.textContent = "0";
binariesWrap.appendChild(div);
console.log(div);
}
.binaries div {
display: inline-block;
width: 50px;
height: 50px;
margin: 1.1%;
font-size: 50px;
line-height: 50px;
border-radius: 100%;
margin-bottom: -20px;
}
<div class="binaries">
<p></p>
</div>
Do the same with the numbers elements.
Full code with a dynamic generating:
let count = 0;
const output = document.getElementById("output");
const randomizer = document.getElementById("randomizer");
const gameResult = document.getElementById("gameResult");
let binaries, decNumbers, numArray;
initData();
function initData() {
let binariesWrap = document.querySelector('.binaries p');
let numbersWrap = document.querySelector('.numbers p');
for (let i = 0; i < 8; i++) {
let pow = Math.pow(2, i);
let div = document.createElement('div');
let cln = div.cloneNode(true);
div.dataset.id = cln.dataset.id = pow;
div.textContent = "0";
cln.textContent = pow;
binariesWrap.appendChild(div);
numbersWrap.appendChild(cln);
}
binaries = document.querySelectorAll('.binaries div');
decNumbers = document.querySelectorAll('.numbers div');
numArray = Array.from(decNumbers);
}
for (let i = 0; i < numArray.length; i++) {
numArray[i].addEventListener('click', function() {
var id = this.dataset.id;
document.querySelector('.binaries div[data-id="' + id + '"]').textContent = "1";
addMyValue(Number(id)); // convert: dataset values are strings
});
}
for (let i = 0; i < numArray.length; i++) {
numArray[i].addEventListener("click", function() {
this.classList.add("light");
})
}
function getRandom() {
return Math.floor(Math.random() * (128 - 1 + 1)) + 1;
}
randomizer.addEventListener("click", () => {
for (let i = 0; i < numArray.length; i++) {
numArray[i].classList.remove("light");
}
for (let i = 0; i < binaries.length; i++) {
binaries[i].textContent = "0";
}
gameResult.textContent = "";
count = 0;
output.textContent = getRandom();
})
const addMyValue = (num) => {
count += num;
if (count > parseInt(output.textContent)) {
gameResult.textContent = "Wrong value, you went over it."
count = 0;
output.textContent = "";
} else if (count === parseInt(output.textContent)) {
gameResult.textContent = "You got it right!";
output.textContent = "";
}
}
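As an aside, the same data-driven idea applies to the calculator from the question: the if/else chain in equalResult can be collapsed into a dispatch table keyed on the operator symbol. A sketch (the names ops and applyOp are illustrative, not from the original code):

```javascript
// Dispatch table: operator symbol -> function. Adding an operator means adding
// one entry here instead of another else-if branch.
const ops = {
  "+": (a, b) => a + b,
  "-": (a, b) => a - b,
  "*": (a, b) => a * b,
  "/": (a, b) => a / b,
};

function applyOp(symbol, a, b) {
  const fn = ops[symbol];
  if (!fn) throw new Error("unknown operator: " + symbol);
  return fn(a, b);
}

console.log(applyOp("+", 2, 3)); // 5
console.log(applyOp("/", 9, 3)); // 3
```

equalResult could then become a single call like applyOp(currentOperator, firstValue, secondValue), with the operator remembered in a variable when its button is pressed instead of being re-parsed out of the display.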
body {
text-align: center;
width: 100vw;
height: 100vh;
min-width: 530px;
background: #D5D5D5;
}
.explanation {
width: 70%;
margin: 0 auto;
margin-bottom: 40px;
}
.numbers div {
display: inline-block;
width: 50px;
height: 50px;
margin: 1%;
border: 1px solid black;
line-height: 50px;
border-radius: 100%;
background: white;
}
.binaries div {
display: inline-block;
width: 50px;
height: 50px;
margin: 1.1%;
font-size: 50px;
line-height: 50px;
border-radius: 100%;
margin-bottom: -20px;
}
.numbers div.light {
transition: 500ms;
background: #6D7993;
color: white;
}
#randomizer {
padding: 5px 20px;
}
#output {
font-size: 30px;
margin-top: 10px;
}
<body>
<div class="wrap">
<div class="explanation">
<h3> The Binary Code Game </h3>
<h4> A Javascript representation of the game as seen in Harvard's CS50 course. <br><br> First get a number (click the get a number button), then click the circles in order to sum the values and match the number you got, without going over it. Once you get it right, what you actually see is a binary representation of that number. Clicking the same value twice negates the purpose of the game. </h4>
</div>
<div class="binaries">
<p></p>
</div>
<div class="numbers">
<p></p>
</div>
<button id="randomizer">Get a Number</button>
<p id="output"></p>
<h3 id="gameResult"> </h3>
</div>
</body> | {
"domain": "codereview.stackexchange",
"id": 23805,
"tags": "javascript"
} |
Set of all Languages | Question: Sorry if the Question sounds a little trivial.
Let A* be the set of all languages over A = {a,b}. Then A* can be written as {a∪b}*, which is a regular expression. So this set of all languages (actually, the set of all strings) is regular. Am I thinking in the right direction?
Answer: You're almost there, but you need the right notation. Let's start with the alphabet $\Sigma = \{a,b\}$. Then the regular expression $(a+b)$ represents a set of two strings, namely $\{a,b\}$. The expression $(a+b)^2$ represents the set of strings $\{aa, ab, ba, bb\}$, the expression $(a+b)^3$ the set of strings of length three, and so on. The expression $(a+b)^*$, whose language is $\{a,b\}^*$, also denoted $\Sigma^*$, represents the union of all these sets. Each of these sets is called a language, and so $\Sigma^*$ is also a language. So $\Sigma^*$ is one language, not the set of all languages, over the alphabet $\Sigma$, and it contains all the strings you can form out of that alphabet.
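To make the distinction concrete, here is a small sketch (Python, purely illustrative) that enumerates $\Sigma^*$ level by level; it enumerates strings, each an element of the single language $\Sigma^*$:

```python
# Enumerate Sigma^* by length for Sigma = {a, b}: one language containing
# every string, while P(Sigma^*) — the set of all languages — is uncountable.
from itertools import product

def strings_up_to(alphabet, n):
    """All strings over `alphabet` of length at most n, shortest first."""
    out = [""]  # the empty string has length 0
    for length in range(1, n + 1):
        out.extend("".join(t) for t in product(alphabet, repeat=length))
    return out

print(strings_up_to("ab", 2))  # ['', 'a', 'b', 'aa', 'ab', 'ba', 'bb']
# there are 2^k strings of length k, so 2^0 + ... + 2^n = 2^(n+1) - 1 in total
```

Any particular language is some subset of this (infinite) enumeration, which is why the set of all languages is the power set of $\Sigma^*$.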
The set of all languages you can form using that alphabet is the power set of $\Sigma^*$, denoted by $\mathcal{P}(\Sigma^*) = \{L | L \subseteq \Sigma^*\}$. That is, if $L$ contains only strings from your alphabet, then $L$ is a language over $\Sigma$, so $L$ is an element of the set $\mathcal{P}(\Sigma^*)$. Now, some of these languages are regular, and some are not. Certainly the language $\Sigma^*$ is a regular language, and $\emptyset$ is a regular language (you can draw their finite automata and see that they are the two simplest automata). Other languages, such as $\{a^nb^m|n+m \text{ is prime}\}$ are not regular. I hope that clears some things up. | {
"domain": "cs.stackexchange",
"id": 5401,
"tags": "regular-languages, regular-expressions"
} |
How to aggregate data where instances occur over different time intervals | Question: I am working on a problem in which I have several instances whose predictors have activity over very different time periods (from under 3 months to well over 20 months). Originally I attempted to use knowledge I have about this problem (it is an opportunity-to-sale conversion model) and learned that the average time for a deal to close is about 9 months, so I broke my predictors up into three-month intervals. However, I took another look at the lengths of these deals and see that many instances have durations that are not even close to 9 months, so this idea does not make sense.
The only idea I have gotten is just creating a duration column, where I subtract the start and stop dates, and then doing the summation for each predictor. However, I feel that the instances might get incorrectly labeled, because some might have an overwhelmingly higher amount of activity than others simply due to the duration of the deal. Has anyone else encountered such a problem? It may be common, but a quick glance at Google/Reddit did not come up with anything (I could be phrasing the problem wrong).
Answer: I had the same problem. You can use aggregation functions, for example max, min, avg, count, std, or a derived statistic such as the slope of a fitted line. In cases where instances span different periods, you can divide each value by the number of days in its period, turning totals into comparable rates. | {
"domain": "datascience.stackexchange",
"id": 3440,
"tags": "time-series, feature-engineering"
} |
ROS connectivity and usernames | Question:
I think I've just discovered a really dumb mistake on my part and I want confirmation as to whether this matters. On two machines (one of which is the ROS_MASTER, c1), my username is ibrahim, but on some other machines that are connected to the ROS_MASTER through openvpn on the PR2 basestation, my username is ibrahima. On the machines with username ibrahima I can read topics fine, but I can't access services. Does ROS assume that you can ssh with the same username into other machines? I have set up an ssh config file (~/.ssh/config) such that I can ssh without having to type the proper username but I still can't get services to work. For example, if i run rosservice call static_map where static_map is running on the basestation, it says it's Unable to communicate with service /static_map. I've set ROS_IP on the other machines to be the VPN IP address and not the WAN IP, which fixed things like running rosnode info on a node running on a VPN-connected machine but this doesn't seem to help with services.
Is it just the username being an issue, or are there more steps needed to set up VPN-connected computers besides establishing the connection and setting the proper ROS_MASTER_URI, ROS_IP, and any necessary entries in /etc/hosts?
Originally posted by Ibrahim on ROS Answers with karma: 307 on 2012-01-11
Post score: 2
Answer:
ROS communications do not require a username or authentication. See the technical overview.
The only thing which requires SSH username and keys is roslaunch. All other communications simply require port numbers.
This looks like an issue with routing or name lookup affecting services. If you can describe how to reproduce it, we can take a closer look.
Originally posted by tfoote with karma: 58457 on 2012-03-05
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by ahendrix on 2012-03-05:
Agreed; this is probably a problem with name resolution on your network. Running 'rosservice info ' should show you which hostname clients are using when creating service clients. You should make sure that this name resolves properly on all machines.
Comment by Ibrahim on 2012-03-06:
It was probably a routing issue, I actually am not sure at the moment if the issue still exists because I'm not in the lab at the moment but I think somehow I figured it out or it magically fixed itself. | {
"domain": "robotics.stackexchange",
"id": 7863,
"tags": "rosservice"
} |
When a loop A induces a current in an adjacent loop B does the change in current in the loop B affect the current in loop A? | Question: Let's say we take a Loop A connected to an AC voltage source and place it adjacent to a closed Loop B.
We know that the change in the current in Loop A leads to a change in the magnetic flux
through Loop B ($\Delta\phi_{\rm BA}$), and hence a current is induced in Loop B.
If the induced current in Loop B changes, won't it lead to a change in the magnetic flux through Loop A ($\Delta\phi_{\rm AB}$)?
If yes, then won't the $\Delta\phi_{\rm AB}$ induce a current in the loop A in such a fashion that it increases the net current in the loop A at that instant to oppose $\Delta\phi_{\rm AB}$?
Now it once again loops back to step 1. This seems to be an infinite loop of the above three steps, producing 'infinite' current in both loops.
Can someone clarify where my understanding goes wrong and what actually happens in the process? Any answer would be really helpful.
Answer: Technically, a varying induced flux generates an electromotive force (emf). Relating it to a varying current depends on the resistance, or more generally the impedance, of the loop (it could be wired to a capacitor, etc.). Once the impedances are specified, you have a dynamical system for the respective currents $i_A,i_B$, which you can solve at varying degrees of complexity.
For example, in your case, I think you are interested in the following model: loop A (resp. B) has resistance $R_A$ (resp. $R_B$), loop A is driven by the AC voltage $u = u_0e^{i\omega t}$, and the two loops have mutual inductance $M$. In this case, you get the system:
$$
R_Ai_A+M\frac{di_B}{dt}=u
$$
$$
R_Bi_B+M\frac{di_A}{dt}=0
$$
Since the model is simple, you can easily get a closed form by setting $\tau = M/\sqrt{R_AR_B}$:
$$
i_A(t) = i_1\cosh(t/\tau)-i_2\sinh(t/\tau)+u\frac{R_B}{R_AR_B+\omega^2M^2}
$$
$$
i_B(t) = \sqrt{\frac{R_A}{R_B}}\left(-i_1\sinh(t/\tau)+i_2\cosh(t/\tau)\right)+u\frac{-i\omega M}{R_AR_B+\omega^2M^2}
$$
with $i_1,i_2$ fixed by the initial conditions. Actually, for typical initial values, the solution diverges from the stationary solution, as you suspected. These pathologies arise because the model is not realistic: it neglects the self-inductances of the loops, which help the system converge to a stationary solution. Writing the respective self-inductances $L_A,L_B$, the equations become:
$$
RI+L\frac{dI}{dt} = U
$$
where I wrote the system vectorially: $I = (i_A,i_B)$, $U=(u,0)$, $R = \begin{pmatrix} R_A & 0 \\ 0 & R_B\end{pmatrix}$ (resistance matrix) and $L = \begin{pmatrix} L_A & M \\ M & L_B\end{pmatrix}$ (inductance matrix). While before $-L^{-1}R$ had eigenvalues $\pm 1/\tau$, whose positive member caused the exponential instability, the new matrix has only negative eigenvalues, which you can check with the determinant (positive, so both eigenvalues have the same sign) and the trace (negative, so both are negative). Physically, the self-inductance saves us thanks to Lenz's law, which acts as a brake on the amplification effect you noticed.
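The eigenvalue argument above can be checked numerically. The component values below are arbitrary assumptions, chosen only so that $L_AL_B>M^2$, as required for a physical inductance matrix:

```python
import numpy as np

# Hypothetical component values (henries and ohms), with L_A*L_B > M**2.
L_A, L_B, M = 2.0, 3.0, 1.5
R_A, R_B = 1.0, 2.0

R = np.diag([R_A, R_B])

# Without self-inductance the inductance matrix is just the coupling [[0, M], [M, 0]]:
L0 = np.array([[0.0, M], [M, 0.0]])
eig0 = np.linalg.eigvals(-np.linalg.solve(L0, R))  # eigenvalues of -L0^{-1} R

# With self-inductance the full inductance matrix is used:
L = np.array([[L_A, M], [M, L_B]])
eig = np.linalg.eigvals(-np.linalg.solve(L, R))    # eigenvalues of -L^{-1} R

print(sorted(eig0.real))  # one positive eigenvalue: exponential instability
print(sorted(eig.real))   # both negative: decay towards the stationary solution
```

The first case reproduces the $\pm 1/\tau = \pm\sqrt{R_AR_B}/M$ pair; the second has only negative eigenvalues, confirming the trace/determinant argument.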
Hope this helps and please tell me if you find some mistakes. | {
"domain": "physics.stackexchange",
"id": 87750,
"tags": "electromagnetism, electric-circuits, electric-current"
} |
Toy ball check valve functionality and details | Question: In the making of an inflatable toy ball (think of a basketball), a check valve is used to assist the user in the inflation of the ball by preventing backflow of inserted air. Is this check valve made of any metalic parts (such as conventional check valves) or is it all plastic? Also, what type of valve is it? Is there a maximum pressure this type of check valve can sustain before becoming damaged?
Answer: It depends on the design - some just use a soft plastic tube that collapses in on itself to seal. That is why some inflating kits include long needle adaptors, among others, some tapered and some parallel.
The maximum pressure a check valve could withstand is, IMHO, probably far greater than what the ball material itself can take. So if you over-pressurise the ball, it bursts, usually from the weakest point - from what I have seen, near the seam - rather than the valve failing first. | {
"domain": "engineering.stackexchange",
"id": 1948,
"tags": "pressure, valves, pressure-vessel"
} |
Project Euler 39: Integer right triangles | Question: I just finished Project Euler 39:
If p is the perimeter of a right angle triangle with integral length sides, {a, b, c} … For which value of p ≤ 1000, is the number of solutions maximised?
I'm pretty satisfied with my solution. I derive the formula for the hypotenuse as follows:
$$\begin{align}
a + b + c &= p \tag{Def. perimeter}\\
b + c &= p - a \tag{2}\\
\\
a^2 + b^2 &= c^2 \tag{Pythagorean}\\
a^2 &= c^2 - b^2 \\
a^2 &= (c - b)(c + b) \\
a^2 &= (c - b)(p - a) \tag{from (2)} \\
\frac{a^2}{p - a} &= c - b \tag{7} \\
\\
(c - b) + (c + b) &= \frac{a^2}{p - a} + (p - a) \tag{from (7) and (2)}\\
2c &= \frac{a^2}{p - a} + \frac{(p - a)^2}{p - a} \\
2c &= \frac{a^2 + (p - a)^2}{p - a} \\
c &= \frac{a^2 + (p - a)^2}{2\ (p - a)}
\end{align}$$
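As a quick sanity check of the final formula, plugging the $(3,4,5)$ triangle with $p = 12$ and $a = 3$ into it should recover $c = 5$:

```python
# Verify c = (a**2 + (p - a)**2) / (2 * (p - a)) on the (3, 4, 5) triangle.
p, a = 12, 3
c = (a**2 + (p - a)**2) / (2 * (p - a))  # (9 + 81) / 18
print(c)  # 5.0
```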
Any optimization would be appreciated.
from timeit import default_timer as timer

start = timer()

def find_pythagorean_triples(perimeter):
    count = 0
    for a in range(3, perimeter / 2 - 1):  # triangle inequality
        c = (a**2 + (perimeter - a)**2) / (2*(perimeter - a))  # derived formula
        b = (c**2 - a**2)**0.5
        if b.is_integer() and a + b + c == perimeter:
            count += 1
    return count

def find_most_solutions_upto(limit):
    data = {"most": 0, "num": 0}
    for p in range(limit/2, limit + 1):  # scale upwards
        if find_pythagorean_triples(p) > data['most']:
            data['most'] = find_pythagorean_triples(p)
            data['num'] = p
    return data['num']

ans = find_most_solutions_upto(1000)
elapsed_time = (timer() - start) * 1000  # s --> ms
print "Found %d in %r ms." % (ans, elapsed_time)
Answer:
You are using a dictionary data = {"most": 0, "num": 0} where you could just as well use local variables. The locals would be more readable and faster.
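For illustration, a sketch of that refactoring (written for Python 3, with hypothetical names best_count and best_perimeter; find_pythagorean_triples is repeated here only to keep the sketch self-contained):

```python
def find_pythagorean_triples(perimeter):
    # Same counting logic as in the question, ported to Python 3.
    count = 0
    for a in range(3, perimeter // 2 - 1):
        c = (a**2 + (perimeter - a)**2) / (2 * (perimeter - a))
        b = (c**2 - a**2)**0.5
        if b.is_integer() and a + b + c == perimeter:
            count += 1
    return count

def find_most_solutions_upto(limit):
    # Plain locals instead of a dict; also calls the helper once per p.
    best_count = 0
    best_perimeter = 0
    for p in range(limit // 2, limit + 1):
        count = find_pythagorean_triples(p)
        if count > best_count:
            best_count = count
            best_perimeter = p
    return best_perimeter

print(find_most_solutions_upto(1000))  # 840
```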
The built-in max function is convenient when looking for a maximum. You could write find_most_solutions_upto like this:
def find_most_solutions_upto(limit):
    return max(xrange(limit/2, limit + 1), key=find_pythagorean_triples) | {
"domain": "codereview.stackexchange",
"id": 7505,
"tags": "python, optimization, programming-challenge, python-2.x"
} |
List of possible image features for content based image retrieval | Question: I am trying to find a list of possible image features like color, oriented edges and so on for measuring their usability in case of finding same/similar objects in images. Does anyone know such a list or at least some features?
Answer: The field is vast, so I doubt you can have a fully exhaustive list here.
However, MPEG-7 is one of the primary efforts at standardizing this area, so what is included there is not universal - but at least the most fundamental.
Here are some key feature sets identified in MPEG-7 (I can really speak only about the Visual Descriptors, not the others; see this for the full scope).
There are 4 categories of Visual Descriptors:
1. Color Descriptors, which include:
Dominant Color,
Color Layout (essentially the primary color on a block-by-block basis),
Scalable Color (essentially a color histogram),
Color Structure (essentially a local color histogram),
and Color Spaces to make things interoperable.
2. Texture Descriptors (see also this), which include:
Texture Browsing Descriptor - which defines granularity/coarseness, regularity, and direction.
Homogeneous Texture Descriptor - which is based on Gabor filter bank.
and
Edge Histogram
3. Shape Descriptors, which include:
Region-based descriptors - scalar attributes of the shape under consideration, such as area, eccentricity, etc.,
Contour-based descriptors, which capture the actual characteristic shape features, and
3D descriptors
4. Motion Descriptors for Video
Camera Motion (3-D camera motion parameters)
Motion Trajectory (of objects in the scene) [e.g. extracted by tracking algorithms]
Parametric Motion (e.g. motion vectors, which allows description of motion of scene. But it can be more complex models on various objects).
Activity, which is more of a semantic descriptor.
MPEG-7 doesn't define how these are extracted - it only defines what they mean and how to represent/store them. So research exists on how to extract and use them.
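For illustration, the simplest of these - a global color histogram, the idea behind Scalable Color - takes only a few lines. The bin count and the L1 distance here are arbitrary choices for the sketch, not anything mandated by the MPEG-7 standard:

```python
import numpy as np

def color_histogram(image, bins=8):
    """Joint RGB histogram: image is an H x W x 3 uint8 array."""
    # Quantize each channel into `bins` levels (0..bins-1).
    q = (image.astype(np.uint32) * bins) // 256
    # Combine the three channel indices into one joint bin index.
    idx = (q[..., 0] * bins + q[..., 1]) * bins + q[..., 2]
    hist = np.bincount(idx.ravel(), minlength=bins**3).astype(float)
    return hist / hist.sum()  # normalize so image size does not matter

def histogram_distance(h1, h2):
    # L1 distance between normalized histograms; 0 means identical color content.
    return float(np.abs(h1 - h2).sum())
```

Retrieval then amounts to ranking database images by histogram_distance to the query; the texture and shape descriptors above are extracted analogously, but from edge, filter-bank, or contour statistics.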
Here is another good paper that gives insight in this subject.
But yes, many of these features are rather basic, and maybe more research will create more sophisticated (and complex) feature sets. | {
"domain": "dsp.stackexchange",
"id": 333,
"tags": "image-processing, cbir"
} |
Why string theory? | Question: I am new to String Theory. I've read that String theory is an important theory because it is a good candidate for a unified theory of all forces. It is "better" than the Standard Model of particle physics because it included gravity.
So, is this the importance of string theory (to unify all forces)? Or are there other features that make it a good theory?
edit: I am not asking for a complete explanation of the theory; I'm just trying to understand its importance (conceptually, not mathematically) as a starting point from which to explore its details.
Answer: "Why string theory?", you ask. I can think of three main reasons, which will of course appeal to each of us differently. The order does not indicate which I consider most or least important.
Quantum gravity
A full theory of quantum gravity - that is, a theory that both includes the concepts of general relativity and those of quantum field theory - has proven elusive so far. For some reasons why, see e.g. the questions A list of inconveniences between quantum mechanics and (general) relativity? and the more technical What is a good mathematical description of the Non-renormalizability of gravity?. It should be noted that all this "non-renormalizability" is a perturbative statement and it may well be that quantum gravity is non-perturbatively renormalizable. This hope guides the asymptotic safety programme.
Nevertheless, already perturbative non-renormalizability motivates the search for a theoretical framework in which gravity can be treated in a renormalizable manner, at best perturbatively. String theory provides such a treatment: due to a similarity between the high-energy and the low-energy physics, the infinite divergences of general relativity do not appear in string theory - the UV divergences of quantum field theory simply never arise. See also Does the renormalization group apply to string theory?
Restricting the landscape of possible theories, "naturalness"
Contrary to what seems to be "well-known", string theory in fact restricts its possible models more powerfully than ordinary quantum field theory. The space of all viable quantum field theories is much larger than those that can be obtained as the low-energy QFT description of string theory, where the theories not coming from a string theory model are called the "swampland". See Vafa's The String Landscape and the Swampland [arXiv link].
Furthermore, there are many deep relations between many possible models of string theory, like the dualities which led Witten and others to conjecture a hidden underlying theory called M-theory. It is worth mentioning at this point that string theory itself is only defined in a perturbative manner, and no truly non-perturbative description is known. M-theory is supposed to provide such a description, and in particular show all the known string theory variants as arising from it in different limits. To many, this is a much more elegant description of physics than a quantum field theory, where, within rather loose limits, we seem to be able to just put in any fields we like. Nothing in quantum field theory singles out the structure of the Standard Model, but notably, gauge theories (loosely) like the Standard Model appear to be generated from string theoretic models with a certain "preference". It's hard to not get a gauge theory from string theory, and generating matter content is also possible without special pleading.
Mathematical importance
Regardless of what the status of string theory as a fundamental theory of physics is, it has proven both a rich source of motivation for mathematicians as well as providing other areas of physics with a toolbox leading to deep and new insights. Most prominent among those is probably the AdS/CFT correspondence, leading to applications of originally string theoretic methods in other fields such as condensed matter. Mirror symmetry plays a similar role for pure mathematics.
Furthermore, string theory's emphasis on geometry - most of the intricacies of the phenomenology involve looking at the exact properties of certain manifolds or more general "shapes" - means it is led to examine objects that have long been of independent interest to mathematicians working on differential or algebraic geometry and related fields. This has already led to a large bidirectional flow of ideas, where again Witten is one of the most prominent figures, switching rather freely between doing things of "pure" mathematical interest and investigating "physical" questions. | {
"domain": "physics.stackexchange",
"id": 93302,
"tags": "string-theory, quantum-gravity, theory-of-everything"
} |
Android APP user registration page implementation | Question: This is a follow-up question for Android APP User class implementation. I am attempting to build a user registering system and this post shows the user registration page implementation.
The experimental implementation
Project name: UserRegistrationAPP
birthday.xml Drawable file:
<vector xmlns:android="http://schemas.android.com/apk/res/android"
android:width="24dp"
android:height="24dp"
android:viewportWidth="24"
android:viewportHeight="24">
<path
android:pathData="M19.313,5.097h-3.375L15.938,4h-1.125v1.097h-2.25L12.563,4L11.437,4v1.097h-2.25L9.187,4L8.062,4v1.097L4.687,5.097C3.757,5.097 3,5.835 3,6.742v12.613C3,20.262 3.757,21 4.687,21h14.626c0.93,0 1.687,-0.738 1.687,-1.645L21,6.742c0,-0.907 -0.757,-1.645 -1.687,-1.645zM19.875,19.355c0,0.302 -0.253,0.548 -0.562,0.548L4.687,19.903c-0.31,0 -0.562,-0.246 -0.562,-0.548L4.125,6.742c0,-0.303 0.252,-0.548 0.562,-0.548h3.375L8.062,7.29h1.125L9.187,6.194h2.25L11.437,7.29h1.126L12.563,6.194h2.25L14.813,7.29h1.125L15.938,6.194h3.375c0.31,0 0.562,0.245 0.562,0.548v12.613z"
android:fillColor="#D8D8D8"
android:fillType="nonZero"/>
<path
android:pathData="M7,9h2.222v1.142L7,10.142L7,9zM10.889,9h2.222v1.142L10.89,10.142L10.89,9zM14.778,9L17,9v1.142h-2.222L14.778,9zM7,12.429h2.222L9.222,13.57L7,13.57L7,12.43zM10.889,12.429h2.222L13.111,13.57L10.89,13.57L10.89,12.43zM14.778,12.429L17,12.429L17,13.57h-2.222L14.778,12.43zM7,15.857h2.222L9.222,17L7,17v-1.143zM10.889,15.857h2.222L13.111,17L10.89,17v-1.143zM14.778,15.857L17,15.857L17,17h-2.222v-1.143z"
android:fillColor="#D8D8D8"
android:fillType="nonZero"/>
</vector>
email.xml Drawable file:
<vector xmlns:android="http://schemas.android.com/apk/res/android"
android:width="24dp"
android:height="24dp"
android:viewportWidth="24"
android:viewportHeight="24">
<path
android:pathData="M21.167,20.16L2.833,20.16C1.823,20.16 1,19.353 1,18.36L1,7.56C1,6.567 1.822,5.76 2.833,5.76h18.334C22.177,5.76 23,6.567 23,7.56v10.8c0,0.993 -0.822,1.8 -1.833,1.8zM2.833,6.66c-0.505,0 -0.916,0.404 -0.916,0.9v10.8c0,0.496 0.41,0.9 0.916,0.9h18.334c0.505,0 0.916,-0.404 0.916,-0.9L22.083,7.56c0,-0.496 -0.41,-0.9 -0.916,-0.9L2.833,6.66z"
android:fillColor="#D8D8D8"
android:fillType="nonZero"/>
<path
android:pathData="M12.5,16.32L4.216,11.363c-0.22,-0.131 -0.282,-0.4 -0.14,-0.603 0.141,-0.203 0.434,-0.26 0.653,-0.13L12.5,15.28l7.771,-4.65c0.22,-0.13 0.512,-0.073 0.653,0.13 0.142,0.202 0.08,0.472 -0.14,0.603L12.5,16.32z"
android:fillColor="#D8D8D8"
android:fillType="nonZero"/>
<path
android:pathData="M3.455,17.28c-0.147,0 -0.291,-0.075 -0.378,-0.214 -0.14,-0.22 -0.084,-0.518 0.126,-0.665l4.09,-2.88c0.21,-0.147 0.491,-0.088 0.63,0.133 0.14,0.22 0.084,0.518 -0.126,0.666l-4.09,2.88c-0.078,0.054 -0.165,0.08 -0.252,0.08zM20.545,17.28c-0.087,0 -0.174,-0.026 -0.252,-0.08l-4.09,-2.88c-0.21,-0.148 -0.266,-0.445 -0.126,-0.666 0.139,-0.22 0.42,-0.28 0.63,-0.133l4.09,2.88c0.21,0.147 0.266,0.445 0.126,0.665 -0.087,0.14 -0.231,0.214 -0.378,0.214z"
android:fillColor="#D8D8D8"
android:fillType="nonZero"/>
</vector>
id.xml Drawable file:
<vector xmlns:android="http://schemas.android.com/apk/res/android"
android:width="24dp"
android:height="24dp"
android:viewportWidth="24"
android:viewportHeight="24">
<path
android:pathData="M21.508,20L1.492,20C1.22,20 1,19.78 1,19.508L1,5.492C1,5.22 1.22,5 1.492,5h20.016c0.272,0 0.492,0.22 0.492,0.492v1.009h-0.937L21.063,5.938L1.937,5.938v13.126h19.126L21.063,8.001L22,8.001v11.507c0,0.272 -0.22,0.492 -0.492,0.492z"
android:fillColor="#D8D8D8"
android:fillType="nonZero"/>
<path
android:pathData="M7.5,13C6.12,13 5,11.88 5,10.501 5,9.123 6.12,8 7.5,8S10,9.12 10,10.499C10,11.877 8.88,13 7.5,13zM7.5,8.947c-0.855,0 -1.552,0.697 -1.552,1.552 0,0.855 0.697,1.551 1.552,1.551 0.855,0 1.552,-0.696 1.552,-1.551S8.355,8.947 7.5,8.947z"
android:fillColor="#D8D8D8"
android:fillType="nonZero"/>
<path
android:pathData="M10.848,17L4.476,17C4.213,17 4,16.785 4,16.521c0,-1.092 0.314,-2.188 0.864,-3.01 0.647,-0.964 1.577,-1.494 2.622,-1.494 0.258,0 0.468,0.212 0.468,0.471 0,0.26 -0.21,0.472 -0.468,0.472 -0.925,0 -1.513,0.587 -1.844,1.08 -0.37,0.554 -0.611,1.269 -0.682,2.017h5.403c-0.106,-1.05 -0.556,-2.031 -1.207,-2.585 -0.197,-0.168 -0.222,-0.465 -0.056,-0.665 0.166,-0.199 0.461,-0.224 0.66,-0.057 0.966,0.818 1.563,2.264 1.563,3.771 0,0.264 -0.213,0.479 -0.475,0.479zM13.003,9L19,9v0.943h-5.997L13.003,9zM13.003,12.005L19,12.005v0.943h-5.997v-0.943zM13.003,15.047h4.123v0.944h-4.123v-0.944z"
android:fillColor="#D8D8D8"
android:fillType="nonZero"/>
</vector>
name.xml Drawable file:
<vector xmlns:android="http://schemas.android.com/apk/res/android"
android:width="24dp"
android:height="24dp"
android:viewportWidth="24"
android:viewportHeight="24">
<path
android:pathData="M20.289,16.908c-0.452,-1.078 -1.103,-2.047 -1.928,-2.878 -0.826,-0.831 -1.789,-1.485 -2.86,-1.94 -0.308,-0.132 -0.623,-0.245 -0.943,-0.34 1.3,-0.847 2.165,-2.321 2.165,-3.995C16.723,5.133 14.603,3 11.998,3 9.392,3 7.273,5.133 7.273,7.755c0,1.674 0.864,3.148 2.164,3.995 -0.32,0.095 -0.634,0.208 -0.943,0.34 -1.07,0.455 -2.034,1.109 -2.86,1.94 -0.825,0.831 -1.475,1.8 -1.928,2.878C3.236,18.025 3,19.211 3,20.434c0,0.312 0.252,0.566 0.562,0.566 0.311,0 0.563,-0.254 0.563,-0.566 0,-2.117 0.819,-4.108 2.306,-5.605C7.918,13.333 9.896,12.51 12,12.51s4.081,0.824 5.569,2.32c1.487,1.497 2.306,3.488 2.306,5.605 0,0.312 0.252,0.566 0.563,0.566 0.31,0 0.562,-0.254 0.562,-0.566 -0.005,-1.223 -0.243,-2.41 -0.711,-3.526zM8.395,7.753c0,-1.997 1.616,-3.623 3.6,-3.623 1.985,0 3.6,1.626 3.6,3.623 0,1.997 -1.615,3.623 -3.6,3.623 -1.984,0 -3.6,-1.626 -3.6,-3.623z"
android:fillColor="#D8D8D8"
android:fillType="nonZero"/>
</vector>
password.xml Drawable file:
<vector xmlns:android="http://schemas.android.com/apk/res/android"
android:width="24dp"
android:height="24dp"
android:viewportWidth="24"
android:viewportHeight="24">
<path
android:pathData="M20.311,10.334c-0.441,-0.443 -1.036,-0.691 -1.659,-0.691h-1.174L17.478,6.5C17.478,3.46 15.024,1 12,1 8.976,1 6.522,3.46 6.522,6.5v3.143L5.348,9.643c-0.623,0 -1.221,0.248 -1.66,0.691C3.249,10.777 3,11.374 3,12v1.964C3,18.955 7.029,23 12,23c2.385,0 4.677,-0.952 6.364,-2.646C20.051,18.66 21,16.362 21,13.964L21,12c0,-0.625 -0.247,-1.226 -0.689,-1.666zM8.087,6.5C8.087,4.331 9.84,2.571 12,2.571s3.913,1.76 3.913,3.929v3.143L8.087,9.643L8.087,6.5zM19.435,13.964c0,4.124 -3.328,7.465 -7.435,7.465 -4.107,0 -7.435,-3.341 -7.435,-7.465L4.565,12c0,-0.434 0.35,-0.786 0.783,-0.786h13.304c0.432,0 0.783,0.352 0.783,0.786v1.964z"
android:fillColor="#D8D8D8"
android:fillType="nonZero"/>
<path
android:pathData="M12,12c-0.552,0 -1,0.373 -1,0.833v3.334c0,0.46 0.448,0.833 1,0.833s1,-0.373 1,-0.833v-3.334c0,-0.46 -0.448,-0.833 -1,-0.833z"
android:fillColor="#D8D8D8"
android:fillType="nonZero"/>
</vector>
phone.xml Drawable file:
<vector xmlns:android="http://schemas.android.com/apk/res/android"
android:width="24dp"
android:height="24dp"
android:viewportWidth="24"
android:viewportHeight="24">
<path
android:pathData="M12.5,18.645c-0.64,0 -1.162,0.503 -1.162,1.121s0.522,1.121 1.162,1.121c0.64,0 1.162,-0.503 1.162,-1.12 0,-0.619 -0.522,-1.122 -1.162,-1.122zM18.247,1L6.753,1C5.787,1 5,1.76 5,2.692v18.616C5,22.24 5.787,23 6.753,23h11.494c0.966,0 1.753,-0.76 1.753,-1.692L20,2.692C20,1.76 19.213,1 18.247,1zM6.753,2.128h11.494c0.322,0 0.584,0.253 0.584,0.564L18.831,16.64L6.17,16.64L6.17,2.692c0,-0.31 0.262,-0.564 0.584,-0.564zM18.247,21.872L6.753,21.872c-0.322,0 -0.584,-0.253 -0.584,-0.564v-3.444h12.662v3.444c0,0.31 -0.262,0.564 -0.584,0.564z"
android:fillColor="#D8D8D8"
android:fillType="nonZero"/>
</vector>
activity_main.xml Layout file:
<?xml version="1.0" encoding="utf-8"?>
<androidx.constraintlayout.widget.ConstraintLayout xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:app="http://schemas.android.com/apk/res-auto"
xmlns:tools="http://schemas.android.com/tools"
android:layout_width="match_parent"
android:layout_height="match_parent"
tools:context=".MainActivity">
<androidx.constraintlayout.widget.Guideline
android:id="@+id/guideline_right2"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:orientation="vertical"
app:layout_constraintGuide_end="40dp" />
<androidx.constraintlayout.widget.Guideline
android:id="@+id/guideline_left"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:orientation="vertical"
app:layout_constraintGuide_begin="40dp" />
<androidx.constraintlayout.widget.Guideline
android:id="@+id/guideline_right"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:orientation="vertical"
app:layout_constraintGuide_end="40dp" />
<ImageView
android:id="@+id/img_bgName"
android:layout_width="0dp"
android:layout_height="0dp"
app:layout_constraintBottom_toBottomOf="@+id/editText_name"
app:layout_constraintEnd_toStartOf="@+id/guideline_right"
app:layout_constraintHorizontal_bias="0.0"
app:layout_constraintStart_toStartOf="@+id/guideline_left"
app:layout_constraintTop_toTopOf="@+id/editText_name"
app:layout_constraintVertical_bias="1.0"
app:layout_constraintVertical_chainStyle="packed" />
<ImageView
android:id="@+id/img_bgID"
android:layout_width="0dp"
android:layout_height="0dp"
app:layout_constraintBottom_toBottomOf="@+id/editTextID"
app:layout_constraintEnd_toStartOf="@+id/guideline_right"
app:layout_constraintHorizontal_bias="0.0"
app:layout_constraintStart_toStartOf="@+id/guideline_left"
app:layout_constraintTop_toTopOf="@+id/editTextID"
app:layout_constraintVertical_chainStyle="packed" />
<ImageView
android:id="@+id/img_bgPass"
android:layout_width="0dp"
android:layout_height="0dp"
app:layout_constraintBottom_toBottomOf="@+id/editTextID"
app:layout_constraintStart_toStartOf="@+id/guideline_left"
app:layout_constraintTop_toTopOf="@+id/editTextID"
app:layout_constraintVertical_bias="1.0" />
<ImageView
android:id="@+id/img_bgPass7"
android:layout_width="0dp"
android:layout_height="0dp"
app:layout_constraintBottom_toBottomOf="@+id/editTextBirthday"
app:layout_constraintEnd_toStartOf="@+id/guideline_right2"
app:layout_constraintStart_toStartOf="@+id/guideline_left"
app:layout_constraintTop_toTopOf="@+id/editTextBirthday" />
<ImageView
android:id="@+id/img_bgPass8"
android:layout_width="0dp"
android:layout_height="0dp"
app:layout_constraintBottom_toBottomOf="@+id/editTextCellphone"
app:layout_constraintEnd_toStartOf="@+id/guideline_right2"
app:layout_constraintStart_toStartOf="@+id/guideline_left"
app:layout_constraintTop_toTopOf="@+id/editTextCellphone" />
<ImageView
android:id="@+id/img_bgPass9"
android:layout_width="0dp"
android:layout_height="0dp"
app:layout_constraintBottom_toBottomOf="@+id/editTextEmail"
app:layout_constraintEnd_toStartOf="@+id/guideline_right2"
app:layout_constraintStart_toStartOf="@+id/guideline_left"
app:layout_constraintTop_toTopOf="@+id/editTextEmail" />
<ImageView
android:id="@+id/img_bgPass10"
android:layout_width="0dp"
android:layout_height="0dp"
app:layout_constraintBottom_toBottomOf="@+id/editTextPassword"
app:layout_constraintEnd_toStartOf="@+id/guideline_right2"
app:layout_constraintStart_toStartOf="@+id/guideline_left"
app:layout_constraintTop_toTopOf="@+id/editTextPassword" />
<ImageView
android:id="@+id/img_bgPass11"
android:layout_width="0dp"
android:layout_height="0dp"
app:layout_constraintBottom_toBottomOf="@+id/editTextPasswordAgain"
app:layout_constraintEnd_toStartOf="@+id/guideline_right2"
app:layout_constraintStart_toStartOf="@+id/guideline_left"
app:layout_constraintTop_toTopOf="@+id/editTextPasswordAgain" />
<EditText
android:id="@+id/editText_name"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_marginStart="6dp"
android:layout_marginBottom="8dp"
android:backgroundTint="#00FFFFFF"
android:ems="18"
android:inputType="textPersonName"
android:text="@string/register_name"
android:textSize="12sp"
app:layout_constraintBottom_toTopOf="@+id/editTextID"
app:layout_constraintStart_toEndOf="@+id/imageView12"
app:layout_constraintTop_toBottomOf="@+id/textView2"
app:layout_constraintVertical_chainStyle="packed" />
<EditText
android:id="@+id/editTextID"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_marginStart="6dp"
android:layout_marginBottom="8dp"
android:backgroundTint="#00FFFFFF"
android:ems="18"
android:inputType="textPersonName"
android:text="@string/register_id"
android:textSize="12sp"
app:layout_constraintBottom_toTopOf="@+id/editTextBirthday"
app:layout_constraintStart_toEndOf="@+id/imageView22"
app:layout_constraintTop_toBottomOf="@+id/editText_name" />
<EditText
android:id="@+id/editTextBirthday"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_marginStart="6dp"
android:layout_marginBottom="8dp"
android:backgroundTint="#00FFFFFF"
android:ems="18"
android:inputType="textPersonName"
android:text="@string/register_birthday"
android:textSize="12sp"
app:layout_constraintBottom_toTopOf="@+id/editTextCellphone"
app:layout_constraintStart_toEndOf="@+id/imageView17"
app:layout_constraintTop_toBottomOf="@+id/editTextID" />
<EditText
android:id="@+id/editTextCellphone"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_marginStart="6dp"
android:layout_marginBottom="8dp"
android:backgroundTint="#00FFFFFF"
android:ems="18"
android:inputType="textPersonName"
android:text="@string/register_cellPhone"
android:textSize="12sp"
app:layout_constraintBottom_toTopOf="@+id/editTextEmail"
app:layout_constraintStart_toEndOf="@+id/imageView19"
app:layout_constraintTop_toBottomOf="@+id/editTextBirthday" />
<EditText
android:id="@+id/editTextEmail"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_marginStart="6dp"
android:layout_marginBottom="8dp"
android:backgroundTint="#00FFFFFF"
android:ems="18"
android:inputType="textPersonName"
android:text="@string/register_email"
android:textSize="12sp"
app:layout_constraintBottom_toTopOf="@+id/editTextPassword"
app:layout_constraintStart_toEndOf="@+id/imageView21"
app:layout_constraintTop_toBottomOf="@+id/editTextCellphone" />
<EditText
android:id="@+id/editTextPassword"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_marginStart="6dp"
android:layout_marginBottom="8dp"
android:backgroundTint="#00FFFFFF"
android:ems="18"
android:inputType="textPersonName"
android:text="@string/register_password"
android:textSize="12sp"
app:layout_constraintBottom_toTopOf="@+id/editTextPasswordAgain"
app:layout_constraintStart_toEndOf="@+id/imageView20"
app:layout_constraintTop_toBottomOf="@+id/editTextEmail" />
<EditText
android:id="@+id/editTextPasswordAgain"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_marginStart="6dp"
android:layout_marginBottom="12dp"
android:backgroundTint="#00FFFFFF"
android:ems="18"
android:inputType="textPersonName"
android:text="@string/register_PasswordAgain"
android:textSize="12sp"
app:layout_constraintBottom_toTopOf="@+id/btn_apply"
app:layout_constraintStart_toEndOf="@+id/imageView18"
app:layout_constraintTop_toBottomOf="@+id/editTextPassword" />
<Button
android:id="@+id/btn_apply"
android:layout_width="0dp"
android:layout_height="wrap_content"
android:text="Apply"
android:textColor="#FFFFFF"
android:textSize="18sp"
app:layout_constraintBottom_toBottomOf="parent"
app:layout_constraintEnd_toStartOf="@+id/guideline_right"
app:layout_constraintHorizontal_bias="0.5"
app:layout_constraintStart_toStartOf="@+id/guideline_left"
app:layout_constraintTop_toBottomOf="@+id/editTextPasswordAgain" />
<TextView
android:id="@+id/textView2"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_marginTop="64dp"
android:layout_marginBottom="16dp"
android:text="@string/registration_form_title"
android:textColor="#bf1f2a"
android:textSize="24sp"
app:layout_constraintBottom_toTopOf=""
app:layout_constraintEnd_toStartOf="@+id/guideline_right"
app:layout_constraintHorizontal_bias="0.5"
app:layout_constraintStart_toStartOf="@+id/guideline_left"
app:layout_constraintTop_toTopOf="parent"
app:layout_constraintVertical_bias="0.4"
app:layout_constraintVertical_chainStyle="packed" />
<ImageView
android:id="@+id/imageView12"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_marginStart="8dp"
app:layout_constraintBottom_toBottomOf="@+id/img_bgName"
app:layout_constraintStart_toStartOf="@+id/img_bgName"
app:layout_constraintTop_toTopOf="@+id/img_bgName"
app:srcCompat="@drawable/name" />
<ImageView
android:id="@+id/imageView22"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_marginStart="8dp"
app:layout_constraintBottom_toBottomOf="@+id/img_bgID"
app:layout_constraintStart_toStartOf="@+id/img_bgID"
app:layout_constraintTop_toTopOf="@+id/img_bgID"
app:srcCompat="@drawable/id" />
<ImageView
android:id="@+id/imageView17"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_marginStart="8dp"
app:layout_constraintBottom_toBottomOf="@+id/img_bgPass7"
app:layout_constraintStart_toStartOf="@+id/guideline_left"
app:layout_constraintTop_toTopOf="@+id/img_bgPass7"
app:srcCompat="@drawable/birthday" />
<ImageView
android:id="@+id/imageView18"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_marginStart="8dp"
app:layout_constraintBottom_toBottomOf="@+id/img_bgPass11"
app:layout_constraintStart_toStartOf="@+id/img_bgPass11"
app:layout_constraintTop_toTopOf="@+id/img_bgPass11"
app:srcCompat="@drawable/password" />
<ImageView
android:id="@+id/imageView21"
android:layout_width="24dp"
android:layout_height="24dp"
android:layout_marginStart="8dp"
app:layout_constraintBottom_toBottomOf="@+id/img_bgPass9"
app:layout_constraintStart_toStartOf="@+id/img_bgPass9"
app:layout_constraintTop_toTopOf="@+id/img_bgPass9"
app:srcCompat="@drawable/email" />
<ImageView
android:id="@+id/imageView20"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_marginStart="8dp"
app:layout_constraintBottom_toBottomOf="@+id/img_bgPass10"
app:layout_constraintStart_toStartOf="@+id/img_bgPass10"
app:layout_constraintTop_toTopOf="@+id/img_bgPass10"
app:srcCompat="@drawable/password" />
<ImageView
android:id="@+id/imageView19"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_marginStart="8dp"
app:layout_constraintBottom_toBottomOf="@+id/img_bgPass8"
app:layout_constraintStart_toStartOf="@+id/img_bgPass8"
app:layout_constraintTop_toTopOf="@+id/img_bgPass8"
app:srcCompat="@drawable/phone" />
<androidx.constraintlayout.widget.Guideline
android:id="@+id/guideline"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:orientation="vertical"
app:layout_constraintGuide_begin="15dp" />
</androidx.constraintlayout.widget.ConstraintLayout>
User class implementation:
package com.example.userregistrationapp;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.time.LocalDate;
import java.time.LocalDateTime;
import java.time.LocalTime;
import java.time.format.DateTimeFormatter;
import java.time.format.DateTimeParseException;
import java.util.Locale;
public class User implements java.io.Serializable{
private String fullName;
private String personalID;
private String dateOfBirth;
private String cellPhoneNumber;
private String emailInfo;
private String password;
private final String dateFormat = "yyyy-MM-dd";
public User(String fullNameInput,
String personalIDInput,
String dateOfBirthInput,
String cellPhoneNumberInput,
String emailInfoInput,
String passwordInput) throws NoSuchAlgorithmException, NullPointerException, IllegalArgumentException
// User object constructor
{
// Reference: https://stackoverflow.com/a/6358/6667035
if (fullNameInput == null) {
throw new NullPointerException("fullNameInput must not be null");
}
this.fullName = fullNameInput;
if (personalIDInput == null) {
throw new NullPointerException("personalIDInput must not be null");
}
this.personalID = personalIDInput;
if (dateOfBirthInput == null) {
throw new NullPointerException("dateOfBirthInput must not be null");
}
this.dateOfBirth = dateOfBirthInput;
if (cellPhoneNumberInput == null) {
throw new NullPointerException("cellPhoneNumberInput must not be null");
}
this.cellPhoneNumber = cellPhoneNumberInput;
if (emailInfoInput == null) {
throw new NullPointerException("emailInfoInput must not be null");
}
this.emailInfo = emailInfoInput;
if (passwordInput == null) {
throw new NullPointerException("passwordInput must not be null");
}
this.password = hashingMethod(passwordInput);
}
public String getFullName()
{
return this.fullName;
}
public String getPersonalID()
{
return this.personalID;
}
public String getDateOfBirth()
{
return this.dateOfBirth;
}
public String getCellPhoneNumber()
{
return this.cellPhoneNumber;
}
public String getEmailInfo()
{
return this.emailInfo;
}
public String getHash() throws NoSuchAlgorithmException
{
return hashingMethod(this.fullName + this.personalID);
}
public String getHashedPassword() throws NoSuchAlgorithmException
{
return this.password;
}
public boolean checkPassword(String password) {
boolean result = false;
try {
result = this.password.equals(hashingMethod(password));
}
catch (Exception e)
{
e.printStackTrace();
}
return result;
}
//**********************************************************************************************
// Reference: https://stackoverflow.com/a/2624385/6667035
private String hashingMethod(String inputString) throws NoSuchAlgorithmException {
MessageDigest messageDigest = MessageDigest.getInstance("SHA-256");
String stringToHash = inputString;
messageDigest.update(stringToHash.getBytes());
String stringHash = new String(messageDigest.digest());
return stringHash;
}
}
strings.xml:
<resources>
<string name="app_name">UserRegistrationAPP</string>
<string name="registration_form_title">Registration Form</string>
<string name="register_name">Name</string>
<string name="register_id">ID</string>
<string name="register_birthday">birthday</string>
<string name="register_cellPhone">CellPhone</string>
<string name="register_email">Email</string>
<string name="register_password">Password</string>
<string name="register_PasswordAgain">Type Password Again</string>
<string name="register_name_null_message">Please fill in name!</string>
<string name="please_fill_in_register_id">Please fill in ID!</string>
<string name="please_select_birthday">Please select birthday!</string>
<string name="please_fill_in_register_cellPhone">Please fill in cellphone number!</string>
<string name="please_fill_in_register_cellPhone_number">Please fill in correct cellphone number!</string>
<string name="please_fill_in_Email">Please fill in Email!</string>
<string name="please_fill_in_correct_Email">Please fill in correct Email!</string>
<string name="please_fill_in_password">Please fill in password!</string>
<string name="please_fill_in_confirm_password">Please fill in password again!</string>
<string name="confirmation_password_not_equal">Please check passwords are equal!</string>
<string name="send">Registration information have been sent!</string>
<string name="OK">OK</string>
</resources>
MainActivity.java implementation:
package com.example.userregistrationapp;
import androidx.appcompat.app.AppCompatActivity;
import android.content.Context;
import android.os.Bundle;
import android.text.InputType;
import android.view.View;
import android.widget.Button;
import android.widget.EditText;
import android.widget.Toast;
import java.security.NoSuchAlgorithmException;
import java.util.Calendar;
public class MainActivity extends AppCompatActivity {
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
final EditText nameEditText = findViewById(R.id.editText_name);
clickAndClear(nameEditText);
final EditText personalIDEditText = findViewById(R.id.editTextID);
clickAndClear(personalIDEditText);
final EditText dateOfBirthInfoEditText = findViewById(R.id.editTextBirthday);
View.OnClickListener dateOfBirthInfoClickHandler = v -> {
if (v ==dateOfBirthInfoEditText) {
Calendar calendar = Calendar.getInstance();
int year = calendar.get(Calendar.YEAR);
int month = calendar.get(Calendar.MONTH);
int day = calendar.get(Calendar.DAY_OF_MONTH);
new android.app.DatePickerDialog(v.getContext(), (view, year1, month1, day1) -> {
String dateTime = String.valueOf(year1)+"-"+String.valueOf(month1)+"-"+String.valueOf(day1);
dateOfBirthInfoEditText.setText(dateTime);
}, year, month, day).show();
}
};
dateOfBirthInfoEditText.setOnClickListener(dateOfBirthInfoClickHandler);
final EditText cellphoneNumberEditText = findViewById(R.id.editTextCellphone);
clickAndClear(cellphoneNumberEditText);
final EditText emailInfoEditText = findViewById(R.id.editTextEmail);
clickAndClear(emailInfoEditText);
final EditText passwordEditText = findViewById(R.id.editTextPassword);
clickAndClear(passwordEditText, true);
final EditText confirmPasswordEditText = findViewById(R.id.editTextPasswordAgain);
clickAndClear(confirmPasswordEditText, true);
final Button applyButton = findViewById(R.id.btn_apply);
View.OnClickListener ApplyButtonClickHandler = v -> {
if (v == applyButton) {
// Parsing Information
String nameString = getEditTextContent(nameEditText);
String personalIDString = getEditTextContent(personalIDEditText);
String dateOfBirtString = getEditTextContent(dateOfBirthInfoEditText);
String cellphoneNumberString = getEditTextContent(cellphoneNumberEditText);
String emailInfoString = getEditTextContent(emailInfoEditText);
String passwordString = getEditTextContent(passwordEditText);
String confirmPasswordString = getEditTextContent(confirmPasswordEditText);
// Checking Information
if ((nameString.isEmpty()) || (nameString.contains(getResources().getString(R.string.register_name)))) {
showAlertDialog(getResources().getString(R.string.register_name_null_message), getResources().getString(R.string.OK));
return;
}
if ((personalIDString.isEmpty()) || (personalIDString.equals(getResources().getString(R.string.register_id)))) {
showAlertDialog(getResources().getString(R.string.please_fill_in_register_id), getResources().getString(R.string.OK));
return;
}
if ((dateOfBirtString.isEmpty()) || (dateOfBirtString.equals(getResources().getString(R.string.register_birthday)))) {
showAlertDialog(getResources().getString(R.string.please_select_birthday), getResources().getString(R.string.OK));
return;
}
if ((cellphoneNumberString.isEmpty()) || (cellphoneNumberString.equals(getResources().getString(R.string.register_cellPhone)))) {
showAlertDialog(getResources().getString(R.string.please_fill_in_register_cellPhone), getResources().getString(R.string.OK));
return;
}
if (checkCellphoneNumber(cellphoneNumberEditText.getText().toString()) == false) {
showAlertDialog(getResources().getString(R.string.please_fill_in_register_cellPhone_number), getResources().getString(R.string.OK));
return;
}
if ((emailInfoString.isEmpty()) || (emailInfoString.equals(getResources().getString(R.string.register_email)))) {
showAlertDialog(getResources().getString(R.string.please_fill_in_Email), getResources().getString(R.string.OK));
return;
}
if (checkEmail(emailInfoString)==false) {
showAlertDialog(getResources().getString(R.string.please_fill_in_correct_Email), getResources().getString(R.string.OK));
return;
}
if (passwordString.isEmpty() || passwordString.equals(getResources().getString(R.string.register_password))) {
showAlertDialog(getResources().getString(R.string.please_fill_in_password), getResources().getString(R.string.OK));
return;
}
if (confirmPasswordString.isEmpty()) {
showAlertDialog(getResources().getString(R.string.please_fill_in_confirm_password), getResources().getString(R.string.OK));
return;
}
if (passwordString.equals(confirmPasswordString) == false) {
showAlertDialog(getResources().getString(R.string.confirmation_password_not_equal), getResources().getString(R.string.OK));
return;
}
try {
sendRegisterInfo(new User(
nameString,
personalIDString,
dateOfBirtString,
cellphoneNumberString,
emailInfoString,
passwordString
));
} catch (NoSuchAlgorithmException e) {
e.printStackTrace();
}
showToast(getResources().getString(R.string.send), Toast.LENGTH_SHORT);
}
};
applyButton.setOnClickListener(ApplyButtonClickHandler);
return;
}
private void sendRegisterInfo(User newUser) {
// TODO: perform send operation!
return;
}
private boolean checkEmail(String input) {
return input.contains("@");
}
private boolean checkCellphoneNumber(String input) {
return isNumeric(input);
}
private boolean isNumeric(String s) {
return s != null && s.matches("[-+]?\\d*\\.?\\d+");
}
private String getEditTextContent(final EditText editTextInput) {
return editTextInput.getText().toString();
}
private void showToast(String textInput, int duration) {
Context context = getApplicationContext();
CharSequence text = textInput;
Toast toast = Toast.makeText(context, text, duration);
toast.show();
}
private void showAlertDialog(String titleString, String stringOnPositiveButton) {
androidx.appcompat.app.AlertDialog.Builder builder = new androidx.appcompat.app.AlertDialog.Builder(this);
builder.setMessage(titleString)
.setPositiveButton(stringOnPositiveButton, (dialog, id) -> {
});
builder.show();
}
// clickAndClear method
// Clear EditText content after clicking it.
private void clickAndClear(final EditText editTextInput) {
View.OnClickListener editTextViewClickHandler = v -> {
if (v == editTextInput) {
editTextInput.setText("");
}
};
editTextInput.setOnClickListener(editTextViewClickHandler);
}
// clickAndClear method
// Clear EditText content after clicking it.
// If isPassword==true, hide characters like `***` with `.setInputType(InputType.TYPE_CLASS_TEXT | InputType.TYPE_TEXT_VARIATION_PASSWORD);` syntax.
private void clickAndClear(final EditText editTextInput, final boolean isPassword) {
View.OnClickListener editTextViewClickHandler = v -> {
if (v == editTextInput) {
editTextInput.setText("");
if (isPassword ==true) {
editTextInput.setInputType(InputType.TYPE_CLASS_TEXT | InputType.TYPE_TEXT_VARIATION_PASSWORD);
}
}
};
editTextInput.setOnClickListener(editTextViewClickHandler);
}
}
All suggestions are welcome.
The summary information:
Which question is it a follow-up to?
Android APP User class implementation
What changes have been made in the code since the last question?
Besides creating the User class, the full registration page implementation is shown in this post.
Why a new review is being asked for?
If there is any possible improvement, please let me know.
Answer: throws
Unchecked exceptions aren't usually declared as being thrown by functions, so you don't really need to say that the constructor can throw NullPointerException. I don't really like that you're throwing NullPointerException; it seems unusual to me. However, as the linked question and its many related questions say, this is an area of debate/personal taste. Your User constructor says it throws IllegalArgumentException, but I can't see an obvious place where that actually happens. Some IDEs will highlight these types of issues for you so that you don't forget to remove throws declarations if you remove the code that actually throws the exception...
Your getHashedPassword, for example, definitely doesn't throw NoSuchAlgorithmException, since it only returns a member variable.
this
You don't need to use this. every time you reference a member variable. In most instances it just adds noise/overhead that doesn't need to be there. Generally speaking, the only time I'd expect to see this. is when you're passing in parameters that have the same name as the member variable and you need to disambiguate the references (usually in a constructor or a set method).
catch
You rarely want to catch Exception. This is generally reserved for top-level, catch-all cases (such as around an API endpoint, to allow the exception to be translated into a response code). If your code can actually handle an exception, then you should be catching the specific exceptions that you are interested in; that way you don't accidentally ignore an exception you weren't really expecting. So, checkPassword should probably be catching NoSuchAlgorithmException.
Say what you mean...
It seems likely to me that sendRegisterInfo will have the possibility for failure. You've also coded your User constructor with the expectation that hashing the password might not work. This results in a try/catch block in your MainActivity. However, all the catch does is print the stack and swallow the exception. As far as the user is concerned, they get a toast telling them the registration information has been sent. It's generally bad customer relations to mislead your users...
Apply
Apply is a funny name for a button that sends registration information. Maybe it makes sense in the rest of the screen flow (which I haven't looked at), however I'd expect something more like submit or register.
You're also directly setting the android:text to "Apply" in the button declaration. Generally you'd want to use a string resource instead, indeed buttons are one of the first things I'd expect you to want to internationalise if you wanted to support multiple languages in the future.
clickAndClear
I might have missed something, however this method looks like it would drive me a bit crazy as a user. It makes sense for you to clear the text box when I first click on it, if it has some default value in it. However to me, it looks like if I fill out your registration information, then realise I've missed a letter in my e-mail address and click to update it, that you'll throw away what I've previously typed and rather than just adding in the missing character I'll have to type the whole address in again from scratch. I'd find that rather frustrating.
Password
You're setting the inputType for your password fields programmatically. Is there a reason that you're not setting it directly in the layout file? | {
"domain": "codereview.stackexchange",
"id": 41475,
"tags": "java, object-oriented, android, error-handling, classes"
} |
What type of neutrinos do we detect | Question: There are three types of neutrinos known today. When detecting them, how can we tell which type we are detecting?
Answer: Neutrino flavor is defined as agreeing with the flavor of the charged lepton participating in the interaction, so that the neutrino in the reaction
$$ \nu + A \to \mu + X \,, $$
is defined to be a muon neutrino and the one in
$$ \nu + n \to e + p $$
is an electron neutrino, by definition.
We have no way of knowing the alleged flavor of a neutrino participating in a neutral current interaction.
As a matter of experimental fact, electron and muon neutrinos (and anti-neutrinos) are easy, but tau neutrinos are much harder because demonstrating that you have a tau lepton is hard; both OPERA and IceCube can do that (to choose currently running experiments).
"domain": "physics.stackexchange",
"id": 24304,
"tags": "standard-model, neutrinos"
} |
Accelerated expansion of the universe caused by dark energy | Question: Maybe I am misunderstanding this concept but my question is the following: would we state that the universe is accelerating its expansion if we measure the speed of an astrophysical object when it reaches a distance 'd' from Earth and the corespondent velocity 'v' due to Hubble flow and after that we measure the velocity of another object more later in time when it also reaches the same distance 'd' of the first object from Earth and the measurament shows these two velocities are equal? Should we then state the universe 'baloon' is expanding but the rate of expansion is constant?
Answer: We describe the expansion of the universe with a scale factor that we conventionally call $a$. We take $a=1$ right now, so if in the future the universe has doubled in size that means $a=2$, or if we look back to a time when in the past when the universe was half the size $a = \tfrac12$.
To understand what this scale factor means we can relate it to any convenient property of the universe. For example we could consider the average spacing between galaxies, or possibly the average spacing between galaxy clusters. If we say the universe has doubled in size, i.e. $a = 2$, we mean this average spacing has doubled. Alternatively consider the average density of matter in the universe. If the size of the universe doubles we expect the average density of matter to fall by a factor eight, so the density is related to $a$ by $\rho \propto 1/a^3$.
Now we understand what we mean by the scale factor, the acceleration or otherwise of the expansion is just a statement of the way $a$ changes with time. If $a$ were a constant that would mean the universe was static i.e. the average spacing wasn't changing and neither was the average density. Alternatively if $a$ is increasing linearly with time then the expansion rate is constant.
So an accelerated expansion just means the scale factor $a(t)$ is increasing at a rate that is faster than linearly in time. In fact for a dark energy dominated universe we expect $a \propto e^{kt}$ for some constant $k$. | {
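As a hedged numeric sketch (not part of the original answer; the constant k = 0.1 is an arbitrary assumed value, not a measured one), the two claims above, that dark-energy-dominated expansion means a(t) grows faster than linearly, and that density scales as 1/a^3, can be checked directly:

```python
import math

def a_exp(t, k=0.1):
    """Dark-energy-dominated scale factor, a(t) = exp(k t)."""
    return math.exp(k * t)

def growth_rate(f, t, h=1e-6):
    """Central-difference estimate of da/dt."""
    return (f(t + h) - f(t - h)) / (2 * h)

# 1) The expansion accelerates: da/dt itself grows with time.
early, late = growth_rate(a_exp, 1.0), growth_rate(a_exp, 10.0)

# 2) Density scales as 1/a^3: doubling a divides the density by 8.
density_ratio = (1.0 ** -3) / (2.0 ** -3)  # rho(a=1) / rho(a=2)
```

For linear-in-time a(t) the same growth-rate estimate would be constant, which is the "constant expansion rate" case from the question.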
"domain": "physics.stackexchange",
"id": 81178,
"tags": "astronomy, space-expansion, dark-energy"
} |
How is slope error calculated for Bresenham’s Algorithm for both 2D and 3D? | Question: I've been working with Bresenham’s Algorithm in 2D and understand it is derived from the following logic:
y = mx+c
To get the slope error at a given point, the following equation is used:
d2 is the distance from value Y to the whole integer above.
d1 is the distance from value Y to the whole integer below.
d1 - d2 = [m(xₖ+1) + c - yₖ] - [(yₖ + 1) - (m(xₖ+1) + c)]
If d1 - d2 < 0, the next pixel should be (xₖ+1, yₖ)
If d1 - d2 >= 0, then the next pixel should be (xₖ+1, yₖ+1)
This gets turned into, and reduced to, the following equation (multiplying through by ∆x) to remove the fractional m = ∆y/∆x:
Pk = 2∆y(xₖ) - 2∆x(yₖ) + 2∆y + 2∆xc - ∆x
The intercept, which appears in the initial decision value, is:
c = y₁ - (∆y/∆x)*x₁
Each step of the loop increases this decision value by an increment derived next.
The actual value of the next decision variable, Pknext, is calculated by plugging in (xₖ+1, yₖ+1) and (xₖ+1, yₖ) and subtracting Pknext - Pk. If this value is less than zero, we choose (xₖ+1, yₖ).
When (xₖ+1, yₖ+1) and (xₖ+1, yₖ) are plugged into Pknext, they end up simplifying to 2(∆y-∆x) and 2∆y.
This simplifies this logic into:
Constraints
x₁ < x₂
y₁ < y₂
∆y/∆x ≤ 1
Given x₁, y₁, x₂, y₂
∆x = x₂ - x₁
∆y = y₂ - y₁
P = 2∆y - ∆x # Initial Decision value
while x < x₂
x = x+1
IF P < 0
Plot(x,y)
P = P + 2∆y # Decision value of East
ELSE
y = y+1
Plot(x,y)
P = P + 2∆y - 2∆x # Decision value of North East
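As a hedged, runnable sketch (not part of the original post), the restricted-octant pseudocode above can be written in Python; variable names mirror the derivation, and the starting point is plotted explicitly:

```python
def bresenham_2d(x1, y1, x2, y2):
    """Restricted octant: x1 < x2, y1 <= y2 and slope dy/dx <= 1."""
    dx, dy = x2 - x1, y2 - y1
    p = 2 * dy - dx              # initial decision value
    x, y = x1, y1
    points = [(x, y)]
    while x < x2:
        x += 1
        if p < 0:                # East: keep y
            p += 2 * dy
        else:                    # North-East: also step y
            y += 1
            p += 2 * dy - 2 * dx
        points.append((x, y))
    return points

# bresenham_2d(0, 0, 5, 2) → [(0,0), (1,0), (2,1), (3,1), (4,2), (5,2)]
```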
How is this logic applied in 3D?
I found this article by geeks for geeks: https://www.geeksforgeeks.org/bresenhams-algorithm-for-3-d-line-drawing/
But it never explained how the math is derived. We can get the value of Y by simply using mx+b and calculating the decision variable from there, but how is the decision variable done in 3D?
Answer: I will first explain a closely related line drawing algorithm in 2D.
The parametric equation of a continuous line can be written vectorially
P = P0 + t.∆P
where t runs from 0 to 1.
If we want to discretize in such a way that the pixels are contiguous, the increment of t must be the smaller of 1/∆x and 1/∆y, call it 1/∆, and we compute the points
Pi = P0 + i.∆P/∆.
Assuming that ∆=∆x, this is
Pix = P0x + i
Piy = P0y + i.∆y/∆x.
(i runs from 0 to ∆x)
For efficiency, the division of i.∆y by ∆x can be done incrementally by keeping the quotient and the remainder (i.∆y = q.∆x + r); for a new i, we add ∆y to the remainder and if it exceeds ∆x, we fix by subtracting ∆x and incrementing the quotient. This explains the if statement in Bresenham, which chooses between a lateral (east) or diagonal (north-east) move.
x+= 1 // East
r+= ∆y
if r ≥ ∆x
r-= ∆x; y+= 1 // North
Now the generalization to 3D is immediate: choose ∆ = max(∆x, ∆y, ∆z) and, assuming ∆=∆x,
Pix = P0x + i
Piy = P0y + i.∆y/∆x
Piz = P0z + i.∆z/∆x.
A complete discussion must involve the signs of the deltas. | {
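A hedged Python sketch of the incremental quotient/remainder scheme just described (my own illustration, assuming for simplicity non-negative deltas with ∆x the largest; the general case adds the sign handling noted above):

```python
def bresenham_3d(p0, p1):
    """Discretize the segment p0 -> p1 with the incremental
    quotient/remainder scheme; assumes 0 <= dy, dz <= dx and dx > 0."""
    (x0, y0, z0), (x1, y1, z1) = p0, p1
    dx, dy, dz = x1 - x0, y1 - y0, z1 - z0
    x, y, z = x0, y0, z0
    ry = rz = 0                  # remainders of i*dy/dx and i*dz/dx
    points = [(x, y, z)]
    for _ in range(dx):
        x += 1                   # driving axis
        ry += dy
        if ry >= dx:             # remainder overflow: fix the quotient
            ry -= dx
            y += 1
        rz += dz
        if rz >= dx:
            rz -= dx
            z += 1
        points.append((x, y, z))
    return points
```

Each iteration moves one step along the driving axis x, and the two remainders decide independently whether y and z also step, so consecutive points never differ by more than 1 in any coordinate.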
"domain": "cs.stackexchange",
"id": 20297,
"tags": "algorithms, computer-graphics"
} |
Route protection with custom middleware in Laravel 8 | Question: I am working on a blogging application in Laravel 8.
The application assigns users roles and permissions. There is a many-to-many relationship between roles and permissions.
I have created a custom middleware to give users access to routes based on their permissions:
namespace App\Http\Middleware;
use Closure;
use Illuminate\Http\Request;
use Illuminate\Support\Facades\Auth;
class CheckUserPermissions
{
/**
* Handle an incoming request.
*
* @param \Illuminate\Http\Request $request
* @param \Closure(\Illuminate\Http\Request): (\Illuminate\Http\Response|\Illuminate\Http\RedirectResponse) $next
* @return \Illuminate\Http\Response|\Illuminate\Http\RedirectResponse
*/
// Permissions checker
public function hasPermissionTo($permission) {
return in_array($permission, Auth::user()->role->permissions->pluck('slug')->toArray());
}
public function handle(Request $request, Closure $next, ...$permissions)
{
// Check user permissions
foreach ($permissions as $permission) {
if (!$this->hasPermissionTo($permission)) {
$permission_label = join(' ', explode('-', $permission));
return redirect()->back()->with('error', 'You do not have permission to ' . $permission_label);
}
}
return $next($request);
}
}
In routes\web.php I have:
// Dashboard routes
Route::group(['prefix' => 'dashboard', 'middleware' => ['auth']], function() {
Route::get('/', [DashboardController::class, 'index'])->name('dashboard');
// Settings routes
Route::group(['prefix' => 'settings', 'middleware' => ['checkUserPermissions:edit-settings']], function() {
Route::get('/', [SettingsController::class, 'index'])->name('dashboard.settings');
Route::post('/update', [SettingsController::class, 'update'])->name('dashboard.settings.update');
});
// Pages routes
Route::group(['prefix' => 'pages'], function() {
Route::get('/', [PageController::class, 'index'])->name('dashboard.pages')->middleware('checkUserPermissions:view-pages');
Route::get('/new', [PageController::class, 'create'])->name('dashboard.pages.new')->middleware('checkUserPermissions:add-pages');
Route::post('/add', [PageController::class, 'save'])->name('dashboard.pages.add');
Route::get('/edit/{id}', [PageController::class, 'edit'])->name('dashboard.pages.edit')->middleware('checkUserPermissions:edit-pages');
Route::post('/update/{id}', [PageController::class, 'update'])->name('dashboard.pages.update');
Route::get('/delete/{id}', [PageController::class, 'delete'])->name('dashboard.pages.delete')->middleware('checkUserPermissions:delete-pages');
});
// Category routes
Route::group(['prefix' => 'categories'], function() {
Route::get('/', [ArticleCategoryController::class, 'index'])->name('dashboard.categories')->middleware('checkUserPermissions:view-categories');
Route::get('/new', [ArticleCategoryController::class, 'create'])->name('dashboard.categories.new')->middleware('checkUserPermissions:add-categories');
Route::post('/add', [ArticleCategoryController::class, 'save'])->name('dashboard.categories.add');
Route::get('/edit/{id}', [ArticleCategoryController::class, 'edit'])->name('dashboard.categories.edit')->middleware('checkUserPermissions:edit-categories');
Route::post('/update/{id}', [ArticleCategoryController::class, 'update'])->name('dashboard.categories.update');
Route::get('/delete/{id}', [ArticleCategoryController::class, 'delete'])->name('dashboard.categories.delete')->middleware('checkUserPermissions:delete-categories');
});
// Article routes
Route::group(['prefix' => 'articles'], function() {
Route::get('/', [ArticleController::class, 'index'])->name('dashboard.articles')->middleware('checkUserPermissions:view-articles');
Route::get('/new', [ArticleController::class, 'create'])->name('dashboard.articles.new')->middleware('checkUserPermissions:add-articles');
Route::post('/add', [ArticleController::class, 'save'])->name('dashboard.articles.add');
Route::get('/edit/{id}', [ArticleController::class, 'edit'])->name('dashboard.articles.edit')->middleware('checkUserPermissions:edit-articles');
Route::post('/update/{id}', [ArticleController::class, 'update'])->name('dashboard.articles.update');
Route::get('/delete/{id}', [ArticleController::class, 'delete'])->name('dashboard.articles.delete')->middleware('checkUserPermissions:delete-articles');
});
// Comments routes
Route::group(['prefix' => 'comments'], function() {
Route::get('/', [CommentController::class, 'index'])->name('dashboard.comments')->middleware('checkUserPermissions:view-comments');
Route::get('/delete/{id}', [CommentController::class, 'delete'])->name('dashboard.comments.delete')->middleware('checkUserPermissions:delete-comments');
Route::get('/approve/{id}', [CommentController::class, 'approve'])->name('dashboard.comments.approve')->middleware('checkUserPermissions:approve-comments');
Route::get('/unapprove/{id}', [CommentController::class, 'unapprove'])->name('dashboard.comments.unapprove')->middleware('checkUserPermissions:unapprove-comments');
});
// User management routes
Route::group(['prefix' => 'users'], function() {
Route::get('/rights', [UserRightsController::class, 'index'])->name('user-rights')->middleware('checkUserPermissions:manage-user-rights');
Route::get('/rights/change-role/{id}', [UserRightsController::class, 'change_role'])->name('change-role')->middleware('checkUserPermissions:assign-user-roles');
Route::post('/rights/update-role/{id}', [UserRightsController::class, 'update_role'])->name('update-role');
Route::get('/rights/ban/{id}', [UserRightsController::class, 'ban_user'])->name('ban-user')->middleware('checkUserPermissions:ban-users');
Route::get('/rights/activate/{id}', [UserRightsController::class, 'activate_user'])->name('activate-user')->middleware('checkUserPermissions:activate-users');
});
});
The Super-admin has several abilities related to user management:
use App\Models\User;
use App\Models\Role;
class UserRightsController extends Controller
{
public function roles() {
return Role::all();
}
public function index() {
$users = User::paginate(10);
return view('dashboard/user-rights', ['users' => $users]);
}
public function change_role($id) {
$user = User::find($id);
return view('dashboard/change-role',['user' => $user, 'roles' => $this->roles()]);
}
public function update_role(Request $request, $id) {
$user = User::find($id);
$user->role_id = $request->get('role_id');
$user->save();
return redirect()->route('user-rights')->with('success', 'The role for ' . $user->first_name . ' ' . $user->last_name . ' was updated');
}
public function ban_user($id){
User::find($id)->update(['active' => 0]);
return redirect()->back()->with('success', 'The user is now banned');
}
public function activate_user($id){
User::find($id)->update(['active' => 1]);
return redirect()->back()->with('success', 'The user is now active');
}
}
Questions:
Do you see any security issues in the code above?
Are there ways to make it more DRY?
Do you see any architecture flaw?
Answer:
I see an avoidable single-use variable. And explode-join can be replaced with str_replace.
$permission_label = join(' ', explode('-', $permission));
return redirect()->back()->with('error', 'You do not have permission to ' . $permission_label);
Can be reduced to:
return redirect()->back()->with(
'error',
'You do not have permission to ' . str_replace('-', ' ', $permission)
);
I'd probably not declare $users in index() nor $user in change_role().
I recommend adding type declarations to all of the $id arguments in the UserRightsController() class as well as $permission in the hasPermissionTo() method. Even handle() can have string ...$permissions.
I do not have any insights regarding security; I have no experience with Laravel. | {
"domain": "codereview.stackexchange",
"id": 44385,
"tags": "php, laravel"
} |
How to calculate probability of a state given the partition function | Question: Given a canonical partition function $Z$, by the thermodynamic connection equations, I can say that $$A = -k_B T \ln Z$$
So, internal energy $U$ is given by
$$\langle E \rangle = U = -\frac{\partial \ln Z}{\partial \beta}$$
My question is, how would I calculate the probability of the energy being equal to the average energy $U$, given by the above equation? How would I calculate the probability of the energy of the system being $kU$, where $k$ is some positive constant?
My question is, would the probability of the energy being
$$P(E=U) = e^{-\beta U}/Z$$
and
$$P(E=kU) = e^{-\beta kU}/Z?$$
How would you do this for an ideal gas partition function
$$Z = \frac{1}{N!} \left( \frac{2\pi m k_B T}{h^2} \right)^{3N/2} V^N?$$
Answer: If you have the partition function, then it's straightforward to get the relevant distribution expressed in phase space coordinates $\Gamma$:
$$p(\Gamma) = \frac{e^{-\beta E(\Gamma)}}{Z}$$
But that's not the question! The question is about finding the distribution of the energies themselves. For this you need to project the above $6N$-dimensional probability distribution onto a single dimension:
$$p(E)=\int \delta(E-E(\Gamma))\frac{e^{-\beta E(\Gamma)}}{Z}d\Gamma$$
Which is a complicated $6N$-dimensional integral! So in general you will see that this question is extremely difficult and only soluble numerically (and usually not even to satisfactory precision). However, the ideal gas presents one of the few soluble cases, where you know that each velocity coordinate is a normal independent and identically distributed (i.i.d.) variable. The sum of the squares of $3N$ random i.i.d. standard normal variables is known as the chi-square distribution:
$$\chi^2_{3N}(x)=\frac{x^{\frac{3N}{2}-1}e^{-\frac{x}{2}}}{2^{\frac{3N}{2}}\Gamma(\frac{3N}{2})}$$
And this gives you the probability density of observing a particular sum of squares of "dimensionless" velocities. For velocities with variance $\frac{1}{\beta m}$ (Maxwell-Boltzmann distribution) this is equal to:
$$\chi^2_{3N}(\epsilon)=\frac{\beta m(\beta m\epsilon)^{\frac{3N}{2}-1}e^{-\frac{\beta m\epsilon}{2}}}{2^{\frac{3N}{2}}\Gamma(\frac{3N}{2})}$$
Where $\epsilon$ is the squared sum of the Maxwell-Boltzmann distributed velocities. To convert it to a kinetic energy you have to scale once again by $\frac{m}{2}$, s.t. $E = \frac{m\epsilon}{2}$:
$$\chi^2_{3N}(E)=\frac{\beta (\beta E)^{\frac{3N}{2}-1}e^{-\beta E}}{\Gamma(\frac{3N}{2})}$$
Which is the distribution you were looking for.
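A quick way to check this result is by direct sampling (a numerical sketch; the values of $N$, $\beta$ and $m$ below are arbitrary toy choices). The derived density is a Gamma distribution with shape $3N/2$ and rate $\beta$, whose mean is the equipartition value $3N/(2\beta)$:

```python
import random
from math import sqrt

random.seed(0)
N, beta, m = 50, 2.0, 1.0        # arbitrary toy values
samples = 5000
sigma = sqrt(1 / (beta * m))     # std of each Maxwell-Boltzmann velocity component

def kinetic_energy():
    # 3N velocity components, each an i.i.d. normal variable
    return 0.5 * m * sum(random.gauss(0.0, sigma) ** 2 for _ in range(3 * N))

E = [kinetic_energy() for _ in range(samples)]
mean = sum(E) / samples
print(mean, 3 * N / (2 * beta))  # sample mean vs. the Gamma-distribution mean 3N/(2*beta)
```

A histogram of `E` would likewise match the density above term for term.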
P.S. For both of these rescalings I have used the standard change-of-variables identity: if $x = s y$, then $p_Y(y) = s\,p_X(s y)$. I have also used some algebra to derive the above, so the steps might not be immediately obvious, but they should be readily demonstrable. | {
"domain": "physics.stackexchange",
"id": 71430,
"tags": "thermodynamics, statistical-mechanics"
} |
Decision tree classification with a "false" attribute | Question: This is a pretty specific problem, but I think it can help me better understand the whole concept of the subject.
A doctor in the hospital is in charge of 20 medical students. For every patient, the doctor asks the students whether the patient has a problem. Each student can either say "yes", "no" or "Don't know".
The doctor builds a decision tree based on the answers of the students over a period of two months. It is known that one of the students who answered the questions is an imposter, and all of his answers are based on, and opposite to, those of another student he picked. When the "real" student answered "yes", the imposter answered "no" and vice versa. When the "real" one answered "I don't know", so did the imposter.
Also, it is known that every real student gives a decisive answer ("yes" or "no") in at least 60% of the cases.
Tasks:
Give an algorithm to find the imposter.
A tree was built again without the imposter's answers. Prove it is the same as the old tree.
I understand that the information gain of the imposter and the real student he based his answers on is the same. So the algorithm needs to find groups of students with the same information gain. Within those groups, find two students whose answers are all reversed. The one who is wrong more than 60% of the time will be the imposter.
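A toy sketch of this pair search (the names and answer strings are invented purely for illustration):

```python
# Toy data (invented): one answer string per student, one character per patient.
# 'y' = yes, 'n' = no, 'd' = don't know.
students = {
    "alice":   "yyndy",
    "bob":     "nndyn",
    "mallory": "nnydn",   # exact opposite of alice's answers
}

flip = {"y": "n", "n": "y", "d": "d"}

def opposite(s):
    return "".join(flip[c] for c in s)

# collect every unordered pair whose answers are exactly reversed
pairs = [(a, b) for a in students for b in students
         if a < b and students[a] == opposite(students[b])]
print(pairs)   # [('alice', 'mallory')]
```

This only finds the suspicious pair; deciding which member is the imposter still needs the extra error-rate reasoning.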
Regarding the second task I'm not sure why it is true. Thinking of a case of a patient where all the students answered "I don't know" except for one student who answered "yes" and the imposter who answered "no". Isn't it possible that the new tree will classify this patient as "yes" instead of "no", now when the imposter is gone?
Answer: Maybe I'm not interpreting the scenario correctly, but task 1 seems impossible and task 2 easier than you've made it sound.
I'm reading it as a sample for each patient, with one feature for each student, possible values yes/no/dunno.
In that case, at any split of the decision tree the imposter and the chosen real student will split the data in the same fashion. If either of them is chosen as the split criterion, then it's random which of the two is used, but if we remove the imposter first, then the real student will be used; either way there is no difference in the way the samples are split.
As for finding the imposter, all you can do AFAICT is look for pairs who give exactly opposite responses. But if there are multiple such pairs, I don't see how to pick one, and even when you have a pair of opposites, I don't see any way to determine which is real and which is imposter. The 60% figure seems to just indicate that nobody answers "I don't know" too often. I guess we can make an additional assumption that the students are better-than-random, so that the imposter is worse? | {
"domain": "datascience.stackexchange",
"id": 4489,
"tags": "classification, decision-trees"
} |
Is there a layman's explanation for why Grover's algorithm works? | Question: This blogpost by Scott Aaronson is a very useful and simple explanation of Shor's algorithm.
I'm wondering if there is such an explanation for the second most famous quantum algorithm: Grover's algorithm to search an unordered database of size $n$ in $O(\sqrt{n})$ time.
In particular, I'd like to see some understandable intuition for the initially surprising result of the running time!
Answer: There is a good explanation by Craig Gidney here (he also has other great content, including a circuit simulator, on his blog).
Essentially, Grover's algorithm applies when you have a function which returns True for one of its possible inputs, and False for all the others. The job of the algorithm is to find the one that returns True.
To do this we express the inputs as bit strings, and encode these using the $|0\rangle$ and $|1\rangle$ states of a string of qubits. So the bit string 0011 would be encoded in the four qubit state $|0011\rangle$, for example.
We also need to be able to implement the function using quantum gates. Specifically, we need to find a sequence of gates that will implement a unitary $U$ such that
$U | a \rangle = - | a \rangle, \,\,\,\,\,\,\,\,\,\,\,\,\, U | b \rangle = | b \rangle $
where $a$ is the bit string for which the function would return True and $b$ is any for which it would return False.
If we start with a superposition of all possible bit strings, which is pretty easy to do by just Hadamarding everything, all inputs start off with the same amplitude of $\frac{1}{\sqrt{2^n}}$ (where $n$ is the length of the bit strings we are searching over, and therefore the number of qubits we are using). But if we then apply the oracle $U$, the amplitude of the state we are looking for will change to $-\frac{1}{\sqrt{2^n}}$.
This is not any easily observable difference, so we need to amplify it. To do this we use the Grover Diffusion Operator, $D$. The effect of this operator is essentially to look at how each amplitude is different from the mean amplitude, and then invert this difference. So if a certain amplitude was a certain amount larger than the mean amplitude, it will become that same amount less than the mean, and vice-versa.
Specifically, if you have a superposition of bit strings $b_j$, the diffusion operator has the effect
$D: \,\,\,\, \sum_j \alpha_j \, | b_j \rangle \,\,\,\,\,\, \mapsto \,\,\,\,\,\, \sum_j (2\mu \, - \, \alpha_j) \, | b_j \rangle$
where $\mu = \frac{1}{2^n}\sum_j \alpha_j$ is the mean amplitude. So any amplitude $\mu + \delta$ gets turned into $\mu - \delta$. To see why it has this effect, and how to implement it, see these lecture notes.
Most of the amplitudes will be a tiny bit larger than the mean (due to the effect of the single $-\frac{1}{\sqrt{2^n}}$), so they will become a tiny bit less than the mean through this operation. Not a big change.
The state we are looking for will be affected more strongly. Its amplitude is a lot less than the mean, and so will become a lot greater than the mean after the diffusion operator is applied. The end effect of the diffusion operator is therefore to cause an interference effect on the states which skims an amplitude of $\frac{1}{\sqrt{2^n}}$ from all the wrong answers and adds it to the right one. By repeating this process, we can quickly get to the point where our solution stands out from the crowd so much that we can identify it.
Of course, this all goes to show that all the work is done by the diffusion operator. Searching is just an application that we can connect to it.
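The whole oracle-plus-diffusion loop is small enough to simulate classically with a plain state vector (a sketch; the choice of $n$ and the target index are arbitrary):

```python
from math import sqrt, pi

n = 3                 # number of qubits
size = 2 ** n         # number of bit strings
target = 5            # arbitrary index of the "True" input

amp = [1 / sqrt(size)] * size                     # uniform superposition (Hadamards)
for _ in range(int(round(pi / 4 * sqrt(size)))):  # near-optimal iteration count
    amp[target] *= -1                             # oracle U: flip the marked amplitude
    mu = sum(amp) / size                          # mean amplitude
    amp = [2 * mu - a for a in amp]               # diffusion D: invert about the mean

print(round(amp[target] ** 2, 3))                 # probability of measuring the target
```

For $n = 3$ this reaches a success probability of about 0.95 after just two iterations, while classical search would need about half of the 8 entries on average.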
See the answers to other questions for details on how the functions and diffusion operator are implemented. | {
"domain": "quantumcomputing.stackexchange",
"id": 1971,
"tags": "quantum-algorithms, complexity-theory, grovers-algorithm"
} |
error in TurtleBot2 | Question:
Hi,guys. I have big trouble.....in TurtleBot2.
When I launch rviz, TurtleBot2 rotates by itself... but in fact it is not rotating...
This is video: Turtlebot2 error
I use groovy in ros and Ubuntu 12.04LTS 64-bit.
Originally posted by FfoNy on ROS Answers with karma: 62 on 2014-06-10
Post score: 0
Original comments
Comment by FfoNy on 2014-06-11:
I found the problem........
"/odom" has problem but I have no idea to solve.
Answer:
Looks like you have the same problem as zenifed. Check the first answer; updating kobuki firmware should fix the problem.
http://answers.ros.org/question/160561/turtlebot-navigation-hydro-drifting-orentation/#172632
Originally posted by jorge with karma: 2284 on 2014-06-11
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 18230,
"tags": "turtlebot2, turtlebot, ubuntu, ubuntu-precise, base-odometry"
} |
What happens to forces when a charge travels at near c? | Question: Suppose 3 electrons, spaced 1 cm apart, are travelling at $c$ minus 1 mm/s.
The charges are at rest relative to one another in their frame, but what happens to the fundamental forces $F_e$ and $F_g$? What $F_e$ does A experience from B or C? And what about B?
Answer: If the charges are moving at (near) c relative to a given reference frame (there is no mention of a reference frame in the question, but there must be one, otherwise we wouldn't know there is any - inertial - movement whatsoever), but they are at rest to each other, then according to SR postulates we may as well assume that they are simply not moving at all. We are free to assume they are stationary, and that the chosen reference frame moves at (near) c relative to them.
In such case the forces remain unchanged, as long as we do not have any information concerning possible influence of this reference frame on the charges. | {
"domain": "physics.stackexchange",
"id": 32615,
"tags": "electromagnetism, special-relativity, forces, charge"
} |
How can I implement a visualization of archived convective data from numerical weather models? | Question: I've always found geographic visualizations of surface/mixed-layer CAPE models and other weather data to be incredibly interesting.
Here's a map featured by the Capital Weather Gang over at the Washington Post, during the June 12-13, 2013 derecho series in the Ohio Valley/Chesapeake Bay regions. It was created by WeatherBELL, which is a subscription service offering visualizations based on their proprietary technology.
Here's another map featured on the blog of Cliff Mass, a professor of atmospheric sciences at the University of Washington. This is an internal UW WRF model that is GFS-initialized.
I would like to know how I can take archived GFS or WRF data and visualize it on a map. Or even if that's possible (perhaps archived numerical data is only available to paid subscribers). What I would like to do is plot surface and upper-air readings to better understand the mesoscale setups for thunderstorms in the Pacific Northwest, where I live.
Answer: The best model archive I know of offhand is Iowa State Meteorology's Gempak model data in their MTArchive. Looks like model-wise, they have at least 10 years of GFS (and the predecessor, the AVN). No idea if the models are complete files (include CAPE) or not, though.
Gempak software is free, though you won't get graphics nearly as polished as WxBell; they've spent a lot of time fine tuning their setup, and likely use newer more flexible tools (probably GrADS or OpenGrADS, both also free). You could probably get GrADS to use the .gem files, perhaps by converting them? | {
"domain": "earthscience.stackexchange",
"id": 926,
"tags": "meteorology, atmosphere-modelling, weather-forecasting, visualization"
} |
Can a strong electric field cause the electrons to come out of the atoms? | Question: Can a strong electric field cause the electrons to come out of the atoms, is this how free electron are obtained in a discharge tube?
Answer: Yes, this is called field emission. It is easiest to create by having the electric field geometry between the two electrodes be highly asymmetric, as for example having the negative electrode be a sharpened needle pointing towards a flat plate which serves as the positive electrode. You can get field emission with tens to hundreds of volts; if you don't have the asymmetry then it will take thousands to tens of thousands of volts to kick the process into operation (see my answer to your other question here). | {
"domain": "physics.stackexchange",
"id": 55203,
"tags": "electrons, electric-fields, electric-current, atomic-physics, ionization-energy"
} |
The required energy to accelerate a particle from one velocity to another | Question: Say I have a charged molecule running along a linear evacuated tube (so no wind resistance). In the laboratory frame, we can measure that the particle is moving at some speed $v_1$. Provided that kinetic energy scales as the square of the velocity, how much energy do we need to impart on the particle to accelerate it to some velocity $v_2$? Why should this depend on $v_1$ and not strictly the difference $v_2 - v_1$?
Answer:
Why should this depend on v1 and not strictly the difference v2−v1?
If I understand your question correctly, you're not so much asking for a mathematical explanation as you are for a physical one?
Remember that KE is essentially the energy stored from work done by a force on the particle and that work is force through a distance.
When there is an initial velocity, the distance through which the force is applied is different than when there is no initial velocity.
For example, let's say that to uniformly accelerate from 0 to $\Delta v$ requires 1 second and a distance of 1 meter.
Now, imagine that there is an initial velocity $v_1$, in the direction of the applied force, and we have the same uniform acceleration for 1 second. The change in velocity is the same but the distance through which the same force is applied is now larger by $v_1 \cdot 1 s$.
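Numerically, with assumed unit mass, force and duration, the two cases look like this:

```python
m, F, t = 1.0, 1.0, 1.0                  # assumed unit mass, force and duration
a = F / m
for v1 in (0.0, 5.0):                    # without and with an initial velocity
    d = v1 * t + 0.5 * a * t ** 2        # distance covered while the force acts
    work = F * d                         # work done by the force
    dKE = 0.5 * m * (v1 + a * t) ** 2 - 0.5 * m * v1 ** 2
    print(v1, work, dKE)                 # work equals the KE change, and both grow with v1
```

The change in velocity is identical in both rows, yet the work (and hence the KE change) is larger when $v_1 > 0$.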
More work is done by the force since it is applied through a greater distance. So, the change in KE must depend on the initial velocity, i.e., change in KE must be frame dependent. | {
"domain": "physics.stackexchange",
"id": 38194,
"tags": "kinematics"
} |
What will happen to the Indian plate after it slides under the Eurasian Plate? | Question: What will happen if the Indian Plate is done sliding under the Eurasian plate? I hypothesized some possible answers below. Please tell me the answer and whether my hypotheses are correct.
Most likely to least.
It will become part of the Eurasian Plate?
It will slide back out of Siberia?
It will get pushed into the mantle?
I know it is funny to think that the Indian Plate would find its way out of Siberia. I would also want to know what would happen to Mount Everest and the Himalayas.
Answer: It is a continent-continent collision following subduction; no subduction is taking place at the moment. The ocean floor that was attached to it has vanished into the mantle or got wedged into the thickened crust. The collision builds up immense stress, resulting in deformation, faulting, folding, crustal shortening and uplift of the Tibetan plateau.
https://www.files.ethz.ch/structuralgeology/jpb/files/English/Asiaeng.pdf
Will it become part of the Eurasian Plate?
That's a question of definition of plate boundaries, fault lines, sutures ...
Will it slide back out of Siberia?
Of course not. How could rocks even be preserved in such a process? In the process of shortening, the crust is nearly completely consumed.
Will it get pushed into the mantle?
No. Continental crust doesn't subduct. Both sides have approximately equal density and the mantle is much denser, so they just push and shove each other. Maybe parts get pushed to greater depths (~100km) in the dynamic process and exhumed again. Shortening will continue, uplift, faulting, folding and erosion in equilibrium, and when and where uplift stops erosion will take over.
Afterwards there may probably be something like the Appalachians, an older result of a continent-continent collision and a very thick continental crust. Pop-science link depicting the principle: https://www.nps.gov/subjects/geology/plate-tectonics-collisional-mountain-ranges.htm | {
"domain": "earthscience.stackexchange",
"id": 2243,
"tags": "plate-tectonics, mountains, tectonics, mantle, erosion"
} |
How are protective sheets attached to plastics? | Question: When you buy plastic sheets such as acrylics, polycarbonate sheets etc, they come with a paper/plastic protective film. Like in this image :
Protective Film for Acrylic Sheet PMMA Panel
You notice that there is no adhesive (at least not something tacky) that is present on the sheets, and there is no residue when the sheet is peeled off, I was wondering if anyone can shed some light on how these protective films are applied ?
Answer: The material is called "Electrostatic Protective Film".
It is manufactured by letting the plastic film cool in a strong electrostatic field: there are high-voltage electrodes on top and bottom of the plastic. This causes the polar molecules in the plastic to align so that one side of the film has a positive charge and the other side has a negative one.
This charge will persist quite long, and will keep the film adhered to the plastic sheet by electrostatic forces. However, because the electrostatic force is quickly reduced by distance, it is important that no dust gets between the protective film and the base plastic during manufacturing. | {
"domain": "engineering.stackexchange",
"id": 1285,
"tags": "plastic, adhesive"
} |
What do I hear when listening to a computer-generated sine wave? | Question: When I use a sine-wave generator (such as this one), I give credit to the software and my hardware that a pure sine wave is produced (as close as is technologically possible) — that is, no harmonics. However, once the signal reaches my speakers and/or the air between the speakers and my ears, are harmonics automatically created within that medium?
Assuming a pure signal (no harmonics), does the vibrating medium naturally introduce harmonics?
Answer: To generate harmonics, you need a nonlinear element. Loudspeakers are not perfectly linear, so yes, they generate weak harmonics. Air is generally very close to a linear medium for sound unless the intensity approaches shockwave levels, so it doesn't normally generate significant harmonics.
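This is easy to demonstrate numerically: feed a pure tone through a weak cubic nonlinearity (a toy stand-in for a slightly nonlinear loudspeaker; all parameters are arbitrary) and measure the spectrum at the third harmonic:

```python
from math import sin, cos, pi

fs, f0, n = 8000, 100, 8000                       # sample rate, tone frequency, samples
x = [sin(2 * pi * f0 * k / fs) for k in range(n)]  # pure tone: one spectral line
y = [v + 0.1 * v ** 3 for v in x]                  # weak cubic nonlinearity

def amplitude(sig, f):
    # single-bin DFT magnitude, scaled back to sinusoid amplitude
    re = sum(v * cos(2 * pi * f * k / fs) for k, v in enumerate(sig))
    im = sum(v * sin(2 * pi * f * k / fs) for k, v in enumerate(sig))
    return 2 * (re * re + im * im) ** 0.5 / n

print(amplitude(x, 300), amplitude(y, 300))        # the nonlinearity creates a 3rd harmonic
```

Since $\sin^3\theta = (3\sin\theta - \sin 3\theta)/4$, the cubic term puts a component of amplitude $0.1/4 = 0.025$ at $3f_0$, while the pure tone has essentially none.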
Of course, the other things in the chain aren't perfectly linear either: digital to analog converters, amplifiers, eardrums... | {
"domain": "physics.stackexchange",
"id": 93811,
"tags": "waves, acoustics, electrical-engineering, signal-processing, harmonics"
} |
Create a C++ string using printf-style formatting | Question: It's often convenient to use C-style printf format strings when writing C++. I often find the modifiers much simpler to use than C++ I/O manipulators, and if I'm cribbing from existing C code, it helps to be able to re-use the existing format strings.
Here's my take on creating a new std::string given a format and the corresponding arguments. I've given it an annotation so GCC can check the argument types agree when the format is a literal, but I don't know how to help other compilers.
I've provided a more memory-efficient version for C++17, which provides read/write access to the underlying array of a string. My copy of CPP Reference still says "Modifying the character array accessed through data() has undefined behavior", but the Web version has been edited (May 2017) to indicate that it's only the const version that has that constraint.
For earlier standards (I require minimum C++11), we may need to allocate a temporary array, as we can't write to a string's data. Unfortunately, this requires an extra allocation and copy.
#include <string>
std::string format(const char *fmt, ...)
#ifdef __GNUC__
__attribute__ ((format (printf, 1, 2)))
#endif
;
// Implementation
#include <cstdio>
#include <cstdarg>
#if __cplusplus < 201703L
#include <memory>
#endif
std::string format(const char *fmt, ...)
{
char buf[256];
va_list args;
va_start(args, fmt);
const auto r = std::vsnprintf(buf, sizeof buf, fmt, args);
va_end(args);
if (r < 0)
// conversion failed
return {};
const size_t len = r;
if (len < sizeof buf)
// we fit in the buffer
return { buf, len };
#if __cplusplus >= 201703L
// C++17: Create a string and write to its underlying array
std::string s(len, '\0');
va_start(args, fmt);
std::vsnprintf(s.data(), len+1, fmt, args);
va_end(args);
return s;
#else
// C++11 or C++14: We need to allocate scratch memory
auto vbuf = std::unique_ptr<char[]>(new char[len+1]);
va_start(args, fmt);
std::vsnprintf(vbuf.get(), len+1, fmt, args);
va_end(args);
return { vbuf.get(), len };
#endif
}
// Test program
#include <iostream>
int main()
{
std::clog << "'" << format("a")
<< "'" << std::endl;
std::clog << "'" << format("%#x", 1337)
<< "'" << std::endl;
std::clog << "'" << format("--%c--", 0) // an embedded NUL
<< "'" << std::endl;
std::clog << "'" << format("%300s++%6.2f", "**", 0.0).substr(300)
<< "'" << std::endl;
}
void provoke_warnings()
{
// warning: zero-length gnu_printf format string
// [-Wformat-zero-length]
std::clog << "'" << format("") << "'" << std::endl;
// warning: format ‘%c’ expects argument of type ‘int’, but
// argument 2 has type ‘const char*’ [-Wformat=]
std::clog << "'" << format("%c", "bar") << "'" << std::endl;
}
I've compiled the code with both C++17 and C++11 compilers, and verified them both under Valgrind using this test program.
I'd welcome any comments on the code itself or on my testing.
Answer: The design is sound.
Despite what naysayers may express, there are a few overwhelming advantages that your solution based on venerable printf and C-variadics arguments has over ostream and C++ variadics:
performance: ostream has terrible formatting performance by design,
footprint: any variadic template solution must be carefully designed to avoid the bloat resulting of instantiating one variant for each and every combination of arguments; reaching zero-cost is only possible if the function can be fully inlined without increasing the call-site footprint (possibly by delegating to a non-template core).
Your design sidesteps those two pitfalls, which is great.
Furthermore, your use of the format attribute ensures a compile-time check of the format vs the arguments. Unlike the contenders presented, it will diagnose at compile-time that the number of arguments matches (on top of their types), avoiding the necessity for runtime errors.
Nitpick
I really encourage you to place braces systematically around if-blocks. There's little reason not to, and it'll prevent the occasional slip-up.
Weaknesses
There are two weakness to the design:
no variant allowing the user to specify the buffer,
a very limited set of accepted types.
The first is an issue for composition and reuse.
Composition: if I wish to create a larger string by calling in a sub-function, it will create several intermediate buffers which may negate the performance advantage the solution has in terms of raw-formatting,
Reuse: the user may already have a sufficiently large buffer available.
Unfortunately, the C++ standard library does not allow one to pass an existing buffer to a string (sigh) and is generally pretty lacking in raw buffers (sigh), so you'll have to roll your own.
I would do so in two steps: an abstract base class Write which exposes a way to write bytes in slices and a ready-made implementation based on std::unique_ptr<char[], decltype(&std::free)> + size + capacity (not vector, because it zeroes the memory when resizing...).
The second is an issue for extension, and performance. In order to format their own types, users are encouraged to "pre-format" their own types into strings, which will result in needless temporary allocations.
There is unfortunately no simple way to solve this issue, it's a fundamental limitation of printf-based solution. It will be up to the users of the solution to decide whether the cost of temporary allocations is worth bearing, or not, on a per-call-site basis. | {
"domain": "codereview.stackexchange",
"id": 31999,
"tags": "c++, strings, c++11, formatting, c++17"
} |
Difference between dynamic loading and dynamic linking in the OS | Question: First of all, I saw many answers on this topic on Quora, Stack Overflow, and this site. But I still couldn't clearly understand the difference between dynamic loading and dynamic linking.
What I understood until now is this.
Dynamic loading : system library or other routine is loaded during run-time and it is not supported by OS.
Dynamic linking : system library or other routine is linked during run-time and it is supported by OS.
What I'm confusing is the behaviour of this two concepts. I know that loading occurs after linking like below image. (from Operating System concepts 9th edition, Chapter 8 figure 8.3)
Then, my question is: does dynamic loading occur after dynamic linking? If I'm wrong, what is the exact difference between the two concepts?
Answer: Let me explain these terms in the simplest way possible. Now since both the terms have the word dynamic in them, both occur during execution.
Dynamic Loading : Suppose the program to be executed consists of various modules. Of course, it's not wise to load all the modules into main memory at once (in some cases it might not even be possible because of limited main memory). So basically what we do here is load the main module first, and then during execution we load some other module only when it's required and the execution cannot proceed further without loading it.
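As a loose analogy in Python (illustrative only; real dynamic loading is performed by the OS loader, e.g. via dlopen on Linux), a module can be brought into memory only at the moment it is first needed:

```python
import importlib

def lazy(name):
    """Proxy that imports module `name` on first attribute access,
    not at program start -- a toy analogy of dynamic loading."""
    class Proxy:
        def __getattr__(self, attr):
            mod = importlib.import_module(name)   # loaded here, at run time
            return getattr(mod, attr)
    return Proxy()

lazymath = lazy("math")       # nothing imported yet
print(lazymath.sqrt(16.0))    # first use triggers the load; prints 4.0
```

Execution proceeds without the module until the proxy's first attribute access, mirroring "load only when required".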
Dynamic Linking : Suppose our program has some functions whose definition is present in some system library. We do know the header file only consists of declarations of functions and not definitions. So during execution when the function gets called we load that system library into main memory and link the function call inside our program with the function definition inside system library. | {
"domain": "cs.stackexchange",
"id": 15071,
"tags": "operating-systems, compilers"
} |
PCL Kdtree symbol lookup error running program from terminal | Question:
I'm JUST TRYING to run the icp Demo available in the pcl tutorials
I can compile it perfectly... the problem is when i try to run the program in the terminal I get this :
./interactive_icp: symbol lookup
error: ./interactive_icp: undefined
symbol:
_ZN3pcl6search6KdTreeINS_8PointXYZEEC1Eb
And Yes, I've already tried this :
sudo apt-get install -f and
sudo apt-get autoremove together with
erasing build dirs and rebuilding in
my workspace
rm -r devel rm -r build ...
but it still doesn't work... Can you please help me? It's been a week and a half on this and it's getting really frustrating since I just want to see the demo running!
I'm running pcl 1.7 with QT creator on Ubuntu 12.04 LTS.
THANK YOU ALL for your time !!
Originally posted by Gop on ROS Answers with karma: 3 on 2014-04-03
Post score: 0
Answer:
Ok, first make sure all ROS packages are upgraded. Call apt-get update and apt-get upgrade. If there are any ros-... packages "kept back", then try again with apt-get dist-upgrade or pick them out manually.
After that, rebuild your complete workspace. As you have shown, it is best to remove devel/build/install. If this doesn't bring success, rinse and repeat, but this time execute the following two commands in order (call catkin_make with your parameters) and post the complete output of both as an edit to your question (it's probably best to put them on gist.github.com or pastebin.com and put the link in your question).
$ env | grep -i ROS
$ catkin_make [...]
Originally posted by demmeln with karma: 4306 on 2014-04-04
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 17524,
"tags": "pcl, pcl-1.7"
} |
Interpreting result of a toboggan problem (special cases) | Question:
Assuming there is friction contrary to the question above,
I derived the equation as in my textbook:
$$a_x=g(\sin\alpha-\mu_{k}\cos\alpha)$$
Testing special cases: if $\alpha=90°$, then $a_x=g$, just as we would expect in free fall. However, letting $\alpha=0°$, we get
$$a_x=-\mu_{k}g$$
Shouldn't $a_x=0$ as on a horizontal surface?
Answer: The friction force $F_n\mu_k$ only applies while the body is moving, and this force's direction is opposite to the movement direction.
In your formula $a_x=g(\sin\alpha-\mu_{k}\cos\alpha)$, you effectively subtract the friction force from the gravitational acceleration force, assuming that there is movement in the positive x direction.
So, $a_x=-\mu_{k}g$ is perfectly valid in case you started with some positive speed. Then the friction will continuously slow down the toboggan to a halt.
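This behaviour is easy to check with a tiny simulation (assumed numbers; the key modelling choice is that kinetic friction acts only while the body moves):

```python
import math

g, mu_k, alpha = 9.81, 0.3, 0.0       # assumed values; flat surface (alpha = 0)
v, t, dt = 2.0, 0.0, 1e-4             # start with some positive speed
while v > 0:                          # kinetic friction only applies while moving
    a = g * (math.sin(alpha) - mu_k * math.cos(alpha))   # = -mu_k * g here
    v = max(0.0, v + a * dt)
    t += dt
print(v, round(t, 2))                 # rest is reached after about v0/(mu_k*g) seconds
```

Once $v = 0$ the loop stops, matching the point where static friction takes over and merely balances any driving force.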
After that, the friction force reduces itself to the amount necessary to compensate the gravitational acceleration force. | {
"domain": "physics.stackexchange",
"id": 71642,
"tags": "homework-and-exercises, newtonian-mechanics, friction"
} |
How is the distribution probability in the canonical ensemble derived? | Question: I'm confused by the derivation of the canonical ensemble, namely the origin of the probability density, that is the Boltzmann factor. Here's what I have:
We have a system of particles with $(N_{tot},V_{tot},E_{tot})$ in thermodynamical equilibrium that we divide into two subsystems, $A$ and $B$. Let $(N,V,E)$ describe $A$, and let $(N',V',E')$ describe $B$. We have the relations $$ N+N' = N_{tot} $$ $$V+V' = V_{tot}$$ $$E+E' \approx E_{tot}$$
where in the last one we've neglected any potential interaction between the particles, arguing that the number of degrees of freedom in any interaction is orders of magnitude smaller than the total amount of particles, because interactions are short ranged.
Now, consider the probability that subsystem $A$ has energy $E$. This is the ratio of the number of microstates of the entire system wherein $A$ has energy $E$ to the number of microstates such that the entire system has $E_{tot}$:
$$P(A \mbox{ has }E)= \frac{\Omega_{tot}(A \mbox{ has }E)}{\Omega_{tot}(E_{tot})}$$
but since we've neglected the interactions, we can factor the numerator:
$$P(A \mbox{ has }E)= \frac{\Omega(E)\Omega ' (E')}{\Omega_{tot}(E_{tot})}$$
where the omegas follow the same convention with primes and subscripts as the other variables. But now I don't understand how to get the Boltzmann factor... this relies on having
$$P(A \mbox{ has }E) = C \cdot \Omega ' (E')$$
But I don't see why that's true... $\Omega(E)$ isn't constant, is it?
Answer: Courtesy of Kittel's Elementary Statistical Physics:
Like Kittel, I'm going to call the two subsystems S (for subsystem) and R (for reservoir), instead of A and B.
The first key: instead of counting the states of S with exactly energy $E_s$ (a set of volume $0$ in the full phase space), consider the states within a small amount $\delta E_s$ of $E_s$. Then the probability $\mathrm{d}p$ that S's energy falls within that range is:
$$ \mathrm{d}p = C \, \mathrm{d} \Gamma_{\text{tot}} = C \, \mathrm{d} \Gamma_s \Delta \Gamma_r $$
where $\mathrm{d} \Gamma_{\text{tot}}$ is the volume of system phase space for which A is in the range $[E_s-\delta E_s, E_s]$, that volume factors into subsystem and reservoir phase space volumes, $\mathrm{d} \Gamma_s$ is the volume of the subsystem phase space with the desired energy, and $\Delta \Gamma_r$ is the volume of reservoir phase space that keeps the total energy at $E_{\text{tot}}$.
Now the (dimensionless) entropy of the reservoir $\sigma_r$ is:
$$ \sigma_r(E_r) = \log{\Delta \Gamma_r} $$
We want to Taylor-series expand this entropy about the total energy to get a term proportional to $E_s$. The reservoir energy $E_r$ is:
$$ E_r = E_{\text{tot}} - E_s $$
Now the second key: So that we can truncate the Taylor series after the first term, we assume that the reservoir is much, much bigger than S, so that $E_s \ll E_{\text{tot}}$. Then:
$$ \sigma_r(E_r) \approx \sigma_r(E_{\text{tot}}) - \frac{\partial \sigma_r (E_{\text{tot}})}{\partial E_{\text{tot}}} E_s =
\sigma_r(E_{\text{tot}}) - \frac{E_s}{\tau} $$
where $\tau$ is the (normalized) system temperature, by definition (where again we're employing the assumption that the reservoir is much larger than the subsystem, so that the same temperature characterizes both reservoir and system). Plugging in:
$$ \mathrm{d}p = C \Delta \Gamma_r \, \mathrm{d} \Gamma_s = C e^{\sigma_r(E_r)} \, \mathrm{d} \Gamma_s = A e^{-E_s/\tau} \, \mathrm{d} \Gamma_s $$
where $ A = C e^{\sigma_r(E_{\text{tot}})} $ is a constant (since it's evaluated at $E_{\text{tot}}$).
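To see the counting argument in action, here is a small numeric sanity check using an Einstein-solid reservoir (a toy model chosen purely for illustration, not anything from Kittel): the multiplicity of $q$ energy quanta among $N$ oscillators is $\binom{q+N-1}{q}$, and the probability of the subsystem holding $E_s$ quanta, proportional to the reservoir multiplicity alone, indeed falls off like $e^{-E_s/\tau}$:

```python
from math import comb, exp, log

def omega(N, q):
    # multiplicity of an Einstein solid: q quanta among N oscillators
    return comb(q + N - 1, q)

N_res, E_tot = 1000, 500      # reservoir size and total energy (in quanta)

# P(subsystem has E_s) is proportional to the number of reservoir
# microstates with the remaining energy E_tot - E_s:
P = [omega(N_res, E_tot - E_s) for E_s in range(5)]
ratios = [P[i + 1] / P[i] for i in range(4)]

# The Boltzmann factor predicts a constant ratio exp(-1/tau), with 1/tau
# the slope of the reservoir entropy sigma_r = log(omega) at E_tot:
inv_tau = log(omega(N_res, E_tot)) - log(omega(N_res, E_tot - 1))
predicted = exp(-inv_tau)     # each ratio should be close to this
```

The four successive ratios match the predicted factor to a few parts in a thousand; enlarging the reservoir tightens the agreement, which is precisely the role of the assumption that the reservoir dwarfs S.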
And there you are. | {
"domain": "physics.stackexchange",
"id": 8547,
"tags": "thermodynamics, statistical-mechanics"
} |
Trajectory planner for a multi robot omniwheeled soccer system | Question:
Hi folks,
Sorry for such a general question, but I'm kind of in a hurry for RoboCup'11. I was planning to write my own Dynamic Safety Search (J.R. Bruce, 2006) algorithm to control a multi-robot omniwheeled soccer system, then I came across the DWA of base_local_planner. In principle, DSS is very similar to DWA, so I am planning to go with the TrajectoryPlannerROS class.
So, my questions are as follows:
I obtain the linear line segments by using ompl's RRT-based planners for each robot. These robots have an almost-elliptically shaped velocity profile in x-y coordinates (viz. v_x = [-3.5, 3.5], v_y = [-4, 4], but the applicable velocities form a deltoid in the v_x-v_y plane). So I have different max and min velocities for different velocity directions, while the velocity limits in the parameters of TrajectoryPlannerROS can only be set in more of a box shape. Is it possible to set the allowable speeds in a different way? For instance, with a polygon, convex hull, etc.?
This motion control program will run on an offboard computer, and odometry data will not be available, but the global velocity can be provided. However, the time limit for this motion control program is a couple of milliseconds. Do you think it can perform sufficiently well?
What exactly is a strafing velocity?
Thank you for your assistance...
Cheers
Originally posted by Kadir Firat Uyanik on ROS Answers with karma: 288 on 2011-06-19
Post score: 0
Answer:
First off, you might want to take a look at the dwa_local_planner as it implements DWA with the added perk of having dynamic_reconfigure built in. This might make tuning the planner for your needs a lot easier than dealing with the trajectory planner.
Now to answer your questions:
The dwa_local_planner allows you to set min and max velocities in both the x and y dimensions independently. The TrajectoryPlannerROS only allows for setting a min and max x velocity. For y (or strafing) velocities, it takes a list of admissible velocities to simulate. This might be another reason for you to give the dwa_local_planner a shot.
A couple of milliseconds will probably be OK as far as performance is concerned.
Strafing just means moving sideways, meaning there's a non-zero y velocity.
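For the first point, an illustrative parameter sketch (parameter names per the dwa_local_planner and base_local_planner documentation; the numbers are just the question's velocity bounds, not tuning recommendations):

```yaml
DWAPlannerROS:
  min_vel_x: -3.5
  max_vel_x:  3.5
  min_vel_y: -4.0
  max_vel_y:  4.0

TrajectoryPlannerROS:
  holonomic_robot: true
  # no y range here; instead a list of strafing velocities to simulate
  y_vels: [-4.0, -2.0, 0.0, 2.0, 4.0]
```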
Originally posted by eitan with karma: 2743 on 2011-06-22
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by Kadir Firat Uyanik on 2011-06-24:
I think representing circle shaped obstacles as 4 or 8 points and publishing this information to the sensor topic that costmap is subscribed to would do the trick. Considering the performance and inner workings of dwa_local_planner, should I create "n" many instances of the planner for "n" robots?
Comment by Kadir Firat Uyanik on 2011-06-24:
Eitan, in my application obstacles are all represented as circles and the environment is,though highly dynamic,fully observable via global vision. What costmap(sensor source) settings would you recommend?It might be better asking this as a new question, but anyway it is not totally off the topic. | {
"domain": "robotics.stackexchange",
"id": 5895,
"tags": "ros, navigation, base-local-planner, dwa-local-planner"
} |
How are vaccines mass-produced? | Question: I have a background in product design and so am familiar with with how most things are mass-produced — food, machines, etc. But I've been able to find very little information on how vaccines are mass-produced.
It looks like there are 4 types of vaccines, all of which include pieces or byproducts of the virus they're intended to counteract.
If you're producing billions of vaccines, I imagine you need a tremendous amount of the virus.
How is such a mass of virus obtained? Do they just fill up tanks with a culturing agent and a sample of the virus and wait for it to grow, like a giant petri dish? Are there big vats of Coronavirus sitting in factories somewhere?
Answer: Per Wikipedia, typically when one needs a lot of virus, it is grown in a controlled cell environment. This used to be eggs, but is moving toward cell cultures instead. So basically yes, factories full of virus (though more like in nice discrete bioreactors than big Joker-friendly vats).
Synthetic vaccines, such as the mRNA vaccines for COVID, do not need this step at all, since they do not actually use the virus, but can be done through cell-free biochemical reactions that replicate the mRNA directly. | {
"domain": "biology.stackexchange",
"id": 11147,
"tags": "virology, pathology, epidemiology"
} |
Hector_Slam mapping Hokuyo | Question:
Hello
I'm trying to create something similar to this http://www.youtube.com/watch?feature=player_embedded&v=F8pdObV_df4, but I'm not able to consistently create good maps.
I'm using hector slam with the hokuyo URG-04LX and an IMU for roll/pitch (so no odometry).
Some (indoor) maps are perfect, but if I try it again, the map sometimes drifts and the position in rviz becomes weird.
So my question is: hector_slam creates its own odometry, right? Or do I have to add odometry for proper map results? Or is my laser scanner too weak for hector, so that I need something like the hokuyo UTM-30LX?
Thx for your response
Originally posted by Tirgo on ROS Answers with karma: 66 on 2012-08-31
Post score: 0
Answer:
Well, Hector is able to work without any odometry support. In such a case Hector uses the laser data alone to calculate its location with respect to the map. As long as the laser data are meaningful, Hector will work well in this mode, i.e. the range, noise, update rate and vehicle speed stay within some bounds. For example, the hokuyo URG-04LX has a range of 4-7 m (if I remember correctly), which means your environment should not have spots with more than 4-7 m of free space. As a rule of thumb, at least 50% of the beams should stay within half of the max range. Therefore your outdoor conditions easily violate this rule of thumb.
Hope this helps.
Originally posted by tlinder with karma: 663 on 2012-08-31
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by Tirgo on 2012-08-31:
Hello, thanks for your response. So basically I have to avoid spots with over 4-7 meters of free space. That would explain the broken maps. Are there any tricks to work around this? Or other stacks like hector_slam which could work? Because 4-7 meters is not that much, even indoors.
Comment by bredzhang on 2015-12-17:
hector slam can handle SLAM without any odom param. I can understand that when the robot makes a linear move, but think about this situation: if the robot just rotates in place, can hector slam estimate the rotation accurately? And if it is a round spheric environment, I don't think hector c
Comment by Tirgo on 2016-04-08:
It depends on your hardware and the environment, but yes, it can also handle rotations. If you have a short-range laser scanner or an environment without many characteristic features, you will run into problems.
"domain": "robotics.stackexchange",
"id": 10837,
"tags": "slam, navigation, mapping, hokuyo, hector-slam"
} |
Could you help me to resolve this exercise of K-NN? | Question: I am just a young learner in Data Science. Could you help me to resolve this exercise of K-NN?
Ex.1
The table below provides a training data set containing six observations, three predictors, and one qualitative response variable:
x1 x2 x3 y
0 3 0 Red
2 0 0 Red
0 1 3 Red
0 1 2 Green
−1 0 1 Green
1 1 1 Red
Suppose we wish to use this dataset to make a prediction for Y when x1 = x2 = x3 = 0 using
K-nearest neighbors.
1) With the euclidean distance, what is the prediction with K = 1 and with K = 3 for the test
point (0, 0, 0)?
2) If the Bayes decision boundary in this problem is highly non-linear, then would we expect the
best value for K to be large or small? Why?
Answer: An algorithm implementing KNN for classification tasks goes as follows:
Compute the distance (in this case Euclidean) from the test point in question to all points in the training data.
Using the computations from 1), sort the training points in ascending order, according to their distance from the test point in question (smallest distance first).
Take the first K training points from the list in part 2), and collect the classes assigned to each in one list.
Assign to the test point in question the class which appears most often in the list from part 3).
In this case, since we are interested in the Euclidean distance, you can use the formula
$$ d(p, p') = \sqrt{(x_{1}- x_{1}')^{2} + (x_{2}- x_{2}')^{2} + (x_{3}- x_{3}')^{2}} $$
to calculate the distance between a training point $p=(x_{1}, x_{2},x_{3})$ and a test point in question $p' = (x_{1}', x_{2}', x_{3}')$.
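Putting the four steps and this distance formula together, a minimal sketch in Python (using the question's six training points; `knn_predict` is just an illustrative name) could be:

```python
from collections import Counter
from math import sqrt

train = [((0, 3, 0), "Red"), ((2, 0, 0), "Red"), ((0, 1, 3), "Red"),
         ((0, 1, 2), "Green"), ((-1, 0, 1), "Green"), ((1, 1, 1), "Red")]

def knn_predict(test_point, k):
    # 1) Euclidean distance from the test point to every training point
    dists = [(sqrt(sum((a - b) ** 2 for a, b in zip(p, test_point))), label)
             for p, label in train]
    # 2) sort ascending by distance, 3) keep the K nearest labels,
    # 4) assign the majority class among them
    dists.sort(key=lambda t: t[0])
    nearest = [label for _, label in dists[:k]]
    return Counter(nearest).most_common(1)[0][0]
```

Calling `knn_predict((0, 0, 0), 1)` reproduces the K = 1 result worked out below, and calling it with `k = 3` lets you check your own answer to the second half of part 1.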
With the euclidean distance, what is the prediction with K = 1 and with K = 3 for the test point (0, 0, 0)?
For the K=1 case, if my algebra serves me correctly, the training point with the smallest distance to the test point $p' = (0,0,0)$ is the point $p=(-1,0,1)$ (with distance $\sqrt{2}$). Assuming my calculation is correct, this means we should associate the class of the point p, namely the class "Green", to the test point $p'$. The K=3 case can be approached by the same method, but this time we will need to calculate all distances from train points to p', keep the train points with the 3 smallest distances, and assign the class which occurs most often among those 3 train points. I'll leave it to you.
If the Bayes decision boundary in this problem is highly non-linear, then would we expect the best value for K to be large or small? Why?
Some intuition can be gained by thinking about the K=1 case. In this case, each training point has a neighborhood around it such that any new test points in that region will be classified in the same way as the training point. By definition, this neighborhood is given by those points which are closer to the training point in question than they are to any other training points. These neighborhoods are known as Voronoi cells, and together they form a Voronoi tessellation (see Wikipedia for some pictures). The decision boundaries for KNN with K=1 are comprised of collections of edges of these Voronoi cells, and the key observation is that traversing arbitrary edges in these diagrams can allow one to approximate highly nonlinear curves (try making your own dataset and drawing its Voronoi cells to try this out). To answer the question, one can investigate how this ability to approximate nonlinear curves changes as K gets larger, and try to relate the findings to the existence of a highly nonlinear Bayes decision boundary.
"domain": "datascience.stackexchange",
"id": 7531,
"tags": "k-nn"
} |
Is there an assumption that implies $P=ZPP$ which is not known to imply $P=BPP$? | Question: There are assumptions that are known to imply that $P = BPP$. For example, if there exists a function in $E = DTIME(2^{O(n)})$ that has circuit complexity $2^{\Omega(n)}$, then $P = BPP$ [1]. Clearly, such a result would also imply that $P = ZPP$.
Is there an assumption that is known to imply $P = ZPP$ but is not known to imply that $P = BPP$? Alternatively, is there a reason to believe that such a result is unlikely to exist?
[1] Impagliazzo, Russell, and Avi Wigderson. "P= BPP if E requires exponential circuits: Derandomizing the XOR lemma." Proceedings of the twenty-ninth annual ACM symposium on Theory of computing. 1997.
Answer: I think it is "easy" to come up with an assumption that implies one but not necessarily the other... (just write down a condition that is equivalent to P=ZPP)... however, a "natural" and non-uniform assumption (e.g. some weak form of PRG) seems harder, since (for example) hitting set generators (the non-uniform thing you need for P=RP) imply pseudorandom generators (what you need for P=BPP).
Just to give an idea of how annoying the problem is, here is a "natural" non-uniform condition that implies P=ZPP but (oops) also implies hitting sets, so it also implies P=BPP.
Say a circuit pair $(C,C')$ is good for length $n$ if $C$ and $C'$ have the same number of inputs, and for every input $x$ of length $n$,
$(Pr_y[C(x,y)=1]>2/3 \wedge Pr_y[C'(x,y)=0]=1)$ XOR $(Pr_y[C'(x,y)=1]>2/3 \wedge Pr_y[C(x,y)=0]=1)$.
Intuitively, these pairs can model any $RP \cap coRP = ZPP$ function.
To prove $P=ZPP$, it would suffice to have for all $\epsilon > 0$, a polynomial time function which given $1^n$, prints a set $S$ of $poly(n)$ strings of length up to $n$ such that for all circuit pairs $(C,C')$ with size $n$ that are good for length $m=n^{\epsilon}$, and all $x$ of length $m$, $(\exists y \in S)[C(x,y)=1 \vee C'(x,y)=1]$. (This should suffice, since by definition of "good", for all $x$, it cannot be that both $C$ and $C'$ have some input $y$ making them accept. I set $m=n^{\epsilon}$ to keep the condition from being too strong for other reasons.)
The main point is that the hitting set $S$ above "only" has to work for good circuit pairs. Nevertheless, this constraint isn't enough to keep from getting a full hitting set. Consider any circuit $C$ with $\Pr_x[C(x)=1]>2/3$. Write the inputs of $C$ over "$y$-variables" instead of $x$-variables. Look at the circuit pair $(0,C)$, where $0$ is the circuit which outputs zero on all inputs $(x,y)$. This pair trivially satisfies the goodness condition ($C$ and $0$ have the same behavior on all inputs $x$, because they do not depend on $x$ at all). And if there is always an $a \in S$ such that $[C(x,a)=1 \vee 0(x,a) = 1]$ is true, then $S$ is just a hitting set.
You could try to require some "non-trivialness" condition on top of that (say that each circuit in the pair can't be trivial), but the patches I can think of could also be circumvented.
It would be interesting if there is a more general way to formalize this problem, so that one could convincingly show that any hitting set for anything resembling "ZPP circuits" is just a hitting set. | {
"domain": "cstheory.stackexchange",
"id": 4994,
"tags": "cc.complexity-theory, complexity-classes, randomness"
} |
Creating a resource count unit test in Q# | Question: I want to create a unit test in Q# that runs an operation and asserts that it used at most 10 Toffoli operations. How do I do this?
For example, what changes do I have to make to the code below?
namespace Tests {
open Microsoft.Quantum.Diagnostics;
open Microsoft.Quantum.Intrinsic;
operation op() : Unit {
using (qs = Qubit[3]) {
for (k in 0..10) {
CCNOT(qs[0], qs[1], qs[2]);
}
}
}
@Test("ResourcesEstimator")
operation test_op_toffoli_count_at_most_10() : Unit {
...?
op();
...?
if (tof_count > 10) {
fail "Too many Toffolis";
}
}
}
Answer: There are multiple ways to do this, depending on what exactly you want to check.
Using AllowAtMostNCallsCA library operation.
The easiest way (with the code short enough to be included) is applicable if you know that you only use CCNOT gates when you want a Toffoli gate, and you never use Controlled X gates. In this case you can use the AllowAtMostNCallsCA operation, which enforces exactly that:
namespace Tests {
open Microsoft.Quantum.Diagnostics;
open Microsoft.Quantum.Intrinsic;
operation op() : Unit {
using (qs = Qubit[3]) {
for (k in 0..10) {
CCNOT(qs[0], qs[1], qs[2]);
}
}
}
@Test("ResourcesEstimator")
operation test_op_toffoli_count_at_most_10() : Unit {
within {
AllowAtMostNCallsCA(10, CCNOT, "Too many Toffolis");
} apply {
op();
}
}
}
This test will fail right away, and that's how you can notice that the loop actually uses 11 CCNOTs. However, if you replace the CCNOTs in the loop body with Controlled X([qs[0], qs[1]], qs[2]); (which is exactly the same gate), the test will happily pass irrespective of how many times you use the gate.
Using resource estimator.
You can write a test (it will have to be in a different language, though) that will invoke ResourcesEstimator, run an operation using it, extract the statistics and analyze them. I don't have a ready example on hand; the closest thing is this documentation. This approach has the advantage of matching the resource estimates done by ResourcesEstimator exactly, rather than tracking specific gates.
Using custom simulator.
If you want to do some sophisticated checks, for example, count all gates that act on exactly 3 qubits and limit their number, you can write your own simulator that would extend the simulator you like by doing some extra counting whenever an operation is called. This simulator could expose access to those statistics to your Q# code, so you could write custom assertions using this data. I'm not including the full code here, since it's pretty lengthy (and you indicated that you don't want to use a second language); you can find a complete example here (some tasks in the Quantum Katas use tests like "prohibit all multi-qubit operations except Measure", so this simulator comes handy). | {
"domain": "quantumcomputing.stackexchange",
"id": 2166,
"tags": "programming, q#"
} |
Undefined reference to ros (melodic) | Question:
Hi, I'm on Raspbian and trying to make camera vision work with ROS and OpenCV, but I get errors with ROS...
cmake_minimum_required(VERSION 3.0.2)
project(test)
find_package(catkin REQUIRED COMPONENTS roscpp rospy std_msgs)
include_directories(${catkin_INCLUDE_DIRS})
add_executable(camera src/camera.cpp)
find_package(OpenCV)
include_directories(${OpenCV_INCLUDE_DIRS})
target_link_libraries(camera ${OpenCV_LIBRARIES})
My CMakeLists.txt
<package format="2">
<name>test</name>
<version>0.0.0</version>
<description>Test caméra OpenCV et ROS</description>
<buildtool_depend>catkin</buildtool_depend>
<build_depend>roscpp</build_depend>
<build_depend>rospy</build_depend>
<build_depend>std_msgs</build_depend>
<build_depend>opencv2</build_depend>
<run_depend>roscpp</run_depend>
<run_depend>rospy</run_depend>
<run_depend>std_msgs</run_depend>
<run_depend>opencv2</run_depend>
</package>
My package.xml
/usr/bin/ld: camera.cpp:(.text+0x1ac): undefined reference to `ros::Rate::Rate(double)'
/usr/bin/ld: camera.cpp:(.text+0x1b0): undefined reference to `ros::ok()'
/usr/bin/ld: camera.cpp:(.text+0x440): undefined reference to `ros::console::initialize()'
/usr/bin/ld: camera.cpp:(.text+0x48c): undefined reference to `ros::console::initializeLogLocation(ros::console::LogLocation*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, ros::console::levels::Level)'
/usr/bin/ld: camera.cpp:(.text+0x4d0): undefined reference to `ros::console::setLogLocationLevel(ros::console::LogLocation*, ros::console::levels::Level)'
/usr/bin/ld: camera.cpp:(.text+0x4d8): undefined reference to `ros::console::checkLogLocationEnabled(ros::console::LogLocation*)'
/usr/bin/ld: camera.cpp:(.text+0x540): undefined reference to `ros::console::print(ros::console::FilterBase*, void*, ros::console::levels::Level, char const*, int, char const*, char const*, ...)'
/usr/bin/ld: camera.cpp:(.text+0x558): undefined reference to `ros::spinOnce()'
/usr/bin/ld: camera.cpp:(.text+0x564): undefined reference to `ros::Rate::sleep()'
/usr/bin/ld: camera.cpp:(.text+0x5cc): undefined reference to `ros::console::g_initialized'
/usr/bin/ld: camera.cpp:(.text+0x618): undefined reference to `ros::Publisher::~Publisher()'
/usr/bin/ld: camera.cpp:(.text+0x648): undefined reference to `ros::NodeHandle::~NodeHandle()'
/usr/bin/ld: camera.cpp:(.text+0x77c): undefined reference to `ros::Publisher::~Publisher()'
/usr/bin/ld: camera.cpp:(.text+0x7b4): undefined reference to `ros::NodeHandle::~NodeHandle()'
All the cmd commands are working, but it's impossible to find the ros package, and I get no error for "ros.h", so I don't think the problem comes from the include.
Help will be appreciated.
Originally posted by Pombor on ROS Answers with karma: 5 on 2020-07-07
Post score: 0
Answer:
At minimum, you didn't link against ${catkin_LIBRARIES} nor fill out the catkin_package() macro.
You can use this as a relatively complete CMakeLists demo: https://github.com/SteveMacenski/slam_toolbox/blob/melodic-devel/slam_toolbox/CMakeLists.txt (does outside deps, ros deps, etc)
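For reference, a minimal CMakeLists.txt with those two fixes applied (package and target names taken from the question) might look like:

```cmake
cmake_minimum_required(VERSION 3.0.2)
project(test)

find_package(catkin REQUIRED COMPONENTS roscpp rospy std_msgs)
find_package(OpenCV REQUIRED)

# declare this package's catkin dependencies for downstream packages
catkin_package(CATKIN_DEPENDS roscpp rospy std_msgs)

include_directories(${catkin_INCLUDE_DIRS} ${OpenCV_INCLUDE_DIRS})

add_executable(camera src/camera.cpp)
# linking against catkin_LIBRARIES resolves the undefined ros:: symbols
target_link_libraries(camera ${catkin_LIBRARIES} ${OpenCV_LIBRARIES})
```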
Originally posted by stevemacenski with karma: 8272 on 2020-07-07
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 35239,
"tags": "ros, ros-melodic, package.xml, cmake"
} |
Why the available parities of the infinite well solutions change as we change the boundaries positions? | Question: So, I have noticed something in the solutions of the infinite quantum well and I don't quite understand it. The solutions are of the form
\begin{equation}
\phi_{n}(x) = A\cos(kx)+B\sin(kx)
\end{equation}
If the boundaries of the well are at $x=0$ and $x=L$ then the boundary condition $\phi_{n}(x=0)=0$ leads to $A=0$, meaning that we only have odd-parity solutions available.
If, however, the boundaries of the well are at $x=-a$ and $x=a$ we have
\begin{eqnarray}
A\cos(ka)+B\sin(ka)=0 \\
A\cos(ka)-B\sin(ka)=0
\end{eqnarray}
which leads us to
\begin{eqnarray}
A\cos(ka)=0 \\
B\sin(ka)=0
\end{eqnarray}
giving rise to odd and even parity solutions.
How come we gain one type of solutions by moving the boundary? Shouldn't the system be the same independently of where we decide to put the origin of our system?
Thank you very much.
Answer: The energies will be the same but the solutions - of course - will not. The simplest example is to compare $\cos(x)$, which is even, with $\sin(x)$, which is odd. If you just translate by $\pi/2$, then $\cos(x-\pi/2)=\sin(x)$: by simply translating the origin you change the parity of the function. Of course, a sine function is just a cosine function shifted by $\pi/2$, so displacing (or shifting) the origin does not affect the shape of the solution, although it does affect its parity.
The parity depends quite fundamentally on where you place your origin, since parity is a reflection about a reference point: if your well extends from $0$ to $2a$, it is not symmetric, but if it extends from $-a$ to $a$, then it certainly is symmetric.
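This can be verified directly: take the familiar solutions $\sin(n\pi x/L)$ of the well on $[0, L]$ with $L = 2a$, shift the coordinate so the well spans $[-a, a]$, and check each shifted solution against its mirror image about $x=0$ (a small illustrative sketch, with $a=1$):

```python
from math import pi, sin

a = 1.0
xs = [i * a / 100 for i in range(-100, 101)]  # symmetric grid on [-a, a]

def phi(n, x):
    # n-th solution sin(n*pi*x/L) of the well on [0, L], L = 2a,
    # rewritten in the coordinate where the well spans [-a, a]
    return sin(n * pi * (x + a) / (2 * a))

parities = []
for n in range(1, 5):
    even = all(abs(phi(n, x) - phi(n, -x)) < 1e-12 for x in xs)
    odd = all(abs(phi(n, x) + phi(n, -x)) < 1e-12 for x in xs)
    parities.append("even" if even else "odd" if odd else "neither")
```

The parities come out alternating even, odd, even, odd: the same solutions, merely re-expressed about a different origin, now have definite parity.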
In fact, it is entirely possible to solve this problem with the well from $0$ to $2a$ without using parity arguments. The parity operation allows you to connect the boundary conditions, thus dividing the work by $2$, but the cost is that you must solve for the even and odd solutions separately, so there is no real savings in terms of work, although there is additional insight into the solutions. | {
"domain": "physics.stackexchange",
"id": 54955,
"tags": "quantum-mechanics"
} |
Win32 File API in VBA | Question: Win32 File API Wrapper
Based on a few fundamental frustrations with VBA (namely the lack of ability to work with files larger than 2GB, the lack of encapsulation of the file functions and the lack of intellisense to guide my use of the file statements) I put together a wrapper for the Win32 File API. This includes 64-bit functions which allow reading and writing past the 4GB limit of 32 bit addressing.
Concerns
One issue with this wrapper is that for the 32 bit functions, offsets larger than 2GB are negative numbers in VBA since it doesn't have unsigned longs, which I suppose is fine as long as you're aware of it, but it does make use of the API less intuitive and you have to be careful of offset math.
Another issue is the use of Currency for the 64 bit functions - it's kind of a hack and it again makes the math awkward. I would love to incorporate the GB, MB, KB consts into the class somehow, but Enum only supports longs and Const variables can't be public.
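Both concerns can be made concrete with a small sketch (in Python rather than VBA, purely to illustrate the arithmetic; the helper names are made up): an offset past 2 GB reads back as a negative signed 32-bit Long, a 64-bit offset splits into the lo/hi Long pair that SetFilePointer expects, and the Currency trick is the same 64-bit integer divided by 10,000:

```python
GB = 1024 ** 3

def to_signed32(u):
    # reinterpret an unsigned 32-bit value as VBA's signed Long
    u &= 0xFFFFFFFF
    return u - 0x100000000 if u >= 0x80000000 else u

def split64(offset):
    # lo/hi Long pair as passed to SetFilePointer for a 64-bit seek
    return to_signed32(offset & 0xFFFFFFFF), to_signed32(offset >> 32)

lo, hi = split64(3 * GB)        # 3 GB: lo comes out negative, hi is 0
# Currency stores int64 / 10000, so 1 GB of bytes is the literal 107374.1824@
one_gb_as_currency = GB / 10_000
```

This is why offset math on the 32-bit functions needs care: comparisons and additions on those negative Longs no longer behave like unsigned byte offsets.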
I'd appreciate any style advice, corrections to mistakes I've made or suggestions for how to make the wrapper more intuitive.
clsFile
Option Compare Database
Option Explicit
'Based on the example on msdn:
'http://support.microsoft.com/kb/189981
'Some of the constants come from Winnt.h
Public Enum SeekOrigin
so_Begin = 0
so_Current = 1
so_End = 2
End Enum
Public Enum FileAccess
' FILE_READ_DATA = &H1 ' winnt.h:1801
' 'FILE_LIST_DIRECTORY = &H1 ' winnt.h:1802
' FILE_WRITE_DATA = &H2 ' winnt.h:1804
' 'FILE_ADD_FILE = &H2 ' winnt.h:1805
' FILE_APPEND_DATA = &H4 ' winnt.h:1807
' 'FILE_ADD_SUBDIRECTORY = &H4 ' winnt.h:1808
' 'FILE_CREATE_PIPE_INSTANCE = &H4 ' winnt.h:1809
' FILE_READ_EA = &H8 ' winnt.h:1811
' FILE_READ_PROPERTIES = &H8 ' winnt.h:1812
' FILE_WRITE_EA = &H10 ' winnt.h:1814
' FILE_WRITE_PROPERTIES = &H10 ' winnt.h:1815
' FILE_EXECUTE = &H20 ' winnt.h:1817
' 'FILE_TRAVERSE = &H20 ' winnt.h:1818
' 'FILE_DELETE_CHILD = &H40 ' winnt.h:1820
' FILE_READ_ATTRIBUTES = &H80 ' winnt.h:1822
' FILE_WRITE_ATTRIBUTES = &H100 ' winnt.h:1824
FILE_ALL_ACCESS = &H1F01FF ' winnt.h:1826
FILE_GENERIC_READ = &H120089 ' winnt.h:1828
FILE_GENERIC_WRITE = &H120116 ' winnt.h:1835
' FILE_GENERIC_EXECUTE = &H1200A0 ' winnt.h:1843
' FILE_SHARE_READ = &H1 ' winnt.h:1848
' FILE_SHARE_WRITE = &H2 ' winnt.h:1849
' FILE_NOTIFY_CHANGE_FILE_NAME = &H1 ' winnt.h:1860
' FILE_NOTIFY_CHANGE_DIR_NAME = &H2 ' winnt.h:1861
' FILE_NOTIFY_CHANGE_ATTRIBUTES = &H4 ' winnt.h:1862
' FILE_NOTIFY_CHANGE_SIZE = &H8 ' winnt.h:1863
' FILE_NOTIFY_CHANGE_LAST_WRITE = &H10 ' winnt.h:1864
' FILE_NOTIFY_CHANGE_SECURITY = &H100 ' winnt.h:1865
' 'MAILSLOT_NO_MESSAGE = -1 ' winnt.h:1866
' 'MAILSLOT_WAIT_FOREVER = -1 ' winnt.h:1867
' FILE_CASE_SENSITIVE_SEARCH = &H1 ' winnt.h:1868
' FILE_CASE_PRESERVED_NAMES = &H2 ' winnt.h:1869
' FILE_UNICODE_ON_DISK = &H4 ' winnt.h:1870
' FILE_PERSISTENT_ACLS = &H8 ' winnt.h:1871
' FILE_FILE_COMPRESSION = &H10 ' winnt.h:1872
' FILE_VOLUME_IS_COMPRESSED = &H8000 ' winnt.h:1873
' IO_COMPLETION_MODIFY_STATE = &H2 ' winnt.h:1874
' IO_COMPLETION_ALL_ACCESS = &H1F0003 ' winnt.h:1875
' DUPLICATE_CLOSE_SOURCE = &H1 ' winnt.h:1876
' DUPLICATE_SAME_ACCESS = &H2 ' winnt.h:1877
' DELETE = &H10000 ' winnt.h:1935
' READ_CONTROL = &H20000 ' winnt.h:1936
' WRITE_DAC = &H40000 ' winnt.h:1937
' WRITE_OWNER = &H80000 ' winnt.h:1938
' SYNCHRONIZE = &H100000 ' winnt.h:1939
' STANDARD_RIGHTS_REQUIRED = &HF0000 ' winnt.h:1941
' STANDARD_RIGHTS_READ = &H20000 ' winnt.h:1943
' STANDARD_RIGHTS_WRITE = &H20000 ' winnt.h:1944
' STANDARD_RIGHTS_EXECUTE = &H20000 ' winnt.h:1945
' STANDARD_RIGHTS_ALL = &H1F0000 ' winnt.h:1947
' SPECIFIC_RIGHTS_ALL = &HFFFF ' winnt.h:1949
' ACCESS_SYSTEM_SECURITY = &H1000000
End Enum
Public Enum FileShare
NONE = &H0
FILE_SHARE_DELETE = &H4
FILE_SHARE_READ = &H1
FILE_SHARE_WRITE = &H2
End Enum
Public Enum FileCreationDisposition
CREATE_ALWAYS = &H2
CREATE_NEW = &H1
OPEN_ALWAYS = &H4
OPEN_EXISTING = &H3
TRUNCATE_EXISTING = &H5
End Enum
'Public Enum FileFlagsAndAttributes
' 'Attributes
' FILE_ATTRIBUTE_ENCRYPTED = &H4000
' FILE_ATTRIBUTE_READONLY = &H1 ' winnt.h:1850
' FILE_ATTRIBUTE_HIDDEN = &H2 ' winnt.h:1851
' FILE_ATTRIBUTE_SYSTEM = &H4 ' winnt.h:1852
' FILE_ATTRIBUTE_DIRECTORY = &H10 ' winnt.h:1853
' FILE_ATTRIBUTE_ARCHIVE = &H20 ' winnt.h:1854
' FILE_ATTRIBUTE_NORMAL = &H80 ' winnt.h:1855
' FILE_ATTRIBUTE_TEMPORARY = &H100 ' winnt.h:1856
' FILE_ATTRIBUTE_ATOMIC_WRITE = &H200 ' winnt.h:1857
' FILE_ATTRIBUTE_XACTION_WRITE = &H400 ' winnt.h:1858
' FILE_ATTRIBUTE_COMPRESSED = &H800 ' winnt.h:1859
' 'Flags
' FILE_FLAG_BACKUP_SEMANTICS = &H2000000
' FILE_FLAG_DELETE_ON_CLOSE = &H4000000
' FILE_FLAG_NO_BUFFERING = &H20000000
' FILE_FLAG_OPEN_NO_RECALL = &H100000
' FILE_FLAG_OPEN_REPARSE_POINT = &H200000
' FILE_FLAG_OVERLAPPED = &H40000000
' FILE_FLAG_POSIX_SEMANTICS = &H100000
'End Enum
Private Const INVALID_FILE_HANDLE = -1 '&HFFFFFFFF
Private Const FORMAT_MESSAGE_FROM_SYSTEM = &H1000
Private Const INVALID_FILE_SIZE As Long = -1 '&HFFFFFFFF
Private Const INVALID_SET_FILE_POINTER As Long = -1 '&HFFFFFFFF
Private Declare Function FormatMessage Lib "Kernel32" Alias "FormatMessageA" (ByVal dwFlags As Long, _
lpSource As Long, _
ByVal dwMessageId As Long, _
ByVal dwLanguageId As Long, _
ByVal lpBuffer As String, _
ByVal nSize As Long, _
Arguments As Any) As Long
Private Declare Function CreateFile Lib "Kernel32" Alias "CreateFileA" (ByVal lpFileName As String, _
ByVal dwDesiredAccess As Long, _
ByVal dwShareMode As Long, _
lpSecurityAttributes As Long, _
ByVal dwCreationDisposition As Long, _
ByVal dwFlagsAndAttributes As Long, _
hTemplateFile As Long) As Long
Private Declare Function SetFilePointer Lib "Kernel32" (ByVal hFile As Long, _
ByVal lDistanceToMove As Long, _
lpDistanceToMoveHigh As Long, _
ByVal dwMoveMethod As Long) As Long
Private Declare Function ReadFile Lib "Kernel32" (ByVal hFile As Long, _
lpBuffer As Any, _
ByVal nNumberOfBytesToRead As Long, _
lpNumberOfBytesRead As Long, _
ByVal lpOverlapped As Long) As Long
Private Declare Function WriteFile Lib "Kernel32" (ByVal hFile As Long, _
lpBuffer As Any, _
ByVal nNumberOfBytesToWrite As Long, _
lpNumberOfBytesWritten As Long, _
ByVal lpOverlapped As Long) As Long
Private Declare Function FlushFileBuffers Lib "Kernel32" (ByVal hFile As Long) As Long
Private Declare Function GetFileSize Lib "Kernel32" (ByVal hFile As Long, _
lpFileSizeHigh As Long) As Long
Private Declare Function CloseHandle Lib "Kernel32" (ByVal hObject As Long) As Long
Private Declare Sub CopyMemory Lib "Kernel32" Alias "RtlMoveMemory" (ByVal dest As Long, ByVal src As Long, ByVal size As Long)
Private m_Handle As Long
Private Sub Class_Terminate()
If Not m_Handle = 0 Then
Flush
CloseFile
End If
End Sub
Public Sub OpenFile(path As String, Optional access As FileAccess = FileAccess.FILE_GENERIC_READ, Optional share As FileShare = FileShare.NONE, Optional CreationDisposition As FileCreationDisposition = FileCreationDisposition.OPEN_ALWAYS)
Dim Ret As Long
Ret = CreateFile(path, access, share, ByVal 0&, CreationDisposition, 0&, ByVal 0&)
If Ret = INVALID_FILE_HANDLE Then
Err.Raise vbObjectError + Err.LastDllError, "clsFile.OpenFile", DecodeAPIErrors(Err.LastDllError)
Else
m_Handle = Ret
End If
End Sub
'Properties
Public Property Get Length() As Double
Dim Ret As Currency
Dim FileSizeHigh As Long
Ret = GetFileSize(m_Handle, FileSizeHigh)
If Not Ret = INVALID_FILE_SIZE Then
Length = Ret
Else
Err.Raise vbObjectError + Err.LastDllError, "clsFile.Length", DecodeAPIErrors(Err.LastDllError)
End If
End Property
Public Property Get Position() As Long
Dim Ret As Long
Dim DistanceToMoveHigh As Long
Ret = SetFilePointer(m_Handle, 0&, DistanceToMoveHigh, 1&) '1 is FILE_CURRENT
If DistanceToMoveHigh = 0 Then
If Ret = -1 Then
Position = -1 'EOF'
Else
Position = Ret
End If
Else
Class_Terminate
Err.Raise vbObjectError + Err.LastDllError, "clsFile.Position", DecodeAPIErrors(Err.LastDllError)
End If
End Property
Public Property Get Handle() As Long
Handle = m_Handle
End Property
'Functions
Public Function ReadBytes(ByRef buffer() As Byte, ByVal buffer_offset As Long, ByVal count As Long) As Long
Dim Ret As Long
Dim BytesRead As Long
Ret = ReadFile(m_Handle, buffer(buffer_offset), count, BytesRead, 0&)
If Ret = 1 Then
ReadBytes = BytesRead
Else
Class_Terminate
Err.Raise vbObjectError + Err.LastDllError, "clsFile.ReadBytes", DecodeAPIErrors(Err.LastDllError)
End If
End Function
Public Function ReadBytesPtr(ByVal ptrBuf As Long, ByVal buffer_offset As Long, ByVal count As Long) As Long
Dim Ret As Long
Dim BytesRead As Long
Ret = ReadFile(m_Handle, ByVal ptrBuf + buffer_offset, count, BytesRead, 0&)
If Ret = 1 Then
ReadBytesPtr = BytesRead
Else
Class_Terminate
Err.Raise vbObjectError + Err.LastDllError, "clsFile.ReadBytesPtr", DecodeAPIErrors(Err.LastDllError)
End If
End Function
Public Function WriteBytes(ByRef buffer() As Byte, ByVal buffer_offset As Long, ByVal count As Long) As Long
Dim Ret As Long
Dim BytesWritten As Long
Ret = WriteFile(m_Handle, buffer(buffer_offset), count, BytesWritten, 0&)
If Ret = 1 Then
WriteBytes = BytesWritten
Else
Class_Terminate
Err.Raise vbObjectError + Err.LastDllError, "clsFile.WriteBytes", DecodeAPIErrors(Err.LastDllError)
End If
End Function
Public Function WriteBytesPtr(ByVal ptrBuf As Long, ByVal buffer_offset As Long, ByVal count As Long) As Long
Dim Ret As Long
Dim BytesWritten As Long
Ret = WriteFile(m_Handle, ByVal ptrBuf + buffer_offset, count, BytesWritten, 0&)
If Ret = 1 Then
WriteBytesPtr = BytesWritten
Else
Class_Terminate
Err.Raise vbObjectError + Err.LastDllError, "clsFile.WriteBytesPtr", DecodeAPIErrors(Err.LastDllError)
End If
End Function
Public Function SeekFile(ByVal LoBytesOffset As Long, origin As SeekOrigin) As Long
Dim Ret As Long
Dim HiBytesOffset As Long
Ret = SetFilePointer(m_Handle, LoBytesOffset, HiBytesOffset, origin)
If Not Ret = INVALID_SET_FILE_POINTER Then
SeekFile = Ret
Else
Err.Raise vbObjectError + Err.LastDllError, "clsFile.SeekFile", DecodeAPIErrors(Err.LastDllError)
End If
End Function
Public Function SeekFile64bit(ByVal offset As Currency, origin As SeekOrigin) As Currency
'Take care with this function. A Currency variable is an 8-byte (64-bit) scaled (by 10,000) fixed-point number.'
'This means that setting a Currency variable to 0.0001 is the equivalent of a binary value of 1.'
'If you want to set an offset with an immediate value, write it like so:'
'1073741824 Bytes (1 GB) would be 107374.1824@, where @ is the symbol for an immediate Currency value.'
'Refer to http://support.microsoft.com/kb/189862 for hints on how to do 64-bit arithmetic'
Dim Ret As Long
Dim curFilePosition As Currency
Dim LoBytesOffset As Long, HiBytesOffset As Long
CopyMemory VarPtr(HiBytesOffset), VarPtr(offset) + 4, 4
CopyMemory VarPtr(LoBytesOffset), VarPtr(offset), 4
Ret = SetFilePointer(m_Handle, LoBytesOffset, HiBytesOffset, origin)
CopyMemory VarPtr(curFilePosition) + 4, VarPtr(HiBytesOffset), 4
CopyMemory VarPtr(curFilePosition), VarPtr(Ret), 4
SeekFile64bit = curFilePosition
End Function
Public Sub CloseFile()
Dim Ret As Long
Ret = CloseHandle(m_Handle)
m_Handle = 0
End Sub
Public Sub Flush()
Dim Ret As Long
Ret = FlushFileBuffers(m_Handle)
End Sub
'***********************************************************************************
' Helper function, from Microsoft page as noted at top
Private Function DecodeAPIErrors(ByVal ErrorCode As Long) As String
Dim sMessage As String, MessageLength As Long
sMessage = Space$(256)
MessageLength = FormatMessage(FORMAT_MESSAGE_FROM_SYSTEM, 0&, _
ErrorCode, 0&, sMessage, 256&, 0&)
If MessageLength > 0 Then
DecodeAPIErrors = Left(sMessage, MessageLength)
Else
DecodeAPIErrors = "Unknown Error."
End If
End Function
And an example:
mdlMain
Option Compare Database
Option Explicit
Const GB As Currency = 107374.1824@
Const MB As Currency = 104.8576@
Const KB As Currency = 0.1024@
Public Sub Main()
Dim oFile As New clsFile
oFile.OpenFile "largefilepath"
oFile.SeekFile64bit 6 * GB, so_Begin
End Sub
Answer: I'm not overly familiar with using WinApi calls from VBA, but I'll do my best here, because this is a cool piece of code. Let's get started.
Option Compare Database
This line ties your class to access. It won't compile in any other host app. I try to keep utility classes like this host agnostic. Removing this option will allow you to use this class in any app that supports VBA. I honestly don't like this option anyway. It ties how the code behaves to the environment it's running in by letting Access determine how string comparisons are made. If you're going to use an Option Compare, choose either Text or Binary depending on your needs. Both of those are available in any of the host apps by the way. (I know, Access probably inserted this line for you, moving on...)
'Based on the example on msdn:
'http://support.microsoft.com/kb/189981
I love comments like this. Awesome. Well done! But MS is notorious for killing URLs on a whim with no redirect. It would help to leave the title of the article so it can be searched for if the link goes dead.
Public Enum FileAccess
' FILE_READ_DATA = &H1 ' winnt.h:1801
' 'FILE_LIST_DIRECTORY = &H1 ' winnt.h:1802
' FILE_WRITE_DATA = &H2 ' winnt.h:1804
' 'FILE_ADD_FILE = &H2 ' winnt.h:1805
' FILE_APPEND_DATA = &H4 ' winnt.h:1807
Normally, I'd say that this is dead code and you should kill it, but I get why you've done this. It's good documentation and all the work is already done should you decide that you need any of these additional values. I'd leave a comment threatening a psychotic episode should anyone ever "be helpful" and remove it, because, let's face it, someone like me could easily come along and wipe out all this "dead code" without batting an eye about it.
Private Const INVALID_FILE_HANDLE = -1 '&HFFFFFFFF
Private Const FORMAT_MESSAGE_FROM_SYSTEM = &H1000
Private Const INVALID_FILE_SIZE As Long = -1 '&HFFFFFFFF
Private Const INVALID_SET_FILE_POINTER As Long = -1 '&HFFFFFFFF
It's petty, and doesn't really matter, but I might do this the other way round for consistency, or better yet, leave a single explanatory comment.
' &HFFFFFFFF == -1
' &H1000 == 4096
Private Const INVALID_FILE_HANDLE = &HFFFFFFFF
Private Const FORMAT_MESSAGE_FROM_SYSTEM = &H1000
Private Const INVALID_FILE_SIZE As Long = &HFFFFFFFF
Private Const INVALID_SET_FILE_POINTER As Long = &HFFFFFFFF
I like the way you're handling errors, but you could extract a method to reduce the duplication.
Private Sub RaiseError(ByVal caller As String)
Err.Raise vbObjectError + Err.LastDllError, TypeName(Me) & "." & caller, DecodeAPIErrors(Err.LastDllError)
End Sub
There's a magic number in SeekFile64bit.
CopyMemory VarPtr(HiBytesOffset), VarPtr(offset) + 4, 4
CopyMemory VarPtr(LoBytesOffset), VarPtr(offset), 4
Ret = SetFilePointer(m_Handle, LoBytesOffset, HiBytesOffset, origin)
CopyMemory VarPtr(curFilePosition) + 4, VarPtr(HiBytesOffset), 4
CopyMemory VarPtr(curFilePosition), VarPtr(Ret), 4
I'm not terribly familiar with directly working with pointers this way, so I have no idea why this uses 4. A well named constant would help a schmuck like me understand what's happening here.
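For what it's worth, here is one way it could be named (a hypothetical sketch, not from the original code — `LONG_SIZE_BYTES` is my invention; the 4 appears to be the size in bytes of a 32-bit Long, i.e. half of the 8-byte Currency):

```vb
'Hypothetical named constant: a VB6/VBA Long is 4 bytes (half of an 8-byte Currency)
Private Const LONG_SIZE_BYTES As Long = 4

'Split the 8-byte Currency offset into its low and high 4-byte halves
CopyMemory VarPtr(HiBytesOffset), VarPtr(offset) + LONG_SIZE_BYTES, LONG_SIZE_BYTES
CopyMemory VarPtr(LoBytesOffset), VarPtr(offset), LONG_SIZE_BYTES
```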
And that's all very minor really. This is good code. Unfortunately, I've no better ideas about how to simplify the API. It's a side effect of needing to use the Currency type to get large enough values. I've got.... nothing. I think the best you can do is add some example code inside of the class and document the use of the class's API as best you can.
What you may be able to do is some precompiler directive magic. You're using Currency to get a 64-bit integer, right? Well, on 64-bit installs, there's the LongLong type. So, you might be able to clean this up for use in that environment, but in my experience, very few people are actually running 64-bit installs of Office, so it may not be worth the effort. Particularly when it would mean that you would effectively have two APIs for the same class. | {
"domain": "codereview.stackexchange",
"id": 20203,
"tags": "vba, winapi"
} |
Error using twitter R package's userTimeline | Question: I am using the twitteR package to retrieve timeline data. My request looks as follows:
tweets <- try(userTimeline(user , n=50),silent=TRUE)
and this worked quite well for a time, but now I receive this error message:
Error in function (type, msg, asError = TRUE) : easy handle already used in multi handle
In a related question on Stackoverflow one answer is to use Rcurl directly but this does not seem to work with twitteR package. Anybody got an idea on this?
Answer: It seems to be working well on my configuration:
Ubuntu Vivid and R:
> sessionInfo()
R version 3.1.2 (2014-10-31)
Platform: x86_64-pc-linux-gnu (64-bit)
locale:
[1] LC_CTYPE=en_US.UTF-8 LC_NUMERIC=C
[3] LC_TIME=en_US.UTF-8 LC_COLLATE=en_US.UTF-8
[5] LC_MONETARY=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8
[7] LC_PAPER=en_US.UTF-8 LC_NAME=C
[9] LC_ADDRESS=C LC_TELEPHONE=C
[11] LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C
attached base packages:
[1] stats graphics grDevices utils datasets methods base
other attached packages:
[1] twitteR_1.1.8
loaded via a namespace (and not attached):
[1] bit_1.1-12 bit64_0.9-4 bitops_1.0-6 DBI_0.3.1 httr_0.6.1
[6] magrittr_1.5 RCurl_1.95-4.6 rjson_0.2.15 stringi_0.4-1 stringr_1.0.0
[11] tools_3.1.2
Maybe you should update your package versions? | {
"domain": "datascience.stackexchange",
"id": 291,
"tags": "data-mining, r"
} |
How to transform a DNAStringSet from the Bioconductor package Biostrings to a data frame? | Question: I am working on Mac OS X. I am using R version 3.5.1.
I have imported a FASTA file into R using Biostrings::readDNAStringSet. This creates a DNAStringSet object in R.
I would like to transform it into a data frame so that I can then plot the widths with ggplot2.
as.data.frame on my DNAStringSet object returns a data frame with only one column. Is there a function that returns a data frame with columns width, seq and names present in the DNAStringSet object?
Answer: According to the documentation (?Biostrings::DNAStringSet):
width(x): A vector of non-negative integers containing the number of
letters for each element in x. Note that width(x) is
also defined for a character vector with no NAs and is
equivalent to nchar(x, type="bytes").
names(x): NULL or a character vector of the same length as x
containing a short user-provided description or comment for
each element in x. These are the only data in an
XStringSet object that can safely be changed by the user. All
the other data are immutable! As a general recommendation,
the user should never try to modify an object by accessing
its slots directly.
as.character(x, use.names=TRUE): Converts x to a character vector
of the same length as x. The use.names argument controls
whether or not names(x) should be propagated to the names
of the returned vector.
This should be enough for you to create a data.frame:
dss2df <- function(dss) data.frame(width=width(dss), seq=as.character(dss), names=names(dss))
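For example, a quick usage sketch (assuming Biostrings is installed and attached; the toy sequences are mine):

```r
library(Biostrings)
dss <- DNAStringSet(c(seq1 = "ACGT", seq2 = "ACGTACGT"))
dss2df <- function(dss) data.frame(width = width(dss),
                                   seq = as.character(dss),
                                   names = names(dss))
df <- dss2df(dss)
# df now has the three columns width, seq and names,
# ready for ggplot2, e.g. ggplot(df, aes(width)) + geom_histogram()
```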
Also, if you have a huge DNAStringSet, perhaps you do not want to convert all the sequences to characters; storing sequences as characters supposedly is less efficient. For plotting it might be sufficient to have just widths.
qplot(width(dss), geom='histogram') | {
"domain": "bioinformatics.stackexchange",
"id": 750,
"tags": "r, bioconductor, format-conversion, biostrings"
} |
What are the products of the reaction of methoxypropane with HBr? | Question:
The products of the reaction of methoxypropane ($\ce{CH3CH2CH2OCH3}$) with $\ce{HBr}$ are?
In my opinion, the products should be: bromopropane ($\ce{CH3CH2CH2Br}$) + methanol ($\ce{CH3OH}$). The first step is the cleavage of the ether. This can happen via the formation of a propyl carbocation or a methyl carbocation. Since the propyl carbocation is more stable, bromine will react with the same to form bromopropane (the other product thus formed is methanol)
But the products given in the solution were propanol ($\ce{CH3CH2CH2OH}$) + bromomethane ($\ce{CH3Br}$). So, what is wrong in my reasoning? And what is the correct reasoning?
Source: NCERT (India) 12th
Answer: The first step is protonation of the oxygen of the ether. This cationic species is then subject to nucleophilic attack by $\ce{Br-}$. There are two potential sites of attack, i.e. the carbons bonded to the protonated oxygen. Bromide is quite a bulky nucleophile, and of the two sites, the methyl group carbon is less hindered, so this is attacked in preference, leading to bromomethane and propan-1-ol as your products. | {
"domain": "chemistry.stackexchange",
"id": 10192,
"tags": "organic-chemistry, halides, alcohols, ethers"
} |
In Cirq, how do you display circuit diagrams that are "prettier" than the ones displayed by default? | Question: This is a duplicate of a question that was asked on the Cirq issues page. I'm duplicating this question to increase its visibility.
Answer: The standard way to print circuits in Cirq is by calling print(circuit), which prints the text diagram representation of the circuit. A "prettier" representation can be displayed with SVGCircuit. In a Jupyter notebook,
import cirq
from cirq.contrib.svg import SVGCircuit
a, b = cirq.LineQubit.range(2)
SVGCircuit(cirq.Circuit(cirq.X(a), cirq.CNOT(a,b)))
outputs an SVG rendering of the circuit.
SVGCircuit is a work-in-progress. Please add comments explaining what problems you run into on the Circuit.repr_svg with nice diagrams for ipython issue. | {
"domain": "quantumcomputing.stackexchange",
"id": 5035,
"tags": "programming, cirq"
} |
rosserial with 2 arduino | Question:
Hi everyone. I have a problem when interfacing a Raspberry Pi with two Arduinos through rosserial_python. When I connect to the second Arduino after the first one succeeded, the first terminal says: **shutdown request: new node registered with same name**
I searched and found a suggested solution, namely to include the parameter _name after the rosrun command, but it did not work for me. Please help me. Thank you, and sorry for my English.
Originally posted by drtritm on ROS Answers with karma: 187 on 2020-01-09
Post score: 0
Answer:
You cannot have 2 or more nodes with the same name. So you need to change the name of one of your rosserial nodes, eg:
rosrun rosserial_python serial_node.py __name:="node2" ... etc
Note the double underscore preceding name.
Or, as suggested, using a launch file:
<launch>
<!-- Launch 2 Arduino boards -->
<node
pkg="rosserial_python"
type="serial_node.py"
name="ArduinoOne"
args="/dev/ttyACM0"
></node>
<node
pkg="rosserial_python"
type="serial_node.py"
name="ArduinoTwo"
args="/dev/ttyACM1"
></node>
</launch>
The above requires the Arduino boards to be /dev/ttyACM0 and /dev/ttyACM1. This can be avoided by using udev rules to give each board a unique symbolic name, eg:
<launch>
<!-- Launch the Arduino Board with Adafruit 9-DOF IMU -->
<node
pkg="rosserial_python"
type="serial_node.py"
name="arduino_imu_9dof"
args="/dev/elegooOne"
respawn="true"
></node>
</launch>
So above we've given the Arduino board the symbolic name elegooOne. Larger Arduino boards, eg Uno, Mega, have a unique serial number that's good for udev rules.
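For reference, such a udev rule might look roughly like this (a sketch: the idVendor and serial values below are placeholders — get the real values for your board with `udevadm info -a -n /dev/ttyACM0`):

```
# e.g. /etc/udev/rules.d/99-arduino.rules
SUBSYSTEM=="tty", ATTRS{idVendor}=="2341", ATTRS{serial}=="<your-board-serial>", SYMLINK+="elegooOne"
```

After reloading the rules (`sudo udevadm control --reload-rules && sudo udevadm trigger`), the board should appear as /dev/elegooOne regardless of enumeration order.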
Originally posted by Marvin with karma: 91 on 2020-01-09
This answer was ACCEPTED on the original site
Post score: 3
Original comments
Comment by drtritm on 2020-01-12:
Yes, I did it as I mentioned in the question, and it did not work
Comment by Marvin on 2020-01-14:
When using rosrun try using the name argument with a double underscore to rename. Obviously, you can check, one at a time, that the nodes are being renamed as expected via rosnode list.
Comment by gvdhoorn on 2020-01-14:
I would suggest to use a launch file instead. It's easier to edit, easier to start and was intended to be used to start multiple ROS nodes.
Comment by drtritm on 2020-01-17:
thanks very much | {
"domain": "robotics.stackexchange",
"id": 34250,
"tags": "ros, ros-melodic, rosserial"
} |
Recognition of markers in a long audio recording | Question: I am about to do a lot of recordings when performing tests. The thing is that there are some parts I would like to extract easily from my recording. Is there a way to play some special sequence of sounds, for example a few beeps, and then search the whole recording for those markers?
In the end I want to obtain a file with labeled segments, similar to what Audacity produces. Until now I was using hand claps and searching for energy spikes in the long recorded signal. Since I am performing measurements of a room with a starter pistol, this is not a good idea. Later this will allow me very quick and easy analysis to search for groups of similar sounds. For example:
start time, end time, location
100, 399, loc1
500, 600, loc2
I tried calculating correlation with some pattern, but when the room is reverberant it does not perform very well. I get some false markers from thresholded correlation. This is because the marker is smeared and not exact.
What would be the best method and type of a signal for that task? Any good and fast techniques such as matched filtering?
Addendum:
Saying simply what I want:
play the marker sound/pattern at random times
record this signal and other signals
search for these recorded patterns in very long recording
being able to say at which point in time those patterns are
for example cross correlation (isn't working well) could give spikes i can search for
it should be idiot-proof against noise, tonal/impulsive sounds, reverberation and low playback level
it should not be very slow; for example, for a 10-minute file, 40 seconds is my upper limit
Answer: You can use DTMF tones which are very easily decodable with resonator banks. Brief explanation at http://en.m.wikipedia.org/wiki/Dual-tone_multi-frequency_signaling
If you are in a reverberant room then something like a mobile phone will not put too much energy in it and consequently the recording will not be affected too much by the reverberation. Also, DTMFs are relatively high pitch and would stand out from the background.
There are 16 combinations in a typical keypad but then you can alter symbols (e.g. 1213), possibly in a recording, to build more complex phrases or markers. Altering the symbols quicker than the time constant of the room will reduce the effect of reverberation.
If you are using the starter gun to obtain impulse responses from the room, you can play a sequence in the beginning and just after the end of the recording to roughly identify a particular response but I wouldn't recommend relying on the markers for absolute syncing. You can add a mic close to the gun to obtain the pulse start and end and use the room mics to estimate the dynamics of the room (attack, decay and others)
Hope this helps
Updated part of this answer:
The general idea remains the same, tag a point in time by playing a sound and detect its presence later. The question now is, which sounds and how to detect them. Here is an example using DTMF codes.
1) Get a DTMF generator (for example, a DTMF android app such as this one, https://play.google.com/store/apps/details?id=hu.soska.dtmf&hl=en )
2) In a "clean" (non-reverberant) room, record a sequence of sounds, for example "1928". This is the "tag". Please make sure that each number is keyed in for at least 1 second.
3) In the room under measurement, start your recording. When you want to tag a point, have the "1928" sound reproduced and recorded along with the main recording.
4) Build a filter bank of 8 filters at the DTMF frequencies. The resonators are dead simple filters, here is some example code to get your coefficients: http://lifeorange.com/MATLAB/MATLAB_FD.htm
5) Open the file, run the recording through the filters. Their output will vary depending on the power of the tone in the signal. During the reproduction of DTMF tones, the output of the filters will be increasing. Apply a very simple rule: To maintain a "correct match" you would have to see pairs of filters (representing numbers) going "high" and staying high for at least 1 second. Look for this rule in the outputs of your filters. You now have the matches.
Why single tones and resonators? Because they are very easy and fast to decode.
Alternatives?
Record a Pseudorandom generator signal at step 4 and use cross correlation for step 5. Cross correlation though will be much slower because it would have to run repeatedly in a "sliding window" fashion (please see http://en.wikipedia.org/wiki/Overlap%E2%80%93add_method).
An alternative for step 5 could be multimon (please see http://manpages.ubuntu.com/manpages/gutsy/man1/multimon.1.html ), but that would not give you the offset.
These are all approximate solutions though; they will not give you a dead accurate point in time at which your "event" occurs, only a rough approximation (which also depends on the length of your recording).
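To make steps 4 and 5 concrete, here is a small self-contained sketch of the resonator-bank detection idea in Python (illustrative only; the Goertzel algorithm is one standard way to implement a single-frequency resonator, and the sample rate and window length here are my own choices):

```python
import math

def goertzel_power(samples, fs, freq):
    """Power of `freq` in `samples` via the Goertzel algorithm
    (a single-bin DFT, equivalent to a two-pole resonator filter)."""
    n = len(samples)
    k = round(freq * n / fs)          # nearest DFT bin
    coeff = 2.0 * math.cos(2.0 * math.pi * k / n)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2

fs = 8000
t = [i / fs for i in range(800)]      # one 0.1 s analysis window
# Synthesise the DTMF digit "1" = 697 Hz + 1209 Hz
tone = [math.sin(2 * math.pi * 697 * x) + math.sin(2 * math.pi * 1209 * x)
        for x in t]

dtmf_rows = [697, 770, 852, 941]
dtmf_cols = [1209, 1336, 1477, 1633]
row = max(dtmf_rows, key=lambda f: goertzel_power(tone, fs, f))
col = max(dtmf_cols, key=lambda f: goertzel_power(tone, fs, f))
print(row, col)  # -> 697 1209, i.e. digit "1" detected
```

In a real recording you would slide this window along the signal and apply the "stays high for at least 1 second" rule to the per-window powers.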
Hope this helps. | {
"domain": "dsp.stackexchange",
"id": 2534,
"tags": "discrete-signals, audio"
} |
Handling a feature with multiple categorical values for the same instance value | Question: I have data in the following form:
table 1
id, feature1, predict
1, xyz,yes
2, abc, yes
table2
id, feature2
1, class1
1, class2
1, class3
2, class2
I could perform a one many join and train on the resultant set- which is one way to go about it. But If I rather wanted to maintain the length of the resultant set equal length of table 1, what is the technique?
Answer: One possible approach is to perform an encoding, where each level of the feature2 corresponds to a new feature (column).
This way you can describe the 1:N relation between features 1 and 2.
Here a small example in R
> table1 <- data.frame(id = c(1,2), feature1 = c("xyz","abc"), predict = c(T,T))
> table2 <- data.frame(id = c(1,1,1,2), feature2 = c("class1", "class2", "class3", "class2"))
>
> ## encoding
> table(table2)
feature2
id class1 class2 class3
1 1 1 1
2 0 1 0
The new object contains the (now unique) id and setting of the feature2.
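For completeness, the same encode-then-merge pipeline in Python/pandas, in case you are not tied to R (a sketch using the toy tables from the question):

```python
import pandas as pd

table1 = pd.DataFrame({"id": [1, 2],
                       "feature1": ["xyz", "abc"],
                       "predict": ["yes", "yes"]})
table2 = pd.DataFrame({"id": [1, 1, 1, 2],
                       "feature2": ["class1", "class2", "class3", "class2"]})

# Encoding step: one indicator column per level of feature2, one row per id
encoded = pd.crosstab(table2["id"], table2["feature2"]).reset_index()

# Merge back onto table1; a left join keeps exactly len(table1) rows
result = table1.merge(encoded, on="id", how="left")
print(result)
```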
You need only to merge (join) the result to table1 (basically the same task as a DB join; which variant: inner, outer or full, depends on your requirements). | {
"domain": "datascience.stackexchange",
"id": 568,
"tags": "machine-learning, dataset, data-cleaning, feature-extraction"
} |
Variant Allele Frequency (VAF) peaks for clonal CNAs | Question: I am reading an overview of the CNAqc package, which defines how the algorithm computes "Variant Allele Frequency (VAF) peaks for clonal CNAs."
mutations present in a fraction $0<c<1$ of tumour cells, sitting on a segment $n_A:n_B$;
tumour purity $\pi$;
a healthy diploid normal;
The proportion of all reads from the tumour is $\pi(n_A+n_B)$, and from the normal is $2(1-\pi)$. Mutations present in $m$ copies of the tumour genome should therefore peak at the VAF value
$$v_m(c)=\frac{m \pi c}{2(1−\pi)+\pi(n_A+n_B)}.$$
However, I don't understand exactly the definition $n_A:n_B$ in this context.
Are $n_A:n_B$ the numbers of reads coming from allele $A$ and allele $B$ in the tumor? But then, why does the normal have only $n = 2$?
Answer: On further inspection of the associated paper, $n_A$ and $n_B$ are the allele-specific copy numbers.
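As a sanity check of the peak formula, here is a small worked example (the purity and copy-number values are illustrative choices of mine):

```python
def vaf_peak(m, c, purity, nA, nB):
    """Expected VAF peak: v_m(c) = m*pi*c / (2*(1 - pi) + pi*(nA + nB))."""
    return (m * purity * c) / (2 * (1 - purity) + purity * (nA + nB))

# Clonal (c = 1), single-copy (m = 1) mutation on a diploid 1:1 segment, 80% purity:
print(vaf_peak(1, 1, 0.8, 1, 1))   # ~0.4, the familiar heterozygous diploid peak
# The same mutation on a 2:1 trisomic segment: 0.8/2.8, roughly 0.29
print(vaf_peak(1, 1, 0.8, 2, 1))
```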
And it definitely makes sense that we have 2 (diploid) for the normal component. | {
"domain": "biology.stackexchange",
"id": 12002,
"tags": "bioinformatics, cancer"
} |
How are pointers modeled on bit-based computer models? | Question: Why bit-based computer models?
Perhaps the most commonly used computer model is a random access machine that can store natural (or even real) numbers in infinitely many cells indexed by natural numbers. This model's main benefit is that it greatly simplifies runtime analysis, as arithmetic and indexing are simply assumed to be O(1), so there is no need to estimate the size of the numbers and pointers used.
This model is, however, obviously not realistic, as real computers simply cannot store natural numbers of arbitrary length in one "cell", or operate on them in constant time. This motivates bit-based (or byte-based) computer models, where any cell holds either 1 or 0. The best known of these is the Turing Machine.
The indexing problem
While one certainly can simulate indexing on a Turing Machine or similar, it is probably painfully slow. In reality, this problem is solved by building computers with specialized indexing hardware, and this remedy obviously is not accounted for in the Turing Machine model.
The question
Now the question is, is there a bit-based computer model that models faster indexing (which in reality is achieved by specialized hardware) in a sensible way?
In particular, is there a bit-based model where indexing is in O(1), regardless of index length? Would such a model even make sense?
Answer: It sounds like you're just asking for the definition of Random-access machine, which is similarly formal to Turing machines, but formally defines how to do "indexing" operations.
There's also easy ways to equip a Turing machine with random access capabilities. You give it an extra "index" tape that it can only write to (so it can't use it for extra read-memory), and the Turing machine has an extra operation which is "Query". Upon doing a "Query" operation, you look at the position on the (main) tape pointed to by the "index" tape, take the symbol there, write it under the Turing machine's head on the (main) tape. So in this way, the Turing machine writes out the location that it wants to read, in binary, and then the query lets it copy the appropriate data from there. Similarly, then, there's a "Store" operation that takes the symbol under the Turing machine's head and writes it somewhere else in memory based on what's on the index tape. This generally captures efficient pointer operation. It does mean that copying pointers takes $\log(n)$ time, where $n$ is the size of memory in use. Typically $\log(n)$ factors are happily ignored of course; they only come into play when doing very fine-grained runtime analysis. It also adds a factor of $\log(n)$ to space complexity over a RAM-style "register machine", which is slightly more likely to be an issue. | {
"domain": "cs.stackexchange",
"id": 21574,
"tags": "time-complexity, asymptotics, computer-architecture, runtime-analysis, computation-models"
} |
How does the diameter of an outlet pipe for a given pump affect the mass flow rate? | Question: For a given pump moving water out of a container, how does the diameter of the pipe leaving the pump affect the mass or volume of water transferred in a given unit of time? We can make simplifications and assume that frictional losses are negligible.
My instinct is that more cross sectional area allows more water to flow through, but the speed at which the water flows will drop correspondingly, and the volumetric flow rate would be unaffected. After all, the pump does a given amount of work on the water, and that remains unchanged in these scenarios.
Answer: In steady state, at a given speed the pump operates at a point determined by the intersection of the pump head curve (pump head as function of volumetric flow rate) and the system head curve (elevation and friction loss in the system served by the pump). For a frictionless outlet pipe the system head curve is not changed so the pump flow rate is not changed, but the velocity in the pipe increases.
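As a toy illustration of the operating-point idea (the curve shapes are standard quadratic approximations, but all the numbers are invented, not from any real pump): with a pump curve H = H0 - a*Q^2 and a system curve H = H_static + k*Q^2, the operating flow solves H_pump(Q) = H_system(Q):

```python
import math

# Invented illustrative curves (not from the answer):
# pump head:   H = H0 - a*Q^2
# system head: H = H_static + k*Q^2   (k grows as the pipe diameter shrinks)
H0, a = 50.0, 2.0
H_static = 10.0

def operating_flow(k):
    """Flow rate where pump head equals system head (the operating point)."""
    return math.sqrt((H0 - H_static) / (a + k))

q_frictionless = operating_flow(0.0)  # frictionless pipe: diameter has no effect
q_small_pipe = operating_flow(3.0)    # smaller pipe -> more friction -> less flow
print(q_frictionless, q_small_pipe)
```

With k = 0 the flow is fixed by the pump curve alone; any nonzero friction coefficient moves the intersection to a lower flow, which is the answer's second point.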
In reality, the greater velocity through the smaller pipe causes more friction loss which changes the operating point and more power must be supplied to maintain constant flow rate. | {
"domain": "physics.stackexchange",
"id": 92993,
"tags": "fluid-dynamics"
} |
Why the carrier frequency does not appear after AM modulation? | Question: Okay, this might be a very basic question, but I'm completely new to the field.
I have a $40$ Hz sine wave ($1000$ samples, $fs = 1$ kHz), which I have amplitude modulated with a $4$ Hz sine wave ($1000$ samples, $fs = 1$ kHz). In the time-domain, the result looks like I expected. However, when I calculate the FFT, there are $2$ sidebands but nothing at $40$ Hz anymore. If I increase the FFT resolution with zero-padding I can see something there, but very modest.
Is this expected behaviour? Can someone explain what is happening here and why. Thank you!
EDIT: to clarify, I get two peaks which are at 16 Hz and 24 Hz.
Answer: What you did is multiply two sinusoids with each other:
$$z(n)=\sin(n\theta_1)\cdot\sin(n\theta_2)$$
where $\theta_1=2\pi\cdot 40/1000$ and $\theta_2=2\pi\cdot 4/1000$.
The result $z(n)$ can be written as
$$z(n)=\frac{1}{2}\left[\cos(n(\theta_1-\theta_2))-\cos(n(\theta_1+\theta_2))\right]$$
So you get two sinusoids with the sum and the difference of the two original frequencies, i.e. one with $40-4=36Hz$, and the other one with $40+4=44Hz$. This is exactly what you see in your FFT plot. You use a 1000-point FFT, i.e. the frequency at index $i$, ($i=1,2,\ldots,1000)$, is given by
$$f_i=\frac{f_s}{1000}(i-1)=i-1,\quad i=1,\ldots,1000$$
where $f_s=1000Hz$ is the sampling frequency. You get peaks at $i=37$ and $i=45$, which correspond to frequencies $f_{37}=36Hz$ and $f_{45}=44Hz$, exactly as expected.
I think you made a mistake in the mapping of FFT indices to frequencies.
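A quick numerical check of both cases, using the question's parameters (the modulation index 0.5 in the second part is an arbitrary choice of mine):

```python
import numpy as np

fs, n = 1000, 1000
t = np.arange(n) / fs

# Plain product of two sines -- what the question did
z = np.sin(2 * np.pi * 40 * t) * np.sin(2 * np.pi * 4 * t)
spec = np.abs(np.fft.rfft(z))
# bin i of rfft is i Hz here (fs/n = 1 Hz per bin); note this is 0-based,
# i.e. bin i corresponds to FFT index i+1 in the 1-based notation above
peaks = sorted(int(i) for i in np.argsort(spec)[-2:])
print(peaks)  # [36, 44] -- only the sum and difference frequencies

# Standard AM keeps a constant term, so the carrier reappears
s = (1 + 0.5 * np.sin(2 * np.pi * 4 * t)) * np.sin(2 * np.pi * 40 * t)
spec_am = np.abs(np.fft.rfft(s))
print(int(np.argmax(spec_am)))  # 40 -- the carrier dominates the spectrum
```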
By the way, with standard amplitude modulation you indeed get the carrier in the spectrum because the modulated signal looks like
$$s(n)=[1+mx(n)]\sin(n\theta_0)$$
where $x(n)$ is your message, $m$ is a scaling factor, $\theta_0$ is the carrier frequency, and the term $1$ which is added to the signal results in the carrier wave in addition to the sidebands. You just multiplied two sinusoids without this constant term, that's why you don't see any carrier. This was already correctly pointed out in chirlu's answer. | {
"domain": "dsp.stackexchange",
"id": 1039,
"tags": "fft, discrete-signals, modulation"
} |
Does ${\bf AC^0PAD} = {\bf PPAD}$? | Question: What happens if we define ${\bf PPAD}$ such that instead of a polytime Turing-machine/polysize circuit, a logspace Turing-machine or an ${\bf AC^0}$ circuit encodes the problem?
Recently, giving faster algorithms for circuit satisfiability for small circuits turned out to be important, so I wonder what happens to restricted versions of ${\bf PPAD}$.
Answer: $\def\ac{\mathrm{AC}^0}$Yes, $\ac\mathrm{PAD}=\mathrm{PPAD}$. (Here and below, I’m assuming $\ac$ is defined as a uniform class. Of course, with nonuniform $\ac$ we’d just get $\mathrm{PPAD/poly}$.)
The basic idea is quite simple: $\ac$ can do one step of a Turing machine computation, hence we can simulate one polynomial-time computable edge by a polynomially long line of $\ac$-computable edges. By a further extension of the idea, one could simulate edges computable in poly time with a PPAD oracle, that is, PPAD is closed under Turing reducibility; this argument is given in Buss and Johnson.
There are many equivalent definitions of PPAD in the literature that differ in various details, hence let me fix one here for definiteness. An NP search problem $S$ is in PPAD if there is a polynomial $p(n)$, and polynomial-time functions $f(x,u)$, $g(x,u)$, and $h(x,u)$ with the following properties. For each input $x$ of length $n$, $f$ and $g$ represent a directed graph $G_x=(V_x,E_x)$ without self-loops where $V_x=\{0,1\}^{p(n)}$, and every node has in-degree and out-degree at most $1$. The representation is such that if $(u,v)\in E_x$, then $f(x,u)=v$ and $g(x,v)=u$; if $u$ has out-degree $0$, $f(x,u)=u$; and if $u$ has in-degree $0$, $g(x,u)=u$.
The node $0^{p(n)}\in V_x$ is a source (i.e., it has in-degree $0$ and out-degree $1$). If $u\in V_x$ is any source or sink (in-degree $1$, out-degree $0$) other than $0^{p(n)}$, then $h(x,u)$ is a solution to $S(x)$.
We can define $\ac\mathrm{PAD}$ similarly, except we require $f,g,h$ to be in $\mathrm{FAC}^0$.
I will ignore $h$ in the construction for simplicity. (It is not hard to show that one can take it to be a projection, hence $\ac$-computable.)
So, consider a PPAD problem $S$ defined by $f$ and $g$, and fix Turing machines computing $f$ and $g$ in time $q(n)$. For any $x$, we define a directed graph $G'_x=(V'_x,E'_x)$ whose vertices are sequences of the following form:
$(0,u,c_1,\dots,c_k)$, where $u\in V_x$, $0\le k\le q(n)$, and $c_1,\dots,c_k$ are the first $k$ configurations in the computation of $f(x,u)$.
$(0,u,c_1,\dots,c_{q(n)},v,d_1,\dots,d_k)$, where $u,v\in V_x$, $0\le k\le q(n)$, $f(x,u)=v$, $c_1,\dots,c_{q(n)}$ is the full computation of $f(x,u)$, and $d_1,\dots,d_k$ are the first $k$ steps in the computation of $g(x,v)$.
$(1,v,d_1,\dots,d_k)$, where $0^{p(n)}\ne v\in V_x$, $0\le k\le q(n)$, and $d_1,\dots,d_k$ are the first $k$ configurations in the computation of $g(x,v)$.
$(1,v,d_1,\dots,d_{q(n)},u,c_1,\dots,c_k)$, where $u,v\in V_x$, $v\ne0^{p(n)}$, $0\le k\le q(n)$, $g(x,v)=u$, $d_1,\dots,d_{q(n)}$ is the computation of $g(x,v)$, and $c_1,\dots,c_k$ are the first $k$ steps in the computation of $f(x,u)$.
$E'_x$ consists of the edges in $V'_x\times V'_x$ of the following kinds:
$(0,u,c_1,\dots,c_k)\to(0,u,c_1,\dots,c_{k+1})$
$(0,u,c_1,\dots,c_{q(n)})\to(0,u,c_1,\dots,c_{q(n)},v)$
$(0,u,c_1,\dots,c_{q(n)},v,d_1,\dots,d_k)\to(0,u,c_1,\dots,c_{q(n)},v,d_1,\dots,d_{k+1})$
$(0,u,c_1,\dots,c_{q(n)},v,d_1,\dots,d_{q(n)})\to(1,v,d_1,\dots,d_{q(n)},u,c_1,\dots,c_{q(n)})$ if $f(u)=v$ and $g(v)=u$ (i.e., either $(u,v)\in E_x$, or $u=v$ is an isolated vertex)
$(1,v,d_1,\dots,d_{q(n)},u,c_1,\dots,c_{k+1})\to(1,v,d_1,\dots,d_{q(n)},u,c_1,\dots,c_k)$
$(1,v,d_1,\dots,d_{q(n)},u)\to(1,v,d_1,\dots,d_{q(n)})$
$(1,v,d_1,\dots,d_{k+1})\to(1,v,d_1,\dots,d_k)$
$(1,u)\to(0,u)$
Formally, let $r(n)$ be a polynomial bounding the lengths of binary representations of all the sequences above (such that we can extend or shorten sequences, and extract their elements with $\mathrm{AC}^0$-functions); we actually put $V'_x=\{0,1\}^{r(n)}$, and we let all vertices other than the above-mentioned sequences be isolated.
It is easy to see that the functions $f'$, $g'$ representing $G'_x$ are $\mathrm{AC}^0$-computable: in particular, we can test in $\mathrm{AC}^0$ whether $c_1,\dots,c_k$ is a valid partial computation of $f(x,u)$, we can compute $c_{k+1}$ from $c_k$, and we can extract the value of $f(x,u)$ from $c_{q(n)}$.
The sinks in $G'_x$ are nodes of the form $(0,u,c_1,\dots,c_{q(n)},u,d_1,\dots,d_{q(n)})$ where $u$ is a sink in $G_x$. Likewise, sources are $(1,v,d_1,\dots,d_{q(n)},v,c_1,\dots,c_{q(n)})$ where $v$ is a source in $G_x$, except that in the special case $v=0^{p(n)}$, we have pruned the line early and the corresponding source in $G'_x$ is just $(0,0^{p(n)})$. We can assume the encoding of sequences is done in such a way that $(0,0^{p(n)})=0^{r(n)}$.
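This bookkeeping can be sanity-checked on a toy instance. The Python sketch below is an illustration of the construction (not part of the original argument): Turing-machine configurations are abstracted into opaque tokens via `comp`, $q(n)=2$, and $G_x$ is the path $0\to1\to2$ plus an isolated vertex $3$. It builds $G'_x$ edge-by-edge as listed above and checks that the sources and sinks of $G'_x$ sit exactly over those of $G_x$:

```python
# Toy G_x: the path 0 -> 1 -> 2 plus an isolated vertex 3 (so 0 is the
# distinguished source, 2 the only sink, and 3 should become a cycle in G'_x).
succ = {0: 1, 1: 2, 2: 2, 3: 3}      # f(x,u): out-neighbour, or u if out-degree 0
pred = {0: 0, 1: 0, 2: 1, 3: 3}      # g(x,v): in-neighbour, or v if in-degree 0
Q = 2                                # pretend q(n) = 2 machine steps

def comp(name, u, k):
    """First k 'configurations' of the run of machine `name` on input u."""
    return tuple((name, u, i) for i in range(1, k + 1))

nodes, edges = set(), set()
for u in succ:
    v, C = succ[u], comp('f', u, Q)
    for k in range(Q + 1):
        nodes.add((0, u) + comp('f', u, k))                 # kind 1
        nodes.add((0, u) + C + (v,) + comp('g', v, k))      # kind 2
    if u != 0:                       # the 0^{p(n)} node is pruned on the "1" side
        w, D = pred[u], comp('g', u, Q)
        for k in range(Q + 1):
            nodes.add((1, u) + comp('g', u, k))             # kind 3
            nodes.add((1, u) + D + (w,) + comp('f', w, k))  # kind 4

for u in succ:                       # "0" phase: build computations up
    v, C = succ[u], comp('f', u, Q)
    D = comp('g', v, Q)
    for k in range(Q):
        edges.add(((0, u) + comp('f', u, k), (0, u) + comp('f', u, k + 1)))
        edges.add(((0, u) + C + (v,) + comp('g', v, k),
                   (0, u) + C + (v,) + comp('g', v, k + 1)))
    edges.add(((0, u) + C, (0, u) + C + (v,)))
    if pred[v] == u and v != 0:      # hand-off edge: f(u) = v and g(v) = u
        edges.add(((0, u) + C + (v,) + D, (1, v) + D + (u,) + C))
for v in succ:                       # "1" phase: tear computations back down
    if v == 0:
        continue
    u, D = pred[v], comp('g', v, Q)
    for k in range(Q):
        edges.add(((1, v) + D + (u,) + comp('f', u, k + 1),
                   (1, v) + D + (u,) + comp('f', u, k)))
        edges.add(((1, v) + comp('g', v, k + 1), (1, v) + comp('g', v, k)))
    edges.add(((1, v) + D + (u,), (1, v) + D))
    edges.add(((1, v), (0, v)))

indeg = {n: 0 for n in nodes}
outdeg = {n: 0 for n in nodes}
for a, b in edges:                   # a KeyError here would mean a dangling endpoint
    outdeg[a] += 1
    indeg[b] += 1
assert max(indeg.values()) <= 1 and max(outdeg.values()) <= 1

sinks = {n[1] for n in nodes if outdeg[n] == 0}
sources = {n[1] for n in nodes if indeg[n] == 0}
assert sinks == {2} and sources == {0}   # endpoints of G'_x sit over those of G_x
print(len(nodes), sinks, sources)        # the cycle over vertex 3 adds no endpoints
```

Running it confirms one long path in $G'_x$ from the node over $0^{p(n)}$ to the sink over the sink of $G_x$, while the isolated vertex contributes only a cycle with no endpoints.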
Thus, $f'$ and $g'$ define an $\mathrm{AC}^0\mathrm{PAD}$ problem $S'$, and we can extract a solution to $S(x)$ from a solution to $S'(x)$ by an $\mathrm{AC}^0$-function $h'$ which outputs the second component of a sequence. | {
"domain": "cstheory.stackexchange",
"id": 2918,
"tags": "cc.complexity-theory, circuit-complexity, logspace, ppad, ac0"
} |
Proving a greedy algorithm of finding MINIMAL group | Question: I am given a group $A$ of real numbers and I have to find the minimal group $B$ such that for each $a$ in $A$ there exists at least one $b$ in $B$ such that $|a-b|\leq 1$
So what I think the algorithm should do is first sort $A$, then insert $A[0]+1$ into $B$, and then iterate over $A$, checking whether the last number we added to $B$ is still within distance $1$ of $A[i]$. If not, insert a new number, $A[i]+1$, into $B$. The complexity is of course $O(n \log(n))$ because we have to sort $A$.
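In code, the algorithm looks like this (an illustrative Python sketch; `radius` generalizes the fixed distance $1$):

```python
def min_cover(A, radius=1.0):
    """Greedy cover: sort A; whenever the last chosen point no longer covers
    the current element a, add a + radius, which covers [a, a + 2*radius]."""
    B = []
    for a in sorted(A):                    # O(n log n) dominates
        if not B or a - B[-1] > radius:    # a is uncovered, so it starts a new point
            B.append(a + radius)
    return B

assert min_cover([0.0, 0.5, 1.9, 2.1, 5.0]) == [1.0, 3.1, 6.0]
```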
Now I have to prove that the algorithm indeed gives us the minimal group $B$. And here I am stuck. If there is a better solution (an algorithm that gives us a smaller group), then we can change the number of elements in $B$ and reorganize them so that they cover more elements of $A$ using fewer elements than we originally had. How do I write a formal explanation?
Answer: Call a number $m$ a milestone if it is in $A$ and your greedy algorithm adds $m+1$ to $B$.
The greedy algorithm selects each next milestone so that it exceeds the current milestone by more than $1+1$ (the cover point it placed is the milestone ${}+1$, and that point's radius is another $1$). So any two milestones are more than $2$ apart.
Hence each number in any solution can cover at most one milestone. Any solution must add at least one distinct number for each milestone.
Since the greedy algorithm adds exactly one number for each milestone, it is optimal. | {
"domain": "cs.stackexchange",
"id": 21284,
"tags": "algorithm-analysis, greedy-algorithms"
} |
Is it possible to consider natural convection as a case of forced convection with zero velocity? | Question: When calculating the Nusselt number we have to use Reynolds number if there is forced convection, while Grashof number is used in case of natural convection. I was wondering if and why could be a big mistake to compute the Nusselt number considering only the Reynolds formulation, using zero velocity if there is natural convection.
Answer: Probably all of the real-world convective heat transfer equations are empirical in nature, because most of them involve turbulence, and physicists don't yet know how to mathematically model turbulence. As such, using one convective heat transfer equation to model convective heat transfer for a phenomenon for which it was not formulated, probably means that you are applying your convective heat transfer equation to a regime for which data either wasn't taken, was somewhat lacking, or completely failed because it doesn't account for either laminar flow or turbulent flow. Unless you can find published data which uses your equation in the way that you intend to use it, consider the probability of having an unknown amount of error in your result. | {
"domain": "physics.stackexchange",
"id": 35352,
"tags": "homework-and-exercises, thermodynamics, energy"
} |
Is there an absence of centrifugal forces in rotating frame of a pulley or in belt friction derivation? | Question: Considering a pulley or belt I understand the derivation of the difference in tension(capstan equation) due the resultant torque needed for the pulley to rotate.
My question is: for the tiny section of rope, in the rotating frame it is indeed in static equilibrium and hence we can derive the capstan equation. But in this rotating frame, why are there no centrifugal forces? Is the normal reaction force $dN$ the fictitious force?
It seems to make sense that in the inertial frame the vertical components of $T$ and $T+dT$ would not be cancelled out, as we need some centripetal force for the tiny rope section to move in a circle, while in the rotating (non-inertial) frame the normal reaction force balances the pseudo-force so that the forces in the rotating frame are in equilibrium.
Answer: The capstan equation is derived on the assumption that the system is in static equilibrium - ie it is not rotating. In this case there is no centrifugal force.
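For reference, that static force balance on a rope element gives the differential relation $dT = \mu\,T\,d\theta$, whose solution is the capstan equation $T_2 = T_1 e^{\mu\theta}$; a quick numerical integration (an illustrative sketch with arbitrary values of $\mu$, $\theta$, $T_1$) confirms the closed form:

```python
import math

# Integrate dT/dtheta = mu * T over a half-turn and compare with T1 * exp(mu*theta).
mu, theta_total, T1 = 0.3, math.pi, 10.0   # friction coefficient, wrap angle, hold tension
steps = 100_000
dtheta = theta_total / steps
T = T1
for _ in range(steps):
    T += mu * T * dtheta                   # Euler step: dT = mu * T * dtheta
assert abs(T - T1 * math.exp(mu * theta_total)) < 1e-2
print(round(T, 3))                         # tension on the high side of the wrap, ~25.66
```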
It is necessary for there to be a normal force in order for there to be a friction force which prevents the rope from slipping against the bollard or pulley which it is wrapped around. And if there is tension in the rope, and the angle $\theta$ is non-zero, then there must be a normal force in order to balance forces on the section of rope in contact with the bollard or pulley.
Contact forces are always real, never fictitious. They are exerted by real objects, and have a physical origin.
If the rope is actually rotating, as when it is wrapped around a pulley, then there is a fictitious centrifugal force acting on it. This reduces the normal reaction between the rope and the pulley and thereby also reduces the friction force between the rope and pulley. However, the rope is usually assumed to have negligible mass and the angular velocity of the pulley is relatively low, so the centrifugal force is insignificant.
A hoop which is rotating in space has a real tension in it. The resultant tension on each segment is radially inward. In an inertial frame, this tension is necessary to explain the centripetal acceleration of the hoop. In a rotating frame the resultant tension force on each segment is balanced by the fictitious centrifugal force.
In reply to your comment
If there is a constant torque on the pulley this causes angular acceleration of the pulley and rope. Each segment of the rope accelerates tangentially - ie its speed increases while the torque is maintained. While the rope is moving in a circle each segment is also accelerating centripetally.
Before the pulley has started to rotate each segment of rope is at rest so it has no centripetal acceleration. The resultant tension force on it has a tangential component causing tangential acceleration. It also has a radial component, which is balanced by the outward normal contact force from the pulley.
When the pulley is moving each segment of rope now has centripetal acceleration so there is a net force inwards. The radial component of tension has not changed because the torque acting on the pulley has not changed. What has changed is that the normal contact force has reduced. This also reduces the static friction force which is accelerating the pulley.
As the angular velocity of the pulley increases so will the centripetal acceleration of each segment. At some point the normal contact force will become zero : the rope is rotating with the pulley but it is no longer pressing against the pulley. The rope loses contact with the pulley and moves outwards. Friction is no longer able to act so the pulley stops accelerating but the rope does not.
So the tension force in the rope does not increase as the angular velocity of the pulley increases, unless the torque is increased. Centripetal acceleration increases as tangential velocity increases. When the tension force can no longer provide sufficient centripetal acceleration the rope lifts off the pulley.
In the rotating frame we see the centrifugal force gradually increasing as the angular velocity increases. At first the fixed tension force is sufficient to keep the rope pressed against the pulley, but at some point the centrifugal force has grown so big that the tension force cannot prevent the rope from lifting off the pulley and continuing to move outwards.
If we did wish to ensure that the rope kept in contact with the pulley as its angular velocity increased then Yes the radial component of the tension force would have to increase also to provide more centripetal acceleration. This means that the difference between $T_2$ and $T_1$ would have to increase. But this increase in torque would also increase the tangential speed which would increase the centripetal force which is required, which means the difference between $T_2$ and $T_1$ would have to increase even further, in a vicious cycle.
The Role of Friction
Friction is required to transmit some of the torque from the rope to the pulley in order to make it accelerate. As with friction on the ground, this requires a normal force between the rope and pulley, otherwise the friction force is zero. There also needs to be a tangential force ($T_2-T_1 \gt 0$).
Suppose we push a small box which is resting on top of a large box which is resting on a frictionless floor. (In our case the rope is the small box, the pulley is the large box and the axle of the pulley is the frictionless floor.) Friction between the two boxes transmits some of the force which we apply from the small box to the large box to accelerate it at the same rate, because friction opposes relative motion.
It is the same with the rope and pulley. Friction from the rope causes the pulley to rotate. Without it the rope would slide around the pulley.
The normal contact force is a reaction force. As the pulley rotates some of the radial component of tension is needed to provide centripetal force. What is left over is opposed by the normal contact force.
As angular velocity increases more centripetal force is needed so there is less left over and the normal force gets smaller. This means that the friction force also gets smaller. At some point the friction force is not big enough to keep the pulley rotating at the same rate as the rope.
Going back to the boxes, if the small box is heavy enough when we push on it with a fixed force $F$ then the large box moves also. But if we remove some weight from the small box this reduces the friction force. There comes a point at which the friction force is so weak that if we continue to push with the same force $F$ the small box slides over the large box. If we want to keep the large box moving we have to reduce $F$ or put weight back into the small box.
In the same way with the rope if we want to keep providing friction to accelerate the pulley then we have to either increase the sum of the 2 tension forces $T_1, T_2$ while keeping the difference the same (this increases the normal contact force - like adding weight to the small box) or we reduce the difference between the tension forces while keeping the sum the same (this reduces the torque - like reducing the force $F$ we apply to the small box). | {
"domain": "physics.stackexchange",
"id": 51458,
"tags": "forces, classical-mechanics, rotational-dynamics, friction, centrifugal-force"
} |
Simple BloomFilter Class | Question: We have some BloomFilters (like a java.util.set without deletion) and we store them in ehcache (you can think it's as a java.util.Map):
For example, Constant.BLOOM_FILTER_CACHE ehcache contains:
key value
aaa bloomfilter (of name aaa)
bbb bloomfilter (of name bbb)
ccc bloomfilter (of name ccc)
If we call localCacheManager.get(Constant.BLOOM_FILTER_CACHE, aaa), it will return a bloomfilter (of name aaa) or Null (if not initialized).
If we call localCacheManager.put(Constant.BLOOM_FILTER_CACHE, aaa, new BloomFilter(100, 0.001)), it will put a bloom filter into cache entry aaa.
If we want to add data1 into BloomFilter of name aaa, then we may call add(aaa, data1) and if we want to know if aaa BloomFilter contains data1, we should call contains(aaa, data1) (in that case, it will return true).
If we want to look up or add, we should first check whether the bloom filter is initialized.
package com.xxx.utils.bloomfilter;
import com.clearspring.analytics.stream.membership.BloomFilter;
import com.xxx.utils.cache.ehcache.LocalCacheManager;
import com.xxx.utils.common.Constant;
public class BloomFilterInCache {
private int bloomFilterElementSize;
private double bloomFilterErrorRate;
private LocalCacheManager localCacheManager;
public BloomFilterInCache(int bloomFilterElementSize,
double bloomFilterErrorRate, LocalCacheManager localCacheManager) {
this.bloomFilterElementSize = bloomFilterElementSize;
this.bloomFilterErrorRate = bloomFilterErrorRate;
this.localCacheManager = localCacheManager;
}
private boolean isBloomFilterExistsByName(String name)
{
BloomFilter bf = (BloomFilter) localCacheManager.get(Constant.BLOOM_FILTER_CACHE, name);
if (bf == null)
return false;
return true;
}
private void newBloomFilter(String name)
{
BloomFilter bf = new BloomFilter(bloomFilterElementSize, bloomFilterErrorRate);
localCacheManager.put(Constant.BLOOM_FILTER_CACHE, name, bf);
}
private void NotExistsThenPut(String name)
{
if (!isBloomFilterExistsByName(name))
{
newBloomFilter(name);
}
}
public boolean contains(String name, String key)
{
NotExistsThenPut(name);
BloomFilter bf = (BloomFilter) localCacheManager.get(Constant.BLOOM_FILTER_CACHE, name);
return bf.isPresent(key);
}
public void add(String name, String key)
{
NotExistsThenPut(name);
BloomFilter bf = (BloomFilter) localCacheManager.get(Constant.BLOOM_FILTER_CACHE, name);
bf.add(key);
}
}
Are there any flaws in this code?
Answer: private boolean isBloomFilterExistsByName(String name)
{
The brace belong on the previous line.
isBloomFilterExistsByName(String name)
is...Exists sounds strange.
private void newBloomFilter(String name)
From the name, I'd expect this method to return a new Bloom Filter instead of putting it somewhere.
private void NotExistsThenPut(String name)
Java method names start with a lowercase letter. A better name would be putIfNotExists.
I'd actually prefer a method like getOrCreate.
public boolean contains(String name, String key)
contains and isPresent are wrong names for this. A Bloom Filter is like a Set, but an undependable one. A much better name is mayContain.
This method is an invitation to inefficiency. Imagine a tight loop calling it. On every iteration you look up the Bloom Filter twice (once to make sure that it exists and once to use it). This may easily dominate the cost of the filter's isPresent operation.
I'd suggest to provide methods allowing to use it like
bloomFilterInCache.getOrCreate("name").mayContain("element");
which allows efficiency via "extract local variable".
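A minimal sketch of that shape (illustrative only: a `HashMap`-backed cache and a `FakeBloomFilter` stand in for the real ehcache and clearspring `BloomFilter` APIs, and `mayContain` is the suggested rename):

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Stand-in with the suggested method name: mayContain instead of contains/isPresent.
class FakeBloomFilter {
    private final Set<String> items = new HashSet<>();
    void add(String key) { items.add(key); }
    boolean mayContain(String key) { return items.contains(key); }
}

class BloomFilterInCache {
    private final Map<String, FakeBloomFilter> cache = new HashMap<>();

    // Single lookup that creates on demand; callers hoist the result out of tight loops.
    FakeBloomFilter getOrCreate(String name) {
        return cache.computeIfAbsent(name, n -> new FakeBloomFilter());
    }
}

public class Demo {
    public static void main(String[] args) {
        BloomFilterInCache wrapper = new BloomFilterInCache();
        FakeBloomFilter aaa = wrapper.getOrCreate("aaa"); // extract local variable once
        aaa.add("data1");
        System.out.println(aaa.mayContain("data1"));           // true
        System.out.println(aaa.mayContain("data2"));           // false
        System.out.println(wrapper.getOrCreate("aaa") == aaa); // true: same filter instance
    }
}
```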
public void add(String name, String key)
Again, an invitation to inefficiency. | {
"domain": "codereview.stackexchange",
"id": 13553,
"tags": "java, cache, bloom-filter"
} |
Superior improved stack implementation using a linked list | Question: This is a followup to
Improved stack implementation using a linked list
Simple stack implementation using linked list
Please review my hopefully improved stack implementation
As suggested by some answers to my previous code, some of the functions can be overloaded to provide versions that return by value, by reference, and by const reference, or take parameters by value, by reference, and by const reference, but the function definitions are almost the same so I did not add them here.
Regarding pointers, I know std::unique_ptr will be better and I already saw an implementation through an answer to my previous code, but I like to play with normal pointers right now.
template<class T>
class Stack {
using stacksize = std::size_t;
public:
Stack() : first{nullptr}, n{0} {}
stacksize size() const { return n; }
bool empty() const { return n == 0; }
Stack(const Stack&);
Stack(Stack&&);
Stack& operator=(Stack);
Stack& operator=(Stack&&);
T& operator[](const stacksize& i) {
Node* traverse = first;
stacksize x = 0;
while (x < i) {
traverse = traverse->next;
++x;
}
return traverse->item;
}
void push(const T&);
void pop();
T& peek() const;
~Stack() {
while (!empty()) {
pop();
}
}
private:
struct Node {
T item;
Node* next;
Node(const T& t, Node* link) :item{t}, next{link} { }
};
Node* first;
stacksize n;
};
template<class T>
Stack<T>::Stack(const Stack& s) :first{nullptr}, n{0}{
for (auto t = s.first; t != nullptr; t = t->next) {
push(t->item);
}
}
template<class T>
Stack<T>& Stack<T>::operator=(Stack s) {
std::swap(first,s.first);
std::swap(n,s.n);
return *this;
}
template<class T>
Stack<T>::Stack(Stack&& s) :first{s.first}, n{s.n} {
s.first = nullptr;
s.n = 0;
}
template<class T>
Stack<T>& Stack<T>::operator=(Stack&& s) {
std::swap(s.n,n);
std::swap(s.first,first);
return *this;
}
template<class T>
void Stack<T>::push(const T& t) {
first = new Node{t,first};
++n;
}
template<class T>
void Stack<T>::pop() {
Node* oldfirst = first;
first = first->next;
delete oldfirst;
--n;
}
template<class T>
T& Stack<T>::peek() const {
return first->item;
}
Answer: Code Review
I find your placement of & inconsistent with *
Node(const T & t, Node* link) :item{t}, next{link} { }
And r-value reference declaration stranger still
Stack(Stack & & ); // I am surprised that compiled. As each `&` is a separate token
// While `&&` is a single token.
This is all part of the type information. Put it with the type.
Node(const T& t, Node* link) :item{t}, next{link} { }
Stack(Stack&& rhs);
You provide a push by copy.
void push(const T & );
and you are familiar with move semantics. Why not provide a push by move?
void push(T&& val);
Very Minor personal preference.
Personally I prefer const on the right of the type. The rule is that const binds left unless it is on the very left hand side, in which case it binds right. There is one obscure corner case where this makes a difference. But it is obscure so don't worry.
Node(T const& t, Node* link) :item{t}, next{link} { }
I just find it easier when reading types (as you read them right to left).
char const * const x;
// x is "const pointer" to a "const char"
char const * x;
// x is "pointer" to a "const char"
char * const x;
// x is "const pointer" to a "char" | {
"domain": "codereview.stackexchange",
"id": 15245,
"tags": "c++, c++11, stack"
} |
How to retrain the neural network when new data comes in? | Question: I am new to deep learning. Can anybody help me with an online learning implementation for deep learning models? As per my understanding, I can save a keras/tensorflow model after training and, when new data comes in, reload the network and retrain it using the new data. I haven't seen this method documented anywhere. Is my understanding incorrect? If yes, let me know what can be done so that the model keeps on getting retrained when new data comes in.
Answer: It's extremely simple. There are a lot of ways of doing it. I am assuming you are familiar with Stochastic Gradient Descent (SGD). I am going to describe one naive way of doing it.
Reload the model into RAM.
Write an SGD function like SGD(X,y). It will take the new sample and label, run one step of SGD on it, and save the updated model.
As you can see this will be highly inefficient; a better way is to save a number of samples and then run a step of stochastic batch gradient descent on them, so that you don't have to reload the updated model every time you get a new sample.
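To make the SGD(X,y) step concrete, here is a pure-Python toy sketch (assumptions: a linear model stands in for the network, and a global (w, b) pair plays the role of the saved/reloaded model; with Keras the analogous calls would be `load_model`, `train_on_batch`, and `save`):

```python
import random

# The "saved model" is just (w, b) for a linear model y ~ w*x + b.
w, b = 0.0, 0.0

def sgd_step(x, y, lr=0.05):
    """One stochastic-gradient step on a single incoming (x, y) sample,
    using squared loss 0.5 * ((w*x + b) - y)**2."""
    global w, b
    err = (w * x + b) - y
    w -= lr * err * x          # gradient of the loss w.r.t. w
    b -= lr * err              # gradient of the loss w.r.t. b

random.seed(0)
for _ in range(2000):          # samples arriving one at a time
    x = random.uniform(-1, 1)
    y = 3.0 * x + 1.0          # data generated by the "true" model w=3, b=1
    sgd_step(x, y)

print(round(w, 2), round(b, 2))    # converges to ~3.0 ~1.0
```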
I hope this gives you a rough idea of how the implementation can be done. You can easily find much more efficient and scalable ways of doing this. If you are not familiar with algorithms like SGD, I would recommend getting familiar with them, because online learning is just a one-sample mini-batch gradient descent algorithm. | {
"domain": "datascience.stackexchange",
"id": 2330,
"tags": "neural-network, deep-learning, online-learning"
} |
Definition of entropy for canonical vs. microcanonical ensembles | Question: The two following definitions of entropy are often used in the microcanonical and canonical ensembles, respectively:
$S=k_b\ln(\Omega)$
$S=-k_b\sum_ip_i\ln(p_i)$
I am curious how the second can be derived from the first, and if they can both be applied to the canonical ensemble. Specifically, how could one apply equation 1 to the canonical ensemble?
As well, this is somewhat unrelated, but if you work backwards from the canonical ensemble, and use $F=-kT\ln(Z)$ and $S=-(\partial F/ \partial T) $, you can arrive at $S/k_b =\ln(Z)+\beta \bar E $. If you then apply definition 1, you can rearrange this to get $\Omega = Ze^{\beta \bar E}$. Is this application of definition 1 valid, and is the relation between $\Omega$ and $Z$ true for the canonical ensemble?
Answer: Definition $1$ is specific to the microcanonical ensemble, because the quantity $\Omega$ is the number of microstates (mutually orthogonal microstates in the quantum case) compatible with the specified conditions. The logarithm of $\Omega$ is used for mathematical convenience, and the factor of Boltzmann's constant $k_b$ is included only because old traditions are hard to break, despite the minor inconvenience it causes.
Definition $2$ can be applied to any ensemble. It is a natural measure of how "non-presumptous" a given probability distribution is. The larger the value of $S$, the less presumptuous the distribution. The previous comment about the factor of $k_b$ applies here, too.
When definition $2$ is applied to the microcanonical ensemble, all of the $p_i$ are equal to each other for all states that are compatible with the specified conditions, and are zero otherwise. That is, $p_i=1/\Omega$. Using this in definition $2$ gives
$$
S=-k_b\sum_i p_i\ln(p_i)=-\frac{k_b}{\Omega}\sum_i\ln(1/\Omega)
=k_b\ln(\Omega)
$$
because $\Omega$ is also the number of terms in the sum. In this sense, definition $1$ is a special case of definition $2$.
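This collapse of definition $2$ to definition $1$ is easy to check numerically (an illustrative sketch with $k_b = 1$):

```python
import math

Omega = 8
p = [1 / Omega] * Omega                    # microcanonical: uniform over the Omega states
S = -sum(pi * math.log(pi) for pi in p)    # definition 2, with k_b = 1
assert abs(S - math.log(Omega)) < 1e-12    # equals definition 1: ln(Omega)

# Any non-uniform distribution over the same states is more "presumptuous":
q = [0.5] + [0.5 / (Omega - 1)] * (Omega - 1)
S_q = -sum(qi * math.log(qi) for qi in q)
assert S_q < S                             # the uniform distribution maximizes the entropy
print(S, math.log(Omega))
```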
Definition $1$ does not apply to the canonical ensemble, but it can be used to deduce the canonical ensemble for a subsystem that is part of a larger system described by the microcanonical ensemble. This derivation only works if $\Omega(E)$ does not increase exponentially (or faster) as a function of $E$. If it does, then the canonical ensemble doesn't exist. The microcanonical ensemble is more generally applicable than the canonical ensemble, but the canonical ensemble (when it exists) is usually more convenient.
Definition $2$ can also be used to deduce the canonical ensemble: among all probability distributions with a given average value of the total energy, the canonical ensemble is the least presumptuous distribution — it maximizes the entropy in definition $2$.
Given the canonical ensemble $p_i\propto \exp(-\beta E_i)$, the partition function can be written as
$$
Z=\sum_i \exp(-\beta E_i)
\sim \sum_E\Omega(E) e^{-\beta E}
$$
where $\Omega(E)$ is the number of states with total energy $E$. (I'm writing "$\sim$" so that I don't have to define $\Omega(E)$ more carefully, which would probably not be helpful here.) Typically, $\Omega(E)$ is a rapidly increasing function of $E$, but not exponentially increasing, so the summand is sharply peaked at a particular value $\bar E$ of $E$ (which is typically essentially the average value), and we might as well have $Z\sim \Omega(\bar E)\exp(-\beta \bar E)$. Using this in the equation $S/k_b=\ln(Z)+\beta\bar E$ that was shown in the OP gives $S\sim k_b\ln\Omega(\bar E)$, in agreement with definition $1$. The agreement is only approximate, as expected, because the canonical ensemble with the given temperature is only approximately equivalent to a microcanonical ensemble with the specific total energy $\bar E$. The approximation is typically very good for a macroscopic system, and a more-careful version of this derivation can be used to quantify just how good the approximation is. | {
"domain": "physics.stackexchange",
"id": 57136,
"tags": "statistical-mechanics, entropy"
} |
T-SQL Getting Sequential events with first event criteria | Question: I am writing a query that should return output for a person only if they have had a particular service provided (the criterion service); I then want all the services provided to them after that one, and I do not want the individual returned if the criterion service is the only service they had.
I have some example data:
In this instance above I would not want the last row.
The code I have, I feel is verbose at best. I am confident it works but I think it could be written better.
This will get some sample data
declare @bmh_plm_ptacct_v as table (
med_rec_no varchar(10)
, ptno_num varchar(10)
, adm_date date
, dsch_date date
, hosp_svc varchar(5)
)
insert into @bmh_plm_ptacct_v
values('123456','123456748','2017-12-18','2018-01-12','PSY'),
('123456','123456789','2018-01-17','2018-01-18','EME'),
('123456','123456889','2018-01-19','2018-01-21','EME'),
('123456','123478978','2018-01-25','2018-01-25','EME'),
('123456','123457979','2018-05-21','2018-05-21','EME'),
('123456','123458988','2018-06-03','2018-06-04','EME'),
('123456','123458989','2018-07-27','2018-08-14','PSY'),
('123456','123458990','2018-09-23','2018-09-24','EME'),
('123456','123459999','2018-09-25','2018-09-30','PSY')
declare @vReadmits as table (
[index] varchar(10)
, interim int
, [readmit] varchar(10)
)
insert into @vReadmits
values('123458990','25','123459999')
declare @hosp_svc_dim_v as table (
hosp_svc varchar(50)
, hosp_svc_name varchar(100)
, orgz_cd varchar(10)
)
insert into @hosp_svc_dim_v
values('PSY','Pyschiatry','s0x0'),
('EME','Emergency Department','s0x0')
SELECT Med_Rec_No
, PtNo_Num
, Adm_Date
, Dsch_Date
, hosp_svc
, CASE
WHEN B.READMIT IS NULL
THEN 'No'
ELSE 'Yes'
END AS [Readmit Status]
, [Event_Num] = ROW_NUMBER() over(partition by med_rec_no order by ADM_date)
, [PSY_Flag] = CASE WHEN hosp_svc = 'PSY' THEN '1' ELSE '0' END
INTO #TEMPA
FROM smsdss.bmh_plm_ptacct_v AS A
LEFT OUTER JOIN smsdss.vReadmits AS B
ON A.PtNo_Num = b.[INDEX]
AND B.INTERIM < 31
WHERE Dsch_Date >= '01-01-2018'
AND dsch_date < '12-31-2018'
ORDER BY Med_Rec_No, A.Adm_Date
;
SELECT A.*
INTO #TEMPB
FROM #TEMPA AS A
WHERE A.hosp_svc = 'PSY'
;
SELECT B.*
INTO #TEMPC
FROM #TEMPA AS B
WHERE B.hosp_svc != 'PSY'
AND B.Med_Rec_No IN (
SELECT DISTINCT Med_Rec_No
FROM #TEMPB
)
;
SELECT Med_Rec_No
, PtNo_Num
, Adm_Date
, Dsch_Date
, hosp_svc
, [Readmit Status]
, Event_Num
, PSY_Flag
, [Keep_Flag] = ROW_NUMBER() OVER(PARTITION BY MED_REC_NO ORDER BY ADM_DATE)
INTO #TEMPD
FROM (
SELECT B.*
FROM #TEMPB AS B
UNION ALL
SELECT C.*
FROM #TEMPC AS C
WHERE C.Med_Rec_No IN (
SELECT ZZZ.Med_Rec_No
FROM #TEMPB AS ZZZ
WHERE ZZZ.Med_Rec_No = C.Med_Rec_No
AND C.Event_Num > ZZZ.Event_Num
)
) AS A
ORDER BY MED_REC_NO, Event_Num
;
SELECT A.Med_Rec_No
, A.PtNo_Num
, CAST(A.ADM_DATE AS DATE) AS [Adm_Date]
, CAST(A.Dsch_Date AS DATE) AS [Dsch_Date]
, A.hosp_svc
, HS.hosp_svc_name
, A.[Readmit Status]
, A.Event_Num
, A.Keep_Flag
FROM #TEMPD AS A
LEFT OUTER JOIN SMSDSS.hosp_svc_dim_v AS HS
ON A.hosp_svc = HS.hosp_svc
AND HS.orgz_cd = 'S0X0'
WHERE A.Med_Rec_No IN (
SELECT DISTINCT ZZZ.MED_REC_NO
FROM #TEMPD AS ZZZ
WHERE Keep_Flag > 1
)
DROP TABLE #TEMPA;
DROP TABLE #TEMPB;
DROP TABLE #TEMPC;
DROP TABLE #TEMPD;
Answer: Review
Your temporary table names don't say much about what they represent. This makes it hard to figure out what they mean. Consider using better, more meaningful names.
DROP TABLE #TEMPA;
DROP TABLE #TEMPB;
DROP TABLE #TEMPC;
DROP TABLE #TEMPD;
It gets worse by aliasing these temporary tables with different letters:
FROM #TEMPA AS A -- Fair enough
FROM #TEMPA AS B -- Mamma mia!
There is only a need for an ORDER BY in the resulting query and in the analytical functions (ROW_NUMBER() OVER), not in the temporary tables.
There is no need for the temporary tables; you could use CTEs instead.
Refactored Query
Fiddle containing OP + Refactored Query
This only refactors the query for readability. I am sure a more compact and optimized query could be found.
with ACC as (
SELECT Med_Rec_No
, PtNo_Num
, Adm_Date
, Dsch_Date
, hosp_svc
, CASE WHEN B.READMIT IS NULL THEN 'No' ELSE 'Yes' END AS [Readmit Status]
, [Event_Num] = ROW_NUMBER() over(partition by med_rec_no order by ADM_date)
, [PSY_Flag] = CASE WHEN hosp_svc = 'PSY' THEN '1' ELSE '0' END
FROM bmh_plm_ptacct_v AS A
LEFT OUTER JOIN vReadmits AS B
ON A.PtNo_Num = b.[INDEX] AND B.INTERIM < 31
WHERE Dsch_Date >= '01-01-2018'
AND dsch_date < '12-31-2018'
)
, PSY_VISITS as (
SELECT ACC.* FROM ACC WHERE hosp_svc = 'PSY'
)
, NON_PSY_VISITS as (
SELECT ACC.*
FROM ACC
WHERE hosp_svc != 'PSY'
AND Med_Rec_No IN (SELECT DISTINCT Med_Rec_No FROM PSY_VISITS)
)
, ACC_REL as (
SELECT Med_Rec_No
, PtNo_Num
, Adm_Date
, Dsch_Date
, hosp_svc
, [Readmit Status]
, Event_Num
, PSY_Flag
, [Keep_Flag] = ROW_NUMBER() OVER(PARTITION BY MED_REC_NO ORDER BY ADM_DATE)
FROM (
SELECT * FROM PSY_VISITS
UNION ALL
SELECT * FROM NON_PSY_VISITS
WHERE NON_PSY_VISITS.Med_Rec_No IN (
SELECT p.Med_Rec_No
FROM PSY_VISITS AS p
WHERE p.Med_Rec_No = NON_PSY_VISITS.Med_Rec_No
AND NON_PSY_VISITS.Event_Num > p.Event_Num
)
) AS A
)
SELECT A.Med_Rec_No
, A.PtNo_Num
, CAST(A.ADM_DATE AS DATE) AS [Adm_Date]
, CAST(A.Dsch_Date AS DATE) AS [Dsch_Date]
, A.hosp_svc
, HS.hosp_svc_name
, A.[Readmit Status]
, A.Event_Num
, A.Keep_Flag
FROM ACC_REL AS A
LEFT OUTER JOIN hosp_svc_dim_v AS HS
ON A.hosp_svc = HS.hosp_svc AND HS.orgz_cd = 'S0X0'
WHERE A.Med_Rec_No IN (
SELECT DISTINCT rel.MED_REC_NO
FROM ACC_REL AS rel
WHERE Keep_Flag > 1
)
ORDER BY Med_Rec_No, Adm_Date
; | {
"domain": "codereview.stackexchange",
"id": 35564,
"tags": "sql, sql-server, t-sql"
} |
Time evolution of a state on adding a constant to the Hamiltonian of the Schrodinger equation | Question: Adding a constant to a potential energy function does not change the dynamics (time evolution) of an object in Newtonian physics. I expect the same to be true in quantum mechanics but for some reason, I'm getting a result that contradicts this.
$$ i\hbar \,\partial_t \Psi = \hat{H}\Psi $$
$$ \hat{H}\phi_j=E_j\phi_j$$
$$\implies i\hbar\,\partial_t \phi_j = E_j\phi_j$$
$$\implies\phi_j(t)=\phi_j\;\exp\left(-\frac{i\,E_j\,t}{\hbar}\right)$$
$$\Psi(0) = \sum_j c_j\,\phi_j \implies \Psi(t)=\sum_j c_j\,\phi_j\,\exp\left(-\frac{i\,E_j\,t}
{\hbar}\right)$$
Now consider adding a constant $V_0$ to the hamiltonian, the eigenfunctions will be the same $\phi_j$ whereas the energy eigenvalues will become $E_j+V_0$.
$$\hat{H}^\prime=\hat{H}+V_0$$
$$i\hbar \,\,\partial_t\xi = (\hat{H}+V_0)\,\xi$$
$$ (\hat{H}+V_0)\phi_j=(E_j+V_0)\phi_j $$
$$\xi(0) = \Psi(0) =\sum_j c_j\,\phi_j \implies \xi(t)=\sum_j c_j\,\phi_j\,\exp\left(-\frac{i\,(E_j+V_0)\,t}{\hbar}\right)$$
Now despite $\xi(0) = \Psi(0)$ and the only difference between the two hamiltonians being a constant $V_0$ the time evolution of $\Psi$ and $\xi$ are different (the argument in the exponential term is different).
I have a strong feeling that I have made a mistake in this but I can't see where and would be grateful for any insights.
Answer: You can simply factor out $\exp\left(-\frac{i\,V_0\,t}{\hbar}\right)$ from
$$\xi(t)=\sum_j c_j\,\phi_j\,\exp\left(-\frac{i\,(E_j+V_0)\,t}{\hbar}\right)
$$
to get
$$\xi(t)=\exp\left(-\frac{i\,V_0\,t}{\hbar}\right)\sum_j c_j\,\phi_j\,\exp\left(-\frac{i\,E_j\,t}{\hbar}\right)
$$
which is a global phase shift, and so has no observable effect since states are represented by rays in Hilbert space. Naturally, though, the energy eigenvalues are shifted evenly as $E_j \rightarrow E_j + V_0$. | {
"domain": "physics.stackexchange",
"id": 74418,
"tags": "quantum-mechanics, schroedinger-equation, time-evolution"
} |
Why don't we know the electric potential at any point in a circuit, only the difference in the electric potential (voltage)? | Question: Why in circuit analysis, don't we know the electric potential at any point in a circuit?
Answer: In loop analysis where we apply Kirchhoff's voltage law (the algebraic sum of the potential differences around a loop equals zero) it is only necessary to consider potential differences.
In doing node analysis where we apply Kirchhoff's current law (the algebraic sum of the currents into a node equals zero) we consider the potentials at the different nodes. In order to do this we assign a potential of zero to some node in the circuit. Then the potential at each node is measured with respect to that node.
Although the assignment of zero potential is technically arbitrary, to facilitate the analysis it is usually a node that is common to multiple branches, or the negative terminal of a battery. But no matter where this zero potential is assigned, the potential differences across specific components will be the same.
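As a toy numerical illustration (component values are invented): node analysis of a simple source-and-two-resistor loop, showing that moving the zero-potential reference shifts every node potential equally but leaves the drop across each component unchanged:

```python
# Node analysis of a divider: source Vs -- R1 -- node A -- R2 -- ground.
# The ground node's potential is an arbitrary choice; only differences matter.
Vs, R1, R2 = 10.0, 1000.0, 2000.0

def node_potential(v_ground):
    # KCL at node A: ((v_ground + Vs) - Va)/R1 = (Va - v_ground)/R2
    # Solved for Va.
    return (R2 * (v_ground + Vs) + R1 * v_ground) / (R1 + R2)

# Shifting the reference shifts every node potential equally...
va0 = node_potential(0.0)
va5 = node_potential(5.0)
# ...so the potential difference across R2 is unchanged.
drop0 = va0 - 0.0
drop5 = va5 - 5.0
print(drop0, drop5)  # both ~6.667 V
```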
Hope this helps. | {
"domain": "physics.stackexchange",
"id": 88941,
"tags": "electrostatics, electric-circuits, electricity, potential, voltage"
} |
How to decompose errors given a DetectorErrorModel in Stim? | Question: Suppose I'm presented with a detector error model (.dem) file that does not have its error mechanisms decomposed into edge-like error mechanisms (i.e. no ^'s appearing). Is there a way to apply the decomposition directly to the DEM object in the python API?
Answer: There's no built-in method to do this.
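A toy sketch of one possible approach: scanning the DEM's text form and bucketing error instructions by detector count (the example .dem lines are illustrative):

```python
# Toy classifier over DEM text lines: an error mechanism is "graphlike"
# if it touches at most two detectors (D targets).
dem_text = """\
error(0.001) D0 D1
error(0.002) D1 D2 L0
error(0.003) D0 D2 D3 D4
"""

graphlike, non_graphlike = [], []
for line in dem_text.splitlines():
    if not line.startswith("error"):
        continue
    detectors = [t for t in line.split()[1:] if t.startswith("D")]
    (graphlike if len(detectors) <= 2 else non_graphlike).append(line)

print(len(graphlike), len(non_graphlike))  # 2 1
```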
You could write python code to identify the graphlike errors (the ones with at most two detectors), and then attempt to decompose each non-graphlike errors into the known graphlike errors. Alternatively, if the detectors have coordinate data, you may be able to use that to identify which subgraph each detector is associated with and group the detectors into graphlike pieces that way. A more expensive strategy might be to actually ask a decoder to decode the symptoms of each of the non-graphlike errors, and use the edges it returns as their decompositions. | {
"domain": "quantumcomputing.stackexchange",
"id": 4825,
"tags": "stim"
} |
Is the matrix representation of the Lorentz transform the same for all 4-vectors? | Question: If I have a four vector of the form:
$$
\left(
\begin{array}{ccc}
T\\
\vec{X}\end{array}
\right)
$$
where $T$ is the analogous time component (i.e. energy, angular frequency, scalar potential, charge density) and $\vec{X}$ is the analogous space component (i.e. momentum, wave number, vector potential, current density).
Then would finding the respective 4-vectors in a different frame be a simple matter of multiplying any four vector by the Boost matrix (for the case where the relative velocity is in the x-direction):
$$
\left( \begin{array}{ccc}
\gamma & -\beta\gamma & 0 & 0 \\
-\beta\gamma & \gamma & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1
\end{array} \right)
$$
?
In other words, is the Boost matrix the Lorentz transformation for all objects classified as 4-vectors? Or are there certain 4-vectors whose Lorentz transform is represented by a different matrix?
What about for the case of electric $\vec{E}$ and magnetic fields $\vec{B}$? There are 3 components for each mentioned field, meaning there are 6 quantities that transform in a change of reference frame. I realize there are equations for them, but they don't appear to be related to the Boost matrix.
Answer: Yes, all 4-vectors transform as you state under a Lorentz transform.
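As a quick numerical sanity check in pure Python (c = 1, example 4-momentum chosen arbitrarily): applying the boost matrix above to a 4-vector preserves the invariant $E^2 - |\vec p|^2$:

```python
import math

def boost_x(beta):
    """Boost matrix for relative velocity beta along x (c = 1)."""
    g = 1.0 / math.sqrt(1.0 - beta * beta)
    return [[ g,      -beta*g, 0.0, 0.0],
            [-beta*g,  g,      0.0, 0.0],
            [ 0.0,     0.0,    1.0, 0.0],
            [ 0.0,     0.0,    0.0, 1.0]]

def apply(M, v):
    return [sum(M[i][j] * v[j] for j in range(4)) for i in range(4)]

# 4-momentum (E, px, py, pz); its mass-squared is the Lorentz invariant
p = [5.0, 3.0, 0.0, 0.0]
p2 = apply(boost_x(0.6), p)

invariant = lambda v: v[0]**2 - v[1]**2 - v[2]**2 - v[3]**2
print(invariant(p), invariant(p2))  # both ~16
```

With $\beta = 0.6$ the boost takes the particle to its rest frame in $x$ (the spatial momentum vanishes), while the invariant is unchanged.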
For the case of $\vec E$ and $\vec B$, they are indeed not 4-vectors. There are two ways of transforming the $\vec E$ and $\vec B$ fields to different coordinate frames. You can define the $A$ 4-vector in terms of the potential functions $\phi$ and $\vec A$, letting
\begin{equation}
A = \begin{pmatrix}\frac{\phi}{c}\\\vec A\end{pmatrix}.
\end{equation}
Then $A$ transforms as a 4-vector and you can get the fields back from the standard relations $\vec E = -\nabla \phi - \frac{\partial \vec A}{\partial t}$, $\vec B = \nabla \times \vec A$.
Or you can define a certain 4x4, rank-2 tensor $F$, which is written directly in terms of $\vec E$ and $\vec B$, where
\begin{equation}
F^{\mu \nu} = \begin{pmatrix} 0 & -E_x/c & -E_y/c & -E_z/c \\ E_x/c & 0 & -B_z & B_y \\ E_y/c & B_z & 0 & -B_x \\ E_z/c & -B_y & B_x & 0 \end{pmatrix}.
\end{equation}
Then you can transform this tensor, using standard rules for transforming tensors between coordinate systems, and pick out the components of $\vec E$, $\vec B$ in the transformed system. $F$ is used much more in Lagrangian treatments of electrodynamics. | {
"domain": "physics.stackexchange",
"id": 27808,
"tags": "special-relativity, relativity"
} |
What is the difference between ensemble methods and hybrid methods, or is there none? | Question: I have the feeling that these terms often are used as synonyms for one another, however they have the same goal, namely increasing prediction accuracy by combining different algorithms.
My question thus is, is there a difference between them? And if so is there some book/paper that explains the difference?
Answer: Here is a para that I found by searching What are hybrid methods in Machine Learning, on google.
"In general, it is based on combining two different machine learning techniques. For example, a hybrid classification model can be composed of one unsupervised learner (or cluster) to pre-process the training data and one supervised learner (or classifier) to learn the clustering result or vice versa." Along with that example, let's consider Random Forest as an example of Ensemble Learning.*
In classical ensemble learning, you have different or similar algorithms working on different or the same data sets (for example, Random Forest stratifies the data set and builds different Decision Trees for those subsets, while at the same time you could build different models on the same unstratified data set and create an ensemble method). So in essence, you have different machine learning models working independently of each other to give a prediction, and then there is a voting system (hard or soft voting) which determines the final prediction.
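As a minimal sketch of the voting element described above (toy hard-coded predictions standing in for real models):

```python
from collections import Counter

# Toy "models": each independently predicts a label per sample.
model_a = [1, 0, 1, 1]
model_b = [1, 1, 0, 1]
model_c = [0, 0, 1, 1]

def hard_vote(*predictions):
    """Majority vote across models, sample by sample."""
    ensemble = []
    for sample_preds in zip(*predictions):
        ensemble.append(Counter(sample_preds).most_common(1)[0][0])
    return ensemble

print(hard_vote(model_a, model_b, model_c))  # [1, 0, 1, 1]
```

In a hybrid setup, by contrast, one model's output would be fed as input to the next model rather than being tallied in a vote.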
According to the example of the hybrid machine learning model that we saw, the models in a hybrid setup essentially feed their output to one another (one-way) in an effort to create an efficient and accurate machine learning model. So the difference is that the models in an ensemble method work independently and vote on an outcome, while the models in a hybrid method work together to predict one single outcome, with no voting element present.
*https://www.sciencedirect.com/science/article/pii/S1568494609001215 | {
"domain": "datascience.stackexchange",
"id": 11141,
"tags": "machine-learning, ensemble-modeling, ensemble-learning"
} |
How can models like Mosaic's MPT-7b or Bloomberg's BLOOMGPT take in so many tokens? | Question: I've read the paper on ALiBi, and I understand that these models are biasing the values made in the query/key multiplication.
But from my understanding, when I build the actual model I give it N input nodes. When I train a model I give it vectors of length N. How then at inference can I give it vectors of length greater than N? Am I misunderstanding how the multiplication of key and query works? Can there be keys of any length?
Edit: I guess my question includes, why isn't there a multiplication error when I use longer keys in my inference?
Answer: In the attention blocks, Keys and Values are of the same length, but Queries are not necessarily of the same length as the former. For instance, in the encoder-decoder attention blocks, the Keys and Values come from the encoder representations while the Queries come from the decoder representations (which are computed from a completely different sequence than the encoder ones).
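To see concretely why there is no multiplication error, here is a pure-Python sketch of scaled dot-product attention (all dimensions and values are arbitrary) where 2 queries attend over 5 keys/values; the score matrix is simply rectangular:

```python
import math

def softmax(row):
    m = max(row)
    e = [math.exp(x - m) for x in row]
    s = sum(e)
    return [x / s for x in e]

def attention(Q, K, V):
    """Q: (n_q, d), K: (n_k, d), V: (n_k, d_v) -> output of shape (n_q, d_v)."""
    d = len(K[0])
    out = []
    for q in Q:
        # One softmax row per query, with one score per key.
        scores = softmax([sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in K])
        out.append([sum(w * v[j] for w, v in zip(scores, V)) for j in range(len(V[0]))])
    return out

Q = [[1.0, 0.0], [0.0, 1.0]]              # 2 queries
K = [[1.0, 0.0]] * 3 + [[0.0, 1.0]] * 2   # 5 keys
V = [[float(i)] for i in range(5)]        # 5 values, d_v = 1
result = attention(Q, K, V)
print(len(result), len(result[0]))  # 2 1
```

The output row count is set by the number of queries; the key/value count only determines how many terms each weighted sum runs over.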
In the Transformer model, the only factor limiting the length of the inputs is the positional embeddings (see this answer for details). In the ALiBi paper, they replace them with a non-trainable approach and, experimentally, the approach is shown to extrapolate to longer sequences than those seen during training. | {
"domain": "datascience.stackexchange",
"id": 11740,
"tags": "language-model"
} |
Characterizing real-time sampled signal | Question: I have a signal which is sampled in real-time. At every time interval I receive a value. I need to characterize this signal as active or inactive (to take priority in the control actions). By active it means that we observe big changes over a specific time-window.
One way to do it as I consider, is to keep a history of N samples and calculate the mean value and the standard deviation with an update algorithm. Based on the std I can say if it is inactive or not. The problem is that I have tens of thousands of signals and the sampling period is not the same on all of them. So, keeping a variable length sample history for all the signals will be difficult.
For the mean value I can use averaging with forgetting factor. The forgetting factor can be connected to the sampling period of each signal to have a similar observation window on all signals.
Now the question:
Is there any metric similar to averaging with forgetting factor that can give me the same information as std? It has to be low on memory usage and computationally light (since it's a real-time operation).
Any papers, books or references are welcome!
Thanks in advance.
Answer: Try something like this:
$\begin{eqnarray}
\mu(n) &=& (1 - \alpha_1) \mu(n-1) + \alpha_1 x(n) \\
\bar{x}(n) &=& x(n) - \mu(n) \\
s(n) &=& (1 - \alpha_2) s(n-1) + \alpha_2 \bar{x}(n)^2 \\
\sigma(n) &=& \sqrt{s(n)} \\
\end{eqnarray}$
This is equivalent to sending your input signal through a 1-pole DC-blocking high-pass filter whose pole position is set by $\alpha_1$, then sending the result to an RMS detector whose response time is set by $\alpha_2$ - to measure how much the signal is wiggling once its central trend has been removed.
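The recursion above is cheap to implement; a Python sketch (the α values and test signals are illustrative):

```python
import math
import random

class ActivityDetector:
    """O(1)-memory running mean and deviation via exponential forgetting."""
    def __init__(self, alpha1=0.05, alpha2=0.05):
        self.a1, self.a2 = alpha1, alpha2
        self.mu = 0.0  # forgetting-factor mean, mu(n)
        self.s = 0.0   # forgetting-factor mean of squared deviation, s(n)

    def update(self, x):
        self.mu = (1 - self.a1) * self.mu + self.a1 * x
        dev = x - self.mu
        self.s = (1 - self.a2) * self.s + self.a2 * dev * dev
        return math.sqrt(self.s)  # sigma(n)

random.seed(0)
quiet, active = ActivityDetector(), ActivityDetector()
for _ in range(2000):
    sigma_quiet = quiet.update(5.0 + random.gauss(0, 0.01))
    sigma_active = active.update(5.0 + random.gauss(0, 1.0))
print(sigma_quiet < sigma_active)  # True
```

Each alpha can be derived from the signal's own sampling period so that all signals see a comparable observation window.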
Observe that $\sigma(n)$ is consistent with the definition of the standard deviation. | {
"domain": "dsp.stackexchange",
"id": 447,
"tags": "filter-design, statistics, real-time"
} |
Base64 encoder in Assembly x86-64 Linux language | Question: We were asked to create a Base64 encoder in Assembly x86-64 on Linux. I was wondering how my code below could be improved, be it notation or anything else. We've only had Assembly for 3 months, so I'm not really used to any particular notation yet; that's one reason why I'm asking. Thanks for your help!
SECTION .data
Base64Table: db "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/"
SECTION .bss
byteStorage: resb 30000
bytesToReadAtOnce: equ 30000
b64EncStor: resb 40000
b64EncStorLen: equ $ - b64EncStor
SECTION .text
global _start
_start:
;sys read put everything from the file into the buffer "byteStorage"
mov rax, 0
mov rdi, 0
mov rsi, byteStorage
mov rdx, bytesToReadAtOnce
syscall
xor r11, r11 ; syscall strangely changes r11
xor r12, r12 ;r12 will keep track of index in byteStorage array
mov r13, 0 ;r13 will keep track of index in b64EncStor array
.encodingInProgress:
cmp rax, 0
je .weHaveFinished ;if no bits remaining, and no extra one or two
;bytes, we simply jump to the end
dec rax
inc r12
mov r8b, [byteStorage + r12 -1] ; put each input char in a register each
mov r11b, r8b
shr r11b, 2
and r11b, 0x3F
mov r11b, [Base64Table + r11]
mov [b64EncStor + r13], r11b ; our first char is now encoded
inc r13
cmp rax, 0 ;if rax = 0, rax was one before above decrementation, so we jump
je .oneExtraByte ;to .oneExtraByte
;char two
dec rax
inc r12
mov r9b, [byteStorage + r12-1] ; put each input char in a register each
and r8b, 0x3
shl r8b,4
mov r11b, r9b
shr r11b, 4
and r11b, 0xF
add r8b, r11b
mov r8b, [Base64Table + r8]
mov [b64EncStor+r13], r8b ; second char now encoded
inc r13
cmp rax, 0 ;rax was two before being decremented twice above, so we
je .twoExtraBytes ;jump to .twoExtraBytes
;char three
dec rax
inc r12
mov r10b, [byteStorage + r12-1] ;put each input char in a register each
and r9b, 0xF
shl r9b, 2
mov r8b, r10b
shr r8b, 6
and r8b, 0x3
add r9b, r8b
mov r9b, [Base64Table + r9]
mov [b64EncStor+r13], r9b ; third char now encoded
inc r13
;char four
and r10b, 0x3F
mov r10b, [Base64Table + r10]
mov [b64EncStor+r13], r10b ; fourth char now encoded
inc r13
jmp .encodingInProgress
;--------
.oneExtraByte: ;so we need four (and not two !) bits more to reach 12
shl r8b, 4
and r8b, 0x3F ;only keep six bits from left, the two most right are zero
mov r8b, [Base64Table + r8]
mov [b64EncStor + r13], r8b
inc r13
mov r8b, "=" ;add two extra equal signs
mov [b64EncStor + r13], r8b
inc r13
mov [b64EncStor + r13], r8b
inc r13
jmp .weHaveFinished
;------
.twoExtraBytes: ;so we need two (and not four !) bits more to reach 18
;inc r12
mov r10b, [byteStorage + r12-1] ;put each input char in a register each
shl r10b, 2
and r10b, 0x3F ;only keep six bits from left, the two most right are zero
mov r10b, [Base64Table + r10]
mov [b64EncStor + r13], r10b
inc r13
mov r8b, "=" ;add one extra equal sign
mov [b64EncStor + r13], r8b
inc r13
jmp .weHaveFinished
;--------
.weHaveFinished:
;syscall for write, to output the result
mov rax, 1
mov rdi, 1
mov rsi, b64EncStor
mov rdx, r13
syscall
xor r12,r12
mov rax, 60 ; System call for exit
mov rdi, 0
syscall
Answer:
Wiping registers
mov rax, 0
mov rdi, 0
xor r11, r11
xor r12, r12
mov r13, 0
The preferred way to clear a register is to use the xor reg, reg instruction. It's small and fast. From the above it would seem that you knew this already but didn't apply it consistently. But there's more to it than just using xor. It is best to only have xor operate on the low 32 bits because the CPU will zero the high 32 bits automatically. For the 'old' registers (RAX, RBX, ... ) this will shave off a REX prefix:
xor eax, eax
xor edi, edi
xor r11d, r11d
xor r12d, r12d
xor r13d, r13d
A lurking danger
mov r8b, [Base64Table + r8]
mov r9b, [Base64Table + r9]
mov r10b, [Base64Table + r10]
Your program only ever writes to the lowest byte of the r8, r9, and r10 registers. There's no guarantee whatsoever that the whole 64-bit register will be suitable for indexing like you plan. Best add the following to your wipe list:
xor r8d, r8d
xor r9d, r9d
xor r10d, r10d
Redundant operations
shr r11b, 2
and r11b, 0x3F
The shr instruction already cleared the 2 topmost bits. There's no need for the and instruction that would do the same thing.
and r8b, 0x3
shl r8b,4
mov r11b, r9b
shr r11b, 4
and r11b, 0xF
add r8b, r11b
Here the same redundancy with shifting and anding r11b. In this case however you can consolidate both these and's:
shl r8b, 4
mov r11b, r9b
shr r11b, 4
add r8b, r11b
and r8b, 0x3F
and r9b, 0xF
shl r9b, 2
mov r8b, r10b
shr r8b, 6
and r8b, 0x3
add r9b, r8b
And again the same redundancy with shifting and anding r8b. Consolidating both and's gives:
shl r9b, 2
mov r8b, r10b
shr r8b, 6
add r9b, r8b
and r9b, 0x3F
jmp .weHaveFinished
;--------
.weHaveFinished:
The code can just as well fall through at this point. The jmp is redundant.
Optimizations
dec rax
...
cmp rax, 0 ;if rax = 0, rax was one before above decrementation
je .oneExtraByte ; we jump to .oneExtraByte
You can safely delay executing the dec rax instruction. The code at the ellipsis doesn't depend on the value that's in RAX. Instead of inspecting using cmp, inspect the flags from using dec. Apply this trick 3 times:
test rax, rax ; `TEST RAX, RAX` is preferred over `CMP RAX, 0`
jz .weHaveFinished
.encodingInProgress:
...
dec rax
jz .oneExtraByte
...
dec rax
jz .twoExtraBytes
...
dec rax
jnz .encodingInProgress
jmp .weHaveFinished
;----------------------------------
.oneExtraByte:
mov [b64EncStor + r13], r8b
inc r13
mov r8b, "=" ;add two extra equal signs
mov [b64EncStor + r13], r8b
inc r13
mov [b64EncStor + r13], r8b
inc r13
jmp .weHaveFinished
All of this incrementing on r13 is overkill. The code is about to end anyway. Just write it like:
mov [b64EncStor + r13], r8b
mov word [b64EncStor + r13 + 1], "=="
jmp .weHaveFinished
Style
To improve readability, you could start all of your tail-comments at the same column. (same goes for instruction mnemonics and operands)
Also be consistent with how you use whitespace. eg. I see the following:
mov [b64EncStor + r13], r11b ; our first char is now encoded
mov [b64EncStor+r13], r8b ; second char now encoded
mov [b64EncStor + r13], r8b
Wouldn't you agree that
mov [b64EncStor + r13], r11b ; our first char is now encoded
mov [b64EncStor + r13], r8b ; second char now encoded
mov [b64EncStor + r13], r8b
is nicer to look at? | {
"domain": "codereview.stackexchange",
"id": 43046,
"tags": "linux, assembly, base64, byte"
} |
Swapping dynamically populated images on hover/touch | Question: I'm using jQuery to swap images on hover. There's a main image, and a gallery of thumbs. When the thumb is hovered over/touched, the larger version of it populates the main image, and the smaller version of the original populates the thumb. This happens via swapping src and data- attributes.
This all works, but being relatively new to JS/jQuery, the code is probably a little clunky and could use a keener eye. I don't fully grasp callbacks (among other JS things) yet, so I imagine there'll be some streamlining/chuckling with that.
function large_to_thumb(imageHovered, mainLarge, mainThumb){
$(imageHovered).attr('src', mainThumb).data('large', mainLarge);
}
function thumb_to_large(image, mainLarge, mainThumb){
var mainImage = $('.main-image');
// 1. put gallery image data-large into .main-large src
mainImage.attr('src', $(image).data('large'));
// 2. put gallery image src into .main-large data-gallery-image
mainImage.data('gallery-image', $(image).attr('src'));
large_to_thumb(image, mainLarge, mainThumb);
}
$('.product-gallery img').on('mouseenter touch', function(){
var mainLarge = $('.product-image .main-image').attr('src');
var mainThumb = $('.product-image .main-image').data('gallery-image');
thumb_to_large(this, mainLarge, mainThumb);
});
li {
display:inline;
}
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<div class="product-image">
<img class="main-image" src="http://placehold.it/300x300" data-gallery-image="http://placehold.it/150x150">
<div class="product-gallery">
<ul>
<li>
<img src="http://placehold.it/149x149" data-large="http://placehold.it/299x299">
</li>
<li>
<img src="http://placehold.it/148x148" data-large="http://placehold.it/298x298">
</li>
</ul>
</div>
</div>
Answer: Some good suggestions already from @konijn, so I won't rehash any of the content from that answer.
I would consider writing this as a jQuery plugin. This will allow you to properly namespace your code and allow you to define your own variables for re-use within this scope (i.e. to prevent the need to re-query the DOM with every event action). This will also make your solution re-usable across various websites and DOM configurations.
That might look like
(function( $ ) {
$.fn.galleryThumbSwapper = function(options) {
// Extend our default options with those provided.
// Note that the first argument to extend is an empty
// object – this is to keep from overriding our "defaults" object.
var opts = $.extend({}, $.fn.galleryThumbSwapper.defaults, options);
// do our DOM selections once
// define wrapper based on element this function was called on
var $wrapper = $(this);
// define main image and gallery image items
// here we only find element within the wrapper such that
// you could use multiple galleries on a page without
// worry about interaction problems between galleries so
// long as each gallery is contained within the wrapper
var $mainImage = $wrapper.find(opts.mainImageSelector);
var $galleryImages = $wrapper.find(opts.galleryImageSelector);
// set click handler
$galleryImages.on('click', swapImages);
// define callback for click handler
function swapImages() {
// wrapper for triggered element
var $triggerElement = $(this);
// get current main image info
var currentMainSrc = $mainImage.attr('src');
var currentMainData = $mainImage.data('gallery-image');
// get triggered thumbnail info
var newMainSrc = $triggerElement.data('large');
var newMainData = $triggerElement.attr('src');
// perform the swap
$mainImage.attr('src', newMainSrc);
$mainImage.data('gallery-image', newMainData);
$triggerElement.attr('src', currentMainSrc);
$triggerElement.data('large', currentMainData);
}
// return this for use in chaining
return this;
}
// define default values for plug-in
// you could also possibly add default values for the data fields here
// so as to make these configurable
$.fn.galleryThumbSwapper.defaults = {
mainImageSelector: '.main-image',
galleryImageSelector: '.product-gallery img'
};
}( jQuery ));
// usage
$(document).ready(function() {
$('.product-image').galleryThumbSwapper();
});
// or, with overriding default options
$(document).ready(function() {
$('.product-image').galleryThumbSwapper({
mainImageSelector: '#main_img',
galleryImageSelector: '.thumb_img'
});
});
A few other thoughts:
You might consider using id instead of or in addition to class for the main wrapper DOM element. i.e. product-image. This would guarantee uniqueness, particularly if you used the jQuery plug-in approach I am suggesting.
You might consider your data attribute naming of 'large' which doesn't seem to make much sense as what you are storing in there is a URL. I actually don't see why you can't use the same data-attribute name on the main image and the gallery images.
Seems like a bit of an odd user experience to have thumbnails potentially be changed in order based on how the user triggers these image swaps. To me as a user, I would expect to be able to hover across the thumbs, without it impacting the order of the thumbnails.
Consider having images have their own class as well. This would address current inconsistency in having some elements be included in logic based on class while others become part of the logic based on element type. | {
"domain": "codereview.stackexchange",
"id": 21753,
"tags": "javascript, beginner, jquery, image, callback"
} |
How do I start a node with an anonymous name with roslaunch? | Question:
I have a node that passes the following argument when calling ros::init:
ros::init_options::AnonymousName
It works fine and gives me a unique node name. However, when I launch this node from a roslaunch file, it does not create a unique node name anymore.
How do I fix this?
Originally posted by Benoit Larochelle on ROS Answers with karma: 867 on 2011-07-14
Post score: 2
Answer:
Please see: http://www.ros.org/wiki/roslaunch/XML
"
$(anon name)
e.g. $(anon rviz-1). Generates an anonymous id based on name. name itself is a unique identifier: multiple uses of $(anon foo) will create the same "anonymized" name. This is used for name attributes in order to create nodes with anonymous names, as ROS requires nodes to have unique names.
"
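For example (package, type, and node names below are placeholders), a launch file could use the substitution like this:

```xml
<launch>
  <!-- $(anon my_node) expands to a unique anonymized name at launch time -->
  <node name="$(anon my_node)" pkg="my_package" type="my_node_executable" />
</launch>
```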
Originally posted by kwc with karma: 12244 on 2011-07-15
This answer was ACCEPTED on the original site
Post score: 4
Original comments
Comment by F1iX on 2020-10-01:
Is there a way to actually generate random ids/names then? | {
"domain": "robotics.stackexchange",
"id": 6146,
"tags": "roslaunch"
} |
Can paint colors be classified by wavelengths reflected or absorbed? | Question: As I understand it, we recognize pigment colors when certain wavelengths are reflected while others are absorbed. Since astronomers study star composition by their spectrums, could every paint color be given its own classification with a similar method?
I realize it would be terribly inconvenient, and maybe interfere with people's creativity... but is it possible?
Answer: Yes, kind of.
The problem is that paint colors are made of pigment. As you say, some wavelengths are reflected and some aren't.
However, the color perceived depends on the ones that come back reflected towards our eyes. Now: what if you don't send all wavelengths? The perceived color will be different. This means that the actual color depends on the light you throw at it.
So actually colors of objects are represented by the reflectivity, $\rho(\lambda)$, that is, the percentage of each incident wavelength that the object reflects.
This way, you can classify every color regardless of the ambient light you are using (spectrum and intensity).
For example, if you use the XYZ coordinate system, which is easily convertible into RGB, the coordinates are
$$X_i = \int_{visible} \rho_{obj}(\lambda)\cdot I_{light}(\lambda) \cdot \bar{x}_i\cdot d\lambda$$
and $\bar{x}_i$ can be found in tables.
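As a crude numerical illustration (the reflectivity, illuminant, and matching-function curves below are invented toy shapes, not real CIE data), the integral reduces to a sampled sum:

```python
import math

def gaussian(lmbda, center, width):
    return math.exp(-((lmbda - center) / width) ** 2)

# Toy spectra sampled over the visible range, 380-780 nm
wavelengths = range(380, 781, 5)
rho = lambda l: 0.9 * gaussian(l, 620, 60)   # "reddish" toy reflectivity
light = lambda l: 1.0                        # flat toy illuminant
xbar = lambda l: gaussian(l, 600, 40)        # toy matching function

# X ~ sum over sampled wavelengths of rho * I * xbar * d_lambda
dl = 5
X = sum(rho(l) * light(l) * xbar(l) * dl for l in wavelengths)
print(X > 0)  # True
```

Changing the illuminant spectrum changes the resulting coordinate even though the reflectivity is fixed, which is exactly the point made above.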
In short, reflectivity is like the spectrum of a paint. It determines its "color" independently of the incident light.
To compute the perceived color coordinates, you must specify the light you are using.
PS: When I read Paint in the title of the question I thought about the computer program "Paint". I should spend fewer hours working here, haha.
But that makes me say that light colours on a screen would be very different: actual light sources do have a true spectrum. However, pixels have it reduced to a sum of three tiny spectra. | {
"domain": "physics.stackexchange",
"id": 50659,
"tags": "visible-light"
} |
Why do the planets' orbital distances fall on an exponential curve? | Question: Background: I was recently reading a book on the planets to my son and I noticed a pattern in the distributions of the planets. The planets' distances roughly follow an exponential distribution.
Below is a plot of scaled, log orbital distances
$$
\tilde{d_n} = \frac{\log(d_n/d_1)}{\log(d_2/d_1)}
$$
with the line $an+b$:
where $d_1$ corresponds to Mercury and so on. Ceres is included; Pluto is excluded. By linear regression, $a = 0.90$, $b = -1.06$.
For the statistically minded, the data has a Pearson's correlation of 0.996. Note that this is a well-known phenomenon; see Pletser and references. The code used to generate the plot may be provided on request.
Question: What is the mechanism that leads to this distribution?
Aside: Is there a good introductory text on solar system formation for the mathematically inclined?
Update: This is also known as Titius–Bode law.
Answer: This correlation is known as Titius-Bode's law, which is often stated as
\begin{equation}
d=0.4 + 0.3 \cdot 2^n
\end{equation}
where d represents planet's mean distance from the Sun in Astronomical Units and n = -∞, 0, 1, 2... for Mercury, Venus, Earth, Mars, asteroid belt, Jupiter and so on.
The rule is not satisfied exactly with Neptune's orbit (n=7) constituting a significant departure from it: according to the law Neptune's mean distance ought to be 38.8 AUs, but is in reality just 30 AUs (disagreement of close to 30% with all other planets agreeing to less than 6%). In fact, this departure is what has historically led to diminishing importance of the law. See also the table and chart in wikipedia.
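This departure is easy to reproduce numerically (mean distances in AU are the standard textbook values; Mercury's $n = -\infty$ term is omitted):

```python
# Titius-Bode predictions: d = 0.4 + 0.3 * 2^n  (n = -inf for Mercury, excluded here)
actual = {            # mean distances from the Sun in AU
    "Venus": 0.72, "Earth": 1.00, "Mars": 1.52, "Ceres": 2.77,
    "Jupiter": 5.20, "Saturn": 9.54, "Uranus": 19.19, "Neptune": 30.07,
}
for n, name in enumerate(actual):
    predicted = 0.4 + 0.3 * 2 ** n
    error = abs(predicted - actual[name]) / actual[name]
    print(f"{name:8s} predicted {predicted:5.1f}  actual {actual[name]:5.2f}  error {error:4.0%}")
```

Running this shows every body agreeing to within a few percent except Neptune, whose predicted 38.8 AU is roughly 29% above the real 30.07 AU.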
It is currently thought that if the law is not a pure coincidence then it is a consequence of orbital instabilities and the mechanism through which Solar system was formed. It's been shown that rotational and scale invariance of a protoplanetary disk leads to density maxima in the disk appearing periodically in variable
\begin{equation}
x = \ln \frac{r_n}{r_0}
\end{equation}
which leads to geometric series for planetary distances similar to that expressed in Titius-Bode's law. See this and this paper for details.
Note that the requirements of rotational and scale invariance are very general. As the nebula from which protoplanetary disk is formed collapses under its own gravity, its rotation increases due to the law of the conservation of angular momentum. This eventually leads to the protoplanetary disk's rotational symmetry. Also, gravity does not have intrinsic length scale, so the nebula is highly likely to possess scale invariance. These two requirements are so general that even if the Titius-Bode's law is real it isn't at all useful to select between Solar system's formation models.
I don't know of an advanced book specifically on Solar system formation, but there is a very good book by A.E. Roy on orbital mechanics which would certainly qualify as a book for the mathematically inclined; in addition to chapters on orbital mechanics, rocket dynamics, and interplanetary trajectory design, it includes a few on solar system formation and many-body stellar systems. So depending on how broad your interests are you may enjoy it. | {
"domain": "physics.stackexchange",
"id": 12752,
"tags": "orbital-motion, solar-system, planets"
} |
Can time speed up? If so how can you do it? | Question: Einstein said that by traveling near the speed of light, time slows down, so it's only natural to think the opposite could happen and time could speed up. Does it, and if so, how does it work?
Answer: The shortest time between two events is always measured by the observers for which those events are in the same place - that is, the same displacement vector to the observer.
So, you always measure the shortest time between two events that happen to you, because wherever you go, there you are.
However, you can measure a shorter time between two events distant from each other by boosting into a frame for which those two events are closer together in space.
For instance:
A train with a clock on the side of it drives past you. Each time the clock on the train ticks, the train has moved relative to you. If you boost into the train's frame (you get in your car and match the train's velocity vector), now each time the clock on the train ticks, the train is in the same place relative to you. Thus you have (very very slightly) reduced the time you measure between ticks of the train's clock. | {
"domain": "physics.stackexchange",
"id": 83946,
"tags": "special-relativity, time"
} |
Add Default Rubberduck VBA Folder Annotation to VBProject.VBComponents | Question: Currently, Rubberduck VBA files all of a VBProject's VBComponents that do not have a Folder annotation into a folder named after the VBProject. It can be time-consuming to manually organize the components for the first time, and inconvenient to add annotations to every Worksheet, UserForm, Class, etc. When run, my code will organize any unfiled component by ProjectName.ComponentType.
Before
After
You will need to add a reference to Microsoft Visual Basic for Applications Extensibility 5.3 and, from File Options, check "Trust access to the VBA project object model".
Private Sub AddDefaultFolderAnnotationsToVBComponents()
Dim Component As VBComponent
Dim Project As VBProject
Select Case Application.Name
Case "Microsoft Access", "Microsoft Word"
Set Project = IIf(True, Application, 0).VBE.ActiveVBProject
Case "Microsoft Excel"
Set Project = IIf(True, Application, 0).ActiveWorkbook.VBProject
Case "Microsoft PowerPoint"
Set Project = IIf(True, Application, 0).VBE.VBProjects(1)
End Select
Dim FolderName As String
Dim HasFolder As Boolean
For Each Component In Project.VBComponents
With Component.CodeModule
If .CountOfLines = 0 Then
HasFolder = False
Else
HasFolder = InStr(.Lines(1, .CountOfLines), Chr(39) & "@Folder")
End If
End With
If Not HasFolder Then
Select Case Component.Type
Case vbext_ComponentType.vbext_ct_StdModule
FolderName = Project.Name & ".Modules"
Case vbext_ComponentType.vbext_ct_ClassModule
FolderName = Project.Name & ".Classes"
Case vbext_ComponentType.vbext_ct_MSForm
FolderName = Project.Name & ".Forms"
Case vbext_ComponentType.vbext_ct_ActiveXDesigner
FolderName = Project.Name & ".Designers"
Case vbext_ComponentType.vbext_ct_Document
FolderName = Project.Name & ".Documents"
End Select
Component.CodeModule.InsertLines 1, Chr(39) & "@Folder(""" & FolderName & """)"
End If
Next
End Sub
Note: I purposely kept all the functionality in a single procedure for portability.
Kudos to the Rubberduck team; the latest version, Rubberduck VBA v2.5, is a C# WPF application that follows the MVVM pattern to the letter!
Answer:
HasFolder = InStr(.Lines(1, .CountOfLines), Chr(39) & "@Folder")
That condition can technically be True in modules that don't actually have the annotation; a @Folder Rubberduck annotation is only valid in a module's declarations section, so there's no need to grab the module content any further than .CountOfDeclarationLines - if the module is nearing the 10K lines capacity, using this instead of .CountOfLines could make a significant difference in the size of the string being passed to the InStr function.
Select Case Component.Type
Case vbext_ComponentType.vbext_ct_StdModule
FolderName = Project.Name & ".Modules"
Case vbext_ComponentType.vbext_ct_ClassModule
FolderName = Project.Name & ".Classes"
Case vbext_ComponentType.vbext_ct_MSForm
FolderName = Project.Name & ".Forms"
Case vbext_ComponentType.vbext_ct_ActiveXDesigner
FolderName = Project.Name & ".Designers"
Case vbext_ComponentType.vbext_ct_Document
FolderName = Project.Name & ".Documents"
End Select
I wouldn't repeat the concatenation here - just work out the last part of the name per the component type, and then concatenate with Project.Name & "." (I'd pull the separator dot out of the Case blocks as well) - and then I might give it a bit of breathing room, but that's more subjective:
Select Case Component.Type
Case vbext_ComponentType.vbext_ct_StdModule
ChildFolderName = "Modules"
Case vbext_ComponentType.vbext_ct_ClassModule
ChildFolderName = "Classes"
Case vbext_ComponentType.vbext_ct_MSForm
ChildFolderName = "Forms"
Case vbext_ComponentType.vbext_ct_ActiveXDesigner
ChildFolderName = "Designers" 'note: not supported in VBA
Case vbext_ComponentType.vbext_ct_Document
ChildFolderName = "Documents"
End Select
FolderName = Project.Name & "." & ChildFolderName
I like the @Folder("Parent.Child") syntax and I see that's what you're generating here:
Component.CodeModule.InsertLines 1, Chr(39) & "@Folder(""" & FolderName & """)"
Note that this would also be legal... and simpler to generate:
Component.CodeModule.InsertLines 1, "'@Folder " & FolderName
Obviously if you prefer the parenthesized syntax (either works, it's really just down to personal preference) like I do then by all means keep it, but Rubberduck's new Move to Folder command doesn't put the parentheses in. I'd probably have the single-quote ' character spelled out, too, but I can see how a ' might be harder than necessary to read in the middle of a bunch of " double quotes. On the other hand, defining a constant for it would remove the need to have the "@Folder string literal defined in multiple places:
Private Const RD_FOLDER_ANNOTATION As String = "'@Folder "
...
Component.CodeModule.InsertLines 1, RD_FOLDER_ANNOTATION & FolderName
I have to mention that Rubberduck deliberately shoves all modules under the same default named-after-the-project folder (they can easily be sorted by component type in the Code Explorer), because we strongly believe grouping modules by component type is utterly useless and counter-productive: when I look at the code for a given functionality, I want to see all the code related to that functionality - and I couldn't care less about the component type of the code I'm looking at... it's mostly all class modules anyway.
A sane way to organize the modules in a project is by functionality: you want your ThingView user form in the same place as your ThingModel class and your ThingPresenter and the Things custom collection - that way when you're working on that Thing, you don't have to dig up the various pieces in an ever-growing list of components under some useless "Class Modules" folder.
"domain": "codereview.stackexchange",
"id": 39210,
"tags": "vba, excel, rubberduck, ms-access, powerpoint"
} |
mass-friction-spring system with closed loop | Question: My professor told us that it is possible to see the friction in a mass-friction-spring as the contribution of a closed loop control system. He wrote the following formulas:
Transfer function:
$$G(s)=\frac{s}{s^2+\frac{k}{M}}$$
gain:
$$\rho=h/M$$
where k=spring constant, h=friction constant, M=mass
Can you recognize these formulas and give me some indication about this method?
I think it would be useful to understand what the meaning of the transfer function is in this case.
Answer: Based on the information you've given, I believe your professor is suggesting that a friction term can be represented as shown in the following block diagram.
The transfer function $G(s)$ relates force ($F(s)$, the input in the diagram) to velocity ($sX(s)$, the output in the diagram) for a mass-spring system. The damping ($\rho$) is represented in the feedback loop as a proportional gain acting on the velocity. We can use the feedback rule to create a single, closed loop transfer function:
\begin{align}
G_{cl}(s) &= \frac{G(s)}{1+\rho G(s)},\\
&= \frac{\frac{s}{s^2+k/M}}{1 + \left(\frac{h}{M} \right)\frac{s}{s^2+k/M}},\\
&= \frac{s}{s^2 + \frac{h}{M}s + \frac{k}{M}}.
\end{align}
As you can see, the closed loop transfer function is the same as the general mass-spring-damper transfer function. | {
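As a quick numerical sanity check of this algebra (using made-up values for $M$, $k$, and $h$), the feedback-rule expression can be compared against the mass-spring-damper transfer function at arbitrary complex frequencies:

```python
# Check that G(s)/(1 + rho*G(s)) equals s/(s^2 + (h/M)s + k/M),
# using arbitrary (hypothetical) parameter values.
M, k, h = 2.0, 8.0, 0.5   # mass, spring constant, friction constant
rho = h / M               # feedback gain

def G(s):
    """Open-loop mass-spring transfer function (force -> velocity)."""
    return s / (s**2 + k / M)

def G_cl(s):
    """Closed-loop transfer function obtained from the feedback rule."""
    return G(s) / (1 + rho * G(s))

def mass_spring_damper(s):
    """Transfer function of the full mass-spring-damper system."""
    return s / (s**2 + (h / M) * s + k / M)

for s in (1 + 2j, 0.3 - 1j, 5.0):
    assert abs(G_cl(s) - mass_spring_damper(s)) < 1e-12
```

The two expressions agree at every tested point, as the symbolic derivation above predicts.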
"domain": "engineering.stackexchange",
"id": 3487,
"tags": "control-theory, transfer-function, feedback-loop"
} |
Filters in convolutional autoencoders | Question: I have a question regarding the number of filters in a convolutional Autoencoder.
As far as I have understood, as the network gets deeper, the amount of filters in the convolutional layer increases.
A typical pattern would be $16, 32, 64, 128, 256, 512, \ldots$
Right now I am looking into Autoencoders and on the Keras Blog I noticed that they do it the other way around.
They start with $16$ filters in the first layer, then the number of filters is decreased:
https://blog.keras.io/building-autoencoders-in-keras.html
Now I am wondering if this is usual when working with Autoencoders or does it depend on what kind of features the network should learn?
Thanks in advance,
Cheers,
Michael
Answer: Autoencoders are meant to reduce the dimensionality of your data. Increasing the number of filters would do the opposite. | {
"domain": "datascience.stackexchange",
"id": 5541,
"tags": "deep-learning, autoencoder"
} |
How does the expansion of the universe not violate causality? | Question: It is often said that faster than light travel would violate causality.
However, because the universe is expanding, there are actually distant stars that move away from us at a speed greater than the speed of light.
Why is this allowed, even though it could violate causality?
Answer: If you only take into account expansion of space, then there is no way to violate causality, because information still cannot travel faster than light. If two galaxies drift apart faster than light, it is not even possible for signals emitted from one galaxy to reach the other. They have crossed each other's Hubble radius. | {
"domain": "physics.stackexchange",
"id": 14454,
"tags": "speed-of-light, space-expansion, causality"
} |
Loop optimization for image processing | Question: I have this piece of code that is running too slow. I was wondering if anyone can help me optimize it as I can't seem to find any more shortcuts. I'm not sure if using List<> is going to help me but I need complex operation such as Union and Overlap.
Also, List is desirable because I don't know how many unique partitions an Image Region will have before running.
u1.length = 13254 which is the number of distinct elements in RevisedListMeanH
RevisedListMeanH.Count = 90000
The purpose of the first piece of code is to group together pixels via horizontal comparison and vertical comparison. This runs for about 70 seconds.
The second portion combines both vertical and horizontal pixel blocks into a 2D-block. This section runs for about 120 seconds. My goal is to have both of these loops complete under 10 seconds.
These numbers are from a 300x300 pixel region comparisons of a 4000x3000 image.
watch.Start();
for (int s = 0; s < u1.Length; s++ )//iterate through uniquelist
{
List<int> ConnectedBlocksH = new List<int>();
List<int> ConnectedBlocksV = new List<int>();
float[] RH = RevisedListMeanH.ToArray();
float[] RV = RevisedListMeanV.ToArray();
for (int a = 0; a < RevisedListMeanH.Count; a++)//iterate through bitmap with res
{
if (u1[s] == RH[a])
{
ConnectedBlocksH.Add(a);//add the index
}
}
for (int a = 0; a < RevisedListMeanV.Count; a++)//iterate through bitmap with res
{
if (u1[s] == RV[a])
{
ConnectedBlocksV.Add(a);//add the index
}
}
ArrayOfConnectedBlocksH[s] = ConnectedBlocksH;//where the data IS
ArrayOfConnectedBlocksV[s] = ConnectedBlocksV;//where the data IS
}
watch.Stop();
long asasasdda = watch.ElapsedMilliseconds; //71 sec
watch.Reset();
watch.Start();
//if intersects then union the two lists
//iterate through both list
List<List<int>> ListOfConnectedBlocks = new List<List<int>>();
for (int a = 0; a < ArrayOfConnectedBlocksH.Length; a++ )
{
HashSet<int> i = new HashSet<int>(ArrayOfConnectedBlocksH[a]);
//trigger means it scanned and there was no overlap to add to the group
while (true)
{
bool trigger = true;
for (int b = 0; b < ArrayOfConnectedBlocksV.Length; b++)
{
if (i.Overlaps(ArrayOfConnectedBlocksV[b]))
{
i.UnionWith(ArrayOfConnectedBlocksV[b]);//combines all overlaps into one group
ArrayOfConnectedBlocksV[b].Clear();//merged so just remove
trigger = false;
}
}
if (trigger)
{
break;
}
trigger = true;
for (int c = 0; c < ArrayOfConnectedBlocksH.Length; c++)//now cycle through horizontal
{
if (i.Overlaps(ArrayOfConnectedBlocksH[c]))
{
i.UnionWith(ArrayOfConnectedBlocksH[c]);//combines all overlaps into one group
ArrayOfConnectedBlocksH[c].Clear();//merged so just remove
trigger = false;
}
}//first cycle to get T0
if(trigger)
{
break;
}
}
if (i.Count != 0)
{
ListOfConnectedBlocks.Add(i.ToList<int>());
}
}
watch.Stop();
long asasasda = watch.ElapsedMilliseconds;//122 seconds
Answer: Some suggestions which might make it faster:
Move your ToArray() statements outside the loop so that you only execute them once and then reuse them:
float[] RH = RevisedListMeanH.ToArray();
float[] RV = RevisedListMeanV.ToArray();
for (int s = 0; s < u1.Length; s++ )//iterate through uniquelist
{
... etc ...
In the second loop, you convert a List to a HashSet:
HashSet<int> i = new HashSet<int>(ArrayOfConnectedBlocksH[a]);
You could avoid that by making ArrayOfConnectedBlocksH[a] i.e. ConnectedBlocksH a HashSet instead of a List to begin with.
Might it be any faster if you cached the Length and Count property values instead of calling them repeatedly? For example:
for (int s = 0, length = u1.Length; s < length; s++ )
The basic problem in your first loop is that you are iterating a lot of elements, and doing u1.Length * (RevisedListMeanH.Count + RevisedListMeanV.Count) comparisons.
It might be faster although more complicated to do the following.
Convert your u1 list to a list of index/value pairs:
List<KeyValuePair<int,float>> u1Values = new List<KeyValuePair<int,float>>();
for (int s = 0; s < u1.Length; s++ )
u1Values.Add(new KeyValuePair<int,float>(s, u1[s]));
Sort your index/value pairs by value:
u1Values.Sort((kvp1, kvp2) => kvp1.Value.CompareTo(kvp2.Value));
Do the same with your RevisedListMeanH and RevisedListMeanV lists.
Now that all your arrays are sorted by value, it is easier/cheaper to find which elements match: you can do it by iterating through all the arrays once; something like:
int u = 0; // index into u1Values
int v = 0; // index into vValues
for (;;)
{
int i = u1Values[u].Value.CompareTo(vValues[v].Value);
if (i == 0)
{
// matches!
int s = u1Values[u].Key;
ArrayOfConnectedBlocksV[s].Add(vValues[v].Key);//add the index
// which do we increment now: ++u or ++v?
}
else if (i > 0)
{
// u1Values[u] is too big
if (++v == vValues.Count)
break;
}
else
{
if (++u == u1Values.Count)
break;
}
} | {
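A Python sketch of the same sort-and-merge matching idea (match_indices is a hypothetical helper name; the two lists stand in for u1 and RevisedListMeanV). It also shows one way to resolve the "which do we increment" question: gather the whole run of equal values on the v side before advancing u, so consecutive equal u entries can reuse the same run:

```python
def match_indices(u_vals, v_vals):
    """For each index s into u_vals, collect the indices a in v_vals with
    v_vals[a] == u_vals[s], by sorting both sides and doing one merge-style
    scan (O(n log n)) instead of the original O(n*m) nested loops."""
    u_order = sorted(range(len(u_vals)), key=lambda i: u_vals[i])
    v_order = sorted(range(len(v_vals)), key=lambda j: v_vals[j])
    connected = {i: [] for i in range(len(u_vals))}
    u = v = 0
    while u < len(u_order) and v < len(v_order):
        uv, vv = u_vals[u_order[u]], v_vals[v_order[v]]
        if uv < vv:
            u += 1          # u value too small: advance u
        elif uv > vv:
            v += 1          # v value too small: advance v
        else:
            # Gather the whole run of equal values on the v side...
            run = []
            k = v
            while k < len(v_order) and v_vals[v_order[k]] == uv:
                run.append(v_order[k])
                k += 1
            connected[u_order[u]].extend(run)
            u += 1          # ...so the next equal u entry can reuse the run
    return connected

# Example: value 3.0 (u index 0) appears at v indices 1 and 2.
assert match_indices([3.0, 1.0], [1.0, 3.0, 3.0]) == {0: [1, 2], 1: [0]}
```

The same structure translates directly back to C# with sorted key/value pairs.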
"domain": "codereview.stackexchange",
"id": 6511,
"tags": "c#, performance, image"
} |
Definition electric field | Question: Can someone verify whether this is correct?
Given the Coulomb force, we can define the electric field in a point in space as $\bar E = \frac{\bar F}{q} $, where q is a positive test charge and $\bar F$ is the Coulomb force acting on q. We make the hypothesis that q does not make the other charges move. Rewriting the equation above, when a point charge Q acts on q, we find that $\bar E = \frac{1}{4 \pi \epsilon_0} \frac{Q (\bar r_q - \bar r_Q)}{|\bar r_q - \bar r_Q|^3}$. Hence, we can interpret the electric field as the force, coming from a point charge Q, acting on a charge of $1C$ (we can recognize this in the formula above). Is this correct?
From wikipedia:
The electric field, ${\displaystyle \mathbf {E} }$, at a given point is defined as the (vector) force, ${\displaystyle \mathbf {F} }$, that would be exerted on a stationary test particle of unit charge by electromagnetic forces (i.e. the Lorentz force). A particle of charge ${\displaystyle q}$ would be subject to a force ${\displaystyle \mathbf {F} =q\mathbf {E} }$.
Do they mean the magnitude of the charge of a proton with unit charge? Or 1 coulomb?
I'm very confused. Can someone clearly explain what electric field is if my thinking about it is wrong? Is there an intuitive way to think about it?
Answer: Electric field at any point is the force that a unit positive charge (1 unit positive charge = 1 Coulomb) would feel when placed at that particular point. This is a good way to decide the direction of the electric field lines.
Basically we define electric field in terms of the force a particle experiences at that point because that's the only thing we can measure. Field lines are just there to help us visualise. Yet they are a very important concept in physics. They stress the importance of the finite time taken for propagation of electromagnetic waves in more advanced physics. | {
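A small numerical illustration of the definition (Coulomb's law in SI units; the charge and distance values are arbitrary): the field of a point charge is literally the Coulomb force per unit of test charge, so $F = qE$ holds by construction.

```python
import math

EPS0 = 8.8541878128e-12          # vacuum permittivity, F/m
K = 1 / (4 * math.pi * EPS0)     # Coulomb constant, ~8.988e9 N m^2/C^2

def coulomb_force(Q, q, r):
    """Magnitude of the Coulomb force between point charges Q and q (SI)."""
    return K * Q * q / r**2

def field(Q, r):
    """Field magnitude of point charge Q at distance r: force on 1 C."""
    return K * Q / r**2

Q, q, r = 2e-6, 5e-9, 0.1        # arbitrary example values
F = coulomb_force(Q, q, r)
# F = qE: the field is the force per unit (1 C) of test charge
assert abs(F - q * field(Q, r)) < 1e-12 * F
```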
"domain": "physics.stackexchange",
"id": 38975,
"tags": "electrostatics, electric-fields"
} |
Converting the Gravitational Constant | Question: Hello all and thank you for reading.
I am creating a program in Unity where I recreate the earth-moon orbit. Unity struggles with numbers so large when I write code in meters and kilograms so I made my own units. 1 unit of mass = the earth's mass (5.9736f*10^24 kg). 1 unit of length is the Earth's Diameter (12,756,000m) and the distance between the earth and the moon at a supermoon is 28.222 units of length (360,000,000m). In this world, what is the gravitational constant? I have done the math in two ways and gotten answers orders of magnitude different. I am officially giving up and asking for help from the internet.
Any help is really appreciated and please show work, I am dying to know how this is done.
Answer: The SI value of the gravitational constant is
$$G=6.674\times 10^{-11}\frac{\text{meter}^3}{\text{kilogram}\,\text{second}^2}.$$
Define your new units (with the time unit taken from your comment):
$$1\text{ MyLengthUnit}=1.2756\times 10^7\text{ meters};$$
$$1\text{ MyMassUnit}=5.9736\times 10^{24}\text{ kilograms};$$
$$1\text{ MyTimeUnit}=13.5\text{ days}=1.1664\times 10^6\text{ seconds}.$$
Then in terms of your units, the SI units are
$$1\text{ meter}=\frac{1}{1.2756\times 10^7}\text{ MyLengthUnit};$$
$$1\text{ kilogram}=\frac{1}{5.9736\times 10^{24}}\text{ MyMassUnit};$$
$$1\text{ second}=\frac{1}{1.1664\times 10^6}\text{ MyTimeUnit}.$$
This means that
$$\begin{align}
G&=6.674\times 10^{-11}\frac{\left(\frac{1}{1.2756\times 10^7}\text{ MyLengthUnit}\right)^3}{\left(\frac{1}{5.9736\times 10^{24}}\text{ MyMassUnit}\right)\left(\frac{1}{1.1664\times 10^6}\text{ MyTimeUnit}\right)^2}\\
&=\frac{(6.674\times 10^{-11})(5.9736\times 10^{24})(1.1664\times 10^6)^2}{(1.2756\times 10^7)^3}\frac{\text{MyLengthUnit}^3}{\text{MyMassUnit}\,\text{MyTimeUnit}^2}\\
&=261321\frac{\text{MyLengthUnit}^3}{\text{MyMassUnit}\,\text{MyTimeUnit}^2}.
\end{align}$$
(I kept a few too many (in)significant digits in the final integral value.) | {
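The same conversion as a short computational check (the tolerance just allows for the rounding noted above):

```python
# Recompute G in the custom units: G scales as length^3 / (mass * time^2),
# so multiply the SI value by MASS * TIME**2 / LEN**3.
G_SI = 6.674e-11        # m^3 / (kg s^2)
LEN  = 1.2756e7         # meters per MyLengthUnit (Earth diameter)
MASS = 5.9736e24        # kilograms per MyMassUnit (Earth mass)
TIME = 1.1664e6         # seconds per MyTimeUnit (13.5 days)

G_custom = G_SI * MASS * TIME**2 / LEN**3
assert abs(G_custom - 261321) / 261321 < 1e-3   # matches the value above
```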
"domain": "physics.stackexchange",
"id": 66719,
"tags": "homework-and-exercises, newtonian-gravity, orbital-motion, units"
} |
Theoretical Calculation of Specific Heat of a Gas | Question: I've read that the theoretical specific heat of a monatomic gas (like dissociated hydrogen or oxygen) is $20.8\, \mathrm{\dfrac{J}{mol\cdot K}}$ at constant pressure and $12.5\, \mathrm{\dfrac{J}{mol\cdot K}}$ at constant volume.
Is there a way to calculate theoretical values for the specific heats of polyatomic gases?
Is there a way to calculate the temperature dependence of specific heats for monatomic species?
I don't know much about quantum mechanics, but maybe this is one of the "tricks" you can use it for? The specific heats of everything else I've seen (experimental data) depends on temperature, so I assume monatomic gases are the same. I think the theoretical values I have are for $298\,\textrm{K}$, $1$ atm.
I'm working on some code that simulates the elementary reactions going on in the combustion of several fuels and in order to accurately calculate the Gibbs free energy change to determine whether or not each reaction will occur, I need specific heats that are as accurate as possible, especially in the 2000-4000K range. It's hard to find experimental data for less common species like OH, HO2, etc.
Answer: Leaving this here for now... Will update with more information and references later.
Einstein and Debye showed that specific heat is a function of temperature, but is asymptotic at high* temperatures. Here is a simple explanation why:
Heat, with regard to everyday applications, is simply a measure of the motion of atoms and molecules. Let's start with gases. Classical theory tells us there are three types of motion for an atom: translational, rotational, and vibrational. Here's where quantum theory comes in. Quantum Mechanics tells us that there is a "temperature" threshold where each of these types of motion can begin to occur. We know from thermodynamics that there is a distribution of temperatures with regard to the atoms in a gas, so some atoms may have breached this temperature threshold and are in motion in more ways than others--let's call this having more degrees of freedom. This is where the temperature dependence comes in. At low temperatures, there is a significant variance in degrees of freedom between atoms, and atoms continuously climb and fall below these thresholds. However, at high temperatures, most atoms have attained enough energy to gain all possible degrees of freedom. Therefore, there is little temperature dependence at high temperature for gases. In solids and liquids, magnetism must be considered, so they can exhibit different behavior at high temperatures.
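The classical limit of this picture is the equipartition theorem: once all thresholds are crossed, each quadratic degree of freedom contributes $R/2$ to the molar heat capacity at constant volume, and $C_p = C_v + R$ for an ideal gas. A quick check reproduces the monatomic values quoted in the question:

```python
R = 8.314462618  # molar gas constant, J/(mol*K)

def cv_molar(dof):
    """Molar heat capacity at constant volume from equipartition:
    R/2 per fully-excited quadratic degree of freedom."""
    return dof / 2 * R

cv_mono = cv_molar(3)    # monatomic gas: 3 translational DOF -> ~12.5
cp_mono = cv_mono + R    # constant pressure: Cp = Cv + R      -> ~20.8

assert abs(cv_mono - 12.5) < 0.1
assert abs(cp_mono - 20.8) < 0.1
```

For polyatomic gases the same formula applies with more degrees of freedom, but only those that are thermally excited at the temperature of interest actually contribute.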
The above discussion completely ignores pressure and volume, which are far more likely to have an impact on your calculations than temperature. Classical theory says polyatomic gases should have a constant specific heat at high temperatures, but I doubt it is that simple. I will do some reading on these issues and get back to you.
*Varies depending on the compound. Most gases have stable specific heats around room temperature, but behavior can vary wildly. See Einstein Temperature or Debye Temperature, specifically for Diamond. | {
"domain": "physics.stackexchange",
"id": 32456,
"tags": "thermodynamics, physical-chemistry, kinetic-theory"
} |
I am trying to identify this bone I found on the beach at the Delaware Bay in Delaware. It is 2 1/2 inches wide and 1 1/2 tall | Question:
This bone was found at Big Stone beach on the Delaware bay, Delaware. It weighs only 19 grams
Answer: It's a pharyngeal tooth from a large fish like a drum or a parrot fish.
The pharyngeal jaws are for grinding and also contain a pharyngeal mill, where larger and smaller teeth grind coarser and finer coral pieces. Fish like the Black Drum use similar teeth to grind bivalves and molluscs.
They are ideal for scraping and crushing the tough and calcified materials that make up the parrot fish's diet, coral skeletons and polyps, and algae-covered rocks.
The teeth are typically arranged in several rows, with each row containing multiple tooth plates. The outermost row consists of larger, stronger teeth, while the inner rows may contain smaller and more specialized teeth for further grinding and pulverizing food particles. This multi-row arrangement provides parrot fish with an efficient means of food processing. Teeth of that kind are also found in other fish, and they are reinforced to handle high mechanical loads. https://www.sciencedirect.com/science/article/abs/pii/S1047847720301039
"domain": "biology.stackexchange",
"id": 12303,
"tags": "species-identification, zoology, marine-biology"
} |
Quantum computing roadmap | Question: I have to create a roadmap for the quantum computing technology. Looking around I found the timeline on wikipedia that is pretty wide but does not highlight the key events in quantum computing research neither sets the possible future for research.
Could somebody help me to define which are key events in the quantum computing fields? When (and why) we started to explore this technology, for which milestones we passed through and (maybe) what we can set as future milestones?
Answer: This Quantum Information Science and Technology Roadmapping Project is a very detailed roadmap prepared under ARDA (Advanced Research and Development Activity) around ~2004 with further refinements. Here is the introduction/overview. It had a panel of about 19 elite academic scientists/experts.
For a completely contrasting recent overview/review on progress to date/future prospects see this [controversial] paper State of the Art and Prospects for Quantum Computing, M. I. Dyakonov (2012). | {
"domain": "cs.stackexchange",
"id": 1500,
"tags": "reference-request, quantum-computing"
} |
Partial derivative of Dirac Lagrangian with respect to derivatives of fields | Question: Why is $\frac{\partial\mathcal{L}}{\partial(\partial_\nu \bar{\psi})} = 0$, for the Dirac Lagrangian $\mathcal{L} = \bar{\psi}(i \gamma^\mu \partial_\mu - m)\psi$?
This comes up in deriving the Noether current for $\psi \rightarrow e^{i\alpha}\psi$ for example.
My confusion comes from the fact that we can write the following term in the Lagrangian $i\bar{\psi}\gamma^\mu\partial_\mu\psi = -i(\partial_\mu \bar{\psi})\gamma^\mu\psi$ by integrating by parts which makes it look like $\frac{\partial\mathcal{L}}{\partial(\partial_\nu \bar{\psi})} = -i \gamma^\mu \psi$.
In fact, this is how we get the equations of motion for $\bar{\psi}$.
Answer:
$\psi$ and $\bar \psi$ are thought as two independent variables in the Lagrangian.
If you write a Lagrangian as $\mathcal{L}_1 =\bar\psi(...)\psi$, you should use it to calculate the Noether current or equation of motion. If you have the other one, $\mathcal{L}_2 =\psi(...)\bar\psi$, you have to perform the derivatives based on this one.
The two results will be equivalent; they are Dirac dual to each other.
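To spell out the equivalence (a standard integration-by-parts step, filling in the detail the question raises): the two kinetic terms differ only by a total derivative,

$$i\bar\psi\gamma^\mu\partial_\mu\psi = -i(\partial_\mu \bar\psi)\gamma^\mu\psi + i\,\partial_\mu\!\left(\bar\psi\gamma^\mu\psi\right),$$

and a total derivative does not change the action. Both forms therefore give the same equations of motion and, up to normalization, the same Noether current $j^\mu \propto \bar\psi\gamma^\mu\psi$, even though $\frac{\partial\mathcal{L}}{\partial(\partial_\nu\bar\psi)}$ is $0$ for one form and $-i\gamma^\nu\psi$ for the other.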
"domain": "physics.stackexchange",
"id": 72823,
"tags": "lagrangian-formalism, differentiation, fermions, dirac-equation"
} |
Why does current conservation involve an arbitrary function? | Question: In section 6.1 of Peskin's quantum field theory introduction, right after equation 6.3, the four current density $j^{\mu}$ is said to be conserved because for any function $f \left( x \right)$ that falls off at infinity, we have
$$
\int f \left( x \right) \partial _{\mu} j^{\mu} \left( x \right) \mathrm{d}^4 x = 0
$$
I am just so confused on the fact that there is a function $f \left( x \right)$ involved in this evaluation. Is current conservation not just $\int \partial _{\mu} j^{\mu} \left( x \right) \mathrm{d}^4 x = 0$?
Thanks in advance!
Answer: Current conservation is $\partial_\mu j^\mu=0$, not integrated over ANYTHING (note the zeroth component of $j$ is the current density, so this is equivalent to the usual statement of current conservation $\dot\rho=-\vec\nabla\cdot\vec j$). So, Peskin wants to show $\partial_\mu j^\mu$ is zero. He does it by showing its integral over any test function $f(x)$ is zero. If $g(x)$ is a function such that $\int f(x)g(x)=0$ for all test functions $f(x)$, then $g(x)=0$. | {
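To sketch the last step (a standard argument, left implicit above): suppose $g(x_0)>0$ at some point $x_0$ and $g$ is continuous. Then $g>0$ on a small ball $B$ around $x_0$, and choosing a test function $f\geq 0$ supported inside $B$ with $f(x_0)>0$ gives

$$\int f(x)\,g(x)\,\mathrm{d}^4x \;\geq\; \Big(\min_{\operatorname{supp} f} g\Big)\int f(x)\,\mathrm{d}^4x \;>\; 0,$$

contradicting the assumption that the integral vanishes for all test functions. Hence $g\equiv 0$, i.e. $\partial_\mu j^\mu = 0$ pointwise.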
"domain": "physics.stackexchange",
"id": 52855,
"tags": "quantum-field-theory, conservation-laws, electric-current"
} |
Java method - levels of abstraction | Question: In his Clean Code book, Robert C. Martin states that functions should do only one thing and one should not mix different levels of abstraction. So I started wondering whether the code below meets these requirements (I assume the answer is negative). Which one of the following code snippets is better?
Snippet 1
public void run() {
isRunning = true;
try {
serverSocket = new ServerSocket(port);
}
catch (IOException | IllegalArgumentException e) {
System.out.println("Could not bind socket to given port number.");
System.out.println(e);
System.exit(1);
}
while(isRunning) {
Socket clientSocket = null;
try {
clientSocket = serverSocket.accept();
}
catch (IOException e) {
logEvent("Could not accept incoming connection.");
}
Client client = new Client(clientSocket, this);
connectedClients.put(client.getID(), client);
Thread clientThread = new Thread(client);
clientThread.start();
logEvent("Accepted new connection from " + clientSocket.getRemoteSocketAddress().toString());
client.send("@SERVER:HELLO");
}
}
Snippet 2
public void run() {
isRunning = true;
createServerSocket();
while(isRunning) {
Socket clientSocket = null;
clientSocket = acceptConnection();
addClient(clientSocket);
}
}
private void createServerSocket() {
try {
serverSocket = new ServerSocket(port);
}
catch (IOException | IllegalArgumentException e) {
System.out.println("Could not bind socket to given port number.");
System.out.println(e);
System.exit(1);
}
}
private Socket acceptConnection() {
Socket clientSocket = null;
try {
clientSocket = serverSocket.accept();
}
catch (IOException e) {
logEvent("Could not accept incoming connection.");
}
return clientSocket;
}
private void addClient(Socket clientSocket) {
Client client = new Client(clientSocket, this);
connectedClients.put(client.getID(), client);
Thread clientThread = new Thread(client);
clientThread.start();
logEvent("Accepted new connection from " + clientSocket.getRemoteSocketAddress().toString());
client.send("@SERVER:HELLO");
}
However, splitting such simple code into multiple fragments seems like overkill (although it's more readable - is it?). Besides, createServerSocket has side effects (System.exit) and addClient does too many things at once. Any advice?
Answer: The second one is much easier to read, it expresses the developers intent. At first sight I (as a maintainer or another developer in the same team, for example) just want a quick overview about the code and don't care about the details. (How the code creates a server socket, for example.) The second one exactly gives that.
The System.exit side effect could be eliminated: throw an exception with a proper message (and don't forget to set the cause). Then handle the exception in the run method. I often see wrappings like the following:
public void run() {
try {
doRun();
} catch (SomeException e) {
// exception handling here
}
}
This often makes the doRun() simpler and easier to read.
Some other notes:
Socket clientSocket = null;
clientSocket = acceptConnection();
Could be written as
Socket clientSocket = acceptConnection();
I'd modify the createServerSocket() method to return the server socket and the acceptConnection() method to have a ServerSocket parameter. It would remove the temporal coupling between the two methods. It would be impossible to call them in the wrong order.
I've not written any threading code recently but I don't think that creating a separate thread for every client scales well. You might want to use an executor/thread pool there.
"domain": "codereview.stackexchange",
"id": 5826,
"tags": "java, multithreading, exception-handling, socket, comparative-review"
} |
Hermitian properties of Dirac operator | Question: I am trying to understand the Hermiticity of the (massless) Dirac operator in both (flat) Minkowski space and Euclidean space.
Let us define the Dirac operator as $D\!\!\!/=\gamma^\mu D_\mu$, where $D_\mu = \partial_\mu-igA_\mu$ and, in general, $A_\mu$ is a non-Abelian gauge field. For completeness, let us assume the gauge fields are members of SU(2) and that we are working in the Weyl representation for the $\gamma$'s.
I have read in a number of sources on Lattice QCD that in Euclidean space
$D\!\!\!/^\dagger =-D\!\!\!/$ , however I wish to show this.
Generally
$D\!\!\!/=\gamma^0(\partial_0-igA_0)+\gamma^i (\partial_i-igA_i)$.
Then noting that $A_\mu^\dagger=A_\mu$, $\gamma_\mu^\dagger=\gamma_\mu$:
$D\!\!\!/^\dagger=(\partial_0^\dagger+igA_0)\gamma^0+ (\partial_i^\dagger+igA_i)\gamma^i $.
Obviously for this to be true, $\partial_\mu^\dagger=-\partial_\mu$, but why? My understanding was that $\partial_\mu$ really represents $\mathbf{I}_{2x2} \partial_\mu$ for SU(2).
I am then further interested in understanding if in Minkowski space the Dirac operator is Hermitian, anti-Hermitian, or none of the above.
Similar to above, working in the (+,-,-,-) metric, noting in this case $\gamma_0^\dagger=\gamma_0$ and $\gamma_i^\dagger=-\gamma_i$,
$D\!\!\!/=\gamma^0(\partial_0-igA_0)-\gamma^i(\partial_i-igA_i) $,
so
$D\!\!\!/^\dagger=(\partial_0^\dagger+igA_0)\gamma^0 -(\partial_i^\dagger+igA_i)(-\gamma^i)=(-\partial_0+igA_0)\gamma^0 -(-\partial_i+igA_i)(-\gamma^i)=-\big((\partial_0-igA_0)\gamma^0+(\partial^i-igA_i)\gamma^i \big)\neq -D\!\!\!/ ~~\text{or}~~D\!\!\!/ $
Edit After a helpful comment, I see that $\partial_\mu^\dagger=-\partial_\mu$, however I believe I made a mistake in my original Minkowski space derivation, and I don't think it is non-Hermitian generally. Can anyone clarify this?
Answer: Hint: yes, $\partial^\dagger=-\partial$. One way to see this is that $\hat p=-i\partial$, and $\hat p$ is self-adjoint, which means that $(-i\partial)=(-i\partial)^\dagger=+i\partial^\dagger$. Or put it another way, check the definition of $\dagger$:
$$
\langle f,\partial g\rangle=\int f\partial g=-\int \partial f\ g=-\langle \partial f,g\rangle
$$
where I used integration by parts. We usually say that $i\partial$ is hermitian instead of saying that $\partial$ is anti-hermitian, but these are obvioulsy equivalent. You'll hear the former more often though.
With this, I believe you can easily prove that $iD\!\!\!/\ $ is hermitian. For that you'll need to use the fact that the gamma matrices are self-adjoint (meaning $\bar\gamma^\mu=\gamma^\mu$, not $\gamma^{\mu\dagger}=\gamma^\mu$, which is false). If you need more details say so and I'll elaborate.
EDIT
As $iD=i\partial+gA$ is the sum of two terms, it suffices to prove the hermicity properties of both of them independently. I believe you know how to deal with the second term:
$$
g\gamma^\mu A_\mu
$$
is self-adjoint because $\bar\gamma^\mu=\gamma^\mu$.
You can prove that $i\not\partial$ is hermitian by proving that so is $i\bar\psi\not\partial\psi$. This is easier because
$$
(i\bar\psi \not\partial \psi)^\dagger=\overline{i\bar\psi\not\partial\psi}=-i\bar\psi \bar{ \not\partial}\psi
$$
so you've got to prove $\bar{\not\partial}=-\not\partial$ instead of $\not\partial^\dagger=-\not\partial$. Now,
$$
\bar{\not\partial}=\overline{\gamma^\mu\partial_\mu}=\bar\gamma^\mu \partial_\mu^\dagger=-\gamma^\mu\partial_\mu=-\not\partial
$$
where I used $\bar\gamma=\gamma$ and $\partial^\dagger=-\partial$.
I hope its more clear now. You should be able to fill in the details. | {
"domain": "physics.stackexchange",
"id": 29509,
"tags": "quantum-field-theory, dirac-equation, dirac-matrices"
} |
Is entropy change always zero after a quasi-static evolution? | Question: If I am thinking in terms of a idealized, perfect carnot cycle I know that in sum
$$\Delta S_{\mathrm{total}} = 0.\tag{1}$$
But that does not mean that there is no entropy generated during the individual steps
$$\Delta S = \frac{Q}{T}.\tag{2}$$
It is just that in total the entropy terms 'cancel out' such that equation (1) holds.
Yet when I read about other processes sometimes they seem to imply that 'quasi-static' always means $\Delta S =0$.
For example as I was reading about the Jarzynski equality he states at the very beginning of the paper:
When the parameters are changed infinitely slowly along some path γ from an initial point A to a final point B in parameter space, then the total work W performed on the system is equal to the Helmholtz free energy difference ∆F between the initial and final configurations.
$$W = \Delta F \tag{3}$$
Now this means that we have a system which moves (slowly) from point A to point B in the parameter space. The only way I can think of to obtain eq. (3) is to use the definition of the Helmholtz free energy
$$F = E - TS\tag{4}$$
re-write it in terms of
$$\Delta F = \Delta E - T\Delta S\tag{5}$$
and iff $\Delta S =0$ then I obtain $\Delta F = \Delta E$ where additionally $\Delta E = W$ because we are in a thermally isolated system yielding the solution of eq. (3).
So my question: Does the fact that we do a quasi-static, slow evolution automatically imply that $\Delta S =0$.
Because literally in the next paragraph Jarzynski says that if the evolution is not quasi-static we obtain
$$W \geq \Delta F\tag{6}$$
which (assuming my investigation was correct) would mean that here $\Delta S \geq0.$
Answer: The system is not thermally isolated like you think it is. What we actually have is a system always in contact with a heat bath. This means there is heat exchange, but for the slow process we are always in thermal equilibrium with the heat bath. The process is then an isothermal process, not an adiabatic one.
Therefore $\Delta E=W+Q=W+T\Delta S$ so that the change in free energy ends up being $\Delta F=W$. Notice this does not assume $\Delta S=0$. Quasi-static does not mean no change in entropy.
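A concrete check using a quasi-static isothermal compression of an ideal gas (illustrative numbers, not from the paper): $\Delta E = 0$ at constant $T$, yet $\Delta S \neq 0$, and still $W = \Delta F$:

```python
import math

n, R, T = 1.0, 8.314, 300.0   # mol, J/(mol K), K  (illustrative values)
V1, V2 = 2.0e-3, 1.0e-3       # m^3: quasi-static isothermal compression

W = n * R * T * math.log(V1 / V2)   # work done ON the gas (> 0 here)
dE = 0.0                            # ideal-gas energy depends only on T
dS = n * R * math.log(V2 / V1)      # entropy change of the gas (< 0: heat flows out)
dF = dE - T * dS                    # Helmholtz free energy change at constant T

assert abs(W - dF) < 1e-9           # W = dF even though dS != 0
```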
Just to help even further, technically your change in the free energy is fully given by
$$\Delta F=\Delta E-T\Delta S-S\Delta T$$
but since the temperature does not change for the slow process, we get the relationship you give. But for the fast process $\Delta T\neq 0$, which is why you get $W>\Delta F$ (and this analysis technically should be done with differentials). | {
"domain": "physics.stackexchange",
"id": 55460,
"tags": "thermodynamics, entropy, fluctuation-dissipation"
} |
How far extra low frequency electromagnetic waves can go away from Sun? | Question: Since ELF EM (3 Hz to 30 Hz) has a large wavelength (100000 km to 10000 km), i was wondering that how far from Sun these waves can go? And if radio waves that are used in mobile communications can pass through walls and buildings, how deep ELF EM can go inside the Earth?
Answer: There are two points to consider. One is whether the waves are reflected from the surface (or at any interface between layers with different electrical properties) and second, how far waves can propagate once inside a partially conductive medium.
The transmission of a certain frequency into a medium is approximately proportional to its impedance. In turn, the impedance is proportional to $\sqrt{f/\sigma}$, where $\sigma$ is the conductivity of the medium. Thus low frequencies do not necessarily help if the conductivity is high. The penetration depth of the wave in a good conductor is approximately proportional to $\sqrt{1/\sigma f}$, so low frequencies and low conductivity lead to larger penetration.
The conductivity of "earth" (typical soil), might be of order 0.01 S/m. Sea water is more conductive - about 5 S/m.
Whether they act as good conductors is determined by the ratio $\sigma/2\pi \epsilon_0 f$. Taking $f=3-30$ Hz, we see that both soil and seawater are excellent conductors at these frequencies.
In those circumstances, the amplitude transmission fraction is approximately
$$ t \simeq \frac{2\eta}{377},$$
where 377 Ohms is the vacuum impedance and $\eta \simeq \sqrt{2\pi f \mu_0/\sigma}$ is the impedance of the medium. So the transmission fraction is much less than 1 percent for both these materials.
Once in the medium, the signal fades exponentially, with a penetration depth (where that amplitude falls by $1/e$) given approximately by
$$ d = \sqrt{\frac{2}{2\pi f \mu_0 \sigma}}, $$
which is 1-3 km for soil and 40-130 m for seawater.
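Plugging in numbers (a rough sketch, using the conductivities quoted above) reproduces those ranges:

```python
import math

mu0 = 4 * math.pi * 1e-7  # vacuum permeability, H/m

def skin_depth(f_hz, sigma):
    # d = sqrt(2 / (2*pi*f*mu0*sigma)): depth at which the amplitude falls by 1/e
    return math.sqrt(2.0 / (2 * math.pi * f_hz * mu0 * sigma))

for name, sigma in [("soil", 0.01), ("seawater", 5.0)]:
    d_hi, d_lo = skin_depth(30, sigma), skin_depth(3, sigma)
    print(f"{name}: {d_hi:.0f} m (30 Hz) to {d_lo:.0f} m (3 Hz)")
```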
Thus my answer would be, most of the waves are reflected, but for those that penetrate, the depth reached is a few km if not in the ocean and a few hundred metres in the ocean (and indeed ELF is used for communicating with submerged submarines).
A second part to your question concerns propagation of waves through space.
Here, the relevant point is whether the wave frequency is above or below the plasma frequency
$$\nu_p = \left( \frac{e^2 n_e}{4\pi^2 \epsilon_0 m_e}\right)^{1/2} = 9000 \left(\frac{n_e}{{\rm cm}^{-3}}\right)^{1/2}\ {\rm Hz},$$
where $n_e$ is the electron number density and $m_e$ is the electron mass. If waves have a frequency below the plasma frequency then they will be reflected.
The value of $n_e$ varies considerably from place to place. In the solar corona, typical values are $n_e \sim 10^{5}$ to $10^{6}$ cm$^{-3}$. This means that ELF waves generated by the Sun will not propagate through the coronal plasma. In addition, the electron density in the ionosphere of the Earth is typically of the same order of magnitude. So ELF waves from outer space will not penetrate the Earth's ionosphere.
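A quick check of the quoted numbers (a sketch with rounded SI constants): the plasma-frequency formula above reproduces the $9000\sqrt{n_e}$ rule of thumb, and the coronal densities give MHz-scale plasma frequencies, vastly above the 3-30 Hz ELF band:

```python
import math

e, eps0, m_e = 1.602e-19, 8.854e-12, 9.109e-31  # SI units

def plasma_freq_hz(n_e_cm3):
    n_e = n_e_cm3 * 1e6  # convert cm^-3 -> m^-3
    return math.sqrt(e**2 * n_e / (4 * math.pi**2 * eps0 * m_e))

# Prefactor of the ~9000*sqrt(n_e) rule of thumb
print(round(plasma_freq_hz(1)))               # close to 9000 Hz
# Solar corona, n_e ~ 1e5..1e6 cm^-3: several MHz
print(plasma_freq_hz(1e5) / 1e6, plasma_freq_hz(1e6) / 1e6)
```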
Even should the ELF waves be produced near the top of the corona, they will not propagate through the solar wind. The electron density falls with distance from the Sun, but is still $n_e \sim 100$ cm$^{-3}$ at the orbit of the Earth, giving a plasma frequency of 90 kHz. | {
"domain": "physics.stackexchange",
"id": 63335,
"tags": "electromagnetic-radiation"
} |
Does the order of declarations in an inductive type matter? | Question: I was wondering if the order of declarations of an inductive type can matter.
For example in Coq you can define Nat either by:
Inductive Nat :=
| O : Nat
| S : Nat -> Nat.
or
Inductive Nat :=
| S : Nat -> Nat
| O : Nat.
This will perhaps change the order of the parameters in the automatically generated eliminator, but that’s not a big deal.
What I’m wondering is if it is possible to write a declaration like
Inductive typewhereordermatters :=
| cons1 : type1
| cons2 : type2.
where type2 is a dependent type, depending on cons1? (In this case, writing the declarations in the other order would have no meaning, because type2 would refer to cons1, which does not exist yet.)
Answer:
The order does not matter. I cannot think of a case where it would. As Andrej Bauer points out in a comment, if you change the order the result is canonically isomorphic to the original.
One case cannot depend on another case. The elements of the sum represent a choice, so it doesn't make sense that the choice taken depends upon a choice that is not taken. | {
"domain": "cstheory.stackexchange",
"id": 5355,
"tags": "lo.logic, type-theory, coq, dependent-type"
} |
What is the expected frequency of image objects in autoware.ai? | Question:
I am running the autoware.ai v1.13 (built in kinetic) base perception stack (lidar + camera with yolo weights).
Currently I am building it with CUDA and running on an AStuff Spectra PC. The output frequency for the /detection/image_detector/objects topic seems to only be 3Hz. Is this expected? When using CUDA what is the expected frequency of this topic?
Originally posted by msmcconnell on ROS Answers with karma: 268 on 2021-02-09
Post score: 0
Answer:
This depends on a lot of factors. You have to consider not only the upstream nodes/topics (the camera driver, the image rectifier, etc.) but also the framerate that the camera itself is producing. So, in short, there is no "expected" frequency.
Originally posted by Josh Whitley with karma: 1766 on 2021-02-10
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by msmcconnell on 2021-02-10:
Expected might have been a poor choice of words. "Desired" might be more appropriate. I am wondering what the downstream components might desire to operate normally. For example my understanding is that most of the autoware.ai planning stack is designed to run at 10Hz. So I would think the object perception stack should operate at this same frequency. Does the fact that this node is outputting at only 3Hz mean those nodes are incapable of outputting data at the rate expected for planning, or do they have logic to account for lapses in data on this topic?
Comment by Josh Whitley on 2021-02-10:
The perception stack only operates at 10Hz because that is the typical frequency of data from a 3D lidar. If you had a lidar that ran faster, the downstream nodes would run faster as well. The design is almost completely data-driven (I say almost because there are some localization components which are temporal). The question you might want to ask instead is "is the object detection node causing delays", which can be investigated by just looking at the frequency of the input and output topics. If the answer is no, look further upstream to see why the rate is so low.
If your camera is producing data at 30 Hz, I would expect certain nodes like object recognition to run slower, but it's all up to hardware and what you consider safe. | {
"domain": "robotics.stackexchange",
"id": 36065,
"tags": "ros, ros-kinetic, cuda"
} |
Wouldn't a Red-Black tree fix up after insertion mess up the BST ordering? | Question: I've been reading about fixing up after an insertion into a red black tree. (http://web.cse.ohio-state.edu/~lai/2331/0.Red-Black%20Trees.pdf)
The most surprising part is not that there are 6 cases we need to handle after inserting into a red-black tree, e.g. if both the parent and uncle are red, then color the parent and uncle black and the grandparent red, and continue the fix-up treating the grandparent as the newly inserted node. The most surprising part is: wouldn't this blatantly mess up the BST ordering, in which a node x is always greater than its left subtree and less than its right subtree?
In the above link, page 10, the author just replaced the grandparent with the newly inserted node x. Wouldn't this completely mess up the BST ordering? Can someone explain whether BST ordering should be preserved in a red-black tree and why rotation/fix-up after insertion will not have any effect on this ordering?
Answer: A correct implementation of red-black trees will maintain the BST ordering property. Whenever you add a node to the tree or delete a node from the tree you very carefully maintain this property. If you have to modify the structure of the tree then you do so in such a way that maintains the BST property.
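As a small illustration (a sketch, not a full red-black implementation): the rotations used during fix-up are exactly the kind of structure change that preserves in-order traversal, which is the BST ordering:

```python
class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def rotate_left(x):
    """Standard left rotation; returns the new subtree root."""
    y = x.right
    x.right, y.left = y.left, x
    return y

def inorder(n):
    return inorder(n.left) + [n.key] + inorder(n.right) if n else []

# BST 2 -> 4 -> 5 (a right chain); rotating left at 2 rebalances it
root = Node(2, None, Node(4, None, Node(5)))
before = inorder(root)
after = inorder(rotate_left(root))
assert before == after == [2, 4, 5]   # in-order (BST) order unchanged
```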
I suggest you go carefully through the insertion and deletion pseudocode and verify that both operations maintain the BST property. | {
"domain": "cs.stackexchange",
"id": 3506,
"tags": "data-structures, search-trees"
} |
Why are there free electrons in a metal? | Question: Recently, we covered metallic bonding in chemistry, and frankly, I understood little.
I understand that:
Metals bond to each other via metallic bonding
Electricity can flow via free or delocalized electrons
But, I do not understand why the metal atoms turn into ions and delocalize the electrons, why don't the metal atoms stay as atoms?
Answer: When electricity flows, the electrons are considered "free" only because there are more electrons than there should be, and because the transition metals, such as iron, copper, lead, zinc, aluminum, gold etc. are willing to transiently accept and give up electrons from the d-orbitals of their valence shell.
Transition metals are defined in part by their stability in a wide range of "oxidation states"; that is, in several combinations of having too many or too few electrons compared to protons. This is thought to be because of the d orbital in their valence shells. Compared to the s and p orbitals at a particular energy level, electrons in the d shell are in a relatively high energy state, and by that token they have a relatively "loose" connection with their parent atom; it doesn't take much additional energy for these electrons to be ejected from one atom and go zooming through the material, usually to be captured by another atom in the material (though it is possible for the electron to leave the wire entirely). This impetus can be caused by many things, from mechanical impact to chemical reactions to electromagnetic radiation (aka light, though not all of it visible); antennas work to capture radio frequencies, because the light at those frequencies induces an electric current in the wire of the antenna. Now, in the absence of a continuous force keeping the electron in this higher energy state, the electron (and the metal atoms) will naturally settle into a state of equilibrium. Electricity is generated when just such a force is acting on the metal, giving energy to the electrons in the d orbital and forcing them to move in a certain direction.
This impetus can come from many sources, as discussed, be it the movement of a magnet within a coil of wire, or a chemical redox reaction in a battery creating a relative imbalance of electrons at each of two electrodes. The end result is that the electrons, given additional energy from this voltage source, are ejected from their "parent" atom and are captured by another. The "holes" left behind by these electrons are filled by other electrons coming in behind them from further back in the circuit. Thus, the energy provided by the voltage source is carried along the wire by the transfer of electrons.
The analogy typically made is to the flow of water, and it generally holds in many circumstances; the "voltage source" can be thought of as being like a pump or a reservoir, from which water flows through pipes, and the amount of water and the pressure it's placed under (by the pump or by gravity) can be harnessed to do work, before draining back to a lower reservoir. The pipes are similar to wires in many ways; the larger the diameter, and the smoother the inside of the pipe, the more and the faster water can flow through it (equivalent in many ways to the thickness and conductivity of the metal wire), and when under enough pressure (high enough voltage), the pipes will actually expand slightly and hold more water than they would at low pressure (this is a property of wires and other electrical conductors called "capacitance"; the ability to store a charge while under voltage and to discharge it after the voltage is released). | {
"domain": "chemistry.stackexchange",
"id": 16279,
"tags": "bond, metal"
} |
Spin dependence of wavefunction of meson states bound by a certain potential | Question: Lets assume two spin half particles, for example a charm quark and a charm anti-quark are bound by a spherical harmonic oscillator potential to form bound meson states. The possible ground state configurations are vector state, known as the J/psi particle, and the other is the pseudoscalar state, known as the eta_c particle.
My question is, is there any way to consider the spin dependence of the resulting meson states, while writing the bound state radial wavefunction of the meson states?
Answer: This is the sort of thing your instructor has presumably drilled into you doggedly:
The spin-singlet $\eta_c$ is a $^1S_0$ state that should remind you of parapositronium, so with $J^{PC}= 0^{-+}$.
By contrast, the spin-triplet J/ψ is $^3S_1$, reminding you of orthopositronium, so with $J^{PC}= 1^{--}$.
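A mechanical check of those assignments (a sketch using $P=(-1)^{L+1}$ and $C=(-1)^{L+S}$, with $J=S$ for s-wave states):

```python
def jpc(L, S):
    # Quantum numbers J^{PC} for an s-wave quark-antiquark bound state
    J = L + S                  # for L = 0, J = S
    P = (-1) ** (L + 1)
    C = (-1) ** (L + S)
    sign = lambda v: "+" if v > 0 else "-"
    return f"{J}^{{{sign(P)}{sign(C)}}}"

assert jpc(0, 0) == "0^{-+}"   # eta_c, the ^1S_0 state
assert jpc(0, 1) == "1^{--}"   # J/psi, the ^3S_1 state
```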
The reason, as you recall, is $P=(-)^{L+1}$ and $C=(-)^{L+S}$, but here they are both s-wave, so L=0. The spins dictate that the triplet is spin symmetric and the singlet antisymmetric. | {
"domain": "physics.stackexchange",
"id": 85740,
"tags": "particle-physics, harmonic-oscillator, quarks, mesons"
} |
Importing collada meshes in gazebo | Question:
I created a robot mesh in Blender, and now I want to import it in Gazebo. The problem is that, when I insert the model in Gazebo, the shape is good, but it is completely black, altought it had different colours in Blender. To color the object in Blender, I added some new materials.
The result of exporting to collada was just robot.dae file. Maybe there should be some other files representing materials? The part of the SDF that imports mesh looks like this:
<geometry>
<mesh><uri>file://robot.dae</uri></mesh>
</geometry>
...for both visual and collision tag.
Originally posted by bot on Gazebo Answers with karma: 46 on 2013-06-01
Post score: 1
Answer:
Take a look in ~/.gazebo/ogre.log. There are probably errors that say a material cannot be found.
Gazebo will search for materials in all paths listed in GAZEBO_RESOURCE_PATH. It's important to note that for each path listed in GAZEBO_RESOURCE_PATH, gazebo will search in the media/materials/textures subdirectory.
For example, if GAZEBO_RESOURCE_PATH=/usr/local/share/gazebo-1.8. The material texture files should be in /usr/local/share/gazebo-1.8/media/materials/textures.
You should probably just make a model with a self-contained directory structure for all meshes, materials, and SDF files. See here, and these examples.
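A typical self-contained model layout looks like this (file names are illustrative; the `materials/textures` subdirectory is where Gazebo's resource search expects texture files):

```
robot/
  model.config
  model.sdf
  meshes/
    robot.dae
  materials/
    scripts/
      robot.material
    textures/
      robot_diffuse.png
```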
Originally posted by nkoenig with karma: 7676 on 2013-06-06
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 3327,
"tags": "gazebo, sdformat"
} |