anchor stringlengths 0 150 | positive stringlengths 0 96k | source dict |
|---|---|---|
What are the consequences of not freezing layers in transfer learning? | Question: I am trying to fine-tune some code from a Kaggle kernel. The model uses pretrained VGG16 weights (via 'imagenet') for transfer learning. However, I notice there is no freezing of layers, as is recommended in a Keras blog. One approach would be to freeze all of the VGG16 layers and train only the last 5 layers in the code during compilation, for example:
for layer in model.layers[:-5]:
    layer.trainable = False
Supposedly, this will keep the ImageNet weights in the frozen layers and train only the last 5 layers. What are the consequences of not freezing the VGG16 layers?
from keras.models import Sequential, Model, load_model
from keras import applications
from keras import optimizers
from keras.layers import Dropout, Flatten, Dense
img_rows, img_cols, img_channel = 224, 224, 3
base_model = applications.VGG16(weights='imagenet', include_top=False, input_shape=(img_rows, img_cols, img_channel))
add_model = Sequential()
add_model.add(Flatten(input_shape=base_model.output_shape[1:]))
add_model.add(Dense(256, activation='relu'))
add_model.add(Dense(1, activation='sigmoid'))
model = Model(inputs=base_model.input, outputs=add_model(base_model.output))
model.compile(loss='binary_crossentropy', optimizer=optimizers.SGD(lr=1e-4, momentum=0.9),
metrics=['accuracy'])
model.summary()
Answer: I think that the main consequences are the following:
Computation time: If you freeze all the layers but the last 5 ones, you only need to backpropagate the gradient and update the weights of the last 5 layers. In contrast to backpropagating and updating the weights of all the layers of the network, this means a huge decrease in computation time. For this reason, with a fixed time budget, unfreezing the whole network will allow you to see the data for fewer epochs than if you were to update only the last layers' weights.
Accuracy: Of course, by not updating the weights of most of the network you are only optimizing in a subset of the feature space. If your dataset is similar to some subset of the ImageNet dataset, this should not matter a lot; but if it is very different from ImageNet, then freezing will mean a decrease in accuracy. If you have enough computation time, unfreezing everything will allow you to optimize in the whole feature space, allowing you to find better optima.
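To make the slicing idiom from the question concrete, here is a minimal stand-in; the Layer class below is a dummy, not Keras, but model.layers supports the same slicing, and VGG16 with include_top=False exposes 19 layers in Keras.

```python
class Layer:
    """Dummy stand-in for a Keras layer; only the trainable flag matters here."""
    def __init__(self, name):
        self.name = name
        self.trainable = True

# VGG16 with include_top=False has 19 layers in Keras.
layers = [Layer(f"layer_{i}") for i in range(19)]

# The idiom from the question: freeze everything except the last 5 layers.
for layer in layers[:-5]:
    layer.trainable = False

trainable_names = [l.name for l in layers if l.trainable]
```

After the loop, only the last 5 layers (layer_14 through layer_18) still have trainable set, so only their weights would be updated during training.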
To wrap up, I think the main point is to check whether your images are comparable to the ones in ImageNet. If so, I would not unfreeze many layers. Otherwise, unfreeze everything but get ready to wait for a long training time. | {
"domain": "datascience.stackexchange",
"id": 9017,
"tags": "machine-learning, deep-learning, keras, tensorflow, transfer-learning"
} |
Is contact force repulsive or attractive? | Question: Here I'm talking about the contact force when, say, an object is placed on the surface of a table.
I understand that both attractive and repulsive forces operate between the object and the table. So is the contact force attractive or repulsive?
When I think of the normal reaction force as a component of the contact force, should I regard it as attractive or repulsive? And why?
This question might get closed as a duplicate, but the other question is about the association of friction and normal reaction with the contact force; the answers to that question don't really have what I am looking for.
Answer: Repulsive. The contact force comes from the electrons in the outer shells of the atoms in the surfaces of the objects repelling each other, not only electromagnetically, but also because of Pauli's exclusion principle, which forbids identical fermions from occupying the same quantum state and so effectively prevents the electron clouds from overlapping. | {
"domain": "physics.stackexchange",
"id": 77843,
"tags": "newtonian-mechanics, forces, contact-mechanics"
} |
neural network probability output and loss function (example: dice loss) | Question: A commonly used loss function for semantic segmentation is the dice loss. (See the image below; it summarizes how I understand it.)
Using it with a neural network, the output layer can yield labels with a softmax or probabilities with a sigmoid.
But how does the dice loss work with a probability output?
The numerator multiplies each label (1 or 0) of the prediction and the ground truth; both pixels need to be 1 to fall in the green area. What is the result with a probability like 0.7? Does the numerator become a floating-point number (i.e. with ground truth = [1, 1] and predicted = [0.7, 0], would the "green" area of the numerator be 0.7)?
Answer: I think there is a bit of confusion here. The dice coefficient is defined for binary classification. Softmax is used for multiclass classification.
Softmax and sigmoid are both interpreted as probabilities, the difference is in what these probabilities are. For binary classification they are basically equivalent, but for multiclass classification there is a difference.
When you apply sigmoid, you make sure that all output neurons are between 0 and 1.
When you apply softmax, you make sure that all output neurons are between 0 and 1 AND that they sum up to 1.
This means, when the output is sigmoid, the input data can be in several classes at the same time. For softmax, you force the network to pick one of the classes.
The code you posted here is correct for binary classification, but for multiclass, there is some intricacy when it comes to combining the classes.
Since in your example the target consists of two pixels, each labeled either 0 or 1, you are dealing with binary classification and should use pixel-wise sigmoid in the first place, i.e. the probabilities from your model should be e.g. [0.7, 0.8] or something like that.
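To answer the numerator question concretely, here is the soft dice coefficient in plain Python (a hypothetical two-pixel example matching the one in the question): with probabilities, the products in the numerator are simply floating-point numbers.

```python
def soft_dice(pred, target, eps=1e-7):
    """Soft dice coefficient: works for probabilities, not just 0/1 labels.
    eps avoids division by zero when both inputs are all zeros."""
    intersection = sum(p * t for p, t in zip(pred, target))
    return (2.0 * intersection + eps) / (sum(pred) + sum(target) + eps)

# Ground truth [1, 1], prediction [0.7, 0]: the "green area" is 0.7,
# so the numerator is 2 * 0.7 = 1.4 and the coefficient is about 0.519.
score = soft_dice([0.7, 0.0], [1, 1])
```

So yes: with a probability of 0.7 where the ground truth is 1, that pixel contributes 0.7 to the intersection, and the loss stays differentiable.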
Pixel-wise softmax should only be used if each pixel could be in exactly one of many classes; a softmax over all pixels does not make much sense, as it would force the model to pick a single pixel to label as 1. | {
"domain": "datascience.stackexchange",
"id": 7669,
"tags": "neural-network, image-segmentation"
} |
Which oil was used by Robert A. Millikan in his oil drop experiment? | Question: I need to know which oil was used by Robert A. Millikan in his oil drop experiment, i.e. the name of the oil or the chemical formula of the oil molecule.
Answer: The name, as far as I can find, is "highest grade of clock-oil" as mentioned in his 1913 paper (page 111, last line). The density of this oil at $23\, {}^\circ \mathrm C$ is $0.9199$, but the units aren't mentioned.
EDIT: I went to look up an earlier work (1911) and found two other oils of interest that were used, namely "cleaned gas engine oil" of density $0.9041$ and the more volatile mineral oil (machine oil) of density $0.8960$, both measured at $25\, {}^\circ \mathrm C$. I believe these are the oils the Wikipedia entry alluded to. | {
"domain": "physics.stackexchange",
"id": 35320,
"tags": "experimental-physics, electrons, charge, history"
} |
Entropy and reversible and irreversible processes | Question: If we have a system that goes from equilibrium state A to B, since entropy is a state function then for the entropy change we would have:
$\Delta S = S_B - S_A$
Since entropy is a state function, only the end state and the initial state matter. It doesn't matter whether the process bringing the system from A to B is reversible or irreversible.
But for reversible processes we know that $\Delta S = 0$,
and for irreversible processes $\Delta S > 0$.
So how can the entropy change be zero for a reversible process when the system goes to another state, whose multiplicity differs from that of the previous state, so that its entropy must change? And if the same process from A to B is irreversible, is the entropy change now bigger?
In the following link:
Why is the work done in reversible process greater than work done in irreversible process?
Bob D says:
"To elaborate, since entropy is a property of the system, the difference in entropy between two equilibrium states is the same, regardless of the process (reversible or irreversible) that connects the two states. Consequently, in an irreversible process the additional entropy generated in the system must be transferred to the surroundings in order for the change in entropy of the system to be the same."
His statement indicates that regardless of the type of process the entropy change is the same (as it should be). But as I explained above, entropy seems to behave differently in each case.
Answer: In a reversible process, the change in entropy of the system plus surroundings is zero, $\Delta S_{sys} + \Delta S_{surr} = 0$, but not necessarily for each of them individually. The entropy change of the system can be positive, negative, or zero. But, because entropy is a state function, the system's entropy change along a reversible path is equal to its entropy change along any irreversible path between the same two end states. | {
"domain": "physics.stackexchange",
"id": 83652,
"tags": "thermodynamics, statistical-mechanics, entropy, reversibility"
} |
Smallest expression within brackets | Question: I would like to know whether this is a good approach to finding all the expressions within brackets. (I was actually asked to find the smallest bracketed expression; I have omitted the function that checks for the smallest string and for spaces within the expression.)
s = "((a+b*d)*(x/2))*(1+(y+(x-2)/10))"
b = []
j = 0
b.append(j)
stack = []
for i in range(0, len(s)):
    if s[i] == '(':
        j = j + 1
        b.append(j)
        b[j] = ''
    elif s[i] == ')':
        b[j] = b[j] + ')'
        stack.append(b[j])
        j = j - 1
    # 0 is omitted to exclude characters outside of brackets
    for k in range(1, j + 1):
        b[k] = b[k] + s[i]
print(s)
print(stack)
Answer: Your code works, but it is difficult to understand. In particular, it is hard to tell what exactly is stored in b, since this list contains both integers and strings. The name b also does not give a clue what it is used for.
I would suggest a more straightforward method with more descriptive variable names:
s = "((a+b*d)*(x/2))*(1+(y+(x-2)/10))"
opening_bracket_pos = []
stack = []
for i in range(0, len(s)):
    if s[i] == '(':
        opening_bracket_pos.append(i)
    elif s[i] == ')':
        start_pos = opening_bracket_pos.pop()
        stack.append(s[start_pos: i+1])
print(s)
print(stack) | {
"domain": "codereview.stackexchange",
"id": 37286,
"tags": "python, python-3.x"
} |
Identify this black dragonfly with bluish coloration along its flat top, yellow on its mid-body and green on its head | Question: I took these photos around noon in late July in Hsinchu County, Taiwan. This particular stick in this particular pond is visited by many dragonfly species. I have seen this kind of dragonfly very regularly, in different nearby ponds; they seem to me to be far more active than other dragonfly species in the same area at the same time.
Its body is about 5 cm long and mostly black. However, it has light "powder blue" to white markings on the flat top surface of its "tail" and "mid" sections and on the front part of its body before the head, and it seems to have some smaller yellow bands on the sides of its fore body and greenish markings on its head.
Wings are mostly transparent but are tinged orange or brown near the body.
Question: Is it possible to identify this particular species of dragonfly?
Answer: This species of dragonfly is Brachydiplax chalybea, colloquially known as the blue dasher. | {
"domain": "biology.stackexchange",
"id": 10817,
"tags": "species-identification, entomology"
} |
Copper (II) Acetate from 5% vinegar + salt, electrochemically | Question: I'm trying to create copper (II) acetate crystals, but in these times of coronavirus it's difficult to come by hydrogen peroxide. I could be patient, but I'm not, so I'm trying to make it electrochemically. I have 5% vinegar, lots of copper scrap, and an adjustable power supply. Unfortunately I can barely get any current going, so I've thought of adding salt, regular NaCl. I'm curious what effect this will have on the final outcome, though. Balancing equations is something I struggle with, but I'd really like to learn the chemistry here. Will the salt interfere with or alter the growth of the copper acetate crystals? Ultimately I'm trying to make calcium copper acetate crystals; I've already made the calcium acetate.
Answer: To prepare copper acetate in the absence of H2O2, employ a known method from hydrometallurgy for processing copper ore: aqueous ammonia, air (a source of oxygen), and a small amount of salt (acting as an electrolyte for this, in part, spontaneous electrochemical reaction detailed below). This results in tetraammine copper hydroxide. The latter exists only in solution and upon evaporation yields CuO (and possibly some Cu2O as well; see this old patent). Add an acid of choice (like vinegar, a source of acetate) to convert the CuO into copper acetate.
Related copper chemistry with ammonia and oxygen:
Cited half-reactions:
$\ce{1/2 O2 + H2O + 2 e- -> 2 OH-}$ (cathodic reduction of O2 at surface of the Copper)
And, at the Copper anode, the formation of the complex:
$\ce{Cu + 4 NH3 + 2 H2O -> [Cu(NH3)4(H2O)2](2+) + 2 e-}$ (anodic dissolution of Cu by a complexing agent)
With an overall reaction:
$\ce{Cu + 4 NH3 + 1/2 O2 + 3 H2O -> [Cu(NH3)4(H2O)2](2+) + 2 OH-}$
Also, some interesting cited standard chemical reactions, where copper oscillates between cuprous and cupric states as noted by the source below:
$\ce{2 Cu + 4 NH3 + 1/2 O2 + H2O -> 2 [Cu(NH3)2]OH}$
$\ce{2[Cu(NH3)2]OH + 4 NH3 (aq) + 1/2 O2 + H2O -> 2 [Cu(NH3)4](OH)2}$
$\ce{Cu + [Cu(NH3)4](OH)2 <--> 2 [Cu(NH3)2]OH}$
Reference: "Kinetics and Mechanism of Copper Dissolution In Aqueous Ammonia" and a related work detailing reactions: ‘Copper Dissolution in Ammonia Solutions: Identification of the Mechanism at Low Overpotentials’, which is also a fully available PDF.
An alternate preparation would be to follow one of the paths to copper patina (see this site, which discusses three paths for the home chemist, one of which is an embodiment of the ammonia prep); with the addition of vinegar, the resulting basic copper carbonate would be converted to copper acetate. | {
"domain": "chemistry.stackexchange",
"id": 13463,
"tags": "electrochemistry, home-experiment, crystal-structure"
} |
Install CCNY Computer Vision Stack | Question:
I'd like to install the CCNY Computer Vision Stack.
(Link: http://www.ros.org/wiki/ccny_vision#Installing)
But I have no ros/stacks folder, so where should I install the stack? I only have a folder "ros-workspace" in my home directory. Should I create a "stacks" folder there?
I am new at this topic, so I hope you can help me.
Originally posted by Janina on ROS Answers with karma: 11 on 2012-05-13
Post score: 0
Answer:
Your folder ~/ros-workspace is a perfect place for any stack. As long as a folder is listed in the environment variable ROS_PACKAGE_PATH it can be anywhere on your system and ROS will find all packages inside this folder.
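For example (a sketch, assuming a bash setup; ~/ros-workspace is the folder from the question):

```shell
# Typically added to ~/.bashrc so every new terminal picks it up.
export ROS_PACKAGE_PATH=~/ros-workspace:$ROS_PACKAGE_PATH
```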
Originally posted by Stephan with karma: 1924 on 2012-05-13
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 9378,
"tags": "ros"
} |
Adaptive filtering | Question: I want to mention upfront that I'm not very experienced in this field.
I have a signal $u(k)$ that I get from a black box simulation (sampled irregularly). The signal looks like this:
The blue signal has small high-frequency oscillations that I want to remove. The orange curve was obtained by fitting a polynomial to the whole dataset.
My goal is to obtain a smooth estimate of the blue signal in real time (streaming data) and then compute its derivative. Is it possible to obtain something as smooth as the orange curve?
If so, can you suggest some commonly used methods?
Answer: You have a few options:
Online Least Squares
You may use Sequential Least Squares (MATLAB code available in the link).
This will give you exactly what you got using the polynomial model.
Since it is a polynomial model, you will be able to calculate the derivative online as well.
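As a sketch of the online polynomial idea (not the exact Sequential Least Squares recursion): the hypothetical helper below simply refits a low-order polynomial on a trailing window of the irregularly sampled stream, which yields both a smoothed value and the derivative at the newest sample.

```python
import numpy as np

def smooth_and_derivative(t, u, window=11, order=2):
    """Causal local polynomial fit: return the smoothed value and the
    derivative at the most recent sample, using only past samples.
    Irregular sampling is fine because the fit uses the timestamps."""
    t_w = np.asarray(t[-window:], dtype=float)
    u_w = np.asarray(u[-window:], dtype=float)
    p = np.poly1d(np.polyfit(t_w, u_w, order))
    return p(t_w[-1]), p.deriv()(t_w[-1])
```

In a streaming setting you would call this once per new sample; the window length trades smoothness against lag, just as with the methods listed below.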
Savitzky-Golay Filter
These filters, with predefined coefficients, approximate the polynomial least-squares solution locally.
They also have a variant that calculates the derivative online.
Since they are local, they are even more sensitive to outliers than the global least squares.
Kalman Filter
You may use the Kalman Filter for online estimation of both the smooth curve and its local derivative. | {
"domain": "dsp.stackexchange",
"id": 12161,
"tags": "lowpass-filter, denoising, adaptive-filters"
} |
Using XSLT 3.0 to extract information from real-world HTML and produce JSON | Question: For work, I extract information from HTML and XML sources to save in databases. The objective is to generate JSON representing the source document's information and its relationships in order to 1) extract key information for database columns, like the title of a court case, and 2) to save the whole resulting JSON structure in a PostgreSQL JSONB column for possible later reference.
Normally this is done piecemeal using XPaths with the lxml.etree or lxml.html libraries in Python, which our system is written in, but we wanted to try solutions using XSLT 3.0 and XPath 3.1, which those libraries do not support.
I'm mainly asking for feedback on the XSLT, but for context, this is what's happening:
The solution we hit on is to invoke the saxon-js processor for nodejs from within Python with check_output(), capture the output, and load it as JSON.
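The invocation looks roughly like this (a sketch: the xslt3 CLI ships with the saxon-js npm package, but the paths and the exact flags here are placeholders for your setup, and the stylesheet is assumed to emit JSON text):

```python
import json
import subprocess

def run_xslt(source_path: str, stylesheet_path: str) -> str:
    # Hypothetical saxon-js invocation from Python; adjust flags to your setup.
    return subprocess.check_output(
        ["xslt3", f"-xsl:{stylesheet_path}", f"-s:{source_path}", "-ns:##html5"],
        text=True,
    )

def load_result(output: str) -> dict:
    # The stylesheet's JSON output becomes a Python dict for the database layer.
    return json.loads(output)
```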
This is the input. It's a version of https://pubapps2.usitc.gov/337external/4921 run through Python's lxml.html.clean, parsed, and output again with html.tostring(root, method='xml') in order to make it parsable with XSLT. saxon-js is being run with the -ns:##html5 option to simplify the XPath namespaces.
<html:div xmlns:html="http://www.w3.org/1999/xhtml">
<!--2.5-->
<html:meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1"/>
Investigation Detail
<html:meta name="viewport" content="width=device-width, initial-scale=1.0"/>
<html:link rel="shortcut icon" href="/337external/static/images/favicon.ico" type="image/x-icon"/>
<html:link rel="apple-touch-icon" href="/337external/static/images/apple-touch-icon.png"/>
<html:link rel="apple-touch-icon" sizes="114x114" href="/337external/static/images/apple-touch-icon-retina.png"/>
<!-- <link rel="stylesheet" href="/337external/static/css/main.css" type="text/css">-->
<html:div id="page-wrap">
<html:div id="header">
<html:div id="agencytitle">
<html:h1>United States<html:br/>
International Trade Commission</html:h1>
</html:div>
<html:div id="agencytitle" style="width:inherit;">
<html:h1 style="font-size: xx-large;">337Info</html:h1>
</html:div>
<html:div id="topnav">
<html:ul>
<html:li><html:a href="/337external/">Home</html:a></html:li>
<html:li class="help"><html:a href="/337external/advanced">Advanced Search</html:a></html:li>
<html:li class="help"><html:a href="http://www.usitc.gov/documents/337Info_FAQ.pdf" target="_blank">FAQ</html:a></html:li>
<html:li class="help"><html:a href="/337external/help">Help</html:a></html:li>
<html:li class="help"><html:a href="http://www.usitc.gov/documents/337Info_tutorial.pptx" target="_blank">Tutorial</html:a></html:li>
<html:li class="help"><html:a href="mailto:337InfoHelp@usitc.gov?subject=337Info%20External%20help">Contact Us</html:a></html:li>
<html:li class="help"><html:a href="/337external/disclaimer">Disclaimer</html:a></html:li>
</html:ul>
</html:div>
</html:div>
<html:div id="inside">
<html:div id="detail-main-content">
<html:div id="main-filter-content"><html:h2> Summary Investigation Information</html:h2></html:div>
<html:div id="main-detail-window">
<html:div style="float:right">
<html:p>
<html:span style="font-weight:bolder">Investigation Number:</html:span>
<html:span style="margin-left:.5em;">337-TA-1185</html:span>
</html:p>
<html:p>
<html:span style="font-weight:bolder">Investigation Type:</html:span>
<html:span style="margin-left:.5em;"> Violation</html:span>
</html:p>
<html:p>
<html:span style="font-weight:bolder">Docket Number:</html:span>
<html:span style="margin-left:.5em;"> 3418</html:span>
</html:p>
<html:p>
<html:span style="font-weight:bolder">Investigation Status:</html:span>
<html:span style="margin-left:.5em;"> Terminated</html:span>
</html:p>
</html:div>
<html:div id="titlecontainer">
<html:p><html:span style="font-weight:bolder">Title (In the Matter of Certain):</html:span></html:p>
<html:h2>Certain Smart Thermostats, Smart HVAC Systems, and Components Thereof; Inv. No. 337-TA-1185</html:h2>
</html:div>
</html:div>
<!--Start right-container -->
<html:div id="right-filter">
<html:div id="right-filter-content-detail2"><html:h2><html:img src="/337external/static/images/scheduled.png" width="24" height="20" alt="People Icon"/>Procedural History</html:h2></html:div>
<!-- START of Prcedural history -->
<html:div id="returned-detail-content">
<html:p><html:span class="dateTitle">Complaint Filed</html:span>
<html:span class="Returneddate">10/22/2019</html:span></html:p>
<html:p><html:span class="dateTitle">Date of Institution</html:span>
<html:span class="Returneddate">11/27/2019</html:span></html:p>
<html:p><html:span class="mainTitle">Markman Hearing Dates</html:span></html:p>
<html:p><html:span class="SubTitle">Start </html:span>
<html:span class="Returneddate"/></html:p>
<html:p><html:span class="SubTitle">End </html:span>
<html:span class="Returneddate"/></html:p>
<html:p><html:span class="mainTitle"> Evidentiary Hearing Dates</html:span></html:p>
<html:p><html:span class="SubTitle"> Scheduled Start </html:span>
<html:span class="Returneddate">07/21/2020</html:span></html:p>
<html:p><html:span class="SubTitle"> Scheduled End </html:span>
<html:span class="Returneddate">07/24/2020</html:span></html:p>
<html:p><html:span class="SubTitle"> Actual Start </html:span>
<html:span class="Returneddate">11/16/2020</html:span></html:p>
<html:p><html:span class="SubTitle"> Actual End </html:span>
<html:span class="Returneddate">11/19/2020</html:span></html:p>
<html:p><html:span class="mainTitle"> Target Date</html:span>
<html:span class="Returneddate">08/20/2021</html:span></html:p>
<html:p><html:span class="mainTitle"> Final ID On Violation</html:span></html:p>
<html:p><html:span class="SubTitle"> Due Date</html:span>
<html:span class="Returneddate">04/20/2021</html:span></html:p>
<html:p><html:span class="SubTitle"> Issue Date</html:span>
<html:span class="Returneddate">04/20/2021</html:span></html:p>
<html:p><html:span class="mainTitle"> Non Final (Terminating) ID Issued </html:span>
<html:span class="Returneddate"/></html:p>
<html:p><html:span class="mainTitle"> Final Determination of No Violation</html:span>
<html:span class="Returneddate">07/20/2021</html:span></html:p>
<html:p><html:span class="mainTitle"> Final Determination of Violation</html:span>
<html:span class="Returneddate"/></html:p>
<html:p><html:span class="mainTitle"> Termination Date</html:span>
<html:span class="Returneddate">07/20/2021</html:span></html:p>
</html:div>
<html:div> </html:div>
<!-- END of Prcedural history -->
<!-- START of invUnfairAct -->
<html:div id="returned-detail-content2">
<html:div id="right-filter-content-detailnested"><html:h3>Unfair Act Alleged<html:span class="expand float_right">+</html:span></html:h3></html:div>
<html:table id="investigations" width="100%" class="visible_none" cellpadding="0px" cellspacing="0px">
<html:tbody><html:tr>
<html:th style="width:65%">Type </html:th>
<html:th>Active - Inactive</html:th>
</html:tr>
<html:tr>
<html:td>Patent Infringement</html:td>
<html:td>11/22/2019 - </html:td>
</html:tr>
</html:tbody></html:table>
</html:div>
<html:div> </html:div>
<!-- END of invUnfairAct -->
<!-- START of IP -->
<html:div id="returned-detail-content2">
<html:div id="right-filter-content-detailnested"><html:h3>Patent Number(s) <html:span class="expand float_right">+</html:span></html:h3> </html:div>
<html:table id="investigations" class="visible_none" width="100%" cellpadding="0px" cellspacing="0px">
<html:tbody><html:tr>
<html:th>Number </html:th>
<html:th>Active - Inactive</html:th>
</html:tr>
<html:tr>
<html:td>10,018,371</html:td>
<html:td>11/22/2019 - 07/20/2021</html:td>
</html:tr>
<html:tr>
<html:td>8,131,497</html:td>
<html:td>11/22/2019 - 07/20/2021</html:td>
</html:tr>
<html:tr>
<html:td>8,432,322</html:td>
<html:td>11/22/2019 - 07/20/2021</html:td>
</html:tr>
<html:tr>
<html:td>8,498,753</html:td>
<html:td>11/22/2019 - 12/15/2020</html:td>
</html:tr>
</html:tbody></html:table>
<html:div id="right-filter-content-detailnested"><html:h3>HTS Number(s) <html:span class="expand float_right">+</html:span></html:h3></html:div>
<html:table id="investigations" class="visible_none" width="100%" cellpadding="0" cellspacing="0">
<html:tbody><html:tr>
<html:th>Number </html:th>
<html:th>Category Basket</html:th>
</html:tr>
<html:tr>
<html:td>90321000</html:td>
<html:td>Consumer Electronics Products</html:td>
</html:tr>
<html:tr>
<html:td>90322000</html:td>
<html:td>Consumer Electronics Products</html:td>
</html:tr>
<html:tr>
<html:td>90328960</html:td>
<html:td>Consumer Electronics Products</html:td>
</html:tr>
</html:tbody></html:table>
</html:div>
<html:div> </html:div>
<!-- END of IP -->
<!-- START of TEO -->
<!-- END of TEO -->
<!-- START of Remand -->
<!-- END of Remand -->
</html:div>
<!--End right-container -->
<!--
Start landing-container
-->
<html:div id="detail-left-filter">
<html:div id="left-filter-content"><html:h2><html:img src="/337external/static/images/participants_icon.png" width="24" height="20" alt="People Icon"/> Participant Information</html:h2></html:div>
<html:div id="returned-detail-content3">
<html:div id="left-filter-content">
<html:h3>
<html:div class="float_right">
<html:div id="active">☑ <html:span style="font-size:0.6em; padding-right:5px"> - Active </html:span></html:div>
<html:div id="inactive">☒<html:span style="font-size:0.6em"> - Inactive </html:span></html:div>
</html:div>
Complainant Information
</html:h3>
</html:div>
<html:table id="investigations" width="100%" cellpadding="0" cellspacing="0">
<html:tbody><html:tr>
<html:th>Name - City/State/Country</html:th>
<html:th>Lead Counsel for Service</html:th>
<html:th>Active Inactive Date</html:th>
</html:tr>
<html:tr>
<html:td>
<html:div id="active"> ☑ </html:div>
<html:div id="name">
EcoFactor, Inc. - Palo Alto , CA , United States of America
</html:div>
</html:td>
<html:td>Russ August & Kabat</html:td>
<html:td>11/22/2019 - </html:td>
</html:tr>
</html:tbody></html:table>
<html:div id="left-filter-content">
<html:h3>
<html:div class="float_right">
<html:div id="active">☑ <html:span style="font-size:0.6em; padding-right:5px"> - Active </html:span></html:div>
<html:div id="inactive">☒<html:span style="font-size:0.6em"> - Inactive </html:span></html:div>
</html:div>
Respondent Information
</html:h3>
</html:div>
<html:div id="bbGrid-subgrid">
<html:div class="bbGrid-container">
<html:table id="investigations" class="bbGrid-grid table table-bordered table-condensed" width="100%" cellpadding="0" cellspacing="0">
<html:thead class="bbGrid-grid-head">
<html:tr class="bbGrid-grid-head-holder">
<html:th class="icon-plus-main" style="width:15px">+</html:th>
<html:th rowspan="2">Name - City/State/Country</html:th>
<html:th rowspan="2">Lead Counsel for Service</html:th>
<html:th rowspan="2">Active Inactive Date</html:th>
</html:tr>
</html:thead>
<html:tbody>
<html:tr class="bbGrid-row">
<html:td class="bbGrid-subgrid-control">
<html:span class="icon-plus">+</html:span>
</html:td>
<html:td> <html:div id="active"> ☑</html:div> <html:div id="name">Alarm.com Holdings, Inc. - Tysons , VA , United States</html:div>
</html:td>
<html:td>
Foster, Murphy, Altman & Nickel, PC
</html:td>
<html:td>10/22/2019
-
</html:td>
</html:tr>
<html:tr class="bbGrid-subgrid-row visible_none">
<html:td/>
<html:td colspan="4">
<html:div class="bbGrid-container">
<html:table class="bbGrid-grid table table-bordered table-condensed" width="100%" cellpadding="0" cellspacing="0">
<html:thead class="bbGrid-grid-head">
<html:tr class="bbGrid-grid-head-holder">
<html:th>Disposition, Date</html:th>
<html:th>Disposition by Unfair Acts, Date</html:th>
<html:th>Remedial Orders Issued, Issue Date, Status, Change Date</html:th>
</html:tr>
</html:thead>
<html:tbody>
<html:tr class="bbGrid-row" bgcolor="white">
<html:td> No Violation Found 07/20/2021</html:td>
<html:td> 07/20/2021</html:td>
<html:td> </html:td>
</html:tr>
</html:tbody>
</html:table>
<html:table class="bbGrid-grid table table-bordered table-condensed" width="100%" cellpadding="0" cellspacing="0">
<html:thead class="bbGrid-grid-head">
<html:tr class="bbGrid-grid-head-holder">
<html:th>Customs Enforcement Desc</html:th>
<html:th>Forum</html:th>
<html:th>Receipt Customs Letter</html:th>
<html:th>Seizure Forfeiture Order</html:th>
<html:th>Documents</html:th>
</html:tr>
</html:thead>
<html:tbody>
</html:tbody>
</html:table>
</html:div>
</html:td>
</html:tr>
<html:tr class="bbGrid-row">
<html:td class="bbGrid-subgrid-control">
<html:span class="icon-plus">+</html:span>
</html:td>
<html:td> <html:div id="active"> ☑</html:div> <html:div id="name">Alarm.com Incorporated - Tysons , VA , United States</html:div>
</html:td>
<html:td>
Foster, Murphy, Altman & Nickel, PC
</html:td>
<html:td>11/22/2019
-
</html:td>
</html:tr>
<html:tr class="bbGrid-subgrid-row visible_none">
<html:td/>
<html:td colspan="4">
<html:div class="bbGrid-container">
<html:table class="bbGrid-grid table table-bordered table-condensed" width="100%" cellpadding="0" cellspacing="0">
<html:thead class="bbGrid-grid-head">
<html:tr class="bbGrid-grid-head-holder">
<html:th>Disposition, Date</html:th>
<html:th>Disposition by Unfair Acts, Date</html:th>
<html:th>Remedial Orders Issued, Issue Date, Status, Change Date</html:th>
</html:tr>
</html:thead>
<html:tbody>
<html:tr class="bbGrid-row" bgcolor="white">
<html:td> No Violation Found 07/20/2021</html:td>
<html:td> 07/20/2021</html:td>
<html:td> </html:td>
</html:tr>
</html:tbody>
</html:table>
<html:table class="bbGrid-grid table table-bordered table-condensed" width="100%" cellpadding="0" cellspacing="0">
<html:thead class="bbGrid-grid-head">
<html:tr class="bbGrid-grid-head-holder">
<html:th>Customs Enforcement Desc</html:th>
<html:th>Forum</html:th>
<html:th>Receipt Customs Letter</html:th>
<html:th>Seizure Forfeiture Order</html:th>
<html:th>Documents</html:th>
</html:tr>
</html:thead>
<html:tbody>
</html:tbody>
</html:table>
</html:div>
</html:td>
</html:tr>
<html:tr class="bbGrid-row">
<html:td class="bbGrid-subgrid-control">
<html:span class="icon-plus">+</html:span>
</html:td>
<html:td><html:div id="inactive">☒</html:div> <html:div id="name">Daikin America, Inc. - Orangeburg , NY , United States</html:div>
</html:td>
<html:td>
Latham & Watkins LLP
</html:td>
<html:td>11/22/2019
-
07/01/2020
</html:td>
</html:tr>
<html:tr class="bbGrid-subgrid-row visible_none">
<html:td/>
<html:td colspan="4">
<html:div class="bbGrid-container">
<html:table class="bbGrid-grid table table-bordered table-condensed" width="100%" cellpadding="0" cellspacing="0">
<html:thead class="bbGrid-grid-head">
<html:tr class="bbGrid-grid-head-holder">
<html:th>Disposition, Date</html:th>
<html:th>Disposition by Unfair Acts, Date</html:th>
<html:th>Remedial Orders Issued, Issue Date, Status, Change Date</html:th>
</html:tr>
</html:thead>
<html:tbody>
<html:tr class="bbGrid-row" bgcolor="white">
<html:td> Settlement 07/01/2020</html:td>
<html:td> 07/01/2020</html:td>
<html:td> </html:td>
</html:tr>
</html:tbody>
</html:table>
<html:table class="bbGrid-grid table table-bordered table-condensed" width="100%" cellpadding="0" cellspacing="0">
<html:thead class="bbGrid-grid-head">
<html:tr class="bbGrid-grid-head-holder">
<html:th>Customs Enforcement Desc</html:th>
<html:th>Forum</html:th>
<html:th>Receipt Customs Letter</html:th>
<html:th>Seizure Forfeiture Order</html:th>
<html:th>Documents</html:th>
</html:tr>
</html:thead>
<html:tbody>
</html:tbody>
</html:table>
</html:div>
</html:td>
</html:tr>
<html:tr class="bbGrid-row">
<html:td class="bbGrid-subgrid-control">
<html:span class="icon-plus">+</html:span>
</html:td>
<html:td><html:div id="inactive">☒</html:div> <html:div id="name">Daikin Industries, Ltd. - Osaka , , Japan</html:div>
</html:td>
<html:td>
Latham & Watkins LLP
</html:td>
<html:td>11/22/2019
-
07/01/2020
</html:td>
</html:tr>
<html:tr class="bbGrid-subgrid-row visible_none">
<html:td/>
<html:td colspan="4">
<html:div class="bbGrid-container">
<html:table class="bbGrid-grid table table-bordered table-condensed" width="100%" cellpadding="0" cellspacing="0">
<html:thead class="bbGrid-grid-head">
<html:tr class="bbGrid-grid-head-holder">
<html:th>Disposition, Date</html:th>
<html:th>Disposition by Unfair Acts, Date</html:th>
<html:th>Remedial Orders Issued, Issue Date, Status, Change Date</html:th>
</html:tr>
</html:thead>
<html:tbody>
<html:tr class="bbGrid-row" bgcolor="white">
<html:td> Settlement 07/01/2020</html:td>
<html:td> 07/01/2020</html:td>
<html:td> </html:td>
</html:tr>
</html:tbody>
</html:table>
<html:table class="bbGrid-grid table table-bordered table-condensed" width="100%" cellpadding="0" cellspacing="0">
<html:thead class="bbGrid-grid-head">
<html:tr class="bbGrid-grid-head-holder">
<html:th>Customs Enforcement Desc</html:th>
<html:th>Forum</html:th>
<html:th>Receipt Customs Letter</html:th>
<html:th>Seizure Forfeiture Order</html:th>
<html:th>Documents</html:th>
</html:tr>
</html:thead>
<html:tbody>
</html:tbody>
</html:table>
</html:div>
</html:td>
</html:tr>
<html:tr class="bbGrid-row">
<html:td class="bbGrid-subgrid-control">
<html:span class="icon-plus">+</html:span>
</html:td>
<html:td><html:div id="inactive">☒</html:div> <html:div id="name">Daikin North America LLC - Houston , TX , United States</html:div>
</html:td>
<html:td>
Latham & Watkins LLP
</html:td>
<html:td>11/22/2019
-
07/01/2020
</html:td>
</html:tr>
<html:tr class="bbGrid-subgrid-row visible_none">
<html:td/>
<html:td colspan="4">
<html:div class="bbGrid-container">
<html:table class="bbGrid-grid table table-bordered table-condensed" width="100%" cellpadding="0" cellspacing="0">
<html:thead class="bbGrid-grid-head">
<html:tr class="bbGrid-grid-head-holder">
<html:th>Disposition, Date</html:th>
<html:th>Disposition by Unfair Acts, Date</html:th>
<html:th>Remedial Orders Issued, Issue Date, Status, Change Date</html:th>
</html:tr>
</html:thead>
<html:tbody>
<html:tr class="bbGrid-row" bgcolor="white">
<html:td> Settlement 07/01/2020</html:td>
<html:td> 07/01/2020</html:td>
<html:td> </html:td>
</html:tr>
</html:tbody>
</html:table>
<html:table class="bbGrid-grid table table-bordered table-condensed" width="100%" cellpadding="0" cellspacing="0">
<html:thead class="bbGrid-grid-head">
<html:tr class="bbGrid-grid-head-holder">
<html:th>Customs Enforcement Desc</html:th>
<html:th>Forum</html:th>
<html:th>Receipt Customs Letter</html:th>
<html:th>Seizure Forfeiture Order</html:th>
<html:th>Documents</html:th>
</html:tr>
</html:thead>
<html:tbody>
</html:tbody>
</html:table>
</html:div>
</html:td>
</html:tr>
<html:tr class="bbGrid-row">
<html:td class="bbGrid-subgrid-control">
<html:span class="icon-plus">+</html:span>
</html:td>
<html:td> <html:div id="active"> ☑</html:div> <html:div id="name">Ecobee Ltd. - Toronto , , Canada</html:div>
</html:td>
<html:td>
Venable LLP
</html:td>
<html:td>11/22/2019
-
</html:td>
</html:tr>
<html:tr class="bbGrid-subgrid-row visible_none">
<html:td/>
<html:td colspan="4">
<html:div class="bbGrid-container">
<html:table class="bbGrid-grid table table-bordered table-condensed" width="100%" cellpadding="0" cellspacing="0">
<html:thead class="bbGrid-grid-head">
<html:tr class="bbGrid-grid-head-holder">
<html:th>Disposition, Date</html:th>
<html:th>Disposition by Unfair Acts, Date</html:th>
<html:th>Remedial Orders Issued, Issue Date, Status, Change Date</html:th>
</html:tr>
</html:thead>
<html:tbody>
<html:tr class="bbGrid-row" bgcolor="white">
<html:td> No Violation Found 07/20/2021</html:td>
<html:td> 07/20/2021</html:td>
<html:td> </html:td>
</html:tr>
</html:tbody>
</html:table>
<html:table class="bbGrid-grid table table-bordered table-condensed" width="100%" cellpadding="0" cellspacing="0">
<html:thead class="bbGrid-grid-head">
<html:tr class="bbGrid-grid-head-holder">
<html:th>Customs Enforcement Desc</html:th>
<html:th>Forum</html:th>
<html:th>Receipt Customs Letter</html:th>
<html:th>Seizure Forfeiture Order</html:th>
<html:th>Documents</html:th>
</html:tr>
</html:thead>
<html:tbody>
</html:tbody>
</html:table>
</html:div>
</html:td>
</html:tr>
<html:tr class="bbGrid-row">
<html:td class="bbGrid-subgrid-control">
<html:span class="icon-plus">+</html:span>
</html:td>
<html:td> <html:div id="active"> ☑</html:div> <html:div id="name">Ecobee, Inc. - Toronto , , Canada</html:div>
</html:td>
<html:td>
Venable LLP
</html:td>
<html:td>11/22/2019
-
</html:td>
</html:tr>
<html:tr class="bbGrid-subgrid-row visible_none">
<html:td/>
<html:td colspan="4">
<html:div class="bbGrid-container">
<html:table class="bbGrid-grid table table-bordered table-condensed" width="100%" cellpadding="0" cellspacing="0">
<html:thead class="bbGrid-grid-head">
<html:tr class="bbGrid-grid-head-holder">
<html:th>Disposition, Date</html:th>
<html:th>Disposition by Unfair Acts, Date</html:th>
<html:th>Remedial Orders Issued, Issue Date, Status, Change Date</html:th>
</html:tr>
</html:thead>
<html:tbody>
<html:tr class="bbGrid-row" bgcolor="white">
<html:td> No Violation Found 07/20/2021</html:td>
<html:td> 07/20/2021</html:td>
<html:td> </html:td>
</html:tr>
</html:tbody>
</html:table>
<html:table class="bbGrid-grid table table-bordered table-condensed" width="100%" cellpadding="0" cellspacing="0">
<html:thead class="bbGrid-grid-head">
<html:tr class="bbGrid-grid-head-holder">
<html:th>Customs Enforcement Desc</html:th>
<html:th>Forum</html:th>
<html:th>Receipt Customs Letter</html:th>
<html:th>Seizure Forfeiture Order</html:th>
<html:th>Documents</html:th>
</html:tr>
</html:thead>
<html:tbody>
</html:tbody>
</html:table>
</html:div>
</html:td>
</html:tr>
<html:tr class="bbGrid-row">
<html:td class="bbGrid-subgrid-control">
<html:span class="icon-plus">+</html:span>
</html:td>
<html:td> <html:div id="active"> ☑</html:div> <html:div id="name">Google LLC - Mountain View , CA , United States</html:div>
</html:td>
<html:td>
WHITE & CASE LLP
</html:td>
<html:td>11/22/2019
-
</html:td>
</html:tr>
<html:tr class="bbGrid-subgrid-row visible_none">
<html:td/>
<html:td colspan="4">
<html:div class="bbGrid-container">
<html:table class="bbGrid-grid table table-bordered table-condensed" width="100%" cellpadding="0" cellspacing="0">
<html:thead class="bbGrid-grid-head">
<html:tr class="bbGrid-grid-head-holder">
<html:th>Disposition, Date</html:th>
<html:th>Disposition by Unfair Acts, Date</html:th>
<html:th>Remedial Orders Issued, Issue Date, Status, Change Date</html:th>
</html:tr>
</html:thead>
<html:tbody>
<html:tr class="bbGrid-row" bgcolor="white">
<html:td> No Violation Found 07/20/2021</html:td>
<html:td> 07/20/2021</html:td>
<html:td> </html:td>
</html:tr>
</html:tbody>
</html:table>
<html:table class="bbGrid-grid table table-bordered table-condensed" width="100%" cellpadding="0" cellspacing="0">
<html:thead class="bbGrid-grid-head">
<html:tr class="bbGrid-grid-head-holder">
<html:th>Customs Enforcement Desc</html:th>
<html:th>Forum</html:th>
<html:th>Receipt Customs Letter</html:th>
<html:th>Seizure Forfeiture Order</html:th>
<html:th>Documents</html:th>
</html:tr>
</html:thead>
<html:tbody>
</html:tbody>
</html:table>
</html:div>
</html:td>
</html:tr>
<html:tr class="bbGrid-row">
<html:td class="bbGrid-subgrid-control">
<html:span class="icon-plus">+</html:span>
</html:td>
<html:td><html:div id="inactive">☒</html:div> <html:div id="name">Schneider Electric SE - Rueil-Malmaison , , France</html:div>
</html:td>
<html:td>
Jenner & Block LLP
</html:td>
<html:td>11/22/2019
-
08/31/2020
</html:td>
</html:tr>
<html:tr class="bbGrid-subgrid-row visible_none">
<html:td/>
<html:td colspan="4">
<html:div class="bbGrid-container">
<html:table class="bbGrid-grid table table-bordered table-condensed" width="100%" cellpadding="0" cellspacing="0">
<html:thead class="bbGrid-grid-head">
<html:tr class="bbGrid-grid-head-holder">
<html:th>Disposition, Date</html:th>
<html:th>Disposition by Unfair Acts, Date</html:th>
<html:th>Remedial Orders Issued, Issue Date, Status, Change Date</html:th>
</html:tr>
</html:thead>
<html:tbody>
<html:tr class="bbGrid-row" bgcolor="white">
<html:td> Settlement 08/31/2020</html:td>
<html:td> 08/31/2020</html:td>
<html:td> </html:td>
</html:tr>
</html:tbody>
</html:table>
<html:table class="bbGrid-grid table table-bordered table-condensed" width="100%" cellpadding="0" cellspacing="0">
<html:thead class="bbGrid-grid-head">
<html:tr class="bbGrid-grid-head-holder">
<html:th>Customs Enforcement Desc</html:th>
<html:th>Forum</html:th>
<html:th>Receipt Customs Letter</html:th>
<html:th>Seizure Forfeiture Order</html:th>
<html:th>Documents</html:th>
</html:tr>
</html:thead>
<html:tbody>
</html:tbody>
</html:table>
</html:div>
</html:td>
</html:tr>
<html:tr class="bbGrid-row">
<html:td class="bbGrid-subgrid-control">
<html:span class="icon-plus">+</html:span>
</html:td>
<html:td><html:div id="inactive">☒</html:div> <html:div id="name">Schneider Electric USA, Inc. - Andover , MA , United States</html:div>
</html:td>
<html:td>
Jenner & Block LLP
</html:td>
<html:td>11/22/2019
-
08/31/2020
</html:td>
</html:tr>
<html:tr class="bbGrid-subgrid-row visible_none">
<html:td/>
<html:td colspan="4">
<html:div class="bbGrid-container">
<html:table class="bbGrid-grid table table-bordered table-condensed" width="100%" cellpadding="0" cellspacing="0">
<html:thead class="bbGrid-grid-head">
<html:tr class="bbGrid-grid-head-holder">
<html:th>Disposition, Date</html:th>
<html:th>Disposition by Unfair Acts, Date</html:th>
<html:th>Remedial Orders Issued, Issue Date, Status, Change Date</html:th>
</html:tr>
</html:thead>
<html:tbody>
<html:tr class="bbGrid-row" bgcolor="white">
<html:td> Settlement 08/31/2020</html:td>
<html:td> 08/31/2020</html:td>
<html:td> </html:td>
</html:tr>
</html:tbody>
</html:table>
<html:table class="bbGrid-grid table table-bordered table-condensed" width="100%" cellpadding="0" cellspacing="0">
<html:thead class="bbGrid-grid-head">
<html:tr class="bbGrid-grid-head-holder">
<html:th>Customs Enforcement Desc</html:th>
<html:th>Forum</html:th>
<html:th>Receipt Customs Letter</html:th>
<html:th>Seizure Forfeiture Order</html:th>
<html:th>Documents</html:th>
</html:tr>
</html:thead>
<html:tbody>
</html:tbody>
</html:table>
</html:div>
</html:td>
</html:tr>
<html:tr class="bbGrid-row">
<html:td class="bbGrid-subgrid-control">
<html:span class="icon-plus">+</html:span>
</html:td>
<html:td> <html:div id="active"> ☑</html:div> <html:div id="name">Vivant, Inc. - Provo , UT , United States</html:div>
</html:td>
<html:td>
Williams Simons & Landis PLLC
</html:td>
<html:td>11/22/2019
-
</html:td>
</html:tr>
<html:tr class="bbGrid-subgrid-row visible_none">
<html:td/>
<html:td colspan="4">
<html:div class="bbGrid-container">
<html:table class="bbGrid-grid table table-bordered table-condensed" width="100%" cellpadding="0" cellspacing="0">
<html:thead class="bbGrid-grid-head">
<html:tr class="bbGrid-grid-head-holder">
<html:th>Disposition, Date</html:th>
<html:th>Disposition by Unfair Acts, Date</html:th>
<html:th>Remedial Orders Issued, Issue Date, Status, Change Date</html:th>
</html:tr>
</html:thead>
<html:tbody>
<html:tr class="bbGrid-row" bgcolor="white">
<html:td> No Violation Found 07/20/2021</html:td>
<html:td> 07/20/2021</html:td>
<html:td> </html:td>
</html:tr>
</html:tbody>
</html:table>
<html:table class="bbGrid-grid table table-bordered table-condensed" width="100%" cellpadding="0" cellspacing="0">
<html:thead class="bbGrid-grid-head">
<html:tr class="bbGrid-grid-head-holder">
<html:th>Customs Enforcement Desc</html:th>
<html:th>Forum</html:th>
<html:th>Receipt Customs Letter</html:th>
<html:th>Seizure Forfeiture Order</html:th>
<html:th>Documents</html:th>
</html:tr>
</html:thead>
<html:tbody>
</html:tbody>
</html:table>
</html:div>
</html:td>
</html:tr>
</html:tbody>
</html:table>
</html:div></html:div>
</html:div>
<html:div id="left-filter-content">
<html:h2><html:img src="/337external/static/images/participants_icon.png" width="24" height="20" alt="People Icon"/> Agency Participant Information</html:h2>
</html:div>
<html:div id="returned-detail-content3">
<html:div style="width:35%;float:left;">
<html:div id="left-filter-content"><html:h3>Office of Unfair Import Investigations (OUII)</html:h3>
</html:div>
<html:div id="returned-detail-content">
<html:p>
<html:span style="font-weight:bolder">Level of Participation:</html:span>
<html:span style="margin-left:.5em;"> Full </html:span>
</html:p>
<html:br/>
<html:table id="investigations" width="100%" cellpadding="0px" cellspacing="0px">
<html:tbody><html:tr>
<html:th>Name </html:th>
</html:tr>
<html:tr>
<html:td>
<html:div id="active"> ☑ </html:div>
<html:div class="name"> Jeffrey Hsu</html:div>
</html:td>
</html:tr>
<html:tr>
<html:td>
<html:div id="active"> ☑ </html:div>
<html:div class="name"> Paul Gennari</html:div>
</html:td>
</html:tr>
</html:tbody></html:table>
</html:div>
</html:div>
<html:div style="width:30%;float:left;">
<html:div id="left-filter-content"><html:h3>General Counsel (GC)</html:h3>
</html:div>
<html:div id="returned-detail-content">
<html:table id="investigations" width="100%" cellpadding="0px" cellspacing="0px">
<html:tbody><html:tr>
<html:th>Name </html:th>
</html:tr>
<html:tr>
<html:td>
<html:div id="active"> ☑ </html:div>
Michael Liberman
</html:td>
</html:tr>
<html:tr>
<html:td>
<html:div id="active"> ☑ </html:div>
Megan Valentine
</html:td>
</html:tr>
<html:tr>
<html:td>
<html:div id="inactive"> ☒ </html:div>
Houda Morad
</html:td>
</html:tr>
</html:tbody></html:table>
</html:div>
</html:div>
<html:div style="width:35%;float:left;">
<html:div id="left-filter-content"><html:h3>Administrative Law Judge (ALJ) </html:h3>
</html:div>
<html:div id="returned-detail-content">
<html:p>
</html:p><html:table id="investigations" width="100%" cellpadding="0px" cellspacing="0px">
<html:tbody><html:tr>
<html:th>Name </html:th>
</html:tr>
<html:tr>
<html:td>
<html:div id="active"> ☑ </html:div> David Shaw
</html:td>
</html:tr>
</html:tbody></html:table>
</html:div>
</html:div>
<html:div style="float:right;width:100%">
<html:div style="float:right;">
<html:div id="active">☑ <html:span style="font-size:0.6em; padding-right:5px"> - Active </html:span></html:div>
<html:div id="inactive">☒<html:span style="font-size:0.6em"> - Inactive </html:span></html:div>
</html:div>
</html:div>
<html:br clear="all"/>
</html:div>
</html:div>
</html:div>
<!--End landing-container -->
<html:div style="clear: both;"/>
</html:div>
<html:div id="footer">
<html:p> </html:p>
<html:div class="address"> … </html:div>
<html:p> </html:p>
<html:div class="midSection" align="left"> … </html:div>
<html:p/>
<html:p> </html:p>
<html:div class="midSectionRight" align="left"> … </html:div>
<html:p/>
<html:p> </html:p>
<html:div class="right" align="left"> </html:div>
</html:div>
<html:div style="clear: both;"/>
</html:div>
</html:div>
This is my desired output:
{'Office of Unfair Import Investigations (OUII)': {'Level of Participation': 'Full',
'people': [{'name': 'Jeffrey Hsu', 'status': 'active'},
{'name': 'Paul Gennari', 'status': 'active'}]},
'Administrative Law Judge (ALJ) ': {'people': [{'name': 'David Shaw',
'status': 'active'}]},
'General Counsel (GC)': {'people': [{'name': 'Michael Liberman',
'status': 'active'},
{'name': 'Megan Valentine', 'status': 'active'},
{'name': 'Houda Morad', 'status': 'inactive'}]},
'Patent Number(s)': [{'Number': '10,018,371',
'Active - Inactive': '11/22/2019 - 07/20/2021'},
{'Number': '8,131,497', 'Active - Inactive': '11/22/2019 - 07/20/2021'},
{'Number': '8,432,322', 'Active - Inactive': '11/22/2019 - 07/20/2021'},
{'Number': '8,498,753', 'Active - Inactive': '11/22/2019 - 12/15/2020'}],
'Investigation Type': 'Violation',
'Investigation Number': '337-TA-1185',
'Investigation Status': 'Terminated',
'Respondent Information': [{'Active Inactive Date': '10/22/2019 -',
'Dispositions': [{'Disposition by Unfair Acts, Date': '07/20/2021',
'Remedial Orders Issued, Issue Date, Status, Change Date': '',
'Disposition, Date': 'No Violation Found 07/20/2021'}],
'Lead Counsel for Service': 'Foster, Murphy, Altman & Nickel, PC',
'Name - City/State/Country': 'Alarm.com Holdings, Inc. - Tysons , VA , United States'},
{'Active Inactive Date': '11/22/2019 -',
'Dispositions': [{'Disposition by Unfair Acts, Date': '07/20/2021',
'Remedial Orders Issued, Issue Date, Status, Change Date': '',
'Disposition, Date': 'No Violation Found 07/20/2021'}],
'Lead Counsel for Service': 'Foster, Murphy, Altman & Nickel, PC',
'Name - City/State/Country': 'Alarm.com Incorporated - Tysons , VA , United States'},
{'Active Inactive Date': '11/22/2019 - 07/01/2020',
'Dispositions': [{'Disposition by Unfair Acts, Date': '07/01/2020',
'Remedial Orders Issued, Issue Date, Status, Change Date': '',
'Disposition, Date': 'Settlement 07/01/2020'}],
'Lead Counsel for Service': 'Latham & Watkins LLP',
'Name - City/State/Country': 'Daikin America, Inc. - Orangeburg , NY , United States'},
{'Active Inactive Date': '11/22/2019 - 07/01/2020',
'Dispositions': [{'Disposition by Unfair Acts, Date': '07/01/2020',
'Remedial Orders Issued, Issue Date, Status, Change Date': '',
'Disposition, Date': 'Settlement 07/01/2020'}],
'Lead Counsel for Service': 'Latham & Watkins LLP',
'Name - City/State/Country': 'Daikin Industries, Ltd. - Osaka , , Japan'},
{'Active Inactive Date': '11/22/2019 - 07/01/2020',
'Dispositions': [{'Disposition by Unfair Acts, Date': '07/01/2020',
'Remedial Orders Issued, Issue Date, Status, Change Date': '',
'Disposition, Date': 'Settlement 07/01/2020'}],
'Lead Counsel for Service': 'Latham & Watkins LLP',
'Name - City/State/Country': 'Daikin North America LLC - Houston , TX , United States'},
{'Active Inactive Date': '11/22/2019 -',
'Dispositions': [{'Disposition by Unfair Acts, Date': '07/20/2021',
'Remedial Orders Issued, Issue Date, Status, Change Date': '',
'Disposition, Date': 'No Violation Found 07/20/2021'}],
'Lead Counsel for Service': 'Venable LLP',
'Name - City/State/Country': 'Ecobee Ltd. - Toronto , , Canada'},
{'Active Inactive Date': '11/22/2019 -',
'Dispositions': [{'Disposition by Unfair Acts, Date': '07/20/2021',
'Remedial Orders Issued, Issue Date, Status, Change Date': '',
'Disposition, Date': 'No Violation Found 07/20/2021'}],
'Lead Counsel for Service': 'Venable LLP',
'Name - City/State/Country': 'Ecobee, Inc. - Toronto , , Canada'},
{'Active Inactive Date': '11/22/2019 -',
'Dispositions': [{'Disposition by Unfair Acts, Date': '07/20/2021',
'Remedial Orders Issued, Issue Date, Status, Change Date': '',
'Disposition, Date': 'No Violation Found 07/20/2021'}],
'Lead Counsel for Service': 'WHITE & CASE LLP',
'Name - City/State/Country': 'Google LLC - Mountain View , CA , United States'},
{'Active Inactive Date': '11/22/2019 - 08/31/2020',
'Dispositions': [{'Disposition by Unfair Acts, Date': '08/31/2020',
'Remedial Orders Issued, Issue Date, Status, Change Date': '',
'Disposition, Date': 'Settlement 08/31/2020'}],
'Lead Counsel for Service': 'Jenner & Block LLP',
'Name - City/State/Country': 'Schneider Electric SE - Rueil-Malmaison , , France'},
{'Active Inactive Date': '11/22/2019 - 08/31/2020',
'Dispositions': [{'Disposition by Unfair Acts, Date': '08/31/2020',
'Remedial Orders Issued, Issue Date, Status, Change Date': '',
'Disposition, Date': 'Settlement 08/31/2020'}],
'Lead Counsel for Service': 'Jenner & Block LLP',
'Name - City/State/Country': 'Schneider Electric USA, Inc. - Andover , MA , United States'},
{'Active Inactive Date': '11/22/2019 -',
'Dispositions': [{'Disposition by Unfair Acts, Date': '07/20/2021',
'Remedial Orders Issued, Issue Date, Status, Change Date': '',
'Disposition, Date': 'No Violation Found 07/20/2021'}],
'Lead Counsel for Service': 'Williams Simons & Landis PLLC',
'Name - City/State/Country': 'Vivant, Inc. - Provo , UT , United States'}],
'Docket Number': '3418',
'HTS Number(s)': [{'Category Basket': 'Consumer Electronics Products',
'Number': '90321000'},
{'Category Basket': 'Consumer Electronics Products', 'Number': '90322000'},
{'Category Basket': 'Consumer Electronics Products', 'Number': '90328960'}],
'Procedural History': {'Non Final (Terminating) ID Issued': '',
'Final Determination of No Violation': '07/20/2021',
'Final Determination of Violation': '',
'Termination Date': '07/20/2021',
'Due Date': '04/20/2021',
'Evidentiary Hearing Dates': {'Scheduled Start': '07/21/2020',
'Actual Start': '11/16/2020',
'Actual End': '11/19/2020',
'Scheduled End': '07/24/2020'},
'Complaint Filed': '10/22/2019',
'Issue Date': '04/20/2021',
'Scheduled Start': '07/21/2020',
'Date of Institution': '11/27/2019',
'Actual Start': '11/16/2020',
'Final ID On Violation': {'Due Date': '04/20/2021',
'Issue Date': '04/20/2021'},
'Markman Hearing Dates': {'Start': '', 'End': ''},
'Actual End': '11/19/2020',
'Target Date': '08/20/2021',
'Scheduled End': '07/24/2020',
'Start': '',
'End': ''},
'Unfair Act Alleged': [{'Type': 'Patent Infringement',
'Active - Inactive': '11/22/2019 -'}],
'Complainant Information': [{'Active Inactive Date': '11/22/2019 -',
'Lead Counsel for Service': 'Russ August & Kabat',
'Name - City/State/Country': ''}],
'title': 'Certain Smart Thermostats, Smart HVAC Systems, and Components Thereof; Inv. No. 337-TA-1185'}
This is the current state of my XSLT that produces that JSON.
<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:map="http://www.w3.org/2005/xpath-functions/map"
version="3.0">
<xsl:output method="json"/>
<xsl:template match='/'>
<xsl:map>
<xsl:map-entry key="'title'" select="string(//div[@id='titlecontainer']/h2)"/>
<xsl:apply-templates/>
</xsl:map>
</xsl:template>
<!-- Overrides default template for text-->
<xsl:template match="text()"/>
<!--Used for title block info, Procedural History, and Office of Unfair Import Investigations (OUII) tables-->
<xsl:template match="p[count(span) = 2]">
<xsl:map-entry key="translate(normalize-space(./span[1]), ':', '')" select="normalize-space(./span[2])"/>
</xsl:template>
<!-- Procedural History-->
<xsl:template match="div[@id='right-filter']/div[@id = 'returned-detail-content']">
<xsl:map-entry key="'Procedural History'">
<xsl:map>
<xsl:for-each-group select=".//p[span[contains(@class, 'SubTitle')]]" group-by="normalize-space(preceding-sibling::p[span[contains(@class, 'mainTitle')]][1])">
<xsl:map-entry key="current-grouping-key()">
<xsl:map>
<xsl:apply-templates select="current-group()"/>
</xsl:map>
</xsl:map-entry>
</xsl:for-each-group>
<xsl:apply-templates/>
</xsl:map>
</xsl:map-entry>
</xsl:template>
<!--Rightmost tables and Complainant Information-->
<xsl:template match="table[preceding-sibling::div[@id = 'right-filter-content-detailnested' or @id='left-filter-content']]">
<xsl:map-entry key="normalize-space(translate(./preceding-sibling::div[1]/h3/text()[normalize-space(.)], '+', ''))">
<xsl:variable name="headers" select=".//th"/>
<xsl:sequence select="array{.//tr[td] ! map:merge(for $i in 1 to count($headers) return map{normalize-space($headers[$i]): normalize-space(./td[$i]/text()[normalize-space(.)])})}"/>
</xsl:map-entry>
</xsl:template>
<!-- Agency Participant Information tables-->
<xsl:template match="div[@id='returned-detail-content3']/div[contains(@style, 'float:left;')]">
<xsl:map>
<xsl:map-entry key="string(./div[@id='left-filter-content']/h3)">
<xsl:map>
<xsl:apply-templates/>
</xsl:map>
</xsl:map-entry>
</xsl:map>
</xsl:template>
<xsl:template match="tbody[tr/th[normalize-space(.) = 'Name']]">
<xsl:variable name="people" as="map(*)*">
<xsl:apply-templates/>
</xsl:variable>
<xsl:map-entry key="'people'" select="array{$people}"/>
</xsl:template>
<!--OUII rows-->
<xsl:template match="div[contains(@style, 'float:left;')]//td[div[contains(@class, 'name')]]">
<xsl:map>
<xsl:map-entry key="'name'" select="normalize-space(./div[contains(@class, 'name')])"/>
<xsl:map-entry key="'status'" select="string(./div[1]/@id)"/>
</xsl:map>
</xsl:template>
<!--Other rows-->
<xsl:template match="div[contains(@style, 'float:left;')]//td[count(div)=1 and div[@id='active' or @id='inactive']]">
<xsl:map>
<xsl:map-entry key="'name'" select="normalize-space(./text()[normalize-space(.)])"/>
<xsl:map-entry key="'status'" select="string(./div[1]/@id)"/>
</xsl:map>
</xsl:template>
<!-- Respondent Information-->
<xsl:template match="div[@id='bbGrid-subgrid']">
<xsl:map-entry key="normalize-space(./preceding-sibling::div[1]/h3/text()[normalize-space()])">
<xsl:variable name="rows" as="map(*)*">
<xsl:apply-templates select="./div/table/tbody/tr[not(.//table)]"/>
</xsl:variable>
<xsl:sequence select="array{$rows}"/>
</xsl:map-entry>
</xsl:template>
<xsl:template match="tr[contains(@class, 'bbGrid-row')]">
<xsl:map>
<xsl:variable name="headers" select="./ancestor::table[1]/thead/tr/th"/>
<xsl:for-each select="td[not(contains(span/@class, 'icon-plus'))]">
<xsl:variable name="index" select="count(preceding-sibling::td) + 1"/>
<xsl:map-entry key="string($headers[$index])" select="if (./div[2]) then normalize-space(./div[2]) else normalize-space(.)"/>
</xsl:for-each>
<xsl:if test="following-sibling::tr[1][td/div/table]">
<xsl:map-entry key="'Dispositions'">
<xsl:variable name="dispositions" as="map(*)*">
<xsl:apply-templates select="following-sibling::tr[1]"/>
</xsl:variable>
<xsl:sequence select="array{$dispositions}"/>
</xsl:map-entry>
</xsl:if>
</xsl:map>
</xsl:template>
<xsl:template match="td/div[1][contains(@id, 'active')]">
<xsl:map-entry key="'status'" select="string(@id)"/>
</xsl:template>
</xsl:stylesheet>
We're fairly happy with the approach so far. I mainly wanted to ask for feedback on the XSLT, as I am returning to the language after a long absence. I place a premium on maintainability and comprehensibility for other programmers.
I've tried a few different approaches to see how I like them. For instance, the <!--Rightmost tables and Complainant Information--> template is very compact and produces the desired output but is maybe not very comprehensible.
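To spell out for other readers what that compact template computes: for each data row it pairs each th header with the td cell in the same column position, merges the pairs into one map, and collects the row maps into an array. A rough Python analogue (my own illustration — names are made up, and plain lists stand in for the node sequences):

```python
def rows_to_maps(headers, rows):
    """Pair each header with the cell in the same column, one dict per row.

    Mirrors the XPath map:merge(for $i in 1 to count($headers) ...)
    expression wrapped in an array{} constructor.
    """
    return [dict(zip(headers, row)) for row in rows]

headers = ["Number", "Category Basket"]
rows = [
    ["90321000", "Consumer Electronics Products"],
    ["90322000", "Consumer Electronics Products"],
]
result = rows_to_maps(headers, rows)
# result[0] == {"Number": "90321000",
#               "Category Basket": "Consumer Electronics Products"}
```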
Comments on how to improve the approach of applying XSLT 3.0 to real-world HTML from Python are also welcome.
Answer: I don't see any glaring problems with your code (on a pretty quick scan). You might like to try the arrow operator:
normalize-space(translate(./preceding-sibling::div[1]/h3/text()[normalize-space(.)], '+', ''))
can be written
preceding-sibling::div[1]/h3/text()[normalize-space(.)]
=> translate('+', '')
=> normalize-space()
which I find more readable.
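For anyone rusty on those two XPath functions: `translate($s, '+', '')` deletes every `+` character, and `normalize-space()` trims leading/trailing whitespace and collapses internal runs to single spaces. A rough Python sketch of the same chain (my own illustration, not part of the stylesheet):

```python
import re

def xpath_translate(s, src, dst):
    """Rough analogue of XPath translate(): src[i] maps to dst[i];
    characters of src with no counterpart in dst are deleted."""
    table = {ord(c): (dst[i] if i < len(dst) else None)
             for i, c in enumerate(src)}
    return s.translate(table)

def normalize_space(s):
    """Rough analogue of XPath normalize-space()."""
    return re.sub(r"\s+", " ", s).strip()

key = normalize_space(xpath_translate(" Respondent + Information \n", "+", ""))
# key == "Respondent Information"
```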
But is this use of text() correct? Your only h3 has a span element within it. If there were two child text nodes separated by a span then your code would fail with a type error. I don't know your data of course.
You might also find that you can simplify the whitespace handling if you strip whitespace from block-level elements, for example
<xsl:strip-space elements="div ul p"/>
Another observation: as an alternative to calling out to SaxonJS running under node.js, you could use the Python binding in SaxonC. | {
"domain": "codereview.stackexchange",
"id": 42807,
"tags": "json, web-scraping, xpath, xslt"
} |
Rebuilding workspace after kinetic update? | Question:
Hi there. After the latest update I'm encountering various breakages and processes dying on startup, for example a "Time is out of dual 32-bit range" error. Should my workspace, ROS nodes, etc. be rebuilt after a large update like this?
Originally posted by NotABot on ROS Answers with karma: 41 on 2018-09-11
Post score: 4
Original comments
Comment by ahendrix on 2018-09-11:
Probably. It would help to know if you're just upgrading packages within Kinetic, or upgrading from an older version of ROS (Indigo, Jade, etc) to Kinetic.
Comment by NotABot on 2018-09-11:
From an older version of Kinetic. Is it possible to roll back to previous versions of the packages? Or reinstall without that update?
Comment by lstelzner on 2018-09-11:
We've also been having issues with a Kinetic build that seems to have been released approx. 4 days ago. We build all our ROS packages into Docker containers; I have an image from 5 days ago that launches our stack fine, while the one from 4 days ago fails. Half of the nodes die on launch without any useful logs.
Answer:
There was a release of package updates for Kinetic a few days ago: https://discourse.ros.org/t/new-packages-for-kinetic-2018-09-06/5983 . It looks like that included releases of lots of packages.
It sounds like you're seeing either an ABI compatibility issue or a bug introduced by one of the changes in one of the packages that was updated. In general, ROS strives to maintain backward compatibility with these updates so that you don't need to recompile when you update packages, but sometimes the package maintainers make mistakes, or it's just not possible.
If this is an ABI compatibility issue or a new bug, it should be reported upstream immediately so that the maintainers can decide how best to handle it. If you're not sure which package or stack to report this issue on, I suggest you update your question with the exact error message that you're seeing, or ask a new question that includes the exact error message.
Originally posted by ahendrix with karma: 47576 on 2018-09-11
This answer was ACCEPTED on the original site
Post score: 3
Original comments
Comment by gvdhoorn on 2018-09-12:
This was most likely an ABI breakage: we've had several workspaces 'break' after the latest sync. All of those were fixed by updating all ROS packages (not just a few of the outstanding updates) and then rebuilding the workspace.
Rebuilding after an apt update is always recommended. | {
"domain": "robotics.stackexchange",
"id": 31759,
"tags": "ros, ros-kinetic, update"
} |
C++: Mastermind game two players | Question: I have made an attempt to write the Mastermind game, to improve my basics in C++. I would like my code reviewed, with emphasis on design, readability, and structure of the program.
The secret code is stored as a string of 4 characters (PEGS), and input is taken as, e.g., RRGY for red red green yellow.
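As background for reviewers (this is my own sketch in Python, not code from the program — the 'P'/'O' cases in printCode below suggest per-peg feedback markers): classic Mastermind feedback counts pegs that match in both colour and position, plus pegs of the right colour in the wrong position:

```python
from collections import Counter

def feedback(secret, guess):
    """Return (exact, colour_only) peg counts for one guess."""
    # Pegs with the right colour in the right position.
    exact = sum(s == g for s, g in zip(secret, guess))
    # Total colour overlap, counting each colour at most as often
    # as it occurs in both codes; subtract the exact matches.
    overlap = sum((Counter(secret) & Counter(guess)).values())
    return exact, overlap - exact

# feedback("RRGY", "RGRY") == (2, 2)
```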
#include <cstdlib>
#include <iostream>
#include <stdexcept>
#include <string>
#include <vector>

#define RED "\x1b[31;1m"
#define GREEN "\x1b[32;1m"
#define YELLOW "\x1b[33;1m"
#define BLUE "\x1b[34;1m"
#define MAGENTA "\x1b[35;1m"
#define CYAN "\x1b[36;1m"
#define WHITE "\x1b[37;1m"
#define RESET "\x1b[0m"
void printCode(const std::string &code);
bool isMatching(const std::string &code, std::string &userCode);
inline void showMoves(std::vector<std::string> &prevMoves);
void readCode(std::string &code);
void setup(unsigned int &maxGames, unsigned int &maxGuess);
const unsigned int PEGS = 4;
main()
int main()
{
unsigned int maxGuess, maxGames, p1score, p2score;
setup(maxGames, maxGuess);
for (int game = 0; game != maxGames; ++game)
{
bool won = false;
int score = 0;
std::string secretCode;
//keeps track of all the previous moves and feedback.
std::vector<std::string> prevMoves;
std::cout << "New Game\nSet secret code: ";
readCode(secretCode);
for (int guess = 1; guess != maxGuess + 1; ++guess)
{
std::cout << "guess: " << guess << " / "
<< maxGuess << "\n";
std::string userCode;
std::cout << "Code: ";
readCode(userCode);
score++;
if (isMatching(secretCode, userCode))
{
won = true;
std::cout << WHITE << "Code successfully broken!\n";
printCode(secretCode);
break;
}
// Update previous moves and display them.
prevMoves.push_back(userCode);
showMoves(prevMoves);
}
if (!won)
{
std::cout << "Oops! You were unable"
<< " to crack the code\n";
printCode(secretCode);
}
if (score == maxGuess)
score++;
// if current game is an even number then update
// player 1's score.
if (game % 2 != 0)
{
p1score = score;
std::cout << "Player 1: " << p1score << "\n\n";
}
else
{
p2score = score;
std::cout << "Player 2: " << p2score << "\n\n";
}
}
std::cout << "\n\nPlayer 1: " << p1score << '\n'
<< "Player 2: " << p2score << '\n';
return 0;
}
setup()
void setup(unsigned int &maxGames, unsigned int &maxGuess)
{
std::cout << "Max Games: ";
std::cin >> maxGames;
std::cout << "Max turns: ";
std::cin >> maxGuess;
try
{
if(maxGames % 2 != 0 || maxGuess % 2 != 0)
throw std::range_error("Invalid number!");
if(maxGames > 12 || maxGuess > 12)
throw std::overflow_error("Number too high!");
}
catch(std::range_error err)
{
std::cout << err.what()
<< "\nNumber should be even!\n"
<< "Try again\n\n";
setup(maxGames, maxGuess);
}
catch(std::overflow_error err)
{
std::cout << err.what()
<< "\nNumber should be less than 13\n"
<< "Try again\n\n";
setup(maxGames, maxGuess);
}
}
showMoves()
inline void showMoves(std::vector<std::string> &prevMoves)
{
for (std::vector<std::string>::const_iterator i = prevMoves.begin();
i != prevMoves.end(); ++i)
printCode(*i);
}
readCode()
void readCode(std::string &code)
{
std::cin >> code;
try
{
if(code.size() != PEGS)
throw std::length_error("Invalid number of characters!");
for(std::string::size_type i = 0; i != PEGS; ++i)
if(code[i] != 'R' && code[i] != 'G' && code[i] != 'B' &&
code[i] != 'Y' &&code[i] != 'M' && code[i] != 'C' && code[i] != '-')
throw std::range_error("Invalid colours!");
}
catch(std::length_error err)
{
std::cout << "\n\n" << err.what() << '\n'
<< "Permitted characters " << PEGS
<< "\nTry again: ";
readCode(code);
}
catch(std::range_error err)
{
std::cout << "\n\n" << err.what() << '\n'
<< "Permitted colours: "
<< "R G B Y M C -\n"
<< "Try again: ";
readCode(code);
}
system("clear");
return ;
}
printCode()
void printCode(std::string const &code)
{
std::string::size_type i = 0;
while (i != code.size())
{
switch (code[i++])
{
case 'R':
std::cout << RED << "# ";
break;
case 'G':
std::cout << GREEN << "# ";
break;
case 'Y':
std::cout << YELLOW << "# ";
break;
case 'B':
std::cout << BLUE << "# ";
break;
case 'M':
std::cout << MAGENTA << "# ";
break;
case 'C':
std::cout << CYAN << "# ";
break;
case 'P':
std::cout << WHITE << " P";
break;
case 'O':
std::cout << WHITE << " O";
break;
default:
std::cout << " ";
}
}
std::cout << RESET << '\n';
}
isMatching()
bool isMatching(const std::string &secretCode, std::string &userCode)
{
// keeps track of the duplicates, so that feedback
// is not provided twice for a single colour.
std::vector<bool> seenCode(4, false);
std::vector<bool> seenUserCode(seenCode);
std::string feedback;
std::string::size_type i, j;
// for each character in the userCode, update the feedback
// with the character 'P' if they match both in colour
// and position with the secret code.
for (i = 0; i != PEGS; ++i)
{
if (secretCode[i] == userCode[i])
{
feedback += 'P';
seenCode[i] = seenUserCode[i] = true;
}
}
// for each character in the userCode, update the feedback
// with the character 'O', if they match in colour but not in
// position with the secret code.
for (i = 0; i != PEGS; ++i)
{
if (!seenCode[i])
{
for (j = 0; j != PEGS; ++j)
{
if (!seenUserCode[j] && secretCode[i] == userCode[j])
{
seenUserCode[j] = true;
feedback += 'O';
break;
}
}
}
}
// concatenate the userCode with feedback.
userCode += feedback;
// if the userCode did not match the code.
if (feedback != "PPPP")
return false;
return true;
}
I plan to add rules and how to play files later on.
Answer: I love the UI! Good job. But, there are a few (major) things that you can do better.
Where are the #includes?
Either you didn't paste them here (you should always post the whole code BTW), or you haven't included them. That's bad! Some compilers will happily compile your code even if you are missing some #includes, but that is not standard behavior.
When I compiled your code, I was greeted with a lot of errors, all related to missing headers.
Don't use macros when you can use variables
Macros are bad, but in some cases they are necessary, because there is no viable alternative. But not in your case. You could easily replace those "constants" with actual variables:
constexpr auto RED = "\x1b[31;1m";
constexpr auto GREEN = "\x1b[32;1m";
constexpr auto YELLOW = "\x1b[33;1m";
constexpr auto BLUE = "\x1b[34;1m";
constexpr auto MAGENTA = "\x1b[35;1m";
constexpr auto CYAN = "\x1b[36;1m";
constexpr auto WHITE = "\x1b[37;1m";
constexpr auto RESET = "\x1b[0m";
Don't declare variables at the top of functions if they are needed later
Always declare variables in the smallest scope possible, and never ever declare every variable used in a function at the top of it. That's bad for several reasons:
You might initialize a variable that is expensive to create, only to not use it because the function returned prematurely.
You might accidentally change a value of a variable that you shouldn't have changed.
You might forget to initialize some variables (which you did - see isMatching), leaving them in an unspecified state. If you forget that the variable doesn't have a value yet, you will have a (possible) hard time debugging.
Listen to the compiler warnings!
Always listen to the compiler warnings, and always compile with a high warning level.
You have 3 warnings that can easily be fixed:
main.cpp: In function 'int main()':
main.cpp:29:29: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
for (int game = 0; game != maxGames; ++game)
~~~~~^~~~~~~~~~~
main.cpp:40:35: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
for (int guess = 1; guess != maxGuess + 1; ++guess)
~~~~~~^~~~~~~~~~~~~~~
main.cpp:70:19: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
if (score == maxGuess)
std::...::size_type is also known as std::size_t (within reasons)
std::size_t is required to be able to store the maximum size of any type. This means that std::size_t is larger than or equal to std::...::size_type. In fact, in most cases, size_type is a synonym:
When indexing C++ containers, such as std::string, std::vector, etc, the appropriate type is the member typedef size_type provided by such containers. It is usually defined as a synonym for std::size_t.
Taken from here.
This can save you some typing :)
Why inline?
Why is showMoves inline? It really doesn't need to be: it is declared inside a single translation unit and thus can't be used anywhere else, which defeats the whole point of inline. You should declare a function as inline to enable the compiler to better optimize it, as the definition of the function will then be available in every translation unit using it.
Contrary of what you might think, declaring a function inline doesn't make the compiler inline your function. It does this whether you declared it inline or not. It can also ignore the inline keyword, as it might deem the function unsuitable for inlining.
Catch exceptions by const&
If you don't catch exceptions by reference, you will see unexpected behavior if an exception is thrown that inherits from that specific exception. This is called object slicing. Making it const is better because it prevents you from accidentally modifying the exception if you didn't mean to (this can be applied to every variable - see const correctness).
Prefer returning a value than passing a reference
You should write functions that return a new value, instead of modifying a value passed as a parameter. The reason is that you never want your variables to be in an unspecified state, and if you pass by reference, you will need to write two lines instead of one when initializing, possibly leaving the variable uninitialized for a short amount of time.
// 2 lines
std::string secretCode;
readCode(secretCode);
// 1 line
std::string secretCode = readCode();
With the advent of move semantics and RVO, there won't be any string copied, it will be moved, which is very cheap. You can read more here.
Use < instead of !=
In every loop that you have, you have var != end or similar constructs. If you accidentally increment var past end, the loop will continue running. Instead, if you would have used var < end, then the loop will still terminate.
Sometimes, this is not possible, for example with iterators, which don't have an operator<.
You can inline setup
setup functions are very controversial. I would advocate not using them, as if you for some reason forget to call them, you will get strange results.
The recursion of setup can easily be changed into a while loop, which has the added advantage of not overflowing the stack if a user enters the wrong numbers too many times.
Some general questions
Why does the number of games have to be even? Same for the number of guesses? Is there any practical reason why I can't play 1 or 3 games? Or why I can't specify 5 guesses?
I think you should drop those requirements, but I might be missing something.
Const correctness
Use const whenever possible, as stated previously above. showMoves doesn't change its first parameter prevMoves, so why not make it const? That way, someone using that function will know that the passed container will not be modified. Without const, people (including you some time in the future) will be wary of using that function because of the possible side effects that it has.
Use auto and for each loops
You can use auto to reduce a few variable definitions:
std::vector<std::string>::const_iterator i = prevMoves.begin();
becomes
auto i = prevMoves.cbegin(); // note the c, stands for const
At the same time, you can use for each loops:
for (auto i = prevMoves.cbegin(); i != prevMoves.cend(); ++i)
printCode(*i);
becomes
for (const auto& value : prevMoves)
printCode(value);
Variable names
You should only use i and j for small loops, not anything more. Always use better names if you can. Also, ALL_CAPS_VARIABLES should only be used for macros, and not for constant variables.
Unnecessary return statements
You can omit return 0; in main, as the compiler automatically "adds" it.
A return; statement as the last statement in a function returning void is unnecessary, as the function will return anyway once there are no more statements.
Don't use platform dependent code
std::system("clear"); is a typical example of platform dependent code. If I were to run your code on a Windows machine, it will not work. Why? Because the clear command on Windows is called cls, and not clear. You should try to avoid any platform dependent code whenever possible, and provide alternatives if you can't (using macros).
Also, coloring the terminal is not supported for every terminal, so you might want to disable colors if the terminal doesn't support it.
Let's shorten printCode!
This is completely optional, but I would prefer this printCode implementation:
void printCode(std::string const &code)
{
static std::unordered_map<char, std::string> map = {
{ 'R', "\x1b[31;1m# " }, // red
{ 'G', "\x1b[32;1m# " }, // green
{ 'Y', "\x1b[33;1m# " }, // yellow
{ 'B', "\x1b[34;1m# " }, // blue
{ 'M', "\x1b[35;1m# " }, // magenta
{ 'C', "\x1b[36;1m# " }, // cyan
{ 'P', "\x1b[37;1m P" }, // white
{ 'O', "\x1b[37;1m O" } // white
};
for (auto ch : code)
std::cout << (map.find(std::toupper(ch)) != map.end() ? map[std::toupper(ch)] : " ");
std::cout << "\x1b[0m" << '\n';
// you might notice that I don't use any of the constants
// that's because this is the only place where you are using them
// so you might as well inline them
}
Just because it's short and readable :) | {
"domain": "codereview.stackexchange",
"id": 24697,
"tags": "c++"
} |
Classifying survey response text SVM | Question: I have 800 responses to an open-ended survey question. Each response is categorized into 3 categories based on a list of 70 categories. These categories are things like "stronger leadership", "better customer service", "programs", and etc...
My question is, can I use this as a training data set in order to develop a model that I can use in the future as we get more survey responses? We would like to be able to tag, label, or classify each survey response into (up to) 3 of the 70 categories.
Is this even possible? Or do I have to use a NB with simple words? Can you please guide me to tutorials, examples, etc.?
Using R in this exercise.
Answer: Assigning ~3 of 70 categories means you would be performing multi-label classification.
In the end, it doesn't make much difference if you use Naive Bayes or SVM; they are both families of algorithms that translate provided independent variables (your feature space) into hopefully correct dependent variables (target classes).
The question is how to construct a good feature space. The state of the art approaches in text mining are (or were) first tokenizing words, stripping punctuation and stop words, stemming or lemmatizing them, creating a bag-of-words model of those words' relative frequencies and perhaps the frequencies of those words' bigrams or trigrams.
Then run your classification learners on that. Note that the resulting feature space table might get really wide (lots of words and combinations of words), so you might want to consider some form of dimensionality reduction.
Of course, you will have to repeat the same filtering process with exact same parameters for each new survey you want to classify.
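The pipeline described above (tokenize, drop stop words, relative frequencies, optional n-grams) is straightforward to prototype. The question uses R (e.g. the tm package), but as a language-agnostic illustration, here is a minimal pure-Python sketch of the feature construction; the tiny stop-word list and the sample response are made up for the example:

```python
import re
from collections import Counter

STOP_WORDS = {"the", "a", "an", "and", "of", "to", "we", "need"}  # toy list

def features(text, use_bigrams=True):
    """Bag-of-words relative frequencies, optionally with bigrams."""
    tokens = [t for t in re.findall(r"[a-z]+", text.lower())
              if t not in STOP_WORDS]
    grams = list(tokens)
    if use_bigrams:
        grams += [" ".join(p) for p in zip(tokens, tokens[1:])]
    counts = Counter(grams)
    total = sum(counts.values()) or 1
    return {g: c / total for g, c in counts.items()}

row = features("We need stronger leadership and better customer service.")
```

Each survey response becomes one row of a (very wide) feature table, which any multi-label learner can then consume.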
Here's another good batch of answers on multi-label text classification. | {
"domain": "datascience.stackexchange",
"id": 631,
"tags": "machine-learning, r, text-mining, svm"
} |
How is o-Fluorophenol more acidic than phenol even after having Hydrogen bonding? | Question:
Source: Concepts of Organic Chemistry by Dr OP Tandon, Himanshu Pandey, Dr AK Virmani
Page no: 241
Answer: Perhaps the words in the picture are explained at greater length in the text, or perhaps actual numbers are given for the dissociation constants (https://www.quora.com/Why-is-o-flurophenol-is-more-acidic-than-p-flurophenol).
As it turns out, the acid dissociation constants (as p$K_a$) are 8.7, 9.3, 9.9 and 10.0 for ortho, meta, para and unsubstituted phenol. Ortho-fluorophenol is the most acidic - perhaps in spite of the possible hydrogen bonding. The inductive effect of the fluoro group is surely greatest at the ortho position.
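To put those values on a linear scale: each p$K_a$ unit is a factor of ten in $K_a$, so ortho-fluorophenol (p$K_a$ 8.7) is roughly 20 times more acidic than unsubstituted phenol (p$K_a$ 10.0). A trivial check:

```python
# ratio of acid dissociation constants from the pKa difference
ratio = 10 ** (10.0 - 8.7)   # Ka(o-fluorophenol) / Ka(phenol)
print(round(ratio))          # about 20
```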
In the series of nitrophenols (Acidity order of nitrophenols), the para (p$K_a$ = 7.16) is slightly more acidic than the ortho (p$K_a$ = 7.2). Since the ortho-substituted compound would be expected to be the more acidic one (if you assume that the resonance effects at the ortho and para positions are similar) but isn't, we have to look for some other explanation, and hydrogen bonding could stabilize the neutral molecule.
Well, even if the actual effect is stronger than 0.04 p$K_a$ units, the hydrogen bonding effect seems to be rather small, and is overcome in the fluorophenols by the $-I$ effect of fluorine. Just scratch out the words "H-bond decreases acidity". (Lightly.) | {
"domain": "chemistry.stackexchange",
"id": 14936,
"tags": "organic-chemistry, resonance, hydrogen-bond"
} |
Single electron on a two atoms chain : factorisation of hilbert space by external and internal degrees of motion | Question: I have a question on the first pages of the book "A Short Course
on Topological
Insulators" by János K. Asbóth, László Oroszlány and András Pályi
But actually we can see it here : http://theorie.physik.uni-konstanz.de/burkard/sites/default/files/ts15/TalkSSH.pdf
Presentation of the problem
We work with a 1 dimensional chain where there are two types of atoms $A$ and $B$. The unit cell is labelled by $m$. We study the motion of one electron.
We have different interaction terms : $v$ and $w$.
They work with the following SSH hamiltonian model :
$$ H=v \sum_{m=1}^N (|m,B\rangle\langle m,A| +hc)+w\sum_{m=1}^{N-1} (|m+1,A\rangle\langle m,B|+hc)$$
where $hc$ stands for the Hermitian conjugate.
Thus, if we write the hamiltonian, we have something that looks like :
$$ H = \begin{bmatrix}0&v&0&0&0&0&0&0\\v&0&w&0&0&0&0&0 \\ 0&w&0&v&0&0&0&0 \\ 0&0&v&0&w&0&0&0\\ 0&0&0&w&0&v&0&0\\ 0&0&0&0&v&0&w&0 \\ 0&0&0&0&0&w&0&v \\ 0&0&0&0&0&0&v&0 \end{bmatrix}$$
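For concreteness, this matrix (written here for $N=4$ unit cells, i.e. an $8\times 8$ block) can be generated directly from the two hopping amplitudes. A small pure-Python sketch, with $v=1$ and $w=0.5$ as illustrative values:

```python
def ssh_hamiltonian(n_cells, v, w):
    """Real-space SSH Hamiltonian in the ordered basis
    |1,A>, |1,B>, |2,A>, |2,B>, ...: the nearest-neighbour hopping
    alternates between intracell v and intercell w."""
    dim = 2 * n_cells
    h = [[0.0] * dim for _ in range(dim)]
    for i in range(dim - 1):
        t = v if i % 2 == 0 else w     # A-B inside a cell: v; B-A across cells: w
        h[i][i + 1] = h[i + 1][i] = t  # Hermitian (here real symmetric)
    return h

H = ssh_hamiltonian(4, 1.0, 0.5)  # reproduces the 8x8 matrix above
```

Diagonalizing it (e.g. with numpy.linalg.eigh) then gives the finite-chain spectrum.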
And the Hilbert space can be seen as a tensor product:
$\mathcal{H}_{tot}=\mathcal{H}_{ext} \otimes \mathcal{H}_{int}$
Where the external degree of freedom is represented by the letter $m$, and the internal one by the fact we are on site $A$ or $B$.
Thus : $|m,\alpha\rangle = |m\rangle \otimes |\alpha \rangle$ where $\alpha=A,B$.
My question
But there is something I misunderstand here.
I agree that we can see the total Hilbert space of the problem as a tensor product of the Hilbert spaces of the external and internal degrees of freedom.
But at the same time, if we consider the state $|m,A\rangle$, we would see a Gaussian centered on the atom $A$ in cell $m$. And then $|A\rangle$ would be a Gaussian centered at $0$ and $|m\rangle$ would "shift it" to the position $m$, right? But if we write everything in the $x$ basis we have:
$$ \psi(x)=\psi_m(x) \psi_A(x)$$ and it should be $0$ outside of the support of $\psi_A$. And as $\psi_A$ is a Gaussian centered at $0$, we would have a wavefunction that is zero everywhere if we go far enough.
Where is the mistake in my picture of the problem?
Isn't $|m,A\rangle$ a Gaussian centered on the atom $A$ in cell $m$? If so, what do the kets $|m\rangle$ and $|A\rangle$ represent physically (what do those wavefunctions look like)?
Answer: It is wrong to write $\psi(x)=\psi_m(x)\psi_A(x)$. The correct wave function $\psi_{mA}(x)$ that represents the state $|m,A\rangle$ should be
$$\psi_{mA}(x)=e^{-m\partial_x}\psi_{A}(x)=\left(1-m\partial_x+\frac{1}{2!}m^2\partial_x^2-\frac{1}{3!}m^3\partial_x^3+\cdots\right)\psi_{A}(x) =\psi_{A}(x-m),$$
where we have used the formula of the Taylor expansion. Physically, this can be understood by noticing that $p=-\text{i}\partial_x$ is the momentum operator that generates translation, and the meaning of the state $|m,A\rangle$ is simply the Gaussian packet $\psi_A(x)$ translated by the displacement $m$.
The tensor product in $|m,A\rangle=|m\rangle\otimes|A\rangle$ does not mean to multiply two wave functions together directly. It just means that if you consider the following linear superposition, the result can be expanded in the tensor product basis as
$$(c_m|m\rangle+c_n|n\rangle)\otimes(c_A|A\rangle+c_B|B\rangle)=c_mc_A|m,A\rangle+c_mc_B|m,B\rangle+c_nc_A|n,A\rangle+c_nc_B|n,B\rangle.$$
This is what defines a tensor product structure in the Hilbert space. Any algebraic structure satisfying such a property of bilinear maps can be called a tensor product. You can see that in terms of the wave function, the following algebraic structure is indeed a tensor product
$$(c_m e^{-m\partial_x}+c_n e^{-n\partial_x})\otimes(c_A \psi_A(x)+c_B \psi_B(x))=c_mc_A \psi_A(x-m)+c_mc_B \psi_B(x-m)+c_nc_A \psi_A(x-n)+c_nc_B \psi_B(x-n).$$
In this sense, the operators $e^{-m\partial_x}$ form a basis of the external Hilbert space, which can be denoted as $|m\rangle$ abstractly. There is no wave function associated with $|m\rangle$, because $|m\rangle=e^{-m\partial_x}$ is actually represented as a linear operator in this case.
Well, if one insists to understand the $|m\rangle$ state as a wave function, one possible interpretation is to consider it to be a Dirac delta function located at $x=m$ (the center of the $m$th unit cell).
$$\psi_m(x)=\delta(x-m).$$
But still, the tensor product $|m\rangle\otimes|A\rangle$ does not correspond to multiplying the wave functions $\psi_m(x)$ and $\psi_A(x)$ together in a point-wise manner. It should actually be understood as a convolution of the two wave functions:
$$\psi_{mA}(x)=(\psi_{m}*\psi_A)(x)=\int\text{d}y\psi_m(y)\psi_A(x-y)=\psi_A(x-m).$$
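This convolution picture is easy to verify numerically on a discrete grid: convolving the Gaussian $\psi_A$ with a delta sitting at $x=m$ simply reproduces the Gaussian shifted by $m$. A sketch (grid, width and $m$ chosen arbitrarily):

```python
import math

def convolve(f, g):
    """Discrete (full) convolution of two sampled functions."""
    n = len(f) + len(g) - 1
    out = [0.0] * n
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            out[i + j] += fi * gj
    return out

grid = range(-20, 21)                             # integer grid, spacing 1
psi_A = [math.exp(-x**2 / 4) for x in grid]       # Gaussian centered at 0
m = 5
delta_m = [1.0 if x == m else 0.0 for x in grid]  # "delta" at x = m

shifted = convolve(psi_A, delta_m)
# in the full convolution the peak moves from index 20 to 20 + 25 = 45,
# i.e. the Gaussian is translated by exactly m grid steps
```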
The convolution also satisfies the algebraic properties of the tensor product and is hence a legitimate representation of the tensor product. This point of view is secretly equivalent to the above operator point of view, because, in functional analysis, the Dirac delta function (or the shifted Dirac delta function) is actually defined to be the kernel of the identity operator (or the translation operator). | {
"domain": "physics.stackexchange",
"id": 46533,
"tags": "quantum-mechanics, condensed-matter, wavefunction, hilbert-space, hamiltonian"
} |
Check if a binary tree is a perfect tree | Question: I'm trying to write an algorithm to check if a given binary tree is a perfect binary tree, and of course with the lowest complexity.
I was thinking to calculate the height of the tree $h$ and the number of the nodes $n$, then if $2^{h+1} -1 = n$, it's a perfect tree.
But then again, it's not very efficient.
Then I thought to find the minimum and the maximum tree height, and if they are equal, it's a perfect tree.
But this, too, isn't much of an improvement.
Any idea how to make it better?
Answer: Assuming you are given the root and nothing else, this problem has a lower bound of $\Omega(n)$.
An informal proof:
Consider an adversary who creates a perfect tree of height $h$ and size $n$. Now this adversary also has one extra node whose parent has yet to be determined.
The adversary, being sneaky, doesn't determine the parent of this node until you've checked every other node in the tree. To be more precise, the parent of this extra node will be the very last leaf node you check.
Clearly this will take $\Omega(n)$ time for any algorithm to determine this adversary has an imperfect tree.
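For reference, the height-and-count test proposed in the question already meets this $\Theta(n)$ bound; here is a minimal Python sketch (the bare Node class is just for illustration):

```python
class Node:
    def __init__(self, left=None, right=None):
        self.left, self.right = left, right

def height(root):
    return -1 if root is None else 1 + max(height(root.left), height(root.right))

def count(root):
    return 0 if root is None else 1 + count(root.left) + count(root.right)

def is_perfect(root):
    # perfect iff n = 2^(h+1) - 1
    return count(root) == 2 ** (height(root) + 1) - 1

leaf = lambda: Node()
perfect = Node(Node(leaf(), leaf()), Node(leaf(), leaf()))  # 7 nodes, height 2
lopsided = Node(Node(leaf(), leaf()), leaf())               # 5 nodes, height 2
```

Both traversals are O(n), matching the lower bound above.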
That being said, you could find algorithms with a better expected time. For example, you could try non-deterministic algorithms such as random walks of the tree, then compare the lengths of these random walks to give a probability of perfectness. Although in any case, you won't do better than $O(n)$. | {
"domain": "cs.stackexchange",
"id": 9312,
"tags": "algorithms, binary-trees"
} |
Does the period of the phase make the advantage of the Heisenberg Limit disappear? | Question: In Quantum Metrology, the aim is to estimate some unknown parameter (I will talk about single-parameter estimation in this post, though multiparameter estimation is also possible) as precisely as possible. Without quantum resources, we can only reach the Standard Quantum Limit (SQL), normally written as $var(\hat{\theta})=1/n$, where $\hat{\theta}$ is the estimator (a function of random variables used to estimate the unknown parameter $\theta$) and $n$ is the number of experiments. With quantum resources, we may reach the so-called Heisenberg Limit (HL), normally written as $var(\hat{\theta})=1/n^2$, a quadratic enhancement in precision. Here is a frequently used example: the GHZ state $1/\sqrt{2}(|0\rangle^{\otimes n}+|1\rangle^{\otimes n})$, after the evolution described by the unitary operator $U = e^{-i\theta\sigma_z/2}\otimes e^{-i\theta\sigma_z/2}\otimes...$, will become
$$
1/\sqrt{2}(|0\rangle^{\otimes n}+e^{in\theta}|1\rangle^{\otimes n})\tag{1}
$$
ignoring the global phase. And we can estimate the value of the parameter by measuring this parameterized quantum state.
My question is: the HL shows its advantage only through the $1/n^2$ scaling, while from eq. (1) we can easily see that the exponential has a $2\pi$ period. So no matter how small $\theta$ is, we cannot keep enhancing our precision at the HL scale once $n$ passes some specific value. How should I understand this paradox? Did I miss something?
Answer: You are correct, but the SQL is a local limit, applying when you already have a very good idea what the value of $\theta$ is, so there is no contradiction.
Let's work through it. You measure some relative phase $\Theta$, and you infer that
$$\Theta=n\theta-2\pi k,\quad k\in \mathbb{N}.$$ You work out that
$$\theta=\frac{\Theta +2\pi k}{n},$$ where $\Theta$ and $n$ are known ($n$ is known because you set up your experiment) and $k$ is unknown. You now must determine what values of $k$ are possible, then you are finished.
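Numerically, recovering $\theta$ amounts to enumerating the integers $k$ allowed by some prior interval $[\theta_{\text{low}}, \theta_{\text{high}}]$. A sketch with made-up numbers (true $\theta = 0.30$, $n = 10$):

```python
import math

def candidate_thetas(Theta, n, theta_low, theta_high):
    """All theta in [theta_low, theta_high] with n*theta = Theta (mod 2*pi)."""
    k_min = math.ceil((n * theta_low - Theta) / (2 * math.pi))
    k_max = math.floor((n * theta_high - Theta) / (2 * math.pi))
    return [(Theta + 2 * math.pi * k) / n for k in range(k_min, k_max + 1)]

true_theta, n = 0.30, 10
Theta = (n * true_theta) % (2 * math.pi)  # the measured relative phase

tight = candidate_thetas(Theta, n, 0.25, 0.55)  # good prior: one candidate
loose = candidate_thetas(Theta, n, 0.0, 1.0)    # poor prior: ambiguous
```

With the tight prior there is exactly one candidate; widen the prior and the $2\pi$ ambiguity reappears.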
Because the SQL is local, we know a priori that $\theta_{\text{low}}\leq \theta\leq \theta_{\text{high}}$ for some values of $\theta_{\text{low}}$ and $ \theta_{\text{high}}$, so we deduce that
$$\frac{n\theta_{\text{low}}-\Theta}{2\pi}\leq k\leq \frac{n\theta_{\text{high}}-\Theta}{2\pi}.$$ So as long as our initial knowledge has sufficiently small $\theta_{\text{high}}-\theta_{\text{low}}$, there will only be one possible value of $k$, and we will fully determine the value of $\theta$. If our initial knowledge is not precise enough, you are correct that using too large a value of $n$ will yield multiple possible results. | {
"domain": "quantumcomputing.stackexchange",
"id": 3260,
"tags": "quantum-metrology, technologies"
} |
Determine if 2 javascript objects have strictly the same properties | Question: I need to compare 2 objects to determine if they have strictly the same properties.
This is what I came up with so far:
import isEmpty from 'lodash/isEmpty';
const shallowPropertiesMatch = (firstObject, secondObject) => {
if (isEmpty(firstObject) || isEmpty(secondObject)) return false;
const firstObjectKeys = Object.keys(firstObject);
const secondObjectKeys = Object.keys(secondObject);
if (firstObjectKeys.length !== secondObjectKeys.length) return false;
if (!firstObjectKeys.every(value => secondObjectKeys.includes(value))) return false;
return true;
};
I was wondering if there is a more efficient, elegant or simpler way of doing this?
Answer: How can you know if a function is efficient if you use 3rd party code? Even if you check the source, it is subject to change without notice, so you can never know if your code is running the best it can. That is the price you pay for using 3rd party code.
However I don't see the need to use lodash/isEmpty as you determine that when you get the object keys. If there are no keys the object is empty.
Not delimiting a statement block eg if (isEmpty(firstObject) || isEmpty(secondObject)) return false; is a bad habit. Always delimit all blocks with {}.
Your naming is way too verbose. Use the function's context to imply meaning. The function name implies (well, sort of) that you are handling objects.
One solutions is as follows.
function compareObjKeys(A, B) {
const keys = Object.keys(A);
if (keys.length > 0) {
const keysB = Object.keys(B);
if (keysB.length === keys.length) {
const keysA = new Set(keys);
return keysB.every(k => keysA.has(k));
}
}
return false;
}
But I would not go that far, and would favor a smaller source size instead. The performance savings of the above are minor, and only apply when one of the objects is empty, which I would imagine is rare.
function compareObjKeys(A, B) {
const kA = new Set(Object.keys(A)), kB = Object.keys(B);
return kB.length > 0 && kA.size === kB.length && kB.every(k => kA.has(k));
} | {
"domain": "codereview.stackexchange",
"id": 33851,
"tags": "javascript, functional-programming"
} |
In cell division, are daughter cells identical? | Question: I understand that after a cell replicates, there will be two daughter cells instead of one. But wouldn't one of them be the old cell that created the second one?
Having gone through G0, interphase, and mitosis, wouldn't the old cell after dividing be somewhat different from the new cell (the old cell's telomeres having shortened, it having already undergone the cell cycle, and so on)?
Answer: Here is a picture to expand on my comment better. From wikipedia:
In the above diagram, the red chromosome represents one homolog while the blue chromosome represents the other homolog in the pair. After replication in interphase, you have two homologs, each consisting of duplicated sister chromatids. You can see how the one enlarged cell in mitosis is cleaved into two (diploid) daughter cells.
"domain": "biology.stackexchange",
"id": 5025,
"tags": "cell-biology, mitosis, cell-cycle"
} |
How do I see if the material is a Topological Insulator from the band structure? | Question: In this paper1 the following bandstructure of Bi$_2$Se$_3$ is shown:
In "a" they show the bands without Spin orbit coupling (SOC) and in "b" they include SOC.
It is said that:
"Figure 2a and b show the band structure of Bi$_2$Se$_3$
without and with SOC, respectively. By comparing the two figure
parts, one can see clearly that the only qualitative change induced
by turning on SOC is an anti-crossing feature around the $\Gamma$ point,
which thus indicates an inversion between the conduction band
and valence band due to SOC effects, suggesting that Bi$_2$Se$_3$ is a
topological insulator"
What is meant by the "anti crossing around the $\Gamma$ point after SOC is turned on?" Also before SOC is turned on there is no crossing between valence band and conduction band!?
And what is meant by the "inversion between conduction and valence band"? Am I supposed to see that conduction and valence bands are mirrored at the Fermi level (dashed line) when going from the left figure to the right? And why does this indicate that we have a topological insulator?
1 H. Zhang, C.-X. Liu, X.-L. Qi, X. Dai, Z. Fang & S.-C. Zhang, "Topological insulators in $\require{mhchem}\ce{Bi2Se3}$, $\ce{Bi2Te3}$ and $\ce{Sb2Te3}$ with a single Dirac cone on the surface", Nat. Phys. 5, 438–442 (2009).
Answer: [The answer is an annotated band-structure image; its caption ends: "... $\Gamma$ comes from the energy band you called conduction band in Fig. 2a, i.e. the band structure is inverted at this momentum."] | {
"domain": "physics.stackexchange",
"id": 72313,
"tags": "topological-insulators"
} |
Does receiver always know about the noise level in real world application? | Question: In many research articles, the effect of measurement noise on estimation performance is often reported. But it is not clear to me what is the proper way of using the signal to noise ratio (SNR). For example, the variance of measurement noise which is assumed to be Additive White Gaussian Noise( AWGN) is varied to obtain different SNRs. Then for each SNR the mean square error (MSE) between the parameter estimates and the known parameters are calculated. When SNR is high, it means that the amount of noise is less.
The following code in Matlab is based on my understanding that SNR is defined after the input to a system is passed and we get the output from the system, then we add noise for a particular SNR. Also, please correct me if I have put any incorrect information. Thank you.
In the code, I first generate some data $x$ consisting of $N=100$ data points. Then I have randomly generated 3 coefficients representing the impulse response of an FIR system. The data $x$ is passed through the FIR system to obtain $y$. I then add AWGN of SNR = 40dB.
Question1: My question is which step of the estimation stage is the SNR defined? Can somebody please confirm if this is the correct approach or not?
Question2: If SNR = 40dB, how does one know the variance of the noise at the receiver end? In practice (in industrial applications), does the receiver end always know the level of SNR and the variance of the noise?
N=100;
x = randn(1,N); %generating random data
h = rand(1,3); %some unknown parameters representing impulse response
y=filter(h,1,x); %FIR filter
z = awgn(y,40,'measured'); % adding AWGN measurement noise at SNR = 40dB
%Run some estimation method to estimate h_hat
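Regarding Question 2: with awgn(y, 40, 'measured'), the noise variance follows from the measured signal power and the dB definition of SNR, so nothing is known "for free" at the receiver. A Python restatement of that relation (illustrative only; function and variable names are made up):

```python
import math
import random

def noise_variance_for_snr(signal, snr_db):
    """Noise variance that yields the requested SNR for this particular
    signal, analogous to what awgn(..., 'measured') computes."""
    p_signal = sum(s * s for s in signal) / len(signal)  # measured power
    return p_signal / (10 ** (snr_db / 10))              # SNR_dB = 10*log10(Ps/Pn)

random.seed(0)
y = [random.gauss(0, 1) for _ in range(10000)]           # unit-power "output"
var_n = noise_variance_for_snr(y, 40)                    # roughly 1e-4 here
z = [s + random.gauss(0, math.sqrt(var_n)) for s in y]   # noisy observation
```

In a deployed receiver the signal power is not given a priori; it has to be estimated online, which is the role AGC plays in the discussion below.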
Answer: It depends somewhat on the application. For an active sonar system the receiver has to deal with the near backscatter (reverberation) of the active pulse (unless it is continuous wave), so there typically is some blanking and fast/slow automatic gain control using feedback. In a passive sonar receiver, the same philosophy using feedback-based AGC is typical. Similar concepts apply to some radar systems. Downstream, there are a number of ideas used that include leaky integration to track slow variations in background, and if your detection architecture is a large number of cells like FFT bins, there are schemes where means or medians of neighbor cells adjust the gain of a particular cell.
In these kinds of systems, the Neyman-Pearson criterion is used to set thresholds, so you really don't need to know SNR; you need to have your AGC and other schemes give you a known noise variance. While an optimal likelihood ratio would include SNR as a known term, one can go with a locally optimal assumption, like a weak SNR, or a generalized likelihood ratio. Neyman-Pearson also circumvents having to know signal prior probabilities of occurrence.
Communication receivers also use fast/slow feedback AGC. Systems have a finite dynamic range so you have to put your signal at a reasonable operating point. This isn’t the same as knowing your SNR but an AGC lets you do things like pilot detection. If necessary, SNR becomes an online estimation and tracking problem. A clean signal is often equivalent, in a practical sense, to knowing your noise level.
There is a phenomenon known as the "threshold effect": there is a relatively narrow range of SNR over which performance goes from very poor to very good. Knowing your exact SNR below this threshold range has no utility; performance is bad regardless. Knowing your exact SNR above this range is probably not consequential either.
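For the MATLAB snippet in the question, awgn(y, 40, 'measured') first measures the signal power and then adds noise whose variance makes the power ratio equal the requested SNR. That relation can be sketched as follows (Python here, purely for illustration):

```python
import numpy as np

def noise_variance_from_snr(signal, snr_db):
    """Noise variance implied by a measured SNR, as awgn(..., 'measured') uses it."""
    signal_power = np.mean(np.abs(signal) ** 2)   # measured signal power
    return signal_power / (10 ** (snr_db / 10))   # SNR (linear) = P_signal / P_noise

rng = np.random.default_rng(0)
y = rng.standard_normal(100_000)                  # unit-power stand-in for the filter output
var_n = noise_variance_from_snr(y, 40.0)
print(var_n)                                      # ≈ 1e-4 for a unit-power signal
```

So at 40 dB the noise variance is four orders of magnitude below the signal power; the receiver-side question is whether anything downstream actually needs that number, which is what the answer above addresses.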
The answer to your question is yes, a practical receiver does need to know the noise level, but typically in the context of placing the signal at an operating point compatible with its dynamic range. The locus of this function is AGC. There may be additional downstream SNR processing as well. It isn’t necessarily located in a single place. CFAR is one kind of downstream processing for radar/sonar. | {
"domain": "dsp.stackexchange",
"id": 5996,
"tags": "matlab, noise, snr, soft-question"
} |
RVIZ *** glibc detected *** | Question:
Hi, I am using Hydro with Ubuntu 12.04 and unable to run rviz. I am getting following error when doing $ rosrun rviz rviz:
[ INFO] [1405607287.945120631]: rviz version 1.10.16
[ INFO] [1405607287.945243544]: compiled against OGRE version 1.7.4 (Cthugha)
[ INFO] [1405607288.233226022]: Stereo is NOT SUPPORTED
[ INFO] [1405607288.233341114]: OpenGl version: 3 (GLSL 1.3).
*** glibc detected *** /home/asif/ROS_CatkinWS_Hydro/devel/lib/rviz/rviz: corrupted double-linked list: 0x00000000020422a0 ***
Inconsistency detected by ld.so: dl-open.c: 221: dl_open_worker: Assertion `_dl_debug_initialize (0, args->nsid)->r_state == RT_CONSISTENT' failed!
I removed rviz using $ sudo apt-get remove ros-hydro-rviz and tried to re-install using $ sudo apt-get install ros-hydro-rviz and it gave me an error:
E: Unable to locate package ros-hydro-rviz
Then I downloaded the source from github and managed to install it successfully using catkin_make but still getting same *** glibc detected *** error as above.
Any idea how to solve this problem? Thanks in advance.
PS: I found a couple of similar questions asked earlier, for example this and this, but did not find a solution.
Originally posted by AsifA on ROS Answers with karma: 231 on 2014-07-17
Post score: 1
Answer:
See the answer in this question where the error msg looks very similar (as you mentioned).
Originally posted by 130s with karma: 10937 on 2014-09-22
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 18653,
"tags": "rviz, ros-hydro"
} |
Can the normal equation be used to optimise the RNN's weights? | Question: I have made an RNN from scratch in Tensorflow.js. In order to update my weights (without needing to calculate the derivatives), I thought of using the normal equation to find the optimal values for my RNN's weights. Would you recommend this approach and if not why?
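For context, the "normal equation" referenced here is the closed-form least-squares solution for a single linear map from input to output; a minimal sketch (synthetic data, purely for illustration):

```python
import numpy as np

# Closed-form least-squares fit of one linear layer: w = (X^T X)^(-1) X^T y.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 3))
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w

w = np.linalg.solve(X.T @ X, X.T @ y)   # normal equation (solve, not an explicit inverse)
print(w)                                # recovers [2.0, -1.0, 0.5] up to numerical error
```

Using np.linalg.solve instead of forming the inverse explicitly is the usual numerically safer choice.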
Answer: Unfortunately, this is not possible. The normal equation can only directly optimise a single layer that connects input and output. There is no equivalent for multiple layers such as those in any neural network architecture. | {
"domain": "ai.stackexchange",
"id": 2249,
"tags": "neural-networks, recurrent-neural-networks, backpropagation"
} |
How can I check whether a nonlinear system is zero-state observable? | Question: Given a nonlinear system, such as:
$$\begin{align}
x_1' &= x_2 \\
x_2' &= −x_1^3 + u \\
y &= x_2
\end{align}$$
How can I check the zero-state observability of the system?
Answer: I've found the answer.
To check if a system is zero state observable, put $u=0$ and check whether $x=0$ when $y=0$. If yes, it is zero-state observable. Otherwise not!
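This check can also be illustrated numerically (a sketch; forward-Euler integration with made-up step sizes): starting the simulation from a nonzero state produces a nonzero output, while the zero state keeps y at zero.

```python
# Forward-Euler simulation of x1' = x2, x2' = -x1**3 with u = 0.
def simulate(x1, x2, steps=20000, dt=1e-3):
    max_abs_y = 0.0
    for _ in range(steps):
        x1, x2 = x1 + dt * x2, x2 + dt * (-x1**3)   # y = x2
        max_abs_y = max(max_abs_y, abs(x2))
    return max_abs_y

print(simulate(1.0, 0.0))   # output becomes nonzero from a nonzero state
print(simulate(0.0, 0.0))   # stays exactly zero from the zero state
```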
For the given system, by putting $u=0$ and $y=0$, we see that $x_2=0$, therefore $x'_2=0$ and thus $-x_1^3=0$ or $x_1=0 \implies x=0$. Thus it is zero-state observable. | {
"domain": "engineering.stackexchange",
"id": 793,
"tags": "control-engineering, control-theory"
} |
What are the accelerations of blocks? | Question: I've talked with 2 teachers about this situation:
one teacher said he was completely sure that B has twice the acceleration of A; the other said he was completely sure they have the same acceleration. Can you have a better look at it? What do you think? Consider that there is no friction.
Answer: Let the initial length of the bottom segment of rope be $l_1$, the initial length of the middle segment be $l_2$, and the initial length of the top segment be $l_3$.
Since the total length of the rope is constant, we can write
$$l_1 + l_2 + l_3 = K$$
Now, displace the block $A$ by $\Delta x_A$ down the slope and then
$$(l_1 + \Delta x_A) + (l_2 + \Delta x_A) + (l_3 + \Delta l_3) = K$$
Thus
$$\Delta l_3 = -2 \Delta x_A$$
So, the length of the segment $l_3$ decreases twice as much as the displacement of block $A$.
However, this isn't the entire story. The top-most pulley moves down the slope with block $A$ and so the displacement of block $B$ is
$$\Delta x_B = \Delta l_3 + \Delta x_A = -\Delta x_A$$
And so we conclude that the blocks have accelerations of equal magnitude and opposite direction. | {
"domain": "physics.stackexchange",
"id": 30245,
"tags": "homework-and-exercises, newtonian-mechanics, classical-mechanics, forces, gravity"
} |
Cannot teleop turtlebot3 in gazebo with melodic | Question:
I am trying to control a turtlebot3 in gazebo using ROS. I can do this without a problem in Ubuntu 16 ros kinetic (gazebo 7). But I cannot do the same thing with the same commands in Ubuntu 18 ros melodic (gazebo 9).
Here's the log shown from roslaunch turtlebot3_gazebo turtlebot3_empty_world.launch in ros melodic
xacro: in-order processing became default in ROS Melodic. You can drop the option.
started roslaunch server http://xxxxxx:45893/
SUMMARY
========
PARAMETERS
* /gazebo/enable_ros_network: True
* /robot_description: <?xml version="1....
* /rosdistro: melodic
* /rosversion: 1.14.5
* /use_sim_time: True
NODES
/
gazebo (gazebo_ros/gzserver)
gazebo_gui (gazebo_ros/gzclient)
spawn_urdf (gazebo_ros/spawn_model)
auto-starting new master
process[master]: started with pid [3042]
ROS_MASTER_URI=http://localhost:11311
setting /run_id to 331250c0-acb1-11ea-af42-0c5b8f279a64
process[rosout-1]: started with pid [3053]
started core service [/rosout]
process[gazebo-2]: started with pid [3056]
process[gazebo_gui-3]: started with pid [3064]
process[spawn_urdf-4]: started with pid [3070]
[ INFO] [1591968766.435037191]: Finished loading Gazebo ROS API Plugin.
[ INFO] [1591968766.435766509]: waitForService: Service [/gazebo/set_physics_properties] has not been advertised, waiting...
[ INFO] [1591968766.484922295]: Finished loading Gazebo ROS API Plugin.
[ INFO] [1591968766.485743143]: waitForService: Service [/gazebo_gui/set_physics_properties] has not been advertised, waiting...
[ INFO] [1591968766.762218392, 0.003000000]: waitForService: Service [/gazebo/set_physics_properties] is now available.
[ INFO] [1591968767.093444620, 0.168000000]: Physics dynamic reconfigure ready.
[spawn_urdf-4] process has finished cleanly
log file: /home/XXXXX/.ros/log/331250c0-acb1-11ea-af42-0c5b8f279a64/spawn_urdf-4*.log
ROS topic list after launching this is as follows
/clock
/gazebo/link_states
/gazebo/model_states
/gazebo/parameter_descriptions
/gazebo/parameter_updates
/gazebo/set_link_state
/gazebo/set_model_state
/rosout
/rosout_agg
Here's the log from the same roslaunch in ros kinetic in which this works fine
started roslaunch server http://xxxxxx-l-l:41851/
SUMMARY
========
PARAMETERS
* /robot_description: <?xml version="1....
* /rosdistro: kinetic
* /rosversion: 1.12.14
* /use_sim_time: True
NODES
/
gazebo (gazebo_ros/gzserver)
gazebo_gui (gazebo_ros/gzclient)
spawn_urdf (gazebo_ros/spawn_model)
auto-starting new master
process[master]: started with pid [8166]
ROS_MASTER_URI=http://localhost:11311
setting /run_id to a1193f98-acb1-11ea-8274-8c04ba099a14
process[rosout-1]: started with pid [8179]
started core service [/rosout]
process[gazebo-2]: started with pid [8187]
process[gazebo_gui-3]: started with pid [8207]
process[spawn_urdf-4]: started with pid [8213]
[ INFO] [1591968950.965463454]: Finished loading Gazebo ROS API Plugin.
[ INFO] [1591968950.965900805]: waitForService: Service [/gazebo/set_physics_properties] has not been advertised, waiting...
[ INFO] [1591968950.978686441]: Finished loading Gazebo ROS API Plugin.
[ INFO] [1591968950.979178680]: waitForService: Service [/gazebo/set_physics_properties] has not been advertised, waiting...
[ INFO] [1591968951.385803403, 0.023000000]: waitForService: Service [/gazebo/set_physics_properties] is now available.
[ INFO] [1591968951.738905171, 0.163000000]: Laser Plugin: Using the 'robotNamespace' param: '/'
[ INFO] [1591968951.738957452, 0.163000000]: Starting Laser Plugin (ns = /)
[ INFO] [1591968951.740414014, 0.163000000]: Physics dynamic reconfigure ready.
[ INFO] [1591968951.740771321, 0.163000000]: Laser Plugin (ns = /) <tf_prefix_>, set to ""
[ INFO] [1591968951.833262130, 0.163000000]: Starting plugin DiffDrive(ns = //)
[ INFO] [1591968951.833336896, 0.163000000]: DiffDrive(ns = //): <rosDebugLevel> = na
[ INFO] [1591968951.834254336, 0.163000000]: DiffDrive(ns = //): <tf_prefix> =
[ INFO] [1591968951.836164861, 0.163000000]: DiffDrive(ns = //): Advertise joint_states
[ INFO] [1591968951.836952897, 0.163000000]: DiffDrive(ns = //): Try to subscribe to cmd_vel
[ INFO] [1591968951.840035635, 0.163000000]: DiffDrive(ns = //): Subscribe to cmd_vel
[ INFO] [1591968951.840746578, 0.163000000]: DiffDrive(ns = //): Advertise odom on odom
[ INFO] [1591968951.865549383, 0.185000000]: waitForService: Service [/gazebo/set_physics_properties] is now available.
[ INFO] [1591968951.925287635, 0.245000000]: Physics dynamic reconfigure ready.
[spawn_urdf-4] process has finished cleanly
log file: /home/XXXXX/.ros/log/a1193f98-acb1-11ea-8274-8c04ba099a14/spawn_urdf-4*.log
And this is the ROS topic list
/clock
/gazebo/link_states
/gazebo/model_states
/gazebo/parameter_descriptions
/gazebo/parameter_updates
/gazebo/set_link_state
/gazebo/set_model_state
/gazebo_gui/parameter_descriptions
/gazebo_gui/parameter_updates
/imu
/joint_states
/odom
/rosout
/rosout_agg
/scan
/tf
It seems like the ROS-related stuff is not running properly in Ubuntu 18 ros melodic. Am I doing anything wrong, or what's the issue here?
Originally posted by teshansj on ROS Answers with karma: 168 on 2020-06-12
Post score: 1
Answer:
Apparently I didn't have ros-melodic-gazebo-ros-pkgs installed. (And didn't have any error indicating that it was missing either)
Originally posted by teshansj with karma: 168 on 2020-06-13
This answer was ACCEPTED on the original site
Post score: 3
Original comments
Comment by ngovanmao on 2020-07-20:
Thank you. I also faced this problem as a newbie. | {
"domain": "robotics.stackexchange",
"id": 35114,
"tags": "ros, ros-melodic, gazebo-ros"
} |
Name of the coordinate system | Question:
Hey,
I tried to get the name of the coordinate system of the world from the docu, but so far I found nothing.
What is the (global) coordinate system called in the documentation in gazebo?
Also if you can send me the link to the docu page.
Thank you for your help
Originally posted by morrac on Gazebo Answers with karma: 57 on 2021-02-01
Post score: 0
Answer:
If you are not targeting a model link, you may leave any related field empty. This will reference the world frame.
If you are using the ROS API, then it also lets you have "world" or "map" specified as well.
https://github.com/ros-simulation/gazebo_ros_pkgs/blob/kinetic-devel/gazebo_ros/src/gazebo_ros_api_plugin.cpp#L725
Originally posted by nlamprian with karma: 833 on 2021-02-01
This answer was ACCEPTED on the original site
Post score: 2 | {
"domain": "robotics.stackexchange",
"id": 4584,
"tags": "gazebo-11"
} |
How does carrier frequency offset affect the constellation diagram in OFDM? | Question: In the case of OFDM modulation, if the carrier frequency is slightly off, then how does that affect the constellation diagram?
Is there some general rule for how it will change (such as rotating)?
Or is it more or less arbitrary?
Answer: So let's act as if we had only one active subcarrier "at a time", just to ignore interference between the different subcarriers.
A carrier frequency offset multiplies the time-domain signal by a complex exponential, which corresponds to a shift in the frequency domain, and probably a fractional one (a fraction of the subcarrier spacing) at that. Since that offset is the same for every carrier, the resulting sample stream (i.e. the output of your DFT) is simply shifted along the frequency axis.
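As for the "will it rotate" part of the question: a residual carrier offset that is small compared to the subcarrier spacing shows up after the DFT mainly as a phase rotation that grows from one OFDM symbol to the next (plus inter-carrier interference), so the constellation spins over time. A sketch with plain QPSK symbols standing in for one subcarrier (the offset value is made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n_symbols = 8
qpsk = (1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j)
tx = np.array([qpsk[i] for i in rng.integers(0, 4, n_symbols)]) / np.sqrt(2)

phase_per_symbol = 0.05 * 2 * np.pi           # residual CFO: 5% of symbol rate (made up)
k = np.arange(n_symbols)
rx = tx * np.exp(1j * phase_per_symbol * k)   # each successive symbol rotated a bit more

rot = np.angle(rx * np.conj(tx))              # rotation of each received point vs. transmitted
print(np.round(rot, 3))                       # grows linearly: 0, 0.314, 0.628, ...
```

So within one symbol all constellation points share a common rotation, and that rotation advances symbol by symbol until a tracking loop corrects it.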
Now come two things:
A frequency offset usually means that your sampling rates don't match perfectly, either. In any case, this means you're losing orthogonality of the OFDM subcarriers – which is why OFDM synchronizers need to be more accurate than (non-sinc-shaped) filterbank multicarrier systems, simply because each DFT bin "sees" energy of its adjacent, slightly shifted, neighbor.
That's why OFDM systems typically make use of a proper preamble – if you enjoy reading classics, go for Schmidl and Cox' Robust frequency and timing synchronization for OFDM (Google tells me you can get the PDF from here) – in fact, that paper goes from understanding how to timing-synchronize an OFDM system to fixing the carrier offset (because they are so closely linked together). | {
"domain": "dsp.stackexchange",
"id": 3797,
"tags": "demodulation, ofdm, constellation-diagram"
} |
How to calculate the velocity of a piston-like linkage? | Question: I ve tried to solve this problem in so many ways but still didn't manage to do it...
What would be the correct way to solve it please?
The arm of this mechanism has a length of 0.2 m. The crank rotates at an angular velocity of 2000 rev/min. What would be the velocity of point D for an angle theta of 60 degrees?
I think that what I am missing is the angle formed by the arm and the line, which is 50 mm long. Example like here (a different exercise):
I am trying to look for this angle beta which could help me solve the problem
expected answer: 2.88 m/s
Answer:
First you build a kinematic chain between A, B and D with radius $r$ between AB and the length $\ell$ between BD. The orientation angles from vertical are $\theta$ and $\phi$ respectively.
$(x_B,y_B) = (x_A,y_A) + (-r \sin\theta,r \cos \theta)$
$(x_D,y_D) = (x_B,y_B) + (-\ell \sin\phi, - \ell \cos \phi)$
You find the angle $\phi$ from the constraint that $x_A-x_D = r \sin\theta + \ell \sin \phi = x_{AD}$ $$\sin \phi = \frac{x_{AD}}{\ell}-\frac{r}{\ell} \sin \theta $$
Next you differentiate with respect to time using the chain rule to get the velocities
$(\dot{x}_B,\dot{y}_B) = (-r \dot{\theta} \cos\theta, -r \dot{\theta}\sin\theta)$
$(\dot{x}_D,\dot{y}_D) = (\dot{x}_B,\dot{y}_B) + (-\ell \dot{\phi} \cos\phi, \ell \dot{\phi} \sin \phi)$
You find the rotational velocity of the connecting rod from the constraint $\dot{x}_D=-r\dot{\theta}\cos\theta -\ell \dot{\phi} \cos \phi=0$ $$ \dot{\phi} = - \frac{r \dot{\theta} \cos\theta}{\ell \cos{\phi}}$$
The collar speed is
$$ v = \dot{y}_D = \ell \dot{\phi} \sin \phi - r \dot{\theta} \sin\theta = -\frac{r \sin(\theta+\phi)}{\cos\phi} \dot{\theta} $$ where $\dot{\theta}$ is the rotation of the disk in radians per second. | {
"domain": "engineering.stackexchange",
"id": 584,
"tags": "mechanical-engineering, kinematics"
} |
Why isn't Nilsson's Sequence Score an admissible heuristic function? | Question: I understand what an admissible heuristic is, I just don't know how to tell whether one heuristic is admissible or not. So, in this case, I'd like to know why Nilsson's sequence score heuristic function isn't admissible.
Answer: I will use the 8-puzzle game to show you why Nilsson's sequence score heuristic function is not admissible. In the 8-puzzle game, you have a $3 \times 3$ board of (numbered) squares as follows.
+---+---+---+
| 0 | 1 | 2 |
+---+---+---+
| 7 | 8 | 3 |
+---+---+---+
| 6 | 5 | 4 |
+---+---+---+
The numbers in these squares are just used to refer to the specific squares (to avoid saying the "middle square" or the "upper-left square"). So, when I say square 0, I refer to the upper-left square. In this game, you have 8 "tiles". Let's denote these tiles by $A, B, C, D, E, F, G$ and $H$. So, in this game, there is always one square which is free (or empty), given that there are $9$ squares. The goal of this game is to reach the following configuration of tiles
+---+---+---+
| A | B | C |
+---+---+---+
| H | | D |
+---+---+---+
| G | F | E |
+---+---+---+
Note that, in the case of the 8-puzzle game, a "state" is a configuration of the board. So, the following two board configurations are two distinct states
+---+---+---+
| | A | C |
+---+---+---+
| H | B | D |
+---+---+---+
| G | F | E |
+---+---+---+
and
+---+---+---+
| | C | A |
+---+---+---+
| H | B | D |
+---+---+---+
| G | F | E |
+---+---+---+
The rules of the 8-puzzle are simple. You can move one tile (at a time) from its current position (or square) to another position, provided that the destination square is free. You can only move a tile horizontally or vertically (and one square at a time).
I will not explain in this answer how Nilsson's sequence score heuristic works. Here is an explanation of how it is used in the case of the 8-puzzle game. You should read this explanation and make sure you understand how Nilsson's heuristic works before proceeding! You can also find an explanation of how this heuristic works in the book "Principles of Artificial Intelligence" (at page 85), by Nils J. Nilsson (1982).
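For concreteness, here is one common formulation of the score in code (a sketch; conventions for the sequence-score term vary slightly between presentations, so treat the exact numbers as illustrative):

```python
# One common formulation of Nilsson's sequence score for the 8-puzzle:
# h(n) = P(n) + 3*S(n), where P is the summed Manhattan distance of the tiles
# and S scores 1 for a non-blank tile in the centre plus 2 for every perimeter
# tile not followed (clockwise) by its proper successor.
GOAL = ('A', 'B', 'C',
        'H', None, 'D',
        'G', 'F', 'E')

PERIMETER = (0, 1, 2, 5, 8, 7, 6, 3)   # perimeter squares in clockwise order (flat indices)

def nilsson_score(state):
    # P(n): summed Manhattan distance of each tile to its goal square.
    p = 0
    for idx, tile in enumerate(state):
        if tile is None:
            continue
        g = GOAL.index(tile)
        p += abs(idx // 3 - g // 3) + abs(idx % 3 - g % 3)
    # S(n): centre tile scores 1; each perimeter tile not followed clockwise
    # by its goal successor scores 2.
    s = 1 if state[4] is not None else 0
    succ = {GOAL[PERIMETER[i]]: GOAL[PERIMETER[(i + 1) % 8]] for i in range(8)}
    for i in range(8):
        tile = state[PERIMETER[i]]
        if tile is not None and state[PERIMETER[(i + 1) % 8]] != succ[tile]:
            s += 2
    return p + 3 * s

# A state one move from the goal: only H is out of place, in the centre.
near_goal = ('A', 'B', 'C', None, 'H', 'D', 'G', 'F', 'E')
print(nilsson_score(near_goal))   # 10 with this convention, far above the true distance of 1
```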
Why, then, isn't Nilsson's sequence score admissible?
A heuristic function $h$ is admissible if $h(n) \leq h^*(n)$, for all states $n$ in the state space, where $h^*(n)$ is the actual distance to reach the goal (which is often unknown in practice, hence the need to use heuristics to estimate such distance).
Note that admissibility is a property that must hold for all states. So, if we find a state where the condition above is not satisfied for Nilsson's sequence score heuristic function, then we have shown that Nilsson's sequence score is not admissible.
Let us create the following state
+---+---+---+
| A | B | C |
+---+---+---+
| | H | D |
+---+---+---+
| G | F | E |
+---+---+---+
Note that this state only differs from the goal state by one move: if we move the tile $H$ to square $7$, then we reach the goal state. So, the actual distance to reach the goal state is $1$ move. But what does Nilsson's score function tell us regarding the distance of this state to the goal state? You can see from the algorithm referenced above for computing Nilsson's sequence score (of a board configuration) that the score of the configuration (or state) above would be more than $1$ (you can immediately see this because you need to multiply by $3$). Therefore, Nilsson's sequence score overestimates the distance to the goal (at least for one state), thus it cannot be admissible (by definition). | {
"domain": "ai.stackexchange",
"id": 980,
"tags": "proofs, admissible-heuristic, heuristic-functions, 8-puzzle-problem, nilssons-heuristic-function"
} |
Stopping teleop without shutting down SMACH state-machine | Question:
I'm building a mobile robot that is using SMACH for high-level task management. I'm starting off by trying to give users the ability to choose between exploration and teleoperation. I have no issues setting up the state-machine, and the transitions between states seem to work fine. Both teleop and exploration start up after adding the user input.
The issue I'm running into is when I want to stop the teleop_twist_keyboard node and transition back to the mode select state (teleop or exploration). The only way that seems to stop teleop is CTRL-C, and that shuts down the whole program, including the state-machine. I thought about using threading, but teleop seems to already be using it, so I'm not sure if that is the correct path to pursue.
Has anyone been able to find a way to stop the teleop_twist_keyboard node without shutting down the state-machine?
I will gladly post code, but I'm not sure what to post for this one.
UPDATE:
I am now able to stop teleop without shutting down the state-machine with the help of subprocess.Popen. But now it won't let me control the robot from the terminal, and quits teleop preemptively. This the two main states I'm switching between:
class RoamMode(smach.State):
    def __init__(self):
        smach.State.__init__(self, outcomes=['Exploraiton mode selected', 'Teleoperation mode selected', 'Wrong key'])

    def execute(self, userdata):
        user_input = input('Press T for Teleoperation mode, or press E for Exploration mode')
        if user_input == 'T':
            return 'Teleoperation mode selected'
        elif user_input == 'E':
            return 'Exploraiton mode selected'
        else:
            return 'Wrong key'
class RoamTeleOp(smach.State):
    def __init__(self):
        smach.State.__init__(self, outcomes=['Teleoperation stopped'])

    def execute(self, userdata):
        tp_process = subprocess.Popen('python3 run_teleop.py', shell=True)
        if tp_process.poll() == None:
            tp_process.kill()
        rospy.sleep(10.)
        return 'Teleoperation stopped'
This is the other file I'm running with subprocess.Popen('python3 run_teleop.py', shell=True):
import rospy
import roslaunch
package = 'teleop_twist_keyboard'
executable = 'teleop_twist_keyboard.py'
node = roslaunch.core.Node(package, executable)
launch = roslaunch.scriptapi.ROSLaunch()
launch.start()
process = launch.launch(node)
rospy.spin()
Originally posted by DanEB on ROS Answers with karma: 3 on 2021-03-29
Post score: 0
Original comments
Comment by Tahir M. on 2021-03-29:
You can kill the teleop_node from your script.
Comment by DanEB on 2021-03-29:
You sugest using another thread to kill the node, while it's running?
EDIT:
Still have not figured out a practical way of killing the node while it's running with rosnode.kill_nodes. Even tried with threading, but that made all sorts of mess.
Comment by mgruhler on 2021-03-30:
Why do you want to shut down the teleop node? wouldn't it make more sense to extend it with a key to basically quit it, could be a service or a simple topic that you publish on additionally to the regular twist topic, that your state machine listens to and then transitions back out? You could additionally extend the teleop node to be triggered/enabled by the state machine
Comment by DanEB on 2021-03-30:
The way that teleop is implemented, it made sense to me to stop the node rather than running it through a topic with some kind of key interrupt. I am still unsure how to actually make a topic that I can interrupt, since the communication is two-way. I will look closer into your suggestions. As a beginner to ROS I may be overcomplicating this matter. Your help is appreciated.
Answer:
Would https://wiki.ros.org/twist_mux be of any help?
I'd set that up with 2 inputs: one from the exploration commands and one from the teleop.
That still does not directly cover stopping the exploration command generation in Smach, but you could implement that by subscribing to the teleop topic in the RoamMode state and then calling self.preempt() when there is a message received (and it's not already preempting etc)
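A minimal twist_mux configuration along these lines might look like the following (topic names and priorities are assumptions; adjust to your setup):

```yaml
# twist_mux input configuration (sketch): teleop outranks exploration.
topics:
- name: teleop
  topic: cmd_vel_teleop
  timeout: 0.5
  priority: 100
- name: exploration
  topic: cmd_vel_explore
  timeout: 0.5
  priority: 10
```

With this, any message on the teleop topic suppresses the exploration commands until its timeout expires.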
Originally posted by Loy with karma: 141 on 2021-04-03
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by DanEB on 2021-04-03:
This fixes it! I am a bit puzzled right now, because I am sure I checked twist_mux a few days ago, and it was not supported for Noetic?
Anyways, thank you!
Comment by mgruhler on 2021-04-06:
might very well be that it just slipped in with the latest sync...
Comment by DanEB on 2021-04-06:
I just realized that it was yocs_cmd_vel_mux that I have been looking at. | {
"domain": "robotics.stackexchange",
"id": 36256,
"tags": "smach"
} |
Capacitor discharging connected to power supply AND resistor together | Question: Let's say I have 2 capacitors, P and Q, connected to a 9V supply. Across P there's a resistor connected in parallel, with the switch open (off position). When I turn on the battery they both fully charge. When I turn on the battery but also simultaneously close the switch to the resistor across P, does P discharge (even though it's connected to both the power supply and the resistor)?
From a question, the answer is that it does discharge. Why? Does the resistor take priority over the power supply? Or is it just that the question forgot to mention that the power supply is then turned off?
My question more clearly is this: when a charged capacitor is connected in parallel to both a power supply and a resistor, does the capacitor discharge?
Diagram for reference:
Answer: If you have the switch set to the Y position you will get an initial current through the resistor, but this will decay exponentially with time and given enough time the current in the resistor will be zero.
If the current in the resistor is zero then the voltage across the resistor is also zero, and since the capacitor P is in parallel with the resistor the voltage across the capacitor P must also be zero.
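That steady state can be checked with a small simulation (a sketch; the circuit topology is inferred from the question, and the component values are made up for illustration):

```python
# Assumed circuit: 9 V supply in series with Q, feeding P in parallel with R.
# KCL at the middle node with V_Q + V_P = 9 gives
#   dV_P/dt = -V_P / (R * (C_P + C_Q)),
# so P's voltage decays exponentially to zero while Q charges up to 9 V.
R, C_P, C_Q, V_supply = 1e3, 1e-6, 1e-6, 9.0
tau = R * (C_P + C_Q)

dt, t_end = tau / 1000, 10 * tau
v_p = V_supply * C_Q / (C_P + C_Q)      # initial series split: 4.5 V on each cap
for _ in range(int(t_end / dt)):
    v_p += dt * (-v_p / tau)            # forward-Euler step
v_q = V_supply - v_p
print(round(v_p, 4), round(v_q, 4))     # ≈ 0 on P and ≈ 9 on Q after many time constants
```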
And if the voltage across the lower capacitor is zero it is discharged. So the answer to your question is that yes with the switch set to Y the capacitor P will be discharged. | {
"domain": "physics.stackexchange",
"id": 100530,
"tags": "electric-circuits, electrical-resistance, capacitance, batteries"
} |
Fast implementation of basic addition algorithm | Question:
Write code for a modified version of the Grade School addition algorithm that adds the integer one to an m-digit integer. Thus, this modified algorithm does not even need a second number being added. Design the algorithm to be fast, so that it avoids doing excessive work on carries of zero.
I encountered this question looking over last year's final for my algorithms course. I'm not really sure how to answer it, although it seems like it isn't a very challenging question.
Answer: To slightly expand on @greybeard's answer, here are a few hints:
Start adding from the right, as usual.
When can you stop? Obviously, when you don't have a carry. When will that happen?
When do you have to look at the next digit? Obviously, when you have a carry. When will that happen?
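Putting the hints together, a sketch in Python (digits stored most-significant first):

```python
def add_one(digits, base=10):
    """Increment an m-digit number given as a list of digits, most significant first.

    Stops as soon as a digit does not overflow, so no work is wasted on zero carries.
    """
    digits = list(digits)
    i = len(digits) - 1
    while i >= 0 and digits[i] == base - 1:   # a carry propagates only through (base-1) digits
        digits[i] = 0
        i -= 1
    if i >= 0:
        digits[i] += 1        # no further carry: stop here
    else:
        digits.insert(0, 1)   # every digit overflowed: the number grows by one digit

    return digits

print(add_one([1, 2, 9]))   # [1, 3, 0] — one carry, then stop
print(add_one([9, 9, 9]))   # [1, 0, 0, 0]
```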
Challenge: show that the average number of steps this algorithm will take on an $m$-digit number represented in base $b$ is less than $b/(b-1)$. | {
"domain": "cs.stackexchange",
"id": 3846,
"tags": "algorithms, arithmetic"
} |
Dictionary with sets as keys where lookup can be set intersection | Question: Normally, when working with dictionaries, we expect around O(1) complexity when we go to retrieve a value given the key (and when we insert). I work in Python, but this might apply to any dynamically typed language. I have been working with dictionaries where the keys are frozensets, which are just sets that are hashable and thus can be used as keys for your dictionary. I have a problem which would be magically solved if I could do "lookups" which are actually not key based, but set based. That is to say, it would be great if I had a data structure like a dictionary where I can pass in a set as a key and the data structure returns all values whose keys have a non-zero intersection with the key I passed in. I am quickly coming to the conclusion that this is never going to be O(1), but who knows. So, the question is, can we create a data structure that is basically a dictionary, but the keys are sets and it has a magic lookup ability where you can pass in a set and get all values back whose keys have non-zero intersection with the key (set) I passed in?
I imagined something to do with loading the values in. Firstly, you need keys that are sets of tuples (hashable). Then, when you load in a new value, you update the key to have a pointer to one other key and then these form long chains that do the job of getting you your sets of keys that meet the intersection criteria. This means that you push the complexity of insert way up, but lookup is fast. Apart from this, I have no idea, and I think a simple argument could be made that any such data structure will have either a bad O(N) insert or bad O(N) lookup, or both, where N is the number of keys.
Answer: Unfortunately, no, this isn't achievable (at least not if you want to handle arbitrary sets as keys).
The reason is simple: if $N$ denotes the number of elements in a key set, then it takes $O(N)$ time just to read all of those elements, and any correct algorithm will have to read all of those elements to handle a query.
There is a simple data structure that might suffice: a dictionary that maps from each element to a list of all keys that contain that element. Then, given a query set, you can iterate over all the elements in the query set, look them up in the dictionary, and check whether any of them are an element of any stored key.
While this is linear time in the size of the query set, note that the running time is only a constant factor of the time it takes to create the query set. So there is a sense in which this is a constant-factor overhead, and thus the best you could hope for (in asymptotic analysis).
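A sketch of that data structure in Python:

```python
from collections import defaultdict

class SetKeyedIndex:
    """Dictionary-like store whose lookup returns every (key, value) pair
    whose frozenset key intersects the query set (the inverted-index idea
    described above)."""

    def __init__(self):
        self._store = {}                     # frozenset key -> value
        self._by_element = defaultdict(set)  # element -> keys containing it

    def insert(self, key, value):
        key = frozenset(key)
        self._store[key] = value
        for element in key:
            self._by_element[element].add(key)

    def lookup(self, query):
        hits = set()
        for element in query:                # time linear in the query size
            hits |= self._by_element.get(element, set())
        return {k: self._store[k] for k in hits}

idx = SetKeyedIndex()
idx.insert({1, 2}, 'a')
idx.insert({3, 4}, 'b')
print(idx.lookup({2, 3}))   # both keys intersect {2, 3}
print(idx.lookup({5}))      # {} — no key contains 5
```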
Where you might be able to do better is if you knew there was some structure on how query sets were created. For instance, suppose every query set was created via the union of two prior query sets, the intersection of two prior query sets, or by inserting an element into a prior query set. Then we could improve on the above data structure. If you have some more specific situation, I suggest pondering exactly what are the operations the data structure has to be able to handle and then asking a new question about how to support those operations efficiently. | {
"domain": "cs.stackexchange",
"id": 20104,
"tags": "data-structures, sets, dictionaries"
} |
Explaining the Drake Equation on a smaller scale | Question: So me and my friend were planning a video to explain the Drake Equation (within a time limit of 5 minutes), and we needed some help. This video is aimed at explaining the concept to an age group between 13-18 years, and having gone through loads of articles, we felt that a lot of the common audience would not be able to comprehend the concept.
So our question was, can we take the Drake Equation and try to explain it with maybe a real life example and on a smaller scale? It is an incredibly interesting equation and we felt like doing a good job of explaining it to a common teenager.
Thanks!
Answer: No need to make it complicated: what about this...
Just scribble a rectangle on a piece of paper, and say "there are 100 billion stars in our galaxy"....
Then, color off (let's say) 1/3 of the rectangle, and say "only one third of those are the sort of star that could have life, so that's blah billion"
Then, color off (say) 9/10ths of that box, and say "we believe about 90% of those have planets - so that's blah billion"
Then, color off (say) 1/20th of that box, and say "of those with planets, it seems that about 1 in 20 have Earth-like planets. Now we're down to blah billion..."
and so on.
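In code, the whole demonstration is just a product of fractions (the values below are the made-up illustrative numbers from the walkthrough above, not real estimates):

```python
stars = 100e9                      # stars in the galaxy
estimate = stars * (1/3) * (9/10) * (1/20)
print(f"{estimate:.3g}")           # 1.5e+09 candidates left, before applying further factors
```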
(Note: the Drake equation has a number of fairly silly terms relating to "nuclear war!", which were added as political sops in that era; suggest ignore these unless you want to sound 90 years old!)
So just scribble a box or draw a line on a piece of paper ... or maybe use "a bag of marbles" as the other answer suggests.
Just BTW there is in fact an entire documentary (I noticed it on "Netflix") called "The Drake Equation" which does exactly what you say...
.. it is not really very good as I remember. I think the guy simply draws a line in the ground, to do the "fractions" demo, you know? (ie, they just erase more and more of the line). It doesn't need to be more complicated than that.
It's worth noting that the Drake equation simply points out:
(i) if you multiply those three or four fractions together, you get the number of civilizations in the galaxy. Which is self-evident.
but, the whole point is
(ii) we have utterly no clue - not even vaguely - what most of the fractions are,
You could say it's a written formula which helps clarify our thinking on something we are utterly clueless about. So rather than just vaguely saying "we're utterly clueless," we can speak more clearly about the nature of our cluelessness!
although interestingly,
(iii) very admirably, the issue of "How many stars have planets?" ... one could say that issue has been somewhat settled these very years, as we speak - that's great. | {
"domain": "astronomy.stackexchange",
"id": 1876,
"tags": "solar-system, milky-way, life, astrobiology, drake-equation"
} |
.cif .pdb conversion with python | Question: can you please give me an advice how to convert .cif files into .pdb preferably using python?
Thanks in advance!
Best,
Balint Biro
Answer: As @Michael said PyMOL can do this. PyMOL is not only an app, but also a python package —installed via conda not via the regular download (you can have both).
import pymol2
with pymol2.PyMOL() as pymol:
pymol.cmd.load(infile,'myprotein')
pymol.cmd.save(infile.replace('.cif', '.pdb'), selection='myprotein')
This runs the parallelisable instance, so you can have that context manager block running on how many threads or processes you fancy. | {
"domain": "bioinformatics.stackexchange",
"id": 1593,
"tags": "python, pdb"
} |
What is the sun's spectral series? | Question: My physics book says that six colors can be distinctly seen in white light: red, orange, yellow, green, blue, and violet. Does solar light only use these six wavelengths and mix them additively, or does it use a range of colors from red to violet?
And what effect does the scattering of blue light from the sky have on the solar spectrum?
Answer: Actually, The Sun outputs much more than these six wavelengths. The figure provided shows the spectrum of the Sun:
As you can see, the Sun outputs light along a continuous curve at all visible wavelengths, the combination of which appears white to our eyes as a byproduct of our having evolved in orbit of this star.
The image also shows the absorption effects of the atmosphere. If you specifically discuss the blue scattering from the atmosphere, the following figure is a nice depiction.
As you can see, the blue scattering leaves only the more red visible wavelengths, which makes sunlight appear more yellow around midday and more red near dawn/dusk.
As for why we can simulate white light perfectly without having to reproduce the solar spectrum; that has everything to do with how humans perceive colour. The human eye has special cone cells capable of perceiving colour. As shown below, there are only three true colours we can see: red, green, and blue (with more green cells than the others, again due to the solar spectrum).
It is by using a combination of these three colours that we interpret all possible visible wavelengths. Our brains interpret the relative strengths by which each different colour cone is stimulated and assigns a different colour to each combination. The next figure shows the sensitivity ranges of these cones.
The three types of cones are sensitive over all visible wavelengths but are stimulated differently by each wavelength. Thus, we can interpret a nice teal when a 500nm wave stimulates the cones appropriately, or we can simulate this for human eyes (and this is what we do with things like computer screens) by combining certain levels of 400, 535, and 700nm (thanks to Peter Shor for the correction) lights to stimulate the cones by the same ratio as the 500nm light. Being the same ratio, our brain is tricked into seeing it as teal, but the summed light itself is not actually teal. In the same way, a combination of the six colours you mentioned would look white to us, but to make a true white, the Sun must output across all wavelengths in the visible spectrum at varying amounts. | {
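The "same cone ratios, same perceived colour" argument can be illustrated numerically. The toy model below uses made-up Gaussian cone sensitivities (the centres and widths are illustrative assumptions, not real physiological data) and solves for the intensities of the three primary lights that reproduce the cone response of a pure 500nm light:

```python
import math

# Toy Gaussian "cone" sensitivities: (centre nm, width nm).
# These numbers are illustrative assumptions, not measured data.
CONES = [(445.0, 50.0), (535.0, 50.0), (565.0, 50.0)]  # S, M, L

def cone_response(wavelength_nm):
    """How strongly each cone type is stimulated by a pure wavelength."""
    return [math.exp(-((wavelength_nm - mu) ** 2) / (2.0 * sig ** 2))
            for mu, sig in CONES]

def solve3(A, b):
    """Solve a 3x3 linear system by Gaussian elimination with pivoting."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, 3):
            f = M[r][col] / M[col][col]
            for c in range(col, 4):
                M[r][c] -= f * M[col][c]
    x = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        x[r] = (M[r][3] - sum(M[r][c] * x[c] for c in range(r + 1, 3))) / M[r][r]
    return x

primaries = [400.0, 535.0, 700.0]  # the three lights mentioned in the answer
A = [[cone_response(p)[i] for p in primaries] for i in range(3)]
target = cone_response(500.0)      # cone stimulation from true 500 nm teal
weights = solve3(A, target)        # intensity needed for each primary

# The three-light mixture stimulates every cone in the same ratio as
# 500 nm light does, so the eye cannot tell the two apart.
mixture = [sum(w * cone_response(p)[i] for w, p in zip(weights, primaries))
           for i in range(3)]
```

If a weight comes out negative, the chosen primaries cannot actually reach that colour (it is "out of gamut"), which is a real limitation of three-primary displays.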
"domain": "physics.stackexchange",
"id": 8025,
"tags": "optics, visible-light, sun"
} |
Origin of 3 types of elementary plaquette excitations in Majorana plaquette model | Question: I'm studying Majorana Fermion Surface Code for Universal Quantum Computation by
Sagar Vijay, Timothy H. Hsieh, and Liang Fu. There they consider the Majorana plaquette model on an hexagonal lattice.
There are Majorana fermions ($\gamma_n$) at each lattice site and the Hamiltonian is $$H=-u\sum_p \mathcal{O}_p,$$ where $u>0$, $\mathcal{O}_p =i \Pi_{n\in vertex(p)}\gamma_n$ and $p$ denotes a labeling of the hexagonal plaquettes in the lattice.
The ground state of the system $|gs>$ satisfies $\mathcal{O}_p|gs> = |gs>$ for all $p$.
Putting the system on a torus, the total fermion parity operator, $\Gamma$, is considered as $$ \Gamma= i^{\frac{N}{2}} \Pi_n \gamma_n,$$ where the lattice on the torus has $N$ sites.
It is asserted that "For convenience, we choose a unit cell for the honeycomb
lattice consisting of three plaquettes labeled A, B, and C". Then the fermion parity, on the torus, is equal to the product of Majorana fermions over the plaquettes of type A only, and also equal to the product over each of the other types separately.
Then the way to excite the lattice above the ground state is discussed, the need for string operators, etc. In this context my question is: how is the existence of three types of excitations, A, B, C, deduced from the above considerations? I see it is related to the 3-coloring of the lattice and to the conservation of $\Gamma$, and also to the possibility of expressing it using each color separately, so these restrictions limit the types of excitations above the ground state (those that change the eigenvalue of one or more $\mathcal{O}_p$ operators), but I can't understand the complete argument.
I understand that if I change the eigenvalues of two different color plaquettes, I ruin the fermion parity, so this is not allowed. By the same token I won't be able to change one type of excitation (say a red plaquette excited) into another (say green) because this would violate fermion parity conservation.
I can't see why the three-coloring (topology) implies three different physical excitations; what distinguishes them physically? Is it related to their position in the lattice? (Because I can permute colors; of course I understand that the colors per se are of no significance.)
Answer: We say two excited states contain the same topological excitation if there is a local operator that takes one to the other.
Looking at Fig. 1 in the paper, applying a single Majorana operator $\gamma_i$ to the ground state creates three excitations surrounding site $i$, which has been labelled $A,B,C$ in the figure. This follows from the (anti-)commutation relations for the Majorana operators. Since these excitations are $Z_2$ objects (the $O_i$ have eigenvalues $\pm 1$), applying $\gamma_i \gamma_j$ where $i,j$ are nearest neighbors excites two of the same colored plaquettes (say blue), and cancels out the red and green plaquette excitations. Because of the $Z_2$ nature of the excitations, the previous operator is also equivalent to hopping an excitation from one blue plaquette to the other blue plaquette, i.e. two blue excitations are connected via a local operator but a blue and red plaquette excitation cannot be connected.
You can keep connecting plaquettes of the same color this way to create the string operator that hops these excitations, shown in Fig. 2. | {
"domain": "physics.stackexchange",
"id": 95127,
"tags": "condensed-matter, quantum-information, superconductivity, lattice-model"
} |
Command line IRC client | Question: I made this IRC client ages ago in Python, and thought I'd revisit it for Python 3.5 (I'd like to play with asyncio). Before I start, I'd like to have an overall design review about this.
Known issues, although feel free to comment on them:
I do too much in one class
There is a fair bit of repetition that could probably be pulled out
I don't think there are any obvious bugs, but I never had a rigorous test suite for this (read as: there was never a test suite, I wrote this before I discovered unit testing) so it's very possible I've missed something.
I'm also not an expert in the IRC spec - this was mostly googling and trial and error. If I've misunderstood/misused/missed something please comment on that as well.
from __future__ import absolute_import, print_function, unicode_literals
from functools import partial
from multiprocessing import dummy
import datetime
import enum
import logging
import select
import socket
import time
import threading
now = datetime.datetime.now()
logging.basicConfig(
filename=''.join(["Logs/", str(datetime.datetime.now()), ".log"]),
level=logging.INFO
)
ErrorCodes = enum.Enum(
'ErrorCodes',
'UNKNOWN_HOST UNKNOWN_FAILURE UNKNOWN_CHANNEL UNKNOWN_USER '
'MESSAGE_SENT MESSAGES_RECEIVED PONG_SUCCESS SERVER_CONNECTED '
'HOSTNAME_NOT_RESOLVED SERVER_DISCONNECTED CHANNEL_JOINED '
'CHANNEL_NAME_MALFORMED CHANNEL_LEFT'
)
class IrcMember(object):
"""Represents an individual who uses the IRC client.
Only stores non-sensitive information.
Attributes
----------
nickname : str
User nickname.
real_name : str
User's "real" name.
ident : str
User's id
servers: dict
Server name to socket mapping for connected servers.
server_channels: dict
Server name to a list of channel names.
server_data: dict
Server name to dict of user information if it differs from the
default values.
lock: threading.Lock
Socket lock.
replies: dict
Pending replies.
DEFAULT_PORT: int
The default port to use.
"""
DEFAULT_PORT = 6667
def __init__(self, nick, **kwargs):
"""Creates a new member.
Parameters
----------
nick : str
The user's nickname.
real : str, optional
The user's "real" name, defaults to the nickname.
ident : str, optional
The user's id, defaults to the nickname.
"""
self.nickname = nick
self.real_name = nick
self.ident = nick
for key, value in kwargs.iteritems():
self.__dict__[key] = value
self.servers = {}
self.server_channels = {}
self.server_data = {}
self.lock = threading.Lock()
self.replies = {}
def send_server_message(self, hostname, message):
"""Send a message to a server.
Parameters
----------
hostname : str
Name of the server to send to.
message : str
Message to send.
"""
if hostname not in self.servers:
logging.warning("No such server {}".format(hostname))
logging.warning("Failed to send message {}".format(message))
return ErrorCodes.UNKNOWN_HOST
sock = self.servers[hostname]
try:
sock.send("{} \r\n".format(message.rstrip()))
except socket.error as e:
logging.exception(e)
logging.warning("Failed to send message {}".format(message))
return ErrorCodes.UNKNOWN_FAILURE
else:
return ErrorCodes.MESSAGE_SENT
def send_channel_message(self, hostname, chan_name, message):
"""Sends a message to a channel.
Parameters
----------
hostname : str
Name of the server to send to
chan_name : str
Name of the channel
message : str
Message to send
"""
if hostname not in self.servers:
logging.warning("Not connected to server {}".format(hostname))
logging.warning("Failed to send message {}".format(message))
return ErrorCodes.UNKNOWN_HOST
elif chan_name not in self.server_channels[hostname]:
logging.warning("Not in channel {}".format(chan_name))
logging.warning("Failed to send message {}".format(message))
return ErrorCodes.UNKNOWN_CHANNEL
else:
return self.send_private_message(
hostname, chan_name, message, channel=True
)
def send_private_message(self, hostname, name, message, channel=False):
"""Sends a private message.
Parameters
----------
hostname: str
Name of the server to send to
name: str
Name of the user or channel to send to
message: str
Message to send
channel: bool, optional
Whether or not this is a channel message
"""
if hostname not in self.servers:
logging.warning("No such server {}".format(hostname))
logging.warning("Failed to send message {}".format(message))
return ErrorCodes.UNKNOWN_HOST
if not (channel or self.user_exists(name)):
return ErrorCodes.UNKNOWN_USER
message = "PRIVMSG {}: {}".format(name, message.rstrip())
return self.send_server_message(hostname, message)
def user_exists(self, username):
"""Validate a user exists.
Parameters
----------
username: str
Name of the user.
"""
## TODO: implement this
return True
def ping_pong(self, sock, data):
"""Pong a server.
Parameters
----------
sock : socket.socket
Socket to pong on.
data : str
Data to send in pong.
"""
try:
sock.send("PONG {}\r\n".format(data))
except socket.error as e:
logging.exception(e)
logging.warn("Couldn't pong the server")
return ErrorCodes.UNKNOWN_FAILURE
else:
return ErrorCodes.PONG_SUCCESS
def join_server(self, hostname, port=None, **kwargs):
"""Join a server.
Parameters
----------
hostname: str
Name of the server to join.
port: int, optional
Port to connect on - defaults to `IrcMember.DEFAULT_PORT`.
nickname: str, optional
Nickname to use on this server
real_name: str, optional
'Real' name ot use on this server
ident: str, optional
Identity to use on this server.
"""
if port is None:
port = IrcMember.DEFAULT_PORT
if hostname in self.servers:
logging.warn("Already connected to {}".format(hostname))
return ErrorCodes.SERVER_CONNECTED
nick = self.nickname
ident = self.ident
realname = self.real_name
## Checking if the data for this server is different from the defaults
if kwargs:
self.server_data[hostname] = {}
for key, value in kwargs.items():
if key in ['nickname', 'real_name', 'ident']:
self.server_data[hostname][key] = value
locals()[key] = value
else:
logging.info(
"key-value pair {}: {} unusued".format(key, value)
)
if not self.server_data[hostname]:
del self.server_data[hostname]
try:
ip = socket.gethostbyname(hostname)
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.connect((ip, port))
self.servers[hostname] = sock
self.server_channels[hostname] = []
sock.settimeout(2)
self.send_server_message(
hostname, "NICK {}\r\n".format(nick))
self.send_server_message(hostname,
"USER {} {} bla: {}\r\n".format(nick, ident, realname))
except socket.gaierror as e:
logging.exception(e)
return ErrorCodes.HOSTNAME_NOT_RESOLVED
except socket.error as e:
logging.exception(e)
if port != IrcMember.DEFAULT_PORT:
logging.warning(
"Consider using port %s (the defacto IRC port) not of %s"
% (IrcMember.DEFAULT_PORT, port)
)
return ErrorCodes.UNKNOWN_FAILURE
else:
logging.info("Connected to {} on {}".format(hostname, port))
return ErrorCodes.SERVER_CONNECTED
def leave_server(self, hostname):
"""Leave a server.
Parameters
----------
hostname: str
Server to disconnect from.
"""
if hostname not in self.servers:
logging.warning("Not connected to {}".format(hostname))
return ErrorCodes.SERVER_DISCONNECTED
try:
self.send_server_message(hostname, "QUIT\r\n")
self.servers[hostname].close()
except socket.error as e:
logging.exception(e)
logging.warning("Failed to leave server {}".format(hostname))
return ErrorCodes.UNKNOWN_FAILURE
else:
for attr_name in ['servers', 'server_channels', 'server_data']:
try:
del getattr(self, attr_name)[hostname]
except KeyError:
# This host doesn't have any data about that
pass
logging.info("Left server {}".format(hostname))
return ErrorCodes.SERVER_DISCONNECTED
def join_channel(self, hostname, chan_name):
"""Join a channel.
Parameters
----------
hostname: str
Server the channel is on.
chan_name: str
Name of the channel.
"""
if chan_name in self.server_channels[hostname]:
logging.warning(
"Already connected to {} on {}".format(hostname, chan_name))
return ErrorCodes.CHANNEL_JOINED
if chan_name.startswith("#"):
try:
self.send_server_message(
hostname, "JOIN {}\r\n".format(chan_name))
except socket.error as e:
logging.exception(e)
logging.warning("Failed to connect to {}".format(chan_name))
return ErrorCodes.UNKNOWN_FAILURE
else:
self.server_channels[hostname].append(chan_name)
logging.info("Connected to {}".format(chan_name))
return ErrorCodes.CHANNEL_JOINED
else:
logging.warning("Channel names should look like #<channel_name>")
return ErrorCodes.CHANNEL_NAME_MALFORMED
def leave_channel(self, hostname, chan_name):
"""Leave a channel.
Parameters
----------
hostname: str
Server the channel is on.
chan_name: str
Name of the channel.
"""
if hostname not in self.servers:
logging.warning("No such server {}".format(hostname))
return ErrorCodes.UNKNOWN_HOST
elif chan_name not in self.server_channels[hostname]:
logging.warning("No such channel {}".format(chan_name))
return ErrorCodes.CHANNEL_LEFT
else:
try:
self.send_server_message(
hostname, "PART {}\r\n".format(chan_name)
)
except socket.error as e:
logging.exception(e)
logging.warning("Failed to leave {}".format(chan_name))
return ErrorCodes.UNKNOWN_FAILURE
else:
self.server_channels[hostname].remove(chan_name)
logging.info("Left channel {}".format(chan_name))
return ErrorCodes.CHANNEL_LEFT
def receive_all_messages(self, buff_size=4096):
"""Display all messages waiting to be received.
Parameters
----------
buff_size: int, optional
How large of a buffer to receive with.
"""
ready, _, _ = select.select(self.servers.values(), [], [], 5)
if ready:
for i in range(len(ready)):
for host, sock in self.servers.iteritems():
if sock == ready[i]:
ready[i] = host
try:
pool = dummy.Pool()
pool.map(partial(self.receive_message,
buff_size=buff_size),
(tuple(ready),))
with self.lock:
replies, self.replies = self.replies, {}
for server, reply in replies.iteritems():
print("{} :\n\n".format(server))
for message in reply:
print(" {}".format(message))
except socket.error as e:
logging.exception(e)
logging.warning("Failed to get messages")
return ErrorCodes.UNKNOWN_FAILURE
return ErrorCodes.MESSAGES_RECEIVED
def receive_message(self, hostname, buff_size=4096):
"""Receive a message from a single server.
Parameters
----------
hostname: tuple
Server to receive from.
buff_size: int, optional
How large of a buffer to receive with.
Notes
-----
Has already checked that there is a message waiting.
"""
hostname = hostname[0]
reply = []
sock = self.servers[hostname]
while True:
try:
readbuffer = sock.recv(buff_size)
if not readbuffer:
break
temp = readbuffer.split("\n")
readbuffer = temp.pop()
for line in temp:
line = line.rstrip().split()
if (line[0] == "PING"):
self.ping_pong(sock, line[1])
else:
line = " ".join(line)
reply.append(line)
except socket.error:
break
with self.lock:
try:
if reply not in self.replies[hostname]:
self.replies[hostname] += reply
except KeyError:
self.replies[hostname] = reply
def __del__(self):
for host, sock in self.servers.items():
self.leave_server(host)
if __name__ == "__main__":
NICK = raw_input("Please enter your nickname ")
HOST = raw_input("Please enter your desired server ")
CHAN = raw_input("Please enter your desired channel ")
me = IrcMember(NICK)
me.join_server(HOST, nickname='test', ident='test', real_name='test')
time.sleep(1)
me.receive_all_messages()
me.join_channel(HOST, CHAN)
time.sleep(1)
me.receive_all_messages()
i = 0
while i < 100:
start = time.time()
msg = raw_input("Would you like to say something? ")
if msg == 'n':
break
if msg.rstrip():
me.send_channel_message(HOST, CHAN, msg)
me.receive_all_messages()
end = time.time()
if (end-start) < 5:
time.sleep(int(5-(end-start)))
i += 1
Answer: Okay, since bounties are worth giving people a run for their money, and I love Python, and I miss my days of making IRC bots... I'll give this a go..
I'll try to "dig into how [you] design and implement the IRC client as a whole", particularly with regards to the spec (Which I have actually (somewhat, as much as is possible with things like RFC's) read!).
But before I do that, there are a couple of issues I'd like to point out overall.
Oh the OOPmanity
Firstly, as you mentioned - you are doing too much in one class! You're dealing with a domain where we have the benefit of some pretty well discretized concepts (read: classes).
The main one I think you should consider abstracting out of IrcMember, is the idea of a Connection. Make a separate class for connections, that manages the socket, stores the (list of?) room(s) you're in, etc. This way, IrcMember can just represent the user, i.e. a real name, ident, nickname (unless that is per-server).
However, don't go OOverboard and make abstractions for channels, messages, or any thing else.
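To make the suggested split concrete, here is a minimal sketch of what it might look like (the names and details are illustrative, not a drop-in refactor of the class above):

```python
import socket


class Connection(object):
    """Owns one server socket plus the per-server state."""
    DEFAULT_PORT = 6667

    def __init__(self, hostname, port=None):
        self.hostname = hostname
        self.port = port or self.DEFAULT_PORT
        self.channels = []
        self.sock = None  # only created once connect() is called

    def connect(self):
        self.sock = socket.create_connection((self.hostname, self.port))

    def send(self, message):
        self.sock.send("{}\r\n".format(message.rstrip()).encode("utf-8"))


class IrcMember(object):
    """Just the user's identity; connections live in their own objects."""

    def __init__(self, nick, real_name=None, ident=None):
        self.nickname = nick
        self.real_name = real_name or nick
        self.ident = ident or nick
        self.connections = {}  # hostname -> Connection

    def join_server(self, hostname, port=None):
        conn = Connection(hostname, port)
        conn.connect()
        conn.send("NICK {}".format(self.nickname))
        conn.send("USER {} {} bla: {}".format(self.nickname, self.ident,
                                              self.real_name))
        self.connections[hostname] = conn
        return conn


me = IrcMember("alice")
# me.join_server("irc.example.org")  # would open a real socket, so left commented
```

This way each IRC command method lives on the object that owns the relevant state, and the error-code bookkeeping shrinks considerably.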
Interactive Command-line (oh my)
Secondly, as I'm sure you're aware (As would be anyone having run the code), a command-line interface for something interactive like IRC isn't the best way, particularly without ncurses, etc.
As raw_input blocks, you're unable to check for new messages (or server PING's) - which will eventually (5 - 20 minutes depending on which network) timeout your socket as idle on the server-side. Of course, if you're chatting actively you should have something to say at least every 5 minutes, so this isn't a huge deal. But it does mean your client won't be capable of idling online (not 100% sure of your use-case for it here).
Okay, now into the good stuff...
Channel names
From RFC1459 Section 1.3:
Channels names are strings (beginning with a '&' or '#' character) of
length up to 200 characters. Apart from the the requirement that the
first character being either '&' or '#'; the only restriction on a
channel name is that it may not contain any spaces (' '), a control G
(^G or ASCII 7), or a comma (',' which is used as a list item
separator by the protocol).
(Note that there are also extensions, both standard and non-standard, to allow channel names starting with % and probably other characters as well)
So, your join_channel code which checks channel names start with '#':
if chan_name.startswith("#"):
...
else:
logging.warning("Channel names should look like #<channel_name>")
return ErrorCodes.CHANNEL_NAME_MALFORMED
is not strictly by the book.
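If you want to check names strictly by the book, the RFC 1459 rules quoted above are simple enough to encode directly (this sketch ignores the later extensions to the prefix set):

```python
def is_valid_channel_name(name):
    """RFC 1459 section 1.3: starts with '&' or '#', at most 200 chars,
    and contains no spaces, control-G (ASCII 7), or commas."""
    return (
        2 <= len(name) <= 200
        and name[0] in "&#"
        and not any(c in name for c in (" ", "\x07", ","))
    )
```

So `#python` and `&local` both pass, while `python` or `#has space` do not.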
No MODE's
Perhaps not a game-changer (depending on what you want/need to do in your client), but your script has no method to allow for the sending of an IRC MODE command (either for the user or for a channel).
Some (not all) networks use user-modes to handle name registration, etc - but if you're not concerned about that then leave it out as it is.
KICKing, Banning, and generally staying on TOPIC
The KICK and TOPIC commands, as well as the channel MODE 'b' (ban) are used for room administration (keeping out the riff-raff, setting the channel topic, etc). If your client is just for chatting, then again don't worry about them - but they are needed in real life (unfortunately).
(Also, if you plan on using IRCX - then look into the ACCESS command.)
Make it all a bit more INVITEing
IRC supports invitation-only chat rooms, so if you plan on joining/inviting people to them then you'll need the INVITE command. Otherwise, it can (probably) be ignored.
Channels are great... If you know their name
This one was a bit more of a 'shocker' to me: no LIST support? Is your user expected to know the exact name of the channel they desire?
If not, I'd suggest adding some logic for room lists (most likely associated with the Connection instance for that server).
Channels are great... Unless you like talking to people
Along the same lines as above, the channel functionality is somewhat limited without the NAMES command. At present, the client can join a room and send it a message - but not know if there's anybody else there or not or who they are!
NAMES tells the server to send you a list of what nicknames are registered (joined) on a channel (usually, a channel that you have also joined).
Most servers send a NAMES list along with a successful JOIN notification.
I would recommend that you store a list of the nicknames in a room once the client has joined it.
Miscellaneous other things missing
Your client at present has no way to WHOIS or WHO anyone else, or to send anyone (person, i.e. a nickname) a message (PRIVMSG or NOTICE), and also cannot send NOTICE's to channels. (None of which are big deals, at all, so I just wrapped them all up here).
Some code smells I noticed
Just to finish up, a bit more on your coding... As you mentioned, "There is a fair bit of repetition that could probably be pulled out". Particularly things like send_channel_message, which just ends up calling send_private_message anyway (I do understand why, because they both use PRIVMSG, and channel messages need some additional checks first). But it just doesn't feel right reading through the code.
There (mostly) seems to be a one-to-one mapping of a lot of your functions to IRC command verbs. For example, leave_server -> QUIT, join_channel -> JOIN, send_private_message -> PRIVMSG, etc.
Part of me just feels that (as you say) there is just a tad too much repetition in that architecture. (Perhaps some more abstract encapsulation of a server command and arguments?)
The leave_server function also doesn't seem to support QUIT messages, not the end of the world - but easily implemented.
Also, as much as I love Easter Eggs, what is this:
def user_exists(self, username):
"""Validate a user exists.
Parameters
----------
username: str
Name of the user.
"""
## TODO: implement this
return True
(PS, I love ## TODO: implement this)
Was this going to be (one day, I'm sure) some sort of logic around either the WHO or WHOIS commands to prevent nickname collision?
"domain": "codereview.stackexchange",
"id": 18353,
"tags": "python, object-oriented, multithreading, socket, server"
} |
What biological processes cause leaves to change colours in Autumn? | Question: I am curious to learn what are the biological mechanisms that cause leaves of deciduous plants to change colour? What happens to the chlorophyll?
What environmental phenomena (temperature/air pressure/length of sunlight) are they responding to?
Answer: Here is a quite nice Chemistry of Autumn Leaf Color. And here is a less chemical, but more detailed explanation regarding the triggering of colour change, which is quite similar to the one from the University of Illinois that @Oreotrephes already gave.
As I understand it, during sunny days a lot of chlorophyll is broken down, but also built up. When after the summer the days get cold and dark, chlorophyll is built up slower (or not any more at some point) and you start to see the colour from the carotenoids and anthocyanins. A harsh change from sunny to cold and dark is supposedly bringing up the brightest colours. At some point, also the carotenoids break down and leaves get brown. The faster the change to cold and dark the larger is the frame between having all chlorophyll broken down and having the other colouring molecules broken down (-> bright colours).
But that's the amateur's explanation.
"domain": "biology.stackexchange",
"id": 1236,
"tags": "biochemistry, botany, life-history, pigmentation"
} |
Magnetic Field at centre of Hemisphere | Question:
MY APPROACH:
I know that we have to rotate by an angle theta from the centre and take an angular element which corresponds to a ring, then write the expression for the field accordingly. But my main problem is that I am not able to interpret how the turns of the wire are distributed. I assumed that they are distributed uniformly over the surface area; however, that gives the value for 'N' as 3, whereas the answer given is 4. All I need to understand is how the turns vary with the angle theta. It would be great if someone could help me out with this one.
Answer: Consider the current loops uniformly distributed along the arc. The loop density per unit arc length is
$$
\lambda = \frac{N}{\pi R / 2}
$$
and the number of loops $dN$ between $\theta$ and $\theta + d\theta$ is:
$$
d N(\theta) = \lambda R d\theta = \frac{2 N}{\pi} d\theta.
$$
These rings at angle $\theta$ contribute a magnetic field of:
$$
dB = \frac{\mu_o R^2 \sin^2\theta }{2 R^3} \frac{2 N i}{\pi} d\theta,
$$
The total field is:
$$
B(O) = \int_{\pi/2}^{\pi} \frac{\mu_o N i \sin^2\theta }{\pi R} d\theta
$$
$$
= \int_{\pi/2}^{\pi} \frac{\mu_o N i}{\pi R} \frac{1-\cos 2 \theta}{2} d\theta
$$
$$
= \frac{\mu_o N i}{2 \pi R} \left[ \theta - \frac{\sin 2 \theta }{2} \right] _{\pi/2}^\pi
$$
$$
= \frac{\mu_o N i}{2 \pi R} \frac{\pi}{2} = \frac{\mu_o N i}{4 R}
$$
We assume the uniform arc distribution to obtain the factor of $4$ in the denominator.
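As a sanity check, the integral can be evaluated numerically (midpoint rule, pure Python) and compared against the closed form $\mu_o N i / (4R)$:

```python
import math

def field_at_center(N=1.0, i=1.0, R=1.0, mu0=4e-7 * math.pi, steps=100000):
    """Midpoint-rule evaluation of the integral of
    (mu0*N*i/(pi*R)) * sin(theta)**2 over theta in [pi/2, pi]."""
    lo, hi = math.pi / 2.0, math.pi
    h = (hi - lo) / steps
    total = sum(math.sin(lo + (k + 0.5) * h) ** 2 for k in range(steps)) * h
    return mu0 * N * i / (math.pi * R) * total

B_numeric = field_at_center()
B_closed = (4e-7 * math.pi) * 1.0 * 1.0 / (4.0 * 1.0)  # mu0*N*i/(4R)
# The two agree to many decimal places.
```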
"domain": "physics.stackexchange",
"id": 76953,
"tags": "electromagnetism, magnetic-fields, electromagnetic-induction"
} |
End of chirp in phase 0 | Question: I would like the chirp to end in phase zero.
Chirp time or end frequency may vary slightly.
Now I'm checking the matplotlib output.
I observe the same thing at the output of the sound card.
In this solution, the chirp does not always end in phase zero.
help find a solution.
Thanks for the help.
Andrew
from pylab import *
import numpy as np
from scipy.signal import chirp
f0 = 7000
f1 = 17000
samplerate = 192000
T = 0.0013
T = np.ceil(T*f1)/f1 # new T
t = np.arange(0, int(T*samplerate)) / samplerate
w = chirp(t, f0=f0, f1=f1, t1=T,phi=270, method='linear')
fig, ax = subplots(figsize=(6,1))
ax.set_title("Chirp ")
ax.plot(w)
show()
Answer: Derivations here - you can pick any tmin, tmax, and fmin, fmax for some number of samples or sampling rate N.
We adjust the code one line toward the end to rescale phi to end as an integer multiple of $2\pi$ to yield zero phase; this has the effect of nudging fmin and fmax, slightly or greatly depending on all other parameters - see here.
An alternative variant that forces only the end of the chirp to be zero phase will exactly preserve fmin and fmax, by subtracting. Forcing all, zero phase at tmin and tmax without changing fmin and fmax is impossible.
Taking your parameters, with cosine and sine:
Code
import numpy as np
import matplotlib.pyplot as plt
def _lchirp(N, tmin=0, tmax=1, fmin=0, fmax=None):
fmax = fmax if fmax is not None else N / 2
t = np.linspace(tmin, tmax, N, endpoint=True)
a = (fmin - fmax) / (tmin - tmax)
b = (fmin*tmax - fmax*tmin) / (tmax - tmin)
phi = (a/2)*(t**2 - tmin**2) + b*(t - tmin)
phi *= (2*np.pi)
return phi
def lchirp(N, tmin=0, tmax=1, fmin=0, fmax=None, zero_phase_tmin=True, cos=True):
phi = _lchirp(N, tmin, tmax, fmin, fmax)
if zero_phase_tmin:
phi *= ( (phi[-1] - phi[-1] % (2*np.pi)) / phi[-1] )
else:
phi -= (phi[-1] % (2*np.pi))
fn = np.cos if cos else np.sin
return fn(phi)
#%%######################################################################
f0 = 7000
f1 = 17000
samplerate = 192000
T = .0013
N = int(samplerate * T)
tmin = 0
tmax = T
t = np.linspace(tmin, tmax, N, endpoint=True)
for zero_phase_min in (True, False):
for cos in (True, False):
x = lchirp(N=int(samplerate * T), tmin=tmin, tmax=tmax, fmin=f0, fmax=f1,
zero_phase_tmin=zero_phase_min, cos=cos)
plt.plot(t, x)
plt.title("cos={}, zero_phase_tmin={}".format(cos, zero_phase_min),
weight='bold', fontsize=17, loc='left')
plt.show() | {
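You can also verify the rescaling trick numerically: with `zero_phase_tmin=True` the final phase lands on an integer multiple of $2\pi$. A pure-Python check using the same formulas as `_lchirp` above, with your parameters:

```python
import math

def end_phase_after_rescale(tmin, tmax, fmin, fmax):
    """Final phase of the rescaled linear chirp, mod 2*pi (should be ~0)."""
    a = (fmin - fmax) / (tmin - tmax)
    b = (fmin * tmax - fmax * tmin) / (tmax - tmin)
    phi_end = 2 * math.pi * ((a / 2.0) * (tmax**2 - tmin**2) + b * (tmax - tmin))
    scale = (phi_end - phi_end % (2 * math.pi)) / phi_end  # same trick as above
    return (phi_end * scale) % (2 * math.pi)

end = end_phase_after_rescale(tmin=0.0, tmax=0.0013, fmin=7000.0, fmax=17000.0)
# `end` is zero up to floating-point error (it may be reported as a value
# just below 2*pi, which is the same angle)
```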
"domain": "dsp.stackexchange",
"id": 10198,
"tags": "python, chirp, sweep"
} |
What is the difference between 4th generation sequencing and NGS? | Question: The generation of sequencing technologies has come on leaps and bounds and there are stark differences between the types of technology used. There is a great Q&A here What is the difference between second and third generation sequencing?
NGS refers to high throughput massively parallel sequencing technology (See illumina's marketing page). 4th generation sequencing has been used to describe single-molecule nanopore-based technology (Nanopore-based Fourth-generation DNA Sequencing Technology).
So is NGS just a massively parallel version of 4th gen, or is there a more fundamental difference?
Answer: I think that all of this is pure marketing and can be safely ignored. There is no basis for the generations of sequencing in chemistry or instrumentation, except possibly that second-generation (Illumina) was a massive parallelization of first-generation sequencing (Sanger) through clever chemistry and instrument modifications. It is merely "I want to argue that this technology occupies a particular niche right now".
All of these terms are obviously very imprecise. I had not heard "4th-generation" before; having analyzed all of these data types, I personally would lump nanopore into the third-generation sequencing box with PacBio: they are long, relatively noisy reads that grew popular around the same time for roughly the same applications and are currently under very active research and development.
I would allow that these two technologies (PacBio and nanopore) are definitely distinct from Sanger (much older, now fairly restricted to specific applications like confirming plasmid sequences) and Illumina (older, mature tech with deep market penetration, making tons of money and trying to hold onto market share).
"NGS" itself is a rather silly term for similar reasons, as it is now clearly a technology that is fighting for relevance in certain applications where it was once dominant (e.g. genome assembly). If we must apply generational terminology to Illumina, it is clearly second-generation. This is one of Joe Felsenstein's oft-repeated rants.
Nanopore sequencing itself was, as far as I know, established in 1996, whereas the first publication I am aware of that forms the basis for PacBio is from 2003. For comparison, the Solexa sequencing that later became the basis of Illumina instruments was introduced in 2005. So any meaningful chronology does not support these generations either. | {
"domain": "biology.stackexchange",
"id": 11693,
"tags": "lab-techniques, dna-sequencing, sequence-analysis, rna-sequencing"
} |
How to calculate the new concentration of a solution after adding more solute and convert it to ppm? | Question: My question is the following:
$2~\mathrm{g}$ table salt is added to $0.5~\mathrm{m^3}$ water whose salt concentration is $10~\mathrm{mg\,L^{-1}}$. Compute the salt concentration of the mixture in ppm.
I am fairly new to chemistry; actually, this is my first time taking a chemistry course at this level, so I'm having quite a hard time. Therefore, if anyone could recommend some websites where I could receive help, or even videos such as those on Khan Academy, that would be greatly appreciated!
As for the question itself, all I know is that $2~\mathrm{g} = 2000~\mathrm{mg}$
and ppm is calculated by:
$$\text{ppm} = \frac{\text{grams of solute}}{\text{grams of solution}}\times 1000000 $$
However, I'm confused as to how many grams of solute and grams of solution there will be in the mixture after adding the $2~\mathrm{g}$ of salt.
Answer: Let's think about it:
You have the formula of the mass concentration $C_m$:
$C_m = \frac{m}{V}$,
where $m$ is the mass in grams of solute and $V$ the volume in liters of the solution.
A first step for you would be to calculate the mass of solute already in the solution: your solution has a volume of $0.5~\mathrm m^3$, which is $V=500~\mathrm L$, with a mass concentration of $C_m=10~\mathrm{mg\cdot L^{-1}}$. Therefore, there is already a mass
$m=C_m~\times~V = 10\cdot10^{-3} \times 500 = 5~\mathrm g$
of salt in your solution.
To this mass, you add $2~\mathrm g$ of salt, so the mass of solute in your solution is now $m=5~\mathrm g + 2~\mathrm g = 7~\mathrm g$.
So, in the end you have a mass $m=7~\mathrm g$ of salt.
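Once you have tried the remaining step yourself, you can check the whole arithmetic with a short Python sketch (assuming, as above, that $1~\mathrm L$ of water weighs $1~\mathrm{kg}$ and that the added salt does not change the volume):

```python
# Cross-check of the worked example (sketch; assumes 1 L of water
# weighs 1 kg and the added salt does not change the volume).

def total_solute_mass_g(c_mg_per_l, volume_l, added_g):
    """Solute mass (g) after adding `added_g` grams of salt to a
    solution of concentration `c_mg_per_l` (mg/L) and volume `volume_l` (L)."""
    existing_g = c_mg_per_l * 1e-3 * volume_l   # m = C_m * V
    return existing_g + added_g

def ppm_by_mass(solute_g, solution_g):
    """Parts per million by mass: grams of solute per gram of solution, times 1e6."""
    return solute_g / solution_g * 1e6

volume_l = 500.0                                     # 0.5 m^3 = 500 L
solute_g = total_solute_mass_g(10.0, volume_l, 2.0)  # 5 g + 2 g = 7 g
solution_g = volume_l * 1000.0 + solute_g            # water mass + salt mass
print(solute_g)                                      # 7.0
print(round(ppm_by_mass(solute_g, solution_g), 2))   # 14.0
```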
I'll let you compute the mass of solution, knowing that $1~\mathrm L$ of water weighs $1~\mathrm{kg}$. | {
"domain": "chemistry.stackexchange",
"id": 5644,
"tags": "homework, solubility, solutions, concentration"
} |
SAT - Hardness of determining backbone literals | Question: Let $F$ be a CNF formula. Let $l$ be one of $F$'s literals.
Question
What is the complexity of determining whether $l$ is a backbone literal or not? The obvious way to do that is to propagate $\lnot l$ on $F$, obtaining $F'$: $l$ is then a backbone literal of $F$ if and only if $F'$ is unsatisfiable. I'm asking if there is another way.
Answer: The complexity is co-NP complete, because the problem reduces to the unsatisfiability of $F \land \lnot l$. I have a proof of the completeness in my PhD thesis, Claim 2 (if I may blatantly advertise myself).
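To make the reduction concrete, here is a minimal Python sketch; the brute-force `satisfiable` is for illustration only, and in practice you would hand $F \land \lnot l$ to a real SAT solver:

```python
# l is a backbone literal of F iff F AND (not l) is unsatisfiable.
# Clauses use DIMACS-style nonzero integers: 2 means x2, -2 means NOT x2.
from itertools import product

def satisfiable(cnf, n_vars):
    """Brute-force satisfiability check over all 2^n_vars assignments."""
    for bits in product([False, True], repeat=n_vars):
        assign = {i + 1: b for i, b in enumerate(bits)}
        if all(any(assign[abs(l)] == (l > 0) for l in clause) for clause in cnf):
            return True
    return False

def is_backbone(cnf, n_vars, lit):
    """True iff `lit` holds in every satisfying assignment of cnf."""
    return not satisfiable(cnf + [[-lit]], n_vars)

# F = (x1) AND (x1 OR x2): x1 is a backbone literal, x2 is not.
f = [[1], [1, 2]]
print(is_backbone(f, 2, 1))  # True
print(is_backbone(f, 2, 2))  # False
```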
There's a small subtlety for unsatisfiable formulas: if your definition of a backbone means that all literals are backbones of an unsatisfiable formula, then the problem stays in co-NP. If, however, the set of backbone literals is empty for an unsatisfiable formula, you also need NP. | {
"domain": "cstheory.stackexchange",
"id": 776,
"tags": "cc.complexity-theory, ds.algorithms, sat"
} |
Costmap timeout | Question:
When using amcl, having set the initial pose, I'm getting a lot of warnings related to the costmap:
Costmap2DROS transform timeout
Eventually followed by:
[ERROR] [1301261347.311430182]: Connectivity Error: Could not find a common time /base_link and /map.
My costmap parameters look like the following:
local_costmap:
global_frame: /odom
robot_base_frame: /base_link
update_frequency: 2.0
publish_frequency: 2.0
static_map: false
rolling_window: true
width: 5.0
height: 5.0
resolution: 0.1
transform_tolerance: 0.5
global_costmap:
global_frame: /map
robot_base_frame: /base_link
update_frequency: 2.0
publish_frequency: 2.0
static_map: true
transform_tolerance: 0.5
rolling_window: true
width: 20.0
height: 20.0
resolution: 0.1
obstacle_range: 2.5
raytrace_range: 3.0
robot_radius: 0.3
footprint: [[-0.18, -0.24], [0.11, -0.24], [0.11, 0.24], [-0.18, 0.24]]
inflation_radius: 0.4
observation_sources: scan
scan: {sensor_frame: /kinect_fake_laser, data_type: LaserScan, topic: /scan, marking: true, clearing: true}
I've checked /odom and it is indeed being published regularly throughout. Any ideas on what might be causing the above warnings and errors?
RESULTS: for all Frames
Frames:
Frame: /base published by /robot_state_publisher Average Delay: 0.00295674 Max Delay: 0.00692934
Frame: /base_link published by /odometry Average Delay: 0.0201343 Max Delay: 0.0437646
Frame: /collar published by /robot_state_publisher Average Delay: 0.00296054 Max Delay: 0.00693374
Frame: /front_side published by /robot_state_publisher Average Delay: 0.00296375 Max Delay: 0.00693695
Frame: /head_pan_bracket published by /robot_state_publisher Average Delay: 0.00296677 Max Delay: 0.00693988
Frame: /head_tilt_servo published by /robot_state_publisher Average Delay: 0.0029704 Max Delay: 0.00694373
Frame: /head_tilt_support_left published by /robot_state_publisher Average Delay: 0.00297362 Max Delay: 0.00694736
Frame: /head_tilt_support_right published by /robot_state_publisher Average Delay: 0.00297716 Max Delay: 0.0069512
Frame: /joystick published by /robot_state_publisher Average Delay: 0.00298044 Max Delay: 0.00695455
Frame: /joystick_box published by /robot_state_publisher Average Delay: 0.00298363 Max Delay: 0.00695825
Frame: /kinect_depth_camera published by /robot_state_publisher Average Delay: 0.00298672 Max Delay: 0.00696168
Frame: /kinect_fake_laser published by /robot_state_publisher Average Delay: 0.0029899 Max Delay: 0.00696531
Frame: /kinect_illumination published by /robot_state_publisher Average Delay: 0.00299316 Max Delay: 0.00696894
Frame: /kinect_illumination2 published by /robot_state_publisher Average Delay: 0.00299688 Max Delay: 0.00697271
Frame: /kinect_pivot published by /robot_state_publisher Average Delay: 0.00300013 Max Delay: 0.00697634
Frame: /kinect_rgb_camera published by /robot_state_publisher Average Delay: 0.0030034 Max Delay: 0.0069799
Frame: /kinect_sensor published by /robot_state_publisher Average Delay: 0.00300658 Max Delay: 0.0069834
Frame: /left_axel published by /robot_state_publisher Average Delay: 0.00300997 Max Delay: 0.00698717
Frame: /left_castor published by /robot_state_publisher Average Delay: 0.00301312 Max Delay: 0.00699108
Frame: /left_castor_stand published by /robot_state_publisher Average Delay: 0.00301631 Max Delay: 0.00699457
Frame: /left_castor_stand_support1 published by /robot_state_publisher Average Delay: 0.00301972 Max Delay: 0.00699834
Frame: /left_castor_stand_support2 published by /robot_state_publisher Average Delay: 0.00302338 Max Delay: 0.00700246
Frame: /left_castor_stand_support3 published by /robot_state_publisher Average Delay: 0.00302763 Max Delay: 0.00700686
Frame: /left_castor_stand_support4 published by /robot_state_publisher Average Delay: 0.00303146 Max Delay: 0.0070114
Frame: /left_motor published by /robot_state_publisher Average Delay: 0.0030348 Max Delay: 0.00701503
Frame: /left_side published by /robot_state_publisher Average Delay: 0.00303901 Max Delay: 0.0070188
Frame: /left_wheel published by /robot_state_publisher Average Delay: 0.00304233 Max Delay: 0.00702244
Frame: /left_wheel_inner published by /robot_state_publisher Average Delay: 0.00304612 Max Delay: 0.00702628
Frame: /left_wheel_support published by /robot_state_publisher Average Delay: 0.00305038 Max Delay: 0.00703012
Frame: /neck published by /robot_state_publisher Average Delay: 0.00305355 Max Delay: 0.00703438
Frame: /on_off published by /robot_state_publisher Average Delay: 0.00305671 Max Delay: 0.00703731
Frame: /on_off_switch published by /robot_state_publisher Average Delay: 0.00305977 Max Delay: 0.00704032
Frame: /rear_full_side published by /robot_state_publisher Average Delay: 0.00306319 Max Delay: 0.0070436
Frame: /right_axel published by /robot_state_publisher Average Delay: 0.00306662 Max Delay: 0.00704702
Frame: /right_castor published by /robot_state_publisher Average Delay: 0.00306983 Max Delay: 0.00705065
Frame: /right_castor_stand published by /robot_state_publisher Average Delay: 0.00307344 Max Delay: 0.00705435
Frame: /right_castor_stand_support1 published by /robot_state_publisher Average Delay: 0.00307702 Max Delay: 0.00705785
Frame: /right_castor_stand_support2 published by /robot_state_publisher Average Delay: 0.00308106 Max Delay: 0.00706211
Frame: /right_castor_stand_support3 published by /robot_state_publisher Average Delay: 0.0030851 Max Delay: 0.00706637
Frame: /right_castor_stand_support4 published by /robot_state_publisher Average Delay: 0.00308937 Max Delay: 0.00707077
Frame: /right_motor published by /robot_state_publisher Average Delay: 0.00309329 Max Delay: 0.00707489
Frame: /right_side published by /robot_state_publisher Average Delay: 0.00309697 Max Delay: 0.00707866
Frame: /right_wheel published by /robot_state_publisher Average Delay: 0.00310049 Max Delay: 0.00708271
Frame: /right_wheel_inner published by /robot_state_publisher Average Delay: 0.00310445 Max Delay: 0.00708676
Frame: /right_wheel_support published by /robot_state_publisher Average Delay: 0.00310843 Max Delay: 0.00709081
Frame: /shelf1 published by /robot_state_publisher Average Delay: 0.00311185 Max Delay: 0.00709472
Frame: /shelf2 published by /robot_state_publisher Average Delay: 0.00311519 Max Delay: 0.00709828
Frame: /start_button published by /robot_state_publisher Average Delay: 0.00311848 Max Delay: 0.00710199
Frame: /stop_button published by /robot_state_publisher Average Delay: 0.00312182 Max Delay: 0.00710576
Frame: /top_side published by /robot_state_publisher Average Delay: 0.00312518 Max Delay: 0.00710939
Node: /odometry 16.2818 Hz, Average Delay: 0.0201343 Max Delay: 0.0437646
Node: /robot_state_publisher 16.2618 Hz, Average Delay: 0.00303951 Max Delay: 0.0070183
Localisation launch file:
<launch>
<arg name="map_file" default="ros_map.yaml"/>
<!-- Run the map server -->
<node name="map_server" pkg="map_server" type="map_server" args="$(arg map_file)" respawn="true"/>
<node pkg="amcl" type="amcl" name="amcl" args="scan:=/scan" respawn="true">
<!-- Publish scans from best pose at a max of 10 Hz -->
<param name="odom_model_type" value="diff"/>
<param name="odom_alpha5" value="0.1"/>
<param name="transform_tolerance" value="0.5" />
<param name="gui_publish_rate" value="5"/>
<param name="laser_max_beams" value="30"/>
<param name="laser_max_range" value="3.0"/>
<param name="min_particles" value="500"/>
<param name="max_particles" value="5000"/>
<param name="kld_err" value="0.05"/>
<param name="kld_z" value="0.99"/>
<param name="odom_alpha1" value="0.2"/>
<param name="odom_alpha2" value="0.2"/>
<!-- translation std dev, m -->
<param name="odom_alpha3" value="0.8"/>
<param name="odom_alpha4" value="0.2"/>
<param name="laser_z_hit" value="0.5"/>
<param name="laser_z_short" value="0.05"/>
<param name="laser_z_max" value="0.05"/>
<param name="laser_z_rand" value="0.5"/>
<param name="laser_sigma_hit" value="0.4"/>
<param name="laser_lambda_short" value="0.1"/>
<param name="laser_lambda_short" value="0.1"/>
<param name="laser_model_type" value="likelihood_field"/>
<!-- <param name="laser_model_type" value="beam"/> -->
<param name="laser_likelihood_max_dist" value="2.0"/>
<param name="update_min_d" value="0.2"/>
<param name="update_min_a" value="0.15"/>
<param name="odom_frame_id" value="odom"/>
<param name="resample_interval" value="1"/>
<param name="transform_tolerance" value="0.1"/>
<param name="recovery_alpha_slow" value="0.001"/>
<param name="recovery_alpha_fast" value="0.1"/>
<param name="initial_pose_x" value="0.0"/>
<param name="initial_pose_y" value="0.0"/>
<param name="initial_pose_a" value="0.0"/>
<param name="initial_cov_xx" value="0.1"/>
<param name="initial_cov_yy" value="0.1"/>
<param name="initial_cov_aa" value="0.05"/>
</node>
</launch>
Originally posted by JediHamster on ROS Answers with karma: 995 on 2011-03-27
Post score: 3
Original comments
Comment by Eric Perko on 2011-03-27:
Can you include the output of "rosrun tf tf_monitor" or otherwise let us see what the tf statistics are for your tf tree?
Answer:
The commanded velocity not being achieved shouldn't have any effect on the TrajectoryPlanner's ability to run. In fact, I've had our robot sit e-stopped for many minutes with a velocity being commanded and never seen anything like this. I'm very surprised that upping the velocity limits solves the costmap timeout problem for you as it should be totally unrelated.
I'm glad that you have things working, but I'm still quite puzzled as to what was actually going on.
Originally posted by eitan with karma: 2743 on 2011-04-08
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 5224,
"tags": "navigation, costmap-2d, amcl, transform"
} |
Partitioning an array based on a condition in Javascript | Question: Using the array.filter function, I can efficiently pull out all elements that do or do not meet a condition:
let large = [12, 5, 8, 130, 44].filter((x) => x > 10);
let small = [12, 5, 8, 130, 44].filter((x) => !(x > 10));
However, in the example above, I'm iterating over the array twice and performing the same test each time. Is there a simple way to generate both 'large' and 'small' in a single pass over the array? In particular, if the callback to evaluate whether an element should be kept is expensive, I'd like to avoid calling it twice.
Answer: Using TypeScript/ECMAScript 6 syntax it can be achieved this way. I am not sure whether it's more or less elegant compared to the original variant, but
It does the job;
Requires only one run;
Can be further chained with map() or other functions.
const [small, large] = // Use "deconstruction" style assignment
[12, 5, 8, 130, 44]
.reduce((result, element) => {
result[element <= 10 ? 0 : 1].push(element); // Determine and push to small/large arr
return result;
},
[[], []]); // Default small/large arrays are empty
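If this pattern recurs, the same single-pass idea can be wrapped in a small helper (the name `partition` here is our own, not from any library):

```javascript
// Single-pass partition: the predicate runs exactly once per element.
function partition(array, predicate) {
  const result = [[], []];
  for (const element of array) {
    result[predicate(element) ? 0 : 1].push(element);
  }
  return result;
}

const [large, small] = partition([12, 5, 8, 130, 44], (x) => x > 10);
console.log(large); // [ 12, 130, 44 ]
console.log(small); // [ 5, 8 ]
```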
More options can be found in various StackOverflow questions. | {
"domain": "codereview.stackexchange",
"id": 29537,
"tags": "javascript, array, callback"
} |
Pygame version of my 3D Tic Tac Toe/Connect 4 | Question: I posted a question a while back asking for some feedback on the code of a game I made (it was limited to typing the input and drawing the output in ASCII).
Now I've got it linked up with pygame. Does anything look out of place? Do you notice any bugs? Do the colours work? Is there anything particularly annoying?
Use CTRL+SHIFT+d while in options (hit ESC to bring them up if you've already started the game) to reveal the debug settings, and enable to see the mouse coordinate conversion and AI stuff going on under the hood.
Instructions
The aim is to get as many complete rows as you can, and the grid will flip every 3 turns to throw you off; otherwise it gets a bit easy. The game ends when all spaces are taken (though this is a bit annoying when you are having to fill in the last few, so I'll just make it end when there are no points left).
At this time, I still need to make the instructions page and a 'player x won' page, though everything else is working without bugs as far as I can tell.
Normal game:
With debug enabled:
To see the entire thing, you'll need this link. If you don't have pygame (or python for that matter), here is a standalone version of the game from py2exe.
class MouseToBlockID(object):
"""Converts mouse coordinates into the games block ID.
The first part is to calculate which level has been clicked, which
then allows the code to treat the coordinates as level 0. From this
point, it finds the matching chunks from the new coordinates which
results in two possible blocks, then it calculates how they are
conected (highest one is to the left if even+odd, otherwise it's to
the right), and from that, it's possible to figure out which block
the cursor is over.
A chunk is a cell of a 2D grid overlaid over the isometric grid.
Each block is split into 4 chunks, and each chunk overlaps two
blocks.
"""
def __init__(self, x, y, grid_main):
self.x = x
self.y = y
self.y_original = y
self.grid_main = grid_main
self._to_chunk()
def _to_chunk(self):
"""Calculate which chunk the coordinate is on."""
y_offset = self.grid_main.size_y * 2 + self.grid_main.padding
self.y_coordinate = int((self.grid_main.centre - self.y) / y_offset)
self.y += y_offset * self.y_coordinate
chunk_size_x = self.grid_main.size_x / self.grid_main.segments
chunk_size_y = self.grid_main.size_y / self.grid_main.segments
self.height = int((self.grid_main.centre - self.y) / chunk_size_y)
self.width = int((self.x + self.grid_main.size_x + chunk_size_x) / chunk_size_x) -1
def find_x_slice(self):
"""Find block IDs that are on the x segment"""
past_middle = self.width >= self.grid_main.segments
values = []
if self.width >= self.grid_main.segments:
count = 0
while True:
n_multiple = self.grid_main.segments * count
width_addition = self.width - self.grid_main.segments + count
if width_addition < self.grid_main.segments:
values.append(n_multiple + width_addition)
if width_addition < self.grid_main.segments - 1:
values.append(n_multiple + width_addition + 1)
else:
break
count += 1
elif self.width >= 0:
starting_point = self.grid_main.segments - self.width
values.append((starting_point - 1) * self.grid_main.segments)
width_addition = 0
for i in range(starting_point, self.grid_main.segments):
n_multiple = self.grid_main.segments * i
values.append(n_multiple + width_addition)
if 0 < i < self.grid_main.segments:
values.append(n_multiple + width_addition + 1)
else:
break
width_addition += 1
return values
def find_y_slice(self):
"""Find block IDs that are on the y segment"""
height = self.height
past_middle = height >= self.grid_main.segments
if past_middle:
height = 2 * self.grid_main.segments - 1 - height
values = []
count = 0
while True:
n_multiple = count * self.grid_main.segments
height_addition = height - count
if height_addition >= 0:
values.append(n_multiple + height_addition)
if height_addition >= 1:
values.append(n_multiple + height_addition - 1)
else:
break
count += 1
if past_middle:
values = [pow(self.grid_main.segments, 2) - i - 1 for i in values]
return values
def find_overlap(self):
"""Combine the block IDs to find the 1 or 2 matching ones."""
x_blocks = self.find_x_slice()
y_blocks = self.find_y_slice()
if self.y_coordinate >= self.grid_main.segments:
return []
return [i for i in x_blocks if i in y_blocks]
def find_block_coordinates(self):
"""Calculate the coordinates of the block IDs, or create a fake
block if one is off the edge.
Returns a list sorted by height.
If only one value is given for which blocks are in the chunk, that
means the player is on the edge of the board. By creating a fake
block off the side of the board, it allows the correct maths to be
done without any modification.
"""
matching_blocks = self.find_overlap()
if not matching_blocks:
return None
matching_coordinates = {i: self.grid_main.relative_coordinates[i]
for i in matching_blocks}
#Create new value to handle 'off edge' cases
if len(matching_coordinates.keys()) == 1:
single_coordinate = matching_coordinates[matching_blocks[0]]
new_location = (0, -self.grid_main.centre)
#Workaround to handle the cases in the upper half
if self.height < self.grid_main.segments:
top_row_right = range(1, self.grid_main.segments)
top_row_left = [i * self.grid_main.segments
for i in range(1, self.grid_main.segments)]
if self.width >= self.grid_main.segments:
top_row_right.append(0)
else:
top_row_left.append(0)
if matching_blocks[0] in top_row_left:
new_location = (single_coordinate[0] - self.grid_main.x_offset,
single_coordinate[1] + self.grid_main.y_offset)
elif matching_blocks[0] in top_row_right:
new_location = (single_coordinate[0] + self.grid_main.x_offset,
single_coordinate[1] + self.grid_main.y_offset)
matching_coordinates[-1] = new_location
return sorted(matching_coordinates.items(), key=lambda (k, v): v[1])
def calculate(self, debug=0):
"""Calculate which block ID the coordinates are on.
This calculates the coordinates of the line between the two
blocks, then depending on if a calculation results in a positive
or negative number, it's possible to detect which block it falls
on.
By returning the (x1, y1) and (x2, y2) values, they can be linked
with turtle to see it how it works under the hood.
"""
all_blocks = self.find_block_coordinates()
if all_blocks is None:
return None
highest_block = all_blocks[1][1]
line_direction = self.width % 2 == self.height % 2
if self.grid_main.segments % 2:
line_direction = not line_direction
#print self.width, self.height
x1, y1 = (highest_block[0],
highest_block[1] - self.grid_main.y_offset * 2)
negative = int('-1'[not line_direction:])
x2, y2 = (x1 + self.grid_main.x_offset * negative,
y1 + self.grid_main.y_offset)
sign = (x2 - x1) * (self.y - y1) - (y2 - y1) * (self.x - x1)
sign *= negative
#Return particular things when debugging
if debug == 1:
return (x1, y1), (x2, y2)
if debug == 2:
return sign
selected_block = all_blocks[sign > 0][0]
#If extra block was added, it was -1, so it is invalid
if selected_block < 0:
return None
return selected_block + self.y_coordinate * pow(self.grid_main.segments, 2)
class CoordinateConvert(object):
def __init__(self, width, height):
self.width = width
self.height = height
self.centre = (self.width / 2, self.height / 2)
def to_pygame(self, x, y):
x = x - self.centre[0]
y = self.centre[1] - y
return (x, y)
def to_canvas(self, x, y):
x = x + self.centre[0]
y = self.centre[1] - y
return (x, y)
class GridDrawData(object):
"""Hold the relevant data for the grid, to allow it to be shown."""
def __init__(self, length, segments, angle, padding=5):
self.length = length
self.segments = segments
self.angle = angle
self.padding = padding
self._calculate()
def _calculate(self):
"""Perform the main calculations on the values in __init__.
This allows updating any of the values, such as the isometric
angle, without creating a new class."""
self.size_x = self.length * math.cos(math.radians(self.angle))
self.size_y = self.length * math.sin(math.radians(self.angle))
self.x_offset = self.size_x / self.segments
self.y_offset = self.size_y / self.segments
self.chunk_height = self.size_y * 2 + self.padding
self.centre = (self.chunk_height / 2) * self.segments - self.padding / 2
self.size_x_sm = self.size_x / self.segments
self.size_y_sm = self.size_y / self.segments
#self.segments_sq = pow(self.segments, 2)
#self.grid_data_len = pow(self.segments, 3)
#self.grid_data_range = range(self.grid_data_len)
self.length_small = self.length / self.segments
self.relative_coordinates = []
position = (0, self.centre)
for j in range(self.segments):
checkpoint = position
for i in range(self.segments):
self.relative_coordinates.append(position)
position = (position[0] + self.x_offset,
position[1] - self.y_offset)
position = (checkpoint[0] - self.x_offset,
checkpoint[1] - self.y_offset)
#Absolute coordinates for pygame
chunk_coordinates = [(0, - i * self.chunk_height) for i in range(self.segments)]
self.line_coordinates = [((self.size_x, self.centre - self.size_y),
(self.size_x, self.size_y - self.centre)),
((-self.size_x, self.centre - self.size_y),
(-self.size_x, self.size_y - self.centre)),
((0, self.centre - self.size_y * 2),
(0, -self.centre))]
for i in range(self.segments):
chunk_height = -i * self.chunk_height
self.line_coordinates += [((self.size_x, self.centre + chunk_height - self.size_y),
(0, self.centre + chunk_height - self.size_y * 2)),
((-self.size_x, self.centre + chunk_height - self.size_y),
(0, self.centre + chunk_height - self.size_y * 2))]
for coordinate in self.relative_coordinates:
start = (coordinate[0], chunk_height + coordinate[1])
self.line_coordinates += [(start,
(start[0] + self.size_x_sm, start[1] - self.size_y_sm)),
(start,
(start[0] - self.size_x_sm, start[1] - self.size_y_sm))]
class RunPygame(object):
overlay_marker = '/'
player_colours = [GREEN, LIGHTBLUE]
empty_colour = YELLOW
fps_idle = 15
fps_main = 30
fps_smooth = 120
padding = (5, 10)
overlay_width = 500
option_padding = 2
def __init__(self, C3DObject, screen_width=640, screen_height=860, default_length=200, default_angle=24):
self.C3DObject = C3DObject
self.width = screen_width
self.height = screen_height
self.length = default_length
self.angle = default_angle
self.player = int(not self.C3DObject.current_player)
self.convert = CoordinateConvert(self.width, self.height)
self.to_pygame = self.convert.to_pygame
self.to_canvas = self.convert.to_canvas
def _next_player(self):
self.player = int(not self.player)
def _previous_player(self):
self._next_player()
def play(self, p1=False, p2=Connect3D.bot_difficulty_default, allow_shuffle=True, end_when_no_points_left=False):
#Setup pygame
pygame.init()
self.screen = pygame.display.set_mode((self.width, self.height))
self.clock = pygame.time.Clock()
pygame.display.set_caption('Connect 3D')
background_colour = BACKGROUND
self.backdrop = pygame.Surface((self.width, self.height))
self.backdrop.set_alpha(196)
self.backdrop.fill(WHITE)
#Import the font
self.font_file = 'Miss Monkey.ttf'
try:
pygame.font.Font(self.font_file, 0)
except IOError:
raise IOError('unable to load font - download from http://www.dafont.com/miss-monkey.font')
self.font_lg = pygame.font.Font(self.font_file, 36)
self.font_lg_size = self.font_lg.render('', 1, BLACK).get_rect()[3]
self.font_md = pygame.font.Font(self.font_file, 24)
self.font_md_size = self.font_md.render('', 1, BLACK).get_rect()[3]
self.font_sm = pygame.font.Font(self.font_file, 18)
self.font_sm_size = self.font_sm.render('', 1, BLACK).get_rect()[3]
self.draw_data = GridDrawData(self.length,
self.C3DObject.segments,
self.angle,
padding = self.angle / self.C3DObject.segments)
#NOTE: These will all be cleaned up later, the grouping isn't great currently
held_keys = {'angle': 0,
'size': 0}
#Store one off instructions to wipe later
game_flags = {'clicked': False,
'mouse_used': True,
'quit': False,
'recalculate': False,
'reset': False,
'hover': False,
'flipped': False,
'disable_background_clicks': False,
'winner': None}
#Store information that shouldn't be wiped
game_data = {'players': [p1, p2],
'overlay': 'options',
'move_number': 0,
'shuffle': [allow_shuffle, 3],
'debug': False}
#Store temporary things to update
store_data = {'waiting': False,
'waiting_start': 0,
'shuffle_count': 0,
'temp_fps': self.fps_main,
'player_hover': None,
'shuffle_hover': None,
'new_game': False,
'continue': False,
'exit': False,
'instructions': False,
'debug_hover': None}
block_data = {'id': None,
'object': None,
'taken': False}
tick_data = {'old': 0,
'new': 0,
'update': 4, #How many ticks between each held key command
'total': 0}
mouse_data = pygame.mouse.get_pos()
#How long to wait before accepting a move
moving_wait = 0.5
#For controlling how the angle and length of grid update
angle_increment = 0.25
angle_max = 35
length_exponential = 1.1
length_increment = 0.5
length_multiplier = 0.01
time_current = time.time()
time_update = 0.01
while True:
self.clock.tick(store_data['temp_fps'] or self.fps_idle)
tick_data['new'] = pygame.time.get_ticks()
if game_flags['quit']:
return self.C3DObject
#Check if no spaces are left
if '' not in self.C3DObject.grid_data:
game_flags['winner'] = self.C3DObject._get_winning_player()
#Need to come up with some menu for the winner
#Print so it reminds me each time this happens
print 'finish this'
#Reset loop
self.screen.fill(background_colour)
if tick_data['total']:
game_flags['recalculate'] = False
game_flags['mouse_used'] = False
game_flags['clicked'] = False
game_flags['flipped'] = False
game_flags['disable_background_clicks'] = False
store_data['temp_fps'] = None
tick_data['total'] += 1
#Reinitialise the grid
if game_flags['reset']:
game_flags['reset'] = False
game_data['move_number'] = 0
game_data['shuffle'][0] = allow_shuffle
game_data['players'] = (p1, p2)
self.C3DObject = Connect3D(self.C3DObject.segments)
game_flags['hover'] = None
game_flags['recalculate'] = True
store_data['waiting'] = False
game_flags['winner'] = None
if game_flags['hover'] is not None:
if self.C3DObject.grid_data[game_flags['hover']] == self.overlay_marker:
self.C3DObject.grid_data[game_flags['hover']] = ''
game_flags['hover'] = None
if game_data['overlay']:
game_flags['disable_background_clicks'] = True
#Delay each go
if store_data['waiting']:
game_flags['disable_background_clicks'] = True
if store_data['waiting_start'] < time.time():
game_flags['recalculate'] = True
attempted_move = self.C3DObject.make_move(store_data['waiting'][1], store_data['waiting'][0])
if attempted_move is not None:
game_data['move_number'] += 1
self.C3DObject.update_score()
store_data['shuffle_count'] += 1
if store_data['shuffle_count'] >= game_data['shuffle'][1] and game_data['shuffle'][0]:
store_data['shuffle_count'] = 0
self.C3DObject.shuffle()
game_flags['flipped'] = True
else:
game_flags['flipped'] = False
else:
self._next_player()
print "Invalid move: {}".format(store_data['waiting'][0])
store_data['waiting'] = False
else:
try:
self.C3DObject.grid_data[store_data['waiting'][0]] = 9 - store_data['waiting'][1]
except TypeError:
print store_data['waiting'], ai_turn
raise TypeError('trying to get to the bottom of this')
#Run the AI
ai_turn = None
if game_data['players'][self.player] is not False:
if not game_flags['disable_background_clicks'] and game_flags['winner'] is None:
ai_turn = SimpleC3DAI(self.C3DObject, self.player, difficulty=game_data['players'][self.player]).calculate_next_move()
#Event loop
for event in pygame.event.get():
if event.type == pygame.QUIT:
return
#Get single key presses
if event.type == pygame.KEYDOWN:
game_flags['recalculate'] = True
if event.key == pygame.K_ESCAPE:
if game_data['overlay'] is None:
game_data['overlay'] = 'options'
else:
game_data['overlay'] = None
if event.key == pygame.K_RIGHTBRACKET:
self.C3DObject.segments += 1
game_flags['reset'] = True
if event.key == pygame.K_LEFTBRACKET:
self.C3DObject.segments -= 1
self.C3DObject.segments = max(1, self.C3DObject.segments)
game_flags['reset'] = True
if event.key == pygame.K_UP:
held_keys['angle'] = 1
if event.key == pygame.K_DOWN:
held_keys['angle'] = -1
if event.key == pygame.K_RIGHT:
held_keys['size'] = 1
if event.key == pygame.K_LEFT:
held_keys['size'] = -1
#Get mouse clicks
if event.type == pygame.MOUSEBUTTONDOWN:
game_flags['clicked'] = event.button
game_flags['mouse_used'] = True
if event.type == pygame.MOUSEMOTION:
game_flags['mouse_used'] = True
#Get held down key presses, but only update if enough ticks have passed
key = pygame.key.get_pressed()
update_yet = False
if tick_data['new'] - tick_data['old'] > tick_data['update']:
update_yet = True
tick_data['old'] = pygame.time.get_ticks()
if held_keys['angle']:
if not (key[pygame.K_UP] or key[pygame.K_DOWN]):
held_keys['angle'] = 0
elif update_yet:
self.draw_data.angle += angle_increment * held_keys['angle']
game_flags['recalculate'] = True
store_data['temp_fps'] = self.fps_smooth
if held_keys['size']:
if not (key[pygame.K_LEFT] or key[pygame.K_RIGHT]):
held_keys['size'] = 0
elif update_yet:
length_exp = (max(length_increment,
(pow(self.draw_data.length, length_exponential)
- 1 / length_increment))
* length_multiplier)
self.draw_data.length += length_exp * held_keys['size']
game_flags['recalculate'] = True
store_data['temp_fps'] = self.fps_smooth
#Update mouse information
if game_flags['mouse_used'] or game_flags['recalculate']:
game_flags['recalculate'] = True
mouse_data = pygame.mouse.get_pos()
x, y = self.to_pygame(*mouse_data)
block_data['object'] = MouseToBlockID(x, y, self.draw_data)
block_data['id'] = block_data['object'].calculate()
block_data['taken'] = True
if block_data['id'] is not None and ai_turn is None:
block_data['taken'] = self.C3DObject.grid_data[block_data['id']] != ''
#If mouse was clicked
if not game_flags['disable_background_clicks']:
if game_flags['clicked'] == 1 and not block_data['taken'] or ai_turn is not None:
store_data['waiting'] = (ai_turn if ai_turn is not None else block_data['id'], self.player)
store_data['waiting_start'] = time.time() + moving_wait
self._next_player()
#Highlight square
if not block_data['taken'] and not store_data['waiting'] and not game_data['overlay']:
self.C3DObject.grid_data[block_data['id']] = self.overlay_marker
game_flags['hover'] = block_data['id']
#Recalculate the data to draw the grid
if game_flags['recalculate']:
if not store_data['temp_fps']:
store_data['temp_fps'] = self.fps_main
self.draw_data.segments = self.C3DObject.segments
self.draw_data.length = float(max((pow(1 / length_increment, 2) * self.draw_data.segments), self.draw_data.length, 2))
self.draw_data.angle = float(max(angle_increment, min(89, self.draw_data.angle, angle_max)))
self.draw_data._calculate()
if game_flags['reset']:
continue
#Draw coloured squares
for i in self.C3DObject.range_data:
if self.C3DObject.grid_data[i] != '':
chunk = i / self.C3DObject.segments_squared
coordinate = list(self.draw_data.relative_coordinates[i % self.C3DObject.segments_squared])
coordinate[1] -= chunk * self.draw_data.chunk_height
square = [coordinate,
(coordinate[0] + self.draw_data.size_x_sm,
coordinate[1] - self.draw_data.size_y_sm),
(coordinate[0],
coordinate[1] - self.draw_data.size_y_sm * 2),
(coordinate[0] - self.draw_data.size_x_sm,
coordinate[1] - self.draw_data.size_y_sm),
coordinate]
#Player has mouse over square
block_colour = None
if self.C3DObject.grid_data[i] == self.overlay_marker:
if game_data['players'][self.player] is False:
block_colour = mix_colour(WHITE, WHITE, self.player_colours[self.player])
#Square is taken by a player
else:
j = self.C3DObject.grid_data[i]
#Square is being moved into, mix with red and white
mix = False
if isinstance(j, int) and j > 1:
j = 9 - j
moving_block = square
mix = True
block_colour = self.player_colours[j]
if mix:
block_colour = mix_colour(block_colour, GREY)
if block_colour is not None:
pygame.draw.polygon(self.screen,
block_colour,
[self.to_canvas(*corner)
for corner in square],
0)
#Draw grid
for line in self.draw_data.line_coordinates:
pygame.draw.aaline(self.screen,
BLACK,
self.to_canvas(*line[0]),
self.to_canvas(*line[1]),
1)
self._draw_score(game_flags['winner'])
if game_data['debug']:
self._draw_debug(block_data)
if game_data['overlay']:
store_data['temp_fps'] = self.fps_main
header_padding = self.padding[1] * 5
subheader_padding = self.padding[1] * 3
self.blit_list = []
self.rect_list = []
self.screen.blit(self.backdrop, (0, 0))
screen_width_offset = (self.width - self.overlay_width) / 2
current_height = header_padding + self.padding[1]
#Set page titles
if game_data['overlay'] == 'instructions':
title_message = 'Instructions/About'
subtitle_message = ''
elif game_data['move_number'] + bool(store_data['waiting']) and game_data['overlay'] == 'options':
title_message = 'Options'
subtitle_message = ''
else:
title_message = 'Connect 3D'
subtitle_message = 'By Peter Hunt'
title_text = self.font_lg.render(title_message, 1, BLACK)
title_size = title_text.get_rect()[2:]
self.blit_list.append((title_text, (self.padding[0] + screen_width_offset, current_height)))
current_height += self.padding[1] + title_size[1]
subtitle_text = self.font_md.render(subtitle_message, 1, BLACK)
subtitle_size = subtitle_text.get_rect()[2:]
self.blit_list.append((subtitle_text, (self.padding[0] + screen_width_offset, current_height)))
current_height += subtitle_size[1]
if subtitle_message:
current_height += header_padding
if game_data['overlay'] == 'options':
#Player options
players_unsaved = [p1, p2]
players_original = list(game_data['players'])
player_hover = store_data['player_hover']
store_data['player_hover'] = None
options = ['Human', 'Beginner', 'Easy', 'Medium', 'Hard', 'Extreme']
for player in range(len(game_data['players'])):
if players_unsaved[player] is False:
players_unsaved[player] = -1
else:
players_unsaved[player] = get_bot_difficulty(players_unsaved[player], _debug=True)
if players_original[player] is False:
players_original[player] = -1
else:
players_original[player] = get_bot_difficulty(players_original[player], _debug=True)
params = []
for i in range(len(options)):
params.append([i == players_unsaved[player] or players_unsaved[player] < 0 and not i,
i == players_original[player] or players_original[player] < 0 and not i,
[player, i] == player_hover])
option_data = self._draw_options('Player {}: '.format(player),
options,
params,
screen_width_offset,
current_height)
selected_option, options_size = option_data
current_height += options_size
if not player:
current_height += self.padding[1]
else:
current_height += subheader_padding
#Calculate mouse info
if selected_option is not None:
player_set = selected_option - 1
if player_set < 0:
player_set = False
store_data['player_hover'] = [player, selected_option]
if game_flags['clicked']:
if not player:
p1 = player_set
else:
p2 = player_set
if not game_data['move_number']:
game_data['players'] = (p1, p2)
#Ask whether to flip the grid
options = ['Yes', 'No']
params = []
for i in range(len(options)):
params.append([not i and allow_shuffle or i and not allow_shuffle,
not i and game_data['shuffle'][0] or i and not game_data['shuffle'][0],
not i and store_data['shuffle_hover'] or i and not store_data['shuffle_hover'] and store_data['shuffle_hover'] is not None])
option_data = self._draw_options('Flip grid every 3 goes? ',
['Yes', 'No'],
params,
screen_width_offset,
current_height)
selected_option, options_size = option_data
current_height += subheader_padding + options_size
#Calculate mouse info
store_data['shuffle_hover'] = None
if selected_option is not None:
store_data['shuffle_hover'] = not selected_option
if game_flags['clicked']:
allow_shuffle = not selected_option
if not game_data['move_number']:
game_data['shuffle'][0] = allow_shuffle
#Toggle hidden debug option with ctrl+alt+d
if not (not key[pygame.K_d]
or not (key[pygame.K_RCTRL] or key[pygame.K_LCTRL])
or not (key[pygame.K_RALT] or key[pygame.K_LALT])):
store_data['debug_hover']
options = ['Yes', 'No']
params = []
for i in range(len(options)):
params.append([not i and game_data['debug'] or i and not game_data['debug'],
not i and game_data['debug'] or i and not game_data['debug'],
not i and store_data['debug_hover'] or i and not store_data['debug_hover'] and store_data['debug_hover'] is not None])
option_data = self._draw_options('Show debug info? ',
['Yes', 'No'],
params,
screen_width_offset,
current_height)
selected_option, options_size = option_data
store_data['debug_hover'] = None
if selected_option is not None:
store_data['debug_hover'] = not selected_option
if game_flags['clicked']:
game_data['debug'] = not selected_option
current_height += subheader_padding + options_size
box_spacing = (header_padding + self.padding[1]) if game_data['move_number'] else (self.padding[1] + self.font_lg_size)
box_height = [current_height]
#Tell to restart game
if game_data['move_number']:
current_height += box_spacing
restart_message = 'Restart game to apply settings.'
restart_text = self.font_md.render(restart_message, 1, BLACK)
restart_size = restart_text.get_rect()[2:]
self.blit_list.append((restart_text, ((self.width - restart_size[0]) / 2, current_height)))
current_height += header_padding
#Continue button
if self._pygame_button('Continue',
store_data['continue'],
current_height,
-1):
store_data['continue'] = True
if game_flags['clicked']:
game_data['overlay'] = None
else:
store_data['continue'] = False
box_height.append(current_height)
current_height += box_spacing
#Instructions button
if self._pygame_button('Instructions' if game_data['move_number'] else 'Help',
store_data['instructions'],
box_height[0],
0 if game_data['move_number'] else 1):
store_data['instructions'] = True
if game_flags['clicked']:
game_data['overlay'] = 'instructions'
else:
store_data['instructions'] = False
#New game button
if self._pygame_button('New Game' if game_data['move_number'] else 'Start',
store_data['new_game'],
box_height[bool(game_data['move_number'])],
bool(game_data['move_number']) if game_data['move_number'] else -1):
store_data['new_game'] = True
if game_flags['clicked']:
game_flags['reset'] = True
game_data['overlay'] = None
else:
store_data['new_game'] = False
#Quit button
if self._pygame_button('Quit to Desktop' if game_data['move_number'] else 'Quit',
store_data['exit'],
current_height):
store_data['exit'] = True
if game_flags['clicked']:
game_flags['quit'] = True
else:
store_data['exit'] = False
#Draw background
background_square = (screen_width_offset, header_padding, self.overlay_width, current_height + self.padding[1] * 2)
pygame.draw.rect(self.screen, WHITE, background_square, 0)
pygame.draw.rect(self.screen, BLACK, background_square, 1)
for i in self.rect_list:
rect_data = [self.screen] + i
pygame.draw.rect(*rect_data)
for i in self.blit_list:
self.screen.blit(*i)
pygame.display.flip()
def _pygame_button(self, message, hover, current_height, width_multipler=0):
multiplier = 3
#Set up text
text_colour = BLACK if hover else GREY
text_object = self.font_lg.render(message, 1, text_colour)
text_size = text_object.get_rect()[2:]
centre_offset = self.width / 10 * width_multipler
text_x = (self.width - text_size[0]) / 2
if width_multipler > 0:
text_x += text_size[0] / 2
if width_multipler < 0:
text_x -= text_size[0] / 2
text_x += centre_offset
text_square = (text_x - self.option_padding * (multiplier + 1),
current_height - self.option_padding * multiplier,
text_size[0] + self.option_padding * (2 * multiplier + 2),
text_size[1] + self.option_padding * (2 * multiplier - 1))
self.blit_list.append((text_object, (text_x, current_height)))
#Detect if mouse is over it
x, y = pygame.mouse.get_pos()
in_x = text_square[0] < x < text_square[0] + text_square[2]
in_y = text_square[1] < y < text_square[1] + text_square[3]
if in_x and in_y:
return True
return False
def _draw_options(self, message, options, params, screen_width_offset, current_height):
"""Draw a list of options and check for inputs.
Parameters:
message (str): Text to display next to the options.
options (list): Names of the options.
params (list): Contains information on the options.
It needs to have the same amount of records as
options, with each of these being a list of 3 items.
These are used to colour the text in the correct
way.
param[option][0] = new selection
param[option][1] = currently active
param[option][2] = mouse hoving over
screen_width_offset (int): The X position to draw the
text.
current_height (int/float): The Y position to draw the
text.
"""
message_text = self.font_md.render(message, 1, BLACK)
message_size = message_text.get_rect()[2:]
self.blit_list.append((message_text, (self.padding[0] + screen_width_offset, current_height)))
option_text = [self.font_md.render(i, 1, BLACK) for i in options]
option_size = [i.get_rect()[2:] for i in option_text]
option_square_list = []
for i in range(len(options)):
width_offset = (sum(j[0] + 2 for j in option_size[:i])
+ self.padding[0] * (i + 1) #gap between the start
+ message_size[0] + screen_width_offset)
option_square = (width_offset - self.option_padding,
current_height - self.option_padding,
option_size[i][0] + self.option_padding * 2,
option_size[i][1] + self.option_padding)
option_square_list.append(option_square)
#Set colours
option_colours = list(SELECTION['Default'])
param_order = ('Waiting', 'Selected', 'Hover')
for j in range(len(params[i])):
if params[i][j]:
rect_colour, text_colour = list(SELECTION[param_order[j]])
if rect_colour is not None:
option_colours[0] = rect_colour
if text_colour is not None:
option_colours[1] = text_colour
rect_colour, text_colour = option_colours
self.rect_list.append([rect_colour, option_square])
self.blit_list.append((self.font_md.render(options[i], 1, text_colour), (width_offset, current_height)))
x, y = pygame.mouse.get_pos()
selected_square = None
for square in range(len(option_square_list)):
option_square = option_square_list[square]
in_x = option_square[0] < x < option_square[0] + option_square[2]
in_y = option_square[1] < y < option_square[1] + option_square[3]
if in_x and in_y:
selected_square = square
return (selected_square, message_size[1])
def _format_output(self, text):
"""Format text to remove invalid characters."""
left_bracket = ('[', '{')
right_bracket = (']', '}')
for i in left_bracket:
text = text.replace(i, '(')
for i in right_bracket:
text = text.replace(i, ')')
return text
def _draw_score(self, winner):
"""Draw the title."""
#Format scores
point_marker = '/'
p0_points = self.C3DObject.current_points[0]
p1_points = self.C3DObject.current_points[1]
p0_font_top = self.font_md.render('Player 0', 1, BLACK, self.player_colours[0])
p1_font_top = self.font_md.render('Player 1', 1, BLACK, self.player_colours[1])
p0_font_bottom = self.font_lg.render(point_marker * p0_points, 1, BLACK)
p1_font_bottom = self.font_lg.render(point_marker * p1_points, 1, BLACK)
p_size_top = p1_font_top.get_rect()[2:]
p_size_bottom = p1_font_bottom.get_rect()[2:]
if winner is None:
go_message = "Player {}'s turn!".format(self.player)
else:
if len(winner) != 1:
go_message = 'The game was a draw!'
else:
go_message = 'Player {} won!'.format(winner[0])
go_font = self.font_lg.render(go_message, 1, BLACK)
go_size = go_font.get_rect()[2:]
self.screen.blit(go_font, ((self.width - go_size[0]) / 2, self.padding[1] * 3))
self.screen.blit(p0_font_top, (self.padding[0], self.padding[1]))
self.screen.blit(p1_font_top, (self.width - p_size_top[0] - self.padding[0], self.padding[1]))
self.screen.blit(p0_font_bottom, (self.padding[0], self.padding[1] + p_size_top[1]))
self.screen.blit(p1_font_bottom, (self.width - p_size_bottom[0] - self.padding[0], self.padding[1] + p_size_top[1]))
def _draw_debug(self, block_data):
"""Show the debug information."""
mouse_data = pygame.mouse.get_pos()
x, y = self.to_pygame(*mouse_data)
debug_coordinates = block_data['object'].calculate(debug=1)
if debug_coordinates is not None:
if all(i is not None for i in debug_coordinates):
pygame.draw.aaline(self.screen,
RED,
pygame.mouse.get_pos(),
self.to_canvas(*debug_coordinates[1]),
1)
pygame.draw.line(self.screen,
RED,
self.to_canvas(*debug_coordinates[0]),
self.to_canvas(*debug_coordinates[1]),
2)
possible_blocks = block_data['object'].find_overlap()
y_mult = str(block_data['object'].y_coordinate * self.C3DObject.segments_squared)
if y_mult[0] != '-':
y_mult = '+{}'.format(y_mult)
info = ['DEBUG INFO',
'FPS: {}'.format(int(round(self.clock.get_fps(), 0))),
'Segments: {}'.format(self.C3DObject.segments),
'Angle: {}'.format(self.draw_data.angle),
'Side length: {}'.format(self.draw_data.length),
'Coordinates: {}'.format(mouse_data),
'Chunk: {}'.format((block_data['object'].width,
block_data['object'].height,
block_data['object'].y_coordinate)),
'X Slice: {}'.format(block_data['object'].find_x_slice()),
'Y Slice: {}'.format(block_data['object'].find_y_slice()),
'Possible blocks: {} {}'.format(possible_blocks, y_mult),
'Block weight: {}'.format(block_data['object'].calculate(debug=2)),
'Block ID: {}'.format(block_data['object'].calculate())]
font_render = [self.font_sm.render(self._format_output(i), 1, BLACK) for i in info]
font_size = [i.get_rect()[2:] for i in font_render]
for i in range(len(info)):
message_height = self.height - sum(j[1] for j in font_size[i:])
self.screen.blit(font_render[i], (0, message_height))
#Format the AI text output
ai_message = []
for i in self.C3DObject.ai_message:
#Split into chunks of 50 if longer
message_len = len(i)
message = [self._format_output(i[n * 50:(n + 1) * 50]) for n in range(round_up(message_len / 50.0))]
ai_message += message
font_render = [self.font_sm.render(i, 1, BLACK) for i in ai_message]
font_size = [i.get_rect()[2:] for i in font_render]
for i in range(len(ai_message)):
message_height = self.height - sum(j[1] for j in font_size[i:])
self.screen.blit(font_render[i], (self.width - font_size[i][0], message_height))
Background
The game is designed to be a 4x4x4 grid, but I didn't want to limit it, so everything I did had to be coded to work with any value (I'll refer to these as segments). When playing the game, use [ and ] to change the amount of segments, though be warned the AI will take exponentially longer, as each new segment creates 9x more processing. Also, you can change the side length and angle with the arrow keys.
Mouse coordinate to block ID
I initially drew the game with turtle, which was quite slow, but didn't require coordinates so was easy. However, converting the mouse coordinates into the block they were over wasn't easy, since the grid is isometric rather than made of normal squares.
Turtle coordinates have (0, 0) in the middle, whereas pygame coordinates have (0, 0) in the top left, so as I wrote this function for turtle, there's an extra layer in place to convert the absolute coordinates from the mouse input into relative coordinates for this.
I got which level the mouse was on, and then converted it to the top level, so that I didn't have to worry about getting the code working on all levels.
I split the top level into 2D 'chunks' that were half the size of the blocks, so that there was one chunk for each connection between a block. I converted the mouse coordinates into which chunks they were in.
With a lot of effort, I figured out 3 formulas (1 for X, 2 for Y) which would get all block IDs on those rows, for any amount of segments.
I'd compare the lists to find matches between the two, which, in the middle of the grid, would result in 2 blocks. At the edge, it'd result in 1, so to get the next part correctly working, I had to make it come up with a fake block, so that it'd be able to compare the two.
Using some formula I found for 'detecting if a point is over or under a line' (no idea what that is called), I find if the value is positive or negative, which depending on if the slope of the line is going up or down, can result in the correct block ID. I noticed that if both X and Y chunks are positive, or both are negative, the line between the two blocks slopes one way, and if one is positive and one is negative, the line slopes the other way (this is then reversed for an odd number of segments), so with that final tweak I got it working correctly.
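The "point over or under a line" test mentioned above is usually done with the sign of a 2D cross product; here is a minimal sketch of that idea (the function name and argument order are mine, not from the game code):

```python
def side_of_line(px, py, ax, ay, bx, by):
    """Return the sign of the 2D cross product of AB x AP.

    +1 means P lies on one side of the line through A and B,
    -1 means the other side, and 0 means P is exactly on the line.
    """
    cross = (bx - ax) * (py - ay) - (by - ay) * (px - ax)
    return (cross > 0) - (cross < 0)
```

For a horizontal line from (0, 0) to (1, 0), a point above it gives +1, below it gives -1, and on it gives 0, which is exactly the positive/negative distinction the paragraph describes.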
Grid Draw Data
Since the only thing provided is number of segments, angle, line length and padding, each time any of this changes I need to recalculate virtually everything to keep the game working. This class takes care of that, and stores all the required information.
AI
The AI difficulty just determines how likely it is to not notice something, and how likely it is to change its priorities, but it still does all the calculations beforehand. When the game is in the early stages, the chance of not noticing an n-1 row is greatly reduced, since it's obvious to the human eye as well, and otherwise the AI just looks stupid.
It will look ahead 1 move to see if there's any rows of n-1, and if not, for every space in the grid, it'll look ahead 1 move to see if there's any rows of n-2.
If it is n-1, the first priority is to block the enemy, then gain points. This reverses for n-2, otherwise the AI only blocks and never does anything itself. If there is nothing found from this way, it'll determine the maximum number of points that can be gained from each block, and pick the best option (e.g. if you switch to an odd number of segments, the AI will always pick the middle block first).
Something I added yesterday was a bit of prediction, which works alongside the first method I mentioned, as I noticed the extreme AI was easy to trick. If you try to trick the AI (as in you line up 2 points in 1 move, so if one is blocked you still get another point), it'll now notice that and block it. Likewise it can do the same to you. You can see this happening if you watch 2 extreme AIs battle it out with each other.
Pygame
Since most of the time not much is going on, I made the frame rate variable, so it goes at 15 FPS if there is no movement (any less than that and you notice a delay when you try to move), 30 FPS if you are moving the mouse or have a menu open, and 120 FPS if you are resizing the grid.
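The variable frame rate described above amounts to picking a tick value each frame based on the current activity; a small sketch of that decision (the function and flag names here are illustrative, not the actual ones in the code):

```python
# Illustrative mapping of game state to target frame rate.
FPS_IDLE, FPS_ACTIVE, FPS_RESIZE = 15, 30, 120

def target_fps(mouse_moved, overlay_open, resizing):
    """Pick the frame rate for this tick, mirroring the scheme above."""
    if resizing:
        return FPS_RESIZE   # grid is being resized: render as fast as possible
    if mouse_moved or overlay_open:
        return FPS_ACTIVE   # user interaction: stay responsive
    return FPS_IDLE         # nothing happening: save CPU
```

In a pygame main loop this would feed straight into `clock.tick(target_fps(...))`.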
The options overlay is drawn as part of the loop after everything else. I slightly modify the layout if the first move has been taken, and disable the instant updating of options (otherwise if you are losing you can temporarily activate the AI and win).
With the way I did the buttons, they look bad if two are next to each other and are a different size, so I tried my best to keep names a similar length (hence why 'help' changes to 'instructions').
Answer: 1. MouseToBlockID
Normally an instance of a class represents some thing, that is, a persistent object or data structure. But an instance of MouseToBlockID does not seem to represent any kind of thing. What you need here is a function that takes game coordinates and returns a block index.
See Jack Diederich's talk "Stop Writing Classes".
Since this function makes use of the attributes of the GridDrawData class, this would best be written as a method on that class:
def game_to_block_index(self, gx, gy):
"""Return index of block at the game coordinates gx, gy, or None if
there is no block at those coordinates."""
The naming of variables needs work. When you have coordinates in three dimensions, it's conventional to call them "x", "y" and "z". But here you use the name y_coordinate for "z". That's bound to lead to confusion.
The code is extraordinarily long and complex for what should be a simple operation. There are more than 200 lines in this class, but converting game coordinates to a block index should be a simple operation that proceeds as follows:
Adjust gy so that it is relative to the origin of the bottom plane (the z=0 plane) rather than relative to the centre of the window:
gy += self.centre
Find z:
z = int(gy // self.chunk_height)
Adjust gy so that it is relative to the origin of its z-plane:
gy -= z * self.chunk_height
Reverse the isometric grid transform:
dx = gx / self.size_x_sm
dy = gy / self.size_y_sm
x = int((dy - dx) // 2)
y = int((dy + dx) // 2)
Check that the result is in bounds, and encode position as block index:
n = self.segments
if 0 <= x < n and 0 <= y < n and 0 <= z < n:
return n ** 3 - 1 - (x + n * (y + n * z))
else:
return None
And that's it. Just twelve lines.
It will be handy to encapsulate the transformation from block coordinates to block index in its own method:
def block_index(self, x, y, z):
"""Return the block index corresponding to the block at x, y, z, or
None if there is no block at those coordinates.
"""
n = self.segments
if 0 <= x < n and 0 <= y < n and 0 <= z < n:
return n ** 3 - 1 - (x + n * (y + n * z))
else:
return None
See below for how this can be used to simplify the drawing code.
The encoding of block indexes is backwards, with (0, 0, 0) corresponding to block index 63 and (3, 3, 3) to block index 0. You'll see that I had to write n ** 3 - 1 - (x + n * (y + n * z)) whereas x + n * (y + n * z) would be the more natural encoding.
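To make the difference concrete, here is a quick comparison of the two encodings for a 4-segment grid (this is my own illustration, not code from the game):

```python
def natural_index(x, y, z, n=4):
    """Conventional row-major encoding: (0,0,0) -> 0, (n-1,n-1,n-1) -> n**3 - 1."""
    return x + n * (y + n * z)

def backwards_index(x, y, z, n=4):
    """The game's reversed encoding: (0,0,0) -> n**3 - 1, (n-1,n-1,n-1) -> 0."""
    return n ** 3 - 1 - natural_index(x, y, z, n)
```

With n = 4, `natural_index(0, 0, 0)` is 0 and `backwards_index(0, 0, 0)` is 63, exactly the reversal described above.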
2. GridDrawData
The computation of game coordinates for the endpoints of the lines is verbose, hard to read, and hard to check:
self.line_coordinates = [((self.size_x, self.centre - self.size_y),
(self.size_x, self.size_y - self.centre)),
((-self.size_x, self.centre - self.size_y),
(-self.size_x, self.size_y - self.centre)),
((0, self.centre - self.size_y * 2),
(0, -self.centre))]
What you need is a method that transforms block coordinates into game coordinates:
def block_to_game(self, x, y, z):
"""Return the game coordinates corresponding to block x, y, z."""
gx = (x - y) * self.size_x_sm
gy = (x + y) * self.size_y_sm + z * self.chunk_height - self.centre
return gx, gy
Then you can compute all the lines using block coordinates, which is much easier to read and check:
n = self.segments
g = self.block_to_game
self.lines = [(g(n, 0, n - 1), g(n, 0, 0)),
(g(0, n, n - 1), g(0, n, 0)),
(g(0, 0, n - 1), g(0, 0, 0))]
for i, j, k in itertools.product(range(n+1), range(n+1), range(n)):
self.lines.extend([(g(i, 0, k), g(i, n, k)),
(g(0, j, k), g(n, j, k))])
Using block_to_game you can avoid the need for relative_coordinates. Instead of:
for i in self.C3DObject.range_data:
if self.C3DObject.grid_data[i] != '':
chunk = i / self.C3DObject.segments_squared
coordinate = list(self.draw_data.relative_coordinates[i % self.C3DObject.segments_squared])
coordinate[1] -= chunk * self.draw_data.chunk_height
square = [coordinate,
(coordinate[0] + self.draw_data.size_x_sm,
coordinate[1] - self.draw_data.size_y_sm),
(coordinate[0],
coordinate[1] - self.draw_data.size_y_sm * 2),
(coordinate[0] - self.draw_data.size_x_sm,
coordinate[1] - self.draw_data.size_y_sm),
coordinate]
write:
n = self.draw_data.segments
g = self.draw_data.block_to_game
for x, y, z in itertools.product(range(n), repeat=3):
i = self.draw_data.block_index(x, y, z)
if self.C3DObject.grid_data[i] != '':
square = [g(x, y, z),
g(x + 1, y, z),
g(x + 1, y + 1, z),
g(x, y + 1, z)]
3. RunPygame
game_flags['recalculate'] gets set whenever game_flags['mouse_used'] is set. This means that the grid gets unnecessarily recalculated every time the mouse moves.
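A minimal way to decouple the two (a sketch of mine, reusing the flag names from the code) is to let mouse movement refresh only the hover lookup while parameter changes alone trigger the full recalculation:

```python
def update_flags(flags, mouse_moved, params_changed):
    """Set flags so only real parameter changes trigger a grid rebuild.

    Mouse movement should only refresh the block-under-cursor lookup,
    not recalculate every grid coordinate.
    """
    if mouse_moved:
        flags['mouse_used'] = True
    if params_changed:
        flags['recalculate'] = True
    return flags
```

With this split, moving the mouse sets `mouse_used` without also forcing `recalculate`.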
4. Isometric coordinates
Here's an explanation of how you can derive the isometric coordinate transformation and its inverse. Let's take the forward transformation first. You start with Cartesian coordinates \$x, y\$ and you want isometric coordinates \$ix, iy\$.
It's easiest to work this out if you introduce an intermediate set of coordinates: uniform isometric coordinates \$ux, uy\$ where the scale is the same in both dimensions (the diamonds are squares) and the height and width of each diamond is 1.
Now, the transformations are easy: to go from Cartesian coordinates to uniform isometric coordinates we use: $$ \eqalign{ ux &= {y + x \over 2} \cr uy &= {y - x \over 2} } $$ and then from uniform to plain isometric coordinates we use scale factors \$sx, sy\$: $$ \eqalign{ ix &= ux·sx \cr iy &= uy·sy } $$ Putting these together: $$ \eqalign{ ix &= (y + x){sx\over2} \cr iy &= (y - x){sy\over2} } $$ To reverse the transformation, treat these as simultaneous equations and solve for \$x\$ and \$y\$: $$ \eqalign{ x &= {ix\over sx} - {iy\over sy} \cr y &= {ix\over sx} + {iy\over sy}} $$
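These forward and inverse transformations can be checked numerically; a small sketch of mine, with made-up scale factors, showing that they round-trip:

```python
def to_isometric(x, y, sx=32.0, sy=16.0):
    """Cartesian grid coordinates -> isometric screen coordinates."""
    ix = (y + x) * sx / 2
    iy = (y - x) * sy / 2
    return ix, iy

def from_isometric(ix, iy, sx=32.0, sy=16.0):
    """Isometric screen coordinates -> Cartesian grid coordinates."""
    x = ix / sx - iy / sy
    y = ix / sx + iy / sy
    return x, y
```

For example, (x, y) = (2, 3) maps to (80.0, 8.0) and back again exactly.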
(These formulae aren't quite the same as the ones I used in the code above, but that's because your backwards block numbering scheme required me to swap \$x\$ and \$y\$, and because your size_x_sm is half of the scale factor \$sx\$.) | {
"domain": "codereview.stackexchange",
"id": 16551,
"tags": "python, game, tic-tac-toe, pygame, connect-four"
} |
Why do we need to connect the computers to a LAN cable when we can use the internet | Question: Why don't we put a network card or USB adapter in every computer to connect to WiFi, and connect all the computers to the internet without LAN and WAN cables?
I may sound stupid, so please explain it to me, my friend.
Answer: Using a cable, you can get up to ten gigabit per second transfer rates. WiFi is much slower. In addition, in one area you can only use at most three WiFi channels at the same time, so in an office with 100 computers you have a problem.
You can also run Ethernet over 100 meters with good cables, try that with WiFi. | {
"domain": "cs.stackexchange",
"id": 15888,
"tags": "computer-networks"
} |
Can preprocessing the whole population cause data leakage? | Question: Introduction
I understand the problem of data leakage that could be caused by the preprocessing step when our training and test sets are just samples of an unknown population. The preprocessing parameters should be calculated from the training set only, then we just apply the same procedure to validation/test set, since this would be the way to proceed with any other sample from the unknown population (in production stage, for example).
Question
What about the situation where we have the whole population at hand? Could we calculate the preprocessing parameters (scaling factors, encoding, etc.) from the entire population?
Extra Context
We have the whole population, and the modeling process would depend on user input. The training set is defined by the user input, and the trained model is used to classify the population.
Answer: If you have the entire population, there is no need for inference. Thus data leakage is not an issue. You can fit any transformation on the data without a concern for its effect on prediction because there is no prediction step. | {
"domain": "datascience.stackexchange",
"id": 4231,
"tags": "supervised-learning, preprocessing, data-leakage"
} |
Center of mass in hydrogen atom | Question: I have a few questions regarding the quantum treatment of the hydrogen atom problem.
Why does one change coordinates from the position vectors of the electron and the nucleus to the COM coordinate and the relative position between the electron and the nucleus? I know there are 6 position coordinates and it will be difficult to solve, but is there any other reason for choosing them? We say that the Schrödinger equation for the centre of mass describes free motion and doesn't provide that much information.
I saw a video where the professor says that during the relative motion of two bodies, the COM doesn't change. I don't know why that is so.
If we assume the nucleus to be stationary while the electron moves, shouldn't the COM change its location as the electron moves? What is the actual scenario? Doesn't that mean that as the electron moves, the nucleus should also move a bit?
I also can't get clear in my head why, after the change of coordinates, we place the COM at the origin, and why we then study the motion of the COM of the system and of the relative coordinate separately.
Answer: The time-independent Schrödinger equation for the hydrogen atom
(i.e. nucleus and electron) is
$$\left(
-\frac{\hbar^2}{2m_n}\nabla_n^2
-\frac{\hbar^2}{2m_e}\nabla_e^2
-\frac{e^2}{4\pi\epsilon_0|\mathbf{r}_e-\mathbf{r}_n|}
\right)\Psi(\mathbf{r}_n,\mathbf{r}_e)
= E \Psi(\mathbf{r}_n,\mathbf{r}_e) \tag{1}$$
where $\Psi(\mathbf{r}_n,\mathbf{r}_e)$ is the wavefunction of nucleus
and electron.
Why does one change coordinates from the position vectors of the
electron and the nucleus to the COM coordinate and the relative
position between the electron and the nucleus?
I know there are 6 position coordinates and it will be difficult
to solve, but is there any other reason for choosing them?
After some maths and physics we say that the Schrödinger equation
for the centre of mass describes free motion and doesn't give that much information.
Equation (1) is hard to solve because of the potential energy term depending
on both positions $\mathbf{r}_e$ and $\mathbf{r}_n$.
But at least, the potential energy depends only on the position difference
$\mathbf{r}_e-\mathbf{r}_n$.
That is why we decide to not attack this problem in the original coordinates
$\mathbf{r}_n$ and $\mathbf{r}_e$.
Instead we define the new coordinates:
$$\begin{align}
\mathbf{r} &= \mathbf{r}_e - \mathbf{r}_n
\quad &\text{position difference between electron and nucleus}\\
\mathbf{R} &= \frac{m_n\mathbf{r}_n + m_e\mathbf{r}_e}{m_n+m_e}
\quad &\text{center-of-mass position}
\end{align} \tag{2}$$
That means we don't look for a wavefunction $\Psi(\mathbf{r}_n,\mathbf{r}_e)$,
but for a wavefunction $\psi(\mathbf{R},\mathbf{r})$.
And keep in mind, these positions $\mathbf{R}$ and $\mathbf{r}$ are just
abstract definitions. There is no particle at either of these two positions.
With these new coordinates we can transform (skipping the mathematical details)
the Schrödinger equation (1) to
$$\left(
-\frac{\hbar^2}{2(m_n+m_e)}\nabla_R^2
-\frac{\hbar^2(m_n+m_e)}{2m_nm_e}\nabla_r^2
-\frac{e^2}{4\pi\epsilon_0|\mathbf{r}|}
\right)\psi(\mathbf{R},\mathbf{r})
= E \psi(\mathbf{R},\mathbf{r}) \tag{3}$$
This equation can be written a little bit simpler by defining
$M=m_n+m_e$ (the total mass) and $\mu=\frac{m_nm_e}{m_n+m_e}$
(the so-called reduced mass):
$$\left(
-\frac{\hbar^2}{2M}\nabla_R^2
-\frac{\hbar^2}{2\mu}\nabla_r^2
-\frac{e^2}{4\pi\epsilon_0|\mathbf{r}|}
\right)\psi(\mathbf{R},\mathbf{r})
= E \psi(\mathbf{R},\mathbf{r}) \tag{4}$$
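As a numerical aside (my own check, not part of the original answer, assuming the approximate mass ratio $m_p/m_e \approx 1836.15$): the reduced mass is less than 0.06% smaller than the electron mass, which is why treating the nucleus as fixed is such a good approximation:

```python
# Reduced mass of the electron-proton system, in units of the electron mass.
# mu = m_n * m_e / (m_n + m_e)  =>  mu / m_e = (m_n/m_e) / (m_n/m_e + 1)
mass_ratio = 1836.15            # m_p / m_e, approximate value
mu_over_me = mass_ratio / (mass_ratio + 1)
print(round(mu_over_me, 6))     # ~0.999456
```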
Now to solve this equation (4) we can use separation of variables,
i.e. we make the ansatz
$\psi(\mathbf{R},\mathbf{r}) = \psi_R(\mathbf{R}) \psi_r(\mathbf{r})$.
We get two separate equations:
$$-\frac{\hbar^2}{2M}\nabla_R^2 \psi_R(\mathbf{R})
= E_R \psi_R(\mathbf{R}) \tag{5a}$$
and
$$\left(
-\frac{\hbar^2}{2\mu}\nabla_r^2
-\frac{e^2}{4\pi\epsilon_0|\mathbf{r}|}
\right)\psi_r(\mathbf{r})
= E_r \psi_r(\mathbf{r}) \tag{5b}$$
Unlike equation (1), these separate equations (5a, 5b) are easily solvable.
And that is the whole point why we used the coordinates defined in (2)
in the first place.
The solutions of (5a) are simple plane waves for the COM motion
(with any wave-vector $\mathbf{k}$, i.e. with constant total momentum
$\mathbf{P}=\hbar\mathbf{k}$):
$$\psi_R(\mathbf{R})=e^{i\mathbf{k}\cdot\mathbf{R}} \tag{6a}$$
And the solutions of (5b) are the hydrogen orbitals
(centered at $\mathbf{r}=\mathbf{0}$) for the relative motion:
$$\psi_r(\mathbf{r})=\psi_{nlm_l}(r,\theta,\phi) \tag{6b}$$
Also keep in mind, because $\mathbf{r}$ and $\mathbf{R}$ were only abstract
"positions", the wavefunctions $\psi_r(\mathbf{r})$ and $\psi_R(\mathbf{R})$
are abstract as well. Especially, $\psi_r(\mathbf{r})$ describes the movement
of the position difference between electron and nucleus, but not directly
the position of the electron.
I saw one of the videos where the professor says that during
the relative motion of the two bodies, the COM doesn't change.
I don't know why that is so.
That is not quite correct. Actually the COM does move,
but it moves only with constant speed in a straight line,
because there is no potential energy in equation (5a),
and hence no force acting on the COM.
If we assume the nucleus to be stationary while the electron moves,
wouldn't the COM change its location as the electron moves?
So what is the actual scenario?
Doesn't that mean that as the electron moves, the nucleus should also move a bit?
You are right, the nucleus moves a little bit relative to the COM.
You can see this when you solve the transformation (2)
for $\mathbf{r}_e$ and $\mathbf{r}_n$:
$$\begin{align}
\mathbf{r}_e = \mathbf{R} + \frac{m_n}{m_n+m_e}\mathbf{r} \\
\mathbf{r}_n = \mathbf{R} - \frac{m_e}{m_n+m_e}\mathbf{r}
\end{align} \tag{7}$$
From the second equation of (7) you see, the nucleus is moving a little bit
relative to the COM position $\mathbf{R}$, because $\frac{m_e}{m_n+m_e}\ll 1$.
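As a numerical sanity check (my own sketch, not part of the answer), the prefactors in equation (7) and the reduced mass from equation (4) can be evaluated with the standard proton-to-electron mass ratio $m_p/m_e \approx 1836.15$:

```python
# Numerical check: how far do electron and nucleus actually sit from the COM?
# Equation (7) gives the prefactors
#   electron:  m_n / (m_n + m_e)   (close to 1)
#   nucleus:   m_e / (m_n + m_e)   (close to 0)
# Masses in units of the electron mass; m_p/m_e ~ 1836.15 is a standard value.
m_e = 1.0
m_n = 1836.15  # proton mass in electron masses

M = m_n + m_e                 # total mass, eq. (4)
mu = m_n * m_e / (m_n + m_e)  # reduced mass, eq. (4)

electron_fraction = m_n / (m_n + m_e)  # eq. (7), first line
nucleus_fraction = m_e / (m_n + m_e)   # eq. (7), second line

print(f"reduced mass mu  = {mu:.6f} m_e")                # ~0.999456 m_e
print(f"electron offset  = {electron_fraction:.6f} of r")
print(f"nucleus  offset  = {nucleus_fraction:.6f} of r")  # ~1/1837
```

So the nucleus sits only about $r/1837$ from the COM, and the reduced mass differs from $m_e$ by roughly 0.05%, which is why treating the nucleus as fixed is such a good approximation.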
I don't have a clear picture of why, after the change of coordinates,
we keep the COM location at the origin. Why? And why do we study the motion
of the COM of the system and of the electron separately, since the electron
already contributes to the COM?
You are confusing COM position and position difference.
The COM position is at $\mathbf{R}$.
The hydrogen orbital $\psi_r(\mathbf{r})$ is in terms of the position difference $\mathbf{r}$.
From equations (2) and (7) you can visualize it like this. | {
"domain": "physics.stackexchange",
"id": 93466,
"tags": "quantum-mechanics, classical-mechanics, reference-frames, hydrogen"
} |
Draw a DFA that accepts ((aa*)*b)* | Question: A homework question asks me to draw a DFA for the regular expression
$((aa^*)^*b)^*$
I'm having trouble with this because I'm not sure how to express the idea of $a$ followed by $0$ or more $a$'s, repeated many times, in terms of a DFA.
I would guess that $(aa^*)^*$ should be the same thing as $\lambda + a^*$, but I'm not sure if I can formally say that. If I could, it would make my DFA simply
Answer: Edited:
(aa*)* can be changed to a*.
Now we have (a*b)*,
i.e.:
If the input is empty, we must end in a final state.
If a b comes, we must end in a final state.
But if an a comes, then only a sequence of a's followed by a single b is accepted.
DFA for this will be:
M=({q0,q1}, {a,b}, ∆, q0, {q0})
Where,
q0= initial as well as final state
And ∆:
∆(q0, b)=q0
∆(q0, a)=q1
∆(q1, b)=q0
∆(q1, a)=q1
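The transition function ∆ above can be checked with a short simulation (my own sketch in Python, not part of the original answer); it agrees with the simplified regular expression (a*b)*:

```python
# Simulation of the DFA M = ({q0, q1}, {a, b}, delta, q0, {q0}) defined above.
DELTA = {
    ("q0", "a"): "q1",
    ("q0", "b"): "q0",
    ("q1", "a"): "q1",
    ("q1", "b"): "q0",
}

def accepts(word: str) -> bool:
    """Run the DFA on `word`; accept iff we end in the final state q0."""
    state = "q0"
    for symbol in word:
        state = DELTA[(state, symbol)]
    return state == "q0"

# Strings matching ((aa*)*b)* = (a*b)* are accepted:
print(accepts(""))      # True  (empty string)
print(accepts("b"))     # True
print(accepts("aaab"))  # True
print(accepts("ba"))    # False (trailing a with no b)
```

For short strings this can be checked exhaustively against the regular expression (a*b)*: the DFA accepts exactly the strings that match.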
Explanation:
Since q0 is both the initial and a final state, the empty string and b are accepted.
If an a comes, it must be followed by a b before the string can be accepted. | {
"domain": "cs.stackexchange",
"id": 4731,
"tags": "regular-languages, automata, finite-automata, regular-expressions"
} |
How do I determine which variables contribute to the 1st PC in PCA? | Question: Given the coefficients of PC1 as follows for each variable (0.30, 0.31, 0.42, 0.37, 0.13, -0.43, 0.29, -0.42, -0.11), which variable contributes most to this PC? Does the sign (+/−) matter, or is considering the absolute value enough?
Answer: Welcome to the site. PCA is an unsupervised dimensionality reduction algorithm. It works by transforming the original feature set into eigenvectors (principal components) that are difficult to map back onto the original features. The first principal component (PC) captures the direction of maximum variance; the subsequent PCs capture successively less variance than the first.
With this background, I invite you to read this Q on SO. It has the solution to programmatically determine the features deemed most important by PCA.
[edited]
Regarding the sign of the components, even if you change them you do not change the variance that is contained in the first component. Moreover, when you change the signs, the weights (prcomp( ... )$rotation) also change sign, so the interpretation stays exactly the same:
set.seed( 2020 )
df <- data.frame(1:10,rnorm(10))
pca1 <- prcomp( df )
pca2 <- princomp( df )
pca1$rotation
gives
PC1 PC2
X1.10 0.9876877 0.1564384
rnorm.10. 0.1564384 -0.9876877
and pca2$loadings gives,
Comp.1 Comp.2
SS loadings 1.0 1.0
Proportion Var 0.5 0.5
Cumulative Var 0.5 1.0
Then the question arises why the interpretation remains the same.
You do the PCA regression of y on component 1. In the first version (prcomp), say the coefficient is positive: the larger the component 1, the larger the y. What does it mean when it comes to the original variables? Since the weight of the variable 1 (1:10 in df) is positive, that shows that the larger the variable 1, the larger the y.
Now use the second version (princomp). Since the component has the sign changed, the larger the y, the smaller the component 1 -- the coefficient of y over PC1 is now negative. But so is the loading of the variable 1; that means, the larger variable 1, the smaller the component 1, the larger y -- the interpretation is the same.
The conclusion is that for each PCA component, the sign of its scores and of its loadings is arbitrary and meaningless. It can be flipped, but only if the sign of both scores and loadings is reversed at the same time.
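This sign-flip invariance can be verified numerically. Below is a small stdlib-only Python sketch (my own illustration; the toy data values are made up): it computes the leading principal component of a 2-D dataset, flips the sign of the loading vector, and checks that the rank-1 reconstruction (scores times loadings) is unchanged.

```python
import math

# Tiny hand-rolled PCA on 2-D data.  The point: flipping the sign of a
# loading vector flips the scores too, and the product score_i * loading
# (the part of the data the component explains) is unchanged.
X = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 8.1)]
n = len(X)
mx = sum(p[0] for p in X) / n
my = sum(p[1] for p in X) / n
Xc = [(x - mx, y - my) for x, y in X]  # centered data

# Sample covariance matrix [[a, b], [b, c]]
a = sum(x * x for x, _ in Xc) / (n - 1)
b = sum(x * y for x, y in Xc) / (n - 1)
c = sum(y * y for _, y in Xc) / (n - 1)

# Leading eigenvector of a symmetric 2x2 matrix, in closed form
lam = (a + c) / 2 + math.sqrt(((a - c) / 2) ** 2 + b ** 2)
v = (b, lam - a)
norm = math.hypot(*v)
v = (v[0] / norm, v[1] / norm)          # loading vector (PC1)

scores = [x * v[0] + y * v[1] for x, y in Xc]
v_flipped = (-v[0], -v[1])              # same axis, opposite sign
scores_flipped = [x * v_flipped[0] + y * v_flipped[1] for x, y in Xc]

# Rank-1 reconstruction is identical either way:
rec1 = [(s * v[0], s * v[1]) for s in scores]
rec2 = [(s * v_flipped[0], s * v_flipped[1]) for s in scores_flipped]
print(all(math.isclose(p[0], q[0]) and math.isclose(p[1], q[1])
          for p, q in zip(rec1, rec2)))  # True
```

The flipped scores are exactly the negatives of the originals, and the two sign changes cancel in the reconstruction, which is the formal content of "the sign is arbitrary but scores and loadings must flip together".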
Furthermore, the directions along which the principal components act correspond to the eigenvectors of the system. If you get a positive or negative PC it just means that you are projecting onto an eigenvector that is pointing in one direction or 180° away in the other direction. Regardless, the interpretation remains the same! It should also be added that the variances along your principal components are simply the eigenvalues. | {
"domain": "datascience.stackexchange",
"id": 8398,
"tags": "pca, dimensionality-reduction"
} |
Why are accelerator-beam neutrino experiments built at an angle off the beam direction? | Question: I was reading some papers and review articles on accelerator-based neutrino experiments and this came up a few times. Most of what I could find mentions "shrinkage in neutrino energy spectra" and reducing the effects of electron-neutrino impurities in a muon-neutrino beam, but remains vague.
For example, NOvA in the US is mentioned to be located 14 milliradians/12 km away from the beam direction.
I was just wondering: what use does this have for these experiments?
Answer: Off-axis beams, depending on the neutrino energy and the production source, can offer fluxes of neutrinos with a much narrower range of energies. This is useful because the incoming neutrino energy is unknown on an event-by-event basis, and the neutrino flux is often a large source of systematic uncertainty in cross-section and oscillation analyses.
The physics behind this is summarized in this excellent paper https://arxiv.org/pdf/hep-ex/0111033.pdf by Kirk McDonald. Basically, for $\theta$ measured as the angle between the daughter neutrino's momentum and the direction of the parent pion's momentum, there exists a maximum possible neutrino energy. This implies that "many different pion energies contribute to this neutrino energy, which enhances the neutrino spectrum at this angle-energy combination, θ ≈ (30-50 MeV)/Eν".
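The kinematics behind this narrow-band behaviour can be checked directly. For two-body pion decay $\pi\to\mu\nu$, the lab-frame neutrino energy at angle $\theta$ is $E_\nu = (m_\pi^2-m_\mu^2)/(2(E_\pi - p_\pi\cos\theta))$, a standard formula (my own sketch below, not code from the paper; the 14 mrad angle matches the NOvA figure mentioned in the question):

```python
import math

# Two-body pion decay kinematics: for pi -> mu + nu, the lab-frame neutrino
# energy at angle theta from the pion direction is
#     E_nu = (m_pi^2 - m_mu^2) / (2 * (E_pi - p_pi * cos(theta)))
M_PI, M_MU = 139.57, 105.66  # pion and muon masses in MeV

def e_nu(e_pi_mev: float, theta_rad: float) -> float:
    p_pi = math.sqrt(e_pi_mev ** 2 - M_PI ** 2)
    return (M_PI ** 2 - M_MU ** 2) / (2.0 * (e_pi_mev - p_pi * math.cos(theta_rad)))

pion_energies = [5_000, 10_000, 20_000]  # 5-20 GeV, in MeV
on_axis  = [e_nu(e, 0.0)   for e in pion_energies]
off_axis = [e_nu(e, 0.014) for e in pion_energies]  # ~14 mrad, as for NOvA

print([round(e) for e in on_axis])   # roughly 0.43 * E_pi: spans a wide range
print([round(e) for e in off_axis])  # all pinned near ~2 GeV
```

Across a factor of four in pion energy, the off-axis neutrino energy barely changes, while on-axis it scales linearly with $E_\pi$; that is exactly the narrow-band effect the answer describes.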
One can see this in the plot below from the paper which shows the relative neutrino flux as a function of neutrino energy and off axis angle. | {
"domain": "physics.stackexchange",
"id": 94312,
"tags": "particle-physics, experimental-physics, neutrinos, accelerator-physics"
} |
Is testing easier/harder than learning? | Question: How is property testing related to the PAC model of learning?
More precisely,
Suppose we are given a property tester, $\mathcal{A}$, for the (concept) class of functions $\mathcal{F_n}$, which receives as input a size parameter $n$ (labeled inputs $(x_1,f(x_1)), (x_2,f(x_2)),...,(x_n,f(x_n))$), a distance parameter $0<\epsilon<1$, and a confidence parameter $0<\delta<1/2$, and does the following:
-if $f\in \mathcal{F_n}$, then with probability $(1-\delta)$ (over the choice of $x_i$'s) $\mathcal{A}$ accepts $f$.
-if $f$ is $\epsilon$-far from $\mathcal{F_n}$, then with probability $(1-\delta)$ (over the choice of $x_i$'s) $\mathcal{A}$ rejects $f$.
Now, I have following two questions:
1) How can this tester $\mathcal{A}$ be used to generate a learning algorithm (under the PAC learning model) for the concept class $\mathcal{F_n}$, and vice versa? And what role does the VC dimension of $\mathcal{F_n}$ play in the reduction?
2) Can we give some sort of characterization (for example, on the basis of the VC dimension) of the concept classes for which testing is easier/harder than learning?
Please let me know if I haven't put the question clearly.
Thanks.
Answer: If the learning algorithm is proper (i.e. it always produces a hypothesis from the class $F_n$), then it also gives a testing algorithm -- simply run the learning algorithm, and see whether the hypothesis it produced has error rate $<\epsilon$, which can be done with only $\approx 1/\epsilon^2$ samples. If it does, since the hypothesis is in $F_n$, this is a constructive proof that the function you are testing has distance at most $\epsilon$ from $F_n$. If the algorithm was a PAC learning algorithm for $F_n$, then when $f \in F_n$, it must generate such a hypothesis. So any proper learning algorithm can be converted to a testing algorithm with only an additional $\approx 1/\epsilon^2$ samples at most.
Moreover, if you are only worried about sample complexity and not computational efficiency, then without loss of generality you can always use a proper PAC learning algorithm. Since the sample complexity of learning is $O(\mathrm{VCDIM}(F_n)/\epsilon^2)$, this means you can always test with at most this many samples.
However, generally testing is easier than learning. For example, linear functions in $d$ dimensions require $d$ samples to learn, but only a constant number of samples to test. | {
"domain": "cstheory.stackexchange",
"id": 2666,
"tags": "machine-learning, property-testing"
} |
Find how much reputation a user had on a given date | Question: I saw this question on MSE and went ahead and wrote a solution to it:
https://meta.stackexchange.com/questions/313561/determining-users-reputation-as-of-particular-date
This calculates the reputation a user should have had on a certain date. I attempt to factor in everything that SEDE allows me to see. So −1 events from downvoting posts, serial-voting reversals, and reputation from Documentation aren't factored in.
I'm curious about what could be improved here
SELECT
-- Total Reputation
(
SUM(CASE WHEN r2d.ReputationFromVotes + r2d.ReputationFromSuggestedEdits > 200 THEN 200 ELSE r2d.ReputationFromVotes + r2d.ReputationFromSuggestedEdits END)
+ SUM(r2d.ReputationFromBounties)
+ COALESCE((SELECT SUM(v4.BountyAmount * -1) FROM Votes AS v4 WHERE v4.VoteTypeId = 8 AND v4.UserId = ##UserId## AND v4.CreationDate < ##UntilDate:string## ),0)
+ COALESCE((SELECT COUNT(*) * 2 FROM Posts AS p3 WHERE p3.OwnerUserId = ##UserId## AND p3.AcceptedAnswerId IS NOT NULL),0)
+ SUM(r2d.ReputationFromAccepts)
) AS TotalReputation,
-- Rep Capped Activities with the Cap Factored in
SUM(
CASE
WHEN r2d.ReputationFromVotes + r2d.ReputationFromSuggestedEdits > 200 THEN 200
ELSE r2d.ReputationFromVotes + r2d.ReputationFromSuggestedEdits
END) AS ReputationFromRepCap,
-- Total Bounties received
SUM(r2d.ReputationFromBounties) AS ReputationFromBounties,
-- Total Bounties given
COALESCE((SELECT SUM(v4.BountyAmount * -1) FROM Votes AS v4 WHERE v4.VoteTypeId = 8 AND v4.UserId = ##UserId## AND v4.CreationDate < ##UntilDate:string## ),0) AS ReputationGivenAsBounties,
-- Total Reputation from Accepting Answers
COALESCE((SELECT COUNT(*) * 2 FROM Posts AS p3 WHERE p3.OwnerUserId = ##UserId## AND p3.AcceptedAnswerId IS NOT NULL),0) AS ReputationFromAcceptingAnswers,
-- Total Reputation from Accepted Answers
SUM(r2d.ReputationFromAccepts) AS ReputationFromAcceptedAnswers
FROM
(
SELECT
v.CreationDate AS VoteDate,
-- Total Reputation from Post Upvotes
-- PostTypeId 1 = Question, 2 = Answer
-- VoteTypeId 2 = Upvote, 3 = Downvote
-- CommunityOwnedDate is when a post was made CW.
-- Votes before that count, after not.
-- Vote Date is truncated to full days only so grouping works
SUM((CASE
WHEN (p.PostTypeId = 1 AND v.VoteTypeId = 2 AND (p.CommunityOwnedDate > v.CreationDate OR p.CommunityOwnedDate IS NULL)) THEN 5
WHEN (p.PostTypeId = 2 AND v.VoteTypeId = 2 AND (p.CommunityOwnedDate > v.CreationDate OR p.CommunityOwnedDate IS NULL)) THEN 10
WHEN (v.VoteTypeId = 3 AND (p.CommunityOwnedDate > v.CreationDate OR p.CommunityOwnedDate IS NULL)) THEN -2
ELSE 0
END)) AS ReputationFromVotes,
-- Total Reputation from Answer Bounties
-- VoteTypeId 9 = Bounty Close (Bounty Awarded)
-- BountyAmount = Amount of Reputation awarded
SUM(CASE
WHEN v.VoteTypeId = 9 THEN v.BountyAmount
ELSE 0
END) AS ReputationFromBounties,
-- Total Reputation from Answer Accepts
-- VoteTypeId 1 = AcceptedByOriginator (Answer Accepted)
SUM(CASE
WHEN (v.VoteTypeId = 1 AND (p.CommunityOwnedDate > v.CreationDate OR p.CommunityOwnedDate IS NULL)) THEN 15
ELSE 0
END) AS ReputationFromAccepts,
-- Total Reputation from Suggested Edits
-- if ApprovalDate isn't NULL and RejectionDate is NULL it's been approved and not overriden
-- Group by the same Date as Votes for Rep-Cap evaluation (They count towards it)
COALESCE((SELECT
SUM(CASE WHEN (se.ApprovalDate IS NOT NULL AND se.RejectionDate IS NULL) THEN 2 ELSE 0 END)
FROM SuggestedEdits AS se
WHERE se.OwnerUserId = ##UserId##
AND YEAR(v.CreationDate) = YEAR(se.ApprovalDate)
AND MONTH(v.CreationDate) = MONTH(se.ApprovalDate)
AND DAY(v.CreationDate) = DAY(se.ApprovalDate) ),0) AS ReputationFromSuggestedEdits
FROM Posts AS p
INNER JOIN Votes AS v ON v.PostId = p.Id
WHERE p.OwnerUserId = ##UserId:int##
AND v.CreationDate <= ##UntilDate:string##
GROUP BY v.CreationDate
) as r2d
Answer: The main improvements I would suggest are:
Minimize the number of accesses made to the large, minimally indexed, SEDE tables.
Break the query into multiple steps.
Reduce or eliminate code repetition.
Temporary tables are allowed on SEDE. Use these to fetch the minimal data needed from the large base tables to perform the calculations. Referring to these smaller data sets will be much more efficient than interrogating the base tables multiple times.
Properly used, temporary tables enable more accurate cardinality estimation, automatic statistics on intermediate results, and provide indexing opportunities - all of which can improve final plan quality and performance.
Breaking the query up also makes the logic easier to comprehend and debug (initially, and when maintaining it in future). Errors and redundancies are much easier to spot with smaller queries.
I have not attempted to improve the reputation calculation logic itself, except in a few minor ways, but the following illustrates a possible re-implementation of the provided code using temporary tables:
Initialization
DECLARE
@UserId integer = 22656, -- Jon Skeet, why not
@UntilDate datetime = CURRENT_TIMESTAMP;
DROP TABLE IF EXISTS
#Posts, #Data, #Edits;
Posts data
Just the columns we are going to need from the Posts table:
CREATE TABLE #Posts
(
Id integer NOT NULL,
AcceptedAnswerId integer NULL,
CommunityOwnedDate datetime NULL,
PostTypeId tinyint NOT NULL
);
Load the minimal number of rows needed:
INSERT #Posts
(
Id,
AcceptedAnswerId,
CommunityOwnedDate,
PostTypeId
)
SELECT
P.Id,
P.AcceptedAnswerId,
P.CommunityOwnedDate,
P.PostTypeId
FROM dbo.Posts AS P
WHERE
P.OwnerUserId = @UserId
AND P.CreationDate <= @UntilDate
AND
(
P.CommunityOwnedDate IS NULL
OR P.CommunityOwnedDate <= @UntilDate
);
This applies the common predicates in the original query as early as possible, including setting a boundary of the values of CommunityOwnedDate that might impact the reputation calculation up to the desired date. Other filtering could be added here, for example to restrict the PostTypeId values to just those of interest.
The execution plan (for Jon Skeet) is:
This plan is relatively efficient, being primarily based on a seek to the OwnerUserId specified. The Merge Interval subtree is concerned with finding the range of CommunityOwnedDate values used as a secondary seek predicate.
The Key Lookup is sadly unavoidable, given the SEDE view dbo.Posts wrapping the underlying table dbo.PostsWithDeleted (filtering on DeletionDate IS NULL without a supporting index). Nevertheless, it does mean we can filter CreationDate in the same lookup without adding significant cost.
Adding votes data
The next step adds the columns we need from the voting data associated with the qualifying posts. This is performed as a second step rather than joining all in one step because the #Posts table provides useful accurate cardinality and statistical information. A join query is quite likely to mis-estimate, resulting in an inappropriate plan selection or hash spill, for example.
CREATE TABLE #Data
(
AcceptedAnswerId integer NULL,
CommunityOwnedDate datetime NULL,
PostTypeId tinyint NOT NULL,
BountyAmount integer NULL,
VoteTypeId tinyint NOT NULL,
CreationDate datetime NOT NULL
)
WITH (DATA_COMPRESSION = ROW);
Row compression is not necessarily super-useful here, but it does illustrate the extra flexibility of using discrete result sets.
INSERT #Data WITH (TABLOCK)
(
AcceptedAnswerId,
CommunityOwnedDate,
PostTypeId,
BountyAmount,
VoteTypeId,
CreationDate
)
SELECT
P.AcceptedAnswerId,
P.CommunityOwnedDate,
P.PostTypeId,
V.BountyAmount,
V.VoteTypeId,
V.CreationDate
FROM #Posts AS P
JOIN dbo.Votes AS V
ON V.PostId = P.Id
WHERE
V.CreationDate <= @UntilDate
AND
(
P.CommunityOwnedDate IS NULL
OR P.CommunityOwnedDate > V.CreationDate
);
The query above again applies predicates as early as possible, and gives us a combined view of the posts and votes that we can interrogate in multiple ways later on at low cost (certainly much cheaper than accessing the base tables again).
The execution plan features accurate cardinality estimates, appropriate use of parallelism, and a hashtable build-side bitmap filter used to perform semi-join reduction on the Votes table scan. This removes rows that cannot possibly join even before they are surfaced from the storage engine to the query processor (as indicated by the INROW attribute).
Suggested Edit Data
We only need the date (no time component) and total reputation earned for suggested edits:
CREATE TABLE #Edits
(
ApprovalDate datetime PRIMARY KEY,
Reputation integer NOT NULL
);
INSERT #Edits
(
ApprovalDate,
Reputation
)
SELECT
G.ApprovalDate,
EditRep = 2 * COUNT(*) -- 2 rep per approved suggested edit
FROM dbo.SuggestedEdits AS SE
CROSS APPLY
(
VALUES (CONVERT(datetime, CONVERT(date, SE.ApprovalDate)))
) AS G (ApprovalDate)
WHERE
SE.OwnerUserId = @UserId
AND SE.ApprovalDate <= @UntilDate
AND SE.RejectionDate IS NULL
GROUP BY
G.ApprovalDate;
Using CROSS APPLY with VALUES allows us to avoid repeating the datetime to date conversion in the SELECT list and GROUP BY clause. The converted value is put back in a datetime type for consistency with the other SEDE-derived tables.
It is generally best practice to pay careful attention to data types, avoiding implicit conversions. This has myriad benefits, not least for cardinality estimation and join performance & optimizations.
A possible improvement to the code here would be to use the date type everywhere only the date portion is needed.
The parallel scan of the SuggestedEdits table is the best we can do here, without better indexing in SEDE.
Reputation Per Day
This table largely replaces the original r2d subquery, computing reputation totals per day:
CREATE TABLE #DayTotals
(
CreationDate datetime PRIMARY KEY,
ReputationFromVotes integer NOT NULL,
ReputationFromBounties integer NOT NULL,
ReputationFromAccepts integer NOT NULL,
ReputationFromSuggestedEdits integer NOT NULL
);
INSERT #DayTotals WITH (TABLOCK)
(
CreationDate,
ReputationFromVotes,
ReputationFromBounties,
ReputationFromAccepts,
ReputationFromSuggestedEdits
)
SELECT
D.CreationDate,
ReputationFromVotes =
ISNULL
(
SUM
(
CASE
WHEN D.PostTypeId = 1 AND D.VoteTypeId = 2 THEN 5
WHEN D.PostTypeId = 2 AND D.VoteTypeId = 2 THEN 10
WHEN D.VoteTypeId = 3 THEN -2
ELSE 0
END
), 0
),
ReputationFromBounties =
ISNULL(SUM(CASE WHEN D.VoteTypeId = 9 THEN D.BountyAmount ELSE 0 END), 0),
ReputationFromAccepts =
ISNULL(SUM(CASE WHEN D.VoteTypeId = 1 THEN 15 ELSE 0 END), 0),
ReputationFromSuggestedEdits = ISNULL(SUM(E.Reputation), 0)
FROM #Data AS D
LEFT JOIN #Edits AS E
ON E.ApprovalDate = D.CreationDate
GROUP BY
D.CreationDate;
The execution plan shows efficient parallel processing using only the relatively small temporary tables:
Other reputation totals
These are single values, stored in variables for convenience:
DECLARE
@RepBountiesGiven integer =
(
SELECT
ISNULL(SUM(-D.BountyAmount), 0)
FROM #Data AS D
WHERE
D.VoteTypeId = 8 -- BountyStart
),
@RepFromAcceptingAnswers integer =
(
SELECT
2 * COUNT(*) -- 2 rep per answer accepted
FROM #Posts AS P
WHERE
P.PostTypeId = 1
AND P.AcceptedAnswerId IS NOT NULL
),
@ReputationFromRepCap integer =
(
SELECT
SUM
(
CASE
WHEN ReputationFromVotes + ReputationFromSuggestedEdits <= 200
THEN ReputationFromVotes + ReputationFromSuggestedEdits
ELSE 200
END
)
FROM #DayTotals
);
Final query
Now we have all the elements we need to produce the final output:
SELECT
TotalReputation =
@ReputationFromRepCap +
SUM(DT.ReputationFromBounties) +
@RepBountiesGiven +
@RepFromAcceptingAnswers +
SUM(DT.ReputationFromAccepts),
ReputationFromRepCap = @ReputationFromRepCap,
ReputationFromBounties = SUM(DT.ReputationFromBounties),
ReputationGivenAsBounties = @RepBountiesGiven,
ReputationFromAcceptingAnswers = @RepFromAcceptingAnswers,
ReputationFromAcceptedAnswers = SUM(DT.ReputationFromAccepts)
FROM #DayTotals AS DT;
Try the complete script at: SEDE Demo | {
"domain": "codereview.stackexchange",
"id": 31550,
"tags": "sql, sql-server, t-sql, stackexchange"
} |
Could quarks be free in higher-dimensional space than 3D? | Question: Reading this answer, I now wonder: if quarks are confined by $r^2$ potential, could their potential allow infinite motion in higher-dimensional space?
To understand why I thought this might be possible, consider the electrostatic potential: in 3D it is proportional to $r^{-1}$. This is just what the Poisson equation tells us for a point charge. If we solve the Poisson equation in 2D space, we'll see the potential is proportional to $\ln\frac r {r_0}$, and in 1D it's proportional to $r$. We can see that infinite motion is only allowed starting from 3D.
Could the same hold for quarks, but with some higher than 3D dimension? Or is their potential of completely different nature with respect to space dimensionality?
Answer: If you take the classical analogy of a charge generating field lines then the force at some point can be taken as the density of field lines at that point. In 3D at some distance $r$ the field lines are spread out over a spherical surface of area proportional to $r^2$ so their density and hence force goes as $r^{-2}$ - so far so good.
The trouble with the strong force is that the interactions between gluons cause the field lines to attract each other, so instead of spreading out they group together to form a flux tube or QCD string. In effect all the field lines are compressed into a cylindrical region between the two particles, so the field line density, and hence the force, is independent of the separation between the quarks.
This means it doesn't matter what the dimensionality of space is, because the field lines will always organise themselves along the 1D line between the quarks. The quarks would be confined in a space of any dimension.
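To make the dimension argument concrete, here is a rough numerical sketch (my own illustration, not part of the answer): Gauss's law in $d$ spatial dimensions gives a Coulomb-like force falling as $r^{-(d-1)}$, so the work needed to separate a pair from $r=1$ to $r=R$ is the integral of that force; for a flux tube the force is constant, so the work grows linearly with $R$ in any dimension.

```python
import math

# Work to pull a pair apart from r = 1 to r = R.
def coulomb_work(d: int, R: float) -> float:
    """Integral of r^-(d-1) dr from 1 to R (Gauss's law in d dimensions)."""
    if d == 2:
        return math.log(R)
    p = 2 - d
    return (R ** p - 1.0) / p

def flux_tube_work(R: float, tension: float = 1.0) -> float:
    """Constant force (string tension), so the work grows linearly with R."""
    return tension * (R - 1.0)

for R in (1e3, 1e6, 1e9):
    print(f"R={R:.0e}  d=2: {coulomb_work(2, R):8.2f}  "
          f"d=3: {coulomb_work(3, R):.6f}  flux tube: {flux_tube_work(R):.1e}")
# The d=3 Coulomb work converges (escape costs finite energy); in d=1 and d=2,
# and for the flux tube in ANY d, the work keeps growing: confinement.
```

This reproduces the question's observation (unbound motion only from 3D upward for Coulomb-like fields) and the answer's point that a flux tube confines regardless of the dimensionality of space.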
Annoyingly I can't find an authoritative but popular level article on QCD flux tubes, but a Google will find you lots of articles to look through. | {
"domain": "physics.stackexchange",
"id": 11369,
"tags": "renormalization, spacetime-dimensions, quantum-chromodynamics, quarks, confinement"
} |
The equivalence relations cover problem (in graph theory) | Question: An equivalence relation on a finite vertex set can be represented by an undirected graph that is a disjoint union of cliques. The vertex set represents the elements and an edge represents that two elements are equivalent.
If I have a graph $G$ and graphs $G_1,\dots,G_k$, we say that $G$ is covered by $G_1,\dots,G_k$ if the set of edges of $G$ is equal to the union of the sets of edges of $G_1,\dots,G_k$. The edge sets of $G_1,\dots,G_k$ do not need to be disjoint. Note that any undirected graph $G$ can be covered by a finite number of equivalence relations (i.e., disjoint union of cliques graphs).
I have several questions:
What can be said about the minimal number of equivalence relations required to cover a graph $G$?
How can we compute this minimal number?
How can we compute an explicit minimum cover of $G$, i.e., a set of equivalence relations whose size is minimal and which cover $G$?
Does this problem have any applications apart from partition logic (the dual of the logic of subsets)?
Does this problem have a well-established name?
Given the various misunderstandings indicated by the comments, here are some pictures to illustrate these concepts. If you have an idea for an easier to understand terminology (instead of "cover", "equivalence relation", "disjoint union of cliques" and "not necessarily disjoint" union of edge sets), feel free to let me know.
Here is a picture of a graph and one equivalence relation covering it:
Here is a picture of a graph and two equivalence relations covering it:
It should be pretty obvious that at least two equivalence relations are required.
Here is a picture of a graph and three equivalence relations covering it:
It's less obvious that at least three equivalence relations are required. Lemma 1.9 from Dual of the Logic of Subsets can be used to show that this is true. The generalization of this lemma to nand operations with more than two inputs was the motivation for this question.
Answer: The problem is known as the equivalence covering problem in graph theory. It is upper bounded by the clique covering number (the minimum collection of cliques such that each edge of the graph is in at least one clique). There are many similar problems and definitions; one has to be very careful here. These two numbers are denoted by $\text{eq}(G)$ and $\text{cc}(G)$, respectively.
There are special graph classes where the exact value or a good upper bound for either number is known. In general, to the best of my knowledge, the best bounds are given by Alon [1]:
$$\log_2 n - \log_2 d \leq \text{eq}(G) \leq \text{cc}(G) \leq 2e^2 (\Delta+1)^2 \ln n,$$
where $\Delta$ is the maximum degree of $G$. By the way, a covering with $\lceil n^2/4 \rceil$ triangles and edges is always possible (cf. Mantel's theorem), and this is easy to find algorithmically as well.
Not surprisingly, computing either number is $\sf NP$-complete. Even for split graphs, computing $\text{eq}(G)$ is $\sf NP$-hard (but can be approximated within an additive constant 1) as shown in [2]. It is also hard to compute for graphs in which no two triangles have a vertex in common [3].
[1] Alon, Noga. "Covering graphs by the minimum number of equivalence relations." Combinatorica 6.3 (1986): 201-206.
[2] Blokhuis, Aart, and Ton Kloks. "On the equivalence covering number of splitgraphs." Information processing letters 54.5 (1995): 301-304.
[3] Kučera, Luděk, Jaroslav Nešetřil, and Aleš Pultr. "Complexity of dimension three and some related edge-covering characteristics of graphs." Theoretical Computer Science 11.1 (1980): 93-106. | {
"domain": "cs.stackexchange",
"id": 4288,
"tags": "algorithms, graphs, clique"
} |
PDO Database Auto Table Creator | Question: How can I do the following with fewer nested loops?
How can I output the table names above their corresponding tables without having to loop?
Functions for returning the DB Info:
// Return all the Table names in an array
public function getDbTables() {
$result = $this->query("SHOW TABLES");
while ($row = $result->fetchAll(PDO::FETCH_COLUMN)) {
return $row;
}
}
// Return ALL column names for each Table available
public function getColumnNames($table){
$sql = "SELECT COLUMN_NAME FROM INFORMATION_SCHEMA.COLUMNS WHERE table_name = :table";
try {
$stmt = $this->dbh->prepare($sql);
$stmt->bindValue(':table', $table, PDO::PARAM_STR);
$stmt->execute();
$output = array();
while($row = $stmt->fetch(PDO::FETCH_ASSOC)){
$output[] = $row['COLUMN_NAME'];
}
return $output;
}
catch(PDOException $pe) {
trigger_error('Could not connect to MySQL database. ' . $pe->getMessage() , E_USER_ERROR);
}
}
// Populate HTML Table with the DB Table Data
public function populateTable($table) {
$sql = "SELECT * FROM $table";
try {
$stmt = $this->dbh->prepare($sql);
$stmt->execute();
while($row = $stmt->fetch(PDO::FETCH_ASSOC)){
return $row;
}
}
catch(PDOException $pe) {
trigger_error('Could not connect to MySQL database. ' . $pe->getMessage() , E_USER_ERROR);
}
}
HTML Output:
<?php print_r($phpCore->getDbTables()); ?>
<div>
<form>
<table>
<?php
foreach($phpCore->getDbTables() as $tablename) {
echo '<tr>';
foreach($phpCore->getColumnNames($tablename) as $fieldnames) {
echo '<td>'.$fieldnames.'</td>';
}
echo '</tr>';
echo '<tr>';
foreach($phpCore->populateTable($tablename) as $tabledata) {
echo '<td><input type="text" value="'.$tabledata.'"></input></td>';
}
echo '</tr>';
}
?>
</table>
</form>
</div>
Visual Output:
Answer: A simple solution could be to add another <tr> element into the first foreach loop. You already have the table name, so outputting it to the screen shouldn't be too much trouble.
/*
* Your original structure with a line added.
*/
foreach($phpCore->getDbTables() as $tablename) {
/*
* Display the current table name.
*/
echo '<tr><td><b>Table: ' . ucfirst($tablename) . '</b></td></tr>';
echo '<tr>';
foreach($phpCore->getColumnNames($tablename) as $fieldnames) {
echo '<td>'.$fieldnames.'</td>';
}
echo '</tr>';
echo '<tr>';
foreach($phpCore->populateTable($tablename) as $tabledata) {
echo '<td><input type="text" value="'.$tabledata.'"></input></td>';
}
echo '</tr>';
}
I feel like the answer was a little easy, so I will add some more. Mixing PHP and HTML by interpolating strings with variables and tags can be hard to read and can cause bugs later. You should consider the following.
Extra
I would urge you to separate display logic and domain logic. The domain logic should only be responsible of fetching the required data, where the display logic should be responsible for the actual rendering of the data. An example could be the following:
The domain logic
When the appropriate data has been fetched from the data storage source you could return an array or even an object with the data. If using an array it could look like the following:
$tables = [
'users' => [
'columns' => ['id', 'username'],
'data' => ['1', 'admin']
],
'logins' => [
'columns' => ['id', 'user_id', 'timestamp'],
'data' => ['1', '1', '2015-7-21 21:25:00']
]
];
The display logic would then receive this array structure and decide (independently of the domain) how it should be rendered.
<table>
<?php foreach($tables as $name => $values): ?>
<tr>
<td>
<b><?= ucfirst(htmlspecialchars($name, ENT_QUOTES, 'UTF-8')); ?></b>
</td>
</tr>
<tr>
<?php foreach($values['columns'] as $column): ?>
<td>
<?= ucfirst(htmlspecialchars($column, ENT_QUOTES, 'UTF-8')); ?>
</td>
<?php endforeach; ?>
</tr>
<tr>
<?php foreach($values['data'] as $data): ?>
<td>
<input type="text" value="<?= htmlspecialchars($data, ENT_QUOTES, 'UTF-8'); ?>">
</td>
<?php endforeach; ?>
</tr>
<?php endforeach; ?>
</table>
Now you have separated display and domain logic effortlessly from each other. You can change your display to easily include the table name without changing the functions/methods used to fetch data. The same goes for the domain part. You can add new information into the domain without breaking the display layer. The new information will only be displayed when you write the necessary HTML structure.
Disclaimer: personal stuff incoming! Using the alternative PHP syntax inside HTML helps me change mental state so that I am less likely to write domain actions inside the display layer. This is of course subjective.
I would also remind you of output escaping. Here I have used the htmlspecialchars() function, but there is more to it than that. You should read up on it, as some of the nastiest security attacks are performed by exploiting a lack of output escaping.
Hope this can help, happy coding! | {
"domain": "codereview.stackexchange",
"id": 14824,
"tags": "php, html, pdo"
} |
Simulation of multi-particle systems, randomness and chaos | Question: The answer https://physics.stackexchange.com/a/10441/50677 for #2 (chaotic randomness) claims that absolute knowledge (whatever that would be) of starting conditions would be sufficient for a perfect prediction of the outcome. I think this is wrong. After all, the interactions between particles happen at quantum scales and are therefore fundamentally random, in the sense that one cannot predict e.g. the amount and direction of impulse that is exchanged in a "collision" interaction between two molecules - we can only trace probability clouds, which even after a short time and a few particles lead to what we perceive as chaotic behaviour: it is as good as not knowing the starting conditions exactly. This randomness affects systems like a gas atmosphere (thus making weather forecasts impossible even if we knew each and every particle's initial parameters) as well as the orbits of planets, therefore I think the answer is wrong on this point. Or am I?
Answer: It is impossible for us to decide whether reality is deterministic or stochastic.
Stochasticity may happen on such small scales that we need to precisely know the state of the entire universe to distinguish it from chaos.
Determinism may be so complex that we need to precisely measure the entire universe to make a prediction.
Quantum mechanics is a model of reality which is stochastic as this is the best way to explain observations.
However, whether there is an underlying deterministic process (hidden variables) that governs this apparent stochasticity is something we can never know.
At best we can make statements of the limits of this determinism and argue with Occam’s razor.
Just consider that we can simulate quantum randomness on a deterministic computer.
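To make that last point concrete, here is a toy sketch (illustrative only; the function name and the probability value are made up for the example): a seeded pseudo-random generator is fully deterministic, yet its output record passes for quantum measurement statistics, and replaying the seed reproduces the exact same "random" record.

```python
import random

def measure_qubit(p_one, n_shots, seed=0):
    """Deterministically 'measure' a qubit n_shots times; outcome 1
    occurs with probability p_one. The fixed seed makes the whole
    record reproducible, even though it looks statistically random."""
    rng = random.Random(seed)  # a deterministic generator
    return [1 if rng.random() < p_one else 0 for _ in range(n_shots)]

shots = measure_qubit(p_one=0.25, n_shots=100_000)
print(sum(shots) / len(shots))  # close to 0.25, identical on every run
```

Rerunning the script yields bit-for-bit the same outcomes, which is exactly the sense in which apparent stochasticity can hide a deterministic process.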
So, to come back to your question, if the universe happens to be deterministic and we could somehow completely isolate and measure a part of it, we could precisely predict the future.
Our insights from quantum mechanics and in particular regarding local hidden-variable theories put strong restrictions on what we can actually measure, how much we would have to measure, and whether such a thing as perfect isolation even makes sense, but nothing more. | {
"domain": "physics.stackexchange",
"id": 46449,
"tags": "quantum-mechanics, kinematics, simulations, chaos-theory, randomness"
} |
If two asteroids collide, what do we call it? | Question: If a smaller celestial body collides with a larger celestial body, but neither is classified as a planet, what do we call the smaller celestial body that collided with the larger one?
Asteroid/meteoroid or meteor/meteorite?
Do we call it asteroid collision or meteorite collision from the smaller to the greater?
Is there any logic in saying "meteor" even when there is virtually no atmosphere involved?
Answer: They are both called asteroids.
There is neither logic, nor science, nor even a convention of calling either body a "meteor", since meteors are strictly atmospheric phenomena, that occur when a small asteroid (a meteoroid) hits the atmosphere. When a meteoroid hits the moon, no meteor is formed.
In the case that two asteroids collide we call it an asteroid collision. | {
"domain": "astronomy.stackexchange",
"id": 6975,
"tags": "asteroids, meteor, meteorite, meteoroid"
} |
Solvability of Turing Machines | Question: I'm preparing for an exam, and on a sample one provided (without solutions), we have this question: Is the following solvable or non-solvable: Given a turing machine $T$, does it accept a word of even length? - Given a deterministic 1-tape turing machine $T$, does $T$ ever read the contents of the 10th cell?
Thanks!
Answer: "Non-trivial properties" for Rice's theorem are properties of the language, not of the machine itself - for instance, it's decidable how many states a TM has, which is certainly a non-trivial property! See Perplexed by Rice's theorem for a bit of a clarification on this. Now, one of your two properties is a property of the language, while one is a property of the machine; this should allow you to apply Rice's theorem in one case but not the other. Note that this isn't conclusive evidence that the other case is solvable, but it's guidance that you might try looking for a positive answer there.
(Note that this is meant to be a very loose hint just to guide you; I can certainly go into more specific details if you'd like...) | {
"domain": "cs.stackexchange",
"id": 768,
"tags": "turing-machines, undecidability"
} |
Normal distribution instead of Logistic distribution for classification | Question: Logistic regression, based on the logistic function $\sigma(x) =
\frac{1}{1 + \exp(-x)}$, can be seen as a hypothesis testing problem, where the reference distribution is the standard logistic distribution whose p.d.f. is
$f(x) = \frac{\exp(-x)}{[1 + \exp(-x)]^2}$
and the c.d.f is
$F(x) = \sigma(x) = \frac{1}{1 + \exp(-x)}$
The hypothesis to test is
$H_0: x \text{ isn't positive} \hspace{2.0cm} H_1: x \text{ is positive}$
The test statistic is $F(x)$. We reject $H_0$ if $F(x) \geq \alpha$ where $\alpha$ is the level of significance (in terms of hypothesis testing) or classification threshold (in terms of classification problem)
My question is: why don't they use the standard normal distribution, which truly reflects the "distribution of nature", instead of the logistic distribution?
Answer: Nice comparison.
Generally, we are allowed to experiment with as many distributions as we want, and find the one that suits our purpose. However, the normality assumption leads to an intractable derivation involving the notorious erf function.
Let's first pinpoint what is $x$ in the context of logistic regression. Logistic regression model can be written as:
$$P(y=1|\boldsymbol{x})=\frac{1}{1+e^{-\boldsymbol{w}^t\boldsymbol{x}}}=F(\boldsymbol{w}^t\boldsymbol{x})$$
So your $x$ is actually $z=\boldsymbol{w}^t\boldsymbol{x}$. This means, although it is reasonable to assume that predicate $\boldsymbol{x}$ comes from a normal distribution, the same argument does not hold for a linear combination of its dimensions, i.e. $z$. In other words, the normal assumption is not as natural for $z$ as for $\boldsymbol{x}$.
But still, let's see what happens with normal assumption. The problem that we face here is analytical intractability. More specifically, to fit a similar model to observations using Maximum Likelihood, we need (1) derivative of cumulative distribution function (CDF) with respect to each parameter $w_i$, and (2) value of CDF for a given $z$ (see this lecture section 12.2.1 for more details).
For logistic distribution, the required gradient would be:
$$\begin{align*}
\frac{\partial F(\boldsymbol{x};\boldsymbol{w})}{\partial w_i}&=\frac{\partial (1+e^{-\boldsymbol{w}^t\boldsymbol{x}})^{-1}}{\partial w_i}= x_i e^{-\boldsymbol{w}^t\boldsymbol{x}}(1+e^{-\boldsymbol{w}^t\boldsymbol{x}})^{-2} =x_if(\boldsymbol{x};\boldsymbol{w})
\end{align*}$$
However, for the normal distribution the CDF is expressed through the erf function, which has no closed-form expression, though its gradient is tractable. Assuming $z \sim \mathcal{N}(0, 1)$, the gradient would be:
$$\begin{align*}
\frac{\partial F(\boldsymbol{x};\boldsymbol{w})}{\partial w_i}&=\frac{\partial \left(\frac{1}{2}+\frac{1}{2}\text{erf}\left(\frac{z}{\sqrt{2}}\right)\right)}{\partial w_i}=\frac{x_i}{\sqrt{2 \pi}} e^{-\frac{(\boldsymbol{w}^t\boldsymbol{x})^2}{2}}=x_if(\boldsymbol{x};\boldsymbol{w})
\end{align*}$$
In summary, the normality assumption is not as justified for $z=\boldsymbol{w}^t\boldsymbol{x}$ as for $\boldsymbol{x}$, and it leads to an intractable CDF. Therefore, we continue using the good old logistic regression!
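To quantify how close the two shapes actually are, here is a quick standard-library check; the rescaling constant 1.702 is the classic logit–probit matching factor from the literature (taken as given here, not derived):

```python
import math

def logistic_cdf(x):
    # Closed form -- this is what makes logistic regression tractable.
    return 1.0 / (1.0 + math.exp(-x))

def normal_cdf(x):
    # No elementary closed form: we must go through the erf function.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Scan a grid and measure the worst-case gap between the normal CDF
# and the rescaled logistic CDF.
xs = [i / 100.0 for i in range(-500, 501)]
max_gap = max(abs(normal_cdf(x) - logistic_cdf(1.702 * x)) for x in xs)
print(max_gap)  # below 0.01 everywhere on the grid
```

A worst-case gap of under one percentage point is the "small difference" that the intractable erf-based derivation would buy.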
Here is a visual comparison of normal and logistic CDFs:
taken from a post by Enrique Pinzon, which implies a large analytical cost for a small difference! | {
"domain": "datascience.stackexchange",
"id": 4890,
"tags": "classification, logistic-regression, distribution"
} |
catkin_make error when build talker and listener | Question:
I followed the tutorials and created talker.cpp and listener.cpp in the directory ~/catkin_ws/src/beginner_tutorials/src. I added the following lines to CMakeLists.txt:
add_executable(talker src/talker.cpp)
target_link_libraries(talker ${catkin_LIBRARIES})
add_dependencies(talker beginner_tutorials_generate_messages_cpp)
add_executable(listener src/listener.cpp)
target_link_libraries(listener ${catkin_LIBRARIES})
add_dependencies(listener beginner_tutorials_generate_messages_cpp)
but when I cd ~/catkin_ws and run catkin_make, this error happens:
Base path: /home/linux/catkin_ws
Source space: /home/linux/catkin_ws/src
Build space: /home/linux/catkin_ws/build
Devel space: /home/linux/catkin_ws/devel
Install space: /home/linux/catkin_ws/install
####
#### Running command: "make cmake_check_build_system" in "/home/linux/catkin_ws/build"
####
####
#### Running command: "make -j2 -l2" in "/home/linux/catkin_ws/build"
####
Scanning dependencies of target listener
Scanning dependencies of target talker
make[2]: *** No rule to make target 'beginner_tutorials/CMakeFiles/listener.dir/build'. Stop.
CMakeFiles/Makefile2:473: recipe for target 'beginner_tutorials/CMakeFiles/listener.dir/all' failed
make[1]: *** [beginner_tutorials/CMakeFiles/listener.dir/all] Error 2
make[1]: *** Waiting for unfinished jobs....
[ 50%] Building CXX object beginner_tutorials/CMakeFiles/talker.dir/src/talker.cpp.o
[100%] Linking CXX executable /home/linux/catkin_ws/devel/lib/beginner_tutorials/talker
[100%] Built target talker
Makefile:138: recipe for target 'all' failed
make: *** [all] Error 2
Invoking "make -j2 -l2" failed
So what should I do to solve the problem?
Originally posted by Caryzhou on ROS Answers with karma: 1 on 2017-04-24
Post score: 0
Answer:
Check that listener.cpp is in the specified directory and has the correct name.
Originally posted by nlamprian with karma: 366 on 2017-04-25
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by Caryzhou on 2017-04-25:
Thank you, I'm sorry, I gave the file the wrong name. Thank you very much!
"domain": "robotics.stackexchange",
"id": 27718,
"tags": "ros"
} |
How does the voltage across an inductor and a capacitor vary in a series LCR circuit about resonance? | Question: I am aware that at resonance, the voltages across the inductor and the capacitor are equal in magnitude and opposite in phase. However, I want to know how the voltages across $L$ and $C$ vary if I vary the frequency on either side of the resonant frequency, and whether any relationship exists between the two (e.g. whether one decreases while the other increases with frequency below the resonant frequency, with the behaviour swapping above the resonant frequency).
Update:
For and LCR circuit, we can write the following expressions for the voltages across the capacitor $V_c$ and the inductor $V_L$:
$V_c=\frac{-j}{\omega C}\frac{V}{R+j(\omega L - \frac{1}{\omega C})}$
and
$V_L=V\frac{ j\omega L }{R+j(\omega L - \frac{1}{\omega C})}$
($V$ is the rms volatge applied to the circuit and $j=\sqrt{-1}$)
And the magnitudes of $V_c$ and $V_L$ are:
$|{V_c}|=\frac{V}{\omega C\sqrt{R^2+(\omega L - \frac{1}{\omega C})^2}}$
and
$|V_L|=\frac{V \omega L}{\sqrt{R^2+(\omega L - \frac{1}{\omega C})^2}}$
And I plotted $|V_c|$ and $|V_L|$ as functions of $\omega$ using Mathematica and here are the results:
The first plot shows the variation of $|V_c|$ with $\omega$ and the second plot shows the variation of $|V_L|$ with $\omega$. The voltages are along the y-axis and $\omega$ is along the x-axis.
(I used some standard values: $V=5\,\text{V}$, $R=100\,\Omega$, $C=1\,\mu\text{F}$ and $L=30\,\text{mH}$.)
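These expressions can also be evaluated with a short script (a sketch using the same values; any plotting library can replace the printouts):

```python
import math

V, R, C, L = 5.0, 100.0, 1e-6, 30e-3  # the standard values used above

def v_cap(w):
    """|V_c| as a function of angular frequency w."""
    return V / (w * C * math.sqrt(R**2 + (w * L - 1.0 / (w * C))**2))

def v_ind(w):
    """|V_L| as a function of angular frequency w."""
    return V * w * L / math.sqrt(R**2 + (w * L - 1.0 / (w * C))**2)

w0 = 1.0 / math.sqrt(L * C)  # resonant angular frequency, ~5773.5 rad/s

print(v_cap(w0), v_ind(w0))              # equal magnitudes at resonance
print(v_cap(0.5 * w0), v_ind(0.5 * w0))  # below resonance |V_c| dominates
print(v_cap(2.0 * w0), v_ind(2.0 * w0))  # above resonance |V_L| dominates
```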
Answer: note: I accidentally thought OP was asking about a series $LC$, not a series $LCR$.
Including the $R$ changes the results here by making the infinities turn into large finite values.
Suppose you hook your series $LC$ circuit up to a voltage source with frequency dependent phasor $\tilde{V}_s(\omega)$.
Intuition
First let's guess what happens.
At low frequency the inductor looks like a short circuit and the capacitor looks like an open, so the voltage across the inductor should be near zero and the voltage across the capacitor should be roughly $V_s$.
At high frequency the inductor looks like an open and the capacitor looks like a short (opposite of low frequency case) so the voltage across the inductor should be roughly $V_s$ and the voltage across the capacitor should be roughly zero.
Near the resonance, the impedances of the inductor and capacitor cancel and the total impedance of the series circuit is very small.
Therefore, near the resonance the total current $I$ through the circuit gets very large.
The voltage across the inductor is given by $V_L = I_L \times Z_L$ where $I_L$ is the current through the inductor and $Z_L$ is the impedance of the inductor.
Since we have a series circuit, $I_L = I$, so near the resonance where $I$ gets very large we expect $V_L$ to also get very large.
The same reasoning applies to the capacitor.
Math
From the voltage divider equation you know that the voltage across the inductor is
$$V_L(\omega) = \tilde{V}_s(\omega) \frac{Z_L(\omega)}{Z_L(\omega) + Z_C(\omega)}$$
where $Z_L$ is the impedance of the inductor and $Z_C$ is the impedance of the capacitor.
Putting in the usual impedance for capacitor and inductor gives
$$\tilde{V}_L(\omega) / \tilde{V}_s(\omega) = \frac{-\omega^2 / \omega_0^2}{1 - \omega^2 / \omega_0^2} $$
where $\omega_0 \equiv 1/\sqrt{LC}$ is the resonance frequency.
As $\omega \rightarrow 0$, $\tilde{V}_L \rightarrow 0$.
As $\omega \rightarrow \omega_0^-$ ($\omega$ approaches $\omega_0$ from the lower side), $\tilde{V}_L \rightarrow -\infty$.
As $\omega \rightarrow \omega_0^+$, $\tilde{V_L} \rightarrow \infty$.
As $\omega \rightarrow \infty$, $\tilde{V_L} \rightarrow \tilde{V}_s$.
By the same reasoning you get
$$\tilde{V}_C(\omega) / \tilde{V}_s(\omega) = \frac{1}{1 - \omega^2 / \omega_0^2} \, .$$
Here,
As $\omega \rightarrow 0$, $\tilde{V}_C \rightarrow \tilde{V}_s$.
As $\omega \rightarrow \omega_0^-$, $\tilde{V}_C \rightarrow \infty$.
As $\omega \rightarrow \omega_0^+$, $\tilde{V}_C \rightarrow -\infty$.
As $\omega \rightarrow \infty$, $\tilde{V}_C \rightarrow 0$.
Comparing the results for the capacitor and inductor you can see that their roles are exactly switched in all cases. | {
"domain": "physics.stackexchange",
"id": 52046,
"tags": "electric-circuits, capacitance, voltage, induction, resonance"
} |
UR3 RT remote position control | Question:
We're looking for a simple way to control the position of the head (end effector/ TCP) of a UR3e robot.
The requirements are simple: we wish to send in real time 6DOF values representing the head position in the world - p[x,y,z,ax,ay,az] - and have the robot move in a simple linear manner.
Manual calibration of the pose is possible.
The goal is to close a loop with an external operator (teleoperation), for example an "absolute" joystick.
The closest references found:
Move Group for generating poses
Jog control as a reference for connecting to UR3 - https://github.com/tork-a/jog_control
Any thought would be appreciated.
Thanks!
Originally posted by SometimesButMaybe on ROS Answers with karma: 23 on 2020-02-24
Post score: 2
Answer:
I would suggest you take a look at the wiki/ur_robot_driver package. This would provide you with a driver capable of interfacing with the real-time external motion capabilities of the UR controller (at either 125 or 500 Hz, depending on which exact model of controller you have).
After starting the driver, switch to the JointGroupVelocityController using the controller_manager's services.
Then simply provide appropriate setpoints to the driver at the expected rate (ie: either 125 or 500 Hz) to close the loop.
You may want to take a look at the jog_arm package that was recently merged into MoveIt: Arm Jogging in Real-Time.
The requirements are simple: we wish to send in real time 6DOF values representing the head position in the world - p[x,y,z,ax,ay,az] - and have the robot move in a simple linear manner.
I like your repeated use of "simple". You are probably aware how "not simple" this sort of task is in a distributed control setting with (almost) infinite variation in input devices and (almost) infinite variation in system-to-be-controlled (the fact you use a UR3 specifically doesn't really matter, there can be no assumptions about this in the software).
Originally posted by gvdhoorn with karma: 86574 on 2020-02-25
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by SometimesButMaybe on 2020-02-25:
Hi!
Thanks for the thorough answer, we'll definitely use the materials you sent.
You got a point about the "simple" terminology. It comes from us expecting this functionality to be more integrated into the UR ecosystem (ROS aside), and learning the limitations as we progress through our project.
"domain": "robotics.stackexchange",
"id": 34488,
"tags": "ros, ros-melodic, universal-robot"
} |
Confusion regarding the derivation of commutator relations using the radial ordered contour integrals in 2D CFT | Question: I am a little confused about the way commutators are derived in the radial quantization of a 2D CFT. I am trying to derive the relation $$\int_w \mathcal{R} \left(a(z)b(w)\right)dz = \left[A,b(w)\right]$$ where $A = \int a(z) dz.$
In order to clarify my question, I will follow the steps in reverse compared to what is done in the book "Conformal Field Theory" by Francesco and other authors.
Let's say we have two operators $a(z)$ and $b(w)$. Let us fix the point $w$ somewhere in the plane. Define two circular contours $C_1$ and $C_2$ around the origin such that the point $w$ sits in the annular region made by $C_1$ and $C_2$. Now define two integrals $\int_{C_1}a(z)b(w)dz$ and $\int_{C_2}b(w)a(z)dz$ and subtract them (contours shown in fig 1) as $$\int_{C_1}a(z)b(w)dz - \int_{C_2}b(w)a(z)dz.$$
Then morph the loops from fig 1 into those of fig 2 by moving the red and blue loops close to each other without changing the order of the operators (in order to maintain radial ordering). Finally, I break the integrals of fig 2 into two pieces, one piece is shown in fig 3 and the other one is in fig 4.
My question is this: Why do we neglect the integral corresponding to fig 4? In the limiting case where the blue arc $C_4$ and the red arc $C_5$ come very close, the integral corresponding to figure 4 should be $$\int_{C_5 \to C_4} [a(z),b(w)]dz.$$ Isn't the integral of fig 4 the major contributor to the commutator $\left[A,b(w)\right]$? If so, wouldn't the contribution of integral in fig 3 go to zero in the limit where $C_3$ shrinks to point $w$ because then fig 4 will become the entire commutator?
Answer: The OPE $${\cal R} a(z)b(w)~=~\ldots$$ is often a regular function for $z\neq w$, so the contour integral in OP's Fig. 4 vanishes. In fact the OPE is typically a truncated Laurent series in $z$ around $w$, cf. Ref. 1.
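To spell out why the Fig. 4 piece drops out (a sketch of the standard analytic-continuation argument): inside the radial ordering, both arcs carry boundary values of one and the same operator-valued function $\mathcal{R}\,a(z)b(w)$, which is analytic for $z\neq w$. Since $C_4$ and $C_5$ are traversed in opposite senses, their contributions cancel in the limit,
$$\lim_{C_5\to C_4}\left(\int_{C_4}\mathcal{R}\,a(z)b(w)\,\mathrm{d}z-\int_{C_5}\mathcal{R}\,a(z)b(w)\,\mathrm{d}z\right)=0,$$
leaving the small contour of Fig. 3 around $w$, where the singular terms of the OPE live, to produce the entire commutator.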
References:
P. Di Francesco, P. Mathieu and D. Senechal, CFT, 1997; subsection 6.1.2 and eq. (5.72). | {
"domain": "physics.stackexchange",
"id": 85352,
"tags": "operators, conformal-field-theory, integration, commutator"
} |
Is there any convincing evidence for the existence of nanobacteria? | Question: The existence of nanometer scale microorganisms has been proposed and used explain several phenomena including morphological structures in a martian meteorite (ALH 84001) and implication in the formation of kidney stones (1).
Is there now any consensus in the biology community whether these are biotic or abiotic in nature?
(1) Kajander EO, Ciftcioglu N (1998) Nanobacteria: An alternative mechanism for pathogenic intra- and extracellular calcification and stone formation. Proc Natl Acad Sci U S A 95: 8274–8279
Answer: There are two convincing papers (1,2) arguing that the observed "nanobacteria" are in fact mineral/protein complexes and not any living organisms.
In (1) Martel and Young created very similar looking particles from calcium carbonate in vitro. The calcium carbonate precipitations can even look similar to dividing cells:
After 5 days of serum incubation, a white precipitate formed and
adhered on the bottom of each flask. When examined by scanning
electron microscopy (SEM) and transmission electron microscopy (TEM),
the precipitate was seen to consist of particles with a size ≈500 nm
(Fig. 1 A, B, and E), an observation consistent with normal bacterial
morphology, which peaks at one particular size specific to the strain
on hand. Some forms resembled cells undergoing division (Fig. 1 D,
arrow).
One strong argument against those nanobacteria being living organisms is that they don't seem to contain any nucleic acids. Of course non-DNA based organisms are theoretically feasible, but that would be an absolutely extraordinary discovery.
The authors of (1) also observed these particles after filtration through a 0.1 µm filter. They argue that this size could not be sufficient to contain all the cellular machinery bacteria need:
Because an earlier workshop commissioned by the NAS had suggested that
the minimal cellular size of life on Earth must exceed 200 nm in
diameter to harbor the cellular machinery based on DNA replication
(19), it is unlikely that nanobacteria represent living entities
unless they contain some other type of replicating mechanism.
(1) Martel J and Young JD, Purported nanobacteria in human blood as calcium carbonate nanoparticles. PNAS 2008, 105(14):5549-54
(2) Raoult D. et al., Nanobacteria Are Mineralo Fetuin Complexes, PLoS Pathog. 2008, 4(2): e41. | {
"domain": "biology.stackexchange",
"id": 8,
"tags": "microbiology, astrobiology"
} |
How to actually calculate work from force times distance? | Question: I understand that work = force $\cdot$ distance, but I'm not sure of the units and conversions to use.
What I know is that I have a piston head of area $\pu{28cm^2}$ (~$\pu{4.4in^2}$) traveling a distance of $\pu{6cm}$ (2.4") perpendicular to an opposing force of $40,000$ psi. How much work is being done? I'm guessing I am looking for an answer in joules?
My thinking so far is that the answer to this question is as follows:
work in $\pu{Joules}$ = $\pu{Pa * m^3}$ displacement.
Displacement = area of piston head $\cdot$ distance travelled $= \pu{28cm^2} \cdot \pu{6cm} = \pu{168cm^3} = \pu{1.68\cdot 10^{-3} m^3}$. Force in $\pu{Pa}$ is $\pu{40000psi} \cdot 6895 = \pu{275.8MPa}$.
So force $\cdot$ displacement = $\pu{46,334 Joules}$.
I'm not sure if this is correct and am confused by some of the conversions.
Answer: You might want to familiarize yourself first with the correct use of quantities, and then with the correct use of the corresponding units. In particular, $40\,000\ \mathrm{psi}$ is the value of a pressure and not of a force. Furthermore, you should avoid confusing quantities, names of quantities, and units in equations as in “work in joules = Pa * m^3 displacement”. You might want to use proper quantity calculus instead. You should also avoid the wrong use of SI prefixes as in “40000 × 6895 = 275.8M”.
Your initial assumption is correct. Work is force times distance, assuming the movement is in the direction of the force. In quantity calculus, this is expressed as the following quantity equation (for simplicity’s sake ignoring the vectorial character of the quantities):
$$W=F\cdot x\tag1$$
where
$W$ is work,
$F$ is force, and
$x$ is distance.
You do not know the value of the force $F$; however, you know the value of the pressure $p$ and the value of the surface area $A$. And you know that pressure is defined as
$$p=\frac FA\tag2$$
Thus, the force $F$ acting on the piston head is
$$F=p\cdot A\tag3$$
You could use this equation to calculate an intermediate value for the force $F$; however, it is generally preferable to insert such intermediate equations into the total equation for the final result; i.e. in this case, insert Equation (3) into Equation (1), which yields:
$$W=p\cdot A\cdot x\tag4$$
Now you have only a single equation for the work $W$ that only depends on known independent quantities (the given pressure $p$, area $A$, and distance $x$).
It is only now that you should insert actual numerical values for the quantities. In order to avoid the need for conversion factors, you might want to convert the given values to a coherent system of units first, preferably SI base units. Thus
$p=40\,000\ \mathrm{psi}=2.7579\times10^8\ \mathrm{Pa}$,
$A=28\ \mathrm{cm^2}=0.0028\ \mathrm{m^2}$, and
$x=6\ \mathrm{cm}=0.06\ \mathrm m$.
Inserting these values into Equation (4) yields
$$\begin{align}W&=p\cdot A\cdot x\\&=2.7579\times10^8\ \mathrm{Pa}\times0.0028\ \mathrm{m^2}\times0.06\ \mathrm m\\&=46333\ \mathrm J\\&\approx5\times10^4\ \mathrm J\end{align}$$
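The same arithmetic as a short script; the psi-to-pascal factor 6894.757 is the usual conversion constant (slightly more precise than the 6895 used in the question):

```python
PSI_TO_PA = 6894.757  # pascals per psi

p = 40_000 * PSI_TO_PA  # pressure, Pa
A = 28e-4               # 28 cm^2 expressed in m^2
x = 0.06                # 6 cm expressed in m

W = p * A * x           # Equation (4): work in joules
print(W)                # roughly 4.6e4 J
```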
(Note that $1\ \mathrm{Pa}=1\ \mathrm{kg\ m^{-1}\ s^{-2}}$ and that $1\ \mathrm J=1\ \mathrm{kg\ m^2\ s^{-2}}$.) | {
"domain": "chemistry.stackexchange",
"id": 11509,
"tags": "units"
} |
Brillouin scattering in multimode fibers | Question: Fiber ring resonators made up of single mode fibers often incorporate an optical isolator to suppress the build-up of Brillouin scattering. However, ring resonators made of multimode fibers generally do not contain an isolator.
Why is it that the adverse effects of Brillouin scattering are negligible in multimode fibers?
Answer:
Stimulated Brillouin scattering propagates "backwards", hence an isolator effectively suppresses it while allowing laser operation in the main direction.
Multi-mode fiber lasers are typically not single-frequency, hence SBS is much less of a concern.
So, if you make a single-mode fiber laser that is not single-frequency, with a large enough bandwidth (>>10 GHz), it would also not require special means of SBS suppression.
"domain": "physics.stackexchange",
"id": 58269,
"tags": "optics, scattering, fiber-optics, laser-cavity"
} |
A Function Applier for Applying Various Algorithms on Nested Container Things in C++ | Question: This is a follow-up question for A recursive_replace_if Template Function Implementation in C++, A recursive_copy_if Template Function Implementation in C++, A recursive_count_if Function with Unwrap Level for Various Type Arbitrary Nested Iterable Implementation in C++ and A recursive_count_if Function For Various Type Arbitrary Nested Iterable Implementation in C++. After implementing recursive_replace_if and recursive_copy_if, I am trying to generalize the operation of applying a function to the elements of a nested container. The unwrap level parameter is used here instead of std::invocable as the termination condition of the recursion process.
The experimental implementation
The experimental implementation is as below.
// recursive_function_applier implementation
template<std::size_t unwrap_level, class F, std::ranges::range Range, class... Args>
constexpr auto recursive_function_applier(const F& function, const Range& input, Args... args)
{
if constexpr (unwrap_level >= 1)
{
Range output{};
std::ranges::transform(
std::ranges::cbegin(input),
std::ranges::cend(input),
std::inserter(output, std::ranges::end(output)),
[&function, &args...](auto&& element) { return recursive_function_applier<unwrap_level - 1>(function, element, args...); }
);
return output;
}
else
{
Range output{};
function(
std::ranges::cbegin(input),
std::ranges::cend(input),
std::inserter(output, std::ranges::end(output)),
args...);
return output;
}
}
Test cases
Test cases with using std::ranges::copy function
// std::copy test cases
// std::vector<int>
std::vector<int> test_vector{ 5, 7, 4, 2, 8, 6, 1, 9, 0, 3 };
recursive_print(recursive_function_applier<0>(std::ranges::copy, test_vector));
// std::vector<std::vector<int>>
std::vector<decltype(test_vector)> test_vector2{ test_vector , test_vector , test_vector };
recursive_print(recursive_function_applier<1>(std::ranges::copy, test_vector2));
// std::vector<std::string>
std::vector<std::string> test_string_vector{ "0", "1", "2", "3", "4", "5", "6", "7", "8", "9", "10", "11", "12", "13", "14", "15", "16", "17", "18", "19", "20" };
recursive_print(
recursive_function_applier<0>(
std::ranges::copy, test_string_vector
)
);
// std::vector<std::vector<std::string>>
std::vector<decltype(test_string_vector)> test_string_vector2{ test_string_vector , test_string_vector , test_string_vector };
recursive_print(
recursive_function_applier<1>(
std::ranges::copy, test_string_vector2
)
);
// std::deque<int>
std::deque<int> test_deque;
test_deque.push_back(1);
test_deque.push_back(2);
test_deque.push_back(3);
test_deque.push_back(4);
test_deque.push_back(5);
test_deque.push_back(6);
recursive_print(recursive_function_applier<0>(std::ranges::copy, test_deque));
// std::deque<std::deque<int>>
std::deque<decltype(test_deque)> test_deque2;
test_deque2.push_back(test_deque);
test_deque2.push_back(test_deque);
test_deque2.push_back(test_deque);
recursive_print(recursive_function_applier<1>(std::ranges::copy, test_deque2));
// std::list<int>
std::list<int> test_list = { 1, 2, 3, 4, 5, 6 };
recursive_print(recursive_function_applier<0>(std::ranges::copy, test_list));
// std::list<std::list<int>>
std::list<std::list<int>> test_list2 = { test_list, test_list, test_list, test_list };
recursive_print(recursive_function_applier<1>(std::ranges::copy, test_list2));
recursive_print(
recursive_function_applier<10>(
std::ranges::copy,
n_dim_container_generator<10, std::list>(test_list, 3)
)
);
Test cases with using std::ranges::replace_copy_if function
// std::ranges::replace_copy_if test cases
// std::vector<int>
std::vector<int> test_vector{ 5, 7, 4, 2, 8, 6, 1, 9, 0, 3 };
recursive_print(recursive_function_applier<0>(std::ranges::replace_copy_if, test_vector, std::bind(std::less<int>(), std::placeholders::_1, 5), 55));
// std::vector<std::vector<int>>
std::vector<decltype(test_vector)> test_vector2{ test_vector , test_vector , test_vector };
recursive_print(recursive_function_applier<1>(std::ranges::replace_copy_if, test_vector2, std::bind(std::less<int>(), std::placeholders::_1, 5), 55));
// std::vector<std::string>
std::vector<std::string> test_string_vector{ "0", "1", "2", "3", "4", "5", "6", "7", "8", "9", "10", "11", "12", "13", "14", "15", "16", "17", "18", "19", "20" };
recursive_print(
recursive_function_applier<0>(
std::ranges::replace_copy_if, test_string_vector, [](std::string x) { return (x == "1"); }, "11"
)
);
// std::vector<std::vector<std::string>>
std::vector<decltype(test_string_vector)> test_string_vector2{ test_string_vector , test_string_vector , test_string_vector };
recursive_print(
recursive_function_applier<1>(
std::ranges::replace_copy_if, test_string_vector2, [](std::string x) { return (x == "1"); }, "11"
)
);
// std::deque<int>
std::deque<int> test_deque;
test_deque.push_back(1);
test_deque.push_back(2);
test_deque.push_back(3);
test_deque.push_back(4);
test_deque.push_back(5);
test_deque.push_back(6);
recursive_print(recursive_function_applier<0>(std::ranges::replace_copy_if, test_deque, [](int x) { return (x % 2) == 0; }, 0));
// std::deque<std::deque<int>>
std::deque<decltype(test_deque)> test_deque2;
test_deque2.push_back(test_deque);
test_deque2.push_back(test_deque);
test_deque2.push_back(test_deque);
recursive_print(recursive_function_applier<1>(std::ranges::replace_copy_if, test_deque2, [](int x) { return (x % 2) == 0; }, 0));
// std::list<int>
std::list<int> test_list = { 1, 2, 3, 4, 5, 6 };
recursive_print(recursive_function_applier<0>(std::ranges::replace_copy_if, test_list, [](int x) { return (x % 2) == 0; }, 0));
// std::list<std::list<int>>
std::list<std::list<int>> test_list2 = { test_list, test_list, test_list, test_list };
recursive_print(recursive_function_applier<1>(std::ranges::replace_copy_if, test_list2, [](int x) { return (x % 2) == 0; }, 0));
recursive_print(
recursive_function_applier<10>(
std::ranges::replace_copy_if,
n_dim_container_generator<10, std::list>(test_list, 3),
[](int x) { return (x % 2) == 0; },
0
)
);
Test cases using the std::ranges::remove_copy function
// std::remove_copy test cases
// std::vector<int>
std::vector<int> test_vector{ 5, 7, 4, 2, 8, 6, 1, 9, 0, 3 };
recursive_print(recursive_function_applier<0>(std::ranges::remove_copy, test_vector, 5));
// std::vector<std::vector<int>>
std::vector<decltype(test_vector)> test_vector2{ test_vector , test_vector , test_vector };
recursive_print(recursive_function_applier<1>(std::ranges::remove_copy, test_vector2, 5));
// std::vector<std::string>
std::vector<std::string> test_string_vector{ "0", "1", "2", "3", "4", "5", "6", "7", "8", "9", "10", "11", "12", "13", "14", "15", "16", "17", "18", "19", "20" };
recursive_print(
recursive_function_applier<0>(
std::ranges::remove_copy, test_string_vector, "0"
)
);
// std::vector<std::vector<std::string>>
std::vector<decltype(test_string_vector)> test_string_vector2{ test_string_vector , test_string_vector , test_string_vector };
recursive_print(
recursive_function_applier<1>(
std::ranges::remove_copy, test_string_vector2, "0"
)
);
// std::deque<int>
std::deque<int> test_deque;
test_deque.push_back(1);
test_deque.push_back(2);
test_deque.push_back(3);
test_deque.push_back(4);
test_deque.push_back(5);
test_deque.push_back(6);
recursive_print(recursive_function_applier<0>(std::ranges::remove_copy, test_deque, 1));
// std::deque<std::deque<int>>
std::deque<decltype(test_deque)> test_deque2;
test_deque2.push_back(test_deque);
test_deque2.push_back(test_deque);
test_deque2.push_back(test_deque);
recursive_print(recursive_function_applier<1>(std::ranges::remove_copy, test_deque2, 1));
// std::list<int>
std::list<int> test_list = { 1, 2, 3, 4, 5, 6 };
recursive_print(recursive_function_applier<0>(std::ranges::remove_copy, test_list, 1));
// std::list<std::list<int>>
std::list<std::list<int>> test_list2 = { test_list, test_list, test_list, test_list };
recursive_print(recursive_function_applier<1>(std::ranges::remove_copy, test_list2, 1));
recursive_print(
recursive_function_applier<10>(
std::ranges::remove_copy,
n_dim_container_generator<10, std::list>(test_list, 3),
1
)
);
Test cases using the std::ranges::remove_copy_if function
// std::remove_copy_if test cases
// std::vector<int>
std::vector<int> test_vector{ 5, 7, 4, 2, 8, 6, 1, 9, 0, 3 };
recursive_print(recursive_function_applier<0>(std::ranges::remove_copy_if, test_vector, std::bind(std::less<int>(), std::placeholders::_1, 5)));
// std::vector<std::vector<int>>
std::vector<decltype(test_vector)> test_vector2{ test_vector , test_vector , test_vector };
recursive_print(recursive_function_applier<1>(std::ranges::remove_copy_if, test_vector2, std::bind(std::less<int>(), std::placeholders::_1, 5)));
// std::vector<std::string>
std::vector<std::string> test_string_vector{ "0", "1", "2", "3", "4", "5", "6", "7", "8", "9", "10", "11", "12", "13", "14", "15", "16", "17", "18", "19", "20" };
recursive_print(
recursive_function_applier<0>(
std::ranges::remove_copy_if, test_string_vector, [](std::string x) { return (x == "0"); }
)
);
// std::vector<std::vector<std::string>>
std::vector<decltype(test_string_vector)> test_string_vector2{ test_string_vector , test_string_vector , test_string_vector };
recursive_print(
recursive_function_applier<1>(
std::ranges::remove_copy_if, test_string_vector2, [](std::string x) { return (x == "0"); }
)
);
// std::deque<int>
std::deque<int> test_deque;
test_deque.push_back(1);
test_deque.push_back(2);
test_deque.push_back(3);
test_deque.push_back(4);
test_deque.push_back(5);
test_deque.push_back(6);
recursive_print(recursive_function_applier<0>(std::ranges::remove_copy_if, test_deque, std::bind(std::less<int>(), std::placeholders::_1, 5)));
// std::deque<std::deque<int>>
std::deque<decltype(test_deque)> test_deque2;
test_deque2.push_back(test_deque);
test_deque2.push_back(test_deque);
test_deque2.push_back(test_deque);
recursive_print(recursive_function_applier<1>(std::ranges::remove_copy_if, test_deque2, std::bind(std::less<int>(), std::placeholders::_1, 5)));
// std::list<int>
std::list<int> test_list = { 1, 2, 3, 4, 5, 6 };
recursive_print(recursive_function_applier<0>(std::ranges::remove_copy_if, test_list, std::bind(std::less<int>(), std::placeholders::_1, 5)));
// std::list<std::list<int>>
std::list<std::list<int>> test_list2 = { test_list, test_list, test_list, test_list };
recursive_print(recursive_function_applier<1>(std::ranges::remove_copy_if, test_list2, std::bind(std::less<int>(), std::placeholders::_1, 5)));
recursive_print(
recursive_function_applier<5>(
std::ranges::remove_copy_if,
n_dim_container_generator<5, std::list>(test_list, 3),
std::bind(std::less<int>(), std::placeholders::_1, 5)
)
);
Full Testing Code
The full testing code:
// A Function Applier for Applying Various Algorithms on Nested Container Things in C++
#include <algorithm>
#include <array>
#include <cassert>
#include <chrono>
#include <complex>
#include <concepts>
#include <deque>
#include <exception>
#include <execution>
#include <functional>
#include <iostream>
#include <iterator>
#include <list>
#include <map>
#include <numeric>
#include <optional>
#include <ranges>
#include <stdexcept>
#include <string>
#include <type_traits>
#include <utility>
#include <variant>
#include <vector>
template<typename T>
concept is_inserterable = requires(T x)
{
std::inserter(x, std::ranges::end(x));
};
#ifdef USE_BOOST_MULTIDIMENSIONAL_ARRAY
template<typename T>
concept is_multi_array = requires(T x)
{
x.num_dimensions();
x.shape();
boost::multi_array(x);
};
#endif
// recursive_copy_if function
template <std::ranges::input_range Range, std::invocable<std::ranges::range_value_t<Range>> UnaryPredicate>
constexpr auto recursive_copy_if(const Range& input, const UnaryPredicate& unary_predicate)
{
Range output{};
std::ranges::copy_if(std::ranges::cbegin(input), std::ranges::cend(input),
std::inserter(output, std::ranges::end(output)),
unary_predicate);
return output;
}
template <
std::ranges::input_range Range,
class UnaryPredicate>
constexpr auto recursive_copy_if(const Range& input, const UnaryPredicate& unary_predicate)
{
Range output{};
std::ranges::transform(
std::ranges::cbegin(input),
std::ranges::cend(input),
std::inserter(output, std::ranges::end(output)),
[&unary_predicate](auto&& element) { return recursive_copy_if(element, unary_predicate); }
);
return output;
}
// recursive_count implementation
template<std::ranges::input_range Range, typename T>
constexpr auto recursive_count(const Range& input, const T& target)
{
return std::count(std::ranges::cbegin(input), std::ranges::cend(input), target);
}
// transform_reduce version
template<std::ranges::input_range Range, typename T>
requires std::ranges::input_range<std::ranges::range_value_t<Range>>
constexpr auto recursive_count(const Range& input, const T& target)
{
return std::transform_reduce(std::ranges::cbegin(input), std::ranges::cend(input), std::size_t{}, std::plus<std::size_t>(), [target](auto&& element) {
return recursive_count(element, target);
});
}
// recursive_count implementation (with execution policy)
template<class ExPo, std::ranges::input_range Range, typename T>
requires (std::is_execution_policy_v<std::remove_cvref_t<ExPo>>)
constexpr auto recursive_count(ExPo execution_policy, const Range& input, const T& target)
{
return std::count(execution_policy, std::ranges::cbegin(input), std::ranges::cend(input), target);
}
template<class ExPo, std::ranges::input_range Range, typename T>
requires (std::is_execution_policy_v<std::remove_cvref_t<ExPo>>) && (std::ranges::input_range<std::ranges::range_value_t<Range>>)
constexpr auto recursive_count(ExPo execution_policy, const Range& input, const T& target)
{
return std::transform_reduce(execution_policy, std::ranges::cbegin(input), std::ranges::cend(input), std::size_t{}, std::plus<std::size_t>(), [execution_policy, target](auto&& element) {
return recursive_count(execution_policy, element, target);
});
}
// recursive_count_if implementation
template<class T, std::invocable<T> Pred>
constexpr std::size_t recursive_count_if(const T& input, const Pred& predicate)
{
return predicate(input) ? 1 : 0;
}
template<std::ranges::input_range Range, class Pred>
requires (!std::invocable<Pred, Range>)
constexpr auto recursive_count_if(const Range& input, const Pred& predicate)
{
return std::transform_reduce(std::ranges::cbegin(input), std::ranges::cend(input), std::size_t{}, std::plus<std::size_t>(), [predicate](auto&& element) {
return recursive_count_if(element, predicate);
});
}
// recursive_count_if implementation (with execution policy)
template<class ExPo, class T, std::invocable<T> Pred>
requires (std::is_execution_policy_v<std::remove_cvref_t<ExPo>>)
constexpr std::size_t recursive_count_if(ExPo execution_policy, const T& input, const Pred& predicate)
{
return predicate(input) ? 1 : 0;
}
template<class ExPo, std::ranges::input_range Range, class Pred>
requires ((std::is_execution_policy_v<std::remove_cvref_t<ExPo>>) && (!std::invocable<Pred, Range>))
constexpr auto recursive_count_if(ExPo execution_policy, const Range& input, const Pred& predicate)
{
return std::transform_reduce(execution_policy, std::ranges::cbegin(input), std::ranges::cend(input), std::size_t{}, std::plus<std::size_t>(), [predicate](auto&& element) {
return recursive_count_if(element, predicate);
});
}
// recursive_count_if implementation (the version with unwrap_level)
template<std::size_t unwrap_level, std::ranges::range T, class Pred>
auto recursive_count_if(const T& input, const Pred& predicate)
{
if constexpr (unwrap_level > 1)
{
return std::transform_reduce(std::ranges::cbegin(input), std::ranges::cend(input), std::size_t{}, std::plus<std::size_t>(), [predicate](auto&& element) {
return recursive_count_if<unwrap_level - 1>(element, predicate);
});
}
else
{
return std::count_if(std::ranges::cbegin(input), std::ranges::cend(input), predicate);
}
}
// recursive_function_applier implementation
template<std::size_t unwrap_level, class F, std::ranges::range Range, class... Args>
constexpr auto recursive_function_applier(const F& function, const Range& input, Args... args)
{
if constexpr (unwrap_level >= 1)
{
Range output{};
std::ranges::transform(
std::ranges::cbegin(input),
std::ranges::cend(input),
std::inserter(output, std::ranges::end(output)),
[&function, &args...](auto&& element) { return recursive_function_applier<unwrap_level - 1>(function, element, args...); }
);
return output;
}
else
{
Range output{};
function(
std::ranges::cbegin(input),
std::ranges::cend(input),
std::inserter(output, std::ranges::end(output)),
args...);
return output;
}
}
// recursive_print implementation
template<std::ranges::input_range Range>
constexpr auto recursive_print(const Range& input, const int level = 0)
{
auto output = input;
std::cout << std::string(level, ' ') << "Level " << level << ":" << std::endl;
std::ranges::transform(std::ranges::cbegin(input), std::ranges::cend(input), std::ranges::begin(output),
[level](auto&& x)
{
std::cout << std::string(level, ' ') << x << std::endl;
return x;
}
);
return output;
}
template<std::ranges::input_range Range> requires (std::ranges::input_range<std::ranges::range_value_t<Range>>)
constexpr auto recursive_print(const Range& input, const int level = 0)
{
auto output = input;
std::cout << std::string(level, ' ') << "Level " << level << ":" << std::endl;
std::ranges::transform(std::ranges::cbegin(input), std::ranges::cend(input), std::ranges::begin(output),
[level](auto&& element)
{
return recursive_print(element, level + 1);
}
);
return output;
}
// recursive_replace_copy_if implementation
template<std::ranges::range Range, std::invocable<std::ranges::range_value_t<Range>> UnaryPredicate, class T>
constexpr auto recursive_replace_copy_if(const Range& input, const UnaryPredicate& unary_predicate, const T& new_value)
{
Range output{};
std::ranges::replace_copy_if(
std::ranges::cbegin(input),
std::ranges::cend(input),
std::inserter(output, std::ranges::end(output)),
unary_predicate,
new_value);
return output;
}
template<std::ranges::input_range Range, class UnaryPredicate, class T>
requires (!std::invocable<UnaryPredicate, std::ranges::range_value_t<Range>>)
constexpr auto recursive_replace_copy_if(const Range& input, const UnaryPredicate& unary_predicate, const T& new_value)
{
Range output{};
std::ranges::transform(
std::ranges::cbegin(input),
std::ranges::cend(input),
std::inserter(output, std::ranges::end(output)),
[&unary_predicate, &new_value](auto&& element) { return recursive_replace_copy_if(element, unary_predicate, new_value); }
);
return output;
}
// recursive_size implementation
template<class T> requires (!std::ranges::range<T>)
constexpr auto recursive_size(const T& input)
{
return 1;
}
template<std::ranges::range Range> requires (!(std::ranges::input_range<std::ranges::range_value_t<Range>>))
constexpr auto recursive_size(const Range& input)
{
return std::ranges::size(input);
}
template<std::ranges::range Range> requires (std::ranges::input_range<std::ranges::range_value_t<Range>>)
constexpr auto recursive_size(const Range& input)
{
return std::transform_reduce(std::ranges::begin(input), std::end(input), std::size_t{}, std::plus<std::size_t>(), [](auto& element) {
return recursive_size(element);
});
}
// recursive_transform implementation
// recursive_invoke_result_t implementation
// from https://stackoverflow.com/a/65504127/6667035
template<typename, typename>
struct recursive_invoke_result { };
template<typename T, std::invocable<T> F>
struct recursive_invoke_result<F, T> { using type = std::invoke_result_t<F, T>; };
template<typename F, template<typename...> typename Container, typename... Ts>
requires (
!std::invocable<F, Container<Ts...>> &&
std::ranges::input_range<Container<Ts...>> &&
requires { typename recursive_invoke_result<F, std::ranges::range_value_t<Container<Ts...>>>::type; })
struct recursive_invoke_result<F, Container<Ts...>>
{
using type = Container<typename recursive_invoke_result<F, std::ranges::range_value_t<Container<Ts...>>>::type>;
};
template<typename F, typename T>
using recursive_invoke_result_t = typename recursive_invoke_result<F, T>::type;
template <std::ranges::range Range>
constexpr auto get_output_iterator(Range& output)
{
return std::inserter(output, std::ranges::end(output));
}
template <class T, std::invocable<T> F>
constexpr auto recursive_transform(const T& input, const F& f)
{
return std::invoke(f, input); // use std::invoke() instead, https://codereview.stackexchange.com/a/283364/231235
}
template <
std::ranges::input_range Range,
class F>
requires (!std::invocable<F, Range>)
constexpr auto recursive_transform(const Range& input, const F& f)
{
recursive_invoke_result_t<F, Range> output{};
std::ranges::transform(
std::ranges::cbegin(input),
std::ranges::cend(input),
std::inserter(output, std::ranges::end(output)),
[&f](auto&& element) { return recursive_transform(element, f); }
);
return output;
}
template<std::size_t dim, class T>
constexpr auto n_dim_vector_generator(T input, std::size_t times)
{
if constexpr (dim == 0)
{
return input;
}
else
{
auto element = n_dim_vector_generator<dim - 1>(input, times);
std::vector<decltype(element)> output(times, element);
return output;
}
}
template<std::size_t dim, std::size_t times, class T>
constexpr auto n_dim_array_generator(T input)
{
if constexpr (dim == 0)
{
return input;
}
else
{
auto element = n_dim_array_generator<dim - 1, times>(input);
std::array<decltype(element), times> output;
std::fill(std::begin(output), std::end(output), element);
return output;
}
}
template<std::size_t dim, class T>
constexpr auto n_dim_deque_generator(T input, std::size_t times)
{
if constexpr (dim == 0)
{
return input;
}
else
{
auto element = n_dim_deque_generator<dim - 1>(input, times);
std::deque<decltype(element)> output(times, element);
return output;
}
}
template<std::size_t dim, class T>
constexpr auto n_dim_list_generator(T input, std::size_t times)
{
if constexpr (dim == 0)
{
return input;
}
else
{
auto element = n_dim_list_generator<dim - 1>(input, times);
std::list<decltype(element)> output(times, element);
return output;
}
}
template<std::size_t dim, template<class...> class Container = std::vector, class T>
constexpr auto n_dim_container_generator(T input, std::size_t times)
{
if constexpr (dim == 0)
{
return input;
}
else
{
return Container(times, n_dim_container_generator<dim - 1, Container, T>(input, times));
}
}
void copy_test();
void replace_copy_if_test();
void remove_copy_test();
void remove_copy_if_test();
int main()
{
copy_test();
replace_copy_if_test();
remove_copy_test();
remove_copy_if_test();
return 0;
}
void copy_test()
{
// std::copy test cases
// std::vector<int>
std::vector<int> test_vector{ 5, 7, 4, 2, 8, 6, 1, 9, 0, 3 };
recursive_print(recursive_function_applier<0>(std::ranges::copy, test_vector));
// std::vector<std::vector<int>>
std::vector<decltype(test_vector)> test_vector2{ test_vector , test_vector , test_vector };
recursive_print(recursive_function_applier<1>(std::ranges::copy, test_vector2));
// std::vector<std::string>
std::vector<std::string> test_string_vector{ "0", "1", "2", "3", "4", "5", "6", "7", "8", "9", "10", "11", "12", "13", "14", "15", "16", "17", "18", "19", "20" };
recursive_print(
recursive_function_applier<0>(
std::ranges::copy, test_string_vector
)
);
// std::vector<std::vector<std::string>>
std::vector<decltype(test_string_vector)> test_string_vector2{ test_string_vector , test_string_vector , test_string_vector };
recursive_print(
recursive_function_applier<1>(
std::ranges::copy, test_string_vector2
)
);
// std::deque<int>
std::deque<int> test_deque;
test_deque.push_back(1);
test_deque.push_back(2);
test_deque.push_back(3);
test_deque.push_back(4);
test_deque.push_back(5);
test_deque.push_back(6);
recursive_print(recursive_function_applier<0>(std::ranges::copy, test_deque));
// std::deque<std::deque<int>>
std::deque<decltype(test_deque)> test_deque2;
test_deque2.push_back(test_deque);
test_deque2.push_back(test_deque);
test_deque2.push_back(test_deque);
recursive_print(recursive_function_applier<1>(std::ranges::copy, test_deque2));
// std::list<int>
std::list<int> test_list = { 1, 2, 3, 4, 5, 6 };
recursive_print(recursive_function_applier<0>(std::ranges::copy, test_list));
// std::list<std::list<int>>
std::list<std::list<int>> test_list2 = { test_list, test_list, test_list, test_list };
recursive_print(recursive_function_applier<1>(std::ranges::copy, test_list2));
recursive_print(
recursive_function_applier<5>(
std::ranges::copy,
n_dim_container_generator<5, std::list>(test_list, 3)
)
);
return;
}
void replace_copy_if_test()
{
// std::ranges::replace_copy_if test cases
// std::vector<int>
std::vector<int> test_vector{ 5, 7, 4, 2, 8, 6, 1, 9, 0, 3 };
recursive_print(recursive_function_applier<0>(std::ranges::replace_copy_if, test_vector, std::bind(std::less<int>(), std::placeholders::_1, 5), 55));
// std::vector<std::vector<int>>
std::vector<decltype(test_vector)> test_vector2{ test_vector , test_vector , test_vector };
recursive_print(recursive_function_applier<1>(std::ranges::replace_copy_if, test_vector2, std::bind(std::less<int>(), std::placeholders::_1, 5), 55));
// std::vector<std::string>
std::vector<std::string> test_string_vector{ "0", "1", "2", "3", "4", "5", "6", "7", "8", "9", "10", "11", "12", "13", "14", "15", "16", "17", "18", "19", "20" };
recursive_print(
recursive_function_applier<0>(
std::ranges::replace_copy_if, test_string_vector, [](std::string x) { return (x == "1"); }, "11"
)
);
// std::vector<std::vector<std::string>>
std::vector<decltype(test_string_vector)> test_string_vector2{ test_string_vector , test_string_vector , test_string_vector };
recursive_print(
recursive_function_applier<1>(
std::ranges::replace_copy_if, test_string_vector2, [](std::string x) { return (x == "1"); }, "11"
)
);
// std::deque<int>
std::deque<int> test_deque;
test_deque.push_back(1);
test_deque.push_back(2);
test_deque.push_back(3);
test_deque.push_back(4);
test_deque.push_back(5);
test_deque.push_back(6);
recursive_print(recursive_function_applier<0>(std::ranges::replace_copy_if, test_deque, [](int x) { return (x % 2) == 0; }, 0));
// std::deque<std::deque<int>>
std::deque<decltype(test_deque)> test_deque2;
test_deque2.push_back(test_deque);
test_deque2.push_back(test_deque);
test_deque2.push_back(test_deque);
recursive_print(recursive_function_applier<1>(std::ranges::replace_copy_if, test_deque2, [](int x) { return (x % 2) == 0; }, 0));
// std::list<int>
std::list<int> test_list = { 1, 2, 3, 4, 5, 6 };
recursive_print(recursive_function_applier<0>(std::ranges::replace_copy_if, test_list, [](int x) { return (x % 2) == 0; }, 0));
// std::list<std::list<int>>
std::list<std::list<int>> test_list2 = { test_list, test_list, test_list, test_list };
recursive_print(recursive_function_applier<1>(std::ranges::replace_copy_if, test_list2, [](int x) { return (x % 2) == 0; }, 0));
recursive_print(
recursive_function_applier<5>(
std::ranges::replace_copy_if,
n_dim_container_generator<5, std::list>(test_list, 3),
[](int x) { return (x % 2) == 0; },
0
)
);
return;
}
void remove_copy_test()
{
// std::remove_copy test cases
// std::vector<int>
std::vector<int> test_vector{ 5, 7, 4, 2, 8, 6, 1, 9, 0, 3 };
recursive_print(recursive_function_applier<0>(std::ranges::remove_copy, test_vector, 5));
// std::vector<std::vector<int>>
std::vector<decltype(test_vector)> test_vector2{ test_vector , test_vector , test_vector };
recursive_print(recursive_function_applier<1>(std::ranges::remove_copy, test_vector2, 5));
// std::vector<std::string>
std::vector<std::string> test_string_vector{ "0", "1", "2", "3", "4", "5", "6", "7", "8", "9", "10", "11", "12", "13", "14", "15", "16", "17", "18", "19", "20" };
recursive_print(
recursive_function_applier<0>(
std::ranges::remove_copy, test_string_vector, "0"
)
);
// std::vector<std::vector<std::string>>
std::vector<decltype(test_string_vector)> test_string_vector2{ test_string_vector , test_string_vector , test_string_vector };
recursive_print(
recursive_function_applier<1>(
std::ranges::remove_copy, test_string_vector2, "0"
)
);
// std::deque<int>
std::deque<int> test_deque;
test_deque.push_back(1);
test_deque.push_back(2);
test_deque.push_back(3);
test_deque.push_back(4);
test_deque.push_back(5);
test_deque.push_back(6);
recursive_print(recursive_function_applier<0>(std::ranges::remove_copy, test_deque, 1));
// std::deque<std::deque<int>>
std::deque<decltype(test_deque)> test_deque2;
test_deque2.push_back(test_deque);
test_deque2.push_back(test_deque);
test_deque2.push_back(test_deque);
recursive_print(recursive_function_applier<1>(std::ranges::remove_copy, test_deque2, 1));
// std::list<int>
std::list<int> test_list = { 1, 2, 3, 4, 5, 6 };
recursive_print(recursive_function_applier<0>(std::ranges::remove_copy, test_list, 1));
// std::list<std::list<int>>
std::list<std::list<int>> test_list2 = { test_list, test_list, test_list, test_list };
recursive_print(recursive_function_applier<1>(std::ranges::remove_copy, test_list2, 1));
recursive_print(
recursive_function_applier<5>(
std::ranges::remove_copy,
n_dim_container_generator<5, std::list>(test_list, 3),
1
)
);
return;
}
void remove_copy_if_test()
{
// std::remove_copy_if test cases
// std::vector<int>
std::vector<int> test_vector{ 5, 7, 4, 2, 8, 6, 1, 9, 0, 3 };
recursive_print(recursive_function_applier<0>(std::ranges::remove_copy_if, test_vector, std::bind(std::less<int>(), std::placeholders::_1, 5)));
// std::vector<std::vector<int>>
std::vector<decltype(test_vector)> test_vector2{ test_vector , test_vector , test_vector };
recursive_print(recursive_function_applier<1>(std::ranges::remove_copy_if, test_vector2, std::bind(std::less<int>(), std::placeholders::_1, 5)));
// std::vector<std::string>
std::vector<std::string> test_string_vector{ "0", "1", "2", "3", "4", "5", "6", "7", "8", "9", "10", "11", "12", "13", "14", "15", "16", "17", "18", "19", "20" };
recursive_print(
recursive_function_applier<0>(
std::ranges::remove_copy_if, test_string_vector, [](std::string x) { return (x == "0"); }
)
);
// std::vector<std::vector<std::string>>
std::vector<decltype(test_string_vector)> test_string_vector2{ test_string_vector , test_string_vector , test_string_vector };
recursive_print(
recursive_function_applier<1>(
std::ranges::remove_copy_if, test_string_vector2, [](std::string x) { return (x == "0"); }
)
);
// std::deque<int>
std::deque<int> test_deque;
test_deque.push_back(1);
test_deque.push_back(2);
test_deque.push_back(3);
test_deque.push_back(4);
test_deque.push_back(5);
test_deque.push_back(6);
recursive_print(recursive_function_applier<0>(std::ranges::remove_copy_if, test_deque, std::bind(std::less<int>(), std::placeholders::_1, 5)));
// std::deque<std::deque<int>>
std::deque<decltype(test_deque)> test_deque2;
test_deque2.push_back(test_deque);
test_deque2.push_back(test_deque);
test_deque2.push_back(test_deque);
recursive_print(recursive_function_applier<1>(std::ranges::remove_copy_if, test_deque2, std::bind(std::less<int>(), std::placeholders::_1, 5)));
// std::list<int>
std::list<int> test_list = { 1, 2, 3, 4, 5, 6 };
recursive_print(recursive_function_applier<0>(std::ranges::remove_copy_if, test_list, std::bind(std::less<int>(), std::placeholders::_1, 5)));
// std::list<std::list<int>>
std::list<std::list<int>> test_list2 = { test_list, test_list, test_list, test_list };
recursive_print(recursive_function_applier<1>(std::ranges::remove_copy_if, test_list2, std::bind(std::less<int>(), std::placeholders::_1, 5)));
recursive_print(
recursive_function_applier<5>(
std::ranges::remove_copy_if,
n_dim_container_generator<5, std::list>(test_list, 3),
std::bind(std::less<int>(), std::placeholders::_1, 5)
)
);
return;
}
A Godbolt link is here.
All suggestions are welcome.
The summary information:
Which question is it a follow-up to?
A recursive_replace_if Template Function Implementation in C++,
A recursive_copy_if Template Function Implementation in C++,
A recursive_count_if Function with Unwrap Level for Various Type Arbitrary Nested Iterable Implementation in C++ and
A recursive_count_if Function For Various Type Arbitrary Nested Iterable Implementation in C++
What changes have been made in the code since the last question?
I am trying to implement a recursive_function_applier template function as a function applier for applying various algorithms on nested containers.
Why a new review is being asked for?
I am not sure whether the design of the recursive_function_applier template function is a good idea. In any case, this structure seems to work fine in the std::ranges::copy, std::ranges::replace_copy_if, std::ranges::remove_copy and std::ranges::remove_copy_if cases. If there is any possible improvement, please let me know.
Answer: What does it do?
I am trying to generalize the operation of applying a function on elements in nested containers.
If you say that, I would think that your recursive_transform() would already cover it. It seems that this "function applier" is actually more a "container algorithm applier"; it recurses a nested container to a given level, expects that every element at that point is also a container, and then calls the given function, expecting it to have a similar interface as STL algorithms such as std::copy(): they get two input iterators and an output iterator as arguments.
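To make that interface concrete, here is a made-up caller-supplied "algorithm" with the same (first, last, d_first) shape as std::copy(); any such callable (the name copy_doubled is my invention, not part of the reviewed code) would fit the unwrap_level == 0 branch of recursive_function_applier():

```cpp
#include <cassert>
#include <iterator>
#include <vector>

// A toy algorithm with the std::copy()-style interface the applier expects:
// two input iterators and an output iterator. It doubles while copying.
struct copy_doubled_fn {
    template<typename InIt, typename OutIt>
    OutIt operator()(InIt first, InIt last, OutIt d_first) const {
        for (; first != last; ++first, ++d_first)
            *d_first = *first * 2;
        return d_first;
    }
};
inline constexpr copy_doubled_fn copy_doubled{};
```

Plugged into the applier, something like recursive_function_applier<1>(copy_doubled, test_vector2) should then double every element of the nested container.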
An alternative approach
I think the only reason for this function is to deal with the fact that STL algorithms never return a container. You could say that they do not work in a functional programming style. If they did, you could easily compose your recursive_transform() with a copy() function that does return a container. So maybe you should make those instead? Consider:
struct SameAsInput;
template<typename Output = SameAsInput, typename Input>
auto return_copy(const Input& input) {
std::conditional_t<
std::is_same_v<Output, SameAsInput>,
Input,
Output
> output;
std::ranges::copy(input, std::inserter(output, std::ranges::end(output)));
return output;
}
Then instead of:
recursive_function_applier<0>(std::ranges::copy, test_vector);
You could write:
recursive_transform<0>(test_vector, return_copy);
You can make return_copy_n(), return_remove_copy(), return_remove_copy_if(), and so on.
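For instance, a return_remove_copy_if() following the same pattern might look like this (a sketch only; for brevity it uses the classic iterator-pair std::remove_copy_if and returns a container of the same type as the input, without the SameAsInput machinery of return_copy() above):

```cpp
#include <algorithm>
#include <iterator>
#include <vector>

// Sketch of a value-returning counterpart to std::remove_copy_if(): elements
// for which the predicate is true are dropped, the rest land in a fresh
// container that is returned by value.
template<typename Input, typename UnaryPredicate>
auto return_remove_copy_if(const Input& input, const UnaryPredicate& unary_predicate) {
    Input output{};
    std::remove_copy_if(std::begin(input), std::end(input),
                        std::inserter(output, std::end(output)),
                        unary_predicate);
    return output;
}
```

A small lambda capturing the predicate could then be handed to recursive_transform() the same way as return_copy().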
Restricting the container type
You are using the concept std::ranges::range to restrict the type of input. However, you want to create a container that has the same type as the input, and not every range is a container. For example, views are ranges too. Unfortunately, there is no std::container concept, but you can make one yourself.
How do I statistically evaluate an ML model?
Question: I have a model that predicts sentiment of tweets. Are there any standard procedures to evaluate such a model in terms of its output?
I could sample the output, work out by hand which are correctly predicted, and count true and false positives and negatives, but is there a better way?
I know about test and training sets and metrics like AUROC and AUPRC which evaluate the model based on known data, but I am interested in the step afterwards when we don't know the actual values we are predicting. I could use the same metrics, I suppose, but everything would need to be done by hand.
Answer: There are a lot of ways to evaluate the performance of an ML model. You mentioned AUROC and AUPRC. Generally you start with the confusion matrix and derive metrics such as sensitivity, accuracy, recall, precision, etc. You can see a good outline of them here.
It seems what you are asking for is a shortcut to determining how good your sentiment classification model is, but there aren't any without labeled test data. You either do this by hand or you find a test set in the world, preferably something that is well known and documented and also fits your objectives. I recommend you read Neil Slater's answer at https://datascience.stackexchange.com/questions/12226/how-do-i-assess-which-sentiment-classifier-is-best-for-my-project/12228. He gives some good advice on the subjectivity of sentiment analysis classification and points out a labeled data set of Tweets which you might be able to use to test your classifier.
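As a concrete illustration of the metrics derived from a confusion matrix (the counts below are made up; in practice they would come from a hand-labelled sample of predictions):

```cpp
#include <cassert>

// Toy confusion-matrix bookkeeping: tp/fp/fn/tn are counts taken from a
// hand-labelled sample of the classifier's predictions.
struct ConfusionMatrix {
    double tp, fp, fn, tn;
    double accuracy()  const { return (tp + tn) / (tp + fp + fn + tn); }
    double precision() const { return tp / (tp + fp); }
    double recall()    const { return tp / (tp + fn); }  // a.k.a. sensitivity
    double f1() const {
        const double p = precision(), r = recall();
        return 2.0 * p * r / (p + r);  // harmonic mean of precision and recall
    }
};
```

With, say, ConfusionMatrix m{80, 20, 10, 90}, accuracy is 170/200 = 0.85 and precision is 80/100 = 0.8; whichever metric matters depends on the relative cost of false positives versus false negatives in your application.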
I also found this Kaggle competition which has a test set that might be of help to you: Angry Tweets
Auto registering class registry | Question: I built this auto registering registry to look into a package and import all modules and register any classes that are a subclass of a base class. I use it as a registry for a model's choices in Django and to be able to do a getattr on the class to get properties in a service that processes some data based on choices selected and dynamically finds the right class.
The reasoning behind this methodology is that we will have an unknown number of stat classes created in the future. I wanted to avoid forgetting to register them and to hide the implementation details of the choices on the model (the model is absolutely massive). So I wanted a folder that just takes care of that for us.
I am trying to determine whether this breaks any best practices, whether it is a good, safe, and efficient design, and whether it is doing anything downright stupid (maybe the dynamic import).
Base class for using it other places if need be
class AutoRegClassRegistry(Singleton):
PACKAGE = None
BASE_CLS = None
def __init__(self, force_import=True, *args, **kwargs):
super(AutoRegClassRegistry, self).__init__(*args, **kwargs)
self.force_import = force_import
if self.PACKAGE:
package_dir = os.path.dirname(self.PACKAGE.__file__)
for module in [name for _, name, _ in pkgutil.iter_modules([package_dir])]:
module = "{}.{}".format(self.PACKAGE.__name__, module)
if self.force_import:
_ = __import__(module)
for name, obj in inspect.getmembers(sys.modules[module], inspect.isclass):
if name == self.BASE_CLS.__name__:
continue
if issubclass(obj, self.BASE_CLS):
self.register(obj)
def register(self):
raise NotImplementedError("You must inherit from this class and override the register func")
Project structure
- init.py (this ClientStats class in here)
- base.py
- autoregister
- init.py
- hourly.py
- daily.py
- weekly.py
- monthly.py
Implementation
class ClientStats(AutoRegClassRegistry):
"""
A registry for client stats that also behaves as a tuple when used as a choices field. You can also do a getattr on
this class to get the class, description or short of a stat.
"""
PACKAGE = auto_register
BASE_CLS = BaseStat
def __init__(self, *args, **kwargs):
self.__choices = []
super(ClientStats, self).__init__(*args, **kwargs)
def __getitem__(self, key):
return self.__choices[key]
def __iter__(self):
return iter(self.__choices)
def register(self, cls):
cls_short = getattr(cls, "SHORT", None)
if not cls_short:
raise AttributeError("CLS {} doesn't have attribute 'SHORT' which is required to register to ClientStats")
cls_desc = getattr(cls, "DESCRIPTION", None)
if not cls_desc:
raise AttributeError(
"CLS {} doesn't have attribute 'DESCRIPTION' which is required to register to ClientStats")
if hasattr(self, cls_short):
raise AttributeError("ClientStats already has a {} registered".format(cls_short))
setattr(self, cls_short, cls_short)
setattr(self, "{}_DESC".format(cls_short), cls_desc)
setattr(self, "{}_CLS".format(cls_short), cls)
self.__choices.append((cls_short, getattr(self, "{}_DESC".format(cls_short))))
CLIENT_STATS = ClientStats()
Example class that gets registered
class DailyStatExample(BaseStat):
SHORT = "DSE"
DESCRIPTION = "Example stat class"
Sample usage of the ClientStats class:
models.py
class ExampleModel(models.Model):
client_stats = MultiSelectSlugField(choices=CLIENT_STATS)
script.py
stats = ExampleModel.objects.get(pk=1).client_stats
for stat in stats:
klass = getattr(CLIENT_STATS, "{}_CLS".format(stat))
klass_inst = klass()
klass_inst.calculate(arg1, arg2)
Answer: Why
for module in [name for _, name, _ in pkgutil.iter_modules([package_dir])]:
and not
for _, module, _ in pkgutil.iter_modules([package_dir]):
? If it's about iterating eagerly, using list(...) would work better than a list comprehension.
_ = __import__(module)
is also weird; why not just __import__(module)?
It makes sense for AutoRegClassRegistry to be an ABC, but by no means is this a requirement. It's worth noting that
def register(self):
raise NotImplementedError("You must inherit from this class and override the register func")
has the wrong number of arguments, so will give an unhelpful error instead.
Don't use name mangling for self.__choices = [] unless there's good reason; _choices works just as well.
cls_short = getattr(cls, "SHORT", None)
if not cls_short:
raise AttributeError("CLS {} doesn't have attribute 'SHORT' which is required to register to ClientStats")
seems like it breaks EAFP;
cls_short = cls.SHORT
would work just as well and
try:
cls_short = cls.SHORT
except AttributeError:
raise AttributeError("CLS {} doesn't have attribute 'SHORT' which is required to register to ClientStats")
would be more descriptive of intent. The same happens immediately after.
The whole
if hasattr(self, cls_short):
raise AttributeError("ClientStats already has a {} registered".format(cls_short))
setattr(self, cls_short, cls_short)
setattr(self, "{}_DESC".format(cls_short), cls_desc)
setattr(self, "{}_CLS".format(cls_short), cls)
looks really backwards, though; why not just put this in a simple data structure? Why does it have to be through messy attribute accesses?
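For illustration, a dict-based register could look like the sketch below. This is my own illustration, not code from the question or the answer: the name SimpleClientStats and the methods choices and stat_class are hypothetical, and it reuses the SHORT/DESCRIPTION attributes from the question.

```python
class SimpleClientStats:
    """Dict-backed registry instead of dynamic setattr calls."""
    def __init__(self):
        self._stats = {}  # SHORT -> (description, class)

    def register(self, cls):
        short = cls.SHORT
        if short in self._stats:
            raise AttributeError("ClientStats already has a {} registered".format(short))
        self._stats[short] = (cls.DESCRIPTION, cls)

    @property
    def choices(self):
        # Same (short, description) pairs a Django choices field expects.
        return [(short, desc) for short, (desc, _) in self._stats.items()]

    def stat_class(self, short):
        return self._stats[short][1]

class DailyStatExample:
    SHORT = "DSE"
    DESCRIPTION = "Example stat class"

registry = SimpleClientStats()
registry.register(DailyStatExample)
print(registry.choices)  # [('DSE', 'Example stat class')]
```

One dict holds everything, so there is no need for hasattr checks or the _DESC/_CLS attribute naming scheme.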
This whole thing feels really dynamic, and dynamism should be a last resort. But I've never touched Django, so I can't really say whether what you're doing is justified or not.
As I said, I've never touched Django so I can't really say whether this could work, but I'd be tempted to do something more like
DailyStatExample = ClientStats.new("DSE", "Example stat class")
or
@ClientStats.register("DSE", "Example stat class")
class DailyStatExample:
... | {
"domain": "codereview.stackexchange",
"id": 19099,
"tags": "python, django, dynamic-programming"
} |
Finding if a given number is in a given range based on a flag | Question: Question Description:
Given a number n, return True if n is in the range 1..10, inclusive. Unless outsideMode is True, in which case return True if the number is less or equal to 1, or greater or equal to 10.
in1to10(5, False) → **True**
in1to10(11, False) → **False**
in1to10(11, True) → **True**
The link to the problem is here.
Below is my first attempt. Please feel free to review it based on best practices, style and any other efficient solution.
def in1to10(n, outside_mode):
if(n>1 and n<10):
return not outside_mode
else:
return outside_mode or n is 10 or n is 1
Answer:
Please feel free to review it based on best practices, style and any other efficient solution.
@mjolka already provided a more natural alternative implementation I completely agree with: following the logic precisely as in the problem statement is natural and easy to understand. I think the reason why it's easier to understand than yours has to do with the fact that the ranges being evaluated in the 2 distinct cases depending on the value of outside_mode have an overlap on the values 1 and 10. If the two ranges were complements of each other, things would be simpler.
As for style:
if(n>1 and n<10):
return not outside_mode
else:
return outside_mode or n is 10 or n is 1
The parentheses in the if condition are redundant.
And PEP8 recommends to put spaces around operators, so this would be better:
if n > 1 and n < 10:
Even better is to use Python's kick-ass ... < ... < ... operator:
if 1 < n < 10:
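Putting those pieces together, the straightforward two-case implementation alluded to at the top might look like this — a sketch of the alternative, not necessarily @mjolka's exact code:

```python
def in1to10(n, outside_mode):
    # Literal translation of the two cases in the problem statement.
    if outside_mode:
        return n <= 1 or n >= 10
    return 1 <= n <= 10

print(in1to10(5, False))   # True
print(in1to10(11, False))  # False
print(in1to10(11, True))   # True
```

Because the two ranges overlap on 1 and 10, handling each mode in its own branch avoids the subtle boundary logic of the original.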
Be careful with using is. In this example it works as expected, but if you try to use for non-primitive types, it might not give what you expect:
>>> [1, 2] is [1, 2]
False
>>> (1, 2) is (1, 2)
False
Unless there is a specific reason to use is, I think you should use ==:
return outside_mode or n == 10 or n == 1 | {
"domain": "codereview.stackexchange",
"id": 10141,
"tags": "python, beginner, programming-challenge"
} |
Brainfuck interpreter in Clojure | Question: I'm currently learning Clojure to try to get a good understanding of functional programming, so I wrote this brainfuck interpreter that takes a filename on the command line. I tried to make it as correct as I could, but I'm sure there are many things that would make more sense that I didn't think about, or bugs (although it works with the examples on Wikipedia). Please criticize!
(ns brainfuck.core (:gen-class))
;; Input functions
(def valid-chars '(\< \> \[ \] \. \, \+ \-))
(defn valid-char?
"Tests whether a character is valid brainfuck. Works with one or
more characters at a time."
([c] (some #{c} valid-chars))
([c & others] (every? valid-char? (conj others c))))
(defn filter-invalid
"Removes characters that aren't valid brainfuck."
[code]
(filter valid-char? code))
(defn get-code
"Returns the sanitized content of file."
[file]
(filter-invalid (slurp file)))
(defn get-input
"Returns one number entered by the user. If the user enters a
character, it is converted to ASCII. If the user enters a string,
this function returns 0."
[]
(let [input (read-line)]
(if (zero? (count input))
0
(int (first input)))))
;; Helper functions
(defn make-context
"Creates the base context."
[code-src]
{:code code-src, :ip 0, :data (vec (repeat 30000 0)), :data-pointer 0})
(defn current-data
"Returns the current data cell pointed to by :data-pointer."
[context]
(nth (context :data) (context :data-pointer)))
(defn current-instruction
"Returns the current instruction."
[context]
(nth (context :code) (context :ip)))
(defn inc-ip
"Increments the instruction pointer."
[context]
(update-in context [:ip] inc))
(defn dec-ip
"Decrements the instruction pointer."
[context]
(update-in context [:ip] dec))
;; Functions to interpret instructions
(defn jump-to-matching-rb
"Sets instruction pointer to the matching ].
Should only be called with one argument, the other is for recursion."
([context] (jump-to-matching-rb (inc-ip context) 1))
([context level]
(if (= level 0)
(dec-ip context) ;; we've found the matching bracket
(cond
(= (nth (context :code) (context :ip)) \[)
(jump-to-matching-rb (inc-ip context) (inc level))
(= (nth (context :code) (context :ip)) \])
(jump-to-matching-rb (inc-ip context) (dec level))
:else
(jump-to-matching-rb (inc-ip context) level)))))
(defn jump-to-matching-lb
"Sets the instruction pointer to the matching [."
([context] (jump-to-matching-lb (dec-ip context) 1))
([context level]
(if (= level 0)
context ;; we've found the matching bracket
(cond
(= (nth (context :code) (context :ip)) \])
(jump-to-matching-lb (dec-ip context) (inc level))
(= (nth (context :code) (context :ip)) \[)
(jump-to-matching-lb (dec-ip context) (dec level))
:else
(jump-to-matching-lb (dec-ip context) level)))))
(defn interpret-left-bracket
"Interprets the [ brainfuck instruction."
[context]
(if (= (current-data context) 0)
(jump-to-matching-rb context)
context))
(defn interpret-right-bracket
"Interprets the ] brainfuck instruction."
[context]
(jump-to-matching-lb context))
(defn interpret-current
"Interprets the current instruction and return the modified context.
For each instruction, the corresponding alteration to the context
is made and the modified context is returned."
[context]
(let [c (current-instruction context)]
(case c
\> (update-in context [:data-pointer] inc)
\< (update-in context [:data-pointer] dec)
\+ (update-in context [:data (context :data-pointer)] inc)
\- (update-in context [:data (context :data-pointer)] dec)
\. (do (print (char (nth (context :data) (context :data-pointer)))) context)
\, (assoc-in context [:data (context :data-pointer)] (get-input))
\[ (interpret-left-bracket context)
\] (interpret-right-bracket context))))
(defn interpret
"Interprets the brainfuck program."
[context]
(loop [ctx context]
(if (< (ctx :ip) (count (ctx :code)))
(recur (inc-ip (interpret-current ctx))))))
(defn -main
"Brainfuck interpreter. Run it with a filename as argument."
[file]
(interpret (make-context (get-code file))))
Answer: I'm not sure why you need valid-char? to accept multiple chars. I'd write
(def valid-char? #{\< \> \[ \] \. \, \+ \-})
And if you really need it:
(defn valid-chars? [& chars]
(every? valid-char? chars))
It's more idiomatic to put the keyword first in a map lookup:
(defn current-instruction [context]
(nth (:code context) (:ip context)))
Also, it looks like your jump-to-matching-* functions should be calling current-instruction rather than doing this same calculation inline.
In general, using commas as whitespace (see make-context) is not considered good style. It distracts from readability. Normally you'd split each key-value pair onto a new line for clarity.
I don't actually know how [ and ] work, so this might not be valid, but at first glance I'd want to separate out the find-matching-bracket from the jump-to. I wonder if the find code can be written in a way to be direction-agnostic so that you only have to write it once? Maybe not.
The logic in interpret-current is really simple, but I'd consider moving each command to a function to make the dispatch really clear. An (arguably) more elegant approach would be a multimethod:
(defmulti interpret-context current-instruction)
(defmethod interpret-context \>
[context]
(update-in context [:data-pointer] inc))
(defmethod interpret-context \<
[context]
(update-in context [:data-pointer] dec))
;; etc... | {
"domain": "codereview.stackexchange",
"id": 14640,
"tags": "clojure, interpreter, brainfuck"
} |
In the Davisson-Germer experiment, where is the coherence? | Question: Experiment of diffraction using light, we use laser beam because we need coherence. In Davisson-Germer experiment, there is no coherence but why the electrons diffract in that way?
Answer: For the diffraction of light no coherent laser light is necessary, as is known from the diffraction of ordinary light in diffraction gratings. The observed interference effects occur in single photons with themselves. The same happens in the diffraction of x-rays in crystal lattices. The diffraction of electrons in crystals, as demonstrated in the Davisson-Germer experiment, is due to the de Broglie wave properties of electrons. It is completely analogous to the diffraction of x-rays in crystal lattices. No coherence of different electron waves is needed. The diffraction effects can be considered to be due to the interference of the de Broglie wave of single electrons with themselves. | {
"domain": "physics.stackexchange",
"id": 49025,
"tags": "quantum-mechanics"
} |
List of qubit locations with cirq | Question: As far I understand, qubits in cirq are labelled by their positions on chip. For example
print( cirq.google.Foxtail.qubits )
yields
frozenset({GridQubit(0, 1), GridQubit(1, 9), GridQubit(0, 2), ...
I would like to get a simpler version of the above, namely a simple array of tuples for the positions of all qubits
[ (0,1), (0,2), (0,3), ..., (1,1), (1,2), (1,3), ... ]
What is the easiest way to obtain this for a given known device in cirq?
Answer: GridQubit has comparison methods defined, so sorted will give you a list of the qubits in row-major order:
>>> sorted(cirq.google.Foxtail.qubits)
[GridQubit(0, 0), GridQubit(0, 1), [...] GridQubit(1, 9), GridQubit(1, 10)]
Once you have that, you're one list comprehension away:
>>> [(q.row, q.col) for q in sorted(cirq.google.Foxtail.qubits)]
[(0, 0), (0, 1), [...] (1, 9), (1, 10)]
Because tuples also have a default ordering, it doesn't matter whether you sort before or after the conversion:
>>> sorted((q.row, q.col) for q in cirq.google.Foxtail.qubits)
[(0, 0), (0, 1), [...] (1, 9), (1, 10)] | {
"domain": "quantumcomputing.stackexchange",
"id": 300,
"tags": "programming, cirq"
} |
Why is the Lucas test not recommended to differentiate higher alcohols? | Question: I read about the Lucas test for alcohols in which a cloudy precipitate is produced with a tertiary or secondary alcohol. However, on a side note it was mentioned that this test is useful only for compounds having less than six carbon atoms. Why is this so?
Answer: The problem is that the alcohols themselves might be insoluble. Ethanol is soluble, yes, but as the alkyl chain length increases, the hydrophobic nature due to the chain starts dominating over whatever hydrophilicity the $\ce{-OH}$ group had been providing for lower homologues.
From source [1]:
.. any alcohol that is insoluble in the reagent might appear to be giving a positive 3° test.
The paper goes on to properly quantify several things:
Based on the results of this study, using 1 drop of alcohol added to 10 drops of reagent, it can be said that, at the very least, all saturated acyclic monofunctional alcohols having six or fewer carbons are soluble in the Lucas reagent.
It is clear that the relative amounts of the sample and the reagent become important in such borderline cases (~6 carbon long chains). If too much alcohol is added, it may cause an illusion of a positive Lucas test.
From source [1]: (emphasis mine)
That statement may not be true when 3 or 4 drops are used
per 10 drops of reagent. In fact, sometimes when 10 drops of
reagent are added to 3 or 4 drops of some of the alcohols, an
interesting event occurs after a delay of a couple of minutes
or so. Apparently, some undissolved alcohol that has been
clinging to the bottom of the test tube suddenly breaks loose
and, by streaming to the surface of the very dense Lucas
reagent, gives a momentary illusion of a positive test. The
real cloudiness, alkyl chloride formation, appears later—all
the more reason to use a high reagent/alcohol ratio.
References:
[1] "A study of the Lucas test"
R. A. Kjonaas and B. A. Riedford
J. Chem. Ed. 1991 68 (8), 704
DOI: 10.1021/ed068p704 | {
"domain": "chemistry.stackexchange",
"id": 12446,
"tags": "organic-chemistry, alcohols"
} |
Inverse discrete Fourier transform | Question:
If anyone can help solve this exercise I'll be grateful. It's urgent.
(I've added my answer, but I think it's wrong)
Answer: No it's not correct.
Derive it like this :
1-) $2N$-point DFT of $x[n]$ is: $X_{2N}[k] = \sum_{n=0}^{2N-1} x[n] e^{-j \frac{2\pi}{2N} k n }$
2-) Let $Y[k] = X_{2N}[2k+1]$ be the odd-indexed samples of $X_{2N}[k]$
3-) We are looking for $y[n] = \text{N-point IDFT}\{ Y[k]\}$.
4-) Elaborate on step-2 and step-1 to see that $Y[k] = \text{N-point DFT}\{ x[n] e^{-j \frac{\pi}{N}n}\}$
5-) Then from steps 3 and 4 we get :
$$y[n] = \text{N-point IDFT}\{ \text{N-point DFT}\{ x[n] e^{-j \frac{\pi}{N}n}\} \} $$
$$ y[n] = x[n] e^{-j \frac{\pi}{N}n} $$ | {
"domain": "dsp.stackexchange",
"id": 7976,
"tags": "dft"
} |
What are flags in the setIO service of the UR messages | Question:
Hi, this was a general question from me because I have been working with a UR10 and I can set the IOs to a value. I cannot set the mode, however (current or voltage controlled).
i was just wondering what the flags on a pin do. The ur_msgs/SetIO documentation shows that there is a function that sets these flags.
Are flags the parameters that set the output of the analog pins to current or voltage control? I also tried this already with the Universal robot Driver with the function
FUN_SET_FLAG
but when i use this function, i get the message that it is not implemented yet in this driver.
Thank you in advance
Originally posted by stefvanlierop on ROS Answers with karma: 37 on 2020-02-12
Post score: 0
Original comments
Comment by gvdhoorn on 2020-02-12:\
I also tried this already with the Universal robot Driver but when i use this function, i get the message that it is not implemented yet in this driver
I'm guessing you are referring to FUN_SET_TOOL_VOLTAGE here. Would be good to mention that.
Answer:
Are flags the parameters that set the output of the analog pins to current or voltage control?
No.
Flags are (from the The URScript Programming Language, version 3.12):
Flags behave like internal digital outputs. They keep information between program runs.
They are essentially a set of booleans in memory of the controller, not connected to any physical output or input. They can be used to store simple binary information.
Often used as mutexes or in wait conditions (ie: one thread waits on a flag to become True before continuing. Another thread sets the flag to True to signal the other thread it may continue).
Originally posted by gvdhoorn with karma: 86574 on 2020-02-12
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by gvdhoorn on 2020-02-12:
This is just functionality offered by Universal Robots on their controllers btw. Nothing ROS-specific about this.
The SetIO service you mention is just a ROS interface to that functionality.
Comment by gvdhoorn on 2020-02-12:
Btw: you can configure the tool voltage and other setting using snippets of URScript (see the scripting manual).
The ur_hardware_interface node in the ur_robot_driver package exposes a urscript topic to which you can send snippets of URScript.
You could use that to configure the settings you are referring to in your question.
Comment by gvdhoorn on 2020-02-12:
You're right. I've updated my comment.
Comment by stefvanlierop on 2020-02-12:
I do not understand what you mean with scripting manual, is it this page
Comment by gvdhoorn on 2020-02-12:
No.
It's the standard URScript manual as provided by Universal Robots on their support website.
This one for the CB series controller (not e-Series), system software version 3.12.
Comment by stefvanlierop on 2020-02-17:
With the URScript topic, you mean the script_command Topic of the ur_hardware_interface node
Comment by gvdhoorn on 2020-02-17:
Yes, see this (or search for script_command in the ROS_INTERFACE docs).
Comment by stefvanlierop on 2020-02-17:
i send this to the topic ur_hardware_interface/script_command : 'sec myProgram(): set_analog_outputdomain(1,1) end' .
but it doesnt do anything. when i send 'set_analog_outputdomain(1,1)' it does set the pin domain correctly but then it interrupts the driver ur_cap that is running (as is mentioned in the documentation you send in the previous comment) my polyscope version is 3.7.1.
Is this not up to date enough? Should i start another question or update this one?
Comment by gvdhoorn on 2020-02-17:
Seeing as we're now no longer answering general usage questions, but a potential problem with the implementation of the script_command topic provided by ur_robot_driver, I would suggest to post an issue on the issue tracker.
If/when you do, please:
post a comment here with a link to the issue you posted (so we keep things connected)
include example code with your issue which shows the problem you are running into
If what you include in your comment is a verbatim copy of what you published to script_command, then I believe you need to make sure you include newlines where appropriate. But again: please post an issue. | {
"domain": "robotics.stackexchange",
"id": 34417,
"tags": "ros-kinetic"
} |
Why does less force correspond to less work done? | Question: I'm confused with the definition of work in physics. I know that it's defined as a product of force, displacement and a cosine of angle between them: $W = F s \cos(\alpha)$. But that means that the work of moving an object from one place to another depends on the absolute value of force we apply to it provided the displacement and the angle stay the same. But as I understood, the work equals to the total energy given away to move the object. But with weaker force and the same distance we will move the object to the finish point just with more time and the energy intuitively should stay the same. So why is that the case that weaker force corresponds to less work done? Thank you.
Answer: No, the energy wouldn't be the same if you move an object the same distance with a weaker force. Let's see this explicitly. The energy that is given to the object during this process will be reflected in what property of the object? It will be reflected in its speed. So, if the speed of the object at the end of the process is smaller for the weaker force then we can explicitly see that the process carried out with a weaker force will, in fact, be pouring less energy into the object. So, let's see what is the speed $v$ at the end of this process if it is carried out under the influence of a force $F$. For simplicity, I will assume that the force is in the direction of the displacement.
Assuming a constant force, the acceleration of the object will be $F/m$. Now, since the object travels a distance of $s$ starting from rest, we can write $s=\frac{1}{2}\frac{F}{m}t^2$ where $t$ is the time it takes for the process to complete. As you can see, you are right that it would take a larger $t$ if we choose a smaller $F$ to cover the same distance $s$. But, now, let's compute the speed that the object will have at the end of this process. For uniformly accelerated motion, we can write $v=\frac{F}{m}t$. Expressing this $t$ in terms of $s$ from our previous formula, we can write $$v=\frac{F}{m}\sqrt\frac{2ms}{F}=\sqrt{\frac{2Fs}{m}}$$
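A quick numerical sanity check of that formula (my own sketch, with arbitrary values for $m$, $F$ and $s$): integrate the motion in small time steps and compare the final speed with $\sqrt{2Fs/m}$.

```python
import math

m, F, s = 2.0, 5.0, 3.0    # arbitrary mass, force and distance
dt = 1e-5
x = v = 0.0

while x < s:               # push with constant force F over distance s
    v += (F / m) * dt      # constant acceleration a = F/m
    x += v * dt

print(abs(v - math.sqrt(2 * F * s / m)) < 1e-3)  # True
```

Halving F in this script halves the kinetic energy imparted over the same distance, even though the push lasts longer.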
So, indeed, the weaker the force, the smaller the final speed will be. | {
"domain": "physics.stackexchange",
"id": 67411,
"tags": "newtonian-mechanics, energy, work"
} |
Can supervised learning be recast as reinforcement learning problem? | Question: Let's assume that there is a sequence of pairs $(x_i, y_i), (x_{i+1}, y_{i+1}), \dots$ of observations and corresponding labels. Let's also assume that the $x$ is considered as independent variable and $y$ is considered as the variable that depends on $x$. So, in supervised learning, one wants to learn the function $y=f(x)$.
Can reinforcement learning be used to learn $f$ (possibly, even learning the symbolic form of $f(x)$)?
Just some sketches of how it can be done: $x_i$ can be considered as the environment, and each $x_i$ defines some set of possible "actions" - possible symbolic forms of $f(x)$ or possible numerical values of parameters for $f(x)$ (if the symbolic form is fixed). A concrete selected action/functional form $f(x, a)$ ($a$ - set of parameters) can then be assigned a reward from the loss function: how close the observation $(x_i, y_i)$ is to the value that can be inferred from $f(x)$.
Are there ideas or works in RL along the lines of the framework I described in the previous passage?
Answer: Any supervised learning (SL) problem can be cast as an equivalent reinforcement learning (RL) one.
Suppose you have the training dataset $\mathcal{D} = \{ (x_i, y_i \}_{i=1}^N$, where $x_i$ is an observation and $y_i$ the corresponding label. Then let $x_i$ be a state and let $f(x_i) = \hat{y}_i$, where $f$ is your (current) model, be an action. So, the predicted label of observation $x_i$ corresponds to the action taken in state $x_i$. The reward received after having taken action $f(x_i)$ in state $x_i$ can then be defined as the loss $|f(x_i) - y_i|$ (or any other suitable loss).
The minimization of this loss is then equivalent to the maximization of the (expected) reward. Therefore, in theory, you could use trajectories of the form $$T=\{(x_1, f(x_1), |f(x_1) - y_1|), \dots, (x_N, f(x_N), |f(x_N) - y_N|)\}$$ to learn a value function $q$ (for example, with Q-learning) or a policy $\pi$, which then, given a new state $x_{\text{new}}$ (an observation) produces an action $f(x_{\text{new}})$ (the predicted label).
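As a toy illustration of this construction (my own sketch, not from the cited papers): take a small discrete dataset where states are the observations, actions are candidate labels, and the reward is the negative absolute error. A simple tabular, bandit-style value update then recovers the labeling function.

```python
import random

random.seed(0)

# Toy dataset: observations are integers, the target function is f(x) = x % 3.
states = list(range(6))
labels = {x: x % 3 for x in states}
actions = [0, 1, 2]              # candidate labels

Q = {(s, a): 0.0 for s in states for a in actions}
alpha = 0.5                      # learning rate

for _ in range(2000):
    s = random.choice(states)    # sample an observation (state)
    a = random.choice(actions)   # explore: try a random label (action)
    r = -abs(a - labels[s])      # reward = negative loss |f(x) - y|
    Q[(s, a)] += alpha * (r - Q[(s, a)])  # each state is terminal: no bootstrap term

policy = {s: max(actions, key=lambda a: Q[(s, a)]) for s in states}
print(policy == labels)  # True: the greedy policy recovers f
```

Here every episode is a single step, so the Q-learning target has no next-state term; with sequential data you would use the full bootstrapped update.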
However, note that the learned policy might not be able to generalise to observations not present in the training dataset. Moreover, although it is possible to solve an SL problem as an RL problem, in practice, this may not be the most appropriate approach (i.e. it may be inefficient).
For more details, read the paper Reinforcement Learning and its Relationship to Supervised Learning (2004) by Barto and Dietterich, who give a good overview of supervised and reinforcement learning and their relationship. The paper Learning to predict by the methods of temporal differences (1988), by Richard Sutton, should also give you an overview of reinforcement learning from a supervised learning perspective. However, note that this does not mean that a reinforcement learning problem can be cast as an equivalent supervised learning one. See section 1.3.3 Converting Reinforcement Learning to Supervised Learning of the mentioned paper Reinforcement Learning and its Relationship to Supervised Learning for more details.
Reinforcement learning can thus be used for classification and regression tasks. See, for example, Reinforcement Learning for Visual Object Detection (2016) by Mathe et al. | {
"domain": "ai.stackexchange",
"id": 1340,
"tags": "reinforcement-learning, comparison, supervised-learning, function-approximation, regression"
} |
Work-energy theorem in rotational motion | Question: Assume a full cylinder starts slipping without rolling on the floor with initial velocity $v_0$ and the floor has kinetic and static friction coefficient $\mu$. Find the distance it will travel until it rolls without slipping.
My idea was to use the work-energy theorem:
$E_{k,final}-E_{k,initial}=W_{friction}$, but the problem is that I don't know how this theorem works when considering angular kinetic energy, so what is the right expression for $E_{k,final}$:
$\frac12mv^2$ or $\frac12mv^2 + \frac12I\omega^2$ ?
Answer: Considering a cylinder of radius $R$, the condition for rolling without slipping is met when $v=\omega R$. What we'll do is basically find how $\omega$ and $v$ vary in time and equate them:
To find $\omega$ we'll use the rotational analog of Newton's $2^{\text{nd}}$ law:
$$\begin{aligned}
\tau=||\mathbf r\wedge\boldsymbol f||=||(0,-R)\wedge(-f,0)||=fR=\mathcal I\alpha=\mathcal I\frac{\mathrm d\omega}{\mathrm dt}&\implies\int_0^\omega\mathrm d\omega'=\frac{fR}{\mathcal I}\int_0^t\mathrm dt'\\
&\implies \omega(t)=\frac{fR}{\mathcal I}t=\frac{2\mu g}{R}t,
\end{aligned}$$
where in the last step we used the fact that $\mathcal I_{\text{cylinder}}=\dfrac{1}{2}mR^2$ and $f=\mu N=\mu mg$.
As for $v$ we'll use Newton's $2^{\text{nd}}$ law
$$m\frac{\mathrm dv}{\mathrm dt}=-f=-\mu mg\implies\int_{v_0}^v\mathrm dv'=-\mu g\int_0^t\mathrm dt'\implies v(t)=v_0-\mu gt$$
Imposing $v(t_{\mathrm r})=\omega(t_{\mathrm r})R$, we get that
$$v_0-\mu gt_{\mathrm r}=2\mu gt_{\mathrm r}\implies t_{\mathrm r}=\frac{v_0}{3\mu g}$$
Finally, to find the distance travelled by the cylinder until it starts rolling without slipping, we shall integrate $v$ once again and evaluate it at $t_{\mathrm r}$:
$$d=\int_0^{t_{\mathrm r}}v\,\mathrm dt=\left[v_0t-\frac{1}{2}\mu gt^2\right]_0^{t_{\mathrm r}}=v_0\left(\frac{v_0}{3\mu g}\right)-\frac{1}{2}\mu g\left(\frac{v_0}{3\mu g}\right)^2=\left(\frac{1}{3}-\frac{1}{18}\right)\frac{v_0^2}{\mu g}=\frac{5v_0^2}{18\mu g}$$
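A quick numerical check of both results (my own addition, with arbitrary parameter values): integrate $v$ and $\omega$ until $v=\omega R$, then compare the elapsed time and distance with $v_0/(3\mu g)$ and $5v_0^2/(18\mu g)$.

```python
v0, mu, g, R = 4.0, 0.3, 9.81, 0.1    # arbitrary values
dt = 1e-5
v, w, d, t = v0, 0.0, 0.0, 0.0

while v > w * R:                      # slipping phase: kinetic friction acts
    v -= mu * g * dt                  # linear deceleration, a = -mu*g
    w += (2 * mu * g / R) * dt        # angular acceleration, alpha = 2*mu*g/R
    d += v * dt
    t += dt

print(abs(t - v0 / (3 * mu * g)) < 1e-3)          # True
print(abs(d - 5 * v0**2 / (18 * mu * g)) < 1e-3)  # True
```

Note the result is independent of R: a larger radius slows the angular acceleration but also lowers the rim speed needed for rolling.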
It might be doable by energies like you suggested, but it'd be a bit more tedious alternative to get to the same conclusion since what you're asking can't be simpler than imposing $v=\omega R$. | {
"domain": "physics.stackexchange",
"id": 100358,
"tags": "homework-and-exercises, newtonian-mechanics, energy, rotational-dynamics, work"
} |
Is the coefficient of thermal expansion a symmetric tensor? | Question: Is the coefficient of thermal expansion (CTE) a symmetric tensor?
For thermal stresses, the CTE has to be symmetric, otherwise the stress tensor would not be symmetric. So does this mean that the CTE tensor is always symmetric?
Answer: The stress tensor is commonly taken as a symmetric tensor, and it is a consequence of the conservation of angular momentum. In that line, the thermal expansion tensor should be symmetric. From Reference 1:
If the temperature of a crystal is changed, the resulting deformation may be specified by the strain tensor $[\epsilon_{ij}]$. When a small temperature change $\Delta T$ takes place uniformly throughout the crystal the deformation is homogeneous, and it is found that all the components of $[\epsilon_{ij}]$ are proportional to $\Delta T$; thus
$$\epsilon_{ij} = \alpha_{ij} \Delta T,$$
where the $\alpha_{ij}$ are constants, the coefficients of thermal expansion. Since $[\epsilon_{ij}]$ is a tensor, so also is $[\alpha_{ij}]$, and, moreover, since $[\epsilon_{ij}]$ is symmetrical so also is $[\alpha_{ij}]$.
There are some theories that allow for non-symmetric stress tensors, though. But they are much less common.
References
Nye, John Frederick. Physical properties of crystals: their representation by tensors and matrices. Oxford university press, 1985. | {
"domain": "physics.stackexchange",
"id": 49482,
"tags": "symmetry, stress-energy-momentum-tensor, stress-strain"
} |
Would a black hole's rotational axis precess in orbit around the sun? | Question: The earth rotates, and its axis of rotation precesses due to the gravitational pull of the sun and moon and other planets upon the mass of the earth.
If an earth-sized, rotating black hole was in orbit around the sun, would its axis of rotation precess?
EDIT: I forgot that an Earth mass black hole would have a supertiny horizon. And making the horizon Earth radius means that it's heavier than the sun. So let's just scale it all up.
Consider a "solar system" in black hole equivalent: a "sun" black hole with a horizon the size of our sun, an "Earth" black hole with a horizon the size of our Earth and the same angular momentum as our Earth, and a "moon" black hole, orbiting the "Earth", with a horizon the size of our moon which is orbiting "Earth". Does the "Earth" precess? Does it experience any tidal forces?
Answer: Since there are no extra rules for black holes, they should follow the same laws. In general relativity these effects are called the de Sitter effect and the Lense-Thirring effect, which were verified, with great fanfare, by the famous Gravity Probe B mission. | {
"domain": "physics.stackexchange",
"id": 19952,
"tags": "black-holes, orbital-motion, rotational-dynamics, precession"
} |
Show Equivalence Between Multiplication in Time Domain and Convolution in Frequency Domain | Question: My goal is to compute the Fourier transform of the product of two discrete-time signals, y1 and y2. This can be done by computing the convolution between the Fourier transform of y1, f1, and the Fourier transform of y2, f2 (this is what I understood from the Wikipedia page).
I tried the following code in MATLAB, it shows that the convolution between y1 and y2 is equal to ifft(f1.*f2) as shown on the Wikipedia page, the code worked perfectly:
x=0:0.01:10;
y1=sin(x);
y2=sin(3*x);
n=length(y1);
convol=conv(y1,y2);
f1=fft(y1,n*2-1);
f2=fft(y2,n*2-1);
plot(ifft(f1.*f2))
hold on
plot(convol,'x')
Now I want to be able to compute the fourier transform of the product of y1 by y2 using the convolution theorem, this can be done by computing the convolution between f1 and f2, so I wrote the following code:
x=0:0.01:10;
y1=sin(x);
y2=sin(3*x);
n=length(y1);
f1=fft(y1,n*2-1);
f2=fft(y2,n*2-1);
convol=conv(f1,f2);
ffty1y2=fft(y1.*y2,length(convol));
plot(abs(ffty1y2))
hold on
plot(abs(convol)/length(convol),'x')
As you can see the results do not match (neither in location of the peaks nor in their values), what could be wrong here? How do I fix this?
Answer: Let's assume you have 2 signals: vX and vY.
So:
clear();
numSamplesX = length(vX);
numSamplesY = length(vY);
numSamplesConv = numSamplesX + numSamplesY - 1;
vTimeDomainConv = conv(vX, vY);
vFrequencyDomainConv = ifft(fft(vX, numSamplesConv) .* fft(vY, numSamplesConv), 'symmetric');
max(abs(vTimeDomainConv - vFrequencyDomainConv)) %<! Should be < 1e-12
This is the Convolution Theorem for discrete signals: convolution in the time domain is equivalent to element-wise multiplication in the frequency domain.
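For readers working outside MATLAB, the same check can be sketched in Python with NumPy (the language choice is mine, not the poster's):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(8)
y = rng.standard_normal(8)
n_conv = len(x) + len(y) - 1   # length of the linear convolution

time_domain = np.convolve(x, y)                     # direct linear convolution
freq_domain = np.fft.ifft(np.fft.fft(x, n_conv)
                          * np.fft.fft(y, n_conv)).real

print(np.max(np.abs(time_domain - freq_domain)))    # should be < 1e-12
```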
If you want to show that element-wise multiplication in the time domain can be done using convolution in the frequency domain, you need to either interpolate the time-domain signal to the length of the linear convolution or just use circular convolution in the frequency domain:
clear();
numSamples = 6;
vX = randn(numSamples, 1);
vY = randn(numSamples, 1);
vXY = vX .* vY; %<! Time Domain Multiplication
vDxy = fft(vXY); %<! The DFT of the Time Domain Multiplication
vDx = fft(vX);
vDy = fft(vY);
% Applying circular convolution
vXYC = cconv(vDx, vDy, numSamples) / numSamples; %<! Normalization
% Comparing the result in Frequency Domain
max(abs(vXYC - vDxy)) %<! Should be < 1e-12
% Time Domain
vXYFromFrequency = ifft(vXYC, 'symmetric');
% Comparing the result in Time Domain
max(abs(vXY - vXYFromFrequency)) %<! Should be < 1e-12 | {
"domain": "dsp.stackexchange",
"id": 8526,
"tags": "matlab, discrete-signals, fourier-transform, convolution, time-frequency"
} |
What happens when an object being accelerated by gravity begins to approach light speed? | Question: Imagine you have a pair of 2-dimensional circular portals, with one placed perfectly above the other (practically the same as portals in the Portal games). A spherical object is held between the portals and then dropped, enabling it to fall endlessly. The room this occurs in is a perfect vacuum and the sphere has 0 sideward velocity relative to the portals.
Normally an object accelerates until terminal velocity, but magically there isn't a single gaseous particle in the room.
Does the acceleration of the object begin to slow as it reaches a notable fraction of the speed of light, as more and more energy is required to accelerate it further? Or does the object accelerate at the same rate (as gravity provides force, not energy) before reaching a hard cutoff at 99.999...% the speed of light?
I know relativity has a lot to do with observers, so I guess the observer would be someone looking into the room through a glass window. Perhaps they have a laser shining across the path of the falling object, and in that way measure the speed of the object (by measuring the amount of time the laser is blocked for divided by the sphere's diameter).
Answer: This is going to be a crappy (i.e., non-mathematical) partial answer, but I hope it will help a little:
Whenever you talk about things moving at relativistic speeds, it's important to be explicit about frames of reference. Observers who are moving differently from each other will disagree on things like, how fast an object is moving, how long an object is, how much time elapses between events. Sometimes they cannot even agree on the order in which different events happen.
Let's say, for the sake of argument, that the "object" is a person wearing a space suit (because of the vacuum.) The falling test subject will feel no acceleration at all—they are freely falling. They seem to be falling through an endless "tunnel of hoops" where the hoops are the edges of the portal.
As the subject falls, they count how many hoops they fall through per second. That number will increase without bound. At first, the hoops-per-second will increase because the hoops are flying by at ever increasing speed. But, that speed will asymptotically approach (and never exceed) $C$. As the speed gets close to $C$, the hoops-per-second will continue to increase though because the distance between the hoops will seem to get closer, and closer, asymptotically approaching zero.
Meanwhile, there's an experimenter in the "lab" coordinate system, who's looking in from the side. From the experimenter's point of view, the speed of the falling test subject will never exceed $C$, and the number of times-per-second that the subject falls through the portal will never exceed $C/l$ where $l$ is the vertical distance between the entrance and the exit of the portal.
The "lab" observer sees the number of "hoops-per-second" reach a limit. But the falling test subject sees it increase forever. How is that possible? The discrepancy is resolved if the "lab" observer is able to see the falling test subject's wrist watch. As the speed of falling approaches $C$, the "lab" observer will see the wrist watch slowing down. The falling test subject, sees nothing unusual about their own watch, but the experimenter sees the time between ticks of the second hand grow longer and longer, again without bound. If the lab observer counts number of hoops per tick of the subject's watch, they'll get the same number that the falling test subject experiences.
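One quantitative sketch of the lab-frame picture, under an assumption of mine made purely for illustration: treating gravity as a constant force $mg$ in the lab frame, so the momentum grows as $p = mgt$ and the speed $v = gt/\sqrt{1+(gt/c)^2}$ approaches $c$ but never reaches it:

```python
import math

c = 299_792_458.0   # speed of light, m/s
g = 9.81            # assumed constant force per unit mass in the lab frame

def lab_speed(t):
    """Lab-frame speed when dp/dt = m*g, i.e. p = m*g*t."""
    return g * t / math.sqrt(1.0 + (g * t / c) ** 2)

year = 365.25 * 24 * 3600.0
for years in (0.1, 1.0, 10.0, 100.0):
    print(f"{years:6.1f} yr of falling: v/c = {lab_speed(years * year) / c:.9f}")
```

After about a year of coordinate time the speed is already a sizable fraction of $c$, yet it stays below $c$ forever.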
I'm sure that there's more that could be said about this (including mathematical equations), but I'm not the person who can say it with authority. | {
"domain": "physics.stackexchange",
"id": 94737,
"tags": "general-relativity, gravity, speed-of-light"
} |
NA in LR model summary(R) | Question: So, I was trying to improve my LR model by performing multiple linear regression on a dataset. I had a categorical variable, region.
Region(variable):
Midwest
Northeast
South
West
I made a dummy variable for each of them and it did improve my model a bit.
Previous Model Summary
After adding these variables (which I made using a different variable)
I am getting NA's in the coefficient for West, which I don't understand. Can someone explain?
Answer: You've given all four regions a dummy variable, so these are perfectly multicollinear, and the (unpenalized?) regression doesn't have a unique solution. R automatically drops a column in this situation and reports the NA.
https://stats.stackexchange.com/q/212903/232706
https://stackoverflow.com/q/7337761/10495893
https://stats.stackexchange.com/q/25804/232706
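The perfect multicollinearity is easy to see from the rank of the design matrix; a sketch in Python with NumPy (my choice of tools; the question itself uses R):

```python
import numpy as np

regions = np.array(["Midwest", "Northeast", "South", "West"] * 5)
levels = ["Midwest", "Northeast", "South", "West"]

# One dummy column per region plus an intercept: the four dummies always
# sum to 1, exactly duplicating the intercept column.
dummies = np.column_stack([(regions == r).astype(float) for r in levels])
X = np.column_stack([np.ones(len(regions)), dummies])

print(X.shape[1], np.linalg.matrix_rank(X))   # 5 columns, but rank 4
```

Dropping any one dummy (or the intercept) restores full column rank, which is exactly what R does automatically.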
(You can see where R calls C and subsequently FORTRAN functions, then inserts the NAs at
https://github.com/wch/r-source/blob/0f07757ad10ca31251b28a2c332812e63c0acf38/src/library/stats/R/lm.R#L117
A nice article that helped me find that: http://madrury.github.io/jekyll/update/statistics/2016/07/20/lm-in-R.html ) | {
"domain": "datascience.stackexchange",
"id": 6762,
"tags": "r, linear-regression"
} |
Help with Continuously Separating and Sorting Solids out of a Centrifuge | Question: I am designing a simple centrifuge to separate soil by particle size (as well as decanting water). My goal is to be able to continuously funnel in 500 $cm^3$ of soil and 1.25 liters of water and have the components of the soil (clay, silt, sand) separated from each other and funneled into individual containers for each component. Water will be easily decanted and I know how to do so.
The problem is that I am having difficulties trying to figure out how to easily and continuously extract the soil layers without remixing them together. It is important that the machine is able to sort the soil layers while the centrifuge is still spinning. This is due to waiting for the centrifuge to stop spinning/restart spinning adding large time wastes over many kilograms of soil processed.
I was thinking about adding opening/closing dividers to attempt to keep the layers separate while funneled out, however, there are two problems with this:
The ratio/amount of the soil components (again, clay, silt, and sand) is unknown. Opening/closing dividers would have to be installed at fixed positions, and as the proportions of the soil components are unknown, dividing along these fixed positions will cause some remixing of the soil.
I am hesitant to introduce moving parts on a component spinning at 2,500 RPM (the inner drum of the centrifuge). This seems like an unnecessary safety risk that I would like to avoid, if possible.
Here is a diagram of a simple example of what the inner drum looks like:
What is supposed to happen in the centrifuge is similar to grabbing some soil, mixing it with water, and letting it settle, but much faster. If I wait for it to settle naturally, I have to wait 4-5 days for a small 500g sample to fully settle, whereas with a continuously flowing/separating centrifuge (and additional mechanisms), I can expect to sort hundreds of kilograms within a day. This, along with it automatically sorting the soil components, drastically improves my workflow.
Short explanation of expected sorting behavior:
Sand particles (defined as particles with diameters from 0.05 mm to 2 mm) should separate closest to the wall of the centrifuge due to being the densest particles in the soil.
Silt particles (defined as particles with diameters from 0.002 mm to 0.05 mm) should separate and collect in the "middle" of the soil solids due to it being the moderately dense particles in the soil.
Clay particles (defined as particles with diameters less than 0.002 mm) should separate closest to the water layer due to it being the least dense particles in the soil. I also expect organic matter to collect close to this layer, but I know of some ways of eliminating the organic matter.
Water, of course, would be the furthest away from the wall due to it being much less dense than any of the soil solids.
It is not required for the sorting to be perfect. For instance, I do not expect perfectly sized particles that are perfectly sorted. I do expect some silt to be mixed with clay, sand with silt, etc. However, I would prefer this to be minimal with a reasonable setup.
I greatly look forward to your suggestions. Additionally, if I have missed any oversights, I appreciate pointing them out as well. Thank you very much.
Answer: Separating solids of different densities by centrifuge would be extremely difficult as you have identified.
I would recommend building a multi stage classifier instead. These are used in milling processes to extract the desired material size. They generally use air, but could also be built using water as the working fluid. It is based on the principle of elutriation, and essentially uses the terminal velocity of each individual particle as the separation principle. With the streams separated, you can then remove the water via gravity separation, centrifuge, screening, or filter press, as you prefer. Depending on the geometric shape and density of the sand, you may have to add a wet or dry screening process to separate it from denser, smaller particles that have similar terminal velocities.
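To get a feel for why elutriation can separate these classes, one can compare Stokes-law terminal velocities; a rough sketch with assumed quartz-in-water values (note that Stokes' law breaks down for the largest sand grains, where the Reynolds number is no longer small, so that figure is only an upper bound):

```python
# Assumed values: quartz density 2650 kg/m^3, water at 20 C.
g = 9.81            # m/s^2
rho_p = 2650.0      # particle density, kg/m^3
rho_f = 998.0       # water density, kg/m^3
mu = 1.0e-3         # dynamic viscosity of water, Pa*s

def stokes_velocity(d):
    """Terminal settling velocity (m/s) of a sphere of diameter d (m)."""
    return (rho_p - rho_f) * g * d**2 / (18.0 * mu)

for name, d in [("clay (2 um)", 2e-6),
                ("silt (50 um)", 50e-6),
                ("sand (2 mm)", 2e-3)]:
    print(f"{name}: {stokes_velocity(d):.2e} m/s")
```

The velocities span roughly six orders of magnitude between clay and sand, which is why a water column with a controlled upward flow can carry away one class while the others settle.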
I recommend finding some mining engineering books to review other options and to locate the calculations for sizing your system. | {
"domain": "engineering.stackexchange",
"id": 4715,
"tags": "mechanical-engineering"
} |
Transmit power of non-orthogonal FDM | Question: It is known that the total power of an OFDM signal equals the sum of the powers on the individual subcarriers. I am a bit puzzled by the total transmit power of a generalized FDM system whose subcarriers are not orthogonal, and whether it should remain the same or change due to the non-orthogonality.
Let us consider the following OFDM transmit signal:
$$s(t) = \sum_{k = 0}^{N-1} a_k e^{j2\pi f_k t} = \sum_{k = 0}^{N-1} a_k e^{j2\pi \frac{k \: \phi}{T} t}, \quad 0\leq t\leq T,$$
where $k$ is the subcarrier index, $a_k$ are i.i.d. data symbols with zero mean, $N$ is the total number of subcarriers, $T$ is the FDM symbol duration, and $\phi < 1$ is introduced to break the orthogonality.
The transmit power is represented as
$$ P = \frac{1}{T} \int_{0}^{T} s(t) \: s^*(t) dt,$$
which can be further simplified as
$$P = \frac{1}{T} \sum_{k = 0}^{N-1} \sum_{k' = 0}^{N-1} a_k a^*_{k'} \int_{0}^{T} e^{j2\pi \phi (k - k')t/T} dt$$
Now, the value of the integral $\int_{0}^{T} e^{j2\pi \phi (k - k')t/T} dt$ does not equal $0$ if $k \neq k'$ (due to the non-orthogonality introduced by $\phi$). This means that the transmit power in the case of non-orthogonal subcarriers no longer equals the sum of the powers on the individual subcarriers? In fact, it is higher than the sum!
I feel there is something wrong in this simple derivation, as by intuition, if a power source spends a certain amount of power on each subcarrier, how can the total power be greater than the sum of these individual powers?
========================
Update
The following update to make the modeling of non-orthogonality more clear. Let us consider the FDM signal written as
$$s(t) = \sum_{k = 0}^{N-1} a_k e^{j2\pi (f_k + \delta_k) t} = \sum_{k = 0}^{N-1} a_k e^{j2\pi (\frac{k \:}{T} + \delta_k) t}, \quad 0\leq t\leq T,$$
where $\delta_k$ is the frequency offset on subcarrier $k$. Following a similar approach, the power $P$ can be found as
$$P = \frac{1}{T} \sum_{k = 0}^{N-1} \sum_{k' = 0}^{N-1} a_k a^*_{k'} \int_{0}^{T} e^{j2\pi (k/T + \delta_k - k'/T - \delta_{k'})t} dt$$
which does not equal the sum of the powers on the individual subcarriers.
Answer: In your example, the sinusoids are still orthogonal to one another; the key is that you must integrate over the correct period when calculating $P$. All of the sinusoids have frequencies that are multiples of $\frac{\phi}{T}$, so their resulting sum will have period $\frac{T}{\phi}$.
You should therefore take this fact into account when calculating the average power over a period of $s(t)$:
$$
P = \frac{\phi}{T} \sum_{k = 0}^{N-1} \sum_{k' = 0}^{N-1} a_k a^*_{k'} \int_{0}^{\frac{T}{\phi}} e^{j2\pi \phi (k - k')t/T} dt
$$
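A quick numerical check (with assumed values for $T$, $\phi$, and the indices) confirms that the cross term survives over $[0,T]$ but vanishes over the true period $[0,T/\phi]$:

```python
import numpy as np

T, phi = 1.0, 0.8        # assumed symbol duration and packing factor
k, kp = 3, 5             # two distinct subcarrier indices

t = np.linspace(0.0, T / phi, 200_001)
integrand = np.exp(1j * 2 * np.pi * phi * (k - kp) * t / T)

over_T = np.trapz(integrand[t <= T], t[t <= T])    # original interval
over_period = np.trapz(integrand, t)               # full period T/phi

print(abs(over_T), abs(over_period))   # nonzero vs. ~0
```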
With this formulation, you do get the useful property that the integral is zero for all $k \ne k'$, which allows you to simplify the expression to the sum of the powers of each of the sinusoids. | {
"domain": "dsp.stackexchange",
"id": 3783,
"tags": "continuous-signals, ofdm"
} |
How to simulate temperature change of oven? | Question: I am trying to write software that will model the oven temperature change when it is turned on/off. The data I can get is a graph, obtained by taking a temperature reading each second from time $T_0$ up to some temperature. Then I can take data for the cooling graph. So I will have two nonlinear curves. Having this data, how can I simulate the temperature change? Should I take the current temperature at any given moment, see if the oven is on/off, then follow the respective curve to increase or decrease the temperature at the calculated rate? Mostly I'm not clear on how to calculate the rate from the curve.
What will the precision of such modelling be?
I took some readings from the oven as sample data. I turned the oven off at 80 degrees.
Answer: Once you have the data from the heating process, fit a curve through it. Do the same thing with the cool-down data. You then have an approximate expression for the evolution of the temperature with time when the heat source is on and when it is off. You should in general get a shape like $T(t) = T_0 + f(t)$ when the oven is on and $T(t) = T_0 - g(t)$ when it is off. Use these expressions in your code. Everytime the oven is switched between on or off, you switch equations in your code.
More in detail: if you start at $t=0$ with $T(0) = T_0$ and the heat source activated, you do nothing until an action is taken. Keep a logical variable active with value 1 if the heat source is active and 0 if not. If the action is to switch off the heat source, you simply change the value of your logical variable, overwrite $T_0$ and put $t=0$ again. If the action is to measure the temperature, read the logical variable and calculate the approximate temperature $T(t)$ using the expression corresponding to the value of active. You know $T_0$ and the time $t_1$ passed since $t=0$ so it's a straightforward calculation. (you know the time either from the actual time passed or from a simulated time that you could establish yourself)
If desired you could put all this in a loop and calculate the temperature nearly continuously while the heat source is being activated and deactivated. But my answer is already getting into the field of programming more than physics.
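A minimal sketch of that bookkeeping in Python (the exponential curves and their parameters are placeholders of mine, standing in for whatever you fit to your measured data):

```python
import math

T_MAX, T_AMB, TAU = 250.0, 20.0, 300.0   # assumed fit parameters

def f(dt, T0):   # fitted heating curve: rise toward T_MAX
    return (T_MAX - T0) * (1.0 - math.exp(-dt / TAU))

def g(dt, T0):   # fitted cooling curve: decay toward ambient
    return (T0 - T_AMB) * (1.0 - math.exp(-dt / TAU))

class Oven:
    def __init__(self, T0=T_AMB):
        self.T0, self.t0, self.active = T0, 0.0, False

    def switch(self, now, on):
        # Overwrite T0 with the current temperature and restart the clock.
        self.T0, self.t0, self.active = self.temperature(now), now, on

    def temperature(self, now):
        dt = now - self.t0
        return self.T0 + f(dt, self.T0) if self.active else self.T0 - g(dt, self.T0)

oven = Oven()
oven.switch(0.0, True)           # heat source on at t = 0 s
print(oven.temperature(600.0))   # partway toward T_MAX
oven.switch(600.0, False)        # heat source off at t = 600 s
print(oven.temperature(1200.0))  # cooling back toward ambient
```

Each `switch` call re-anchors $T_0$ and $t=0$, exactly as described above, so the model never needs to remember more than the last switching event.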
The precision of this particular approach depends partly on the accuracy of the plots (if the data is good). If the fit is no good, any extrapolations made with it will be completely unreliable. But if your heat source isn't too whimsical, you should be able to get a more than reasonable fit with a fairly basic mathematical model.
Another aspect of the problem which could have an important impact on the reliability of this approach is if the system exhibits a significant amount of hysteresis or if there is a time-memory. Both phenomena would be difficult to incorporate into a general model, at least without additional experimental data of different heat-up/cool-down cycles. So if their effects are not negligible within the desired level of precision, you will need more knowledge of their behaviour through experiments. | {
"domain": "physics.stackexchange",
"id": 6028,
"tags": "temperature, simulations"
} |
Entropy and the $2^{nd}$ law of thermodynamics | Question: I have just been introduced to the word "entropy", and it is my understanding that it is a measure of the randomness and chaos of particles in a system.
My textbook lists the 2nd law of thermodynamics in various forms, one of them being "The entropy of the universe can never decrease", but somewhere I read that the universe has been cooling ever since the big bang.
Should a system that is cooling not become less chaotic and random, since there is less energy available for motion, vibration, etc.?
Answer:
it is my understanding that it is a measure of the randomness and chaos of particles in as system
There are many different kinds of entropy. "Measure of chaos" is a very inaccurate characterization of all of them.
The second law of thermodynamics has consequences for changes in macroscopic bodies that can attain a thermodynamic equilibrium state and can thus be ascribed a thermodynamic entropy; this is meaningful only for systems in thermodynamic equilibrium.
Thermodynamic equilibrium of a body is a situation where macroscopic observations detect no macroscopic motions, no changes, no transfer of energy. Moreover, to apply thermodynamic theory to the system, its equilibrium state must be accessible to the system given the external constraints. For a gas, this means it has to be in a closed container; otherwise it will spread everywhere.
The universe is hardly such a system, for we know of no constraints on its volume or energy; thus there are no thermodynamic state variables that entropy could be a function of.
The universe has never been experimentally probed in the way that air or steam in heat engines have. To claim that it is in thermodynamic equilibrium and can be ascribed a thermodynamic entropy is unfounded. It is also meaningless to claim that the whole universe obeys the 2nd law of thermodynamics, when nobody has ever observed heat transfer between the universe and a reservoir or defined the temperature of the universe. Using the 2nd law in such grand scenarios is a patently unfounded extrapolation of our experience with systems of size comparable to humans or, at most, to celestial bodies. The whole universe involves gravity and other things mostly ignored in all the experiments on which thermodynamic theory was founded. | {
"domain": "physics.stackexchange",
"id": 19916,
"tags": "thermodynamics, entropy"
} |
How do I calculate effectiveness and NTU of a heat exchanger when there is phase change? | Question: This is my first post here. Let me explain the big picture of my current circumstances: I am a software engineer currently building a platform for evaluating process equipment in plants. I do not really know the technical details, since it is not my area of expertise, and I apologize if this question does not make any sense.
Basically, the engineer who used to help me build this software found these formulas to calculate effectiveness of a shell and tube heat exchanger:
$$
Cmin = m_1*C_{p 1}
$$
$$
Cmax = m_2*C_{p 2}
$$
Then we'd have to choose which one of the two is the minimum and the maximum. Then we'd have to do this:
$$
C = \frac{Cmin}{Cmax}
$$
However, the engineer said that when there is a phase change Cmax would be considered as infinity and therefore C would equal 0.
If C equals 0 then that would mean the formula that would be used for effectiveness would be the following:
$$
\epsilon = 1-e^{-NTU}
$$
And NTU would be:
$$
NTU= \frac{U*A}{Cmin}
$$
First of all, is this logic correct? If not, what is the right logic? And how should I calculate Cmin in the NTU formula? And would this still be true even if there is phase change on both sides of the shell and tube exchanger?
Again, I'm sorry if I wasn't clear enough, this is not my area of expertise...
Answer: First of all, I am sorry that I did not see your question earlier.
Your reasoning is suitable for most cases, but since you are not a chemical or mechanical engineer we can forget about messy details and pointless scenarios.
Your inference that if
\begin{equation}
c=0
\end{equation}
Then for all heat exchangers
\begin{equation}
\epsilon=1-e^{(-NTU)}
\end{equation}
is absolutely correct. Also, you can calculate $C_{min}$ by comparing the following two:
\begin{equation}
C_{hot}=\dot{\mathbf{m_{hot}}}*C_{p,hot}\\
C_{cold}=\dot{\mathbf{m_{cold}}}*C_{p,cold}
\end{equation}
Where $\dot{\mathbf{m_{cold}}}$ is mass flow rate of cold fluid and $\dot{\mathbf{m_{hot}}}$ is mass flow rate of hot fluid, notation is the same for specific heats.
$C_{min}$ is the smaller of $C_{hot}$ and $C_{cold}$. The mass flow rate can be calculated in many ways. For example, one can calculate the mass flow rate if the cross-sectional area, the density of the fluid, and the velocity of the fluid are known, using the steady-flow formula $\dot{\mathbf{m}}=\rho*v*A_{c}$.
The specific heats are obtained from tables for most cases, so you will not calculate them.
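Putting the pieces together, a sketch with made-up numbers (a condensing hot side, so $C_{max}\to\infty$ and $C=0$; every value below is an assumption for illustration):

```python
import math

# Assumed data: water on the cold side, steam condensing on the hot side.
m_cold, cp_cold = 2.0, 4180.0   # kg/s and J/(kg K)
U, A = 850.0, 12.0              # W/(m^2 K) and m^2

C_min = m_cold * cp_cold        # condensing side: C_max -> infinity, so C = 0
NTU = U * A / C_min
effectiveness = 1.0 - math.exp(-NTU)   # valid for any geometry when C = 0

print(f"NTU = {NTU:.3f}, effectiveness = {effectiveness:.3f}")
```

One convenient property of the $C=0$ case is that this effectiveness formula holds regardless of the flow arrangement (counter-flow, parallel-flow, shell and tube).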
For the last question I have less to say. It is a troublesome question for me to develop a clear answer to. But I believe we would use some other techniques to analyse the situation with phase change on both sides, or we would divide the analysis of the process in order to account for each physical phenomenon in turn, or maybe we would at least add some extra terms for the phase change. Also, I think the answer to this question would be case-specific. Therefore it would be best for you to consult the engineer you work with. | {
"domain": "engineering.stackexchange",
"id": 5397,
"tags": "chemical-engineering, heat-exchanger"
} |
Why is depth complexity relevant? | Question: Since gate complexity corresponds to the number of gates in a given quantum circuit, it seems that depth complexity brings no more information about quantum complexity than gate complexity. So does gate complexity encompass depth complexity?
Answer: Circuit depth matters because qubits have finite coherence time. You could imagine two circuits, each with $N$ gates, where the first circuit can implement them all in parallel and thus finish in one time-step, while the other must implement them in series, necessitating longer coherence of the physical qubits. Thus, the gate count alone only tells you how the fidelity might degrade as a function of gate error, and the context of the gates relative to each other and the underlying hardware will determine the circuit depth.
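The difference can be made concrete with a toy scheduler that assigns each gate to the earliest layer not already occupied on its qubits (a sketch of mine, ignoring hardware constraints):

```python
def circuit_depth(gates, n_qubits):
    """Depth = longest chain of gates that share qubits."""
    deepest = [0] * n_qubits          # deepest layer used on each qubit
    for qubits in gates:
        layer = 1 + max(deepest[q] for q in qubits)
        for q in qubits:
            deepest[q] = layer
    return max(deepest, default=0)

parallel = [(0,), (1,), (2,), (3,)]          # 4 independent 1-qubit gates
serial = [(0, 1), (1, 2), (2, 3), (3, 0)]    # 4 gates chained via shared qubits

print(len(parallel), circuit_depth(parallel, 4))   # 4 gates, depth 1
print(len(serial), circuit_depth(serial, 4))       # 4 gates, depth 4
```

Both circuits have the same gate count, yet one finishes in a single time-step and the other needs four, which is precisely the information gate complexity alone does not carry.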
Depth complexity is not totally independent from gate complexity, because a circuit with a large number of gates is likely to have large depth as well, but this is only qualitative. When gates cannot be performed in parallel, they contribute to the total depth of the circuit. This might occur because two sequential gates act on a shared qubit, but could also result from two gates that, though logically independent, rely on shared classical resources (multiplexed drive or readout lines). Thus, circuit depth can increase due to both the algorithm's structure and the physical limitations of the hardware. As an example, this paper discusses how to optimally map logical to physical qubits when compiling quantum circuits onto hardware with restricted connectivity. The algorithm presented by the authors can trade off circuit depth and gate count. To execute the circuit labeled "code" (top left), you can either insert 4 SWAP gates that can all run in parallel, or instead only use 3, but they will need two time-steps. If your qubits have pretty good coherence times but poorly-calibrated gates, you should go with the latter solution, and vice versa. | {
"domain": "quantumcomputing.stackexchange",
"id": 2891,
"tags": "complexity-theory, quantum-circuit"
} |
Searching a string of consecutive characters in a binary bar code | Question: I am writing a program that will take a binary bar code and convert it to a zip code. I need to validate the bar code. A valid binary bar code contains exactly two 1s in every group of five digits. This is what I have for the validation part so far, and I would like to simplify this code.
#include <iostream>
#include <string>
#include <cstdlib> // for exit()
using std::cout;
using std::string;
int main()
{
const int SIZE = 5;//number of characters to test in a set
string bar = "100101010010001";
string tempstring;
int count = 0;
while (count < bar.length())
{
int test = 0;
tempstring = bar.substr(count, SIZE);//split bar string into next five characters
for (int index = 0; index < SIZE; index++)
{
if (tempstring[index] == '1')
{
++test;
}
}
if (test != 2)
{
cout << "Invalid bar code!\n";
exit(1);
}
count += 5;
}
cout << "Valid bar code!\n";
}
Answer: You can do without creating a temp string,
simply by using bar[count + index] instead of tempstring[index].
Also, the while loop would be more natural as a for loop.
Like this, with some other improvements:
for (int count = 0; count < bar.length(); count += 5)
{
int ones = 0;
for (int index = 0; index < SIZE; index++)
{
if (bar[count + index] == '1')
{
++ones;
}
}
if (ones != 2)
{
cout << "Invalid bar code!\n";
return 1;
}
} | {
"domain": "codereview.stackexchange",
"id": 13809,
"tags": "c++, strings, search"
} |
Extrapolating GLM coefficients for year a product was sold into future years? | Question: I've fit a GLM (Poisson) to a data set where one of the variables is categorical for the year a customer bought a product from my company, ranging from 1999 to 2012. There's a linear trend of the coefficients for the values of the variable as the year of sale increases.
Is there any problem with trying to improve predictions for 2013 and maybe 2014 by extrapolating to get the coefficients for those years?
Answer: I believe that this is a case for applying time series analysis, in particular time series forecasting (http://en.wikipedia.org/wiki/Time_series). Consider the following resources on time series regression:
http://www.wiley.com/WileyCDA/WileyTitle/productCd-0471363553.html
http://www.stats.uwo.ca/faculty/aim/tsar/tsar.pdf (especially section 4.6)
http://arxiv.org/abs/0802.0219 (Bayesian approach) | {
"domain": "datascience.stackexchange",
"id": 102,
"tags": "statistics, glm, regression"
} |
Why is label pruning possible with hub labeling? | Question: Hub labeling (HL) computes superlabels using the vertices visited by the forward and reverse Contraction Hierarchies (CH) search. Those labels are then pruned (see HL, sec. 4.2) to generate strict labels.
I don't understand how there can be labels that can be pruned. From my understanding the shortcuts added by CH should make sure that in both forward and reverse searches we will either
reach a node via a shortest path or
not reach a node at all.
How is it possible that we reach some nodes via a path that is longer than the shortest path? Could anyone provide a minimal example?
Answer: I have emailed the authors and they kindly supplied me with the following example:
Let the nodes be contracted in alphabetical order; then the red edge will be added during contraction.
Now look at the upward search from A (i.e. we can only visit nodes that have been contracted later). A will settle B with distance 1 and D with distance 2. Since we are not allowed to visit C via D (as this is not upward) we must settle C via B with distance 6. However, as can clearly be seen from the non-contracted graph, the A-C distance is 5 (via D).
This means that we now have an incorrect distance label for C at A. | {
"domain": "cstheory.stackexchange",
"id": 3558,
"tags": "ds.algorithms, shortest-path"
} |
Energy and time evolution of a particle in a potential well | Question: I have a particle in an infinite square well (the box is from 0 to $a$), in the state described by the function
$$\psi (x) = \begin{cases}
Ax(a-x) & \text{for } 0<x<a, \\
0 & \text{otherwise}.
\end{cases}$$
I have to determine the most likely value of the energy and the probability of obtaining the value $E = \frac{9\hbar^2 {\pi}^2}{2ma^2}$.
To solve the second question, I reasoned that $E$ is the standard energy eigenvalue for this potential well with $n=3$. So I can calculate $\langle3| \psi\rangle$ $-$ in which $3$ is the solution wave function with $n=3$ $-$ and that is it, right?
But what about the first question?
Do I have to calculate $\langle H \rangle$ and compare it with a solution of the potential well?
I also have to determine the evolution of the wave function for $t>0$ when at $t=0$ we turn off the potential well, any hints?
Answer: First normalize the state to find $A$.
Then you need to express the state as a superposition of the stationary states of the infinite square well:
$$
\psi\left(x\right) = A x \left(a-x\right) = \sum_{n=1}^\infty c_n \psi_n\left(x\right),
$$
where $\psi_n\left(x\right) = \sqrt{2/a} \sin\left(n \pi x / a\right)$ is the $n$-th stationary state. You can do this using the orthogonality of the stationary states,
$$
\int_0^a dx \ \psi^*_m\left(x\right) \psi_n\left(x\right) = \frac{2}{a} \int_0^a dx \ \sin\left(\frac{m \pi x}{ a}\right) \sin\left(\frac{n \pi x}{ a}\right) = \delta_{mn},
$$
by integrating the equation above:
$$
\begin{align}
\int_0^a dx \ \psi^*_m\left(x\right) \left[A x \left(a-x\right)\right] &= \int_0^a dx \ \psi^*_m\left(x\right) \left[ \sum_{n=1}^\infty c_n \psi_n\left(x\right) \right] \\
&= \sum_{n=1}^\infty c_n \left[ \int_0^a dx \ \psi^*_m\left(x\right)\psi_n\left(x\right) \right]\\
&= \sum_{n=1}^\infty c_n \delta_{m n} \\
&= c_m
\end{align}
$$
I'll leave the $c_n = A \sqrt{2/a} \int_0^a dx \ \sin\left(n \pi x / a\right) x \left(a-x\right)$ integral for you to work out.
Once you have the $c_n$'s, the most likely value of a measurement of the energy is the energy corresponding to the stationary state with maximum $c_n$.
To find the probability of measuring $9 \hbar^2 \pi^2 / 2 m a^2$ for the energy, determine the stationary state that this energy corresponds to, and compute $\left|c_n\right|^2$.
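As a cross-check, a short SymPy sketch carries out the normalization and the $c_n$ integral (with $a$ kept symbolic; the closed form and numbers in the comments follow from the integral, not from the answer text):

```python
import sympy as sp

x, a = sp.symbols('x a', positive=True)
n = sp.symbols('n', positive=True, integer=True)

# Normalization: |A|^2 * integral_0^a x^2 (a-x)^2 dx = 1 gives A = sqrt(30/a^5)
A = sp.sqrt(30 / a**5)
norm = sp.simplify(sp.integrate((A * x * (a - x))**2, (x, 0, a)))  # -> 1

# Expansion coefficients c_n = <psi_n | psi>
psi_n = sp.sqrt(2 / a) * sp.sin(n * sp.pi * x / a)
c_n = sp.simplify(sp.integrate(psi_n * A * x * (a - x), (x, 0, a)))
# For odd n this reduces to 8*sqrt(15)/(n*pi)**3; even n give c_n = 0.

P1 = float(c_n.subs(n, 1)**2)  # ground state dominates: P ~ 0.9986
P3 = float(c_n.subs(n, 3)**2)  # P(E = 9 hbar^2 pi^2 / 2 m a^2) ~ 0.0014
```

So the most likely measurement outcome is $E_1$, with probability $960/\pi^6 \approx 0.9986$, and the $n=3$ probability is about $0.0014$.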
For the time evolution, since the potential is $0$ everywhere after $t=0$, it is a free particle, and the general solution is:
$$
\Psi\left(x,t\right) = \frac{1}{\sqrt{2 \pi}} \int_{-\infty}^\infty dk \ \phi\left(k\right) \exp\left[i\left(k x - \frac{\hbar k^2}{2 m} t\right)\right],
$$
where
$$
\phi\left(k\right) = \frac{1}{\sqrt{2 \pi}} \int_0^a dx \ \Psi\left(x,0\right) \exp\left(-i k x\right) = \frac{A}{\sqrt{2 \pi}} \int_0^a dx \ x\left(a-x\right) \exp\left(-i k x\right) .
$$
So, now you just have to do this integral. | {
"domain": "physics.stackexchange",
"id": 11138,
"tags": "quantum-mechanics, homework-and-exercises, energy, wavefunction, potential"
} |
What causes the expansion and implosion of the medium before an explosion? | Question: I often see in slo-mo explosions, whether underwater or in air, the medium expanding outward before imploding, at which point the flash of the explosion happens. I used to think this was a cool movie trope, but I see in many cases, like this and this, the phenomenon is very real. Even this nuke shows a kind of implosion of the air after explosion (although I'm not sure if this is the same thing). What causes this? Is the flash of light the actual "explosion"?
Answer: Background
I think it is important to recognize that these movies show the effects of shock waves, some of which result from blast waves.
As I showed at https://physics.stackexchange.com/a/271329/59023 and https://physics.stackexchange.com/a/242450/59023, the shock Mach number for a blast wave goes as:
$$
\begin{align}
M\left( t \right) & = \frac{ U_{shn}\left( t \right) }{ C_{s,up} } = \frac{ 2 \ k }{ 5 \ C_{s,up} } \left( \frac{ E_{o} }{ \rho_{up} } \right)^{1/5} \ t^{-3/5} \tag{1a} \\
& = \frac{ 2 \ k^{5/2} }{ 5 \ C_{s,up} } \sqrt{ \frac{ E_{o} }{ \rho_{up} } } \ R^{-3/2}\left( t \right) \tag{1b}
\end{align}
$$
where $t$ is time from the initial release of energy, $E_{o}$, from a point source, $\rho_{up}$ is the ambient gas mass density, $k$ is a dimensionless parameter used for scaling, $C_{s,up}$ is the upstream speed of sound (can assume this is constant, like all other upstream variables), and $R\left( t \right)$ is the radial shock position vs. time given by:
$$
R\left( t \right) = k \left( \frac{ E_{o} }{ \rho_{up} } \right)^{1/5} \ t^{2/5} \tag{2}
$$
I often see in slo-mo explosions, whether underwater or in air, the medium expanding outward before imploding...
One can see from Equations 1 and 2 that $M \propto t^{-3/5} \propto R^{-3/2}$ and will not always be greater than unity. Meaning, there will be a point when hot gas, which acts as the shock piston in some of these cases, no longer has enough energy to maintain a shock wave.
One of the main differences between a shock produced by a blast wave and one driven by some impenetrable obstacle (in both cases, the driver is called the piston) is that the former involves a relatively narrow shocked sheath region followed by a rarefaction region. The blast shock thus effectively acts like a plow, piling up mass as it moves outward from the energy source and leaving an evacuated region behind. Thus, there is an upstream ambient pressure, followed by an overpressure region (i.e., the sheath), lastly followed by the lowest pressure region of the rarefaction.
Thus, when $\left( M - 1 \right)$ drops to zero and then goes negative, the result is an implosion.
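Plugging illustrative numbers into Equation 1b shows how this plays out; the values of $E_{o}$, $\rho_{up}$, $C_{s,up}$, and $k$ below are assumptions for the sketch (roughly a 1 kt release in sea-level air), not figures from the answer:

```python
import numpy as np

# Illustrative assumptions, not values from any specific explosion:
E0 = 4.2e12   # J, roughly 1 kiloton of TNT equivalent
rho = 1.225   # kg/m^3, sea-level air mass density
Cs = 340.0    # m/s, ambient (upstream) sound speed
k = 1.0       # dimensionless Sedov scaling constant, order unity

def mach_of_R(R):
    """Shock Mach number vs. shock radius, Equation (1b)."""
    return (2 * k**2.5 / (5 * Cs)) * np.sqrt(E0 / rho) * R**-1.5

# Radius at which M = 1, i.e. where the blast shock degenerates
# into an ordinary sound wave (invert Eq. 1b for M = 1):
R_sonic = ((2 * k**2.5 / (5 * Cs)) * np.sqrt(E0 / rho)) ** (2.0 / 3.0)
```

With these numbers the shock weakens to $M = 1$ after a few hundred meters, illustrating how quickly $M \propto R^{-3/2}$ falls off and why the overpressure phase gives way to the implosion.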
at which point the flash of the explosion happens... What causes this? Is the flash of light the actual "explosion"?
[Possible but less probable] The flash of light at the peak of the implosion may be due to something called sonoluminescence, which is not fully understood. What we do know is that if one symmetrically compresses a gas to sufficient pressures, it can emit light.
[More probable] It may also be due to compressional heating occurring faster than it can conduct, convect, and/or radiate away resulting in a super hot gas that thermally emits electromagnetic radiation. This is called cavitation.
Additional Details
I wrote a detailed answer that provides background on shock waves at https://physics.stackexchange.com/a/139436/59023, https://physics.stackexchange.com/a/210097/59023, https://physics.stackexchange.com/a/306184/59023, and https://physics.stackexchange.com/a/136596/59023 that you may find helpful.
References
Whitham, G.B. (1999), Linear and Nonlinear Waves, New York, NY: John Wiley & Sons, Inc.; ISBN:0-471-35942-4. | {
"domain": "physics.stackexchange",
"id": 41356,
"tags": "explosions, shock-waves"
} |