anchor | positive | source |
|---|---|---|
The effect of a chemical reaction on compressible fluid flow | Question: Problem
I am interested in deriving an expression for how an instantaneous exothermic chemical reaction (in which the molecules comprising the fluid are converted to a mix of smaller molecules) influences the velocity of a compressible fluid flow. The fluid flows across a circular boundary, and the cross-sectional area of the flow is constant before and after the reaction.
I will have to assume the mass stays constant throughout. There will be some increase in pressure due to the exothermic nature of the reaction, which in turn affects the compressibility of the fluid, causing the velocity to increase. I obtained an expression showing this, however I am not convinced I have the full story.
Attempt
Using the relation $V_m=\frac{M}{\rho}$, we obtain a relationship between the density of the fluid and its molar mass, which will have decreased after the chemical reaction. Now, using the continuity of the fluid we have
$$\frac{M_1}{V_{m_1}}(\pi r_1^2)v_1=\frac{M_2}{V_{m_2}}(\pi r_2^2)v_2$$
From here, since the two areas must be equal for the cross-sectional area to stay constant, we have:
$$\frac{M_1}{M_2}\frac{V_{m_2}}{V_{m_1}}v_1=v_2$$
I am sure that there must be some formula relating the temperature change to the increase in molecules or something along those lines, however I have absolutely no experience with chemistry at all. I could assume the fluid to follow the ideal gas law and get an expression with temperature but this of course will affect the fidelity of the derivation, I am not sure whether it is a valid assumption.
If someone would be able to point me in the right direction or advise that would be great!
Answer: The problem can be addressed using macroscopic thermodynamics without referring to the molecular picture of the fluid. We however need to know something about the chemical composition of the fluid after the reaction. I suppose that not all of your fluid undergoes the chemical reaction, so the resulting fluid is going to be multi-component. Below, I sketch a solution of the problem under the assumption that all the components move with the same macroscopic velocity, so one can treat the fluid after the reaction as a single-component matter with some average thermodynamic properties. A more detailed solution would depend on the exact nature of your fluid, e.g. on whether the ideal-gas approximation is appropriate.
The state of the fluid is completely described by its temperature $T$, pressure $P$, density $\rho$ and velocity $v$. I assume, as you indicated, that the geometry of the problem is one-dimensional so that the velocity can be treated as a scalar. I will use indices 1 and 2 to refer to the fluid before and after the reaction. There is one relation between $T$, $P$ and $\rho$ supplied by the equation of state, so we need three additional relations to fully determine the unknown values of $T_2$, $P_2$, $\rho_2$ and $v_2$.
Mass conservation. Given that the cross-section area of the flow does not change, this amounts to the simple condition
$$
\rho_1v_1=\rho_2v_2,
$$
equivalent to your equation.
Momentum balance. The flow of momentum per unit area and per second in a fluid of velocity $v$ is $\rho v^2$, which is just the mass flux times velocity. Newton's second law then requires that
$$
P_1-P_2=\rho_2v_2^2-\rho_1v_1^2.
$$
Energy balance. The above two constraints are purely mechanical. Here is the only place where we need some input about the chemical reaction in the system. I will model the reaction as an instantaneous supply of heat per kilogram equal to $q$. Then the energy balance condition can be expressed in terms of increase of specific enthalpy $h$ (enthalpy per kilogram) plus specific kinetic energy (which is just $\frac12v^2$),
$$
h_2+\frac12v_2^2=h_1+\frac12v_1^2+q.
$$
The enthalpy itself is in turn determined by the temperature and pressure. For an ideal gas, it can be obtained, for instance, from the specific heat at constant pressure.
The above set of equations together with the equation of state is sufficient to determine the final state of your fluid. As said, a more concrete solution can only be given if you know the equation of state of your fluid. Hope this helps anyway.
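To make the closure of these three balance equations concrete, here is a minimal numerical sketch for the special case of an ideal gas with constant specific heat. All function and variable names, and the sample numbers, are illustrative assumptions rather than part of the original answer. Mass and momentum balance let everything be written in terms of $v_2$, and the energy balance then becomes a quadratic in $v_2$:

```python
import math

R = 8.314  # universal gas constant, J/(mol K)

def downstream_states(P1, T1, v1, M1, M2, cp2, h1, q):
    """Solve the three balance equations for an ideal gas.

    Mass:     rho1*v1 = rho2*v2 = G
    Momentum: P2 = P1 + G*(v1 - v2)
    Energy:   cp2*T2 + v2**2/2 = h1 + v1**2/2 + q
    With rho2 = G/v2 and the ideal-gas law T2 = P2*M2*v2/(R*G),
    the energy balance becomes a quadratic in v2 whose two roots
    are the subsonic and supersonic branches.
    """
    rho1 = P1 * M1 / (R * T1)         # upstream density from the EOS
    G = rho1 * v1                     # constant mass flux
    target = h1 + 0.5 * v1 ** 2 + q   # right-hand side of the energy balance
    k = cp2 * M2 / R                  # molar cp over R (> 1/2 for any real gas)
    # Quadratic: (1/2 - k)*v2**2 + k*(P1 + G*v1)/G * v2 - target = 0
    a = 0.5 - k
    b = k * (P1 + G * v1) / G
    c = -target
    d = math.sqrt(b * b - 4 * a * c)  # real for physically sensible inputs
    return sorted([(-b + d) / (2 * a), (-b - d) / (2 * a)])

# Sanity check: with q = 0 and an unchanged gas, v2 = v1 must be a root.
cp, T1, v1 = 1005.0, 300.0, 100.0            # J/(kg K), K, m/s
roots = downstream_states(101325.0, T1, v1, 0.028, 0.028, cp, cp * T1, 0.0)
```

With $q>0$ (and/or a smaller downstream molar mass $M_2$), the smaller, subsonic root moves above $v_1$, reproducing the velocity increase anticipated in the question.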
EDIT. In classical thermodynamics, thermodynamic potentials such as enthalpy are only defined up to an arbitrary additive constant, which can be chosen independently for different materials. In order that the comparison of $h_1$ and $h_2$ makes sense, we therefore have to fix the reference values of the enthalpy of the fluid before and after the reaction appropriately. In my answer, I assume that the specific enthalpies of these two different fluids are equal at temperature $T_1$ and pressure $P_1$. | {
"domain": "physics.stackexchange",
"id": 85130,
"tags": "thermodynamics, fluid-dynamics, flow"
} |
Equilibrium between a system and a heat reservoir | Question: In Pathria's Statistical Mechanics, Section 3.1, the expression for the probability, $P_r$ of finding a system characterized by the energy value $E_R$ in a reservoir is derived. The derivation goes as follows:
We consider the given system $A$, immersed in a very large heat reservoir $A'$ [...] On attaining a state of mutual equilibrium, the system and the reservoir would have a common temperature, say $T$. Their energies, however, would be variable and, in principle, could have, at any time $t$, values lying anywhere between $0$ and $E^{(0)},$ where $E^{(0)}$ denotes the energy of the composite system [...]
$$E_r + E'_r=E^{(0)}=\rm const.$$
... Let the number of these states be denoted by $\Omega'(E'_r)$ [...]
$$P_r \propto \Omega'(E'_r) \equiv \Omega'(E^{(0)} -E_r).$$
[...] we may carry out an expansion [...] around $E_r =0.$ However, for reasons of convergence, it is essential to effect the expansion of the logarithm instead:
$$\ln \Omega'(E'_r) = \ln \Omega'(E^{(0)}) + \left(\frac{\partial \ln \Omega'}{\partial E'} \right)_{E'=E^{(0)}}(E'_r - E^{(0)}) + ...$$
$$\approx const - \beta'E_r,$$
[...] in equilibrium, $\beta' = \beta = 1/kT$.
[...]
$$P_r \propto \exp(-\beta E_r)$$
I have a few questions here:
For many systems, is it not the case that the temperature can be expressed as a function of the energy? An ideal gas is one such example. In that case, supposing I had an ideal gas inside a reservoir, which was free to exchange energy, but held at fixed particle number and volume, how could it be possible for it to be at a common temperature, but still take on any energy value?
Why is it essential to effect the expansion of the logarithm instead? What is the justification for this?
(Edit:) Why is the relevant probability taken to be
$$P_r \propto \Omega' (E'_r)$$
rather than
$$P_r \propto \Omega' (E'_r) \cdot \Omega(E_r)?$$
Thanks for any help.
Answer:
For many systems, is it not the case that the temperature can be
expressed as a function of the energy? An ideal gas is one such
example.
What you're referring to is the relationship between the internal energy and temperature. The internal energy is a thermodynamic quantity of the system, calculated as an ensemble average in statistical mechanics. What Pathria refers to here is not the ensemble-average energy; it is simply the energy of the system at any instant in time. The ensemble-averaged energy (or internal energy) would be equivalent to the time-averaged energy, according to the ergodic hypothesis. Even in the case of an ideal gas, it is the internal energy that is related to the temperature as $U \propto k_B T$, not the instantaneous energy.
In that case, supposing I had an ideal gas inside a reservoir, which
was free to exchange energy, but held at fixed particle number and
volume, how could it be possible for it to be at a common temperature,
but still take on any energy value?
This is precisely the framework of the canonical ensemble in statistical mechanics. The system is free to exchange energy, and as a result thermal equilibrium at a common temperature is attained between the reservoir and the system. To understand this physically, imagine that the system has energy $U$ on average, but that its energy changes instantaneously, fluctuating around the value $U$. While the energy fluctuates around $U$, the temperature $T$ of the system and reservoir remains constant, and the temperature is directly related to the average energy $U$.
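This picture is easy to reproduce numerically. Below is a small sketch (not part of the original answer; all names and numbers are illustrative) that samples the instantaneous energy of $N$ independent two-level systems held at a fixed temperature: individual snapshots fluctuate, while their average sits at the canonical value $N p_\uparrow$.

```python
import math
import random

random.seed(0)
N, beta = 1000, 1.0  # N two-level systems with level energies 0 and 1

# Canonical occupation probability of the upper level at inverse temperature beta.
p_up = math.exp(-beta) / (1 + math.exp(-beta))

# Each snapshot is one instantaneous energy: the number of excited systems.
snapshots = [sum(1 for _ in range(N) if random.random() < p_up)
             for _ in range(200)]

mean_E = sum(snapshots) / len(snapshots)
# Individual snapshots differ (the energy fluctuates), yet the mean
# tracks the canonical internal energy N * p_up at fixed temperature.
```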
Why is it essential to effect the expansion of the logarithm instead? What is the justification for this?
$\Omega(E)$ scales as the density of states of the system, which grows as $E^N$, where $N$ is the number of particles in the system. Realistically speaking, this is an extremely sharply rising function, since $N \sim 10^{23}$. A direct Taylor expansion of such a function fails to converge over any useful range of $E$, whereas its logarithm, $\ln \Omega \sim N \ln E$, varies slowly, so expanding the logarithm gives a much better approximation. | {
"domain": "physics.stackexchange",
"id": 92450,
"tags": "statistical-mechanics"
} |
Big O notation simplification from sum | Question: On calculating the convexity of an optimization problem, I am getting a term $O(\sqrt{n+m}(n)^3)$. Here both $m$ and $n$ are parameters. Is there any way I can simplify this term to write it as a product?
I know that if the time complexity $T(n,m)=O(\sqrt{n+m}(n)^3)$, then $\exists C,M $ such that $\vert T(n,m) \vert \leq C\vert \sqrt{n+m}\,n^3 \vert$ whenever $n \geq M$ or $m \geq M$.
I need the simplification for the interpretation of the algorithm. (Like, if time complexity is $O(n)$, then if $n$ is multiplied by 10, then the time complexity is also multiplied by 10. But it seems such an interpretation is not possible in the above case, as $m$ and $n$ are independent terms in summation).
Answer: Presumably you are after a substitution like $$\sqrt{n+m}\,n^3\le f(n)\,g(m),$$ or equivalently
$$n+m\le p(n)\,q(m).$$
If you freeze $m$, then $p(n)=\Omega(n)$ must hold (and similarly $q(m)=\Omega(m)$). So I see no better solution than
$$\sqrt{n+m}\,n^3\le\sqrt{nm}\,n^3= n^{7/2}m^{1/2}.$$ | {
"domain": "cs.stackexchange",
"id": 20175,
"tags": "complexity-theory, time-complexity"
} |
How does Ostwald-Walker method work? | Question: From searching online I found this:
The apparatus consists of two sets of bulbs. The first set of three bulbs is filled with solution to half of their capacity and second set of another three bulbs is filled with the pure solvent. Each set is separately weighed accurately. Both sets are connected to each other and then with the accurately weighed set of guard tubes filled with anhydrous calcium chloride or some other dehydrating agents like $\displaystyle P_2O_5$, conc. $\displaystyle H_2SO_4$ etc. The bulbs of solution and pure solvent are kept in a thermostat maintained at a constant temperature.
A current of pure dry air is bubbled through. The air gets saturated with the vapours in each set of bulbs. The air takes up an amount of vapours proportional to the vapour pressure of the solution first and then it takes up more amount of vapours from the solvent which is proportional to the difference in the vapour pressure of the solvent and the vapour pressure of solution, i.e. $\displaystyle p_0 – p_s$. The two sets of bulbs are weighed again. The guard tubes are also weighed.
Loss in mass in the solution bulbs $\displaystyle \propto p_s $
Loss in mass in the solvent bulbs $\displaystyle \propto (p_0 – p_s) $
Total loss in both sets of bulbs $\displaystyle \propto [p_s +(p_0 – p_s)] \propto p_0 $
Total loss in mass of both sets of bulbs is equal to gain in mass of guard tubes.
Thus, $\displaystyle \frac{p_0-p_s}{p_0} = \frac{\text{Loss in mass in solvent bulbs}}{\text{Total loss in mass in both sets of bulbs}}
= \frac{\text{Loss in mass in solvent bulbs}}{\text{Gain in mass of guard tubes}}$
Further we know from Raoult’s law
$\displaystyle \frac{p_0-p_s}{p_0} = \frac{\frac{w_A}{m_A}}{\frac{w_A}{m_A} + \frac{w_B}{m_B}}$
The above relationship is used for calculation of molecular masses of non-volatile solutes.
For very dilute solutions, the following relationship can be applied.
$\displaystyle \frac{p_0-p_s}{p_0} = \frac{\text{Loss in mass of solvent bulbs}}{\text{Gain in mass of guard tubes}}= \frac{w_Am_B}{w_Bm_A}$
My questions:
Why "The air takes up an amount of vapours proportional to the vapour pressure of the solution first and then it takes up more amount of vapours from the solvent which is proportional to the difference in the vapour pressure of the solvent and the vapour pressure of solution"?
How can we add $\propto p_s$ and $\propto p_0-p_s$ when we don't know what the proportionality constants are or even if they are equal?
It would be better if you could explain the process in detail because I'm not getting it properly from here. For your information, the text after "From Raoult's Law" and about "the setup" is perfectly clear to me.
Answer: How are the vapors transported?
The additional question from the comments:
Why [is] the amt. of vapours taken up proportional to vapour press. Why directly proportional? Why not nothing else?
First things first, let's get a definition (Wikipedia, italics added):
Vapor pressure [...] is defined as the pressure exerted by a vapor in thermodynamic equilibrium with its condensed phases (solid or liquid) at a given temperature in a closed system.
So basically, before you start the experiment (see below) the vapors have already formed, meaning that thermodynamic equilibrium has been achieved. Then you start bubbling through dry gas, which mixes with the vapor gas already in the bubbles. If you set the gas flow appropriately, you get a near-equilibrium situation (see the graphic below), and the gas that exits the bulb assembly has exactly the same amount of vapor in it as the vapor "layer" (which cannot be thought of as a layer anymore because of the bubbling) above the liquid surface.
We do not have an equilibrium situation, which is why so much care has to be taken that we get as close as possible to the equilibrium situation. (Also, we do not have a closed system, so everything starts to get slightly problematic at this point. But to a first degree of approximation, this will hold up fine.)
This is also why it is directly proportional to the vapor pressure: There is no other mechanism that moves solvent molecules from the condensed phase to the vapor phase. Of course you could add a supersonic vibrating membrane and force more solvent molecules into a vapor phase, but that is not what you want to do, because you want to quantify the relative lowering of the vapor pressure.
Is relative vapor pressure lowering a thing?
Why "The air takes up an amount of vapours proportional to the vapour pressure of the solution first and then it takes up more amount of vapours from the solvent which is proportional to the difference in the vapour pressure of the solvent and the vapour pressure of solution"?
I guess the question you really want to ask is: Why is the vapor pressure of a solution lower than the vapor pressure of a pure substance?
The answer is not simple, but the explanation of the vapor pressure lowering effect can be given by assuming a simple model: Imagine that in the solution, there are two types of molecules; the solvent and the solute. In the pure solvent, there is only one kind. Statistically speaking, that means that there is also a smaller amount of solvent molecules at the boundary between the bulk phase and the vapor. Since solute molecules don't go over into the vapor, there is a smaller area of solvent molecules that can go over into the vapor, which means that the vapor pressure of the solution is smaller when compared to the pure solvent.
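The mole-fraction picture above is exactly what Raoult's law formalizes: the solution's vapor pressure is the pure-solvent value scaled by the solvent's mole fraction. A tiny numerical sketch (the numbers are illustrative assumptions, not from the question):

```python
# Raoult's law: p_s = x_solvent * p_0, so the relative lowering
# (p_0 - p_s)/p_0 equals the solute mole fraction.
p0 = 23.8                        # mmHg, roughly pure water at 25 C (assumed value)
n_solvent, n_solute = 55.5, 1.0  # mol (about 1 L of water plus 1 mol of solute)

x_solvent = n_solvent / (n_solvent + n_solute)
p_s = x_solvent * p0             # lowered vapor pressure of the solution
rel_lowering = (p0 - p_s) / p0   # comes out as n_solute / (n_solvent + n_solute)
```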
What about the proportionality constant?
How can we add $\propto p_s$ and $\propto p_0−p_s$ when we don't know what the proportionality constants are or even if they are equal?
While we don't know the magnitude of the proportionality constants, we know that they are equal. The proportionality constant is most likely dependent on ambient pressure, temperature and flow rate of the gas. These three variables are being controlled during the experiment and kept constant for all glass bulbs. (Well, I don't know about keeping ambient pressure constant. But if you perform the experiment fast enough, that shouldn't pose much of a problem.)
The Ostwald-Walker Process
It would be better if you could explain in detail the process because I'm not getting it properly from here.
Why do we do this?
First, we have to remember: Why are we doing this process; what do we want to find out? The answer is: We want to quantify the relative lowering of the vapor pressure.
How do we do this?
Here's what happens during the process:
Dry gas arrives at the first set of bulbs, filled with a solution.
The dry gas gets saturated with solvent vapor.
The solvent-vapor-saturated gas arrives at the second set of bulbs which contain the pure solvent.
Because of the vapor pressure lowering effect (here now applied in reverse) the vapor pressure of the pure solvent is higher. As such, the gas that enters is not saturated. When it exits the second set of bulbs, it is saturated again.
The pure-solvent-saturated gas enters the desiccator and loses all solvent vapor to it.
Dry gas exits the apparatus.
This might still be hard to understand, so here is a graphic that shows you what happens to the solvent vapor partial pressure in the gas:
At time 1 the gas enters the first bulb. It exits the first cluster of bulbs at about time 4. Then it travels to the next cluster of bulbs (containing the pure solvent) and loads up there, up until time 8. At time 9 the gas enters the desiccator where it loses all the solvent vapor and exits dry at time 12.
Since we weigh all components, we know exactly how much mass was lost from the solution and the solvent, and how much mass the desiccant gained (the total mass loss and the desiccant's gain should, of course, be equal):
$$ \Delta m_\text{solution} + \Delta m_\text{solvent} = \Delta m_\text{desicc.} $$
The following proportionalities hold:
$ \Delta m_\text{solution} \propto p_s $
$ \Delta m_\text{solvent} \propto (p - p_s) $
$ \Delta m_\text{desicc.} \propto p $
The relative lowering of the vapor pressure can now be calculated; it is simply
$$ \frac{p-p_s}{p} = 1 - \frac{p_s}{p}$$
If you know the vapor pressure of the pure solvent, you can simply plug it in the equation, get the proportionality constant and from that calculate the vapor pressure of the solution.
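Putting the bookkeeping together, here is a small worked sketch of the calculation. All masses are invented for illustration; combined with the dilute-solution form of Raoult's law quoted in the question, it also yields the solute's molar mass:

```python
# Hypothetical bulb weighings (grams); not from an actual experiment.
loss_solution = 2.50                            # proportional to p_s
loss_solvent = 0.05                             # proportional to p_0 - p_s
gain_desiccant = loss_solution + loss_solvent   # proportional to p_0

rel_lowering = loss_solvent / gain_desiccant    # (p_0 - p_s) / p_0

# Dilute-solution Raoult's law: (p_0 - p_s)/p_0 = (w_A/m_A) / (w_B/m_B),
# with A the solute and B the solvent, so the solute molar mass is:
w_A, w_B = 1.0, 50.0   # g of solute and solvent weighed in (assumed)
m_B = 18.0             # g/mol, taking water as the solvent
m_A = w_A * m_B / (w_B * rel_lowering)
```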
Conclusion
In conclusion, I can only say that I hope to have clarified things for you. I myself spent quite some time researching this method, as I have never heard of this question before. Thank you for the opportunity! | {
"domain": "chemistry.stackexchange",
"id": 2101,
"tags": "experimental-chemistry, vapor-pressure"
} |
Refactor IF into Chain of Responsibility Pattern | Question: I've been trying to learn different patterns of programming, and the "Chain of Responsibility" sort of eludes me. I've been told that my specific code snippet would be a good candidate for chain of responsibility, and I'm wondering if someone could show me how to get there?
Public Overrides Sub OnActionExecuting(ByVal filterContext As ActionExecutingContext)
''# Set a local variable for the HttpContext.Request. This is going to
''# be used several times in the subsequent actions, so it needs to be
''# at the top of the method.
Dim request = filterContext.HttpContext.Request
Dim url As Uri = request.Url
''# Now we get the referring page
Dim referrer As Uri = If(Not request.UrlReferrer Is Nothing, request.UrlReferrer, Nothing)
''# If the referring host name is the same as the current host name,
''# then we want to get out of here and not touch anything else. This
''# is because we've already set the appropriate domain in a previous
''# call.
If (Not referrer Is Nothing) AndAlso
(Not url.Subdomain = "") AndAlso
(Not url.Subdomain = "www") AndAlso
(referrer.Host = url.Host) Then
Return
End If
''# If we've made it this far, it's because the referring host does
''# not match the current host. This means the user came here from
''# another site or entered the address manually. We'll need to hit
''# the database a time or two in order to get all the right
''# information.
''# This is here just in case the site is running on an alternate
''# port. (especially useful on the Visual Studio built in web server
''# / debugger)
Dim newPort As String = If(url.IsDefaultPort, String.Empty, ":" + url.Port.ToString())
''# Initialize the Services that we're going to need now that we're
''# planning on hitting the database.
Dim RegionService As Domain.IRegionService = New Domain.RegionService(New Domain.RegionRepository)
''# Right now we're getting the requested region from the URI. This
''# is when a user requests something like
''# http://calgary.example.com, whereby we extract "calgary" out of
''# the address.
Dim region As Domain.Region = RegionService.GetRegionByName(url.Subdomain)
''# If the RegionService returned a region from it's query, then we
''# want to exit the method and allow the user to continue on using
''# this region.
If Not region Is Nothing Then
Return
End If
''# If we've made it this far, it means that the user either entered
''# an Invalid Region (yes, we already know the region is invalid) or
''# used www. or nothing as a subdomain. Up until this point, we
''# haven't cared if the user is authenticated or not, nor have we
''# cared what the full address in their address bar is. Now we're
''# probably going to start redirecting them somewhere.
''# First off we need to check if they're authenticated users. If they
''# are, we'll just send them on over to their default region.
If filterContext.HttpContext.User.Identity.IsAuthenticated Then
Dim userService As New Domain.UserService(New Domain.UserRepository)
Dim userRegion = userService.GetUserByID(AuthenticationHelper.RetrieveAuthUser.ID).Region.Name
filterContext.HttpContext.Response.Redirect(url.Scheme + "://" + userRegion + "." + url.PrimaryDomain + newPort + request.RawUrl)
End If
''# Now we know that the user is not Authenticated. So here we check for
''# www. If the host has www in it, then we just strip the www and
''# bounce the user to the original request.
If request.Url.Host.StartsWith("www") Then
Dim newUrl As String = url.Scheme + "://" + url.Host.Replace("www.", "") + newPort + request.RawUrl
''# The redirect is permanent because we NEVER want to see www in the domain.
filterContext.HttpContext.Response.RedirectPermanent(newUrl)
''# It's ok for an annonymous browser to view the "Details" of an
''# Event/User/Badge/Tag without being assigned to a regions. So
''# this is why we strip the www but don't redirect the visitor
''# directly over to the "Choose Your Region" view.
End If
''# If we've gone this far, we know the region is invalid, and the
''# user needs to be directed to a "choose your region" page. We're
''# not going to do the redirecting here because we want to allow for
''# browsing to specific Users/Tags/Badges/Events that are Region
''# Agnostic. But if a user tries to view an event listing of any
''# sort, we're going to fire them over to the "Choose Your Region"
''# page via a separate Attribute attached to only the Actions that
''# require it.
End Sub
Here's an uncommented C# version:
public override void OnActionExecuting(ActionExecutingContext filterContext)
{
var request = filterContext.HttpContext.Request;
Uri url = request.Url;
Uri referrer = (request.UrlReferrer != null) ? request.UrlReferrer : null;
if (((referrer != null)) && (!string.IsNullOrEmpty(url.Subdomain)) && (!(url.Subdomain == "www")) && (referrer.Host == url.Host)) {
return;
}
string newPort = url.IsDefaultPort ? string.Empty : ":" + url.Port.ToString();
Domain.IRegionService RegionService = new Domain.RegionService(new Domain.RegionRepository());
Domain.Region region = RegionService.GetRegionByName(url.Subdomain);
if ((region != null)) {
return;
}
if (filterContext.HttpContext.User.Identity.IsAuthenticated) {
Domain.UserService userService = new Domain.UserService(new Domain.UserRepository());
dynamic userRegion = userService.GetUserByID(AuthenticationHelper.RetrieveAuthUser.ID).Region.Name;
filterContext.HttpContext.Response.Redirect(url.Scheme + "://" + userRegion + "." + url.PrimaryDomain + newPort + request.RawUrl);
}
if (request.Url.Host.StartsWith("www")) {
string newUrl = url.Scheme + "://" + url.Host.Replace("www.", "") + newPort + request.RawUrl;
//'# The redirect is permanent because we NEVER want to see www in the domain.
filterContext.HttpContext.Response.RedirectPermanent(newUrl);
}
}
Answer: This method is not a good candidate for the Chain of Responsibility pattern, but it can definitely be implemented using it (just for educational purposes):
public override void OnActionExecuting(ActionExecutingContext filterContext)
{
//Init
var referrerRequestHandler = new ReferrerRequestHandler();
var regionRequestHandler = new RegionRequestHandler();
var authenticatedRequestHandler = new AuthenticatedRequestHandler(filterContext);
var wwwRequestHandler = new WwwRequestHandler(filterContext);
referrerRequestHandler.SetNextHandler(regionRequestHandler);
regionRequestHandler.SetNextHandler(authenticatedRequestHandler);
authenticatedRequestHandler.SetNextHandler(wwwRequestHandler);
//Run
var request = filterContext.HttpContext.Request;
referrerRequestHandler.Redirect(request);
}
public abstract class RequestHandler
{
public void SetNextHandler(RequestHandler nextHandler)
{
_nextHandler = nextHandler;
}
public void Redirect(HttpRequestBase request)
{
bool handeled = HandleRedirect(request);
if (!handeled)
{
if (_nextHandler != null)
{
_nextHandler.Redirect(request);
}
}
}
protected abstract bool HandleRedirect(HttpRequestBase request);
private RequestHandler _nextHandler;
}
public class ReferrerRequestHandler : RequestHandler
{
protected override bool HandleRedirect(HttpRequestBase request)
{
Uri url = request.Url;
Uri referrer = (request.UrlReferrer != null) ? request.UrlReferrer : null;
if (((referrer != null)) && (!string.IsNullOrEmpty(url.Subdomain)) && (!(url.Subdomain == "www")) && (referrer.Host == url.Host))
{
return true;
}
else
{
return false;
}
}
}
public class RegionRequestHandler: RequestHandler
{
protected override bool HandleRedirect(HttpRequestBase request)
{
Uri url = request.Url;
Domain.IRegionService RegionService = new Domain.RegionService(new Domain.RegionRepository());
Domain.Region region = RegionService.GetRegionByName(url.Subdomain);
if ((region != null))
{
return true;
}
else
{
return false;
}
}
}
public class AuthenticatedRequestHandler: RequestHandler
{
public AuthenticatedRequestHandler(ActionExecutingContext filterContext)
{
_filterContext = filterContext;
}
protected override bool HandleRedirect(HttpRequestBase request)
{
Uri url = request.Url;
string newPort = url.IsDefaultPort ? string.Empty : ":" + url.Port.ToString();
if (_filterContext.HttpContext.User.Identity.IsAuthenticated)
{
Domain.UserService userService = new Domain.UserService(new Domain.UserRepository());
dynamic userRegion = userService.GetUserByID(AuthenticationHelper.RetrieveAuthUser.ID).Region.Name;
_filterContext.HttpContext.Response.Redirect(url.Scheme + "://" + userRegion + "." + url.PrimaryDomain + newPort + request.RawUrl);
return true;
}
else
{
return false;
}
}
private readonly ActionExecutingContext _filterContext;
}
public class WwwRequestHandler : RequestHandler
{
public WwwRequestHandler(ActionExecutingContext filterContext)
{
_filterContext = filterContext;
}
protected override bool HandleRedirect(HttpRequestBase request)
{
Uri url = request.Url;
if (request.Url.Host.StartsWith("www"))
{
string newPort = url.IsDefaultPort ? string.Empty : ":" + url.Port.ToString();
string newUrl = url.Scheme + "://" + url.Host.Replace("www.", "") + newPort + request.RawUrl;
//'# The redirect is permanent because we NEVER want to see www in the domain.
_filterContext.HttpContext.Response.RedirectPermanent(newUrl);
return true;
}
else
{
return false;
}
}
private readonly ActionExecutingContext _filterContext;
} | {
"domain": "codereview.stackexchange",
"id": 603,
"tags": "c#, design-patterns, asp.net, vb.net"
} |
Count digits in a given number using recursion | Question: Here is my code that finds the number of digits in a given integer (either positive or negative). The code works and results in expected output.
'''
Program that returns number of digits in a given integer (either positive or negative)
'''
def ndigits(x):
# Assume for the input 0 the output is 0
if(x == 0):
return 0
if(abs(x) / 10 == 0):
return 1
else:
return 1 + ndigits(x / 10)
Answer: pro:
code is clean and easy to understand
con:
a recursive solution may not be optimized to a loop by your interpreter so there might be a lot of memory (stack) waste because of the recursion overhead. (So you could implement it as a loop instead of a recursion)
You do not need the if(abs(x) / 10 == 0) branch, so you can remove it to simplify your code. (Note that recursing on x / 10 without taking abs first also miscounts some negative numbers under Python 2's floor division, e.g. -99 / 10 == -10, so the simplified versions below divide abs(x) instead.)
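The loop alternative mentioned in the first point might look like this (a sketch; floor division // is used so the behavior is identical on Python 2 and 3):

```python
def ndigits_loop(x):
    """Iterative digit count: same contract as ndigits (input 0 -> 0)."""
    x = abs(x)          # handle negative inputs up front
    count = 0
    while x > 0:
        count += 1
        x //= 10        # drop the last digit; // is floor division on both Python 2 and 3
    return count
```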
Simplified recursive code:
def ndigits(x):
# Assume for the input 0 the output is 0
if(x == 0):
return 0
else:
return 1 + ndigits(abs(x) / 10)
Simplified tail-recursive code:
Tail-recursive methods can be detected by some language implementations and transformed into a loop, so this code might be faster and might not waste stack memory. (CPython, however, does not perform tail-call optimization.)
For more information see wikipedia Tail call
def ndigits(x):
return ndigits_(0, abs(x))
def ndigits_(s,x):
if(x == 0):
return s
else:
return ndigits_(s+1, x / 10)
"domain": "codereview.stackexchange",
"id": 20730,
"tags": "python, python-2.x, recursion"
} |
Why do babies urinate when they hear "SHHH.." sound? | Question: I have noticed it many times in the region where I live. I don't know if it's common worldwide, but I think someone else would have noticed it. So why does this happen?
Answer: In general, babies do not naturally urinate when hearing that sound.
However, there is a technique of instilling a Pavlovian response in your baby to urinate when hearing that sound. You start by holding the baby diaper-free, and making the sound when they urinate, so that they make that association between the sound and the sensation of urination. After lots of reinforcement, you can get the baby to urinate when it hears the sound, so babies are potty-trained earlier than they otherwise would be.
It's part of a process called "elimination communication"
https://en.wikipedia.org/wiki/Elimination_communication
It's less common in Western countries, probably because most people prefer to, and have the means to, either buy lots of disposable diapers or easily wash their cloth ones over and over again.
"domain": "biology.stackexchange",
"id": 11582,
"tags": "human-biology"
} |
What is the voltage at the ground electrode of a spark plug? | Question: Since the voltage required to bridge the gap between the central electrode and the ground electrode of a spark plug can reach 40 000 V and even more, my question is: how does the battery not die, since the spark plug is grounded to it? Is all that energy lost in the ionization of the gas, and the voltage thus reduced to ~12 V?
Answer:
All current flows in a closed loop circuit.
The ignition coil primary current flows from the battery through the coil and returns to the battery through the chassis circuit.
The ignition coil secondary current flows from the ignition HT terminal through the spark plug, across the gap and returns to the ignition coil through the chassis circuit. | {
"domain": "engineering.stackexchange",
"id": 2955,
"tags": "electrical-engineering, battery, car"
} |
Unable to install freenect in ROS Melodic/Ubuntu 18.04 | Question:
I wanted to install freenect in ROS to take depth images using Kinect XBOX 360 1473. I cloned the repository from https://github.com/ros-drivers/freenect_stack and kept it in ~/catkin_ws/src under the folder name freenect_stack. Upon running catkin_make in ~/catkin_ws I get the following error:
[ 21%] Building CXX object freenect_stack/freenect_camera/CMakeFiles/freenect_nodelet.dir/src/nodelets/driver.cpp.o
In file included from /home/athul/catkin_ws/src/freenect_stack/freenect_camera/src/nodelets/driver.h:54:0,
from /home/athul/catkin_ws/src/freenect_stack/freenect_camera/src/nodelets/driver.cpp:39:
/home/athul/catkin_ws/src/freenect_stack/freenect_camera/include/freenect_camera/freenect_driver.hpp:4:10: fatal error: libfreenect/libfreenect.h: No such file or directory
#include <libfreenect/libfreenect.h>
^~~~~~~~~~~~~~~~~~~~~~~~~~~
compilation terminated.
freenect_stack/freenect_camera/CMakeFiles/freenect_nodelet.dir/build.make:62: recipe for target 'freenect_stack/freenect_camera/CMakeFiles/freenect_nodelet.dir/src/nodelets/driver.cpp.o' failed
make[2]: *** [freenect_stack/freenect_camera/CMakeFiles/freenect_nodelet.dir/src/nodelets/driver.cpp.o] Error 1
CMakeFiles/Makefile2:2697: recipe for target 'freenect_stack/freenect_camera/CMakeFiles/freenect_nodelet.dir/all' failed
make[1]: *** [freenect_stack/freenect_camera/CMakeFiles/freenect_nodelet.dir/all] Error 2
Makefile:140: recipe for target 'all' failed
make: *** [all] Error 2
Invoking "make -j1 -l1" failed
Please help. Any input is welcome.
Originally posted by athul on ROS Answers with karma: 11 on 2019-01-27
Post score: 1
Answer:
Is there a reason that you can't install the binary?
Anyways, try installing the dependencies before compiling. From the root of your workspace, run
~/catkin_ws$ rosdep install --from-paths src -i
~/catkin_ws$ catkin_make
Originally posted by jayess with karma: 6155 on 2019-01-27
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by gvdhoorn on 2019-01-28:
Is there a reason that you can't install the binary?
afaict, freenect_stack hasn't been released for Melodic.
Comment by jayess on 2019-01-28:
@gvdhoorn Thanks. I figured as much, wasn't sure though
Comment by athul on 2019-01-28:
I guess libfreenect does not work with melodic :(
The error I got when I ran rosdep:
ERROR: the following packages/stacks could not have their rosdep keys resolved
to system dependencies:
freenect_camera: Cannot locate rosdep definition for [libfreenect]
@gvdhoorn @jayess Thanks
Comment by wesliao on 2019-02-05:
I'm having the same issue. I have libfreenect and libfreenect-dev installed, but freenect_stack won't build.
Comment by ChriMo on 2019-02-13:
same problem :-(
Comment by WyattAutomation on 2019-07-22:
see my answer posted in this thread; I was able to get it installed and preliminary tests suggest it works fine.
Comment by 130s on 2020-09-20:
Release was tracked at https://github.com/ros-drivers/freenect_stack/issues/48, which is now closed with a release work done. So I think this can now be an "accept" answer. | {
"domain": "robotics.stackexchange",
"id": 32348,
"tags": "ros, ros-melodic, ros-kinetic, ubuntu, freenect"
} |
Sudoku Puzzle Generator | Question: I've written a Sudoku puzzle generator. It currently runs through each line of the 9x9 grid and places numbers randomly if they're valid. It loops over all the numbers from 1-9 and then if it finds itself trapped in a corner with no valid answers it throws out the whole board and restarts.
I tested it by making 1000 puzzles and it took just under 100 seconds, so a single puzzle takes about 0.1 seconds. In practical terms that's almost negligible, but it still seems wasteful to ditch the processing up to that point (as it takes, on average, hundreds of attempts to find a valid puzzle). Maybe I'm being impractical in wanting a more intelligent solution, so I thought I'd ask how it looks to people on here and whether anyone has suggestions for improving it.
import random
numbers = [1,2,3,4,5,6,7,8,9]
def makeBoard():
board = None
while board is None:
board = attemptBoard()
return board
def attemptBoard():
board = [[None for _ in range(9)] for _ in range(9)]
for i in range(9):
for j in range(9):
checking = numbers[:]
random.shuffle(checking)
x = -1
loopStart = 0
while board[i][j] is None:
x += 1
if x == 9:
#No number is valid in this cell, start over
return None
checkMe = [checking[x],True]
if checkMe in board[i]:
#If it's already in this row
continue
checkis = False
for checkRow in board:
if checkRow[j] == checkMe:
#If it's already in this column
checkis = True
if checkis: continue
#Check if the number is elsewhere in this 3x3 grid based on where this is in the 3x3 grid
if i % 3 == 1:
if j % 3 == 0 and checkMe in (board[i-1][j+1],board[i-1][j+2]): continue
elif j % 3 == 1 and checkMe in (board[i-1][j-1],board[i-1][j+1]): continue
elif j % 3 == 2 and checkMe in (board[i-1][j-1],board[i-1][j-2]): continue
elif i % 3 == 2:
if j % 3 == 0 and checkMe in (board[i-1][j+1],board[i-1][j+2],board[i-2][j+1],board[i-2][j+2]): continue
elif j % 3 == 1 and checkMe in (board[i-1][j-1],board[i-1][j+1],board[i-2][j-1],board[i-2][j+1]): continue
elif j % 3 == 2 and checkMe in (board[i-1][j-1],board[i-1][j-2],board[i-2][j-1],board[i-2][j-2]): continue
#If we've reached here, the number is valid.
board[i][j] = checkMe
return board
def printBoard(board):
spacer = "++---+---+---++---+---+---++---+---+---++"
print (spacer.replace('-','='))
for i,line in enumerate(board):
print ("|| {} | {} | {} || {} | {} | {} || {} | {} | {} ||".format(
line[0][0] if line[0][1] else ' ',
line[1][0] if line[1][1] else ' ',
line[2][0] if line[2][1] else ' ',
line[3][0] if line[3][1] else ' ',
line[4][0] if line[4][1] else ' ',
line[5][0] if line[5][1] else ' ',
line[6][0] if line[6][1] else ' ',
line[7][0] if line[7][1] else ' ',
line[8][0] if line[8][1] else ' ',))
if (i+1) % 3 == 0: print(spacer.replace('-','='))
else: print(spacer)
Answer: 1. Review
There are no docstrings. What do these functions do?
The Python style guide says, "limit all lines to a maximum of 79 characters." If the code followed this recommendation, then we wouldn't have to scroll it horizontally to read it here.
The board is not represented consistently. Looking at printBoard, it seems that each cell is represented by a list [a, b] where b is False if the cell is empty, and True if it contains the number a. But the initialization of the board in attemptBoard looks like this:
board = [[None for _ in range(9)] for _ in range(9)]
which represents empty cells as None, so that if I try to print this board, I get:
TypeError: 'NoneType' object is not subscriptable
I would recommend using a consistent board representation. In this case I think it makes more sense to use None for an empty cell and a number for a full cell (rather than a list). That's because (i) None and small numbers don't need any memory allocation, whereas a list needs to be allocated; (ii) testing a None or a number is quicker than testing a list.
In printBoard you have very repetitive code:
print ("|| {} | {} | {} || {} | {} | {} || {} | {} | {} ||".format(
line[0][0] if line[0][1] else ' ',
line[1][0] if line[1][1] else ' ',
line[2][0] if line[2][1] else ' ',
line[3][0] if line[3][1] else ' ',
line[4][0] if line[4][1] else ' ',
line[5][0] if line[5][1] else ' ',
line[6][0] if line[6][1] else ' ',
line[7][0] if line[7][1] else ' ',
line[8][0] if line[8][1] else ' ',))
This can be rewritten using a loop:
print("|| {} | {} | {} || {} | {} | {} || {} | {} | {} ||"
.format(*(number if full else ' ' for number, full in line)))
or, after simplifying the board representation as recommended above:
print("|| {} | {} | {} || {} | {} | {} || {} | {} | {} ||"
.format(*(cell or ' ' for cell in line)))
The nested loops:
for i in range(9):
for j in range(9):
can be combined into one using itertools.product:
for i, j in itertools.product(range(9), repeat=2):
The variable loopStart is never used.
Instead of this complex while loop:
x = -1
while board[i][j] is None:
x += 1
if x == 9:
#No number is valid in this cell, start over
return None
checkMe = [checking[x],True]
# ... loop body here ...
#If we've reached here, the number is valid.
board[i][j] = checkMe
write a for loop with an else:
for x in checking:
# ... loop body here ...
# If we've reached here, the number is valid.
board[i][j] = x
break
else:
# No number is valid in this cell, start over.
return None
The column check:
checkis = False
for checkRow in board:
if checkRow[j] == checkMe:
#If it's already in this column
checkis = True
if checkis: continue
can be simplified using the built-in function any:
if any(row[j] == checkMe for row in board): continue
The code for checking against other cells in the 3×3 block is very repetitive:
if i % 3 == 1:
if j % 3 == 0 and checkMe in (board[i-1][j+1],board[i-1][j+2]): continue
elif j % 3 == 1 and checkMe in (board[i-1][j-1],board[i-1][j+1]): continue
elif j % 3 == 2 and checkMe in (board[i-1][j-1],board[i-1][j-2]): continue
elif i % 3 == 2:
if j % 3 == 0 and checkMe in (board[i-1][j+1],board[i-1][j+2],board[i-2][j+1],board[i-2][j+2]): continue
elif j % 3 == 1 and checkMe in (board[i-1][j-1],board[i-1][j+1],board[i-2][j-1],board[i-2][j+1]): continue
elif j % 3 == 2 and checkMe in (board[i-1][j-1],board[i-1][j-2],board[i-2][j-1],board[i-2][j-2]): continue
The reason you go to this trouble is to avoid testing against board[i-1][j] and board[i-2][j], which you know would be useless, because you already tested these cells when you checked the column. But in fact that's a false economy. You avoid an unnecessary test, but at the cost of a lot of extra code. It turns out to be just as fast, but a lot simpler, to test all the entries in previous rows of the block, like this:
i0, j0 = i - i % 3, j - j % 3 # origin of 3x3 block
if any(x in row[j0:j0+3] for row in board[i0:i]):
continue
The code only works for 9×9 Sudoku grids made up of 3×3 blocks. But there's nothing special about the numbers 3 and 9 here: the algorithm would be essentially the same for 2 and 4, or 4 and 16. So why not make the code general?
2. Revised code
This isn't any faster than the original code, but it's a lot shorter and simpler, which makes it a better place to start when speeding it up:
import itertools
import random
def attempt_board(m=3):
"""Make one attempt to generate a filled m**2 x m**2 Sudoku board,
returning the board if successful, or None if not.
"""
n = m**2
numbers = list(range(1, n + 1))
board = [[None for _ in range(n)] for _ in range(n)]
for i, j in itertools.product(range(n), repeat=2):
i0, j0 = i - i % m, j - j % m # origin of mxm block
random.shuffle(numbers)
for x in numbers:
if (x not in board[i] # row
and all(row[j] != x for row in board) # column
and all(x not in row[j0:j0+m] # block
for row in board[i0:i])):
board[i][j] = x
break
else:
# No number is valid in this cell.
return None
return board
3. Backtracking
If attempt_board finds that there are no valid numbers for some cell, then it throws away all its work and starts all over again from the beginning. But all that work is not necessarily invalid: most likely the mistake was made only in the last few steps, and so if the algorithm were to go back a little bit and try some different choices, then it would find a solution. This approach is known as backtracking.
Backtracking is easily implemented by using recursion:
def make_board(m=3):
"""Return a random filled m**2 x m**2 Sudoku board."""
n = m**2
board = [[None for _ in range(n)] for _ in range(n)]
def search(c=0):
"Recursively search for a solution starting at position c."
i, j = divmod(c, n)
i0, j0 = i - i % m, j - j % m # Origin of mxm block
numbers = list(range(1, n + 1))
random.shuffle(numbers)
for x in numbers:
if (x not in board[i] # row
and all(row[j] != x for row in board) # column
and all(x not in row[j0:j0+m] # block
for row in board[i0:i])):
board[i][j] = x
if c + 1 >= n**2 or search(c + 1):
return board
else:
# No number is valid in this cell: backtrack and try again.
board[i][j] = None
return None
return search()
I find that this is about 60 times faster than the original code.
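Whichever generator is used, its output is easy to sanity-check. The helper below is not part of the review, just a small test harness I'd suggest: it verifies that a filled board is a legal Sudoku grid.

```python
def is_valid_board(board, m=3):
    """Check that a filled m**2 x m**2 board is a proper Sudoku grid:
    every row, column, and m x m block is a permutation of 1..m**2."""
    n = m * m
    target = set(range(1, n + 1))
    rows_ok = all(set(row) == target for row in board)
    cols_ok = all(set(row[j] for row in board) == target for j in range(n))
    blocks_ok = all(
        set(board[i0 + di][j0 + dj] for di in range(m) for dj in range(m)) == target
        for i0 in range(0, n, m)
        for j0 in range(0, n, m)
    )
    return rows_ok and cols_ok and blocks_ok
```

For example, `is_valid_board(make_board(), m=3)` should always be true, which makes a handy property-style unit test.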
4. Algorithm X
There's a reformulation of Sudoku in terms of the "exact cover" problem, and this can be solved using Donald Knuth's "Algorithm X". See this blog post of mine for a detailed explanation of how this algorithm can be used to solve Sudoku, and see this post on Code Review for an implementation in Python. | {
"domain": "codereview.stackexchange",
"id": 13429,
"tags": "python, algorithm, python-2.x, sudoku"
} |
Checking whether there are k circles with a common area | Question: I have N circles with different radii and positions in the plane. The problem is finding k circles which have a common area. Obviously this can be solved using brute force in $O(N^k)$. Is there a more efficient way to do such a query?
Answer: This problem and related problems have been studied in the paper "Bajaj, C., & Li, M. (1983). On the duality of intersections and closest points. Cornell University."
They call a common area for $k$ circles a '$k$-intersection'. The paper gives an $O(n^2\log n)$ algorithm for the problem of finding a $k$-intersection when the circles have different radii (Theorem 13). It is not clear to me from the paper how that bound is achieved, but I do see how to achieve $O(n^3)$ time with their approach.
Algorithm
The main idea is to construct the following (directed) planar intersection graph $G=(V,E,F)$: The vertices $V$ of this graph are all intersection points of the circles and the edges $E$ are the circle arcs connecting those intersection points. The direction of the edges is counterclockwise on the circle, such that the interior of the circle it is part of lies to the left of the edge. We also keep track of the faces $F$ of this planar graph, which are the regions enclosed by the edges $E$.
Note that each region of circle intersections corresponds to a face of this planar graph and that the number of circles that contain the region is equal to the number of clockwise boundary edges of the corresponding face. If we have computed $G$, then we can maintain a counter for each face in $F$ and traverse the graph to visit all edges and increase the counter of face left of each edge. Then, after we have traversed the edges, each face has the number of circles that contain it stored in the counter.
Complexity
Since there can be at most $O(n^2)$ intersections of $n$ circles, $|V|=O(n^2)$. Since $G$ is a planar graph, $|E|\leq 3|V|-6 =O(n^2)$. So, traversing the graph takes $O(|V|+|E|)=O(n^2)$ time.
As for constructing the graph, the authors claim that this graph can be constructed in $O(n^2\log n)$ time by sorting the edges. I did not see any further explanation of this in the paper and do not immediately see how to achieve this. (In particular, it would be helpful to know what they sort on.)
I do see how to do this in $O(n^3)$: The vertices and edges of the graph can be constructed in $O(n^2)$ time by iteratively adding new circles. When a new circle is added, we add the new intersection points and subdivide some of the existing edges.
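As a concrete starting point, the vertex set $V$ can be enumerated by intersecting all pairs of circles. The sketch below is my own illustration of that step (it is not from the paper), using the standard two-circle intersection formula:

```python
import math

def circle_intersections(c1, c2):
    """Return the 0, 1, or 2 intersection points of two circles.

    Each circle is a tuple (x, y, r). These points form the vertex set V
    of the planar intersection graph described above."""
    (x0, y0, r0), (x1, y1, r1) = c1, c2
    d = math.hypot(x1 - x0, y1 - y0)
    # No intersection: too far apart, one inside the other, or concentric.
    if d > r0 + r1 or d < abs(r0 - r1) or d == 0:
        return []
    # Distance from c1's centre to the chord through the intersection points.
    a = (r0 * r0 - r1 * r1 + d * d) / (2 * d)
    h = math.sqrt(max(r0 * r0 - a * a, 0.0))
    xm = x0 + a * (x1 - x0) / d
    ym = y0 + a * (y1 - y0) / d
    if h == 0:  # tangent circles: a single intersection point
        return [(xm, ym)]
    return [
        (xm + h * (y1 - y0) / d, ym - h * (x1 - x0) / d),
        (xm - h * (y1 - y0) / d, ym + h * (x1 - x0) / d),
    ]

def intersection_vertices(circles):
    """All pairwise intersection points of n circles: O(n^2) of them."""
    pts = []
    for i in range(len(circles)):
        for j in range(i + 1, len(circles)):
            pts.extend(circle_intersections(circles[i], circles[j]))
    return pts
```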
Adding the faces is more complicated and I currently only see the naive method of for each edge $e$, traversing all edges that are on the boundary of the two faces adjacent to edge $e$. This takes $O(n)$ per edge, so $O(n^3)$ in total. It does seem like we can do better here, but I currently don't see how.
$O(n^2\log n)$ is good, but can we do better? For the case where the circles all have the same radius, this is possible: Chazelle and Lee have found an $O(n^2)$ time algorithm based on an implicit traversal through the intersection graph ("Chazelle, B. M., & Lee, D. T. (1986). On a circle placement problem. Computing, 36(1-2), 1-16."). Whether $O(n^2)$ is possible for the case with circles of different radius they leave as an open problem, which has remained open for now, as far as I'm aware. | {
"domain": "cs.stackexchange",
"id": 14335,
"tags": "computational-geometry"
} |
Why is it advantageous to collide two moving particles head-on, rather than using a stationary target? | Question: I have seen similar questions on this topic, but I am more specifically wondering, for particle accelerators, what the effect is on momentum and its relation (if there is one) to the de Broglie wavelength.
Answer: When two particles collide head-on with equal but opposite velocities, all of the energy content of both particles is available to create new product particles in the collision. This lets physicists explore higher energy phenomena than they can when colliding one moving particle with another which is stationary in the lab reference frame. De Broglie effects don't directly enter into the picture here. | {
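To put a number on this (a standard special-relativity estimate, not part of the original answer): the energy available for creating new particles is the centre-of-mass energy $\sqrt{s}$. For two beams of energy $E$ colliding head-on,

$$\sqrt{s} = 2E,$$

while for a beam of energy $E \gg mc^2$ hitting a stationary target particle of mass $m$,

$$s = (E + mc^2)^2 - p^2c^2 = 2mc^2E + 2m^2c^4 \quad\Rightarrow\quad \sqrt{s} \approx \sqrt{2mc^2E}.$$

So a fixed-target machine's useful energy grows only as $\sqrt{E}$: two 100 GeV proton beams give $\sqrt{s} = 200$ GeV head-on, but only about $14$ GeV against a stationary proton ($mc^2 \approx 0.94$ GeV).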
"domain": "physics.stackexchange",
"id": 77529,
"tags": "special-relativity, energy, kinematics, collision, particle-accelerators"
} |
How can absorption spectra form if atoms can't remain in an excited state? | Question: I have been tasked to write a research paper on stars. However, I know very little about physics in general. I am learning about how we can glean information about stars by analyzing the light that they emit. So, first, I am learning about how light interacts with matter.
I just learned about atoms and the fact that they normally exist in a grounded state. Either a collision with another atom or the absorption of a photon with the right wavelength can force the electron(s) in the atom up to a higher energy level. The atom is now in an excited state.
However, atoms cannot remain in an excited state, as this state is not stable. So, $10^{-6}$ to $10^{-9}$ seconds later, a photon is emitted due to a new-found surplus of energy as the electron drops back down to its ground level.
Subquestion: which is the cause and which is the effect here? Is the electron dropping down because the photon is released? Or is the release of the photon the result of the electron being sucked back down by some fundamental force? If the latter is the case(which I suspect) what is this force? Is this the electromagnetic force?
It is my understanding that (assuming the excitation was caused by the absorption of a photon) the photon being released would have a wavelength equal to the photon that was absorbed.
If the above is true, I am confused as to how we notice absorption lines in light that passes through a gas.
It is stated that the atoms in the gas absorb some of the light that is passing through them, but under my current understanding of the interaction, this light would soon be re-emitted. So, I would think that we should still see a continuous spectrum. what am I missing here?
Answer: Basically, absorption lines exist because absorbed photons are not re-emitted in the same direction, so dark lines can be observed. There are various reasons for this.
For example, the extra energy can be dissipated as phonons in a solid or strongly interacting system. Excited states can also emit multiple lower-frequency photons if there are meta-stable states. Lastly, even if the atoms re-emit photons with the same frequency, the photon direction is completely random. Therefore, all the re-emitted light can be ignored if the detector is sufficiently far away; hence, dark lines.
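To put a rough number on the "detector is sufficiently far away" point: the fraction of isotropically re-emitted photons that happen to travel toward an observer scales as the subtended solid angle over $4\pi$. A small illustrative sketch (the numbers below are my own assumptions, not from the answer):

```python
import math

def reemitted_fraction(detector_area_m2, distance_m):
    """Fraction of isotropically re-emitted photons reaching a detector.

    For a small distant detector, the subtended solid angle is roughly
    area / distance**2, while isotropic emission spreads over the full
    4*pi steradians."""
    solid_angle = detector_area_m2 / distance_m ** 2
    return solid_angle / (4 * math.pi)

# A 1 m^2 detector one astronomical unit from the absorbing gas
# intercepts a vanishingly small fraction of the re-emitted light:
frac = reemitted_fraction(1.0, 1.496e11)
```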
"domain": "physics.stackexchange",
"id": 93420,
"tags": "visible-light, photons, atoms, absorption"
} |
Lex Scheme in Rust | Question: I'm writing a Scheme interpreter in Rust, and the first step is the parser. I've finished the lexer, and would like to see what people think of it before I go any further.
#[derive(Debug, PartialEq)]
enum LexToken {
Num(f64),
Symbol(String),
String(String),
LeftBracket,
RightBracket,
}
fn lex_input(input: &str) -> Result<Vec<LexToken>, &'static str> {
let mut output = Vec::new();
let input_length = input.len();
let mut current_idx = 0;
while current_idx < input_length {
if let Some((lexed_string, new_idx)) = lex_string(&input, current_idx) {
output.push(lexed_string);
current_idx = new_idx;
continue;
}
if let Some((lexed_number, new_idx)) = lex_number(&input, current_idx) {
output.push(lexed_number);
current_idx = new_idx;
continue;
}
if let Some((lexed_left_bracket, new_idx)) = lex_left_bracket(&input, current_idx) {
output.push(lexed_left_bracket);
current_idx = new_idx;
continue;
}
if let Some((lexed_right_bracket, new_idx)) = lex_right_bracket(&input, current_idx) {
output.push(lexed_right_bracket);
current_idx = new_idx;
continue;
}
if let Some(new_idx) = lex_whitespace(&input, current_idx) {
current_idx = new_idx;
continue;
}
if let Some((lexed_symbol, new_idx)) = lex_symbol(&input, current_idx) {
output.push(lexed_symbol);
current_idx = new_idx;
continue;
}
}
Ok(output)
}
fn lex_string(input: &str, from_idx: usize) -> Option<(LexToken, usize)> {
if input
.chars()
.nth(from_idx)
.expect("Lexxer skipped past the end of the input")
!= '"'
{
return None;
}
let output = input
.chars()
.skip(from_idx + 1)
.take_while(|&char| char != '"')
.collect::<String>();
Some((
LexToken::String(output.to_string()),
from_idx + output.len() + 2,
))
}
fn lex_left_bracket(input: &str, from_idx: usize) -> Option<(LexToken, usize)> {
if input
.chars()
.nth(from_idx)
.expect("Lexxer skipped past the end of the input")
!= '('
{
return None;
}
Some((LexToken::LeftBracket, from_idx + 1))
}
fn lex_right_bracket(input: &str, from_idx: usize) -> Option<(LexToken, usize)> {
if input
.chars()
.nth(from_idx)
.expect("Lexxer skipped past the end of the input")
!= ')'
{
return None;
}
Some((LexToken::RightBracket, from_idx + 1))
}
fn lex_whitespace(input: &str, from_idx: usize) -> Option<usize> {
if input
.chars()
.nth(from_idx)
.expect("Lexxer skipped past the end of the input")
.is_whitespace()
{
return Some(from_idx + 1);
}
None
}
fn lex_number(input: &str, from_idx: usize) -> Option<(LexToken, usize)> {
let next_char = input
.chars()
.nth(from_idx)
.expect("Lexxer skipped past the end of the input");
if !next_char.is_numeric() && next_char != '-' && next_char != '.' {
return None;
}
let num_as_string = input
.chars()
.skip(from_idx)
.take_while(|&char| char.is_numeric() || char == '.' || char == 'e' || char == '-')
.collect::<String>();
match num_as_string.parse::<f64>() {
Ok(num) => Some((LexToken::Num(num), from_idx + num_as_string.len())),
Err(_) => None,
}
}
fn lex_symbol(input: &str, from_idx: usize) -> Option<(LexToken, usize)> {
let output = input
.chars()
.skip(from_idx)
.take_while(|&char| !char.is_whitespace() && char != '(' && char != ')')
.collect::<String>();
Some((
LexToken::Symbol(output.to_string()),
from_idx + output.len(),
))
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn lex_brackets() {
let input = "()";
let expected_output = vec![LexToken::LeftBracket, LexToken::RightBracket];
compare(input, expected_output);
}
#[test]
fn lex_string() {
let input = "\"scheme\"";
let expected_output = vec![LexToken::String("scheme".to_string())];
compare(input, expected_output);
}
#[test]
fn lex_list_of_strings() {
let input = "(\"little\" \"scheme\")";
let expected_output = vec![
LexToken::LeftBracket,
LexToken::String("little".to_string()),
LexToken::String("scheme".to_string()),
LexToken::RightBracket,
];
compare(input, expected_output);
}
#[test]
fn lex_list_of_strings_with_whitespace() {
let input = " ( \"little\" \"scheme\" ) ";
let expected_output = vec![
LexToken::LeftBracket,
LexToken::String("little".to_string()),
LexToken::String("scheme".to_string()),
LexToken::RightBracket,
];
compare(input, expected_output);
}
#[test]
fn lex_number() {
let tests = vec![
("123", LexToken::Num(123f64)),
("0.123", LexToken::Num(0.123f64)),
("-0.1e-5", LexToken::Num(-0.1e-5f64)),
];
for (input, expect) in tests {
compare(input, vec![expect]);
}
}
#[test]
fn lex_list_of_numbers() {
let input = "(123 0.123 -0.1e-5)";
let expected_output = vec![
LexToken::LeftBracket,
LexToken::Num(123f64),
LexToken::Num(0.123f64),
LexToken::Num(-0.1e-5f64),
LexToken::RightBracket,
];
compare(input, expected_output);
}
#[test]
fn lex_symbol() {
let tests = vec![
("some_func", LexToken::Symbol("some_func".to_string())),
("-", LexToken::Symbol("-".to_string())),
("e", LexToken::Symbol("e".to_string())),
];
for (input, expect) in tests {
compare(input, vec![expect]);
}
}
#[test]
fn lex_list_of_symbols() {
let input = "(somefunc #some_symbol -)";
let expected_output = vec![
LexToken::LeftBracket,
LexToken::Symbol("somefunc".to_string()),
LexToken::Symbol("#some_symbol".to_string()),
LexToken::Symbol("-".to_string()),
LexToken::RightBracket,
];
compare(input, expected_output);
}
fn compare(input: &str, expected_output: Vec<LexToken>) {
let actual_output = lex_input(input).unwrap();
assert_eq!(actual_output, expected_output);
}
}
Answer: You have several variations on this in your code:
if input
.chars()
.nth(from_idx)
.expect("Lexxer skipped past the end of the input")
!= '"'
{
return None;
}
This is problematic. In general, taking the nth item in an iterator isn't going to be very efficient, since it has to iterate through all the previous elements to get there. Some iterators may override this with a fast implementation, but str.chars() cannot, because the string is UTF-8, which has variable-length characters. So it has to read through the string, counting up the characters. Because you do this in a loop, you are going through the input string over and over and over again.
A possible alternative is to pass around a mutable reference to the std::char::Chars object returned by str.chars(). Then you can have each lexing function move the iterator forward as it consume characters. This approach can be helped by using .peekable() to create an iterator with a peek() method that lets you look at the next item in the iterator without consuming it. Also, you can clone() this iterator to remember a particular position in the string, and assign to an iterator to reset it to a previous position.
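For illustration only, here is the peek-and-consume idea sketched in Python (which has no built-in peekable iterator, so a tiny wrapper stands in for Rust's Peekable; the names are my own):

```python
class Peekable:
    """Minimal stand-in for Rust's std::iter::Peekable, for illustration."""
    _SENTINEL = object()

    def __init__(self, iterable):
        self._it = iter(iterable)
        self._peeked = self._SENTINEL

    def peek(self):
        """Look at the next item without consuming it (None when exhausted)."""
        if self._peeked is self._SENTINEL:
            self._peeked = next(self._it, None)
        return self._peeked

    def next(self):
        """Consume and return the next item (None when exhausted)."""
        item = self.peek()
        self._peeked = self._SENTINEL
        return item

def lex_number(chars):
    """Consume a run of digits from the stream: one pass, no re-scanning."""
    digits = []
    while chars.peek() is not None and chars.peek().isdigit():
        digits.append(chars.next())
    return ''.join(digits)
```

Each lexing function advances the shared stream only as far as it consumes, so the whole input is traversed exactly once.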
But what I'd actually do is define a struct, something like
struct Reader<R: std::io::Read> {
    current_line: u32,
    current_column: u32,
    buffer: String,
    source: R,
}
This struct:
Reads from a std::io::Read, so it can be used with files/in memory data/whatever.
Keeps track of the current line and column, probably quite useful for reporting errors.
Provides utility functions to peek/consume data from the stream in a way conducive to implementing the lexer. | {
"domain": "codereview.stackexchange",
"id": 36733,
"tags": "rust, scheme, lexical-analysis"
} |
Application of Newton's law of viscosity in a problem on disc viscometer | Question:
Hi ,
My instructor solved this problem using a circular elemental strip of thickness 'dr'. He told me we only get shear stress between horizontal layers of fluid. He used Newton's law of viscosity to get the value of the shear stress at the top face of the plate. Newton's law of viscosity is only applied between two plates where there is a velocity gradient.
But we also observe a linear velocity profile on the top face of the plate. My doubt here is: why can't we apply Newton's law of viscosity to this circular strip?
That is, in a circular strip we have a number of layers of fluid with different velocities, so there is a velocity gradient between them.
But the shear stress between these layers is not calculated?
Is there no relative motion between those cylindrical fluid layers?
What goes wrong in assuming there is?
This problem is also a solved example in another textbook on fluid mechanics. In that example too, they didn't consider shear stress between cylindrical fluid surfaces.
I am studying this course for first time.
Can anyone please explain elaborately..
Thank you
Edit: I am adding images of how I solved it, so as to avoid any confusion in my text.
The image below shows the actual doubt.
Answer: You are quite correct that there is relative motion along the radial direction as well as normal to the plates. However in viscometers the gap between the plates is orders of magnitude smaller than their radius, so the shear rate in the radial direction is negligibly small compared to the shear rate in the normal direction.
The viscosity is defined as:
$$ \eta = \frac{\sigma}{\dot\gamma} $$
where $\sigma$ is the shear stress and $\dot\gamma$ is the shear rate. The shear rate is the change in velocity with distance. Consider the change in velocity with distance as we move in the radial direction. The velocity that the top plate moves at a distance $r$ from the axis is:
$$ v = r\omega $$
and the distance is $r$, so the shear rate in the horizontal is simply:
$$\dot\gamma_h = \frac{r\omega}{r} = \omega $$
Now consider the change in velocity with distance as we move vertically. The bottom plate is fixed, while at a distance $r$ from the axis the top plate is moving with a velocity $v = r\omega$, and the thickness of the liquid layer is $h$ so the shear rate in the vertical direction is:
$$ \dot\gamma_v = \frac{r\omega}{h} $$
So the ratio of the two shear rates is:
$$ \frac{\dot\gamma_v}{\dot\gamma_h} = \frac{r}{h} $$
Viscometers are constructed so that the thickness of the liquid layer $h$ is much less than the radius of the plate, so that means $\dot\gamma_v \gg \dot\gamma_h$ and therefore the force we have to apply is dominated by $\dot\gamma_v$.
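Plugging in illustrative dimensions (my own assumptions, not from the problem statement) shows how strongly the vertical shear dominates:

```python
def shear_rate_ratio(r, h):
    """Ratio of vertical to horizontal shear rate in a parallel-plate
    viscometer: (r * omega / h) / omega = r / h, as derived above."""
    return r / h

# For a 25 mm radius plate with a 0.5 mm liquid gap (illustrative values),
# the vertical shear rate is roughly 50x the horizontal one:
ratio = shear_rate_ratio(25e-3, 0.5e-3)
```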
Note that commercial viscometers do not use parallel plates because the shear rate in the vertical direction changes with $r$. Instead they use a cone and plate geometry. See for example this answer. | {
"domain": "physics.stackexchange",
"id": 80846,
"tags": "viscosity"
} |
How many ways are there to perform image segmentation? | Question: I'm new to Artificial Intelligence and I want to do image segmentation.
Searching I have found these ways
Digital image processing (I have read it in this book: Digital Image Processing, 4th edition)
Convolutional neural networks
Is there something else that I can use?
Answer: Apart from the multitude of traditional image segmentation techniques (watershed, clustering, or variational methods), newer segmentation schemes using deep learning are actively being used; they provide better results and are well suited to real-time applications, owing to the minimal computational overhead involved at inference time.
The following blog provides a detailed review of recent advancements in this field: Review of Deep Learning Algorithms for Image Semantic Segmentation
For the traditional methods, this Wikipedia article provides a nice summary:
Image Segmentation | {
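As a tiny taste of the clustering family mentioned above, here is a minimal 1-D k-means that segments pixels purely by intensity into two groups. This is a toy sketch of my own, not from either reference; real pipelines would use libraries such as scikit-image or OpenCV.

```python
def kmeans_1d(values, k=2, iters=20):
    """Tiny 1-D k-means: cluster pixel intensities into k groups.

    Returns the cluster label of each value. A toy stand-in for the
    clustering-based segmentation methods mentioned above."""
    # Seed centers from evenly spaced points in the sorted data.
    centers = sorted(values)[:: max(1, len(values) // k)][:k]
    centers = (centers + [max(values)] * k)[:k]  # pad if too few
    for _ in range(iters):
        # Assign each value to its nearest center, then recompute means.
        labels = [min(range(k), key=lambda c: abs(v - centers[c])) for v in values]
        for c in range(k):
            members = [v for v, lab in zip(values, labels) if lab == c]
            if members:
                centers[c] = sum(members) / len(members)
    return labels

# Dark pixels cluster apart from bright ones (label 0 = dark here, since
# the first seed center is the smallest value):
labels = kmeans_1d([10, 12, 11, 200, 210, 205, 9, 198], k=2)
```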
"domain": "ai.stackexchange",
"id": 1696,
"tags": "convolutional-neural-networks, reference-request, image-segmentation, algorithm-request, model-request"
} |
Outside of string theory, can exchange particles be understood as being one-dimensional? | Question: I'm pretty new to quantum, without any formal education therein, so forgive my general layman ignorance when I ask how it is that point-like exchange particles can be modeled in Feynman diagrams as occupying multiple values of space at a single value of time (i.e. a situation in which the virtual photon created by an electromagnetic interaction between two electrons is graphed to be "flat"/0-slope along an x axis of space and a y axis of time).
Alternately, while a proton (a non-point composite particle) is comprised of three quarks bound by gluons, all of which are themselves elementary point particles, a proton takes up a set amount of space. I understand that it is the strong force, acting via the gluons, that keeps the quarks in place and preserves this spatial shape.
Am I making the mistake of putting too much credence in apparent trajectory, which Feynman diagrams do not purport to represent? Or is it intentionally meant to depict the travel of certain exchange particles as being (in some non-literal way) instantaneous?
Answer: The short answer is that Feynman diagrams definitely do not represent the trajectories of particles in spacetime; they are simply a way of writing down formulae that would otherwise be pretty hard to remember. The long answer is below.
Quantum Field Theory, which is the mathematical formalism behind particle physics, dictates that when we scatter two electrons off of each other they have certain probabilities of coming out at different angles. To get these probabilities we have to do a calculation which, unfortunately, is impossible to do analytically. What we usually do when confronted with such a situation is to do a series expansion: break the complicated expression up into an infinite sum of simpler terms and hope that by evaluating only some of them we get a good approximation to the probability for the process at hand. (You can take a look at the Wikipedia articles for anomalous magnetic moment to see just how well this idea worked.)
What Feynman and others did was to realize that each of the terms in the series could be represented with a little picture that we now call a Feynman diagram. The Feynman rules for a particular theory tell us how to convert the diagram into a mathematical expression that is more or less straightforward to evaluate. This is a very handy tool, because it's much easier to just draw lots of diagrams than to remember the complicated formulas off the top of your head.
In a way, when we draw Feynman diagrams we imagine that whatever process we're studying can take place through the exchange of one or more "virtual particles". But we shouldn't attach too much reality to this idea. First off, and this is the meat of your question, they don't represent the trajectories of particles because the particles do not have well defined trajectories. They are quantum objects, and in some sense they take every possible path, and these paths all contribute equally to the probability amplitude. (This is at the heart of the Feynman path integral.) So the answer to your question is that the particles aren't everywhere at the same time, because while the vertical axis sort of represents time, the horizontal axis does not represent space. This is related to the fact that the particles being interchanged are virtual: they can have any mass, move at any speed, and in general break lots of laws of physics. The reason they can do that, of course, is that they don't really exist.
There is another reason Feynman diagrams shouldn't be taken literally, which is this: even if each diagram was indeed a picture of what's really happening, how come there are infinitely many of them? When two electrons scatter off of each other the main contribution is from one-photon exchange, but there's also diagrams with lots of photons exchanged at different places, diagrams with particle-antiparticle pairs being created and destroyed, and so on. To get the full probability we need to sum all of them, so we can't really claim that something definite is going on there. | {
"domain": "physics.stackexchange",
"id": 19512,
"tags": "quantum-mechanics, quantum-field-theory, feynman-diagrams, virtual-particles, exchange-interaction"
} |
Generalizations of Brzozowski's method of derivatives of regular expressions to grammars? | Question: Brzozowski's method of derivatives is a very pretty technique for building deterministic automata from regular expressions in a nicely algebraic way. I've worked out some cute generalizations of this technique to handle some larger classes of grammars, but the algorithms are straightforward enough that it seems quite possible that they've been discovered before. But Googling references to descendants of this technique doesn't seem to turn up much. Anyone know of anything?
Answer: In Total Parser Combinators (ICFP 2010) I use Brzozowski derivatives to establish that language membership is decidable for a certain class of potentially infinite grammars. | {
"domain": "cstheory.stackexchange",
"id": 495,
"tags": "reference-request, fl.formal-languages, parsing"
} |
Rotate a N by N matrix by 90 degrees clockwise | Question: The task:
Given an N by N matrix, rotate it by 90 degrees clockwise.
For example, given the following matrix:
[[1, 2, 3], [4, 5, 6], [7, 8, 9]] you should return:
[[7, 4, 1], [8, 5, 2], [9, 6, 3]]
Follow-up: What if you couldn't use any extra space?
const mtx = [[1, 2, 3],
[4, 5, 6],
[7, 8, 9]];
My imperative solution:
function transpose(matrix){
const size = matrix[0].length;
const newMatrix = JSON.parse(JSON.stringify(matrix));
for (let rowIndex = 0; rowIndex < size; rowIndex++) {
const row = matrix[rowIndex];
for (let colIndex = 0; colIndex < size; colIndex++) {
const newRowIndex = colIndex;
const newColIndex = size - 1 - rowIndex;
newMatrix[newRowIndex][newColIndex] = row[colIndex];
}
}
return newMatrix;
};
console.log(transpose(mtx));
My declarative solution:
const transpose2 = matrix => {
const size = matrix[0].length;
const newMatrix = JSON.parse(JSON.stringify(matrix));
matrix.forEach((row, rowIndex) => {
row.forEach((val, colIndex) => {
const newRowIndex = colIndex;
const newColIndex = size - 1 - rowIndex;
newMatrix[newRowIndex][newColIndex] = val;
});
});
return newMatrix;
};
console.log(transpose2(mtx));
My solution to the Follow Up question:
Didn't really understand the part with "no extra space". The only solution that came to my mind was to overwrite the existing matrix with the new values and at the end clean it up by deleting the old values. ...but appending new values to old values requires extra space, doesn't it?
const transposeNoSpace = matrix => {
const size = matrix[0].length;
matrix.forEach((row, rowIndex) => {
row.forEach((val, colIndex) => {
const newRowIndex = colIndex;
const newColIndex = size - 1 - rowIndex;
const newVal = val[1] ? val[0] : val;
matrix[newRowIndex][newColIndex] = [matrix[newRowIndex][newColIndex], newVal];
});
});
return matrix.map(row => row.map(col => col[1]));
};
console.log(transposeNoSpace(mtx));
I'd also be interested in a pure functional solution
Answer: Copying 2D array
JSON is not for copying data.
const newMatrix = JSON.parse(JSON.stringify(matrix))
Should be
const newMatrix = matrix.map(row => [...row]);
Naming
The naming you use is too verbose and gets in the way of readability.
For 2D data it is common to use row, col or x,y to refer to the coordinates of an item.
As x and y are traditionally ordered with x (column) first and y (row) second, 2D arrays don't lend themselves to that naming. It is acceptable to abbreviate to r, c when indexing 2D arrays (within the iterators, not outside).
transpose as a function name is too general. It would be acceptable if transpose included an argument to define how to transpose the array, but in this case rotateCW would be more fitting.
For 2D data the distance from one row to the same position on the next is called the stride
As the input array is square there is no need to get the stride from the inner array. const stride = matrix[0].length; should be const stride = matrix.length;
Rewriting your solutions
Both these solutions are \$O(n)\$ time and space.
function rotateCW(arr) {
const stride = arr.length;
const res = arr.map(row => [...row]);
for (let r = 0; r < stride; r++) {
const row = arr[r];
for (let c = 0; c < stride; c++) {
res[c][stride - 1 - r] = row[c];
}
}
return res;
}; // << redundant semicolon. Lines end with ; or } not both
Be careful when describing a function as declarative. Though the definition is somewhat loose you should make the effort to declare all high level processing as named functions and the imperative code at the lowest level.
const rotateCW = arr => {
const rotItem = (r, c, item) => res[r][arr.length - 1 - c] = item;
const processRow = (row, r) => row.forEach((item, c) => rotItem(c, r, item));
const res = arr.map(row => [...row]);
arr.forEach(processRow);
return res;
}
Functional
Don't get caught up on declarative; in your past questions your focus was on functional programming. You should keep that focus.
In this case the core of the solution is converting a row column reference (or coordinate) into the inverse of the rotation. We rotate CW by replacing the current item by the one 90deg CCW of it. arr[y][x] = arr[stride-1-x][y]
The next example is functional and also the smallest code.
const rotateCW = arr => {
const rotItem = (r, c) => arr[arr.length - 1 - r][c];
const processRow = (row, r) => row.map((item, c) => rotItem(c, r));
return arr.map(processRow);
}
or as a one liner
const rotCW = arr => arr.map((row, y) => row.map((v, x) => arr[arr.length - 1 - x][y]));
The \$O(1)\$ space solution.
The "no extra space" simply means that it should be \$O(1)\$ space complexity. This can be done via the traditional swap,
var a = 1, b = 2;
const temp = a;
a = b;
b = temp;
Or in ES6+
var a = 1, b = 2;
[a,b] = [b,a];
The swap does not have to be just two items but can be over as many as you want (shifting), the only requirement is that only one spare slot is needed to shift all items.
var a = 1, b = 2, c = 3, d = 4;
const temp = a;
a = b;
b = c;
c = d;
d = temp;
Or in ES6+
var a = 1, b = 2, c = 3, d = 4;
[a, b, c, d] = [b, c, d, a];
Note The ES6 method for swapping, as a source code complexity reduction, is great...
but the JS engine does not know you are just swapping (or shifting), it creates an array to hold all the items on the right so that it does not overwrite them when assigning the new values. That means that the ES6+ swap is \$O(n)\$ space complexity.
Example
I come from a very heavy visual related background and x,y are the most logical way to index 2D+ arrays so will use it in this example.
function rotate2DArray(arr) {
const stride = arr.length, end = stride - 1, half = stride / 2 | 0;
var y = 0;
while (y < half) {
let x = y;
while (x < end - y) {
const temp = arr[y][x];
arr[y][x] = arr[end - x][y];
arr[end - x][y] = arr[end - y][end - x];
arr[end - y][end - x] = arr[x][end - y];
arr[x][end - y] = temp;
x ++;
}
y++;
}
return arr;
}
The above is very unreadable and it pays to put a little order and alignment to the source
function rotate2DArray(arr) {
const stride = arr.length, end = stride - 1, half = stride / 2 | 0;
var y = 0;
while (y < half) {
let x = y;
const ey = end - y;
while (x < ey) {
const temp = arr[y][x], ex = end - x;
arr[y ][x ] = arr[ex][y ];
arr[ex][y ] = arr[ey][ex];
arr[ey][ex] = arr[x ][ey];
arr[x ][ey] = temp;
x ++;
}
y++;
}
return arr;
}
Thus the array is rotated in place for \$O(1)\$ space. | {
"domain": "codereview.stackexchange",
"id": 34217,
"tags": "javascript, algorithm, programming-challenge, functional-programming"
} |
Understanding context free grammars in conjunction with PDA | Question: I have read TONS of articles about context free grammars and Pushdown Automata but I think there are things that I don't seem to understand. I am not studying computer science but I am really interested in many of the topics and I hope you can help me realign my knowledge about CFGs and PDAs.
So my question is: could you tell me if what I'm writing here is correct? If not, could you tell me what is wrong in my interpretation?
What I think I understood:
A state of a pushdown automata is equivalent to a set of production rules (or is it a production rule?) and these production rules contain replacement rules.
Depending on what character we read from our input we add an appropriate stack character or replace a character in the stack, which is defined by one of the replacement rules.
So lets say we have defined the following production rule: a^n b^m : m != n
the replacement rules looks like:
production rule {
input ; Stack Symbol -> To Do On Stack
a ; nothing -> A
a ; A -> AA
b ; A -> nothing
empty -> #
}
for every "a" we read we add an A and for every "b" we remove an A.
The stuff what happens in my PDA stack is therefore defined in the replacement rule of my production rule?
Now if I wanted to parse a nested input like: "{hello world {fooBar is cool + 5}}"
I assume I need two production rules each with different replacement rules for our stack:
RULE 1:
{ ; nothing -> O
[a-zA-Z] ; O -> L
ok seriously I have to stop here because I have no idea what I'm doing. I would appreciate it if you could tell me if my interpretation is wrong and how to handle the last example.
I am sorry if I couldn't be more concise. I'm currently a bit confused by this.
Answer: A PDA is a machine with a certain set of states and an infinite, initially empty, stack. It also has an input tape with the input word written on it. As long as there's input remaining, the machine reads the next character of input, checks its current state, and pops off the top character of the stack to read it. Based on what it finds, the PDA then switches into some new state and pushes a new character onto the stack.
The set of all languages PDAs can recognize is called the context-free languages. They are context-free roughly because what happens next depends only on the character at the top of the stack, rather than all the context beneath it in the stack. A PDA is finished when it goes into a special state for accepting or rejecting the input word. (Some people instead say that a PDA is finished when the stack is empty.) The language of a PDA is the set of all words that it accepts.
A context free grammar is a set of production rules. For simplicity, a production rule looks like $A \mapsto B$ or $A\mapsto BC$ or $A \mapsto \mathtt{a}$ or $A \mapsto \epsilon$. These respectively mean "If you have the symbol $A$, you may replace it with $B$ / $BC$ / the character $\mathtt{a}$ / the empty string." A production rule is a rule that allows you to transform a symbol. The capital letter symbols are called nonterminals; they are sort of like variables. The symbols like $\mathtt{a}$ are terminals; they are sort of like constants. There is a special nonterminal symbol called the start symbol, usually denoted $S$. The language of a context-free grammar (set of rules) is the set of all strings of terminals (like $\mathtt{abcde}$) you can make starting from the start symbol $S$ and applying any of the rules until only terminal symbols remain.
It turns out that context-free grammars (CFGs) are equivalent to pushdown automata (PDAs) because they can recognize the same languages. In particular, for every CFG there's a corresponding PDA, and vice versa.
Here's the CFG->PDA recipe. If you have a CFG, make a PDA with one start state and one reject state. Push the start symbol onto the stack. Then, loop: pop off the top symbol of the stack. Nondeterministically pick a production rule with that symbol on the left hand side. Push all the symbols on the right hand side of that rule onto the stack (make sure the leftmost symbol ends up on top). If the symbol at the top of the stack is a terminal character, make sure it matches the next character of input, otherwise reject. If the stack becomes empty, accept. | {
"domain": "cs.stackexchange",
"id": 11051,
"tags": "context-free, pushdown-automata, parsers"
} |
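The CFG->PDA recipe at the end of the answer above can be made concrete in a few lines. This is an illustrative Python sketch (the grammar, the names, and the use of backtracking in place of true nondeterminism are my own): a grammar for matched braces, echoing the question's nested-input example.

```python
# Sketch of the CFG -> PDA recipe: push the start symbol; pop a
# nonterminal and (nondeterministically) push a production's right-hand
# side; pop a terminal and match it against the next input character.
# Recursive backtracking stands in for the PDA's nondeterminism.
# Hypothetical grammar for matched braces:  S -> { S } S  |  ε
GRAMMAR = {"S": ["{S}S", ""]}      # "" encodes the ε-production

def accepts(word, stack=("S",), i=0):
    if not stack:                  # empty stack: accept iff input is used up
        return i == len(word)
    top, rest = stack[0], stack[1:]
    if top.isupper():              # nonterminal: try every production for it
        return any(accepts(word, tuple(rhs) + rest, i)
                   for rhs in GRAMMAR[top])
    # terminal: must equal the next input character
    return i < len(word) and word[i] == top and accepts(word, rest, i + 1)
```

For instance, `accepts("{{}}{}")` succeeds while `accepts("}{")` fails, because every expansion either consumes a `{` immediately or erases the nonterminal, so the search terminates.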
Effect on the partition function upon the mixing of two ideal gases | Question:
Consider a box that is separated into two compartments by a thin wall. Each
compartment has a volume V and temperature T. The first compartment contains N
atoms of ideal monatomic gas A and the second compartment contains N atoms of
ideal monatomic gas B. Assume that the electronic partition functions of both gases
are equal to 1. The molecular partition function for each component is given by
$$q_i = \frac{V}{Λ_i^3}$$
Firstly I am asked to write the total initial canonical partition function, which is given in the question as $Q_{initial}=Q_AQ_B$. The algebra is fine, but why would the two separate gases have a total canonical partition function?
I was then asked to show that after the gases mix (and do not react) that the following is true: $$\frac{Q_{mixed}}{Q_{initial}}=4^N$$.
I got that $Q_{initial}=\frac{q_A^Nq_B^N}{(N!)^2}$. Therefore, the only reasonable expression for the mixed function is:
$$Q_{mixed}=\frac{(2q_A)^N(2q_B)^N}{(N!)^2}$$
However, I cannot see why the individual molecular partition functions will have doubled. Especially since A and B are distinct gases.
Answer: The fact that $Q_{\mathrm{initial}} = Q_{A}Q_{B}$ implies that the two subsystems, respectively consisting of gas A and gas B, are independent. Because $\log Q_{\mathrm{initial}} = \log Q_{A} + \log Q_{B}$, it follows that various extensive thermodynamic quantities (e.g., internal energy, free energies, entropy) of the total system are simply given by the sum of the corresponding quantities of the subsystems.
The partition functions are doubled because each gas occupies double the initial volume after mixing, i.e., $V \rightarrow 2V$. | {
"domain": "chemistry.stackexchange",
"id": 4686,
"tags": "physical-chemistry, statistical-mechanics"
} |
Quick sort, Hoare's partition algorithm. Is there a mistake in CLRS? | Question: The following problem appears in "Introduction to Algorithms" by Thomas Cormen et. al., aka CLRS.
Problem 7-1.b
Hoare's partition algorithm from the book.
Part b: Assuming the subarray $A[p,\cdots,r]$ contains at least two elements, show that the indices $i$ and $j$ are such that we never access an element of outside the subarray $A[p,\cdots,r]$
I think it is not possible to prove this because the statement is not correct, i.e., during the course of execution we do access elements outside the array.
Here I prove the converse of part b by using a counter-example.
Consider the array A = [1,2]. Let p = 1 and r = 2.
Initially, $i = 0$, $j = 3$ and $x = 1$.
After the loop $5-7$ executes once, $j$ becomes 2 and the condition in line 7 fails, resulting in the termination of the loop $5-7$.
Now we reach the loop $8-10$. Now let's look at the state after this loop executes twice. $i = 2$ and the condition in line 10 still holds. Therefore the loop executes a third time and $i$ becomes 3.
Now to test the condition in line 10 we access $A[3]$ which is outside the subarray $A[1,\cdots, 2]$
Is there a mistake in my reasoning or is the problem statement wrong?
Answer: The problem statement is correct.
I think you are getting confused by: repeat $j \gets j-1$ until $A[j] \leq x$. It means that if $A[j]>x$ then do $j \gets j-1$.
Similarly, loop $8-10$ means that if $A[i]<x$ then do $i \gets i+1$.
Therefore, the algorithm executes in the following way on $A = [1,2]$:
After the loop $5-7$, $j$ becomes $1$ since $A[2] > x$.
After loop $8−10$, $i$ becomes $1$ since $A[1] = x$.
Since $i = j = 1$, the algorithm terminates and returns $1$. | {
"domain": "cs.stackexchange",
"id": 18326,
"tags": "algorithms, sorting, quicksort"
} |
Visualizing objects in rviz | Question:
Guys,
i am completely new to ROS and trying to learn how to get things done. I am trying to model an environment where I have two objects and the pr2 in my world. The launch file is the following:
<!-- send urdf to param server -->
<param name="object1" textfile="/home/x/ros_workspace/project/objects/urdf/object1.urdf" />
<param name="object2" textfile="/home/ros_workspace/project/objects/urdf/object2.urdf" />
<!-- push urdf to factory and spawn robot in gazebo -->
<node name="spawn_object1" pkg="gazebo" type="spawn_model" args="-urdf -param object1 -x 3.0 -y 3.0 -z 0.1 -model object1" respawn="false" output="screen" />
<node name="spawn_object2" pkg="gazebo" type="spawn_model" args="-urdf -param object2 -x 2.75 -y 3.5 -z 0.25 -model object2" respawn="false" output="screen" />
My objects get correctly spawned and I can visualize them in gazebo, but they do not appear in rviz. How would I see the objects in rviz?
Thanks
Originally posted by zamboni on ROS Answers with karma: 11 on 2011-11-27
Post score: 0
Answer:
You have to upload URDF model of your object to ROS parameter server (object1, object2) - it's already in your launch file.
Then you have to add its display in RVIZ - type robot model (set robot description to "object1"). For object2 you have to add next robot model display.
And finally publish transformations via TF - sample code for this task is here. It's reading (one) object position from Gazebo and publishing transformation between world frame, robot base_link and object link.
Originally posted by ZdenekM with karma: 704 on 2011-11-28
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 7436,
"tags": "rviz"
} |
cannot install/run rosbridge on raspberry pi 3 | Question:
I have a Raspberry Pi 3B which has Debian Stretch and ROS Kinetic running on it.
I have tried to install rosbridge_suite and rosbridge_server on it using the following command and had this error:
sudo apt install ros-kinetic-rosbridge-server ros-kinetic-rosbridge-suite
Reading package lists... Done
Building dependency tree
Reading state information... Done
E: Unable to locate package ros-kinetic-rosbridge-server
E: Unable to locate package ros-kinetic-rosbridge-suite
I believe rosbridge for ROS kinetic is already available, correct me if I am mistaken.
After this failed trial, I tried to build it using a rosinstall_generator as described in here , by those commands:
pi@raspberrypi:~/ros_catkin_ws $ rosinstall_generator rosbridge_server --rosdistro kinetic --deps --wet-only --tar > kinetic-custom_ros.rosinstall
pi@raspberrypi:~/ros_catkin_ws $ wstool init src kinetic-ros_comm-wet.rosinstall
Error: There already is a workspace config file .rosinstall at "src". Use wstool install/modify.
pi@raspberrypi:~/ros_catkin_ws $ wstool merge -t src kinetic-custom_ros.rosinstall
Performing actions:
Add new elements:
common_msgs/actionlib_msgs, common_msgs/diagnostic_msgs, common_msgs/nav_msgs, common_msgs/sensor_msgs, common_msgs/stereo_msgs, common_msgs/trajectory_msgs, common_msgs/visualization_msgs, geometry2/tf2_msgs, ros_tutorials/rospy_tutorials, rosauth, rosbag_migration_rule, rosbridge_suite/rosapi, rosbridge_suite/rosbridge_library, rosbridge_suite/rosbridge_msgs, rosbridge_suite/rosbridge_server
Config changed, maybe you need run wstool update to update SCM entries.
Overwriting /home/pi/ros_catkin_ws/src/.rosinstall*
update complete.
then I enter this:
pi@raspberrypi:~/ros_catkin_ws $ wstool update -t src
After doing this and running catkin_make, when I ask for rospack list to see the downloaded/built packages, I cannot see rosbridge_server or rosbridge_suite. Anyway, afterwards I tried to run a node for rosbridge:
pi@raspberrypi:~ $ roslaunch rosbridge_server rosbridge_udp.launch
and I got the error:
[rosbridge_udp.launch] is neither a launch file in package [rosbridge_server] nor is [rosbridge_server] a launch file name
The traceback for the exception was written to the log file
Afterwards, I went to the folder in which launch file is located, and tried to run it with:
pi@raspberrypi:~/ros_catkin_ws/src/rosbridge_suite/rosbridge_server/launch $ roslaunch rosbridge_udp.launch
... logging to /home/pi/.ros/log/d966bdce-a4c1-11e9-b85d-b827eb727113/roslaunch-raspberrypi-4361.log
Checking log directory for disk usage. This may take awhile.
Press Ctrl-C to interrupt
Done checking log file disk usage. Usage is <1GB.
started roslaunch server http://raspberrypi:43533/
SUMMARY
========
PARAMETERS
* /rosapi/params_glob: [*]
* /rosapi/services_glob: [*]
* /rosapi/topics_glob: [*]
* /rosbridge_udp/authenticate: False
* /rosbridge_udp/delay_between_messages: 0
* /rosbridge_udp/fragment_timeout: 600
* /rosbridge_udp/interface:
* /rosbridge_udp/max_message_size: None
* /rosbridge_udp/params_glob: [*]
* /rosbridge_udp/port: 9090
* /rosbridge_udp/services_glob: [*]
* /rosbridge_udp/topics_glob: [*]
* /rosbridge_udp/unregister_timeout: 10
* /rosdistro: kinetic
* /rosversion: 1.12.14
NODES
/
rosapi (rosapi/rosapi_node)
rosbridge_udp (rosbridge_server/rosbridge_udp)
auto-starting new master
process[master]: started with pid [4372]
ROS_MASTER_URI=http://localhost:11311
setting /run_id to d966bdce-a4c1-11e9-b85d-b827eb727113
process[rosout-1]: started with pid [4385]
started core service [/rosout]
ERROR: cannot launch node of type [rosbridge_server/rosbridge_udp]: rosbridge_server
ROS path [0]=/opt/ros/kinetic/share/ros
ROS path [1]=/opt/ros/kinetic/share
ERROR: cannot launch node of type [rosapi/rosapi_node]: rosapi
ROS path [0]=/opt/ros/kinetic/share/ros
ROS path [1]=/opt/ros/kinetic/share
As you can see I cannot run the node , and I am not sure what is the reason.
Thank you for your help!
Originally posted by mmp52 on ROS Answers with karma: 80 on 2019-07-15
Post score: 0
Answer:
It seems that the answer is to follow the installation steps strictly. After adding a new package with rosinstall:
$ cd ~/ros_catkin_ws
$ rosinstall_generator <your_required_ros_packages> --rosdistro kinetic --deps --wet-only --tar > kinetic-custom_ros.rosinstall
$ wstool merge -t src kinetic-custom_ros.rosinstall
$ wstool update -t src
One should build in a particular way (I was just running a plain catkin_make, and the package did not build in that case):
$ sudo ./src/catkin/bin/catkin_make_isolated --install -DCMAKE_BUILD_TYPE=Release --install-space /opt/ros/kinetic
So this solves installing and building any package on the RosberryPi.
Originally posted by mmp52 with karma: 80 on 2019-07-15
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 33425,
"tags": "ros, ros-kinetic, rosbridge, rosbridge-suite, rosbridge-server"
} |
I want to publish an initialpose to AMCL, but get an error | Question:
I am trying to write a node that publishes a msg to the initialpose topic, but I get an error:
Error: TF_NAN_INPUT: Ignoring transform for child_frame_id "odom" from authority "unknown_publisher" because of a nan value in the transform (-nan -nan -nan) (-nan -nan -nan -nan)
at line 244 in /tmp/binarydeb/ros-kinetic-tf2-0.5.16/src/buffer_core.cpp
Error: TF_DENORMALIZED_QUATERNION: Ignoring transform for child_frame_id "odom" from authority "unknown_publisher" because of an invalid quaternion in the transform (-nan -nan -nan -nan)
at line 257 in /tmp/binarydeb/ros-kinetic-tf2-0.5.16/src/buffer_core.cpp
my code:
#include <ros/ros.h>
#include <geometry_msgs/PoseWithCovarianceStamped.h>
int main (int argc, char** argv)
{
ros::init(argc, argv, "initialpose_pub");
ros::NodeHandle nh_;
ros::Publisher initPosePub_ = nh_.advertise<geometry_msgs::PoseWithCovarianceStamped>("initialpose", 2, true);
ros::Rate rate(1.0);
while(nh_.ok())
{
//get time
ros::Time scanTime_ = ros::Time::now();
//create msg
geometry_msgs::PoseWithCovarianceStamped initPose_;
//create the time & frame
initPose_.header.stamp = scanTime_;
initPose_.header.frame_id = "map";
//position
initPose_.pose.pose.position.x = 0.f;
initPose_.pose.pose.position.y = 0.f;
//angle
initPose_.pose.pose.orientation.z = 0.f;
//publish msg
initPosePub_.publish(initPose_);
//sleep
rate.sleep();
}
return 0;
}
I don't know why.
Originally posted by wings0728 on ROS Answers with karma: 50 on 2017-11-07
Post score: 0
Answer:
The orientation that your node is publishing isn't a valid quaternion.
The details of quaternions are too complex to explain here, but there's a nice tutorial on them: http://wiki.ros.org/Tutorials/Quaternions
The short version is that you should initialize the w part of your quaternion to 1.0 or use a function to create your quaternion instead of just setting the z value:
initPose_.pose.pose.orientation.w = 1.0;
or
initPose_.pose.pose.orientation = tf::createQuaternionFromRPY(0.0, 0.0, 0.0);
or
initPose_.pose.pose.orientation = tf::createQuaternionFromYaw(0.0);
Note that if you want to use the tf convenience functions for creating quaternions, you'll want to include the appropriate tf header and add a dependency on tf to your CMakeLists.txt
Originally posted by ahendrix with karma: 47576 on 2017-11-07
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by wings0728 on 2017-11-07:
thank you!!!!!!!!!!
it's working!!!! | {
"domain": "robotics.stackexchange",
"id": 29306,
"tags": "navigation, amcl"
} |
Multiple null Checks or try/catch NullPointerException | Question: There is A LOT of information online stating that you should NEVER catch a NullPointerException. Generally I agree, but I am wondering about this one case.
I have inherited code that requires me to access data that I need in the following way
context.getGrandParent().getParent().getChild().isRequired()
There is no guarantee that any of the objects in this hierarchy will not be null.
I have to enter a block if isRequired() returns true. First, and what I initially wrote, with null checks:
if(context != null
&& context.getGrandParent() != null
&& context.getGrandParent().getParent() != null
&& context.getGrandParent().getParent().getChild() != null
&& context.getGrandParent().getParent().getChild().isRequired()
){
// continue with business logic
} else {
LOG.error("Unable to determine if processing is required.");
}
// continue with other inherited code
Setting aside that I could refactor this, perhaps for readability, wouldn't it make more sense to do the following?
boolean isRequired = false;
try {
isRequired = context.getGrandParent().getParent().getChild().isRequired();
} catch (NullPointerException npe) {
LOG.error("Unable to determine if processing is required.");
}
if(isRequired){
// continue with business logic
}
// continue with other inherited code
Answer: The problem with catching NullPointerException is “which one did you catch?” A null can be returned from getGrandParent(), and using that return value without checking will cause the exception. OR a bug in getGrandParent() might cause an exception while trying to find the parent’s parent, and you are obscuring the bug by assuming the NullPointerException results from a properly returned null value.
You can use Optional to properly capture the null and not call subsequent function.
Optional<Boolean> isRequired = Optional.ofNullable(context)
.map(Context::getGrandParent)
.map(GrandParent::getParent)
.map(Parent::getChild)
.map(Child::isRequired);
if (!isRequired.isPresent()) {
LOG.error("Unable to determine if processing is required.");
} else if (isRequired.get()) {
// continue with business logic
}
The Context::, GrandParent::, Parent::, Child:: class types are, of course, WAG's. You'll need to supply the correct types based on the type returned by the previous stage.
Alternately, you could use .orElse(Boolean.FALSE) at the end of the .map() chain.
"domain": "codereview.stackexchange",
"id": 34764,
"tags": "java, null"
} |
Coloring book. Finding region by point | Question: Let me explain what I want to achieve. I'm working on the coloring book project. On the input, I'm getting transparent images with black borders (Like this). Currently, I've created the 2D Matrix with colors in points. And basing on this Matrix I want to form an array of regions.
Region:
class Region {
var set = Set<Point>()
func contains(_ point: Point) -> Bool {
return set.contains(point)
}
}
The region represents points that form bordered parts(e.g. gun barrel, button, hat).
My future logic will be this:
Users taps.
Get point.
Find the region that contains that point.
Color all the points in that region.
The problem is that I don't know how to form those regions :). Can you please help me solve this problem? Maybe it's some famous algorithm but I just don't know how to google it properly.
P.S. I hope this is the correct place to ask this kind of question.
Answer: Once you get the point, you can use Flood Fill Algorithm to replace the current color in the region containing the point with the new color.
Basically, this algorithm starts from a specified point and then recursively(or iteratively) keep replacing the current color with the new specified color in the neighbouring points till it reaches the boundary of the region.
It detects the boundary by checking for points having different color(different from current and new color). So you don't have to detect/store the regions in advance. | {
"domain": "cs.stackexchange",
"id": 14625,
"tags": "algorithms, matrix"
} |
Is this an acceptable use of $a \,dx = v \,dv$? | Question: From the chain rule we have
$$a\, dx = \frac{dv}{dt}\, dx = dv\frac{dx}{dt} = v \,dv$$
This was introduced in an early chapter of my physics textbook and I have been getting a lot of mileage out of it. For example, in part (b) of this problem
A small object is placed at the top of an incline that is essentially frictionless. The object slides down the incline onto a rough horizontal surface, where it stops in 5.0 s after traveling 60 m. (a) What is the speed of the object at the bottom of the incline and its acceleration along the horizontal surface? (b) What is the height of the incline?
you are supposed to use the work-energy theorem, like so:
Let $x$ denote the hypotenuse of the incline and $\theta$ its slope. Then $mg \sin\theta$ is the component of $F_g$ parallel to the surface.
$$\begin{align}W_\mathrm{net} &= \Delta K \\
mg \,x \sin\theta& = \frac{1}{2}m(v_1^2 -0) \end{align}$$
Given $v_1 = 24$ from part (a), we can solve for the height $x \sin\theta = 29.36$ m.
Instead, I noticed that $a\,dx = v\,dv$ implies $\bar a \Delta x = \bar v \Delta v$. The parallel component of the acceleration vector is $g \sin \theta$, $\Delta x = x$, $\bar v = \frac{1}{2}(v_1 - 0) = \frac{v_1}{2}$, and $\Delta v = v_1$. Then we have
$$\begin{align} \bar a \Delta x &= \bar v \Delta v \\
g \,x \sin\theta & = \frac{v_1^2}{2} \end{align}$$
which gets you the same thing.
Are there situations in which an approach like this wouldn't work? (I realize that I assumed constant acceleration when calculating $\bar v$, but anything beyond that?)
Is it correct (from a physics standpoint) to say that $a\,dx = v\,dv$ implies $\bar a \Delta x = \bar v \Delta v$? I would prove this mathematically by integrating on both sides, noting that the average value of a function $f(x)$ on an interval $(x_0, x_1)$ is $$\bar f(x) = \frac{1}{x_1 - x_0}\int_{x_0}^{x_1} f(x) dx$$
Does my work amount to a derivation of the work-energy theorem, or a lucky circumvention of it?
Silly question: If a student did this on a test, would you give them credit?
Answer:
Are there situations in which an approach like this wouldn't work? (I realize that I assumed constant acceleration when calculating $\bar v$, but anything beyond that?)
Once you start working in more than one dimension, things will get more complicated. In that case, you have
$$\mathbf{a} \cdot d \mathbf{x} = \mathbf{v} \cdot d \mathbf{v}$$
and you'll need to keep track of the vector directions. Also, in some cases, the averages you define will not be convenient to compute (as discussed below).
Is it correct (from a physics standpoint) to say that $a\,dx = v\,dv$ implies $\bar a \Delta x = \bar v \Delta v$? I would prove this mathematically by integrating on both sides, noting that the average value of a function $f(x)$ on an interval $(x_0, x_1)$ is $$\bar f(x) = \frac{1}{x_1 - x_0}\int_{x_0}^{x_1} f(x) dx$$
Sure, but you need to be careful with how you define these averages. You've defined
$$\bar{a} = \frac{1}{\Delta x} \int a \, dx, \quad \bar{v} = \frac{1}{\Delta v} \int v \, dv.$$
In other words your $\bar{a}$ is an average over changes in $x$, while your $\bar{v}$ is an average over changes in $v$. But you could also define other averages, like
$$\bar{a}' = \frac{1}{\Delta t} \int a \, dt, \quad \bar{a}'' = \frac{1}{\Delta v} \int a \, dv, \quad \bar{a}''' = \frac{1}{\Delta a} \int a \, da, \quad \ldots.$$
So if you forget what kinds of averages you're using, you'll get the wrong answer. You got lucky here because the acceleration was a constant, and any average of a constant is the same.
Does my work amount to a derivation of the work-energy theorem, or a lucky circumvention of it?
The only essential step you need to prove the work-energy theorem is the chain rule, as I explain in detail here. Using the chain rule to conclude $a \, dx = v \, dv$ and then integrating both sides is the whole derivation of the work-energy theorem. (Though some introductory textbooks somehow make it sound much more complicated...)
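Both points can be checked numerically. The sketch below uses a toy trajectory of my own choosing ($x = t^3$, so the acceleration is genuinely non-constant): the work-energy identity $\int a\,dx = \Delta(v^2)/2$ still holds exactly, while the $x$-average and $t$-average of $a$ disagree:

```python
import numpy as np

# Toy trajectory (purely illustrative): x(t) = t^3 on t in [0, 1],
# so v = 3t^2 and a = 6t -- a non-constant acceleration.
t = np.linspace(0.0, 1.0, 200001)
x, v, a = t**3, 3 * t**2, 6 * t

def trapz(y, s):
    """Trapezoidal rule for the integral of y over the sample points s."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(s)) / 2)

# Work-energy: the integral of a over x equals (v1^2 - v0^2)/2 regardless
# of how a varies along the way.
work = trapz(a, x)
assert abs(work - 0.5 * (v[-1] ** 2 - v[0] ** 2)) < 1e-6

# But the two averages of a are different quantities:
a_bar_x = work / (x[-1] - x[0])           # average over x; analytically 4.5
a_bar_t = trapz(a, t) / (t[-1] - t[0])    # average over t; analytically 3.0
assert abs(a_bar_x - a_bar_t) > 1.0       # they disagree once a isn't constant
```

So $\bar a \Delta x = \bar v \Delta v$ is safe only when every $\bar a$ in sight means the $x$-average.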
Silly question: If a student did this on a test, would you give them credit?
I would, but in practice it depends on how good your teacher is. | {
"domain": "physics.stackexchange",
"id": 67282,
"tags": "homework-and-exercises, newtonian-mechanics, work"
} |
Collisions not detected between attached object and environment? | Question:
Hi,
I'm now facing a new issue in my implementation of the manipulation stack: the collisions between the object I just picked up (which is correctly attached to the hand) and the environment don't seem to be computed.
I'm placing my object directly on top of another object and I don't get any collisions, though I'm getting collisions between the hand and the collision map if I try to place the object at a position where the hand would be in collision with the environment.
Any ideas?
Cheers,
Ugo
PS: here is a link to a screencast I made to show the current status of my implementation of this stack.
More details regarding my questions:
The collision_support_surface_name is definitely "table". I uploaded a new video to try to better show the problem I'm having with the collision.
In this video you can ignore the small orange/yellow spheres in rviz: it's a work in progress, but I'm just sending a list of targets to the place action and I'm (badly) displaying the grid for this list of targets.
As you can see there's no collision found between the object lying on the table and the attached object.
If I check the DEBUG messages, I can see lots of allowed collisions (between the attached body and the hand mostly), but none between the attached body and the other objects or the table. The collisions between the hand and the unattached body / environments are found.
Originally posted by Ugo on ROS Answers with karma: 1620 on 2011-02-16
Post score: 2
Original comments
Comment by mmwise on 2011-02-17:
when you see an answer you like, mark it as an accepted answer
Answer:
Hi Ugo,
Like Gil said, the manipulation pipeline will indeed disregard collisions between the object you are placing (identified as the "collision_object_name" in your PlaceGoal) and the support you are placing on (identified by the "collision_support_surface_name" in your PlaceGoal). This has to happen, otherwise any place operation would by definition bring your object in collision with whatever you are placing it on.
However, collision checking between the object and the support surface is disabled only in the last stage of the operation (interpolated IK), so that you don't hit the table with the object while moving around, but only when you're actually placing it.
I can not tell from the video what the support surface is in your case. If you're passing the "other object" as the "collision_support_surface_name" then the grasping pipeline will indeed put one object on top of the other. If you pass the table as the "collision_support_surface_name" then a place goal that brings your object in collision with the "other object" should be rejected.
Matei
Originally posted by Matei Ciocarlie with karma: 586 on 2011-02-16
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 4767,
"tags": "ros, kinect, manipulation"
} |
EF6 Code First unit of work pattern with IoC/DI | Question: I'm trying to implement the unit of work pattern with dependency injection / inversion of control and entity framework version 6.1.1 Code First, in an asp.net-mvc project.
public interface IGenericRepository<T> where T : class
{
IQueryable<T> AsQueryable();
IEnumerable<T> GetAll();
IEnumerable<T> Find(Expression<Func<T, bool>> predicate);
T Single(Expression<Func<T, bool>> predicate);
T SingleOrDefault(Expression<Func<T, bool>> predicate);
T First(Expression<Func<T, bool>> predicate);
T GetById(int id);
void Add(T entity);
void Delete(T entity);
void Attach(T entity);
}
public interface IUnitOfWork : IDisposable
{
IGenericRepository<Order> OrderRepository { get; }
IGenericRepository<Customer> CustomerRepository { get; }
IGenericRepository<Employee> EmployeeRepository { get; }
void Commit();
}
public class EfUnitOfWork : DbContext, IUnitOfWork
{
private readonly EfGenericRepository<Order> _orderRepo;
private readonly EfGenericRepository<Customer> _customerRepo;
private readonly EfGenericRepository<Employee> _employeeRepo;
public DbSet<Order> Orders { get; set; }
public DbSet<Customer> Customers { get; set; }
public DbSet<Employee> Employees { get; set; }
public EfUnitOfWork()
{
_orderRepo = new EfGenericRepository<Order>(Orders);
_customerRepo = new EfGenericRepository<Customer>(Customers);
_employeeRepo = new EfGenericRepository<Employee>(Employees);
}
#region IUnitOfWork Implementation
public IGenericRepository<Order> OrderRepository
{
get { return _orderRepo; }
}
public IGenericRepository<Customer> CustomerRepository
{
get { return _customerRepo; }
}
public IGenericRepository<Employee> EmployeeRepository
{
get { return _employeeRepo; }
}
public void Commit()
{
this.SaveChanges();
}
#endregion
}
public class EfGenericRepository<T> : IGenericRepository<T>
where T : class
{
private readonly DbSet<T> _dbSet;
public EfGenericRepository(DbSet<T> dbSet)
{
_dbSet = dbSet;
}
#region IGenericRepository<T> implementation
public virtual IQueryable<T> AsQueryable()
{
return _dbSet.AsQueryable();
}
public IEnumerable<T> GetAll()
{
return _dbSet;
}
public IEnumerable<T> Find(Expression<Func<T, bool>> predicate)
{
return _dbSet.Where(predicate);
}
// And so on ...
#endregion
}
And this is how I'm using it in a controller:
public HomeController(IUnitOfWork unitOfWork) { ..... }
But I've read that EF6 already has Unit of Work out of the box. How would that fit in this picture?
Also, which dependency injector should I go with?
Answer: Unit of work
There are some issues of mechanics here. Normally with a generic repository, you want to be able to extend it, so you'd write:
public interface IOrderRepository : IGenericRepository<Order>
{
//Some Order-specific queries here
}
But this would mean updating your IUnitOfWork every time too.
You're also having to add lots of annoying boilerplate code to your EfUnitOfWork to make it conform to the interface. The question here has a much nicer way of achieving the same thing.
However, my actual suggestion would be to remove IUnitOfWork altogether, and simply add a Save (or, if you prefer, Commit) method to your repositories. Then any code which needs data access should be passed the repositories it needs directly, rather than ever being passed the DbContext.
The Generic Repository
I think the first hint that you're putting the cart before the horse here is the name you picked: IGenericRepository. The fact that it's generic absolutely does not need to be in the name. For one thing, it's already implied by the generic parameter. But more importantly, the fact that you're using the generic repository pattern, as opposed to just the repository pattern, is an unimportant detail.
The generic repository is simply a base class for your repositories. The only reason it's there is that there will be some methods you'll want on all of your actual repositories, and you don't want to have to repeat them.
Mat's Mug already gave the reasons in his answer not to expose IQueryable or take Expression or Func arguments in your repository, and I'd strongly echo those.
How to write your repositories
So given that, here's what I think a generic repository should look like to start out:
public interface IRepository<T> { }
Why is it empty? Because you don't know what methods you're going to need yet.
As soon as you need data access in one of your classes, write a repository for it:
public interface IOrderRepository : IRepository<Order>
{
IEnumerable<Order> GetAllPendingOrders(DateTime endDate);
void Add(Order order);
}
(GetAllPendingOrders and Add being made up example data access methods that you might find yourself needing to use in some service class)
Note how these methods are defined by what the repository's consumer needs, they're not just aimed to exactly match what IDbSet already gives you.
After you have a few repositories, you'll find you have quite a bit of repetition. For example, Add would probably be on most or all of them. (GetAllPending... would not!) So then, you simply refactor by removing those methods and creating a generic version on the Repository<T>. This is just plain old vanilla extracting of a base class that you see all over the place, there's nothing magic about it just because it's relating to repositories.
Guessing
If you really don't like the idea of starting with an empty generic repository, there are some methods which using educated guesswork, you can be almost sure you'll need. So a starting point for your generic repository might be the following cut-down version of the one you posted:
public interface IGenericRepository<T> where T : class
{
IEnumerable<T> GetAll();
T GetById(int id);
void Add(T entity);
void Save();
}
But to emphasise: don't just copy this without understanding why! Generic repositories are horribly badly explained in dozens of articles and blog posts across the internet, and it's really worth clearing up in your mind what they're actually for before using them. | {
"domain": "codereview.stackexchange",
"id": 9746,
"tags": "c#, design-patterns, entity-framework, repository"
} |
anyway to change camera publisher topic? | Question:
hi guys.
I'm currently testing out the Teleop (Hydro) app on my Nexus 10 to control Turtlebot.
I'm able to establish the pairing between both parties, and able to control the turtlebot using the Virtual Joystick, but unable to view the live camera feed.
Note that I'm using the Asus Xtion Pro Live camera. It is able to display images in rviz (so I take it that it's working).
BUT, there is no topic: /camera/rgb/image_raw/compressed
These are the steps that i made:
roscore
rocon_launch turtlebot_bringup bringup.concert
roslaunch turtlebot_bringup 3dsensor.launch
okay. so what i find out is.. that the app itself is subscribing to:
/camera/rgb/image_raw/compressed
but when i run:
rostopic info
/camera/rgb/image_raw/compressed
it shows:
Type: sensor_msgs/CompressedImage
Publishers:
/camera/camera_nodelet_manager (http://192.168.1.23:50576/)
(this is actually my turtlebot IP)
Subscribers:
none
I'm wondering if i need to change there 3dsensor.launch file to manually change the publishing topic so that my nexus can subscribe to it?
if i need to do so, how may i change the launch file?
Originally posted by syaz nyp fyp on ROS Answers with karma: 167 on 2014-08-05
Post score: 0
Answer:
You can simply remap the topic to another topic by adding this to the line where you start the node in the launch file
<remap from="oldTopic" to="newTopic"/>
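For concreteness, the remap element goes inside the `<node>` element of the launch file. The package, type, and topic names below are placeholders, not the actual contents of 3dsensor.launch:

```xml
<node pkg="your_pkg" type="your_node" name="camera_nodelet_manager">
  <!-- republish the compressed image under a new topic name -->
  <remap from="/camera/rgb/image_raw/compressed" to="/my_robot/rgb/compressed"/>
</node>
```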
Originally posted by Mehdi. with karma: 3339 on 2014-08-05
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 18926,
"tags": "turtlebot, ros-hydro, xtion, asus, teleop"
} |
Question based on scaling the FFT for obtaining PSD | Question: Following is the code given in MATLAB's site to estimate PSD using FFT.
Fs = 1000;
t = 0:1/Fs:1-1/Fs;
x = cos(2*pi*100*t) + randn(size(t));
N = length(x);
xdft = fft(x);
xdft = xdft(1:N/2+1);
psdx = (1/(Fs*N)) * abs(xdft).^2;
psdx(2:end-1) = 2*psdx(2:end-1);
freq = 0:Fs/length(x):Fs/2;
plot(freq,10*log10(psdx))
grid on
title('Periodogram Using FFT')
xlabel('Frequency (Hz)')
ylabel('Power/Frequency (dB/Hz)')
I know I might lack a proper understanding of the fundamentals, but can anyone explain in order to extract the first half of xdft why does the index run from $1$ to $N/2 + 1$ and not $1$ to $N/2$? (I suppose the Nyquist frequency lies at $i = N/2$, am I right?)
Some sources mention that the square of the magnitude should be scaled by $\frac{1}{N}$, while here it is scaled by $\frac{1}{Fs*N}$. I am unable to figure out which of these two should be used.
I understand that the DC value should be left untouched when converting to single-sided spectrum. But why is the scaling by $2$ performed only till end-1 and not till end?
Answer: The exact location (index) of the Nyquist frequency in your frequency vector depends on the length of the input signal. If the length is odd the Nyquist frequency is not included; if the length is even it is included. Consequently, and assuming the signal is real, either the first $\frac{N}{2}$ or $ \frac{N}{2} + 1$ samples are needed to fully describe your signal in the frequency domain. Note that, like the first sample describing the DC value, the sample describing the Nyquist value only occurs once; it should therefore not be multiplied by two when calculating the PSD of a real signal.
The proper scaling value to be used depends on the exact definition of the DFT. I have not checked MATLAB’s definition but if you want to make sure that the used scaling factor is correct use Parseval’s theorem. Check if the signal power in the time and frequency domain are identical; if not adjust the scaling factor accordingly. | {
"domain": "dsp.stackexchange",
"id": 6847,
"tags": "fft, power-spectral-density"
} |
Effect of protocol ordering on multiparty comm. complexity | Question: Brief Background
In Multi-Party Protocols by Chandra, Lipton, and Furst [CFL83], a Ramsey-theoretic proof is used to show a lower bound (and later, a matching upper bound) for the predicate Exactly-$n$ in the NOF multiparty communication complexity model. From the paragraph at the top of the second column of Page 1, we can see that they define the model such that the communication is strictly cyclic: e.g., for parties $P_0, P_1, P_2$, $P_0$ broadcasts at time $t=0$, $P_1$ broadcasts at time $t=1$, $P_2$ broadcasts at time $t=2$, then $P_0$ broadcasts at time $t=3$, and so on.
In most other papers, this cyclic ordering restriction is not made. For (arbitrary) example, in Separating Deterministic from Nondeterministic NOF Multiparty Communication Complexity by Beame, David, Pitassi, and Woelfel [BDPW07], a counting argument over protocols separates $\bf{RP}^{cc}_k$ from $\bf{P}^{cc}_k$. By their definition, "a protocol specifies, for every possible [public] blackboard contents [i.e., broadcast history] whether or not the communication is over, the output if over and the next player to speak if not." (emphasis added)
Importantly, the proof technique in [CFL83] appears (to my eyes) to crucially depend on the parties speaking in a cyclic/modular fashion.
Question
Allow me to play Devil's Advocate:
Doesn't the lower bound proof of [CFL83] break if we allow the parties to speak in an ordering specified by the protocol? More specifically, is it possible that there is a protocol with a communication pattern other than cyclic for Exactly-$n$ in the NOF model that costs less than the $\log(\chi_k(n))$ lower bound given in the paper?
Or more generally -- what's going on here? Why is one (highly cited) paper (I use the following very liberally) "allowed" to restrict the possible protocols to round-robin communication patterns only?
Answer: Any protocol $\pi$ can be modified into an equivalent protocol $\hat\pi$ that has the special "round-robin" communication pattern. The modification is as follows: Whenever party $i$ generates an output in $\pi$, it holds it in a buffer until party $i-1$ has spoken. After party $i-1$ speaks, party $i$ either releases its buffer or broadcasts a "dummy" (or empty) message if there is nothing in the buffer.
The conversion of $\pi$ into $\hat\pi$ is without loss of generality, with respect to the correctness and security properties of the protocol. Whether it incurs a loss of generality with respect to communication complexity depends on whether the model allows "empty" messages. You should see whether the proof technique in this paper assumes that parties can broadcast only non-empty messages. If empty messages are allowed, then $\hat\pi$ has the same communication complexity as $\pi$. | {
"domain": "cstheory.stackexchange",
"id": 933,
"tags": "communication-complexity"
} |
NCurses-based Tetris game in C++ | Question: What sticks out and what would you have done better in this terminal Tetris implementation? I do not intend to use namespaces or split it up into multiple files.
#include <ncurses.h>
#include <array>
#include <cstdlib>
// The tetris board
const int board_size_x = 15;
const int board_size_y = 20;
// In-game stats
int block_x = 0; // Position of the moving block inside the board
int block_y = 0;
int tick_force_down; // Ticks down. Force down block when < 0. Resets to 'level' every time the block moves down.
int level; // Max ticks before block is forced down. Decreases during the game as score increases.
int score; // Current score (number of lines taken)
// Where to print stuff
const int board_x = 10; // Tetris board
const int board_y = 3;
const int score_y = 1; // Scoreboard
const int score_x = 5;
const int next_block_x = 30; // Next block
const int next_block_y = 5;
const int blocksize = 3;
using block = std::array<std::array<int, blocksize>, blocksize>;
// Tetris board. 0 = empty
std::array<std::array<int, board_size_x>, board_size_y> board {};
block current_block {};
block next_block{};
// Draw a colored square
void drawsquare(int y, int x, int color){
move(y,x);
attron(COLOR_PAIR(color));
addch(' ');
attroff(COLOR_PAIR(color));
}
// Draw block
void drawblock(int row, int col, block & b){
for(int y=0; y < blocksize; ++y)
for(int x=0; x < blocksize; ++x)
if(b[y][x])
drawsquare(row + y, col + x, b[y][x]);
}
// Draw moving block
void drawmoving(){
drawblock(board_y+1+block_y, board_x+1+block_x, current_block);
}
// Draw next block
void drawnext(){
mvprintw(next_block_y, next_block_x, "Next: ");
drawblock(next_block_y+1, next_block_x+1, next_block);
}
// Randomize next block
void newnext(){
int c = 1 + rand()%7; // Color. 1-7 as initialized for ncurses.
switch(rand()%7){
case 0:
next_block = {0,c,0, 0,c,0, c,c,c};
break;
case 1:
next_block = {c,c,c, c,c,c, c,c,c};
break;
case 2:
next_block = {c,c,0, 0,c,c, 0,0,0};
break;
case 3:
next_block = {0,c,0, 0,c,0, 0,c,c};
break;
case 4:
next_block = {0,c,0, 0,c,0, 0,c,0};
break;
case 5:
next_block = {0,c,c, c,c,0, 0,0,0};
break;
case 6:
next_block = {0,c,0, 0,c,0, c,c,0};
break;
}
}
// Crystalizes moving block into the tetris board
void raster(){
for(int y=0; y < blocksize; ++y)
for(int x=0; x < blocksize; ++x){
if(! current_block[y][x])
continue;
board[block_y+y][block_x+x] = current_block[y][x];
}
}
// block is inside another rasterized block or outside the board?
bool collide(int row, int col, const block & b){
for(int y=0; y < blocksize; ++y)
for(int x=0; x < blocksize; ++x){
if(! b[y][x] )
continue;
int y_on_board = row + y;
int x_on_board = col + x;
if(x_on_board < 0 || x_on_board >= board_size_x || y_on_board >= board_size_y)
return true;
if(board[y_on_board][x_on_board])
return true;
}
return false;
}
// Drops the next block, makes a new next. False on collide.
bool drop(){
block_y = 1;
block_x = board_size_x/2 - 1;
current_block = next_block;
newnext();
return !collide(block_y, block_x, current_block);
}
// Rotated right if possible
void rotright(){
block rot;
for(int ny=0; ny < blocksize; ++ny)
for(int nx=0; nx < blocksize; ++nx)
rot[ny][nx] = current_block[blocksize-1-nx][ny];
if(collide(block_y, block_x, rot))
return;
current_block = rot;
}
// Rotated left if possible
void rotleft(){
block rot;
for(int ny=0; ny < blocksize; ++ny)
for(int nx=0; nx < blocksize; ++nx)
rot[ny][nx] = current_block[nx][blocksize-1-ny];
if(collide(block_y, block_x, rot))
return;
current_block = rot;
}
// false and refuse on collide
bool movedown(){
if(collide(block_y+1, block_x, current_block))
return false;
++block_y;
return true;
}
void moveleft(){
if(collide(block_y, block_x-1, current_block))
return;
--block_x;
}
void moveright(){
if(collide(block_y, block_x+1, current_block))
return;
++block_x;
}
void textout(int y, int x, const char* str){
mvprintw(y, x, str);
}
// Returns number of cleared lines
int clearlines(){
int cleared = 0;
for(int y=0; y < board_size_y; ++ y){
int squares = 0;
for(int x=0; x < board_size_x; ++ x){
if(board[y][x])
++squares;
}
// Drop down all the above lines
if(squares == board_size_x){
++cleared;
for(int xc=0; xc < board_size_x; ++xc) // Clear line. Important for row 0.
board[y][xc]=0;
for(int y2 = y; y2 > 0; --y2) // The line we're moving to
for(int x2 = 0; x2 < board_size_x; ++x2)
board[y2][x2] = board[y2-1][x2]; // Move above line to this line
}
}
return cleared;
}
void drawboard(){
// Draw a box around the tetris board
mvaddch(board_y, board_x, ACS_ULCORNER);
mvaddch(board_y, board_x + board_size_x + 1, ACS_URCORNER);
mvaddch(board_y + board_size_y + 1, board_x, ACS_LLCORNER);
mvaddch(board_y + board_size_y + 1, board_x + board_size_x + 1, ACS_LRCORNER);
for(int i = 1; i <= board_size_x; ++i){
mvaddch(board_y, board_x + i , ACS_HLINE);
mvaddch(board_y + board_size_y + 1, board_x + i, ACS_HLINE);
}
for(int i = 1; i <= board_size_y; ++i){
mvaddch(board_y + i, board_x, ACS_VLINE);
mvaddch(board_y + i, board_x + board_size_x + 1, ACS_VLINE);
}
// Draw the filled board squares
for(int y=0; y < board_size_y; ++y)
for(int x=0; x < board_size_x; ++x)
drawsquare(board_y + y + 1, board_x + x + 1, board[y][x]);
}
// Init a new game
void newgame(){
newnext();
drop();
level = 300;
tick_force_down = level;
score = 0;
}
bool lost = false;
bool ingame_loop(){
int c=getch();
if(c == 'q' || c == 'Q')
return false;
if(lost){
mvprintw(0,0,"You lost. Press q to quit.");
refresh();
return true;
}
bool down = false;
if(--tick_force_down < 0){
tick_force_down = level;
down = true;
}
switch(c){
case ' ':
down = true;
while(movedown())
;
break;
case 'z':
case 'Z':
rotleft();
break;
case 'x':
case 'X':
case KEY_UP:
rotright();
break;
case KEY_LEFT:
moveleft();
break;
case KEY_RIGHT:
moveright();
break;
case KEY_DOWN:
down = true;
break;
}
if(down){
tick_force_down = level;
if(!movedown()){
raster();
if(!drop())
lost = true;
else{
int lines = clearlines();
level -= lines;
score += lines;
}
}
}
// Update the screen
clear();
drawboard();
drawmoving();
drawnext();
mvprintw(score_y, score_x, "Score: %d", score);
refresh();
return true;
}
int main()
{
// Init ncurses
initscr();
start_color();
curs_set(0);
cbreak();
noecho();
keypad(stdscr,TRUE);
for(int i=1; i <= 7; ++i) // man init_pair
init_pair(i, COLOR_BLACK, i);
timeout(1);
newgame();
while(ingame_loop())
;
endwin();
return 0;
}
Answer: I'll steer clear of the high-level design questions, and just critique the code itself within its own context.
Curly braces:
for(int y=0; y < blocksize; ++y)
for(int x=0; x < blocksize; ++x)
if(b[y][x])
drawsquare(row + y, col + x, b[y][x]);
While omitting the curly braces is valid, you should try to avoid doing that if the full statement doesn't fit on one line. That's because the following compiles fine, but doesn't do what you want, and is an easy mistake to make:
for(int y=0; y < blocksize; ++y)
for(int x=0; x < blocksize; ++x)
if(b[y][x])
drawsquare(row + y, col + x, b[y][x]);
std::cout << "I drew a square\n";
Constness
void drawblock(int row, int col, block & b){
If a function accepts a non-const reference, it's generally seen as a contract that the function could possibly modify the object. That's not the case here, so the reference should be const.
Comments:
// Draw block
void drawblock(int row, int col, block & b){
These types of comments are just useless. Comments should provide additional information, not just redundantly state what the code obviously says already.
Variable types:
const int board_size_x = 15;
Size variables should use size_t.
Excessive copies
Your block type is big enough that I would personally create global instances and pass pointers instead of copying them around.
Limit scoping of lost
lost is only ever used inside of ingame_loop(), it has no business being a global variable. I would simply move the while() loop inside of ingame_loop().
Uninitialized rand()
You need to call srand(), otherwise your program won't actually be random. Better yet, you should use the stl random library instead.
Minor stuff
Spacing:
Be consistent:
block current_block {};
block next_block{};
^
for(int y=0; y < blocksize; ++y)
^
if(! current_block[y][x])
^
Visual language:
You have a nice missed opportunity to make your block initializations visually readable:
next_block = {0,c,0, 0,c,0, c,c,c};
//vs
next_block = {0, c, 0,
0, c, 0,
c, c, c}; | {
"domain": "codereview.stackexchange",
"id": 27361,
"tags": "c++, tetris, curses"
} |
How to convert a dataframe into a single dictionary that is not nested? | Question: I have a dataframe as below:
+----+----------------+-------------+----------------+-----------+
| | attribute_one| value_one | attribute_two | value_two |
|----+----------------+-------------+----------------+-----------|
| 0 | male | 10 | female | 15 |
| 1 | 34-45 | 17 | 55-64 | 8 |
| 2 | graduate | 32 | high school | 5 |
...
I want to convert it into dictionary that gives this output:
{'male': '10',
'34-45':'17',
'graduate':'32',
'female':'15',
'55-64': '8',
'high school': '5'
}
How do I do that? I only want attribute columns as keys and their value columns as values.
Answer: The explanation is in the code comments below.
# Created some data like yours
data = {
'attribute_one':['male','34-45','graduate'],
'value_one':[10,17,32],
'attribute_two':['female','55-64','high school'],
'value_two':[15,8,5]
}
# Pandas for handling dataframes
import pandas as pd
# Created a dataframe from the given data
df = pd.DataFrame(data)
# Sliced the columns of interests
df1 = df.iloc[:,0:2] # all values of first two columns
df2 = df.iloc[:,2:4] # all values of last two columns
# Final dictionary for the output
your_dict = {}
# Iterate through numpy values of dataframes
for i in df1.to_numpy():
your_dict[i[0]] = i[1] # populate the dictionary with first dataframe
for i in df2.to_numpy():
your_dict[i[0]] = i[1] # populate the dictionary with second dataframe
# Your dictionary is ready
print(your_dict) | {
"domain": "datascience.stackexchange",
"id": 9697,
"tags": "python, pandas, dataframe"
} |
Snake game using Canvas API | Question: Edit 2: For anyone interested, you can play the game at buggysnake.com
Edit 1: I have typed up the code for making the body of the snake move. It's not perfect and there are small problems I need to fix. But I have updated the javascript code below to include that code.
Background: I recently started learning Javascript about 3 weeks ago. About a week ago I learnt about the Canvas API.
As for my first project, I decided to make a snake game using the Canvas API. As I am a beginner in both Javascript and the Canvas API, I am not sure if what I have typed up is good code. I would appreciate if someone could review it and provide some feedback regarding simplifying it and maybe generalising it to make the movements work with the entire body of the snake. As it is now, the only possible way I can think of moving the snake is by moving individual parts of it. There is still one piece missing in the code. That piece is the part where the body of the snake moves after the head has. I haven't been able to add that part in yet.
There are a total of three files. The HTML, CSS and Javascript files. I have typed them up in that order. Just in case if anyone wants to run the game on their own computer. But the code in review is the Javascript code.
index.html
<!DOCTYPE html>
<html lang="en-us">
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Canvas</title>
<script src="script.js" defer></script>
<link href="style.css" rel="stylesheet">
</head>
<body>
<canvas class="myCanvas">
<p>Something for now</p>
</canvas>
</body>
</html>
style.css
body {
margin: 0;
overflow: hidden;
}
script.js
const canvas = document.querySelector(".myCanvas");
const html = document.querySelector("html");
const width = canvas.width = window.innerWidth;
const height = canvas.height = window.innerHeight;
const ctx = canvas.getContext("2d");
let scoreCounter = 0;
let generalInstanceNumber = 0;
let current_x;
let current_y;
//An array for our snake body.
const snakes = [];
//An array for the keys pressed.
const keysPressed = [];
//The constructor for our snake.
function Snake(x, y,) {
this.alreadyTurnedRight = true;
this.alreadyTurnedLeft = true;
this.alreadyTurnedUp = true;
this.alreadyTurnedDown = true;
this.x = x;
this.y = y;
this.velx = 0;
this.vely = 0;
this.position_x = 0;
this.position_y = 0;
this.instanceNumber;
this.draw = function () {
ctx.fillStyle = "green";
ctx.fillRect(this.x, this.y, 50, 50);
}
this.updateRight = function () {
if (this.x >= width) {
gameStarted = false;
gameOver();
}
else if (this.y === current_y) {
this.velx = 0;
this.velx += 10;
this.x += this.velx;
this.alreadyTurnedRight = false;
this.alreadyTurnedLeft = true;
this.alreadyTurnedUp = true;
this.alreadyTurnedDown = true;
}
else {
if (keysPressed[keysPressed.length - 2] === "ArrowUp" && this.alreadyTurnedRight === true) {
this.y -= 10;
}
if (keysPressed[keysPressed.length -2] === "ArrowDown" && this.alreadyTurnedRight === true) {
this.y += 10;
}
}
}
this.updateLeft = function () {
if (this.x <= 0) {
gameStarted = false;
gameOver();
}
else if (this.y === current_y) {
this.velx = 0;
this.velx -= 10;
this.x += this.velx;
this.alreadyTurnedRight = true;
this.alreadyTurnedLeft = false;
this.alreadyTurnedUp = true;
this.alreadyTurnedDown = true;
}
else {
if (keysPressed[keysPressed.length - 2] === "ArrowUp" && this.alreadyTurnedLeft === true) {
this.y -= 10;
}
if (keysPressed[keysPressed.length - 2] === "ArrowDown" && this.alreadyTurnedLeft === true) {
this.y += 10;
}
}
}
this.updateUp = function () {
if (this.y <= 0) {
gameStarted = false;
gameOver();
}
else if (this.x === current_x) {
this.vely = 0;
this.vely -= 10;
this.y += this.vely;
this.alreadyTurnedRight = true;
this.alreadyTurnedLeft = true;
this.alreadyTurnedUp = false;
this.alreadyTurnedDown = true;
}
else {
if (keysPressed[keysPressed.length - 2] === "ArrowRight" && this.alreadyTurnedUp === true) {
this.x += 10;
}
if (keysPressed[keysPressed.length - 2] === "ArrowLeft" && this.alreadyTurnedUp === true) {
this.x -= 10;
}
}
}
this.updateDown = function () {
if (this.y >= height) {
gameStarted = false;
gameOver();
}
else if (this.x === current_x) {
this.vely = 0;
this.vely += 10;
this.y += this.vely;
this.alreadyTurnedRight = true;
this.alreadyTurnedLeft = true;
this.alreadyTurnedUp = true;
this.alreadyTurnedDown = false;
}
else {
if (keysPressed[keysPressed.length - 2] === "ArrowRight" && this.alreadyTurnedDown === true) {
this.x += 10;
}
if (keysPressed[keysPressed.length - 2] === "ArrowLeft" && this.alreadyTurnedDown === true) {
this.x -= 10;
}
}
}
}
//Function for clearing the canvas for a new drawing.
function updateCanvas() {
ctx.fillStyle = "black";
ctx.fillRect(0, 0, width, height);
}
//Function for Game Over.
function gameOver() {
rPressed = true;
updateCanvas();
ctx.fillStyle = "red";
ctx.font = "70px helvetica";
ctx.fillText("Game Over", width/2 - 200, height/2);
ctx.fillStyle = "yellow";
ctx.font = "45px helvetica";
ctx.fillText("Press r to restart the game", width/2 - 280, height/2 + 100);
}
//Function for drawing the score.
function score() {
ctx.fillStyle = "blue";
ctx.font = "45px helvetica";
ctx.fillText(`Score: ${scoreCounter}`, 10, 50);
}
//Code for setting up the starting screen.
ctx.fillStyle = "black";
ctx.fillRect(0, 0, width, height);
ctx.fillStyle = "Green";
ctx.font = "70px helvetica";
ctx.fillText("Snake Game", width/2 - 220, height/2);
ctx.fillStyle = "yellow";
ctx.font = "45px helvetica";
ctx.fillText("Press s to start the game", width/2 - 260, height/2 + 100);
let x = width/2;
let y = height/2;
const snake = new Snake(x, y);
snake.instanceNumber = 0;
snakes.push(snake);
let apple_x = randomNumber(0, width);
let apple_y = randomNumber(0, height);
const apple = new Apple(apple_x, apple_y);
//Function for starting the game.
let gameStarted = false;
addEventListener("keydown", setUpScreen);
function setUpScreen(event) {
if (event.key === "s") {
updateCanvas();
snake.draw();
apple.draw();
score();
gameStarted = true;
removeEventListener("keydown", setUpScreen);
}
}
//Code for restarting the game.
let rPressed = false;
addEventListener("keydown", restartGame);
function restartGame(event) {
if (event.key === "r" && rPressed === true) {
location.reload();
}
}
//Code for performing animations based on user input.
let rightKeyPressed = true;
let leftKeyPressed = true;
let upKeyPressed = true;
let downKeyPressed = true;
addEventListener("keydown", moveSnake);
function moveSnake(event) {
if (event.key === "ArrowRight" && gameStarted === true && rightKeyPressed === true) {
keysPressed.push("ArrowRight");
current_x = snake.x;
current_y = snake.y;
rightKeyPressed = false;
leftKeyPressed = true;
upKeyPressed = true;
downKeyPressed = true;
cancelAnimationFrame(requestLoopLeft);
cancelAnimationFrame(requestLoopUp);
cancelAnimationFrame(requestLoopDown);
loopRight();
}
if (event.key === "ArrowLeft" && gameStarted === true && leftKeyPressed === true) {
keysPressed.push("ArrowLeft");
current_x = snake.x;
current_y = snake.y;
rightKeyPressed = true;
leftKeyPressed = false;
upKeyPressed = true;
downKeyPressed = true;
cancelAnimationFrame(requestLoopRight);
cancelAnimationFrame(requestLoopUp);
cancelAnimationFrame(requestLoopDown);
loopLeft();
}
if (event.key === "ArrowUp" && gameStarted === true && upKeyPressed === true) {
keysPressed.push("ArrowUp");
current_x = snake.x;
current_y = snake.y;
rightKeyPressed = true;
leftKeyPressed = true;
upKeyPressed = false;
downKeyPressed = true;
cancelAnimationFrame(requestLoopLeft);
cancelAnimationFrame(requestLoopRight);
cancelAnimationFrame(requestLoopDown);
loopUp();
}
if (event.key === "ArrowDown" && gameStarted === true && downKeyPressed === true) {
keysPressed.push("ArrowDown");
current_x = snake.x;
current_y = snake.y;
rightKeyPressed = true;
leftKeyPressed = true;
upKeyPressed = true;
downKeyPressed = false;
cancelAnimationFrame(requestLoopLeft);
cancelAnimationFrame(requestLoopRight);
cancelAnimationFrame(requestLoopUp);
loopDown();
}
}
//Functions for animating the direction of the snake.
let requestLoopRight;
function loopRight(number) {
updateCanvas();
score();
apple.draw();
for (const element of snakes) {
element.draw();
element.updateRight();
}
apple.update();
//console.log(`from loopRight ${snake.x}`);
requestLoopRight = requestAnimationFrame(loopRight);
}
let requestLoopLeft;
function loopLeft() {
updateCanvas();
score();
apple.draw();
for (const element of snakes) {
element.draw();
element.updateLeft();
}
apple.update();
//console.log(`From loopLeft ${snake.x}`);
requestLoopLeft = requestAnimationFrame(loopLeft);
}
let requestLoopUp;
function loopUp(name) {
updateCanvas();
score();
apple.draw();
for (const element of snakes) {
element.draw();
element.updateUp();
}
apple.update();
//console.log(`From loopUp ${snake.y}`);
requestLoopUp = requestAnimationFrame(loopUp);
}
let requestLoopDown;
function loopDown() {
updateCanvas();
score();
apple.draw();
for (const element of snakes) {
element.draw();
element.updateDown();
}
apple.update();
//console.log(`From loopDown ${snake.y}`);
requestLoopDown = requestAnimationFrame(loopDown);
}
//Function for generating a random integer.
function randomNumber(min, max) {
return Math.floor(min + Math.random()*(max - min));
}
//The constructor for our apple.
function Apple(x, y) {
this.x = x;
this.y = y;
this.draw = function () {
ctx.fillStyle = "red";
ctx.beginPath();
ctx.arc(this.x, this.y, 28, 0, 2*Math.PI, false);
ctx.fill();
}
//This function includes the code for collision detection and adding a square to the snakes body.
this.update = function () {
let x = 0;
let y = 0;
if (Math.sqrt(((snake.x + 25) - this.x)*((snake.x + 25) - this.x) + ((snake.y + 25) - this.y)*((snake.y + 25) - this.y)) <= 28) {
this.x = randomNumber(0, width);
this.y = randomNumber(0, height);
scoreCounter += 1;
generalInstanceNumber += 1;
if (rightKeyPressed === false) {
let arrayLength = snakes.length - 1;
x = snakes[arrayLength].x - 50;
y = snakes[arrayLength].y;
const newSnake = new Snake(x, y);
newSnake.instanceNumber = generalInstanceNumber;
snakes.push(newSnake);
}
if (leftKeyPressed === false) {
let arrayLength = snakes.length - 1;
x = snakes[arrayLength].x + 50;
y = snakes[arrayLength].y;
const newSnake = new Snake(x, y);
newSnake.instanceNumber = generalInstanceNumber;
snakes.push(newSnake);
}
if (upKeyPressed === false) {
let arrayLength = snakes.length - 1;
x = snakes[arrayLength].x;
y = snakes[arrayLength].y + 50;
const newSnake = new Snake(x, y);
newSnake.instanceNumber = generalInstanceNumber;
snakes.push(newSnake);
}
if (downKeyPressed === false) {
let arrayLength = snakes.length - 1;
x = snakes[arrayLength].x;
y = snakes[arrayLength].y - 50;
const newSnake = new Snake(x, y);
newSnake.instanceNumber = generalInstanceNumber;
snakes.push(newSnake);
}
}
}
}
Answer: Old school
The code looks very old school, with no use of modern JS syntax (it could be 8-year-old code).
The comments in your code that contain the pronoun "our" give away the fact that you have copied an example. Always check the date of example code, and use the most recent examples (within a year).
Stay up to date, especially when you are learning.
Writing a snake game
A snake game has deceptively simple code, but as the length of the snake's body grows, the amount of work needed to update each frame of the animation quickly becomes overwhelming for the CPU.
Looking at your code, the runtime complexity is \$O(n^2)\$, where \$n\$ is the number of snake body parts plus the number of apples. At worst, \$n\$ can be the number of playfield cells.
The naïve design starts with the snake's head and an apple, adding moving, searchable snake segments and searchable apples as the game progresses.
The better design considers a playfield (grid) of snake body parts and apples. Only the locations of the snake's head and tail are tracked.
The best time complexity for a snake game is \$O(1)\$ (generally all classic games are time \$O(1)\$ and space \$O(n)\$).
Keep it D.R.Y.
D.R.Y. (Don't Repeat Yourself)
Your handling of directions is very repetitive.
There is no need for that.
Think of the 4 direction moves as one move in a given direction.
Then you have one function to move, rather than 4 functions to move up, left, right and down.
Example
const Vec2 = (x = 0, y = 0) => ({x, y});
const directions = {
    up:    Vec2( 0, -1),
    right: Vec2( 1,  0),
    down:  Vec2( 0,  1),
    left:  Vec2(-1,  0)
};
const keys = ((...keyNames) => {
    const keys = {};
    for (const keyName of keyNames) { keys[keyName] = false }
    const keyEvents = e => {
        keys[e.key] !== undefined && (keys[e.key] = e.type === "keydown");
    }
    addEventListener("keydown", keyEvents);
    addEventListener("keyup", keyEvents);
    return keys;
})("ArrowUp", "ArrowRight", "ArrowDown", "ArrowLeft", "s", "r");
var gameOver = false;
function Snake(x, y) {
    const dirs = directions; // local alias dirs
    const pos = Vec2(x, y);
    var currentDir = Vec2();
    var velocity = 10;
    // Plain closure rather than a method, so move() can call it directly.
    const isGameOver = () => {
        if (pos.x >= width || pos.x < 0 || pos.y < 0 || pos.y >= height) {
            gameOver = true;
        }
    };
    return Object.freeze({
        draw() {
            ctx.rect(pos.x, pos.y, 50, 50);
        },
        isGameOver,
        move() {
            if (keys.ArrowUp) {
                currentDir !== dirs.down && (currentDir = dirs.up);
            } else if (keys.ArrowRight) {
                currentDir !== dirs.left && (currentDir = dirs.right);
            } else if (keys.ArrowDown) {
                currentDir !== dirs.up && (currentDir = dirs.down);
            } else if (keys.ArrowLeft) {
                currentDir !== dirs.right && (currentDir = dirs.left);
            }
            pos.x += currentDir.x * velocity;
            pos.y += currentDir.y * velocity;
            isGameOver();
        }
    });
}
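The question's missing piece, making the body follow the head, also stops being a per-direction problem once positions are plain data. A hedged sketch (my own illustration; `stepSnake` is not part of the code above): keep the segments in an array, add the new head cell at the front each tick, and drop the tail.

```javascript
// Body as an array of grid cells; body[0] is the head.
// Each tick: add the new head cell at the front and drop the tail,
// unless the snake just ate an apple (then keep the tail, so it grows).
function stepSnake(body, dir, grew = false) {
    const head = body[0];
    const next = [{ x: head.x + dir.x, y: head.y + dir.y }, ...body];
    if (!grew) next.pop();
    return next;
}

let body = [{ x: 2, y: 0 }, { x: 1, y: 0 }, { x: 0, y: 0 }];
body = stepSnake(body, { x: 1, y: 0 });       // move right: every cell follows
body = stepSnake(body, { x: 0, y: 1 }, true); // turn down and grow
console.log(body.length); // 4
```

This removes any need to move individual parts per direction: one function handles all four directions and growth.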
Poor event use
Your use of events is way too complex for such a simple game. Imagine how hard following what is going on, testing, and debugging would get as the game grows more complex.
Always aim to minimise the number of listeners waiting for events. The more events there are, the more you obfuscate the logic flow!
I suspect you have already lost track of what is happening, as it is evident that you are incorrectly requesting and canceling animation frames. Parts of your snake will simply stop moving.
Generally, animations and games use one game loop. E.g.
function mainLoop(time) {
    if (gameOver) {
        showGameOver();
    } else {
        playGame();
    }
    requestAnimationFrame(mainLoop);
}
Too much this
Avoid the token this.
JavaScript's this was not well thought out (it's like a with without a name). Reading only part of the source code, you cannot know for sure what this refers to. this makes code harder to read and much noisier.
Use closures to create private variables.
Use named objects and reference the name rather than this.
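For example, a factory with a closure keeps state private without ever mentioning this (a minimal sketch of the pattern, not code from the game):

```javascript
// `count` lives in the closure: it is private and needs no `this`.
function makeCounter() {
    let count = 0;
    return Object.freeze({
        increment() { count += 1; return count; },
        value() { return count; }
    });
}

const counter = makeCounter();
counter.increment();
counter.increment();
console.log(counter.value()); // 2
```

No amount of call-site trickery (call, apply, detached methods) can change what `count` refers to, which is exactly the guarantee this does not give you.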
Comments can be evil
Comments can give the reader (and you when you come back months or years later to read the code) the wrong idea of the author's intent.
Example from your code
// Function for generating a random integer.
function randomNumber(min, max) {
    return Math.floor(min + Math.random()*(max - min));
}
A number in JavaScript is not an integer, so your intent is confused:
Did you want an integer or a number?
Is the function a bug and the comment a copy-paste remnant?
Which states your intent, the function name or the comment?
The only way to know is to find a use case within your code; thus the comment makes the code much harder to read and understand, the antithesis of what comments are there for.
The better option is to avoid comments and write code that is easily understood.
const randomInt = (min, max) => Math.floor(min + Math.random() * (max - min));
What is going on?
I found some very strange bits of code. You assign the value 0 and then add 10:
velx = 0;
velx += 10;
Why not just assign 10?
velx = 10; | {
"domain": "codereview.stackexchange",
"id": 44853,
"tags": "javascript, game, canvas"
} |
Generalized geography on solid grid graphs | Question: A post here For which families of graphs is Generalized Geography in $P$? mentioned that generalized geography on solid grid graphs is open. Is the question still open? A quick search on Google shows no results, but I wanted to see if anyone with more familiarity with the area can confirm.
Related to this question, I also wonder if Generalized Geography on grid graphs with holes or thin graphs are also open? What about on bipartite graphs, expander graphs...etc?
Answer: Just to complete my comment:
GG remains PSPACE-complete on planar bipartite directed graphs with maximum degree 3 (see
D. Lichtenstein, M. Sipser: GO Is Polynomial-Space Hard. J. ACM 27(2): 393-401 (1980) )
But (vertex) GG on undirected graphs is in P (see this Q&A and this paper), so it is also in P for undirected grid graphs.
For directed solid grid graphs I didn't find any reference; however, I think there is an easy way to simulate a planar bipartite directed graph. The following idea should work:
Both the "diamond" structure (A) and the crossover gadget (C), can be converted into an equivalent (directed) solid grid graph gadget (B) and (D). | {
"domain": "cstheory.stackexchange",
"id": 3152,
"tags": "ds.algorithms, graph-theory, graph-algorithms, np-hardness"
} |
What is the "curse of dimensionality" in molecular dynamics? | Question: The 'curse of dimensionality' or the 'bottleneck problem' in molecular dynamics is explained in page 5 and 6 of Ab Initio Molecular Dynamics: Basic Theory and Advanced Methods by Dominic Marx and Jurg Hutter .
They proved that Car-Parrinello MD outperforms classical MD with a computational advantage growing as $\sim 10^N$ with system size, which I am unable to understand, especially the last paragraph of page 5.
To be more specific, I am copying the relevant contents:
In the case of using classical mechanics to describe the dynamics - which is the focus of the present book - the limiting step for large systems is the first one, why should this be so? There are $3N-6$ internal degrees of freedom that span the global potential energy surface of an unconstrained $N$-body system. Using, for simplicity, 10 discretization points per coordinate implies that of the order of $10^{3N-6}$ electronic structure calculations are needed in order to map such a global potential energy surface. Thus, the computational workload for the first step in the approach outlined above grows roughly like ∼$10^N$ with increasing system size ∼$N$. This is what might be called the curse of dimensionality or dimensionality bottleneck of calculations that rely on global
potential energy surfaces.
What is needed in ab initio molecular dynamics instead? I am confused about the parts placed in bold:
Suppose that a useful trajectory consists of about $10^M$ molecular dynamics steps, i.e. $10^M$ electronic structure calculations are needed to generate one trajectory. Furthermore, it is assumed that $10^n$ independent trajectories are necessary in order to average over different initial conditions so that $10^{M+n}$ ab initio molecular dynamics steps are required in total. Finally, it is assumed that each single-point electronic structure calculation needed to devise the global potential energy surface and one ab initio molecular dynamics time step require roughly the same amount of cpu time. Based on this truly simplistic order of magnitude estimate, the advantage of ab initio molecular dynamics vs. calculations relying on the computation of a global potential energy surface amounts to about $10^{3N-6-M-n}$. The crucial point is that for a given statistical accuracy (that is for $M$ and $n$ fixed and independent of $N$) and for a given electronic structure method, the computational advantage of ``on-the-fly” approaches grows like ∼$10^N$ with system size. Thus, Car–Parrinello methods always outperform the traditional three-step approaches if the system is sufficiently large and complex. Conversely, computing global potential energy surfaces beforehand and running many classical trajectories afterwards without much additional cost always pays off for a given system size $N$ like ∼$10^{M+n}$ if the system is small enough so that a global potential energy surface can be computed and parameterized.
How do the bolded points prove that fact that "computational advantage of 'on-the-fly' approaches grows like ∼$10^N$ with system size" over classical MD?
Answer: It's certainly more of a "back of the envelope" calculation than a rigorous proof. The main point is, that for a certain system, the cost of obtaining a set of trajectories with a given statistical accuracy is fixed with ab initio MD, as only the dynamically relevant parts of the PES are calculated.
With what is here called "classical MD" (but this also applies to "quantum dynamics" and "wave packet" methods), the whole PES is computed in advance, and this cost is system dependent (the PES has $3N - 6$ dimensions for a nonlinear system, so the number of grid points grows as $10^{3N-6}$); the individual trajectories over this PES are then calculated comparably quickly.
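The order-of-magnitude bookkeeping can be sketched in a few lines of arithmetic; the concrete values of M and n below are illustrative placeholders, not numbers from the book:

```python
def global_pes_exponent(N):
    """log10 of the electronic-structure calls needed to map a global PES
    with 10 discretization points per internal coordinate: 10**(3N - 6)."""
    return 3 * N - 6

def aimd_exponent(M, n):
    """log10 of the calls for 10**n trajectories of 10**M MD steps each."""
    return M + n

def advantage_exponent(N, M, n):
    """log10 of (global-PES cost / on-the-fly cost) = 3N - 6 - M - n."""
    return global_pes_exponent(N) - aimd_exponent(M, n)

# With M and n fixed (fixed statistical accuracy), each additional atom
# multiplies the advantage of the on-the-fly approach by 10**3:
for N in (4, 5, 10):
    print(N, advantage_exponent(N, M=6, n=2))  # -2, 1, 16
```

The sign flip between N = 4 and N = 5 in this toy setting is the crossover the text describes: below it, precomputing the surface pays off; above it, on-the-fly ab initio MD wins.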
The ratio of the two scalings is $\frac{10^{3N-6}}{10^{M+n}}$, increasing with system size (making ab initio MD favourable) and decreasing with required statistical accuracy (making classical MD favourable). | {
"domain": "chemistry.stackexchange",
"id": 8231,
"tags": "physical-chemistry, computational-chemistry, density-functional-theory, molecular-mechanics, molecular-dynamics"
} |
ROS Answers SE migration: p2os teleop | Question:
Hi,
I'm using ros fuerte, pioneer 3dx robot and p2os(even though there's no version of p2os for ros fuerte, p2os works and i can publish cmd to robot - robot moves)..
so I need to use the joystick to move the robot manually. When I launch teleop_joy.launch, it works well and it seems that the connection is established, but the pioneer doesn't move.
Since it didn't work, I've found this question http://answers.ros.org/question/39962/problem-with-p2os-teleop/ and did the remaping part in teleop_joy.launch:
<remap from="/p2os_teleop/cmd_vel" to="/cmd_vel"/>
<remap from="/p2os_teleop/joy" to="/joy"/>
Again it didn't work. I also tried to include teleop_joy.launch in the p2os.launch file; again, the connection is established and everything seems OK, but the robot still doesn't move.. :/
When I run rxgraph, I can see that cmd_vel is published from /p2os_teleop to the /p2os driver.
any idea what else to do?
thank you in advance for your help :)
Originally posted by em on ROS Answers with karma: 23 on 2012-09-07
Post score: 0
Answer:
It usually helps to use rostopic echo or rostopic hz to confirm that data are actually being published on all your topics of interest.
Originally posted by joq with karma: 25443 on 2012-09-07
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by em on 2012-09-11:
I've checked that also, but it didn't help - everything seemed fine. A friend of mine helped me; the problem was with hardware - the joystick wasn't compatible :/ Anyway, it works now :) | {
"domain": "robotics.stackexchange",
"id": 10939,
"tags": "ros"
} |
Macro to format a specific worksheet | Question: I am still new to VBA. I recorded several actions that I need to perform to ensure that a specific worksheet is formatted properly.
Row 1 contains headers. Other than the headers, there is no data in the sheet when this should be run. I know that the raw code from the recording can be shrunk down, but I would appreciate some assistance in figuring that out.
Sub MasterSheetFormatTest()
'
' MasterSheetFormatTest Macro
'
'
Columns("A:A").ColumnWidth = 7
Columns("B:B").ColumnWidth = 6
Columns("C:C").ColumnWidth = 17.14
Columns("D:D").ColumnWidth = 13.57
Columns("E:E").ColumnWidth = 2.71
Columns("F:F").ColumnWidth = 21.43
Columns("G:G").ColumnWidth = 16.43
Columns("H:H").ColumnWidth = 7.86
Columns("I:I").ColumnWidth = 13.43
Columns("J:J").ColumnWidth = 25.14
Columns("K:K").ColumnWidth = 39.29
Columns("L:L").ColumnWidth = 34.14
Columns("M:M").ColumnWidth = 23.14
Columns("N:N").ColumnWidth = 5.57
Columns("O:O").ColumnWidth = 17.14
Columns("P:P").ColumnWidth = 17.14
Columns("Q:Q").ColumnWidth = 8.14
Columns("R:R").ColumnWidth = 17.71
Columns("S:S").ColumnWidth = 22.57
Columns("T:T").ColumnWidth = 20.43
Columns("U:U").ColumnWidth = 15.57
Columns("V:V").ColumnWidth = 13.43
Columns("W:W").ColumnWidth = 13.43
Columns("X:X").ColumnWidth = 10.86
Columns("Y:Y").ColumnWidth = 8.57
Columns("Z:Z").ColumnWidth = 7.57
Columns("AA:AA").ColumnWidth = 7.57
Columns("AB:AB").ColumnWidth = 15
Columns("AC:AC").ColumnWidth = 9.29
Columns("AD:AD").ColumnWidth = 15.86
Columns("AE:AE").ColumnWidth = 67.29
Range("A1:AE1").Select
Range("AE1").Activate
With Selection
.HorizontalAlignment = xlCenter
.VerticalAlignment = xlBottom
.WrapText = False
.Orientation = 0
.AddIndent = False
.IndentLevel = 0
.ShrinkToFit = False
.ReadingOrder = xlContext
.MergeCells = False
End With
With Selection
.HorizontalAlignment = xlCenter
.VerticalAlignment = xlCenter
.WrapText = False
.Orientation = 0
.AddIndent = False
.IndentLevel = 0
.ShrinkToFit = False
.ReadingOrder = xlContext
.MergeCells = False
End With
ActiveWindow.Zoom = 90
ActiveWindow.Zoom = 80
ActiveWindow.Zoom = 70
Range("A2").Select
With ActiveWindow
.SplitColumn = 0
.SplitRow = 1
End With
ActiveWindow.FreezePanes = True
End Sub
Answer: This is repeated for each column's width:
Columns("A:A").ColumnWidth
It could just be:
Columns("A").ColumnWidth
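Beyond dropping the redundant range qualifier, the thirty-one individual width assignments could be collapsed into one data-driven loop. A sketch of the idea (only the first few columns shown; the widths are copied from your macro, so extend the two arrays with the rest):

```vba
Sub SetColumnWidths()
    ' Column letters and their widths kept side by side in two arrays.
    Dim cols As Variant, widths As Variant, i As Long
    cols = Array("A", "B", "C", "D", "E", "F")
    widths = Array(7, 6, 17.14, 13.57, 2.71, 21.43)
    For i = LBound(cols) To UBound(cols)
        Columns(cols(i)).ColumnWidth = widths(i)
    Next i
End Sub
```

This keeps all the layout data in one place, so adding or changing a column means editing the arrays rather than hunting through a wall of statements.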
The code repeats this block of code almost verbatim twice in a row, with no change in the Selection:
With Selection
.HorizontalAlignment = xlCenter
.VerticalAlignment = xlCenter
.WrapText = False
.Orientation = 0
.AddIndent = False
.IndentLevel = 0
.ShrinkToFit = False
.ReadingOrder = xlContext
.MergeCells = False
End With
With the only difference being .VerticalAlignment = xlBottom (1st one) and .VerticalAlignment = xlCenter (2nd one).
You can remove the first block altogether and get identical results, since the 2nd one overwrites the changes made by the 1st.
This repeats the same command with varying zoom levels:
ActiveWindow.Zoom = 90
ActiveWindow.Zoom = 80
ActiveWindow.Zoom = 70
I would say unless you need to see it actually zoom progressively, just do ActiveWindow.Zoom = 70 by itself. | {
"domain": "codereview.stackexchange",
"id": 17408,
"tags": "vba, excel"
} |
Why don't tree ensembles require one-hot-encoding? | Question: I know that models such as random forest and boosted trees don't require one-hot encoding for predictor levels, but I don't really get why. If the tree is making a split in the feature space, then isn't there an inherent ordering involved? There must be something I'm missing here.
To add to my confusion I took a problem I was working on and tried using one-hot encoding on a categorical feature versus converting to an integer using xgboost in R. The generalization error using one-hot encoding was marginally better.
Then I took another variable and did the same test, and saw the opposite result.
Can anyone help explain this?
Answer: The encoding leads to a question of representation and the way that the algorithms cope with the representation.
Let's consider 3 methods of representing n categorial values of a feature:
A single feature with n numeric values.
one hot encoding (n Boolean features, exactly one of them must be on)
Log n Boolean features,representing the n values.
Note that we can represent the same values in the same methods. The one hot encoding is less efficient, requiring n bits instead of log n bits.
More than that, if we are not aware that the n features in the on hot encoding are exclusive, our vc dimension and our hypothesis set are larger.
So, one might wonder why we use one-hot encoding in the first place.
The problem is that with the single-feature representation and the log representation, the algorithm might make wrong deductions.
With a single-feature representation, the algorithm might assume an order. Usually the encoding is arbitrary, and the value 3 is as far from 4 as it is from 8. However, the algorithm might treat the feature as numeric and come up with rules like "f < 4". Here you might claim that if the algorithm found such a rule, it might be beneficial, even if not intended. While that might be true, a small data set, noise, and other reasons for a data set to misrepresent the underlying distribution might lead to false rules.
The same can happen with the logarithmic representation (e.g., ending up with rules like "the third bit is on"). Here we are likely to get more complex rules, all unintended and sometimes misleading.
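To make the "f < 4" point concrete, here is a small pure-Python sketch (my illustration, not part of the original answer): a single threshold split on an integer-coded feature can only peel off a contiguous range of codes, whereas one-hot encoding lets a single split isolate any one category.

```python
def prefix_splits(codes):
    """Subsets a single split `code < t` can send to the left child
    when the categories are integer-encoded."""
    values = sorted(set(codes))
    thresholds = values + [values[-1] + 1]
    return {frozenset(v for v in values if v < t) for t in thresholds}

def onehot_splits(codes):
    """Subsets a single split on one dummy feature can isolate
    under one-hot encoding: any single category."""
    return {frozenset([v]) for v in set(codes)}

codes = [0, 1, 2, 3]  # four categories, integer-encoded
print(prefix_splits(codes))                     # only contiguous prefixes of the code order
print(frozenset([2]) in prefix_splits(codes))   # False: isolating {2} takes two splits
print(frozenset([2]) in onehot_splits(codes))   # True: one dummy feature isolates it
```

A tree can still carve out {2} from integer codes with two stacked splits, which is why both encodings can work; the integer encoding just biases the tree toward groupings that respect the (arbitrary) code order.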
So, in an ideal world, identical representations should lead to identical results. However, the less efficient representation can lead to worse results in some cases, while the badly deduced rules can lead to worse results in others.
In general, if the values are indeed very distinct in behaviour, the algorithm will probably not deduce such rules, and you will benefit from the more efficient representation. It is often hard to analyze this beforehand, so what you did, trying both representations, is a good way to choose the proper one. | {
"domain": "datascience.stackexchange",
"id": 1616,
"tags": "machine-learning, decision-trees, xgboost, categorical-data, representation"
} |
Finding phrase in specified files | Question: Please provide constructive feedback so I can learn from my mistakes. Please tell me what I did right and where I came up with very good solution and where I should improve. Here is the code (Please pardon some indentation, formatting errors):
/*
Author: Filip Mirosław
Author's GitHub Account: https://github.com/Sproza
Purpose: To find requested phrase in specified files.
What you can do with this code:
PLease feel free to do whatever you like
with this piece of code (unless it is for bad purpose).
If you do not make major changes to the program
(only little tweaks) please remember to specify me as an author
of this code using github account(my name and surname is optional).
Also please provide link to the original code.
*/
#include <iostream>
#include <string>
#include <sstream>
#include <vector>
#include <fstream>
using namespace std;
int main()
{
// !!!!!!!!!! SECTION GETTING AND PROCESSING USER'S INPUT !!!!!!!!!!
// Variable storing phrase to look for in the files.
string phrase;
cout << "Find: ";
getline(cin, phrase);
// Vector storing file names to look for a phrase.
vector <string> files;
cout << endl << "In: ";
string users_input;
getline(cin, users_input);
// Vector element to store first word.
files.push_back("");
// Variable used to iterate through vector's elements and add words to them.
int element = 0;
stringstream sstream;
for(int i = 0; i < users_input.size(); i++)
{
if(users_input[i] != ' ')
{
// Variable to store users_input[i] stringified.
string append_this;
sstream << users_input[i];
sstream >> append_this;
files[element].append(append_this);
sstream.str("");
sstream.clear();
}
else
{
files.push_back("");
element++;
}
}
// !!!!!!!!!! SECTION LOOKING FOR A PHRASE IN THE FILES !!!!!!!!!!
vector <string> occurrences;
string line;
// Variable storing result of calling .find(phrase) function after casting to string.
string _find;
// Variable use as a second argument to function .find(phrase) descrbing
// index number of character at which to start looking for phrase.
int start_from = 0;
for(int i = 0; i < files.size(); i++)
{
ifstream file(files[i].c_str());
if(!file)
{
cout << "Error opening a file: \"" << files[i] << "\"" << endl;
continue;
}
occurrences.push_back("File: " + files[i]);
string line_number = "1";
int line_number_arithmetic;
while(!file.eof())
{
getline(file, line);
if(line.find(phrase) != -1)
{
occurrences.push_back("Line: " + line_number);
}
while(line.find(phrase, start_from) != -1)
{
sstream << line.find(phrase, start_from);
sstream >> _find;
sstream.str("");
sstream.clear();
occurrences.push_back(_find);
start_from = line.find(phrase, start_from) + phrase.size();
}
if(line.find(phrase, start_from) == -1)
{
start_from = 0;
sstream << line_number;
sstream >> line_number_arithmetic;
sstream.str("");
sstream.clear();
line_number_arithmetic += 1;
sstream << line_number_arithmetic;
sstream >> line_number;
sstream.str("");
sstream.clear();
}
}
file.close();
}
// !!!!!!!!!! SECTION PRINTING OUT THE RESULTS OF THE SEARCH !!!!!!!!!!
cout << endl << endl << "Search Completed!" << endl << endl << "Full Report: " << endl;
int i = 0;
int number_of_occurrences = 0;
bool insert_coma = false;
while(i < occurrences.size())
{
number_of_occurrences = 0;
if((occurrences[i].at(0) == 'F') && (occurrences[i].at(1) == 'i') &&
(occurrences[i].at(2) == 'l') && (occurrences[i].at(3) == 'e') &&
(occurrences[i].at(4) == ':') && (occurrences[i].at(5) == ' '))
{
for(int j = i + 1; j < occurrences.size(); j++)
{
if(occurrences[j].find_first_not_of("0123456789") == -1)
{
number_of_occurrences++;
}
else if((occurrences[j].at(0) == 'F') && (occurrences[j].at(1) == 'i') &&
(occurrences[j].at(2) == 'l') && (occurrences[j].at(3) == 'e') &&
(occurrences[j].at(4) == ':') && (occurrences[j].at(5) == ' '))
{
break;
}
}
cout << endl << number_of_occurrences << " occurrences found in file ";
occurrences[i].erase(0, 6);
cout << "\"" << occurrences[i] << "\"";
i++;
insert_coma = false;
}
else if((occurrences[i].at(0) == 'L') && (occurrences[i].at(1) == 'i') &&
(occurrences[i].at(2) == 'n') && (occurrences[i].at(3) == 'e') &&
(occurrences[i].at(4) == ':') && (occurrences[i].at(5) == ' '))
{
occurrences[i].erase(0, 6);
cout << endl << "\tLine: " << occurrences[i];
i++;
insert_coma = false;
}
else
{
if(insert_coma)
{
cout << ", " << occurrences[i];
}
else
{
cout << " Position: " << occurrences[i];
}
i++;
insert_coma = true;
}
}
return 0;
}
Answer: I see a number of things which may help you improve your code.
Don't abuse using namespace std
Especially in a very simple program like this, there's little reason to use that line. Putting using namespace std at the top of every program is a bad habit that you'd do well to avoid.
Fix your formatting
There are abundant examples here of C++ code that is well formatted. This code has peculiar indentation that makes it difficult to tell when a function begins and ends. Fixing that would help.
Break up the code into smaller functions
The main() code is very long and does a series of identifiable steps. Rather than having everything in one long function, it would be easier to read and maintain if each discrete step were its own function.
Be careful with signed and unsigned
In the current code, the loop integers i and j are signed int values, but they're being compared with unsigned quantities files.size() and occurrences.size(), etc. Better would be to declare them all as unsigned or perhaps size_t.
Don't use std::endl if you don't really need it
The difference between std::endl and '\n' is that '\n' just emits a newline character, while std::endl actually flushes the stream. This can be time-consuming in a program with a lot of I/O and is rarely actually needed. It's best to use std::endl only when you have some good reason to flush the stream, which is not often needed in simple programs such as this one. Avoiding the habit of using std::endl when '\n' will do will pay dividends in the future as you write more complex programs with more I/O, where performance needs to be maximized.
Simplify by using standard algorithms
Here's a program that does most of what yours does, but in far fewer lines:
#include <iostream>
#include <fstream>
#include <string>
void search(const std::string &phrase, std::istream &in) {
std::string line;
for (unsigned linenum=1; getline(in, line); ++linenum) {
for (auto pos=line.find(phrase); pos != std::string::npos; pos=line.find(phrase, ++pos) ) {
std::cout << "\tLine: " << linenum << " Position: " << pos << "\n";
}
}
}
int main(int argc, char *argv[]) {
if (argc < 3) {
std::cerr << "Usage: search phrase file+\n";
return 0;
}
const std::string phrase{argv[1]};
for (int n=2; n < argc; ++n) {
std::cout << "Searching " << argv[n] << " for " << phrase << "\n";
std::ifstream infile{argv[n]};
search(phrase, infile);
}
}
Omit return 0
When a C or C++ program reaches the end of main the compiler will automatically generate code to return 0, so there is no need to put return 0; explicitly at the end of main.
Note: when I make this suggestion, it's almost invariably followed by one of two kinds of comments: "I didn't know that." or "That's bad advice!" My rationale is that it's safe and useful to rely on compiler behavior explicitly supported by the standard. For C, since C99; see ISO/IEC 9899:1999 section 5.1.2.2.3:
[...] a return from the initial call to the main function is equivalent to calling the exit function with the value returned by the main function as its argument; reaching the } that terminates the main function returns a value of 0.
For C++, since the first standard in 1998; see ISO/IEC 14882:1998 section 3.6.1:
If control reaches the end of main without encountering a return statement, the effect is that of executing return 0;
All versions of both standards since then (C99 and C++98) have maintained the same idea. We rely on automatically generated member functions in C++, and few people write explicit return; statements at the end of a void function. Reasons against omitting seem to boil down to "it looks weird". If, like me, you're curious about the rationale for the change to the C standard read this question. Also note that in the early 1990s this was considered "sloppy practice" because it was undefined behavior (although widely supported) at the time.
So I advocate omitting it; others disagree (often vehemently!) In any case, if you encounter code that omits it, you'll know that it's explicitly supported by the standard and you'll know what it means. | {
"domain": "codereview.stackexchange",
"id": 25732,
"tags": "c++, strings, file"
} |
State of the art in quantum memory | Question: Presently, how much information can a quantum computer store, in how many qubits? What restrictions are there and how does it vary across realizations (efficiency of data storage, ease of reading and writing, etc)?
Answer: Unfortunately the state of the technology regarding memories is not as developed as you seem to expect. When we talk about a memory, we think of a device that can store information for an infinite amount of time (for all practical purposes). So before we can think about the size of the memory in a quantum computer, we should look at whether a single quantum memory has been built. There is a lot of progress in this direction, but to my knowledge the currently best "memory" achieved a coherence time of about 6 hours (which is amazing, but still not what we are used to from classical computers). Although the fidelity of the retrieved state is in the high nineties, the success probability for storage and readout is very low.
There is also work on using error correction codes to build a memory, but those approaches do not give better results so far.
"domain": "quantumcomputing.stackexchange",
"id": 15,
"tags": "quantum-memory"
} |
Magnetic moment of a radially symmetric current | Question: In my latest assignment I'm tasked with finding a magnetic moment $\mu$ of a hydrogen atom, whose current distribution $\mathbf{j}(\mathbf{r})$ looks like
$$\mathbf{j}(\mathbf{r})=\frac{e\hbar}{3^8 \pi ma^4} \frac{r^3}{a^3}e^{-\frac{2r}{3a}}\sin\theta\cos^2\theta\mathbf{e_\varphi},$$
where $a$ is the Bohr radius and $m$ is the electron mass. It is also said that the electron orbits at a radius $r$, so I assume I need to integrate the radial component from 0 to $r$
So I got the usual formula for the magnetic moment,
$$\mu={{1}\over{2}}\int d^3r'(\mathbf{r}\times\mathbf{j}(\mathbf{r}))$$
The cross product term can be expressed as
$$\mathbf{r}\times\mathbf{j}(\mathbf{r})=r\cdot j(\mathbf{r})\cdot\sin\frac{\pi}{2}\mathbf{e_\theta}=rj(\mathbf{r})\mathbf{e_\theta}$$
So the moment becomes
$=\frac{e\hbar}{3^8ma^7}\int_{0}^{r}\int_{0}^{\pi}r'^4e^{-\frac{2}{3a}r'}\sin\theta\cos^2\theta dr'd\theta\mathbf{e_\theta}$
$$u:=\frac{2}{3a}r', dr'=\frac{3}{2}a\cdot du$$
$$v:=\cos\theta, d\theta=-\frac{dv}{\sin\theta}$$
$=-\frac{e\hbar}{3^8ma^7}(\frac{3}{2}a)^5\int_{0}^{u(r)}u^4e^{-u}du\int_{1}^{-1}v^2dv\mathbf{e_\theta}$
(and after several layers of integration by parts)
$=-\frac{e\hbar}{3^3\cdot2^5ma^2}[-e^{-2r/3a}\Bigl((\frac{2}{3a}r)^4+4(\frac{2}{3a}r)^3+12(\frac{2}{3a}r)^2+24(\frac{2}{3a}r)+24\Bigr)+24]\cdot[-\frac{2}{3}]\mathbf{e_\theta}$
$=\frac{e\hbar}{6^4ma^2}[24-e^{-2r/3a}\Bigl((\frac{2}{3a}r)^4+4(\frac{2}{3a}r)^3+12(\frac{2}{3a}r)^2+24(\frac{2}{3a}r)+24\Bigr)]\mathbf{e_\theta}$.
I'm fairly certain in my integrals, but this result is extremely messy, which makes me doubt if I chose the correct approach in the first place
Am I using the correct formula? And if I am, am I integrating $dr'$ over correct boundaries?
Answer: So the question may have been a bit vague ("have I done everything correctly?"), so I feel obliged to put up a proper answer now. I was wrong in a whole bunch of places
Firstly, as Cryo has pointed out in the comments, the $\hat{\theta}$ vector is position dependent and not uniquely defined, so to fix this one would transform the unit vector to cartesian coordinates:
$$\hat{\theta}=\cos\theta\cos\varphi\hat{\mathbf{x}}+\cos\theta\sin\varphi\hat{\mathbf{y}}-\sin\theta\,\hat{\mathbf{z}}.$$
The second thing that was pointed out is that the triple integral in spherical coordinates obviously has an additional factor of $r'^2\sin\theta$, which I forgot,
and so the integral becomes
$=\frac{1}{2}\int_{0}^{\infty}\int_0^{2\pi}\int_0^\pi\frac{e\hbar}{3^8\pi ma^4}\frac{r'^3}{a^3}e^{-2r'/3a}\sin\theta\cos^2\theta\cdot r'\cdot(\cos\theta\cos\varphi\hat{\mathbf{x}}+\cos\theta\sin\varphi\hat{\mathbf{y}}-\sin\theta\hat{\mathbf{z}})r'^2\sin\theta dr'd\varphi d\theta$
$=\frac{1}{2}\frac{e\hbar}{3^8\pi ma^7}\int_0^\infty\int_0^\pi r'^6e^{-2r'/3a}\sin^2\theta\cos^2\theta([\sin\varphi]_0^{2\pi}\cos\theta\hat{\mathbf{x}}+[-\cos\varphi]_0^{2\pi}\cos\theta\hat{\mathbf{y}}-[\varphi]_0^{2\pi}\sin\theta\hat{\mathbf{z}})dr'd\theta$
$$[\sin\varphi]_0^{2\pi}=0-0=0; [-\cos\varphi]_0^{2\pi}=-1+1=0$$
$=-\frac{e\hbar}{3^8ma^7}\int_0^\infty\int_0^\pi r'^6e^{-2r'/3a}\sin^3\theta\cos^2\theta dr'd\theta\hat{\mathbf{z}}$
$$u:=\frac{2}{3a}r',\space dr'=\frac{3}{2}a\,du$$
$$v:=\cos\theta,\space d\theta=-\frac{dv}{\sin\theta}$$
$=\frac{e\hbar}{3^8ma^7}\int_0^\infty (\frac{3}{2}au)^6e^{-u}(\frac{3}{2}a)du\int_{1}^{-1}(1-v^2)v^2dv$,
and, after a whole lot of integration by parts,
$=\frac{e\hbar}{3\cdot 2^7m}\cdot 720\cdot[\frac{1}{3}v^3-\frac{1}{5}v^5]_1^{-1}$
$=\frac{e\hbar}{3\cdot 2^7m}\cdot 720\cdot(-\frac{4}{15})$
$=-\frac{1}{2}\frac{e\hbar}{m}$
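As a quick numerical sanity check (not part of the original derivation), the two definite integrals and the collected prefactor can be verified in a few lines of Python:

```python
import math

# integral_0^inf u^6 e^{-u} du = Gamma(7) = 6! = 720
u_int = math.gamma(7)

# integral_1^{-1} (1 - v^2) v^2 dv = -2*(1/3 - 1/5) = -4/15
v_int = -2 * (1 / 3 - 1 / 5)

# Substitution factors collect to (3a/2)^7 / (3^8 a^7) = 1/(3 * 2^7); a cancels,
# so the dimensionless prefactor multiplying e*hbar/m is:
coeff = u_int * v_int / (3 * 2**7)
print(coeff)  # approximately -0.5, i.e. mu = -(1/2) e*hbar/m
```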
However, you may have noticed that I forgot to implement one of the conditions given in the question, namely that the electron orbits the proton at a distance $r$. With this in mind the calculation becomes a lot easier:
$...=-\frac{e\hbar}{3^8ma^7}\int_0^\infty\int_0^\pi r'^6e^{-2r'/3a}\color{red}{\delta(r'-r)}\sin^3\theta\cos^2\theta dr'd\theta\hat{\mathbf{z}}$
$=-\frac{e\hbar}{3^8ma^7}r^6e^{-2r/3a}\int_0^\pi\sin^3\theta\cos^2\theta d\theta\hat{\mathbf{z}}$
which is just
$-\left(\frac{4}{15}\right)\frac{e\hbar}{3^8ma^7}r^6e^{-2r/3a}$ (since $\int_0^\pi\sin^3\theta\cos^2\theta\,d\theta=\frac{4}{15}$),
or
$-4.0644\cdot10^{-5}\frac{e\hbar}{ma^7}r^6e^{-2r/3a}$
"domain": "physics.stackexchange",
"id": 54263,
"tags": "homework-and-exercises, electromagnetism, field-theory, magnetic-moment"
} |
Is it essential for two meshing gears to have even number of teeth? | Question: Should the teeth of meshing gears be even rather than odd numbers? If not, which is better?
Answer: The recommendation is quite the opposite.
The teeth of meshing gears are, where possible, chosen to be odd or, better again, prime, so that a bad tooth doesn't keep hitting the same point on the opposing gear and gear wear will be even. For example, a 23-tooth gear driven by a 19-tooth gear will only come back into phase after 19 × 23 = 437 tooth engagements (19 revolutions of the 23-tooth gear).
You need to confirm this, but as far as I know the tooth repeat frequency is given by
$\frac {LCM}{T_1 \times T_2}$ where $ LCM $ is the least common multiple of the number of teeth on the two gears and $ T_1 $ and $ T_2 $ are the number of teeth on each gear. | {
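The "comes back into phase" count can be sketched numerically: the least common multiple of the two tooth counts gives the number of tooth engagements before the same tooth pair meets again (an illustrative sketch, not a substitute for checking the formula above):

```python
from math import gcd

def engagements_to_repeat(t1, t2):
    """Tooth engagements before the same tooth pair meshes again: lcm(t1, t2)."""
    return t1 * t2 // gcd(t1, t2)

m = engagements_to_repeat(19, 23)
print(m, m // 23, m // 19)  # 437 engagements = 19 revs of the 23T gear, 23 of the 19T
# A shared factor shortens the cycle, so the same teeth keep meeting:
print(engagements_to_repeat(20, 24))  # 120 instead of 480
```

With coprime (e.g. prime) tooth counts, every tooth of one gear meets every tooth of the other before the pattern repeats, which spreads wear evenly.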
"domain": "engineering.stackexchange",
"id": 4611,
"tags": "mechanical-engineering, gears, mechanisms, machine-design, machine-elements"
} |
Can I use a transfer function to filter noise? | Question: I want to make it simple!
I want to filter white noise with a transfer function and it's going to be zero phase.
Assume that we have our low pass filter.
$$G(s) = \frac {1}{1 + Ts} $$
Where $T$ is my tuning parameter.
And then we apply our noisy data $u(t)$ to get our filtered output $y(t)$
$$y(t) = G(s)u(t)$$
Then I flip $y(t)$ to $y(-t)$
And do the same process again.
$$u(-t) = G(s)y(-t)$$
And now I flip $u(-t)$ to $u(t)$.
Questions:
Is Discrete Fourier Transform better to use instead of a low pass filter?
Will this method work? Using transfer functions.
Thank you.
Answer: DFT is just a tool to convert time domain samples to frequency domain. To filter your discrete data, you can just perform DFT on the input data, multiply it with the transfer function of the LPF and then take the inverse DFT. | {
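As an illustration of that recipe, here is a stdlib-only Python sketch (a naive O(N²) DFT; in practice you would use an FFT library). Note that the forward-plus-time-reversed filtering described in the question is equivalent, in the frequency domain, to multiplying by $|G(j\omega)|^2$, which is real and therefore zero-phase. The sample rate `fs` and time constant `T` below are made-up values for illustration:

```python
import cmath
import math

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

T, fs, N = 0.05, 100.0, 64       # assumed time constant and sample rate
x = [1.0] + [0.0] * (N - 1)      # unit impulse standing in for the noisy input

def G(k):
    """G(jw) = 1/(1 + jTw), evaluated on the DFT frequency grid."""
    f = k if k <= N // 2 else k - N   # bins above N/2 are negative frequencies
    w = 2 * math.pi * f * fs / N
    return 1 / (1 + 1j * T * w)

X = dft(x)
# forward pass + time-reversed pass  <=>  multiplying by |G(jw)|^2 (zero phase)
Y = [X[k] * abs(G(k)) ** 2 for k in range(N)]
y = [v.real for v in idft(Y)]
print(abs(sum(y) - 1.0) < 1e-9)   # DC gain of |G|^2 is 1, so the impulse sums to 1
```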
"domain": "dsp.stackexchange",
"id": 8720,
"tags": "filters, filter-design, lowpass-filter, phase"
} |
Injecting an object in each function in Go | Question: My current project uses a database represented as object. Furthermore I want to implement methods like CreateUser(), DeleteUser() etc, but my current code requires the injection of the db-object as parameter in each function.
Making the db-object global would solve the problem, but I learnt that global variables are mostly seen as a bad practice. Are there any other solutions?
package main
import (
"fmt"
"time"
"github.com/jinzhu/gorm"
_ "github.com/lib/pq"
_ "github.com/mattn/go-sqlite3"
)
type User struct {
ID int
Name string `sql:"size:50"`
Username string `sql:"size:50"`
CreatedAt time.Time
UpdatedAt time.Time
}
func CreateUser(db gorm.DB, name, username string) {
user := User{
Name: name,
Username: username,
}
db.Create(&user)
}
func main() {
db, err := gorm.Open("sqlite3", "data.sqlite3")
if err != nil {
fmt.Println(err)
}
db.DB()
fmt.Println("Programm is running...")
CreateUser(db, "John Doe", "johndoe")
}
Answer: Usually, when you have a lot of functions operating on the same data, it's worth considering making those functions methods on this data:
type UserDB struct {
*gorm.DB
}
func (udb *UserDB) Create(name, username string) error {
user := User{
Name: name,
Username: username,
}
return udb.DB.Create(&user).Error // call the embedded *gorm.DB, not this method
}
// etc. | {
"domain": "codereview.stackexchange",
"id": 13500,
"tags": "go"
} |
What's the difference between aircon modes Auto, Sun, Snowflake? | Question: Most air conditioners have three modes: Auto, Sun, Snowflake
What is the difference in terms of how they work?
Answer: The sun is heating mode. When the room temperature reaches the set temperature, the air conditioner stops operating until the temperature falls below the set temperature and then starts operating again. When in heating mode, the air conditioner does not cool. This setting is used during cold weather periods, such as in winter.
The snowflake is cooling mode. When the room temperature reaches the set temperature, the air conditioner stops operating until the temperature rises above the set temperature and starts operating again. When in cooling mode, the air conditioner does not heat the room. This setting is used in hot weather periods, such as summer.
Auto is automatic mode where the air conditioner can either heat or cool as required. It tries to achieve the set temperature by switching from heat mode to cooling mode automatically. If you don't want to change between heat and cool modes when the seasons change, just set the air conditioner to automatic mode. | {
"domain": "engineering.stackexchange",
"id": 1832,
"tags": "airflow, temperature, electrical"
} |
Why does one not represent states with equivalence classes in the common formalism? | Question: Some context:
Usually, one describes states formally through elements of a Hilbert space $\mathcal{H}$ (e.g. the n-dimensional vector space of complex numbers with the standard basis and standard scalar product). This way the representation is not unique - two representations $|\psi_1\rangle$ and $|\psi_2\rangle=e^{i\gamma}|\psi_1\rangle$ represent the same state.
A unique way of representation is achieved by the equivalence relation $|\psi_1\rangle\sim|\psi_2\rangle:\Leftrightarrow \exists\gamma\in\mathbb{R}:|\psi_1\rangle=e^{i\gamma}|\psi_2\rangle$. Marinescu (978-0-12-383874-2) defines states this way in the first place.
Question:
Why does one usually still not calculate with equivalence classes but with elements of $\mathcal{H}$? Also Marinescu abandons the idea of "rays" (meaning equivalence classes) right after introducing them and goes on by using "the state $|\psi\rangle\in\mathcal{H}$".
Problematic scenario:
Using equivalence classes would come in handy e.g. in the following scenario. Usually in physics, people do identify a mathematical representation with the label of something. In this case, a state is called / labeled $|\psi\rangle$ using the mathematical representation $|\psi\rangle=(1,0)^T$ for example. Since two representations $|\psi_1\rangle$ and $|\psi_2\rangle=e^{i\gamma}|\psi_1\rangle$ represent the same state (since measurement statistics don't differ), one would have two different terms/labels for the same, which is not welcomed. If one used equivalence classes, this problem wouldn't exist.
Edit: I quickly want to touch on the fact that I said "states are represented by elements of a Hilbert space" and not "states are elements of a Hilbert space". In my opinion, this doesn't matter for the question, since "identifying a mathematical representation with the label of something" as explained in my "problematic scenario" is exactly what this is. Although I see many books don't see a more general term "state" above the mathematical entity - as a sidenote: is there a reason for this?
Answer: I guess part of the confusion is that there are multiple levels of modeling involved. At the very top, you have the actual physical phenomena, which may be complicated and have aspects beyond our knowledge.
Then you narrow that down to only the aspects that are relevant to quantum mechanics: your conceptual model of a quantum state - the physical state. This is the thing that ultimately determines the behavior observed in quantum phenomena and measurement outcomes.
It is an abstraction, and, to quote Edsger W. Dijkstra: "The purpose of abstraction is not to be vague, but to create a new semantic level in which one can be absolutely precise." An abstraction defines conceptually what aspects are relevant (and therefore also what can be ignored or assumed away). So, this abstract state does not necessarily completely describe every aspect of the physical system - but it does completely describe it within the confines of the theoretical model.
Then you have the formal mathematical model of the physical state, a ray in projective Hilbert space (the "ray" terminology comes from the concept of projective spaces). But rays are kind of inconvenient to work with, so you represent states with elements of a Hilbert space. Someone more knowledgeable than me will have to explain why exactly that's the case, but I have a feeling that it has to do with the apparatus of linear algebra and calculus being available for use on Hilbert spaces, and also with the fact that this formalism is more readily utilized by computers.
So, now instead of having rays as first class elements of your model, you have equivalence classes "in the background", but you work with the elements of a Hilbert space - and you keep track of the fact that some formally different elements are going to represent the same conceptual physical state (those lying on the same ray / in the same equivalence class). Because of this, you have to be careful about how you manipulate them mathematically (e.g. how you add them, you're careful to normalize things, etc).
Then keeping all that in mind, you use the term "state" somewhat loosely (e.g. you say "state $|\psi\rangle$" instead of "state represented by $|\psi\rangle$", or some such thing), but ultimately, you're really concerned with the physical states. | {
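The point that phase-differing representatives are physically indistinguishable can be made concrete: two Hilbert-space vectors differing by a global phase $e^{i\gamma}$ yield identical Born-rule measurement statistics, so they label the same ray. A minimal sketch (the state and phase below are arbitrary choices):

```python
import cmath

psi1 = [1 / 2**0.5, 1j / 2**0.5]   # a normalized state in C^2
gamma = 0.73                        # arbitrary global phase
psi2 = [cmath.exp(1j * gamma) * a for a in psi1]

# Born-rule probabilities in the computational basis
probs1 = [abs(a) ** 2 for a in psi1]
probs2 = [abs(a) ** 2 for a in psi2]
print(max(abs(p - q) for p, q in zip(probs1, probs2)) < 1e-12)  # True
```

Relative phases between components, by contrast, do change interference outcomes, which is why only the global phase can be quotiented away.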
"domain": "physics.stackexchange",
"id": 94729,
"tags": "hilbert-space, quantum-information, quantum-states"
} |
Messy Sudoku solver | Question: I have made a simple Sudoku solver in python and TKinter. However, I have used code from many different sources, so it is not neat. I can only write ugly code, so it is just about impossible for me to neaten it fully. I have made a start, but it is still bad. Here's what I have so far:
## IMPORTS ##
import tkinter as tk
from tkinter import ttk
from tkinter.messagebox import showinfo
## VARIABLES ##
count = 0
## METHODS ##
def board_to_list(board):
entryboard = [[],[],[],[],[],[],[],[],[]]
for row in range(9):
for item in range(9):
try:
if (board[row][item].get() == ""):
entryboard[row].append(-1)
elif not(int(board[row][item].get()) in [1,2,3,4,5,6,7,8,9]):
raise ValueError
else:
entryboard[row].append(int(board[row][item].get()))
except:
showinfo(message="Invalid sudoku")
return False
return entryboard
def find_next_empty(puzzle):
for row in range(9):
for column in range(9):
if puzzle[row][column] == -1:
return row, column
return None, None
def is_valid(puzzle, guess, row, col):
row_vals = puzzle[row]
if guess in row_vals:
return False
col_vals = [puzzle[i][col] for i in range(9)]
if guess in col_vals:
return False
row_start = (row // 3) * 3
col_start = (col // 3) * 3
for r in range(row_start, row_start + 3):
for c in range(col_start, col_start + 3):
if puzzle[r][c] == guess:
return False
return True
def solve_sudoku(puzzle):
global count
row, col = find_next_empty(puzzle)
if row is None and col is None:
return True
for guess in range(1,10):
count += 1
if is_valid(puzzle, guess, row, col):
puzzle[row][col] = guess
if solve_sudoku(puzzle):
return True
puzzle[row][col] = -1
return False
def is_impossible(puzzle):
for i in range(9):
row = {}
column = {}
block = {}
row_cube = 3 * (i//3)
column_cube = 3 * (i%3)
for j in range(9):
if puzzle[i][j]!= -1 and puzzle[i][j] in row:
return False
row[puzzle[i][j]] = 1
if puzzle[j][i]!=-1 and puzzle[j][i] in column:
return False
column[puzzle[j][i]] = 1
rc= row_cube+j//3
cc = column_cube + j%3
if puzzle[rc][cc] in block and puzzle[rc][cc]!=-1:
return False
block[puzzle[rc][cc]]=1
return True
def handle_solve_click(event):
global count
count = 0
entryboard = board_to_list(board)
if not entryboard:
return False
if not(is_impossible(entryboard)):
showinfo(message="Invalid sudoku")
return False
solve_sudoku(entryboard)
time = count/5
while time > 10000:
time -= 1000
print(time)
pb.start(round(time/100))
window.after(round(time), show_solution, entryboard)
window.after(10, update_progress_bar)
def show_solution(entryboard):
count = 0
for row in range(9):
for item in range(9):
board[row][item].delete(0, tk.END)
board[row][item].insert(0, entryboard[row][item])
print("+" + "---+"*9)
for i, row in enumerate(entryboard):
print(("|" + " {} {} {} |"*3).format(*[x if x != -1 else " " for x in row]))
if i % 3 == 2:
print("+" + "---+"*9)
else:
print("+" + " +"*9)
pb.stop()
pb['value'] = 100
def handle_clear_click(event):
for row in range(9):
for item in range(9):
board[row][item].delete(0, tk.END)
pb['value'] = 0
progress['text'] = "0.0%"
def handle_hint_click(event):
entryboard = board_to_list(board)
otherboard = board_to_list(board)
if not(entryboard):
return False
if not(is_impossible(entryboard)):
showinfo(message="Impossible")
return False
solve_sudoku(entryboard)
for row in range(9):
for item in range(9):
if otherboard[row][item] != entryboard[row][item]:
board[row][item].delete(0, tk.END)
board[row][item].insert(0, entryboard[row][item])
return True
showinfo(message="Already solved")
def update_progress_bar():
if pb['value'] < 100:
progress['text'] = f"{pb['value']}%"
window.after(10, update_progress_bar)
else:
pb['value'] = 100
progress['text'] = "Complete"
## MAIN LOOP ##
if __name__ == "__main__":
window = tk.Tk()
board = [[],[],[],[],[],[],[],[],[]]
entryboard = [[-1,-1,-1,-1,-1,-1,-1,-1,-1,],[-1,-1,-1,-1,-1,-1,-1,-1,-1,],[-1,-1,-1,-1,-1,-1,-1,-1,-1,],[-1,-1,-1,-1,-1,-1,-1,-1,-1,],[-1,-1,-1,-1,-1,-1,-1,-1,-1,],[-1,-1,-1,-1,-1,-1,-1,-1,-1,],[-1,-1,-1,-1,-1,-1,-1,-1,-1,],[-1,-1,-1,-1,-1,-1,-1,-1,-1,],[-1,-1,-1,-1,-1,-1,-1,-1,-1,]]
sudoku_frame = tk.Frame(relief=tk.SUNKEN, borderwidth=5)
for row in range(9):
for item in range(9):
myentry = tk.Entry(master=sudoku_frame, width=1)
row_start = (row // 3)
col_start = (item // 3)
rowpos = row_start + row
colpos = col_start + item + 1
myentry.grid(row=rowpos, column=colpos)
board[row].append(myentry)
sudoku_frame.pack()
progress_frame = tk.Frame()
pb = ttk.Progressbar(master=progress_frame, orient='horizontal', mode='determinate', length=280)
pb.grid(row=0, column=0)
progress = tk.Label(master=progress_frame, text="0.0%")
progress.grid(row=1, column=0)
progress_frame.pack(pady=15)
button_frame = tk.Frame(relief=tk.RIDGE, borderwidth=5)
solve_btn = tk.Button(master=button_frame, text="Solve", relief=tk.FLAT, borderwidth=2)
solve_btn.bind("<Button-1>", handle_solve_click)
solve_btn.grid(row=0,column=0)
clear_btn = tk.Button(master=button_frame, text="Clear", relief=tk.FLAT, borderwidth=2)
clear_btn.bind("<Button-1>", handle_clear_click)
clear_btn.grid(row=0, column=1)
hint_btn = tk.Button(master=button_frame, text="Hint", relief=tk.FLAT, borderwidth=2)
hint_btn.bind("<Button-1>", handle_hint_click)
hint_btn.grid(row=0, column=3)
button_frame.pack()
window.mainloop()
Answer: First, your program works. So that's a great place to start.
Some cosmetic comments:
Put blank lines between your functions. This helps people reading your code to know when something new is happening.
Put spaces between operators like +, -, ==, !=, etc. to make the lines easier to read.
The sudoku boxes in the GUI window are rather narrow and difficult to click on and enter numbers. Making the boxes wider and the text entry centered looks better: myentry = tk.Entry(master=sudoku_frame, width=3, justify='center')
Initializing repetitive structures
In several places, you initialize sudoku data with literal lists and lists of lists like these:
board = [[],[],[],[],[],[],[],[],[]]
entryboard = [[-1,-1,-1,-1,-1,-1,-1,-1,-1,],[-1,-1,-1,-1,-1,-1,-1,-1,-1,],[-1,-1,-1,-1,-1,-1,-1,-1,-1,],[-1,-1,-1,-1,-1,-1,-1,-1,-1,],[-1,-1,-1,-1,-1,-1,-1,-1,-1,],[-1,-1,-1,-1,-1,-1,-1,-1,-1,],[-1,-1,-1,-1,-1,-1,-1,-1,-1,],[-1,-1,-1,-1,-1,-1,-1,-1,-1,],[-1,-1,-1,-1,-1,-1,-1,-1,-1,]]
You can use list comprehension to create these structures. The resulting code is shorter and easier to read.
board = [[] for _ in range(9)]
entryboard = [[-1 for _ in range(9)] for _ in range(9)]
The variable _ is often used as a dummy variable where we don't care about its value. We just need it to store the current value out of range(9).
Similar changes can be made to entryboard in board_to_list(). In that same function, elif not(int(board[row][item].get()) in [1,2,3,4,5,6,7,8,9]): can be written elif not(int(board[row][item].get()) in range(1, 10)):
Initializing the entry grid
I don't understand the math being done in the loop creating all the tk.Entry widgets. This seems to work just as well.
for row in range(9):
for item in range(9):
myentry = tk.Entry(master=sudoku_frame, width=3, justify='center')
myentry.grid(row=row, column=item)
board[row].append(myentry)
Global variables
I'm not talking about count at the top. That variable is modified in several functions and is declared global in those functions. That's fine and suits its purpose.
I'm mostly talking about the board variable that is created on the second line of the if __name__ == "__main__": block. This variable holds all of the text entry widgets that are used to both allow the user to enter numbers and display numbers when the GUI solves a cell. In almost all functions, the board variable is the one created at the bottom of the code. There's no indication where the board variable is defined when looking at the function. I understand that the bind() method that binds the functions to button clicks only take an Event as its only argument, but there's a way to sneak the board and any other variables you need into the callback.
As an example, let's look at the handle_clear_click() function. First, change the parameters of the called function to take a board argument.
def handle_clear_click(event, board, pb, progress):
for row in range(9):
for item in range(9):
board[row][item].delete(0, tk.END)
pb['value'] = 0
progress['text'] = "0.0%"
Then, instead of directly passing just the function to the bind() call, create a lambda expression that provides the parameters that the bind() call cannot.
clear_btn.bind("<Button-1>", lambda event: handle_clear_click(event, board, pb, progress))
Now, there's no ambiguity in where the variables board, pb, and progress come from. Plus, this allows for easier changes later on. What if you want to allow work on more than one sudoku board in the same GUI? While that may not be likely for this program, other programs you write will have to juggle multiple sets of data.
Do the same thing for the other two buttons: Solve and Hint.
Unused data
The entryboard variable in the if __name__ == "__main__": block is not used anywhere, so you can delete it. The line count = 0 in show_solution() either needs to be deleted or have global put before it to reset the global count variable.
Several functions' return values that are not used. For example, handle_solve_click() returns False when the puzzle cannot be solved and nothing (implicitly None) otherwise. These return values don't do anything useful, so a simple return statement with nothing after it is fine.
def handle_solve_click(event):
global count
count = 0
entryboard = board_to_list(board)
if not entryboard:
return
if is_impossible(entryboard):
showinfo(message="Invalid sudoku")
return
solve_sudoku(entryboard)
time = count/5
while time > 10000:
time -= 1000
print(time)
pb.start(round(time/100))
window.after(round(time), show_solution, entryboard)
window.after(10, update_progress_bar)
Semantic bug
The function is_impossible() returns False when the puzzle is impossible to solve. This is the reverse of what I would expect given the name. I would switch True and False in all the return values of this function and fix the handle_hint_click() and handle_solve_click() functions so that the lines if not(is_impossible(entryboard)): do not have a not. The other choice is to rename the function is_possible(). | {
"domain": "codereview.stackexchange",
"id": 44915,
"tags": "python, tkinter, sudoku"
} |
Why exactly are standard potentials additive? | Question: I don't really study chemistry so while my question may be very obvious, its not obvious to me. If we take an electrochemical reaction like
$$\ce{2Fe^2+ + Au^3+ -> 2Fe^3+ + Au+}$$
we can find its standard potential and standard Gibbs energy by summing half reactions whose values are known and can be looked up in tables.
It's intuitive for me that the standard Gibbs energy will be given by a sum of the corresponding reactions, and that I can multiply a reaction by minus one, or whatever, and use that to construct my complete reaction.
The reason I see Gibbs energy as intuitive is because you're essentially just taking
$$\sum_{\mathrm{products}} \mu_{\mathrm{products}} - \sum_{\mathrm{reactants}} \mu_{\mathrm{reactants}}$$
and this will prove the additive property of "summing up reactions".
But I don't see any reason for additivity being true when considering standard potentials. I don't even see why inverting the reaction should invert my potential if my half cell is in equilibrium and is a kind of "whole" that is compared to the standard hydrogen electrode and should therefore have the same potential no matter how I write it.
How am I screwing up here? Why can we add up potentials?
Answer: It helps to view this in terms of the equation
$$\Delta G^\circ = -nFE^\circ$$
Reversing the reaction reverses the sign on $\Delta G^\circ$, and therefore the sign on $E^\circ$. The sign on the potential determines the direction in which electricity flows. By convention, a galvanic cell (spontaneous reaction) has a positive $E^\circ$, while an electrolytic cell (non-spontaneous reaction) has a negative $E^\circ$. The magnitude of the potential stays the same, but a sign is added to signify the direction of current flow.
As far as adding half cells to determine the overall potential, this can be done because of the relationship in the above equation. You can see that $E^\circ$ depends on $\Delta G^\circ$ and $n$. But for instance, if $n$ is multiplied by 2, $\Delta G^\circ$ will be multiplied by 2 as well (there will be twice as much Gibbs energy because the reaction involves twice the electron transfer). This means that you can add half-cell potentials to get the overall cell potential. Note, though, that if you multiply a reaction by a coefficient, you do not multiply its potential, as you would for $\Delta G^\circ$; the potential does not depend on the stoichiometric coefficients, only on what the cathode and anode are.
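This bookkeeping can be sketched numerically via $\Delta G^\circ = -nFE^\circ$. The half-cell potentials below are illustrative values only (check a standard-potential table for exact numbers); the point is that doubling the Fe half-reaction doubles its Gibbs energy but leaves the cell potential equal to the simple difference of half-cell potentials:

```python
F = 96485.0  # Faraday constant, C/mol

def gibbs(n, e0):
    """Standard Gibbs energy from a standard potential: dG = -n F E0 (J/mol)."""
    return -n * F * e0

# Illustrative standard reduction potentials (look up exact table values):
e_fe = 0.77  # Fe3+ + e-  -> Fe2+
e_au = 1.40  # Au3+ + 2e- -> Au+

# Cell: 2 Fe2+ + Au3+ -> 2 Fe3+ + Au+
# Gibbs energies are additive; the Fe half-reaction is reversed and doubled.
dg_cell = gibbs(2, e_au) - 2 * gibbs(1, e_fe)
n_cell = 2                       # electrons transferred in the overall reaction
e_cell = -dg_cell / (n_cell * F)
print(round(e_cell, 2))          # 0.63, i.e. simply e_au - e_fe
```

The stoichiometric factor of 2 cancels between $\Delta G^\circ$ and $n$, which is exactly why potentials are intensive while Gibbs energies are extensive.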
"domain": "chemistry.stackexchange",
"id": 17455,
"tags": "electrochemistry, energy, free-energy, reduction-potential"
} |
Forces on a spinning ball | Question: The drag force opposes the direction of velocity, and the lift force is perpendicular to both the drag force and the rotation direction.
So if a tennis ball is rotating clockwise and moving to the left with velocity v, the drag force is acting to the right, and the lift force is acting downwards since for clockwise rotation the lift force acts down. Or is the lift force acting upwards since the ball is moving left and spinning clockwise?
Now let's consider the tennis ball rotating clockwise and purely moving upwards, in the positive y direction only. The drag force is downwards, but what direction is the lift force now? Is it to the right?
Answer: To decide the direction of the lift force on a spinning ball, you should know about the Magnus force (and also about Bernoulli's principle, linked in dnaik's answer), which says that if the kinetic energy of the moving fluid at a certain fixed height $h$ is increased, then the pressure of the fluid decreases.
When the tennis ball is rotating clockwise and moving with velocity $v$ in the left direction , the air will be flowing towards the right direction with respect to the ball and since it is rotating clockwise, the air above the ball will have more velocity than the air below the ball. Due to this the kinetic energy of the air above is more and so air pressure is less than the pressure of the air below the ball. Due to this pressure difference, a net force acts on the ball in the upward direction (shown by yellow arrow in the gif) and that can prevent the ball from falling down due to gravity.
Same process can be applied during upward motion.
When the ball is thrown upward and also rotating clockwise , the air is coming down with respect to the ball and because of the spin there exists a pressure difference and hence the ball experiences a force in the right direction.
Note : To visualise the direction of magnus force in the case of upward motion , look at the same animation by rotating your screen (if possible otherwise just imagine the image to be rotated in clockwise direction) by $90°$.
Hope it helps ☺️. | {
"domain": "physics.stackexchange",
"id": 71900,
"tags": "newtonian-mechanics, rotational-dynamics, projectile, drag, lift"
} |
PSNR of two images of different size in matlab | Question: I performed bicubic interpolation on a 256*256 image (img)
dest = interp2(img,'bicubic')
and I got a 511 * 511 image. I want to compute the PSNR of a 512 * 512 image (original) and the 'dest' image as follows:
original = double(original);
dest = double(dest);
[M N] = size(original);
error = original - dest;
MSE = sum(sum(error .* error)) / (M * N);
if(MSE > 0)
PSNR = 10*log(255*255/MSE) / log(10);
disp(['PSNR = ', num2str(PSNR)])
else
PSNR = 99;
disp(['PSNR = ', num2str(PSNR)])
end
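In a language-agnostic form, the PSNR computation in the snippet above amounts to the following (a minimal pure-Python sketch, keeping the question's convention of reporting 99 for identical images):

```python
import math

def psnr(original, dest, peak=255.0):
    """PSNR between two equal-sized images given as 2-D lists of pixel values."""
    flat_o = [p for row in original for p in row]
    flat_d = [p for row in dest for p in row]
    mse = sum((a - b) ** 2 for a, b in zip(flat_o, flat_d)) / len(flat_o)
    if mse == 0:
        return 99.0  # sentinel for identical images, as in the snippet above
    return 10.0 * math.log10(peak * peak / mse)

print(psnr([[255, 255]], [[254, 255]]))  # MSE of 0.5 -> about 51.1 dB
```

As with the MATLAB code, this only works once both images have the same dimensions.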
But I'm getting an error due to the different matrix dimensions. How do I avoid this error? Is it possible to calculate the PSNR of images with different sizes? Please help.
Answer: When interpolating, why not use the same size as the image to compare with:
% some random inputs
original = rand(435,782);
img = rand(100,200);
% position where to interpolate
[x,y] = meshgrid(linspace(1,size(img,2),size(original,2)),linspace(1,size(img,1),size(original,1)));
% interpolate
dest = interp2(img,x,y,'bicubic');
% display size of result
disp(size(dest)); | {
"domain": "dsp.stackexchange",
"id": 2185,
"tags": "image-processing, matlab, interpolation, matrix"
} |
Language Processing: Determine if one paragraph is relevant to another paragraph | Question: Context: I want to determine if someone's written review contains content that is relevant to a paragraph that they are reviewing.
To do so, I am trying to determine if one paragraph is relevant to another paragraph. I initially tried to use TF-IDF to calculate the relevancy, but I think TF-IDF works well for determining if one paragraph is relevant to a whole set of paragraphs. I only want to determine if two paragraphs are relevant with each other.
What would be a good approach for this problem?
Answer: A very simple approach can be:
Calculate tf-idf vector for sentence 1 and 2.
Calculate vector similarity (Cosine similarity) of these 2 vectors.
This is a general approach and works for any representational vector.
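The two steps above can be sketched without any NLP library; here is a minimal pure-Python version (whitespace tokenization and the smoothed idf over a two-document "corpus" are simplifying assumptions):

```python
import math

def tfidf_cosine(p1, p2):
    """Cosine similarity of the tf-idf vectors of two paragraphs."""
    docs = [p1.lower().split(), p2.lower().split()]
    vocab = set(docs[0]) | set(docs[1])
    # idf over the two-document "corpus" (add 1 so words shared by both still count)
    idf = {w: math.log(2.0 / sum(1 for d in docs if w in d)) + 1.0 for w in vocab}
    vecs = [{w: d.count(w) * idf[w] for w in vocab} for d in docs]
    dot = sum(vecs[0][w] * vecs[1][w] for w in vocab)
    norms = [math.sqrt(sum(v * v for v in vec.values())) for vec in vecs]
    return dot / (norms[0] * norms[1]) if all(norms) else 0.0

print(tfidf_cosine("the cat sat on the mat", "the dog sat on the rug"))
```

A score near 1 suggests the paragraphs are closely related; near 0 suggests little lexical overlap.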
For a more complex one, check semantic similarity with BERT post from Keras blog. | {
"domain": "ai.stackexchange",
"id": 2914,
"tags": "natural-language-processing, resource-request"
} |
How to compute generator matrix from a parity check matrix? | Question: I have a parity-check matrix ("H") that is not in canonical form (the identity matrix is not on the right side).
I'm trying to programmatically calculate the generator matrix ("G") from it.
The Wikipedia entry on Hamming codes talks about the relationship between parity check matrixes and generator matrixes:
http://en.wikipedia.org/wiki/Hamming_code
It says that H*transpose(G)=0
I thought I could figure out G by taking the null space of H. However, I end up with many fractional numbers and don't know how to use them. The examples I've seen use Gauss-Jordan elimination to put the matrix into row-echelon form. However, shouldn't I be able to do it numerically using SVD or something like that?
Here's some code where I multiply a message by a generator matrix (the example is taken from Wikipedia). I would like to get the same message when I multiply by the generator that I have calculated.
import numpy as np
import scipy.linalg
def nullspace(A, atol=1e-13, rtol=0):
A = np.atleast_2d(A)
u, s, vh = np.linalg.svd(A)
tol = max(atol, rtol * s[0])
nnz = (s >= tol).sum()
ns = vh[nnz:].conj().T
return ns
H = np.mat( [[1,1,1,1,0,0],
[0,0,1,1,0,1],
[1,0,0,1,1,0]] )
G = np.mat( [[1,0,0,1,0,1],
[0,1,0,1,1,1],
[0,0,1,1,1,0]] )
M = np.mat( [1,0,1] )
print "Message * generator=", M*G
GT2 = nullspace(H)
G2 = GT2.T
print "Message * calculated generator=", M*G2
Answer: With forward-error-correcting coding, one is working in a finite field, typically the field of two elements denoted by GF$(2)$ or $\mathbb F_2$. So, there are no fractional numbers
and no fancy methods such as singular value decomposition: you use bit-by-bit XOR additions of the rows of $H$ and Gauss-Jordan elimination to reduce $H$ to row-echelon form $[P_{(n-k)\times k} \mid I_{(n-k)\times(n-k)}]$.
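As a sketch, the same XOR-based elimination can be done in plain Python. This takes the equivalent null-space route (the rows returned are valid generator rows, though not necessarily in the systematic form $[I \mid P^T]$), using the $H$ from the question:

```python
def gf2_nullspace(H):
    """Rows spanning the null space of H over GF(2), via Gauss-Jordan with XOR."""
    m, n = len(H), len(H[0])
    R = [row[:] for row in H]
    pivot_cols, row = [], 0
    for col in range(n):
        pivot = next((r for r in range(row, m) if R[r][col]), None)
        if pivot is None:
            continue
        R[row], R[pivot] = R[pivot], R[row]
        for r in range(m):
            if r != row and R[r][col]:
                R[r] = [a ^ b for a, b in zip(R[r], R[row])]  # bit-by-bit XOR row addition
        pivot_cols.append(col)
        row += 1
    free_cols = [c for c in range(n) if c not in pivot_cols]
    basis = []
    for fc in free_cols:  # one basis vector per free column
        v = [0] * n
        v[fc] = 1
        for r, pc in enumerate(pivot_cols):
            v[pc] = R[r][fc]  # back-substitute the pivot variables (mod 2)
        basis.append(v)
    return basis

H = [[1, 1, 1, 1, 0, 0],
     [0, 0, 1, 1, 0, 1],
     [1, 0, 0, 1, 1, 0]]
G = gf2_nullspace(H)
# H * G^T = 0 over GF(2): every generator row is orthogonal to every check row
assert all(sum(h[i] * g[i] for i in range(6)) % 2 == 0 for h in H for g in G)
```

This sidesteps SVD entirely: all arithmetic stays in GF(2), so no fractional numbers appear.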
Then, set $G = [I_{k\times k} \mid (P^T)_{k\times(n-k)}]$ and you are done. (For
nonbinary fields, use $[I \mid -P^T]$). Note
that all arithmetic in the verification $HG^T = 0$ is also finite field arithmetic
with $1\cdot 1 =1$ and $1+1 = 0 = 1\oplus 1$ for the case of GF$(2)$. | {
"domain": "dsp.stackexchange",
"id": 692,
"tags": "linear-algebra, forward-error-correction, ecc"
} |
how to install geometry_msgs on raspberry pi | Question:
can someone show me how to install geometry_msgs on raspberry Pi 2?
Originally posted by Vinh K on ROS Answers with karma: 110 on 2017-03-01
Post score: 0
Original comments
Comment by raspet on 2017-03-02:
What is your operating system? I encountered lots of problems with Raspbian and switched to Ubuntu MATE 14.04, and everything worked like a charm.
Comment by Vinh K on 2017-03-02:
I have Raspbian Jessie. But having many issues getting packages install. I'll use Ubuntu Mate. Thanks
Answer:
Download the common messages from here: https://github.com/ros/common_msgs and place them in your src folder. Then run catkin_make in your workspace. It will install all the common message types, including geometry_msgs. If you only want geometry messages, then download just the geometry_msgs folder.
Originally posted by Ameer Hamza Khan with karma: 96 on 2017-06-26
This answer was ACCEPTED on the original site
Post score: 3 | {
"domain": "robotics.stackexchange",
"id": 27178,
"tags": "raspberrypi, geometry-msgs, rasbperrypi"
} |
gazebo-1.4 with player example - errors | Question:
Hi,
I'm trying to get player to work with gazebo to control a pioneer robot. I used the example provided with gazebo-1.4 in the examples/player/position2d folder. Here are the problems I encountered:
The .world was in the old sdf format, so I tried using gzsdf to convert it. gzsdf failed because the filename did not end with .sdf. So after renaming the world file, it worked.
Player was able to connect to gazebo and playerv opened its window, but when I tried to subscribe to position2d it gave the following error:
GazeboDriver::GazeboDriver
Gazebo Plugin driver creating 1 device
6665.4.0 is a position2d interface.
Listening on ports: 6665
accepted TCP client 0 on port 6665, fd 13
libprotobuf ERROR google/protobuf/wire_format.cc:1059] Encountered string containing invalid UTF-8 data while parsing protocol buffer. Strings must contain only UTF-8; use the 'bytes' type for raw bytes.
libprotobuf ERROR google/protobuf/message_lite.cc:123] Can't parse message of type "gazebo.msgs.Pose" because it is missing required fields: position, orientation
closing TCP connection to client 0 on port 6665
Originally posted by logicalguy on Gazebo Answers with karma: 73 on 2013-03-05
Post score: 2
Answer:
Here is a pull request that should fix this problem:
https://bitbucket.org/osrf/gazebo/pull-request/352/update-player-position2d-interface-to-use
Originally posted by nkoenig with karma: 7676 on 2013-03-06
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by logicalguy on 2013-03-08:
Hi, I installed from mercurial today and ran the test again. It works - I can control the robot using player, but I still get error messages such as:
libprotobuf ERROR google/protobuf/message_lite.cc:123] Can't parse message of type "gazebo.msgs.Pose" because it is missing required fields: position, orientation
libprotobuf ERROR google/protobuf/wire_format.cc:1059] Encountered string containing invalid UTF-8 data while parsing protocol buffer. Strings must contain only UTF-8; use the 'bytes' | {
"domain": "robotics.stackexchange",
"id": 3088,
"tags": "gazebo-1.4"
} |
Oxidation of an aldehyde - how is this a loss of electrons? | Question: I've been reviewing redox reactions and have been using the OIL RIG acronym to classify oxidations vs. reductions. Generally this makes sense, and many resources I've looked at repeat the "Oxidation Is Loss - Reduction Is Gain of electrons" statement.
But, I also know that aldehydes can be oxidized to carboxylic acids:
$$
\ce{RCHO -> RCO2H}
$$
The compound $\ce{RCHO}$ does not lose electrons when oxidized to $\ce{RCO2H}$ -- in fact, it gains a whole new oxygen atom. So, there appears to be a net gain of electrons here. Does OIL RIG not always hold, or am I misunderstanding something?
Answer: OILRIG always holds, but you need to be careful at what you are actually looking at. When we say a compound gets oxidised, that is technically not true. In this framework, only atoms get oxidised or reduced.
In the reaction
$$\ce{R-CHO + [O] -> R-COOH}$$
we usually omit the reduction part, because it's always the same: the oxygen.
To identify the whole redox process, use oxidation numbers. Keep in mind that these are simply a bookkeeping tool and the partial charges of the atoms involved might be quite different. Setting $\ce{R {=} CH3}$, the above reaction boils down to this:
$$\ce{
\overset{+1}{H}_3\overset{-3}{C}-\overset{\color{\red}{+1}}{C}\overset{+1}{H}\overset{-2}{O}
+ [\overset{\color{\green}{0}}{O}]
->
\overset{+1}{H}_3\overset{-3}{C}-\overset{\color{\red}{+3}}{C}\overset{-2}{O}\overset{\color{\green}{-2}}{O}\overset{+1}{H}
}$$
Now you can see that the carbonyl carbon increases its oxidation state from $\color{\red}{+1}$ to $\color{\red}{+3}$, so formally it lost $\pu{\color{\red}{2} e^-}$. The oxidising agent, indicated with $\ce{[O]}$, at the same time decreases its oxidation state from $\color{\green}{0}$ to $\color{\green}{-2}$, therefore formally gaining $\pu{\color{\green}{2} e^-}$. The redox reaction is complete.
Let's look at a real world example, the Baeyer-Villiger oxidation:
As you can see, the ketone gets oxidised; more precisely, the carbonyl carbon gets oxidised as its oxidation state increases. On the other side, the peroxide oxygens get reduced as their oxidation state decreases.
"domain": "chemistry.stackexchange",
"id": 8456,
"tags": "organic-chemistry, redox"
} |
what machine/deep learning/ nlp techniques are used to classify a given words as name, mobile number, address, email, state, county, city etc | Question: I am trying to generate an intelligent model which can scan a set of words or strings and classify them as names, mobile numbers, addresses, cities, states, countries and other entities using machine learning or deep learning.
I searched for approaches, but unfortunately didn't find one to follow. I tried a bag-of-words model and GloVe word embeddings to predict whether a string is a name, a city, etc.
However, I didn't succeed with the bag-of-words model, and with GloVe a lot of names are not covered in the embedding. For example: "lauren" is present in GloVe but "laurena" isn't.
I did find this post here, which had a reasonable answer, but I couldn't work out the approach used to solve that problem apart from the fact that NLP and SVM were used to solve it.
Any suggestions are appreciated
Thanks and Regards,
Sai Charan Adurthi.
Answer: Applying common categorical labels to words is typically called Named-entity recognition (NER).
NER can be done by static rules (e.g., regular expressions) or learned rules (e.g., decision trees). These rules are often brittle and do not generalize. Conditional Random Fields (CRF) are often a better solution because they are able to model the latent states of languages. Current state-of-the-art performance in NER is done with a combination of Deep Learning models.
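For the pattern-like classes in the question (emails, mobile numbers), the static-rule layer mentioned above is often enough before falling back to a learned model; a minimal sketch (the regular expressions here are illustrative, not production-grade):

```python
import re

# Illustrative patterns only -- real systems use NER models (spaCy, Stanford NER)
# for names/cities and far more robust rules for the pattern-like fields.
PATTERNS = {
    "email": re.compile(r"^[\w.+-]+@[\w-]+\.[\w.]+$"),
    "phone": re.compile(r"^\+?\d[\d\s()-]{6,}\d$"),
}

def rule_based_label(token):
    for label, pattern in PATTERNS.items():
        if pattern.match(token):
            return label
    return "other"  # hand off to a learned model for names, cities, etc.

print(rule_based_label("foo@example.com"))   # email
print(rule_based_label("+1 555-123-4567"))   # phone
print(rule_based_label("laurena"))           # other
```

Rules of this kind handle the brittle-but-regular fields cheaply, leaving the CRF or deep model to deal with the genuinely ambiguous entities.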
The Stanford Named Entity Recognizer and spaCy are packages to perform NER. | {
"domain": "datascience.stackexchange",
"id": 2904,
"tags": "machine-learning, deep-learning, text-mining, nlp"
} |
Where is the flaw in deriving Gauss's law in its differential form? | Question: From the divergence theorem for any vector field E,
$\displaystyle\oint E\cdot da=\int (\nabla\cdot E) ~d\tau$
and from Gauss's law
$\displaystyle\oint E\cdot da=\frac{Q_{enclosed}}{\epsilon_0}=\int \frac{\rho}{\epsilon_0}~d\tau$
Hence,
$\displaystyle\int\frac{\rho}{\epsilon_0}d\tau=\int (\nabla\cdot E)~d\tau$
Textbooks conclude from the last equation that
$\displaystyle \nabla\cdot E=\frac{\rho}{\epsilon_0}$
My question is how can we conclude that the integrands are the same? Because I can think of the following counter example, assume
$\displaystyle \int_{-a}^a f(x)~dx=\displaystyle \int_{-a}^a [f(x)+g(x)]~dx$
where $g(x)$ is an odd function. Obviously the 2 integrals are equal but we cannot conclude that $f(x)$ is equal to $f(x)+g(x)$ so where is the flaw?
Answer: The equation
$$\displaystyle\int_{V}\frac{\rho}{\epsilon_0}d\tau=\int_{V}(\nabla\cdot E)~d\tau$$
holds for every region $V$ in space over which the integration is performed. That is why it follows that the integrands, assumed continuous, are equal.
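The localization step behind "the integrands are equal" can be made explicit; a sketch of the standard continuity argument:

```latex
\text{Let } f \equiv \nabla\cdot E - \frac{\rho}{\epsilon_0},
\quad\text{so that}\quad \int_V f \, d\tau = 0 \ \text{ for every region } V.
% If f(p) > 0 at some point p, continuity gives a small ball B_eps(p)
% on which f > f(p)/2, whence
\int_{B_\varepsilon(p)} f \, d\tau
  \;\geq\; \frac{f(p)}{2}\,\operatorname{vol}\bigl(B_\varepsilon(p)\bigr)
  \;>\; 0,
% contradicting the hypothesis with V = B_eps(p). The same argument applied
% to -f rules out f(p) < 0, so f vanishes identically.
```

This is exactly where the counterexample fails: with the domains restricted to $[-a,a]$, one cannot shrink $V$ to an arbitrary ball around a point.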
Your counterexample is invalid, because the integrals are equal only when the domain of integration is of the form $[-a,a]$. | {
"domain": "physics.stackexchange",
"id": 25364,
"tags": "homework-and-exercises, electrostatics, gauss-law"
} |
Can a single photon have an energy density? | Question: In one question, which is further irrelevant for thís question, the comment was made that a single photon can have an energy density.
I didn't agree. Off course the wavefunction is spread out in space, which seems to suggest that the energy is spread out in space also, giving the photon an energy density.
The question is though if the energy is really spread out over space. I think not so because if we look at the photon (without disturbing it, so this can happen only in our minds), the photon erratically (the wavefunction is related to the probability density to find it at some infinitesimal interval) jumps from one place to another, within the confines of its wavefunction and without us knowing it. Because of this, a photon density isn't a well-defined concept. At least, according to me. I hope there is someone who disagrees!
Answer: I'm not sure about a photon, but it's possible to associate a charge density with an infinitesimally small source of charge:
$\rho=q\,\delta(\vec{x}-\vec{u}t)$
This represents the charge density, $\rho$, in terms of the Dirac delta function. The delta function is roughly an infinitely high, infinitely narrow spike when its argument is zero. Here the argument vanishes along the trajectory of a point charge moving at constant velocity $\vec{u}$.
By definition, a photon has a single energy, and therefore a single frequency and, consequently, a single wavelength.
$E=\hbar\omega$
$c=\omega/k$
So the wave number is: $k=E/\hbar c$
We can use the dirac delta function again to represent the photon in momentum space:
$$\phi(k)=\delta(k-E/\hbar c)$$
The most generic one dimensional wave function in the space domain is :
$\psi(x)=\int_{-\infty}^{\infty} \phi(k)e^{i(kx-Et/\hbar)}dk$
In this specific case:
$$\psi(x)=\int_{-\infty}^\infty \delta(k-E/\hbar c)e^{i(kx-Et/\hbar)}dk=e^{\frac{iE}{\hbar c}(x-ct)}$$
This cannot be normalized but does hold kinematic information of the photon, e.g. $i\hbar\partial\psi/\partial t=E\psi$.
It in some sense moves at the speed of light.
So the 3D analog might be the best you can do in undergraduate quantum mechanics.
By the uncertainty principle, an infinitely precise momentum/energy implies infinitely imprecise knowledge of its location.
You don't know where the photon is so you don't know the density. So it seems your intuition is correct. | {
"domain": "physics.stackexchange",
"id": 57059,
"tags": "energy, photons, density"
} |
How do you calculate sample difference in terms of sensor signals? | Question: A paper I read called Preprocessing Techniques for Context Recognition from Accelerometer Data refers to sample difference as the delta value between signals in a pairwise arrangement of samples that allows a basic comparison between the intensity of user activity.
How would you do the pairwise arrangement? Would it require you to have different files of data representing different classes?
For example, I have a CSV:
1495573445.162, 0, 0.021973, 0.012283, -0.995468, 1
1495573445.172, 0, 0.021072, 0.013779, -0.994308, 1
1495573445.182, 0, 0.020157, 0.015717, -0.995575, 1
1495573445.192, 0, 0.017883, 0.012756, -0.993927, 1
where the second, third, and fourth columns are the axes of accelerometer data.
I have several files named for one gesture and several others for another gesture and would like to use this sample difference statistic to help classify the data.
Also as a secondary question, this was listed as a preprocessing technique but it sounds like it's more of a feature. Could I get clarification on that as well?
Answer: Differencing is a common preprocessing step for time series. Here's an example in python:
from pandas import DataFrame
data=[[1495573445.162, 0, 0.021973, 0.012283, -0.995468, 1],
[1495573445.172, 0, 0.021072, 0.013779, -0.994308, 1],
[1495573445.182, 0, 0.020157, 0.015717, -0.995575, 1],
[1495573445.192, 0, 0.017883, 0.012756, -0.993927, 1]]
df = DataFrame(data, columns=['timestamp', 'foo', 'x', 'y', 'z', 'bar']).set_index('timestamp')
df.assign(dx=df.x.diff(), dy=df.y.diff(), dz=df.z.diff())
The result:
foo x y z bar dx dy dz
timestamp
1.495573e+09 0 0.021973 0.012283 -0.995468 1 NaN NaN NaN
1.495573e+09 0 0.021072 0.013779 -0.994308 1 -0.000901 0.001496 0.001160
1.495573e+09 0 0.020157 0.015717 -0.995575 1 -0.000915 0.001938 -0.001267
1.495573e+09 0 0.017883 0.012756 -0.993927 1 -0.002274 -0.002961 0.001648 | {
"domain": "datascience.stackexchange",
"id": 1761,
"tags": "feature-selection, data-cleaning"
} |
How to optimize the molecular geometry under PyMol? | Question: I already know about the Optimize.py plugin, which seems to be working well, but I don't see an option to prevent it from changing double bonds to single ones. If Optimize isn't capable of that, how can I obtain a minimized molecule from a hand-built one?
As far as I know Optimize uses Open Babel. I could imagine that the problem is, that it doesn't export the state of bonds at all.
Answer: Disclaimer: I don't typically use PyMol, so I only made a quick look at the Optimize.py script.
You're right that the problem is with the bond orders. The Optimize.py script uses a PDB file as an intermediate representation that it sends to Open Babel for force field optimization and other properties.
The problem is that PDB does not support bond orders. So yes, you don't get double-bond information back, and I can't verify if Open Babel is getting bond orders from PyMol and Optimize.py.
I looked through the PyMol commands and it seems as if there's a way to write an MDL Molfile (which would have bond orders) but not read one - PDB is the only option for reading and writing in a PyMol script.
My suggestions:
Either push the PyMol developers to support other formats (e.g., Sybyl .mol2 or MDL Mol) in the scripting commands.
Use another program like Avogadro to draw and optimize the geometry and export to read into PyMol later.
(Disclaimer: I'm an Avogadro and Open Babel developer, but there are many programs that could be used.) | {
"domain": "chemistry.stackexchange",
"id": 4987,
"tags": "software"
} |
Why should collected blood from patients be analyzed as soon as possible? | Question: I'm looking for the possible factors influencing blood samples. What happens to the blood cells esp. WBCs/PBMCs after 8 hours of blood draw? For example, I know that granulocytes may cause oxidative stress and that may affect the viability of WBCs. What else could have an effect on blood samples that we can't store them for a long time at room temperature? Thank you.
Answer: Biological samples, including whole blood, are not at equilibrium. Biological processes, e.g., cellular metabolism, continue. Cells interact with their environment and a tube is not the same thing as the human vascular system. For one, in humans, on average, the entire volume of blood recirculates once every minute (see Costanzo Physiology Ch 4). Recirculation, with the associated access to nutrients, waste drop off, and exposure to various different compartments doesn't happen in a specimen tube. Nutrients are consumed, waste collects, cellular stress occurs, intracellular components leak out, cellular morphology changes... You can slow this down by putting samples on ice and collecting blood in a tube with components that preserve the sample for the specific test. There's a nice discussion of this in Lange's Guide to Diagnostic Tests, Chapter 1.
Given ongoing biological process in a very different environment, you can understand how interpretation of a particular test result depends on consistent collection and storage methods. Different tests are going to be more or less sensitive to time since collection and storage methods. There's a good deal of data on this for whole blood in this review. | {
"domain": "biology.stackexchange",
"id": 8969,
"tags": "human-biology, hematology"
} |
RViz Crashes with Out of Memory Exception | Question:
I am repeatedly publishing Markers and MarkerArrays to RViz and then, randomly, this error occurs.
Does anyone know what might be causing this?
I am running RViz from inside a docker container using the X11 server of my Ubuntu 12.04 host. The docker container is running ROS Jade and Ubuntu 14.04. RViz reports the OpenGL version as 4.5.
[ WARN] [1454370341.533289122]: OGRE EXCEPTION(7:InternalErrorException): Index Buffer: Out of memory in GLHardwareIndexBuffer::lock at /build/buildd/ogre-1.8-1.8.1+dfsg/RenderSystems/GL/src/OgreGLHardwareIndexBuffer.cpp (line 121)
Qt has caught an exception thrown from an event handler. Throwing
exceptions from an event handler is not supported in Qt. You must
reimplement QApplication::notify() and catch all exceptions there.
terminate called after throwing an instance of 'Ogre::InternalErrorException'
what(): OGRE EXCEPTION(7:InternalErrorException): Index Buffer: Out of memory in GLHardwareIndexBuffer::lock at /build/buildd/ogre-1.8-1.8.1+dfsg/RenderSystems/GL/src/OgreGLHardwareIndexBuffer.cpp (line 121)
bash: line 1: 2140 Aborted (core dumped) rosrun rviz rviz --no-stereo -d /home/bidski/catkin_ws/src/structure_modeller/structure_modeller.rviz
Originally posted by Bidski on ROS Answers with karma: 96 on 2016-02-01
Post score: 0
Original comments
Comment by Javier V. Gómez on 2016-02-02:
Are you properly publishing the markers? That is, deleting/rewriting them ?
Comment by Bidski on 2016-02-02:
Does publishing a marker with the same namespace and id not overwrite the previous one?
Comment by ahendrix on 2016-03-05:
How much memory do you have? Have you checked that your machine still has free RAM when rviz crashes?
Answer:
Does this error appear while starting rviz, or well after it's running?
Is this an issue that --env="QT_X11_NO_MITSHM=1" would solve?
See the end of this ros wiki tutorial:
wiki.ros.org/docker/Tutorials/GUI
The other thing, how are you enabling hardware acceleration for opengl in the container?
I've only been able to install the graphics driver in the image, and mount the GPU devices to the container to get dedicated hardware to work. I though I got away without this if my machine was using an integrated graphics instead.
More on those topics:
wiki.ros.org/docker/Tutorials/Hardware%20Acceleration
github.com/NVIDIA/nvidia-docker/issues/11
Originally posted by ruffsl with karma: 1094 on 2016-03-05
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 23626,
"tags": "ros, librviz, docker"
} |
Why do humans suffer anxiety when they view "Trypophobia trigger images"? | Question: When you type Trypophobia Trigger Images in google, you see a variety of images with irregular lumps and bumps among some more gory images.
Many people report that these images induce phobia like symptoms of anxiety.
Why do we get anxious when exposed to these images? What advantage is there to be had from this response?
I find the reasons like this ABC news report on ants and spiders. But still didn't get it any info from it.
Answer: Trypophobia is not a recognised specific anxiety disorder (Washington Post). It is worth mentioning that anyone can have a phobia to anything, this is merely a question of whether many people associate these spatial patterns with anxiety. Nevertheless, the response of individuals to these images can be quantified (Le et al., 2015). Ultimately the findings show that a response of trypophobia is not correlative with anxiety. Note that here we are discussing anxiety in a phobia response test. Typically anxiety manifests as sweating, dizziness, headaches, racing heartbeats, nausea, fidgeting, uncontrollable crying or laughing and drumming on a desk. This is not merely feeling uncomfortable.
One hypothesis was that these images had irregular spatial patterns that cause revulsion. A study found that in nature some animals and plants may use this patterning as a warning mechanism and that it is associated with poisonous animals (spatial pattern quantification of 10 poisonous animals versus 10 control animals p=0.03), and indeed spiders were among those that use irregular patterns (Cole & Wilkins, 2013). Note that this hypothesis was presented in a psychology journal so the evolutionary mechanisms remain, in my opinion, not fully explored and scrutinised.
(The original answer included, behind a spoiler box, an image of a lotus seed head, which has the typical irregular spatial patterning presented in the 2013 study. This image is often reported as inducing trypophobia.)
In summary, humans do not reliably feel anxious when viewing these images. It also remains unclear why some people do get anxious or uncomfortable when viewing these images. It is perhaps to do with an aversion to some potentially harmful animals, but evidence remains scarce.
"domain": "biology.stackexchange",
"id": 6922,
"tags": "behaviour, human-evolution, psychology"
} |
Equivalence of the definition of a future set | Question: Let $(M,g)$ be a spacetime. Let's say that $F$ is a future set if $F = I^+(S)$ for some set $S$. I'm trying to check the equivalence "$F$ is a future set if and only if $I^+(F) \subseteq F$".
If $F$ is a future set, then $F = I^+(S)$ implies $I^+(F) = I^+(I^+(S))\subseteq I^+(S) = F$, as wanted, since the last inclusion is easy to check and holds for arbitrary sets.
I can check the other implication if we assume from the start that $F$ is open. Then $S = F$ would work, since the inclusion $I^+(F) \subseteq F$ is given, and the inclusion $F \subseteq I^+(F)$ is verified by taking $x \in F$ and a small geodesic ball centered in $x$ contained in $F$ (this is possible since we assume $F$ open); then we use the exponential map to send a timelike geodesic to the past of $x$, and any point on that curve will testify that $x \in I^+(F)$, as wanted.
I do not know how to proceed if we don't assume that $F$ is open from the start. Help?
Answer: The equivalence you propose is false if $F$ is not open. Actually, your second statement is the true definition: $F\subset M$ is a future set if $I^+(F)\subset F$. Your first statement clearly can only be true if $F$ is open, for $I^+(x)$ is open for all $x\in M$, and only in such a case can it be equivalent to your second statement. However, there are subsets $F\subset M$ that are not open and nonetheless satisfy $I^+(F)\subset F$ - take e.g. $$F=\{x=(x^0,\ldots,x^{n-1})\in\mathbb{R}^{1,n-1}=(\mathbb{R}^n,g=\text{diag}(-+\cdots+))\ |\ x^0\geq 0\}\ ,$$ which is clearly closed and indeed satisfies $I^+(F)\subset F$. Of course, the inclusion is strict if $F$ is not open. | {
"domain": "physics.stackexchange",
"id": 42113,
"tags": "general-relativity, differential-geometry, topology"
} |
Verifying Why Python Rust Module is Running Slow | Question: I am working on converting some python code over to Rust, and I have come across a bit of a peculiarity in the way that my code is behaving. Namely, the module that I have written in Rust is much slower than the same code written in Python. I originally thought that this was due to the fact that the module that I was writing had a lot of overhead from converting the Python list object to a Rust vector but now I am not so sure. Even when I scale this for large grid graphs (400x400 or larger), the overhead seems to just scale with it, so there might be something else wrong with the code. Here is the bit that appears to be causing the issue:
use pyo3::prelude::*;
use pyo3::wrap_pyfunction;
use rand::seq::SliceRandom;
use rand::thread_rng;
use std::collections::HashMap;
use pyo3::types::PyList;
fn current_component(n: usize, component_merge_dict: &mut HashMap<usize, usize>) -> usize
{
let mut nodeid = n;
let mut nodeids_to_update = Vec::with_capacity(component_merge_dict.len());
while let Some(&next_nodeid) = component_merge_dict.get(&nodeid)
{
if nodeid == next_nodeid { break; }
nodeids_to_update.push(nodeid);
nodeid = next_nodeid;
}
for nid in nodeids_to_update
{
component_merge_dict.insert(nid, nodeid);
}
nodeid
}
#[pyfunction]
fn rand_kruskal_memo(_py: Python,
py_node_list: &PyList,
py_edge_list: &PyList) -> PyResult<Vec<((i32, i32), (i32, i32))>>
{
let node_list: Vec<(i32, i32)> = py_node_list.extract()?;
let edge_list: Vec<((i32, i32), (i32, i32))> = py_edge_list.extract()?;
let mut tree_edge_list: Vec<((i32, i32), (i32, i32))> = Vec::with_capacity(node_list.len() - 1);
let mut edge_indices: Vec<usize> = (0..edge_list.len()).collect();
edge_indices.shuffle(&mut rand::thread_rng());
let mut nodes_to_components_dict: HashMap<&(i32, i32), usize> = HashMap::new();
let mut component_merge_dict: HashMap<usize, usize> = HashMap::new();
node_list.iter().enumerate().into_iter().for_each(|(index, node)| {
nodes_to_components_dict.insert(node, index);
component_merge_dict.insert(index, index);
});
let mut num_components: usize = node_list.len();
let mut curr: usize = 0;
while num_components > 1
{
let this_edge: ((i32, i32), (i32, i32)) = edge_list[edge_indices[curr]];
curr += 1;
let component_1: &usize = nodes_to_components_dict.get(&this_edge.0).unwrap();
let component_num_1: usize = current_component(*component_1, &mut component_merge_dict);
let component_2: &usize = nodes_to_components_dict.get(&this_edge.1).unwrap();
let component_num_2: usize = current_component(*component_2, &mut component_merge_dict);
if component_num_1 != component_num_2
{
component_merge_dict.insert(component_num_1, component_num_2);
tree_edge_list.push(this_edge);
num_components -= 1;
}
}
Ok(tree_edge_list)
}
#[pymodule]
fn rusty_tree(_py: Python, m: &PyModule) -> PyResult<()> {
m.add_function(wrap_pyfunction!(rand_kruskal_memo, m)?)?;
Ok(())
}
I have also implemented a Union-Find method that seems to be suffering from the same issue, and I don't understand it. I also implemented both of these methods in C++ using Pybind11, and that code does not seem to have the conversion overhead issue that I am seeing here, so I am a bit confused as to what is going on. Admittedly, I am mainly a Python and C++ developer, so it is possible that I am just not accustomed to working with Rust quite yet and there is a simple fix that I am not aware of. Regardless, if someone wouldn't mind going over this and either telling me where I went wrong or telling me a more efficient way to define the bindings between Rust and python objects, I would greatly appreciate it. Thank you!
Answer: Rust defaults to a safer but slower hash function implementation. This page discusses the issue. Basically, Rust's function is designed to ensure that you don't have too many collisions, even if a hostile party is trying to mess with your program. However, most of us aren't really in that situation and can benefit from a faster hash function. Python, on the other hand, relies really heavily on its hash function and makes sure that it is really fast. I suspect that your code spends almost all of its time looking up hashes and thus the hash lookup dominates the differences between Python and Rust.
You can solve this by using crates like rustc-hash, fnv, and ahash which provide alternatives to the standard hashmap which use faster hash functions. Alternately, you can try to restructure the algorithm to index into Vecs which will be much faster.
Also, make sure you are compiling in release mode. | {
"domain": "codereview.stackexchange",
"id": 44905,
"tags": "python, rust"
} |
What is the precise definition of Gravitational Sphere of Influence (SOI)? | Question: I am trying to understand the gravitational sphere of influence (SOI), but all I get by searching is the formula that you can find on Wikipedia, that is
$$ r_{SOI} = a \left( \frac{m}{M} \right)^{2/5} $$
where
m: mass of orbiting (smaller) body
M: mass of central (larger) body
a: semi-major axis of smaller body
When inputting the Moon numbers in this formula, we get a SOI of 66,183 km for the Moon over the Earth. This is consistent with other sources on the web, for example the Apollo mission transcripts when they talk about entering the Moon's SOI.
What I don't understand is that when I calculate the gravitational forces between different bodies using Newton's laws, an object placed at this distance between the Earth and the Moon still gets a bigger pull from the Earth. Say for example that we had an object with a mass of 100 kg, these are the gravitational pull (in Newtons) that it would receive from the Earth and the Moon at different distances :
Force from Earth on Earth's surface : 979.866 N
Force from Earth at 384400 km (Moon dist) : 0.27 N
Force from Moon at 66183 km from Moon : 0.112 N
Force from Earth at 318216 km (66183 km from Moon) : 0.394 N
Force from Moon at 38400 km from Moon : 0.333 N
Force from Earth at 346000 km (38400 km from Moon) : 0.333 N
As you can see, the pull from Earth and Moon cancel each other at around 38,000 km, not 66,000 km. This is somewhat counterintuitive to me, as I first thought that a spacecraft (for example) would get more pull from the Moon than from the Earth when it entered the Moon's gravitational sphere of influence. I suspect that it has to do with the fact that the Moon is in orbit around the Earth, i.e. it is in constant acceleration in the same direction as the Earth's pull, but I would like a clear explanation if somebody had one.
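A quick numeric check of both figures reproduces them (this is a sketch: the masses and Earth–Moon distance below are standard textbook values, not taken from the question):

```python
import math

# Assumed standard values (not from the question)
m_moon = 7.35e22    # kg
m_earth = 5.97e24   # kg
a = 384_400.0       # km, Earth-Moon distance / semi-major axis

# Laplace sphere of influence: r_SOI = a * (m/M)^(2/5)  -> ~66,200 km
r_soi = a * (m_moon / m_earth) ** (2 / 5)

# Distance d from the Moon where the two pulls are equal in magnitude:
# G*m_earth/(a - d)^2 = G*m_moon/d^2  =>  d = a / (1 + sqrt(m_earth/m_moon))
# -> ~38,400 km, matching the force table above, not the SOI radius.
d_equal = a / (1 + math.sqrt(m_earth / m_moon))
```

So the SOI boundary and the equal-pull point really are two different distances, which is exactly the discrepancy being asked about.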
Answer: I was also wondering this for a while and found a not entirely complete derivation of the formula (starting from page 14).
In which the following equation is used,
$$
\ddot{\vec{r}}+\underbrace{\frac{\mu_i}{\|\vec{r}\|^3}\vec{r}}_{-A_i}=\underbrace{-\mu_j\left(\frac{\vec{d}}{\|\vec{d}\|^3}+\frac{\vec{\rho}}{\|\vec{\rho}\|^3}\right)}_{P_j},
$$
where $\vec{r}$ is the vector between the centers of gravity of a spacecraft, indicated with $m$, and the celestial body with gravitational parameter $\mu_i$, $\vec{d}$ is the vector between the centers of gravity of a spacecraft and the celestial body with gravitational parameter $\mu_j$ and $\vec{\rho}$ is the vector between the centers of gravity of celestial body $\mu_i$ and $\mu_j$. These vectors are also illustrated in the following figure.
Looking at the spacecraft from the accelerated reference frame of a celestial body, $A$ is defined as the primary gravitational acceleration and $P$ as the perturbation acceleration due to the other celestial body.
The SOI is defined, following Laplace, as the surface along which the following equation is satisfied,
$$
\frac{P_j}{A_i}=\frac{P_i}{A_j},
$$
so
$$
\frac{\mu_j\left(\frac{\vec{d}}{\|\vec{d}\|^3}+\frac{\vec{\rho}}{\|\vec{\rho}\|^3}\right)}{\mu_i\frac{\vec{r}}{\|\vec{r}\|^3}}=\frac{\mu_i\left(\frac{\vec{r}}{\|\vec{r}\|^3}+\frac{\vec{\rho}}{\|\vec{\rho}\|^3}\right)}{\mu_j\frac{\vec{d}}{\|\vec{d}\|^3}}.
$$
This will not return a spherical surface, but it can be approximated by one when $\mu_i << \mu_j$, whose radius is equal to
$$
\|\vec{r}\|\approx r_{SOI}=\|\vec{\rho}\|\left(\frac{\mu_i}{\mu_j}\right)^{\frac{2}{5}}.
$$
This is where the slides of the lecture stop and I will try to fill in the rest.
When $\mu_i << \mu_j$ then the SOI will be relatively close to $\mu_i$, so
$$
\|\vec{\rho}\|\approx\|\vec{d}\|,
$$
and if you look at the figure above you can see that when $\|\vec{r}\|$ is small, then $\vec{d}$ and $\vec{\rho}$ point in almost opposite directions and form a triangle with $\vec{r}$ such that
$$
\vec{\rho}+\vec{d}=\vec{r}.
$$
By rewriting the definition of the surface using the approximation you get
$$
\mu_j^2\frac{\vec{d}}{\|\vec{\rho}\|^6}=\mu_i^2\frac{1}{\|\vec{r}\|^3}\left(\frac{\vec{r}}{\|\vec{r}\|^3}+\frac{\vec{\rho}}{\|\vec{\rho}\|^3}\right)
$$
The other approximation which has to be made is that $\|\vec{r}\|<<\|\vec{\rho}\|$ so that
$$
\frac{\vec{r}}{\|\vec{r}\|^3}+\frac{\vec{\rho}}{\|\vec{\rho}\|^3}\approx\frac{\vec{r}}{\|\vec{r}\|^3}.
$$
Now the equation can be reduced to
$$
\mu_j^2\frac{\vec{d}}{\|\vec{\rho}\|^6}=\mu_i^2\frac{\vec{r}}{\|\vec{r}\|^6}.
$$
By generalizing $\vec{r}$ as a constant radius you can make this problem one-dimensional, so $\|\vec{r}\|$ can substitute for $\vec{r}$, and since there are no more vector additions (where small differences between the vectors could matter), $\|\vec{\rho}\|$ can also substitute for $\vec{d}$, which gives the final equation
$$
\mu_j^2\|\vec{r}\|^5=\mu_i^2\|\vec{\rho}\|^5\longrightarrow\|\vec{r}\|=\|\vec{\rho}\|\left(\frac{\mu_i}{\mu_j}\right)^{\frac{2}{5}}.
$$ | {
"domain": "physics.stackexchange",
"id": 10978,
"tags": "newtonian-gravity, solar-system, orbital-motion"
} |
The fundamental importance of R.E.M. Sleep. (Rapid Eye Movement) | Question:
I know that experiments have been conducted to determine the importance of R.E.M. sleep in our sleep cycle. It is particularly important for learning, information synthesis, and recovery from distress. Why else is R.E.M. sleep important? What experiments have been done/observations been made to determine the neurological mechanisms underlying R.E.M. sleep? I know that we exhibit high frequency $\alpha$ waves, similar to the waves we experience during wakefulness.
Wiki:
During REM sleep, high levels of acetylcholine in the hippocampus suppress feedback from hippocampus to the neocortex, and lower levels of acetylcholine and norepinephrine in the neocortex encourage the spread of associational activity within neocortical areas without control from the hippocampus. This is in contrast to waking consciousness, where higher levels of norepinephrine and acetylcholine inhibit recurrent connections in the neocortex. REM sleep through this process adds creativity by allowing "neocortical structures to reorganise associative hierarchies, in which information from the hippocampus would be reinterpreted in relation to previous semantic representations or nodes".
Do these reorganized neocortical hierarchies remain this way?
Just HOW integral is R.E.M. sleep to our brain development?
Answer:
REM sleep stimulates the brain regions used in learning. This may be important for normal brain development during infancy, which would explain why infants spend much more time in REM sleep than adults (see Sleep: A Dynamic Activity). Like deep sleep, REM sleep is associated with increased production of proteins. One study found that REM sleep affects learning of certain mental skills. People taught a skill and then deprived of non-REM sleep could recall what they had learned after sleeping, while people deprived of REM sleep could not.
Some scientists believe dreams are the cortex's attempt to find meaning in the random signals that it receives during REM sleep. The cortex is the part of the brain that interprets and organizes information from the environment during consciousness. It may be that, given random signals from the pons during REM sleep, the cortex tries to interpret these signals as well, creating a "story" out of fragmented brain activity.
Source: link | {
"domain": "biology.stackexchange",
"id": 807,
"tags": "human-biology, sleep, neurology"
} |
What are Intersensory Associations? | Question: While I was reading about "Neural Control and Coordination" I came across this
"Association areas in the forebrain are responsible for complex functions like intersensory associations, ....."
What are "intersensory associations"? I have searched the net but could not find anything useful.
Answer: A more common terminology regarding 'intersensory associations' is multisensory or crossmodal integration. Crossmodal integration takes place in the association cortices in the brain (Fig. 1). An example is the coupling of auditory and visual input during lip reading, as mentioned in the comments. Lip reading can aid in acoustic speech understanding, especially so in the hearing impaired.
The association cortices include most of the cerebral surface of the human brain and are responsible for integrating the sensory input that arrives in the primary sensory cortices. The diverse functions of the association cortices are loosely referred to as “cognition,” which literally means the process by which we come to know the world. Cognition enables us to attend to external stimuli, to identify the significance of stimuli and to plan meaningful responses to them. The association cortices receive and integrate information from a variety of sources and in turn influence a range of cortical and subcortical targets (Purves et al., 2001).
Fig. 1. Association cortices. source: Brown, Physiology & Neuroscience websites
Reference
- Purves et al., Neuroscience, 2nd ed. Sunderland (MA): Sinauer Associates; 2001 | {
"domain": "biology.stackexchange",
"id": 6510,
"tags": "neuroscience, brain, neurophysiology"
} |
Building object identity in PHP | Question: I am trying to build object identity in PHP so that when I have a collection of objects, each one can have a string as identifier and all of these identifiers are afterwards combined to form an unique md5 to represent the "identity of a collection".
Why? So that I can choose to skip re-execution of code when it's not needed:
interface SomeTestInterface
{
public function testFunction();
}
abstract class Identifiable
{
public function __toString()
{
$identity_shards = array_merge( get_object_vars( $this ), class_implements( $this ) );
$identity_string = '';
foreach( $identity_shards as $identity_shard_key => $identity_shard_value ) {
$identity_string .= (string) $identity_shard_key . (string) json_encode( $identity_shard_value );
}
return md5( get_class( $this ) . $identity_string );
}
}
class SomeBaseClass extends Identifiable implements SomeTestInterface
{
public function __construct( $number )
{
$this->number = $number;
$this->thing = 'a';
$this->other_thing = ['a','b','c',1,2,3,];
}
public function testFunction()
{
return 'a';
}
}
This is testable with:
for( $i = 1; $i < 10000; $i++ ) {
$class = new SomeBaseClass( $i );
(string) $class;
}
For me, PHP 7.3 and WordPress, this takes ~100ms to execute.
My micro-decisions:
I need json_encode on $identity_shard_value because you can't cast an array to string, for example. json_encode is both fast in my experience and knows how to deal with it all.
I chose to make this an abstract class because json_encode doesn't have access to scoped classes, as such, it cannot encode what it can't find, so I must be able to access $this though it's weird because even in the abstract class, I still can't encode it, but I should be able to.
My main concern with this is whether I really need all these items to build my object identity or if there's another, faster way. 10,000 objects in ~0.1 s, although very good on its own, doesn't necessarily scale.
In essence, every single object that implements Identifiable in a collection that a module of my framework has will have an identity that I will then combine into a final "collection identity" to later do a check such as:
$collection_identity = getCollectionIdentity( $collection ); //MD5 computed from the identity of all these objects
if( $collection_identity != getCollectionIdentityByName( 'some_collection' ) ) {
setCollectionIdentity( 'some_collection', $collection_identity );
//re-execute code
} else {
retrieveDataFromStorage();
}
As you can see, it checks if there was a change to the objects / collection and if so, it re-executes all the other code, but if not, it just retrieves what that "other code" generated in the past and as such, this is a way to use persistent storage to skip execution of heavy code.
Answer: I think this code is fine, and can't be sped up very much. But ....
The MD5 hash is most likely unique, it's got 16^32 (3.4e38) values after all, but once in a blue moon two different objects will have the same identity, especially if you use this a lot. This might cause very rare, random bugs in your software. Bugs that are virtually impossible to track down.
I don't think the __toString() magic method was intended for the purpose you're now using it for. I have learned that you should always use something for the purpose it was intended for. The purpose of __toString() is to give you a readable representation of the object. By appropriating it now for identifying objects, you're losing the capability to use it for its intended purpose later.
You're also relying on an undocumented property of get_object_vars(), namely that it will always return the variables in the same order. Will it? I don't know. It probably will, but doesn't have to. This could also change with changing versions of PHP, leaving you with a very big headache if it happens. You could use ksort() to make sure the order is always the same, but that will slow things down a lot.
I've also read in various places, and in the comments in the manual, that get_object_vars() doesn't return static variables. That makes sense since all objects of a class share the same values for these variables, but it is something to keep in mind.
The storing and checking of the identity hashes, in some collection of hashes, will probably be the slowest part of this whole idea.
Then my final problem with this code:
Properly written code would know the identity of its objects, or at least have a 100% reliable method to check this. Your code should be written in such a way that it already minimizes object duplication. This code seems the result of not being able to write good and efficient code (sorry, I'm trying to make a point here).
For instance, many objects could already have a simply ID integer that identifies them. For instance a model class, based on a database row, would most likely have such an ID. Most other classes could, if needed, have a similar way to identify themselves. Once you combine such an ID with the class name you should have a 100% reliable identifier.
If you really need a way to identify various objects you could simply add an identity() method to them. Something like this:
<?php
class MyClass
{
public function __construct($id)
{
$this->id = $id;
}
public function identity() {
return get_class() . ":" . $this->id;
}
}
$myObject = new MyClass(999);
echo $myObject->identity();
?>
This would return:
MyClass:999
I agree that this is a very basic example, but it should be possible to do something similar for any class.
By writing such a specific identifier method for each class you can optimize it, which means it will be faster, and you can make it a 100% reliable under any circumstances. It is also a lot easier to debug, because you can see and read what is going on. No hiding behind mysterious hashes here.
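The same pattern ports to other languages. Here is a hypothetical Python sketch (names invented for illustration) of the per-object identity plus the collection-level md5 from the question, built on explicit identities rather than serialized object state:

```python
import hashlib

class Model:
    """A class that knows its own stable identity: class name + an ID."""
    def __init__(self, id):
        self.id = id

    def identity(self):
        return f"{type(self).__name__}:{self.id}"

def collection_identity(objects):
    """md5 over the combined member identities.

    Sorting makes the hash independent of iteration order, so merely
    reordering a collection does not register as a change.
    """
    joined = "|".join(sorted(obj.identity() for obj in objects))
    return hashlib.md5(joined.encode()).hexdigest()
```

With explicit identities, the expensive `get_object_vars()`/`json_encode()` work disappears, and the only hashing left is one md5 per collection check.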
Conclusion: Despite my objections I think your code looks fine. I do however wonder whether this approach will, in the end, cause more trouble than it is worth.
Note: There is more discussion in the comments. In the end coolpasta wrote a response to this question. | {
"domain": "codereview.stackexchange",
"id": 34790,
"tags": "performance, php"
} |
Filtering sidelobes | Question: Good evening to everyone,
my question is probably trivial, but I'm new to DSP. I have a tone at the frequency 0.5 Hz, amplitude 0.001 V, and with a big DC component of 0.5 V. This signal is sampled at 900 Hz (I know that it is much higher than necessary). I generate the samples as follows:
fsin = 0.5; fs = 900; Ts = 1 / fs;
t = 0:Ts:100;
sig = 500e-3 + 1e-3 * sin(2 * pi * fsin * t);
After I compute the spectrum as follows:
NFFT = 2 ^ nextpow2(numel(sig));
Spectrum = fftshift(fft(sig, NFFT));
Plotting this spectrum near the tone, I see that it is completely drowned by the sidelobes of the DC component:
I would expect that even if I try to filter the signal with a pass-band of 0.2-0.8 Hz (thus containing the tone) I will not be able to see the tone anyway, because the big sidelobes dominate the tone in 0.2-0.8 Hz. Instead, surprisingly, adopting the following filter
[sf,~] = bandpass(sig, [0.2,0.8], fs);
the resulting spectrum shows a clear tone:
Therefore, my question is: how is it possible that the filter is able to recover the tone even if it is completely drowned by the DC component's sidelobes in the pass-band? Thank to all.
Answer: The reason the bandpass filter eliminated the problem is because it removed the DC component. This can also be done by simply subtracting the average before taking the DFT. However there is much more to understand in this question that should be of interest to the OP, and after reading and understanding the detail given below, this first statement should make complete sense.
To be clearer, I would not refer to the effect the OP is seeing as the "sidelobes of a DC signal" but more specifically the sidelobes of the rectangular window that was used to select that portion of the DC signal in time, prior to computing the DFT. As I will explain in detail, the primary consideration is to first choose the sampling rate to be much lower such that the time duration can be increased with no penalty (for the cases when time can be arbitrarily extended such as this). A secondary consideration is improved windowing, if needed. I will detail both below.
For the benefit of the OP who is new to DSP, I will attempt to answer this as simply as possible with the basics of what is occurring. Starting with consideration of the "DC" component alone: To be truly "DC" the signal must extend to positive and negative infinity in time, as "DC" implies a constant level that never changes. The Fourier Transform of such an unrealizable signal would be the expected impulse in frequency at $F=0$, with no energy at any other frequency locations (no sidelobes). The realizable signal is both causal (starts at $t=0$) and time limited (the OP used 100 seconds). The asymmetry in time leads to a phase shift in frequency but does not otherwise change the magnitude of the resulting Spectrum. I will focus on the magnitude result so phase effects and causality will not be mentioned further other than that main point.
So to recap, IF the signal at DC extended to positive and minus infinity, the Fourier Transform would be an impulse in Frequency at $F=0$. However what we do in the DFT computation (in addition to sampling effects that I won't cover) is select a time limited sample of the waveform which is equivalent to multiplying the waveform in time with a "rectangular window", a function that is unity over the time interval of choice.
Multiplying two waveforms in time is equivalent to convolving those two waveforms in frequency (and vice versa). Therefore the effect of taking only a portion of the DC waveform in time with a rectangular window as has been done is equivalent to convolving the impulse at $F=0$ (as the Fourier Transform of the DC signal) with a Sinc function (as the Fourier Transform of the rectangular window!). This is the sidelobes that occur for not only the DC signal, but any other signal at any other frequency location that would otherwise be an impulse in Frequency (so the tone at 0.5 Hz will also have sidelobes all scaled by the amplitude of that tone).
That said, below shows the plot of the DFT of the rectangular window that the OP had used, using the same number of samples that were used, as well as the same result with extending the number of samples out 10 times as much (importantly this was done by "zero-padding", meaning we only increased NFFT but we did not change the actual time length of the window which is still 100 seconds).
First a wider spectrum from -1 to + 1 Hz of just the DFT of the rectangular window over the same number of samples, sampling rate and duration as the OP used (although still a narrow part of the spectrum, the signal is sampled far higher than necessary which actually leads to further processing complexity; I would recommend that the OP change the sampling rate to 10 Hz which would be more than sufficient for this!). I have superimposed a longer zero-padded DFT to show that increasing the DFT length (but not changing the time duration of the window itself!) just serves to interpolate more samples in frequency of the underlying Sinc function as the Fourier Transform of the rectangular window (and once sampled with associated aliasing effects the Sinc becomes the "Dirichlet Kernel" but at such a high sampling rate appears very much as a Sinc here):
If we zoom in very close to $F=0$ we clearly see the shape of the Sinc, and how the shorter DFT length that the OP chose results in samples on this same Sinc function (as the Fourier Transform of the Rectangular Window). We also see this very important takeaway: The first nulls for the main lobe of the Sinc appear at $f = 1/T$ where $T$ is the total time duration of the window in seconds! This is the reason that the strategy the OP should take (more important than using an improved window, which also will help) is to increase the total time duration, which can readily be done with the same processing by decreasing the sampling rate (more on this later):
It is this function that would convolve in frequency with the actual waveform, which is a tone at DC ($F=0$ Hz) and a smaller tone at $F = 0.5$ Hz. Given that the waveform has been multiplied by the rectangular window in time, then the result from the convolution will be a Sinc centered at $F=0$ and another Sinc centered at $F=0.5$ each weighted by their respective magnitudes. Below I plot the result of each separately, while the final result would be the sum (superposition) of each of these spectrums. We see, as the OP described, that the 0.5 Hz waveform is completely buried by the sidelobes from the rectangular windowing of the stronger DC signal.
An immediate and obvious first strategy is to minimize the sampling rate and maximize the time duration to the extent that is possible. Once that is properly set further considerations can be made with using a different time domain window. I will first show the impact of changing the sampling rate to a more appropriate value. For the rectangular window, the sidelobes have first nulls at $f=1/T$ as I described, and their peaks will decrease at $1/f$. This means if we double the frequency the sidelobe will have gone down by -6 dB (we see this in the previous plot by comparing the sidelobe levels at $f=0.3$ Hz to $f = 0.6$ Hz). So similarly if we decreased the sampling rate from $f_s = 900$ to $fs = 10$ (which is sufficiently higher than our highest frequency of $f=0.5$ to not introduce significant aliasing effects), and correspondingly increase the time duration by $900/10 = 90$ we will have the same number of time samples but we will have reduced the sidelobe level at our frequency of interest (0.5 Hz) by $20 log_{10}(90) = 40$ dB. Inspecting the previous graph indicates that this would be more than enough and perhaps sufficient for a more immediate result without the further complexity of windowing (which I would still ultimately advise using, but my point here is the preeminence of this consideration in either case).
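The arithmetic behind those numbers can be sketched directly (a quick check, using only the figures quoted above):

```python
import math

fs_old, fs_new = 900.0, 10.0        # sampling rates, Hz
T_old = 100.0                        # original window duration, s
T_new = T_old * fs_old / fs_new      # same sample count -> 90x longer window

# Rectangular-window main-lobe nulls fall at multiples of 1/T, and
# sidelobe peaks roll off ~1/f, so a 90x longer window lowers the
# sidelobe level near a fixed frequency by about 20*log10(90) dB:
improvement_db = 20 * math.log10(T_new / T_old)   # ~39 dB (the "40 dB")

first_null_old = 1 / T_old   # 0.01 Hz
first_null_new = 1 / T_new   # ~0.00011 Hz
```

This is why dropping the sampling rate (while keeping the same number of samples) buys roughly 40 dB of sidelobe suppression at 0.5 Hz before any windowing is even considered.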
Spectrum with Sampling rate reduced to 10 Hz (and time duration extended by 90, same # of samples!) and we see directly the 40 dB predicted improvement:
And zooming in on this new spectrum for direct comparison to the previous plot where the signal was buried:
And finally, showing what would traditionally be done which in addition to all first consideration above on minimizing sampling rate and maximizing time duration, is to multiply the time domain waveform with an improved window with the key property of having lower sidelobes than the rectangular window (but at the expense of losing frequency resolution: any window with lower sidelobes will have a wider main lobe). Below shows the end result with $f_s = 10 Hz$, $T = 9000$ and using a Kaiser Window, where the sidelobe is pushed down to -100 dB in vicinity of the waveform of interest! | {
"domain": "dsp.stackexchange",
"id": 9827,
"tags": "matlab, fft, bandpass"
} |
Handle redirect manually | Question: I built a class which uses an HttpClient instance for downloading the content of a page. This is working pretty well but I'm not really satisfied about my implementation of redirect.
In fact, I handle the redirect manually as you can see:
/// <summary>
/// Web request handler
/// </summary>
public class NetworkHelper
{
/// <summary>
/// Store the url.
/// </summary>
private static Uri _storedUrl = null;
/// <summary>
/// Store the html data.
/// </summary>
private static string _storedData = "";
public static string GetHtml(Uri url)
{
if (url == _storedUrl)
return _storedData;
else
_storedUrl = url;
HttpWebRequest webReq = (HttpWebRequest)WebRequest.Create(url);
try
{
HttpClientHandler handler = new HttpClientHandler();
handler.AutomaticDecompression = DecompressionMethods.Deflate | DecompressionMethods.GZip;
HttpClient httpClient = new HttpClient(handler);
HttpRequestMessage request = new HttpRequestMessage
{
RequestUri = url,
Method = HttpMethod.Get
};
HttpResponseMessage response = httpClient.SendAsync(request).Result;
int statusCode = (int)response.StatusCode;
if (statusCode >= 300 && statusCode <= 399)
{
Uri redirectUri = response.Headers.Location;
if (!redirectUri.IsAbsoluteUri)
{
redirectUri = new Uri(request.RequestUri.GetLeftPart(UriPartial.Authority) + redirectUri);
}
return GetHtml(redirectUri);
}
using (WebClient wClient = new WebClient())
{
wClient.Encoding = System.Text.Encoding.UTF8;
_storedData = wClient.DownloadString(url);
}
return _storedData;
}
catch (WebException)
{
throw;
}
}
}
How can I improve my implementation for handling the redirect?
Answer: Your code looks solid, however here are a few minor points.
Catching/throwing exceptions
You should only catch exceptions when you are going to handle them. There's no point in catching an exception when you're only going to (re)throw it. However if you were to use logging and still 'bubble up' the exception, you could go with this:
try
{
//Your webrequest/response code...
}
catch (Exception ex)
{
//DO YOUR LOGGING HERE
throw;
}
But as of now I would remove the try/catch from the method and handle exceptions outside of it:
try
{
string theHtml = NetworkHelper.GetHtml("http://example.com");
}
catch (WebException ex)
{
//Handle the exception
}
Stored url and data
As of now, your code doesn't allow the same url twice in a row; it'll just return the stored data. This raises 2 questions/issues with me:
What if I want to refresh the result?
I can bypass the check easily but in a hacky way:
Pass url1
Pass url2 to change the _storedUrl variable
Call the method again with url1
I think better would be to change the signature of the method and add a boolean (optional) parameter to indicate if you want to refresh the result:
public static string GetHtml(Uri url, bool refresh = false)
{
if (url == _storedUrl && !refresh)
return _storedData;
else
_storedUrl = url;
//Rest of the logic...
}
Now you will only return the _storedData if you have the same url and don't want to refresh.
Hope this helps! | {
"domain": "codereview.stackexchange",
"id": 30669,
"tags": "c#, web-scraping"
} |
If I leave a glass of water out, why do only the surface molecules vaporize? | Question: If I leave a glass of water out on the counter, some of the water turns into vapor. I've read that this is because the water molecules crash into each other like billiard balls and eventually some of the molecules at the surface acquire enough kinetic energy that they no longer stay a liquid. They become vapor.
Why is it only the molecules on the surface that become vapor? Why not the molecules in the middle of the glass of water? After all, they too are crashing into each other.
If I put a heating element under the container and increase the average kinetic energy in the water molecules to the point that my thermometer reads ~100°C, the molecules in the middle of the glass do turn into vapor. Why doesn't this happen even without applying the heat, like it does to the surface molecules?
Answer: From a thermodynamic point of view, at fixed pressure, vaporization takes place when the temperature exceeds the change-of-state temperature $T_c(P)$.
Within the liquid, the pressure to be taken into account is the hydrostatic pressure. This pressure is a little greater than 1 bar and the associated vaporization temperature is 100 °C.
On the surface (within a thickness of a few mean free paths), the environment of the molecules is different: the pressure to be taken into account is the partial pressure of water vapor, which is related to the moisture content of the air. If the humidity is less than 100%, this pressure is well below 1 bar and evaporation takes place at a much lower temperature.
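To put a number on "a little greater than 1 bar": the hydrostatic excess a few centimetres down in a glass is tiny (the 10 cm depth below is an illustrative assumption, not from the answer):

```python
# Hydrostatic pressure 10 cm below the surface of a glass of water.
rho = 1000.0       # water density, kg/m^3
g = 9.81           # gravitational acceleration, m/s^2
h = 0.10           # depth below the surface, m
p_atm = 101_325.0  # atmospheric pressure, Pa (~1 bar)

excess = rho * g * h        # extra pressure from the water column, Pa
p_total = p_atm + excess
excess_bar = excess / 1e5   # ~0.01 bar -- negligible next to 1 bar
```

So the interior sits essentially at 1 bar and needs ~100 °C to boil, while the surface layer only has to beat the (much lower) partial pressure of water vapor in the air.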
"domain": "physics.stackexchange",
"id": 55672,
"tags": "thermodynamics, statistical-mechanics, temperature, everyday-life, evaporation"
} |
What is a covariance matrix? | Question: Suppose you have k samples from each of the N elements of a uniform linear array (ULA) of sensors:
What is the physical meaning of a covariance matrix?
How do you form a covariance matrix with the samples?
How do you decide how many samples you need to use to form the covariance matrix?
Answer: It's the key point of array signal processing, I suppose. Say $x$ is the input vector of $[N,1]$ dimension collected from $N$ array sensors. $x(k)$ is its realization at the $k$ moment of time. By its definition covariance matrix (sometimes it's called autocorrelation matrix):
$R = E[x\cdot x^H]$ ,
where $E[]$ is expectation operator and $x^H$ is Hermitian conjugate. For the ergodic process
$R = \lim_{M\to\infty} 1/M \cdot \displaystyle\sum_{k=0}^M x(k) \cdot [x(k)]^H$ .
But in practice we can estimate $R$ with necessary precision with the snapshot of finite length. In the array processing theory snapshot is a group of vectors $x(k)$. It's the basic data block for array processing algorithms.
$R = 1/K \cdot \displaystyle\sum_{k=0}^K x(k) \cdot [x(k)]^H$ ,
where $K$ is the number of spatial vectors, or snapshot size. The interesting question is what the optimal value for $K$ is. In models I've done, $K$ varies from 64 to 256. The estimation precision is enough for an MVDR beamformer or Capon spectral estimation. There is an interesting trick for adaptive beamformer design (if you have further interest): you can reduce $K$ (and accelerate the adaptation process) if you use so-called diagonal loading, look:
$R = R + \sigma^2 \cdot I$,
where $\sigma$ is some variable depending on SNR (maybe someone can define it more precisely?) and $I$ is the identity matrix of size $[N, N]$, the same size as $R$. But if you want to do a spectral estimation procedure with your array (DoA estimation), you shouldn't perform diagonal loading, because it will raise the noise floor of the estimate and some weak signals of interest will be lost.
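A minimal sketch of the snapshot estimate and diagonal loading described above (pure Python for illustration — a real implementation would use NumPy — and the signal model, a single noiseless plane wave on a 4-element half-wavelength ULA, is an assumption):

```python
import cmath
import math
import random

def sample_covariance(snapshots):
    """R = (1/K) * sum_k x(k) x(k)^H for K length-N snapshot vectors."""
    K, N = len(snapshots), len(snapshots[0])
    R = [[0j] * N for _ in range(N)]
    for x in snapshots:
        for i in range(N):
            for j in range(N):
                R[i][j] += x[i] * x[j].conjugate() / K
    return R

def diagonal_loading(R, sigma2):
    """R + sigma^2 * I: add a small real constant to the diagonal."""
    N = len(R)
    return [[R[i][j] + (sigma2 if i == j else 0.0) for j in range(N)]
            for i in range(N)]

# One plane wave arriving at angle theta on an N-element ULA.
N, K, theta = 4, 256, 0.3
steer = [cmath.exp(1j * math.pi * n * math.sin(theta)) for n in range(N)]
random.seed(0)
snaps = []
for _ in range(K):
    s = complex(random.gauss(0, 1), random.gauss(0, 1))  # source amplitude
    snaps.append([s * a for a in steer])

R = diagonal_loading(sample_covariance(snaps), 0.01)
```

The estimate comes out Hermitian, and with a single noiseless source the unloaded matrix has rank 1 — which is exactly why the loading term matters before inverting $R$ in an MVDR beamformer.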
The covariance matrix is the second-order statistic of the random process measured at the array sensors. It contains information about the sources in space (number, strength, direction) and can be used for source detection and separation. In fact, the number of independent spatial signals at the input of the array equals the rank of $R$. Singular value decomposition (SVD) of $R$ gives the information about the signal subspace, which is necessary for subspace-based DoA estimation techniques like MUSIC or ESPRIT. It is worth saying that algorithms based upon covariance matrix inversion/decomposition suffer from source correlation: if the sources are highly correlated (same waveforms or directions too close together), they can't be separated, and the system performance degrades in a highly correlated scenario.
A very good reference on the topic is
Harry L. Van Trees - Detection, Estimation, and Modulation Theory, Optimum Array Processing
The book discusses a more general form of the covariance matrix. It is shown that it can be defined either in the time domain or in the frequency domain, and the frequency-domain interpretation is the more common one, since it allows the wideband beamforming (or DoA estimation) problem to be solved very easily.
Hope this helps. | {
"domain": "dsp.stackexchange",
"id": 2040,
"tags": "statistics, matrix, covariance"
} |
What does differential consumption mean? | Question: I have not been able to find any definition of differential consumption online. What does it mean in this quotation?
Vertebrates can influence natural fire regimes in several ways. First, herbivores limit fuel quantity by consuming and recy- cling plant matter that would otherwise accumulate as litter, and by reducing the density of vegetation [3]. Second, differential consumption of plant growth forms can enforce changes in the composition of vegetation and thereby alter the type and arrangement of fuel. Third, herbivory can gen- erate large-scale habitat heterogeneity, as a result of variation in herbivore activity in response to factors such as terrain and water availability [3,11], and this can mean that zones of low and high flammability are interspersed in arrangements that could impede the spread of landscape fires. Finally, herbivores and other animals may alter the abiotic environment in ways that affect flammability: by forming trails, dust-baths or leks, large animals create lines or patches of bare ground that can act as fire breaks, while some species forage by turning over or digging through the litter layer and surface soils, and in the process bury fine fuels and thus reduce fuel loads. In the sections below, we explore the evidence supporting these effects of herbivores on fuel and fire regimes, in the past and present.
Johnson, Christopher N., et al. “Can Trophic Rewilding Reduce the Impact of Fire in a More Flammable World?” Philosophical Transactions of the Royal Society B: Biological Sciences, vol. 373, no. 1761, Dec. 2018, doi:10.1098/rstb.2017.0443
Answer:
Second, differential consumption of plant growth forms can enforce changes in the composition of vegetation and thereby alter the type and arrangement of fuel.
This sentence means that herbivores eat more of certain plant growth forms than of others; "differential consumption" just means "different consumption." This can be important in the context of fire risk because herbivores tend not to eat certain types of plant biomass, such as full-grown trees, but will eat new growth, grasses, etc., so the total amount of fuel differs by type.
"domain": "biology.stackexchange",
"id": 9406,
"tags": "metabolism, ecosystem"
} |
SNR calculation with Noise Spectral Density | Question: I am having difficulty in calculating the SNR for the following scenario:
In a cellular network
Transmit power=36dBm
Noise spectral density=-174dBm/Hz (AWGN)
Bandwidth=10MHz
I have some other parameters such as pathloss and Rayleigh fading, but I think my biggest problem is the conversion of dB to power.
Answer: The comment above already gave you a roadmap to get to an answer, but I'll flesh it out a bit:
SNR is the ratio of signal power to noise power. To calculate the signal power at your receiver, you'll need to take the path loss and fading model into account, as you noted.
The noise power at the receiver is described by a (flat) noise power spectral density and receiver bandwidth. To calculate the total noise power over that bandwidth, you simply multiply the amount of power per Hertz times the width of the band. When power is specified in logarithmic units (dB), you need to first convert it to a linear scale. Recall the relationship between mW and dBm:
$$
P|_{dBm} = 10 \log_{10}\left(\frac{P}{1\text{ mW}}\right)
$$
That is, to convert a linear-scale power quantity to dBm, take ten times the base-10 logarithm of the ratio of that power quantity to 1 milliwatt. For example:
30 dBm: 1 W
0 dBm: 1 mW
-30 dBm: 1 uW
-60 dBm: 1 nW
-90 dBm: 1 pW
To go the other way, invert the calculation:
$$
P = 10^{\frac{P|_{dBm}}{10}} * 1\text{ mW}
$$
So, to calculate the total noise power at your receiver, you would convert the noise power spectral density to linear units using the above equation:
$$
S_n = 10^{\frac{-174}{10}} \frac{\text{mW}}{\text{Hz}} = 3.981 * 10^{-18} \frac{\text{mW}}{\text{Hz}}
$$
then multiply by the bandwidth to get the total amount of noise power:
$$
P_n = BS_n = \left(10 * 10^{6}\text{ Hz}\right)\left(3.981 * 10^{-18} \frac{\text{mW}}{\text{Hz}}\right)
$$
$$
P_n = 3.981 * 10^{-11} \text{ mW}
$$
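As a quick sanity check, the same conversions can be scripted (the received signal power below is a made-up example value, not one from the question):

```python
import math

def dbm_to_mw(p_dbm):
    """Convert a power level in dBm to linear milliwatts."""
    return 10 ** (p_dbm / 10)

def mw_to_dbm(p_mw):
    """Convert a power level in linear milliwatts to dBm."""
    return 10 * math.log10(p_mw)

noise_density_dbm_hz = -174       # AWGN noise PSD from the question
bandwidth_hz = 10e6               # 10 MHz

# Total noise power = (linear PSD) * bandwidth
noise_mw = dbm_to_mw(noise_density_dbm_hz) * bandwidth_hz
print(noise_mw)                   # ≈ 3.981e-11 mW, matching the result above
print(mw_to_dbm(noise_mw))        # ≈ -104 dBm, i.e. -174 + 10*log10(10e6)

rx_signal_dbm = -80               # hypothetical received signal power
snr_db = rx_signal_dbm - mw_to_dbm(noise_mw)
print(snr_db)                     # ≈ 24 dB
```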
Then, use your model to predict the received signal power, take the ratio, and you've got your SNR. | {
"domain": "dsp.stackexchange",
"id": 6374,
"tags": "noise, snr"
} |
Reaction force on a beam | Question: I have the following problem, most of which I have calculated, but I have difficulty with the beam under an angle. The angle is 56.30 degrees. I just cannot figure out how the forces act there: the equations I am setting up have three unknown variables in two equations, which cannot be solved, so apparently there are forces I "cannot see" there as well and the equations are wrong.
The distributed loads have been replaced with concentrated loads; the point where they act is indicated.
Thank you very much for advice.
Answer: It is worth noting that this can be treated as two separate structures:
I assume that's how you solved the reactions of support A. Given that, you can ignore the entire left-hand structure and act like you only have the right-hand side.
For this, let's use the standard equations:
$$\begin{align}
\sum F_x &= B_x + C_x + 29.4 = 0 \\
\sum F_z &= B_z + C_z - 24.5 - 7 = 0 \\
\sum M_B &= -29.4 \cdot 6 + 6C_z = 0 \\
\therefore C_z &= 29.4\text{ kN} \\
\therefore B_z &= 24.5 + 7 - C_z = 2.1\text{ kN}
\end{align}$$
It might seem like you're now stuck without being able to solve for the horizontal forces, but you can notice that the diagonal member is a truss member, carrying only axial load. That means that the vertical and horizontal forces applied to it must be in the same proportion as its geometry (rise over run). Therefore we have:
$$\begin{align}
C_x &= -C_z \cdot \dfrac{6}{9} = -19.6\text{ kN} \\
\therefore B_x &= -29.4 - C_x = -9.8\text{ kN}
\end{align}$$
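The two equation blocks above can be verified numerically (values in kN and metres):

```python
# Reaction components, solved in the same order as above
Cz = 29.4 * 6 / 6          # from the moment equation about B
Bz = 24.5 + 7 - Cz         # vertical equilibrium
Cx = -Cz * 6 / 9           # diagonal truss member: components follow its geometry
Bx = -29.4 - Cx            # horizontal equilibrium

# Residuals of the original equilibrium equations should all vanish
assert abs(Bx + Cx + 29.4) < 1e-9          # sum Fx = 0
assert abs(Bz + Cz - 24.5 - 7) < 1e-9      # sum Fz = 0
assert abs(-29.4 * 6 + 6 * Cz) < 1e-9      # sum M_B = 0

print(Cz, Bz, Cx, Bx)      # ≈ 29.4, 2.1, -19.6, -9.8 kN
```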
To check our work: | {
"domain": "engineering.stackexchange",
"id": 1434,
"tags": "structural-analysis, applied-mechanics, statics"
} |
Can a periodic motion whose displacement is given by $ x=\sin^2(\omega t)$, be considered as a SHM? | Question: The definition of Simple Harmonic Motion is :
simple harmonic motion is a type of periodic motion or oscillation motion where the restoring force is directly proportional to the displacement and acts in the direction opposite to that of displacement
From the definition of SHM we can state that, in order to be considered as SHM, a periodic motion only has to meet the following criteria that is:
(where $a$ is acceleration and $x$ is the displacement from equilibrium position)
$$ a \propto -x$$
If we have a periodic motion where $$ x= \sin^2(\omega t)$$ $$\implies v=\frac{dx}{dt}=2\omega \sin(\omega t)\cos(\omega t)=\omega \sin(2\omega t)$$ $$\implies a=\frac{d^2x}{dt^2}=2\omega^2\cos(2\omega t)=2\omega^2(1-2\sin^2(\omega t))$$
Since $x=\sin^2(\omega t)$, we get $$a=2\omega^2(1-2x)$$ Then, can we say that $$a \propto -x$$
and thus declare that it is a SHM?
Answer: Since $\sin^2 (\omega\,t) = \frac{1}{2}\left(1-\cos(2\,\omega\,t)\right)$, it is evident that a position time dependence of the form $x=\sin^2 (\omega\,t)$ indeed describes simple harmonic motion centered on the point $x=\frac{1}{2}$ and with frequency $2\,\omega$. This is the meaning of the constant part in your force term $\propto 1-2\,x$: it is an offset of the point where there is zero force.
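This can be checked numerically: with $\Omega = 2\omega$ and $x_0 = \tfrac{1}{2}$, the acceleration derived in the question satisfies $a = -\Omega^2 (x - x_0)$ at every instant (the value of $\omega$ below is arbitrary):

```python
import numpy as np

omega = 1.7                                # arbitrary angular frequency
t = np.linspace(0.0, 10.0, 2001)

x = np.sin(omega * t) ** 2                 # displacement from the question
a = 2 * omega**2 * np.cos(2 * omega * t)   # its second derivative, as derived

# SHM about x0 = 1/2 with angular frequency Omega = 2*omega
x0, Omega = 0.5, 2 * omega
print(np.allclose(a, -Omega**2 * (x - x0)))   # True
```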
But in SHM, by displacement don't we mean the displacement from the equilibrium point, and by equilibrium point don't we mean the point where the force is zero? But here the force is zero when the object is $\frac{1}{2}$ units away from equilibrium. This is confusing me.
I guess it depends on one's definition. But generally in physics we seek definitions that are coordinate-independent. One such definition would be that SHM is rectilinear motion for which the force on a particle is proportional to and directed against the particle's displacement from the particle's equilibrium (zero-force) point. | {
"domain": "physics.stackexchange",
"id": 39241,
"tags": "homework-and-exercises, harmonic-oscillator, oscillators"
} |
Custom tooltip containing a list | Question: I'm looking to improve my code as seen in the demo link below. The code displays a list in a tooltip. The tooltip is accessed by hovering over the last element of the main list.
Here's the demo.
The ultimate objective is to wrap only one of the list items below in a JS for loop, which will dynamically create more than one list item.
<li class="coupontooltip_li_list">
<div>
<img class="coupontooltip_img" src="http://lorempixum.com/100/100/nature/1">
<span class="coupontooltiplistspan">Distance: 200m</br>Duration: 00:01:30</br>Laps: 4</span>
</div>
Full HTML code:
<div>
<ul id="cellvaluelist">
<li><a href="#" class="swim">MAIN 1</a></li>
<li><a href="#" class="chrono">MAIN 2</a></li>
<li><a href="#" class="couponcode">MAIN 3
<div class="coupontooltip">
<ul class="coupontooltip_ul_list">
<li class="coupontooltip_li_list">
<div>
<img class="coupontooltip_img" src="http://lorempixum.com/100/100/nature/1">
<span class="coupontooltiplistspan">Distance</br>Duration: </br>Laps: </span>
</div>
</li></br>
<li class="coupontooltip_li_list">
<div>
<img class="coupontooltip_img" src="http://lorempixum.com/100/100/nature/1">
<span class="coupontooltiplistspan">SET 1: abcd</br>SET 2: EFGH</br>SET 3: ijkl</span>
</div>
</li></br>
<li class="coupontooltip_li_list">
<div>
<img class="coupontooltip_img" src="http://lorempixum.com/100/100/nature/1">
<span class="coupontooltiplistspan">Distance: 200m</br>Duration: 00:01:30</br>Laps: 4</span>
</div>
</li>
</ul>
</div>
</a>
</li>
</ul>
</div>
Full CSS code:
#cellvaluelist {
font-family:'Trebuchet MS', Tahoma, Sans-serif;
font-size: 20px;
list-style-type: none;
margin: 0;
padding: 0;
}
#cellvaluelist > li {
list-style-type: none;
text-align: left;
border-bottom: 2px solid #F5F5F5;
}
#cellvaluelist > li:last-child {
border: none;
}
#cellvaluelist > li a {
text-decoration: none;
color: #000;
display: block;
width: 150px;
-webkit-transition: font-size 0.3s ease, background-color 0.3s ease;
-moz-transition: font-size 0.3s ease, background-color 0.3s ease;
-o-transition: font-size 0.3s ease, background-color 0.3s ease;
-ms-transition: font-size 0.3s ease, background-color 0.3s ease;
transition: font-size 0.3s ease, background-color 0.3s ease;
}
#cellvaluelist > li a:hover {
background: #F5F5F5;
}
.swim {
background: #626FD1;
font-size: 15px;
font-weight: normal;
}
.chrono {
background: #EDCF47;
font-size: 15px;
font-weight: normal;
}
.couponcode {
background: #47ED4D;
font-size: 15px;
font-weight: normal;
}
.couponcode:hover .coupontooltip {
display: block;
}
.coupontooltip {
display: none;
background: #FFCC00;
position: absolute;
z-index: 1000;
}
.coupontooltip_ul_list, .coupontooltip_li_list {
background: #FF0000;
list-style-type: none;
float: left;
margin: 0;
padding: 0;
width: 100%
}
.coupontooltip_li_list {
background: #D6D6D6;
border-bottom: 2px solid #F5F5F5;
}
.coupontooltip_img {
width: 50px;
height: 50px;
float: left;
padding: 5px;
}
.coupontooltiplistspan {
display: inline-block;
}
Answer: You don't need this div
<div class="coupontooltip">
and then you would just move the CSS for it into the CSS for the ul that is inside that div currently.
so instead of this
.coupontooltip {
display: none;
background: #FFCC00;
position: absolute;
z-index: 1000;
}
.coupontooltip_ul_list, .coupontooltip_li_list {
background: #FF0000;
list-style-type: none;
float: left;
margin: 0;
padding: 0;
width: 100%
}
You will end up with this:
.coupontooltip_ul_list {
display: none;
background: #FFCC00;
position: absolute;
z-index: 1000;
}
.coupontooltip_ul_list, .coupontooltip_li_list {
background: #FF0000;
list-style-type: none;
float: left;
margin: 0;
padding: 0;
width: 100%
}
Don't forget to change .coupontooltip to .coupontooltip_ul_list here:
.couponcode:hover .coupontooltip {
display: block;
}
otherwise it won't function.
I would also get rid of the other divs in your code
The image tags and the span tags (though I am not sure you need the spans either, since the li elements can contain text directly) are already held by the li tag, and you aren't doing anything with these div tags; they are just clutter and add an extra level of indentation that you don't need.
As for the span, I would honestly make this another list, or maybe even a table, since they are displaying data.
Here is what your HTML should look like (without changing those spans to tables/lists, I will let you play with that when you are inserting the data with your JavaScript loops)
<div>
<ul id="cellvaluelist">
<li>
<a href="#" class="swim">MAIN 1</a>
</li>
<li>
<a href="#" class="chrono">MAIN 2</a>
</li>
<li>
<a href="#" class="couponcode">MAIN 3
<ul class="coupontooltip_ul_list">
<li class="coupontooltip_li_list">
<img class="coupontooltip_img" src="http://lorempixum.com/100/100/nature/1">
<span class="coupontooltiplistspan">Distance</br>Duration: </br>Laps: </span>
</li>
</br>
<li class="coupontooltip_li_list">
<img class="coupontooltip_img" src="http://lorempixum.com/100/100/nature/1">
<span class="coupontooltiplistspan">SET 1: abcd</br>SET 2: EFGH</br>SET 3: ijkl</span>
</li></br>
<li class="coupontooltip_li_list">
<img class="coupontooltip_img" src="http://lorempixum.com/100/100/nature/1">
<span class="coupontooltiplistspan">Distance: 200m</br>Duration: 00:01:30</br>Laps: 4</span>
</li>
</ul>
</a>
</li>
</ul>
</div>
#cellvaluelist {
font-family: 'Trebuchet MS', Tahoma, Sans-serif;
font-size: 20px;
list-style-type: none;
margin: 0;
padding: 0;
}
#cellvaluelist > li {
list-style-type: none;
text-align: left;
border-bottom: 2px solid #F5F5F5;
}
#cellvaluelist > li:last-child {
border: none;
}
#cellvaluelist > li a {
text-decoration: none;
color: #000;
display: block;
width: 150px;
-webkit-transition: font-size 0.3s ease, background-color 0.3s ease;
-moz-transition: font-size 0.3s ease, background-color 0.3s ease;
-o-transition: font-size 0.3s ease, background-color 0.3s ease;
-ms-transition: font-size 0.3s ease, background-color 0.3s ease;
transition: font-size 0.3s ease, background-color 0.3s ease;
}
#cellvaluelist > li a:hover {
background: #F5F5F5;
}
.swim {
background: #626FD1;
font-size: 15px;
font-weight: normal;
}
.chrono {
background: #EDCF47;
font-size: 15px;
font-weight: normal;
}
.couponcode {
background: #47ED4D;
font-size: 15px;
font-weight: normal;
}
.couponcode:hover .coupontooltip_ul_list {
display: block;
}
.coupontooltip_ul_list {
display: none;
background: #FFCC00;
position: absolute;
z-index: 1000;
}
.coupontooltip_ul_list,
.coupontooltip_li_list {
background: #FF0000;
list-style-type: none;
float: left;
margin: 0;
padding: 0;
width: 100%
}
.coupontooltip_li_list {
background: #D6D6D6;
border-bottom: 2px solid #F5F5F5;
}
.coupontooltip_img {
width: 50px;
height: 50px;
float: left;
padding: 5px;
}
.coupontooltiplistspan {
display: inline-block;
}
"domain": "codereview.stackexchange",
"id": 12843,
"tags": "html, css"
} |
Correlation among features (e.g. doc length, punctuation, ...) in classifying spam emails | Question: I extracted some other features from my dataset regarding punctuation, capital letters, and upper-case words. I got these values:
looking at the correlation with my target variable (1=spam, 0=not spam), using .corr() in python.
BT stands for binary text and BS for binary summary: I assign 1 or 0 based on the presence in the text/summary of a capital letter, an upper-case word, and so on.
Do you think that features like these can be useful in model building? I cannot see very strong correlations, but I would like to determine if an email can be spam or not based also on features like these (number of character/text length; presence of !, upper case words,....).
I have around 1000 emails, but only 50 are spam (maybe too small to extract useful information).
However, I had to extract these information, so it is a new dataset, built on my own, so I could not get many more spam emails (and I would like to not use datasets from kaggle, for instance).
What do you think?
Answer: First, about the features: I think you could add some more, such as:
the time when the letter is received,
number of links in the email,
the whole structure (does it follow the typical structure of an email),
number of words that contains numbers in it,
the overall mood of the email (sales, threats, info, ...); for this you can use sentiment analysis,
number of attachments,
type of attachments and so on.
After that, try feature selection (you can read more about it here). For the imbalanced data you need to resample. I would:
add copies of spam emails(oversampling)
try to generate new spam emails (SMOTE)
You can read more here.
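As an illustration of the oversampling idea on made-up data (SMOTE, by contrast, would synthesise new spam points by interpolating between spam neighbours in feature space rather than duplicating them):

```python
import random

random.seed(0)

# Toy imbalanced dataset: 950 ham and 50 spam emails (label 1 = spam)
emails = [(f"ham_{i}", 0) for i in range(950)] + [(f"spam_{i}", 1) for i in range(50)]

spam = [e for e in emails if e[1] == 1]
ham = [e for e in emails if e[1] == 0]

# Naive random oversampling: draw spam examples with replacement
# until both classes have the same size
balanced = ham + spam + random.choices(spam, k=len(ham) - len(spam))
random.shuffle(balanced)

n_spam = sum(label for _, label in balanced)
print(n_spam, len(balanced) - n_spam)   # 950 950
```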
I hope my answer will give you some clarity. | {
"domain": "datascience.stackexchange",
"id": 8480,
"tags": "python, classification, text-mining, correlation"
} |
Normal force for car banking turn vs object sliding down slope | Question: When describing an object sliding down a slope, we say that the normal force is less than the weight of the object, and that the vector weight minus the normal force equals the force with which the object is pushed down the slope. (bottom left in image)
However, when describing a car making a banking turn on an angled road, the normal force seems to be greater than the weight of the object, such that the normal force minus the weight is a horizontal centripetal force. (bottom right in image)
I don't understand how to reconcile these models and was hoping someone could share their insight about it.
Answer: The normal force is one of a class of peculiar forces that vary depending on the situation. For example, the normal force of a table on a heavy book is greater than the normal force on a lightweight book. In this case you can't find the normal force unless you know the motion of the book (stationary). The normal force of the floor of an elevator on a book is different for a stationary elevator compared to the same book in an accelerating elevator. Forces of this type are called constraint forces.
So to reconcile your two pictures, you have to know the motion of the car. In one case it slides straight down the ramp, accelerating. In the other, it executes uniform circular motion. In both of your cases, the direction of the net force, which is constrained by the known motion, is enough information to completely solve the situation.
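A worked pair of numbers makes the contrast concrete. For a frictionless surface at angle $\theta$, the sliding block accelerates along the slope, so the perpendicular forces balance and $N = mg\cos\theta < mg$; a car on a frictionless banked turn at the speed requiring no friction needs a purely horizontal net force, so $N\cos\theta = mg$ and $N = mg/\cos\theta > mg$. (The mass and angle below are made-up illustration values.)

```python
import math

m, g = 1200.0, 9.81          # hypothetical car mass (kg) and gravity (m/s^2)
theta = math.radians(20)     # slope / bank angle

# Block sliding down the incline: forces perpendicular to the slope balance
N_slide = m * g * math.cos(theta)

# Car in uniform circular motion on a frictionless banked turn:
# the net force is horizontal (centripetal), so N cos(theta) = m g
N_turn = m * g / math.cos(theta)

print(N_slide < m * g < N_turn)   # True: same angle, different motion, different N
```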
Bottom line: you can't find the normal force on an object without first knowing the motion of the object. | {
"domain": "physics.stackexchange",
"id": 19803,
"tags": "homework-and-exercises, newtonian-mechanics"
} |
Third eigenvalue of the Lorenz equations | Question: I was reading and working through Strogatz's book on nonlinear dynamics and chaos on my own. I was trying to solve problem 9.2.1, and I do not understand how to solve part (c). I have seen solutions on the internet that find the third eigenvalue by saying that it is well known that the sum of the three eigenvalues must give a certain quantity; since they know two of the eigenvalues, they solve for the other. It puzzles me how they can know what the sum of the three eigenvalues gives.
Please, find attached the book with the problem on page 342.
http://arslanranjha.weebly.com/uploads/4/8/9/3/4893701/nonlinear-dynamics-and-chaos-strogatz.pdf
Answer: According to Vieta's formulas, you can find the third eigenvalue from the first two eigenvalues and the coefficient of $\lambda^2$ in the third-degree characteristic polynomial: the sum of the three eigenvalues equals the negative of that coefficient, which is also the trace of the Jacobian matrix. | {
"domain": "physics.stackexchange",
"id": 36883,
"tags": "homework-and-exercises, chaos-theory, non-linear-systems, complex-systems"
} |
How do you calculate Mcr (critical buckling moment) | Question: When designing a steel beam, the resistance to buckling is related to Mcr; the elastic critical moment for lateral-torsional buckling.
However the Eurocodes give no advice about how to calculate this parameter.
How would you calculate it?
Answer: In case Eurocodes do not provide enough information, some sources exist. In the case of elastic critical moment for lateral-torsional buckling, an NCCI (Non-contradictory, complementary information) document exists.
The document code is SN 003, and one version (maybe not the latest) can be accessed here. Hopefully, this will cover your current needs. In addition, the French technical centre for steel construction has developed a couple of pieces of software that can run the calculation for you:
LTbeam: for beams under bending loads
LTbeamN: for beams under combined bending and compression loads | {
"domain": "engineering.stackexchange",
"id": 2742,
"tags": "civil-engineering, steel, beam, eurocodes"
} |
Join Lines (considering `-` at the end of lines) | Question: Normally when a line ends with -, it means it should be joined differently.
For example, if we join lines of:
The causal part of the competency definition is to distinguish between
those many characteristics that may be studied and measured about a per-
son and those aspects that actually do provide a link to relevant behaviour.
Thus having acquired a particular academic qualification may or may not
be correlated with the capacity to perform a particular job. The qualifica-
tion is scarcely likely – talisman-like – to cause the capacity for perform-
ance. However, a tendency to grasp complexity and to learn new facts and
figures may well be part of the causal chain, and the competency would be
expressed in these terms, not in terms of the possibly related qualification.
It should look like:
The causal part of the competency definition is to distinguish between
those many characteristics that may be studied and measured about a
person and those aspects that actually do provide a link to relevant
behaviour. Thus having acquired a particular academic qualification
may or may not be correlated with the capacity to perform a particular
job. The qualification is scarcely likely – talisman-like – to cause
the capacity for performance. However, a tendency to grasp complexity
and to learn new facts and figures may well be part of the causal
chain, and the competency would be expressed in these terms, not in
terms of the possibly related qualification.
Instead of:
The causal part of the competency definition is to distinguish between
those many characteristics that may be studied and measured about a
per- son and those aspects that actually do provide a link to relevant
behaviour. Thus having acquired a particular academic qualification
may or may not be correlated with the capacity to perform a particular
job. The qualifica- tion is scarcely likely – talisman-like – to cause
the capacity for perform- ance. However, a tendency to grasp
complexity and to learn new facts and figures may well be part of the
causal chain, and the competency would be expressed in these terms,
not in terms of the possibly related qualification.
Please look at the difference between person and per- son, qualification and qualifica- tion.
To solve this problem, I have written a small command line application.
#!/usr/bin/env python3
import sys
import argparse
parser = argparse.ArgumentParser()
parser.add_argument('infile', nargs='?', type=argparse.FileType('r'), default=sys.stdin)
args = parser.parse_args()
# taking input from stdin which is empty, so there is neither a stdin nor a file as argument
if sys.stdin.isatty() and args.infile.name == "<stdin>":
    sys.exit("Please give some input")
paragraph = []
for line in args.infile:
    if line.endswith("-\n"):
        paragraph.append(line.rstrip('-\n'))
    else:
        paragraph.append(line.replace("\n", " "))

print(''.join(paragraph))
Please give me some feedback so that I can better this program.
Answer: Low hanging fruits
use an if __name__ == "__main__" guard in your code. This means you can import it without the whole thing running each time you load it.
Adding functions is always a good thing.
Adding typing hints is always nice.
I have no idea what this part does
# taking input from stdin which is empty, so there is neither a stdin nor a file as argument
if sys.stdin.isatty() and args.infile.name == "<stdin>":
    sys.exit("Please give some input")
so I removed it. Code seems to be running just as fine.
Instead I added a required=True keyword to argparse.
With these simple changes the code now looks like
def content_2_proper_paragraph(content: str) -> str:
    paragraph = []
    for line in content:
        if line.endswith("-\n"):
            paragraph.append(line.rstrip("-\n"))
        else:
            paragraph.append(line.replace("\n", " "))
    return "".join(paragraph)
if __name__ == "__main__":
    import argparse

    parser = argparse.ArgumentParser()
    parser.add_argument(
        "-f", "--file", nargs="?", type=argparse.FileType("r"), required=True
    )
    args = parser.parse_args()

    paragraph = content_2_proper_paragraph(args.file)
    print(paragraph)
Enhancements
So let us keep working on the code. There is still the issue with leading spaces, and some semantics.
The trailing - to split a word over multiple lines is called a hyphen and so our code should call it that.
This part screams for refactoring:
for line in content:
    if line.endswith("-\n"):
        paragraph.append(line.rstrip("-\n"))
    else:
        paragraph.append(line.replace("\n", " "))
If we reorder the logic and start with the paragraph, it looks like this
for line in content:
    paragraph.append(line.rstrip("-\n") if line.endswith("-\n") else line.replace("\n", " "))
The pattern to look out for here is
for x in X:
    f(x)
This can be rewritten as map(f, X) or inline as [f(x) for x in X]. This means the whole append part should be its own function. Something like
def content_without_hyphens(
    content: str, hyphen: str = "-", keep_if_space: bool = True
):
    def remove_trailing_hyphen(line):
        *chars, penultimate_char, last_char, _ = line
        last_char_is_hyphen = (last_char == hyphen) and (
            keep_if_space or penultimate_char != " "
        )
        return (
            "".join(chars)
            + penultimate_char
            + ("" if last_char_is_hyphen else last_char)
        )

    paragraph = "".join(map(remove_trailing_hyphen, content))
    return paragraph
works, but is really messy. I tried to implement a better method to remove the trailing -, but ultimately this is a big fail. Your method with string replace is much cleaner. However, we cannot use it directly, because we need a negative lookbehind to figure out whether a SPACE precedes the HYPHEN. The whole code then looks like this
import re
from typing import Annotated

Paragraph = Annotated[str, "a series of sentences"]
Hyphen = Annotated[
    str, "Divides long words between the end of one line and the beginning of the next"
]


def paragraph_without_hyphens(paragraph: Paragraph, hyphen: Hyphen = "-") -> Paragraph:
    TRAILING_HYPHEN = re.compile(fr"(?<!( )){hyphen}$")

    def remove_trailing_hyphen_regex(line):
        # join hyphenated word halves directly; otherwise keep a space between lines
        stripped, n_subs = TRAILING_HYPHEN.subn("", line.rstrip("\r\n"))
        return stripped if n_subs else stripped + " "

    return "".join(map(remove_trailing_hyphen_regex, paragraph))


if __name__ == "__main__":
    import argparse

    parser = argparse.ArgumentParser()
    parser.add_argument(
        "-f", "--file", nargs="?", type=argparse.FileType("r"), required=True
    )
    paragraph = parser.parse_args().file
    print(paragraph_without_hyphens(paragraph))
"domain": "codereview.stackexchange",
"id": 42276,
"tags": "python, python-3.x"
} |
In keras seq2seq model, what is the difference between `model.predict()` and the inference model? | Question: I am looking into seq2seq model in keras, for example, this blog post from keras or this. All the examples I have seen have some inference model, that depicts the original model. That inference model is then used to make the predictions.
My question is: why can't we just call model.predict()? I mean, we can, because I have used it and it works, but what is the difference between the two approaches? Is it wrong to use model.predict() and apply the reverse word tokenizer to the argmax?
Answer: I understand that with "All the examples I have seen have some inference model, that depicts the original model" you mean that there is a function that performs complex operations with the model instead of just invoking model.predict(). Such a function is called decode_sequence in the linked examples.
Note that you can't just invoke model.predict() once because you don't have any inputs to feed to the decoder.
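Schematically, the inference function is just a loop of repeated predict calls. The sketch below uses a canned stand-in for model.predict(); the function names and tokens are illustrative, not taken from the Keras examples:

```python
def fake_predict(prefix):
    """Stand-in for model.predict(): maps the last token to the next one."""
    canned = {"<s>": "hello", "hello": "world", "world": "\n"}
    return canned[prefix[-1]]

def decode_sequence(start_token="<s>", stop_token="\n", max_len=10):
    tokens = [start_token]
    for _ in range(max_len):
        next_token = fake_predict(tokens)   # one predict() call per output token
        if next_token == stop_token:        # stop: stop token or max_len reached
            break
        tokens.append(next_token)
    return tokens[1:]                       # drop the start token

print(decode_sequence())   # ['hello', 'world']
```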
The thing with this type of seq2seq models is that they are autoregressive. This means that they predict the next token based on its previous predictions. Therefore, you need to predict one token at a time: first, you predict the first token, then you invoke again the model with such a prediction to get the next token, and so on. This is precisely what function decode_sequence does: it just invokes model.predict() to get the next token, until the stop condition is met, that is either predicting the \n token or having predicted the maximum number of tokens. | {
"domain": "datascience.stackexchange",
"id": 9935,
"tags": "keras, tensorflow, sequence-to-sequence, inference"
} |
Extract iodine from tincture? | Question: What I am trying to do is extract elemental iodine from iodine tincture - that is 5% iodine solution in aqueous ethanol with $\ce{KI}$ as "helper additive", or whatever you'd call it. I don't have povidone-iodine, I don't have iodide tincture, and I don't have $\ce{KI}$. Well, there is $\ce{KI}$ in my tincture, but that is not my point. My point is, there are procedures for extracting $\ce{I2}$ from povidone-iodine, there are procedures for converting iodide salts (in tincture or not) to elemental iodine, and there are those for simply precipitating out $\ce{I_2}$ from tincture, which is my case.
BTW, why not just evaporate the ethanol? Or is the iodine too volatile, so that it would sublime away with the evaporating ethanol (iodine melts at $113.7\ ^\circ\mathrm{C}$ and sublimes appreciably even below that)?
I want to use household vinegar - 6% apple vinegar if possible because it is easily available. Also, I want to use 3% pharmacy grade $\ce{H2O2}$ for the same reasons.
I have done this, kind of successfully, but I want to know what is the chemistry behind all that, and how would you calculate the amounts of reagents needed?
Answer: If you want to save the iodine $\ce{I_2}$ from its solution, you may add enough $\ce{KOH}$ or $\ce{NaOH}$ to the mixture to transform all the $\ce{I_2}$ into a colorless mixture of $\ce{KI}$ and $\ce{KIO_3}$. $$\ce{3I_2 + 6OH^- -> IO_3^- + 5I^- + 3H_2O}$$ The remaining ethanol can then be evaporated by heating the nearly colorless solution above $80\ ^\circ\mathrm{C}$. When the temperature reaches $100\ ^\circ\mathrm{C}$, the ethanol has been totally removed from the solution; the hot solution can be cooled down to room temperature, and some acid should be added in order to destroy the excess of $\ce{NaOH}$ or $\ce{KOH}$ and then to recover the iodine $\ce{I_2}$ according to $$\ce{IO_3^- + 5 I^- + 6 H^+ -> 3 I_2 + 3 H_2O}$$ The totality of the iodine $\ce{I_2}$ dissolved in the original tincture is recovered, without any ethanol, and can be collected by filtration. | {
"domain": "chemistry.stackexchange",
"id": 14140,
"tags": "inorganic-chemistry, extraction"
} |
PDO class for multiple databases | Question: I have a PDO class below:
class DB {
private $dbh;
private $stmt;
static $db_type;
static $connections;
public function __construct($db, $id="") {
switch($db) {
case "db1":
try{
$this->dbh = new PDO("mysql:host=localhost;dbname=ms".$id, 'root', '', array( PDO::ATTR_PERSISTENT => true ));
} catch(PDOException $e){
print "Error!: " . $e->getMessage() . "<br />";
die();
}
break;
case "db2":
try{
$this->dbh = new PDO("mysql:host=localhost;dbname=users", 'root', '', array( PDO::ATTR_PERSISTENT => true ));
} catch(PDOException $e){
print "Error!: " . $e->getMessage() . "<br />";
die();
}
break;
}
self::$db_type = $db;
}
static function init($db_type = "", $id){
if(!isset(self::$connections[$db_type])){
self::$connections[$db_type] = new self($db_type, $id);
}
return self::$connections[$db_type];
}
public static function query($query) {
self::$connections[self::$db_type]->stmt = self::$connections[self::$db_type]->dbh->prepare($query);
return self::$connections[self::$db_type];
}
public function bind($pos, $value, $type = null) {
if( is_null($type) ) {
switch( true ) {
case is_int($value):
$type = PDO::PARAM_INT;
break;
case is_bool($value):
$type = PDO::PARAM_BOOL;
break;
case is_null($value):
$type = PDO::PARAM_NULL;
break;
default:
$type = PDO::PARAM_STR;
}
}
self::$connections[self::$db_type]->stmt->bindValue($pos, $value, $type);
return self::$connections[self::$db_type];
}
public function execute() {
return self::$connections[self::$db_type]->stmt->execute();
}
}
Is it ok for multiple DB connections or not?
Answer: A quick recap from SO
If you want to implement the Singleton pattern (first read about SOLID, though, and pay special attention to injection), make the constructor private.
Your query method is static, why? It defaults (without the user being able to do anything about this) to the last connection that was established. Are you sure that's what the user wanted? Of course not! But then again, all of the other methods, like execute, exhibit the same behaviour, so everybody will end up working on the same connection, until they run into trouble and revert to using their own instances of PDO.
As ever, everything I have to say about custom wrapper classes around an API like PDO offers, can be read here. What I think of DB wrappers in general can be found here.
If you want to have all current connections available globally, then your code should shift towards a Factory, not a Singleton pattern:
class Factory
{
private static $connections = array();
public static function getDB($host, array $params)
{
if (!isset(self::$connections[$host]))
{
self::$connections[$host] = new DB($params);
}
return self::$connections[$host];
}
} | {
"domain": "codereview.stackexchange",
"id": 4449,
"tags": "php, mysql, singleton"
} |
Problem in adding two Schunk arms plus a simple shape (e.g. cylinder) into .urdf.xacro | Question:
Dear Friends,
First of all, I apologize if my question may look trivial. I am new with ROS.
Here is the problem: The aim is to have two Schunk arms mounted on a simple box.
We have a robot.urdf.xacro file. This file "includes" the model of the arm:
<xacro:include filename="$(find schunk_description)/urdf/lwa4d/lwa4d.urdf.xacro" />
And uses it twice to have two arms:
<!-- arm -->
<xacro:schunk_lwa4d name="arm" parent="world" has_podest="true">
<origin xyz="0 0 0.026" rpy="0 0 0" />
</xacro:schunk_lwa4d>
<!-- arm2 -->
<xacro:schunk_lwa4d name="arm2" parent="world" has_podest="true">
<origin xyz="0.5 0.5 0.526" rpy="0 0 0" />
</xacro:schunk_lwa4d>
And then when I spawn the xacro file in launch file, 2 arms are loaded in Gazebo. Till here seems fine!
But adding a simple shape to this set was/is a pain for me.
I define a ros package, called myschunk_gazebo. Then I try to include one simple .xacro file including only a box (or cylinder) in myschunk_gazebo/models and I call the file simple_shape.xacro.
I know that ROS can locate the package. I check it with rospack:
ros@ros:~/Documents/Damon_CPP/ROS_Workspace/devel$ rospack find myschunk_gazebo
/home/ros/Documents/Damon_CPP/ROS_Workspace/src/myschunk_gazebo
But from here the problem starts!
Well, as I know, even this simple shape is called a "robot". So I write simple_shape.xacro. like:
<?xml version="1.0"?>
<robot name="myfirst">
<link name="base_link">
<visual>
<geometry>
<cylinder length="0.6" radius="0.2"/>
</geometry>
</visual>
</link>
</robot>
When I launch the file, only the empty space is loaded in gazebo. And between many messages, I thought this might be useful to be put in here:
Failed to find root link: Two root links found: [base_link] and [world]
I guess I know what is going on. We should have only one <robot> element, which I have in my urdf.xacro file that I spawn in my launch file. But what I do when I include simple_shape.xacro is that I have two <robot> elements. And hence, two root links.
I remove the lines related to robot and change simple_shape.xacro to:
<?xml version="1.0"?>
<link name="base_link">
<visual>
<geometry>
<cylinder length="0.6" radius="0.2"/>
</geometry>
</visual>
</link>
I also include base_link in my robot.urdf.xacro file:
<link name="base_link"/>
And loads only empty space.
I don't think the inclusion of the file should be a problem. But I highly doubt if I use the shape in robot.urdf.xacro properly, or my codes for generating the shape are correct.
It would be so nice if a friend can clarify for me what is going on.
Thanks. :)
Originally posted by Damon on ROS Answers with karma: 137 on 2015-07-10
Post score: 0
Answer:
tl;dr: you probably forgot to add a joint between world and your box/cylinder's base_link.
Longer version: a urdf (which is what the .xacro files are turned into before Gazebo or anything else in ROS gets to them) models a scene as a tree. So every link needs to be connected to another. Forests (set of disconnected trees, or islands of links) are not allowed.
In your case, the two calls to xacro:schunk_lwa4d supply that macro with two important arguments:
parent="world"
<origin xyz="0 0 0.026" rpy="0 0 0" />
Argument 1 is used internally in xacro:schunk_lwa4d to setup a fixed joint (or parent-child relation) between a link outside of the schunk_lwa4d macro called world and the root of the LWA 4D (here). One of those two links will be made the root of the LWA 4D model, depending on the has_podest argument.
Argument 2 then defines the relative transform between that world link and the ${name}_podest_link here (in case has_podest=true), or here for ${name}_base_link if you have has_podest=false.
Now the world link can either be predefined by Gazebo, or added by you to your composite xacro (I use the term composite to mean an xacro that does not really define anything new itself, but is a collection of instantiations of other xacro:macros). In all cases you end up with a tree structure, in which world is the root.
Failed to find root link: Two root links found: [base_link] and [world]
This should now be obvious, but in the case where you added the box/cylinder, you seem to not have defined any joint linking the base_link (or its root) of your box to the rest of the scene. That results in two forests (ie: islands) and that is not allowed in urdf (and so Gazebo can't work with it either).
Solution: define a joint (probably a fixed one) which fixes your box/cylinder in your scene, using origin to define the relative transform between the parent and the base_link of your new addition (or whatever you end up naming the root of your model).
Note also that all links in a urdf must be uniquely named: you cannot have two base_links. This is why most xacros include a prefix or name parameter, which is prefixed to all internally defined links. It essentially namespaces all links, allowing you to instantiate a single xacro macro multiple times, by using different values for the prefix (in your case arm and arm2). You should probably do something similar with your box/cylinder. See wiki/urdf/Tutorials - Using Xacro to Clean Up a URDF File: Leg macro for an example.
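To make that solution concrete, a minimal fixed joint anchoring the cylinder's root link to world could look like the sketch below. The joint/link names and the origin offset are illustrative assumptions, not taken from the original question:

```xml
<!-- Hypothetical example: fix the cylinder's root link to the world link.
     Names and the origin offset are placeholders only. -->
<joint name="cylinder_fixed_joint" type="fixed">
  <parent link="world"/>
  <child link="cylinder_base_link"/>
  <origin xyz="1.0 0 0.3" rpy="0 0 0"/>
</joint>
```

Renaming the root link (here cylinder_base_link) also sidesteps the duplicate base_link problem mentioned above.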
Originally posted by gvdhoorn with karma: 86574 on 2015-07-11
This answer was ACCEPTED on the original site
Post score: 3
Original comments
Comment by Damon on 2015-07-23:
Dear gvdhoorn,
Thank you so much for your answer. The reason that I am replying late is that I realized I need to learn urdf more deeply. So I started from the beginning and at the end I was able to solve the problem. I will write my experience here in the form of an answer soon.
Thanks again. :)
Comment by gvdhoorn on 2015-07-28:
Ok. If you feel this answer has answered your question, please indicate that by ticking the checkmark.
Comment by Damon on 2015-07-29:
Ow, Ok. I will do so. Plus, if anybody else is reading this, I should mention again that I will write the problem and the solution more in detail as soon as I have time (right now I have 15 days to my thesis deadline! :( ) | {
"domain": "robotics.stackexchange",
"id": 22141,
"tags": "urdf, xacro"
} |
Chemical composition of seawater | Question: Is it true that sea water is composed of about $86\%$ oxygen, $11\%$ hydrogen and $3\%$ minerals? The chemical formula of water is $\ce{H2O}$ (two hydrogen and one oxygen), which shows that the number of hydrogen atoms is greater than that of oxygen.
If the number of hydrogen atoms is greater, then why does sea water consist of only $11\%$ hydrogen and $86\%$ oxygen, i.e. less hydrogen than oxygen?
The book which I am reading says which is confusing me:
... Seawater is composed of about $86\%$ oxygen, $11\%$ hydrogen and $3\%$ of minerals, consisting mainly of sodium and chlorine.
Answer: The book that you're reading is measuring by mass.
If you have pure water then you would expect oxygen to make up $\frac{16}{16 + 2}\times 100\% \approx 89 \% $ by mass. Likewise, hydrogen would make up $\frac{2}{16 + 2}\times 100\% \approx 11 \% $ by mass. | {
"domain": "chemistry.stackexchange",
"id": 11101,
"tags": "water, elements"
} |
Graham's scan algorithm including all colinear points | Question: I'm solving the Erect the Fence problem on leetcode. My approach is to use Graham's scan algorithm with the following steps:
Find the leftmost point p0
Sort the points according to their angle relative to p0
Iterate over the points:
while the orientation of the current point and the two last on the stack is clockwise, pop the last point from the stack
add the current point to the stack
The problem is sorting the points in such a way that, moving away from p0, the nearest points come first, and moving toward p0, the other way round. I cannot find any clean solution to this problem, and every proposed solution on leetcode using this algorithm is somehow wrong.
How to modify Graham's scan algorithm to include all degenerated points (creating the same angle with p0)?
Is it even possible, or should I go for another algorithm?
Answer: Here is my code, which passes all the tests. It's based on Matthew C's comment.
from functools import cmp_to_key
def graham_scan(points: list) -> list:
p0 = find_leftmost(points)
hull = []
points.sort(key=cmp_to_key(lambda p, q: orientation_cmp(p0, p, q)))
# handle colinear points at the beginning
i = 0
n = len(points)
while i < (n - 1) and get_orientation(p0, points[i], points[i+1]) == 0:
i += 1
points[:i+1] = sorted(points[:i+1], key=lambda p: get_distance(p0, p))
# main loop
for p in points:
while len(hull) > 1 and get_orientation(hull[-2], hull[-1], p) < 0:
hull.pop(-1)
hull.append(p)
return hull
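The helper functions are not shown in the answer; here is one possible sketch of them (my own, assuming points are (x, y) tuples and using the counter-clockwise-positive sign convention):

```python
import math

def find_leftmost(points):
    # Leftmost point; ties broken by the lowest y-coordinate.
    return min(points, key=lambda p: (p[0], p[1]))

def get_orientation(p0, p, q):
    # Sign of the cross product (p - p0) x (q - p0):
    # 1 for an anti-clockwise turn, -1 for clockwise, 0 for collinear.
    cross = (p[0] - p0[0]) * (q[1] - p0[1]) - (p[1] - p0[1]) * (q[0] - p0[0])
    return (cross > 0) - (cross < 0)

def get_distance(p, q):
    # Euclidean distance between two points.
    return math.hypot(q[0] - p[0], q[1] - p[1])
```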
get_orientation returns -1 for the clockwise orientation of two vectors and 1 for anti-clockwise. The get_distance function returns the distance between two points. orientation_cmp is:
def orientation_cmp(p0, p, q):
orientation = get_orientation(p0, p, q)
if orientation < 0:
return 1
elif orientation > 0:
return -1
return get_distance(p0, q) - get_distance(p0, p) | {
"domain": "cs.stackexchange",
"id": 21175,
"tags": "convex-hull"
} |
Simple Hangman game - first Python project | Question: I created a basic Hangman game that uses a text file to select a secret word. Are there any areas for improvement?
import random
secret_word = ['']
user_list = []
number_of_tries = 5
guessed_letters = []
user_tries = 0
user_guess = ''
def select_word():
global secret_word, user_list
with open('secret_words.txt', 'r') as f:
word = f.read()
word_list = word.split('\n')
secret_word = word_list[random.randint(1, len(word_list))]
user_list = ['-'] * len(secret_word)
def game_over():
if user_tries == number_of_tries or user_list == list(secret_word):
return True
else:
return False
def user_input():
global user_guess
user_guess = input('Guess a letter\n')
check_guess(user_guess)
def repeated(guess):
global guessed_letters
if guess in guessed_letters:
print('You already guessed that letter!\n')
return True
else:
guessed_letters.append(user_guess)
return False
def check_guess(guess):
correct_guess = False
for x in range(len(secret_word)):
if guess == secret_word[x]:
user_list[x] = guess
correct_guess = True
elif not correct_guess and x == len(secret_word)-1:
global user_tries
user_tries += 1
print('Wrong guess, you lose one try\n'
'Remaining tries : {}\n'.format(number_of_tries - user_tries))
if correct_guess:
print('Correct guess!')
def valid_input(user_letter):
valid_letters = 'qwertyuiopasdfghjklzxcvbnm'
if user_letter.lower() in list(valid_letters):
return True
else:
print('Invalid input')
return False
# main code:
print('----HANG MAN----')
print('*Welcome, guess the word\n*you have 5 tries.')
select_word()
while not game_over():
for x in user_list:
print(x, end='')
user_guess = input('\nGuess a letter : ')
if valid_input(user_guess):
if repeated(user_guess):
continue
else:
check_guess(user_guess)
if user_list != list(secret_word):
print('Game over, you died!\ncorrect word was {}'.format(secret_word))
else:
print('Congratulations! you guessed the correct word\n')
Answer: For the word selection, there is a bug on
secret_word = word_list[random.randint(1, len(word_list))]
you should change to
secret_word = word_list[random.randint(0, len(word_list)-1)]
because random.randint(1, len(word_list)) never returns index 0, and can return len(word_list), which is out of bounds.
Also, you may remove secret_word = [''] and user_list=[] at the beginning.
number_of_tries = 5
guessed_letters = []
user_tries = 0
user_guess = ''
def select_word():
with open('secret_words.txt', 'r') as f:
word = f.read()
word_list = word.split('\n')
secret_word = word_list[random.randint(0, len(word_list)-1)]
user_list = ['-'] * len(secret_word)
return secret_word, user_list
looks more compact. So you can use it as :
print('----HANG MAN----')
print('*Welcome, guess the word\n*you have 5 tries.')
secret_word, user_list = select_word()
...
Also for efficiency and compactness, you can change this
while not game_over():
for x in user_list:
print(x, end='')
user_guess = input('\nGuess a letter : ')
if valid_input(user_guess):
if repeated(user_guess):
continue
else:
check_guess(user_guess)
to:
while not game_over():
print(''.join(user_list))
user_guess = input('\nGuess a letter : ')
if valid_input(user_guess):
if not repeated(user_guess):
check_guess(user_guess)
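As a side note (this is my addition, not part of the original answer), random.choice sidesteps the index arithmetic and the off-by-one bug entirely:

```python
import random

def select_word(word_list):
    # random.choice picks a uniformly random element, so there is no
    # index arithmetic to get wrong.
    secret_word = random.choice(word_list)
    user_list = ['-'] * len(secret_word)
    return secret_word, user_list
```

It also raises IndexError on an empty list, so a missing or empty words file fails loudly instead of silently.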
For the game itself, you may want to try using classes, which will make it more readable and easier to analyze. | {
"domain": "codereview.stackexchange",
"id": 33248,
"tags": "python, beginner, game, hangman"
} |
How to use cmd_vel data in navigation stack? | Question:
I'm trying to convert geometry_msgs::Twist message received from cmd_vel topic to something that can make my arduino robot move.
I know that from geometry_msgs::Twist message i'll receive linear velocity and angular velocity.
My motors are actually controlled by calling functions like that:
void RobotControl::goAhead(int PWM){
digitalWrite(_IN1, LOW);
digitalWrite(_IN2, HIGH);
digitalWrite(_IN3, LOW);
digitalWrite(_IN4, HIGH);
analogWrite(_ENB, PWM);
analogWrite(_ENA, PWM); }
in this case, PWM is a value between 0 and 200 and sets speed for my motors in this range.
The main question is:
Is there a way to use received cmd_vel
data and convert this to instructions
that can make my robot move by simpy
call goAhead() function? If my goAhead function has
not been implemented in the right way, how
to implement a funciton that solve the
problem?
Thank u!
--- EDIT---
OK, let's see if I understand this:
velCallback(Twist &twistmsg) {
vel_x = twistmsg.linear.x;
vel_th = twistmsg.angular.z;
if(vel_x == 0){
right_vel = vel_th * width_robot / 2.0;
left_vel = (-1) * right_vel; }
else if(vel_th == 0){
left_vel = right_vel = vel_x;
}
else{
left_vel = vel_x - vel_th / 2.0;
right_vel = vel_x + vel_th / 2.0;
}
Now, if everything is right, I can calculate RPM:
RPMleft = ((60 * left_vel) / (diameter * PI))
RPMright = ((60 * right_vel) / (diameter * PI))
Now I should calculate PWM. I googled for almost one hour but I can't find anything. Do you know how I can do it?
Originally posted by Oper on ROS Answers with karma: 67 on 2016-07-16
Post score: 0
Answer:
You have to write a function that performs the body-frame velocities to wheel velocities transformation. The body-frame velocities are contained in the Twist messages published on /cmd_vel. Then you have to also write a second, much simpler function that converts the wheel velocities to motor RPM commands and then PWM commands.
If your robot is a differential-drive one, you can probably Google search for the corresponding equations. There might even be a ROS package that can help with that. The key takeaway is that this function is robot-specific, as it depends on things like number of wheels, wheel configuration, distance between wheels, wheel radius, etc.
Various functions in RobotControl might already be doing some of that work. But based on the input argument of goAhead, I doubt it does the transformation. It probably just has all wheels spin at the same, fixed RPM (in an open-loop fashion).
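The transformation described above can be sketched as follows for a differential-drive robot. This is my own sketch, not code from the question: the wheel-separation and diameter values are placeholders, and all names are assumptions.

```python
import math

WHEEL_SEPARATION = 0.30  # metres, placeholder value
WHEEL_DIAMETER = 0.10    # metres, placeholder value

def twist_to_wheel_velocities(linear_x, angular_z):
    # Body-frame velocities (m/s, rad/s) -> left/right wheel rim velocities (m/s).
    left = linear_x - angular_z * WHEEL_SEPARATION / 2.0
    right = linear_x + angular_z * WHEEL_SEPARATION / 2.0
    return left, right

def wheel_velocity_to_rpm(velocity):
    # Rim velocity (m/s) -> wheel revolutions per minute.
    return 60.0 * velocity / (math.pi * WHEEL_DIAMETER)
```

Mapping RPM to a PWM duty cycle is robot-specific; without encoders, a linear guess such as PWM = k * RPM (with k found by trial and error) is a common starting point.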
Originally posted by spmaniato with karma: 1788 on 2016-07-16
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by Oper on 2016-07-16:
Thank you man for this fast response, i've just update the main question. Do you know how to convert PWM to RPM? i can't find the formula
Comment by spmaniato on 2016-07-17:
I don't think there's a standard way of doing that unfortunately. If you have wheel encoders, you can use feedback from those to write a PID controller. The PID controller will take in the desired RPS (which you calculated already) and the current RPS and spit out PWM values.
Comment by spmaniato on 2016-07-17:
If you don't have wheel encoders, try something simple, like PWMleft = k * RPMleft and PWMright = k * RPMright, where k is a constant that you figure out based on trial and error. (It may depend on things like terrain, such as wood vs carpet)
Comment by Oper on 2016-07-17:
Yes man, I have wheel encoders. I've just seen some examples about PID controllers; my only doubt now is how to calculate values like Kp, Ki, Kd and Ko. I think that I can reduce Ki to zero, but I don't have any idea how to calculate the other values. Do you know how I can do this?
Comment by spmaniato on 2016-07-17:
Start simple, just P control first. Get something kinda working and then add complexity and rigor :-)
Comment by MH_Ahmed on 2019-12-03:
Could anyone explain how to convert the Twist data from the base_link frame into the wheel_link frame?
From the answer, I understood to receive the twist data and extract the X-component of the linear velocity and the Z-component of the angular velocity. Then convert them into the wheel_link frame and send them to my controllers, Am I correct? | {
"domain": "robotics.stackexchange",
"id": 25264,
"tags": "ros, arduino, navigation, base-controller"
} |
Relation between density and viscosity | Question: Viscosity is resistance to the motion of fluid layers sliding over one another. Density is a measure of forces of attraction between atoms or molecules of same species. Is there any relation between the two?
Answer: No. Mercury is very dense but pours easily.
Honey is lighter but very viscous (though this changes with temperature).
Density and viscosity are two different characteristics of a fluid.
Viscosity has the units Poise for dynamic viscosity and Stokes for kinematic viscosity. | {
"domain": "engineering.stackexchange",
"id": 1736,
"tags": "fluid-mechanics"
} |
Object-oriented student library 2 | Question: Follow up of Object-oriented student library
Question: How do you refactor this code so that it is pythonic, follows OOP, reads better and is manageable? How can I write name functions and classes better? How do you know which data structure you need to use so as to manage data effectively?
from collections import defaultdict
from datetime import datetime, timedelta
class StudentDataBaseException(Exception): pass
class NoStudent(StudentDataBaseException): pass
class NoBook(StudentDataBaseException): pass
"""To keep of a record of students
who have yet to return books and their due dates"""
class CheckedOut:
loan_period = 10
fine_per_day = 2
def __init__(self):
self.due_dates = {}
def check_in(self, name):
due_date = datetime.now() + timedelta(days=self.loan_period)
self.due_dates[name] = due_date
def check_out(self, name):
current_date = datetime.now()
if current_date > self.due_dates[name]:
delta = current_date - self.due_dates[name]
overdue_fine = self.fine_per_day * delta.days
print("Fine Amount: ", overdue_fine)
# This only contains the title name for now
class BookStatus:
def __init__(self, title):
self.title = title
def __repr__(self):
return self.title
def __hash__(self):
return 0
def __eq__(self, other):
return self.title == other
# contains a set of books
class Library:
record = CheckedOut()
def __init__(self):
self.books = set()
def add_book(self, new_book):
self.books.add(new_book)
def display_books(self):
if self.books:
print("The books we have made available in our library are:\n")
for book in self.books:
print(book)
else:
print("Sorry, we have no books available in the library at the moment")
def lend_book(self, requested_book):
if requested_book in self.books:
print(f'''You have now borrowed \"{requested_book}\"''')
self.books.remove(requested_book)
return True
else:
print(f'''Sorry, \"{requested_book}\" is not there in our library at the moment''')
return False
# container for students
class StudentDatabase:
def __init__(self):
self.books = defaultdict(set)
def borrow_book(self, name, book, library):
if library.lend_book(book):
self.books[name].add(book)
return True
return False
def return_book(self, name, book, library):
if book not in self.books[name]:
raise NoBook(f'''\"{name}\" doesn't seem to have borrowed "{book}"''')
return False
else:
library.add_book(book)
self.books[name].remove(book)
return True
def students_with_books(self):
for name, books in self.books.items():
if books:
yield name, books
def borrow_book(library, book_tracking):
name = input("Student Name: ")
book = BookStatus(input("Book Title: "))
if book_tracking.borrow_book(name, book, library):
library.record.check_in(name)
def return_book(library, book_tracking):
name = input("Student Name: ")
returned_book = BookStatus(input("Book Title: "))
if book_tracking.return_book(name, returned_book, library):
library.record.check_out(name)
line = "_" * 100
menu = "Library Management System \n\n \
1) Add Book \n \
2) Display all Books \n \
3) Borrow a Book \n \
4) Return a Book \n \
5) Lending Record \n \
6) Exit"
library = Library()
book_tracking = StudentDatabase()
while True:
print(line)
print(menu)
choice = get_valid_choice(min=1, max=6)
print(line)
if choice == 1:
library.add_book(BookStatus(input("Book Title: ")))
elif choice == 2:
library.display_books()
elif choice == 3:
borrow_book(library, book_tracking)
elif choice == 4:
return_book(library, book_tracking)
elif choice == 5:
students = tuple(book_tracking.students_with_books())
if students:
for name, book in students:
print(f"{name}: {book}")
else:
print("No students have borrowed books at the moment")
elif choice == 6:
break
Answer: I think the general construction of your code is poor.
A book is owned by a library even if the book is on loan.
Your code doesn't seem to be able to handle books with the same name.
Without running your code it looks like you can steal books by taking one out and returning another.
BookStatus is incredibly poorly implemented. I'd instead use dataclasses.
You're effectively creating an in-memory database. And so you should design the database, and then the code around that.
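To make the __hash__/__eq__ point concrete (this snippet is mine, not part of the original review): a frozen dataclass derives a consistent, field-based hash instead of the constant 0, so set and dict lookups stay O(1):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class BookTitle:
    # frozen=True makes instances immutable and, together with the
    # generated __eq__, gives a field-based __hash__.
    title: str
```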
Starting with the database design you have three things:
Book
Loan
Person
Yes there is no library, because all books are in your library.
Each book can be loaned many times, but only one book can be in a loan.
Each person can have multiple loans, but each loan can only be given to one person.
And so your Loan object should have the ID of the book and the person, but neither the book nor the person should have any links to the other database items.
We can then create these objects and tables in Python.
I'm using dataclasses (Python 3.7+) and typing to build the objects quickly.
Since we're already using typing for the dataclass, I decided to make the rest of the program fully typed allowing a static analyzer, such as mypy, to check my code for errors.
This allows the base code of:
from dataclasses import dataclass
from typing import Optional, Set, Mapping, TypeVar, Dict, Type, Iterator
from datetime import datetime, timedelta
T = TypeVar('T')
@dataclass
class Book:
id: int
name: str
@dataclass
class Person:
id: int
name: str
@dataclass
class Loan:
id: int
book: Book
person: Person
checkout: datetime
due: datetime
checkin: Optional[datetime]
class Table(Mapping[int, T]):
_db: Dict[int, T]
def __init__(self, type: Type[T]) -> None:
self._db = {}
self._type = type
def __getitem__(self, key: int) -> T:
return self._db[key]
def __iter__(self) -> Iterator[T]:
return iter(self._db)
def __len__(self) -> int:
return len(self._db)
books = Table(Book)
people = Table(Person)
loans = Table(Loan)
This then allows us to easily add the additional functionality:
class Table(Mapping[int, T]):
# Other code
def add(self, *args, **kwargs) -> None:
key = len(self)
self._db[key] = self._type(key, *args, **kwargs)
def display(self) -> None:
for value in self.values():
print(value)
def borrow_book(person: int, book: int, loan_days: int) -> None:
checkout = datetime.now()
loans.add(
books[book],
people[person],
checkout,
checkout + timedelta(days=loan_days),
None
)
def return_book(loan: int) -> None:
loans[loan].checkin = datetime.now()
def display_active_loans() -> None:
has_active = False
for loan in loans.values():
if loan.checkin is not None:
continue
has_active = True
print(f'{loan.id}: {loan.person.name} -> {loan.book.name}')
if not has_active:
print('No active loans')
And usage is fairly easy, you just use IDs:
books.add('Title')
books.display()
people.add('Student')
people.display()
borrow_book(0, 0, 10)
display_active_loans()
return_book(0)
display_active_loans()
from dataclasses import dataclass
from typing import Optional, Set, Mapping, TypeVar, Dict, Type, Iterator
from datetime import datetime, timedelta
T = TypeVar('T')
@dataclass
class Book:
id: int
name: str
@dataclass
class Person:
id: int
name: str
@dataclass
class Loan:
id: int
book: Book
person: Person
checkout: datetime
due: datetime
checkin: Optional[datetime]
class Table(Mapping[int, T]):
_db: Dict[int, T]
def __init__(self, type: Type[T]) -> None:
self._db = {}
self._type = type
def __getitem__(self, key: int) -> T:
return self._db[key]
def __iter__(self) -> Iterator[T]:
return iter(self._db)
def __len__(self) -> int:
return len(self._db)
def add(self, *args, **kwargs) -> None:
key = len(self)
self._db[key] = self._type(key, *args, **kwargs)
def display(self) -> None:
for value in self.values():
print(value)
books = Table(Book)
people = Table(Person)
loans = Table(Loan)
def borrow_book(person: int, book: int, loan_days: int) -> None:
checkout = datetime.now()
loans.add(
books[book],
people[person],
checkout,
checkout + timedelta(days=loan_days),
None
)
def return_book(loan: int) -> None:
loans[loan].checkin = datetime.now()
def display_active_loans() -> None:
has_active = False
for loan in loans.values():
if loan.checkin is not None:
continue
has_active = True
print(f'{loan.id}: {loan.person.name} -> {loan.book.name}')
if not has_active:
print('No active loans') | {
"domain": "codereview.stackexchange",
"id": 32562,
"tags": "python, beginner, object-oriented, python-3.x"
} |
A free-fall electron | Question: I am reading Wheeler and Taylor's Spacetime Physics. In Ch2, Wheeler mentioned:
"for gravity, any free-fall frame is an inertial frame." (roughly)
I am left wondering if that is true for electrical force:
Consider a charge in a static electric field. The charge is in free fall. Is the electron's free-fall frame an inertial frame?
(if yes, then can we say electrical force is a pseudo-force too?)
Answer: The special quality of gravitational fields that is not shared by electric (or magnetic) fields is the Equivalence Principle. The thought experiment you need to do is something like this...
Imagine being in a laboratory which is floating in outer space in the absence of any external fields and closed to the outside world. Do a series of experiments in that laboratory and record a video of what happens.
Now imagine that, while you are sleeping, somebody switches on a uniform gravitational field so that (in the usual way we describe things) your laboratory accelerates along the field lines. The question is, would you be able to perform an experiment to deduce the existence of that field? It turns out the answer (so far as we have been able to tell) is No, and we call this fact the Equivalence Principle. The equivalence principle means that the so-called 'free-fall' frame of the laboratory is just as 'inertial' as the one floating in outer space.
Finally, suppose instead that while you were sleeping somebody turns on a uniform electric field, and let's ask the same question (i.e. would you be able to tell when you wake up?). This time the answer is a definite Yes. As a simple example, a positive and negative charge aligned with the field would now experience an attractive or repulsive force in addition to their previous attractive force and so the trajectory of the particles would be completely different from the one you recorded in the earlier experiment. (Actually, the effects around you would probably be so obvious you probably wouldn't even need to perform an explicit laboratory experiment!)
In short: A uniform gravitational field doesn't make any difference to the internal dynamics of a system, provided the system is allowed to 'free-fall' in that field. The falling frame is just as inertial as one floating in empty space. An electric (or electromagnetic) field absolutely does affect the internal dynamics of a system, however, and so does not allow us to create a 'free-fall' frame which behaves inertially. | {
"domain": "physics.stackexchange",
"id": 52792,
"tags": "electromagnetism, general-relativity, reference-frames, inertial-frames"
} |
Linq query performance improvements | Question: As I am getting my Linq query to a functional point, I start looking at the query and think about all the "ANY" and wonder if those should be a different method and then I have data conversions going on.
Does anything jump out as being a performance issue? What is recommended to make this more performant? (Yes, I need all the &&.)
etchVector =
from vio in list
where excelViolations.Any(excelVio => vio.VioID.Formatted.Equals(excelVio.VioID.ToString()))
&& excelViolations.Any(excelVio => vio.RuleType.Formatted.Equals(excelVio.RuleType))
&& excelViolations.Any(excelVio => vio.VioType.Formatted.Equals(excelVio.VioType))
&& excelViolations.Any(excelVio => vio.EtchVects.Any(x => x.XCoordinate.Equals(excelVio.XCoordinate)))
&& excelViolations.Any(excelVio => vio.EtchVects.Any(y => y.YCoordinate.Equals(excelVio.YCoordinate)))
select new EtchVectorShapes
{
VioID = Convert.ToInt32(vio.EtchVects.Select(x => x.VioID)),
ObjectType = vio.EtchVects.Select(x => x.ObjectType).ToString(),
XCoordinate = Convert.ToDouble(vio.EtchVects.Select(x => x.XCoordinate)),
YCoordinate = Convert.ToDouble(vio.EtchVects.Select(x => x.YCoordinate)),
Layer = vio.EtchVects.Select(x => x.Layer).ToString()
};
Answer: This is without optimizations, but the errors that you describe seem to be in how you are getting the data.
Based on your lists within list and the error message you are getting, try something like this:
etchVector = list.Where(vio => excelViolations.Any(currVio => vio.VioID.Formatted.Equals(currVio.VioID.ToString())
&& vio.RuleType.Formatted.Equals(currVio.RuleType)
&& vio.VioType.Formatted.Equals(currVio.VioType)
&& vio.Bows.Any(bw => bw.XCoordinate.Equals(currVio.XCoordinate))
&& vio.Bows.Any(bw1 => bw1.YCoordinate.Equals(currVio.YCoordinate)))).SelectMany(vi => vi.EtchVects).ToList(); | {
"domain": "codereview.stackexchange",
"id": 3772,
"tags": "c#, performance, linq"
} |
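A hedged, language-agnostic note on the performance side of the Q&A above (the data and names below are hypothetical stand-ins, not the original objects): each `excelViolations.Any(...)` call is an O(m) linear scan, so nesting several of them inside the filter makes the query roughly O(n·m) per predicate. The standard fix is to build hash-based lookups once before the query runs, which in C# would be a `HashSet<T>` or `ToLookup`/`ToDictionary`. The same idea sketched in Python:

```python
# Hypothetical stand-ins for the violation objects in the question.
violations = [{"vio_id": i, "rule": f"R{i % 3}"} for i in range(10_000)]

# Build hash-based lookup sets ONCE, before filtering: O(1) membership tests
# replace a fresh linear scan of the excel list for every element.
excel_ids = {v["vio_id"] for v in violations if v["vio_id"] % 2 == 0}
excel_rules = {"R0", "R1"}

matches = [v for v in violations
           if v["vio_id"] in excel_ids and v["rule"] in excel_rules]

print(len(matches))
```

The filtering logic is unchanged; only the cost per membership test drops from linear to constant, which is where most of the time in repeated `Any(...)` predicates typically goes.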
Why is light energy 100% reflected in total internal reflection? | Question: During reflection of light, energy transfer is not 100%.
When light from a rarer medium like air strikes a denser medium like a glass slab, part of the light is reflected back and part is refracted.
But during total internal reflection (TIR), when a beam of light from a denser medium like water strikes a rarer medium like air, it is 100% reflected and there is no loss of energy.
Why?
What is the reason that no energy is lost in TIR, but some is lost at a glass slab?
Answer: Firstly, your point about total internal reflection (see the tag wiki for further info about the phenomenon) reflecting 100% of the incident light energy, and hence preserving 100% of the incident intensity on reflection, is conventionally considered true, but is not entirely correct.
Let me explain:
Reflection of any type, even if it is total internal reflection, cannot reflect 100% of the incident light energy. Total internal reflection has nearly 100% efficiency (99.9999%, perhaps, but not 100%), far higher than conventional reflection from a surface.
An interface between 2 optical media has a "critical angle": if light is incident on the interface from the optically denser medium toward the optically rarer medium at an angle of incidence beyond this value, it is reflected back entirely into the denser medium, with no refraction into the rarer medium. Normally, at an interface between 2 transparent media of different optical densities, the light wave is partially reflected into the medium from which it was incident and partially refracted into the other medium, until the angle of incidence exceeds this critical angle. Total internal reflection is usually observed for light, but also applies to other types of waves, like sound and waves on a string. The phenomenon is explained classically by Huygens' wave theory of light, and it occurs at the interface between 2 media of different densities in which the wave travels.
Hence, the intensity of incident light is preserved nearly entirely in total internal reflection, except for the minor fraction which may be absorbed by the denser medium itself; this is low because the medium is transparent. There is also some energy loss due to photon tunneling across the interface, but again, it is not conventionally significant (about $10^{-3}\,\%$ in total). There are also evanescent waves across the interface, but they do not result in net energy transfer across it (see the Wikipedia article on evanescent waves).
In conventional reflection, 2 surfaces are involved:
1) A transparent unsilvered surface
2) An opaque silvered surface
For light transmission through the unsilvered surface, similar energy losses are applicable as in case of total internal reflection.
However, for light reflection at the silvered surface, the surface being opaque, will absorb a significant portion (say about 1%-2%) of the incident light energy, while it will reflect most of the light energy due to it being silvered and reflective.
This 2nd significant energy loss of incident light at the silvered surface is not applicable for total internal reflection, hence we commonly say that total internal reflection forms images at 100% the brightness (intensity) of the incident light. | {
"domain": "physics.stackexchange",
"id": 26050,
"tags": "energy, reflection, refraction"
} |
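The "nearly 100%" point in the answer above can be made quantitative with the Fresnel equations. The sketch below assumes an ideal lossless dielectric, so it shows exactly 100% reflectance past the critical angle; the small absorption and tunneling losses discussed in the answer are what reduce it in practice.

```python
import cmath
import math

def reflectance_s(n1, n2, theta_i):
    """Fresnel power reflectance |r_s|^2 for s-polarized light going
    from refractive index n1 into n2 at incidence angle theta_i (radians)."""
    cos_i = math.cos(theta_i)
    sin_t = n1 * math.sin(theta_i) / n2  # Snell's law
    # cos_t becomes purely imaginary past the critical angle --
    # exactly the evanescent-wave regime mentioned in the answer.
    cos_t = cmath.sqrt(1 - sin_t**2)
    r = (n1 * cos_i - n2 * cos_t) / (n1 * cos_i + n2 * cos_t)
    return abs(r) ** 2

n_glass, n_air = 1.5, 1.0
theta_c = math.asin(n_air / n_glass)  # critical angle, about 41.8 degrees

below = reflectance_s(n_glass, n_air, theta_c - 0.1)  # partial reflection
above = reflectance_s(n_glass, n_air, theta_c + 0.1)  # total internal reflection
print(below, above)  # below < 1; above is 1 up to rounding (lossless model)
```

Past the critical angle the reflection coefficient has the form $(a - ib)/(a + ib)$ with real $a, b$, whose magnitude is exactly 1: all the energy comes back, and only absorption in a real medium breaks this.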
Degree of freedom and specific heat concept link with radiation | Question: I was reading about black body radiation, where the total energy is $E=\sigma T^4$, so the specific heat is $C_v = 4 \sigma T^3$, i.e. proportional to $T^3$.
I read that specific heat actually depends on the degrees of freedom of the system, and also on the number of ways in which heat flow can occur. So is the degree of freedom in this case $3$? And is that where the $3$ in $T^3$ comes from?
Answer: There is a simple relation between specific heat and the number of degrees of freedom for some classes of classical systems (https://en.wikipedia.org/wiki/Equipartition_theorem). Black body radiation is a quantum system, so there is no such simple relation, and the number of degrees of freedom is infinite for black body radiation. | {
"domain": "physics.stackexchange",
"id": 49290,
"tags": "thermodynamics, temperature, thermal-radiation, degrees-of-freedom"
} |
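To make the point of the Q&A above concrete: the exponent $3$ in $C_v = 4\sigma T^3$ is just the calculus of differentiating $T^4$ once with respect to temperature, not a count of three degrees of freedom. A quick finite-difference check (a sketch taking the $E = \sigma T^4$ form from the question as-is, with the SI Stefan-Boltzmann constant):

```python
sigma = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def energy(T):
    """E = sigma * T**4, the form quoted in the question."""
    return sigma * T**4

def heat_capacity_numeric(T, dT=1e-3):
    """Central-difference estimate of dE/dT."""
    return (energy(T + dT) - energy(T - dT)) / (2 * dT)

T = 300.0
analytic = 4 * sigma * T**3  # the C_v from the question
numeric = heat_capacity_numeric(T)
print(numeric, analytic)  # the two agree to high precision
```

Any quantity scaling as $T^4$ would yield a $T^3$ derivative this way, regardless of how many (here, infinitely many) field modes contribute.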