Sum of two squares - Number of steps in Fermat descent
If a prime $p$ can be written as the sum of two squares, then one can construct this representation via Fermat descent if we know an $x$ such that $x^2 \equiv -1 \mod p$. Is there a way to say how many steps the Fermat descent will take without actually going through the descent?
I have also posted this question on MathStackExchange a week ago (here), but received no answer. I hope it is ok to post it here, too.
EDIT: For example, let $p=1553$. Then $x=339$ and $339^2 + 1^2 = 74 \cdot 1553$. With the descent, we next compute numbers $x_2$, $y_2$ with $x_2^2+y_2^2 = k \cdot 1553$ with $k < 74$. In this case, $x_2=-142$, $y_2=5$ and $x_2^2+y_2^2=13 \cdot 1553$. Going on, in the next step we get $(-9)^2 + (-55)^2 = 2 \cdot 1553$ and finally $(-32)^2 + 23^2 = 1 \cdot 1553$. So here we have three steps: $74 \mapsto 13, 13 \mapsto 2, 2 \mapsto 1$.
Can you be a bit more specific? I.e., define what exactly you mean by a "Fermat descent step", if you want exact numbers or bounds, ... Note that the number of steps will usually depend on the square root of $-1$ mod $p$ you start with.
@Michael: I made an edit, I hope it is clear now what I mean with "step"
How did you compute $x_2$?
From the example, it appears that the following is used: assume you have $x,y$ with $x^2 + y^2 = kp$. Then take $x' + iy' = (x + iy)(u - iv)/k$ where $u,v$ are the absolutely smallest remainders of $x,y$ mod $k$. In the example, $u = -31$, $v = 1$, which leads to $(x_2, y_2) = (-142, -5)$.
Yes, that is the way I computed $x_2,y_2$.
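To make the procedure concrete, here is a small Python sketch of the descent as described in the comments above (the function name and structure are my own, chosen for illustration):

```python
def fermat_descent(p, x):
    """Given x with x^2 = -1 (mod p), descend to a, b with a^2 + b^2 = p.

    Returns (a, b, steps), where steps counts the iterations k -> k'.
    """
    a, b = x, 1
    k = (a * a + b * b) // p
    steps = 0
    while k > 1:
        # absolutely smallest remainders of a and b mod k
        u = a % k
        if u > k // 2:
            u -= k
        v = b % k
        if v > k // 2:
            v -= k
        # (a + bi)(u - vi) / k  -- both divisions are exact
        a, b = (a * u + b * v) // k, (b * u - a * v) // k
        k = (a * a + b * b) // p
        steps += 1
    return a, b, steps

# The example from the question: p = 1553, x = 339 takes three steps.
print(fermat_descent(1553, 339))
```

For $p=1553$, $x=339$ this reproduces the three steps $74 \mapsto 13 \mapsto 2 \mapsto 1$; the signs of the final pair may differ from the example above, since they depend on rounding choices in the remainders.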
In each iteration of Fermat descent, the multiple of $p$ written as a sum of two squares is at least halved. So if you start with $x>0$ satisfying $x^2\equiv -1\mod p$, the procedure ends in at most $\log_2(\frac{x^2+1}{p})$ iterations.
Why x/p? x<p, so the log_2() is negative.
Dear joro: Thanks for spotting that. It's now fixed.
In the example above this bound would be $6$, whereas the exact number of steps is $3$, so this is a bit too rough. Also, I am more interested in a lower bound.
Let $l$ be the square root of $-1$: $l^2+1\equiv 0 \mod p$. Then from the factorization $x^2+y^2\equiv(x+ly)(x-ly)$ we see that the set
$$\{(x,y)\in\mathbb{Z}^2:x^2+y^2\equiv 0\mod p\}$$ is the union of two lattices
$$\Lambda_\pm=\{(x,y)\in\mathbb{Z}^2:x\pm ly\equiv 0\mod p\}.$$
Each step of Fermat descent is almost (but not exactly) the same as a step of Gauss reduction applied to the basis $(0,p)$, $(\mp l,1)$. This means it works like the "nearest integer continued fraction" algorithm. But the vectors $(x_k,y_k)$ may jump between $\Lambda_+$ and $\Lambda_-$, the numbers $x_k$, $y_k$ may be remainders or swapped remainders, and so on. If you need more details on Fermat descent, it will be necessary to take care of all these technical difficulties.
---
Cron on Ubuntu AWS with Python/Anaconda Virtual Environment
Previously was using Fedora, and I was calling cron jobs using this method, which worked perfectly:
source /home/me/miniconda/bin/activate me_dev; python /home/me/avant_bi/g_parse.py
Now this throws an error in the cron logs:
/bin/sh: 1: source: not found
I've tried switching source for a . to no avail, as I read something I didn't fully understand about Ubuntu cron not working with the source call.
I've also tried
/home/me/miniconda/envs/me_dev/python /home/me/avant_bi/g_parse.py
Which is the location of the python I use when I activate the environment generally, but that seemingly is doing nothing (no logs of it running in cron).
I've tried multiple variations of this to no avail. Any ideas for what to do in this situation?
The default shell on Ubuntu is /bin/dash, so /bin/sh will be a symlink to that. source is a bash builtin. To run cron jobs under bash, put SHELL=/bin/bash in the cron file.
Awesome! Couldn't find anything for this. Had also tried some PATH= at the top, and that wasn't working. Thanks!
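Putting it together, the crontab would look something like this (paths are from the question; the schedule line is illustrative):

```shell
SHELL=/bin/bash
# m h dom mon dow  command
0 * * * * source /home/me/miniconda/bin/activate me_dev; python /home/me/avant_bi/g_parse.py
```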
---
OpenGL - strange SSAO artifact
I followed the tutorial at Learn OpenGL to implement Screenspace Ambient Occlusion. Things are mostly looking okay besides a strange artifact at the top and bottom of the window.
The problem is more obvious when moving the camera, when it appears as if the top parts of the image are imprinted on the bottom and vice versa, as shown in this video.
The artifact worsens when standing close to a wall and looking up and down so perhaps the Znear value is contributing? The scale of my scene does seem small compared to other demos, Znear and Zfar are 0.01f and 1000 and the width of the shown hallway is around 1.2f.
I've read into the common SSAO artifacts and haven't found anything resembling this.
#version 330 core
in vec2 TexCoords;
layout (location = 0) out vec3 FragColor;
uniform sampler2D MyTexture0; // Position
uniform sampler2D MyTexture1; // Normal
uniform sampler2D MyTexture2; // TexNoise
const int samples = 64;
const float radius = 0.25;
const float bias = 0.025;
uniform mat4 projectionMatrix;
uniform float screenWidth;
uniform float screenHeight;
void main()
{
//tile noise texture over screen based on screen dimensions divided by noise size
vec2 noiseScale = vec2(screenWidth/4.0, screenHeight/4.0);
vec3 sample_sphere[64];
sample_sphere[0] = vec3(0.04977, -0.04471, 0.04996);
sample_sphere[1] = vec3(0.01457, 0.01653, 0.00224);
sample_sphere[2] = vec3(-0.04065, -0.01937, 0.03193);
sample_sphere[3] = vec3(0.01378, -0.09158, 0.04092);
sample_sphere[4] = vec3(0.05599, 0.05979, 0.05766);
sample_sphere[5] = vec3(0.09227, 0.04428, 0.01545);
sample_sphere[6] = vec3(-0.00204, -0.0544, 0.06674);
sample_sphere[7] = vec3(-0.00033, -0.00019, 0.00037);
sample_sphere[8] = vec3(0.05004, -0.04665, 0.02538);
sample_sphere[9] = vec3(0.03813, 0.0314, 0.03287);
sample_sphere[10] = vec3(-0.03188, 0.02046, 0.02251);
sample_sphere[11] = vec3(0.0557, -0.03697, 0.05449);
sample_sphere[12] = vec3(0.05737, -0.02254, 0.07554);
sample_sphere[13] = vec3(-0.01609, -0.00377, 0.05547);
sample_sphere[14] = vec3(-0.02503, -0.02483, 0.02495);
sample_sphere[15] = vec3(-0.03369, 0.02139, 0.0254);
sample_sphere[16] = vec3(-0.01753, 0.01439, 0.00535);
sample_sphere[17] = vec3(0.07336, 0.11205, 0.01101);
sample_sphere[18] = vec3(-0.04406, -0.09028, 0.08368);
sample_sphere[19] = vec3(-0.08328, -0.00168, 0.08499);
sample_sphere[20] = vec3(-0.01041, -0.03287, 0.01927);
sample_sphere[21] = vec3(0.00321, -0.00488, 0.00416);
sample_sphere[22] = vec3(-0.00738, -0.06583, 0.0674);
sample_sphere[23] = vec3(0.09414, -0.008, 0.14335);
sample_sphere[24] = vec3(0.07683, 0.12697, 0.107);
sample_sphere[25] = vec3(0.00039, 0.00045, 0.0003);
sample_sphere[26] = vec3(-0.10479, 0.06544, 0.10174);
sample_sphere[27] = vec3(-0.00445, -0.11964, 0.1619);
sample_sphere[28] = vec3(-0.07455, 0.03445, 0.22414);
sample_sphere[29] = vec3(-0.00276, 0.00308, 0.00292);
sample_sphere[30] = vec3(-0.10851, 0.14234, 0.16644);
sample_sphere[31] = vec3(0.04688, 0.10364, 0.05958);
sample_sphere[32] = vec3(0.13457, -0.02251, 0.13051);
sample_sphere[33] = vec3(-0.16449, -0.15564, 0.12454);
sample_sphere[34] = vec3(-0.18767, -0.20883, 0.05777);
sample_sphere[35] = vec3(-0.04372, 0.08693, 0.0748);
sample_sphere[36] = vec3(-0.00256, -0.002, 0.00407);
sample_sphere[37] = vec3(-0.0967, -0.18226, 0.29949);
sample_sphere[38] = vec3(-0.22577, 0.31606, 0.08916);
sample_sphere[39] = vec3(-0.02751, 0.28719, 0.31718);
sample_sphere[40] = vec3(0.20722, -0.27084, 0.11013);
sample_sphere[41] = vec3(0.0549, 0.10434, 0.32311);
sample_sphere[42] = vec3(-0.13086, 0.11929, 0.28022);
sample_sphere[43] = vec3(0.15404, -0.06537, 0.22984);
sample_sphere[44] = vec3(0.05294, -0.22787, 0.14848);
sample_sphere[45] = vec3(-0.18731, -0.04022, 0.01593);
sample_sphere[46] = vec3(0.14184, 0.04716, 0.13485);
sample_sphere[47] = vec3(-0.04427, 0.05562, 0.05586);
sample_sphere[48] = vec3(-0.02358, -0.08097, 0.21913);
sample_sphere[49] = vec3(-0.14215, 0.19807, 0.00519);
sample_sphere[50] = vec3(0.15865, 0.23046, 0.04372);
sample_sphere[51] = vec3(0.03004, 0.38183, 0.16383);
sample_sphere[52] = vec3(0.08301, -0.30966, 0.06741);
sample_sphere[53] = vec3(0.22695, -0.23535, 0.19367);
sample_sphere[54] = vec3(0.38129, 0.33204, 0.52949);
sample_sphere[55] = vec3(-0.55627, 0.29472, 0.3011);
sample_sphere[56] = vec3(0.42449, 0.00565, 0.11758);
sample_sphere[57] = vec3(0.3665, 0.00359, 0.0857);
sample_sphere[58] = vec3(0.32902, 0.0309, 0.1785);
sample_sphere[59] = vec3(-0.08294, 0.51285, 0.05656);
sample_sphere[60] = vec3(0.86736, -0.00273, 0.10014);
sample_sphere[61] = vec3(0.45574, -0.77201, 0.00384);
sample_sphere[62] = vec3(0.41729, -0.15485, 0.46251);
sample_sphere[63] = vec3 (-0.44272, -0.67928, 0.1865);
// get input for SSAO algorithm
vec3 fragPos = texture(MyTexture0, TexCoords).xyz;
vec3 normal = normalize(texture(MyTexture1, TexCoords).rgb);
vec3 randomVec = normalize(texture(MyTexture2, TexCoords * noiseScale).xyz);
// create TBN change-of-basis matrix: from tangent-space to view-space
vec3 tangent = normalize(randomVec - normal * dot(randomVec, normal));
vec3 bitangent = cross(normal, tangent);
mat3 TBN = mat3(tangent, bitangent, normal);
// iterate over the sample kernel and calculate occlusion factor
float occlusion = 0.0;
for(int i = 0; i < samples; ++i)
{
// get sample position
vec3 sample = TBN * sample_sphere[i]; // from tangent to view-space
sample = fragPos + sample * radius;
// project sample position (to sample texture) (to get position on screen/texture)
vec4 offset = vec4(sample, 1.0);
offset = projectionMatrix * offset; // from view to clip-space
offset.xyz /= offset.w; // perspective divide
offset.xyz = offset.xyz * 0.5 + 0.5; // transform to range 0.0 - 1.0
// get sample depth
float sampleDepth = texture(MyTexture0, offset.xy).z;
// range check & accumulate
float rangeCheck = smoothstep(0.0, 1.0, radius / abs(fragPos.z - sampleDepth));
occlusion += (sampleDepth >= sample.z + bias ? 1.0 : 0.0) * rangeCheck;
}
occlusion = 1.0 - (occlusion / samples);
FragColor = vec3(occlusion);
}
The SSAO algorithm samples around each fragment (Monte Carlo integration). At the borders of the view, the coordinates of some samples are out of bounds. This may cause the artifacts. If you are close to a wall, then the radius for the samples gets larger than the size of the projection to the viewport, and those samples are out of bounds too.
That makes a lot of sense. Do you know why I might be experiencing a more exaggerated effect than the tutorial, or what the remedy is?
The algorithm is a bit sensitive and strongly depends on your needs. I implemented bounds checks and played around with the distribution of the samples. Note that every check comes at a cost in performance.
As Rabbid76 suggested, the artifacts were caused by sampling outside of the screen borders. I added a check to prevent this and things are looking much better:
vec4 clipSpacePos = projectionMatrix * vec4(sample, 1.0); // from view to clip-space
vec3 ndcSpacePos = clipSpacePos.xyz / clipSpacePos.w;     // perspective divide
vec2 windowSpacePos = ((ndcSpacePos.xy + 1.0) / 2.0) * vec2(screenWidth, screenHeight);
if ((windowSpacePos.y > 0) && (windowSpacePos.y < screenHeight))
    if ((windowSpacePos.x > 0) && (windowSpacePos.x < screenWidth))
        // THEN APPLY AMBIENT OCCLUSION
It hasn't entirely fixed the issue though as areas close to the windows edge now appear lighter than they should because fewer samples are tested. Perhaps somebody can suggest an approach that moves the sample area to an appropriate location?
Thanks for posting your solution. I took another path and set the texture wrap mode for the G-buffers to GL_CLAMP_TO_BORDER and used a border color that disables SSAO.
Conditionals in shaders are not the best idea. Readers are advised to use clamp instead.
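For example, the branch-free variant hinted at in the last comment could look roughly like this (a sketch only, not tested against the code above):

```glsl
// Clamp the projected sample coordinates into the valid texture range
// instead of branching; combined with GL_CLAMP_TO_BORDER (or
// GL_CLAMP_TO_EDGE) on the G-buffer textures this avoids reading
// garbage depth values from outside the screen.
vec2 uv = clamp(offset.xy, vec2(0.0), vec2(1.0));
float sampleDepth = texture(MyTexture0, uv).z;
```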
---
Object-oriented design: which is better?
I have a Printer class that should print a number and text. The number never changes for a given Client class. I have several Client objects with different number values.
What design is better?
In sample1 the number is passed to the print() method as an argument, so all Client objects use a single Printer object. In sample2 the number is passed to the Printer constructor, so each Client object has its own Printer object.
Please help me figure it out.
What about having a PrintController that can be assigned a Client and a Printer? Separating the client from the printer seems like a good idea.
Then PrintController would have to contain the number? I think that is a redundant class. I want to know how to design the class methods better: pass the number to the print() method directly, or have the Printer contain the number?
Is the number a unique identifier of the client?
It is only an example. The number could be a unique identifier of the client. But only the print feature needs the number. Maybe the number will be used somewhere else.
If it is used somewhere else, in relation to the client, then I would make it an attribute of the client and give it a more meaningful name such as ClientPrinterNumber.
What is better, the sample1 that uses single Printer object, or sample2 that uses more Printer objects with the clientNumber?
Let us continue this discussion in chat.
What is the number? What does it represent and what entity does it belong to? Is it the client number, or a print job number, or anything else? It has to be clearly defined before we know the answer.
Solution 2 seems to fit your requirements better.
In the first solution you use a "generic" Printer, which knows nothing about Clients or their numbers, and therefore needs the number as a parameter. This seems logical because you probably have a physical "printer" in real life, and that does not depend on any Clients.
However, your object model must fit the requirements, not "real life". This is a bit confusing, because we sometimes call the requirements "real life". Regardless, your requirements clearly state that the Client wants to print some text, and for the client the "number" is static, i.e. fixed. So just make a mental change: the Printer is not a generic printer, but a Printer specifically there for the Clients.
With this mental model, the second solution clearly fits better.
I would recommend solution 1.
Because if the number is not a requirement of the printer, it shouldn't go there. If the number is specific to clients, they should store their individual values and pass them to the printer. This makes the printer reusable in other places, where the number might change.
What if you need to query the client's number? Solution 2 makes you ask the printer for a number specific to the client, whether it is unique or not. That's not good: you are violating separation of concerns. And besides that, it doesn't feel smooth, right? Solution 2 forces you to re-initialize the printer, or create a new instance, any time the number changes (and it will, as you stated).
The printer shouldn't care about the content it is printing. To make it more reusable you could make the Print() method accept an IPrintable with a method GetData(). Then the printer doesn't have to change the signature of Print() any time you add new content types, and you would also avoid too many arguments in the method's signature. New content types just implement IPrintable.
Now suppose you decide you need to print a number, a text and an additional date or timestamp. Then simply modify the IPrintable object, or create a new implementation, instead of modifying the Printer class itself. The IPrintable object could also be responsible for formatting the output; the Printer shouldn't care about formatting either, in order to stay generic. Otherwise small changes require you to implement a new printer.
A printer usually has a queue to allow concurrent use by the clients. That will be a lot more difficult to implement if you store this kind of information directly in the printer object, and the code will not look nice anymore. Better to keep associated data together, e.g. inside an IPrintable parameter.
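A minimal Java sketch of the IPrintable idea described above (the names ClientDocument and the formatting are hypothetical, chosen for illustration):

```java
// The printer only knows how to print something printable; it never
// sees client numbers or formatting rules directly.
interface IPrintable {
    String getData();
}

// Hypothetical content type: pairs a client number with some text and
// owns its own formatting.
class ClientDocument implements IPrintable {
    private final int clientNumber;
    private final String text;

    ClientDocument(int clientNumber, String text) {
        this.clientNumber = clientNumber;
        this.text = text;
    }

    @Override
    public String getData() {
        return clientNumber + ": " + text;
    }
}

class Printer {
    void print(IPrintable item) {
        System.out.println(item.getData());
    }
}
```

Adding a new content type (say, one that includes a timestamp) then means writing another IPrintable implementation; the Printer class itself never changes.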
---
Implementing normal user/pass, Twitter & Facebook auth
I have created a public facing website which allows you to login using a username/password, or with Twitter, or with Facebook.
When logging in with Twitter for the first time (for example), a user is created in my database with a nickname matching the Twitter screen name. I want this nickname to always be unique.
The problem is that in some cases a user with that nickname already exists, so the user can't be added. I am unsure as to the accepted approach for this problem, the only solution I can see so far is to ask the user to override their nickname, but this doesn't seem too elegant.
The reason the nickname needs to be unique is not a code issue, but an interface issue, for example there are forums and I want each user to be uniquely identified by their nickname.
Are there any other methods anyone can suggest for dealing with this problem?
Edit: At the request of some of the replies I will clarify an example:
Let's say I have a user named Joe Bloggs who is a member of my website. He is not a member of Twitter or Facebook. His nickname on my site is JoeBloggs.
Then, another Joe Bloggs comes along, and wants to sign in with his Twitter account. His Twitter name is JoeBloggs, so when he signs in with Twitter, my system attempts to automatically set his nickname to JoeBloggs. However, this nickname already exists. What is the normal or best practice in the cases where nicknames like this overlap? The only thing I can think of is to prompt the user to specify a different and unique nickname (just for display on my site).
The reason I ask is that this must be a common issue for sites which let you login via Facebook, Twitter, Yahoo - there must be an overlap in the names which are returned from said websites, so I wondered what the normal process is.
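One common pattern (an assumption on my part, not something the sites mentioned above are confirmed to use) is to auto-suggest a deduplicated nickname by appending a counter, and then let the user edit the suggestion before saving:

```python
def unique_nickname(desired: str, existing: set[str]) -> str:
    """Return `desired` if free, else the first of desired2, desired3, ... not taken."""
    if desired not in existing:
        return desired
    n = 2
    while f"{desired}{n}" in existing:
        n += 1
    return f"{desired}{n}"
```

So a second JoeBloggs signing in via Twitter would be offered "JoeBloggs2" as a starting point rather than a hard error.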
Could you use the Twitter API to confirm they really are the Twitter ID they say they are, and if no Twitter account, allow them other means to authenticate (Google+, Facebook, LinkedIn, old school username and password)? Allowing users to login with a Twitter ID that they don't own seems like asking for a world of hurt.
I don't think you're understanding my question. I have users already, a large database of users, who all have nicknames. I am thinking of allowing Twitter as an ADDITIONAL option for new users to sign up. When they sign up using Twitter, I want to automatically assign their nickname to their Twitter ID. The problem occurs if there is already a user who has that nickname. Nothing I have said would mean allowing users to login via Twitter if they don't own it.
I have a very similar situation -- a socially integrated site where users can login with FB, Twitter, LinkedIn, or old fashioned username/password and then later can connect one or more of those accounts. If you could post details as to what your users table looks like, I think I could be helpful.
Maybe you could change the database itself: add a field like 'nickOrigin', allow values like '[yourSite]', 'twitter', etc., and only allow new entries if no entry with the same nick AND nickOrigin exists. Execute a query to set existing users' nickOrigin to [yourSite], and things should be backwards compatible, or at least I imagine them that way :)
---
In an integral domain $R$, are principal ideals always equal to $R$?
If we define:
1) $R$ is a ring, $I$ is an ideal. We say that $I$ is prime if $I$ is not equal to $R$ and whenever $ab$ belongs to $I$ then either $a$ is in $I$ or $b$ is in $I$.
2) Let $R$ be a commutative ring and let $a$ be an element of $R$. The set $\langle a\rangle=\{ar: r \in R\}$ is an ideal, and any ideal of this form is called principal.
3) Let $R$ be an integral domain and let $a$ be a non-zero element of $R$. We say that $a$ is prime if $\langle a\rangle$ is a prime ideal.
But suppose $R$ is a finite integral domain. For any non-zero $p$, $pa = pb$ iff $a = b$. Can we conclude that $\langle p\rangle = R$ (and thus that $p$ can't be prime)?
To answer your last paragraph: yes, and I'm pretty sure that the linked question and all of its duplicates all mention that "$r\mapsto pr$ is a bijection on $R$ so that $pr=1$ for some $r$", and after that $pR=R$ of course.
You should prove that finite integral domains are fields. In this case, the only ideals are $(0)$ and the whole field, in which case all ideals are trivially prime and principal.
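For completeness, the standard argument behind this answer can be written out in a few lines:

```latex
Let $R$ be a finite integral domain and let $p \in R$ be non-zero.
The map $\varphi \colon R \to R$, $\varphi(r) = pr$, is injective by
cancellation ($pa = pb \implies a = b$). An injective map from a finite
set to itself is surjective, so $pr = 1$ for some $r \in R$. Hence
$1 \in \langle p \rangle$, so $\langle p \rangle = R$, and every
non-zero element of $R$ is a unit, i.e.\ $R$ is a field.
```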
If the integral domain is not finite, there are principal ideals which do not generate the whole ring. An example is $k[x]$ for any field $k$, with the ideal generated by $x$. On the other hand, a principal ideal is not necessarily prime either. An example is again $k[x]$, now with the ideal $(x^2)$. Here we have $x^2 = x \cdot x$ and $x \not\in (x^2)$, which shows the ideal is not prime.
thank you ! I think I got it
---
What Thunderbolt 3 docks are backwards compatible with legacy Thunderbolt 2 and Thunderbolt 1 Macs
This is the case where you are running an older Mac with a Thunderbolt 1 or 2 port and would like to get a dock to add some faster ports like USB 3, but are finding it hard to find an original dock matching the obsolete Thunderbolt port on your Mac, and you would like a newer dock that will work with your next computer too.
So far I have found only one product line that officially mentions backwards compatibility, and that is the CalDigit Thunderbolt 3 product line.
"The CalDigit Thunderbolt 3 devices, such as TS3 Plus, TS3, TS3 Lite, T4 Thunderbolt 3 RAID are backward compatible with legacy Thunderbolt 1 and 2 computers."
"An adapter from Apple is required to convert from Thunderbolt 3 to Thunderbolt 2 protocols. A Thunderbolt 2 cable is also required in this setup."
"This adapter will also enable your legacy Thunderbolt 1 and 2 computers to make use of many functions that Thunderbolt 3 has to offer including a dual extended monitor setup if it is supported by the computer’s GPU. The laptop charging feature however is not implemented in the legacy Thunderbolt computers."
Is my CalDigit Thunderbolt 3 device compatible with Apple Thunderbolt 1 and 2 computers?
There are quite a few affordable used Thunderbolt 3 docks out there; some mention that they are not backwards compatible, like these models from Kensington: SD5300T, SD5200T, SD2400T, saying:
Only compatible with laptops equipped with Thunderbolt 3 or Thunderbolt 4 ports; not backwards compatible with Thunderbolt 1 or Thunderbolt 2.
SD5200T Thunderbolt 3
Most don't mention it at all, so does that mean they should work? For example, today I contacted StarTech, and their support team claimed it was not possible to go from Thunderbolt 3 to Thunderbolt 2, saying that Thunderbolt is not backwards compatible that way, which is clearly not the case. So this looks like a question best answered by doing experiments and reporting what we find. What are we seeing working or not for Thunderbolt 3 docks on older Thunderbolt 2 and Thunderbolt 1 Macs?
I'm not sure we can crowd-source a comprehensive list, but we have been very impressed with how well Thunderbolt 2 to 3 allows all working devices to connect to newer Macs, and newer Macs to connect to older devices. This seems to be more about whether the drivers for the device are operational on the OS, and less about "compatibility" in the sense of Thunderbolt itself failing to negotiate the various speeds and connectors.
Ya, I will report back what I find. I picked up a used StarTech TB3 dock; it says macOS 10.13 is supported, so in theory it should work just fine. I'm just not sure; maybe there's a TB controller option that can disable backwards support, but it seems like a mystery black hole of what works or not, so I'm hoping folks can chime in on what is working for them.
So does this sound like opinion based? or a survey?
---
Makefile doesn't detect changes in source files
I am very new to makefiles and am facing a very basic problem: my Makefile doesn't detect changes I make to source files. When I first generate the consoleapp binary from my source file I get the expected output. But when I change the source file and run make again, it says
make: 'consoleapp' is up to date. So what changes do I have to make to the Makefile so that it detects my changes?
Below is my Makefile :
consoleapp:
	g++ consoleapp.cpp -o consoleapp

clean:
	rm -rf *.o consoleapp
This is my Source File :
#include <iostream>
using namespace std;
int main()
{
cout<<"I am ok \n"; // I am changing this line again after giving make
return 0;
}
try it with make clean;make consoleapp.
make clean will remove my binary consoleapp, and then I can see the updated changes. I want that without doing make clean, since in my real case there are multiple cpp files involved; doing make clean every time isn't the right choice.
Change consoleapp: to consoleapp: consoleapp.cpp
make relies on the makefile author to tell it what each target's prerequisites are -- that is, which other targets or files affect the construction of the target in question, so that if they are newer or themselves out of date then the target is out of date and should be rebuilt. As your other answer already indicates, you do not designate any prerequisites for your targets, so make considers them out of date if and only if they don't exist at all.
That's actually problematic for both targets, albeit in different ways. For target consoleapp, which represents an actual file that you want to build, the failure to specify any prerequisites yields the problem you ask about: make does not recognize that changes to the source file necessitate a rebuild. The easiest way to fix that would be to just add the source file name to the recipe's header line, after the colon:
consoleapp: consoleapp.cpp
g++ consoleapp.cpp -o consoleapp
Generally speaking, however, it is wise to minimize duplication in your makefile code, and to that end you can use some of make's automatic variables to avoid repeating target and prerequisite names in your rule's recipe. In particular, I recommend always using $@ to designate the rule's target inside its recipe:
consoleapp: consoleapp.cpp
g++ consoleapp.cpp -o $@
It's a bit more situational for prerequisites. In this case, all the prerequisites are source files to be compiled, and furthermore there is only one. If you are willing to rely on GNU extensions then in the recipe you might represent the sources via either $< (which represents the first prerequisite), or as $^ (which represents the whole prerequisite list, with any duplicates removed). For example,
consoleapp: consoleapp.cpp
g++ $^ -o $@
If you are not using GNU make, however, or if you want to support other people who don't, then you are stuck with some repetition here. You can still save yourself some effort, especially in the event of a change to the source list, by creating a make variable for the sources and duplicating that instead of duplicating the source list itself:
consoleapp_SRCS = consoleapp.cpp
consoleapp: $(consoleapp_SRCS)
g++ $(consoleapp_SRCS) -o $@
I mentioned earlier that there are problems with both of your rules. But what could be wrong with the clean rule, you may ask? It does not create a file named "clean", so its recipe will be run every time you execute make clean, just as you want, right? Not necessarily. Although that rule does not create a file named "clean", if such a file is created by some other means then suddenly your clean rule will stop working, as that file will be found already up to date with respect to its (empty) list of prerequisites.
POSIX standard make has no solution for that, but GNU make provides for it with the special target .PHONY. With GNU make, any targets designated as prerequisites of .PHONY are always considered out of date, and the filesystem is not even checked for them. This is exactly to support targets such as clean, which are used to designate actions to perform that do not produce persistent artifacts on the file system. Although that's a GNU extension, it is portable in the sense that it uses standard make syntax and the target's form is reserved for extensions, so a make that does not support .PHONY in the GNU sense is likely either to just ignore it or to treat it as an ordinary rule:
.PHONY: clean
clean:
rm -rf *.o consoleapp
Because your target has no prerequisites. Use the following, which makes the binary depend on the cpp files in the current directory, so it is rebuilt when they change:
SRCS=$(wildcard *.cpp)
consoleapp: $(SRCS)
g++ $< -o $@
That will fail if there is a file named e.g. bar.cpp
If that happens, please modify it to SRCS=consoleapp.cpp
---
upgrading from <IP_ADDRESS> to <IP_ADDRESS>
I am new to magento. If I use the Package upgrade available through magento connect downloader do I need to backup and overwite my custom files as in the upgrade roadmap when upgrading a full magento installation?
For modules such as Lib_Magento, Lib_Mage & Lib_Unserialize ,,,etc. I am in version <IP_ADDRESS> would like to upgrade to <IP_ADDRESS>.
If this is not the correct path please suggest the correct way.
Magento <IP_ADDRESS> updates the following files from <IP_ADDRESS>:
app/code/core/Mage/Adminhtml/Helper/Sales.php
app/code/core/Mage/Core/Model/Config.php
app/code/core/Mage/Sales/Model/Quote/Item.php
lib/Varien/File/Uploader.php
Without knowing the exact files you mean by "custom files": if any of these files are overridden in the app/code/core or lib/ directory, which is considered bad practice, you can copy them to app/code/local before upgrading.
e.g. "app/code/core/Mage/Adminhtml/Helper/Sales.php" would be copied to "app/code/local/Mage/Adminhtml/Helper/Sales.php".
See app/Mage.php and lib/Varien/Autoload.php files for how autoloading works in Magento.
You can also upgrade using the shell, provided you have shell access and know how to use it. I would also test this locally before applying it on a live site.
You can download and apply patch 7405 from www.magento.com/download for <IP_ADDRESS> to <IP_ADDRESS> by moving the patch file into your Magento root directory and then running, from the command line:
cd path/to/magento_root
sh patch_file_name.sh
Then delete the patch file.
I forgot to say: back up your site before running the patch, just as a precautionary measure.
If you have made changes to core files then you should follow the links below.
http://excellencemagentoblog.com/blog/2014/03/03/override-magento-core-files-simplest-way/
http://inchoo.net/magento/overriding-magento-blocks-models-helpers-and-controllers/
After that, please reindex and refresh the cache.
And check that all your functionality is working as per your requirements.
Then back up all your Magento files and the database.
Go to Connect Manager,
click on Check for Upgrades,
and you will see the available updates for Magento.
Then just click on Mage_All_Latest to apply the newly available upgrade,
and it will update all your Magento functionality.
Now some third-party extensions remain; you need to update them one by one.
No patches are necessary if you have installed Magento <IP_ADDRESS>.
Are any patches necessary if I have installed Magento <IP_ADDRESS>?
http://docs.magento.com/m1/ce/user_guide/magento/release-notes-ce-<IP_ADDRESS>.html
---
Modify catcode in the preamble or in the \AtBeginDocument?
I want to modify the catcode of comma, with the following code
\documentclass{article}
\AtBeginDocument{%
\catcode`\,=\active
\protected\def,{...}%
}
\begin{document}
a,b,c $a,b,c$.
\end{document}
I get the following error message:
! Missing control sequence inserted.
<inserted text>
\inaccessible
l.6 \begin{document}
But with the following code, I get correct result:
\documentclass{article}
\begingroup
\catcode`\,=\active
\protected\gdef,{...}%
\endgroup
\AtBeginDocument{%
\catcode`\,=\active
}
\begin{document}
a,b,c $a,b,c$.
\end{document}
I'm wondering what are the differences between these two ways. Why the first way causes problem?
Edit: Although David Carlisle gives another related question, I think this question may not be the same as that one, since both of my examples contain
\AtBeginDocument{%
\catcode`\,=\active
}
I could not understand why only one example causes problem.
As explained in the linked answer, this is the standard issue that catcode changes (like \verb) do not apply to tokens already read, so they do not work in the argument of another command.
Also similar to http://tex.stackexchange.com/q/201348/21930
@DavidCarlisle But I put \catcode`\,=\active inside \AtBeginDocument in both of the examples. I think it is not the same as your linked answer.
@Z.H. You need \catcode`\,=\active outside of \AtBeginDocument as well: the tokenization needs to be correct.
@Z.H. it is the same (and independent of anything specific about \AtBeginDocument); I'm not sure what difference you see?
@JosephWright I am still confused; do you mean the catcode change doesn't reset after the group?
@DavidCarlisle The linked question doesn't explain how to solve the issue: either this question should be reopened with an answer that does, or the linked one needs editing.
@JosephWright hmm perhaps this is a better dup http://tex.stackexchange.com/questions/12638/how-to-change-catcode-in-a-macro?rq=1 (or we could put an answer here, but it's such a FAQ it must be a dup of something:-)
@DavidCarlisle Oh, I understand it now. In fact, the statement in the linked question that you cannot change catcodes inside a command is wrong. The correct statement is that you cannot change catcodes and redefine the active character inside the same command.
@Z.H. it isn't that you cannot do the definition; it is that the catcode change does not affect the tokens in the argument: in \mbox{\catcode`\,=13 zzzz , zzzz} the , is a normal, non-active comma between the zzzz, whatever that code is doing, definition or use.
@DavidCarlisle Is there any way to make TeX do it anyway? Or would it involve a \special and a "fake" version of TeX with different handling of \def?
@1010011010 why? Why not just use the standard TeX syntax \catcode`\,=13 \mbox{ zzzz , zzzz}? Obviously, if you use some unspecified non-TeX program it could do whatever you specify, but it would have to be very different from TeX, so the question of making some TeX syntax work on that new system may not make sense at all.
@1010011010 You may use the \scantokens command.
It's not a good idea to make the comma active, by the way.
In the argument of \AtBeginDocument, the comma is not active. The comma becomes active when the argument with the category code setting is executed later in \begin{document}. Also keep in mind that category codes get assigned during tokenization. When an argument is read, the bytes of the input are tokenized with the current category code regime. When the argument is executed, the category change of the comma will affect later tokenizations, but not the tokens of the remainder of the argument.
Lowercase tilde trick
The typical pattern is to make use of the tilde character, because it is active in LaTeX by default. Then \lowercase is used to convert the tilde to the comma. The category code does not change in this process and the result is an active comma. A group prevents a permanent setting of the lowercase setting for the tilde.
\documentclass{article}
\AtBeginDocument{%
\catcode`\,=\active
\begingroup
\lccode`\~=`\, %
\lowercase{\endgroup
\protected\def~%
}{...}%
}
\begin{document}
a,b,c $a,b,c$.
\end{document}
Changing the category code for the whole statement
Another pattern is to change the category code for the whole statement including the argument. First the original category code of the comma is saved, e.g. via a resetting macro (\ResetCommaCatcode).
Then the category code of the comma is changed. Now \AtBeginDocument is called, the argument now contains an active comma token (in \protected\def,). Afterwards the category code of the comma is reset to the previous value.
\documentclass{article}
\edef\ResetCommaCatcode{%
\catcode`\noexpand\,=\the\catcode`\,\relax
}
\catcode`\,=\active
\AtBeginDocument{%
\catcode`\,=\active
\protected\def,{...}%
}
\ResetCommaCatcode
\begin{document}
a,b,c $a,b,c$.
\end{document}
Is there a definition of the words: Quotient, Fractions and Ratio?
I'm curious, do we have a definition of quotient, fraction and ratio?
The Wikipedia page is very unclear. To be honest I've used those words interchangeably to mean any expression of the form $$\frac{A}{B}, \ \ A:B, \ \ AB^{-1}$$ where $A$ and $B$ are whatever, integers, real numbers, polynomials etc.
I was wondering if I'm actually correct in using these terms interchangeably. There is this answer, but the people who have answered it don't seem to agree.
My take is that "quotient" is used when there is explicit division, while "ratio" is the only one (among the three words) that can relate more than two quantities. From MESE (no clear consensus in the answers either): Using terminology for the different concepts of rational number and How to explain the difference between the fraction a / b and the ratio a : b?.
@ryang Well, now that I've read all those articles and many more, there does not seem to be general agreement. However, what do you mean by "explicit division"?
I mean when the division operation is being discussed (or at least being emphasised).
Stat struct "Field 'st_size' could not be resolved"
I'm having problems with st_size of stat struct in a simple program for Raspberry Pi.
I write the code in the Eclipse C/C++ IDE (CDT) on a MacBook, and then compile it on the Raspberry Pi. I didn't have problems until I started using fstat and the stat structure.
I tested this code on the Raspberry Pi and it works fine.
#include <stdio.h>
#include <arm-linux-gnueabihf/sys/stat.h>
#include <unistd.h>
int main(void) {
FILE *pFile;
struct stat buf;
pFile = fopen("File.bin","rb");
if ( pFile != 0 ) {
fstat(fileno(pFile), &buf);
printf("File size: %lld\n", (long long)buf.st_size);
//printf("File size: %d\n", buf.stat);
fclose(pFile);
}
return 0;
}
The problem is that when I try to use buf.st_size, Eclipse reports Field 'st_size' could not be resolved. Instead, Eclipse autocompletes with '.stat', as in the commented line (buf.stat).
So far, I have added, in "C/C++ General -> Paths and Symbols -> Includes", the path where I copied the include files ("stat.h") obtained from the Raspberry Pi.
I can't make the "Field 'st_size' could not be resolved" problem go away.
Any suggestion would be appreciated!
Thank you for your time.
arm-linux-gnueabihf/sys/stat.h - that's a strange include. Just #include <sys/stat.h>. I write the code in Eclipse C/C++ IDE (CDT) with a Macbook, and then compile it in the Raspberry Pi - so you need to accept the fact that Eclipse does not know what the Raspberry Pi includes are and can't index them. You can copy them all from /usr/include to your PC and add the include path to Eclipse, but it feels crude.
...also, who throws what errors and when? Please be precise. Also, trim your code to the necessary [mcve]. Further readings: [tour] and [ask]. Please take the time to study those.
I don't know why I can not mark the question as answered. (The check mark doesn't appear below the down arrow).
Thanks to Kamil Cuk's answer, I was able to fix the Eclipse IDE problem. I changed #include <arm-linux-gnueabihf/sys/stat.h> to #include <sys/stat.h>.
Ulrich Eckhardt, sorry if I didn't express myself very well.
Who: the Eclipse IDE marks that error in the editor window where I wrote the code.
When: the error appeared when I wrote "buf.st_size".
I will keep your suggestions in mind.
You can answer your own question, and mark the answer as accepted. (Only answers can be marked as accepted, not questions.)
How to handle a project in git with the same code but different templates?
I have a project where the only difference are the templates.
The question is how should I manage this with git now?
At the moment I have 3 different git Repos, but it's hard to maintain all 3 of them.
What would be a good solution for this problem?
1 Repo for the Codebase and 3 for the different templates(submodules/subtree)?
Edit:
It's a PHP project; with templates I mean the GUI/frontend (HTML, CSS, JS, ...).
-app
-Resources
-Template
-js
-css
-html
-Images
-Translation
Can you specify what you mean by "template" in this context?
@ThomasStringer I added the needed information, hope it's clear now.
A couple of suggestions that might make this easier to handle by changing your project structure a bit.
First, you are right that managing this in three different repos is probably too much work. Generally, I have found that splitting the different software components of a project into separate folders in one repo is a good way to modularize your code and resources.
If you haven't thought about the concepts of Continuous Integration and Continuous Delivery then this is worth looking into.
Continuous Integration
This approach is about mainlining all of your active development threads into a single branch. Also playing into this is organizing the automation of packaging and versioning your code such that it can be deployed easily. This can be done through tooling or scripting. I am not sure what tooling is out there for this in PHP, but you should have the ability to package modules of PHP software components (or groups of pages), as well as static resources from a template of your choosing, in another folder entirely in your repo.
Continuous Delivery
This discipline and approach concerns using tooling or scripting to take a packaged build and deploy it quickly to one or many environments for testing or production.
So what does this mean for you? Well, if you were running on Apache, then typically your static resources would be in a separate physical file location that can be served from, and this would be mapped to a different context root than your dynamic PHP page content. The two could feasibly live in separate file locations, so they probably can, and should, be treated as separate, independently themed and versioned software components, each with their own packaging behavior and each with their own deployment behavior.
It makes sense to keep your templates in another folder and to be treated like different software modules in your Git repository.
The added benefit of this is with one-click deployments for each type of template build of your choosing, you get 1 step closer to the Joel Test.
Thx for this great answer :)
Matching a string between two sets of characters without using lookarounds
I've been working on some regex to try and match an entire string between two characters. I am trying to capture everything from "System", all the way down to "prod_rx." (I am looking to include both of these strings in my match). Below is the full text that I am working with:
\"alert_id\":\"123456\",\"severity\":\"medium\",\"summary\":\"System generated a Medium severity alert\\\\prod_rx.\",\"title\":\"123456-test_alert\",
The regex that I am using right now is...:
(?<=summary\\":\\").*?(?=\\")
This works perfectly when I am able to use lookarounds, such as in Regex101: https://regex101.com/r/jXltNZ/1. However, the regex parser in the software that my company uses does not support lookarounds (crazy, right?).
Anyway - my question is basically how can I match the above text described without using lookaheads/lookbehinds. Any help is VERY MUCH appreciated!!
Use summary\\":\\"(.*?)\\", see https://regex101.com/r/jXltNZ/2. Grab Group 1 value.
When you can't use lookarounds, put a capture group around the part that you want to match, and then extract that group from the result.
Are you still struggling with this or is the problem solved?
Well, we can simply use another non-lookaround method, such as this simple expression:
.+summary\\":\\"(.+)\\",
and our data is in this capturing group:
(.+)
our right boundary is:
\\",
and our left boundary is:
.+summary\\":\\"
Demo
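In case it helps, here is a minimal sketch of the capture-group approach in Python (the variable names and the shortened sample string are my own, not from the software mentioned in the question): the left and right boundaries are matched literally, and only group 1 is extracted.

```python
import re

# Shortened version of the escaped-JSON line from the question.
text = r'\"summary\":\"System generated a Medium severity alert\\\\prod_rx.\",\"title\":\"123456-test_alert\",'

# No lookarounds: match the boundaries literally and capture the wanted text.
pattern = r'summary\\":\\"(.*?)\\"'
match = re.search(pattern, text)
print(match.group(1))  # System generated a Medium severity alert\\\\prod_rx.
```

The same idea works in any engine without lookaround support: include the delimiters in the match, then read the captured group instead of the whole match.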
How can I make a common function/view for serving downloadable files in Django (Python)?
I am displaying a list of objects in an HTML table.
I have a download link in front of every row, and I want the link to download the linked file.
I have made this function
import os
from wsgiref.util import FileWrapper
from django.http import HttpResponse

def make_downloadable_link(mypath):
    # Prepare the response for downloading
    wrapper = FileWrapper(open(mypath, 'rb'))
    response = HttpResponse(wrapper, 'application/pdf')
    response['Content-Length'] = os.path.getsize(mypath)
    fname = mypath.split('/')[-1]
    response['Content-Disposition'] = 'attachment; filename=%s' % fname
    return response
This works fine if I use it with a hard-coded path in a view for a single file. But I want to make a generic view so that it works on all the files in the table.
I have the path of the file available in the object.path variable, but I am confused about how to pass the path to the download-file view, because I want to hide the actual path from the user.
I don't know what to write in the urls.py file for that download-file view.
What you would like to do is get the actual file path from the object. And as you have said, the file path is stored in object.path, which makes it easy.
For example:
urls.py
url(r'^download/(?P<object_id>\d+)/$', "yourapp.views.make_downloadable_link", name="downloadable")
In views.py:
import os
from wsgiref.util import FileWrapper
from django.http import HttpResponse

def make_downloadable_link(request, object_id):
    # get the object from object_id
    obj = ObjectModel.objects.get(id=object_id)
    mypath = obj.path
    # prepare to serve the file
    wrapper = FileWrapper(open(mypath, 'rb'))
    response = HttpResponse(wrapper, 'application/pdf')
    response['Content-Length'] = os.path.getsize(mypath)
    fname = mypath.split('/')[-1]
    response['Content-Disposition'] = 'attachment; filename=%s' % fname
    return response
I tried that, but Django says no URL is defined. Also, for this question: I want a file stored in some directory to be served as a download, not necessarily a FileField.
Hi, I have a problem with my code; I want to create a new list from a list
The elements of the list are integers and strings. I want to convert them all to integers, but the program seems to ignore the command int(i) and outputs a list with the same elements as the first list.
lista = [2,3,4,'5','6','7']
lista_2 = []
for i in lista:
    int(i)
    lista_2.append(i)
print(lista_2)
It doesn't ignore it, you do, and then just append the original value
Alternatively: lista_2 = list(map(int, lista)).
Did you run the code?
You are appending the original value. It needs to be stored in some variable or appended while converting to int.
Here is one approach: using a variable
lista = [2,3,4,'5','6','7']
lista_2 = []
for i in lista:
    converted = int(i)
    lista_2.append(converted)
print(lista_2)
Here is another approach: converting while appending
lista = [2,3,4,'5','6','7']
lista_2 = []
for i in lista:
    lista_2.append(int(i))
print(lista_2)
There are also other approaches like list comprehension:
lista = [2,3,4,'5','6','7']
lista_2 = [int(item) for item in lista]
print(lista_2)
or using map and passing int as a function to it:
lista = [2,3,4,'5','6','7']
lista_2 = list(map(int,lista))
print(lista_2)
int(n) does not modify n - it returns an integer after converting n in accordance with certain rules. See: this documentation
In your case, this is better implemented as a list comprehension as follows:
lista = [2,3,4,'5','6','7']
lista_2 = [int(n) for n in lista]
print(lista_2)
Output:
[2, 3, 4, 5, 6, 7]
Your mistake is:
int(i)
it should be:
i = int(i)
bibtex-style "angewandte" and double citations
I would like to use double citations with the BibTeX style angewandte (chemie). In the readme file this is explained:
For double citations in the style of Angewandte Chemie (German of international edition) the second literature citation can be given with the BibTeX fields twojournal, twovolume and twopages. For the year the field year is used!
However, I don't understand these instructions. Can somebody help me in this case?
Edit: I simply downloaded the bst file from https://chemieunser.wordpress.com/2012/05/02/bibtex-stil-fur-die-angewandte-chemie/ , put it in the folder of my tex file, and use it with the following commands (only the most important listed):
\usepackage{mciteplus}
\usepackage[hidelinks,bookmarksopen]{hyperref}
\bibliographystyle{angewandte}
\bibliography{master_references}
For those of us not already familiar with this particular bibliography style: What's the exact name of the style file, and is this file available on the CTAN?
For non-chemists: Angewandte Chemie is published in English and German. Historically, the two editions had different articles, then the same articles but different pages. They like us to cite both editions.
The style linked to comes with a demo file which shows the bibliography entry should be given in the form
@ARTICLE{Ache1989,
author = {H. J. Ache},
journal = {Angew. Chem.},
year = {1989},
volume = {101},
pages = {1-21},
timestamp = {2012.04.27},
twojournal = {Angew. Chem. Int. Ed.},
twopages = {1-20},
twovolume = {28}
}
where the two... entries are used for the second version. This leads to a minimal example something like
\RequirePackage{filecontents}
\begin{filecontents*}{\jobname.bib}
@ARTICLE{Ache1989,
author = {H. J. Ache},
journal = {Angew. Chem.},
year = {1989},
volume = {101},
pages = {1-21},
twojournal = {Angew. Chem. Int. Ed.},
twopages = {1-20},
twovolume = {28}
}
\end{filecontents*}
\documentclass{article}
\usepackage[sort&compress,numbers]{natbib}
\usepackage{mciteplus}
\begin{document}
\cite{Ache1989}
\bibliographystyle{angewandte}
\bibliography{\jobname}
\end{document}
There are two Angew. Chem. styles on CTAN, my own angew style, part of the rsc package and the ChemEurJ style, part of the chembst bundle. My style does not attempt anything 'clever' for Angewandte citations while ChemEurJ offers the germanpages concept:
\RequirePackage{filecontents}
\begin{filecontents*}{\jobname.bib}
@ARTICLE{Ache1989,
author = {H. J. Ache},
journal = {Angew. Chem. Int. Ed.},
year = {1989},
volume = {28},
pages = {1-20},
germanpages = {1-21},
}
\end{filecontents*}
\documentclass{article}
\usepackage[sort&compress,numbers]{natbib}
\usepackage{mciteplus}
\begin{document}
\cite{Ache1989}
\bibliographystyle{ChemEurJ}
\bibliography{\jobname}
\end{document}
Both of these rely on a fixed relationship between the two editions: they won't work if the paper happens to be in one year in English and a different year in German.
If you are willing to consider biblatex then the chem-angew style (part of the biblatex-chem bundle, again written by me) offers a way to link two independent database entries:
\RequirePackage{filecontents}
\begin{filecontents*}{\jobname.bib}
@ARTICLE{Ache1989,
author = {H. J. Ache},
journal = {Angew. Chem},
year = {1989},
volume = {101},
pages = {1-21},
related = {Ache1989a},
relatedtype = {translatedas},
}
@ARTICLE{Ache1989a,
author = {H. J. Ache},
journal = {Angew. Chem. Int. Ed.},
year = {1989},
volume = {28},
pages = {1-20},
}
\end{filecontents*}
\documentclass{article}
\usepackage[backend=biber,style=chem-angew]{biblatex}
\addbibresource{\jobname.bib}
\begin{document}
\cite{Ache1989}
\printbibliography
\end{document}
This has the advantage that each entry is therefore usable on its own in other styles. (I provide a bundle of biblatex styles but have only added this linking to the chem-angew style, as other publishers do not do this routinely.)
histogram for every variable in pandas dataframe grouped by 1 specific variable
I am trying to create a histogram plot for every column in my pandas dataframe, but I want each of the separate plots to have separate color coding for each of the 2 values in 1 of the columns.
I can successfully create a histogram, ignoring the grouping by this field, by doing:
pd.set_option('display.float_format', str)
random_sample_join.hist(bins=5, figsize=(20, 20), rwidth=5)
I tried to do it based on a grouping by adding:
pd.set_option('display.float_format', str)
random_sample_join.hist(bins=5, figsize=(20, 20), rwidth=5, by = 'x')
but I get a really weird graph for the latter, where nothing is legible. How can I do this?
How to create a Minimal, Reproducible Example
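Since no answer was posted in this thread, here is one hedged sketch (column names and data are invented for illustration; `df` stands in for `random_sample_join` and `"x"` for the grouping column) of drawing overlaid per-group histograms manually with matplotlib: one subplot per numeric column, one color per value of the grouping column.

```python
import matplotlib
matplotlib.use("Agg")  # render without a display
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

# Invented stand-in for the real dataframe.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "a": rng.normal(size=200),
    "b": rng.normal(size=200),
    "x": rng.choice(["yes", "no"], size=200),  # the 2-valued grouping column
})

value_cols = [c for c in df.columns if c != "x"]
fig, axes = plt.subplots(1, len(value_cols), figsize=(10, 4))
for ax, col in zip(np.atleast_1d(axes), value_cols):
    # Overlay one semi-transparent histogram per group value.
    for label, grp in df.groupby("x"):
        ax.hist(grp[col], bins=5, alpha=0.5, label=str(label))
    ax.set_title(col)
    ax.legend()
fig.savefig("hist_by_group.png")
```

Unlike `hist(by='x')`, which makes a separate panel per group value, this keeps one panel per column and distinguishes the groups by color within it.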
Why are two threads accessing the same data while reading from the database?
Here is the code I tried, but multiple threads are accessing the same data in the database:
MY XML
<?xml version="1.0" encoding="UTF-8"?>
<beans
xmlns="http://www.springframework.org/schema/beans"
xmlns:batch="http://www.springframework.org/schema/batch"
xmlns:jdbc="http://www.springframework.org/schema/jdbc"
xsi:schemaLocation="
http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
http://www.springframework.org/schema/batch http://www.springframework.org/schema/batch/spring-batch.xsd
http://www.springframework.org/schema/jdbc http://www.springframework.org/schema/jdbc/spring-jdbc.xsd"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<bean id="JobLauncher" class="org.springframework.batch.core.launch.support.SimpleJobLauncher">
<property name="jobRepository" ref="jobRepository" />
</bean>
<batch:job id="firstjob">
<batch:step id="masterStep">
<batch:partition step="step1" partitioner="rangePartitioner">
<batch:handler grid-size="1" task-executor="taskExecutor" />
</batch:partition>
</batch:step>
</batch:job>
<batch:step id="step1" xmlns="http://www.springframework.org/schema/batch">
<batch:tasklet >
<batch:chunk reader="itemReader" writer="ItemWriter" commit-interval="1" >
</batch:chunk>
</batch:tasklet>
</batch:step>
<bean id="rangePartitioner" class="com.spring.itemreader.Partioner" />
<bean id="taskExecutor" class="org.springframework.core.task.SimpleAsyncTaskExecutor" />
<bean id="jobRepository" class="org.springframework.batch.core.repository.support.JobRepositoryFactoryBean">
<property name="dataSource" ref="dataSource" />
<property name="transactionManager" ref="transactionManager" />
<property name="databaseType" value="mysql" />
</bean>
<bean id="transactionManager" class="org.springframework.batch.support.transaction.ResourcelessTransactionManager" />
<bean id="dataSource"
class="org.springframework.jdbc.datasource.DriverManagerDataSource">
<property name="driverClassName" value="com.mysql.jdbc.Driver" />
<property name="url" value="jdbc:mysql://localhost:3306/test" />
<property name="username" value="root" />
<property name="password" value="" />
</bean>
<jdbc:initialize-database data-source="dataSource">
<jdbc:script location="org/springframework/batch/core/schema-drop-mysql.sql" />
<jdbc:script location="org/springframework/batch/core/schema-mysql.sql" />
</jdbc:initialize-database>
<bean id="itemReader" class="org.springframework.batch.item.database.JdbcCursorItemReader" scope="step">
<property name="dataSource" ref="dataSource" />
<property name="sql" value="select id from reader where id &gt;= ? and id &lt;= ? " />
<property name="preparedStatementSetter" ref="readersetter"/>
<property name="rowMapper" ref="rowmapprer"/>
</bean>
<bean id="rowmapprer" class="com.spring.itemreader.rowmapprer">
<property name="data" ref="data"/>
</bean>
<bean id="data" class="com.spring.itemreader.data" />
<bean id="ItemWriter" class="org.springframework.batch.item.database.JdbcBatchItemWriter">
<property name="assertUpdates" value="false" />
<property name="itemPreparedStatementSetter">
<bean class="com.spring.itemwriter.setter" />
</property>
<property name="sql" value="INSERT INTO writer VALUES(?)" />
<property name="dataSource" ref="dataSource" />
</bean>
<bean id="readersetter" class="com.spring.itemreader.readersetter" scope="step">
<property name="fid" value="#{stepExecutionContext[fromId]}"/>
<property name="tid" value="#{stepExecutionContext[toId]}"/>
</bean>
</beans>
and my partitioner class is:
public class Partioner implements Partitioner{
@Override
public Map<String, ExecutionContext> partition(int gridSize) {
//map for storing:
Map<String, ExecutionContext> result
= new HashMap<String, ExecutionContext>();
int range = 100;
int fromId = 1;
int toId = range;
ExecutionContext value=null;
// Using a for loop, give each partition an id range and an ExecutionContext
for (int i = 1; i <= gridSize; i++) {
value = new ExecutionContext();
System.out.println("\nStarting : Thread" + i);
System.out.println("fromId : " + fromId);
System.out.println("toId : " + toId);
value.putInt("fromId", fromId);
value.putInt("toId", toId);
// give each thread a name, thread 1,2,3
value.putString("name", "Thread" + i);
result.put("partition" + i, value);
fromId = toId + 1;
toId += range;
}
return result;
}
}
and my reader setter
public class readersetter implements PreparedStatementSetter {
private int fid;
private int tid;
public int getFid() {
return fid;
}
public void setFid(int fid) {
this.fid = fid;
}
public int getTid() {
return tid;
}
public void setTid(int tid) {
this.tid = tid;
}
@Override
public void setValues(PreparedStatement se) throws SQLException {
// TODO Auto-generated method stub
se.setInt(1,fid);
se.setInt(2,tid);
}
}
The above works fine for grid size 1, but it's not working when I increase the size. Please help with this issue.
When I increase my grid size, this is what I am facing:
Starting : Thread1
start : 1
end : 110
Starting : Thread2
start : 111
end : 220
Starting : Thread3
start : 221
end : 330
Starting : Thread4
start : 331
end : 440
Starting : Thread5
start : 441
end : 550
Error PreparedStatementCallback; SQL [INSERT INTO writer VALUES(?)]; Duplicate entry '331' for key 'PRIMARY'; nested exception is java.sql.BatchUpdateException: Duplicate entry '331' for key 'PRIMARY'
Like that, every time I am getting the same error; the other threads' ranges get changed while accessing the DB...
Sysout of Rowmapper and WriterSetter
Starting : Thread1
fromId : 1
toId :
220
Starting : Thread2
fromId : 221
toId :
440
row mapper221
row mapper1
writer1
writer1
row mapper2
writer2
row mapper3
Did you look at the Spring Batch sample projects? They should contain an example about partitioning.
@LucaBassoRicci, if you have any reference, please share it.
http://stackoverflow.com/questions/23447498/sample-spring-batch-project-download
Have you looked in the db to confirm that each Step ExecutionContext has a unique range of ids?
@MichaelMinella, thanks for the reply. In my DB the range is different: when I put a sysout in the partitioner, it shows, say, 1 to 250, but in the DB it writes data from 2 to 251 (it's skipping my partition range). I don't know how this range is changing.
Can you add the code for your readersetter bean?
@Michael Minella ,Added code for readersetter bean and added my whole Xml content....
I'm not seeing anything obviously wrong here. My only other suggestion would be to put a System.out in the reader setter to verify that the values are being passed to the partitions correctly (your current System.out's just verify that the partitions are being created correctly).
@MichaelMinella, I checked in the reader setter as well; it shows the same range values. But when I put a sysout in the writer setter, it returns multiple values in the setter (i.e., it repeats the value 1 two times); because of that I get duplicate values while inserting.
@MichaelMinella, I just added a sysout for my RowMapper and writer setter; the same values are written twice (result added above). Thank you......:)
I have the exact same problem using local partitioning and JpaItemWriter. The partition ranges are correct with unique IDs but the writers sooner or later throw unique-constraint exception. Only when grid-size=1 it works.
Update for my case: finally, it was an issue with my processor. The reader was correctly processing one item at a time, but my processor was performing a JPQL query on some other table in order to produce items for the writer. In that JPQL query I had neglected the "distinct" keyword, so I was getting duplicate root entities. This was the real root cause of my problem.
I think the problem is in your partitioning logic. Replace your for loop with the following:
for (int i = 1; i <= gridSize; i++) {
toId = fromId + blocksize;
ExecutionContext value = new ExecutionContext();
if (fromId > totalRecords) {
break;
}
if (toId > totalRecords) {
toId = totalRecords;
System.out.println("\nStarting : Thread" + i);
System.out.println("fromId : " + fromId);
System.out.println("toId : " + toId);
value.putDouble("fromId", fromId);
value.putDouble("toId", toId);
// give each thread a name, thread 1,2,3
value.putString("name", "Thread" + i);
result.put("partition" + i, value);
break;
} else {
System.out.println("\nStarting : Thread" + i);
System.out.println("fromId : " + fromId);
System.out.println("toId : " + toId);
value.putDouble("fromId", fromId);
value.putDouble("toId", toId);
value.putString("name", "Thread" + i);
result.put("partition" + i, value);
fromId = toId + 1;
}
}
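As a side note, the range arithmetic itself is easy to check in isolation. Here is a small Python sketch (purely illustrative, not part of the Spring Batch job; the function name and block size are my own) of the same partitioning scheme, with a check that the resulting id ranges are disjoint:

```python
def make_partitions(grid_size, block_size):
    """Return one (from_id, to_id) range per partition, mirroring the Java loop."""
    partitions = []
    from_id = 1
    for _ in range(grid_size):
        to_id = from_id + block_size - 1
        partitions.append((from_id, to_id))
        from_id = to_id + 1
    return partitions

ranges = make_partitions(5, 110)
print(ranges)  # [(1, 110), (111, 220), (221, 330), (331, 440), (441, 550)]

# Adjacent ranges must be disjoint: each starts right after the previous ends.
for (_, prev_to), (next_from, _) in zip(ranges, ranges[1:]):
    assert next_from == prev_to + 1
```

If the partition ranges check out like this (as they do in the logged output above), the duplicate-key failures point at the reader/writer side rather than the partitioner.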
@DanglingPiyush, thanks for the reply, but my issue is that it sometimes works fine (1 out of 10) when I increase the grid size, but mostly it ends up failing. Your partitioner code is fine; in the sysout it displays the correct partitions, but while accessing the DB the threads try to get the same data and end up failing.
And I synchronized the reader as well. If you know any solution, please help :)
Post the code of your reader showing how you are accessing the fromId and toId attributes from the ExecutionContext.
SQL Server - CDC - Log-Scan Process failed to construct a replicated command from log sequence number (LSN)
CDC is enabled on the database and multiple tables are included for CDC. We have been receiving the below alert intermittently, about once a month, which causes changes not to be written to the CT tables. We have no option other than to reset using the following:
EXEC sp_repldone @xactid = NULL, @xact_segno = NULL, @numtrans = 0, @time = 0, @reset = 1; EXEC sp_replflush;
Error :-
'The Log-Scan Process failed to construct a replicated command from log sequence number (LSN) {000af057:003bd8ef:0002}
Log Scan process failed in processing log records. Refer to previous errors in the current session to identify the cause and correct any associated problems.
The Log-Scan Process failed to construct a replicated command from log sequence number (LSN) {000af057:003bd8ef:0002}. Back up the publication database and contact Customer Support Services.'
Let us know if you are facing this in your environment. What could be a proactive fix for this? The reactive fix causes data loss for the subscriber (Debezium) consuming the data, and we end up backfilling the data.
The current version we are running on:
Microsoft SQL Server 2016 (SP3-OD) (KB5006943) - 13.0.6404.1 (X64)
Load website with v-navigation-drawer closed
Is it possible to load the website with the Navigation Drawer closed, and open it only after a click, like a mobile menu?
I am using Vuetify:
<template>
<v-app toolbar--fixed toolbar footer>
<v-navigation-drawer
temporary
v-model="sideNav"
enable-resize-watcher
disable-route-watcher
right
dark
absolute>
<v-list dense>
<v-list-tile
v-for="item in menuItems"
:key="item.title"
router
:to="item.link">
<v-list-tile-action>
<v-icon>{{ item.icon }}</v-icon>
</v-list-tile-action>
<v-list-tile-content class="sidemenu-item">{{ item.title }}</v-list-tile-content>
</v-list-tile>
</v-list>
</v-navigation-drawer>
<v-toolbar dark class="blue-grey darken-4">
<v-toolbar-title>
<router-link to="/" tag="span" style="cursor: pointer">
<img class="logo" src="static/images/main_logo.png" alt="">
</router-link>
</v-toolbar-title>
<v-spacer></v-spacer>
<v-toolbar-side-icon
@click.stop="sideNav = !sideNav"></v-toolbar-side-icon>
</v-toolbar>
<main>
<router-view></router-view>
</main>
<v-footer class="blue-grey darken-4 main-footer">
<span class="white--text main-footer">© {{ new Date().getFullYear() }}</span>
</v-footer>
</v-app>
</template>
<script>
export default {
data () {
return {
sideNav: true,
menuItems: [
{ icon: 'home', title: 'Home', link: '/' },
{ icon: 'fast_forward', title: 'Sign Up', link: '/signup' },
{ icon: 'business', title: 'About', link: '/About' },
{ icon: 'mail', title: 'Contact', link: '/contact' }
]
}
}
}
</script>
Now, when the application is loaded, the drawer appears open on big screens and closed on small screens. I'd like this menu to behave the same on small and big screens: always closed, opening only when the user clicks the hamburger menu.
There is a way. You could simply set the drawer model to false, like drawer: false, to disable it initially. But of course you then need a way to activate it. See the code below.
<template>
<v-app>
<v-navigation-drawer v-model="drawer" fixed app >
...
</v-navigation-drawer>
<v-toolbar fixed app :clipped-left="clipped" dark color="primary">
<v-toolbar-side-icon @click="drawer = !drawer"></v-toolbar-side-icon>
</v-toolbar>
</v-app>
</template>
<script>
export default {
data () {
return {
drawer: false
}
}
}
</script>
Another way is to add the stateless property. Combine it with the hide-overlay property so you can still use the drawer on mobile.
I think you need mobile-break-point property:
mobile-break-point="10240"
Works perfectly. Thank you!
Can you explain me why do you choose "10240"?
@JavierCárdenas , it was 10 times 1024. Any value big enough can be used.
Change the prop 'enable-resize-watcher' to 'disable-resize-watcher'.
Also, if you have not used that prop, add 'disable-resize-watcher' to stop the navigation drawer from opening on large viewports.
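Putting that together, a sketch of the question's drawer markup with the prop swapped and the model started closed (only the relevant parts are shown; sideNav would be initialized to false in data()):

```html
<v-navigation-drawer
  temporary
  v-model="sideNav"
  disable-resize-watcher
  disable-route-watcher
  right
  dark
  absolute>
  <!-- list items as in the question -->
</v-navigation-drawer>
```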
Siunitx: Suppressing decimal alignment for an integer row in a table
How can I align the figures in the column of the following MWE?
The figure in the N row is always an integer, so I don't want that to be aligned on the decimal place like the coefficients above: it's too far to the right. Is there any alternative to placing the N figures in curly brackets so siunitx will disregard them?
Also, the minus sign is much longer than usual without siunitx. Is there an option I can specify so that the minus sign is the 'normal' length?
\documentclass{article}
\usepackage{siunitx}
\usepackage{longtable}
\usepackage{booktabs}
\def\sym#1{\ifmmode^{#1}\else\(^{#1}\)\fi}%Need for STATA tables
\def\tmp{
\parbox{\linewidth-1ex}{
\footnotesize
\lipsum[3-4]
\vspace{1ex}
}
}%Need for STATA tables
\begin{document}
\begin{longtable}{@{\extracolsep{\fill}}
@{}l
S[table-format = 3.3,detect-mode,
group-digits = false,
input-symbols = ,
input-open-uncertainty = ,
input-close-uncertainty = ,
table-align-text-pre = false,
table-align-text-post = false,
round-precision = 3,
table-space-text-pre = (,
]
}
\toprule
text & 0.900\sym{***}\\
& (-35.22) \\
N & 512 \\
\bottomrule
\bottomrule
\end{longtable}
\end{document}
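For reference, the brace workaround mentioned in the question looks like this: a cell wrapped in braces is treated as text by siunitx and centred in the column instead of being aligned on the decimal marker.

```latex
text & 0.900\sym{***} \\
     & (-35.22)       \\
N    & {512}          \\
```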
Could you clarify the point about the minus sign? I suspect you are used to using a hyphen for a minus, but in proper typography the two are distinct (compare for example -1 and $-1$ when typeset: the latter is correct).
`react-hover` npm package not in production build but working fine in development
I want to make use of the npm package react-hover (here's the GitHub link) in my project. When I develop locally with the dev build it works just fine, but when I run the npm run build command and serve the production version, the components I want to show in the <ReactHover> object don't render on the page.
I started the project using create-react-app, and I serve the production version locally using the npm package serve.
I will share all the files of my project since it's nothing spectacular, here are all files on a public repo.
I haven't really been able to find this issue anywhere else so any insight would be appreciated.
You can simply use CSS to do that.
Define two divs, one with id="hover-selector-parent" and another with id="hover-selector-child".
Then configure your CSS file to handle the hover event:
#hover-selector-parent:hover~#hover-selector-child{
display:block;
}
#hover-selector-child{
display: none;
}
How to exclude Facebook Pixel and Google Analytics from firing on one Wordpress page
I'm trying to setup an email opt in page that can load as fast as possible.
At the moment, it takes a second to load my Facebook Pixel and Google analytics which make the page a bit slower.
I need the tracking on all other pages, but I'd rather have this one page load as fast as possible without tracking.
What would be the best way for me to do this? Can I exclude all tracking from just this one page somehow, without having to remove it from the rest of the site?
Maybe this?
Wrap your pixel & analytics code in a test for that particular page using is_page(), for example:
if ( ! is_page( 'email-opt-in' ) ) {
echo "<!-- code here -->";
}
is_page() can accept the page ID or the page slug. See Codex page.
I'm using a theme that has me install the pixel and analytics across the entire site in a designated area.
Do you mean I can take this, place it inside that theme area, and so long as I have the pixel and analytics within this if statement, they will not load on the page?
Thanks for this!
No, you can't do that. You would need to not use those theme fields and instead add all of the code that Facebook and Google supply to your theme's header. This would go in the header.php file, but you shouldn't ever edit the theme's copy of that file, rather a copy of it installed in a Child theme. Or you could create your own plugin to hook into the wp_head() function.
This sounds like what I need but its a little over my head. Any chance you could point me in the right direction as to how I can create a plugin or child theme like this?
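As a pointer in that direction, a minimal plugin can be a single PHP file dropped into wp-content/plugins/ that hooks the wp_head action. This is only a hypothetical sketch: the 'email-opt-in' slug is a placeholder, and the HTML comments stand in for the snippets Facebook and Google give you.

```php
<?php
/*
Plugin Name: Conditional Tracking (example)
Description: Prints tracking snippets on every page except the opt-in page.
*/
add_action( 'wp_head', function () {
    // Skip tracking only on the opt-in page ('email-opt-in' is a placeholder slug).
    if ( is_page( 'email-opt-in' ) ) {
        return;
    }
    ?>
    <!-- paste the Facebook Pixel snippet here -->
    <!-- paste the Google Analytics snippet here -->
    <?php
} );
```

If you go this route, leave the theme's designated tracking fields empty so the snippets are only printed from this one place.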
Arm Roast after 5 hours still tough
When I make an arm or chuck roast, after 5 hours it is always still tough; the beef is grass-fed. I had given up on making them, but my wife wants me to make another arm roast, and I want it to come out mouth-watering and not tough. My wife has ALS, so it really needs to be very tender for her. I would like to make it in a Dutch oven in the oven, set at 250F, but how long for a 3 pound roast?
I would like it to be very tasty with a sauce or gravy.
Are you roasting this dry or with moisture? It will come out better if you braise it.
If it's too tough, keep cooking it. Some people use "tender" to describe a pot roast that is tender like a good steak; others want it to fall apart with no knife required. I regularly cook pot roast 8-12 hours. As long as you have it covered for most of the time (like in a crock pot, or in the oven covered with foil), it will keep getting more tender until it falls apart under its own weight (which sounds like it may be what you want).
Multiple python versions under apache+mod_wsgi
I have several virtual hosts configured under the same Apache instance on Red Hat:
apache-2.2.15
mod_wsgi-3.5 compiled with default system python-2.6
For every virtual host WSGIScriptAlias setting is pointed to the python file where the virtual environment is activated:
activate_this = '/path_to_the_virtualenv/bin/activate_this.py'
execfile(activate_this, dict(__file__=activate_this))
Now, I'm planning to upgrade one of the projects to python-2.7, another one to python-3.x. I know that I can have different virtual environments, separate python sandboxes. So, everything is good on the python side.
The question is: is it possible to use different python versions for different apache virtual hosts under the same apache+mod_wsgi instance?
If not, what would be the best option to proceed?
There is a relevant WSGIPythonHome setting, but it is defined globally in the "server config" context, not per virtual host. Plus, mod_wsgi is compiled for a specific Python version, so I'm not sure it can handle this case.
No it isn't possible. The mod_wsgi binary has to be compiled against one Python version only and only one instance of a compiled mod_wsgi module can be loaded into Apache at a time.
What you would need to do is setup Apache to proxy to a separate WSGI server listening on its own ports.
To use Apache/mod_wsgi as that backend server as well, you would want to investigate using mod_wsgi 4.1.X. See:
https://pypi.python.org/pypi/mod_wsgi
This newer version of mod_wsgi provides a way of installing mod_wsgi against multiple Python versions and running up an Apache instance for each using a provided script. The script takes over all the setup of the Apache configuration so you do not need to worry about it.
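As a rough sketch of that setup (the path, project layout and port below are placeholders): install mod_wsgi into each project's virtualenv so each backend runs its own Python version, start it with the bundled mod_wsgi-express script, and have the front-end Apache proxy that virtual host to it.

```shell
# Inside the project's virtualenv (its own Python version):
pip install mod_wsgi
mod_wsgi-express start-server /path_to_project/wsgi.py --port 8001

# The front-end Apache virtual host then simply forwards requests, e.g.:
#   ProxyPass        / http://localhost:8001/
#   ProxyPassReverse / http://localhost:8001/
```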
Glad to see you on StackExchange again! Thank you very much, this is exactly what I wanted to know.
Why is that? Is this a limitation of the mod_swgi implementation, or of Apache?
It is an operating system limitation. You can't load two different versions of the Python library into the one process.
JCheckBox always returning false in Swing Java
I am using Java and Swing. All I want to do is test whether a JCheckBox is selected and, if so, add its text to a list of Strings. The problem is that the isSelected function is always returning false, even if the checkbox is selected.
Here is the code I wrote:
List<JCheckBox> checkBoxes = new ArrayList<JCheckBox>();
List<String> infos = new ArrayList<String>();
String sql = "select NAME from drugs ";
pre=con.prepareStatement(sql);
res=pre.executeQuery();
while(res.next()){
checkBoxes.add(new JCheckBox(res.getString("NAME")));
panel.add(new JCheckBox(res.getString("NAME")));
};
for (JCheckBox checkBox : checkBoxes) {
if (checkBox.isSelected()) {
infos.add(checkBox.getText());
}
}
You're asking why a code snippet is not working, which is very difficult to answer. It is best to create a [mre], one that is free of the database code and that shows the problem for us. Please read the link.
Is the key code, where you test the state of the check boxes, being called from within a listener such as an ActionListener or listener to some other event? Or is it called on code creation (before the user interacts with the GUI)?
If you’re reading the state of a JCheckBox that you just created, then yes, it will always be unselected, because a JCheckBox created with the String-only constructor is always unselected.
Hello, thank you for your answers, as you can see I got the answer below from CausingUnderflowsEverywhere and the point is that the checkbox I add to my checkBoxes data structure and the checkbox I add to the panel are two different check boxes.
Code in Java runs through once unless you write it in a loop. You're checking whether the checkboxes are selected immediately after they are created in the panel. The code checks your newly added check boxes (which of course no one has clicked yet) and then finishes. They are never checked again.
The solution will be to move this selection check into an event handler. But before we get there, you have a second error in your code.
while(res.next()){
checkBoxes.add(new JCheckBox(res.getString("NAME")));
panel.add(new JCheckBox(res.getString("NAME")));
};
The checkbox you add to your checkBoxes data structure and the checkbox you add to the panel are two different check boxes. Each time you use the new keyword in Java, you create a new independent object. In your case what you really need is to create 1 new checkbox, and put it in the panel, and also store it in your data structure.
The solution:
while(res.next()){
JCheckBox checkBox = new JCheckBox(res.getString("NAME"));
checkBoxes.add(checkBox);
panel.add(checkBox);
};
Now we can continue to create an event handler. An event handler will react to someone clicking the check box and run the code that checks the state of the check box to apply any changes. An event handler example to suit your needs can be coded as follows:
checkBox.addItemListener(new ItemListener() {
@Override
public void itemStateChanged(ItemEvent e) {
if (e.getStateChange() == ItemEvent.SELECTED) {
infos.add(checkBox.getText());
}
if (e.getStateChange() == ItemEvent.DESELECTED) {
infos.remove(checkBox.getText());
}
}
});
Now when we join the code with all the fixes we get:
while(res.next()){
JCheckBox checkBox = new JCheckBox(res.getString("NAME"));
checkBoxes.add(checkBox);
panel.add(checkBox);
checkBox.addItemListener(new ItemListener() {
@Override
public void itemStateChanged(ItemEvent e) {
if (e.getStateChange() == ItemEvent.SELECTED) {
infos.add(checkBox.getText());
}
if (e.getStateChange() == ItemEvent.DESELECTED) {
infos.remove(checkBox.getText());
}
}
});
}
I changed the code exactly as you wrote above and it is working fine, now I see the point and understand the difference, thank you.
MPJPEG Expected boundary '--' not found
I get the following error when streaming from a video camera.
"[mpjpeg @ 00555000] Expected boundary '--' not found, instead found a line of n bytes"
When debugging, the above error is written to the console multiple times a second. As a result, I can only get a frame every several seconds, which prevents me from actually streaming from the camera. In release mode, the problem is not there.
I'd like to solve the problem the clean way, by letting ffmpeg know that the stream is not mpjpeg but an mjpeg one.
I read about forcing "-f mjpeg" in ffmpeg.exe, but I'm not actually using ffmpeg.exe: I'm using its libraries directly.
So how do i set those parameters?
Use av_find_input_format, like ffmpeg does:
// Force the mjpeg demuxer instead of letting libavformat probe the stream:
AVInputFormat *iformat = av_find_input_format("mjpeg");
avformat_open_input(&format_context, ip_cam_http_address, iformat, &opts);
Git - folder not pushed to repo
I have been trying to push an entire folder to my repo on Bitbucket and I kept getting following when I view the source:
I also checked the commit and found it empty; nothing was pushed. The folder was initially named ChartNew.js as a result of a clone, so I tried renaming it multiple times, but I am still stuck with this issue.
Any idea how I can fix that?
UPDATE:
so I tried the second answer in No submodule mapping found in .gitmodules for path and missing .gitmodules file, and yes, the folders were removed successfully. I cloned the Charts repo to redo the push to my repo, but then I got stuck again with the same problem, and the same issue in the screenshot above occurred. :/
Git doesn't track directories. Does it contain any files?
yes the folder contains mutiple files but nothing was pushed to the repo
Those are not submodule entries? What happen if you clone that BitBucket repo? Do you see those folders empty? Do they fill up after a git submodule update --init?
@VonC I tried to clone the repo again but the folder ChartNewjs is empty. When pushed the first time, nothing inside it was pushed to the repo so basically nothing was there when I cloned again
@omarsafwany do you see a .gitmodules file in your repo?
@VonC no it's not there
Strange, because you have the same picture in http://stackoverflow.com/a/30075692/6309
I ran the following command git submodule update --init --recursive but I got this response: No submodule mapping found in .gitmodules for path 'public/ChartNewjs'
Ok, can you try git rm ChartNewjs (no trailing /) and git rm Charts? Maybe the .gitmodules file was deleted, and the gitlinks remain in the index. Then git add -A ., commit and push.
@VonC Finally!!! git add -A . worked and the issue was fixed. Thank you
Great. I have edited my answer accordingly.
Those are submodule entries.
They are called gitlink, special entries in the index of the pushed repo.
A submodule is made to record a specific SHA1 of a subrepo: see "git submodule checks out the same commit".
If a .gitmodules file is no longer in the repo, then those gitlinks need to be removed:
git rm ChartNewjs # no trailing /
git rm Charts
git add -A .
git commit -m "remove gitlinks submodules"
git push
omarsafwany suggests in the edit revision:
In case git rm didn't remove anything, check the answer "No submodule mapping found in .gitmodules for a path that's not a submodule" in order to remove them, then proceed with the above instructions.
Sequelize special query that involves "last inserted row id"
I have the following database structure (expressed in SQLite dialect):
CREATE TABLE `Clocks` (`_id` INTEGER PRIMARY KEY AUTOINCREMENT, `time` DATETIME);
CREATE TABLE `Operations`
(
`_id` UUID NOT NULL PRIMARY KEY,
`finished` TINYINT(1) NOT NULL DEFAULT 0,
`launchedOn` BIGINT REFERENCES `Clocks` (`_id`) ON DELETE SET NULL ON UPDATE CASCADE,
`finishedOn` BIGINT REFERENCES `Clocks` (`_id`) ON DELETE SET NULL ON UPDATE CASCADE
);
Now what I would like to achieve in Sequelize.js looks like the following SQL Query in SQLite:
BEGIN TRANSACTION;
INSERT INTO Clocks(time) VALUES (date('now'));
INSERT INTO Operations(_id, finished, launchedOn) VALUES ('00000000-0000-0000-0000-000000000001', 0, last_insert_rowid());
COMMIT;
and following:
BEGIN TRANSACTION;
INSERT INTO Clocks(time) VALUES (date('now'));
UPDATE Operations
SET finished = 1,
finishedOn = last_insert_rowid()
WHERE _id = '00000000-0000-0000-0000-000000000001';
COMMIT;
I've done some investigation with Sequelize.js and have an idea of how to organize the above transactions, but I have no idea how to include last_insert_rowid() in the list of inserted values. This function, by the way, is SQLite-specific; is there a cross-database alternative to it?
Thank you in advance!
on both queries, are you using last_insert_rowid() to get the previously inserted clock's id?
Yes, that is correct answer... :)
You can try these, you can return the resulting object created upon insert and use it on the next query.
the first query:
return sequelize.transaction(function (t) {
return Clocks.sync().then(function(){
return Clocks.create({
time: Sequelize.fn('date', 'now')
});
}).then(function(clock){
return Operations.sync().then(function(){
return Operations.create({
id: '00000000-0000-0000-0000-000000000001',
finished: 0,
launchedOn: clock.id
}, {transaction: t});
});
});
});
second query:
return sequelize.transaction(function (t) {
return Clocks.sync().then(function(){
return Clocks.create({
time: Sequelize.fn('date', 'now')
});
}).then(function(clock){
return Operations.sync().then(function(){
return Operations.update({
finished: 1,
finishedOn: clock.id
},
{where: { id: '00000000-0000-0000-0000-000000000001'}},
{transaction: t});
});
});
});
I'm pretty sure you can do the same on other dialects.
As far as I understood, Sequelize won't use the actual clock.id value in the Operations.create call; it will somehow figure out that I want to reference the id from the previous call and use that to render the SQL query. Or am I missing something, and Clocks.create will actually be executed (the query sent to the DB and its results obtained) and only then will Operations.create use the value returned from the previous call? That would mean we're sending the SQL statements individually to the DB?
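Regarding the comment's question: the second call is only issued after the first insert has completed, because each `.then` callback receives the already-resolved row. So yes, the statements go to the database one after another. A plain-Promise sketch of the control flow, with the database calls stubbed out (the ids and values are made up):

```javascript
// Stubs standing in for Clocks.create and Operations.create: each returns a
// Promise that resolves only once the "query" has finished.
function createClock() {
  return Promise.resolve({ id: 42 }); // pretend the DB returned the new Clocks row
}
function createOperation(attrs) {
  return Promise.resolve(attrs); // pretend the insert succeeded
}

createClock()
  // `clock` is the fully resolved row, so clock.id is the real value by now:
  .then((clock) => createOperation({ id: "op-1", launchedOn: clock.id }))
  .then((op) => console.log(op.launchedOn)); // logs 42, not a placeholder
```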
Windows phone 8 download then unzip to isolated storage
I'm writing a Windows Phone 8 application that has the following functions:
Download a zip file from the internet
Extract it to the isolated storage
I'm looking for a solution but haven't found one. If you have any suggestions, please help.
Thanks in advance!
EDIT:
I break it down into several steps:
Check if storage is available - DONE
Check if file is compressed - DONE
Use Background Transfer (or another method) to download to the local folder and display information to the user (percentage, etc.) - NOT YET
Unzip file to desired location in isolated storage - NOT YET
Do stuffs after that... - DONE
For step 4, I found and modified some code to extract files to isolated storage (using the SharpGIS.UnZipper lib):
public async void UnzipAndSaveFiles(Stream stream, string name)
{
using (IsolatedStorageFile isoStore = IsolatedStorageFile.GetUserStoreForApplication())
{
using (var zipStream = new UnZipper(stream))
{
foreach (string file in zipStream.FileNamesInZip)
{
string fileName = Path.GetFileName(file);
if (!string.IsNullOrEmpty(fileName))
{
StorageFolder folder = ApplicationData.Current.LocalFolder;
folder = await folder.CreateFolderAsync("html", CreationCollisionOption.OpenIfExists);
StorageFile file1 = await folder.CreateFileAsync(name, CreationCollisionOption.ReplaceExisting);
//save file entry to storage
using (var writer = new StreamWriter(await file1.OpenStreamForWriteAsync()))
{
writer.Write(file);
}
}
}
}
}
}
This code is untested (since I haven't downloaded any file yet).
Can anyone point out anything that should be corrected (or enhanced)?
Can anyone help me modify it to extract a password-protected file? (Obviously I have the key.)
You might want to try DotNetZip. You can get it using the NuGet manager. As for downloading a file off the internet, there are a lot of answers on Stack Overflow.
This link might help you. http://www.sharpgis.net/post/2009/04/22/REALLY-small-unzip-utility-for-Silverlight
Could follow these too: http://stackoverflow.com/questions/11742348/extract-zip-file-from-isolatedstorage
http://stackoverflow.com/questions/22889276/how-to-unzip-a-file-in-isolatedstorage-in-windows-phone-8-app
Multiple variable className in next js
How do I apply multiple classNames in Next.js when some of the class names are variables?
I'm using the component-level CSS approach.
Here's my code and what I want to do:
import styles from "./ColorGroup.module.css";
const colors = ["red", "sky", "yellow", "green", "golden"];
const ColorGroup = () => {
return (
<div className={styles.colorContainer}>
<text>Colour</text>
<div className={styles.colorBoxContainer}>
{colors.map((color, index) => (
<div
// the variable color class is not applied, only colorBox is; I want both
className={`${styles}.${color} ${styles.colorBox}`}
key={index}
></div>
))}
</div>
</div>
);
};
Goal:
CSS code:
/* code for layout and arrangement above */
.colorBox {
height: 36px;
width: 36px;
}
.red {
background-color: lightcoral;
}
.sky {
background-color: skyblue;
}
.yellow {
background-color: lightyellow;
}
.green {
background-color: lightgreen;
}
.golden {
background-color: greenyellow;
}
But this method is only applying the colorBox className and not doing anything for ${styles}.${color} . How to apply both ?
You should use bracket notation:
<div className={styles.colorBoxContainer}>
{colors.map((color, index) => (
<div
className={`${styles[color]} ${styles.colorBox}`}
key={index}
></div>
))}
</div>
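For background on why the bracket form works: a CSS Modules import is just a plain object mapping source class names to generated ones, so `styles[color]` looks the key up dynamically, while `styles.color` looks for a literal key named "color". A minimal illustration with a made-up mapping:

```javascript
// A made-up stand-in for what `import styles from "./ColorGroup.module.css"` yields:
const styles = {
  colorBox: "ColorGroup_colorBox__a1b2",
  red: "ColorGroup_red__c3d4",
};

const color = "red";

console.log(styles.color);  // undefined: looks for the literal key "color"
console.log(styles[color]); // "ColorGroup_red__c3d4": evaluates the variable first
console.log(`${styles[color]} ${styles.colorBox}`);
// "ColorGroup_red__c3d4 ColorGroup_colorBox__a1b2"
```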
You can try using the style object instead; it is easier to use with variables:
<div key={index} className={`${styles.colorBox}`} style={{ backgroundColor: color }} >
</div>
Thanks, this approach didn't come to my mind :D But do you know how to apply those classNames if we want to do something more than just backgroundColor?
Java JAX-RS CORS/Tomcat Conflict : javax.servlet.ServletException
I am currently running my own web app, and setting up my environment usually does not give me any issues. However, this is a new machine set up with Tomcat 9.0 and JDK 8. The only difference between this one and the others, which work properly, is the version of the Eclipse IDE.
All resources return 404 in this environment, and at runtime I have narrowed down the reason to:
javax.servlet.ServletException: It is not allowed to configure supportsCredentials=[true] when allowedOrigins=[*]
Anyone know why this is no longer allowed / why it does not work?
In src/main/webapp/WEB-INF/web.xml CORS filters are added as follows:
<?xml version="1.0" encoding="UTF-8"?>
<!-- This web.xml file is not required when using Servlet 3.0 container,
see implementation details http://jersey.java.net/nonav/documentation/latest/jax-rs.html -->
<web-app version="2.5" xmlns="http://java.sun.com/xml/ns/javaee" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd">
<servlet>
<servlet-name>Jersey Web Application</servlet-name>
<servlet-class>org.glassfish.jersey.servlet.ServletContainer</servlet-class>
<init-param>
<param-name>jersey.config.server.provider.packages</param-name>
<param-value>lksecure.lks,lksecure,messenger.msg</param-value>
</init-param>
<load-on-startup>1</load-on-startup>
</servlet>
<servlet-mapping>
<servlet-name>Jersey Web Application</servlet-name>
<url-pattern>/webapi/*</url-pattern>
</servlet-mapping>
<filter>
<filter-name>CorsFilter</filter-name>
<filter-class>org.apache.catalina.filters.CorsFilter</filter-class>
<init-param>
<param-name>cors.allowed.origins</param-name>
<param-value>*</param-value>
</init-param>
<init-param>
<param-name>cors.allowed.methods</param-name>
<param-value>GET,POST,DELETE,HEAD,OPTIONS</param-value>
</init-param>
<init-param>
<param-name>cors.allowed.headers</param-name>
<param-value>Content-Type,auth,user,persona,target,recaptcha,id,endpoint,port,X-Requested-With,accept,Origin,Access-Control-Request-Method,Access-Control-Request-Headers</param-value>
</init-param>
<init-param>
<param-name>cors.exposed.headers</param-name>
<param-value>Access-Control-Allow-Origin,Access-Control-Allow-Credentials</param-value>
</init-param>
<init-param>
<param-name>cors.support.credentials</param-name>
<param-value>true</param-value>
</init-param>
<init-param>
<param-name>cors.preflight.maxage</param-name>
<param-value>10</param-value>
</init-param>
</filter>
<filter-mapping>
<filter-name>CorsFilter</filter-name>
<url-pattern>/*</url-pattern>
</filter-mapping>
</web-app>
Thanks!
You have to provide a comma-separated whitelist of allowed origins in order to also have supportsCredentials=[true]:
http://tomcat.apache.org/tomcat-7.0-doc/config/filter.html#CORS_Filter
How to narrow down the problem and how to find out which URLs you have to add for Tomcats web.xml cors.allowed.origins parameter is explained here:
https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS/Errors
This behavior was introduced due to security reasons in May 2018:
https://bz.apache.org/bugzilla/show_bug.cgi?id=62343
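In practice that means replacing the wildcard with the concrete origins your frontend is served from; the hostnames below are placeholders you would swap for your own:

```xml
<init-param>
    <param-name>cors.allowed.origins</param-name>
    <!-- list every scheme://host:port that is allowed to call this API -->
    <param-value>https://app.example.com,https://admin.example.com</param-value>
</init-param>
```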
<init-param>
    <param-name>cors.allowed.origins</param-name>
    <param-value>*</param-value>
</init-param>
just put a '/' before the '*', like this:
<init-param>
    <param-name>cors.allowed.origins</param-name>
    <param-value>/*</param-value>
</init-param>
Convert org.w3c.dom.Node into Document
I have a Node from one Document. I want to take that Node and turn it into the root node of a new Document.
Only way I can think of is the following:
Node node = someChildNodeFromDifferentDocument;
DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
factory.setNamespaceAware(true);
DocumentBuilder builder = factory.newDocumentBuilder();
Document newDocument = builder.newDocument();
newDocument.importNode(node);
newDocument.appendChild(node);
This works, but I feel it is rather annoyingly verbose. Is there a less verbose/more direct way I'm not seeing, or do I just have to do it this way?
This is related to http://stackoverflow.com/questions/3184268/org-w3c-dom-domexception-wrong-document-err-a-node-is-used-in-a-different-docu
The code did not work for me - but with some changes from this related question I could get it to work as follows:
Node node = someChildNodeFromDifferentDocument;
DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
factory.setNamespaceAware(true);
DocumentBuilder builder = factory.newDocumentBuilder();
Document newDocument = builder.newDocument();
Node importedNode = newDocument.importNode(node, true);
newDocument.appendChild(importedNode);
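For completeness, here is a self-contained, runnable sketch built from the corrected snippet above; the XML content and element names are made up for illustration:

```java
import java.io.ByteArrayInputStream;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Node;

public class NodeToDocument {

    // Builds a new Document whose root element is a deep copy of the given node.
    public static Document toDocument(Node node) throws Exception {
        DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
        factory.setNamespaceAware(true);
        DocumentBuilder builder = factory.newDocumentBuilder();
        Document newDocument = builder.newDocument();
        // importNode copies the node into the new document's context; the
        // second argument requests a deep copy (children included).
        Node imported = newDocument.importNode(node, true);
        newDocument.appendChild(imported);
        return newDocument;
    }

    public static void main(String[] args) throws Exception {
        String xml = "<root><child attr=\"x\">hello</child></root>";
        DocumentBuilder builder =
                DocumentBuilderFactory.newInstance().newDocumentBuilder();
        Document original =
                builder.parse(new ByteArrayInputStream(xml.getBytes("UTF-8")));
        // <child> is the first (and only) child of <root>:
        Node child = original.getDocumentElement().getFirstChild();
        Document copy = toDocument(child);
        System.out.println(copy.getDocumentElement().getTagName());     // child
        System.out.println(copy.getDocumentElement().getTextContent()); // hello
    }
}
```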
That looks about right to me. While it does look generally verbose, it certainly doesn't look significantly more verbose than other code using the DOM API. It's just an annoying API, unfortunately.
Of course, it's simpler if you've already got a DocumentBuilder from elsewhere - that would get rid of quite a lot of your code.
Alright, guess I just have to accept it then :p Yeah, in my actual code I have created an XmlHelper which handles the factories and such.
@Svish: Right - and if you need to do this in more than one place, you could easily write a createDocument method in your helper class :)
document from Node
Document document = node.getOwnerDocument();
Just wish to point out that using node.getOwnerDocument() returns the entire original document, not the sub-portion that the node represents.
You can simply clone the old document using cloneNode and then typecast it to Document like below:
Document newDocument = (Document) node.cloneNode(true);
Maybe you can use this code:
String xmlResult = XMLHelper.nodeToXMLString(node);
Document docDataItem = DOMHelper.stringToDOM(xmlResult);
The answer should at least point where to find XMLHelper's and DOMHelper's implementations as they're not in the Java standard library.
Add .js in Magento
I'm trying to add my own .js file to a Magento project, but I can't get it to work...
I've followed the steps here, but I still can't do it.
I did the following:
Frontend: Opened page.xml and inside the block: block type="page/html_head" name="head" as="head", I included my .js file: lib/query.js.
Backend: I modified main.xml like said here, and I added the .js to the block named as head.
But nothing happens. The JavaScript seems not to be working. Any ideas why, or any step that I need to follow?
I EDITED THE QUESTION because I found a different problem:
I did everything as suggested in the Answers. And it didn't work. But I think it's because of the script.
When I go to Firebug, I see these mistakes, and I don't know if they were there at first:
The first mistake is located at head.phtml, where I make the call:
<script type="text/javascript">
$jQuery.noConflict();
</script>
The second and the third ones are located at the beginning of my .js files...
Any idea? Maybe solving this, will solve my other issue...
Are you putting your file in (root folder) /lib/query.js or /js/lib/query.js?
Second one... /js/lib/jquery.js. Is that correct?
Copy the URL of your script from the page source (www.xyz/js/lib/jquery.js) and paste it into your browser address bar to check if it loads. Also check my code below for one way of doing jQuery noConflict.
@Sonhja: check my answer here http://stackoverflow.com/questions/8310233/magento-using-jquery-with-noconflict/8310465#8310465
jQuery: Creates a different alias for jQuery to use in the rest of the script.
var $j = jQuery.noConflict();
// Do something with jQuery
$j("div p").hide();
//then use magento prototype with
$("content").style.display = 'none';
See more http://api.jquery.com/jQuery.noConflict/
Since it seems like you are adding the same file to the admin and the frontend, put your file in (root folder) /js/query.js (not /lib/query.js).
Add this to page.xml
<block type="page/html_head" name="head" as="head">
<action method="addJs"><script>query.js</script></action>
....
Add this to main.xml
<default>
<block type="adminhtml/page" name="root" output="toHtml" template="page.phtml">
<block type="adminhtml/page_head" name="head" as="head" template="page/head.phtml">
<action method="addJs"><script>query.js</script></action>
....
It's not a wise idea to make changes to core files. Take a look on workaround for admin files
http://www.magentocommerce.com/boards/viewthread/17306/
I copied the .js link, and it loads. I tried your code, but it still doesn't work...
If the file loads, that means the Magento XML is working correctly. Did you fix the jQuery noConflict() issue? Take a look at the URL below the code for an example of using noConflict() the way you have it, or do it the way I suggest above. What error are you getting?
Yes, I'm making some progress... I solved the jQuery noConflict issue. That was a problem with the order I loaded the .js files in. Right now I only have this one: TypeError: $(document).ready is not a function. Step by step I will get there! :D
With the way above you would do $j(document).ready()... with your way, pass jQuery into the callback so that $ refers to jQuery inside it:
jQuery.noConflict()(function ($) {
    $(document).ready(function () { /* ... */ });
});
It still says jQuery is not defined... :(( Aggg!
I will make your answer as good. I think it will someday work... :P
What I normally do is append 'var $j = jQuery.noConflict();' to the end of my jquery.js file
Inside the 'block type="page/html_head" ' insert following line
<action method="addJs"><script>lib/query.js</script></action>
Also put your query.js file at the "magento-root-directory/js/lib/query.js" location.
And of course do the reindexing and delete your cache.
I did so, but I think my problem is a different one... Why do you think the errors appear?
1) Have you deleted the cache and session?: http://kb.siteground.com/article/How_to_clear_the_cache_in_Magento.html
2) Double check the page source in your web browser. The file might be included correctly already, but maybe your javascript file itself is the problem.
I think you're right. I think my javascript is wrong. Could you please check the edit and see if you know what's wrong?
How do I write a non Velocity test that uses Mongo?
I got Velocity to work, and it's cool, but I wish I could run tests on the command line without starting up Meteor. Restarting is slow.
I have managed to write simple unit tests that I can run with npm test, but they don't have access to any Meteor or package code, so I can only write pretty isolated unit tests. Yeah, I can mock out things, but mocking out Mongo seems mad.
Is there a way to write command line tests that can use Mongo and package code?
Can you link/post how you run the npm tests?
I just cd to the test directory and do npm test
The tests have var expect = require("chai").expect;, and that's the whole setup, I think.
I've heard you can run the Velocity CLI; it would be good if there was better documentation around that. I would like to do the same but haven't got it working yet myself. There is an alternative, newer command-line testing framework, https://github.com/anticoders/gagarin, if you want to give it a shot.
I would say stick with Velocity. They are working on some nice ways to speed it up: https://www.youtube.com/watch?v=83dBtU6qy6c
You can also now run velocity directly from the command line w/ meteor run --test.
The Velocity speedup is about running a lot of tests concurrently. My goal is sort of the opposite. I want to run one test quickly, during TDD. Right now it takes 20 seconds before the tests even start.
Chrome extension with content script on extension icon
I want to put something to work but I am totally confused about googles "isolated worlds". I searched a lot but I cannot find the answer to my needs.
Workflow: the user clicks on the extension icon -> a JavaScript will search the DOM tree for a div "xpto" and get its content -> use this content to do a Google search in a new tab.
manifest.json
{
"manifest_version": 2,
"name": "searchongoog",
"description": "test",
"version": "1.0",
"permissions": [ "tabs",
"https://*/*","http://*/*","activeTab"
],
"content_scripts": [
{
"matches": ["https://*/*","http://*/*"],
"js": ["background.js"],
"run_at": "document_end"
}
],
"browser_action": {
"default_icon": "icon.png",
"default_popup": "popup.html"
}
}
With this I can get the information, but every time I open a page it automatically gets done. I don't want that; I want it to run background.js only when the extension icon is clicked. I tried to invoke background.js inside popup.html, but it doesn't have access to the DOM to extract the information to search.
Any help would be great.
PS: I am a total noob at this, so don't flame me if I am doing it totally wrong.
Thanks anyway
You could try "programmatic injection" instead of "match patterns injection".
Programmatic injection is useful especially when you don't want to inject code into every page that matches the pattern, for example when the code should be injected only when the user clicks the browser action button.
I wrote a simple example to demonstrate how to implement "programmatic injection":
First, you need to add a background page to register the "browser action" click event listener.
manifest.json
{
"manifest_version": 2,
"name": "searchongoog",
"description": "test",
"version": "1.0",
"permissions": [ "tabs",
"https://*/*","http://*/*","activeTab"
],
"background":{
"persistent":false,
"page":"background.html"
},
"browser_action": {
"default_icon": "icon.png"
}
}
Register the browser action click listener (injector.js is the script in which you search the DOM and get the content for the Google search in a new tab).
Note: Please make sure that the browser action has no popup. This event will not fire if it has a popup.
background.html
<script>
chrome.browserAction.onClicked.addListener(function(tab) {
chrome.tabs.executeScript(null,{file:"injector.js"});
});
</script>
injector.js
alert("Programmatic injection!");
DEMO snapshot:
Hope this is helpful for you.
Why are you using manifest-injected content scripts? Also, please don't encourage persistent background pages for no obvious reason.
@ExpertSystem thanks for the reminder. The manifest-injected content script is the author's requirement; I didn't modify it. And you're right! I should set the persistent attribute to "false".
OP's requirement is to achieve what they want. I don't think they really care if there is a manifest-injected content script. Furthermore, since you are proposing a different approach (i.e. one that does work) it only makes sense to remove unnecessary parts of OP's (not working) approach. Just my two cents...
I can't make it work.
I don't understand. Do I need to write an html file with only that code? Nothing else? I did that, and supposing that in my injector.js I have "alert("test");", it won't fire any alert.
My manifest is the same as you proposed; it still doesn't work.
Sorry... it's a typo.... I updated the code of background.html and the manifest file. You could try again.
Sorry, maybe I am seeing the whole picture wrong, but is the background.html just that code? Makes no sense, no?
Data Source Explorer is missing in Eclipse Juno 4.2.2
I am using Eclipse Juno version 4.2.2.
I am unable to open the Data Source Explorer perspective.
Is any software missing in my Eclipse? Can you please provide steps to install it if it is missing.
The Data Source Explorer is not available in the (Juno) Eclipse Classic 4.2.2 version.
Download the (Juno) Eclipse IDE for Java EE Developers version to access the Data Source Explorer.
Are inline virtual functions really nonsense?
I got this question when I received a code review comment saying virtual functions need not be inline.
I thought inline virtual functions could come in handy in scenarios where functions are called on objects directly. But the counter-argument that came to my mind is -- why would one want to declare a function virtual and then use objects to call it?
Is it best not to use inline virtual functions, since they're almost never expanded anyway?
Code snippet I used for analysis:
class Temp
{
public:
virtual ~Temp()
{
}
virtual void myVirtualFunction() const
{
cout<<"Temp::myVirtualFunction"<<endl;
}
};
class TempDerived : public Temp
{
public:
void myVirtualFunction() const
{
cout<<"TempDerived::myVirtualFunction"<<endl;
}
};
int main(void)
{
TempDerived aDerivedObj;
//Compiler thinks it's safe to expand the virtual functions
aDerivedObj.myVirtualFunction();
//type of object Temp points to is always known;
//does compiler still expand virtual functions?
//I doubt compiler would be this much intelligent!
Temp* pTemp = &aDerivedObj;
pTemp->myVirtualFunction();
return 0;
}
Consider compiling an example with whatever switches you need to get an assembler listing, and then showing the code reviewer that, indeed, the compiler can inline virtual functions.
The above usually will not be inlined, because you are calling the virtual function through the base class, although it depends on how smart the compiler is. If it were able to work out that pTemp->myVirtualFunction() can be resolved as a non-virtual call, it might have inlined that call. This reference-based call is inlined by g++ 3.4.2: TempDerived & pTemp = aDerivedObj; pTemp.myVirtualFunction(); Your code is not.
One thing gcc actually does is compare the vtable entry to a specific symbol and then use an inlined variant in a loop if it matches. This is especially useful if the inlined function is empty and the loop can be eliminated in this case.
@doc Modern compiler try hard to determine at compile time the possible values of pointers. Just using a pointer isn't sufficient to prevent inlining at any significant optimization level; GCC even performs simplifications at optimization zero!
Virtual functions can be inlined sometimes. An excerpt from the excellent C++ faq:
"The only time an inline virtual call
can be inlined is when the compiler
knows the "exact class" of the object
which is the target of the virtual
function call. This can happen only
when the compiler has an actual object
rather than a pointer or reference to
an object. I.e., either with a local
object, a global/static object, or a
fully contained object inside a
composite."
True, but it's worth remembering that the compiler is free to ignore the inline specifier even if the call can be resolved at compile time and can be inlined.
Another situation where I think inlining can happen is when you call the method, for example, as this->Temp::myVirtualFunction() - such an invocation skips the virtual table resolution and the function should be inlined without problems - why and whether you'd want to do that is another topic :)
@RnR. It's not necessary to have 'this->', just using the qualified name is enough. And this behaviour takes place for destructors, constructors and in general for assignment operators (see my answer).
sharptooth - true, but AFAIK this is true of all inline functions, not just virtual inline functions.
void f(const Base& lhs, const Base& rhs)
{
}
In the implementation of the function, you never know what lhs and rhs point to until runtime.
lzprmgr - that method/function can certainly be inlined, no matter what lhs and rhs are.
@BaiyanHuang - well, often you do know at compile time, when the method f() itself gets inlined, which may allow the compiler to deduce the types of lhs and rhs. Furthermore, even though lhs or rhs may have arbitrary types, some of the methods could be declared final which would allow inlining. Finally, there are other cases where methods can be proven not overridden, such as classes declared in anonymous namespaces.
C++11 has added final. This changes the accepted answer: it's no longer necessary to know the exact class of the object, it's sufficient to know the object has at least the class type in which the function was declared final:
class A {
virtual void foo();
};
class B : public A {
inline virtual void foo() final { }
};
class C : public B
{
};
void bar(B const& b) {
A const& a = b; // Allowed, every B is an A.
a.foo(); // Call to B::foo() can be inlined, even if b is actually a class C.
}
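To complement the snippet above, here is a minimal, compilable sketch of the same devirtualization idea. The Animal/Dog names are made up for illustration, and whether the call is actually inlined still depends on the compiler and optimization level:

```cpp
#include <cassert>
#include <string>

// Hypothetical names for illustration (not from the thread above).
struct Animal {
    virtual ~Animal() = default;
    virtual std::string speak() const { return "..."; }
};

// Declaring the class (or just the override) 'final' tells the compiler
// that no further override can exist, so a call through a Dog reference
// can be devirtualized and then inlined like any ordinary function.
struct Dog final : Animal {
    std::string speak() const override { return "woof"; }
};

std::string greet(const Dog& d) {
    // Static type Dog + 'final' => the compiler may resolve d.speak()
    // at compile time; no vtable lookup is required here.
    return d.speak();
}
```

Comparing the generated assembly with and without final (e.g. in a compiler explorer) is the easiest way to see whether a given compiler takes the hint.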
Wasn't able to inline it in VS 2017.
I don't think it works this way. Invocation of foo() through a pointer/reference of type A can never be inlined. Calling b.foo() should allow inlining. Unless you are suggesting that the compiler already knows this is a type B because it's aware of the previous line. But that's not the typical usage.
For example, compare the generated code for bar and bas here: https://godbolt.org/g/xy3rNh
@JeffreyFaust There's no reason that information shouldn't be propagated, is there? And icc seems to do it, according to that link.
@AlexeyRomanov Compilers have freedom to optimize beyond the standard, and they certainly do! For simple cases like the above, the compiler could know the type and do this optimization. Things are rarely this simple, and it's not typical to be able to determine the actual type of a polymorphic variable at compile time. I think OP cares about 'in general' and not about these special cases.
There is one category of virtual functions where it still makes sense to have them inline. Consider the following case:
class Base {
public:
inline virtual ~Base () { }
};
class Derived1 : public Base {
inline virtual ~Derived1 () { } // Implicitly calls Base::~Base ();
};
class Derived2 : public Derived1 {
inline virtual ~Derived2 () { } // Implicitly calls Derived1::~Derived1 ();
};
void foo (Base * base) {
delete base; // Virtual call
}
The call to delete base will perform a virtual call to invoke the correct derived-class destructor; this call is not inlined. However, because each destructor calls its parent destructor (which in these cases is empty), the compiler can inline those calls, since they do not call the base-class functions virtually.
The same principle exists for base class constructors or for any set of functions where the derived implementation also calls the base classes implementation.
One should be aware though that empty braces don't always mean the destructor does nothing. Destructors default-destruct every member object in the class, so if you have a few vectors in the base class that could be quite a lot of work in those empty braces!
You don't need inline when you define the function in the class. It's implicit.
I've seen compilers that don't emit any v-table if no non-inline function exists at all (one that would be defined in an implementation file instead of the header). They would throw errors like missing vtable-for-class-A or something similar, and you would be confused as hell, as I was.
Indeed, that's not conformant with the Standard, but it happens, so consider putting at least one virtual function not in the header (if only the virtual destructor), so that the compiler can emit a vtable for the class at that place. I know it happens with some versions of gcc.
As someone mentioned, inline virtual functions can be a benefit sometimes, but of course most often you will use it when you do not know the dynamic type of the object, because that was the whole reason for virtual in the first place.
The compiler however can't completely ignore inline. It has other semantics apart from speeding up a function call. The implicit inline for in-class definitions is the mechanism which allows you to put the definition into the header: only inline functions can be defined multiple times throughout the whole program without violating any rules. In the end, it behaves as if you had defined it only once in the whole program, even though you included the header multiple times into different files linked together.
Well, actually virtual functions can always be inlined, as long as they're statically linked together: suppose we have an abstract class Base with a virtual function F and derived classes Derived1 and Derived2:
class Base {
virtual void F() = 0;
};
class Derived1 : public Base {
virtual void F();
};
class Derived2 : public Base {
virtual void F();
};
A hypothetical call b->F(); (with b of type Base*) is obviously virtual. But you (or the compiler...) could rewrite it like so (suppose typeof is a typeid-like function that returns a value that can be used in a switch)
switch (typeof(b)) {
case Derived1: b->Derived1::F(); break; // static, inlineable call
case Derived2: b->Derived2::F(); break; // static, inlineable call
case Base: assert(!"pure virtual function call!");
default: b->F(); break; // virtual call (dyn-loaded code)
}
While we still need RTTI for the typeof, the call can effectively be inlined by, basically, embedding the vtable inside the instruction stream and specializing the call for all the involved classes. This could also be generalized by specializing only a few classes (say, just Derived1):
switch (typeof(b)) {
case Derived1: b->Derived1::F(); break; // hot path
default: b->F(); break; // default virtual call, cold path
}
Are there any compilers that do this? Or is this just speculation? Sorry if I'm overly skeptical, but your tone in the description above sounds sort of like -- "they totally could do this!", which is different from "some compilers do this".
Yes, Graal does polymorphic inlining (also for LLVM bitcode via Sulong)
Marking a virtual method inline helps further optimize virtual functions in the following two cases:
Curiously recurring template pattern (http://www.codeproject.com/Tips/537606/Cplusplus-Prefer-Curiously-Recurring-Template-Patt)
Replacing virtual methods with templates (http://www.di.unipi.it/~nids/docs/templates_vs_inheritance.html)
inline really doesn't do anything - it's a hint. The compiler might ignore it, or it might inline a call even without inline if it sees the implementation and likes the idea. If code clarity is at stake, the inline should be removed.
For compilers that operate on single TUs only, they can only implicitly inline functions that they have the definition for. A function can only be defined in multiple TUs if you make it inline. 'inline' is more than a hint and it can have a dramatic performance improvement for a g++/makefile build.
Virtual functions declared inline are inlined when called through objects and ignored when called via pointers or references.
In the cases where the function call is unambiguous and the function a suitable candidate for inlining, the compiler is smart enough to inline the code anyway.
The rest of the time "inline virtual" is nonsense, and indeed some compilers won't compile that code.
Which version of g++ won't compile inline virtuals?
Hm. The 4.1.1 I have here now appears to be happy. I first encountered problems with this codebase using a 4.0.x. Guess my info is out of date, edited.
A compiler can only inline a function when the call can be resolved unambiguously at compile time.
Virtual functions, however, are resolved at runtime, and so the compiler cannot inline the call, since at compile time the dynamic type (and therefore the function implementation to be called) cannot be determined.
When you call a base class method from the same or derived class the call is unambiguous and non-virtual
@sharptooth: but then it would be a non-virtual inline method. The compiler can inline functions you don't ask it to, and it probably knows better when to inline or not. Let it decide.
@dribeas: Yes, that's exactly what I'm talking about. I only objected to the statement that virtual functions are resolved at runtime - this is true only when the call is done virtually, not for the exact class.
I believe that's nonsense. Any function can always be inlined, no matter how big it is or whether it's virtual or not. It depends on how the compiler was written. If you do not agree, then I expect that your compiler cannot produce non-inlined code either. That is: the compiler can include code that at runtime tests for the conditions it could not resolve at compile time. It's just like modern compilers can resolve constant values/reduce numeric expressions at compile time. If a function/method is not inlined, it does not mean it cannot be inlined.
It does make sense to make virtual functions and then call them on objects rather than through references or pointers. Scott Meyers recommends, in his book "Effective C++", never to redefine an inherited non-virtual function. That makes sense, because when you make a class with a non-virtual function and redefine the function in a derived class, you may be sure to use it correctly yourself, but you can't be sure others will use it correctly. Also, you may at a later date use it incorrectly yourself. So, if you make a function in a base class and you want it to be redefinable, you should make it virtual. If it makes sense to make virtual functions and call them on objects, it also makes sense to inline them.
Actually, in some cases adding "inline" to a virtual final override can make your code not compile, so there is sometimes a difference (at least under VS2017's compiler)!
Actually, I was using a virtual inline final override function in VS2017, compiling and linking with the C++17 standard, and for some reason it failed when I was using two projects.
I had a test project and an implementation DLL that I am unit testing. In the test project I have a "linker_includes.cpp" file that #includes the *.cpp files from the other project that are needed. I know... I know I can set up msbuild to use the object files from the DLL, but please bear in mind that that is a Microsoft-specific solution, while including the cpp files is unrelated to the build system, and it is much easier to version a cpp file than xml files, project settings and such...
What was interesting is that I was constantly getting a linker error from the test project. Even when I added the definition of the missing functions by copy-paste and not through include! So weird. The other project built fine, and there is no connection between the two other than a project reference, so there is a build order to ensure both are always built...
I think it is some kind of bug in the compiler. I have no idea if it exists in the compiler shipped with VS2020, because I am using an older version because some SDK only works with that properly :-(
I just wanted to add that marking them as inline not only can mean something, but might even make your code not build in some rare circumstances! This is weird, yet good to know.
PS: The code I am working on is computer-graphics related, so I prefer inlining, and that is why I used both final and inline. I kept the final specifier in the hope that the release build is smart enough to inline it when building the DLL even without me directly hinting so...
PS (Linux): I expect the same does not happen with gcc or clang, as I routinely used to do these kinds of things. I am not sure where this issue comes from... I prefer doing C++ on Linux, or at least with some gcc, but sometimes projects differ in their needs.
How to run T-SQL queries using LINQ to Entities?
I want to execute some T-SQL. I want to use existing L2E connection configurations, I don't want to add a new SqlClient provider connection configuration to my .config file. How do I do that? ObjectContext seems to have only methods that return results with classes from data mapping schema.
I've tried the ObjectContext.Connection.CreateCommand method, specifying my T-SQL as the command text, but it failed with a message that was something like "could not understand what entities you are talking about in your query".
You can use ExecuteStoreQuery(). Also see How to: Directly Execute Commands Against the Data Source
Continuum limit for volume
Given the probability density
$$\rho = \frac{i\hbar}{2mc^2}(\psi^*\partial_t\psi - (\partial_t\psi^*)\psi) = |\mathcal{N}|^2|\Xi|^2$$
where $\mathcal{N}$ is a (real) normalization constant and $\Xi$ does not depend on $\vec{x}$.
If we want to normalize it such that we find one particle in a volume $V$, we demand
$$\int_V \mathrm{d}^3x\,|\mathcal{N}|^2|\Xi|^2 = V |\mathcal{N}|^2|\Xi|^2 \overset{!}{=} 1$$
and therefore $\mathcal{N} = \frac{1}{\sqrt{V|\Xi|^2}}$
How can I get the continuum limit of this? I've seen $V\rightarrow (2\pi\hbar)^3$ but this does not work out with the units. Also, what would be the idea behind this?
What exactly do you mean by continuum limit here? It looks like you are already dealing with a continuous $x$. Do you mean the infinite volume limit?
Sorry, I meant going from quantum to classical.
I need to call an action method on button click using MVC and AngularJS
This is the script file which I'm using. When I run it on IE it shows a JavaScript runtime error: angular is undefined.
alert("hi");
var myapp = angular.module('myapp', []);
alert("xxxxx");
myapp.controller("myctrl",
function ($scope, $http, $location) {
alert("inside fun;")
$scope.SaveFeedback = function () {
alert("savefeedback called");
$http({
method: 'POST',
url: '/Home/SaveFeedback',
data: {
Fornecedor: $scope.fornecedor
},
headers: { 'Content-Type': 'application/x-www-form-urlencoded' }
}).success(function (data) {
alert("in success fun");
$location.path("/Home/Index");
}).error(function () {
alert("error");
});
};
});
alert("end");
Can you make sure the AngularJS file is loaded before this piece of code executes?
Before the code executes I added four references... here they are, but it's still throwing angular is undefined:
///
///
///
///
///
///
how about creating a demo page in plnkr then we can help troubleshoot?
plnkr? I didn't understand.
sorry for misunderstanding. http://plnkr.co/ is a website to help create demos quickly.
Did you inject the angular app in your html? like: <body ng-app="myapp"> also you need to inject the angular-route etc in your app, when you declare the module.
I'm not using an .html file; I'm using .cshtml. Wait, I'll try as per your post.
Thank you so much. It's working; that JavaScript runtime error "angular is undefined" is cleared. But the function inside myapp.controller("myctrl", function......) is not firing.
I believe the right syntax for controller is myapp.controller("myctrl", [function () {....}])
Also, the way you define your function now, it won't be fired automatically; it has to be called first (for example with an ng-click).
It appears you haven't included the AngularJS library in the head. Please include
<script src="http://ajax.googleapis.com/ajax/libs/angularjs/1.2.23/angular.min.js"></script>
in the head and try; otherwise you can download it, add it to the script folder, and then refer to it in the head.
I added these four references..
///
///
///
///
///
///
Proving a sum of a polynomial is convergent
Let $p(x)$ be a nonzero polynomial. I want to prove that the series $\sum_{n=1}^{\infty}\frac{p(n)}{a^n}$ is convergent for every $a>1$. I thought it might be useful to prove that $\lim_{n\to\infty}\sqrt[n]{p(n)}=1$.
Hint. The polynomial has just finitely many terms. Think about
$$
\frac{n^d}{a^n}$$
when $d$ and $a$ are fixed and $n$ is large.
That's very helpful Ethan!
Now, if I assume that I've proven that $\lim_{n\to\infty}\sqrt[n]{n^d}=1$, then from the root test:
$\limsup_{n\to\infty}\sqrt[n]{\frac{n^d}{a^n}}=\frac{1}{a}<1$ and therefore the series $\sum_{n=1}^{\infty}\frac{n^d}{a^n}$ converges. Then I can write the polynomial as $p(n)=a_in^i+a_{i-1}n^{i-1}+...+a_0$ and then:
$\sum_{n=1}^{\infty}\frac{p(n)}{a^n}=\sum_{n=1}^{\infty}\frac{a_in^i+a_{i-1}n^{i-1}+...+a_0}{a^n}=\sum_{n=1}^{\infty}{\frac{a_in^i}{a^n}+\frac{a_{i-1}n^{i-1}}{a^n}+...+\frac{a_0}{a^n}}=\sum_{n=1}^{\infty}{\frac{a_in^i}{a^n}}+\sum_{n=1}^{\infty}{\frac{a_{i-1}n^{i-1}}{a^n}}+...+\sum_{n=1}^{\infty}{\frac{a_0}{a^n}}$ and as we've shown each of these sums converges.
Is this correct?
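As a sanity check on the assumed limit, the root computation can be written out as follows (a standard estimate, sketched here):

```latex
\lim_{n\to\infty}\sqrt[n]{n^d}
  = \lim_{n\to\infty}\exp\!\left(\frac{d\ln n}{n}\right)
  = \exp(0) = 1,
\qquad\text{hence}\qquad
\limsup_{n\to\infty}\sqrt[n]{\frac{n^d}{a^n}} = \frac{1}{a} < 1 .
```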
"Where we headed?" or "Where we heading?"; which one is correct?
I had to pick up one of my friends for a movie. She didn't know where we would go, so the moment she sat in the car, she asked, "So, where we headed?" I told her the multiplex name. But I thought she should have asked, "where we heading (= going)"?
Why do people use the past tense "headed" when they still need to reach the destination?
Neither is correct. It should be "Where are we headed/heading" (or "Where're we headed/heading").
Both are correct in speech (and in text quoting speech, like stories). The predictable and information-free auxiliary verb are is frequently, and for some people normally, omitted in this construction.
Possible duplicate of "To be headed for" and "To be headed over to"
Slavic languages formally allow you to drop 'is/are' ('This is a glass' -> Polish to szklanka)
Farlex gives this (common) usage for headed; though this is the adjectival usage, the participle usage corresponds:
headed - having a heading or course in a certain direction; "westward
headed wagons".
The present participle is certainly not a wrong alternative (as an adjective or participle), and is, as you imply, at least as logical. The fact that 'head' can be (and 'head out' virtually has to be) punctive (then, we headed west / we headed into the wind // we headed out west) as well as durative (we headed steadily west / we were heading into the wind) very probably gives rise to the choice. Between, I'd say, "So, where are we headed?" and "So, where are we heading?"
Google Ngrams show that the 'headed' version ('where are we headed') has been consistently appreciably more popular in the US since say 1985; in the UK, until about 1980, it was the other way round. The versions are now roughly equal in popularity in the UK, possibly because of the influence of American film dialogues ('headed' sounding more butch).
Similar phenomena with Where's the cannon pointed? vs Where's the cannon pointing? It seems to be a natural affordance due to the fact that the same verb forms (head, point) are used for both the intransitive stative directional sense and the transitive causative directed-motion sense. For continuous directional phenomena like wind or wave, one can do the same thing with go -- Where is the wind gone/going? Verb pairs with this affordance don't always overlap in context or invited inference, however.
And, as an afterthought, note the absence of auxiliary verbs (are) in the question title. This is due to Conversational Deletion, a well-known syntactic phenomenon in colloquial English.
You colonial types seem more deleterious than us staid and proper stick-at-homes. But I've got to accept 'Which way the women go!?' as a correct misusage.
In speech it would of course pass an oatist.
It's hard to speak clearly with a large cigar in one's mouth. Especially when your brothers have put glue in your moustache-paint. But we owe the great man one of the best hints on punctuation: use a comma if you can't spell 'semicolon'.
@EdwinAshworth As a Northerner (well, North Midlander really) I grew up with "Where we headed / heading / off?" as perfectly normal speech. It's not really RP of course but fine for us oiks.
I do not think we are talking past tense here. I think it is about somebody else directing the action, i.e. "where are we headed?" = "where are we being sent?" by a 3rd party.
It should be heading instead of headed. That's the correct sentence for the present situation. Headed refers to a situation which is already completed.
‘Headed’ is used when you are already on a journey and someone asks you where you are headed.
‘Heading’ is used when you have not started the journey and someone asks you where you are heading.
A newly appointed person, in high position like the president, may ask,”Where is our country heading?” He is new and he has not started the journey of building the country.
But if the person has been given a second term of office, then he may ask,”Where is our country headed?” He has already started his program of building the country and will continue to do so.
I understand where you are coming from.
I mean really...
Where are we headed?
That does not make sense to me. The English language is very complex. For example, you asked, "Why do people use the past tense "headed" when they still need to reach the destination?"
Okay, so past tense is incorrect. I mean, how can that be? If they are already on the way, it should be heading, not headed; of course it should, simply because adding "ing" to a word usually means a current event in progress, and adding "ed" usually means a past event.
Okay so these examples will say why its correct:
We are heading over there, we will be at their house soon.
This sentence says that we are currently in progress, which brings us to the word headING, since it's something we're currently doing. Right now, I'm typing this post; it wouldn't make sense to say "I'm typed this post", would it?
Now, here I will use the past tense to tell you I'm heading out but I'm still on the way, as per your comment about not yet having reached the destination. I will do it in three words.
You asked:
"Why people say use past tense "headed" when they still need to reach the destination?"
"We've headed out"
Now that's my opinion; I think that should be the right way about things. But the definition of the word according to the Cambridge Dictionary (in British mode too) is actually "going in a particular direction: Which way are you headed?", which would mean headed means heading, so to speak. So, as far as official things go, I cannot say, but I still stick with "heading" over "headed" when talking about the present, and headed when talking about the past.
I don't totally get what you are trying to say which I guess not different than what already answered and commented by John Lawler. Other thing is you're poor in expressing thoughts in words, need to work on that.
| common-pile/stackexchange_filtered |
Is there any way to load balance traffic on both links configured with OSPF
The branch network is connected to the DC through two point-to-point leased lines of different bandwidths. I want both links to be in active-active mode. How can I use both links in active-active mode? We are using OSPF as the routing protocol.
That is a very bad idea. It will kill real-time traffic, such as VoIP, and it will cause problems for TCP due to out-of-order packet delivery, which will cause retransmission requests and retransmissions, as well as affect processing on the hosts to resolve the out-of-order packets. It is possible that the network performance will be worse with both links than with one.
Did any answer help you? If so, you should accept the answer so that the question doesn't keep popping up forever, looking for an answer. Alternatively, you can post and accept your own answer.
By default, OSPF does not support unequal-cost load balancing for traffic on multiple links with different bandwidths. But to distribute traffic between two links of different bandwidths, you can make them equal cost: if the two links have equal cost, then OSPF can distribute and load-balance traffic across them. You can accomplish this with the ip ospf cost command:
router(config)# ip ospf cost "value"
This configures the cost metric manually for each interface. The command overrides any settings for the reference bandwidth that you set using the reference-bandwidth command in router configuration mode.
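For instance, it might look like the following (the interface names and the cost value here are placeholders; the point is only that both interfaces get the same cost so OSPF installs both as equal-cost routes):

```
! Hypothetical sketch: force the same OSPF cost on two point-to-point
! links of different bandwidths. Interface names and the cost value
! are made up.
router(config)# interface Serial0/0
router(config-if)# ip ospf cost 100
router(config-if)# interface Serial0/1
router(config-if)# ip ospf cost 100
```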
There are a few points to consider when attempting load balancing.
For Cisco routers at least, you have the option of per-flow load balancing, where the first stream of data will use only one link and the next stream will use the other link, with streams alternating between the two; and per-packet load balancing, where each IP packet uses a different link.
Per-packet load balancing will cause lots of problems, as @ronmaupin explained in the comment.
You can do per-flow (the default setting on the router), but there still are problems:
Since all the packets of a flow take the same path, if that flow exceeds the bandwidth of the link, you will drop packets. In other words, the available bandwidth for a single flow is no more than the bandwidth of a single link. Since you don't know which link will be chosen, you have to assume it might be the slower link.
If a flow is sent over one link (e.g. the slower link), that link could be saturated even though there's plenty of bandwidth on the other link.
OSPF on Cisco routers can only do equal-cost load balancing, where flows alternate equally between the links. With links of different bandwidths, you don't want that. The EIGRP protocol can do proportional load balancing, but it's a bit complicated to set up.
You have to consider that traffic flows in both directions. Each router makes an independent decision on which link to use. The router at site A may choose the fast link to send traffic to site B, but the B router may choose the slower link for the return traffic. Depending on the circumstances, this may be a problem.
As explained by Ron Maupin in the comments, doing Equal Cost Multipath Routing (ECMP), especially over links with different bandwidths, may cause issues.
(But it is supported by some OSPF implementations, including Cisco's.)
What you could do is use Policy Based Routing (PBR) to have some kind of traffic routed over a link and other traffic routed over the second link, with fallback.
To do this with a routing protocol you will likely need some kind of VRF.
The exact way to do it is very dependent on the functionalities available on the routers used.
Can we apply ANN to cryptography?
If a group of computers has identical ANNs with the exact same set of learning data, and all have encryption and decryption functionality, would there be any way for interceptors to interpret the encrypted data?
+
Applying the fact that people with more background information obtain more knowledge from the same source than those who don't, would it be possible for an ANN to interpret data based on its access level?
(Each level has different amount of "background information")
For example, if there is an encrypted text file, a computer with the highest access level would fully decrypt the data to plain text, while a computer with a lower access level would only decrypt half of it (and this decrypted half becomes plain text).
If the above methods can exist, what would be their pros and cons compared to pre-existing technologies (AES, Blowfish and so on)?
I know that in order to discuss your question, I must have a background in cryptography, which I don't have! But there is something that I know for sure:
First of all, a simple search gave me this. It might help.
To lots of us, an ANN looks like a magic wand which can turn almost anything into anything. But the point is, an ANN is only an ML algorithm, albeit a powerful one. You should be aware that there are random and multiple weight initializations in ANNs, and this means fairly stochastic behavior of your network, which vanishes only in shallow networks with a high amount of training data. Your network's specifications are likely even influenced by the order in which instances are fed in during the training phase. So if what I think of the encryption/decryption process ("certain rules for message transformation") is true, then a stochastic network is indeed a sophisticated option (but, according to scientists in the neural cryptography research area, completely possible).
You're probably thinking of auto-encoding networks, a kind that maps X to X and has a bottleneck, such that if successfully trained, it has developed the power of reducing the number of data features from, say, 10,000 to 500, and then successfully retrieving the original data with 10,000 features from the 500-feature version. If I feed a new instance to a trained auto-encoding network, grab it in the bottleneck layer and send it to you, you must be able to feed the message to the bottleneck layer and receive the full decoded message from the output layer. However, there's (always) a catch, and that is you need a really BIG dataset to train your deep network. Besides, you cannot expect it to work well on every kind of data after being trained on a fixed amount; for example, your network cannot successfully retrieve an image of a dog if all you fed it in the training phase were cats' and lizards' images. So can you guarantee that new messages to be encoded have the same type as the ones in the training set? If not, then you should also take this challenge into account.
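To make the bottleneck idea concrete, here is a toy linear stand-in for an auto-encoder (my own sketch, not from the thread): a PCA projection plays the role of the bottleneck layer. Real auto-encoders are nonlinear and trained by gradient descent; this only illustrates the encode → small code → decode round trip.

```python
import numpy as np

def fit_linear_autoencoder(X, k):
    """Return (encode, decode) with a k-dimensional bottleneck."""
    mean = X.mean(axis=0)
    # Top-k principal directions of the centred data.
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    W = Vt[:k]  # shape (k, n_features)

    def encode(x):
        return (x - mean) @ W.T     # n_features -> k

    def decode(z):
        return z @ W + mean         # k -> n_features

    return encode, decode

# Data that truly lives on a 3-dimensional subspace reconstructs exactly.
rng = np.random.default_rng(0)
Z = rng.normal(size=(200, 3))       # 3 hidden factors
A = rng.normal(size=(3, 50))
X = Z @ A                           # 50 observed features
encode, decode = fit_linear_autoencoder(X, k=3)
code = encode(X)                    # compressed to 3 features
X_hat = decode(code)                # retrieved back to 50 features
print(np.allclose(X, X_hat))
```

As the answer notes, this round trip only works well when new inputs resemble the data the model was fitted on.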
Masking in footage something that comes into view and leaves
I have footage where the camera pans and e.g. a carpet rod comes into view and then leaves the view. I want to create a mask to later remove the rod with e.g. inpaint.
Objects which are visible for the whole duration of the footage can be tracked; then (if necessary) tracks are averaged and the mask can be parented to a track.
This wouldn't work here because no point of the carpet rod is permanently visible (so an averaged track would also produce nonsense). Therefore I can't parent the mask to a single track, and I haven't seen a way to change the parenting over time.
Is there any other way than moving the mask by hand using keyframes?
If the carpet rod isn't visible for long enough to even be fully visible and tracked for more than one frame in a row then is it really that big of a deal to just manually keyframe it?
@Jakemoyo It is fully visible for some time and parts of it are visible for a longer time (but different parts at different times so I can't use a single track as parent).
Maybe track it in, like, 3 separate chunks depending on what part is visible, duplicate the mask that number of times, and parent each one to its respective tracker?
COCOS2D Touch detection after application minimisation by gesture(iPad)
Currently I'm fixing an old Cocos2D game. There is a strange problem with it: when the application is sent to the background by the four-finger gesture (iPad), after returning it behaves as if a touch began and didn't end, and there is no way to end it because the application does not respond to touches. Also, when I launch the application and do nothing, the screen becomes locked, and after that the application sometimes does not respond to touches.
I guess there is something I need to do in the following AppDelegate methods, but I can't figure out exactly what:
- (void)applicationWillResignActive:(UIApplication *)application {
[[CCDirector sharedDirector] pause];
}
- (void)applicationDidBecomeActive:(UIApplication *)application {
[[CCDirector sharedDirector] resume];
}
- (void)applicationDidEnterBackground:(UIApplication*)application {
[[CCDirector sharedDirector] stopAnimation];
}
-(void) applicationWillEnterForeground:(UIApplication*)application {
[[CCDirector sharedDirector] startAnimation];
}
Be sure to have touchesCancelled implemented; this may be called when multitasking the app away, and if your touch code only handles touchesEnded it will think it's still in "being touched" mode.
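If the game uses the old standard (non-targeted) CCLayer touch handlers, a minimal sketch might look like this (an assumption about your touch code; forwarding to the end handler is just one way to reuse the same cleanup):

```objc
// Hypothetical CCLayer subclass method. When iOS cancels the touches
// (e.g. the four-finger app-switch gesture), run the same cleanup as a
// normal touch end so the layer doesn't stay stuck in "touched" mode.
- (void)ccTouchesCancelled:(NSSet *)touches withEvent:(UIEvent *)event {
    [self ccTouchesEnded:touches withEvent:event];
}
```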
Thank you so much, that was the reason
Probability distribution of log(1+exp(X)) for Gaussian RV
I am interested in the theory of random variables $Y$ of the form $Y = \log(1+\exp(X))$ with Gaussian $X\sim\mathcal N(\mu, \sigma^2)$.
In particular, I'm interested in results on the cumulative distribution function, the moment generating function, and the cumulant generating function.
What might be suitable keywords to find discussions of these probability distributions / random variables?
I know that for $x\to\pm\infty$, I can approximate $Y$ as either $Y=X$ or $Y=0$, respectively. Maybe there exist bounds on the stochastic properties of $Y$ in terms of these approximations?
Since $f(x)= \log(1+\exp(x))$ is one-to-one, we can directly calculate the cumulative distribution function of $Y$ as $\mathbb{P}(Y \leq y) = \mathbb{P}(X \leq \log(e^y - 1)) = \Phi(( \log(e^y-1) - \mu)/\sigma)$, where $\Phi$ is the CDF of the standard normal distribution.
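A quick numerical sanity check of that CDF formula (my own sketch, not from the thread; the parameter values are arbitrary):

```python
import math
import random

# Check P(Y <= y) = Phi((log(e^y - 1) - mu) / sigma) for Y = log(1 + exp(X))
# against a Monte Carlo estimate.

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def softplus_cdf(y, mu, sigma):
    return phi((math.log(math.expm1(y)) - mu) / sigma)

random.seed(0)
mu, sigma, y = 0.3, 1.2, 1.0
n = 200_000
hits = sum(math.log1p(math.exp(random.gauss(mu, sigma))) <= y
           for _ in range(n))
empirical = hits / n
print(abs(empirical - softplus_cdf(y, mu, sigma)) < 0.01)
```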
Call C Functions from Haskell at runtime
I'm building an interpreter for a dynamic programming language in Haskell. I'd like to add a simple mechanism to call C functions. In the past, I've used the Haskell FFI to call C functions that I had explicitly declared the name and type of; this approach won't work here because the interpreter won't know the name or type of the C functions to be called until runtime.
Is it possible to declare and call C functions at runtime? Where should I begin?
Can't you make a dynamic dispatch mechanism in C and Haskell FFI bindings to this dispatcher? In other words, have a single Haskell->C call that will branch out as needed using all the unsafe power of C and/or ASM.
Dynamic Importing
If you can list all possible types of the C functions that may be called, then you can use the FFI's dynamic import capability (http://www.haskell.org/onlinereport/haskell2010/haskellch8.html). A dynamic import function wraps a C function at runtime. You'll need to declare an import function for each C function type that you may be calling. (Actually, only the ABI matters, so you can treat all C pointer types as equivalent.)
foreign import ccall "dynamic" mkPtrFun :: FunPtr (Ptr () -> IO (Ptr ())) -> Ptr () -> IO (Ptr ())
If you have a pointer to a C function, you can make it callable from Haskell using this wrapper function.
callWithNull :: FunPtr (Ptr a -> IO (Ptr ())) -> IO (Ptr ())
callWithNull f = mkPtrFun f nullPtr
If the types of the C functions are unknown when the Haskell code is compiled, then you cannot do this with the FFI.
Dynamic Loading
As for obtaining C function pointers dynamically, the FFI doesn't help you. You can use dynamic loading libraries in C such as libdl. See the man page: http://linux.die.net/man/3/dlopen.
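Putting the two halves together might look like this (a sketch only; the library name is made up, the function type is assumed to be the `Ptr () -> IO (Ptr ())` shape from above, and it relies on the unix package's `System.Posix.DynamicLinker` bindings to libdl):

```haskell
import Foreign.Ptr
import System.Posix.DynamicLinker

foreign import ccall "dynamic"
  mkPtrFun :: FunPtr (Ptr () -> IO (Ptr ())) -> Ptr () -> IO (Ptr ())

-- Look up a symbol by name at runtime and call it through the
-- "dynamic" wrapper declared above.
callByName :: String -> Ptr () -> IO (Ptr ())
callByName name arg = do
  dl <- dlopen "libexample.so" [RTLD_NOW]   -- library name is a placeholder
  fp <- dlsym dl name                       -- FunPtr to the named symbol
  mkPtrFun (castFunPtr fp) arg
```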
Nginx as a Load balance while login to application its saying invalid user id/password
I have successfully configured nginx as a load balancer in front of my two application servers (Spring Boot). When I open the application via the load balancer's IP, it asks for a user ID and a password, but even when I enter the correct credentials it shows me "incorrect id/password".
When nginx proxies a single Spring Boot application server, I am able to log in with the same user ID/password.
Can you suggest where I am going wrong in my configuration?
My config file is as follows:
upstream backend {
#server <IP_ADDRESS>:80;
server <IP_ADDRESS>:80;
}
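As an aside (an assumption on my part, not something established in the thread): a classic cause of "login works against one server, fails behind the balancer" is that the session created at login lives on one backend while the next request is proxied to the other. If that is the case here, pinning each client to one backend is a quick sketch of a fix:

```nginx
upstream backend {
    ip_hash;                  # keep each client IP on the same backend
    server 192.0.2.10:80;     # placeholder addresses
    server 192.0.2.11:80;
}
```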
map $http_upgrade $connection_upgrade {
default upgrade;
'' close;
}
server {
listen 80 default_server;
listen [::]:80 default_server;
server_name _;
location / {
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_set_header X-NginX-Proxy true;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
proxy_http_version 1.1;
proxy_pass http://backend;
location /private/public/ {
auth_basic off;
}
}
}
Excel: How to update multiple tables at the same time
I have these 2 tables in two different sheets in Excel. They both have common columns, but the only one I am concerned about is the Student ID. What I would like to do is make changes to the Student ID in the first table and have them reflected in the second table's Student ID column, for data consistency. I am working in Excel 2010, and everything I have researched is for later versions of Excel. I have tried the paste-special link technique, but because these are 2 tables it does not work; the paste-special link option does not appear. I am not sure if I need a script to do this or if Excel 2010 has a built-in way to do this.
First Table:
Student ID Last name Initial Age Program
STF348-245 Another L. 21 Drafting
STF348-246 Different R. 19 Science
STF348-247 Name G. 18 Arts
STF348-248 Going L. 23 Nursing
STF348-249 Up M. 37 Science
STF348-250 And J. 20 Arts
STF348-251 Down F. 26 Business
STF348-252 Different S. 22 Arts
STF348-253 Different W. 20 Nursing
STF348-254 Different L. 19 Drafting
Second Table:
Student ID Last name Initial Age Program
STF348-245 Another L. 21 Drafting
STF348-246 Different R. 19 Science
STF348-247 Name G. 18 Arts
STF348-248 Going L. 23 Nursing
STF348-249 Up M. 37 Science
STF348-250 And J. 20 Arts
STF348-251 Down F. 26 Business
STF348-252 Different S. 22 Arts
STF348-253 Different W. 20 Nursing
STF348-254 Different L. 19 Drafting
Clarified a little more
@Jeeped, I don't know if OP is asking for tables to be consolidated as per your suggested dupe, or if OP just wants changes in one to be reflected automatically in another. I don't think you should have been so quick to close in this instance.
@CallumDA I am looking for the second thing you said. I do not get the paste special link option when trying to copy and paste from table to table. it works for cells just fine but not for tables.
In which case it is definitely not a duplicate - I have voted to reopen this question so that you might be able to get answers
@CallumDA - The OP has shown no effort and supplied no sample data ([mcve]); the only question is 'is it possible'. The answer is Yes, it is possible. If the OP decides to put in any effort then they can find the solution themselves and if they run into trouble, they can post back with specifics and I'm sure that all of the volunteers here will run to assist in any way they can. btw, I'm sorry if I caused you any grief with my comment to your response a few days ago; I have read the Q&A on meta but stayed out of the issue as it was somewhat about me.
@Jeeped, I completely agree that the question isn't clear - hence the confusion in the first place. I tend to think that unless something is definitely an exact dupe it shouldn't be marked as such. -- about the post the other day, no problem at all. I went to Meta to get clarification, and I got a lot more than I expected haha. You were right to comment as you did, and they backed that up so it's good enough for me :)
Added pictures for clarification.
Pictures are not a Minimal, Complete, and Verifiable example. For starters, you're asking everyone who might otherwise help you, to manually recreate your table structure (instead, you could provide plain text examples in your question, which makes copy/paste more easy to do).
Have you actually tried anything? "I am just not sure how to accomplish this" -- How do you think you might accomplish this? Have you researched similar topics and attempted to implement those solutions? Show your work...
You might also consider elaborating on what the results should be, and why. It may not be apparent what field(s) have changed and/or whether any other field(s) change as a result, etc.
With the exception of inserting new rows, this can be accomplished using the VLOOKUP function only, assuming the student ID field is a unique identifier. E.g., in the "Program" field, put: =VLOOKUP([@[Student ID]],Table1,5,FALSE) and drag the formula down. Now any changes to Program on Table1 will be reflected in Table2.
With the exception of inserting or appending new rows, this can be accomplished using the VLOOKUP function only, assuming the Student ID field is a unique identifier.
In the "Program" field of Table2, put:
=VLOOKUP([@[Student ID]],Table1,5,FALSE)
Copy/Drag the formula down. Now any changes to Program on Table1 will be reflected in Table2.
Follow the same procedure for other columns, simply using the appropriate header name as the first argument to the function, and making sure to also change the column Index (5 in the above example).
NB: This assumes the "first" table is named "Table1" -- if not, modify the formula accordingly.
If you want to preserve the tables as strict duplicates of one another, including order, then you don't even need VLOOKUP. In Table2, just do:
Student ID | Student Name | Last Name
=Table1[@[Student ID]] | =Table1[@[Student Name]] | =Table1[@[Last Name]]
Thank you so much, this perfectly answers my question.
Quick question for you. When I place that formula in the Program field, am I replacing the values that are already there? Such as Drafting, Science, etc.
Yes, of course. You need to overwrite the existing value with the formula that will dynamically return the results from the other table.
Okay, if that's the case, it says there is an error in the formula. I'm not 100% sure where it might be, because it does not give me specifics.
pagination link is not working for me
When I'm on the first page and I click on pagination link (2), link (3), or link (4), the pagination link does not work.
<?php if ( ! defined('BASEPATH')) exit('No direct script access allowed');
class Pagination extends CI_Controller
{
public function __construct() {
parent::__construct();
$this->load->helper("url");
$this->load->model("Countries_Model");
$this->load->library("pagination");
}
public function index() {
$config = array();
$config["base_url"] = base_url() . "pagination";
$config["total_rows"] = $this->Countries_Model->record_count();
$config["per_page"] = 10;
$config["uri_segment"] = 3;
$choice = $config["total_rows"] / $config["per_page"];
$config["num_links"] = round($choice);
$config['use_page_numbers'] = TRUE;
$config['page_query_string'] = TRUE;
$config['prev_link'] =TRUE;
$this->pagination->initialize($config);
$page = ($this->uri->segment(2))? $this->uri->segment(2) : 0;
$data["results"] = $this->Countries_Model
->get_countries($config["per_page"], $page);
$data["links"] = $this->pagination->create_links();
$this->load->view("pagination", $data);
}
} ?>
What does "not working" mean?
When I click on the 2nd, 3rd or last page link, the page shows an "Object not found" error. URL: http://localhost/CI/pagination/10. Please tell me how to solve this problem.
Add your generated pagination link and the error.
Sorry, I can't understand how I can add the generated pagination link and error.
Change $config["uri_segment"] = 3; to $config["uri_segment"] = 2; and $config['base_url'] = base_url().'pagination/index';
Look at this, it has a detailed explanation on how to use CI with pagination http://www.phpecosystem.com/2014/01/codeigniter-crud-with-pagination.html
change this line :
$config["base_url"] = base_url() . "pagination";
to
$config["base_url"] = base_url("pagination/index");
I suppose that this method fetches the data from your database:
$data["results"] = $this->Countries_Model
->get_countries($config["per_page"], $page);
Well, you also need to pass an offset (the $offset is segment 4 in your case):
in your controller
public function index($offset = 0) {
.....
$this->Countries_Model
->get_countries($config["per_page"], $page,$offset);
}
in your model
function get_countries($per_page, $page, $offset) {
......
$this->db->get('yourtable',$per_page, $offset);
}
Even if I make the above changes, the 2nd, 3rd... links are still not working.
Multiple Column Dropdown in Details View control
I have created a dropdown list in a DetailsView control
I want to make that dropdown have multiple columns. Is that possible? If So how?
I've done a lot of searching and cant seem to find the solution.
Below is the code for my dropdown template:
<asp:TemplateField HeaderText="Division" SortExpression="fDivisionID">
<EditItemTemplate>
<asp:DropDownList
ID="DivisionDropDownList"
runat="server"
DataSourceID="DivisionSqlDataSource"
DataTextField="DivisionName"
DataValueField="DivisionID"
Text='<%# Bind("fDivisionID") %>'>
</asp:DropDownList>
</EditItemTemplate>
<ItemTemplate>
<asp:Label ID="Label1" runat="server" Text='<%# Bind("DivisionName") %>'></asp:Label>
</ItemTemplate>
</asp:TemplateField>
How do I add another column to that.. such as "Location"..
Thanks in advance
There are, I think, three ways actually.
Add the two columns together in your Select statement when fetching from the database, like below, or from here:
Select Column1 + ' ' + Column2 From Yourtable
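Applied to the fields in this question, that could look like the following (the Location column and the Divisions table name are assumptions, since they are not shown in the question):

```sql
-- Concatenate the two columns into one display field, then bind the
-- dropdown with DataTextField="DivisionDisplay".
SELECT DivisionID,
       DivisionName + ' - ' + Location AS DivisionDisplay
FROM   Divisions
```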
You can use third party controls.
Jquery Also supports this feature Here
These do it using other controls... Can it be done within the DetailsView control?
Thanks. I've gone another way: I created a new field, combined what I need into it, and used that. What you're suggesting cannot be done within the DetailsView control.
You can make a dropdownlist show multiple columns by appending text or by the other options mentioned above.
If $f\in L^{\infty}(E)$, then $\|f\|_{\infty}=\lim_{p\to \infty}\|f\|_{p}$.
Let $E$ be a measurable set with $m(E)<\infty$. Then $L^{\infty}(E)\subset L^p(E)$ for each $p$ with $1\le p\lt\infty$. Furthermore, if $f\in L^{\infty}(E)$, then $$\|f\|_{\infty}=\lim_{p\to \infty}\|f\|_{p}$$
Proof:Let $f\in L^{\infty}(E)$ and $\mu=\|f\|_{\infty}$
$\implies \vert f(x)\vert^p \le \mu^p$ a.e. on $E$
$\implies \int_E \vert f(x)\vert^p \ dx \le \mu^p \cdot m(E) \lt \infty$
$\implies f\in L^p(E)$
$\implies L^{\infty}(E)\subset \ L^p(E)$.
How to show if $f\in L^{\infty}$,then $\|f\|_{ \infty}=\lim_{p\to \infty}\|f\|_{ p}$?
I've tried hard to prove this. My professor suggested that I use the concept of the limit superior, but I'm failing to make that idea practical. I know it is easy, as it is a consequence of the above result, but I'm lacking somewhere. So it would be helpful to give a detailed explanation of the proof.
Thank you!
This is Davide Giraudo's answer for Limit of $L^p$ norm
Fix $\delta>0$ and let $S_\delta:=\{x,|f(x)|\geqslant \lVert f\rVert_\infty-\delta\}$ for $\delta<\lVert f\rVert_\infty$. We have
$$\lVert f\rVert_p\geqslant \left(\int_{S_\delta}(\lVert f\rVert_\infty-\delta)^pd\mu\right)^{1/p}=(\lVert f\rVert_\infty-\delta)\mu(S_\delta)^{1/p},$$
since $\mu(S_\delta)$ is finite and positive.
This gives
$$\liminf_{p\to +\infty}\lVert f\rVert_p\geqslant\lVert f\rVert_\infty.$$
As $|f(x)|\leqslant\lVert f\rVert_\infty$ for almost every $x$, we have for $p>q$, $$
\lVert f\rVert_p\leqslant\left(\int_X|f(x)|^{p-q}|f(x)|^qd\mu\right)^{1/p}\leqslant \lVert f\rVert_\infty^{\frac{p-q}p}\lVert f\rVert_q^{q/p},$$
giving the reverse inequality.
My question is: how did $\left(\int_X|f(x)|^{p-q}|f(x)|^qd\mu\right)^{1/p}\leqslant \lVert f\rVert_\infty^{\frac{p-q}p}\lVert f\rVert_q^{q/p}$ come about?
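For what it's worth, that step can be spelled out using only the bound $|f|\leqslant\lVert f\rVert_\infty$ almost everywhere:

```latex
\left(\int_X|f|^{p-q}\,|f|^q\,d\mu\right)^{1/p}
\leqslant \left(\lVert f\rVert_\infty^{\,p-q}\int_X|f|^q\,d\mu\right)^{1/p}
= \lVert f\rVert_\infty^{\frac{p-q}{p}}\,\lVert f\rVert_q^{\,q/p},
```

since $\int_X|f|^q\,d\mu = \lVert f\rVert_q^q$.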
This has already been answered here.
@Miguel: Will you please explain how liminf enters the scenario: $\liminf_{p\to +\infty}\lVert f\rVert_p\geqslant\lVert f\rVert_\infty$?
Sure: You know $a_n \geq b_n \rightarrow b$ but you don't know yet whether $a_n$ converges. What you do know is that $\lim \inf a_n \geq b$, since the $\lim \inf$ is always defined. If you prove the other inequality for the $\lim \sup$ then you know that the limit of $a_n$ exists and is equal to $b$.
@Miguel: Why is only lim inf $a_n$ defined? Why not lim sup $a_n$?
@Miguel:How $\left(\int_X|f(x)|^{p-q}|f(x)|^qd\mu\right)^{1/p}\leqslant \lVert f\rVert_\infty^{\frac{p-q}p}\lVert f\rVert_q^{q/p}$ happened?
(Please don't spam the conversation with repeated posts) lim sup is of course also defined for every sequence (check your notes for analysis 1!). As I said, you also have to prove a bound for it later. As to your other question: $f$ is bounded by its supremum norm.
@Miguel:Apology for the repeated posts,I'm new on this site and do not know how to use it properly.
@Miguel:I got it.I thought that this is the application of some standard inequality .
Great! I'm glad it's clear now. :)
https://en.wikipedia.org/wiki/H%C3%B6lder%27s_inequality#Generalization_of_H.C3.B6lder.27s_inequality - look at the end of this section. Then use boundedness.
@Miguel: How does $\delta$ vanish in $\liminf_{p\to +\infty}\lVert f\rVert_p\geqslant\lVert f\rVert_\infty$?
How to set width of a inline element?
I want to design a tab header. The html code is,
<div class="tab-header">
<a href="" class="current">tab1-title</a>
<a href="">tab2-title</a>
</div>
Now I need to apply a background image to the current class, to make an effect like this,
But the inline element a is not big enough for this background image, so I adjusted the width and height of the element a. The adjustment failed: the width/height of the element did not change.
How could I get the right effect?
Thanks.
To apply a width, set the CSS display property to either block or inline-block.
block: the element will sit on a line of its own. In that case you may want to set float so the links stay on the same line;
inline-block: the element will have height, width, etc., and multiple elements will sit on the same line (block).
Set the display property to inline-block, and then you can set the width, height, and vertical-align as necessary.
Use
display: inline-block;
on the inline element. With a little tweaking, this has wide cross-browser support and is incredibly useful for the kind of layout you're after.
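For the markup in the question, that might look like the following (the size and image path are placeholders):

```css
.tab-header a.current {
    display: inline-block;   /* stays in inline flow but accepts width/height */
    width: 120px;
    height: 40px;
    background: url("tab-current.png") no-repeat center;
}
```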
You'll have to use display:block on the anchor.
It's an inline element; why do you want to convert it to a block?
Can I use a map's iterator type as its mapped type?
I have a tree whose nodes are large strings. I don't really need to navigate the tree other than to follow the path from a node back to the root, so it suffices for each node to consist of the string and a pointer to its parent. I also need to be able to quickly find strings in the tree. The nodes of the tree themselves are not ordered, so this would require some sort of index. However, the strings are big enough that I would rather not duplicate them by storing them both in my tree and in my index.
I could implement both my tree and the index with a single std::map if the key for the map was the string and the mapped value was the pointer to its parent. However, I cannot figure out a way to write either of these types. My best guess would be something like this:
using Tree = std::map<std::string, typename Tree::const_iterator>;
or maybe:
using Node = std::pair<std::string const, typename Node const*>;
using Tree = std::map<std::string, Node const*>;
But these recursive type definitions don't compile. Is there any way to create this type in C++? Or a better way to do what I am trying to do?
This seems like a bit of an abuse of containers. For me personally, a container should not be used as the primary data structure for any non-trivial type. You can make it work, but very quickly you'll find that you blur the lines between interface and implementation. If you care about that, you'll end up wrapping this inside your own class, and then the "quick-and-easy" approach you currently use will seem less and less useful. If you're concerned about memory footprint for overlapping/related structures, look into Boost's intrusive container library. It's great, and more performant too.
However, the strings are big enough that I would rather not duplicate them by storing them both in my tree and in my index. -- Have you considered using std::string_view, thus not needing to duplicate entire strings?
You can wrap the iterator in a type of your own and reference that type instead to avoid the recursive type problem.
#include <map>
#include <string>

// Note: this relies on std::map tolerating an incomplete mapped type at
// this point, which the major standard library implementations accept.
struct const_iterator_wrapper {
    using iterator_type =
        std::map<std::string, const_iterator_wrapper>::const_iterator;
    iterator_type iter;
    const_iterator_wrapper() {}
    const_iterator_wrapper(iterator_type _iter) : iter(_iter) {}
};

using tree = std::map<std::string, const_iterator_wrapper>;
Your using Node definition doesn't compile, but an actual structure would:
struct Node {
std::string const s;
Node const* n;
};
Yes, but this struct is unrelated to the entries in a map. I was hoping to make a map entry (or the pair it contains) point to another map entry.
It was not mentioned whether or not a node's parent might change after creation. If it does not, then a set might be a better fit than a map. (If it does change, then the set option is not completely off the table, but it might require weakening const correctness guarantees, possibly by making the parent pointer mutable.) In fact, you might not have to change your data structure much to use a set.
Let's say your nodes currently look like the following.
struct Node {
std::string data;
const Node * parent; // Might need to add `const`
};
You want these sorted by the data, ignoring the parent pointer. This might require defining a new function. If the following operator< is already defined as something else, then it takes a little more work to define your set, but still not hard.
bool operator<(const Node &a, const Node &b) {
return a.data < b.data;
}
This is all you need to define a set of these nodes that will function much like your desired map.
std::set<Node> tree;
// Add a root element.
auto result = tree.emplace("root", nullptr);
auto root_it = result.first;
// Add a child to the root.
tree.emplace("child", &*root_it);
// The `&*` combination may look odd. It is, though, a way to
// convert an iterator to a pointer.
There are a few gotchas that some people might find unexpected, but nothing other than what comes from using a map for this role.
In the end, as far as the structure of the data is concerned, a map<K, const V> is equivalent to a set<pair<K, V>> with a suitable comparison function. While the member functions differ, the biggest real functional difference between a map and a set of pairs is that the map's values can be changed (hence const V instead of V earlier).
Proper way to Mock repository objects for unit tests using Moq and Unity
At my job we are using Moq for mocking and Unity for an IOC container. I am fairly new to this and do not have many resources at work to help me out with determining the best practices I should use.
Right now, I have a group of repository interfaces (Ex: IRepository1, IRepository2... IRepository4) that a particular process needs to use to do its job.
In the actual code I can determine all of the IRepository objects by using the IOC container and using the RegisterType() method.
I am trying to figure out the best way to be able to test the method that needs the 4 mentioned repositories.
I was thinking I could just register a new instance of the Unity IOC container and call RegisterInstance on the container for each mock object passing in the Mock.Object value for each one. I am trying to make this registration process reusable so I do not have to keep doing the same thing over and over with each unit test unless a unit test requires some specific data to come back from the repository. This is where the problem lies... what is the best practice for setting up expected values on a mocked repository? It seems like if I just call RegisterType on the Unity container that I would lose a reference to the actual Mock object and would not be able to override behavior.
Unit tests should not use the container at all. Dependency Injection (DI) comes in two phases:
Use DI patterns to inject dependencies into consumers. You don't need a container to do that.
At the application's Composition Root, use a DI Container (or Poor Man's DI) to wire all components together.
How not to use any DI Container at all for unit testing
As an example, consider a class that uses IRepository1. By using the Constructor Injection pattern, we can make the dependency an invariant of the class.
public class SomeClass
{
private readonly IRepository1 repository;
public SomeClass(IRepository1 repository)
{
if (repository == null)
{
throw new ArgumentNullException("repository");
}
this.repository = repository;
}
// More members...
}
Notice that the readonly keyword combined with the Guard Clause guarantees that the repository field isn't null if the instance was successfully instantiated.
You don't need a container to create a new instance of SomeClass. You can do that directly from a unit test using Moq or another Test Double:
[TestMethod]
public void Test6()
{
var repStub = new Mock<IRepository1>();
var sut = new SomeClass(repStub.Object);
// The rest of the test...
}
See here for more information...
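The constructor-injection-plus-test-double idea is language-agnostic. Here is the same pattern as a minimal Python sketch using unittest.mock; the class and method names are invented for illustration:

```python
from unittest.mock import Mock

class SomeClass:
    """Constructor Injection: the repository dependency is an invariant."""
    def __init__(self, repository):
        if repository is None:  # Guard Clause
            raise ValueError("repository must not be None")
        self._repository = repository

    def count_items(self):
        return len(self._repository.get_all())

# In a test, wire the object graph by hand -- no container needed.
repo_stub = Mock()
repo_stub.get_all.return_value = ["a", "b"]
sut = SomeClass(repo_stub)
assert sut.count_items() == 2
```

The point is the same as in the C# version: the test creates the system under test directly, and only the Composition Root of the real application needs a container.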
How to use Unity for unit testing
However, if you absolutely must use Unity in your tests, you can create the container and use the RegisterInstance method:
[TestMethod]
public void Test7()
{
var repMock = new Mock<IRepository1>();
var container = new UnityContainer();
container.RegisterInstance<IRepository1>(repMock.Object);
var sut = container.Resolve<SomeClass>();
// The rest of the test...
}
I skimmed your answer and was deciding whether or not to downvote (because it seemed you were recommending using an IoC container in unit tests), and then actually read your answer... I wanted to say, here is my shameful +1. For anyone still finding this useful today: you should use ArgumentNullException(nameof(repository)) and not ArgumentNullException("repository"); the post was written 5-6 years ago, before this feature existed.
What's the correct way to bind this interface?
I'm having a hard time to come up with the correct way to translate the last part (<GPUImageInput>) into the binding. Any suggestions?
@interface GPUImageFilter : GPUImageOutput <GPUImageInput>
Start here : http://docs.xamarin.com/guides/ios/advanced_topics/binding_objective-c_libraries#Binding_Protocols and http://docs.xamarin.com/guides/ios/advanced_topics/api_design#Models.
Depending on what's in GPUImageInput, I'd bind it with the [Model] attribute, then make GPUImageFilter inherit from it
[Model]
//Look, no BaseTypeAttribute
interface GPUImageInput {
//[Export] everything you need
}
[BaseType (typeof (XXXX))]
interface GPUImageFilter : GPUImageInput {
//[Export again]
}
Hope it helps
How to find last Sunday's date through batch
How can I find last Sunday's date with a batch command?
what I was trying is
echo %date:~4,2%%date:~7,2%%date:~10,4%
this is good for current date
but for last Sunday date... can anyone suggest any logic through batch.
To get add/subtract functionality for dates, take a look at How to get yesterday's date in batch file?
Windows PowerShell:
[DateTime]::Now.AddDays(-1 * [DateTime]::Now.DayOfWeek.value__)
Sorry, the environment variable %date% in batch (.bat) files is awkward to do date calculations with.
@jeb Thanks. I have removed the Linux and OSX part because I initially misread batch as bash and only later realized that batch-file refers to DOS .bat. In order to have at least an answer for Microsoft's platform, I've put in PowerShell as a replacement.
Thanks a lot ken, it worked as i expected. I used powershell [DateTime]::Now.AddDays(-1 * [DateTime]::Now.DayOfWeek.value__) ... Now how do i store output into variable.
By "store output into variable" do you mean a BATCH file variable? That's tricky. The simplest thing is to write your whole script in Powershell. However if that's not an option you could have the PS script return a return code that's the date of last Sunday. So if last Sunday is on 15th then return code = 15. But if last Sunday is in previous month you'd have to do something special. If return codes could be negative you could return -1 for last day of previous month. But return codes must be positive, so instead add, say, 40, so 39 is last day of previous month. 38 is next to last day, etc.
Thanks guys i got it.
Great solution; you don't need the .value__ part, because the [System.DayOfWeek] enumeration value returned by [DateTime]::Now.DayOfWeek is implicitly converted to its integer equivalent in this case.
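For readers who just need the date arithmetic, the PowerShell expression above does one thing: subtract the day-of-week index (Sunday = 0) from today. The same logic as a Python cross-check (not a batch solution):

```python
from datetime import date, timedelta

def last_sunday(today):
    # date.weekday(): Monday=0 ... Sunday=6, so (weekday + 1) % 7 is
    # the number of days since the most recent Sunday (0 on a Sunday).
    return today - timedelta(days=(today.weekday() + 1) % 7)

print(last_sunday(date(2024, 1, 10)))  # a Wednesday -> 2024-01-07
```

Like the PowerShell version, this returns today itself when today is a Sunday.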
Using Kakutani's Theorem to prove that reflexivity is invariant under homeomorphism.
I was reviewing an exercise which is
Exercise: Let $E$ and $F$ be normed spaces and suppose that $T:E \to F$ is a homeomorphism. Show that if $E$ is reflexive, then $F$ is reflexive.
I already did the exercise using the definition of reflexivity (evaluation map $J:E \to E^{\ast\ast}$ is surjective), but I was willing to rewrite using Kakutani's Theorem:
Kakutani's Theorem: A normed space is reflexive iff its closed unit ball is weakly compact.
I couldn't write it down, I was wondering if you could help me. I tried something along the lines:
$T$ is continuous, so it is weakly continuous (the same for the inverse).
$T(\overline{B_E})$ is weakly compact.
Try to insert $\overline{B_F}$, which is weakly closed, inside an $n\overline{B_E}$ for some $n$ and finish.
but as you can see, I couldn't close the argument.
EDIT: The question is answered in the comments below the accepted one. The full exercise has many claims and this was one of them. However, for this particular question the linearity is needed as @treedetective points out. Assuming this, @triple_sec's solution is correct. We concluded that there was an ambiguity in the question. Thank you for your time.
Since $T^{-1}$ is continuous, it is also bounded: there exists some $n\in\mathbb N$ such that $\|T^{-1}(y)\|\leq n\|y\|$ for any $y\in F$.
Pick any $y\in\overline{B_F}$ and let $x\equiv T^{-1}(y)\in E$. Then, $$\|x\|=\|T^{-1}(y)\|\leq n\|y\|\leq n,$$ so that $x\in n\overline{B_E}$. Therefore, $y=T(x)\in T(n\overline{B_E})=nT(\overline{B_E})$, the last equality stemming from linearity.
It follows that $\overline{B_F}\subseteq nT(\overline{B_E})$.
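To finish the argument (a sketch of the step the answer leaves implicit): since $T$ is linear and norm-continuous, it is weak-to-weak continuous, so $T(\overline{B_E})$ is weakly compact by Kakutani's Theorem applied to $E$, and hence so is $nT(\overline{B_E})$. The ball $\overline{B_F}$ is convex and norm-closed, therefore weakly closed, and a weakly closed subset of a weakly compact set is weakly compact. By Kakutani's Theorem again, $F$ is reflexive. Schematically:

```latex
\overline{B_E}\ \text{weakly compact}
\;\Rightarrow\; nT(\overline{B_E})\ \text{weakly compact}
\;\Rightarrow\; \overline{B_F}\subseteq nT(\overline{B_E})\ \text{weakly compact}
\;\Rightarrow\; F\ \text{reflexive.}
```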
I feel the word "isomorphism" was wrong, the exercise only gives "topological isomorphism", hence my edit for the word "homeomorphism". I don't think we could suppose T linear here, but thanks for the input.
$T$ needs to be linear, otherwise the exercise is not true. By Kadets' Theorem, all separable Banach spaces are topologically equivalent. This means, for example, that $c_0$ and $\ell_2$ are topologically equivalent, but only $l_2$ is reflexive. You can find Kadets' article here: http://testuvannya.com.ua/M.I.Kadets/PDF/K-1967-1.pdf
@treedetective By “topologically equivalent,” do you mean homeomorphic (via a possibly non-linear transformation)?
Yes, exactly. To be more precise, all infinite dimensional separable Banach spaces are homeomorphic to $\mathbb{R}^\mathbb{N}$. Torunczyk generalized the "separability" assumption and proved that any two infinite dimensional Banach spaces with the same density character are homeomorphic. Another interesting result of the same philosophy, is that every infinite dimensional Banach space is homeomorphic to its closed unit ball and to its closed unit sphere.
@treedetective So by "topological isomorphism" the exercise requires linearity? I am probably confused because of the term.
@J.Sparrow I agree with you that I, too, would interpret “topological isomorphism” as a homeomorphism without any reference to linearity. However, the exercise might assume that linearity goes without saying given the functional-analytic context.
Given the functional analytic context, the term "isomorphism" would be the best. One may even say "linear isomorphism" to avoid ambiguity, although the word "linear" is redundant and most authors omit it. But by saying "topological isomorphism" the author clearly means "homeomorphism". There is no other reason to use the word "topological" in order to describe an isomorphism, other than to distinguish it from the other type of isomorphisms that we know (the linear ones). So the formulation of the exercise is not correct. It should say plain "isomorphism" instead.
php dechex with dark colors only
I am using PHP's dechex function to generate random colors as per requirements. Here is my working code.
dechex(rand(0x000000, 0xFFFFFF));
However, I want to use dark colors only. So far I have found code that generates only light colors, thanks to this and this article.
However, I have yet to find a proper solution for generating only dark colors. I have tried several things, like the below.
'#' . substr(str_shuffle('AABBCCDDEEFF00112233445566778899AABBCCDDEEFF00112233445566778899AABBCCDDEEFF00112233445566778899'), 0, 6);
And
'#' . substr(str_shuffle('ABCDEF0123456789'), 0, 6);
But these, sometimes generating light colors randomly.
Edit:
I would like to have a solution with hex and rgb.
How Can I achieve this ?
How do you define "dark colors"?
Possible duplicate of How to generate lighter/darker color with PHP?
By using the term dark, I mean a color in which I can set my text color like white or light.
So put dark range into your code sprintf('#%06X', mt_rand(0x000000, 0x555555));
Here is how to get a dark color for both hex and RGB:
$hexMin = 0;
$hexMax = 9;
$rgbMin = 0;
$rgbMax = 153; // Hex 99 = 153 Decimal
$hex = '#' . mt_rand($hexMin,$hexMax) . mt_rand($hexMin, $hexMax) . mt_rand($hexMin, $hexMax) . mt_rand($hexMin,$hexMax) . mt_rand($hexMin, $hexMax) . mt_rand($hexMin, $hexMax);
$rgb = 'rgb(' . mt_rand($rgbMin,$rgbMax). ',' . mt_rand($rgbMin,$rgbMax). ',' . mt_rand($rgbMin,$rgbMax). ')';
Put your HEX to contain only dark colors by limiting max value:
$max = 9;
'#' . mt_rand(0, $max) . mt_rand(0, $max) . mt_rand(0, $max); // 3-digit shorthand, each channel limited to 0-9
The main thing you seem to want is to ensure that each pair of hex digits is below a certain level once you've generated a random number. As rand() will generate any value up to the limit, my approach is to keep your original limit of 0xffffff but once the number has been generated, apply a bitwise and (&) to clear the high bits for each byte...
echo '#'.dechex(rand(0x000000, 0xFFFFFF) & 0x3f3f3f);
You can tweak the 0x3f3f3f to a limit which you want to set to limit the maximum value.
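For comparison, the same bit-masking idea in Python (the helper name is mine):

```python
import random

def random_dark_hex(max_channel=0x3F):
    # Mask off the high bits of each byte, mirroring the PHP
    # `rand(...) & 0x3f3f3f` trick above: multiplying by 0x010101
    # replicates the per-channel limit into all three bytes.
    value = random.randint(0, 0xFFFFFF) & (max_channel * 0x010101)
    return "#{:06x}".format(value)

color = random_dark_hex()
```

Raising or lowering max_channel moves the brightness ceiling, exactly like tweaking 0x3f3f3f.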
generate a random color :
function darker_color($rgb, $darker=2) {
$hash = (strpos($rgb, '#') !== false) ? '#' : '';
$rgb = (strlen($rgb) == 7) ? str_replace('#', '', $rgb) : ((strlen($rgb) == 6) ? $rgb : false);
if(strlen($rgb) != 6) return $hash.'000000';
$darker = ($darker > 1) ? $darker : 1;
list($R16,$G16,$B16) = str_split($rgb,2);
$R = sprintf("%02X", floor(hexdec($R16)/$darker));
$G = sprintf("%02X", floor(hexdec($G16)/$darker));
$B = sprintf("%02X", floor(hexdec($B16)/$darker));
return $hash.$R.$G.$B;
}
$color = '#'.dechex(rand(0x000000, 0xFFFFFF));
$dark = darker_color($color);
echo "$color => $dark";
Even if a randomly generated color is already dark, the function picks an even darker one; normally it tends toward black.
What is the status of Stack Overflow Cities?
There used to be a section in Stack Overflow Careers called "Cities." But it seems to have disappeared with the merge into the regular stackoverflow.com website. I enjoyed browsing it to discover other tech hubs (some of which I had no idea were tech hubs) and enjoyed the images and descriptions. I also liked the job salary averages it listed - I don't know how they were computed, but it seemed more realistic than other sites I browsed. Anyway, it was a nice area to get a "flavor" of each hub.
Will the cities portal come back?
The same as GeoCities.
Right now, no, cities is not making a return. Why? Well, mostly because each city required a fair amount of work from our marketing team and then a small amount of code from the dev team. Essentially the process didn't scale.
We do have plans to introduce something that replaces city pages, but focused more around job search. This is still in the early stages of planning so any ideas on what you'd like to see are welcome!
Textarea on change with delay
I would like to scan QR codes with a scanner. I already have a scanner, and when I put the focus in a hidden textarea, it places the text in the textarea when I scan a QR code.
I would like to do something with the data output into that specific textarea. Right now I have the following in my jQuery:
$("#qr_data").bind("input", function (e) {
console.log(e);
});
The problem is that loading the QR data into my textarea takes 2 seconds or so, during which the function is called some 20 times. I only want the final data so I can do something with it. How could I do this?
Could you use the change event and then add a delay (timeout) to it? See a previous answer to this.. http://stackoverflow.com/questions/9424752/jquery-change-with-delay
// we will build a new event
// once for initing a new event "lastchange"
$("textarea").each(function () {
var timer = null;
$(this).change(function(){
var node = this;
if(timer != null)
clearTimeout(timer);
timer = setTimeout(function(){
$(node).trigger('lastchange');
}, 1000);
});
});
// use custom event
$("textarea#qr_data").bind("lastchange", function (e) {
console.log($(this).val(), e);
});
Hi, what I would do is bind to the change event and postpone the function by 500 ms every time it fires, as if you were building an autocomplete input field and didn't want to fire an ajax request on every keystroke.
var qr_timeout = null;
$("#qr_data").change(function(){
if(qr_timeout != null)
clearTimeout(qr_timeout);
qr_timeout = setTimeout(function(){
//Do your magic here
}, 500);
});
This way you don't have to worry if the loading will take 1,2 or even 10 seconds
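Both answers implement the classic debounce pattern: restart a timer on every event and act only once the events stop. The same pattern sketched in Python with threading.Timer, for reference (function names are mine):

```python
import threading
import time

def debounce(wait_seconds, fn):
    """Delay fn until wait_seconds pass without another call."""
    state = {"timer": None}
    lock = threading.Lock()

    def wrapped(*args, **kwargs):
        with lock:
            if state["timer"] is not None:
                state["timer"].cancel()  # restart the countdown
            state["timer"] = threading.Timer(wait_seconds, fn, args, kwargs)
            state["timer"].start()

    return wrapped

results = []
on_scan = debounce(0.05, results.append)
for chunk in ("par", "part", "partial", "full-code"):
    on_scan(chunk)        # rapid-fire events; only the last survives

time.sleep(0.2)           # let the final timer fire
assert results == ["full-code"]
```

Each new event cancels the pending timer, so the callback runs exactly once, with the last value.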
System.Threading.Timer callback is never called
My System.Threading.Timer (which has a callback) never fires reliably. This is part of my programming assignment where I input the amount of time the timer is supposed to run from a textbox.
The timer is declared like this:
System.Threading.Timer timer = new System.Threading.Timer(WorkerObject.callback, null, delay, Timeout.Infinite);
And the delay is the simply an int describing the delay for the callback to fire the first time (it is only supposed to fire once).
The callback method is like this:
public static void callback(Object stateinfo)
{
stop = true;
}
And all that does is set a flag to true which stops a loop (which is being run by a thread on a ThreadPool, in effect, stopping the thread).
The loop looks like this:
while (!stop)
{
currentTextbox.Invoke(new Action(delegate()
{
currentTextbox.AppendText((counter++) + Environment.NewLine);
currentTextbox.Update();
}));
}
My problem is that the stop variable is always false for any delay over 5000 milliseconds. Is there a way to "force" the callback to always fire?
This question might not apply (I forget the API offhand)... but do you start the timer?
The program is just printing out a counter which is continually incremented until the timer says stop.
System.Threading.Timer doesn't need to be started; it starts automatically after a delay (0 for no delay, but in my case, i specified a delay)
@Jaxidian: System.Threading.Timer starts when you create it.
You need to hold on to the reference to the timer.
Most likely the timer object is being garbage collected, which will run its finalizer, stopping the timer.
So hold on to the reference for as long as you need the timer to be alive.
+1. This is an insidious problem. If you run the code in the debugger (in either Debug or Release mode), the timer won't be collected. But if you run it without the debugger attached, the timer will get collected. Made me crazy the first time I ran into that one.
@JimMischel: I totally agree. My memorable moment was passing a delegate to an unmanaged API. With the debugger attached everything worked fine, but without the debugger the GC got a lot more aggressive. I suppose we all have our stories :)
Yup, this was exactly it! @JimMischel , I had the same problem, in the debugger, the timer would work correctly, but otherwise the timer would get garbage collected.
I would suggest using a CancellationTokenSource:
static CancellationTokenSource Cancel = new CancellationTokenSource();
public static void Callback(object state)
{
Cancel.Cancel();
}
and your loop:
while (!Cancel.IsCancellationRequested)
{
...
}
This is much cleaner than using volatile, and is easier to port when you move your simple proof of concept to separate classes. See my blog, Polling for Cancellation, for more info.
The runtime JIT compiler is probably optimizing away your while(!stop) condition to while(true).
Mark the stop variable as volatile.
private volatile bool stop = false;
Probably would work, but I wouldn't suggest volatile when there are other options.
un-focus parent
When I am using focus parent ($mod+a) I find I am unable to return the focus to the child that had it before. Instead focus remains on the parent. To remove focus from the parent I have to manually click inside a child container or switch to another workspace and back again. What can I use instead?
In order to focus the child, you can use the command focus child. It seems that it is not bound by default, but you can just add your own binding. For example
bindsym $mod+z focus child
Yes, in the meantime I was also using workspace back_and_forth (bound to $mod+b). So two $mod+b also do the trick.
ModuleNotFoundError: No module named 'mysql' even when mysql.connector installed
I am having problems getting a program to import the mysql.connector
Note: Running on a Windows 10 machine.
I have done the following
1) Installed Anaconda (installed properly)
I checked that the anaconda paths are in my pathname variable
Here is the PATH var:
C:\Users\john.fox\AppData\Local\Continuum\anaconda3;C:\Users\john.fox\AppData\Local\Continuum\anaconda3\Library\mingw-w64\bin;C:\Users\john.fox\AppData\Local\Continuum\anaconda3\Library\usr\bin;C:\Users\john.fox\AppData\Local\Continuum\anaconda3\Library\bin;C:\Users\john.fox\AppData\Local\Continuum\anaconda3\Scripts;C:\Users\john.fox\AppData\Local\Continuum\anaconda3\envs\Python34;
2) Change the python version to python3.4 (was python3.6)
This was needed to get mysql-connector to work properly
The Python34 path is in the pathname variable
3) Installed the mysql-connector using anaconda (installed correctly)
4) Did a "conda search mysql-connector"
It came back with all the mysql.connectors (the one for py34_0 is present)
C:\Users\john.fox>conda search mysql.connector
Fetching package metadata .............
mysql-connector-python 2.0.3 py26_0 defaults
2.0.3 py27_0 defaults
2.0.3 py33_0 defaults
2.0.3 py34_0 defaults
2.0.3 py35_0 defaults
2.0.4 py27_0 defaults
2.0.4 py34_0 defaults
2.0.4 py35_0 defaults
2.0.4 py36_0 defaults
5) Attempted to run a script with import mysql.connector
Received:
Traceback (most recent call last):
File "jira_export5_Beta.py", line 3, in <module>
import mysql.connector
ModuleNotFoundError: No module named 'mysql'
6) Went to Python 3.6.2 for default
7) Attempted to run again, same results.
If anyone has words of wisdom (for this problem) I would appreciate hearing them.
Thanks JWF
Hello Everyone:
It turns out that changing the order of the items in the Path variable cleared the problem up. I found that by putting the Python34 path at the front, then closing and opening a new cmd window and running again, things worked correctly. JWF
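When PATH ordering is the suspect, it also helps to ask the interpreter itself which executable actually ran and whether it can see the module. A generic diagnostic sketch (not specific to this machine):

```python
import sys

def interpreter_report():
    """Report which Python is running and whether mysql.connector resolves."""
    try:
        import mysql.connector  # provided by mysql-connector-python
        location = mysql.connector.__file__
    except ImportError:
        location = None
    return {
        "executable": sys.executable,    # the python.exe that actually ran
        "version": sys.version_info[:3],
        "mysql_connector": location,     # None means: not installed for THIS interpreter
    }

report = interpreter_report()
print(report)
```

If "executable" points at a different installation than the one you installed the connector into, the PATH order is the culprit.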
Is there any easier and shorter way to do this code? [PHP, Loops, Arrays]
I have a code to modify the data that comes up from external API. However, I didn't like my code. I believe that there is a shorter way to do.
let me explain the flow:
I make a request to an API endpoint to get currency short codes; $results contains these:
[0] => EURUSD
[1] => USDTRY
etc...
I want to save these as EUR, USD, TRY. I used str_split to do this. Also, I used array_unique to remove the same values.
Right now, my array contains this.
[0] => EUR
[3] => USD
[5] => TRY
It's not enough for me. I need to change keys according to my database structure.
My table contains: id, name, created
I have to rename each key as name. (BTW, I use Phinx for migrations and seeding.)
$results = json_decode($httpResponse->getBody());
$data = [];
$prepared = [];
foreach ($results as $key => $item) {
$data = array_merge($data, str_split($item, 3));
}
$data = array_unique($data);
foreach ($data as $key => $item) {
array_push($prepared, ['name' => $item]);
}
$currency = $this->table('currency');
$currency->truncate();
$currency->insert($prepared)->save();
Do you have any clue for the best way?
Can you share your expected output with the input array?
What do you mean, I need to change keys according to my database structure? And does you code work or not? By the way, it seems that a unique index on the name is really all you need.
@SahilGulati the code and output are correct actually. I want smarter way to do. Expected output: Array ( [0] => Array ( [name] => EUR ) )
@jeroen this code is working correctly. I am asking for the best way to handle the loops, etc., if there is a better way, of course.
If the code is working, http://codereview.stackexchange.com would be the appropriate place to ask this.
@deceze ach so. I didn't know before. thank you.
You could skip the first lot of merges and just str_split(implode("",$results),3) to generate a list of all of the codes.
In your code you do a lot of unnecessary operations. Considering that each currency code is always 3 characters, you can simply use substr to obtain the code and use it as the array key to make your array "unique" (if the same currency is added more than once, it will override the previous entry without affecting the final result).
$results = json_decode($httpResponse->getBody());
$prepared = [];
foreach ($results as $item) {
$itemName = substr($item,0,3);
$prepared[$itemName] = ['name' => $itemName];
}
$currency = $this->table('currency');
$currency->truncate();
$currency->insert($prepared)->save();
@BilalBaraz This answer will be helpful for you..try this..
You are missing the second half of the strings and 2. You are not removing the duplicate values. Using the currency code as a key would solve the second problem.
@SahilGulati: you're right, I'm sorry, I'm pretty new to writing answers...
@jeroen: edited, adding the second problem and an explanation as suggested by Sahil Gulati ;)
You're still missing the second half of the string but the OP can figure that out :-)
It is a bit different. substr alone is the wrong choice here; str_split is good. And the code needs to remove the duplicate values, not only rename their keys.
@BilalBaraz Using the name as the key you will overwrite and effectively remove the duplicate values. And you can use substr() for the second half as well or stick with str_split(), both will do the job with the input you have shown.
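For reference, here is the full pipeline the comments converge on (both halves of each pair, de-duplicated by keying on the code), sketched in Python:

```python
def currency_rows(pairs):
    """Split 6-char pair codes into 3-char codes, de-duplicated in order."""
    rows = {}
    for pair in pairs:
        for code in (pair[:3], pair[3:6]):
            # keying by code overwrites duplicates, like $prepared[$itemName]
            rows[code] = {"name": code}
    return list(rows.values())

print(currency_rows(["EURUSD", "USDTRY"]))
# [{'name': 'EUR'}, {'name': 'USD'}, {'name': 'TRY'}]
```

Using the code as the key gives both halves of each pair and removes duplicates in one pass, with no array_merge or array_unique needed.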
Jasperreports Indent after the first line
I am producing reports with jasper 6.6.0 in PDF and docx.
The problem I am facing is that I cannot get paragraph indentation after the first line. The input text is a paragraph separated by new lines from an XML file:
<Literatur>Bjelland, EK; Wilkosz, P; Tanbo, TG; Eskild, A (2014). Is unilateral oophorectomy associated with age at menopause? A population study (the HUNT2 Survey). Human Reproduction 29(4): 835-41. DOI: 10.1093/humrep/deu026
Leitlinienprogramm Onkologie (AWMF, Deutsche Krebsgesellschaft, Deutsche Krebshilfe) (2012): S3-Leitlinie Diagnostik, Therapie und Nachsorge des Mammakarzinoms, Langversion 3.0, 2012, AWMF-Registernummer: 032/045OL, http://leitlinienprogramm-onkologie.de/Mammakarzinom.67.0.html (Recherchedatum: 15.10.2015). Stand: Juli 2012, gültig bis 30.06.2017.
</Literatur>
I add and put each line between and then the following style:
<style name="indent" style="Standard">
<box leftPadding="5"/>
<paragraph firstLineIndent="-5"/>
</style>
Using in text field:
<textField isStretchWithOverflow="true" isBlankWhenNull="true">
<reportElement style="indent" positionType="Float" x="0" y="159" width="465" height="28" uuid="f7184f30-82ee-44b4-8dcb-9d3d70ac3bc8"/>
<textFieldExpression><![CDATA[$F{Literatur}]]></textFieldExpression>
</textField>
as has been posted in Indentation in generated PDF using JasperReports, but this made the Literatur part vanish and not show at all.
How can I produce PDF and docx exports that look like my Java export?
DOCX looks like:
PDF looks like:
Java export which is correct:
The negative indent is buggy. Can you create a datasource like this: https://stackoverflow.com/questions/47225881/how-to-add-indentation-on-bullet-list/47227580#47227580 (see the last section)?
I see. I am trying your link, but jasper can not resolve the _THIS field?
well I have no clue why, I would need to see the code, where does it not resolve it?. Anyway your solution is to stop using negative indent
Selenium Headers in Python
How can I change my headers in Selenium like I do in requests? I want to change my cookies and headers.
from myCookie import cookie
from selenium import webdriver
from selenium.webdriver.common.by import By
import time
headers = {
"Host": "mbasic.facebook.com",
"User-Agent": "Mozilla/5.0 (Linux; U; Android 4.4.2; en-us; SCH-I535 Build/KOT49H) AppleWebKit/534.30 (KHTML, like Gecko) Version/4.0 Mobile Safari/534.30",
"Accept": "image/avif,image/webp,*/*",
'Accept-Language': 'en-US,en;q=0.5',
'Accept-Encoding': '',
'Alt-Used': 'https://mbasic.facebook.com/',
'Connection': "keep-alive",
'Cookie': cookie,
}
driver = webdriver.Chrome()
driver.get("https://www.fb.com")
# how can I update driver with my cookies and this headers
Did you do ANY research on this yourself? A web search on "python selenium cookies" returns lots of hits...
driver.add_cookie({"name": cookie_name, "value": cookie_value})
@Mokhtaromar This answer gives you the exact code for setting a cookie. I don't understand what more are you are asking for.
I have a Netscape-format cookie that I got from the network tab in my local browser; I want to add it in Selenium.
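Two caveats worth stating: plain Selenium cannot set arbitrary request headers without a proxy or the Chrome DevTools Protocol, and driver.add_cookie() only works after you have navigated to a page on the cookie's domain. If the Netscape-format cookie comes from a cookies.txt line, here is a sketch of converting it into the dict shape add_cookie expects (the helper name is mine):

```python
def netscape_line_to_cookie(line):
    """Parse one cookies.txt line: domain, subdomain-flag, path, secure,
    expiry, name, value (tab-separated, per the Netscape format)."""
    domain, _flag, path, secure, expiry, name, value = line.rstrip("\n").split("\t")
    return {
        "name": name,
        "value": value,
        "domain": domain,
        "path": path,
        "secure": secure.upper() == "TRUE",
        "expiry": int(expiry),
    }

cookie = netscape_line_to_cookie(
    ".facebook.com\tTRUE\t/\tTRUE\t1700000000\tc_user\t12345"
)
# then, after driver.get("https://mbasic.facebook.com"):
#     driver.add_cookie(cookie)
```

After adding the cookies, call driver.get() again so the next request actually sends them.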
PermissionError: [Errno 13] Permission denied when saving HoloMap to GIF
I am trying to create an animated gif from a series of heat maps with HoloViews.
I need to do this in a Python script, i. e. specifically not in a Jupyter notebook.
When saving the image, Python throws an error because it cannot create a temporary file in the temp-folder of the current user (this is under Windows). Happens regardless of the user, even when I run Python as admin.
When I stop in the debugger and change the temp-file path to some other place, e. g. Desktop, that works, but the resulting holo.gif in the working directory is empty (0 bytes). The temporary gif, though, is correctly animated, so I guess the code is basically OK.
[Edit: Not so sure anymore. I ran this overnight on 26.531 heat maps, each of which consisted of a 5x5 grid. The process did not finish (i. e. did not hit the breakpoint at Image.py line 1966). Is there a way to do what I want that is less painfully slow?]
Answers to similar problems on StackOverflow did point to permission problems (but what kind of problem could that be if it doesn't even work for an admin?) and suggest saving to another location, which is impossible here as I have no control over where matplotlib will try to create temporary files.
The problem is specifically with gif's, I can create *.png or *.html output without error. (AFAIK, the difference is that gif-creation uses ImageMagick.)
Here's the code (construction of underlying heat map data left out):
import holoviews as hv
hv.extension('matplotlib')
renderer = hv.renderer('matplotlib')
renderer.fps = 3
heatMapDict = {
k: hv.HeatMap(measurements[k].sensors) for k in range(len(measurements))
}
holo = hv.HoloMap(heatMapDict, kdims='index')
renderer.save(holo, 'holo', fmt='gif')
And the traceback:
INFO:matplotlib.animation:Animation.save using <class 'matplotlib.animation.PillowWriter'>
Traceback (most recent call last):
File "cm3.py", line 69, in <module>
renderer.save(holo, 'holo', fmt='gif')
File "C:\Users\y2046\AppData\Local\Programs\Python\Python37\lib\site-packages\holoviews\plotting\renderer.py", line 554, in save
rendered = self_or_cls(plot, fmt)
File "C:\Users\y2046\AppData\Local\Programs\Python\Python37\lib\site-packages\holoviews\plotting\mpl\renderer.py", line 108, in __call__
data = self._figure_data(plot, fmt, **({'dpi':self.dpi} if self.dpi else {}))
File "C:\Users\y2046\AppData\Local\Programs\Python\Python37\lib\site-packages\holoviews\plotting\mpl\renderer.py", line 196, in _figure_data
data = self._anim_data(anim, fmt)
File "C:\Users\y2046\AppData\Local\Programs\Python\Python37\lib\site-packages\holoviews\plotting\mpl\renderer.py", line 246, in _anim_data
anim.save(f.name, writer=writer, **anim_kwargs)
File "C:\Users\y2046\AppData\Local\Programs\Python\Python37\lib\site-packages\matplotlib\animation.py", line 1174, in save
writer.grab_frame(**savefig_kwargs)
File "C:\Users\y2046\AppData\Local\Programs\Python\Python37\lib\contextlib.py", line 119, in __exit__
next(self.gen)
File "C:\Users\y2046\AppData\Local\Programs\Python\Python37\lib\site-packages\matplotlib\animation.py", line 232, in saving
self.finish()
File "C:\Users\y2046\AppData\Local\Programs\Python\Python37\lib\site-packages\matplotlib\animation.py", line 583, in finish
duration=int(1000 / self.fps))
File "C:\Users\y2046\AppData\Local\Programs\Python\Python37\lib\site-packages\PIL\Image.py", line 1966, in save
fp = builtins.open(filename, "w+b")
PermissionError: [Errno 13] Permission denied: 'C:\\Users\\y2046\\AppData\\Local\\Temp\\tmp4im5ozo8.gif'
Addendum:
I'm coming to think that this is not a permission problem after all. Perhaps it has to do with reentrancy and file-locking under Windows? The Python process in fact may create files in the temp directory, as proved by inserting the following test code before calling renderer.save():
import os
import builtins
filename = 'C:\\Users\\y2046\\AppData\\Local\\Temp\\test.txt'
fp = builtins.open(filename, "w+b")
try:
fp.write("first".encode('utf-8'))
finally:
fp.close()
os.remove(filename)
I should test this under Linux. If it works there, there must be a bug in the Pillow writer.
Well what are the rights you have for this file?
@brainfuck4d as I said, happens even when I run this as admin with unrestricted rights. And when running under a user with restricted rights, the user can of course create arbitrary files in his own temp-directory. At least manually in Windows explorer. No idea why the Python process running under that same user appears to be unable to do the same.
Does this only occur with 26.531 or would 5 heatmaps be enough? Where is the limit? Is some windows energy saving mode or similar enabled?
@ImportanceOfBeingErnest In fact 1 single heatmap is enough, which somehow counts against my hypothesis of file locking. Unless Image.py really goes ahead and tries to acquire a second lock on the file after creating it in order to manipulate the contents.
The same code works under CentOS 7. (Except I had to add matplotlib.use('agg') to avoid use of tkinter which I did not find a simple way to install into Python 3.7.1) So now my current guess is that matplotlib gif animation has a Windows-specific bug.
Can you make this reproducible? I.e using a code one can copy and paste? Is this depending on the use of holoviews?
@ImportanceOfBeingErnest here's an SSCE: https://gist.github.com/smillies/2575831ddfa5731b8d04d08d0f1bdad1
I don't think it's holoviews, I think it's the interaction of my Windows system with matplotlib. But I do not have an example using only matplotlib.
That was more meant to trigger the removal of holoviews from the mwe.
Not being in any way familiar with this stuff, I fear that gist is as minimal as I can get.
The problem is that when I create a minimal example that only uses matplotlib (see here) it works as expected. I wouldn't rule out the holoviews save function being responsible. But it would help if you could run the example I posted and see if it works for you.
Oh, I mean I do get the same error with the holoviews example you posted, but indeed the matplotlib-only example works fine for me.
Sorry, I just deleted my previous comment (not quite in time, unfortunately), because I made a silly mistake in testing and ran the wrong example. But I can in fact confirm that your MWE also works for me. So the problem with my example must be to do with holoviews after all, do you think?
It looks like it, although I currently cannot see where it comes from, because essentially holoviews does the same call to FuncAnimation.save("fname", writer="pillow", fps=5) at the end.
I played around a bit in the source code of holoviews and it also fails to save via imagemagick or imagemagick_file. So I'd say there is something broken with holoviews. (It might turn out that something should be changed inside of matplotlib, but only the holoview devs can tell us, what that would be.) So opening an issue with them might be a good idea.
It looks like there is something broken with HoloViews. I have opened issue #3151 with them.
Looks like a platform-specific difference in behavior for a Python system call. HoloViews master now has a workaround that will be in the latest release.
| common-pile/stackexchange_filtered |
Intercept display change on hover?
I have some elements that, upon hover, toggle the display (none/block) of a child ul. Unfortunately, I can't transition a fade via CSS with display. Using opacity won't work either since the hover area expands onto the opacity: 0 ul, and a combo of both (transition opacity, but still toggle display) doesn't seem to work.
Is there a way to intercept the display change via Javascript, and have it fade between block and none? Are there alternate suggestions (I tried a height: 0/auto toggle too, didn't work right)? I'd prefer an intercept method than a pure JS method, in case JS is disabled or something.
Can you post your html please?
You can use the example below. Anything like: Hi. is enough.
If I understand you correctly. Assuming something like: <div class="nav-container"><ul></ul></div>.
You can listen for hover on the parent, since it contains the child:
var parent = document.getElementsByClassName('nav-container')[0];
var connect = function (element, event, callback) {/* I think you */};
var disconnect = function (handle) { /* can find */};
var addClass = function (element, className) { /* these elsewhere. */};
var removeClass = function (element, className) { /* ... */};
var hoverHandle = connect(parent, 'hover', function (event) {
addClass(parent, 'hovered');
if (blurHandle) {
disconnect(blurHandle);
}
});
var blurHandle = connect(parent, 'blur', function (event) {
removeClass(parent, 'hovered');
if (hoverHandle) {
disconnect(hoverHandle);
}
});
Then in the CSS:
.nav-container > ul {
display: none;
/* do fade out */
}
.nav-container.hovered > ul {
display: block;
/* do fade in */
}
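Since `display` itself can't be transitioned, a common alternative is to keep the `ul` in the layout and transition `opacity` together with `visibility` (a sketch, not part of the original answer; class and function names are illustrative). `visibility: hidden` also stops the hidden list from capturing mouse events, which avoids the problem from the question where the hover area extended onto the invisible `ul`:

```javascript
// CSS counterpart (both properties are transitionable, unlike display):
//   .nav-container > ul         { opacity: 0; visibility: hidden;
//                                 transition: opacity .3s, visibility .3s; }
//   .nav-container.hovered > ul { opacity: 1; visibility: visible; }

// Toggle a class on real mouse events; CSS does the fading.
function attachHoverFade(parent) {
  parent.addEventListener("mouseenter", function () {
    parent.classList.add("hovered");
  });
  parent.addEventListener("mouseleave", function () {
    parent.classList.remove("hovered");
  });
}

// Minimal stub standing in for a DOM element so the toggle logic can be
// exercised outside a browser (illustration only).
function makeStubElement() {
  var classes = new Set();
  var handlers = {};
  return {
    classList: {
      add: function (c) { classes.add(c); },
      remove: function (c) { classes.delete(c); },
      contains: function (c) { return classes.has(c); }
    },
    addEventListener: function (type, fn) { handlers[type] = fn; },
    fire: function (type) { handlers[type](); }
  };
}

var el = makeStubElement();
attachHoverFade(el);
el.fire("mouseenter");
var hoveredAfterEnter = el.classList.contains("hovered");
el.fire("mouseleave");
var hoveredAfterLeave = el.classList.contains("hovered");
```

Unlike a generic `hover` handler, `mouseenter`/`mouseleave` do not fire when the pointer moves between the parent and its descendants, so the class isn't toggled spuriously while moving around inside the menu.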
If you're using jQuery 1.7, then this'll become:
var navContainers = $('.nav-container');
navContainers.on('hover', function () {
$(this).toggleClass('hovered');
});
navContainers.on('blur', function () {
$(this).toggleClass('hovered');
});
Edit: Had a couple of typos.
Thanks! I am using jquery, is there something in there that would help simplify your code here?
Quite a bit :) I'll add a revision.
search mysql database for value
I would like to search the database row by row for an IP address and for a poll number. They need to be on the same row (yes they are in their own columns).
If the IP address and poll are on the same page I would like to show some code. If not I would like to show a different set of code.
How do I go about searching the database row by row for matching IP addresses and poll numbers? Thanks.
P.S - yes there is an ID column in the database.
Post your table structure
use sql AND operator in sql WHERE clause
SELECT something FROM sometable WHERE ip=$ip and poll_number=$poll_number. But that's about all we can do since you've provided no useful specifics
You need to post more information
Here is the easiest way to achieve your task:
<?php
$ip = '<IP_ADDRESS>';
$pollNum = 3546;
$query = mysqli_query($connection, "SELECT id FROM poll_table WHERE pollID = '$pollNum' AND ip = '$ip'");
if(mysqli_num_rows($query) > 0) {
# Found -> Execute a set of code
} else {
# Not Found -> Execute a different set of code .
}
?>
You forgot a quote in $pollNum = '3546; Notice SO's syntax highlighting?
Usually with a SELECT-statement, i.e. if you search for IP-Address <IP_ADDRESS> and poll number 127:
SELECT
ipaddress,
poll_number,
other_fields
FROM
your_table
WHERE
ipaddress = '<IP_ADDRESS>' -- i.e your search value for the ip-address
AND
poll_number = '127' -- and your poll number
First of all as it is now 2014, please use PDO to perform queries:
try {
$dbh = new PDO('mysql:host=localhost;dbname=datebasename', 'username', 'password');
}
catch(PDOException $e) {
echo "ERROR: ".$e->getMessage();
}
$stmt = $dbh->prepare("SELECT * FROM table WHERE ip=:ip AND poll_number=:poll_number");
$stmt->bindParam(':ip', $ip, PDO::PARAM_STR);
$stmt->bindParam(':poll_number', $poll_number, PDO::PARAM_INT);
$stmt->execute();
$rows = $stmt->fetchAll();
if($rows){
foreach($rows as $row)
print_r($row);
}
else
echo "No results";
browserHistory undefined with React Router 2.00 release candidates
With React Router 2.0.0rc1-5 I have been getting browserHistory as undefined after import:
import { browserHistory } from 'react-router'
The package seems to be installed correctly, but regardless of the version and whether on server or client, I have gotten the same result.
Maybe this a known bug?
With newer versions of react-router, you get the history creator from the history package. Take a look at the docs.
From the master branch, I believe they are actually packaging browserHistory into react-router now
WIth the 2.0 rcs, I get undefined on the server... and in the browser I get connect.js?243b:60 Uncaught TypeError: finalMapStateToProps is not a function (which seems like a Redux issue). I think it is related to the rendering of React Router though, as I receive the same error on the server unless I remove the router from my renderToString
Gotcha. It actually should return undefined on the server, if I'm reading the code correctly. You wouldn't use the history on the server, opting instead for match and RoutingContext. And the browser error sounds like a Redux configuration problem, perhaps in your code - feel free to add it above and we can take a look.
It seems as though it was in fact a Redux syntax issue. Once I wasn't using browserHistory on the server, as u said, I avoided the undefined issue. My redux error was related to my use of connect() in a container. Thanks for the help!
@hoodsy can you elaborate what the solution in your container was?
See useRouterHistory:
https://github.com/rackt/react-router/blob/master/upgrade-guides/v2.0.0.md#using-custom-histories
I'm using this in server-side:
import {Router, RouterContext, match, useRouterHistory} from 'react-router';
import {createMemoryHistory} from 'history';
// ...
const appHistory = useRouterHistory(createMemoryHistory)({});
const component = (
<Provider store={store} key="provider">
<Router routes={routes} history={appHistory} />
</Provider>
);
You can install react-router version 3.0, like this: npm install --save react-router@3.0. Then you should also update your webpack configuration file. And you are good to go.
Install react-router version 3.0
npm install --save react-router@3.0
or
yarn add react-router@3.0
Then, both of the following methods work:
Method 1
import { Router, useRouterHistory } from 'react-router';
import {createMemoryHistory} from 'history';
import routes from './routes';
const appHistory = useRouterHistory(createMemoryHistory)({});
ReactDOM.render(
<Router history={appHistory} routes={routes}/>,
document.getElementById('root')
);
Method 2
import { Router, browserHistory } from 'react-router';
import routes from './routes';
ReactDOM.render(
// or hashHistory
<Router history={browserHistory} routes={routes}/>,
document.getElementById('root')
);
Get JScrollPane to move based on current item?
I currently have a JScrollpane which has a list of buttons that run off the bottom of the container and cause the vertical scroll bar to appear (which is intended). At pre-defined intervals on a timer these buttons will become "selected". If one of the buttons selected is off the screen I would like to JScrollPane to automatically move itself so the selected button is fully visible on the screen.
I did a bit of looking on here and apparently the scrollRectToVisible method is what I am supposed to be using but it doesn't do anything for me.
I'm doing the following:
My ScrollPane contains a JPanel which is set to BoxLayout in the YAxis. I add a bunch of buttons to this panel. Once all the buttons are added a Timer starts to increment through these buttons and highlight them. After each increment in the timer I am calling the following line:
elevatorScroller.scrollRectToVisible(elevatorLevels[position].getBounds());
My intention here is to have the JScrollPane move to the currently highlighted button and continue to do so, however I'm not getting any movement.
elevatorScroller is my JScrollPane and the elevatorLevels array houses my buttons.
Any idea what might be causing this not to move? Do I have to call repaint or anything after advising the JScrollPane to move?
For better help sooner, post a MCTaRE (Minimal Complete Tested and Readable Example). Incorporate a call to scrollRectToVisible(Rectangle) in your example.
I was reading the documentation for the scrollRectToVisible method incorrectly which states:
Forwards the scrollRectToVisible() message to the JComponent's parent
The keyword here is parent. By calling the scrollRectToVisible() method directly on my JScrollPane nothing was happening, because the parent of that component was a JFrame. By instead calling scrollRectToVisible() on my JPanel, the request was forwarded up to the enclosing JScrollPane's viewport, and it now works correctly.
"I currently have a JScrollpane which has a list of buttons that run off the bottom of the container and cause the vertical scroll bar to appear (which is intended). At pre-defined intervals on a timer these buttons will become "selected". If one of the buttons selected is off the screen I would like to JScrollPane to automatically move itself so the selected button is fully visible on the screen."
You could also use the JViewport of the JScrollPane and call its setViewPosition(Point). Something like this:
public void setView(JScrollPane scroll, Component comp) {
JViewport view = scroll.getViewport();
Point p = comp.getLocation();
view.setViewPosition(p);
}
Here's a running example
import java.awt.Color;
import java.awt.Component;
import java.awt.Dimension;
import java.awt.Point;
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;
import javax.swing.BoxLayout;
import javax.swing.JButton;
import javax.swing.JFrame;
import javax.swing.JPanel;
import javax.swing.JScrollPane;
import javax.swing.JViewport;
import javax.swing.SwingUtilities;
import javax.swing.Timer;
public class ChangeViewPort {
Component activeComponent;
Component[] buttons;
int componentIndex = 0;
public ChangeViewPort() {
JPanel panel = new JPanel();
BoxLayout boxLayout = new BoxLayout(panel, BoxLayout.PAGE_AXIS);
panel.setLayout(boxLayout);
for (int i = 0; i < 40; i++) {
panel.add(new JButton("Button " + i));
}
buttons = panel.getComponents();
activeComponent = buttons[componentIndex];
final JScrollPane scroll = new JScrollPane(panel);
Timer timer = new Timer(500, new ActionListener(){
public void actionPerformed(ActionEvent e) {
((JButton)activeComponent).setForeground(Color.BLACK);
if (componentIndex >= buttons.length - 1) {
componentIndex = 0;
} else {
componentIndex ++;
}
activeComponent = buttons[componentIndex];
((JButton)activeComponent).setForeground(Color.red);
setView(scroll, activeComponent);
System.out.println(((JButton)activeComponent).getActionCommand());
}
});
timer.start();
scroll.setPreferredSize(new Dimension(200, 300));
JFrame frame = new JFrame();
frame.add(scroll);
frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
frame.pack();
frame.setLocationRelativeTo(null);
frame.setVisible(true);
}
public void setView(JScrollPane scroll, Component comp) {
JViewport view = scroll.getViewport();
Point p = comp.getLocation();
view.setViewPosition(p);
}
public static void main(String[] args) {
SwingUtilities.invokeLater(new Runnable(){
public void run() {
new ChangeViewPort();
}
});
}
}
Check out Scrolling a Form. It was designed to handle scrolling as a user tabs from component to component in a form. But the logic just looks for focus changes so it should also work when you have a Timer that changes focus to a different component.
Bug? JsNumber toFixed returns different values in SuperDev and JS
I'm using GWT 2.8.2.
When I run the following code in SuperDev mode, it logs 123.456, which is what I expect.
double d = 123.456789;
JsNumber num = Js.cast(d);
console.log(num.toFixed(3));
When I compile to JavaScript and run, it logs 123 (i.e. it does not show the decimal places).
I have tried running the code on Android Chrome, Windows Chrome and Windows Firefox. They all exhibit the same behavior.
Any idea why there is a difference and is there anything I can do about it?
Update: After a bit more digging, I've found that it's to do with the coercion of the integer parameter.
console.log(num.toFixed(3)); // 123 (wrong)
console.log(num.toFixed(3d)); // 123.456 (correct)
It seems that the JsNumber class in Elemental2 has defined the signature as:
public native String toFixed(Object digits);
I think it should be:
public native String toFixed(int digits);
I'm still not sure why it works during SuperDev mode and not when compiled though.
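The compiled behavior can be reproduced in plain JavaScript (a sketch, not from the question; variable names are illustrative): per the spec, toFixed runs its argument through ToInteger, and an object with no useful numeric conversion becomes NaN, for which ToInteger yields 0, hence zero decimal places:

```javascript
// Number.prototype.toFixed coerces its argument with ToInteger.
var d = 123.456789;

var withNumber = d.toFixed(3);   // a real JS number: three digits
var withString = d.toFixed("3"); // "3" coerces to the number 3
// An object coerces via toString() to "[object Object]", i.e. NaN,
// and ToInteger(NaN) is 0, so we get zero decimal places, just like
// the boxed Integer that the GWT compiler emits as a non-number object.
var withObject = d.toFixed({});
```

This is consistent with the observation above that `toFixed(3d)` works while `toFixed(3)` does not: the double stays a native JS number, while the boxed int becomes an object.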
Nice catch! This appears to be a bug in the jsinterop-generator configuration used when generating Elemental2's sources. Since JS doesn't have a way to say that a number is either an integer or a floating point value, the source material that jsinterop-generator works with can't accurately describe what that argument needs to be.
Usually, the fix is to add this to the integer-entities.txt (https://github.com/google/elemental2/blob/master/java/elemental2/core/integer_entities.txt), so that the generator knows that this parameter can only be an integer. However, when I made this change, the generator didn't act on the new line, and logged this fact. It turns out that it only makes this change when the parameter is a number of some kind, which Object clearly isn't.
The proper fix also then is probably to fix the externs that are used to describe what "JsNumber.toFixed" is supposed to take as an argument. The spec says that this can actually take some non-number value and after converting to a number, doesn't even need to be an integer (see https://www.ecma-international.org/ecma-262/5.1/#sec-<IP_ADDRESS> and https://www.ecma-international.org/ecma-262/5.1/#sec-9.3).
So, instead we need to be sure to pass whatever literal value that the Java developer provides to the function, so that it is parsed correctly within JS - this means that the argument needs to be annotated with @DoNotAutobox. Or, we could clarify this to say that it can either be Object or Number for the argument, and the toFixed(Object) will still be emitted, but now there will be a numeric version too.
Alternatively, you can work around this as you have done, or by providing a string value of the number of digits you want:
console.log(num.toFixed("3"));
Filed as https://github.com/google/elemental2/issues/129
Thanks for digging into this and creating the issue. I appreciate it.
The problem is that Java automatically wraps the int as an Integer, and GWT ends up transpiling the boxed Integer as a special object in JS (not a number). But if you use a double, the boxed Double is transpiled as a native number by GWT, and the problem disappears.
I'm not absolutely sure why this works in super-devmode, but it should not. I think the difference is that SDM maps the native toString to the Java toString, and (even weirder) the native toFixed calls the toString of the argument. In SDM the boxed-integer#toString returns the string representation of the number, which ends up coercing back to int, but in production the boxed-integer#toString returns "[object Object]", which is handled as NaN.
There is a special annotation @DoNotAutobox to be able to use primitive integers in JS native APIs. This prevents integer auto-wrapping, so the int is transpiled to a native number (example usage in the Js#coerceToInt method). Elemental2 might add this annotation or change the type to int as you suggest. Please create an issue in the elemental2 repo to fix this (https://github.com/google/elemental2/issues/new).
Thanks for the explanation about boxed-integer#toString. I thought using Java would save me from being caught out by JavaScript's implicit type coercion. Ah well!
Unfortunately, gwt isn't quite magic - in order to still permit you to box an int to an Integer, and store it as an Object (such as in a collection, etc), it has to be tagged with type details, which means it is no longer just a number, but a wrapper object. Just as in regular Java, try to avoid boxed primitives where possible, but sometimes it isn't possible. Hopefully with that filed issue, we'll lose this extra layer of wrapping.
And what about the behavior change between SDM and production, did you know it? is it true that the #toString is only overridden in SDM?
How can I solve burst damage problem when rolling group initiative?
Problem
The PHB states, at p. 189
The DM makes one roll for an entire group of identical creatures, so each member of the group acts at the same time.
Honestly, I usually ignored this one, except for really large groups of weak monsters. But in a 4 goblins vs 4 PCs fight, I would usually roll individual initiatives. Lately, I've decided to follow the book. Combats became quicker, which is good, but another problem appeared: Burst damage.
While the NPCs might be dumb, they are not animal-level of dumb. Some of them are actually smarter than the PCs. Focus firing in order to neutralize one threat each time makes sense for me - two PCs at 1 HP are more annoying than one unconscious and one full life PC, and most of the NPCs know that.
The fact that all the NPCs are actually attacking together the same character leads to the following series of events:
One PC, usually the melee/tank one, which is closer to the NPCs, get focus fired.
Being low level, he has low Max HP and drops to 0 HP.
This is actually happening roughly every other encounter. This is a problem for me and my players.
The question is: Group roll is the "default"/mandatory rule described in the PHB. For many encounters described in published adventures, it actually becomes a Side Initiative, as every creature in the encounter is the same creature, thus each side acts at once. How can I solve this problem while still using this rule? What are the implications of not using it at all? Are there cases where it is simply better to not use it, period? - Currently, I feel like "low level party" is a strong candidate for cases where I shouldn't be using it.
Party
The party consists of 6 PCs at level 2. Classes are: Ranger, Cleric, Paladin, Barbarian, Warlock and Druid. Not all of them are present every session. We're running Lost Mine of Phandelver. Almost every group encounter (Goblin ambush, goblin hideout, redbrand ruffians) led to some character getting to 0 HP.
Note: I understand that the problem can be solved other ways, e.g. "don't make the NPCs focus fire, screw what makes sense", but I would rather not playing my NPCs in an irrational way.
@PurpleMonkey There are encounters where the burst damage should not be a problem, mainly higher level encounters where PCs have more HP. To be fair, even if the player wasn't frustrated by getting to 0 HP, the burst damage from 4 NPCs attacking at the same time a 1st/2nd level character and instantly dropping him unconscious is still a problem for me. One that I could solve by playing the NPCs irrationally and randomly targetting, but which feels like a bad solution.
@PurpleMonkey Any more clear/less opinion-based? (at least in the body - still thinking about a better title)
Right now this is an opinion question to me, but might be able to be solved if you revised it to address your specific issue. Can you provide exact details for your party and the enemies they're facing, perhaps it will make it possible to provide an answer tailor made for your situation as opposed to a blanket answer that will not apply to anyone.
@Pyrotechnical Some drastic changes were made in title and body. I'm not sure how much more I can add about the specific campaign without enabling answers that won't be helpful two sessions from now because then they would be at level 3 and the encounters are different, though.
How do you get "mandatory" out of that rule? That assumption seems to be a core issue here. Can you explain why you have characterized it as mandatory?
@KorvinStarmast I would use only default. It's mandatory in the sense it strictly says "The DM does it. Period." - although I don't think any rule is indeed strictly "mandatory" in any RPG. In particular, I'm more than willing to go back and roll individual initiatives. I'm interested in ways to solve it while using the rule, though. Premise: the rules aren't that badly balanced, I guess (?)
It's mandatory in the sense it strictly says-- No It Is Not Mandatory. Read the DMG. The rules serve you. You are master of rules. Please see also my answer to the frustrated cleric. Do not say, as DM, "but the rules make me do that." Not in this edition unless you want to keep having these same problems. (It's also bad form ...)
@KorvinStarmast My point here is more like: "the rules are like this, I'm assuming the rules aren't awfully balanced and there should be a way to follow them without a PC dropping down every round" - if that's wrong (and the rule itself does actually lead to this problem frequently, period) I would actually appreciate an answer saying so.
See my answer regarding your frustrated cleric.
Thinking like a monster
There's no question: focus fire is a good idea. No matter how you look at it, the mathematics are in your favor if you try to injure one enemy until it is down, and then move on to another. Once you realize this, it may seem like you either have to use this tactic, or be completely unrealistic to what an enemy would do. But there are a few rationales for why a group of enemies would fail to use focus fire. And not all these are "screw what's rational: I want my PCs to survive." It can sometimes be more realistic to have your monsters avoid focus fire. Here are a few possible reasons they might.
1. They are used to weaker enemies
Many "evil" creatures prey on creatures far weaker than themselves preferentially. A squad of goblins might be used to attacking only the sick, the weak, or the unarmed and unarmored: merchant families, or innocent homesteaders. Most importantly, they may be used to enemies that go down in one hit. In a situation like that, it's actually disadvantageous for the goblins to all aim at the same enemy: their first salvo might end up hitting an enemy with five arrows when one would have killed him, and leaving other targets unscathed. By spreading out their attacks, they ensure that they drop the maximum possible number of enemies in the first round, ensuring no one escapes and leads to a messy and lengthy hunt. Perhaps these monsters default to such tactics, only realizing part way into the combat that these opponents do not fall so easily.
2. Enemies act at the same time, but don't think as one
Trained adventurers can act in groups as a seamless unit: adapting their tactics to those of their allies almost instantly. But lesser enemies (again, let's default to goblins as an example) fight without discipline. Perhaps all the goblins do think that they should focus fire: concentrating on a single enemy. But each goblin might have a different opinion on which enemy that should be. Three of them might shout conflicting orders at the same time ("Shoot the human with the bow!" "Shoot the shiny dwarf!" "Shoot the elf who set me on fire!"), and different goblins might follow different orders. If anything, the fact that these enemies are all acting on the same initiative might give more justification for them being confused as to what the other goblins are doing: they act almost simultaneously, and don't have time to notice their companions' actions, or to coordinate their efforts.
3. Monsters are selfish and dumb
Naturally, this one depends on the particular enemies you're fighting. Some enemies might be inclined to focus fire in spite of low Intelligence scores due to their natural instincts to focus on one weakened enemy at a time (like wolves). But generally speaking, some monsters aren't that smart. Even some very intelligent enemies (like vampires) might not be used to fighting in a group, and might simply fight using their own priorities, rather than trying to use teamwork. Some monsters fight based more using rage and bloodlust than tactics and intelligence: even if those monsters are intelligent enough to plan and think tactically, they may abandon these plans in the heat of battle, and attack only the creatures that hurt them, or the ones that look smallest, or the ones that look like they have the most valuable equipment. The three monsters who were hit by a fireball might all aim at the wizard, but the four who weren't might focus on the Paladin to loot his shiny armor. Bottom line, monsters' motivations aren't always to make the smartest move: sometimes it's to make a move that satisfies some baser instinct.
4. Monsters don't know what Hit Points are
This is a big one (and is kind of a variation on point #1). "Rationally", every arrow that a monster fires is an arrow they expect to find their enemy's heart, and drop them dead, regardless of whether it is the first arrow fired at an enemy or the 50th. Monsters don't know they are in a game, and that it is statistically impossible for a goblin's first arrow to drop a level 5 raging Barbarian. The monsters may spread their attacks around because they expect that each attack will kill an enemy. They don't know that "1 hit point" is a status an enemy can reach: they just know they hurt their opponents until they go down.
Likewise, we (as DMs) know what the best tactics are against this particular party, but the monsters may not. For example, enemies might not think to keep themselves spread out enough so that only two of them are in a Fireball radius at a time: why would they if they've never seen these enemies (or maybe any others) use Fireball? Similarly, it may not be particularly obvious to the monsters which creatures have lower max HP and which don't (note that HP often tracks stamina and drive as much as physical damage or resilience). Thus, they may make decisions that you, as a DM who knows everyone's stats and abilities, know is a bad idea: but it's not unrealistic or irrational for them to behave this way, because they don't have access to your information.
But sometimes... focus fire is the way to go
Note that all of these reasons (other than #4) are conditional. There may be monsters who are smart and/or great at using teamwork: they may target the weakest enemy like a pack of wolves picking off the weakest member of a herd. Focus fire can often be exactly the sort of tactic your monsters should be using, in which case, go for it! This can indicate that these enemies are particularly well trained, disciplined, or simply dangerous.
When the right tactic is empirically obvious to us as DMs, it can be hard to have our intelligent creatures behave in a tactically unsound way, while we strive to make their thinking realistic. But it's worth remembering that realistic thinking can be exactly the sort of thing that would lead to bad tactics. After all, people (and monsters) make bad decisions all the time.
Yes, effectively, monsters know what hit points are. Realistically, most humanoid creatures are going to be familiar with at least hunting or domestic conflict, and will know that a target isn't neutralized the second you injure them, unless you get lucky (say, massive nervous system damage). If a goblin knows it can't necessarily kill another goblin in one shot, then it certainly knows a big scary armored human won't die right away either.
Yeah, fourth point is a little off for me. The NPCs know how damaged the party members are. Not an exact number, obviously, but they should be able to note a difference between 1 HP and full life. Other than that, the other reasons fit most of the time, so I upvoted it anyway.
NPCs definitely know when an enemy is in trouble ("bloodied" at half hit points). But keep in mind, the way hit points are described (PHB, p. 195-196) implies that many attacks don't actually physically damage a character when they "hit", but sap their stamina, will, or even use up their luck. "An attack that reduces you to 0 hit points strikes you directly, leaving a bleeding injury or other trauma, or it simply knocks you unconscious." Goblins definitely know that some enemies take more arrows to kill than others. But they may not know that a barbarian will take 5-14 arrows to drop.
I do agree: monsters would have a decent sense of their own capacities, and would have a good sense of how combat works in their world. I guess what I'm getting at is that monsters don't know the combat capacity of their opponents. Even if you imagine the monsters as knowing what hit points are, they certainly don't know how many hit points the characters have (to start with). And so, they might reasonably expect a single attack to take each character down.
This is a very thorough and well thought out answer. I'm keeping a link to this.
The last note here that probably bears mentioning is that later down the track when the PCs grow into their roles, focus fire won't necessarily be the best option - a higher level paladin or cleric (paladin especially) will likely want the enemy trying to focus them while leaving their allies free to blast with impunity
As a very late addendum, one could add the "help" action. Some monsters, especially smaller creatures, like to use their large numbers to swarm their targets. While this is laughable to higher level adventurers, low level PCs are always a few lucky rolls away from a TPK when facing a horde of goblins or kobolds. A few DMs I know have such "swarms" use the help action. They get increased chances of hitting you (2 kobolds = one attack with advantage), but it limits the max damage you can take (only one attack can hit per 2 kobolds).
@Dungarth I'm not sure I understand the value of this tactic, unless the swarm includes some stronger creatures. Wouldn't the kobolds be better off both attacking (2 kobolds = 2 attack rolls, which do damage once if either hits or twice if both hit) rather than helping and attacking (2 kobolds = two attack rolls, but only does damage once if either of them or both hit)?
@Gandalfmeansme It's a way to reduce burst damage while still giving the players a feel for greater numbers. While the kobolds could potentially deal more damage by attacking separately, OP's problem was burst damage from focus firing. Saying "4 kobolds attack you, 2 hit" feels exactly the same to the players whether or not the kobolds are attacking individually or "focus firing" by helping one another. You get to describe kobolds ganging up on the wizard just like kobolds would, but the odds of getting an "accidental" kill as all 4 attacks randomly hitting and killing him are gone.
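To put rough numbers on that trade-off, here is a back-of-the-envelope sketch in Python. It assumes a flat d20 with no critical-hit rules, and the attack bonus and AC are illustrative values I made up, not numbers from this thread:

```python
def hit_prob(ac, bonus):
    """Chance that d20 + bonus meets or beats the target AC (no crit rules)."""
    needed = ac - bonus  # natural roll required
    return max(0.0, min(1.0, (21 - needed) / 20))

p = hit_prob(ac=13, bonus=4)            # needs a natural 9+, so p = 0.6

# Two kobolds attacking separately: up to two hits can land.
separate_expected_hits = 2 * p          # 1.2 expected hits
# One kobold helps the other: a single attack made with advantage.
helped_hit_chance = 1 - (1 - p) ** 2    # 0.84 chance of exactly one hit
```

Helping raises the chance that *something* connects (0.84 vs 0.6 here) but caps the outcome at one hit, which is exactly the burst-damage reduction described above.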
This is a fantastic answer and I agree with all of it, but I think you are burying the lede a little bit. Point #4 is the biggest one. It becomes rational to spread out your fire, even irrational to concentrate it too much, if you expect every attack has a serious chance to kill or at least injure your enemy enough that they can't fight anymore. The archers at Agincourt did not all fire at one knight until he was down before firing at the next one....inside the fiction spreading firing is probably the more rational and realistic choice.
@KorvinStarmast Just wanted to say "thank you." That comment really made my day (two years ago).
@Gandalfmeansme And your answer has stood the test of time.
Don't use group initiative
Well, it's an obvious option, and you already made the decision to follow the book, however, I can't stress enough the guiding principle from the DMG, see page 235:
The rules serve you, not vice versa.
It is your job as a DM to decide if group initiative is good for your game. Dungeon Master's Guide page 270 describes the Side Initiative variant rule. It is a "stronger" variation of group initiative, in which one whole side acts first in combat.
DMG describes upsides and downsides of this approach:
This variant encourages teamwork and makes your life as a DM easier, since you can more easily coordinate monsters. On the downside, the side that wins initiative can gang up on enemies and take them out before they have a chance to act.
The DM decides which approach is better. We don't know your players or your adventure; it is up to you to decide from your own experience. Side initiative is meant to be a DM's tool, making their work easier.
If you find that it actually makes things worse for you, don't use it. You chose to change to this method. You don't like the result. Change again.
Fair enough. If only one type of creature exists, side initiative is a particular case of the group roll. It's a little awkward that it is presented as the default in the PHB, though (and many encounters from published adventures are essentially a group of the same creature).
I think you can rephrase your question to something like "Group initiative is not an optional rule. Is it mandatory? Do I have to use it, and what happens if I don't?" I personally don't use group initiative, but my combat encounters usually don't have many combatants.
I have removed things about player frustration and tried to clarify the problem and question. I might have invalidated your answer in the process, though, please recheck.
Use smaller groups
I normally group monsters into 2 or 3. Perhaps all the goblin archers go in one group, the goblin stabbers in another group and their wolves in a third group.
Use average damage
At low levels, the swinginess of dice can be really bad. At levels 1 and 2, sometimes 3, I use the average damage for monsters (for example, 5 damage instead of 1d6+1).
Remind the players that they can focus-fire as well
In D&D, you should be dogpiling on foes. A monster with 1 HP can hit you just as hard as when it had full HP. Players should be focused on downing enemies, rather than just injuring them.
Remind the players not to neglect defence
Spells like shield of faith and bless and fog cloud are there for a reason.
Having a two-handed weapon instead of a shield is a choice that has a consequence.
Remind the players that this doesn't last
Once their characters get into tier 2 they will have the hit points and defences that they don't have to worry about a dogpile taking them out in 1 round.
While I like this answer, D&D with side initiative is a lot like historical naval combat, in that "battle of the first salvo" effects are pronounced and snowballing is a common side effect, particularly at lower levels. (The "players can focus fire as well" paragraph is what this comment relates to.)
Use Group Initiative As A Guideline
I use group initiative almost exclusively because it saves a lot of rolling, but burst damage can be an issue. You don't need to strictly follow it, however. You may choose to roll once for all your monsters, and then assign them in different groups to act together, as though they had all rolled different initiatives. This "chunking" of turns reduces the burst effect. The rule I stick to is that no creature should act before the rolled initiative value.
For example, you have 6 bandits and roll 20 for initiative. Based on how much you want the PCs to feel "surprised", you might have the bandits act at different initiative counts. I might choose to have 4 of them act on initiative count 20 and 2 more act on count 10, or 2 on count 20, 2 on 15, and 2 on 10. Use group initiative to simplify initiative rolls, and modify it to get the narrative effect you desire.
Essentially, you use the rolled initiative as the "max", and all the others are lower? That's interesting, but how do you handle the case of bad rolls? I.e., if the group rolls 1, everyone has to be at initiative 1, so they are all attacking at the same time anyway.
Oh, almost forgot - Welcome to RPG.se. When you have time, please take our tour. I've already commented on what I think would improve the answer above. Other than that, great start with experience-based answer :)
In the circumstances where really bad rolls come into play, then you should rely on your narrative intuition as the GM. If everyone is acting at the same initiative, let your creatures act in a way that accentuates the "coolness" of the encounter. I will typically let my melee characters charge the tankiest character to emphasize the protector feeling, or try and rush the mage to add urgency. Then allow a few players to act before the next group, maybe the archers open fire on someone who is out of position, etc.... Also, thanks for the welcome!
How to detect a timeout when using asynchronous Socket.BeginReceive?
Writing an asynchronous Ping using Raw Sockets in F#, to enable parallel requests using as few threads as possible. Not using "System.Net.NetworkInformation.Ping", because it appears to allocate one thread per request. Am also interested in using F# async workflows.
The synchronous version below correctly times out when the target host does not exist/respond, but the asynchronous version hangs. Both work when the host does respond. Not sure if this is a .NET issue, or an F# one...
Any ideas?
(note: the process must run as Admin to allow Raw Socket access)
This throws a timeout:
let result = Ping.Ping ( IPAddress.Parse( "<IP_ADDRESS>" ), 1000 )
However, this hangs:
let result = Ping.AsyncPing ( IPAddress.Parse( "<IP_ADDRESS>" ), 1000 )
|> Async.RunSynchronously
Here's the code...
module Ping
open System
open System.Net
open System.Net.Sockets
open System.Threading
//---- ICMP Packet Classes
type IcmpMessage (t : byte) =
let mutable m_type = t
let mutable m_code = 0uy
let mutable m_checksum = 0us
member this.Type
with get() = m_type
member this.Code
with get() = m_code
member this.Checksum = m_checksum
abstract Bytes : byte array
default this.Bytes
with get() =
[|
m_type
m_code
byte(m_checksum)
byte(m_checksum >>> 8)
|]
member this.GetChecksum() =
let mutable sum = 0ul
let bytes = this.Bytes
let mutable i = 0
// Sum up uint16s
while i < bytes.Length - 1 do
sum <- sum + uint32(BitConverter.ToUInt16( bytes, i ))
i <- i + 2
// Add in last byte, if an odd size buffer
if i <> bytes.Length then
sum <- sum + uint32(bytes.[i])
// Shuffle the bits
sum <- (sum >>> 16) + (sum &&& 0xFFFFul)
sum <- sum + (sum >>> 16)
sum <- ~~~sum
uint16(sum)
member this.UpdateChecksum() =
m_checksum <- this.GetChecksum()
type InformationMessage (t : byte) =
inherit IcmpMessage(t)
let mutable m_identifier = 0us
let mutable m_sequenceNumber = 0us
member this.Identifier = m_identifier
member this.SequenceNumber = m_sequenceNumber
override this.Bytes
with get() =
Array.append (base.Bytes)
[|
byte(m_identifier)
byte(m_identifier >>> 8)
byte(m_sequenceNumber)
byte(m_sequenceNumber >>> 8)
|]
type EchoMessage() =
inherit InformationMessage( 8uy )
let mutable m_data = Array.create 32 32uy
do base.UpdateChecksum()
member this.Data
with get() = m_data
and set(d) = m_data <- d
this.UpdateChecksum()
override this.Bytes
with get() =
Array.append (base.Bytes)
(this.Data)
//---- Synchronous Ping
let Ping (host : IPAddress, timeout : int ) =
let mutable ep = new IPEndPoint( host, 0 )
let socket = new Socket( AddressFamily.InterNetwork, SocketType.Raw, ProtocolType.Icmp )
socket.SetSocketOption( SocketOptionLevel.Socket, SocketOptionName.SendTimeout, timeout )
socket.SetSocketOption( SocketOptionLevel.Socket, SocketOptionName.ReceiveTimeout, timeout )
let packet = EchoMessage()
let mutable buffer = packet.Bytes
try
if socket.SendTo( buffer, ep ) <= 0 then
raise (SocketException())
buffer <- Array.create (buffer.Length + 20) 0uy
let mutable epr = ep :> EndPoint
if socket.ReceiveFrom( buffer, &epr ) <= 0 then
raise (SocketException())
finally
socket.Close()
buffer
//---- Extensions to the F# Async class to allow up to 5 parameters (not just 3)
type Async with
static member FromBeginEnd(arg1,arg2,arg3,arg4,beginAction,endAction,?cancelAction): Async<'T> =
Async.FromBeginEnd((fun (iar,state) -> beginAction(arg1,arg2,arg3,arg4,iar,state)), endAction, ?cancelAction=cancelAction)
static member FromBeginEnd(arg1,arg2,arg3,arg4,arg5,beginAction,endAction,?cancelAction): Async<'T> =
Async.FromBeginEnd((fun (iar,state) -> beginAction(arg1,arg2,arg3,arg4,arg5,iar,state)), endAction, ?cancelAction=cancelAction)
//---- Extensions to the Socket class to provide async SendTo and ReceiveFrom
type System.Net.Sockets.Socket with
member this.AsyncSendTo( buffer, offset, size, socketFlags, remoteEP ) =
Async.FromBeginEnd( buffer, offset, size, socketFlags, remoteEP,
this.BeginSendTo,
this.EndSendTo )
member this.AsyncReceiveFrom( buffer, offset, size, socketFlags, remoteEP ) =
Async.FromBeginEnd( buffer, offset, size, socketFlags, remoteEP,
this.BeginReceiveFrom,
(fun asyncResult -> this.EndReceiveFrom(asyncResult, remoteEP) ) )
//---- Asynchronous Ping
let AsyncPing (host : IPAddress, timeout : int ) =
async {
let ep = IPEndPoint( host, 0 )
use socket = new Socket( AddressFamily.InterNetwork, SocketType.Raw, ProtocolType.Icmp )
socket.SetSocketOption( SocketOptionLevel.Socket, SocketOptionName.SendTimeout, timeout )
socket.SetSocketOption( SocketOptionLevel.Socket, SocketOptionName.ReceiveTimeout, timeout )
let packet = EchoMessage()
let outbuffer = packet.Bytes
try
let! result = socket.AsyncSendTo( outbuffer, 0, outbuffer.Length, SocketFlags.None, ep )
if result <= 0 then
raise (SocketException())
let epr = ref (ep :> EndPoint)
let inbuffer = Array.create (outbuffer.Length + 256) 0uy
let! result = socket.AsyncReceiveFrom( inbuffer, 0, inbuffer.Length, SocketFlags.None, epr )
if result <= 0 then
raise (SocketException())
return inbuffer
finally
socket.Close()
}
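As an aside, the GetChecksum logic above is the standard one's-complement Internet checksum (RFC 1071). A minimal Python sketch of the same algorithm, summing the bytes as little-endian 16-bit words the way BitConverter.ToUInt16 does on x86:

```python
def internet_checksum(data: bytes) -> int:
    total = 0
    # sum consecutive little-endian 16-bit words
    for i in range(0, len(data) - 1, 2):
        total += data[i] | (data[i + 1] << 8)
    if len(data) % 2 == 1:          # add the odd trailing byte, if any
        total += data[-1]
    # fold the carries back in, then take the one's complement
    total = (total >> 16) + (total & 0xFFFF)
    total += total >> 16
    return ~total & 0xFFFF
```

A handy property for testing: re-summing a packet that already contains its (little-endian) checksum folds to zero.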
Is there any particular reason you want to re-invent System.Net.NetworkInformation.Ping.SendAsync()? It already supports timeouts.
SendAsync/SendToAsync is not the same as the AsyncSendTo above... the former do not integrate with F# asynchronous workflows (which greatly simplify writing async code).
Erm, the point is that you don't have to write it.
Sorry, didn't read the whole response... two reasons to reinvent. First, the docs imply that Ping.SendAsync uses one thread per instance which limits its scalability, whereas Begin/End do not. Second, I want to understand how to manipulate raw sockets for use with other non TCP/UDP protocols.
@nobugz - Tested this out, and System.Net.NetworkInformation.Ping.SendAsync doesn't scale. Attempting to sweep a class-B quickly consumes 100's of threads and will eventually run out of memory. Using Begin/End (via F# async workflows) takes only a few threads and a few 100's of Mb.
crossposted to http://cs.hubfs.net/forums/thread/13621.aspx
After some thought, came up with the following. This code adds an AsyncReceiveEx member to Socket, which includes a timeout value. It hides the details of the watchdog timer inside the receive method... very tidy and self contained. Now THIS is what I was looking for!
See the complete async ping example, further below.
Not sure if the locks are necessary, but better safe than sorry...
type System.Net.Sockets.Socket with
member this.AsyncSend( buffer, offset, size, socketFlags, err ) =
Async.FromBeginEnd( buffer, offset, size, socketFlags, err,
this.BeginSend,
this.EndSend,
this.Close )
member this.AsyncReceive( buffer, offset, size, socketFlags, err ) =
Async.FromBeginEnd( buffer, offset, size, socketFlags, err,
this.BeginReceive,
this.EndReceive,
this.Close )
member this.AsyncReceiveEx( buffer, offset, size, socketFlags, err, (timeoutMS:int) ) =
async {
let timedOut = ref false
let completed = ref false
let timer = new System.Timers.Timer( double(timeoutMS), AutoReset=false )
timer.Elapsed.Add( fun _ ->
lock timedOut (fun () ->
timedOut := true
if not !completed
then this.Close()
)
)
let complete() =
lock timedOut (fun () ->
timer.Stop()
timer.Dispose()
completed := true
)
return! Async.FromBeginEnd( buffer, offset, size, socketFlags, err,
(fun (b,o,s,sf,e,st,uo) ->
let result = this.BeginReceive(b,o,s,sf,e,st,uo)
timer.Start()
result
),
(fun result ->
complete()
if !timedOut
then err := SocketError.TimedOut; 0
else this.EndReceive( result, err )
),
(fun () ->
complete()
this.Close()
)
)
}
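The shape of this watchdog, racing a timer against the pending receive and converting expiry into a timeout result, is the same pattern modern async runtimes expose directly. A rough Python asyncio analogue (illustrative only, not a port of the F# above):

```python
import asyncio

async def with_timeout(awaitable, timeout_s):
    # asyncio.wait_for cancels the pending operation when the timer
    # fires, much as the watchdog above closes the socket.
    try:
        return await asyncio.wait_for(awaitable, timeout_s)
    except asyncio.TimeoutError:
        return None  # caller treats None like SocketError.TimedOut

# demo: a slow "receive" times out, a fast one completes
late = asyncio.run(with_timeout(asyncio.sleep(5, result="late"), 0.05))
ok = asyncio.run(with_timeout(asyncio.sleep(0, result="ok"), 1.0))
```

The key difference from the hand-rolled version is that cancellation, flag synchronization, and timer disposal are all handled by the runtime.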
Here is a complete Ping example. To avoid running out of source ports and to prevent getting too many replies at once, it scans one class-c subnet at a time.
module Ping
open System
open System.Net
open System.Net.Sockets
open System.Threading
//---- ICMP Packet Classes
type IcmpMessage (t : byte) =
let mutable m_type = t
let mutable m_code = 0uy
let mutable m_checksum = 0us
member this.Type
with get() = m_type
member this.Code
with get() = m_code
member this.Checksum = m_checksum
abstract Bytes : byte array
default this.Bytes
with get() =
[|
m_type
m_code
byte(m_checksum)
byte(m_checksum >>> 8)
|]
member this.GetChecksum() =
let mutable sum = 0ul
let bytes = this.Bytes
let mutable i = 0
// Sum up uint16s
while i < bytes.Length - 1 do
sum <- sum + uint32(BitConverter.ToUInt16( bytes, i ))
i <- i + 2
// Add in last byte, if an odd size buffer
if i <> bytes.Length then
sum <- sum + uint32(bytes.[i])
// Shuffle the bits
sum <- (sum >>> 16) + (sum &&& 0xFFFFul)
sum <- sum + (sum >>> 16)
sum <- ~~~sum
uint16(sum)
member this.UpdateChecksum() =
m_checksum <- this.GetChecksum()
type InformationMessage (t : byte) =
inherit IcmpMessage(t)
let mutable m_identifier = 0us
let mutable m_sequenceNumber = 0us
member this.Identifier = m_identifier
member this.SequenceNumber = m_sequenceNumber
override this.Bytes
with get() =
Array.append (base.Bytes)
[|
byte(m_identifier)
byte(m_identifier >>> 8)
byte(m_sequenceNumber)
byte(m_sequenceNumber >>> 8)
|]
type EchoMessage() =
inherit InformationMessage( 8uy )
let mutable m_data = Array.create 32 32uy
do base.UpdateChecksum()
member this.Data
with get() = m_data
and set(d) = m_data <- d
this.UpdateChecksum()
override this.Bytes
with get() =
Array.append (base.Bytes)
(this.Data)
//---- Extensions to the F# Async class to allow up to 5 parameters (not just 3)
type Async with
static member FromBeginEnd(arg1,arg2,arg3,arg4,beginAction,endAction,?cancelAction): Async<'T> =
Async.FromBeginEnd((fun (iar,state) -> beginAction(arg1,arg2,arg3,arg4,iar,state)), endAction, ?cancelAction=cancelAction)
static member FromBeginEnd(arg1,arg2,arg3,arg4,arg5,beginAction,endAction,?cancelAction): Async<'T> =
Async.FromBeginEnd((fun (iar,state) -> beginAction(arg1,arg2,arg3,arg4,arg5,iar,state)), endAction, ?cancelAction=cancelAction)
//---- Extensions to the Socket class to provide async SendTo and ReceiveFrom
type System.Net.Sockets.Socket with
member this.AsyncSend( buffer, offset, size, socketFlags, err ) =
Async.FromBeginEnd( buffer, offset, size, socketFlags, err,
this.BeginSend,
this.EndSend,
this.Close )
member this.AsyncReceive( buffer, offset, size, socketFlags, err ) =
Async.FromBeginEnd( buffer, offset, size, socketFlags, err,
this.BeginReceive,
this.EndReceive,
this.Close )
member this.AsyncReceiveEx( buffer, offset, size, socketFlags, err, (timeoutMS:int) ) =
async {
let timedOut = ref false
let completed = ref false
let timer = new System.Timers.Timer( double(timeoutMS), AutoReset=false )
timer.Elapsed.Add( fun _ ->
lock timedOut (fun () ->
timedOut := true
if not !completed
then this.Close()
)
)
let complete() =
lock timedOut (fun () ->
timer.Stop()
timer.Dispose()
completed := true
)
return! Async.FromBeginEnd( buffer, offset, size, socketFlags, err,
(fun (b,o,s,sf,e,st,uo) ->
let result = this.BeginReceive(b,o,s,sf,e,st,uo)
timer.Start()
result
),
(fun result ->
complete()
if !timedOut
then err := SocketError.TimedOut; 0
else this.EndReceive( result, err )
),
(fun () ->
complete()
this.Close()
)
)
}
//---- Asynchronous Ping
let AsyncPing (ip : IPAddress, timeout : int ) =
async {
use socket = new Socket( AddressFamily.InterNetwork, SocketType.Raw, ProtocolType.Icmp )
socket.Connect( IPEndPoint( ip, 0 ) )
let pingTime = System.Diagnostics.Stopwatch()
let packet = EchoMessage()
let outbuffer = packet.Bytes
let err = ref (SocketError())
let isAlive = ref false
try
pingTime.Start()
let! result = socket.AsyncSend( outbuffer, 0, outbuffer.Length, SocketFlags.None, err )
pingTime.Stop()
if result <= 0 then
raise (SocketException(int(!err)))
let inbuffer = Array.create (outbuffer.Length + 256) 0uy
pingTime.Start()
let! reply = socket.AsyncReceiveEx( inbuffer, 0, inbuffer.Length, SocketFlags.None, err, timeout )
pingTime.Stop()
if reply <= 0 && not (!err = SocketError.TimedOut) then
raise (SocketException(int(!err)))
isAlive := not (!err = SocketError.TimedOut)
&& inbuffer.[25] = 0uy // Type 0 = echo reply (redundant? necessary?)
&& inbuffer.[26] = 0uy // Code 0 = echo reply (redundant? necessary?)
finally
socket.Close()
return (ip, pingTime.Elapsed, !isAlive )
}
let main() =
let pings net =
seq {
for node in 0..255 do
let ip = IPAddress.Parse( sprintf "192.168.%d.%d" net node )
yield Ping.AsyncPing( ip, 1000 )
}
for net in 0..255 do
pings net
|> Async.Parallel
|> Async.RunSynchronously
|> Seq.filter ( fun (_,_,alive) -> alive )
|> Seq.iter ( fun (ip, time, alive) ->
printfn "%A %dms" ip time.Milliseconds)
main()
System.Console.ReadKey() |> ignore
This is now very close to being exactly what you want. But there's still one problem with this, see my latest answer.
James, your own accepted answer has a problem I wanted to point out. You only allocate one timer, which makes the async object returned by AsyncReceiveEx a stateful one-time-use object. Here's a similar example that I trimmed down:
let b,e,c = Async.AsBeginEnd(Async.Sleep)
type Example() =
member this.Close() = ()
member this.AsyncReceiveEx( sleepTime, (timeoutMS:int) ) =
let timedOut = ref false
let completed = ref false
let timer = new System.Timers.Timer(double(timeoutMS), AutoReset=false)
timer.Elapsed.Add( fun _ ->
lock timedOut (fun () ->
timedOut := true
if not !completed
then this.Close()
)
)
let complete() =
lock timedOut (fun () ->
timer.Stop()
timer.Dispose()
completed := true
)
Async.FromBeginEnd( sleepTime,
(fun st ->
let result = b(st)
timer.Start()
result
),
(fun result ->
complete()
if !timedOut
then printfn "err"; ()
else e(result)
),
(fun () ->
complete()
this.Close()
)
)
let ex = new Example()
let a = ex.AsyncReceiveEx(3000, 1000)
Async.RunSynchronously a
printfn "ok..."
// below throws ODE, because only allocated one Timer
Async.RunSynchronously a
Ideally you want every 'run' of the async returned by AsyncReceiveEx to behave the same, which means each run needs its own timer and set of ref flags. This is easy to fix thusly:
let b,e,c = Async.AsBeginEnd(Async.Sleep)
type Example() =
member this.Close() = ()
member this.AsyncReceiveEx( sleepTime, (timeoutMS:int) ) =
async {
let timedOut = ref false
let completed = ref false
let timer = new System.Timers.Timer(double(timeoutMS), AutoReset=false)
timer.Elapsed.Add( fun _ ->
lock timedOut (fun () ->
timedOut := true
if not !completed
then this.Close()
)
)
let complete() =
lock timedOut (fun () ->
timer.Stop()
timer.Dispose()
completed := true
)
return! Async.FromBeginEnd( sleepTime,
(fun st ->
let result = b(st)
timer.Start()
result
),
(fun result ->
complete()
if !timedOut
then printfn "err"; ()
else e(result)
),
(fun () ->
complete()
this.Close()
)
)
}
let ex = new Example()
let a = ex.AsyncReceiveEx(3000, 1000)
Async.RunSynchronously a
printfn "ok..."
Async.RunSynchronously a
The only change is to put the body of AsyncReceiveEx inside async{...} and have the last line return!.
Very good catch, thank you! Answer updated to reflect this fix.
The docs clearly state that the timeout only applies to the sync versions:
http://msdn.microsoft.com/en-us/library/system.net.sockets.socketoptionname.aspx
That's a shame... complicates things quite a bit :-( Will update if I find a graceful solution...
A couple things...
First, it's possible to adapt the .NET FooAsync/FooCompleted pattern into an F# async. The FSharp.Core library does this for WebClient; I think you can use the same pattern here. Here's the WebClient code
type System.Net.WebClient with
member this.AsyncDownloadString (address:Uri) : Async<string> =
let downloadAsync =
Async.FromContinuations (fun (cont, econt, ccont) ->
let userToken = new obj()
let rec handler =
System.Net.DownloadStringCompletedEventHandler (fun _ args ->
if userToken = args.UserState then
this.DownloadStringCompleted.RemoveHandler(handler)
if args.Cancelled then
ccont (new OperationCanceledException())
elif args.Error <> null then
econt args.Error
else
cont args.Result)
this.DownloadStringCompleted.AddHandler(handler)
this.DownloadStringAsync(address, userToken)
)
async {
use! _holder = Async.OnCancel(fun _ -> this.CancelAsync())
return! downloadAsync
}
and I think you can do the same for SendAsync/SendAsyncCancel/PingCompleted (I have not thought it through carefully).
Second, name your method AsyncPing, not PingAsync. F# async methods are named AsyncFoo, whereas methods with the event pattern are named FooAsync.
I didn't look carefully through your code to try to find where the error may lie.
Renamed PingAsync to AsyncPing. I'll look into the other ideas when time permits to see if it overcomes my timeout problem.
Tried to wrap this in an Async.FromContinuations, but it still suffers from creating hundreds of threads and otherwise does not scale (runs out of memory when pinging a class B in parallel). Code posted in a separate answer, in case someone finds it of use...
Here is a version using Async.FromContinuations.
However, this is NOT an answer to my problem, because it does not scale. The code may be useful to someone, so posting it here.
The reason this is not an answer is because System.Net.NetworkInformation.Ping appears to use one thread per Ping and quite a bit of memory (likely due to thread stack space). Attempting to ping an entire class-B network will run out of memory and use 100's of threads, whereas the code using raw sockets uses only a few threads and under 10Mb.
type System.Net.NetworkInformation.Ping with
member this.AsyncPing (address:IPAddress) : Async<PingReply> =
let pingAsync =
Async.FromContinuations (fun (cont, econt, ccont) ->
let userToken = new obj()
let rec handler =
PingCompletedEventHandler (fun _ args ->
if userToken = args.UserState then
this.PingCompleted.RemoveHandler(handler)
if args.Cancelled then
ccont (new OperationCanceledException())
elif args.Error <> null then
econt args.Error
else
cont args.Reply)
this.PingCompleted.AddHandler(handler)
this.SendAsync(address, 1000, userToken)
)
async {
use! _holder = Async.OnCancel(fun _ -> this.SendAsyncCancel())
return! pingAsync
}
let AsyncPingTest() =
let pings =
seq {
for net in 0..255 do
for node in 0..255 do
let ip = IPAddress.Parse( sprintf "192.168.%d.%d" net node )
let ping = new Ping()
yield ping.AsyncPing( ip )
}
pings
|> Async.Parallel
|> Async.RunSynchronously
|> Seq.iter ( fun result ->
printfn "%A" result )
EDIT: code changed to working version.
James, I've modified your code and it seems to work as well as your version, but it uses a MailboxProcessor as the timeout-handling engine. The code is about 4x slower than your version, but uses about 1.5x less memory.
let AsyncPing (host: IPAddress) timeout =
let guard =
MailboxProcessor<AsyncReplyChannel<Socket*byte array>>.Start(
fun inbox ->
async {
try
let socket = new Socket( AddressFamily.InterNetwork, SocketType.Raw, ProtocolType.Icmp )
try
let ep = IPEndPoint( host, 0 )
let packet = EchoMessage()
let outbuffer = packet.Bytes
let! reply = inbox.Receive()
let! result = socket.AsyncSendTo( outbuffer, 0, outbuffer.Length, SocketFlags.None, ep )
if result <= 0 then
raise (SocketException())
let epr = ref (ep :> EndPoint)
let inbuffer = Array.create (outbuffer.Length + 256) 0uy
let! result = socket.AsyncReceiveFrom( inbuffer, 0, inbuffer.Length, SocketFlags.None, epr )
if result <= 0 then
raise (SocketException())
reply.Reply(socket,inbuffer)
return ()
finally
socket.Close()
finally
()
})
async {
try
//#1: blocks thread and as result have large memory footprint and too many threads to use
//let socket,r = guard.PostAndReply(id,timeout=timeout)
//#2: suggested by Dmitry Lomov
let! socket,r = guard.PostAndAsyncReply(id,timeout=timeout)
printfn "%A: ok" host
socket.Close()
with
_ ->
printfn "%A: failed" host
()
}
//test it
//timeout is ms interval
//i.e. 10000 is equal to 10s
let AsyncPingTest timeout =
seq {
for net in 1..254 do
for node in 1..254 do
let ip = IPAddress.Parse( sprintf "192.168.%d.%d" net node )
yield AsyncPing ip timeout
}
|> Async.Parallel
|> Async.RunSynchronously
I'm pretty certain this will leak a socket and a pending async receive. Using a mailbox is a good idea, but it would need an additional command to send a Close to the socket to unhinge the pending I/O. See my accepted answer for an alternative approach.
@James: I've found the leaks and now it works too :) Your solution is faster, by up to 4x.
Possibly see also:
http://blogs.msdn.com/pfxteam/archive/2010/05/04/10007557.aspx
onRestoreInstanceState() is not getting called
I am using onRestoreInstanceState() to retrieve the data saved in onSaveInstanceState(), but in my application it is not being called when I come back to the activity again.
While my activity is running, I press the Home key. If I then start the activity again, onRestoreInstanceState() is not called, and so I am unable to retrieve the values.
Can anyone explain exactly when the onRestoreInstanceState() method is called?
When the activity is destroyed and recreated (e.g. on an orientation change), you can save state in onSaveInstanceState() and retrieve it in onRestoreInstanceState(). Override onPause() and onResume() in your activity, log a statement in each, and check for yourself.
Pressing the Home key does not cause a recreate, just onPause(). Note that onRestoreInstanceState() is called after onCreate(); you can check this in the "Recreating an Activity" documentation.
How does heatmap3 go from value to colour?
I'm using the heatmap3 package for R to generate some gene expression data heatmaps.
My question is: how are the values for the gene expression being "mapped" to colours?
As an example of the same (reproducible) problem, let's use the example code in the vignette on the mtcars dataset:
library(heatmap3)
heatmap3(mtcars,scale="col",margins=c(2,10),RowSideColors=RowSideColors, balanceColor = TRUE)
The legend shows that the colours range from approximately -2 to +3. If we use the Maserati Bora as an example:
mpg cyl disp hp drat wt qsec vs am gear carb
15.0 8 301.0 335 3.54 3.570 14.60 0 1 5 8
We can see in the heatmap that it has a bright red colour, i.e. a score of ~3, for carb, and a blue score of about -1.5 for qsec.
Looking at the other cars in the dataset, they have the following carb:
4 4 1 1 2 1 4 2 2 4 4 3 3 3 4 4 4 1 2 1 1 2 2 4 2 1 2 2 4 6 8 2
So the Maserati Bora is definitely an outlier in terms of carb, as is the Ferrari Dino (carb = 6), which, as expected, is also red in the heatmap.
Similarly, looking at the qsec values:
16.46 17.02 18.61 19.44 17.02 20.22 15.84 20.00 22.90 18.30 18.90 17.40 17.60 18.00 17.98 17.82 17.42 19.47 18.52 19.90 20.01 16.87 17.30 15.41 17.05 18.90 16.70 16.90 14.50 15.50 14.60 18.60
The Maserati's 14.60 is close to the minimum of 14.5, which is observed in the Ford Pantera L, which is also blue, as we'd expect.
But how are we actually going from the value to the colour? What is the formula for calculating this z-score? And is it being computed on a per-car or a per-parameter (mpg, cyl, etc.) basis?
OK, I think I just figured it out, but I'm posting here in case others have the same problem:
It's the basic z-score formula, applied per column (because of scale="col"), e.g. for qsec:
(mtcars$qsec - mean(mtcars$qsec))/sd(mtcars$qsec)
i.e. (value - mean(value)) / sd(value), where sd() is R's sample standard deviation.
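For anyone wanting to check the arithmetic outside R, here is the same per-column z-score in plain Python on a toy column of numbers (I use the sample standard deviation, which is what R's sd() computes):

```python
import statistics

def zscores(values):
    """Standardize a column: subtract the mean, divide by the sample sd."""
    mu = statistics.mean(values)
    sd = statistics.stdev(values)   # sample sd (n - 1), matching R's sd()
    return [(v - mu) / sd for v in values]

col = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]   # toy column
z = zscores(col)
# after scaling, the column has mean 0 and sample sd 1,
# and the heatmap colour is just a lookup of z on the colour ramp
```

This is why an outlier like the Maserati's carb value lands at the red end: it is several sample standard deviations above its column's mean.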
Jacobi vs tidal radius for star cluster
The tidal radius is defined in King (1962) as:
the value of r (the radius) at which f (the density profile) reaches zero...
This also referred by King as the "limiting radius" written as (Eq 3):
$$r_t=R_{GC}\left(\frac{M}{2M_g}\right)^{1/3}$$
where $R_{GC}$ is the galactocentric distance, $M$ is the cluster's mass, and $M_{g}$ is the galaxy's mass.
This can also be written as Pinfield et al. (1998) Eq (12):
$$r_t = \left( \frac{GM}{2(A-B)^2} \right)^{1/3}$$
where $A$ and $B$ are the Oort constants and $M$ is the total mass.
Now, I've seen the "tidal radius" defined as:
$$r_t=R_{GC}\left(\frac{M}{3M_g}\right)^{1/3}$$
in Chernoff & Weinberg (1990) Eq 2. Notice the 3 in the denominator instead of the 2 as above. According to the authors:
The tidal radius depends weakly upon the form of the mass distribution of the halo. The Eq. above is correct for the tidal stress of a Keplerian force field. On the other hand, for constant $v_g$, the factor $3M_g$ becomes $2M_g$.
This same quantity (using $3M_g$) is referred to as the "Jacobi radius" in Piatti (2015) Eq 4.
I've also seen the Jacobi radius written as:
$$r_J=\left(\frac{GM}{4\Omega^2-k^2}\right)^{1/3}$$
(where $G, M, \Omega$, and $k$ are the gravitational constant, the cluster
mass, the circular, and the epicyclic frequencies of a near circular
orbit, respectively) in Ernst et al. (2011) Eq (8).
According to Wikipedia, both the tidal and the Jacobi radius are the same thing.
My question: are these two radius the same quantity?
In the book Galactic Dynamics by Binney and Tremaine (second edition) there is a whole section explaining the difference between the Jacobi radius and the tidal radius (page 677-chapter 8).
Here, $r_J$ is defined as:
$$r_J= R_0\left(\frac{m}{3M}\right)^{1/3}$$
The Jacobi radius $r_J$ (also, Roche or Hill radius) of an orbiting stellar system is expected to correspond to the observational tidal radius, the maximum extent of the satellite system. However this correspondence is only approximate.
Five reasons are subsequently given to explain why the correspondence is only approximate.
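Numerically the two conventions differ only by a constant factor: $r_t/r_J = (3/2)^{1/3} \approx 1.14$, independent of the cluster. A small Python illustration (the masses and galactocentric distance are made-up round numbers, not taken from any of the cited papers):

```python
def limiting_radius(R_gc, M, M_g, factor):
    """r = R_gc * (M / (factor * M_g))**(1/3); factor is 2 or 3."""
    return R_gc * (M / (factor * M_g)) ** (1.0 / 3.0)

R_gc, M, M_g = 8.0, 1.0e5, 1.0e11        # kpc, Msun, Msun (toy values)
r_t = limiting_radius(R_gc, M, M_g, 2.0)  # King (1962) convention
r_J = limiting_radius(R_gc, M, M_g, 3.0)  # Chernoff & Weinberg (1990) convention
```

So for any given cluster the "tidal" radius with the factor of 2 is about 14% larger than the "Jacobi" radius with the factor of 3; the difference between the papers is one of convention and assumed galactic potential, not of physics.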
If they had one good reason, they wouldn't need five ;-)
In the paper "A million binaries from Gaia eDR3: sample selection and validation of Gaia parallax uncertainties" (El-Badry et al. 2021), the Jacobi radius, in the context of orbiting binary white dwarfs, is defined as the separation between the two orbiting stars beyond which the Galactic tidal field dominates the binary's internal acceleration. The relation given for the Jacobi radius in the solar neighbourhood is
$$r_J = 1.35\ \textrm{pc} \times \left(\frac{M_{tot}}{M}\right)^{1/3}$$
$M_{tot}$ is the mass of the binary system and $M$ is the mass of the sun. It's a bit strange that the Jacobi radius is defined in terms of the sun's mass, but I think it is just an approximation of the effective range within which two objects remain gravitationally bound in an orbit.
The paper references (Jiang & Tremaine 2010) for the relationship.
Thanks Gabriel for adding the detailed definition to my reply. An additional definition of the tidal radius follows.
In this definition it describes molecular clouds rather than stellar systems. Again, notice the difference (the 2 is in the numerator). There is no reference for where this formalism comes from. This is from Tan 2000, Eq. (9); check also Eq. (8) (http://adsabs.harvard.edu/abs/2000ApJ...536..173T).
Finding a bijection from $\{1,2,...,nm\}$ to $X \times Y$
I'm trying to prove that for two finite sets $X,Y$, where $|X|=n$, $|Y|=m$, $|X||Y|=|X \times Y|$. I know that there exists bijections $f:\{1,2,...,n\} \rightarrow X$ and $g:\{1,2,...,m\} \rightarrow Y$ and I'm trying to find a bijection $h:\{1,2,...,nm\} \rightarrow X \times Y$. I know that this is equivalent to showing there exists a bijection $k: X \times Y \rightarrow \{1,2,...,nm\}$. One such bijection that seemingly works is $k(x_i,y_j) = (i-1)m + j$, where $1 \leq i \leq n$ and $1 \leq j \leq m$. I've tried to prove the injectivity and surjectivity of this function but to no avail:
Injectivity: Suppose $k(x_a,y_b) = k(x_c,y_d)$ for $x_a, x_c \in X$ and $y_b, y_d \in Y$. This gives $(a-1)m + b = (c-1)m + d$, but I've not been able to show that $a = c, b = d$ from here.
Surjectivity: Suppose $z \in \{1,2,...,nm\}$ then $\exists i,j$ s.t. $ z = (i-1)m + j$ for some $i,j$ satisfying $1 \leq i \leq n$ and $1 \leq j \leq m$. Now, since $X, Y$ can be written as $X = \{x_1,x_2,...,x_n\}$, $Y = \{y_1,y_2,...,y_m\}$ and $X \times Y = \{(x,y) | x \in X \wedge y \in Y\}$ then we can say $\exists(x_i, y_j) \in X \times Y$ and by the definition of $k$ we have $k((x_i, y_j)) = (i-1)m + j$. But I'm not sure if this is all watertight, since I've assumed that $z$ can be written in the desired form.
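One way to close the surjectivity gap is to construct $i$ and $j$ explicitly: take $i = \lfloor (z-1)/m \rfloor + 1$ and $j = ((z-1) \bmod m) + 1$. A small Python check of this construction (the decomposition is my own illustrative choice, not taken from the thread):

```python
def decompose(z, m):
    """For z in {1, ..., n*m}, return (i, j) with z = (i-1)*m + j and 1 <= j <= m."""
    i = (z - 1) // m + 1
    j = (z - 1) % m + 1
    return i, j

def construction_is_valid(n, m):
    # Every z in {1, ..., n*m} decomposes with i, j in the required ranges
    # and reassembles to z; the one-element list keeps the pair unpacked inline.
    return all(
        1 <= i <= n and 1 <= j <= m and z == (i - 1) * m + j
        for z in range(1, n * m + 1)
        for i, j in [decompose(z, m)]
    )
```

For example, decompose(12, 4) gives (3, 4), i.e. $12 = (3-1)\cdot 4 + 4$.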
Yes, the constraint on $j$ would be $0 \leq j \leq m$ but would that then imply $1 \leq i - 1 \leq n+1$? I'm not sure exactly how to proceed.
Ok, how exactly would that then allow us to deduce that $i,j$ are appropriately constrained for there to exist $(x_i, y_j)$ s.t. $k(x_i, y_j) = (i-1)m+j$?
@Hai indeed, your surjectivity proof is not watertight at all since you've just assumed what you should prove. You are given only that z in 1,...,mn and you should be able to construct explicitly (find such i,j) that z = (i-1)m+j. Hint. Try to use induction on z.
First let me do a little change: instead of $\{1,2,...,n\},\{1,2,...,m\},\{1,2,...,nm\}$ I will use $\{0,1,...,n-1\},\{0,1,...,m-1\},\{0,1,...,nm-1\}$.
Now, to prove $k$ is bijective, I will find the inverse of $k$: $k(x_i,y_j)=(i-1)m + j$. To isolate $i$ I will use the fact that $j<m$. First let's divide by $m$: $\frac{(i-1)m + j}m=i-1 + \frac jm$. Now notice that $i-1$ is an integer and $\frac jm$ is a fraction less than $1$, so if I use the floor function on this expression I will get rid of $j$: $\lfloor i-1 + \frac jm\rfloor=i-1$. Adding $1$ to this we get $$\left\lfloor\frac{k(x_i,y_j)}{m}\right\rfloor+1=i\implies f^{-1}\left(\left\lfloor\frac{k(x_i,y_j)}{m}\right\rfloor+1\right)=x_i$$
Now we can easily isolate the $j$: $k(x_i,y_j)=(i-1)m + j=\left(\left\lfloor\frac{k(x_i,y_j)}{m}\right\rfloor+1-1\right)m+j=\left\lfloor\frac{k(x_i,y_j)}{m}\right\rfloor m+j\\\implies j=k(x_i,y_j)-\left\lfloor\frac{k(x_i,y_j)}{m}\right\rfloor m$
With this, all that is left to do is to use the inverse of $g$:$$j=k(x_i,y_j)-\left\lfloor\frac{k(x_i,y_j)}{m}\right\rfloor m\implies g^{-1}\left(k(x_i,y_j)-\left\lfloor\frac{k(x_i,y_j)}{m}\right\rfloor m\right)=y_j$$
Combining those 2 we get $$k^{-1}(z)=\left(f^{-1}\left(\left\lfloor\frac{z}{m}\right\rfloor+1\right),g^{-1}\left(z-\left\lfloor\frac{z}{m}\right\rfloor m\right)\right)$$
Thus we can conclude that $k$ is bijective
Here is a different proof that uses induction.
It is trivial that $X\times \{a\}$ is the same size as $\{0,1,...,n-1\}$.
Assuming that $X\times Y$ is the same size as $\{0,1,...,nm-1\}$, I'll show that $X\times (Y\cup \{a\})$ where $a\notin Y$ is the same size as $\{0,1,...,n(m+1)-1\}$:
Let's call $h$ the bijection from $X\times Y$ to $\{0,1,...,nm-1\}$, then set $k(x_i,y_j)=h(x_i,y_j)$ if $(x_i,y_j)\in X\times Y$ and $k(x_i,y_j)=nm-1+i$ otherwise. Try to show that $k$ is a bijection from $X\times (Y\cup \{a\})$ to $\{0,1,...,n(m+1)-1\}$ and then you are done with the induction.
too strong, I'm pretty sure he is meant to only use naturals and induction
@famesyasd What do you mean by too strong? it is possible to use induction but why?
Well I mean I have seen this exercise in two books, and in both it was before integers, rationals etc so I assumed that he shouldn't use what you have used yet.
Hmm @famesyasd , I got this question in my first course in set theory and I use this function with the inverse to show it is bijective
@Hai, please comment here if you want me to add a proof using induction
@famesyasd I asked the OP if he wants using induction. What you said could be correct
Ah I hadn't really considered that I probably shouldn't use rationals in the proof since the course hasn't covered it yet. In that case induction is probably what I should use.
@Hai I added a way using induction!
I see, thanks a lot!
@Holo Should the base case not be when Y has cardinality $0$?
@Hai it doesn't matter, both work (and both are trivial, $X\times\emptyset=\emptyset$)
So for your bijection $F$ (here what I wrote is actually your $F^{-1}$) $$\forall x\, x \in F^{-1} \iff x \in \{1,...,mn\} \times (A \times B) $$$$\ \wedge\, \exists a \in \{1,...,m\},\ b \in \{1,...,n\}\ $$$$x = (n * P(a) + b,\, (f(a),\,g(b)).$$
You would want to prove the following lemma:
$$\forall m,n \in N\ \forall x\, x \in \{1,..,mn\}\implies \exists a \in \{1,...m\}\,\exists b \in \{1,...n\}\ x = (n*P(a) + b)$$
and
$$\forall m,n \in N\ \forall x\, x \in \{1,..,mn\}\forall a,c \in \{1,...m\}\,\forall b,d \in \{1,...n\}\ $$$$x = (n*P(a) + b) \wedge x = (n * P(c) + d) \implies (a = c \wedge b = d).$$
This lemma is needed to prove that you have surjectivity (the first part) and injectivity (the second).
To rewrite the notation: We have $$\forall m,n, x\ \,x \in \{m,..,n\} \iff m \leq x \wedge x \leq n$$ where $$\forall a,b\ \, a \leq b \iff a,b \in N \wedge \exists c \in N\, b = a + c$$
also we have $$\forall a \in N\, a \neq 0 \implies S(P(a)) = a$$ where $S$ is effectively $\forall n \in N\,S(n) = n + 1$ and $P(n) = n - 1$ provided that $n \neq 0$
For example, to take injectivity: suppose that $a \neq c$; for example $a < c$. Then we have the following series of inequalities:
$$n*P(a) + b \le n*P(a) + n = n*(P(a) + 1) = n*(P(a) + S(0)) = n*(S(P(a)+0)) = n*(S(P(a))) = n*a < n*a + 1 \le n*P(c) + 1 \le n*P(c) + d$$
Here you should be able to justify all transitions.
Some context, perhaps of interest.
Want to find a bijection:
$f: \{0,1,2,\dots,nm-1\} \rightarrow X \times Y$,
where $X=\{0,1,2,\dots,n-1\}$ and
$Y=\{0,1,2,\dots,m-1\}$.
$X \times Y = \{(i, k) \mid 0\le i \le n-1,\ 0 \le k \le m-1\}$
In matrix form
$(0,0) (0,1) (0,2).....(0,m-1)$
$(1,0) (1,1) (1,2).....(1,m-1)$
$\vdots$
$(n-1,0) (n-1,1)..(n-1,m-1)$.
Consider the dictionary order on the set $B:=X×Y$.
$A:=\{0,1,2,\dots,mn-1\}$ and $B$ have the same order type if
there exists a bijection $f: A \rightarrow B$ s.t.
$a_1 <_A a_2 \Rightarrow f(a_1) <_B f(a_2)$.
Let $k \in \{0,1,2,\dots,mn-1\}$.
Euclidean division :
$k= mq +r$, where $0 \le q$ , $0 \le r \lt m$,
with unique $q, r$.
$f: A \rightarrow B.$
$f(k) = f(mq+r) =(q, r)$.
$f$ is bijective.
(https://en.m.wikipedia.org/wiki/Euclidean_division)
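The Euclidean-division map $f(k) = (q, r)$ is easy to verify mechanically. A short Python sketch (illustrative; Python's divmod returns exactly the quotient/remainder pair of the division $k = mq + r$):

```python
def f(k, m):
    # k = m*q + r with 0 <= r < m; f(k) = (q, r)
    q, r = divmod(k, m)
    return (q, r)

def is_bijection(n, m):
    images = [f(k, m) for k in range(n * m)]
    target = {(i, j) for i in range(n) for j in range(m)}
    # injective: no repeated images; surjective: images cover all of X x Y
    return len(set(images)) == n * m and set(images) == target
```

For example, f(7, 3) == (2, 1), matching $7 = 3\cdot 2 + 1$, and is_bijection holds for any sample n, m.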
Menu rollover images and text only working on one
I have a rollover menu that displays the days of the week. They originally sat happily side by side with an image that appeared when hovered over. Now I'm trying to get text and an image to appear when you hover over a day. The text works on the first day ("Thurs") but the rest of the days don't appear, nor does the hover image background.
What am I doing wrong? Please go easy on me, I'm trying!
I've also made a jsfiddle (this is how the menu originally looked)
THE HTML
<div id="nav">
<ul class="menu">
<li>
<div class="img1 left"> <a href="" id="thursButton" class="thursButton"></a>
<p>Somewhere Only We Know Lily Allen
<br/>Story Of My Life One Direction
<br/>
</p>
</div>
</li>
<li>
<div class="img2 left"> <a href="" id="friButton" class="friButton"></a>
<p>Somewhere Only We Know Lily Allen
<br/>Story Of My Life One Direction
<br/>
</p>
</div>
</li>
<li>
THE CSS
#nav {
display:inline;
list-style: none;
position: fixed;
width: 1290px;
margin: 0 auto;
left:0px;
right:0px;
float:clear;
top: 120px;
z-index: 1000;
}
.menu li {
display: inline;
vertical-align:top;
float:left;
}
.menu li a {
display: block;
height: 407px;
width:250px;
text-indent: -999999px;
}
.img1 {
width: 250px;
height: 407px;
position: relative;
}
.thursButton {
width:250px;
height:177px;
display:block;
}
#thursButton {
background-image:url('http://static.tumblr.com/2wdsnoc/K8umxhswx/thu.png');
}
.thursButton:hover {
background-image:url('http://static.tumblr.com/2wdsnoc/6Rxmxht1d/thu-hover.png');
}
.img2 {
width: 250px;
height: 407px;
position: relative;
}
.satButton {
width:250px;
height:177px;
display:block;
}
#satButton {
background-image:url(http://static.tumblr.com/2wdsnoc/drJmxhstf/sat.png');
}
.satButton:hover {
background-image:url('http://static.tumblr.com/2wdsnoc/qfsmxhstx/sat-hover.png');
}
.left p {
color: #FFFFFF;
display: none;
font-size: 18px;
left: 10px;
position: absolute;
text-shadow: 1px 1px 1px #000000;
top: 100px;
width: 250px;
}
.left:hover p {
display:block;
}
A lot of errors in your HTML.
You are closing and twice...
<br> is generally accepted as usage du jour (http://stackoverflow.com/questions/1946426/html-5-is-it-br-br-or-br). But jsfiddle always complains. It's perfectly valid for HTML documents; if it's not XHTML, <br> should be used vs <br />.
At a minimum, your list markup is invalid.
@ExtPro You are right, but I still believe that for a better portability we should always use the closing one.
There's only Thu in your fiddle. How can we check the other days? Also, do you really need background images to get your text to look like this?
@tewathia, there's Fri too.. just scroll down
@ExtPro I removed the but the rest of the menu still isn't appearing
I suppose this looks somewhat presentable:
HTML
<div id="nav">
<ul class="menu">
<li>
<a href="#" id="thursButton" class="thursButton">
<div class="img1 left">
<p>Somewhere Only We Know Lily Allen<br/>Story Of My Life One Direction</p>
</div>
</a>
</li>
<li>
<a href="#" id="friButton" class="friButton">
<div class="img2 left">
<p>Somewhere Only We Know Lily Allen<br/>Story Of My Life One Direction</p>
</div>
</a>
</li>
<li>
<a href="#" id="satButton" class="satButton">
<div class="img3 left">
<p>Somewhere Only We Know Lily Allen<br/>Story Of My Life One Direction</p>
</div>
</a>
</li>
</ul>
</div>
CSS
#nav {
display:inline;
list-style:none;
position:fixed;
width:1290px;
margin:0 auto;
left:0px;
right:0px;
float:clear;
top:120px;
z-index:1000;
}
.menu li {
display:inline;
vertical-align:top;
float:left;
}
.menu li a {
display:block;
height:407px;
width:250px;
}
.img1 {
width:250px;
height:407px;
position:relative;
}
.thursButton {
width:250px;
height:177px;
display:block;
}
#thursButton {
background-image:url('http://static.tumblr.com/2wdsnoc/K8umxhswx/thu.png');
}
#thursButton:hover {
background-image:url('http://static.tumblr.com/2wdsnoc/6Rxmxht1d/thu-hover.png');
}
.img2 {
width:250px;
height:407px;
position:relative;
}
.friButton {
width:250px;
height:177px;
display:block;
}
#friButton {
background-image:url('http://static.tumblr.com/2wdsnoc/9dtmxhsw1/fri.png');
}
#friButton:hover {
background-image:url('http://static.tumblr.com/2wdsnoc/dCtmxht0o/fri-hover.png');
}
.img3 {
width:250px;
height:407px;
position:relative;
}
.satButton {
width:250px;
height:177px;
display:block;
}
#satButton {
background-image:url('http://static.tumblr.com/2wdsnoc/drJmxhstf/sat.png');
}
#satButton:hover {
background-image:url('http://static.tumblr.com/2wdsnoc/qfsmxhstx/sat-hover.png');
}
.left p {
color:#FFFFFF;
display:none;
font-size:18px;
left:10px;
position:absolute;
text-shadow:1px 1px 1px #000000;
top:100px;
width:250px;
}
.left:hover p {
display:block;
}
There were multiple errors with incorrectly closed tags.
I suggest using an HTML or CSS validator to troubleshoot future issues:
http://validator.w3.org/ For HTML
http://jigsaw.w3.org/css-validator/ For CSS
Ah ok, so I position the img as 'relative' and the button is 'display:block'?
How do you get the hover image to appear tho? I created a hover section but it only shows the original img
also I cleaned up my html http://jsfiddle.net/arp8D/7/ thank you for those two links :)
I updated my answer. Should work better now. I suggest getting a text editor like Notepad++ so you can notice errors thanks to syntax highlighting. It makes your job a lot easier and it's also freeware.
Amazing, thank you @WP_. Just so I understand and can learn for future ref - did you remove the text-indent and add a # instead of . on the hover?
Yes. The text-indent is redundant because the p texts are already set to display:none;. I changed the hover thingies to id's (#) rather than classes (.). You were using CSS code for the classes "thursButton", "friButton" and "satButton" to set their background image. But you also used id's to change the background image when hovering. That doesn't work at all. Either use classes or id's, not both, when using pseudo-classes like :hover or :focus for a specific element. Practice makes perfect!
Thank you, @WP_! Your help has been invaluable!
Could I possibly pester you for a tiny bit longer? The menu has knocked the alignment of the site content off (http://tinyurl.com/o46hj3b). Any ideas why?
I suggest that you get a good developer tool like FireBug if you use Firefox. Many browsers already have an integrated developer tool that you can use by clicking F12. Use the element inspector button (usually an arrow and a square icon) and then click on any element on your page to view the HTML code and the CSS rules that affects it. I also recommend fixing the DOCTYPE error on your webpage before doing any heavy work. If the DOCTYPE is not correct, your code is not guaranteed to work as you'd expect.
Sorry, @WP_... one final question, I promise. When I hover over the text, the hover image disappears (tinyurl.com/o46hj3b). I checked with Firebug and couldn't see anything unusual :-/
MediaPlayer seekTo method is not accurate
I want to seek to a specified position in an mp3 file. My problem is that the MediaPlayer seeks to a position about 31 seconds off: for example, when I want to seek to 136540 milliseconds, which means 2 minutes and 16 seconds, the MediaPlayer instead seeks to 1 minute and 45 seconds.
here is my code
File[] ffile1 = ContextCompat.getExternalFilesDirs(this, null);
String mp3FilePath =ffile1[0].getAbsolutePath() + File.separator + "002.mp3";
MediaPlayer mediaPlayer = new MediaPlayer();
try {
mediaPlayer.setDataSource(mp3FilePath);
SystemClock.sleep(1000);
mediaPlayer.prepare();
SystemClock.sleep(200);
mediaPlayer.seekTo(136540);
SystemClock.sleep(200);
mediaPlayer.start();
} catch (IllegalArgumentException e) {
e.printStackTrace();
} catch (IllegalStateException e) {
e.printStackTrace();
} catch (IOException e) {
e.printStackTrace();
}
Here is the mp3 file that I use https://download.quranicaudio.com/qdc/abdul_baset/mujawwad/2.mp3
Please help, Thanks
try to set on prepare listener
@Style-7, The same problem :(
try to use another file, maybe the mp3 file is corrupted
@Style-7 I tested another mp3 from the same site and it works well, but the problematic file plays and seeks fine in Windows Media Player and VLC player. Thanks
When it comes to MP3s, depending on their encoding, MediaPlayer can't always parse them correctly, and so the seek function is essentially broken. It has to do with the variable bit rate not being properly encoded in the MP3 itself.
Instead of MediaPlayer, try using ExoPlayer. It's a fairly simple drop-in replacement, which shouldn't require you to change too much code
Getting Dynamic UIColor from SwiftUI Color (light and dark mode)
It is possible through an easy initialiser to get the UIColor from a SwiftUI Color as follows:
let primary = UIColor(Color("primary"))
I added the Color with a light and dark mode in an Asset Catalogue.
If primary is used in UIKit code, it unfortunately only gives me the light mode version and does not dynamically change the color for dark mode.
I do not know how to get the dark and light mode colors from a Color to write my own extension as a bridge.
Is there a solution or do you always start with UIColors and bridge it into SwiftUI Code? The project is still heavily based on UIKit, but new views are written in SwiftUI.
Why does UIColor(named:) not work for you?
We have a Design System in place, which declares its colors right now with the Color class. We would have to write duplicate code in our architecture right now. Do you know a possible way?
I don't think what you want is possible. To achieve that you would need an initializer of Color that takes a light and a dark variant, and a way to resolve to a cgColor given a certain interface style (neither is possible afaik). But your common ground could be the name of your color from the asset catalogue, and you could derive your SwiftUI/UIKit colors from that identifier.
struct ColorId: RawRepresentable, Equatable, Hashable {
let rawValue: String
init(_ rawValue: String) {
self.init(rawValue: rawValue)
}
init(rawValue: String) {
self.rawValue = rawValue
}
static let primaryText = ColorId("asset_name_of_primary_text_color")
}
extension ColorId {
var color: Color { Color(rawValue) }
var uiColor: UIColor? { UIColor(named: rawValue) }
}
struct SomeView: View {
let textColor = ColorId.primaryText
var body: some View {
Text("Text")
.foregroundColor(textColor.color)
}
}
final class SomeUIView: UILabel {
private let textColorId: ColorId
init(textColorId: ColorId) {
self.textColorId = textColorId
super.init(frame: .zero)
self.textColor = textColorId.uiColor
}
required init?(coder: NSCoder) { fatalError("need a color id") }
}
Then you pass around your color ids and get the UIColor or Color values when you need them. If you use something like RSwift you could also just pass around the ColorResource values.
Find IP address of local DHCP device other than through Powershell
I'm developing an application for my own use which, though I'm developing it on Windows is destined for a Raspberry PI, if it works. This needs to make a TCP connection to another device on my local network (a solar inverter) to collect data.
I hoped that the box would respond to the PnP multicast, but tests suggest it does not. I have a TalkTalk router at the moment but would prefer a solution that would survive a change of broadband provider.
Google searches seem to come up only with PowerShell solutions, but if PowerShell can do it then that suggests there's an underlying DHCP protocol (unless PowerShell is accessing PnP data).
Oops! Turns out that (at least with this hub) there's a trivial answer. The hub populates its own DNS, so all I needed to do was use the address "LuxController.lan:8000".
That's the device name I set in the hub web interface.
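Since the hub registers the device name in its own DNS, the lookup from code is an ordinary hostname resolution. A minimal Python sketch (the hostname "LuxController.lan" and port 8000 are specific to this setup; substitute whatever name is set in the router's interface):

```python
import socket

def resolve(hostname):
    # Ask the system resolver (which consults the router's DNS) for an IPv4 address.
    return socket.gethostbyname(hostname)

# e.g. addr = resolve("LuxController.lan"), then open a TCP connection to (addr, 8000)
```

This keeps the application independent of the router brand, as long as the replacement router also publishes DHCP client names into local DNS.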
How to know if a user has previously logged in in Django, but with a Firebase database
How do we detect this in both HTML and Python?
When the Django website is opened, it should show different things for a logged-in user.
Please give some examples with code and comments,
but don't use the Django authentication system.
How can I detect Empty Content as input content in SwiftUI?
I have a CustomView which has simple logic: if this CustomView gets non-EmptyView content it should return an HStack, and if it gets an EmptyView it should return a Circle. How can I choose the right option depending on the input content?
For example, it should work like this (but it does not! I must hard-code true or false to get the result you see below):
CustomView() { Text("Hello") }.background(Color.red).padding()
CustomView().background(Color.red)
Goal: this question tries to find a way to set the right value for useHStack depending on the input content.
PS: logically there is a way to use useHStack as a parameter of CustomView, in which case there would be no need to ask this question. So we are trying to solve the issue without using useHStack (or anything else) as an input parameter of CustomView. I'd like to detect it at initialization of CustomView, not in body or through GeometryReader and so on . . .
My issue is this: how can I know whether useHStack must be true or false depending on the incoming content? Incoming content means, from the example: Text("Hello").
import SwiftUI
struct ContentView: View {
var body: some View {
CustomView() { Text("Hello") }
CustomView()
}
}
struct CustomView<Content: View>: View {
let content: () -> Content
let useHStack: Bool
init(@ViewBuilder content: @escaping () -> Content) {
self.content = content
self.useHStack = true // or: false depending on content(), if we got EmptyView() then false otherwise true! the idea is make this part Automatic!
}
init() where Content == EmptyView {
self.init(content: { EmptyView() })
}
var body: some View {
if useHStack {
HStack {
content()
Spacer()
Circle().frame(width: 50, height: 50, alignment: .center)
}
}
else {
Circle().frame(width: 50, height: 50, alignment: .center)
}
}
}
what do you mean by "incoming content"? the content is static - so you know at compile time what the content is
sorry, maybe I use wrong naming, you tell me the correct one, I edit it, I mean like this: CustomView() { incoming content }
It's not the naming.. i don't understand what you mean? Why doesn't work in your current solution?
I have zero solution!
The code you included - what doesn't work there? You seem to have the right init for the EmptyView case - set the useHStack there
I think I could not explained it well, sorry for that, let me try here, If I use this code: CustomView() { Text("Hello") }.background(Color.red).padding() here we got Text("Hello") and useHStack should be automatically set to true. but if I got this code: CustomView().background(Color.red) that means we got EmptyView () and useHStack should be automatically set to false. right now what you see in code I am hard coding true or false! the idea is make it automatically depending on incoming content!
Well, why not set self.useHStack = false in the init() where Content == EmptyView
Have you success? I tried before asking this question! Xcode did not let me do it! Xcode said: 'let' property 'useHStack' may not be initialized directly; use "self.init(...)" or "self = ..." instead
LoL I think I must use it as var, not let! :)))
@NewDev: I am so thankful for your time and help. I got that Xcode message previously, but I did not read it well in the first place; while telling you, I read it again and found the issue, but you told me first.
You can keep it as let, but then you'd need a private init to set everything there.
can you show me that code plz?
You can set useHStack to false inside the init() where Content == EmptyView. You either would need to make it var useHStack: Bool, or create a private init:
private init(content: @escaping () -> Content, useHStack: Bool) {
self.content = content
self.useHStack = useHStack
}
init(@ViewBuilder content: @escaping () -> Content) {
self.init(content: content, useHStack: true)
}
init() where Content == EmptyView {
self.init(content: { EmptyView() }, useHStack: false)
}
After all the things we spoke about and coded, could we not directly look inside content in the first place? I know it is not possible for us, but I want to be sure. In that case we would not even need useHStack, because we could inspect content inside body as well; for example, if it is empty do this, and if not do that.
@swiftPunk, SwiftUI view is statically known. Sure, you can detect EmptyView here, but it wouldn't work if you had something like if someCondition { EmptyView } else { Text("not empty") } - i.e. this wouldn't detect if the view at runtime became "empty". So, you need to clarify to yourself what you're after and what it means to "see inside"
No, I wanted to ask about this code, whether we can do this: var body: some View { if content() != EmptyView() { return HStack {} } else { return Circle() } } — in this case it would be much easier for us!
I mean.. you can always check the type.. something like if type(of: content) == (() -> EmptyView).self {..}
Maybe you can add this code to your answer as well, for full cover! init(@ViewBuilder content: @escaping () -> Content) where Content == EmptyView { self.init(content: content, useHStack: false) }
that doesn't feel like it belongs to the question, and it's also not necessarily the most elegant approach
If we use the code CustomView() { EmptyView() } or CustomView() {} then we need that, but regarding the condition in the question you are right! Thanks again for the help.
The new iOS 18 Group.init(subviews:transform:) API allows you to perform this kind of customized logic. Here's their example from the documentation:
struct CardsView<Content: View>: View {
var content: Content
init(@ViewBuilder content: () -> Content) {
self.content = content()
}
var body: some View {
VStack {
Group(subviews: content) { subviews in
HStack {
if subviews.count >= 2 {
SecondaryCard { subviews[1] }
}
if let first = subviews.first {
FeatureCard { first }
}
if subviews.count >= 3 {
SecondaryCard { subviews[2] }
}
}
if subviews.count > 3 {
subviews[3...]
}
}
}
}
}
My little, small program has problems (Ubuntu 14.04, Qt5, Code::Blocks)
Ok, I am going to post a series of pictures I took, with some red annotations on them:
Picture 1 : My Main.cpp
Picture 2 : Qt5.3 Directory on Ubuntu
Picture 3 : Additional include directories on Code::Blocks
Picture 4 : Linker Directory i’ve included on Code::Blocks
Picture 5 : Linker Libraries i point to
Picture 6 : Some other options
Picture 7 : And the error i am getting when i finally run the program
This is the first time I'm trying to get into Qt5 on Ubuntu, and I need to work with Code::Blocks, because this is the IDE I am most comfortable with, in case someone suggests QtCreator. I know it's powerful, but I will use Qt5 only for some parts; not the whole project will be based on Qt5. Do you know of any good tutorial or introduction to Qt5 and Linux in general?
Based on the error message, I believe the source of the error is the linker settings, which are missing the Qt5Widgets library.
That was it. I feel dumb.
jQuery resize function doesn't work on page load
How do I get this function to not only run on window resize but also on initial page load?
$(window).resize(function() {
...
});
This solution is now deprecated since jQuery 3.0: https://api.jquery.com/bind/#bind-eventType-eventData-handler
You'll want to use:
$(document).ready(function() { /* your code */ });
To make something happen on load. If you want something to work on load and on resize, you should do:
onResize = function() { /* your code */ }
$(document).ready(onResize);
$(window).bind('resize', onResize);
I think the best solution is just to bind it to the load and resize event:
$(window).on('load resize', function () {
// your code
});
This is the most straightforward answer of all
This makes it simple and clean.
This behavior is by design.
You should put your code into a named function, then call the function.
For example:
function onResize() { ... }
$(onResize);
$(window).resize(onResize);
Alternatively, you can make a plugin to automatically bind and execute a handler:
$.fn.bindAndExec = function(eventNames, handler) {
this.bind(eventNames, handler).each(handler);
};
$(window).bindAndExec('resize', function() { ... });
Note that it won't work correctly if the handler uses the event object, and that it doesn't cover every overload of the bind method.
$(document).ready(onResize);
$(window).bind('resize', onResize);
didn't work with me.
Try
$(window).load('resize', onResize);
$(window).bind('resize', onResize);
instead.
(I know this question is old, but google sent me here, so...)
Another approach, you can simply setup a handler, and spoof a resize event yourself:
// Bind resize event handler through some form or fashion
$(window).resize(function(){
alert('Resized!');
});
// Trigger/spoof a 'resize' event manually
$(window).trigger('resize');
| common-pile/stackexchange_filtered |
Deploying git repo to "production" server
I have a git repo where a group of developers and I are collaborating on project code. Occasionally, we'd like to copy the git repo to a VPS (set up as a Linux server) that has an external IP address (w.x.y.z).
I can go into an SFTP client and navigate up the folder hierarchy to the server root and then navigate down to /var/www/ (server web root) to drop my files in but I'd like to deploy to the server through the command line instead.
My goal is to configure the Linux server as a remote git directory but don't know how to go about navigating up the file hierarchy to have git recognize that the remote branch repo needs to go into /var/www/.
A brief search has uncovered git remote add origin <EMAIL_ADDRESS> and then git push origin.
It seems that connecting this way to w.x.y.z will land me at the home folder of 'username', not the root web directory (/var/www/) when accessed via the browser.
Does anyone have insight into how I should go about setting up a remote directory for deploying a git repo to a "production" server?
You appear to be doing this in a rather non-obvious way. I think what you want to do is copy the git repo to somewhere else (the VPS server). The standard way to achieve this is git clone.
In your /var/www/ directory or an appropriate subdirectory thereof, do:
git clone [URL-FROM-GITHUB]
That will clone the git repository to your VPS. You can then update it with
git pull
You could script this with
ssh my.vps.server 'cd /var/www/whatever && git pull'
However, normally you don't want the entire project in '/var/www/...' because that would also put stuff you did not mean to deploy there, e.g. the .git directory. Hence perhaps better to clone the repo within your home directory, and make a small script to rsync the appropriate /var/www/ directory against your repo, using --exclude to remove files you don't want, or just rsync-ing a subdirectory of the repo.
Thanks. I believe this is what I was looking for. I'll look into rsync to see if I can get that working for what I need
I have a question about this, why would I git pull if I can just rsync directly to /var/www/ directory?
@AliElkhateeb rsync will pull whatever happens to be in the working copy at the time, rather than the head commit from master. Running a git repo on the web server also allows you to easily roll back to a different commit.
| common-pile/stackexchange_filtered |
Sex: How much is too much?
Regardless of genre, at what point does sex stop being art and become the erotic blubbering of a perverse mind?
Ever since middle school my writing has had a sensual twist, and it has become more detailed with age and ... experience. But now I am concerned that the story line might start to get lost if I continue focusing on sex.
How much is too much? Where does sex stop being entertaining and begin to be a distraction?
Similar: http://writers.stackexchange.com/questions/312/how-much-sex-is-allowed-in-a-non-romance-novel
@PraveshParekh... albeit similar, mine is asking when it is distracting to the reader, not whether a publisher cares
Agreed. I thought it might be of interest, hence put the link in the comment
There is never too much sex
How to use asp.net sessions in javascript?
I have a form in Asp.Net with 3 pages by going with a next button to the next page.
What I've done so far in C# is I created sessions like this:
Session["FirstName"] = txtFirst.Text;
Session["LastName"] = txtLast.Text;
Then, on that Next button, I called a JavaScript function where I tried to access these session values like this:
<script type="text/javascript">
var fn = '<%=Session["FirstName"]%>';
var ln = '<%=Session["LastName"]%>';
</script>
But it's not working; when I debug, the variables contain exactly the literal text inside the quotes: http://prntscr.com/ag3wdo
Is your javascript inside a .js file or an .aspx (or equivalent)?
Based on your screenshot, it looks like even var fn = '<%= "test" %>'; wouldn't work.
It is inside a .js file.
That syntax: <%=Session["FirstName"]%> is a special asp.net syntax. It needs to be processed on the server, where it gets replaced with the value of that session value. Only then does this get sent to the browser.
Some important notes:
the browser doesn't understand that <%= ... %> syntax
the server needs to process it, so it must be in a file that is processed, like an .aspx or .ascx
a javascript file (.js) is sent as-is, without extra processing, so this syntax doesn't work here.
Most likely those tags are in a .js file, not an aspx or ascx page. Those tags only work on those extensions. Javascript files are sent as-is.
You can do a few things:
Create a setup stub in your page, maybe from your master page, that sets the variables in javascript so you can use them later on.
Load the file from an ashx handler and replace the session tags by hand.
Load the entire Javascript file in your page. This isn't really a good idea when the file is big.
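The first option might look like this in the .aspx page (the window.appSession name and site.js file are illustrative):

```html
<script type="text/javascript">
    // These <%= %> blocks are expanded on the server, so the browser
    // receives plain string literals:
    window.appSession = {
        firstName: '<%= Session["FirstName"] %>',
        lastName: '<%= Session["LastName"] %>'
    };
</script>
<script type="text/javascript" src="site.js"></script>
```

site.js can then read window.appSession.firstName without needing any server-side processing itself.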
You need to either have that script directly in an aspx page (not a .js file), or you need to set the values of the variables from a script in the aspx page.
Alternatively, you could send the js files to be evaluated by the .NET engine (create a handler), but that's not a good thing to do, as a general rule because it would process ALL js files and add overhead where you don't need it.
Last resort, you can set the values in a hidden field to then be accessed by javascript later. See Accessing Asp.Net Session variables in JS
Are there biblical passages supporting a "quasi-incarnation" attributed to the mystical union of the Holy Spirit and Mary?
The Logos, the Incarnate Word, became flesh in what is known as the "hypostatic union". Jesus Christ, the Second Person of the Most Holy Trinity, became flesh and dwelt among us, having two natures, fully God and fully Man: the God-Man.
In the case of the Holy Spirit, the Third Person of the Most Holy Trinity, scripture tells us that our bodies are the Temple of the Holy Spirit, but the dwelling place is the heart of our soul. So, specifically, it is not the physical heart; it pertains to the heart of our soul, as the Holy Spirit is a spirit.
I've encountered a comment comparing the full and mystical union of Mary and the Holy Spirit to a "hypostasis" or calling it "quasi-incarnate", meaning the Holy Spirit mysteriously takes "full and perfect union in the soul of Mary, mystically and in fullness"; the simple word "possession" is a good description.
My slight understanding on this phenomenon is "the Holy Spirit fully dwells in the Heart of the soul of Mary perfectly". While in most Christians the Holy Spirit can dwell only imperfectly because of our fallen nature that is subject concupiscence.
Is this the right understanding of the term "quasi-incarnation"?
My question is, is there a Catholic teaching on "quasi incarnation"?
And has the term "quasi-incarnation" been officially used by the Catholic Church to describe the mystical union of the Holy Spirit and Mary as His Spouse or Advocate?
Or, if there is no Catholic teaching yet, are there biblical passages that speak of a "quasi-incarnation" teaching?
St. Maximilian Kolbe termed the Blessed Mother the "quasi-incarnation of the Holy Ghost".
Fr. Karl Stehlin's Who Are You, O Immaculata? p. 50:
The nature of the union [of the Holy Ghost and Mary] consists in the union of wills. Mary identifies so thoroughly with the will of God that one can speak about a quasi-incarnation2 of the Holy Ghost in Mary.
The Third Person of the Most Blessed Trinity was not made flesh. Yet our human word "spouse" cannot express the reality of the relation between the Immaculata and the Holy Ghost. We can therefore say that the Immaculata is in a certain sense an "incarnation of the Holy Ghost". The Holy Ghost, whom we love, is in her, and through her we love the Son. The Holy Ghost is very little appreciated.3
2. Maximilian Kolbe always insisted that of course there can be no question here of a real incarnation of the Holy Ghost, which would be heretical. Instead he is searching for words and concepts that portray more profoundly the intimate relation between Mary and the Holy Ghost. Therefore the qualifier “quasi” is very important here, so as to make clear that there is only a certain analogy with the mystery of the Incarnation.
3. Conference dated February 5, 1941, in KMK p. 428
She and the Holy Ghost can both be properly called the Immaculate Conception (ibid. pp. 50-51):
We can say likewise that she is the greatest, most excellent, purest Temple of the Holy Ghost. Mary herself corroborates this truth when she defines herself at Lourdes: “I am the Immaculate Conception” and thus assigns to herself the title that in the strict sense is an attribute of God (“I am...”) and is applied in particular to the Holy Ghost, who within the Trinity is the eternally perfect, “immaculate” conception of the Father and the Son.
If among creatures a bride takes the name of her husband by the fact that she belongs to him, unites herself with him, makes herself like unto him and together with him becomes the source of new life, how much more should the name of the Holy Spirit, "Immaculate Conception", be the name of her in whom He lives with a love which is fruitful in the entire supernatural economy?4
4. Final article of February 17, 1941, KR 212-213
Thank you for directing me to this answer. What I read here seems very easily understood if the Holy Spirit is maternal. It seems the Holy Spirit's procession from the Father is one of bearance. The Holy Spirit appears to bear out the Father's "will". If the Holy Spirit's procession is as the eternal Matriarch, then the "overshadowing" of Mary would have been a dual Maternity, explaining the dual nature of Jesus Christ and the Paternal contributions of God the Father, by the maternal bearance of the Holy Spirit.
@Rick "It seems the Holy Spirit's procession from the Father is one of bearance." The Holy Spirit proceeds from both the Father and the Son, and He is related to the Father and the Son by the relation of passive spiration.
As you know, the Holy Spirit's dual procession (from the Father and the Son) has been questioned by the Eastern Orthodox Church since 1054. You are correct that the answer to this question would impinge upon the "filioque" ("and the Son").
@Rick the Holy Spirit does not proceed as "eternal matriarch" but is a procession of the Will.
It's somewhat difficult (for me) to understand your question, but it seems you might be asking whether the union between Mary and the Holy Spirit is of the same sort as the hypostatic union between the human and divine natures of Jesus Christ. If that's your question, the answer is no. Mary and the Holy Spirit are two persons, each having only one nature --- Mary's human nature and the Holy Spirit's divine nature --- whereas Jesus is one Person with two natures.
You are quite right that Mary, being sinless, was more perfectly united to the Holy Ghost than any of us, but this unity does not make them a single person.
Notice, for example, that the ancient creeds say that the second Person of the Trinity "was incarnate" or "was conceived ... and born ...", but they say nothing of the sort about the third Person. Likewise, the Gospel of St. John says that the Word became flesh, but says nothing of the sort about the Spirit. The Gosspel of St. Luke also describes Mary and the Holy Ghost as separate persons: Gabriel tells Mary at the annunciation that "the Holy Ghost shall come upon thee" (not that Mary shall come upon herself), and the account of the visitation says that when Mary arrived at Elizabeth's house, Elizabeth was filled with the Holy Ghost and started talking to Mary.
Finally, note that, in the decree defining the dogma of the immaculate conception, Mary's preservation from original sin was attributed to the merits of Christ's sacrifice on Calvary. If she had been hypostatically united to the Holy Spirit, no such preservation and no such merit would have been needed; sin would have been as impossible for her as it was for Christ.
I should explicitly say that I've answered here from the Catholic point of view. I think the Orthodox and most Protestant groups would agree with this, but there will surely be some denominations that disagree.
Quasi-incarnation: this is a hard phrase to digest; even the theologians working on the proposed 5th Dogma of Advocate, Mediatrix and Co-Redemptrix have had difficulty providing a clear and harmonized definition for the benefit of all the faithful.
The Vertex of Love
OCTOBER 8, 2012 BY JONATHAN FLEISCHMANN
Can a mere infant Christian comprehend this quasi-incarnation representation?
This phrase can be likened to St. Paul's words:
"Brothers, I could not address you as spiritual, but as worldly—as infants in Christ. I gave you milk, not solid food, for you were not yet ready for solid food. In fact, you are still not ready, for you are still worldly." (1 Corinthians 3:1-3)
If quasi-incarnation had been introduced into Lumen Gentium, and especially into the Catechism of the Catholic Church, its landscape could be significantly altered. For example, CCC 691 & CCC 692:
The proper name of the Holy Spirit
691 "Holy Spirit" is the proper name of the one whom we adore and glorify with the Father and the Son. The Church has received this name from the Lord and professes it in the Baptism of her new children.16
The term "Spirit" translates the Hebrew word ruah, which, in its primary sense, means breath, air, wind. Jesus indeed uses the sensory image of the wind to suggest to Nicodemus the transcendent newness of him who is personally God's breath, the divine Spirit.17 On the other hand, "Spirit" and "Holy" are divine attributes common to the three divine persons. By joining the two terms, Scripture, liturgy, and theological language designate the inexpressible person of the Holy Spirit, without any possible equivocation with other uses of the terms "spirit" and "holy."
Titles of the Holy Spirit
692 When he proclaims and promises the coming of the Holy Spirit, Jesus calls him the "Paraclete," literally, "he who is called to one's side," ad-vocatus.18 "Paraclete" is commonly translated by "consoler," and Jesus is the first consoler.19 The Lord also called the Holy Spirit "the Spirit of truth."20
If the Catholic Church embraces the teaching and reflections of St. Maximilian Kolbe regarding the mystical union of the Holy Spirit and Mary (and the Church already acknowledges and teaches that Mary is the Spouse of the Holy Spirit, as proclaimed by Pope Pius XII), then consider the following.
The Holy Spirit and Mary are united by virtue of their "union of wills", not by flesh as we humans are accustomed to in marriage, where we become "one flesh"; Mary and the Holy Spirit had only "one will". This can be further explained: Mary, by her fiat, surrendered her will to the will of the Father, which is also consubstantial with the Holy Spirit.
And Mary said: Behold the handmaid of the Lord; be it done to me according to thy word. - Luke1:38 DRA
But look closely: Mary surrendered her will according to Thy WORD/Logos only. God the Father respected Mary's free will; the fiat of Mary refers only to the Redemptive Mission of Christ and her role as Theotokos.
Pope St. John Paul II mentioned and contemplated Mary's second fiat "at the foot of the Cross", as the agonizing Jesus, before expiring, uttered the famous words in John's Gospel.
Woman, behold your son... behold your mother. - John19:26-27
Jesus, like the Abba Father who "entrusted" His only begotten Son to the Woman, now at the foot of the Cross looks at all the redeemed, who by the grace won by Christ will become adopted children of the Abba Father, and "entrusts" them also to the love and care of the Woman (Mother of all the Redeemed).
This is the second fiat, according to Pope St. John Paul II the Great.
The Redemptive Act or Mission belongs to Jesus Christ.
The Salvific Act or Mission belongs to the Holy Spirit.
And so, St. Kolbe contemplates that the action of Mary is also the action of the Holy Spirit: Mary, who cooperated in the mission of the Father's beloved Son Jesus Christ, now fully cooperates too in the mission of the Holy Spirit, her Spouse.
In closing, I will share my reflections on Abba Father, Jesus and the Holy Spirit.
Jesus walked the earth and "dwelt among us" by Mary's first fiat to the will of the Abba Father, Mary providing Jesus her "immaculate body", while the "immaculate human soul" of Jesus comes from the Abba Father.
The Holy Spirit walks the earth too; this time Mary gave her "immaculate soul" fully and perfectly, where her free will resides. Mary again silently pronounced her second fiat "at the foot of the Cross" to the will of the Father, as Jesus' words are the will of the Father.
Mary accepted her new role in the Salvific Mission of the Holy Spirit, this time Mary will be the Mother of not just a single child like Jesus in His humanity, but a "Spiritual Mother of all the Redeemed" to form another image of Christ.
This is the great mystery of Man's Salvation.
St. Louis de Montfort's teaching that wisdom is divided into uncreated wisdom and created wisdom, and the teaching of St. Alphonsus Liguori, a Doctor of the Church, on "another advocate", which pertains to the greatness of Mary's role as "full of grace", are greatly illumined by St. Maximilian Kolbe's teaching and reflections on "quasi-incarnation".
St. Kolbe now teaches that there are two Immaculate Conceptions.
The Holy Spirit is the Uncreated Immaculate Conception, and Mary is the Created Immaculate Conception.
This reflects St. Bernadette's revelation, when Mary said "I am the Immaculate Conception" and not "I am immaculately conceived".
PS: If the title of the Holy Spirit as the "quasi-incarnation" is clearly defined by St. Kolbe, it follows that the titles "Paraclete, Advocate and the Spirit of Truth", belonging to the Holy Spirit, are acted out by the Holy Spirit in Mary. (All my previous answers were vindicated: Mary is truly the "another advocate", created Wisdom and the Spirit of Truth who testifies.)
"Sweet Heart of Mary be my salvation." (Pieta Prayer Booklet)
Interesting diagram
What is WebViewTransport in Android?
What exactly is WebViewTransport, and what is it used for?
Check for host protected area and device configuration overlay
I'd like to know whether any sectors on my solid state drive are inaccessible due to
the host protected area (HPA)
or the device configuration overlay (DCO)
Is there a file in /proc/ I can read or any tool I can use to find out about HPA and DCO?
I'm on Arch Linux 5.9.14.
with hdparm
To find out about the host protected area, use hdparm's -N option, for example
sudo hdparm -N /dev/sda
yields this on my machine:
/dev/sda:
max sectors = 1953529856/1953529856, HPA is disabled
With --dco-identify we can find out about the device configuration overlay.
sudo hdparm --dco-identify /dev/sda
Example output:
/dev/sda:
DCO Checksum verified.
DCO Revision: 0x0002
The following features can be selectively disabled via DCO:
Transfer modes:
mdma0 mdma1 mdma2
udma0 udma1 udma2 udma3 udma4 udma5 udma6
Real max sectors: 1953529856
ATA command/feature sets:
SMART error_log security 48_bit
WRITE_UNC_EXT
SATA command/feature sets:
interface_power_management SSP
Let's focus on this line:
Real max sectors: 1953529856
Comparing this number with the "max sectors" line of hdparm -N, we can see that there is no sector hidden using DCO:
1953529856 - 1953529856 = 0
Symfony create new service as a new instance
I have a service defined with several dependency injections in the constructor. At some point I want to get the service as a new instance instead of the same instance already created. Note that normally I want the service to be shared, but in an edge case I want to create a new instance, so the shared option in the service definition is not applicable.
I can create a new object, but I then I would have to inject the dependencies manually, and I would prefer to let symfony to deal with it.
So how can I tell Symfony to return a service as a new instance?
Thank you.
As far as I know, there is no way to tell the Symfony Dependency Injection Container to return sometimes the shared instance and other times a new instance of a service.
By default, the services are shared, as you already found out. You can tell the container to create a not-shared service by setting the shared setting to false in your service definition:
# app/config/services.yml
services:
AppBundle\SomeNonSharedService:
shared: false
Armed with this knowledge, I think the solution for your issue is to create a duplicate of the shared service with a different name and mark it as not-shared as explained above. When you ask the container to get the duplicate, it will create a new instance every time.
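That duplicate might look like this in the service definitions (the service ids, class, and arguments are illustrative):

```yaml
# app/config/services.yml
services:
    app.some_service:                # default: shared, same instance each get()
        class: AppBundle\SomeService
        arguments: ['@logger']

    app.some_service.prototype:      # duplicate id, new instance each get()
        class: AppBundle\SomeService
        arguments: ['@logger']
        shared: false
```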
Do not attempt to create the duplicate as an alias of the original service; it doesn't work. The first thing the implementation of Container::get() does is search for the service ID provided as argument in the list of aliases, and use the ID of the original service instead if it finds it there.
I finally made a reset operation in the service, but I think your solution fits better. Thank you!
Can't use malloc twice in a function
I'm trying to write a simple HTTP server.
I wrote a simple function called 'handleConnection' to handle incoming connection.
I use two malloc calls. The first one is to receive the GET header and the second one is to extract the path from the GET header; the second one causes a malloc(): corrupted top size / Aborted error.
Here is the code:
int handleConnection(int sockfd)
{
struct sockaddr_in client;
socklen_t clientLength = sizeof(struct sockaddr_in);
int new_sockfd;
new_sockfd = accept(sockfd, (struct sockaddr*)&client, &clientLength);
// receive get header
int getHeaderSize = 0;
char *getHeader = malloc(getHeaderSize);
char tempBuffer;
// stop receiving while loop when matchedTerminators is equal to 2
int matchedTerminators = 0;
char terminators[2] = {'\r', '\n'};
// receiving while loop
while(matchedTerminators != 2)
{
recv(new_sockfd, (void *)&tempBuffer, 1, 0);
if(tempBuffer == terminators[0] || tempBuffer == terminators[1])
matchedTerminators++;
else
{
matchedTerminators = 0;
getHeaderSize++;
strcat(getHeader, &tempBuffer);
}
}
// If already received the get header
printf("%s\n", getHeader);
// extract the path(/) from get header
int pathSize = 0; // this value might increase later
char* path = malloc(pathSize); // when pathSize is increased, this malloc function causes the error
/*
Code to extract the path from get header
*/
// free malloc
free(path);
free(getHeader);
return 0;
}
Both of your malloc calls are allocating a zero-sized memory block. You need to determine the parameter sizes before calling the allocator.
But I increase getHeaderSize and pathSize in while loops
@SwamHtetAung: Which has no effect on what you've allocated. There's no magic connection between those variables and the allocated memory block.
Here you allocate 0 bytes of memory:
int getHeaderSize = 0;
char *getHeader = malloc(getHeaderSize);
Here you try and store a byte of data in the 0 bytes you allocated:
strcat(getHeader, &tempBuffer);
Bonus: strcat only works on null-terminated strings, and you never tried to make getHeader into a null-terminated string.
You may know this already, but it's a common beginner mistake: the computer does things in the order you tell it to, unless otherwise indicated. It does not do all the things at once. It does not go back and redo things it already did (unless you tell it to). If you do malloc(getHeaderSize) and getHeaderSize is 0, it allocates 0 bytes of memory. If you change getHeaderSize to 100 after that, you still allocated 0 bytes of memory, because the computer doesn't time-travel.
Thanks, I used realloc and it solved my problem.
int getHeaderSize = 0;
char *getHeader = malloc(getHeaderSize);
// ...
int pathSize = 0;
char* path = malloc(pathSize);
The two malloc calls allocate 0 bytes of memory, so when you try to access a value such as path[0], the access is out of the allocated range. That is why you get the malloc(): corrupted top size error, which means heap memory has been corrupted by an out-of-bounds write.
This answer would be better without the last sentence.
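The realloc-based fix the asker mentions can be sketched like this (the append_char name and the grow-by-one strategy are illustrative, not from the original post):

```c
#include <stdlib.h>
#include <string.h>

/* Append one character to a heap-allocated, NUL-terminated buffer,
 * growing it with realloc. realloc(NULL, n) behaves like malloc(n),
 * so the buffer may start out as NULL with *len == 0.
 * Returns the (possibly moved) buffer, or NULL on allocation failure. */
char *append_char(char *buf, size_t *len, char c)
{
    char *tmp = realloc(buf, *len + 2);  /* +1 for c, +1 for '\0' */
    if (tmp == NULL) {
        free(buf);                       /* don't leak on failure */
        return NULL;
    }
    tmp[*len] = c;
    tmp[*len + 1] = '\0';
    (*len)++;
    return tmp;
}
```

In the receive loop this would replace the strcat call: the buffer is grown before the byte is stored, so no write ever lands outside the allocation. Growing one byte at a time is simple but slow; doubling the capacity instead would cut down the number of realloc calls.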
What is the most efficient way to manage multiple level relationships using WordPress Pods CMS
This is my first time using Pods in Wordpress, so forgive me if this is a bit of an easy question to answer. I've done a bit of searching for an answer to this problem, but haven't found anything that answers my question specifically.
I have two different kinds of pods created.
Areas of Study: Such as Physics, Sociology or Computing
Case Studies: Details on case studies performed in various areas of study
Each Case Study Pod has a relationship field linking it to a specific Area of Study Pod. (In this example, assume that a specific case study can only be related to a single area of study. This may change in the future, but for now, this should suffice.)
Each Area of Study has a relationship field linking it to a specific WP Page (e.g. a detailed page that describes the details for the Physics Area of Study).
What I want to do is, while looping through various Case Studies, be able to obtain the link for the WP Page that its specific area of focus is related to.
I technically know HOW to do it, but I'm not sure it's the most efficient way of doing it.
This is what I have done, and it seems to be working:
<?php
$case_studies = new Pod('case_studies');
$case_studies->findRecords(null, 200);
$total_rows = $case_studies->getTotalRows();
if ($total_rows > 0) {
while ($case_studies->fetchRecord()) {
// RETRIEVE AREA OF FOCUS DETAILS
$aof = $case_studies->get_field('area_of_focus');
$aof = $aof[0];
// GET AREA OF FOCUS POD
$aofs = new Pod('areas_of_focus');
$params = array('where' => "t.id like '".intval($aof['id'])."'");
$aofs->findRecords($params);
if ($aofs->fetchRecord()) {
$page = $aofs->get_field('focus_page');
$area_of_focus_url = get_permalink($page[0]['ID']);
echo '<a href="'.$area_of_focus_url.'">'. $aof['name'].'</a>';
} else {
echo $aof['name'];
}
// CASE STUDY DETAILS GO HERE
}
}
?>
This seems like it's at least an O(n²) algorithm, if not more. I really don't know the internals of the Pod class, so it could be even more expensive than that. Also this will be executing several MySQL queries instead of a single query.
Would it be better if I used some MySQL joins in the initial case study query so that I don't have to have it run a bunch of other queries? Or are there any other functions within Pods itself, that can allow me to retrieve more details on relationship fields?
Any tips would be greatly appreciated.
Did you ever find a practical solution to this?
Not with PodsCMS. I just stopped using Pods all together. I use Advanced Custom Fields plugin < http://wordpress.org/extend/plugins/advanced-custom-fields/ > now and just create my own custom post types < http://codex.wordpress.org/Post_Types >. For what I've needed it for, it's been pretty efficient doing things that way.
You can actually go very deep with get_field by traversing relationships.
Try this out for size:
$page_ID = $case_studies->get_field('area_of_focus.focus_page.ID');
Pods will do all the work for you, you can go a ton of levels deep, and you can even do the same with findRecords (as of Pods 1.12)
It's been a while since I've looked into this. I'll have to give it a shot again when I get the chance. Thanks.
Choosing activation and loss functions in autoencoder
I am following this keras tutorial to create an autoencoder using the MNIST dataset. Here is the tutorial: https://blog.keras.io/building-autoencoders-in-keras.html.
However, I am confused by the choice of activation and loss for the simple one-layer autoencoder (which is the first example in the link). Is there a specific reason sigmoid activation was used for the decoder part, as opposed to something such as relu? I am trying to understand whether this is a choice I can play around with, or if it should indeed be sigmoid, and if so why? Similarly, I understand the loss is taken by comparing each of the original and predicted digits on a pixel-by-pixel level, but I am unsure why the loss is binary crossentropy as opposed to something like mean squared error.
I would love clarification on this to help me move forward! Thank you!
You are correct that MSE is often used as a loss in these situations. However, the Keras tutorial (and actually many guides that work with MNIST datasets) normalizes all image inputs to the range [0, 1]. This occurs on the following two lines:
x_train = x_train.astype('float32') / 255.
x_test = x_test.astype('float32') / 255.
Note: as grayscale images, each pixel takes on an intensity between 0 and 255 inclusive.
Therefore, BCE loss is an appropriate function to use in this case. Similarly, a sigmoid activation, which squishes its inputs to values between 0 and 1, is also appropriate. You'll notice that under these conditions, when the decoded image is "close" to the original input, BCE loss will be small. I found more information about this here.
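To make that concrete, here is a small sketch (not from the tutorial) of per-pixel BCE and MSE on targets in [0, 1]; with inputs normalized this way, both losses shrink as the reconstruction approaches the original:

```python
import numpy as np

def bce(y, p, eps=1e-7):
    """Mean binary cross-entropy between targets y and predictions p in [0, 1]."""
    y = np.asarray(y, dtype=float)
    p = np.clip(np.asarray(p, dtype=float), eps, 1 - eps)
    return float(np.mean(-(y * np.log(p) + (1 - y) * np.log(1 - p))))

def mse(y, p):
    """Mean squared error."""
    y = np.asarray(y, dtype=float)
    p = np.asarray(p, dtype=float)
    return float(np.mean((y - p) ** 2))

original = np.array([0.0, 1.0, 0.2, 0.9])      # normalized pixel intensities
close    = np.array([0.05, 0.95, 0.25, 0.85])  # good reconstruction
far      = np.array([0.9, 0.1, 0.8, 0.2])      # bad reconstruction

assert bce(original, close) < bce(original, far)
assert mse(original, close) < mse(original, far)
```

Note the clipping: BCE involves log(p) and log(1 - p), which is exactly why a sigmoid output, always strictly inside (0, 1) in practice, pairs naturally with it.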
I wrote about it here, but it was ages ago so I cannot find it now; BCE's properties as a function mean it's not the best choice for image data, even in greyscale. Unlike MSE, it is asymmetrically biased against overconfidence, so it systematically underestimates the values, needlessly dimming the output intensities. And, as this question shows, it causes unnecessary confusion on top.
Hmm. I think you may be correct in general, but for this particular use case (an autoencoder), it's been empirically and mathematically shown that training on the BCE and MSE objective both yield the same optimal reconstruction function: https://arxiv.org/pdf/1708.08487.pdf — but that's just a minor detail.
I cannot load the pdf for some reason, but I'm not surprised - the minima of both losses are the same if your goal is to autoencode a 1:1 match of intensities. It's just not always an optimal loss if your goal is to have a nice-looking image; e.g. MNIST would probably look best with most pixels being either 1 or 0 (in/not in the set of pixels for the character, basically learning a topology).