Firebase authentication for gmail with custom domain
I was wondering if I can restrict sign-in using Firebase Authentication for web to users with a certain email address; for example, I want only my colleagues in the company to sign in (we all have Gmail accounts with a custom domain).
If such a feature is not possible, I was thinking about validating the email address with JavaScript to make sure that it belongs to the same domain. Still, it would be nicer and more secure if there were a built-in feature to achieve it.
You can use contains() in Java. For example, if a user is writing his email in the EditText, you can add a condition: if (txt.contains("@customdomain.com")) { /* do whatever, authenticate user */ }
Thanks for the tip, but I forgot to mention that I'm using Firebase Authentication for a static website; I have edited the question now.
While you can't prevent users from signing in, you can limit access to database or files to users from a specific domain. See my answer to this question for an example for the realtime database.
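As a sketch of that approach (the domain and path here are illustrative, not from the original answer), a Realtime Database rule can check the domain of the signed-in user's email token:

```json
{
  "rules": {
    "companyData": {
      ".read": "auth != null && auth.token.email.endsWith('@customdomain.com')",
      ".write": "auth != null && auth.token.email.endsWith('@customdomain.com')"
    }
  }
}
```

Note that any Google account can still complete the sign-in flow; the rule only gates data access, which is exactly the point made above. In practice you would likely also check auth.token.email_verified.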
| common-pile/stackexchange_filtered |
listing directory contents with xargs and grep
I have a directory with lots of JSON and PDF files that are named in a pattern. I am trying to filter the files by name with the pattern \d{11}-\d\.(?:json|pdf) in the command. For some reason it is not working. I believe it is because xargs passes the arguments as one big string, or because when the input is split there is some whitespace, \n or null character left over.
ls | xargs -d '\n' -n 1 grep '\d{11}-\d\.(?:json|pdf)'
If I try just ls | xargs -d '\n' -n 1 grep '\d', it selects file names with digits in them; as soon as I specify the multiplicity regex, nothing matches.
are you planning to filter the list of filenames, or the contents of the files? Because running ... |xargs grep $pattern would run grep $pattern file1 file2 ..., and look at the contents of the files
It's unclear what you want to achieve. Do you just want to list the filenames? What are some examples of filename that you want to list and that you don't want to list?
You also don't want to parse the output of ls. You haven't clarified what the objective is, but if you are starting with wanting to find files that match a certain pattern(s), you are better off using something along the lines of find /path/to/directory -type f \( -name '*.json' -o -name '*.pdf' \)
@ilkkachu Yes. That would work as well. More clarity is needed on what is expected though.
@ilkkachu No, I am not looking inside the files, but rather on the filename. I am trying to apply the pattern on the filenames and filtering it.
@NasirRiley I am trying to filter the file names based on the pattern matches.
I have edited the question and made it clearer. I am not sure why it was also showing the matched file names with just \d as the regex. Does it look inside files as well as at filenames?
First, ls | xargs grep 'pattern' makes grep look for occurrences in
contents of files listed by ls, not in list of filenames. To look for
filenames it should be enough to do:
ls | grep 'pattern'
Second, grep '\d{11}-\d\.(?:json|pdf)' would work only with GNU grep
and -P option. Use the following syntax instead - it works with GNU,
busybox and FreeBSD implementations of grep:
ls | grep -E '[[:digit:]]{11}-[[:digit:]]\.(json|pdf)'
Third, parsing ls is not a good
idea. Use
GNU find:
find . -maxdepth 1 -regextype egrep -regex '.*/[[:digit:]]{11}-[[:digit:]]\.(json|pdf)'
or FreeBSD find:
find -E . -maxdepth 1 -regex '.*/[[:digit:]]{11}-[[:digit:]]\.(json|pdf)'
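To sanity-check the grep -E variant, one can create a few throwaway files (the names below are made up) and confirm that only the expected ones come through:

```shell
mkdir -p /tmp/filterdemo && cd /tmp/filterdemo
touch 12345678901-1.json 12345678901-2.pdf notes.txt 123-4.json
# Only the two names with 11 digits, a dash, one digit, and the right
# extension should be printed:
ls | grep -E '[[:digit:]]{11}-[[:digit:]]\.(json|pdf)$'
```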
Thanks, that worked. The one thing I was missing: the -P option.
You don't need any of that complexity. Just use a shell glob. This one is for shells such as bash that understand {x,y} braced alternatives:
ls *[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]-[0-9].{json,pdf}
If you want to do something with the matched files, don't take the output of ls but just use the glob to iterate across the files directly.
That is a lot of digit regex :). I was looking for a more consistent regex-based solution as the directory has a lot of files. Thanks for your reply.
It's not a regex; it's a glob used directly by the shell. Try it
Symfony 3 Doctrine Repository Join and array
I'm working with Symfony 3 and have a problem with a query builder in my repository.
How it works:
I have an Announce entity with a lot of information such as name, game, etc.,
and a User linked by a OneToOne relation.
Here is part of my Announce entity:
class Announce
{
/**
* @var int
*
* @ORM\Column(name="id", type="integer")
* @ORM\Id
* @ORM\GeneratedValue(strategy="AUTO")
*/
private $id;
/**
* @ORM\OneToOne(targetEntity="Wolk\UsersBundle\Entity\User")
* @ORM\JoinColumn(nullable=false)
*/
private $user;
/**
* @var string
*
* @ORM\Column(name="game", type="string", length=255)
*/
private $game;
Etc...
And here is my repository method:
public function byResearch($role, $game, $region, $language, $rank, $gamemode, $matchtype, $platform)
{
$qb = $this->createQueryBuilder('u')
->select('u');
if ($language != null) {
$qb->join('u.user' , 's');
$qb->addSelect('s');
$qb->andWhere('s.language like \''.$language.'\'');
}
if ($gamemode!= null) {
$qb->andWhere('u.gamemode = \''.$gamemode.'\'');
}
if ($matchtype!= null) {
$qb->andWhere('u.matchtype = \''.$matchtype.'\'');
}
if ($region!= null) {
$qb->andWhere('u.region = \''.$region.'\'');
}
if ($rank!= null) {
$qb->andWhere('u.rank like \'%'.$rank.'%\'');
}
if ($platform!= null) {
$qb->andWhere('u.platform like \'%'.$platform.'%\'');
}
if ($game!= null) {
$qb->andWhere('u.game = \''.$game.'\'');
}
if ($role!= null) {
foreach($role as $itm1)
{
$qb->andWhere('u.role like \'%'.$itm1.'%\'' );
}
}
$qb->andwhere('u.active = :active');
$qb->setParameter('active', '1');
$qb->orderBy('u.date', 'DESC');
return $qb->getQuery()->getResult();
}
My problem is with the language search for my user.
People will search for French-, German-, or English-only announces, and every user has all their languages stored in the user entity:
User.Language = Array('fr', 'en') // something like that
And this is what I actually get on my website (from the profiler):
SELECT
a0_.id AS id_0,
a0_.game AS game_1,
a0_.platform AS platform_2,
a0_.Availability AS Availability_3,
a0_.language AS language_4,
a0_.Description AS Description_5,
a0_.category AS category_6,
a0_.goal AS goal_7,
a0_.Rank AS Rank_8,
a0_.active AS active_9,
a0_.premium AS premium_10,
a0_.level AS level_11,
a0_.visit AS visit_12,
a0_.region AS region_13,
a0_.role AS role_14,
a0_.exp AS exp_15,
a0_.lan AS lan_16,
a0_.gamemode AS gamemode_17,
a0_.matchtype AS matchtype_18,
a0_.date AS date_19,
f1_.username AS username_20,
f1_.username_canonical AS username_canonical_21,
f1_.email AS email_22,
f1_.email_canonical AS email_canonical_23,
f1_.enabled AS enabled_24,
f1_.salt AS salt_25,
f1_.password AS password_26,
f1_.last_login AS last_login_27,
f1_.confirmation_token AS confirmation_token_28,
f1_.password_requested_at AS password_requested_at_29,
f1_.roles AS roles_30,
f1_.id AS id_31,
f1_.gender AS gender_32,
f1_.birthday AS birthday_33,
f1_.subscribedate AS subscribedate_34,
f1_.Country AS Country_35,
f1_.language AS language_36,
f1_.timezone AS timezone_37,
a0_.user_id AS user_id_38,
f1_.image_id AS image_id_39,
f1_.premium_id AS premium_id_40
FROM
announce a0_
INNER JOIN fos_user f1_ ON a0_.user_id = f1_.id
WHERE
f1_.language LIKE 'fr'
AND a0_.platform LIKE '%PC%'
AND a0_.game = 'lol'
AND a0_.active = ?
ORDER BY
a0_.date DESC
And I really don't know what is wrong with the language search.
Is the join maybe wrong for a OneToOne relation?
Or maybe a WHERE ... LIKE is not the right solution for an array?
I've been searching for a day without finding what isn't working, so I hope you can help me with this :)
If you need more information, I'll provide it with pleasure.
sorry, i don't get what your actual issue/question is. could you rephrase it a bit?
if ($language != null) {
$qb->join('u.user' , 's');
$qb->addSelect('s');
$qb->andWhere('s.language like \''.$language.'\'');
}
This is my problem: I get no result when I make a search and don't know why. My search has language 'fr', for example, and my announce has 'fr', but I get no result. I really don't know why.
You forgot to add percent signs around language value. It should be:
$qb->andWhere('s.language like \'%'.$language.'%\'');
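As an aside (a sketch, not part of the original answer): the same filter can be written with a bound parameter, which avoids both the quote-escaping gymnastics and SQL injection:

```php
$qb->andWhere('s.language LIKE :language')
   ->setParameter('language', '%'.$language.'%');
```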
Thank you a lot; I spent so much time trying to resolve this ;)
Emulating CSS3 border-radius and box-shadow in IE7/8
I'm working on HTML for a small web application; the design calls for a content area with rounded corners and a drop shadow. I've been able to produce this with CSS3, and it works flawlessly on Firefox and Chrome:
However, Internet Explorer 7 and 8 (not supporting CSS3) is a different story:
Is there an easy, lightweight JavaScript solution that would allow me to either 1) use IE-specific features to achieve this, or 2) modify the DOM (programmatically) in such a way that adds custom images around the content area to emulate the effect?
In my opinion there is no good solution to this problem besides doing it with images. The JS rounded-corner scripts place elements in your corners with the background image/color to create this effect in IE; I don't think that's a proper solution. I would use a solution with images, or wait for IE9 :P PS: I like this design.
This is my method, I use the conditionals to target CSS files to IE browsers.
Say you have your div with the id #page_container. In your regular master.css or css3.css file, you would give it your width, height, rounded corners and drop shadow with styles.
Now, when IE hits your page, it will pull in the condition css you had set up. For that same div#page_container, you may alter the width a bit, height, maybe some padding, then give it a background-image to make it look like the drop shadow, rounded-corner version.
So your head will have this:
<head>
<link rel="stylesheet" type="text/css" href="master.css" />
<!--[if lte IE 8]> <link rel="stylesheet" type="text/css" href="ie.css" /> <![endif]-->
</head>
In the master.css file, you would have this definition for your main div:
div#page_container {
width: 960px;
height: auto;
padding: 10px 10px 10px 10px;
background: #ccc;
drop-shadow: whatever...
rounded-corner: whatever...
}
Now, in your ie.css file: because it is referenced second, its definition cascades over the first, so you can alter it a bit:
div#page_container {
width: 960px;
height: auto;
padding: 15px 15px 15px 15px; /* this is key */
background: #ccc url(/path/to/image.file) no-repeat top left;
}
Add just enough extra padding so the drop shadows fit in with your background-image. Because it cascades down, it will overwrite the original 10px padding you had, expanding the box model to fit in your extra shadow graphics.
Couple benefits to this method include:
Only IE will see this definition and the call to the image. If this is a high volume app, that will save on bandwidth and any delays associated with the call.
Likewise, because you didn't hard-code in the rounded corner graphics that every browser would see, your Firefox and Safari users won't need to hit the server with extra image calls.
No need to add in yet another javascript plug-in that checks for IE, creates new markup, time, etc...
I've chosen to go with this solution as it seems the cleanest. I'll be incorporating a touch of IE-only JavaScript to assist with some of the DOM-level modifications to make this a little cleaner as well.
@Adam Maras - wise choice. I've used different plugins on different projects and always ran into problems because the plugin affected some of my other elements. For the last year I have used a similar approach to this, and I can say that it pays off on huge projects. I would go with a plugin for a simple < 5 page site.
This may have been the way the author went, but I think some of the plugins (particularly pie.htc) have come a long way and deserve an additional look. This method wastes development time cutting images for browsers that are losing market share, rather than including a solution (like pie.htc) that works in IE6-8 with minimal fuss and trouble. I resisted the plugins for a long time myself, but they have become an invaluable tool in the arsenal now.
Check out this post: http://www.smashingmagazine.com/2010/04/28/css3-solutions-for-internet-explorer/
It covers specifically rounded corners and box shadow for CSS3 in IE7/8.
+1 looks like a nice solution to me. I have tested them in IE6, 7 and 8, and they are pretty slow, but they work. (Especially the rounded corners took long for just 4 corners; I wonder how it covers an entire page.)
The question is whether these solutions will interact to give me both rounded corners and drop shadows. I will investigate.
You can combine the two techniques. The problem with this one is that your server must be able to serve HTC files.
The HTC files are a bit of pain but once you've got everything organized, it works like a charm.
I think a VML solution is probably the way to go. Looking at the code in the htc file, it could be minified and sped up a little.
The real good solution would be this, adding images will increase your server requests, slow down your page load time, etc.
This answer would be more useful if it summarized the pertinent techniques from that URL.
First of all I'd like to mention that there is no perfect solution for this until IE9, where CSS border-radius is going to be implemented.
Here are the different solutions you have until then:
You could use one of the many JS scripts that simulate rounded corners. But none of them implements the shadow properly. Here is the list of the scripts I tried and my conclusions.
All of these scripts have something in common: they place additional elements in your HTML to give you the illusion of rounded corners.
DD Roundies: This script is very lightweight and works pretty well. It works without any framework and plays nice with jQuery & Prototype (I assume it works well with the others too, but I can't tell for sure). It uses the CSS3 properties on browsers that support CSS3, and uses the same hack as all the others for IE. The anti-aliasing on this one works very well.
Edit: I have to correct myself here. This one works with an HTC file. It does not place additional elements in your HTML.
Curvy Corners and the jQuery
Plugin Curvy Corners: I like this one too. The anti-aliasing works very well too, and it plays nice with background images. But it does not play nice with CSS3 shadows, it does not check whether your browser supports CSS3, and it always uses the ugly solution of adding elements to your DOM.
Nifty Corners & jquery
Corner: Both have bad anti-aliasing and the corners look very edgy. jQuery Corner has trouble handling background images.
Here is the reason why none of them is a proper solution in my opinion:
Screenshot of the Curvy Corners DOM mess: http://meodai.ch/stackoverflow/curvy.png
Screenshot of the Nifty Corners DOM mess: http://meodai.ch/stackoverflow/nifty.png
There are a few others, but I don't think they are worth mentioning here.
As you can see, they add a lot of elements to your DOM. This can cause trouble if you want to add rounded corners to a huge number of elements; it can make some older browsers/computers crash. For the shadows it's pretty much the same problem. There is a jQuery plugin that handles shadows on boxes and fonts:
http://dropshadow.webvex.limebits.com/
My conclusion: if I am doing a small-budget job, IE users just get edges and no shadows. If the client has some money to spend, I do it with CSS only and make images for every corner. If the corners absolutely have to be there but there is no time or money, I use one of the mentioned JS scripts, DD_roundies by preference. Now it's up to you.
PS: IE users are used to ugly interfaces; they're not going to notice that the corners and shadows are missing anyway :P
+1 for the 'PS' plus I think CSS should be allowed graceful degradation if the newer/cooler features aren't present. I also think IE users deserve to suffer a little, but that's just me being vindictive... =)
+1 for the PS too. Couldn't agree more. IE is the bane of all web developers.
It was just released and it's in beta but check it out: http://css3pie.com/
CSS3 PIE is amazing! It's in the beta 2 stage!
This is amazing! Haven't checked ie7 yet, but it sure looks exactly like it should in ie8.
I've started using the .htc script found here:
CSS3 support for Internet Explorer 6, 7, and 8
It's the simplest implementation of CSS3 for IE6+ that I've seen.
.box {
-moz-border-radius: 15px; /* Firefox */
-webkit-border-radius: 15px; /* Safari and Chrome */
border-radius: 15px; /* Opera 10.5+, future browsers, and now also Internet Explorer 6+ using IE-CSS3 */
-moz-box-shadow: 10px 10px 20px #000; /* Firefox */
-webkit-box-shadow: 10px 10px 20px #000; /* Safari and Chrome */
box-shadow: 10px 10px 20px #000; /* Opera 10.5+, future browsers and IE6+ using IE-CSS3 */
behavior: url(ie-css3.htc); /* This lets IE know to call the script on all elements which get the 'box' class */
}
I have been using CSS3 PIE, which is pretty easy to implement: just add behavior: url(pie.htc); to the CSS rule and you're good to go. It does it all for you and also includes support for border-image, gradients, box-shadow, rgba and a few others. The source is at: http://css3pie.com/
for drop-shadow in IE use:
.shadow {
background-color: #fff;
zoom: 1;
filter: progid:DXImageTransform.Microsoft.Shadow(color='#969696', Direction=135, Strength=3);
}
for rounded corners use DD_roundies as mentioned below; just 9 KB is a good compromise to get rounded corners in a second! ;-)
And of course, for applying IE-specific features programmatically, use conditional comments! ;-)
To allow graceful degradation I bet you should use this script called CssUserAgent from http://www.cssuseragent.org/
Nifty Corners Cube produces nice results and is supposed to be backwards compatible all the way to IE5.5.
There is a jquery plugin that modifies the DOM to produce rounded corners. Check it out here:
http://blue-anvil.com/jquerycurvycorners/test.html
Their example page rendered for me in IE, Chrome and FF. Using Firebug you can see that the plugin introduces a bunch of 1px-by-1px divs that create the effect.
How to stop all services except ssh?
In command line, how can I stop all services from my Ubuntu (Server in this particular case), except SSH?
And how can I list all services to be sure all of them were stopped?
possible duplicate of How to enable or disable services?
I seriously doubt you want to stop all services but ssh.
I am doing an operation that is extremely low level and, yes, I need to do exactly this.
This is not a duplicate of that question.
What you are looking for is called single user mode. It is a state where only the bare essential services required for the machine to run are running. You enter it either by booting with an 's' argument given to the kernel, or you can switch to it using init s. sshd is not normally considered an essential service though, so it would be stopped. To fix this, you need to edit /etc/init/ssh.conf and add an 'S' to the list of runlevels it should start in and not be stopped in, so it looks like this:
start on runlevel [S12345]
stop on runlevel [!S12345]
Instead of editing ssh.conf, use an Upstart override file.
@muru, good point
I seriously doubt that your system will be still in a useful state after disabling everything but sshd.
For a status list of System V services do:
sudo service --status-all
For Upstart services:
sudo initctl list
Disable anything that has a + or is listed as start/running with the appropriate commands. To state the blatantly obvious: if you do this via ssh, "service network stop" or the like won't do you any good.
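As a sketch of that idea (the service names below are purely illustrative; on a real box the list would come from the commands above), skipping ssh while stopping everything else could look like:

```shell
# Illustrative list of running services; replace with real output.
services="cron rsyslog ssh apache2"
stopped=""
for name in $services; do
    [ "$name" = "ssh" ] && continue
    echo "stopping: $name"     # on a real system: sudo service "$name" stop
    stopped="$stopped $name"
done
stopped=${stopped# }
```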
Believe me, I need to do exactly this.
You do realize that SSH is of no use to you, without the network service?
Of course I assume you know what you're about to do (and wish good luck), but I think the warning had to be given for others reading this.
Generic Oracle Lookup Validation Function
I have several tables in a particular schema that contain just an id and a description. I wonder if it is possible to write one generic function which would read something like:
create or replace FUNCTION tab_lookup (key_field char,key_value char,from_table char,return_field char) RETURN char IS
a varchar2(1000);
BEGIN
select &return_field into a
from &from_table
where &key_field=key_value;
return(a);
exception
when others
then
return('*ERR*');
END;
I want to use it inside a package application which only 50 users will be using.
You can always set up foreign key constraints.
Foreign keys are already there; I am just trying to speed up front-end development.
You do know that this will save you exactly 1 character per invocation? a := tab_lookup('keyfield', 'keyvalue', 'mytable', 'returnfield'); vs select returnfield into a from mytable where keyfield = 'keyvalue';
Sorry, I could not get your point.
"You do know that this will save you exactly 1 character per invocation?"
I modified your version by changing it to use dynamic SQL, and changed the input parameters' datatype to VARCHAR2:
CREATE OR REPLACE FUNCTION tab_lookup (key_field VARCHAR2,
key_value VARCHAR2,
from_table VARCHAR2,
return_field VARCHAR2,
return_type VARCHAR2)
RETURN VARCHAR2 IS
result_a varchar2(1000);
query_string VARCHAR2(4000);
/*version 0.1*/
BEGIN
query_string := 'SELECT '||return_field||
                ' FROM '||from_table||
                ' WHERE '||key_field||' = :key_value ';
IF(return_type = 'SQL') THEN
result_a := query_string;
ELSE
-- this line will not work in Forms 6i; remove the USING key_value part
EXECUTE IMMEDIATE query_string USING key_value into result_a;
END IF;
RETURN (result_a);
EXCEPTION
-- add DBMS_ASSERT exceptions
WHEN
NO_DATA_FOUND THEN
RETURN(NULL);
WHEN
TOO_MANY_ROWS THEN
RETURN('**ERR_DUPLICATE**');
WHEN OTHERS
THEN
RETURN('*ERR_'||SQLERRM);
END;
Why would there be single quotes to escape? The values being concatenated can't contain them, and the bind variable doesn't need escaping. It might be worth checking the passed values are at least valid identifiers, with DBMS_ASSERT, before concatenating to avoid SQL injection.
It would be very nice of if you could demonstrate the DBMS_ASSERT in context
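A sketch of what that could look like (untested here; the identifier names are the function's own parameters): each identifier is passed through DBMS_ASSERT before being concatenated, so anything that is not a legal SQL name raises an error instead of being injected into the query.

```sql
-- SIMPLE_SQL_NAME / QUALIFIED_SQL_NAME raise an error (e.g. ORA-44003,
-- "invalid SQL name") for input that is not a legal identifier.
query_string := 'SELECT ' || DBMS_ASSERT.SIMPLE_SQL_NAME(return_field)
             || ' FROM '  || DBMS_ASSERT.QUALIFIED_SQL_NAME(from_table)
             || ' WHERE ' || DBMS_ASSERT.SIMPLE_SQL_NAME(key_field)
             || ' = :key_value';
```

The raised exception could then be caught alongside the existing NO_DATA_FOUND and TOO_MANY_ROWS handlers.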
How may Myanmar modify the constitution and keep off the military from politics?
In January 2019, the National League for Democracy pushed for constitutional reform, but was unsuccessful because any changes required 75% approval in the legislature, and 25% of seats are reserved for the military.
Details: https://en.wikipedia.org/wiki/2020_Myanmar_general_election
Ms Suu Kyi's party has won 79.5% of the seats in the bicameral legislature: 396/498 = 79.5%.
Details: https://www.voanews.com/east-asia-pacific/aung-san-suu-kyi-nld-win-second-landslide-election-myanmar
Does it mean they can change the constitution now? (79.5% > 75%)
Or does it mean they still can't change it, as 25% is reserved for the military? So 79.5% of 75% = 59.625%. Is 100% of the 75% necessary to change the constitution?
Which is correct : 1 or 2?
The second of your statements is true; elected representatives are unable to amend the constitution unilaterally. An amendment to the constitution has to be approved by more than 75% of representatives, and in certain cases must also be subsequently confirmed in a nation-wide referendum, and be supported by more than half of those eligible to vote - see section 436 of the 2008 Myanmar Constitution.
As the constitution also guarantees that 25% of seats in both the lower and upper houses (Pyithu Hluttaw and Amyotha Hluttaw) are controlled by the military, in sections 109(b) and 141(b), the military has a veto over constitutional change - an amendment to the constitution cannot go ahead without the votes of at least some of the unelected Defence
Services personnel serving in the legislature.
For a recent example of this power, consider the vote against the amendments proposed in 2015, which would have lowered the threshold of representatives required to pass a constitutional amendment to 70%, as well as altering the qualifications for President which currently block Aung San Suu Kyi from that role. When the proposals were defeated, San Suu Kyi had the following response:
"I am not surprised with the result," Suu Kyi told reporters after the
vote. "This makes it very clear that the constitution can never be
changed if the military representatives are opposed." She said she
didn't see the vote as a loss, since the result had been anticipated,
so her supporters should not lose hope.
Requests are redirected to login.aspx?ReturnUrl=
I have implemented a web service using ServiceStack in Visual Studio. Running the service from the VS debugger works just fine. I have just tried to deploy it to a Debian machine using XSP4. The service makes use of logging, and from what I can tell the service is up and running: a log file is created when I start the service, but any request I make does not work. For instance, I make the following request using a browser:
http://<IP_ADDRESS>/Activity/5b1e5316-8ea5-4ba5-aaee-7f40151b80d3/Unit
But the browser is being redirected to:
http://<IP_ADDRESS>/login.aspx?ReturnUrl=%2fActivity%2f5b1e5316-8ea5-4ba5-aaee-7f40151b80d3%2fUnit
I have implemented my own authentication using a global request filter that I add in the Configure method. I am very confused why the request is redirected to login.aspx. Also, in the log file I see the following:
Error 2013-01-10 00:07:53.2631 NotFoundHttpHandler <IP_ADDRESS> Request not found: /login.aspx?ReturnUrl=%2fActivity%2f5b1e5316-8ea5-4ba5-aaee-7f40151b80d3%2fUnit
Does anybody have any idea what may cause this behaviour? Here is the code that adds the global filter:
this.RequestFilters.Add((httpReq, httpResp, requestDto) =>
{
try
{
var userCreds = httpReq.GetBasicAuthUserAndPassword();
if (userCreds == null)
{
httpResp.ReturnAuthRequired();
return;
}
var userName = userCreds.Value.Key;
var userPass = userCreds.Value.Value;
if (!TryResolve<IAuthenticationProvider>().AuthenticateUser(userName, userPass))
{
httpResp.ReturnAuthRequired();
}
return;
}
catch (Exception ex)
{
log.Error(ex);
throw new ApplicationException(ex.Message, ex);
}
});
This looks like an ASP.NET redirection issue in your Web.Config. ServiceStack doesn't redirect to any login.aspx page.
I have the same issue but with MVC. the authentication is set to None in web.config as well.
I had the same problem with MVC after adding the . I added the following to web.config (system.webServer/modules)
@sam if you were to post your comment as an answer, I would upvote it ;)
I just figured it out. I added
<authentication mode="None" />
to the Web.config like so:
<system.web>
<!-- mode=[Windows|Forms|Passport|None] -->
<authentication mode="None" />
</system.web>
The documentation can be found here: msdn.microsoft.com/en-us/library/aa291347(v=vs.71).aspx
I used the snippet below to solve my woes with .NET Forms auth overriding the 401 I'm trying to return via ServiceStack.NET. On every request, at the beginning, if it's AJAX, immediately suppress the behavior. This flag was created by MS for just this purpose. http://msdn.microsoft.com/en-us/library/system.web.httpresponse.suppressformsauthenticationredirect.aspx
Note that many other solutions that show up when searching tell you disable .NET auth, which is simply not feasible in the majority of cases. This code works with no other modifications.
protected void Application_BeginRequest()
{
HttpRequestBase request = new HttpRequestWrapper(Context.Request);
if (request.IsAjaxRequest())
{
Context.Response.SuppressFormsAuthenticationRedirect = true;
}
}
What does += mean in this context?
def get_initials(fullname):
xs = (fullname)
name_list = xs.split()
initials = ""
for name in name_list: # go through each name
initials += name[0].upper() # append the initial
## ^^ what is happening here?
return initials
What is the += in this context? Is it incrementing the value in the list?
There is no increment in this code; += is not an increment but string concatenation here.
Does this answer your question? What does it mean += in Python?
The line initials += name[0].upper() # append the initial adds the first character to a string, the process:
Split a string into a list (So john doe becomes ['john', 'doe'])
Iterate over each item in that list
For each item in that list, append its first character, capitalized, to the string
For example, for john, get the first character j and capitalize it as J
Return the initials (JD in this case)
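Putting those steps together, a minimal runnable version of the function (with the redundant xs variable dropped) makes the concatenation visible:

```python
def get_initials(fullname):
    initials = ""
    for name in fullname.split():    # go through each name
        initials += name[0].upper()  # += concatenates strings here; nothing is incremented
    return initials

print(get_initials("john doe"))            # JD
print(get_initials("ada maria lovelace"))  # AML
```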
How can I set a custom height and width for a UIAlertController?
So I have created a UIAlertController in my Xcode project. Now I want to know if it's possible to set the width and height of the alert window, because the default UIAlertController is too small. Even when I set the frame of the UIAlertController, the alert that pops up still has the default height and width.
Here's my code:
func passwordAlert() {
let alert = UIAlertController(title: "type in some text", message: "", preferredStyle: .alert)
alert.view.layer.cornerRadius = 0
alert.view.layer.frame.size = CGSize(width: 300, height: 500)
alert.addAction(UIAlertAction(title: "add", style: .default, handler: { (action) in
alert.dismiss(animated: true, completion: nil)
}))
self.present(alert, animated: true, completion: nil)
}
No. You don't customize UIAlertController. You create your own component if needed (plenty of them in GitHub/CocoaControls/Carthage/Cocoapods, etc.)
And then I have to import my own customized alert controller?
It won't be a UIAlertController... it will be your own UIView component that you can set up the way you want.
Using threads for parallel processing in Java
I need a few of the functions in my program to run simultaneously. These processes return records, but the output of one is the input to the other. In such a case, if at some point function A takes time to output a record to function B, function B needs to wait until function A provides records as input. Can I achieve this simply by using thread functionality such as wait, join, etc., or is there another way to achieve the same thing?
Edited:
As per the suggestions below, if I use the producer-consumer algorithm with BlockingQueue, ExecutorService, Future and CountDownLatch, can I achieve all the functionality I requested?
Check out the classes in package java.util.concurrent
If you have a very simple task you could just have a volatile variable that will act as a flag. If you have more complex functionality you will need to have a concurrent variable for the data used by both threads. There are plenty implementations for concurrency in Java libraries so just pick something from there.
As mentioned above, you can use a blocking queue with a producer-consumer setup
OR
you can use a CountDownLatch from Java's concurrency utilities to solve your problem.
How CountDownLatch works?
CountDownLatch.java class defines one constructor inside:
// Constructs a CountDownLatch initialized with the given count.
public CountDownLatch(int count) { ... }
This count is essentially the number of threads, for which latch should wait. This value can be set only once, and CountDownLatch provides no other mechanism to reset this count.
The first interaction with CountDownLatch is with main thread which is goind to wait for other threads. This main thread must call, CountDownLatch.await() method immediately after starting other threads. The execution will stop on await() method till the time, other threads complete their execution.
The other N threads must have a reference to the latch object, because they need to notify the CountDownLatch object that they have completed their task. This notification is done by the method CountDownLatch.countDown(); each invocation decreases the initial count set in the constructor by 1. So, when all N threads have called this method, the count reaches zero, and the main thread is allowed to resume its execution past the await() method.
Below is a simple example. After the Decrementer has called countDown() 3 times on the
CountDownLatch, the waiting Waiter is released from the await() call.
public static void main(String[] args) throws InterruptedException {
    CountDownLatch latch = new CountDownLatch(3);

    Waiter waiter = new Waiter(latch);
    Decrementer decrementer = new Decrementer(latch);

    new Thread(waiter).start();
    new Thread(decrementer).start();

    Thread.sleep(4000);
}
public class Waiter implements Runnable{
CountDownLatch latch = null;
public Waiter(CountDownLatch latch) {
this.latch = latch;
}
public void run() {
try {
latch.await();
} catch (InterruptedException e) {
e.printStackTrace();
}
System.out.println("Waiter Released");
}
}
public class Decrementer implements Runnable {
CountDownLatch latch = null;
public Decrementer(CountDownLatch latch) {
this.latch = latch;
}
public void run() {
try {
Thread.sleep(1000);
this.latch.countDown();
Thread.sleep(1000);
this.latch.countDown();
Thread.sleep(1000);
this.latch.countDown();
} catch (InterruptedException e) {
e.printStackTrace();
}
}
}
In your case you can use Callable to create the thread instead of Runnable, as you need to get the return value from one thread and pass that value to the second thread.
In most cases you do not need to use wait etc. All you need to do is choose a good, safe structure to communicate between your threads.
In this specific case I would suggest one of the concurrent queue implementations, perhaps a BlockingQueue such as ArrayBlockingQueue.
+1. yes.. using BlockingQueue is perhaps the simplest solution.
This looks like the producer-consumer problem. As suggested by others you can use a BlockingQueue. Here is an example of how to use it:
public static void main(final String[] args) {
final ExecutorService producer = Executors.newSingleThreadExecutor();
final ExecutorService consumer = Executors.newSingleThreadExecutor();
final BlockingQueue<Integer> workpieces = new LinkedBlockingQueue<>();
producer.submit(new Runnable() {
@Override
public void run() {
final Random rand = new Random();
for (;;) {
try {
workpieces.put(rand.nextInt());
Thread.sleep(1000);
} catch (final InterruptedException e) {
Thread.currentThread().interrupt();
return;
}
}
}
});
consumer.submit(new Runnable() {
@Override
public void run() {
for (;;) {
try {
System.out.println("Got " + workpieces.take());
} catch (final InterruptedException e) {
Thread.currentThread().interrupt();
return;
}
}
}
});
}
It generates a random number every second in the producer-thread which is printed by the consumer-thread.
Java's Fork and Join looks suitable for the usecase specified in your Question.
See http://docs.oracle.com/javase/tutorial/essential/concurrency/forkjoin.html
Have a look at BlockingQueue classes and producer/consumer patterns.
The first thread is getting the work unit from an input blocking queue and putting its output to an output blocking queue (with size restrictions).
The second thread then uses this output queue as its input.
With this method you can also easily adjust the number of threads.
Ensure that the workload per work unit is not too small.
This is similar to the producer-consumer problem. You can use Java's BlockingQueue.
The process A will enqueue its results and the process B will wait until A's output is ready in the queue. When output of A is available, then B can read and consume it.
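The hand-off described here is language-agnostic. As a minimal sketch of the same idea in Python (using the standard library's `queue.Queue`, which blocks the consumer exactly like Java's `BlockingQueue`):

```python
import queue
import threading

def producer(q):
    # A produces records and enqueues them for B
    for record in range(5):
        q.put(record)
    q.put(None)  # sentinel: no more records

def consumer(q, results):
    # B blocks inside q.get() until A provides a record
    while True:
        record = q.get()
        if record is None:
            break
        results.append(record * 2)  # stand-in for real processing

q = queue.Queue()
results = []
t1 = threading.Thread(target=producer, args=(q,))
t2 = threading.Thread(target=consumer, args=(q, results))
t2.start(); t1.start()
t1.join(); t2.join()
print(results)  # [0, 2, 4, 6, 8]
```

The sentinel value plays the role of "A is done producing"; no explicit wait/notify is needed because the blocking queue does that coordination internally.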
You can use a BlockingQueue between producer and consumer threads. The producer will keep adding results to the queue while it is not full; concurrently, the consumer thread can process pending messages from the queue.
| common-pile/stackexchange_filtered |
Querying One Dimensional JSON string
I'm trying to check to see if a one dimensional array string like:
[1,2,3,4,5]
contains a certain value, like 4.
I know with multi-dimensional arrays I could do something like:
JSON_VALUE(columnName, '$.key')
but for the life of me, I can't figure out how to search a keyless json string.
I've tried:
WHERE JSON_VALUE(columnName, '$') = 1
WHERE JSON_VALUE(columnName, '$.') = 1
WHERE 1 IN JSON_VALUE(columnName, '$')
WHERE 1 IN JSON_VALUE(columnName, '$.')
and nothing works.
SQL 2016 - the version of SQL I have.
Yes - my version of MS SQL SERVER is 2016... i feel like we're focusing very heavily on something that is not worth focusing on lol.
When it comes to JSON, the specific dbms is important!
It's a string, can you use like or charindex?
and do what like ... (columnName LIKE ',1,' OR columnName LIKE '[1,' OR columnName LIKE ',1]') .... seems like a JSON function would be muuuuch easier
Well as per the usual it depends caveat, it depends. If the requirement is to simply ascertain if a value exists, as the description suggests, then why overcomplicate it.
I feel like doing multiple LIKE this or that or this other thing for each possible variation is overcomplicating it.... it would be great if i could do something like JSON_VALUE(columnName, '$') = 1
Assuming that the string '[1,2,3,4,5]' is in a column in your table, you could use an EXISTS with OPENJSON:
SELECT V.YourColumn
FROM (VALUES('[1,2,3,4,5]'),('[7,8,9]'))V(YourColumn)
WHERE EXISTS (SELECT 1
FROM OPENJSON(V.YourColumn)
WITH (Value int '$') OJ
WHERE Value = 4);
OPENJSON isn't a function in my version of ms sql.
Dumb question, but why is the WITH line required? This seems to return the same with or without it in SQL Server 2016.
@Brds actually openjson is in Sql2016 - using exists would actually work fine
openjson requires compatibility set to 130 or higher, so it's reasonable that OP can't see it (a DBA could change this with ALTER DATABASE DatabaseName SET COMPATIBILITY_LEVEL = 130).
It's not "required" @EdmCoff , but I'd rather the OPENJSON return an int than an nvarchar(MAX) which then needs to be implicitly converted to an int.
OPENJSON isn't a function, @Brds , it's an operator, and it is available in SQL Server 2016, as Stu mentioned; in fact it was introduced with that version.
@Larnu Thanks for clarifying about the WITH. As I mentioned previously, many default installs of SQL Server 2016 do not have OPENJSON available (so it will throw a "OPENJSON is not a recognized built-in function name" error) unless the compatibility is explicitly set.
"As I mentioned previously, many default installs of SQL Server 2016 do not have OPENJSON available" By default the compatibility level of a SQL Server 2016 instance is 130 @EdmCoff , so the complete opposite of this statement is true. By Default they will have it. It is only if someone lowers the compatibility that it won't be.
Alright, thanks for clarifying. I guess I was wrong about the default compatibility. I think I over-interpreted the line "Compatibility level 120 may be the default even in a new Azure SQL Database" at https://learn.microsoft.com/en-us/sql/t-sql/functions/openjson-transact-sql?view=sql-server-ver15 and assumed it also applied outside Azure.
Yep, Azure SQL Databases have a default of 140 now, @EdmCoff, and have for some time. I've raised an Issue for the document.
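Stepping back from the version discussion: OPENJSON shreds the JSON array into a rowset, so the EXISTS query above is the rowset equivalent of a plain membership test. In Python terms (a sketch using the stdlib json module):

```python
import json

row = "[1,2,3,4,5]"       # the column value from the question
values = json.loads(row)  # OPENJSON turns this array into rows, much like this parse
print(4 in values)        # True
print(6 in values)        # False
```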
| common-pile/stackexchange_filtered |
Calculating difference between two DateTime objects in C#
I have two DateTime objects in C#, say birthDateTime and curDateTime.
Now, I want to calculate the difference in years between the current datetime and someone's birthday. I want to do something like
int years = curDateTime- birthDateTime;
//If years difference is greater than 5 years, do something.
if(years > 5)
....
possible duplicate of How do I calculate someone's age in C#?
Sounds like you are trying to calculate a person's age in the way people usually respond when asked for their age.
See Calculate age in C# for the solution.
The subtraction operator will result in a TimeSpan struct. However, the concept of "years" does not apply to timespans, since the number of days in a year is not constant.
TimeSpan span = curDateTime - birthDateTime;
if (span.TotalDays > (365.25 * 5)) { ... }
If you want anything more precise than this, you will have to inspect the Year/Month/Day of your DateTime structs manually.
Note that you should not use this solution if you want to compute "age" in the way people usually state a person's age.
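The "inspect Year/Month/Day manually" approach the answer alludes to looks like this (a sketch in Python for brevity; the same tuple comparison ports directly to C#'s DateTime properties):

```python
from datetime import date

def age_in_years(birth: date, today: date) -> int:
    # Subtract birth year, then correct if this year's birthday hasn't happened yet
    years = today.year - birth.year
    if (today.month, today.day) < (birth.month, birth.day):
        years -= 1
    return years

print(age_in_years(date(1979, 12, 2), date(2024, 11, 30)))  # 44
print(age_in_years(date(1979, 12, 2), date(2024, 12, 2)))   # 45
```

This matches how people usually state their age, with no dependence on an average-days-per-year constant.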
You can use the AddYears method and then check a condition on the Year property.
var d3 = d1.AddYears(-d2.Year);
if ((d1.Year - d3.Year) > 5)
{
var ok = true;
}
You will need to handle invalid years, e.g. 2010 - 2010 will cause an exception.
if (birthDateTime.AddYears(5) < curDateTime)
{
// difference is greater than 5 years, do something...
}
DateTime someDate = DateTime.Parse("02/12/1979");
int diff = DateTime.Now.Year - someDate.Year;
This is it, no exceptions or anything...
| common-pile/stackexchange_filtered |
failure of floating point interrupt MFC Windows 10
We have an MFC product built in Visual Studio 2012/13 in C++ that performs many mathematical calculations. We use floating point interrupts through the coprocessor to help speed up the calculations, by setting up try/catch blocks. Those interrupts work well in Windows 7, but one of the updates for Windows 10 broke the sequence. While the "signal" call still sets up the correct pointer for the "handler" callback, Windows 10 ignores it. (The same code works in a console program; it only fails in MFC code, even the simplest stripped example). We have a workaround using SEH, but we would still like to know why this happened. Thanks for any information.
The relevant code is, where the handler throws a FloatException (note that an interrupt never gets to the handler in Windows 10):
_clearfp();
_controlfp_s(0, _EM_INEXACT, _MCW_EM);
signal(SIGFPE, (fptr)HandleMathError);
try
{
...calculate
}
catch (FloatException &e)
{
...recover
_fpreset();
_clearfp();
_controlfp_s(0, old_control_word, _MCW_EM);
signal(SIGFPE, SIG_DFL);
return;
}
For completeness, in order to see the MFC failure, the required code spans at least 9 files or so. First generate a simple MFC application using: single document, doc/view, MFC standard interface, static library, no compound doc, no database, classic menu. Then, in MainFrm::OnCreate, insert the interrupt code above with "raise(SIGFPE)" as the only calculation. Add definitions for HandleMathError and FloatException as:
typedef void( *fptr )( int );
void __cdecl HandleMathError( int sig, int subcode );
class FloatException : public std::exception
{
public:
FloatException( int _foo )
{
foo = _foo;
}
int foo;
};
void __cdecl HandleMathError( int sig, int subcode )
{
throw FloatException( subcode );
return;
}
With this code, the interrupt handler is entered on Windows 7, and is not entered in Windows 10. A test for the handler being present using signal is positive on both OSs; Windows 10 is ignoring the handler.
"it only fails in MFC code, even the simplest stripped example" - Why don't you post a [mcve] then? It's also important to know your floating point compiler settings, as well as the target architecture. I haven't heard about code using interrupts and a coprocessor in more than 20 years. What setup are you using? (I'm sure this cannot possibly be a 80387, although that really is the only chip I know that goes by the term coprocessor and is programmed through interrupts.)
I use the following settings: /EHsc, /fp:precise, /fp:except, and /arch:IA32. The last switch forces the use of the embedded x87. The code I use is 10-100 times faster than deeply embedded searches for NaN and INF. The simplest MFC repro is as said: generate the MFC app, embed a calculation that does "raise(SIGFPE)", and go.
I once had to diagnose a problem that was traced back to a device driver changing the FP flags and not restoring them... the bug would manifest only after printing to a HP printer. Not sure if that's even a possibility in more modern versions of Windows.
@MarkRansom, Thanks for the thought. I have had this Windows 10 failure on three customer systems and two of my own, including a "clean" isolated system with nothing added. That still leaves a lot of running services though. Further, the return of an inserted "signal" just before a failure still points to the correct "handler". I'll try to look at the control word again.
| common-pile/stackexchange_filtered |
Setting text field as first responder on app launch in Swift iOS
I have a simple app with a couple text entry fields. When I launch the app, nothing happens until I tap a field or use a control. How do I set the focus to be on the first text field and open the keyboard when the app launches, saving the user a tap? (new to swift - is this setting the first responder? - not sure how that works)
Agreed with @Sh_Khan's answer,
Just verify one more thing: disable the hardware keyboard. You'll need to toggle your keyboard with ⌘K or via the steps below:
iOS Simulator -> Hardware -> Keyboard
Uncheck Connect Hardware Keyboard
yes, sometimes this occurs with simulators
Agreed @Sh_Khan
Thank you both, I'll give it a try!
in viewWillAppear/viewDidAppear do
@IBOutlet weak var firstTexF:UITextField!
override func viewDidAppear(_ animated: Bool) {
super.viewDidAppear(animated)
firstTexF.becomeFirstResponder()
}
Right answer, just a minor styling comment - you can (and it's preferable to) omit the self and simply access firstTexF directly.
| common-pile/stackexchange_filtered |
Find $\lim_{x\to 0^+} (\sin x)^{(1/\ln(x))}$,without L' Hospital (homework)
I got this homework assignment to find the limit without L'Hospital's rule. I used $e$ like this: $$(\sin x)^{1/\ln x} = \exp\left(\frac{\ln \sin x}{\ln x}\right)$$ and now I want to find this limit: $\lim_{x\to 0^+}\frac{\ln\sin x}{\ln x}$, but I couldn't get it through.
Please learn to typeset using MathJax.
@TPace Thanks for the edit, but you should use $\sin x, \ln x, \dots$
Note that the limit must be considered for $x\to 0^+$. You first step is correct then you need some manipulation to get the result.
@daniel let correct the OP with lim $x\to 0^+$
@gimusi , true, I am not yet familiar with the MathJax, but i'll get used to it. Nice solution by the way, thanks for that.
@daniel You need to try to get familiar with it. For the solution try always to use standard limits when it is possible. You are welcome, Bye!
@JohnMa It's good as it is now. Thanks for the help
Note that for $x\to 0^+$
$$\Large{(\sin x)^{\frac1{\log x}}=e^{\frac{\log \sin x}{\log x}}=e^{\frac{\log \frac{\sin x}{x} +\log x}{\log x}}\to e^1=e}$$
indeed
$${\frac{\log \frac{\sin x}{x} +\log x}{\log x}}={\frac{\frac{\log \frac{\sin x}{x}}{\log x} +1}{1}}\to 1$$
Nice, I didn't see this way!
Yes we can just use standard limit in this case, but also your solution is nice.
$$(\sin x)^{1/\log(x)}=\exp\left(\log(\sin(x))\cdot\frac{1}{\log(x)}\right)$$
$$=\exp\left(\frac{\log(\sin(x))}{\log(x)}\right)$$
Now we can use that $\sin(x)=x+O(x^2)$:
$$\exp\left(\frac{\log(x+O(x^2))}{\log(x)}\right)\to\exp\left(\frac{1}{1}\right)=e$$
The simplest is with equivalents:
$\sin x\sim_0 x$, hence
$$\ln(\sin x)\sim_0\ln x,\enspace\text{so }\enspace \frac{\ln(\sin x)}{\ln x}\sim_0\frac{\ln x}{\ln x}=1,$$
whence $\;\displaystyle\lim_{x\to 0}\,\mathrm e^{\tfrac{\ln(\sin x)}{\ln x}}=\mathrm e^1=\mathrm e$.
Yes indeed that's precisely what is going on, anyway I think that asymptotic equivalence must be handled with great attention since is easy for a beginner to fall in error with it for limits which are governed by terms of order greater than 1.
The only delicate point here is composing by log on the left: the function to be composed must not approach $1$. This being said, it's quite powerful, as it removes all unrelated details.
| common-pile/stackexchange_filtered |
Pass multiple file names captured in a variable to a command (vim)
I am trying to create a script that automatically opens any files containing a particular pattern.
This is what I achieved so far:
xargs -d " " vim < "$(grep --color -r test * | cut -d ':' -f 1 | uniq | sed ':a;N;$!ba;s/\n/ /g')"
The problem is that vim does not treat the command output as a list of separate files, but as one single filename:
zsh: file name too long: ..............
Is there an easy way to achieve it? What am I missing?
The usual way to call xargs is just to pass the arguments with newlines via a pipe:
grep -Rl test * | xargs vim
Note that I'm also passing the -l argument to grep to list the files that contain my pattern.
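The pipeline above does two things: collect the names of files whose contents match, then hand that list to a command. A rough Python equivalent of the collection step (a hypothetical stand-in for `grep -Rl`; the resulting list could be passed to `subprocess.run(['vim', *matches])`):

```python
import os
import tempfile

def files_containing(root, needle):
    """Walk `root` and return sorted paths of files whose contents contain `needle`."""
    matches = []
    for dirpath, _dirs, names in os.walk(root):
        for name in names:
            path = os.path.join(dirpath, name)
            try:
                with open(path, encoding="utf-8") as fh:
                    if needle in fh.read():
                        matches.append(path)
            except (UnicodeDecodeError, OSError):
                continue  # skip binary/unreadable files, like grep -I
    return sorted(matches)

# tiny self-check in a throwaway directory
root = tempfile.mkdtemp()
with open(os.path.join(root, "a.txt"), "w") as fh:
    fh.write("this file mentions test")
with open(os.path.join(root, "b.txt"), "w") as fh:
    fh.write("this one does not")
print(files_containing(root, "test"))
```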
Use this:
vim -- `grep -rIl test *`
-I skip matching in binary files
-l print file name at first match
Try to omit xargs, because this leads to incorrect behaviour of vim:
Vim: Warning: Input is not from a terminal
What I usually do is append the following to a command that produces a list of files:
> ~/.files.txt && vim $(cat ~/.files.txt | tr "\n" " ")
For example :
grep --color -r test * > ~/.files.txt && vim $(cat ~/.files.txt | tr "\n" " ")
I have the following in my .bashrc to bind VV (twice V in uppercase) to insert that automatically :
insertinreadline() {
READLINE_LINE=${READLINE_LINE:0:$READLINE_POINT}$1${READLINE_LINE:$READLINE_POINT}
READLINE_POINT=`expr $READLINE_POINT + ${#1}`
}
bind -x '"VV": insertinreadline " > ~/.files.txt && vim \$(cat ~/.files.txt | tr \"\\n\" \" \")"'
| common-pile/stackexchange_filtered |
.net core & .net framework in the same team
Our development team has many .net framework 4.6 projects (VS 2015).
We want to start a new .net core project to eventually deploy on linux.
We have installed VS 2017 and the .net core 2.0 preview.
But how can we reuse the existing library projects in this new one ?
We research but it is not clear for us :
- we need to change the target of the old projects from ".Net Framework 4.6" to ".NetStandard 1.x" ? (and solve the incompatibility)
- or we can use them like that ? (but how?)
Thanks
As far as I know, you cannot use .NET desktop libraries as they are; you have to rebuild them against .NET Core and probably spend some time 1) making your code run with the limited available facilities; 2) testing your code to ensure that it behaves in exactly the same way it did on the original .NET. For example, again as far as I know, LINQ uses Expression and is compiled to IL in .NET, while in .NET Core it is interpreted. Which can sometimes be a noticeable difference.
@Sergey.quixoticaxis.Ivanov: .NET Core doesn't interpret expression trees as far as I'm aware... and as of .NET Core 2.0, desktop assemblies should be usable too, I believe - that's the plan.
@JonSkeet I'm currently developing UWP software utilizing EFCore and the statement that LINQ is being parsed came from one of MS guys on EFCore github. Maybe it was true for the older versions, maybe it's only .Net Native specific, maybe I got something wrong. I'm not certain.
I hope that the plan becomes reality soon, we'd be more then happy to move our server side to Linux but our first attempts were too time consuming so we pushed this idea away, at least for now.
@Sergey.quixoticaxis.Ivanov: I can easily imagine that being true for UWP but not other platforms. https://github.com/dotnet/corefx/issues/10470 suggests it's UWP...
@JonSkeet yeap, .Net Native issue, thanks for the link.
Microsoft publishes official guidelines for the porting process: https://learn.microsoft.com/en-us/dotnet/articles/core/porting/
To summarize:
Deal with your dependencies (by migrating them), recursively
Retarget your projects. Applications move to .NET Core, libraries move to .NET Standard, where possible.
Use some helpful tooling to verify your ports
Test
So, to share things between .NET Framework and .NET Core, your libraries should target .NET Standard, as much as possible. Otherwise, you could possibly share the code and have to do multiple builds - build once targetting .NET Framework and again targetting .NET Core.
You can use/reference your old projects only if you target Full Framework in your new projects (which is not the case if you are going to run them on Linux).
If you started with preview you should convert you old projects to .Net Core projects and either target .NET Core 2.0 Preview or NetStandard 2.0 Preview. If you are not going to reference/use your old projects outside your application it might be better to target .NET Core 2.0 Preview because it might provide more API than NetStandard 2.0 Preview.
| common-pile/stackexchange_filtered |
start Ubuntu 18.04 an error : end kernel panic -not syncing: attempted to kill init! exitcode=0x0000000b occurs by stack smashing detected
I downgraded my Ubuntu 18.10 to 18.04 today. After rebooting and logging into the downgraded 18.04, I met an error as follows:
Then I ran the following commands in the terminal hoping to fix it.
sudo apt remove --purge locales
sudo apt install locales
Unfortunately, the second command failed with the error: *** stack smashing detected ***: <unknown> terminated. From then on, whenever I ran any command, this error occurred.
So I shut down and restarted the computer by pressing down the power button. Finally, the 18.04 OS failed to start with the following errors.
Hmm.. As you probably feel, you screwed it up big time. You should never, ever remove the locales package. And you should really not try to "downgrade" either. You should have saved your $HOME somewhere and made a fresh install of 18.04 instead. So I guess you need to start it all over.
Thanks a lot! I have reinstalled Ubuntu 20.04 on my computer. Fortunately, when I installed Ubuntu 18.10 earlier, I had a separate partition for home, so my data was not lost. Well, what is the role of 'locales'?
Well, you can run apt show locales to get a hint. It's built by the glibc source package, which is one of the core packages that simply must be there on a working system.
| common-pile/stackexchange_filtered |
Information on the moderator candidates
Yi Jiang has put together a nice site that use the Stack Overflow API to display information on the moderator candidates.
Click here to see all the information.
The information presented is:
What the candidate wrote when nominating themselves.
Answers/Questions on the main site and its meta
Up-vote/down-vote ratio
Average reputation earned per post
Average reputation earned per day.
Participation in other Stack Exchange sites.
It displays what Yi Jiang considers to be noteworthy badges and whether the nominee has the badge and a summary of their recent activity.
The information is presented as is with no commentary.
It came about from this question on MSO Moderator nomination possibly useful statistics
Wow, that's really neat! Kudos to Yi Jiang.
Could there be the badge descriptions on the page as well? I mean, the names aren't as cryptic as something like an Xbox Live Achievement, but I still had to go and look to see what 'Sportsmanship' was.
And could there be a possibility for it to be linked to the election page?
Your first concern has been updated - hovering over the badges will now produce a tooltip that explains what the badge is.
| common-pile/stackexchange_filtered |
Is there an event connected to the choice of a datalist <option>?
I have a (dynamic) list of choices in an <input> field:
<input list="choices">
<datalist id="choices">
<option>one</option>
<option>two</option>
</datalist>
Is there an event fired right after the choice of the <option> is made? (which I would like to catch/use in Vue.js if this matters). This would be when the left button of the mouse is clicked in the scenario below:
you can trigger jQuery's .change event whenever the option value is changed
@ISHIDA: you mean after the option was chosen, not after each keystroke (and therefore a change of the content of the input field)? That would be great.
Yes I have provided an example in answers
Try this. it will check if the value exist in the input box and alert is triggered when you select an option as you have requested
<input list="choices" id="searchbox">
<datalist id="choices">
<option id="one">one</option>
<option>two</option>
</datalist>
$("#searchbox").bind('input', function () {
if(checkExists( $('#searchbox').val() ) === true){
alert('item selected')
}
});
function checkExists(inputValue) {
console.log(inputValue);
var x = document.getElementById("choices");
var i;
var flag = false;
for (i = 0; i < x.options.length; i++) {
if(inputValue == x.options[i].value){
flag = true;
}
}
return flag;
}
This is not what I am looking for, please see https://jsfiddle.net/WoJWoJ/pnfa10gk/ - the alert is fired after each change of the input box - the exact thing I want to avoid
you want to capture mousedown event when selecting options ?
That could be one option - I was hoping for an event which is triggered when an option from the is chosen (clicked on) -- at that moment I would clear up the search field and use the chosen option
@WoJ "clear up the search field and use the chosen option" - isn't that what chosing the dropdown option does by default?
@Bergi: I was not clear: when the option is chosen, I want to clear up the search field (so that it is blank) and use the chosen option elsewhere. The problem is that I can monitor the field in Vue and have its current content (not obligatory the content of an option) but I do not know when the actual option is chosen (as opposed to a partial find in the serach field). I do that currently by checking the content of the input field (I have values on the options, all of the start with XXX so when the input field starts with XXX I assume it is a chosen value, and not a currently typed string)
@Bergi: this is why I want to monitor specifically for the action of choosing an option in the dropdown list
@WoJ check this jsfiddle- http://jsfiddle.net/AAfuX/215/
In this it will trigger for every option value selected as you asked.
You are welcome@WoJ.
Can I know the reason for down vote, whoever gave it, So that I can improve.
Thnk you :)
While exact string match is a clever solution, that still causes problems, namely this will fire the instant anything matches. It's not the same as clicking an item in the list. Additionally, you could never type "foobar" if both "foo" and "foobar" is in the list, because after typing the second 'o' in foo would trigger it.
It looks like options in a datalist do not trigger any event, so you need to code your own solution.
I was able to do something similar to fire events with options in datalist, using @input in the element
For example in your code you can try a solution starting with this
<input list="choices" @input="checkInput" >
<datalist id="choices">
<option>one</option>
<option>two</option>
</datalist>
In the method used with @input you can check the changes in the input value.
checkInput(e){
var inputValue = e.target.value;
//Your code
},
For example you can check if the input value, after clicking one option of your datalist, is a valid option, and once that condition is met you can launch another method or whatever you want.
Your solution is to use Vue?
@LawrenceCherone Yes, the solution I described, works only if you use Vue.
| common-pile/stackexchange_filtered |
how to fix: sqlite3.dll not found on delphi data explorer?
I am following the tutorial at:
http://docwiki.embarcadero.com/RADStudio/XE5/en/Mobile_Tutorial:_Using_SQLite_%28iOS_and_Android%29
but I stuck at the 6th step (test the connection) which always give me:
failed: "sqlite3.dll not found"
In fact I have the sqlite3.dll 64 bit which I put it in c:\Windows\SysWOW64 folder and sqlite3.dll 32 bit which I put it in c:\Windows\System32
I do believe that the sqlite is run in my system because I can use it using FireDAC.
What should I do to fix this?
Thank in advance for the help.
This was asked a couple of weeks ago I think. Have a look for the dupe.
You have the DLLs the wrong way around. Despite the folder names, system32 is for 64-bit Dlls and SysWOW64 for 32-bit ones - another stroke of genius by MS. Make sure the one you want is findable via your system path.
@MartynA, thank you very much for the information. After put the files on its right folder (64 bit to the System32 and 32 bit to the SysWoW64) now the testing is success.
So I consider the problem is solved. Thank you.
Good, I'll write that up as an answer and I'd be grateful if you could accept it.
As I said in a comment, You have the DLLs the wrong way around. Despite the folder names, system32 is for 64-bit Dlls and SysWOW64 for 32-bit ones. So, you should swap the DLLs' locations.
Make sure the one you want is findable via your system path.
Would be much better not to put 3rd party DLLs like this into the system folders.
@David Heffernan: I agree. It's a pity some installers (s/ware I mean) still do this when they've no need to.
| common-pile/stackexchange_filtered |
Python MySQLdb execute table variable
I'm trying to use a variable for a table name. I get the error "... near ''myTable'' at line 1
I must not be escaping this right. The double '' in the error seems to be a clue, but I don't get it.
db = MySQLdb.connect("localhost","user","pw","database" )
table = "myTable"
def geno_order(db, table):
cursor = db.cursor() # prepare a cursor object using cursor() method
sql = "SELECT * FROM %s"
cursor.execute(sql, table)
results = cursor.fetchall()
I am sure there are several duplicates. I picked the one I knew, but if someone else finds one that is even more like the OP's problem by all means use it instead.
You can't use a parameter for the table name in the execute call. You'll need to use normal Python string interpolation for that:
sql = "SELECT * FROM %s" % table
cursor.execute(sql)
Naturally, you'll need to be extra careful if the table name is coming from user input. To mitigate SQL injection, validate the table name against a list of valid names.
cursor.execute("INSERT INTO {0}(var_name)VALUES({1});".format('table_name','"value"')) #if you need to insert more variables
| common-pile/stackexchange_filtered |
Font Awesome webfont shows square on content property (works fine locally, stuck in prod environment)
I'm using Font Awesome 4.7 in an Angular 6 project, so I use the path of fontAwesome.css from the "node_modules" folder in my angular.json.
My issue is that icons using CSS properties do not work in the prod environment, but they work fine locally (icons using CSS classes on HTML elements work fine in both local and prod environments).
When I inspect the element locally, I see that icons using CSS properties are shown like this:
.panel-control-collapse:before {
content: "";
}
and in my css file its like that :
.panel-control-collapse:before {
content: "\f068";
}
but for icons using css on html like that :
<a class="fa fa-upload" title="Importer en masse"></a>
works fine
so my issue is that I'm forced to use icons via CSS properties, but it's stuck with that square :(
Is the font uploaded to the server?
Are you using sass?
Paulie_D Yes.
MiomirDancevic Yes
I resolved the issue, but I didn't understand where it came from (I think the problem was with my SCSS file). So what I did is:
I removed this part of CSS from my style.scss file:
.panel-control-collapse:before { font-family: FontAwesome; content: "\f068"; }
And I put it in a new CSS file created at the same root as my style.scss file.
And then it works
angular.json
"assets": [
"src/favicon.ico",
"src/assets",
{
"glob": "**/*",
"input": "./node_modules/font-awesome/fonts/",
"output": "./assets/fonts/"
}
]
styles.scss
$fa-font-path: "/assets/fonts/";
@import '~font-awesome/scss/font-awesome';
Thanks for your answer, but it didn't work. I resolved the issue but I didn't understand where it came from (I think the problem was with my SCSS file), so what I did is:
I removed this part of CSS from my style.scss file:
.panel-control-collapse:before { font-family: FontAwesome; content: "\f068"; }
and I put it in a new CSS file created at the same root as my style.scss file.
and then it works
| common-pile/stackexchange_filtered |
which is the best way to store and relogin with username and password?
Hello, I have an application in which all activities can be reached only after login (like Facebook).
I want to write out the workflow:
Status: launching the application for the first time
Login screen -> get username and password -> check against the DB -> if valid, save username and password to SharedPreferences -> MainActivity
Status: the app was closed without logout and restarted
Login screen -> get username and password FROM SHAREDPREFERENCES -> check against the DB -> if valid, save username and password to SharedPreferences -> MainActivity
I want to skip the "check against the DB" step.
Is it safe to make the application behave as: "if username and password are already in SharedPreferences, directly show MainActivity, because they have already been checked and saved to SharedPreferences"?
You need to account for the account being deleted or the password being changed, so no, that will not work.
The best way to save a user's credentials is to implement a token system like OAuth on your server, and store the token instead of the username/password.
If you can't modify the server, then your best bet is to store username and an encrypted password, and authenticate with the server every time the user opens the app.
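To make the token recommendation concrete, here is a minimal, hedged sketch (an in-memory Python toy, not production auth — all names, the hashing scheme, and the token store here are illustrative assumptions; a real implementation would use a proper OAuth server, salted password hashing, and token expiry):

```python
import hashlib
import secrets

# Toy "server" state: username -> sha256(password), plus issued tokens.
USER_DB = {"alice": hashlib.sha256(b"s3cret").hexdigest()}
TOKENS = {}

def login(username, password):
    """Exchange credentials for a token once; return None on failure."""
    if USER_DB.get(username) == hashlib.sha256(password.encode()).hexdigest():
        token = secrets.token_hex(16)
        TOKENS[token] = username
        return token
    return None

def resume_session(token):
    """On app restart, present the stored token instead of the password."""
    return TOKENS.get(token)
```

The client then persists only the token (e.g. in SharedPreferences); deleting the account or changing the password can invalidate tokens server-side, so the client never caches a stale password.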
| common-pile/stackexchange_filtered |
Why is FEM theory only ever written down for simplices?
I noticed that in most theoretic literature (Braess, Ciarlet, ...) for Finite Elements, the method is only described in a detailed way for simplex triangulations, while other forms of hexahedral methods are often omitted or only briefly mentioned.
Is there any theoretical difference between FEM theory for simplices and for hexahedra? Do theoretical results (convergence, error estimates) not carry over to hexahedra, or is it just that the generalization is so obvious that it is left to the reader?
In my opinion the basics stay the same but there are some differences worth commenting upon.
The basic principles of defining a local coordinate system, integrating local dense contribution matrices using quadrature rules, and assembling them into global sparse matrices are identical. Often, hexahedral basis functions arise from taking tensor products of some underlying one-dimensional basis, and are integrated using tensor products of one-dimensional quadrature rules, which exposes some opportunities for optimizing code.
One notable difference is polynomial completeness. High-order tetrahedral bases tend to arise from some sort of Pascal's triangle / Pascal's tetrahedron arrangement. For instance, a 2D gradient-conforming basis needs to span {$1$, $x$, $y$, $x^2$, $xy$, $y^2$} to be complete to second order, and these 6 functions can be arranged very tidily on a 6-node triangle (3 vertices, 3 edge midpoints). In contrast, the most natural way to write a second-order basis for a 2D quadrilateral is via a tensor product of two 1D lines, leading to a span of {$1$, $x$, $y$, $x^2$, $xy$, $y^2$, $x^2y$, $xy^2$, $x^2y^2$}. This contains more functions / requires more work, but is not any more accurate in the asymptotic sense. This overcompleteness becomes more exaggerated as you raise the order or migrate from 2D to 3D.
Despite this overcompleteness at the element level, hexahedral elements are generally more efficient in total cost, simply because they are "larger" and fill space more efficiently. Model accuracy is typically a function of mesh size $h$. A hexahedron with sides of length $h$ takes up $h^3$ volume, while a tetrahedron with sides of length $h$ only takes up a volume of $h^3/6$. So given some computational domain with volume $V$ and a desired accuracy/mesh size $h$, I expect to need 6x as many tetrahedra as hexahedra. Even though this difference is just a constant factor, we're talking about the size of the input here, so it will be further magnified by any downstream algorithmic complexities (for instance, if you're using solver/preconditioning techniques that scale superlinearly).
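The factor-of-six estimate above can be written out explicitly (taking the corner-tetrahedron volume $h^3/6$ used in the text):

```latex
N_{\mathrm{hex}} \approx \frac{V}{h^3},
\qquad
N_{\mathrm{tet}} \approx \frac{V}{h^3/6} = \frac{6V}{h^3} \approx 6\,N_{\mathrm{hex}}
```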
I do want to comment briefly on accuracy. A surprising aspect of some basis functions is that element shape / distortion can affect their order of accuracy, because of how they depend on the pullback/jacobian. As a concrete example, consider 2D Nedelec (curl-conforming) functions. The curl of these functions (ie what's going into the "stiffness" matrix, to borrow a term) is proportional to the jacobian. For a triangle, the jacobian is constant, so you get curl complete to a constant (this is what you want, a lowest order "$\star$-conforming" element should have "$\star$" complete to a constant). In contrast, the jacobian for a quadrilateral is only constant if the shape mapping is affine (ie the quadrilateral is a parallelogram), which means this element is non-convergent on general meshes (ie you can refine $h$ but the error will not improve). Raviart-Thomas elements are similarly affected, as they have a similar dependence on the jacobian. Note that a similar thing happens (non-constant jacobian / non-convergence) if you use curvilinear shape mappings (for either triangles or quadrilaterals); I only call out this case specifically because it's surprising that a linear/straight-line quadrilateral can exhibit this problem too (when it's not a parallelogram).
All this said, I find the biggest obstacles to using hexahedra for FEM are not formulation or convergence issues, but mesh generation and computational geometry issues. There are robust general-purpose algorithms for automatic tetrahedral mesh generation (delaunay refinement in particular), but hexahedral meshing is more brittle and often requires user interaction to decompose complicated geometry into simple parts that are eligible for structured strategies or pave/sweep strategies. Note that you can fuse tetrahedral grids with hexahedral grids using pyramidal elements, but the basis functions on pyramids can be pretty technical (pyramids are often implemented in terms of degenerate/distorted hexahedra, and the distortion function ends up contributing non-polynomial terms to the jacobian/basis functions, which can make it difficult to obtain the correct order of accuracy).
In light of these geometrical complexities, I find it reasonable for papers to focus more on simplicial elements; there's just a larger market for them.
Wow, thank you very much for the elaborate answer!
The Finite Element Method is blessed with regards to meshes, since it performs well with tetrahedral meshes, combined with the fact that tet meshing is quite easy and there are plenty of tet meshers out there. If you are doing CFD, i.e. fluid mechanics, hex meshes behave better than tets, although this is field/application dependent. If you are searching for hex meshers, preferably open source, your search is a short one.
Most FEM methods require that we can identify pairs of adjacent elements. This is easy with elements derived from a quadrilateral/hexahedral grid that can be mapped onto an array structure. To deal with the complex domains typical of aircraft, for example, the "multiblock" structure can be adopted, where the domain is first mapped onto "blocks" which are very coarse quad/hex grids with an arbitrary connectivity between them. Generating these coarse grids is hard to automate if there must be many blocks, and is commonly done by hand, which is time-consuming. Because of this the multiblock strategy is now little-used in practice, and a single "unstructured" quad/hex grid is used for the entire domain. Individual elements should still be as "regular" as possible in aspect ratio and skewness.
I fail to see how this answers the OP's question.
| common-pile/stackexchange_filtered |
Maps and custom objects in java
I am trying the below piece of code.
class dog{
private String name;
public dog(String n){
name = n;
}
public String getname(){ return name; }
public void setname(String n){ name =n;}
public boolean equals(Object o){
//if (( o instanceof dog )&& (((dog)o).name == name)) return true;
if (( o instanceof dog )&& (((dog)o).name.equals(name))) return true;
else return false;
}
public int hashcode(){
return name.length();
}
public String toString(){
return "Name:"+name;
}
}
This is my Dog class. Now in the main method, I am trying to do the following:
Map<Object,Object> m = new HashMap <Object, Object>();
dog p = new dog("GM");
dog q = new dog ("GM");
System.out.println(p.equals(q));
m.put ( new dog("GM"),"K2");
System.out.println(m.get(new dog("GM")));
I am getting a true and a null value. I was expecting K2 instead of null. Can somebody help me with this? I have overridden the hashcode and equals methods. What is the thing I am missing?
EDIT: Changed the equals function. Same results.
The String value comparison is incorrect in the dog class's equals method.
You used (((dog)o).name == name). Actually, it should be (((dog) o).name.equals(name)). Please change that first and then check further.
Change your hashCode to return name.hashCode() instead of name.length(), you are using one of the worsts hash functions you can. It won't return negatives, and it rarely uses large numbers.
And it is preferred to apply the Java Coding Conventions: 1- Capitalize the class name to Dog, not dog. 2- Rename the getter and setter methods to getName and setName so you (and most libraries) can access them using introspection.
The immediate problem is that hashCode needs a capital C, you are implementing hashcode with a lowercase c.
((dog)o).name == name compares the identities of the strings. That means if you have two instances of the string "GM", they will .equals() each other, but not ==.
Thanks for the input. But p.equals(q) is returning true. That means the equals method is working as expected, isn't it? Also, in the case of strings, I think this is fine. Please correct me if I am wrong.
In fact I tried that too. Same results even if I use equals instead of ==.
You are wrong. This is not fine in case of String; in fact it's in the case of strings where it most often confuses people. Sometimes it works (look up "string interning"), and sometimes it doesn't. Apparently for you it's working in the println but if it were working in the map we wouldn't be having this discussion.
Got it, that took way too long
You mean you found the mistake I made? If yes, please do help.
Ah Thanks a lot.. Stupid mistake :)
Use @Override annotation, so compiler prevents typos.
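The equals/hashCode contract that bit the asker is language-agnostic. As an illustration outside Java, here is a hedged Python analogue, where `__eq__` and `__hash__` must agree for dict-key lookups to work (the same reason the HashMap lookup above returned null when only equals was overridden correctly):

```python
class Dog:
    def __init__(self, name):
        self.name = name

    def __eq__(self, other):
        # Compare contents, not identity (Java's .equals(), not ==)
        return isinstance(other, Dog) and other.name == self.name

    def __hash__(self):
        # Must be consistent with __eq__: equal dogs must hash equally
        return hash(self.name)

m = {Dog("GM"): "K2"}
lookup = m[Dog("GM")]  # a *different* but equal instance finds the entry
```

If `__hash__` were left inconsistent with `__eq__` (the Python counterpart of misspelling hashCode), the lookup would raise KeyError instead.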
| common-pile/stackexchange_filtered |
Grant users access to all external buckets but exclude our own account buckets
I want to give IAM users access to ALL external buckets- buckets not in our own account.
For example, publicly accessible buckets or buckets someone gives our AWS account access to. If the IAM user in our account does not have access to an external bucket (via IAM policy) they can't access it.
I can't grant them access to ALL buckets though because that will include our own buckets - which we don't want some users to access, or only give access with specific permissions.
Essentially I want a condition that says:
IF NOT <MY AWS ACCOUNT>
grant s3:*
ELSE
don't modify existing S3 permissions
I want to have a policy I can just apply to users or roles that allow access to all external buckets.
Edit
Looks like you can use aws:ResourceAccount?
https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_examples_resource_account_data_exch.html
I don't think this is possible because Amazon S3 ARNs do not refer to an account number. Also, Actions, Resources, and Condition Keys for Amazon S3 - AWS Identity and Access Management does not list an Account ID as an available condition.
What is your particular use-case for requiring this? I could understand wanting to provide credentials to a user for accessing 3rd-party buckets (eg with shared data) but not wanting to grant access to buckets in the same account (for security/privacy reasons). However, it is necessary to grant s3:*-like permissions to be able to access external buckets. These do seem like conflicting requirements when it comes to granting access.
I don't care what access they have to other parties' buckets; they can break those. I don't want to give them access to upload/delete/etc. our account's buckets. Is checking whether the bucket prefix doesn't match my company's prefix (assuming all my buckets use the same prefix) and then allowing s3:* the only way to do something like this?
An interesting conundrum! The only way I can immediately think to do this would be to create a separate AWS Account (that has no buckets) and allow them to assume an IAM Role in that account. They could then be granted lots of S3 permissions without granting access to sensitive buckets.
Are you trying same account or cross-account access?
This might help you: https://aws.amazon.com/blogs/security/how-to-restrict-amazon-s3-bucket-access-to-a-specific-iam-role/
As you alluded to in the edit, you can use the below policy to allow an IAM role access to S3 resources in accounts other than your own.
You may use this if you expect other accounts to use resource based permissions to allow your role access to their s3 resources, but you don't want to give your role access to all of s3 in your own account.
Note: replace 111111111111 with your account id
{
"Statement": [
{
"Action": [
"s3:*"
],
"Effect": "Allow",
"Resource": "*",
"Sid": ""
},
{
"Action": "s3:*",
"Condition": {
"StringEquals": {
"aws:ResourceAccount": "111111111111"
}
},
"Effect": "Deny",
"Resource": "*",
"Sid": ""
}
],
"Version": "2012-10-17"
}
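To see why the two statements combine this way, recall IAM evaluation order: an explicit Deny always overrides an Allow. A toy Python sketch of that evaluation for this specific policy (illustrative only — real IAM evaluates far more context than this):

```python
OWN_ACCOUNT = "111111111111"  # the account id from the policy above

def s3_action_allowed(resource_account):
    """Mimic the policy: blanket Allow on s3:*, plus an explicit Deny
    when the bucket's owning account is our own. Deny beats Allow."""
    allowed = True                              # statement 1: Allow s3:* on *
    denied = resource_account == OWN_ACCOUNT    # statement 2: Deny if aws:ResourceAccount matches
    return allowed and not denied
```

So the role can touch any external bucket, but every request against a bucket owned by the account itself hits the Deny.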
You can add a condition to the policy: IAM JSON Policy Elements: Condition Operators - AWS Identity and Access Management
Edit:
Based on the conversation and further research, this won't work for IAM users. You might be able to add a DENY policy to each of your buckets based on the aws:PrincipalTag and tag any user that you don't want to have access.
What condition would that be? I can't immediately think of something that would refer to the account of an Amazon S3 bucket.
Yes, this is my problem: I can't find a condition. The best I could find was s3:prefix, but that would mean all my buckets would need to have the same prefix. What is the best practice for my use case?
You're right. I was thinking about some of the conditions like aws:SourceAccount, but that has limited use, and wouldn't work in this case.
| common-pile/stackexchange_filtered |
Cannot access print queue - Server side printing vs Local printing
Scenario :
User works on a desktop that is locally configured and connected to Printer 'A'. Any print sent directly from the desktop, like a Word doc, email, or web page, prompts the printer dialog; Printer A is selected, and the print queue on that printer can be easily managed from the desktop directly (right-click the printer icon in the system tray or go to Devices and Printers from the Start menu).
Likewise, there is a web server X that is linked/configured to the same printer A. The user on the desktop now uses a web application (hosted on web server X), and there is a 'print feature' in the web application. The user selects (checkboxes) a couple of documents and clicks on the Print button (printer A). The print button executes server-side printing code. The printing happens on the server. When I log in to the server and open the printer A queue, I can see the print jobs. But the user cannot see the same queue from the desktop's printer A queue.
Why is that? What do I need to do to give the user access to the printer queue irrespective of where the print job originated (desktop or server)?
Why is that ?
If, as described, the web app is doing server-side printing, then the print job never involves the user's local print queue; therefore it doesn't show any information about the print job in the user's queue.
what do I need to do to give the user access to printer queue irrespective of where the print job originated from (desktop or server)
You would need to give them permission to view/control the print queue on the server. Exactly how you would do this depends greatly on your network setup, of which we're not aware.
Assuming Windows, that the user is on a LAN with the server, that the printer on the server is shared with the user, and that they have permission to access/see the printers queue, they should be able to simply head to \\<servername>\Printers to see a list of remote printers on the server, and then view the printer's queue by double-clicking the printer's icon.
| common-pile/stackexchange_filtered |
Concat SQL rows
Suppose I have these rows in a table:
hello
guys
how
are
you?
My expected result is a single row like this one:
hello, guys, how, are, you?
What can I do?
Hint: GROUP_CONCAT().
As GordonLinoff mentioned in the comment, GROUP_CONCAT works.
I don't know your table names or table structures, but you can do something like:
SELECT GROUP_CONCAT(col_name) FROM table WHERE ...
You can read about it here.
I just saw @GordonLinoff posted the same thing I did.
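For a quick runnable check of the GROUP_CONCAT idea without a MySQL server, SQLite's group_concat() behaves similarly (the second argument is the separator; note that without an ORDER BY the concatenation order is technically unspecified, though a plain table scan returns insertion order here):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE words (w TEXT)")
con.executemany(
    "INSERT INTO words VALUES (?)",
    [("hello",), ("guys",), ("how",), ("are",), ("you?",)],
)

# Collapse all rows into a single comma-separated row
(result,) = con.execute("SELECT group_concat(w, ', ') FROM words").fetchone()
```

In MySQL the equivalent would be `GROUP_CONCAT(w SEPARATOR ', ')`.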
| common-pile/stackexchange_filtered |
Spark cluster is unable to find txt files present in a zip folder passed to it
I have a zip folder ce.zip containing Python & text files that I pass to the Spark cluster in the following way.
\bin\spark-submit.cmd --packages org.apache.spark:spark-sql-kafka-0-10_2.12:3.0.0 --py-files ce.zip main.py
But it shows an error on a particular line of the Python file that is trying to access the text file, as follows:
FileNotFoundError: [Errno 2] No such file or directory: '...\\ce.zip\\detectors\\files\\device_features.txt'
But it works when I hard-code the required file path & keep the file outside the zip file.
Am I missing something or passing it in the wrong way?
try with absolute/full path
@shaileshgupta yeah, it works well with hardcoded full path.
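A likely explanation: a path like ...\ce.zip\detectors\... is not a real filesystem path — Python can import modules from a --py-files zip, but a plain open() cannot read data files inside the archive, which is why the hard-coded path outside the zip works. A hedged sketch of reading such a file with the stdlib zipfile module (the archive and member paths here are illustrative):

```python
import zipfile

def read_text_from_zip(zip_file, inner_path):
    """Read a UTF-8 text member from inside a zip archive.

    `zip_file` may be a filesystem path or a file-like object;
    `inner_path` uses forward slashes relative to the archive root.
    """
    with zipfile.ZipFile(zip_file) as zf:
        with zf.open(inner_path) as member:
            return member.read().decode("utf-8")
```

On a Spark executor, the archive shipped via --py-files typically ends up on sys.path, so locating the .zip entry there (or shipping the data files separately via --files) are the usual workarounds.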
| common-pile/stackexchange_filtered |
PHP "Call to a member function on a non-object" debugging
I try to init my object in a class but I get "Call to a member function edit_column() on a non-object", and I'm trying to figure out what I'm doing wrong.
Practically, I'm using 2 classes: one acting as a controller and the second one as a model. The code is below.
<?php
//Model
class MyClass {
public $columns = array();
private $edit_columns = array();
public function select($columns) {
//do select
}
public function from($table) {
//from
}
public function add_column($args) {
//add_column
}
public function edit_column($column, $content) {
$this->edit_columns[$column][] = array('content' => $content, 'replacement' => explode(',', $match_replacement));
//return $this;
}
}
//Controller
class MyOtherClass {
function some() {
include('MyClass.php');
$table = new MyClass();
//I try to init my Class as it follows
$table->select('id', 'name')
->from('mytable')
->edit_column('my arguments')
->add_column('my arguments');
}
}
?>
You are forgetting quotes in from, edit_column and add_column.
Can you show us the complete code for the methods in MyClass?
try MyOtherClass extends Myclass and use myclass method?
To chain, you need to return $this; at the end of each method.
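The last comment is the key: fluent chaining only works if every method returns the object itself. A hedged Python illustration of the same pattern (class and method names are made up):

```python
class QueryBuilder:
    def __init__(self):
        self.parts = []

    def select(self, *columns):
        self.parts.append(("select", columns))
        return self  # the equivalent of PHP's `return $this;`

    def from_(self, table):
        self.parts.append(("from", table))
        return self

# Without `return self`, select() would return None and the next call
# in the chain would fail -- the same class of error the asker hit.
q = QueryBuilder().select("id", "name").from_("mytable")
```

The same fix applies to the MyClass methods above: make select(), from(), edit_column() and add_column() each end with `return $this;`.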
| common-pile/stackexchange_filtered |
Disable JIT in Drools 6.2 with Java 8
We are working with Drools version 6.2.0.Final for parsing some of our rules. But sometimes when we have a lot of runs, Drools invokes the JIT compiler, which causes failures. We have this covered in our JUnit tests and we are getting the following error:
java.lang.NoSuchMethodError: org.mvel2.compiler.BlankLiteral.<init>(Ljava/lang/String;)V
at ConditionEvaluatoref4dc802b6174038b0307f5e6196e229.evaluate(Unknown Source)
at org.drools.core.rule.constraint.MvelConstraint.evaluate(MvelConstraint.java:248)
at org.drools.core.rule.constraint.MvelConstraint.isAllowed(MvelConstraint.java:204)
at org.drools.core.reteoo.AlphaNode.assertObject(AlphaNode.java:141)
at org.drools.core.reteoo.CompositeObjectSinkAdapter.doPropagateAssertObject(CompositeObjectSinkAdapter.java:494)
at org.drools.core.reteoo.CompositeObjectSinkAdapter.propagateAssertObject(CompositeObjectSinkAdapter.java:384)
at org.drools.core.reteoo.ObjectTypeNode.propagateAssert(ObjectTypeNode.java:298)
at org.drools.core.phreak.PropagationEntry$Insert.execute(PropagationEntry.java:93)
at org.drools.core.phreak.SynchronizedPropagationList.flush(SynchronizedPropagationList.java:96)
at org.drools.core.phreak.SynchronizedPropagationList.flush(SynchronizedPropagationList.java:69)
at org.drools.core.impl.StatefulKnowledgeSessionImpl.flushPropagations(StatefulKnowledgeSessionImpl.java:1993)
at org.drools.core.common.DefaultAgenda.fireAllRules(DefaultAgenda.java:1289)
at org.drools.core.impl.StatefulKnowledgeSessionImpl.internalFireAllRules(StatefulKnowledgeSessionImpl.java:1294)
at org.drools.core.impl.StatefulKnowledgeSessionImpl.fireAllRules(StatefulKnowledgeSessionImpl.java:1281)
at org.drools.core.impl.StatefulKnowledgeSessionImpl.fireAllRules(StatefulKnowledgeSessionImpl.java:1260)
at org.drools.core.impl.StatelessKnowledgeSessionImpl.execute(StatelessKnowledgeSessionImpl.java:306)
We got this error in Java 7 as well, but escaped it by using the option
KieBaseConfiguration kbConfig = KieServices.Factory.get().newKieBaseConfiguration();
kbConfig.setOption(PermGenThresholdOption.get(0));
Setting this option disabled JIT and our code worked fine. But since Java 8 has removed the PermGen option altogether, I am not able to figure out an option to achieve the same thing. This causes the rules to fail when a lot of runs are executed.
I have tried a lot of options to disable it but could not make it work:
- OptimizerFactory.setDefaultOptimizer(OptimizerFactory.SAFE_REFLECTIVE);
- System.setProperty("-Ddrools.dialect.java.compiler", "JANINO");
- System.setProperty("-Djava.compiler", "NONE");
- System.setProperty("-Dmvel2.disable.jit", "true");
- kbConfig.setOption(RuleEngineOption.PHREAK);
Some help will be really helpful.
You should clarify that this isn't Java's JIT. - However, it looks like an MVEL bug, and you should report it - backed up with an example reproducing the problem - on Drools JIRA.
Well, after doing a lot of debugging we found that one of our rules was causing the problem. But the rule was valid and we could not discard it.
Finally I found a way to disable the JIT compiler, which solved our problem, and the error was gone. This is a different solution from the one mentioned by Esteban Aliverti. The one mentioned below worked for our case and should work for most cases.
I used the following way to disable the JIT compiler:
KieBaseConfiguration kbConfig = KieServices.Factory.get().newKieBaseConfiguration();
kbConfig.setOption(ConstraintJittingThresholdOption.get(-1));
The actual explanation is as follows:
0 -> force immediate synchronous jitting (it's advised to use this only for testing purposes).
-1 (or any other negative number) -> disable jitting
Default value is 20.
So once we set the ConstraintJittingThresholdOption as -1, it disables the JIT compiler.
It is also possible to give the runtime the following JVM property on startup:
-Ddrools.jittingThreshold=-1
The way I found in Drools 6.4 (maybe it also works in 6.2) to disable the JIT compilation of MVEL expressions is to set its threshold to 0.
The JIT threshold basically tells Drools how many times an expression has to be evaluated before it gets JIT compiled. A threshold of 0 means 'never JIT compile the expressions'.
The way I found to do this was by using the org.kie.internal.conf.ConstraintJittingThresholdOption KieBaseConfiguration in my KieBases:
KieHelper helper = new KieHelper();
helper.addResource(ResourceFactory.newClassPathResource("rules/jit/jit-sample.drl"));
KieSession ksession = helper.build(ConstraintJittingThresholdOption.get(0)).newKieSession();
I couldn't find a way to do it from the kmodule.xml file though.
I didn't try to use -Ddrools.jittingThreshold=0 either, but I think it should also work.
NOTE: The problem with disabling the JIT compilation of the expressions in Drools is that it affects the entire KieBase. I would suggest you to investigate a little bit further what the exception you are getting is all about.
Edit:
Satyam Roy is right. The proper value to use for ConstraintJittingThresholdOption or -Ddrools.jittingThreshold in order to completely disable the JIT compilation is a negative number, not 0.
Hope it helps,
Hi Esteban Aliverti, thanks for your answer. The latest version available is Drools 6.3.0.Final, on which I tried your suggestions, but I was not able to get it to work.
The class
org.kie.internal.conf.ConstraintJittingThresholdOption is not present in Drools 6.2 and has been introduced from 6.3 onwards.
So I think your question is wrong. The problem here is not Drools JIT-compiling an expression, but the MVEL expression evaluator itself.
| common-pile/stackexchange_filtered |
Extract audio file details from URL(mp3 file)
To get audio file (mp3) details from a URL I am using JAudioTagger in my Android app, but I am unable to get the details.
My Code:
val audioFile = AudioFileIO.read(Objects.requireNonNull(File("https://www.soundhelix.com/examples/mp3/SoundHelix-Song-1.mp3")))
Log.d(TAG, "title: ${audioFile.tag.getFirst(FieldKey.TITLE)}")
Log.d(TAG, "artist: ${audioFile.tag.getFirst(FieldKey.ARTIST)}")
Log.d(TAG, "album: ${audioFile.tag.getFirst(FieldKey.ALBUM)}")
Log.d(TAG, "duration: ${audioFile.audioHeader.trackLength * 1000}")
I am getting an exception. Guide me in the right direction.
As far as I know, Jaudiotagger does not support online audio streams
Do you know any class or library that supports online audio streams for Android?
@AndroidDev Your MP3 file doesn't have a title or album. Do you think telling us about the exception might help others help you?
| common-pile/stackexchange_filtered |
Change in route param doesn't updates the view in Angular
So I have this configuration for my single page app:
var app = angular.module('MyApp', ['ng', 'ngRoute']);
app.config(function($routeProvider) {
$routeProvider.
when('/product/id/:id', {
template: '<product-details></product-details>'
})
});
app.directive('productDetails', function() {
return {
controller: 'productDetailsController',
templateUrl: 'templates/product_details.html'
}
});
app.controller('productDetailsController', function($scope, product) {
$scope.product = product;
})
app.factory('product', function($http, $routeParams) {
var product = {};
var id = encodeURIComponent($routeParams.id);
$http.
get('/api/v1/product/id/' + id).
success(function(data) {
return product.data = data;
});
return product;
})
When I move from localhost:3000/#/product/id/id_1 to localhost:3000/#/product/id/id_2 it doesn't update the view with the contents of the product with id id_2 until I refresh the browser. Also, if the last visited URL was of, say, id_1, and I'm currently on id_2, then when I click the back button the URL changes to id_1 but the view continues to be that of id_2. I googled but found solutions to similar problems in React and Vue but not in Angular. How can I make the route reload the view when the route parameter changes?
Probably you should post controller code as well...
@Developer I did it now. Hope this would help. :(
Can u make a plunkr?
The problem gets resolved by moving the product service code directly into the controller, instead of creating a product service of its own using app.factory() and then injecting it into the controller.
I am not very sure, but I reckon the service code (the product service in this case) is executed only once by Angular (at the time of dependency injection). But the code in the controller block is executed again even on a change in the route parameters.
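The accepted explanation — the factory runs once and snapshots $routeParams — can be demonstrated in miniature. A hedged Python sketch of "read once at creation" versus "read on every call" (AngularJS singleton services behave like the first case; the names here are made up):

```python
class RouteParams:
    """Stand-in for $routeParams; mutated when the route changes."""
    def __init__(self, id_):
        self.id = id_

params = RouteParams("id_1")

def make_service_eager(p):
    captured = p.id             # read ONCE, like code in a factory body
    return lambda: captured

def make_service_lazy(p):
    return lambda: p.id         # re-read on every call, like controller code

eager = make_service_eager(params)
lazy = make_service_lazy(params)
params.id = "id_2"              # simulate navigating to a new route
```

After the "navigation", the eager service still reports id_1 while the lazy one sees id_2 — the same stale-view symptom described in the question.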
| common-pile/stackexchange_filtered |
Android record mediaplayer (stream)
I play music (an internet radio stream) in a MediaPlayer. I would like to save the data stream (or the waveform) to a file and play it back later. Does the SDK support this? I could not find anything; AudioRecorder and MediaRecorder are only good for mics and voice uplink/downlink.
So, how can I save the data played by MediaPlayer? And how can I play it back? I need an API 8/9 to API 16 solution.
| common-pile/stackexchange_filtered |
Can anyone solve the "Ultimate NOOB question on pymysql"?
I'm just starting to play around with accessing MySQL from Python and chose to use pymysql. I hit a wall immediately, though, when just testing making a connection to the database. The code is simply...
#!/usr/bin/python3
# -*- coding: utf-8 -*-
import pymysql
connection = pymysql.connect(host='localhost', user='root', password='nunya', db='python')
When I run this code I get the following...
(base) [me@feynmann mysql]$ ./test-pymysql.py
Traceback (most recent call last):
  File "./test-pymysql.py", line 6, in <module>
    connection = pymysql.connect(host='localhost', user='root', password='nunya', db='python')
  File "/usr/local/lib/python3.7/site-packages/pymysql/__init__.py", line 94, in Connect
    return Connection(*args, **kwargs)
  File "/usr/local/lib/python3.7/site-packages/pymysql/connections.py", line 325, in __init__
    self.connect()
  File "/usr/local/lib/python3.7/site-packages/pymysql/connections.py", line 599, in connect
    self._request_authentication()
  File "/usr/local/lib/python3.7/site-packages/pymysql/connections.py", line 871, in _request_authentication
    auth_packet = self._process_auth(plugin_name, auth_packet)
  File "/usr/local/lib/python3.7/site-packages/pymysql/connections.py", line 900, in _process_auth
    return _auth.caching_sha2_password_auth(self, auth_packet)
  File "/usr/local/lib/python3.7/site-packages/pymysql/_auth.py", line 264, in caching_sha2_password_auth
    data = sha2_rsa_encrypt(conn.password, conn.salt, conn.server_public_key)
  File "/usr/local/lib/python3.7/site-packages/pymysql/_auth.py", line 142, in sha2_rsa_encrypt
    raise RuntimeError("cryptography is required for sha256_password or caching_sha2_password")
RuntimeError: cryptography is required for sha256_password or caching_sha2_password
I've done several searches, which mostly kept bringing me here, but nothing that seemed to make sense and/or fix the problem.
I also tried switching to mysql.connector, but got similar and even more confusing errors.
I know this probably seems pretty stupid to most of you, but I'd appreciate any help you would offer.
Try 'pip install cryptography' maybe?
A search found this answer
Give this module a try: psycopg2.
I tried your suggestion but cryptography was already installed.
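Since the last comment reports that `cryptography` was "already installed" yet the error persisted, it can be worth verifying that the interpreter actually running the script can import it — pip installing into a different environment than the one executing the code is a common culprit. A small hedged check (names are illustrative):

```python
import importlib.util

def ensure_sha2_support():
    """MySQL 8's default caching_sha2_password auth needs the optional
    'cryptography' package; fail early with an actionable message."""
    if importlib.util.find_spec("cryptography") is None:
        raise RuntimeError(
            "This interpreter cannot import 'cryptography'; "
            "run: python3 -m pip install cryptography"
        )
```

Calling this before pymysql.connect() turns the cryptic auth-time failure into a clear one. An alternative that sidesteps the dependency entirely is creating the MySQL user with the mysql_native_password plugin instead of caching_sha2_password.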
| common-pile/stackexchange_filtered |
Professor: The acetic acid production at the plant always fascinated me - that rhodium complex dissolved right alongside the methanol and carbon monoxide reactants.
Chemist: Exactly why homogeneous catalysis works so elegantly. When your catalyst shares the same liquid phase, every molecule can interact directly rather than waiting for surface contact.
Professor: But remember the separation headaches? Getting that rhodium catalyst back out of the product mixture nearly doubled our processing costs. | sci-datasets/scilogues |
How to split an array into several smaller arrays based on a given criterion between neighbours
Mathematica has two very useful functions to group an array into a list of smaller arrays based on given criteria: Split[] and SplitBy[], which I need to emulate in Python 3 code:
Split[list,test] treats pairs of adjacent elements as identical whenever applying the function "test" to them yields True,
SplitBy[list,f] splits list into sublists consisting of runs of successive elements that give the same value when f is applied.
Thus if
a=[2,3,5,7,11,13,17,19,23,29]
Split[a,(#2-#1 < 4)&] gives:
[[2,3,5,7],[11,13],[17,19],[23],[29]]
and SplitBy[a,(Mod[#,2]==0)&] gives:
[[2],[3,5,7,11,13,17,19,23,29]]
In practice the array to be split might be a 2-dimensional table, and the test functions might work on the elements of individual columns.
How can this behaviour be coded efficiently in Python 3?
I don't have a quick answer for the first part of your question, and lenik already provided a nice zip-based solution, but SplitBy can easily be reproduced using the groupby function of the itertools module (doc here).
Beware: groupby will insert a separator (~ create a new group) each time the key changes. So if you want something like SplitBy, you have to sort by the key function first.
In the end, it will give you something like this:
>>> import itertools as it
>>> def split_by(l, func):
...     groups = []
...     sorted_l = sorted(l, key=func)
...     for _, g in it.groupby(sorted_l, key=func):
...         groups.append(list(g))
...     return groups
>>> split_by([2,3,5,7,11,13,17,19,23,29], lambda x: x%2)
[[2], [3, 5, 7, 11, 13, 17, 19, 23, 29]]
One-liner version using list comprehension:
groups = [list(g) for _, g in it.groupby(sorted(l, key=func), key=func)]
Quick timeit benchmark on my old and broken laptop:
itertools version
>>> %timeit split_by([2,3,5,7,11,13,17,19,23,29], lambda x: x%2)
8.42 µs ± 92.8 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
zip version
>>> %timeit split_by([2,3,5,7,11,13,17,19,23,29], lambda x: x%2)
10.8 µs ± 53.7 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
try/catch version
>>> %timeit split_by([2,3,5,7,11,13,17,19,23,29], lambda x: x%2)
12.6 µs ± 162 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
If the downvoter could at least give a feedback or explain why I'm wrong, I'll be glad to improve this answer.
I downvoted your answer but let me clarify why. I don't think the answer was wrong, but it was a bit short; if you wrapped your approach into a couple of functions split/split_by and posted a brief snippet applied to the OP's proposed example, that would be more than enough to upvote your answer in the first place. FYI, I don't like downvoting, but in this case I considered the post quite interesting and these fast answers didn't do justice to the original question :)
@BPL Thanks for your feedback, I've just updated my answer ;)
Ok, +1 then ;) . And sorry, when I downvoted I should have provided feedback straightaway without you asking for it. In fact, that's something I always encourage people who downvote my answers to do, in order to improve them. Thanks.
Interesting stats, thanks for posting them. In fact, that's something I wanted to check yesterday when I posted my answer: whether the overhead of having the inner try/catch was too big, and yeah, it seems it was.
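For the Split half of the question, the same itertools.groupby machinery works if you feed it a stateful key function that bumps a counter whenever the pairwise test fails between neighbours. A sketch (function and variable names are mine):

```python
import itertools

def split(lst, test):
    """Emulate Mathematica's Split[list, test]: keep adjacent elements
    in the same run while test(previous, current) is True."""
    group_id = 0
    prev = None

    def key(x):
        nonlocal group_id, prev
        if prev is not None and not test(prev, x):
            group_id += 1  # pairwise test failed: start a new run
        prev = x
        return group_id

    return [list(g) for _, g in itertools.groupby(lst, key=key)]

a = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
print(split(a, lambda x, y: y - x < 4))
# [[2, 3, 5, 7], [11, 13], [17, 19], [23], [29]]
```

Note that groupby only merges consecutive equal keys, so no sorting is needed here; the monotonically increasing counter keeps the original order intact.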
Here's some possible solutions:
a) Using python builtin zip
def split(lst, test):
res = []
sublst = []
for x, y in zip(lst, lst[1:]):
sublst.append(x)
if not test(x, y):
res.append(sublst)
sublst = []
if len(lst) > 1:
sublst.append(lst[-1])
if not test(lst[-2], lst[-1]):
res.append(sublst)
sublst = []
if sublst:
res.append(sublst)
return res
def split_by(lst, test):
return split(lst, lambda x, y: test(x) == test(y))
if __name__ == '__main__':
a = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
print(split(a, lambda x, y: y - x < 4))
print(split_by(a, lambda x: (x % 2) == 0))
b) for-loop with inner try/except:
def split(lst, test):
res = []
sublst = []
for i, x in enumerate(lst):
try:
y = lst[i + 1]
sublst.append(x)
except IndexError:
x, y = lst[i - 1], lst[i]
sublst.append(y)
if not test(x, y):
res.append(sublst)
sublst = []
if sublst:
res.append(sublst)
return res
def split_by(lst, test):
return split(lst, lambda x, y: test(x) == test(y))
if __name__ == '__main__':
a = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
print(split(a, lambda x, y: y - x < 4))
print(split_by(a, lambda x: (x % 2) == 0))
Output:
[[2, 3, 5, 7], [11, 13], [17, 19], [23], [29]]
[[2], [3, 5, 7, 11, 13, 17, 19, 23, 29]]
29 is missing from the output. if you copy-paste my code, at least don't leave important parts out...
@lenik Edited my answer but please clarify me something, the fact I considered using builtin python zip from your deleted answer was enough reason to downvote+comment I've copy-pasted your code? My previous answer was structurally different than yours so to me it felt more like a simple childish downvote. I may be wrong though, peace :)
DropDownList Binding to ActionResult Create Method MVC 4 VS2012
I'm a brand new user here - but I've been searching for a couple of hours now to solve following problem:
I've got 2 Entities - Category and Item.
Each Item should belong to a Category - therefore I would like to have a DropDownList which shows all existing Categories when Creating a new Item.
So far my code shows the DropDownList with all the Categories, but when I select a Category and Submit the form (POST) the value for Category is always null.
This naturally causes ModelState.IsValid to be false, because Category isn't nullable.
How can I get the User-Selected-Value into my Create(POST) method?
I've got a Controller with following Methods to Create a new Item:
// GET Method
public ActionResult Create()
{
ViewBag.Category = new SelectList(db.CategorySet, "Id", "CategoryName");
return View();
}
[HttpPost]
public ActionResult Create(Item item)
{
if (ModelState.IsValid)
{
db.ItemSet.Add(item);
db.SaveChanges();
return RedirectToAction("Index");
}
return View(item);
}
And this is the DropDownList in my View (Create.cshtml):
<div class="editor-field">
@Html.DropDownList("Category", (IEnumerable<SelectListItem>) ViewBag.Categories, "--Select Category--")
</div>
Finally I ended up with a custom view model - that way I got it working...
For those of you who don't know what a custom view model is:
You create a new class which contains all the values you need to create your new object, in my example a class which contains a SelectList (property) of available Categories, an integer value (property) for SelectedCategoryId and the Item (property) you want to create.
In your cshtml file you add this class as @model ....CustomCreateItemModel and use it in your DropDownList
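For readers wanting a concrete picture of that view model, a minimal sketch could look like this (class and property names here are my own invention, not from the original post):

```csharp
public class CreateItemViewModel
{
    // The item being created, bound from the form fields.
    public Item Item { get; set; }

    // Receives the value the user picked in the dropdown on POST.
    public int SelectedCategoryId { get; set; }

    // Populated in the GET action to fill the dropdown; not posted back.
    public SelectList Categories { get; set; }
}
```

In the view this pairs with `@Html.DropDownListFor(m => m.SelectedCategoryId, Model.Categories, "--Select Category--")`, so the model binder fills SelectedCategoryId automatically when the form is submitted.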
If your Item has a CategoryId property:
public class Item
{
public int CategoryId {get;set;}
}
You will need to name your DropDownList to "CategoryId" so that the ModelBinder will be able to bind the value correctly
Or use the strongly typed helper:
Html.DropDownListFor(x=>x.CategoryId...)
yes I do have an int Id as Primary Key in both entities, but I still can't figure out how to get this thing running... I've tried to change the DropDownList, but then it's not even showing my Items anymore
Thanks Armen.
I had the same issue: my dropdown list was populated OK from the database, but the OrganisationID (in my case) was not making it to the database when a new record was created (only 0 was ever captured), until I changed the name of the ViewBag to be identical to the value in the dropdown (i.e. both OrganisationID), as you had helpfully pointed out, and now it works!
For what it's worth, for anyone else going through the frustration that "Desperate coder" and I went through when our naming wasn't consistent to enable binding, here's what I have used to get a dropdown list working (sorry - NOT using the Entity Framework, but the principle should still be clear and easy to adapt if you are using the EF):
But the key takeaway is identical naming to enable binding. Thanks again Armen!
MODEL
public class Organisation_Names
{
public DataSet GetOrg_Names()
{
SqlConnection cn = new SqlConnection(@"Data Source=XXXXXXXXX;User ID=XXXXXXXXX;Password=XXXXXXXXXXX;Initial Catalog=XXXXXXXXXXXX");
SqlCommand cmd = new SqlCommand("sp_GetOrg_Names", cn);
cn.Open();
cmd.CommandType = CommandType.StoredProcedure;
cmd.ExecuteNonQuery();
DataSet ds = new DataSet();
SqlDataAdapter da = new SqlDataAdapter(cmd);
da.Fill(ds);
return ds;
}
}
CONTROLLER
//
// GET: /Services/Create
public ActionResult Create(Organisation_Names organisation_names)
{
DataSet ds = organisation_names.GetOrg_Names();
ViewBag.OrganisationID = ds.Tables[0];
List<SelectListItem> items = new List<SelectListItem>();
foreach (System.Data.DataRow dr in ViewBag.OrganisationID.Rows)
{
items.Add(new SelectListItem { Text = @dr["OrganisationName"].ToString(), Value = @dr["OrganisationID"].ToString() });
}
ViewBag.OrganisationID = items;
return View();
}
//
// POST: /Services/Create
[HttpPost]
[ValidateAntiForgeryToken]
public ActionResult Create(CreateServiceModel createservicemodel, Organisation_Names organisation_names, FormCollection selection)
{
DataSet ds = organisation_names.GetOrg_Names();
if (ds == null)
{
return HttpNotFound();
}
ViewBag.OrganisationID = ds.Tables[0];
List<SelectListItem> items = new List<SelectListItem>();
foreach (System.Data.DataRow dr in ViewBag.OrganisationID.Rows)
{
items.Add(new SelectListItem { Text = @dr["OrganisationName"].ToString(), Value = @dr["OrganisationID"] + 1.ToString() });
}
ViewBag.OrganisationID = items;
if (this.IsCaptchaVerify("Answer was incorrect. Please try again."))
{
try
{
int _records = createservicemodel.CreateService(createservicemodel.OrganisationID, createservicemodel.ServiceName, createservicemodel.ServiceDescription, createservicemodel.ServiceComments, createservicemodel.ServiceIdentificationNumber, createservicemodel.CreatedBy, createservicemodel.NewServiceID);
if (_records > 0)
{
return RedirectToAction("Index", "Services");
}
}
catch
//else
{
ModelState.AddModelError("", "Cannot Create");
}
}
{
return View(createservicemodel);
}
}
VIEW
@model WS_TKC_MVC4.Models.CreateServiceModel
@using CaptchaMvc.HtmlHelpers
@using WS_TKC_MVC4.Models
@{ViewBag.Title = "Service added by " ;} @User.Identity.Name
<script src="@Url.Content("~/Scripts/jquery.validate.min.js")" type="text/javascript"></script>
<script src="@Url.Content("~/Scripts/jquery.validate.unobtrusive.min.js")" type="text/javascript"> </script>
@using (Html.BeginForm()) {
@Html.AntiForgeryToken()
@Html.ValidationSummary(true)
<fieldset>
<legend>CreateServiceModel</legend>
<div class="editor-label">
<p>Select Organisation</p>
</div>
<div class="editor-field">
@Html.DropDownList("OrganisationID")
@Html.ValidationMessageFor(model => model.OrganisationID)
@Html.EditorFor(model => model.OrganisationID)
</div>
(Some more fields)
<div class="editor-label">
@Html.LabelFor(model => model.MathCaptcha)
</div>
@Html.MathCaptcha("Refresh", "Type answer below", "Answer is a required field.")
<p>
<input type="submit" value="Create" />
</p>
</fieldset>
}
<div>
@Html.ActionLink("Back to List", "Index")
</div>
ClerkJS forceRedirectUrl and signUpForceRedirectUrl not working with an external AWS APIGateway domain
I'm trying to get an AWS Lambda api hooked up with a Vite React app using ClerkJS auth. I want to store some user data on sign up and do some other prep on sign in. I want to hit the AWS API rather than redirect to the front end app.
I cannot get the ClerkJS redirects to work at all with the AWS endpoints; I have tried every combination of params I can imagine.
It all works fine if I redirect to the front end domain, but it just redirects to the app's / route when I try to provide the AWS endpoints.
What's going on?
Tried adding the params on the sign in button:
<SignedOut>
<SignInButton
forceRedirectUrl={'https://xxxxxx.execute-api.xxxxxx.amazonaws.com/dev/sign-in'}
signUpForceRedirectUrl={'https://xxxxxx.execute-api.xxxxxx.amazonaws.com/dev/sign-up'}
/>
</SignedOut>
<SignedIn>
<UserButton/>
</SignedIn>
I've also tried adding the fallback and force redirects to the provider in every possible combination:
import React from 'react'
import ReactDOM from 'react-dom/client'
import App from './App.tsx'
import './index.css'
import {ClerkProvider} from "@clerk/clerk-react";
const PUBLISHABLE_KEY = import.meta.env.VITE_CLERK_PUBLISHABLE_KEY
if (!PUBLISHABLE_KEY) {
throw new Error("Missing Publishable Key")
}
ReactDOM.createRoot(document.getElementById('root')!).render(
<React.StrictMode>
<ClerkProvider
publishableKey={PUBLISHABLE_KEY}
allowedRedirectOrigins={['xxxxxx.execute-api.xxxxxx.amazonaws.com']}
signInForceRedirectUrl={'https://xxxxxx.execute-api.xxxxxx.amazonaws.com/dev/sign-in'}
signUpForceRedirectUrl={'https://xxxxxx.execute-api.xxxxxx.amazonaws.com/dev/sign-up'}
signInFallbackRedirectUrl={'https://xxxxxx.execute-api.xxxxxx.amazonaws.com/dev/sign-in'}
signUpFallbackRedirectUrl={'https://xxxxxx.execute-api.xxxxxx.amazonaws.com/dev/sign-up'}
>
<App/>
</ClerkProvider>
</React.StrictMode>,
);
split a string by any symbol
What is the regex that I should pass with String.split() in order to split the string by any symbol?
Now, by any symbol I mean any of the following:
`~`, `!`, `@`, `#`, ...
Basically any non-letter and non-digit printable character.
Do you mean any non alphanumeric character?
.split("[^a-zA-Z0-9]")
You should use the non-word character class, i.e. \W
\W is inverse of \w
\W is similar to [^a-zA-Z0-9_] and so would match any character that is not a letter, digit, or underscore
OR
you can simply use [^a-zA-Z0-9]
You can try using this: -
str.split("[^a-zA-Z0-9]");
This will not include an underscore.
\W, by contrast, is equivalent to "[^a-zA-Z0-9_]", so splitting on it would keep underscores inside the tokens.
[_^\w] doesn't do what you think it does.
Oh. Then how would we remove _ from it?
The caret carries no special meaning if it's not the first character; so what you have is equivalent to [a-zA-Z0-9_\^]
@NullUserException. Yeah exactly. So I removed it. Is there any way to use \W without _?
You may want to use \W or [^\w]. You may find more details here: Regex: Character classes
String str = "a@v$d!e";
String[] splitted = str.split("\\W");
System.out.println(splitted.length); //<--print 4
or
String str = "a@v$d!e";
String[] splitted = str.split("[^\\w]");
System.out.println(splitted.length); //<--print 4
[\\W] is redundant, why not just \\W?
@NullUserException: I started putting different pattern e.g. [^a-zA-Z0-9] then replaced with \W :) Removed the braces.
You could either be specific like Spring.split("[~!@$]") or list the values you do not want to split upon Spring.split("[^\\w]")
[^\w] == \W == [^a-zA-Z0-9_], so it includes the underscore.
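A short runnable check of the underscore distinction discussed above (the class name and sample string are mine):

```java
import java.util.Arrays;

public class SplitDemo {
    public static void main(String[] args) {
        String s = "foo_bar!baz@qux";

        // \W keeps the underscore inside a token (it is a word character)...
        System.out.println(Arrays.toString(s.split("\\W")));
        // [foo_bar, baz, qux]

        // ...while [^a-zA-Z0-9] also treats the underscore as a delimiter.
        System.out.println(Arrays.toString(s.split("[^a-zA-Z0-9]")));
        // [foo, bar, baz, qux]
    }
}
```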
Sending multiple query results to res.render() using node-mysql and mysql-queue
I [new to node.js and programming in general] have two mysql query results (member info and list of workshops that members can attend) and need to send them to res.render() to be presented in .jade template (Member edit page).
To do this I'm using the node-mysql and mysql-queue modules. The problem is that I don't know how to run a callback that renders the response only after queue.execute() finishes, so I made a workaround: I put the first two queries in the queue (a mysql-queue feature), executed the queue, and afterwards added a third "dummy" query whose callback renders the template.
My question is: can I use this workaround, and what would be the proper way to do this using these modules?
exports.memberEdit = function (req, res) {
var q = connection.createQueue();
var membersResults,
htmlDateSigned,
htmlBirthDate,
servicesResults;
q.query("SELECT * FROM members WHERE id= ?;", req.id, function (err, results) {
console.log("Članovi: " + results[0]);
membersResults = results[0];
htmlDateSigned = dater.convertDate(results[0].dateSigned);
htmlBirthDate = dater.convertDate(results[0].birthDate);
});
q.query("SELECT * FROM services", function (err, results) {
console.log("Services: " + results);
servicesResults = results;
});
q.execute();
// dummy query that processes response after all queries and callback execute
// before execute() statement
q.query("SELECT 1", function (err,result) {
res.render('memberEdit', { title: 'Edit member',
query:membersResults,
dateSigned:htmlDateSigned,
birthDate:htmlBirthDate,
services:servicesResults });
})
};
I think an alternative could be to use a transaction to wrap your queries with:
var trans = connection.startTransaction();
trans.query(...);
trans.query(...);
trans.commit(function(err, info) {
// here, the queries are done
res.render(...);
});
commit() will call execute() and it provides a callback which will be called when all query callbacks are done.
This is still a bit of a workaround though; it would make more sense if execute() provided the option of passing a callback (but it doesn't). Alternatively, you could use a module which provides a Promise implementation, but that's still a workaround.
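To make the Promise route concrete, here is a minimal sketch. It assumes connection.query follows the usual node-mysql (err, results) callback signature; the function names are mine:

```javascript
// Wrap a single callback-style query in a Promise.
function queryP(connection, sql, params) {
  return new Promise(function (resolve, reject) {
    connection.query(sql, params || [], function (err, results) {
      if (err) reject(err);
      else resolve(results);
    });
  });
}

// Run both queries, then render once everything has arrived:
// Promise.all([
//   queryP(connection, "SELECT * FROM members WHERE id = ?", [req.id]),
//   queryP(connection, "SELECT * FROM services"),
// ]).then(function (results) {
//   var members = results[0], services = results[1];
//   res.render("memberEdit", { query: members[0], services: services });
// });
```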
Phrasal verbs leftward movement
I was reading a research paper on translating multi-word items, which include phrasal verbs, and I came across a passage about phrasal verbs, by Dixon, that reads:
Moreover, leftward movement will take place, as Dixon argues, when a direct object noun phrase contains new information; therefore, the noun phrase will be positioned after the verb and the preposition. However, once the noun phrase is repeated, then the leftward movement cannot apply. For example, we'll make up a parcel for them…On the morning of Christmas Eve together we'll make the parcel up. (1982: 24).
I didn't really get if the last sentence is right or not! Also, what does Dixon mean by repeating the noun phrase?
Here's the whole phrasal verb movement available:
Regarding the syntax of phrasal verbs, Dixon (1982: 22) was able to show that there are two movements that will occur in the phrasal verb. One is the leftward movement of prepositions, and the other is the rightward movement of prepositions. An example of the leftward movement of prepositions is Put the visitors up for a night/Put up the visitors for a night.
Dixon (1982: 22) argues that leftward movement cannot take place over a personal
pronoun. For example, I put you up, not Fred, for the presidency/I put up you, not Fred, for the presidency.
Thank you.
I think there may be a typesetting error here. I found this source which appears to list exactly the same usage twice, but for some reason only the first instance is marked "syntactically questionable to many native speakers" (by preceding it with a question mark). It's in a context explaining that because "the parcel" has already been mentioned, it's "okay" to move it leftward in the second reference (giving make the parcel up for them, rather than the normal make up the parcel).
Personally, I think Dixon (1982: 22) is on shaky ground. I'm just as happy with I gave away a dowry, not a daughter as I am with I gave a dowry away, not a daughter.
Where he says "we'll make the parcel up", the noun phrase is repeated. I.e. he says this only in a context in which he's just mentioned a parcel, so he's not introducing new information by mentioning a parcel here.
Although I don't agree about the facts, I think that what is being said concerns stress/intonation. There is a conflict between two principles of English stress. (1) The last element of a phrase has the highest stress. (2) The second of two coreferential noun phrases has low stress. Suppose we write a high stressed constituent in bold and the second of two coreferential noun phrases in italics. Then these two principles give us:
On the morning of Christmas Eve together we'll make the parcel up.
On the morning of Christmas Eve together we'll make up a parcel.
When "the parcel* is definite it has low stress by principle (2) and does not come at the end, by principle (1). But when it is indefinite, so it cannot be the second of two coreferential noun phrases, it has high stress by principle (2) and comes at the end, by principle (1).
But for the following, we have a clash, because principle (1) cannot be satisfied:
On the morning of Christmas Eve together we'll make up the parcel.
So, the above should not be acceptable. This is my understanding of what is being said in the passage you ask about. Unfortunately, the last example above is not unacceptable, in my opinion. It's perfectly okay. Evidently, principle (2) takes precedence.
ln -s -T :: what does the "-T" switch do exactly?
I've seen the -T switch being used, e.g. here.
Looking at the man page it simply reads: "treat LINK_NAME as a normal file". Can somebody shed some light showing the difference between using and not using the -T switch?
See the “Target Directory” chapter in the GNU Coreutils manual:
-T
--no-target-directory
Do not treat the last operand specially when it is a directory or a symbolic link to a directory. This can help avoid race conditions in programs that operate in a shared area. For example, when the command mv /tmp/source /tmp/dest succeeds, there is no guarantee that /tmp/source was renamed to /tmp/dest: it could have been renamed to /tmp/dest/source instead, if some other process created /tmp/dest as a directory. However, if mv -T /tmp/source /tmp/dest succeeds, there is no question that /tmp/source was renamed to /tmp/dest.
In the opposite situation, where you want the last operand to be treated as a directory and want a diagnostic otherwise, you can use the --target-directory (-t) option.
Note that the GNU project does not consider man pages as a primary form of documentation; you should always look at the corresponding info page (the note about this is put at the end of each man page for a program which is developed as part of the GNU project).
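A quick sandbox demonstration of the difference (the paths are throwaway temp ones):

```shell
#!/bin/sh
cd "$(mktemp -d)"
mkdir src dest

# Without -T: "dest" is an existing directory, so the link is created INSIDE it.
ln -s "$PWD/src" dest
ls dest            # -> src   (we actually made dest/src)

# With -T: "dest" is treated as the link name itself, so ln refuses to
# clobber the existing directory instead of silently descending into it.
ln -sT "$PWD/src" dest 2>&1 || echo "refused, as expected"

# With -T and a fresh name, the symlink is created exactly where asked.
ln -sT "$PWD/src" dest2
ls -ld dest2       # dest2 -> .../src
```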
how to make a proper settings file?
I want to make all .py files use uniform settings.
I have a settings.py with a "MySettings" class, which contains all the settings and some calculations.
I imported settings.py into different files and then instantiated it. But now I have two different versions of settings, and modifying one of them does not change the global.
Is there any good way to deal with this situation?
UPDATE1: How to reload() a .py file in the same folder?
I successfully import settings from settings.py using @chepner's solution, but I can't reload it.
from settings import settings as AMSettings # correct.
reload(settings) # NameError: "settings" is not defined.
reload(AMSettings) # TypeError: reload() argument must be module.
Answer for your update: The name settings is not defined because you imported it as AMSettings. And AMSettings cannot be passed to reload() because it is an instance of the MySettings class, not a module. Either way, you cannot reload your settings like this; reload() only works on modules.
Don't export the class; export an instance of the class.
class MySettings:
def __init__(self):
self.path = ...
self.settings1 = ...
self.settings2 = ...
self.settings3 = ...
settings = MySettings()
Then
from settings import settings
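Why does this give every importer the same object? Because Python caches modules in sys.modules, so every `from settings import settings` returns the very same instance. The sketch below fakes a settings.py module in-process just to demonstrate the mechanism (in a real project it would be an ordinary file on disk):

```python
import sys
import types

# Simulate settings.py (stand-in for a real file on disk).
settings_mod = types.ModuleType("settings")
exec(
    "class MySettings:\n"
    "    def __init__(self):\n"
    "        self.debug = False\n"
    "\n"
    "settings = MySettings()\n",
    settings_mod.__dict__,
)
sys.modules["settings"] = settings_mod

# Two "different files" import the shared instance:
from settings import settings as s1
from settings import settings as s2

s1.debug = True            # change it in one place...
print(s1 is s2, s2.debug)  # True True -- ...and every importer sees it
```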
More info would be needed for a proper answer (e.g.: what is the call chain?). If you want to call abc.py or def.py directly, then what you want is not possible.
Actually, you need only one instance from MySettings and this instance should be used in abp.py and def.py files. In that case you will have only one reference and the two files "can communicate".
So for example if you want to call the def.py file then the following call chain would be good for you:
Output:
>>> python3 def.py
xxx yyy
zzz yyy
zzz ppp
As you can see above in this case you can change the attributes of MySettings class from both of abc.py and def.py files.
Missing wuau.adm policy template from windows\inf folder
I'm using Windows 10 OS, I get missing wuau.adm from windows\inf folder. Can I just replace this file from other server or download it, or copy it back from the sysvol folder?
You may replace the file in either the windows\inf directory or in SYSVOL, and you may replace it from another server or download it. Note: *.adm files have long been phased out by Microsoft. Windows 10 should be using *.admx files instead. Here's a link to the same problem described in the MS TechNet forums. It is from 2009 and earlier OS versions but it still applies: missing wuau.adm from windows\inf folder
Javascript get element index position in DOM array by class or id
My situation
var domElements = document.body.getElementsByTagName('*');
Now I want to return the array item key - position of the element in the array - ( for example domElements[34]) searching in the array for the element with id="asd".
How can I achieve this?
What if instead of ID I want to search trough class="asd hey" ?
Any help appreciated, thank you!
NB: Not in jquery, I need it in pure javascript in this case
@fuyushimoya nope, updated question, i need pure js sorry :P
http://jsfiddle.net/kb3621gb/1/ see this.
Try like this
var matches = document.querySelectorAll("#asd");
If you want to search by class
var matches = document.querySelectorAll(".asd");
If you want an index of your code
try like this
var domElements = document.body.getElementsByTagName('*');
for(var i=0;i<domElements.length;i++){
if(domElements[i].id==="asd"){
// search by id
// index i
}
if(domElements[i].classList.contains("asd")){
// search by class
// index i
}
}
Edit
There another way you can find index
try like this
var domElements = document.body.getElementsByTagName('*');
var domList= Array.prototype.slice.call(document.body.getElementsByTagName('*'));
var itemList = Array.prototype.slice.call(document.querySelectorAll(".asd"));
console.log(domList.indexOf(itemList[0])) // if you wanna find one index
//if you wanna search all index of class
for(var i=0;i<itemList.length;i++)
console.log(domList.indexOf(itemList[i]))
Sorry but i need to return the position in the domElements array, i can't do it straightforwardly
position means ?? index ??
Thank you, i will wait a little to see if any better solution, otherwise i'll accept your, thanks!!
@sbaaaang i have added another way. Please try that :)
Not literal code but if you iterate over the dom elements
for (var i = 0; i < parentElement.children.length; i++) {
var item = parentElement.children[i];
if (item.getAttribute('id') === 'asd') {
return i;
}
}
This has the assumption that instead of selecting ALL DOM elements, you simply select the parentElement of your list of elements - this approach is more logical and certainly a lot more efficient.
I think you are asking this:
var li = document.querySelectorAll("li");
function cutlistitem(lielm) {
lielm.addEventListener("click", function () {
lielm.classList.toggle("done")
})
};
li.forEach(cutlistitem);
I was creating a todo list app in which I created an unordered list of tasks to be done, in that <ul> every task was in an <li>.
Now what I wanted is: to add an event listener to every li element, for which I needed to grab every element of the li list. I ran a forEach loop over every element of the li list and added an event listener to each of the elements. To get an index of the element you can use the indexOf method.
easiest way to do it
var list = [].slice.call(document.querySelectorAll('.class-name'));
var listIndex = list.findIndex(function(el) {
    return el.id === 'id';
});
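All of the answers above boil down to the same scan-and-test pattern, which can be factored into one small helper (the function name is mine). Because it takes any predicate, both the id case and the class="asd hey" case are covered:

```javascript
// Return the index of the first element satisfying the predicate, or -1.
function indexWhere(elements, predicate) {
  for (var i = 0; i < elements.length; i++) {
    if (predicate(elements[i])) return i;
  }
  return -1;
}

// Browser usage (illustrative ids/classes):
//   var all = document.body.getElementsByTagName('*');
//   indexWhere(all, function (el) { return el.id === 'asd'; });
//   indexWhere(all, function (el) { return el.classList.contains('asd'); });
```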
Finding average weight of edges in an undirected graph
I'm trying to produce easier-to-parse graph from an interactions table:
from
interactions
from | to | weight
1 | 2 | 3
2 | 1 | 2
3 | 1 | 4
1 | 4 | 2
2 | 4 | 4
2 | 3 | 5
3 | 2 | 1
to
interactions
from | to | average weight
1 | 2 | 2.5
1 | 3 | 4
1 | 4 | 2
2 | 4 | 4
2 | 3 | 3
The trick here is to turn the directional information you have into undirected information. Let's decide that our "side1" node will always be the smaller one and the "side2" will always be the larger (note that I'm purposefully not calling them "to" and "from", as this would imply directionality). This logic can be achieved by using LEAST and GREATEST. Once that's achieved, it's a simple matter of using AVG in a grouped query:
SELECT side1, side2, AVG(weight)
FROM (SELECT LEAST(`to`, `from`) AS side1, GREATEST(`to`, `from`) AS side2, weight
      FROM my_table) AS t
GROUP BY side1, side2
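As a quick sanity check of the approach, the same query can be run against an in-memory SQLite database. SQLite's two-argument MIN()/MAX() play the role of LEAST()/GREATEST(), and the columns are renamed f/t here because FROM and TO are awkward reserved words:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE interactions (f INTEGER, t INTEGER, weight REAL)")
con.executemany(
    "INSERT INTO interactions VALUES (?, ?, ?)",
    [(1, 2, 3), (2, 1, 2), (3, 1, 4), (1, 4, 2), (2, 4, 4), (2, 3, 5), (3, 2, 1)],
)

rows = con.execute("""
    SELECT side1, side2, AVG(weight)
    FROM (SELECT MIN(f, t) AS side1, MAX(f, t) AS side2, weight
          FROM interactions)
    GROUP BY side1, side2
    ORDER BY side1, side2
""").fetchall()
print(rows)
# [(1, 2, 2.5), (1, 3, 4.0), (1, 4, 2.0), (2, 3, 3.0), (2, 4, 4.0)]
```

The result matches the expected table from the question, including the 2.5 average for the 1-2 pair and the 3.0 average for the 2-3 pair.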
How to prevent the mouse click event from happening when the button is Disabled
I want to disable the click event for a QPushbutton when it is disabled, whether it can be done ?
You could achieve that either by subclassing QPushButton and installing an eventFilter on your custom class, or by adding an event filter to the widget that contains your QPushButton.
If the button is disabled, the click events will have no effects. So, you should not have to discard them. What are you trying to achieve?
even if the push button is disabled, the click signal is being generated and the functions are performed; that's why I am asking for a solution
Laravel database migration: user could be of another type
I'm currently facing an issue on the conception part of my project.
I read many posts and none of them address this problem.
I'm using Laravel 5.6 as API for my web App.
So I have two type of users :
Simple user
Professional user
The "Professional" one has a dashboard where he can do many stuffs, and the "Simple user" can contact the "Professional".
But the "Professional" could also be a "Simple user", he can contact other "Professional". And here is the problem I'm blocked on the conception of Model and Migration of the database. I don't know how to link them, and I don't want to duplicate all the "Simple user" vars in a "Professional" table and just add vars in this one. (Don't know if it's explicite)
So I started by create an User Model/Migration :
App/User.php
namespace App;
use Illuminate\Notifications\Notifiable;
use Illuminate\Foundation\Auth\User as Authenticatable;
class User extends Authenticatable
{
use Notifiable;
/**
* The attributes that are mass assignable.
*
* @var array
*/
protected $fillable = [
'email', 'password', 'old_password', 'last_name', 'first_name', 'birthday', 'address',
'postal_code', 'city', 'country', 'phone_number'
];
/**
* The attributes that should be hidden for arrays.
*
* @var array
*/
protected $hidden = [
'id', 'password', 'old_password', 'remember_token'
];
/**
* The attributes that should be handled by Carbon
*
* @var array
*/
protected $dates = [
'created_at',
'updated_at',
'deleted_at',
'birthday'
];
}
database/migrations/..create_users_table.php
class CreateUsersTable extends Migration
{
/**
* Run the migrations.
*
* @return void
*/
public function up()
{
Schema::create('users', function (Blueprint $table) {
$table->increments('id');
$table->string('email')->unique();
$table->string('password');
$table->string('old_password');
$table->string('phone_number', 50);
$table->string('last_name');
$table->string('first_name');
$table->tinyInteger('sex');
$table->date('birthday');
$table->text('address');
$table->string('postal_code', 10);
$table->string('city', 50);
$table->string('country', 50);
$table->boolean('valid_email')->default(false);
$table->timestamps();
$table->rememberToken();
});
}
/**
* Reverse the migrations.
*
* @return void
*/
public function down()
{
Schema::dropIfExists('users');
}
}
The the Professional Model/Migration :
App/Professional.php
namespace App;
use Illuminate\Notifications\Notifiable;
use Illuminate\Foundation\Auth\User as Authenticatable;
class Professional extends Authenticatable
{
use Notifiable;
/**
* The attributes that are mass assignable.
*
* @var array
*/
protected $fillable = [
'profession', 'organisation', 'subscription', 'activity_start_at', 'valid_num_organisation'
];
/**
* The attributes that should be hidden for arrays.
*
* @var array
*/
protected $hidden = [
'user_id', 'num_organisation'
];
/**
* The attributes that should be handled by Carbon
*
* @var array
*/
protected $dates = [
'created_at',
'updated_at',
'deleted_at',
'activity_start_at'
];
}
database/migrations/..create_professionals_table.php
class CreateProfessionalsTable extends Migration
{
/**
* Run the migrations.
*
* @return void
*/
public function up()
{
Schema::create('professionals', function (Blueprint $table) {
$table->integer('user_id')->unsigned();
$table->primary('user_id');
$table->foreign('user_id')->references('id')->on('users');
$table->string('profession');
$table->string('organisation');
$table->string('num_organisation');
$table->string('subscription');
$table->date('activity_start_at');
$table->boolean('valid_num_organisation')->default(false);
$table->timestamps();
        });
}
/**
* Reverse the migrations.
*
* @return void
*/
public function down()
{
Schema::table('professionals', function (Blueprint $table) {
$table->dropForeign('professionals_user_id_foreign');
$table->dropColumn('user_id');
});
Schema::dropIfExists('professionals');
}
}
And here is the real question what the best way to link User and the Professional ? Like a "user->isOneOf()" or something like that.
I was thinking about creating a role on my user: if the role is empty the user is a "Simple user", and if not he is a professional. But I'm pretty sure that's not the best way.
I also look at the polymorphic link but i don't think it's the one I need in this case.
If you have any ideas I'm listening.
Thanks.
First of all, thank you for the extensive description of your problem. Not everyone does that.
Of course, most of this answer is somewhat opinion based. There are many different ways to implement this kind of behaviour. But I think the 'correct' implementation for this problem should be discussed because there are a lot of developers who really don't know where to start on this topic. I am happy to accept any feedback or additions to my answer to make it more complete.
So. Most of the time when I have to deal with more than one type/variant of the same entity, I use just 1 table. Especially when dealing with authentication. Because that makes coding a Laravel application a lot easier.
In the case of just 2 different types of entities (a normal user and a professional user in your case), I just add an is_professional boolean flag on the users table. That would be sufficient.
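A minimal migration sketch of that flag (hypothetical code, assuming Laravel's schema builder; the column name is_professional comes from the text above):

```php
public function up()
{
    Schema::table('users', function (Blueprint $table) {
        // Defaults to false, so existing rows become "normal users".
        $table->boolean('is_professional')->default(false);
    });
}

public function down()
{
    Schema::table('users', function (Blueprint $table) {
        $table->dropColumn('is_professional');
    });
}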
https://laracasts.com/discuss/channels/general-discussion/laravel-auth-multi-user-type
When you want to use more different variants, I'd add a role column to the table. This could be a role_id integer referencing a separate roles table if you need to store role-specific data. But most of the time you can get away with just using a varchar role key. You can also use an enum for this, but I never do; I always ran into problems when migrating the table later. I don't know for sure if they have fixed those problems already.
There is no shame in storing extra data in your users table that is only filled for certain types of users. In your case that would be the profession and organisation columns, for example. Denormalization is accepted in some of these cases. This totally depends on the size and use of your data.
http://www.vertabelo.com/blog/technical-articles/denormalization-when-why-and-how
If denormalization is not the way to go in your application, you could also create a separate table for any additional information, for the professional users in your case. You should write some logic that makes sure you can load the additional information for your model. Maybe in the boot method, or maybe as a scope.
If you want, you can also create completely different authentication systems for both users, separating the logic when logging in. But when you do that, you have to choose which user can access a certain page. It would be hard to implement accessibility for both types of users, I think.
https://scotch.io/@sukelali/how-to-create-multi-table-authentication-in-laravel
Like I said, there is no general ideal implementation for your problem. To answer your question in particular I would have to know your application from top to bottom. Only the developer, you ;), can form the most optimal tactics. Maybe that was not the answer you were looking for but it's the best answer I can give.
Well, first of all, thanks for your very detailed answer and all these links, very helpful. After discussions with the team we all agree to go with the is_professional solution. I'll also look at the professional_user solution and see how to deal with scopes in this case (I don't know how they work). I think this is a nice discussion subject, and it would be nice to hear other points of view. Thanks.
Yii2 signup without saving session data
I have registration form in my Yii2 advanced application. In my application only admin can register users. But when I register new user, admin session data are destroyed and new user's session data are set. What I am trying is not to change session data when I register new user. Admin user is still should be set to session data. How to solve this problem. This is my action in controller:
public function actionSignup()
{
$model = new SignupForm();
if(Yii::$app->user->can('admin'))
{
if (($response = self::ajaxValidate($model)) !== false)
return $response;
if (self::postValidate($model))
{
try
{
$trans = Yii::$app->db->beginTransaction();
if ($user = $model->signup()) {
Yii::$app->getUser()->login($user);
}
//update the child_left / child_right of the parent the new user falls under
$under = UserModel::findOne($user->id_parent_under);
if ($under->id_child_left == 0)
$under->id_child_left = $user->id;
else
$under->id_child_right = $user->id;
$under->update(false);
ChildrenBinaryModel::insertNewUser($user);
ChildrenClassicModel::insertNewUser($user);
$parents = ChildrenBinaryModel::find()->with('user')->where(["id_child" => $user->id])->orderBy(['depth' => SORT_ASC])->all();
$c_user = $user;
foreach($parents as $p)
{
$p_user = $p->user;
if ($p_user->id_child_left == $c_user->id)
$p_user->ball_left += 100;
else
$p_user->ball_right += 100;
$p_user->update(false);
$c_user = $p_user;
}
$trans->commit();
}
catch(\Exception $e)
{
$trans->rollBack();
return $this->renderContent($e->getMessage());
}
return $this->render('index');
}
return $this->render('signup', [
'model' => $model,
]);
}else{
throw new ForbiddenHttpException;
}
}
Please update your question with your code. How are you registering these users? Show us the action in your controller and any methods of your User model that are being used.
In my action everything works fine. I mean, data will be added to the database and the user will be signed up when you create a new user. But, for example, I want to add a user named Johnny. The action will be executed, and then the logged-in user on my site will be Johnny, not admin. What I am trying to achieve is that admin should still be logged in after adding Johnny.
I have solved this problem. In order to make answer clearly and simple I will show you default signup action in site controller (Yii2 advanced template):
public function actionSignup()
{
$model = new SignupForm();
if ($model->load(Yii::$app->request->post())) {
if ($user = $model->signup()) {
if (Yii::$app->user->enableSession = false &&
Yii::$app->getUser()->login($user)) {
return $this->goHome();
}
}
}
return $this->render('signup', [
'model' => $model,
]);
}
Only I have done was adding this
Yii::$app->user->enableSession = false
inside the if statement
If you decide to set enableSession to false, there is no need to do that inside that if (this only makes that statement never occur). By the way, take a look at http://stackoverflow.com/questions/2063480/the-3-different-equals. And looking at your code, you only needed to remove any login attempt, as @spencer4of6 said.
Checkout the documentation for \yii\web\User::login(). When this is run in your code by calling Yii::$app->getUser()->login($user), the session data is set to match your $user.
If it's always the admin user that's signing up new users, I'm not sure you even need to run the login method.
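To sketch that suggestion (hypothetical code, not from the thread): create the account and skip the login() call entirely, so the admin's session is never touched:

```php
public function actionSignup()
{
    $model = new SignupForm();
    if ($model->load(Yii::$app->request->post()) && $model->signup()) {
        // No Yii::$app->getUser()->login($user) here:
        // the admin who submitted the form stays logged in.
        return $this->redirect(['index']);
    }
    return $this->render('signup', ['model' => $model]);
}
```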
Why are the API demos considered as "legacy", as of API 18?
After installing the newest ADT & SDK of Android to support API 18, I've noticed that all of the API demos are now considered "legacy".
Here's a screenshot:
How come?
I've noticed that even though they are considered "legacy", they contain some things that do belong to API 18, for example this attribute that was found on the manifest:
android:theme="@android:style/Theme.Holo.NoActionBar.Overscan"
It's also weird that this is the only thing Lint warns me about when having minSdk to be lower than API 18 - this can't be the only new thing on API 18 that the demos contain, can it?
What is going on with it, and will we have a different set of samples?
Where do you see this "legacy" notation?
@CommonsWare i've now updated the question with a screenshot. it is shown when you create a new sample android project.
IMHO, you will have a better chance of getting an answer for this on the adt-dev Google Group.
@CommonsWare how do i do that? maybe the answer will be revealed on the next time we have a google event , when google will show their new nexus device.
"how do i do that?" -- um, join the Google Group and ask a question.
@androiddeveloper: AFAIK, there's no legacy notation ever for any release of Android API, including Level 19. Did you install ADT & SDK over an existing directory or into a new directory? Perhaps some files were left over from a previous installation.
@ChuongPham i didn't install, as both the SDK and ADT already have a feature of updating themselves. i think i've even tried (in order to fix something else) to uninstall them both and re-install them, and i still see the "legacy" items.
@androiddeveloper: I was curious so I set up another Android project and voila, I do see the legacy notation, so I stand corrected. This is to do with the version of the API you selected when you create / load an Android project. So, if you select a target of 17 (Android 4.2.2) or earlier, you will not see the legacy notation. Anything from API 18 onwards, you will see it. What it is is that some of the sample Android projects were written some time ago, and Google may not have updated them in line with the latest API levels. I would just ignore the legacy notation and continue.
@ChuongPham no, this is the weird part. they still update them. the new API demos of API 19 already have things that belong only to API 19 . you can run Lint to see it for yourself - it will complain about some classes and functions that are only for API 19 . an example is the class "SystemUIModes.java" , which has a reference to "View.SYSTEM_UI_FLAG_IMMERSIVE" , which belongs to API 19.
@androiddeveloper: Yes. But if you look closely, legacy here can mean one or two things: 1) There are methods contained in a sample Android project which are now deprecated in API 19 (the APIDemos project has a few of them e.g. Notification(int, CharSequence, long)), or 2) The notation acts as a visual cue to alert [new] Android developers that a given project may contain "legacy" codes i.e. support earlier versions of Android. This notation won't cause you any grief, so I would ignore it and continue development.
In Android Studio if you try to import the samples project now, you will see 'legacy' folder under android-19. This folder is in the same directory level as 'background', 'connectivity', 'content', 'input', 'media', 'security', 'testing', 'ui'. So @androiddeveloper's original question is valid!
@IgorGanapolsky i don't understand what's wrong with what i've written. i can still see it as "legacy" even now...
Who said there is something wrong?
@IgorGanapolsky i just didn't understand what you've written, so i assumed you are saying that there is something wrong with what i've written... :)
The non-legacy projects all have a build.gradle file, that you can simply open in Android Studio*.
The legacy projects don't have a build.gradle file and if you want to open them in Android Studio you need to either go through "create new project over existing sources" or android update project before you're good to go.
*) not entirely true: some of them point to outdated versions of the Android plugin and you need to tweak the project settings first.
that's the cause for legacy?! even though they keep updating them?!
I believe so, but I haven't heard anything official about it. It's the only obvious difference that I can see.
does it mean that some day google will ditch eclipse and support only their new IDE (android studio) ? i don't like the new IDE ... it lacks many things i'm used to on eclipse... :(
I don't know their plans.
In Android Studio, how can I import the API demos?
Calculate Amount of Inventory Change in Azure Analysis Services DAX
I have data in Azure Analysis Services (Tabular) that looks like the following table. I need to create two calculated columns for the Date that the Inventory Changed and the Amount of the Inventory Change. I think and IF function can take care of me for the date of the inventory changed. However, I'm stumped regarding how to calculate the amount of the inventory change.
What are some logical approaches to that?
Try this:
[Amount of the Inventory Change] =
    'YourTable'[Current Inventory Number]
    - CALCULATE(
        MAX('YourTable'[Current Inventory Number]),
        FILTER(
            VALUES('YourTable'[InventorySnapshotDate]),
            'YourTable'[InventorySnapshotDate] < LASTDATE('YourTable'[InventorySnapshotDate])
        )
    )
Adding salt to Blake3 Key derive function
According to the whitepaper, Blake3 can be used as a key derivation function (function key_derive). Currently, as a key derivation function, I use Rust's Hkdf::<Sha256>, which takes as input the master key and a salt. Adding a salt ensures that even if I reuse the master key, the output of the Hkdf is randomized and cannot be linked to previous derivations.
Since Blake3 is much faster than Sha256 I would like to use it as my KDF. However, according to the whitepaper, Blake3 takes only key material and a context string, which should be "hardcoded, globally unique and application-specific (...) should not contain variable data like salts, IDs or current time".
How can I then add salt to Blake3 key derivation function, to ensure that even if I reuse the same master key multiple times, the output of the key derivation is randomized?
prefix to your key?
@kelalaka I thought about that, but is it secure? If I feed into Blake3 salt || master key, won't the output be partially similar to the output of Blake3 for just master key as input?
Why it should be? If so, then any MD-based hash function will be totally insecure. Also, Blake2,3 are immune to length extension attacks.
@kelalaka Great, thanks a lot!
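Following the comments, the salt can simply be prefixed to the key material. BLAKE3 isn't in Python's standard library, so this sketch uses hashlib.blake2b as a stand-in to illustrate the idea (the function name and parameters are mine, not from the BLAKE3 API):

```python
import hashlib

def derive_key(master_key: bytes, salt: bytes, context: bytes) -> bytes:
    # Prefix the salt to the key material, as the comments suggest.
    # The context plays the role of BLAKE3's hardcoded context string
    # (blake2b's "person" field is limited to 16 bytes, hence the slice).
    return hashlib.blake2b(
        salt + master_key,
        person=context[:16],
        digest_size=32,
    ).digest()
```

Different salts give unlinkable outputs for the same master key, while the same (salt, key, context) triple stays deterministic.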
Playing Windows games on Linux
Windows games can be played on Linux either through Wine or by running Windows in a virtual machine. Which gives better performance?
If you have a stable 3-5 MBit/s internet connection, you can use cloud gaming like PlayKey. Run it on Linux using Wine or Crossover (as shown here) and play.
The more popular game, the more chances you have running it through Wine. For example WoW and The Sims are very playable through Wine.
When it comes to virtualization, I really, really would like to hear about a solution that would allow full-speed gaming through it. Every virtualization I've tried has been severely lacking when it comes to gaming.
You can use Intel VT-d and AMD-Vi, which allow the guest to directly access peripheral devices. They are available on newer Intel and AMD processors.
"More popular game" sometimes means top new game which has less chances to run through Wine (maybe it will run on next versions of Wine). For example, AC Unity, Far Cry 4 currently don't run on Wine.
If a game is supported on Wine (Take a look here), it should play the same as on Windows (sometimes there are minor problems such as fonts, but usually it works well).
However, if it is not supported, you can try Sun Virtualbox if you don't want to pay - it offers basic DirectX and works - but for anything better, take a look at either VMWare Workstation or the free VMware Player which offers much improved graphics performance.
So, use Wine where possible - it is better, then use virtualisation when it isn't supported.
Or, for the best performance all together, dual boot in to Windows!
Wine is usually (much) faster than a VM.
A few times, it's even faster than Windows. I used to play Eve Online on Wine and it was faster than on the same machine in Windows (dual boot). It takes longer to load and the fps is the same, but the interface response on menus, buttons and clicks is faster on Wine, and voice is a lot better.
Wine is much faster than a VM because Wine is not an emulator. Eve Online is faster on Wine? It's the first time I've heard that o.O ... what does "clicks are faster" mean? And voice is a lot better (maybe you hadn't installed the latest drivers on Windows)? It's getting off-topic, but it is interesting.
Try the Lindows Linux distro or Ubuntu Ultimate Gamers Edition. They allow you to play almost any Windows game.
Lindows was renamed years ago and hasn't existed since 2007 or so. Also, both Lindows and Ubuntu Ultimate Gamers use the same mechanisms as other Linux distributions for Windows compatibility.
Nice. I didn't know that. Thanks
Performance or Maximum Compatibility
With virtualization you need to provide/dedicate resources to the virtual machine, so only go that way if you have resources to spare. Remember that a total of 4 GB of memory doesn't mean it's enough... there are always bottlenecks... and the virtual machine will share the hardware on a time basis, so think about that as well...
Plain Wine will give you performance that virtual machines can't, but as Janne Pikkarainen said, there will be compatibility/support issues with Wine.
So there is no single answer here; choose your trade-off:
Performance or Maximum Compatibility
There are some solutions which might help you:
CrossOver allows you to run Windows software on Linux, powered by Wine.
Cedega from TransGaming, designed specifically for running Windows games on Linux, is also based on Wine.
These are your best options for now, I would not use virtualization.
+1, agree. It's also worth mentioning that CrossOver uses Wine libs for more compatibility, and the creator of CrossOver is CodeWeavers, who hosts the Wine web-site. And TransGaming is the company who brought a very fast DX9 called SwiftShader (just copy its d3d9.dll to any DX9 game folder, and feel the speed).
How to determine if pilots in crashes flew again?
I checked this post on Airliners.net , but the formatting is hard to read, and the information or hearsay may not be trustworthy. Also, I'm hoping for a more efficient, efficacious method, rather than researching each accident separately.
I'd wanted to be advised if pilots, suspected of (provable, veritable) pilot error, were still flying: Qantas Flight 1, American Airlines Flight 1420
Yet I'd be heartened and inspired by heroic or esteemed pilots, like those for United Airlines Flight 232.
Well, the captain of AA1420 definitely isn't flying anymore, given that he's dead.
How can I determine if a driver who was involved in an automobile accident is still driving?
The answer is "They probably are."
Broadly, unless the pilot died or the FAA (or equivalent authority) revoked their license as a result of their actions there is no reason the pilot wouldn't still be flying unless they decide to hang up their wings for personal reasons.
They may not be flying for the same airline, and certainly will have had some sort of remedial training (particularly if they were involved in an incident where pilot error was the cause), but damaging an aircraft is usually not a career-killing move unless one makes a habit of it.
If you want to know for certain about a particular flight crew, you would need to research each accident separately (locate the pilots' information - name or certificate number - and check something like the FAA airman records branch to see if the pilot still holds a valid certificate and medical).
So pilots are supposed to be flawless? Are you even aware that often an incident or accident is blamed on "pilot error" simply to prevent lawsuits against companies (though that's more often the case when the pilot died and can't defend himself)?
Rest assured if someone makes a serious mistake they're going to get training and are likely to be grounded for a while during that. They're also going to have to live with the resulting damaged reputation, and if people died or were seriously injured with the mental anguish of the feeling of guilt that results (which more than likely will cause them to retire from flying).
Are you also going to demand that anyone with a driver's license who ever was in an accident lose it permanently?
It's life, shit happens as they say. Sometimes the best you can do isn't good enough, and the fact that there were no other options ends up being blamed on you for no other reason than that someone, somewhere, is looking for a scapegoat.
So a pilot who lands hard in bad weather, causing a passenger who ignored the seatbelt sign to hit her head and break her nose, ends up getting blamed for the hard landing causing the injury, for no other reason than that it's better PR for the company to do that than to go public about the fact that the passenger ignored the safety rules and wasn't wearing that seatbelt.
Things like that happen more often than you think, and you want that pilot to as a result lose his license and his job?
Thank you. I've recast my answer to sound less judgmental or critical, which I hadn't intended to be. Is it more just now?
I think, for common people, it sounds much better to hear "passenger ignored the safety rules and was hurt" than "passenger hurt by a pilot error". In the first case they think "fortunately I am not such an a..hole"; in the second they feel fear.
@jwenting, « Are you even aware that often an incident or accident is blamed on "pilot error" simply to prevent lawsuits against companies ». By companies do you mean the aircraft manufacturers or the operators? The operators are responsible for pilots' errors.
@user40476 either, lawsuits tend to go after the juiciest target. And no, operators aren't responsible for pilot error if they can show that the pilot operated outside of the standards the company sets. Which pilot error of course suggests...
Pilots will make serious mistakes
Although pilot error can cause an accident or serious incident, systems in aviation are generally such that only multiple failures or errors will have a bad outcome. There is a constant process of verifying, cross-checking and so on.
It's true that people, including pilots, make mistakes. But that's only part of the truth; in fact people (including pilots) will make mistakes, as a matter of course. That is what they are expected to do, and that's why the systems they work with are so successful at preventing those mistakes from turning into serious incidents.
Accidents tend to represent the failure of systems, not individuals
Accidents tend to happen when the system breaks down and fails to prevent an error developing into a situation. The classic example is AF447, in which a long series of errors (both human and mechanical) occurred; the system that broke down between the three crew members in the cockpit was that of communication.
It would be pretty useless to (say) choose to avoid flying with pilots whose errors had led to accidents. Pilots flying right now all over the world will have made and will be making exactly the same mistakes, just with different outcomes.
It would be better to enquire after patterns
It would be much more useful (but also much more difficult) to know if the systems, patterns and habits at work in the cockpit and beyond were ones conducive to the production and sustenance of uncorrected errors.
For example: the pairing of a laconic and prickly captain with a cowed first officer. Or: an actual practice within an airline that's at variance with standard operating procedures. Or: flight rotas that have left both flight crew with disrupted sleep that week.
The industry's attitude towards serious error
The aviation industry has an attitude towards mistake-making or even impaired pilots that's quite different from the one expressed in the question.
Each year, quite a number of commercial pilots are arrested for turning up to work (or in some cases, for finishing a shift at the controls) drunk. People might not want to get in a plane with a pilot who'd been previously been arrested for that, but the industry (actually I only know about cases from the USA, so I'd be interested to know more about other regions) has different ideas.
In fact many of them eventually return to work. In the USA the HIMS programme is key to this.
For example, Lyle Prouse, an alcoholic Northwest Airlines captain, served a 16-month prison sentence after being convicted of flying under the influence of alcohol in 1990. Three years later he was flying again with the same company, and he retired in 1998 as the captain of a 747.
Capturing Return Code from R in SAS IML
I have a submit /r; block in IML. 95% of the time, things run correctly. The other times there is an ERROR in R.
My log shows
ERROR: R: <whatever the error message is>
When an error occurs, the outputs are not available. Is there a way to trap or detect the first error so I do not attempt to pull outputs that don't exist?
Use the ok option in the submit statement (ref). Later in the code, you can handle the error based on the value of the variable that stores the information from ok.
submit / R ok = isOK;
* Do stuff;
endsubmit;
if isOK then do;
* Handle the no error case;
end;
else do;
    * Handle the error case;
end;
It does not appear that you can capture the error message itself, unless you write the R script to return some error code instead of failing.
Does not seem to work. I get an error and isOK = 0. Perhaps the R function I am using is not setting an error code.
Can you refactor the R code to throw and catch errors?
Yup, already moving that direction. Actually not my R code, but I have enough to do it. Thanks for the insight.
Troubleshooting the REPWEIGHTS option in surveyreg analysis (SAS)
I'm new to SAS and the documentation for repweights seems to be kind of limited. I'm trying to weight a nationally-representative survey, but my replicate weights are producing an error (red in the code, no effect on the results if I pull them out) and I can't figure out what I'm doing wrong. Here's my sample:
proc surveyreg data=ffm.premodel;
weight m1wt;
repweights m1wt_rep1--m1wt_rep33;
where natflag=1;
class lb;
model wr9 = lb ed inc / solution;
lsmeans lb / cov pdiff;
output out=ffm.wr9_1 residual=res;
run;
Help would be greatly appreciated!
Can you post the error?
Usually it is one single hyphen for specifying something like replicate weights:
repweights m1wt_rep1-m1wt_rep33;
Doesn't have anything to do with rep weights, it's just a different kind of variable list- but this certainly could be the issue if they're not consecutive on the data file.
No it didn't have to be specifically replicate weights, but any ordered variable. Anyhow, hard to tell without a log
How to concatenate multiple MP4 videos with FFMPEG without audio sync issues?
I have been trying to concatenate a number of MP4 clips (h264,aac) using the FFMPEG concat protocol documented here. The clips concatenate successfully, however there are multiple errors in the logs including:
Non-monotonous DTS in output stream
Past duration too large
Additionally, it seems that the audio and video go slightly out of sync as more clips are added - though it is more noticeable on certain players (Quicktime & Chrome HTML5).
Here is the code I am using, any tips would be appreciated!
Convert each video to temp file
ffmpeg -y -i <input file> -vcodec libx264 -acodec aac -f mpegts -bsf:v h264_mp4toannexb -mpegts_copyts 1 <temp file>
Concat temp files
ffmpeg -i "concat:input1|input2|..." -map 0 -vcodec copy -aprofile aac_low -acodec aac -strict experimental -cutoff 15000 -vbsf aac_adtstoasc -b:a 32k <output file>
I'm having a similar issue, concating the two videos causes audio issues as well as playback issues, did you ever find a solution? (answer from mulvya below didn't work for me). For me the audio quality becomes really terrible, and the second video plays back at around 0.5 speed compared with the original and the total video length is longer than expected.
Since you're encoding both audio and video, just use the concat demuxer:
Create a text file with the list of files to be joined
file 'input1'
file 'input2'
file 'input3'
...
Then run
ffmpeg -f concat -i textfile -map 0 \
-vcodec libx264 \
-aprofile aac_low -acodec aac -strict experimental -cutoff 15000 -b:a 32k <output file>
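The text file from the first step can be generated with a small shell helper (a sketch; the filenames in the usage line are placeholders):

```shell
# Emit one "file '...'" line per clip, in the concat-demuxer list format.
make_concat_list() {
  for f in "$@"; do
    printf "file '%s'\n" "$f"
  done
}

# Usage: make_concat_list clip1.mp4 clip2.mp4 clip3.mp4 > textfile
```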
Thanks Mulvya, we tried that, but had the same timestamp issue.
Those will remain since they relate to the DTS series in the source files. You should only worry if the audio as heard is out of sync.
Try -filter_complex with a PTS reset, and avoid creating intermediate MPEG files, which in my experience mess up any sync.
example:
ffmpeg -i 1.mp4 -i 2.mp4 -filter_complex "[0:v]trim=start=770:end=1843.6,scale=768x432,setsar=1/1,setdar=16/9,setpts=PTS-STARTPTS[v0];[0:a]atrim=start=770:end=1843.6,asetpts=PTS-STARTPTS[a0];[1:v]trim=start=58:end=1795.5,scale=768x432,setsar=1/1,setdar=16/9,setpts=PTS-STARTPTS[v1];[1:a]atrim=start=58:end=1795.5,asetpts=PTS-STARTPTS[a1];[v0][a0][v1][a1]concat=n=2:v=1:a=1[outv][outa]" -map [outv] -map [outa] -map_chapters -1 -map_metadata -1 -c:v libx265 -c:a libmp3lame -q:a 8 -ar 44100 -r 24 11.mp4
where start and end, in seconds, correspond to the times at which to start and end encoding of the file (not the duration):
0__start~~~e~n~c~o~d~e~~~t~h~i~s~~~end____1:1
0____0.1~~~~~~~~~~~~~~~~~~~~~~~~~~~1.3_____61
encoding from 0:0:0.1 till 0:0:1.3 sec of the file, result shall have duration of 1.2 sec
Read my answer here if you have really many files.
loop Read_csv to pandas dataframe
I want to load all .csv files from a folder into a list of separate dataframes, one per file.
the folder is called coins.
for file in './coins':
logs_total = [pd.read_csv('./coins/'+file, engine='python')]
The error:
IsADirectoryError: [Errno 21] Is a directory: './coins/.'
without engine='python' its :
ParserError: Error tokenizing data. C error: Calling read(nbytes) on source failed. Try engine='python'.
Your for loop does not reference the files in the coins folder. All Python knows is that './coins' is a string, and you are iterating over each letter in that string.
Also, if you want to build a list of data frames with a for loop, you should create the list outside of the loop first and append to it (or you can use a list comprehension).
To access the files, you can import the either os or glob to get the file names. Here is an example using os.
import os
import pandas as pd
log_total = []
for file in os.listdir('./coins'):
    if file.endswith('.csv'):  # skip subfolders and non-csv files
        log_total.append(pd.read_csv('./coins/' + file))
Here is an example using glob and a list comprehension.
from glob import glob
import pandas as pd
log_total = [pd.read_csv(f) for f in glob('./coins/*.csv')]
adding element to other elements in a for loop only rendering last item
I'm hoping to make a tool for myself, aka a JavaScript bookmarklet, that I can use on webpages, specifically Google Forms. I wanted to loop through all questions on a page and add helpful tips, or maybe definitions of words if that's what the question was asking. I'm having issues appending children to every element; at the moment only the last question is getting anything appended. Here's some of the code I'm using for this:
var x = document.createElement("P");
var t = document.createTextNode("This is an example of some notes");
x.appendChild(t);
//getting list of questions
var questions = document.getElementsByClassName("freebirdFormviewerComponentsQuestionBaseHeader");
for(var i = 0;i<questions.length;i++){
x.innerHTML=i;
questions[i].appendChild(x);
console.log(i);
}
//for some reason only the last question has text visible
thank you!
edit: here's the link to the form I'm using; I've tried other forms as well but the problem remains the same. The output from the console.log is as follows: 0 1 2 3 4 // up to the number of questions, so I think it may just be a visual bug. https://docs.google.com/forms/d/e/1FAIpQLSfuvQKv0KtayB8MwQ-oYj5kf6K8I8dWIzJhqeDqMWZdwMY1mQ/viewform
Tricky because we don't have your original form. I suggest the following to encourage people to answer.
Can you trim out everything that isn't essential to show the problem? For example everything about fonts and styling can be deleted.
Can you use Javascript to "console.log" the list of questions, and show us what it looks like?
Best of all, can you include a MINIMAL html version of the form (strip out everything that isn't needed) so people here can run your code without Google Forms?
If x.innerHTML = i, then questions[i] will also be i. Shouldn't x.innerHTML == t?
At the moment I want the questions to get numbered; definitions will come later in the project, but for now I just want to be able to number each question.
| common-pile/stackexchange_filtered |
How to evaluate model's performance using a K-S test
I am wondering if a K-S test or other test can be used for comparing models' performance. For example, I have two prediction models, the prediction result of first model is [x1, x2, x3, ..., xn], the prediction result of the second model is [z1, z2, z3, ..., zn] . The ground truth of the prediction is [y1, y2, y3, ..., yn].
The lengths of the lists are not the same, but roughly around 4,000.
The prediction target is kind of noisy, so I am not expecting the prediction results to be perfect. I ran scipy.stats.kstest, something like
stats.kstest(x, y)
and
stats.kstest(z, y)
The return for x is
statistics:0.14
p-value:1.5e-40
And the return for z is
statistics:0.12
p-value:1.5e-14
I am wondering how I should interpret the results for the models, like which statement is right?
(1) Although neither model perfectly predicts y, model z is still better than model x because the statistic value is lower.
(2) Both model z and model x are really bad and they cannot be compared, because both p-values are extremely small.
I'm not sure I understand. What do you believe the test provides you? For prediction problems, one would typically compare models on some loss function. Any particular reason why you're not taking that approach?
The problem is a little bit specific: what I predicted is the degree of nodes in a graph. It's something of a tradition to compare the node degree distributions, rather than only a mean squared error. It is OK to get a wrong node degree value for each node, as long as the final node degree distribution is similar.
If you're looking at the distribution, why not compare qq plots? I just don't think a hypothesis test is really what you want here.
Another reason for using the hypothesis test is that x, y, and z are not the same length. The model also generates some 'nodes' that do not exist in the ground truth. It's more like a 'generative' model, rather than an element-wise prediction model.
What values can these degrees of nodes take?
They are integer values, like 0, 1, 2, 3, 4. But in another application of mine, they can also be continuous values (not node degrees any more).
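Since the two samples have different lengths, the two-sample form of the test is the right tool (scipy.stats.ks_2samp; in recent SciPy versions, stats.kstest(x, y) with two array arguments dispatches to it). For intuition about what the statistic measures, here is a from-scratch, stdlib-only sketch of the two-sample KS statistic, the largest vertical gap between the two empirical CDFs (the degree samples are toy values, purely illustrative):

```python
import bisect

def ks_statistic(a, b):
    """Two-sample KS statistic: max over t of |F_a(t) - F_b(t)|,
    where F is the empirical CDF of each sample."""
    a, b = sorted(a), sorted(b)
    d = 0.0
    for t in a + b:  # the maximum gap is attained at an observed value
        f_a = bisect.bisect_right(a, t) / len(a)
        f_b = bisect.bisect_right(b, t) / len(b)
        d = max(d, abs(f_a - f_b))
    return d

# Toy node-degree samples of different lengths:
truth = [0, 1, 1, 2, 2, 2, 3, 4]
pred = [0, 1, 2, 2, 3, 3, 5]
print(ks_statistic(truth, pred))  # largest CDF gap, here at degree 2
```

On the interpretation question: with n around 4,000, even tiny distributional differences yield minuscule p-values, so the small p-values alone do not mean both models are hopeless; comparing the statistics as effect sizes (0.12 vs 0.14), or simply overlaying the two empirical CDFs, is usually more informative.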
| common-pile/stackexchange_filtered |
.Net get DateTimeFormat "G"
I want to use DataGridView to display a date and a DateTimePicker to implement a filter. I want both of them to display the date time in the "G" formatted Standard Date and Time Format. Ideally, I would just do
dateTimePicker1.CustomFormat = "G";
but that does not appear to work. (It literally displays the character "G" in the DateTimePicker.) I found the following workaround, but it appears fragile and cumbersome to me.
DateTime now = DateTime.Now;
dateTimePicker1.Format = DateTimePickerFormat.Custom;
dateTimePicker1.CustomFormat = System.Globalization.CultureInfo.CurrentCulture.DateTimeFormat.GetAllDateTimePatterns('G')[0];
dateTimePicker1.Value = now;
dataGridView.Rows.Add(now);
Any suggestions for improvements?
Edit: Added that I set dateTimePicker1.Format.
You could use:
var formatInfo = CultureInfo.CurrentCulture.DateTimeFormat;
string pattern = formatInfo.ShortDatePattern + " " + formatInfo.LongTimePattern;
That follows what the MSDN docs for 'G' state:
The "G" standard format specifier represents a combination of the short date ("d") and long time ("T") patterns, separated by a space.
formatInfo.ShortDatePattern + " " + formatInfo.LongTimePattern is not much of an improvement over the formatInfo.GetAllDateTimePatterns('G')[0] he already has and claims works.
@JeppeStigNielsen: Well, it's an alternative. It may be more efficient too, but that's unlikely to be relevant.
I ended up sticking with my original solution. I appreciate the link to MSDN and another alternative solution. I hadn't found anything in my own searches about what "G" officially was defined as other than it was the default for DateTime.ToString().
DateTimePicker doesn't support that. Check the values it supports for that property:
http://msdn.microsoft.com/en-us/library/system.windows.forms.datetimepicker.customformat.aspx
Check if when you run that code, what's actually being put into the CustomFormat property of your picker isn't the long date and time pattern for the culture in your web.config.
Important Edit: Maybe simply setting the Format property to DateTimePickerFormat.Long will get what you want done.
I should have mentioned this in the question (my bad), I set the Format property to DateTimePickerFormat.Custom. DateTimePickerFormat.Long only shows the date, but I want to display both the date and time.
| common-pile/stackexchange_filtered |
FPDF Getting page numbers of a 1 or 2-Sided form letter
I create a form letter from an order table. Each order can have either 1 or 2 pages. The PDF contains all orders. Now I want to put the page numbers for every order on the PDF.
First Order: Pages 1 and 2,
Second Order: Page 3,
Third Order: Page 4.
The number of pages depends on how many articles a customer ordered (max 2 pages).
PageNo() uses the whole document for numbering. Maybe someone had the same problem?
The expected result can be achieved by subclassing FPDF's FPDF class, and adding an $orderPageNo property to keep track of the current order's "sub-page number". You can then use that property in your customized Header or Footer method.
Example:
<?php
require('fpdf/fpdf.php');
class PDF extends FPDF {
protected $orderNumber, $orderPageNo;
function AcceptPageBreak() {
$accept = parent::AcceptPageBreak();
if ($accept) {
$this->orderPageNo++;
}
return $accept;
}
function Header() {
$this->Cell(50, 30, 'Order #'.$this->orderNumber);
$this->Cell(50, 30, 'Page '.$this->orderPageNo, 0, 1);
}
function Order($orderNumber, $items) {
$this->orderNumber = $orderNumber;
$this->orderPageNo = 1;
$this->AddPage();
for ($i = 1; $i <= $items; $i++) {
$this->Cell(30, 12, 'Item #'.$i, 1, 1);
}
}
}
$pdf = new PDF();
$pdf->SetFont('Arial', '', 12);
$orders = array(1 => 15, 2 => 25, 3 => 35);
foreach ($orders as $orderNumber => $items) {
$pdf->Order($orderNumber, $items);
}
$pdf->Output();
| common-pile/stackexchange_filtered |
How to add variable tax on products in woocommerce?
TAX VARIATION FOR DIFFERENT PRODUCTS
how to add 2 products with different tax rates in woocommerce?
Correctly, setting per-item tax rates should be possible if you register the products yourself in the PayPal website interface. This is exactly what I am trying to avoid, because the products are entered by the end user on my CMS.
If what I want isn't possible with a Website Payments Standard button, I would also welcome other ideas on how to forward such orders to PayPal. I am trying to avoid the Express Checkout payment product because if I understand correctly I won't have full control of the cart. I want to have a custom cart and later be able to offer more payment options besides PayPal.
Have a look at this. https://www.wpdesk.net/blog/woocommerce-taxes-complete-tutorial/
Probably this is what you want. Not so sure.
Sir, I have already done this. What I need is to set a different tax rate for different products.
https://wordpress.org/support/topic/tax-rates-for-different-products-being-charged-for-all-products/
Just now I have seen this link. I am thinking of something custom we can set on each product page, where we can add tax per product and show the tax-included price in the cart. Is that possible?
Probably not; I think the best idea is to create tax classes with different rates. You can still submit a ticket to WooCommerce, but make sure to explain the requirements properly.
https://woocommerce.com/contact-us/
Thank you sir :) :)
| common-pile/stackexchange_filtered |
jQuery newsitems hide and collapse
I am using a nice script to hide and show several divs
// Catch all clicks on a link with the class 'link'
$('.link').click(function(e) {
// Stop the link being followed:
e.preventDefault();
// Get the div to be shown:
var content = $(this).attr('rel');
// Remove any active classes:
$('.active').removeClass('active');
// Add the 'active' class to this link:
$(this).addClass('active');
// Hide all the content:
$('.content').hide();
// Show the requested content:
$('#' + content).show();
});
This works great on a single div with several items I like to hide.
But I use a template that retrieves news items, and I'd like to make this work on each div individually, and also hide the second div by default.
<div class="content" id="div[[+idx]]-1">
<p>Show content by default</p>
<a class="link-[[+idx]]" href="#" rel="div[[+idx]]-2">
Link to show div id="div[[+idx]]-2" and hide id="div[[+idx]]-1"
</a>
</div>
<div class="content hide" id="div[[+idx]]-2">
<p>Hide content by default</p>
<a class="link-[[+idx]]" href="#" rel="div[[+idx]]-1">
Link to show div id="div[[+idx]]-1" and hide div id="div[[+idx]]-2"
</a>
</div>
The problem is that I use this template for every iteration, and the script does not support an undefined number of items: it closes all my other divs. The second div also does not hide by default.
I changed the link to link1 and then you get the following unwanted behavior:
http://jsfiddle.net/Vh7HR/8/
if I leave out the 1 it does nothing
see my answer; you're currently telling EVERY element with the class "content" to hide. If you want just the div containing the clicked link, you need to select it via parent()
EDIT: just realized that your links are actually children of the div that you want to hide, so what you actually want is the parent selector
make use of .parent()
// Catch all clicks on a link with the class 'link'
$('.link').click(function(e) {
// Stop the link being followed:
e.preventDefault();
// Get the div to be shown:
var content = $(this).attr('rel');
// Remove any active classes:
$('.active').removeClass('active');
// Add the 'active' class to this link:
$(this).addClass('active');
// Hide all the content:
$(this).parent('.content').hide();
// Show the requested content:
$('#' + content).show();
});
Cool, thanks, this is great; I adjusted it on JSFiddle. Now I only need to get the second div to be hidden by default.
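One way to get the default hiding, since the markup already carries a hide class on the second div, is a stylesheet rule (a sketch; the class names are the ones from the question):

```css
/* hide the second div of each news item until its link is clicked */
.content.hide { display: none; }
```

Be aware that in jQuery 3.x, .show() no longer reveals elements hidden by a stylesheet, so with newer jQuery the alternative is to run $('.content.hide').hide() once on page load instead.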
You need to use delegation if you are adding new .link dynamically.
By using .on() you can listen for a click on document (or another element that is ancestor to .link and is present when the click handler is attached) and when the click is fired it will look for .link.
Try this:
$(document).on('click', '.link', function(e) {
I recommend you use the on method provided by jQuery to handle click events; the classic click method does not work with elements that you create after the DOM has been rendered.
$( "your-selector" ).on( "click", function() {
alert( "Hello world!");
});
| common-pile/stackexchange_filtered |
OpenGL depth test doesn't work on some computers
My first question here. In my program, depth testing works properly on some computers but not on others: objects that are located farther away cover those which are located closer. I called glEnable(GL_DEPTH_TEST); and tried calling glDepthFunc(GL_LESS);, but as I said, everything works properly on some computers while the same program does not on others. How can it be fixed?
Edit: Problem solved. I added these lines before calling al_create_display() and everything works:
al_set_new_display_option( ALLEGRO_COLOR_SIZE, 32, ALLEGRO_REQUIRE);
al_set_new_display_option( ALLEGRO_DEPTH_SIZE, 24, ALLEGRO_REQUIRE);
al_set_new_display_option( ALLEGRO_STENCIL_SIZE, 8, ALLEGRO_REQUIRE);
al_set_new_display_option( ALLEGRO_AUX_BUFFERS, 0, ALLEGRO_REQUIRE);
al_set_new_display_option( ALLEGRO_SAMPLES, 4, ALLEGRO_SUGGEST);
You have to ensure that the default framebuffer has a depth buffer. The default framebuffer is created when the OpenGL window and context is initialized.
@Rabbid76 How can I do this if I'm using immediate mode?
@Rabbid76 I'm using Allegro 5, should I call al_set_new_display_option(), or set a display flag?
Sorry, but I'm not familiar with Allegro 5. Probably you have to set al_set_new_display_option(ALLEGRO_DEPTH_SIZE, ..., ...)
In addition to activating the Depth Test (glEnable(GL_DEPTH_TEST)), it is important that the current framebuffer has a depth buffer.
The default framebuffer is created at the time the OpenGL Context is constructed. The creation of the OpenGL context depends on the OS and windowing library (e.g. GLFW, SDL, SFML). Whether a depth buffer is created by default often depends on the system. In general, window libraries provide additional options for explicitly specifying a depth buffer when generating the OpenGL window:
For instance:
GLFW - Framebuffer related hints
glfwWindowHint(GLFW_DEPTH_BITS, 24);
// [...]
GLFWwindow *wnd = glfwCreateWindow(800, 600, "OpenGL window", nullptr, nullptr);
SDL - Using OpenGL With SDL
SDL_GL_SetAttribute(SDL_GL_DEPTH_SIZE, 24);
// [...]
SDL_SetVideoMode(800, 600, bpp, flags);
SFML - Using OpenGL in a SFML window
sf::ContextSettings settings;
settings.depthBits = 24;
// [...]
sf::Window window(sf::VideoMode(800, 600), "OpenGL window", sf::Style::Default, settings);
GLUT - glutInitDisplayMode
glutInitDisplayMode(GLUT_RGBA | GLUT_DOUBLE | GLUT_DEPTH );
glutInitWindowSize(800, 600);
| common-pile/stackexchange_filtered |
Three20 iPhone - Sending XML-RPC Request instead of HTTP?
I am new to Three20 and have been trying to develop an iPhone app with Three20 for the past week. This app has to access an XML-RPC server.
I know it is possible to receive responses in other formats like JSON.
But for requests, instead of the provided HTTP class TTURLRequest, is it possible to send request by XML-RPC?
I created a Three20 extension for XML-RPC connections.
It's on my three20 fork.
http://github.com/ngs/three20/tree/master/src/extThree20XMLRPC/
Please try it and give me feedback.
Cocoa XML-RPC Client Framework appears to do what you want, although it uses the underlying NSURLConnection and friends that Three20 uses, not Three20 itself.
For the record, XML-RPC uses HTTP as its transport layer, so I don't see why you wouldn't be able to use it for that purpose in the first place; the main thing is writing a library that wraps the underlying HTTP transport pieces so you can invoke methods more transparently.
(I.e., you can set HTTP headers as well as the request method (GET, POST, PUT, etc.), and submit data in the body of an HTTP request, so everything is there to support it. Additionally, XML itself can be parsed via the NSXMLParser class, the Open Source libxml2 library, or other third-party solutions (e.g. TouchXML, which is built on libxml2).)
Lastly, there is another SO question regarding XML-RPC on the iPhone in general, although it has many of the same answers.
| common-pile/stackexchange_filtered |
Manage of memory on query
I have a big table (a lot of columns and some rows) in my OrientDB schema: roughly 35,000 columns and 100,000 rows.
When I try to query my table with a simple COUNT, like this:
SELECT COUNT(@rid) FROM myTable WHERE filters
My process occupies nearly 8 GB of memory.
If I try to rewrite my query using index notation, like this:
SELECT COUNT(@rid) FROM index:myIndex WHERE key = [value1, ... valueN]
My process occupies nearly 8 GB of memory.
First question:
I reserved 8 GB of memory for OrientDB; must I reserve the same amount for the application server? With OrientDB's DISKCACHE property, its own memory management is OK, but under the application server (Tomcat) I get an Out of Memory error.
Second question:
Why does a simple COUNT occupy all that memory? Is there a pagination strategy that depends on the number of columns?
Sorry I didn't understand some points:
you have 35,000 columns, you mean that you have a class with 35,000 fields?
are you using Tomcat and OrientDB simultaneously?
which version of OrientDB are you using?
1) Yes, 35,000 columns, not mapped in a POJO class; only some fields are mapped. 2) Tomcat is the front-end deployment for the application and OrientDB is the DB server. 3) As I tagged my question: 2.1 (more specifically 2.1.4).
Try to start OrientDB without Tomcat, and verify the following:
SET INDEX
Using an index on the fields used to filter data in the query will lead to an improvement.
In OrientDB there are various types of indexes; each of them provides advantages in certain situations, so the index choice depends on your situation.
In my tests I used 'SB-TREE'.
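For example, in OrientDB SQL a property index can be created like this (a sketch, using the Person class and id field from the test described below):

```sql
CREATE PROPERTY Person.id INTEGER
CREATE INDEX Person.id ON Person (id) NOTUNIQUE
```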
VERIFY RAM OCCUPATED BY ORIENTDB PROCESS
With the default settings (OrientDB auto-configured DISKCACHE = 5,064 MB, heap = 455 MB), after uploading 100,000 vertices of the Person class with 3 properties (id, name, city), I have the following memory values:
Size db = 80 MB
Query = SELECT COUNT(@rid) FROM Person WHERE id >= 0
Time Query execution = (cold) 3.57 sec. (hot) 1.88 sec.
Verify the query used the index: run explain SELECT COUNT(@rid) FROM Person WHERE id >= 0 and check that under the column "involvedIndexes" there is the index that you created (in my case ["Person.id"])
OrientDB process (with Studio open) = 442 MB (run ps -ef | grep orient to get the process ID, then top -p YOUR_PID)
Time Query:
Index use:
RAM used by orientdb's process
INCREASE HEAP/DISKCACHE
if you have an "out of memory" you may try to increase the heap:
Open the file server.sh (Linux) or server.bat (Windows) in the bin folder of your OrientDB installation.
Set MAXHEAP="-Xmx2048m"
If your query is still slow after using indexes, increase the disk cache:
MAXDISKCACHE="-Dstorage.diskCache.bufferSize=8192" (8,192 MB, i.e. 8 GB)
Obviously the heap and cache values depend on how much RAM you have on your system. Keep in mind that increasing them so far that the RAM needed by the OS is saturated only brings disadvantages.
Without Tomcat running, do you still get 'out of memory', or does the COUNT() return values fast enough?
If until now everything went well, you could start Tomcat and see how the RAM behaves with the 2 processes (OrientDB and Tomcat) active.
If it is not saturated, try to re-run the COUNT() query. Do you get 'out of memory'?
With Tomcat active, you should try to re-balance the memory so that it is sufficient for both Tomcat and OrientDB (also considering the RAM used by the OS).
EDIT
A correct way would be: if you already know which properties have to be present in your class, create each property immediately and create its index right away. This way, as you add vertices, the indexes are updated automatically on each insert, so when you run a query you can be sure that the filters in the WHERE clause will use the indexes.
If I have several filters, and I build them at runtime, how can I add indexes for my use? Example: in the software I filter on name and surname, but the next time I'll filter on birth date and town of residence. Must I always add and remove the index before running the query?
Thanks, but a filter can be chosen on any column (of the 35,000). I tried to add the indexes at runtime, but the build time is very long.
| common-pile/stackexchange_filtered |
Term Store Management option missing in site collection settings
I don't see "Term Store Management" in site collection settings. I checked this blog post http://blog.petergerritsen.nl/2010/06/09/term-store-management-option-missing-in-site-collection-settings/ which suggests to activate a feature which I tried but I still don't see the option. Is that something that I am missing?
If you started with Blank site Template, it will not appear.
Please verify that the link exists for sites created other than blank site template (Team Site, Document Workspace, publishing portal etc). If not, something else needs to done!
Try activating BOTH of the following features:
Enable-SPFeature -id "73EF14B1-13A9-416b-A9B5-ECECA2B0604C" -Url <Site-URL>
Enable-SPFeature -id "7201D6A4-A5D3-49A1-8C19-19C4BAC6E668" -Url <Site-URL>
Enabling the first feature you mentioned did the trick for me. Thank you!
Activating the features as above is normally enough to do the trick.
We came across the problem with a client after spending hours searching the web: BCS, managed metadata, and the term store were not working even in Central Administration. As a result, we'd post this here in case it is of use, as it is related.
http://blogs.msdn.com/b/sowmyancs/archive/2010/07/16/sharepoint-2010-service-applications-bcs-metadata-access-service-are-not-working.aspx?CommentPosted=true
hopefully this will save you some time too.
| common-pile/stackexchange_filtered |
"Can't connect to local MySQL server through" socket error
I installed MySQL and tried to start it:
/usr/local/mysql/bin# ./mysql
ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/tmp/mysql.sock' (2)
/usr/local/mysql/bin#
Any Idea?
mysql can be started from upstart. Basically, you install the mysql-server package; you can easily search for it in Synaptic and install it. You will be prompted for a root password during installation (i.e. the MySQL root user's password). After that, in a terminal just do
sudo start mysql
or
sudo service mysql start
MySQL is usually started on boot, so you may have it running already.
Just ps aux | grep mysql to see if it is running.
Job failed to start and no mysql process running
Ubuntu 11.10: sudo dpkg-reconfigure mysql-server-5.1
Ubuntu 12.04: sudo dpkg-reconfigure mysql-server-5.5
Follow instruction then reboot.
This problem may happen when you upgrade from one Ubuntu version to another and the config gets messed up somehow (usually it is related to the startup process of the MySQL daemon, mysqld).
Worked for me but deleted my users
for mariadb: sudo dpkg-reconfigure mariadb-server
I see you are using /tmp/ for the mysql.socket so either you are not installing the mysql 5.1 that comes from ubuntu by default or you are installing mysql 5.5.x (which you need to mention in your question if this is the case). Anyway some of the things you can do are:
Verify that mysql is not already running: ps -e | grep -i 'mysqld'. If it appears, kill it: sudo killall -9 mysqld. If it does not die, grab the PID from the ps output and run kill -9 PID.
"Try" to run mysql via /etc/init.d/mysql start if it gives you the exact same error from above then you need to copy the mysql.server file from the mysql you downloaded which can be found in the support-files folder inside the mysql folder you downloaded or in the /usr/local/mysql folder and copy it to /etc/init.d/mysql (Example: cp mysql.server /etc/init.d/mysql and give it a executable permission chmod +x /etc/init.d/mysql then run it again.
If it still gives you a problem then edit the config file in /etc/my.cnf (if you have 5.5 installed) or /etc/mysql/mysql.conf (if you have 5.1 installed) and change ALL the /tmp/mysql.socket lines to /var/run/mysqld/mysqld.sock. Note that the config file can also be in /etc/mysql/my.cnf.
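For reference, the socket path can be pinned in the config file so the client and server agree on one location (an illustrative snippet; use whichever path your server actually creates):

```ini
# /etc/my.cnf (5.5) or /etc/mysql/my.cnf (5.1) -- illustrative snippet
[mysqld]
socket = /var/run/mysqld/mysqld.sock

[client]
socket = /var/run/mysqld/mysqld.sock
```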
Test this but please ADD what mysql you are using, either 5.1 or 5.5. I mention 5.5 here because 5.5 is the one that uses /tmp and 5.1 uses the default /var/run
If by any chance you happen to be using 5.5, then remember to delete the directories of the old 5.1 from /etc/mysql, /var/lib/mysql and /usr/lib/mysql.
Change the server name from "localhost" to "<IP_ADDRESS>" in your connection params; I'm sure it will do the trick.
| common-pile/stackexchange_filtered |
Magento homepage directory change
I am a new Magento user and I am facing some problems. I installed a new theme in my store, then removed the theme by removing the theme files. But after that, I noticed that my home page is empty. I found that the home page now goes under a "home" folder, meaning the home page is now shown at the www.example.com/home link.
Please how to solve this problem?
I believe what you're looking for is the "CMS Home Page". You can find that setting in System -> Configuration -> General -> Web -> Default Pages -> CMS Home Page.
(Sorry, I would have made a comment, but I'm not allowed to do so yet ;) )
cheers
Welcome to the site! This is a great first post.
| common-pile/stackexchange_filtered |
How to display variable and value labels in ggplot bar chart?
I'm trying to get the variable labels and value labels to be displayed on a stacked bar chart.
library(tidyverse)
data <- haven::read_spss("http://staff.bath.ac.uk/pssiw/stats2/SAQ.sav")
data %>%
select(Q01:Q04) %>%
gather %>%
group_by(key, value) %>%
tally %>%
mutate(n = n/sum(n)*100, round = 1) %>%
mutate(n = round(n, 2)) %>%
ggplot(aes(x=key, y=n, fill=factor(value))) +
geom_col() +
geom_text(aes(label=as_factor(n)), position=position_stack(.5)) +
coord_flip() +
theme(aspect.ratio = 1/3) + scale_fill_brewer(palette = "Set2")
Instead of Q01, Q02, Q03, Q04, I would like to use the variable labels.
library(labelled)
var_label(data$Q01)
Statistics makes me cry
var_label(data$Q02)
My friends will think I'm stupid for not being able to cope with SPSS
var_label(data$Q03)
Standard deviations excite me
var_label(data$Q04)
I dream that . . .
along with associated value labels
val_labels(data$Q01)
Strongly agree Agree Neither Disagree Strongly disagree Not answered
1 2 3 4 5 9
I tried using label = as_factor(n) but that didn't work.
We may extract the labels and then do a join
library(forcats)
library(haven)
library(dplyr)
library(tidyr)
library(labelled)
subdat <- data %>%
select(Q01:Q04)
d1 <- subdat %>%
summarise(across(everything(), var_label)) %>%
pivot_longer(everything())
subdat %>%
pivot_longer(everything(), values_to = 'val') %>%
left_join(d1, by = 'name') %>%
mutate(name = value, value = NULL) %>%
count(name, val) %>%
mutate(n = n/sum(n)*100, round = 1) %>%
mutate(n = round(n, 2)) %>%
ungroup %>%
mutate(labels = names(val_labels(val)[val])) %>%
ggplot(aes(x=name, y=n, fill=labels)) +
geom_col() +
geom_text(aes(label=as_factor(n)),
position=position_stack(.5)) +
coord_flip() +
theme(aspect.ratio = 1/3) +
scale_fill_brewer(palette = "Set2")
-output
Nice! Does this approach work with setting value labels as the legend ("Strongly agree, Somewhat . . . ")?
| common-pile/stackexchange_filtered |
React's setState seems to strip array of items
This is really weird, as I've been using setState for months and this is the first time I've ever seen this issue. I have some code that simply sets a state property to another object. The object being set has a property that is an array called GroupModel. The strange thing is that once this object gets set via setState, the GroupModel property gets converted down to an object (i.e. [object Object]) and I lose its members. I have other array properties in this object and they are not affected.
// just before the call to setState I check newUserGridConfig and find that its child property,
// GroupModel, is indeed an array and has the expected child elment.
self.setState({ localGridConfig: newUserGridConfig }, () => {
// once setState completes I check self.state.localGridConfig.GroupModel and
// find that it is now an object (i.e. [object Object]).
// I then check newUserGridConfig.GroupModel and
// it is still an array and has the expected element in it
var response = self.updateUserGridConfigurationStore(self.state.localGridConfig,
saveNetwork);
OK, this was happening because setState is both asynchronous and not guaranteed to run immediately, so I had some other code that managed to slip in between and set the property first.
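The ordering can be sketched without React: below, setState is mocked with a deferred update (a simplified stand-in, not React's real batching), which is enough to show why a synchronous read right after the call sees the old value while the callback sees the new one:

```javascript
// Simplified stand-in for a component with asynchronous setState.
class FakeComponent {
  constructor() { this.state = { groupModel: null }; }
  setState(patch, callback) {
    // The update is deferred; a microtask approximates React's batching.
    Promise.resolve().then(() => {
      this.state = { ...this.state, ...patch };
      if (callback) callback();
    });
  }
}

const c = new FakeComponent();
c.setState({ groupModel: ['a', 'b'] }, () => {
  console.log('in callback:', c.state.groupModel); // [ 'a', 'b' ]
});
console.log('right after setState:', c.state.groupModel); // still null
```

That is why passing the state into updateUserGridConfigurationStore from inside the setState callback, as the snippet in the question already does, is the safe pattern; reads or writes of the shared object outside the callback are what let other code slip in between.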
| common-pile/stackexchange_filtered |
Google Chrome crashes without any log
I wrote a Chrome pop-up extension and it works fine, but when I click elsewhere to close the pop-up, it crashes Chrome. I tried looking at both the locations mentioned in this question to see if some info is available as to why it is crashing, but the directories are empty! Is there a way I can find out why it is crashing? The following are the errors I get.
Google Chrome has stopped working
A problem caused the program to stop working correctly. Windows will
close the program and notify you if a solution is available.
and then I get another window saying
Whoa! Google Chrome has crashed. Relaunch now?
This page explains how to get a crash ID and other crash data, which you can then attach to a bug report. You may need to enable crash reporting in "Preferences" -> "Under the Hood "-> "Automatically send usage statistics and crash reports to Google" to have crash IDs be generated and appear in chrome://crashes.
Thanks for the answer, but on that page the first link requires an MSDN account and the second one gives an installer for the Windows SDK. I started that but did not see windbg.exe anywhere, which is needed according to that page. Have you tried this?
Most of the instructions on the page are not necessary with recent versions of Chrome, where crash IDs are displayed on chrome://crashes. Do you see any crashes listed there?
So what if I want to inspect crash information but not share it with Google?
In newer versions of chrome you need to go into settings/privacy and enable "Automatically send usage statistics and crash reports to Google" and then after the next time it crashes you should see the log appear in chrome://crashes
To remove or disable Chrome extensions,
Open the Chrome browser
Type “chrome://extensions/” in the address bar (URL bar)
Press the Enter key
Now, you’ll see all the extensions in a panel form
You can click on ‘Remove’ to uninstall them
You can toggle an extension off to disable it
Why would you want to uninstall the extension? Op is trying to fix the bug, not getting rid of their own work.
| common-pile/stackexchange_filtered |
Using array in the onCreate returns null
I am getting some JSON data and parsing it into my object, which I am trying to use to get all the fields I need. However, I am getting a null on my ArrayList and I am not sure why. For example:
private List<MovieDetail> mMovieDetails;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_detail);
setupActionBar();
Intent intent = getIntent();
MovieDetail movieDetail = (MovieDetail) intent.getSerializableExtra(POSTER_TRANSFER);
Glide.with(this).load(movieDetail.getPoster()).into((ImageView) findViewById(R.id.main_backdrop));
ProcessMovieSearchImdbIDString processMovieSearchImdbIDString = new ProcessMovieSearchImdbIDString(movieDetail.getImdbID());
processMovieSearchImdbIDString.execute();
CollapsingToolbarLayout collapsingToolbarLayout = (CollapsingToolbarLayout) findViewById(R.id.main_collapsing);
collapsingToolbarLayout.setTitle(movieDetail.getTitle());

// this returns null, which it shouldn't
if (mMovieDetails != null) {
    for (MovieDetail detail : mMovieDetails) {
        Log.d("Details", "movie actors: " + detail.getActors());
    }
}

((TextView) findViewById(R.id.grid_title)).setText(movieDetail.getTitle());
((TextView) findViewById(R.id.grid_writers)).setText(movieDetail.getWriter());
((TextView) findViewById(R.id.grid_actors)).setText(movieDetail.getActors());
((TextView) findViewById(R.id.grid_director)).setText(movieDetail.getDirector());
((TextView) findViewById(R.id.grid_genre)).setText(movieDetail.getGenre());
((TextView) findViewById(R.id.grid_released)).setText(movieDetail.getReleased());
((TextView) findViewById(R.id.grid_plot)).setText(movieDetail.getPlot());
((TextView) findViewById(R.id.grid_runtime)).setText(movieDetail.getRuntime());
}

private void setupActionBar() {
    ActionBar actionBar = getSupportActionBar();
    if (actionBar != null) {
        // Show the Up button in the action bar.
        actionBar.setDisplayHomeAsUpEnabled(true);
    }
}

@Override
public boolean onOptionsItemSelected(MenuItem item) {
    int id = item.getItemId();
    if (id == android.R.id.home) {
        startActivity(new Intent(this, MainActivity.class));
        return true;
    }
    return super.onOptionsItemSelected(item);
}

public class ProcessMovieSearchImdbIDString extends JsonParse {
    public ProcessMovieSearchImdbIDString(String id) {
        super(id);
    }

    public void execute() {
        //super.execute();
        ProcessData processData = new ProcessData();
        processData.execute();
    }

    public class ProcessData extends DownloadSearchMovieImdbidJsonData {
        protected void onPostExecute(String webData) {
            super.onPostExecute(webData);
            // this doesn't return null
            mMovieDetails = getMovies();
            for (MovieDetail detail : mMovieDetails) {
                Log.d("Details", "movie actors a: " + detail.getActors());
            }
        }
    }
}
I am not sure why it's returning null when using the list in the onCreate method. If anyone could help that would be much appreciated.
Please update logcat too in case of crashes, helps to get to the problem quickly
you have not initialized mMovieDetails anywhere. I guess
i think you forgot to initialise mMovieDetails.
@MohammedAtif I have added a null check so it would not crash
I have now initialised the private List<MovieDetail> mMovieDetails = new ArrayList<>(); however I am still not getting any values back @indramurari @vrundpurohit
How can you get values from an array of size 0? Where are you adding the data to this List??
I think the problem is a timing one: the list is only filled in onPostExecute, which runs after the background work has finished. Your onCreate code reads mMovieDetails before onPostExecute has executed, so at that point the list has not been filled yet and is still null.
@MohammedAtif I am adding the data to the list in the onPostExecute method
@M.WaqasPervez so what would be the best approach to fill the list?
you should move the logic of implementing list from onCreate to onPostExecute
@M.WaqasPervez Thank you very much for your help. This makes a lot of sense now and it worked.
The solution to the problem is to move the list handling into onPostExecute, which was mentioned by @M.WaqasPervez.
Here is the code fix
public class ProcessMovieSearchImdbIDString extends JsonParse {
    public ProcessMovieSearchImdbIDString(String id) {
        super(id);
    }

    public void execute() {
        //super.execute();
        ProcessData processData = new ProcessData();
        processData.execute();
    }

    public class ProcessData extends DownloadSearchMovieImdbidJsonData {
        protected void onPostExecute(String webData) {
            super.onPostExecute(webData);
            mMovieDetails = getMovies();
            for (MovieDetail detail : mMovieDetails) {
                ((TextView) findViewById(R.id.grid_title)).setText(detail.getTitle());
                ((TextView) findViewById(R.id.grid_writers)).setText(detail.getWriter());
                ((TextView) findViewById(R.id.grid_actors)).setText(detail.getActors());
                ((TextView) findViewById(R.id.grid_director)).setText(detail.getDirector());
                ((TextView) findViewById(R.id.grid_genre)).setText(detail.getGenre());
                ((TextView) findViewById(R.id.grid_released)).setText(detail.getReleased());
                ((TextView) findViewById(R.id.grid_plot)).setText(detail.getPlot());
                ((TextView) findViewById(R.id.grid_runtime)).setText(detail.getRuntime());
            }
        }
    }
}
Thanks all
Why does Highcharts add padding for a certain bar chart?
I encountered a very strange highcharts behaviour.
I am rendering the same chart in two containers with only one px difference in height:
<div id="container" style="min-width: 310px; height:118px; margin: 0 auto"></div>
<div id="container2" style="min-width: 310px; height:117px; margin: 0 auto"></div>
The first graph renders the y-axis to 100 and adds unnecessary extra space. The second graph aligns the axis to use only as much space as necessary.
Here's a link to a fiddle:
https://jsfiddle.net/647cg3mp/
Any idea where this extra padding comes from all of a sudden?
That behavior is related to the tickInterval and tickPixelInterval properties. You can change it, for example, by increasing tickPixelInterval or by setting tickAmount to 2.
tickInterval: number
The interval of the tick marks in axis units. When undefined, the tick interval is computed to approximately follow
the tickPixelInterval on linear and datetime axes.
yAxis: {
    tickPixelInterval: 73,
    ...
}
Live demo: https://jsfiddle.net/BlackLabel/do46qhgx/
API Reference:
https://api.highcharts.com/highcharts/yAxis.tickPixelInterval
https://api.highcharts.com/highcharts/yAxis.tickInterval
I ran your fiddle. At first I didn't know what you meant by padding, but I deduce you are referring to the range of the y-axis: it's different for the two divs. You can explicitly set the max of the y-axis range in your params,
"yAxis": {
"min": 0,
"max": 100,
which would resolve your problem, in the sense that the two charts would use the same range. As a working solution, you could query the data to determine what the max would be in for any situation, and then round to 50 increments or another suitable increment based on your needs, then use that as the max.
Sorry if you did not understand me right. It's hard to express an issue so that everyone gets what you mean. :-) Setting a max won't help. What I want is a chart that uses all of the available space to display the biggest bar (like in 'container2'). What I don't want is a fixed height where much space is wasted (like in 'container'). By the way, setting a fixed max for y is ignored in the first container.
Node.js: how to request an image only if it changed
I'm designing a node.js app.
One of its tasks is to regularly download a set of images from some public, external, site.
One requirement is to avoid repeating the download of images which are not changed from the previous download.
I plan to use "request" module, since it is far more complete and flexible with respect to other networking modules (please correct me if I'm wrong).
This is the code I'm using now (please ignore some mistakes, like comparing dates with > or < operators, consider it pseudo-code...):
var request = require('request');
var myResource = {
    'url': 'http://www.example.com/image1.jpg',
    'last-modified': 'Mon, 28 Sep 2015 08:44:06 GMT'
};
request(
    myResource.url,
    { method: 'HEAD' },
    function (err, res, body) {
        if (err) {
            return console.error('error requesting header:', err);
        }
        var lastModifiedDate = res.headers['last-modified'];
        console.log('last modified date:', lastModifiedDate);
        if (lastModifiedDate > myResource['last-modified']) { // resource did change
            request(
                myResource.url,
                function (err, response, contents) {
                    if (err) {
                        return console.error('error requesting content:', err);
                    }
                    myResource['last-modified'] = lastModifiedDate;
                    storeContents(contents); // store contents to DB
                }
            );
        }
    }
);
This code should work (in principle).
But I ask: request() is called twice: is this a waste of resources?
Could the content request be someway chained to the first request?
Can you suggest a cleaner / smarter / faster approach?
Maybe i'm missing something, but if you know the last-modified date, you should send that as the If-Modified-Since header with the GET request and skip the HEAD request. The server should return a 304 when appropriate.
How "304 Not Modified" works?
Thanks! I didn't remember about If-Modified-Since header... This is probably the answer... I'm accepting it soon :-)
@MarcoS if not, try ETag, but either way you should leverage the image server and response header and use only one request.
How can I trigger the resource download based on ETag? Does something like If-Etag-different exists?
@MarcoS, check the link in my post.
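To make the accepted suggestion concrete, the header-building and response-handling logic can be sketched separately from the networking (a sketch; the function names are illustrative, not part of any library):

```javascript
// Sketch of the single-request approach: send If-Modified-Since with the GET
// and let the server answer 304 when the image is unchanged.

function conditionalHeaders(resource) {
    // Only send the header once we have seen the resource at least once.
    return resource['last-modified']
        ? { 'If-Modified-Since': resource['last-modified'] }
        : {};
}

function handleResponse(resource, statusCode, headers) {
    if (statusCode === 304) {          // not modified: nothing to store
        return { changed: false };
    }
    if (statusCode === 200) {          // fresh contents: remember the new date
        resource['last-modified'] = headers['last-modified'];
        return { changed: true };
    }
    throw new Error('unexpected status ' + statusCode);
}
```

With the request module this would plug in roughly as `request({ url: myResource.url, headers: conditionalHeaders(myResource) }, function (err, res, contents) { ... })`, calling storeContents(contents) only when handleResponse(...).changed is true, so only one request per image is needed.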
How to transcribe the audio of a video file in Flutter?
I'm building a Flutter app that allows users to upload a video file and generate a text transcript of the audio. I've implemented the code to pick a video file from the gallery and display it using the video_player package. However, when I try to start the transcription using the speech_to_text package, it starts recording the user's audio input instead of transcribing the audio from the video file.
Here's the relevant code I'm using:
import 'dart:io';
import 'package:flutter/material.dart';
import 'package:video_player/video_player.dart';
import 'package:speech_to_text/speech_to_text.dart' as stt;
import 'package:image_picker/image_picker.dart';

class VideoTranscriptionApp extends StatefulWidget {
  @override
  _VideoTranscriptionAppState createState() => _VideoTranscriptionAppState();
}

class _VideoTranscriptionAppState extends State<VideoTranscriptionApp> {
  VideoPlayerController? _videoPlayerController;
  stt.SpeechToText _speechToText = stt.SpeechToText();
  List<Map<String, dynamic>> _transcriptionData = [];

  void _initializeSpeechToText() {
    _speechToText.initialize();
  }

  Future<void> _pickVideoAndTranscribe() async {
    final pickedFile = await ImagePicker().pickVideo(
      source: ImageSource.gallery,
    );

    if (pickedFile != null) {
      final videoFile = File(pickedFile.path);
      _videoPlayerController = VideoPlayerController.file(videoFile)
        ..initialize().then((_) {
          setState(() {});
          _startTranscription();
        });
    }
  }

  void _startTranscription() async {
    if (await _speechToText.isAvailable) {
      _speechToText.listen(
        onResult: (result) {
          final word = result.recognizedWords;
          final start = result.alternates.first.recognizedWords;
          final end = result.alternates.last.recognizedWords;
          _transcriptionData.add({
            'word': word,
            'start': start,
            'end': end,
          });
          setState(() {});
        },
      );
    }
  }

  void _stopTranscription() {
    _speechToText.stop();
  }

  // Rest of the code...
}
The issue I'm facing is that the speech_to_text package is transcribing the user's audio input instead of the audio from the video file. How can I modify the code to transcribe the audio from the video file?
What part of this code did you think tried to connect the video to the transcriber?
@ScottHunter this did not. I've been browsing and I haven't found any resources to do this. I am just seeking some guidance.
I would suggest abandoning your current Flutter/TTS library approach and instead attack the problem in 2 stages.
Extract the audio from the video file using ffmpeg/ffmpeg-kit
Leverage something like OpenAI's Whisper to extract the text from the audio file
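The two-stage approach suggested above could look roughly like this on the command line (a sketch: the file names are illustrative, and it assumes ffmpeg and OpenAI's Whisper CLI are installed; inside a Flutter app the first step would go through a package such as ffmpeg_kit_flutter, and the second through a transcription API):

```sh
# 1. Extract the audio track as 16 kHz mono WAV, a format Whisper handles well.
ffmpeg -i input_video.mp4 -vn -acodec pcm_s16le -ar 16000 -ac 1 audio.wav

# 2. Transcribe the extracted audio to a text file.
whisper audio.wav --model base --output_format txt
```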
Android Developer Console won't load because of an SSL error
I have a weird problem and I can't find any solution. I have my first app ready to be published, so I paid my developer fee to Google. Supposedly I can now access the Android Developer Console web and... nope. I can't access this URL https://market.android.com/publish/receipt or this one https://play.google.com/apps/publish/v2/ or this one https://play.google.com/apps/publish/. All of them return the same error (using Google Chrome v29.0.1547.57 on Ubuntu 64-bit; neither Firefox (latest version) on Linux or Windows, nor Chrome on Windows, works).
I have looked all over the web and configured the "use SSL 2.0" option in Chrome, and now I have no clue what's happening.
BTW, I'm not behind any proxy and I'm logged with the same account I used to pay the fee.
Can anyone bring some light here, please? Thanks
I had the same problem - it turns out Google didn't like the DNS settings I had on my mac. Make sure you don't use custom DNS or proxy servers and try again.
This was also my problem. I'm using Unblock.us, configured in my router. Thanks much!
Can you apply the same lexer rules to all programming languages?
I'm trying to understand the theory behind a lexer with the purpose of building one (just for my own fun and experience and to compensate for not taking proper CS courses :)).
What I have yet to understand is whether lexer theory is the same no matter what language you are analyzing. Is it just about splitting on whitespace and then trying to figure out what each piece of text represents, or is it more than that?
In other words, does the generation of tokens depend on the language being analyzed or is it just about white space? Can you apply the same lexer rules to all programming languages? For example if I have a lexer that tokenizes Java code can I use it for Python or not, since whitespace in Python has other meaning?
For example, I found a nice Python project (Pygments) which seems to provide a working core that allows you to later plugin rules for each of your favorite language (what are the keywords, the comments etc).
@MichaelT: If I read his question correctly, I think he's asking if you can write a lexer that doesn't require you to specify the rules of the programming language. I'd say the answer is probably no.
@MichaelT: I'm thinking a tool that reads an input file and outputs the tokens. In my understanding Lex is more than that, is it not?
I think you might have an alright question here though it's unclear right now. Please edit it to dictate specifically what the behaviour/requirements of this application you refer to as a "universal lexer" is, so we can clearly know what you would call "universal" lexing.
I occasionally write simplistic regex-based lexers (obviously, with mediocre throughput). The names and definitions of tokens can be swapped out without any problem to fit a different grammar. But often, writing a generic lexer isn't possible, e.g. when your whitespace handling is unusual or when you want to use advanced parsing techniques like layout parsing or ruby slippers parsing. But sure, you can write a generic lexer once you decide on a certain feature set.
@Jimmy Hoffa: reworded my question. Is that better?
Keep in mind whitespace isn't the same across all languages. Consider quoted text: some languages use ', others ", and some will use both. Some have alternative syntax beyond that.
No you can't. Here is a nonsense snippet of Lisp:
(*foo-bar* 'baz 14)
What are the tokens in this snippet? The answer is something like
LPAREN
SYMBOL "*foo-bar*"
QUOTE
SYMBOL "baz"
INTEGER "14"
RPAREN
Lisps have very liberal rules what can be inside an identifer: hyphens -, asterisks * and many other characters are allowed. Expressions can be quoted which prevents their evaluation, in many dialects this is done by prepending a ' to that expression.
A lexer needs to be aware of the language that it's parsing. If we had used a C lexer on the above code, we would get an error because ' introduces a character literal, which also requires a closing ', and none is present here. Our C tokenizer might produce:
LPAREN
OP_STAR
IDENTIFIER "foo"
OP_MINUS
IDENTIFIER "bar"
OP_STAR
ERROR character literal isn't terminated: expected »'« after »'b«
For each language we want to lex, we have to write special rules. Every language is unique, and many languages do have subtle differences. Here, the identifiers are so very different that the token streams have lost any resemblance of each other. However, some languages inherit syntax from each other. For a very simple snippet like foo.bar(baz), a lexer written for C++ and one for Python might produce comparable results.
But why do we actually use Lexers?
Formally, lexers are not needed. We can describe a language without them just fine. Lexers are a performance optimization. Lexers are basically very restricted preprocessors for a parser that can match the input very efficiently (e.g. implemented as a state machine). The lexer then emits tokens, which are larger building blocks (a token is a pairing of a type or ID with a string). It is then the job of the parser to combine these tokens into a hierarchical structure (generally, an abstract syntax tree). This frees the parser from doing character-level operations, which makes it more efficient and easier to maintain (I have written parsers without this separation; it works, but it is annoying).
And here we have another problem: A lexer is written for a specific parser. The two work in tandem and cannot be recombined willy-nilly with other lexers or parsers. In the above token streams I have called the "(" operator LPAREN. But a different grammar might require a OP_LP symbol instead.
So for a variety of reasons, it's impossible to write one lexer to rule them all.
Each language needs its own lexer. But we can take shortcuts and use programs that generate specialized lexers for us from some specification. Often, tools like lex are used for this. If performance is not an issue, I use the regular expressions library from the host language to cobble together a simple lexer, e.g. in Perl:
# a primitive Lisp lexer
my %tokens = (
    LPAREN  => qr/[(]/,
    RPAREN  => qr/[)]/,
    SYMBOL  => qr/[^\s()']+/,
    QUOTE   => qr/[']/,
    INTEGER => qr/[1-9][0-9]*/,
);
my $space = qr/\s+/;

my $input  = "(*foo-bar* 'baz 14)";
my $pos    = 0;
my $length = length $input;

POSITION:
while ($pos < $length) {
    pos($input) = $pos;
    $pos += length $1 and next if $input =~ /\G($space)/gc;
    for my $name (keys %tokens) {
        if ($input =~ /\G($tokens{$name})/gc) {
            print "$name -- $1\n";
            $pos += length $1;
            next POSITION;
        }
    }
    die "No token matched, just before:\n", (substr $input, $pos, 20), "\n";
}
Output:
LPAREN -- (
SYMBOL -- *foo-bar*
QUOTE -- '
SYMBOL -- baz
SYMBOL -- 14
RPAREN -- )
But we can easily change the token definitions:
# an incomplete C lexer
my %tokens = (
    LPAREN     => qr/[(]/,
    RPAREN     => qr/[)]/,
    IDENTIFIER => qr/[a-zA-Z_][a-zA-Z0-9_]*/,
    CHARACTER  => qr/['](?:[^'\\]|\\.)[']/,
    INTEGER    => qr/[1-9][0-9]*/,
    OP_STAR    => qr/[*]/,
    OP_MINUS   => qr/[-]/,
);
my $space = qr/\s+/;
which changes the output to:
LPAREN -- (
OP_STAR -- *
IDENTIFIER -- foo
OP_MINUS -- -
IDENTIFIER -- bar
OP_STAR -- *
No token matched, just before:
'baz 14)
We didn't write a new lexer, we just updated the token definitions.
What to do now:
As a preparation, understand and implement the Shunting-Yard Algorithm. This will teach you about a simple stack machine.
Read up on grammars and languages in computer science. Learn what a rule is.
Read up on Extended Backus-Naur Form notation, even if you don't immediately understand it. This is a way to write down grammars, and grammars are a way to describe languages (e.g. programming languages).
Understand the implications of the Chomsky Hierarchy. Here are the three important levels:
Regular Languages can be matched very efficiently. Lexers usually operate on this level.
Context-Free Languages are the basis for the description of many programming languages. Various parsing strategies can match various subsets. You will want to read up on LL, LR, and PEG at least.
Context-Sensitive Languages are even more general, but they are vastly more difficult to parse. Note that many regex libraries like PCRE have context-sensitive features.
Note that real programming languages are not context-free: semantic whitespace as in Python, here-docs as in Bash, and optional end tags in HTML are there to mess up the most elegant descriptions (but tricks exist to fake a context-free grammar even then).
Write a simple recursive-descent parser yourself that can evaluate simple arithmetics like 1 + 2 * 3 and manages operator precedence correctly. Using a lexer is not necessary for this. Add parens to your language, then some functions like abs or sin. Post your solution on codereview to get tips on how to do it better.
Now that you've learned how to parse stuff yourself, you may enjoy helping tools like parser generators. As an exercise, pick a simple language like JSON and write a parser for it with your tool of choice. Return to codereview for criticism.
If you have any problems on the way, ask here on programmers or over on stackoverflow, depending on whether the question is conceptual or implementation-related.
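To make the arithmetic-parser exercise from the list above concrete, here is a minimal sketch in Python (chosen for brevity; the Perl style of the earlier examples would work just as well) of a recursive-descent evaluator that handles +, * and parentheses with correct precedence:

```python
import re

# Minimal recursive-descent evaluator: each grammar rule becomes a function.
# Tokenization here is just a regex split -- no full lexer is needed yet.
def evaluate(text):
    tokens = re.findall(r"\d+|[+*()]", text)
    pos = 0

    def peek():
        return tokens[pos] if pos < len(tokens) else None

    def take():
        nonlocal pos
        tok = tokens[pos]
        pos += 1
        return tok

    def atom():                    # atom := INTEGER | "(" expr ")"
        if peek() == "(":
            take()
            value = expr()
            take()                 # consume the ")"
            return value
        return int(take())

    def term():                    # term := atom ("*" atom)*  -- binds tighter
        value = atom()
        while peek() == "*":
            take()
            value *= atom()
        return value

    def expr():                    # expr := term ("+" term)*
        value = term()
        while peek() == "+":
            take()
            value += term()
        return value

    return expr()
```

Because term() sits below expr() in the call chain, multiplication automatically binds tighter than addition, so `evaluate("1 + 2 * 3")` yields 7.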
Sure you can: split on whitespace and call it a day!
Now, that is technically a universal lexer, in that you can work through any document and create a processed, tokenized result. Which is really all a lexer has to do...it just isn't a very useful implementation!
A lexer is all about tokenization, and the rules that apply. And you can certainly write a lexer that simply allows you to plug in new rules to modify its functioning!
Ultimately what you are trying to do is decouple the rules from their implementation, which is a perfectly reasonable sort of thing to do. You might find it works better, if you are working from scratch, to have that idea in mind but first just get it to work with some simple rules. Then, once it works at a basic level, wrap your head around how to pull out the rules themselves and allow them to be defined elsewhere, yet still be loaded into the program and executed.
Just remember: general-use software is always, always harder to make than specialized software. The more general you want it (do you want a lexer that supports natural language processing?), the fewer assumptions you can make and the more if() and switch statements you are going to have all over the place, and every optimization might break some needed edge-case feature. And it gets more arduous and time-consuming to test with every generality.
Between the lexer and the parser, the lexer is the easier one to do precisely because it is pretty much context free. It doesn't care if you are missing an end parens on an if statement, for instance, because it doesn't matter to it.
But yes, lexer theory itself is ultimately the same, it's just the specific language rules that are different. For instance, in English text you can assume "iscrewedup" is supposed to be one word, and when you can't find that word in a dictionary you can assume something is not right - because written English words split on spaces. But not all language is so clean, and indeed programming languages usually aren't, because if () and if() are equally valid in most languages.
And in Python indentation means something, while most languages don't care, but at a lexer level it just means a series of spaces or tabs gets tokenized as something like "IncreaseIndent" or "DecreaseIndent", and the parser figures out what that means later on. The theory, however, ultimately remains precisely the same.
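The "IncreaseIndent"/"DecreaseIndent" tokenization described above can be sketched like this (a sketch in Python; the token names are illustrative, and it assumes space-only indentation):

```python
# Turn semantic whitespace into explicit tokens: leading spaces become
# IncreaseIndent/DecreaseIndent markers that a parser can treat like braces.
def tokenize_indentation(lines):
    tokens = []
    stack = [0]                       # indentation levels currently open
    for line in lines:
        stripped = line.lstrip(" ")
        if not stripped:              # blank lines carry no indent information
            continue
        indent = len(line) - len(stripped)
        if indent > stack[-1]:
            stack.append(indent)
            tokens.append("IncreaseIndent")
        while indent < stack[-1]:
            stack.pop()
            tokens.append("DecreaseIndent")
        tokens.append(("Line", stripped))
    while len(stack) > 1:             # close any blocks still open at EOF
        stack.pop()
        tokens.append("DecreaseIndent")
    return tokens
```

After this pass, the parser downstream never has to count spaces; it just matches IncreaseIndent/DecreaseIndent pairs the way a C parser matches { and }.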
With the question itself reworded, I suppose my explanation makes the answer "no, every language has it's own specific rules" ...I am undecided on what to do about that as far as this Q/A combination goes...
In early FORTRAN syntax, blanks were optional. You could write DO100I=1,2,3 or DO 100 I = 1, 2, 3. And you could even put them inside identifiers - A B C and ABC were the same identifier.
@RossPatterson Yes, and fortunately that did not survive :) But sometimes looking back is useful to see flaws in conceptions. The use of whitespace is very language specific - some languages can contain whitespace within a single string definition (wrapping). And I won't start describing white space use in Python...
@owlstead I'm pretty sure modern Fortran still accepts missing spacing. Dunno about extra spacing, but Fortran standards have tried very hard not to invalidate even ancient programs, all the way back to FORTRAN II.
Dynamic controller with "controller as" syntax
If I want to assign dynamic controller, I can do the following:
<div ng-controller="MainController">
    <div ng-controller="dynamicController"></div>
</div>

function MainController($scope) {
    $scope.dynamicController = MyCtrl;
    $scope.instanceName = "ctrl"; // we'll use this later
}

function MyCtrl() {}
What can I do to make this work with the new "controller as" syntax?
This works fine: <div ng-controller="dynamicController as ctrl"></div>
But how to make ctrl dynamic too? Let's say I want it to have a name that $scope.instanceName holds.
Fiddle: http://jsfiddle.net/ftza67or/2/
There is an idea to make a custom directive that will create and compile html string, but it's an ugly way, let's pretend it does not exist.
Quick search on Google gave me http://toddmotto.com/digging-into-angulars-controller-as-syntax/, and https://thinkster.io/egghead/experimental-controller-as-syntax/
@elclanrs i know how to use "controller as" syntax. My question is how to use it with dynamic controller reference.
Ah, I misread. Maybe you can check the $controller service. Using the name of the function seems like trouble, I'd try to use the name of the registered controller.
So I've looked into angular sources and found this.
https://github.com/angular/angular.js/blob/bf6a79c3484f474c300b5442ae73483030ef5782/src/ng/controller.js
if (isString(expression)) {
    match = expression.match(CNTRL_REG),
    constructor = match[1],
    identifier = identifier || match[3];
    expression = controllers.hasOwnProperty(constructor)
        ? controllers[constructor]
        : getter(locals.$scope, constructor, true) ||
          (globals ? getter($window, constructor, true) : undefined);
    // .... and so on
Basically, the ng-controller directive currently accepts strings or expressions. If it is an expression, it has to be a controller function reference. If it is a string, it will take the identifier name exactly as it was passed, and I don't see any way to make it evaluate a variable with a dynamic expression name.
This should work pretty much the same, but just remember that when you use controller as you can bind properties to this inside the controller to have them accessed by the scope/view.

function MainController($scope) {
    $scope.dynamicController = MyCtrl;
}

function MyCtrl($scope) {
    this.foo = "baz";
}

<script src="https://ajax.googleapis.com/ajax/libs/angularjs/1.2.23/angular.min.js"></script>
<div ng-app>
    <div ng-controller="MainController">
        <div ng-controller="dynamicController as ctrl">
            {{ctrl.foo}}
        </div>
    </div>
</div>
I just noticed that jsfiddle that i was using for testing was forked from one with old angular version. This works, yes.
BUT ) How to make controller instance name dynamic too? (the "as ctrl" part)
@Anri, why do you need the alias to be dynamic? It makes little sense. What are you trying to achieve?
@NewDev I have many "cards" in one page, each card has its own controller, and the cards are created in an ng-repeat. So in the card templates I want to reference the correct controller. I guess I could use the same name for all, they are not nested YET, but still curious how to achieve that.
@Anri, you should try to use the same alias and not spaghetti-code your View, if at all possible, or minimize cross-dependencies by assigning meaningful role-based aliases to your controllers.
You can use $scope[dynamicNameVar] = MyCtrl;
@Anri The alias with controller as is like a variable -- you can't have dynamic variable names; that wouldn't make sense
@ExplosionPills well, yes, when you put it like that. But something like <div ng-controller="dynamicController as {{ctrlAlias}}"></div> looks pretty logical in angular template, except there is compilation and evaluation order problem
@Anri you could think about that as something like var variableName = "variable value" but then variableName would be interpolated somehow -- you can't do that!
@ExplosionPills well if I wanted to add dynamic aliases feature to angular, after parsing the expression I would just look up $scope["ctrlAlias"] and if it exists I would create $scope[$scope.ctrlAlias] = $controller.get.... If not, then $scope["ctrlAlias"] = $controller.get... as it does now. What is so hard or illogical in this?
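For what it's worth, the look-up described in that last comment can be sketched in plain JavaScript, outside Angular (all names here are illustrative; Angular itself does not support dynamic aliases out of the box):

```javascript
// Sketch of the dynamic-alias idea: if the alias argument is itself a scope
// property, resolve it first; otherwise use it literally.
function attachController(scope, ctrlFn, aliasOrExpr) {
    var alias = (aliasOrExpr in scope) ? scope[aliasOrExpr] : aliasOrExpr;
    scope[alias] = new ctrlFn();    // i.e. $scope[$scope.instanceName] = instance
    return alias;
}
```

With `scope = { instanceName: 'ctrl' }`, calling `attachController(scope, MyCtrl, 'instanceName')` would publish the controller instance as `scope.ctrl`, which is the behavior the comment proposes adding to Angular's controller service.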
Google Drive mapped to letter links instead to user directory
I have two Windows 10 Pro PCs. On both, there are multiple google drive accounts, but one in particular is shared by both.
How can I make it so that on both PCs I can access the files via a dedicated drive letter without linking to Users' directory?
On one PC, I can access the files via a drive letter G:\My Drive\some folder...
Here "Můj disk" means "My Drive"
G:\>dir /A
Volume in drive G is<EMAIL_ADDRESS>- Goog...
Volume Serial Number is 1983-1116
Directory of G:\
09.03.2023 23:44 <DIR> .file-revisions-by-id
09.03.2023 23:44 <DIR> Můj disk
09.03.2023 23:44 <DIR> .shortcut-targets-by-id
09.03.2023 23:44 <DIR> $RECYCLE.BIN
0 File(s) 0 bytes
4 Dir(s) 6 621 237 248 bytes free
G:\>ls -la
ls: Muj disk: No such file or directory
total 0
drwxrwxrwx 1 user group 0 Mar 9 23:44 $RECYCLE.BIN
drwxrwxrwx 1 user group 0 Mar 9 23:44 .file-revisions-by-id
drwxrwxrwx 1 user group 0 Mar 9 23:44 .shortcut-targets-by-id
note: The output of ls ls: Muj disk: No such file or directory is somewhat sus.
However, on the other PC, attempting to open G:\My Drive now instead opens C:\Users\Qwerty\Google Drive.
G:\>dir /A
Volume in drive G is<EMAIL_ADDRESS>- Goog...
Volume Serial Number is 1983-1116
Directory of G:\
10.03.2023 00:10 <DIR> .shortcut-targets-by-id
10.03.2023 00:10 <DIR> .file-revisions-by-id
10.03.2023 00:10 <DIR> $RECYCLE.BIN
10.03.2023 00:10 747 My Drive.lnk
1 File(s) 747 bytes
3 Dir(s) 6 621 200 384 bytes free
G:\>la
total 5
drwxr-xr-x 0 Qwerty 197609 0 Mar 10 00:10 '$RECYCLE.BIN'
drwxr-xr-x 0 Qwerty 197609 0 Mar 10 00:10 .
drwxr-xr-x 1 Qwerty 197609 0 Sep 3 2022 ..
drwxr-xr-x 0 Qwerty 197609 0 Mar 10 00:10 .file-revisions-by-id
drwxr-xr-x 0 Qwerty 197609 0 Mar 10 00:10 .shortcut-targets-by-id
-rwxr-xr-x 0 Qwerty 197609 747 Mar 10 00:10 'My Drive.lnk'
"My Drive" is not a directory but a *.lnk file instead.
How can I make it so that it works as a drive as in previous example?
Instead of [tag:google-drive] use [tag:google-drive-filestream]. According to the tag excerpt, the first is for questions about desktop client that is deprecated.
How can I make it so that it works as a drive
On the machine with the issue while signed on, go to the system tray, and click on the Google Drive icon to bring up the window with the gear icon first.
Next, click on the gear icon
Select Google Drive (Folder from Drive) option
Select Stream files option
Supporting Resource
Use Google Drive for desktop
You do not want to Mirror files basically, you want to Stream files to get the desired result to make those accessible via a mounted drive letter from Windows.
When I noticed the issue, I went to the settings on the correct machine and it was indeed turned to "Stream files" so I changed it to "Mirror files" which I use on the other one, but even after the change, the files are still accessible via a mounted drive without redirecting to user directory.
Though, I just noticed that while browsing, all of the files and folders still display the cloud overlay, suggesting that they are only available online. Right-clicking confirms that. What is interesting too is that on the other machine, the context menu misses the option for offline access.
Ah, alright, I see what the issue was. While the settings (the cogwheel on the top right in your 2nd screenshot) apply to all accounts, the "Google Drive" option on the left is only for the selected account, which makes sense, but it was unexpected for me. Switching the account revealed that it was still using streaming. Trying to change that now prompts me to choose a folder outside of the drive.
This brings a question: What if I used "available offline" for the whole "G:\My Drive", while still using streaming?. How would that be different from mirroring?
All subgroups of a group with order the square of a prime
So I have a group of order $p^2$ (where $p$ is a prime number) and I'm wondering how many subgroups it can have. By Lagrange's theorem I know that if a subgroup exists its order has to divide the order of the group, i.e. $p^2$; in other words it has to be of order $1$, $p$ or $p^2$. Of order $1$ we have only the trivial group, and of order $p^2$ the group itself, while the existence of subgroups of order $p$ is established by Cauchy's theorem. But how many of them are there?
I tried to reduce the problem to a combinatorial one however I'm not too familiar with this branch. I reasoned as it follows:
i) I have to choose $p$ element from a set which has $p^2$
ii) the identity must be in the subgroup, so we only have to choose $p-1$ elements from a set which has $p^2-1$
iii) for every element the inverse must be in the subgroup so we have only to choose $\frac {p-1}2$ elements (if $p\neq 2$) from a set which has $p^2-1$
iv) for every two elements their composition must be in the subgroup however I don't know how to use this fact and so I don't know how to end the problem. Maybe using the criterion for subgroups can shorten the computation but I'm not sure how to use it.
Tell me if my reasoning is correct and how I should end this exercise
Possible duplicate of Number of subgroups of groups with prime power order
@ArnaudD. sorry but I think my question is different, I wrote my approach and asked if it's correct or not not only to find the number of subgroups
@RenatoFaraone I wrote a complete hint for this question yet I deleted it as I read you only want to know whether your reasoning is correct. I think it is...but I doubt whether it'll take you very far away. Better, try to think in the two possibilities for group: both are abelian, but one is the cyclic group $;C_{p^2};$ (and here we have no problem as there's one single subgroup of every order dividing $;p^2;$), and the other one is the elementary abelian $;C_p\times C_p;$ , which can be made into a linear space and then things get much easier...
The reasoning is correct, but (iv) is where all the real power is, and what will ultimately lead you forward. If you consider one non-identity element of the subgroup, and keep composing it with itself, it must generate more elements of the subgroup. So ultimately it generates a subgroup (of unknown order k), of the subgroup (of order p). By Lagrange's Theorem again, k=1 or k=p, and we deliberately ruled out the former. Thus this one element must generate the whole subgroup of order p.
So if you pick just one element of order p from the original group, then you get a subgroup. There are p-1 such elements for each subgroup of order p. So if there are k elements of order p in the group, then there are k/(p-1) subgroups of order p, or k/(p-1)+2 subgroups total. Now you just need to count elements of order p in the group.
and how do I do this?
That will depend on showing that each group of order p^2 is either Z_(p^2) or (Z_p)^2. In the former case, there are p-1 elements of order p, and in the latter case, p^2-1, as you can check.
To show these are the only cases, realize every element must have order p or p^2, because they generate cyclic subgroups; apply Lagrange's Theorem. Consider the case where you do have an element of order p^2, and the case where you have none.
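To make the count explicit, here is a sketch of the tally the answer above describes (assuming the classification into the two abelian groups):

```latex
% Each subgroup of order p contains p-1 elements of order p, and two
% distinct such subgroups meet only in the identity, so
\[
\#\{\text{subgroups of order } p\}
  = \frac{\#\{g \in G : \operatorname{ord}(g) = p\}}{p-1}
  = \begin{cases}
      \dfrac{p-1}{p-1} = 1,     & G \cong \mathbb{Z}_{p^2}, \\[1ex]
      \dfrac{p^2-1}{p-1} = p+1, & G \cong \mathbb{Z}_p \times \mathbb{Z}_p,
    \end{cases}
\]
% giving 1+2 = 3 subgroups in total in the cyclic case and
% (p+1)+2 = p+3 in the elementary abelian case.
```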
| common-pile/stackexchange_filtered |
Implementing a Suffix Trie using OOP/C++
I am trying to implement a suffix trie in C++ for a programming assignment. Now I think I have the right idea, but I keep getting a segmentation fault and I haven't been able to find what's causing it.
For this assignment, we are encouraged to use VIM/some other basic text editor, and compile programs from the console. Nevertheless, I've downloaded CLion to try and debug the code so I can find the error.
Now when running in CLion I get the message
terminate called after throwing an instance of 'std::bad_alloc'
what(): std::bad_alloc
Trying to run the debugger gives the message
Error during pretty printers setup:
Undefined info command: "pretty-printer". Try "help info".
Some features and performance optimizations will not be available.
I'm new to CLion and I'm not sure what to do about this (The only JetBrains IDE I use is Pycharm). Can you help me resolve this?
Now the program itself consists of three classes, Trie, Edge and Node, whose implementations can be seen below. The main idea behind the implementation of the Trie is in the constructor of Trie.cpp.
The code is detailed in full below. I appreciate any help.
Main.cpp
#include <iostream>
using namespace std;
#include "Trie.hpp"
int main(){
string s = "Stef";
Trie trie(s);
return 0;
}
Trie.hpp
#ifndef TRIE_HPP
#define TRIE_HPP
#include <string>
#include "Node.hpp"
#include "Edge.hpp"
using namespace std;
class Trie{
private:
string T;
vector<Node> nodes;
void addWord(Node*, string);
public:
Trie(string);
};
#endif
Trie.cpp
#include <iostream>
#include <cstring>
#include "Trie.hpp"
using namespace std;
Trie::Trie(string T){
T += "#"; //terminating character
this->T = T;
vector<string> suffix; //array of suffixes
for(unsigned int i = 0; i < T.length(); i++)
suffix.push_back(T.substr(i, T.length()-i));
//Create the Root, and start from it
nodes.push_back(Node("")); //root has blank label
Node* currentNode = &nodes[0];
//While there are words in the array of suffixes
while(!suffix.empty()){
//If the character under consideration already has an edge, then this will be its index. Otherwise, it's -1.
int edgeIndex = currentNode->childLoc(suffix[0].at(0));
//If there is no such edge, add the rest of the word
if(edgeIndex == -1){
addWord(currentNode, suffix[0]); //add rest of word
suffix.erase(suffix.begin()); //erase the suffix from the suffix array
break; //break from the while loop
}
//if there is
else{
currentNode = (currentNode->getEdge(edgeIndex))->getTo(); //current Node is the next Node
suffix[0] = suffix[0].substr(1, suffix[0].length()); //remove first character
}
}
}
//This function adds the rest of a word
void Trie::addWord(Node* parent, string word){
for(unsigned int i = 0; i < word.length(); i++){ //For each remaining letter
nodes.push_back(Node(parent->getLabel()+word.at(i))); //Add a node with label of parent + label of edge
Edge e(word.at(i), parent, &nodes.back()); //Create an edge joining the parent to the node we just added
parent->addEdge(e); //Join the two with this edge
}
}
Node.hpp
#ifndef NODE_HPP
#define NODE_HPP
#include <string>
#include <vector>
#include "Edge.hpp"
using namespace std;
class Node{
private:
string label;
vector<Edge> outgoing_edges;
public:
Node();
Node(string);
string getLabel();
int childLoc(char);
void addEdge(Edge);
Edge* getEdge(int);
};
#endif
Node.cpp
#include "Node.hpp"
using namespace std;
Node::Node(){
}
Node::Node(string label){
this->label = label;
}
string Node::getLabel(){
return label;
}
//This function returns the index of the edge matching the given label, returning -1 if there is no such edge.
int Node::childLoc(char label){
int loc = -1;
for(unsigned int i = 0; i < outgoing_edges.size(); i++)
if(outgoing_edges[i].getLabel() == label)
loc = i;
return loc;
}
void Node::addEdge(Edge e){
outgoing_edges.push_back(e);
}
Edge* Node::getEdge(int n){
return &outgoing_edges[n];
}
Edge.hpp
#ifndef EDGE_HPP
#define EDGE_HPP
#include <string>
using namespace std;
class Node; //Forward definition
class Edge{
private:
char label;
Node* from;
Node* to;
public:
Edge(char, Node*, Node*);
char getLabel();
Node* getTo();
Node* getFrom();
};
#endif
Edge.cpp
#include "Edge.hpp"
using namespace std;
Edge::Edge(char label, Node* from, Node* to){
this->label = label;
this->from = from;
this->to = to;
}
char Edge::getLabel(){
return label;
}
Node* Edge::getFrom(){
return from;
}
Node* Edge::getTo(){
return to;
}
&nodes[0];, &nodes.back() - you're storing pointers into a vector for later use, and these become invalid when the vector's underlying storage is relocated as you add elements to it.
Read about pointers in general, and dynamic allocation in particular, in your favourite C++ book.
If you don't yet have a favourite C++ book, pick one from this list.
Thanks for the reply. I'm not sure I understand - what do you mean its underlying storage is relocated? And what would I do to set currentNode to point to the first element of nodes if not &nodes[0]?
@LukeCollins A vector stores its elements in a dynamically allocated array. When you add elements to it, this array may be moved and expanded. (This is covered in any decent introductory book.) The answer to the second question is that you shouldn't; you should come up with a different method for identifying nodes. (I would use the index of the node instead of its address.)
| common-pile/stackexchange_filtered |
Weird NSURLSessionDownloadTask behavior over cellular (not wifi)
I've enabled Background Modes with remote-notification tasks to download a small file (100kb) in background when the app receives a push notification.
I've configured the download Session using
NSURLSessionConfiguration *backgroundConfiguration = [NSURLSessionConfiguration backgroundSessionConfiguration:sessionIdentifier];
[backgroundConfiguration setAllowsCellularAccess:YES];
self.backgroundSession = [NSURLSession sessionWithConfiguration:backgroundConfiguration
delegate:self
delegateQueue:[NSOperationQueue mainQueue]];
and activate it using
NSMutableURLRequest *request = [NSMutableURLRequest requestWithURL:[hostComponents URL]];
[request setAllowsCellularAccess:YES];
NSMutableData *bodyMutableData = [NSMutableData data];
[bodyMutableData appendData:[params dataUsingEncoding:NSUTF8StringEncoding]];
[request setHTTPMethod:@"POST"];
[request setHTTPBody:[bodyMutableData copy]];
_downloadTask = [self.backgroundSession downloadTaskWithRequest:request];
[self.downloadTask resume];
Now everything works correctly only if I'm connected over Wi-Fi, or over cellular with the iPhone connected by cable to Xcode. If I disconnect the iPhone and receive a push notification over cellular, the code stops at the [self.downloadTask resume]; line without calling the URL.
The class that handles these operations is a NSURLSessionDataDelegate, NSURLSessionDownloadDelegate, NSURLSessionTaskDelegate and so implements:
- (void)URLSession:(NSURLSession *)session didReceiveChallenge:(NSURLAuthenticationChallenge *)challenge completionHandler:(void (^)(NSURLSessionAuthChallengeDisposition, NSURLCredential *))completionHandler
- (void)URLSession:(NSURLSession *)session downloadTask:(NSURLSessionDownloadTask *)downloadTask didWriteData:(int64_t)bytesWritten totalBytesWritten:(int64_t)totalBytesWritten totalBytesExpectedToWrite:(int64_t)totalBytesExpectedToWrite
- (void)URLSession:(NSURLSession *)session downloadTask:(NSURLSessionDownloadTask *)downloadTask didResumeAtOffset:(int64_t)fileOffset expectedTotalBytes:(int64_t)expectedTotalBytes
- (void)URLSession:(NSURLSession *)session downloadTask:(NSURLSessionDownloadTask *)downloadTask didFinishDownloadingToURL:(NSURL *)location
I've tried to insert a debug line with a UILocalNotification (presented 'now') after the [self.downloadTask resume], but it is called after 5 minutes and says that the self.downloadTask.state is 'suspended'.
What is causing this weird behavior?
I am experiencing this exact same behavior on ATT network with IOS 7.0. Since doesn't happen when connected to Xcode I save logs on the phone to see when I restart the app. Found that the NSURLSessionDownloadTasks remain with State=NSURLSessionTaskStateRunning but act like they are suspended. If connect to wifi while the app is still in the background these stalled downloads complete successfully. If i bring the app into the foreground they remain stalled.
From Sani Elfshishawy's answer below ("when plugged into power and on Wi-Fi"), it seems to me that plugging into Xcode changes the plugged-into-power condition.
I hit the same case too. I am combining iBeacon with a background NSURLSession download task to perform some checking when the user is near the beacons. And I found that the download task is not available when the user is using the cellular network. Do you guys have some suggestions?
The documentation for NSURLSessionConfiguration Class Reference here:
https://developer.apple.com/Library/ios/documentation/Foundation/Reference/NSURLSessionConfiguration_class/Reference/Reference.html#//apple_ref/occ/instp/NSURLSessionConfiguration/discretionary
Says: for the discretionary property:
Discussion
When this flag is set, transfers are more likely to occur when plugged
into power and on Wi-Fi. This value is false by default.
This property is used only if a session’s configuration object was
originally constructed by calling the backgroundSessionConfiguration:
method, and only for tasks started while the app is in the foreground.
If a task is started while the app is in the background, that task is
treated as though discretionary were true, regardless of the actual
value of this property. For sessions created based on other
configurations, this property is ignored.
This seems to imply that if a download is started in the background the OS always has discretion as to whether and when to proceed with the download. It seems that the OS is always waiting for a wifi connection before completing these tasks.
My experience supports this conjecture. I find that I can send several notifications for downloads while device is on cellular. They remain stuck. When I switch the device to wifi they all go through.
WTF?! Why is Apple doing THIS?! Of course the user is often not on Wi-Fi when download tasks are started. In my app I rely on downloading VERY small amounts of data when in the background AND on a cell network (for example due to an app-wake-up by location change). Is that somehow possible?
This should be marked as the answer, its spot on. I have seen several mysterious issues where downloads magically start only when connected via cable and always its been due to this setting.
I got the same problem. Finally I set
configuration.discretionary = NO;
and everything works fine.
For a background configuration, discretionary is YES by default; it seems the task then begins only when connected to both Wi-Fi and power. Hope this is helpful.
What are you doing in
(void)application:(UIApplication *)application didReceiveRemoteNotification:(NSDictionary *)userInfo fetchCompletionHandler:(void (^)(UIBackgroundFetchResult))completionHandler{}
Are you calling the completionHandler right away, before your download completes? I believe that doing this does not affect operation in Wi-Fi mode or when connected to Xcode. But somehow, when in the background on cellular, it makes the download stall until you go to Wi-Fi.
The only real way around this is to dump NSURLSession when the app is in the background and use CF sockets. I can successfully do HTTP requests over cellular while the app is in the background if I use CFStreamCreatePairWithSocketToHost to open a CFStream.
#import "Communicator.h"
@implementation Communicator {
CFReadStreamRef readStream;
CFWriteStreamRef writeStream;
NSInputStream *inputStream;
NSOutputStream *outputStream;
CompletionBlock _complete;
}
- (void)setupWithCallBack:(CompletionBlock) completionBlock {
_complete = completionBlock;
NSURL *url = [NSURL URLWithString:_host];
//NSLog(@"Setting up connection to %@ : %i", [url absoluteString], _port);
CFStreamCreatePairWithSocketToHost(kCFAllocatorDefault, (__bridge CFStringRef)[url host], _port, &readStream, &writeStream);
if(!CFWriteStreamOpen(writeStream)) {
NSLog(@"Error, writeStream not open");
return;
}
[self open];
//NSLog(@"Status of outputStream: %lu", (unsigned long)[outputStream streamStatus]);
return;
}
- (void)open {
//NSLog(@"Opening streams.");
inputStream = (__bridge NSInputStream *)readStream;
outputStream = (__bridge NSOutputStream *)writeStream;
[inputStream setDelegate:self];
[outputStream setDelegate:self];
[inputStream scheduleInRunLoop:[NSRunLoop currentRunLoop] forMode:NSDefaultRunLoopMode];
[outputStream scheduleInRunLoop:[NSRunLoop currentRunLoop] forMode:NSDefaultRunLoopMode];
[inputStream open];
[outputStream open];
}
- (void)close {
//NSLog(@"Closing streams.");
[inputStream close];
[outputStream close];
[inputStream removeFromRunLoop:[NSRunLoop currentRunLoop] forMode:NSDefaultRunLoopMode];
[outputStream removeFromRunLoop:[NSRunLoop currentRunLoop] forMode:NSDefaultRunLoopMode];
[inputStream setDelegate:nil];
[outputStream setDelegate:nil];
inputStream = nil;
outputStream = nil;
}
- (void)stream:(NSStream *)stream handleEvent:(NSStreamEvent)event {
//NSLog(@"Stream triggered.");
switch(event) {
case NSStreamEventHasSpaceAvailable: {
if(stream == outputStream) {
if (_complete) {
CompletionBlock copyComplete = [_complete copy];
_complete = nil;
copyComplete();
}
}
break;
}
case NSStreamEventHasBytesAvailable: {
if(stream == inputStream) {
//NSLog(@"inputStream is ready.");
uint8_t buf[1024];
NSInteger len = 0;
len = [inputStream read:buf maxLength:1024];
if(len > 0) {
NSMutableData* data=[[NSMutableData alloc] initWithLength:0];
[data appendBytes: (const void *)buf length:len];
NSString *s = [[NSString alloc] initWithData:data encoding:NSASCIIStringEncoding];
[self readIn:s];
}
}
break;
}
default: {
//NSLog(@"Stream is sending an Event: %lu", (unsigned long)event);
break;
}
}
}
- (void)readIn:(NSString *)s {
//NSLog(@"reading : %@",s);
}
- (void)writeOut:(NSString *)s{
uint8_t *buf = (uint8_t *)[s UTF8String];
[outputStream write:buf maxLength:strlen((char *)buf)];
NSLog(@"Writing out the following:");
NSLog(@"%@", s);
}
@end
| common-pile/stackexchange_filtered |
Positive input is returning as negative
My "num = -num" line inside of my "if (num<0)" line still affects results, even if my input is greater than 0.
#include <iostream>
int main()
{
std::cout << "Enter a positive number: ";
int num{};
std::cin >> num;
if (num < 0)
std::cout << "Negative number entered. Making positive.\n";
num = -num;
std::cout << "You entered: " << num;
return 0;
}
What is the first statement after if (num < 0) that you expect to execute if num >= 0?
A good compiler with good diagnostics enabled will warn you of stuff like this. Good tools rock.
To have multiple statements inside an if, you must use brackets.
And at this point of learning the language, I would recommend just always using brackets.
if (num < 0) {
std::cout << "Negative number entered. Making positive.\n";
num = -num;
}
Unlike languages like python, leading whitespace is not meaningful to the compiler in C++.
In fact I think Python is the only language that considers indenting as significant; every other language requires some syntax to delimit a code block.
| common-pile/stackexchange_filtered |
Routing in Wordpress
I am using this example to rewrite a URL to load a given file, car-details.php, but I still get the error "Page not found" when I access domain.com/account/account_page/9. How can I get this working?
class Your_Class
{
public function init()
{
add_filter( 'template_include', array( $this, 'include_template' ) );
add_filter( 'init', array( $this, 'rewrite_rules' ) );
}
public function include_template( $template )
{
//try and get the query var we registered in our query_vars() function
$account_page = get_query_var( 'account_page' );
//if the query var has data, we must be on the right page, load our custom template
if ( $account_page ) {
return CUSTOMER_CAR_PLUGIN_DIR.'pages/customer-car-details.php';
}
return $template;
}
public function flush_rules()
{
$this->rewrite_rules();
flush_rewrite_rules();
}
public function rewrite_rules()
{
add_rewrite_rule( 'account/(.+?)/?$', 'index.php?account_page=$matches[1]', 'top');
add_rewrite_tag( '%account_page%', '([^&]+)' );
}
}
add_action( 'plugins_loaded', array( new Your_Class, 'init' ) );
// One time activation functions
register_activation_hook( CUSTOMER_CAR_PLUGIN_DIR, array( new Your_Class, 'flush_rules' ) );
Could you please try domain.com/account/9 instead of domain.com/account/account_page/9? account_page is your query variable; it doesn't have to be in the query string.
Same issue, page not found. I have changed it to CUSTOMER_CAR_PLUGIN_DIR.'pages/customerxxx.php';, which doesn't exist in the plugin dir, and it did not throw a file-not-found error. I am loading a custom file from my plugin; could that be the issue?
Instead of adding query_vars with add_rewrite_tag(), use the query_vars filter hook. I've used the following code to test your routing and it's working just fine.
class OP_Plugin {
public function init() {
add_action( 'init', array( $this, 'add_rewrite_rules' ) );
add_filter( 'query_vars', array( $this, 'add_query_vars' ) );
add_filter( 'template_include', array( $this, 'add_template' ) );
}
public function add_template( $template ) {
$account_page = get_query_var( 'account_page' );
if ( $account_page ) {
echo "Working! \n";
echo "Query: {$account_page}";
// return CUSTOMER_CAR_PLUGIN_DIR.'pages/customer-car-details.php';
return '';
}
return $template;
}
public function flush_rules() {
$this->rewrite_rules();
flush_rewrite_rules();
}
public function add_rewrite_rules() {
add_rewrite_rule( 'account/(.+?)/?$', 'index.php?account_page=$matches[1]', 'top' );
}
public function add_query_vars( $vars ) {
$vars[] = 'account_page';
return $vars;
}
}
$op_plugin = new OP_Plugin();
$op_plugin->init();
This does not work for me. I think you meant $this->add_rewrite_rules();, but even with that it doesn't work, at least not for me on WP 5.3 with nginx.
Flush your permalink settings then try
Yeah, I realized I had to do that after playing with it a bit. I was a bit confused by your flush_rules function, which actually isn't used. Also, this doesn't work with the OP's original routing (domain.com/account/account_page/9) - it returns "account_page/9", but like you mentioned in your comment above, it's better to use a route like domain.com/account/9, which does work.
| common-pile/stackexchange_filtered |
Android: How to save a JSON file that let you keep the data even after power off?
Okay, so I have this file system using a serializer to save an ArrayList into a JSON file. The JSON file is created in the code. If I turn off the power, the data and file are erased. So I want the data saved and reloaded even when the Android phone is restarted. Which way to save data is the best? Can we use a JSON file to do that?
Files are not erased when devices are turned off.
Just save your JSON file on file system and reuse it when you need.
something is not right in your code. Like CommonsWare said, files are not erased! Anyway, why save data into JSON if you have a database for that?
Let me double check the code then. Anyone know how to find where the file is?
You can use SharedPreferences, save it to external storage, or save it to a SQLite DB.
You can convert the JSONObject to a string and save it using one of the above methods.
For examples on using different storage options, please refer to Storage options
You can save it in a file on the sdcard or you can use SharedPreferences. In fact, the toString() of a JSONObject returns a String that represents the JSON itself
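A minimal sketch of the file approach: write the JSON string to a file and read it back after a restart. (Hypothetical plain-Java sketch; on Android you would build the path from Context#getFilesDir() instead of the working directory.)

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class JsonStore {
    // Persist a JSON string to a file so it survives power-off.
    static void save(Path file, String json) throws IOException {
        Files.write(file, json.getBytes(StandardCharsets.UTF_8));
    }

    // Read the JSON string back, e.g. on the next app launch.
    static String load(Path file) throws IOException {
        return new String(Files.readAllBytes(file), StandardCharsets.UTF_8);
    }

    public static void main(String[] args) throws IOException {
        Path f = Paths.get("data.json");
        save(f, "{\"items\":[1,2,3]}");
        System.out.println(load(f));
    }
}
```

The string passed to save() can come from JSONObject.toString(), and the loaded string can be fed back into new JSONObject(...).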
| common-pile/stackexchange_filtered |
Adding input value from php in angular function
I have an input with a value setted by php variable:
<input id="name" type="text" value="<?php if(isset($cib)){echo $cib;}?>">.
I want use this value to set all the object of angularjs function as showed in the following code:
`angular.forEach($scope.food, function(obj){
obj["price"] = 500;
obj["detail"] = variable_X;
obj["count"] = 1;
obj["id"] = 1;
});`
If I assign the value to "variable_X" as I showed in the following code, it doesn't work. I tried:
var variable_X=<?php if(isset($cib)){echo $cib;} ?>
or
`var variable_X=document.getElementById("name");`
or using ng-model.
Can I obtain the value of this variable in another way?
var variable_X=<?php if(isset($cib)){echo $cib;} ?>
that should work fine, but it will throw an error if $cib is not set right?
the statement will become this to the browser if $cib is undefined...
var variable_X= ;
which fails...
need to set variable to something when it fails..
var variable_X=<?php echo isset($cib) ? $cib : 'false'; ?>;
then in angular check for FALSE value of $cib and do stuff
I use ng-init for assigning initial values when rendering html:
<div ng-controller="..." ng-init="cib = '<?php if(isset($cib)){echo $cib;}?>'">
Then you can use cib both in template and in controller's code:
obj["detail"] = $scope.cib
(note: reading from $scope)
| common-pile/stackexchange_filtered |
Broadcast a word to an xmm register
I need to move a 16-bit word eight times into an xmm register for SSE operations.
E.g.: I'd like to get the 16-bit word ABCD into the xmm0 register, so that the final result looks like
ABCD | ABCD | ABCD | ABCD | ABCD | ABCD | ABCD | ABCD
I want to do this in order to use the paddw operation later on. So far I've found the pshufd operation, which does what I want to do, but only for double words (32-bit). pshufw only works for - if I'm not mistaken - 64-bit registers. Is there an operation that does what I am looking for, or do I have to emulate it in some way with multiple pshufw?
Which SSE versions are you targeting?
Also if you only want to do a single paddw and your input/output is not consecutive in memory it would probably be better to just add them using scalar instructions.
@Jester I am going through a loop of paddw instructions afterwards.
If your ABCD is a constant you can do mov eax, 0xABCDABCD; movd xmm0, eax; pshufd xmm0, xmm0, 0 Obviously you can also load it from memory.
@fuz I'm targeting SSE2
@Jester so that's the problem. It's not a constant but stored in cx. Do you think I should do some shift so that ecx contains e. g. 0xABCDABCD?
movd xmm0, ecx; pshuflw xmm0, xmm0, 0; pshufd xmm0, xmm0, 0. Alternatively imul ecx, ecx, 0x00010001; movd xmm0, ecx; pshufd xmm0, xmm0, 0 (assuming top 16 bits of ecx are zero). Note I haven't checked latencies and throughput, just throwing out ideas :)
You can achieve the desired goal by performing a shuffle and then an unpack. In NASM syntax:
; load 16 bits from memory into all words of xmm0
; assuming 16-byte alignment
pshuflw xmm0, [mem], 0 ; gives you [ M, M, M, M, ?, ?, ?, ? ]
punpcklwd xmm0, xmm0 ; gives you [ M, M, M, M, M, M, M, M ]
Note that this reads 16 bytes from mem and thus requires 16-byte alignment.
Only the first 2 bytes are actually used. If the number is not in memory or you can't guarantee that reading past the end is possible, use something like this:
# load ax into all words of xmm0
movd xmm0, eax ; or movd xmm0, [mem] 4-byte load
pshuflw xmm0, xmm0, 0
punpcklwd xmm0, xmm0
With AVX2, you can use a vpbroadcast* broadcast load or a broadcast from a register source. The destination can be YMM if you like.
vpbroadcastw xmm0, [mem] ; 16-bit load + broadcast
Or
vmovd xmm0, eax
vpbroadcastw xmm0, xmm0
Memory-source broadcasts of 1 or 2-byte elements still decode to a load+shuffle uop on Intel CPUs, but broadcast-loads of 4-byte or 8-byte chunks are even cheaper: handled in the load port with no shuffle uop needed.
Either way this is still cheaper than 2 separate shuffles like you need without AVX2 or SSSE3 pshufb.
Both your and @Jester 's suggestions of doing imul ecx, ecx, 0x00010001; movd xmm0, ecx; pshufd xmm0, xmm0, 0 work perfectly. Which one should I choose? Is there any difference in performance?
Surprisingly, imul has latency 3 on cpus I have looked at, while pshuflw only 1.
@Jester: yup, it's a tradeoff between latency and execution-port pressure on the shuffle port. Intel Haswell and later only have 1 per clock shuffle throughput (port 5 only), so if you're doing a lot of shuffling this can be a problem. As setup for uops that run on other ports, it's fine. Multiply latency of 3c (and throughput of 1c fully pipelined) is really very good when you consider how much work that is. AMD Bulldozer-family has 4c latency for 32-bit multiply, and 6c latency for 64-bit multiply, neither fully pipelined.
| common-pile/stackexchange_filtered |
In OpenGL is there a way to get a list of all uniforms & attribs used by a shader program?
I'd like to get a list of all the uniforms & attribs used by a shader program object. glGetAttribLocation() & glGetUniformLocation() can be used to map a string to a location, but what I would really like is the list of strings without having to parse the glsl code.
Note: In OpenGL 2.0 glGetObjectParameteriv() is replaced by glGetProgramiv(). And the enum is GL_ACTIVE_UNIFORMS & GL_ACTIVE_ATTRIBUTES.
Variables shared between both examples:
GLint i;
GLint count;
GLint size; // size of the variable
GLenum type; // type of the variable (float, vec3 or mat4, etc)
const GLsizei bufSize = 16; // maximum name length
GLchar name[bufSize]; // variable name in GLSL
GLsizei length; // name length
Attributes
glGetProgramiv(program, GL_ACTIVE_ATTRIBUTES, &count);
printf("Active Attributes: %d\n", count);
for (i = 0; i < count; i++)
{
glGetActiveAttrib(program, (GLuint)i, bufSize, &length, &size, &type, name);
printf("Attribute #%d Type: %u Name: %s\n", i, type, name);
}
Uniforms
glGetProgramiv(program, GL_ACTIVE_UNIFORMS, &count);
printf("Active Uniforms: %d\n", count);
for (i = 0; i < count; i++)
{
glGetActiveUniform(program, (GLuint)i, bufSize, &length, &size, &type, name);
printf("Uniform #%d Type: %u Name: %s\n", i, type, name);
}
OpenGL Documentation / Variable Types
The various macros representing variable types can be found in the
docs. Such as GL_FLOAT, GL_FLOAT_VEC3, GL_FLOAT_MAT4, etc.
glGetActiveAttrib
glGetActiveUniform
Perfect answer: Short, straight to the point and everything I ever wanted to know :-)
Further, there's no such thing as GL_OBJECT_ACTIVE_UNIFORMS, there's a GL_OBJECT_ACTIVE_UNIFORMS_ARB but what you actually want to use is GL_ACTIVE_UNIFORMS. The same goes for attributes.
I just noticed something, this only gives you the location for the first element in a uniform array. Not for all of them. What do you do if you want / need all of them?
@Makogan the whole array has this location. To set it, you then use glUniform??v with count set to the number of elements in the array. And actual number of elements of the uniform is returned by glGetActiveUniform in its size output parameter.
There has been a change in how this sort of thing is done in OpenGL. So let's present the old way and the new way.
Old Way
Linked shaders have the concept of a number of active uniforms and active attributes (vertex shader stage inputs). These are the uniforms/attributes that are in use by that shader. The number of these (as well as quite a few other things) can be queried with glGetProgramiv:
GLint numActiveAttribs = 0;
GLint numActiveUniforms = 0;
glGetProgramiv(prog, GL_ACTIVE_ATTRIBUTES, &numActiveAttribs);
glGetProgramiv(prog, GL_ACTIVE_UNIFORMS, &numActiveUniforms);
You can query active uniform blocks, transform feedback varyings, atomic counters, and similar things in this way.
Once you have the number of active attributes/uniforms, you can start querying information about them. To get info about an attribute, you use glGetActiveAttrib; to get info about a uniform, you use glGetActiveUniform. As an example, extended from the above:
GLint maxAttribNameLength = 0;
glGetProgramiv(prog, GL_ACTIVE_ATTRIBUTE_MAX_LENGTH, &maxAttribNameLength);
std::vector<GLchar> nameData(maxAttribNameLength)
for(int attrib = 0; attrib < numActiveAttribs; ++attrib)
{
GLint arraySize = 0;
GLenum type = 0;
GLsizei actualLength = 0;
glGetActiveAttrib(prog, attrib, nameData.size(), &actualLength, &arraySize, &type, &nameData[0]);
std::string name((char*)&nameData[0], actualLength - 1);
}
Something similar can be done for uniforms. However, the GL_ACTIVE_UNIFORM_MAX_LENGTH trick can be buggy on some drivers. So I would suggest this:
std::vector<GLchar> nameData(256);
for(int unif = 0; unif < numActiveUniforms; ++unif)
{
GLint arraySize = 0;
GLenum type = 0;
GLsizei actualLength = 0;
glGetActiveUniform(prog, unif, nameData.size(), &actualLength, &arraySize, &type, &nameData[0]);
std::string name((char*)&nameData[0], actualLength - 1);
}
Also, for uniforms, there's glGetActiveUniformsiv, which can query all of the name lengths for every uniform all at once (as well as all of the types, array sizes, strides, and other parameters).
New Way
This way lets you access pretty much everything about active variables in a successfully linked program (except for regular globals). The ARB_program_interface_query extension is not widely available yet, but it'll get there.
It starts with a call to glGetProgramInterfaceiv, to query the number of active attributes/uniforms. Or whatever else you may want.
GLint numActiveAttribs = 0;
GLint numActiveUniforms = 0;
glGetProgramInterfaceiv(prog, GL_PROGRAM_INPUT, GL_ACTIVE_RESOURCES, &numActiveAttribs);
glGetProgramInterfaceiv(prog, GL_UNIFORM, GL_ACTIVE_RESOURCES, &numActiveUniforms);
Attributes are just vertex shader inputs; GL_PROGRAM_INPUT means the inputs to the first program in the program object.
You can then loop over the number of active resources, asking for info on each one in turn, from glGetProgramResourceiv and glGetProgramResourceName:
std::vector<GLchar> nameData(256);
std::vector<GLenum> properties;
properties.push_back(GL_NAME_LENGTH);
properties.push_back(GL_TYPE);
properties.push_back(GL_ARRAY_SIZE);
std::vector<GLint> values(properties.size());
for(int attrib = 0; attrib < numActiveAttribs; ++attrib)
{
glGetProgramResourceiv(prog, GL_PROGRAM_INPUT, attrib, properties.size(),
&properties[0], values.size(), NULL, &values[0]);
nameData.resize(values[0]); //The length of the name.
glGetProgramResourceName(prog, GL_PROGRAM_INPUT, attrib, nameData.size(), NULL, &nameData[0]);
std::string name((char*)&nameData[0], nameData.size() - 1);
}
The exact same code would work for GL_UNIFORM; just swap numActiveAttribs with numActiveUniforms.
Thanks for the samples. However actualLength - 1 should be actualLength, as glGetActiveAttrib returns the length of the name excluding the NULL-character.
For anyone out there that finds this question looking to do this in WebGL, here's the WebGL equivalent:
var program = gl.createProgram();
// ...attach shaders, link...
var na = gl.getProgramParameter(program, gl.ACTIVE_ATTRIBUTES);
console.log(na, 'attributes');
for (var i = 0; i < na; ++i) {
var a = gl.getActiveAttrib(program, i);
console.log(i, a.size, a.type, a.name);
}
var nu = gl.getProgramParameter(program, gl.ACTIVE_UNIFORMS);
console.log(nu, 'uniforms');
for (var i = 0; i < nu; ++i) {
var u = gl.getActiveUniform(program, i);
console.log(i, u.size, u.type, u.name);
}
Here is the corresponding code in python for getting the uniforms:
from OpenGL import GL
...
num_active_uniforms = GL.glGetProgramiv(program, GL.GL_ACTIVE_UNIFORMS)
for u in range(num_active_uniforms):
name, size, type_ = GL.glGetActiveUniform(program, u)
location = GL.glGetUniformLocation(program, name)
Apparently the 'new way' mentioned by Nicol Bolas does not work in python.
| common-pile/stackexchange_filtered |
Python maths does not multiply, it uses string instead?
I was just testing a little python maths and I could not multiply numbers! I am really confused because I thought this simple code would work:
test = raw_input("answer")
new = test * 5
print new
Instead, it just gave whatever I wrote five times next to each other. E.g. I write 8 and it prints 88888! Can somebody explain this?
You need to cast to int, raw_input returns a string:
test = int(raw_input("answer"))
You can see the type is str without casting:
In [5]: test = raw_input("answer ")
answer 8
In [6]: type(test)
Out[6]: str
In [7]: test = int(raw_input("answer "))
answer 8
In [8]: type(test)
Out[8]: int
When you multiply a string by an integer, Python returns the string repeated that many times (here, the string test repeated 5 times).
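A standalone Python 3 sketch of the difference (using a literal instead of reading input):

```python
test = "8"            # raw_input / input always return a string
print(test * 5)       # sequence repetition: prints 88888

number = int(test)    # cast to int before doing arithmetic
print(number * 5)     # integer multiplication: prints 40
```

The same repetition behaviour applies to any sequence type, e.g. `[0] * 3` gives `[0, 0, 0]`.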
Yeah, just: test = int(raw_input("answer")) (first line) or new = int(test) * 5 (second line). I like the former better in terms of readability.
| common-pile/stackexchange_filtered |
SimpleCmsImage.save will not save a thumbnail image
I have a Ruby on Rails web application that uses the Simple CMS plug-in to upload images to the app.
I've used this before. In fact, my current web application is a clone of an original application that works perfectly.
With the new app, the image itself uploads okay, but - unlike the original app - it does not automatically generate a thumbnail image.
I've isolated this down to the point where SimpleCmsImage.save is called. In the original app, this automatically does two insertions into the simple_cms_image table - one for the image itself; and one for the automatically-generated thumbnail image.
In the new app, there is only one insertion for the image, none for the thumbnail.
Does anyone have any suggestions? I'm at my wit's end trying to figure this one out.
Thanks in advance,
Tim
| common-pile/stackexchange_filtered |
Huge log backups due to enabling querystore
We have a SQL Server 2019 CU18 instance where we discovered a strange issue with Query Store.
Normally the average size of the hourly log backup is 40 MB, but as soon as we enable Query Store the average size of the log backup is 2.5 GB.
There are (according to Query Store) 140,000 queries executed per hour. This is about 40 executions per second.
This is the config of our Query Store:
ALTER DATABASE [db_name]
SET QUERY_STORE = ON
(
OPERATION_MODE = READ_WRITE
,CLEANUP_POLICY = (STALE_QUERY_THRESHOLD_DAYS = 45)
,DATA_FLUSH_INTERVAL_SECONDS = 900
,MAX_STORAGE_SIZE_MB = 2048
,INTERVAL_LENGTH_MINUTES = 30
,SIZE_BASED_CLEANUP_MODE = AUTO
,QUERY_CAPTURE_MODE = AUTO
);
When I open such a big log backup file with fn_dump_dblog I see that multiple transactions happen in the same second. The transactions all have the name 'SwapPage'.
| Operation | CONTEXT | AllocUnitId | Page ID | Transaction Name |
|---|---|---|---|---|
| LOP_BEGIN_XACT | LCX_NULL | NULL | NULL | SwapPage |
| LOP_INSYSXACT | LCX_INDEX_INTERIOR | 72057594047692800 | 0001:00056321 | NULL |
| LOP_INSYSXACT | LCX_CLUSTERED | 72057594047692800 | 0001:000a871c | NULL |
| LOP_INSYSXACT | LCX_CLUSTERED | 72057594047692800 | 0001:0000041b | NULL |
| LOP_INSYSXACT | LCX_CLUSTERED | 72057594047692800 | 0001:0000041c | NULL |
| LOP_FORMAT_PAGE | LCX_UNLINKED_REORG_PAGE | 72057594047692800 | 0001:000a8715 | NULL |
| LOP_MODIFY_HEADER | LCX_UNLINKED_REORG_PAGE | 72057594047692800 | 0001:000a8715 | NULL |
| LOP_INSYSXACT | LCX_CLUSTERED | 72057594047692800 | 0001:000a8715 | NULL |
| LOP_MODIFY_HEADER | LCX_HEAP | 72057594047692800 | 0001:000a871c | NULL |
| LOP_MODIFY_HEADER | LCX_HEAP | 72057594047692800 | 0001:0000041c | NULL |
| LOP_INSERT_ROWS | LCX_CLUSTERED | 72057594047692800 | 0001:000a8715 | NULL |
| LOP_MODIFY_HEADER | LCX_HEAP | 72057594047692800 | 0001:000a8715 | NULL |
| LOP_MODIFY_HEADER | LCX_HEAP | 72057594047692800 | 0001:000a8715 | NULL |
| LOP_MODIFY_ROW | LCX_INDEX_INTERIOR | 72057594047692800 | 0001:00056321 | NULL |
| LOP_MODIFY_HEADER | LCX_HEAP | 72057594047692800 | 0001:0000041b | NULL |
| LOP_MODIFY_HEADER | LCX_HEAP | 72057594047692800 | 0001:0000041b | NULL |
| LOP_MIGRATE_LOCKS | LCX_NULL | NULL | 0001:000a8715 | NULL |
| LOP_INSYSXACT | LCX_CLUSTERED | 72057594047692800 | 0001:000a8715 | NULL |
| LOP_INSYSXACT | LCX_CLUSTERED | 72057594047692800 | 0001:0000041c | NULL |
| LOP_INSYSXACT | LCX_UNLINKED_REORG_PAGE | 72057594047692800 | 0001:0000041b | NULL |
| LOP_INSYSXACT | LCX_CLUSTERED | 72057594047692800 | 0001:000a871c | NULL |
| LOP_INSYSXACT | LCX_INDEX_INTERIOR | 72057594047692800 | 0001:00056321 | NULL |
| LOP_COMMIT_XACT | LCX_NULL | NULL | NULL | NULL |
The allocation unit points to plan_persist_runtime_stats.
After a comment from Paul White I set up an Extended Event session to capture query_store_index_rebuild_started and query_store_index_rebuild_finished. To my surprise, Query Store was doing index rebuilds.
These are the results of this trace:
| event | timestamp | current_size_kb |
|---|---|---|
| query_store_index_rebuild_started | 2024-12-05 07:51:10.353 | 874208 |
| query_store_index_rebuild_finished | 2024-12-05 07:52:29.073 | 868832 |
| query_store_index_rebuild_started | 2024-12-05 08:20:58.497 | 873504 |
| query_store_index_rebuild_finished | 2024-12-05 08:22:18.320 | 869152 |
| query_store_index_rebuild_started | 2024-12-05 08:36:03.147 | 874528 |
| query_store_index_rebuild_finished | 2024-12-05 08:37:19.670 | 869664 |
| query_store_index_rebuild_started | 2024-12-05 09:06:00.943 | 874336 |
| query_store_index_rebuild_finished | 2024-12-05 09:07:12.750 | 870304 |
It looks like the index rebuild is started at around 874 MB, while the max size of Query Store is set to 2048 MB.
I also included the stacktrace of the query_store_index_rebuild_started event in the Extended Event.
sqllang!XeSqlPkg::CollectClientHostnameActionInvoke
sqllang!XeSqlPkg::CollectDatabaseIdActionInvoke
sqllang!XeSqlPkg::CollectDatabaseNameActionInvoke
sqllang!XeSqlPkg::CollectNtUsernameActionInvoke
sqllang!XeSqlPkg::CollectSessionIdActionInvoke
sqllang!XeSqlPkg::CollectTSqlStack<XE_ActionForwarder>
sqllang!XeSqlPkg::CollectTSqlStackActionInvoke
qds!XeQdsPkg::query_store_index_rebuild_started::Publish
qds!CDBQDS::ReclaimFreePages
qds!CDBQDS::DoSizeRetention
qds!CDBQDS::ProcessQdsBackgroundTask
qds!CQDSManager::AcquireGenericQdsDbAndProcess<<lambda_e51628d7833f66b5a045fa5bf2d27953>>
qds!CDBQDS::ProcessQdsBackgroundTask
sqldk!SOS_Task::Param::Execute
sqldk!SOS_Scheduler::RunTask
sqldk!SOS_Scheduler::ProcessTasks
sqldk!SchedulerManager::WorkerEntryPoint
sqldk!SystemThreadDispatcher::ProcessWorker
sqldk!SchedulerManager::ThreadEntryPoint
KERNEL32+0x17AC4
ntdll+0x5A8C1
I had hoped to find what is triggering the index rebuild but no such luck.
After some pointers from Zikato I added some extra Query Store related events to my trace. This shows that the index rebuild is only triggered after a query_store_size_retention_cleanup_started event has occurred.
No rebuild: (XE trace screenshot, not reproduced here)
Rebuild: (XE trace screenshot, not reproduced here)
Every time the cleanup runs, 0 KB has been deleted, but apparently a rebuild is needed anyway. What confuses me is the appearance of the cleanup event: I thought this would only be triggered when Query Store reaches 90% of the max storage size.
Increasing the max size of the querystore doesn't make any difference.
Did anybody experience the same issue, or can somebody explain what is happening?
Other databases on the instance don't have this problem.
Inspection under a debugger reveals that Query Store runs index reorganize on its internal tables after any cleanup, whether triggered by size or age.
This makes some sense, since the purpose of cleanup is to free up space. Users cannot perform maintenance on the internal QDS tables. We also can't disable page locks, which would be one way to prevent reorganize running.
So, there's nothing you can do to disable this reorganizing behaviour in 2019; it's just what Query Store does. This explains the log growth.
SQL Server checks whether either size- or time-based cleanup is needed every time it persists Query Store data to disk. The 90% threshold for size-based cleanup is stored as a variable value rather than a hard-coded constant. This suggests the server may vary the percentage under some conditions.
Time-based cleanup can be prevented from firing with undocumented global trace flag 7748. There doesn't appear to be a similar facility for size-based cleanup.
SQL Server appears to always try time-based cleanup first (if enabled, and so long as the trace flag isn't set) regardless of the triggering condition. This may be because time-based cleanup involves much less overhead (no deciding which queries to evict based on relatively expensive queries run against the internal tables).
Curiously, SQL Server runs time-based cleanup every INTERVAL_LENGTH_MINUTES. Even if it doesn't find anything to do, the index reorganize is still performed afterwards.
There is an undocumented trace flag to prevent index reorganization, but it is only present in SQL Server 2022, not SQL Server 2019. Even if you were running SQL Server 2022, you'd need to contact Microsoft Support to get authorization to use the new flag.
Meanwhile, you could try disabling time-based cleanup with undocumented trace flag 7748 (or setting STALE_QUERY_THRESHOLD_DAYS to zero) for normal operations, allowing it to run during a period where you can tolerate the index reorganization.
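A sketch in T-SQL of those two workarounds (the trace flag is undocumented and global, so treat this as an experiment rather than a supported fix):

```sql
-- Option 1: undocumented global trace flag that stops time-based cleanup
DBCC TRACEON (7748, -1);

-- Option 2: effectively disable the stale-query retention policy,
-- re-enabling it later during a window where the index reorganization
-- (and the resulting log growth) is tolerable
ALTER DATABASE [db_name]
SET QUERY_STORE (CLEANUP_POLICY = (STALE_QUERY_THRESHOLD_DAYS = 0));
```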
Is 2GB enough to hold 45 days of Query Store data?
Ideally, you should avoid size-based cleanup, as explained in this blog post from Kendra Little: Query Store Size Based Cleanup Causes Performance Problems - How to Avoid It
Can you try to double the max storage size and see if the reindex frequency drops?
You can also monitor it with query_store_size_retention_cleanup_started XE and check if it correlates to the query_store_index_rebuild_started
| common-pile/stackexchange_filtered |
On card click I need to call a function and send a different parameter to a function present in another component
My project details component TS file
Requirement: I need to pass a different parameter inside activity from another component's HTML on click, i.e. (click)="notifyAdmin('visited')"
Ts file:
notifyAdmin(activity){
this.userData = {...this.userData,downloadActivity:activity}
console.log(this.userData,"userData");
this.api.notifyDownload(this.userData).subscribe({
next:(data:any)=>{
console.log(this.extID,this.projectid);
if(data.success){
console.log("data success",data);
}else{
console.log("data failed");
}
},
error: (err: HttpErrorResponse) => {
if (err.status === 401 || err.status === 403) {
console.error("error block");
}
}
})
}
I tried using a service file, but the userData object has some properties which are present on that page only, so I cannot do it with a service file alone.
Needs better formatting
There are many solutions to your problem, I will suggest the one that seems the closest to what you are looking for:
Component1.component.html (the one that has all the information of userData):
<button (click)="adminService.notifyAdmin(activity)">
Component1.component.ts:
class Component1 implements OnInit {
constructor(private adminService: AdminService) {}
ngOnInit() {
this.adminService.setNotificationData(this.userData, this.extId, this.projectId);
}
}
Component2.component.html:
<button (click)="adminService.notifyAdmin(activity)">
admin.service.ts:
class AdminService {
private _userData;
private _extId;
private _projectId;
public setNotificationData(userData, extId, projectId) {
this._userData = userData;
this._extId = extId;
this._projectId = projectId;
}
public notifyAdmin(activity) {
if (!this._userData || !this._extId || !this._projectId) {
//manage this case
return;
}
this.api.notifyDownload(this._userData).subscribe({
next:(data:any)=>{
console.log(this._extId,this._projectId);
if(data.success){
console.log("data success",data);
}else{
console.log("data failed");
}
},
error: (err: HttpErrorResponse) => {
if (err.status === 401 || err.status === 403) {
console.error("error block");
}
}
})
}
}
The idea is that the service will be stateful: it stores all the information needed for the notification, provided by both components. If you can't control when each component will have the necessary information, use Observables (hint: combineLatest will be your friend in that case).
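Stripped of the Angular specifics, the stateful-service pattern can be sketched as plain TypeScript (class and method names mirror the answer; buildPayload is a hypothetical helper standing in for the api.notifyDownload call):

```typescript
// Minimal stateful service sketch: one component stores the notification
// data, another component triggers the notification later.
interface UserData {
  [key: string]: unknown;
}

class AdminService {
  private userData: UserData | null = null;
  private extId: string | null = null;
  private projectId: string | null = null;

  setNotificationData(userData: UserData, extId: string, projectId: string): void {
    this.userData = userData;
    this.extId = extId;
    this.projectId = projectId;
  }

  // Returns the payload that would be sent to the API,
  // or null if the state has not been fully provided yet.
  buildPayload(activity: string): UserData | null {
    if (!this.userData || !this.extId || !this.projectId) {
      return null; // manage this case: first component hasn't set state yet
    }
    return { ...this.userData, downloadActivity: activity };
  }
}
```

In the real service, buildPayload's result would feed the subscribe call shown above; returning null makes the "state not ready" case explicit and easy to test.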
Note: if you need to do such gymnastics, it might suggest that you have a poor architecture. However, it is not impossible that you would be forced to do so; maybe you are dealing with an old codebase and this is the best you can do. The solution I suggested is not optimal; in order to provide a better solution, more information about your requirements is needed.
| common-pile/stackexchange_filtered |
How to inject @PersistenceContext into pojo class
I'm trying to inject a persistence context into a POJO using the @PersistenceContext annotation. I've read that I need to make that POJO managed to do that, so I inject my POJO class into a servlet (so it's now managed as a dependent object, am I right?). But when the servlet tries to call a method on the injected object I get this error:
java.lang.IllegalStateException:
Unable to retrieve
EntityManagerFactory for unitName null
So it looks like the PersistenceContext is not injected into the POJO properly. What should I do to make it work?
My POJO class looks like this:
public class FileEntityControlerImpl implements FileEntityInterface {
@PersistenceContext
EntityManager entityManager;
@Override
public void createFile(FileEntity fileEntity) {
...}
@Override
public FileEntity retriveFile(String fileName) {
...}
Injection point:
@Inject
FileEntityInterface fileController;
If I use SLSB and inject using @EJB it works fine.
..::UPDATE::..
stacktrace:
WARNING: StandardWrapperValve[ResourcesServlet]: PWC1406: Servlet.service() for servlet ResourcesServlet threw exception
java.lang.IllegalStateException: Unable to retrieve EntityManagerFactory for unitName MambaPU
at com.sun.enterprise.container.common.impl.EntityManagerWrapper.init(EntityManagerWrapper.java:121)
at com.sun.enterprise.container.common.impl.EntityManagerWrapper._getDelegate(EntityManagerWrapper.java:162)
at com.sun.enterprise.container.common.impl.EntityManagerWrapper.createNamedQuery(EntityManagerWrapper.java:554)
at pl.zawi.mamba.core.integration.controllers.implementation.FileEntityControlerImpl.retriveFile(FileEntityControlerImpl.java:32)
at pl.zawi.mamba.core.face.servlets.ResourcesServlet.doGet(ResourcesServlet.java:60)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:734)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:847)
at org.apache.catalina.core.StandardWrapper.service(StandardWrapper.java:1523)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:279)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:188)
at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:641)
at com.sun.enterprise.web.WebPipeline.invoke(WebPipeline.java:97)
at com.sun.enterprise.web.PESessionLockingStandardPipeline.invoke(PESessionLockingStandardPipeline.java:85)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:185)
at org.apache.catalina.connector.CoyoteAdapter.doService(CoyoteAdapter.java:325)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:226)
at com.sun.enterprise.v3.services.impl.ContainerMapper.service(ContainerMapper.java:165)
at com.sun.grizzly.http.ProcessorTask.invokeAdapter(ProcessorTask.java:791)
at com.sun.grizzly.http.ProcessorTask.doProcess(ProcessorTask.java:693)
at com.sun.grizzly.http.ProcessorTask.process(ProcessorTask.java:954)
at com.sun.grizzly.http.DefaultProtocolFilter.execute(DefaultProtocolFilter.java:170)
at com.sun.grizzly.DefaultProtocolChain.executeProtocolFilter(DefaultProtocolChain.java:135)
at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:102)
at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:88)
at com.sun.grizzly.http.HttpProtocolChain.execute(HttpProtocolChain.java:76)
at com.sun.grizzly.ProtocolChainContextTask.doCall(ProtocolChainContextTask.java:53)
at com.sun.grizzly.SelectionKeyContextTask.call(SelectionKeyContextTask.java:57)
at com.sun.grizzly.ContextTask.run(ContextTask.java:69)
at com.sun.grizzly.util.AbstractThreadPool$Worker.doWork(AbstractThreadPool.java:330)
at com.sun.grizzly.util.AbstractThreadPool$Worker.run(AbstractThreadPool.java:309)
at java.lang.Thread.run(Thread.java:662)
persistence.xml:
<?xml version="1.0" encoding="UTF-8"?>
<persistence version="2.0" xmlns="http://java.sun.com/xml/ns/persistence" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/persistence http://java.sun.com/xml/ns/persistence/persistence_2_0.xsd">
<persistence-unit name="MambaPU" transaction-type="JTA">
<provider>org.eclipse.persistence.jpa.PersistenceProvider</provider>
<jta-data-source>jdbc/MambaDB</jta-data-source>
<exclude-unlisted-classes>false</exclude-unlisted-classes>
<shared-cache-mode>ALL</shared-cache-mode>
<properties>
<!-- <property name="javax.persistence.jdbc.password" value="root"/>-->
<!-- <property name="javax.persistence.jdbc.user" value="root"/>-->
<!-- <property name="javax.persistence.jdbc.driver" value="com.mysql.jdbc.Driver"/>-->
<!-- <property name="eclipselink.ddl-generation" value="create-tables"/>-->
<!-- <property name="eclipselink.logging.logger" value="org.eclipse.persistence.logging.DefaultSessionLog"/>-->
<property name="eclipselink.logging.level" value="ALL"/>
<property name="eclipselink.ddl-generation" value="drop-and-create-tables"/>
</properties>
</persistence-unit>
</persistence>
..::UPDATE2::..
If someone is interested, here is the source of my project. I've been using Maven so it should be simple to build and run. (The MySQL driver is not included in the POMs, so keep that in mind.)
Mamba.Core
Just for the reference:
You don't use the @PersistenceContext annotation at all on entity classes. Simply including a persistence unit with the POJOs will make them managed (adding a persistence.xml and an empty beans.xml into the META-INF folder of the JAR of the POJO classes).
@PersistenceContext is used on Session Beans and its purpose is to automatically inject the EntityManager into the session bean.
first, your pojo needs to be in a bean archive (have beans.xml in META-INF or WEB-INF) in order to be managed
@PersistenceContext requires a META-INF/persistence.xml, where you define a persistent unit
if there is a persistent unit and it still fails, try @PersistenceContext(unitName="name")
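Putting the first two points together, the jar of POJOs would look something like this (the file names are fixed by the CDI and JPA specs; the jar and unit names are illustrative):

```
mamba-dao.jar
└── META-INF/
    ├── beans.xml          <- empty marker file; makes this jar a CDI bean archive
    └── persistence.xml    <- defines the persistence unit (e.g. "MambaPU")
```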
beans.xml is present in all modules of my EAR file but there's nothing special inside, and yes, persistence.xml is also present. I have already tried adding unitName and name, but the error is still the same :(
@zawisza017 can you give the whole stacktrace
I have added the stack trace and persistence.xml to the main post. My EAR file consists of 3 modules: one web module and two EJB modules, one of which contains persistence.xml.
@zawisza017 I'm not sure if the JPA unit can span multiple modules. Try to define it in the desired module.
When I inject an SLSB with a PU it works. I have 3 modules: face.war where the servlet is, integration.jar where the SLSBs with injected EntityManagers are, and logic.jar where I have a singleton which uses some of the integration module's SLSBs to fill the database. My goal was to replace those SLSBs with managed POJOs, creating a kind of DAO, as people on Stack Overflow advise.
@zawisza017 that's strange. Perhaps grab the latest version of CDI (or the application server you are using)
Already tried Glassfish 3.1 b40 or thereabouts; same error. I have no problem placing my full project here if someone wants to try it himself.
@zawisza017 how about trying it on JBoss AS 6? Garcia noted that it works there. Give it a try
I'm currently downloading it; I have a really poor quality connection.
I have downloaded JBoss but it looks like I need to read something about how to make my app deploy... because the first attempt to deploy ended up with lots of errors... unfortunately I don't have time to do that :(
@zawisza017 quite bad indeed. After a handful of these experiences I decided not to use application servers at all, so I've been using Tomcat only for a long time :)
I have the same issue: Glassfish doesn't bring up the EntityManager if the DAO is not Stateless.
I think this is a Glassfish issue, because it works fine under JBoss AS 6.
Anyone else have this kind of problem? I don't know JBoss at all; it's bad news if I'm forced to use it ;/
If you don't like JBoss AS, try to use an empty Stateless Session Bean that has a private @PersistenceContext EntityManager em. This forces Glassfish to initialize the persistence unit. For me this solution works fine. It's not a good solution, I know, but it works. I'll do more tests and report this issue to the Glassfish team.
My PU is already initialized; moreover, before the servlet is created I use a singleton to fill the database with some content (from that I know that PU + SLSB injection works fine between modules).
Like Bozho, I don't know if @Inject can inject beans from another module. I can't find anything about this in the spec (JSR 299).
Simple injection using @Inject between modules works fine; I have already checked it by creating a simple one-method (print) Test interface and TestImpl, then injecting it into another module. However, that was between two EJB modules.
I have the same issue. My SLSB injects my DAO object with @Inject. The @PersistenceContext is in the POJO. When the POJO is in the same Maven project as the EJB, everything works fine. I'm not sure why, but the EJB cannot inject the POJO (with PU) when it is in a different project, unless I make the POJO an SLSB and use @EJB instead of @Inject.
| common-pile/stackexchange_filtered |
R: could not find function "generate.startweights", while using neuralnet package
I was running a neural network using the neuralnet package, but I ran into the problem where if the net hadn't achieved the threshold error predictions later would not work. I found the answer here: R: Error in nrow[w] * ncol[w] : non-numeric argument to binary operator, while using neuralnet package
I opened the calculate.neuralnet function, as described in the answer, commented out lines 65 and 66, and saved it. Upon running the neuralnet again, I now get the error
Error in calculate.neuralnet(learningrate.limit = learningrate.limit, :could not find function "generate.startweights"
When I use the same fixInNamespace function to access the generate.startweights function, it's there and accessible. I've since both reinstalled the package and uncommented those lines, but I still get the same error about generate.startweights, which I did not get before. I tried to comment on the original solution with my problems, but I do not have the requisite 50 reputation. Code below:
regData <- read.csv('RegistrationandVoterData2.csv')
voteData <- read.csv('RegDataVote.csv')
mergeReg = regData[, 3:ncol(regData)]
mergeVote = voteData[, 6:ncol(voteData)]
mergeTotal = merge(mergeReg, mergeVote, by = c('RGPREC', 'RGPREC_KEY'), all = FALSE)
n<-names(mergeTotal[, c(3:66, 68:79)])
scaled <- scale(mergeTotal[, c(3:66, 68:79)])
scaleDF = as.data.frame(scaled)
names(scaleDF) = n
scaleDF$perDem = mergeTotal$DEM.y / (mergeTotal$REP.y + mergeTotal$DEM.y)
f <- as.formula(paste("perDem ~", paste(n[!n %in% "perDem"], collapse = " + ")))
smp_size <- floor(0.30 * nrow(mergeTotal))
## set the seed to make your partition reproducible
train_ind <- sample(seq_len(nrow(mergeTotal)), size = smp_size)
train <- scaleDF[train_ind, ]
smp_size2 = floor(0.2 *nrow(mergeTotal[-train_ind, ]))
test_ind = sample(seq_len(nrow(mergeTotal[-train_ind, ])), size = smp_size2)
test = scaleDF[-train_ind, ][test_ind, ]
errors = matrix(NA, 24, 2)
for(it in 2:25){
nn <- neuralnet(f,data=train,hidden=c(it),linear.output=T)
pr.nn <- compute(nn,test[, -ncol(test)])
preds = pr.nn$net.result
act = scaleDF$perDem[-train_ind][test_ind]
MSE.nn <- sum((act - preds)^2, na.rm = TRUE)/length(act)
pr.tr <- compute(nn,train[, -ncol(train)])
predsTr = pr.tr$net.result
actTr = scaleDF$perDem[train_ind]
MSE.nnTr <- sum((actTr - predsTr)^2, na.rm = TRUE)/length(actTr)
errors[it, ] = c(MSE.nnTr, MSE.nn)
}
| common-pile/stackexchange_filtered |
SQL Express remote access
Is there any way to install SQL Express 2005 silently while configuring to allow remote access, or must the configuration be done with the SQL Server Management Studio UI?
After the SQL Service is installed log into the database and run the following and then restart the SQL Service.
exec sp_configure 'remote access', 1
reconfigure
This will allow remote access to the service. If you are installing a named instance you'll need to ensure that the SQL Browser service is running and set to automatic.
One of the issues is setting BROWSERSVCSTARTUP to automatic. Check BOL, in case I have the flag typed in incorrectly. I will have to mull this over a bit and determine if this is the only issue in the mix.
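For reference, a sketch of what the silent install invocation might look like. The parameter names here are assumptions recalled from the SQL Server 2005 Express setup documentation (DISABLENETWORKPROTOCOLS=0 is the switch that enables the TCP/IP and Named Pipes protocols); verify the exact spellings, including the browser-service startup flag mentioned above, against BOL before use:

```
SQLEXPR.EXE /qn ADDLOCAL=ALL INSTANCENAME=SQLEXPRESS DISABLENETWORKPROTOCOLS=0
```

Remote connections still require the sp_configure 'remote access' step shown in the answer, plus a running SQL Browser service for named instances.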
| common-pile/stackexchange_filtered |