From root and weight lattices of SU(N) to $\theta$-functions as sections of a line bundle and $CP$-space

I have trouble digesting the following discussion on pp. 10-12 of the work below, which explicitly constructs a map $M_{\rm flat}=\mathbb{E} / {\mathfrak S}_N \to CP^{N-1}$ from the moduli space of flat connections to projective space. My questions:

What is the purpose of using the root and weight lattices of SU(N) here?

$\theta_k$ are usually theta functions, but the authors emphasize that the $\theta_k$ are not functions but sections of a line bundle $L$. Why is that?

Their discussion is detailed below:

We now make the map $M_{\rm flat}=\mathbb{E} / {\mathfrak S}_N \to CP^{N-1}$ more explicit. We denote the root lattice of $SU(N)$ by ${\mathbb L}$
$$ {\mathbb L}=\left\{ \vec{\ell} =(\ell_1, \cdots, \ell_N) \in \mathbb Z^N; \sum_i \ell_i=0 \right\}\;. $$
The weight lattice is spanned by the fundamental weights
$$ \vec{e}_k=(\overset{1}{1},\cdots,\overset{k}{1},0,\cdots,0) - \frac{k}{N}(1,\cdots,1)\;. $$
We define theta functions as
$$ \theta_k(\vec{\phi}) := \sum_{ \vec{\ell} \in {\mathbb L}} e^{ \pi i \tau (\vec{\ell}+\vec{e}_k)^2+ 2\pi i (\vec{\ell}+\vec{e}_k) \cdot \vec{\phi} } ~~~~~(k=1,\cdots,N)\;, $$
where $\vec{\phi}=(\phi_1,\cdots,\phi_N)$ and the inner product between vectors is defined as $\vec{\phi}\cdot \vec{\ell}=\sum_i \phi_i \ell_i$. These theta functions are invariant under the Weyl symmetry ${\mathfrak S}_N$ acting on $\vec{\phi}$, because each set
$$ \vec{e}_k+{\mathbb L}=\{ \vec{e}_k+\vec{\ell} ; \vec{\ell} \in {\mathbb L} \} $$
is Weyl invariant, and the Weyl symmetry preserves the inner product. Furthermore, under the shift
$$ \vec{\phi} \to \vec{\phi} +\tau \vec{m} - \vec{n}~~~~~(\vec{m}, \vec{n} \in {\mathbb L}) \;, $$
they transform as
$$ \theta_k(\vec{\phi} + \tau \vec{m} - \vec{n}) =e^{ -\pi i \tau \vec{m}^2-2\pi i \vec{m} \cdot \vec{\phi}}\theta_k(\vec{\phi}) \;. $$
Note that the factor $e^{ -\pi i \tau \vec{m}^2-2\pi i \vec{m} \cdot \vec{\phi}}$ is independent of $k$. We denote points of $CP^{N-1}$ by homogeneous coordinates $[Z_1,\cdots,Z_N]$. Then, if we define
$$ \varphi (\vec{\phi}):=[\theta_1(\vec{\phi}), \cdots, \theta_N(\vec{\phi})] \;, $$
the above properties imply that this is a well-defined map from $M_{\rm flat}$ to $CP^{N-1}$. We claim that this is an isomorphism between $M_{\rm flat}$ and $CP^{N-1}$.
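A quick numerical sanity check of the two quoted properties (Weyl invariance and the quasi-periodicity factor) for the simplest case $N=2$, with the lattice sum truncated. This sketch is mine, not from the paper:

```python
# Sketch (mine, not the paper's): truncated theta sums for N = 2, checking
# Weyl invariance and the quasi-periodicity factor numerically.
import cmath

TAU = 0.3 + 1.1j   # modular parameter; Im(tau) > 0 makes the sum converge
CUT = 25           # truncation of the root-lattice sum

# fundamental weights e_k = (1,...,1,0,...,0) - (k/N)(1,...,1) for N = 2
WEIGHTS = [(0.5, -0.5), (0.0, 0.0)]

def theta(k, phi):
    """Truncated theta_k(phi) for SU(2); root lattice is {(l, -l) : l in Z}."""
    e = WEIGHTS[k - 1]
    total = 0j
    for l in range(-CUT, CUT + 1):
        v = (l + e[0], -l + e[1])
        q = v[0] * v[0] + v[1] * v[1]
        dot = v[0] * phi[0] + v[1] * phi[1]
        total += cmath.exp(cmath.pi * 1j * (TAU * q + 2 * dot))
    return total

phi = (0.17, -0.17)

# Weyl symmetry S_2 swaps the components of phi; theta_k is invariant.
assert abs(theta(1, phi) - theta(1, (phi[1], phi[0]))) < 1e-10

# Shift phi -> phi + tau*m with m = (1, -1) in the root lattice:
# theta_k picks up the k-independent factor exp(-pi i tau m^2 - 2 pi i m.phi).
m = (1, -1)
shifted = (phi[0] + TAU * m[0], phi[1] + TAU * m[1])
factor = cmath.exp(-cmath.pi * 1j * (TAU * (m[0] ** 2 + m[1] ** 2)
                                     + 2 * (m[0] * phi[0] + m[1] * phi[1])))
assert abs(theta(1, shifted) - factor * theta(1, phi)) < 1e-8
print("both properties hold numerically")
```

The quasi-periodicity is exactly why the $\theta_k$ are sections rather than functions: on the torus they are periodic only up to a common nonvanishing factor, which is the transition behaviour of a section of a line bundle, and because that factor is independent of $k$ the ratios $[\theta_1:\cdots:\theta_N]$ are genuinely well defined on $M_{\rm flat}$.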
common-pile/stackexchange_filtered
VBA code in Excel to make text between tags bold

I have a csv file which includes the html tags < b > and < /b > to signify bold text (i.e. several words between these tags, in a longer block of text within the cell, should be bold). Is there a way, using VBA code in Excel, to strip the tags and make the text between the tags bold? Note - there are sometimes multiple sets of tags within a given cell.

Are the html tags always inside a set of commas - i.e. one,<b>two</b>,three - or do they ever span them - i.e. one,<b>two, three</b>?

The tags do not span over the commas, but not all text between the commas is in the tags, e.g. a three cell example may be: one,not bold < b > bold < /b > not bold < b> bold again </b> not,three

This should do what you want:

Sub BoldTags()
    Dim X As Long, BoldOn As Boolean
    BoldOn = False 'Default from start of cell is not to bold
    For X = 1 To Len(ActiveCell.Text)
        If UCase(Mid(ActiveCell.Text, X, 3)) = "<B>" Then
            BoldOn = True
            ActiveCell.Characters(X, 3).Delete
        End If
        If UCase(Mid(ActiveCell.Text, X, 4)) = "</B>" Then
            BoldOn = False
            ActiveCell.Characters(X, 4).Delete
        End If
        ActiveCell.Characters(X, 1).Font.Bold = BoldOn
    Next
End Sub

Currently it is set to run on the ActiveCell; you can just plop it in a loop to do a whole column. You can easily adapt this code for other HTML tags for cell formatting (i.e. italic etc.).

This was in the cell I tested on (minus the space after <): Sample < b>Te< /b>st of < B>bolding< /B> end. The result was: Sample Test of bolding end. Hope that helps.
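The same strip-and-record logic is language-independent; here is a hypothetical Python sketch of it (names are mine, and this is not Excel code). Like the VBA routine above, it assumes the tags are balanced and not nested:

```python
# Hypothetical Python sketch of the same state machine: strip <b>/</b> tags,
# case-insensitively, and record which character ranges of the remaining
# text should be bold. Assumes balanced, non-nested tags.
import re

def strip_bold_tags(s):
    """Return (plain_text, bold_ranges) with ranges as (start, end) pairs."""
    out, ranges = [], []
    bold_start = None
    pos = 0
    for m in re.finditer(r"</?b>", s, flags=re.IGNORECASE):
        out.append(s[pos:m.start()])   # keep the text before this tag
        pos = m.end()                  # and skip over the tag itself
        here = sum(len(part) for part in out)
        if m.group().lower() == "<b>":
            bold_start = here
        elif bold_start is not None:   # closing tag: finish the current range
            ranges.append((bold_start, here))
            bold_start = None
    out.append(s[pos:])
    return "".join(out), ranges

text, bold = strip_bold_tags("Sample <b>Te</b>st of <B>bolding</B> end")
print(text)  # Sample Test of bolding end
print(bold)  # [(7, 9), (15, 22)]
```

In the VBA version the ranges are applied to cell characters on the fly; here they are returned so the caller can apply whatever formatting the host application uses.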
Scala - choose constructor by runtime type of constructor arguments

I have a class with several different constructors, which differ in the types of their parameters, where all these parameter types extend the same base class. See here for a simplified example:

abstract case class GeneralDataType()
case class SpecificDataTypeOne() extends GeneralDataType
case class SpecificDataTypeTwo() extends GeneralDataType

case class MyNumber(myDataType: Int) extends {
    def this(data: SpecificDataTypeOne) = this(1)
    def this(data: SpecificDataTypeTwo) = this(2)
}

def getDataType(typeId: Int): GeneralDataType = typeId match {
    case 1 => new SpecificDataTypeOne
    case 2 => new SpecificDataTypeTwo
}

val x = getDataType(1)
// error: Cannot resolve constructor
val mn = new MyNumber(x)

How, at runtime, do I choose the correct constructor to use according to the parameter types? In Eclipse I'm getting this error: case class SpecificDataTypeOne has case ancestor GeneralDataType, but case-to-case inheritance is prohibited. To overcome this limitation, use extractors to pattern match on non-leaf nodes.

As others suggested, try using a companion object as a factory (I still get the error I added in a comment, but that might be Scala version dependent):

object MyNumber {
    def apply(x: GeneralDataType): MyNumber = x match {
        case SpecificDataTypeOne() => new MyNumber(1)
        case SpecificDataTypeTwo() => new MyNumber(2)
    }

    def getDataType(typeId: Int): GeneralDataType = typeId match {
        case 1 => new SpecificDataTypeOne
        case 2 => new SpecificDataTypeTwo
    }

    val x = getDataType(1)
    val mn = MyNumber(x)
}

case class MyNumber(myDataType: Int)

abstract case class GeneralDataType()
case class SpecificDataTypeOne() extends GeneralDataType
case class SpecificDataTypeTwo() extends GeneralDataType

Not exactly sure about your use case, but a companion object can be used here.
object MyNumber {
    def apply(typeId: Int): MyNumber = typeId match {
        case 1 => new MyNumber(new SpecificDataTypeOne)
        case 2 => new MyNumber(new SpecificDataTypeTwo)
    }
}

val mn = MyNumber(1)

Hi, I think you misunderstood the question. I get an object of either type (SpecificDataTypeOne or Two) from outside, and then want to call the appropriate constructor based on its type.

If you just need to do it for one GeneralDataType (or maybe a few) and a few constructors each, pattern matching will do what you need:

x match {
    case y1: SpecificDataTypeOne => new MyNumber(y1)
    case y2: SpecificDataTypeTwo => new MyNumber(y2)
}

You can make a more general solution using reflection, but that should only be used if the above is not good enough.

If I try to overload MyNumber's constructor to contain your code, it doesn't work... You mean something like def this(data: GeneralDataType) = data match ...? No, it won't work. Use an apply method in the companion object instead, as in Rockie Yang's answer (just take GeneralDataType as the argument).
Caching not working on page with asynchronously loaded asset transforms

I'm using PictureFill on a large page, containing many image transforms, that doesn't seem to be caching despite most of the page being wrapped in {% cache %} tags. When I view the page after several refreshes with Dev Mode enabled, the Profiling Summary Report reports Total Queries at 174, and the HTML source still contains many image urls that look something like /cpresources/transforms/XX. I read in this answer that Craft won't cache a page with unresolved transforms, which might explain why this page isn't being cached. But then the question is: why aren't these transforms resolving?

Are you getting any JS errors in your browser's console? If the transforms are throwing an error, it'll be logged in craft/storage/runtime/logs. With that many transforms running at once, it's highly possible PHP is running out of memory and/or time.

Thanks, Brad. No JavaScript errors in the console, and my craft/storage/runtime/logs aren't reporting anything. Can you tell me where I could check to determine whether PHP is running out of memory/time?

If it's a PHP error, that would get logged in the same folder in a file called phperrors.log. If it's an Apache error, that's wherever your error log files are set up. Probably worth checking the network tab in your browser and seeing what the response is for the transform AJAX request, too.

No PHP errors, and no Apache errors. Can you elaborate on the AJAX request? All of the image responses seem to return okay, but when the page loads the next time they all need to be made again. The viewed page source always contains a large number of /cpresources/transforms/ urls. I'm not sure it matters, but one thing that is unique about this situation is that most of the images on the page are not being loaded with the page; they are stored in a data-src attribute and PictureFill loads them asynchronously after the initial page load.
Here's a snippet for reference: https://gist.github.com/cmalven/202027816afb2604e302

Interesting: after reducing a bunch of images on the page, it seems that all of the images left with /cpresources/transforms/ urls are in the <noscript> tag. I'm guessing that because these images are almost never requested, they will never be resolved, and as a result the cache will never be generated. Does that sound right?

Hmm - I've had performance issues on a site using PictureFill as well. Very interested to know if I have a similar issue (sitting at ~150 queries despite caching).

Yep, that definitely seems to be the issue. I modified the fallback image inside of <noscript> to use the original upload (not a transform), and the page now loads much faster, suggesting that the cache was correctly generated. Now down to 12 queries (from over 174). Unfortunately, this only worked on one page of my site; other pages still have an issue with asynchronously loaded images (not in a <noscript>) remaining at /cpresources/transforms/ URLs, and thus the page never being cached. It seems to be the case that Craft won't attempt to resolve these /cpresources/transforms/ URLs until the asset is actually requested, which can mess with a setup like PictureFill where there is no guarantee that the transformed assets will ever be loaded.

The solution in my case was to set 'generateTransformsBeforePageLoad' => true in your Craft config array, as mentioned here. After doing this, all transform URLs resolve and the page is properly cached.

Thanks for solving this, cmal! This is really helpful as I also want to set up PictureFill on a Craft install this week.

Wow - I can confirm that if you're using Picturefill and image transforms, this makes a huge difference with regard to caching performance. Just dropped my page load from 150 queries down to 17.
Redirect url to another domain using .htaccess

I have the following in a .htaccess file:

# BEGIN WordPress
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /
RewriteRule ^index\.php$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /index.php [L]
</IfModule>
# END WordPress

I have tried the following but it does not work:

RewriteRule ^this-url$ http://www.anotherdomain.com/ [NC,L,R=301]

Any idea how I can resolve this? Regards, Neil.

It should work with your rule, but it must be placed before those concerning WordPress (always redirect before rewriting):

RewriteEngine On
RewriteRule ^this-url$ http://www.anotherdomain.com/ [NC,L,R=301]

# BEGIN WordPress
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /
RewriteRule ^index\.php$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /index.php [L]
</IfModule>
# END WordPress

I'm using this code in my .htaccess to redirect to another website:

RewriteEngine On
RewriteRule ^(.*)$ https://www.yourdomain.com/$1 [R=301,L]

That should work.
Implicit Differentiation: Demand

Does anybody know how to solve this? A price $p$ (in dollars) and demand $x$ for a product are related by $2x^2+2xp+50p^2=20600$. If the price is increasing at a rate of 2 dollars per month when the price is 20 dollars, find the rate of change of the demand.

Hint: Writing $p' = \frac{\text{d}p}{\text{d}t}$, you know that $p' = 2$ when $p = 20$.

$$2x^2+2xp+50p^2=20600\tag{1}$$
Differentiating with respect to time:
$$4xx'+2x'p+2xp'+100pp'=0$$
$$x'(2x+p)+p'(x+50p)=0$$
$$x'=-{p'(x+50p) \over (2x+p)}$$
At a particular instant of time:
$$x_1'=-{p_1'(x_1+50p_1) \over (2x_1+p_1)}$$
...you have the following values: $p_1'=2$, $p_1=20$. To complete the calculation you need the value of $x_1$ too. It can be obtained from (1) by solving a simple quadratic equation for $x_1$, knowing the value of $p_1$:
$$2x_1^2+2x_1p_1+(50p_1^2-20600)=0\tag{1}$$
You can proceed from here.
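Carrying the hint through to a number (this completion is mine; the answer above stops at "proceed from here"): with $p_1 = 20$,

```latex
2x_1^2 + 40x_1 + (50\cdot 20^2 - 20600) = 0
\;\Longrightarrow\; x_1^2 + 20x_1 - 300 = 0
\;\Longrightarrow\; x_1 = \frac{-20 + \sqrt{400 + 1200}}{2} = 10,
```

taking the positive root since demand cannot be negative. Substituting, $x_1' = -\dfrac{2\,(10 + 50\cdot 20)}{2\cdot 10 + 20} = -\dfrac{2020}{40} = -50.5$, so demand is decreasing at about $50.5$ units per month.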
Redactor encode option like TinyMCE

Was playing around with Redactor, looking to replace a TinyMCE implementation. The backend is ASP.NET, and I was trying really hard not to turn off RequestValidation for the page, as the current implementation uses the TinyMCE encoding="xml" option. Redactor does not do this out of the box, and I could not find anything suitable online. So in the end I created a really simple Redactor plugin as follows:

if (typeof RedactorPlugins === 'undefined') var RedactorPlugins = {};

RedactorPlugins.encode = {
    init: function () {
        // add some submit handling to encode the html values
        var f = this.$source.closest('form');
        if (f) f.bind('submit', { el_id: this.$element.attr('id') }, this.encodeOnSubmit);
    },
    encodeOnSubmit: function (e) {
        // grab the html off the element values and encode it
        var source = $('#' + e.data.el_id), h, enc;
        if (!source) return;
        h = source.val();
        // encode the source value
        enc = $('<div/>').text(h).html();
        // set the source to the encoded value
        source.val(enc);
    }
}

This appears to work well (other than stripping of white space with the simple encoding implementation). Was wondering if anyone else had a better way or some thoughts? Otherwise this may be useful to others!
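For what it's worth, the $('<div/>').text(h).html() trick is plain HTML entity escaping of &, < and > (quotes are left alone, since browsers don't escape them when serializing a text node). The equivalent operation in Python, just to pin down what "encode" means here:

```python
# html.escape with quote=False escapes &, < and > but leaves quotes alone,
# matching what a browser produces when reading back a text node as HTML.
import html

raw = 'Some <b>rich</b> text & "quotes"'
print(html.escape(raw, quote=False))
# Some &lt;b&gt;rich&lt;/b&gt; text &amp; "quotes"
```

With quote=True (the default), double and single quotes are escaped too, which is what you would want if the encoded value ever lands inside an HTML attribute.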
Many to many relationship CRUD in Room

I have modeled a many-to-many relationship with Room based on the docs and this medium article. Using this @Relation I can retrieve RecipeWithIngredients or IngredientWithRecipe objects from the database. For this to work I need to insert a recipe into the recipe table, an ingredient into the ingredient table, then a reference with both of their ids into the reference table. I don't think there is another way of doing insertion other than using @Transaction on the Dao and just doing it all in one method; however, for deletion and updating, shouldn't I be able to do those using the Relation as well? The documentation on Room looks a bit lacking to me, and so I've come to you.

//Get for recipeWithIngredients object. (This works)
@Transaction
@Query("SELECT * FROM Recipe WHERE recipeID = :ID")
suspend fun getRecipeWithIngredients(ID: Long): RecipeWithIngredients

//Insert recipeWithIngredients object. (I don't see why this wouldn't work)
//IDs are autogenerated so I can't get them until they are inserted
@Transaction
suspend fun insertRecipeWithIngredients(recipeWithIngredients: RecipeWithIngredients) {
    val recipeID = insertRecipe(recipeWithIngredients.recipe)
    for (ingredient in recipeWithIngredients.ingredients) {
        val ingredientID = insertIngredient(ingredient)
        insertRecipeIngredientRef(recipeID, ingredientID)
    }
}

//I could try doing this like the insert, but shouldn't there be a way to do it like the get?
@Transaction
@Query("SOME SQL HERE")
suspend fun deleteRecipeWithIngredients(recipeWithIngredients: RecipeWithIngredients)

I don't think there is another way of doing insertion other than using @Transaction on the Dao and just doing it all in one method

Your idea, implemented by insertRecipeWithIngredients, is interesting and I guess it should work, but I wouldn't say it's a common way to do that.
A many-to-many relation in general is when you have two separate entities that have a value even without a relationship between them (User & UserGroup, for example). Even in your use case it can be that one user first adds a recipe with its short text description and needed steps, and then another user binds ingredients to it, so insertRecipe and insertRecipeIngredients should be done separately. In this schema you should implement three separate insert methods: first for Recipe, second for Ingredient, and third for RecipeIngredientRef. But of course, in your specific case you can do it like you've described above.

for deletion and updating shouldn't I be able to do those using the Relation as well

Out of the box, Room doesn't support that; @Relation is only for queries. If for deletion you want to delete the rows from RecipeIngredientRef that reference a specific Recipe, then it's enough to use the standard foreign-key mechanism: cascade deleting. With that you only need to write a delete for the Recipe entity, and Room & SQLite will do the rest for you. As for update, there really is a lack of recommendations in the documentation. One of the choices is to first delete all rows in RecipeIngredientRef with a specific Recipe and then use the same insert method for RecipeIngredientRef (both operations in a transaction).

I just finished another in-depth online search for enlightenment and found out everything you just said; the fact that you said it just confirms everything I've read. Thank you for sharing the knowledge.
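The cascade-delete mechanism described above can be demonstrated with plain SQLite, which is what Room sits on. This is a hypothetical minimal schema sketched in Python, not Room code; the table and column names mirror the question:

```python
# Hypothetical minimal junction-table schema in plain SQLite, showing the
# ON DELETE CASCADE behaviour: deleting a Recipe removes its rows from the
# junction table while the Ingredient rows survive.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")  # SQLite needs this per connection
con.execute("CREATE TABLE Recipe (recipeID INTEGER PRIMARY KEY)")
con.execute("CREATE TABLE Ingredient (ingredientID INTEGER PRIMARY KEY)")
con.execute("""
    CREATE TABLE RecipeIngredientRef (
        recipeID     INTEGER REFERENCES Recipe(recipeID) ON DELETE CASCADE,
        ingredientID INTEGER REFERENCES Ingredient(ingredientID) ON DELETE CASCADE,
        PRIMARY KEY (recipeID, ingredientID)
    )""")
con.execute("INSERT INTO Recipe VALUES (1)")
con.executemany("INSERT INTO Ingredient VALUES (?)", [(10,), (11,)])
con.executemany("INSERT INTO RecipeIngredientRef VALUES (?, ?)", [(1, 10), (1, 11)])

# One delete on the Recipe entity; the cascade cleans up the junction table.
con.execute("DELETE FROM Recipe WHERE recipeID = 1")
print(con.execute("SELECT COUNT(*) FROM RecipeIngredientRef").fetchone()[0])  # 0
print(con.execute("SELECT COUNT(*) FROM Ingredient").fetchone()[0])           # 2
```

In Room the same thing is declared with @ForeignKey(onDelete = CASCADE) on the junction entity, after which a single @Delete on Recipe suffices.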
Field Theory Phase Factor vs Anomaly

In this paper on topological quantum field theories the authors discuss something called the anomaly in section 5. In Witten's paper on field theory and the Jones polynomial he discusses something called the phase ambiguity on page 390. Are the anomaly and the phase ambiguity related?

Edit: I originally referenced the wrong page and term in Witten's paper. Fixed it.

Yes, the "phase ambiguity" in Witten's paper is the so-called framing anomaly: the Witten-Reshetikhin-Turaev invariant gains an extra phase factor once you change the framing of a 3-manifold, but this problem is fixed by using the canonical 2-framing. It seems that in the paper you referenced, the $\theta$ map is a Dehn twist, and the "anomaly" is a sort of framing anomaly in this sense.
Temperature of gas leaking into chambers

An initially evacuated and thermally isolated chamber has a small hole opened in its side, through which an ideal gas effuses from the outside. The gas outside is at standard temperature and pressure. A second, smaller hole, directly opposite the first on the other side of the first chamber, lets gas into a second evacuated and thermally isolated chamber. The initial temperature of the gas entering the second chamber, when the first hole is opened, is the same as the temperature of the gas in the first chamber. Why is this the case?

Edit: Note that the hole is considered to be smaller than the mean free path, so the gas will not be able to reach equilibrium with the surroundings.

Why should only the initial temperature be the same? The temperature of the ideal gas will be the same at all times in all chambers whose walls are maintained at the standard temperature or are thermally isolated. The expansion of an ideal gas into vacuum doesn't perform any work, therefore there is no temperature change.

Firstly, I'm sorry I didn't make myself clear: I'm considering the case where the hole which the gas effuses through is much smaller than the mean free path. As a result, the gas will never reach equilibrium. The effused gas will have a non-Maxwellian distribution, as shown on page 22 of my professor's lecture notes: https://www-thphys.physics.ox.ac.uk/people/AlexanderSchekochihin/A1/2014/A1LectureNotes.pdf
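For background on why the effused gas is non-Maxwellian (standard kinetic theory, stated here from general knowledge rather than taken from the linked notes): the flux through a hole smaller than the mean free path weights the bulk Maxwellian by one extra factor of $v$, which skews the beam toward faster molecules and raises its mean kinetic energy above the bulk value $\tfrac{3}{2}k_B T$:

```latex
f_{\text{eff}}(v)\,\mathrm{d}v \;\propto\; v\, f_{\text{Maxwell}}(v)\,\mathrm{d}v
\;\propto\; v^{3}\, e^{-m v^{2}/2 k_B T}\,\mathrm{d}v,
\qquad
\left\langle \tfrac{1}{2} m v^{2} \right\rangle_{\text{eff}} \;=\; 2 k_B T .
```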
Percentage top value showing wrong computed value in Chrome [possible bug!]

I created an element dynamically in jQuery, added it to the page, and added a class to it which gives it top:50%. Everything is fine; it's at 50%. But when I get the value of top like this, .css('top'), I get the wrong value: 470 instead of 20. The problem is, this wrong value, even though not applied, causes an error in the effect I am trying to get, which relies on getting the correct value via JavaScript. This is not a problem in Firefox: I get what I see and what's computed.

Jsfiddle: http://jsfiddle.net/techsin/dbj2un2e/

d.css('top', d.css('top')); changes the actual position in Chrome. (Uncomment to see its effect.) Screenshot to explain a little further. (Open the image in a new tab to see it clearly.)

You'll need to recreate this in a fiddle or post a link for anyone to know what's actually happening. Yeah, I was able to reproduce it in jsfiddle.

Use position().top instead of css('top'):

var d = $('<div>').addClass('zzz');
$('.abc').append(d);
console.log(d.position().top);

FIDDLE
jQuery Full Calendar Displays Incorrect Dates on Calendar

I am fetching events via JSON, but Full Calendar is rendering them on the wrong dates. JS:

events: function(start, end) {
    start = start.getFullYear() + '-' + (start.getMonth() + 1) + '-' + start.getDate()
    end = end.getFullYear() + '-' + (end.getMonth() + 1) + '-' + end.getDate()
    $.ajax({
        url: '/activity/calendar/user-feed/?start=' + start + '&end=' + end + '&activity_id=' + activity_id,
        dataType: 'json'
    });
},

The ajax response is:

[{"start": "2013-05-24", "end": "2013-05-24", "title": "Avaliable"},
 {"start": "2013-05-25", "end": "2013-05-25", "title": "Avaliable"}]

However, Full Calendar displays the events on the 12th and 13th of April instead of the 24th and 25th of May. Please help, this makes no sense.

Dates in exactly that format work for me in a project where I'm setting my JSON feed URL directly rather than specifying a function. I take it your activity_id argument is why you're not just doing that?

Changing it to a feed URL solved it, thanks!
Parametrized type compiler bug or proper program?

I'm surprised this program compiles. I would think the strLengthOne declaration/initialization line would be flagged as a type error. Should not T describe different types for the two different usages? One, the type that contains the single member "^...$", and the other, the type that contains the single member "^.$"? Or do I misunderstand what types are being bound to T during usage?

class RegularString<T extends string> {
    constructor(private readonly value: string, regexPatternConstraint: T) {
        if (!new RegExp(regexPatternConstraint).test(value))
            throw new Error("Given value does not conform to regex " + regexPatternConstraint + ". found: " + value)
    }
    toString() {
        return this.value
    }
}

const strLengthThree: RegularString<"^...$"> = new RegularString<"^...$">("ABC", "^...$")
const strLengthOne: RegularString<"^.$"> = strLengthThree // expected not to compile, but does
console.log(strLengthOne.toString().length) // prints 3

Note that

const three = "^...$"; const one: "^.$" = three

does not compile. Better evidence to prove three is not typed as string:

const three: "^...$" = "^...$"; const one: "^.$" = three
Boss favors employees of one nationality

I worked for hotel X, where my boss prefers subordinates of a particular nationality. The persons of the other nationality in the team were never promoted, appreciated, or given a pay hike; he created situations where persons of other nationalities were made to look stupid, and tactfully manipulated information to make persons of the other nationality the culprit, which led to their termination. Their nationality is in the majority. I am trying to think through whether or not this environment is beneficial to my long term career. Can I do anything to interact with this boss effectively or grow my career given what feels like racial bias?

Leave; you're doomed otherwise.

What country are you in? Local laws will most certainly apply here.

I made a fairly comprehensive edit to remove some of the unnecessary tone and focus this more on an actionable question. I think there were two questions in there, not just one: 1) What about my career in such an environment - which is what the question now asks. 2) Is there anything I can do to help my peers or the work situation that I find?

Are you of the favored nationality?

@DavidK - so what if local laws apply? Unless the OP has the time, money and inclination to attempt litigative action, the presence of laws doesn't change anything.

Surprisingly, there are no local laws to cater to such a situation, and it's not worth trying.

@Myridium Because in many locations this is a blatant act of illegal discrimination and it wouldn't take much effort to prove. You wouldn't have to personally sue the company but could instead notify an oversight group. You could get what you want by notifying the boss's boss, or the hotel chain, or a local business bureau, or the Equal Employment Commission, all with no financial cost to yourself.

There is no such thing here in KSA, and they have finally terminated me to fill the position with a person of their nationality.

@JJ I'm sorry to hear that.
Is this hotel an international hotel chain? I doubt you would be able to get your job back, but it may still be worth lodging a complaint with the company.

It's the famous chain Marriott, but here in KSA the local partner who has taken the franchise is the decision maker. The HR rules support employees, but the rules are just on the books; nothing is practiced. My advice to all is that most employee handbooks contain things that are just written, but never followed.

@JJ Yes, most hotels are owned and run as a franchise, but if the parent company decides that the franchise is not meeting the brand standards, they can force the owner to change or risk losing the ability to call themselves a Marriott hotel.

Bro, you're wrong in this regard: the parent company is just concerned about the money. As long as the local company makes a huge profit, they are happy and will not bother even if employees are ill-treated.

Racial bias is a big deal, and I personally would not be comfortable working in a company which tolerates it, irrespective of whether I am an involved party or not. However, the first thing you need to do is make absolutely sure that your suspicion is rightly placed. Do you have sufficient data from the past to say without doubt that this is racial bias? Because if you escalate it and you are wrong about it, it will certainly fire back at you. If you do, I would suggest raising it with the right supervisor, HR, or an ombudsperson if you have one. Ideally a company should be able to protect your identity and make sure there is no retaliation for raising a serious issue like this. But I think you should keep an alternative job option ready, just in case this is not resolved as per your expectations.

Nationality has nothing to do with race.

@ Stephan Bijzitter, firstly, "racial bias" was the term used by the O.P. (looks like it is edited now). Secondly, your one-liner may make you feel very enlightened, but it is wrong. Nationality has a VERY STRONG correlation with race.
While people of various races/countries settle all over the world, their original nationality is still used to identify them with a race (especially people from Asian, African, and Middle Eastern countries). It is even more relevant in the O.P.'s context.

@ Stephan Bijzitter, also, down-voting because of the race vs nationality distinction is just ridiculous.

Depending on your country: if you want to make it stop, you can take evidence of discrimination to the authorities, and the person or people being discriminated against can bring it up with the authorities for a lawsuit (or lawsuits). This is a long and painful road, but if you have the backing of a portion of the staff, past or present, and you have hard evidence, you can get the employer penalized (sometimes severely) and compensation for the discriminated employees. You have to prove it, though; not just word against word. If you don't want to fight, or you're in a country that doesn't have any laws against discrimination, then leave and find anywhere better.

P.S. Please note that either course will likely mean you no longer work there, unless the penalty is to take the business away from the person and give it to someone else who supports the discriminated party's rights.

Edit based on question edit: If you don't agree with the underlying stance of your employer, it's hard to stay working for them without becoming disgruntled, unless that particular disagreeable belief is outside the day-to-day work activities. In this case I personally would have an issue benefiting from someone else's mistreatment, but that is the question you have to ask yourself if you're not of the discriminated race... if you are of the discriminated race, I wouldn't expect any different treatment for yourself.
You're right; the authorities too are of their nationality. The boss was terminated by the GM a year back. The GM resigned 2 months later, leading to the return of the old boss, thanks to people of his nationality. From then onwards he has focused on firing employees of other nationalities for lame reasons. I heard the rumour that I too am on his hitlist. It's a war between the nationalities of Pakistan, India, and Egypt, here in KSA.

Any bias is really bad, not just race or nationality. Bosses should not do this, but they do. Your industry is the service industry, where replacements are easy to find. Remember that in a deregulated labor market it comes down to supply and demand. Whether you leave causing waves or not, you will be replaced within hours by your boss's favorite group. If you just slip out the door, so to speak, then you will always be able to get a job. If you make waves, you will find it hard to get another job again. I made some waves in 1992. The prospective payout these days, balanced against probable employment leprosy, makes it not worth it to kick up a fuss. The problem here is that you have no economic power. If workers are easy to find and replace, then any boss anywhere can have any hiring bias without any economic downside. You must get a job that not too many others can do; then your skills will be respected. When things are more specialised, the boss cannot favour his or her pet group, because the place would be empty. Do some training and get a job that not too many others can do, and then you will be unlikely to encounter such problems again.

This post is rather hard to read (wall of text). Would you mind [edit]ing it into a better shape?
In the real world the OP has the following choices.

If they are of the boss's preferred nationality (which I don't suspect is the case, given the sheer existence of this question), they might:
a) enjoy the privileged treatment and perhaps feel bad (well, they asked the question, so there is something going on); or
b) try to fight the situation and have the boss and the majority of the preferred nationality against themselves; also, gaining the trust of the minority is not guaranteed (the OP is from the hostile camp).

If they are in the minority (why else would they ask the question), the choices are:
c) leave the job for new challenges in life (I believe it's the fastest and healthiest solution to this issue); or
d) try to fight the situation and have the boss and the majority of the preferred nationality against themselves; also, gaining the trust of the minority is not guaranteed (they may want to keep the job and stay out of trouble).

Should the OP choose option c), before leaving they might contact an anti-discrimination office and ask whether they could contribute anyhow to solving the situation. In any case, I wish the OP all the best.

@dan1111, thank you for your explanation. I hope you've read my edit. Again, I strongly disagree with discrimination based on factors irrelevant to the job. But in this case theory and practice don't go along, and the final advice depends on how much the OP has to lose.
Ethernet Connected But No Internet I am trying this on Intel Galileo Gen 2 which runs Yocto Linux and I am unable to connect to Internet using Ethernet. I have WiFi card but that doesn't connect as well. I have tried following commands and their outputs are as below ifconfig -a Output: enp0s20f6 Link encap:Ethernet HWaddr 98:4F:EE:01:9E:CA inet addr:<IP_ADDRESS> Bcast:<IP_ADDRESS> Mask:<IP_ADDRESS> inet6 addr: fe80::9a4f:eeff:fe01:9eca/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:2731 errors:0 dropped:4 overruns:0 frame:0 TX packets:599 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:276624 (270.1 KiB) TX bytes:84524 (82.5 KiB) Interrupt:51 Base address:0x4000 lo Link encap:Local Loopback inet addr:<IP_ADDRESS> Mask:<IP_ADDRESS> inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:65536 Metric:1 RX packets:5730 errors:0 dropped:0 overruns:0 frame:0 TX packets:5730 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:503288 (491.4 KiB) TX bytes:503288 (491.4 KiB) wlp1s0 Link encap:Ethernet HWaddr 44:85:00:01:8A:D3 UP BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) route -n Output: Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface <IP_ADDRESS> <IP_ADDRESS> <IP_ADDRESS> UG 0 0 0 enp0s20f6 <IP_ADDRESS> <IP_ADDRESS> <IP_ADDRESS> U 0 0 0 enp0s20f6 <IP_ADDRESS> <IP_ADDRESS> <IP_ADDRESS> UH 0 0 0 enp0s20f6 The interfaces look like: vi /etc/network/interfaces auto enp0s20f6 iface enp0s20f6 inet static address <IP_ADDRESS> netmask <IP_ADDRESS> auto wlan0 iface wlan0 inet dhcp wireless_mode managed wireless_essid any wpa-driver wext wpa-conf /etc/wpa_supplicant.conf auto lo iface lo inet loopback On trying connmanctl services I get, *AR Wired ethernet_984fee019eca_cable *A wifinw 
wifi_448500018ad3_646c696e6b2d32333238_managed_psk So ping www.google.com fails with bad address. I tried ping <IP_ADDRESS> and it gives a response. As per @frarugi87 it's a DNS issue, but I am not sure how to fix this. The /etc/resolv.conf has the following nameservers # Generated by Connection Manager nameserver <IP_ADDRESS> nameserver ::1 Resolved this by adding OpenDNS servers to my /etc/resolv.conf file. vi /etc/resolv.conf and the file looks like nameserver <IP_ADDRESS> nameserver <IP_ADDRESS> After this, ping to google.com works.
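For reference, the fix boils down to a resolv.conf fragment like this (the original post redacts its addresses; the OpenDNS resolvers shown below are the commonly published ones, so substitute whatever resolver you trust):

```
# /etc/resolv.conf
nameserver 208.67.222.222
nameserver 208.67.220.220
```

Note that on systems managed by connman or similar tools this file may be regenerated, so the change might not survive a reboot.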
How to write CSV output to stdout? I know I can write a CSV file with something like: with open('some.csv', 'w', newline='') as f: How would I instead write that output to stdout? sys.stdout is a file object corresponding to the program's standard output. You can use its write() method. Note that it's probably not necessary to use the with statement, because stdout does not have to be opened or closed. So, if you need to create a csv.writer object, you can just say: import sys spamwriter = csv.writer(sys.stdout) On Windows, this results in an extra carriage return character after each row. You can give the csv.writer constructor a lineterminator option: writer = csv.writer(sys.stdout, lineterminator=os.linesep) lineterminator=os.linesep makes no sense, as on Windows this is a no-op. You probably meant lineterminator='\n', which is also NOT an obviously correct solution (see comments on this post). Reconfiguring sys.stdout to disable universal newlines handling is a possible alternative. To be clear, there seems to be no way to make the lineterminator option (or the property of a custom Dialect subclass) cause the writer to write only a linefeed and not CR+LF when running on Windows. That results in CR CR LF when writing through sys.stdout unless Python was run with universal newlines handling disabled.
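Putting the answer's pieces together, a minimal sketch (the row data is made up for illustration) that writes CSV to standard output and passes lineterminator='\n' to sidestep the extra carriage return discussed above:

```python
import csv
import sys

def write_csv(stream, rows):
    # csv.writer's default line terminator is "\r\n"; passing "\n" avoids
    # doubled carriage returns when the stream also translates newlines.
    writer = csv.writer(stream, lineterminator="\n")
    writer.writerows(rows)

write_csv(sys.stdout, [["name", "qty"], ["spam", 3]])
```

Because write_csv takes any writable text stream, the same function works against an io.StringIO for testing or a real file opened with newline=''.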
display iframe youtube with a store link in phpmysql I wonder why my code didn't display the iframe. I think it's an src problem: $requete = $database->prepare('SELECT * FROM envoie ORDER BY id'); $requete->execute(); $envoies = $requete->fetchAll(PDO::FETCH_ASSOC); foreach($envoies as $envoie) : ?> <h3 class="comm"><?php echo $envoie['upload_title']; ?></h3> <h3 class="comm"><?php echo $envoie['upload_description']; ?></h3> <iframe width="560" height="315" src="<?php $envoie['upload_lien'] ?>" frameborder="0" allowfullscreen></iframe> <h3 class="comm"><?php echo $envoie['upload_temps']; ?></h3> <br><HR><br> <?php endforeach; ?> You're not echoing the value: the iframe's src should be <?php echo $envoie['upload_lien']; ?>. @LawrenceCherone I don't know how to reply to your comment, but thanks for the answer; it now works!
Nameserver always at the top in url bar in nginx (with no redirect) My app has only two pages presented to the user, both in mobile or desktop. Firstly a login page and after the app page itself. When accessing www.myapp.com the user is firstly indexed towards the login page and if he is logged in he is then php redirected to the app page. Here is my server schema: root index.php (login page) mobile/mobile.php profile/profile.php What I'd like is to avoid the user having www.myapp.com/mobile/mobile.php,www.myapp.com/profile/profile.php but instead www.myapp.com and www.myapp.com/desktop or www.myapp.com/mobile so he can also switch between layouts. Unfortunately this is throwing a 404 error in my servers: nginx default file: location ~ /mobile/mobile.php$ { rewrite $host/mobile break; } location ~ /index.php$ { rewrite $host break; } location ~ profile/profile.php$ { rewrite $host/desktop break; } Should it be something like this or should it be another complete way? I'd like to do this in my server, not in the user device... Thank you very much for your help... I don't think this is on-topic for SO, but in any case you did it exactly backwards. You want to rewrite the user-visible URL to the backend URL, not the other way around. Then when you send the user to /mobile, nginx will translate it to /mobile/mobile.php and PHP can execute it. @hobbs could you give a coded example please? 
I understand your logic but not the syntax The following was tested on my development server: server { listen 80; root /usr/local/www/test; index index.php; location = /mobile { rewrite ^ /mobile/; } location = /desktop { rewrite ^ /profile/; } location / { try_files $uri $uri/ =404; } location /mobile { index mobile.php; try_files $uri $uri/ =404; } location /profile { index profile.php; try_files $uri $uri/ =404; } location ~ \.php$ { include fastcgi_params; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_pass php; } } It uses index and try_files to find your scripts and then internally redirect to the fastcgi handler. I have upstream php { ... } defined elsewhere. You might employ rewrite rules and the internal directive to enforce the scheme of no php script on the URL bar. The mobile rewrite is simply to maintain consistency with the desktop rewrite, so that the client doesn't see a trailing /. This is just one example of a multitude of solutions. Typically, there are ancillary files (css, js, images) with their URLs that also need to be accommodated. This is not working at all... did you understand my original request? This only throws a 404... Nevermind, it did! Thank you so much! Only one problem... it works with the mobile directive... but fails with desktop which I have like this (check my question for the server folder structure): location /desktop { index profile/profile.php try_files $uri $uri/ =404; } This throws a 404 error. How exactly could I change that? @Fane Oops! Sorry. I have adjusted the example.
keytool error command Hi, I want to ask about an error I get when I import a certificate using keytool. This is the error: C:\Program Files (x86)\Java\jre6\lib\security\imig.cer -keystore C:\Program Files (x86)\Java\jre6\lib\security\cacerts -storepass changeit keytool error: java.lang.RuntimeException: Usage error, Files is not a legal command Please help with my keytool error. Thanks in advance. The error says that keytool is trying to interpret Files as a command. That's because the -keystore argument is truncated at the space, to just C:\Program. To avoid the truncation, surround the full path with double quotes: C:\Program Files (x86)\Java\jre6\lib\security\imig.cer -keystore "C:\Program Files (x86)\Java\jre6\lib\security\cacerts" -storepass changeit If it results in another error, post a separate question :) Try to quote the path because of the spaces.
How reliably can you assess the condition of a reverse-osmosis desalinator? Handheld desalinators operating on the principle of reverse osmosis are available for emergency use in lifeboats. Sometimes they appear for sale unused in their original packaging at a very steep discount because they have been replaced at a scheduled interval by their owners (merchant marines, navies, etc). Is there any reliable means to assess the condition of the device and determine if it is reliable enough either for convenience (reducing the quantity of fresh water carried) or for possible life-sustaining use? Checking the condition of mechanical parts (pivots, levers, body) and of any seals (no cracks) seems straightforward, but what about the reverse osmosis membrane itself? (Inspired by Portable desalination hand pump filter life?) https://www.fda.gov/ICECI/Inspections/InspectionGuides/InspectionTechnicalGuides/ucm072913.htm (General info about RO filters, not application-specific however) You can test it at home. Make your own salt water by adding 35 grams of salt to a liter of water. Desalinate the water into a new container, and measure the salinity with a hydrometer. One of many Google hits, as an example. You should also be able to taste the difference. Be aware there could still be a flaw in the osmotic barrier that allows an occasional bacterium, virus, etc. through, as well as trace amounts of salt. Even if you can't taste or measure the salt with a hydrometer there may still be some. You may or may not want to use a secondary purification process that would not normally be required with reverse osmosis. Related: How much sea water can I safely drink? After use, a certain amount of maintenance is required.
@James Jenkins Yeah my concern is that the failure mode might be like an aging bicycle tire: after years in the garage it seems to take air just fine but is in fact brittle and prone to a sudden sidewall blowout compared to a new tire. With a bike tire I just flex it in my hands and look for cracking; I'm wondering what the RO-membrane equivalent is. @mmcc Bike tires and RO membranes are not good one day and bad the next. On some date, the likelihood of catastrophic failure increases to a point where the risks outweigh the benefits. Alternately, like some software, hardware can have a pre-planned expiration date purely for business reasons; they need to sell a new one every # years to stay in business. Many recommend sending it for inspection yearly. katadyn Factor that into the price and decide if it is still a good deal. IMPORTANT: For your safety, we require that an inspection be completed once a year. Our regional service centers are trained to perform all necessary quality checks or you can return the unit to Katadyn for inspection. Yeah, my question is basically what are they inspecting and is it realistic to learn to perform that inspection yourself. @mmcc You really want to do your own safety check on something that critical? Maybe. I have less knowledge than the factory technician but more incentive to do the work correctly and the ability to do it frequently. So like car repair it depends on how much training is required to do it right. Which is what I do not know. Maybe someone reading this will know.
Angular : 500 error when using forkJoin to call api services I implement a call in ngOnInit to two API routes to get some data. I'm using Angular 7 and implement forkJoin to call the API like this: ngOnInit() { debugger this.route.params.pipe( switchMap(params => forkJoin([ this.http.get('/api/surveys/' + params.id), this.http.get('/api/surveys/getStatistics/' + params.id) ])) ).subscribe(result => { this.survey = result[0]; this.statistics = result[1]; this.updateChart(result[1]); }, error => this.handleError(error) ); } private handleError(error) { debugger if (error.status === 400) { this.showError(error.error.error, 'Erreur!'); if (error.error.error === 'Le sondage demandé n\'existe plus!') { this.router.navigateByUrl('/sondage'); } } } private updateChart(statistics) { debugger statistics.options.forEach(option => { this.pieData.push(option.number); option.color = this.d3Colors[this.colorCounter]; this.colorCounter = this.colorCounter + 1; }); } After the first debugger the code does not run the call request to the API and passes directly to the handleError function, generating a 500 server error. What's the actual question? @AndrewAllen It's from here: https://stackoverflow.com/a/60526695/5367916. OP is implying that by using forkJoin a 500 error is being generated. @KurtHamilton implying != a question Exactly. That's what I was implying... A 500 is generally an error thrown by the API server itself. If your design permits, you should separate both API calls. This way the failure of the first call will not impact the second call. From the forkJoin specs: "If any input observable errors at some point, forkJoin will error as well and all other observables will be immediately unsubscribed." So the failure of one call makes the other API call's success irrelevant, as the observable is already unsubscribed. Saying they "should" separate the calls is suspect. Why would OP be calling unneeded data into the view? If the view requires all data, the entire view should fail and not render a partial view.
If the views can exist separately, then they should be separate views calling the data they require. But general best practice is to call all of your view data in one stream and have it fail or succeed as a unit Also, if individual error handling is required, then each individual request can have its own catchError in its own pipe.
Get records of a table which exists in another table distinct field in MySQL I have two tables:

land_information

id | title_number | owner  | case_no
 1 | 001          | John   | 201
 2 | 002          | Peter  | 202
 3 | 002          | Andrew | 203
 4 | 003          | Mores  | 204

sheets

id | title_number
 1 | 001
 2 | 001
 3 | 002
 4 | NULL
 5 | Unavailable

Now, I need to check if the title_number of the sheets table exists in land_information. How should I make a query which will give the expected result below?

Expected Result

id | title_number | owner         | case_no
 1 | 001          | John          | 201
 3 | 002          | Peter, Andrew | 202, 203

Here is my initial SQL: SELECT id, li.title_number, owner FROM sheets AS s INNER JOIN land_information AS li ON s.title_number = li.title_number; What's the logic for getting Peter in the result for title number = 1? Given that 1 only has John in land_information. And what is the relevance of case_no to the question? Sorry, I made a typo. I edited my expected results. Well, the query looks like this: SELECT MIN(sheets.id) as id, sheets.title_number as title_number, GROUP_CONCAT(DISTINCT land_information.owner ORDER BY land_information.id SEPARATOR ',') AS owner, GROUP_CONCAT(DISTINCT land_information.case_no ORDER BY land_information.id SEPARATOR ',') AS case_no FROM sheets INNER JOIN land_information ON sheets.title_number = land_information.title_number GROUP BY (sheets.title_number); I'll explain one row at a time: MIN(sheets.id) as id, selects the minimum value from the sheets.id column inside the group (as it looks in your expected result) sheets.title_number as title_number, self-explanatory, it just selects the title number GROUP_CONCAT(DISTINCT land_information.owner ORDER BY land_information.id SEPARATOR ',') AS owner, groups (enumerates) all land_information.owner values, each added just one time (the DISTINCT keyword), separated by a defined separator GROUP_CONCAT(DISTINCT land_information.case_no ORDER BY land_information.id SEPARATOR ',') AS case_no same as above, with the case_no column FROM sheets INNER JOIN land_information ON
sheets.title_number = land_information.title_number here we are inner joining both tables, based on the title_number column GROUP BY (sheets.title_number) all rows having the same title number are grouped together, and the corresponding functions defined above are applied to the group. The result I've got looks like this

id|title_number|owner       |case_no|
--+------------+------------+-------+
 1|001         |John        |201    |
 3|002         |Peter,Andrew|202,203|
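The same join-and-group logic can be checked quickly with Python's built-in sqlite3 module. This is a sketch, not MySQL: SQLite's group_concat supports DISTINCT only with the default ',' separator and has no ORDER BY inside the aggregate, so the concatenation order is not guaranteed.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE land_information (id INTEGER, title_number TEXT, owner TEXT, case_no INTEGER);
    CREATE TABLE sheets (id INTEGER, title_number TEXT);
    INSERT INTO land_information VALUES
        (1, '001', 'John', 201), (2, '002', 'Peter', 202),
        (3, '002', 'Andrew', 203), (4, '003', 'Mores', 204);
    INSERT INTO sheets VALUES (1, '001'), (2, '001'), (3, '002'), (4, NULL), (5, 'Unavailable');
""")
# Same shape as the MySQL answer: join, group by title_number,
# concatenate distinct owners and case numbers per group.
rows = conn.execute("""
    SELECT MIN(s.id) AS id, s.title_number,
           group_concat(DISTINCT li.owner) AS owner,
           group_concat(DISTINCT li.case_no) AS case_no
    FROM sheets AS s
    INNER JOIN land_information AS li ON s.title_number = li.title_number
    GROUP BY s.title_number
    ORDER BY s.title_number
""").fetchall()
```

The NULL and 'Unavailable' sheets rows drop out of the inner join, matching the expected result.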
Initialization of fields of class Sorry, if this question is trivial... There are two ways of initializing fields. For example: The first way is: public class A { int a; A(){ a = 5; } } The second way is: public class A { int a = 5; A(){ } } Which way is better? Why or why not? Whether or not you initialize it, a primitive int is always 0 by default. As for the declaration: if you are sticking with a constant default value, initializing during the declaration is preferred. If you want the value to be passed while initializing the object, then use the constructor. Instance variables are initialized by default; it's zero for int. @Braj what if the a = 5 at the beginning? My mistake, I looked at the answer first. Both ways are correct. Prefer the second way if you don't want to repeat the code, as shown in the code below: public class A{ int a; A(){ a=5; } A(String s){ a=5; } A(Long l){ a=5; } } public class A{ int a = 5; A(){ } A(String s){ } A(Long l){ } } In the second case you don't have to initialize it in all the overloaded constructors. Now it depends on your choice. I like this reason: "Prefer the second way if you don't want to repeat the code as shown in the code below" Instead of A(String a), I think it should be A(int a) { this.a = a; }. Please check. It really depends on what you want. If you only have one constructor, then it doesn't matter, because either way the value will start at 0. If you have more than one constructor, it still doesn't really matter, but it might be better to do it the following way: int a; A() { a = 0; } A(int newA) { a = newA; } When I personally develop, I will assign the value at the top in the following cases only: The variable I am assigning is a constant. The variable has some sort of default value, that can and usually will change.
C++ map pointer variable sorting Hello, I want to know how to sort a map whose key is a pointer type. There is a getName function which returns char*, so I tried to compare with strcmp, but there are some errors in the return part. struct Compare_P { inline bool operator()(Person const& a, Person const& b) { return (strcmp(a.getName(), b.getName())) < 0; } }; map<Person*, House*, Compare_P> A_List; Compare_P needs an operator() taking a pair of Person* pointers, instead of (or in addition to) one taking a pair of references. The keys in your map are of type Person*; you need a comparator capable of comparing those keys, not some other type even though related. Your map's key is Person*, but Compare_P::operator() takes Person const&. You can fix that by either defining map<Person, House, Compare_P> A_List; or by a correct Compare_P: struct Compare_P { bool operator()(Person const* a, Person const* b) { return (strcmp(a->getName(), b->getName())) < 0; } };
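A compilable sketch of the corrected comparator. Person and House here are stand-in types, since the question does not show the real classes:

```cpp
#include <cassert>
#include <cstring>
#include <map>

// Stand-in types: the question's real Person/House classes are not shown.
struct Person {
    const char* name;
    const char* getName() const { return name; }
};
struct House {};

// Comparator taking Person* keys, so the map orders by name, not by address.
struct Compare_P {
    bool operator()(Person const* a, Person const* b) const {
        return std::strcmp(a->getName(), b->getName()) < 0;
    }
};

using PersonHouseMap = std::map<Person*, House*, Compare_P>;
```

With this comparator the map iterates in name order regardless of where the Person objects happen to live in memory.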
What is this error installing the Anaconda debugger I run this script from the Anaconda terminal: conda create -n jupyterlab-debugger -c conda-forge jupyterlab=3 ipykernel>=6 xeus-python and I get this Preparing transaction: done Verifying transaction: done Executing transaction: | WARNING conda.gateways.disk.delete:unlink_or_rename_to_trash(143): Could not remove or rename C:\Users\nicomp\anaconda3\envs\jupyterlab-debugger\Lib\site-packages\testpath\cli-32.exe. Please remove this file manually (you may need to reboot to free file handles) done ERROR conda.core.link:_execute(699): An error occurred while installing package 'conda-forge::pywin32-302-py310he2412df_2'. Rolling back transaction: done [Errno 2] No such file or directory: 'C:\\Users\\nicomp\\anaconda3\\envs\\jupyterlab-debugger\\Library\\bin\\pythoncom310.dll' () So it says I need to delete C:\Users\nicomp\anaconda3\envs\jupyterlab-debugger\Lib\site-packages\testpath\cli-32.exe but that file isn't on my computer in the first place.
mule 3.9.5 Cannot transform xml dom to xml After migrating the Mule server from 3.9.1 to 3.9.5, I encountered a problem in the transformation of the payload to XML. Here is my code: <flow name="SetVariablesFromPBGB"> <foreach collection="#[xpath3('/*:Envelope/*:Header',payload,'NODESET')]" doc:name="For Each"> <mulexml:dom-to-xml-transformer doc:name="DOM to XML"/> <mulexml:xslt-transformer xsl-file="removeNameSpace.xslt" maxIdleTransformers="2" maxActiveTransformers="5" doc:name="XSLT"/> <set-variable variableName="soapHeader" value="#[System.getProperty('line.separator')]#[message.payloadAs(java.lang.String)]"/> </foreach> </flow> The dom-to-xml-transformer doesn't transform the payload to XML: after this line the payload is still [soapenv:Header: null] instead of: <?xml version="1.0" encoding="UTF-8"?> <soapenv:Header> </soapenv:Header> What is the input payload and its type for the flow? <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"> <soapenv:Header></soapenv:Header> <soapenv:Body> <AAA> <BBB></BBB> </AAA> </soapenv:Body> </soapenv:Envelope> I'll assume the type is a string, right? Yes, here I just set an empty header. What is the expected result of this flow? I need to remove the namespace from the header; for that I retrieve the header from the payload, transform it to XML and set the new header without the namespace. I submitted an answer. Please let me know if it is helpful. Please accept the answer if useful or let me know if there is any issue with it. My understanding is that you want to get the soapEnv:Header element with the prefix soapEnv: but without the namespace declaration for it.
I'm not sure that is valid usage of namespaces according to the XML Namespaces specification: The namespace prefix, unless it is xml or xmlns, MUST have been declared in a namespace declaration attribute in either the start-tag of the element where the prefix is used or in an ancestor element (i.e., an element in whose content the prefixed markup occurs). If having the namespace declared is acceptable, this DataWeave script could replace the flow's body: <dw:transform-message doc:name="Transform Message"> <dw:input-payload mimeType="application/xml" /> <dw:set-payload><![CDATA[%dw 1.0 %output application/xml --- payload.Envelope.*Header ]]> </dw:set-payload> </dw:transform-message> Input: <soapenv:Envelope xmlns:soapenv="schemas.xmlsoap.org/soap/envelope"> <soapenv:Header><t>a</t></soapenv:Header> <soapenv:Body> <import> <data><![CDATA[ <AAA> <BBB></BBB> </AAA>]]></data> </import> </soapenv:Body> </soapenv:Envelope> Output: <?xml version='1.0' encoding='UTF-8'?> <soapenv:Header xmlns:soapenv="schemas.xmlsoap.org/soap/envelope"> <t>a</t> </soapenv:Header> If you want to completely remove the XML namespaces you can use the method adapted from this blog, using a recursive function. This completely removes namespace declarations and prefixes. %dw 1.0 %output application/xml --- Header: payload.Envelope.Header Output (for the same input above): <?xml version='1.0' encoding='UTF-8'?> <Header> <t>a</t> </Header> Those are the valid XML outputs that you can produce. NOT RECOMMENDED: If you absolutely must generate invalid XML then you can convert the XML to a string and perform a string replacement or regex operation to remove the undesired parts. Be aware that performing string manipulations on XML is a bad practice. I strongly advise against implementing this and against generating invalid XML. Example: %dw 1.0 %output application/java --- // warning: transforming the XML with string operations is not recommended! 
write(payload.Envelope.*Header,"application/xml") This returns a Java string that can be manipulated inside DataWeave, Java code, MEL scripts or Groovy scripts. Please, I have a question: to use DataWeave, do I have to have an EE version? It is possible that in Mule 3 DataWeave is an Enterprise-Edition-only feature. However, the Mule versions you are using (3.9.1, 3.9.5) are EE-only releases, so it should not be a problem.
Autodesk forge: Listing files and getting recent I've been looking through the Autodesk Forge API a bit but I have not found a way to do the following things: Query/list files in a whole Fusion 360 team hub, filtering on e. g. filename. Get a list of the N most recently accessed files (created, opened, saved) Is this possible? So far I've seen that is possible through recursion, but I want to avoid making a lot of API calls. Query/list files in a whole Fusion 360 team hub, filtering on e. g. filename. This can be done using the search endpoint. It traverses all subfolders recursively, and can filter based on different criteria (see https://forge.autodesk.com/en/docs/data/v2/developers_guide/filtering/). I believe it can be used at a project level. Get a list of the N most recently accessed files (created, opened, saved) I'm afraid this kind of sorting is not available using these endpoints.
Fixing id field in ActiveRecord While troubleshooting another problem, I added self.primary_key = id to my model. This broke ActiveRecord's management of the id field. Is there any way to revert back? Code was removed and app was restarted. The problem is the schema changed. An explicit id field was added and create_table "sales_orders", force: true do |t| was changed to create_table "sales_orders", id: false, force: true do |t| So the main issue now is I've lost auto-incrementing. please explain what is broken, usually removing the code you added should fix the problem. Revert that change, delete all corrupted records and you should be fine I added some information to the question It seems like there must be another solution, but what I ended up doing was dropping the table and recreating it combining the previous migrations that affected it.
How to copy all the artifacts from source to destination in Multi Stage Docker File I have two stages as follows: Stage # 1 FROM mongo:latest AS mongodb ENV MONGO_INITDB_ROOT_USERNAME root ENV MONGO_INITDB_ROOT_PASSWORD root ENV MONGO_INITDB_DATABASE admin ADD mongo-init.js /docker-entrypoint-initdb.d/ Stage # 2 FROM ubuntu:18.04 RUN apt-get update \ && apt-get install -y wget \ && rm -rf /var/lib/apt/lists/* RUN \ apt-get update && \ apt-get install -y supervisor nginx &&\ rm -rf /var/lib/apt/lists/* COPY supervisord.conf /etc/supervisor/ COPY --from=mongodb #How can I copy all artifacts from source to destination EXPOSE 80 CMD ["supervisord", "-c", "/etc/supervisor/supervisord.conf"] Both of these stages are working perfectly separately. How can I copy all the artifacts from the first stage to the second stage? I deleted my answer because I was completely off topic. I don't know how to answer your question. If the requirements ask you to run several processes in the same container, perhaps the solution should not be based on Docker. I understand. Thank you :) @NeoAnderson you are always quick. I'd almost always expect this to run in two separate containers (and let Docker play the role you're trying to use supervisord for here). You can't merge two different images like this. @DavidMaze I get that I can't merge images like this. But if I use the above structure, with two FROM statements, then mongodb does not connect because it's not in the last stage. Right, you need to separately docker run mongo or use a tool like Docker Compose that can run multiple containers together.
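Following the comments' suggestion, a hypothetical docker-compose.yml sketch (the service names, ports, and file paths are assumptions, not from the question) that runs the two images as separate cooperating containers instead of merging the stages:

```yaml
version: "3.8"
services:
  mongodb:
    image: mongo:latest
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: root
      MONGO_INITDB_DATABASE: admin
    volumes:
      - ./mongo-init.js:/docker-entrypoint-initdb.d/mongo-init.js:ro
  app:
    build: .            # the ubuntu/nginx/supervisord Dockerfile
    ports:
      - "80:80"
    depends_on:
      - mongodb         # reachable from the app container as host "mongodb"
```

The app then connects to MongoDB over the Compose network (hostname "mongodb") rather than expecting the database inside its own container.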
Tabbar not able to access Navigationcontroller I have a navigationController app. I push a tabbar onto the view. Tabs are working, the title is changed, perfect. Now one of my tabs has a list and I try to link out to a child page within the tabbar: NextViewController *nextController = [[NextViewController alloc] initWithNibName:@"ProfileDetailController" bundle:nil]; [self.navigationController pushViewController:nextController animated:YES]; Nothing happens. Of course this works: self.view = nextController.view; I want to be able to push to this subpage within my tabbar AND change the navigation bar's buttons. Is this possible? It sounds like you're pushing a UITabBarController onto a UINavigationController? From Apple's documentation, you can't push a tab bar controller onto a navigation controller. What you probably want to do is the opposite: have a tab bar controller with UINavigationControllers as the tab items. This is similar to the interface in, say, the iPod app or Phone app. You are right! I found this and it explained it all. http://www.youtube.com/watch?v=LBnPfAtswgw I agree with Alex - a TabBarController inside a navigation controller doesn't seem like a nice UI pattern. Anyways, to answer your question: Have you tried to access the navigation controller via the tab bar controller? self.tabBarController.navigationController I'm not sure if this works, but you could give it a try. Didn't work for me. navigationController in tabBarController is nil. I think I found an easy solution. In your class where you want to push a view, declare a local UINavigationController as a property: @interface userMenu : UIViewController { UINavigationController *navigationController; } @property (nonatomic, retain) UINavigationController *navigationController; Remember to synthesize it.
In your class for the tabBarController: NSArray *viewControllersArray = [self.tabBarController viewControllers]; userMenu *childUserMenu = (userMenu*) [viewControllersArray objectAtIndex:0]; childUserMenu.navigationController = self.navigationController; After that you can do [self.navigationController pushViewController:nextController animated:YES];
changing image source by dealing with state in Reactjs I am trying to make an Image slider like e-commerce product in Reactjs. In javascript it's pretty simple, we only need to change the image source, but how to do it in React? because in React we have to deal with state. First I mapped out all four images(state) as side images(className ='sideImages') then hardcoded the first image of state as display image. Now I want when I click any of the side images it becomes display image (className='display') I hope I am clear. Here is my component import React, { Component } from 'react'; class App extends Component { state = { images: [ { id: 1, img: "https://images.pexels.com/photos/1" }, { id: 2, img: "https://images.pexels.com/photos/2" }, { id: 3, img: "https://images.pexels.com/photos/3" }, { id: 4, img: "https://images.pexels.com/photos/4" } ] }; onClickHandler = () => { this.setState({ // this.state.images[0]['img']: 'this.src' }) } render() { const sideImages = this.state.images.map(image => ( <img key={image.id} src={image.img} alt="" /> )); return ( <div className="imageSlider"> <div onClick={this.onClickHandler} className="sideImages"> {sideImages}</div> <div className='display'> <img src={this.state.images[0]['img']} alt="" /> </div> </div> ) } } export default App; your setState is replacing an object, which contains an array of objects with a string "this.src". You have to replace it with the same thing. Your setState should keep the same data structure, just replace it with a new object of the same data structure but different values. You can take another/better approach of doing this. Keep 2 variables in your state i.e. image array [don't mutate it] and selectedImageIndex to keep track of currently selected/clicked thubnail. 
Whenever you click on any thumbnail image, just change the selectedImageIndex and the target image src can point to <img src={this.state.images[this.state.selectedImageIndex]["img"]} alt="" /> Here is the working example: https://codepen.io/raviroshan/pen/QVraKo
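The state shape the answer recommends can be sketched as plain JavaScript, independent of React (the names here are illustrative, not taken from the codepen): the images array is never mutated, and a click only moves selectedImageIndex.

```javascript
// Framework-free sketch of the answer's approach: the image list is
// fixed; selecting a thumbnail only changes selectedImageIndex.
function createSlider(images) {
  return {
    images,
    selectedImageIndex: 0,
    select(index) {
      // Ignore out-of-range clicks instead of corrupting state.
      if (index >= 0 && index < this.images.length) {
        this.selectedImageIndex = index;
      }
    },
    current() {
      return this.images[this.selectedImageIndex].img;
    },
  };
}
```

In the React component, select(index) would become a setState call in the click handler, and current() is what feeds the display image's src.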
std::set of boost::weak_ptr<T> - Getting const_iterator to const T? I have a class containing an std::set of boost::weak_ptr<T>. I have two functions begin() and end() that return an iterator to the container. However, I don't want clients to be able to modify T. Simply returning a const_iterator won't work, because the T pointed to by the boost::weak_ptr will be editable. What I want to do is return a const_iterator to std::set<boost::weak_ptr<T const> >. Casting from std::set<boost::weak_ptr<T> >::const_iterator does not work. Is there any way to get the behaviour I want? These two statements of yours seem contradictory: 1) I don't want clients to be able to modify T and 2) because the T pointed to by the boost::weak_ptr will be editable. What do they mean? @Nawaz What I mean is: returning a const_iterator to std::set<boost::weak_ptr<T> > makes it so that the client can not modify the weak_ptr. He can, however, still get a shared_ptr<T> from it and then modify T at will. Which is exactly what I don't want happening. You can write a transform iterator to convert the weak_ptr<T> to a weak_ptr<const T>.
Since you're already using Boost, you can use boost::transform_iterator: #include <boost/iterator/transform_iterator.hpp> #include <boost/shared_ptr.hpp> #include <boost/weak_ptr.hpp> #include <set> // Functor to transform a weak_ptr<T> to a weak_ptr<const T> template <typename T> struct make_weak_ptr_const : std::unary_function<boost::weak_ptr<T>, boost::weak_ptr<const T> > { boost::weak_ptr<const T> operator()(const boost::weak_ptr<T>& p) const { return p; } }; struct S { }; // Container demonstrating use of make_weak_ptr_const: struct my_awesome_container { typedef std::set<boost::weak_ptr<S> > BaseSet; typedef BaseSet::const_iterator BaseIterator; typedef boost::transform_iterator< make_weak_ptr_const<S>, BaseIterator > iterator; iterator begin() const { return iterator(data.begin()); } iterator end() const { return iterator(data.end()); } std::set<boost::weak_ptr<S> > data; }; If you don't want to use boost::transform_iterator, it is a straightforward task to write your own. I showed how to do this in an answer to another question. Thank you, I had no clue such a function existed. This is perfect!
common-pile/stackexchange_filtered
How to use jstree v3 plugin with reorder only I am using jsTree v3.0.8 (http://www.jstree.com/). How do I allow reordering only when using the dnd plugin, like in v1 (http://johntang.github.io/JsTree/_docs/dnd.html#demo2)? Use the check_callback option like this:

"check_callback" : function (op, node, par, pos, more) {
    if(more && more.dnd) {
        return more.pos !== "i" && par.id == node.parent;
    }
    return true;
},

Here is a demo: http://jsfiddle.net/DGAF4/509/ Works wonderfully! +1. Many people are still getting hit on the CRRM example... sadly...
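The rule the check_callback enforces can be stated language-neutrally. A small Python sketch of the same predicate (the function and argument names are mine, loosely mirroring the arguments jsTree passes; "i" is jsTree's code for dropping inside a node):

```python
def check_callback(is_dnd, pos, target_parent_id, node_parent_id):
    """Reorder-only drag-and-drop rule: during dnd, forbid dropping
    'inside' ('i') another node and forbid changing parents; every
    non-dnd operation stays allowed."""
    if is_dnd:
        return pos != "i" and target_parent_id == node_parent_id
    return True


# Reordering under the same parent is allowed:
same_parent_ok = check_callback(True, "a", "p1", "p1")
# Dropping inside a node, or under a different parent, is rejected:
inside_rejected = check_callback(True, "i", "p1", "p1")
reparent_rejected = check_callback(True, "b", "p1", "p2")
```

The effect is exactly "drag to reorder siblings, nothing else": the node may move above or below its siblings but can never acquire a new parent.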
List Vertical Align in IE 9 Hi all, I am having a problem with using <li></li> in the IE browser. I have tried using display:block and display:table to correctly align the second line of each bullet with the first, but neither has worked. Here is the list:

<ul class="a">
  <li>
    <span>The customer provides the database schema files, which include SQL statements (DDLs of all the database objects and stored procedures), and the application source code.</span>
  </li>
  <li>
    <span>Fujitsu performs the migration analysis. An experienced consultant with the help of our proven migration assessment tool will analyze your organization&#x27;s database environment, assessing the infrastructure, schema and applications.</span>
  </li>
</ul>

In your CSS, place this; adjust the number of px to the exact location where you wish for it to start:

ul.a li { text-indent:10px; }

Or try to use padding instead of the list-style property. Yeah, but it works on Chrome, just not on Internet Explorer. I was using list-style-position: inside; I should probably get rid of this property. Could you show the CSS that you are currently using? It works as soon as I delete the list-style-position property and use padding. Thanks @AAnkudovich
tkinter messagebox without buttons In my program I just need to notify the user not to press a physical button on a system with no keyboard or mouse; I want to pop up a Wait message that disappears when the system is ready again. Please provide more details related to your question, like your actual code and why it does not work as you expected. This makes it easier to understand your problem and help you. A tkMessageBox is just a Dialog with some standard buttons added. So, instead of trying to figure out how to hack around the message box to avoid the buttons, why not just use a Dialog directly? There are two reasons you don't want a message box here. First, the whole point of a message box is that it's a modal dialog with some standardized buttons, and you don't want those buttons. Second, the whole point of a modal dialog is that it's modal: it runs its own event loop, and doesn't return until the dialog is dismissed. This means (unless you're using background threads) your app can't do anything while displaying it. The first problem is easy to solve. tkMessageBox is just a simple wrapper around tkCommonDialog.Dialog. It's worth looking at the source to see just how simple it is to construct a dialog box that does what you want. But tkSimpleDialog.Dialog is even simpler than tkCommonDialog (hence the name). For example:

class WaitDialog(tkSimpleDialog.Dialog):
    def __init__(self, parent, title, message):
        self.message = message
        tkSimpleDialog.Dialog.__init__(self, parent, title=title)

    def body(self, master):
        Label(self, text=self.message).pack()

    def buttonbox(self):
        pass

def wait(message):
    WaitDialog(root, title='Wait', message=message)

That's all it takes to create a modal dialog with no buttons. Dialog Windows and the source to tkSimpleDialog have more details. The second problem is even easier to solve: If you don't want a modal dialog, then all you want is a plain old Toplevel.
You may want it to be transient, so it stays on top of the master, hides with it, doesn't show up on the taskbar, etc., and you may want to configure all kinds of other things. But basically, it's this simple:

def wait(message):
    win = Toplevel(root)
    win.transient(root)
    win.title('Wait')
    Label(win, text=message).pack()
    return win

Now you can call wait() and continue to run:

def wait_a_sec():
    win = wait('Just one second...')
    root.after(1000, win.destroy)

root = Tk()
button = Button(root, text='do something', command=wait_a_sec)
button.pack()
root.mainloop()

This is almost working for me.. I'm using your wait function but calling it directly as win = wait("just a sec"), but it doesn't show up until the function it's in is done, and then when it shows it's below my root window and there is no text, and win.destroy raises the error 'NoneType' object has no attribute 'destroy'. @Blanius: I'd have to see the code you've written to debug it. But it sounds like you're trying to do something slow and blocking in the middle of the button command. You can't ever do that in an event-loop-based app like a typical GUI; if you don't return quickly, the event loop doesn't run, and the whole GUI freezes.
I'm not getting any returned value from this... why? Yes it connects to the database, everything else works fine. I can't seem to pull the password from the db; it's showing no returned echo.

<?php
$username="test";
include("db.php");
$con=mysql_connect($server, $db_user, $db_pwd) // connect to the database server
    or die ("Could not connect to mysql because ".mysql_error());
mysql_select_db($db_name) // select the database
    or die ("Could not select to mysql because ".mysql_error());
$query="select password from ".$table_name." where username='$username'";
$result=mysql_query($query,$con) or die('error');
while ($row = mysql_fetch_assoc($result));
$un_pass_s1=$row['password'];
echo $un_pass_s1;
?>

Please, don't use mysql_* functions in new code. They are no longer maintained and are officially deprecated. See the red box? Learn about prepared statements instead, and use PDO or MySQLi - this article will help you decide which. You're doing a lot of other things wrong, from using deprecated mysql_* functions to (apparently) storing passwords in plain text in the database. If you're following a tutorial, you should abandon it and find one which uses PDO. @Kermit thank you :) I'm using old code but rewriting. while ($row = mysql_fetch_assoc($result)); loops until $row is false. The loop body is a single empty statement, ;. You need to put your code which accesses $row inside the loop, not after it. I have no idea what a loop body is. Can you give me an example of proper syntax? Are you not going to give me an example? I've no intention of doing so, no. My answer is very clear. If you want to learn the syntax for while loops, go look at the documentation or any PHP tutorial.

$sql=mysql_query("select password from ".$table_name." where username='$username'");
while($row=mysql_fetch_array($sql)) {
    $un_pass_s1=$row['password'];
}
echo "value=".$un_pass_s1;
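The commenters' advice about prepared statements is independent of PHP. A minimal sketch of the same lookup done with placeholders, using Python's stdlib sqlite3 driver (the table and values are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES (?, ?)", ("test", "secret-hash"))

# The driver binds the value itself; unlike interpolating into the SQL
# string ("... where username='$username'"), a malicious username cannot
# change the query's structure.
row = conn.execute(
    "SELECT password FROM users WHERE username = ?", ("test",)
).fetchone()
password = row[0] if row else None
```

The same placeholder pattern is what PDO and MySQLi prepared statements give you in PHP.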
C++ / MPI: Why do I get bigger execution time with bigger number of processors I'm trying to write a code for serial and parallel algorithm of LDLT decomposition in C++ using MPI. Here's a code #include <iostream> #include <mpi.h> #include <cassert> #include <chrono> #include <cmath> namespace para { double solveSym(int n, double* a, double* b) { int nproc, myid; MPI_Comm_size(MPI_COMM_WORLD, &nproc); MPI_Comm_rank(MPI_COMM_WORLD, &myid); int i = myid; if (myid != 0) { double data; MPI_Request rreq; MPI_Irecv((void*)&data, 1, MPI_DOUBLE, myid - 1, 0, MPI_COMM_WORLD, &rreq); MPI_Status st; MPI_Wait(&rreq, &st); a[i * n + i] = data; } double invp = 1.0 / a[i * n + i]; int send_size = 0; int recv_size = 1; for (int j = i + 1; j < n; j++) { send_size++; recv_size++; if (myid != 0) { double* data = new double[j]; MPI_Request rreq; MPI_Irecv((void*)data, recv_size, MPI_DOUBLE, myid - 1, 0, MPI_COMM_WORLD, &rreq); MPI_Status st; MPI_Wait(&rreq, &st); for (int k = 0; k < recv_size; k++) a[j * n + (myid + k)] = data[k]; } double aji = a[j * n + i]; a[j * n + i] *= invp; for (int k = i + 1; k <= j; k++) a[j * n + k] -= aji * a[k * n + i]; if (myid != nproc - 1) { MPI_Request sreq; double* send_data = new double[send_size]; for (int k = 0; k < send_size; k++) send_data[k] = a[j * n + (i + 1 + k)]; MPI_Isend((void*)send_data, send_size, MPI_DOUBLE, myid + 1, 0, MPI_COMM_WORLD, &sreq); MPI_Status st; MPI_Wait(&sreq, &st); } } return 0; } } namespace seq { void symMatVec(int n, double* a, double* x, double* y) { int i, j; for (i = 0; i < n; i++) { double t = 0.0; for (j = 0; j <= i; j++) t += a[i * n + j] * x[j]; for (j = i + 1; j < n; j++) t += a[j * n + i] * x[j]; y[i] = t; } } void solveSym(int n, double* a, double* x, double* b) { for (int i = 0; i < n; i++) { double invp = 1.0 / a[i * n + i]; for (int j = i + 1; j < n; j++) { double aji = a[j * n + i]; a[j * n + i] *= invp; for (int k = i + 1; k <= j; k++) a[j * n + k] -= aji * a[k * n + i]; } } for (int i = 0; i < n; i++) 
{ double t = b[i]; for (int j = 0; j < i; j++) t -= a[i * n + j] * x[j]; x[i] = t; } for (int i = n - 1; i >= 0; i--) { double t = x[i] / a[i * n + i]; for (int j = i + 1; j < n; j++) t -= a[j * n + i] * x[j]; x[i] = t; } } } int main(int argc, char** argv) { srand((unsigned)time(NULL)); MPI_Init(&argc, &argv); int nproc, myid; MPI_Comm_size(MPI_COMM_WORLD, &nproc); MPI_Comm_rank(MPI_COMM_WORLD, &myid); int n = nproc; double* a = new double[n * n]; assert(a != NULL); for (int i = 0; i < n; i++) for (int j = 0; j < i; j++) a[i * n + j] = rand() / (RAND_MAX + 1.0); for (int i = 0; i < n; i++) { double s = 0.0; for (int j = 0; j < i; j++) s += a[i * n + j]; for (int j = i + 1; j < n; j++) s += a[j * n + i]; a[i * n + i] = s + 1.0; } double start, end; double* xx = new double[n]; assert(xx != NULL); for (int i = 0; i < n; i++) xx[i] = 1.0; double* b = new double[n]; assert(b != NULL); seq::symMatVec(n, a, xx, b); MPI_Barrier(MPI_COMM_WORLD); start = MPI_Wtime(); double x = para::solveSym(n, a, b); MPI_Barrier(MPI_COMM_WORLD); end = MPI_Wtime(); double* output = new double[n]; MPI_Gather((void*)&x, 1, MPI_DOUBLE, (void*)output, 1, MPI_DOUBLE, 0, MPI_COMM_WORLD); if (myid == 0) { std::cout << "processors num = " << nproc << " execution time = " << (end-start)/1000.0 << " seconds" << std::endl; } MPI_Finalize(); return 0; } While I execute this code (4 processors, matrix 100x100) using: mpiexec -np 4 LDLT 100 I get strange results. For example, with matrix of 100x100 using 1 processor, the execution time is 1,2e-9 seconds; using 2 processors, the execution time is 5,48e-9 seconds; using 4 processors, the execution time is 5,55e-9 seconds. Why do I get such results? What's wrong with this code? Help me to correct it. Thanks! 
EDIT: I made some changes according to y'all suggestions, it has some improvements in execution time (now it's not so little), but still the same problem: I changed the matrix size to 1000x1000, and with 1 processor the execution time = 0,0016 seconds; with 2 processors it takes 0,014 seconds. Here's a code of main() function: int main(int argc, char** argv) { srand((unsigned)time(NULL)); MPI_Init(&argc, &argv); int nproc, myid; MPI_Comm_size(MPI_COMM_WORLD, &nproc); MPI_Comm_rank(MPI_COMM_WORLD, &myid); int n = atoi(argv[1]); double* a = new double[n * n]; assert(a != NULL); for (int i = 0; i < n; i++) for (int j = 0; j < i; j++) a[i * n + j] = rand() / (RAND_MAX + 1.0); for (int i = 0; i < n; i++) { double s = 0.0; for (int j = 0; j < i; j++) s += a[i * n + j]; for (int j = i + 1; j < n; j++) s += a[j * n + i]; a[i * n + i] = s + 1.0; } double start, end; double* xx = new double[n]; assert(xx != NULL); for (int i = 0; i < n; i++) xx[i] = 1.0; double* b = new double[n]; assert(b != NULL); start = MPI_Wtime(); if (nproc == 1) { seq::symMatVec(n, a, xx, b); end = MPI_Wtime(); std::cout << "processors num = " << nproc << " execution time = " << (end - start) << " seconds" << std::endl; MPI_Barrier(MPI_COMM_WORLD); MPI_Finalize(); } else { double x = para::solveSym(n, a, b); double* output = new double[n]; MPI_Gather((void*)&x, 1, MPI_DOUBLE, (void*)output, 1, MPI_DOUBLE, 0, MPI_COMM_WORLD); if (myid == 0) { end = MPI_Wtime(); std::cout << "processors num = " << nproc << " execution time = " << (end - start) << " seconds" << std::endl; } MPI_Barrier(MPI_COMM_WORLD); MPI_Finalize(); } return 0; } Here's a rough guideline: do at least 50 thousand operations in between two messages. shouldn't you have n = atoi(argv[1]) instead of n = nproc? Here's another rough guideline - if a task takes less than 1s to execute, don't waste your time parallelising it. Thanks you all guys, I a bit modified code, but still get the same problem with execution time. 
Please review an updated main() function code. Firstly, the problem size could be an issue. The time spent on MPI communication (though MPI_Isend and MPI_Irecv are used, they will still behave like MPI_Send and MPI_Recv because of the MPI_Wait immediately after the calls, i.e. no communication/computation overlap) and synchronisation (the MPI_Barrier before the timing calculation) might be more than the actual compute time. On the synchronisation aspect, your approach for measuring time is flawed (in my opinion). Ideally, what you should do is calculate the time taken for calling the solveSym function like this:

start = MPI_Wtime();
double x = para::solveSym(n, a, b);
end = MPI_Wtime();
MPI_Barrier(MPI_COMM_WORLD);

Putting a barrier inside the timed region will skew your results. Then after that, each process should calculate the time taken for the function, do an MPI_Reduce, and compute the average time to get correct timing information. Currently, you are only printing the timing information from one single process. Suggestion (as pointed out by davideperrone in his answer): You should increase the problem size and run the code. If the behaviour persists, you should change the MPI implementation of your code (by overlapping computation with communication if possible). If I am not mistaken, your function solveSym always returns 0 and you are using MPI_Gather to collect it. From a quick glance at the code, what I can see is that the implementation is very inefficient: all processes have all the data (every MPI process creates an n*n array, etc.). In my opinion, you can improve your code a lot. Thanks, I made some corrections to my code, but still it doesn't work as it should. You can take a look at the upgraded main() function code. Please review it :) Probably this happens because the problem is so small (1.2e-9 s execution time) that the overhead associated with the MPI calls takes much more time than what is needed for the actual computation.
Maybe you can try with much larger matrices and share the results. Thanks for your comment, I made some corrections, but still the same problem with more processors. I added a new version of the main() function code; you can take a look at it.
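Why adding ranks can make this slower is easy to see with a back-of-the-envelope model. The sketch below is illustrative only: the per-flop and per-message constants are invented, so only the trend is meaningful, not the absolute numbers.

```python
def modeled_time(n, p, t_flop=1e-9, t_msg=2e-6):
    """Crude LDLT timing model: roughly n^3/3 flops split across p
    ranks, plus a fixed latency per message once p > 1 (the code in
    the question sends on the order of n messages per neighbouring
    pair along a chain of p ranks)."""
    compute = (n ** 3 / 3.0) * t_flop / p
    comm = 0.0 if p == 1 else n * (p - 1) * t_msg
    return compute + comm


# For n = 100 the latency term dominates, so 4 ranks lose to 1:
small_serial, small_par = modeled_time(100, 1), modeled_time(100, 4)
# For n = 5000 the cubic compute term dominates, and 4 ranks win:
big_serial, big_par = modeled_time(5000, 1), modeled_time(5000, 4)
```

This matches the rule of thumb quoted in the comments: do enough arithmetic between messages (tens of thousands of operations) that latency is amortised, otherwise parallelising a sub-millisecond job only adds overhead.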
Is there any special restriction on using the OriginalQueueId field in an AgentWork trigger? We have written a before/after update trigger on the AgentWork object to get information about the owner change from Queue to User on Case record acceptance from Omni-Channel. To get this from/to owner information we tried to use the UserId and OriginalQueueId fields from the AgentWork object. As soon as we tried to access and use the OriginalQueueId field in our trigger's handler class, our trigger stopped firing. We found no debug logs in the developer console on case acceptance. When we commented out or removed the OriginalQueueId field from our trigger code, the trigger started executing. In our findings we saw that the "OriginalQueueId" field is not present within the AgentWork object from the trigger.new list. Here is the log:

AgentWork: AgentWork:{Id=0Bz5B0000004LhuSAE, IsDeleted=false, Name=00004387, CreatedDate=2017-09-28 12:33:56, CreatedById=0055B000000W1hGQAS, LastModifiedDate=2017-09-28 12:34:01, LastModifiedById=0055B000000W1hGQAS, SystemModstamp=2017-09-28 12:34:01, UserId=0055B000000W1hGQAS, WorkItemId=5005B000002SIDGQA4, Status=Opened, ServiceChannelId=0N932000000TN1KCAW, LASessionId=2de1db2c-6ae1-429e-97b9-160592cae978, StatusSequence=0, CapacityWeight=1.00, CapacityPercentage=null, RequestDateTime=2017-09-28 12:28:48, AcceptDateTime=2017-09-28 12:34:01, DeclineDateTime=null, CloseDateTime=null, SpeedToAnswer=313, AgentCapacityWhenDeclined=null, PendingServiceRoutingId=0JR5B0000000Ke6WAE, PushTimeout=60, PushTimeoutDateTime=null, HandleTime=null, ActiveTime=null, DeclineReason=null, CancelDateTime=null, AssignedDateTime=2017-09-28 12:33:56}

As we did not find this field in the AgentWork record from the trigger.new list, we tried the following query on the AgentWork record to get the OriginalQueueId field value:

SELECT Id, OriginalQueueId FROM AgentWork WHERE Id in: setCurrentAgentWorkIds

and again this stopped our trigger from executing, and no debug logs were generated for our trigger statements.
We try to execute the same query in Developer console & eclipse and we were able to see OriginalQueueId field value in results. It is just not working in use with our trigger execution. I verified the isAccessible via Schema.DescribeFieldResult and it is returning "true" for OriginalQueueId field. Can we get an explanation on this behavior? Is there any special restriction on using OriginalQueueId field in AgentWork trigger? See Class Here: public with sharing class AgentWorkTriggerHandler { public static void afterUpdate( list<AgentWork> lstNewAgentWorks, map<Id, AgentWork> mapNewAgentWorks, list<AgentWork> oldNewAgentWorks, map<Id, AgentWork> mapOldAgentWorks ){ map<Id, AgentWork> mapAWorkByCaseId = new map<Id, AgentWork>(); for( AgentWork aWork : lstNewAgentWorks ){ System.debug('---> AgentWork: '+aWork); if( aWork.Status == 'Opened' && String.valueOf( aWork.WorkItemId ).startsWith( '500' ) ){ mapAWorkByCaseId.put( aWork.WorkItemId, aWork ); } } if( mapAWorkByCaseId.size() > 0 ){ map<Id, Case> mapNewCase = new map<Id, Case>(); map<Id, Case> mapOldCase = new map<Id, Case>(); map<Id, Id> mapQueueIdByWorkId = getQueueIdByWorkId( mapNewAgentWorks.keyset() ); for( Case caseObj : [SELECT Id, OwnerId, Status, CreatedDate FROM Case WHERE Id in:mapAWorkByCaseId.keySet()]){ AgentWork aWork = mapAWorkByCaseId.get( caseObj.Id ); mapNewCase.put( caseObj.Id, caseObj ); Case oldCase = caseObj; oldCase.OwnerId = mapQueueIdByWorkId.get( aWork.Id ); mapOldCase.put( caseObj.Id, oldCase ); } System.debug( '----> mapOldCase: '+mapOldCase); System.debug( '----> mapNewCase: '+mapNewCase); CaseTriggerHandler.updateCaseLifeCycleRecords( mapNewCase.values(), mapNewCase, mapOldCase.values(), mapOldCase ); } } public static map<Id, Id> getQueueIdByWorkId( set<Id> workIds ){ map<Id, Id> mapQueueIdByWorkId = new map<Id, Id>(); // If I comment below loop statement, I can see debug logs from this handler class in Developer console. 
// If I leave the loop below uncommented, I do not get any debug logs for this handler class in the developer console, not even debugs from the trigger. for( AgentWork aWork : [SELECT Id, OriginalQueueId FROM AgentWork WHERE Id in:workIds]){ mapQueueIdByWorkId.put( aWork.Id, aWork.OriginalQueueId ); } return mapQueueIdByWorkId; } } I am also facing a similar issue. Did you find any solution for this? This is related to an open bug scheduled to be fixed with Summer '19. The behavior of the OriginalQueueId field is very strange for now. It is not accessible on the AgentWork instance, but it is present. I serialized the Trigger.new list in the AgentWork trigger: System.debug(JSON.serialize(Trigger.new)); The output is this:

[ { "attributes": { "type": "AgentWork", "url": "/services/data/v45.0/sobjects/AgentWork/0Bz6C000000EUEQSA4" }, "LastModifiedDate": "2019-03-28T15:23:02.000+0000", "ShouldSkipCapacityCheck": false, "RoutingModel": "LeastActive", "AssignedDateTime": "2019-03-28T15:23:02.000+0000", "Name": "00085887", "IsConference": false, "OwnerId": "0053400000CKLXTAA5", "CreatedById": "0053400000CKLXTAA5", "RequestDateTime": "2019-03-28T15:22:34.000+0000", "WorkItemId": "5006C00000489MBQAY", "RoutingType": "QueueBased", "Status": "Assigned", "IsDeleted": false, "StatusSequence": 0, "OriginalQueueId": "00G6C0000019JaeUAE", "CapacityWeight": 2.00, "LASessionId": "6dce0431-97b3-460e-8512-5cc3fe84d31a", "CurrencyIsoCode": "USD", "SystemModstamp": "2019-03-28T15:23:02.000+0000", "PushTimeout": 120, "UserId": "0053400000CKLXTAA5", "ServiceChannelId": "0N96C000000GmdISAS", "IsTransfer": false, "CreatedDate": "2019-03-28T15:23:02.000+0000", "Id": "0Bz6C000000EUEQSA4", "LastModifiedById": "0053400000CKLXTAA5", "PendingServiceRoutingId": "0JR6C000000CwYqWAK" } ]

But any attempt to access the OriginalQueueId field value on AgentWork fails at runtime as reported in the issue link provided above.
The following code compiles but fails at runtime:

for(AgentWork aw : Trigger.new) { Id originalQueueId = aw.OriginalQueueId; }

with this error message: Variable does not exist: OriginalQueueId. Though I am not very happy with the approach, this is how I could access the value for now:

String originalQueueId = String.valueOf(((Map<String, Object>)JSON.deserializeUntyped(JSON.serialize(aw))).get('OriginalQueueId'));

This executes just fine and works for me.
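The shape of that workaround (serialize the typed record, then read the field from an untyped map) is a generic pattern, not Apex-specific. A Python mirror of it, with a made-up record class and made-up IDs standing in for the Salesforce objects:

```python
import json


class AgentWorkLike:
    """Stand-in for a typed record whose field the typed accessor hides.
    Both ID values are invented placeholders for illustration."""

    def __init__(self):
        self.Id = "0Bz000000000001"
        self.OriginalQueueId = "00G000000000001"


record = AgentWorkLike()

# Round-trip through JSON, then read from the untyped dict instead of
# the typed attribute (the analogue of JSON.deserializeUntyped in Apex):
untyped = json.loads(json.dumps(record.__dict__))
queue_id = untyped.get("OriginalQueueId")
```

Because the untyped map is plain data, the field survives the round trip even when the typed access path is blocked.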
D3 v5 zoom limit pan I am trying to limit pan in d3 zoom but I am not getting correct results. I am using the following code to extent both scale and translate:

var treeGroup = d3.select('.treeGroup');
var rootSVG = d3.select('.rootSVG')
var zoom = d3.zoom()
    .scaleExtent([1.6285, 3])
    .translateExtent([[0, 0],[800, 600]])
    .on('zoom', function(){
        treeGroup.attr('transform', d3.event.transform);
    })
rootSVG.call(zoom);

Here is the JSFiddle: https://jsfiddle.net/nohe76yd/45/ scaleExtent works fine but translateExtent is giving issues. How do I specify the correct value for translateExtent so that while panning the content always stays inside the svg container? The translate extent depends on the zoom/scale factor. In the on-zoom function, modify the transform by using Math.min and Math.max. https://stackoverflow.com/a/51563890/9938317 The translateExtent works best when applied dynamically to the graph group you're using. It takes two arguments, topLeft and bottomRight, which are x and y coordinates each. In my example, I recalculate the extent based on the graph's size, with the help of getBBox() and adding some margins. Take a look, it might help you: https://bl.ocks.org/agnjunio/fd86583e176ecd94d37f3d2de3a56814 EDIT: Adding the code that does this, inside the zoom function, to make it easier to read:

// Define some world boundaries based on the graph total size
// so we don't scroll indefinitely
const graphBox = this.selections.graph.node().getBBox();
const margin = 200;
const worldTopLeft = [graphBox.x - margin, graphBox.y - margin];
const worldBottomRight = [
    graphBox.x + graphBox.width + margin,
    graphBox.y + graphBox.height + margin
];
this.zoom.translateExtent([worldTopLeft, worldBottomRight]);

I am trying to do this to avoid infinite panning to the left (x-axis), but .getBBox() is returning all zero values (x, y, width and height). Could you please help with this?
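For intuition about what the translate constraint has to enforce, here is the clamping arithmetic as a standalone Python sketch. It assumes d3's convention screen = k * world + t (the function name is mine), and it assumes the scaled world is at least as large as the viewport, which the scaleExtent lower bound of 1.6285 guarantees here:

```python
def clamp_translate(tx, ty, k, world, viewport):
    """Clamp the translate (tx, ty) at scale k so the scaled world
    rectangle always covers the viewport.
    world = ((x0, y0), (x1, y1)); viewport = (width, height).
    Requires k * (x1 - x0) >= width and k * (y1 - y0) >= height."""
    (x0, y0), (x1, y1) = world
    w, h = viewport
    # Left/top world edge must not move right/down past the origin,
    # and right/bottom edge must not move left/up past the viewport edge.
    tx = min(-k * x0, max(w - k * x1, tx))
    ty = min(-k * y0, max(h - k * y1, ty))
    return tx, ty


world = ((0, 0), (800, 600))
viewport = (800, 600)
# At k = 2 the world is 1600x1200 on screen, so tx ranges over [-800, 0].
inside = clamp_translate(100, 50, 2, world, viewport)
far_out = clamp_translate(-2000, -900, 2, world, viewport)
```

This is why the extent "depends on the zoom/scale factor", as the first comment says: the admissible translate interval shrinks and grows with k.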
How does the Announcer badge work? I understand that the Announcer badge can be earned if the question is visited 25 times. But is this only when I share a link to this question in one of my answers, or can I also share the link (with my user id somewhere to count the accesses) somewhere else, maybe Facebook? You can share the link anywhere, so long as you use the link for questions underneath the question. It does not have to be for your question; you can share the link to any question. Thanks, now I understand. Is it also possible to see how many I referred? @Roflcoptr - you could shorten the url using bit.ly, which will then show you how many click-throughs that shortened url gets. Related: How this tracking is performed. The link must be shared outside the SE network according to Rebecca's post: The link must be clicked from outside the network in order for it to count for this purpose. There is no link now, will share do the same? It seems sharing the link to an answer (through the share link below the answer post) can also earn you the Announcer badge, as mentioned here. Not sure though if it is applicable only on Stack Overflow or on all the Stack Exchange communities. In fact @Idolon's answer below your answer also says the same. @AndrewMorton that's true now, but I believe the question was about sharing links to a SO question outside of SO, so that remains an option. @ChrisBallard I see that facts and context got in the way of what could otherwise have been a useful comment. I'll delete it :) Do I have to share the link on some other site, or does sending it via email count too? But is this only when I share a link to this question in one of my answers, or can I also share the link (with my user id somewhere to count the accesses) somewhere else, maybe Facebook? In order to gain the Announcer badge you must share a link outside the SE network. Sharing a link in one of your answers won't work.
Also, the rules for the Announcer, Booster and Publicist badges have changed since 2012-01-10, and sharing direct links to answers now also counts towards these badges. What exactly does outside the SE network mean? If I use email to share the link, will it count? @convert yes, it will. The link click won't count if the link was placed inside an answer or question on the Stack Exchange network sites.
Java - Vertx - Publish–Subscribe pattern: publishing a message inside its own consumer. Is this a bad idea? I am new to the publish–subscribe pattern, and we are using Vertx in our application. I am trying to do this for some use case, where I am publishing inside its own consumer:

private void functionality() {
    EventBus eb = Vertx.currentContext().owner().eventBus();
    MessageConsumer<String> consumer = eb.consumer("myAddress");
    consumer.handler(message -> {
        if (condition1) {
            doOperationWhichMightChangeTheCondition();
            eb.publish("myAddress","Start Operation");
        } else {
            log.info("Operations on all assets completed");
        }
    });
    eb.publish("myAddress","Start Operation");
}

Is this a bad idea? Can this also lead to a StackOverflowError like recursive calls, or any other issues? If you are looking for a site that will review your code for you, don't come to Stack Overflow; put this question on Code Review. Thank you. Didn't know about the site till now. Will use it appropriately. And I also wanted to know if calling publish inside a consumer leads to a stack overflow. The EventBus.publish method is asynchronous; it does not block to wait for consumers to receive/process the published message. So it is perfectly safe to call publish inside a consumer that will then consume the published message. Internally, Vert.x has Netty schedule another call to the consumer, which will not run until the current invocation (and any other methods scheduled ahead of it on the Netty event loop) complete. You can easily prove this to yourself by writing a test with a consumer that publishes to the address it is consuming from. You won't see a StackOverflowError.
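The claim that an asynchronous publish cannot overflow the stack is easy to demonstrate outside Vert.x. Here is a toy single-threaded event bus in Python (all names are invented; this is a sketch of the scheduling idea, not of Vert.x internals): publish only enqueues, and handlers run later from a drain loop, so a handler that re-publishes to its own address grows the queue by one, never the call stack.

```python
from collections import deque


class ToyEventBus:
    """publish() only enqueues; handlers run later from drain()."""

    def __init__(self):
        self.handlers = {}
        self.queue = deque()

    def consumer(self, address, fn):
        self.handlers.setdefault(address, []).append(fn)

    def publish(self, address, message):
        self.queue.append((address, message))

    def drain(self):
        while self.queue:
            address, message = self.queue.popleft()
            for fn in self.handlers.get(address, []):
                fn(message)


bus = ToyEventBus()
calls = [0]


def handler(message):
    calls[0] += 1
    if calls[0] < 50_000:  # far deeper than any call stack would allow
        bus.publish("myAddress", "Start Operation")


bus.consumer("myAddress", handler)
bus.publish("myAddress", "Start Operation")
bus.drain()  # completes without any RecursionError / stack overflow
```

If publish called the handler directly instead of enqueuing, the same 50,000 re-publishes would recurse and blow the stack; deferring through the queue is exactly what Vert.x's event loop does for you.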
Installed Node.js and Sublime Text But Keep Getting Errors I'm a beginner at trying to learn web development. I downloaded node from the nodejs.org website and also downloaded Sublime Text 3 from its website. My code in Sublime is as follows:

$ node
console.log("Who's you're daddy?");

When I try to run it on node.js I get an error that says: "Syntax Error, Unexpected Identifier." This is literally step one and I'm already messing up. How have I set this up wrong? From what you are trying to do, it seems like you want to print some string in the console using nodejs. With nodejs you write your JavaScript logic in a file first and then execute that file using node. Create a file somefile.js and write the following code in it:

console.log("Who's your daddy?");

Then in a terminal execute:

node /location/of/the/file/somefile.js

It should print Who's your daddy? in the console. To learn more about nodejs and to get started you can refer to How do I get started with Node.js
NullReferenceException during object initialization Why is there a NullReferenceException when trying to set the value of X in the code below? It works fine when I use the new keyword when initializing B, but why does it compile fine without new and then fail during runtime? https://dotnetfiddle.net/YNvPog

public class A
{
    public _B B;
    public class _B
    {
        public int X;
    }
}

public class Program
{
    public static void Main()
    {
        var a=new A{ B={ X=1 } };
    }
}

Specifically see the section Indirect in the accepted answer. Because in your example you do not initialize B. You basically do a.B.X = 1 where B is still null. Are you asking why you got an NRE, or why the compiler doesn't warn you about using the B variable without instancing it first? Terrible compiler! It should detect that B={} will return null. It is very hard to find when you have deeply nested code. I enabled Common Language Runtime Exceptions and it will say something like [namespace].A.B.get returned null. It is an easy fix: B = new _B{}
The best strategy to avoid this is to initialize all of your fields to non-null values in the constructor. If you won't know what value to give them until your constructor is invoked, then make your constructor take those values as parameters. If you expect one of your fields may not always have a value, you can use an optional type like my Maybe<> struct to force programmers to deal with that fact at compile-time. Update 2021 Now C# supports nullable reference types, which you can use to encourage/force programmers to know that your field could be null, if that's the route you want to take. public _B? B; Thanks, I thought it's something similar like int[] a={1,2,3}. After I posted the question I added keyword "nested" and found it in specs: "A member initializer that specifies an object initializer after the equals sign is a nested object initializer, i.e. an initialization of an embedded object. Instead of assigning a new value to the field or property, the assignments in the nested object initializer are treated as assignments to members of the field or property. Nested object initializers cannot be applied to properties with a value type, or to read-only fields with a value type." @Franta: I know what you mean. Also note that list initializer syntax invokes "Add" on the collection, rather than creating a new list. So it's usually wise to always initialize List properties with empty lists in the constructor. I tried to init an array of ints: https://dotnetfiddle.net/SNlACj Then it throws compilation error: "Cannot initialize object of type 'int[]' with a collection initializer" (4.5) and "'int[]' does not contain a definition for 'Add'" (Roslyn). Thanks for explanation - I guess I'd be perplexed from those error messages not knowing what's going on. @StriplingWarrior Your link is unfortunately dead. Perhaps provide a new one? @Heki: Thanks for pointing that out. I moved the library to Github. 
However, with C#'s addition of nullable reference types, I think that library is far less useful than it used to be. I updated my answer accordingly.
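The "initialize every field in the constructor" advice translates directly to other languages. A Python sketch of the same failure and fix (the class names are mine; Python's AttributeError on None plays the role of the NullReferenceException):

```python
class B:
    def __init__(self):
        self.x = 0


class BadA:
    def __init__(self):
        self.b = None  # like C#'s default-null reference field


class GoodA:
    def __init__(self):
        self.b = B()  # constructor guarantees a usable B up front


failed = False
try:
    BadA().b.x = 1  # same shape as new A { B = { X = 1 } }
except AttributeError:  # "'NoneType' object has no attribute 'x'"
    failed = True

a = GoodA()
a.b.x = 1  # safe: b was initialized before anyone could touch it
```

Constructing the nested object eagerly, exactly like `public _B B = new A._B();` in the answer above, means no caller can ever observe the field in its null state.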
common-pile/stackexchange_filtered
What is the right dual isogeny? I have a question regarding dual isogenies. I read an example in Silverman's book about elliptic curves and am wondering something about this example. We have $\zeta$ as a primitive cube root of unity. Then the elliptic curve $C: y^2=x^3+1$ has complex multiplication: \begin{align*} \phi(x,y)=(\zeta x, -y) \end{align*} Now it is clear to me that we have $\phi^3(P)=-P$ and $\phi^6(P)=P$. But what is now the dual isogeny of $\phi$? Since we could take $\hat{\phi}=\phi^2$ and have $\phi \phi^2= [-1]$ or we could take $\hat{\phi}=\phi^5$ and get $\phi \phi^5 = [1]$. How do I know which is the right one? By the definition of the dual isogeny, you have that $\hat{\phi}\phi = [\deg\phi]$. In this case, $\deg\phi = 1$ (and besides, the degree of an isogeny cannot be negative). Thus $\hat{\phi} = \phi^5$.
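The two relations $\phi^3 = [-1]$ and $\phi^6 = [1]$ (hence $\phi\,\phi^5 = [1]$) can also be checked numerically on a sample point of $C$; this is only an illustrative sanity check with floating-point complex arithmetic, not part of the argument, and the point chosen below is arbitrary.

```python
import cmath

# Illustrative check (not a proof): phi(x, y) = (zeta * x, -y) on y^2 = x^3 + 1,
# with zeta a primitive cube root of unity.
zeta = cmath.exp(2j * cmath.pi / 3)

def phi(p):
    x, y = p
    return (zeta * x, -y)

def on_curve(p, tol=1e-9):
    x, y = p
    return abs(y ** 2 - (x ** 3 + 1)) < tol

P = (2.0, 3.0)          # 3^2 = 2^3 + 1, so P lies on C
Q = P
for _ in range(6):
    Q = phi(Q)
    assert on_curve(Q)  # phi maps the curve to itself at every step
# After three applications we reach -P = (x, -y); after six we are back at P.
```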
Method to encrypt an image and save it in the Gallery in React Native for the iOS platform? I need to encrypt an image and save the encrypted file in the Gallery. Is it possible on the iOS platform? Using React Native (JS) and OpenSSL for encryption. Any guide will be helpful! Thank you :) I don't think so. You can manage your app's own separate gallery to manipulate such data, but the native option doesn't allow it. Also, check how OpenSSL can be used with React Native for iOS. But I guess there is a way: convert the encrypted file into PNG form and save that. The iOS gallery will not detect anything else.
In the Android Market, how frequently can/should developers update their APK files? What are the best practices for updating APK files in the Android Market? Is it OK to publish a new version as soon as I fix a minor glitch, or should I consolidate a few bug fixes (if they are minor) and post them at a regular interval? I just released a game and got an extremely rare corner-case crash issue and another minor glitch, so I'm not sure if I should release the fix right away. Also, are there any restrictions on the number of updates per time period? Even if there are no best practices as such, could you (Android developers) share how frequently you update your APK files for minor and major issues, and what your positive and negative experiences have been? Thanks! Once a week is pretty optimal for generating new downloads and visibility, based on my experiences and what I have read. Weekends and holidays also seem to generate more traffic. I usually pack more changes into one update and release once every 1-2 weeks. Don't make updates if you have no real content; that may annoy users. Read story #1: http://blog.edward-kim.com/an-android-success-story-13000month-sales-0 Read story #2: http://makingmoneywithandroid.com/2011/05/first-month-on-the-android-market/ People's experiences: Android Market - Time to wait between two updates Market's "just in": http://www.google.bg/support/forum/p/Android+Market/thread?tid=5b8adbb9052fc55c&hl=en Analysis of when during the day downloads peak: http://nhenze.net/?p=735 Discussion about time of day: Best time/day to publish to Android Market? Makes perfect sense. Great info. Thanks. Personally, I think it depends on the type of application. If you are writing a tool that gains more and more functionality with each update, users probably won't mind the frequent updating. The same goes for an application that has too many major bugs. If you're writing a game though, I think updates affecting the style of gameplay should be few and far between.
Users get used to playing a certain way and could get annoyed if they have to keep adapting to what is essentially a different game every time they update. Level-pack updates are of course a different story (I think those don't come fast enough sometimes). Remember though, even if an update goes out for an app, it doesn't mean the user will download it. I've seen too many friends with 22 updates available... < drop down clear > This is the best answer, IMHO. It all depends on what kind of app you have. Jarno's answer is pretty good, sure, but it would fail completely for my app, for example. Mine is used commercially, so it gets the most traffic and downloads 1-2 days before the peak activity. Believe it or not, Mondays and Tuesdays are the days I get the most downloads, and the peak time of activity and downloads is early morning. It's definitely not the famous 16:22/4:22 PM (which possibly makes sense for games). Great answer. Thanks. The updates (with bug fixes) I'm planning don't involve many visual changes, but I'm still hesitant to release more often than once a week. I'm wondering if there is a provision to keep the same app name but make the changes available only for new downloads, without bothering previous users with the updates. As far as I know you can update as often as you like. You pretty much have to decide what the balance is between annoying your users with frequent updates vs. making them happy by shipping frequent bug fixes. For a while I was updating my own apps pretty much weekly and I never had any negative responses to that.
Need guidance to use .map, .group, .pluck, etc. to make a multi-series line chart in a Ruby on Rails app I'm trying to display a multi-series line chart using chartkick in a Ruby on Rails app. The chart should display paper_types and the weight for each type during some time period. SCREENSHOT ADDED This is my latest try: <%= line_chart [ {name: @pappi.map{|p| [p.paper_type]}, data: @pappi.map{|t| [t.date, t.paper_weight] }, 'interpolateNulls':true} ] %> Where @pappi = Paper.all The code above outputs as the picture below, where every paper_type ends up on one single line, instead of showing separate lines for each paper_type. What I'm looking for is a chart similar to the screenshot below, where each paper_type has its own line. Can someone please help me with this so I can get the outcome I want? Can you remove most of this answer's content and define: what you want (maybe a screenshot as an example?), the context you have (records and their attributes relevant to the problem) and what you did (the last 2 tries that were closest to the answer)? Yes I will, give me a few minutes :) As I closed your "duplicated" question today and you don't have a lot of reputation to put on a bounty, I will do my best to help you fix this issue. Thank you @MrYoshiji, that's very kind of you. I've edited the question and added a sample image from the chartkick documentation. Following the line_chart example, you need to provide an array of hashes; each hash defines the name key/value pair (the value will be displayed in the legend) and the data key/value pair (the value contains a hash of key/value pairs, where each key is a date and each value is a number).
Thank you @MrYoshiji, could you perhaps give me an example? I'm feeling kind of lost here :) I did not test this, only read the doc and concluded the following: line_chart expects you to give an argument structured like this (from the Javascript documentation): line_chart [ { name: 'Line1', data: { '2017-01-01' => 2, '2017-01-08' => 3 } }, { name: 'Line2', data: { '2017-01-01' => 1, '2017-01-08' => 7 } }, ] # try it in your view to make sure it works as described below This will create a chart with 2 lines (Line1 and Line2), the horizontal axis will contain the 2 values 2017-01-01 and 2017-01-08, and the vertical axis will be a scale, probably from 1 to 7 (min from data to max from data). Following this data structure in your context: Specs (correct me if I am wrong): one line for each different paper_type; a weight value for a given paper_type and a given date. Object mapping to match the desired data structure: # controller all_paper_type_values = Paper.all.pluck(:paper_type).uniq @data_for_chart = all_paper_type_values.map do |paper_type| { name: paper_type, data: Paper.where(paper_type: paper_type).group_by_month(:created_at).sum(:weight) } end # view <%= line_chart(@data_for_chart) %> This is not scoped to any user / dates (period); you will have to add that to the code above. Try this and let me know if it fails. Thank you @MrYoshiji, this works, but I'm concerned about how I can show different paper_types and paper_weights using this method. One user may have 4 paper_types (Newspaper, magazines, officepaper and other paper) and another user may have only two paper_types to show (newspaper and officepaper). For that user I would only want the graph to show those two types, not all four. Didn't see your edit before commenting, I'll check it out. I get "undefined local variable or method `data_for_chart' for #<#<Class:0x007fc7694aea60>:0x007fc768448c70>" Yeah, this is working 95%: I now have separate lines for each paper_type, but it is not showing the weight.
I added a screenshot to the question of how it looks now So, as seen in the screenshot I added, the legend is showing the number of entries for each paper_type. I can understand that is because the code is not showing any paper_weight, my concern now is how to add the paper_weight to the code so it will be shown in the legend? What is the output and/or the SQL generated for Paper.grouped_by_month(:created_at)? The .count instruction is counting records instead of summing all of those records' weight value this is what I get, I'm not understanding it fully irb(main):005:0> Paper.group_by_month(:created_at) Paper Load (5.1ms) SELECT "papers".* FROM "papers" WHERE (created_at IS NOT NULL) GROUP BY (DATE_TRUNC('month', (created_at::timestamptz - INTERVAL '0 hour') AT TIME ZONE 'Etc/UTC') + INTERVAL '0 hour') AT TIME ZONE 'Etc/UTC' ActiveRecord::StatementInvalid: PG::GroupingError: ERROR: column "papers.id" must appear in the GROUP BY clause or be used in an aggregate function LINE 1: SELECT "papers".* FROM "papers" WHERE (created_at IS NOT NUL... Can you try Paper.grouped_by_month(:created_at).sum(:weight)? And eventually Paper.grouped_by_month(:created_at).select('SUM(weight) as total_weight') I get irb(main):007:0> Paper.group_by_month(:created_at).sum(:paper_weight) (9.9ms) SELECT SUM("papers"."paper_weight") AS sum_paper_weight, (DATE_TRUNC('month', (created_at::timestamptz - INTERVAL '0 hour') AT TIME ZONE 'Etc/UTC') + INTERVAL '0 hour') AT TIME ZONE 'Etc/UTC' AS month FROM "papers" WHERE (created_at IS NOT NULL) GROUP BY (DATE_TRUNC('month', (created_at::timestamptz - INTERVAL '0 hour') AT TIME ZONE 'Etc/UTC') + INTERVAL '0 hour') AT TIME ZONE 'Etc/UTC' => {Sat, 01 Apr 2017 00:00:00 UTC +00:00=>#<BigDecimal:7f8887a17680,'0.621E3',9(18)>, Mon, 01 May 2017 to long to sh That is perfect. Use this .sum(:weight) instead of the .count in your code (see updated answer) Fantastic!!!! @MrYoshiji, You just made my day!!!! 
Thank you so much for all your help I am happy to help. I assume you will be able to filter Paper based on the desired user and/or a period of time. Yes exactly, this seems to be working as it should now What happens when you put any of these options in the rails console? Do you get multiple series of data? Have you tried? <%= line_chart [ name: paper.paper_type, data: current_user.papers.group(:paper_type).group_by_week(:created_at).count ] %> Thank you @Stuart, Yes I've tried this, it gives just an empty graph I'm not sure how I should put these options in the console, being rather new to this and all :) At a terminal, type in rails c. Have you done this? If not, you can read about it here: http://guides.rubyonrails.org/command_line.html. Once there, you can do current_user.papers.group(:paper_type).group_by_week(:created_at).count to see what the results are. I would spend my time doing this. When you get data that looks correct, try the chart.
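The accepted mapping can be imitated in plain Ruby (no Rails, no ActiveRecord) to see the exact data structure that line_chart receives; the sample rows and month labels below are invented, and group_by/transform_values stand in for the where/group_by_month/sum query chain:

```ruby
# Illustrative sketch: build the chartkick array-of-hashes from plain hashes.
# Field names mirror the question; the rows are made up.
papers = [
  { paper_type: 'Newspaper', month: '2017-04-01', weight: 10 },
  { paper_type: 'Newspaper', month: '2017-05-01', weight: 12 },
  { paper_type: 'Magazines', month: '2017-04-01', weight: 7 }
]

chart_data = papers.map { |p| p[:paper_type] }.uniq.map do |type|
  sums = papers.select { |p| p[:paper_type] == type }
               .group_by { |p| p[:month] }
               .transform_values { |rows| rows.sum { |r| r[:weight] } }
  { name: type, data: sums }  # one {name:, data:} hash per line on the chart
end
```

chart_data is then exactly the array of {name:, data:} hashes that line_chart expects: one entry (one line) per paper_type, with the summed weight per month as the data hash.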
Opengl rotate an object using the mouse (C++) I have an object that I want to rotate using the mouse in OpenGL (I'm using glut). I'm keeping track of the mouse movement and rotate according to the change in x and y of the mouse. But the problem is that the object doesn't move how I want it to. For instance, when I move in x, then move in y, and then move in x again, the object seems to move diagonally, while I want it to move from left to right. I know that this is because the newer rotations get applied before the older rotations (resulting in unintuitive rotation) because of how matrix multiplication works. But I have no idea how to tackle this issue. One way I thought of is to change the axes about which I rotate according to the current rotation, but I have no idea if this will yield the correct result. Another thing I thought of is to make sure the latest rotation gets applied last (but before gluLookAt translates the scene), but I don't know how to do this. So what would be the best way to solve this? We have no idea what the behavior of your code is and what behavior you would consider correct. You'll need to add your code and tell us exactly what it is doing and what you want it to do. Hard to know exactly what you want, but it seems like you're after an 'arcball' implementation of some type. Ken Shoemake's original paper is here, but you should be able to find implementations and descriptions of quaternions online. Also, you want a quaternion describing the current state of rotation, and while animating, a temporary quaternion multiplied with the current one until you 'release' the mouse, to prevent hysteresis. Sorry if it is not 100% clear what I mean. I googled 'arcball' and it is somewhat what I want, but less strict I guess. I just want to rotate based on the mouse movement; it is irrelevant whether or not I am actually clicking the object. If I press the mouse button and move from, say, left to right, I want the object to rotate around its y axis.
There are two main approaches you can take. Use quaternions by projecting mouse coordinates onto a sphere (the arcball approach). Make the sphere the size of the window, so you don't have to worry about clicking an object. Rotate about y for horizontal mouse movement. Then, for vertical mouse movement, rotate around the cross product of the view vector and y (be careful if the cross product is zero). This is the approach used by Maya and others for orienting an editing view. Thanks for the suggestions. I think I was already using your second method though. But since I'm looking straight down the -z axis the cross product is the x axis (which I was rotating around). The problem I have is when I rotate it and it is multiplied with the current matrix it is applied before the rotation before that (if that makes sense). So it can happen that when I want to rotate left by moving my mouse to the left, the object is moving diagonally instead.
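One concrete fix for the "rotations compose in the wrong order" problem is to keep an accumulated orientation matrix and pre-multiply each new mouse-driven rotation into it, so the newest rotation always acts about the fixed world axes. The sketch below is framework-free and only shows the matrix bookkeeping (names are illustrative); with legacy OpenGL you would expand the 3x3 into a 4x4 and load it with glMultMatrixd before drawing.

```cpp
#include <cmath>

// Illustrative sketch: accumulate mouse rotations so the NEWEST rotation is
// pre-multiplied, i.e. applied about fixed WORLD axes rather than the
// object's already-rotated axes.
struct Mat3 {
    double m[3][3];
};

Mat3 identity() {
    Mat3 r{};
    for (int i = 0; i < 3; ++i) r.m[i][i] = 1.0;
    return r;
}

Mat3 mul(const Mat3& a, const Mat3& b) {  // matrix product a * b
    Mat3 r{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            for (int k = 0; k < 3; ++k)
                r.m[i][j] += a.m[i][k] * b.m[k][j];
    return r;
}

Mat3 rotY(double t) {                     // rotation about the world y axis
    Mat3 r = identity();
    r.m[0][0] = std::cos(t);  r.m[0][2] = std::sin(t);
    r.m[2][0] = -std::sin(t); r.m[2][2] = std::cos(t);
    return r;
}

Mat3 rotX(double t) {                     // rotation about the world x axis
    Mat3 r = identity();
    r.m[1][1] = std::cos(t); r.m[1][2] = -std::sin(t);
    r.m[2][1] = std::sin(t); r.m[2][2] = std::cos(t);
    return r;
}

// Called per mouse delta: orientation = incremental * orientation, so a
// horizontal drag always rotates about world y, no matter what came before.
Mat3 applyMouseDelta(const Mat3& orientation, double dx, double dy) {
    return mul(rotX(dy), mul(rotY(dx), orientation));
}
```

The key point is the multiplication order: post-multiplying (orientation * incremental) is what produces the diagonal drift described in the question.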
looping dataframes with different number of columns in r Maybe this is trivial, but I am trying to solve the following problem: I have two data frames, one with 25 and another with 9 columns. What I need to do is fit polynomial equations where my dependent variable is in the data frame with 25 columns and my independent variable is in the data frame with 9 columns. At the moment I have combined the columns together and created a data frame called "my.data", so I am looping over the dependent variables using one independent variable at a time. But I would like to run the fits in the loop 25 * 9 times automatically. Is there any way to do that? setwd("C:\\......") my.data <- read.table("MyData.txt", header = TRUE, sep = "\t") for(i in seq_along(my.data)) { fit1b <- lm(my.data[ ,i] ~ my.data$V1) fit2b <- lm(my.data[ ,i] ~ poly(my.data$V1, 2, raw=TRUE)) fit3b <- lm(my.data[ ,i] ~ poly(my.data$V1, 3, raw=TRUE)) poly1 <-capture.output(summary(fit1b)) poly2 <-capture.output(summary(fit2b)) poly3 <-capture.output(summary(fit3b)) con = file(description = "MyResults.txt", open="a") write.table(poly1, file= con, append = TRUE, quote=F, col.names=FALSE, row.names= F) write.table(poly2, file= con, append = TRUE, quote=F, col.names=FALSE, row.names= F) write.table(poly3, file= con, append = TRUE, quote=F, col.names=FALSE, row.names= F) close(con) } This is a perfect opportunity to use mapply and expand.grid. For example:
# some dummy data xx <- data.frame(replicate(5, runif(50))) yy <- setNames(data.frame(replicate(3, runif(50))), paste0('Y',1:3)) # all combinations cs <- expand.grid(list(pred = names(xx), resp = names(yy)), stringsAsFactors= FALSE) # a function to do the fitting fitting <- function(pred, resp, dd){ # fit linear model ff <- reformulate(pred, resp) lmf <- lm(ff, data =dd) # create a formula for poly(,2) ff.poly2 <- update(ff, .~poly(.,2, raw=TRUE)) # and poly(,3) ff.poly3 <- update(ff, .~poly(.,3, raw=TRUE)) # fit these models lmp2 <- lm(ff.poly2, data = dd) lmp3 <- lm(ff.poly3, data = dd) # return a list with these three models list(linear = lmf, poly2 = lmp2, poly3 = lmp3) } biglist <- mapply('fitting', pred = as.list(cs[['pred']]), resp = as.list(cs[['resp']]), MoreArgs = list(dd = cbind(xx,yy)), SIMPLIFY = FALSE) # give this list meaningful names names(biglist) <- do.call(paste, c(cs, sep = ':')) You can then extract things / summarize things using some nested lapply statements eg summaries of all the linear models lapply(lapply(biglist, `[[`,'linear'), summary) of the quadratic models lapply(lapply(biglist, `[[`,'poly2'), summary) If you want to extract the information from print(summary(lm)) in a single file, something like capture.output(lapply(biglist, function(x) lapply(x, summary)), file = 'results.txt') will create a file called results.txt with all the results printed there. thank you very much mnel, that worked very well!!! I m very rusty on R.... Thank you very much again!!!! David There is one thing in the code I would like to make it do. To have the output file summarized in the way below, if it is a list it does not come like that, but I am not sure I can do it as a summary. There is one thing I would like to do, to output the summary rather than the list, but I am not sure is possible to then use the writing function you have. Is there any way to obtain that? 
Call: lm(formula = My-Y-Label ~ My-X-Label) Residuals: Min 1Q Median 3Q Max -0.35445 -0.17420 -0.10931 0.06975 0.60246 Coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) 0.7560212 0.0720984 10.49 1.24e-14 *** My-X-Label 0.0072100 0.0006597 10.93 2.68e-15 *** Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 Residual standard error: 0.2812 on 54 degrees of freedom Multiple R-squared: 0.6887, Adjusted R-squared: 0.6829 F-statistic: 119.5 on 1 and 54 DF, p-value: 2.676e-15
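The core of the accepted answer is expand.grid, which enumerates every (predictor, response) pair so the fits can be run 25 * 9 times. The same cross-product enumeration, sketched in Python's stdlib for comparison (itertools.product plays the role of expand.grid; the fitting function is a placeholder for the three lm calls):

```python
from itertools import product

# Sketch of the expand.grid + mapply skeleton: enumerate every
# (predictor, response) pair and call a fitting function for each pair.
predictors = ['X1', 'X2', 'X3']   # stand-ins for the 9 independent columns
responses = ['Y1', 'Y2']          # stand-ins for the 25 dependent columns

def fitting(pred, resp):
    # Placeholder: in the R answer this returns the linear, quadratic and
    # cubic lm fits for the given pair.
    return f'{pred}:{resp}'

biglist = {f'{p}:{r}': fitting(p, r) for p, r in product(predictors, responses)}
# 3 predictors x 2 responses -> 6 entries, mirroring the 25 x 9 = 225 fits.
```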
Error: [$rootScope:inprog] $apply already in progress I just tried to use 'ngTagsInput' in my angular app, and the tags in the app are running fine, but simultaneously it's throwing the error below in the console. It's not affecting my UI flow, but the errors are logging in the console. What is the cause of, and solution for, this error? Error: Error: [$rootScope:inprog] $apply already in progress http://errors.angularjs.org/1.2.6/$rootScope/inprog?p0=%24apply...... This is the order in which my files are loaded in the browser: <script type="text/javascript" src="js/ui-bootstrap.js"></script> <script src="https://ajax.googleapis.com/ajax/libs/angularjs/1.2.0rc1/angular-route.min.js"></script> <script type="text/javascript" src="//cdnjs.cloudflare.com/ajax/libs/angular.js/1.2.6/angular-resource.js"></script> <script type="text/javascript" src="js/ui-bootstrap-tpls.js"></script> <script src="js/angular-animate.min.js" ></script> <script src="http://cdnjs.cloudflare.com/ajax/libs/angularjs-toaster/0.4.4/toaster.js"></script> <script type="text/javascript" src="js/angular-datatables.js"></script> <script type="text/javascript" src="js/loading-bar.js"></script> <script type="text/javascript" src="js/timer.js"></script> <script src="//ajax.googleapis.com/ajax/libs/angularjs/1.2.19/angular-cookies.js"></script> <script src="//cdn.ckeditor.com/4.4.7/standard/ckeditor.js"></script> <script src="js/jquery-ui.js"></script> <script type="text/javascript" src="js/angular-sanitize.js"></script> <script type="text/javascript" src="../bower_components/angular-dragdrop/src/angular-dragdrop.min.js"></script> <script type="text/javascript" src="../bower_components/ng-tags-input/ng-tags-input.js"></script> and my view <tags-input ng-model="email_details.cc"></tags-input> my controller $scope.emailTemplates=[{ 'id' :1, 'cc' :['developer<EMAIL_ADDRESS> 'bcc'<EMAIL_ADDRESS> 'subject' :'1testing common template testing common template ' , 'created_on':'2015-07-04 16:04:02', 'body' :'body<b>hi' },{ 'id' :2, 'cc'
:['cc2<EMAIL_ADDRESS> 'bcc' :['bcc<EMAIL_ADDRESS> 'subject' :'2testing common template testing common template ' , 'created_on':'2015-07-04 16:04:02', 'body' :'<p>a</p>' } ] There are the self defined directives where in i am using scope.watch/.eval/.apply .directive("myDatepicker", function () { return { restrict: "A", require: "ngModel", link: function (scope, elem, attrs, ngModelCtrl) { var updateModel = function (dateText) { scope.$apply(function () { ngModelCtrl.$setViewValue(dateText); }); }; var options = { dateFormat: "yy/mm/dd", onSelect: function (dateText) { updateModel(dateText); } }; elem.datepicker(options); } } }); .directive("ngRandomClass", function () { return { restrict: 'EA', replace: false, scope: { ngClasses: "=" }, link: function (scope, elem, attr) { //Add random background class to selected element elem.addClass(scope.ngClasses[Math.floor(Math.random() * (scope.ngClasses.length))]); } } }) .directive('compile', function($compile) { return function(scope, element, attrs) { scope.$watch( function(scope) { return scope.$eval(attrs.compile); }, function(value) { var result = element.html(value); //console.log(scope.$parent); $compile(element.contents())(scope.$parent.$parent); } ); }; }) Please, post your code so people could help you find an issue. It is difficult to do with just an error message. Also it would be great if you could reproduce the problem on Plunker or JSFiddle and share it. i had just injected a new dependency and i found an error if i use it as an html attribute Did you add ngTagsInput to your index.html file? Did you add it to your app.js? Have you any alerts present? Sometime these cause digest errors. @MichaelRadionov i have updated my codes now.Hope that might help you. Are you calling scope.apply() or scope.digest() manually? @BobDoleForPresident No no where. 
Is it because one of the earlier-loaded JS files is affecting tagsInput or clashing with it? Or do I need to change the order in which the JS files are loaded? @BobDoleForPresident sorry, but yes, I am using scope.$apply() - check my updated question. Try writing the apply() function like this: function () { ngModelCtrl.$setViewValue(dateText); scope.$apply(); }; I guess it's because there is some code (outside the Angular project) somewhere that changes the browser's cookies when Angular tries to change them too, so a collision occurs.
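A common pragmatic workaround (separate from the cookie theory above) is to guard $apply so it is skipped when a digest cycle is already running. This is a sketch against Angular 1.x's internal $$phase flag; relying on an internal flag is a workaround rather than an official API, and "scope" here is any Angular scope object:

```javascript
// Hedged sketch: only call $apply when no digest is in progress.
// $$phase is an internal Angular 1.x flag on the root scope.
function safeApply(scope, fn) {
  var phase = (scope.$root || scope).$$phase;
  if (phase === '$apply' || phase === '$digest') {
    fn();               // already inside a digest cycle: just run the code
  } else {
    scope.$apply(fn);   // outside a digest: trigger one as usual
  }
}
```

In the datepicker directive above, updateModel could then call safeApply(scope, function () { ngModelCtrl.$setViewValue(dateText); }) instead of calling scope.$apply directly.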
What does it mean to Taylor expand $1/(\ln(r) - \ln(x) + i\pi/2)$ in powers of $1/\ln(x)$? What does it mean to Taylor expand $1/(\ln(r)-\ln(x)+i\pi/2)$ in powers of $1/\ln(x)$? Can someone help me understand what this text means? I could understand everything except for the last step, which is to Taylor expand that part of the integrand in powers of $1/\ln(x)$. I tried to Taylor expand it directly, but I run into product rules and it does not seem to match what is in the text. You have an expression $$ \frac{1}{a-w}. $$ You can expand it as a geometric series in $\frac aw$ with the understanding that $|w|>|a|$ using the identity $$ \frac{1}{a-w}=-\frac1w\cdot\frac1{1-\frac aw}=-\frac1w\cdot\sum_{k\ge 0}\left(\frac aw\right)^k. $$ Thank you so much. Lol, it is so simple. The text said Taylor series, but is a geometric series a type of Taylor expansion? I guess I have to go over the definition.
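To the follow-up question: yes — for $|a/w|<1$ the geometric series is exactly the Taylor series of $1/(a-w)$ in the variable $1/w$, so the two names agree here. A quick numerical sanity check (the sample values for $a$ and $w$ below are arbitrary; $a$ plays the role of $\ln(r)+i\pi/2$ and $w$ the role of $\ln(x)$):

```python
import cmath

# Numerical check of 1/(a - w) = -(1/w) * sum_k (a/w)^k, valid for |w| > |a|.
a = cmath.log(2.0) + 1j * cmath.pi / 2   # stand-in for ln(r) + i*pi/2
w = 10.0                                 # "large ln(x)", so |a/w| < 1

exact = 1 / (a - w)
series = -(1 / w) * sum((a / w) ** k for k in range(40))
assert abs(exact - series) < 1e-12       # 40 terms already match to machine precision
```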
How to 'convert' MP3 file to numpy array or list I'm working on an audio-related project that connects with a Django backend via a REST API. Part of the front-end requires displaying waveforms of associated MP3 files, and for this it in turn requires optimized data for each MP3 file in the form of an array, which the front-end (JavaScript) then processes and converts to a waveform. I can pick the associated MP3 file from backend storage; the problem is converting it into an array which I can serve to the front-end API. I have tried several methods but none seem to be working. I tried this: How to read a MP3 audio file into a numpy array / save a numpy array to MP3? which leaves my computer hanging until I force it to restart by holding the power button down. I have a working ffmpeg, so I have also tried this: Trying to convert an mp3 file to a Numpy Array, and ffmpeg just hangs which continues to raise a TypeError on np.fromstring(data[data.find("data")+4:], np.int16). I can't actually say what the problem is and I really hope someone can help. Thank you in advance! EDIT This is the django view for retrieving the waveform data: NB: I've only included the useful code as I'm typing on my mobile phone. def waveform(self, request, ptype, id): project = Project.objects.get(pk=id) audio = project.audio mp3_path = os.path.join(cdn_dir, audio) cmd = ['ffmpeg', '-i', mp3_path, '-f', 'wav', '-'] p = Popen(cmd, stdin=PIPE, stdout=PIPE, stderr=PIPE, creationflags=0x8000000) data = p.communicate()[0] array = np.fromstring(data[data.find("data")+4:], np.int16) return Response(array) The TypeError I get is this: TypeError: argument should be integer or bytes-like object, not "str" Maybe showing a little more code would help, like how you define all the variables involved in the error. A TypeError is raised when you perform operations with invalid types, so make sure each of these variables has the type you assume it has (as in print(type(_variable_name_))).
See this answer: https://stackoverflow.com/questions/9458480/read-mp3-in-python-3 — did it help? @Marc Compte, I've updated my question to include the useful code. Kindly check and give your suggestion. Thanks in advance. @dev ved, I'll try those solutions and let you know if any of them works. Thanks for your help.
PHP typehint specialization without actual method declaration I have the following piece of code all over the place: class Container extends Object implements \IteratorAggregate { public function AddObject(Object $object, $instanceKey) { ... } public function AddComponent(Component $component) { ... } } class MenuContainer extends Container { public function AddComponent(Menu $component) { // <-- I'm redeclaring the method only because I need to change the typehint return parent::AddComponent($component);// I don't do anything useful here } public function AddObject(Menu $object, $instanceKey) { // <-- I'm redeclaring the method only because I need to change the typehint return parent::AddObject($object, $instanceKey); // I don't do anything useful here } } I'm forced to do typehint specialization via method redeclaration because I want to prevent people who are using my code from making a mistake and accidentally adding something incompatible to my menu. So the question: is there a way of doing typehint specialization without the actual method redeclaration? Instead of specializing your methods, specialize the types you're passing: class Foo { } class Bar extends Foo { } class Container extends Object implements \IteratorAggregate { public function AddObject(Foo $menu) { } } Container::AddObject(new Bar()); Thank you for your answer, but I think you've missed the point, deceze. In your example you allow both "Container::AddObject(new Foo())" and "Container::AddObject(new Bar())". But in my case, when Bar is allowed, Foo is not allowed, and typehint specialization helps with that. However, it forces me to create lots and lots of stub methods that don't do anything useful except "return parent::AddComponent($component);". So I would like not to create these stub methods in the derived class, but to change the typehint instead.
Here's the link to another post: http://stackoverflow.com/questions/4742302/php-with-netbeans-applying-new-phpdoc-without-the-actual-declaration There I figured out how to change the PhpDoc; now I would like to change the typehint the same way, if that is possible.
How to make an Effect in ngrx I've created a simple effect in ngrx using this as a template. My code is slightly different: @Effect() addSeries$ = this.actions$ //Listen for the 'ADD_SERIES' action .ofType(GraphsActions.ADD_SERIES) //use that payload to construct a data query .map(action => action.payload) .switchMap(payload => this.http.get(/*making a url with payload here*/) // If successful, dispatch success action with result .map(res => ({ type: GraphsActions.SERIES_DATA, payload: res.json() })) // If request fails, dispatch failed action .catch(() => Observable.of({ type: 'FAILED' }))); In the final map I want the payload to contain information from the initial payload from the first map; how would that be accomplished? I'm still having a bit of trouble wrapping my head around how some of this streaming stuff works, so a plain-English explanation of what exactly is happening here would be helpful as well. With ngrx, an Effect allows you to trigger side effect(s) when some action has been dispatched. That said, a simple effect workflow would be: - Dispatch FETCH_PERSON_DETAILS (with payload: {personId: string}). In the reducer, for that person, just set a boolean isFetchingDetails to true. That allows you to show a spinner (for example) while loading the details of that person. - From an effect, catch that action and launch an HTTP request to get the details. - Once you've got the response, dispatch FETCH_PERSON_DETAILS_SUCCESS with the data from the response - If an error happened while fetching the data, dispatch FETCH_PERSON_DETAILS_FAILED with only the personId (which you can find in the previous action.payload) Here, your problem is simply the indentation of your code.
If we re-indent it: @Effect() addSeries$ = this.actions$ .ofType(GraphsActions.ADD_SERIES) .map(action => action.payload) .switchMap(payload => this.http.get(/*making a url with payload here*/) .map(res => ({ type: GraphsActions.SERIES_DATA, payload: res.json() })) .catch(() => Observable.of({ type: 'FAILED' }))); We can see that the map is inside the switchMap. Thus, from the map you have access to the switchMap parameter(s) --> payload. So in order to have the content from the previous payload plus the response, you might do: .switchMap(payload => this.http.get(/*making a url with payload here*/) .map(res => ({ type: GraphsActions.SERIES_DATA, payload: { ...payload, ...res.json() } })) (note that inside the switchMap only payload is in scope, not action; re-arrange the payload as you want) Ohhh thanks! I should've at least tried using the original payload in the map, but I had convinced myself that I couldn't. Thanks for the explanation too, that really clears up Effects!
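The merge itself is plain object spreading; a framework-free sketch (no rxjs, no ngrx — the function name and field names are illustrative) of combining the original action payload with the parsed response into the success action:

```javascript
// Hedged sketch of the success-action construction, outside of rxjs/ngrx:
// keep the fields of the original payload and add the fields of the response.
// Object.assign({}, a, b) is equivalent to { ...a, ...b } here.
function buildSuccessAction(type, payload, responseBody) {
  return { type: type, payload: Object.assign({}, payload, responseBody) };
}
```

If a field name appears in both objects, the response wins; if you need both copies, nest them instead, e.g. { request: payload, response: responseBody }.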
Unitary Transformation for a Self-Adjoint Elliptic Operator in one dimension Let $a\colon \mathbb{R}\to \mathbb{R}_+$ be bounded, bounded from below by a strictly positive constant, and Lipschitz continuous. Consider the self-adjoint linear operator $L:=-\partial_x a(x)\partial_x$ (in divergence form) on $L^2(\mathbb{R})$ with domain $D(L)=H^2(\mathbb{R})$. By the spectral theorem for self-adjoint operators, $L$ is unitarily equivalent to a multiplication operator, i.e., there are a $\sigma$-finite measure space $(\Omega, \mu)$, a unitary operator $U\colon L^2(\Omega)\to L^2(\mathbb{R})$, and a multiplication operator $M\colon L^2(\Omega)\to L^2(\Omega)$ such that $L=UMU^*$. Is it possible to describe such a unitary operator "more explicitly"? For example, if $a=1$ is constant, then it is well known that $U$ can be chosen to be the $\textbf{Fourier transform}$. Here, I am curious about the more general case when $a\neq 1$. I would appreciate any hints or references connected to this question.
Rpy2: calling a function containing dots I'm trying to run an R function in Python via Jupyter Notebook. The problem is that my function name (from the mice lib) contains a dot. The name of the function is md.pattern, and this is the code that I'm trying to run:

from rpy2.robjects.packages import importr
mice = importr('mice')
mice.md.pattern(train)

and this is the error that I get:

AttributeError: module 'mice' has no attribute 'md'

I also tried to run:

from rpy2.robjects.packages import importr
mice = importr('mice')
pattern = robjects.r("md.pattern")
mice.pattern(train)

and get the same error.

The second way is OK, but you should use pattern(train) and not mice.pattern(train). Ideally, you would use pattern = robjects.r["md.pattern"] rather than pattern = robjects.r("md.pattern")

@krassowski, thank you very much! you are the best (-:

Besides the suggested answer in the comments, the doc suggests that the following should work: mice.md_pattern(train) https://rpy2.github.io/doc/v3.3.x/html/introduction.html#importing-packages
Getting DATETIME for rows inserted/Modified I am using SQL Server 2005, and I have a requirement to get the creation datetime of all the rows in a particular table. Unfortunately the table does not have any "rowversion" or datetime column (I know this is a major design flaw). So, I was wondering if SQL Server maintains a datetime for each row insert. Comments/suggestions appreciated. Regards, Deepak

No, SQL Server does not timestamp the rows automatically when they are created. As you suggested, for the future, you may want to create a new date_created column and set it to default to GETDATE():

ALTER TABLE your_table
ADD CONSTRAINT dc_your_table_date_created
DEFAULT GETDATE() FOR date_created;

You may also use a trigger instead, if you prefer, as @Vash suggested in the other answer.

If this is for business purposes, you should add the column to the table and create a trigger AFTER INSERT or UPDATE that sets the current date on those rows. Or you can use rowversion.
cannot find -lplot - fedora I installed gnuplot using sudo yum install gnuplot in the terminal. I have a cpp file that uses gnuplot. It compiles without error; the error occurs on linking.

Compile: g++ -c plot.cpp
Link: g++ -o exe plot.o -lplot

Code:

int main()
{
    FILE *pipe = popen("gnuplot -persist", "w");
    // set axis ranges
    fprintf(pipe, "set xrange [0:11]\n");
    fprintf(pipe, "set yrange [0:]\n");
    int b = 5;
    int a;
    // to make 10 points
    std::vector<int> x(10, 0); // x values
    std::vector<int> y(10, 0); // y values
    for (a = 0; a < 10; a++) // 10 plots
    {
        x[a] = a;
        y[a] = 2 * a; // some function of a
        fprintf(pipe, "plot '-'\n"); // 1 additional data point per plot
        for (int ii = 0; ii <= a; ii++)
        {
            fprintf(pipe, "%d %d\n", x[ii], y[ii]); // plot `a` points
        }
        fprintf(pipe, "e\n"); // finally, e
        fflush(pipe); // flush the pipe to update the plot
        usleep(1000000); // wait a second before updating again
    }
    return 0;
}

But your program doesn't actually use any functions from any plot library; all it uses is standard functions (either of C or of POSIX), so there's no need to link with any plot library. All your program does is execute an external program, and if it's in the path then it will run.

Add a flag -L /where/ever/you/have/the/lib -lplot to specify where libplot.a resides, if you need that lib at all. From your code, however, it seems that you're just feeding data into gnuplot and don't need to link against any libplot.a.

I don't have libplot.a. I opened the /lib folder; it is not there.

@holazollil The /lib directory should only contain system libraries, and the linker will automatically look in it as well as /usr/lib. You need to specify where you have installed the library (you have installed it, haven't you?).

I looked in /usr/lib but it's not there.

Are you sure you need that lib? Your code doesn't suggest it (at least the part that is shown). What happens when you link without the lib?
common-pile/stackexchange_filtered
Input widths on Bootstrap 3 Update again: I am closing this question by selecting the top answer to keep people from adding answers without really understanding the question. In reality there is no way to do it with the built-in functionality without using grid or adding extra css. Grids do not work well if you are dealing with help-block elements that need to go beyond a short input, for example, but they are 'built-in'. If that is an issue I recommend using extra css classes, which you can find in the BS3 discussion here. Now that BS4 is out it is possible to use the included sizing styles to manage this, so this is not going to be relevant for much longer. Thanks all for good input on this popular SO question.

Update: This question remains open because it is about built-in functionality in BS to manage input width without resorting to grid (sometimes they have to be managed independently). I already use custom classes to manage this, so this is not a how-to on basic css. The task is in the BS feature discussion list and has yet to be addressed.

Original Question: Has anyone figured out a way to manage input width on BS 3? I'm currently using some custom classes to add that functionality, but I may have missed some undocumented options. Current docs say to use .col-lg-x, but that clearly doesn't work, as it can only be applied to the container div, which then causes all kinds of layout/float issues.

Here's a fiddle. What's weird is that on the fiddle I can't even get the form-group to resize. http://jsfiddle.net/tX3ae/

<form role="form" class="row">
    <div class="form-group col-lg-1">
        <label for="code">Name</label>
        <input type="text" class="form-control">
    </div>
    <div class="form-group col-lg-1 ">
        <label for="code">Email</label>
        <input type="text" class="form-control input-normal">
    </div>
    <button type="submit" class="btn btn-default">Submit</button>
</form>

Why use bootstrap to manage these widths? It doesn't appear to be particularly good at it, and it introduces complexity.
Why use bootstrap for this, @Eamon? Responsive design and alignment with the rest of your elements that bootstrap is handling come to mind. And anyway, the whole point of the question is how to do it without introducing complexity. @ctb: plain CSS handles these kind of issues just fine; there's no need for the additional complexity of bootstrap. so wait, was this question answered? 38 upvotes is a lot. @rook - the answer is that no, there is no builld-in functionality though it's on their discussion list. However as alternatives go there are a few provided here until/if BS actually adds that. What you want to do is certainly achievable. What you want is to wrap each 'group' in a row, not the whole form with just one row. Here: <div class="container"> <h1>My form</h1> <p>How to make these input fields small and retain the layout.</p> <form role="form"> <div class="row"> <div class="form-group col-lg-1"> <label for="code">Name</label> <input type="text" class="form-control" /> </div> </div> <div class="row"> <div class="form-group col-lg-1 "> <label for="code">Email</label> <input type="text" class="form-control input-normal" /> </div> </div> <div class="row"> <button type="submit" class="btn btn-default">Submit</button> </div> </form> </div> The NEW jsfiddle I made: NEW jsfiddle Note that in the new fiddle, I've also added 'col-xs-5' so you can see it in smaller screens too - removing them makes no difference. But keep in mind in your original classes, you are only using 'col-lg-1'. That means if the screen width is smaller than the 'lg' media query size, then the default block behaviour is used. Basically by only applying 'col-lg-1', the logic you're employing is: IF SCREEN WIDTH < 'lg' (1200px by default) USE DEFAULT BLOCK BEHAVIOUR (width=100%) ELSE APPLY 'col-lg-1' (~95px) See Bootstrap 3 grid system for more info. I hope I was clear otherwise let me know and I'd elaborate. 
yes that is what I put in my comment to @Skelly - there is an open issue on this and they will address it once things settle down. Adding row is not what I am looking for due to the extra markup, which is totally unnecessary if they made the proper classes - which is what I did. I created .input1-12 with percentage widths and I just apply that to the input - works great.

I haven't read much regarding this post but... "Use Bootstrap's predefined grid classes to align labels and groups of form controls in a horizontal layout by adding .form-horizontal to the form. Doing so changes .form-groups to behave as grid rows, so no need for .row."

@dtc That doesn't help much for input widths though?

@shadowf And what if I want to make an input shorter than its label? Do I have to put an input in yet another pair of divs?

@PowerGamer I'm not sure I fully understand the question - do you mean to make the input element shorter? To do so, you only need to change the containing row's class - e.g. change col-xs-5 to col-xs-1. Because the contained label has no width specified on it but the contained input does (through the form-control class applied to it), the label's size doesn't change but the input will become shorter. Does this answer your question?

@shadowf Nope, I meant when the label text is long and I want an input to be shorter than the label text without the label text wrapping to the next line, see http://jsfiddle.net/sf80qu6g/. Now to make an input ~1/3 of its current width you'll have to put the input into more divs.

It works for me. But now I have an issue. The sizing applies to both the label and the text box. How can I make the sizing apply only to the text box and not the label? Modified fiddle here http://jsfiddle.net/jarajesh/kytsf0vz/

@Mr.X if I understand your question correctly (after all, it's Sunday morning...!), what you want is to limit the width of the textbox but not the label? If so, then you need to utilise the nesting feature of BS.
Essentially, put "rows" inside each column and then your desired columns inside each row. It's probably easier if you see the code... Check out this: http://jsfiddle.net/1k1qoaer/ Please let me know if this answers your question. Cheers. thanks that works for me. By the way its Sunday 10 pm for me :-) as I am on the other side of the globe. i have checked your jsfiddle and saw there you use two class col-xs-5 & col-lg-1.....why? @MonojitSarkar, this is the basis of how BS3's grid system works - you specify different classes so that BS will adjust according to the screen size and the classes you specify (see http://getbootstrap.com/docs/3.3/examples/grid/) In essence, it is what I describe in the pseudo code in my answer above: if there is only the col-lg-1 class on the element, when the screen size smaller than 1200px, col-lg-1 is ignored - and if a smaller class is not defined, the default behaviour of the element (being a div that'd be acting as a block) will kick in. Does this make sense? See the link to BS3 docs. In Bootstrap 3 You can simply create a custom style: .form-control-inline { min-width: 0; width: auto; display: inline; } Then add it to form controls like so: <div class="controls"> <select id="expirymonth" class="form-control form-control-inline"> <option value="01">01 - January</option> <option value="02">02 - February</option> <option value="03">03 - March</option> <option value="12">12 - December</option> </select> <select id="expiryyear" class="form-control form-control-inline"> <option value="2014">2014</option> <option value="2015">2015</option> <option value="2016">2016</option> </select> </div> This way you don't have to put extra markup for layout in your HTML. This wins by far on the answers. It's clean, reusable, and just works. In fact, it should be put in bootstrap by default as this is a very common annoyance. Very rarely do you want pure block forms. I tried this solution but it didn't work for me. 
I wanted to limit the width of my field so I used 'width:200px' in the custom style, but that did not change the width. Only when I set 'max-width:200px' did I get the correct result. When I use this the width will not override the form-control class auto width. Agreed with comments here - I actually tried this before looking here, it doesn't seem to overwrite the defaults... unsure why some custom styles do and some don't. Here's a fiddle demonstrating what this looks like: http://jsfiddle.net/yd1ukk10/ please add to your answer a JS fiddle provided by @brandones (or create your own). Its very helpful to see the result when dealing with bootstrap markup This is a nice solution, only I had to add !important to width & display. Thank you, George, this works on Bootstrap 4, using various class libraries. ASP.net MVC go to Content- Site.css and remove or comment this line: input, select, textarea { /*max-width: 280px;*/ } If you look at the fiddle, you will see that the OP wants to make the fields smaller, not larger. The bigger problem here is that his site is not referencing the site.css at all. I didn't know that css rule was there and it was driving me crazy that my specified widths seemed to be being ignored. Thank you. I think you need to wrap the inputs inside a col-lg-4, and then inside the form-group and it all gets contained in a form-horizontal.. <form class="form form-horizontal"> <div class="form-group"> <div class="col-md-3"> <label>Email</label> <input type="email" class="form-control" id="email" placeholder="email"> </div> </div> ... </form> Demo on Bootply - http://bootply.com/78156 EDIT: From the Bootstrap 3 docs.. Inputs, selects, and textareas are 100% wide by default in Bootstrap. To use the inline form, you'll have to set a width on the form controls used within. So another option is to set a specific width using CSS: .form-control { width:100px; } Or, apply the col-sm-* to the `form-group'. 
You have to add row class to form-group - I underwent a discussion with the Bootstrap team and they have added the possibility of input specific width control to their planning list. In the meantime we have to use full grids. If you'll go ahead and add row to the form-group class and remove the horizontal class I'll mark this answer. Here's the working fiddle http://jsfiddle.net/tdUtX/2/ and the planning https://github.com/twbs/bootstrap/issues/9397 +@Yashua - +1 great example. This also works with the input-group class, which was giving me trouble. Current docs say to use .col-xs-x , no lg. Then I try in fiddle and it's seem to work : http://jsfiddle.net/tX3ae/225/ to keep the layout maybe you can change where you put the class "row" like this : <div class="container"> <h1>My form</h1> <p>How to make these input fields small and retain the layout.</p> <div class="row"> <form role="form" class="col-xs-3"> <div class="form-group"> <label for="name">Name</label> <input type="text" class="form-control" id="name" name="name" > </div> <div class="form-group"> <label for="email">Email</label> <input type="text" class="form-control" id="email" name="email"> </div> <button type="submit" class="btn btn-default">Submit</button> </form> </div> </div> http://jsfiddle.net/tX3ae/226/ doesn't work for small devices as the input-field gets too small <div class="form-group col-lg-4"> <label for="exampleInputEmail1">Email address</label> <input type="email" class="form-control" id="exampleInputEmail1" placeholder="Enter email"> </div> Add the class to the form.group to constraint the inputs That doesn't really work. It causes every subsequent element to float next to this one. You have to add a div with class row on every form group which is ridiculous. If you are using the Master.Site template in Visual Studio 15, the base project has "Site.css" which OVERRIDES the width of form-control fields. I could not get the width of my text boxes to get any wider than about 300px wide. 
I tried EVERYTHING and nothing worked. I found that there is a setting in Site.css which was causing the problem. Get rid of this and you can get control over your field widths. /* Set widths on the form inputs since otherwise they're 100% wide */ input[type="text"], input[type="password"], input[type="email"], input[type="tel"], input[type="select"] { max-width: 280px; } I know this is an old thread, but I experienced the same issue with an inline form, and none of the options above solved the issue. So I fixed my inline form like so:- <form class="form-inline" action="" method="post" accept-charset="UTF-8"> <div class="row"> <div class="form-group col-xs-7" style="padding-right: 0;"> <label class="sr-only" for="term">Search</label> <input type="text" class="form-control" style="width: 100% !important;" name="term" id="term" placeholder="Search..." autocomplete="off"> <span class="help-block">0 results</span> </div> <div class="form-group col-xs-2"> <button type="submit" name="search" class="btn btn-success" id="search">Search</button> </div> </div> </form> That was my solution. Bit hacky hack, but did the job for an inline form. You can add the style attribute or you can add a definition for the input tag in a css file. Option 1: adding the style attribute <input type="text" class="form-control" id="ex1" style="width: 100px;"> Option 2: definition in css input{ width: 100px } You can change the 100px in auto I hope I could help. In Bootstrap 3 All textual < input >, < textarea >, and < select > elements with .form-control are set to width: 100%; by default. http://getbootstrap.com/css/#forms-example It seems, in some cases, we have to set manually the max width we want for the inputs. Anyway, your example works. Just check it with a large screen, so you can see the name and email fields are getting the 2/12 of the with (col-lg-1 + col-lg-1 and you have 12 columns). 
But if you have a smaller screen (just resize your browser), the inputs will expand until the end of the row. You don't have to give up simple css :) .short { max-width: 300px; } <input type="text" class="form-control short" id="..."> If you're looking to simply reduce or increase the width of Bootstrap's input elements to your liking, I would use max-width in the CSS. Here is a very simple example I created: <form style="max-width:500px"> <div class="form-group"> <input type="text" class="form-control" id="name" placeholder="Name"> </div> <div class="form-group"> <input type="email" class="form-control" id="email" placeholder="Email Address"> </div> <div class="form-group"> <textarea class="form-control" rows="5" placeholder="Message"></textarea> </div> <button type="submit" class="btn btn-primary">Submit</button> </form> I've set the whole form's maximum width to 500px. This way you won't need to use any of Bootstrap's grid system and it will also keep the form responsive. I'm also struggled with the same problem, and this is my solution. HTML source <div class="input_width"> <input type="text" class="form-control input-lg" placeholder="sample"> </div> Cover input code with another div class CSS source .input_width{ width: 450px; } give any width or margin setting on covered div class. Bootstrap's input width is always default as 100%, so width is follow that covered width. This is not the best way, but easiest and only solution that I solved the problem. Hope this helped. 
Bootstrap 3 I achieved a nice responsive form layout using the following:

<div class="row">
    <div class="form-group col-sm-4">
        <label for=""> Date</label>
        <input type="date" class="form-control" id="date" name="date" placeholder=" date">
    </div>
    <div class="form-group col-sm-4">
        <label for="hours">Hours</label>
        <input type="" class="form-control" id="hours" name="hours" placeholder="Total hours">
    </div>
</div>

I do not know why everyone seems to have overlooked the site.css file in the Content folder. Look at line 22 in this file and you will see the settings for input being controlled. It would appear that your site is not referencing this style sheet. I added this: input, select, textarea { max-width: 280px; } to your fiddle and it works just fine. You should never ever update bootstrap.css or bootstrap.min.css. Doing so will set you up to fail when bootstrap gets updated. That is why the site.css file is included. This is where you can make changes to the site that will still give you the responsive design you are looking for. Here is the fiddle with it working.

Add and define terms for the style="" attribute on the input field; that's the easiest way to go about it. Example:

<form>
    <div class="form-group">
        <label for="email">Email address:</label>
        <input type="email" class="form-control" id="email" style="width:200px;">
    </div>
    <div class="form-group">
        <label for="pwd">Password:</label>
        <input type="password" class="form-control" id="pwd" style="width:200px">
    </div>
    <button type="submit" class="btn btn-default">Submit</button>
</form>

Bootstrap uses the class 'form-input' for controlling the attributes of 'input fields'. Simply add your own 'form-input' class with the desired width, border, text size, etc. in your css file or head section. (Or else, directly add the size='5' inline code to the input attributes in the body section.)
Display data from MySQL to PHP page problem I'm trying to display some specific data from MySQL on a PHP web page, but I'm always getting "0 results". The code I used is:

<?php
$conn = mysqli_connect("localhost", "root", "", "summary");
// Check connection
if ($conn->connect_error) {
    die("Connection failed: " . $conn->connect_error);
}
$user= .$_SESSION["username"];
$sql = "SELECT money FROM users WHERE username= .$user";
$result = $conn->query($sql);
if (!empty($result) && $result->num_rows > 0) {
    // output data of each row
    while($row = $result->fetch_assoc()) {
        echo "<tr><td>" . $row["money"];
    }
    echo "</table>";
} else {
    echo "0 results";
}
$conn->close();
?>

First of all, you should use a prepared statement to avoid SQL injection. Try changing the code to this:

<?php
// create connection
$conn = new mysqli("localhost", "root", "", "summary");
// Check connection
if ($conn->connect_error) {
    die("Connection failed: " . $conn->connect_error);
}
$user = $_SESSION["username"];
$sql = "SELECT money FROM users WHERE username = ?";
if ($stmt = $conn->prepare($sql)) {
    $stmt->bind_param("s", $user);
    if ($stmt->execute()) {
        $result = $stmt->get_result();
        if ($result->num_rows > 0) {
            while ($row = $result->fetch_assoc()) {
                echo $row["money"];
            }
            $stmt->close();
            $conn->close();
        } else {
            echo "0 result";
        }
    }
}
?>

"Try to" doesn't explain what they did wrong.

@FunkFortyNiner the important thing is that the code is working fine )

Your SQL string is wrong.

$sql = "SELECT money FROM users WHERE username= '$user'";

However, I also strongly advise you to use prepared statements with parameter binding, either with mysqli_prepare or using PDO. So your code changes like this:

$user = $_SESSION["username"];
$sql = "SELECT money FROM users WHERE username= ?";
$stmt = $conn->prepare($sql);
if ($stmt) {
    $stmt->bind_param('s', $user);
    if ($stmt->execute()) {
        $result = $stmt->get_result();
        // ...
    }
}
Merge sort for prime number of elements? As far as I know, in merge sort we have to divide the elements into a number of groups. But if the number of elements is prime, then how is division possible? Do we divide them into unequal groups? If you are going to present an implementation, please do it in C or Python.

"Give me source codes" questions aren't what we do here. Check out freelancer. Alternatively, if you have theoretical issues, check on programmers.stackexchange or similar.

I am not sure what you mean. Say you've got a set of 17 elements. You can still divide that number by 5 and hence create 4 groups: 17/5 = 3 groups of 5 elements plus 1 group of 2 elements.

I wasn't asking for the implementation. What I said is that I won't be able to understand if you explain it using Java or other languages. Luckily, no specific language was required to help. Sorry if I broke "your" rules.

Merge sort does not require you to split your list into equally sized groups. In any properly written merge code, it shouldn't matter at all if the groups are slightly different sizes. You'll usually want them to be close to the same size (to divide the effort evenly, reducing the sort's complexity), but even that is not strictly necessary. The basic merge-sort algorithm will work even if you divide a length-N sequence into length-1 and length-(N-1) subsequences (though performance will be lousy).

Say there are 2k+1 elements; the list will be split into two parts of sizes k and k+1.

The elements are not necessarily divided evenly, but letting each group have as few elements as possible gives the smallest overall complexity. This is the general principle for m-way merge sort, where m denotes the number of groups you are going to have.
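To make the answers above concrete, here is a minimal merge sort sketch in Python (the function names are mine, not from the thread). It always splits at len(lst) // 2, so a prime number of elements is no problem; for 17 elements the two halves simply have 8 and 9 elements:

```python
def merge(left, right):
    # Merge two already-sorted lists into one sorted list.
    out = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i])
            i += 1
        else:
            out.append(right[j])
            j += 1
    # One of the two lists is exhausted; append the remainder of the other.
    out.extend(left[i:])
    out.extend(right[j:])
    return out


def merge_sort(lst):
    # A list of 0 or 1 elements is already sorted.
    if len(lst) <= 1:
        return lst
    mid = len(lst) // 2  # uneven split when len(lst) is odd, e.g. 17 -> 8 and 9
    return merge(merge_sort(lst[:mid]), merge_sort(lst[mid:]))
```

The merge step never cares whether the two parts have equal length, which is exactly why unequal groups are fine.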
How do I group a 2 dimensional list of objects in python 3.10? I have a list of gifs. Each "gif" is a list of frame objects set up like this:

gifs = [[1, 2, 3, 4, 5, 6],
        [1, 2, 3, 4, 5, 6],
        [1, 2, 3, 4, 5, 6],
        ...]

I need to reorder them into a list of lists where each sub-list contains the nth corresponding frame, i.e.

orderedGifs = [[1, 1, 1, ...],
               [2, 2, 2, ...],
               [3, 3, 3, ...],
               ...]

How might I go about achieving this?

Try: list(zip(*gifs))

Consider using numpy.

As @enke indicates in the comments, using list(zip(*gifs)) gets you the transposition, but it will be a list of tuples. If you need the groups to be lists, this works:

orderedGifs = list(map(list, zip(*gifs)))

It works because:
- *gifs unpacks the original gifs and passes all its contents to zip() (spreading)
- zip() pairs up elements in tuples from each iterable it's passed; in your case the 1st element of each list, then the 2nd, etc.
- map(list, xs) takes each element from xs and applies list() to it, returning an iterable of the results

So wrapping the whole thing in list() takes the tuples converted to lists and puts them in a list, as required.
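One caveat worth adding to the answer above (my note, not from the thread): zip stops at the shortest inner list, so if the gifs have different frame counts the extra frames are silently dropped. itertools.zip_longest pads the short gifs instead:

```python
from itertools import zip_longest

# Ragged input: the second gif has one extra frame.
gifs = [[1, 2, 3], [1, 2, 3, 4]]

# zip truncates to the shortest gif, so frame 4 disappears.
truncated = list(map(list, zip(*gifs)))

# zip_longest keeps every frame, filling gaps with None (configurable via fillvalue).
padded = [list(group) for group in zip_longest(*gifs)]
```

Whether truncating or padding is right depends on what the rest of the pipeline expects from a missing frame.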
Get Date object from DatePickerFragment in Android I have defined DatePickerFragment in my android application, here is the code for it: public class DatePickerFragment extends DialogFragment implements DatePickerDialog.OnDateSetListener { private int year; private int month; private int day; @Override public Dialog onCreateDialog(Bundle savedInstanceState) { // Use the current date as the default date in the picker final Calendar c = Calendar.getInstance(); int year = c.get(Calendar.YEAR); int month = c.get(Calendar.MONTH); int day = c.get(Calendar.DAY_OF_MONTH); // Create a new instance of DatePickerDialog and return it return new DatePickerDialog(getActivity(), this, year, month, day); } @Override public void onDateSet(DatePicker view, int year, int month, int day) { this.year = year; this.month = month; this.day = day; } public Date getDateFromDatePicker(){ Calendar calendar = Calendar.getInstance(); calendar.set(year, month, day); return calendar.getTime(); } } I want to retrieve the date from this date picker from TaskFragment, and set the "date" attribute in my class Task to be equal to the date selected // Method for date picker to appear once the "Due Date" button is clicked. datePickerButton.setOnClickListener(new View.OnClickListener() { @Override public void onClick(View v) { FragmentManager fm = getActivity().getFragmentManager(); DatePickerFragment datePicker = new DatePickerFragment(); datePicker.show(fm, "datePicker"); task.setDate(datePicker.getDateFromDatePicker()); //test System.out.println(task.getDate()); } }); The date picker appears when clicking the button. However, the date is not being set correctly, and is actually being set as soon as the date button is pressed, not when a date is selected from the picker. 
Also, when printing the date, I get the following no matter what date I select: I/System.out: Wed Dec 31 10:47:38 GMT+00:00 2 Reading some solutions here directed me to define the onDateSet and getDateFromDatePicker methods, but they do not seem to be functioning correctly. Any help is much appreciated, thank you! Where are you showing the date picker, in an activity? No in a fragment named TaskFragment. I have figured out that the call to the method getDateFromDatePicker happens before the onDateSet method is called, so I suppose I have to take the values from onDateSet and store them somewhere so that they can be accessed by the TaskFragment class. After some digging, I followed the method I read in the book Android Programming: The Big Nerd Ranch Guide, that clearly specifies how to pass data between two Fragments. Here are the steps I followed: Setting a Target Fragment: Here I made the Fragment that will receive the date (TaskFragment in my case) the target fragment of the DatePickerFragment through the setTargetFragment method // set taskFragment as the target fragment of DatePickerFragment datePicker.setTargetFragment(TaskFragment.this, 0); datePicker.show(fm, "datePicker"); Send date to the target Fragment through the DatePickerFragment's onDateSet method. I did this by passing an intent with the date to the target fragment's onActivityResult method. 
public void onDateSet(DatePicker view, int year, int month, int day) {
    // create date object using date set by user
    Calendar calendar = Calendar.getInstance();
    calendar.set(year, month, day);
    Date date = calendar.getTime();

    if (getTargetFragment() == null) return;

    Intent intent = new Intent();
    intent.putExtra(DATE, date);
    // pass intent to target fragment
    getTargetFragment().onActivityResult(getTargetRequestCode(), Activity.RESULT_OK, intent);
}

Override the onActivityResult method in the target fragment to accept the date:

@Override
public void onActivityResult(int requestCode, int resultCode, Intent data) {
    if (resultCode != Activity.RESULT_OK) {
        return;
    }
    if (requestCode == REQUEST_DATE) {
        Date date = (Date) data.getSerializableExtra(DatePickerFragment.DATE);
        task.setDate(date);
    }
}

When using DatePickerFragment, you would normally have an interface that is triggered when the user has selected the date. This is called DatePickerDialog.OnDateSetListener. In your code you are showing the picker and then setting the date straight away instead of waiting for the user to choose. So what you need to do is, when the onDateSet callback method is triggered, do what you want with the date then. If you are showing the DatePickerFragment in an Activity, one way of doing this would be to create an interface that allows sending messages from your Fragment to the Activity.
Why do WinVerifyTrust and sigcheck disagree about whether a file has a signature? I've used the WinVerifyTrust example from here, but I'm finding that it is getting a TRUST_E_NOSIGNATURE for some files that SysInternals sigcheck reports as signed. For example, c:\windows\system32\mfc42.dll is reported by WinVerifyTrust as signed, but c:\windows\system32\mfc42u.dll is reported as unsigned -- sigcheck reports both as being signed. I believe sigcheck is using WinVerifyTrust internally, but it must be using it differently than in the example I'm looking at -- any suggestions? I think this has to do with something called security catalog. Check out this example code: http://forum.sysinternals.com/howto-verify-the-digital-signature-of-a-file_topic19247.html
Finding shapes after merging together contours with opencv I would like to find specific shapes (circles and triangles) in the following image; there are two triangles and two circles in different sizes. I have used:

cv2.findContours(image=current_bw_img, mode=cv2.RETR_TREE, method=cv2.CHAIN_APPROX_NONE)

for getting all 8 contours. Afterwards I laid subsets of the contours onto a blank canvas, since some of the contours are parts of a recognizable shape; this way I managed to get recognizable shapes (triangles and circles). I have done it in an inefficient manner (2^8): I generated the whole powerset of contour groups, even though contours which are disjoint (not adjacent to each other) should be ignored. I guess a more correct approach would be to find the nearby contours.

def check_all_combinations(current_bw_img, all_contours):
    contours_ids = range(len(all_contours))
    contours_combinations_id = list(powerset(contours_ids))
    for img_id, relevant_contours in enumerate(contours_combinations_id):
        shape_size = current_bw_img.shape
        img_blank_bw = np.ones((shape_size[0], shape_size[1], 1), np.uint8) * 255
        for cnt_id in relevant_contours:
            cv2.drawContours(image=img_blank_bw, contours=[all_contours[cnt_id]], contourIdx=0, color=0, thickness=-1)
        cv2.imshow(winname="img_blank_bw", mat=img_blank_bw)
        img_name = "img_" + str(img_id) + ".png"
        cv2.imwrite(filename=img_name, img=img_blank_bw)

For example I got this image (one image out of the 256 subgroups), which is invalid, since the contours don't form a single shape (they are NOT adjacent). Is there a much better approach? I'll be glad to hear, thank you!!

"I have done it in an inefficient manner (2^8), generating the whole powerset of contour groups" - can you elaborate on this, or are you describing the complexity of the findContours function?
Sure, I'll elaborate: after I found the contours with findContours, I generated the (2^8) subsets of contours to check which subset constructs a recognizable shape.
"I have generated (2^8) subsets of contours for checking which subset constructs a recognizable shape" - Could you please post the code for this as well?
Yep, have posted the code, thank you!
Can you check if this could be of some help: https://stackoverflow.com/questions/34832959/overlapping-shapes-recognition-opencv ?
Thank you, I have read the solution in the link you gave here, but I'm looking for a way to retrieve only the adjacent contours.
Could you elaborate more on "adjacent contours"? Glancing at your code, it seems as if you are extracting all shapes present in the image.
I have now added an example for your question regarding "adjacent contours". I hope this is clearer now, thumbs up!
Would categorising the edges of each contour help, i.e. as either an arc or a straight line? And maybe you can identify edges which have a shared end; then looking for triangles involves making all the combinations of edges which are a) straight and b) form a closed shape (i.e. all have shared ends), and looking for circles involves making all the combinations of edges which are a) arcs and b) make a closed shape. Or maybe don't bother with contours and just use opencv HoughCircles and HoughLinesP? In the end this will differentiate arcs from lines...
@barny I'll be glad if you could elaborate a bit more about "identify edges which have a shared end" (each contour holds more than 200 points around its perimeter). Thank you!
Maybe using the image of each contour you can use goodFeaturesToTrack to find corners, and/or Hough lines for circles and lines - the aim being to turn each contour into a closed series of segments (either lines or arcs). And having got these segments, then look for where ends of segments either coincide with other ends or intersect with other segments.
But I don't have a solution, otherwise I'd give you an answer. However it might be simplest to not bother with contours: for circles just use HoughCircles (which is easy), or use HoughLinesP and some post-processing to try to detect triangles.
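One way to avoid enumerating the full powerset is to first group contours that lie near each other, and only test subsets within each group. Here is a minimal pure-Python sketch of that idea using union-find over bounding boxes (as returned by cv2.boundingRect); the boxes and the tolerance value below are illustrative assumptions, not part of the original code:

```python
def boxes_touch(a, b, tol=5):
    # a, b are (x, y, w, h) bounding boxes; True if they overlap
    # or lie within `tol` pixels of each other
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return not (ax + aw + tol < bx or bx + bw + tol < ax or
                ay + ah + tol < by or by + bh + tol < ay)

def group_adjacent(boxes, tol=5):
    # union-find: contours whose boxes touch end up in the same group
    parent = list(range(len(boxes)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for i in range(len(boxes)):
        for j in range(i + 1, len(boxes)):
            if boxes_touch(boxes[i], boxes[j], tol):
                parent[find(i)] = find(j)

    groups = {}
    for i in range(len(boxes)):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())

# hypothetical boxes from cv2.boundingRect(cnt) for each contour:
boxes = [(0, 0, 10, 10), (8, 0, 10, 10), (100, 100, 5, 5)]
print(group_adjacent(boxes))  # → [[0, 1], [2]]
```

With 8 contours split into, say, groups of size 3, 3 and 2, you would check 2^3 + 2^3 + 2^2 subsets instead of 2^8.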
Parallel exceptions are being caught

Somehow my exceptions seem to be getting caught by the method they are executing in. Here is the code to call the method. As you can see, I create a cancellation token with a timeout, register a method to call when the cancellation token fires, and then start a new task. The cancellation token appears to be working OK, as does the registered method.

var cancellationToken = new CancellationTokenSource(subscriber.TimeToExpire).Token;
cancellationToken.Register(() =>
{
    subscriber.Abort();
});
var task = Task<bool>.Factory.StartNew(() =>
    {
        subscriber.RunAsync((T)messagePacket.Body, cancellationToken);
        return true;
    })
    .ContinueWith(anticedant =>
    {
        if (anticedant.IsCanceled)
        {
            Counter.Increment(12);
            Trace.WriteLine("Request was canceled");
        }
        if (anticedant.IsFaulted)
        {
            Counter.Increment(13);
            Trace.WriteLine("Request was canceled");
        }
        if (anticedant.IsCompleted)
        {
            Counter.Increment(14);
        }
    });

The next piece of code is the method that seems to be not only throwing the exception (expected behavior) but also catching it:

public async override Task<bool> ProcessAsync(Message input, CancellationToken cancellationToken)
{
    Random r = new Random();
    Thread.Sleep(r.Next(90, 110));
    cancellationToken.ThrowIfCancellationRequested();
    return await DoSomethingAsync(input);
}

The exception is being thrown by the cancellation token, but according to IntelliTrace it is being caught at the end of the method. I have tried a number of different options, including throwing my own exception, but no matter what, the ContinueWith function always executes the IsCompleted (ran to completion) code. Any ideas on what I am doing wrong? Thanks

It's hard to understand what exactly is going on with your code when we don't see all of it. Could you create a small, but complete sample that shows your problem? Also, what is the difference between RunAsync() and ProcessAsync()?
Sorry, I missed the RunAsync / ProcessAsync difference when pulling my sample code out. Run calls Process with a couple of synchronous methods before and after.

I assume that RunAsync is the same as ProcessAsync.

"The exception is being thrown by the cancellation token but according to IntelliTrace it is being caught at the end of the method." Yup. Any async method will catch its own exceptions and place them on its returned Task. This is by design.

"no matter what the ContinueWith function always executes the IsCompleted or ran to completion code." Well, let's take another look at the code:

var task = Task<bool>.Factory.StartNew(() =>
    {
        subscriber.RunAsync((T)messagePacket.Body, cancellationToken);
        return true;
    })
    .ContinueWith(

Consider the lambda passed to StartNew: it calls RunAsync, it ignores the Task that it returns, and then it returns true (successfully). So the Task returned by StartNew will always complete successfully. This is why the ContinueWith always executes for a successfully-completed task. What you really want is to await the Task returned by RunAsync. So, something like this:

var task = Task.Run(async () =>
    {
        await subscriber.RunAsync((T)messagePacket.Body, cancellationToken);
        return true;
    })
    .ContinueWith(

You're still ignoring the return value of RunAsync (the bool it returns is ignored), but you're not ignoring the Task itself (including cancellation/exception information).

P.S. I'm assuming there's a lot of code you're not showing us; using StartNew/Run just to kick off some async work and return a value is very expensive.

P.P.S. Use await Task.Delay instead of Thread.Sleep.

Thanks, that's exactly what I was missing.
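The key point above ("any async method will catch its own exceptions and place them on its returned Task") has a direct analogue in Python's asyncio, which makes for a compact demonstration. This is an illustrative sketch of the same fire-and-forget pitfall, not the .NET API:

```python
import asyncio

async def work():
    # stands in for ProcessAsync: raises instead of returning normally
    raise RuntimeError("canceled")

async def fire_and_forget():
    # like the StartNew lambda that calls RunAsync and ignores the Task:
    t = asyncio.ensure_future(work())
    await asyncio.sleep(0)   # give the task a chance to run (and raise)
    return t                 # we still complete "successfully"

t = asyncio.run(fire_and_forget())
# the exception never propagated to the caller;
# it is stored on the task object instead
print(type(t.exception()).__name__)
```

Writing `await work()` instead would re-raise the exception at the call site, which is exactly what the `Task.Run(async () => await ...)` fix achieves in the C# answer.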
Rails: Finding matching attribute in a has-many / belongs-to association

Say I have a Restaurant and a Reservation model. I want to find the reservations for each restaurant according to restaurant id, something like this:

@reservations = Reservation.find( # current user )
@restaurants = []
@reservations.each do |res|
  @restaurants += Restaurant.where('id like ?', res.rest_id)
end

When trying this, this constructs an array, which I've tried to convert to an ActiveRecord object unsuccessfully. Am I missing something, or is there a more obvious way to do this?

Try Reservation.includes(:restaurant)
find fetches a single entry. You meant where, right? Or something as neat as current_user.reservations. Well, I'm stuck, your code and question don't quite match. Your code fetches all restaurants that have a reservation made by current user. Is that it?

# load all reservations matching your condition, and eager-load all associated restaurants
@reservations = Reservation.where(<some condition>).includes(:restaurants)
# get all associated restaurants for all objects in the @reservations collection
# (these have been eager loaded, so this won't hit the db again),
# flatten them to a single array,
# then call `uniq` on this array to get rid of duplicates
@restaurants = @reservations.map(&:restaurants).flatten.uniq

EDIT - added flatten, added explanation in comments
This works, but do you also mind explaining how it works? Thanks a lot.
Added some explanation. I'm actually surprised it worked, because I forgot to flatten the results of @reservations.map(&:restaurants) in my original answer: you would have had an array of arrays rather than an array of Restaurant objects.

restaurant.rb:
has_many :reservations

reservation.rb:
belongs_to :restaurant, class_name: 'Restaurant', foreign_key: 'rest_id'

You can find the restaurant record for this reservation as

@reservation = Reservation.joins(:restaurant).where(id: reservation_id)
How to generate a hash for a Firefox xpi file? I want to inline-install a Firefox extension. In the example here it needs the hash of the extension's .xpi file. They recommend using nsICryptoHash. The first problem is that the code from the CryptoHash page is not working: Firefox throws undefined on Components.classes. The second problem is: how do I hash a file which I don't have access to in the browser? I highly recommend you look at the WebExtension documentation, as it is now the way to implement Firefox addons/extensions: https://developer.mozilla.org/en-US/Add-ons/WebExtensions There are a bunch of misleading docs on the site; not all of them are reviewed and/or complete. Through the new documentation you may see references to the web-ext tool (https://www.npmjs.com/package/web-ext), which helps you build the .xpi file, both for development and production - the latter including a way to sign the file with a valid Mozilla certificate so you can distribute the extension.
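If all you need is the digest of an .xpi file you have on disk, you don't need browser-side code at all; any SHA-2 tool works. Here is a minimal Python sketch; note that the `algo:hexdigest` output format is an assumption about what an install page expects, so check the relevant docs for the exact format:

```python
import hashlib

def file_hash(path, algo="sha256"):
    # stream the file in chunks so large .xpi archives
    # are not read into memory all at once
    h = hashlib.new(algo)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    # "sha256:<hex>" is a guess at the expected presentation
    return "%s:%s" % (algo, h.hexdigest())

# usage (hypothetical path):
# print(file_hash("my_extension.xpi"))
```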
Calculating Standard Deviation of Angles?

So I'm working on an application using compass angles (in degrees). I've managed to determine the calculation of the mean of angles, by using the following (found at http://en.wikipedia.org/wiki/Directional_statistics#The_fundamental_difference_between_linear_and_circular_statistics):

double calcMean(ArrayList<Double> angles){
    double sin = 0;
    double cos = 0;
    for(int i = 0; i < angles.size(); i++){
        sin += Math.sin(angles.get(i) * (Math.PI/180.0));
        cos += Math.cos(angles.get(i) * (Math.PI/180.0));
    }
    sin /= angles.size();
    cos /= angles.size();
    double result = Math.atan2(sin, cos) * (180/Math.PI);
    if(cos > 0 && sin < 0)
        result += 360;
    else if(cos < 0)
        result += 180;
    return result;
}

So I get my mean/average values correctly, but I can't get proper variance/stddev values. I'm fairly certain I'm calculating my variance incorrectly, but can't think of a correct way to do it. Here's how I'm calculating variance:

double calcVariance(ArrayList<Double> angles){
    //THIS IS WHERE I DON'T KNOW WHAT TO PUT
    ArrayList<Double> normalizedList = new ArrayList<Double>();
    for(int i = 0; i < angles.size(); i++){
        double sin = Math.sin(angles.get(i) * (Math.PI/180));
        double cos = Math.cos(angles.get(i) * (Math.PI/180));
        normalizedList.add(Math.atan2(sin, cos) * (180/Math.PI));
    }
    double mean = calcMean(angles);
    ArrayList<Double> squaredDifference = new ArrayList<Double>();
    for(int i = 0; i < normalizedList.size(); i++){
        squaredDifference.add(Math.pow(normalizedList.get(i) - mean, 2));
    }
    double result = 0;
    for(int i = 0; i < squaredDifference.size(); i++){
        result += squaredDifference.get(i);
    }
    return result / squaredDifference.size();
}

While that's the proper way to calculate variance for linear data, I'm not sure what I'm supposed to use here. I presume that I'm supposed to use arctangent, but the standard deviation/variance values seem off. Help?
EDIT: Example: Inputting the values 0, 350, 1, 0, 0, 0, 1, 358, 9, 1 results in an average angle of 0.0014 (since the angles are so close to zero), but if you just take a non-angular average, you'll get 72... which is way off. Since I don't know how to manipulate individual values to be what they should be, the variance calculated is 25074, resulting in a standard deviation of 158 degrees, which is insane!! (It should only be a few degrees.) What I think I need to do is properly normalize individual values so I can get correct variance/stddev values.

I did not analyze fully, but this code seems to need Math.atan2(y, x). @maniek - I had done this originally (and have put it back in recently) and the results are the same. I tried the method above along with atan2 and the results I get are the same within 12 or 13 orders of magnitude. EDIT: Looks like using atan2 addresses Chechulin's post. I'll edit my question.

By the Wikipedia page you link to, the circular standard deviation is sqrt(-log R²), where R = |mean of samples|, if you consider the samples as complex numbers on the unit circle. So the calculation of the standard deviation is very similar to the calculation of the mean angle:

double calcStddev(ArrayList<Double> angles){
    double sin = 0;
    double cos = 0;
    for(int i = 0; i < angles.size(); i++){
        sin += Math.sin(angles.get(i) * (Math.PI/180.0));
        cos += Math.cos(angles.get(i) * (Math.PI/180.0));
    }
    sin /= angles.size();
    cos /= angles.size();
    double stddev = Math.sqrt(-Math.log(sin*sin + cos*cos));
    return stddev;
}

And if you think about it for a minute it makes sense: when you average a bunch of points close to each other on the unit circle, the result is not too far off from the circle, so R will be close to 1 and the stddev near 0. If the points are distributed evenly along the circle, their average will be close to 0, so R will be close to 0 and the stddev very large.

@Joni - I'm not so mathematically inclined.
To calculate the standard deviation using the variance here, I'd use "double stddev = Math.sqrt(Math.log(1/Math.pow(Math.sqrt(sin*sin + cos*cos), 2)))*180/Math.PI;"? From the wiki page, it seems like you get R the same way, but you won't be subtracting from 1. You can simplify by removing the square and square root and you get stddev = sqrt(-log(sin*sin + cos*cos))*180/pi. Mind the units: the function as written takes angles in degrees as input and returns the standard deviation in radians. Caution! Circular standard deviation is not an angular quantity! Its values range from 0 to infinity, so "converting to/from radians" doesn't make any sense, and will incorrectly scale results. The current good way to deal with this is through scipy's already-implemented functions, cf. my answer below: https://stackoverflow.com/a/67622278/6401987

When you use Math.atan(sin/cosine) you get an angle between -90 and 90 degrees. If you have a 120 degree angle, you get cos=-0.5 and sin=0.866, then you get atan(-1.7) = -60 degrees. Thus you put wrong angles in your normalized list. Assuming that variance is a linear deviation, I'd recommend you to rotate your angles array by -calcMean(angles) and add/subtract 360 to/from angles above/below 180/-180 (damn my writing!) while finding the maximum and minimum angle. It will give you the desired deviations. Like this:

Double meanAngle = calcMean(angles);
Double positiveDeviation = new Double(0);
Double negativeDeviation = new Double(0);
Iterator<Double> it = angles.iterator();
while (it.hasNext()) {
    Double deviation = it.next() - meanAngle;
    if (deviation > 180) deviation -= 360;
    if (deviation <= -180) deviation += 360;
    if (deviation > positiveDeviation) positiveDeviation = deviation;
    if (deviation < negativeDeviation) negativeDeviation = deviation;
}
return positiveDeviation - negativeDeviation;

For average squared deviations you should use your method (with angles, not "normalized" ones), and keep the differences in the (-180, 180) range!
Using atan2 addresses the issues brought up in this answer. It doesn't change the variance results I get, though. Thanks.
Do you check that normalizedList.get(i) - mean is in the -180:180 range? Because if you have it be 300, it means that you should treat it as a -60.
It looks like that may have done the trick. I'm not positive, but I'll check more closely when I get the chance... or I'll check future replies. Thanks!
Your code seems to calculate the difference between the smallest and largest deviation in angles. This is not equivalent to the mathematical definition of the standard deviation.

The math library function remainder is handy for dealing with angles (in Java it is Math.IEEEremainder). A simple change would be to replace normalizedList.get(i) - mean with remainder(normalizedList.get(i) - mean, 360.0). However, your first loop is then redundant, as the call to remainder will take care of all the normalisation. Moreover, it's simpler just to sum up the squared differences, rather than store them. Personally I like to avoid pow() when arithmetic will do. So your function could be:

double calcVariance(ArrayList<Double> angles){
    double mean = calcMean(angles);
    double result = 0;
    for(int i = 0; i < angles.size(); i++){
        double diff = remainder(angles.get(i) - mean, 360.0);
        result += diff*diff;
    }
    return result / angles.size();
}

The current good way to deal with this is the two functions already implemented in scipy:
circmean: https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.circmean.html
circstd: https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.circstd.html
A couple of great things included: vectorization for fast computing, NaN handling, and high/low thresholds, typically for angles between 0 and 360 degrees vs. between 0 and 2π.

The accepted answer by Joni does an excellent job of answering this question, but as Brian Hawkins noted: "Mind the units. The function as written takes angles in degrees as input and returns the standard deviation in radians."
Here's a version that fixes that issue by using degrees for both its arguments and its return value. It also has more flexibility, as it allows for a variable number of arguments.

public static double calcStdDevDegrees(double... angles) {
    double sin = 0;
    double cos = 0;
    for (int i = 0; i < angles.length; i++) {
        sin += Math.sin(angles[i] * (Math.PI/180.0));
        cos += Math.cos(angles[i] * (Math.PI/180.0));
    }
    sin /= angles.length;
    cos /= angles.length;
    double stddev = Math.sqrt(-Math.log(sin*sin + cos*cos));
    return Math.toDegrees(stddev);
}

Circular standard deviation is not an angular quantity! Its values range from 0 to infinity, so "converting to/from radians" doesn't make any sense, and will incorrectly scale results.
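To see how the accepted answer's formula behaves on the example data from the question, here is a small pure-Python check. The degree scaling is shown only in a comment, since the comments above dispute whether scaling the circular standard deviation to degrees is meaningful:

```python
import math

def circ_mean_deg(angles):
    # mean direction: angle of the average unit vector
    s = sum(math.sin(math.radians(a)) for a in angles) / len(angles)
    c = sum(math.cos(math.radians(a)) for a in angles) / len(angles)
    return math.degrees(math.atan2(s, c))

def circ_std(angles):
    # circular standard deviation sqrt(-ln R^2), a unitless spread measure
    s = sum(math.sin(math.radians(a)) for a in angles) / len(angles)
    c = sum(math.cos(math.radians(a)) for a in angles) / len(angles)
    r = math.hypot(c, s)  # mean resultant length, 0 <= r <= 1
    return math.sqrt(-math.log(r * r))

data = [0, 350, 1, 0, 0, 0, 1, 358, 9, 1]
print(round(circ_mean_deg(data), 4))  # ≈ 0.0014, as stated in the question
print(round(circ_std(data), 4))       # ≈ 0.0756 (≈ 4.3 if scaled by 180/pi)
```

This matches the question's expectation: the mean is essentially zero, and the spread corresponds to "a few degrees" rather than 158.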
AD620 in Proteus: timestep too small I've made this circuit in Proteus, as a part of a practice report: In real life it works, as I built and tested it. But in Proteus, when I want to run the simulation, I get these errors: [SPICE] transient GMIN stepping at time=3.41026e-006 [SPICE] TRAN: Timestep too small; timestep = 1.25e-019: trouble with instance d:u1:3. I realized the error disappears when the gain of the amplifier is 1: that is, when I remove the resistor R1 (letting it be "infinite"). Otherwise, it won't work. Why do I have this error, and how can I solve it? UPDATE # 1 As @Andyaka pointed out, I tried with very small (as Proteus complained about 0 Ohm) resistance values. The circuit I got is this one: But when I run the simulation, I get other errors :( [SPICE] Gmin step [45 of 120] failed: GMIN=4.21697e-007 [SPICE] Gmin stepping failed [SPICE] Source step [12 of 120] failed: source factor = 10.0000 [SPICE] Too many iterations without convergence. UPDATE # 2 I tried the procedure @berto said in the answers section. I connected a load resistor of \$10k\Omega\$ to \$V_{\text{out}}\$ and the ground, and then connected the voltmeter in parallel with it, but I got these errors: [SPICE] transient GMIN stepping at time=2.14996e-007 [SPICE] TRAN: Timestep too small; timestep = 1.25e-019: trouble with node #b:u1:e3#branch. Two interesting facts when I played with the circuit at this stage: When I disconnect the voltmeter and put a load resistor of \$0 \Omega\$ (thus, a connected wire from \$V_{\text{out}}\$ to the ground), the simulation starts and continues with no errors. When I disconnect the voltmeter but leave the proposed load resistor (\$10k\Omega\$) from \$V_{\text{out}}\$ to the ground, the simulation stops with the same timestep too small error. Try putting zero ohm resistors in series with each input and pin 5. Sometimes this works with obstinate models. Same problem in other sims too. @Andyaka I can't! 
The program complains that "Value must be positive!" :( I'll try with a very small resistance, though.
Try this: System --> Set Animation Options --> SPICE Options. Change "Default Settings" to "Settings for Better Convergence" and load it.
Had the same problem... this suggestion worked.
Wow... it worked... I had the same issue.
Try adding a load resistor. An open circuit (ideal voltmeter) at the output can generate simulation problems. Something bigger or equal to \$10 k\Omega\$ should do it. Also try adding \$100 k\Omega\$ resistors between the inputs and ground, so the currents will have a return path.
I did it but the errors appeared again. One interesting fact is when I disconnect the voltmeter and just put a cable between \$V_{\text{out}}\$ and the ground, I get no errors.
What signal are you using in Vout? I saw something in the datasheet; try adding \$100 k\Omega\$ resistors between the inputs and ground, so the currents will have a return path.
Sorry for the delay. Speaking about the first question, it is direct current. And for the second, let me try it! :) :( It hasn't worked yet. I see now why the \$100 k\Omega\$ resistors, but SPICE still complains!
...another thing you can try is adding small capacitors. Also increase a little the 1 p\$\Omega\$ resistors, because small time constants can create fast oscillations; 1 m\$\Omega\$ is a more realistic value.
Oculus Rift camera turn issue using Vizard

I'm trying to use Vizard 5 to make a simulation with the Oculus Rift. I can't get the mouse to be linked to the MainView while the Oculus is also connected to the MainView; meaning I want the mouse and the Oculus to move the camera at the same time. Any help would be appreciated. This is the code so far:

import viz
import oculus
import vizcam
import vizfx
import sys
import vizact
import vizinfo

viz.go(viz.FULLSCREEN)
view = viz.addView

# add Oculus as HMD
hmd = oculus.Rift()
hmd.getSensor

# links hmd to mainview, both mouse and hmd should be linked to mainview
viz.link(hmd.getSensor(), viz.MainView)
#vizcam.WalkNavigate()

# make mouse invisible and activate mainview's collision
#viz.mouse.setVisible(viz.OFF)

# collision with objects set to on
viz.MainView.collision(viz.ON)
viz.MainView.collisionBuffer(0.5)

# add model
vizfx.addChild('AutoSave_house3D model.osgb')

# add environment
viz.addChild('sky_day.osgb')

# create a sunlight
sun = vizfx.addDirectionalLight()
sun.color(1.0, 1.0, 0.8275)
sun.setEuler(90, 90, 0)

viz.go()

Why do you want to move the camera with the mouse? This is highly discouraged in VR experiences, and it can easily make people sick or disoriented. Per the Oculus best practices guide (http://static.oculus.com/sdk-downloads/documents/Oculus_Best_Practices_Guide.pdf): "In general, avoid decoupling the user's and camera's movements for any reason."
Why split the tasks not running on the UI thread to make a WPF app more responsive?

In part 1, "Getting Started", of Alexandra Rusina's series Parallel Programming in .NET Framework 4, the WPF UI is made responsive by moving the intensive computations off the UI thread. Eventually, the code was changed to:

for (int i = 2; i < 20; i++)
{
    var t = Task.Factory.StartNew(() =>
    {
        var result = SumRootN(i);
        this.Dispatcher.BeginInvoke(new Action(() =>
            textBlock1.Text += "root " + i.ToString() + " " +
                result.ToString() + Environment.NewLine), null);
    });
}

Update: so the intensive computations are shifted off the UI thread. Here are the quotes from the part 1 article on this snippet: "To make the UI responsive, I am going to use tasks, which is a new concept introduced by the Task Parallel Library. A task represents an asynchronous operation that is often run on a separate thread" and "Compile, run… Well, UI is responsive".

And in order to output to the WPF UI from those separate task threads (or to avoid the InvalidOperationException that says "The calling thread cannot access this object because a different thread owns it"), Dispatcher.BeginInvoke() was used.

Part 2 of the same series, "Parallel Programming: Task Schedulers and Synchronization Context", says about the same snippet of code (after the small change of introducing a local iteration variable): "This one requires more thorough refactoring. I can't run all the tasks on the UI thread, because they perform long-running operations and this will make my UI freeze. Furthermore, it will cancel all parallelization benefits, because there is only one UI thread. What I can do is to split each task into..."

Doesn't the part 1 article contradict the part 2 article? What is the need for splitting tasks that are not running on the UI thread into parts? What do I misunderstand here?

I think there are two different concepts here. Part 1 talks about running code in tasks so as to not block the UI thread.
Part 2 talks about running several tasks in parallel. One technique is used to not block the UI thread, and the other is used to get more things done in parallel. If you have only two threads, the UI thread and the task thread, things are not happening in parallel and you are not leveraging the true power of parallel processing.

The statements don't contradict; they add up to a whole. The UI thread is responsible for refreshing the window and the controls in it, so they react when you activate them, go over them with the mouse cursor, etc. If you were to perform a long-running operation on it, for example a for-loop with 10000 iterations, the UI would freeze and become unresponsive (the window will blur and the little blue donut of death will come up). Once the for-loop has completed, your UI will be there again. In order to relieve the UI thread from additional burden, you put long-running tasks in separate threads/tasks, so they execute concurrently. Your UI will stay responsive and accept commands (clicks, keystrokes, ...).

Maybe unrelated, but this bears on the basics behind it: users expect an app to remain responsive while it does computation, regardless of the type of machine. This means different things for different apps. For some this translates to providing more realistic physics, loading data from disk or the web faster, quickly presenting complex scenes and navigating between pages, finding directions in a snap, or rapidly processing data. Regardless of the type of computation, users want their app to act on their input and eliminate instances where it appears suspended while it "thinks." Read this article here.
How to check input in PreferenceActivity and prevent storing of invalid values?

I have prepared a complete and simple test case for my question: the ValidatePrefs app at GitHub. When you click the "Settings" icon in the top right corner, SettingsActivity is displayed. I would like to validate the 3 EditTextPreference fields by returning true or false from the onPreferenceChange method, but it is never called:

public class SettingsActivity extends PreferenceActivity
        implements Preference.OnPreferenceChangeListener {

    private static final String TAG = SettingsActivity.class.getSimpleName();

    public static final String ADDRESS = "address";
    public static final String USERNAME = "username";
    public static final String PASSWORD = "password";

    @Override
    public void onCreate(Bundle savedInstanceBundle) {
        super.onCreate(savedInstanceBundle);
        addPreferencesFromResource(R.xml.settings);
    }

    @Override
    public boolean onPreferenceChange(Preference preference, Object value) {
        if (! (preference instanceof EditTextPreference)) {
            return false;
        }
        EditTextPreference editTextPreference = (EditTextPreference) preference;
        String key = editTextPreference.getKey();
        String text = editTextPreference.getText();
        if (key == null || key.isEmpty() || text == null || text.isEmpty()) {
            return false;
        }
        switch (key) {
            case ADDRESS: {
                try {
                    new URI(text);
                } catch (URISyntaxException ex) {
                    ex.printStackTrace();
                    return false;
                }
                return true;
            }
            case USERNAME: {
                return text.length() > 0;
            }
            case PASSWORD: {
                return text.length() > 0;
            }
        }
        return false;
    }
}

UPDATE: After adding findPreference and setOnPreferenceChangeListener calls in the SettingsActivity, the method onPreferenceChange is finally called:

@Override
public void onCreate(Bundle savedInstanceBundle) {
    super.onCreate(savedInstanceBundle);
    addPreferencesFromResource(R.xml.settings);

    EditTextPreference address = (EditTextPreference) findPreference(ADDRESS);
    EditTextPreference username = (EditTextPreference) findPreference(USERNAME);
    EditTextPreference password = (EditTextPreference) findPreference(PASSWORD);

    address.setOnPreferenceChangeListener(this);
    username.setOnPreferenceChangeListener(this);
    password.setOnPreferenceChangeListener(this);
}

However, I still cannot validate and prevent saving wrong data, because the text variable contains the old value. You can see it in the debugger screenshot below - I enter the new (and invalid) value of an empty string, but the variable text still contains the old value of user1.

UPDATE 2: Okay, I was wrongly inspecting the old value stored in the Preference; I should have looked at the new value, passed as the 2nd argument to the onPreferenceChange method. Here is the fixed version of the method in my SettingsActivity:

@Override
public boolean onPreferenceChange(Preference preference, Object obj) {
    if (!(preference instanceof EditTextPreference && obj instanceof String)) {
        return false;
    }
    EditTextPreference editTextPreference = (EditTextPreference) preference;
    String key = editTextPreference.getKey();
    String oldValue = editTextPreference.getText();
    String newValue = String.valueOf(obj);
    Log.d(TAG, String.format("key %s, value %s -> %s", key, oldValue, newValue));
    // TODO validate the newValue and return true or false, maybe use TextWatcher
}

However, that method is never called. It seems to me you have forgotten to register setOnPreferenceChangeListener(); that's why the callback is not called. Yes, but where do I call that method in my app? See usage in this question.
Logging queries and hits in Elasticsearch

We want to use Elasticsearch, Logstash and Kibana to display queries and hits from another Elasticsearch. Setting every slowlog threshold to 0ms gives us the query but not the number of hits:

index.search.slowlog.threshold.query.warn: 0ms
....

Is there any way of getting the query and the number of hits for each query straight from Elasticsearch? Here's an article explaining how to monitor search queries using Packetbeat: https://www.elastic.co/blog/monitoring-the-search-queries
Not able to read xml file using c# I am trying to read a xml file using c# XmlDocument doc = new XmlDocument(); doc.Load(@"/Rules/AssessmentRule.xml"); XmlNode node = doc.SelectSingleNode("/RuleName"); string URI = node.InnerText; return URI; I kept breakpoints in 2nd and 3rd line. I get error in the line below doc.Load(@"/Rules/AssessmentRule.xml"); It says Could not find a part of the path 'C:\Program Files\Rules\AssessmentRule.xml'. The folder structure of my project is, it is having the Rules folder in the same place as my class file C:\Program Files\Rules\AssessmentRule.xml so there is no file at this path ? did you check in your explorer ? no the xml file is in my project folder, i dono how to make it access this project folder path Given its C:\Program Files\ its likely you need elevated permissions. what is the path to your class file? Most likely dup of some "read file next to my executable"... Start with http://stackoverflow.com/questions/6041332/best-way-to-get-application-folder-path/6041505#6041505 If the file in your project folder Try this code for path string wanted_path = Path.GetDirectoryName(Path.GetDirectoryName(System.IO.Directory.GetCurrentDirect‌​ory())); then find the file on that path. When running in debug the path is based from Debug settings, which defaults to bin\debug unless you access the file with full path it will be relative to that folder(bin\debug). [courtesy @miltonb] so below are two solutions. you can add that file into your VS project. then click on that file in VS go to properties set 'Copy to output directory' -> copy always. 
then just need to give the file name or you get your project directory like this string projectPath = Directory.GetParent(Directory.GetCurrentDirectory()).Parent.FullName; string xmlLocation = @"Rules/AssessmentRule.xml"; String fullPath = Path.Combine(projectPath,xmlLocation); Path will not be combined until you remove leading forward slash @"Rules/AssessmentRule.xml"; It seems odd that you have the same text at the top of your answer as my answer but your timestamp is later. Suspicious. its seems very odd when you vote down a very detailed answer :) we are here to help others. I will not vote down you because your answer is not wrong I like your answer and would have upvoted had you had courtesy to refer to my answer. It would have been nice. You do have the best answer so far. My mistake.. does that make sense now ? the path is not based on "bin\debug" per se, it's based on the Working directory in the Debug settings, which defaults to bin\debug (if running in debug mode). You should probably use an OpenFileDialog instead. It'll make your life a lot easier: var openFile = new Microsoft.Win32.OpenFileDialog() { CheckFileExists = true, CheckPathExists = true, Filter = "XML File|*.xml" }; if (openFile.ShowDialog() ?? false) { XmlDocument doc = new XmlDocument(); doc.Load(openFile.FileName); XmlNode node = doc.SelectSingleNode("/RuleName"); string URI = node.InnerText; return URI; } else { // User clicked cancel return String.Empty; } That's not correct - the program should be able to open files without user interaction, which I believe that the OP is trying to do. When running in debug the path is based from 'bin\debug' unless you access the file with full path it will be relative to that folder. the path is not based on "bin\debug" per se, it's based on the Working directory in the Debug settings, which defaults to bin\debug. Also, I'm not sure how this answers the OPs question. You need to get the project directory, then append the xml path. 
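For completeness, the usual fix on desktop .NET is to resolve the file relative to the executable's folder rather than the current working directory, combined with 'Copy to output directory: Copy always' as suggested above. A hedged sketch (the path segments are taken from the question; this is illustrative, not a drop-in method):

```csharp
using System.IO;

// Base the path on the executable's folder, not the working directory
// (which in a debug run defaults to bin\debug).
string baseDir = AppDomain.CurrentDomain.BaseDirectory;
string fullPath = Path.Combine(baseDir, "Rules", "AssessmentRule.xml");

var doc = new System.Xml.XmlDocument();
doc.Load(fullPath);
```

With 'Copy always' set on the XML file, the Rules folder is replicated under the output directory, so this path resolves both in debug and in a deployed build.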
$\lim_{ (x,y)\to(0,0)} f(x, y)$ exists along all parabolas that contain the origin. Give a proof or counterexample of the following statement: Let $f$ be a real-valued function, that is defined and continuous on all of $\mathbb{R}^2$ except at the origin. It has a removable discontinuity at the origin provided that the limit $\lim_{ (x,y)\to(0,0)} f(x, y)$ exists along all parabolas that contain the origin. Do you have any thoughts on the problem? I would also try to phrase this more as a question than a command. That ruffles some users. It looks like you've transcribed a math problem, but without putting any of your own thoughts or words around it: this is rather off-putting, more in questions written like this than your earlier ones, because people like to know an OP is open for engagement with the community but this sends all the wrong signals. What about $f(x,y)=1$ if $x>0$ and $0<y<x^3$, $f(x,y)=0$ otherwise? @JonasMeyer: Your function is not continuous outside the origin. @mrf: Thank you, I completely missed that requirement. I could multiply $f$ by something continuous on $x>0$ that goes from $0$ to $1$ to $0$ on each vertical line segment from $(x,0)$ to $(x,x^3)$, but at that point it is probably getting more complicated than David Mitra's examples. Here's an example that satisfies the criteria for parabolas of the form $y=ax^2+bx$ or $x=ay^2+by$: Let $$f(x,y)={\cases{xy^3\over x^2+y^6,&$(x,y)\ne(0,0)$ \cr 0,\phantom{\biggl|}& otherwise}}.$$ First we show that the limit as $(x,y)$ approaches the origin along one of the parabolic paths given above is 0: Along the parabola $y=ax^2+bx$, $a\ne 0$: $$f(x,y)={x(ax^2+bx)^3\over x^2+(ax^2+bx)^6} \quad\buildrel{x\rightarrow0}\over\longrightarrow\quad 0, $$ as two applications of L'Hopital's rule will verify (or observe that the dominant term upstairs is $ax^7$ and the dominant term downstairs is $x^2$). 
Along the parabola $x=ay^2+by$, $a\ne 0$: $$\eqalign{f(x,y)={(ay^2+by)y^3\over (ay^2+by)^2 +y^6} &={ay^5+by^4\over a^2y^4+2aby^3+b^2y^2+y^6}\cr &={ay^3+by^2\over a^2y^2+2aby +b^2 +y^4}\cr & \buildrel{y\rightarrow0}\over\longrightarrow\quad 0,}$$ as easily seen when $b\ne 0$. For $b=0$, we have $$ {ay^3+by^2\over a^2y^2+2aby +b^2 +y^4} ={ay^3 \over a^2y^2 +y^4}={ay \over a^2 +y^2} \quad \buildrel{y\rightarrow0}\over\longrightarrow\quad 0, $$ as well. Now we show that $\lim\limits_{(x,y)\rightarrow(0,0)} f(x,y)$ does not exist (and thus, $f$ is discontinuous at the origin, but the discontinuity is not removable): Just observe that along the path $x=y^3$: $$ f(x,y)={y^6\over 2y^6}\quad\buildrel{y\rightarrow0}\over\longrightarrow\quad {1\over2}. $$ I'm not sure what happens for a general parabola that passes through the origin... Incidentally, in, Counterexamples in Analysis, by Bernard R. Gelbaum and John M. H. Olmsted, page 116, an example is given of a function which has no limit at $(0,0)$, but such that for any path of the form $x^m=(y/c)^n$, where $c\ne 0$ and $m,n$ are relatively prime positive integers, the limit as $(x,y)$ approaches the origin along the path is zero. The function with the stated properties is: $$ f(x,y)=\cases{ {e^{-1/x^2}y\over e^{-2/x^2}+y^2 },& $x\ne0$\cr 0\phantom{\biggl|} ,&$x=0$}. $$ You seem to have only considered parabolas with axes parallel to one of the coordinate axes and with vertex at the origin. @Jonas Yes, thanks for pointing this out. I will think about what happens for a general parabola with vertex at the origin... I don't see any assumption in the problem that the vertex is at the origin. @Jonas Thanks again.. "contains the origin".
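The behavior of the first counterexample above can be checked numerically. A small sketch (the function name f is ours): the values vanish along parabolic paths but sit at 1/2 along the cubic x = y^3.

```python
def f(x, y):
    """The candidate counterexample: x*y^3 / (x^2 + y^6), set to 0 at the origin."""
    if (x, y) == (0, 0):
        return 0.0
    return x * y**3 / (x**2 + y**6)

# Along any parabola y = a*x^2 + b*x the values shrink toward 0 ...
for a, b in [(1, 0), (2, 3), (-1, 5)]:
    x = 1e-4
    assert abs(f(x, a * x**2 + b * x)) < 1e-3

# ... but along the cubic path x = y^3 the value is identically 1/2,
# so the full two-variable limit at the origin does not exist.
y = 1e-4
assert abs(f(y**3, y) - 0.5) < 1e-12
```

This only probes finitely many paths, of course; the algebra in the answer is what actually proves the limits.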
Lie derivative: Leibniz rule proof

How can I prove $\mathcal{L}_v(\omega\wedge\alpha) = (\mathcal{L}_v\omega)\wedge\alpha + \omega\wedge(\mathcal{L}_v\alpha)$?

By writing down the definitions and cranking through the ordinary product rule. (Look up "Cartan's magic formula.")

Show $\mathcal{L}_v (A\otimes B)= \mathcal{L}_v A\otimes B+A\otimes \mathcal{L}_v B$ for any tensors $A, B$. Recall the definition of $\mathcal{L}_v$ via differentiation along the flow; you will find this is simply an advanced version of $\frac{d}{dt}(fg)=(\frac{d}{dt}f) g+f (\frac{d}{dt}g)$.

Sadly, I'm not familiar with the concept of tensors. What I have is some basic knowledge about differential forms plus the definition of the Lie derivative $\mathcal{L}_v\omega = \frac{d}{dt}(\exp tv)^*\omega\big|_{t=0}$.

@user1135859 This is enough for your proof. What you need is to differentiate $(\exp tv)^*(\omega\wedge \alpha)= (\exp tv)^*\omega\wedge (\exp tv)^*\alpha$.

Thanks for your answers! But I'm still struggling with this. Now I have $$\mathcal{L}_v(\omega\wedge\alpha)=\ldots=\frac{\rm d}{{\rm d}t}\left((\exp tv)^*\omega \;\wedge\;(\exp tv)^*\alpha \right)\bigg|_{t=0}$$ But I don't know how to carry out the differentiation. Do I need some kind of a chain rule? I know that $$ {\rm d}(\mu \wedge\nu) = {\rm d}\mu\wedge\nu \;\pm\;\mu\wedge {\rm d}\nu$$ But since in my case there is not simply "$\rm d$" but "$\frac{\rm d}{{\rm d}t}$", I do not know how to apply this.

@user1135859 Use the product rule for $\frac{d}{dt} f(t)g(t)$ for whatever product of whatever objects (only bilinearity matters).
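Written out, the computation the comments are pointing at uses only bilinearity of $\wedge$ and the product rule for curves through $t=0$:

$$
\begin{aligned}
\mathcal{L}_v(\omega\wedge\alpha)
&= \frac{d}{dt}\Big[(\exp tv)^*(\omega\wedge\alpha)\Big]\Big|_{t=0}
 = \frac{d}{dt}\Big[(\exp tv)^*\omega\wedge(\exp tv)^*\alpha\Big]\Big|_{t=0} \\
&= \left(\frac{d}{dt}(\exp tv)^*\omega\Big|_{t=0}\right)\wedge(\exp 0)^*\alpha
 \;+\; (\exp 0)^*\omega\wedge\left(\frac{d}{dt}(\exp tv)^*\alpha\Big|_{t=0}\right) \\
&= (\mathcal{L}_v\omega)\wedge\alpha + \omega\wedge(\mathcal{L}_v\alpha),
\end{aligned}
$$

since $(\exp 0)^* = \mathrm{id}$. The middle step is the ordinary product rule applied pointwise to the bilinear pairing $(\mu,\nu)\mapsto\mu\wedge\nu$.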
Ruby on Rails: How to chain strong parameters if some of the parameters are nested attributes?

Suppose I have the following parameters:

"struct" => {"content" => nil}, "name" => "structA"

When I try to build a strong-parameters filter around it:

params = ActionController::Parameters.new("struct" => {"content" => nil}, "name" => "structA")
params.permit(:struct, :name)

it only accepts name:

=> {"name"=>"structA"}

I read in some posts that for nested attributes I need to use "require":

params.require("struct").permit!

But how can I chain the nested and non-nested attributes in one filter?

Try this: params.permit(:struct => [:content], :name)

Do you mean this: params.permit({:struct => [:content]}, :name)
comparison operator for user object in priority queue

I have the following problem description:

You have a program which is parallelized and uses "n" independent threads to process the given list of "m" jobs. Threads take jobs in the order they are given in the input. If there is a free thread, it immediately takes the next job from the list. If a thread has started processing a job, it doesn't interrupt or stop until it finishes processing the job. If several threads try to take jobs from the list simultaneously, the thread with the smaller index takes the job. For each job you know exactly how long it will take any thread to process this job, and this time is the same for all the threads. You need to determine for each job which thread will process it and when it will start processing.

Input format. The first line of the input contains the integers "n" and "m". The second line contains integers: the times in seconds it takes any thread to process each job. The times are given in the same order as they are in the list from which threads take jobs. Threads are indexed starting from 0.

I have the following input:

4 20
1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1

For the above problem I have the following code:

struct qThreadDetails
{
    std::uint64_t m_uiThreadId;
    std::uint64_t m_availableTime;
    std::uint64_t m_counter;
};

struct qThreadCompare
{
    bool operator()(const qThreadDetails& lhs, const qThreadDetails& rhs) const
    {
        if (lhs.m_availableTime < rhs.m_availableTime)
        {
            if (lhs.m_uiThreadId < rhs.m_uiThreadId)
            {
                return false;
            }
        }
        if (lhs.m_availableTime < rhs.m_availableTime)
        {
            return false;
        }
        return true;
    }
};

std::priority_queue<qThreadDetails, std::vector<qThreadDetails>, qThreadCompare> m_queThreads;

if (m_vecJobTimes.size() > 0)
{
    m_vecAssignedWorkers.push_back(0);
    m_vecStartTimes.push_back(0);
    qThreadDetails jobDetails; // push job in queue.
    jobDetails.m_uiThreadId = 0;
    jobDetails.m_availableTime = m_vecJobTimes[0];
    jobDetails.m_counter = 0;
    m_queThreads.push(jobDetails);
}

unsigned int uiVecStartTimesIdx = 1;
for (unsigned int i = 1; i < m_uiNoOfThreads && uiVecStartTimesIdx < m_vecJobTimes.size(); ++i, uiVecStartTimesIdx++)
{
    m_vecAssignedWorkers.push_back(i);
    m_vecStartTimes.push_back(0);
    // push jobs in queue.
    qThreadDetails jobDetails;
    jobDetails.m_uiThreadId = i;
    jobDetails.m_counter = 0;
    jobDetails.m_availableTime = m_vecJobTimes[i];
    m_queThreads.push(jobDetails); // ---------> problem here
}

It crashes with "invalid operator <" and I'm not sure why, for the input given above. Any help?

Something that's vaguely described as "invalid operator <" is not a crash, but a compilation error. Additionally, your comparison operator appears to be fundamentally broken. The first if statement can be completely removed because it is redundant. Finally, the shown code, overall, fails to meet the requirements of a [mcve], as explained in stackoverflow.com's help center. You need to edit your question and reorganize the shown code into a single chunk that is complete and that anyone can attempt to compile in order to reproduce your error. As is, this question is unclear.

qThreadCompare doesn't satisfy the requirements of a strict weak ordering. It's obviously not irreflexive: qThreadCompare(x, x) == true for any x. Attempting to use it as a comparator for std::priority_queue therefore exhibits undefined behavior.
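As the last comment says, the comparator must be a strict weak ordering. A sketch of a fixed version (names shortened, the m_counter field dropped): compare the (availableTime, threadId) pairs lexicographically, so the thread that frees up earliest, with ties going to the smaller index, ends up on top of the queue.

```cpp
#include <cassert>
#include <cstdint>
#include <queue>
#include <tuple>
#include <vector>

struct ThreadDetails {
    std::uint64_t threadId;
    std::uint64_t availableTime;
};

// std::priority_queue keeps the *maximum* element (per the comparator) on
// top, so to pop the smallest (availableTime, threadId) pair first, the
// comparator must say "lhs comes after rhs" when lhs's pair is larger.
// std::tie gives a lexicographic comparison that is a strict weak ordering.
struct ThreadCompare {
    bool operator()(const ThreadDetails& lhs, const ThreadDetails& rhs) const {
        return std::tie(lhs.availableTime, lhs.threadId) >
               std::tie(rhs.availableTime, rhs.threadId);
    }
};

// Helper (ours, for demonstration): push all items and record pop order.
std::vector<std::uint64_t> popOrder(std::vector<ThreadDetails> items) {
    std::priority_queue<ThreadDetails, std::vector<ThreadDetails>, ThreadCompare> q;
    for (const auto& it : items) q.push(it);
    std::vector<std::uint64_t> order;
    while (!q.empty()) {
        order.push_back(q.top().threadId);
        q.pop();
    }
    return order;
}
```

With this ordering, a thread with availableTime 1 pops before threads with availableTime 3, and among equal times the smaller thread id pops first, which matches the "smaller index takes the job" rule in the problem statement.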
Java: How can I get rid of a specific JPanel by clicking an arrow key?

This is my inner class that creates the graphic text. I want to be able to press an arrow key and have it disappear. I'm sure it involves the remove method somehow, but I'm in over my head. Very new at this.

// STARTUP TEXT
class TextPanel extends JPanel implements KeyListener {

    // CONSTRUCTOR
    public TextPanel() {
        addKeyListener(this);
        setFocusable(true);
        setFocusTraversalKeysEnabled(false);
    }

    // PAINT METHOD
    public void paintComponent(Graphics g2) {
        super.paintComponent(g2);
        g2.setColor(Color.WHITE);
        g2.fillRect(0, 0, this.getWidth(), this.getHeight());
        g2.setColor(Color.BLACK);
        g2.setFont(new Font("TimesRoman", Font.PLAIN, 14));
        g2.drawString("Press an arrow key to start", this.getWidth()/4, this.getHeight()/2);
    }
}

For better help sooner, post an SSCCE.

AFAIK you have to use Key Bindings to respond to the arrow key; then, to remove the panel (I think from the frame), use either the panel's setVisible(false) or the frame's remove(component) method.

I figured that, but how would I do that?

@Jazzertron read this: http://docs.oracle.com/javase/tutorial/uiswing/misc/keybinding.html

@HarryJoy: Wish I could start a new thread for this. But please tell me, what does the abbreviation AFAIK stand for? +1 for the KeyBinding stuff though. :-) Regards

@GagandeepBali: AFAIK = As Far As I Know. ;p

@HarryJoy: AFAIK, I never thought it's that easy :-) Thank you.

@GagandeepBali google is your friend as well: http://www.urbandictionary.com/define.php?term=afaik

@kleopatra: AFAIK, I completely forgot that thing also :-) LOL Regards

/** Handle the key-typed event */
public void keyTyped(KeyEvent e) { }

/** Handle the key-pressed event */
public void keyPressed(KeyEvent e) { }

/** Handle the key-released event */
public void keyReleased(KeyEvent e) {
    int key = e.getKeyCode();
    if (key == KeyEvent.VK_LEFT) {
        this.setVisible(false);
    }
    if (key == KeyEvent.VK_RIGHT) {
        this.setVisible(true);
    }
}

+1, though normally with Swing we don't use KeyEvent listeners; they are meant to be used with AWT. But since it's valuable information you had given, that's why :-) Regards
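The key-bindings route the comments recommend can be sketched like this (class and action names are ours, not from the thread): unlike a raw KeyListener, a binding registered under WHEN_IN_FOCUSED_WINDOW fires regardless of which child component has focus.

```java
import java.awt.event.ActionEvent;
import javax.swing.AbstractAction;
import javax.swing.JComponent;
import javax.swing.JPanel;
import javax.swing.KeyStroke;

class DismissablePanel {
    // Creates a child panel inside `parent` that removes itself from
    // `parent` when the left-arrow key binding fires.
    static JPanel create(JPanel parent) {
        JPanel text = new JPanel();
        parent.add(text);
        text.getInputMap(JComponent.WHEN_IN_FOCUSED_WINDOW)
            .put(KeyStroke.getKeyStroke("LEFT"), "dismiss");
        text.getActionMap().put("dismiss", new AbstractAction() {
            @Override public void actionPerformed(ActionEvent e) {
                parent.remove(text);      // detach the panel
                parent.revalidate();      // relayout without it
                parent.repaint();
            }
        });
        return text;
    }
}
```

In a real application `parent` would be the frame's content pane; bind the other arrow keys the same way, one KeyStroke and one named action each.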
Physics.OverlapSphere does not respond correctly

I can't quite understand how Physics.OverlapSphere works at this point. As shown below in the code, if the player enters the overlap sphere then it should return true, but I keep getting false when the player enters the sphere.

This is the script that calls the method:

void Update()
{
    float distance = Vector3.Distance(target.position, transform.position);
    if (distance <= lookRadius)
    {
        agent.SetDestination(target.position);
        if (distance <= agent.stoppingDistance)
        {
            CharacterStats targetStats = target.GetComponent<CharacterStats>();
            Debug.Log(enemy.onPlayerEnter());
            if (targetStats != null && enemy.onPlayerEnter())
            {   // if the player exists and has entered the attack area
                Combat.Attack(targetStats);
            }
            FaceTarget();
        }
    }
    animator.SetFloat("speed", agent.velocity.magnitude);
}

This is the script of the method:

public bool onPlayerEnter()
{
    Collider[] hitColliders = Physics.OverlapSphere(interactionTransform.transform.localPosition, radius);
    //Debug.Log(interactionTransform.transform.localPosition);
    for (int i = 0; i < hitColliders.Length; i++)
    {
        if (LayerMask.LayerToName(hitColliders[i].gameObject.layer) == "Player")
        {
            Debug.Log("Player enter");
            return true;
        }
    }
    return false;
}

// visualize the overlap sphere
private void OnDrawGizmosSelected()
{
    if (interactionTransform == null)
        interactionTransform = transform;
    Gizmos.color = Color.cyan;
    Gizmos.DrawWireSphere(interactionTransform.position, radius);
}

(Screenshots of the collider setup on the monster and the player: https://i.sstatic.net/8Fgh0.png and https://i.sstatic.net/jnubp.png)

For some unknown reason, I found that the overlap sphere works at certain positions in the map but not at the rest. I thought this was probably a bug in Unity.

Your sphere gizmo and your actual sphere are different: for drawing the gizmo you use interactionTransform.position, but for the overlap sphere you use interactionTransform.transform.localPosition (there is no need to double up like transform.transform, and position and localPosition can be very different). I think this is the answer.

It's easy to make a mess with LayerMask. It's a good approach, but for testing you'd be better off using a tag, or gameObject.GetInstanceID(), or even gameObject.name. But that's minor; in general it looks like you are handling layers correctly.

Be sure that your agent.stoppingDistance is not set too small.

It's bad practice to use GetComponent every frame, as you do to get targetStats.
WordPress: Too many taxonomy terms slow down the site

I have a WordPress site with 30,000+ terms for Australian cities/states. WordPress loads fine with fewer terms, but 30,000 terms make it load forever. I've disabled all the plugins and am using the WP 2015 theme. Is there a way to add such a huge number of custom terms and have the site work normally?

I have the exact same issue. It turns out the problem is that WordPress is very slow if you have many terms and the taxonomy is set to be hierarchical. Without this option the admin and front-end pages load fine. It's an old issue from way back which I don't think WordPress can fix with its current database structure. As a partial fix I improvise by making the taxonomy non-hierarchical and storing the parent/child relations as term meta. It only works on the front end, where you have full control over the queries.

Is the site slow in all places or just on a set of pages? If so, which pages? We have way more terms than you do and we have no problems. I assume that in certain places select queries take too long. You can install a plugin called Query Monitor which can show you what takes long, what triggers it, and lots of other useful information to help you locate and fix the problem.

The site doesn't load at all (none of the pages work): "The site.com page isn't working. site.com didn't send any data. ERR_EMPTY_RESPONSE"

It's an old question. The issue is (as @razvan said) with hierarchical taxonomies, because (mainly on administration pages) WordPress creates a <select> tag for taxonomy selection. Using non-hierarchical taxonomies (like tags) works perfectly.
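The workaround described in the first answer can be sketched as follows (the taxonomy name, meta key, and helper names are ours, not a drop-in fix): register the taxonomy as non-hierarchical and keep the parent relation yourself in term meta.

```php
<?php
// Assumption: this runs from a theme or plugin on the 'init' hook.
add_action('init', function () {
    register_taxonomy('city', 'post', [
        'label'        => 'Cities',
        'hierarchical' => false,  // avoids the slow hierarchical term UI
    ]);
});

// Store the parent/child relation as term meta instead of relying on
// WordPress's built-in hierarchy.
function set_city_parent($term_id, $parent_term_id) {
    update_term_meta($term_id, 'city_parent', $parent_term_id);
}

function get_city_parent($term_id) {
    return (int) get_term_meta($term_id, 'city_parent', true);
}
```

As the answer notes, this only helps on the front end, where your own queries can consult the meta; the admin term screens know nothing about the improvised hierarchy.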
Debug.Assert does not break into editor in MonoDevelop/MonoTouch Simply put, Debug.Assert calls that fail don't stop the execution flow of the program in MonoDevelop, they just print out a trace message that starts like this (followed by a stack trace): 2012-12-28 19:21:01.978 TestApp[81689:c07] ---- DEBUG ASSERTION FAILED ---- 2012-12-28 19:21:01.979 TestApp[81689:c07] ---- Assert Short Message ---- What can I do to force the failed Debug.Asserts to break execution at the Assert in MonoDevelop? possible duplicate of Causing VS2010 debugger to break when Debug.Assert fails This is not a duplicate of that post, this one is about MonoDevelop, not Visual Studio. After a bit more research, it seems like I would need to write a custom TraceListener that then calls Debugger.Break manually, but I would expect such default behavior to be available in Mono or MonoDevelop... I retagged it from monotouch to mono because this is a general mono problem and not specific to monotouch. The debugger integration parts of Debug.Assert are not implemented yet - see Xamarin bug #4650. I already had a quick look at this a couple of months ago, then realized that doing it right would probably require runtime support to auto-unwind the top frames (so MonoDevelop would stop on the Debug.Assert statement, not somewhere in the trace listener implementation). As a workaround, you can add a custom trace listener and call Debugger.Break(). Thanks for the info Martin. For what it's worth, even a break without an auto-unwind would be more useful than the current situation right now (especially if it was a simple flag to toggle this behavior). Will look into custom trace listeners as suggested. So, calling Debugger.Break() in MonoTouch (running iOS code in simulator) simply kills execution of the running app. Any thoughts?
Allocate a static IP to a Data Fusion or Cloud Composer instance

I am trying to use Google Data Fusion to connect to a Microsoft SQL Server database and need to have a static IP. I have tried to provision a static IP on a subnet and connect it to Data Fusion through a VPC, and I have also created a small VM with a static IP and put it on the same VPC as the Cloud Data Fusion instance, thinking I could connect the two through that, but that was not successful. I have limited experience with networking in Google Cloud Platform (GCP) and I'm seeking guidance on how to connect either Data Fusion or Cloud Composer to a static IP address. This is necessary for me to perform data ingestion from an external database. This is a common problem, I would imagine, and whilst I have found similar questions, none seem to have clear, easy-to-follow answers. If possible, one with screenshots of the UI would be great too, to help me understand further.

Relates to:
How to reserve a public IP (static IP) to execute a Google Dataflow job, so that I can whitelist the IP in the source application?
Getting Azure SQL Server data into BigQuery
Google documentation: How to create a private IP
Google documentation: Introduction to Cloud Data Fusion networking
Can a private Cloud Data Fusion connect to the internet?

Indeed, if you have limited experience with Google Cloud or networking, it's not obvious at first glance. Firstly, keep in mind that Data Fusion runs its jobs on a Dataproc cluster, and Composer runs on a GKE Autopilot cluster. The similarity: both are clusters! Because of that, there are several VMs involved, and they can't all have a public IP (public IPs would be exhausted). So all the VMs have a private IP and not a public one. To be able to reach the internet, you need a public IP. That's why you need to create a bridge that remembers the requester (the VM and its private IP) and maps the request to a public IP. This mechanism is named Network Address Translation, or NAT for short.

On Google Cloud, you can use Cloud NAT to perform this: create your Cloud NAT, select the subnet that you want to NAT, and that's all! More detail here. If you need to present a fixed public IP to your SQL server, you can reserve a public IP and put it in the Cloud NAT configuration.

Hi @guillaumeblaquiere, thank you for your help! Would I need to set up custom firewall rules, or are the default settings enough? The only change I have made is adding a manual private IP. I also have only one subnet in my VPC, so it did not ask me to select a subnet, just the VPC?

Yes, select your VPC.

Oh okay, thank you @guillaumeblaquiere. So no custom firewall rules or custom configurations are necessary?

It depends. By default, egress traffic is not blocked on Google Cloud. If you have no special rule that blocks outbound traffic, no.
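The Cloud NAT setup described above usually boils down to console commands along these lines (resource names and region are placeholders; verify the flags against the current gcloud reference before running):

```shell
# Reserve a static public IP to whitelist on the database side.
gcloud compute addresses create nat-ip --region=europe-west1

# Cloud NAT hangs off a Cloud Router in the same VPC and region.
gcloud compute routers create nat-router --network=default --region=europe-west1

# NAT all subnet ranges through the reserved address.
gcloud compute routers nats create nat-config \
    --router=nat-router --region=europe-west1 \
    --nat-external-ip-pool=nat-ip \
    --nat-all-subnet-ip-ranges
```

After this, outbound traffic from the private Dataproc or GKE VMs leaves through the reserved address, which is the static IP you can whitelist on the SQL Server firewall.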
In what time zone are R's file.info()'s mtime expressed? I have a file for which Windows (win7) explorer says that its mtime is 24-oct-2014 12:39. When I ask for this/these properties in R using file.info(), I am getting the following: file.info('r:/data/dm29/dm29 - Sample_138_20141023_0737.dti') size isdir mode mtime ctime atime exe r:/data/dm29/dm29 - Sample_138_20141023_0737.dti 35003850 FALSE 666 2014-10-24 11:39:48 2014-10-23 07:40:32 2014-11-06 11:14:49 no What's going on here? I suppose it has something to do with time zones but I cannot find a reference to something like that in the help. Or is it perhaps due to a recent DST switch that we had? Additionally, how can I fix it in an idiomatic way? The following feels a bit like a kludge to me. mt <- as.POSIXct(file.info('r:/data/dm29/dm29 - Sample_138_20141023_0737.dti')$mtime, tz='Europe/London') attributes(mt)$tzone <- 'Europe/Amsterdam' I'm unclear as to why you expect the Windows display to agree with the R display? Does the Windows display correspond to your local time or to a different one? What about file.info's? I am expecting that because given that R appears/is working in the same time zone, why would the two give different time stamps when the mtime is the same?
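The one-hour gap in the question is exactly the BST/CEST difference on that date. A quick illustration in Python (hypothetically assuming the file was written at 10:39:48 UTC, which reproduces the two displayed times):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # Python 3.9+

# Hypothetical UTC timestamp matching the mtimes in the question.
stamp = datetime(2014, 10, 24, 10, 39, 48, tzinfo=timezone.utc)

# On 2014-10-24, DST was still in effect (it ended Oct 26 that year):
# London was UTC+1 (BST) and Amsterdam UTC+2 (CEST).
london = stamp.astimezone(ZoneInfo("Europe/London"))
amsterdam = stamp.astimezone(ZoneInfo("Europe/Amsterdam"))

print(london.strftime("%H:%M"), amsterdam.strftime("%H:%M"))  # → 11:39 12:39
```

So the two displays are the same instant rendered in different zones. On the R side, rather than poking the "tzone" attribute by hand, the equivalent one-liner is lubridate::with_tz(mt, "Europe/Amsterdam"), which also just relabels the display zone without changing the underlying instant.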
Impact on DV01 of cbot bond futures by changing coupon from 6% to 4% CBOT has been asking customers lately what their thoughts would be on coupon change from 6% to 4% on all bond futures. I believe the last time this was done was in 2000 where the coupon was changed from 8% to 6%. My question is, how would this impact the DV01 of the contracts themselves? I believe it reduces the DV01 (requires more contracts to be traded vs same number of actual cash bonds) but not 100% sure. Thanks! Super interesting news... do you have a reference at all (a quick google search has failed me...)? There is an article about it on risk.net but unfortunately, I do not have a subscription. https://www.risk.net/derivatives/7695186/cme-asks-clients-about-changing-implied-ust-futures-coupon this one? It's complicated. Assuming there is no CTD switches, then yes, the theoretical modified duration should be unchanged and the DV01 will be lower. For simplicity, imagine that there is only one bond eligible for delivery into the contract. We'll also ignore all the other complications (e.g., variation margins), then the theoretical futures price is simply the converted forward price of the bond: $$ f = \frac{\text{Bond forward price}}{\text{Bond conversion factor}}. $$ Recall that the conversion factor is approximately the price of a bond assuming its yield to maturity as of the first delivery date is 6%. If we change this to 4%, then the conversion factor will increase, resulting in a decline in $f$, as well as its dollar sensitivity. For a numerical example, I took the current TY contract (TYZ2020 as of 10/21/2020) and ran some simulations. The left column below shows the current market pricing; the right column shows the model price and duration metrics if the notional coupon is changed to 4% today. The next table shows the individual deliverables for TYZ2020, including their current conversion factors as well as the theoretical conversion factors at a 4% notional coupon. 
Notice that the converted forward DV01s are lower, as expected. However, it's completely plausible that a lower notional coupon does result in a CTD switch. Right now, because yields are so low and curve is upward sloping, CTDs tend to be the higher coupon, lower duration issues. If yields were to rise (meaningfully from current levels) AND notional coupon is adjusted lower, then it's completely plausible for CTDs to move to longer maturity issues, actually increasing both the duration & DV01 of the contracts. To see this, I shocked the yield curve by 400 bps. The table below shows the delivery probabilities and converted forward DV01s. As you can see, at a much higher yield level, changing the notional coupon actually causes a significant CTD switch into longer maturity bonds, increasing duration. I believe this would also reduce the value of the wild card option, which has attained significant value on longer contracts such as WN in recent months. Do you agree? A higher conversion factor would reduce the amount of tails you’d have to sell out of if CF<1 and market rallies post 3pm so I would agree it would reduce the wildcard value. For example, you'd have to sell 1-CF worth of tails (either in the form of futures or equivalent bonds) out to hedge your DV01 post 3pm in the case where CF<1 and the market rallies(higher bond px) post 3pm. You want 1-CF to be as large as possible to lower your breakeven wildcard level, if that makes sense.
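The direction of the DV01 change (absent a CTD switch) can be sketched with a toy calculation: a single low-coupon deliverable, semiannual coupons, and the simplified conversion-factor definition from the answer. The numbers are illustrative, not the TYZ2020 figures, and the real CBOT CF formula additionally rounds maturities to quarters.

```python
def bond_price(coupon, ytm, years, freq=2, face=100.0):
    """Price of a level-coupon bond per `face` of notional."""
    n = int(years * freq)
    c = face * coupon / freq
    y = ytm / freq
    return sum(c / (1 + y) ** t for t in range(1, n + 1)) + face / (1 + y) ** n

def conversion_factor(coupon, years, notional_coupon):
    """Simplified CF: price (per 1) at a yield equal to the notional coupon."""
    return bond_price(coupon, notional_coupon, years) / 100.0

def dv01(coupon, ytm, years):
    """Dollar value of a basis point per 100 face (central difference)."""
    return (bond_price(coupon, ytm - 1e-4, years)
            - bond_price(coupon, ytm + 1e-4, years)) / 2

coupon, years, ytm = 0.02, 9.5, 0.01   # a low-coupon note, current low yields
cf6 = conversion_factor(coupon, years, 0.06)
cf4 = conversion_factor(coupon, years, 0.04)

# Lowering the notional coupon raises every CF (prices rise as the
# discount rate falls), so converted DV01 = bond DV01 / CF falls.
print(cf6 < cf4, dv01(coupon, ytm, years) / cf4 < dv01(coupon, ytm, years) / cf6)
```

This captures only the mechanical effect; as the answer shows, a CTD switch at higher yield levels can overwhelm it and push contract duration the other way.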
How can I find out which station a bookmarked song on Pandora came from? While going through my many bookmarked songs, I sometimes wonder from which of my 30 or so radio stations a song came from. Is there any way to know which station I was listening to when I bookmarked this song? No, sorry. It doesn't look like they store that information. If they do, it's not exposed anywhere. You can view a list of "Thumbed-up" tracks on each station's detail page, though. In the future, you could "thumb-up" songs in addition to bookmarking them; then you'd be able to find the source station later.
is there a note location chart available for 22 hole chromatic harmonica? I am looking for a chart that shows the location of notes, for both draw and blow, as well as the key pressed and unpressed, for a 22 hole chromatic harmonica Something like this but for a 22 hole chromatic I have been searching for hours and I can find ones for diatonic and 12 hole chromatic, but not for a 22 hole one Could anybody point me in the right direction? Thank you Like you, I've searched for a while for information about 22-hole chromatic harmonicas. The only 22-hole instruments I've found information about don't have different blow and draw notes for each hole. Instead, instruments such as the Unica 22-hole chromatic harmonica have two, rather than four notes, for each hole. (Usually, a chromatic harmonica has 4 notes for each hole: blow and draw notes without using the slide; and blow and draw notes using the slide.) The Unica instrument produces the same pitch when a particular hole is blown or drawn. This gives two notes per hole; one with and one without using the slide. This website (which I gather you found, too) has charts for a wide range of different types of chromatic harmonicas. One of these is a 22-hole, 44-reed, 44-tone slide harmonica, which is of the type I mention above. Below is a picture of this instrument (showing the 22 holes) and the fingering chart for this instrument (showing just two notes for each hole - i.e. slide out and slide in): As I say, this is the only fingering chart I've managed to find for any 22-hole chromatic harmonica. If your instrument produces the same notes for blow and draw on any particular hole, this might be the correct note location chart. If not, I'm as stumped as you are... I had found this page, but that figure doesn't show which ones are blow and which ones are draw Aha! This instrument doesn't have blow and draw notes, it only has "one tone per one hole". Does your instrument have blow and draw notes for each hole? 
This question perplexes me, as I have several harmonicas, mostly Hohner. However, they're about 1,000 miles away right now, so when I get home, I'll play them all and report back... But I feel that most will produce the standard scale, with the scale a semitone out with the button pressed. Watch this space. Cheers, Tim. As ever, you are the font of all knowledge! I noticed that the company calls this model Unica and has only one note per hole. so does this mean this chart is for this specif manufacturer and their model? If I buy any other cheaper brand harmonica with 22 holes, how likely is it that it's note layout would be different? I don't have a harmonica yet so I am doing this to find out whether it would be better to buy a 12 hole or 22 hole one. among the harmonicas I've found, the layout charts for 12 hole ones are more straightforward, but the harmonicas are more expensive. And I couldn't find the layout chart for 22 hole harmonica, but the price is same as the 12 hole one. So I was thinking if the 22 hole one gives me more notes, may be it would be better to but the 22 hole one. Ah, well if that is your main question, I would say go for the 12-hole, these are the conventional chromatic harmonicas, with conventional (blow and draw) note location. As far as I can tell, the 22 hole doesn't give you more notes; they are just cheaper because there are no air valves, as each hole only has the same pitch for blow and draw notes. Been a long time coming back to this question! Sorry! Come on, it's not 6 yrs yet! My Hohners are 16 hole chromatics, a couple are called '64s', as that's the number of notes available. So four octaves, and what I would call standard tuning (in C) B D B D B D D B, for C D E F G A B C in each octave, and diatonic C# with button depressed.
Heimdall cannot detect Samsung Galaxy Tab P1000

I am trying to flash a Samsung Galaxy Tab P1000 using Heimdall commands on Linux.

Issue: the sudo heimdall detect command does not detect the device, yet the device is reachable in some sense, because sudo heimdall print-pit works and reboots the device... but I cannot flash it. Please help; thanks in advance.

Hi! Welcome to StackOverflow! StackOverflow is for programming questions, and this is not a programming question. Please use other resources, such as http://android.stackexchange.com, or wherever Heimdall support can be found, for Heimdall questions.

What error message are you getting when you attempt to flash the device? Also, are you using heimdall --recovery ./location/file --no-reboot?

Device detection in Heimdall: first, go into Download mode. With the Samsung Galaxy Tab turned off, press and hold the VOLUME Down button, and then briefly press the POWER button. Plug the connection cable into your device and the USB port. Only after that will Heimdall detect your device, if it's detectable :) Hope that helps.

You can find some great information now on the Lineage OS wiki. They advise installing TWRP using Heimdall in order to flash your tablet with a new OS. Samsung devices come with a unique boot mode called "Download mode", which is very similar to "Fastboot mode" on some devices with unlocked bootloaders.

Download and install the Heimdall suite. Linux: pick the appropriate package to install for your distribution (the -frontend packages aren't needed for this guide). After installation, verify Heimdall is installed by running heimdall version in the terminal. Power off the device and connect the USB adapter to the computer (but not to the device, yet). Boot into Download mode: with the device powered off, hold Volume Down + Power. On the computer, open a terminal and type:

heimdall print-pit

If the device reboots, Heimdall is installed and working properly.
Pattern recognition from sample data

I have a running index $n = 1,\ldots$. For each $n$ there exists a vector $f_n = (z_l : \sum_{l}{z_l} = n)$ whose elements add up to $n$. The vector $f_n$ describes an integer partition of $n$. Consider the following table for $n \in \{1,\ldots, 80\}$. I want to recognize a general pattern such that I can predict $f_n$ for $n > 80$. Apparently there is some series $1,2,4,7,13,24,44,79,\ldots$ which generates the $f_n$, but I don't know how to proceed further from here.

$1,2,4,7,13,24,44$ is the Tribonacci sequence (https://oeis.org/A000073), but the next term there is $81$, not $79$. The table seems to be the representation of integers in the Tribonacci basis up to $78$, and then it breaks down.

$f_n$ is not a partition of the set $\{1,\ldots,n\}$; it's a partition of the integer $n$.

@Ihf yes, I figured the Tribonacci connection also. Actually I know the following series: $(n : f_n = (n), ~ n<10000) = (1,2,4,7,13,24,44,79,146,268,482,873,1580,2867,5191,9413)$

The sequence $1,2,4,7,13,24,44,79,146,268,482,873,1580,2867,5191,9413$ is the prefix of a complete sequence. The table seems to be the representations of natural numbers in terms of this sequence. Given a natural number $n$, its representation $f(n)$ is obtained by the greedy algorithm: let $a_m$ be the largest term of the sequence less than or equal to $n$. Then $f(n) = (f(n-a_m), a_m)$, the concatenation of the representation for $n-a_m$ with $a_m$. So the pattern is fully predictable recursively, once we know all values of $a_m$. With the prefix given, you can find $f(n)$ for all $n\le 20994=1+2+4+7+\cdots+9413$. The next term must be between $9414$ and $20995$.

That's correct. The problem is where to get the $a_m$'s from, then.

@clueless, there are lots of ways to continue your sequence into a complete sequence, but I can't see a natural way. How did you find your sequence?

It's a specific example of endogenous coalition formation (economics).
The $f_n$ describes a numeric coalition structure for $n$ agents. If $f_n = (n)$, then the grand coalition is stable. There seemed to be a non-random pattern, which is why I was asking the question in the first place.
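The greedy recursion described in the answer is easy to try out. A minimal Python sketch (my own illustration, not from the thread), hard-coding the known prefix of the sequence, so it only covers $n \le 20994$:

```python
# Known prefix of the basis sequence a_m (quoted in the comments above).
SEQ = [1, 2, 4, 7, 13, 24, 44, 79, 146, 268, 482, 873, 1580, 2867, 5191, 9413]

def f(n):
    """Greedy representation of n: repeatedly subtract the largest basis
    term <= n, i.e. the recursion f(n) = (f(n - a_m), a_m)."""
    if not 0 < n <= sum(SEQ):
        raise ValueError("n outside the range covered by the known prefix")
    parts = []
    while n > 0:
        a_m = max(t for t in SEQ if t <= n)  # largest term <= n
        parts.append(a_m)
        n -= a_m
    return tuple(reversed(parts))

print(f(80))  # (1, 79)
```

For example, f(10) = (1, 2, 7) and f(80) = (1, 79); predicting beyond 20994 still requires knowing the next terms of the sequence, exactly as the comments point out.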
How to generate GA for http (main domain) and https (sub domain)? I have a main domain (http://example.com) and a sub-domain (https://secure.example.com), and I have one GA profile created for http://example.com. Now I want to track the https sub-domain; if I use the same code generated for the http site, will that track both domains?

Yes. The same code will work for both protocols.
want to stop user input by messagebox but bypassed by an Enter keydown I'm coding a Windows Forms application running on a barcode scanner. The platform is .NET 2.0 CF / C#. What I want is: whenever the user inputs something wrong, the app will pop up a message box and block the next input (actually, a scan action) until the user clicks OK on the screen. But normally the user will continuously scan the next item, as they haven't noticed anything went wrong; this inserts an Enter keydown, so the message box is closed. In one word, the message box does not stop the user. How can I code this? Below is a very simple code snippet:

private void tb_KeyDown(object sender, KeyEventArgs e)
{
    if (e.KeyCode == Keys.Return)
    {
        if (!ValidateInput(tb.Text))
            MessageBox.Show("Error");
    }
}

You can create your own window (Form) that displays the error message, but does not react to the Enter key. It should contain a button which the user can click (as you wrote); however, you need to make sure the button does not have focus when the window is displayed. (Because if it had focus, pressing the Return key would "click" the button.) A simple way of doing this is adding another control which has TabStop set to true (e.g. a textbox, another button) and which has a lower TabIndex property than the button. Additionally, you might want to do a System.Media.SystemSounds.Beep.Play(); when showing the window, to draw the user's attention to it.

Thanks, I recalled just now, and really thanks for your reply.
asp.net membership provider Guid userID I need (I think) to get the current logged-in userID so that I can update one of my tables that uses this userID as a foreign key. The problem is that the userID in the database does not match with this:

Guid currentUser = (Guid)Membership.GetUser().ProviderUserKey;
currentUser.ToString();

results in dffaca0c-ae0b-8549-8073-1639985740be, whereas when I look in the database, it is 0CCAFADF0BAE498580731639985740BE. Why are they different values? (I only have one user.) I am using an Oracle database and provider for ASP.NET, but that shouldn't make any difference.

Try supplying Page.User.Identity.Name to the GetUser() method instead of using the default.

I believe these are the same values, but the display order is different. Looking at the 2 values:

dffaca0c-ae0b-8549-8073-1639985740be
0CCAFADF-0BAE-4985-8073-1639985740BE

The bytes of the first 3 segments are in a different order:

0CCA FADF => FADF 0CCA => DFFA CA0C == dffaca0c
0BAE => AE 0B == ae0b
4985 => 85 49 == 8549

As @x0n comments, this looks like a difference in endianness with Oracle. According to this description of the structure, the endianness of the first 8 bytes is system dependent, while the endianness of the last 8 bytes is specifically big endian.

endian differences on oracle? yup :) now I have to figure out why this is the case. hmm, might be how oracle stores the values in the database...

I had the same issue and came up with this, which resolved it:

public static string TranslateOraceEndianUserID()
{
    MembershipUser myObject = Membership.GetUser();
    Guid g = new Guid(myObject.ProviderUserKey.ToString());
    byte[] b = g.ToByteArray();
    string UserID = BitConverter.ToString(b, 0).Replace("-", string.Empty);
    return UserID;
}

You could always use the Lower Case User Name column to create the foreign key. It is always unique. Maybe not the best option, but it is the simplest and works well. I have used it in several projects.
Or maybe also try using HttpContext.Current.User to get the current user?
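The byte-order explanation in the answer can be verified without touching .NET or Oracle. A Python sketch (my illustration, not from the thread): uuid.UUID(bytes_le=...) uses the same mixed-endian layout as .NET's Guid (the first 4-2-2 byte fields little-endian, the remaining 8 bytes as written), so feeding it the raw bytes as stored in Oracle reproduces the .NET string:

```python
import uuid

# Raw bytes exactly as they appear in the Oracle column.
raw = bytes.fromhex("0CCAFADF0BAE498580731639985740BE")

# Interpret them the way .NET's Guid does: first three fields little-endian.
g = uuid.UUID(bytes_le=raw)

print(g)  # dffaca0c-ae0b-8549-8073-1639985740be
```

The round trip g.bytes_le gives back the Oracle ordering, which is the same mixed-endian byte sequence the ToByteArray() trick in the answer produces.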
Safari Warning On Certificate We have recently started receiving warnings from Safari: "Safari can't verify the identity of the website 's-static.ak.facebook'. The certificate for the website is invalid. You might be connecting to a website pretending to be 's-static.ak.facebook', which could put your confidential information at risk. Would you like to connect to this website anyway?" How do we address the issue with our certificate? We don't see what we are doing wrong. Is this an issue with Facebook or us? Any help would be appreciated. Thanks.

Try another browser.

Check that the date and time of your computer are correct; a lot of the time, messed-up dates cause browsers to think certificates can't be verified or are out of date.

Have a look at http://reviews.cnet.com/8301-13727_7-57538839-263/safari-users-hit-by-facebook-certificate-error/ - there is a whole article about it. You can see a screenshot of the certificate message from Safari; you can click on "Always trust ...".

I wouldn't say it's a problem in your PC/your browser. If it is a problem, which it seems to be, then it's a problem of what sits at the facebook address and of how Safari works. If it bothers you, try switching to a better browser. But basically, many browsers will alert you on things like that (perhaps not on the same things, but mostly they will). Edit: I've just looked into this, and it seems like there's some image causing the problem. See more information here.
Proof that the limit inferior is less than or equal to the limit superior of a sequence I have thought of a proof that the limit inferior of a sequence is less than or equal to the limit superior of that sequence, as part of Exercise 6.4.3 part (c) from Tao's book Analysis I, Fourth Edition. In particular, I approached this with a proof by contradiction, though I'm doubtful about it, and I attribute this to the fact that I've searched for other proofs of the same claim which, compared to what I came up with, are way more complicated. So that hints to me that I've made a significant error, but I am somewhat convinced that I probably haven't, for the reason that I've exploited the entirety of part (a) (which already has a proof written down in the same book), which in and of itself is a property of liminf and limsup. To begin, Tao denotes the liminf by $L^{-}$ and the limsup by $L^{+}$, and the reader is asked to show that $L^{-} \le L^{+}$, among two other inequalities which were quite trivial to prove, so I won't bother mentioning them. In addition, part (a) states that for a given sequence of real numbers with starting index $m$, denoted by $(a_n)_{n=m}^{\infty}$, we have that: (1) For every $x\gt L^{+}$, there exists an $N\ge m$ such that $a_n\lt x$ for all $n\ge N$. Similarly, (2) for every $y\lt L^{-}$, there exists an $N\ge m$ such that $a_n\gt y$ for all $n\ge N$.

Proof (of part (c)). Assume for the sake of contradiction that $L^{-}\gt L^{+}$. Thus by (1) we can find some $n_0\ge m$ such that $a_n\lt L^{-}$ for all $n\ge n_0$. Observe that $a_{n_0+k}\lt L^{-}$ for all non-negative integers $k\ge 0$. Hence, by (2) we can find another $n_0^{'}\ge m$ such that $a_n\gt a_{n_0+k}$ for all $n\ge n_0^{'}$. Now by the trichotomy of the natural numbers, we have the cases that $n_0\ge n_0^{'}$ or $n_0\lt n_0^{'}$. Now, if $n_0\ge n_0^{'}$, then $a_{n_0}\gt a_{n_0+k}$ for all $k\ge 0$. Thus, by setting $k=0$ we obtain $a_{n_0}\gt a_{n_0}$, a contradiction.
On the other hand, if $n_0\lt n_0^{'}$, then we can find some natural number $\ell$ such that $n_0+\ell\gt n_0^{'}$. Therefore, $a_{n_0+\ell}\gt a_{n_0+k}$ for all $k\ge 0$. Thus, by setting $k=\ell$ we obtain $a_{n_0+\ell}\gt a_{n_0+\ell}$, a contradiction. So we see that in every case we encounter a contradiction, which means that the original assumption that $L^{-}\gt L^{+}$ is utterly false, and so we must have that $L^{-}\le L^{+}$, q.e.d. So I ask, is the above proof valid or total gibberish?

Use the sequences $a^{+}_n := \sup_{k>n} a_k$ and $a^{-}_n := \inf_{k>n} a_k$; you have $a^{+}_n \geq a^{-}_n$ for all $n$... this will be much clearer and easier, since $L^{+} = \lim a^{+}_n = \inf a^{+}_n$ and $L^{-} = \lim a^{-}_n = \sup a^{-}_n$.

Honestly, a bit hard to follow, and I believe unnecessarily complicated. Why not just say that (assuming as you do $L^+<L^-$) if you call $A$ the midpoint of $L^+$ and $L^-$, all the elements of the sequence have to be both greater and smaller than $A$ after some point (using the same properties you called (1) and (2)), which gives a contradiction right away?

@GReyes You are right, the contradiction you are proposing is much more immediate. I didn't think it could be simpler. Although, I am still looking for that confirmation. Besides that, you've also mentioned that it was difficult to follow the proof; any tips as to how I could make my future proofs more comprehensible and aesthetically pleasing, for the sake of my mathematical training? I'd really appreciate it.

I read your proof in detail and I think it is correct. I don't have an answer for your question on how to produce more pleasing proofs. For me it is a matter of personal preference. I am a visual thinker. If I don't "see" the proof all at once, I am not satisfied. In other words, a long chain of otherwise correct steps may be a good proof for some people, but I am not quite satisfied if I am not able to extract the main idea, which can be expressed in a few lines.
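For completeness, here is GReyes's midpoint argument from the comments written out in full, a sketch using the asker's properties (1) and (2):

```latex
Suppose, for contradiction, that $L^{-} > L^{+}$, and let
$A = \tfrac{1}{2}\left(L^{+} + L^{-}\right)$ be the midpoint, so that
$L^{+} < A < L^{-}$.
By (1) applied to $x = A > L^{+}$, there is $N_{1} \ge m$ with
$a_{n} < A$ for all $n \ge N_{1}$.
By (2) applied to $y = A < L^{-}$, there is $N_{2} \ge m$ with
$a_{n} > A$ for all $n \ge N_{2}$.
For any $n \ge \max(N_{1}, N_{2})$ both hold, giving $a_{n} < A < a_{n}$,
a contradiction. Hence $L^{-} \le L^{+}$.
```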
Falcon Circus, Prologue: The Twins You get home from work, grab the mail, take off your coat, and sit down on the couch with the TV on the news. "Bills, bills, junk..." you sift through your mail, and stop to look at a very colorful looking flyer. The flyer is bright red and purple- two colors that don't seem to sit well together, but do the job in drawing your attention. The title labels it as some sort of circus ad. The middle of the flyer has a group photo of what you assume to be the curiosities of the circus, but everything is a silhouette. "A strange way to market yourself, but it does get you interested, now doesn't it?" you muse. At the bottom, in large, white letters is the name L. C. Snake. "Snake? That's not a great last name- that seems to be begging for trouble!" you say to yourself jokingly.

This flyer seems like some kind of joke, or a very poor attempt at marketing. But... you are free this weekend, and if their advertisement looks like this, you're dying to know what the service quality will be. As the weekend comes, you make your final decision: you'll go. Flipping the flyer over, you see a hastily scrawled address. "Now I'm not so sure..." the hand-written location isn't a good vibe to give your customers. Nonetheless, you've decided you'll go, and go you shall. You get in the car and set up the GPS.

After driving into what seems to be the middle of nowhere, you begin to see the lights and the very top of their red and yellow tent. No wonder it's called Forest Grove- there's trees as far as the eye can see. Now you just have to stay not-murdered! You pull up to the tent and find an empty lot. Getting out of the car, you notice that quite a few people have parked and must be inside the tent. Flyer in hand, you walk inside.

Two children in sweaters and overalls walk up and greet you. One of them looks incredibly sleepy, and the other seems very energetic.
To be seeing such odd kids around here must mean they're part of the circus gang- their faces are painted, to boot. You'll call them Red and Green for short. They block your path, and ask you a question before you can tell them to get out of the way:

Green: "Hey Mister, what's our names?"
Red: "Yeah! You have to guess our names before going further!"
Green: "We'll give you a riddle."
Red: "A riddle! And if you get it, we'll let you meet Snake!"

The one in the green sweater punched his twin.

Green: "No, we won't."
Red: "Fine. But you still have to guess, okay? Okay! Get Ready!"

You weren't prepared for children, and these two seem especially unsettling. Well, whatever. You came here to have an interesting time, and you've been stopped at the door already- this is going to be a great trip. You glance at your flyer. The very front silhouettes seem to mirror each other. Are these two the ones? Before you could ask, they began their riddle.

tldr;

Red: "I am to be feared!"
Green: "People want me cleared..."
Red: "I'll howl, I'll bite- I'll give you a fright!"
Green: "We are two, or are we three...?"
Red: "Selene grants me power, my goddess, the lovely!"
Green: "...My first partner... neither plant*, animal, or fungi."
Red: "Stop calling me that! I am not a rat!"
Green: "I'm not a parasite, I'm not a plant... stop calling me that..."
Red: "I'll eat you next month, you'll see!"
Green: "...Friend of Apollo, we live in harmony, with rock or tree."
Green: "Were you counting when we spoke?"

*With the definition that a plant must photosynthesize, have a root/stem/leaf system, and have cell walls.

Luckily no one has entered the tent after you, giving you all the time in the world to solve their riddle. After giving it some thought, you decide to tell them your answer, making sure to note the similarity between their names. "You- the one in the green... your name is ______. And you, the red one- you're _____."
The Decision

My my, I didn't know you had such artistic talent. Looking forwards to the rest of this series.

Dude, I have never met anyone so good at MS Paint. You must have a really boring job XD

Red and Green are Lycan and Lichen.

Red clues: Lycan is short for lycanthrope, or werewolf, hence "feared", "howl", "bite". Selene is the Greek goddess of the moon, which generally triggers the lycanthropic transformation; "eat you next month" also refers to the moon cycle. I think the reference to "rats" is the D&D creature "wererat", which is classified as a lycanthrope.

Green clues: A lichen is a composite organism consisting of algae and/or cyanobacteria and fungi ("two, or three"). Some algae are technically plants, I think, but many are not; non-plant algae, as well as cyanobacteria, are neither "plant, animal, or fungi". It is not parasitic, but a symbiotic relationship. They are generally found on rocks and trees.

Perfect! Time for me to whip up chapter one of the series, looks like...
How can I create a DIV that's located in the center of my browser screen? I tried the following:

<body style="height: 100%;">
  <div style="display: block; height: 30rem; width: 100rem; margin: auto;">
    x
  </div>
</body>

What happens is that the box is centered with the correct right and left margins, but there seem to be no margins at the top and bottom. Note 1rem is equal to about 10px, so there should be plenty of space at the top and bottom. However, the DIV just sits at the top of the screen.

Like that:

#yourdiv {
    position: fixed;
    height: 18em;
    margin-top: -9em;
    margin-left: -15em;
    border: 1px solid #ccc;
    background-color: #f3f3f3;
    top: 50%;
    left: 50%;
    width: 30em;
}

Tried like this?

<body style="height: 100%; width: 100%">
  <div style="display: block; position: absolute; top: 50%; left: 50%;">
    x
  </div>
</body>

Use this CSS instead of your inline CSS part:

#mydiv {
    position: fixed;
    top: 50%;
    left: 50%;
    width: 100em;
    height: 30em;
    margin-top: -9em;
    margin-left: -15em;
}

e.g.:

<body style="height: 100%;">
  <div id="mydiv">x</div>
</body>

FYI: position: fixed; does the trick here. It will keep your div in the center even if you scroll down the page. Hope this helps.

Can you tell me why you're suggesting -9 and -15 for the margin top and left?

ah, just ignore them. I copied this answer from the site I'm currently working on and forgot to delete them. those 2 lines are totally useless for this particular job.

Demo. I have used % for sizing, it's completely responsive; try changing width and height to px, and the div still remains completely in the center of the page.

HTML

<div class="divX">x</div>

CSS

.divX {
    position: absolute;
    top: 0;
    bottom: 0;
    left: 0;
    right: 0;
    margin: auto;
    height: 30%;
    width: 100%;
    background: red;
    text-align: center;
}

DivX :D Oh the memories :)
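For what it's worth, the reason margin: auto only centers horizontally here is that auto top/bottom margins on an in-flow block compute to zero. A flexbox sketch (my own alternative, not from the answers; assumes the div is the only child of body):

```html
<body style="margin: 0; min-height: 100vh; display: flex;
             align-items: center; justify-content: center;">
  <div style="height: 30rem; width: 100rem;">x</div>
</body>
```

align-items centers on the cross axis (vertical) and justify-content on the main axis (horizontal), with no negative-margin arithmetic needed.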
kendo ui for angular form components not working

import { Component, OnInit } from '@angular/core';
import { FormGroup, FormControl, Validators } from '@angular/forms';

@Component({
  selector: 'forms',
  templateUrl: './form.component.html',
  styleUrls: ['./form.component.scss']
})
export class FormComponent implements OnInit {
  genders: Array<{text: string, value: number}> = [
    {text: 'Male', value: 1},
    {text: 'Female', value: 2},
    {text: 'Other', value: 3}
  ];

  myForm: FormGroup = new FormGroup({
    firstName: new FormControl('', Validators.required),
    lastName: new FormControl('', Validators.required),
    age: new FormControl(''),
    gender: new FormControl(1)
  });

  ngOnInit() { }

  onSubmit(value: string): void {
    console.log('submitted value is', value);
  }
}

<h1>Forms</h1>
<form [formGroup]="myForm" (ngSubmit)="onSubmit(myForm.value)">
  <label for="firstName">First Name:</label>
  <input type="text" id="firstName" placeholder="first name" [formControl]="myForm.controls.firstName" />
  <span style="color: red">required</span>
  <br/>
  <label for="lastName">Last Name:</label>
  <input type="text" id="lastName" placeholder="last name" [formControl]="myForm.controls.lastName" />
  <span style="color: red">required</span>
  <br/>
  <label for="age">Age:</label>
  <input type="number" id="age" placeholder="age" [formControl]="myForm.controls.age" />
  <span style="color: red">required</span>
  <br/>
  <label for="gender">
    Gender:
    <kendo-combobox formControlName="gender" [data]="genders" [textField]="text" [valueField]="'value'" [valuePrimitive]="true" required></kendo-combobox>
  </label>
  <div *ngIf="!myForm.valid">Form is not valid</div>
  <button type="submit">Submit</button>
</form>

I'm trying to get the ComboBox to work using Kendo UI for Angular. I'm getting an error message saying that it can't bind to 'data' because it isn't a known property of 'kendo-combobox'. Below is a link from the telerik example I'm using.
Note that I'm using ReactiveForms. Kendo UI for Angular ComboBox

Show us your code, we can't guess the problem.

The code is added in.

What does your module definition look like? I've seen this problem when I wasn't properly importing the Kendo libraries.

I figured it out this morning, and you are 100% correct: I wasn't correctly importing them. In the Telerik documentation at the forms section, it doesn't tell you that you need to install certain npm packages. But then by chance I moved on to dropdowns and saw everything I needed in that section to get it working.
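A sketch of the module wiring the resolution refers to; the package and module names below are assumptions based on the Kendo docs (verify against your version), not taken from the thread:

```typescript
// app.module.ts (configuration sketch; DropDownsModule is assumed to come
// from @progress/kendo-angular-dropdowns, installed separately via npm)
import { NgModule } from '@angular/core';
import { BrowserModule } from '@angular/platform-browser';
import { BrowserAnimationsModule } from '@angular/platform-browser/animations';
import { ReactiveFormsModule } from '@angular/forms';
import { DropDownsModule } from '@progress/kendo-angular-dropdowns';

import { FormComponent } from './form.component';

@NgModule({
  declarations: [FormComponent],
  imports: [
    BrowserModule,
    BrowserAnimationsModule, // Kendo components need the animations module
    ReactiveFormsModule,     // for [formControl] / formControlName
    DropDownsModule          // declares kendo-combobox and its inputs
  ]
})
export class AppModule {}
```

Without the dropdowns module in imports, Angular treats kendo-combobox as an unknown element, which produces exactly the "can't bind to 'data'" error above.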
Simple MPI_File_open not working properly I am trying to compile and run this simple C/MPI program (test.c), which only tries to open a file (creating it if it doesn't exist), nothing else:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int size, rank;
    MPI_Init(&argc, &argv);
    MPI_Comm comm;
    MPI_Comm_dup(MPI_COMM_WORLD, &comm);
    MPI_Comm_size(comm, &size);
    MPI_Comm_rank(comm, &rank);
    MPI_File file;
    char *filename = "ttt.dat";
    MPI_File_open(comm, filename, MPI_MODE_CREATE | MPI_MODE_RDWR, MPI_INFO_NULL, &file);
    if (file != MPI_SUCCESS) {
        fprintf(stderr, "error fwrite to %s\n", filename);
        MPI_Abort(comm, 1);
    }
    return 0;
}

This is what I do to compile and run the file, and the error that I get:

$ mpicc test.c
$ mpirun -np 3 --oversubscribe a.out
error fwrite to ttt.dat
error fwrite to ttt.dat
--------------------------------------------------------------------------
MPI_ABORT was invoked on rank 1 in communicator MPI COMMUNICATOR 3 DUP FROM 0
with errorcode 1.

NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
--------------------------------------------------------------------------
error fwrite to ttt.dat
[pc:08014] 2 more processes have sent help message help-mpi-api.txt / mpi-abort
[pc:08014] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages

What is happening? I cannot figure out what is wrong with my code; it also worked in the past, but since today it stopped working properly. Also, rarely an error with "bad file descriptor" appears in the terminal error log - I don't have it anymore, but I will post it if it happens to me again. (I know that the file is not closed at the end of the code, but the program execution cannot even reach the point at which that should happen.)

You need to check the actual status rather than just checking whether it failed.
I assume that MPI will provide a proper value to indicate why the open failed.

Your test is wrong. It should be

int ret = MPI_File_open(...);
if (MPI_SUCCESS != ret) ...

Compile with -Wall; your compiler should have caught that.

Thank you for your answers. I just checked the status code and it is 0, that is, MPI_SUCCESS (tested via MPI_Error_string). I think I had mistaken what is tested: the return code should be MPI_SUCCESS, not the file handle. And I was testing the file handle. Well... I think this was the problem. I am currently checking further to be sure, but if this is the cause, I apologize for the silly question. Again, thank you for your comments and your patience!
limit inferior is the least cluster point of $\{ x_{n} \}$ Here is the proof of the statement in the title that has been given to me. I understand everything except the last line. Perhaps someone could enlighten me.

Let $c'$ be a cluster point of $\{x_n\}$ and $c=\liminf x_n > c'$. Then there exists a subsequence $\{x_{n_k}\}$ of $\{ x_{n} \}$ such that $\lim_{k\rightarrow\infty} x_{n_k}=c'$. In other words, $\forall \ \epsilon>0, \ \exists \ N \in \mathbb{N}$ such that $ \forall \ k\geq N$ we have $\mid x_{n_k}-c'\mid <\epsilon$. Choose $\epsilon = \frac{c-c'}{2}$. Then there are infinitely many $ x_{n_k}$ such that $x_{n_k} \leq c'+\epsilon = c-\epsilon$. Hence $c=\liminf x_n=c-\epsilon$, which contradicts $c'<c$.

I do not understand how $c=c-\epsilon$ contradicts $c'<c$. The only contradiction I see is that $c=c-\epsilon$ is problematic, since $\epsilon \neq 0$, but I do not see how this implies that $c$ has to be less than or equal to $c'$. What am I missing?

If a premise leads to a false conclusion (such as $c = c - \epsilon$ where $\epsilon\neq 0$), then the premise must be false (by contradiction).

No, I understand that, but I'm struggling to see how $c=c-\epsilon$ is actually a function of said premise.

In your second paragraph, should $c = \liminf x_n$?

@fwd absolutely, thank you.

You have quoted a sequence of statements that begins with "Let $c'$ be a cluster point of $\{x_n\}$ and $c' < c$" and ends with "$c = c - \epsilon$ where $\epsilon \neq 0$". Where do you lose the train of thought?

It has just occurred to me that we did indeed use $c'<c$ in order to ensure positivity of our choice of $\epsilon$, so the premise $c'<c$ was actually applied in obtaining the contradictory statement. Silly mistake. Thanks all!
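Spelling the contradiction out may make that last line clearer; a sketch consistent with the resolution reached in the comments:

```latex
Since infinitely many terms satisfy $x_{n_k} \le c - \epsilon$, every tail
$\{\, x_n : n \ge N \,\}$ contains such terms, so
$\inf_{n \ge N} x_n \le c - \epsilon$ for every $N$. Letting $N \to \infty$,
\[
  c \;=\; \liminf_{n\to\infty} x_n \;=\; \lim_{N\to\infty}\, \inf_{n \ge N} x_n
    \;\le\; c - \epsilon \;<\; c,
\]
which is absurd. The premise $c' < c$ enters only through the choice
$\epsilon = \tfrac{c - c'}{2} > 0$, so it is exactly that premise which the
contradiction refutes.
```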